--- abstract: | In this short note, we consider a normal compact Kähler klt space $X$ whose canonical divisor $K_X$ is pseudo-effective, and give a dynamical criterion for $X$ to be a $Q$-complex torus. We show that, if such an $X$ admits an int-amplified endomorphism, then $X$ is a $Q$-complex torus. As an application, we prove that, if a simply connected compact Kähler (smooth) threefold admits an int-amplified endomorphism, then it is (projective and) rationally connected. address: " [Department of Mathematics, National University of Singapore, Singapore 119076, Republic of Singapore. Current address: Center for Complex Geometry, Institute for Basic Science (IBS), 55 Expo-ro, Yuseong-gu, Daejeon, 34126, Republic of Korea.]{.smallcaps} " author: - Guolei Zhong title: "Characterization of $Q$-complex tori via endomorphisms -- an addendum to \"Int-amplified endomorphisms of compact Kähler spaces\"" --- # Introduction We work over the field $\mathbb{C}$ of complex numbers. Let $(X,\omega)$ be a compact Kähler manifold of dimension $n$ such that $c_1(X)=0$ and $\int_Xc_2(X)\wedge\omega^{n-2}=0$. By Yau's solution of the Calabi conjecture [@Yau78], the vanishing of the first Chern class $c_1(X)$ implies that we may take the metric $\omega$ to be Ricci-flat. On the other hand, the vanishing $\int_Xc_2(X)\wedge \omega^{n-2}=0$ then implies that the full curvature tensor of $\omega$ is identically zero. By the uniformization theorem, the universal cover of $X$ is the affine space $\mathbb{C}^n$ and $X$ is a quotient of a complex torus $T$ by a finite group acting freely on $T$. From the viewpoint of birational geometry, since the end product of the minimal model program (MMP for short) is usually singular, it is important to consider the singular version of the uniformization theorem. We say that a compact Kähler space $X$ is a *$Q$-complex torus* if there is a finite morphism $T\to X$ from a complex torus $T$ which is étale in codimension one (cf.
[@Nak99 Definition 3.6], [@NZ10 Definition 2.13] and [@Zho21 Section 5]). Thanks to many people's efforts, when $X$ is projective, the numerical characterization of $Q$-complex tori has been successfully generalized to singular varieties (see [@GKP16] and [@LT18]). In the transcendental case, however, the slicing arguments used to reduce a projective variety to a complete intersection surface (by integral ample divisors) no longer work. Recently, Graf and Kirschner settled this problem in [@GK20] for Kähler threefolds, and, in higher dimensions, Claudon, Graf and Guenancia settled the uniformization problem in [@CGG23] (cf. [@CGG22]). From the dynamical viewpoint, one of the applications of the numerical characterization of $Q$-complex tori is to describe the end product of the equivariant minimal model program (E-MMP for short). In other words, given a holomorphic self-map $f$ of a compact Kähler space $X$ with mild singularities, we are interested in the construction of the $f$-equivariant MMP for $X$, assuming the existence of an MMP for $X$. We refer the reader to our recent works [@Zho22] and [@Zho23] for the aspect of automorphism groups. When $X$ is projective, Nakayama first established the E-MMP for surfaces admitting non-isomorphic surjective endomorphisms (see [@Nak20a], [@Nak20b]). In higher dimensions, Meng and Zhang constructed in [@MZ18] and [@Men20] the $f$-E-MMP when $f$ is *polarized*, i.e., $f^*H\sim qH$ for some (integral) ample divisor $H$ and integer $q>1$, or more generally, when $f$ is *int-amplified*, i.e., $f^*H-H=L$ for some (integral) ample divisors $H$ and $L$. In particular, the following Theorem [Theorem 1](#thm-proj-Q-torus){reference-type="ref" reference="thm-proj-Q-torus"} characterizes numerically the end product of the E-MMP (in the projective case) via the first Chern class and the orbifold second Chern class. **Theorem 1** ([@GKP16 Theorem 1.21], [@Men20 Theorem 5.2]; cf. [@NZ10 Theorem 3.4]).
*Let $X$ be a normal projective klt variety admitting an int-amplified endomorphism $f$. Suppose the canonical divisor $K_X$ is pseudo-effective. Then $X$ is a $Q$-complex torus.* In [@Zho21], the author generalized the ideas of the E-MMP for projective varieties to the transcendental case, and constructed the E-MMP for terminal threefolds. In higher dimensions, however, since the existence of an MMP is not known in general, the E-MMP has not yet been established in full generality. Recall that a holomorphic self-map $f:X\to X$ of a compact Kähler space $X$ with only rational singularities is said to be *int-amplified* if the induced linear operator $f^*|_{\textup{H}^{1,1}_{\textup{BC}}(X)}$ on the Bott-Chern cohomology space $\textup{H}^{1,1}_{\textup{BC}}(X)$ (see [@HP16 Definition 3.1]) has all of its eigenvalues of modulus greater than $1$, or equivalently, $f^*[\omega]-[\omega]=[\eta]$ for some Kähler classes $[\omega]$ and $[\eta]$ (cf. [@Zho21 Theorem 1.1]). We refer the reader to [@Zho21 Section 2] for the notation and terminology involved. Based on this generalized notion, it is natural to ask the following question, as an analytic version of Theorem [Theorem 1](#thm-proj-Q-torus){reference-type="ref" reference="thm-proj-Q-torus"}. **Question 2**. *Let $X$ be a normal compact Kähler space with at worst rational singularities. Suppose that $X$ admits an int-amplified endomorphism and the canonical divisor $K_X$ is pseudo-effective. Is $X$ a $Q$-complex torus?* We refer the reader to [@Zho21 Theorem 1.4] (cf. [@Zho21 Section 6]) for a positive answer to Question [Question 2](#ques-q-torus){reference-type="ref" reference="ques-q-torus"} when $X$ satisfies one of the following: (1) $X$ is smooth; (2) $\dim X\le 2$; or (3) $X$ is a terminal threefold. In this short note, we study Question [Question 2](#ques-q-torus){reference-type="ref" reference="ques-q-torus"} and characterize $Q$-complex tori dynamically in a more general setting. **Remark 3**.
1. After the previous paper [@Zho21] was accepted, the characterization of $Q$-complex tori was greatly extended from threefolds [@GK20] to higher dimensions [@CGG22; @CGG23]. Hence, combined with our dynamical viewpoint, one of the main goals of this paper is to prove a higher dimensional analogue of [@Zho21 Theorem 1.4]. 2. In the papers [@Zho21] and [@Zho22], the notion "$Q$-torus" used therein should be understood as "$Q$-complex torus" (cf. [@Nak99 Definition 3.6]). Theorems [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"} through [Corollary 6](#simply-connected){reference-type="ref" reference="simply-connected"} below are our main results. **Theorem 4**. *Let $X$ be an $n$-dimensional compact Kähler klt space whose canonical divisor $K_X$ is pseudo-effective. Suppose that $X$ admits an int-amplified endomorphism $f$. Then $X$ is a $Q$-complex torus. In particular, Question [Question 2](#ques-q-torus){reference-type="ref" reference="ques-q-torus"} has a positive answer if $X$ has only klt singularities.* Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"} has several applications. On the one hand, we can characterize the end product of the E-MMP for compact Kähler klt spaces in arbitrary dimension (as a quasi-étale quotient of a complex torus), provided the MMP exists. On the other hand, since the existence of the MMP has been confirmed for Kähler klt threefolds ([@DH22] and [@DO22]; cf. [@HP16], [@HP15], [@CHP16] and [@DO23]), we can reformulate the E-MMP for klt threefolds as below, extending the previous E-MMP for terminal threefolds. **Theorem 5** (cf. [@Zho21 Theorem 1.5]). *Let $f:X\to X$ be an int-amplified endomorphism of a compact Kähler klt threefold.
Then, after replacing $f$ by a power, there exists an $f$-equivariant minimal model program $$X=X_1\dashrightarrow\cdots\dashrightarrow X_i\dashrightarrow\cdots\dashrightarrow X_r=Y$$ (i.e., $f=f_1$ descends to each $f_i$ on $X_i$) with each $X_i\dashrightarrow X_{i+1}$ a divisorial contraction, a flip, or a Fano contraction of a $K_{X_i}$-negative extremal ray. Moreover,* 1. *If $K_X$ is pseudo-effective, then $X=Y$ is a $Q$-complex torus.* 2. *If $K_X$ is not pseudo-effective, then for each $i$, $f_i$ is int-amplified and the composite $X_i\to Y$ is equidimensional and holomorphic with every fibre being reduced and irreducible. Furthermore, every fibre of $X_i\to Y$ is rationally connected; and the last step $X_{r-1}\to X_r$ is a Fano contraction.* Finally, running the E-MMP for a compact Kähler (smooth) threefold $X$, we obtain the following theorem, which is a transcendental version of [@Yos21 Corollary 1.4] in low dimension. In this case, when $X$ is simply connected, the existence of an int-amplified endomorphism forces $X$ to be projective. **Corollary 6** (cf. [@Yos21 Corollary 1.4]). *Let $X$ be a simply connected compact Kähler (smooth) threefold admitting an int-amplified endomorphism. Then $X$ is a (projective) rationally connected threefold. In particular, it is of Fano type, i.e., there exists an effective Weil $\mathbb{Q}$-divisor $\Delta$ on $X$ such that $(X,\Delta)$ is klt and $-(K_X+\Delta)$ is ample.* ### **[Acknowledgments]{.upright}** {#acknowledgments .unnumbered} The author would like to thank Professor De-Qi Zhang for many valuable discussions. The author would also like to thank the referees for their very careful reading, for pointing out references, and for many constructive suggestions to improve the paper. The author was partially supported by a Graduate Scholarship from the Department of Mathematics at NUS and was partially supported by the Institute for Basic Science (IBS-R032-D1-2023-a00).
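Before turning to the proofs, let us record a standard sanity check (our illustrative addition, not taken from the main text) for the int-amplified condition in the model case of Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"}, namely a complex torus itself.

```latex
% Standard example (our addition): multiplication maps on a complex torus.
\[
  m_q \colon T=\mathbb{C}^n/\Lambda \longrightarrow T,
  \qquad m_q(x) = qx, \qquad q\in\mathbb{Z},\ q\ge 2.
\]
% Since m_q scales each holomorphic 1-form dz_i by q, every Kaehler class
% [\omega] on T satisfies m_q^*[\omega] = q^2 [\omega], so
\[
  m_q^*[\omega] - [\omega] = (q^2-1)\,[\omega]
\]
% is again a Kaehler class. Hence m_q is int-amplified, with deg m_q = q^{2n}.
```

Here $K_T=0$ is pseudo-effective and $T$ is trivially a $Q$-complex torus, so the example is consistent with (and is the model case of) the conclusion of Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"}.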
# Proofs of the statements ## Characterization of Q-complex tori In this subsection, we prove Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"}. We refer the reader to [@GK20 Section 5] for the definition of the first Chern class and the second Chern class in the singular Kähler setting. For the reader's convenience, we briefly recall them here. Let $X$ be a normal complex space of dimension $n$ such that the canonical divisor $K_X$ is $\mathbb{Q}$-Cartier, i.e., there exists some integer $m>0$ such that the reflexive tensor power $\omega_X^{[m]}:=(\omega_X^{\otimes m})^{\vee\vee}=\mathcal{O}_X(mK_X)$ is locally free, where $\omega_X$ is the dualizing sheaf. The *first Chern class* $c_1(X)$ of $X$ is then defined as $$c_1(X):=-\frac{1}{m}c_1(\mathcal{O}_X(mK_X))\in \textup{H}^2(X,\mathbb{R}).$$ In the following, we further assume that $X$ has only klt singularities. Let $X^{\circ}\subseteq X$ be the open locus of quotient singularities of $X$; then the *second orbifold Chern class* of $X$ is defined as the unique element $\widetilde{c_2}(X)\in \textup{H}^{2n-4}(X,\mathbb{R})^{\vee}$ of the Poincaré dual space whose restriction to $\textup{H}^{2n-4}(X^\circ,\mathbb{R})^\vee$ is the usual second orbifold Chern class $\widetilde{c_2}(X^\circ)\in \textup{H}^4(X^\circ,\mathbb{R})$ ([@GK20 Definition 5.2]). Before we prove Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"}, let us recall the weak numerical equivalence defined in [@Zho21 Section 3]. Let $f:X\rightarrow X$ be a surjective endomorphism of a normal compact Kähler space $X$ of dimension $n$ with at worst rational singularities. Denote by $$\textup{L}^k(X):=\{\sum [\alpha_1]\cup\cdots\cup[\alpha_k]~|~[\alpha_i]\in \textup{H}_{\text{BC}}^{1,1}(X)\}$$ the corresponding subspace of $\textup{H}^{2k}(X,\mathbb{R})$. Here, $\textup{H}^{1,1}_{\textup{BC}}(X)$ denotes the Bott-Chern cohomology space (see [@HP16 Definition 3.1]).
Let $\textup{N}^k(X):=\textup{L}^k(X)/\equiv_w$, where a cycle $[\alpha]\in \textup{L}^k(X)$ is weakly numerically equivalent (denoted by $\equiv_w$) to zero if and only if for any $[\beta_{k+1}],\ldots,[\beta_n]\in \textup{H}_{\text{BC}}^{1,1}(X)$, $[\alpha]\cup[\beta_{k+1}]\cup\cdots\cup [\beta_n]=0$. Moreover, for any $[\alpha],[\beta]\in \textup{L}^k(X)$, we write $[\alpha]\equiv_w [\beta]$ if and only if $[\alpha]-[\beta]\equiv_w 0$. We note that, when $X$ has only rational singularities, there is a natural injection $\textup{H}^{1,1}_{\textup{BC}}(X)\hookrightarrow\textup{H}^2(X,\mathbb{R})$ (cf. [@HP16 Remark 3.7]); hence the above intersection product makes sense. Let $\textup{L}_{\mathbb{C}}^k(X)=\textup{L}^k(X)\otimes_{\mathbb{R}}\mathbb{C}$ and $\textup{N}_{\mathbb{C}}^k(X)=\textup{N}^k(X)\otimes_{\mathbb{R}}\mathbb{C}$. As the pull-back $f^*$ on the Bott-Chern cohomology space induces a linear operator on the subspace $\textup{L}^k_{\mathbb{C}}(X)\subseteq H^{2k}(X,\mathbb{C})$ for each $k$, it follows from [@Zho21 Proposition 2.8] that $f^*$ also induces a well-defined linear operator on the quotient space $\textup{N}_{\mathbb{C}}^k(X)$. The following lemma was proved in [@Zho21 Lemma 3.4] and we recall it here for later use. **Lemma 7** ([@Zho21 Lemma 3.4]). *Let $f:X\to X$ be an int-amplified endomorphism of a normal compact Kähler space of dimension $n$. Suppose that $X$ has at worst rational singularities. Then, for each $0<k<n$, all the eigenvalues of $f^*|_{\textup{N}^k_{\mathbb{C}}(X)}$ are of modulus less than $\deg f$.
In particular, $\lim\limits_{i\rightarrow+\infty}\frac{(f^i)^*[x]}{(\deg f)^i}\equiv_w 0$ for any $[x]\in \textup{L}^k_{\mathbb{C}}(X)$.* *Proof of Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"}.* To prove Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"}, we resort to [@CGG23 Corollary 1.7] by showing that $c_1(X)=0\in \textup{H}^2(X,\mathbb{R})$ and $\widetilde{c_2}(X)\cdot[\omega]^{n-2}=0$ for some Kähler class $[\omega]\in \textup{H}^2(X,\mathbb{R})$. Here, $c_1(X)$ is the first Chern class and $\widetilde{c_2}(X)$ is the second orbifold Chern class of $X$ as defined above. Since $-K_X$ is pseudo-effective by [@Zho21 Theorem 1.3] and $K_X$ is pseudo-effective by assumption, the vanishing of the first Chern class $c_1(X)=0$ follows. Then $K_X=f^*K_X$ and it follows from the purity of branch loci (cf. [@GR55]) that our $f$ is étale in codimension one. Let $d:=\deg f>1$. Fix a Kähler class $[\omega]$ on $X$. Since $f$ is étale in codimension one, by [@GK20 Proposition 5.6], we get $\widetilde{c_2}(X)\cdot (f^*[\omega])^{n-2}=d\cdot \widetilde{c_2}(X)\cdot [\omega]^{n-2}$. Dividing this equality by $d$ and iterating, we have $$\begin{aligned} \label{eqa-def-xi} \widetilde{c_2}(X)\cdot [\omega]^{n-2}=\widetilde{c_2}(X)\cdot \frac{(f^*[\omega])^{n-2}}{d}=\widetilde{c_2}(X)\cdot\frac{((f^i)^*[\omega])^{n-2}}{d^i}=:\widetilde{c_2}(X)\cdot [x_i], \end{aligned}$$ where $\lim\limits_{i\to\infty}[x_i]\equiv_w0$ by Lemma [Lemma 7](#lem-tends-zero){reference-type="ref" reference="lem-tends-zero"}. Hence, $$(\lim\limits_{i\to\infty}[x_i])\cdot [\omega]^2=\lim_{i\to\infty}[x_i]\cdot[\omega]^2=0.$$ Let $C\subseteq H^{2n-4}(X,\mathbb{R})$ be the closure of the convex cone generated by the products $\xi_1\cup\cdots\cup\xi_{n-2}$ of Kähler classes $\xi_i$. Let $C^{\vee}$ be the dual cone of $C$ in $H^{2n-4}(X,\mathbb{R})^\vee$.
It is clear that $[\omega]^2$ lies in the interior of $C^\vee$ and hence $a[\omega]^2-\widetilde{c_2}(X)$ lies in the interior of $C^\vee$ for sufficiently large $a\gg 1$. In particular, we have $$0\le \widetilde{c_2}(X)\cdot [\omega]^{n-2}=\lim_{i\to\infty}(\widetilde{c_2}(X)-a[\omega]^2)\cdot [x_i]\le 0,$$ where the first inequality is due to the singular Miyaoka-Yau inequality (see [@CGG23 Theorem 1.6]), the middle equality follows from Equation ([\[eqa-def-xi\]](#eqa-def-xi){reference-type="ref" reference="eqa-def-xi"}) and Lemma [Lemma 7](#lem-tends-zero){reference-type="ref" reference="lem-tends-zero"}, and the last inequality is by the construction of the dual cone. Therefore, we conclude our proof by applying [@CGG23 Corollary 1.7]. ◻ ## E-MMP for threefolds: the klt case {#sec-E-MMP} In this subsection, we construct the equivariant minimal model program (E-MMP) for int-amplified endomorphisms of compact Kähler klt threefolds. To prove Theorem [Theorem 5](#E-MMP-threefolds){reference-type="ref" reference="E-MMP-threefolds"}, we refer the reader to [@Zho21 Section 8] and [@Men20 Section 8] for the technical proofs involved. Let $X$ be an $n$-dimensional compact Kähler space with only rational singularities. Let $\textup{N}_1(X)$ be the vector space of real closed currents of bi-degree $(n-1,n-1)$ (or equivalently bi-dimension $(1,1)$) modulo the equivalence: $T_1\equiv T_2$ if and only if $T_1(\eta)=T_2(\eta)$ for all real closed $(1,1)$-forms $\eta$ with local potentials (cf. [@HP16 Definition 3.8]). The following lemma was first proved in [@Zha10] when $X$ is projective and reformulated in [@Zho21] for terminal threefolds. Due to the organization of the published version [@Zho21], we skipped its proof therein. In this note, we follow the idea of [@Zha10] and prove it in the transcendental case for the reader's convenience, noting that we also weaken the condition on the singularities. **Lemma 8** (cf. [@Zho21 Lemma 8.3], [@MZ18 Lemma 6.2]).
*Let $f$ be a surjective endomorphism of a compact Kähler klt threefold $X$ and $\pi:X\rightarrow Y$ a contraction of a $K_X$-negative extremal ray $R_\Gamma:=\mathbb{R}_{\ge 0}[\Gamma]$ generated by a positive closed $(2,2)$-current $\Gamma$. Suppose further that $E\subseteq X$ is an analytic subvariety such that $\dim(\pi(E))<\dim E$ and $f^{-1}(E)=E$. Then, up to replacing $f$ by its power, $f(R_{\Gamma})=R_{\Gamma}$ (hence, for any curve $C$ with $[C]\in R_{\Gamma}$, its image $[f(C)]$ still lies in $R_{\Gamma}$), i.e., the contraction $\pi$ is $f$-equivariant.* *Proof.* Since $\dim \pi(E)<\dim E$, we may fix an irreducible curve $C$ with its current of integration $[C]\in R_{\Gamma}$ such that $C\subseteq E$. Then, any other contracted curve is a multiple of $C$ in the sense of currents of integration. Let $\textup{L}_{\mathbb{C}}^1(X):=\textup{H}_{\textup{BC}}^{1,1}(X)\otimes_{\mathbb{R}}\mathbb{C}$ and $\textup{L}_{\mathbb{C}}^1(Y):=\textup{H}_{\textup{BC}}^{1,1}(Y)\otimes_{\mathbb{R}}\mathbb{C}$. By [@Nak87], our $Y$ also has at worst rational singularities. Let us recall the following exact sequence in [@HP16 Proposition 8.1], $$\begin{aligned} \label{exact} 0\rightarrow \textup{L}^1_{\mathbb{C}}(Y)\xrightarrow{\pi^*} \textup{L}^1_{\mathbb{C}}(X)\xrightarrow{[\alpha]\mapsto \alpha\cdot[C]}\mathbb{C}\rightarrow 0.\end{aligned}$$ Then, the pull-back $\pi^* \textup{L}^1_{\mathbb{C}}(Y)\subseteq \textup{L}_{\mathbb{C}}^1(X)$ is a linear subspace of codimension $1$. For each $\xi\in \textup{L}^1_{\mathbb{C}}(X)$, denote by $\xi|_E:=i^*\xi\in \textup{L}^1_{\mathbb{C}}(X)|_E$ the restriction, where $i:E\hookrightarrow X$ is the natural inclusion. 
Consider the linear space $\textup{L}^1_{\mathbb{C}}(X)|_E$ and its subspace $$H:=(\pi^*\textup{L}_{\mathbb{C}}^1(Y))|_E=i^*\pi^*\textup{L}_{\mathbb{C}}^1(Y)\subseteq i^*\textup{L}_{\mathbb{C}}^1(X)=\textup{L}_{\mathbb{C}}^1(X)|_E.$$ Therefore, $H$ is a linear subspace of $\textup{L}_{\mathbb{C}}^1(X)|_E$ of codimension $0$ or $1$. Fixing a Kähler class $\omega$ on $X$, we have $i^*\omega\cdot C>0$, since the restriction $\omega|_C$ is also a Kähler class on the curve $C$ (cf. [@GK20 Proposition 3.5] or [@Zho21 Proposition 2.6]). We claim that $H\subseteq \textup{L}_{\mathbb{C}}^1(X)|_E$ is a linear subspace of codimension 1. Suppose to the contrary that $H=\textup{L}_{\mathbb{C}}^1(X)|_E$. Then $i^*\omega\in \textup{L}_{\mathbb{C}}^1(X)|_E=H(=(\pi^*\textup{L}_{\mathbb{C}}^1(Y))|_E)$ and thus there exists some $\eta\in \textup{L}_{\mathbb{C}}^1(Y)$ such that $i^*\omega=i^*\pi^*\eta\in H$. By the projection formula, $0<\deg \omega|_C=i^*\omega\cdot C=\eta\cdot \pi_*i_*C=0$, noting that $C$ is contracted by $\pi$, which is absurd. Therefore, $i^*\omega\not\in H$ and hence $H$ is a subspace of $\textup{L}_{\mathbb{C}}^1(X)|_E$ of codimension $1$. Now, let us consider the following subset of $\textup{L}_{\mathbb{C}}^1(X)|_E$: $$S:=\{\alpha|_E~|~ \alpha\in \textup{L}_{\mathbb{C}}^1(X),~(\alpha|_E)^{\dim E}=0 \}.$$ **We claim that: [$(**)$]{#$(**)$} $S$ is a hypersurface in $\textup{L}_{\mathbb{C}}^1(X)|_E$ and $H=(\pi^*\textup{L}_{\mathbb{C}}^1(Y))|_E$ is an irreducible component of $S$ (with respect to the Zariski topology)**; cf. [@MZ18 Proof of Lemma 6.2]. Suppose the claim [$(**)$](#$(**)$) holds for the time being. Since $f^{-1}(E)=E$ by our assumption, the operator $(f|_E)^*$ gives an automorphism of the subspace $\textup{L}_{\mathbb{C}}^1(X)|_E$. As for the pull-back of cycles in the normal projective case, we may write $f^*E=aE$ for some $a>0$.
For any $\beta\in \textup{L}_{\mathbb{C}}^1(X)$, by the projection formula, $$((f|_E)^*(\beta|_E))^{\dim E}=((f^*\beta)|_E)^{\dim E}=(f^*\beta)^{\dim E}\cdot E=\frac{\deg f}{a}\beta^{\dim E}\cdot E=\frac{\deg f}{a}(\beta|_E)^{\dim E}.$$ Then $\beta|_E\in S$ if and only if $(f^*\beta)|_E\in S$. Therefore, $S$ is $(f|_E)^*$-stable. By our claim [$(**)$](#$(**)$), since $(f|_E)^*$ permutes the finitely many irreducible components of $S$, with $f$ replaced by a power, $H$ is also $(f|_E)^*$-invariant. As a result, for any $\beta\in \textup{L}_{\mathbb{C}}^1(Y)$, $$\begin{aligned} \label{equa-sym} (\pi^*\beta)|_E=(f|_E)^*((\pi^*\beta')|_E)=(f^*(\pi^*\beta'))|_E, \end{aligned}$$ for some $\beta'\in \textup{L}_{\mathbb{C}}^1(Y)$. By [@Zho21 Lemma 8.1], $f^{-1}(R_{\Gamma})=R_{\Gamma'}$ with $\Gamma'\in\overline{\textup{NA}}(X)$ being a positive closed $(n-1,n-1)$-current. Since $f^{-1}(E)=E$, for each curve $C$ with $[C]\in R_\Gamma$, there exists some curve $C'\subseteq E$ such that $f(C')=C$ and $[C']\in R_{\Gamma'}$. Write $f_*[C']=k[C]$ for some $k>0$. Then by Equation ([\[equa-sym\]](#equa-sym){reference-type="ref" reference="equa-sym"}), for every $\beta\in \textup{L}_{\mathbb{C}}^1(Y)$, there exists some $\beta'\in\textup{L}_{\mathbb{C}}^1(Y)$ such that $$\pi^*\beta\cdot C'=f^*\pi^*\beta'\cdot C'=\pi^*\beta'\cdot kC=0,$$ noting that $\Gamma$ is perpendicular to $\pi^*\textup{L}_{\mathbb{C}}^1(Y)$ by the exact sequence ([\[exact\]](#exact){reference-type="ref" reference="exact"}). Hence, $R_{\Gamma'}=R_{\Gamma}$ by the choice of $\pi$, and thus $f(R_{\Gamma})=R_{\Gamma}$. Since $\pi$ is uniquely determined by $R_{\Gamma}$, it follows from the rigidity lemma that $f$ descends to a holomorphic endomorphism $g$ of $Y$. It remains to prove our claim [$(**)$](#$(**)$). Let us fix a basis $\{v_1,\ldots, v_k\}$ of $\textup{L}_{\mathbb{C}}^1(X)|_E$. Then we have $$S=\{(x_1,\ldots,x_k)\in\mathbb{C}^k~|~(\sum_{i=1}^k x_iv_i)^{\dim E}=0\}.$$ This implies that $S$ is determined by a homogeneous polynomial of degree $\dim E$.
Note that this polynomial does not vanish identically. Indeed, up to a multinomial coefficient, the coefficient of the monomial $\prod_i x_i^{l_i}$ is the intersection number $v_1^{l_1}\cup\cdots\cup v_k^{l_k}$. For any Kähler class $\omega$ on $X$, the restriction $\omega|_E\in \textup{L}_{\mathbb{C}}^1(X)|_E$ is still Kähler (cf. [@GK20 Proposition 3.5] or [@Zho21 Proposition 2.6]); hence $(\omega|_E)^{\dim E}>0$. Since $\omega|_E$ is a linear combination of the basis $\{v_j\}_{j=1}^k$, there exist exponents $(l_1,\ldots,l_k)$ with $v_1^{l_1}\cup\cdots\cup v_k^{l_k}\neq 0$; hence $S$ is determined by a non-zero homogeneous polynomial, which proves the first part of our claim. Moreover, for any $(\pi^*\gamma)|_E\in H$ with $\gamma\in\textup{L}_{\mathbb{C}}^1(Y)$, we have $$\int_E((\pi^*\gamma)|_E)^{\dim E}=\int_X(\pi^*\gamma)^{\dim E}\cdot E=\int_X \gamma^{\dim E}\cup (\pi_* [E])=0,$$ noting that $E$ is $\pi$-exceptional. This implies that $H\subseteq S$. Since $H$ and $S$ have the same codimension in $\textup{L}_{\mathbb{C}}^1(X)|_E$, the linear subspace $H$ of $S$ is thus an irreducible component of $S$, and the second part of our claim is proved. ◻ **Proposition 9** (cf. [@Zho21 Proposition 8.4]). *Let $f:X\rightarrow X$ be an int-amplified endomorphism of a normal $\mathbb{Q}$-factorial compact Kähler klt threefold $X$, and let $\pi:X\rightarrow Y$ be a divisorial contraction of a $K_X$-negative extremal ray $R_\Gamma:=\mathbb{R}_{\ge 0}[\Gamma]$ generated by a positive closed $(2,2)$-current $\Gamma$. Then, after iteration, $f$ descends to an int-amplified endomorphism on $Y$.* *Proof.* The proof is the same as [@Zho21 Proposition 8.4] after replacing [@Zho21 Lemma 8.3] by Lemma [Lemma 8](#lemequivariant){reference-type="ref" reference="lemequivariant"}. ◻ With the same proof as in [@Zha10 Lemma 3.6], we have the following lemma. **Lemma 10** (cf. [@Zho21 Lemma 8.5], [@Zha10 Lemma 3.6]).
*Let $f:X\rightarrow X$ be a surjective endomorphism of a normal $\mathbb{Q}$-factorial compact Kähler klt threefold $X$ and $\sigma:X\dashrightarrow X^+$ a flip with $\pi:X\rightarrow Y$ the corresponding flipping contraction of a $K_X$-negative extremal ray $R_\Gamma:=\mathbb{R}_{\ge 0}[\Gamma]$ generated by a positive closed $(2,2)$-current $\Gamma$. Suppose that $R_{f_*\Gamma}=R_\Gamma$. Then the dominant meromorphic map $f^+:X^+\dashrightarrow X^+$ induced from $f$ is holomorphic. Both $f$ and $f^+$ descend to one and the same endomorphism of $Y$.* **Proposition 11** (cf. [@Zho21 Proposition 8.6], [@MZ18 Lemma 6.5]). *Let $f:X\rightarrow X$ be an int-amplified endomorphism of a normal $\mathbb{Q}$-factorial compact Kähler klt threefold $X$ and $\sigma:X\dashrightarrow X^+$ a flip with $\pi:X\rightarrow Y$ the corresponding flipping contraction of a $K_X$-negative extremal ray $R_\Gamma:=\mathbb{R}_{\ge 0}[\Gamma]$ generated by a positive closed $(2,2)$-current $\Gamma$. Then there exists a $K_X$-flip of $\pi$, $\pi^+:X^+\rightarrow Y$, such that $\pi=\pi^+\circ\sigma$ and, for some $s>0$, the commutative diagram is $f^s$-equivariant.* *Proof.* The proof is the same as [@Zho21 Proposition 8.6] after replacing [@Zho21 Lemmas 8.3 and 8.5] by Lemmas [Lemma 8](#lemequivariant){reference-type="ref" reference="lemequivariant"} and [Lemma 10](#flipdescendingholomorphic){reference-type="ref" reference="flipdescendingholomorphic"}. ◻ With all the preparations settled, we can now prove Theorem [Theorem 5](#E-MMP-threefolds){reference-type="ref" reference="E-MMP-threefolds"}.
*Proof of Theorem [Theorem 5](#E-MMP-threefolds){reference-type="ref" reference="E-MMP-threefolds"}.* The proof is the same as [@Zho21 Theorem 1.5] after replacing [@HP15 Theorem 1.1] by the pure case of [@DH22 Theorem 1.2], and replacing [@Zho21 Theorem 8.7] by Lemmas [Lemma 8](#lemequivariant){reference-type="ref" reference="lemequivariant"}, [Lemma 10](#flipdescendingholomorphic){reference-type="ref" reference="flipdescendingholomorphic"} and Propositions [Proposition 9](#prodivisorial){reference-type="ref" reference="prodivisorial"}, [Proposition 11](#proflipdescending){reference-type="ref" reference="proflipdescending"}. ◻ ## Proof of Corollary [Corollary 6](#simply-connected){reference-type="ref" reference="simply-connected"} {#proof-of-corollary-simply-connected} In this subsection, we prove Corollary [Corollary 6](#simply-connected){reference-type="ref" reference="simply-connected"} by running the E-MMP. Note that, for a $Q$-complex torus $X$ admitting an int-amplified endomorphism $f$, it has been shown that there is no proper $f^{-1}$-periodic analytic subvariety in $X$ (cf. [@Zho21 Lemma 5.4]). We shall use this fact heavily to show that the end product of the E-MMP for the threefold $X$ in Corollary [Corollary 6](#simply-connected){reference-type="ref" reference="simply-connected"} has dimension at most one. *Proof of Corollary [Corollary 6](#simply-connected){reference-type="ref" reference="simply-connected"}.* First, we show that the canonical divisor $K_X$ is not pseudo-effective. Suppose to the contrary that $K_X$ is pseudo-effective. Then $X$ is a $Q$-complex torus by Theorem [Theorem 4](#thm-Q-torus){reference-type="ref" reference="thm-Q-torus"}. Since $X$ is simply connected, the irregularity $q(X)=0$. Since there exists a complex $3$-torus $T$ such that $T\rightarrow X$ is quasi-étale, and thus étale by the purity of branch loci (cf. [@GR55]), the cover $T\rightarrow X$ is trivial and $X$ itself is a complex torus. However, this contradicts $q(X)=0$.
Therefore, $K_X$ is not pseudo-effective. Second, we show that $X$ is projective. Suppose to the contrary that $X$ is non-projective. By [@HP15 Theorem 1.1] and Theorem [Theorem 5](#E-MMP-threefolds){reference-type="ref" reference="E-MMP-threefolds"}, with $f$ replaced by its power, there exists an $f$-equivariant minimal model program from $X$ to a Kähler surface $S$ with only klt singularities. Since $X$ is non-algebraic, our $S$ is non-uniruled; for otherwise, the MRC fibration of $X$ would have target of dimension $\le 1$ (and hence $X$ would be projective; see [@HP15 Introduction]). Therefore, $K_S$ is pseudo-effective. Then $S$ is a $2$-dimensional $Q$-complex torus (cf. [@Zho21 Theorem 1.4]), i.e., there is a finite morphism $T\to S$ from a complex torus $T$ which is étale in codimension one. Since $S$ does not contain any totally invariant periodic proper subsets (cf. [@Zho21 Lemma 5.4]), there are no flips or divisorial contractions when running the E-MMP for $X$, noting that $\dim X=3$. Hence, with $f$ replaced by its power, $\pi:X\rightarrow S$ is an $f$-equivariant Fano contraction of a $K_X$-negative extremal ray. Let $\widetilde{X}$ be the normalization of the main component of $X\times_S T$. Then we have the following commutative diagram: $$\xymatrix{\widetilde{X}\ar[drrr]^{p_2\circ n}\ar[dr]^n\ar[ddr]_{\nu_X}&&&\\ &X\times_ST\ar[rr]_{p_2}\ar[d]^{p_1}&&T\ar[d]^{\nu_S}\\ &X\ar[rr]_\pi&&S }$$ where $n$ is the normalization map and $p_1$ and $p_2$ are the natural projections. **We claim that $\nu_X$ is étale.** Suppose the claim holds for the time being. Then $1=\deg \nu_X=\deg \nu_S$, noting that $X$ is simply connected. So $S=T$ is a complex $2$-torus and $0<q(S)\le q(X)=0$, a contradiction. Hence, $X$ is projective. Now we are left to prove our claim. Since $\nu_S$ is étale in codimension one, the branch locus of $\nu_S$ consists of finitely many points. Note that étaleness is stable under base change and $\pi$ is equidimensional.
Therefore, $p_1$ is étale outside finitely many curves, which form a subset of codimension $\ge 2$. Since $X$ is smooth, around the étale locus of $p_1$, $X\times_S T$ is smooth and hence normal. So $n$ is biholomorphic around the étale locus of $p_1$. This implies that $\nu_X$ is étale outside a codimension $\ge 2$ subset. By the purity of branch loci (cf. [@GR55]), $\nu_X$ is étale. This proves our claim. Finally, we are now in the case when $X$ is projective and $K_X$ is not pseudo-effective. By the same proof as in the second step, we know that the end product $Z$ of the E-MMP has dimension $\le 1$. If $Z$ is a single point, then it follows from Theorem [Theorem 5](#E-MMP-threefolds){reference-type="ref" reference="E-MMP-threefolds"} or [@Men20 Theorem 1.10] that $X$ is rationally connected. If $\dim Z=1$, then $q(Z)\le q(X)=0$, which implies that $Z$ is rational and hence $X$ is also rationally connected; see [@GHS03 Corollary 1.3]. Thus, $X$ is (projective and) rationally connected. Applying [@Yos21 Corollary 1.4], we see that $X$ is of Fano type, and our corollary is thus proved. ◻ B. Claudon, P. Graf, and H. Guenancia, Numerical characterization of complex torus quotients, *Comment. Math. Helv.*, **97** (2022), no. 4, 769--799. B. Claudon, P. Graf, and H. Guenancia, Equality in the Miyaoka-Yau inequality and uniformization of non-positively curved klt pairs, [arXiv:2305.04074](https://arxiv.org/abs/2305.04074), 2023. F. Campana, A. Höring and T. Peternell, Abundance for Kähler threefolds, *Ann. Sci. Éc. Norm. Supér. (4)*, **49** (2016), no. 4, 971--1025. O. Das and C. Hacon, The log minimal model program for Kähler $3$-folds, [arXiv:2009.05924v2](https://arxiv.org/abs/2009.05924), 2022. O. Das and W. Ou, On the Log Abundance for Compact Kähler 3-folds, *Manuscripta Math.* (to appear), [arXiv:2201.01202](https://arxiv.org/abs/2201.01202), 2022. O. Das and W.
Ou, On the Log Abundance for Compact Kähler 3-folds II, [arXiv:2306.00671](https://arxiv.org/abs/2306.00671), 2023. T. Graber, J. Harris, and J. Starr, Families of rationally connected varieties, *J. Amer. Math. Soc.*, **16** (2003), no. 1, 57--67. P. Graf and T. Kirschner, Finite quotients of three-dimensional complex tori, *Ann. Inst. Fourier (Grenoble)*, **70** (2020), no. 2, 881--914. D. Greb, S. Kebekus and T. Peternell, Étale fundamental groups of Kawamata log terminal spaces, flat sheaves, and quotients of Abelian varieties, *Duke Math. J.*, **165** (2016), no. 10, 1965--2004. H. Grauert and R. Remmert, Zur Theorie der Modifikationen. I. Stetige und eigentliche Modifikationen komplexer Räume, *Math. Ann.*, **129** (1955), 274--296. A. Höring and T. Peternell, Mori fibre spaces for Kähler threefolds, *J. Math. Sci. Univ. Tokyo*, **22** (2015), no. 1, 219--246. A. Höring and T. Peternell, Minimal models for Kähler threefolds, *Invent. Math.*, **203** (2016), no. 1, 217--264. S. Lu and B. Taji, A characterization of finite quotients of abelian varieties, *Int. Math. Res. Not. IMRN* (2018), no. 1, 292--319. S. Meng, Building blocks of amplified endomorphisms of normal projective varieties, *Math. Z.*, **294** (2020), no. 3-4, 1727--1747. S. Meng and D.-Q. Zhang, Building blocks of polarized endomorphisms of normal projective varieties, *Adv. Math.*, **325** (2018), 243--273. N. Nakayama, The lower semicontinuity of the plurigenera of complex varieties, in *Algebraic geometry, Sendai, 1985*, Adv. Stud. Pure Math., vol. 10, North-Holland, 1987, pp. 551--590. N. Nakayama, Compact Kähler Manifolds whose Universal Covering Spaces are Biholomorphic to $\mathbb{C}^n$, Preprint, [RIMS1230.pdf](https://www.kurims.kyoto-u.ac.jp/preprint/preprint_y1999.html), May 1999. N.
Nakayama, Singularity of Normal Complex Analytic Surfaces Admitting Non-Isomorphic Finite Surjective Endomorphisms, Preprint, [RIMS1920.pdf](https://www.kurims.kyoto-u.ac.jp/preprint/file/RIMS1920.pdf), Kyoto Univ., July 2020. N. Nakayama, On normal Moishezon surfaces admitting non-isomorphic surjective endomorphisms, Preprint, [RIMS1923.pdf](https://www.kurims.kyoto-u.ac.jp/preprint/file/RIMS1923.pdf), Kyoto Univ., September 2020. N. Nakayama and D.-Q. Zhang, Polarized endomorphisms of complex normal varieties, *Math. Ann.*, **346** (2010), no. 4, 991--1018, [arXiv:0909.1688v1](https://arxiv.org/abs/0908.1688v1). S.-T. Yau, On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation, I, *Comm. Pure Appl. Math.*, **31** (1978), no. 3, 339--411. S. Yoshikawa, Structure of Fano fibrations of varieties admitting an int-amplified endomorphism, *Adv. Math.*, **391** (2021), 107964. D.-Q. Zhang, Polarized endomorphisms of uniruled varieties, *Compos. Math.*, **146** (2010), no. 1, 145--168. G. Zhong, Int-amplified endomorphisms of compact Kähler spaces, *Asian J. Math.*, **25** (2021), no. 3, 369--392. G. Zhong, Compact Kähler threefolds with the action of an abelian group of maximal rank, *Proc. Amer. Math. Soc.*, **150** (2022), no. 1, 55--68. G. Zhong, Existence of the equivariant minimal model program for compact Kähler threefolds with the action of an abelian group of maximal rank, *Math. Nachr.* **296** (2023), no. 7, 3128--3135.
--- author: - Benoit Saussol and Arthur Boos title: Keplerian shear with Rajchman property --- # Summary {#summary .unnumbered} Keplerian shear was introduced in the context of measure-preserving dynamical systems by Damien Thomine [@DaTho], as a version of mixing for non-ergodic systems. In this study we provide a characterization of Keplerian shear in terms of Rajchman measures, for some flows on tori bundles. Our work applies to dynamical systems with singularities or with non-absolutely continuous measures. We relate the speed of decay of conditional correlations to the Rajchman order of the measures. Some of these results are extended to the case of compact Lie group bundles. # Introduction In a dynamical system, the mixing property reveals that trajectories are intermingled and asymptotically distributed somewhat homogeneously. A paradigmatic example is provided by Arnold's cat map on the 2-torus $\mathbb{T}^2$, endowed with the Lebesgue measure, $$T=\left(\begin{matrix} 2 & 1\\ 1 & 1 \end{matrix}\right).$$ From a probabilistic perspective, mixing signifies the asymptotic independence of events; the long-term evolution forgets the initial conditions. This property, along with its quantitative counterparts (e.g., decay of correlations), forms the foundation for various probabilistic limit theorems in the context of deterministic dynamics, such as the central limit theorem and Borel-Cantelli lemmas (see [@Chern]). However, there are many systems in which certain quantities are conserved during the evolution. This occurs in integrable systems of Hamiltonian dynamics, particularly in some geodesic flows and rational billiards (see [@DaTho] and references therein). This phenomenon can also manifest in biological systems, where the preservation of a character results in different evolutionary paths. The presence of these invariants prevents the system from being ergodic and, consequently, from exhibiting mixing.
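As a quick numerical illustration of the mixing of the cat map (a sketch with our own illustrative choices: the observable $\cos(2\pi x)$, the sample size and the number of iterates are not taken from the text), one can estimate the correlations under the Lebesgue measure by Monte Carlo:

```python
import numpy as np

# Monte Carlo estimate of correlations for Arnold's cat map
# T(x, y) = (2x + y, x + y) mod 1 on T^2 with the Lebesgue measure.
# Observable and sample size are illustrative choices, not from the text.
rng = np.random.default_rng(0)
pts = rng.random((200_000, 2))            # points ~ Lebesgue measure on T^2
f0 = np.cos(2 * np.pi * pts[:, 0])        # observable at time 0

corrs = []
orbit = pts
for n in range(1, 7):
    x, y = orbit[:, 0], orbit[:, 1]
    orbit = np.column_stack(((2 * x + y) % 1.0, (x + y) % 1.0))
    fn = np.cos(2 * np.pi * orbit[:, 0])  # observable at time n
    corrs.append(np.mean(f0 * fn) - np.mean(f0) * np.mean(fn))

print([round(float(c), 3) for c in corrs])
```

For this particular observable the exact correlation already vanishes for every $n\ge 1$, so the printed values only show Monte Carlo noise of order $N^{-1/2}$; smooth observables on hyperbolic maps of this kind in fact exhibit exponential decay of correlations.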
The transvection $$T := \left(\begin{matrix} 1 & 0\\ 1 & 1 \end{matrix}\right),$$ acting on the torus $\mathbb{T}^2$, is a paradigmatic example. This map acts on the second coordinate as a rotation of the circle. The evolution in each ergodic component (each individual system) is notably simple and predictable. Nevertheless, Kesten demonstrated that, with some randomness introduced into the rotation angle, trajectories distribute quite homogeneously in the long term. Specifically, the discrepancies, suitably normalized, converge to a Cauchy distribution [@kesten] (see also [@Dolgo] for a multidimensional generalization). In celestial mechanics, planetary rings (the motion of each dust particle) may be modeled by the flow $$\begin{array}{lll} g_t : &[a,b]\times \mathbb{T} &\to [a,b]\times \mathbb{T}\\ &(r,\theta)&\mapsto \left(r,\theta+tr^{-\frac{3}{2}}\right).\end{array}$$ While the distance to the center is preserved, and each trajectory is essentially a rotation, the fact that angular velocities vary allows for the aggregation of materials, potentially explaining the formation of larger bodies. Recently, Damien Thomine [@DaTho] introduced the notion of Keplerian shear, formalizing the fact that in non-ergodic systems, trajectories may distribute homogeneously and independently of their past, provided we ignore the invariants. The aim of our work is to pursue the study of non-ergodic dynamical systems which have the Keplerian shear property. The probabilistic dynamical system $(X,T,\mu)$ has Keplerian shear if for all $f\in L^2(\mu)$ we have the weak convergence $$\label{1} f\circ T^n \to E_\mu(f|\mathcal{I}),$$ $\mathcal{I}$ being the $\sigma$-algebra of sets invariant under the transformation $T$. The lack of ergodicity indeed adds randomness to the dynamics (the choice of the ergodic component), and even if the dynamics on each fiber is not mixing, the system can globally appear mixing conditionally on the fibers. We now present the organisation of the article.
In Section 2, we define Keplerian shear and relate it to the mixing property and to the Rajchman property of a measure, the main notion of this article. We also describe the dynamical systems considered in this work, which include locally action-angle dynamics. In Section 3, we prove the main result in the discrete case and investigate the speed of shearing, using anisotropic Sobolev spaces. Section 4 addresses the same questions as Section 3 in the continuous case. Section 5 generalizes the work to the case where the phase space is a bundle of compact connected Hausdorff Lie groups. Finally, Section 6 provides applications of Keplerian shear to Borel-Cantelli lemmas and Diophantine approximation. # Settings ## Mixing property and Keplerian shear The mixing property is well known and is satisfied in many situations. However, it is meaningful only for ergodic systems. From this point of view, Keplerian shear may be the right compromise. Throughout the article we denote by $(\Omega,\mathcal{T},\mu,(g_t)_{t\in\mathbb{R}})$ (resp. $(\Omega,\mathcal{T},\mu,T)$) a continuous (resp. discrete) dynamical system. A mixing system is an asymptotically independent system: **Definition 1** (Mixing system). *We say that $(\Omega,\mathcal{T},\mu,(g_t)_{t\in\mathbb{R}})$ (resp. $(\Omega,\mathcal{T},\mu,T)$) is mixing if* *$$\forall (A,B)\in \mathcal{T}^2, \mu(A\cap g_t^{-1}(B))-\mu(A)\mu(B)\xrightarrow[t\to+\infty]{}0$$* *$$\left(\text{resp. } \mu(A\cap \left(T^n\right)^{-1}(B))-\mu(A)\mu(B)\xrightarrow[n\to+\infty]{}0\right)$$* We recall that mixing systems are ergodic. Consequently, the following systems are not mixing, but we will show that they exhibit Keplerian shear. **Example 1** (Non-mixing systems due to lack of ergodicity). *We endow these systems with the Lebesgue measure on $\mathbb{T}^2$.* 1. *$\begin{array}{lll}T : \mathbb{T}^2&\to&\mathbb{T}^2\\ (x,y)&\mapsto &(x,y+x)\end{array}$* 2.
*$\begin{array}{lll}g_t : \mathbb{T}^2&\to&\mathbb{T}^2\\ (x,y)&\mapsto &\left(x,y+t\cos\left(2\pi\left(x-\frac{1}{2}\right)\right)\right)\end{array}$* **Definition 2** (Invariant $\sigma$-algebra). *The invariant $\sigma$-algebra $\mathcal{I}$ is the $\sigma$-algebra of measurable sets invariant under the continuous (resp. discrete) flow: for $(\Omega,\mathcal{T},\mu,(g_t)_{t\in\mathbb{R}})$ (resp. $(\Omega,\mathcal{T},\mu,T)$) a continuous (resp. discrete) dynamical system,* *$\mathcal{I} := \left\{A\in\mathcal{T} : \forall t\in \mathbb{R},\mu(A\Delta g_t^{-1}(A))=0\right\} (\text{resp. } \left\{A\in\mathcal{T} :\mu(A\Delta T^{-1}(A))=0\right\})$* Keplerian shear is a notion of asymptotic independence conditionally on the fibers of the invariant algebra. **Definition 3** (Keplerian shear). *The system exhibits Keplerian shear if:* *$$\forall f\in \mathbb{L}^2_\mu(\Omega),f\circ g_t\underset{t\to+\infty}{\rightharpoonup}\mathbb{E}_\mu(f\vert\mathcal{I})\, \left(\text{resp. } f\circ T^n \underset{n\to+\infty}{\rightharpoonup}\mathbb{E}_\mu(f\vert\mathcal{I})\right)$$* **Definition 4** (Conditional correlation). *We define the conditional correlation by: $$\forall (f_1,f_2)\in\left(\mathbb{L}^2_\mu(\Omega)\right)^2,Cov_t(f_1,f_2\vert\mathcal{I}):=\mathbb{E}_\mu(\overline{f_1}\cdot (f_2\circ g_t)\vert\mathcal{I})-\overline{\mathbb{E}_\mu(f_1\vert \mathcal{I})} \cdot\mathbb{E}_\mu(f_2\vert \mathcal{I})$$* *$$\left(\text{resp. }Cov_n(f_1,f_2\vert\mathcal{I}):=\mathbb{E}_\mu(\overline{f_1}\cdot (f_2\circ T^n)\vert\mathcal{I})-\overline{\mathbb{E}_\mu(f_1\vert \mathcal{I})} \cdot\mathbb{E}_\mu(f_2\vert \mathcal{I})\right)$$* **Proposition 1** ([@DaTho]). *The Keplerian shear property is equivalent to the convergence to $0$ of the expectation of the conditional correlations; in other words: $$\forall (f_1,f_2)\in\left(\mathbb{L}^2_\mu(\Omega)\right)^2,\mathbb{E}_\mu\left(Cov_t(f_1,f_2\vert \mathcal{I})\right)\xrightarrow[t\to+\infty]{}0\left(\text{resp.
} \mathbb{E}_\mu\left(Cov_n(f_1,f_2\vert \mathcal{I})\right)\xrightarrow[n\to+\infty]{}0 \right) \label{correlation}$$* For ergodic systems, Keplerian shear is equivalent to mixing. The mixing property is thus equivalent to ergodicity together with Keplerian shear. ## Rajchman measure Fourier theory will be instrumental in our study of correlation decay. In particular, measures with the Riemann-Lebesgue property will play an important role in this work. **Definition 5** (Rajchman measure). *We work in the space $(\mathbb{R},\mathcal{B}(\mathbb{R}),\nu)$ in the continuous case (resp. $(\mathbb{T},\mathcal{B}(\mathbb{T}),\nu)$ in the discrete case).* *The measure $\nu$ is Rajchman if* *$$\widehat{\nu}(t)\xrightarrow[t\to\pm\infty]{}0\, \left(\text{resp. }\widehat{\nu}(n)\xrightarrow[n\to\pm\infty]{}0\right)$$* *$\text{with } \, \hat{\nu}(t)= \text{$\int_{\mathbb{R}} e^{2i\pi tx} d\nu(x)$}\, \left(\text{resp. } \hat{\nu}(n)= \text{$\int_{\mathbb{T}} e^{2i\pi nx} d\nu(x)$}\right)$* Note that we will mostly consider one-dimensional Rajchman measures. **Definition 6** (Rajchman measure in higher dimension). *We work in the space $(\mathbb{R}^d,\mathcal{B}(\mathbb{R}^d),\nu)$ in the continuous case (resp. $(\mathbb{T}^d,\mathcal{B}(\mathbb{T}^d),\nu)$ in the discrete case).* *The measure $\nu$ is Rajchman if* *$$\widehat{\nu}(w)\xrightarrow[\text{$\left\Vert w\right\Vert$}\to+\infty]{}0\, \left(\text{resp. }\widehat{\nu}(n)\xrightarrow[\text{$\left\Vert n\right\Vert$}\to+\infty]{}0\right)$$* *$\text{with } \, \hat{\nu}(w)= \text{$\int_{\mathbb{R}^d} e^{2i\pi \left<w\vert x\right>} d\nu(x)$}\, \left(\text{resp. } \hat{\nu}(n)= \text{$\int_{\mathbb{T}^d} e^{2i\pi \left<n\vert x\right>} d\nu(x)$}\right)$* ### Functional equivalences We recall an equivalent interpretation of the Rajchman property in terms of convergence in distribution, or equidistribution. **Proposition 2**.
*In the continuous (resp. discrete) case, the measure $\nu$ is Rajchman if and only if, for $x$ distributed according to $\nu$:* *$$\forall a>0, \frac{nx}{a} [1] \xrightarrow[n\to+\infty]{\mathscr{L}}\mathcal{U}\left(\mathbb{T}\right)$$* *$$\left( \text{resp. }\,nx [1]\xrightarrow[n\to+\infty]{\mathscr{L}} \mathcal{U}\left(\mathbb{T}\right)\right)$$* *Proof.* [Discrete case.]{.ul} Let $\varphi\in\mathcal{D}(\mathbb{T})$. Its Fourier series decomposition gives $$\forall x\in \mathbb{T},\forall n\in \mathbb{Z}, \varphi(nx)=\text{${\underset{k\in \mathbb{Z}}{\sum}} \widehat{\varphi}(k)e^{2i\pi kn x}$}.$$ By the dominated convergence theorem we get $$\int_{\mathbb{T}}\varphi(nx)d\nu(x)=\text{${\underset{k\in \mathbb{Z}}{\sum}} \widehat{\varphi}(k)\int_{\mathbb{T}}e^{2i\pi kn x}$}d\nu(x)= \widehat{\varphi}(0)+\text{${\underset{k\in \mathbb{Z}^*}{\sum}} \widehat{\varphi}(k)$}\widehat{\nu}(kn)\xrightarrow[n\to+\infty]{}\text{$\int_{\mathbb{T}} \varphi(x) d\lambda(x)$}.$$ For the converse implication, take $\varphi : x\mapsto e^{2i\pi x}$ and note that $\widehat{\nu}(n)=\text{$\int_{\mathbb{T}} \varphi(nx) d\nu(x)$}\xrightarrow[n\to+\infty]{}0$. [Continuous case.]{.ul} Let us start with the direct implication. Let $a>0$. Let $\mu$ be the pushforward of $\nu$ by $x\mapsto\frac{x}{a} [1]$. We have for any $n\in\mathbb{Z}$ $$\widehat{\mu}(n)=\text{$\int_{\mathbb{T}} e^{2i\pi nx} d\mu(x)$}=\text{$\int_{\mathbb{R}} e^{2i\pi\frac{n}{a}x} d\nu(x)$}=\widehat{\nu}\left(\frac{n}{a}\right)\xrightarrow[n\to+\infty]{}0.$$ Therefore, $\mu$ is Rajchman on $\mathbb{T}$ and we apply the discrete case. Let us prove the converse implication. Let $\varepsilon>0$.
The uniform continuity of the characteristic function gives $$\exists \delta>0,\forall (x,y)\in\mathbb{R}^2,\left(\vert x-y\vert<\delta\Longrightarrow \vert \widehat{\nu}(x)-\widehat{\nu}(y)\vert<\frac{\varepsilon}{2} \right).$$ In addition, by the Rajchman hypothesis, $$\exists n_0\in \mathbb{N},\forall n\in \mathbb{N},\left(n\geq n_0\Longrightarrow \left\vert\widehat{\nu}\left(n\frac{\delta}{2}\right)\right\vert<\frac{\varepsilon}{2}\right).$$ Let $t\geq n_0\frac{\delta}{2}$. Since $\frac{\delta}{2}\mathbb{N}\cap [t,t+\delta[\ne\emptyset$, there exists $n\in \mathbb{N}$ with $\vert n\frac{\delta}{2}-t\vert<\delta$. Thus $$\vert\widehat{\nu}(t)\vert\leq \left\vert \widehat{\nu}(t)-\widehat{\nu}\left(n\frac{\delta}{2}\right)\right\vert +\left\vert\widehat{\nu}\left(n\frac{\delta}{2}\right)\right\vert<\varepsilon.$$ ◻ **Lemma 1** (Weak$-*$ convergence and Rajchman property). *Let $\mu$ be a Borel probability measure on $\mathbb{R}$.* *The measure $\mu$ is Rajchman if and only if $\left(x\mapsto e^{2i\pi tx}\right)\xrightarrow[t\to+\infty]{}0$ in the weak-$*$ topology on $\left(\mathbb{L}^1_\mu(\mathbb{R})\right)^* = \mathbb{L}^\infty_\mu(\mathbb{R})$.* *Proof.* The converse implication is obtained immediately by testing against $(x\in\mathbb{R}\mapsto 1)\in \mathbb{L}^1_\mu(\mathbb{R})$. Let us prove the direct implication. Let $\varphi\in \mathcal{S}(\mathbb{R})$. Writing $\varphi$ via its inverse Fourier transform and using the Fubini theorem, we get $$\begin{aligned} \text{$\int_{\mathbb{R}} e^{2i\pi tx}\varphi(x) d\mu(x)$}&=\text{$\int_{\mathbb{R}} e^{2i\pi tx}\left(\text{$\int_{\mathbb{R}} e^{2i\pi xs}\widehat{\varphi}(s) d\lambda(s)$}\right) d\mu(x)$}\\ &=\text{$\int_{\mathbb{R}} \widehat{\varphi}(s)\widehat{\mu}(t+s) d\lambda(s)$}\xrightarrow[t\to+\infty]{} 0\end{aligned}$$ by the Rajchman property and the dominated convergence theorem. We conclude by density of $\mathcal{S}(\mathbb{R})$ in $\mathbb{L}^1_\mu(\mathbb{R})$. ◻ ### Fourier-Rajchman decay The notion of Rajchman measure is only qualitative.
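The qualitative dichotomy can already be seen numerically. As a minimal sketch (the middle-thirds Cantor measure and the truncation depth are our illustrative choices; this classical example is not taken from the text), the Fourier coefficients of the Cantor measure do not tend to $0$ along the frequencies $n=3^m$, so this singular continuous measure is not Rajchman:

```python
import numpy as np

def cantor_cf_abs(t, depth=60):
    # |Fourier transform| at frequency t of the middle-thirds Cantor measure,
    # from the classical product formula |nu^(t)| = prod_{k>=1} |cos(2 pi t / 3^k)|,
    # truncated at `depth` factors.
    k = np.arange(1, depth + 1)
    return float(np.abs(np.prod(np.cos(2 * np.pi * t / 3.0 ** k))))

# Along n = 3^m the coefficients stabilize near a positive constant (~0.37)
# instead of decaying: the Cantor measure fails the Rajchman property.
vals = [cantor_cf_abs(3 ** m) for m in range(1, 8)]
print([round(v, 3) for v in vals])
```

The scaling self-similarity of the Cantor set is what pins the coefficients at $n=3^m$ to the constant value $|\widehat{\nu}(1)|$, so no decay can occur along this subsequence.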
To estimate the speed of convergence to $0$ of the expectation of the conditional correlations [\[correlation\]](#correlation){reference-type="eqref" reference="correlation"}, a quantitative version of the Fourier decay will be needed. **Definition 7** (Fourier-Rajchman decay). *A Rajchman speed of a Rajchman measure on $\mathbb{T}^d\,(resp.\, \mathbb{R}^d)$ is a value $r\geq 0$ such that* *$$\exists C>0, \exists n_0\in \mathbb{N}^*,\forall n\in\mathbb{Z}^d,\left(\text{$\left\Vert n\right\Vert$}\geq n_0\Longrightarrow \text{$\left\vert \widehat{\nu}(n)\right\vert$}\leq \frac{C}{\text{$\left\Vert n\right\Vert$}^r}\right)$$* *$$\left(resp.\, \exists C>0, \exists t_0>0,\forall t\in \mathbb{R}^d,\left(\text{$\left\Vert t\right\Vert$}\geq t_0\Longrightarrow \text{$\left\vert \widehat{\nu}(t)\right\vert$}\leq \frac{C}{\text{$\left\Vert t\right\Vert$}^r}\right)\right)$$* **Definition 8** (Rajchman order). *The Rajchman order of a Rajchman measure is the supremum of its Rajchman speeds. We denote it $r(\mu)$.* **Remark 1**. *The Rajchman order is linked to the Fourier dimension $\dim_F(\mu)$ defined in the work of Boris Solomyak [@BoSol]: we have $\dim_F(\mu)=2r(\mu)$.* *Additionally, as mentioned in [@BoSol], when the Rajchman order satisfies $r>\frac{1}{2}$, we have $\mu\ll\lambda$. Consequently, a Rajchman measure can be singular continuous only if its Rajchman order $r$ satisfies $r\leq \frac{1}{2}$.* **Definition 9** (Diophantine exponent). *We call the Diophantine exponent of $x\in \mathbb{R}$, $Dio(x)=\inf A = \sup B$ where* *$$A=\left\{s>0 : \exists C>0,\forall (p,q)\in \mathbb{Z}\times \mathbb{N}^*,\text{$\left\vert qx-p\right\vert$}\geq \frac{C}{q^s}\right\}$$* *and $$B=\left\{t>0 : \exists \, \text{infinitely many} \, (p,q)\in \mathbb{Z}\times \mathbb{N}^*,\text{$\left\vert qx-p\right\vert$}< \frac{1}{q^t}\right\}$$* The Rajchman order of a measure controls the Diophantine exponent almost surely.
(See Proposition [Proposition 14](#kurzman){reference-type="ref" reference="kurzman"}.) **Proposition 3**. *If $r(\mu)\leq \frac{1}{2}$, then for $\mu$-a.e. $\alpha \in [0,1[$, $Dio(\alpha)\leq \frac{1}{r(\mu)}-1$. In the absolutely continuous case, $Dio(\alpha)=1$ a.e.* ### Radon-Nikodym-Lebesgue decomposition We discuss the simple relations between the Rajchman property and the Radon-Nikodym-Lebesgue decomposition $$\nu = \nu_{ac} + \nu_{sc} + \nu_{d}.$$ 1. [Absolutely continuous part]{.ul}. Absolutely continuous measures are Rajchman; this is guaranteed by the Riemann-Lebesgue lemma. When the regularity of the density is known, one can estimate the Rajchman order. The work of Damien Thomine already covers the absolutely continuous case. 2. [Singular continuous part]{.ul}. Many singular continuous measures are Rajchman, and others are not. We extend Damien Thomine's results to very singular systems. 3. [Discrete part]{.ul}. Discrete measures are never Rajchman, because the complex exponentials do not converge to $0$. Consequently, $\nu$ is Rajchman if and only if $\nu_{sc}$ is Rajchman and $\nu_d=0$. ### Examples of singular Rajchman measures {#examRajch} **Definition 10** (Pisot number). *A real $\theta$ is a Pisot number if and only if $\theta >1$ is an algebraic integer and every other root $\theta_r$ of its minimal polynomial satisfies $\text{$\left\vert \theta_r\right\vert$}<1$.* We note that Lebesgue-almost all numbers are not Pisot. Now, to highlight the ambiguity of the Rajchman property for singular measures, we exhibit examples of continuous singular measures with this property. **Example 2** (Self-similar measure). *Consider, for $\theta>2$, the distribution $\mu_\theta$ of $\sum_{n\in\mathbb{N}^*}\pm \theta^{-n}$, where the signs are chosen i.i.d. with probability $\frac{1}{2}$ each.
Note that $\mu_{\theta}$ has characteristic function $\widehat{\mu_\theta} : k\in\mathbb{Z}\mapsto \prod_{n\in\mathbb{N}^*}{\cos\left(2\pi\theta^{-n}k\right)}$.* *$\mu_{\theta}$ is Rajchman if and only if $\theta$ is not a Pisot number [@Kaha].* *Additionally, for Lebesgue-almost all real $\theta>2$, these measures are singular and their Rajchman order satisfies $r\left(\mu_\theta\right)>0$.* **Example 3** (Rajchman measure with Liouville set as support $(Dio=+\infty)$). *Christian Bluhm [@CBluhm] constructed a Rajchman measure $\mu_{\infty}$ supported on the Liouville numbers.* *According to Proposition [Proposition 3](#diophraj){reference-type="ref" reference="diophraj"}, its order is $r\left(\mu_{\infty}\right)=0$.* ## Tori bundles  [\[toribundle\]]{#toribundle label="toribundle"} We study dynamical systems defined in the setting introduced in [@DaTho]: tori bundles. These systems have local action-angle coordinates. **Definition 11** (Tori bundle). *Let $(M,\mathcal{A})$ be an $n$-dimensional ($n\in\mathbb{N}^*$) $\mathcal{C}^1$ Lindelöf[^1] manifold.* *Let $d\in \mathbb{N}^*$, $(\Omega,\mu)$ a Borel space and $\pi$ a continuous projection of $\Omega$ onto $M$.* *$\Omega$ is an $(n,d)$-dimensional tori bundle if* 1. *locally, for each chart $U$ of $\mathcal{A}$, we have a homeomorphism $\psi_U : \pi^{-1}(U)\to U\times \mathbb{T}^d$;* 2. *for all $U$ in $\mathcal{A},\pi_1\circ \psi_U=\pi_{\vert U}$.* Bundles of the form $M\times \mathbb{T}^d$ already constitute interesting examples. Note that we do not impose the affine property on tori bundles, as it is not necessary in the sequel. **Definition 12** (Compatible flow).
*Let $(\Omega, \mu, (g_t)_{t\in\mathbb{R}})$ be a measure-preserving dynamical system.* *The flow $(g_t)_{t\in \mathbb{R}}$ is compatible with $(\Omega,\mu)$ as a tori bundle if, for every chart $U\in \mathcal{A}$, there exists a measurable $v_U \in \left(\mathbb{R}^d\right)^U$ such that for all $t\in\mathbb{R}, \psi_U\circ g_t\circ \psi_U^{-1}(x,y)=(x,y+tv_U(x))$.* *We write $g_t^U := \psi_U\circ g_t\circ \psi_U^{-1}$.* **Definition 13** (Compatible measure). *$\mu$ is a compatible measure if for every chart $U\in \mathcal{A},\left(\mu_{\vert\pi^{-1}(U)}\right)_{\psi_U}=\left(\mu_{\pi}\right)_{\vert U}\otimes \lambda$.* # Main result in discrete dynamical systems In this case, we use the Rajchman property through the corresponding Fourier series. We highlight the path needed to establish Keplerian shear and obtain the main result in the discrete case. **Theorem 1** (The main result). *The discrete dynamical system $(\Omega, \mu, T)$ with $\Omega$ a tori bundle exhibits Keplerian shear if and only if, for all $\xi\ne 0_{\mathbb{Z}^d}$ and every chart $U\in\mathcal{A}$, the measure $m^\mathbb{T}_{\xi,U}$ is Rajchman, where $$m^\mathbb{T}_{\xi,U}=\left(\left(\left(\mu_{\pi}\right)_{\vert U}\right)_{\left<\xi\vert v_U(\cdot)\right>-\lfloor \left<\xi\vert v_U(\cdot)\right>\rfloor}\right)_{\vert\mathbb{T}\setminus \{0\}}.$$* *Proof.* Let us start with the direct implication. Let $\xi\in\mathbb{Z}^d\setminus\{0_{\mathbb{Z}^d}\}$ and $U\in\mathcal{A}$. Take $$f_1 : x\in \pi^{-1}(U)\mapsto \mathds{1}_{\left(\left<\xi\vert v_U(\cdot)\right>-\lfloor \left<\xi\vert v_U(\cdot)\right>\rfloor\right)^{-1}({\mathbb{T}\setminus\{0\}})}(\pi(x))e^{2i\pi\left<\xi\vert \pi_2\circ \psi_U(x)\right>}$$ and take $f_2 =f_1$.
Then $$\forall n\in\mathbb{N},\text{$\int_{\pi^{-1}(U)} \overline{f_1}\cdot f_2\circ T^n d\mu$}=\text{$\int_{\mathbb{T}} e^{2i\pi n z} dm^\mathbb{T}_{\xi,U}(z)$},$$ and by Birkhoff-Kakutani and Keplerian shear, $$\text{$\int_{\mathbb{T}} e^{2i\pi n z} dm^\mathbb{T}_{\xi,U}(z)$}\xrightarrow[n\to+\infty]{}0.$$ So for every chart $U\in\mathcal{A}$, $m^\mathbb{T}_{\xi,U}$ is Rajchman. Let us treat the converse implication. The weak$-*$ convergence at $+\infty$ of the exponentials appearing in the Fourier series is equivalent to the Rajchman property of the measures $m^\mathbb{T}_{\xi,U}$; the implication then follows by Fourier decomposition. ◻ ## Already known results **Definition 14** (Anisotropic Sobolev spaces). *Let $s>0$ and $h : (x,y)\in\mathbb{R}^2\mapsto \begin{cases} \left(1+\frac{x^2}{y^2}\right)^{\frac{1}{2}}\, \text{if } y\ne 0\\ 1\,\text{otherwise}\end{cases}$.* *We define the anisotropic Sobolev space of order $s>0$ as follows:* *$$H^{s,0}(\mathbb{T}^2) :=\left\{f\in \mathbb{L}^2(\mathbb{T}^2) : \sum_{\xi\in\mathbb{Z}^2}\text{$\left\vert \widehat{f}(\xi)\right\vert$}^2h^{2s}(\xi)<+\infty\right\}$$* **Proposition 4**. *For the dynamical system $(\mathbb{T}^2,\lambda\otimes\lambda,T)$ with $T : (x,y)\in\mathbb{T}^2\mapsto(x,y+x)$, we have the following result: $$\forall s>0,\forall (f_1,f_2)\in (H^{s,0}(\mathbb{T}^2))^2,\forall n\in \mathbb{N}^*, \text{$\left\vert \mathbb{E}_{\lambda\otimes\lambda}(Cov_n(f_1,f_2\vert\mathcal{I}))\right\vert$}\leq \frac{4^s}{n^{2s}}\text{$\left\Vert f_1\right\Vert$}_{H^{s,0}(\mathbb{T}^2)}\text{$\left\Vert f_2\right\Vert$}_{H^{s,0}(\mathbb{T}^2)}$$* ## Results for measures other than the Lebesgue measure **Definition 15** (Sobolev space on the torus). *For $s>0,d\in\mathbb{N}^*$, we denote the Sobolev space $$H^{s}(\mathbb{T}^d) := \left\{f\in \mathbb{L}^2(\mathbb{T}^d) : \sum_{k\in\mathbb{Z}^d}\text{$\left\vert \widehat{f}(k)\right\vert$}^2\left(1+\text{$\left\Vert k\right\Vert$}_2^2\right)^{s}<+\infty\right\}.$$* **Definition 16**.
*For $s>0,d\in\mathbb{N}^*$ and $f\in H^s(\mathbb{T}^d)$, we set $C_f := \sup\left\{\text{$\left\vert \widehat{f}(\xi)\right\vert$}\left(1+\text{$\left\Vert \xi\right\Vert$}_2^2\right)^{\frac{s}{2}} : \xi\in \mathbb{Z}^d\right\}$.* To ease the computations and also estimate convergence speeds, we now prove the following proposition. **Proposition 5**. *Consider the dynamical system $(\mathbb{T}^2,\mu\otimes\lambda,T)$ with, as before, the transvection $T : (x,y)\in\mathbb{T}^2\mapsto(x,y+x)$, and with $\mu$ of Rajchman order $r>0$.* *Then: $$\forall s>2,\exists \gamma\in \left[\min\left\{\frac{s}{2}-1,r\right\},r\right],\forall (f_1,f_2)\in\left(H^s(\mathbb{T}^2)\right)^2,\exists C>0,\forall n\in \mathbb{N}^*,\text{$\left\vert \mathbb{E}_{\mu\otimes\lambda}(Cov_n(f_1,f_2\vert\mathcal{I}))\right\vert$}\leq \frac{C}{n^\gamma}$$* *with $\gamma$ optimal.* *Proof.* Let $s>2$ and let $$(f_1,f_2)\in \left(H^s(\mathbb{T}^2)\right)^2.$$ Let $n\in \mathbb{N}^*$. Then $$\text{$\int_{\mathbb{T}^2} \overline{f_1}\left(f_2\circ T^n\right) d\left(\mu\otimes\lambda\right)$}=\sum_{\xi\in\mathbb{Z}^2}\widehat{f_2}(\xi)\text{$\int_{\mathbb{T}} e^{2i\pi n\xi_2x}\left(e^{2i\pi\xi_1x}\left(\text{$\int_{\mathbb{T}} \overline{f_1}(x,y)e^{2i\pi\xi_2 y} d\lambda(y)$}\right)\right) d\mu(x)$}$$ and $$e^{2i\pi\xi_1x}\left(\text{$\int_{\mathbb{T}} \overline{f_1}(x,y)e^{2i\pi\xi_2 y} d\lambda(y)$}\right)=\sum_{k\in\mathbb{Z}}\widehat{f_1}(k,\xi_2)e^{2i\pi (\xi_1-k)x}.$$ Set $$g_{(\xi_1,\xi_2)} : x\in\mathbb{T}\mapsto e^{2i\pi\xi_1x}\left(\text{$\int_{\mathbb{T}} \overline{f_1}(x,y)e^{2i\pi\xi_2 y} d\lambda(y)$}\right).$$ Moreover, $$\forall (k,\xi_1,\xi_2)\in \mathbb{Z}^3,\text{$\left\vert \widehat{g_{(\xi_1,\xi_2)}}(k)\right\vert$}(1+k^2)^{\frac{s}{2}}=\text{$\left\vert \widehat{f_1}(\xi_1-k,\xi_2)\right\vert$}(1+k^2)^{\frac{s}{2}}\leq C_{f_1}\frac{(1+k^2)^{\frac{s}{2}}}{(1+\xi_2^2+(\xi_1-k)^2)^{\frac{s}{2}}},$$ and $$\mathbb{E}_{\mu\otimes\lambda}(Cov_n(f_1,f_2\vert
\mathcal{I}))=\sum_{\xi\in\mathbb{Z}\times\mathbb{Z}^*}\widehat{f_2}(\xi)\text{$\int_{\mathbb{T}} e^{2i\pi n\xi_2x}\left(e^{2i\pi\xi_1x}\left(\text{$\int_{\mathbb{T}} \overline{f_1}(x,y)e^{2i\pi\xi_2 y} d\lambda(y)$}\right)\right) d\mu(x)$}.$$ Let $\delta\in ]0,1]$. We first assume that $$\text{$\left\vert \xi_1-k+n\xi_2\right\vert$}\leq \frac{(n\text{$\left\vert \xi_2\right\vert$})^{\delta}}{4}\ \text{and}\ \text{$\left\vert n\xi_2-k\right\vert$}\leq \frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{2}.$$ Then $\text{$\left\vert k\right\vert$}\geq n\text{$\left\vert \xi_2\right\vert$}-\frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{2}\geq \frac{n\text{$\left\vert \xi_2\right\vert$}}{2}$, and $$\text{$\left\vert \xi_1\right\vert$}\geq n\text{$\left\vert \xi_2\right\vert$}-\text{$\left\vert k-n\xi_2\right\vert$}\geq n\text{$\left\vert \xi_2\right\vert$}-\frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{4}\geq \frac{3}{4}n\text{$\left\vert \xi_2\right\vert$},$$ and $\text{$\left\vert \widehat{\mu}(\xi_1-k+n\xi_2)\right\vert$}\leq 1$. So we get $$\sum_{(k,\xi_1,\xi_2)\in \mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}^*\ \text{and}\ \text{$\left\vert \xi_1-k+n\xi_2\right\vert$}\leq \frac{(n\text{$\left\vert \xi_2\right\vert$})^{\delta}}{4}\ \text{and}\ \text{$\left\vert n\xi_2-k\right\vert$}\leq \frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{2}}\text{$\left\vert \widehat{f_2}(\xi_1,\xi_2)\right\vert$}\text{$\left\vert \widehat{f_1}(k,\xi_2)\right\vert$}\leq \frac{2^{6(s+1)}\zeta(2(s-\delta))C_{f_1}C_{f_2}}{3^sn^{2(s-\delta)}}.$$ Now, we suppose that $$\text{$\left\vert \xi_1-k+n\xi_2\right\vert$}\leq \frac{(n\text{$\left\vert \xi_2\right\vert$})^{\delta}}{4}\ \text{and}\ \text{$\left\vert n\xi_2-k\right\vert$}> \frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{2}.$$ Then $\text{$\left\vert \xi_1\right\vert$}\geq \text{$\left\vert n\xi_2-k\right\vert$}-\frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{4}>\frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{4}$.
Moreover, $\text{$\left\vert \widehat{f_1}(k,\xi_2)\right\vert$}\text{$\left\vert \widehat{\mu}(\xi_1-k+n\xi_2)\right\vert$}\leq \frac{C_{f_1}}{(1+k^2+\xi_2^2)^{\frac{s}{2}}}$ and $\text{$\left\vert \widehat{f_2}(\xi_1,\xi_2)\right\vert$}\leq \frac{4^sC_{f_2}}{(1+\xi_1^2+\xi_2^2)^{\frac{s}{2}}}\leq \frac{4^sC_{f_2}}{\text{$\left\vert \xi_2\right\vert$}^{\frac{s}{2}\left(1+\delta\right)}n^{\frac{s}{2}\delta}}$. So $$\sum_{(k,\xi_1,\xi_2)\in \mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}^*\ \text{and}\ \begin{array}{ll}\text{$\left\vert \xi_1-k+n\xi_2\right\vert$}\leq \frac{(n\text{$\left\vert \xi_2\right\vert$})^{\delta}}{4}\\ \text{and}\ \text{$\left\vert n\xi_2-k\right\vert$}> \frac{(n\text{$\left\vert \xi_2\right\vert$})^\delta}{2}\end{array}}\frac{C_{f_1}\text{$\left\vert \widehat{f_2}(\xi_1,\xi_2)\right\vert$}}{(1+k^2+\xi_2^2)^{\frac{s}{2}}}\leq \frac{4^s\left(1+\zeta\left(\frac{s}{2}\right)\right)\zeta\left(\frac{s}{2}(1+\delta)-\delta\right)C_{f_1}C_{f_2}}{n^{\left(\frac{s}{2}-1\right)\delta}}.$$ Since $s>2$, we have $\frac{s}{2}(1+\delta)-\delta>1$ and $\frac{s}{2}-1>0$. Finally, we assume $$\text{$\left\vert \xi_1-k+n\xi_2\right\vert$}> \frac{(n\text{$\left\vert \xi_2\right\vert$})^{\delta}}{4}.$$ Then $\text{$\left\vert \widehat{\mu}(\xi_1-k+n\xi_2)\right\vert$}\leq \frac{4^rC_\mu}{(n\text{$\left\vert \xi_2\right\vert$})^{r\delta}}$.
So $$\sum_{(k,\xi_1,\xi_2)\in \mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}^*\ \text{and}\ \text{$\left\vert \xi_1-k+n\xi_2\right\vert$}> \frac{(n\text{$\left\vert \xi_2\right\vert$})^{\delta}}{4}}\frac{C_{f_1}\text{$\left\vert \widehat{f_2}(\xi_1,\xi_2)\right\vert$}}{(1+k^2+\xi_2^2)^{\frac{s}{2}}}\leq \frac{4^r\left(1+\zeta\left(\frac{s}{2}\right)\right)^2C_{f_1}C_{f_2}}{n^{r\delta}}.$$ Set $$M(r,s)=\sup\left\{4^r\left(1+\zeta\left(\frac{s}{2}\right)\right)^2,4^s\left(1+\zeta\left(\frac{s}{2}\right)\right)\zeta\left(\frac{s}{2}(1+\delta)-\delta\right),2^{6(s+1)}\zeta(2(s-\delta))\right\}.$$ After a careful study of the extrema, we get $$\text{$\left\vert \mathbb{E}_{\mu\otimes\lambda}\left(Cov_n(f_1,f_2\vert \mathcal{I})\right)\right\vert$}\leq M(r,s)\left(\frac{C_{f_1}C_{f_2}}{n^{\min\left\{\frac{s}{2}-1,r\right\}}}\right).$$ Let $\gamma>0$ denote the optimal order of convergence of the correlations. Then $${\gamma\geq \min\left\{\frac{s}{2}-1,r\right\}}.$$ On the other hand, $$\left(f : (x,y)\in\mathbb{T}^2\mapsto e^{2i\pi y}\right)\in H^s(\mathbb{T}^2)$$ since it is $\mathcal{C}^\infty$, and $$\text{$\left\vert \text{$\int_{\mathbb{T}^2} \overline{f}(x,y)f(T^n(x,y)) d\mu\otimes\lambda(x,y)$}\right\vert$}\leq \frac{C_\mu}{n^r},$$ where $r$, being the Rajchman order, is optimal. So $${\gamma\leq r}.$$ Therefore $${\gamma\in \left[\min\left\{\frac{s}{2}-1,r\right\},r\right]},$$ which is the announced statement, with ${\gamma \text{ optimal}}$. ◻ **Remark 2**. *The convergence speed is bounded by the Rajchman order of the measure $\mu$, so taking increasingly regular observables cannot yield convergence speeds greater than $r$.
On the other hand, this is consistent with the result obtained for the Lebesgue measure: its Rajchman order is $+\infty$, which removes the cap imposed by $r$ on the speed of convergence and allows the observation of ever greater speeds of convergence as the regularity of the functions increases.* ## $\sigma$-algebra and functions invariant under the discrete flow Just as in the continuous flow case, we can identify the $\sigma$-algebra of invariant sets for the discrete flow using Fourier series and orthogonality. In the regular case, under the conditions of Theorem [\[thomcis\]](#thomcis){reference-type="ref" reference="thomcis"}, the $\sigma$-algebra of invariants always coincides with $\pi^{-1}(\mathcal{B}(M))$ up to negligible sets. **Theorem 2**. *We assume that $(\Omega,\mu,T)$ exhibits Keplerian shear.* *$f\in \mathbb{L}^2_\mu(\Omega)$ is invariant under $T$ if and only if for all $U\in\mathcal{A},$* *$\exists (a_k)_{k\in\mathbb{Z}^d}\in \mathbb{L}^2_{\left(\mu_{\pi}\right)_{\vert U}}(U)^{\mathbb{Z}^d},\left(\mu_{\pi}\right)_{\vert U}\otimes\lambda-a.a \,(x,y)\in U\times\mathbb{T}^d,$* *$$\text{$f_{\left \vert \pi^{-1}(U) \right.}$}(\psi_U^{-1}(x,y))=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(x)\right\}^\perp}a_k(x)e^{2i\pi<k\vert y>}.$$* *Proof.* Let $f\in \mathbb{L}^2_\mu(\Omega)$ and $U\in\mathcal{A}$. Let us prove the direct assertion. Suppose that $f\circ T=f$ $\mu$-a.e. Let $(x,y)\in U\times \mathbb{T}^d$. Then $$f(T(\psi_U^{-1}(x,y)))=f(\psi_U^{-1}(x,y)),$$ namely $f(\psi_U^{-1}(x,y+v_U(x)))=f(\psi_U^{-1}(x,y))$.
By Fourier series decomposition: $$\sum_{k\in\mathbb{Z}^d}\widehat{f\circ \psi_U(x,\cdot)}(k)e^{2i\pi<k\vert y+v_U(x)>}=\sum_{k\in\mathbb{Z}^d}\widehat{f\circ \psi_U(x,\cdot)}(k)e^{2i\pi<k\vert y>}.$$ By uniqueness of the coefficients of a Fourier series, $$\forall k\in\mathbb{Z}^d,\widehat{f\circ \psi_U(x,\cdot)}(k)\left(e^{2i\pi <k\vert v_U(x)>}-1\right)=0.$$ If $$\widehat{f\circ \psi_U(x,\cdot)}(k)=0,$$ the conclusion holds. Likewise, if $$\widehat{f\circ \psi_U(x,\cdot)}(k)\ne 0\,\text{and}\,\left(\mu_{\pi}\right)_{\vert U}\left(v_U^{-1}\left((<k\vert \cdot>)^{-1}(\mathbb{Z}^*)\right) \right)=0,$$ the conclusion holds. It remains to treat the case $$\widehat{f\circ \psi_U(x,\cdot)}(k)\ne 0\,\text{and}\,\left(\mu_{\pi}\right)_{\vert U}\left(v_U^{-1} \left((<k\vert \cdot>)^{-1}(\mathbb{Z}^*)\right) \right)>0.$$ But $\mathbb{Z}^*$ is countable, so $$\exists j\in\mathbb{Z}^*,\left(\mu_{\pi}\right)_{\vert U}\left(v_U^{-1}\left((<k\vert \cdot>)^{-1}(\{j\})\right) \right)>0.$$ Suppose then that $$x\in v_U^{-1} \left((<k\vert \cdot>)^{-1}(\{j\})\right),$$ so that $e^{2i\pi <k\vert v_U(x)>}=1$. Then $$\forall n\in\mathbb{N},\text{$\int_{v_U^{-1} ((<k\vert \cdot>)^{-1}(\{j\}))} e^{2i\pi n<k\vert v_U(x)>} d\left(\mu_{\pi}\right)_{\vert U}(x)$}=\left(\mu_{\pi}\right)_{\vert U}\left(v_U^{-1} \left((<k\vert \cdot>)^{-1}(\{j\})\right) \right)>0.$$ By Keplerian shear, the measure $m^\mathbb{T}_{\xi,U}$ is Rajchman, so $$\text{$\int_{v_U^{-1} ((<k\vert \cdot>)^{-1}(\{j\}))} e^{2i\pi n<k\vert v_U(x)>} d\left(\mu_{\pi}\right)_{\vert U}(x)$}\xrightarrow[n\to+\infty]{}0,$$ which would give ${0>0}$: a contradiction. Let us now prove the converse. Suppose that $f$ satisfies, for every chart $U\in \mathcal{A}$, $$\left(\mu_{\pi}\right)_{\vert U}\otimes\lambda-a.a \,(x,y)\in U\times\mathbb{T}^d,f(\psi_U^{-1}(x,y))=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(x)\right\}^\perp}a_k(x)e^{2i\pi<k\vert y>}.$$ Let $z\in \Omega$.
So $$\exists U\in\mathcal{A},z\in \pi^{-1}(U).$$ Then $$f(T(z))=f(T(\psi_U^{-1}(\psi_U(z))))=f(\psi_U^{-1}(\pi(z),\pi_2\circ \psi_U(z)+v_U(\pi(z)))),$$ so $$f(T(z))=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(\pi(z))\right\}^\perp}a_k(\pi(z))e^{2i\pi<k\vert \pi_2\circ\psi_U(z)+v_U(\pi(z))>}.$$ By orthogonality of the terms, $$f(T(z))=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(\pi(z))\right\}^\perp}a_k( \pi(z))e^{2i\pi<k\vert \pi_2\circ\psi_U(z)>}.$$ So $${f(T(z))=f(\psi_U^{-1}(\psi_U(z)))=f(z)}.$$ ◻ **Remark 3**. *We note that in the discrete case and in the continuous case, the invariant measurable functions have the same form, i.e. the same Fourier series decomposition, in a dynamical system exhibiting Keplerian shear. As a consequence, we have the same invariant sets in the discrete case and in the continuous case. We then get the following lemma.* **Definition 17** (Orthogonal stability). *For $U\in\mathcal{A}$, a set $A\in \mathcal{B}(U\times \mathbb{T}^d)$ is orthogonally stable if* *$$\label{invcar} \left(\mu_\pi\right)_{\vert U}-a.a\, x\in U,A_{x\cdot} \overset{\lambda-a.s}{=} A_{x\cdot}+p_{\mathbb{T}^d}\left(\text{$\underset{\xi\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp}{\bigcap} \{\xi\}^\perp$}\right).$$* **Lemma 2**. *For $U\in \mathcal{A}$ and a measurable $A\in \mathcal{B}(U\times \mathbb{T}^d)$, $$\eqref{invcar}\iff\left(\mathds{1}_A(x,y)=\text{${\underset{k\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp}{\sum}} \widehat{\mathds{1}_A(x,\cdot)}(k)e^{2i\pi\left<k\vert y\right>}$}\,\left(\left(\mu_{\pi}\right)_{\vert U}\otimes \lambda\right)-a.s.\right)$$* *Proof.* Let us prove the direct implication. Let $A\in\mathcal{B}(U\times \mathbb{T}^d)$ be invariant.
So $$\label{eqInd} \mathds{1}_A(x,y) =\text{${\underset{k\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp}{\sum}} \widehat{\mathds{1}_A(x,\cdot)}(k)e^{2i\pi\left<k\vert y\right>}$}\, \left(\mu_{\pi}\right)_{\vert U}\otimes \lambda-a.s.$$ Obviously, we already have $$\forall x\in U,A_{x\cdot} \subset A_{x\cdot}+p_{\mathbb{T}^d}\left(\text{$\underset{\xi\in \mathbb {Z}^d\cap \{v_U(x)\}^\perp}{\bigcap} \{\xi\}^\perp$}\right).$$ Consider $(x,y)\in U\times \mathbb{T}^d$ which satisfies [\[eqInd\]](#eqInd){reference-type="eqref" reference="eqInd"} and $$y\in A_{x\cdot}+p_{\mathbb{T}^d}\left(\text{$\underset{\xi\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp}{\bigcap} \{\xi\}^\perp$}\right).$$ Take $k\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp$. Then $$\exists (z,r)\in A_{x\cdot}\times p_{\mathbb{T}^d}\left(\{k\}^\perp\right),y=z+r.$$ We find $$\mathds{1}_A(x,y)=\text{${\underset{k\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp}{\sum}} \widehat{\mathds{1}_A(x,\cdot)}(k)e^{2i \pi\left<k\vert z\right>}$}=\mathds{1}_A(x,z).$$ So $y\in A_{x\cdot}$. We conclude that $$A_{x\cdot} \overset{\lambda-a.s}{=} A_{x\cdot}+p_{\mathbb{T}^d}\left(\text{$\underset{\xi\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp}{\bigcap} \{\xi\}^\perp$}\right).$$ For the converse implication, we easily notice that the measurable sets $A$ satisfying [\[invcar\]](#invcar){reference-type="eqref" reference="invcar"} admit the Fourier series decomposition obtained in [\[invarloc\]](#invarloc){reference-type="eqref" reference="invarloc"}. ◻ **Proposition 6** (Identification of invariant sets by orthogonality with fixed $x$). *Thanks to Lemma [Lemma 2](#leminvar){reference-type="ref" reference="leminvar"}, we deduce that for $U\in \mathcal{A}$, a measurable $A\in \mathcal{B}(U\times \mathbb{T}^d)$ is invariant under the flow $(g_t)_{t\in\mathbb{R}}$ (resp.
$T$) if and only if for $(\mu_{\pi})_{\vert U}-a.a\ x\in U,$ $$A_{x\cdot}=A_{x\cdot}+p_{\mathbb{T}^d}\left(\text{$\underset{k\in\mathbb{Z}^d\cap \{v_U(x)\}^{\perp}}{\bigcap} \{k\}^\perp$}\right).$$* *That is, for $A\in \mathcal{B}(U\times \mathbb{T}^d)$, referring to [\[invarloc\]](#invarloc){reference-type="eqref" reference="invarloc"}, $$\eqref{invcar}\iff A\in \mathcal{I}_U.$$* # Main result in continuous dynamical systems As mentioned in the introduction, many results guaranteeing Keplerian shear have been given, mainly in Damien Thomine's work [@DaTho]. Our next result extends his Theorem 3.3 below. **Theorem 3** (Result in the regular case). *[\[thomcis\]]{#thomcis label="thomcis"} Let $(\Omega, \mu, (g_t)_{t\in\mathbb{R}})$ be a compatible flow with a compatible measure on a tori (affine) bundle as in Subsection [\[toribundle\]](#toribundle){reference-type="ref" reference="toribundle"}.* *Assume that:* 1. *$\mu \ll \lambda$,* 2. *All velocity vectors $v_U$ are $\mathcal{C}^1$,* 3. *$\forall U\in \mathcal{A},\mu\left(\text{$\underset{\xi\in\mathbb{Z}^d\setminus\{0_{\mathbb{Z}^d}\}}{\bigcup} \{x\in U : d\left<\xi\vert v_U(x)\right>=0\}$}\right)=0$.* *Then the system exhibits Keplerian shear. Moreover $\mathcal{I} =\left\{B\in \mathcal{B}(\Omega) : \exists A\in\mathcal{B}(M),\mu\left(B\Delta \pi^{-1}(A)\right)=0\right\}.$* The proof of this theorem uses the Fourier transform and relies on methods from differential geometry, in particular the normal form of submersions. We can see that the push-forward measures $m_{\xi,U} :=\left(\left(\mu_{\pi}\right)_{\vert U}\right)_{\left<\xi\vert v_U(\cdot)\right>}$ (see Figure [\[mesimage\]](#mesimage){reference-type="ref" reference="mesimage"}) are absolutely continuous and satisfy the Riemann-Lebesgue lemma. This last property, namely the Rajchman property, suffices by itself to yield Keplerian shear; it is in fact an equivalence. We have abstracted this property and obtained Keplerian shear under the Rajchman assumption alone.
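As a concrete illustration (a numerical sketch of ours, not taken from the proofs above), consider the standard shear $g_t(x,y)=(x,y+tx)$ on $\mathbb{T}^2$ with Lebesgue measure: the push-forward of the base measure by $x\mapsto <\xi\vert v_U(x)>$ with $\xi=1$ is Lebesgue on $[0,1]$, whose Fourier transform can be computed both in closed form and numerically, exhibiting the Riemann-Lebesgue (Rajchman) decay.

```python
# Numerical sketch (not from the paper): for the shear g_t(x, y) = (x, y + t x)
# with Lebesgue measure, the push-forward m_{xi,U} of the base measure by
# x |-> <xi | v_U(x)> = x is Lebesgue on [0, 1].  Its Fourier transform
# hat{m}(t) = int_0^1 exp(2 i pi t s) ds = (exp(2 i pi t) - 1) / (2 i pi t)
# tends to 0 as t -> infinity, i.e. the measure is Rajchman.
import cmath
import math

def fourier_transform(t, n=20000):
    """Midpoint Riemann sum of int_0^1 exp(2 i pi t s) ds."""
    h = 1.0 / n
    return h * sum(cmath.exp(2j * math.pi * t * (k + 0.5) * h) for k in range(n))

for t in [1.5, 10.5, 100.5]:
    exact = (cmath.exp(2j * math.pi * t) - 1) / (2j * math.pi * t)
    assert abs(fourier_transform(t) - exact) < 1e-3
    # decay dominated by 1/(pi t): a Rajchman bound of order 1
    assert abs(fourier_transform(t)) <= 1.0 / (math.pi * t) + 1e-3
```

Here absolute continuity of $m_{\xi,U}$ is what makes the decay automatic; the point of the next theorem is that the Rajchman conclusion itself, rather than absolute continuity, is the right hypothesis.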
Using tools from measure theory (conditional expectation) allows us, to some extent, to dispense with the regularity of the velocity. The next theorem is the main result in the continuous case, and relies on the real Rajchman property. **Theorem 4** (The main result). *Let $(\Omega, \mu, (g_t)_{t\in\mathbb{R}})$ be a compatible flow with a compatible measure on a tori bundle as in Subsection [\[toribundle\]](#toribundle){reference-type="ref" reference="toribundle"}.* *The dynamical system exhibits Keplerian shear if and only if for all charts $U\in\mathcal{A}$ and all non-zero $\xi\in \mathbb{Z}^d$, the push-forward measures $\left(m_{\xi,U}\right)_{\vert \mathbb{R}^*}$ are Rajchman.* *Proof.* Let us prove the direct implication. Let $U\in \mathcal{A}$ and let $\xi\in \mathbb{Z}^d$ be non-zero. Take $$f_1 : z\in \pi^{-1}(U)\mapsto \mathds{1}_{\left<\xi\vert v_U\right>\ne 0}(\pi(z))e^{2i\pi \left<\xi\vert \pi_2\circ \psi_U(z)\right>}.$$ By the compatibility of $\mu$ and the push-forward theorem, $$\int_\Omega\overline{f_1}(z)\cdot\left(f_1\circ g_t(z)\right)d\mu(z)=\int_{\mathbb{R}}\mathds{1}_{\mathbb{R}^*}(s)e^{2i\pi ts}dm_{\xi,U}(s) = \int_{\mathbb{R}}e^{2i\pi ts}d\left(m_{\xi,U}\right)_{\vert \mathbb{R}^*}(s).$$ Moreover, by Keplerian shear, $$\int_\Omega\overline{f_1}(z)\cdot\left(f_1\circ g_t(z)\right)d\mu(z)\xrightarrow[t\to+\infty]{}\int_{\Omega}\overline{\mathbb{E}_\mu(f_1\vert\mathcal{I})}\mathbb{E}_\mu(f_1\vert \mathcal{I})d\mu=0,$$ because by the Birkhoff ergodic theorem $$\begin{aligned} \label{birkth}\mathbb{E}_\mu(f_1\vert \mathcal{I})\left(\psi_U^{-1}(x,y)\right)&\overset{\left(\mu_\pi\right)_{\vert U}\otimes \lambda-a.s}{=}e^{2i\pi\left<\xi\vert y\right>}\mathds{1}_{\left<\xi\vert v_U\right>\ne 0}(x)\underset{T\to +\infty}{\lim}\frac{1}{T}\text{$\int_{0}^{T} e^{2i\pi t\left<\xi\vert v_U(x)\right>} dt$}=0.\end{aligned}$$ So $\left(m_{\xi,U}\right)_{\vert\mathbb{R}^*}$ is Rajchman. Let us now prove the converse implication.
Let $(U_i)_{i\in I}\in \mathcal{A}^I$ be a countable partition of $M$ modulo $\mu_{\pi}$. Let $$Y :=\text{$\underset{(j,\xi)\in I\times \text{$\mathbb{Z}^d \setminus \left\{0\right\}$}}{\bigcup} \left\{\left(a\circ\pi\right)\cdot e^{2i\pi\left<\xi\vert \pi_2\circ\psi_{U_j}\right>}\in L^2_\mu\left(\text{${\pi}^{-1}$}(U_j)\right) : a\in \mathbb{L}^\infty_\mu(U_j)\right\}$}.$$ Let $(i,j)\in I^2$, $\left(a_1,a_2\right)\in \mathbb{L}^\infty_\mu(U_i)\times \mathbb{L}^\infty_\mu(U_j)$ and $\left(\xi_1,\xi_2\right)\in \left(\mathbb{Z}^d\right)^2.$ Take $$f_1 = (a_1\circ \pi)\cdot e^{2i\pi\left<\xi_1\vert \pi_2\circ\psi_{U_i}\right>}\quad\text{and}\quad f_2 = (a_2\circ \pi)\cdot e^{2i\pi\left<\xi_2\vert \pi_2\circ\psi_{U_j}\right>}.$$ When $i\ne j$, the supports are disjoint, and in this case $$\mathbb{E}_\mu\left(Cov_t\left(f_1,f_2\vert \mathcal{I}\right)\right)=0.$$ The same happens when $\xi_1\ne \xi_2$, by periodicity of the complex exponentials. Finally, when $\xi_1=\xi_2=0$, $\mathbb{E}_\mu\left(Cov_t\left(f_1,f_2\vert \mathcal{I}\right)\right)$ is constant. Suppose without loss of generality that $i=j$ and $\xi_1=\xi_2=\xi\ne 0.$ Set $W^{\xi}_i=U_i\setminus{\left(\left<\xi\vert v_{U_i}\right>=0\right)}$. We have $$\text{$\int_{\Omega} \overline{f_1}\cdot(f_2\circ g_t)\cdot\mathds{1}_{\left<\xi\vert v_{U_i}\right>\ne 0}\circ \pi d\mu$}=\text{$\int_{\mathbb{R}} f(s)e^{2i\pi ts} d(m_{\xi,U_i})_{\vert \mathbb{R}^*}(s)$} \xrightarrow[t\to\pm\infty]{}0$$ for some bounded measurable $f$, because $(m_{\xi,U_i})_{\vert \mathbb{R}^*}$ is Rajchman and thus converges weakly-$*$ according to Lemma [Lemma 1](#convfaible){reference-type="ref" reference="convfaible"}.
Moreover $$\text{$\int_{\Omega} \overline{f_1}\cdot(f_2\circ g_t)\cdot\mathds{1}_{\left<\xi\vert v_{U_i}\right>= 0}\circ \pi d\mu$}=\text{$\int_{\Omega} \overline{f_1}\cdot f_2\cdot\mathds{1}_{\left<\xi\vert v_{U_i}\right>= 0}\circ \pi d\mu$}.$$ By Proposition 2.4 in [@DaTho], we have that $$f_2\cdot\mathds{1}_{\left<\xi\vert v_{U_i}\right>= 0}\circ \pi=\mathbb{E}_\mu\left(f_2\vert \mathcal{I}\right).$$ By totality of $Y$, we obtain that $$\forall \left(f_1,f_2\right)\in \left(\mathbb{L}^2_\mu(\Omega)\right)^2,\mathbb{E}\left(Cov_t\left(f_1,f_2\vert \mathcal{I}\right)\right)\xrightarrow[t\to\pm \infty]{}0.$$ ◻ **Remark 4**. *The result also generalises to infinite-dimensional tori bundles $\mathbb{T}^\mathbb{N}$ with the product topology.* **Remark 5**. *When the Keplerian shear property holds, the Radon-Nikodym-Lebesgue decomposition of the image measures $m_{\xi,U}$ in Theorem [Theorem 4](#mtcont){reference-type="ref" reference="mtcont"} reads as $\left(m_{\xi,U}\right)_{ac}+\left(m_{\xi,U}\right)_{sc}+\left(m_{\xi,U}\right)_{d}$ with $\left(m_{\xi,U}\right)_{d}=\alpha \delta_0$ and $\alpha\geq 0$. Recall that the absolutely continuous part satisfies the Rajchman property. The discrete part is concentrated on $0$ because otherwise a non-trivial periodicity would break the Keplerian shear. The behavior of the singular part $\nu$ is not obvious, since it may be Rajchman or not; see Subsection [2.2.4](#examRajch){reference-type="ref" reference="examRajch"}.* **Remark 6**. *When all the $m_{\xi,U}$'s are of the form $\left(m_{\xi,U}\right)_{ac}+\alpha\delta_0$, Theorem [Theorem 4](#mtcont){reference-type="ref" reference="mtcont"} immediately gives Keplerian shear.* ## $\sigma-$algebra and invariant functions by continuous flow Under the conditions of Theorem [\[thomcis\]](#thomcis){reference-type="ref" reference="thomcis"}, in the regular case, the invariant $\sigma-$algebra is just $\pi^{-1}(\mathcal{B}(M))$ modulo sets of measure zero.
On the functional side, this amounts to saying that a measurable function $f$ is invariant under the flow if and only if it is $\mu-$a.s. independent of the second variable on every chart $U$. However, we will first describe the invariant functions, and then the $\sigma$-algebra of invariant sets, in order to compare with the regular case mentioned above. In the following theorem, we use Fourier series to characterize invariance via an orthogonality property. **Proposition 7** (Orthogonality and Fourier characterization of invariant functions). *Under the conditions of Theorem [Theorem 4](#mtcont){reference-type="ref" reference="mtcont"}, a function $f\in \mathbb{L}^2_\mu(\Omega)$ is invariant under $(g_t)_{t\in\mathbb{R}}$ if and only if for all $U\in\mathcal{A}, \exists (a_k)_{k\in\mathbb{Z}^d}\in \mathbb{L}^2_{\left(\mu_{\pi}\right)_{\vert U}}(U)^{\mathbb{Z}^d},\left(\mu_{\pi}\right)_{\vert U}\otimes\lambda-a.a \,(x,y)\in U\times\mathbb{T}^d$, $$\text{$f_{\left \vert \pi^{-1}(U) \right.}$}(\psi_U^{-1}(x,y))=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(x)\right\}^\perp}a_k(x)e^{2i\pi<k\vert y>}.$$ In other words, for a chart $U\in \mathcal{A}$ and for $x\in U$ fixed, the non-zero Fourier coefficients have indices orthogonal to the vector $v_U(x)$. [\[orthofour\]]{#orthofour label="orthofour"}* *Proof.* Let $f\in \mathbb{L}^2_\mu(\Omega)$. Let us start with the direct implication. Suppose that $$\forall t\in \mathbb{R},f\circ g_t=f\,\mu-a.e.$$ Let $U\in\mathcal{A}$, $t\in\mathbb{R}$ and $(x,y)\in U\times \mathbb{T}^d$.
Since $$f(g_t(\psi_U^{-1}(x,y)))=f(\psi_U^{-1}(x,y)),$$ we have $$f(\psi_U^{-1}(x,y+tv_U(x)))=f(\psi_U^{-1}(x,y)).$$ By Fourier series decomposition, this gives $$\sum_{k\in\mathbb{Z}^d}\widehat{f\circ \psi_U(x,\cdot)}(k)e^{2i\pi<k\vert y+tv_U(x)>}=\sum_{k\in\mathbb{Z}^d}\widehat{f\circ \psi_U(x,\cdot)}(k)e^{2i\pi<k\vert y>}.$$ Hence, by uniqueness of Fourier series coefficients, $$\forall k\in\mathbb{Z}^d,\widehat{f\circ \psi_U(x,\cdot)}(k)\left(e^{2i\pi t<k\vert v_U(x)>}-1\right)=0.$$ When $k$ is such that $\widehat{f\circ \psi_U(x,\cdot)}(k)\ne 0$, we get $\forall t\in \mathbb{R}, t<k\vert v_U(x)>\in \mathbb{Z}$. So $<k\vert v_U(x)>=0$, and then $k\in \left\{v_U(x)\right\}^\perp$. And so $$\left(\mu_{\pi}\right)_{\vert U}\otimes\lambda-a.a \,(x,y)\in U\times\mathbb{T}^d,f(\psi_U^{-1}(x,y))=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(x)\right\}^\perp}\widehat{f\circ \psi_U(x,\cdot)}(k)e^{2i\pi<k\vert y>}.$$ Let us now prove the converse implication. Let $t\in \mathbb{R}$. Suppose that $f$ satisfies, for every chart $U\in \mathcal{A}$, $$\left(\mu_{\pi}\right)_{\vert U}\otimes\lambda-a.a \,(x,y)\in U\times\mathbb{T}^d,f(\psi_U^{-1}(x,y))=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(x)\right\}^\perp}a_k(x)e^{2i\pi<k\vert y>}.$$ Let $z\in \Omega$ and take $U\in\mathcal{A}$ such that $z\in \pi^{-1}(U)$. We have $$f(g_t(z))=f(g_t(\psi_U^{-1}(\psi_U(z))))=f(\psi_U^{-1}(\pi(z),\pi_2\circ \psi_U(z)+tv_U(\pi(z)))).$$ So, by orthogonality, $$\begin{split} f(g_t(z))&=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(\pi(z))\right\}^\perp}a_k(\pi(z))e^{2i\pi<k\vert \pi_2\circ\psi_U(z)+tv_U(\pi(z))>} \\ &=\sum_{k\in\mathbb{Z}^d\cap\left\{v_U(\pi(z))\right\}^\perp}a_k(\pi(z))e^{2i\pi<k\vert \pi_2\circ\psi_U(z)>}. \end{split}$$ So $$f(g_t(z))=f(\psi_U^{-1}(\psi_U(z)))=f(z).$$ ◻ **Remark 7**. *We note that this proof does not use the Rajchman property to identify the invariant $\sigma-$algebra. This version of the proposition does not depend on Keplerian shear, in contrast with the discrete case.* **Remark 8**.
*As a byproduct, the proposition gives an explicit local form for the conditional expectation, with respect to the invariant $\sigma$-algebra, of a function $f\in \mathbb{L}^2_\mu(\Omega)$: for $U\in\mathcal{A}$, $$\left(\mu_{\pi}\right)_{\vert U}\otimes \lambda-a.a\, (x,y)\in U\times \mathbb{T}^d,\left(\mathbb{E}_\mu(f\vert \mathcal{I})\circ \psi_U^{-1}\right)(x,y) =\sum_{k\in \{v_U(x)\}^\perp\cap\mathbb{Z}^d}\widehat{(f\circ \psi_U^{-1})(x,\cdot)}(k)e^{2i\pi<k\vert y>}.$$* **Lemma 3** (Chart invariance). *For any $U\in\mathcal{A}$, $\pi^{-1}(U)$ is $(g_t)_{t\in\mathbb{R}}$-invariant.* *Proof.* Let $U\in\mathcal{A}$ and $t\in\mathbb{R}$. We have $$\mu-a.a \,z\in \pi^{-1}(U),\pi(z)=\pi_1\circ \psi_U(z)$$ and $$\mu-a.a \,z\in \pi^{-1}(U),(\psi_U\circ g_t )(z)=(\psi_U\circ g_t\circ \psi_U^{-1})(\pi_1\circ \psi_U(z),\pi_2\circ \psi_U(z)).$$ So $$\mu-a.a\,z\in \pi^{-1}(U),\pi_1((\psi_U\circ g_t )(z))=\pi(z)$$ by the tori bundle property of $\Omega$. Namely $${\mu-a.a\,z\in \pi^{-1}(U),\pi(g_t(z))=\pi(z)}.$$ ◻ **Proposition 8**. *The invariant $\sigma$-algebra is $$\mathcal{I} =\left\{\text{$\underset{U\in\mathcal{A}}{\bigcup} B_U$} \in\mathcal{B}(\Omega) : (B_U)_{U\in\mathcal{A}}\in \text{${\underset{U\in\mathcal{A}}{\prod}} \psi_U^{-1}\left(\mathcal{I}_U\right)$}\right\},$$ where for $U\in\mathcal{A}$, the local invariant $\sigma-$algebra on $U$ is $$\label{invartrib} \mathcal{I}_U :=\left\{C\in \mathcal{B}(U\times \mathbb{T}^d) : \mu_{{\pi}_{\vert U}}-a.a\, x\in U,\widehat{\mathds{1}_C(x,\cdot)}^{-1}(\mathbb{C}^*)\subset \{v_U(x)\}^\perp\right\}.$$* *Proof.* We prove the direct inclusion. Let $A\in\mathcal{I}$. Let $U\in\mathcal{A}$ and set $C=\psi_U(A\cap\pi^{-1}(U))$. By Proposition [\[orthofour\]](#orthofour){reference-type="ref" reference="orthofour"}, $$\mathds{1}_C(x,y)=\sum_{k\in \{v_U(x)\}^\perp\cap\mathbb{Z}^d}a_k(x)e^{2i\pi<k\vert y>}\,\left(\mu_{\pi}\right)_{\vert U}\otimes \lambda-a.s,$$ which shows that $C\in \mathcal{I}_U$.
Hence $A\cap \pi^{-1}(U)\in \psi_U^{-1}(\mathcal{I}_U)$, and finally $$A=\text{$\underset{U\in\mathcal{A}}{\bigcup} A\cap\pi^{-1}(U)$}.$$ Now, we prove the reverse inclusion. Let $A\in \mathcal{B}(\Omega)$ be such that $\exists (B_U)_{U\in \mathcal{A}}\in \text{${\underset{U\in\mathcal{A}}{\prod}} \psi_U^{-1}\left(\mathcal{I}_U\right)$},A=\text{$\underset{U\in\mathcal{A}}{\bigcup} B_U$}$. Then $$\forall t\in\mathbb{R},g_t^{-1}(A)=\text{$\underset{U\in\mathcal{A}}{\bigcup} g_t^{-1}(B_U)$}.$$ Let $U\in\mathcal{A}$. By bijectivity of $\psi_U$, $\psi_U(B_U)\in\mathcal{I}_U$. Hence $$\left(\mu_{\pi}\right)_{\vert U}\otimes\lambda-a.a\,(x,y)\in U\times\mathbb{T}^d,\mathds{1}_{\psi_U(B_U)}(x,y)=\sum_{k\in \{v_U(x)\}^\perp\cap\mathbb{Z}^d}\widehat{\mathds{1}_{\psi_U(B_U)}(x,\cdot)}(k)e^{2i\pi<k\vert y>},$$ which proves that $A\in\mathcal{I}$ by Proposition [\[orthofour\]](#orthofour){reference-type="ref" reference="orthofour"}. ◻ **Remark 9**. *We can see that the local invariant sets $A\in \mathcal{I}_U$ satisfy* *$$\label{invarloc} \mathds{1}_A(x,y) =\text{${\underset{k\in \mathbb{Z}^d\cap \{v_U(x)\}^\perp}{\sum}} \widehat{\mathds{1}_A(x,\cdot)}(k)e^{2i\pi\left<k\vert y\right>}$}\, \left(\mu_{\pi}\right)_{\vert U}\otimes \lambda-a.s.$$* To show the consistency of this result with earlier works, we have the next corollary. **Corollary 1**. *$\pi^{-1}(\mathcal{B}(M))\subset\mathcal{I}$* *Proof.* Let $A=\pi^{-1}(B)$ for some $B\in \mathcal{B}(M)$. Let $U\in\mathcal{A}$. We have $A\cap \pi^{-1}(U)=\pi^{-1}(B\cap U)$, hence $$\psi_U(A\cap\pi^{-1}(U))=(B\cap U)\times\mathbb{T}^d\in \mathcal{I}_U.$$ Therefore $A\in \mathcal{I}$ by Proposition [\[orthofour\]](#orthofour){reference-type="ref" reference="orthofour"}. ◻ Note that this inclusion is generally strict, as shown in the following example. **Example 4**.
*Consider $g_t =Id_{\mathbb{T}^2}$, $\Omega=\mathbb{T}^2$, $M=\mathbb{T}$.* *Clearly $\mathcal{I}=\mathcal{B}(\mathbb{T}^2)$, while $\pi^{-1}(\mathcal{B}(\mathbb{T}))=\{A\times \mathbb{T}\in \mathcal{B}(\mathbb{T}^2) : A\in \mathcal{B}(\mathbb{T})\}\subsetneq \mathcal{I}$.* ## Convergence speed with the real Rajchman property We are now interested in the speed of convergence of the conditional correlations. The next result shows that even with $\mathcal{C}^\infty$ regularity, the order of convergence is limited by the Rajchman order. **Proposition 9** (Upper bound on the speed). *Let $\xi\ne 0_{\mathbb{Z}^d}$ and $U\in \mathcal{A}$. For all $\gamma>r\left(\left(m_{\xi,U}\right)_{\vert \mathbb{R}^*}\right)$, there exists $(f_1,f_2)\in (\mathcal{C}^\infty(\text{${\pi}^{-1}$}(U))\cap \mathbb{L}^2_\mu(\Omega))^2$ such that the decay of the conditional correlations is not faster than $t^{-\gamma}$, that is, $$\forall C>0,\forall T>0,\exists t>T,\text{$\left\vert \mathbb{E}_\mu(Cov_{t}(f_1\cdot \mathds{1}_U\circ \pi,f_2\vert \mathcal{I}))\right\vert$}>\frac{C}{t^\gamma}.$$* *Proof.* Let $U\in \mathcal{A}$ and $\xi\in \mathbb{Z}^d\setminus\{0_{\mathbb{Z}^d}\}$. Let $\gamma>r\left(\left(m_{\xi,U}\right)_{\vert \mathbb{R}^*}\right)$. Consider $$f : z\in\Omega \mapsto e^{2i\pi <\xi\vert (\pi_2\circ\psi_{U})(z)>}\mathds{1}_{\pi^{-1}(U)}(z)\in \mathcal{C}^\infty(\pi^{-1}(U))\cap\mathbb{L}^2_\mu(\Omega).$$ Then $$\text{$\int_{\Omega} \overline{f}(f\circ g_t) d\mu$}=\text{$\int_{U} e^{2i\pi t<\xi\vert v_{U}(x)>} d\left(\mu_\pi\right)(x)$}=\text{$\int_{\mathbb{R}} e^{2i\pi tz} dm_{\xi,U}(z)$}.$$ By optimality of $r\left(\left(m_{\xi,U}\right)_{\vert \mathbb{R}^*}\right)$, $$\forall C>0,\forall T>0,\exists t>T,\text{$\left\vert \text{$\int_{\mathbb{R}} e^{2i\pi tz} d\left(m_{\xi,U}\right)_{\vert \mathbb{R}^*}$}\right\vert$}>\frac{C}{t^\gamma}.$$ The result follows from the definition of conditional correlations. ◻ **Remark 10**.
*When $\Omega \simeq M\times \mathbb{T}^d$, we can take $f_1,f_2$ globally defined and of class $\mathcal{C}^\infty$ in Proposition [Proposition 9](#roofsp){reference-type="ref" reference="roofsp"}.* ## Speed of decay of conditional correlations for absolutely continuous measures We assume that $\Omega=M\times \mathbb{T}^d$ is the trivial bundle, endowed with an absolutely continuous measure. The speed of decay will depend on the regularity properties of the velocity vector $v$. We will use the stationary phase method [@Swor], which allows us to evaluate oscillatory integrals in an optimal way. The regularity of the velocity vector and the presence of critical points influence the convergence order. Before this study, we recall that under a mild assumption, the set of critical points of the velocity vector is discrete. **Lemma 4** (Isolation of non-degenerate critical points). *Let $M$ be a finite-dimensional $\mathcal{C}^2$ manifold. Let $v\in \mathcal{C}^2(M,\mathbb{R})$. Then every non-degenerate critical point of $v$ is isolated.* *Proof.* Let $x\in M$ be a critical point of $v$. Let $U\in\mathcal{A}$ be such that $x\in U$. Suppose that $x$ is not an isolated point. Then there exists a sequence $(x_n)_{n\in\mathbb{N}}$ of critical points of $v$ converging to $x$. Let $n\in \mathbb{N}$. We have $\nabla(v\circ \psi_U^{-1})_{\psi_U(x_n)}=0$. A first-order expansion gives $$\nabla(v\circ \psi_U^{-1})_{\psi_U(x_n)}=\nabla(v\circ \psi_U^{-1})_{\psi_U(x)}+Hess(v\circ \psi_U^{-1})(\psi_U(x))(\psi_U(x_n)-\psi_U(x),\cdot)+\text{$\left\Vert \psi_U(x_n)-\psi_U(x)\right\Vert$}h(\psi_U(x_n)),$$ where $h(y)\to0$ as $y\to\psi_U(x)$. Thus $$0=Hess(v\circ \psi_U^{-1})(\psi_U(x))(\psi_U(x_n)-\psi_U(x),\cdot)+\text{$\left\Vert \psi_U(x_n)-\psi_U(x)\right\Vert$}h(\psi_U(x_n)).$$ So $\forall n\in\mathbb{N},0=Hess(v\circ \psi_U^{-1})(\psi_U(x))\left(\frac{\psi_U(x_n)-\psi_U(x)}{\text{$\left\Vert \psi_U(x_n)-\psi_U(x)\right\Vert$}},\cdot\right)+h(\psi_U(x_n))$.
By compactness of the sphere in finite dimension, we can extract a subsequence $$\left(\frac{1}{\text{$\left\Vert \psi_U(x_{\sigma(n)})-\psi_U(x)\right\Vert$}}(\psi_U(x_{\sigma(n)})-\psi_U(x))\right)_{n\in\mathbb{N}}$$ converging to some limit $y$ in the unit sphere. Letting $n$ tend to $+\infty$, we get $$Hess(v\circ \psi_U^{-1})(\psi_U(x))(y,\cdot)=0.$$ Since $y\ne 0$, this gives $$det(Hess(v\circ \psi_U^{-1})(\psi_U(x)))=0,$$ which contradicts the non-degeneracy of $x$. ◻ **Theorem 5** (Stationary phase, e.g. [@Swor]). *Let $\varphi\in\mathcal{C}^2(\mathbb{R}^n,\mathbb{R})$ with a unique critical point $x_c$. We suppose that $x_c$ is non-degenerate, in other words $det(Hess(x_c))\ne 0$.* *Then, for any $a\in\mathcal{C}^1_0(\mathbb{R}^n,\mathbb{R})$, $$\forall t >0,\text{$\int_{\mathbb{R}^n} e^{2i\pi t\varphi(x)}a(x) d\lambda(x)$}=\frac{a(x_c)e^{i\frac{\pi}{4}\left(\sum_{\lambda\in \sigma(Hess(x_c))}sgn(\lambda)\right)}}{t^{\frac{n}{2}}\sqrt{\text{$\left\vert det(Hess(x_c))\right\vert$}}}+O_{+\infty}\left(\frac{1}{t^n}\right).$$* In the following, we say that $a$ is a critical point of order $q$ of a function $f$ if for $1\le m\le q$, $f^{(m)}(a)=0$ and $f^{(q+1)}(a)\neq0$. **Lemma 5** (Convergence order with singular points in dimension $(1,d)$ with $d\geq 2$). *Suppose that $dim(M)=1\le d$ and $M$ is a compact manifold of class $\mathcal{C}^\infty$.* *Let $\xi\ne 0_{\mathbb{Z}^d}$ and $\ell\ge2$.* *We suppose that $v$ is of class $\mathcal{C}^{\ell}$, that there exists a unique critical point of order $\ell-1$ for $<\xi\vert v(\cdot)>$, and that all other possible critical points are of strictly smaller order.* *Then $$r\left(\widehat{\lambda}_{<\xi\vert v(\cdot)>}\right)=\frac{1}{\ell}.$$* *Proof.* Since $M$ is compact and the critical points of $<\xi\vert v>$ are isolated (by the same argument as in Lemma 4, applied to the first non-vanishing derivative), their number is finite.
Let $(a_k)_{k\in\text{$\llbracket 1,m\rrbracket$}}$ be the family of critical points of $<\xi\vert v(\cdot)>$, with respective orders $l_k-1\geq 1$: for $1\leq q<l_k,$ $$<\xi\vert v^{(q)}(a_k)>=0\,\text{ and }\, <\xi\vert v^{(l_k)}(a_k)>\ne 0.$$ Let $(U_k)_{k\in \text{$\llbracket 1,m\rrbracket$}}$ be charts such that $$\forall k\in\text{$\llbracket 1,m\rrbracket$},a_k\in U_k.$$ We have $$<\xi\vert v(\varphi_{U_k}^{-1}(x))>=<\xi\vert v(a_k)>+\frac{(x-\varphi_{U_k}(a_k))^{l_k}}{l_k!}<\xi\vert v^{(l_k)}(a_k)>+(x-\varphi_{U_k}(a_k))^{l_k} h(x-\varphi_{U_k}(a_k)),$$ where $h(u)\to 0$ as $u\to 0$. Set, for $k\in \text{$\llbracket 1,m\rrbracket$}$, $w_k : x\in\mathbb{R} \mapsto (x-\varphi_{U_k}(a_k))\left(\frac{<\xi\vert v^{(l_k)}(a_k)>}{l_k!}+h(x-\varphi_{U_k}(a_k))\right)^{\frac{1}{l_k}}$. Then $w_k' : x\in\mathbb{R}\mapsto \left(\frac{<\xi\vert v^{(l_k)}(a_k)>}{l_k!}+h(x-\varphi_{U_k}(a_k))\right)^{\frac{1}{l_k}}+\frac{x-\varphi_{U_k}(a_k)}{l_k}\,h'(x-\varphi_{U_k}(a_k))\left(\frac{<\xi\vert v^{(l_k)}(a_k)>}{l_k!}+h(x-\varphi_{U_k}(a_k))\right)^{\frac{1-l_k}{l_k}}$, so $w_k'(\varphi_{U_k}(a_k))=\left(\frac{<\xi\vert v^{(l_k)}(a_k)>}{l_k!}\right)^{\frac{1}{l_k}}\ne 0$. We thus get a local inverse of $w_k$ on a neighbourhood $W_k$ of $0$. Let $(V_k)_{k\in \text{$\llbracket 1,m+m'\rrbracket$}}$ be a family of charts on $M$ such that $$\forall k\in \text{$\llbracket 1,m\rrbracket$}, V_k = w_k^{-1}(W_k)\subset U_k\quad\text{and}\quad\forall j\in \text{$\llbracket m+1,m+m'\rrbracket$},\forall k\in \text{$\llbracket 1,m\rrbracket$},a_k\not\in V_j.$$ Let $(\psi_k)_{k\in\text{$\llbracket 1,m+m'\rrbracket$}}$ be a partition of unity subordinate to $(V_k)_{k\in\text{$\llbracket 1,m+m'\rrbracket$}}$. We immediately get that for $k\in \text{$\llbracket 1,m\rrbracket$}$, $\psi_k(a_k)=1$.
Using the local inverse, for $k\in \text{$\llbracket 1,m\rrbracket$}$ we get that on $W_k$, $$\forall x\in W_k, <\xi\vert v(\varphi_{U_k}^{-1}(w_k^{-1}(x)))>=<\xi\vert v(a_k)>+x^{l_k}.$$ So, for $k\in\text{$\llbracket 1,m\rrbracket$}$, $$\text{$\int_{\varphi_{U_k}^{-1}(w_k^{-1}(W_k))} e^{2i\pi t<\xi\vert v(x)>}\psi_k(x) d\widehat{\lambda}(x)$}=e^{2i\pi t<\xi\vert v(a_k)>}\text{$\int_{W_k} e^{2i\pi tx^{l_k}}\check{\psi_k}(x) d\lambda(x)$}$$ with $$\check{\psi_k} : x\in W_k\mapsto \psi_k\left(\varphi_{U_k}^{-1}(w_k^{-1}(x))\right)J(\varphi_{U_k}^{-1})(w_k^{-1}(x))J(w_k^{-1})(x).$$ So $$\begin{array}{ll} \text{$\int_{\varphi_{U_k}^{-1}(w_k^{-1}(W_k))} e^{2i\pi t<\xi\vert v(x)>}\psi_k(x) d\widehat{\lambda}(x)$}&=e^{2i\pi t<\xi\vert v(a_k)>}\frac{1}{t^{\frac{1}{l_k}}}\text{$\int_{\mathbb{R}} e^{2i\pi x^{l_k}}\check{\psi_k}\left(\frac{x}{t^{\frac{1}{l_k}}}\right) d\lambda(x)$}, \end{array}$$ extending $\check{\psi_k}$ by zero outside $W_k$. And $$\text{$\int_{\mathbb{R}} e^{2i\pi x^{l_k}}\check{\psi_k}\left(\frac{x}{t^{\frac{1}{l_k}}}\right) d\lambda(x)$}=\left(\text{$\int_{\mathbb{R}} \frac{(l_k-1)(e^{2i\pi x^{l_k}}-1)}{x^{l_k}}\check{\psi_k}\left(\frac{x}{t^{\frac{1}{l_k}}}\right) d\lambda(x)$}-\frac{1}{t^{\frac{1}{l_k}}}\text{$\int_{\mathbb{R}} \frac{(e^{2i\pi x^{l_k}}-1)}{x^{l_k-1}}\check{\psi_k}'\left(\frac{x}{t^{\frac{1}{l_k}}}\right) d\lambda(x)$}\right).$$ But $$\forall \alpha>0,\exists C>0,\forall x\in \mathbb{R},\text{$\left\vert \check{\psi_k}'(x)\right\vert$}\text{$\left\vert 1+x^\alpha\right\vert$}\leq C.$$ And $$\exists C_2>0,\forall x\in \mathbb{R},\text{$\left\vert \frac{(e^{2i\pi x^{l_k}}-1)}{x^{l_k-1}}\right\vert$}\text{$\left\vert 1+x^{l_k-1}\right\vert$}\leq C_2.$$ And $$\exists C_3>0,\forall x\in \mathbb{R},\forall t\in \mathbb{R}^*,\text{$\left\vert \frac{(l_k-1)(e^{2i\pi x^{l_k}}-1)}{x^{l_k}}\check{\psi_k}\left(\frac{x}{t^{\frac{1}{l_k}}}\right)\right\vert$}\text{$\left\vert 1+x^{l_k-1}\right\vert$}\leq C_3\text{$\left\Vert \psi_k\right\Vert$}_\infty.$$ By
the Lebesgue dominated convergence theorem, $$\text{$\int_{\mathbb{R}} e^{2i\pi x^{l_k}}\check{\psi_k}\left(\frac{x}{t^{\frac{1}{l_k}}}\right) d\lambda(x)$}\xrightarrow[t\to\pm\infty]{}\check{\psi_k}(0)\text{$\int_{\mathbb{R}} \frac{(l_k-1)(e^{2i\pi x^{l_k}}-1)}{x^{l_k}} d\lambda(x)$}.$$ For $k\in \text{$\llbracket m+1,m+m'\rrbracket$},$ by Green's formula, we get $$\text{$\int_{\varphi_{U_k}^{-1}(V_k)} e^{2i\pi t<\xi\vert v(x)>}\psi_k(x) d\widehat{\lambda}(x)$}\in O_{\pm\infty}\left(\frac{1}{t}\right).$$ Thus, $$\text{$\int_{M} e^{2i\pi t<\xi\vert v(x)>} d\widehat{\lambda}(x)$}=\sum_{k=1}^m\frac{1}{t^{\frac{1}{l_k}}}\left(J_{\varphi_{U_k}^{-1}}\left(\varphi_{U_k}(a_k)\right)\left(\frac{l_k!}{<\xi\vert v^{(l_k)}(a_k)>}\right)^{\frac{1}{l_k}}I_{l_k}+o_{\pm\infty}(1)\right)+O_{\pm\infty}\left(\frac{1}{t}\right)$$ with $I_{l} :=\text{$\int_{\mathbb{R}} \frac{e^{2i\pi x^{l}}-1}{x^{l}} d\lambda(x)$}.$ A simple analysis shows that $I_l$ does not vanish[^2]. By hypothesis, there is exactly one critical point $a$ of maximal order, corresponding to $l_j=\ell$, hence $$\text{$\int_{M} e^{2i\pi t<\xi\vert v(x)>} d\widehat{\lambda}(x)$}= \frac{1}{t^{\frac{1}{\ell}}}\left(J_{\varphi_{U_j}^{-1}}\left(\varphi_{U_j}(a)\right)\left(\frac{\ell!}{<\xi\vert v^{(\ell)}(a)>}\right)^{\frac{1}{\ell}}I_{\ell}+o_{\pm\infty}(1)\right).$$ Then $$r\left(\widehat{\lambda}_{<\xi\vert v(\cdot)>}\right)=\frac{1}{\ell}.$$ ◻ We then get the next proposition. **Proposition 10**. *Under the hypotheses of Lemma [Lemma 5](#Convdim){reference-type="ref" reference="Convdim"}, the order of decay of correlations $\gamma$ for $f_1,f_2\in\mathcal{C}^\infty(\Omega)$ satisfies $$\gamma \leq \frac{1}{\ell}.$$* *Proof.* In the proof of Lemma [Lemma 5](#Convdim){reference-type="ref" reference="Convdim"}, one can take $f_1 : (x,y)\in\Omega\mapsto e^{2i\pi <\xi\vert y>}$ and $f_2=f_1\cdot(\psi_j\circ \pi_1)$, with $j$ the index of the chart containing the critical point $a$ of maximal order. ◻ We can now treat the general case.
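Before moving on, the $t^{-\frac{1}{\ell}}$ rate of Lemma [Lemma 5](#Convdim){reference-type="ref" reference="Convdim"} can be checked numerically in the simplest non-degenerate case $\ell=2$. The sketch below is ours: it replaces the compactly supported test function $\check{\psi_k}$ of the proof by a Gaussian weight, chosen so that the oscillatory integral has a closed form to compare against.

```python
# Numerical sketch (assumption: a Gaussian weight instead of the compactly
# supported test function used in the proof, so that a closed form exists).
# With one non-degenerate critical point at x = 0 (l = 2),
#   I(t) = int exp(2 i pi t x^2) exp(-pi x^2) dx = (1 - 2 i t)^(-1/2),
# hence |I(t)| = (1 + 4 t^2)^(-1/4) ~ (2 t)^(-1/2): the t^(-1/l) decay.
import cmath
import math

def oscillatory_integral(t, half_width=6.0, n=200000):
    """Midpoint Riemann sum of int exp(2 i pi t x^2 - pi x^2) dx on [-W, W]."""
    h = 2.0 * half_width / n
    total = 0j
    for k in range(n):
        x = -half_width + (k + 0.5) * h
        total += cmath.exp((2j * math.pi * t - math.pi) * x * x)
    return h * total

for t in [4.0, 16.0, 64.0]:
    exact = abs((1 - 2j * t) ** -0.5)        # = (1 + 4 t^2)^(-1/4)
    assert abs(abs(oscillatory_integral(t)) - exact) < 1e-3
# quadrupling t roughly halves |I(t)|, as the rate t^(-1/2) predicts
ratio = abs(oscillatory_integral(64.0)) / abs(oscillatory_integral(16.0))
assert 0.45 < ratio < 0.55
```

For $\ell>2$ the same experiment with phase $x^{\ell}$ shows the slower $t^{-\frac{1}{\ell}}$ decay, though no closed form is available there.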
The following lemma is obtained from Theorem 7.5, p. 226 in [@SiDM]. **Lemma 6** (Convergence order around a singular point for analytic functions). *We will use the notations and results of [@SiDM].* *We place ourselves in $\mathbb{R}^n$.* *Let $\xi\ne 0_{\mathbb{Z}^d}$.* *Let $v$ be such that $<\xi\vert v>$ is analytic, with $0$ a critical point of multiplicity $m\in\mathbb{N}^*$.* *Then there exist $\alpha\in\mathbb{Q}^*_+$, $j\in\mathbb{N}$ and a compact neighbourhood $V$ of $0$ such that* *$$\forall \varphi\in \mathcal{D}_V(\Omega),\exists C\in\mathbb{R}^*,\frac{\text{$\int_{\mathbb{R}^n} e^{2i\pi t<\xi\vert v(x)>}\varphi(x) d\lambda(x)$}}{\frac{C(\ln(t))^j}{t^\alpha}}\xrightarrow[t\to+\infty]{}1.$$* **Definition 18** (Logarithmic convergence order). *Consider a probability measure $\nu$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$.* *We call logarithmic convergence order of $\nu$ the value* *$$rl(\nu) :=\inf\left\{\beta >0 : \exists C>0,\forall t>0,\text{$\left\vert \widehat{\nu}(t)\right\vert$}\leq\frac{C\text{$\left\vert \ln(t)\right\vert$}^\beta}{t^{r(\nu)}}\right\}.$$* The next result may be obtained by a partition of unity and the previous lemma. **Proposition 11** (Convergence order with singular points in dimension $(n,d)$ with $d\geq 2$). *Suppose that $M$ is analytic. Let $\xi\ne 0_{\mathbb{Z}^d}$. Suppose that $v$ is such that $<\xi\vert v>$ is analytic. We have $$\exists (j,\alpha)\in\mathbb{N}\times \mathbb{Q}_+^*,\left(r\left(\lambda_{<\xi\vert v(\cdot)>}\right),rl\left(\lambda_{<\xi\vert v(\cdot)>}\right)\right)=(\alpha,j).$$* **Corollary 2**. *Let $v$ be a $\mathcal{C}^2$ function. If there exists only one critical point $a\in M$ and it satisfies $$\exists\xi\ne 0_{\mathbb{Z}^d}, \left(\nabla(<\xi\vert v(\cdot)>)(a)=0\,\text{ and }\, det(Hess<\xi\vert v(\cdot)>(a))\ne 0\right),$$ then $r(\widehat{\lambda}_{<\xi\vert v(\cdot)>})= \frac{n}{2}.$* *Proof.* Let $\xi\ne 0_{\mathbb{Z}^d}$.
Applying the stationary phase theorem on the respective charts $(U,\varphi_U)$ of $\mathcal{A}$ containing at most one critical point $a\in U$ and test functions $\psi$ on $U$, we get $$\begin{split} \text{$\int_{M} e^{2i\pi t<\xi\vert v(x)>}\psi(x) d\widehat{\lambda}(x)$}&=\text{$\int_{\mathbb{R}^n} e^{2i\pi t<\xi\vert v\left(\varphi_U^{-1}(x)\right)>}\psi\left(\varphi_U^{-1}(x)\right)J(\varphi_U^{-1})(x) d\lambda(x)$}\\ &=\frac{\psi\left(\varphi_U^{-1}(a)\right)e^{i\frac{\pi}{4}\left(\sum_{\lambda\in \sigma(Hess(a))}sgn(\lambda)\right)}}{t^{\frac{n}{2}}\sqrt{\text{$\left\vert det(Hess(a))\right\vert$}}}+O_{+\infty}\left(\frac{1}{t^n}\right). \end{split}$$ By a partition of unity we obtain $${\exists C\in\mathbb{R},\text{$\int_{M} e^{2i\pi t<\xi\vert v(x)>} d\widehat{\lambda}(x)$}=\frac{C}{t^{\frac{n}{2}}}+O_{+\infty}\left(\frac{1}{t^n}\right)}.$$ ◻ **Remark 11**. *Note that the result is consistent with the case $n=1$.* We recall the definition of Damien Thomine [@DaTho]. **Definition 19** (Anisotropic Sobolev space on $\mathbb{R}\times \mathbb{T}$). *Let $s\geq 0$. Let $h : x\in \mathbb{R}\mapsto \sqrt{1+x^2}$. Let $$H^{s,0}(\mathbb{R}\times\mathbb{T}) := \left\{ f\in \mathbb{L}^2(\mathbb{R}\times\mathbb{T}) : \sum_{k\in\mathbb{Z}}\text{$\int_{\mathbb{R}} \text{$\left\vert \widehat{f}(x,k)\right\vert$}^2h^{2s}(x) d\lambda(x)$}<+\infty\right\}.$$* **Proposition 12**. *Consider $(\mathbb{R}\times\mathbb{T},\mu\otimes \lambda,(g_t)_{t\in \mathbb{R}})$ such that $r=r(\mu)>0$ and for $(x,y)\in \mathbb{R}\times\mathbb{T},g_t(x,y)=(x,y+tx)$. Let $s>\frac{1}{2}$. Let for $\varepsilon\in\left]0,\frac{1}{2s}\right[,q_\varepsilon := \min{\left\{s(1-\varepsilon),r(\mu)\right\}}$.
Then, we get for all $\varepsilon \in\left]0,\frac{1}{2s}\right[$, there exists $C_\varepsilon>0$ such that $\forall t>0$, $$\text{$\left\vert \mathbb{E}_{\mu\otimes\lambda}(Cov_t(f_1,f_2\vert \mathcal{I}))\right\vert$}\leq \frac{C_\varepsilon\text{$\left\Vert f_1\right\Vert$}_{H^{s,0}(\mathbb{R}\times\mathbb{T})}\text{$\left\Vert f_2\right\Vert$}_{H^{s,0}(\mathbb{R}\times\mathbb{T})}}{t^{q_\varepsilon}}$$ and if $\gamma>0$ denotes the convergence order on $H^{s,0}(\mathbb{R}\times \mathbb{T})$, we get $\min{\left\{s-\frac{1}{2},r(\mu)\right\}}\leq \gamma$. Moreover if $supp(\mu)$ is compact then $\gamma\leq r(\mu)$.* *Proof.* Let $s>\frac{1}{2}$ and $(f_1,f_2)\in (H^{s,0}(\mathbb{R}\times\mathbb{T}))^2$. Then, by the Cauchy-Schwarz inequality and Parseval's identity, $$\begin{array}{ll}\text{$\left\vert \mathbb{E}_{\mu\otimes\lambda}(Cov_t(f_1,f_2\vert \mathcal{I}))\right\vert$} &=\text{$\left\vert \sum_{k\in\mathbb{Z}^*}\text{$\int_{\mathbb{R}^2} \overline{\widehat{f_1}}(x,k)\widehat{f_2}(y,k)\widehat{\mu}(kt-(x-y)) d\lambda(x,y)$}\right\vert$}\\ &\leq S(t)\text{$\left\Vert f_1\right\Vert$}_{H^{s,0}(\mathbb{R}\times\mathbb{T})}\text{$\left\Vert f_2\right\Vert$}_{H^{s,0}(\mathbb{R}\times\mathbb{T})}\end{array}$$ with $$S(t) := \sup_{k\in \mathbb{Z}^*} C_\mu\left(\text{$\int_{\mathbb{R}^2} \frac{1}{(1+\text{$\left\vert kt-(x-y)\right\vert$})^{2r(\mu)}h^{2s}(x)h^{2s}(y)} d\lambda(x,y)$}\right)^\frac{1}{2}.$$ We may reduce to $k=1$ by considering $kt$ in place of $t$. Let $D_1(t) := \left\{(x,y)\in \mathbb{R}^2 : \text{$\left\vert t-(x-y)\right\vert$}\leq \frac{t}{2}\ and\ \text{$\left\vert y\right\vert$}\leq \frac{t}{4}\right\}$, $D_2(t) := \left\{(x,y)\in\mathbb{R}^2 : \text{$\left\vert t-(x-y)\right\vert$}\leq \frac{t}{2}\ and\ \text{$\left\vert y\right\vert$}>\frac{t}{4}\right\}$ and $D_3(t) :=\left\{(x,y)\in\mathbb{R}^2 : \text{$\left\vert t-(x-y)\right\vert$}> \frac{t}{2}\right\}$.
Then for all $\varepsilon\in \left]0,\frac{1}{2s}\right[$ $$\begin{array}{ll}\text{$\int_{\mathbb{R}^2} \frac{1}{(1+\text{$\left\vert t-(x-y)\right\vert$})^{2r(\mu)}h^{2s}(x)h^{2s}(y)} d\lambda(x,y)$}&=\text{$\int_{D_1(t)} \frac{1}{(1+\text{$\left\vert t-(x-y)\right\vert$})^{2r(\mu)}h^{2s}(x)h^{2s}(y)} d\lambda(x,y)$}\\ &+\text{$\int_{D_2(t)} \frac{1}{(1+\text{$\left\vert t-(x-y)\right\vert$})^{2r(\mu)}h^{2s}(x)h^{2s}(y)} d\lambda(x,y)$}\\ &+\text{$\int_{D_3(t)} \frac{1}{(1+\text{$\left\vert t-(x-y)\right\vert$})^{2r(\mu)}h^{2s}(x)h^{2s}(y)} d\lambda(x,y)$}\\ &\leq \frac{8}{t^{2s(1-\varepsilon)}}\text{$\int_{\mathbb{R}^2} h^{-2s}(x)h^{-2s\varepsilon}(y) d\lambda(x,y)$}\\ &+\frac{2}{t^{2r(\mu)}}\text{$\int_{\mathbb{R}^2} h^{-2s}(x)h^{-2s}(y) d\lambda(x,y)$}\end{array}$$ Then, $$\forall \varepsilon \in\left]0,\frac{1}{2s}\right[,S(t)\leq \frac{8C_{\mu}}{t^{q_\varepsilon}}\left(\text{$\int_{\mathbb{R}} h^{-2s}(x) d\lambda(x)$}\text{$\int_{\mathbb{R}} h^{-2s\varepsilon}(x) d\lambda(x)$}\right)^{\frac{1}{2}}$$ with $q_\varepsilon = \min{\left\{s(1-\varepsilon),r(\mu)\right\}}$. Finally, $$\forall \varepsilon\in\left]0,\frac{1}{2s}\right[,\exists C_\varepsilon>0,\forall t>0, \text{$\left\vert \mathbb{E}_{\mu\otimes\lambda}(Cov_t(f_1,f_2\vert \mathcal{I}))\right\vert$}\leq \frac{C_\varepsilon}{t^{q_\varepsilon}}.$$ And for $\gamma>0$ the convergence order on $H^{s,0}(\mathbb{R}\times \mathbb{T})$, we get $\min{\left\{s-\frac{1}{2},r(\mu)\right\}}\leq \gamma$ and when $supp(\mu)$ is compact, $\gamma\leq r(\mu)$. ◻ **Remark 12**. *When $(f_1,f_2)\in (H^{s,0}(\mathbb{R}\times \mathbb{T}))^2$ with $s\geq \frac{r(\mu)+1}{2}$, we get that $\gamma=r(\mu)$ as optimal order of convergence when $supp(\mu)$ is compact.* **Remark 13**. 
*For $s>\frac{1}{2}$, for every integer $k>s+\frac{3}{2}$, $\mathcal{C}^k_c(\mathbb{R}\times\mathbb{T})\subset H^{s,0}(\mathbb{R}\times \mathbb{T})$, and then, for $k>s+\frac{3}{2}$ the convergence order $\gamma>0$ for $\mathcal{C}^k_c(\mathbb{R}\times\mathbb{T})$ satisfies $\gamma \in \left[\min{\left\{s-\frac{1}{2},r(\mu)\right\}},r(\mu)\right]$ when $supp(\mu)$ is compact.* **Remark 14**. *When $\mu=\lambda_v$ with $v\in \mathcal{C}^\infty$ having a unique critical point of order $l-1$ with $l\geq 3$, then $\gamma=r(\mu)=\frac{1}{l}$.* # Flow on compact Lie group bundle In this part, we extend the notion of Keplerian shear to a more general framework, allowing us to cover cases beyond the torus bundles of the previous sections. The main case we want to cover is that of Lie group bundles whose fibre is a non-abelian Lie group. It is precisely this non-commutativity that marks the sharpest break between the work carried out previously in the article and what follows. In this section, however, the aim is to come back to a case analogous to the classical one seen earlier in the article, in order to show that the notion introduced in this part generalizes the one introduced previously. We keep compactness, the Lie group structure and connectedness, so as to retain the properties of the fibre group that allow the analogy with the previous cases (which used the torus) and also make it easy to define a continuous flow/$\mathbb{R}$-action. ## Tools used in connected-compact Hausdorff Lie group bundle **Definition 20**. *Let $(M,\mathcal{A})$ be a Lindelöf manifold of dimension $n\in\mathbb{N}^*$ of class $\mathcal{C}^1$.* *Let $(\Omega,\mu)$ be a Borel measure space, $G$ a connected compact Lie group and $\pi$ a continuous map from $\Omega$ onto $M$.* *$\Omega$ is a connected compact Lie group bundle if* 1.
*locally, for each chart $U$ of $\mathcal{A}$, there is a homeomorphism $\psi_U : \pi^{-1}(U)\to U\times G$* 2. *for all $U$ in $\mathcal{A},\pi_1\circ \psi_U=\pi_{\vert \pi^{-1}(U)}$* To define the continuous flow, we are going to use the exponential map defined on the Lie algebra of the group. Here the connectedness and the compactness of the group provide good properties for the exponential, allowing us to work in a setting similar to the previous one. We will use a local coordinate system, namely coordinates of the second kind, that makes it possible to fibre the group $G$ like a torus. Let us consider a basis $(X_j)_{j\in {\text{$\llbracket 1,d\rrbracket$}}}$ of $Lie(G)$ with $d\in \mathbb{N}^*$ the dimension of $Lie(G)$. We can get a coordinate system in a neighbourhood of the unit $1_G$. **Proposition 13** (Coordinate systems of the second kind, e.g. [@NBour]). *There exists a neighborhood $V$ of the neutral element $1_G$ such that every element $a\in V$ can be written in a unique way as follows $$a= \prod_{j=1}^d \exp\left(x_jX_j\right)\,\text{ with }\,(x_j)_{j\in \text{$\llbracket 1,d\rrbracket$}}\in \mathbb{R}^{\text{$\llbracket 1,d\rrbracket$}}$$ (See Figure [\[coordlie\]](#coordlie){reference-type="ref" reference="coordlie"})* Now, we explain why we assume that the group is compact Hausdorff and connected. Compactness ensures the surjectivity of the exponential over a certain domain. More precisely, we use the following result. **Theorem 6**. *The compactness of $G$ implies the surjectivity of the exponential on the connected component of the neutral element of $G$.* By considering the neighborhood of the neutral element on which points are written in a unique way in the coordinate system of the second kind, we can cover the whole group $G$ with the right translates of this neighborhood.
By compactness, we can extract a finite subcover, which allows us to approach more easily the properties already encountered in the case of torus bundles. Connectedness complements the previous property: the only connected component is necessarily that of the neutral element, so that the exponential becomes surjective onto the whole group $G$. ## Main properties of the flow on the Lie group This step enables us to identify the possible flows on the Lie group. This will give us a more simplified approach to the dynamical system under study. **Theorem 7**. *Every flow $(g_t)_{t\in\mathbb{R}}$ on $G$ satisfying $g_t(yz)=g_t(y)z$ for all $y,z\in G$ is of the form $g_t : x \mapsto \exp(tv)x$ for some $v\in Lie(G)$.* We can then continue by defining the notion of compatible flow. **Definition 21** (Compatible flow). *Let $(g_t)_{t\in \mathbb{R}}$ be a flow on $(\Omega,\mu)$.* *$(g_t)_{t\in \mathbb{R}}$ is a compatible flow iff:* 1. *there exists for all $U\in \mathcal{A}$ a measurable function $v_U : U\to Lie(G)$* 2. *$\forall U\in \mathcal{A},\forall (x,y)\in U\times G, \psi_U\circ g_t\circ\psi_U^{-1}(x,y)=(x,\exp(tv_U(x))y)$* **Definition 22** (Compatible measure). *Consider the notations previously established for the connected compact Lie group bundle.* *Let $U\in \mathcal{A}$. Set $\mu^U := \mu_{\psi_U}$ and $\mu' := \mu_{\pi}$. The measure $\mu$ on $\Omega$ is compatible iff* *$$\text{$\mu^U_{\left \vert \pi^{-1}(U) \right.}$}=\text{$\mu'_{\left \vert U \right.}$}\otimes \mathcal{H}\quad\text{with $\mathcal{H}$ the Haar measure on $G$}$$* The next theorem, written in paragraph 2 of the article by Antoine Delzant [@Adelz], allows us to come back to a case analogous to the torus. **Theorem 8** (Isomorphism with torus). *Any compact, connected and abelian Lie group is isomorphic to a torus.* Then, we use the following theorem, which ensures that closed subgroups of $G$ carry a Lie group structure, in order to get tori. **Theorem 9** (Lie subgroup).
*Any closed subgroup $H$ of $G$ admits a Lie group structure, and $Lie(H)$ is a linear subspace of $Lie(G)$.* **Definition 23** (Flow orbit on the Lie group). *For the flow previously defined, we call orbital group of a direction $v\in Lie(G)$ the set $$H_v :=\overline{\left\{\exp(tv)\in G : t\in \mathbb{R}\right\}}$$* *$H_v$ is an abelian Lie group, compact and connected, and so isomorphic to a torus.* *Let $d_v$ be the dimension of this torus.* *Denote by $\chi_v : H_v\to \mathbb{T}^{d_{v}}$ the isomorphism.* With the flow orbit, we can make a semi-fibration of the Lie group in order to use the torus properties as in the previous theorem in the tori bundle case. We consider $v\in Lie(G)$. Consider then the orbital group in the direction $v$. $H_v$ has a Lie group structure by the previous theorem. We consider a basis $(X_j^v)_{j\in \text{$\llbracket 1,d_v\rrbracket$}}$ of $Lie(H_v)$ that we complete to make it a basis $(X_j^v)_{j\in \text{$\llbracket 1,d\rrbracket$}}$ of $Lie(G)$. Consider the neighborhood $V$ of the local coordinate system of the second kind associated with $(X_j^v)_{j\in \text{$\llbracket 1,d\rrbracket$}}$. We cover $G$ with a finite family of elements $(g_k)_{k\in \text{$\llbracket 1,l\rrbracket$}}\in G^{\text{$\llbracket 1,l\rrbracket$}}$ such that $$\label{exploc} G=\bigcup_{k=1}^lVg_k$$ Let $(W_k)_{k\in \text{$\llbracket 1,l\rrbracket$}}$ be such that $W_1=Vg_1$ and $$\label{partition}\forall k\in \text{$\llbracket 2,l\rrbracket$},W_k=Vg_k\setminus \left(\text{$\underset{j\in\text{$\llbracket 1,k-1\rrbracket$}}{\bigcup} Vg_j$}\right).$$ Let $\varphi : G^2\to G$ be defined by $\varphi(x,y)=xy$. By uniqueness of the coordinate system of the second kind in $V$ we can define $$\begin{array}{lccl} \psi_k^v: & Vg_k&\to & Lie(H_v)\times G\\ &x&\mapsto& \left(\text{${\underset{j\in\text{$\llbracket 1,d_v\rrbracket$}}{\sum}} x_j^{(v,k)}X_j^v$},\text{${\underset{j\in \text{$\llbracket d_v+1,d\rrbracket$}}{\prod}} \exp(x_j^{(v,k)}X_j^v)$}g_k\right).
\end{array}$$ Since the orbits are abelian subgroups we have $$\forall k\in \text{$\llbracket 1,l\rrbracket$},\forall x\in Vg_k,\forall v\in Lie(G),\text{${\underset{j\in\text{$\llbracket 1,d_v\rrbracket$}}{\prod}} \exp(x_j^{(v,k)}X_j^v)$}=\exp\left(\text{${\underset{j\in\text{$\llbracket 1,d_v\rrbracket$}}{\sum}} x_j^{(v,k)}X_j^v$}\right)$$ Let $\phi_t^v: Lie(G)\times G\to Lie(G)\times G$ be defined by $\phi_t^v(\theta,y)=(\theta+tv,y)$ and $\eta : Lie(G)\times G\to G^2$ be defined by $\eta(\theta,y)=\left(\exp(\theta),y\right)$. We immediately get $$\forall k\in\text{$\llbracket 1,l\rrbracket$},\forall x\in Vg_k,\left(\varphi\circ\eta\circ\phi_t^v\circ\psi_k^v\right)(x)=\exp(tv)x.$$ We finally set $\check{\varphi} := \varphi\circ \eta$. ## Keplerian shear on compact Lie group bundle We can now study the Keplerian shear problem. Set, for $U\in\mathcal{A}$, $w_U : x\in U\mapsto \chi_{v_U(x)}\left(\exp(v_U(x))\right)$. Set, for nonzero $\xi\in\mathbb{Z}^d$ and $U\in\mathcal{A}$, $\nu^{(\xi,U)}:=\text{$\mu'_{\left \vert U \right.}$}_{\left<\xi\vert w_U(\cdot)\right>}$ **Theorem 10**. *The dynamical system $(\Omega,\mu,(g_t)_{t\in\mathbb{R}})$ exhibits Keplerian shear iff for all nonzero $\xi\in \mathbb{Z}^d$ and for all $U\in\mathcal{A}$, $\text{$\nu^{(\xi,U)}_{\left \vert \mathbb{R}^* \right.}$}$ is a Rajchman measure.* The plan of the proof will initially be to come back to a situation very similar to that of the torus bundle by relying on the maximal torus. To do this, we are going to use the orbit of the vector $v_U(x)$ in the Lie algebra in order to identify an abelian subgroup which is isomorphic to a torus. The second step will be, for a fixed chart $U\in \mathcal{A}$ and $x\in U$, to work on the sections of the Lie group $G$ given by the translations of the orbit of $v_U(x)$.
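The orbital group of Definition 23 can be made concrete numerically. Below is a small sketch (assuming NumPy; the axis vector `a` and the helper names `hat`, `exp_so3` are illustrative choices): for $G=SO_3(\mathbb{R})$ and $v=\hat{a}$ the skew-symmetric matrix of $a\in\mathbb{R}^3$, the one-parameter subgroup $\{\exp(tv) : t\in\mathbb{R}\}$ consists of the rotations about the axis $a$; it is already closed, abelian and isomorphic to a circle, so $d_v=1$ and the orbit closes up with period $2\pi/\left\Vert a\right\Vert$.

```python
import numpy as np

def hat(a):
    # skew-symmetric matrix of a, an element of Lie(SO(3))
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def exp_so3(a, t):
    # Rodrigues' formula for exp(t * hat(a)): rotation by angle t*||a|| about a
    theta = t * np.linalg.norm(a)
    K = hat(a / np.linalg.norm(a))
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

a = np.array([0.3, -1.2, 0.5])          # illustrative direction in Lie(G)
T = 2 * np.pi / np.linalg.norm(a)       # period: H_v is a circle, d_v = 1
print(np.allclose(exp_so3(a, 1.7), exp_so3(a, 1.7 + T)))  # orbit closes up
h1, h2 = exp_so3(a, 0.4), exp_so3(a, 2.9)
print(np.allclose(h1 @ h2, h2 @ h1))    # the orbital group is abelian
```

For a generic direction in a group with a higher-dimensional maximal torus, the closure $H_v$ would instead be a torus of dimension $d_v>1$; the circle case above is the simplest instance of the semi-fibration used below.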
Once immersed in this configuration provided by Lemma [Lemma 7](#lie2tore){reference-type="ref" reference="lie2tore"}, we will be able to carry out the same reasoning as in the torus case in order to arrive at the Rajchman property of $\text{$\nu^{(\xi,U)}_{\left \vert \mathbb{R}^* \right.}$}$. In order to clarify and highlight the configuration by the orbits, we will show an equality allowing us to realize this link. **Lemma 7**. *Let $U\in\mathcal{A}$ and $(f_1,f_2)\in \left(\mathbb{L}^2_{\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}}\left(U\times G\right)\right)^2$. We define for $t\in\mathbb{R}$ $$b_t(f_1,f_2):= \text{$\int_{U\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}(x,y)$}$$* *Then $$\label{bt} b_t(f_1,f_2)=\text{${\underset{k\in\text{$\llbracket 1,l\rrbracket$}}{\sum}} \text{$\int_{U} \left(\text{$\int_{\mathbb{T}^{d_{v_U(x)}}\times W_k} \overline{\check{f_1}}(x,z+tw_U(x),r)\check{f_2}(x,z,r) dm'_{(x,k)}$}(z,r)\right) d\text{$\mu'_{\left \vert U \right.}$}(x)$}$}$$ with $$\check{f_1} : (x,z,r)\mapsto f_1(x,\varphi(\chi_{v_U(x)}^{-1}\left(z\right),r)) \text{ and } \check{f_2} : (x,z,r)\mapsto f_2(x,\varphi(\chi_{v_U(x)}^{-1}\left(z\right),r)).$$ and $m'_{(x,k)}=\left(\chi_{v_U(x)}\circ\exp\right)_{*}m_{(x,k)}$ is the push-forward measure obtained by the transfer theorem, where $m_{(x,k)}=\left(\psi_k^{v_U(x)}\right)_{*} \mathcal{H}$ is the associated push-forward measure.* *Proof.* Let non-zero $\xi\in \mathbb{Z}^d$. Let $U\in\mathcal{A}$ and $(f_1,f_2)\in \left(\mathbb{L}^2_{\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}}\left(U\times G\right)\right)^2$.
We have the following expansion $$\begin{aligned} b_t(f_1,f_2)&:= \text{$\int_{U\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}(x,y)$}\\ &=\text{$\int_{U} \text{$\int_{G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\mathcal{H}(y)$} d\text{$\mu'_{\left \vert U \right.}$}(x)$}\\ &=\sum_{k=1}^l\text{$\int_{U} \left(\text{$\int_{W_k} \overline{f_1}(x,\check{\varphi}\circ\phi_t^{v_U(x)}\circ\psi_k^{v_U(x)}(y))f_2(x,\check{\varphi}\circ\psi_k^{v_U(x)}(y)) d\mathcal{H}(y)$}\right) d\text{$\mu'_{\left \vert U \right.}$}(x)$}\\ &=\sum_{k=1}^l\text{$\int_{U} \left(\text{$\int_{Lie(H_{v_U(x)})\times W_k} \overline{f_1}(x,\check{\varphi}\circ\phi_t^{v_U(x)}(\theta,r))f_2(x,\check{\varphi}(\theta,r)) dm_{(x,k)}$}(\theta,r)\right) d\text{$\mu'_{\left \vert U \right.}$}(x)$},\end{aligned}$$ In this part, we have established the first step by placing ourselves in the Lie algebra of the orbit of $v_U(x)$, which is the group denoted $H_{v_U(x)}$ and which is abelian, so $$\check{\varphi}\circ\phi_t^{v_U(x)}(\theta,r)=\check{\varphi}(\theta+tv_U(x),r)=\varphi(\exp(\theta+tv_U(x)),r)=\varphi(\exp(tv_U(x))\exp(\theta),r) .$$ We are now at the second step, placing ourselves in the sections of the Lie group $G$ by $H_{v_U(x)}$. So $$\begin{aligned} &\quad b_t(f_1,f_2)\\ &=\sum_{k=1}^l\text{$\int_{U} \left(\text{$\int_{Lie(H_{v_U(x)})\times W_k} \overline{f_1}\left(x,\varphi\left(\exp(tv_U(x))\chi_{v_U(x)}^{-1}\left(\chi_{v_U(x)}\left(e^\theta\right)\right),r\right)\right)f_2\left(x,\varphi\left(e^\theta,r\right)\right) dm_{(x,k)}$}(\theta,r)\right) d\text{$\mu'_{\left \vert U \right.}$}(x)$}\\ &=\sum_{k=1}^l\text{$\int_{U} \left(\text{$\int_{\mathbb{T}^{d_{v_U(x)}}\times W_k} \overline{f_1}(x,\varphi(\exp(tv_U(x))\chi_{v_U(x)}^{-1}\left(z\right),r))f_2(x,\varphi(\chi_{v_U(x)}^{-1}\left(z\right),r)) dm'_{(x,k)}$}(z,r)\right) d\text{$\mu'_{\left \vert U \right.}$}(x)$}\end{aligned}$$ We pass here to the torus isomorphic to $H_{v_U(x)}$.
Since $$z+\chi_{v_U(x)}\left(\exp(tv_U(x))\right)=z+t\chi_{v_U(x)}\left(\exp(v_U(x))\right)=z+tw_U(x)$$ we get $$\begin{aligned} &\quad b_t(f_1,f_2)\\ &=\sum_{k=1}^l\text{$\int_{U} \left(\text{$\int_{\mathbb{T}^{d_{v_U(x)}}\times W_k} \overline{f_1}(x,\varphi(\chi_{v_U(x)}^{-1}\left(z+tw_U(x)\right),r))f_2(x,\varphi(\chi_{v_U(x)}^{-1}\left(z\right),r)) dm'_{(x,k)}$}(z,r)\right) d\text{$\mu'_{\left \vert U \right.}$}(x)$}\end{aligned}$$ This proves [\[bt\]](#bt){reference-type="eqref" reference="bt"}. ◻ We are then in the case of a torus foliation by the variable $x$, which will then allow us to apply the same reasoning as in the case of the torus. *Proof of Theorem [Theorem 10](#Lieth){reference-type="ref" reference="Lieth"}.* Let us begin with the direct implication. Suppose that the dynamical system $(\Omega,\mu,(g_t)_{t\in\mathbb{R}})$ exhibits Keplerian shear. Let $U\in\mathcal{A}$ and let $$\xi\in\mathbb{Z}^d\setminus\{0_{\mathbb{Z}^d}\}.$$ Let $(f_1,f_2)\in \left(\mathbb{L}^2_{\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}}\left(U\times G\right)\right)^2$ be such that $$\check{f_1} : (x,z,r) \mapsto e^{2i\pi\left<\xi\vert z\right>}$$ and $$\check{f_2} : (x,z,r) \mapsto \mathds{1}_{\left<\xi\vert w_U(\cdot)\right>\ne 0}(x)e^{2i\pi\left<\xi\vert z\right>}$$ taking the notations of Lemma [Lemma 7](#lie2tore){reference-type="ref" reference="lie2tore"}.
Then $$\text{$\int_{\mathbb{R}} e^{-2i\pi tz} d\nu^{\xi,U}(z)$}=\text{$\int_{U} e^{-2i\pi t\left<\xi\vert w_U(x)\right>}g(x) d\mu'(x)$}$$ with $$g : x\in U\mapsto \mathds{1}_{\left<\xi\vert w_U(\cdot)\right>\ne 0}(x).$$ Thus, by Lemma [Lemma 7](#lie2tore){reference-type="ref" reference="lie2tore"} $$\text{$\int_{U} e^{-2i\pi t\left<\xi\vert w_U(x)\right>}g(x) d\text{$\mu'_{\left \vert U \right.}$}(x)$}=\text{$\int_{U\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}(x,y)$}.$$ By the Keplerian shear property the right-hand side converges and, as in ([\[birkth\]](#birkth){reference-type="ref" reference="birkth"}), the limit is zero. So $${\text{$\int_{\mathbb{R}} e^{-2i\pi tz} d\nu^{\xi,U}(z)$}\xrightarrow[t\to\pm\infty]{}0}.$$ Let us continue with the converse implication. Let $d$ be the dimension of the maximal torus of $G$. Consider the level sets of torus dimensions (see Definition [Definition 23](#maxtor){reference-type="ref" reference="maxtor"}) $$Y_m := \left\{x\in U : d_{v_U(x)}=m\right\}, m\in\text{$\llbracket 1,d\rrbracket$}.$$ Consider $(m,k)\in\text{$\llbracket 1,d\rrbracket$}\times \text{$\llbracket 1,l\rrbracket$}$ and define the measure $\omega_m^k$ on $U\times \mathbb{T}^m\times W_k$ by $$\forall A\in \mathcal{B}(U\times \mathbb{T}^m\times W_k), \omega_m^k(A)=\text{$\int_{U} \text{$\int_{\mathbb{T}^m\times W_k} \mathds{1}_A(x,y,z) dm'_{(x,k)}(y,z)$} d\mu'_{\vert Y_m}(x)$}.$$ Let $U\in\mathcal{A}$ and $(\xi_1,\xi_2)\in \mathbb{Z}^m\times \mathbb{Z}^{m'}$. Let $(f_1,f_2)\in \left(\mathbb{L}^2_{\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}}\left(U\times G\right)\right)^2$ be such that $$\check{f_1} : (x,z,r) \mapsto a_1(x,r)e^{2i\pi\left<\xi_1\vert z\right>}\mathds{1}_{Y_m}(x)$$ and $$\check{f_2} : (x,z,r) \mapsto a_2(x,r)e^{2i\pi\left<\xi_2\vert z\right>}\mathds{1}_{Y_{m'}}(x)$$ with $a_1$ and $a_2$ square-integrable functions in the appropriate Lebesgue space.
If $m\neq m'$, the scalar product of $f_1$ and $f_2$ is zero. Therefore we assume that $m=m'$. So, by Lemma [Lemma 7](#lie2tore){reference-type="ref" reference="lie2tore"} $$\label{expfunction}\text{$\int_{U\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}(x,y)$}=\text{$\int_{U} e^{-2i\pi t\left<\xi_1\vert w_U(x)\right>}g(x) d\text{$\mu'_{\left \vert U \right.}$}(x)$}$$ with $$g : x\in U\mapsto \mathds{1}_{Y_m}(x)\sum_{k=1}^l\left(\text{$\int_{W_k} \overline{a_1}(x,r)a_2(x,r)e^{2i\pi\left<\xi_2-\xi_1\vert z \right>} dm'_{(x,k)}(z,r)$}\right).$$ By hypothesis, when $\xi_1\ne 0$, $$\text{$\int_{\mathbb{R}} e^{-2i\pi tz} d\nu^{\xi_1,U}(z)$}\xrightarrow[t\to\pm\infty]{}0.$$ And then, by Lemma [Lemma 1](#convfaible){reference-type="ref" reference="convfaible"}, we have that $\exp(2i\pi t\cdot)$ converges weakly-$*$ in $\mathbb{L}^{\infty}_{\nu^{\xi_1,U}}(\mathbb{R})$ to $0$. Thus, $$\text{$\int_{U} e^{-2i\pi t\left<\xi_1\vert w_U(x)\right>}g(x) d\text{$\mu'_{\left \vert U \right.}$}(x)$}\xrightarrow[t\to\pm\infty]{}\text{$\int_{\{<\xi_1\vert w_U(\cdot)>=0\}\times G} \overline{f_1}(x,y)f_2(x,y) d\mu'_{\vert U}\otimes \mathcal{H}(x,y)$}$$ Then, remembering ([\[expfunction\]](#expfunction){reference-type="ref" reference="expfunction"}), we get $$\text{$\int_{U\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}(x,y)$}\xrightarrow[t\to\pm\infty]{}\text{$\int_{\{<\xi_1\vert w_U(\cdot)>=0\}\times G} \overline{f_1}(x,y)f_2(x,y) d\mu'_{\vert U}\otimes \mathcal{H}(x,y)$}.$$ By totality, via the Fourier series expansion, this gives that for all functions $f_1,f_2\in\mathbb{L}^2_{\mu'_{\vert U}\otimes\mathcal{H}}(Y_m\times G)$ $$\label{denselie} \text{$\int_{Y_m\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U
\right.}$}\otimes\mathcal{H}(x,y)$}\xrightarrow[t\to\pm\infty]{}\text{$\int_{Y_m\times G} \overline{E_{\mu'_{\vert U}\otimes\mathcal{H}}(f_1\vert \mathcal{I}_U)}E_{\mu'_{\vert U}\otimes\mathcal{H}}(f_2\vert \mathcal{I}_U) d\mu'_{\vert U}\otimes\mathcal{H}$}.$$ Let $(f_1,f_2)\in \left(\mathbb{L}^2_{\mu'_{\vert U}\otimes\mathcal{H}}(U\times G)\right)^2$. Finally we apply ([\[denselie\]](#denselie){reference-type="ref" reference="denselie"}) on each term of the following sum $$\begin{aligned} &\quad\text{$\int_{U\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}(x,y)$}\\ &=\sum_{m=1}^d{\text{$\int_{Y_m\times G} \overline{f_1}(x,\exp\left(tv_U(x)\right)y)f_2(x,y) d\text{$\mu'_{\left \vert U \right.}$}\otimes\mathcal{H}(x,y)$}}.\end{aligned}$$ ◻ Now, the aim here is to give an analogue of the fundamental Theorem 3.3 in [@DaTho], which guarantees Keplerian shear for flows with regular velocities and negligible sets of critical points. **Theorem 11**. *If for all $U\in \mathcal{A}$, $w_U$ is of class $\mathcal{C}^1$, $\mu'_{\vert U}\ll\lambda$ and $$\mu\left(\text{$\underset{\xi\in\text{$\mathbb{Z}^d \setminus \left\{0_{\mathbb{Z}^d}\right\}$}}{\bigcup} \left\{x\in U : d\left<\xi\vert w_U(x)\right>=0\right\}$}\right)=0,$$ then the dynamical system $(\Omega,\mu,(g_t)_{t\in\mathbb{R}})$ has Keplerian shear.* *Proof.* Let $U\in\mathcal{A}$. Let us show that the measures $\nu^{(\xi,U)}$ have the Rajchman property. We assume $v_U$ is of class $\mathcal{C}^1$. So $w_U : x\in U\mapsto\chi_{v_U(x)}\left(\exp(v_U(x))\right)$ is $\mathcal{C}^1$.
Let $$\xi\in\text{$\mathbb{Z}^d \setminus \{0_{\mathbb{Z}^d}\}$}.$$ By the Radon-Nikodym theorem $$\text{$\int_{U} e^{-2i\pi t\left<\xi\vert w_U(x)\right>} d\text{$\mu'_{\left \vert U \right.}$}(x)$}=\text{$\int_{U} e^{-2i\pi t\left<\xi\vert w_U(x)\right>}\frac{d\mu'_{\vert U}}{d\lambda}(x) d\lambda(x)$}.$$ But $$\mu\left(\text{$\underset{\xi\in\text{$\mathbb{Z}^d \setminus \left\{0_{\mathbb{Z}^d}\right\}$}}{\bigcup} \left\{x\in U : d\left<\xi\vert w_U(x)\right>=0\right\}$}\right)=0.$$ So $$\forall a\in\mathbb{R},\mu\left(\text{$\underset{\xi\in\text{$\mathbb{Z}^d \setminus \left\{0_{\mathbb{Z}^d}\right\}$}}{\bigcup} \left\{x\in U : \left<\xi\vert w_U(x)\right>=a\right\}$}\right)=0.$$ And so $$\text{$\int_{U} e^{-2i\pi t\left<\xi\vert w_U(x)\right>}\frac{d\mu'_{\vert U}}{d\lambda}(x) d\lambda(x)$}=\text{$\int_{\mathbb{R}^d} e^{-2i\pi t<\xi\vert z>}\sum_{j\in L}\frac{\mathds{1}_{w_U(U)\cap V_j}(z)\,\frac{d\mu'_{\vert U}}{d\lambda}\left(\left(\left(w_U\right)_{\vert w_U^{-1}(V_j)}\right)^{-1}(z)\right)}{\text{$\left\vert det\left(J_{w_U}\left(\left(\left(w_U\right)_{\vert w_U^{-1}(V_j)}\right)^{-1}(z)\right)\right)\right\vert$}} d\lambda(z)$}.$$ And so, by the Riemann-Lebesgue lemma, $$\text{$\int_{U} e^{-2i\pi t\left<\xi\vert w_U(x)\right>}\frac{d\mu'_{\vert U}}{d\lambda}(x) d\lambda(x)$}\xrightarrow[t\to+\infty]{}0.$$ And then, $${\text{$\int_{U} e^{-2i\pi t\left<\xi\vert w_U(x)\right>} d\text{$\mu'_{\left \vert U \right.}$}(x)$}\xrightarrow[t\to+\infty]{}0}.$$ ◻ **Remark 15**. *We can note that we could use the same arguments as Damien Thomine [@DaTho] in the proof of his theorem analogous to this one, i.e., we could use the normal forms of submersions by relying on the regularity properties of $w_U$. Thanks to the Rajchman property, we were able to prove the above theorem more expeditiously.* ## Main examples **Example 5** (Torus). *The tori form an abelian example; a torus is again a connected compact Lie group.
The best-known torus is the torus of dimension $2$, denoted $\mathbb{T}^2$, which we can easily represent in $\mathbb{R}^3$. The first example in the abelian framework is the one used in the article by Damien Thomine [@DaTho] with $\mathbb{T}^2$, a measurable velocity $v(x)$ and, for a single torus fiber, the flow $$g_t : (x,y)\in \mathbb{T}^2\mapsto (x,y+tv(x)).$$ To know whether this dynamical system exhibits Keplerian shear, it is necessary and sufficient that $\mu_v$ be a Rajchman measure. We can then study the speeds of convergence.* **Example 6** (Spinorial groups). *The spin group for $n\geq 2$, $Spin(n)$, is a compact and connected Lie group, which allows us to use it as an example.* **Example 7** (Orthogonal groups). *The special orthogonal groups $SO_n(\mathbb{R})$ for $n\geq 2$ are connected compact Lie groups. The most usual example is that of $SO_3(\mathbb{R})$, which is a compact, connected Lie group but not an abelian one. We consider as flow $g_t : M\in SO_3(\mathbb{R})\mapsto \exp(tA)M$ with $A$ an antisymmetric matrix, the fundamental property ensuring that the exponential stays in the orthogonal group $SO_3(\mathbb{R})$. As before, we can foliate it by tori as in the general case of bundles in compact connected Lie groups treated previously. The other classic examples are in the abelian framework which, according to the work of Antoine Delzant [@Adelz], amounts to torus bundles as before. As before, we consider as flow $g_t : M\in Spin(3)\mapsto \exp(tA)M$ with $A$ an antisymmetric matrix.* **Example 8** (Special unitary group). *The special unitary group $SU(n)$ for $n\geq 2$ is also a connected compact Lie group, so we can use the previous theorems. Moreover, these Lie groups are even simply connected. They are non-abelian, which means that they are not tori. The most common example is $SU(2)$, which is diffeomorphic to the hypersphere $\mathbb{S}^3$ of $\mathbb{R}^4$.
The flow on it, for $A\in \mathcal{M}_2(\mathbb{C})$ skew-Hermitian ($A^*=-A$) and traceless, will be defined by $$g_t : M\in SU(2) \mapsto \exp(tA)M.$$* # Keplerian shear, Rajchman property and Diophantine approximation In this section we present an application of Keplerian shear (with speed estimates) to dynamical Borel-Cantelli lemmas. The latter are linked to Diophantine approximation, and we discuss the relations between the Rajchman property and Diophantine properties. Let $(\Omega,\mu)$ be a probability space. The main application consists in adapting the dynamical Borel-Cantelli theorem to the non-ergodic case. We give an adaptation of Sprindzuk's theorem inspired by the thesis of Victoria Xing [@VicXing]. Keplerian shear will ensure that for almost every $x$ there exist infinitely many integers $n$ satisfying $T^n(x)\in A_n$. **Theorem 12** (Variable Sprindzuk). *Let $(f_k)_{k\in\mathbb{N}^*}$ and $(g_k)_{k\in\mathbb{N}^*}$ be sequences of non-negative measurable functions. Let $(\varphi_k)_{k\in\mathbb{N}^*}$ be a real sequence such that $\forall k\in\mathbb{N}^*,0\leq g_k\leq \varphi_k\leq 1$ $\mu$-a.s. Let $\delta>1$ and $C>0$. Suppose that for all $(m,n)\in\left(\mathbb{N}^*\right)^2$ satisfying $n\geq m$, $$\text{$\int_{\Omega} \left(\sum_{k=m}^nf_k(x)-g_k(x)\right)^2 d\mu(x)$}\leq C\left(\sum_{k=m}^n\varphi_k\right)^\delta.$$ Then $$\forall n\in\mathbb{N}, \forall \varepsilon>0,\sum_{k=1}^n f_k = \sum_{k=1}^n g_k+O\left(\phi(n)^{\frac{\delta}{2}}\left(\log\left(\phi(n)\right)\right)^{1+\varepsilon}\right)\, \mu-a.e.$$ with $\phi(n)= \sum_{k=1}^n\varphi_k$.* For the proof, we draw on the proof of the original theorem in the book by Sprindzuk [@Sprdz], page 45, formula (68). *Proof.* Let $\delta>1$ and let us denote, for $I\subset \mathbb{N}^*,\phi(I)=\sum_{k\in I}\varphi_k$.
By the fact that $\forall k\in\mathbb{N}^*,0\leq \varphi_k\leq 1$, we have that $$\exists (n_v)_{v\in\mathbb{N}^*}\in{\mathbb{N}^*}^{\mathbb{N}^*},\forall v\in\mathbb{N}^*,\phi(n_v)<v\leq \phi(n_v+1).$$ We have also $$\forall v\in\mathbb{N}^*,n_{v+1}\geq n_v+1.$$ And $$\forall v\in\mathbb{N}^*,\phi(n_v)+1<v+1\leq \phi(n_{v+1}+1).$$ So $$\forall (u,v)\in(\mathbb{N}^*)^2,\left(u<v\Longrightarrow \text{$\llbracket n_u+1,n_v\rrbracket$}\ne\emptyset\right).$$ Let $$\begin{array}{lcl} \sigma : &\mathcal{P}(\mathbb{N}^*)&\to \mathcal{P}(\mathbb{N}^*)\\ &I&\mapsto \left\{n_w\in\mathbb{N} : w\in I\right\}\end{array}.$$ Let for $r\in\mathbb{N}^*, s\in\text{$\llbracket 0,r\rrbracket$}$ sets of parts $$J_{r,s} :=\left\{\text{$\llbracket i2^s+1,(i+1)2^s\rrbracket$}\in\mathcal{P}(\mathbb{N}^*) : i\in\text{$\llbracket 0,2^{r-s}-1\rrbracket$}\right\}.$$ Let $r\in\mathbb{N}^*$ and $s\in\text{$\llbracket 0,r\rrbracket$}.$ We notice that $$\text{$\underset{I\in J_{r,s}}{\bigcup} \sigma(I)$}=\text{$\llbracket 1,n_{2^r}\rrbracket$}.$$ Let $i\in \text{$\llbracket 0,2^{r-s}-1\rrbracket$}.$ But $$\phi\left(n_{i2^s}\right)<i2^s\leq \phi\left(n_{i2^s}+1\right)\leq \phi\left(n_{i2^s}\right)+1.$$ And then $$i2^s-1\leq \phi\left(n_{i2^s}\right)<i2^s$$ And $$\phi\left(n_{(i+1)2^s}\right)<(i+1)2^s\leq \phi\left(n_{(i+1)2^s}+1\right)\leq \phi\left(n_{(i+1)2^s}\right)+1.$$ And $$(i+1)2^s-1\leq\phi\left(n_{(i+1)2^s}\right)<(i+1)2^s.$$ And then, $$\phi\left(\sigma\left(\text{$\llbracket n_{i2^s}+1,n_{(i+1)2^s}\rrbracket$}\right)\right)=\phi\left(n_{(i+1)2^s}\right)-\phi\left(n_{i2^s}\right)\leq 2^s+1\leq 2^{s+1}.$$ So $$\sum_{I\in J_{r,s}}\left(\phi(\sigma(I))\right)^\delta\leq 2^\delta 2^{r+s(\delta-1)}.$$ Denote $$J_r :=\text{$\underset{s\in \text{$\llbracket 0,r\rrbracket$}}{\bigcup} J_{r,s}$}.$$ So $$\sum_{I\in J_{r}}\left(\phi(\sigma(I))\right)^\delta=\sum_{s=0}^r\sum_{I\in J_{r,s}}\left(\phi(\sigma(I))\right)^\delta\leq \frac{2^\delta 2^{r\delta}}{2^{\delta-1}-1}.$$ Let $$\begin{array}{lcl} h 
: & \mathbb{N}^*\times \Omega&\to \mathbb{R}\\ & (l,x)&\mapsto \sum_{I\in J_l}\left(\sum_{k\in I}f_k(x)-g_k(x)\right)^2\end{array}.$$ Then $$\text{$\int_{\Omega} h(r,x) d\mu(x)$}\leq C\left(\sum_{I\in J_{r}}\left(\phi(\sigma(I))\right)^\delta\right)\leq \frac{2^\delta C2^{r\delta}}{2^{\delta-1}-1}.$$ Let $\varepsilon>0$. By the Markov inequality, $$\mu\left(h(r,X)\geq \frac{2^\delta Cr^{1+\varepsilon}2^{r\delta}}{2^{\delta-1}-1}\right)\leq r^{-(1+\varepsilon)}.$$ By the Borel-Cantelli lemma, $$\mu-a.a \,x\in \Omega,\exists r_x\in\mathbb{N}^*,\forall r\geq r_x,h(r,x)\leq \frac{2^\delta Cr^{1+\varepsilon}2^{r\delta}}{2^{\delta-1}-1}.$$ For $v\in\mathbb{N}^*$, the interval $\text{$\llbracket 1,v\rrbracket$}$ can be partitioned into a finite number $r_v$ of intervals taken from the collections $J_{r,s}$, with $$r_v\leq \lfloor{\log_2(v)}\rfloor+1.$$ Let $J(v)$ denote the set of these intervals. We then get $$\sum_{k=1}^{n_v}(f_k-g_k)=\sum_{I\in J(v)}\left(\sum_{k\in I}f_k-g_k\right).$$ By the Cauchy-Schwarz inequality, $$\mu-a.a\,x\in \Omega,\left(\sum_{k=1}^{n_v}f_k-g_k\right)^2\leq r_v\sum_{I\in J(v)}\left(\sum_{k\in I}f_k-g_k\right)^2=r_vh\left(r_v,x\right).$$ So $$\mu-a.a\,x\in\Omega,\exists r_x\in\mathbb{N}^*,\forall r\geq r_x,\left(\sum_{k=1}^{n_v}f_k(x)-g_k(x)\right)^2\leq \frac{2^\delta Cr^{2+\varepsilon}2^{r\delta}}{2^{\delta-1}-1}\leq \frac{2^\delta C\log_2^{2+\varepsilon}(v)v^{\delta}}{2^{\delta-1}-1},$$ and therefore $$\mu-a.a\,x\in\Omega,\exists r_x\in\mathbb{N}^*,\forall r\geq r_x,\text{$\left\vert \sum_{k=1}^{n_v}f_k(x)-g_k(x)\right\vert$} \leq\frac{2^\frac{\delta}{2} C^{\frac{1}{2}}\log_2^{1+\frac{\varepsilon}{2}}(v)v^{\frac{\delta}{2}}}{\left(2^{\delta-1}-1\right)^{\frac{1}{2}}}.$$ Let $n\in\mathbb{N}^*.$ Then $$\exists v\in\mathbb{N}^*,n_v\leq n\leq n_{v+1},$$ so that $$\mu-a.a\,x\in\Omega,\sum_{k=1}^{n_v}f_k(x)\leq \sum_{k=1}^{n}f_k(x)\leq \sum_{k=1}^{n_{v+1}}f_k(x).$$ We also have $$\mu-a.a\,x\in \Omega,0\leq \sum_{k=n_v+1}^{n_{v+1}}g_k(x)\leq \phi\left(n_{v+1}\right)-\phi\left(n_v\right).$$ In addition,
$$v-1\leq\phi\left(n_v\right)<v\leq \phi\left(n_v+1\right)\leq\phi\left(n_{v+1}\right)<v+1,$$ so that $$0<\phi\left(n_{v+1}\right)-\phi\left(n_v\right)< 2,$$ and hence $$\mu-a.a\,x\in \Omega, 0\leq \sum_{k=n_v+1}^{n_{v+1}}g_k(x)< 2.$$ Therefore $$\mu-a.a\,x\in\Omega,\sum_{k=1}^{n_v}g_k(x)+O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n_v)\right)\left(\phi(n_v)\right)^{\frac{\delta}{2}}\right)\leq \sum_{k=1}^nf_k(x)\leq\sum_{k=1}^{n_{v+1}}g_k(x)+O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n_{v+1})\right)\left(\phi(n_{v+1})\right)^{\frac{\delta}{2}}\right).$$ We then find $$\mu-a.a\,x\in\Omega,-2+O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n_v)\right)\left(\phi(n_v)\right)^{\frac{\delta}{2}}\right)\leq \sum_{k=1}^n (f_k(x)-g_k(x))\leq 2+O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n_{v+1})\right)\left(\phi(n_{v+1})\right)^{\frac{\delta}{2}}\right).$$ One easily checks that $$-2+O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n_v)\right)\left(\phi(n_v)\right)^{\frac{\delta}{2}}\right)=O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n)\right)\left(\phi(n)\right)^{\frac{\delta}{2}}\right),$$ and, on the other hand, $$2+O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n_{v+1})\right)\left(\phi(n_{v+1})\right)^{\frac{\delta}{2}}\right)=O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n)\right)\left(\phi(n)\right)^{\frac{\delta}{2}}\right).$$ Finally, $${\mu-a.a\,x\in\Omega,\sum_{k=1}^n f_k(x)=\sum_{k=1}^{n}g_k(x)+O\left(\log_2^{1+\frac{\varepsilon}{2}}\left(\phi(n)\right)\left(\phi(n)\right)^{\frac{\delta}{2}}\right)}.$$ ◻ **Theorem 13** (Dynamical Borel-Cantelli by keplerian shear). *Suppose that $(\Omega,T,\mu)$ is a discrete measure-preserving dynamical system with keplerian shear.
Let $(A_n)$ be a sequence of measurable sets, and write $S_{M,N} = \sum_{k=M}^N\mathds{1}_{A_k}\circ T^k$.* *We suppose that [\[1.a\]](#1.a){reference-type="ref" reference="1.a"} or [\[1.b\]](#1.b){reference-type="ref" reference="1.b"} holds, together with [\[2\]](#2){reference-type="ref" reference="2"}.* 1. 1. *[\[1.a\]]{#1.a label="1.a"} There exist $\gamma>0,C>0$ such that for all $(j,k)\in\mathbb{N}^2$ satisfying $k\ne j$, $\text{$\left\vert \mathbb{E}\left(Cov_{k-j}\left(\mathds{1}_{A_j},\mathds{1}_{A_k}\vert\mathcal{I}\right)\right)\right\vert$}\leq \frac{C}{\text{$\left\vert k-j\right\vert$}^\gamma}$* 2. *[\[1.b\]]{#1.b label="1.b"} There exist $\gamma>0,D>0$ and a Banach space $\mathcal{B}\subset \mathbb{L}^2_\mu(\Omega)$ such that: $\exists C>0,\forall (f,g)\in \mathcal{B}^2,\forall n\in\mathbb{N},\text{$\left\vert \mathbb{E}(Cov_n(f,g\vert\mathcal{I}))\right\vert$}\leq \frac{C\text{$\left\Vert f\right\Vert$}_\mathcal{B}\text{$\left\Vert g\right\Vert$}_\mathcal{B}}{n^{\gamma}}$ and $\forall j\in\mathbb{N},\text{$\left\Vert \mathds{1}_{A_j}\right\Vert$}_\mathcal{B}\leq D$* 2. *[\[2\]]{#2 label="2"} There exists $\beta>\max(\frac12,1-\frac\gamma2)$ such that $\liminf_{N\to\infty}\left({\frac{\mathbb{E}(S_N\vert \mathcal{I})}{N^\beta}}\right)>0\ \mu-a.s$.* *Then $$\frac{S_N}{\mathbb{E}(S_N\vert \mathcal{I})}\overset{\mu-a.s}{\xrightarrow[N\to+\infty]{}}1.$$* *Proof.* Without loss of generality, we assume that $\gamma\in(0,1)$. Note that [\[1.b\]](#1.b){reference-type="ref" reference="1.b"} implies [\[1.a\]](#1.a){reference-type="ref" reference="1.a"}, so [\[1.a\]](#1.a){reference-type="ref" reference="1.a"} holds in both cases. Expanding the square as a double sum and using the series-integral comparison criterion, we get, for all $(M,N)\in\mathbb{N}^2$ with
$M<N$, $$\begin{split} \text{$\left\Vert S_{M,N}-\mathbb{E}\left(S_{M,N}\vert \mathcal{I}\right)\right\Vert$}_{2}^2&\leq \mathbb{E}(S_{M,N})+2C\sum_{l=1}^{N-M+1} \frac{(N-M+1)}{l^\gamma}\\ &\le \mathbb{E}(S_{M,N}) + 2C (N-M+1)^{2-\gamma}. \end{split}$$ Setting $\delta = 2-\gamma$, and noticing that $\mathbb{E}(S_{M,N})\le N-M+1$, we get that there exists $D>0$ satisfying, for all $M<N$, $$\text{$\left\Vert S_{M,N}-\mathbb{E}\left(S_{M,N}\vert \mathcal{I}\right)\right\Vert$}_{2}^2\leq D(N-M+1)^{\delta}.$$ By the variable Sprindzuk theorem [Theorem 12](#Sprinvar){reference-type="ref" reference="Sprinvar"} with $\varphi_k=1$ for all $k$, we get $$S_N-\mathbb{E}(S_N\vert \mathcal{I})\overset{a.s}{=} O(N^{\frac{\delta}{2}}\log^{1+\varepsilon}(N)).$$ Then $$\frac{S_N}{\mathbb{E}(S_N\vert \mathcal{I})} =1+O\left(\frac{N^{\frac\delta2}\log^{1+\varepsilon}(N)}{N^\beta}\right) \overset{a.s}{\xrightarrow[N\to+\infty]{}}1.$$ ◻ **Example 9**. *Consider the system $(\mathbb{T}^2,\lambda\otimes\lambda,T)$ with $$Mat(T)=\begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}.$$ Consider a sequence $(b_n)\in\mathbb{T}^\mathbb{N}$ and let $$A_n=\mathbb{T}\times \left[b_n-\frac{1}{n^p},b_n+\frac{1}{n^p}\right]$$ with $0<p<\frac{1}{2}.$ We get that for all $N\in\mathbb{N}^*$, $$\mathbb{E}(S_N\vert \mathcal{I})=\mathbb{E}(S_N)=2\sum_{k=1}^N\frac{1}{k^p}\sim \frac{2}{1-p}N^{1-p}.$$ Let $s:=\frac12$ and $n\in\mathbb{N}^*$.
By direct computation of the Fourier coefficients of $\mathds{1}_{A_n}$, we see that $$\mathds{1}_{A_n}\in H^{s,0}(\mathbb{T}^2)$$ since $$\text{$\left\Vert \mathds{1}_{A_n}\right\Vert$}_{H^{s,0}(\mathbb{T}^2)}^2\leq \frac{1}{\pi^2}\zeta(2)+\frac{2}{n^p}\leq 3.$$ By the decorrelation estimates in [@DaTho], for all $(k,j)\in\mathbb{N}^2$, $$\mathbb{E}\left(Cov_{k-j}\left(\mathds{1}_{A_k},\mathds{1}_{A_j}\vert\mathcal{I}\right)\right)\leq\frac{4^s 3^2}{\text{$\left\vert k-j\right\vert$}^{2s}}.$$ The hypotheses are thus satisfied; applying Theorem [Theorem 13](#thm:dbcbyks){reference-type="ref" reference="thm:dbcbyks"} with $\gamma=1$ and $\beta\in(\frac12,1-p)$, we get $$S_N\sim \frac{2}{1-p}N^{1-p}\quad{\mu-a.s}.$$* **Definition 24** ($s$-diophantine number). *Let $s>0.$* *A number $x\in\mathbb{R}$ is $s$-diophantine if* *$$\exists C>0,\forall (p,q)\in \mathbb{Z}\times\mathbb{N}^*,\text{$\left\vert qx-p\right\vert$}\geq \frac{C}{q^s} \label{diophante}.$$ Let $\mathcal{D}_i(s)$ denote the set of $s$-diophantine numbers.* **Definition 25** (Liouville number). *A number $x\in \mathbb{R}$ is Liouville if and only if it is not $s$-diophantine for any $s>0$, that is, if it satisfies the assertion [\[diophante\]](#diophante){reference-type="eqref" reference="diophante"} for no $s>0$. In other words, $$x\in \mathbb{L} := \mathbb{R}\setminus \left(\text{$\underset{s>0}{\bigcup} \mathcal{D}_i(s)$}\right).$$ Equivalently, $Dio(x)=+\infty$.* For an arbitrary Rajchman measure, we obtain as an application a proposition based on Theorem 4.2, page 551, of Athreya [@Atry], together with the theorem of Kurzweil on diophantine numbers ([@Kurz]; see also [@Tsen], Theorem 1.3, page 3). To state this theorem, we first recall the Shrinking Target Property (STP) of a dynamical system, and its monotone version (MSTP). **Definition 26** (Shrinking Target Properties).
*A discrete dynamical system $(\Omega,\mu,T)$ has the STP if, for every sequence of balls $(B_n)_{n\in\mathbb{N}}$ of radii $r_n>0$ tending to $0$ and satisfying $$\sum_{n\in\mathbb{N}}\mu(B_n)=+\infty,\text{ we have } \mu\left(\overline{\underset{n\to+\infty}{\lim}}T^{-n}(B_n)\right)=1.$$* *The MSTP is defined in the same way, assuming in addition that the sequence $(r_n)$ is monotone.* Let us now recall Kurzweil's theorem in dimension $1$. **Theorem 14** (Kurzweil-Tseng). *Consider the dynamical system $(\mathbb{T},\lambda,T_\alpha)$ with $\alpha\in\mathbb{T}$ and $T_\alpha : x\mapsto x+\alpha$. Then the dynamical system is $s$-MSTP if and only if $\alpha$ is $s$-diophantine (i.e., $\alpha\in\mathcal{D}_i(s)$).* Combining this with the work of Chaika and Constantine [@ChaiKon], we also deduce the following theorem. **Theorem 15** (Kurzweil). *For $\lambda$-a.a. $\alpha\in [0,1[$, the system $(\mathbb{T},\lambda, T_\alpha)$ is MSTP.* In the case of singular continuous Rajchman measures, we obtain a weakened form of Kurzweil's theorem [Theorem 15](#kurzmstp){reference-type="ref" reference="kurzmstp"}. First, we present a dynamical Borel-Cantelli lemma for Rajchman measures. As in Theorem [Theorem 13](#thm:dbcbyks){reference-type="ref" reference="thm:dbcbyks"}, it is based on covariance estimates; here they do not rely on the keplerian shear but are obtained directly from the Rajchman property. **Theorem 16** (Dynamical Borel-Cantelli for Rajchman measures). *Consider a Rajchman measure $\mu$ on $\mathbb{T}$ of order $0\le r(\mu)\le\frac{1}{2}$. Let $T$ be the transvection on $\mathbb{T}^2$ defined by $T(x,y)=(x,x+y)$. Let $C>0$ and $s>\frac{1}{r(\mu)}-1$. We have $$\frac{S_N}{\mathbb{E}(S_N)}\xrightarrow[N\to+\infty]{\mu\otimes \lambda-a.s} 1$$ where $$S_N=\sum_{k=1}^N \mathds{1}_{A_k}\circ T^k, \quad\text{and}\quad A_n = \mathbb{T}\times \left]z_n-\frac{C}{n^{\frac{1}{s}}},z_n+\frac{C}{n^{\frac{1}{s}}}\right[.$$* *Proof.* Let $p=\frac{1}{s}$.
For $(m,n)\in(\mathbb{N}^*)^2$ such that $m<n$, set $$S_{(m,n)}=\sum_{k=m}^n\mathds{1}_{A_k}\circ T^k.$$ We get $$\begin{array}{ll}\text{$\int_{\mathbb{T}^2} \left(S_{(m,n)}-\mathbb{E}\left(S_{(m,n)}\right)\right)^2 d\mu\otimes \lambda$}&=\sum_{k=m}^n\sum_{j=m}^n\text{$\int_{\Omega} \mathds{1}_{A_k}\circ T^{k-j}\mathds{1}_{A_j}-\mathbb{E}_\mu\left(\mathds{1}_{A_k}\vert \mathcal{I}\right)\mathbb{E}_\mu\left(\mathds{1}_{A_j}\vert \mathcal{I}\right) d\mu$}\\ &\leq \mathbb{E}_{\mu}(S_{(m,n)})+2\text{$\overset{n}{\underset{j=m}{\sum}} \text{$\overset{n-m}{\underset{l=1}{\sum}} \text{$\int_{\Omega} \mathds{1}_{A_{j+l}}\circ T^{l}\mathds{1}_{A_j}-\mathbb{E}_\mu\left(\mathds{1}_{A_{j+l}}\vert \mathcal{I}\right)\mathbb{E}_\mu\left(\mathds{1}_{A_j}\vert \mathcal{I}\right) d\mu$}$}$} .\end{array}$$ Then we have $$\begin{array}{ll}\text{$\left\vert \text{$\int_{\Omega} \mathds{1}_{A_{j+l}}\circ T^{l}\mathds{1}_{A_j}-\mathbb{E}_\mu\left(\mathds{1}_{A_{j+l}}\vert \mathcal{I}\right)\mathbb{E}_\mu\left(\mathds{1}_{A_j}\vert \mathcal{I}\right) d\mu$}\right\vert$}&=\text{$\left\vert \sum_{k\in\mathbb{Z}^*}\frac{\sin\left(\frac{2k\pi C}{j^p}\right)\sin\left(\frac{2k\pi C}{(j+l)^p}\right)}{k^2\pi^2}\widehat{\mu}(kl)\right\vert$}\\ &\leq 4C_\mu C\sum_{k\in\mathbb{N}^*}\frac{1}{l^{r(\mu)}}\frac{\text{$\left\vert \sin\left(\frac{2\pi k C}{j^p}\right)\right\vert$}}{k^{1+r}(j+l)^p\pi}\\ &\simeq \frac{2^{2+r}C_\mu C^{1+r}\pi^{1-r}}{j^{pr(\mu)}(j+l)^pl^{r(\mu)}}\text{$\int_{\mathbb{R}_+^*} \frac{\text{$\left\vert \sin(x)\right\vert$}}{x^{1+r}} d\lambda(x)$}\end{array}$$ And we get $$\begin{array}{ll}\text{$\int_{\mathbb{T}^2} \left(S_{(m,n)}-\mathbb{E}\left(S_{(m,n)}\right)\right)^2 d\mu\otimes \lambda$}&\ll \mathbb{E}_\mu\left(S_{(m,n)}\right)+\text{$\int_{[0,n-m]} \text{$\int_{[0,n-m]} \frac{1}{x^{r(\mu)}}\frac{1}{(x+y)^p}\frac{1}{y^{pr(\mu)}} d\lambda(y)$} d\lambda(x)$}\\ &\ll \text{$\int_{[0,\frac{\pi}{2}]}
\frac{1}{(\cos(\theta))^{r(\mu)}(\cos(\theta)+\sin(\theta))^p(\sin(\theta))^{pr(\mu)}} d\lambda(\theta)$}(n-m)^{2-r(\mu)-p(1+r(\mu))}\end{array}.$$ Set $\delta=2-r(\mu)-p(1+r(\mu))$. Now $p<\frac{r(\mu)}{1-r(\mu)}$, so $\frac{\delta}{2}<1-p$. By Sprindzuk's theorem, $$S_N \overset{\mu\otimes \lambda-a.s}{=}\mathbb{E}(S_N)+O\left(N^{\frac{\delta}{2}}\log_2^{1+\varepsilon}\left(N\right)\right).$$ Hence $${\frac{S_N}{\mathbb{E}(S_N)}\xrightarrow[N\to+\infty]{\mu\otimes\lambda-a.s}1}.$$ ◻ **Remark 16**. *We observe that this \"loss of power\" in the Borel-Cantelli lemma above increases as the Rajchman order decreases.* The next result relates the order of a Rajchman measure to diophantine properties of its support, and thus to the MSTP property. This result will therefore also prove Proposition [Proposition 3](#diophraj){reference-type="ref" reference="diophraj"}. **Proposition 14** (General Kurzweil for Rajchman measures). *Consider a Rajchman measure $\mu$ on $\mathbb{T}$ of order $0\le r(\mu)\le\frac{1}{2}$. Let $s>\frac{1}{r(\mu)}-1$. We have $\mu(\mathcal{D}_i(s))=1$; equivalently, for $$\mu-a.a\, \alpha \in [0,1[, \left(\mathbb{T},\lambda,T_\alpha\right) \text{ is } s\text{-MSTP}.$$* *Proof.* Let $C>0$ and $j\in \mathbb{N}^*$. Consider $$\sum_{p=0}^{j-1}\mu\left(\left]\frac{p}{j}-\frac{C}{j^{s+1}},\frac{p}{j}+\frac{C}{j^{s+1}}\right[\right)=\text{$\int_{\mathbb{T}} \sum_{p=0}^{j-1}\mathds{1}_{\left]\frac{p}{j}-\frac{C}{j^{s+1}},\frac{p}{j}+\frac{C}{j^{s+1}}\right[} d\mu$}=2\frac{C}{j^{s}}+\sum_{k\in \mathbb{Z}^*}\frac{\sin{\left(2\pi k \frac{C}{j^{s}}\right)}}{k\pi}\widehat{\mu}(kj).$$ So $$\sum_{p=0}^{j-1}\mu\left(\left]\frac{p}{j}-\frac{C}{j^{s+1}},\frac{p}{j}+\frac{C}{j^{s+1}}\right[\right)\leq 2\frac{C}{j^{s}}+\frac{2C_\mu C^{\varepsilon}}{\pi j^{r(\mu)+s\varepsilon}}\zeta(1+r(\mu)-\varepsilon)$$ with $\varepsilon\in ]0,r(\mu)[$.
Hence $$\mu\left(\text{$\underset{p\in \text{$\llbracket 0,j-1\rrbracket$}}{\bigcup} \left]\frac{p}{j}-\frac{C}{j^{s+1}},\frac{p}{j}+\frac{C}{j^{s+1}}\right[$}\right)\leq 2\left(\frac{C}{j^s}+\frac{C_\mu C^\varepsilon}{\pi j^{r(\mu)+s\varepsilon}}\zeta(1+r(\mu)-\varepsilon)\right).$$ By hypothesis, $$r(\mu)>\frac{1}{s+1}.$$ Since this inequality is strict, there exists $\varepsilon\in ]0,r(\mu)[$ such that $$r(\mu)>\frac{1}{s+1}\ \text{ and }\ r(\mu)>1-s\varepsilon.$$ We also use the assumption that $r\leq \frac{1}{2}$. By the Borel-Cantelli lemma, $$\mu\left(\underset{n\to+\infty}{\overline{lim}}\left(\text{$\underset{p\in \text{$\llbracket 0,n-1\rrbracket$}}{\bigcup} \left]\frac{p}{n}-\frac{C}{n^{s+1}},\frac{p}{n}+\frac{C}{n^{s+1}}\right[$}\right)\right)=0.$$ Hence, for $$\mu-a.a\, \alpha \in [0,1[,\exists n\in \mathbb{N}^*, \forall k\geq n, d(k\alpha,\mathbb{Z})\geq \frac{C}{k^s}.$$ Now take $\alpha\in [0,1[$ such that $$\exists n\in \mathbb{N}^*, \forall k\geq n, d(k\alpha,\mathbb{Z})\geq \frac{C}{k^s}.$$ This proves that $\alpha\in\mathcal{D}_i(s)$, hence $\mu(\mathcal{D}_i(s))=1$. By the Kurzweil-Tseng theorem [Theorem 14](#kurztseng){reference-type="ref" reference="kurztseng"}, the system $\left(\mathbb{T},\lambda,T_\alpha\right)$ is $s$-MSTP. ◻ Recall that when $r(\mu)>\frac{1}{2}$, the measure $\mu$ is absolutely continuous, and therefore, by the Kurzweil theorem cited above, for $\mu$-a.a. $\alpha\in [0,1[$ the system $\left(\mathbb{T},\lambda,T_{\alpha}\right)$ is $1$-MSTP. **Remark 17**. *Note that the result obtained in Proposition [Proposition 14](#kurzman){reference-type="ref" reference="kurzman"} is consistent with the one obtained via the Keplerian shear in Theorem [Theorem 16](#thm:dbcbyrp){reference-type="ref" reference="thm:dbcbyrp"}, since $$\sum_{k=1}^n \lambda\left(\left]z_k-\frac{C}{k^{\frac{1}{s}}},z_k+\frac{C}{k^{\frac{1}{s}}}\right[\right)^s=\sum_{k=1}^n \frac{2^sC^s}{k}\xrightarrow[n\to+\infty]{}+\infty.$$* **Remark 18**.
*For a Rajchman measure with positive order $r(\mu)>0$, $\mu$-a.a. $\alpha\in [0,1[$ is diophantine, that is, $Dio(\alpha)<+\infty$ $\mu$-almost surely.* The following result shows that Proposition [Proposition 14](#kurzman){reference-type="ref" reference="kurzman"} is optimal. **Proposition 15** (Optimality of the Rajchman order [@Kauf]). *Let $0<r< \frac12$. Then for all $1<s<\frac{1}{r}-1$, there exists a Rajchman measure $\mu$ such that $r(\mu)=r$ and $\mu(\mathcal{D}_i(s))=0$.* *Proof.* Let $\varepsilon=\frac{1}{r}-(s+1)>0$. We can consider the measure $\mu_{\alpha}$ with $\alpha = s-1+\varepsilon$ constructed in the work of Kaufman [@Kauf], whose support lies in $E(\alpha)\subset [0,1[\setminus {\mathcal{D}_i(s)}$. ◻ **Remark 19**. *By taking convex combinations, for any $s>1$ we can construct a Rajchman measure $\mu$ for which $\mu(\mathcal{D}_i(s))$ equals any prescribed value in $[0,1]$.* **Example 10** (Diophantine property with self-similar measures). *According to the work of Pablo Shmerkin and Jean-Pierre Kahane [@Kaha], the self-similar measure $\mu_{\theta}$ with $\theta>1$ not Pisot has a positive Rajchman order $r\left(\mu_{\theta}\right)>0$. In particular, $\mu_{\theta}$-a.a. $\alpha\in [0,1[$ is diophantine.* **Example 11** (Liouville property with Rajchman measures on $S_\infty$). *The measure $\mu_{\infty}$ mentioned in [Example 3](#cantraj){reference-type="ref" reference="cantraj"} is Rajchman, but since its support $S_\infty$ almost surely contains Liouville numbers, we deduce by contraposition of Proposition [Proposition 14](#kurzman){reference-type="ref" reference="kurzman"} that $\mu_{\infty}$ does not have a strictly positive Rajchman order, i.e. $r\left(\mu_{\infty}\right)=0$.*

V.I. Arnold, S.M. Gusein-Zade, A.N. Varchenko, Singularities of Differentiable Maps, Volume 2: Monodromy and Asymptotics of Integrals, Birkhäuser, 1988.

Jayadev Athreya, Logarithm laws and shrinking target properties, Proc. Indian Acad. Sci. (Math. Sci.) Vol.
119, No. 4, September 2009, pp. 541--557.

Christian Bluhm, Liouville Numbers, Rajchman Measures, and Small Cantor Sets, Proceedings of the American Mathematical Society, Vol. 128, No. 9 (Sep., 2000), pp. 2637-2640.

N. Bourbaki, Groupes et algèbres de Lie, Chapitres 2 et 3, Springer, 2006.

Jon Chaika, David Constantine, Quantitative shrinking target properties for rotations and interval exchanges, Israel Journal of Mathematics 230, 275--334 (2019). https://doi.org/10.1007/s11856-018-1824-8

N. Chernov, D. Kleinbock, Dynamical Borel-Cantelli lemmas for Gibbs measures, *Isr. J. Math.* 122 (2001), 1--27.

Antoine Delzant, Groupes de Lie compacts et tores maximaux, Séminaire Henri Cartan, tome 12, no. 1 (1959-1960), exp. no. 1, pp. 1-14.

Dimitri Dolgopyat, Bassam Fayad, Deviations of ergodic sums for toral translations II. Boxes, Publ. Math. IHES, Springer Nature, 2020. https://doi.org/10.1007/s10240-020-00120-2

Fredrik Ekström, Tomas Persson, Jörg Schmeling, On the Fourier dimension and a modification, Journal of Fractal Geometry, 2(3), 309-337. https://doi.org/10.4171/JFG/23

Jean-Pierre Kahane, Sur la distribution de certaines séries aléatoires, Colloque de théorie des nombres (Bordeaux, 1969), Mémoires de la Société Mathématique de France, no. 25 (1971), pp. 119-122. doi: 10.24033/msmf.42

R. Kaufman, On the theorem of Jarník and Besicovitch, Acta Arith. 39 (1981), 265-267.

H. Kesten, Uniform distribution mod 1, Annals of Mathematics 71 (1960), 445--471.

Jaroslav Kurzweil, On the metric theory of inhomogeneous diophantine approximations, Studia Mathematica 15.1 (1955), 84-112.

Boris Solomyak, Fourier decay for self-similar measures, Proceedings of the American Mathematical Society 149(8) (2021).

V. G. Sprindzuk, Metric Theory of Diophantine Approximations, translated and edited by Richard A. Silverman, V. H. Winston; Wiley, Washington, D.C.
: New York, 1979.

Damien Thomine, Keplerian shear in ergodic theory, Annales Henri Lebesgue, Volume 3 (2020), pp. 649-676.

Jimmy Tseng, On circle rotations and the shrinking target properties, Discrete and Continuous Dynamical Systems, 2008, 20(4), 1111-1122. doi: 10.3934/dcds.2008.20.1111

Victoria Xing, Dynamical Borel--Cantelli Lemmas and Applications, 8 April 2020, LUTFMA-3402-2020.

Maciej Zworski, Semiclassical Analysis, Graduate Studies in Mathematics 138, Amer. Math. Soc., Providence, RI, 2012.

[^1]: *Any open cover has a countable subcover.*

[^2]: For $l$ even, $I_l\in(-\infty,0)$ trivially, while for $l$ odd we have $I_l\in i(0,\infty)$, using a decomposition of the integral on intervals $[2n,2n+2]$.
--- abstract: | We consider the problem $(\rm P)$ of exactly fitting an ellipsoid (centered at $0$) to $n$ standard Gaussian random vectors in $\mathbb{R}^d$, as $n, d \to \infty$ with $n / d^2 \to \alpha > 0$. This problem is conjectured to undergo a sharp transition: with high probability, $(\rm P)$ has a solution if $\alpha < 1/4$, while $(\rm P)$ has no solutions if $\alpha > 1/4$. So far, only a trivial bound $\alpha > 1/2$ is known to imply the absence of solutions, while the sharpest results on the positive side assume $\alpha \leq \eta$ (for $\eta > 0$ a small constant) to prove that $(\rm P)$ is solvable. In this work we study universality between this problem and a so-called "Gaussian equivalent", for which the same transition can be rigorously analyzed. Our main results are twofold. On the positive side, we prove that if $\alpha < 1/4$, there exist an ellipsoid fitting all the points up to a small error, and that the lengths of its principal axes are bounded above and below. On the other hand, for $\alpha > 1/4$, we show that achieving small fitting error is not possible if the length of the ellipsoid's shortest axis does not approach $0$ as $d \to \infty$ (and in particular there does not exist any ellipsoid fit whose shortest axis length is bounded away from $0$ as $d \to \infty$). To the best of our knowledge, our work is the first rigorous result characterizing the expected phase transition in ellipsoid fitting at $\alpha = 1/4$. In a companion non-rigorous work, the first author and D. Kunisky give a general analysis of ellipsoid fitting using the replica method of statistical physics, which inspired the present work. author: - Antoine Maillard$^{\star, \diamond}$, Afonso S. 
Bandeira$^\star$ bibliography: - refs.bib title: Exact threshold for approximate ellipsoid fitting of random points --- [^1] # Introduction and main results {#sec:introduction} ## The ellipsoid fitting conjecture We consider the *random ellipsoid fitting problem*: given $n$ random standard Gaussian vectors in dimension $d$, when do they all lie on the boundary of a (centered) ellipsoid? Formally, we define an ellipsoid fit using the set $\mcS_d$ of $d \times d$ real symmetric matrices, as follows. [\[def:ellipsoid_fit\]]{#def:ellipsoid_fit label="def:ellipsoid_fit"} Let $x_1, \cdots, x_n \in \mathbb{R}^d$. We say that $S \in \mcS_d$ is an *ellipsoid fit* for $(x_\mu)_{\mu=1}^n$ if it satisfies: $$\label{eq:def_P} ({\rm P}) \, : \, \begin{dcases} &x_\mu^\intercal S x_\mu = d \, \textrm{ for all } \mu \in \{1, \cdots, n\}, \\ &S \succeq 0. \end{dcases}$$ In Definition [\[def:ellipsoid_fit\]](#def:ellipsoid_fit){reference-type="ref" reference="def:ellipsoid_fit"}, the matrix $S \succeq 0$ defines the ellipsoid $\Sigma \coloneqq \{x \in \mathbb{R}^d \, : \, x^\intercal S x = d\}$. Geometrically speaking, the eigenvectors of $S$ give the directions of the principal axes of the ellipsoid, while its eigenvalues $(\lambda_i)_{i=1}^d$ are related to the lengths $(r_i)_{i=1}^d$ of its principal (semi-)axes by $r_i = \sqrt{d} \lambda_i^{-1/2}$. **Scaling --** In what follows, we will rather refer to the rescaled quantities $r'_i = r_i / \sqrt{d}$ as the lengths of the ellipsoid axes, effectively rescaling distances so that the sphere of radius $\sqrt{d}$ (with $S = \mathrm{I}_d$) has all (semi-)axes of length $1$. In particular, the lengths of the ellipsoid's longest and shortest axes are then respectively $\lambda_{\min}(S)^{-1/2}$ and $\lambda_{\max}(S)^{-1/2}$. We are interested in finding an ellipsoid fit to a set of *random* points $x_1, \cdots, x_n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_d)$.
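To make the definition of an ellipsoid fit and the axis-length scaling concrete, here is a small numerical sanity check (our illustration, not part of the original text, with arbitrary choices of dimension and spectrum): we sample points lying exactly on a prescribed ellipsoid and verify the fit equations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 8  # arbitrary illustrative sizes

# A hypothetical ellipsoid: S0 is positive definite with spectrum in [0.5, 2].
evals = rng.uniform(0.5, 2.0, size=d)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
S0 = Q @ np.diag(evals) @ Q.T

# Sample points lying exactly on {x : x^T S0 x = d}: take a random direction u
# and rescale it by t = sqrt(d / (u^T S0 u)).
points = []
for _ in range(n):
    u = rng.standard_normal(d)
    points.append(np.sqrt(d / (u @ S0 @ u)) * u)

# S0 is then an ellipsoid fit for these points, in the sense of the definition above.
for x in points:
    assert abs(x @ S0 @ x - d) < 1e-8

# Rescaled semi-axis lengths r'_i = lambda_i^{-1/2}, all of order 1 here.
axis_lengths = np.sort(evals) ** (-0.5)
print(axis_lengths)
```

Since the spectrum of $S_0$ lies in $[0.5, 2]$, all rescaled axis lengths lie in $[2^{-1/2}, 2^{1/2}]$, matching the scaling convention above.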
The question of the existence of such an ellipsoid arose first in [@saunderson2011subspace; @saunderson2012diagonal; @saunderson2013diagonal], which conjectured the following. [\[conj:ellipsoid_fitting\]]{#conj:ellipsoid_fitting label="conj:ellipsoid_fitting"} Let $n, d \geq 1$, and $x_1, \cdots, x_n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_d)$. Let $p(n, d) \coloneqq \mathbb{P}[\exists S \in \mcS_d \textrm{ an ellipsoid fit for }(x_\mu)_{\mu=1}^n]$. For any $\varepsilon> 0$, the following holds: $$\begin{aligned} \label{eq:conj_ef_positive} \limsup_{d \to \infty} \frac{n}{d^2} \leq \frac{1 - \varepsilon}{4} &\Rightarrow \lim_{d \to \infty} p(n, d) = 1, \\ \label{eq:conj_ef_negative} \liminf_{d \to \infty} \frac{n}{d^2} \geq \frac{1 + \varepsilon}{4} &\Rightarrow \lim_{d \to \infty} p(n, d) = 0. \end{aligned}$$ Informally, Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} predicts a sharp transition for the existence of an ellipsoid fit in the regime $n / d^2 \to \alpha > 0$ exactly at $\alpha = 1/4$. ## Related works Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} was first stated and studied in the series of works [@saunderson2011subspace; @saunderson2012diagonal; @saunderson2013diagonal], where it arose as being connected to the decomposition of a (random) data matrix $M$ as $M = L + D$, with $L \succeq 0$ being low-rank, and $D$ a diagonal matrix. Connections to other problems throughout theoretical computer science have since then been unveiled, such as certifying a lower bound on the discrepancy of a random matrix using a canonical semidefinite relaxation [@saunderson2012diagonal; @potechin2023near], overcomplete independent component analysis [@podosinnikova2019overcomplete], or Sum-of-Squares lower bound for the Sherrington-Kirkpatrick Hamiltonian [@ghosh2020sum]. 
We refer the reader to the more detailed expositions of [@potechin2023near; @maillard2023fitting] on the connections of ellipsoid fitting to theoretical computer science and machine learning. Interestingly, Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} arose both from numerical evidence [^2] and the remark that $d^2/4$ is known to be the statistical dimension (or squared Gaussian width) of $\mcS_d^+$, the set of positive semidefinite matrices [@chandrasekaran2012convex; @amelunxen2014living]. As such, if one replaces eq. [\[eq:def_P\]](#eq:def_P){reference-type="eqref" reference="eq:def_P"} by $$\label{eq:def_P_Gaussian} ({\rm P}_{\mathrm{Gauss.}}) \, : \, \begin{dcases} &{\rm Tr}[S G_\mu] = d \, \textrm{ for all } \mu \in \{1, \cdots, n\}, \\ &S \succeq 0, \end{dcases}$$ in which $(G_\mu)_{\mu=1}^n$ are (independent) standard Gaussian matrices, Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} provably holds for $(\rm P_\mathrm{Gauss.})$. The crucial property of $(\rm P_\mathrm{Gauss.})$ behind this result is that the affine subspace $\{S \in \mcS_d \, : \, ({\rm Tr}[S G_\mu] = d)_{\mu=1}^n\}$ is randomly oriented, *uniformly* in all directions. Although this motivation for the conjecture was known, our work is (to the best of our knowledge) the first mathematically rigorous approach to leverage the connection between $(\rm P)$ and $(\rm P_\mathrm{Gauss.})$. Indeed, previous progress on Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} has mostly focused on proving the existence of a fitting ellipsoid using an ansatz solution: the first line of eq.
[\[eq:def_P\]](#eq:def_P){reference-type="eqref" reference="eq:def_P"} defines an affine subspace $V$ of symmetric matrices of codimension $n$, so one can study a well-chosen $S^\star \in V$, and argue that for small enough $n$ it satisfies $S^\star \succeq 0$ with high probability as $d \to \infty$. Various such constructions have been used, and we summarize in Fig. [\[fig:summary\]](#fig:summary){reference-type="ref" reference="fig:summary"} the current rigorous progress on the ellipsoid fitting conjecture that arose from these approaches. Presently, the best rigorous results on Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} are due to the recent works [@bandeira2023fitting; @hsieh2023ellipsoid; @tulsiani2023ellipsoid] and can be summarized as follows: [\[thm:previous_results\]]{#thm:previous_results label="thm:previous_results"} Let $n, d \geq 1$, and $x_1, \cdots, x_n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_d)$. Let $p(n, d) \coloneqq \mathbb{P}[\exists S \in \mcS_d \textrm{ an ellipsoid fit for }(x_\mu)_{\mu=1}^n]$. There exists a (small) universal constant $\eta > 0$ such that: $$\limsup_{d \to \infty} \frac{n}{d^2} \leq \eta \Rightarrow \lim_{d \to \infty} p(n, d) = 1.$$ Moreover, if $n > d(d+1)/2$, then $p(n, d) = 0$. Note that the bound $n > d(d+1)/2$ in Theorem [\[thm:previous_results\]](#thm:previous_results){reference-type="ref" reference="thm:previous_results"} arises from a simple dimension counting argument, as $d(d+1)/2$ is the dimension of the space of symmetric matrices: for such values of $n$, not only does there not exist a solution to eq. [\[eq:def_P\]](#eq:def_P){reference-type="eqref" reference="eq:def_P"}, but there does not exist any solution even without the constraint $S \succeq 0$!
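The ansatz strategy just described can be explored numerically. The sketch below is our illustration (with arbitrary small values of $d$ and $n$; it is not one of the sharp constructions of the works cited above): it projects the identity matrix onto the affine subspace $V$ in Frobenius norm, and inspects the smallest eigenvalue of the result.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 30
n = d * d // 10  # n/d^2 = 0.1, below the conjectured threshold of 1/4

X = rng.standard_normal((n, d))

# The constraints x_mu^T S x_mu = d are linear in S: with A having rows
# vec(x_mu x_mu^T), they read A vec(S) = d * ones.
A = np.stack([np.outer(x, x).ravel() for x in X])

# Project the identity onto the affine subspace V: S = I + Delta, where
# vec(Delta) is the minimum-norm solution of A vec(Delta) = d - ||x_mu||^2.
b = d - (X ** 2).sum(axis=1)
delta, *_ = np.linalg.lstsq(A, b, rcond=None)
D = delta.reshape(d, d)
S = np.eye(d) + (D + D.T) / 2  # symmetrize (harmless: A has symmetric rows)

# The affine constraints hold essentially exactly...
residual = np.max(np.abs(np.einsum('mi,ij,mj->m', X, S, X) - d))
# ...and S >= 0 is what remains to be checked: for small enough n/d^2, one
# expects the smallest eigenvalue to stay positive with high probability.
lam_min = np.linalg.eigvalsh(S)[0]
print(residual, lam_min)
```

Repeating this experiment for growing $d$ at various ratios $n/d^2$ gives a (finite-size) numerical picture of the transition discussed in this section.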
**Statistical physics approaches: heuristic and rigorous --** In this work, we tackle Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} using techniques inspired by the statistical physics of disordered systems. While analytical methods developed in this field were originally designed to study models known as spin glasses [@anderson1989spin; @mezard1987spin], they have seen in the past decades a great number of applications in high dimensional statistics, theoretical computer science, and machine learning. Moreover, despite these techniques often being non-rigorous, a growing line of mathematics literature has emerged establishing many of their predictions. Notably, ellipsoid fitting is an example of a *semidefinite program (SDP)*[^3] with random linear constraints, and some such SDPs have been previously analyzed with tools of statistical physics [@montanari2016semidefinite; @javanmard2016phase], although the methods of these works fall short for analyzing the satisfiability transition in random ellipsoid fitting [@maillard2023fitting]. We refer the interested reader to the recent book [@charbonneau2023spin] that compiles many (sometimes surprising) applications of the theory of disordered systems, as well as mathematically rigorous approaches to it. Notably, in the companion work to our manuscript [@maillard2023fitting], non-rigorous methods of statistical physics are employed to provide a detailed picture of the satisfiability transition in random ellipsoid fitting. Besides predicting a threshold for $n \sim d^2/4$, this work gives analytical formulas for the typical shape of ellipsoid fits in the satisfiable phase (i.e. the spectral density of $S$), generalizes these predictions for non-Gaussian but rotationally-invariant vectors $\{x_\mu\}_{\mu=1}^n$, and also studies the performance of different explicit solutions, notably ones used in the previous literature (see Fig. 
[\[fig:summary\]](#fig:summary){reference-type="ref" reference="fig:summary"}). We emphasize that the present paper is, in contrast, mathematically rigorous. **Inspiration of our approach --** Importantly, the non-rigorous analysis of [@maillard2023fitting] suggests that a quantity known as the free entropy (or free energy) in statistical physics is universal: its value is (with high probability) the same for $(\rm P)$ and a variant of $(\rm P_\mathrm{Gauss.})$, as $d \to \infty$. Such a universality property would have major consequences, as the free entropy carries deep information about the structure of the space of solutions to the problem. Remarkably, similar phenomena have been studied numerically and theoretically in statistical learning models, in which one can effectively replace an arbitrary (and possibly complicated) data distribution by its "Gaussian equivalent". Investigating this Gaussian equivalence phenomenon is the object of a recent and very active line of work, with consequences on the theory of empirical risk minimization and beyond [@goldt2022gaussian; @hu2022universality; @montanari2022universality; @gerace2022gaussian; @adamczak2011restricted; @dandi2023universality; @loureiro2021learning; @dhifallah2020precise; @schroder2023deterministic]. Inspired by these works (in particular [@montanari2022universality]), we provide a rigorous proof of the universality conjectured in [@maillard2023fitting], using an interpolation argument. We then leverage tools from the theory of random convex programs [@chandrasekaran2012convex; @amelunxen2014living], such as Gordon's min-max inequality [@gordon1988milman], to sharply characterize the space of solutions to $(\rm P_\mathrm{Gauss.})$. The aforementioned universality then allows us to transfer many of these conclusions to the original problem $(\rm P)$, yielding our main results. 
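As a toy preview of this Gaussian equivalence (our own sketch, not a computation from the cited works), one can check numerically that for a fixed direction $S$, the linear statistic ${\rm Tr}[W S]$ with $W = (xx^\intercal - \mathrm{I}_d)/\sqrt{d}$ and $x \sim \mcN(0, \mathrm{I}_d)$ has variance $2\|S\|_F^2/d$, exactly matching its Gaussian (GOE) counterpart:

```python
import numpy as np

# Toy check of Gaussian equivalence at the level of second moments:
# for a fixed symmetric S, Tr[W S] with W = (x x^T - I)/sqrt(d) has
# Var = 2 ||S||_F^2 / d, the same as for a GOE matrix.
rng = np.random.default_rng(0)
d, N = 30, 40000
S = rng.standard_normal((d, d))
S = (S + S.T) / 2
S /= np.linalg.norm(S)                     # normalize ||S||_F = 1

X = rng.standard_normal((N, d))
traces = (np.einsum("ni,ij,nj->n", X, S, X) - np.trace(S)) / np.sqrt(d)
print(traces.mean(), traces.var(), 2 / d)  # empirical vs exact variance 2/d
```

Matching second moments is of course far weaker than the free entropy universality proven in this paper, but it is the starting point of the interpolation argument.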
## Main results **Notation --** We denote by $\mcS_d$ the set of $d \times d$ real symmetric matrices, while $\mathbb{S}^{d-1}(r)$ refers to the Euclidean sphere of radius $r$ in $\mathbb{R}^d$. For $S \in \mcS_d$, $\mathrm{Sp}(S) = \{\lambda_i\}_{i=1}^d$ is the set of eigenvalues of $S$. For $\gamma \in [1, \infty]$, $\|S\|_{S_\gamma} \coloneqq (\sum_i |\lambda_i|^\gamma)^{1/\gamma}$ stands for the Schatten-$\gamma$ norm. $B_\gamma(S, \delta)$ is the Schatten-$\gamma$ ball of radius $\delta$ centered at $S$, and $B_\gamma(\delta)$ the ball centered at $S = 0$. We denote by $\|S\|_{\mathrm{op}} \coloneqq \|S\|_{S_\infty}$ the operator norm, and by $\|S\|_F \coloneqq \|S\|_{S_2}$ the Frobenius norm. For a function $\psi : \mathbb{R}\to \mathbb{R}$, we write $\|\psi\|_L$ to denote its Lipschitz constant. Finally, we use the generic notation $C, c > 0$ for positive constants (not depending on the dimension) that may vary from line to line. If necessary, we will make explicit the dependency of these constants on the parameters of the problem. We now state our main results, separating the conjecturally satisfiable ($\alpha = n/d^2 < 1/4$) and unsatisfiable ($\alpha > 1/4$) regimes. ### The satisfiable phase: $\alpha < 1/4$ Our main result on the "positive" side of the ellipsoid fitting conjecture can be stated as follows. [\[thm:main_positive_side\]]{#thm:main_positive_side label="thm:main_positive_side"} Assume $\alpha \coloneqq \limsup (n/d^2) < 1/4$ and let $r \in [1,4/3)$. There exist $0 < \lambda_- \leq \lambda_+$, depending only on $\alpha$, such that the following holds. 
Let $$\Gamma_r(\varepsilon) \coloneqq \Bigg\{S \in \mcS_d : \mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+] \textrm{ and } \frac{1}{n} \sum_{\mu=1}^n \left|\sqrt{d} \left[\frac{x_\mu^\intercal S x_\mu}{d} - 1\right]\right|^r \leq \varepsilon\Bigg\}.$$ Then for any $\varepsilon> 0$, if $x_1, \cdots, x_n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_d)$, $\mathbb{P}[\Gamma_r(\varepsilon) \neq \emptyset] \to 1$ as $n, d \to \infty$. Let us make a series of remarks on Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}. First, one can alternatively formulate its conclusion as: $$\label{eq:main_positive_side_alternate} \mathop{\mathrm{p-lim}}_{d \to \infty} \min_{\mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+]} \frac{1}{n} \sum_{\mu=1}^n \left|\sqrt{d} \left[\frac{x_\mu^\intercal S x_\mu}{d} - 1\right]\right|^r = 0,$$ where $\mathop{\mathrm{p-lim}}$ denotes the limit in probability. Secondly, while our current proof limits the choice of $r$ to $[1, 4/3)$, it might be possible to refine our arguments to reach the same result for any $r \in [1,2]$; see our discussion in Section [1.4](#subsec:generalizations){reference-type="ref" reference="subsec:generalizations"}. Moreover, note that by standard concentration arguments, we expect Gaussian points to be close to the sphere $\mathbb{S}^{d-1}(\sqrt{d})$, i.e. the ellipsoid defined by $S = \mathrm{I}_d$. A detailed analysis yields, for any $r \in [1,2]$: $$\label{eq:error_identity} \mathop{\mathrm{p-lim}}_{d \to \infty} \frac{1}{n} \sum_{\mu=1}^n \left|\sqrt{d} \left[\frac{\|x_\mu\|^2}{d} - 1\right]\right|^r = \mathbb{E}[|Z|^r] > 0,$$ where $Z \sim \mcN(0,2)$; see Appendix [6](#sec_app:error_identity){reference-type="ref" reference="sec_app:error_identity"} for a detailed derivation. 
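For instance, for $r = 2$ one has $\mathbb{E}[|Z|^2] = \mathrm{Var}(Z) = 2$, which a quick simulation (ours, for illustration only) reproduces:

```python
import numpy as np

# Monte Carlo check of eq. (error_identity) for r = 2: the fitting error of
# S = I_d converges to E|Z|^2 = 2 for Z ~ N(0, 2).
rng = np.random.default_rng(0)
d, n = 400, 4000
X = rng.standard_normal((n, d))
errs = np.sqrt(d) * (np.sum(X**2, axis=1) / d - 1.0)
identity_error = np.mean(errs**2)
print(identity_error)          # close to 2 for large n, d
```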
Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} therefore shows that, as long as $\alpha < 1/4$, there exists an ellipsoid whose "fitting error" improves by an arbitrary factor over the error achieved by the sphere $S = \mathrm{I}_d$. On the other hand, we will see that this is not possible for $\alpha > 1/4$, strongly suggesting that our results capture the phenomenon responsible for the conjectured satisfiability transition of ellipsoid fitting. In Section [1.4](#subsec:generalizations){reference-type="ref" reference="subsec:generalizations"} we will consider possible future directions that could allow us to improve our results to the existence of fitting ellipsoids with *exactly zero* error, i.e. the conjecture of eq. [\[eq:conj_ef_positive\]](#eq:conj_ef_positive){reference-type="eqref" reference="eq:conj_ef_positive"}. Finally, we notice that Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} is consistent with the non-rigorous analysis of [@maillard2023fitting], which predicts that for any $\alpha < 1/4$ typical solutions to ellipsoid fitting have spectral density contained in an interval of the type $[\lambda_-, \lambda_+]$ depending only on $\alpha$. ### The unsatisfiable phase: $\alpha > 1/4$ Our main result towards proving the non-existence of fitting ellipsoids for $\alpha > 1/4$ is the following. [\[thm:main_negative_side\]]{#thm:main_negative_side label="thm:main_negative_side"} Assume $\alpha \coloneqq \liminf (n/d^2) > 1/4$. Let $\phi: \mathbb{R}_+ \to \mathbb{R}_+$ be a non-decreasing differentiable function with $\phi(0) = 0$, such that $\phi$ has a unique global minimum at $0$. 
For any $\varepsilon> 0$ and $M > 0$ we let: $$\Gamma(\varepsilon, M) \coloneqq \Bigg\{S \in \mcS_d \, : \mathrm{Sp}(S) \subseteq [0, M] \textrm{ and } \frac{1}{n} \sum_{\mu=1}^n \phi\left(\sqrt{d} \left|\frac{x_\mu^\intercal S x_\mu}{d} - 1\right|\right) \leq \varepsilon\Bigg\}.$$ There exists $\varepsilon= \varepsilon(\alpha, \phi) > 0$ such that for all $M > 0$, if $x_1, \cdots, x_n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_d)$, $$\lim_{d \to \infty} \mathbb{P}\left[\Gamma(\varepsilon, M) \neq \emptyset\right] = 0.$$ Theorem [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"} shows that when $\alpha > 1/4$, ellipsoid fitting admits no solutions (even allowing a small approximation error $\varepsilon$) with spectrum bounded above as $d \to \infty$ (i.e. an ellipsoid whose smallest axis has length bounded away from zero). As a simple corollary of Theorem [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"}, we get that eq. [\[eq:conj_ef_negative\]](#eq:conj_ef_negative){reference-type="eqref" reference="eq:conj_ef_negative"} holds under the assumption that, if ellipsoid fits exist, then one of them has bounded spectrum. [\[cor:sufficient_cond_negative_conj\]]{#cor:sufficient_cond_negative_conj label="cor:sufficient_cond_negative_conj"} Consider the hypothesis: 1. [\[hyp:bounded_solutions\]]{#hyp:bounded_solutions label="hyp:bounded_solutions"} For any $\alpha \in (0,1/2)$ there exists $M = M(\alpha) > 0$ such that the following holds. Let $n, d \to \infty$ with $n/d^2 \to \alpha > 0$, and $x_1, \cdots, x_n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_d)$. We denote by $\Gamma$ the set of ellipsoid fits for $(x_\mu)_{\mu=1}^n$. Then (with high probability), if $\Gamma \neq \emptyset$, there exists $S \in \Gamma$ such that $\|S\|_\mathrm{op}\leq M(\alpha)$. 
Assumption [\[hyp:bounded_solutions\]](#hyp:bounded_solutions){reference-type="ref" reference="hyp:bounded_solutions"} implies the negative side of the ellipsoid fitting conjecture, i.e. eq. [\[eq:conj_ef_negative\]](#eq:conj_ef_negative){reference-type="eqref" reference="eq:conj_ef_negative"}. While we are not aware at present of a proof of [\[hyp:bounded_solutions\]](#hyp:bounded_solutions){reference-type="ref" reference="hyp:bounded_solutions"}, it seems possible that this question is easier to resolve than the original ellipsoid fitting conjecture. Moreover, non-rigorous calculations [@maillard2023fitting] predict that, with high probability, *all ellipsoid fits* (when existing) have bounded operator norm, which would imply Assumption [\[hyp:bounded_solutions\]](#hyp:bounded_solutions){reference-type="ref" reference="hyp:bounded_solutions"}. ## Discussion and consequences {#subsec:generalizations} The combination of our two main results (Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} and Theorem [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"}) provides strong evidence for the original ellipsoid fitting conjecture (Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"}). Our conclusions are attained through the study of a "Gaussian equivalent" problem, which partly motivated Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"}. Informally, we show that an approximate version of ellipsoid fitting (i.e. allowing infinitesimally small error) undergoes a sharp satisfiability transition at $\alpha = n/d^2 = 1/4$. 
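This threshold matches the normalized statistical dimension of the PSD cone: one has $\delta(\mcS_d^+) = d(d+1)/4$ exactly (see e.g. [@amelunxen2014living]), which the following Monte Carlo sketch (ours, for illustration only) estimates by projecting a standard Gaussian symmetric matrix onto the cone:

```python
import numpy as np

# Monte Carlo estimate of the statistical dimension of the PSD cone:
# delta(S_d^+) = E ||Pi(G)||_F^2 = d(d+1)/4, with G a standard Gaussian on
# symmetric matrices (w.r.t. the Frobenius inner product) and Pi the metric
# projection onto the cone, which clips negative eigenvalues.
rng = np.random.default_rng(0)
d, trials = 60, 40
est = 0.0
for _ in range(trials):
    B = rng.standard_normal((d, d))
    G = (B + B.T) / 2                  # diagonal var 1, off-diagonal var 1/2
    lam = np.linalg.eigvalsh(G)
    est += np.sum(lam[lam > 0] ** 2) / trials
print(est, d * (d + 1) / 4)            # estimate vs exact value 915
```

Dividing by the ambient description of the constraints ($n$ of them among $\sim d^2/2$ degrees of freedom) recovers the heuristic $n/d^2 = 1/4$ scaling discussed next.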
Moreover, we also show in our proof that in the Gaussian equivalent problem, the satisfiability transition for this "approximate" version coincides with that of the exact fitting problem (i.e. not allowing for any non-zero error). This strongly suggests that our method is indeed capturing the phenomenon responsible for the ellipsoid fitting transition. Our results are an example of a universality phenomenon in high-dimensional stochastic geometry: we show that the statistical dimension (or the square of the Gaussian width) of the set of positive semidefinite matrices determines the satisfiability of -- a modified version of -- random ellipsoid fitting, even though the affine subspace $\{S \in \mcS_d \, : \, (x_\mu^\intercal S x_\mu = d)_{\mu=1}^n\}$ is *not* uniformly randomly oriented. In general, understanding the conditions under which universality holds in such problems of high-dimensional random geometry is an important open question. We mention [@oymak2018universality], which proves universality between a model in which the random subspace is given by the kernel of an i.i.d. Gaussian matrix, and a second model where the subspace is the kernel of a matrix with independent (but not necessarily Gaussian) entries. ### Towards Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} Unfortunately, while our main theorems characterize a satisfiability transition for ellipsoid fitting at the expected threshold, they do not formally imply Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"}. We briefly discuss some improvements of our results that could potentially bridge this gap. On the "positive" side of the conjecture (i.e. 
the regime $\alpha = n/d^2 < 1/4$), Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} shows the existence of bounded ellipsoids that achieve arbitrarily small error $\varepsilon$ (where the error is taken to $0$ *after* $d \to \infty$). On the other hand, a proof of eq. [\[eq:conj_ef_positive\]](#eq:conj_ef_positive){reference-type="eqref" reference="eq:conj_ef_positive"} would require inverting these limits, taking $\varepsilon\to 0$ *before* $d \to \infty$. In this regard, an important strengthening of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} would be to obtain non-asymptotic bounds on $\mathbb{P}[\Gamma_r(\varepsilon) = \emptyset]$ that depend on $\varepsilon$. Another potential improvement stems from geometrical considerations: denoting $V \coloneqq \{S \in \mcS_d \, : \, x_\mu^\intercal S x_\mu = d \textrm{ for all } \mu \in [n]\}$, one may use Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} to bound the distance of the set $\Gamma_r(\varepsilon)$ to the affine subspace $V$. Since $\lambda_{\min}(S) \geq \lambda_-(\alpha)$ for any $S \in \Gamma_r(\varepsilon)$, it would suffice to show that $d_\mathrm{op}(\Gamma_r(\varepsilon), V) \leq \lambda_-(\alpha)$ to deduce eq. [\[eq:conj_ef_positive\]](#eq:conj_ef_positive){reference-type="eqref" reference="eq:conj_ef_positive"}, the first part of Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"}. We perform in Appendix [7](#sec_app:geometric_approx_exact){reference-type="ref" reference="sec_app:geometric_approx_exact"} a naive analysis of necessary conditions for this conclusion to follow from Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}. 
Unfortunately, we find that (among other considerations) these conditions would require a significantly stronger form of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}, by proving the conclusion for larger values of $r \in [1,2]$ and/or a better scaling with $d$ of the minimal error achievable (i.e. proving the conclusion of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} for $\Gamma_r(\varepsilon_d)$ with $\varepsilon_d \to 0$ as $d \to \infty$). Moreover, let us emphasize that a critical difficulty in improving our proof techniques would be to quantitatively sharpen the universality arguments we carry out, and in particular to strengthen Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"}, which shows the universality of the minimal fitting error, or "ground state" energy, for ellipsoid fitting and a simpler "Gaussian equivalent" problem. While the present form of Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} shows universality of this error up to a $o_d(1)$ difference, this estimate would likely have to be improved in order to carry out the aforementioned approaches. This part of our proof is greatly inspired by a recent literature on similar universality phenomena [@goldt2022gaussian; @hu2022universality; @montanari2022universality; @gerace2022gaussian; @adamczak2011restricted; @dandi2023universality; @loureiro2021learning; @dhifallah2020precise; @schroder2023deterministic], and we are not aware of the existence of such universality results at a finer scale (or even predictions/conjectures of conditions under which they should hold). Finally, on the "negative" side of the conjecture (i.e. 
for $\alpha > 1/4$), we have seen that Theorem [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"} reduces the second part of Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} (eq. [\[eq:conj_ef_negative\]](#eq:conj_ef_negative){reference-type="eqref" reference="eq:conj_ef_negative"}) to proving the boundedness of the spectrum of solutions, as emphasized in Corollary [\[cor:sufficient_cond_negative_conj\]](#cor:sufficient_cond_negative_conj){reference-type="ref" reference="cor:sufficient_cond_negative_conj"}. If a proof of Assumption [\[hyp:bounded_solutions\]](#hyp:bounded_solutions){reference-type="ref" reference="hyp:bounded_solutions"} were to become available, our results would imply eq. [\[eq:conj_ef_negative\]](#eq:conj_ef_negative){reference-type="eqref" reference="eq:conj_ef_negative"}, i.e. the regime $\alpha > 1/4$ of Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"}. ### Further directions Our proof method that leverages universality of the minimal fitting error is quite versatile, and we end our discussion by mentioning a few further directions and generalized results that stem from our analysis. **The dual program --** First, as a semidefinite program, ellipsoid fitting admits a dual formulation, as written e.g. in [@bandeira2023fitting]. While the limitations of Theorems [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} and [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"}, discussed above, prevent us from directly drawing conclusions on the dual, it might be possible to directly apply to it a similar universality approach. 
Such an application might allow us to overcome some current limitations of our results, and we leave this investigation for future work. **Beyond Gaussian vectors --** Secondly, while we perform our analysis for $x_1, \cdots, x_n \sim \mcN(0, \mathrm{I}_d)$, it is clear from our proof that our results (both Theorems [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} and [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"}) hold for any i.i.d. $(x_\mu)_{\mu=1}^n$ such that the matrices $W_\mu \coloneqq (x_\mu x_\mu^\intercal- \mathrm{I}_d)/\sqrt{d}$ satisfy a uniform pointwise normality (or uniform one-dimensional CLT) assumption, as defined in Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"}, and proven for the case of Gaussian vectors in Lemma [\[lemma:1d_clt_ellipse\]](#lemma:1d_clt_ellipse){reference-type="ref" reference="lemma:1d_clt_ellipse"}. An interesting example of a non-Gaussian distribution is given by the case of rotationally-invariant vectors with fluctuating norm, of the form $$x_\mu \overset{\rm d}{=}\sqrt{r_\mu} \omega_\mu,$$ with $r_\mu$ and $\omega_\mu$ independent, and $\omega_\mu \sim \mathrm{Unif}(\mathbb{S}^{d-1}(\sqrt{d}))$. Letting $\tau \coloneqq \lim_{d \to \infty} [\sqrt{d} \mathrm{Var}(r_1)]$, [@maillard2023fitting] conjectures that the ellipsoid fitting transition point for this model is located at $n/d^2 = \alpha_c(\tau) \in (0,1/2)$, and gives an exact expression of $\alpha_c(\tau)$ (see Fig. 5 of [@maillard2023fitting]), showing that ellipsoid fitting becomes harder as the fluctuations of the norm increase. 
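A sampler for this non-Gaussian family is straightforward; in the sketch below (ours, for illustration) the gamma law for $r_\mu$ is an arbitrary choice with unit mean, not one taken from the cited works:

```python
import numpy as np

# Sampler for the rotationally-invariant model x = sqrt(r) * omega, with
# omega uniform on the sphere of radius sqrt(d).  The gamma law for r is an
# arbitrary illustrative choice (it is not taken from the cited works).
rng = np.random.default_rng(0)
d, n = 50, 1000
r = rng.gamma(shape=10.0, scale=0.1, size=n)   # E[r] = 1, fluctuating norm
g = rng.standard_normal((n, d))
omega = np.sqrt(d) * g / np.linalg.norm(g, axis=1, keepdims=True)
X = np.sqrt(r)[:, None] * omega
# By construction ||x_mu||^2 = r_mu * d exactly.
print(np.max(np.abs(np.sum(X**2, axis=1) - r * d)))
```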
While pointwise normality may not hold in this setting, it is conceivable that our proof techniques can be adapted to handle these distributions, by following the calculations of [@maillard2023fitting], to obtain results akin to Theorems [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} and [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"}. More generally, while it is clear that some distributions cannot satisfy uniform pointwise normality (see the discussion below Lemma [\[lemma:1d_clt_ellipse\]](#lemma:1d_clt_ellipse){reference-type="ref" reference="lemma:1d_clt_ellipse"} for examples), a more thorough investigation of the class of distributions of the $x_\mu$'s for which pointwise normality holds (and thus our proof applies) is, in our opinion, an interesting direction to explore. **A minimal nuclear norm estimator --** Let us conclude by mentioning a different approach to a possible solution of the first part of Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"}. It is conjectured in [@maillard2023fitting] (through non-rigorous methods) that the minimal nuclear norm solution, i.e. $$\hS_\mathrm{NN}\coloneqq \mathop{\mathrm{arg\,min}}_{\substack{S \in \mcS_d \\ \{x_\mu^\intercal S x_\mu = d\}_{\mu=1}^n}} \|S\|_{S_1} = \mathop{\mathrm{arg\,min}}_{\substack{S \in \mcS_d \\ \{x_\mu^\intercal S x_\mu = d\}_{\mu=1}^n}} \sum_{i=1}^d |\lambda_i(S)| ,$$ satisfies $\hS_\mathrm{NN}\succeq 0$ with high probability for any $\alpha < 1/4$. Analyzing $\hS_\mathrm{NN}$, whether through the techniques of the present paper or with different methods, is another promising approach to prove eq. 
[\[eq:conj_ef_positive\]](#eq:conj_ef_positive){reference-type="eqref" reference="eq:conj_ef_positive"}, the "positive" part of Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"}. ## Structure of the paper In Section [2](#sec:outline_proof){reference-type="ref" reference="sec:outline_proof"} we present the proof of our main results. The proofs of some of the intermediate results are postponed to later sections: in Section [3](#sec:gaussian_equivalent){reference-type="ref" reference="sec:gaussian_equivalent"} we study in detail the "Gaussian equivalent" problem to random ellipsoid fitting, and in Section [4](#sec:proof_universality_gs){reference-type="ref" reference="sec:proof_universality_gs"} we prove an important universality property between the two models. # Proof of the main results {#sec:outline_proof} We prove here Theorems [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} and [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"}. The core idea of our proof can be sketched as follows: - Using rigorous methods inspired by statistical physics, we prove that a quantity known as the *asymptotic free entropy* is universal for the ellipsoid fitting problem of eq. [\[eq:def_P\]](#eq:def_P){reference-type="eqref" reference="eq:def_P"} and a variant of its Gaussian counterpart of eq. [\[eq:def_P\_Gaussian\]](#eq:def_P_Gaussian){reference-type="eqref" reference="eq:def_P_Gaussian"}, for any value of $\alpha = n/d^2$. The main technique we use is an interpolation method. In a suitable limit (known as the low-temperature limit in statistical physics), this implies the universality of the minimal "fitting error". 
- We study the Gaussian equivalent problem using methods of random convex geometry [@chandrasekaran2012convex; @amelunxen2014living], leveraging in particular Gordon's min-max inequality [@gordon1988milman]. When $\alpha < 1/4$ we show that not only is a zero error achievable, but that one can achieve it with a matrix whose spectrum is contained in an interval of the type $[\lambda_-, \lambda_+]$, i.e. the axes of the corresponding ellipsoid have lengths bounded above and below. On the other hand, for $\alpha > 1/4$, we prove that not only is the Gaussian equivalent problem not satisfiable, but one can lower bound the minimal fitting error as long as the set of candidate matrices is contained in an operator norm ball $B_\mathrm{op}(M)$ (for any constant $M > 0$). - We prove that the conclusions of $(ii)$ transfer to the original ellipsoid fitting problem, using the universality shown in $(i)$. Our proof of $(i)$ leverages an important line of work on free entropy universality [@hu2022universality; @montanari2022universality; @gerace2022gaussian], and a part of it closely follows the proof of [@montanari2022universality], which we will point out in relevant places. Nevertheless, as our setting does not satisfy all the hypotheses of this work, and for completeness of our exposition, we chose to write the whole proof in a self-contained manner. 
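As a toy preview of step $(i)$ (our own sketch; the actual proof controls the free entropy along the interpolation path, not just second moments), consider $U_\mu(t) = \sqrt{1-t}\, G_\mu + \sqrt{t}\, W_\mu$ with $G_\mu$ Gaussian (GOE) and $W_\mu = (x_\mu x_\mu^\intercal - \mathrm{I}_d)/\sqrt{d}$, independent: since the two ensembles share their first two moments, the variance of ${\rm Tr}[U_\mu(t) S]$ does not depend on $t$:

```python
import numpy as np

# Toy second-moment check along the path U(t) = sqrt(1-t) G + sqrt(t) W,
# with G a GOE matrix and W = (x x^T - I)/sqrt(d): since both ensembles
# share their first two moments, Var(Tr[U(t) S]) = 2 ||S||_F^2 / d for all t.
rng = np.random.default_rng(0)
d, N = 30, 40000
S = rng.standard_normal((d, d))
S = (S + S.T) / 2
S /= np.linalg.norm(S)                              # ||S||_F = 1

# Tr[G S] for a GOE matrix is exactly N(0, 2 ||S||_F^2 / d): sample it directly.
tg = np.sqrt(2 / d) * rng.standard_normal(N)
X = rng.standard_normal((N, d))
tw = (np.einsum("ni,ij,nj->n", X, S, X) - np.trace(S)) / np.sqrt(d)

variances = [np.var(np.sqrt(1 - t) * tg + np.sqrt(t) * tw) for t in (0.0, 0.5, 1.0)]
print(variances, 2 / d)                             # all close to 2/d
```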
## Reduction of the problem and Gaussian equivalent For any $0 \leq \lambda_- \leq \lambda_+$, any function $\phi : \mathbb{R}\to \mathbb{R}$ and any $\varepsilon> 0$ we define the set $$\label{eq:def_Gamma} \Gamma(\phi, \lambda_-, \lambda_+, \varepsilon) \coloneqq \Bigg\{S \in \mcS_d : \mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+] \textrm{ and } \frac{1}{n} \sum_{\mu=1}^n \phi\left(\sqrt{d} \left[\frac{x_\mu^\intercal S x_\mu}{d} - 1\right]\right) \leq \varepsilon\Bigg\}.$$ If one thinks of $\phi$ as an error (or loss) function, then $\Gamma(\phi, \lambda_-, \lambda_+, \varepsilon)$ represents the set of matrices with spectrum in $[\lambda_-, \lambda_+]$ that solve $(\rm P)$ up to an approximation error $\varepsilon$. Notice that for any $S \in \mcS_d$: $$\begin{aligned} \label{eq:correspondance_X_W} \sqrt{d} \left[\frac{x_\mu^\intercal S x_\mu}{d} - 1\right] = {\rm Tr}[W_\mu S] - \frac{d - {\rm Tr}[S]}{\sqrt{d}},\end{aligned}$$ with $W_\mu \coloneqq (x_\mu x_\mu^\intercal- \mathrm{I}_d)/\sqrt{d}$. Moreover, $W_\mu$ has the same first two moments as a Gaussian matrix. Formally, we define: [\[def:matrix_ensembles\]]{#def:matrix_ensembles label="def:matrix_ensembles"} Let $d \geq 1$. We say that a random symmetric $W \in \mcS_d$ is generated according to: - $W \sim \mathrm{GOE}(d)$ if the entries $W_{ij} \sim \mcN(0, [1+\delta_{ij}]/d)$ are independent for $i \leq j$. - $W \sim \mathrm{Ellipse}(d)$ if $W \overset{\rm d}{=}(x x^\intercal- \mathrm{I}_d)/\sqrt{d}$ for $x \sim \mcN(0, \mathrm{I}_d)$. One checks easily that $\mathbb{E}_{\mathrm{Ellipse}(d)}[W_{ij} W_{kl}] = \mathbb{E}_{\mathrm{GOE}(d)}[W_{ij} W_{kl}]$ for any $i \leq j$ and $k \leq l$. This remark and eq. 
[\[eq:correspondance_X\_W\]](#eq:correspondance_X_W){reference-type="eqref" reference="eq:correspondance_X_W"} lead us to consider the following modified problem, with $W_\mu \coloneqq (x_\mu x_\mu^\intercal- \mathrm{I}_d)/\sqrt{d}$ and $b \in \mathbb{R}$: $$\label{eq:def_Gamma_b} \Gamma_b(\phi, \lambda_-, \lambda_+, \varepsilon) \coloneqq \left\{S \in \mcS_d : \mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+] \textrm{ and } \frac{1}{n} \sum_{\mu=1}^n \phi\left({\rm Tr}[W_\mu S] - b\right) \leq \varepsilon\right\}.$$ In the rest of the proof we will focus on studying the set $\Gamma_b$ of eq. [\[eq:def_Gamma_b\]](#eq:def_Gamma_b){reference-type="eqref" reference="eq:def_Gamma_b"} with[^4] $b \in \mathbb{R}$, for both $W_\mu \sim \mathrm{Ellipse}(d)$ and $W_\mu \sim \mathrm{GOE}(d)$ (which we call the "Gaussian equivalent" problem). At the end of our proof, we will transfer our conclusions on $\Gamma_b$ back to the original solution set $\Gamma$ of eq. [\[eq:def_Gamma\]](#eq:def_Gamma){reference-type="eqref" reference="eq:def_Gamma"}. ## Free entropy universality {#subsec:fe_universality_main_results} We can now state the main result concerning the universality of the minimal error (or "ground state energy" in statistical physics jargon). This result is inspired by a rich line of work on universality of empirical risk minimization [@hu2022universality; @montanari2022universality; @gerace2022gaussian; @dandi2023universality]. [\[prop:universality_gs\]]{#prop:universality_gs label="prop:universality_gs"} Let $\phi : \mathbb{R}\to \mathbb{R}_+$ and $\psi : \mathbb{R}\to \mathbb{R}$ be two bounded differentiable functions with bounded derivatives, and assume furthermore $\|\psi'\|_L < \infty$. Let $n,d \geq 1$ and $n,d \to \infty$ with $\alpha_1 d^2 \leq n \leq \alpha_2 d^2$ for some $0 < \alpha_1 \leq \alpha_2$, and $B \subseteq \mcS_d$ a closed set such that $B \subseteq B_\mathrm{op}(C_0)$ for some $C_0 > 0$ (not depending on $d$). 
For $X_1, \cdots, X_n \in \mcS_d$ we define the *ground state energy*: $$\label{eq:def_gs} \mathrm{GS}_d(\{X_\mu\}) \coloneqq \inf_{S \in B} \frac{1}{d^2}\sum_{\mu=1}^n\phi({\rm Tr}[X_\mu S]).$$ Then we have: $$\label{eq:universality_gs} \lim_{d \to \infty} \left| \mathbb{E}_{\{W_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\mathrm{Ellipse}(d)} \psi[\mathrm{GS}_d(\{W_\mu\})] - \mathbb{E}_{\{G_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)} \psi[\mathrm{GS}_d(\{G_\mu\})] \right| = 0.$$ Therefore, for any $\rho \geq 0$ and $\delta > 0$: $$\label{eq:universality_gs_probabilities} \begin{dcases} \lim_{d \to \infty} \mathbb{P}[\mathrm{GS}_d(\{W_\mu\}) \geq \rho + \delta] \leq \lim_{d \to \infty} \mathbb{P}[\mathrm{GS}_d(\{G_\mu\}) \geq \rho], \\ \lim_{d \to \infty} \mathbb{P}[\mathrm{GS}_d(\{W_\mu\}) \leq \rho - \delta] \leq \lim_{d \to \infty} \mathbb{P}[\mathrm{GS}_d(\{G_\mu\}) \leq \rho]. \end{dcases}$$ **A word on the proof --** The main proof technique we use is Gaussian interpolation: namely we define an interpolating family $U_\mu(t)$ such that $U_\mu(0) = G_\mu$ and $U_\mu(1) = W_\mu$, and show that $\mathrm{GS}_d(\{U_\mu(t)\})$ is constant (up to negligible terms) along the interpolation path. Note that while Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} is very close to the results of [@montanari2022universality], there is a technical difference with the setup of this work: for any fixed $S$, the random variable ${\rm Tr}[W S]$ for $W \sim \mathrm{Ellipse}(d)$ is not sub-Gaussian but only sub-exponential. As a consequence, we cannot achieve a good control of the Lipschitz constant of the error (or "energy" function) of eq. [\[eq:def_gs\]](#eq:def_gs){reference-type="eqref" reference="eq:def_gs"} with respect to the Frobenius norm of $S$, as is required in [@montanari2022universality]. 
We bypass this difficulty by controlling instead the Lipschitz constant with respect to the operator norm (see Lemma [\[lemma:energy_change_small_ball\]](#lemma:energy_change_small_ball){reference-type="ref" reference="lemma:energy_change_small_ball"}), using important empirical process bounds over the operator norm ball (see Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"}): this leads to the limitation $B \subseteq B_\mathrm{op}(C_0)$. Interestingly, improving these bounds would also allow us to relax the limitation $r \in [1,4/3)$ in Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}, as we discuss below. Having dealt with this difficulty, the rest of the interpolation argument is very similar to [@montanari2022universality]. We show Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} in Section [4](#sec:proof_universality_gs){reference-type="ref" reference="sec:proof_universality_gs"}, deferring some arguments to Appendix [8](#sec_app:technical_universality){reference-type="ref" reference="sec_app:technical_universality"}. ## The Gaussian equivalent problem {#subsec:proof_gaussian_equivalent} We now study the Gaussian equivalent problem. We will later transfer our analysis to the original ellipsoid fitting case using Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"}. Our results are stated separately for the satisfiable and unsatisfiable regimes. [\[prop:regular_sol_gaussian\]]{#prop:regular_sol_gaussian label="prop:regular_sol_gaussian"} Let $n, d \to \infty$ with $n / d^2 \to \alpha < 1/4$, and let $\{G_\mu\}_{\mu=1}^n \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)$. 
Let $$V \coloneqq \{S \in \mcS_d \, : \, \forall \mu \in [n], \, {\rm Tr}[G_\mu S] = 1 \}.$$ There exist $0 < \lambda_- \leq \lambda_+$ (depending only on $\alpha$) such that: $$\lim_{d \to \infty} \mathbb{P}\{\exists S \in V \, \textrm{ s.t. } \mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+] \} = 1.$$ Proposition [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"} shows that for $\alpha < 1/4$, there exist w.h.p. ellipsoids satisfying the "Gaussian equivalent" to random ellipsoid fitting, and that such solutions may moreover be assumed to have their axes' lengths bounded above and below as $d \to \infty$. In the unsatisfiable regime $\alpha > 1/4$, we show on the other hand that with high probability there are no solutions to the Gaussian equivalent problem, even allowing for some error when fitting the random points. [\[prop:no_approx_unsat_gaussian\]]{#prop:no_approx_unsat_gaussian label="prop:no_approx_unsat_gaussian"} Let $n, d \to \infty$ with $n / d^2 \to \alpha > 1/4$, and let $\{G_\mu\}_{\mu=1}^n \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)$. Let $b \in \mathbb{R}$ and denote $\mcC_\mu^{(b)}(S) \coloneqq |{\rm Tr}(G_\mu S) - b|$. Define the affine subspace: $$V_b \coloneqq \{S \in \mcS_d \, : \, \forall \mu \in [n], \, \mcC^{(b)}_\mu(S) = 0 \}.$$ - Assume that $b \neq 0$. Then there exist $c = c(\alpha,b) > 0$ and $\eta = \eta(\alpha, b) \in (0,1)$ such that $$\lim_{d \to \infty} \mathbb{P}\{\forall S \succeq 0 \, : \, \# \{\mu \in [n] \, : \, \mcC^{(b)}_\mu(S) > c\} \geq \eta n\} = 1.$$ - Assume that $b = 0$.
Then there exist $c = c(\alpha) > 0$ and $\eta = \eta(\alpha) \in (0,1)$ such that, with probability $1 - o_d(1)$, the following holds for all $\tau \geq 0$: $$\sup_{\substack{S \succeq 0 \\ \# \{\mu \in [n] \, : \, \mcC^{(0)}_\mu(S) > c \tau\} < \eta n}} \|S\|_F \leq \tau \sqrt{d}.$$ Propositions [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"} and [\[prop:no_approx_unsat_gaussian\]](#prop:no_approx_unsat_gaussian){reference-type="ref" reference="prop:no_approx_unsat_gaussian"} are proven in Section [3](#sec:gaussian_equivalent){reference-type="ref" reference="sec:gaussian_equivalent"}. Our proof follows a standard approach in random geometry problems involving Gaussian distributions, by leveraging Gordon's min-max inequality [@gordon1988milman] and its sharpness in convex settings [@thrampoulidis2015regularized; @thrampoulidis2018precise]. For our setting, it strengthens the results obtained for general random convex programs in [@chandrasekaran2012convex; @amelunxen2014living] (using either Gordon's inequality or tools of integral geometry). ## The satisfiable regime: proof of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} {#subsec:proof_thm_main_positive_side} Propositions [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} and [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"} have the following consequence, taking $B \coloneqq \{S \, : \, \lambda_-\mathrm{I}_d \preceq S \preceq \lambda_+ \mathrm{I}_d\}$, with $(\lambda_-, \lambda_+)$ given by Proposition [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"}.
[\[cor:positive_side_weak_phi\]]{#cor:positive_side_weak_phi label="cor:positive_side_weak_phi"} Let $n, d \to \infty$ with $n / d^2 \to \alpha < 1/4$, and $W_1, \cdots, W_n \overset{\mathrm{i.i.d.}}{\sim}\mathrm{Ellipse}(d)$. There exist $\lambda_-, \lambda_+ > 0$ depending only on $\alpha$ such that the following holds. If $\phi : \mathbb{R}\to \mathbb{R}_+$ satisfies $\|\phi\|_\infty, \|\phi'\|_\infty < \infty$ and $\phi(0) = 0$, then $$\mathop{\mathrm{p-lim}}_{d\to \infty} \min_{\mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+]} \frac{1}{n} \sum_{\mu=1}^n \phi({\rm Tr}[W_\mu S] - 1) = 0.$$ The proof of Corollary [\[cor:positive_side_weak_phi\]](#cor:positive_side_weak_phi){reference-type="ref" reference="cor:positive_side_weak_phi"} is immediate by combining Propositions [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} and [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"}. We can furthermore relax some of the assumptions on $\phi$ in Corollary [\[cor:positive_side_weak_phi\]](#cor:positive_side_weak_phi){reference-type="ref" reference="cor:positive_side_weak_phi"}, as we now show. [\[lemma:positive_side_strong_phi\]]{#lemma:positive_side_strong_phi label="lemma:positive_side_strong_phi"} Corollary [\[cor:positive_side_weak_phi\]](#cor:positive_side_weak_phi){reference-type="ref" reference="cor:positive_side_weak_phi"} holds for $\phi(x) = |x|^r$, for any $1 \leq r < 4/3$. Note that the limitation $r < 4/3$ is a consequence of the limited control we have on an empirical process in Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} (see also the discussion in Section [4.1](#subsec:lipschitz_energy){reference-type="ref" reference="subsec:lipschitz_energy"}).
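The proof below truncates $|x|^r$ using a smooth cutoff $u_A$, equal to $1$ on $[0, A]$ and to $0$ on $[A+1, \infty)$. Any such $\mcC^\infty$ function works; as a minimal sketch (the helper names here are ours, purely illustrative), one explicit construction uses the standard bump $e^{-1/s}$:

```python
import math

def _bump(s: float) -> float:
    # Standard C^infinity bump: vanishes to all orders as s -> 0+.
    return math.exp(-1.0 / s) if s > 0 else 0.0

def smoothstep(t: float) -> float:
    # C^infinity transition: equals 0 for t <= 0 and 1 for t >= 1.
    return _bump(t) / (_bump(t) + _bump(1.0 - t))

def u_A(x: float, A: float) -> float:
    # Smooth cutoff: u_A(x) = 1 for x <= A, u_A(x) = 0 for x >= A + 1.
    return 1.0 - smoothstep(x - A)

def phi_A(z: float, r: float, A: float) -> float:
    # Truncation used in the proof: phi_A(z) = |z|^r * u_A(|z|),
    # which is bounded with bounded derivative.
    return abs(z) ** r * u_A(abs(z), A)
```

The argument only uses the boundary values of $u_A$ and the boundedness of $u_A'$, both of which this construction provides.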
**Proof of Lemma [\[lemma:positive_side_strong_phi\]](#lemma:positive_side_strong_phi){reference-type="ref" reference="lemma:positive_side_strong_phi"} --** Let $\varepsilon> 0$ and $A > 0$, and let $u_A : \mathbb{R}_+ \to [0,1]$ be a $\mcC^\infty$ function such that $u_A(x) = 1$ if $x \leq A$ and $u_A(x) = 0$ if $x \geq A+1$. We denote $\phi_A(z) \coloneqq |z|^r u_A(|z|)$. Then $\phi_A$ is bounded, with bounded derivative. Moreover, we have for any $x \in \mathbb{R}$: $$|x|^r = \phi_A(x) + |x|^r(1-u_A(|x|)) \leq \phi_A(x) + |x|^r \mathds{1}\{|x| \geq A\}.$$ By Corollary [\[cor:positive_side_weak_phi\]](#cor:positive_side_weak_phi){reference-type="ref" reference="cor:positive_side_weak_phi"}, under an event of probability $1 - o_d(1)$ we can fix $S$ with $\mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+]$ and such that $\sum_{\mu=1}^n \phi_A({\rm Tr}[W_\mu S] - 1) \leq n \varepsilon/2$. We pick $\gamma > 1$ such that $\gamma r \leq 4/3$, and condition on the following event, which has probability $1 - o_d(1)$ by Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"}: $$\label{eq:bound_emp_process} \max_{\|R\|_\mathrm{op}= 1} \sum_{\mu=1}^n |{\rm Tr}(W_\mu R) |^{\gamma r} \leq C n.$$ We have, with probability $1 - o_d(1)$: $$\begin{aligned} \frac{1}{n} \sum_{\mu=1}^n |{\rm Tr}[W_\mu S] - 1|^r &\leq \frac{\varepsilon}{2} + \frac{1}{n} \sum_{\mu=1}^n |{\rm Tr}[W_\mu S] - 1|^r \mathds{1}\{|{\rm Tr}[W_\mu S] - 1| \geq A\}, \\ &\overset{\mathrm{(a)}}{\leq}\frac{\varepsilon}{2}+ \frac{A^{r(1-\gamma)}}{n} \sum_{\mu=1}^n |{\rm Tr}[W_\mu S] - 1|^{\gamma r}, \\ &\overset{\mathrm{(b)}}{\leq}\frac{\varepsilon}{2}+ \frac{A^{r(1-\gamma)} 2^{\gamma r - 1}}{n} [n + C \lambda_+^{\gamma r} n], \\ &\leq \frac{\varepsilon}{2}+ C(\gamma, r, \alpha) A^{r(1-\gamma)}.
\end{aligned}$$ We used in $(\rm a)$ the following inequality, valid for a positive random variable $X$, $t > 0$, and any $\gamma > 1$: $$\mathbb{E}[X \mathds{1}\{X \geq t\}] = t \mathbb{E}\left[\frac{X}{t} \mathds{1}\left\{\frac{X}{t} \geq 1\right\}\right] \leq t^{1-\gamma} \mathbb{E}[X^\gamma].$$ In $(\rm b)$ we used eq. [\[eq:bound_emp_process\]](#eq:bound_emp_process){reference-type="eqref" reference="eq:bound_emp_process"} and $|a+b|^{r} \leq 2^{r-1}(|a|^r + |b|^r)$. We pick $A = [\varepsilon/ (2 C(\gamma, r, \alpha))]^{1/[r(1-\gamma)]}$. We then have, with probability $1 - o_d(1)$: $$\min_{\mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+]}\frac{1}{n} \sum_{\mu=1}^n |{\rm Tr}[W_\mu S] - 1|^r \leq \varepsilon,$$ which ends the proof. $\square$ **Proof of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} --** Notice that Lemma [\[lemma:positive_side_strong_phi\]](#lemma:positive_side_strong_phi){reference-type="ref" reference="lemma:positive_side_strong_phi"} precisely shows that, for $\phi(x) = |x|^r$ and $\varepsilon> 0$, the set $\Gamma_1$ of eq. [\[eq:def_Gamma_b\]](#eq:def_Gamma_b){reference-type="eqref" reference="eq:def_Gamma_b"} is non-empty with high probability. We now use the following remark (recall the definition of $\Gamma$ in eq.
[\[eq:def_Gamma\]](#eq:def_Gamma){reference-type="eqref" reference="eq:def_Gamma"}): [\[lemma:Gamma_1\_incl_Gamma\]]{#lemma:Gamma_1_incl_Gamma label="lemma:Gamma_1_incl_Gamma"} For any $(x_\mu)_{\mu=1}^n$ and $\lambda_-, \lambda_+, \varepsilon> 0, r \geq 1$, if $S \in \Gamma_1(|\cdot|^r, \lambda_-, \lambda_+, \varepsilon)$, then $\hS \in \Gamma(|\cdot|^r, \lambda_-', \lambda_+', \varepsilon')$, with $$\hS = \frac{dS}{\sqrt{d} + {\rm Tr}[S]}, \qquad \lambda_-' = \frac{\lambda_-}{\lambda_+ + d^{-1/2}}, \qquad \lambda_+' = \frac{\lambda_+}{\lambda_- + d^{-1/2}}, \qquad \varepsilon' = \frac{\varepsilon}{\left(\lambda_- + d^{-1/2}\right)^r}.$$ Since $\lambda_+' \leq \lambda_+ / \lambda_-$, $\varepsilon' \leq \varepsilon/(\lambda_-)^r$, and $\lambda_-' \geq \lambda_- / (2 \lambda_+)$ for $d$ large enough, combining Lemmas [\[lemma:positive_side_strong_phi\]](#lemma:positive_side_strong_phi){reference-type="ref" reference="lemma:positive_side_strong_phi"} and [\[lemma:Gamma_1\_incl_Gamma\]](#lemma:Gamma_1_incl_Gamma){reference-type="ref" reference="lemma:Gamma_1_incl_Gamma"} implies that for any $r \in [1, 4/3)$ and $\varepsilon> 0$: $$\mathbb{P}[\Gamma(|\cdot|^r, a, b, \varepsilon) \neq \emptyset] \to_{d \to \infty} 1,$$ for some $0 < a \leq b$ depending only on $\alpha$, which ends the proof of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}. $\qed$ **Proof of Lemma [\[lemma:Gamma_1\_incl_Gamma\]](#lemma:Gamma_1_incl_Gamma){reference-type="ref" reference="lemma:Gamma_1_incl_Gamma"} --** Let $S \in \Gamma_1(|\cdot|^r, \lambda_-, \lambda_+, \varepsilon)$. Defining $\hS = dS / (\sqrt{d} + {\rm Tr}[S])$, we have by eq.
[\[eq:correspondance_X\_W\]](#eq:correspondance_X_W){reference-type="eqref" reference="eq:correspondance_X_W"}: $$\left|\sqrt{d} \left[\frac{x_\mu^\intercal\hS x_\mu}{d} - 1\right]\right|^r = \left(\frac{d}{\sqrt{d} + {\rm Tr}[S]}\right)^r \left|{\rm Tr}[S W_\mu] - 1\right|^r \leq \frac{1}{\left(\lambda_- + d^{-1/2}\right)^r} \left|{\rm Tr}[S W_\mu] - 1\right|^r.$$ $\square$ ## The unsatisfiable regime: proof of Theorem [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"} Propositions [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} and [\[prop:no_approx_unsat_gaussian\]](#prop:no_approx_unsat_gaussian){reference-type="ref" reference="prop:no_approx_unsat_gaussian"} have the following corollary. [\[cor:negative_side_weak_phi\]]{#cor:negative_side_weak_phi label="cor:negative_side_weak_phi"} Let $n, d \to \infty$ with $n / d^2 \to \alpha > 1/4$, and $W_1, \cdots, W_n \overset{\mathrm{i.i.d.}}{\sim}\mathrm{Ellipse}(d)$. Let $\phi: \mathbb{R}_+ \to \mathbb{R}_+$ be a non-decreasing differentiable function, with $\phi(0) = 0$, and such that $\phi$ has a unique global minimum at $0$. Then: - Let $b \in \{-1, 1\}$. There exists $\varepsilon= \varepsilon(\alpha, \phi) > 0$ such that for all $M > 0$: $$\lim_{d\to \infty} \mathbb{P}\left[\min_{\mathrm{Sp}(S) \subseteq [0, M]} \frac{1}{n} \sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S] - b|) \geq \varepsilon\right] = 1.$$ - Let $b = 0$.
For all $\tau > 0$, there exists $\varepsilon= \varepsilon(\tau, \alpha, \phi) > 0$ such that for all $M > 0$: $$\lim_{d\to \infty} \mathbb{P}\left[\min_{\substack{\mathrm{Sp}(S) \subseteq [0, M] \\ \|S\|_F \geq \tau \sqrt{d}}} \frac{1}{n} \sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S]|) \geq \varepsilon\right] = 1.$$ **Proof of Corollary [\[cor:negative_side_weak_phi\]](#cor:negative_side_weak_phi){reference-type="ref" reference="cor:negative_side_weak_phi"} --** Note that we can assume that $\phi$ is bounded with bounded derivative and $\phi'(0) = 0$: if not, it is always possible to lower bound $\phi$ by such a function. The map $x \mapsto \phi(|x|)$ is then a bounded function on $\mathbb{R}$ with bounded derivative. We start with $(i)$. By Proposition [\[prop:no_approx_unsat_gaussian\]](#prop:no_approx_unsat_gaussian){reference-type="ref" reference="prop:no_approx_unsat_gaussian"}, there exist $c_\alpha, \eta_\alpha > 0$ such that $$\lim_{d \to \infty} \mathbb{P}\{\forall S \succeq 0 \, : \, \# \{\mu \in [n] \, : \, |{\rm Tr}(G_\mu S) - b| \leq c_\alpha\} \leq (1-\eta_\alpha) n\} = 1.$$ Conditioning on this event, we have $$\inf_{S \succeq 0} \sum_{\mu=1}^n \phi(|{\rm Tr}[G_\mu S] - b|) \geq n \eta_\alpha \phi(c_\alpha).$$ Using Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} with $B = \{S \, : \, \mathrm{Sp}(S) \subseteq [0, M]\}$ we reach that, for all $M > 0$, with probability $1 - o_d(1)$: $$\inf_{\mathrm{Sp}(S) \subseteq [0, M]} \frac{1}{n}\sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S] - b|) \geq \frac{1}{2}\eta_\alpha \phi(c_\alpha).$$ We now turn to $(ii)$.
Again by Proposition [\[prop:no_approx_unsat_gaussian\]](#prop:no_approx_unsat_gaussian){reference-type="ref" reference="prop:no_approx_unsat_gaussian"}, we fix $c_\alpha, \eta_\alpha > 0$ such that for all $\tau \geq 0$: $$\lim_{d \to \infty} \mathbb{P}\left\{\sup_{\substack{S \succeq 0 \\ \# \{\mu \in [n] \, : \, |{\rm Tr}(G_\mu S)| > c_\alpha \tau\} < \eta_\alpha n}} \|S\|_F \leq \frac{\tau}{2} \sqrt{d}\right\} = 1.$$ Stated differently: $$\lim_{d \to \infty} \mathbb{P}\{\forall S \succeq 0 : \, \|S\|_F \leq \frac{\tau}{2} \sqrt{d} \, \textrm{ or } \, \# \{\mu \in [n] \, : \, |{\rm Tr}(G_\mu S)| > c_\alpha \tau\} \geq \eta_\alpha n\} = 1.$$ Conditioning on this event and since $\phi$ is non-decreasing on $\mathbb{R}_+$: $$\inf_{\substack{S \succeq 0 \\ \|S\|_F \geq \tau \sqrt{d}}} \sum_{\mu=1}^n \phi(|{\rm Tr}[G_\mu S]|) \geq n \eta_\alpha \phi(c_\alpha \tau).$$ Using Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} with $B = \{S \, : \, 0 \preceq S \preceq M \mathrm{I}_d \, \textrm{ and } \|S\|_F \geq \tau\sqrt{d}\}$ we reach that, for all $\tau, M$, with probability $1 - o_d(1)$: $$\inf_{\substack{\mathrm{Sp}(S) \subseteq [0, M] \\ \|S\|_F \geq \tau \sqrt{d}}} \frac{1}{n}\sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S]|) \geq \frac{1}{2}\eta_\alpha \phi(c_\alpha \tau),$$ which ends the proof. $\square$ We now turn to the proof of Theorem [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"}. **Proof of Theorem [\[thm:main_negative_side\]](#thm:main_negative_side){reference-type="ref" reference="thm:main_negative_side"} --** Let $M > 0$. As in the proof of Corollary [\[cor:negative_side_weak_phi\]](#cor:negative_side_weak_phi){reference-type="ref" reference="cor:negative_side_weak_phi"}, we can assume without loss of generality that $\phi$ has bounded derivative: if it is not, it is always possible to lower bound $\phi$ by such a function. 
Let $\delta \in (0,1)$, and $S$ with $\mathrm{Sp}(S) \subseteq [0, M]$ and $|{\rm Tr}[S] - d| \geq \delta \sqrt{d}$. Notice that, defining $S' \coloneqq \sqrt{d}S / |d - {\rm Tr}[S]| \succeq 0$, we have with $b \coloneqq \mathop{\mathrm{sign}}(d - {\rm Tr}[S]) \in \{\pm 1\}$: $${\rm Tr}[S' W_\mu] - b = \frac{x_\mu^\intercal S x_\mu - d}{|{\rm Tr}S - d|},$$ and so since $\phi$ is non-decreasing: $$\phi\left(\sqrt{d} \left|\frac{x_\mu^\intercal S x_\mu}{d} - 1\right|\right) \geq \phi(\delta |{\rm Tr}(W_\mu S') - b|).$$ Moreover, $\mathrm{Sp}(S') \subseteq [0, M / \delta]$. Since this argument is valid for any $S$ with $\mathrm{Sp}(S) \subseteq [0, M]$ we get: $$\min_{\substack{\mathrm{Sp}(S) \subseteq [0, M]\\ |{\rm Tr}[S] - d| \geq \delta \sqrt{d}}} \frac{1}{n} \sum_{\mu=1}^n \phi\left(\sqrt{d} \left|\frac{x_\mu^\intercal S x_\mu}{d} - 1\right|\right) \geq \min_{b \in \{-1,1\}}\min_{\mathrm{Sp}(S) \subseteq [0, M / \delta]} \frac{1}{n} \sum_{\mu=1}^n \phi(\delta |{\rm Tr}[W_\mu S] - b|).$$ Using Corollary [\[cor:negative_side_weak_phi\]](#cor:negative_side_weak_phi){reference-type="ref" reference="cor:negative_side_weak_phi"} applied to $x \mapsto \phi(\delta x)$ there exists therefore $c = c(\alpha, \delta, \phi) > 0$ such that with probability $1 - o_d(1)$: $$\label{eq:lb_negative_side_trace_far} \min_{\substack{\mathrm{Sp}(S) \subseteq [0, M]\\ |{\rm Tr}[S] - d| \geq \delta \sqrt{d}}} \frac{1}{n} \sum_{\mu=1}^n \phi\left(\sqrt{d} \left|\frac{x_\mu^\intercal S x_\mu}{d} - 1\right|\right) \geq c(\alpha, \delta, \phi).$$ Let now $S \in \mcS_d$ with $\mathrm{Sp}(S) \subseteq [0, M]$ and $|{\rm Tr}[S] - d| \leq \delta \sqrt{d}$. 
Then: $${\rm Tr}[W_\mu S] = \sqrt{d} \left(\frac{x_\mu^\intercal S x_\mu}{d} - 1\right) + \underbrace{r(S)}_{|r(S)| \leq \delta},$$ so that $$\label{eq:lb_negative_side_trace_close_1} \min_{\substack{\mathrm{Sp}(S) \subseteq [0, M]\\ |{\rm Tr}[S] - d| \leq \delta \sqrt{d}}} \frac{1}{n} \sum_{\mu=1}^n \phi\left(\sqrt{d} \left|\frac{x_\mu^\intercal S x_\mu}{d} - 1\right|\right) \geq \min_{\substack{\mathrm{Sp}(S) \subseteq [0, M]\\ |{\rm Tr}[S] - d| \leq \delta \sqrt{d}}} \frac{1}{n} \sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S]|) - \|\phi'\|_\infty \delta.$$ Notice that since $\delta < 1$, for large enough $d$ we have $|{\rm Tr}[S] - d|\leq \delta \sqrt{d} \Rightarrow {\rm Tr}[S] \geq d/2$. If moreover $S \succeq 0$, by Cauchy-Schwarz we have $\|S\|_F \geq {\rm Tr}[S]/\sqrt{d} \geq \sqrt{d}/2$. This implies: $$\label{eq:lb_negative_side_trace_close_2} \min_{\substack{\mathrm{Sp}(S) \subseteq [0, M]\\ |{\rm Tr}[S] - d| \leq \delta \sqrt{d}}} \frac{1}{n} \sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S]|) \geq \min_{\substack{\mathrm{Sp}(S) \subseteq [0, M]\\ \|S\|_F \geq \sqrt{d}/2}} \frac{1}{n} \sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S]|).$$ Using Corollary [\[cor:negative_side_weak_phi\]](#cor:negative_side_weak_phi){reference-type="ref" reference="cor:negative_side_weak_phi"} we can obtain $\varepsilon= \varepsilon(\alpha, \phi) > 0$ such that, with probability $1 - o_d(1)$, we have: $$\label{eq:lb_negative_side_trace_close_3} \min_{\substack{\mathrm{Sp}(S) \subseteq [0, M]\\ \|S\|_F \geq \sqrt{d}/2}} \frac{1}{n} \sum_{\mu=1}^n \phi(|{\rm Tr}[W_\mu S]|) \geq \varepsilon.$$ Combining eq. 
[\[eq:lb_negative_side_trace_far\]](#eq:lb_negative_side_trace_far){reference-type="eqref" reference="eq:lb_negative_side_trace_far"} with all three equations [\[eq:lb_negative_side_trace_close_1\]](#eq:lb_negative_side_trace_close_1){reference-type="eqref" reference="eq:lb_negative_side_trace_close_1"},[\[eq:lb_negative_side_trace_close_2\]](#eq:lb_negative_side_trace_close_2){reference-type="eqref" reference="eq:lb_negative_side_trace_close_2"},[\[eq:lb_negative_side_trace_close_3\]](#eq:lb_negative_side_trace_close_3){reference-type="eqref" reference="eq:lb_negative_side_trace_close_3"}, we get that for any $\delta > 0$, with probability $1 - o_d(1)$: $$\min_{\mathrm{Sp}(S) \subseteq [0, M]} \frac{1}{n} \sum_{\mu=1}^n \phi\left(\sqrt{d} \left|\frac{x_\mu^\intercal S x_\mu}{d} - 1\right|\right) \geq \min[c(\alpha, \delta, \phi), \varepsilon(\alpha, \phi) - \delta \|\phi'\|_\infty].$$ Taking $\delta \coloneqq \min\left(1,\varepsilon(\alpha, \phi)/(2 \|\phi'\|_\infty)\right) > 0$ ends the proof. $\square$ # The Gaussian equivalent problem {#sec:gaussian_equivalent} ## The satisfiable regime: proof of Proposition [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"} {#subsec:gaussian_sat} ### Gordon's min-max theorem We will use the Gaussian min-max theorem of Gordon [@gordon1988milman], as stated in [@thrampoulidis2015regularized; @thrampoulidis2018precise]: [\[prop:gaussian_minmax\]]{#prop:gaussian_minmax label="prop:gaussian_minmax"} Let $n, p \geq 1$, let $W \in \mathbb{R}^{n \times p}$ be a matrix with i.i.d. standard normal entries, and let $g \in \mathbb{R}^n, h \in \mathbb{R}^p$ be two independent vectors with i.i.d. $\mcN(0,1)$ coordinates. Let $\mcS_v, \mcS_u$ be two compact subsets respectively of $\mathbb{R}^p$ and $\mathbb{R}^n$, and let $\psi : \mcS_v \times \mcS_u \to \mathbb{R}$ be a continuous function.
We define the two optimization problems: $$\begin{dcases} C(W) &\coloneqq \min_{v \in \mcS_v} \max_{u \in \mcS_{u}} \left\{u^\intercal W v+ \psi(v, u)\right\}, \\ \mcC(g, h) &\coloneqq \min_{v \in \mcS_v} \max_{u \in \mcS_{u}} \left\{\norm{u}_2 h^\intercal v + \norm{v}_2 g^\intercal u+ \psi(v, u)\right\}. \end{dcases}$$ Then: - For all $t \in \mathbb{R}$, one has $$\mathbb{P}[C(W) < t] \leq 2\mathbb{P}[\mcC(g,h) \leq t].$$ - Assume that $\mcS_v, \mcS_u$ are convex and that $\psi$ is convex-concave on $\mcS_v \times \mcS_u$. Then for all $t \in \mathbb{R}$: $$\mathbb{P}[C(W) > t] \leq 2\mathbb{P}[\mcC(g,h) \geq t].$$ ### Gordon's min-max inequality and random geometry We first introduce the notion of Gaussian width of a convex cone. [\[def:gaussian_width\]]{#def:gaussian_width label="def:gaussian_width"} For $p \geq 1$, and a closed convex cone $K \subseteq \mathbb{R}^p$, we define its *Gaussian width* as $$\omega(K) \coloneqq \mathbb{E}\max_{x \in K \cap \mathbb{S}^{p-1}} \langle g, x\rangle,$$ for $g \sim \mcN(0, \mathrm{I}_p)$. We now show a general result leveraging Gordon's min-max inequality to prove the existence of a solution to a general type of random geometry problem. Such applications are classical, and we show here that one can furthermore assume that the solution is bounded. [\[lemma:bounded_sols_general\]]{#lemma:bounded_sols_general label="lemma:bounded_sols_general"} Let $n, p \geq 1$, and $(g_\mu)_{\mu=1}^n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_p)$. We define $V \coloneqq \{x \in \mathbb{R}^p \, : \forall \mu \in [n], \, \langle g_\mu, x \rangle = 1 \}$. Let $K \subseteq \mathbb{R}^p$ be a closed convex cone, with Gaussian width $\omega(K)$. Assume that there exists $\varepsilon \in (0,1)$ such that $n \leq (1-\varepsilon) \, \omega(K)^2$ as $n \to \infty$. Then: $$\label{eq:bounded_sols_general} \lim_{n \to \infty} \mathbb{P}\left\{\exists x \in K \cap V \, \textrm{ s.t.
} \, \|x\|_2 \leq \frac{2}{\sqrt{\varepsilon}} \right\} = 1.$$ **Proof of Lemma [\[lemma:bounded_sols_general\]](#lemma:bounded_sols_general){reference-type="ref" reference="lemma:bounded_sols_general"} --** Let us denote, for $A > 0$: $$P_n(A) \coloneqq \mathbb{P}\left\{\exists x \in K \cap V \, \textrm{ s.t. } \, \|x\|_2 \leq A \right\},$$ and define $G \in \mathbb{R}^{n \times p}$ as the Gaussian matrix with $g_\mu$ as its $\mu$-th row. By elementary compactness and duality arguments, we have: $$1 - P_n(A) = \mathbb{P}\Big[\min_{\substack{x \in K \\ \|x \|_2 \leq A}} \|G x - \mathbf{1}_n \|_2 > 0\Big] = \mathbb{P}\Big[\min_{\substack{x \in K \\ \|x \|_2 \leq A}} \max_{\| \lambda\|_2 \leq 1} \{-\lambda^\intercal \mathbf{1}_n + \lambda^\intercal G x\} > 0\Big].$$ By dominated convergence we then have $$\label{eq:ub_PnA_1} 1 - P_n(A) = \lim_{\eta \to 0} \mathbb{P}\Big[\min_{\substack{x \in K \\ \|x \|_2 \leq A}} \max_{\| \lambda\|_2 \leq 1} \{-\lambda^\intercal \mathbf{1}_n + \lambda^\intercal G x\} > \eta\Big].$$ Note that in eq. [\[eq:ub_PnA_1\]](#eq:ub_PnA_1){reference-type="eqref" reference="eq:ub_PnA_1"}, both $x$ and $\lambda$ belong to a convex and compact set (since $K$ is closed and convex), and $\psi(x, \lambda) = -\lambda^\intercal\mathbf{1}_n$ is clearly convex-concave. We can thus apply item $(ii)$ of Proposition [\[prop:gaussian_minmax\]](#prop:gaussian_minmax){reference-type="ref" reference="prop:gaussian_minmax"}: $$\label{eq:ub_PnA_2} 1 - P_n(A) \leq 2 \lim_{\eta \to 0} \mathbb{P}\Big[\min_{\substack{x \in K \\ \|x \|_2 \leq A}} \max_{\| \lambda\|_2 \leq 1} \{-\lambda^\intercal \mathbf{1}_n + \| \lambda\|_2 g^\intercal x + \|x \|_2 \lambda^\intercal h\} \geq \eta\Big],$$ with $g \sim \mcN(0, \mathrm{I}_p)$ and $h \sim \mcN(0, \mathrm{I}_n)$.
We then control the right-hand side of the last equation, using that $K$ is a cone: $$\begin{aligned} \label{eq:ub_PnA_3} \nonumber &\min_{\substack{x \in K \\ \|x \|_2 \leq A}} \max_{\| \lambda\|_2 \leq 1} \{-\lambda^\intercal \mathbf{1}_n + \| \lambda\|_2 g^\intercal x + \|x \|_2 \lambda^\intercal h\} = \min_{\substack{x \in K \\ \|x \|_2 \leq A}} \max \{0, \|\|x\|_2 h - \mathbf{1}_n \|_2 + g^\intercal x\}, \\ \nonumber &= \max \Big\{0,\min_{\substack{x \in K \\ \|x \|_2 \leq A}} \big[\|\|x\|_2 h - \mathbf{1}_n \|_2 + g^\intercal x \big]\Big\}, \\ &= \max \Big\{0,\min_{v \in [0, A]} \big[\|v h - \mathbf{1}_n \|_2 + v \min_{x \in K \cap \mathbb{S}^{p-1}}g^\intercal x \big]\Big\}. \end{aligned}$$ Note that $g \mapsto \max_{x \in K \cap \mathbb{S}^{p-1}}[g^\intercal x]$ is $1$-Lipschitz, and in particular concentrates around its average, which by definition is the Gaussian width $\omega(K)$. We use the classical result (see e.g. Theorem 3.25 of [@van2014probability]): [\[thm:gaussian_conc_lipschitz\]]{#thm:gaussian_conc_lipschitz label="thm:gaussian_conc_lipschitz"} Let $X_1, \cdots, X_n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0,1)$. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a Lipschitz function. Then for all $t \geq 0$: $$\mathbb{P}[f(X_1, \cdots, X_n) - \mathbb{E}f(X_1, \cdots, X_n) \geq t] \leq \exp\Big\{-\frac{t^2}{2 \| f \|_\mathrm{L}^2}\Big\}.$$ Therefore, for $\delta \in (0, \omega(K))$, we have[^5]: $$\label{eq:concentration_gwidth} \mathbb{P}\Big\{\min_{x \in K \cap \mathbb{S}^{p-1}}[g^\intercal x] \geq - \omega(K) + \delta\Big\} \leq e^{-\delta^2/2}.$$ From eqs.
[\[eq:ub_PnA_2\]](#eq:ub_PnA_2){reference-type="eqref" reference="eq:ub_PnA_2"},[\[eq:ub_PnA_3\]](#eq:ub_PnA_3){reference-type="eqref" reference="eq:ub_PnA_3"} and [\[eq:concentration_gwidth\]](#eq:concentration_gwidth){reference-type="eqref" reference="eq:concentration_gwidth"} we have: $$\label{eq:ub_PnA_4} 1 - P_n(A) \leq 2 \lim_{\eta \to 0} \mathbb{P}\Big[ \max \Big\{0,\min_{v \in [0, A]} \big[\|v h - \mathbf{1}_n \|_2 + v (- \omega(K) + \delta) \big]\Big\}\geq \eta\Big] + 2 e^{-\delta^2/2}.$$ Recall that we assumed $n \leq (1-\varepsilon) \, \omega(K)^2$ and $n \to \infty$. Therefore, for $n \geq n_0(\varepsilon,\delta)$ large enough we can assume $n \leq (1-\varepsilon/2) \, [\omega(K) - \delta]^2$. Let $v^\star = v^\star(\varepsilon) = \sqrt{(4-\varepsilon)/\varepsilon}$ such that: $$\Big(1 - \frac{\varepsilon}{4}\Big) \mathbb{E}_{X \sim \mcN(0,1)} [(v^\star X - 1)^2] = (v^\star)^2.$$ Thus, for $A = v^\star(\varepsilon)$, we have from eq. [\[eq:ub_PnA_4\]](#eq:ub_PnA_4){reference-type="eqref" reference="eq:ub_PnA_4"}: $$\label{eq:ub_PnA_5} 1 - P_n(v^\star) \leq 2 \lim_{\eta \to 0} \mathbb{P}\Big[ \max \Big\{0, \big[\|v^\star h - \mathbf{1}_n \|_2 + v^\star (- \omega(K) + \delta) \big]\Big\}\geq \eta\Big] + 2 e^{-\delta^2/2}.$$ By the law of large numbers we have ($\overset{\rm p}{\to}$ denotes convergence in probability): $$\frac{1}{\sqrt{n}} \|v^\star h - \mathbf{1}_n \|_2 \overset{\rm p}{\to}\sqrt{\mathbb{E}[(v^\star X - 1)^2]} = \frac{v^\star}{\sqrt{1 - \frac{\varepsilon}{4}}} < \frac{v^\star}{\sqrt{1 - \frac{\varepsilon}{3}}}.$$ In particular, with probability $1 - o_n(1)$, we have $$\|v^\star h - \mathbf{1}_n \|_2 + v^\star (- \omega(K) + \delta) \leq v^\star \sqrt{n} \left[\left(1 - \frac{\varepsilon}{3}\right)^{-1/2} - \left(1 - \frac{\varepsilon}{2}\right)^{-1/2} \right] < 0.$$ Therefore, we have by eq. 
[\[eq:ub_PnA_5\]](#eq:ub_PnA_5){reference-type="eqref" reference="eq:ub_PnA_5"}: $$1 - P_n(v^\star) \leq o_n(1) + 2 e^{-\delta^2/2}.$$ Taking the limit $n \to \infty$ and then $\delta \to \infty$ finishes the proof[^6] (notice that $v^\star \leq 2/\sqrt{\varepsilon}$). $\square$ ### The cone of positive matrices with bounded condition number We study here the Gaussian width of the convex cone of positive semidefinite matrices with bounded condition number. We start with a classical result on the Gaussian width of $\mcS_d^+$ [@chandrasekaran2012convex; @amelunxen2014living]; we refer the reader to Proposition 10.2 of [@amelunxen2014living] for a proof. [\[prop:gwidth_pds\]]{#prop:gwidth_pds label="prop:gwidth_pds"} The Gaussian width of $\mcS_d^+$ satisfies: $$\sqrt{\frac{d(d+1)}{4} - 1} \leq \omega(\mcS_d^+) \leq \sqrt{\frac{d(d+1)}{4}}.$$ In particular, notice that $\omega(\mcS_d^+) \sim d/2$ as $d \to \infty$. We generalize this result by asking that the matrices have bounded condition number. [\[lemma:gwidth_positive_condition_nb\]]{#lemma:gwidth_positive_condition_nb label="lemma:gwidth_positive_condition_nb"} For any $\kappa \geq 1$, define $K_\kappa \coloneqq \{S \in \mcS_d^+ \, : \, \lambda_\mathrm{max}(S) \leq \kappa \lambda_\mathrm{min}(S) \}$. Then $K_\kappa$ is a closed convex cone, and its Gaussian width satisfies $$\liminf_{d \to \infty} \frac{2 \omega(K_\kappa)}{d} = f(\kappa),$$ where $\kappa \mapsto f(\kappa) \in [0,1]$ is non-decreasing, with $\lim_{\kappa \to \infty} f(\kappa) = 1$. **Proof of Lemma [\[lemma:gwidth_positive_condition_nb\]](#lemma:gwidth_positive_condition_nb){reference-type="ref" reference="lemma:gwidth_positive_condition_nb"} --** The fact that $K_\kappa$ is a closed convex cone is easy to verify.
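Before continuing, a numerical sanity check of Proposition [\[prop:gwidth_pds\]](#prop:gwidth_pds){reference-type="ref" reference="prop:gwidth_pds"} (a sketch, not used in the argument): the maximizer of ${\rm Tr}[MS]$ over $\{S \succeq 0, \, \|S\|_\mathrm{F} \leq 1\}$ is the normalized projection of $M$ onto $\mcS_d^+$, so that $\omega(\mcS_d^+) = \mathbb{E}\|M_+\|_\mathrm{F}$, with $M_+$ the positive part of $M$ and $M$ the symmetric matrix whose flattening is standard Gaussian ($M_{aa} \sim \mcN(0,1)$, $M_{ab} \sim \mcN(0,1/2)$ for $a < b$). This is straightforward to estimate by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_width_psd_mc(d: int, n_trials: int = 200) -> float:
    # omega(S_d^+) = E ||M_+||_F for M with vec(M) ~ N(0, I): taking
    # M = (A + A^T)/2 with A i.i.d. N(0,1) gives exactly M_aa ~ N(0,1)
    # and M_ab ~ N(0, 1/2) for a < b.
    vals = []
    for _ in range(n_trials):
        A = rng.standard_normal((d, d))
        M = (A + A.T) / 2.0
        lam = np.linalg.eigvalsh(M)
        # ||M_+||_F: keep only the positive eigenvalues of M.
        vals.append(np.sqrt(np.sum(np.maximum(lam, 0.0) ** 2)))
    return float(np.mean(vals))

d = 40
est = gaussian_width_psd_mc(d)
# Proposition gwidth_pds: omega(S_d^+) lies between sqrt(d(d+1)/4 - 1)
# and sqrt(d(d+1)/4), i.e. close to d/2.
print(est, np.sqrt(d * (d + 1) / 4))
```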
Moreover, for any $\kappa \leq \kappa'$ we have $K_\kappa \subseteq K_{\kappa'} \subseteq \mcS_d^+$, and $\omega(\mcS_d^+) \sim d/2$ by Proposition [\[prop:gwidth_pds\]](#prop:gwidth_pds){reference-type="ref" reference="prop:gwidth_pds"}, so $\kappa \mapsto f(\kappa) \in [0,1]$, and $f$ is non-decreasing. To finish the proof, we show that $f(\kappa) \to 1$ as $\kappa \to \infty$. We let $\kappa > 1$. To identify $\mcS_d$ with $\mathbb{R}^{d(d+1)/2}$, we use the matrix flattening function, for $M \in \mcS_d$: $$\begin{aligned} \label{eq:def_flattening} \mathop{\mathrm{vec}}(M) &\coloneqq ((\sqrt{2}M_{ab})_{1 \leq a < b \leq d}, (M_{aa})_{a=1}^d) \in \mathbb{R}^{d(d+1)/2}, \\ \nonumber &= ((2 - \delta_{ab})^{1/2} M_{ab})_{a\leq b}.\end{aligned}$$ It is an isometry ($\langle \mathop{\mathrm{vec}}(M), \mathop{\mathrm{vec}}(N) \rangle = {\rm Tr}[MN]$), and if $Z \sim \mathrm{GOE}(d)$ then $\mathop{\mathrm{vec}}(Z)$ has i.i.d. $\mcN(0, 2 / d)$ coordinates. We thus reach: $$\omega(K_\kappa) = \mathbb{E}\max_{\substack{S \in K_\kappa \\ \|S \|_\mathrm{F}^2 = 2 d}} \left\{\frac{1}{2} {\rm Tr}[Z S]\right\},$$ in which $Z \sim \mathrm{GOE}(d)$, cf. Definition [\[def:matrix_ensembles\]](#def:matrix_ensembles){reference-type="ref" reference="def:matrix_ensembles"}. Letting $z_1 \geq \cdots \geq z_d$ be the eigenvalues of $Z$, by Wigner's theorem [@anderson2010introduction] $(1/d) \sum_i \delta_{z_i}$ weakly converges as $d \to \infty$ (a.s.) to $\sigma_{\mathrm{s.c.}}(\mathrm{d}x) = (2\pi)^{-1} \sqrt{4-x^2} \mathds{1}\{ |x|\leq 2\} \mathrm{d}x$. Moreover, we have (taking $S$ with the same eigenvectors as $Z$)[^7]: $$\label{eq:lb_omega_Kkappa} \omega(K_\kappa) \geq \mathbb{E}\max_{\substack{\lambda_1 \geq \cdots \geq \lambda_d \geq 0 \\ \sum \lambda_i^2 \leq 2 d \\ \lambda_1 \leq \kappa \lambda_d}} \left\{\frac{1}{2} \sum_{i=1}^d \lambda_i z_i\right\}.$$ Let us now sketch the end of the proof.
We define $$\label{eq:def_lambdastar_Kkappa} \begin{dcases} \gamma(\kappa) &\coloneqq \sqrt{\frac{2}{\int_{2 \kappa^{-1}}^2 x^2 \, \sigma_\mathrm{s.c.}(\mathrm{d}x) + \frac{4}{\kappa^2} \int_{-2}^{2 \kappa^{-1}} \sigma_\mathrm{s.c.}(\mathrm{d}x)}}, \\ \lambda_i^\star &\coloneqq \gamma(\kappa) \Big[z_i \mathds{1}\{z_i \geq 2 \kappa^{-1}\} + \frac{2}{\kappa} \mathds{1}\{z_i < 2 \kappa^{-1}\}\Big]. \end{dcases}$$ Let $\varepsilon> 0$. Since one can show that $z_1 \to 2$ a.s. as $d \to \infty$ [@anderson2010introduction], with high probability we have $\lambda_1^\star \leq \kappa(1+\varepsilon) \lambda_d^\star$. Moreover, one checks easily that $d^{-1} \sum_{i=1}^d [\lambda_i^\star]^2 \overset{\rm p}{\to}2$ as $d \to \infty$. Letting $\mu_i \coloneqq \lambda_i^\star / \sqrt{1+\varepsilon}$, we can therefore use $\{\mu_i\}$ to lower bound $\omega(K_\kappa)$ as $d \to \infty$, with $\kappa_\varepsilon\coloneqq \kappa(1+\varepsilon)$. This yields that, for any $\varepsilon> 0$: $$\label{eq:lb_omega_Kkappa_2} f(\kappa_\varepsilon) = \liminf_{d \to \infty} \frac{2 \omega(K_{\kappa_\varepsilon})}{d} \geq \frac{1}{\sqrt{1+\varepsilon}} \sqrt{\frac{2 \left[\int_{2 \kappa^{-1}}^2 x^2 \, \sigma_\mathrm{s.c.}(\mathrm{d}x) + \frac{2}{\kappa} \int_{-2}^{2 \kappa^{-1}} x \, \sigma_\mathrm{s.c.}(\mathrm{d}x)\right]^2}{\int_{2 \kappa^{-1}}^2 x^2 \, \sigma_\mathrm{s.c.}(\mathrm{d}x) + \frac{4}{\kappa^2} \int_{-2}^{2 \kappa^{-1}} \sigma_\mathrm{s.c.}(\mathrm{d}x)}} .$$ Taking the limits $\kappa \to \infty$ and $\varepsilon\to 0$ in eq. [\[eq:lb_omega_Kkappa_2\]](#eq:lb_omega_Kkappa_2){reference-type="eqref" reference="eq:lb_omega_Kkappa_2"} yields $\lim_{\kappa \to \infty} f(\kappa) \geq 1$, which ends the proof. 
$\square$ ### Proof of Proposition [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"} We can now complete the proof of Proposition [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"}. Since $\alpha = n/d^2 < 1/4$, by Lemma [\[lemma:gwidth_positive_condition_nb\]](#lemma:gwidth_positive_condition_nb){reference-type="ref" reference="lemma:gwidth_positive_condition_nb"}, we can find $\kappa = \kappa(\alpha)$ and $\varepsilon= \varepsilon(\alpha)$ such that $n \leq (1 - \varepsilon) \omega(K_\kappa)^2$ for $n$ large enough. Therefore, by Lemma [\[lemma:bounded_sols_general\]](#lemma:bounded_sols_general){reference-type="ref" reference="lemma:bounded_sols_general"} there exists $A = A(\alpha) > 0$ such that: $$\label{eq:regular_sol_gauss_1} \lim_{d \to \infty} \mathbb{P}\left\{\exists S \in V \, \textrm{ s.t. } S \succeq 0 \, \textrm{ and } \,{\rm Tr}[S^2] \leq A d \,\textrm{ and } \, \lambda_\mathrm{max}(S) \leq \kappa \lambda_\mathrm{min}(S) \right\} = 1.$$ Notice that $H \coloneqq n^{-1/2} \sum_{\mu=1}^n G_\mu \sim \mathrm{GOE}(d)$, and thus $\mathbb{P}[\|H\|_\mathrm{op}\leq 3] = 1 - o_d(1)$ [@vershynin2018high]. Conditioning on this event, let $S \in V$. 
Then ${\rm Tr}[HS] = \sqrt{n} = \sqrt{\alpha} d$, and thus by duality of $\|\cdot\|_\mathrm{op}$ and $\|\cdot\|_{S_1}$: $${\rm Tr}|S| \geq \frac{1}{\|H\|_\mathrm{op}} |{\rm Tr}[H S]| \geq \frac{\sqrt{\alpha}}{3} d.$$ The proof of Proposition [\[prop:regular_sol_gaussian\]](#prop:regular_sol_gaussian){reference-type="ref" reference="prop:regular_sol_gaussian"} is then completed (with $B(\alpha) \coloneqq \sqrt{\alpha}/3$, since ${\rm Tr}[S] = {\rm Tr}|S|$ for $S \succeq 0$) by noticing that: $$\mathrm{Sp}(S) \subseteq [\lambda_-, \lambda_+] \Leftarrow \begin{cases} &S \succeq 0, \\ &{\rm Tr}[S^2] \leq A(\alpha) d,\\ &\lambda_{\max}(S) \leq \kappa(\alpha) \lambda_{\min}(S) , \\ &{\rm Tr}[S] \geq B(\alpha) d, \end{cases}$$ for some $0 < \lambda_- \leq \lambda_+$ depending only on $\alpha$. $\qed$ ## The unsatisfiable regime: proof of Proposition [\[prop:no_approx_unsat_gaussian\]](#prop:no_approx_unsat_gaussian){reference-type="ref" reference="prop:no_approx_unsat_gaussian"} {#subsec:gaussian_unsat} We will show the following general result on the unsatisfiability of approximate versions of a general class of random geometry problems. [\[prop:thick_mesh\]]{#prop:thick_mesh label="prop:thick_mesh"} Let $b \in \mathbb{R}$, $p, n \to \infty$, $K \subseteq \mathbb{R}^p$ a closed convex cone, with Gaussian width $\omega(K)$, $(a_\mu)_{\mu=1}^n \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_p)$, and denote by $\mcC_\mu^{(b)}(x) \coloneqq |a_\mu^\intercal x - b|$ the $\mu$-th "constraint". We denote $$V_b \coloneqq \{x \in \mathbb{R}^p \, : \forall \mu \in [n], \, \, \mcC^{(b)}_\mu(x) = 0 \}$$ a randomly-oriented affine subspace. Then we have the following:

- Assume that $b \neq 0$. Then:
    - If there exists $\varepsilon > 0$ such that $n \leq (1-\varepsilon) \, \omega(K)^2$ as $n \to \infty$, then $\mathbb{P}[V_{b} \cap K \neq \emptyset] \to_{n\to\infty} 1$.
    - If there exists $\varepsilon > 0$ such that $n \geq (1+\varepsilon) \, \omega(K)^2$ as $n \to \infty$, then there exist $c = c(\varepsilon,b) > 0$ and $\eta = \eta(\varepsilon,b) \in (0,1)$ such that $$\lim_{n \to \infty} \mathbb{P}\{\forall x \in K \, : \, \# \{\mu \in [n] \, : \, \mcC^{(b)}_\mu(x) > c\} \geq \eta n\} = 1.$$
- Assume that $b = 0$. Note that $V_0$ is a linear subspace, and $V_{0} \cap K$ is a closed convex cone.
    - Assume that $n \leq (1-\varepsilon) \, \omega(K)^2$ for some $\varepsilon> 0$. Then as $n \to \infty$, $\mathbb{P}[V_{0} \cap K \cap \mathbb{S}^{p-1} \neq \emptyset] \to 1$.
    - Assume that $n \geq (1+\varepsilon) \, \omega(K)^2$ for some $\varepsilon> 0$. Then there exist $c = c(\varepsilon) > 0$ and $\eta = \eta(\varepsilon) \in (0,1)$ such that, with probability $1 - o_n(1)$, the following holds for all $\tau \geq 0$: $$\max_{\substack{x \in K \\ \# \{\mu \in [n] \, : \, \mcC^{(0)}_\mu(x) > c \tau\} < \eta n}} \|x\|_2 \leq \tau.$$

It is clear that Proposition [\[prop:thick_mesh\]](#prop:thick_mesh){reference-type="ref" reference="prop:thick_mesh"} implies Proposition [\[prop:no_approx_unsat_gaussian\]](#prop:no_approx_unsat_gaussian){reference-type="ref" reference="prop:no_approx_unsat_gaussian"}. Indeed, seen as an element of $\mathbb{R}^{d(d+1)/2}$, $\sqrt{d/2} G_\mu \overset{\mathrm{i.i.d.}}{\sim}\mcN(0, \mathrm{I}_{d(d+1)/2})$, see the canonical embedding in eq. [\[eq:def_flattening\]](#eq:def_flattening){reference-type="eqref" reference="eq:def_flattening"}. By Proposition [\[prop:gwidth_pds\]](#prop:gwidth_pds){reference-type="ref" reference="prop:gwidth_pds"}, we have $\omega(\mcS_d^+)^2 = d^2/4 + o(d^2)$, and thus we can apply Proposition [\[prop:thick_mesh\]](#prop:thick_mesh){reference-type="ref" reference="prop:thick_mesh"} with $p = d(d+1)/2$.
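The phase transition in point $(i)$ can be observed numerically on a toy cone. Here is a sketch assuming NumPy and SciPy, with $K = \mathbb{R}^p_+$ (whose squared Gaussian width is $\approx p/2$) standing in for the positive semidefinite cone, and nonemptiness of $V_1 \cap K$ decided by linear programming:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
p = 500                                  # K = R^p_+ : omega(K)^2 ≈ p/2 = 250

def feasible(n):
    # Is V_b ∩ K nonempty for b = 1?  Solve the LP "find x >= 0 with A x = 1".
    A = rng.standard_normal((n, p))
    res = linprog(np.zeros(p), A_eq=A, b_eq=np.ones(n), bounds=[(0, None)] * p)
    return res.status == 0               # status 0: solved, status 2: infeasible

below, above = feasible(120), feasible(380)
print(below, above)
```

With high probability over the draw of the constraints, one observes feasibility at $n = 120$ and infeasibility at $n = 380$, on either side of the threshold $\omega(K)^2 \approx 250$.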
The difference $\sqrt{d}$ in normalization in point $(ii)$ of Proposition [\[prop:no_approx_unsat_gaussian\]](#prop:no_approx_unsat_gaussian){reference-type="ref" reference="prop:no_approx_unsat_gaussian"} comes from the additional $\sqrt{d}$ factor that arises when relating $G_\mu$ to a standard Gaussian. We now focus on proving Proposition [\[prop:thick_mesh\]](#prop:thick_mesh){reference-type="ref" reference="prop:thick_mesh"}. ### Proof of Proposition [\[prop:thick_mesh\]](#prop:thick_mesh){reference-type="ref" reference="prop:thick_mesh"} We first show $(i)$, assuming $b \neq 0$. Results of [@chandrasekaran2012convex; @amelunxen2014living] show that $V_b \cap K \neq \emptyset$ with high probability if $n \leq (1-\varepsilon) \omega(K)^2$ (see e.g. Theorem 8.1 of [@amelunxen2014living]), i.e. point $(a)$. While the converse is also shown in these works for $n \geq (1+\varepsilon) \omega(K)^2$, here we wish to prove the stronger statement $(b)$. Assume therefore that $n \geq (1+\varepsilon) \omega(K)^2$. By the union bound, for all $c \geq 0$ and $\eta \in (0,1)$: $$\begin{aligned} \label{eq:ub_ceta} \nonumber &\mathbb{P}(\exists x \in K : \, \# \{\mu \in [n] \, : \, \mcC^{(b)}_\mu(x) > c\} < \eta n) \\ \nonumber &= \mathbb{P}(\exists x \in K, \exists S \subseteq [n] \, : \, |S| > (1-\eta) n \, \textrm{ and } \, \forall \mu \in S, \, \mcC^{(b)}_\mu(x) \leq c) \\ \nonumber &\leq \nonumber \sum_{\substack{S \subseteq [n] \\ |S| > (1-\eta) n}}\mathbb{P}(\exists x \in K : \, \forall \mu \in S, \, \mcC^{(b)}_\mu(x) \leq c), \\ \nonumber &\overset{\mathrm{(a)}}{\leq}\left[\sum_{k < \eta \cdot n} \binom{n}{k}\right] \mathbb{P}(\exists x \in K \, : \, \|\{a_\mu^\intercal x - b\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq c), \\ &\overset{\mathrm{(b)}}{\leq}\exp\left\{\eta n \log \frac{e}{\eta}\right\} \mathbb{P}(\exists x \in K \, : \, \|\{a_\mu^\intercal x - b\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq c). \end{aligned}$$ We used that the $a_\mu$ are i.i.d.
in $(\rm a)$, and the bound $\sum_{i=0}^k \binom{n}{i} \leq (en/k)^k$ in $(\rm b)$. We now make use of the following lemma (proven later on). [\[lemma:ub_ceta\]]{#lemma:ub_ceta label="lemma:ub_ceta"} Recall that $n \geq (1+\varepsilon) \omega(K)^2$. There exist $c_1, c_2 > 0$ and $\eta_0 \in (0,1)$ depending only on $\varepsilon$ such that for any $\eta \in (0,\eta_0)$: $$\mathbb{P}(\exists x \in K \, : \, \|\{a_\mu^\intercal x - b\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq c_1 |b|) \leq 2 \exp\{-n c_2\}.$$ Applying Lemma [\[lemma:ub_ceta\]](#lemma:ub_ceta){reference-type="ref" reference="lemma:ub_ceta"} in eq. [\[eq:ub_ceta\]](#eq:ub_ceta){reference-type="eqref" reference="eq:ub_ceta"}, we can consider $\eta = \eta(\varepsilon,b) \in (0,1/2)$ small enough, such that $\eta < \eta_0$ and $\eta \log(e/\eta) \leq c_2 /2$. This then yields that $$\mathbb{P}(\exists x \in K : \, \# \{\mu \in [n] \, : \, \mcC^{(b)}_\mu(x) > c_1 |b|\} < \eta n) \leq 2 \exp\{-nc_2/2\} \to 0,$$ and ends the proof. We now prove $(ii)$ of Proposition [\[prop:thick_mesh\]](#prop:thick_mesh){reference-type="ref" reference="prop:thick_mesh"}, assuming $b = 0$. $(a)$ is here a simple consequence of Gordon's classical "escape through a mesh" theorem [@gordon1988milman], so we focus on $(b)$, assuming $n \geq (1+\varepsilon) \omega(K)^2$. Note that since $K$ is a cone, we have for all $c \geq 0$, $\eta \in (0,1)$: $$\begin{aligned} \label{eq:b0} \nonumber &\sup_{\substack{x \in K \\ \# \{\mu \in [n] \, : \, \mcC^{(0)}_\mu(x) > c\} < \eta n}} \|x\|_2 \\ &= \sup\{v \geq 0 \, : \, \exists x \in K \cap \mathbb{S}^{p-1} \, \textrm{ s.t. } \# \{\mu \in [n] \, : \, \mcC^{(0)}_\mu(x) > c / v\} < \eta n\}. \end{aligned}$$ We now use the following counterpart to Lemma [\[lemma:ub_ceta\]](#lemma:ub_ceta){reference-type="ref" reference="lemma:ub_ceta"} in the case $b = 0$, also proven later: [\[lemma:ub_ceta_b0\]]{#lemma:ub_ceta_b0 label="lemma:ub_ceta_b0"} Recall that $n \geq (1+\varepsilon) \omega(K)^2$.
There exist $c_1, c_2 > 0$ and $\eta_0 \in (0,1)$ depending only on $\varepsilon$ such that for any $\eta \in (0,\eta_0)$: $$\mathbb{P}(\exists x \in K \cap \mathbb{S}^{p-1} \, : \, \|\{a_\mu^\intercal x\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq c_1) \leq 2 \exp\{-n c_2\}.$$ Repeating the same reasoning as in the $b \neq 0$ case, we can then find $c = c(\varepsilon) > 0$ and $\eta = \eta(\varepsilon) \in (0,1/2)$ such that with probability $1 - o_n(1)$: $$\begin{aligned} \forall x \in K \cap \mathbb{S}^{p-1} \, : \, \# \{\mu \in [n] \, : \, \mcC^{(0)}_\mu(x) > c\} \geq \eta n. \end{aligned}$$ Plugging this in eq. [\[eq:b0\]](#eq:b0){reference-type="eqref" reference="eq:b0"} yields that with probability $1 - o_n(1)$, for all $\tau \geq 0$: $$\max_{\substack{x \in K \\ \# \{\mu \in [n] \, : \, \mcC^{(0)}_\mu(x) > c \tau\} < \eta n}} \|x\|_2 \leq \tau,$$ which ends the proof. $\qed$ We now prove the two Lemmas [\[lemma:ub_ceta\]](#lemma:ub_ceta){reference-type="ref" reference="lemma:ub_ceta"} and [\[lemma:ub_ceta_b0\]](#lemma:ub_ceta_b0){reference-type="ref" reference="lemma:ub_ceta_b0"}. ### Proofs of Lemma [\[lemma:ub_ceta\]](#lemma:ub_ceta){reference-type="ref" reference="lemma:ub_ceta"} and [\[lemma:ub_ceta_b0\]](#lemma:ub_ceta_b0){reference-type="ref" reference="lemma:ub_ceta_b0"} {#subsubsec:proof_lemma_ub_ceta} **Proof of Lemma [\[lemma:ub_ceta\]](#lemma:ub_ceta){reference-type="ref" reference="lemma:ub_ceta"} --** Note that for all $t \geq 0$ and $\eta \in (0,1)$: $$\label{eq:rewriting_prob_intersection} \mathbb{P}\left(\exists x \in K \, : \, \|\{a_\mu^\intercal x - b\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq t\right) = \mathbb{P}\left[\exists x \in K \, : \, \|G x - b \mathbf{1}_m\|_\infty \leq t\right],$$ with $m \coloneqq n(1-\eta)$, $G \in \mathbb{R}^{m \times p}$ an i.i.d. $\mcN(0,1)$ matrix, and $\mathbf{1}_m$ the all-ones vector.
We thus have: $$\begin{aligned} \label{eq:def_GammaAG} \nonumber \mathbb{P}(\exists x \in K \, : \, \|\{a_\mu^\intercal x - b\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq t) &\overset{\mathrm{(a)}}{=}\lim_{A \to \infty} \mathbb{P}\left[\exists x \in K \, : \, \| x \|_2 \leq A \, \textrm{ and } \, \|G x - b \mathbf{1}_m\|_\infty \leq t\right], \\ &\overset{\mathrm{(b)}}{=}\lim_{A \to \infty} \mathbb{P}\Big[\min_{\substack{x \in K \\ \| x\|_2 \leq A}} \|G x - b \mathbf{1}_m\|_\infty \leq t\Big], \end{aligned}$$ where $(\rm a)$ follows from dominated convergence, and $(\rm b)$ uses that the minimum is now over a compact set since $K$ is closed. Since $\| \cdot\|_\infty$ and $\| \cdot \|_1$ are dual norms, we have for all $x \in K$: $$\|G x - b \mathbf{1}_m\|_\infty = \max_{\substack{\lambda \in \mathbb{R}^m \\ \|\lambda\|_1 \leq 1}} \left[-b\lambda^\intercal \mathbf{1}_m + \lambda^\intercal G x\right].$$ We can now use the Gaussian min-max inequality (Proposition [\[prop:gaussian_minmax\]](#prop:gaussian_minmax){reference-type="ref" reference="prop:gaussian_minmax"}), which, together with eq. [\[eq:def_GammaAG\]](#eq:def_GammaAG){reference-type="eqref" reference="eq:def_GammaAG"}, implies: $$\mathbb{P}(\exists x \in K \, : \, \|\{a_\mu^\intercal x - b\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq t) \leq \lim_{A \to \infty} 2\mathbb{P}[\gamma_A(g, h) \leq t] \overset{\mathrm{(a)}}{\leq}2\mathbb{P}[\gamma(g, h)\leq t].$$ Here we defined: $$\label{eq:def_gamma_gh} \gamma(g, h) \coloneqq \inf_{x \in K} \max_{\substack{\lambda \in \mathbb{R}^m \\ \|\lambda\|_1 \leq 1}} \left[- b \lambda^\intercal \mathbf{1}_m + \|\lambda\|_2 g^\intercal x + \|x \|_2 h^\intercal \lambda\right],$$ and $\gamma_A$ is defined by restricting the infimum to $\|x\|_2 \leq A$. Moreover, $g \sim \mcN(0, \mathrm{I}_p), h \sim \mcN(0, \mathrm{I}_m)$. The inequality $(\rm a)$ holds since $\gamma(g, h) \leq \gamma_A(g, h)$ for all $A > 0$. 
To conclude the proof, it therefore suffices to show: $$\label{eq:to_show_gammag} \mathbb{P}[\gamma(g, h) \leq c_1 |b|] \leq 2 \exp\{-nc_2\},$$ for $c_1, c_2$ small enough (depending on $\varepsilon$). We use again that $g \mapsto \max_{x \in K \cap \mathbb{S}^{p-1}}[g^\intercal x]$ is $1$-Lipschitz, and in particular concentrates on the Gaussian width by Theorem [\[thm:gaussian_conc_lipschitz\]](#thm:gaussian_conc_lipschitz){reference-type="ref" reference="thm:gaussian_conc_lipschitz"}. Using that $g$ is distributed as $-g$, this implies that for any $u > 0$: $$\mathbb{P}\left\{\min_{x \in K \cap \mathbb{S}^{p-1}}[g^\intercal x] \leq - \omega(K) - u\right\} \leq e^{-u^2/2}.$$ Since $\omega(K) \leq \sqrt{n / (1+\varepsilon)}$ by hypothesis, we can fix $\delta = \delta(\varepsilon) > 0$ and $\eta_0 = \eta_0(\varepsilon) > 0$ such that for $n$ large enough we have for $\eta < \eta_0$: $\omega(K) + \delta \sqrt{m} \leq \sqrt{m / (1+\varepsilon/2)}$ (recall that $m = (1-\eta) n$). Thus, since $K$ is a cone, and using the max-min inequality, we have with probability at least $1 - e^{- n (1-\eta_0) \delta^2/2}$: $$\label{eq:lb_gamma_gh} \gamma(g, h) \geq \inf_{v \geq 0} \max_{\substack{\lambda \in \mathbb{R}^m \\ \|\lambda\|_1 \leq 1}} \left[- b \lambda^\intercal \mathbf{1}_m + v \left( h^\intercal \lambda- \|\lambda\|_2 (\omega(K) + \delta \sqrt{m}) \right) \right].$$ Let us now assume that $b > 0$; let $X \sim \mcN(0,1)$, and set $D \coloneqq \mathbb{E}[|X|] = \sqrt{2/\pi}$.
We pick $\sigma = \sigma(\varepsilon) \in (0,1)$ (its choice will be constrained later on), and define $A_\varepsilon$ by: $$\label{eq:def_Aeps} \mathbb{E}[X^2 \mathds{1}\{X \leq A_\varepsilon\}] = \sigma(\varepsilon)^2.$$ Finally, we define $\lambda^\star = \lambda^\star(h) \in \mathbb{R}^m$ by $$\label{eq:def_lambdastar} \lambda^\star_\mu \coloneqq \frac{1}{D m} h_\mu \mathds{1}\{h_\mu \leq A_\varepsilon\}.$$ It is a simple exercise based on Hoeffding's and Bernstein's inequalities [@vershynin2018high] to check that for any $u > 0$ (recall that $m = (1-\eta) n \geq (1-\eta_0(\varepsilon)) n$): $$\label{eq:plims_lambdastar} \begin{dcases} \mathbb{P}[\|\lambda^\star\|_1 \leq 1] &\geq 1 - \exp\{-C_1 n\} , \\ \mathbb{P}\left[h^\intercal\lambda^\star - \frac{\sigma(\varepsilon)^2}{D} \leq - u\right] &\leq \exp\{-C_2 n \min(u^2, u)\}, \\ \mathbb{P}\left[\|\lambda^\star\|_2^2 - \frac{\sigma(\varepsilon)^2}{m D^2} \geq \frac{u}{n}\right] &\leq \exp\{-C_3 n \min(u^2, u)\}, \\ \mathbb{P}\left[\mathbf{1}_m^\intercal\lambda^\star \geq - C_4\right] &\leq \exp\{-C_5 n\}, \end{dcases}$$ for some positive constants $(C_a)_{a=1}^5$, all depending on $\varepsilon$. In particular, the first three lines of eq. [\[eq:plims_lambdastar\]](#eq:plims_lambdastar){reference-type="eqref" reference="eq:plims_lambdastar"} imply that for all $u \in (0,1)$, with probability at least $1 - 3 \exp\{-C(\varepsilon) n u^2\}$: $$\label{eq:def_sigma_gamma_eps} \|\lambda^\star\|_1 \leq 1 \, \textrm{ and } \frac{h^\intercal\lambda^\star}{\|\lambda^\star\|_2} \frac{1}{\omega(K) + \delta \sqrt{m}} \geq \frac{\sigma(\varepsilon)^2/D - u}{\sqrt{\sigma(\varepsilon)^2 / D^2 + u}} \sqrt{1 + \frac{\varepsilon}{2}} ,$$ in which we used that $\omega(K) + \delta \sqrt{m} \leq \sqrt{m} (1+\varepsilon/2)^{-1/2}$, and $m / n \leq 1$. We can choose $\sigma(\varepsilon) \in (0,1)$ sufficiently close to $1$, and $u(\varepsilon) \in (0,1)$ sufficiently close to $0$ such that the right-hand side of eq.
[\[eq:def_sigma_gamma_eps\]](#eq:def_sigma_gamma_eps){reference-type="eqref" reference="eq:def_sigma_gamma_eps"} is greater than $1$. Combining it with the last equation of eq. [\[eq:plims_lambdastar\]](#eq:plims_lambdastar){reference-type="eqref" reference="eq:plims_lambdastar"} and the lower bound of eq. [\[eq:lb_gamma_gh\]](#eq:lb_gamma_gh){reference-type="eqref" reference="eq:lb_gamma_gh"}, we get that (with new constants $c_1, c_2$ depending on $\varepsilon$), with probability at least $1 - 2 \exp\{-c_2(\varepsilon) n\}$: $$\gamma(g, h) \geq b c_1(\varepsilon),$$ which implies eq. [\[eq:to_show_gammag\]](#eq:to_show_gammag){reference-type="eqref" reference="eq:to_show_gammag"} and ends the proof. The case $b < 0$ is treated in a similar way, constraining $h_\mu \geq - A_\varepsilon$ rather than $h_\mu \leq A_\varepsilon$ in eq. [\[eq:def_lambdastar\]](#eq:def_lambdastar){reference-type="eqref" reference="eq:def_lambdastar"}. $\square$ **Proof of Lemma [\[lemma:ub_ceta_b0\]](#lemma:ub_ceta_b0){reference-type="ref" reference="lemma:ub_ceta_b0"} --** Let $t \geq 0$. Repeating the same arguments as in the proof of Lemma [\[lemma:ub_ceta\]](#lemma:ub_ceta){reference-type="ref" reference="lemma:ub_ceta"} one obtains that $$\mathbb{P}(\exists x \in K \cap \mathbb{S}^{p-1} \, : \, \|\{a_\mu^\intercal x\}_{\mu=1}^{(1-\eta)n} \|_\infty \leq t) \leq 2 \mathbb{P}\Bigg[\underbrace{\min_{x \in K \cap \mathbb{S}^{p-1}} \max_{\substack{\lambda \in \mathbb{R}^m \\ \|\lambda\|_1 \leq 1}} \left\{\|\lambda\|_2 g^\intercal x + h^\intercal\lambda\right\} \leq t}_{\eqqcolon \gamma(g,h)}\Bigg].$$ Again, we can fix $\eta_0(\varepsilon) > 0$ and $\delta(\varepsilon) > 0$ such that for $\eta < \eta_0$ and $n$ large enough we have $\omega(K) + \delta\sqrt{m} \leq \sqrt{m} [1+\varepsilon/2]^{-1/2}$. 
By the max-min inequality and the concentration of the Gaussian width, this implies that with probability at least $1-e^{-n (1-\eta_0) \delta^2}$: $$\label{eq:lb_gamma_gh_b0} \gamma(g, h) \geq \max_{\substack{\lambda \in \mathbb{R}^m \\ \|\lambda\|_1 \leq 1}} \left[h^\intercal \lambda- \|\lambda\|_2 (\omega(K) + \delta \sqrt{m}) \right].$$ Defining again $D \coloneqq \sqrt{2/\pi}$ so that $D = \mathbb{E}[|X|]$ for $X \sim \mcN(0,1)$, we now define $\lambda^\star$ as: $$\lambda_\mu^\star \coloneqq \frac{1}{D m} h_\mu.$$ We have the counterpart to eq. [\[eq:plims_lambdastar\]](#eq:plims_lambdastar){reference-type="eqref" reference="eq:plims_lambdastar"} for this case: for any $u > 0$ and $\tau > 0$, $$\label{eq:plims_lambdastar_b0} \begin{dcases} \mathbb{P}[\|\lambda^\star\|_1 \leq 1 + \tau] &\geq 1 - \exp\{-C_1 n \tau^2\} , \\ \mathbb{P}\left[h^\intercal\lambda^\star - \frac{1}{D} \leq - u\right] &\leq \exp\{-C_2 n \min(u^2, u)\}, \\ \mathbb{P}\left[\|\lambda^\star\|_2^2 - \frac{1}{m D^2} \geq \frac{u}{n}\right] &\leq \exp\{-C_3 n \min(u^2, u)\}, \end{dcases}$$ for some $(C_a)_{a=1}^3$ depending on $\varepsilon$. We can fix $u = u(\varepsilon) > 0$ such that $$D^{-1}-u - \sqrt{\frac{u + D^{-2}}{1 + \varepsilon/2}} \eqqcolon 2 c_1(\varepsilon) > 0,$$ since the limit as $u \to 0$ of the left-hand side is strictly positive. Letting $\tau = 1$, we finally get that with probability at least $1 - 3 \exp(-c_2(\varepsilon) n)$ we can lower bound (using $\lambda = \lambda^\star/2$ such that $\|\lambda\|_1 \leq 1$) $$\begin{aligned} \gamma(g, h) &\geq \frac{D^{-1} - u}{2} - \frac{1}{2} \sqrt{\frac{u}{n} + \frac{1}{mD^2}} (\omega(K) + \delta \sqrt{m}), \\ &\geq \frac{D^{-1} - u}{2} - \frac{1}{2} \sqrt{\frac{u + D^{-2}}{1 + \varepsilon/2}}, \\ &\geq c_1(\varepsilon). \end{aligned}$$ This ends the proof. 
$\square$ # Universality: proof of Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} {#sec:proof_universality_gs} This section is devoted to the proof of Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"}. We first show in Section [4.1](#subsec:lipschitz_energy){reference-type="ref" reference="subsec:lipschitz_energy"} a critical result on the Lipschitz constant of the "error" function appearing in eq. [\[eq:def_gs\]](#eq:def_gs){reference-type="eqref" reference="eq:def_gs"}. This requires controlling a random process on the operator norm sphere, which is also useful in the proof of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}, see Section [2.4](#subsec:proof_thm_main_positive_side){reference-type="ref" reference="subsec:proof_thm_main_positive_side"}. We leverage this control to show in Section [4.2](#subsec:universality_matrix){reference-type="ref" reference="subsec:universality_matrix"} a general result on the universality of a quantity known as the asymptotic free entropy of the model, both for matrices arising from ellipsoid fitting and for its Gaussian equivalent. This result follows from an interpolation argument. Finally, we apply these results in the so-called "low-temperature" limit in Section [4.3](#subsec:proof_universality_gs){reference-type="ref" reference="subsec:proof_universality_gs"} to deduce Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"}. As we mentioned, while we cannot directly apply the results of [@montanari2022universality], Sections [4.2](#subsec:universality_matrix){reference-type="ref" reference="subsec:universality_matrix"} and [4.3](#subsec:proof_universality_gs){reference-type="ref" reference="subsec:proof_universality_gs"} closely follow their approach.
We defer to Appendix [8](#sec_app:technical_universality){reference-type="ref" reference="sec_app:technical_universality"} some technicalities, as well as some parts of the proof that more directly follow the arguments of [@montanari2022universality]. ## Lipschitz constant of the energy function, and bounding random processes {#subsec:lipschitz_energy} We show here the following result on the behavior of the error (or "energy") function, under both models $X_\mu \sim \mathrm{Ellipse}(d)$ and $X_\mu \sim \mathrm{GOE}(d)$. [\[lemma:energy_change_small_ball\]]{#lemma:energy_change_small_ball label="lemma:energy_change_small_ball"} Let $n,d \geq 1$ and $n,d \to \infty$ with $\alpha_1 d^2 \leq n \leq \alpha_2 d^2$ for some $0 < \alpha_1 < \alpha_2$. Let $\phi : \mathbb{R}\to \mathbb{R}_+$ such that $\|\phi'\|_\infty < \infty$, and $X_1, \cdots, X_n \in \mcS_d$ be generated i.i.d. according to either $\mathrm{GOE}(d)$ or $\mathrm{Ellipse}(d)$. For $S \in \mcS_d$, we define the *energy*: $$E_{\{X_\mu\}}(S) \coloneqq \ \frac{1}{d^2} \sum_{\mu=1}^n \phi[{\rm Tr}(X_\mu S)].$$ Then the following holds for some $C > 0$ (depending only on $\alpha_1, \alpha_2$): $$\mathbb{P}\left[\sup_{S_1, S_2 \in \mcS_d} \frac{|E_{\{X_\mu\}}(S_1) - E_{\{X_\mu\}}(S_2)|}{\|S_1 - S_2\|_\mathrm{op}} \leq C \|\phi'\|_\infty \right] \geq 1 - 2 e^{-n}.$$ In other words, the energy function has a bounded Lipschitz constant (as $d \to \infty$) with respect to the operator norm. Note that this is much stronger than what a naive use of the triangle inequality and of the duality $\|\cdot\|_\mathrm{op}\leftrightarrow \|\cdot\|_{S_1}$ yields: $$\frac{|E_{\{X_\mu\}}(S_1) - E_{\{X_\mu\}}(S_2)|}{\|S_1 - S_2\|_\mathrm{op}} \leq \frac{\|\phi'\|_\infty}{d^2} \sum_{\mu=1}^n {\rm Tr}|X_\mu|,$$ since ${\rm Tr}|X_\mu| \gtrsim d$ for $X_\mu \sim \mathrm{GOE}(d)$, and ${\rm Tr}|X_\mu| \gtrsim \sqrt{d}$ for $X_\mu \sim \mathrm{Ellipse}(d)$.
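The two scalings of ${\rm Tr}|X_\mu|$ quoted above can be checked numerically; a sketch assuming NumPy, and assuming the normalizations $\mathrm{GOE}(d)$ with spectrum supported near $[-2,2]$ and $W = (xx^\intercal - \mathrm{I}_d)/\sqrt{d}$ with $x \sim \mcN(0, \mathrm{I}_d)$ for $\mathrm{Ellipse}(d)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 400

# GOE(d): Tr|G| ≈ d * E|sigma_sc| = d * 8/(3 pi) ≈ 0.85 d
A = rng.standard_normal((d, d))
G = (A + A.T) / np.sqrt(2 * d)
goe_s1 = np.abs(np.linalg.eigvalsh(G)).sum()

# Ellipse(d): eigenvalues of x x^T - I are ||x||^2 - 1 (once) and -1 (d-1 times),
# so Tr|W| ≈ (d + d)/sqrt(d) = 2 sqrt(d)
x = rng.standard_normal(d)
W = (np.outer(x, x) - np.eye(d)) / np.sqrt(d)
ell_s1 = np.abs(np.linalg.eigvalsh(W)).sum()

print(goe_s1 / d, ell_s1 / np.sqrt(d))
```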
Instead, the proof of Lemma [\[lemma:energy_change_small_ball\]](#lemma:energy_change_small_ball){reference-type="ref" reference="lemma:energy_change_small_ball"} is based on the following bounds for random processes, for which we separate the $\mathrm{GOE}(d)$ and $\mathrm{Ellipse}(d)$ setting. Lemma [\[lemma:emp_proc_gaussian\]](#lemma:emp_proc_gaussian){reference-type="ref" reference="lemma:emp_proc_gaussian"} is a consequence of elementary concentration results, and is proven in Appendix [8.1](#subsec_app:proof_lemma_emp_proc_gaussian){reference-type="ref" reference="subsec_app:proof_lemma_emp_proc_gaussian"}, while Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} is proven in the following. [\[lemma:emp_proc_gaussian\]]{#lemma:emp_proc_gaussian label="lemma:emp_proc_gaussian"} Let $(G_\mu)_{\mu=1}^n \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)$. Let $r \in [1,2]$. There exists $C > 0$ such that for all $t > 0$: $$\mathbb{P}\Bigg[\max_{\|S\|_F^2 = d} \left(\sum_{\mu=1}^n |{\rm Tr}[G_\mu S] | ^r\right)^{1/r} \geq (C + t) n^{1/r}\Bigg] \leq \exp\left(-nt^2/2\right).$$ [\[lemma:emp_proc_ellipse\]]{#lemma:emp_proc_ellipse label="lemma:emp_proc_ellipse"} Let $(W_\mu)_{\mu=1}^n \overset{\mathrm{i.i.d.}}{\sim}\mathrm{Ellipse}(d)$. Let $r \in [1,4/3]$. We assume that $\alpha_1 d^2 \leq n \leq \alpha_2 d^2$, for some $0 < \alpha_1 < \alpha_2$. 
There are constants $C_1, C_2 > 0$ (that might depend on $\alpha_1, \alpha_2$) such that for all $t > 0$: $$\mathbb{P}\Big[\max_{\|S\|_\mathrm{op}= 1} \sum_{\mu=1}^n |{\rm Tr}(W_\mu S) |^r \geq n (C_1 + t)\Big] \leq 2 \exp\left\{-C_2 \min (n t^{\frac{2}{r}}, n^{\frac{1}{4} + \frac{1}{r}} t^{\frac{1}{r}} )\right\}.$$ **Proof of Lemma [\[lemma:energy_change_small_ball\]](#lemma:energy_change_small_ball){reference-type="ref" reference="lemma:energy_change_small_ball"} --** We now finish the proof, assuming Lemmas [\[lemma:emp_proc_gaussian\]](#lemma:emp_proc_gaussian){reference-type="ref" reference="lemma:emp_proc_gaussian"} and [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"}. By the mean value theorem and the duality $\|\cdot\|_\mathrm{op}\leftrightarrow \|\cdot\|_{S_1}$: $$|E_{\{X_\mu\}}(S_1) - E_{\{X_\mu\}}(S_2)| \leq \sup_{S \in \mcS_d} \left|\langle \nabla_S E_{\{X_\mu\}}(S), S_1 - S_2 \rangle\right| \leq \|S_1 - S_2\|_\mathrm{op}\sup_{S \in \mcS_d} \| \nabla_S E_{\{X_\mu\}}(S) \|_{S_1}.$$ Again using the duality $\|\cdot\|_\mathrm{op}\leftrightarrow \|\cdot\|_{S_1}$: $$\begin{aligned} \| \nabla_S E_{\{X_\mu\}}(S) \|_{S_1} &= \frac{1}{d^2} \left\| \sum_{\mu=1}^n X_\mu \phi'[{\rm Tr}(X_\mu S)]\right\|_{S_1}, \\ &= \frac{1}{d^2}\sup_{\|R\|_\mathrm{op}= 1} \sum_{\mu=1}^n {\rm Tr}[X_\mu R] \phi'[{\rm Tr}(X_\mu S)], \\ &\leq \frac{\|\phi'\|_\infty}{d^2} \sup_{\|R\|_\mathrm{op}= 1} \sum_{\mu=1}^n |{\rm Tr}[X_\mu R]|. \end{aligned}$$ Using Lemmas [\[lemma:emp_proc_gaussian\]](#lemma:emp_proc_gaussian){reference-type="ref" reference="lemma:emp_proc_gaussian"} and [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} in the case $r = 1$, we reach the sought statement.
$\square$ **Remark I--** Note that a naive argument using that ${\rm Tr}[W_\mu S]$ is a sub-exponential random variable yields Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} for $r = 1$, which is already enough to deduce Lemma [\[lemma:energy_change_small_ball\]](#lemma:energy_change_small_ball){reference-type="ref" reference="lemma:energy_change_small_ball"}. However, Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} is also used later in the proof of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}, see Section [2.4](#subsec:proof_thm_main_positive_side){reference-type="ref" reference="subsec:proof_thm_main_positive_side"}. Since ${\rm Tr}[W_\mu S]$ is the sum of many independent random variables, we can leverage its two-tailed behavior (by Bernstein's inequality) to prove Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} for all $r \leq 4/3$, yielding the limitation $r < 4/3$ in Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"}. It is possible that a finer analysis could lead to a proof of Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} for the case $4/3 \leq r \leq 2$, which would in turn imply Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} for $r < 2$. **Remark II--** We give an informal argument as to why we cannot hope to extend Lemma [\[lemma:emp_proc_gaussian\]](#lemma:emp_proc_gaussian){reference-type="ref" reference="lemma:emp_proc_gaussian"} or Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} to $r > 2$.
Indeed, in the $\mathrm{GOE}(d)$ setting, the choice $S = \sqrt{d} G_\mu / \|G_\mu\|_F$ (for some $\mu \in [n]$) yields that the objective function is at least $\sqrt{d} \|G_\mu\|_F \gtrsim \sqrt{n}$. In the $\mathrm{Ellipse}(d)$ setting, assume that $r > 2$. If $1/q + 1/r = 1$, then by the dualities $\ell^p \leftrightarrow \ell^q$ and $\|\cdot\|_\mathrm{op}\leftrightarrow \|\cdot\|_{S_1}$: $$\label{eq:dual_empr_proc} \max_{\|S\|_\mathrm{op}= 1} \left(\sum_{\mu=1}^n |{\rm Tr}(W_\mu S) |^r\right)^{1/r} = \max_{\|\lambda\|_q = 1} \left\|\sum_{\mu=1}^n \lambda_\mu W_\mu\right\|_{S_1}.$$ Let us lower bound the right-hand side of eq. [\[eq:dual_empr_proc\]](#eq:dual_empr_proc){reference-type="eqref" reference="eq:dual_empr_proc"}. Let $\beta \in (0,1)$, $p = \beta n$, and $\lambda_1 = \cdots = \lambda_p = p^{-1/q} > \lambda_{p+1} = \cdots = \lambda_n = 0$. Then $\|\lambda\|_q = 1$. Moreover, $$\left\|\sum_{\mu=1}^n \lambda_\mu W_\mu\right\|_{S_1} = p^{1-1/q} d^{-1/2} \left\|\frac{1}{p}\sum_{\mu=1}^p x_\mu x_\mu^\intercal- \mathrm{I}_d\right\|_{S_1}.$$ By classical concentration results for Wishart matrices [@vershynin2018high], we know that since $p \lesssim d^2$ then $$\left\|\frac{1}{p}\sum_{\mu=1}^p x_\mu x_\mu^\intercal- \mathrm{I}_d\right\|_{S_1} \gtrsim \frac{d^{3/2}}{\sqrt{p}} \gtrsim \sqrt{\frac{d}{\beta}},$$ where $\gtrsim$ might hide constants that depend on $\alpha$. We then reach: $$\left\|\sum_{\mu=1}^n \lambda_\mu W_\mu\right\|_{S_1} \gtrsim \beta^{1/2 - 1/q} n^{1-1/q}.$$ Since $r > 2$, one has $1/2 - 1/q < 0$. Letting $\beta$ go to $0$, this shows that any bound of the type $$\max_{\|\lambda\|_q = 1} \left\|\sum_{\mu=1}^n \lambda_\mu W_\mu\right\|_{S_1} \leq C(\alpha) n^{1-1/q}$$ cannot hold. **Proof of Lemma [\[lemma:emp_proc_ellipse\]](#lemma:emp_proc_ellipse){reference-type="ref" reference="lemma:emp_proc_ellipse"} --** Throughout this proof, constants might depend on $\alpha_1, \alpha_2$.
Let us define, for any $S \in \mcS_d$: $$\label{eq:def_XY_processes} \begin{dcases} Y(S) &\coloneqq \sum_{\mu=1}^n |{\rm Tr}(W_\mu S) |^r, \\ X(S) &\coloneqq Y(S) - \mathbb{E}Y(S). \end{dcases}$$ We fix $\varepsilon\in (0,1)$. It follows from classical covering number bounds [@van2014probability] that if $T \coloneqq \{S \in \mcS_d \, : \, \|S \|_\mathrm{op}= 1\}$, then $$\label{eq:covering_number_op_norm_ball} \log \mcN(T, \| \cdot \|_\mathrm{op}, \varepsilon) \leq \frac{d(d+1)}{2} \log \frac{3}{\varepsilon}.$$ Let us fix $N$ a minimal $\varepsilon$-net of $T$ for $\|\cdot \|_\mathrm{op}$. If we let $S^\star \coloneqq \mathop{\mathrm{arg\,max}}_{\|S\|_\mathrm{op}= 1} Y(S)$, and $S_0 \in N$ with $\|S^\star - S_0\|_\mathrm{op}\leq \varepsilon$, then we have by Minkowski's inequality (recall $r \geq 1$): $$\begin{aligned} \| \{{\rm Tr}[W_\mu S^\star]\}_{\mu=1}^n\|_r - \| \{{\rm Tr}[W_\mu S_0]\}_{\mu=1}^n\|_r &\leq \| \{{\rm Tr}[W_\mu (S_0 - S^\star)]\}_{\mu=1}^n\|_r \\ &\leq \varepsilon\max_{\|S\|_\mathrm{op}= 1} \| \{{\rm Tr}[W_\mu S]\}_{\mu=1}^n\|_r.\end{aligned}$$ This implies $$\label{eq:reduction_net} \max_{\|S\|_\mathrm{op}= 1} \sum_{\mu=1}^n |{\rm Tr}(W_\mu S) |^r \leq \frac{1}{(1-\varepsilon)^r} \max_{S \in N} \sum_{\mu=1}^n |{\rm Tr}(W_\mu S) |^r.$$ We combine the covering number upper bound of eq. [\[eq:covering_number_op_norm_ball\]](#eq:covering_number_op_norm_ball){reference-type="eqref" reference="eq:covering_number_op_norm_ball"} and the relation of eq. [\[eq:reduction_net\]](#eq:reduction_net){reference-type="eqref" reference="eq:reduction_net"} with the following lemma, proven later on, which bounds the deviation probability of the process for a given $S$. [\[lemma:ub_ellipse_process_op_sphere\]]{#lemma:ub_ellipse_process_op_sphere label="lemma:ub_ellipse_process_op_sphere"} With the notations of eq.
[\[eq:def_XY_processes\]](#eq:def_XY_processes){reference-type="eqref" reference="eq:def_XY_processes"}, we have, for any $S \in T$: - $\mathbb{E}\,[ Y(S) ] \leq C_1 n$. - For all $t \geq 0$: $$\mathbb{P}[X(S) \geq n t] \leq 2 \exp\left\{-C_2 \min (n t^2, n t^{\frac{2}{r}}, n^{\frac{1}{4} + \frac{1}{r}} t^{\frac{1}{r}} )\right\}.$$ Note that the above constants may depend on $r$. Picking $\varepsilon= 1/2$ and performing a union bound over $N$, we reach using eqs. [\[eq:covering_number_op_norm_ball\]](#eq:covering_number_op_norm_ball){reference-type="eqref" reference="eq:covering_number_op_norm_ball"},[\[eq:reduction_net\]](#eq:reduction_net){reference-type="eqref" reference="eq:reduction_net"} and Lemma [\[lemma:ub_ellipse_process_op_sphere\]](#lemma:ub_ellipse_process_op_sphere){reference-type="ref" reference="lemma:ub_ellipse_process_op_sphere"}: $$\mathbb{P}[\sup_{S \in T} Y(S) \geq n (C_1 + t)] \leq 2\exp\left\{C_3 n - C_2 \min (n t^2, n t^{\frac{2}{r}}, n^{\frac{1}{4} + \frac{1}{r}} t^{\frac{1}{r}} )\right\}.$$ We thus have for any $t \geq 1$: $$\mathbb{P}[\sup_{S \in T} Y(S) \geq n (C_1 + t)] \leq 2 \begin{dcases} \exp\left\{C_3 n - C_2 n t^{\frac{2}{r}} \right\} \hspace{1cm} &\textrm{ if } t \leq n^{1 - \frac{3r}{4}}, \\ \exp\left\{C_3 n - C_2 n^{\frac{1}{4} + \frac{1}{r}} t^{\frac{1}{r}} \right\} \hspace{1cm} &\textrm{ if } t \geq n^{1 - \frac{3r}{4}}. \end{dcases}$$ Note that $n^{1/4+1/r} \geq n$ since $r \leq 4/3$. Therefore, for (new) constants $C_1, C_2$ we have for all $t > 0$: $$\mathbb{P}[\sup_{S \in T} Y(S) \geq n (C_1 + t)] \leq 2 \exp\left\{ - C_2 \min (n t^{\frac{2}{r}}, n^{\frac{1}{4} + \frac{1}{r}} t^{\frac{1}{r}} )\right\},$$ which ends the proof. $\square$ We now tackle Lemma [\[lemma:ub_ellipse_process_op_sphere\]](#lemma:ub_ellipse_process_op_sphere){reference-type="ref" reference="lemma:ub_ellipse_process_op_sphere"}. 
**Proof of Lemma [\[lemma:ub_ellipse_process_op_sphere\]](#lemma:ub_ellipse_process_op_sphere){reference-type="ref" reference="lemma:ub_ellipse_process_op_sphere"} --** We start with $(i)$, fixing $S \in T$. We have $\mathbb{E}[Y(S)] = n \mathbb{E}[|{\rm Tr}(W_1 S)|^r]$. Let $Z \coloneqq {\rm Tr}(W_1 S) \overset{\rm d}{=}(x^\intercal S x - {\rm Tr}[S])/\sqrt{d}$, with $x \sim \mcN(0, \mathrm{I}_d)$. Since $\|S\|_\mathrm{op}= 1$, we have by the Hanson-Wright inequality [@vershynin2018high]: $$\begin{aligned} \label{eq:ub_pz} \nonumber \mathbb{P}[|Z| \geq t] &\leq 2 \exp\Big\{-C \min \Big(\frac{d t^2}{\|S\|_F^2},\sqrt{d} t\Big)\Big\} , \\ &\leq 2 \exp\left\{-C \min \left(t^2,\sqrt{d} t\right)\right\},\end{aligned}$$ where $C > 0$ is an absolute constant, and we used that $\|S\|_F \leq \sqrt{d} \|S\|_\mathrm{op}$. Separating the sub-Gaussian and sub-exponential parts of the tail, we have for all $p \geq 1$: $$\begin{aligned} \nonumber \mathbb{E}[|Z|^p] &= p \int_{0}^\infty \mathrm{d}u \, u^{p-1} \, \mathbb{P}[|Z| \geq u], \\ \nonumber &\leq 2p \Bigg[\int_{0}^{\sqrt{d}} \mathrm{d}u \, u^{p-1} \, e^{-Cu^2} + \int_{\sqrt{d}}^{\infty} \mathrm{d}u \, u^{p-1} \, e^{- C \sqrt{d} u}\Bigg], \\ \nonumber &\leq 2p \Bigg[\int_{0}^{\infty} \mathrm{d}u \, u^{p-1} \, e^{- Cu^2} + e^{-Cd}\int_{0}^{\infty} \mathrm{d}u \, (u+\sqrt{d})^{p-1} \, e^{- C \sqrt{d} u}\Bigg], \\ \nonumber &\overset{\mathrm{(a)}}{\leq}2p \Bigg[\frac{1}{2 C^{p/2}} \Gamma(p/2) + e^{-Cd} \max(1, 2^{p-2})\int_{0}^{\infty} \mathrm{d}u \, [u^{p-1} + d^{(p-1)/2}] \, e^{- C \sqrt{d} u}\Bigg], \\ \nonumber &\leq 2p \Bigg[\frac{\Gamma(p/2)}{2 C^{p/2}} + e^{-Cd} \max(1, 2^{p-2})\Big(\frac{\Gamma(p)}{C^p d^{p/2}} + \frac{d^{(p-2)/2}}{C}\Big)\Bigg].\end{aligned}$$ We used in $(\rm a)$ that $(a+b)^x \leq \max(1, 2^{x-1})(a^x + b^x)$ for all $a,b, x \geq 0$.
Using Minkowski's inequality, we reach that for all $p \geq 1$: $$\label{eq:ub_Z_pth} \mathbb{E}[|Z|^p]^{1/p} \leq C_1 \sqrt{p} + C_2 e^{-\frac{C_3 d}{p}} \Big(\frac{p}{\sqrt{d}} + d^{\frac{1}{2} - \frac{1}{p}}\Big),$$ for some positive constants $(C_a)_{a=1}^3$ independent of $p$ and $d$. Informally, the sub-Gaussian tail dominates the first moments of $Z$ since the sub-exponential tail only kicks in at the scale $\mcO(\sqrt{d})$. Eq. [\[eq:ub_Z\_pth\]](#eq:ub_Z_pth){reference-type="eqref" reference="eq:ub_Z_pth"} implies claim $(i)$ of Lemma [\[lemma:ub_ellipse_process_op_sphere\]](#lemma:ub_ellipse_process_op_sphere){reference-type="ref" reference="lemma:ub_ellipse_process_op_sphere"} by taking $p = r$ (since the second term goes to $0$ as $d \to \infty$ for any fixed $p$). We turn to $(ii)$. We make use of classical tail bounds for sub-Weibull random variables, recalled in Lemma [\[lemma:tail_sum_sub_Weibull\]](#lemma:tail_sum_sub_Weibull){reference-type="ref" reference="lemma:tail_sum_sub_Weibull"}. Denoting $Z_\mu \coloneqq {\rm Tr}(W_\mu S)$, we have $X(S) = \sum_{\mu=1}^n \{|Z_\mu|^r - \mathbb{E}[|Z_\mu|^r]\}$. We decompose $X(S)$ into two parts, i.e. $X(S) = X_1(S) + X_2(S)$, with $$\label{eq:def_Xa_bS} \begin{dcases} X_1(S) &\coloneqq \sum_{\mu=1}^n \left[\min(|Z_\mu|, \sqrt{d})^r - \mathbb{E}\{\min(|Z_\mu|, \sqrt{d})^r\}\right], \\ X_2(S) &\coloneqq \sum_{\mu=1}^n \left([|Z_\mu|^r - d^{r/2}] \mathds{1}\{|Z_\mu| > \sqrt{d}\} - \mathbb{E}\left\{[|Z_\mu|^r - d^{r/2}] \mathds{1}\{|Z_\mu| > \sqrt{d}\}\right\}\right). \end{dcases}$$ We will successively bound $X_1(S), X_2(S)$. To lighten the notation, we do not write their dependence on $S$ in what follows. Observe that $(Z_\mu)_{\mu=1}^n$ are i.i.d. random variables, and that they satisfy the tail bound of eq. [\[eq:ub_pz\]](#eq:ub_pz){reference-type="eqref" reference="eq:ub_pz"}.
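As a sanity check on the moment bound of eq. [\[eq:ub_Z\_pth\]](#eq:ub_Z_pth){reference-type="eqref" reference="eq:ub_Z_pth"}, the first moments of $Z$ are easy to estimate by Monte Carlo (a numerical sketch, not part of the proof; the diagonal choice of $S$ and all sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 200, 20_000

# An arbitrary matrix with ||S||_op = 1: diagonal with +-1 entries,
# so that ||S||_F = sqrt(d), the worst case allowed in eq. (eq:ub_pz).
s_diag = rng.choice([-1.0, 1.0], size=d)

# Z = (x^T S x - Tr[S]) / sqrt(d) with x ~ N(0, I_d), as in the proof.
x = rng.standard_normal((n_samples, d))
z = ((x**2) @ s_diag - s_diag.sum()) / np.sqrt(d)

# For moderate p, E[|Z|^p]^{1/p} stays of order sqrt(p): the sub-Gaussian
# part of the Hanson-Wright tail dominates (here E[Z^2] = 2 exactly).
moments = {p: float((np.abs(z) ** p).mean() ** (1 / p)) for p in (1, 2, 4)}
print(moments)
```

The sub-exponential correction term of eq. [\[eq:ub_Z\_pth\]](#eq:ub_Z_pth){reference-type="eqref" reference="eq:ub_Z_pth"} is exponentially small in $d$ and invisible at these moment orders.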
**Bounding $X_1$ --** Denoting $T_\mu \coloneqq \min(|Z_\mu|, \sqrt{d})$, we have $\mathbb{P}[T_\mu \geq t] \leq 2 \exp\{-C_2 t^2\}$ by the tail bound of eq. [\[eq:ub_pz\]](#eq:ub_pz){reference-type="eqref" reference="eq:ub_pz"}. Moreover, $\mathbb{E}[T_\mu^r] \leq \mathbb{E}[|Z_\mu|^r] \leq C_1$ (depending only on $r$) by eq. [\[eq:ub_Z\_pth\]](#eq:ub_Z_pth){reference-type="eqref" reference="eq:ub_Z_pth"}. Therefore, for every $t > C_1$, we have $$\mathbb{P}[|T_\mu^r - \mathbb{E}[T_\mu^r]| \geq t] = \mathbb{P}[T_\mu^r \geq \mathbb{E}[T_{\mu}^r] + t] \leq 2 e^{-C_2(t+\mathbb{E}[T_\mu^r])^{2/r}} \leq 2 e^{- C_2t^{2/r}}.$$ This implies that $\mathbb{P}[|T_\mu^r - \mathbb{E}[T_\mu^r]| \geq t] \leq 2 e^{-C t^{2/r}}$ for all $t \geq 0$ and some (new) constant $C > 0$, depending only on $r$. We can thus apply $(i)$ of Lemma [\[lemma:tail_sum_sub_Weibull\]](#lemma:tail_sum_sub_Weibull){reference-type="ref" reference="lemma:tail_sum_sub_Weibull"} for $q = 2 / r \in [1, 2]$ and $a_i = 1/n$ (so $\|a\|_2^2 = \|a\|_{q^\star}^q = n^{-1}$), which yields that for all $t \geq 0$ $$\label{eq:ub_P_X1} \mathbb{P}[|X_1| \geq n t] = \mathbb{P}\Bigg[\frac{1}{n} \Bigg|\sum_{\mu=1}^n \{T_\mu^r - \mathbb{E}[T_\mu^r]\} \Bigg| \geq t\Bigg] \leq 2 \exp \left\{-C n \min(t^2, t^{2/r})\right\}.$$ **Bounding $X_2$ --** We proceed similarly, using $(ii)$ of Lemma [\[lemma:tail_sum_sub_Weibull\]](#lemma:tail_sum_sub_Weibull){reference-type="ref" reference="lemma:tail_sum_sub_Weibull"}. Letting $$U_\mu \coloneqq d^{r/2}[|Z_\mu|^r - d^{r/2}] \mathds{1}\{|Z_\mu| > \sqrt{d}\},$$ then $U_\mu \geq 0$, and moreover, by the Cauchy-Schwarz inequality: $$\begin{aligned} \mathbb{E}[U_\mu] &\leq d^{r/2} \sqrt{\mathbb{E}|Z_\mu|^{2r}} \sqrt{\mathbb{P}[|Z_\mu| > \sqrt{d}]}, \\ &\leq C_1 e^{-C_2 d},\end{aligned}$$ for some $C_1, C_2 > 0$ depending only on $r$, using the moments and tail bound of eqs. 
[\[eq:ub_pz\]](#eq:ub_pz){reference-type="eqref" reference="eq:ub_pz"} and [\[eq:ub_Z\_pth\]](#eq:ub_Z_pth){reference-type="eqref" reference="eq:ub_Z_pth"}. Repeating the argument used on $T_\mu$ above (using this time the second part of the tail of eq. [\[eq:ub_pz\]](#eq:ub_pz){reference-type="eqref" reference="eq:ub_pz"}), we then reach that for all $t \geq 0$: $$\mathbb{P}[|U_\mu - \mathbb{E}[U_\mu]| \geq t] \leq 2 e^{-C t^{1/r}}.$$ We can then apply $(ii)$ of Lemma [\[lemma:tail_sum_sub_Weibull\]](#lemma:tail_sum_sub_Weibull){reference-type="ref" reference="lemma:tail_sum_sub_Weibull"} with $q = 1/r \in [1/2,1]$ to reach: $$\begin{aligned} \label{eq:ub_P_X2} \nonumber \mathbb{P}[|X_2| \geq n t] &= \mathbb{P}\Bigg[\frac{1}{n} \Bigg|\sum_{\mu=1}^n \{U_\mu - \mathbb{E}[U_\mu]\} \Bigg| \geq t d^{r/2}\Bigg] \leq 2 \exp \left\{-C \min(n d^r t^2, d^{1/2} (nt)^{1/r})\right\}, \\ &\leq 2 \exp \left\{-C \min(n^{1 + r/2} t^2, n^{1/4 + 1/r} t^{1/r})\right\},\end{aligned}$$ using that $\alpha_1 d^2 \leq n \leq \alpha_2 d^2$. We conclude the proof of Lemma [\[lemma:ub_ellipse_process_op_sphere\]](#lemma:ub_ellipse_process_op_sphere){reference-type="ref" reference="lemma:ub_ellipse_process_op_sphere"} by combining eqs. [\[eq:ub_P\_X1\]](#eq:ub_P_X1){reference-type="eqref" reference="eq:ub_P_X1"} and [\[eq:ub_P\_X2\]](#eq:ub_P_X2){reference-type="eqref" reference="eq:ub_P_X2"}, along with the union bound $\mathbb{P}[|X| \geq nt] \leq \mathbb{P}[|X_1| \geq nt/2] + \mathbb{P}[|X_2| \geq nt/2]$. $\square$ ## Free entropy universality for matrix models {#subsec:universality_matrix} In this section we state and prove a general universality theorem for the asymptotic free entropy in a large class of matrix models, under a "uniform one-dimensional central limit theorem" assumption (or pointwise normality). We first need to define such an assumption.
[\[def:one_dimensional_CLT\]]{#def:one_dimensional_CLT label="def:one_dimensional_CLT"} Let $d \geq 1$, and let $\rho$ be a probability distribution on $\mcS_d$. We say that $\rho$ satisfies a *one-dimensional CLT with respect to the set $A_d \subseteq \mcS_d$* if: - The mean and covariance of $\rho$ match those of the $\mathrm{GOE}(d)$ distribution, i.e. for $W \sim \rho$ and $G \sim \mathrm{GOE}(d)$, we have $\mathbb{E}[W] = \mathbb{E}[G] = 0$ and for all $i\leq j$ and $k \leq l$: $\mathbb{E}[W_{ij} W_{kl}] = \mathbb{E}[G_{ij} G_{kl}] = \delta_{ik} \delta_{jl} (1+\delta_{ijkl})/d$. - For any bounded Lipschitz function $\varphi$, we have: $$\label{eq:1d_clt} \lim_{d \to \infty} \sup_{S \in A_d} \Big| \mathbb{E}_{W \sim \rho}\big[\varphi\big({\rm Tr}[W S]\big)\big] - \mathbb{E}_{G \sim \mathrm{GOE}(d)}\big[\varphi\big({\rm Tr}[G S]\big)\big]\Big| = 0.$$ We can now state the universality theorem for the free entropy. Its proof is in large part an adaptation of the arguments for Theorem 1 and Lemma 1 in [@montanari2022universality] (see also [@hu2022universality; @gerace2022gaussian; @dandi2023universality]). We sketch the ideas of its proof in the following, deferring some technicalities and adaptations of the arguments of [@montanari2022universality] to appendices. [\[thm:universality_matrix\]]{#thm:universality_matrix label="thm:universality_matrix"} Let $n,d \geq 1$ and $n,d \to \infty$ with $\alpha_1 d^2 \leq n \leq \alpha_2 d^2$ for some $0 < \alpha_1 \leq \alpha_2$. We are given: 1. [\[hyp:P0\]]{#hyp:P0 label="hyp:P0"} $P_0$ a probability distribution on $\mcS_d$, such that $\mathop{\mathrm{supp}}(P_0) \subseteq B_2(C_0 \sqrt{d})$, for a constant $C_0 > 0$. 2. [\[hyp:phi\]]{#hyp:phi label="hyp:phi"} $\phi : \mathbb{R}\to\mathbb{R}_+$ a bounded differentiable function with bounded derivative. 3. [\[hyp:Ad\]]{#hyp:Ad label="hyp:Ad"} A sequence of symmetric convex sets $A_d$ such that $\mathop{\mathrm{supp}}(P_0) \subseteq A_d$. 4.
[\[hyp:rho\]]{#hyp:rho label="hyp:rho"} $\rho$ a probability distribution on $\mcS_d$, which satisfies a one-dimensional CLT with respect to $A_d$ as per Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"}. For $W_1, \cdots, W_n \in \mcS_d$ we define the free entropy: $$\label{eq:def_fe_matrix_universality} F_d(\{W_\mu\}) \coloneqq \frac{1}{d^2} \log \int P_0(\mathrm{d}S) \exp\Big\{-\sum_{\mu=1}^n \phi\left({\rm Tr}[W_\mu S]\right)\Big\}.$$ Then for any bounded differentiable function $\psi$ with bounded Lipschitz derivative we have $$\label{eq:equivalence_asymptotic_fe} \lim_{d \to \infty} \left| \mathbb{E}_{\{W_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\rho} \psi[F_d(\{W_\mu\})] - \mathbb{E}_{\{G_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)} \psi[F_d(\{G_\mu\})] \right| = 0.$$ **Remark I --** One could straightforwardly relax the hypothesis $\mathop{\mathrm{supp}}(P_0) \subseteq A_d$ in Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} to the weaker condition $d^{-2} \log P_0(A_d^c) \to -\infty$ as $d \to \infty$. **Remark II --** Note that our setup differs slightly from the one of [@montanari2022universality], as we consider distributions $P_0$ with possibly continuous support, and (more importantly) for a fixed $S \in \mcS_d$, the projections $\{{\rm Tr}[W_\mu S]\}_{\mu=1}^n$ are not sub-Gaussian when $W_\mu \sim \mathrm{Ellipse}(d)$, but only sub-exponential. Nevertheless, we will see that the approach of [@montanari2022universality] can in large part be adapted to prove Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"}, thanks to the results we showed in Section [4.1](#subsec:lipschitz_energy){reference-type="ref" reference="subsec:lipschitz_energy"}.
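**Remark III --** As a toy illustration of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"} (a numerical sketch, not used in the proofs): with the convention ${\rm Tr}[W S] \overset{\rm d}{=}(x^\intercal S x - {\rm Tr}[S])/\sqrt{d}$ for $x \sim \mcN(0, \mathrm{I}_d)$, the choice $S = \mathrm{I}_d$ reduces the projection of $W$ to a centered, rescaled chi-square variable, while ${\rm Tr}[G S] = {\rm Tr}[G] \sim \mcN(0,2)$ exactly; the smoothed gap of eq. [\[eq:1d_clt\]](#eq:1d_clt){reference-type="eqref" reference="eq:1d_clt"} then shrinks as $d$ grows. The test function and the sizes below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 500_000
phi = lambda t: np.clip(t, -1.0, 1.0)  # an arbitrary bounded Lipschitz test function

gaps = {}
for d in (10, 2500):
    # Tr[W S] for S = I_d: equal in law to (x^T x - d)/sqrt(d) with
    # x ~ N(0, I_d), i.e. a centered, rescaled chi^2_d variable.
    w_proj = (rng.chisquare(d, size=n_samples) - d) / np.sqrt(d)
    # Tr[G S] = Tr[G] ~ N(0, 2) exactly, for G ~ GOE(d).
    g_proj = rng.normal(0.0, np.sqrt(2.0), size=n_samples)
    gaps[d] = float(abs(phi(w_proj).mean() - phi(g_proj).mean()))
print(gaps)  # the gap shrinks as d grows
```

The gap at small $d$ is driven by the skewness of the chi-square law, which disappears at the rate quantified by the Berry-Esseen argument used later in the proof of Lemma [\[lemma:1d_clt_ellipse\]](#lemma:1d_clt_ellipse){reference-type="ref" reference="lemma:1d_clt_ellipse"}.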
**Sketch of proof of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} --** Since $\mathop{\mathrm{supp}}(P_0) \subseteq A_d$, the integral in eq. [\[eq:def_fe_matrix_universality\]](#eq:def_fe_matrix_universality){reference-type="eqref" reference="eq:def_fe_matrix_universality"} can be restricted to $S \in A_d$. We make use of an interpolation argument to show the universality of the free entropy. We define, for $t \in [0, \pi/2]$ and $\mu \in [n]$: $$\label{eq:def_U_tU} \begin{dcases} U_\mu(t) &\coloneqq \cos(t) W_\mu + \sin(t) G_\mu, \\ \tU_\mu(t) &\coloneqq \frac{\partial U_\mu(t)}{\partial t} = -\sin(t) W_\mu + \cos(t) G_\mu. \end{dcases}$$ Note that $\{U_\mu\}_{\mu=1}^n$ are still i.i.d., and are smooth functions of $t$. Moreover, if $W_\mu$ were also a $\mathrm{GOE}(d)$ matrix, then $U_\mu(t), \tU_\mu(t)$ would be independent $\mathrm{GOE}(d)$ matrices. By the fundamental theorem of calculus: $$\label{eq:ftc} |\mathbb{E}\psi[F_d(W)] - \mathbb{E}\psi[F_d(G)]| = \left|\int_{0}^{\pi/2} \frac{\partial}{\partial t} \{\mathbb{E}\, \psi[F_d(U(t))]\} \mathrm{d}t\right| \overset{\mathrm{(a)}}{\leq}\int_{0}^{\pi/2} \left|\mathbb{E}\frac{\partial \psi[F_d(U(t))]}{\partial t}\right| \, \mathrm{d}t,$$ where $(\rm a)$ follows from exchanging derivative and expectation (justified by dominated convergence, as $\psi[F_d(U(t))]$ is continuously differentiable on $[0, \pi/2]$) and from the triangle inequality.
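The observation that $U_\mu(t)$ and $\tU_\mu(t)$ would be independent $\mathrm{GOE}(d)$ matrices if $W_\mu$ were itself $\mathrm{GOE}(d)$ is simply the rotational invariance of pairs of i.i.d. Gaussians; it can be checked numerically (a sketch; the sizes, the time $t$ and the test directions below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, t = 5, 20_000, 0.7  # small arbitrary sizes and interpolation time

def goe(size, rng):
    # GOE(d) with the paper's normalization: E[G_ij^2] = (1 + delta_ij)/d.
    a = rng.standard_normal((size, size))
    return (a + a.T) / np.sqrt(2 * size)

s1, s2 = goe(d, rng), goe(d, rng)  # two fixed test directions

u_proj, udot_proj = np.empty(m), np.empty(m)
for k in range(m):
    w, g = goe(d, rng), goe(d, rng)        # here W is GOE as well
    u = np.cos(t) * w + np.sin(t) * g      # U(t)
    udot = -np.sin(t) * w + np.cos(t) * g  # dU(t)/dt
    u_proj[k] = np.trace(u @ s1)
    udot_proj[k] = np.trace(udot @ s2)

# U(t) has the GOE covariance (Var Tr[G s1] = 2 ||s1||_F^2 / d), and is
# uncorrelated with -- hence, being jointly Gaussian, independent of -- dU/dt.
var_ratio = u_proj.var() / (2 * np.sum(s1**2) / d)
corr = np.corrcoef(u_proj, udot_proj)[0, 1]
print(var_ratio, corr)
```

When $W_\mu \not\sim \mathrm{GOE}(d)$, only the covariance structure survives; this is exactly the gap the one-dimensional CLT assumption is designed to bridge.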
We will deduce Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} if we can show the following two lemmas: [\[lemma:domination\]]{#lemma:domination label="lemma:domination"} Under the hypotheses of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"}: $$\int_{0}^{\pi/2} \sup_{d \geq 1} \left|\mathbb{E}\frac{\partial \psi[F_d(U(t))]}{\partial t}\right| \, \mathrm{d}t < \infty.$$ [\[lemma:pointwise_limit\]]{#lemma:pointwise_limit label="lemma:pointwise_limit"} Under the hypotheses of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"}, for any $t \in (0, \pi/2)$: $$\lim_{d \to \infty} \mathbb{E}\, \frac{\partial \psi[F_d(U(t))]}{\partial t} = 0.$$ Indeed, plugging Lemmas [\[lemma:domination\]](#lemma:domination){reference-type="ref" reference="lemma:domination"} and [\[lemma:pointwise_limit\]](#lemma:pointwise_limit){reference-type="ref" reference="lemma:pointwise_limit"} into eq. [\[eq:ftc\]](#eq:ftc){reference-type="eqref" reference="eq:ftc"} and taking the $d \to \infty$ limit using dominated convergence ends the proof of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"}. We therefore focus on proving these two lemmas in the following. For later use, we record the result of the elementary computation of the derivative: $$\label{eq:derivative_psi} \frac{\partial \psi[F_d(U(t))]}{\partial t} = -\frac{\psi'[F_d(U(t))]}{d^2} \sum_{\mu=1}^n \frac{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu(t) S])} \left({\rm Tr}[S \tU_\mu(t)] \, \phi'({\rm Tr}[U_\mu(t) S])\right)}{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu(t) S])}}.$$ Because $\{G_\mu, W_\mu\}$ are i.i.d.
we get further: $$\label{eq:ee_derivative_psi} \mathbb{E}\, \frac{\partial \psi[F_d(U(t))]}{\partial t} = - \frac{n}{d^2} \mathbb{E}\Bigg[\psi'[F_d(U(t))] \frac{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu(t) S])} \left({\rm Tr}[S \tU_1(t)] \, \phi'({\rm Tr}[U_1(t) S])\right)}{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu(t) S])}}\Bigg].$$ Note that if $W_\mu$ were also a $\textrm{GOE}(d)$ matrix, then for any $t$, $U_\mu(t)$ and $\tU_\mu(t)$ would be independent $\textrm{GOE}(d)$ matrices. The main idea behind the interpolation is that the matrix $W_\mu$ only appears through one-dimensional projections against a matrix $S$. We will then use Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"} to argue that one can effectively replace $W_\mu$ by a $\mathrm{GOE}(d)$ matrix, which by the argument above would mean that we can consider the case of independent $\mathrm{GOE}(d)$ matrices $U_\mu(t)$ and $\tU_\mu(t)$. In this case, the RHS of eq. [\[eq:ee_derivative_psi\]](#eq:ee_derivative_psi){reference-type="eqref" reference="eq:ee_derivative_psi"} would be $0$, since there is only a single term involving $\tU_1(t)$, which has zero mean: this crucial idea is the intuition behind Lemma [\[lemma:pointwise_limit\]](#lemma:pointwise_limit){reference-type="ref" reference="lemma:pointwise_limit"}. The details of the proofs of Lemmas [\[lemma:domination\]](#lemma:domination){reference-type="ref" reference="lemma:domination"} and [\[lemma:pointwise_limit\]](#lemma:pointwise_limit){reference-type="ref" reference="lemma:pointwise_limit"} are fairly technical and closely follow those of their counterparts in [@montanari2022universality]. For this reason, we defer them to Appendix [8.2](#subsec_app:proof_universality_matrix){reference-type="ref" reference="subsec_app:proof_universality_matrix"}.
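Since eq. [\[eq:derivative_psi\]](#eq:derivative_psi){reference-type="eqref" reference="eq:derivative_psi"} drives the whole interpolation, it can be sanity-checked by finite differences on a toy instance (a numerical sketch, not part of the proof; the sizes, $\psi = \tanh$, the Gaussian-bump $\phi$, and $P_0$ uniform over a few fixed matrices are all arbitrary choices compatible with the hypotheses):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, K = 3, 4, 6  # arbitrary toy sizes

sym = lambda a: (a + a.T) / 2
W = [sym(rng.standard_normal((d, d))) for _ in range(n)]
G = [sym(rng.standard_normal((d, d))) for _ in range(n)]
S = [0.5 * sym(rng.standard_normal((d, d))) for _ in range(K)]  # P_0 uniform on S

phi = lambda z: np.exp(-z * z)           # bounded, with bounded derivative
dphi = lambda z: -2 * z * np.exp(-z * z)
psi, dpsi = np.tanh, lambda f: 1 - np.tanh(f) ** 2

U = lambda t, mu: np.cos(t) * W[mu] + np.sin(t) * G[mu]
Udot = lambda t, mu: -np.sin(t) * W[mu] + np.cos(t) * G[mu]

def F(t):  # free entropy of eq. (def_fe_matrix_universality), P_0 uniform on S
    E = np.array([sum(phi(np.trace(U(t, mu) @ s)) for mu in range(n)) for s in S])
    return np.log(np.mean(np.exp(-E))) / d**2

def dpsiF(t):  # right-hand side of eq. (derivative_psi)
    E = np.array([sum(phi(np.trace(U(t, mu) @ s)) for mu in range(n)) for s in S])
    gibbs = np.exp(-E) / np.exp(-E).sum()
    total = sum(
        (gibbs * np.array([np.trace(s @ Udot(t, mu)) * dphi(np.trace(U(t, mu) @ s))
                           for s in S])).sum()
        for mu in range(n)
    )
    return -dpsi(F(t)) * total / d**2

t, h = 0.4, 1e-5
err = abs((psi(F(t + h)) - psi(F(t - h))) / (2 * h) - dpsiF(t))
print(err)
```

The central finite difference and the analytic expression agree up to the $\mcO(h^2)$ discretization error.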
## Proof of Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} {#subsec:proof_universality_gs} ### Consequences of universality for ellipsoid fitting {#subsubsec:universality_ellipse} We investigate here the consequences of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} for the ellipsoid fitting problem. It follows from the Berry-Esseen central limit theorem [@o2014analysis] that the distribution $\mathrm{Ellipse}(d)$ satisfies uniform pointwise normality on a large set of matrices (in the sense of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"}). [\[lemma:1d_clt_ellipse\]]{#lemma:1d_clt_ellipse label="lemma:1d_clt_ellipse"} Let $d \geq 1$ and $W \sim \mathrm{Ellipse}(d)$. Fix any $\eta \in (0,1/2)$, and let $A_d \coloneqq \{S \in \mcS_d \, : \, {\rm Tr}[|S|^3] \leq d^{3/2-\eta} \}$. Then $A_d$ is convex and symmetric, and the law of $W$ satisfies a one-dimensional CLT with respect to $A_d$, in the sense of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"}. **Remark --** This lemma makes crucial use of the Gaussian nature of the vectors, and more specifically it relies on their rotation invariance and the first moments of their norm, as is clear from the proof. On the other hand, for vectors sampled from other distributions, such as $x \sim \mathrm{Unif}(\{\pm 1\}^d)$ or $x \sim \mathrm{Unif}(\mathbb{S}^{d-1})$, it is easy to see that Lemma [\[lemma:1d_clt_ellipse\]](#lemma:1d_clt_ellipse){reference-type="ref" reference="lemma:1d_clt_ellipse"} cannot hold: indeed, $S = \mathrm{I}_d$ is such that ${\rm Tr}[S W] = 0$ deterministically, while ${\rm Tr}[S G] = {\rm Tr}[G] \sim \mcN(0, 2)$ for $G \sim \mathrm{GOE}(d)$.
This is consistent: for these two distributions an ellipsoid fit always exists (the sphere itself), and therefore Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} cannot possibly hold. **Proof of Lemma [\[lemma:1d_clt_ellipse\]](#lemma:1d_clt_ellipse){reference-type="ref" reference="lemma:1d_clt_ellipse"} --** Note that $A_d$ is a centered ball for the $S_3$-norm, and is therefore convex and symmetric. The first- and second-moment conditions of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"} follow from a direct calculation. We focus on proving condition $(ii)$ of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"}. Fix $S \in A_d$, with eigenvalues $(\lambda_i)_{i=1}^d$. With $W \sim \mathrm{Ellipse}(d)$ and $G \sim \mathrm{GOE}(d)$, let $$\begin{dcases} X &\coloneqq {\rm Tr}[S W], \\ Y &\coloneqq {\rm Tr}[S G]. \end{dcases}$$ It is trivial to see that $Y \sim \mcN(0, 2 {\rm Tr}[S^2]/d)$, so that $Y \overset{\rm d}{=}d^{-1/2} \sum_{i=1}^d \lambda_i z_i$ for $z_i \overset{\mathrm{i.i.d.}}{\sim}\mcN(0,2)$. Moreover, $$X \overset{\rm d}{=}\frac{1}{\sqrt{d}} \sum_{i=1}^d \lambda_i (x_i^2 - 1),$$ with $x_i \overset{\mathrm{i.i.d.}}{\sim}\mcN(0,1)$. We use the Berry-Esseen central limit theorem, and in particular the formulation of Chapter 11 of [@o2014analysis] -- itself a simple consequence of the Lindeberg exchange method. [\[lemma:BE_lipschitz\]]{#lemma:BE_lipschitz label="lemma:BE_lipschitz"} There exists a universal constant $C > 0$ such that the following holds. Let $p \geq 1$ and $X_1, \cdots, X_p$ and $Y_1, \cdots, Y_p$ be independent random variables, such that $\mathbb{E}[X_i] = \mathbb{E}[Y_i]$ and $\mathbb{E}[X_i^2] = \mathbb{E}[Y_i^2]$ for all $i \in [p]$.
Let $\varphi : \mathbb{R}\to \mathbb{R}$ be a Lipschitz function with Lipschitz constant $\|\varphi\|_L$. Then $$\left|\mathbb{E}\, \varphi\left(\sum_{i=1}^p X_i\right) - \mathbb{E}\, \varphi\left(\sum_{i=1}^p Y_i\right)\right| \leq C \|\varphi\|_L \left[\sum_{i=1}^p \left(\mathbb{E}\, |X_i|^3 + \mathbb{E}\, |Y_i|^3\right)\right]^{1/3}.$$ Lemma [\[lemma:BE_lipschitz\]](#lemma:BE_lipschitz){reference-type="ref" reference="lemma:BE_lipschitz"} yields: $$| \mathbb{E}\, \varphi(X) - \mathbb{E}\, \varphi(Y)| \leq C \|\varphi\|_L \left[B\frac{{\rm Tr}[|S|^3]}{d^{3/2}} \right]^{1/3},$$ with $B = \mathbb{E}[|z^2 - 1|^3] + 2^{3/2} \mathbb{E}|z|^3$ for $z \sim \mcN(0,1)$. Using the definition of $A_d$, we reach: $$\sup_{S \in A_d} | \mathbb{E}\, \varphi(X) - \mathbb{E}\, \varphi(Y)| \leq C \|\varphi\|_L \sup_{S \in A_d} \left[\frac{1}{\sqrt{d}} \frac{{\rm Tr}|S|^3}{d} \right]^{1/3} \leq C \|\varphi\|_L \, d^{-\eta/3} \to 0.$$ This ends the proof. $\square$ We can now state the main result of this section, a corollary of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} and Lemma [\[lemma:1d_clt_ellipse\]](#lemma:1d_clt_ellipse){reference-type="ref" reference="lemma:1d_clt_ellipse"}. [\[cor:universality_ellipse\]]{#cor:universality_ellipse label="cor:universality_ellipse"} Let $n,d \geq 1$ and $n,d \to \infty$ with $\alpha_1 d^2 \leq n \leq \alpha_2 d^2$ for some $0 < \alpha_1 \leq \alpha_2$. Let $P_0$ be a probability distribution such that $\mathop{\mathrm{supp}}(P_0) \subseteq B_\mathrm{op}(C_0)$ for some constant $C_0 > 0$. Let $\phi : \mathbb{R}\to \mathbb{R}_+$ be a bounded differentiable function with bounded derivative.
For $X_1, \cdots, X_n \in \mcS_d$ we define the free entropy: $$F_d(\{X_\mu\}) \coloneqq \frac{1}{d^2} \log \int P_0(\mathrm{d}S) \exp\Big\{-\sum_{\mu=1}^n \phi\left({\rm Tr}[X_\mu S]\right)\Big\}.$$ Then for any $\psi$ such that $\|\psi\|_\infty, \|\psi'\|_\infty, \|\psi'\|_L < \infty$ we have $$\label{eq:equivalence_asymptotic_fe_ellipse} \lim_{d \to \infty} \left| \mathbb{E}_{\{W_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\mathrm{Ellipse}(d)} \psi[F_d(\{W_\mu\})] - \mathbb{E}_{\{G_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)} \psi[F_d(\{G_\mu\})] \right| = 0.$$ We notice that the only requirement for Corollary [\[cor:universality_ellipse\]](#cor:universality_ellipse){reference-type="ref" reference="cor:universality_ellipse"} to hold is $\mathop{\mathrm{supp}}(P_0) \subseteq B_F(C\sqrt{d}) \cap B_3(d^{1/2-\eta})$ for some $C > 0$ and $\eta > 0$, a weaker condition than $\mathop{\mathrm{supp}}(P_0) \subseteq B_\mathrm{op}(C_0)$. **Proof of Corollary [\[cor:universality_ellipse\]](#cor:universality_ellipse){reference-type="ref" reference="cor:universality_ellipse"} --** Since $B_\mathrm{op}(C_0) \subseteq B_2(C_0 \sqrt{d})$, hypothesis [\[hyp:P0\]](#hyp:P0){reference-type="ref" reference="hyp:P0"} of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} is satisfied. Condition [\[hyp:phi\]](#hyp:phi){reference-type="ref" reference="hyp:phi"} is satisfied by hypothesis. Since $B_\mathrm{op}(C_0) \subseteq B_3(C_0 d^{1/3}) \subseteq B_3(d^{1/2 - \eta})$ for any $\eta \in (0, 1/6)$, condition [\[hyp:Ad\]](#hyp:Ad){reference-type="ref" reference="hyp:Ad"} of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} is satisfied with $A_d = B_3(d^{1/2 - \eta})$.
Finally, Lemma [\[lemma:1d_clt_ellipse\]](#lemma:1d_clt_ellipse){reference-type="ref" reference="lemma:1d_clt_ellipse"} verifies condition [\[hyp:rho\]](#hyp:rho){reference-type="ref" reference="hyp:rho"} for this choice of $A_d$. All in all we can apply Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"}, from which the conclusion follows. $\square$ ### Proof of Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} {#proof-of-proposition-propuniversality_gs} We are now ready to prove Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"}, taking a "low-temperature" limit. Such arguments are classical in rigorous statistical mechanics, see e.g. Appendix A of [@montanari2022universality]. Notice that the restriction $B \subseteq B_\mathrm{op}(C_0)$ will be critical, because we proved an upper bound on the Lipschitz constant of the energy with respect to the operator norm, cf. Lemma [\[lemma:energy_change_small_ball\]](#lemma:energy_change_small_ball){reference-type="ref" reference="lemma:energy_change_small_ball"}. Recall the definition of the energy function: $$E_{\{X_\mu\}}(S) \coloneqq \frac{1}{d^2} \sum_{\mu=1}^n \phi[{\rm Tr}(X_\mu S)].$$ We fix $\eta \in (0,1)$, and let $\mcN_\eta \subseteq B$ be a minimal $\eta$-net of $B$ for $\|\cdot\|_\mathrm{op}$. Since $B \subseteq B_\mathrm{op}(C_0)$ and $\dim(\mcS_d) = d(d+1)/2$, it follows from standard covering number upper bounds [@vershynin2018high; @van2014probability] that $$\label{eq:net_upper_bound} \log |\mcN_\eta| = \log \mcN(B, \|\cdot\|_\mathrm{op}, \eta) \leq \log \mcN\left(B_\mathrm{op}(C_0), \|\cdot\|_\mathrm{op}, \frac{\eta}{2}\right) \leq d^2 \log \frac{K}{\eta} ,$$ for some $K > 0$ depending on $C_0$. Recall the definition of $\mathrm{GS}_d(\{X_\mu\})$ in eq.
[\[eq:def_gs\]](#eq:def_gs){reference-type="eqref" reference="eq:def_gs"}. We define: $$\mathrm{GS}_d(\eta,\{X_\mu\}) \coloneqq \inf_{S \in \mcN_\eta} E_{\{X_\mu\}}(S).$$ We will show the following two lemmas: [\[lemma:universality_GS_net\]]{#lemma:universality_GS_net label="lemma:universality_GS_net"} For any $\eta > 0$ and any $\psi$ such that $\|\psi\|_\infty, \|\psi'\|_\infty, \|\psi'\|_L < \infty$: $$\lim_{d \to \infty} \left| \mathbb{E}_{\{W_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\mathrm{Ellipse}(d)} \psi[\mathrm{GS}_d(\eta,\{W_\mu\})] - \mathbb{E}_{\{G_\mu\} \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)} \psi[\mathrm{GS}_d(\eta,\{G_\mu\})] \right| = 0.$$ [\[lemma:GS_net\]]{#lemma:GS_net label="lemma:GS_net"} Let $X_1, \cdots, X_n \overset{\mathrm{i.i.d.}}{\sim}\rho$, with $\rho \in \{\mathrm{GOE}(d),\mathrm{Ellipse}(d)\}$. Then, with probability at least $1 - 2 e^{-n}$: $$|\mathrm{GS}_d(\eta,\{X_\mu\}) - \mathrm{GS}_d(\{X_\mu\})| \leq C \|\phi'\|_\infty \cdot \eta.$$ These results are proven below; let us first see how they conclude the proof of Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"}. We fix $\eta \in (0,1)$. We have $$\begin{aligned} \left| \mathbb{E}\psi[\mathrm{GS}_d(\{W_\mu\})] - \mathbb{E}\psi[\mathrm{GS}_d(\{G_\mu\})] \right| \leq &\left| \mathbb{E}\psi[\mathrm{GS}_d(\eta,\{W_\mu\})] - \mathbb{E}\psi[\mathrm{GS}_d(\eta, \{G_\mu\})] \right| \\ &+ \|\psi\|_L \sum_{X \in \{W, G\}} \mathbb{E}|\mathrm{GS}_d(\eta,\{X_\mu\}) - \mathrm{GS}_d(\{X_\mu\})|. \end{aligned}$$ The first term goes to $0$ as $d \to \infty$ by Lemma [\[lemma:universality_GS_net\]](#lemma:universality_GS_net){reference-type="ref" reference="lemma:universality_GS_net"}.
Finally, using Lemma [\[lemma:GS_net\]](#lemma:GS_net){reference-type="ref" reference="lemma:GS_net"} and the Cauchy-Schwarz inequality, we have: $$\begin{aligned} \mathbb{E}|\mathrm{GS}_d(\eta,\{X_\mu\}) - \mathrm{GS}_d(\{X_\mu\})| &\leq e^{-n/2} \sqrt{2\mathbb{E}|\mathrm{GS}_d(\eta,\{X_\mu\}) - \mathrm{GS}_d(\{X_\mu\})|^2} + C \|\phi'\|_\infty \eta, \\ &\overset{\mathrm{(a)}}{\leq}4 \alpha \|\phi\|_\infty e^{-n/2} + C \|\phi'\|_\infty \eta, \end{aligned}$$ using in $(\rm a)$ that $|E(S)| \leq \alpha \|\phi\|_\infty$. Letting $d \to \infty$, we get $$\lim_{d \to \infty} \left| \mathbb{E}\psi[\mathrm{GS}_d(\{W_\mu\})] - \mathbb{E}\psi[\mathrm{GS}_d(\{G_\mu\})] \right| \leq C \|\phi'\|_\infty \|\psi\|_L \cdot \eta.$$ Taking the limit $\eta \to 0$ ends the proof of eq. [\[eq:universality_gs\]](#eq:universality_gs){reference-type="eqref" reference="eq:universality_gs"}. The claim of eq. [\[eq:universality_gs_probabilities\]](#eq:universality_gs_probabilities){reference-type="eqref" reference="eq:universality_gs_probabilities"} can be obtained easily by picking $\psi$ approximating an indicator function, see e.g. Section A.1.3 of [@montanari2022universality] for the details of this argument.
$\qed$ **Proof of Lemma [\[lemma:universality_GS_net\]](#lemma:universality_GS_net){reference-type="ref" reference="lemma:universality_GS_net"} --** We define, for $\beta > 0$: $$F_d(\eta, \beta, \{X_\mu\}) \coloneqq \frac{1}{d^2 \beta} \log \frac{1}{|\mcN_\eta|} \sum_{S \in \mcN_\eta} \exp\Big\{-\beta \sum_{\mu=1}^n \phi\left({\rm Tr}[X_\mu S]\right)\Big\}.$$ Using Corollary [\[cor:universality_ellipse\]](#cor:universality_ellipse){reference-type="ref" reference="cor:universality_ellipse"} with $P_0$ being the uniform distribution over $\mcN_\eta$, we have, for any $\beta > 0$: $$\label{eq:universality_fe_net} \lim_{d \to \infty} \left| \mathbb{E}\psi[F_d(\eta, \beta, \{W_\mu\})] - \mathbb{E}\psi[F_d(\eta, \beta, \{G_\mu\})] \right| = 0.$$ Moreover, for any fixed $d, \eta$, we have $\mathrm{GS}_d(\eta, \{X_\mu\}) = \lim_{\beta \to \infty} F_d(\eta, \beta, \{X_\mu\})$. Thus: $$\label{eq:gs_fenergy_net} |\mathrm{GS}_d(\eta, \{X_\mu\}) - F_d(\eta, \beta, \{X_\mu\})| \leq \int_{\beta}^\infty \left|\frac{\partial F_d(\eta, s, \{X_\mu\})}{\partial s}\right| \mathrm{d}s.$$ Defining the "Gibbs" measure for $S \in \mcN_\eta$: $$\mathbb{P}_{\beta}(S) \coloneqq \frac{\exp\left\{-\beta \sum_{\mu=1}^n \phi\left({\rm Tr}[X_\mu S]\right)\right\}}{\sum_{S' \in \mcN_\eta} \exp\left\{-\beta \sum_{\mu=1}^n \phi\left({\rm Tr}[X_\mu S']\right)\right\}},$$ it is easy to check that $$\begin{aligned} \left|\frac{\partial F_d(\eta, s, \{X_\mu\})}{\partial s}\right| &= \frac{1}{s^2 d^2} \left|\sum_{S \in \mcN_\eta} \mathbb{P}_{s}(S) \log \mathbb{P}_{s}(S) + \log |\mcN_\eta| \right|, \\ &\overset{\mathrm{(a)}}{\leq}\frac{1}{s^2 d^2} \log |\mcN_\eta|, \\ &\overset{\mathrm{(b)}}{\leq}\frac{1}{s^2} \log \frac{K}{\eta},\end{aligned}$$ where $(\rm a)$ follows from the fact that the uniform distribution over $\mcN_\eta$ maximizes the entropy, and $(\rm b)$ is eq. [\[eq:net_upper_bound\]](#eq:net_upper_bound){reference-type="eqref" reference="eq:net_upper_bound"}. Plugging the result back into eq.
[\[eq:gs_fenergy_net\]](#eq:gs_fenergy_net){reference-type="eqref" reference="eq:gs_fenergy_net"} we get: $$\begin{aligned} \label{eq:gs_fenergy_net_2} \nonumber |\mathrm{GS}_d(\eta, \{X_\mu\}) - F_d(\eta, \beta, \{X_\mu\})| &\leq \log \left(\frac{K}{\eta}\right) \int_{\beta}^\infty \frac{\mathrm{d}s}{s^2}, \\ &\leq\frac{1}{\beta} \log \left(\frac{K}{\eta}\right).\end{aligned}$$ Combining eqs. [\[eq:universality_fe_net\]](#eq:universality_fe_net){reference-type="eqref" reference="eq:universality_fe_net"} and [\[eq:gs_fenergy_net_2\]](#eq:gs_fenergy_net_2){reference-type="eqref" reference="eq:gs_fenergy_net_2"} we get, for any $\beta > 0$: $$\begin{aligned} \limsup_{d \to \infty} \left| \mathbb{E}\psi[\mathrm{GS}_d(\eta, \{W_\mu\})] - \mathbb{E}\psi[\mathrm{GS}_d(\eta, \{G_\mu\})] \right| & \leq \|\psi\|_L \sum_{X \in \{W, G\}} \mathbb{E}|\mathrm{GS}_d(\eta, \{X_\mu\}) - F_d(\eta, \beta, \{X_\mu\})|, \\ &\leq \frac{2 \|\psi\|_L}{\beta} \log \left(\frac{K}{\eta}\right).\end{aligned}$$ Taking the limit $\beta \to \infty$ ends the proof of Lemma [\[lemma:universality_GS_net\]](#lemma:universality_GS_net){reference-type="ref" reference="lemma:universality_GS_net"}. $\square$ **Proof of Lemma [\[lemma:GS_net\]](#lemma:GS_net){reference-type="ref" reference="lemma:GS_net"} --** Note that $\mathrm{GS}_d(\eta, \{X_\mu\}) \geq \mathrm{GS}_d(\{X_\mu\})$ since $\mcN_\eta \subseteq B$. The other side of this inequality is a direct consequence of Lemma [\[lemma:energy_change_small_ball\]](#lemma:energy_change_small_ball){reference-type="ref" reference="lemma:energy_change_small_ball"}. Indeed, assuming that $E(S)$ is $C \|\phi'\|_\infty$-Lipschitz with respect to the operator norm, let us fix $S^\star \in B$ such that $E(S^\star) = \mathrm{GS}_d(\{X_\mu\})$ (since $B$ is closed and bounded it is compact, therefore this minimizer exists). 
Letting $S \in \mcN_\eta$ such that $\|S^\star - S\|_\mathrm{op}\leq \eta$, we have $$\begin{aligned} \mathrm{GS}_d(\{X_\mu\}) &= E(S^\star), \\ &\geq E(S) - |E(S^\star) - E(S)|, \\ &\geq \mathrm{GS}_d(\eta, \{X_\mu\}) - C \|\phi'\|_\infty \cdot \eta,\end{aligned}$$ which ends the proof. $\square$ # Acknowledgements {#acknowledgements .unnumbered} The authors are grateful to March Boedihardjo, Tim Kunisky, Petar Nizić-Nikolac, and Joel Tropp for insightful discussions and suggestions. # A classical concentration inequality {#sec_app:classical} We will make use of the following elementary concentration inequality, a generalization of Bernstein's inequality for $\psi_q$ tails [@talagrand1994supremum; @hitczenko1997moment; @adamczak2011restricted]. [\[lemma:tail_sum_sub_Weibull\]]{#lemma:tail_sum_sub_Weibull label="lemma:tail_sum_sub_Weibull"} Let $q \in (0,2]$, and $W_1, \cdots, W_n$ be i.i.d. centered random variables satisfying $\mathbb{P}[|W_1| \geq t] \leq 2 e^{-C t^q}$. - If $q \in [1,2]$, then for all $a \in \mathbb{R}^n$ and all $t \geq 0$: $$\mathbb{P}\left[\left|\sum_{\mu=1}^n a_\mu W_\mu\right| \geq t\right] \leq 2 \exp\Big\{-c \min\left(\frac{t^2}{\|a\|_2^2}, \frac{t^q}{\|a\|_{q^\star}^q}\right)\Big\},$$ where $q^\star \in [2, + \infty]$ with $1/q + 1/q^\star = 1$. - If $q \in [1/2, 1]$, then for all $t > 0$ $$\mathbb{P}\left[\left|\frac{1}{n}\sum_{\mu=1}^n W_\mu\right| \geq t\right] \leq 2 \exp\left\{-c_q \min(n t^2, (nt)^q)\right\}.$$ This lemma is stated in [@adamczak2011restricted], see Lemmas 3.6 and 3.7 -- and eq. (3.7) -- and is a consequence of the same result for symmetric Weibull random variables [@hitczenko1997moment]. # Fitting error of the sphere {#sec_app:error_identity} We show here eq. [\[eq:error_identity\]](#eq:error_identity){reference-type="eqref" reference="eq:error_identity"}. 
By Bernstein's inequality [@vershynin2018high], we have for all $\mu \in [n]$ and $u \geq 0$: $$\mathbb{P}\left[\left|\frac{\|x_\mu\|^2}{d} - 1\right|\geq u\right] \leq 2 \exp\left(-C d \min(u,u^2)\right).$$ As a consequence, if $X_\mu \coloneqq \sqrt{d}(\|x_\mu\|^2/d - 1)$, then $\|X_\mu\|_{\psi_1} \leq C$. Let $Y_\mu \coloneqq |X_\mu|^r - \mathbb{E}[|X_\mu|^r]$. By the central limit theorem, $X_\mu \overset{\rm d}{\to}\mcN(0, 2)$ as $d \to \infty$. One shows easily that $\sup_{d \geq 1} \mathbb{E}[|X_\mu|^{2+\varepsilon}] < \infty$ (e.g. for $\varepsilon= 2$), and thus (since $r \leq 2$) $|X_\mu|^r$ is uniformly integrable as $d \to \infty$. This implies (cf. Theorem 3.5 of [@billingsley2013convergence]) that $\mathbb{E}|X_\mu|^r \to \mathbb{E}|Z|^r$ with $Z \sim \mcN(0,2)$. Notice that $\mathbb{E}[|Z|^r] = 2^r \Gamma([r+1]/2)/\sqrt{\pi}$. Let $q \coloneqq 1/r \in [1/2,1]$. Since $\|X_\mu\|_{\psi_1} \leq C$, we have $\| |X_\mu|^r \|_{\psi_q} \leq C$, and thus $\|Y_\mu\|_{\psi_q} \leq C'$. We can then use Lemma [\[lemma:tail_sum_sub_Weibull\]](#lemma:tail_sum_sub_Weibull){reference-type="ref" reference="lemma:tail_sum_sub_Weibull"}, and we get: $$\mathbb{P}\left[\left|\frac{1}{n} \sum_{\mu=1}^n (|X_\mu|^r - \mathbb{E}[|X_\mu|^r])\right| \geq t\right] \leq 2 \exp\left\{-c_r \min(nt^2, (nt)^{1/r})\right\}.$$ Combining the above, for any $\varepsilon > 0$ we have with probability $1 - o_d(1)$: $$\mathbb{E}[|Z|^r] - \varepsilon\leq \frac{1}{n} \sum_{\mu=1}^n \left|\sqrt{d} \left[\frac{\|x_\mu\|^2}{d} - 1\right]\right|^r \leq \mathbb{E}[|Z|^r] + \varepsilon.$$ # Towards exact ellipsoid fitting {#sec_app:geometric_approx_exact} We show here the following proposition. [\[prop:bound_to_exact_conjecture\]]{#prop:bound_to_exact_conjecture label="prop:bound_to_exact_conjecture"} Let $n, d \to \infty$ with $n / d^2 \to \alpha \in (0, 1/2)$. Let $\{X_\mu\}_{\mu=1}^n$ be symmetric random matrices, and $H_{\mu \nu} \coloneqq {\rm Tr}[X_\mu X_\nu]$.
Assume that: - $(i)$ With probability $1 - o_d(1)$, $\|H^{-1}\|_\mathrm{op}\leq C n^{-1/2}$ for some $C = C(\alpha) > 0$. - $(ii)$ There exists $\lambda_- > 0$ (depending only on $\alpha$) such that $$\label{eq:necessary_bound} \mathop{\mathrm{p-lim}}_{n \to \infty} \min_{S \succeq \lambda_- \mathrm{I}_d} \frac{1}{\sqrt{n}}\sum_{\mu=1}^n |{\rm Tr}(X_\mu S) - 1 |^2 = 0.$$ Then, with probability $1 - o_d(1)$, there exists $S \succeq 0$ such that ${\rm Tr}[X_\mu S] = 1$ for all $\mu \in [n]$. Let us emphasize that given our current proof of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} (cf. Section [2.4](#subsec:proof_thm_main_positive_side){reference-type="ref" reference="subsec:proof_thm_main_positive_side"}), if the assumptions of Proposition [\[prop:bound_to_exact_conjecture\]](#prop:bound_to_exact_conjecture){reference-type="ref" reference="prop:bound_to_exact_conjecture"} hold for $X_\mu \sim \mathrm{Ellipse}(d)$ and $\alpha < 1/4$, then the first part of Conjecture [\[conj:ellipsoid_fitting\]](#conj:ellipsoid_fitting){reference-type="ref" reference="conj:ellipsoid_fitting"} would follow. However, the proof of Proposition [\[prop:bound_to_exact_conjecture\]](#prop:bound_to_exact_conjecture){reference-type="ref" reference="prop:bound_to_exact_conjecture"} is rather naive, as it crudely bounds the operator norm distance of the minimizer of eq. [\[eq:necessary_bound\]](#eq:necessary_bound){reference-type="eqref" reference="eq:necessary_bound"} to a subspace $V$ (the affine subspace of solutions to the linear constraints ${\rm Tr}[X_\mu S] = 1$) by its distance in Frobenius norm. For these reasons, the assumptions of Proposition [\[prop:bound_to_exact_conjecture\]](#prop:bound_to_exact_conjecture){reference-type="ref" reference="prop:bound_to_exact_conjecture"} may be far from optimal.
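The projection mechanism behind Proposition [\[prop:bound_to_exact_conjecture\]](#prop:bound_to_exact_conjecture){reference-type="ref" reference="prop:bound_to_exact_conjecture"} can be checked numerically at toy sizes: given an approximately feasible $\hS \succeq \lambda_- \mathrm{I}_d$, the Frobenius projection onto the affine space of exact solutions is an explicit correction $M$ with $\|M\|_\mathrm{F}^2 = v^\intercal H^{-1} v$ (where $v$ is the vector of residuals), and whenever $\|M\|_\mathrm{F} \leq \lambda_-$ the projected point stays positive semi-definite. The sketch below is only illustrative (toy dimensions far from the asymptotic regime, generic symmetric $X_\mu$, and a hypothetical right-hand side $b_\mu = {\rm Tr}[X_\mu S_0]$ built from a known strictly positive $S_0$ instead of $b_\mu = 1$, so that a well-conditioned instance exists by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 8, 10, 0.5  # toy sizes; the proposition concerns n ~ alpha * d^2, d -> infinity

# Symmetric constraint matrices X_mu, flattened so that Tr[X_mu S] = <vec(X_mu), vec(S)>.
X = np.array([(B + B.T) / 2 for B in rng.normal(size=(n, d, d))])
A = X.reshape(n, d * d)
H = A @ A.T  # Gram matrix H_{mu nu} = Tr[X_mu X_nu]

# Instance with a known solution S0 >= lam * I (hypothetical right-hand side b).
C = rng.normal(size=(d, d))
S0 = lam * np.eye(d) + C @ C.T / d
b = A @ S0.flatten()

# Approximately feasible point: S0 plus a small symmetric perturbation.
E = rng.normal(size=(d, d))
S_hat = S0 + 1e-3 * (E + E.T) / 2
v = A @ S_hat.flatten() - b  # residuals of the linear constraints

# Frobenius projection onto the solution space: vec(M) = -A^T H^{-1} v,
# so that ||M||_F^2 = v^T H^{-1} v, as in the elementary distance lemma below.
M = -(A.T @ np.linalg.solve(H, v)).reshape(d, d)
S_exact = S_hat + M

assert np.allclose(A @ S_exact.flatten(), b)  # constraints now hold exactly
assert np.isclose(np.linalg.norm(M) ** 2, v @ np.linalg.solve(H, v))
assert np.linalg.eigvalsh(S_exact).min() > 0  # the small correction kept S PSD
```

In the regime of the proposition the same computation goes through with $b_\mu = 1$, provided the residuals and $\|H^{-1}\|_\mathrm{op}$ are small enough that $v^\intercal H^{-1} v \leq \lambda_-^2$.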
**Remark I --** Note that the condition $(i)$ is clearly satisfied if $X_\mu \overset{\mathrm{i.i.d.}}{\sim}\mathrm{GOE}(d)$ and $\alpha \in (0, 1/2)$, since $H$ is then distributed as a Wishart matrix. On the other hand, while we expect it to hold as well for $X_\mu \overset{\mathrm{i.i.d.}}{\sim}\mathrm{Ellipse}(d)$, this condition is (to the best of our knowledge) not known to hold unless $\alpha$ is small enough: interestingly, this was one of the limitations in the recent works [@bandeira2023fitting; @tulsiani2023ellipsoid; @hsieh2023ellipsoid] that proved that ellipsoid fitting is feasible for $\alpha$ sufficiently small. **Remark II --** Note that it is sufficient for eq. [\[eq:necessary_bound\]](#eq:necessary_bound){reference-type="eqref" reference="eq:necessary_bound"} to hold that there exists $r \in [1,2]$ such that: $$\mathop{\mathrm{p-lim}}_{n \to \infty} \min_{S \succeq \lambda_- \mathrm{I}_d} \frac{1}{n^{r/4}}\sum_{\mu=1}^n |{\rm Tr}(X_\mu S) - 1 |^r = 0.$$ At the moment, our proof of Theorem [\[thm:main_positive_side\]](#thm:main_positive_side){reference-type="ref" reference="thm:main_positive_side"} (cf. Lemma [\[lemma:positive_side_strong_phi\]](#lemma:positive_side_strong_phi){reference-type="ref" reference="lemma:positive_side_strong_phi"}) only implies (for $r < 4/3$) a similar statement with a prefactor $1/n$ rather than the required $1/n^{r/4}$. It would thus need to be improved to show that eq. [\[eq:necessary_bound\]](#eq:necessary_bound){reference-type="eqref" reference="eq:necessary_bound"} holds for the ellipsoid fitting setting. **Proof of Proposition [\[prop:bound_to_exact_conjecture\]](#prop:bound_to_exact_conjecture){reference-type="ref" reference="prop:bound_to_exact_conjecture"} --** Let $V \coloneqq \{S \in \mcS_d \, : \, {\rm Tr}[X_\mu S] = 1, \, \forall \mu \in [n]\}$ be the affine space of solutions to the constraints.
Let $\varepsilon> 0$, and $\hS \succeq \lambda_- \mathrm{I}_d$ such that $$\frac{1}{\sqrt{n}}\sum_{\mu=1}^n |{\rm Tr}(X_\mu \hS) - 1 |^2 \leq \varepsilon.$$ Note that for all $M \in \mcS_d$, if $\|M\|_\mathrm{op}\leq \lambda_-$, then $\hS + M \succeq 0$. In particular, $\|M\|_\mathrm{F}\leq \lambda_- \Rightarrow \hS + M \succeq 0$. In order to conclude, it thus suffices to show that $d_\mathrm{F}(\hS, V) \leq \lambda_-$. The following lemma is an elementary geometrical result: [\[lemma:distance_subspace\]]{#lemma:distance_subspace label="lemma:distance_subspace"} Let $d \geq 1$ and $1 \leq r \leq d$ be two integers. Let $(a_k, b_k)_{k=1}^r \in (\mathbb{R}^d \times \mathbb{R})^r$, with $(a_k)_{k=1}^r$ linearly independent. We define $G \coloneqq \{x \in \mathbb{R}^d \, : \, a_k^\intercal x + b_k = 0, \ \forall k \in [r]\}$. Then, for any $y \in \mathbb{R}^d$: $$d_2(y, G)^2 = v^\intercal H^{-1} v,$$ in which $v_k \coloneqq a_k^\intercal y + b_k$, and $H_{k k'} = \langle a_k , a_{k'} \rangle$ is the Gram matrix of the $\{a_k\}$. We now condition on the event of condition $(i)$. For any $\varepsilon> 0$, applying Lemma [\[lemma:distance_subspace\]](#lemma:distance_subspace){reference-type="ref" reference="lemma:distance_subspace"} yields (with probability $1 - o_d(1)$): $$d_\mathrm{F}(\hS, V)^2 \leq C(\alpha) \varepsilon,$$ so that picking $\varepsilon\leq \lambda_-^2 / C(\alpha)$ implies the result. $\square$ # Additional proofs for Proposition [\[prop:universality_gs\]](#prop:universality_gs){reference-type="ref" reference="prop:universality_gs"} {#sec_app:technical_universality} ## Proof of Lemma [\[lemma:emp_proc_gaussian\]](#lemma:emp_proc_gaussian){reference-type="ref" reference="lemma:emp_proc_gaussian"} {#subsec_app:proof_lemma_emp_proc_gaussian} The lemma is a direct corollary of the following elementary result. [\[prop:bound_G\_process\]]{#prop:bound_G_process label="prop:bound_G_process"} Let $n, p \geq 1$.
Let $G \in \mathbb{R}^{n \times p}$ with $G_{ij} \overset{\mathrm{i.i.d.}}{\sim}\mcN(0,1)$. There is $A > 0$ such that for all $\delta > 0$ and $r \geq 1$: $$\mathbb{P}\Big[\max_{\|x\|_2 = 1} \|G x\|_r \geq \left[A(\sqrt{n} + \sqrt{p}) + \delta \sqrt{n}\right] n^{\max(1/r - 1/2,0)}\Big] \leq \exp\Big\{-\frac{n\delta^2}{2}\Big\}.$$ **Proof of Proposition [\[prop:bound_G\_process\]](#prop:bound_G_process){reference-type="ref" reference="prop:bound_G_process"} --** Notice that $G \mapsto \max_{\|x \|_2 = 1} \|G x \|_r$ is Lipschitz: $$\begin{aligned} \left|\max_{\|x\|_2 = 1} \| G_1 x \|_r - \max_{\|x\|_2 = 1} \| G_2 x \|_r\right| &\leq \max_{\|x\|_2 = 1} \| (G_1 - G_2) x \|_r, \\ &\overset{\mathrm{(a)}}{\leq}n^{\max(\frac{1}{r} - \frac{1}{2},0)} \max_{\|x\|_2 = 1} \| (G_1 - G_2) x \|_2, \\ &\leq n^{\max(\frac{1}{r} - \frac{1}{2},0)} \|G_1 - G_2 \|_\mathrm{F}, \end{aligned}$$ where we used Hölder's inequality in the form $\| y \|_r \leq n^{\frac{1}{r} - \frac{1}{2}} \|y \|_2$ for $r \leq 2$, and for $r \geq 2$ the fact that $\|y \|_r \leq \|y \|_2$, together with $\|G\|_\mathrm{op}\leq \|G\|_\mathrm{F}$. By Theorem [\[thm:gaussian_conc_lipschitz\]](#thm:gaussian_conc_lipschitz){reference-type="ref" reference="thm:gaussian_conc_lipschitz"} we reach: $$\mathbb{P}\Big[\max_{\|x\|_2 = 1} \| G x \|_r \geq \mathbb{E}\max_{\|x\|_2 = 1} \| G x \|_r + \delta n^{\max(1/r,1/2)}\Big] \leq \exp\Big\{- \frac{n \delta^2}{2}\Big\}.$$ The proof is complete if we can show that $\mathbb{E}\max_{\|x\|_2 = 1} \| G x \|_r = \mcO(n^{\max(1/r-1/2,0)}) \cdot (\sqrt{n} + \sqrt{p})$. This follows by the same inequality as above: $$\mathbb{E}\max_{\|x\|_2 = 1} \| G x \|_r \leq n^{\max(\frac{1}{r} - \frac{1}{2},0)} \mathbb{E}\max_{\|x\|_2 = 1} \| G x \|_2.$$ The bound $\mathbb{E}\max_{\|x\|_2 = 1} \| G x \|_2 = \mcO(\sqrt{n} + \sqrt{p})$ is well-known, see e.g. [@vershynin2018high].
$\square$ ## Proof of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"} {#subsec_app:proof_universality_matrix} As we have seen, it suffices to prove Lemmas [\[lemma:domination\]](#lemma:domination){reference-type="ref" reference="lemma:domination"} and [\[lemma:pointwise_limit\]](#lemma:pointwise_limit){reference-type="ref" reference="lemma:pointwise_limit"}. We introduce some additional notation. - For any $\mu \in [n]$, we denote $$\label{eq:def_Gibbs_mu} \langle \cdot \rangle_\mu \coloneqq \frac{\int P_0(\mathrm{d}S) \, e^{-\sum_{\nu(\neq \mu)} \phi[{\rm Tr}(U_\nu(t) S)]} \big(\cdot\big)}{\int P_0(\mathrm{d}S) \, e^{-\sum_{\nu(\neq \mu)} \phi[{\rm Tr}(U_\nu(t) S)]}}.$$ We do not write the $t$-dependence of this average explicitly, as it will be clear from context. - For any $\mu$, we denote by $\mathbb{E}_{(\mu)}$ the expectation conditioned on $\{G_\mu, W_\mu\}$, i.e. the expectation over $\{G_\nu, W_\nu\}_{\nu (\neq \mu)}$. Note that this notation is different from [@montanari2022universality], for which $\mathbb{E}_{(\mu)}$ was the expectation with respect to $(G_\mu, W_\mu)$. On the other hand, we denote without parentheses the expectation with respect to these variables: e.g. $\mathbb{E}_{(1)}$ is conditioned on $\{G_1, W_1\}$, but $\mathbb{E}_{G_1, W_1}$ is the expectation w.r.t. $G_1, W_1$. ### Proof of Lemma [\[lemma:domination\]](#lemma:domination){reference-type="ref" reference="lemma:domination"} {#subsubsec:proof_lemma_domination} We fix $t \in (0, \pi/2)$, and we start from eq. [\[eq:ee_derivative_psi\]](#eq:ee_derivative_psi){reference-type="eqref" reference="eq:ee_derivative_psi"}, which we can rewrite using eq.
[\[eq:def_Gibbs_mu\]](#eq:def_Gibbs_mu){reference-type="eqref" reference="eq:def_Gibbs_mu"} as: $$\mathbb{E}\, \frac{\partial \psi[F_d(U(t))]}{\partial t} = - \frac{n}{d^2} \mathbb{E}\Bigg[\psi'[F_d(U(t))] \frac{\Big \langle e^{-\phi({\rm Tr}[U_1(t) S])} \Big({\rm Tr}[S \tU_1(t)] \, \phi'({\rm Tr}[U_1(t) S])\Big) \Big \rangle_1}{\Big \langle e^{-\phi({\rm Tr}[U_1(t) S])} \Big \rangle_1}\Bigg].$$ Since $\|\psi'\|_\infty < \infty$ and $n/d^2 \leq \alpha_2$, using the triangle inequality, the proof of Lemma [\[lemma:domination\]](#lemma:domination){reference-type="ref" reference="lemma:domination"} is complete if one can show the following bound, which will also be useful afterwards: [\[lemma:boundedness_single_matrix\]]{#lemma:boundedness_single_matrix label="lemma:boundedness_single_matrix"} There exists a universal constant $C > 0$ such that: $$\sup_{d \geq 1} \sup_{t \in (0, \pi/2)} \sup_{\{W_\mu, G_\mu\}_{\mu=2}^n} \mathbb{E}_{W_1, G_1} \frac{\Big \langle e^{-\phi({\rm Tr}[U_1 S])} \Big|{\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S]) \Big|\Big \rangle_1}{\Big \langle e^{-\phi({\rm Tr}[U_1 S])} \Big \rangle_1} \leq C.$$ **Proof of Lemma [\[lemma:boundedness_single_matrix\]](#lemma:boundedness_single_matrix){reference-type="ref" reference="lemma:boundedness_single_matrix"} --** Note that $$\begin{aligned} \label{eq:domination_dpsi_dt} \nonumber \mathbb{E}_{W_1, G_1} \frac{\Big \langle e^{-\phi({\rm Tr}[U_1 S])} \Big| {\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big|\Big \rangle_1}{\Big \langle e^{-\phi({\rm Tr}[U_1 S])} \Big \rangle_1} &\leq \|\phi'\|_\infty \mathbb{E}_{W_1, G_1} \Bigg[\frac{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big|{\rm Tr}[S \tU_1(t)]\Big|\Big\rangle_1 }{\Big\langle e^{-\phi[{\rm Tr}(U_1 S)]} \Big \rangle_1}\Bigg], \\ &\leq e^{\|\phi\|_\infty} \|\phi'\|_\infty \mathbb{E}_{W_1, G_1} \Big[\Big\langle \Big|{\rm Tr}[S \tU_1(t)]\Big|\Big\rangle_1\Big],\end{aligned}$$ in which we used the positivity and boundedness of $\phi$ in the last inequality.
To control the last term in eq. [\[eq:domination_dpsi_dt\]](#eq:domination_dpsi_dt){reference-type="eqref" reference="eq:domination_dpsi_dt"} we write: $$\begin{aligned} \label{eq:bound_first_moment_1} \nonumber \mathbb{E}_{G_1, W_1} \Big[\Big\langle \Big|{\rm Tr}[S \tU_1(t)]\Big|\Big\rangle_1\Big] &\overset{\mathrm{(a)}}{=}\Big\langle \mathbb{E}_{G_1, W_1}\Big|{\rm Tr}[S \tU_1(t)]\Big| \Big\rangle_1, \\ \nonumber &\overset{\mathrm{(b)}}{\leq}\Big\langle \Big\{\mathbb{E}_{G_1, W_1}\Big({\rm Tr}[S \tU_1(t)]^2\Big) \Big\}^{1/2} \Big\rangle_1, \\ &\overset{\mathrm{(c)}}{\leq}\sqrt{2} \Big\langle \Big\{ \frac{1}{d}{\rm Tr}[S^2] \Big\}^{1/2} \Big\rangle_1, \\ &\overset{\mathrm{(d)}}{\leq}\sqrt{2} C_0,\end{aligned}$$ where $(\rm a)$ uses that $\langle \cdot \rangle_1$ is independent of $\{W_1,G_1\}$, $(\rm b)$ is from the Cauchy-Schwarz inequality, in $(\rm c)$ we use the hypothesis on the first two moments of $\rho$ matching the ones of $\mathrm{GOE}(d)$, and in $(\rm d)$ that $\mathop{\mathrm{supp}}(P_0) \subseteq B_2(C_0 \sqrt{d})$. $\square$ ### Proof of Lemma [\[lemma:pointwise_limit\]](#lemma:pointwise_limit){reference-type="ref" reference="lemma:pointwise_limit"} We fix $t \in (0, \pi/2)$ for the rest of the proof, and we write $U, \tU$ for $U(t), \tU(t)$. We follow the ideas of Appendix A.3 of [@montanari2022universality], and start again from eq.
[\[eq:ee_derivative_psi\]](#eq:ee_derivative_psi){reference-type="eqref" reference="eq:ee_derivative_psi"}: $$\begin{aligned} \label{eq:def_I1_I2} \nonumber &\Bigg|\mathbb{E}\, \frac{\partial \psi[F_d(U(t))]}{\partial t}\Bigg| \leq \alpha_2(I_1 + I_2), \\ &\begin{dcases} I_1 &= \Bigg|\mathbb{E}\Bigg[\left\{\psi'[F_d(U)] - \psi'[F_d(U^{(1)})]\right\} \frac{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu S])}}\Bigg] \Bigg|, \\ I_2 &= \Bigg|\mathbb{E}\Bigg[\psi'[F_d(U^{(1)})] \frac{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu S])}}\Bigg] \Bigg|, \end{dcases}\end{aligned}$$ with $U^{(\mu)}$ obtained from $U$ by setting $U_\mu = 0$. We show successively $I_1 \to 0$ and $I_2 \to 0$. Since $\psi'$ is assumed to be Lipschitz, we have: $$\begin{aligned} |\psi'[F_d(U)] - \psi'[F_d(U^{(1)})]| &\leq \frac{\|\psi'\|_\mathrm{Lip}}{d^2} \Bigg|\log \frac{\int P_0(\mathrm{d}S) e^{-\sum_{\nu=1}^n \phi({\rm Tr}[U_\nu S])}}{\int P_0(\mathrm{d}S) e^{-\sum_{\nu=2}^n \phi({\rm Tr}[U_\nu S])}} \Bigg|, \\ &\leq \frac{\|\psi'\|_\mathrm{Lip}}{d^2} \Big|\log \Big\langle e^{-\phi({\rm Tr}[U_1 S])}\Big\rangle_1 \Big|, \\ &\leq -\frac{\|\psi'\|_\mathrm{Lip}}{d^2} \log \Big\langle e^{-\phi({\rm Tr}[U_1 S])}\Big\rangle_1, \\ &\leq \frac{\|\psi'\|_\mathrm{Lip}\|\phi\|_\infty}{d^2}.\end{aligned}$$ Therefore, $$\label{eq:bound_I1} I_1 \leq \mcO_d\Bigg(\frac{1}{d^2}\Bigg) \times \mathbb{E}\Bigg|\frac{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\int P_0(\mathrm{d}S) \, e^{-\sum_\nu \phi({\rm Tr}[U_\nu S])}} \Bigg|.$$ The second term in eq. 
[\[eq:bound_I1\]](#eq:bound_I1){reference-type="eqref" reference="eq:bound_I1"} is bounded by Lemma [\[lemma:boundedness_single_matrix\]](#lemma:boundedness_single_matrix){reference-type="ref" reference="lemma:boundedness_single_matrix"}, uniformly in $t$. Therefore, we reach that $I_1 \to 0$ as $d \to \infty$. We now tackle $I_2$. Note that since $U^{(1)}$ is independent of $U_1$, we can rewrite it as: $$\begin{aligned} \label{eq:bound_I2_1} \nonumber I_2 &= \Bigg|\mathbb{E}_{(1)} \Bigg[\psi'[F_d(U^{(1)})] \mathbb{E}_{G_1, W_1}\Bigg[\frac{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)\Big\rangle_1}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg]\Bigg] \Bigg|, \\ &\leq \|\psi'\|_\infty \mathbb{E}_{(1)} \Bigg| \Bigg\langle\mathbb{E}_{G_1, W_1}\Bigg[\frac{e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg] \Bigg\rangle_1\Bigg|.\end{aligned}$$ We focus on bounding the right-hand side of eq. [\[eq:bound_I2_1\]](#eq:bound_I2_1){reference-type="eqref" reference="eq:bound_I2_1"}. We will show the following lemma: [\[lemma:bound_term_I2\]]{#lemma:bound_term_I2 label="lemma:bound_term_I2"} Uniformly over all $t \in [0, \pi / 2]$ and $\{W_\mu, G_\mu\}_{\mu=2}^n$, and under the hypotheses of Theorem [\[thm:universality_matrix\]](#thm:universality_matrix){reference-type="ref" reference="thm:universality_matrix"}: $$\label{eq:lemma_bound_term_I2} \lim_{d \to \infty} \Bigg\langle \mathbb{E}_{G_1, W_1}\Bigg[\frac{e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg] \Bigg\rangle_1 = 0.$$ One directly concludes that $I_2 \to 0$ by applying the dominated convergence theorem to eq.
[\[eq:bound_I2_1\]](#eq:bound_I2_1){reference-type="eqref" reference="eq:bound_I2_1"} (the pointwise limit is given by Lemma [\[lemma:bound_term_I2\]](#lemma:bound_term_I2){reference-type="ref" reference="lemma:bound_term_I2"} and the domination hypothesis by Lemma [\[lemma:boundedness_single_matrix\]](#lemma:boundedness_single_matrix){reference-type="ref" reference="lemma:boundedness_single_matrix"}). This ends the proof of Lemma [\[lemma:pointwise_limit\]](#lemma:pointwise_limit){reference-type="ref" reference="lemma:pointwise_limit"}. We thus focus on the proof of Lemma [\[lemma:bound_term_I2\]](#lemma:bound_term_I2){reference-type="ref" reference="lemma:bound_term_I2"}. Following [@montanari2022universality], the sketch of the proof is the following: - $(i)$ Show that the denominator appearing in eq. [\[eq:lemma_bound_term_I2\]](#eq:lemma_bound_term_I2){reference-type="eqref" reference="eq:lemma_bound_term_I2"} can be moved to the numerator by using the expansion of $1/x$ in power series around $1$. This transforms the quantity to control into a sum of terms of the type $\langle \mathbb{E}_{G_1, W_1} [f(S_1, \cdots, S_k)] \rangle_1$, with $S_1, \cdots, S_k$ independent samples under $\langle \cdot \rangle_1$. - $(ii)$ Extend the one-dimensional CLT of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"} to $k$-dimensional projections of $G$ and $W$ (with $k = \mcO_d(1)$), and to square-integrable locally-Lipschitz functions. This allows us to apply it to the terms appearing in $(i)$, and write (uniformly in $S_1, \cdots, S_k$) that $\mathbb{E}_{G_1, W_1} [f(S_1, \cdots, S_k)] \simeq \mathbb{E}_{G_1, \widetilde{G}_1} [f(S_1, \cdots, S_k)]$, with $\widetilde{G}_1$ an independent $\mathrm{GOE}(d)$ matrix. - $(iii)$ For the case of Gaussian matrices, as explained above we have $\tU_1$ independent of $U_1$. Using the form of the function $f$ that appears in eq.
[\[eq:lemma_bound_term_I2\]](#eq:lemma_bound_term_I2){reference-type="eqref" reference="eq:lemma_bound_term_I2"} this implies that $\mathbb{E}_{G_1, \widetilde{G}_1} [f(S_1, \cdots, S_k)] = 0$ and concludes the proof. Let us carry out this strategy in detail. The following lemma is proven in Section [8.3](#subsec_app:additional_proofs){reference-type="ref" reference="subsec_app:additional_proofs"}. [\[lemma:poly_approx\]]{#lemma:poly_approx label="lemma:poly_approx"} For all $\delta > 0$, there exists a real polynomial $Q$ (depending only on $\delta$) such that for all $d \geq 1$, all $t \in (0, \pi/2)$ and all $\{W_\mu, G_\mu\}_{\mu=2}^n$: $$\begin{aligned} &\Bigg| \Bigg\langle \mathbb{E}_{G_1, W_1}\Bigg[\frac{e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg] \Bigg\rangle_1 \Bigg| \\ &\leq \Big| \mathbb{E}_{G_1, W_1}\Big\{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big) \Big\rangle_1Q\Big(\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1\Big) \Big\} \Big| + \delta. \end{aligned}$$ We fix $\delta > 0$, and denote by $Q(X) = \sum_{k=0}^K a_k X^k$ the polynomial of Lemma [\[lemma:poly_approx\]](#lemma:poly_approx){reference-type="ref" reference="lemma:poly_approx"}. Therefore, uniformly in $d$, $t$, and $\{W_\mu, G_\mu\}$: $$\begin{aligned} \label{eq:poly_expansion_application} \nonumber &\Bigg| \Bigg\langle \mathbb{E}_{G_1, W_1}\Bigg[\frac{e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg] \Bigg\rangle_1 \Bigg| \\ &\leq \sum_{k=0}^K |a_k| \Big| \Big \langle \mathbb{E}_{G_1, W_1}\Big\{e^{-\sum_{a=0}^k\phi({\rm Tr}[U_1 S_a])} \Big({\rm Tr}[S_0 \tU_1] \, \phi'({\rm Tr}[U_1 S_0])\Big) \Big\} \Big\rangle_1 \Big| + \delta,\end{aligned}$$ with $\{S_a\}_{a=0}^k$ i.i.d. samples from $\langle \cdot \rangle_1$.
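Before stating the extension, the one-dimensional phenomenon itself is easy to observe numerically: with the normalization used here (first two moments matching $\mathrm{GOE}(d)$, so that $\mathbb{E}\,{\rm Tr}[WS]^2 = 2\,{\rm Tr}[S^2]/d$), the linear statistic ${\rm Tr}[WS]$ of a Wigner-type matrix is approximately $\mcN(0, 2\,{\rm Tr}[S^2]/d)$ for a fixed symmetric $S$. The following sketch uses Rademacher entries at a toy dimension (an illustrative assumption, not the ensemble of the theorem):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_samples = 40, 20000

# Fixed symmetric test direction S, normalized so that Tr[S^2]/d = 1.
S = rng.normal(size=(d, d))
S = (S + S.T) / 2
S *= np.sqrt(d / np.trace(S @ S))

def wigner_rademacher(rng, d):
    """Symmetric matrix with +-1/sqrt(d) off-diagonal and +-sqrt(2/d) diagonal
    entries: same first two moments as GOE(d) in this normalization."""
    signs = rng.choice([-1.0, 1.0], size=(d, d))
    W = np.triu(signs, 1) / np.sqrt(d)
    W = W + W.T
    np.fill_diagonal(W, np.sqrt(2.0 / d) * np.diag(signs))
    return W

# Tr[W S] = sum_ij W_ij S_ij for symmetric matrices; it should look like N(0, 2).
t = np.array([(wigner_rademacher(rng, d) * S).sum() for _ in range(n_samples)])
mean, var = t.mean(), t.var()
kurt = ((t - mean) ** 4).mean() / var**2  # close to 3 for a Gaussian
```

The empirical mean, variance, and kurtosis come out close to $0$, $2$, and $3$ respectively, as expected for $\mcN(0,2)$.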
We then extend the one-dimensional CLT of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"} to finite-dimensional projections, similarly to Lemmas 29 and 30 of [@montanari2022universality]. The proof of this lemma is deferred to Section [8.3](#subsec_app:additional_proofs){reference-type="ref" reference="subsec_app:additional_proofs"}. [\[lemma:finite_dim_clt\]]{#lemma:finite_dim_clt label="lemma:finite_dim_clt"} Let $R \in \mathbb{N}^\star$, and $\tG \sim \mathrm{GOE}(d)$, independent of everything else. Let $\varphi : \mathbb{R}^{2R} \to \mathbb{R}$ be a locally Lipschitz function such that for both $X \in \{W, \tG\}$: $$\label{eq:square_integrable_phi} \sup_{d \geq 1} \sup_{\{W_\mu, G_\mu\}_{\mu=2}^n} \sup_{t \in (0, \pi/2)} \Big\langle\mathbb{E}_{X, G}\Big[\varphi\Big(\{{\rm Tr}(X S_a)\}_{a=1}^R, \{{\rm Tr}(G S_a)\}_{a=1}^R\Big)^2\Big] \Big\rangle_1 < \infty.$$ It is understood there that $\{S_a\} \overset{\mathrm{i.i.d.}}{\sim}\langle \cdot \rangle_1$. Then $$\lim_{d \to \infty} \sup_{\{W_\mu, G_\mu\}_{\mu=2}^n} \sup_{t \in (0, \pi/2)} \Big\langle\Big| \mathbb{E}_{W,G} \, \varphi(\{{\rm Tr}(W S_a)\}, \{{\rm Tr}(G S_a)\}) - \mathbb{E}_{\tG,G} \, \varphi(\{{\rm Tr}(\tG S_a)\}, \{{\rm Tr}(G S_a)\})\Big| \Big \rangle_1 = 0.$$ We wish to apply Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"} to eq. [\[eq:poly_expansion_application\]](#eq:poly_expansion_application){reference-type="eqref" reference="eq:poly_expansion_application"}, i.e. to $$\varphi(\{{\rm Tr}(W_1 S_a)\}, \{{\rm Tr}(G_1 S_a)\}) \coloneqq e^{-\sum_{a=0}^k\phi({\rm Tr}[U_1 S_a])} \Big({\rm Tr}[S_0 \tU_1] \, \phi'({\rm Tr}[U_1 S_0])\Big).$$ $\varphi$ is locally Lipschitz by our hypotheses on $\phi$.
Moreover, note that for $X \in \{W, \tG\}$, and $S_a \in \mathop{\mathrm{supp}}(P_0)$ for all $a$ (using that $W$ has the same first two moments as $\tG$): $$\mathbb{E}_{X, G}\Big[\varphi\Big(\{{\rm Tr}(X S_a)\}, \{{\rm Tr}(G S_a)\}\Big)^2\Big] \leq 2 \|\phi'\|_\infty^2 {\rm Tr}[S_0^2]/d \leq 2 C_0^2 \|\phi'\|_\infty^2 < \infty.$$ This allows us to apply Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"} in eq. [\[eq:poly_expansion_application\]](#eq:poly_expansion_application){reference-type="eqref" reference="eq:poly_expansion_application"}, and to reach that, uniformly in $\{W_\mu, G_\mu\}_{\mu=2}^n$ and $t \in (0, \pi/2)$ we have: $$\begin{aligned} &\limsup_{d \to \infty} \Bigg| \Bigg\langle \mathbb{E}_{G_1, W_1}\Bigg[\frac{e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg] \Bigg\rangle_1 \Bigg| \\ & \leq \delta + \sum_{k=0}^K |a_k| \limsup_{d \to \infty} \Big\langle \Big| \mathbb{E}_{G_1,W_1} \, \varphi(\{{\rm Tr}(W_1 S_a)\}, \{{\rm Tr}(G_1 S_a)\})\Big| \Big \rangle_1, \\ & \leq \delta + \sum_{k=0}^K |a_k| \limsup_{d \to \infty} \Big\langle \Big| \mathbb{E}_{G_1,\tG_1} \, \varphi(\{{\rm Tr}(\tG_1 S_a)\}, \{{\rm Tr}(G_1 S_a)\})\Big| \Big \rangle_1 \\ &+ \sum_{k=0}^K |a_k| \limsup_{d \to \infty} \Big\langle \Big|\mathbb{E}_{G_1,W_1} \, \varphi(\{{\rm Tr}(W_1 S_a)\}, \{{\rm Tr}(G_1 S_a)\}) - \mathbb{E}_{G_1,\tG_1} \, \varphi(\{{\rm Tr}(\tG_1 S_a)\}, \{{\rm Tr}(G_1 S_a)\})\Big| \Big \rangle_1, \\ &\overset{\mathrm{(a)}}{\leq} \delta + \sum_{k=0}^K |a_k| \limsup_{d \to \infty} \Big| \Big \langle \mathbb{E}_{G_1, \tG_1}\Big\{e^{-\sum_{a=0}^k\phi({\rm Tr}[V_1 S_a])} \Big({\rm Tr}[S_0 \tV_1] \, \phi'({\rm Tr}[V_1 S_0])\Big) \Big\} \Big\rangle_1 \Big| ,\end{aligned}$$ where we used Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"} in $(\rm a)$, and with $V_1 = \cos(t)
\tG_1 + \sin(t) G_1$, and $\tV_1 = -\sin(t) \tG_1 + \cos(t) G_1$. Since $G_1, \tG_1$ are Gaussian, so are $V_1$ and $\tV_1$, and one verifies easily that they are independent since their covariance is zero. Therefore, we have: $$\begin{aligned} &\mathbb{E}_{G_1, \tG_1}\Big\{e^{-\sum_{a=0}^k\phi({\rm Tr}[V_1 S_a])} \Big({\rm Tr}[S_0 \tV_1] \, \phi'({\rm Tr}[V_1 S_0])\Big) \Big\} \\ &= \mathbb{E}_{V_1}\Big\{e^{-\sum_{a=0}^k\phi({\rm Tr}[V_1 S_a])} \Big(\underbrace{\Big[\mathbb{E}_{\tV_1}{\rm Tr}[S_0 \tV_1]\Big]}_{=0} \, \phi'({\rm Tr}[V_1 S_0])\Big) \Big\} = 0.\end{aligned}$$ Thus, we reach, for any $\delta > 0$: $$\limsup_{d \to \infty} \sup_{t \in (0, \pi/2)} \sup_{\{W_\mu, G_\mu\}_{\mu=2}^n} \Bigg| \Bigg\langle \mathbb{E}_{G_1, W_1}\Bigg[\frac{e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg] \Bigg\rangle_1 \Bigg| \leq \delta.$$ Letting $\delta \to 0$ finishes the proof of Lemma [\[lemma:bound_term_I2\]](#lemma:bound_term_I2){reference-type="ref" reference="lemma:bound_term_I2"}. $\qed$ ## Additional proofs {#subsec_app:additional_proofs} ### Proof of Lemma [\[lemma:poly_approx\]](#lemma:poly_approx){reference-type="ref" reference="lemma:poly_approx"} We use the power-series expansion of $x \mapsto 1/x$ around $x = 1$, defining, for $M \geq 1$: $$Q_M(x) \coloneqq \sum_{k=0}^M (1-x)^k, \hspace{1cm} R_M(x) \coloneqq \frac{1}{x} - Q_M(x).$$ We make use of Lemma 27 of [@montanari2022universality]: [\[lemma:27_montanari\]]{#lemma:27_montanari label="lemma:27_montanari"} For any integer $M \geq 1$ we have - For all $x \neq 0$, $R_M(x) = (1-x)^{M+1}/x$. - $x \mapsto R_M(x)^2$ is convex on $(0, 1]$. - For any $s \in (0,1)$ and $\eta > 0$, there exists $M \geq 1$ such that $\sup_{t \in [s, 1]} |R_M(t)| \leq \eta$.
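The three items of this lemma are elementary, and easy to verify numerically; a quick sanity check (with $s = 0.3$ standing in for the lower bound $e^{-\|\phi\|_\infty}$, an arbitrary illustrative value):

```python
import numpy as np

def Q(x, M):
    """Partial sums Q_M(x) of the geometric series for 1/x around x = 1."""
    return sum((1.0 - x) ** k for k in range(M + 1))

def R(x, M):
    """Remainder R_M(x) = 1/x - Q_M(x)."""
    return 1.0 / x - Q(x, M)

s = 0.3  # stands in for e^{-||phi||_inf}
xs = np.linspace(s, 1.0, 1001)

# Item (i): closed form of the remainder, R_M(x) = (1 - x)^(M+1) / x.
for M in (5, 20, 60):
    assert np.allclose(R(xs, M), (1.0 - xs) ** (M + 1) / xs)

# Item (ii): x -> R_M(x)^2 is convex on (0, 1]; discrete second differences >= 0.
assert np.diff(R(xs, 5) ** 2, 2).min() > -1e-12

# Item (iii): sup over [s, 1] of |R_M| shrinks as M grows.
sups = [np.abs(R(xs, M)).max() for M in (5, 20, 60)]
```

For $M = 60$ the supremum of $|R_M|$ over $[0.3, 1]$ is already below $10^{-8}$, matching the geometric decay $(1-s)^{M+1}/s$ of item $(i)$.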
Since $e^{-\|\phi\|_\infty} \leq e^{-\phi} \leq 1$, we have that for all $\eta > 0$ there exists $M_\eta \geq 1$ such that for all $M \geq M_\eta$, all matrices $\{W_\mu, G_\mu\}_{\mu=1}^n$, all $t \in (0, \pi/2)$: $$\label{eq:bound_RM} \Big| R_M\Big(\langle e^{-\phi({\rm Tr}[S U_1])} \rangle_1\Big) \Big| \leq \eta.$$ Thus, since $1/x = Q_M(x) + R_M(x)$: $$\begin{aligned} &\Bigg| \Bigg\langle \mathbb{E}_{G_1, W_1}\Bigg[\frac{e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big)}{\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1} \Bigg] \Bigg\rangle_1 \Bigg| \\ &\leq \Big| \Big\langle \mathbb{E}_{G_1, W_1}\Big[e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big) Q_M\Big(\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1\Big) \Big] \Big\rangle_1 \Big| \\ &\qquad + \Big| \Big\langle \mathbb{E}_{G_1, W_1}\Big[e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big) R_M\Big(\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1\Big) \Big] \Big\rangle_1 \Big|, \\ &\overset{\mathrm{(a)}}{\leq}\Big| \Big\langle \mathbb{E}_{G_1, W_1}\Big[e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big) Q_M\Big(\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1\Big) \Big] \Big\rangle_1 \Big| \\ &\qquad+ \eta \Big\langle \mathbb{E}_{G_1, W_1}\Big[e^{-\phi({\rm Tr}[U_1 S])} \Big| \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big) \Big|\Big] \Big\rangle_1, \\ &\leq \Big| \Big\langle \mathbb{E}_{G_1, W_1}\Big[e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big) Q_M\Big(\Big\langle e^{-\phi({\rm Tr}[U_1 S])} \Big\rangle_1\Big) \Big] \Big\rangle_1 \Big| + \eta \|\phi'\|_\infty \langle \mathbb{E}_{G_1, W_1} |{\rm Tr}[S \tU_1]| \rangle_1,\\ &\overset{\mathrm{(b)}}{\leq}\Big| \Big\langle \mathbb{E}_{G_1, W_1}\Big[e^{-\phi({\rm Tr}[U_1 S])} \Big({\rm Tr}[S \tU_1] \, \phi'({\rm Tr}[U_1 S])\Big) Q_M\Big(\Big\langle e^{-\phi({\rm Tr}[U_1 S])} 
\Big\rangle_1\Big) \Big] \Big\rangle_1 \Big| + C \eta \|\phi'\|_\infty,\end{aligned}$$ using eq. [\[eq:bound_RM\]](#eq:bound_RM){reference-type="eqref" reference="eq:bound_RM"} in $(\rm a)$, and the Cauchy-Schwarz inequality (cf. the proof of Lemma [\[lemma:boundedness_single_matrix\]](#lemma:boundedness_single_matrix){reference-type="ref" reference="lemma:boundedness_single_matrix"}) in $(\rm b)$. We emphasize that this bound is uniform in $t$ and $\{W_\mu, G_\mu\}$. Choosing $\eta = \delta / (C \|\phi'\|_\infty)$ ends the proof of Lemma [\[lemma:poly_approx\]](#lemma:poly_approx){reference-type="ref" reference="lemma:poly_approx"}.

## Proof of Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"} {#subsec_app:proof_finite_dim_clt}

We first prove that the conclusion of Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"} holds uniformly over all $S_1, \cdots, S_R \in A_d$ when the function $\varphi$ is assumed to be Lipschitz with compact support: [\[lemma:finite_dim_clt_compact_support\]]{#lemma:finite_dim_clt_compact_support label="lemma:finite_dim_clt_compact_support"} Let $R \in \mathbb{N}^\star$, and $\tG \sim \mathrm{GOE}(d)$, independent of everything else. Let $\varphi : \mathbb{R}^{2R} \to \mathbb{R}$ be Lipschitz and compactly supported. Recall that $W \sim \rho$, a measure which is assumed to satisfy a one-dimensional CLT with respect to a set $A_d \subseteq S_d$, see Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"}. We assume that $A_d$ is convex and symmetric.
Then: $$\lim_{d \to \infty} \sup_{S_1, \cdots, S_R \in A_d} \Big| \mathbb{E}_{W,G} \, \varphi\Big(\{{\rm Tr}(W S_a)\}, \{{\rm Tr}(G S_a)\}\Big) - \mathbb{E}_{\tG,G} \, \varphi\Big(\{{\rm Tr}(\tG S_a)\}, \{{\rm Tr}(G S_a)\}\Big)\Big| = 0.$$ **Proof of Lemma [\[lemma:finite_dim_clt_compact_support\]](#lemma:finite_dim_clt_compact_support){reference-type="ref" reference="lemma:finite_dim_clt_compact_support"} --** Recall that we use the matrix flattening function of eq. [\[eq:def_flattening\]](#eq:def_flattening){reference-type="eqref" reference="eq:def_flattening"}. Let us denote: $$\label{eq:notations_H_h_v} \begin{dcases} H &\coloneqq \begin{pmatrix} \mathop{\mathrm{vec}}(S_1) & 0 & \mathop{\mathrm{vec}}(S_2) & 0 & \cdots & \mathop{\mathrm{vec}}(S_R) & 0 \\ 0 & \mathop{\mathrm{vec}}(S_1) & 0 & \mathop{\mathrm{vec}}(S_2) & \cdots & 0 & \mathop{\mathrm{vec}}(S_R) \end{pmatrix} \in \mathbb{R}^{d(d+1) \times 2 R}, \\ v &\coloneqq (\mathop{\mathrm{vec}}(W)^\intercal, \mathop{\mathrm{vec}}(G)^\intercal)^\intercal\in \mathbb{R}^{d(d+1)}, \\ h &\coloneqq (\mathop{\mathrm{vec}}(\tG)^\intercal, \mathop{\mathrm{vec}}(G)^\intercal)^\intercal\in \mathbb{R}^{d(d+1)}. \end{dcases}$$ Using these notations, we have: $$\label{eq:notations_H_h_v_2} \begin{dcases} \Big(\{{\rm Tr}(W S_a)\}_{a=1}^R, \{{\rm Tr}(G S_a)\}_{a=1}^R\Big) &= H^\intercal v \in \mathbb{R}^{2R}, \\ \Big(\{{\rm Tr}(\tG S_a)\}_{a=1}^R, \{{\rm Tr}(G S_a)\}_{a=1}^R\Big) &= H^\intercal h \in \mathbb{R}^{2R}. \end{dcases}$$ We add a small Gaussian noise to help us deal with characteristic functions later on. Let $\delta > 0$, and $Z \sim \mcN(0, \delta^2 \mathrm{I}_{2R})$.
For all $S_1, \cdots, S_R$ we have: $$\begin{aligned} \label{eq:adding_small_gauss_noise} \nonumber |\mathbb{E}\, \varphi(H^\intercal v) - \mathbb{E}\, \varphi(H^\intercal h)| &\leq |\mathbb{E}\, \varphi(H^\intercal v) - \mathbb{E}\, \varphi(H^\intercal v + Z)| + |\mathbb{E}\, \varphi(H^\intercal h) - \mathbb{E}\, \varphi(H^\intercal h + Z)| \\ \nonumber & \hspace{5.15cm}+ |\mathbb{E}\, \varphi(H^\intercal v + Z) - \mathbb{E}\, \varphi(H^\intercal h + Z)|, \\ \nonumber &\leq 2 \|\varphi\|_L \mathbb{E}\|Z\|_2 + |\mathbb{E}\, \varphi(H^\intercal v + Z) - \mathbb{E}\, \varphi(H^\intercal h + Z)|, \\ &\leq C R^{1/2} \|\varphi\|_L \delta + |\mathbb{E}\, \varphi(H^\intercal v + Z) - \mathbb{E}\, \varphi(H^\intercal h + Z)|.\end{aligned}$$ We now control the last term of eq. [\[eq:adding_small_gauss_noise\]](#eq:adding_small_gauss_noise){reference-type="eqref" reference="eq:adding_small_gauss_noise"}. For $X \in \mathbb{R}^{2R}$ a random variable, we define its characteristic function as $\phi_X(u) \coloneqq \mathbb{E}\, e^{-i u^\intercal X}$. We then have $$\begin{aligned} \frac{1}{(2\pi)^{2R}}\int_{\mathbb{R}^{2R}} \varphi(t) \int_{\mathbb{R}^{2R}} e^{i t^\intercal u - \frac{\delta^2}{2} \|u\|^2} \phi_X(u) \mathrm{d}u \, \mathrm{d}t &= \frac{1}{(2\pi \delta^2)^{R}} \mathbb{E}_X \int_{\mathbb{R}^{2R}} \varphi(t) e^{-\frac{\|t-X\|^2}{2 \delta^2}} \, \mathrm{d}t, \\ &= \mathbb{E}\, \varphi(X + Z).\end{aligned}$$ Coming back to eq.
[\[eq:adding_small_gauss_noise\]](#eq:adding_small_gauss_noise){reference-type="eqref" reference="eq:adding_small_gauss_noise"} we get: $$\begin{aligned} \label{eq:bound_with_gauss_noise} \nonumber |\mathbb{E}\, \varphi(H^\intercal v + Z) - \mathbb{E}\, \varphi(H^\intercal h + Z)| &\leq \frac{1}{(2\pi)^{2R}}\int_{\mathbb{R}^{2R}} |\varphi(t)| \Bigg|\int_{\mathbb{R}^{2R}} e^{i t^\intercal u - \frac{\delta^2}{2} \|u\|^2} [\phi_{H^\intercal v}(u) - \phi_{H^\intercal h}(u)] \mathrm{d}u \, \Bigg| \, \mathrm{d}t, \\ &\leq \frac{\|\varphi\|_{L_1}}{(2\pi)^{2R}}\int_{\mathbb{R}^{2R}} e^{- \frac{\delta^2}{2} \|u\|^2} |\phi_{H^\intercal v}(u) - \phi_{H^\intercal h}(u)| \mathrm{d}u.\end{aligned}$$ Here $\|\varphi\|_{L_1} \coloneqq \int |\varphi(t)| \mathrm{d}t$. We will show that for any $u \in \mathbb{R}^{2R}$: $$\label{eq:pointwise_cv_charac} \lim_{d \to \infty} \sup_{S_1, \cdots, S_R \in A_d} |\phi_{H^\intercal v}(u) - \phi_{H^\intercal h}(u)| = 0.$$ Combining eq. [\[eq:pointwise_cv_charac\]](#eq:pointwise_cv_charac){reference-type="eqref" reference="eq:pointwise_cv_charac"} with the dominated convergence theorem applied in eq. [\[eq:bound_with_gauss_noise\]](#eq:bound_with_gauss_noise){reference-type="eqref" reference="eq:bound_with_gauss_noise"}, we get: $$\lim_{d \to \infty} \sup_{S_1, \cdots, S_R \in A_d}|\mathbb{E}\, \varphi(H^\intercal v + Z) - \mathbb{E}\, \varphi(H^\intercal h + Z)| = 0.$$ Plugging this back into eq. [\[eq:adding_small_gauss_noise\]](#eq:adding_small_gauss_noise){reference-type="eqref" reference="eq:adding_small_gauss_noise"}, we get that for any $\delta > 0$: $$\limsup_{d \to \infty} \sup_{S_1, \cdots, S_R \in A_d}|\mathbb{E}\, \varphi(H^\intercal v) - \mathbb{E}\, \varphi(H^\intercal h)| \leq C R^{1/2} \|\varphi\|_L \delta.$$ Letting $\delta \to 0$ ends the proof of Lemma [\[lemma:finite_dim_clt_compact_support\]](#lemma:finite_dim_clt_compact_support){reference-type="ref" reference="lemma:finite_dim_clt_compact_support"}. 
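The Gaussian-smoothing identity driving this proof can be sanity-checked numerically in one dimension. The sketch below is illustrative only: it takes a toy two-point law for $X$ (uniform on $\{-1,+1\}$, so $\phi_X(u) = \cos u$), the triangular hat for $\varphi$, $\delta = 1/2$, and ad-hoc quadrature grids of our choosing, and checks that the double Fourier integral reproduces $\mathbb{E}\, \varphi(X+Z)$.

```python
import numpy as np

# Toy 1-D check of the smoothing identity:
#   E phi(X+Z) = (2 pi)^{-n} ∫ phi(t) ∫ e^{i t.u - delta^2 |u|^2 / 2} phi_X(u) du dt,
# with n = 1, X uniform on {-1, +1}, Z ~ N(0, delta^2), phi a triangular hat.
delta = 0.5
phi = lambda t: np.maximum(0.0, 1.0 - np.abs(t))

u = np.linspace(-20.0, 20.0, 4001)   # e^{-delta^2 u^2 / 2} is negligible beyond
t = np.linspace(-1.0, 1.0, 801)      # support of phi
du, dt = u[1] - u[0], t[1] - t[0]

# Left-hand side via the double Fourier integral (phi_X(u) = cos u).
inner = np.array([(np.exp(1j * ti * u - 0.5 * delta**2 * u**2) * np.cos(u)).sum().real * du
                  for ti in t])
lhs = (phi(t) * inner).sum() * dt / (2.0 * np.pi)

# Right-hand side E phi(X + Z) by direct Gaussian quadrature.
z = np.linspace(-6.0 * delta, 6.0 * delta, 4001)
dz = z[1] - z[0]
gauss = np.exp(-z**2 / (2.0 * delta**2)) / np.sqrt(2.0 * np.pi * delta**2)
rhs = 0.5 * ((phi(1.0 + z) + phi(-1.0 + z)) * gauss).sum() * dz

assert abs(lhs - rhs) < 1e-3
```

The point is only that the factor $e^{-\delta^2 \|u\|^2/2}$ makes the $u$-integral absolutely convergent, which is what licenses the dominated-convergence step in the argument above.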
It remains to prove eq. [\[eq:pointwise_cv_charac\]](#eq:pointwise_cv_charac){reference-type="eqref" reference="eq:pointwise_cv_charac"}. Let $u = (u^{(1)}, u^{(2)}) \in \mathbb{R}^{2R}$, with $u^{(1)}, u^{(2)} \in \mathbb{R}^R$, and let us fix $S_1, \cdots, S_R \in A_d$. We have $$\begin{aligned} \label{eq:diff_charac_1} \nonumber &|\phi_{H^\intercal v}(u) - \phi_{H^\intercal h}(u)| \\ \nonumber &= \Big|\mathbb{E}\, e^{-i \sum_{a=1}^R u^{(1)}_a {\rm Tr}[W S_a] -i \sum_{a=1}^R u^{(2)}_a {\rm Tr}[G S_a]} - \mathbb{E}\, e^{-i \sum_{a=1}^R u^{(1)}_a {\rm Tr}[\tG S_a] -i \sum_{a=1}^R u^{(2)}_a {\rm Tr}[G S_a]}\Big|, \\ &\overset{\mathrm{(a)}}{=}\Big|\mathbb{E}\, \exp\Big\{-i \, {\rm Tr}\Big[W \sum_{a=1}^R u^{(1)}_a S_a \Big]\Big\} - \mathbb{E}\, \exp\Big\{-i \, {\rm Tr}\Big[\tG \sum_{a=1}^R u^{(1)}_a S_a \Big]\Big\}\Big|,\end{aligned}$$ using in $(\rm a)$ that $G$ is independent of $W, \tG$. We can assume that $u^{(1)} \neq 0$, otherwise the result of eq. [\[eq:pointwise_cv_charac\]](#eq:pointwise_cv_charac){reference-type="eqref" reference="eq:pointwise_cv_charac"} is clear. Since $A_d$ is symmetric and convex we have $$\hS \coloneqq \frac{1}{\|u^{(1)}\|_1}\sum_{a=1}^R u^{(1)}_a S_a \in A_d.$$ Therefore, letting $\varphi_u(x) \coloneqq e^{-i \|u^{(1)}\|_1 x}$, we have by eq. [\[eq:diff_charac_1\]](#eq:diff_charac_1){reference-type="eqref" reference="eq:diff_charac_1"}: $$\label{eq:diff_charac_2} |\phi_{H^\intercal v}(u) - \phi_{H^\intercal h}(u)| = |\mathbb{E}\, \varphi_u({\rm Tr}[\hS W]) - \mathbb{E}\, \varphi_u({\rm Tr}[\hS \tG])|.$$ Moreover, $$|\varphi_u(x) - \varphi_u(y)| = \|u^{(1)}\|_1 \Bigg|\int_0^{y-x} e^{i \|u^{(1)}\|_1 t} \mathrm{d}t\Bigg| \leq \|u^{(1)}\|_1 |y-x|,$$ so that $\|\varphi_u\|_L \leq \|u^{(1)}\|_1$. We can therefore apply the one-dimensional CLT of Definition [\[def:one_dimensional_CLT\]](#def:one_dimensional_CLT){reference-type="ref" reference="def:one_dimensional_CLT"} in eq.
[\[eq:diff_charac_2\]](#eq:diff_charac_2){reference-type="eqref" reference="eq:diff_charac_2"}, and we get that $$\sup_{S_1, \cdots, S_R \in A_d}|\phi_{H^\intercal v}(u) - \phi_{H^\intercal h}(u)| \leq \sup_{S \in A_d} |\mathbb{E}\, \varphi_u({\rm Tr}[S W]) - \mathbb{E}\, \varphi_u({\rm Tr}[S \tG])| \to_{d\to\infty} 0.$$ This ends the proof of eq. [\[eq:pointwise_cv_charac\]](#eq:pointwise_cv_charac){reference-type="eqref" reference="eq:pointwise_cv_charac"}. $\square$ We then deduce the full statement of Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"} by a truncation argument. **End of the proof of Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"} --** $\varphi$ is now only assumed to be locally Lipschitz and square integrable, in the sense of eq. [\[eq:square_integrable_phi\]](#eq:square_integrable_phi){reference-type="eqref" reference="eq:square_integrable_phi"}. We take the same notations as in the proof of Lemma [\[lemma:finite_dim_clt_compact_support\]](#lemma:finite_dim_clt_compact_support){reference-type="ref" reference="lemma:finite_dim_clt_compact_support"}, see in particular eq. [\[eq:notations_H\_h_v\]](#eq:notations_H_h_v){reference-type="eqref" reference="eq:notations_H_h_v"}, so that $$\mathbb{E}_{W,G} \, \varphi\Big(\{{\rm Tr}(W S_a)\}, \{{\rm Tr}(G S_a)\}\Big) - \mathbb{E}_{\tG,G} \, \varphi\Big(\{{\rm Tr}(\tG S_a)\}, \{{\rm Tr}(G S_a)\}\Big) = \mathbb{E}\, \varphi(H^\intercal v) - \mathbb{E}\, \varphi(H^\intercal h).$$ Let $B > 0$, and let $u_B : \mathbb{R}_+ \to [0,1]$ be a $\mcC^\infty$ function such that $u_B(x) = 1$ if $x \leq B$ and $u_B(x) = 0$ if $x \geq B+1$. We denote $\varphi_B(z) \coloneqq \varphi(z) u_B(\|z\|)$. One can then easily check that $\varphi_B$ is Lipschitz (because $\varphi$ is locally Lipschitz), and compactly supported.
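The truncation $\varphi_B$ is easy to realize concretely. The sketch below is illustrative only: it uses an ad-hoc smooth bump for $u_B$, $B = 2$, and the toy one-dimensional choice $\varphi(z) = z^2$ (locally Lipschitz but not globally so); the resulting $\varphi_B$ is compactly supported with bounded difference quotients, as claimed.

```python
import math

B = 2.0

def h(t):                      # smooth on R, vanishes for t <= 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def u_B(x):                    # C^inf cutoff: equals 1 for x <= B, 0 for x >= B+1
    a, b = h(B + 1.0 - x), h(x - B)
    return a / (a + b)

phi = lambda z: z * z          # locally Lipschitz, square-integrable test function
phi_B = lambda z: phi(z) * u_B(abs(z))

assert u_B(1.5) == 1.0 and u_B(3.5) == 0.0
assert phi_B(1.0) == phi(1.0) and phi_B(5.0) == 0.0   # compact support

# phi_B is globally Lipschitz: its difference quotients stay bounded on a
# fine grid, even though phi itself has unbounded slope.
xs = [-5.0 + i * 0.001 for i in range(10001)]
quots = [abs(phi_B(xs[i + 1]) - phi_B(xs[i])) / 0.001 for i in range(10000)]
assert max(quots) < 20.0
```

Any $\mcC^\infty$ cutoff works here; only smoothness and the values $u_B = 1$ below $B$ and $u_B = 0$ above $B + 1$ are used in the argument.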
Moreover, we have: $$\begin{aligned} \label{eq:loc_lipschitz_1} \nonumber &\langle |\mathbb{E}\, \varphi(H^\intercal v) - \mathbb{E}\, \varphi(H^\intercal h)| \rangle_1 \\ &\leq \langle |\mathbb{E}\, \varphi_B(H^\intercal v) - \mathbb{E}\, \varphi_B(H^\intercal h)| \rangle_1 + \sum_{z \in \{h, v\}} \langle \mathbb{E}\, |\varphi(H^\intercal z)| (1 - u_B(\|H^\intercal z\|)) \rangle_1. \end{aligned}$$ We now control the different terms in eq. [\[eq:loc_lipschitz_1\]](#eq:loc_lipschitz_1){reference-type="eqref" reference="eq:loc_lipschitz_1"} successively. Notice that $$\langle |\mathbb{E}\, \varphi_B(H^\intercal v) - \mathbb{E}\, \varphi_B(H^\intercal h)| \rangle_1 \leq \sup_{S_1, \cdots, S_R \in A_d} |\mathbb{E}\, \varphi_B(H^\intercal v) - \mathbb{E}\, \varphi_B(H^\intercal h)|,$$ so that by Lemma [\[lemma:finite_dim_clt_compact_support\]](#lemma:finite_dim_clt_compact_support){reference-type="ref" reference="lemma:finite_dim_clt_compact_support"}: $$\label{eq:loc_lipschitz_2} \lim_{d \to \infty} \sup_{\{W_\mu, G_\mu\}_{\mu=2}^n} \sup_{t \in (0, \pi/2)} \langle |\mathbb{E}\, \varphi_B(H^\intercal v) - \mathbb{E}\, \varphi_B(H^\intercal h)| \rangle_1 = 0.$$ We now tackle the remaining terms in eq. [\[eq:loc_lipschitz_1\]](#eq:loc_lipschitz_1){reference-type="eqref" reference="eq:loc_lipschitz_1"}. Let $z \in \{h, v\}$. Using the Cauchy-Schwarz inequality twice we get: $$\begin{aligned} \label{eq:loc_lipschitz_3_1} \nonumber \langle \mathbb{E}\, |\varphi(H^\intercal z)| (1 - u_B(\|H^\intercal z\|)) \rangle_1 &\leq \langle \mathbb{E}\, |\varphi(H^\intercal z)| \mathds{1}\{\|H^\intercal z\|_2 \geq B\} \rangle_1, \\ \nonumber &\leq \langle (\mathbb{E}_z [\varphi(H^\intercal z)^2])^{1/2} \mathbb{P}_z\{\|H^\intercal z\|_2 \geq B\}^{1/2} \rangle_1 , \\ &\leq \langle \mathbb{E}_z [\varphi(H^\intercal z)^2]\rangle_1^{1/2} \, \cdot \, \langle \mathbb{P}_z\{\|H^\intercal z\|_2 \geq B\} \rangle_1^{1/2}. \end{aligned}$$ The first term in eq.
[\[eq:loc_lipschitz_3\_1\]](#eq:loc_lipschitz_3_1){reference-type="eqref" reference="eq:loc_lipschitz_3_1"} is bounded by the square integrability assumption, cf. eq. [\[eq:square_integrable_phi\]](#eq:square_integrable_phi){reference-type="eqref" reference="eq:square_integrable_phi"}. To bound the second term, we use Markov's inequality: $$\langle \mathbb{P}_z\{\|H^\intercal z\|_2 \geq B\} \rangle_1 \leq \frac{1}{B^2} \langle \mathbb{E}_z\, \|H^\intercal z\|_2^2 \rangle_1.$$ From eq. [\[eq:notations_H\_h_v\_2\]](#eq:notations_H_h_v_2){reference-type="eqref" reference="eq:notations_H_h_v_2"} and the matching of the first two moments of $\rho$ with $\mathrm{GOE}(d)$, we have: $$\mathbb{E}_z\, \|H^\intercal z\|_2^2 = \frac{4}{d} \sum_{a=1}^R {\rm Tr}[S_a^2].$$ Thus, we get: $$\langle \mathbb{P}_z\{\|H^\intercal z\|_2 \geq B\} \rangle_1 \leq \frac{4R}{B^2} \Bigg\langle \frac{{\rm Tr}S^2}{d} \Bigg\rangle_1 \leq \frac{4RC_0}{B^2}.$$ All in all, we get: $$\label{eq:loc_lipschitz_3} \sup_{d \geq 1} \sup_{\{W_\mu, G_\mu\}_{\mu=2}^n} \sup_{t \in (0, \pi/2)} \langle \mathbb{E}\, |\varphi(H^\intercal z)| (1 - u_B(\|H^\intercal z\|)) \rangle_1 \leq \frac{C(R, \varphi)}{B}.$$ Combining eqs. [\[eq:loc_lipschitz_2\]](#eq:loc_lipschitz_2){reference-type="eqref" reference="eq:loc_lipschitz_2"} and eq. [\[eq:loc_lipschitz_3\]](#eq:loc_lipschitz_3){reference-type="eqref" reference="eq:loc_lipschitz_3"} into eq. [\[eq:loc_lipschitz_1\]](#eq:loc_lipschitz_1){reference-type="eqref" reference="eq:loc_lipschitz_1"}, and taking $B \to \infty$ after $d \to \infty$, we conclude the proof of Lemma [\[lemma:finite_dim_clt\]](#lemma:finite_dim_clt){reference-type="ref" reference="lemma:finite_dim_clt"}. $\square$ [^1]: $\star$ Department of Mathematics, ETH Zürich, Switzerland.\ $\diamond$ To whom correspondence shall be sent: <antoine.maillard@math.ethz.ch>. [^2]: Given $(x_\mu)_{\mu=1}^n$, eq. 
[\[eq:def_P\]](#eq:def_P){reference-type="eqref" reference="eq:def_P"} is a convex problem (it is an example of a semidefinite program, or SDP), and thus efficiently solvable when solutions exist. [^3]: i.e. a combination of linear equations with a positivity constraint $S \succeq 0$. [^4]: Furthermore, by rescaling $S$ (and up to a change in $\lambda_-, \lambda_+, \phi$) we will reduce to the case $b \in \{-1,0,1\}$. [^5]: Since $g \overset{\rm d}{=}-g$, $\min_{x \in K \cap \mathbb{S}^{p-1}}[g^\intercal x] \overset{\rm d}{=}- \max_{x \in K \cap \mathbb{S}^{p-1}}[g^\intercal x]$. [^6]: Recall that $\delta < \omega(K)$ but $\omega(K) \to \infty$ as $n \to \infty$ by hypothesis. [^7]: Notice that we replaced $\|S\|_F^2 = 2d$ by $\|S\|_F^2 \leq 2d$ since w.h.p. one can always find $S \in K_\kappa$ such that ${\rm Tr}[S Z] > 0$.
--- abstract: | We obtain a criterion for when the specialization of the iterated Galois group for a post-critically finite (PCF) rational map is as large as possible, i.e., it equals the generic iterated Galois group for the given map. address: - | Department of Mathematics and Statistics\ Amherst College\ P. O. Box 5000\ MA 01002-5000\ USA - | Department of Mathematics\ University of British Columbia\ 1984 Mathematics Road\ BC V6T 1Z2\ Canada - | Department of Mathematics\ 1874 Campus Delivery\ Fort Collins\ CO 80523-1874\ USA - | Department of Mathematics\ Hylan Building\ University of Rochester\ Rochester, NY 14627\ USA author: - Robert L. Benedetto - Dragos Ghioca - Jamie Juul - Thomas J. Tucker title: Specializations of Iterated Galois Groups of PCF Rational Functions --- # Introduction ## Notation and setting {#subsec:tec} We fix the following notation throughout this paper. - a field $k$; - $f\in k(x)$ a post-critically finite rational function defined over $k$ and of degree $d\geq 2$; - $K_n= k(f^{-n}(t))$ for each $n\ge 0$, where $t$ is transcendental over $k$; - $K_\infty = \bigcup_{n=0}^\infty K_n$; - $k_n=\overline{k}\cap K_n$ for each $n\ge 0$, and $k_\infty = \overline{k}\cap K_\infty$; - $G_n = \mathop{\mathrm{Gal}}(K_n/k(t))$ for each $n\ge 0$; - $G_\infty=\varprojlim G_{n}\cong \mathop{\mathrm{Gal}}(K_\infty/k(t))$ is the limit of the $G_n$; - $\alpha \in \mathbb{P}^1(k)$ an arbitrary point defined over $k$; - $K_{\alpha,n} = k(f^{-n}(\alpha))$ for each $n\ge 0$, and $K_{\alpha,\infty} = \bigcup_{n=1}^\infty K_{\alpha,n}$; - $G_{\alpha,n} = \mathop{\mathrm{Gal}}(K_{\alpha,n}/k)$ for each $n\ge 0$; - $G_{\alpha,\infty}=\varprojlim G_{\alpha,n}\cong \mathop{\mathrm{Gal}}(K_{\alpha,\infty}/k)$ is the limit of the $G_{\alpha,n}$; - $T^d_\infty$ is the infinite $d$-ary rooted tree. - $T^d_n$ is the $d$-ary rooted tree with $n$ levels. 
Here, $f^n$ denotes the iterated composition $f\circ \cdots \circ f$, with $f^0(x)=x$ and $f^1(x)=f(x)$, and $f^{-n}(a)$ denotes the inverse image of $a$ under $f^n$. We assume for each $n$ that the equations $f^n(x)-t=0$ and $f^n(x)-\alpha=0$ are separable, so that the fields $K_n$ and $K_{\alpha,n}$ are indeed Galois extensions of $K_0=k(t)$ and of $K_{\alpha,0}=k$, respectively. By identifying the $d^n$ elements of $f^{-n}(t)$ or of $f^{-n}(\alpha)$, counted with multiplicity, with the $d^n$ vertices at the $n$-th level of the tree $T^d_n$, the Galois groups $G_n$ and $G_{\alpha,n}$ act on $T^d_n$. Similarly, the Galois groups $G_\infty$ and $G_{\alpha,\infty}$ act on $T^d_\infty$. Recall that we say a point $P\in\mathbb{P}^1$ is *preperiodic* under $f$ if there are integers $n>m\geq 0$ such that $f^n(P)=f^m(P)$, and we say $f$ is *post-critically finite*, or PCF, if all critical points of $f$ are preperiodic.

## Overview of the problem

Note that for $\alpha \in k$, there is a simple way to view $G_{\alpha,\infty}$ as a subgroup of $G_\infty$ up to conjugacy whenever $\alpha$ is not strictly post-critical. (That is, whenever $\alpha$ does not equal $f^n(c)$ for any critical point $c$ of $f$, and any integer $n\geq 1$.) Let ${\mathfrak p}$ be the prime corresponding to $\alpha$ in $\mathop{\mathrm{Spec}}k[t]$, and let ${\mathfrak q} _n$ be a prime lying over it in the integral closure of $k[t]$ in $K_{\alpha,n}$. Then the decomposition group of ${\mathfrak q} _n$ over ${\mathfrak p}$ is isomorphic to $G_{\alpha,n}$. Choosing primes $${\mathfrak q} _1 \subseteq {\mathfrak q} _2 \subseteq \cdots \subseteq {\mathfrak q} _n \subseteq \cdots$$ gives an embedding of $G_{\alpha,\infty}$ into $G_\infty$, well-defined up to conjugacy. The following has become a standard question in the area (see [@BostonJonesArboreal; @BostonJonesImage; @RafeArborealSurvey; @BDGHT2]). **Question 1**. *Let $f$ be a rational function defined over a number field $k$.
Let $\alpha \in k$ be a point that is not strictly post-critical for $f$ and is not fixed by any rational function that commutes with any iterate of $f$. Then do we have $[G_\infty: G_{\alpha,\infty}] < \infty$?* Question [Question 1](#q){reference-type="ref" reference="q"} is known to have a positive answer for non-PCF quadratic and cubic polynomials (see [@BDGHT1; @BDGHT2; @BT2; @JKL]) if one assumes the $abc$ conjecture along with a conjecture about irreducibility of iterates of rational functions due to Jones and Levy [@RafeAlon]. Jones and Manes have proved similar results for special families of quadratic rational functions [@JonesManes]. Unconditionally, less is known about Question [Question 1](#q){reference-type="ref" reference="q"}, but partial results are known in some cases. Over function fields, Odoni [@OdoniIterates; @OdoniWreathProducts] (see also [@Juul]) has shown that for generic pairs $(f,\alpha)$, $G_{\infty}$ is all of $\mathop{\mathrm{Aut}}(T^d_\infty)$. When $d=2$, Odoni further proved that the example of $(x^2-x+1,0)$ over $\mathbb{Q}$ satisfies $G_{\alpha,\infty}=G_\infty=\mathop{\mathrm{Aut}}(T^d_\infty)$ [@OdoniExample]. Stoll [@Stoll92] later extended Odoni's construction to infinitely many quadratic polynomials over $\mathbb{Q}$. Looper [@Looper] produced infinitely many such examples in any prime degree $d=p$, later generalized to all degrees $d\geq 2$ in [@BenJuul; @Kadets; @Specter]. When $G_\infty\subsetneq\mathop{\mathrm{Aut}}(T^d_\infty)$, for example when $f$ is PCF, there had been fewer examples for which the answer to Question [Question 1](#q){reference-type="ref" reference="q"} was known. As shown in [@ABCCF], the answer is yes for nearly all $\alpha\in k$ for the polynomial $f(x)=x^2-1$, using a Hilbert irreducibility argument. A somewhat similar result for a specific PCF cubic has been shown in [@BFHJY], and then generalized to normalized Belyi maps in [@BEK].
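The PCF hypothesis is mechanical to test for concrete quadratic polynomials such as the $x^2-1$ just mentioned. The sketch below is purely illustrative (it only tracks the finite critical point $0$; the second critical point of a quadratic polynomial is the fixed, hence preperiodic, point at infinity): it detects the preperiodicity of the critical orbit of $x^2-1$, and finds no repeat for $x^2+1$, whose critical orbit $0, 1, 2, 5, 26, \dots$ is strictly increasing, so $x^2+1$ is not PCF.

```python
def critical_orbit(f, c, max_iters=50):
    """Iterate f from the critical point c; return the (finite) orbit once a
    repeated value is detected, i.e. once c is seen to be preperiodic, and
    None if no repeat occurs within max_iters steps."""
    seen, x = [], c
    for _ in range(max_iters):
        if x in seen:
            return seen
        seen.append(x)
        x = f(x)
    return None

# f(x) = x^2 - 1: the critical orbit is 0 -> -1 -> 0 -> ..., so f is PCF.
assert critical_orbit(lambda x: x * x - 1, 0) == [0, -1]

# f(x) = x^2 + 1: the critical orbit grows without bound; no repeat is found.
assert critical_orbit(lambda x: x * x + 1, 0, max_iters=12) is None
```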
We note however that the results there only yield infinitely many $\alpha$ such that $G_\infty = G_{\alpha, \infty}$, rather than nearly all $\alpha\in k$. Here we show that Question [Question 1](#q){reference-type="ref" reference="q"} has a positive answer for any PCF quadratic rational function and for nearly all $\alpha \in k$. **Theorem 2**. *Let $f$ be a PCF quadratic rational function defined over a number field $k$. Then for all $\alpha \in k$ outside of a thin set, we have $G_\infty = G_{\alpha,\infty}$.* Theorem [Theorem 2](#hilbert-plus){reference-type="ref" reference="hilbert-plus"} follows from combining the result below with the Hilbert irreducibility theorem. **Theorem 3**. *Let $k$ be a number field and let $f \in k(x)$ be a PCF rational function such that $\mathop{\mathrm{Gal}}(K_1/k_1(t))$ is a $p$-group. Then there is an integer $m\geq 1$ (depending on $f$ and $k$) such that $G_\infty = G_{\alpha,\infty}$ whenever $G_m = G_{\alpha,m}$.* Note in particular that Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"} applies to any PCF quadratic rational function. We can be more precise about the integer $m$ in the conclusion of Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"} in the case of certain polynomials. **Theorem 4**. *Let $f(x) = x^{p^n} + c$ be a PCF polynomial defined over a number field $k$. Let $N$ be the size of the forward orbit of the critical point $0$. Then there is a finite extension $k'$ of $k_1$ such that for any $\alpha\in k$, we have $G_{\alpha, \infty} = G_\infty$ if and only if $$\label{k1} |\mathop{\mathrm{Gal}}(K_{\alpha, N} \cdot k' / k)| = |G_N| \cdot [k': k_1].$$* *Remark 5*.
Theorem [Theorem 4](#quadratic3){reference-type="ref" reference="quadratic3"} is stated with an explicit description of $k'$ in Theorem [Theorem 37](#quadratic2){reference-type="ref" reference="quadratic2"}; more precisely, we have that $k'$ is the compositum of the finitely many degree-$p$ extensions of $k_1$ contained in $k_\infty$.

## Our strategy of proof

The proofs of both Theorems [Theorem 3](#more-general){reference-type="ref" reference="more-general"} and [Theorem 4](#quadratic3){reference-type="ref" reference="quadratic3"} leverage properties of Frattini subgroups (see Section [3](#more){reference-type="ref" reference="more"}). Because $G_\infty$ is a $p$-group in Theorems [Theorem 3](#more-general){reference-type="ref" reference="more-general"} and [Theorem 4](#quadratic3){reference-type="ref" reference="quadratic3"}, every maximal closed subgroup of $G_\infty$ is normal and of index $p$ in $G_\infty$. By the theory of Frattini subgroups, then, Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"} reduces to showing that $K_\infty$ contains only finitely many Galois extensions of degree $p$ over $k(t)$. In Theorem [Theorem 10](#k){reference-type="ref" reference="k"} of Section [2](#ext){reference-type="ref" reference="ext"}, we show that for any number field $k$, the resulting base field extension $k_\infty$ contains only finitely many extensions of $k$ of bounded degree (with no conditions on $f$). If we add the hypothesis that $f$ is PCF, then using standard facts about fundamental groups, we prove in Lemma [Lemma 25](#geo){reference-type="ref" reference="geo"} that the field $K_\infty$ also contains only finitely many extensions of bounded degree over $k_\infty(t)$.
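The group-theoretic input here — every maximal subgroup of a finite $p$-group is normal of index $p$ — can be checked by brute force on a small example. The sketch below is illustrative only (it uses the dihedral $2$-group of order $8$, not a group arising from the paper): it enumerates all subgroups and verifies the claim for $p = 2$.

```python
from itertools import combinations, product

# Elements of the dihedral group D4 (order 8 = 2^3), as permutations of the
# square's vertices encoded as tuples.
def compose(p, q):                          # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    q = [0] * 4
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

e, r, s = (0, 1, 2, 3), (1, 2, 3, 0), (0, 3, 2, 1)

G = {e}                                     # generate D4 = <r, s> by closure
while True:
    new = {compose(a, b) for a, b in product(G | {r, s}, repeat=2)} - G
    if not new:
        break
    G |= new
assert len(G) == 8

# Brute-force all subgroups: a nonempty finite subset closed under the group
# operation is automatically a subgroup.
subgroups = [set(c) for k in range(1, 9) for c in combinations(sorted(G), k)
             if all(compose(a, b) in c for a in c for b in c)]

maximal = [H for H in subgroups if H < G
           and not any(H < K and K < G for K in subgroups)]

# Every maximal subgroup of a p-group is normal and of index p (here p = 2).
for H in maximal:
    assert len(G) // len(H) == 2
    assert all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)
```

For the pro-$p$ group $G_\infty$ the same statement is applied to maximal closed subgroups, which is what reduces the theorem to counting degree-$p$ extensions inside $K_\infty$.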
Combining Lemma [Lemma 25](#geo){reference-type="ref" reference="geo"} with Theorem [Theorem 10](#k){reference-type="ref" reference="k"} gives Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"}, which we prove in Section [3](#more){reference-type="ref" reference="more"}. In Section [4](#quadratic){reference-type="ref" reference="quadratic"}, using more careful arguments about the exact number of extensions of degree $p$ in $K_\infty$ (see Lemma [Lemma 34](#count1){reference-type="ref" reference="count1"}), we prove Theorem [Theorem 37](#quadratic2){reference-type="ref" reference="quadratic2"}, which is the more precise and explicit form of Theorem [Theorem 4](#quadratic3){reference-type="ref" reference="quadratic3"}. We note that our proofs do not involve deriving information about the group $G_\infty$ itself (as is done in [@JKMT; @PinkQuadratic; @PinkQuadraticInfiniteOrbits; @ABCCF; @BFHJY], for example). In a future paper, we plan to give a concrete presentation of the explicit data about $G_\infty$ for PCF quadratic polynomials developed in [@PinkQuadratic]. # Extensions of the base field {#ext} In this section, we collect some information about the fields $k_n$. We begin with the following dynamical analog of a standard result [@AEC Proposition VII.4.1] on the ramification of Tate modules of elliptic curves. **Lemma 6**. *Let $R$ be a discrete valuation ring with maximal ideal ${\mathfrak p}$, and let $k$ be the field of fractions of $R$. Let $g(x)\in R[x]$ be a nonconstant polynomial with coefficients in $R$, and let ${\bar g}(x)$ denote its image in $(R/ {\mathfrak p} )[x]$. Suppose that $\deg {\bar g} = \deg g$ and that ${\bar g}$ is separable over $R/ {\mathfrak p}$. Let $L$ be a splitting field of the polynomial $g(x)-t\in R[t][x]$ over $k(t)$, where $t$ is transcendental over $k$. 
Then ${\mathfrak p}$ does not ramify in $L \cap \overline{k}$.* *Proof.* We may suppose that $R$ is ${\mathfrak p}$-adically complete since ${\mathfrak p}$ will ramify in $L \cap \overline{k}$ if and only if the completion of $R$ at ${\mathfrak p}$ ramifies in the compositum of $L \cap \overline{k}$ with the completion of $k$ at ${\mathfrak p}$. We may also assume that $L$ does not contain any algebraic extensions of $k$ that are unramified at ${\mathfrak p}$, after replacing $R$ with its integral closure in the maximal unramified algebraic extension of $k$ at ${\mathfrak p}$ in $L$. Let ${\mathfrak q} = {\mathfrak p} R[t]$. Then the localization $R[t]_ {\mathfrak q}$ is a discrete valuation ring with maximal ideal ${\mathfrak q} R[t]_{ {\mathfrak q} }$. We will begin by showing that the prime ${\mathfrak q} R[t]_{ {\mathfrak q} }$ does not ramify in $L$. Let $M$ be an extension of $k(t)$ generated by a root of $g(x) - t$, and let $B$ be the integral closure of $R[t]_{ {\mathfrak q} }$ in $M$. Then ${\bar g}(x) - t\in (R/ {\mathfrak p} )[t][x]$ is separable and irreducible over $(R/ {\mathfrak p} )(t)$ and is of degree $\deg g$. Hence, there is a single prime ${\mathfrak m}$ of $B$ lying over ${\mathfrak q} R[t]_{ {\mathfrak q} }$, and $$\big[B/ {\mathfrak m} : R[t]_{ {\mathfrak q} } /( {\mathfrak q} R[t]_{ {\mathfrak q} }) \big] = \deg g = [M:k(t)].$$ Thus, ${\mathfrak q} R[t]_{ {\mathfrak q} }$ does not ramify in $M$. Because $L$ is the compositum of $\deg g$ copies of $M$ (one for each root of $g(x) - t$), we see that ${\mathfrak q} R[t]_ {\mathfrak q}$ does not ramify in $L$, as we claimed above. Let $\ell = L \cap \overline{k}$. Then we have a containment of fields $k(t) \subseteq k(t) \cdot \ell \subseteq L$. We must show that ${\mathfrak p}$ does not ramify in $\ell$; in fact, we will show that $\ell=k$. Let $R'$ be the integral closure of $R$ in $\ell$. 
Since $R$ is complete, the ring $R'$ is a discrete valuation ring with maximal ideal ${\mathfrak m}$, and we have ${\mathfrak p} R' = {\mathfrak m} ^e$ for some integer $e$. Writing ${\mathfrak m} = \alpha\cdot R'$ (for a suitable element $\alpha\in R'$), we see that $R' = R[\alpha]$, since the only elements of $k[\alpha]$ with ${\mathfrak m}$-adic absolute value less than or equal to $1$ are those in $R[\alpha]$. Hence, the integral closure of $R[t]_ {\mathfrak q}$ in $L$ is $R'[t]_ {\mathfrak q} = R[t]_ {\mathfrak q} [\alpha]$. Let $h(x)\in R[x]$ be the minimal polynomial of $\alpha$ over $k$. Let ${\bar h}$ be the image of $h$ in $(R/ {\mathfrak p} )[x]$. Then ${\bar h} = (x-\bar{\beta})^e$ for some $\bar{\beta} \in R/ {\mathfrak p}$, since ${\mathfrak m} ^e = {\mathfrak p} R'$, by the Dedekind-Kummer theorem (see, for example, [@Neukirch Proposition I.8.1]) applied to the extension $R[\alpha]$ of $R$. The polynomial $h$ remains irreducible over $k(t)$, and thus applying the Dedekind-Kummer theorem to the extension $R[t]_ {\mathfrak q} [\alpha]$ of $R[t]_ {\mathfrak q}$, we see that ${\mathfrak q} R[t]_ {\mathfrak q} [\alpha]$ must also have the form ${\mathfrak n} ^e$ for some maximal ideal ${\mathfrak n}$ in $R[t]_ {\mathfrak q} [\alpha]$. Since ${\mathfrak q} R[t]_ {\mathfrak q}$ does not ramify in $L$, we must have $e = 1$, and hence $\ell = k$. ◻ **Definition 7**. Let $k$ be a field, and let $f: \mathbb{P}_k^1 \longrightarrow\mathbb{P}_k^1$, written as $[P(x,y) : Q(x,y)]$ with $P,Q$ each homogeneous of the same degree $d\geq 1$ in $k[x,y]$. Let $R$ be a Dedekind domain with field of fractions $k$. We say that $f$ has (*explicit*) *good reduction* at a prime ${\mathfrak p}$ of $R$ if the coefficients of $P$ and $Q$ are in $R_ {\mathfrak p}$, the reductions $P_ {\mathfrak p} , Q_ {\mathfrak p} \in (R/ {\mathfrak p} )[x,y]$ of $P$ and $Q$ at ${\mathfrak p}$ have no common roots in the algebraic closure of $R/ {\mathfrak p}$, and $\max(\deg P_ {\mathfrak p} , \deg Q_ {\mathfrak p} ) = d$.
We say that $f$ has (*explicit*) *good separable reduction* at ${\mathfrak p}$ if in addition the map over $R/ {\mathfrak p}$ sending $[x:y]$ to $[P_ {\mathfrak p} (x,y) : Q_ {\mathfrak p} (x,y)]$ is separable. Note that if $f$ has good separable reduction at a prime ${\mathfrak p}$, then so does $f^n$ for any $n\geq 0$. **Lemma 8**. *Suppose that $k$ is the field of fractions of a Dedekind domain $R$ and that the PCF rational function $f$ is separable. Then there are at most finitely many primes ${\mathfrak p}$ of $R$ that ramify in $k_\infty$.* *Proof.* Since $f$ is separable, the set $S$ of primes at which $f$ fails to have good separable reduction is a finite set. Hence, for all $n\geq 0$ and all primes ${\mathfrak p}$ of $R$ outside of $S$, the function $f^n$ has good separable reduction. By Lemma [Lemma 6](#ram){reference-type="ref" reference="ram"}, it follows that any such ${\mathfrak p}$ is unramified in $K_n \cap \overline{k}$, for all $n\geq 0$. Therefore, any prime ${\mathfrak p}$ outside the finite set $S$ is unramified in $k_{\infty}$. ◻ We will also use the following result, which is proved in [@Neukirch Theorem II.2.13]. **Lemma 9**. *Let $D\in\mathbb{N}$ and let $S$ be a finite set of places of a number field $L$. There are only finitely many extensions $L'$ of $L$ unramified outside of $S$ such that $[L':L] \leq D$.* Lemmas [Lemma 8](#ram2){reference-type="ref" reference="ram2"} and [Lemma 9](#N){reference-type="ref" reference="N"} immediately yield the following useful result. **Theorem 10**. *Let $k$ be a number field. Then for any $D\geq 1$, the field $k_\infty$ contains only finitely many intermediate fields $k'$ with $[k':k]\leq D$.* *Proof.* By Lemma [Lemma 8](#ram2){reference-type="ref" reference="ram2"}, there are at most finitely many primes in $k$ that ramify in $k_\infty$. Let $S$ be the set of all such primes together with the archimedean places of $k$.
Applying Lemma [Lemma 9](#N){reference-type="ref" reference="N"} to $S$ yields the desired result. ◻ We will also make use of the following simple lemmas from commutative algebra. **Lemma 11**. *Let $A$ be a complete discrete valuation ring with maximal ideal ${\mathfrak p}$ and field of fractions $k$, let $B$ be its integral closure in a finite, separable, unramified extension $M$ of $k$, and let ${\mathfrak m}$ be the maximal ideal of $B$. Suppose that there is some $\alpha\in B$ such that $M = k(\alpha)$ and the minimal polynomial $g(x) \in A[x]$ of $\alpha$ has the property that the reduction ${\bar g}(x)$ of $g$ in $(A/ {\mathfrak p} )[x]$ does not have repeated roots. Then $B = A[\alpha]$.* *Proof.* The ring $A[x]$ is Noetherian by the Hilbert basis theorem, and hence so is its quotient $A[\alpha]$. Moreover, $A[\alpha]$ has dimension one because $A$ is integrally closed and of dimension one, by the going-up and going-down theorems. By a variant of Hensel's Lemma (see, for example, [@BGR Proposition 3.3.4.1]), the reduced polynomial ${\bar g}$ is irreducible in $(A/ {\mathfrak p} )[x]$, because it does not have repeated roots, and because the original polynomial $g$ is irreducible over $k$. We have $A[\alpha] / {\mathfrak p} A[\alpha] \cong (A/ {\mathfrak p} )[x] / ({\bar g}(x))$, which is a field, since ${\bar g}$ is irreducible. Therefore ${\mathfrak p} A[\alpha]$ is a maximal ideal in the ring $A[\alpha]$; moreover, since $A[\alpha]$ is integral over the local ring $A$, every maximal ideal of $A[\alpha]$ lies over ${\mathfrak p}$ and hence contains ${\mathfrak p} A[\alpha]$, so ${\mathfrak p} A[\alpha]$ must be the unique maximal ideal. Writing ${\mathfrak p} =\pi A$ for some uniformizer $\pi\in A$, we see that this unique maximal ideal in $A[\alpha]$ is the principal ideal $\pi A[\alpha]$. As a Noetherian local domain of dimension one whose maximal ideal is principal, $A[\alpha]$ must be a discrete valuation ring and hence integrally closed. (See, for example, [@AM Proposition 9.2].) Since $A[\alpha]\subseteq B$ has the same field of fractions $M$ as $B$ does, it follows that $A[\alpha]=B$. ◻ **Lemma 12**.
*Let $A$ be a discrete valuation ring with maximal ideal ${\mathfrak p}$ and with field of fractions $k$. Let $L_1$ and $L_2$ be finite separable extensions of $k$, both contained in a common algebraic closure of $k$. For each $i=1,2$, let $R_i$ be the integral closure of $A$ in $L_i$, and let ${\mathfrak q} _i$ be a maximal ideal of $R_i$. Let $B$ be the integral closure of $A$ in $L_1 \cdot L_2$, and let ${\mathfrak m}$ be a maximal ideal of $B$ lying above both ${\mathfrak q} _1$ and ${\mathfrak q} _2$. Suppose that ${\mathfrak q} _1$ does not ramify over ${\mathfrak p}$ and that $R_1/ {\mathfrak q} _1$ is separable over $A/ {\mathfrak p}$. Then $B/ {\mathfrak m} = R_1 / {\mathfrak q} _1 \cdot R_2 / {\mathfrak q} _2$.* In the final sentence of Lemma [Lemma 12](#comp){reference-type="ref" reference="comp"}, note that the fields $R_1 / {\mathfrak q} _1$ and $R_2 / {\mathfrak q} _2$ both embed naturally into $B/ {\mathfrak m}$ under the inclusions of $R_1$ and $R_2$ into $B$. Thus, the conclusion is that the compositum of these two quotient fields is the whole field $B/ {\mathfrak m}$. *Proof.* By passing to the completion of $B$ at ${\mathfrak m}$, we may assume that the rings $A$, $R_1$, $R_2$, and $B$ are complete with respect to their (unique) maximal ideals. Choose $\alpha \in R_1$ such that the image of $\alpha$ in $R_1/ {\mathfrak q} _1$ generates $R_1/ {\mathfrak q} _1$ over $A/ {\mathfrak p}$; such an $\alpha$ exists by the primitive element theorem, since $R_1/ {\mathfrak q} _1$ is separable over $A/ {\mathfrak p}$. Let $g(x)\in A[x]$ be the minimal polynomial of $\alpha$ over $k$. Then ${\bar g}$ must be irreducible over $A/ {\mathfrak p}$, because ${\mathfrak q} _1$ is unramified over ${\mathfrak p}$, so that $[L_1:k] = [R_1/ {\mathfrak q} _1: A / {\mathfrak p} ]$. Furthermore, ${\bar g}$ must be separable since $R_1/ {\mathfrak q} _1$ is separable over $A/ {\mathfrak p}$.
Thus, $R_1 = A[\alpha]$, by Lemma [Lemma 11](#gen-close){reference-type="ref" reference="gen-close"}. Let $h\in R_2[x]$ be the minimal polynomial of $\alpha$ over $R_2$. Then $h$ divides $g$ in $R_2[x]$, so the reduction ${\bar h}$ of $h$ in $(R_2/ {\mathfrak q} _2)[x]$ divides ${\bar g}$. In particular, ${\bar h}$ is also separable. Thus, by Lemma [Lemma 11](#gen-close){reference-type="ref" reference="gen-close"} again, we have $B = R_2[\alpha]$, which is the subring $R_1\cdot R_2$ of $B$ generated by $R_1$ and $R_2$. It follows immediately that $B/ {\mathfrak m} = R_1 / {\mathfrak q} _1 \cdot R_2 / {\mathfrak q} _2$. ◻ **Proposition 13**. *Let $f$ be a separable rational function defined over a field $k$, let $n$ be a positive integer, and let $\alpha \in k$ be any point such that $f^n$ does not ramify over $\alpha$. (That is, there are no critical points $c$ of $f^n$ for which $f^n(c)=\alpha$.) Then $K_{\alpha,n}=k(f^{-n}(\alpha))$ contains $k_n=\overline{k}\cap K_n$.* *Proof.* For each $\beta_i\in K_n$ such that $f^n(\beta_i) = t$, let $L_{i} = k(\beta_i)$; the field $K_n$ is the compositum of these $L_i$. Let $A$ be the local ring for the ideal ${\mathfrak p} =(t-\alpha)$ in $k[t]$, let $R_i$ be the integral closure of $A$ in $L_i$ for each $i$, let $B$ be the integral closure of $A$ in $K_n$, let ${\mathfrak m}$ be a maximal ideal in $B$, and let ${\mathfrak q} _i = R_i \cap {\mathfrak m}$ for each $i$. Note that none of the primes ${\mathfrak q} _i$ ramify over ${\mathfrak p}$, since $f^n$ does not ramify over $\alpha$. The field $K_{\alpha,n}=k(f^{-n}(\alpha))$ contains the compositum of the fields $R_i/ {\mathfrak q} _i$, which is equal to $B/ {\mathfrak m}$ by Lemma [Lemma 12](#comp){reference-type="ref" reference="comp"}. Since $k_n$ is contained in $B$, we see that $B/ {\mathfrak m}$ contains $k_n$, so $K_{\alpha,n}$ must also contain $k_n$, as desired.
◻ We will use the following standard lemma from Galois theory throughout the paper; see [@Lang Theorem VI.1.12]. We include a proof for completeness. **Lemma 14**. *Let $K$ and $L$ be separable field extensions of a field $F$, contained in the same algebraic closure of $F$. Suppose that $K$ is normal over $F$. Then the natural restriction map $r: \mathop{\mathrm{Gal}}(K \cdot L / L)\longrightarrow \mathop{\mathrm{Gal}}(K/K \cap L)$ is an isomorphism.* *Proof.* It suffices to prove the statement when $K \cap L = F$. Clearly $r$ is a homomorphism. Any $\sigma\in\ker(r)$ acts trivially on both $L$ and $K$ and is thus the identity on $K\cdot L$. Let $H\subseteq\mathop{\mathrm{Gal}}(K/F)$ be the image of $r$. We claim that the fixed field $K^H$ is $F$. Clearly any $\gamma\in F$ is fixed by every $\sigma\in H\subseteq\mathop{\mathrm{Gal}}(K/F)$. Conversely, any $\gamma\in K^H$ is fixed by every $\sigma\in\mathop{\mathrm{Gal}}(K\cdot L /L)$ and hence lies in $L$. Therefore, $\gamma\in K\cap L = F$, proving our claim. By the Galois correspondence, it follows that $H=\mathop{\mathrm{Gal}}(K/F)$, and hence that $r$ is surjective. ◻ **Proposition 15**. *Let $\ell$ be an algebraic extension of $k$. If the embedding induced by specializing $t$ to $\alpha$ gives an isomorphism $$\label{gg} \mathop{\mathrm{Gal}}(\ell \cdot K_{\alpha, \infty}/ \ell) \cong \mathop{\mathrm{Gal}}(\ell \cdot K_\infty / \ell(t)),$$ then $G_{\alpha, \infty} \cong G_\infty$.* *Proof.* If $\alpha$ were post-critical (i.e., if $\alpha=f^n(c)$ for some critical point $c$ of $f$ and some $n\geq 1$), then isomorphism [\[gg\]](#gg){reference-type="eqref" reference="gg"} would fail. After all, in that case, the Galois group on the left would have to act in exactly the same way on the two or more copies of $T^d_\infty$ rooted at the multiple copies of $c$ inside the main tree $T^d_\infty$ rooted at $\alpha$. Thus, $\alpha$ must not be post-critical.
Therefore, by Proposition [Proposition 13](#kn){reference-type="ref" reference="kn"}, we must have $$\label{eq:ellinclude} \ell\cap k_n \subseteq \ell \cap K_{\alpha, n} \quad \text{for each } n\ge 1 .$$ By Lemma [Lemma 14](#isomgal){reference-type="ref" reference="isomgal"}, we have $\mathop{\mathrm{Gal}}( \ell \cdot K_{\alpha, n}/ \ell)\cong \mathop{\mathrm{Gal}}(K_{\alpha,n} /\ell\cap K_{\alpha,n})$, and hence $$\begin{aligned} \label{eq:Gansize} |G_{\alpha,n}| &= [K_{\alpha,n}:k] = [K_{\alpha,n}:\ell\cap K_{\alpha,n}] \cdot [\ell\cap K_{\alpha,n}:k] \notag \\ &= |\mathop{\mathrm{Gal}}( \ell \cdot K_{\alpha, n}/ \ell)|\cdot [\ell \cap K_{\alpha, n} : k] .\end{aligned}$$ Meanwhile, we have $[ (\ell\cap k_n)(t) : k(t) ] = [\ell \cap k_n : k]$, since $t$ is transcendental over $k$. We also have $$\label{eq:ellkn} (\ell\cap k_n)(t) = (\ell \cap K_n)(t) = \big( \ell(t) \big) \cap \big( K_n(t) \big) = \ell(t) \cap K_n .$$ By Lemma [Lemma 14](#isomgal){reference-type="ref" reference="isomgal"} again, we have $\mathop{\mathrm{Gal}}( \ell \cdot K_{n}/ \ell(t) )\cong \mathop{\mathrm{Gal}}(K_{n} /\ell(t) \cap K_{n})$, and hence $$\begin{aligned} \label{eq:Gnsize} |G_n| &= [K_n:k(t)] = [K_n : \ell(t) \cap K_n] [(\ell \cap k_n)(t) : k(t)] \notag \\ & =\big|\mathop{\mathrm{Gal}}\big( \ell \cdot K_{n}/ \ell(t) \big) \big| \cdot [(\ell \cap k_n)(t) : k(t)] = |\mathop{\mathrm{Gal}}( \ell \cdot K_{\alpha, n}/ \ell)|\cdot [\ell \cap k_n : k],\end{aligned}$$ where the second equality is by equation [\[eq:ellkn\]](#eq:ellkn){reference-type="eqref" reference="eq:ellkn"}, and the fourth is by hypothesis [\[gg\]](#gg){reference-type="eqref" reference="gg"}.
Combining equations [\[eq:Gansize\]](#eq:Gansize){reference-type="eqref" reference="eq:Gansize"} and [\[eq:Gnsize\]](#eq:Gnsize){reference-type="eqref" reference="eq:Gnsize"} with the fact that $\ell\cap k_n \subseteq \ell \cap K_{\alpha,n}$ from inclusion [\[eq:ellinclude\]](#eq:ellinclude){reference-type="eqref" reference="eq:ellinclude"}, it follows that $|G_{\alpha,n}| \geq |G_n|$. However, we also have $|G_{\alpha,n}| \leq |G_n|$ by construction, whence $|G_{\alpha,n}| = |G_n|$ for all $n\ge 1$. Therefore $G_{\alpha, \infty} \cong G_\infty$, as desired. ◻ The converse of Proposition [Proposition 15](#any){reference-type="ref" reference="any"} is false in general (take $\ell = \overline{k}$, for example), but as our next lemma shows, it does hold when $\ell \subseteq k_\infty$. **Lemma 16**. *Let $\ell$ be an algebraic extension of $k$ contained in $k_\infty$. If $G_{\alpha, \infty} \cong G_\infty$, then $$\label{ggg} \mathop{\mathrm{Gal}}(\ell \cdot K_{\alpha, \infty}/ \ell) \cong \mathop{\mathrm{Gal}}(\ell \cdot K_\infty / \ell(t)).$$* *Proof.* As in the proof of Proposition [Proposition 15](#any){reference-type="ref" reference="any"}, we may assume that $\alpha$ is not post-critical, so that Proposition [Proposition 13](#kn){reference-type="ref" reference="kn"} applies. Suppose that $|\mathop{\mathrm{Gal}}(\ell \cdot K_{\alpha, m}/ \ell)| < |\mathop{\mathrm{Gal}}(\ell \cdot K_m / \ell(t))|$ for some $m\ge 1$. Then equation [\[eq:Gansize\]](#eq:Gansize){reference-type="eqref" reference="eq:Gansize"} from the proof of Proposition [Proposition 15](#any){reference-type="ref" reference="any"} still holds, but the last equality in equation [\[eq:Gnsize\]](#eq:Gnsize){reference-type="eqref" reference="eq:Gnsize"} becomes a strict inequality. By hypothesis, we have $G_{\alpha,m}\cong G_m$, and thus it follows that $[\ell \cap K_{\alpha,m}:k] > [\ell \cap K_m: k]$. Let $\ell' = \ell \cap K_{\alpha,m}$, which is a subfield of $k_{\infty}$ by hypothesis. 
Then $\ell' \cdot K_m$ is contained in $K_{\infty}$; moreover, by the inequality at the end of the previous paragraph, it is a nontrivial extension of $K_m$. Let $A\subseteq k[t]$ be the local ring for ${\mathfrak p} =(t-\alpha)$, and let $B$ be the integral closure of $A$ in $\ell'\cdot K_m$. Because $G_{\alpha, m}$ is the decomposition group of ${\mathfrak p}$ in $K_m$ but is equal to all of $G_m$ by hypothesis, and because $\ell'\cdot K_m$ is a base field extension of $K_m$, it follows that there is only a single prime ${\mathfrak m}$ of $B$ above ${\mathfrak p}$. Moreover, because we assumed $\alpha$ is not post-critical, the extension $K_m/k(t)$ is unramified at ${\mathfrak p}$, and hence $$\label{eq:D1} [k_ {\mathfrak m} : k] = [\ell' \cdot K_m: k(t)],$$ where $k_ {\mathfrak m}$ is the residue field $k_ {\mathfrak m} = B/ {\mathfrak m}$. Setting $L_1=\ell'(t)$ and $L_2=K_m$, both of which are finite separable extensions of $k(t)$, and letting $R_i$ be the integral closure of $A$ in $L_i$ and ${\mathfrak q} _i= {\mathfrak m} \cap R_i$ for $i=1,2$, Lemma [Lemma 12](#comp){reference-type="ref" reference="comp"} yields $$\label{eq:D2} k_ {\mathfrak m} = B/ {\mathfrak m} = R_1/ {\mathfrak q} _1 \cdot R_2/ {\mathfrak q} _2 \cong \ell' \cdot K_{\alpha,m} = K_{\alpha,m}.$$ But since $\ell'\cdot K_m$ is a nontrivial extension of $K_m$, we have $$[K_{\alpha,m}: k] = |G_{\alpha,m}| = |G_m| = [K_m: k(t)] < [\ell' \cdot K_m: k(t)],$$ which contradicts [\[eq:D1\]](#eq:D1){reference-type="eqref" reference="eq:D1"} and [\[eq:D2\]](#eq:D2){reference-type="eqref" reference="eq:D2"}. So we must have $$|\mathop{\mathrm{Gal}}(\ell \cdot K_{\alpha,m}/ \ell)| = |\mathop{\mathrm{Gal}}(\ell \cdot K_m / \ell(t))| \quad \text{for all }m\ge 1,$$ which implies that [\[ggg\]](#ggg){reference-type="eqref" reference="ggg"} holds. 
◻ The following is an immediate consequence of Proposition [Proposition 15](#any){reference-type="ref" reference="any"} and Lemma [Lemma 16](#con){reference-type="ref" reference="con"}. **Theorem 17**. *Let $\ell$ be any extension of $k$ contained in $k_\infty$. Then $$\mathop{\mathrm{Gal}}( K_{\alpha, \infty}/ \ell) = \mathop{\mathrm{Gal}}(K_\infty / \ell(t))$$ if and only if $G_{\alpha, \infty} = G_\infty$.* # Proof of Theorems [Theorem 2](#hilbert-plus){reference-type="ref" reference="hilbert-plus"} and [Theorem 3](#more-general){reference-type="ref" reference="more-general"} {#more} ## Additional technical results We need several more useful facts in order to prove Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"}, beginning with a brief discussion of Frattini subgroups. **Definition 18**. The *Frattini subgroup* of a profinite group $G$ is the intersection of all closed maximal subgroups of $G$. The following is well-known. We provide a short proof for completeness. **Lemma 19**. *Let $G$ be a profinite group, let $H$ be a closed subgroup of $G$, and let $F$ be the Frattini subgroup of $G$. If $H$ intersects every coset of $F$ in $G$ nontrivially, then $H = G$.* *Proof.* Since $H$ intersects every coset of $F$ in $G$ nontrivially, it must intersect every coset of any closed maximal subgroup $M$ in $G$ nontrivially as well, because $F\subseteq M$, so that every coset of $M$ is a union of cosets of $F$. Hence $H$ is not contained in any closed maximal subgroup of $G$; since every proper closed subgroup of a profinite group is contained in a closed maximal subgroup, it follows that $H$ is all of $G$. ◻ **Lemma 20**. *Let $F$ be the Frattini subgroup of $G_\infty$. Then the following are equivalent:* - *$G_\infty$ has only finitely many closed maximal subgroups;* - *$K_\infty^F$ (the fixed field of $F$) is a finite extension of $k(t)$.* *Proof.* For the forward implication, denote the finitely many closed maximal subgroups of $G_\infty$ by $H_1, \dots, H_n$. Then $F = \bigcap_{i=1}^n H_i$, and hence $K_\infty^F = K_\infty^{H_1} \cdots K_\infty^{H_n}$.
We have $[K_{\infty}^{H_i} : k(t)] = [G_{\infty}:H_i] < \infty$ for each $i$, since every closed maximal subgroup of a profinite group is open and hence of finite index. Hence, $K_\infty^F$ is also a finite extension of $k(t)$. Conversely, if $[K_\infty^F:k(t)] < \infty$, then $H=\mathop{\mathrm{Gal}}(K_\infty^F/k(t))$ is a finite group, and hence it contains only finitely many subgroups. These subgroups are in one-to-one correspondence with the closed subgroups of $G_\infty$ containing $F$. Since $F$ is contained in every closed maximal subgroup of $G_{\infty}$, it follows that $G_\infty$ has only finitely many closed maximal subgroups. ◻ **Lemma 21**. *Let $L$ be a Galois extension of $k(t)$ contained in $K_{\infty}$ and containing $K_\infty^F$. If $H$ is a closed subgroup of $G_\infty$ such that the restriction of $H$ to $L$ is all of $\mathop{\mathrm{Gal}}(L/k(t))$, then $H = G_\infty$.* *Proof.* By hypothesis, the homomorphism from $H$ to $\mathop{\mathrm{Gal}}(L/k(t))$ given by restriction to $L$ is surjective. Since $\mathop{\mathrm{Gal}}(K_\infty^F/k(t))$ is a quotient of $\mathop{\mathrm{Gal}}(L/k(t))$, the restriction homomorphism from $H$ to $\mathop{\mathrm{Gal}}(K_\infty^F/k(t))$ is also surjective. That is, the natural homomorphism $H\to G_\infty /F$ is surjective, meaning that $H$ intersects every coset of $F$ nontrivially. Hence we have $H = G_\infty$, by Lemma [Lemma 19](#easy){reference-type="ref" reference="easy"}. ◻ **Lemma 22**. *Suppose that $K_\infty^F$ is a finite extension of $k(t)$. Then there is an integer $m\geq 1$ (depending only on $f$ and $k$) such that for any $\alpha\in k$ for which $G_m \cong G_{\alpha,m}$, we have $G_\infty \cong G_{\alpha,\infty}$.* *Proof.* Since $K_\infty = \bigcup_{n=1}^\infty K_n$ and $K_\infty^F$ has finite degree over $k(t)$, there exists $m\geq 1$ such that $K_m$ contains $K_\infty^F$.
Given any $\alpha\in k$ for which $G_m \cong G_{\alpha,m}$, let $H\subseteq G_{\infty}$ be the (closed) decomposition subgroup of $G_{\infty}$ for the prime $(t-\alpha)$ of $k[t]$, so that $H\cong G_{\alpha,\infty}$. Since the restriction of $H$ to $K_m\supseteq K_\infty^F$ is the image of $G_{\alpha,m}$ in $G_m$, which is all of $G_m$ because $G_{\alpha,m}\cong G_m$, Lemma [Lemma 21](#H){reference-type="ref" reference="H"} shows that $G_{\infty}=H\cong G_{\alpha,\infty}$. ◻ **Lemma 23**. *Suppose that $G_1$ is a $p$-group. Then $G_\infty$ is a pro-$p$ group.* *Proof.* It is well known that $G_\infty$ is a subgroup of the infinite iterated wreath product of $G_1$ (see, for example, [@JKMT Lemma 3.3]). Since this product is a pro-$p$ group, so is $G_{\infty}$. ◻ The following result is a standard fact regarding pro-$p$ groups. **Lemma 24**. *Let $G$ be a pro-$p$ group. Then every closed maximal subgroup of $G$ is normal of index $p$ in $G$.* The next statement follows from standard facts regarding étale fundamental groups. **Lemma 25**. *Let $\ell$ be an algebraically closed field. Let $S$ be a finite set of primes in the field $\ell(t)$, and let $\overline{\ell(t)}$ be an algebraic closure of $\ell(t)$. Let $p$ be a rational prime that is not equal to the characteristic of $\ell$. Then $\overline{\ell(t)}$ contains exactly $\frac{p^{|S| -1} - 1}{p-1}$ degree $p$ normal extensions of $\ell(t)$ that are unramified away from primes in $S$.* *Proof.* By [@Gro X, Cor. 2.12] (see also [@Volk Section 7.1] for a discussion over $\mathbb{C}$), the number of Galois extensions of $\ell(t)$ of degree $p$ in $\overline{\ell(t)}$ that are unramified away from primes in $S$ is equal to the number of normal subgroups of index $p$ in a free group of rank $s = |S| - 1$. There are $p^s$ homomorphisms from a free group $G$ with $s$ generators to $\mathbb{Z}/p\mathbb{Z}$, since each generator may be mapped to any of the $p$ elements of $\mathbb{Z}/p\mathbb{Z}$. Hence, there are $p^s-1$ nontrivial such homomorphisms.
Now, for each normal subgroup $N$ of index $p$ in $G$ there are exactly $p-1$ homomorphisms with kernel $N$, each determined by the image of a fixed generator $aN$ of $G/N$. So we obtain exactly $\frac{p^s -1}{p-1}$ normal subgroups of index $p$, as desired. ◻ *Remark 26*. Lemma [Lemma 25](#geo){reference-type="ref" reference="geo"} is false if $p=\mathop{\mathrm{char}}\ell$, since for any monic polynomial $g\in \ell[t]$, the splitting field of the polynomial $x^p-x+g(t)\in \ell(t)[x]$ is a new degree $p$ normal extension of $\ell(t)$ ramified only above the place at infinity of $\ell(t)$. **Lemma 27**. *Fix $D\geq 1$, and suppose that $k_\infty$ contains only finitely many subfields of degree at most $D$ over $k$. Suppose further that $K_\infty\cdot \overline{k}$ contains only finitely many subfields of degree at most $D$ over $\overline{k}(t)$. Then $K_\infty$ contains only finitely many subfields of degree at most $D$ over $k(t)$.* *Proof.* For any field $L$ with $k(t)\subseteq L \subseteq K_{\infty}$ and $[L:k(t)]\leq D$, the field $L\cdot \overline{k}$ satisfies $\overline{k}(t)\subseteq L\cdot\overline{k}\subseteq K_{\infty}\cdot\overline{k}$ and $[L\cdot\overline{k}:\overline{k}(t)]\leq D$. Thus, by the second assumption, it suffices to show that for any such field $L$, there are only finitely many other such fields $L'$ satisfying $L' \cdot \overline{k}= L \cdot \overline{k}$. For any such $L,L'$, we have $L \subseteq L \cdot L' \subseteq L \cdot L' \cdot \overline{k}= L \cdot \overline{k}$, and $[L \cdot L' : L] \leq D$. Hence, $L \cdot L' = L \cdot \ell$ for some field $\ell \subseteq k_\infty$ with $[\ell: k] \leq D$. There are only finitely many such subfields $\ell$ in $k_\infty$, by the first assumption. Finally, for each such $\ell$, the field $L \cdot \ell$ has only finitely many subfields $L'$ that contain $k(t)$. Thus, our proof is complete. ◻ **Lemma 28**.
*Let $p$ be a rational prime, let $k$ be a field of characteristic not equal to $p$, and let $f(x)$ be a post-critically finite polynomial with coefficients in $k$ such that $G_1$ is a $p$-group. If $k_\infty$ contains only finitely many extensions of $k$ of degree $p$, then $K_\infty^F$ is a finite extension of $k(t)$.* *Proof.* By Lemma [Lemma 23](#simple){reference-type="ref" reference="simple"}, $G_{\infty}=\mathop{\mathrm{Gal}}(K_{\infty}/ k(t))$ is a pro-$p$ group. Hence, $\mathop{\mathrm{Gal}}(\overline{k}\cdot K_\infty /\overline{k}(t))$, which is isomorphic to a subgroup of $G_{\infty}$ by the natural restriction homomorphism, is also a pro-$p$ group. Any degree $p$ extension of $\overline{k}(t)$ inside $\overline{k}\cdot K_{\infty}$ corresponds to a closed maximal subgroup of the pro-$p$ group $\mathop{\mathrm{Gal}}(\overline{k}\cdot K_\infty /\overline{k}(t))$ and hence is normal by Lemma [Lemma 24](#pro-p){reference-type="ref" reference="pro-p"}. Meanwhile, note that $\overline{k}\cdot K_\infty$ is unramified over $\overline{k}(t)$ outside the places corresponding to the post-critical set of $f$. The set of such places is finite, since $f$ is PCF. Hence, by Lemma [Lemma 25](#geo){reference-type="ref" reference="geo"}, there are only finitely many extensions of $\overline{k}(t)$ in $\overline{k}\cdot K_{\infty}$ that are of degree $p$. Thus, by Lemma [Lemma 27](#K){reference-type="ref" reference="K"} and the hypotheses, $K_{\infty}$ contains only finitely many extensions of degree $p$ over $k(t)$. Applying Lemma [Lemma 24](#pro-p){reference-type="ref" reference="pro-p"}, it follows that $G_{\infty}$ has only finitely many closed maximal subgroups. The desired conclusion is then immediate from Lemma [Lemma 20](#1){reference-type="ref" reference="1"}. ◻ ## Proof of our first two main theorems We are now ready to prove Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"}.
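Before doing so, we remark that the elementary subgroup counts invoked above (the $(p^s-1)/(p-1)$ normal index-$p$ subgroups of a free group of rank $s$ in Lemma 25, counted via kernels of nonzero $\mathbb{F}_p$-linear functionals, and the $(p^N-1)/(p-1)$ order-$p$ subgroups of $C_{p^n}^N$ in Lemma 34) are easy to verify by brute force for small parameters. The following Python sketch is purely illustrative and forms no part of the proofs; the function names are ours.

```python
from itertools import product

def count_index_p_kernels(p, s):
    # Distinct kernels of nontrivial homomorphisms F_p^s -> Z/pZ:
    # each nonzero vector v defines the hyperplane {x : v.x == 0 (mod p)},
    # and proportional vectors define the same hyperplane, so we collect
    # the kernels in a set of frozensets.
    kernels = set()
    for v in product(range(p), repeat=s):
        if any(v):
            ker = frozenset(x for x in product(range(p), repeat=s)
                            if sum(a * b for a, b in zip(v, x)) % p == 0)
            kernels.add(ker)
    return len(kernels)

def count_order_p_subgroups(p, n, N):
    # Subgroups of order p in (Z/p^n Z)^N: each is generated by an element
    # of order exactly p, i.e., a nonzero tuple killed by multiplication
    # by p; each such subgroup has p-1 distinct generators.
    q = p ** n
    generators = [x for x in product(range(q), repeat=N)
                  if any(x) and all(a * p % q == 0 for a in x)]
    subgroups = {frozenset(tuple(a * j % q for a in g) for j in range(p))
                 for g in generators}
    return len(subgroups)

# Both counts match (p^s - 1)/(p - 1) and (p^N - 1)/(p - 1), respectively.
for p, s in [(2, 3), (3, 2)]:
    assert count_index_p_kernels(p, s) == (p**s - 1) // (p - 1)
for p, n, N in [(2, 2, 2), (3, 2, 1), (2, 1, 3)]:
    assert count_order_p_subgroups(p, n, N) == (p**N - 1) // (p - 1)
print("all subgroup counts agree")
```

For instance, with $p=2$ and $s=3$ there are $(2^3-1)/(2-1)=7$ hyperplanes in $\mathbb{F}_2^3$, matching Lemma 25 with $|S|=4$.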
**Theorem 29** (Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"}). *Let $k$ be a number field and let $f \in k(x)$ be a PCF rational function such that $\mathop{\mathrm{Gal}}(K_1/k_1(t))$ is a $p$-group. Then there is an integer $m\geq 1$ (depending on $f$ and $k$) such that $G_\infty = G_{\alpha,\infty}$ whenever $G_m = G_{\alpha,m}$.* *Proof.* We may assume that $k_1 = k$, since Theorem [Theorem 17](#geo-does){reference-type="ref" reference="geo-does"} tells us that $G_\infty = G_{\alpha,\infty}$ whenever $\mathop{\mathrm{Gal}}(K_{\alpha,\infty}/ k_1) = \mathop{\mathrm{Gal}}(K_\infty/k_1(t))$. By Theorem [Theorem 10](#k){reference-type="ref" reference="k"}, the field $k_\infty$ contains only finitely many extensions of degree $p$ over $k$, so by Lemma [Lemma 28](#number){reference-type="ref" reference="number"}, the field $K_\infty^F$ is a finite extension of $k(t)$. Lemma [Lemma 22](#almost){reference-type="ref" reference="almost"} then gives the desired result. ◻ When $\deg f=2$, $G_1$ is a $2$-group. Thus, Theorem [Theorem 2](#hilbert-plus){reference-type="ref" reference="hilbert-plus"} follows from Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"} because the set of $\alpha$ in the number field $k$ for which the specialization of the finite extension $K_m/k(t)$ to $K_{\alpha,m}/k$ fails to preserve the Galois group is a thin set. ## Other consequences Recently, there has been a good deal of work on iterated Galois groups over local fields (see [@Anderson; @Berger; @Ingram; @Sing], for example). Our next result is an analog of Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"} for local fields. **Theorem 30**. *Let $k$ be a finite extension of $\mathbb{Q}_q$ for some prime $q$, and let $f \in k(x)$ be a PCF rational function such that $\mathop{\mathrm{Gal}}(K_1/k_1(t))$ is a $p$-group. 
Then there is an integer $m\geq 1$ (depending on $f$ and $k$) such that $G_\infty = G_{\alpha,\infty}$ whenever $G_m = G_{\alpha,m}$.* *Proof.* As in the proof of Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"}, we may assume that $k_1 = k$. A finite extension of $\mathbb{Q}_q$ has only finitely many extensions of bounded degree, so $k_\infty$ contains only finitely many extensions of bounded degree over $k$. Lemmas [Lemma 22](#almost){reference-type="ref" reference="almost"} and [Lemma 28](#number){reference-type="ref" reference="number"} then give the existence of the desired integer $m$. ◻ We are also able to prove a result about specializations of iterated Galois groups from number fields to finite fields. **Theorem 31**. *Let $k$ be a number field with ring of integers ${\mathfrak o} _k$, and let $f \in k(x)$. Suppose that $k = k_\infty$ and that $\mathop{\mathrm{Gal}}(K_1/k(t))$ is a $p$-group. Then for all but finitely many primes ${\mathfrak p}$ of ${\mathfrak o} _k$, the infinite iterated Galois group $G_\infty$ for $f$ over $k(t)$ is isomorphic to the infinite iterated Galois group for the reduction $\bar{f}_ {\mathfrak p} \in ( {\mathfrak o} _k / {\mathfrak p} )[x]$ over $( {\mathfrak o} _k / {\mathfrak p} )(t)$.* *Proof.* By Lemma [Lemma 28](#number){reference-type="ref" reference="number"}, the field $K_\infty^F$ is a finite extension of $k(t)$. Therefore, as in the proof of Lemma [Lemma 22](#almost){reference-type="ref" reference="almost"}, there is an integer $m\geq 1$ such that $K_m$ contains $K_{\infty}^F$. Let $h(x)=f^m(x)-t\in k(t)[x]$, and for a prime ${\mathfrak p}$ of ${\mathfrak o} _k$, let $\bar{h}_ {\mathfrak p} (x) = \bar{f}_ {\mathfrak p} ^m(x) -t\in ( {\mathfrak o} _k / {\mathfrak p} ) (t)[x]$. (This reduction makes sense for all but finitely many primes ${\mathfrak p}$ of ${\mathfrak o} _k$.)
Then $K_m$ is the splitting field of $h$ over $k(t)$, and we may define $K'_{m, {\mathfrak p} }$ to be the splitting field of $\bar{h}_ {\mathfrak p}$ over $( {\mathfrak o} _k / {\mathfrak p} )(t)$. Further define $G'_{m, {\mathfrak p} }=\mathop{\mathrm{Gal}}(K'_{m, {\mathfrak p} } / ( {\mathfrak o} _k / {\mathfrak p} ) (t))$. By [@JKMT Proposition 4.1], we have $G_m\cong G'_{m, {\mathfrak p} }$ for all but finitely many primes ${\mathfrak p}$ of ${\mathfrak o} _k$. Meanwhile, the infinite iterated Galois group $G'_{\infty, {\mathfrak p} }$ for the reduction $\bar{f}_ {\mathfrak p}$ over $( {\mathfrak o} _k / {\mathfrak p} )(t)$ is isomorphic to a closed subgroup $H_{\infty, {\mathfrak p} }$ of $G_{\infty}$. (In particular, as a Galois group, $G'_{\infty, {\mathfrak p} }$ is an inverse limit of finite groups; therefore its image $H_{\infty, {\mathfrak p} }$ is as well, and hence it is closed in the profinite topology on $G_{\infty}$.) Thus, because the restriction of $H_{\infty, {\mathfrak p} }$ to $K_m$ is all of $G_m$, and because $K_m \supseteq K_{\infty}^F$, we have $H_{\infty, {\mathfrak p} }= G_\infty$ for all but finitely many ${\mathfrak p}$, by Lemma [Lemma 21](#H){reference-type="ref" reference="H"}. ◻ # Proof of Theorem [Theorem 4](#quadratic3){reference-type="ref" reference="quadratic3"} {#quadratic} In this section, we give some more precise results in the case of PCF polynomials of the form $f(x) = x^{p^n} + c$ for $p$ a prime number and $n\geq 1$ an integer. ## Preliminary technical results We start with a few useful lemmas. **Lemma 32**. *Let $f(x) = x^{p^n} + c$ be a PCF polynomial defined over a field $k$ of characteristic other than $p$, and let $N=|\{f^i(0) : i\geq 0\}|$ be the size of the forward orbit of the critical point $0$. Let $\ell$ be any algebraic extension of $k_1$.
Then $\mathop{\mathrm{Gal}}(\ell \cdot K_N / \ell(t))$ is isomorphic to the full $N$-th iterated wreath product of the cyclic group $C_{p^n}$, and $(\ell \cdot K_N) \cap \overline{k}= \ell$. Furthermore, if $k = k_1$, then $G_{\infty}$ is isomorphic to a subgroup of the infinite iterated wreath product of $C_{p^n}$.* *Proof.* Since $\mathop{\mathrm{Gal}}(\ell \cdot K_N / \ell(t)) = \mathop{\mathrm{Gal}}(\ell \cdot k_1(f^{-N}(t))/ \ell(t))$, to prove the first statement, it suffices to show that $\mathop{\mathrm{Gal}}(\ell \cdot K_N / \ell(t))$ is isomorphic to the full $N$-th iterated wreath product of the cyclic group $C_{p^n}$ under the assumption that $k = k_1$. Let $u\in f^{-1}(t)\subseteq K_1$. Then $f^{-1}(t)=\{\zeta^i u : 0\leq i\leq p^n-1\}$, where $\zeta$ is a primitive $p^n$-th root of unity. Thus, the assumption that $k=k_1$ says precisely that $\zeta\in k$. Since $f(x)-t$ is irreducible over $\overline{k}(t)$, it follows that $\mathop{\mathrm{Gal}}(\ell \cdot K_1 / \ell(t)) \cong C_{p^n}$. The prime $(t-c)$ of $k(t)$ associated with $c=f(0)$ ramifies in $\ell \cdot K_1$ as $(u)^{p^n}$, and hence the ramification group of $(t-c)$ must be the whole Galois group $\mathop{\mathrm{Gal}}(\ell \cdot K_1 / \ell(t))$, since the ramification index $p^n$ equals the order of $\mathop{\mathrm{Gal}}(\ell \cdot K_1 / \ell(t))\cong C_{p^n}$. Meanwhile, the subset $S=\{0\}$ of the critical points of $f$ has the property that for any $a\in S$, any critical point $b$ of $f$ (i.e., any $b\in \{0,\infty\}$), and any $0\leq i,j\leq N$, we have $f^i(a)\neq f^j(b)$ unless $a=b=0$ and $i=j$. Together, these are precisely the hypotheses of [@JKMT Theorem 3.1], which shows that $\mathop{\mathrm{Gal}}(\ell \cdot K_N / \ell(t))$ is isomorphic to the full $N$-th iterated wreath product of the cyclic group $C_{p^n}$. 
Since the same argument applied to $\overline{k}$ shows that $\mathop{\mathrm{Gal}}(\overline{k}\cdot K_N / \overline{k}(t))$ is the same iterated wreath product, Lemma [Lemma 14](#isomgal){reference-type="ref" reference="isomgal"} shows that we must have $(\ell \cdot K_N) \cap \overline{k}(t) = \ell(t)$. Intersecting with $\overline{k}$, it follows that $(\ell \cdot K_N) \cap \overline{k}= \ell$. Finally, as noted in the proof of Lemma [Lemma 23](#simple){reference-type="ref" reference="simple"}, $G_\infty$ is a subgroup of the infinite iterated wreath product of $G_1\cong C_{p^n}$. ◻ *Remark 33*. Whether or not $k$ and $k_1$ coincide, Lemma [Lemma 32](#first){reference-type="ref" reference="first"} also yields the equality $k_N = k_1$. After all, we have $k_1 \subseteq k_N$ trivially, and then choosing $\ell=k_1$ in Lemma [Lemma 32](#first){reference-type="ref" reference="first"} gives $k_N \subseteq (k_1 \cdot K_N) \cap \overline{k}= k_1$. **Lemma 34**. *Let $f(x) = x^{p^n} + c$ be a PCF polynomial defined over a field $k$ of characteristic other than $p$, and let $N$ be the size of the forward orbit of the critical point $0$. Then for any algebraic extension $\ell$ of $k_1$, the field $\ell \cdot K_N$ contains at least $(p^N - 1)/(p-1)$ extensions of degree $p$ over $\ell(t)$.* *Proof.* By Lemma [Lemma 32](#first){reference-type="ref" reference="first"}, the group $\mathop{\mathrm{Gal}}(\ell \cdot K_N / \ell(t))$ is isomorphic to the full $N$-th iterated wreath product of the cyclic group $C_{p^n}$. The abelianization of $\mathop{\mathrm{Gal}}(\ell \cdot K_N/ \ell(t))$ is thus isomorphic to $C_{p^n}^N$, since the abelianization of any wreath product $A \wr B$ is isomorphic to the product of the abelianizations of $A$ and $B$ (see [@dH00 p. 215]). Therefore, it suffices to show that $C_{p^n}^N$ has at least $(p^N - 1)/(p-1)$ subgroups of index $p$.
To see this, observe that the elements of $C_{p^n}^N$ of order $p$ are precisely those of the form $(a_1 p^{n-1} , \ldots, a_N p^{n-1})$, where $a_1,\ldots,a_N\in\{0,\ldots, p-1\}$ are not all $0$. There are $p^N-1$ such elements, and they fall into classes of size $p-1$, all elements of a class generating the same subgroup of order $p$. Hence $C_{p^n}^N$ has exactly $(p^N-1)/(p-1)$ subgroups of order $p$; since a finite abelian group has as many subgroups of index $p$ as subgroups of order $p$, there are indeed at least $(p^N-1)/(p-1)$ subgroups of index $p$ in $C_{p^n}^N$. ◻ **Lemma 35**. *Let $f(x) = x^{p^n} + c$ be a PCF polynomial defined over a field $k$ of characteristic other than $p$, and let $N$ be the size of the forward orbit of the critical point 0. Let $\ell$ be an algebraic extension of $k_1$ contained in $k_\infty$. Then for every $L \subseteq K_\infty$ with $[L:\ell(t)] = p$, we have $L \subseteq k_\infty \cdot K_N$.* *Proof.* By the second statement of Lemma [Lemma 32](#first){reference-type="ref" reference="first"}, $\mathop{\mathrm{Gal}}(\overline{k}\cdot K_\infty / \overline{k}(t))$ is a subgroup of an iterated wreath product of $C_{p^n}$ and hence is a pro-$p$ group. Therefore, by Lemma [Lemma 24](#pro-p){reference-type="ref" reference="pro-p"}, every extension of $\overline{k}(t)$ of degree $p$ that is contained in $\overline{k}\cdot K_\infty$ is normal. In addition, $\overline{k}\cdot K_\infty$ is unramified away from the set $S=\{\infty\}\cup\{ (t-f^i(0)) | i\geq 0\}$ of primes of $k(t)$ corresponding to the forward orbits of the critical points $\infty$ and $0$. By hypothesis, we have $|S|=N+1$, and hence by Lemma [Lemma 25](#geo){reference-type="ref" reference="geo"}, $\overline{k}\cdot K_\infty$ contains at most $(p^N-1)/(p-1)$ extensions of degree $p$ over $\overline{k}(t)$. On the other hand, by Lemma [Lemma 34](#count1){reference-type="ref" reference="count1"}, the field $\ell \cdot K_N$ contains at least $(p^N-1)/(p-1)$ extensions of degree $p$ over $\ell(t)$.
Thus, defining $U_{\ell,N}$ to be the set of subfields $L$ of $\ell\cdot K_N$ satisfying $[L:\ell(t)]=p$, and defining $U_{\overline{k},\infty}$ to be the set of subfields $M$ of $\overline{k}\cdot K_{\infty}$ satisfying $[M:\overline{k}(t)]=p$, this means that $$\label{eq:Uineq} \big| U_{\overline{k},\infty} \big| \leq \frac{p^N-1}{p-1} \leq \big| U_{\ell,N} \big| .$$ We claim that the mapping $\phi: U_{\ell,N} \to U_{\overline{k},\infty}$ given by $L\mapsto \overline{k}\cdot L$ is well defined and one-to-one. To prove the claim, first observe that any $L\in U_{\ell,N}$ is a geometric extension of $\ell(t)$, since $L\subseteq\ell\cdot K_N$, and hence $L\cap\overline{k}=\ell$ by Lemma [Lemma 32](#first){reference-type="ref" reference="first"}. Thus, we have $$[\overline{k}\cdot L : \overline{k}(t)]=[L:\ell(t)]=p,$$ so that $\phi(L)=\overline{k}\cdot L$ is indeed an element of $U_{\overline{k},\infty}$. In addition, if $L_1,L_2\in U_{\ell,N}$ satisfy $\phi(L_1)=\phi(L_2)$, then $$L_1\cdot L_2 \subseteq L_1\cdot L_2 \cdot \overline{k}= L_1\cdot L_1 \cdot \overline{k}= \overline{k}\cdot L_1 .$$ At the same time, we also have $L_1\cdot L_2 \subseteq \ell \cdot K_N$, and hence $$L_1\cdot L_2 \subseteq (\overline{k}\cdot L_1) \cap (\ell\cdot K_N) = \big(\overline{k}\cap (\ell\cdot K_N) \big) \cdot L_1 = \ell \cdot L_1 = L_1,$$ where the second equality is again by Lemma [Lemma 32](#first){reference-type="ref" reference="first"}. Similarly, we also have $L_1\cdot L_2 \subseteq L_2$, and hence $L_1=L_1\cdot L_2 = L_2$, proving the claim. Given any $L$ as in the statement of the lemma, suppose first that $L$ is not a geometric extension of $\ell(t)$. Then $L\subseteq k_\infty (t) \subseteq k_{\infty} \cdot K_N$, as desired. Otherwise, the field $\overline{k}\cdot L$ is an extension of $\overline{k}(t)$ of degree $p$ contained in $\overline{k}\cdot K_{\infty}$, and hence $\overline{k}\cdot L \in U_{\overline{k},\infty}$.
It follows from the claim and inequality [\[eq:Uineq\]](#eq:Uineq){reference-type="eqref" reference="eq:Uineq"} that $\phi$ is bijective, and hence there is some field $L'\in U_{\ell,N}$ so that $\overline{k}\cdot L = \overline{k}\cdot L'$. As in the proof of the claim, it follows that $L\cdot L' \subseteq \overline{k}\cdot L\cdot L' = \overline{k}\cdot L'$, and hence $L\cdot L' \subseteq k_\infty \cdot L'$, since $L\cdot L'\subseteq K_\infty$, which has constant field $k_\infty$. Since $L'\subseteq \ell\cdot K_N$, it follows that $$L\subseteq L\cdot L' \subseteq k_\infty \cdot L' \subseteq k_\infty \cdot (\ell \cdot K_N) \subseteq k_\infty \cdot K_N . \qedhere$$ ◻ **Theorem 36**. *Let $f(x) = x^{p^n} + c$ be a PCF polynomial defined over a field $k$ of characteristic other than $p$. Let $N$ be the size of the forward orbit of the critical point $0$. Then we have $G_{\alpha, \infty} = G_\infty$ if and only if $$\label{g2} \mathop{\mathrm{Gal}}(k_\infty \cdot K_{\alpha,N} / k_\infty) = \mathop{\mathrm{Gal}}(k_\infty \cdot K_N / k_\infty(t)).$$* *Proof.* Applying Theorem [Theorem 17](#geo-does){reference-type="ref" reference="geo-does"} with $\ell=k_\infty$, we may assume without loss that $k=k_\infty$. The forward implication is then immediate by restriction to $K_N$ and $K_{\alpha,N}$. Conversely, suppose that equation [\[g2\]](#g2){reference-type="eqref" reference="g2"} holds. Then Lemma [Lemma 35](#count-final){reference-type="ref" reference="count-final"} with $\ell=k_\infty$ implies that every degree $p$ extension $L$ of $k_\infty(t)$ in $K_\infty$ is contained in $k_\infty \cdot K_N$. Hence, the fixed field $K_\infty^F$ is also contained in $k_\infty \cdot K_N$, where $F$ is the Frattini subgroup of $\mathop{\mathrm{Gal}}(K_\infty/k_\infty(t))$. Recalling that we have assumed $k=k_\infty$, let $L=k_\infty \cdot K_N = K_N$ and $H=G_{\alpha,\infty}$, viewed as a (closed) subgroup of $G_{\infty}$. 
The restriction of $H$ to $L$ is $$\mathop{\mathrm{Gal}}(k_\infty \cdot K_{\alpha,N} / k_\infty) = \mathop{\mathrm{Gal}}(k_\infty \cdot K_N / k_\infty(t)) = \mathop{\mathrm{Gal}}(L/ k(t)),$$ by equation [\[g2\]](#g2){reference-type="eqref" reference="g2"} and our assumption that $k=k_\infty$. Therefore, by Lemma [Lemma 21](#H){reference-type="ref" reference="H"}, we have $H=G_\infty$, as desired. ◻ ## Proof of our second main theorem We are ready to prove Theorem [Theorem 4](#quadratic3){reference-type="ref" reference="quadratic3"}, by obtaining a more precise version of it, as stated below in Theorem [Theorem 37](#quadratic2){reference-type="ref" reference="quadratic2"}. Because we work over a number field $k$, the field $k_\infty$ contains only finitely many extensions of $k_1$ of degree $p$, by Theorem [Theorem 10](#k){reference-type="ref" reference="k"}; let $k'$ denote their compositum. **Theorem 37**. *Let $f(x) = x^{p^n} + c$ be a PCF polynomial defined over a number field $k$. Let $N$ be the size of the forward orbit of the critical point $0$. Let $k'$ be the compositum of the degree $p$ extensions of $k_1$ in $k_\infty$. Then $G_{\alpha, \infty} = G_\infty$ if and only if $$\label{k11} |\mathop{\mathrm{Gal}}(k' \cdot K_{\alpha, N} / k)| =|G_N|\cdot [k': k_1].$$* *Proof.* Suppose that $G_{\alpha, \infty} = G_\infty$. Applying Theorem [Theorem 17](#geo-does){reference-type="ref" reference="geo-does"}, we see that $\mathop{\mathrm{Gal}}(K_\infty / k'(t)) = \mathop{\mathrm{Gal}}(K_{\alpha,\infty} / k')$, and hence that $|\mathop{\mathrm{Gal}}(k' \cdot K_N / k'(t))| = |\mathop{\mathrm{Gal}}(k' \cdot K_{\alpha, N} / k')|$.
Therefore, $$\begin{aligned} |\mathop{\mathrm{Gal}}(k' \cdot K_{\alpha, N} / k)| &= |\mathop{\mathrm{Gal}}(k' \cdot K_N / k'(t))| [k':k] = \frac{|\mathop{\mathrm{Gal}}(K_N/k(t))|}{[k'(t) \cap K_N:k(t)]} [k':k] \\ & = \frac{|G_N|}{[k' \cap K_N:k]} [k':k] = |G_N|\cdot [k': k_1],\end{aligned}$$ where the second equality is by Lemma [Lemma 14](#isomgal){reference-type="ref" reference="isomgal"}, and the fourth is because $k' \cap K_N = k_1$, by Remark [Remark 33](#rm){reference-type="ref" reference="rm"}. Conversely, suppose that [\[k11\]](#k11){reference-type="eqref" reference="k11"} holds. We have $$\label{gn} [K_{\alpha, N}: k] = |G_{\alpha,N}| \leq |G_N|$$ and also, by Lemma [Lemma 14](#isomgal){reference-type="ref" reference="isomgal"} and the fact that $k_1\subseteq k'\cap K_{\alpha,N}$, $$\label{kg} [k' \cdot K_{\alpha, N}: K_{\alpha, N}] \leq [k': k_1].$$ Therefore, by equation [\[k11\]](#k11){reference-type="eqref" reference="k11"}, we have $$|G_N| \cdot [k':k_1] = [k' \cdot K_{\alpha, N}: k] = [k' \cdot K_{\alpha, N}: K_{\alpha, N}] \cdot [K_{\alpha, N}: k] \leq [k':k_1] \cdot |G_N|,$$ so that we must have equality in both [\[gn\]](#gn){reference-type="eqref" reference="gn"} and [\[kg\]](#kg){reference-type="eqref" reference="kg"}. Let $\ell=k_\infty\cap K_{\alpha,N}\supseteq k_1$. We claim that $\ell=k_1$. To see this, note by Lemma [Lemma 32](#first){reference-type="ref" reference="first"} that $\mathop{\mathrm{Gal}}(K_N/k_1(t))$ is a $p$-group, and hence so is the subgroup $H=\mathop{\mathrm{Gal}}(K_{\alpha,N}/k_1)$. If $\ell\neq k_1$, then $\ell$ is a nontrivial extension of $k_1$ contained in $K_{\alpha,N}$, and therefore $\mathop{\mathrm{Gal}}(K_{\alpha,N}/\ell)$ is contained in a maximal subgroup $H'$ of $H$, which must have index $p$ in $H$. The fixed field of $H'$ is therefore an extension $\ell'$ of $k_1$ of degree $p$ and contained in $k_{\infty}$, so we have $\ell'\subseteq k'$ by definition of $k'$. 
Therefore, by the same reasoning as in inequality [\[kg\]](#kg){reference-type="eqref" reference="kg"}, $$[k'\cdot K_{\alpha, N}: K_{\alpha, N}] \leq [k' : \ell'] = \frac{1}{p} [k' : k_1] < [k':k_1] = [k'\cdot K_{\alpha, N}: K_{\alpha, N}],$$ where the final equality is because we showed above that [\[kg\]](#kg){reference-type="eqref" reference="kg"} is an equality. This contradiction proves our claim. By the claim and Lemma [Lemma 14](#isomgal){reference-type="ref" reference="isomgal"}, we have $$\begin{aligned} \label{eq:Kansize} [k_\infty \cdot K_{\alpha, N}: k_\infty] &= [K_{\alpha, N}: k_1] = \frac{[K_{\alpha,N} : k]}{[k_1:k]} = \frac{|G_N|}{[k_1(t):k(t)]} \notag \\ & = [K_N: k_1(t)] = [k_\infty \cdot K_N: k_\infty(t)].\end{aligned}$$ Here, the third equality is because $[k_1(t):k(t)]=[k_1:k]$ and because we showed [\[gn\]](#gn){reference-type="eqref" reference="gn"} is an equality. The fifth is by Lemma [Lemma 14](#isomgal){reference-type="ref" reference="isomgal"} again, together with the fact that $k_\infty \cap K_N = k_1$, by Remark [Remark 33](#rm){reference-type="ref" reference="rm"}. Equation [\[eq:Kansize\]](#eq:Kansize){reference-type="eqref" reference="eq:Kansize"} shows that the subgroup $\mathop{\mathrm{Gal}}(k_{\infty}\cdot K_{\alpha,N} / k_\infty)$ of $\mathop{\mathrm{Gal}}(k_{\infty}\cdot K_N / k_\infty(t))$ is the whole group. Applying Theorem [Theorem 17](#geo-does){reference-type="ref" reference="geo-does"} then gives $G_{\alpha, \infty} = G_\infty$, as desired. ◻ # Further questions {#further} In light of Theorem [Theorem 3](#more-general){reference-type="ref" reference="more-general"}, it is natural to ask whether the Frattini subgroup of $G_\infty$ has finite index in $G_\infty$ for any post-critically finite rational function defined over a number field. Unfortunately, this is not the case, as [@BEK] gives examples where $G_\infty$ is the infinite iterated wreath product of the alternating group $A_d$. 
It is easy to see that the infinite iterated wreath product of any nontrivial group has infinitely many closed maximal subgroups. It would be interesting to know whether the Frattini subgroup of $G_\infty$ has finite index in $G_\infty$ in the case where $f$ is a PCF polynomial of the form $f(x) = x^m + c$ for $m$ not a prime power. We would also like to explore a more general form of Odoni's conjecture. Recall that a field $k$ is said to be *Hilbertian* if the complement of any thin set in $k$ is infinite. Odoni [@OdoniIterates] conjectures that for any integer $d \geq 2$ and any Hilbertian field $k$ of characteristic 0, there is a polynomial $f$ and an $\alpha \in k$ such that $G_{\alpha, \infty}$ is the full automorphism group of $T^d_\infty$. Dittmann and Kadets [@DK] have given counterexamples to this conjecture in every degree. More generally, given a particular polynomial $f$ defined over a number field $\ell$, one might ask whether it is true that for any Hilbertian field $k$ containing $\ell$, there are infinitely many $\alpha \in k$ such that $G_\infty = G_{\alpha, \infty}$. (We note here that the Galois groups are taken relative to $k$, not $\ell$, so that $K_{n,\alpha} = k(f^{-n}(\alpha))$.) For non-PCF quadratic polynomials, the answer is no, using the results of [@DK]. In fact, we have the following stronger result. **Proposition 38**. *Let $\ell$ be a number field, and let $f\in\ell[x]$ be a quadratic polynomial that is not PCF. Then there is a Hilbertian field $k$ that is algebraic over $\ell$ such that for all $\alpha\in k$, the group $G_{\alpha,\infty}$ has infinite index in $G_\infty$.* *Proof.* Since $\deg f=2$, there are exactly two critical points. Because $f$ is a non-PCF polynomial, the two critical orbits are disjoint, and one of those orbits is infinite. Therefore, by [@PinkQuadraticInfiniteOrbits Theorem 4.8.1(a)], $G_\infty$ is the full automorphism group of $T^2_\infty$. (See also [@JKMT Theorem 3.1].)
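(Aside: whether a quadratic $x^2+c$ with $c\in\mathbb{Q}$ is PCF can be tested by iterating the finite critical point $0$ in exact rational arithmetic. The sketch below is our own illustration; over $\mathbb{Q}$ a non-PCF critical orbit grows rapidly, so a small size cutoff suffices for these examples.)

```python
from fractions import Fraction

def critical_orbit(c, max_size=20):
    # Iterate f(x) = x^2 + c starting at the critical point 0, using exact
    # rational arithmetic.  Returns the forward orbit {f^i(0) : i >= 0} as a
    # list if it is finite (f is PCF at 0), or None once it exceeds max_size.
    c = Fraction(c)
    orbit, x = [], Fraction(0)
    while x not in orbit:
        orbit.append(x)
        if len(orbit) > max_size:
            return None
        x = x * x + c
    return orbit

assert len(critical_orbit(-1)) == 2   # 0 -> -1 -> 0: orbit {0, -1}
assert len(critical_orbit(-2)) == 3   # 0 -> -2 -> 2 -> 2: orbit {0, -2, 2}
assert critical_orbit(1) is None      # x^2 + 1 is not PCF over Q
```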
According to [@DK Theorem 1.2], there is some Hilbertian $k$, algebraic over $\ell$, such that for any quadratic polynomial $g\in k[x]$, the arboreal Galois group for $g$ with base point $0$ over $k$ has infinite index in $\mathop{\mathrm{Aut}}(T^2_\infty)$. For any $\alpha\in k$, defining $g(x)=f(x+\alpha)-\alpha\in k[x]$, it follows that $G_{\alpha,\infty}$ has infinite index in $\mathop{\mathrm{Aut}}(T^2_\infty)=G_{\infty}$. ◻ On the other hand, combining the results of this paper with Pink's classification of $k_\infty$ for PCF quadratic polynomials, rather than non-PCF, the question above has a positive answer, as follows. **Theorem 39**. *Let $f(x) = x^2 + c$ be a PCF quadratic polynomial defined over a number field $\ell$. Let $k$ be a Hilbertian field containing $\ell$. Then there are infinitely many $\alpha \in k$ such that $G_\infty = G_{\alpha, \infty}$.* *Proof.* Pink [@PinkQuadratic] shows that $k_\infty$ is contained in the field generated by all $2^n$-th roots of unity over $k$. Thus, $k_\infty$ contains at most finitely many quadratic extensions of $k$. Since $G_1$ is obviously a $2$-group, we have that $K_\infty^F$ is a finite extension of $k(t)$, by Lemma [Lemma 28](#number){reference-type="ref" reference="number"}, where $F$ is the Frattini subgroup of $G_\infty$. Lemma [Lemma 22](#almost){reference-type="ref" reference="almost"} then implies that there is an $m\geq 1$ such that $G_\infty = G_{\alpha,\infty}$ whenever $G_m = G_{\alpha,m}$. Since the set of $\alpha \in k$ such that $G_m = G_{\alpha,m}$ is the complement of a thin set in $k$ (see [@Serre 9.2, Proposition 2]), there are thus infinitely many $\alpha \in k$ such that $G_\infty = G_{\alpha,\infty}$ because $k$ is Hilbertian. ◻ It would be interesting to know whether there are other families of PCF polynomials for which this variant of the Odoni conjecture holds. **Acknowledgments**. The first author gratefully acknowledges the support of NSF grant DMS-2101925. 
The second author was partially supported by an NSERC Discovery Grant. The authors thank Vesselin Dimitrov, Philipp Habegger, Alexander Hulpke, Jonathan Pakianathan, Wayne Peng, Harry Schmidt, Romyar Sharifi, and David Zureick-Brown for helpful conversations. M. Atiyah and I. MacDonald, *Introduction to Commutative Algebra*, Addison-Wesley, Reading, 1969. ix+128 pp. F. Ahmad, R. L. Benedetto, J. Cain, G. Carroll, L. Fang, *The arithmetic basilica: a quadratic PCF arboreal Galois group*, J. Number Theory **238** (2022), 842--868. J. Anderson, S. Hamblen, B. Poonen, and L. Walton, *Local arboreal representations*, Int. Math. Res. Not. IMRN (2018), no. 19, 5974--5994. L. Berger, *Iterated extensions and relative Lubin-Tate groups*, Ann. Math. Qué. **40** (2016), no. 1, 17--28. R. L. Benedetto, X. Faber, B. Hutz, J. Juul, and Y. Yasufuku, *A large arboreal Galois representation for a cubic postcritically finite polynomial*, Res. Number Theory **3** (2017), Paper No. 29, 21 pp. R. L. Benedetto and J. Juul, *Odoni's conjecture for number fields*, Bull. London Math. Soc. **51** (2019), 237--250. N. Boston and R. Jones, *Arboreal Galois representations*, Geom. Dedicata **124** (2007), 27--35. N. Boston and R. Jones, *The image of an arboreal Galois representation*, Pure Appl. Math. Q. **5** (2009), no. 1, 213--225. S. Bosch, U. Güntzer, and R. Remmert, *Non-Archimedean analysis*, Grundlehren der mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], **261**. Springer-Verlag, Berlin, 1984. xii+436 pp. A. Bridy, J. R. Doyle, D. Ghioca, L.-C. Hsia, and T. J. Tucker, *Finite index theorems for iterated Galois groups of unicritical polynomials*, Trans. Amer. Math. Soc. **374** (2021), no. 1, 733--752. A. Bridy, J. R. Doyle, D. Ghioca, L.-C. Hsia, and T. J. Tucker, *A question for iterated Galois groups in arithmetic dynamics*, Canad. Math. Bull. **64** (2021), no. 2, 401--417. A. Bridy, P. Ingram, R. Jones, J. Juul, A. Levy, M. Manes, S.
Rubinstein-Salzedo, and J. H. Silverman, *Finite ramification for preimage fields of post-critically finite morphisms*, Math. Res. Lett. **24** (2017), no. 6, 1633--1647. A. Bridy and T. J. Tucker, *Finite index theorems for iterated Galois groups of cubic polynomials*, Math. Ann. **373** (2019), no. 1-2, 37--72. I. I. Bouw, Ö. Ejder, and V. Karemaker, *Dynamical Belyi maps and arboreal Galois groups*, Manuscripta Math. **165** (2021), no. 1-2, 1--34. P. de la Harpe, *Topics in geometric group theory*, Chicago Lectures in Mathematics. University of Chicago Press, Chicago, 2000. P. Dittmann and B. Kadets, *Odoni's conjecture on arboreal Galois representations is false*, Proc. Amer. Math. Soc. **150** (2022), no. 8, 3335--3343. A. Grothendieck, *Revêtements étales et groupe fondamental. Fasc. I: Exposés 1 à 5*, Institut des Hautes Études Scientifiques, Paris, 1963, Troisième édition, corrigée, Séminaire de Géométrie Algébrique, 1960/61. P. Ingram, *Arboreal Galois representations and uniformization of polynomial dynamics*, Bull. Lond. Math. Soc. **45** (2013), no. 2, 301--308. R. Jones, *Iterated Galois towers, their associated martingales, and the $p$-adic Mandelbrot set*, Compos. Math. **143** (2007), no. 5, 1108--1126. R. Jones, *Galois representations from pre-image trees: an arboreal survey*, Actes de la Conférence "Théorie des Nombres et Applications", Publ. Math. Besançon Algèbre Théorie Nr. vol. 2013, Presses Univ. Franche-Comté, Besançon, 2013, pp. 107--136. R. Jones and A. Levy, *Eventually stable rational functions*, Int. J. Number Theory **13** (2017), no. 9, 2299--2318. R. Jones and M. Manes, *Galois theory of quadratic rational functions*, Comment. Math. Helv. **89** (2014), no. 1, 173--213. J. Juul, *Iterates of generic polynomials and generic rational functions*, Trans. Amer. Math. Soc. **371** (2019), no. 2, 809--831. J. Juul, P. Kurlberg, K. Madhu, and T. J. Tucker, *Wreath products and proportions of periodic points*, Int. Math. Res. Not.
(IMRN), 2016, no. 13, 3944--3969. J. Juul, H. Krieger, N. Looper, M. Manes, B. Thompson, and L. Walton, *Arboreal representations for rational maps with few critical points*, Research directions in number theory---Women in Numbers IV, Assoc. Women Math. Ser., vol. 19, Springer, Cham, \[2019\] © 2019, pp. 133--151. B. Kadets, *Large arboreal Galois representations*, J. Number Theory **210** (2020), 416--430. S. Lang, *Algebra*, third ed. Graduate Texts in Mathematics **211**. Springer-Verlag, New York, 2002. xvi+914 pp. N. R. Looper, *Dynamical Galois groups of trinomials and Odoni's conjecture*, Bull. London Math. Soc. **51** (2019), 278--292. J. Neukirch, *Algebraic number theory*. Translated from the 1992 German original and with a note by Norbert Schappacher. With a foreword by G. Harder. Grundlehren der mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], **322**. Springer-Verlag, Berlin, 1999. xviii+571 pp. R. W. K. Odoni, *The Galois theory of iterates and composites of polynomials*, Proc. London Math. Soc. (3) **51** (1985), no. 3, 385--414. R. W. K. Odoni, *On the prime divisors of the sequence $w\sb {n+1}=1+w\sb 1\cdots w\sb n$*, J. London Math. Soc. (2) **32** (1985), no. 1, 1--11. R. W. K. Odoni, *Realising wreath products of cyclic groups as Galois groups*, Mathematika **35** (1988), no. 1, 101--113. R. Pink, *Profinite iterated monodromy groups arising from quadratic polynomials*, preprint 2013, available at `arXiv:1307.5678`. R. Pink, *Profinite iterated monodromy groups arising from quadratic morphisms with infinite postcritical orbits*, preprint 2013, available at `arXiv:1309.5804`. J.-P. Serre, *Lectures on the Mordell-Weil theorem*, third ed., Aspects of Mathematics, Friedr. Vieweg & Sohn, Braunschweig, 1997. J. Silverman, *The arithmetic of elliptic curves*, second ed., Graduate Texts in Mathematics **106**. Springer, Dordrecht, 2009. xx+513 pp. M. O-S Sing, *A dynamical analogue of Sen's theorem*, Int. Math. Res. 
Not. IMRN (2023), no. 9, 7502--7540. J. Specter, *Polynomials with surjective arboreal Galois representations exist in every degree*, preprint 2018, available at `arXiv:1803.00434`. M. Stoll, *Galois groups over ${\bf Q}$ of some iterated polynomials*, Arch. Math. (Basel) **59** (1992), no. 3, 239--244. H. Völklein, *Groups as Galois groups*, Cambridge Studies in Advanced Mathematics, vol. 53, Cambridge University Press, Cambridge, 1996, An introduction.
--- abstract: | In this article, we consider the following weighted fractional Hardy inequality: $$\begin{aligned} \label{Fractional Hardy_abst} \int_{\mathbb{R}^N} |w(x)||u(x)|^p \mathrm{d}x \leq C \int_{\mathbb{R}^N \times \mathbb{R}^N} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}:= \|u\|_{s,p}^p\,, \ \forall u \in \mathcal{D}^{s,p}(\mathbb{R}^N), \end{aligned}$$ where $0<s<1<p<\frac{N}{s}$, and ${\mathcal D}^{s,p}(\mathbb{R}^N)$ is the completion of $C_c^1(\mathbb{R}^N)$ with respect to the seminorm $\|\cdot\|_{s,p}$. We denote the space of admissible $w$ in [\[Fractional Hardy_abst\]](#Fractional Hardy_abst){reference-type="eqref" reference="Fractional Hardy_abst"} by $\mathcal{H}_{s,p}(\mathbb{R}^N)$. A Maz'ya-type characterization helps us to define a Banach function norm on $\mathcal{H}_{s,p}(\mathbb{R}^N)$. Using the Banach function space structure and concentration-compactness type arguments, we provide several characterizations for the compactness of the map ${W}(u)= \int_{{{\mathbb R}^N}} |w| |u|^p {\rm d}x$ on $\mathcal{D}^{s,p}(\mathbb{R}^N)$. In particular, we prove that ${W}$ is compact on $\mathcal{D}^{s,p}(\mathbb{R}^N)$ if and only if $w \in \mathcal{H}_{s,p,0}(\mathbb{R}^N):=\overline{C_c(\mathbb{R}^N)} \ \mbox{in} \ \mathcal{H}_{s,p}(\mathbb{R}^N)$. Further, we study the following eigenvalue problem: $$(-\Delta_{p})^{s}u = \lambda w(x) |u|^{p-2}u ~~\text{in}~\mathbb{R}^{N},$$ where $(-\Delta_{p})^{s}$ is the fractional $p$-Laplace operator and $w = w_{1} - w_{2}~\text{with}~ w_{1},w_{2} \geq 0,$ is such that $w_{1} \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$ and $w_{2} \in L^{1}_{loc}({\mathbb R}^N)$.
author: - "Ujjal Das, Rohit Kumar and Abhishek Sarkar[^1]" bibliography: - ref.bib title: Characterizations of compactness and weighted eigenvalue problem for fractional $p$-Laplacian in ${\mathbb R}^N$ --- ***Mathematics Subject Classification (2010)---*** 26D10, 31B15, 35A15, 35R11 ***Keywords---*** Weighted fractional Hardy inequality, compactness, variational methods, concentration-compactness, eigenvalue problem # Introduction For $p\in (1,N)$ and a domain $\Omega$ in ${\mathbb R}^N$, the Beppo Levi space $\mathcal{D}^{1,p}_0(\Omega)$ is the completion of $C_c^{1}(\Omega)$ with respect to the norm $\norm{u}_{\mathcal{D}^{1,p}_0(\Omega)} :=\left[ \int_{\Omega}|\nabla u|^p {\rm d}x\right]^ \frac{1}{p}$. Let us first recall the following classical Hardy inequality: $$\label{CHS} \int_{\Omega} \frac{1}{|x|^p} |u|^p\ {\rm d}x\leq \displaystyle \left(\frac{p}{N-p} \right)^p \int_{\Omega} |\nabla u|^p \ {\rm d}x, \ \forall u \in \mathcal{D}^{1,p}_0(\Omega) \,.$$ The one-dimensional Hardy inequality was proved by Hardy (see [@Hardy p. 316]). For a detailed historical background on this inequality, we refer to [@Kufner]. Many authors have generalised this inequality by identifying more general weight functions $w \in L^1_{loc}(\Omega)$ (instead of $\frac{1}{|x|^p}$) so that the following inequality holds $$\label{WHS} \int_{\Omega} |w| |u|^p {\rm d}x\leq \displaystyle C \int_{\Omega} |\nabla u|^p {\rm d}x, \ \forall u \in \mathcal{D}^{1,p}_0(\Omega)$$ for some $C>0$. We denote $\mathcal{H}_p(\Omega)=\{w\in L^1_{loc}(\Omega): w \ \mbox{satisfies} \ \eqref{WHS} \}.$ One can use the Sobolev embedding to show that $L^{\frac{N}{p}}(\Omega) \subseteq \mathcal{H}_p(\Omega)$ [@Allegretto for $p=2$] and [@Allegretto1995eigenvalues for $p \in (1,N)$]. Further, using the Lorentz-Sobolev embedding, Visciglia [@Visciglia] showed that $L^{\frac{N}{p},\infty}(\Omega)\subseteq \mathcal{H}_p(\Omega)$ for $p=2$.
The inclusion is also true for general $p$ due to the Lorentz-Sobolev embedding. Indeed, $L^{\frac{N}{p},\infty}(\Omega)$ does not exhaust $\mathcal{H}_p(\Omega)$, for instance, see [@Anoop2015weighted]. Further, we refer to [@Anoop2021compactness; @anoop-p] for more nontrivial spaces contained in $\mathcal{H}_p(\Omega)$. In this context, Maz'ya [@Mazya Section 2.4.1, page 128] gave a very intrinsic characterization of $\mathcal{H}_p(\Omega)$ using the $p$-capacity. Recall that, for $F \Subset \Omega, \ { \text{i.e. } F \subset \Bar{F} \subset \Omega \text{ and } \Bar{F} \text{ is compact},}$ the $p$-capacity of $F$ relative to $\Omega$ is defined as, $${\text{Cap}_p(F,\Omega)} = \inf \left\{ \displaystyle \int_{\Omega} | \nabla u |^p {\rm d}x: u \in \mathcal{N}_p (F) \right \},$$ where $\mathcal{N}_p (F)= \{ u \in {\mathcal D}_0^{1,p}(\Omega): u \geq 1 \ \mbox{in a neighbourhood of}\; F \}$. Maz'ya's characterization ensures that $w \in \mathcal{H}_p(\Omega)$ if and only if $$\begin{aligned} \norm{w}_{\mathcal{H}_p(\Omega)}:= \sup\left\{ \frac{\int_{F} |w|{\rm d}x}{\text{Cap}_p(F,\Omega)}:F \Subset \Omega; |F|\ne 0 \right\}<\infty. \end{aligned}$$ In this view, $\mathcal{H}_p(\Omega)$ is identified as $\mathcal{H}_p(\Omega)=\{w\in L^1_{loc}(\Omega): \norm{w}_{\mathcal{H}_p(\Omega)}<\infty \}$. Indeed, $\norm{.}_{\mathcal{H}_p(\Omega)}$ is a Banach function space norm on $\mathcal{H}_p(\Omega)$ [@Anoop2021compactness]. Next, one may look for $w \in \mathcal{H}_p(\Omega)$ for which the best constant in [\[WHS\]](#WHS){reference-type="eqref" reference="WHS"} is attained in $\mathcal{D}^{1,p}_0(\Omega)$. Let $\mathcal{B}_{p}(w)$ be the best constant in [\[WHS\]](#WHS){reference-type="eqref" reference="WHS"} i.e., $\mathcal{B}_{p}(w)$ is the least possible constant so that [\[WHS\]](#WHS){reference-type="eqref" reference="WHS"} holds. 
Therefore, for $w \in \mathcal{H}_p(\Omega)$, we have $$\label{bestHardy} \mathcal{B}_{p}(w)^{-1}=\inf \left\{\int_{{\Omega}} |\nabla u|^p {\rm d}x: u \in \mathcal{D}^{1,p}_0({\Omega}), \int_{\Omega} |w| |u|^p {\rm d}x=1 \right\} \,.$$ Thus the best constant $\mathcal{B}_{p}(w)$ is attained in $\mathcal{D}^{1,p}_0({\Omega})$ if and only if [\[bestHardy\]](#bestHardy){reference-type="eqref" reference="bestHardy"} admits a minimizer. One of the simplest conditions that guarantee the existence of a minimizer for [\[bestHardy\]](#bestHardy){reference-type="eqref" reference="bestHardy"} is the compactness of the map $$W(u)= \int_{\Omega} |w||u|^p {\rm d}x$$ on $\mathcal{D}^{1,p}_0(\Omega)$ (i.e., for $u_n \rightharpoonup u$ in $\mathcal{D}^{1,p}_0(\Omega)$, $W(u_n) \rightarrow W(u)$ as $n \rightarrow\infty$). Many authors have given various sufficient conditions for the compactness of the map $W$. For example, Visciglia [@Visciglia] proved the compactness of $W$ for $w\in L^{\frac{N}{p},d}(\Omega)$ with $d<\infty$, which was later extended to $w\in \overline{\text{C}_c^{\infty}(\Omega)}$ in $L^{\frac{N}{p},\infty}(\Omega)$ [@anoop-p]. Furthermore, in [@Anoop2021compactness], the authors identified the optimal space for the compactness of $W$, which is precisely $\overline{\text{C}_c^{\infty}(\Omega)}$ in $\mathcal{H}_p(\Omega).$ In this article, we are interested in the non-local analogue of [\[WHS\]](#WHS){reference-type="eqref" reference="WHS"}, namely, the weighted fractional Hardy inequality: $$\begin{aligned} \label{Fractional Hardy} \int_{\Omega} |w(x)||u(x)|^p {\rm d}x\leq C \int_{{{\mathbb R}^N \times {\mathbb R}^N}} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}:= { \norm{u}_{s,p}^p}\,, \ \forall u \in \mathcal{D}^{s,p}_0(\Omega), \end{aligned}$$ where $0<s<1<p<\frac{N}{s}$, and ${\mathcal D}^{s,p}_0(\Omega)$ is the completion of $C_c^1(\Omega)$ with respect to the seminorm $\|\cdot\|_{s,p}$.
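To make the Gagliardo seminorm in [\[Fractional Hardy\]](#Fractional Hardy){reference-type="eqref" reference="Fractional Hardy"} concrete, here is a crude Monte Carlo estimate of a truncated seminorm in dimension $N=1$ (an illustration only; the truncation radius, sample size, seed, and test function are our own choices, and the tails $|x|,|y|>R$ are discarded):

```python
import math
import random

def gagliardo_seminorm_p(u, s, p, R=3.0, samples=50_000, seed=1):
    # Monte Carlo estimate of the truncated Gagliardo seminorm (N = 1):
    #   int int_{[-R,R]^2} |u(x)-u(y)|^p / |x-y|^{1+sp} dx dy.
    # For Lipschitz u the integrand is bounded near the diagonal when sp < 1.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x, y = rng.uniform(-R, R), rng.uniform(-R, R)
        if x != y:
            total += abs(u(x) - u(y)) ** p / abs(x - y) ** (1 + s * p)
    return (2 * R) ** 2 * total / samples  # box area times sample mean

bump = lambda x: math.exp(-x * x)
est = gagliardo_seminorm_p(bump, s=0.4, p=2)  # here sp = 0.8 < N = 1
assert 0 < est < float("inf")
```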
In the case of $\Omega=\mathbb{R}^N$, we simply denote ${\mathcal D}^{s,p}_0(\mathbb{R}^N)={\mathcal D}^{s,p}(\mathbb{R}^N)$. **Definition 1** ($(s,p)$-Hardy Potential). *A function $w \in L^1_{\mathrm{loc}}(\Omega)$ is called an $(s,p)$-Hardy potential if $w$ satisfies [\[Fractional Hardy\]](#Fractional Hardy){reference-type="eqref" reference="Fractional Hardy"}. We denote the space of $(s,p)$-Hardy potentials by $\mathcal{H}_{s,p}(\Omega)$.* If $\Omega$ admits the *regional fractional Poincaré inequality* (see [@CCGP2021]), then we have $L^{\infty}(\Omega) \subset \mathcal{H}_{s,p}(\Omega)$. Examples of such domains can be found in [@CCGP2021] and the references therein. Further, we know that the homogeneous weight function $w(x)=\displaystyle\frac{1}{|x|^{sp}}$ belongs to $\mathcal{H}_{s,p}({\mathbb R}^N)$, see [@RS2008]. Note that for $\Omega=\mathbb{R}^N$, due to the fractional Sobolev inequality (see [@DNPV2012 Theorem 6.5]), we have $L^r(\Omega) \subset \mathcal{H}_{s,p}(\Omega)$ for $r = \frac{N}{sp}$. In fact, as in the local case (i.e., $s=1$), we can also characterize the space $\mathcal{H}_{s,p}(\Omega)$ using the $(s, p)$-capacity, which is defined as follows: **Definition 2** ($(s,p)$-Capacity). *For any $F \Subset \Omega$, we define $$\begin{aligned} {\rm{Cap}}_{s,p}(F,\Omega)=\inf \big\{\norm{u}_{s,p}^p: u \in \mathcal{N}_{s,p}(F,\Omega)\big\}\end{aligned}$$ where $\mathcal{N}_{s,p}(F,\Omega):= \{ u \in {\mathcal D}^{s,p}_0(\Omega): u \geq 1 \text{ a.e. in } F\}.$ For $\Omega={\mathbb R}^N$, we shall write ${\rm{Cap}}_{s,p}(F,{\mathbb R}^N)$ as ${\rm{Cap}}_{s,p}(F)$ and $\mathcal{N}_{s,p}(F,{\mathbb R}^N)$ as $\mathcal{N}_{s,p}(F).$ In fact, in the definition of $\mathcal{N}_{s,p}(F,\Omega)$, one may assume that $u =1$ a.e.
on $F$ and $0 \leq u \leq 1$ in $\Omega$ (see [@Xiao Theorem 2.1]).* Motivated by the local case (i.e., $s=1$), for $w \in L^1_{loc}(\Omega)$, we define $$\begin{aligned} \norm{w}_{\mathcal{H}_{s,p}(\Omega)} = \sup_{F \Subset \Omega} \frac{\int_{F} |w(x)| {\rm d}x}{\mbox{Cap}_{s,p}(F,\Omega)}.\end{aligned}$$ Observe that, if $w$ satisfies [\[Fractional Hardy\]](#Fractional Hardy){reference-type="eqref" reference="Fractional Hardy"}, then for any $F \Subset \Omega$ and $u \in \mathcal{N}_{s,p}(F,\Omega)$, we have $$\int_{F} |w(x)| {\rm d}x\leq \int_{\Omega} |w(x)| |u(x)|^p {\rm d}x\leq C \norm{u}_{s,p}^p.$$ This implies $\int_{F} |w(x)| {\rm d}x\leq C \mathrm{Cap}_{s,p}(F,\Omega).$ Therefore, $w$ necessarily satisfies $\norm{w}_{\mathcal{H}_{s,p}(\Omega)} < \infty$. In fact, this condition is also sufficient for $w$ to satisfy [\[Fractional Hardy\]](#Fractional Hardy){reference-type="eqref" reference="Fractional Hardy"} [@Dyda Proposition 3.1] (see also Theorem [Theorem 11](#H1){reference-type="ref" reference="H1"}). Therefore, the space of $(s,p)$-Hardy potentials can be identified as $$\mathcal{H}_{s,p}(\Omega)=\big\{w \in L^1_{loc}(\Omega): \norm{w}_{\mathcal{H}_{s,p}(\Omega)} < \infty \big\} \,.$$ Indeed, $\|\cdot\|_{\mathcal{H}_{s,p}(\Omega)}$ is a Banach function norm on $\mathcal{H}_{s,p}(\Omega)$ (for more details we refer to [@Zaanen1958 Section 30, Chapter 6]). Next, let $\mathcal{B}_{s,p}(w)$ be the best constant in [\[Fractional Hardy\]](#Fractional Hardy){reference-type="eqref" reference="Fractional Hardy"} i.e., $\mathcal{B}_{s,p}(w)$ is the least possible constant so that [\[Fractional Hardy\]](#Fractional Hardy){reference-type="eqref" reference="Fractional Hardy"} holds. 
Therefore, for $w \in \mathcal{H}_{s,p}(\Omega)$, we have $$\label{bestHardy1} \mathcal{B}_{s,p}(w)^{-1}=\inf \left\{ \|u\|_{{s,p}}^p : u \in \mathcal{D}^{s,p}_0({\Omega}), \int_{\Omega} |w| |u|^p {\rm d}x=1 \right\}.$$ Similar to the local case, the compactness of the map $W(u):=\int_{\Omega} |w| |u|^p {\rm d}x$ on $\mathcal{D}^{s,p}_0({\Omega})$ ensures that the best constant $\mathcal{B}_{s,p}(w)$ is attained in $\mathcal{D}^{s,p}_0({\Omega})$. Notice that, if $w \equiv 1$ and $\Omega$ is bounded, then $W$ is compact on $\mathcal{D}^{s,p}_0({\Omega})$; this is proved for $p=2$ in [@Servadei2013] and for general $p$ in [@Franzina2013]. For a bounded domain $\Omega$ and $sp<N$, the compactness of $W$ is obtained for positive $w \in L^{\alpha}(\Omega)$ with $\alpha > \frac{N}{sp}$ in [@Pucci2015eigenvalue] and for sign-changing $w \in L^{\alpha}(\Omega)$ with $\alpha = \frac{N}{sp}$ in [@Iannizzotto2021monotonicity]. For $sp<N$ and $\Omega={\mathbb R}^N$, the compactness of $W$ is obtained for $w \in L^{\frac{N}{sp}}(\Omega)\cap L^{\infty}(\Omega)$ in [@Pezzo2020], and for $w=w_1-w_2$ with $w_1\in L^{\frac{N}{sp}}(\Omega) \cap L^{\infty}(\Omega)$, $w_2 \in L^{\infty}(\Omega)$ and $w_1 \not\equiv 0$ in [@Cui]. We define the following closed subspace of $\mathcal{H}_{s,p}(\Omega)$: $$\mathcal{H}_{s,p,0}(\Omega)=\overline{C_c(\Omega)} \ \mbox{in} \ \mathcal{H}_{s,p}(\Omega).$$ For $w \in \mathcal{H}_{s,p}(\Omega)$ and $x \in \overline{\Omega}$, we define $$\mathcal{C}_w(x):= \lim_{r \to 0} \|w \chi_{B_r(x)}\|_{\mathcal{H}_{s,p}(\Omega)},\ \mathcal{C}_w(\infty):=\lim_{r \to \infty} \|w \chi_{B_r(0)} \|_{\mathcal{H}_{s,p}(\Omega)} \text{ and } \mathcal{C}_w^*:=\sup_{x\in \overline{\Omega}} \mathcal{C}_w(x) \,,$$ where $B_r(x)$ denotes the ball of radius $r$ centered at $x$. In this article, for $\Omega={\mathbb R}^N$, we prove the following equivalent characterizations for the compactness of $W$ on $\mathcal{D}^{s,p}(\mathbb{R}^N)$. **Theorem 3**. *Let $w \in \mathcal{H}_{s,p}({\mathbb R}^N)$. Then, the following statements are equivalent:* 1.
*The map $W:\mathcal{D}^{s,p}(\mathbb{R}^N) \to {\mathbb R}$, defined as $W(u)=\int_{{\mathbb R}^N} |w| |u|^p {\rm d}x$, is compact,* 2. *$w$ has absolutely continuous norm in $\mathcal{H}_{s,p}({\mathbb R}^N)$, i.e., for any sequence of open sets $G_{n+1} \subset G_{n}$, $n=1,2,\cdots$, with $\displaystyle\bigcap_{n=1}^{\infty}G_n =\emptyset$, we have $\|w\chi_{G_n}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \to 0$ as $n \to \infty$.* 3. *$w \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$,* 4. *${\mathcal C}^*_w =0= {\mathcal C}_w(\infty)$.* Next, we are interested in studying the following fractional $p$-Laplace weighted eigenvalue problem: $$\label{Weighetd eigenvalue problem} (-\Delta_{p})^{s}u = \lambda w(x) |u|^{p-2}u ~~\text{in}~\mathbb{R}^{N},$$ where $0<s<1<p<\frac{N}{s}$ and $(-\Delta_{p})^{s}$ is the fractional $p$-Laplace operator defined on smooth functions as $$(-\Delta_{p})^{s}u(x) = 2 \lim_{\epsilon \rightarrow 0^{+}} \int_{\mathbb{R}^{N} \backslash B_{\epsilon}(x)} \frac{|u(x) - u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+sp}}\mathrm{d}y~~\text{for}~x \in \mathbb{R}^{N} \,,$$ and the weight function $w = w_{1} - w_{2}$, with $w_{1},w_{2} \geq 0$, is such that $w_{1} \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$ and $w_{2} \in L^{1}_{loc}({\mathbb R}^N)$.
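The principal-value definition above is easy to probe numerically. The following sketch (a one-dimensional illustration of our own; the grid, the truncation $[-L,L]$, the fixed cutoff in place of the limit $\epsilon \to 0^+$, and the Gaussian test function are all assumptions, not taken from this paper) evaluates the integral by a Riemann sum:

```python
import numpy as np

def frac_p_laplacian_1d(u, x, s=0.5, p=2.0, L=50.0, n=400001, eps=1e-3):
    """1-D analogue of (-Delta_p)^s u(x): approximate
    2 * int_{R \ B_eps(x)} |u(x)-u(y)|^{p-2}(u(x)-u(y)) / |x-y|^{1+sp} dy
    by a Riemann sum, with the limit eps -> 0+ replaced by a small fixed cutoff."""
    y = np.linspace(x - L, x + L, n)
    dy = y[1] - y[0]
    y = y[np.abs(y - x) > eps]          # excise the singular ball B_eps(x)
    d = u(x) - u(y)
    # write |d|^{p-2} d as sign(d)|d|^{p-1}, so d = 0 never raises 0 to a negative power
    integrand = np.sign(d) * np.abs(d) ** (p - 1) / np.abs(x - y) ** (1 + s * p)
    return 2.0 * np.sum(integrand) * dy

gauss = lambda t: np.exp(-t ** 2)
const = lambda t: 1.0 + 0.0 * t

print(frac_p_laplacian_1d(const, 0.0))  # constants are annihilated: exactly 0
print(frac_p_laplacian_1d(gauss, 0.0))  # strict maximum at 0: positive value
```

The two printed values reflect two basic features of the operator: it vanishes on constants, and at a strict maximum the integrand $u(x)-u(y)$ is positive for every $y$, so $(-\Delta_p)^s u(x) > 0$ there.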
If the weighted eigenvalue problem [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"} has a non-trivial solution for some $\lambda \in {\mathbb R}$, i.e., there exists $u \in {\mathcal D}^{s,p}(\mathbb{R}^{N}) \backslash \{0\}$ such that the following Euler-Lagrange equation $$\label{Lagrange equation} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x)-u(y)|^{p-2} (u(x)-u(y)) (v(x) -v(y))}{ |x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}= \lambda \int_{\mathbb{R}^{N}} w|u|^{p-2}uv {\rm d}x,$$ holds for all $v \in {\mathcal D}^{s,p}(\mathbb{R}^{N})$, then the scalar $\lambda \in {\mathbb R}$ is called an eigenvalue of [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"}. A function $u$ satisfying [\[Lagrange equation\]](#Lagrange equation){reference-type="eqref" reference="Lagrange equation"} is called an eigenfunction corresponding to the eigenvalue $\lambda$. The first eigenvalue is the least eigenvalue, defined by $\lambda_{1}:= \inf\{\|u\|_{s,p}^{p}: u \in {\mathcal D}^{s,p}({\mathbb R}^N),\ \int_{\mathbb{R}^{N}} w|u|^{p}\,{\rm d}x=1\}$, and the corresponding eigenfunction is known as the first eigenfunction. An eigenvalue $\lambda$ is called principal if at least one of the eigenfunctions associated with $\lambda$ is of constant sign. If the eigenfunctions associated with the eigenvalue $\lambda$ are unique up to a constant multiple, then $\lambda$ is called a simple eigenvalue.
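For $p=2$ the variational characterization of $\lambda_1$ can be explored on a discretization. The sketch below is our own illustration, not part of the paper: the truncation to $[-L,L]$ (with $u=0$ outside), the schematic constants in the quadratic form, and the weight $w(x)=(1+x^2)^{-1}$ (which lies in $L^{N/sp}({\mathbb R})\cap L^{\infty}({\mathbb R})$ for $N=1$, $sp=1/2$) are all assumptions. It assembles a matrix analogue of the Gagliardo form and solves the generalized eigenvalue problem $Av=\lambda Wv$; the smallest eigenvalue comes out positive and simple, and its eigenvector has constant sign, consistent with the simplicity and principality of $\lambda_1$:

```python
import numpy as np

s, L, n = 0.5, 5.0, 401
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Off-diagonal Gagliardo kernel (p = 2): k_ij ~ dx^2 / |x_i - x_j|^{1+2s}
diff = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(diff, 1.0)                 # dummy value; diagonal is zeroed next
K = dx ** 2 / diff ** (1 + 2 * s)
np.fill_diagonal(K, 0.0)

# Interaction with the exterior {|y| > L}, where u = 0, in closed form:
# int_{|y|>L} |x_i - y|^{-(1+2s)} dy = ((L+dx-x_i)^{-2s} + (L+dx+x_i)^{-2s}) / (2s);
# the exterior is pushed out by one cell to avoid the boundary singularity.
ext = ((L + dx - x) ** (-2 * s) + (L + dx + x) ** (-2 * s)) / (2 * s)

# u^T A u approximates the Gagliardo double integral for u extended by zero
A = 2 * (np.diag(K.sum(axis=1)) - K) + 2 * np.diag(ext * dx)

w = 1.0 / (1.0 + x ** 2)                    # positive decaying weight (assumed)
W = w * dx                                  # diagonal of the weight matrix

# Reduce A v = lambda W v to an ordinary symmetric problem via W^{-1/2}
B = A / np.sqrt(np.outer(W, W))
vals, vecs = np.linalg.eigh(B)              # eigenvalues in ascending order
v1 = vecs[:, 0] / np.sqrt(W)                # back-transformed first eigenfunction
v1 *= np.sign(v1[np.argmax(np.abs(v1))])    # fix the overall sign

print(vals[0] > 0, vals[0] < vals[1], v1.min() > 0)   # True True True
```

The constant-sign output is a discrete Perron-Frobenius effect: $B$ is a symmetric positive definite matrix with strictly negative off-diagonal entries, so the eigenvector of its smallest eigenvalue is strictly positive and that eigenvalue is simple.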
Let us consider the problem [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"} in a bounded domain, i.e., $$\begin{aligned} \label{Weighetd eigenvalue problem bounded domain} \begin{cases} &(-\Delta_{p})^{s}u = \lambda w(x) |u|^{p-2}u ~~\text{in}~\Omega,\\ &u = 0 ~~\text{in}~\mathbb{R}^N \setminus \Omega, \end{cases}\end{aligned}$$ where $sp<N$ and $\Omega$ is an open bounded subset of $\mathbb{R}^N$. The existence, simplicity, and principality of eigenvalues of [\[Weighetd eigenvalue problem bounded domain\]](#Weighetd eigenvalue problem bounded domain){reference-type="eqref" reference="Weighetd eigenvalue problem bounded domain"} have been discussed extensively in the literature. For $p=2$ and $w \equiv 1$, Servadei and Valdinoci [@Servadei2013] proved the existence of infinitely many eigenvalues of the problem [\[Weighetd eigenvalue problem bounded domain\]](#Weighetd eigenvalue problem bounded domain){reference-type="eqref" reference="Weighetd eigenvalue problem bounded domain"}, i.e., $0< \lambda_{1} < \lambda_{2} \leq \lambda_{3} \leq \cdots \leq \lambda_{k} \leq \cdots, ~\lambda_{k} \rightarrow \infty \text{ as }k \rightarrow \infty$. Also, the authors proved the existence of a non-negative eigenfunction corresponding to the first eigenvalue $\lambda_1$. For general $p$, we refer to [@Franzina2013]. Even when $sp>N$, the first eigenvalue $\lambda_1$ is simple and isolated, and the corresponding eigenfunction is positive in $\Omega$ (see [@Lindgren2014]).
In 2015, Pucci and Saldi [@Pucci2015eigenvalue] obtained the existence of a positive first eigenvalue of [\[Weighetd eigenvalue problem bounded domain\]](#Weighetd eigenvalue problem bounded domain){reference-type="eqref" reference="Weighetd eigenvalue problem bounded domain"} when $w \in L^{\alpha}(\Omega)$ is positive with $\alpha > \frac{N}{sp}$; for $\alpha = \frac{N}{sp}$, we refer to [@Iannizzotto2021monotonicity], where the author proved the existence of infinitely many eigenvalues and showed that the first eigenvalue is simple, isolated and principal. For the local case (i.e., $s=1$), the existence of a positive principal eigenvalue of [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"} was studied in [@Fleckinger2004antimaximum; @Fleckinger2005principal; @anoop-p]. Huang [@Huang1995eigenvalues], Allegretto and Huang [@Allegretto1995eigenvalues] and Anoop [@anoop-p] studied the existence, simplicity, and uniqueness of the first eigenvalue of [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"}. Moreover, they obtained the existence of an infinite sequence of eigenvalues. Later, Pezzo and Quaas [@Pezzo2020 Theorem 1.1, Theorem 1.2] studied the nonlocal version of [@Allegretto1995eigenvalues] in two cases. For $sp<N$, they considered a sign-changing $w \in L^{\frac{N}{sp}}(\mathbb{R}^{N})\cap L^{\infty}(\mathbb{R}^{N})$ with $w_1 \not\equiv 0$, and on the other hand, for $sp \geq N$, they proceeded with $w \in L^{\infty}({\mathbb R}^N)$, $w = w_1 -w_2$, with the assumptions: $(a) \ w_1(x) \geq 0 \text{ a.e. in } {\mathbb R}^N \text{ and }w_1 \in L^{\frac{N}{sp}}(\mathbb{R}^{N})\cap L^{\infty}(\mathbb{R}^{N})$, and $(b) \ w_2(x) \geq \epsilon>0 \ \text{a.e. in }{\mathbb R}^N.$ In both cases, the authors obtained the existence of infinitely many eigenvalues, with the first eigenvalue being simple and principal.
For $sp<N$, Cui and Sun [@Cui] recently obtained results for eigenvalues similar to those in [@Pezzo2020 Theorem 1.1] by considering $w=w_1 - w_2, \ w_1,w_2 \geq 0$, such that $w_1\in L^{\frac{N}{sp}}(\mathbb{R}^{N}) \cap L^{\infty}(\mathbb{R}^{N})$, $w_2 \in L^{\infty}(\mathbb{R}^{N})$ and $w_1 \not\equiv 0$. In this article, we generalise this result for $w=w_1-w_2$ with $0 \leq w_{1} \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$ and $0\leq w_{2} \in L^{1}_{loc}({\mathbb R}^N)$. Now we state our next result: **Theorem 4**. *Assume that $w_{1} \in \mathcal{H}_{s,p,0}(\mathbb{R}^{N})$ and $w_{2} \in L^{1}_{loc}({\mathbb R}^N)$ with $w_{1} \not\equiv 0$. Then there exists a sequence of eigenvalues $\{ \lambda_{k} \}$ for the problem ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}) such that $$0< \lambda_{1} < \lambda_{2} \leq \lambda_{3} \leq \cdots \leq \lambda_{k} \leq \cdots, ~~~~ \lambda_{k} \rightarrow \infty ~~\text{as} ~~k \rightarrow \infty.$$ The first eigenvalue $\lambda_{1}$ is simple and principal.* # Preliminaries {#Prelims} In this section, we recall the notion of symmetrization, define Lorentz spaces, and provide some known results that will be used in the subsequent sections. ## Symmetrization Assume that $\Omega \subset \mathbb{R}^{N}$ is an open set. The set of all extended real-valued Lebesgue measurable functions that are finite a.e. in $\Omega$ is denoted by $\mathcal{L}(\Omega)$. For $f \in \mathcal{L}(\Omega)$ and for $s>0$, we define $T_{f}(s) = \{ x:|f(x)|>s\}$, and the distribution function $\delta_{f}$ of $f$ is defined as $$\delta_{f}(s): = \mu(T_{f}(s)),~~\text{for}~s>0,$$ where $\mu$ denotes the Lebesgue measure. The one-dimensional decreasing rearrangement $f^{*}$ of $f$ is defined as follows: $$f^{*}(t) := \begin{cases} \mathrm{ess\ sup} |f|, & t=0, \\ \inf\{s>0:\delta_{f}(s)<t\}, & t>0. \end{cases}$$ The map $f \mapsto f^{*}$ is not sub-additive.
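This failure of sub-additivity is easy to see numerically. In the sketch below (a self-contained illustration of our own; the grid and the two indicator functions are assumed choices), $(f+g)^{*}(t)$ strictly exceeds $f^{*}(t)+g^{*}(t)$ at $t=1.5$ for indicators of two disjoint unit intervals:

```python
import numpy as np

x = np.linspace(0.0, 4.0, 40001)        # grid on [0, 4]
dx = x[1] - x[0]

def dist(f, s):
    """Distribution function delta_f(s) = |{ |f| > s }|, approximated on the grid."""
    return np.sum(np.abs(f) > s) * dx

def rearr(f, t, smax=5.0, m=5001):
    """Decreasing rearrangement f*(t) = inf{ s > 0 : delta_f(s) < t }."""
    for s in np.linspace(0.0, smax, m):
        if dist(f, s) < t:
            return s
    return smax

f = (x <= 1.0).astype(float)                   # indicator of [0, 1]
g = ((x >= 2.0) & (x <= 3.0)).astype(float)    # indicator of [2, 3]

print(rearr(f, 1.5), rearr(g, 1.5))  # both 0: each support has measure 1 < 1.5
print(rearr(f + g, 1.5))             # ~1: {f+g > s} has measure 2 for 0 <= s < 1
```

So $(f+g)^{*}(1.5) \approx 1$ while $f^{*}(1.5)+g^{*}(1.5)=0$, confirming that rearrangement does not respect sums.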
However, we obtain a sub-additive function from $f^{*}$, namely the maximal function $f^{**}$ of $f^{*}$, defined by $$f^{**}(t) = \frac{1}{t}\int_{0}^{t}f^{*}(\alpha) \mathrm{d}\alpha,~~~t>0.$$ The sub-additivity of $f^{**}$ with respect to $f$ helps us to define norms in certain function spaces. The Schwarz symmetrization of $f$ is defined by $$f^{\star}(x) = f^{*}(\omega_{N}|x|^{N}),~~\forall~x \in \Omega^{\star},$$ where $\omega_{N}$ is the measure of the unit ball in $\mathbb{R}^{N}$ and $\Omega^{\star}$ is the open ball centered at the origin with the same measure as $\Omega$. Next, we state an important inequality concerning the Schwarz symmetrization; see [@Edmunds Theorem 3.2.10]. **Proposition 5** (Hardy-Littlewood inequality). *Let $\Omega \subset \mathbb{R}^{N}$ with $N\geq 1$ and $f,g \in \mathcal{L}(\Omega)$ be non-negative functions. Then $$\label{Hardy Littlewood} \int_{\Omega} f(x) g(x) {\rm d}x\leq \int_{\Omega^{\star}} f^{\star}(x) g^{\star}(x) {\rm d}x= \int_{0}^{\mu(\Omega)} f^{*}(x) g^{*}(x) {\rm d}x.$$* ## Lorentz spaces The Lorentz spaces are refinements of the usual Lebesgue spaces and were introduced by Lorentz in [@Lorentz_1950]. We refer to the book [@Edmunds] for further details on Lorentz spaces and related results.\ Let $\Omega \subseteq {\mathbb R}^N$ be an open set and $(p,q) \in [1,\infty) \times [1,\infty]$; we define the Lorentz space $L^{p,q}(\Omega)$ as follows: $$L^{p,q}(\Omega) := \{ f \in \mathcal{L}(\Omega) ~:~ |f|_{(p,q)} < \infty \}.$$ Here $|f|_{(p,q)}$ is a complete quasi-norm on $L^{p,q}(\Omega)$, given by $$|f|_{(p,q)}:= \norm[\bigg]{t^{\frac{1}{p} - \frac{1}{q}} f^{*}(t)}_{L^{q}(0,\infty)} = \begin{cases} \bigg(\int_{0}^{\infty} \big[ t^{\frac{1}{p} - \frac{1}{q}} f^{*}(t) \big]^{q} \mathrm{d}t\bigg)^{\frac{1}{q}}; & 1\leq q <\infty, \\ \sup_{t>0}t^{\frac{1}{p}} f^{*}(t); & q= \infty.
\end{cases}$$ Moreover, define $$\|f\|_{(p,q)}:= \norm[\bigg]{t^{\frac{1}{p} - \frac{1}{q}} f^{**}(t)}_{L^{q}(0,\infty)}.$$ Then $\|f\|_{(p,q)}$ is a norm on $L^{p,q}(\Omega)$, and it is equivalent to the quasi-norm $|f|_{(p,q)}$ (see Lemma 3.4.6 of [@Edmunds]). ## Brézis-Lieb lemma and the discrete Picone-type identity The following lemma is due to Brézis and Lieb [@Brezis_Lieb_1983 Theorem 1]. **Lemma 6** (Brézis-Lieb lemma). *Let $(\Omega, \mathcal{A},\mu)$ be a measure space and $\langle f_{n} \rangle$ be a sequence of complex-valued measurable functions which are uniformly bounded in $L^{p}(\Omega,\mu)$ for some $0<p<\infty$. Moreover, if $\langle f_{n} \rangle$ converges to $f$ a.e., then $$\lim\limits_{n \rightarrow\infty} \Big( \|f_{n}\|_{p}^{p} - \|f_{n}-f\|_{p}^{p} \Big) = \|f\|_{p}^{p}.$$* Next, we recall a discrete Picone-type identity in [@Amghibech2008 Lemma 6.2] that will be useful to prove that the eigenfunctions corresponding to all but the first eigenvalue change sign. **Lemma 7**. *Let $p \in (1,+\infty)$. For $u,v: \mathbb{R}^{N} \rightarrow \mathbb{R}$ such that $u \geq 0 ~\text{and}~v>0$, we have $$K(u,v) \geq 0~~\text{in}~~\mathbb{R}^{N} \times \mathbb{R}^{N},$$ where $$\begin{aligned} \label{def:K}K(u,v)(x,y) = |u(x)-u(y)|^{p} - |v(x)-v(y)|^{p-2}(v(x)-v(y)) \bigg(\frac{u(x)^p}{v(x)^{p-1}} - \frac{u(y)^p}{v(y)^{p-1}}\bigg).\end{aligned}$$ The equality holds if and only if $u = cv$ a.e. for some constant $c$.* ## Some important estimates We recall the scaling property and the decay estimate of the nonlocal $(s,p)$-gradient given by Bonder et al. [@Bonder]. For $u \in \mathcal{D}^{s,p}({\mathbb R}^N)$, define $$|D^s u(x)|^p:= \int_{{\mathbb R}^N} \frac{|u(x+h)-u(x)|^p}{|h|^{N+sp}} \ \mathrm{d}h \,.$$ **Lemma 8** ([@Bonder Lemma 2.1]). *Let $\phi \in \mathcal{D}^{s,p}({\mathbb R}^N)$ and, given $r>0$ and $x_{0} \in {\mathbb R}^N$, define $\phi_{x_0,r}(x)=\phi(\frac{x-x_0}{r})$.
Then $$|D^s \phi_{x_0,r}(x)|^{p} = \frac{1}{r^{sp}} \bigg|D^s \phi\bigg(\frac{x-x_0}{r}\bigg)\bigg|^{p}.$$* **Lemma 9** ([@Bonder Lemma 2.2]). *Let $\phi \in W^{1,\infty}({\mathbb R}^N)$ be such that $\mbox{supp}(\phi) \subset B_{1}(0)$. Then, there exists a constant $\text{C}>0$ depending on $N,s,p$ and $\|\phi\|_{W^{1,\infty}}$ such that $$|D^s \phi(x)|^p \leq \text{C} \min\{1,|x|^{-(N+sp)}\} \,.$$* **Remark 10**. *Let $\phi \in W^{1,\infty}({\mathbb R}^N)$ with compact support. Then, by Lemma [Lemma 9](#Bonder lemma two){reference-type="ref" reference="Bonder lemma two"}, we have $D^s \phi \in L^{\infty}({\mathbb R}^N).$ Moreover, $$|D^s \phi(x)|^p \leq \text{C} \min\{1,|x|^{-(N+sp)}\} \,,$$ where $\text{C}>0$ depends on $N,s,p$ and $\|\phi\|_{W^{1,\infty}}$. Consequently, $D^s \phi \in L^p({\mathbb R}^N).$ Now, let $\psi \in C_b^{1}({\mathbb R}^N)$ be such that $0\leq \psi\leq 1$, $\psi=0$ on $B_1(0)$, and $\psi =1$ on $B_2(0)^c$. Then, $\phi := 1-\psi \in W^{1,\infty}({\mathbb R}^N)$ with support in $B_2(0)$ and $D^s \psi = D^s \phi$. Thus, $$|D^s \psi(x)|^p \leq \text{C} \min\{1,|x|^{-(N+sp)}\} \,,$$ where $\text{C}>0$ depends on $N,s,p$ and $\|\psi\|_{W^{1,\infty}}$.* # Compactness of the energy functional Next, we obtain a result that is a by-product of the fractional Maz'ya-type characterization of Hardy potentials. For that, we need the following notions. Let $\phi$ be a nondecreasing function on $[0,\infty)$, continuous from the right, with $\phi(0)=0$, $\phi(t)>0$ for $t>0$, and $\phi(t) \to \infty$ as $t \to \infty$. Let $\psi(s)= \sup\{t: \phi(t) \leq s\}$, and define the function $$P(u):=\int_0^{|u|} \psi(s) \mathrm{d}s.$$ **Theorem 11**. *Let $p>1$ such that $sp < N,~s \in (0,1)$ and $w \in L^{1}_{loc}({\mathbb R}^N)$.
If $w \in \mathcal{H}_{s,p}(\mathbb{R}^{N})$, then $$\int_{{\mathbb R}^N} w |u|^{p} {\rm d}x\leq C_{H} \|w\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y},~\forall ~ u \in \mathcal{D}_{}^{s,p}(\mathbb{R}^{N}).$$* *Proof.* Let $U_{t} = \{ x \in {\mathbb R}^N : |u(x)| \geq t \}$. Now by [@Mazya Sect. 2.3.2] we have $$\begin{aligned} \|~|u|^{p}\|_{\mathcal{L}_{M}({\mathbb R}^N,\mu)} & = &\mbox{sup} \bigg\{~\bigg|\int_{{\mathbb R}^N} w |u|^{p} {\mathrm{d}\mu}\bigg| : \int_{{\mathbb R}^N} P(w) {\mathrm{d}\mu}\leq 1 \bigg\}\\ &=& \mbox{sup} \bigg\{\int_{0}^{\infty} \int_{U_{t}} w ~{\mathrm{d}\mu}~\mathrm{d}({t}^{p}) : \int_{{\mathbb R}^N} P(w) {\mathrm{d}\mu}\leq 1 \bigg\}\\ & \leq& \int_{0}^{\infty} \mbox{sup} \bigg\{ \int_{{\mathbb R}^N} \displaystyle\chi_{U_{t}} w ~{\mathrm{d}\mu}: \int_{{\mathbb R}^N} P(w) {\mathrm{d}\mu}\leq 1 \bigg\} ~\mathrm{d}({t}^{p})~ \\ & = & \int_{0}^{\infty} \|\chi_{U_{t}}\|_{\mathcal{L}_{M}({\mathbb R}^N,\mu)}~\mathrm{d}({t}^{p}) \\ &\leq& \|w\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{0}^{\infty} {\rm Cap_{s,p}}(U_{t})~\mathrm{d}({t}^{p})\\ &\leq& C_{H} \|w\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}.\end{aligned}$$ Hence, we get the desired result, where the last inequality follows from [@Wu Theorem 1$'$]. ◻ The next proposition gives an interesting property of the $(s,p)$-capacity, which helps us to localize the norm on $\mathcal{H}_{s,p}({\mathbb R}^N)$. **Proposition 12**.
*There exist $C_1, C_2 >0$ such that for $F \Subset {\mathbb R}^N,$* (i) *${\rm{Cap}}_{s,p}(F \cap B_r(x), B_{2r}(x)) \leq C_1 {\rm{Cap}}_{s,p}(F \cap B_{r}(x), {\mathbb R}^N),\ \forall r>0.$* (ii) *${\rm{Cap}}_{s,p}(F \cap B_{2R}^c, \overline{B_R}^c) \leq C_2 {\rm{Cap}}_{s,p}(F \cap B_{2R}^c, {\mathbb R}^N), \ \forall R>0.$* *Proof.* $(i)$ Fix $x_0 \in {\mathbb R}^N$ and choose $\epsilon >0$ arbitrarily. Then, there exists $u \in C_c^1({\mathbb R}^N)$ with $u \geq 1$ a.e. in $F \cap B_r(x_0)$ such that $$\int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}< \mbox{Cap}_{s,p}(F \cap B_r(x_0), {\mathbb R}^N) + \epsilon \,.$$ Take $\phi \in C_c^{\infty}({\mathbb R}^N)$ such that $0 \leq \phi \leq 1$, $\phi =1$ on $\overline{B_1(0)}$ and $\phi$ vanishes outside $B_2(0)$. Consider $v_r=u \phi_{x_0,r}$, where $\phi_{x_0,r}(y)=\phi(\frac{y-x_0}{r})$. Note that $v_r \geq 1$ on $F \cap B_r(x_0)$. Now $$\begin{aligned} & \, \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|v_{r}(x)-v_{r}(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ & \leq \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}{\mathrm{d}x\mathrm{d}y}+ \int_{{\mathbb R}^N \times {\mathbb R}^N} |u(y)|^p \frac{|\phi_{x_0,r}(x)-\phi_{x_0,r}(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ & := I_1 + I_2 \,.\end{aligned}$$ Next, we estimate $I_2$ as follows.
$$\begin{aligned} I_2 & = & \int_{{\mathbb R}^N} |u(y)|^p \int_{{\mathbb R}^N} \frac{|\phi_{x_0,r}(x)-\phi_{x_0,r}(y)|^p}{|x-y|^{N+sp}} \ {\rm d}x{\mathrm{d}y}\\ & = & \int_{{\mathbb R}^N} |u(y)|^p \int_{{\mathbb R}^N} \frac{|\phi_{x_0,r}(y+z)-\phi_{x_0,r}(y)|^p}{|z|^{N+sp}} \ \mathrm{d}z {\mathrm{d}y}\\ & \leq & \int_{{\mathbb R}^N} |u(y)|^p |D^s \phi_{x_0,r}(y)|^p {\mathrm{d}y}\\ & = & \frac{1}{r^{sp}} \int_{{\mathbb R}^N} |u(y)|^p |D^s \phi(\frac{y-x_0}{r})|^p {\mathrm{d}y}\ \ \mbox{ \cite[Lemma \ 2.1]{Bonder}} \\ & \leq & \|D^s \phi\|_{\infty}^p \left( \int_{{\mathbb R}^N} |u(y)|^{p_s^*} {\mathrm{d}y}\right)^{\frac{N-sp}{N}} \left( \int_{{\mathbb R}^N} \left|\frac{D^s \phi(y)}{\|D^s \phi\|_{\infty}} \right|^{\frac{N}{s}} {\mathrm{d}y}\right)^{\frac{sp}{N}} \ \ \mbox{[by H\"older's inequality]} \\ & \leq & \|D^s \phi\|_{\infty}^p \left( \int_{{\mathbb R}^N} |u(y)|^{p_s^*} {\mathrm{d}y}\right)^{\frac{N-sp}{N}} \left( \int_{{\mathbb R}^N} \left|\frac{D^s \phi(y)}{\|D^s \phi\|_{\infty}} \right|^{p} {\mathrm{d}y}\right)^{\frac{sp}{N}} \\ & \leq & C \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}} {\rm d}x{\mathrm{d}y} =C I_1 \ \ \ \mbox{[by Remark \ref{infestimate}]}\end{aligned}$$ Thus, we obtain $$\begin{aligned} \mbox{Cap}_{s,p}(F \cap B_r(x_0), B_{2r}(x_0)) & \leq \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|v_{r}(x)-v_{r}(y)|^p}{|x-y|^{N+sp}}{\mathrm{d}x\mathrm{d}y} \leq (1+C) I_1 \\ & <C_{1}\mbox{Cap}_{s,p}(F \cap B_r(x_0), {\mathbb R}^N) +C_{1} \epsilon \,, \end{aligned}$$ where $C_{1} = 1+C.$ Letting $\epsilon \rightarrow 0$, we prove $(i).$ $(ii)$ Choose $\epsilon >0$ arbitrarily. Then there exists $u \in C_c^1({\mathbb R}^N)$ with $u \geq 1$ a.e.
in $F \cap B_{2R}(0)^c$ such that $$\int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}< \mbox{Cap}_{s,p}(F \cap B_{2R}(0)^c, {\mathbb R}^N) + \epsilon \,.$$ Take $\phi \in C_b^{\infty}({\mathbb R}^N)$ such that $0 \leq \phi \leq 1$, $\phi =0$ on $B_1(0)$ and $\phi =1$ on $B_2(0)^c$. Consider $v_{R}=u \phi_{R}$, where $\phi_{R}(x)=\phi(\frac{x}{R})$. Now $$\begin{aligned} & \, \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|v_{R}(x)-v_{R}(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ & \leq \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}+ \int_{{\mathbb R}^N \times {\mathbb R}^N} |u(y)|^p \frac{|\phi_{R}(x)-\phi_{R}(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ & := I_1 + I_2 \,.\end{aligned}$$ Next, we estimate $I_2$ as follows. $$\begin{aligned} I_2 & = \int_{{\mathbb R}^N} |u(y)|^p \int_{{\mathbb R}^N} \frac{|\phi_{R}(x)-\phi_{R}(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ & = \int_{{\mathbb R}^N} |u(y)|^p \int_{{\mathbb R}^N} \frac{|\phi_{R}(y+z)-\phi_{R}(y)|^p}{|z|^{N+sp}} \ \mathrm{d}z \mathrm{d}y \\ & \leq \int_{{\mathbb R}^N} |u(y)|^p |D^s \phi_{R}(y)|^p {\mathrm{d}y}\\ & = \frac{1}{R^{sp}} \int_{{\mathbb R}^N} |u(y)|^p \bigg|D^s \phi\bigg(\frac{y}{R}\bigg)\bigg|^p {\mathrm{d}y}\ \ \ \mbox{\cite[Lemma \ 2.1]{Bonder}} \\ & \leq \left( \int_{{\mathbb R}^N} |u(y)|^{p_s^*} \mathrm{d}y \right)^{\frac{N-sp}{N}} \left( \int_{{\mathbb R}^N} |D^s \phi(y)|^{\frac{N}{s}} \mathrm{d}y \right)^{\frac{sp}{N}} \ \ \mbox{[by H\"older's inequality]} \\ & \leq \|D^s \phi\|_{\infty}^p \left( \int_{{\mathbb R}^N} |u(y)|^{p_s^*} {\mathrm{d}y}\right)^{\frac{N-sp}{N}} \left( \int_{{\mathbb R}^N} \left|\frac{D^s \phi(y)}{\|D^s \phi\|_{\infty}} \right|^{\frac{N}{s}} {\mathrm{d}y}\right)^{\frac{sp}{N}} \\ & \leq \|D^s \phi\|_{\infty}^p \left( \int_{{\mathbb R}^N} |u(y)|^{p_s^*} {\mathrm{d}y}\right)^{\frac{N-sp}{N}} \left( \int_{{\mathbb R}^N} \left|\frac{D^s \phi(y)}{\|D^s \phi\|_{\infty}} \right|^{p}
{\mathrm{d}y}\right)^{\frac{sp}{N}} \\ & \leq C \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y} =C I_1 \ \ \mbox{[by Remark \ref{infestimate}]}. \end{aligned}$$ Therefore, the result follows. ◻ In the next proposition, we establish a necessary and sufficient condition for the weights $w\in L^1_{loc}(\mathbb{R}^N)$ to be in the space $\mathcal{H}_{s,p,0}({\mathbb R}^N).$ **Proposition 13**. *Let $w \in L^1_{loc}(\mathbb{R}^N)$. Then, $w \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$ if and only if for every $\epsilon >0$, there exists $w_{\epsilon} \in L^{\infty}({\mathbb R}^N)$ such that $|Supp(w_{\epsilon})|< \infty$ (where $|E|$ denotes the $N$-dimensional Lebesgue measure of the set $E$) and $\norm{w-w_{\epsilon}}_{\mathcal{H}_{s,p}({{\mathbb R}^N})}< \epsilon.$* *Proof.* Let $w \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$ and $\epsilon >0$ be given. By definition of $\mathcal{H}_{s,p,0}({\mathbb R}^N)$, there exists $w_{\epsilon} \in C_c({\mathbb R}^N)$ such that $\norm{w-w_{\epsilon}}_{\mathcal{H}_{s,p}(\mathbb{R}^N)} < \epsilon .$ This $w_{\epsilon}$ fulfills our requirements. For the converse part, take $w$ satisfying the hypothesis. Let $\epsilon >0$ be arbitrary. Then there exists $w_{\epsilon} \in L^{\infty}({\mathbb R}^N)$ such that $|Supp(w_{\epsilon})|< \infty$ and $\norm{w-w_{\epsilon}}_{\mathcal{H}_{s,p}({\mathbb R}^N)}< \frac{\epsilon}{2}.$ Thus, $w_{\epsilon} \in L^{\frac{N}{sp}}({\mathbb R}^N)$, and hence there exists $\phi_{\epsilon} \in C_c^{}({\mathbb R}^N)$ such that $\norm{w_{\epsilon}-\phi_{\epsilon}}_{\frac{N}{sp}} < \frac{\epsilon}{2C}$, where $C$ is the embedding constant for the embedding of $L^{\frac{N}{sp}}({\mathbb R}^N)$ into $\mathcal{H}_{s,p}({\mathbb R}^N)$. Now by the triangle inequality, we obtain $\norm{w-\phi_{\epsilon}}_{\mathcal{H}_{s,p}(\mathbb{R}^N)}< \epsilon$ as required.
◻ ## Some important embeddings In this subsection, we will prove some of the important embedding results, which will later help us reach our final goal. First, we prove the following result: **Proposition 14**. *For $p > 1$ such that $sp < N$ , $L^{\frac{N}{sp}, \infty}(\mathbb{R}^{N})$ is continuously embedded in $\mathcal{H}_{s,p}(\mathbb{R}^{N})$.* *Proof.* Observe that $\mathrm{Cap}_{s,p}(F^{\star}) \leq \mathrm{Cap}_{s,p}(F)$. The inequality follows from the Pólya-Szegö inequality [@Almgren Theorem 9.2]. Also, we know that $\mathrm{Cap}_{s,p}(F^{\star}) \geq \mathcal{K}_{N,s,p} R^{N-sp},$ where $R$ is the radius of $F^{\star}$ and $\mathcal{K}_{N,s,p} > 0$ is a constant independent of $R$ [@Xiao Theorem 3]. Now for a relatively compact set $F$, $$\dfrac{\int_{F}|w(x)| {\rm d}x}{\mathrm{Cap}_{s,p}(F)} \leq \dfrac{\int_{F^{\star}} w^{\star}(x) {\rm d}x}{\mathrm{Cap}_{s,p}(F^{\star})} \leq \dfrac{\int_{0}^{\mu(F)} w^{\ast}(x) {\rm d}x}{\mathcal{K}_{N,s,p} R^{N-sp}} = \dfrac{\omega_{N} R^{N} w^{\ast \ast}(\omega_{N} R^{N})}{\mathcal{K}_{N,s,p} R^{N-sp}} = \dfrac{R^{sp}~ w^{\ast \ast}(\omega_{N} R^{N})}{\mathcal{K}_{N,s,p}},$$ where the first inequality uses the Hardy-Littlewood inequality [@Edmunds Theorem 3.2.10] together with $\mathrm{Cap}_{s,p}(F^{\star}) \leq \mathrm{Cap}_{s,p}(F)$, and the second uses the capacity lower bound above. By setting $\omega_{N}{R}^{N} = t,$ we get $$\begin{aligned} \dfrac{\int_{F}|w(x)| {\rm d}x}{\mathrm{Cap}_{s,p}(F)} \leq {\mathcal{K}_{N,s,p}} \|w\|_{(\frac{N}{sp}, \infty)}.\end{aligned}$$ Now take the supremum over $F \Subset {\mathbb R}^N$ to obtain $$\begin{aligned} \|w\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq {\mathcal{K}_{N,s,p}} \|w\|_{(\frac{N}{sp}, \infty)} \end{aligned}$$ with $\mathcal{K}_{N,s,p} > 0$ a constant depending only on $N,s$ and $p$.
◻ Let us define the following spaces $$L^{\frac{N}{sp},\infty}_0(\mathbb{R}^{N}) = \overline{C_{c}^{}(\mathbb{R}^{N})} ~~\mbox{in} ~~L^{\frac{N}{sp},\infty}(\mathbb{R}^{N}),$$ $$\mathcal{H}_{s,p,0}(\mathbb{R}^{N}) = \overline{C_{c}^{}(\mathbb{R}^{N})} ~~\mbox{in} ~~\mathcal{H}_{s,p}(\mathbb{R}^{N}).$$ **Proposition 15**. *Let $p>1$ such that $sp < N$. Then $L^{\frac{N}{sp},\infty}_0(\mathbb{R}^{N}) \subset \mathcal{H}_{s,p,0}(\mathbb{R}^{N})$.* *Proof.* Recall that $L^{\frac{N}{sp},\infty}_0(\mathbb{R}^{N})$ is the closure of $C_{c}^{}(\mathbb{R}^{N})$ in $L^{\frac{N}{sp}, \infty}(\mathbb{R}^{N})$, and $\mathcal{H}_{s,p,0}(\mathbb{R}^{N})$ is the closure of $C_{c}^{}(\mathbb{R}^{N})$ in $\mathcal{H}_{s,p}(\mathbb{R}^{N})$. By Proposition [Proposition 14](#Lorentz space embedding){reference-type="ref" reference="Lorentz space embedding"}, we have $\|\cdot\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq C \|\cdot\|_{\frac{N}{sp}, \infty}$. Therefore, it is immediate that $L^{\frac{N}{sp},\infty}_0(\mathbb{R}^{N})$ is contained in $\mathcal{H}_{s,p,0}(\mathbb{R}^{N})$. ◻ In the following proposition, we establish the Lorentz-Sobolev embedding for ${\mathcal D}^{s,p}(\mathbb{R}^{N})$. **Proposition 16**.
*Let $p>1$ with $sp<N$. Then ${\mathcal D}^{s,p}(\mathbb{R}^{N})$ is continuously embedded in the Lorentz space $L^{p^{*}_{s},p}(\mathbb{R}^{N})$, where $p^{*}_{s} = \frac{Np}{N-sp}.$* *Proof.* Let $w \in \mathcal{H}_{s,p}(\mathbb{R}^{N})$ be such that $w^{\star} \in \mathcal{H}_{s,p}(\mathbb{R}^{N}).$ Then, using Theorem [Theorem 11](#H1){reference-type="ref" reference="H1"} together with the Pólya-Szegö inequality [@Almgren Theorem 9.2], we have $$\begin{aligned} \int_{\mathbb{R}^{N}} w^{\star} |u^{\star}|^{p} {\rm d}x&\leq & C \|w^{\star}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u^{\star}(x) - u^{\star}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ &\leq & C \|w^{\star}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y},\ \forall u \in {\mathcal D}^{s,p}(\mathbb{R}^{N}).\end{aligned}$$ In particular, for $w(x) = \dfrac{1}{\omega_{N}^{\frac{sp}{N}} |x|^{sp}}$, we have $w^{*}(t) = \dfrac{1}{t^{\frac{sp}{N}}}$ and $\|w^{\star}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq C(N,p,s).$ Also, we have $$\int_{\mathbb{R}^{N}} w^{\star} |u^{\star}|^{p} {\rm d}x= \int_{0}^{\infty} w^{*}(t) |u^{*}(t)|^{p} \mathrm{d}t.$$ Thus, from the above inequality, we have $$\begin{aligned} \int_{0}^{\infty} \dfrac{1}{t^{\frac{sp}{N}}} |u^{*}(t)|^{p} \mathrm{d}t&=& \int_{0}^{\infty} w^{*}(t) |u^{*}(t)|^{p} \mathrm{d}t = \int_{\mathbb{R}^{N}} w^{\star} |u^{\star}|^{p} {\rm d}x\\ &\leq& C(N,p,s) \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y},\ \forall u \in {\mathcal D}^{s,p}(\mathbb{R}^{N}).\end{aligned}$$ The left-hand side of the above inequality is $|u|^{p}_{(p^{*}_{s},p)}$, a quasi-norm equivalent to the norm $\|u\|^{p}_{(p^{*}_{s},p)}$ in $L^{p^{*}_{s},p}(\mathbb{R}^{N})$. This completes the proof.
◻ ## Concentration compactness Let $\mathbb{M} ({\mathbb R}^N)$ be the space of all regular, finite, signed Borel measures on ${\mathbb R}^N.$ Then $\mathbb{M} ({\mathbb R}^N)$ is a Banach space with respect to the norm $\norm{\mu}=|\mu|({\mathbb R}^N)$ (the total variation of the measure $\mu$). By the Riesz representation theorem, we know that $\mathbb{M}({\mathbb R}^N)$ is the dual of $\text{C}_0({\mathbb R}^N)$ (= $\overline{\text{C}_c({\mathbb R}^N)}$ in $L^{\infty}({\mathbb R}^N)$) [@Border Theorem 14.14, Chapter 14]. The next proposition follows from the uniqueness part of the Riesz representation theorem. **Proposition 17**. *Let $\mu \in \mathbb{M}({\mathbb R}^N)$ be a positive measure. Then for an open $V \subseteq {\mathbb R}^N$, $$\mu(V)= \sup \left \{ \int_{{\mathbb R}^N} \phi {\mathrm{d}\mu}: 0 \leq \phi \leq 1, \phi \in \rm{C}_c^{\infty}({\mathbb R}^N) \ \mbox{with} \ Supp(\phi) \subseteq V \right \}$$ and for any Borel set $E \subseteq {\mathbb R}^N$, $\mu(E)= \inf \big\{ \mu(V) : E \subseteq V \ \mbox{and} \ V \ \mbox{is open} \big\}$.* A sequence $(\mu_n)$ is said to be weak\* convergent to $\mu$ in $\mathbb{M}({\mathbb R}^N)$, if $$\begin{aligned} \int_{{\mathbb R}^N} \phi \ {\mathrm{d}\mu}_n \rightarrow\int_{{\mathbb R}^N} \phi {\mathrm{d}\mu}\text{ as } n \rightarrow\infty, \ \forall\, \phi \in \text{C}_0({\mathbb R}^N). \end{aligned}$$ In this case, we denote $\mu_n \overset{\ast}{\rightharpoonup}\mu$. The following proposition is a consequence of the Banach-Alaoglu theorem [@Conway Chapter 5, Section 3], which states that for any normed linear space $X$, the closed unit ball in $X^*$ is weak\* compact. **Proposition 18**.
*Let $(\mu_n)$ be a bounded sequence in $\mathbb{M}({\mathbb R}^N)$. Then there exists $\mu \in \mathbb{M}({\mathbb R}^N)$ such that $\mu_n \overset{\ast}{\rightharpoonup} \mu$ up to a subsequence.* *Proof.* Recall that if $X=\rm{C}_0({\mathbb R}^N)$, then by the Riesz representation theorem [@Border Theorem 14.14, Chapter 14], $X^*=\mathbb{M}({\mathbb R}^N)$. Thus, the proof follows from the Banach-Alaoglu theorem [@Conway Chapter 5, Section 3]. ◻ For $u_n, u\in \mathcal{D}^{s,p}_0({\mathbb R}^N)$ and a Borel set $E$ in ${\mathbb R}^N,$ we denote $$\begin{aligned} \nu_n(E)=\int_E w|u_n - u|^p \ {\rm d}x\,, &\, \quad \Gamma_n(E)=\int_E|D^s (u_n-u)|^p \ {\rm d}x\, \\ \widetilde{\Gamma}_n(E) &=\int_E|D^s u_n|^p \ {\rm d}x\,. \end{aligned}$$ If $u_n\rightharpoonup u$ in $\mathcal{D}^{s,p}_0({\mathbb R}^N)$, then $\nu_n$, $\Gamma_n$ and $\widetilde{\Gamma}_n$ have weak\* convergent subsequences (Proposition [Proposition 18](#BanachAlaoglu){reference-type="ref" reference="BanachAlaoglu"}) in $\mathbb{M}({\mathbb R}^N)$. Let $$\nu_n \overset{\ast}{\rightharpoonup} \nu \,, \qquad \Gamma_n \overset{\ast}{\rightharpoonup} \Gamma \,, \qquad \widetilde{\Gamma}_n \overset{\ast}{\rightharpoonup} \widetilde{\Gamma} \ \mbox{ in } \mathbb{M}({\mathbb R}^N).$$ We develop a $w$-dependent concentration compactness lemma using the concentration function ${\mathcal C}_w$ (defined in the introduction). Our results are analogous to the results of Tertikas [@Tertikas] and Smets [@Smets]. The following lemma is due to [@Bonder Remark 2.5]. **Lemma 19**. *Let $0<s<1<p<\frac{N}{s}$ and $\phi \in W^{1,\infty}({\mathbb R}^N)$ with compact support. Let $u_n \rightharpoonup u$ in $\mathcal{D}^{s,p}_0({\mathbb R}^N)$. Then $$\displaystyle \lim_{n\rightarrow\infty} \int_{{\mathbb R}^N} |(u_n-u)(x)|^p |D^s \phi|^p(x) \ {\rm d}x= 0 \,.$$* **Remark 20**.
*Lemma [Lemma 19](#lem: 1){reference-type="ref" reference="lem: 1"} also holds if we replace $\phi$ with $\psi \in C_b^{\infty}({\mathbb R}^N)$ with $0 \leq \psi \leq 1$, $\psi = 0$ on $B_1(0)$ and $\psi =1$ on $B_2(0)^c$.* **Corollary 21**. *Let $u_n \rightharpoonup u$ in $\mathcal{D}^{s,p}({\mathbb R}^N)$. Let $\phi \in W^{1,\infty}({\mathbb R}^N)$ with compact support or $\phi \in C_b^{\infty}({\mathbb R}^N)$ with $0 \leq \phi \leq 1$, $\phi = 0$ on $B_1(0)$ and $\phi =1$ on $B_2(0)^c$. Then, for $v_n=(u_n-u)\phi$, we have $$\begin{aligned} & \, \lim_{n\rightarrow\infty} \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|v_n(x)-v_n(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ & \leq \lim_{n\rightarrow\infty} \int_{{\mathbb R}^N \times {\mathbb R}^N} |\phi(y)|^p\frac{|(u_n-u)(x)-(u_n-u)(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\,. \end{aligned}$$* Next, we prove the absolute continuity of $\nu$ with respect to $\Gamma$. **Lemma 22**. *Let $w \in \mathcal{H}_{s,p}({\mathbb R}^N)$, $w \geq 0$ and $u_n \rightharpoonup u$ in $\mathcal{D}^{s,p}({\mathbb R}^N)$. Then for any Borel set $E$ in ${\mathbb R}^N$, $$\nu(E) \leq C_{H} {\mathcal C}^*_w \Gamma (E), \ \mbox{where ${\mathcal C}^*_w = \displaystyle\sup_{x \in {\mathbb R}^N} {\mathcal C}_w(x)$ } .$$* *Proof.* As $u_n \rightharpoonup u$ in $\mathcal{D}^{s,p}({\mathbb R}^N)$, $u_n \rightarrow u$ in $L^{p}_{loc}({\mathbb R}^N)$. For $\Phi \in C_c^{\infty}({\mathbb R}^N)$, $(u_n-u) \Phi \in {\mathcal D}^{s,p}({\mathbb R}^N)$. 
Thus, denoting $v_n=(u_n-u)\Phi$, we have $$\begin{aligned} \int_{{\mathbb R}^N} |\Phi|^p \ \mathrm{d} \nu_n & = \int_{{\mathbb R}^N} w|(u_n-u)\Phi|^p {\rm d}x\\ & \leq C_H \norm{w}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|v_n(x)-v_n(y)|^p}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\,.\end{aligned}$$ Taking $n \rightarrow\infty$ and using Corollary [Corollary 21](#passing_limit){reference-type="ref" reference="passing_limit"}, we obtain $$\begin{aligned} \label{forrmk} \int_{{\mathbb R}^N} |\Phi|^p \ \mathrm{d} \nu \leq C_{H} \norm{w}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{{\mathbb R}^N} |\Phi|^p \ \mathrm{d} \Gamma . \end{aligned}$$ Now by Proposition [Proposition 17](#defmeasure){reference-type="ref" reference="defmeasure"}, we get $$\label{measureinequality1} \nu(E) \leq C_{H} \norm{w}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \Gamma (E), \ \forall E \ \text{Borel in} \ {\mathbb R}^N.$$ In particular, $\nu \ll \Gamma$ and hence by the Radon-Nikodym theorem, $$\label{measureinequality} \nu(E) = \int_E \frac{\mathrm{d} \nu}{\mathrm{d} \Gamma} \ \mathrm{d}\Gamma, \ \forall E \ \text{Borel in} \ {\mathbb R}^N.$$ Further, by the Lebesgue differentiation theorem (pages 152-168 of [@Federer]), we have $$\label{Lebdiff} \frac{\mathrm{d} \nu}{\mathrm{d} \Gamma}(x) = \lim_{r \rightarrow 0} \frac{\nu (B_r(x))}{\Gamma (B_r(x))}.$$ Now replacing $w$ by $w \chi_{B_r(x)}$ and proceeding as before, $$\nu(B_r(x)) \leq C_{H} \norm{w \chi_{B_r(x)}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \ \Gamma (B_r(x)).$$ Thus from [\[Lebdiff\]](#Lebdiff){reference-type="eqref" reference="Lebdiff"} we get $$\begin{aligned} \label{21} \frac{\mathrm{d} \nu}{\mathrm{d} \Gamma} (x) \leq C_{H} {\mathcal C}_w(x)\end{aligned}$$ and hence $\norm{\frac{\mathrm{d} \nu}{\mathrm{d} \Gamma}}_{\infty} \leq C_{H} {\mathcal C}^*_w$.
Now from [\[measureinequality\]](#measureinequality){reference-type="eqref" reference="measureinequality"} we obtain $\nu(E) \leq C_{H} {\mathcal C}^*_w \Gamma (E)$ for all Borel subsets $E$ of ${\mathbb R}^N$. ◻ The next lemma gives a lower estimate for the measure $\tilde{\Gamma}.$ A similar estimate is obtained in Lemma 2.1 of [@Smets]. We make the weaker assumption that $\overline{\sum_w}$ has Lebesgue measure $0$, instead of the assumption that $\overline{\sum_w}$ is countable. **Lemma 23**. *Let $w \in \mathcal{H}_{s,p}({\mathbb R}^N)$ be such that $w \ge 0$ and $|\overline{\sum_w}|=0$. If $u_n \rightharpoonup u$ in ${\mathcal D}^{s,p}({\mathbb R}^N)$, then $$\tilde{\Gamma} \geq \begin{cases} |D^s u|^p + \frac{\nu}{C_{H} {\mathcal C}_w^*}, \quad \text{if} \ {\mathcal C}_w^* \neq 0, \\ |D^s u|^p, \quad \text{otherwise}. \end{cases}$$* *Proof.* Our proof splits into three steps.\ **Step 1:** $\tilde{\Gamma} \geq |D^s u|^p.$ Let $\phi \in C_c^{\infty}({\mathbb R}^N)$ with $0\leq \phi \leq 1$; we need to show that $\int_{{\mathbb R}^N} \phi \ \mathrm{d}\tilde{\Gamma} \geq \int_{{\mathbb R}^N} \phi |D^s u|^p \ {\rm d}x.$ Notice that, $$\begin{aligned} \int_{{\mathbb R}^N} \phi \ \mathrm{d}\tilde{\Gamma}= \lim_{n \rightarrow\infty} \int_{{\mathbb R}^N} \phi \ \mathrm{d}\tilde{\Gamma}_n &= \lim_{n \rightarrow\infty} \int_{{\mathbb R}^N} \phi |D^s u_n|^p \ {\rm d}x\\ &= \lim_{n \rightarrow\infty} \int_{{\mathbb R}^N} F(x,D^s u_n(x)) \ {\rm d}x,\end{aligned}$$ where $F:{\mathbb R}^N \times {\mathbb R}\rightarrow {\mathbb R}$ is defined as $F(x,z)=\phi(x)|z|^p.$ Clearly, $F$ is a Carathéodory function and $F(x,\cdot)$ is convex for almost every $x$. Hence, by Theorem 2.6 of [@Fillip] (page 28), we have $\lim_{n \rightarrow\infty} \int_{{\mathbb R}^N} \phi |D^s u_n|^p \ {\rm d}x\geq \int_{{\mathbb R}^N} \phi |D^s u|^p \ {\rm d}x$, which proves Step 1.\ **Step 2:** $\tilde{\Gamma}=\Gamma$ on $\overline{\sum_w}.$ Let $E\subset\overline{\sum_w}$ be a Borel set.
Since $|E| \leq |\overline{\sum_w}|=0$, for each $m \in {\mathbb N}$ there exists an open subset $O_{m}$ containing $E$ such that $|O_m|=|O_{m} \setminus E| < \frac{1}{m}$. Let $\epsilon>0$ be given. Then, for any $\phi \in C_c^{\infty}(O_{m})$ with $0 \leq \phi \leq 1$, we have $$\begin{aligned} \bigg|\int_{{\mathbb R}^N} \phi \ \mathrm{d}\Gamma_n & - \int_{{\mathbb R}^N} \phi \ \mathrm{d}\tilde{\Gamma}_n \bigg| \\ &= \left|\int_{{\mathbb R}^N} \phi |D^s (u_n-u)|^p \ {\rm d}x-\int_{{\mathbb R}^N} \phi |D^s u_n|^p \ {\rm d}x\right| \\ &\leq \int_{{\mathbb R}^N \times {\mathbb R}^N} \phi(x) \frac{\big| |u_n(x)-u_n(y)|^p - |(u_n-u)(x)-(u_n-u)(y)|^p \big|}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}\\ &\leq \epsilon \int_{{\mathbb R}^N \times {\mathbb R}^N} \phi(x) \frac{ |u_n(x)-u_n(y)|^p}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}\\ & \hspace{3cm} + c(\epsilon,p) \int_{{\mathbb R}^N \times {\mathbb R}^N} \phi(x) \frac{ |u(x)-u(y)|^p}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}\\ & \leq \epsilon \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{ |u_n(x)-u_n(y)|^p}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}+ c(\epsilon,p) \int_{O_m} |D^s u|^p \ {\rm d}x\,.\end{aligned}$$ Now taking $n \rightarrow \infty$ and then $\epsilon \rightarrow 0$, we obtain $\left|\int_{{\mathbb R}^N} \phi \ \mathrm{d}\Gamma -\int_{{\mathbb R}^N} \phi \ \mathrm{d}\tilde{\Gamma} \right| \leq c(p) \int_{O_{m}} |D^s u|^p \ {\rm d}x.$ Therefore, $$\begin{aligned} \left|\Gamma (O_m)-\tilde{\Gamma} (O_m) \right| \leq c(p) \int_{O_{m}} |D^s u|^p \ {\rm d}x\,.\end{aligned}$$ Thus, as $m \rightarrow\infty,$ $|O_m|\rightarrow 0$ and hence $| \Gamma (E)-\tilde{\Gamma} (E)| =0$, i.e., $\Gamma(E)=\tilde{\Gamma} (E).$\ **Step 3:** $\tilde{\Gamma} \geq |D^s u|^p + \frac{\nu}{C_{H} {\mathcal C}_w^*}$ if ${\mathcal C}_w^* \neq 0$. Let ${\mathcal C}_w^* \neq 0$. Then from Lemma [Lemma 22](#mlc1){reference-type="ref" reference="mlc1"} we have $\Gamma \geq \frac{\nu}{C_{H} {\mathcal C}_w^*}$.
Furthermore, [\[21\]](#21){reference-type="eqref" reference="21"} and [\[measureinequality\]](#measureinequality){reference-type="eqref" reference="measureinequality"} ensure that $\nu$ is supported on $\sum_w.$ Hence Step 1 and Step 2 yield the following: $$\label{rep1} \tilde{\Gamma} \geq \left\{\begin{array}{ll} |D^s u|^p, & \\ \frac{\nu}{C_{H} {\mathcal C}_w^*} . \end{array}\right.$$ Since $|\overline{\sum_w}|=0$, the measure $|D^s u|^p$ is supported inside $\overline{\sum_w}^{c}$ and hence from [\[rep1\]](#rep1){reference-type="eqref" reference="rep1"} we easily obtain $\tilde{\Gamma} \geq |D^s u|^p + \frac{\nu}{C_{H} {\mathcal C}_w^*}.$ ◻ Now we prove the following lemma. **Lemma 24**. *Let $w \in \mathcal{H}_{s,p}({\mathbb R}^N)$, $w \geq 0$ and $u_n \rightharpoonup u$ in $\mathcal{D}^{s,p}({\mathbb R}^N)$. Set $$\begin{aligned} \nu_{\infty} = \displaystyle\lim_{R \rightarrow\infty} \overline{\lim_{n \rightarrow\infty}} \nu_n( \overline{B_R}^c) \quad \mbox{and} \quad \Gamma_{\infty} = \displaystyle\lim_{R \rightarrow\infty} \overline{\lim_{n \rightarrow\infty}} \Gamma_n( \overline{B_R}^c). \end{aligned}$$ Then* (i) *$\nu_{\infty} \leq C_{H} {\mathcal C}_w(\infty) \Gamma_{\infty},$* (ii) *$\displaystyle \overline{\lim}_{n \rightarrow\infty} \int_{{\mathbb R}^N} w|u_n|^p {\rm d}x= \int_{{\mathbb R}^N} w|u|^p {\rm d}x+ \norm{\nu} + \nu_{\infty}$,* (iii) *Further, if $|\overline{\sum_w}|=0$, then we have* *$$\displaystyle \overline{\lim}_{n \rightarrow\infty} \int_{{\mathbb R}^N} |D^s u_n|^p {\rm d}x\geq \begin{cases} \int_{{\mathbb R}^N} |D^s u|^p {\rm d}x+ \frac{\norm{\nu}}{C_H{\mathcal C}_w^*} + \Gamma_{\infty}, \quad \ \text{if} \ \ {\mathcal C}_w^* \neq 0 \\ \int_{{\mathbb R}^N} |D^s u|^p {\rm d}x+ \Gamma_{\infty}, \quad \ \text{otherwise}. \end{cases}$$* *Proof.* (i) For $R>0$, choose $\Phi_R \in C_b^{1}({\mathbb R}^N)$ satisfying $0\leq \Phi_R \leq 1$, $\Phi_R = 0$ on $\overline{B_R}$ and $\Phi_R = 1$ on $B_{R+1}^c$.
Clearly, $v_n:=(u_n-u) \Phi_R \in \mathcal{D}^{s,p}_0( \displaystyle\overline{B_R}^c)$. Since $\norm{w \chi_{ \overline{B_R}^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} < \infty,$ by Maz'ya's theorem, $$\begin{aligned} & \, \int_{{\mathbb R}^N} |\Phi_R|^p \ \mathrm{d}\nu_n = \int_{{\mathbb R}^N} w|(u_n-u)\Phi_R|^p \ {\rm d}x= \int_{\overline{B_R}^c} w |v_n|^p \ {\rm d}x\nonumber \\ & \leq C_H \norm{w \chi_{\overline{B_R}^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|v_n(x)-v_n(y)|^p}{|x-y|^{N+sp}} \ {\mathrm{d}x\mathrm{d}y}\nonumber \\ & \leq C_{H} \norm{w \chi_{\overline{B_R}^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{{\mathbb R}^N \times {\mathbb R}^N} |\Phi_R(x)|^p \frac{|(u_n-u)(x)-(u_n-u)(y)|^p}{|x-y|^{N+sp}} \ {\mathrm{d}x\mathrm{d}y}\nonumber \\ &= C_{H} \norm{w \chi_{\overline{B_R}^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \int_{{\mathbb R}^N} |\Phi_R|^p \ \mathrm{d}\Gamma_n. \label{lab: 1} \end{aligned}$$ Also, notice that $$\begin{aligned} \nu_n(\overline{B_{R+1}^c}) & \leq \int_{{\mathbb R}^N} |\Phi_R|^p \ \mathrm{d}\nu_n \leq \nu_n(\overline{B_{R}^c}), \, \\ \Gamma_n(\overline{B_{R+1}^c}) & \leq \int_{{\mathbb R}^N} |\Phi_R|^p \ \mathrm{d}\Gamma_n \leq \Gamma_n(\overline{B_{R}^c}) \,.\end{aligned}$$ Thus, $$\label{nu infty gamma infty} \displaystyle\nu_{\infty}=\lim_{R \rightarrow\infty} \overline{\lim_{n \rightarrow\infty}} \int_{{\mathbb R}^N} |\Phi_R|^p \ \mathrm{d}\nu_n ~~\text{and}~~ \displaystyle\Gamma_{\infty}=\lim_{R \rightarrow\infty} \overline{\lim_{n \rightarrow\infty}}\int_{{\mathbb R}^N} |\Phi_R|^p \ \mathrm{d}\Gamma_n.$$ Consequently, taking $n \rightarrow\infty$ and then $R \rightarrow\infty$ in [\[lab: 1\]](#lab: 1){reference-type="eqref" reference="lab: 1"}, we get $\nu_{\infty} \leq C_{H} {\mathcal C}_{w}(\infty) \Gamma_{\infty}.$ $(ii)$ By choosing $\Phi_R$ as above and using the Brézis-Lieb lemma together with [\[nu infty gamma infty\]](#nu infty gamma infty){reference-type="eqref" reference="nu infty gamma infty"} we have $$\begin{aligned} \overline{\lim_{n\rightarrow\infty}} \int_{{\mathbb R}^N} w| u_n|^p {\rm d}x &= \overline{\lim_{R\rightarrow\infty}} \ \overline{\lim_{n\rightarrow\infty}} \left[ \int_{{\mathbb R}^N} w| u_n|^p (1-\Phi_R){\rm d}x+ \int_{{\mathbb R}^N} w| u_n|^p \Phi_R {\rm d}x\right] \\ &=\overline{\lim_{R\rightarrow\infty}} \ \overline{\lim_{n\rightarrow\infty}} \left[ \int_{{\mathbb R}^N} w| u|^p (1-\Phi_R) {\rm d}x+ \int_{{\mathbb R}^N} w| u_n-u|^p (1-\Phi_R) {\rm d}x+ \int_{{\mathbb R}^N} w| u_n|^p \Phi_R {\rm d}x\right] \\ &= \int_{{\mathbb R}^N} w| u|^p {\rm d}x+ \norm{\nu} + \nu_{\infty}. \end{aligned}$$ $(iii)$ Notice that $$\begin{aligned} \overline{\lim_{n\rightarrow\infty}} \int_{{\mathbb R}^N} |D^s u_n|^p {\rm d}x&= \overline{\lim_{n\rightarrow\infty}} \left[ \int_{{\mathbb R}^N} |D^s u_n|^p (1-\Phi_R) {\rm d}x+ \int_{{\mathbb R}^N} |D^s u_n|^p \Phi_R {\rm d}x\right] \nonumber \\ &= \int_{{\mathbb R}^N} (1-\Phi_R) \ \mathrm{d}\tilde{\Gamma} + \overline{\lim_{n\rightarrow\infty}} \int_{{\mathbb R}^N} \Phi_R \ \mathrm{d}\tilde{\Gamma}_n. \label{lab: 2} \end{aligned}$$ Now for a given $\epsilon>0$ we have, $$\begin{aligned} &\, \left|\int_{{\mathbb R}^N} \Phi_R \ \mathrm{d}\Gamma_n -\int_{{\mathbb R}^N} \Phi_R \ \mathrm{d}\tilde{\Gamma}_n \right| \\ &= \left|\int_{{\mathbb R}^N} \Phi_R |D^s (u_n-u)|^p \ {\rm d}x-\int_{{\mathbb R}^N} \Phi_R |D^s u_n|^p \ {\rm d}x\right| \\ &\leq \int_{{\mathbb R}^N \times {\mathbb R}^N} \Phi_R(x) \frac{\big| |u_n(x)-u_n(y)|^p - |(u_n-u)(x)-(u_n-u)(y)|^p \big|}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}\\ &\leq \epsilon \int_{{\mathbb R}^N \times {\mathbb R}^N} \Phi_R(x) \frac{ |u_n(x)-u_n(y)|^p}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}\\ & \hspace{3cm} + c(\epsilon,p) \int_{{\mathbb R}^N \times {\mathbb R}^N} \Phi_R(x) \frac{ |u(x)-u(y)|^p}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}\\ & \leq \epsilon \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{ |u_n(x)-u_n(y)|^p}{|x-y|^{N+sp}}\ {\mathrm{d}x\mathrm{d}y}+ c(\epsilon,p) \int_{\overline{B_R}^c} |D^s u|^p \ {\rm d}x\,.\end{aligned}$$ Letting $n \rightarrow\infty$ and then $\epsilon \rightarrow 0$, we get $$\begin{aligned} \left|\int_{{\mathbb R}^N} \Phi_R \ \mathrm{d}\Gamma -\int_{{\mathbb R}^N} \Phi_R \ \mathrm{d}\tilde{\Gamma} \right| \leq c(p) \int_{\overline{B_R}^c} |D^s u|^p \ {\rm d}x\,.\end{aligned}$$ By taking $R \rightarrow\infty$, we have $\Gamma_{\infty}=\displaystyle \lim_{R \rightarrow\infty} \int_{{\mathbb R}^N} \Phi_R \ \mathrm{d}\tilde{\Gamma} =\displaystyle \lim_{R \rightarrow\infty} \overline{\lim_{n \rightarrow\infty}} \int_{{\mathbb R}^N} \Phi_R \ \mathrm{d}\tilde{\Gamma}_n$.
Therefore, by taking $R \rightarrow\infty$ in [\[lab: 2\]](#lab: 2){reference-type="eqref" reference="lab: 2"}, we get $$\overline{\lim_{n\rightarrow\infty}} \int_{{\mathbb R}^N} |D^s u_n|^p \ {\rm d}x= \|\tilde{\Gamma}\|+ \Gamma_{\infty}.$$ Now, using Lemma [Lemma 23](#lemma9){reference-type="ref" reference="lemma9"}, we obtain $$\displaystyle \overline{\lim}_{n \rightarrow\infty} \int_{{\mathbb R}^N} |D^s u_n|^p \ {\rm d}x\geq \begin{cases} \displaystyle \int_{{\mathbb R}^N} |D^s u|^p {\rm d}x+ \frac{\norm{\nu}}{C_H{\mathcal C}_w^*} + \Gamma_{\infty}, \quad \ \text{if} \ {\mathcal C}_w^* \neq 0 \\ \displaystyle \int_{{\mathbb R}^N} |D^s u|^p {\rm d}x+ \Gamma_{\infty}, \quad \ \text{otherwise}. \end{cases}$$ ◻ **Lemma 25**. *Let $w \in \mathcal{H}_{s,p}({\mathbb R}^N)$ be such that the map $W(u):=\displaystyle\int_{{\mathbb R}^N} |w| |u|^p {\rm d}x$ is compact on $\mathcal{D}^{s,p}({\mathbb R}^N)$. Then,* 1. *if $(A_n)$ is a sequence of bounded measurable subsets such that $\chi_{A_n}$ decreases to $0,$ then $$\displaystyle \norm{w \chi_{A_n}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \rightarrow 0 \text{ as } n \rightarrow\infty.$$* 2. *$\norm{w \chi_{B_n^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \rightarrow 0$ as $n \rightarrow\infty$.* *Proof.* $(i)$ Let $(A_n)$ be a sequence of bounded measurable subsets such that $\chi_{A_n}$ decreases to $0$. Suppose that $\norm{w \chi_{A_n}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \nrightarrow 0$. Then, there exists $a>0$ such that $\norm{w \chi_{A_n}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} >a$, for all $n$ (by the monotonicity of the norm).
Thus, there exists $F_n \Subset {\mathbb R}^N$ and $u_n \in \mathcal{N}_{s,p}(F_n)$ such that $$\label{convDp} \int_{{\mathbb R}^N} |D^s u_n|^p {\rm d}x< \frac{1}{a} \int_{F_n \cap A_n} |w| {\rm d}x\leq \frac{1}{a}\int_{|u_n| = 1} |w| |u_n|^{p_{s}^*} {\rm d}x\,.$$ Since the $A_n$'s are bounded and $\chi_{A_n}$ decreases to $0$, it follows that $|A_n| \rightarrow 0$ as $n \rightarrow\infty.$ Hence, we also have $\int_{F_n \cap A_n} |w| \ {\rm d}x\rightarrow 0$ as $n \rightarrow\infty$ (as $w \in L^1(A_1)$). Hence from the above inequalities, $u_n \rightarrow 0$ in $\mathcal{D}^{s,p}_0({\mathbb R}^N)$. Now take $v_n=\frac{u_n^{\frac{p_{s}^*}{p}}}{\|u_n\|_{s,p}}$. Then, one can show that $(v_n)$ is bounded in $\mathcal{D}^{s,p}_0({\mathbb R}^N)$ and $v_n \rightarrow 0$ a.e. because $\|v_n\|_p^p=\frac{\|u_n\|_{p_{s}^*}^{p_{s}^*}}{\|u_n\|_{s,p}^{p}} \leq C\|u_n\|_{s,p}^{p_{s}^*-p} \rightarrow 0$ as $n\rightarrow\infty$. Thus, $v_n \rightharpoonup 0$ in $\mathcal{D}^{s,p}_0({\mathbb R}^N)$. By the compactness of $W$ we infer that $\lim_{n\rightarrow\infty} \int_{{\mathbb R}^N} |w||v_n|^p{\rm d}x=0.$ On the other hand, $$\begin{aligned} \int_{{\mathbb R}^N} |w| |v_n|^p {\rm d}x&=& \int_{{\mathbb R}^N} \frac{|w| |u_n|^{p_{s}^*}}{ \norm{u_n}_{s,p}^p} {\rm d}x \geq \int_{|u_n| = 1} \frac{|w| |u_n|^{p_{s}^*}}{ \norm{u_n}_{s,p}^p} {\rm d}x>a \end{aligned}$$ which is a contradiction. $(ii)$ If $\norm{w \chi_{B_n^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \nrightarrow 0$, as $n \rightarrow\infty,$ then there exists $F_n \Subset {\mathbb R}^N$ such that $$\begin{aligned} a < \frac{\int_{F_n \cap B_n^c} |w| {\rm d}x}{{\rm Cap}_{s,p}(F_n)} \leq \frac{\int_{F_n \cap B_n^c} |w| {\rm d}x}{{\rm Cap}_{s,p}(F_n \cap B_n^c)} \leq \frac{C\int_{F_n \cap B_n^c} |w| {\rm d}x}{{\rm Cap}_{s,p}(F_n \cap B_n^c, \overline{B}_{\frac{n}{2}}^c) }\end{aligned}$$ for some $a>0$ and $C>0$.
The last inequality follows from part $(ii)$ of Proposition [Proposition 12](#propofcap){reference-type="ref" reference="propofcap"}. Thus, for each $n\in {\mathbb N}$ there exists $z_{n} \in {\mathcal D}^{s,p}_0( \overline{B}_{\frac{n}{2}}^c)$ with $z_n \geq 1$ on $F_n \cap B_n^c$ such that $$\int_{{\mathbb R}^N} |D^s z_n|^p {\rm d}x< \frac{C}{a} \int_{F_n \cap B_n^c} |w| {\rm d}x\leq \frac{C}{a} \int_{{\mathbb R}^N} |w||z_n|^p {\rm d}x.$$ By taking $v_n=\displaystyle \frac{z_n}{\norm{z_n}_{s,p}}$ and following the same argument as in $(i)$ we contradict the compactness of $W$. ◻ Next, for $\phi \in C_c({\mathbb R}^N)$, we compute ${\mathcal C}_{\phi}$. For that, we will be using the fact that $${\rm Cap}_{s,p}((F \cap B_r)^{\star}) \geq \mathcal{K}_{N,s,p}\, d^{N-sp} \,,$$ where $d$ is the radius of the ball $(F \cap B_r)^{\star}$ and $\mathcal{K}_{N,s,p}>0$ is a constant independent of $d$ [@Jie]. **Proposition 26**. *Let $\phi \in C_c({\mathbb R}^N)$. Then ${\mathcal C}_{\phi} \equiv 0.$* *Proof.* First notice that, for $\phi \in C_c({\mathbb R}^N),$ $$\begin{aligned} \norm{\phi \chi_{B_r(x)}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} = \sup_{F \Subset {\mathbb R}^N} \left[ \frac{\int_{F \cap B_r(x)}|\phi|{\rm d}x}{{\rm Cap}_{s,p}(F, {\mathbb R}^N)}\right] \leq \sup_{F \Subset {\mathbb R}^N} \left[ \frac{ \sup (|\phi|) |(F\cap B_r)^{\star}|}{{\rm Cap}_{s,p}((F \cap B_r)^{\star})} \right]. \nonumber \end{aligned}$$ The last inequality follows from the Pólya-Szegö inequality [@Almgren Theorem 9.2]. If $d$ is the radius of $(F \cap B_r)^{\star}$ then $$\frac{ |(F\cap B_r)^{\star}|}{{\rm Cap}_{s,p}((F \cap B_r)^{\star})} \leq \text{C}(N,s,p) \frac{d^N}{d^{(N-sp)}} = \text{C}(N,s,p) d^{sp} \leq \text{C}(N,s,p) r^{sp} .$$ Thus, ${\mathcal C}_{\phi}(x) = \lim_{r \rightarrow 0} \norm{\phi \chi_{B_r(x)}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} =0$. Also, one can easily see that ${\mathcal C}_{\phi}(\infty)=0$ as $\phi$ has compact support. ◻ Now, we prove our main theorem.
*Proof of Theorem [Theorem 3](#allinone1){reference-type="ref" reference="allinone1"}.* $(i) \implies (ii):$ Let $W$ be compact. Take a sequence of measurable subsets $(A_n)$ of ${\mathbb R}^N$ such that $\chi_{A_n}$ decreases to $0$ a.e. in ${\mathbb R}^N$. Part $(ii)$ of Lemma [Lemma 25](#cpct){reference-type="ref" reference="cpct"} gives $\norm{w \chi_{B_n^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \rightarrow 0$, as $n \rightarrow\infty$. Choose $\epsilon >0$ arbitrarily. There exists $N_0 \in {\mathbb N}$ such that $\norm{w \chi_{B_n^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq \frac{\epsilon}{2}$ for all $n \geq N_0.$ Now $A_n= (A_n \cap B_{N_0}) \cup (A_n \cap B_{N_0}^c)$, for each $n$. Thus, $$\norm{w \chi_{A_n}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq \norm{w \chi_{A_n \cap B_{N_0}}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} + \norm{w \chi_{A_n \cap B_{N_0}^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq \norm{w \chi_{A_n \cap B_{N_0}}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} + \frac{\epsilon}{2}.$$ Part $(i)$ of Lemma [Lemma 25](#cpct){reference-type="ref" reference="cpct"} implies that there exists a natural number $N_1(\geq N_0)$ such that $$\norm{w \chi_{A_n \cap B_{N_0}}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq \frac{\epsilon}{2}, \ \forall n \geq N_1$$ and hence $\norm{w \chi_{A_n}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq \epsilon$ for all $n \geq N_1$. Therefore, $w$ has absolutely continuous norm. $(ii) \implies (iii):$ Let $w$ have absolutely continuous norm in $\mathcal{H}_{s,p}({\mathbb R}^N)$. Then, $\norm{w \chi_{B_m^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)}$ converges to $0$ as $m \rightarrow\infty$. Let $\epsilon >0$ be arbitrary. We choose $m_{\epsilon} \in {\mathbb N}$ such that $\norm{w \chi_{B_m^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} < \epsilon$, for all $m \geq m_{\epsilon}$.
Now for any $n \in {\mathbb N}$, $$w = w \chi_{\{|w| \leq n\} \cap B_{m_{\epsilon}}} + w \chi_{\{|w| >n\} \cap B_{m_{\epsilon}}} + w \chi_{B_{m_{\epsilon}}^c} := w_n + z_n,$$ where $w_n=w \chi_{\{|w| \leq n\} \cap B_{m_{\epsilon}}}$ and $z_n=w \chi_{\{|w| >n\} \cap B_{m_{\epsilon}}} + w \chi_{B_{m_{\epsilon}}^c}.$ Clearly, $w_n \in L^{\infty}({\mathbb R}^N)$ and $|\operatorname{Supp}(w_n)| < \infty$. Furthermore, $$\norm{z_n}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \leq \norm{w \chi_{\{|w| >n\} \cap B_{m_{\epsilon}}}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} + \norm{w \chi_{B_{m_{\epsilon}}^c}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} < \norm{w \chi_{\{|w| >n\} \cap B_{m_{\epsilon}}}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} + \epsilon \,.$$ Now, $w \in L^1_{loc}({\mathbb R}^N)$ ensures that $\chi_{\{|w| >n\} \cap B_{m_{\epsilon}}} \rightarrow 0$ a.e. as $n\rightarrow\infty$. As $w$ has absolutely continuous norm, $\norm{w \chi_{\{|w| >n\} \cap B_{{m_{\epsilon}}}}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} < \epsilon$ for large $n$. Therefore, $\norm{z_n}_{\mathcal{H}_{s,p}({\mathbb R}^N)}< 2\epsilon$ for large $n$. Hence, Proposition [Proposition 13](#charF){reference-type="ref" reference="charF"} concludes that $w \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$. $(iii) \implies (iv):$ Let $w \in \mathcal{H}_{s,p,0}({\mathbb R}^N)$ and $\epsilon >0$ be arbitrary. Then there exists $w_\epsilon\in C_c({\mathbb R}^N)$ such that $\norm{w-w_\epsilon}_{\mathcal{H}_{s,p}({\mathbb R}^N)} < \epsilon$. Thus Proposition [Proposition 26](#Cgzero){reference-type="ref" reference="Cgzero"} implies that ${\mathcal C}_{w_\epsilon}$ vanishes identically. Now as $w = w_\epsilon+ (w-w_\epsilon)$, it follows that ${\mathcal C}_w(x) \leq {\mathcal C}_{w_\epsilon}(x) + {\mathcal C}_{w - w_\epsilon}(x) \leq \norm{w - w_\epsilon}_{\mathcal{H}_{s,p}({\mathbb R}^N)} < \epsilon$; since $\epsilon>0$ is arbitrary, ${\mathcal C}^*_w=0$. By a similar argument one can show ${\mathcal C}_w(\infty)=0.$ $(iv) \implies (i):$ Assume that ${\mathcal C}^*_w =0= {\mathcal C}_w(\infty)$.
Let $(u_n)$ be a bounded sequence in $\mathcal{D}^{s,p}_0({\mathbb R}^N)$; up to a subsequence, $u_n \rightharpoonup u$ in $\mathcal{D}^{s,p}_0({\mathbb R}^N)$ for some $u$. Then by Lemma [Lemma 22](#mlc1){reference-type="ref" reference="mlc1"} and Lemma [Lemma 24](#mlc2){reference-type="ref" reference="mlc2"} we have $$\begin{aligned} \nu_{\infty} &\leq& C_H\ {\mathcal C}_w(\infty) \Gamma_{\infty} \label{1},\\ \norm{\nu} &\leq& C_H {\mathcal C}^{*}_w \norm{\Gamma} \label{2}, \\ \overline{\lim_{n \rightarrow\infty}} \int_{{\mathbb R}^N} |w||u_n|^p {\rm d}x&=& \int_{{\mathbb R}^N} |w||u|^p {\rm d}x+ \norm{\nu} + \nu_{\infty} \label{3}. \end{aligned}$$ As ${\mathcal C}^*_w=0= {\mathcal C}_w(\infty)$ we immediately conclude that $\displaystyle \overline{\lim_{n \rightarrow\infty}} \int_{{\mathbb R}^N} |w||u_n|^p {\rm d}x= \int_{{\mathbb R}^N} |w||u|^p{\rm d}x$ and hence $W: \mathcal{D}^{s,p}_0({\mathbb R}^N) \rightarrow {\mathbb R}$ is compact. ◻ # Weighted Eigenvalue Problem This section deals with the weighted eigenvalue problem given by [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"}. We show the existence of the first eigenvalue by using the Rayleigh quotient and then prove some qualitative properties of the first eigenvalue. Finally, we prove that there exist infinitely many eigenvalues increasing to infinity. ## Qualitative behaviour of the first eigenvalue We show the existence of an eigenvalue by following a direct variational approach. We begin with the Rayleigh quotient $Q(u)$ given by $$\label{Rayleigh quotient} Q(u) = \dfrac{\int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}}{\int_{\mathbb{R}^{N}} w|u|^{p} {\rm d}x},$$ with the domain of definition $$\label{L} \mathbb{L}:= \{ u \in {\mathcal D}^{s,p}(\mathbb{R}^{N}): \int_{\mathbb{R}^{N}} w|u|^{p} {\rm d}x> 0 \}.$$ Since $w \in L_{loc}^{1}(\mathbb{R}^{N})$ and $w_{1} \not\equiv 0$, by [@Prashanth Proposition 4.2] there exists $\phi \in C_{c}^{\infty}(\mathbb{R}^{N})$ such that $\int_{\mathbb{R}^{N}} w|\phi|^{p}{\rm d}x>0$.
Therefore, the set $\mathbb{L}$ is non-empty. Now, let us consider $$\label{M} \mathbb{S} := \{ u \in {\mathcal D}^{s,p}(\mathbb{R}^{N}) : \int_{\mathbb{R}^{N}} w|u|^{p} {\rm d}x= 1 \},$$ $$\label{J(u)} J(u) = \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}.$$ If $Q$ is $C^{1}$, then the critical points of $Q$ over $\mathbb{L}$ are precisely the solutions of the Euler-Lagrange equation associated with the weighted eigenvalue problem ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}), and the corresponding critical values of $Q$ are the eigenvalues of ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}). Observe that finding a critical point of the Rayleigh quotient $Q$ over the domain $\mathbb{L}$ is equivalent to finding a critical point of the functional $J$ over $\mathbb{S}$; i.e., there is a one-to-one correspondence between them. Therefore, we look for critical points of the functional $J$ on $\mathbb{S}$ under suitable sufficient assumptions on $w_1$. One of the main difficulties in showing the existence of a critical point of $J$ on $\mathbb{S}$ arises due to the non-compactness of the map $W$. Since we only assume that $w_{2}$ is locally integrable, the map $W: {\mathcal D}^{s,p}(\mathbb{R}^{N}) \rightarrow \mathbb{R}$ given by $$\label{W(u)} W(u) = \int_{\mathbb{R}^{N}} w|u|^{p}{\rm d}x,$$ may fail to be continuous, and hence $\mathbb{S}$ may not be closed in ${\mathcal D}^{s,p}(\mathbb{R}^{N})$. Nevertheless, we prove that a minimizing sequence of $J$ on $\mathbb{S}$ has a weak limit, which also lies in $\mathbb{S}$.
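As a purely illustrative aside, in the local linear case the minimization of a Rayleigh quotient reduces, after discretization, to a generalized matrix eigenvalue problem. The sketch below is a toy model and not part of the analysis: it assumes $p=2$, replaces the fractional $p$-Laplacian by the operator $-u''$ on $(0,1)$ with zero boundary values, and uses a hypothetical smooth positive weight. It only illustrates how the first eigenvalue arises as the minimum of the discrete quotient and how the corresponding eigenfunction keeps a constant sign.

```python
import numpy as np

# Toy model (assumptions: p = 2, -u'' instead of the fractional p-Laplacian,
# a uniform grid on (0, 1) with Dirichlet boundary values, hypothetical weight w).
# Minimizing the discrete Rayleigh quotient <Au, u> / <Wu, u> is equivalent to
# the generalized eigenproblem A u = lam W u; its smallest eigenvalue plays the
# role of lambda_1.
n = 199
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Stiffness matrix of -u'' with Dirichlet conditions (second differences).
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

w = 1.0 + x * (1.0 - x)      # hypothetical positive weight, bounded away from 0
D = 1.0 / np.sqrt(w)         # diagonal of W^{-1/2} for the mass matrix W = diag(w)

# Symmetric reduction: A u = lam W u  <=>  (W^{-1/2} A W^{-1/2}) y = lam y,
# with u = W^{-1/2} y.  Since D is diagonal, (A * D).T * D equals D A D.
B = (A * D).T * D
lam, Y = np.linalg.eigh(B)   # eigenvalues in ascending order
lam1 = lam[0]
u1 = D * Y[:, 0]             # first eigenfunction on the grid

print(lam1 > 0.0)                          # the first eigenvalue is positive
print(np.all(u1 > 0) or np.all(u1 < 0))    # its eigenfunction has constant sign
```

The symmetric reduction is chosen so that `numpy.linalg.eigh` (which requires a symmetric matrix) can be used directly; the constant sign of the computed eigenfunction mirrors the behaviour established below for first eigenfunctions of the continuous problem.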
From the definition of the space ${\mathcal D}^{s,p}(\mathbb{R}^{N})$, it is easy to check that the functional $J$ is coercive and weakly lower semi-continuous on ${\mathcal D}^{s,p}(\mathbb{R}^{N})$. Further, if $w_{1} \in \mathcal{H}_{s,p,0}(\mathbb{R}^{N})$, the map $W_{1}: {\mathcal D}^{s,p}(\mathbb{R}^{N}) \rightarrow \mathbb{R}$ given by $$\label{W1(u)} W_{1}(\varphi) = \int_{\mathbb{R}^{N}} w_{1}|\varphi|^{p}{\rm d}x,$$ is continuous and compact on ${\mathcal D}^{s,p}(\mathbb{R}^{N})$ by Theorem [Theorem 3](#allinone1){reference-type="ref" reference="allinone1"}. **Theorem 27**. *Let $w \in L_{loc}^{1}(\mathbb{R}^{N})$ with $w_{1} \in \mathcal{H}_{s,p,0}(\mathbb{R}^{N}), w_1 \not\equiv 0$ and $sp<N$. Then $J$ admits a minimizer on $\mathbb{S}$.* *Proof.* Since $w \in L_{loc}^{1}(\mathbb{R}^{N})$ and $w_{1} \not\equiv 0$, by [@Prashanth Proposition 4.2] there exists $\phi \in C_{c}^{\infty}(\mathbb{R}^{N})$ such that $\int_{\mathbb{R}^{N}} w|\phi|^{p}{\rm d}x>0$ and hence $\mathbb{S} \neq \emptyset$. Let $\{u_{n}\}$ be a minimizing sequence for $J$ on $\mathbb{S}$; i.e., $$\lim_{n \rightarrow \infty} J(u_{n}) = \lambda_{1} := \inf_{u \in \mathbb{S}} J(u).$$ By the coercivity of $J$, $\{u_{n}\}$ is bounded in ${\mathcal D}^{s,p}(\mathbb{R}^{N})$ and hence by reflexivity of ${\mathcal D}^{s,p}(\mathbb{R}^{N})$, the sequence $\{u_{n}\}$ admits a weakly convergent subsequence in ${\mathcal D}^{s,p}(\mathbb{R}^{N})$. Let us denote the subsequence by $\{u_{n}\}$ itself and the weak limit by $u$ in ${\mathcal D}^{s,p}(\mathbb{R}^{N})$.
Further, the compactness of the map $W_{1}$ gives $$\lim_{n \rightarrow \infty} \int_{\mathbb{R}^{N}} w_{1}|u_{n}|^{p}{\rm d}x= \int_{\mathbb{R}^{N}} w_{1}|u|^{p}{\rm d}x.$$ Since $u_{n} \in \mathbb{S}$, we write $$\int_{\mathbb{R}^{N}} w_{2}|u_{n}|^{p}{\rm d}x= \int_{\mathbb{R}^{N}} w_{1}|u_{n}|^{p}{\rm d}x-1.$$ Also, since the embedding ${\mathcal D}^{s,p}(\mathbb{R}^{N}) \hookrightarrow L_{loc}^{p}(\mathbb{R}^{N})$ is compact, $u_{n} \rightarrow u$ a.e. in $\mathbb{R}^{N}$ up to a subsequence. We apply Fatou's lemma to get $$\int_{\mathbb{R}^{N}} w_{2}|u|^{p}{\rm d}x\leq \int_{\mathbb{R}^{N}} w_{1}|u|^{p}{\rm d}x-1,$$ which shows that $\int_{\mathbb{R}^{N}} w|u|^{p}{\rm d}x\geq 1$. Set $\tilde{u} := \dfrac{u}{(\int_{\mathbb{R}^{N}} w|u|^{p}{\rm d}x)^{1/p}}$. Since $J$ is weakly lower semi-continuous, we have $$\lambda_{1} \leq J(\tilde{u}) = \dfrac{J(u)}{\int_{\mathbb{R}^{N}} w|u|^{p} {\rm d}x} \leq J(u) \leq \liminf_{n} J(u_{n}) = \lambda_{1}.$$ Thus equality must hold at each step, and $\int_{\mathbb{R}^{N}} w|u|^{p}{\rm d}x=1$, which shows that $u \in \mathbb{S}$ and $J(u) = \lambda_{1}$. Hence, $J$ admits a minimizer $u$ on $\mathbb{S}$. ◻ Further, we prove that any minimizer of $Q$ on $\mathbb{L}$ is an eigenfunction of [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"}. **Proposition 28**. *Let $u$ be a minimizer of $Q$ on $\mathbb{L}$. Then $u$ is an eigenfunction of ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}).* *Proof.* For each $\phi \in C_{c}^{\infty}(\mathbb{R}^{N})$, one can verify, using the dominated convergence theorem, that $Q$ admits a directional derivative along $\phi$.
Since $u$ is a minimizer of $Q$ on $\mathbb{L}$, we have the necessary condition $$\begin{aligned} \dfrac{\mathrm{d}}{\mathrm{d}t} Q(u + t \phi) |_{t = 0} &= 0.\end{aligned}$$ This further implies $$\begin{aligned} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x)-u(y)|^{p-2} (u(x)-u(y)) (\phi(x) -\phi(y))}{ |x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}&= \lambda_{1} \int_{\mathbb{R}^{N}} w|u|^{p-2}u\phi~{\rm d}x, \end{aligned}$$ $\text{for all}~\phi \in C_{c}^{\infty}(\mathbb{R}^{N})$. Now using the density of $C_{c}^{\infty}(\mathbb{R}^{N})$ in ${\mathcal D}^{s,p}(\mathbb{R}^{N})$, we can conclude $$\int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x)-u(y)|^{p-2} (u(x)-u(y)) (\phi(x) -\phi(y))}{ |x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}= \lambda_{1} \int_{\mathbb{R}^{N}} w|u|^{p-2}u\phi~{\rm d}x,$$ $\text{for all}~ \phi \in {\mathcal D}^{s,p}(\mathbb{R}^{N}).$ ◻ Next, we prove that the first eigenfunction does not change its sign. We adapt the idea from the article [@Cui] to prove this lemma. **Lemma 29**. *The first eigenfunctions (i.e., the eigenfunctions corresponding to the first eigenvalue $\lambda_1$) of the weighted eigenvalue problem ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}) have constant sign. Moreover, a non-negative first eigenfunction is positive.* *Proof.* Let $u_{1}$ be a first eigenfunction of [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"}, corresponding to the first eigenvalue $\lambda_1$. Then $u_1$ is a minimizer of $J$ over $\mathbb{S}$. Since $u_1 \in \mathbb{S}$, we also have $|u_1|\in \mathbb{S}$.
Now we have $$\begin{aligned} \lambda_1 = \inf\limits_{u \in \mathbb{S}} \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}\,{\mathrm{d}x\mathrm{d}y}&\leq \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{\big||u_1(x)|-|u_1(y)|\big|^{p}}{|x-y|^{N+sp}}\,{\mathrm{d}x\mathrm{d}y}\\ &\leq \int_{{\mathbb R}^N \times {\mathbb R}^N} \frac{|u_1(x)-u_1(y)|^{p}}{|x-y|^{N+sp}}\,{\mathrm{d}x\mathrm{d}y}= \lambda_1.\end{aligned}$$ Therefore, equality must hold at each step, which forces $u_1(x)u_1(y) \geq 0$ for a.e. $x,y$, i.e., either $u_{1}^{+} \equiv 0$ or $u_{1}^{-} \equiv 0$. Thus, the eigenfunction $u_{1}$ corresponding to the first eigenvalue $\lambda_1$ does not change its sign. If we assume $u_1 \geq 0$, then we have $$(-\Delta_{p})^{s}u_{1} + \lambda_{1}w_{2} (u_{1})^{p-1} = \lambda_{1}w_{1} (u_{1})^{p-1} \geq 0 ~~\text{in}~\mathbb{R}^{N}.$$ Thus the strong minimum principle [@Pezzo Theorem 1.2] yields $u_{1} >0$ a.e. in $\mathbb{R}^{N}$. ◻ The next lemma shows that the first eigenfunctions are the only eigenfunctions that do not change their sign. We use the idea of [@Goyal2018eigenvalues Theorem 3.3] to prove the following lemma. **Lemma 30**. *The eigenfunctions of [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"} corresponding to eigenvalues other than $\lambda_1$ change their sign.* *Proof.* We assume that $u_1$ and $u$ are eigenfunctions associated with two distinct eigenvalues $\lambda_1$ and $\lambda$, respectively. Then we have the following: $$\begin{aligned} (-\Delta_{p})^{s}u_{1} &= \lambda_{1}w (u_{1})^{p-1} ~~\text{in}~({\mathcal D}^{s,p}(\mathbb{R}^{N}))', \label{equation first}\\ (-\Delta_{p})^{s}u &= \lambda w |u|^{p-2}u ~~\text{in}~({\mathcal D}^{s,p}(\mathbb{R}^{N}))'. \label{equation second}\end{aligned}$$ We proceed by contradiction: assume that the eigenfunction $u$ does not change its sign.
Without loss of generality, we may suppose that $u \geq 0$. We take $\{\phi_m\}$ as a sequence in $C^\infty_c(\mathbb{R}^{N})$ such that $\phi_m \rightarrow u_1$ in ${\mathcal D}^{s,p}({\mathbb R}^N)$ as $m \rightarrow \infty$. Now we take two test functions $\psi_1 = u_1$ and $\psi_2 = \frac{\phi^p_m}{(u+\frac{1}{m})^{p-1}}$ (denoted by $\psi_1, \psi_2$ to avoid confusion with the weights $w_1, w_2$). First we show that $\psi_2 \in {\mathcal D}^{s,p}(\mathbb{R}^{N})$. We have $$\begin{aligned} |\psi_2(x)-\psi_2(y)| &= \bigg|\frac{\phi^p_m(x)}{(u+\frac{1}{m})^{p-1}(x)} - \frac{\phi^p_m(y)}{(u+\frac{1}{m})^{p-1}(y)} \bigg|\\ & = \bigg|\frac{\phi^p_m(x)- \phi^p_m(y)}{(u+\frac{1}{m})^{p-1}(x)} + \frac{\phi^p_m(y) \bigg((u+\frac{1}{m})^{p-1}(y) - (u+\frac{1}{m})^{p-1}(x) \bigg)}{(u+\frac{1}{m})^{p-1}(x)(u+\frac{1}{m})^{p-1}(y)} \bigg|\\ & \leq m^{p-1}|\phi^p_m(x)- \phi^p_m(y)| + \norm{\phi_m}_{\infty}^p \frac{\big|(u+\frac{1}{m})^{p-1}(x) - (u+\frac{1}{m})^{p-1}(y) \big|}{(u+\frac{1}{m})^{p-1}(x)(u+\frac{1}{m})^{p-1}(y)}\\ & \leq m^{p-1}p(\phi^{p-1}_m(x)+ \phi^{p-1}_m(y))|\phi_m(x)- \phi_m(y)|\\ &\qquad + \norm{\phi_m}_{\infty}^p (p-1)\frac{\big((u+\frac{1}{m})^{p-2}(x) + (u+\frac{1}{m})^{p-2}(y)\big)}{(u+\frac{1}{m})^{p-1}(x)(u+\frac{1}{m})^{p-1}(y)}\times \\ &\qquad \quad \times\bigg|(u+\frac{1}{m})(x) - (u+\frac{1}{m})(y)\bigg|\\ & \leq 2pm^{p-1}\norm{\phi_m}_{\infty}^{p-1}|\phi_m(x)- \phi_m(y)| + \norm{\phi_m}_{\infty}^p (p-1)|u(x) -u(y)| \times\\ &\qquad \times \bigg( \frac{1}{(u+\frac{1}{m})(x)~(u+\frac{1}{m})^{p-1}(y)} +\frac{1}{(u+\frac{1}{m})^{p-1}(x)~(u+\frac{1}{m})(y)} \bigg).\end{aligned}$$ Therefore, we finally get from the above: $$|\psi_2(x)-\psi_2(y)| \leq C(m,p,\norm{\phi_m}_{\infty}) \big(|\phi_m(x)- \phi_m(y)| + |u(x) -u(y)| \big).$$ Since $\phi_m$ and $u$ both belong to ${\mathcal D}^{s,p}(\mathbb{R}^{N})$, we conclude from the above inequality that $\psi_2 \in {\mathcal D}^{s,p}(\mathbb{R}^{N})$. 
Taking $\psi_1$ and $\psi_2$ as test functions in [\[equation first\]](#equation first){reference-type="eqref" reference="equation first"} and [\[equation second\]](#equation second){reference-type="eqref" reference="equation second"} respectively, we have $$\label{equation third} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u_1(x)-u_1(y)|^{p}}{ |x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}= \lambda_{1} \int_{\mathbb{R}^{N}} w|u_1|^{p}\, {\rm d}x,$$ and $$\begin{aligned} \label{equation sp} \begin{split} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x)-u(y)|^{p-2} (u(x)-u(y))}{ |x-y|^{N+sp}}\bigg(\frac{\phi^p_m(x)}{(u+\frac{1}{m})^{p-1}(x)}-\frac{\phi^p_m(y)}{(u+\frac{1}{m})^{p-1}(y)} \bigg) {\mathrm{d}x\mathrm{d}y}\\ = \lambda_{} \int_{\mathbb{R}^{N}} w|u|^{p-2}u\frac{\phi^p_m(x)}{(u+\frac{1}{m})^{p-1}(x)}\,{\rm d}x. \end{split}\end{aligned}$$ From Lemma [Lemma 7](#Picone identity){reference-type="ref" reference="Picone identity"} we have $K(\phi_m, u + \frac{1}{m}) \geq 0$, where $K$ is as in [\[def:K\]](#def:K){reference-type="eqref" reference="def:K"}. Now, combining this inequality with [\[equation sp\]](#equation sp){reference-type="eqref" reference="equation sp"}, we get $$\label{equation four} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|\phi_m(x)-\phi_m(y)|^{p}}{ |x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}- \lambda_{} \int_{\mathbb{R}^{N}} w\phi_m^{p}\bigg(\frac{u}{u+ \frac{1}{m}} \bigg)^{p-1}\, {\rm d}x\geq 0.$$ Next, subtracting [\[equation third\]](#equation third){reference-type="eqref" reference="equation third"} from [\[equation four\]](#equation four){reference-type="eqref" reference="equation four"} and taking the limit as $m \rightarrow \infty$, we obtain $$(\lambda_1 -\lambda) \int_{\mathbb{R}^{N}}w |u_1|^p\, {\rm d}x\geq 0.$$ Since $u_1 \not\equiv 0$, the left-hand side of [\[equation third\]](#equation third){reference-type="eqref" reference="equation third"} is positive, and hence $\int_{\mathbb{R}^{N}}w |u_1|^p\, {\rm d}x> 0$. Therefore, the above inequality forces $\lambda_1 \geq \lambda$; since $\lambda \neq \lambda_1$, we get $\lambda_1 > \lambda$, which contradicts the fact that $\lambda_1$ is the smallest eigenvalue. Thus, the proof is complete.
◻ Further, we show the simplicity of the first eigenvalue of [\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="eqref" reference="Weighetd eigenvalue problem"}. **Lemma 31**. *The eigenfunctions of ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}) corresponding to $\lambda_{1}$ are unique up to a constant multiple, i.e., $\lambda_{1}$ is simple.* *Proof.* Let $\phi_{1}$ and $\phi_{2}$ be two eigenfunctions corresponding to the same eigenvalue $\lambda_{1}$. By Lemma 29, we may suppose that $\phi_{1}$, $\phi_{2}>0$ and, after normalization, $\phi_{1}, \phi_{2} \in \mathbb{S}$, namely, $$\int_{\mathbb{R}^{N}} w|\phi_{1}|^{p} {\rm d}x= \int_{\mathbb{R}^{N}} w|\phi_{2}|^{p} {\rm d}x= 1.$$ Let $$\Phi = \bigg(\dfrac{\phi_{1}^{p}+\phi_{2}^{p}}{2}\bigg)^{1/p};$$ then we have $\Phi \in \mathbb{S}$. Since the function $\alpha(r,s) := |r^{1/p} - s^{1/p}|^{p}$ is convex for $r,s>0$, we have the inequality $$\alpha\bigg(\frac{r_{1}+r_{2}}{2}, \frac{s_{1}+s_{2}}{2}\bigg) \leq \frac{1}{2}\alpha(r_{1},s_{1}) + \frac{1}{2}\alpha(r_{2},s_{2}),$$ where equality holds only for $r_{1}s_{2} = r_{2}s_{1}$ (see [@Lindgren2014 Lemma 13]). Therefore, applying the above inequality pointwise and using $\phi_{1},\phi_{2},\Phi \in \mathbb{S}$, we deduce $$\begin{aligned} \lambda_{1} \leq \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|\Phi(x) - \Phi(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\leq \frac{1}{2}\int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|\phi_{1}(x) - \phi_{1}(y)|^{p}}{|x-y|^{N+sp}}{\mathrm{d}x\mathrm{d}y}\hspace{30mm}\\ + \frac{1}{2} \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|\phi_{2}(x) - \phi_{2}(y)|^{p}}{|x-y|^{N+sp}}{\mathrm{d}x\mathrm{d}y}= \lambda_{1}. \end{aligned}$$ Thus, equality must hold at each step. Therefore, $\phi_{1}(x)\phi_{2}(y) = \phi_{1}(y)\phi_{2}(x)$ for a.e. $x,y$, which implies that $\phi_{1} = c \phi_{2}$ for some constant $c > 0$.
◻ ## Infinite set of eigenvalues This section deals with the existence of an infinite set of eigenvalues of ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}). We follow the Ljusternik-Schnirelmann theory on $C^1$-manifolds due to Szulkin [@Szulkin], which enables us to find the critical points of a functional $J$ on a manifold $M$. First, we recall the definitions of the Palais-Smale (PS) condition and of the genus. Assume that $M$ is a $C^1$-manifold and $f \in C^{1}(M; \mathbb{R})$. A sequence $\{u_n\} \subset M$ is said to be a (PS) sequence on $M$ if $f(u_n) \rightarrow \lambda$ for some $\lambda \in \mathbb{R}$ and $f'(u_n) \rightarrow 0$, where $f'(u)$ represents the Fréchet differential of $f$ at $u$. If every (PS) sequence $\{u_n\}$ admits a convergent subsequence, then we say the map $f$ satisfies the (PS) condition on $M$. Let $\Theta$ be the family of sets $A \subset M \setminus \{0\}$ such that $A$ is closed in $M$ and symmetric with respect to $0$, i.e., $z \in A$ implies $-z \in A$. If $A \in \Theta$, then the Krasnoselskii genus of $A$ is denoted by $\gamma(A)$ and is defined as the smallest integer $k$ for which there exists a non-vanishing odd continuous mapping from $A$ to $\mathbb{R}^{k}$. When no such map exists for any $k$, we set $\gamma(A) = \infty$, and we also set $\gamma(\emptyset) = 0$. We refer to [@Rabinowitz] for more details and properties of the genus. We can deduce the next theorem from [@Szulkin Corollary 4.1]. **Theorem 32**. *Let $M$ be a closed symmetric $C^1$-submanifold of a real Banach space $X$ with $0 \notin M$. Let $f \in C^1(M; \mathbb{R})$ be an even function satisfying the (PS) condition on $M$ and bounded below. Define $$\lambda_{j} := \inf_{A \in \Gamma_{j}} \sup_{x \in A} f(x),$$ where $\Gamma_j = \{ A \subset M : A ~\text{is compact and symmetric about the origin},~ \gamma(A) \geq j\}$. If, for a given $j$, $\lambda_j = \lambda_{j+1}= \cdots = \lambda_{j+p} \equiv \lambda$, then $\gamma(K_\lambda) \geq p + 1$, where $K_\lambda = \{x \in M : f(x) = \lambda ,~ f'(x) = 0 \}$.* For $w_{2} \in L^{1}_{loc}(\mathbb{R}^{N})$, we define $$\|u\|_{X}^{p} : = \|u\|_{s,p}^p + \int_{\mathbb{R}^{N}} w_{2}|u|^{p} {\rm d}x,$$ and $$X:= \{ u \in \mathcal{D}^{s,p}(\mathbb{R}^{N}): \|u\|_{X} < \infty\}.$$ **Lemma 33**. *The space $X = (X, \|\cdot\|_{X})$ is a uniformly convex Banach space.* *Proof.* We break the proof into a few steps, following the approach of [@Chen2022 Lemma 5.1].\
**Step 1:** First, we claim that $X$ is complete with respect to the given norm. Let $\{u_{n}\}$ be a Cauchy sequence in $X$, i.e., given any $\epsilon > 0$, there exists a positive integer $N_0$ depending on $\epsilon$ such that if $n,m \geq N_0$, then $$\label{Cauchy} \|u_{n} - u_{m}\|_{X} < \epsilon.$$ Following the definition of the norm on $X$, we observe that $\|u_{n}-u_{m}\|_{s,p} \leq \|u_{n}-u_{m}\|_{X}< \epsilon.$ This implies that the sequence $\{u_{n}\}$ is Cauchy in ${\mathcal D}^{s,p}(\mathbb{R}^{N})$. By completeness, there exists $u \in {\mathcal D}^{s,p}(\mathbb{R}^{N})$ such that $u_{n} \rightarrow u$ in ${\mathcal D}^{s,p}(\mathbb{R}^{N})$. Now, we need to show that $u \in X$. There exists a subsequence $\{u_{n_{k}}\}$ of $\{u_n\}$ such that $u_{n_{k}} \rightarrow u$ a.e. in $\mathbb{R}^{N}~\text{as}~k \rightarrow \infty$. Now, applying Fatou's lemma and using ([\[Cauchy\]](#Cauchy){reference-type="ref" reference="Cauchy"}), we obtain $$\begin{aligned} \int_{\mathbb{R}^{N}} w_{2}|u|^{p} {\rm d}x\leq \liminf_{k \rightarrow \infty} \int_{\mathbb{R}^{N}} w_{2}|u_{n_{k}}|^{p} {\rm d}x\hspace{60mm}\\ \leq \liminf_{k \rightarrow \infty} ( \|u_{n_{k}}- u_{N_0}\|_{X} + \|u_{N_0}\|_{X})^{p} \leq (\epsilon + \|u_{N_0}\|_{X})^{p} < \infty.\end{aligned}$$ Thus, $u \in X$. Further, for $n \geq N_0$, we have $\|u_{n}-u\|_{X} \leq \liminf_{k \rightarrow \infty}\|u_{n}-u_{n_{k}}\|_{X} \leq \epsilon$.
Therefore, the sequence $\{u_n\}$ converges to $u$ strongly in $X$, i.e. $X$ is a complete space.\ **Step 2:** Now we want to show that $X$ is a uniformly convex Banach space. For $0 <\epsilon \leq 2$, let $u,v \in X$ such that $$\begin{aligned} \label{UC} \|u\|_{X}=1=\|v\|_{X} \ \text{and} \ \|u-v\|_{X} \geq \epsilon.\end{aligned}$$ We separately prove the case $1<p<2$ and $p \geq 2$. First, we begin with the case when $p \geq 2$. Let us recall the following inequality [@Adams2003sobolev Lemma 2.37, page 42] given by $$\label{adam} \bigg| \frac{a+b}{2} \bigg|^{p} + \bigg|\frac{a-b}{2} \bigg|^{p} \leq \frac{|a|^{p}+|b|^{p}}{2}, \ \text{for }a,b \in \mathbb{R}.$$ From [\[adam\]](#adam){reference-type="eqref" reference="adam"}, we can deduce the following: $$\begin{aligned} \bigg\| \frac{u+v}{2} \bigg\|_{X}^{p} + \bigg\|\frac{u-v}{2} \bigg\|_{X}^{p} &= \bigg\|{\frac{u+v}{2}}\bigg\|_{s,p}^{p} + \bigg\|{\frac{u-v}{2}}\bigg\|_{s,p}^{p} + \int_{\mathbb{R}^{N}} w_{2} \bigg(\bigg| \frac{u+v}{2} \bigg|^{p} + \bigg| \frac{u-v}{2} \bigg|^{p} \bigg) {\rm d}x\\ &\leq \frac{1}{2} \bigg[ \|u\|_{s,p}^{p} + \|v\|_{s,p}^{p} + \int_{\mathbb{R}^{N}} w_{2} (|u|^{p}+ |v|^{p}) {\rm d}x\bigg] \\ & = \frac{1}{2} [\|u\|_{X}^{p} + \|v\|_{X}^{p}] = 1. \end{aligned}$$ Thus by choosing $\delta = 1- \big(1-\big(\frac{\epsilon}{2}\big)^p\big)^{1/p}>0$, we can deduce from above that $\big\| \frac{u+v}{2} \big\|_{X} \leq 1-\delta$. Therefore, the space $X$ is uniformly convex for $p \geq 2$.\ Now we consider the case when $1<p<2$. 
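Before doing so, we note that the scalar inequality [\[adam\]](#adam){reference-type="eqref" reference="adam"} used in the $p \geq 2$ case above is elementary and easy to sanity-check numerically; the following sketch (the exponent $p = 3.7$ and the sample count are arbitrary choices, not taken from the paper) tests it on random pairs:

```python
import random

# Sanity check of |(a+b)/2|^p + |(a-b)/2|^p <= (|a|^p + |b|^p)/2 for p >= 2
# (Clarkson-type inequality for real numbers), on random samples.
random.seed(0)
p = 3.7                          # any exponent p >= 2 works here
ok = True
for _ in range(10_000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    lhs = abs((a + b) / 2) ** p + abs((a - b) / 2) ** p
    rhs = (abs(a) ** p + abs(b) ** p) / 2
    ok = ok and lhs <= rhs + 1e-9    # small tolerance for rounding
print(ok)
```

A random search of course proves nothing; it merely illustrates the inequality that drives the uniform convexity estimate.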
If we set $p'=\frac{p}{p-1}$, then by using [@Adams2003sobolev Theorem 2.13, page 28] and [@Adams2003sobolev Lemma 2.37, page 42] for $u,v \in {\mathcal D}^{s,p}({\mathbb R}^N)$ we have $$\begin{aligned} \bigg\| \frac{u+v}{2} \bigg\|_{s,p}^{p'} + \bigg\| \frac{u-v}{2} \bigg\|_{s,p}^{p'} &= \bigg\| \bigg( \bigg| \frac{u(x)+v(x)}{2} - \frac{u(y)+v(y)}{2} \bigg| \cdot |x-y|^{\frac{-N-sp}{p}} \bigg)^{p'} \bigg\|_{L^{p-1}({\mathbb R}^{2N})} \\ & \qquad + \bigg\| \bigg( \bigg| \frac{u(x)-v(x)}{2} - \frac{u(y)-v(y)}{2} \bigg| \cdot |x-y|^{\frac{-N-sp}{p}} \bigg)^{p'} \bigg\|_{L^{p-1}({\mathbb R}^{2N})} \\ & \leq \bigg\| \bigg( \bigg| \frac{u(x)-u(y) + v(x)-v(y)}{2} \bigg|^{p'} + \bigg| \frac{u(x)-u(y) - (v(x)-v(y))}{2} \bigg|^{p'} \bigg) \cdot \\ &\qquad \cdot |x-y|^{\frac{-N-sp}{p-1}} \bigg\|_{L^{p-1}({\mathbb R}^{2N})}\\ & \leq \bigg\| \bigg( \frac{|u(x)-u(y)|^p + |v(x)-v(y)|^p}{2} \bigg)^{\frac{1}{p-1}} \cdot |x-y|^{\frac{-N-sp}{p-1}} \bigg\|_{L^{p-1}({\mathbb R}^{2N})} \\ & = \big[ \frac{1}{2} \|u\|_{s,p}^p + \frac{1}{2} \|v\|_{s,p}^p \big]^{\frac{1}{p-1}}.\end{aligned}$$ Now for $0<\epsilon_1 \leq 2$ and $u,v \in {\mathcal D}^{s,p}({\mathbb R}^N)$ such that $\|u\|_{s,p} = 1 = \|v\|_{s,p}$ and $\|u-v\|_{s,p} \geq \epsilon_1$, we can choose $\delta_1 = 1- \big( 1- (\epsilon_1 /2)^{p'} \big)^{\frac{1}{p'}}>0$ so that $\big\| \frac{u+v}{2} \big\|_{s,p} \leq 1 - \delta_1$. Thus $\|\cdot\|_{s,p}$ is a uniformly convex norm. In a similar way the $\|u\|_{w_2,p} := \big( \int_{{\mathbb R}^N} w_2 |u|^p {\rm d}x\big)^{1/p}$ is also a uniformly convex norm.\ From [\[UC\]](#UC){reference-type="eqref" reference="UC"}, we can notice that $\|u\|_{s,p} \leq 1$ and $\|v\|_{s,p} \leq 1$ and also we can assume that $\|u-v\|_{s,p} \geq \frac{\epsilon}{2^{1/p}}$. Further, we claim that there exists some $\delta_2 >0$ such that $$\begin{aligned} \label{E1} \bigg\|\frac{u+v}{2} \bigg\|_{s,p}^p \leq \frac{1- \delta_2}{2} \big(\|u\|_{s,p}^p + \|v\|_{s,p}^p \big). 
\end{aligned}$$ We prove the above claim by contradiction, breaking the argument into two cases.\
**Case 1**. Let $\|u\|_{s,p} = 1$ and $\|v\|_{s,p} \leq 1$. Suppose, on the contrary, that the claim [\[E1\]](#E1){reference-type="eqref" reference="E1"} is not true, i.e., there exist an $\epsilon_0 >0$ and two sequences $\{u_n\}$ and $\{v_n\}$ in $X$ such that $\|u_n\|_{s,p} = 1$, $\|v_n\|_{s,p} \leq 1$ and $\|u_n-v_n\|_{s,p} \geq \frac{\epsilon_0}{2^{1/p}}$, satisfying $$\begin{aligned} \label{E2} \bigg\|\frac{u_n+v_n}{2} \bigg\|_{s,p}^p \geq \frac{1}{2} \big(1- \frac{1}{n} \big) \big(\|u_n\|_{s,p}^p + \|v_n\|_{s,p}^p \big).\end{aligned}$$ First we prove that $\lim\limits_{n \rightarrow \infty} \|v_n\|_{s,p} =1$. If this is not the case, there exist $B<1$ and a subsequence $\{v_{n_l}\}$ of $\{v_{n}\}$ such that $\|v_{n_l}\|_{s,p} \leq B <1$. Thus, we use the triangle inequality to obtain $$\begin{aligned} \label{E3} \bigg\|\frac{u_{n_l}+v_{n_l}}{2} \bigg\|_{s,p}^p \leq \bigg(\frac{\|u_{n_l}\|_{s,p} + \|v_{n_l}\|_{s,p}}{2} \bigg)^p \leq \frac{\|u_{n_l}\|_{s,p}^p + \|v_{n_l}\|_{s,p}^p}{2} \cdot \bigg(\frac{1+B}{2} \bigg)^p \bigg/ \bigg(\frac{1+B^p}{2} \bigg),\end{aligned}$$ where the last inequality follows from the monotonicity (increasing) of the function $f(x) = \frac{(1+x)^p}{1+x^p},\ 1<p<2, \ x \in (0,1)$. Observe that $\big(\frac{1+B}{2} \big)^p \big/ \big(\frac{1+B^p}{2} \big)<1$ for all $1<p<2$. Therefore, [\[E3\]](#E3){reference-type="eqref" reference="E3"} contradicts [\[E2\]](#E2){reference-type="eqref" reference="E2"}. Hence, we have $\lim\limits_{n \rightarrow \infty} \|v_n\|_{s,p} =1$.\
We define $w_n = \frac{v_n}{\|v_n\|_{s,p}}$; then it is easy to observe that $\lim\limits_{n \rightarrow \infty}\|v_n - w_n\|_{s,p}=0$.
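(As an aside, the monotonicity of $f(x) = \frac{(1+x)^p}{1+x^p}$ on $(0,1)$ invoked in [\[E3\]](#E3){reference-type="eqref" reference="E3"} follows from $(\log f)'(x) = \frac{p(1-x^{p-1})}{(1+x)(1+x^p)} > 0$ for $0<x<1$ and $p>1$; a quick numerical check, with the arbitrary sample exponent $p = 1.5$, reads:

```python
# Check that f(x) = (1+x)^p / (1+x^p) is increasing on (0,1) for a sample
# exponent 1 < p < 2, by testing consecutive grid values.
p = 1.5
xs = [k / 1000 for k in range(1, 1000)]          # grid in (0, 1)
fs = [(1 + x) ** p / (1 + x ** p) for x in xs]
increasing = all(f1 <= f2 for f1, f2 in zip(fs, fs[1:]))
print(increasing)
```

This is only an illustration on one grid and one exponent, not a proof.)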
Taking the limit as $n \rightarrow \infty$ in [\[E2\]](#E2){reference-type="eqref" reference="E2"} and using $\lim\limits_{n \rightarrow \infty} \|v_n\|_{s,p} =1$, we have $$\begin{aligned} 1 \leq \lim\limits_{n \rightarrow \infty} \bigg\|\frac{u_{n}+v_{n}}{2} \bigg\|_{s,p} \leq \lim\limits_{n \rightarrow \infty}\bigg\|\frac{u_{n}+w_{n}}{2} \bigg\|_{s,p} \leq 1,\end{aligned}$$ which implies $\lim\limits_{n \rightarrow \infty}\big\|\frac{u_{n}+w_{n}}{2} \big\|_{s,p}=1$. Using the fact that $\|u_n-v_n\|_{s,p} \geq \frac{\epsilon_0}{2^{1/p}}$ for all $n\geq 1$, there exists a positive integer $N_1$ such that $\|u_n-w_n\|_{s,p} \geq \frac{\epsilon_0}{2^{1+1/p}}$ for all $n \geq N_1$. Thus, the uniform convexity of the $\|\cdot\|_{s,p}$ norm ensures the existence of a $\delta_3 >0$ depending on $\epsilon_0$ such that $\big\|\frac{u_{n}+w_{n}}{2} \big\|_{s,p}\leq 1 - \delta_3$ for all $n \geq N_1$. This contradicts $\lim\limits_{n \rightarrow \infty}\big\|\frac{u_{n}+w_{n}}{2} \big\|_{s,p}=1$. Hence, the claim [\[E1\]](#E1){reference-type="eqref" reference="E1"} follows.\
**Case 2**. Let $\|u\|_{s,p} \leq 1$ and $\|v\|_{s,p} \leq 1$. Now either $\|u\|_{s,p} \leq \|v\|_{s,p}$ or $\|u\|_{s,p} \geq \|v\|_{s,p}$. We assume that $\|u\|_{s,p} \geq \|v\|_{s,p}>0$; the other case follows similarly. We define $u_1 = \frac{u}{\|u\|_{s,p}}, \ v_1 = \frac{v}{\|u\|_{s,p}}$. Notice that $\|u_1\|_{s,p}=1, \ \|v_1\|_{s,p}\leq 1 \ \text{and} \ \|u_1 - v_1\|_{s,p} \geq \frac{\epsilon}{2^{1/p}}$.
By Case 1, the inequality [\[E1\]](#E1){reference-type="eqref" reference="E1"} is true for $u_1$ and $v_1$, and therefore [\[E1\]](#E1){reference-type="eqref" reference="E1"} also holds for $u$ and $v$.\ Thus, using $\frac{\|u\|_{s,p}^p +\|v\|_{s,p}^p}{2} \geq \|\frac{u-v}{2} \|_{s,p}^p \geq \frac{\epsilon^p}{2^{p+1}}$, we get $$\begin{aligned} \bigg\|\frac{u+v}{2} \bigg\|_{X} &= \bigg(\bigg\|\frac{u+v}{2} \bigg\|_{s,p}^{p} + \int_{{\mathbb R}^N} w_2 \bigg|\frac{u+v}{2}\bigg|^p {\rm d}x\bigg)^{1/p} \\ & \leq \bigg( (1- \delta_2)\frac{\|u\|_{s,p}^p +\|v\|_{s,p}^p}{2} + \int_{{\mathbb R}^N} w_2 \big( \frac{|u|^p+|v|^p}{2} \big) {\rm d}x\bigg)^{1/p} \\ & = \bigg( \frac{1}{2} \|u\|_{X}^p + \frac{1}{2} \|v\|_{X}^p -\delta_2\frac{\|u\|_{s,p}^p +\|v\|_{s,p}^p}{2} \bigg)^{1/p} \\ & \leq \bigg( 1 -\delta_2 \frac{\epsilon^p}{2^{p+1}} \bigg)^{1/p} := 1 - \delta,\end{aligned}$$ where $\delta = 1- \bigg( 1 -\delta_2 \frac{\epsilon^p}{2^{p+1}} \bigg)^{1/p}>0$. Hence, we conclude that the space $(X, \|\cdot\|_X)$ is also uniformly convex for $1<p<2$. ◻ To fix the notations, let us denote the dual space of $X$ by $X'$ and the duality action by $\langle \cdot , \cdot \rangle$. By the definition of $\|\cdot\|_{X}$, one can verify easily that the function $W_{2}$ given by $$W_{2}(\varphi) = \int_{\mathbb{R}^{N}} w_{2}|\varphi|^{p} {\rm d}x,$$ is continuous on $X$. 
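The differentiability properties of $W_2$ can also be illustrated numerically. The sketch below compares a finite-difference quotient of a discretized $W_2$ with the directional derivative $p \int_{\mathbb{R}^N} w_2 |\varphi|^{p-2} \varphi v \, {\rm d}x$ on a uniform grid; the weight $w_2$ and the functions $\varphi$, $v$ are hypothetical choices made only for this illustration:

```python
import math

# Finite-difference check of <W_2'(phi), v> = p * int w_2 |phi|^{p-2} phi v dx
# on a uniform grid over [-1, 1]; all concrete choices are illustrative.
n, p = 200, 2.5
x = [-1.0 + 2.0 * k / (n - 1) for k in range(n)]
h = x[1] - x[0]
w2 = [0.2 * math.exp(-t * t) for t in x]        # hypothetical weight
phi = [math.sin(math.pi * t) + 0.3 for t in x]  # sample function
v = [math.cos(2 * t) for t in x]                # direction of perturbation

W2 = lambda f: sum(wi * abs(fi) ** p for wi, fi in zip(w2, f)) * h
t = 1e-6
fd = (W2([f + t * g for f, g in zip(phi, v)]) - W2(phi)) / t
exact = p * sum(wi * abs(f) ** (p - 2) * f * g
                for wi, f, g in zip(w2, phi, v)) * h
print(abs(fd - exact) < 1e-4 * (1 + abs(exact)))
```

The two quantities agree up to the first-order finite-difference error, consistent with $W_2$ being $C^1$.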
Moreover, the map $W_{2}$ is continuously differentiable on $X$ and the Fréchet derivative of $W_2$ is given by $$\langle W'_{2}(\varphi), v \rangle = p \int_{\mathbb{R}^{N}} w_{2}|\varphi|^{p-2}\ \varphi v~ {\rm d}x.$$ Similarly, using the weighted fractional Hardy inequality, we can verify that the map $W_1$ is $C^1$ on $X$ and that its Fréchet derivative is given by $$\langle W'_{1}( \varphi), v \rangle = p \int_{\mathbb{R}^{N}} w_{1}| \varphi|^{p-2} \varphi v~ {\rm d}x.$$ Thus, for $w_1 \in \mathcal{H}_{s,p,0}({\mathbb R}^N) \text{ and } w_2 \in L^{1}_{loc}({\mathbb R}^N)$, the map $W$ is in $C^{1}(X;{\mathbb R})$ and its Fréchet derivative is given by $$\langle W'(\varphi), v \rangle = p \int_{\mathbb{R}^{N}} w| \varphi|^{p-2}\varphi v~ {\rm d}x.$$ Recall that a real number $\alpha \in {\mathbb R}$ is called a regular value of $W$ if $W'(\varphi) \neq 0$ for all $\varphi$ such that $W(\varphi)=\alpha$. It is easy to note that for $u \in \mathbb{S}$, $\langle W'(u), u \rangle = p$, and therefore $W'(u) \neq 0$; thus, $1$ is a regular value of $W$. Moreover, the set $\mathbb{S}$ admits a $C^1$ Banach submanifold structure on $X$ by [@Deimling1985 Example 27.2].\
Further, we verify that the functional $J$ satisfies all the conditions of Theorem [Theorem 32](#Ljusternik){reference-type="ref" reference="Ljusternik"}. **Lemma 34**. *The functional $J$ is $C^1$ on $\mathbb{S}$ and the Fréchet derivative of $J$ is given by $$\langle J'(u),v \rangle = p \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \dfrac{|u(x)-u(y)|^{p-2} (u(x)-u(y)) (v(x) -v(y))}{ |x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}.$$* We omit the proof, as it is straightforward. **Remark 35**. *We can deduce from [@Drabek Proposition 6.4.35] that $$\|J'(u)\| = \min_{\lambda \in \mathbb{R}} \| J'(u) - \lambda W'(u)\| .$$ Thus $J'(u_n) \rightarrow 0$ if and only if there exists a sequence $\{\lambda_n \}$ of real numbers such that $J'(u_n) - \lambda_{n} W'(u_n) \rightarrow 0.$* **Definition 36**.
*For $\lambda \in \mathbb{R}^{+}$, we define $A_\lambda : X \rightarrow X'$ as $$A_\lambda = J' + \lambda W'_{2}.$$* The following lemma is motivated by Szulkin and Willem [@Szulkin1998 Lemma 4.3]. **Lemma 37**. *If $u_{n} \rightharpoonup u$ in $X$ and $\langle A_{\lambda}(u_{n}), u_{n}-u\rangle \longrightarrow 0$, then $u_{n} \rightarrow u$ in $X$.* *Proof.* Since $u_n \rightharpoonup u$ in $X$ and $A_{\lambda}(u) \in X'$ is fixed, we clearly have $\langle A_{\lambda}(u_{n})- A_{\lambda}(u), u_{n}-u \rangle \longrightarrow 0.$ We can write $\langle A_{\lambda}(u_{n})- A_{\lambda}(u), u_{n}-u \rangle = B_n + \lambda C_n,$ where $B_n = \langle J'(u_n) - J'(u), u_n -u \rangle$ and $C_n = \langle W'_{2}(u_n) - W'_{2}(u), u_n -u \rangle$. Now, by using Hölder's inequality, we have $$\begin{aligned} {\frac{C_n}{p}} &= \int_{\mathbb{R}^{N}} w_{2} (|u_{n}|^{p-2}u_{n} - |u|^{p-2}u)(u_{n}-u) {\rm d}x\\ &= \int_{\mathbb{R}^{N}} w_{2} (|u_{n}|^{p}+ |u|^{p} -|u_{n}|^{p-2}u_{n}u - |u|^{p-2}u u_{n}) {\rm d}x\\ &=\int_{\mathbb{R}^{N}} w_{2} (|u_{n}|^{p}+ |u|^{p}){\rm d}x- \int_{\mathbb{R}^{N}} w_{2}|u_{n}|^{p-2}u_{n}u ~{\rm d}x- \int_{\mathbb{R}^{N}} w_{2}|u|^{p-2}u u_{n}~ {\rm d}x\\ &\geq \int_{\mathbb{R}^{N}} w_{2} (|u_{n}|^{p}+ |u|^{p}){\rm d}x- \Bigg( \int_{\mathbb{R}^{N}} w_{2}|u_{n}|^{p} ~{\rm d}x\Bigg)^{\frac{p-1}{p}} \Bigg( \int_{\mathbb{R}^{N}} w_{2}|u|^{p} ~{\rm d}x\Bigg)^{\frac{1}{p}} ~~\\ &\quad-\Bigg( \int_{\mathbb{R}^{N}} w_{2}|u|^{p} ~{\rm d}x\Bigg)^{\frac{p-1}{p}} \Bigg( \int_{\mathbb{R}^{N}} w_{2}|u_{n}|^{p} ~{\rm d}x\Bigg)^{\frac{1}{p}}\\ &= \Bigg[ \Bigg( \int_{\mathbb{R}^{N}} w_{2}|u_{n}|^{p} ~{\rm d}x\Bigg)^{\frac{p-1}{p}} - \Bigg( \int_{\mathbb{R}^{N}} w_{2}|u|^{p} ~{\rm d}x\Bigg)^{\frac{p-1}{p}} \Bigg] \times\\ &\quad~~~~~~~~\times\Bigg[ \Bigg( \int_{\mathbb{R}^{N}} w_{2}|u_{n}|^{p} ~{\rm d}x\Bigg)^{\frac{1}{p}} - \Bigg( \int_{\mathbb{R}^{N}} w_{2}|u|^{p} ~{\rm d}x\Bigg)^{\frac{1}{p}} \Bigg] \geq 0.
\end{aligned}$$ Now $$\begin{aligned} { \frac{B_n}{p} }=&\int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \bigg( \frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))}{|x-y|^{N+sp}} - \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+sp}} \bigg) \\ &\hspace{3cm} \cdot \big(u_{n}(x)-u(x)-u_{n}(y)+u(y)\big) ~{\mathrm{d}x\mathrm{d}y}\\ &= \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x) - u_{n}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}+ \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ & \hspace{1cm}- \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(u(x)-u(y))}{|x-y|^{N+sp}}{\mathrm{d}x\mathrm{d}y}\\ & \hspace{2cm} - \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y)) (u_{n}(x)-u_{n}(y))}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ % &\int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \bigg( \frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))}{|x-y|^{N+sp}} - \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+sp}} \bigg) \\ % &~~~~~~~~~~~~~~~~~~~.\big(u_{n}(x)-u(x)-u_{n}(y)+u(y)\big) ~\dxy \\ &\geq \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x) - u_{n}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}+ \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\\ &\hspace{1cm} - \bigg( \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x) - u_{n}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{p-1}{p}} \cdot \bigg( \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{1}{p}} \\ &\hspace{1.5cm} - \bigg( \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{p-1}{p}} \cdot \bigg( \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x) - u_{n}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{1}{p}}\\ &= \bigg[ \bigg( 
\int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x) - u_{n}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{p-1}{p}} - \bigg( \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{p-1}{p}} \bigg] \\ &\hspace{1cm}\bigg[ \bigg( \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x) - u_{n}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{1}{p}} - \bigg( \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{1}{p}} \bigg] \geq 0. \end{aligned}$$ Since $\langle A_{\lambda}(u_{n})- A_{\lambda}(u), u_{n}-u \rangle \longrightarrow 0$ as $n \rightarrow \infty$ and the sequences $B_n$ and $C_n$ are non-negative, we get $$B_n \rightarrow 0 \text{ and } C_n \rightarrow 0 \text{ as } n \rightarrow \infty.$$ This further implies $$\int_{\mathbb{R}^{N}} w_{2}|u_{n}|^{p} ~{\rm d}x\rightarrow \int_{\mathbb{R}^{N}} w_{2}|u|^{p} ~{\rm d}x,$$ and $$\int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x) - u_{n}(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}\rightarrow \int_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x) - u(y)|^{p}}{|x-y|^{N+sp}} {\mathrm{d}x\mathrm{d}y}.$$ Hence $\|u_{n}\|_{X} \rightarrow \|u\|_{X}$; together with $u_{n} \rightharpoonup u$ and the uniform convexity of $X$ (Lemma 33), this yields $u_{n} \rightarrow u$ in $X$. ◻ **Remark.** The application of Hölder's inequality in the above proof can be justified as follows (see p. 16 of Bonder's article). For $p' = \frac{p}{p-1}$, we have $$\xi_{n} = \frac{|u_{n}(x)-u_{n}(y)|^{p-2} (u_{n}(x)-u_{n}(y))}{|x-y|^{\frac{N+sp}{p'}}} \in L^{p'}(\mathbb{R}^{N} \times \mathbb{R}^{N}),$$ and $\{ \xi_{n} \}$ is bounded in $L^{p'}(\mathbb{R}^{N} \times \mathbb{R}^{N})$; also $\frac{u(x)-u(y)}{|x-y|^{\frac{N}{p} + s}} \in L^{p}(\mathbb{R}^{N} \times \mathbb{R}^{N})$. Therefore, by Hölder's inequality, we have $$\iint_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x)-u_{n}(y)|^{p-2} (u_{n}(x)-u_{n}(y))}{|x-y|^{\frac{N+sp}{p'}}} \cdot \frac{u(x)-u(y)}{|x-y|^{\frac{N}{p} + s}}\, {\mathrm{d}x\mathrm{d}y}\leq \bigg( \iint_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u_{n}(x)-u_{n}(y)|^{p}}{|x-y|^{N+sp}}{\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{p-1}{p}} \bigg( \iint_{\mathbb{R}^{N} \times \mathbb{R}^{N}} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}{\mathrm{d}x\mathrm{d}y}\bigg)^{\frac{1}{p}}.$$ **Lemma 38**. *For $w_{1} \in \mathcal{H}_{s,p,0}(\mathbb{R}^{N})$, the map $W'_{1} : X \rightarrow X'$ is compact.* *Proof.* Let $u_{n} \rightharpoonup u$ in $X$ and let $v \in X$. For $w_{1} \in \mathcal{H}_{s,p,0}(\mathbb{R}^{N})$, by Theorem [Theorem 11](#H1){reference-type="ref" reference="H1"} we have $$\label{Sobolev type inequality} \| w_{1}^{\frac{1}{p}} u \|_{p} \leq C \|w_{1}\|^{\frac{1}{p}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \|u\|_{s,p},$$ where the constant $C>0$ depends only on $N,s,p$ and is independent of $u$. Thus, we use Hölder's inequality to obtain $$\begin{aligned} |\langle W_{1}'(u_n) - W'_{1}(u), v \rangle| &\leq ~\int_{\mathbb{R}^{N}} w_{1} |( |u_{n}|^{p-2}u_{n} - |u|^{p-2}u)| |v| {\rm d}x\\ &\leq \bigg(\int_{\mathbb{R}^{N}} w_{1} |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x\bigg)^{\frac{p-1}{p}} \cdot \bigg(\int_{\mathbb{R}^{N}} w_{1} |v|^{p} {\rm d}x\bigg)^{\frac{1}{p}} \\ &\leq C~ \|w_{1}\|^{\frac{1}{p}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \|v\|_{s,p} \bigg(\int_{\mathbb{R}^{N}} w_{1} |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x\bigg)^{\frac{p-1}{p}}.\end{aligned}$$ Thus $$\|W'_{1}(u_{n}) - W'_{1}(u)\| \leq C ~\|w_{1}\|^{\frac{1}{p}}_{\mathcal{H}_{s,p}({\mathbb R}^N)} \bigg(\int_{\mathbb{R}^{N}} w_{1} |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x\bigg)^{\frac{p-1}{p}}.$$ Now, it is sufficient to show that $$\bigg(\int_{\mathbb{R}^{N}} w_{1} |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x\bigg)^{\frac{p-1}{p}} \longrightarrow 0 \quad \text{as}~n \rightarrow \infty.$$ Let $\epsilon>0$ and $w_{\epsilon} \in C_{c}^{\infty}(\mathbb{R}^{N})$ be arbitrary.
$$\label{Integral with w} \int_{\mathbb{R}^{N}} w_{1} |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x\hspace{86mm}$$ $$= \int_{\mathbb{R}^{N}} w_{\epsilon} |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x+ \int_{\mathbb{R}^{N}} (w_{1} -w_{\epsilon}) |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x.$$ First, we estimate the second integral. Since $\{u_{n}\}$ is bounded in $X$, $$K := \sup_{n} (\|u_{n}\|_{s,p}^{p} + \|u\|_{s,p}^{p}) < \infty.$$ Now $$\begin{aligned} \int_{\mathbb{R}^{N}} (w_{1} -w_{\epsilon}) |(|u_{n}|^{p-2}u_{n} &-|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x\\ &\leq \int_{\mathbb{R}^{N}} (w_{1} -w_{\epsilon}) (|u_{n}|^{p-1} +|u|^{p-1})^{\frac{p}{p-1}} {\rm d}x\\ &\leq 2^{\frac{1}{p-1}} \bigg( \int_{\mathbb{R}^{N}} (w_{1} -w_{\epsilon}) |u_{n}|^{p} {\rm d}x+ \int_{\mathbb{R}^{N}} (w_{1} -w_{\epsilon})|u|^{p} {\rm d}x\bigg) \\ &\leq 2^{\frac{1}{p-1}}~ C ~ \|w_{1} - w_{\epsilon}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} (\|u_{n}\|_{s,p}^{p}+ \|u\|_{s,p}^{p})\\ &\leq 2^{\frac{1}{p-1}}~ C \cdot K \|w_{1} - w_{\epsilon}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)}.\end{aligned}$$ Now since $w_{1} \in \mathcal{H}_{s,p,0}(\mathbb{R}^{N})$, from the definition of $\mathcal{H}_{s,p,0}(\mathbb{R}^{N})$, we can choose $w_{\epsilon} \in C_{c}^{\infty}(\mathbb{R}^{N})$ such that $$2^{\frac{1}{p-1}} K \|w_{1} - w_{\epsilon}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} < \frac{\epsilon}{2C}.$$ Hence, we can choose $w_\epsilon$ suitably such that the second integral in ([\[Integral with w\]](#Integral with w){reference-type="ref" reference="Integral with w"}) can be made less than $\frac{\epsilon}{2}$. The space X is compactly embedded into $L^{p}_{loc}(\mathbb{R}^{N})$, therefore the first integral converges to $0$ up to a subsequence $\{u_{n_{k}}\}$ of $\{u_{n}\}$. 
Thus, we obtain $k_{0} \in \mathbb{N}$ such that $$\int_{\mathbb{R}^{N}} w_{1} |(|u_{n_{k}}|^{p-2}u_{n_{k}} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x< \epsilon, ~~~\forall ~k>k_{0}.$$ Since this argument applies to every subsequence of $\{u_{n}\}$, we conclude that $$\int_{\mathbb{R}^{N}} w_{1} |(|u_{n}|^{p-2}u_{n} -|u|^{p-2}u)|^{\frac{p}{p-1}} {\rm d}x~\longrightarrow ~0 \quad \text{as}~~n \rightarrow \infty.$$ This completes the proof. ◻ Further, we prove that the (PS) condition is satisfied by the functional $J$ on $\mathbb{S}$. **Proposition 39**. *The functional $J$ satisfies the Palais-Smale (PS) condition on $\mathbb{S}$.* *Proof.* We consider a sequence $\{ u_n \}$ in $\mathbb{S}$ such that $J(u_n) \rightarrow \lambda$ and $J'(u_n) \rightarrow 0.$ Thus, by Remark [Remark 35](#rem2){reference-type="ref" reference="rem2"}, there exists a sequence $\{ \lambda_{n} \}$ such that $$\label{Palais smale condition} J'(u_n) - \lambda_{n} W'(u_n) \rightarrow 0 \text{ in }X' \text{ as } n \rightarrow \infty.$$ Since $J(u_n)$ is bounded, using the inequality $\int_{\mathbb{R}^{N}} w |u_{n}|^{p} {\rm d}x> 0$ and $$\begin{aligned} \label{L1} \int_{\mathbb{R}^{N}} w_{2} |u_{n}|^{p} {\rm d}x&< \int_{\mathbb{R}^{N}} w_{1} |u_{n}|^{p} {\rm d}x\leq C \|w_{1}\|_{\mathcal{H}_{s,p}({\mathbb R}^N)} \|u_{n}\|_{s,p}^{p},\end{aligned}$$ we derive that the sequence $\{W_{2}({u_n}) \}$ is bounded in ${\mathbb R}$. So the sequence $\{ u_{n} \}$ is bounded in $X$, and since $X$ is reflexive, the sequence $\{ u_{n} \}$ admits a weakly convergent subsequence, i.e., there exists $u \in X$ such that $u_n \rightharpoonup u$ in $X$ up to a subsequence. Since $X$ is continuously embedded in ${\mathcal D}^{s,p}({\mathbb R}^N)$, the map $W_1$ is also compact on $X$. Thus, we obtain $W_1(u_n) \rightarrow W_1(u)$ in ${\mathbb R}$.
Now Fatou's Lemma, together with the identity $\int_{\mathbb{R}^{N}} w_{2} |u_n|^{p} {\rm d}x= \int_{\mathbb{R}^{N}} w_{1} |u_n|^{p} {\rm d}x- 1$ (valid since $u_n \in \mathbb{S}$) and the convergence $W_{1}(u_n) \rightarrow W_{1}(u)$, yields $$\begin{aligned} \label{L2} \int_{\mathbb{R}^{N}} w_{2} |u|^{p} {\rm d}x\leq \liminf_{n} \int_{\mathbb{R}^{N}} w_{1} |u_n|^{p} {\rm d}x- 1 = \int_{\mathbb{R}^{N}} w_{1} |u|^{p} {\rm d}x- 1.\end{aligned}$$ Thus $\int_{\mathbb{R}^{N}} w |u|^{p} {\rm d}x\geq 1$ and hence $u \neq 0.$ Further, $\lambda_{n} \rightarrow \lambda$ as $n \rightarrow \infty$, since $$p (J(u_n) - \lambda_n) = \langle J'(u_n) - \lambda_n W'(u_n), u_n \rangle ~ \rightarrow~0, \text{ as } n \rightarrow\infty.$$ Now we write ([\[Palais smale condition\]](#Palais smale condition){reference-type="ref" reference="Palais smale condition"}) as $$A_{\lambda_n}(u_n) - \lambda_n W'_{1}(u_n) \rightarrow 0 \quad \text{as } n \rightarrow \infty.$$ Since $\lambda_n \rightarrow \lambda$, we obtain $A_{\lambda_n}(u_n) - A_{\lambda}(u_n) \rightarrow 0$ in $X'$. Further, the compactness of $W'_{1}$ implies the strong convergence of $A_{\lambda}(u_n)$, and hence $\langle A_{\lambda}(u_n), u_{n} - u \rangle \rightarrow 0$. Since $u_{n} \rightharpoonup u$ in $X$, using Lemma [Lemma 37](#convergence lemma){reference-type="ref" reference="convergence lemma"} one obtains $u_{n} \rightarrow u$ in $X$. ◻ Next, we state the following lemma: **Lemma 40**. *The set $\Gamma_{n}$ is non-empty for each $n \in \mathbb{N}$.* Finally, we prove the existence of an infinite set of eigenvalues for ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}) by employing the Ljusternik-Schnirelmann theorem on $C^1$-manifolds. *Proof of Theorem [Theorem 4](#Infinite eigenvalue){reference-type="ref" reference="Infinite eigenvalue"}.* We know that the functional $J$ and the set $\mathbb{S}$ satisfy all the conditions of Theorem [Theorem 32](#Ljusternik){reference-type="ref" reference="Ljusternik"}. Therefore, we get $\gamma(K_{\lambda_{j}}) \geq 1$ for each $j \in \mathbb{N}$. 
Thus $K_{\lambda_{j}} \neq \emptyset$ and hence there exists $u_{j} \in \mathbb{S}$ such that $J'(u_{j}) = 0$ and $J(u_{j}) = \lambda_{j}$. Therefore, $\lambda_{j}$ is an eigenvalue and $u_{j}$ is the corresponding eigenfunction for ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}). Recall that $X$ is separable [@Cui Lemma 2.1] and hence $X$ admits a bi-orthogonal system $\{ e_{m}, e_{m}^{*}\}$ such that $$\overline{\mathop{\mathrm{span}} \{ e_{m} : m \in \mathbb{N} \}} = X,\quad e_{m}^{*} \in X',\quad \langle e_{n},e_{m}^{*}\rangle = \delta_{n,m},$$ $$\langle e_{m}^{*}, x\rangle = 0~ \forall m \implies x=0.$$ Let $E_{n} = \mathop{\mathrm{span}} \{ e_{1}, e_{2}, \dots ,e_{n} \}$ and let $E_{n}^{\perp} = \overline{\mathop{\mathrm{span}} \{ e_{n+1}, e_{n+2}, \dots \}}$. Since $E_{n-1}^{\perp}$ is of co-dimension $(n-1)$, for any $A \in \Gamma_{n}$ we have $A \cap E_{n-1}^{\perp} \neq \emptyset.$ Let $$\mu_{n} = \inf_{A \in \Gamma_{n}} \sup_{u \in A \cap E_{n-1}^{\perp}} J(u),\quad n=1,2,\dots$$ Now we show that $\mu_{n} \rightarrow \infty$. On the contrary, suppose that $\{ \mu_{n} \}$ is bounded. Then there exist $u_{n} \in E_{n-1}^{\perp} \cap \mathbb{S}$ such that $\mu_{n} \leq J(u_{n}) <c$, for some constant $c >0$. Since $u_{n} \in \mathbb{S}$, the sequence $\{u_{n}\}$ is bounded in $X$ by using estimate ([\[L1\]](#L1){reference-type="ref" reference="L1"}). Thus, up to a subsequence, $u_{n} \rightharpoonup u$ for some $u \in X$. Now by the choice of the biorthogonal system, for each $m$, $\langle e_{m}^{*}, u_{n} \rangle \longrightarrow 0$ as $n \rightarrow \infty.$ Thus $u_{n} \rightharpoonup 0$ in $X$ and hence $u = 0$, a contradiction to $\int_{\mathbb{R}^{N}} w|u|^{p} {\rm d}x\geq 1$ (see the conclusion of [\[L2\]](#L2){reference-type="eqref" reference="L2"}). Therefore, $\mu_{n} \rightarrow \infty$ as $n \rightarrow\infty$, and since $\mu_{n} \leq \lambda_{n}$, also $\lambda_{n} \rightarrow \infty$ as $n \rightarrow\infty$. 
Moreover, the first eigenvalue is simple by Lemma [Lemma 31](#Simple){reference-type="ref" reference="Simple"} and positive by Lemma [Lemma 29](#Positive){reference-type="ref" reference="Positive"}. Hence, the proof is complete. ◻ **Remark 41**. *For $w_{2} \in \mathcal{H}_{s,p,0}(\mathbb{R}^{N}) \backslash \{0 \}$, there exists a sequence $\{\mu_{n}\}_{n \in \mathbb{N}}$ of negative eigenvalues of ([\[Weighetd eigenvalue problem\]](#Weighetd eigenvalue problem){reference-type="ref" reference="Weighetd eigenvalue problem"}) tending to $-\infty$. In addition, the first eigenvalue $\mu_{1}$ is simple, negative, and principal.* # Acknowledgements {#acknowledgements .unnumbered} U.D. acknowledges the support of the Israel Science Foundation (grant $637/19$) funded by the Israel Academy of Sciences and Humanities. U.D. is also partially supported by a fellowship from the Lady Davis Foundation. R.K. thanks the CSIR for the fellowship, File No. 09/1125(0016)/2020--EMR--I, supporting his Ph.D. work. A.S. was supported by the DST-INSPIRE grant DST/INSPIRE/04/2018/002208. \ Department of Mathematics, Technion - Israel Institute of Technology\ Haifa 32000, Israel.\ *Email*: ujjal.rupam.das\@gmail.com\ \ Department of Mathematics, Indian Institute of Technology Jodhpur\ Rajasthan 342030, India.\ *Email*: kumar.174\@iitj.ac.in\ \ Department of Mathematics, Indian Institute of Technology Jodhpur\ Rajasthan 342030, India.\ *Email*: abhisheks\@iitj.ac.in [^1]: Corresponding author.
{ "id": "2309.09532", "title": "Characterizations of compactness and weighted eigenvalue problem for\n fractional $p$-Laplacian in $\\mathbb{R}^N$", "authors": "Ujjal Das, Rohit Kumar and Abhishek Sarkar", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Cop-width and flip-width are new families of graph parameters introduced by Toruńczyk (2023) that generalise treewidth, degeneracy, generalised colouring numbers, clique-width and twin-width. In this paper, we bound the cop-width and flip-width of a graph by its strong colouring numbers. In particular, we show that for every $r\in \mathbb{N}$, every graph $G$ has $\mathop{\mathrm{copwidth}}_r(G)\leqslant\mathop{\mathrm{scol}}_{4r}(G)$. This implies that every class of graphs with linear strong colouring numbers has linear cop-width and linear flip-width. We use this result to deduce improved bounds for cop-width and flip-width for various sparse graph classes. author: - "Robert Hickingbotham[^1]" title: "**Cop-Width, Flip-Width and Strong Colouring Numbers**" --- # Introduction @KY2003orderings introduced the following definitions[^2]. For a graph $G$, a total order $\preceq$ of $V(G)$, a vertex $v\in V(G)$, and an integer $r\geqslant 1$, let $\textcolor{Maroon}{R(G,\preceq,v,r)}$ be the set of vertices $w\in V(G)$ for which there is a path $v=w_0,w_1,\dots,w_{r'}=w$ of length $r'\in [0,r]$ such that $w\preceq v$ and $v\prec w_i$ for all $i\in [r'-1]$, and let $\textcolor{Maroon}{Q(G,\preceq,v,r)}$ be the set of vertices $w\in V(G)$ for which there is a path $v=w_0,w_1,\dots,w_{r'}=w$ of length $r'\in [0,r]$ such that $w\preceq v$ and $w\prec w_i$ for all $i\in [r'-1]$. For a graph $G$ and integer $r\geqslant 1$, the [*$r$-strong colouring number of $G$, $\mathop{\mathrm{scol}}_r(G)$,*]{style="color: Maroon"} is the minimum integer such that there is a total order $\preceq$ of $V(G)$ with $|R(G,\preceq,v,r)|\leqslant\mathop{\mathrm{scol}}_r(G)$ for every vertex $v$ of $G$. Likewise, the [*$r$-weak colouring number of $G$, $\mathop{\mathrm{wcol}}_r(G)$,*]{style="color: Maroon"} is the minimum integer such that there is a total order $\preceq$ of $V(G)$ with $|Q(G,\preceq,v,r)|\leqslant\mathop{\mathrm{wcol}}_r(G)$ for every vertex $v$ of $G$. 
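To make these definitions concrete, the following brute-force sketch (in Python, with hypothetical helpers `strong_reach` and `scol` that are not part of the paper) computes $R(G,\preceq,v,r)$ and the $r$-strong colouring number of one fixed order, by a breadth-first search that is only allowed to walk through vertices that come after $v$ in the order:

```python
from collections import deque

def strong_reach(adj, order, v, r):
    """R(G, order, v, r): vertices w with w <= v joined to v by a path of
    length at most r whose internal vertices all come after v in the order."""
    idx = {u: i for i, u in enumerate(order)}
    reach = {v}                      # the length-0 path gives v itself
    seen = {v}
    queue = deque([(v, 0)])
    while queue:
        u, d = queue.popleft()
        if d == r:
            continue
        for w in adj[u]:
            if idx[w] <= idx[v]:
                reach.add(w)         # endpoint at or before v: it lies in R
            elif w not in seen:
                seen.add(w)          # internal vertex after v: keep walking
                queue.append((w, d + 1))
    return reach

def scol(adj, order, r):
    """The r-strong colouring number of this particular vertex order."""
    return max(len(strong_reach(adj, order, v, r)) for v in order)

# Path a-b-c-d with its natural order: each vertex reaches at most one
# earlier vertex, so scol_1 equals the degeneracy (1) plus 1.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
assert scol(path, ["a", "b", "c", "d"], 1) == 2
```

Minimising this quantity over all orders of $V(G)$ then gives $\mathop{\mathrm{scol}}_r(G)$; the sketch only evaluates a single order.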
Colouring numbers provide upper bounds on several graph parameters of interest. First note that $\mathop{\mathrm{scol}}_1(G)=\mathop{\mathrm{wcol}}_1(G)$ which equals the degeneracy of $G$ plus 1, implying $\chi(G)\leqslant\mathop{\mathrm{scol}}_1(G)$. A proper graph colouring is [*acyclic*]{style="color: Maroon"} if the union of any two colour classes induces a forest; that is, every cycle is assigned at least three colours. For a graph $G$, the [*acyclic chromatic number of $G$, $\chi_\text{a}(G)$,*]{style="color: Maroon"} is the minimum integer $k$ such that $G$ has an acyclic $k$-colouring. @KY2003orderings proved that $\chi_\text{a}(G)\leqslant\mathop{\mathrm{scol}}_2(G)$ for every graph $G$. Other parameters that can be bounded by strong and weak colouring numbers include game chromatic number [@KT1994uncooperative; @KY2003orderings], Ramsey numbers [@CS1993ramsey], oriented chromatic number [@KSZ1997acyclic], arrangeability [@CS1993ramsey], boxicity [@EW2018boxicity], odd chromatic number [@H2022odd] and conflict-free chromatic number [@H2022odd]. Another attractive aspect of strong colouring numbers is that they interpolate between degeneracy and treewidth[^3]. As previously noted, $\mathop{\mathrm{scol}}_1(G)$ equals the degeneracy of $G$ plus $1$. At the other extreme, @GKRSS2018coveringsnowhere showed that $\mathop{\mathrm{scol}}_r(G)\leqslant\mathop{\mathrm{tw}}(G)+1$ for every $r\in \mathbb{N}$, and indeed $\mathop{\mathrm{scol}}_r(G) \to \mathop{\mathrm{tw}}(G)+1$ as $r\to\infty$. Colouring numbers are important because they characterise bounded expansion [@zhu2009generalized] and nowhere dense classes [@GKRSS2018coveringsnowhere], and have several algorithmic applications [@dvorak2014approximation; @GKS2017propertiesnowhere]. Let $G$ be a graph and $r\geqslant 0$ be an integer. 
A graph $H$ is an [*$r$-shallow minor*]{style="color: Maroon"} of $G$ if $H$ can be obtained from a subgraph of $G$ by contracting disjoint subgraphs each with radius at most $r$. Let [*$G \,\triangledown \, r$*]{style="color: Maroon"} be the set of all $r$-shallow minors of $G$, and let [*$\nabla_r(G)$*]{style="color: Maroon"}$:=\max\{|E(H)|/|V(H)|\colon H\in G \,\triangledown \, r \}$. A hereditary graph class $\mathcal{G}$ has [*bounded expansion*]{style="color: Maroon"} with [*expansion function*]{style="color: Maroon"} $f_{\mathcal{G}}: \mathbb{N}\cup \{0\} \to \mathbb{R}$ if $\nabla_r(G)\leqslant f_{\mathcal{G}}(r)$ for every $r \geqslant 0$ and graph $G \in \mathcal{G}$. Bounded expansion is a robust measure of sparsity with many characterisations [@zhu2009generalized; @nevsetvril2008grad; @nevsetvril2012sparsity]. For example, @zhu2009generalized showed that a hereditary graph class $\mathcal{G}$ has bounded expansion if there is a function $f$ such that $\mathop{\mathrm{scol}}_r(G)\leqslant f(r)$ for every $r \geqslant 1$ and graph $G \in \mathcal{G}$. Examples of graph classes with bounded expansion include classes that have bounded maximum degree [@nevsetvril2008grad], bounded stack number [@NOW2012examples], bounded queue-number [@NOW2012examples], bounded nonrepetitive chromatic number [@NOW2012examples], strongly sublinear separators [@DN2016sublinear], as well as proper minor-closed graph classes [@nevsetvril2008grad]. See the book by @nevsetvril2012sparsity for further background on bounded expansion. Given the richness of generalised colouring numbers, several attempts have been made to extend these parameters to the dense setting. In a recent breakthrough, @torunczyk2023flip introduced cop-width and flip-width, new families of graph parameters that generalise treewidth, degeneracy, colouring numbers, clique-width and twin-width. 
Their definitions are inspired by a game of cops and robber by @seymour1993graph: *"The robber stands on a vertex of the graph, and can at any time run at great speed to any other vertex along a path of the graph. He is not permitted to run through a cop, however. There are $k$ cops, each of whom at any time either stands on a vertex or is in a helicopter (that is, is temporarily removed from the game). The objective of the player controlling the movement of the cops is to land a cop via helicopters on the vertex occupied by the robber, and the robber's objective is to elude capture. (The point of the helicopters is that cops are not constrained to move along paths of the graph -- they move from vertex to vertex arbitrarily.) The robber can see the helicopter approaching its landing spot and may run to a new vertex before the helicopter actually lands"* [@seymour1993graph]. @seymour1993graph showed that the least number of cops needed to win this game on a graph $G$ is in fact equal to $\mathop{\mathrm{tw}}(G)+1$, thus giving a min-max theorem for treewidth. @torunczyk2023flip introduced the following parameterised version of this game: for some fixed $r\in \mathbb{N}$, the robber runs at speed $r$. So in each round, after the cops have taken off in their helicopters to their new positions (they may also choose to stay put), which are known to the robber, and before the helicopters have landed, the robber may traverse a path of length at most $r$ that does not run through a cop that remains on the ground. This variant is called the [*cop-width game with radius $r$ and width $k$*]{style="color: Maroon"}, if there are $k$ cops, and the robber can run at speed $r$. For a graph $G$, the [*radius-$r$ cop-width of $G$, $\mathop{\mathrm{copwidth}}_r(G)$,*]{style="color: Maroon"} is the least number $k\in \mathbb{N}$ such that the cops have a winning strategy for the cop-width game played on $G$ with radius $r$ and width $k$. 
Say a class of graphs $\mathcal{G}$ has [*bounded cop-width*]{style="color: Maroon"} if there is a function $f$ such that for every $r\in \mathbb{N}$ and graph $G\in \mathcal{G}$, $\mathop{\mathrm{copwidth}}_r(G)\leqslant f(r)$. @torunczyk2023flip showed that bounded cop-width coincides with bounded expansion. **Theorem 1** ([@torunczyk2023flip]). *A class of graphs has bounded expansion if and only if it has bounded cop-width.* As such, only sparse graph classes have bounded cop-width. Flip-width is then defined as a dense analog of cop-width. Here, the cops have enhanced power where they are allowed to perform flips on subsets of the vertex set of the graph with the goal of isolating the robber. For a fixed graph $G$, applying a flip between a pair of sets of vertices $A, B \subseteq V(G)$ results in the graph obtained from $G$ by inverting the adjacency between any pair of vertices $a, b$ with $a \in A$ and $b \in B$. If $G$ is a graph and $\mathcal{P}$ is a partition of $V(G)$, then call a graph $G'$ a [*$\mathcal{P}$-flip*]{style="color: Maroon"} of $G$ if $G'$ can be obtained from $G$ by performing a sequence of flips between pairs of parts $A, B \in \mathcal{P}$ (possibly with $A = B$). Finally, call $G'$ a [*$k$-flip*]{style="color: Maroon"} of $G$, if $G'$ is a $\mathcal{P}$-flip of $G$, for some partition $\mathcal{P}$ of $V(G)$ with $|\mathcal{P}|\leqslant k$. The [*flip-width game with radius $r\in \mathbb{N}$ and width $k \in \mathbb{N}$*]{style="color: Maroon"} is played on a graph $G$. Initially, $G_0 = G$ and $v_0$ is a vertex of $G$ chosen by the robber. In each round $i\geqslant 1$ of the game, the cops announce a new $k$-flip $G_i$ of $G$. The robber, knowing $G_i$, moves to a new vertex $v_i$ by running along a path of length at most $r$ from $v_{i-1}$ to $v_i$ in the previous graph $G_{i-1}$. The game terminates when $v_i$ is an isolated vertex in $G_i$. 
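As an illustration of the flip operation, here is a small sketch (the helper `flip` is hypothetical and not from the paper) of a single flip applied to an edge set. Performing the one flip between $V(G)$ and itself complements the graph, so the one-part partition $\{V(G)\}$ already isolates every vertex of a complete graph in a single move:

```python
def flip(edges, A, B):
    """One flip between vertex sets A and B: invert the adjacency of every
    (unordered) pair {a, b} with a in A, b in B and a != b."""
    pairs = {frozenset((a, b)) for a in A for b in B if a != b}
    return edges ^ pairs  # symmetric difference toggles each pair exactly once

# K_4 as a set of unordered edges; flipping V(G) with V(G) complements it.
K4 = {frozenset((i, j)) for i in range(4) for j in range(i + 1, 4)}
V = set(range(4))
assert flip(K4, V, V) == set()              # complete graph becomes edgeless
assert flip(flip(K4, V, V), V, V) == K4     # a flip is an involution
```

Deduplicating the pairs as a set before toggling matters: each unordered pair must be inverted exactly once, even when $A = B$.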
For a fixed $r \in \mathbb{N}$, the [*radius-$r$ flip-width of a graph $G$, $\mathop{\mathrm{flipwidth}}_r(G)$*]{style="color: Maroon"}, is the least number $k \in \mathbb{N}$ such that the cops have a winning strategy in the flip-width game of radius $r$ and width $k$ on $G$. In contrast to cop-width, flip-width is well-behaved on dense graphs. For example, one can easily observe that for all $r\in \mathbb{N}$, the radius-$r$ flip-width of a complete graph is equal to 1. Moreover, to demonstrate the robustness of flip-width, @torunczyk2023flip proved the following results. **Theorem 2** ([@torunczyk2023flip]). * * - *Every class of graphs with bounded expansion has bounded flip-width.* - *Every class of graphs with bounded twin-width has bounded flip-width.* - *If a class of graphs $\mathcal{G}$ has bounded flip-width, then any first-order interpretation of $\mathcal{G}$ also has bounded flip-width.* - *There is a slicewise polynomial algorithm that approximates the flip-width of a given graph $G$.* So flip-width is believed to be the right analog of generalised colouring numbers for dense graphs. See [@torunczyk2023flip; @CKKL2023flipwidth; @EM2023geometric] for further results and conjectures on flip-width. ## Results {#results .unnumbered} In this paper, we bound the cop-width of a graph by its strong colouring numbers. **Theorem** [\[MainStrongColouring\]]{#MainStrongColouring label="MainStrongColouring"}**.** *For every $r\in \mathbb{N}$, every graph $G$ has $\mathop{\mathrm{copwidth}}_r(G)\leqslant\mathop{\mathrm{scol}}_{4r}(G)$.* Previously, the best known bounds for the cop-width of a sparse graph were obtained through its weak colouring numbers. 
@torunczyk2023flip showed that for every $r\in \mathbb{N}$, every graph $G$ has $$\mathop{\mathrm{copwidth}}_r(G)\leqslant\mathop{\mathrm{wcol}}_{2r}(G)+1.$$ Moreover, if $G$ excludes $K_{t,t}$ as a subgraph, then $\mathop{\mathrm{flipwidth}}_r(G)\leqslant(\mathop{\mathrm{copwidth}}_r(G))^t.$ While graph classes with bounded strong colouring numbers have bounded weak colouring numbers, strong colouring numbers often give much better bounds than weak colouring numbers. In fact, @GKRSS2018coveringsnowhere and @DPTY2022weak have both shown that there are graph classes with polynomial strong colouring numbers and exponential weak colouring numbers. We now present a couple of applications of [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"}. First, we use [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} to show that graph classes with linear strong colouring numbers have linear cop-width and linear flip-width. **Theorem** [\[LinearStrongCopFlip\]]{#LinearStrongCopFlip label="LinearStrongCopFlip"}**.** *Every class of graphs with linear strong colouring numbers has linear cop-width and linear flip-width.* Second, [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} gives improved bounds for the cop-width of many well-studied sparse graphs. A graph $H$ is a [*minor*]{style="color: Maroon"} of a graph $G$ if $H$ is isomorphic to a graph that can be obtained from a subgraph of $G$ by contracting edges. A graph $G$ is [*$H$-minor-free*]{style="color: Maroon"} if $H$ is not a minor of $G$. Van den Heuvel, Ossona de Mendez, Quiroz, Rabinovich and Siebertz [@HMQRS2017fixed] showed that for every $r\in \mathbb{N}$, every $K_t$-minor-free graph $G$ has $\mathop{\mathrm{scol}}_r(G)\leqslant\binom{t-1}{2}(2r+1)$. 
So [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} implies the following. **Theorem** [\[KtMain\]]{#KtMain label="KtMain"}**.** *For all $r,t\in \mathbb{N}$, every $K_t$-minor-free graph $G$ has $$\mathop{\mathrm{copwidth}}_r(G)\leqslant\binom{t-1}{2}(8r+1).$$* By [\[LinearStrongCopFlip\]](#LinearStrongCopFlip){reference-type="ref" reference="LinearStrongCopFlip"}, $K_t$-minor-free graphs also have linear flip-width; see [Corollary 5](#KtMainFlipWidth){reference-type="ref" reference="KtMainFlipWidth"} for an explicit bound. In regard to the previous best known bound for this class of graphs, van den Heuvel et al. [@HMQRS2017fixed] showed that for every $r\in \mathbb{N}$, every $K_t$-minor-free graph $G$ has $\mathop{\mathrm{wcol}}_r(G)\in O_t(r^{t-1})$. By the aforementioned result of @torunczyk2023flip, the previous best known bounds for the cop-width and flip-width of a $K_t$-minor-free graph $G$ were: $$\mathop{\mathrm{copwidth}}_r(G) \in O_t(r^{t-1})\text{ and } \mathop{\mathrm{flipwidth}}_r(G) \in O_t(r^{(t-1)^2}).$$ [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} is also applicable to non-minor-closed graph classes. For a surface $\Sigma$, we say that a graph $G$ is [*$(\Sigma, k)$-planar*]{style="color: Maroon"} if $G$ has a drawing on $\Sigma$ such that every edge of $G$ is involved in at most $k$ crossings. A graph is [*$(g,k)$-planar*]{style="color: Maroon"} if it is $(\Sigma, k)$-planar for some surface $\Sigma$ with Euler genus at most $g$. Such graphs are widely studied [@PT1997crossings; @DMW20; @DEW2017locally; @dujmovic2017layered] and are a classic example of a sparse non-minor-closed class of graphs. Van den Heuvel and Wood [@HW2018improperARXIV; @HW2018improper] showed that for every $r\in \mathbb{N}$, every $(g,k)$-planar graph $G$ has $\mathop{\mathrm{scol}}_r(G)\leqslant(4g+6)(k+1)(2r+1)$. 
So [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} implies the following. **Theorem 3**. *For all $g,k,r\in \mathbb{N}$, every $(g,k)$-planar graph $G$ has $$\mathop{\mathrm{copwidth}}_r(G)\leqslant(4g+6)(k+1)(8r+1).$$* See [@HW2021shallow; @HW2018improperARXIV; @HMQRS2017fixed; @DMN2021convex] for other graph classes that [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} applies to. # Proofs We now prove our main theorem. *Proof.* Let $n:=|V(G)|-1$ and let $(v_0,v_1,\dots,v_n)$ be a total order $\preceq$ of $V(G)$ where $|R(G,\preceq,v,4r)|\leqslant\mathop{\mathrm{scol}}_{4r}(G)$ for every $v\in V(G)$. For every $s\in \mathbb{N}$ and $v_i,v_j\in V(G)$ where $i\leqslant j$, let $\textcolor{Maroon}{M(v_i,v_j,s)}$ be the set of vertices $w\in V(G)$ for which there is a path $v_j=w_0,w_1,\dots,w_{s'}=w$ of length $s'\in [0,s]$ such that $w\preceq v_i$ and $v_i\prec w_{\ell}$ for all $\ell\in [s'-1]$. **Claim:** For all $v_i,v_j\in V(G)$ where $i\leqslant j$, $|M(v_i,v_j,2r)|\leqslant\mathop{\mathrm{scol}}_{4r}(G)$. *Proof.* Let $k\in [i,j]$ be minimal such that $v_k\in Q(G,\preceq,v_j,2r)$. So $G$ contains a path $P=(v_j=w_0,\dots,w_{r'}=v_k)$ of length $r'\in [0,2r]$ such that $v_k\prec w_{\ell}$ for all $\ell\in [r'-1]$. We claim that $M(v_i,v_j,2r)\subseteq R(G,\preceq,v_k,4r)$. Let $u\in M(v_i,v_j,2r)$. Then there is a path $P'=(v_j=u_0,\dots,u_{s'}=u)$ of length $s'\in [0,2r]$ such that $u\preceq v_{i}$ and $v_i\prec u_{\ell}$ for all $\ell \in [s'-1]$. Suppose there is an $\ell \in [s'-1]$ such that $u_{\ell} \prec v_k$. Choose $\ell$ to be minimum. Then $u_{\ell}\in Q(G,\preceq,v_j,2r)$ since $u_{\ell}\prec u_a$ for each $a\in [0,\ell-1]$, which contradicts the choice of $k$. So $v_k\preceq u_{\ell}$ for all $\ell \in [s'-1]$. 
By taking the union of $P$ and $P'$, it follows that $G$ contains a $(v_k,u)$-walk $W$ of length at most $4r$ such that $v_k \prec z$ for all $z\in V(W)\setminus\{v_k,u\}$. Therefore $u\in R(G,\preceq,v_k,4r)$ and so $|M(v_i,v_j,2r)|\leqslant|R(G,\preceq,v_k,4r)|\leqslant\mathop{\mathrm{scol}}_{4r}(G).$ ◻ For each round $i\geqslant 0$ until the robber is caught, we define a tuple $(v_i,x_i,C_i,D_i,V_i,P_i)$ where: (1) [\[A1\]]{#A1 label="A1"} $x_i\in V(G)$ is the location of the robber at the end of round $i$; (2) [\[A4\]]{#A4 label="A4"} $C_0:=\{v_0\}$ and $C_i:=M(v_i,x_{i-1},2r)$ is the set of vertices that the cops are on at the end of round $i$ for each $i\geqslant 1$; (3) [\[A5\]]{#A5 label="A5"} $D_0:=\varnothing$ and $D_i:=C_{i-1}\cap C_i$ is the set of vertices where cops stay put throughout round $i$ for each $i\geqslant 1$; (4) [\[A7\]]{#A7 label="A7"} $V_i:=\{v_0,\dots,v_i\}$; and (5) [\[A6\]]{#A6 label="A6"} $P_0:=\varnothing$ and $P_i$ is the $(x_{i-1},x_i)$-path of length at most $r$ that the robber traverses during round $i$ for each $i\geqslant 1$. We will also maintain the following invariants for each round $i\geqslant 0$: (6) [\[Inv1\]]{#Inv1 label="Inv1"} $v_i \preceq x_i$; (7) [\[Inv2\]]{#Inv2 label="Inv2"} for $i\geqslant 1$, every path in $G$ from $x_{i-1}$ to a vertex in $V_{i-1}$ of length at most $r$ contains a vertex from $D_{i}$; (8) [\[Inv3\]]{#Inv3 label="Inv3"} $M(v_i,x_{i},r)\subseteq C_i$; (9) [\[Inv4\]]{#Inv4 label="Inv4"} $V(P_i)\cap V_{i-1}=\varnothing$ for each $i\geqslant 1$; and (10) [\[Inv5\]]{#Inv5 label="Inv5"} if $v_i=x_i$, then the robber is caught. Together with the previous claim, these invariants imply that the robber is caught within $n$ rounds using at most $\mathop{\mathrm{scol}}_{4r}(G)$ cops. We construct our sequence of tuples using induction on $i\geqslant 0$. Initialise the game of cops and robber with the robber on some vertex in $V(G)$, one cop on $v_0$ and the remaining cops all in the helicopters. 
Define the tuple $(v_0,x_0,C_0,D_0,V_0,P_0)$ according to (1)--(5). Clearly such a tuple is well-defined. Moreover, it is easy to see that the tuple satisfies invariants (6)--(10). Now suppose we are at round $i \geqslant 1$ and the robber has not yet been caught. By induction, we may assume that there is a tuple $(v_{i-1},x_{i-1},C_{i-1},D_{i-1},V_{i-1},P_{i-1})$ for round $i-1$ which satisfies (6)--(10). Since the robber has not yet been caught, invariants (6) and (10) imply that $v_{i-1}\prec x_{i-1}$, so $v_i \preceq x_{i-1}$. Therefore, there is a well-defined tuple $(v_i,x_i,C_i,D_i,V_i,P_i)$ which satisfies (1)--(5). We now show that $(v_i,x_i,C_i,D_i,V_i,P_i)$ satisfies the additional invariants. We first verify (7). Let $F_{i}:=M(v_{i-1},x_{i-1},r)$. Let $u\in V_{i-1}$ and suppose there is an $(x_{i-1}=w_0,w_1,\dots,w_{r'}=u)$-path $P^{\star}$ in $G$ where $r'\in [0,r]$. Consider the minimal $j\in [r']$ such that $w_j\in V_{i-1}$. Since $\{w_1,\dots,w_{j-1}\}\cap V_{i-1}=\varnothing$, it follows that $w_j\in F_i$. So for every $u\in V_{i-1}$, every $(x_{i-1},u)$-path in $G$ of length at most $r$ contains a vertex from $F_i$. By (2) and (8) (from the $i-1$ case), it follows that $F_i\subseteq C_{i-1}\cap C_i$. So (7) follows from (3). Now since the robber is not allowed to run through a cop that stays put, (9) follows by (7). Property (6) then immediately follows from (9) since $x_i\in V(P_i)$. Now consider a vertex $y\in M(v_i,x_i,r)$. Then $G$ contains an $(x_i=w_0,w_1,\dots,w_{r'}=y)$-path $P'$ of length $r'\in [0,r]$ such that $v_i\preceq w_j$ for all $j\in [r'-1]$. By taking the union of $P'$ and $P_i$, it follows that $G$ contains an $(x_{i-1},y)$-walk $W$ of length at most $2r$. Moreover, by (9) and the choice of $P'$, $v_i\preceq z$ for all $z\in V(W)\setminus\{x_{i-1},y\}$. So $y\in M(v_i,x_{i-1},2r)$, and thus (2) implies (8). Finally, if $v_i=x_i$ then (8) implies $x_i\in C_i$, so (10) follows by (2), as required. 
◻ [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} implies that graph classes with linear strong colouring numbers have linear cop-width. To complete the proof of [\[LinearStrongCopFlip\]](#LinearStrongCopFlip){reference-type="ref" reference="LinearStrongCopFlip"}, we leverage known results concerning neighbourhood diversity. Neighbourhood diversity is a well-studied concept with various applications [@RVS19; @EGKKPRS17; @GHOORRVS17; @PP20; @BKW2022bandwidth; @BFLP2024neighbourhood; @JR2023neighborhood]. Let $G$ be a graph. For a set $S\subseteq V(G)$, let $\textcolor{Maroon}{\pi_G(S)}:=|\{N_G(v)\cap S \colon v\in V(G)\setminus S\}|$. For $k\in \mathbb{N}$, let ${\textcolor{Maroon}{\pi_G(k)}:=\max\{\pi_G(S)\colon S\subseteq V(G), |S|\leqslant k\}}$. **Lemma 4**. []{#FlipWidthLinear label="FlipWidthLinear"} *For all $k,r\in \mathbb{N}$, every graph $G$ with $\mathop{\mathrm{copwidth}}_r(G)\leqslant k$ has $${\mathop{\mathrm{flipwidth}}_r(G)\leqslant\pi_G(k)+k}.$$* *Proof.* We claim that for every set $S\subseteq V(G)$ where $|S|\leqslant k$, there is a $(\pi_G(k)+k)$-flip that isolates $S$ while leaving $G-S$ untouched. Let $\mathcal{P}$ be the partition of $V(G)$ obtained by partitioning $S$ into singletons and partitioning the vertices $v\in V(G)\setminus S$ according to their trace $N_G(v)\cap S$. Then $|\mathcal{P}|\leqslant\pi_G(k)+k$. Moreover, every vertex $s\in S$ can be isolated by flipping $\{s\}$ with every part of $\mathcal{P}$ that is complete to $\{s\}$. Thus, a winning strategy for the cops in the cop-width game with radius $r$ and width $k$ can be transformed into a winning strategy for the flip-width game with radius $r$ and width $\pi_G(k)+k$, as required. ◻ @RVS19 showed that for every graph class $\mathcal{G}$ with bounded expansion, there exists $c>0$ such that $\pi_G(k)\leqslant ck$ for every $G\in \mathcal{G}$. 
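For illustration, $\pi_G(S)$ can be computed directly from its definition; the sketch below (the helper `pi` is hypothetical and not from the paper) collects the distinct traces $N_G(v)\cap S$ over the vertices outside $S$:

```python
def pi(adj, S):
    """pi_G(S): the number of distinct neighbourhood traces N_G(v) & S
    taken over the vertices v outside S."""
    S = set(S)
    return len({frozenset(set(adj[v]) & S) for v in adj if v not in S})

# Star K_{1,3} with centre c: every leaf has the same trace {c} of S = {c},
# so pi_G({c}) = 1.  For S = {x, y}, the centre sees all of S and the
# remaining leaf sees none of it, giving two distinct traces.
star = {"c": ["x", "y", "z"], "x": ["c"], "y": ["c"], "z": ["c"]}
assert pi(star, {"c"}) == 1
assert pi(star, {"x", "y"}) == 2
```

In the proof of Lemma 4, each distinct trace becomes one part of the partition $\mathcal{P}$, which is why $|\mathcal{P}|\leqslant \pi_G(k)+k$.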
Since graph classes with linear strong colouring numbers have bounded expansion, [\[MainStrongColouring\]](#MainStrongColouring){reference-type="ref" reference="MainStrongColouring"} and [Lemma 4](#FlipWidthLinear){reference-type="ref" reference="FlipWidthLinear"} imply [\[LinearStrongCopFlip\]](#LinearStrongCopFlip){reference-type="ref" reference="LinearStrongCopFlip"}. As a concrete example, @BKW2022bandwidth showed that for every $K_t$-minor-free graph $G$ and for every set $A\subseteq V(G)$, $$\pi_G(A)\leqslant 3^{2t/3+o(t)}|A|+1.$$ So [\[KtMain\]](#KtMain){reference-type="ref" reference="KtMain"} and [Lemma 4](#FlipWidthLinear){reference-type="ref" reference="FlipWidthLinear"} imply the following. **Corollary 5**. *For all $r,t\in \mathbb{N}$, every $K_t$-minor-free graph $G$ has $$\mathop{\mathrm{flipwidth}}_r(G)\leqslant(3^{2t/3+o(t)}+1)(t-2)(t-3)(8r+1)+1.$$* ### Acknowledgement {#acknowledgement .unnumbered} This work was initiated at the 10th Annual Workshop on Geometry and Graphs held at Bellairs Research Institute in February 2023. Thanks to Rose McCarty for introducing the author to flip-width and to David Wood for helpful conversations. [Édouard Bonnet, Florent Foucaud, Tuomo Lehtilä, and Aline Parreau]{.smallcaps}. [Neighbourhood complexity of graphs of bounded twin-width](https://doi.org/10.1016/j.ejc.2023.103772). *European J. Combin.*, 115:Paper No. 103772, 2024. [Édouard Bonnet, O-joung Kwon, and David R. Wood]{.smallcaps}. [Reduced bandwidth: a qualitative strengthening of twin-width in minor-closed classes (and beyond)](http://arxiv.org/abs/2202.11858). arXiv:2202.11858. [Yeonsu Chang, Sejin Ko, O-joung Kwon, and Myounghwan Lee]{.smallcaps}. [A characterization of graphs of radius-$r$ flip-width at most $2$](http://arxiv.org/abs/2306.15206). arXiv:2306.15206. [Guantao Chen and R. H. Schelp]{.smallcaps}. [Graphs with linearly bounded Ramsey numbers](https://doi.org/10.1006/jctb.1993.1012). *J. Combin. Theory Ser. B*, 57(1):138--149, 1993. [Reinhard Diestel]{.smallcaps}. 
[Graph theory](https://doi.org/10.1007/978-3-662-53622-3), vol. 173 of *Graduate Texts in Mathematics*. Springer, 5th edn., 2017. [Vida Dujmović, David Eppstein, and David R. Wood]{.smallcaps}. [Structure of graphs with locally restricted crossings](https://doi.org/10.1137/16M1062879). *SIAM J. Discrete Math.*, 31(2):805--824, 2017a. [Vida Dujmović, Pat Morin, and David R. Wood]{.smallcaps}. [Layered separators in minor-closed graph classes with applications](https://doi.org/10.1016/j.jctb.2017.05.006). *J. Combin. Theory Ser. B*, 127:111--147, 2017b. [Vida Dujmović, Pat Morin, and David R. Wood]{.smallcaps}. [Graph product structure for non-minor-closed classes](http://dx.doi.org/10.1016/j.jctb.2023.03.004). *J. Combin. Theory Ser. B*, 162:34--67, 2023. arXiv:1907.05168. [Zdeněk Dvořák]{.smallcaps}. [Constant-factor approximation of the domination number in sparse graphs](https://doi.org/10.1016/j.ejc.2012.12.004). *European J. Combin.*, 34(5):833--840, 2013. [Zdeněk Dvořák, Rose McCarty, and Sergey Norin]{.smallcaps}. [Sublinear separators in intersection graphs of convex shapes](https://doi.org/10.1137/20M1311156). *SIAM J. Discrete Math.*, 35(2):1149--1164, 2021. [Zdeněk Dvořák and Sergey Norin]{.smallcaps}. [Strongly sublinear separators and polynomial expansion](https://doi.org/10.1137/15M1017569). *SIAM J. Discrete Math.*, 30(2):1095--1101, 2016. [Zdeněk Dvořák, Jakub Pekárek, Torsten Ueckerdt, and Yelena Yuditsky]{.smallcaps}. [Weak coloring numbers of intersection graphs](https://doi.org/10.4230/lipics.socg.2022.39). In [Xavier Goaoc and Michael Kerber]{.smallcaps}, eds., *Proc. 38th Int'l Symp. on Computat. Geometry (SoCG 2022)*, vol. 224 of *LIPIcs.*, pp. 39:1--39:15. Schloss Dagstuhl, 2022. [Kord Eickmeyer, Archontia C. Giannopoulou, Stephan Kreutzer, O-joung Kwon, Michal Pilipczuk, Roman Rabinovich, and Sebastian Siebertz]{.smallcaps}. 
[Neighborhood complexity and kernelization for nowhere dense classes of graphs](https://doi.org/10.4230/LIPIcs.ICALP.2017.63). In [Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl]{.smallcaps}, eds., *Proc. 44th Int'l Coll. on Automata, Languages, and Programming (ICALP '17)*, vol. 80 of *Leibniz Int. Proc. Inform.*, pp. 63:1--63:14. Schloss Dagstuhl, 2017. [David Eppstein and Rose McCarty]{.smallcaps}. [Geometric graphs with unbounded flip-width](http://arxiv.org/abs/2306.12611). arXiv:2306.12611. [Louis Esperet and Veit Wiechert]{.smallcaps}. [Boxicity, poset dimension, and excluded minors](https://doi.org/10.37236/7787). *Electron. J. Combin.*, 25(4):Paper No. 4.51, 11, 2018. [Jakub Gajarský, Petr Hlinený, Jan Obdrzálek, Sebastian Ordyniak, Felix Reidl, Peter Rossmanith, Fernando Sánchez Villaamil, and Somnath Sikdar]{.smallcaps}. [Kernelization using structural parameters on sparse graph classes](https://doi.org/10.1016/j.jcss.2016.09.002). *J. Comput. Syst. Sci.*, 84:219--242, 2017. [Martin Grohe, Stephan Kreutzer, Roman Rabinovich, Sebastian Siebertz, and Konstantinos Stavropoulos]{.smallcaps}. [Coloring and covering nowhere dense graphs](https://doi.org/10.1137/18M1168753). *SIAM J. Discrete Math.*, 32(4):2467--2481, 2018. [Martin Grohe, Stephan Kreutzer, and Sebastian Siebertz]{.smallcaps}. [Deciding first-order properties of nowhere dense graphs](https://doi.org/10.1145/3051095). *J. ACM*, 64(3):Art. 17, 32, 2017. [Robert Hickingbotham]{.smallcaps}. [Odd colourings, conflict-free colourings and strong colouring numbers](https://ajc.maths.uq.edu.au/pdf/87/ajc_v87_p160.pdf). *Australas. J. Combin.*, 87:160--164, 2023. [Robert Hickingbotham and David R. Wood]{.smallcaps}. [Shallow minors, graph products and beyond planar graphs](http://arxiv.org/abs/2111.12412). arXiv:2111.12412. [Gwenaël Joret and Clément Rambaud]{.smallcaps}. [Neighborhood complexity of planar graphs](http://arxiv.org/abs/2302.12633). arXiv:2302.12633. [Hal A.
Kierstead and William T. Trotter]{.smallcaps}. [Planar graph coloring with an uncooperative partner](https://doi.org/10.1002/jgt.3190180605). *J. Graph Theory*, 18(6):569--584, 1994. [Hal A. Kierstead and Daqing Yang]{.smallcaps}. [Orderings on graphs and game coloring number](https://doi.org/10.1023/B:ORDE.0000026489.93166.cb). *Order*, 20(3):255--264, 2003. [Alexandr V. Kostochka, Eric Sopena, and Xuding Zhu]{.smallcaps}. [Acyclic and oriented chromatic numbers of graphs](https://doi.org/10.1002/(SICI)1097-0118(199704)24:4<331::AID-JGT5>3.0.CO;2-P). *J. Graph Theory*, 24(4):331--340, 1997. [Jaroslav Nešetřil and Patrice Ossona de Mendez]{.smallcaps}. *Sparsity: graphs, structures, and algorithms*, vol. 28. Springer, 2012. [Jaroslav Nešetřil and Patrice Ossona de Mendez]{.smallcaps}. [Grad and classes with bounded expansion. I. Decompositions](https://doi.org/10.1016/j.ejc.2006.07.013). *European J. Combin.*, 29(3):760--776, 2008. [Jaroslav Nešetřil, Patrice Ossona de Mendez, and David R. Wood]{.smallcaps}. [Characterisations and examples of graph classes with bounded expansion](https://doi.org/10.1016/j.ejc.2011.09.008). *European J. Combin.*, 33(3):350--373, 2012. [János Pach and Géza Tóth]{.smallcaps}. [Graphs drawn with few crossings per edge](https://doi.org/10.1007/BF01215922). *Combinatorica*, 17(3):427--439, 1997. [Adam Paszke and Michał Pilipczuk]{.smallcaps}. [VC density of set systems definable in tree-like graphs](http://arxiv.org/abs/2003.14177). arXiv:2003.14177. [Felix Reidl, Fernando Sánchez Villaamil, and Konstantinos S. Stavropoulos]{.smallcaps}. [Characterising bounded expansion by neighbourhood complexity](https://doi.org/10.1016/j.ejc.2018.08.001). *European J. Combin.*, 75:152--168, 2019. [Paul Seymour and Robin Thomas]{.smallcaps}. [Graph searching and a min-max theorem for tree-width](https://doi.org/10.1006/jctb.1993.1027). *J. Combin. Theory Ser. B*, 58(1):22--33, 1993. [Symon Toruńczyk]{.smallcaps}.
[Flip-width: Cops and robber on dense graphs](http://arxiv.org/abs/2302.00352). arXiv:2302.00352. [Jan van den Heuvel, Patrice Ossona de Mendez, Daniel Quiroz, Roman Rabinovich, and Sebastian Siebertz]{.smallcaps}. [On the generalised colouring numbers of graphs that exclude a fixed minor](https://doi.org/10.1016/j.ejc.2017.06.019). *European J. Combin.*, 66:129--144, 2017. [Jan van den Heuvel and David R. Wood]{.smallcaps}. [Improper colourings inspired by Hadwiger's conjecture](http://arxiv.org/abs/1704.06536). arXiv:1704.06536. [Jan van den Heuvel and David R. Wood]{.smallcaps}. [Improper colourings inspired by Hadwiger's conjecture](https://doi.org/10.1112/jlms.12127). *J. Lond. Math. Soc. (2)*, 98(1):129--148, 2018. arXiv:1704.06536. [Xuding Zhu]{.smallcaps}. [Colouring graphs with bounded generalized colouring number](https://doi.org/10.1016/j.disc.2008.03.024). *Discrete Math.*, 309(18):5562--5568, 2009. [^1]: School of Mathematics, Monash University, Melbourne, Australia (`robert.hickingbotham@monash.edu`). Research supported by an Australian Government Research Training Program Scholarship. [^2]: We consider simple, finite, undirected graphs $G$ with vertex-set ${V(G)}$ and edge-set ${E(G)}$. See [@diestel2017graphtheory] for graph-theoretic definitions not given here. Let ${\mathbb{N}\coloneqq \{1,2,\dots\}}$. For integers $a,b$ where $a<b$, let $[a,b]:=\{a,a+1,\dots,b-1,b\}$ and for $n \in \mathbb{N}$, let $[n]:=[1,n]$. A [*graph class*]{style="color: Maroon"} is a collection of graphs closed under isomorphism. For a graph $G$ and a vertex $v\in V(G)$, let $\textcolor{Maroon}{N_G(v)}:=\{w\in V(G)\colon vw\in E(G)\}$.
[^3]: A [*tree-decomposition*]{style="color: Maroon"} of a graph $G$ is a collection $\mathcal{W}= (B_x \colon x \in V(T))$ of subsets of $V(G)$ indexed by the nodes of a tree $T$ such that (i) for every edge $vw \in E(G)$, there exists a node $x \in V(T)$ with $v,w \in B_x$; and (ii) for every vertex $v \in V(G)$, the set $\{x \in V(T) \colon v \in B_x\}$ induces a (connected) subtree of $T$. The [*width*]{style="color: Maroon"} of $\mathcal{W}$ is $\max\{|B_x| \colon x \in V(T)\} - 1$. The [*treewidth $\mathop{\mathrm{tw}}(G)$*]{style="color: Maroon"} of a graph $G$ is the minimum width of a tree-decomposition of $G$.
--- abstract: | For the non-ergodic Vasicek model driven by one of two types of Gaussian process $(G_t)_{t\in[0,T]}$, we obtain the joint asymptotic distribution of the estimators of the three parameters $\theta,\,\mu$ and $\alpha=\mu \theta$ of the model. A novel contribution is the identification of three different types of scaling in the asymptotic behavior of the estimator $\hat{\theta}_T$ of $\theta$. The proof is based on a slight modification of the idea of [@Es-Sebaiy22021]. The two types of Gaussian process $(G)$ are characterized by the requirement that either the first-order partial derivative of the covariance function, or the difference between the covariance function and that of the fractional Brownian motion $(B^H)_{t\in[0,T ]}$, be a normalized bounded variation function. Ten Gaussian processes with nonstationary increments in the literature belong to this category, and nine of them can serve as the driving noise of the Vasicek model for which the joint asymptotic distribution holds. The same result for the non-ergodic Ornstein--Uhlenbeck process is obtained as a by-product. author: - title: Two Types of Gaussian Processes and their Application to Statistical Estimations for Non-ergodic Vasicek Model --- Gaussian Vasicek-type model; Least squares estimate; Fractional Gaussian process; Ornstein-Uhlenbeck process. 60G15; 60G22; 62M09 # Introduction and main results The motivation of this paper is to study the statistical estimation for a non-ergodic Gaussian Vasicek-type model driven by one of two types of Gaussian processes: $$\begin{aligned} \label{Vasicekmodel} \mathrm{d}X_t =\theta(\mu +X_t) \mathrm{d}t +\mathrm{d}G_t, \quad X_0 = 0 ,\quad \theta>0,\, \mu\in \mathbb{R},\end{aligned}$$ where the driving Gaussian noise $(G_t)_{t\in[0,T ]}$ satisfies either Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} or Hypothesis [HYPOTHESIS 2](#hypthe2new-new){reference-type="ref" reference="hypthe2new-new"}. **HYPOTHESIS 1**.
*The centred Gaussian process $(G_t)_{t\in[0,T ]}$ with $G(0)=0$ has a covariance function $R(s,t)=E (G_sG_t)$ that satisfies the following conditions:* 1. *$R(s,t)$ is an absolutely continuous function on $t\in [0,T]$ for any fixed $s \in[0,T]$;* 2. *the first-order partial derivative $$\frac {\partial }{ \partial t} R(s,t)$$ is a normalized bounded variation function on $s\in [0,T]$ for any fixed $t \in[0,T]$.* **HYPOTHESIS 2**. *The centred Gaussian process $(G_t)_{t\in[0,T ]}$ with $G(0)=0$ has a covariance function $R(s,t)=E (G_sG_t)$ that satisfies Hypothesis ($H_1$) and the following condition:* 1. *the difference $$\begin{aligned} \frac {\partial R(s,t)}{\partial t} - \frac {\partial R^{B}(s,t)}{\partial t} \label{cha-key}\end{aligned}$$ is a normalized bounded variation function on $s\in [0,T]$ for any fixed $t \in[0,T]$, where $R^B(s,t)$ is the covariance function of the fractional Brownian motion (fBm) $(B^H_t)_{t\in[0,T ]}$ with Hurst parameter $H\in (0, \,1)$.* A bounded variation function $F$ is said to be normalized if it is also right continuous on $\mathbb{R}$ and its limit at $-\infty$ is zero, i.e., $F(-\infty)=0$; see [@Foll99]. It can be easily verified that the following Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"} satisfy Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"}.
The covariance function of the following Gaussian process $$X_{t} = \sqrt{C_H}\int_{0}^{\infty} \left( 1-e^{-r t} \right) r^{-\frac {1+2H}{2}} \mathrm{d}W_{r} ,\quad t\ge 0$$ is given by $$R(s,t)=\left\{ \begin{array}{ll} \frac12[t^{2H}+s^{2H} -(t+s)^{2H}], & \quad H\in (0,\frac12),\\ \frac12[(t+s)^{2H}-t^{2H}-s^{2H} ], &\quad H\in (\frac12,1). \end{array} \right.$$ Please refer to [@Bardina2009; @Lei2009]. **Example 4**. The covariance function of the trifractional Brownian motion $Z^{H',K}(t)$ with parameters $H' \in (0, 1),\,K \in(0,1)$ is given by $$R(t,\, s)=t^{2H'K} + s^{2H'K}- (t^{2H'}+s^{2H'})^{K} .$$ Please refer to [@Lei2009; @ma2013]. In this case, we take $H=H'K$. In fact, Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 4](#exmp 000001-04){reference-type="ref" reference="exmp 000001-04"} satisfy the following stronger condition: 1. ($H_2'$) The first-order partial derivative $\frac {\partial }{ \partial t} R(s,t)$ is an absolutely continuous function on $s\in [0,T]$ for any fixed $t \in[0,T]$. As a comparison, in Examples [Example 5](#exmp 000001-06){reference-type="ref" reference="exmp 000001-06"}-[Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"}, the first-order partial derivatives $\frac {\partial }{ \partial t} R(s,t)$ are not absolutely continuous but bounded variation functions on $s\in [0,T]$ for any fixed $t \in[0,T]$. **Example 5**. Suppose that $H \in (0, \frac12)$. The Gaussian process $\{G_t , t\ge 0\}$ has the covariance function $$R(s, t)= \frac12\big[(t+s)^{2H}-(\max(s,t))^{2H} \big] .$$ Please refer to Theorem 1.1 of [@Talarczyk2020]. Clearly, for any fixed $t\in [0,T]$, the first-order partial derivative $$\label{exmp 000001-06-piandao} \frac {\partial }{ \partial t} R(s,t)=\left\{ \begin{array}{ll} H(t+s)^{2H-1}-H t^{2H-1}, & \quad 0<s\le t,\\ H(t+s)^{2H-1}, & \quad t<s\le T \end{array} \right.$$ is a bounded variation function on $s\in [0,T]$.
Suppose that $H \in (0, \frac12)$. The Gaussian process $\{Z_t , t\ge 0\}$ has the covariance function $$R(s, t)= \Gamma(1-2H)(\min(s,t))^{2H} .$$ Please refer to Theorem 2.1 of [@DW2016]. For any fixed $t\in [0,T]$, the first-order partial derivative $$\label{exmp 000001-06-piandao} \frac {\partial }{ \partial t} R(s,t)=\left\{ \begin{array}{ll} 0 , & \quad 0<s\le t,\\ 2H\Gamma(1-2H) t^{2H-1}, & \quad t<s\le T \end{array} \right.$$ is a bounded variation function on $s\in [0,T]$. The process that satisfies Hypothesis [HYPOTHESIS 2](#hypthe2new-new){reference-type="ref" reference="hypthe2new-new"} is referred to as the fractional Gaussian process in [@cl2023], which includes four commonly used types of Gaussian processes with nonstationary increments: sub-fractional Brownian motion, bi-fractional Brownian motion, generalized sub-fractional Brownian motion and generalized fractional Brownian motion. We present the latter two as follows. **Example 7**. The generalized sub-fractional Brownian motion (also known as the sub-bifractional Brownian motion) $S^{H',K}(t)$ with parameters $H' \in (0, 1),\,K \in(0,2)$ and $H:=H'K\in (0,1)$ satisfies Hypothesis [HYPOTHESIS 2](#hypthe2new-new){reference-type="ref" reference="hypthe2new-new"}. The covariance function is given by: $$R(t,\, s)= (s^{2H'}+t^{2H'})^{K}-\frac12 \left[(t+s)^{2H'K} + \left\vert t-s\right\vert^{2H'K} \right].$$ In particular, when $K=1$, it degenerates to the sub-fractional Brownian motion $S^H(t)$ with parameter $H\in (0,1)$. For the case of $K\in (0,1)$ and $K\in (1,2)$, please refer to [@El-NoutyJourn2013] and [@Sgh2013] respectively. **Example 8**. The generalized fractional Brownian motion is an extension of both fractional and sub-fractional Brownian motions, whose covariance function is $$R(t,\, s)=\frac{(a+b)^2}{2(a^2+b^2)}(s^{2H}+t^{2H})-\frac{ab}{a^2+b^2}(s+t)^{2H}-\frac12 \left\vert t-s\right\vert^{2H},$$ where $H\in (0, 1)$ and $(a,b)\neq (0,0)$. See [@Zili17].
The following two processes in the literature are special cases of this one. In [@BGT2007], the Gaussian process $\vartheta_t$ with the covariance function $$R(t,\, s)=\frac{1}{2}\left((s+t)^{2H}-|t-s|^{2H}\right) ,\quad H \in (0,1)$$ is the derivative of the negative sub-fractional Brownian motion with parameter $h=2+2H$. In [@Sgh2014], the covariance function of the self-similar Gaussian process $\mathsf{S}^{2H,K}(t)$ with parameters $H\in (0,\frac12),\,K\in (0, 1)$ is as follows: $$R(t,\, s)=\frac{1}{2(1-K)}[t^{2H}+s^{2H}-K(t+s)^{2H}]-\frac12 |t-s|^{2H}.$$ Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"} satisfy the following stronger condition: 1. ($H_3'$) The difference [\[cha-key\]](#cha-key){reference-type="eqref" reference="cha-key"} is an absolutely continuous function on $s\in [0,T]$ for each fixed $t\in [0,T]$. As a comparison, in the following Example [Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, the difference [\[cha-key\]](#cha-key){reference-type="eqref" reference="cha-key"} is a (purely discontinuous) step function of $s\in [0,T]$. **Example 9**. Suppose that $H \in (0, \frac12)$. The Gaussian process $(G)$ has the covariance function $$\begin{aligned} \label{cova func 01} R(s, t)=\frac12\big[ (\max(s,t))^{2H} - |t-s|^{2H}\big] .\end{aligned}$$ Please refer to Theorem 1.1 of [@Talarczyk2020]. Now let us return to the Vasicek model [\[Vasicekmodel\]](#Vasicekmodel){reference-type="eqref" reference="Vasicekmodel"}, and assume that the whole trajectory of $(X)$ is continuously observed on $[0, T]$.
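Since each of the examples above is specified only through its covariance function, sample paths can be simulated directly by Cholesky factorization of the covariance matrix on a time grid. The following sketch is our own illustration (not part of the paper); it uses the covariance of Example 9 as input, and any of the other covariance functions can be substituted for `R9`.

```python
import numpy as np

def sample_paths(R, T=1.0, n=200, n_paths=3, seed=0):
    """Draw n_paths trajectories of a centred Gaussian process with
    covariance function R(s, t), via Cholesky factorization on a grid."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)              # grid avoids t = 0 (G_0 = 0)
    S, U = np.meshgrid(t, t, indexing="ij")
    Sigma = R(S, U)                           # covariance matrix on the grid
    # A small diagonal jitter keeps the factorization numerically stable.
    L = np.linalg.cholesky(Sigma + 1e-12 * np.eye(n))
    return t, L @ rng.standard_normal((n, n_paths))

# Covariance of Example 9 with H in (0, 1/2):
H = 0.3
R9 = lambda s, t: 0.5 * (np.maximum(s, t) ** (2 * H) - np.abs(t - s) ** (2 * H))

t, X = sample_paths(R9)                       # X has shape (n, n_paths)
```

A quick sanity check is that the empirical covariance of many simulated paths reproduces $R(s,t)$ on the grid.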
Following the approach of [@Es-Sebaiy22021], we propose the least squares type estimators for the unknown parameters $\theta$, $\alpha=\theta\mu$ and $\mu$ as follows: $$\begin{aligned} \label{LSE1} \hat{\theta}_T =\frac{\frac 12 T X_T^2-X_T \int_{0}^{T}X_t \mathrm{d}t }{T\int_{0}^{T}X_t^2 \mathrm{d}t-\left(\int_{0}^{T}X_t \mathrm{d}t \right)^2},\end{aligned}$$ $$\begin{aligned} \label{LSE2} \hat{\alpha}_T=\frac{X_T\int_{0}^{T}X_t^2 \mathrm{d}t - \frac 12 X_T^2 \int_{0}^{T}X_t \mathrm{d}t }{T\int_{0}^{T}X_t^2 \mathrm{d}t-\left( \int_{0}^{T}X_t \mathrm{d}t \right)^2},\end{aligned}$$ and $$\begin{aligned} \label{LSE3} \hat{\mu}_T=\frac{\hat{\alpha}_T} {\hat{\theta}_T } =\frac{\int_{0}^{T}X_t^2 \mathrm{d}t - \frac 12 X_T \int_{0}^{T}X_t \mathrm{d}t }{\frac 12 T X_T- \int_{0}^{T}X_t \mathrm{d}t }.\end{aligned}$$ The statistical estimations for the Vasicek model [\[Vasicekmodel\]](#Vasicekmodel){reference-type="eqref" reference="Vasicekmodel"}, driven by either sub-fractional Brownian motion or bi-fractional Brownian motion, have been studied in [@Yuqian2020] and [@Es-Sebaiy22021], respectively. In [@Yuqian2020], the asymptotic behavior of each estimator is obtained separately, while in [@Es-Sebaiy22021], the joint asymptotic distribution of the estimators is derived. In [@KX2023], the same problem is studied when the driving noise $(G)$ is the generalized sub-fractional Brownian motion with $H',K\in (0,1)$, where the asymptotic behavior of each estimator is shown separately. We point out that the parameter range in our Example [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"} is $H' \in (0,1),\,K\in (0,2)$. In this paper, we will demonstrate that the joint asymptotic distribution of the estimators, similar to the result in [@Es-Sebaiy22021], holds when the driving noise $(G)$ is taken from any one of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}.
To the best of our knowledge, Vasicek models [\[Vasicekmodel\]](#Vasicekmodel){reference-type="eqref" reference="Vasicekmodel"} driven by the aforementioned Gaussian processes have not been thoroughly investigated in the literature. Now we present the main result of this paper. **Theorem 10**. *If the driving process $(G)$ of the Vasicek model [\[Vasicekmodel\]](#Vasicekmodel){reference-type="eqref" reference="Vasicekmodel"} is taken from any one of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, we have that as $T \to \infty$, $$\begin{aligned} &\left(T^{\beta} e ^{\theta T}(\hat{\theta}_T - \theta), \, T^{1-H}(\hat{\mu}_T - \mu) \right) \xrightarrow{law} \left( \frac{ N_2}{N_3},\, \frac {1}{\theta} N_1 \right), \\ &\left( T^{\beta} e ^{\theta T}(\hat{\theta}_T - \theta), \, T^{1-H}(\hat{\alpha}_T - \alpha)\right) \xrightarrow{law} \left(\frac{ N_2}{N_3},\, N_1 \right),\end{aligned}$$ where $$\label{beta biaoshishizi} \beta=\left\{ \begin{array}{ll} 1-H, & \quad \text{for Examples~\ref{exmp 000001}-\ref{exmp 000001-04}} ,\\ \frac{1}{2}-H, &\quad \text{for Examples~\ref{exmp 000001-06}-\ref{exmp 000001-06-buch}},\\ 0, &\quad\text{for Examples~\ref{exmp0005}-\ref{exmp lizi001}}, \end{array} \right.$$ and $N_1\sim \mathcal{N}(0,\lambda_G^2), N_2\sim\mathcal{N}(0, 4\theta^2\sigma_G^2), N_3\sim \mathcal{N}(\mu,\theta^2\sigma^2_{\infty})$ are independent Gaussian random variables and $\lambda_G^2,\,\sigma_{G}^2,\,\sigma^2_{\infty}$ are positive constants given as in [\[assum51-new\]](#assum51-new){reference-type="eqref" reference="assum51-new"}, [\[changshu sg2\]](#changshu sg2){reference-type="eqref" reference="changshu sg2"} and [\[Sigmapingfang 2\]](#Sigmapingfang 2){reference-type="eqref" reference="Sigmapingfang 2"} respectively.* *Remark 1*. 1.
There are three different types of scaling in the asymptotic behavior of $\hat{\theta}_T$, as indicated by the identity [\[beta biaoshishizi\]](#beta biaoshishizi){reference-type="eqref" reference="beta biaoshishizi"}. To the best of our knowledge, this finding is new. 2. The proof idea is based on a slight modification of the set of sufficient conditions proposed in [@Es-Sebaiy22021]. In Section [4](#nine example){reference-type="ref" reference="nine example"}, we will construct an artificial Gaussian process that satisfies Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} but for which the joint distribution convergence fails. 3. The proofs for Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"} are also applicable to both the sub-fractional Brownian motion and the bi-fractional Brownian motion since they satisfy Hypothesis ($H_3'$) and ($H_5$) given below. It should be noted that the range of parameters in ($H_5$) extends beyond that of the bi-fractional Brownian motion $(B^{H',K}(t))_{ t\geq 0}$ in [@Es-Sebaiy22021], where it is assumed that $H'\in (0,1),\,K\in (0,1]$. Recall that the covariance function of bi-fractional Brownian motion $(B^{H',K})$ with $H'>0,K \in (0, 2)$ such that $H:=H'K\in (0, 1)$ is as follows: $$R(t,s)=\frac{1}{2}\left((s^{2H'}+t^{2H'})^K - |t-s|^{2H'K}\right),$$ please refer to [@Bardina2011] for the case of $K\in (1, 2)$. 4. The rationale behind proposing the sophisticated conditions on $(G)$, such as Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} or Hypothesis [HYPOTHESIS 2](#hypthe2new-new){reference-type="ref" reference="hypthe2new-new"}, is as follows: the covariance structure (i.e., the covariance function) of a Gaussian process determines all of its properties. Therefore, it is preferable to establish conditions directly in terms of the covariance function. 5.
Non-Gaussian Vasicek-type models driven by the Hermite process or the Liu process were investigated in [@Yuqian2020] and [@wei2023], respectively. As a by-product, we also demonstrate the asymptotic behavior of the least squares estimator for the drift parameter of the non-ergodic Ornstein--Uhlenbeck (OU) process driven by the Gaussian process: $$\begin{aligned} \label{OU-model} \mathrm{d}Y_t =\theta Y_t \mathrm{d}t +\mathrm{d}G_t, \quad Y_0 = 0 ,\quad \theta>0.\end{aligned}$$ The least squares estimator is proposed in [@El-Es-Sebaiy2016] as follows: $$\begin{aligned} \tilde{\theta}_T=\frac{Y_T^2}{2\int_0^T Y_t^2 \mathrm{d}t}.\end{aligned}$$ **Theorem 11**. *If the driving process $(G)$ of the Ornstein--Uhlenbeck process [\[OU-model\]](#OU-model){reference-type="eqref" reference="OU-model"} is taken from any one of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, we have that as $T \to \infty$, $$\begin{aligned} T^{\beta} e ^{\theta T}(\tilde{\theta}_T - \theta) \xrightarrow{law} \frac{2\sigma_G}{\sigma_{\infty}}\mathcal{C}(1),\end{aligned}$$ where $\mathcal{C}(1)$ is the standard Cauchy distribution, and $\beta,\,\sigma_G^2,\,\sigma_{\infty}^2$ are constants given as in [\[beta biaoshishizi\]](#beta biaoshishizi){reference-type="eqref" reference="beta biaoshishizi"}, [\[changshu sg2\]](#changshu sg2){reference-type="eqref" reference="changshu sg2"} and [\[Sigmapingfang 2\]](#Sigmapingfang 2){reference-type="eqref" reference="Sigmapingfang 2"} respectively.* *Remark 2*. There are three different types of scaling in the asymptotic behavior of $\tilde{\theta}_T$ for the non-ergodic OU process; to the best of our knowledge, this is new. The rest of the paper is organized as follows: Section [2](#sec-002){reference-type="ref" reference="sec-002"} is dedicated to describing the associated reproducing kernel Hilbert space of $(G)$.
In Section [3](#proof mn results){reference-type="ref" reference="proof mn results"}, we prove Theorems [Theorem 10](#asymptoticth){reference-type="ref" reference="asymptoticth"} and [Theorem 11](#asymptoticth OU){reference-type="ref" reference="asymptoticth OU"}. In Section [4](#nine example){reference-type="ref" reference="nine example"}, we present two examples for which either Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} or Hypothesis [HYPOTHESIS 2](#hypthe2new-new){reference-type="ref" reference="hypthe2new-new"} is valid, but the joint asymptotic distribution may not hold. Throughout this paper, we always assume that the covariance function $R(t,s)$ satisfies Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} ($H_1$). The symbol $C$ represents a generic constant, the value of which may vary from one line to another. # Preliminary {#sec-002} Denote by $\mathcal{V}_{[0,T]}$ the set of functions on $[0,T]$ with bounded variation. For each $g\in \mathcal{V}_{[0,T]}$, we denote by $\nu_{g}$ the restriction of the Lebesgue--Stieltjes signed measure associated with $g^0$ to $([0,T ], \mathcal{B}([0,T ]))$. Here, $g^0$ is defined as $$g^0(x)=\left\{ \begin{array}{ll} g(x), & \quad \text{if } x\in [0,T] ,\\ 0, &\quad \text{otherwise}. \end{array} \right.$$ As usual, we assume that $(G)$ is defined on a complete probability space $(\Omega, \mathcal{F}, P)$. Let $\mathfrak{H}$ denote the associated reproducing kernel Hilbert space of $(G)$.
This Hilbert space is defined as the closure of the space of all real-valued step functions on $[0, T]$ with the inner product $$\begin{aligned} \langle \mathbbm{1}_{[a,b]},\,\mathbbm{1}_{[c,d]}\rangle_{\mathfrak{H}}=\mathbb{E}\left(( G_b-G_a) ( G_d-G_c) \right).\end{aligned}$$ By abuse of notation, we use the same symbol $$G=\left\{G(h)=\int_{[0,T]}h(t)\mathrm{d}G_t, \quad h \in \mathfrak{H}\right\}$$ to represent the isonormal Gaussian process on the probability space $(\Omega, \mathcal{F}, P)$. This process is indexed by the elements in the Hilbert space $\mathfrak{H}$ and satisfies Itô's isometry: $$\begin{aligned} \label{G extension defn} \mathbb{E}(G(g)G(h)) = \langle g, h \rangle_{\mathfrak{H}}, \quad \forall g, h \in \mathfrak{H}.\end{aligned}$$ Furthermore, we denote by $\mathfrak{H}_1$ the associated reproducing kernel Hilbert space of the fBm $(B^H)$. **Proposition 12**. *Let $f,\, g\in \mathcal{V}_{[0,T]}$ and $R(s,t)=\mathbb{E}[G_sG_t],\,R^B(s,t)=\mathbb{E}[B^H_sB^H_t]$.* 1. *If the covariance function $R(s,t)$ satisfies Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} and if $g(\cdot)$ has no common discontinuous points with the function $\frac {\partial R(\cdot,t)}{\partial t}$ on $[0,T]$ for each fixed $t\in [0,T]$, then $$\begin{aligned} \label{innp fg3-zhicheng0-0} \langle f,\,g \rangle_{\mathfrak{H}}=\int_{[0,T]^2} f(t)g(s) \frac{\partial }{\partial s }\big( \frac{\partial R(s,t)}{ \partial t }\big) \mathrm{d}t \mathrm{d}s.\end{aligned}$$* 2. 
*If the covariance function $R(s,t)$ satisfies Hypothesis [HYPOTHESIS 2](#hypthe2new-new){reference-type="ref" reference="hypthe2new-new"} and if $g(\cdot)$ has no common discontinuous points with the difference function $\frac {\partial R(\cdot,t)}{\partial t} - \frac {\partial R^{B}(\cdot,t)}{\partial t}$ on $[0,T]$ for each fixed $t\in [0,T]$, then $$\begin{aligned} \label{innp fg3-zhicheng0} \langle f,\,g \rangle_{\mathfrak{H}}-\langle f,\,g \rangle_{\mathfrak{H}_1}=\int_{[0,T]^2} f(t)g(s)\frac{\partial }{\partial s}\left( \frac{\partial R(s,t)}{\partial t} - \frac{\partial R^B(s,t)}{\partial t} \right) \mathrm{d}t \mathrm{d}s.\end{aligned}$$* *Additionally, we interpret the mixed second-order partial derivatives $$\label{papr dereiv} \frac{\partial }{\partial s }\left( \frac{\partial R(s,t)}{ \partial t }\right)\text{\quad and \quad} \frac{\partial }{\partial s}\left( \frac{\partial R(t,s)}{\partial t} - \frac{\partial R^B(t,s)}{\partial t} \right)$$ as the distributional derivatives of $\frac{\partial R(s,t)}{ \partial t }$ and $\frac{\partial R(t,s)}{\partial t} - \frac{\partial R^B(t,s)}{\partial t}$ with respect to the variable $s$ respectively.* *Proof.* We only need to show the first one since the latter is similar. If the covariance function $R(t,s)$ satisfies Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} ($H_1$), then $$\begin{aligned} \label{innp fg3} \langle f,\,g \rangle_{\mathfrak{H}}=-\int_{[0,T]} f(t) \mathrm{d}t \int_{[0,T]}\frac{\partial R(s,t)}{\partial t} \nu_{g}(\mathrm{d}s),\end{aligned}$$ as shown in Proposition 3.1 of [@cl2023]. 
Next, under Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} ($H_2$), we apply Lemma 7.4 of [@cl2023] to the bounded variation function $\frac{\partial}{\partial t}R(\cdot,t)$ and $g(\cdot)\in \mathcal{V}_{[0,T]}$ for fixed $t\in [0,T]$ to obtain: $$\begin{aligned} \langle f,\,g \rangle_{\mathfrak{H}}&=\int_{[0,T]^2} f(t)g(s) \frac{\partial }{\partial s }\left( \frac{\partial R(s,t)}{ \partial t }\right) \mathrm{d}t \mathrm{d}s. \end{aligned}$$ ◻ As a special case of Proposition [Proposition 12](#prop 2-1){reference-type="ref" reference="prop 2-1"}, it is evident that the following corollary holds. **Corollary 13**. *Let $f,\, g\in \mathcal{V}_{[0,T]}$. If Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"} ($H_2$) and Hypothesis [HYPOTHESIS 2](#hypthe2new-new){reference-type="ref" reference="hypthe2new-new"} ($H_3$) are replaced respectively by the following assumptions:* 1. *($H_2'$) The partial derivative $\frac {\partial R(s,t)}{\partial t}$ is an absolutely continuous function on $s\in [0,T]$ for any fixed $t \in[0,T]$.* 2. *($H_3'$) The difference [\[cha-key\]](#cha-key){reference-type="eqref" reference="cha-key"} is an absolutely continuous function on $s\in [0,T]$ for any fixed $t \in[0,T]$.* *then both equations [\[innp fg3-zhicheng0-0\]](#innp fg3-zhicheng0-0){reference-type="eqref" reference="innp fg3-zhicheng0-0"} and [\[innp fg3-zhicheng0\]](#innp fg3-zhicheng0){reference-type="eqref" reference="innp fg3-zhicheng0"} hold, and we interpret the mixed second-order partial derivatives in [\[papr dereiv\]](#papr dereiv){reference-type="eqref" reference="papr dereiv"} as functions.* Now we return to Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}.
It is clear that all of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 4](#exmp 000001-04){reference-type="ref" reference="exmp 000001-04"} satisfy Hypothesis ($H_2'$). As for Example [Example 5](#exmp 000001-06){reference-type="ref" reference="exmp 000001-06"}, the first-order partial derivative [\[exmp 000001-06-piandao\]](#exmp 000001-06-piandao){reference-type="eqref" reference="exmp 000001-06-piandao"} for each fixed $t\in [0,T]$ is a linear combination of an absolutely continuous function $$\label{varphislizi} \varphi(s)=H(t+s)^{2H-1}-H t^{2H-1},\quad s\in[0,T],$$ and a step function $$\label{phislizi} \phi(s)=\left\{ \begin{array}{ll} 0, & \quad 0<s\le t,\\ H t^{2H-1}, & \quad t<s\le T. \end{array} \right.$$ Hence, it follows from [\[innp fg3-zhicheng0-0\]](#innp fg3-zhicheng0-0){reference-type="eqref" reference="innp fg3-zhicheng0-0"} that $$\begin{aligned} \label{guanjiandengshi 000-new1-000} \langle f,\,g \rangle_{\mathfrak{H}}&=H(2H-1)\int_{[0,T]^2}f(t)g(s) (t+s)^{2H-2} \mathrm{d}t \mathrm{d}s +H \int_0^T {f}(t) g(t)t^{2H-1}\mathrm{d}t . \end{aligned}$$ As for Example [Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"}, in the same vein, it follows from [\[innp fg3-zhicheng0-0\]](#innp fg3-zhicheng0-0){reference-type="eqref" reference="innp fg3-zhicheng0-0"} that $$\begin{aligned} \label{guanjiandengshi 000-new1-000-buch} \langle f,\,g \rangle_{\mathfrak{H}}&=2H\Gamma(1-2H) \int_0^T {f}(t) g(t)t^{2H-1}\mathrm{d}t, \end{aligned}$$ which shows that the Gaussian process $(G)$ given in Example [Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"} has independent increments. Additionally, Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"} satisfy Hypothesis ($H_3'$).
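The independent-increments property for Example 6 can also be read off directly from its covariance $R(s,t)=\Gamma(1-2H)(\min(s,t))^{2H}$: for $0<s\le t\le u<v$, $$E[(Z_t-Z_s)(Z_v-Z_u)]=R(t,v)-R(t,u)-R(s,v)+R(s,u)=\Gamma(1-2H)\big(t^{2H}-t^{2H}-s^{2H}+s^{2H}\big)=0.$$ A quick numerical confirmation of this cancellation (our own check, not part of the paper):

```python
import math
from itertools import combinations

H = 0.3
c = math.gamma(1 - 2 * H)
R = lambda s, t: c * min(s, t) ** (2 * H)   # covariance of Example 6

def incr_cov(s, t, u, v):
    """E[(Z_t - Z_s)(Z_v - Z_u)] for disjoint increments with s < t <= u < v."""
    return R(t, v) - R(t, u) - R(s, v) + R(s, u)

grid = [0.1, 0.4, 0.7, 1.0, 1.3]
# combinations yields sorted quadruples s < t < u < v, i.e. disjoint increments.
max_dev = max(abs(incr_cov(s, t, u, v))
              for s, t, u, v in combinations(grid, 4))
print(max_dev)   # 0.0: disjoint increments are uncorrelated, hence independent
```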
As for Example [Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, the first-order partial derivative of the difference function [\[cha-key\]](#cha-key){reference-type="eqref" reference="cha-key"} $$\label{guanjiandengshi 000-new1000} \frac{\partial R(s,t)}{\partial t }- \frac{\partial R^B(s,t)}{\partial t }=\left\{ \begin{array}{ll} 0, & \quad 0<s\le t,\\ -H t^{2H-1}, & \quad t<s\le T, \end{array} \right.$$ is a step function. It follows from the identity [\[innp fg3-zhicheng0\]](#innp fg3-zhicheng0){reference-type="eqref" reference="innp fg3-zhicheng0"} that $$\begin{aligned} \label{guanjiandengshi 000-new1} \langle f,\,g \rangle_{\mathfrak{H}}- \langle f,\,g \rangle_{\mathfrak{H}_1}&=-H \int_0^T {f}(t) g(t)t^{2H-1}\mathrm{d}t . \end{aligned}$$ As a direct corollary of [\[guanjiandengshi 000-new1\]](#guanjiandengshi 000-new1){reference-type="eqref" reference="guanjiandengshi 000-new1"}, we have that if the intersection of the supports of $f,\,g$ has Lebesgue measure zero, then $$\begin{aligned} \label{innp fg3-zhicheng0-00} \langle f,\,g \rangle_{\mathfrak{H}}=\langle f,\,g \rangle_{\mathfrak{H}_1}=\int_{[0,T]^2} f(t)g(s) \frac{\partial^2 R^{B}(t,s)}{\partial t\partial s} \mathrm{d}t \mathrm{d}s. \end{aligned}$$ # Proof of the main results {#proof mn results} In this section, we assume that the Gaussian process $(G)$ is taken from Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}. It is evident that all the Gaussian processes given in Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"} are self-similar.
Hence, the following limit [\[assum51-new\]](#assum51-new){reference-type="eqref" reference="assum51-new"} holds trivially: $$\label{assum51-new} \lim_{t \to \infty}\frac{1}{t^{2H}} R(t,t) = \lambda_G^2=\left\{ \begin{array}{ll} \left\vert 1-2^{2H-1}\right\vert, & \quad \text{for Examples~\ref{exmp 000001} and \ref{exmp 000001-06}},\\ {2-2^{K}}, & \quad \text{for Example~\ref{exmp 000001-04}} ,\\ \Gamma(1-2H),& \quad \text{for Example~\ref{exmp 000001-06-buch}},\\ 2^{K}-2^{2H'K-1} , &\quad \text{for Example~\ref{exmp0005}},\\ \frac{(a+b)^2- 2^{2H}ab}{a^2+b^2}, &\quad\text{for Example~\ref{exmp7-1zili}},\\ \frac12 , &\quad \text{for Example~\ref{exmp lizi001}}. \end{array} \right.$$ To avoid verifying the other conditions of [@Es-Sebaiy22021] for the seven examples on a case-by-case basis, we classify Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"} into four types: 1. Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 4](#exmp 000001-04){reference-type="ref" reference="exmp 000001-04"} satisfy Hypothesis ($H_2'$) and the following condition: 1. There exist constants $H'>0,\, K\in (0,2)$ independent of $T$ such that $H:=H'K\in (0, 1)$, and there exist constants $C_1,C_2\ge 0$ which depend only on $H',\,K$ such that the inequality $$\label{phi2-new} \left| \frac{\partial}{\partial s}\left(\frac {\partial R(s,t)}{\partial t} \right)\right| \le C_1 (t+s)^{2H-2}+C_2 (s^{2H'}+t^{2H'})^{K-2}(st)^{2H'-1}$$ holds. 2. Examples [Example 5](#exmp 000001-06){reference-type="ref" reference="exmp 000001-06"}-[Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"} satisfy Hypothesis ($H_2$) but do not satisfy Hypothesis ($H_2'$). 3.
Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"} satisfy Hypothesis ($H_3'$) and the following condition: 1. There exist constants $H'>0,\, K\in (0,2)$ independent of $T$ such that $H:=H'K\in (0, 1)$, and there exist constants $C_1,C_2\ge 0$ which depend only on $H',\,K$ such that the inequality $$\label{phi2} \left| \frac{\partial}{\partial s}\left(\frac {\partial R(s,t)}{\partial t} - \frac {\partial R^{B}(s,t)}{\partial t}\right)\right| \le C_1 (t+s)^{2H-2}+C_2 (s^{2H'}+t^{2H'})^{K-2}(st)^{2H'-1}$$ holds. 4. Example [Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"} satisfies Hypothesis ($H_3$) but does not satisfy Hypothesis ($H_3'$). It is apparent that the right-hand sides of [\[phi2-new\]](#phi2-new){reference-type="eqref" reference="phi2-new"}-[\[phi2\]](#phi2){reference-type="eqref" reference="phi2"} are bounded above by $$\label{uppppper} C\times (ts)^{H-1},$$ see [@cl2023]. The following propositions will be used to verify conditions $(\mathcal{A}_1)$ and $(\mathcal{A}_3)$-$(\mathcal{A}_5)$ of [@Es-Sebaiy22021], respectively. **Proposition 14**. *There exists a positive constant $C$, independent of $T$, such that $$\begin{aligned} \label{gima2fang jie} \sigma^2(s,t):=\mathbb{E}\big[(G_s-G_t)^2\big]\le C\left\vert s-t\right\vert^{2H},\quad 0\le s, t\le T,\end{aligned}$$where $\sigma^2(s,t)$ and $\sigma(s,t)$ are called the structure function and canonical metric for $(G)$, respectively.* *Proof.* Suppose $0\le s<t\le T$.
According to the classification of the Gaussian processes mentioned above, we treat each case separately:\ (i) For the case of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 4](#exmp 000001-04){reference-type="ref" reference="exmp 000001-04"}, Corollary [Corollary 13](#coro teshuqk){reference-type="ref" reference="coro teshuqk"}, along with the modulus inequality of integration, implies that $$\begin{aligned} \mathbb{E}\left[(G_s-G_t)^2\right]&=\int_{[s,t]^2} \frac {\partial^2 R(u,v)}{\partial u\partial v} \mathrm{d}u\mathrm{d}v\le C \int_{[s,t]^2} (uv)^{H-1}\mathrm{d}u\mathrm{d}v \le C(t-s)^{2H}.\end{aligned}$$ (ii) For the case of Example [Example 5](#exmp 000001-06){reference-type="ref" reference="exmp 000001-06"}, since $H\in (0,\frac12)$, the identity [\[guanjiandengshi 000-new1-000\]](#guanjiandengshi 000-new1-000){reference-type="eqref" reference="guanjiandengshi 000-new1-000"} implies that $$\begin{aligned} \mathbb{E}\left[(G_s-G_t)^2\right]&=\int_{[s,t]^2} \frac{\partial}{\partial u}\left( \frac {\partial R(u,v)}{\partial v}\right) \mathrm{d}u\mathrm{d}v\\ &= H(2H-1) \int_{[s,t]^2} (u+v)^{2(H-1)}\mathrm{d}u\mathrm{d}v + H\int_{s} ^t u^{2H-1}\mathrm{d}u\\ &\le \frac12 t^{2H}\left(1-\left(\frac{s}{t}\right)^{2H}\right) \le \frac12(t-s)^{2H}.\end{aligned}$$ For the case of Example [Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"}, in the same vein, we have that $$\begin{aligned} \mathbb{E}\left[(G_s-G_t)^2\right]&=\int_{[s,t]^2} \frac{\partial}{\partial u}\left( \frac {\partial R(u,v)}{\partial v}\right) \mathrm{d}u\mathrm{d}v\\ &= 2H\Gamma(1- 2H)\int_{s} ^t u^{2H-1}\mathrm{d}u\\ &= \Gamma(1- 2H) t^{2H}\left(1- \left(\frac{s}{t}\right)^{2H}\right) \le \Gamma(1-2H) (t-s)^{2H}.\end{aligned}$$ (iii) For the case of Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"}, Corollary [Corollary
13](#coro teshuqk){reference-type="ref" reference="coro teshuqk"}, together with the upper bound [\[uppppper\]](#uppppper){reference-type="eqref" reference="uppppper"} and the modulus inequality of integration, implies that $$\begin{aligned} \left\vert\mathbb{E}\left[(G_s-G_t)^2\right]-\mathbb{E}\left[(B^H_s-B^H_t)^2\right]\right\vert&=\left\vert\int_{[s,t]^2} \frac{\partial}{\partial u}\left( \frac{\partial R(u,v)}{\partial v} - \frac{\partial R^B(u,v)}{\partial v} \right)\mathrm{d}u\mathrm{d}v\right\vert\\ &\le C \int_{[s,t]^2} (uv)^{H-1}\mathrm{d}u\mathrm{d}v \le C(t-s)^{2H}.\end{aligned}$$ It is well known that for the fBm $(B^H)$, we have $$\begin{aligned} \label{wellknownn} \mathbb{E}\left[(B^H_s-B^H_t)^2\right]=\left\vert s-t\right\vert^{2H}. \end{aligned}$$ Therefore, the claim [\[gima2fang jie\]](#gima2fang jie){reference-type="eqref" reference="gima2fang jie"} follows from the triangle inequality.\ (iv) For the case of Example [Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, since $H\in (0,\frac12)$, the identity [\[innp fg3-zhicheng0-00\]](#innp fg3-zhicheng0-00){reference-type="eqref" reference="innp fg3-zhicheng0-00"} implies that $$\begin{aligned} \left\vert\mathbb{E}\big[(G_s-G_t)^2\big] -\mathbb{E}\big[(B^H_s-B^H_t)^2\big]\right\vert&=H \int_{s}^t u^{2H-1}\mathrm{d}u \le \frac12\left\vert s-t\right\vert^{2H}, \end{aligned}$$ which, together with the identity [\[wellknownn\]](#wellknownn){reference-type="eqref" reference="wellknownn"}, implies the desired [\[gima2fang jie\]](#gima2fang jie){reference-type="eqref" reference="gima2fang jie"}. ◻ **Corollary 15**. *All of $\hat{\theta}_T$,  $\hat{\mu}_T$ and $\hat{\alpha}_T$ are strongly consistent.
Furthermore, the Gaussian processes $$\begin{aligned} \xi_t=\int_0^t e^{-\theta s}\mathrm{d}G_s\text{\quad and\quad} Z_t=\int_0^t e^{-\theta s} G_s \mathrm{d}s\end{aligned}$$ satisfy the integration-by-parts formula $$\begin{aligned} \xi_t=e^{-\theta t}G_t+\theta Z_t. \end{aligned}$$ Additionally, as $t\to\infty$, $\xi_t$ converges almost surely to a limit: $$\begin{aligned} \lim_{t\to\infty} \xi_t = \theta \lim_{t\to\infty} Z_t= \theta\int_0^{\infty} e^{-\theta s} G_s \mathrm{d}s,\end{aligned}$$with $$\begin{aligned} \label{Sigmapingfang 2} \sigma^2_{\infty}:= \mathbb{E}[Z_{\infty}^2]=\mathbb{E}\left[\left(\int_0^{\infty} e^{-\theta s} G_s \mathrm{d}s\right)^2\right]<\infty.\end{aligned}$$* *Proof.* We only show the integrability of the integral in the identity [\[Sigmapingfang 2\]](#Sigmapingfang 2){reference-type="eqref" reference="Sigmapingfang 2"}. The other claims follow directly from Lemma 2.1 and Theorem 3.1 of [@Es-Sebaiy22021] and Proposition [Proposition 14](#proop3-1){reference-type="ref" reference="proop3-1"}. In fact, Proposition [Proposition 14](#proop3-1){reference-type="ref" reference="proop3-1"} implies that there exists a positive constant $C$ independent of $T$ such that $$\begin{aligned} \mathbb{E}\left[ G_t^2\right]\le Ct^{2H},\quad t\ge 0.\end{aligned}$$ Then it follows from Fubini's theorem and the Cauchy-Schwarz inequality that $$\mathbb{E}\left[\left(\int_0^{\infty} e^{-\theta s} G_s \mathrm{d}s\right)^2\right]\le C \int_0^{\infty} \mathrm{d}s \int_0^{\infty} \mathrm{d}t\,e^{-\theta (s+t)} (ts)^{H}<\infty.$$ ◻ **Proposition 16**. *The covariance function $R(t,s)$ satisfies, for any fixed $s>0$, $$\begin{aligned} \lim_{t \to \infty}\frac{1}{t^{H}} R(s,t)& =0.
\label{assum51}\end{aligned}$$* *Proof.* Denote $$\begin{aligned} \label{fof jihao} f_0(\cdot) = \mathbf{1}_{[0,s]} (\cdot),\quad f(\cdot) = \mathbf{1}_{[0,t]} (\cdot),\quad g(\cdot) = \mathbf{1}_{[s,t]} (\cdot).\end{aligned}$$ To prove [\[assum51\]](#assum51){reference-type="eqref" reference="assum51"}, we only need to show that for any fixed $s>0$ and all $t>2s$, the following limit holds: $$\begin{aligned} \lim_{t\to \infty}\frac{1}{t^H}\langle f,\, f_0\rangle_{\mathfrak{H}}= \lim_{t\to \infty}\frac{1}{t^H}\big[\langle f_0,\, f_0\rangle_{\mathfrak{H}}+\langle g,\, f_0\rangle_{\mathfrak{H}}\big]= \lim_{t\to \infty}\frac{1}{t^H}\langle g,\, f_0\rangle_{\mathfrak{H}}=0.\end{aligned}$$ (i) For the case of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 4](#exmp 000001-04){reference-type="ref" reference="exmp 000001-04"}, recall that they satisfy Hypothesis ($H_2'$) and ($H_4$). Corollary [Corollary 13](#coro teshuqk){reference-type="ref" reference="coro teshuqk"} implies that $$\begin{aligned} \quad&\left\vert\frac{1}{t^H}\langle g,\, f_0\rangle_{\mathfrak{H}}\right\vert\notag\\ &\le \frac{1}{t^H}\int_{s}^t\mathrm{d}u \int_0^s \left\vert \frac {\partial^2 R(u,v)} {\partial u\partial v} \right\vert\mathrm{d}v\notag\\ &\le \frac{C}{t^H}\int_0^s\mathrm{d}u \int_s^t (u+v)^{2H-2}+ (u^{2H'}+v^{2H'})^{K-2}(uv)^{2H'-1} \mathrm{d}v\notag \\ &= \frac{C}{t^H} \Bigg[t^{2H}\left(\left(1+\frac{s}{t}\right)^{2H}-1\right)-(2s)^{2H}+s^{2H}\notag\\ &+ t^{2H}\left(\left(1+\left(\frac{s}{t}\right)^{2H'}\right)^{K}-1\right)-2^K s^{2H}+s^{2H}\Bigg]\notag\\ &\le {C\left\{t^{H-1}\left[2Hs+ O\left(\frac{s}{t}\right)\right]+t^{H}\left[K\left(\frac{s}{t}\right)^{2H'}+ O\left(\left(\frac{s}{t}\right)^{4H'}\right)\right]+ s^{2H} t^{-H}\right\}}\notag \\ &\to 0,\label{bdengshi-222227}\end{aligned}$$ since $H'>0,\, K\in (0,2)$ and $H:=H'K\in (0, 1)$.\ (ii) For the case of Example [Example 6](#exmp 000001-06-buch){reference-type="ref"
reference="exmp 000001-06-buch"}, the limit [\[assum51\]](#assum51){reference-type="eqref" reference="assum51"} is trivial. For the case of Example [Example 5](#exmp 000001-06){reference-type="ref" reference="exmp 000001-06"}, since $H\in (0,\frac12)$, the identity [\[guanjiandengshi 000-new1-000\]](#guanjiandengshi 000-new1-000){reference-type="eqref" reference="guanjiandengshi 000-new1-000"} implies that $$\begin{aligned} \left\vert\frac{1}{t^H}\langle g,\, f_0\rangle_{\mathfrak{H}}\right\vert= \frac{1}{t^H}\left\vert \int_{s}^t\mathrm{d}u \int_0^s \frac {\partial^2 R(u,v)} {\partial u\partial v} \mathrm{d}v \right\vert\le \frac{C}{t^H}\int_0^s\mathrm{d}u \int_s^t (u+v)^{2H-2} \mathrm{d}v\notag \to 0. \end{aligned}$$ (iii) For the case of Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"}, recall that they satisfy Hypothesis ($H_3'$) and ($H_5$). The identity [\[innp fg3-zhicheng0\]](#innp fg3-zhicheng0){reference-type="eqref" reference="innp fg3-zhicheng0"} implies that as $t\to \infty$, $$\begin{aligned} \frac{1}{t^H} \left\vert\langle g,\, f_0\rangle_{\mathfrak{H}}- \langle g,\, f_0\rangle_{\mathfrak{H}_1}\right\vert&\le\frac{1}{t^H} \int_0^s\mathrm{d}u \int_s^t \left\vert \frac{\partial^2 R(u,v)}{\partial u\partial v}-\frac{\partial^2 R^B(u,v)}{\partial u\partial v}\right\vert \mathrm{d}v\notag\\ &\le \frac{C}{t^H}\int_0^s\mathrm{d}u \int_s^t (u+v)^{2H-2}+ (u^{2H'}+v^{2H'})^{K-2}(uv)^{2H'-1} \mathrm{d}v\notag \\ &\to 0, \end{aligned}$$ where the last limit is in the same vein as [\[bdengshi-222227\]](#bdengshi-222227){reference-type="eqref" reference="bdengshi-222227"}. 
For the fBm $(B^{H})$ and its covariance function $R^B(s,t)=\mathbb{E}[B^H_sB^H_t]$, Itô's isometry [\[G extension defn\]](#G extension defn){reference-type="eqref" reference="G extension defn"} implies that as $t\to \infty$, $$\begin{aligned} \frac{1}{t^H} \langle g,\, f_0\rangle_{\mathfrak{H}_1}= \frac{1}{t^H} \mathbb{E}\big[B^H(s)\big(B^H(t)-B^H(s)\big)\big]=\frac{1}{2t^H}\big[t^{2H}-(t-s)^{2H}-s^{2H}\big]\to 0. \label{taylar 1} \end{aligned}$$ By the triangle inequality, we have that as $t\to \infty$, $$\begin{aligned} \frac{1}{t^H}\left\vert\langle g,\, f_0\rangle_{\mathfrak{H}}\right\vert\to 0. \end{aligned}$$ (iv) For the case of Example [Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, since the intersection of the supports of $f_0$ and $g$ is of Lebesgue measure zero, the identity [\[innp fg3-zhicheng0-00\]](#innp fg3-zhicheng0-00){reference-type="eqref" reference="innp fg3-zhicheng0-00"} implies that as $t\to \infty$, $$\begin{aligned} \frac{1}{t^H}\left\vert\langle g,\, f_0\rangle_{\mathfrak{H}}\right\vert=\frac{1}{t^H} \langle g,\, f_0\rangle_{\mathfrak{H}_1}\to 0.\end{aligned}$$ ◻ Denote $$\begin{aligned} \label{zeta T jifen} \zeta_t = \int_0^t e^{-\theta(t-s)} \mathrm{d}G_s,\qquad \eta_t = \int_0^t e^{-\theta(t-u)}\mathrm{d}B^H_u,\end{aligned}$$ where $\theta>0$ is a constant.
Then $\zeta_t$ and $\eta_t$ are, respectively, the ergodic Ornstein-Uhlenbeck processes defined by the stochastic differential equations $$\begin{aligned} \mathrm{d}\zeta_t &=-\theta \zeta_t \mathrm{d}t +\mathrm{d}G_t, \quad \zeta_0 = 0,\label{OU dingyi}\\ \mathrm{d}\eta_t &=-\theta \eta_t \mathrm{d}t +\mathrm{d}B^H_t, \quad \eta_0 = 0.\label{OU dingyi duibi}\end{aligned}$$ The next two propositions concern the asymptotic behaviour of $\mathbb{E}(G_t\zeta_t),\, \mathbb{E}(G_s\zeta_t),\,\mathbb{E}(\zeta_t^2)$ as $t\to \infty$; the following elementary estimate is used several times in the proofs. **Lemma 17**. *Assume $\beta>0$ and $\theta>0$. Denote $${A}(s)=e^{-\theta s}\int_0^{s} e^{\theta r} r^{\beta -1}\mathrm{d}r.$$ Then there exists a positive constant $C$ such that for any $s\in [0,\infty)$, $$\begin{aligned} {A}(s)&\le C \times\left(s^{\beta}\mathbbm{1}_{[0,1]}(s) + s^{\beta-1}\mathbbm{1}_{ (1,\,\infty)}(s)\right)\le C \times \left(s^{\beta-1} \wedge s^{\beta} \right).\end{aligned}$$ In particular, when $\beta\in (0,1)$, there exists a positive constant $C$ such that for any $s\in [0,\infty)$, $$\begin{aligned} {A}(s)&\le C \times(1\wedge s^{\beta-1}).\end{aligned}$$* **Proposition 18**. *Suppose that the driving process $(G)$ of the OU process [\[OU dingyi\]](#OU dingyi){reference-type="eqref" reference="OU dingyi"} is taken from any one of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}. Then for any fixed $s\ge 0$, $$\begin{aligned} \lim_{t \to \infty} \frac{1}{t^{H}}\mathbb{E}\left( G_t \zeta_t \right) =0, \qquad \lim_{t \to \infty} \mathbb{E}\left( G_s \zeta_t \right)& =0. \label{assum52-1} \end{aligned}$$* *Proof.* We assume that $\theta=1$ without loss of generality.
Denote $$h(\cdot)= e^{-(t-\cdot)} \mathbf{1}_{[0,t]}(\cdot).$$ To prove the limits [\[assum52-1\]](#assum52-1){reference-type="eqref" reference="assum52-1"}, it follows from Itô's isometry [\[G extension defn\]](#G extension defn){reference-type="eqref" reference="G extension defn"} that we need only show that for any fixed $s\ge 0$, $$\begin{aligned} \label{jixian limit 00} \lim_{t\to \infty}\frac{1}{t^H}\langle h,\, f\rangle_{\mathfrak{H}}=0,\qquad \lim_{t\to \infty} \langle h,\, f_0\rangle_{\mathfrak{H}}=0,\end{aligned}$$ where the functions $f,\,f_0$ are given as in [\[fof jihao\]](#fof jihao){reference-type="eqref" reference="fof jihao"}. According to the classification of the Gaussian processes as above, we treat each case separately:\ (i) For the case of Example [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}, it follows from Corollary [Corollary 13](#coro teshuqk){reference-type="ref" reference="coro teshuqk"} and L'Hôpital's rule that $$\begin{aligned} \lim_{t\to \infty}\frac{1}{t^H}\langle h,\, f\rangle_{\mathfrak{H}}&=\lim_{t\to\infty}\frac{1}{e^{t} t^{ H }}{\int_{[0,t]^2} e^{u} \frac {\partial^2 R(u,v)}{\partial u\partial v} \mathrm{d}u\mathrm{d}v}\\ &=\lim_{t\to\infty}\frac{H\left\vert 2H-1\right\vert}{t^H} \int_{0}^t \left( e^{u-t} +1 \right)(u+t)^{2H-2}\mathrm{d}u=0,\\ \lim_{t\to \infty} \langle h,\, f_0\rangle_{\mathfrak{H}}&=\lim_{t\to\infty}\frac{1}{e^{t} }{\int_0^t e^{u} \mathrm{d}u \int_0^s \frac {\partial^2 R(u,v)}{\partial u\partial v} \mathrm{d}v}\\ &=\lim_{t\to\infty} {H\left\vert 2H-1\right\vert} t^{2H-2}\int_{0}^s \left(\frac{v}{t}+1\right)^{2H-2}\mathrm{d}v=0.
\end{aligned}$$ In the same vein, for the case of Example [Example 4](#exmp 000001-04){reference-type="ref" reference="exmp 000001-04"}, we have that $$\begin{aligned} \lim_{t\to \infty}\frac{1}{t^H}\langle h,\, f\rangle_{\mathfrak{H}}&=\lim_{t\to\infty}\frac{1}{e^{t} t^{ H }}{\int_{[0,t]^2} e^{u} \frac {\partial^2 R(u,v)}{\partial u\partial v} \mathrm{d}u\mathrm{d}v}\\ &=\lim_{t\to\infty} \frac{K(1-K)(2H')^2}{t^{H'K-2H'+1}} \int_{0}^t \left( e^{u-t} +1 \right)(u^{2H'}+t^{2H'})^{K-2}u^{2H'-1}\mathrm{d}u=0,\\ \lim_{t\to \infty} \langle h,\, f_0\rangle_{\mathfrak{H}}&=\lim_{t\to\infty}\frac{1}{e^{t} }{\int_0^t e^{u} \mathrm{d}u \int_0^s \frac {\partial^2 R(u,v)}{\partial u\partial v} \mathrm{d}v}\\ &=\lim_{t\to\infty}{K(1-K)(2H')^2}\int_{0}^s (v^{2H'}+t^{2H'})^{K-2}v^{2H'-1}\mathrm{d}v=0. \end{aligned}$$ (ii) For the case of Example [Example 5](#exmp 000001-06){reference-type="ref" reference="exmp 000001-06"}, since $H\in (0,\frac12)$, the identity [\[guanjiandengshi 000-new1-000\]](#guanjiandengshi 000-new1-000){reference-type="eqref" reference="guanjiandengshi 000-new1-000"} implies that as $t\to \infty$, $$\begin{aligned} \frac{1}{t^H}\langle h,\, f\rangle_{\mathfrak{H}}&= \frac{1}{e^{t} t^{H}}\left[ H(2H-1) \int_{[0,t]^2} e^{u} (u+v)^{2(H-1)}\mathrm{d}u\mathrm{d}v + H\int_{0} ^t e^{u}u^{2H-1}\mathrm{d}u\right]\to 0,\\ \langle h,\, f_0\rangle_{\mathfrak{H}}&=\frac{1}{e^{t} }\left[ H(2H-1) \int_{0}^t e^{u}\mathrm{d}u \int_0^s (u+v)^{2(H-1)}\mathrm{d}v + H\int_{0} ^s e^{u}u^{2H-1}\mathrm{d}u\right]\to 0. 
\end{aligned}$$ The two limits displayed above also imply that the limits [\[jixian limit 00\]](#jixian limit 00){reference-type="eqref" reference="jixian limit 00"} hold for Example [Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"}.\ (iii) For the case of Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"}, Corollary [Corollary 13](#coro teshuqk){reference-type="ref" reference="coro teshuqk"}, together with the upper bound [\[uppppper\]](#uppppper){reference-type="eqref" reference="uppppper"}, the modulus inequality of integration and Lemma [Lemma 17](#upper bound F){reference-type="ref" reference="upper bound F"}, implies that as $t\to \infty$, $$\begin{aligned} \frac{1}{t^H} \left\vert\langle h,\, f\rangle_{\mathfrak{H}}-\langle h,\, f\rangle_{\mathfrak{H}_1}\right\vert & \le \frac{ C}{t^H}\int_{[0,t]^2} e^{u-t}u^{H-1}v^{H-1} \mathrm{d}u\mathrm{d}v \to 0,\\ \left\vert\langle h,\, f_0\rangle_{\mathfrak{H}}-\langle h,\, f_0\rangle_{\mathfrak{H}_1}\right\vert &\le \int_{0}^t e^{u-t}u^{H-1} \mathrm{d}u\int_0^s v^{H-1}\mathrm{d}v \to 0.\end{aligned}$$ Recall that, in the reproducing kernel Hilbert space $\mathfrak{H}_1$ associated with the fBm $(B^H)$, the following limits are well known: $$\begin{aligned} \frac{1}{t^H} \langle h,\, f\rangle_{\mathfrak{H}_1} &= -\frac{H}{t^H} \int_{[0,t]^2} e^{u-t} \Big(u^{2H-1}-\left\vert u-v\right\vert^{2H-1}\mathrm{sgn}{(u-v)}\Big)\big(\delta_0(v)-\delta_t(v)\big)\mathrm{d}v\mathrm{d}u\\ &= \frac{H}{e^tt^H} \int_0^t e^{ u}\Big(u^{2H-1}+(t-u)^{2H-1}\Big)\mathrm{d}u\to 0,\\ \langle h,\, f_0\rangle_{\mathfrak{H}_1} &= - {H} \int_{[0,t]^2} e^{u-t} \Big(u^{2H-1}-\left\vert u-v\right\vert^{2H-1}\mathrm{sgn}{(u-v)}\Big)\big(\delta_0(v)-\delta_s(v)\big)\mathrm{d}v\mathrm{d}u\\ &=H \left(\int_0^s+\int_s^t \right) e^{u-t} \Big(u^{2H-1}-\left\vert u-s\right\vert^{2H-1}\mathrm{sgn}{(u-s)}\Big) \mathrm{d}u\to
0.\end{aligned}$$ By the triangle inequality, the limits [\[jixian limit 00\]](#jixian limit 00){reference-type="eqref" reference="jixian limit 00"} hold.\ (iv) For the case of Example [Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, since $H\in(0,\frac12)$, the identity [\[guanjiandengshi 000-new1\]](#guanjiandengshi 000-new1){reference-type="eqref" reference="guanjiandengshi 000-new1"} implies that as $t\to \infty$, $$\begin{aligned} \frac{1}{t^H} \left\vert\langle h,\, f\rangle_{\mathfrak{H}}-\langle h,\, f\rangle_{\mathfrak{H}_1}\right\vert &= \frac{ H}{t^H}\int_{0}^t e^{u-t}u^{2H-1} \mathrm{d}u \to 0,\\ \left\vert\langle h,\, f_0\rangle_{\mathfrak{H}}-\langle h,\, f_0\rangle_{\mathfrak{H}_1}\right\vert &=H \int_{0}^s e^{u-t}u^{2H-1} \mathrm{d}u \to 0.\end{aligned}$$ In the same vein as case (iii), we have that the limits [\[jixian limit 00\]](#jixian limit 00){reference-type="eqref" reference="jixian limit 00"} hold. ◻ **Proposition 19**. *Let the constant $\beta$ be given in [\[beta biaoshishizi\]](#beta biaoshishizi){reference-type="eqref" reference="beta biaoshishizi"}.
There exists a positive constant $\sigma_G^2$ such that, when the driving process $(G)$ of the OU process [\[OU dingyi\]](#OU dingyi){reference-type="eqref" reference="OU dingyi"} is taken from any one of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, we have $$\begin{aligned} \label{assum-chy-52-001} \lim_{t \to \infty} t^{2\beta}\mathbb{E}\left( \zeta_t ^2\right) &=\sigma_G^2,\end{aligned}$$where $$\label{changshu sg2} \sigma_G^2=\left\{ \begin{array}{ll} \frac{H\left\vert 2H-1\right\vert 2^{2H-2}}{\theta^2}, & \text{for Example~\ref{exmp 000001}},\\ \frac{ 2^K K(1-K)(H')^2}{\theta^2} , & \text{for Example~\ref{exmp 000001-04}},\\ \frac{H}{2\theta} , & \text{for Example~\ref{exmp 000001-06}},\\ \frac{H\Gamma(1-2H)}{\theta} , & \text{for Example~\ref{exmp 000001-06-buch}},\\ \theta^{-2H}H\Gamma(2H),& \text{for Examples~\ref{exmp0005}-\ref{exmp lizi001}}. \end{array} \right.$$* *Proof.* Without loss of generality, we assume that $\theta=1$ in the proof. According to the classification of the Gaussian processes as above, we treat each case separately:\ (i) For the case of Example [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}, it follows from Corollary [Corollary 13](#coro teshuqk){reference-type="ref" reference="coro teshuqk"}, L'Hôpital's rule and the symmetry of the mixed second-order partial derivative $\frac {\partial^2 R(u,v)}{\partial u\partial v}$ that $$\begin{aligned} \lim_{t\to\infty} t^{2-2H}\mathbb{E}\left( \zeta_t ^2\right)&=\lim_{t\to\infty}\frac{1}{e^{2t} t^{2H-2}}{\int_{[0,t]^2} e^{u+v} \frac {\partial^2 R(u,v)}{\partial u\partial v} \mathrm{d}u\mathrm{d}v}\\ &=\lim_{t\to\infty}H\left\vert 2H-1\right\vert \int_{0}^t e^{u-t} \left(1+\frac{u}{t}\right)^{2H-2} \mathrm{d}u=H\left\vert 2H-1\right\vert 2^{2H-2}.
\end{aligned}$$ In the same vein, for the case of Example [Example 4](#exmp 000001-04){reference-type="ref" reference="exmp 000001-04"}, we can derive the following expression: $$\begin{aligned} &\lim_{t\to\infty}{t^{2-2H}} \mathbb{E}\left( \zeta_t ^2\right)\\ &=\lim_{t\to\infty}\frac{1}{e^{2t} t^{2H-2}}{\int_{[0,t]^2} e^{u+v} \frac {\partial^2 R(u,v)}{\partial u\partial v} \mathrm{d}u\mathrm{d}v}\\ &=\lim_{t\to\infty}\frac{K(1-K)(2H')^2}{e^t t^{2H'(K-1)-1}} \int_{0}^t e^{u} (t^{2H'}+u^{2H'})^{K-2} u^{2H'-1}\mathrm{d}u\\ &=\lim_{t\to\infty}\frac{K(1-K)(2H')^2}{e^t t^{2H'(K-1)-1}} \left[t^{2H'(K-1)-1}2^{K-2}e^t +(K-2)t^{2H'-1}\int_{0}^t e^{u}(t^{2H'}+u^{2H'})^{K-3} u^{2H'-1} \mathrm{d}u\right] \\ & = 2^K K(1-K) (H')^2,\end{aligned}$$ where in the last line, we use the following estimate by Lemma [Lemma 17](#upper bound F){reference-type="ref" reference="upper bound F"}: $$\begin{aligned} \frac{1}{t^{2H'(K-2)}}\int_{0}^t e^{u-t}(t^{2H'}+u^{2H'})^{K-3} u^{2H'-1} \mathrm{d}u \le \frac{1}{t^{2H'K}}\int_{0}^t e^{u-t} u^{2H'K-1} \mathrm{d}u\le \frac{C }{t }. \end{aligned}$$ (ii) For the case of Example [Example 5](#exmp 000001-06){reference-type="ref" reference="exmp 000001-06"}, since $H\in (0,\frac12)$, the identity [\[guanjiandengshi 000-new1-000\]](#guanjiandengshi 000-new1-000){reference-type="eqref" reference="guanjiandengshi 000-new1-000"} implies that as $t\to \infty$, $$\begin{aligned} \quad&t^{1-2H}\mathbb{E}\left( \zeta_t ^2\right)\\ &= \frac{1}{e^{2t} t^{2H-1}}\left[ H(2H-1) \int_{[0,t]^2} e^{u+v} (u+v)^{2(H-1)}\mathrm{d}u\mathrm{d}v + H\int_{0} ^t e^{2u}u^{2H-1}\mathrm{d}u\right]\\ &\to \frac{H}{2}. 
\end{aligned}$$ The limit displayed above also implies that the identity [\[changshu sg2\]](#changshu sg2){reference-type="eqref" reference="changshu sg2"} holds for Example [Example 6](#exmp 000001-06-buch){reference-type="ref" reference="exmp 000001-06-buch"}.\ (iii) For the case of Examples [Example 7](#exmp0005){reference-type="ref" reference="exmp0005"}-[Example 8](#exmp7-1zili){reference-type="ref" reference="exmp7-1zili"}, Corollary [Corollary 13](#coro teshuqk){reference-type="ref" reference="coro teshuqk"}, together with the upper bound [\[uppppper\]](#uppppper){reference-type="eqref" reference="uppppper"} and the modulus inequality of integration, implies that as $t\to\infty$, $$\begin{aligned} \left\vert\mathbb{E}[ \zeta_t^2]-\mathbb{E}[ \eta_t^2] \right\vert&=\left\vert\int_{[0,t]^2} e^{u-t+v-t} \frac{\partial }{\partial u}\left( \frac{\partial R(u,v)}{\partial v} - \frac{\partial R^B(u,v)}{\partial v} \right) \mathrm{d}u \mathrm{d}v\right\vert\\ &\le C\left(\int_0^t e^{u-t} u^{H-1}\mathrm{d}u\right)^2\to 0,\end{aligned}$$ since $H\in (0,1)$.
Furthermore, using the well-known fact that $$\begin{aligned} \lim_{t\to\infty}\mathbb{E}[ \eta_t^2] =\theta^{-2H}H\Gamma(2H),\end{aligned}$$ we can conclude that $$\begin{aligned} \lim_{t\to\infty}\mathbb{E}[ \zeta_t^2] =\theta^{-2H}H\Gamma(2H).\end{aligned}$$ (iv) For the case of Example [Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}, since $H\in (0,\frac12)$, by applying the identity [\[guanjiandengshi 000-new1\]](#guanjiandengshi 000-new1){reference-type="eqref" reference="guanjiandengshi 000-new1"}, we can conclude that as $t\to\infty$, $$\begin{aligned} {\mathbb{E}[ \zeta_t^2]-\mathbb{E}[ \eta_t^2] }&= -H{\int_{0}^t e^{2(u-t) } u^{2H-1} \mathrm{d}u }\to 0,\end{aligned}$$ which implies that $$\begin{aligned} \lim_{t\to\infty}\mathbb{E}[ \zeta_t^2] =\theta^{-2H}H\Gamma(2H).\end{aligned}$$ ◻ Since the ergodic OU process $(\zeta)$ is a centred Gaussian process, we can apply Proposition [Proposition 19](#proop 3-5){reference-type="ref" reference="proop 3-5"} to obtain the following corollary: **Corollary 20**. *Let the constants $\beta,\,\sigma^2_G$ and the ergodic OU process $(\zeta)$ be given in [\[beta biaoshishizi\]](#beta biaoshishizi){reference-type="eqref" reference="beta biaoshishizi"}, [\[changshu sg2\]](#changshu sg2){reference-type="eqref" reference="changshu sg2"} and [\[OU dingyi\]](#OU dingyi){reference-type="eqref" reference="OU dingyi"}, respectively.
Then as $t\to \infty$, $$\label{jianjinzhengtia 111} t^{{\beta}}\zeta_t \xrightarrow{law} \mathcal{N}(0,\,\sigma_G^2).$$* **Proof of Theorem [Theorem 10](#asymptoticth){reference-type="ref" reference="asymptoticth"}:** We can conclude that the conditions $(\mathcal{A}_1)$-$(\mathcal{A}_2)$ and $(\mathcal{A}_4)$-$(\mathcal{A}_5)$ of [@Es-Sebaiy22021] are satisfied based on the limit [\[assum51-new\]](#assum51-new){reference-type="eqref" reference="assum51-new"}, Propositions [Proposition 14](#proop3-1){reference-type="ref" reference="proop3-1"}, [Proposition 18](#assum5){reference-type="ref" reference="assum5"} and [Proposition 16](#assum5-1){reference-type="ref" reference="assum5-1"}. Furthermore, we know from [\[beta biaoshishizi\]](#beta biaoshishizi){reference-type="eqref" reference="beta biaoshishizi"} that $\beta\ge 0$. Using the identity (2.12) in [@Es-Sebaiy22021] and a slight modification of the proof of Theorem 3.2 in [@Es-Sebaiy22021], we can demonstrate that the asymptotic joint distributions of Theorem [Theorem 10](#asymptoticth){reference-type="ref" reference="asymptoticth"} hold if we replace the condition $(\mathcal{A}_3)$ of [@Es-Sebaiy22021] with the equation [\[jianjinzhengtia 111\]](#jianjinzhengtia 111){reference-type="eqref" reference="jianjinzhengtia 111"}. Therefore, the claim is valid. $\Box$ **Proof of Theorem [Theorem 11](#asymptoticth OU){reference-type="ref" reference="asymptoticth OU"}:** Theorem [Theorem 11](#asymptoticth OU){reference-type="ref" reference="asymptoticth OU"} is essentially a special case of Theorem [Theorem 10](#asymptoticth){reference-type="ref" reference="asymptoticth"}. However, we would like to give it an independent proof. The conditions $(\mathcal{H}_1)$-$(\mathcal{H}_2)$ of [@El-Es-Sebaiy2016] are a direct consequence of Proposition [Proposition 14](#proop3-1){reference-type="ref" reference="proop3-1"}.
The condition $(\mathcal{H}_4)$ of [@El-Es-Sebaiy2016] is the second limit in the identity [\[assum52-1\]](#assum52-1){reference-type="eqref" reference="assum52-1"}. By a slight modification of the proof of Theorem 2.2 of [@El-Es-Sebaiy2016], we show that the asymptotic distributions of Theorem [Theorem 11](#asymptoticth OU){reference-type="ref" reference="asymptoticth OU"} are valid if the condition $(\mathcal{H}_3)$ of [@El-Es-Sebaiy2016] is replaced by the equation [\[jianjinzhengtia 111\]](#jianjinzhengtia 111){reference-type="eqref" reference="jianjinzhengtia 111"}. Hence the claim holds. $\Box$ # Discussion {#nine example} The tenth type of Gaussian process in the literature that belongs to the scope of this paper is as follows: **Example 21**. Let $\beta(\cdot,\cdot)$ denote the beta function. Suppose that $a>-1, \,0<{b}<1\wedge (1+a)$ and $2H=a+b+1$. The covariance function of the weighted-fractional Brownian motion $(B_t^{a,b})_{t \geq 0}$ is given by $$\begin{aligned} R(s,t)&=\frac{1}{2\beta(a+1,b+1)}\int_{0}^{s\wedge t}u^a \left[ (t-u)^b +(s-u)^b \right] \mathrm{d}u\\ &=\frac12\Big[t^{2H}+s^{2H}-\frac{1}{\beta(a+1,b+1)}\int_{s\wedge t}^{s\vee t}u^a (t \vee s -u)^b \mathrm{d}u\Big].\end{aligned}$$ Please refer to @Alsenafi2021 [@BGT2007]. It is evident that $$\begin{aligned} \frac{\partial^2 R(s,t)}{\partial s \partial t }=\frac{b}{\beta(a+1,b+1)} (t\wedge s)^{a} \left\vert t-s\right\vert^{b-1}\end{aligned}$$ and hence the assumption $a>-1, \,0<{b}<1\wedge (1+a)$ implies that $(B^{a,b})$ satisfies Hypothesis [HYPOTHESIS 1](#hypthe1-1){reference-type="ref" reference="hypthe1-1"}.
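The agreement of the two displayed forms of $R(s,t)$ rests on the beta integral $\int_0^t u^a (t-u)^b \,\mathrm{d}u = \beta(a+1,b+1)\, t^{a+b+1}$. As a purely illustrative sanity check (the parameter values $a=0$, $b=\tfrac12$ and the midpoint quadrature are our own choices, not part of the paper), one can confirm numerically that the two expressions coincide:

```python
import math

def beta_fn(x, y):
    """Beta function B(x, y) via the Gamma function."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def integral(f, lo, hi, n=200000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# Illustrative parameters with a > -1 and 0 < b < 1 ∧ (1 + a); here 2H = a + b + 1.
a, b = 0.0, 0.5
s, t = 0.9, 1.7
B = beta_fn(a + 1, b + 1)

# First displayed form of the covariance R(s, t) (with s < t, so s ∧ t = s).
form1 = integral(lambda u: u ** a * ((t - u) ** b + (s - u) ** b), 0.0, s) / (2 * B)
# Second displayed form, using 2H = a + b + 1.
form2 = 0.5 * (t ** (a + b + 1) + s ** (a + b + 1)
               - integral(lambda u: u ** a * (t - u) ** b, s, t) / B)
print(abs(form1 - form2))  # small (of the order of the quadrature error)
```

The two values agree up to quadrature error, which is exactly the content of the beta integral identity above.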
Thus, Proposition [Proposition 12](#prop 2-1){reference-type="ref" reference="prop 2-1"} implies that if $f,\, g\in \mathcal{V}_{[0,T]}$ then $$\begin{aligned} \langle f,\,g \rangle_{\mathfrak{H}} &= \int_{[0,T]^2}f(t)g(s) \frac{\partial^2 R(s,t)}{\partial s \partial t } \mathrm{d}t\mathrm{d}s.\end{aligned}$$ Although the asymptotic behavior of the least squares estimation for the non-ergodic OU process [\[OU-model\]](#OU-model){reference-type="eqref" reference="OU-model"} driven by $(B^{a,b})$ is obtained in @Alsenafi2021, we point out that in the inequality [\[gima2fang jie\]](#gima2fang jie){reference-type="eqref" reference="gima2fang jie"} concerning the structure function $\sigma^2(s,t)$, the constant $C$ cannot be chosen to be independent of $T$. Hence we do not know whether Theorem [Theorem 10](#asymptoticth){reference-type="ref" reference="asymptoticth"} is valid or not for the Vasicek model [\[Vasicekmodel\]](#Vasicekmodel){reference-type="eqref" reference="Vasicekmodel"} driven by $(B^{a,b})$. The following artificial example is a mixed Gaussian process, which is a linear combination of independent centred Gaussian processes. **Example 22**. Let $Z\sim N(0,1)$ be independent of the fBm $(B^H)$ or the Gaussian process $(G)$ from any one of Examples [Example 3](#exmp 000001){reference-type="ref" reference="exmp 000001"}-[Example 9](#exmp lizi001){reference-type="ref" reference="exmp lizi001"}. We construct a mixed Gaussian process as follows: $$\begin{aligned} \bar{G}_t:=B^H_t+t^H Z,\text{\quad or\quad } \bar{G}_t:=G_t+t^H Z, \qquad t\in [0,T],\end{aligned}$$ whose covariance function satisfies $$\begin{aligned} \bar{R}(t,s)-R^{B}(t,s) =(ts)^H,\text{\quad or\quad } \bar{R}(t,s)-R(t,s) =(ts)^H.\end{aligned}$$ It is evident that Proposition [Proposition 16](#assum5-1){reference-type="ref" reference="assum5-1"} fails for the process $(\bar{G})$. That is to say, the condition $(\mathcal{A}_5)$ of [@Es-Sebaiy22021] does not hold for $(\bar{G})$.
Thus, we cannot obtain the joint asymptotic distribution of the estimators. However, the asymptotic behavior for each estimator separately still holds, as all the other four conditions of [@Es-Sebaiy22021] hold. **Acknowledgements**: This work was partly supported by NSFC, PR China (No. 11961033, 12171410) and the General Project of Hunan Provincial Education Department of China (No. 22C0072). Alsenafi, A., Al-Foraih, M., Es-Sebaiy, K., 2021. Least squares estimation for non-ergodic weighted fractional Ornstein-Uhlenbeck process of general parameters. *AIMS Math.* 6(11): 12780-12794. Bardina, X., Bascompte, D., 2009. A decomposition and weak approximation of the sub-fractional Brownian motion. *Departament de Matematiques, UAB*. Bardina, X., Es-Sebaiy, K., 2011. An extension of bifractional Brownian motion. *Comm. Stoch. Anal.* 5(2): 333-340. Bojdecki, T., Gorostiza, L., Talarczyk, A., 2007. Some extensions of fractional Brownian motion and sub-fractional Brownian motion related to particle systems. *Electron. Commun. Probab.* 12: 161-172. Chen, Y., Li, Y., 2023. The properties of fractional Gaussian Process and their Applications. arXiv: 2309.10415. Durieu, O., Wang, Y., 2016. From infinite urn schemes to decompositions of self-similar Gaussian process. *Electron. J. Probab.* 21, Paper No. 43, 23 pp. El Machkouri, M., Es-Sebaiy, K., Ouknine, Y., 2016. Least squares estimator for nonergodic Ornstein-Uhlenbeck processes driven by Gaussian processes. *J. Korean Statist. Soc.* 45(3): 329-341. El-Nouty, C., Journé, J.-L., 2013. The sub-bifractional Brownian motion. *Studia Sci. Math. Hungar.* 50(1): 67-121. Es-Sebaiy, K., Es-Sebaiy, M., 2021. Estimating drift parameters in a non-ergodic Gaussian Vasicek-type model. *Stat. Methods App.* 30(2): 409-436. Folland, G. B., 1999. *Real analysis. Vol. 40 of Modern techniques and their applications.* John Wiley & Sons. Kuang, N., Xie, H., 2023.
Least squares type estimators for the drift parameters in the sub-bifractional Vasicek processes. *Infin. Dimens. Anal. Quantum Probab. Relat. Top.* 26(2): 2350004. Lei, P., Nualart, D., 2009. A decomposition of the bi-fractional Brownian motion and some applications. *Statist. Probab. Lett.* 79(5): 619-624. Ma, C., 2013. The Schoenberg-Lévy kernel and relationships among fractional Brownian motion, bifractional Brownian motion, and others. *Theory Probab. Appl.* 57(4): 619-632. Sghir, A., 2013. The generalized Sub-fractional Brownian motion. *Comm. Stoch. Anal.* 7(3): 373-382. Sghir, A., 2014. A self-similar Gaussian process. *Random Oper. Stoch. Equ.* 22: 85-92. Talarczyk, A., 2020. Bifractional Brownian motion for $H>1$ and $2HK \le 1$. *Statist. Probab. Lett.* 157: 108628. Wei, C., 2023. Least squares estimation for a class of uncertain Vasicek model and its application to interest rates. *Stat. Papers.* https://doi.org/10.1007/s00362-023-01494-1 Yu, Q., 2020. Statistical inference for Vasicek-type model driven by self-similar Gaussian processes. *Comm. Statist. Theory Methods* 49(2): 471-484. Zili, M., 2017. Generalized fractional Brownian motion. *Mod. Stoch. Theory Appl.* 4(1): 15-24.
--- abstract: | We prove strong estimates for averages of shifted convolution sums consisting of quadratic twists of $\mathrm{GL}_{2}$ $L$-functions. The key input involves the circle method together with standard tools such as Voronoı̆, quadratic reciprocity, amplification, and divisor switching. address: The Division of Physics, Mathematics and Astronomy, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA author: - Ikuya Kaneko title: On Multiple Shifted Convolution Sums --- [^1] # Introduction ## Brief Retrospection Given arithmetically interesting sequences of complex numbers $\{a(n) \}_{n \in \mathbb{N}}$ and $\{b(n) \}_{n \in \mathbb{N}}$, the *shifted convolution problem* or the *generalised additive divisor problem* asks for determining the behaviour of (or even just detecting nontrivial cancellations in) correlations of the shape $$\label{eq:ab} \sum_{T \leq n \leq 2T} a(n) b(n+h).$$ Such a sum pertains to various arithmetic problems depending on the sequences $a(n)$ and $b(n)$. Achieving subconvex bounds for [\[eq:ab\]](#eq:ab){reference-type="eqref" reference="eq:ab"} yields salient and sometimes unexpected applications. The archetype is when $a(n)$ and $b(n)$ come from the von Mangoldt function, Möbius function, or the divisor function, in which case [\[eq:ab\]](#eq:ab){reference-type="eqref" reference="eq:ab"} is related to the Hardy--Littlewood prime $k$-tuple conjecture [@HardyLittlewood1923-2], Chowla conjecture [@Chowla1965], gaps between multiplicative sequences [@Hooley1971; @Hooley1994], and moments of $L$-functions [@ConreyKeating2015], to name a few. Another example is when $a(n)$ and $b(n)$ come from $\mathrm{GL}_{2}$ Hecke eigenvalues, in which case [\[eq:ab\]](#eq:ab){reference-type="eqref" reference="eq:ab"} is related to the subconvexity problem and quantum unique ergodicity. 
For further details, see [@BlomerHarcos2008; @Blomer2004; @DeshouillersIwaniec1982; @DukeFriedlanderIwaniec1993; @Harcos2003; @Holowinsky2009; @Holowinsky2010; @KowalskiMichelVanderKam2002; @Leung2022; @Leung2022-2; @Maga2018; @Michel2004; @Michel2022; @Topacogullari2016; @Topacogullari2017; @Topacogullari2018]. ## Statement of the Main Result It is often beneficial for applications to consider [\[eq:ab\]](#eq:ab){reference-type="eqref" reference="eq:ab"} with an averaging over the shifts $h$ in a dyadic interval $[H, 2H]$. Fix a Hecke--Maaß cusp form $\varphi$ on the modular surface $\mathrm{SL}_{2}(\mathbb{Z}) \backslash \mathbb{H}$, where $\mathbb{H} \coloneqq \{z = x+iy \in \mathbb{C}: y > 0 \}$ is the upper half-plane upon which the modular group acts via Möbius transformations. Given a fundamental discriminant $d$, let $\chi_{d} = \left(\frac{d}{\cdot} \right)$ be the primitive quadratic character modulo $|d|$. Then $\varphi \otimes \chi_{d}$ boils down to a Hecke--Maaß newform of level $|d|^{2}$ and principal nebentypus whose $L$-function is expressed in terms of a Dirichlet series and an Euler product, both converging absolutely for $\mathrm{Re}(s) > 1$: $$L(s, \varphi \otimes \chi_{d}) \coloneqq \sum_{n = 1}^{\infty} \frac{\lambda_{\varphi}(n) \chi_{d}(n)}{n^{s}} = \prod_{p} \left(1-\frac{\lambda_{\varphi}(p) \chi_{d}(p)}{p^{s}}+\frac{\chi_{d}(p)^{2}}{p^{2s}} \right)^{-1}.$$ For $1 \leq H \leq T$, we define the *multiple shifted convolution problem*[^2] by $$\label{eq:def-M} \mathcal{M}_{\varphi}(T, H) \coloneqq \sideset{}{^{\ast}} \sum_{H \leq h \leq 2H} \ \sideset{}{^{\ast}} \sum_{T \leq n \leq 2T} L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right) L \left(\frac{1}{2}, \varphi \otimes \chi_{8(n+h)} \right),$$ where the asterisks mean that each sum runs through positive squarefree integers $n$ and $n+h$ such that $(n, 2) = 1$ and $(n+h, 2) = 1$, respectively.
In analogy with the shifted convolution problem for $\mathrm{GL}_{2}$, one should expect substantial cancellations in $\mathcal{M}_{\varphi}(T, H)$. In this paper, we study an unconditional quantitative manifestation of this conjecture in certain ranges of $H$. **Theorem 1**. *Let $\varphi$ be a Hecke--Maaß cusp form on $\mathrm{SL}_{2}(\mathbb{Z}) \backslash \mathbb{H}$. Then we have for any $\varepsilon> 0$ that $$\mathcal{M}_{\varphi}(T, H) \ll_{\varphi, \varepsilon} T^{\frac{5}{4}+\varepsilon}, \qquad T^{\frac{1}{4}} \leq H \leq \sqrt{T}.$$* In down-to-earth terms, Theorem [Theorem 1](#main){reference-type="ref" reference="main"} asserts that the total saving that we attain is roughly of size $H T^{-\frac{1}{4}} \geq 1$, since the trivial bound is $O_{\varphi, \varepsilon}(HT^{1+\varepsilon})$ via the second moment bound for quadratic twists and Cauchy--Schwarz (trivially bounding the $h$-sum). Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, however, falls well shy of the truth since one would expect the best possible bound to be $O_{\varphi, \varepsilon}(T^{1+\varepsilon})$. *Remark 1*. Our method also works when $\varphi$ is either holomorphic or Eisenstein, but we restrict here to the Maaß case for brevity; it is the most formidable of the three cases in the sense that the Ramanujan--Petersson conjecture for Maaß forms remains unproven. *Remark 2*. For brevity, we restrict to positive fundamental discriminants of the form $8n$ and $8(n+h)$, but we may deal similarly with all discriminants. This assumption is also imposed in the work of Soundararajan--Young [@SoundararajanYoung2010] and Li [@Li2022] and enables subsequent discussions. The proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} relies on the Duke--Friedlander--Iwaniec *circle method* along with standard manipulations including Voronoı̆, Poisson, orthogonality, and quadratic reciprocity.
The crucial ingredients include *divisor switching*, which guarantees a conductor drop in other summations. Nonetheless, this manoeuvre comes at the cost of the complementary divisor being larger than the original divisor. To eschew this drawback, we utilise an *amplification*. At first glance, the lengthening here may appear counterproductive, but it in fact facilitates a conductor drop in Poisson summation. This should be thought of as an analogue of the trick of Li [@Li2022]. It behoves us to mention that the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} does not require Cauchy--Schwarz because the sum over $n$ in [\[eq:def-M\]](#eq:def-M){reference-type="eqref" reference="eq:def-M"} becomes symmetrical after using the circle method. We comment that the multiple shifted convolution problem that we address pertains to the quantum unique ergodicity conjecture for half-integral weight Eisenstein series. In fact, their $n$-th Fourier coefficient $c_{t}(n)$ may be written in the form $$c_{t}(n) = \frac{(\star)}{\zeta(1+it)} \cdot L \left(\frac{1}{2}+it, \chi_{n} \right),$$ where $(\star)$ hides some fairly tame fudge factors. The contribution of an incomplete Eisenstein series boils down to the second moment problem for $L(\frac{1}{2}+it, \chi_{n})$, while the contribution of an incomplete Poincaré series boils down to the shifted convolution problem of the shape $$\sideset{}{^{\ast}} \sum_{n \sim t} L \left(\frac{1}{2}+it, \chi_{n} \right) L \left(\frac{1}{2}+it, \chi_{n+h} \right)$$ for any fixed $h \ne 0$. Choosing $\varphi = |\cdot|_{\mathbb{A}}^{it}$ for $t \asymp T$ in [\[eq:def-M\]](#eq:def-M){reference-type="eqref" reference="eq:def-M"} recovers the above expression (but with an averaging over the shifts $h$).
Note that one would expect $$\sideset{}{^{\ast}} \sum_{n \sim t} \left|L \left(\frac{1}{2}+it, \chi_{n} \right) L \left(\frac{1}{2}+it, \chi_{n+h} \right) \right| \asymp t \sqrt{\log t}.$$ This type of bound appears in the work of Holowinsky--Soundararajan [@Holowinsky2009; @HolowinskySoundararajan2010; @Soundararajan2010-2], which adopts Shiu's bound; see [@ElliottMorenoShahidi1984; @Nair1992; @NairTenenbaum1998; @Shiu1980]. In the half-integral weight case, Shiu's bound does not work as the coefficients are not multiplicative, but a tight upper bound follows from the Maaß--Selberg relation instead. Petridis--Raulf--Risager [@PetridisRaulfRisager2014] established quantum unique ergodicity for half-integral weight Eisenstein series under subconvex bounds for multiple Dirichlet series. For a general theory of multiple Dirichlet series and applications thereof, see for instance [@Blomer2011; @BlomerGoldmakherLouvel2014; @Bump; @BumpFriedbergGoldfeld2012; @BumpFriedbergGoldfeldHoffstein2006; @BumpFriedbergHoffstein1996; @Cech2022-2; @Cech2022; @Cech2023; @ChintaGunnells2007; @ChintaGunnells2010; @Dahl2015; @Dahl2018; @DiaconuGoldfeldHoffstein2003; @FriedbergHoffsteinLieman2003; @GaoZhao2023; @GoldfeldHoffstein1985; @PetridisRaulfRisager2014; @Sawin2023; @Wachter2021]. ## Discussions on the Proof This section unveils a heuristic argument for Theorem [Theorem 1](#main){reference-type="ref" reference="main"} in a back-of-the-envelope fashion, giving a high-level sketch geared to experts. It is structured such that any reader can understand the flow of the discussion. There is a caveat that we here ignore various technicalities such as complicated smooth weights and a number of coprimality conditions and common divisors. We pretend that everything is coprime to everything, which is morally not too far from reality. 
Furthermore, we have freedom to use quadratic reciprocity, which allows us to flip the numerator and denominator in the Jacobi--Kronecker symbol up to a correction factor that we shall elide. Given a Hecke--Maaß cusp form $\varphi$ and $T^{\frac{1}{4}} \leq H \leq \sqrt{T}$, we wish to estimate nontrivially a multiple shifted convolution problem roughly of the shape[^3] $$\mathcal{M}_{\varphi}(T, H) \approx \sum_{h \sim H} \sum_{n \sim T} L \left(\frac{1}{2}, \varphi \otimes \chi_{n} \right) L \left(\frac{1}{2}, \varphi \otimes \chi_{n+h} \right),$$ where we drop the superscripts $\ast$ in the definition [\[eq:def-M\]](#eq:def-M){reference-type="eqref" reference="eq:def-M"} for simplicity. While such individual shifted convolution sums are out of reach of current technology, we can leverage an averaging over $h$ for a gain. We now insert the Kronecker symbol $\delta(m = n)$ to separate the oscillations trapped in $\mathcal{M}_{\varphi}(T, H)$, so that the Duke--Friedlander--Iwaniec circle method implies $$\mathcal{M}_{\varphi}(T, H) \approx \frac{1}{T} \sum_{c \sim \sqrt{T}} \ \sideset{}{^{\ast}} \sum_{a {\@displayfalse \pmod{c}}} \sum_{h \sim H} e \left(-\frac{ah}{c} \right) \left|\sum_{m \sim T} L \left(\frac{1}{2}, \varphi \otimes \chi_{m} \right) e \left(\frac{am}{c} \right) \right|^{2}.$$ By Poisson summation, the sum over $h$ transforms into $$\sum_{h \sim H} e \left(-\frac{ah}{c} \right) \approx H \sum_{h \sim \frac{\sqrt{T}}{H}} \delta(h \equiv a {\@displayfalse \pmod{c}}),$$ while the sum over $m$ transforms into (via the approximate functional equation) $$\sum_{m \sim T} L \left(\frac{1}{2}, \varphi \otimes \chi_{m} \right) e \left(\frac{am}{c} \right) \approx \sum_{\ell \sim T} \sum_{m \sim \sqrt{T}} \lambda_{\varphi}(\ell) \left(\frac{cm}{\ell} \right) \delta(m \equiv a \ell {\@displayfalse \pmod{c}}).$$ Therefore, we obtain something roughly of the shape $$\mathcal{M}_{\varphi}(T, H) \approx \frac{H}{T} \sum_{c \sim \sqrt{T}} \sum_{h \sim
\frac{\sqrt{T}}{H}} \left|\sum_{\ell \sim T} \sum_{m \sim \sqrt{T}} \lambda_{\varphi}(\ell) \left(\frac{cm}{\ell} \right) \delta(m \equiv h \ell {\@displayfalse \pmod{c}}) \right|^{2}.$$ The square-root cancellation heuristic implies that the best possible bound for the right-hand side is $O_{\varphi, \varepsilon}(T^{1+\varepsilon})$. For the ensuing analysis, it is now convenient to introduce an amplification parameter $1 \leq L \leq \sqrt{T}$ and elongate the sum over $c$ by $L$. Opening the square, the problem boils down to determining bounds for $$\mathcal{M}_{\varphi}(T, H) \ll \frac{1}{\sqrt{T}} \sum_{c \sim L \sqrt{T}} \sum_{\ell_{1}, \ell_{2} \sim T} \sum_{m, n \sim \sqrt{T}} \lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2}) \left(\frac{cm}{\ell_{1}} \right) \left(\frac{cn}{\ell_{2}} \right) \delta(\ell_{1} n \equiv \ell_{2} m {\@displayfalse \pmod{c}}).$$ Divisor switching then comes into play, and we write $$\ell_{1} n = \ell_{2} m+cq, \qquad c \sim L \sqrt{T}, \qquad q \sim \frac{T}{L}.$$ It replaces a congruence condition modulo $c$ with a congruence condition modulo $q$, achieving a huge conductor drop simultaneously in the other variables. Without an amplification, the complementary divisor $q$ would be much larger than the initial divisor $c$. 
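As a purely illustrative aside, the exponent bookkeeping in this sketch can be tracked with exact rational arithmetic, writing $L = T^{\lambda}$; the snippet below (our own notation, not part of the argument) records the size of the complementary divisor together with the endgame exponent $3 - \tfrac{7}{2}\lambda$ that gets optimised at the end of the sketch.

```python
from fractions import Fraction as F

lam = F(1, 2)  # amplification parameter L = T^lam; lam = 1/2 is the eventual choice

# Divisor switching: l1 * n = l2 * m + c * q with l1 ~ T, m, n ~ T^(1/2),
# and c ~ L * T^(1/2), so the complementary divisor has T-exponent
# 3/2 - (lam + 1/2) = 1 - lam, i.e. q ~ T / L.
q_exp = F(3, 2) - (lam + F(1, 2))
print(q_exp)       # 1/2, matching q ~ T/L at L = sqrt(T)

# Endgame of the sketch: the trivial estimate L^(-7/2) * T^(3+eps) has
# T-exponent 3 - (7/2) * lam, which equals 5/4 at lam = 1/2.
final_exp = 3 - F(7, 2) * lam
print(final_exp)   # 5/4
```

Exact rationals avoid any floating-point ambiguity when checking such exponent counts.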
Hence, there holds $$\mathcal{M}_{\varphi}(T, H) \ll \frac{1}{\sqrt{T}} \sum_{q \sim \frac{T}{L}} \sum_{\ell_{1}, \ell_{2} \sim T} \sum_{m, n \sim \sqrt{T}} \lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2}) \left(\frac{q}{\ell_{1} \ell_{2}} \right) \delta(\ell_{1} n \equiv \ell_{2} m {\@displayfalse \pmod{q}}).$$ By Poisson summation, the sum over $m$ transforms into $$\sum_{m \sim \sqrt{T}} \delta(\ell_{1} n \equiv \ell_{2} m {\@displayfalse \pmod{q}}) \approx \frac{L}{\sqrt{T}} \sum_{m \sim \frac{\sqrt{T}}{L}} e \left(\frac{\ell_{1} \overline{\ell_{2}} mn}{q} \right),$$ while the sum over $n$ transforms into $$\sum_{n \sim \sqrt{T}} e \left(\frac{\ell_{1} \overline{\ell_{2}} mn}{q} \right) \approx \sqrt{T} \sum_{n \sim \frac{\sqrt{T}}{L}} \delta(\ell_{1} m \equiv \ell_{2} n {\@displayfalse \pmod{q}}).$$ By orthogonality, one expands $$\delta(\ell_{1} m \equiv \ell_{2} n {\@displayfalse \pmod{q}}) \approx \frac{1}{q} \ \sideset{}{^{\star}} \sum_{b {\@displayfalse \pmod{q}}} e \left(\frac{b(\ell_{1} m-\ell_{2} n)}{q} \right),$$ where $\star$ denotes summation restricted to reduced residue classes. To handle the sums over $\ell_{1}$ and $\ell_{2}$, note that [@JacquetLanglands1970 Proposition 3.8 (iii)] or [@AtkinLi1978 Theorem 3.1 (ii)] implies that there exists a Hecke--Maaß newform $\varphi \otimes (\frac{q}{\cdot})$ of level $q^{2}$ and trivial nebentypus such that $\lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell) = \lambda_{\varphi}(\ell) (\frac{q}{\ell})$. 
Hence, by $\mathrm{GL}_{2}$ Voronoı̆ summation, the sum over $\ell_{1}$ transforms into $$\sum_{\ell_{1} \sim T} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{1}) e \left(\frac{b \ell_{1} m}{q} \right) \approx L \sum_{\ell_{1} \sim \frac{T}{L^{2}}} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{1}) e \left(\frac{\overline{b} \ell_{1} \overline{m}}{q} \right),$$ while the sum over $\ell_{2}$ transforms into $$\sum_{\ell_{2} \sim T} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{2}) e \left(-\frac{b \ell_{2} n}{q} \right) \approx L \sum_{\ell_{2} \sim \frac{T}{L^{2}}} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{2}) e \left(-\frac{\overline{b} \ell_{2} \overline{n}}{q} \right).$$ Summing over $b {\@displayfalse \pmod{q}}$ via orthogonality yields $$\mathcal{M}_{\varphi}(T, H) \ll \frac{L^{3}}{\sqrt{T}} \sum_{q \sim \frac{T}{L}} \sum_{\ell_{1}, \ell_{2} \sim \frac{T}{L^{2}}} \sum_{m, n \sim \frac{\sqrt{T}}{L}} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{1}) \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{2}) \delta(\ell_{1} n \equiv \ell_{2} m {\@displayfalse \pmod{q}}).$$ As an endgame, we employ the Rankin--Selberg bound for the Hecke eigenvalues and estimate everything trivially, deducing $$\mathcal{M}_{\varphi}(T, H) \ll_{\varphi, \varepsilon} L^{-\frac{7}{2}} T^{3+\varepsilon} = T^{\frac{5}{4}+\varepsilon},$$ where we optimise $L = \sqrt{T}$. This finishes the sketch of the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. ## A Road Map and Notation Sections [2](#arithmetic-machinery){reference-type="ref" reference="arithmetic-machinery"} and [3](#automorphic-machinery){reference-type="ref" reference="automorphic-machinery"} assemble requisite tools for the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.
In Section [4](#proof){reference-type="ref" reference="proof"}, we prove Theorem [Theorem 1](#main){reference-type="ref" reference="main"} along the same lines as in Section [1.3](#discussions-on-the-proof){reference-type="ref" reference="discussions-on-the-proof"}. Throughout the paper, we make constant use of the notation $e(x) = e^{2\pi ix}$. We use $\varepsilon > 0$ to denote an arbitrarily small positive quantity that is possibly different in each instance. The Vinogradov symbol $f \ll_{\nu} g$ or the big $O$ notation $f = O_{\nu}(g)$ indicates that there exists an effectively computable constant $c_{\nu} > 0$, depending at most on $\nu$, such that $|f(z)| \leq c_{\nu} |g(z)|$ for all $z$ in a specified range. If no parameter $\nu$ is present, then the constant is absolute. The Kronecker symbol $\delta(\mathrm{S})$ detects $1$ or $0$ according as the statement $\mathrm{S}$ is true or not. ## Acknowledgements {#acknowledgements .unnumbered} The author is indebted to Wing Hong Leung for helpful comments. # Arithmetic Toolbox {#arithmetic-machinery} This section compiles the arithmetic machinery that we shall need later. In particular, we formulate a version of the circle method (due to Duke--Friedlander--Iwaniec) and the Poisson summation formula. Some fundamental properties of quadratic characters are also presented. ## $\delta$-Symbols There are two oscillations contributing to the shifted convolution problem that we address. The idea is to separate these oscillations via the circle method or the delta method. One seeks a Fourier expansion that matches the Kronecker symbol $\delta(n = 0)$. **Lemma 2** (Leung [@Leung2021; @Leung2022]). *Let $n \in \mathbb{Z}$ be such that $|n| \ll N$, $q \in \mathbb{N}$, and let $C > N^{\varepsilon}$. Let $U \in C_{c}^{\infty}(\mathbb{R})$ and $W \in C_{c}^{\infty}([-2, -1] \cup [1, 2])$ be nonnegative even functions such that $U(x) = 1$ for $x \in [-2, 2]$.
Then we have that $$\delta(n = 0) = \frac{1}{\mathcal{C}} \sum_{c = 1}^{\infty} \frac{1}{cq} \sum_{a {\@displayfalse \pmod{cq}}} e \left(\frac{an}{cq} \right) V_{0} \left(\frac{c}{C}, \frac{n}{cCq} \right),$$ where $$\mathcal{C} \coloneqq \sum_{c = 1}^{\infty} W \left(\frac{c}{C} \right) \sim C,$$ and $$V_{0}(x, y) \coloneqq W(x) U(x) U(y)-W(y) U(x) U(y)$$ is a smooth function satisfying $V_{0}(x, y) \ll \delta(|x|, |y| \ll 1)$.* By [@HeathBrown1996 Theorem 1], Lemma [Lemma 2](#DeltaCor){reference-type="ref" reference="DeltaCor"} is equivalent to the Duke--Friedlander--Iwaniec [@DukeFriedlanderIwaniec1994] circle method with a simpler weight function $V_{0}$ that constrains $|n| \ll cCq$. This particular feature is beneficial in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. See [@KanekoLeung2023; @Leung2021; @Leung2022] for further details. ## Poisson Summation {#Poisson-summation} For $n \in \mathbb{N}$ and an integrable function $w \colon \mathbb{R}^{n} \to \mathbb{C}$, denote its Fourier transform by $$\widehat{w}(y) \coloneqq \int_{\mathbb{R}^{n}} w(x) e(-\langle x, y \rangle) dx,$$ where $\langle \cdot, \cdot \rangle$ stands for the standard inner product on $\mathbb{R}^{n}$. Moreover, if $c \in \mathbb{N}$ and $K \colon \mathbb{Z} \to \mathbb{C}$ is a periodic function of period $c$, then its Fourier transform $\widehat{K}$ is again a periodic function of period $c$: $$\widehat{K}(n) \coloneqq \sum_{a {\@displayfalse \pmod{c}}} K(a) e \left(-\frac{an}{c} \right).$$ Note that there is a minor inconsistency in sign choices, namely $\widehat{\widehat{K}}(n) = K(-n)$ for all $n \in \mathbb{Z}$. We invoke a form of the Poisson summation formula with a $c$-periodic function involved. **Lemma 3** (Fouvry--Kowalski--Michel [@FouvryKowalskiMichel2015 Lemma 2.1]).
*For any $c \in \mathbb{N}$, any $c$-periodic function $K$, and any even smooth function $V$ compactly supported on $\mathbb{R}$, we have that $$\sum_{n = 1}^{\infty} K(n) V(n) = \frac{1}{c} \sum_{n \in \mathbb{Z}} \widehat{K}(n) \widehat{V} \left(\frac{n}{c} \right).$$* ## Quadratic Characters We adhere to the notation of [@Blomer2011; @DiaconuGoldfeldHoffstein2003]. Let $d$ and $n$ be odd positive integers that we factorise uniquely as $d = d_{0} d_{1}^{2}$ with $d_{0}$ squarefree and $n = n_{0} n_{1}^{2}$ with $n_{0}$ squarefree. Define the Jacobi--Kronecker symbol by $$\left(\frac{d}{n} \right) \coloneqq \prod_{p^{v} \parallel n} \left(\frac{d}{p} \right)^{v},$$ where for an odd prime $p$, we denote by $(\frac{d}{p})$ the standard Legendre symbol. Then the symbol $(\frac{d}{n})$ is extended to all odd $n \in \mathbb{Z}$ (cf. [@Shimura1973 p.442] and [@Koblitz1984 p.147, 187--188]). We write $$\chi_{d}(n) \coloneqq \left(\frac{d}{n} \right) \eqqcolon \widetilde{\chi}_{n}(d).$$ The character $\chi_{d}$ is the Jacobi--Kronecker symbol of conductor $d_{0}$ if $d \equiv 1 {\@displayfalse \pmod{4}}$ and $4d_{0}$ if $d \equiv 3 {\@displayfalse \pmod{4}}$. By definition, we know $$\chi_{d}(2) = \begin{cases} 1 & \text{if $d \equiv 1 {\@displayfalse \pmod{8}}$},\\ -1 & \text{if $d \equiv 5 {\@displayfalse \pmod{8}}$},\\ 0 & \text{if $d \equiv 3 {\@displayfalse \pmod{4}}$}, \end{cases}$$ and $\chi_{d}(-1) = 1$, namely $\chi_{d}$ is even. Quadratic reciprocity [@IwaniecKowalski2004 Theorem 3.5] states that for relatively prime odd positive integers $d$ and $n$, $$\label{eq:quadratic-reciprocity} \bigg(\frac{d}{n} \bigg) \bigg(\frac{n}{d} \bigg) = (-1)^{\frac{(d-1)(n-1)}{4}}.$$ This implies in particular that $$\widetilde{\chi}_{n} = \begin{cases} \chi_{n} & \text{if $n \equiv 1 {\@displayfalse \pmod{4}}$},\\ \chi_{-n} & \text{if $n \equiv 3 {\@displayfalse \pmod{4}}$}. 
\end{cases}$$ ## Gauß Sums For a Dirichlet character $\chi {\@displayfalse \pmod{c}}$, orthogonality asserts $$\label{orthogonality-of-Dirichlet-characters} \sum_{a {\@displayfalse \pmod{c}}} \chi(a) = \begin{cases} \varphi(c) & \text{if $\chi = \chi_{0}$},\\ 0 & \text{otherwise}, \end{cases} \qquad \sum_{\chi {\@displayfalse \pmod{c}}} \chi(a) = \begin{cases} \varphi(c) & \text{if $a \equiv 1 {\@displayfalse \pmod{c}}$},\\ 0 & \text{otherwise}. \end{cases}$$ Given $h \in \mathbb{Z}$, we define the Gauß sum associated to $\chi$ by $$\label{Gauss-sum} \tau(\chi, h) \coloneqq \sum_{b {\@displayfalse \pmod{c}}} \chi(b) e \left(\frac{bh}{c} \right).$$ We write $\tau(\chi) \coloneqq \tau(\chi, 1)$. Multiplying [\[Gauss-sum\]](#Gauss-sum){reference-type="eqref" reference="Gauss-sum"} by $\overline{\chi}(a)$ and summing over $\chi$, we derive from [\[orthogonality-of-Dirichlet-characters\]](#orthogonality-of-Dirichlet-characters){reference-type="eqref" reference="orthogonality-of-Dirichlet-characters"} $$e \left(\frac{ah}{c} \right) = \frac{1}{\varphi(c)} \sum_{\chi {\@displayfalse \pmod{c}}} \overline{\chi}(a) \tau(\chi, h), \qquad (a, c) = 1.$$ This serves as a Fourier expansion of additive characters in terms of the multiplicative ones. When $\chi$ is quadratic and $d$ is a positive odd squarefree integer, the Gauß sum simplifies to $$\tau \left(\left(\frac{\cdot}{d} \right) \right) = \varepsilon_{d} \sqrt{d},$$ where $$\varepsilon_{d} = \begin{cases} 1 & \text{if $d \equiv 1 {\@displayfalse \pmod{4}}$},\\ i & \text{if $d \equiv 3 {\@displayfalse \pmod{4}}$}. \end{cases}$$ It is straightforward to verify that the right-hand side of [\[eq:quadratic-reciprocity\]](#eq:quadratic-reciprocity){reference-type="eqref" reference="eq:quadratic-reciprocity"} is equal to $\varepsilon_{d} \varepsilon_{n} \varepsilon_{dn}^{-1}$. 
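Both [\[eq:quadratic-reciprocity\]](#eq:quadratic-reciprocity){reference-type="eqref" reference="eq:quadratic-reciprocity"} and the evaluation $\tau((\frac{\cdot}{d})) = \varepsilon_{d} \sqrt{d}$ can be confirmed numerically for small odd moduli. The following sketch (purely illustrative; the function names are ours) implements the standard Jacobi symbol algorithm and checks both identities:

```python
import cmath
import math

def jacobi(d, n):
    """Jacobi symbol (d/n) for odd positive n, via the standard algorithm."""
    assert n > 0 and n % 2 == 1
    d %= n
    result = 1
    while d != 0:
        while d % 2 == 0:  # pull out factors of 2 using (2/n) = (-1)^((n^2-1)/8)
            d //= 2
            if n % 8 in (3, 5):
                result = -result
        d, n = n, d        # quadratic reciprocity flip
        if d % 4 == 3 and n % 4 == 3:
            result = -result
        d %= n
    return result if n == 1 else 0

def e(x):
    """The additive character e(x) = exp(2*pi*i*x)."""
    return cmath.exp(2j * cmath.pi * x)

def tau(d):
    """Gauss sum of the quadratic character (./d) for odd squarefree d."""
    return sum(jacobi(b, d) * e(b / d) for b in range(d))

# Quadratic reciprocity (d/n)(n/d) = (-1)^((d-1)(n-1)/4) for coprime odd d, n.
for d in (3, 5, 7, 9, 11, 15):
    for n in (3, 5, 7, 11, 13, 25):
        if math.gcd(d, n) == 1:
            assert jacobi(d, n) * jacobi(n, d) == (-1) ** (((d - 1) * (n - 1)) // 4)

# tau((./d)) = eps_d * sqrt(d), with eps_d = 1 or i according as d = 1 or 3 mod 4.
for d in (3, 5, 7, 11, 13, 15):
    eps = 1 if d % 4 == 1 else 1j
    assert abs(tau(d) - eps * math.sqrt(d)) < 1e-9

print("all identities verified")
```

For instance, $\tau((\frac{\cdot}{3})) = e(\frac13) - e(\frac23) = i\sqrt{3}$, consistent with $\varepsilon_{3} = i$.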
## The Gamma Function {#the-Gamma-function} For fixed $\sigma \in \mathbb{R}$, real $|\tau| \geq 3$, and any $M > 0$, we make use of Stirling's formula $$\label{eq:Stirling} \Gamma(\sigma+i\tau) = e^{-\frac{\pi|\tau|}{2}} |\tau|^{\sigma-\frac{1}{2}} \exp \left(i\tau \log \frac{|\tau|}{e} \right) g_{\sigma, M}(\tau)+O_{\sigma, M}(|\tau|^{-M}),$$ where $$g_{\sigma, M}(\tau) = \sqrt{2\pi} \exp \left(\frac{\pi}{4}(2\sigma-1)i \operatorname{sgn}(\tau) \right)+O_{\sigma, M}(|\tau|^{-1}),$$ and $$|\tau|^{j} g_{\sigma, M}^{(j)}(\tau) \ll_{j, \sigma, M} 1$$ for all fixed $j \in \mathbb{N}_{0}$. # Automorphic Toolbox {#automorphic-machinery} This section reviews the automorphic machinery to be considered in the rest of the paper. In particular, we define automorphic $L$-functions and their quadratic twists, followed by the approximate functional equation. The Voronoı̆ summation formula for twists is also shown. ## Automorphic Forms Let $\{\varphi \}$ be an orthonormal basis of Hecke--Maaß cusp forms on the modular surface $\mathrm{SL}_{2}(\mathbb{Z}) \backslash \mathbb{H}$. We can assume without loss of generality that all $\varphi$ are real-valued. Denote by $t_{\varphi} > 1$ the spectral parameter, and by $\lambda_{\varphi}(n)$ the $n$-th Fourier coefficient. Given $t \in \mathbb{R}$, let $E(z, \frac{1}{2}+it)$ be the unitary Eisenstein series whose $n$-th Fourier coefficient is $\lambda(n, t) \coloneqq \sum_{ab = |n|} (\frac{a}{b})^{it}$. Let $\vartheta$ be an admissible exponent towards the Ramanujan--Petersson conjecture. At the current state of knowledge, $\vartheta \leq \frac{7}{64}$ is known; see Kim--Sarnak [@Kim2003]. Nonetheless, the Ramanujan--Petersson conjecture holds *on average* in the following form. **Lemma 4** (Rankin--Selberg bound [@Iwaniec1992 Lemma 1]). *Keep the notation as above. 
Then we have for any $\varepsilon> 0$ that $$\sum_{n \leq N} |\lambda_{\varphi}(n)|^{2} \ll_{\varepsilon} t_{\varphi}^{\varepsilon} N.$$* The Fourier coefficients $\lambda_{\varphi}(n)$ also obey the Hecke multiplicativity relation $$\label{eq:Hecke} \lambda_{\varphi}(mn) = \sum_{d \mid (m, n)} \mu(d) \lambda_{\varphi} \left(\frac{m}{d} \right) \lambda_{\varphi} \left(\frac{n}{d} \right), \qquad m, n \in \mathbb{N}.$$ ## $L$-Functions {#GL(2)-l-functions} Let $\varphi$ be a Hecke--Maaß cusp form on $\mathrm{SL}_{2}(\mathbb{Z}) \backslash \mathbb{H}$ of Laplacian eigenvalue $\frac{1}{4}+t_{\varphi}^{2} \geq 0$. Let $\lambda_{\varphi}(n)$ be its $n$-th Fourier coefficient. Then the $L$-function associated to $\varphi$ is given by $$L(s, \varphi) \coloneqq \sum_{n = 1}^{\infty} \frac{\lambda_{\varphi}(n)}{n^{s}} = \prod_{p} \left(1-\frac{\lambda_{\varphi}(p)}{p^{s}}+\frac{1}{p^{2s}} \right)^{-1},$$ which converges absolutely for $\mathrm{Re}(s) > 1$, extends to the whole complex plane $\mathbb{C}$, and satisfies the functional equation $$\Lambda(s, \varphi) \coloneqq \pi^{-s} \Gamma \left(\frac{s+\kappa+it_{\varphi}}{2} \right) \Gamma \left(\frac{s+\kappa-it_{\varphi}}{2} \right) L(s, \varphi) = \varepsilon(\varphi) \Lambda(1-s, \varphi),$$ where $\varepsilon(\varphi)$ stands for the root number of modulus $1$, and $$\kappa = \begin{cases} 0 & \text{if $\varepsilon(\varphi) = 1$},\\ 1 & \text{if $\varepsilon(\varphi) = -1$}. \end{cases}$$ Furthermore, $L(s, \varphi) = \zeta(s)^{2}$ if $\varphi$ is Eisenstein. 
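In the Eisenstein case the coefficients are $\lambda(n, 0) = d(n)$, the divisor function (consistent with $L(s, \varphi) = \zeta(s)^{2}$), and the Hecke relation [\[eq:Hecke\]](#eq:Hecke){reference-type="eqref" reference="eq:Hecke"} reduces to a classical convolution identity for $d(n)$, which can be checked numerically. The sketch below is illustrative only; genuine Maaß coefficients are of course not computed here.

```python
from math import gcd

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def mobius(n):
    # Moebius function via trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

# Hecke multiplicativity for the divisor function:
# d(mn) = sum over e | (m, n) of mu(e) d(m/e) d(n/e)
for m in range(1, 40):
    for n in range(1, 40):
        g = gcd(m, n)
        rhs = sum(mobius(e) * num_divisors(m // e) * num_divisors(n // e)
                  for e in range(1, g + 1) if g % e == 0)
        assert num_divisors(m * n) == rhs
```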
## Quadratic Twists With the notation as above, the quadratic twist $\varphi \otimes \chi_{d}$ becomes a Hecke--Maaß newform of level $|d|^{2}$ whose $L$-function can be expressed in terms of a Dirichlet series and an Euler product, each converging absolutely for $\mathrm{Re}(s) > 1$: $$L(s, \varphi \otimes \chi_{d}) \coloneqq \sum_{n = 1}^{\infty} \frac{\lambda_{\varphi}(n) \chi_{d}(n)}{n^{s}} = \prod_{p} \left(1-\frac{\lambda_{\varphi}(p) \chi_{d}(p)}{p^{s}}+\frac{\chi_{d}(p)^{2}}{p^{2s}} \right)^{-1}.$$ It extends to the whole complex plane $\mathbb{C}$ and satisfies the functional equation $$\begin{aligned} \Lambda(s, \varphi \otimes \chi_{d}) &\coloneqq \left(\frac{|d|}{\pi} \right)^{s} \Gamma \left(\frac{s+\kappa+it_{\varphi}}{2} \right) \Gamma \left(\frac{s+\kappa-it_{\varphi}}{2} \right) L(s, \varphi \otimes \chi_{d})\\ & = \varepsilon(\varphi \otimes \chi_{d}) \Lambda(1-s, \varphi \otimes \chi_{d}),\end{aligned}$$ where $\varepsilon(\varphi \otimes \chi_{d}) = \varepsilon(\varphi) \varepsilon(d)$ with $\varepsilon(d) = (\frac{d}{-1}) = \pm 1$ depending on the sign of $d$. Furthermore, $L(s, \varphi \otimes \chi_{d}) = L(s, \chi_{d})^{2}$ if $\varphi$ is Eisenstein. ## The Approximate Functional Equation {#approximate-functional-equations} We record a version of the approximate functional equation due to Iwaniec--Kowalski [@IwaniecKowalski2004 Theorem 5.3] applied to $L(\frac{1}{2}, \varphi \otimes \chi_{d})$. **Lemma 5** (Iwaniec--Kowalski [@IwaniecKowalski2004 Theorem 5.3]). *Let $G(u)$ be any function that is even, holomorphic and bounded in the horizontal strip $-4 < \mathrm{Re}(u) < 4$, and normalised such that $G(0) = 1$. 
Then we have that $$L \left(\frac{1}{2}, \varphi \otimes \chi_{d} \right) = (1+\varepsilon(\varphi \otimes \chi_{d})) \sum_{n = 1}^{\infty} \frac{\lambda_{\varphi}(n) \chi_{d}(n)}{\sqrt{n}} W \left(\frac{n}{|d|} \right)+O(|d|^{-2023}),$$ where, for any $c > 1$ and with $s = \frac{1}{2}$, $$\label{eq:V} W(y) \coloneqq \frac{1}{2\pi i} \int_{(c)} (\pi y)^{-u} G(u) \frac{\Gamma(\frac{s+u+\kappa+it_{\varphi}}{2}) \Gamma(\frac{s+u+\kappa-it_{\varphi}}{2})}{\Gamma(\frac{s+\kappa+it_{\varphi}}{2}) \Gamma(\frac{s+\kappa-it_{\varphi}}{2})} \frac{du}{u}.$$* Note that $W(y)$ decays rapidly as $y \to \infty$ by taking $c$ suitably large in the definition [\[eq:V\]](#eq:V){reference-type="eqref" reference="eq:V"} and then using Stirling's formula [\[eq:Stirling\]](#eq:Stirling){reference-type="eqref" reference="eq:Stirling"}. Since we are only interested in positive fundamental discriminants $d$, we assume without loss of generality that $\varepsilon(\varphi) = 1$, namely that $\varphi$ is even, because the central $L$-value vanishes otherwise. ## Voronoı̆ Summation {#subsec:Voronoi} In conjunction with Poisson summation in Section [2.2](#Poisson-summation){reference-type="ref" reference="Poisson-summation"}, one of the key ingredients in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is Voronoı̆ summation for Hecke--Maaß cusp forms on $\mathrm{SL}_{2}(\mathbb{Z}) \backslash \mathbb{H}$, which may be thought of as applying Poisson summation twice. To prepare for the subsequent discussion, we fix some notation. Let $V: (0, \infty) \to \mathbb{C}$ be a smooth function with compact support.
Define the Hankel transform of $V$ by $$\mathring{V}_{\varphi}^{\pm}(y) \coloneqq \int_{0}^{\infty} V(x) J_{\varphi}^{\pm}(4\pi \sqrt{xy}) dx,$$ where $$\label{eq:J-phi} J_{\varphi}^{+}(x) \coloneqq -\frac{\pi}{\cosh(\pi t_{\varphi})}(Y_{2it_{\varphi}}(x)+Y_{-2it_{\varphi}}(x)), \qquad J_{\varphi}^{-}(x) \coloneqq 4\varepsilon(\varphi) \cosh(\pi t_{\varphi}) K_{2it_{\varphi}}(x).$$ It is straightforward to confirm that $\mathring{V}$ is a Schwartz function (cf. [@GradshteynRyzhik2007]). We are now ready to formulate the Voronoı̆ summation formula; see [@BlomerHarcos2012 Proposition 2]. **Lemma 6** (Voronoı̆ summation). *Let $c \in \mathbb{N}$ and $d \in \mathbb{Z}$ with $(c, d) = 1$. Let $V: (0, \infty) \to \mathbb{C}$ be a smooth function with compact support. Then we have for $N > 0$ that $$\sum_{n} \lambda_{\varphi}(n) e \left(\frac{dn}{c} \right) V \left(\frac{n}{N} \right) = \frac{N}{c} \sum_{n} \sum_{\pm} \lambda_{\varphi}(n) e \left(\mp \frac{\overline{d} n}{c} \right) \mathring{V}_{\varphi}^{\pm} \left(\frac{n}{c^{2}/N} \right).$$* **Corollary 7** (Voronoı̆ summation for twists). *Let $c \in \mathbb{N}$ be an odd squarefree integer and $d \in \mathbb{Z}$ with $(c, d) = 1$. Let $V: (0, \infty) \to \mathbb{C}$ be a smooth function with compact support. Then we have for $N > 0$ that $$\label{eq:Voronoi-twists} \sum_{n} \lambda_{\varphi}(n) \left(\frac{c}{n} \right) e \left(\frac{dn}{c} \right) V \left(\frac{n}{N} \right) = \frac{N}{c} \sum_{n} \sum_{\pm} \lambda_{\varphi}(n) \left(\frac{c}{n} \right) e \left(\mp \frac{\overline{d} n}{c} \right) \mathring{V}_{\varphi}^{\pm} \left(\frac{n}{c^{2}/N} \right).$$* *Proof.* We use [@JacquetLanglands1970 Proposition 3.8 (iii)] or [@AtkinLi1978 Theorem 3.1 (ii)] to see that there exists a Hecke--Maaß newform $\varphi \otimes (\frac{c}{\cdot})$ of level $c^{2}$ and trivial central character such that $\lambda_{\varphi \otimes (\frac{c}{\cdot})}(n) = \lambda_{\varphi}(n) (\frac{c}{n})$. 
Corollary [Corollary 7](#lem:Voronoi-twists){reference-type="ref" reference="lem:Voronoi-twists"} now follows from a version of Lemma [Lemma 6](#lem:Voronoi){reference-type="ref" reference="lem:Voronoi"} that keeps track of the level. ◻ # Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} {#proof} In this section, we embark on the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. Recall that our goal is to estimate $$\mathcal{M}_{\varphi}(T, H) \coloneqq \sideset{}{^{\ast}} \sum_{H \leq h \leq 2H} \ \sideset{}{^{\ast}} \sum_{T \leq n \leq 2T} L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right) L \left(\frac{1}{2}, \varphi \otimes \chi_{8(n+h)} \right).$$ We regard $\varphi$ as fixed, and use the convention that $\varepsilon$ is an arbitrarily small positive quantity, not necessarily the same in each instance. Each inequality in what follows is allowed to have an implicit constant depending at most on $\varphi$ and $\varepsilon$, unless otherwise specified. ## Trivial Bound An application of the Cauchy--Schwarz inequality, combined with a trivial estimate for the second moment of quadratic twists, yields $$\mathcal{M}_{\varphi}(T, H) \ll \sideset{}{^{\ast}} \sum_{h \ll H} \left(\sideset{}{^{\ast}} \sum_{n \ll T} \left|L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right) \right|^{2} \right)^{\frac{1}{2}} \left(\sideset{}{^{\ast}} \sum_{n \ll T} \left|L \left(\frac{1}{2}, \varphi \otimes \chi_{8(n+h)} \right) \right|^{2} \right)^{\frac{1}{2}} \ll HT^{1+\varepsilon}$$ for any $1 \leq H \leq T$. To establish Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, we thus need to save a factor of roughly $HT^{-\frac{1}{4}} \geq 1$.
## Smoothing Upon approximating the indicator function $\mathbf{1}_{(H, 2H] \times (T, 2T]}$ by a compactly supported smooth function $W \in C_{c}^{\infty}([1, 2] \times [1, 2])$, it suffices to handle the smoothed version $$\sideset{}{^{\ast}} \sum_{h} \sideset{}{^{\ast}} \sum_{n} L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right) L \left(\frac{1}{2}, \varphi \otimes \chi_{8(n+h)} \right) W \left(\frac{n}{T}, \frac{h}{H} \right).$$ ## Applying the $\delta$-Symbol We now use the circle method (Lemma [Lemma 2](#DeltaCor){reference-type="ref" reference="DeltaCor"}) to separate the oscillations. Let $1 \leq C \leq \sqrt{T}$ be a parameter that we shall determine later, and fix a smooth function $U$ that equals $1$ on $[1, 2]$ and vanishes outside $[\frac{1}{2}, \frac{5}{2}]$. Then we need to analyse the expression $$\begin{gathered} \sideset{}{^{\ast}} \sum_{h} \sideset{}{^{\ast}} \sum_{n} L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right) W \left(\frac{n}{T}, \frac{h}{H} \right) \sideset{}{^{\ast}} \sum_{m} L \left(\frac{1}{2}, \varphi \otimes \chi_{8m} \right) U \left(\frac{m}{T} \right) \delta(m = n+h)\\ = \frac{1}{\mathcal{C}} \sum_{c} \frac{1}{c} \sideset{}{^{\ast}} \sum_{h} \sideset{}{^{\ast}} \sum_{n} L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right) W \left(\frac{n}{T}, \frac{h}{H} \right) \sideset{}{^{\ast}} \sum_{m} L \left(\frac{1}{2}, \varphi \otimes \chi_{8m} \right) U \left(\frac{m}{T} \right)\\ \times \sum_{a {\@displayfalse \pmod{c}}} e \left(\frac{a(n+h-m)}{c} \right) V_{0} \left(\frac{c}{C}, \frac{n+h-m}{cC} \right)\end{gathered}$$ for some $\mathcal{C} \sim C$ and a fixed smooth function $V_{0}$ satisfying $V_{0}(x, y) \ll \delta(|x|, |y| \ll 1)$.
Pulling out the divisor $b = (a, c)$ in tandem with the replacement $a \mapsto -a$ yields $$\begin{gathered} \mathcal{M}_{\varphi}(T, H) \ll \frac{1}{C} \sum_{b, c} \frac{1}{bc} \ \sideset{}{^{\star}} \sum_{a {\@displayfalse \pmod{c}}} \sideset{}{^{\ast}} \sum_{h} e \left(-\frac{ah}{c} \right) \sideset{}{^{\ast}} \sum_{m} L \left(\frac{1}{2}, \varphi \otimes \chi_{8m} \right) e \left(\frac{am}{c} \right) U \left(\frac{m}{T} \right)\\ \times \sideset{}{^{\ast}} \sum_{n} L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right) e \left(-\frac{an}{c} \right) V_{0} \left(\frac{bc}{C}, \frac{n+h-m}{bcC} \right) W \left(\frac{n}{T}, \frac{h}{H} \right)+T^{-2023}.\end{gathered}$$ ## Poisson Summation in $h$ We first remove the asterisk (the squarefree condition) on the $h$-sum via Möbius inversion, writing $$\begin{gathered} \sideset{}{^{\ast}} \sum_{h} e \left(-\frac{ah}{c} \right) W \left(\frac{n}{T}, \frac{h}{H} \right) V_{0} \left(\frac{bc}{C}, \frac{n+h-m}{bcC} \right)\\ = \sum_{d} \mu(d) \sum_{h} e \left(-\frac{ad^{2} h}{c} \right) V_{0} \left(\frac{bc}{C}, \frac{n+d^{2} h-m}{bcC} \right) W \left(\frac{n}{T}, \frac{d^{2} h}{H} \right).\end{gathered}$$ Applying Poisson summation (Lemma [Lemma 3](#Fouvry-Kowalski-Michel){reference-type="ref" reference="Fouvry-Kowalski-Michel"}) to the $h$-sum shows that the right-hand side is $$H \sum_{d} \frac{\mu(d)}{d^{2}} \sum_{h} \delta(h \equiv ad^{2} {\@displayfalse \pmod{c}}) \mathcal{J}_{0}(h, m, n, c),$$ where[^4] $$\mathcal{J}_{0}(h, m, n, c) \coloneqq \int_{\mathbb{R}} V_{0} \left(\frac{bc}{C}, \frac{n+Hy-m}{bcC} \right) W \left(\frac{n}{T}, y \right) e \left(-\frac{hHy}{cd^{2}} \right) dy.$$ Repeated integration by parts ensures an arbitrary saving unless[^5] $$h \ll \frac{cd^{2}}{H}.$$ The above congruence condition is solvable with $(a, c) = 1$ if and only if $(c, d^{2}) = (c, h)$. We thus factorise $c$ via the Chinese Remainder Theorem (cf. [@KiralYoung2021 Section 6.3]).
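The removal of the squarefree condition above rests on the classical detection $\mu^{2}(h) = \sum_{d^{2} \mid h} \mu(d)$, which rearranges $\sum^{\ast}_{h}$ into $\sum_{d} \mu(d) \sum_{h}$ after the substitution $h \mapsto d^{2} h$. A short numeric confirmation of the identity (illustrative only):

```python
def mobius(n):
    # Moebius function via trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# mu^2(n) = sum over d with d^2 | n of mu(d)
for n in range(1, 600):
    s = sum(mobius(d) for d in range(1, n + 1) if d * d <= n and n % (d * d) == 0)
    assert s == (1 if is_squarefree(n) else 0)
```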
Write $$\label{eq:condition-1} c = c_{0} c_{1}, \qquad h = h_{0} h_{1},$$ where the factorisations may be written locally as $$\begin{aligned} c_{0} &= \prod_{\nu_{p}(c) > \nu_{p}(h)} p^{\nu_{p}(c)}, \qquad c_{1} = \prod_{1 \leq \nu_{p}(c) \leq \nu_{p}(h)} p^{\nu_{p}(c)},\\ h_{0} &= \prod_{\nu_{p}(h) \geq \nu_{p}(c)} p^{\nu_{p}(h)}, \qquad h_{1} = \prod_{1 \leq \nu_{p}(h) < \nu_{p}(c)} p^{\nu_{p}(h)},\end{aligned}$$ with the $p$-adic valuation defined by $\nu_{p}(n) = d$ for $p^{d} \parallel n$. Alternatively, if $n^{\ast} = \prod_{p \mid n} p$, then $$\label{eq:condition-2} (c_{0}, h_{0}) = 1, \qquad c_{1} \mid h_{0}, \qquad h_{1} h_{1}^{\ast} \mid c_{0}.$$ These conditions characterise the variables $c_{0}$, $c_{1}$, $h_{0}$, $h_{1}$. Note that $$\label{eq:condition-3} (c_{0}, c_{1}) = (c_{1}, h_{1}) = (h_{0}, h_{1}) = 1$$ automatically from the other conditions. It transpires from [\[eq:condition-1\]](#eq:condition-1){reference-type="eqref" reference="eq:condition-1"} and [\[eq:condition-2\]](#eq:condition-2){reference-type="eqref" reference="eq:condition-2"} that $(c, h) = c_{1} h_{1}$ so that we impose the condition $c_{1} h_{1} = (c_{0} c_{1}, d^{2}) = (\frac{c_{0}}{h_{1}} c_{1} h_{1}, d^{2})$. Hence, one may decompose $$d^{2} = c_{1} d^{\prime} h_{1},$$ where the new variable $d^{\prime}$ is only subject to the restriction $(\frac{c_{0}}{h_{1}}, d^{\prime}) = 1$, namely $(c_{0}, d^{\prime}) = 1$, since $\frac{c_{0}}{h_{1}}$ shares the same prime factors as $c_{0}$. 
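The local description of the factorisations $c = c_{0} c_{1}$ and $h = h_{0} h_{1}$ can be made concrete. The sketch below (with hypothetical helper names `split` and `rad`, not notation from the paper) computes the four parts prime by prime and verifies the characterising conditions [\[eq:condition-2\]](#eq:condition-2){reference-type="eqref" reference="eq:condition-2"} together with $(c, h) = c_{1} h_{1}$:

```python
from math import gcd

def prime_factors(n):
    ps, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            ps.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def valuation(n, p):
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def rad(n):  # n* = product of the primes dividing n
    out = 1
    for p in prime_factors(n):
        out *= p
    return out

def split(c, h):
    # c0, c1, h0, h1 as defined by the p-adic valuation conditions above.
    c0 = c1 = h0 = h1 = 1
    for p in prime_factors(c * h):
        vc, vh = valuation(c, p), valuation(h, p)
        if vc > vh:
            c0 *= p ** vc
        elif vc >= 1:
            c1 *= p ** vc
        if vh >= vc:
            h0 *= p ** vh
        elif vh >= 1:
            h1 *= p ** vh
    return c0, c1, h0, h1

for c in range(1, 60):
    for h in range(1, 60):
        c0, c1, h0, h1 = split(c, h)
        assert c0 * c1 == c and h0 * h1 == h
        assert gcd(c0, h0) == 1 and h0 % c1 == 0 and c0 % (h1 * rad(h1)) == 0
        assert gcd(c, h) == c1 * h1
```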
It is now possible to recast the initial congruence condition $h \equiv ad^{2} {\@displayfalse \pmod{c}}$ as $$h_{0} h_{1} \equiv ac_{1} d^{\prime} h_{1} {\@displayfalse \pmod{c_{0} c_{1}}} \qquad \Longleftrightarrow \qquad a \equiv \frac{h_{0} \overline{d^{\prime}}}{c_{1}} {\@displayfalse \pmod{\frac{c_{0}}{h_{1}}}},$$ where $\overline{d^{\prime}}$ is taken to be the multiplicative inverse modulo $c_{0}$ thanks to [\[eq:condition-2\]](#eq:condition-2){reference-type="eqref" reference="eq:condition-2"}. This condition can further be rewritten as $$a \equiv \frac{h_{0} \overline{d^{\prime}}}{c_{1}}+\frac{c_{0} u}{h_{1}} {\@displayfalse \pmod{c_{0}}}, \qquad u {\@displayfalse \pmod{h_{1}}},$$ where $u$ runs through all residue classes modulo $h_{1}$, since as soon as $a$ is coprime to $\frac{c_{0}}{h_{1}}$, it is also coprime to $c_{0}$. The Chinese Remainder Theorem implies that the sum over $a$ equals $$h_{1} e \left(\frac{\overline{c_{1} d^{\prime}} h_{0}(m-n)}{c_{0} c_{1}} \right) S(m-n; 0; c_{1}) \delta(m \equiv n {\@displayfalse \pmod{h_{1}}}).$$ It is advantageous to open the Kloosterman sum (or the Ramanujan sum) in the form $$\sum_{c_{2} \mid (c_{1}, m-n)} \mu \left(\frac{c_{1}}{c_{2}} \right) c_{2},$$ and replace $c_{1} \mapsto c_{1} c_{2}$. 
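The evaluation of the Ramanujan sum used above, $S(n; 0; c) = \sum_{c_{2} \mid (c, n)} \mu(\frac{c}{c_{2}}) c_{2}$, is classical and easy to confirm numerically (an illustrative sketch only):

```python
import cmath
from math import gcd

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def ramanujan_sum(n, c):
    # S(n; 0; c) = sum over a mod c with (a, c) = 1 of e(an/c)
    return sum(cmath.exp(2j * cmath.pi * a * n / c)
               for a in range(1, c + 1) if gcd(a, c) == 1)

for c in range(1, 40):
    for n in range(0, 40):
        g = gcd(c, n)  # gcd(c, 0) = c, covering the case n = 0
        closed = sum(mobius(c // d) * d for d in range(1, g + 1) if g % d == 0)
        assert abs(ramanujan_sum(n, c) - closed) < 1e-8
```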
As a result, we are led to the expression $$\begin{gathered} \mathcal{M}_{\varphi}(T, H) \ll \frac{H}{C} \sum_{\substack{b, c_{0}, c_{1}, c_{2}, d \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1} \\ (c_{0}, d^{\prime}) = 1}} \frac{\mu(c_{1}) \mu(d)}{bc_{0} c_{1}^{2} c_{2} d^{\prime}} \sum_{h_{0} \ll \frac{c_{0} c_{1}^{2} c_{2}^{2} d^{\prime}}{H}} \ \underset{m \equiv n {\@displayfalse \pmod{c_{2} h_{1}}}}{\sideset{}{^{\ast}} \sum \sideset{}{^{\ast}} \sum} L \left(\frac{1}{2}, \varphi \otimes \chi_{8m} \right) L \left(\frac{1}{2}, \varphi \otimes \chi_{8n} \right)\\ \times e \left(\frac{\overline{c_{1} c_{2} d^{\prime}} h_{0}(m-n)}{c_{0} c_{1}} \right) U \left(\frac{m}{T} \right) \mathcal{J}_{0}(h_{0} h_{1}, m, n, c_{0} c_{1} c_{2})+T^{-2023},\end{gathered}$$ where we drop the conditions [\[eq:condition-1\]](#eq:condition-1){reference-type="eqref" reference="eq:condition-1"} and [\[eq:condition-2\]](#eq:condition-2){reference-type="eqref" reference="eq:condition-2"} from summations for simplicity. 
## Poisson Summation in $m$ It now follows from the approximate functional equation (Lemma [3.4](#approximate-functional-equations){reference-type="ref" reference="approximate-functional-equations"}) that there exists a smooth function $W_{1} \in C_{c}^{\infty}(\mathbb{R})$ supported on $[\frac{1}{2}, \frac{5}{2}]$ such that $$\begin{gathered} \sideset{}{^{\ast}} \sum_{m \equiv n {\@displayfalse \pmod{c_{2} h_{1}}}} L \left(\frac{1}{2}, \varphi \otimes \chi_{8m} \right) e \left(\frac{\overline{c_{1} c_{2} d^{\prime}} h_{0} m}{c_{0} c_{1}} \right) U \left(\frac{m}{T} \right) \mathcal{J}_{0}(h_{0} h_{1}, m, n, c_{0} c_{1} c_{2})\\ = 2 \sum_{\ell_{1} = 1}^{\infty} \frac{\lambda_{\varphi}(\ell_{1})}{\sqrt{\ell_{1}}} \ \sideset{}{^{\ast}} \sum_{m \equiv n {\@displayfalse \pmod{c_{2} h_{1}}}} \left(\frac{8m}{\ell_{1}} \right) e \left(\frac{\overline{c_{1} c_{2} d^{\prime}} h_{0} m}{c_{0} c_{1}} \right) U \left(\frac{m}{T} \right) W_{1} \left(\frac{\ell_{1}}{8m} \right)\\ \times \mathcal{J}_{0}(h_{0} h_{1}, m, n, c_{0} c_{1} c_{2})+T^{-2023}.\end{gathered}$$ We remove the asterisk and execute Poisson summation (Lemma [Lemma 3](#Fouvry-Kowalski-Michel){reference-type="ref" reference="Fouvry-Kowalski-Michel"}) in the $m$-sum, deducing $$\begin{gathered} \frac{2T}{c_{0} c_{1} c_{2} h_{1}} \sum_{\ell_{1} = 1}^{\infty} \frac{\lambda_{\varphi}(\ell_{1})}{\ell_{1}^{\frac{3}{2}}} \sum_{(e, \ell_{1}) = 1} \frac{\mu(e)}{e^{2}} \sum_{m} \sum_{\alpha {\@displayfalse \pmod{c_{0} c_{1} c_{2} h_{1} \ell_{1}}}} \left(\frac{8\alpha}{\ell_{1}} \right)\\ \times e \left(\frac{\alpha \overline{c_{1} c_{2} d^{\prime}} e^{2} h_{0}}{c_{0} c_{1}}+\frac{\alpha m}{c_{0} c_{1} c_{2} h_{1} \ell_{1}} \right) \delta(\alpha e^{2} \equiv n {\@displayfalse \pmod{c_{2} h_{1}}}) \mathcal{J}_{1}(h_{0} h_{1}, \ell_{1}, m, n, c_{0} c_{1} c_{2}),\end{gathered}$$ where $$\mathcal{J}_{1}(h_{0} h_{1}, \ell_{1}, m, n, c_{0} c_{1} c_{2}) \coloneqq \int_{\mathbb{R}} U(y) W_{1} \left(\frac{\ell_{1}}{8Ty} \right) 
\mathcal{J}_{0}(h_{0} h_{1}, Ty, n, c_{0} c_{1} c_{2}) e \left(-\frac{mTy}{c_{0} c_{1} c_{2} e^{2} h_{1} \ell_{1}} \right) dy.$$ Repeated integration by parts ensures an arbitrary saving unless $$m \ll \frac{c_{0} c_{1} c_{2} e^{2} h_{1} \ell_{1}}{T}.$$ ## Poisson Summation in $n$ We execute Poisson summation (Lemma [Lemma 3](#Fouvry-Kowalski-Michel){reference-type="ref" reference="Fouvry-Kowalski-Michel"}) in the $n$-sum in the same manner as above, deducing $$\begin{aligned} \mathcal{M}_{\varphi}(T, H) &\ll \frac{HT^{2}}{C} \sum_{\substack{b, c_{0}, c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1} \\ (c_{0}, d^{\prime}) = 1}} \frac{\mu(c_{1}) \mu(d) \mu(e) \mu(f)}{bc_{0}^{3} c_{1}^{4} c_{2}^{3} d^{\prime} e^{2} f^{2} h_{1}^{2}} \sum_{h_{0} \ll \frac{c_{0} c_{1}^{2} c_{2}^{2} d^{\prime}}{H}} \sum_{\substack{T^{1-\varepsilon} \ll \ell_{1}, \ell_{2} \ll T^{1+\varepsilon} \\ (\ell_{1}, e) = (\ell_{2}, f) = 1}} \frac{\lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2})}{(\ell_{1} \ell_{2})^{\frac{3}{2}}}\\ & \quad \times \sum_{m \ll \frac{c_{0} c_{1} c_{2} e^{2} h_{1} \ell_{1}}{T}} \sum_{n \ll \frac{c_{0} c_{1} c_{2} f^{2} h_{1} \ell_{2}}{T}} \sum_{\substack{\alpha {\@displayfalse \pmod{c_{0} c_{1} c_{2} h_{1} \ell_{1}}} \\ \beta {\@displayfalse \pmod{c_{0} c_{1} c_{2} h_{1} \ell_{2}}}}} \left(\frac{8\alpha}{\ell_{1}} \right) \left(\frac{8\beta}{\ell_{2}} \right)\\ & \quad \times e \left(\frac{\alpha \overline{c_{1} c_{2} d^{\prime}} e^{2} h_{0}}{c_{0} c_{1}}-\frac{\beta \overline{c_{1} c_{2} d^{\prime}} f^{2} h_{0}}{c_{0} c_{1}}+\frac{\alpha m}{c_{0} c_{1} c_{2} h_{1} \ell_{1}}+\frac{\beta n}{c_{0} c_{1} c_{2} h_{1} \ell_{2}} \right)\\ &\qquad \times \delta(\alpha e^{2} \equiv \beta f^{2} {\@displayfalse \pmod{c_{2} h_{1}}}) \mathcal{J}_{2}(h_{0} h_{1}, \ell_{1}, \ell_{2}, m, n, c_{0} c_{1} c_{2})+T^{-2023},\end{aligned}$$ where $$\mathcal{J}_{2}(h_{0} h_{1}, \ell_{1}, \ell_{2}, m, n, c_{0} c_{1} c_{2}) \coloneqq \int_{\mathbb{R}} W_{2} 
\left(\frac{\ell_{2}}{8Ty} \right) \mathcal{J}_{1}(h_{0} h_{1}, \ell_{1}, m, Ty, c_{0} c_{1} c_{2}) e \left(-\frac{nTy}{c_{0} c_{1} c_{2} f^{2} h_{1} \ell_{2}} \right) dy$$ for a smooth function $W_{2} \in C_{c}^{\infty}(\mathbb{R})$ supported on $[\frac{1}{2}, \frac{5}{2}]$. ## A Simplification of Character Sums In anticipation of the forthcoming analysis, it is convenient to simplify the character sums appearing in the above section. Detecting the restriction $\alpha e^{2} \equiv \beta f^{2} {\@displayfalse \pmod{c_{2} h_{1}}}$ via additive characters modulo $c_{2} h_{1}$, we derive $$\begin{gathered} \frac{1}{c_{2} h_{1}} \sum_{\substack{\alpha {\@displayfalse \pmod{c_{0} c_{1} c_{2} h_{1} \ell_{1}}} \\ \beta {\@displayfalse \pmod{c_{0} c_{1} c_{2} h_{1} \ell_{2}}} \\ \gamma {\@displayfalse \pmod{c_{2} h_{1}}}}} \left(\frac{8\alpha}{\ell_{1}} \right) \left(\frac{8\beta}{\ell_{2}} \right) e \left(\frac{\alpha \overline{c_{1} c_{2} d^{\prime}} e^{2} h_{0}}{c_{0} c_{1}}-\frac{\beta \overline{c_{1} c_{2} d^{\prime}} f^{2} h_{0}}{c_{0} c_{1}} \right)\\ \times e \left(\frac{\alpha m}{c_{0} c_{1} c_{2} h_{1} \ell_{1}}+\frac{\beta n}{c_{0} c_{1} c_{2} h_{1} \ell_{2}}+\frac{\gamma(\alpha e^{2}-\beta f^{2})}{c_{2} h_{1}} \right).\end{gathered}$$ The sum over $\alpha$ vanishes unless $$\label{eq:alpha} \overline{c_{1}} c_{2} \overline{c_{2} d^{\prime}} e^{2} h_{0} h_{1} \ell_{1}+m+\gamma c_{0} c_{1} e^{2} \ell_{1} \equiv 0 {\@displayfalse \pmod{c_{0} c_{1} c_{2} h_{1}}},$$ in which case it is $$\label{eq:alpha-evaluation} c_{0} c_{1} c_{2} h_{1} \left(\frac{8c_{0} c_{1} c_{2} h_{1} m}{\ell_{1}} \right) \tau \left(\left(\frac{\cdot}{\ell_{1}} \right) \right).$$ Similarly, the sum over $\beta$ vanishes unless $$\label{eq:beta} -\overline{c_{1}} c_{2} \overline{c_{2} d^{\prime}} f^{2} h_{0} h_{1} \ell_{2}+n-\gamma c_{0} c_{1} f^{2} \ell_{2} \equiv 0 {\@displayfalse \pmod{c_{0} c_{1} c_{2} h_{1}}},$$ in which case it is $$\label{eq:beta-evaluation} c_{0} c_{1} c_{2} h_{1} 
\left(\frac{8c_{0} c_{1} c_{2} h_{1} n}{\ell_{2}} \right) \tau \left(\left(\frac{\cdot}{\ell_{2}} \right) \right).$$ Furthermore, we factorise the sum over $\gamma$ into sums over $\gamma_{1} {\@displayfalse \pmod{c_{2}}}$ and $\gamma_{2} {\@displayfalse \pmod{h_{1}}}$. The combination of [\[eq:alpha\]](#eq:alpha){reference-type="eqref" reference="eq:alpha"} and [\[eq:beta\]](#eq:beta){reference-type="eqref" reference="eq:beta"} shows $$\begin{gathered} \overline{c_{1} d^{\prime}} e^{2} h_{0} h_{1} \ell_{1}+m \equiv -\overline{c_{1} d^{\prime}} f^{2} h_{0} h_{1} \ell_{2}+n \equiv 0 {\@displayfalse \pmod{c_{0}}}, \qquad m \equiv n \equiv 0 {\@displayfalse \pmod{c_{1}}},\\ m+\gamma_{1} c_{0} c_{1} e^{2} \ell_{1} \equiv n-\gamma_{1} c_{0} c_{1} f^{2} \ell_{2} \equiv 0 {\@displayfalse \pmod{c_{2}}}, \qquad m \equiv n \equiv 0 {\@displayfalse \pmod{h_{1}}}.\end{gathered}$$ Note that [\[eq:condition-2\]](#eq:condition-2){reference-type="eqref" reference="eq:condition-2"} and [\[eq:condition-3\]](#eq:condition-3){reference-type="eqref" reference="eq:condition-3"} imply in particular that $(c_{0} c_{1}, c_{2} h_{1}) = (c_{1}, c_{2})h_{1}$ and $(c_{2}, h_{1}) = 1$. By an elementary consideration, the congruence condition modulo $c_{0}$ boils down to $$f^{2} \overline{\ell_{1}} m \equiv -e^{2} \overline{\ell_{2}} n {\@displayfalse \pmod{c_{0}}},$$ where we assume $(c_{0}, \ell_{1} \ell_{2}) = 1$ due to the presence of quadratic characters in [\[eq:alpha-evaluation\]](#eq:alpha-evaluation){reference-type="eqref" reference="eq:alpha-evaluation"} and [\[eq:beta-evaluation\]](#eq:beta-evaluation){reference-type="eqref" reference="eq:beta-evaluation"}. Similarly, the congruence condition modulo $c_{2}$ boils down to $$f^{2} \overline{\ell_{1}} m \equiv -e^{2} \overline{\ell_{2}} n {\@displayfalse \pmod{c_{2}}},$$ which altogether does not depend on $\gamma_{1} {\@displayfalse \pmod{c_{2}}}$. 
Gathering the above computations together leads to $$\begin{aligned} \mathcal{M}_{\varphi}(T, H) &\ll T^{2} \sum_{\substack{b, c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1}}} \frac{\mu(c_{1}) \mu(d) \mu(e) \mu(f)}{b^{2} c_{1} e^{2} f^{2}} \sup_{\substack{c_{0} \ll \frac{C}{bc_{1} c_{2}} \\ (c_{0}, d^{\prime}) = 1}} \sup_{h_{0} \ll \frac{c_{1} c_{2} d^{\prime} C}{bH}} \sum_{\substack{T^{1-\varepsilon} \ll \ell_{1}, \ell_{2} \ll T^{1+\varepsilon} \\ (\ell_{1}, c_{1} eh_{1}) = (\ell_{2}, c_{1} fh_{1}) = 1}} \frac{\lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2})}{(\ell_{1} \ell_{2})^{\frac{3}{2}}}\\ & \quad \times \sum_{m \ll \frac{e^{2} \ell_{1} C}{bc_{1} T}} \sum_{n \ll \frac{f^{2} \ell_{2} C}{bc_{1} T}} \left(\frac{8c_{0} c_{2} m}{\ell_{1}} \right) \left(\frac{8c_{0} c_{2} n}{\ell_{2}} \right) \tau \left(\left(\frac{\cdot}{\ell_{1}} \right) \right) \tau \left(\left(\frac{\cdot}{\ell_{2}} \right) \right)\\ & \quad \times \delta(e^{2} \ell_{1} n \equiv -f^{2} \ell_{2} m {\@displayfalse \pmod{\frac{c_{0} c_{2}}{\delta}}}) \mathcal{J}_{2}(h_{0} h_{1}, \ell_{1}, \ell_{2}, c_{1} h_{1} m, c_{1} h_{1} n, c_{0} c_{1} c_{2})+T^{-2023},\end{aligned}$$ where $\delta = (c_{1} h_{1}, c_{0} c_{2}) = (c_{1}, c_{2})h_{1}$. It is convenient to restrict our attention to odd squarefree integers $\ell_{1}$ and $\ell_{2}$ so that the Gauß sums simplify to $$\tau \left(\left(\frac{\cdot}{\ell_{1}} \right) \right) = \varepsilon_{\ell_{1}} \sqrt{\ell_{1}}, \qquad \tau \left(\left(\frac{\cdot}{\ell_{2}} \right) \right) = \varepsilon_{\ell_{2}} \sqrt{\ell_{2}}.$$ Without loss of generality, we shall focus on the case where $\varepsilon_{\ell_{1}} = \varepsilon_{\ell_{2}} = 1$. ## Amplification We introduce an amplification parameter $L \geq 1$ at our disposal. 
Then $$\begin{aligned} \mathcal{M}_{\varphi}(T, H) &\ll T^{\varepsilon} \sum_{\substack{b, c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1}}} \frac{\mu(c_{1}) \mu(d) \mu(e) \mu(f)}{b^{2} c_{1} e^{2} f^{2}} \sup_{\substack{c_{0} \ll \frac{CL}{bc_{1} c_{2}} \\ (c_{0}, d^{\prime}) = 1}} \sup_{h_{0} \ll \frac{c_{1} c_{2} d^{\prime} C}{bH}} \ \sideset{}{^{\ast}} \sum_{\substack{T^{1-\varepsilon} \ll \ell_{1}, \ell_{2} \ll T^{1+\varepsilon} \\ (\ell_{1}, c_{1} eh_{1}) = (\ell_{2}, c_{1} fh_{1}) = 1}}\\ & \quad \times \lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2}) \sum_{m \ll \frac{e^{2} \ell_{1} C}{bc_{1} T}} \sum_{n \ll \frac{f^{2} \ell_{2} C}{bc_{1} T}} \left(\frac{8c_{0} c_{2} m}{\ell_{1}} \right) \left(\frac{8c_{0} c_{2} n}{\ell_{2}} \right)\\ & \quad \times \delta(e^{2} \ell_{1} n \equiv -f^{2} \ell_{2} m {\@displayfalse \pmod{\frac{c_{0} c_{2}}{\delta}}}) \mathcal{J}_{2}(h_{0} h_{1}, \ell_{1}, \ell_{2}, c_{1} h_{1} m, c_{1} h_{1} n, c_{0} c_{1} c_{2})+T^{-2023},\end{aligned}$$ where we pull out the factor $(\ell_{1} \ell_{2})^{-1}$ by partial summation. The determination of $L$ dictates the quality of the final bound. 
## Divisor Switching We now perform divisor switching and write $$f^{2} \ell_{2} m+e^{2} \ell_{1} n = \frac{c_{0} c_{2}}{\delta} \cdot q, \qquad q \ll \frac{\delta e^{2} f^{2} \ell_{1} \ell_{2}}{LT}.$$ It follows from quadratic reciprocity [\[eq:quadratic-reciprocity\]](#eq:quadratic-reciprocity){reference-type="eqref" reference="eq:quadratic-reciprocity"} and the assumption $\varepsilon_{\ell_{1}} = \varepsilon_{\ell_{2}} = 1$ that $$\left(\frac{8c_{0} c_{2} m}{\ell_{1}} \right) \left(\frac{8c_{0} c_{2} n}{\ell_{2}} \right) = \left(\frac{8\delta q}{\ell_{1} \ell_{2}} \right) \delta((\ell_{1}, fm) = (\ell_{2}, en) = (\ell_{1}, \ell_{2}) = 1).$$ Therefore, we are led to the expression $$\begin{aligned} \mathcal{M}_{\varphi}(T, H) &\ll T^{\varepsilon} \sum_{\substack{b, c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1}}} \frac{\mu(c_{1}) \mu(d) \mu(e) \mu(f)}{b^{2} c_{1} e^{2} f^{2}} \sup_{h_{0} \ll \frac{c_{1} c_{2} d^{\prime} C}{bH}} \ \sideset{}{^{\ast}} \sum_{\substack{T^{1-\varepsilon} \ll \ell_{1}, \ell_{2} \ll T^{1+\varepsilon} \\ (\ell_{1} \ell_{2}, c_{1} efh_{1}) = (\ell_{1}, \ell_{2}) = 1}} \lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2})\\ & \quad \times \sum_{\substack{q \ll \frac{\delta e^{2} f^{2} \ell_{1} \ell_{2}}{LT} \\ (q, ef) = 1}} \sum_{\substack{m \ll \frac{e^{2} \ell_{1} C}{bc_{1} T} \\ (m, q) = 1}} \sum_{\substack{n \ll \frac{f^{2} \ell_{2} C}{bc_{1} T} \\ (n, q) = 1}} \left(\frac{8\delta q}{\ell_{1} \ell_{2}} \right) \delta(e^{2} \ell_{1} n \equiv -f^{2} \ell_{2} m {\@displayfalse \pmod{q}})\\ & \quad \times \mathcal{J}_{2}(h_{0} h_{1}, \ell_{1}, \ell_{2}, c_{1} h_{1} m, c_{1} h_{1} n, c_{0} c_{1} c_{2})+T^{-2023},\end{aligned}$$ where we attach the restriction $(q, ef) = 1$ for technical brevity. 
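As noted at the end of the Gauß sum discussion, the right-hand side of the quadratic reciprocity law equals $\varepsilon_{m} \varepsilon_{n} \varepsilon_{mn}^{-1}$, so that $(\frac{m}{n})(\frac{n}{m}) = \varepsilon_{m} \varepsilon_{n} \varepsilon_{mn}^{-1}$ for odd positive coprime $m$ and $n$. This is the flip underlying the displayed identity, and it can be confirmed numerically (illustrative; `jacobi` and `eps` are standard helper routines, not notation from the paper):

```python
from math import gcd

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0.
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def eps(d):  # epsilon_d = 1 if d = 1 mod 4, i if d = 3 mod 4
    return 1 if d % 4 == 1 else 1j

# (m/n)(n/m) = eps_m eps_n / eps_{mn} for odd positive coprime m, n.
for m in range(1, 80, 2):
    for n in range(1, 80, 2):
        if gcd(m, n) == 1:
            assert jacobi(m, n) * jacobi(n, m) == eps(m) * eps(n) / eps(m * n)
```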
## Poisson Summation in $m$ Using Poisson summation (Lemma [Lemma 3](#Fouvry-Kowalski-Michel){reference-type="ref" reference="Fouvry-Kowalski-Michel"}) in the $m$-sum yields $$\begin{gathered} \sum_{\substack{m \ll \frac{e^{2} \ell_{1} C}{bc_{1} T} \\ (m, q) = 1}} \delta(e^{2} \ell_{1} n \equiv -f^{2} \ell_{2} m {\@displayfalse \pmod{q}}) \mathcal{J}_{2}(h_{0} h_{1}, \ell_{1}, \ell_{2}, c_{1} h_{1} m, c_{1} h_{1} n, c_{0} c_{1} c_{2})\\ = \frac{e^{2} \ell_{1} C}{bc_{1} qT} \sum_{m} e \left(-\frac{e^{2} \overline{f}^{2} \ell_{1} \overline{\ell_{2}} mn}{q} \right) \mathcal{J}_{3}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}),\end{gathered}$$ where $$\mathcal{J}_{3}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}) \coloneqq \int_{\mathbb{R}} \mathcal{J}_{2} \left(h_{0} h_{1}, \ell_{1}, \ell_{2}, \frac{e^{2} h_{1} \ell_{1} Cy}{bT}, c_{1} h_{1} n, c_{0} c_{1} c_{2} \right) e \left(-\frac{e^{2} \ell_{1} Cmy}{bc_{1} qT} \right) dy.$$ Repeated integration by parts ensures an arbitrary saving unless $$m \ll \frac{bc_{1} qT}{e^{2} \ell_{1} C}.$$ ## Poisson Summation in $n$ Using Poisson summation (Lemma [Lemma 3](#Fouvry-Kowalski-Michel){reference-type="ref" reference="Fouvry-Kowalski-Michel"}) in the $n$-sum yields $$\begin{gathered} \sum_{\substack{n \ll \frac{f^{2} \ell_{2} C}{bc_{1} T} \\ (n, q) = 1}} e \left(-\frac{e^{2} \overline{f}^{2} \ell_{1} \overline{\ell_{2}} mn}{q} \right) \mathcal{J}_{3}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2})\\ = \frac{f^{2} \ell_{2} C}{bc_{1} qT} \sum_{n} \delta(e^{2} \ell_{1} m \equiv f^{2} \ell_{2} n {\@displayfalse \pmod{q}}) \mathcal{J}_{4}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}),\end{gathered}$$ where $$\mathcal{J}_{4}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}) \coloneqq \int_{\mathbb{R}} \mathcal{J}_{3} \left(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, \frac{f^{2} \ell_{2} Cy}{bc_{1} T}, c_{1}, c_{2} \right) e \left(-\frac{f^{2} \ell_{2} Cny}{bc_{1} qT} \right) dy.$$ Repeated integration by 
parts ensures an arbitrary saving unless $$n \ll \frac{bc_{1} qT}{f^{2} \ell_{2} C}.$$ Altogether, we obtain $$\begin{aligned} \mathcal{M}_{\varphi}(T, H) &\ll C^{2} T^{\varepsilon} \sum_{\substack{b, c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1}}} \frac{\mu(c_{1}) \mu(d) \mu(e) \mu(f)}{b^{4} c_{1}^{3}} \sup_{h_{0} \ll \frac{c_{1} c_{2} d^{\prime} C}{bH}} \ \sideset{}{^{\ast}} \sum_{\substack{T^{1-\varepsilon} \ll \ell_{1}, \ell_{2} \ll T^{1+\varepsilon} \\ (\ell_{1} \ell_{2}, c_{1} efh_{1}) = (\ell_{1}, \ell_{2}) = 1}} \lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2})\\ & \quad \times \sum_{\substack{q \ll \frac{\delta e^{2} f^{2} T^{1+\varepsilon}}{L} \\ (q, ef) = 1}} \frac{1}{q^{2}} \sum_{\substack{m \ll \frac{bc_{1} qT^{\varepsilon}}{e^{2} C} \\ (m, q) = 1}} \sum_{\substack{n \ll \frac{bc_{1} qT^{\varepsilon}}{f^{2} C} \\ (n, q) = 1}} \left(\frac{8\delta q}{\ell_{1} \ell_{2}} \right) \delta(e^{2} \ell_{1} m \equiv f^{2} \ell_{2} n {\@displayfalse \pmod{q}})\\ & \quad \times \mathcal{J}_{4}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2})+T^{-2023}.\end{aligned}$$ ## Orthogonality Detecting the congruence condition at hand via primitive additive characters modulo $q$, we derive $$\delta(e^{2} \ell_{1} m \equiv f^{2} \ell_{2} n {\@displayfalse \pmod{q}}) = \frac{1}{q} \sum_{q_{1} \mid q} \ \sideset{}{^{\star}} \sum_{a {\@displayfalse \pmod{q_{1}}}} e \left(\frac{a(e^{2} \ell_{1} m-f^{2} \ell_{2} n)}{q_{1}} \right).$$ This may be viewed as an orthogonality relation expressed in terms of Ramanujan sums. ## Voronoı̆ Summation in $\ell_{1}$ To avoid introducing further variables, we remove the asterisk and the coprimality conditions on the $\ell_{1}$-sum. For general non-squarefree $\ell_{1}$, we may utilise Möbius inversion and the Hecke multiplicativity relation [\[eq:Hecke\]](#eq:Hecke){reference-type="eqref" reference="eq:Hecke"} to estimate the peripheral variables trivially.
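For completeness, the detection of a congruence by primitive additive characters can be checked numerically: summing the Ramanujan sums $S(x; 0; q_{1})$ over the divisors $q_{1} \mid q$ produces $q$ when $q \mid x$ and $0$ otherwise, which is the exact orthogonality relation underlying the detection (an illustrative sketch; `ramanujan_sum` is a hypothetical helper name):

```python
import cmath
from math import gcd

def ramanujan_sum(x, q1):
    # S(x; 0; q1) = sum over a mod q1 with (a, q1) = 1 of e(ax/q1)
    return sum(cmath.exp(2j * cmath.pi * a * x / q1)
               for a in range(1, q1 + 1) if gcd(a, q1) == 1)

# sum over q1 | q of S(x; 0; q1) equals q when q | x and vanishes otherwise
for q in range(1, 25):
    for x in range(-30, 31):
        total = sum(ramanujan_sum(x, q1) for q1 in range(1, q + 1) if q % q1 == 0)
        expected = q if x % q == 0 else 0
        assert abs(total - expected) < 1e-8
```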
Applying Voronoı̆ summation ([Corollary 7](#lem:Voronoi-twists){reference-type="ref" reference="lem:Voronoi-twists"}) in the $\ell_{1}$-sum implies $$\begin{gathered} \sum_{T^{1-\varepsilon} \ll \ell_{1} \ll T^{1+\varepsilon}} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{1}) e \left(\frac{ae^{2} \ell_{1} m}{q_{1}} \right) \mathcal{J}_{4}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2})\\ = \frac{T}{q} \sum_{\ell_{1}} \sum_{\sigma \in \{\pm \}} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{1}) e \left(\sigma \frac{\overline{ae^{2}} \ell_{1} \overline{m}}{q_{1}} \right) \mathcal{J}_{5}^{\sigma}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}),\end{gathered}$$ where $$\mathcal{J}_{5}^{\sigma}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}) \coloneqq \int_{0}^{\infty} \mathcal{J}_{4}(h_{0}, h_{1}, x, \ell_{2}, m, n, c_{1}, c_{2}) J_{\varphi}^{-\sigma} \left(\frac{4\pi \sqrt{\ell_{1} Tx}}{q} \right) dx$$ with $J_{\varphi}^{\pm}(x)$ defined in [\[eq:J-phi\]](#eq:J-phi){reference-type="eqref" reference="eq:J-phi"}.
Repeated integration by parts ensures an arbitrary saving unless $$\ell_{1} \ll \frac{q^{2+\varepsilon}}{T}.$$

## Voronoı̆ Summation in $\ell_{2}$

In a similar fashion, we make use of Voronoı̆ summation ([Corollary 7](#lem:Voronoi-twists){reference-type="ref" reference="lem:Voronoi-twists"}) in the $\ell_{2}$-sum, deducing $$\begin{gathered} \sum_{T^{1-\varepsilon} \ll \ell_{2} \ll T^{1+\varepsilon}} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{2}) e \left(-\frac{af^{2} \ell_{2} n}{q_{1}} \right) \mathcal{J}_{5}^{\sigma}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2})\\ = \frac{T}{q} \sum_{\ell_{2}} \sum_{\tau \in \{\pm \}} \lambda_{\varphi \otimes (\frac{q}{\cdot})}(\ell_{2}) e \left(-\tau \frac{\overline{af^{2}} \ell_{2} \overline{n}}{q_{1}} \right) \mathcal{J}_{6}^{\sigma, \tau}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}),\end{gathered}$$ where $$\mathcal{J}_{6}^{\sigma, \tau}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2}) \coloneqq \int_{0}^{\infty} \mathcal{J}_{5}^{\sigma}(h_{0}, h_{1}, \ell_{1}, x, m, n, c_{1}, c_{2}) J_{\varphi}^{-\tau} \left(\frac{4\pi \sqrt{\ell_{2} Tx}}{q} \right) dx.$$ Repeated integration by parts ensures an arbitrary saving unless $$\ell_{2} \ll \frac{q^{2+\varepsilon}}{T}.$$

## Endgame

Assembling the observations in the antecedent sections, we arrive at $$\begin{aligned} \mathcal{M}_{\varphi}(T, H) &\ll C^{2} T^{2+\varepsilon} \sum_{\substack{b, c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1}}} \frac{\mu(c_{1}) \mu(d) \mu(e) \mu(f)}{b^{4} c_{1}^{3}} \sup_{h_{0} \ll \frac{c_{1} c_{2} d^{\prime} C}{bH}} \sum_{\substack{q \ll \frac{\delta e^{2} f^{2} T^{1+\varepsilon}}{L} \\ (q, ef) = 1}} \frac{1}{q^{4}}\sum_{\ell_{1}, \ell_{2} \ll \frac{q^{2+\varepsilon}}{T}}\\ & \quad \times \lambda_{\varphi}(\ell_{1}) \lambda_{\varphi}(\ell_{2}) \sum_{\sigma, \tau \in \{\pm \}} \sum_{\substack{m \ll \frac{bc_{1} qT^{\varepsilon}}{e^{2} C} \\ (m, q) = 1}} \sum_{\substack{n \ll \frac{bc_{1}
qT^{\varepsilon}}{f^{2} C} \\ (n, q) = 1}} \left(\frac{8\delta q}{\ell_{1} \ell_{2}} \right) \delta(\sigma e^{2} \ell_{2} m \equiv \tau f^{2} \ell_{1} n \pmod{q})\\ & \quad \times \mathcal{J}_{6}^{\sigma, \tau}(h_{0}, h_{1}, \ell_{1}, \ell_{2}, m, n, c_{1}, c_{2})+T^{-2023},\end{aligned}$$ where we execute the sum over $a$ via orthogonality after replacing $a \mapsto \overline{a}$. Upon applying the Rankin--Selberg bound (Lemma [Lemma 4](#lem:Rankin-Selberg){reference-type="ref" reference="lem:Rankin-Selberg"}) and estimating everything trivially, it transpires that $$\begin{aligned} \mathcal{M}_{\varphi}(T, H) &\ll C^{2} T^{2+\varepsilon} \sum_{\substack{b, c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1}}} \frac{\mu(c_{1}) \mu(d) \mu(e) \mu(f)}{b^{4} c_{1}^{3}} \sum_{q \ll \frac{\delta e^{2} f^{2} T^{1+\varepsilon}}{L}} \frac{1}{q^{4}} \frac{q^{2+\varepsilon}}{T} \frac{q^{2+\varepsilon}}{T} \frac{bc_{1} qT^{\varepsilon}}{e^{2} C} \frac{bc_{1} qT^{\varepsilon}}{f^{2} C} \frac{1}{\sqrt{q}}\\ &\ll T^{\frac{5}{2}+\varepsilon} \sum_{\substack{c_{1}, c_{2}, d, e, f \\ d^{2} = c_{1} c_{2} d^{\prime} h_{1}}} \mu(c_{1}) \mu(d) c_{1}^{-1} \delta^{\frac{5}{2}} e^{3} f^{3} L^{-\frac{5}{2}}.\end{aligned}$$ Theorem [Theorem 1](#main){reference-type="ref" reference="main"} then follows from the optimisations $$C = \sqrt{T}, \qquad L = c_{1} c_{2} \delta e^{2} f^{2} \sqrt{T}.$$ The proof is complete. ◻

Arthur Oliver Lonsdale Atkin and Wen-Ching Winnie Li, *Twists of Newforms and Pseudo-Eigenvalues of $W$-Operators*, Inventiones Mathematicae **48** (1978), no. 3, 221--243. MR 508986 Daniel Willis Bump, Solomon Friedberg, and Dorian Morris Goldfeld (eds.), *Multiple Dirichlet Series, $L$-Functions and Automorphic Forms*, Progress in Mathematics, vol. 300, Birkhäuser/Springer, New York, 2012.
MR 2961902 Daniel Willis Bump, Solomon Friedberg, Dorian Morris Goldfeld, and Jeffrey Ezra Hoffstein (eds.), *Multiple Dirichlet Series, Automorphic Forms, and Analytic Number Theory*, Proceedings of Symposia in Pure Mathematics, vol. 75, American Mathematical Society, Providence, RI, 2006. MR 2265405 Daniel Willis Bump, Solomon Friedberg, and Jeffrey Ezra Hoffstein, *On Some Applications of Automorphic Forms to Number Theory*, Bulletin of the American Mathematical Society **33** (1996), no. 2, 157--175. MR 1359575 Valentin Blomer, Leo Goldmakher, and Benoît Louvel, *$L$-Functions with $n$-th-Order Twists*, International Mathematics Research Notices (2014), no. 7, 1925--1955. MR 3190355 Valentin Blomer and Gergely Harcos, *The Spectral Decomposition of Shifted Convolution Sums*, Duke Mathematical Journal **144** (2008), no. 2, 321--339. MR 2437682 to3em, *A Hybrid Asymptotic Formula for the Second Moment of Rankin-Selberg $L$-Functions*, Proceedings of the London Mathematical Society **105** (2012), no. 3, 473--505. MR 2974197 Valentin Blomer, *Shifted Convolution Sums and Subconvexity Bounds for Automorphic $L$-Functions*, International Mathematics Research Notices (2004), no. 73, 3905--3926. MR 2104288 to3em, *Subconvexity for a Double Dirichlet Series*, Compositio Mathematica **147** (2011), no. 2, 355--374. MR 2776608 Daniel Willis Bump, *Multiple Dirichlet Series*. Martin Čech, *The Ratios Conjecture for Real Dirichlet Characters and Multiple Dirichlet Series*, arXiv e-prints (2022), 46 pages. to3em, *Applications of Multiple Dirichlet Series in Analytic Number Theory*, Ph.D. thesis, Concordia University, 2022. to3em, *Mean Value of Real Dirichlet Characters Using a Double Dirichlet Series*, to appear in Canadian Mathematical Bulletin (2023), 17 pages. Gautam Chinta and Paul Edward Gunnells, *Weyl Group Multiple Dirichlet Series Constructed from Quadratic Characters*, Inventiones Mathematicae **167** (2007), no. 2, 327--353. 
MR 2270457 to3em, *Constructing Weyl Group Multiple Dirichlet Series*, Journal of the American Mathematical Society **23** (2010), no. 1, 189--215. MR 2552251 Sarvadaman D. S. Chowla, *The Riemann Hypothesis and Hilbert's Tenth Problem*, Mathematics and Its Applications, vol. 4, Gordon and Breach Science Publishers, New York-London-Paris, 1965. MR 177943 John Brian Conrey and Jonathan Peter Keating, *Moments of Zeta and Correlations of Divisor-Sums: I*, Philosophical Transactions of the Royal Society A **373** (2015), no. 2040, Article ID. 20140313, 11. MR 3338122 Alexander Oswald Dahl, *Subconvexity for a Twisted Double Dirichlet Series and Non-Vanishing of $L$-Functions*, Ph.D. thesis, University of Toronto, 2015, p. 71. MR 3474705 to3em, *Subconvexity for a Double Dirichlet Series and Non-Vanishing of $L$-Functions*, International Journal of Number Theory **14** (2018), no. 6, 1573--1604. MR 3827947 William Drexel Duke, John Benjamin Friedlander, and Henryk Iwaniec, *Bounds for Automorphic $L$-Functions*, Inventiones Mathematicae **112** (1993), no. 1, 1--8. MR 1207474 to3em, *A Quadratic Divisor Problem*, Inventiones Mathematicae **115** (1994), no. 2, 209--217. MR 1258903 Adrian Diaconu, Dorian Morris Goldfeld, and Jeffrey Ezra Hoffstein, *Multiple Dirichlet Series and Moments of Zeta and $L$-Functions*, Compositio Mathematica **139** (2003), no. 3, 297--360. MR 2041614 Jean-Marc Deshouillers and Henryk Iwaniec, *An Additive Divisor Problem*, Journal of the London Mathematical Society **26** (1982), no. 1, 1--14. MR 667238 Peter D. T. A. Elliott, Carlos Julio Moreno, and Freydoon Shahidi, *On the Absolute Value of Ramanujan's $\tau$-Function*, Mathematische Annalen **266** (1984), no. 4, 507--511. MR 735531 Solomon Friedberg, Jeffrey Ezra Hoffstein, and Daniel Bennett Lieman, *Double Dirichlet Series and the $n$-th Order Twists of Hecke $L$-Series*, Mathematische Annalen **327** (2003), no. 2, 315--338. 
MR 2015073 Étienne Fouvry, Emmanuel Kowalski, and Philippe Gabriel Michel, *On the Exponent of Distribution of the Ternary Divisor Function*, Mathematika **61** (2015), no. 1, 121--144. MR 3333965 Dorian Morris Goldfeld and Jeffrey Ezra Hoffstein, *Eisenstein Series of $\frac{1}{2}$-Integral Weight and the Mean Value of Real Dirichlet $L$-Series*, Inventiones Mathematicae **80** (1985), no. 2, 185--208. MR 788407 Izrail Solomonovich Gradshteyn and Iosif Moiseevich Ryzhik, *Table of Integrals, Series, and Products*, eighth ed., Elsevier/Academic Press, Amsterdam, 2015, Translated from the Russian, Translation Edited and with a Preface by Daniel Zwillinger and Victor Moll. MR 3307944 Peng Gao and Liangyi Zhao, *Subconvexity of a Double Dirichlet Series over the Gaussian Field*, to appear in The Quarterly Journal of Mathematics (2023), 10 pages. Gergely Harcos, *An Additive Problem in the Fourier Coefficients of Cusp Forms*, Mathematische Annalen **326** (2003), no. 2, 347--365. MR 1990914 David Rodney Heath-Brown, *A New Form of the Circle Method, and Its Application to Quadratic Forms*, Journal für die Reine und Angewandte Mathematik **481** (1996), 149--206. MR 1421949 Godfrey Harold Hardy and John Edensor Littlewood, *Some Problems of 'Partitio Numerorum'; III: On the Expression of a Number as a Sum of Primes*, Acta Mathematica **44** (1923), no. 1, 1--70. MR 1555183 Roman Holowinsky, *A Sieve Method for Shifted Convolution Sums*, Duke Mathematical Journal **146** (2009), no. 3, 401--448. MR 2484279 to3em, *Sieving for Mass Equidistribution*, Annals of Mathematics **172** (2010), no. 2, 1499--1516. MR 2680498 Christopher Hooley, *On the Intervals Between Numbers that Are Sums of Two Squares*, Acta Mathematica **127** (1971), 279--297. MR 294281 to3em, *On the Intervals Between Numbers that Are Sums of Two Squares. IV*, Journal für die Reine und Angewandte Mathematik **452** (1994), 79--109. 
MR 1282197 Roman Holowinsky and Kannan Soundararajan, *Mass Equidistribution for Hecke Eigenforms*, Annals of Mathematics **172** (2010), no. 2, 1517--1528. MR 2680499 Henryk Iwaniec and Emmanuel Kowalski, *Analytic Number Theory*, American Mathematical Society Colloquium Publications, vol. 53, American Mathematical Society, Providence, RI, 2004. MR 2061214 Henryk Iwaniec, *The Spectral Growth of Automorphic $L$-Functions*, Journal für die Reine und Angewandte Mathematik **428** (1992), 139--159. MR 1166510 Hervé Michel Jacquet and Robert Phelan Langlands, *Automorphic Forms on $\mathrm{GL}(2)$*, Lecture Notes in Mathematics, vol. 114, Springer-Verlag, Berlin-New York, 1970. MR 401654 Henry Hyeongsin Kim, *Functoriality for the Exterior Square of $\mathrm{GL}_{4}$ and the Symmetric Fourth of $\mathrm{GL}_{2}$*, Journal of the American Mathematical Society **16** (2003), no. 1, 139--183, With Appendix 1 by Dinakar Ramakrishnan and Appendix 2 by Henry Hyeongsin Kim and Peter Clive Sarnak. MR 1937203 Ikuya Kaneko and Wing Hong Leung, *The Short Second Moment of $\mathrm{GL}_{3}$ $L$-Functions in the Depth Aspect*, preprint (2023), 43 pages. Emmanuel Kowalski, Philippe Gabriel Michel, and Jeffrey Mark VanderKam, *Rankin-Selberg $L$-Functions in the Level Aspect*, Duke Mathematical Journal **114** (2002), no. 1, 123--191. MR 1915038 Neal I. Koblitz, *Introduction to Elliptic Curves and Modular Forms*, Graduate Texts in Mathematics, vol. 97, Springer-Verlag, New York, 1984. MR 766911 Eren Mehmet Kıral and Matthew Patrick Young, *The Fifth Moment of Modular $L$-Functions*, Journal of the European Mathematical Society **23** (2021), no. 1, 237--314. MR 4186468 Wing Hong Leung, *Hybrid Subconvexity Bound for $L \left(\frac{1}{2}, \mathrm{Sym}^{2} f \otimes \rho \right)$ via the Delta Method*, arXiv e-prints (2021), 51 pages. to3em, *A Reformulation of the Delta Method and the Subconvexity Problem*, Ph.D. thesis, The Ohio State University, 2022, p. 199. 
MR 4495301 to3em, *Shifted Convolution Sums for $\mathrm{GL}(3) \times \mathrm{GL}(2)$ Averaged over Weighted Sets*, arXiv e-prints (2022), 17 pages. Xiannan Li, *Moments of Quadratic Twists of Modular $L$-Functions*, arXiv e-prints (2022), 32 pages. Péter Maga, *The Spectral Decomposition of Shifted Convolution Sums over Number Fields*, Journal für die Reine und Angewandte Mathematik **744** (2018), 1--27. MR 3871439 Philippe Gabriel Michel, *The Subconvexity Problem for Rankin-Selberg $L$-Functions and Equidistribution of Heegner Points*, Annals of Mathematics **160** (2004), no. 1, 185--236. MR 2119720 to3em, *Recent Progresses on the Subconvexity Problem*, Astérisque (2022), no. 438, 353--401. MR 4576022 Mohan K. N. Nair, *Multiplicative Functions of Polynomial Values in Short Intervals*, Acta Arithmetica **62** (1992), no. 3, 257--269. MR 1197420 Mohan K. N. Nair and Gérald Tenenbaum, *Short Sums of Certain Arithmetic Functions*, Acta Mathematica **180** (1998), no. 1, 119--144. MR 1618321 Yiannis Nicolaos Petridis, Nicole Raulf, and Morten Skarsholm Risager, *Double Dirichlet Series and Quantum Unique Ergodicity of Weight One-Half Eisenstein Series*, Algebra & Number Theory **8** (2014), no. 7, 1539--1595. MR 3272275 William French Sawin, *General Multiple Dirichlet Series from Perverse Sheaves*, arXiv e-prints (2023), 34 pages. Goro Shimura, *On Modular Forms of Half Integral Weight*, Annals of Mathematics **97** (1973), 440--481. MR 332663 Peter Man-Kit Shiu, *A Brun-Titchmarsh Theorem for Multiplicative Functions*, Journal für die Reine und Angewandte Mathematik **313** (1980), 161--170. MR 552470 Kannan Soundararajan, *Quantum Unique Ergodicity for $\mathrm{SL}_{2}(\mathbb{Z}) \backslash \mathbb{H}$*, Annals of Mathematics **172** (2010), no. 2, 1529--1538. MR 2680500 Kannan Soundararajan and Matthew Patrick Young, *The Second Moment of Quadratic Twists of Modular $L$-Functions*, Journal of the European Mathematical Society **12** (2010), no. 
5, 1097--1116. MR 2677611 Berke Topacogullari, *The Shifted Convolution of Divisor Functions*, The Quarterly Journal of Mathematics **67** (2016), no. 2, 331--363. MR 3509996 to3em, *On a Certain Additive Divisor Problem*, Acta Arithmetica **181** (2017), no. 2, 143--172. MR 3726186 to3em, *The Shifted Convolution of Generalized Divisor Functions*, International Mathematics Research Notices (2018), no. 24, 7681--7724. MR 3892276 Seraina Regina Wachter, *Half-Integral Weight Eisenstein Series, Double Dirichlet Series and Equidistribution*, Ph.D. thesis, ETH Zürich, 2021. [^1]: The author acknowledges the support of the Masason Foundation. [^2]: This name stems from multiple $L$-functions, namely $L$-functions whose coefficients are again $L$-functions. They have proven to be a quite powerful and elegant tool that in some cases is capable of yielding results that are not yet available with other techniques. To circumvent terminological redundancy, it is convenient in this paper to call the averaged version [\[eq:def-M\]](#eq:def-M){reference-type="eqref" reference="eq:def-M"} a (multiple) shifted convolution problem, albeit being less standard. [^3]: Here and henceforth, the meaning of the symbol $\approx$ is left vague on purpose. Furthermore, we shall write temporarily $n \sim T$ in place of $T \leq n \leq 2T$, which applies to other summations. [^4]: We suppress less important variables from the notation, which applies to the ensuing integral transforms. [^5]: We assume without loss of generality that $h > 0$, since the argument would be quite similar in the other case. This kind of restriction applies to the subsequent analysis.
--- abstract: | This is an addendum to previous work on the models of populations and relative populations of small gaps $g$ across stages of Eratosthenes sieve. If we have the initial conditions in the cycle of gaps ${\mathcal G}({p_0}^{\#})$, we can exhibit exact population models for all gaps $g < 2 p_1$. For gaps beyond this threshold $2 p_1$, we could not be certain of the counts of the driving terms for a gap $g$ of various lengths. Here we extend this work by introducing the exact population models for $g=2p_1$. We are able to get this one additional case in a general form. Using the initial conditions from ${\mathcal G}({p_0}^{\#})$ we advance the model one time to obtain exact counts for the driving terms for $g=2p_1$ in ${\mathcal G}({p_1}^{\#})$. This iteration uses a different system matrix than the usual one. After this special iteration we can apply the usual dynamics to obtain the exact population model for the gap $g=2p_1$ across all stages of Eratosthenes sieve. address: "fbholt62\\@gmail.com; https://www.primegaps.info" author: - Fred B. Holt date: 25 Sept 2023 title: "Addendum: models for gaps $g=2p_{1}$" --- # Setting This is an addendum to previous work [@FBHSFU; @FBHPatterns] on exact models for the populations and relative populations of gaps $g$ across stages of Eratosthenes sieve. At each stage of Eratosthenes sieve, there is a cycle of gaps ${\mathcal G}({p}^{\#})$ of length $\phi({p}^{\#})$ (number of gaps in the cycle) and span ${p}^{\#}$ (sum of the gaps in the cycle). For example, the cycle ${\mathcal G}({5}^{\#})$ has length (number of gaps) $\phi({5}^{\#})=8$ and span (sum of gaps) ${5}^{\#}=30$. $${\mathcal G}({5}^{\#}) \makebox[0.2 in]{}= \makebox[0.2 in]{}6 \; 4 \; 2 \; 4 \; 2 \; 4 \; 6 \; 2.$$ There is a recursion from one cycle ${\mathcal G}({p_k}^{\#})$ to the next ${\mathcal G}({p_{k+1}}^{\#})$. 
We concatenate $p_{k+1}$ copies of ${\mathcal G}({p_k}^{\#})$ and then add adjacent gaps at the running sums given by the elementwise product $p_{k+1}\ast {\mathcal G}({p_k}^{\#})$. These additions of adjacent gaps are called *fusions*. The gaps that survive the fusions are the gaps between primes. The recursion across the cycles of gaps ${\mathcal G}({p_k}^{\#})$ is a discrete dynamic system. If we take initial conditions from ${\mathcal G}({p_0}^{\#})$, then we can create *exact* population models for all gaps $g < 2 p_1$. The populations all grow superexponentially, so we divide by factors of $p_k-2$ to obtain the exact models for relative populations for all gaps $g < 2p_1$. $$\begin{aligned} w_g({p_{k+1}}^{\#}) \; = \; \left[ \begin{array}{c} w_{g,1} \\ w_{g,2} \\ \vdots \\ w_{g,J} \end{array} \right]_{{p_{k+1}}^{\#}} & = & \left[ \begin{array}{ccccc} 1 & \scriptstyle \frac{1}{p_{k+1}-2} & 0 & \cdots & 0 \\ 0 & \scriptstyle \frac{p_{k+1}-3}{p_{k+1}-2} & \scriptstyle \frac{2}{p_{k+1}-2} & & 0 \\ 0 & 0 & \scriptstyle \frac{p_{k+1}-4}{p_{k+1}-2} & \ddots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \cdots & \scriptstyle \frac{p_{k+1}-J-1}{p_{k+1}-2} \end{array} \right] \cdot \left[ \begin{array}{c} w_{g,1} \\ w_{g,2} \\ \vdots \\ w_{g,J} \end{array} \right]_{{p_k}^{\#}} \\[0.2cm] & = & M_J(p_{k+1}) \cdot w_g({p_k}^{\#}) \\[0.2cm] & = & M_J^{k+1} \cdot w_g({p_0}^{\#})\end{aligned}$$ Here the elements $w_{g,j}$ denote the relative population of driving terms for gap $g$ of length $j$. $w_{g,1}({p}^{\#})$ is the relative population of the gap $g$ itself in the cycle ${\mathcal G}({p}^{\#})$. And we use the notation $M^k$ to denote the product of matrices: $$M_J^k \; = \; M_J(p_k) \cdot M_J(p_{k-1}) \cdots M_J(p_2) \cdot M_J(p_1)$$ $w_g({p_0}^{\#})$ is the vector of the initial conditions in the cycle of gaps ${\mathcal G}({p_0}^{\#})$. 
These are the counts of the gap $g$ in this cycle and of all of its driving terms, divided by the population of gaps $2$. We need $J$ to be at least as large as the longest driving terms for $g$. For more details about the discrete dynamic system and the population models, please see the prior work [@FBHSFU; @FBHPatterns].

# Models for $g < 2p_1$

The iterative model $$w_g({p_k}^{\#}) \; = \; M_J^k w_g({p_0}^{\#})$$ only applies to gaps $g < 2 p_1$. This constraint $g < 2p_1$ arises from needing to be sure that under the recursion each fusion occurs in its own copy of a driving term for $g$. This allows us to get the exact counts across driving terms of all lengths. Since the fusions are spaced according to the elementwise product $p_{k+1} \ast {\mathcal G}({p_k}^{\#})$ and the smallest element in ${\mathcal G}({p_k}^{\#})$ is $2$, the fusions are separated by at least $2 p_{k+1}$. So $g < 2p_1$ suffices to use the iterative system for the gap $g$ at every stage of the sieve. The challenge in developing models for larger gaps $g$ is that we need to get the initial populations from a cycle of gaps ${\mathcal G}({p_0}^{\#})$ such that $g < 2p_1$. The cycle ${\mathcal G}({p}^{\#})$ has length $\phi({p}^{\#})$.
| $p$ | $29$ | $31$ | $37$ | $41$ | $43$ | $47$ | $53$ | $59$ |
|------|------|------|------|------|------|------|------|------|
| $\max g$ | $60$ | $72$ | $80$ | $84$ | $92$ | $104$ | $116$ | $120$ |
| $\phi({p}^{\#})$ | $1.02E9$ | $3.07E10$ | $1.10E12$ | $4.41E13$ | $1.85E15$ | $8.53E16$ | $4.44E18$ | $2.57E20$ |

We see that computing ${\mathcal G}({59}^{\#})$ is at the horizon of current computing capability, and this would enable us to calculate the initial conditions for the models for all gaps $g \le 120$. For the models in our prior work [@FBHSFU; @FBHPatterns] we used ${\mathcal G}({37}^{\#})$ and exhibited the relative population models for $g \le 80$.

# Models for $g=2p_1$

We can extend the models to cover the case $g = 2p_1$. The important insight here is that the populations of driving terms for this gap $g$ can be exactly produced from the initial conditions $w_g({p_0}^{\#})$. We have to use a different system matrix to update the relative populations for $g$ and its driving terms from $w_g({p_0}^{\#})$ to $w_g({p_1}^{\#})$. After this one special iteration we can apply the general model described above.

The special first iteration is motivated by the following Lemma.

**Lemma 1**. *Let $s$ be a constellation of span $g$ in ${\mathcal G}({p_k}^{\#})$. If $p_{k+1} | g$, then both ends of $s$ are fused in the same copy under the recursion from ${\mathcal G}({p_k}^{\#})$ to ${\mathcal G}({p_{k+1}}^{\#})$.*

This is Lemma 28 on page 139 of [@FBHPatterns]. Let $s$ be a driving term for $g$ of length $j$.
Under the recursion from ${\mathcal G}({p_0}^{\#})$ to ${\mathcal G}({p_1}^{\#})$, the concatenation step initially produces $p_1$ copies of $s$. Each of the possible $j+1$ fusions in $s$ will occur exactly once. The $j-1$ interior fusions will result in a shorter driving term for $g$ in ${\mathcal G}({p_1}^{\#})$, and the $2$ boundary fusions will eliminate that image of $s$ as a driving term for $g$. In order to get a count of the driving terms for $g=2p_1$ in ${\mathcal G}({p_1}^{\#})$, we need to confirm that the $j-1$ interior fusions occur in separate images of $s$, and that the $2$ boundary fusions eliminate only $1$ image of $s$. For any interior fusion in $s$, the span from this fusion to either end of $s$ is strictly less than $2p_1$. But the smallest distance between fusions is $2p_1$, and thus the interior fusions will occur in separate images of $s$. The interior fusions in $s$ result in $j-1$ driving terms for $g$ of length $j-1$. By the lemma above, the two boundary fusions occur in the same image of $s$, eliminating exactly one image of $s$ as a driving term for $g$. Note that this is specific to $g=2p_1$; for $g < 2p_1$ the two boundary fusions occur in separate images of $s$. So for the gap $g=2p_1$, each driving term $s$ of length $j$ in ${\mathcal G}({p_0}^{\#})$ produces $j-1$ driving terms of length $j-1$ in ${\mathcal G}({p_1}^{\#})$, one image of $s$ is removed as a driving term for $g$, and $p_1-j$ images of $s$ are preserved intact as driving terms for $g$ in ${\mathcal G}({p_1}^{\#})$.
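This counting can be verified directly in a small case. Here is a minimal sketch (an illustration, not from the referenced papers; it assumes driving terms are counted cyclically around the cycle, as the recursion suggests): take $p_0=5$, $p_1=7$, and $g=2p_1=14$, so the update on raw counts reads $n_{g,j}({p_1}^{\#}) = (p_1-j)\,n_{g,j}({p_0}^{\#}) + j\,n_{g,j+1}({p_0}^{\#})$, the unnormalized form of the special system matrix $\widehat{M}_J(p_1)$.

```python
from math import gcd

def cycle_of_gaps(primorial):
    """Gaps between consecutive integers in [1, primorial + 1] coprime to primorial."""
    units = [n for n in range(1, primorial + 2) if gcd(n, primorial) == 1]
    return [b - a for a, b in zip(units, units[1:])]

def driving_term_counts(gaps, g):
    """Counts of cyclic runs of consecutive gaps summing to g, keyed by length j."""
    counts, N = {}, len(gaps)
    for i in range(N):
        total, j = 0, 0
        while total < g and j < N:
            total += gaps[(i + j) % N]
            j += 1
        if total == g:
            counts[j] = counts.get(j, 0) + 1
    return counts

p1, g = 7, 14                                     # g = 2*p1, the boundary case
n5 = driving_term_counts(cycle_of_gaps(30), g)    # in G(5#):  {3: 1, 4: 2}
# special first iteration: n'_j = (p1 - j) n_j + j n_{j+1}
pred = {j: (p1 - j) * n5.get(j, 0) + j * n5.get(j + 1, 0)
        for j in range(1, max(n5) + 2)}
pred = {j: c for j, c in pred.items() if c}
n7 = driving_term_counts(cycle_of_gaps(210), g)   # direct count in G(7#)
assert pred == n7 == {2: 2, 3: 10, 4: 6}
```

The single driving term of length $3$ and the two of length $4$ in ${\mathcal G}({5}^{\#})$ do produce exactly the $2+10+6$ driving terms for $g=14$ found in ${\mathcal G}({7}^{\#})$.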
This specific iteration for $w_{g=2p_1}$ is $$\begin{aligned} w_g({p_1}^{\#}) & = & \widehat{ M}_J(p_1) \cdot w_g({p_0}^{\#}) \\[0.2cm] & = & \left[ \begin{array}{cccccc} \scriptstyle \frac{p_1-1}{p_1-2} & \scriptstyle \frac{1}{p_1-2} & 0 & 0 & \cdots & 0 \\ 0 & 1 & \scriptstyle \frac{2}{p_1-2} & 0 & & 0 \\ 0 & 0 & \scriptstyle \frac{p_1-3}{p_1-2} & \scriptstyle \frac{3}{p_1-2} & \ddots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & \scriptstyle \frac{p_1-J}{p_1-2} \end{array} \right] \cdot w_g({p_0}^{\#})\end{aligned}$$ If we have computed the cycle ${\mathcal G}({p_0}^{\#})$ for initial conditions, the counts of the driving terms across a range of gaps $g$, we normalize these counts by the number of gaps $2$ and from these relative populations $w_g({p_0}^{\#})$ we have the complete exact models for the relative populations across all subsequent stages of Eratosthenes sieve: $$w_g({p_k}^{\#}) \; = \; \left\{ \begin{array}{cc} M_J(p_k) \cdot M_J(p_{k-1}) \cdots M_J(p_2) \cdot M_J(p_1) \cdot w_g({p_0}^{\#}) & {\rm if} \; g < 2p_1 \\[0.3cm] M_J(p_k) \cdot M_J(p_{k-1}) \cdots M_J(p_2) \cdot \widehat{M}_J(p_1) \cdot w_g({p_0}^{\#}) & {\rm if} \; g = 2p_1 \end{array} \right.$$ Specifically, from our initial conditions from ${\mathcal G}({37}^{\#})$ we have previously been able to exhibit the population models for all gaps $g \le 80$. Here $p_0=37$ and $p_1=41$. We can now add one more model, the model for $g=82$. To harmonize the collection of models $w_g$ for $g \le 82$, we need the same starting point $p_0$. We could use the dynamic system to advance the models for all of the $g \le 80$ up to $w_g({41}^{\#})$ and use $p_0=41$ as the starting point; or we could back $w_{82}({41}^{\#})$ up using $M^{-1}_J(41)$ to obtain an equivalent surrogate starting point $\widehat{w}_{82}({37}^{\#})$ that provides the exact model $w_{82}({p}^{\#})$ for all $p \ge 41$. We pursue this second approach here. 
$$\begin{aligned} \widehat{w}_{82}({37}^{\#}) & = & M^{-1}_J(41) \cdot w_{82}({41}^{\#}) \\ & = & M^{-1}_J(41) \cdot \widehat{M}_J(41) \cdot w_{82}({37}^{\#})\end{aligned}$$ This gives us a starting point $\widehat{w}_{82}({37}^{\#})$ that aligns with our other starting points in ${\mathcal G}({37}^{\#})$ and that provides the exact values for $w_{82}({p_k}^{\#})$ for all $p_k \ge 41$. This surrogate starting point $\widehat{w}_{82}({37}^{\#})$ is *not* the correct relative population $w_{82}({37}^{\#})$. $$\begin{aligned} \widehat{w}_{82}({37}^{\#}) & \neq & w_{82}({37}^{\#}) \\ M_J(41) \cdot \widehat{w}_{82}({37}^{\#}) & = & w_{82}({41}^{\#}) \; = \; \widehat{M}_J(41) \cdot w_{82}({37}^{\#}) \end{aligned}$$ Using ${\mathcal G}({37}^{\#})$ for the starting point, we have driving terms for $g=82$ up to length $J=19$. The number of gaps $g=2$ in ${\mathcal G}({37}^{\#})$ is $$n_{2,1}({37}^{\#}) =217929355875.$$ For the gap $g=82$ we tabulate the data from ${\mathcal G}({37}^{\#})$. Our calculations have some numerical errors on the order of $10^{-13}$. 
| $j$ | $n_{82,j}({37}^{\#})$ | $w_{82,j}({37}^{\#})$ | $w_{82,j}({41}^{\#})$ | $\widehat{w}_{82,j}({37}^{\#})$ | $l_{82,j}$ |
|---|---|---|---|---|---|
| $1$ | $0$ | $0$ | $0$ | $-2.768491E-13$ | $1.025641$ |
| $2$ | $0$ | $0$ | $2.353150E-13$ | $1.976531E-12$ | $11.553942$ |
| $3$ | $1$ | $4.5886429E-12$ | $1.160809E-09$ | $-3.035074E-12$ | $60.410483$ |
| $4$ | $3276$ | $1.5032394E-08$ | $1.415302E-07$ | $1.503443E-08$ | $194.488465$ |
| $5$ | $270422$ | $1.2408700E-06$ | $5.882215E-06$ | $1.244637E-06$ | $431.258857$ |
| $6$ | $8051838$ | $3.6947010E-05$ | $1.179125E-04$ | $3.716878E-05$ | $697.937481$ |
| $7$ | $120058788$ | $5.5090691E-04$ | $1.326320E-03$ | $5.558081E-04$ | $852.214506$ |
| $8$ | $1027245782$ | $4.7136641E-03$ | $9.081749E-03$ | $4.769260E-03$ | $800.382107$ |
| $9$ | $5411112020$ | $2.4829661E-02$ | $3.968207E-02$ | $2.519649E-02$ | $584.027075$ |
| $10$ | $18234669494$ | $8.3672387E-02$ | $1.136091E-01$ | $8.516773E-02$ | $332.122850$ |
| $11$ | $40031677310$ | $1.8369107E-01$ | $2.155096E-01$ | $1.875723E-01$ | $146.758457$ |
| $12$ | $57338080360$ | $2.6310398E-01$ | $2.702203E-01$ | $2.695709E-01$ | $49.939794$ |
| $13$ | $52822037198$ | $2.4238147E-01$ | $2.204693E-01$ | $2.492174E-01$ | $12.883935$ |
| $14$ | $30369623454$ | $1.3935536E-01$ | $1.135898E-01$ | $1.438024E-01$ | $2.461383$ |
| $15$ | $10389093440$ | $4.7671840E-02$ | $3.526600E-02$ | $4.936702E-02$ | $0.336767$ |
| $16$ | $1974527214$ | $9.0604004E-03$ | $6.171214E-03$ | $9.413221E-03$ | $0.031541$ |
| $17$ | $192967582$ | $8.8545933E-04$ | $5.642306E-04$ | $9.225035E-04$ | $0.001910$ |
| $18$ | $9665424$ | $4.4351180E-05$ | $2.673245E-05$ | $4.631847E-05$ | $0.000070$ |
| $19$ | $272272$ | $1.2493590E-06$ | $7.047666E-07$ | $1.308852E-06$ | $0.000001$ |
| sum | $217929355875$ | $1.000000$ | $1.025641$ | $1.025641$ | |

: [\[w82Tbl\]]{#w82Tbl label="w82Tbl"} Data for the gap $g=82$ from the cycle ${\mathcal G}({37}^{\#})$. The column $n_{82,j}$ lists the populations of driving terms for $g=82$ for various lengths $j$. The longest driving term has length $J=19$. The column $w_{82,j}({37}^{\#})$ contains the normalized populations for these driving terms, and $w_{82,j}({41}^{\#})$ calculates the relative populations for the driving terms in ${\mathcal G}({41}^{\#})$ using $\widehat{M}_{19}(41)$. The next column $\widehat{w}_{82,j}({37}^{\#})$ is the pre-image of $w_{82,j}({41}^{\#})$ under $M^{-1}_{19}(41)$. The final column lists the coefficients $l_{82,j}$ for the population model.

The models update the relative populations of all of the driving terms for $g=82$. If we take just the top row, we extract the model for the relative population of the gap $g=82$ itself.
$$w_{82,1}({p_k}^{\#}) = \; l_{82,1} - l_{82,2} \prod_{41}^{p_k} \frac{q-3}{q-2} + l_{82,3} \prod_{41}^{p_k} \frac{q-4}{q-2} - \cdots + l_{82,19} \prod_{41}^{p_k} \frac{q-20}{q-2}$$ for all $p_k \ge 41$. Table [1](#w82Tbl){reference-type="ref" reference="w82Tbl"} lists these coefficients $l_{82,j}$ in the final column.

# Notes on models for slightly larger $g$

The beauty of the work above is that we can work directly with the relative populations of driving terms of various lengths $j$. Can we extend these methods any further, to extract models for $g=2p_1+2$ or $g=2p_1+4$? To do so, we would need to track subpopulations among the driving terms.

![[\[2p2Fig\]]{#2p2Fig label="2p2Fig"} A diagram of the specific iteration for gaps $g=2p_1+2$ from ${\mathcal G}({p_0}^{\#})$ to ${\mathcal G}({p_1}^{\#})$. 'X' denotes any gap in a driving term $s$ for $g$ that is not a $2$. If the driving term $s$ does not begin or end with a $2$, then the general model applies. If $s$ ends with a $2$, then that interior fusion occurs in the same image of $s$ as the far boundary fusion. One fewer image of length $j-1$ survives, and one more image of length $j$ survives.](2p2.pdf){#2p2Fig width="5in"}

*Gaps $g=2p_1+2$.* This gap is small enough that $g < 2p_2$, so the only spans that complicate our tracking the counts of the driving terms for $g$ are the spans of $2p_1$ where the two fusions occur as a boundary fusion and interior fusion in a single image of a driving term $s$. This will occur iff the driving term $s$ begins or ends with a $2$. If we separate the counts for the driving terms for $g=2p_1+2$ to cover four subpopulations, we could perform a distinct iteration from $w_{2p_1+2}({p_0}^{\#})$ to $w_{2p_1+2}({p_1}^{\#})$, after which we can use the general model. Consider a driving term $s$ for $g$ of length $j$ in ${\mathcal G}({p_0}^{\#})$.
Since $g=2p_1+2$, the two boundary fusions occur in different images of $s$, so $p_1-2$ images of $s$ survive as driving terms for $g$ in ${\mathcal G}({p_1}^{\#})$. We need to track their lengths. If the first or last gap in $s$ is a $2$, then by the lemma this interior fusion occurs in the same image of $s$ as the far boundary fusion. We have the following four subpopulations of driving terms $s$ of span $2p_1+2$ and length $j$ in ${\mathcal G}({p_0}^{\#})$:

1. $s = X \ldots X$. If $s$ begins and ends with gaps other than $2$, then all of the fusions, interior and boundary, occur in separate images of $s$ during the recursive construction of ${\mathcal G}({p_1}^{\#})$. We can use the general model for these populations described above.

2. $s = X \ldots 2$. If $s$ ends with a $2$, then the interior fusion for this last gap will occur in the same image of $s$ as the boundary fusion at the start of $s$. Of the $p_1$ images of $s$ created during the concatenation step, two are eliminated as driving terms for $g$ by the boundary fusions. That boundary fusion takes an interior fusion along with it. The remaining $j-2$ interior fusions result in driving terms for $g$ in ${\mathcal G}({p_1}^{\#})$ of length $j-1$, and $p_1-j$ images of $s$ survive intact as driving terms of length $j$ for $g$ in ${\mathcal G}({p_1}^{\#})$.

3. $s = 2 \ldots X$. The cycle of gaps ${\mathcal G}({p}^{\#})$ is symmetric, so if $s$ starts with a $2$, the analysis is the same as in the previous case.

4. $s = 2 \ldots 2$. Driving terms $s$ that begin *and* end with a $2$ would fall into both of the previous cases, complicating the counts, so we separate them out as their own subpopulation. In this case *both* boundary fusions coincide with an interior fusion. Of the $j-1$ interior fusions, $j-3$ produce driving terms for $g$ in ${\mathcal G}({p_1}^{\#})$ of length $j-1$, and $p_1-j+1$ images of $s$ survive intact as driving terms of length $j$ for $g$ in ${\mathcal G}({p_1}^{\#})$.
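The four cases above amount to a single bookkeeping step. The following sketch (the function and key names are ours, not from the paper) takes the counts of the four subpopulations of driving terms for $g=2p_1+2$ in ${\mathcal G}({p_0}^{\#})$ and returns the populations of driving terms by length in ${\mathcal G}({p_1}^{\#})$:

```python
# Sketch of the one special iteration for gaps g = 2*p1 + 2.
# n[(case, j)] = count of driving terms of length j whose boundary gaps
# match `case`, with case one of "X..X", "X..2", "2..X", "2..2".
from collections import defaultdict

def special_step(n, p1):
    """One recursion step G(p0#) -> G(p1#) for the gap g = 2*p1 + 2.

    Returns m with m[j] = population of driving terms of length j for g.
    """
    m = defaultdict(int)
    for (case, j), count in n.items():
        if case == "X..X":
            # General model: j-1 interior fusions shorten to length j-1,
            # and p1 - j - 1 images survive intact at length j.
            m[j - 1] += count * (j - 1)
            m[j] += count * (p1 - j - 1)
        elif case in ("X..2", "2..X"):
            # One interior fusion coincides with a boundary fusion:
            # j-2 interior fusions shorten, p1 - j images survive intact.
            m[j - 1] += count * (j - 2)
            m[j] += count * (p1 - j)
        else:  # "2..2": both boundary fusions absorb an interior fusion.
            m[j - 1] += count * (j - 3)
            m[j] += count * (p1 - j + 1)
    return dict(m)
```

In every case a driving term contributes $p_1-2$ surviving images in total, split between lengths $j-1$ and $j$; after this one special step the general model applies.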
We could exhibit the block-banded matrices for this system. But unless we partition the population of driving terms for the gap $g=2p_1+2$ into the required subpopulations, we cannot apply this model. *Gaps $g=2p_1+4$.* For gaps $g=2p_1+4$, the subpopulations of driving terms become more complicated. We need to consider cases in which a driving term begins or ends with a $4$, and the analysis parallels the work above for $g=2p_1+2$. We also have to consider the possibility that $g=2p_2$, which would occur when $p_2=p_1+2$. # Conclusion This work serves as an addendum to the existing references [@FBHSFU; @FBHPatterns]. We do not duplicate that background here, beyond summarizing a few needed results. We have shown previously that at each stage of Eratosthenes' sieve there is a corresponding cycle of gaps ${\mathcal G}({p}^{\#})$. We can view these cycles of gaps as a discrete dynamical system, and from this system we can obtain exact models for the populations and relative populations of gaps $g < 2p_1$ if we can get the initial conditions from ${\mathcal G}({p_0}^{\#})$. In this addendum we have shown that we can produce the model for ${g=2p_1}$ from these initial conditions. This model requires one special iteration to track the count from ${\mathcal G}({p_0}^{\#})$ to ${\mathcal G}({p_1}^{\#})$, after which we can use the general model for these populations. We have further shown that in order to produce the models for ${g=2p_1+2}$ and beyond from initial conditions in ${\mathcal G}({p_0}^{\#})$, we would have to track subpopulations of the driving terms $s$ until the general model applies, that is until ${g < 2p_{k+1}}$.

F.B. Holt, *Combinatorics of the gaps between primes*, Connections in Discrete Mathematics, Simon Fraser U., arXiv 1510.00743, June 2015.

F.B. Holt, *Patterns among the Primes*, KDP, June 2022.
{ "id": "2309.16833", "title": "Models for gaps $g=2p_1$", "authors": "Fred B. Holt", "categories": "math.GM", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The purpose of this paper is to model mathematically certain mechanical aspects of defibrillation. The time deformation of the heart tissue is modeled with the elastodynamics equations, dealing with the displacement field as the main unknown. These equations are coupled with a pressure whose variations characterize the defibrillation process. The pressure variable corresponds to a Lagrange multiplier associated with the so-called global injectivity condition. We develop a hybrid optimal control approach in a general framework that covers in particular the maximization of the variations of this pressure, and also the time at which the maximum is reached. The control operator is distributed, and can be described in a form that corresponds to geometric aspects of the modeling. For mathematical convenience a damping term is added, and mathematical analysis based on the $\mathrm{L}^p$-parabolic maximal regularity is provided for the state equations and the rigorous derivation of optimality conditions. Numerical simulations for a toy model exploit these optimality conditions and illustrate the capacity of the approach. author: - Sébastien Court bibliography: - pressure_ref.bib title: "A damped elastodynamics system under the global injectivity condition: A hybrid optimal control problem" --- Nonlinear elastodynamics, Hybrid optimal control problems, PDE-constrained optimization problems, $L^p$-maximal parabolic regularity, Finite Element Method, Cardiac electrophysiology.\ \ # Introduction {#sec-intro} Defibrillation of the heart in case of arrhythmias is realized with electric shocks acting on the muscular tissue, leading to a reset of its electrical activity and thus to the re-oxygenation of the affected area. Mechanically, the oxygenation can be related to the arterial pressure, exerted by the blood on the walls of arteries as it flows through the circulatory system [@Chernysh].
Mathematically this pressure corresponds to a Lagrange multiplier associated with the constraint of constant global volume of the tissues, as they are crossed by blood, considered to be an incompressible fluid. The time variation of this pressure quantifies shape variations of the heart domain via its deformation, and thus the efficiency of the defibrillation. The time at which these variations occur is not imposed, and is actually left free, in order to choose the best time for the most efficient contraction. Thus, choosing an optimal time $\tau$ at which the maximum is reached is important, as it can improve the efficiency of the defibrillation. From a control point of view, this corresponds to a maximization problem for a function of the state variables, optimizing both a control function and the time parameter of the optimum. This constitutes a type of *hybrid* optimal control problem [@Maxmax1; @Maxmax2]. We refer to [@CKP16] for more details on related optimal control problems arising in electro-cardiology. The focus of the present article lies in the mathematical modeling of mechanical aspects of the heart deformation. The heart tissue is considered as a hyperelastic material. By applying a distributed control function on a part of the heart, the tissue is suddenly deformed. This organ is meanwhile crossed by blood, assumed to be incompressible, and so the total volume inside the heart remains constant over time. Considering that the exterior part of the boundary of the domain is only subject to rigid displacements (see Figure [\[fig0\]](#fig0){reference-type="ref" reference="fig0"}), this means that the total volume of the heart domain itself remains constant over time. A complete realistic modeling of such a problem would require multiphysics coupling, in particular concerning the complex geometry of the heart domain and connected arteries, and also the electrical aspects.
Due to the obvious complexity that arises when modeling defibrillation, we address in this article a simplified version of the problem, which still involves unsteady nonlinear elasticity models and a general optimal control formulation. The analysis and computation are non-trivial topics too. For mathematical convenience a damping effect is added, in order to facilitate the rigorous analysis of the underlying dynamics as well as the corresponding optimal control problem.\ #### Elastodynamics equations with damping and global injectivity constraint. In a smooth bounded domain $\Omega$ of $\mathbb{R}^d$ ($d \leq 3$) we consider the elastodynamics system with damping and global volume preserving constraint. The unknowns are a displacement field denoted by $u$ and a pressure variable denoted by $\mathfrak{p}$. The pressure $\mathfrak{p}$ is a Lagrange multiplier associated with the volume preserving constraint, and depends only on the time variable. The control operator is distributed, represented by a smooth mapping $\xi \mapsto f(\xi)$. The initial state is given by the couple $(u_0,\dot{u}_0)$. The couple $(u,\mathfrak{p})$ satisfies the following system: [\[sysmain\]]{#sysmain label="sysmain"} $$\begin{aligned} \displaystyle \ddot{u} - \kappa\Delta \dot{u} -\mathop{\mathrm{div}}\left(\sigma(\nabla u) \right) = f(\xi) & & \text{in } \Omega \times (0,T), \label{sysmain1}\\ \displaystyle\kappa \frac{\partial\dot{u}}{\partial n} + \sigma(\nabla u)n + \mathfrak{p}\, \mathrm{cof}(\mathrm{I}+\nabla u)n = g & & \text{on } \Gamma_N \times (0,T), \label{sysmain2} \\ \displaystyle\int_{\Omega} \mathrm{det}(\mathrm{I}+\nabla u)\, \mathrm{d}\Omega = \int_{\Omega} \mathrm{det}(\mathrm{I}+\nabla u_0)\, \mathrm{d}\Omega & & \text{in } (0,T), \label{sysmain4}\\ u=0 & & \text{on } \Gamma_D \times (0,T),\label{sysmain3}\\ \displaystyle u(\cdot,0) = u_0, \quad \dot{u}(\cdot,0) = \dot{u}_0& & \text{in } \Omega.
\label{sysmain5}\end{aligned}$$ The tensor field $\sigma(\nabla u)$ is derived from the elasticity model that we adopt. The damping term $\kappa\Delta \dot{u}$ is added for the sake of mathematical convenience. Indeed, it enables us to use well-established results for parabolic equations, while the original system is hyperbolic and nonlinear. The constraint [\[sysmain4\]](#sysmain4){reference-type="eqref" reference="sysmain4"} is the so-called Ciarlet-Nečas global injectivity condition, studied in [@Necas1987]. It represents the time preservation of the total volume of $\Omega$ under the deformation $\mathrm{Id}+u$. The right-hand side $g$ in [\[sysmain2\]](#sysmain2){reference-type="eqref" reference="sysmain2"} represents possible given surface forces. Local-in-time wellposedness for system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} has been established in [@Court2023]. In the present paper, for the analysis we prefer to assume smallness of the data rather than relying on a small time $T>0$, as it offers more straightforward wellposedness results, relying on the application of the inverse function theorem. The derivation of other types of existence results (such as global wellposedness), which would not require smallness assumptions, would go beyond the scope of the present article. Note that under smallness assumptions we can keep the displacement $u$ small enough, so that the deformation $\mathrm{Id}+u$ remains invertible, which will be assumed in the rest of the paper. #### A hybrid optimal control problem. We assume that the control function $\xi$ acts on a subdomain $\omega \subset \Omega$. The goal is to maximize an objective function $\phi^{(1)}$ at time $\tau$.
Given a cost functional $c$, the general optimal control problem that the present paper proposes to address is the following: $$\tag{$\mathcal{P}$} \left\{ \begin{array}{l}\displaystyle \max_{\xi \in \mathcal{X}_{p,T}(\omega), \tau \in (0,T)} \left( J(\xi,\tau):= \int_0^T c(u,\dot{u},\xi) \, \mathrm{d}t + \phi^{(1)}(u,\dot{u})(\tau) + \phi^{(2)}(u,\dot{u})(T)\right), \\[10pt] \text{subject to~\eqref{sysmain}.} \end{array} \right. \label{mainpb}$$ The control space $\mathcal{X}_{p,T}(\omega)$ will be defined later. The functional $\phi^{(2)}$ represents the terminal cost, and $\phi^{(1)}$ the objective functional that we want to maximize at time $\tau$. Note that in the formulation of Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"}, the state variables are coupled with the time parameter $\tau$ to be optimized. The values of the state variables at time $\tau$ depend on the control function $\xi$. Therefore the two control variables $\xi$ and $\tau$ are coupled in a complex manner, which is why we call such a problem a *hybrid* optimal control problem. This type of problem was treated in [@Maxmax1; @Maxmax2] when dealing with a time parameter, and in [@Maxmax3] when dealing with a space parameter. #### On the choice of the objective function. For example, we can decide to maximize the variations of the pressure $\mathfrak{p}$ on a short time interval $(\tau, \tau +\varepsilon)$, where the parameter $\tau$ is left free and also chosen optimally, while $\varepsilon >0$ is fixed and chosen as small as possible. This is our original motivation, related to the modeling of the defibrillation process. For several reasons, one would not let $\varepsilon$ tend to $0$, as the pressure variable is not necessarily differentiable in time in view of the regularity of the data and the functional framework in which the unknown of system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} will be considered.
Besides, it may not be technically possible in practice to choose $\varepsilon$ as small as desired. Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"} would then involve the following objective function at time $\tau$: $$\phi^{(1)}(u,\dot{u})(\tau) = \frac{\mathfrak{p}(\tau+\varepsilon) -\mathfrak{p}(\tau)}{\varepsilon}. \label{sysmainpressure}$$ From equation [\[sysmain2\]](#sysmain2){reference-type="eqref" reference="sysmain2"} (with $g=0$) the variable $\mathfrak{p}$ is indeed a function of $(u,\dot{u})$: $$\mathfrak{p} = \displaystyle -\frac{1}{|(\mathrm{Id}+u)(\Gamma_N)|} \int_{\Gamma_N} \left( \kappa \frac{\partial\dot{u}}{\partial n} + \sigma(\nabla u)n \right)\mathrm{d}\Gamma_N , \text{\quad with \quad} |(\mathrm{Id}+u)(\Gamma_N)| = \int_{\Gamma_N} |\mathrm{cof}(\mathrm{I}+\nabla u)n|_{\mathbb{R}^d} \mathrm{d}\Gamma_N.$$ In [\[sysmainpressure\]](#sysmainpressure){reference-type="eqref" reference="sysmainpressure"}, letting the parameter $\varepsilon$ tend to zero would amount to maximizing directly the time-derivative of the pressure. This would require more regularity for the state variables, and would also require incorporating $u$, $\dot{u}$ and $\ddot{u}$ in the objective function, leading to undesirable complexities. #### Methodology. The type of objective functional that we consider in Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"} requires that the state variables $u$ and $\dot{u}$ are continuous in time, with values in smooth trace spaces. Therefore a strong functional framework is adopted, corresponding to the so-called $L^p$-maximal parabolic regularity, leading us to assume that a solution $u$ of [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} satisfies $$\dot{u}\in \mathrm{L}^p(0,T; \mathbf{W}^{2,p}(\Omega)) \cap \mathrm{W}^{1,p}(0,T;\mathbf{L}^p(\Omega)),$$ with $p>3$.
See sections [2.3](#sec-notation-func){reference-type="ref" reference="sec-notation-func"} and [3](#sec-well){reference-type="ref" reference="sec-well"} for more details. With the help of results obtained in [@Pruess2002; @Arendt2007], we first study system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} linearized around $0$, in order to deduce local existence of solutions for [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"}, while assuming that the data (initial conditions and right-hand sides) are small enough. We also prove wellposedness for a non-autonomous linear system (namely system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} linearized around a non-trivial state) that will be used for rigorously deriving optimality conditions. From there we are in position to address the question of optimality conditions for Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"}. Due to the lack of regularity of the given right-hand sides, the time-derivative of the state variable, namely $\ddot{u}$, is not continuous in time, and therefore it is not possible to determine the optimal parameter $\tau$ directly by differentiating in time the function $\phi^{(1)}(u,\dot{u})$. In order to uncouple the time parameter and the state variables, we first introduce a change of variable, and reformulate Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"} in terms of the new variables, as in [@Maxmax1; @Maxmax2] (see Figure [\[fig-diagram\]](#fig-diagram){reference-type="ref" reference="fig-diagram"}). Next we introduce an adjoint system whose wellposedness is obtained by transposition, via the aforementioned non-autonomous linear system. Necessary first-order optimality conditions are then derived and expressed in terms of the state, the control function, the time parameter $\tau$ and the adjoint state.
Further, the optimality conditions so obtained are re-expressed in terms of the original state variables, by reversing the change of variables previously used. This is convenient for numerical implementation, even if this was not the approach adopted in [@Maxmax1; @Maxmax2]. [\[fig-diagram\]]{#fig-diagram label="fig-diagram"} $$\boxed{ \begin{array} {rcclclcll} & (u,\dot{u}) & \longrightarrow & (\mathcal{P}) & \longrightarrow & J & \text{\scalebox{1.2}{$\overset{\mathop{?}}{\dashrightarrow}$}} & \nabla J & \\ \text{\scriptsize{ }} \hspace*{-15pt} & \mu \big\downarrow & & \hspace{0pt} \mu\big\downarrow & & \hspace{-5pt} \mu\big\downarrow & & \hspace*{5pt}\big\uparrow\mu^{-1} \\ [5pt]%\text{\scriptsize{ }}\\[5pt] & (\tilde{u},\dot{\tilde{u}}) & \longrightarrow & (\tilde{\mathcal{P}}) & \longrightarrow & \tilde{J} & \longrightarrow & \nabla \tilde{J} & \end{array}}$$ These new expressions for the optimality conditions can be more suitable for performing numerical illustrations. The latter are performed on a 1D model with finite elements approximation combined with an augmented Lagrangian technique. #### Plan. The paper is organized as follows: Notation, assumptions and functional framework are provided in section [2](#sec-notation){reference-type="ref" reference="sec-notation"}. In particular, we show with classical examples in section [7.2](#sec-app-energies){reference-type="ref" reference="sec-app-energies"} that the assumptions made on the strain energy in section [2.4](#sec-notation-ass){reference-type="ref" reference="sec-notation-ass"} are reasonable. Section [3](#sec-well){reference-type="ref" reference="sec-well"} is dedicated to wellposedness results in the context of the $L^p$-maximal parabolic regularity. 
In section [3.1](#sec-well1){reference-type="ref" reference="sec-well1"} we study system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} linearized around $0$, leading in section [3.2](#sec-well2){reference-type="ref" reference="sec-well2"} to local existence results for system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} and also for its linearized version around some non-trivial states. Solutions for a general adjoint system are studied in section [3.3](#sec-well3){reference-type="ref" reference="sec-well3"}. Next section [4](#sec-optcond){reference-type="ref" reference="sec-optcond"} is devoted to the derivation of optimality conditions: Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"} is transformed in section [4.1](#sec-transformation){reference-type="ref" reference="sec-transformation"}, the corresponding control-to-state mapping is studied in section [4.2](#sec-cts){reference-type="ref" reference="sec-cts"}, and the main results are obtained in sections [4.4](#sec-optcond_1st){reference-type="ref" reference="sec-optcond_1st"} and [4.5](#sec-optcond-final){reference-type="ref" reference="sec-optcond-final"}. Numerical illustrations are presented in section [5](#sec-num){reference-type="ref" reference="sec-num"}. Final comments are given in the conclusion (section [6](#sec-conclusion){reference-type="ref" reference="sec-conclusion"}). In the Appendix, we present modeling aspects of the problem, in particular the formal derivation of the optimality conditions from a Lagrangian mechanics perspective (section [7.3](#sec-app-Lag){reference-type="ref" reference="sec-app-Lag"}), the investigation of classical elasticity models with regards to the assumptions made in this article (section [7.2](#sec-app-energies){reference-type="ref" reference="sec-app-energies"}) and the physical description of the control operator (section [7.4](#sec-app-control){reference-type="ref" reference="sec-app-control"}). 
## Acknowledgments {#acknowledgments .unnumbered} The author thanks Prof. Karl Kunisch (University of Graz & RICAM, Linz), Prof. Gernot Plank (Biotechmed Graz) and his research group. The discussions and ideas he had during the years he spent in Graz have led to the present work. # Notation and assumptions {#sec-notation} Let us introduce the notation for functional spaces and the assumptions on the data and the elasticity model. The reader is invited to refer to the present section when reading the rest of the paper. ## General notation {#sec-notation-general} We denote by $u\cdot v$ the inner product between two vectors $u$, $v \in \mathbb{R}^d$, and the corresponding Euclidean norm by $|u|_{\mathbb{R}^d}$. We define the tensor product $u\otimes v \in \mathbb{R}^{d\times d}$ such that $(u\otimes v)_{ij} := u_i v_j$. The inner product between two matrices $A$, $B\in \mathbb{R}^{d\times d}$ is given by $A:B = \mathrm{trace}(A^TB)$, and we recall that the associated Euclidean norm satisfies $| AB |_{\mathbb{R}^{d\times d}} \leq | A|_{\mathbb{R}^{d\times d}} |B|_{\mathbb{R}^{d\times d}}$. The tensor product between matrices is denoted by $A \otimes B \in \mathbb{R}^{d\times d \times d \times d}$, such that for every matrix $C \in \mathbb{R}^{d\times d}$ we have $(A\otimes B)C := (B:C)A \in \mathbb{R}^{d\times d}$. #### On the cofactor matrix. We denote by $\mathrm{cof}(A)$ the cofactor matrix of any matrix field $A$. Recall that this is a polynomial function of the coefficients of $A$. When $A$ is invertible, the following formula holds $$\mathrm{cof}(A) = (\mathrm{det}(A))A^{-T} .$$ Recall that $H \mapsto (\mathrm{cof}A):H$ is the differential of $A \mapsto \mathrm{det}(A)$ at point $A$.
Further, recall the differential of $A \mapsto \mathrm{cof}(A)$, given by the following formula $$D_A(\mathrm{cof}).H = \frac{1}{\mathrm{det}A} \left(\left(\mathrm{cof}(A) \otimes \mathrm{cof}(A) \right)H - \mathrm{cof}(A) H^T \mathrm{cof}(A)\right),$$ for every matrix $H \in \mathbb{R}^{d\times d}$. ## Geometric assumptions and the global injectivity condition The domain $\Omega \subset \mathbb{R}^d$ (with $d=2$ or $3$) is assumed to be smooth and bounded. Its boundary $\partial\Omega$ is made of two smooth parts $\Gamma_D$ and $\Gamma_N$ such that $\Gamma_D \cap \Gamma_N = \emptyset$ (see Figure [\[fig0\]](#fig0){reference-type="ref" reference="fig0"}), and their respective surface Lebesgue measures are positive. We assume that the surfaces $\Gamma_D$ and $\Gamma_N$ are *regular*, meaning that a tangent plane is well-defined at every point of $\Gamma_D$ and $\Gamma_N$. Therefore the outward unit normal of $\partial\Omega$ is well-defined. On $\Gamma_N$, we will assume that $n\in \mathbf{W}^{2-1/p,p}(\Gamma_N)$.\ The deformation gradient tensor associated with the displacement field $u$ is denoted by $$\Phi(u) = \nabla (\mathrm{Id}+u) = \mathrm{I}+\nabla u.$$ Equation [\[sysmain4\]](#sysmain4){reference-type="eqref" reference="sysmain4"} translates the fact that the total volume of $\Omega$ must remain constant over time.
Differentiating this equality in the direction $v$ yields $$\int_{\Omega} \mathrm{det}(\Phi(u)) \mathrm{d}\Omega = \int_{\Omega} \mathrm{det}(\Phi(u_0)) \mathrm{d}\Omega \quad \Rightarrow \quad \int_{\Omega} \mathrm{cof}(\Phi(u)): \nabla v \, \mathrm{d}\Omega = 0.$$ Further, using Piola's identity, we have $\mathop{\mathrm{div}}(\mathrm{cof}(\Phi(u))^T v) = \mathrm{cof}(\Phi(u)): \nabla v$, and then the divergence formula enables us to rewrite the quantity above as an integral over $\Gamma_N$ only, as follows $$\int_{\Omega} \mathrm{cof}(\Phi(u)): \nabla v \, \mathrm{d}\Omega = \int_{\Gamma_N} v\cdot \mathrm{cof}(\Phi(u))n \, \mathrm{d}\Gamma_N = 0,$$ if we assume $v_{|\Gamma_D} = 0$. In particular, equation [\[sysmain4\]](#sysmain4){reference-type="eqref" reference="sysmain4"} can be equivalently replaced by its time-derivative, namely $$\int_{\Gamma_N} \dot{u}\cdot \mathrm{cof}(\Phi(u))n \, \mathrm{d}\Gamma_N = 0, \label{sysmain4bis}$$ which deals with the boundary $\Gamma_N$ only. ## Functional spaces {#sec-notation-func} Throughout we consider the exponent $p>3$. In order to distinguish scalar fields, vector fields and matrix fields, we use the following notation $$\mathrm{L}^{p}(\Omega) = \left\{\varphi :\Omega \rightarrow \mathbb{R}\mid \int_{\Omega} |\varphi|_{\mathbb{R}}^p\mathrm{d}\Omega <\infty \right\}, \quad \mathbf{L}^{p}(\Omega) = [\mathrm{L}^p(\Omega)]^d, \quad \mathbb{L}^{p}(\Omega) = [\mathrm{L}^p(\Omega)]^{d\times d},$$ which we transpose by analogy to other types of Lebesgue and Sobolev spaces.
Denote $$\mathbf{W}^{1,p}_{0,D}(\Omega) := \left\{ \varphi \in \mathbf{W}^{1,p}(\Omega) \mid \varphi_{|\Gamma_D} = 0 \right\}.$$ The displacement $u$ and its time-derivative $\dot{u}$ are considered in the spaces given below: $$\begin{array} {l} u \in \mathcal{U}_{p,T}(\Omega) := \mathrm{W}^{1,p}(0,T; \mathbf{W}^{2,p}(\Omega)\cap \mathbf{W}^{1,p}_{0,D}(\Omega)) \cap \mathrm{W}^{2,p}(0,T; \mathbf{L}^p(\Omega )), \\ \dot{u} \in \dot{\mathcal{U}}_{p,T}(\Omega) := \mathrm{L}^p(0,T; \mathbf{W}^{2,p}(\Omega)\cap \mathbf{W}^{1,p}_{0,D}(\Omega)) \cap \mathrm{W}^{1,p}(0,T;\mathbf{L}^p(\Omega)). \end{array}$$ Given $r\in (1,\infty)$, we denote by $r'$ its dual exponent, satisfying $1/r+1/{r'} = 1$. The trace space for $\dot{u} \in \dot{\mathcal{U}}_{p,T}(\Omega)$ involves the Besov spaces obtained by real interpolation as $\displaystyle \left(\mathbf{L}^p(\Omega);\mathbf{W}^{2,p}(\Omega)\right)_{1/{p'},p} =: \mathbf{B}^{2/{p'}}_{pp}(\Omega)$ and $\big(\mathbf{L}^p(\Omega) ;\mathbf{W}^{1,p}_{0,D}(\Omega)\big)_{1/{p'},p} =: \mathring{\mathbf{B}}^{1/{p'}}_{pp}(\Omega)$, which coincide with $\mathbf{W}^{2/{p'},p}(\Omega)$ and $\mathbf{W}^{1/{p'},p}_{0,D}(\Omega)$, respectively. See for instance [@Triebel]. The initial conditions are assumed to lie in the trace space $\left\{ (u(0),\dot{u}(0)) \mid (u,\dot{u}) \in \mathcal{U}_{p,T}(\Omega)\times \dot{\mathcal{U}}_{p,T}(\Omega) \right\}$, namely: $$(u_0,\dot{u}_0) \in \mathcal{U}_p^{(0,1)}(\Omega) := \left(\mathbf{W}^{2,p}(\Omega)\cap \mathbf{W}^{1,p}_{0,D}(\Omega) \right) \times \left(\mathbf{W}^{2/{p'},p}(\Omega) \cap \mathbf{W}^{1/{p'},p}_{0,D}(\Omega)\right).$$ We refer to [@Chill2005] and [@Arendt2007 section 6] for more details. The choice of such a strong functional framework is motivated by the fact that the trace space described above guarantees that the gradient of the displacement is continuous in space.
Further, introduce the following spaces $$\begin{array} {l} \mathcal{F}_{p,T}(\Omega) := \mathrm{L}^p(0,T;\mathbf{L}^p(\Omega)),\\ \mathcal{G}_{p,T}(\Gamma_N) := \mathrm{W}^{1/2-1/{2p},p}(0,T; \mathbf{L}^p(\Gamma_N)) \cap \mathrm{L}^p(0,T;\mathbf{W}^{1-1/p,p}(\Gamma_N)),\\ \mathcal{H}_{p,T}(\Gamma_N) := \mathrm{W}^{1-1/{2p},p}(0,T; \mathbf{L}^p(\Gamma_N)) \cap \mathrm{L}^p(0,T;\mathbf{W}^{2-1/p,p}(\Gamma_N)), \\ \mathcal{H}_{p,T} := \mathrm{W}^{1-1/{2p},p}(0,T; \mathbb{R}). \end{array}$$ Following [@Pruess2002], the Neumann boundary condition [\[sysmain2\]](#sysmain2){reference-type="eqref" reference="sysmain2"} is considered in $\mathcal{G}_{p,T}(\Gamma_N)$, and the trace of $\dot{u}$ on $\Gamma_N$ is considered in $\mathcal{H}_{p,T}(\Gamma_N)$. More precisely, we recall the following boundary trace embedding (see for example [@Denk2006 Lemma 3.5]): $$\|v_{|\Gamma_N}\|_{\mathcal{H}_{p,T}(\Gamma_N)} \leq C\|v \|_{\dot{\mathcal{U}}_{p,T}(\Omega)}, \label{est-trace-emb}$$ where the constant $C>0$ is independent of $v$. The constraint [\[sysmain4bis\]](#sysmain4bis){reference-type="eqref" reference="sysmain4bis"} involves the trace of $\dot{u}$ on $\Gamma_N$, and this constraint (namely constraint [\[sysmain4\]](#sysmain4){reference-type="eqref" reference="sysmain4"} differentiated in time) is considered in the space $\mathcal{H}_{p,T}$. Further, we also recall this other trace embedding: $$\displaystyle \left\|\frac{\partial v}{\partial n}\right\|_{\mathcal{G}_{p,T}(\Gamma_N)} \leq C\|v \|_{\dot{\mathcal{U}}_{p,T}(\Omega)}. \label{est-trace-emb2}$$ In the Hilbert case, we will need the following estimate: $$\left\|\frac{\partial v}{\partial n}\right\|_{\mathrm{W}^{1/(2p')}(0,T;\mathbf{H}^{1/2-1/p}(\Gamma_N))'} \leq C\| v\|_{\mathrm{L}^p(0,T;\mathbf{H}^2(\Omega))\cap \mathrm{W}^{1,p}(0,T;\mathbf{L}^2(\Omega))}.
\label{est-trace-emb3}$$ Note that the trace space of $\mathrm{L}^p(0,T;\mathbf{H}^2(\Omega))\cap \mathrm{W}^{1,p}(0,T;\mathbf{L}^2(\Omega))$ coincides with $\mathbf{H}^{2/(p')}(\Omega)$.\ Finally, the pressure variable $\mathfrak{p}$ that appears in the Neumann condition will be considered such that $$\mathfrak{p} \in \mathcal{P}_{p,T} := \mathrm{W}^{1/2-1/{2p},p}(0,T; \mathbb{R}) = \mathrm{W}^{1/(2p'),p}(0,T; \mathbb{R}).$$ ## Assumptions on the strain energy and operator notation {#sec-notation-ass} In the rest of the paper we will assume that the functionals which appear in Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"}, namely $c$, $\phi^{(1)}$ and $\phi^{(2)}$, are Fréchet-differentiable on $\mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega)\times \mathcal{X}_{p,T}(\omega)$ for the functional $c$, and on $\mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega)$ for the functionals $\phi^{(1)}$ and $\phi^{(2)}$. Let us give the assumptions that we make on the other operators of the problem. ### Notation for the strain energy {#sec-operators} Recall that for $p>d$, the space $\mathbb{W}^{1,p}(\Omega)$ is an algebra. In particular, there exists a positive constant $C$, depending only on $\Omega$ and $p$, such that for all $A, \, B \in \mathbb{W}^{1,p}(\Omega)$, we have $$\|AB \|_{\mathbb{W}^{1,p}(\Omega)} \leq C \| A \|_{\mathbb{W}^{1,p}(\Omega)} \|B \|_{\mathbb{W}^{1,p}(\Omega)}. \label{W-algebra}$$ See for instance [@BB1974 Lemma A.1]. Therefore the different products between the elasticity-related tensors will be mainly understood in $\mathbb{W}^{1,p}(\Omega)$. Recall the expression of the deformation gradient $\Phi(u) = \mathrm{I}+ \nabla u$ introduced previously.
The strain energy of the elastic material is denoted by $\mathcal{W}$, and is a function of the Green--Saint-Venant strain tensor $$E(u) := \frac{1}{2}\left(\Phi(u)^T \Phi(u) - \mathrm{I}\right) = \frac{1}{2}\left((\mathrm{I}+\nabla u)^T(\mathrm{I}+\nabla u) - \mathrm{I}\right).$$ We denote classically [@Ciarlet] by $\check{\Sigma}$ the differential of $\mathcal{W}$: $$\check{\Sigma}(E) = \frac{\partial\mathcal{W}}{\partial E}(E),$$ and by $\Sigma$ its composition with $E(u)$ as follows $$\Sigma(u) := \frac{\partial\mathcal{W}}{\partial E}(E(u)).$$ We further introduce $$\sigma(\nabla u) = (\mathrm{I}+\nabla u )\Sigma(u) = \Phi(u)\Sigma(u) = (\mathrm{I}+ \nabla u) \frac{\partial\mathcal{W}}{\partial E}(E(u)),$$ which is the operator that appears in system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"}. Note that $\sigma$ is a function of $\nabla u$ only, since the strain energy is chosen to be a function of the Green--Saint-Venant strain tensor $E(u)$, which is itself a function of $\nabla u$ only.
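As a concrete instance, for the classical Saint Venant--Kirchhoff energy $\mathcal{W}(E) = \frac{\lambda}{2}(\mathrm{tr}\,E)^2 + \mu\, E:E$ one gets $\check{\Sigma}(E) = \lambda\,(\mathrm{tr}\,E)\,\mathrm{I} + 2\mu E$. The small numerical sketch below evaluates $E(u)$, $\Sigma(u)$ and $\sigma(\nabla u)$ for this model; the constants and the sample gradient $\nabla u$ are arbitrary illustration values, not taken from the paper.

```python
# Numerical sketch for the Saint Venant-Kirchhoff energy
#   W(E) = (lam/2) * tr(E)^2 + mu * (E:E),
# for which Sigma(u) = dW/dE evaluated at E(u) = lam * tr(E) * I + 2 * mu * E,
# and sigma(grad u) = Phi(u) Sigma(u) with Phi(u) = I + grad u.
# The constants lam, mu and the sample gradient are arbitrary illustration values.
import numpy as np

lam, mu = 1.0, 0.5
I = np.eye(3)

rng = np.random.default_rng(0)
grad_u = 0.1 * rng.standard_normal((3, 3))    # a small displacement gradient

Phi = I + grad_u                              # deformation gradient Phi(u)
E = 0.5 * (Phi.T @ Phi - I)                   # Green-Saint-Venant strain E(u)
Sigma = lam * np.trace(E) * I + 2 * mu * E    # Sigma(u) = dW/dE at E(u)
sigma = Phi @ Sigma                           # the tensor appearing in the system

# E and Sigma are symmetric by construction, while sigma need not be;
# the combination sigma * Phi^T = Phi Sigma Phi^T is symmetric again.
```

This makes explicit that $\sigma$ depends on $u$ only through $\nabla u$, as noted above.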
The derivation of system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} from $\mathcal{W}$ is given in section [7.3](#sec-app-Lag){reference-type="ref" reference="sec-app-Lag"}.\ The tensor $E$ linearized around $u$ in the direction $v$ is given by $$E'(u).v = \frac{1}{2}\left(\Phi(u)^T\nabla v + \nabla v^T\Phi(u) \right).$$ The linearized systems, around $0$ in section [3.1](#sec-well1){reference-type="ref" reference="sec-well1"} and around some unsteady state $u$ in sections [3.2](#sec-well2){reference-type="ref" reference="sec-well2"} and [3.3](#sec-well3){reference-type="ref" reference="sec-well3"}, involve the differentials of $\sigma(\nabla u)$ and $\mathrm{cof}(\Phi(u))$ (with respect to $\nabla u$), denoted respectively by $\sigma_L(\nabla u)$ and $\sigma_N(\nabla u)$, and given as follows [\[tensor-linear\]]{#tensor-linear label="tensor-linear"} $$\begin{aligned} \sigma_L(\nabla u).\nabla v & = & \nabla v \Sigma(u) + (\mathrm{I}+\nabla u)\frac{\partial^2 \mathcal{W}}{\partial E^2}(E(u)).(E'(u).v), \label{tensor-lin1}\\ \sigma_N(\nabla u).\nabla v %& = & D_{u}(\cof)(\Phi(u)).\nabla v \label{tensor-lin2}\\ & = & \frac{1}{\mathrm{det}\Phi(u)} \left(\left(\mathrm{cof}\Phi(u) \otimes \mathrm{cof}\Phi(u) \right)\nabla v - \mathrm{cof}\Phi(u) \nabla v^T \mathrm{cof}\Phi(u)\right). \label{tensor-lin2}\end{aligned}$$ Note that $\sigma_L(\nabla u)$ is symmetric, by assumptions $\mathbf{A2}$ and $\mathbf{A3}$. A variational formulation of its expression gives, for all vector fields $w$, $$\begin{array} {rcl} (\sigma_L(\nabla u).\nabla v): \nabla w & = & (\nabla v \Sigma(u)):\nabla w + \displaystyle \left(\frac{\partial^2 \mathcal{W}}{\partial E^2}(E(u)).(E'(u).v)\right): (E'(u).w). \end{array}$$ The operator $v \mapsto (\sigma_N(\nabla u).\nabla v)n$ is symmetric too.
Indeed, for all smooth test functions $\zeta$ such that $\zeta_{|\Gamma_D} = 0$, we first express, using the divergence formula and Piola's identity, $$%\begin{array} {rcl} \displaystyle \int_{\Gamma_N} \zeta \cdot \mathrm{cof}(\Phi(u))n \mathrm{d}\Gamma_N = \displaystyle \int_{\Gamma_N} (\mathrm{cof}(\Phi(u))^T\zeta) \cdot n \mathrm{d}\Gamma_N \\ = \displaystyle \int_{\Omega} \mathop{\mathrm{div}}\left(\mathrm{cof}(\Phi(u))^T \zeta\right) \mathrm{d}\Omega \\ = \displaystyle \int_{\Omega} \mathrm{cof}(\Phi(u)): \nabla \zeta \mathrm{d}\Omega. %\end{array}$$ Next, differentiating the left- and right-hand sides of this equality yields $$\begin{array} {rcl} %\displaystyle & & \displaystyle\int_{\Gamma_N} \zeta \cdot (\sigma_N(\nabla u).\nabla v)n\, \mathrm{d}\Gamma_N = \displaystyle \int_{\Omega} (\sigma_N(\nabla u).\nabla v): \nabla \zeta \, \mathrm{d}\Omega \\[5pt] & = & \displaystyle \int_{\Omega} \frac{1}{\mathrm{det}\Phi(u)} \left( (\mathrm{cof}(\Phi(u)) : \nabla v)(\mathrm{cof}(\Phi(u)):\nabla \zeta) - (\mathrm{cof}(\Phi(u))\nabla v^T):(\nabla \zeta\, \mathrm{cof}(\Phi(u))^T) \right) \mathrm{d}\Omega. \end{array}$$ This symmetric form shows that the operator $v\mapsto (\sigma_N(\nabla u).\nabla v)n$ is symmetric, and in particular we have $$\displaystyle\int_{\Gamma_N} \zeta \cdot (\sigma_N(\nabla u).\nabla v)n \, \mathrm{d}\Gamma_N = \displaystyle\int_{\Gamma_N} (\sigma_N(\nabla u).\nabla \zeta)n \cdot v\, \mathrm{d}\Gamma_N.$$ When dealing with the adjoint system (from section [3.3](#sec-well3){reference-type="ref" reference="sec-well3"}) we will still use $\sigma_N(\nabla u)^{\ast}.\nabla \zeta$, for the sake of consistency. ### Assumptions on the strain energy First, we define what we call an *admissible* operator for a second-order linear parabolic system. **Definition 1**.
*Introduce the following Hilbert spaces $$\mathcal{V}_0(\Omega):= \displaystyle \left\{v\in \mathbf{H}^1(\Omega) \mid v_{|\Gamma_D} = 0 \right\}, \quad \mathcal{V}_0(\Gamma_N) := \mathbf{H}^{1/2}(\Gamma_N).$$ Consider the following abstract system $$\begin{array} {rcl} \ddot{u} - \kappa \Delta \dot{u} -\mathop{\mathrm{div}}(B.\nabla u) = f & & \text{in $\Omega \times (0,T)$},\\ \kappa \displaystyle \frac{\partial\dot{u}}{\partial n} + (B.\nabla u) n = g & & \text{on $\Gamma_N \times (0,T)$},\\ u = 0 & & \text{on $\Gamma_D \times (0,T)$},\\ u(\cdot,0) = u_0, \quad \dot{u}(\cdot,0) = \dot{u}_0& & \text{in $\Omega$},\\ \end{array} \label{sys-beta}$$ Given $T>0$, we say that the operator $B$ is *admissible* if for all $$f \in \mathrm{L}^2(0,T;\mathcal{V}_0(\Omega)'), \quad g \in \mathrm{L}^2(0,T;\mathcal{V}_0(\Gamma_N)'), \quad u_0 \in \mathbf{L}^2(\Omega), \quad \dot{u}_0 \in \mathbf{L}^2(\Omega)$$ there exists a unique solution $\dot{u}$ to system [\[sys-beta\]](#sys-beta){reference-type="eqref" reference="sys-beta"} such that $\dot{u} \in \mathrm{L}^2(0,T;\mathcal{V}_0(\Omega)) \cap \mathrm{H}^1(0,T;\mathcal{V}_0(\Omega)')$.* The functional framework in the definition above corresponds to the standard notion of weak solutions in Hilbert spaces for a second-order parabolic system.\ We summarize below the general assumptions on the strain energy $\mathcal{W}$ that are needed for the analysis. Using the notation introduced previously from $\mathcal{W}$, these assumptions are listed below: $\mathbf{A1}$ : The Nemytskii operator $\mathcal{W}: \mathbb{W}^{1,p}(\Omega) \ni E \mapsto \mathcal{W} (E) \in \mathbb{R}$ is twice continuously Fréchet-differentiable. $\mathbf{A2}$ : For every matrix $E \in \mathbb{R}^{d\times d}$, the tensor $\check{\Sigma}(E)$ defines a symmetric matrix field. $\mathbf{A3}$ : The operator $\sigma_L(0)$ is *admissible* in the sense of Definition [Definition 1](#def-ass-op){reference-type="ref" reference="def-ass-op"}.
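Before commenting on these assumptions, note that the closed-form differentials $E'(u).v$ and $\sigma_N(\nabla u).\nabla v$ recalled above lend themselves to a quick finite-difference sanity check. An illustrative NumPy sketch (all data random; $\mathrm{cof}\, F$ computed as $\mathrm{det}(F)\, F^{-T}$, and $(\mathrm{cof}\Phi \otimes \mathrm{cof}\Phi)\nabla v$ read as $(\mathrm{cof}\Phi : \nabla v)\, \mathrm{cof}\Phi$):

```python
import numpy as np

# Finite-difference sanity check (illustrative data) of the differentials
# E'(u).v = (Phi^T grad_v + grad_v^T Phi)/2 and sigma_N(grad_u).grad_v,
# the derivative of cof(Phi(u)) in the direction grad_v.
rng = np.random.default_rng(1)
Gu = 0.1 * rng.standard_normal((3, 3))   # grad u
Gv = rng.standard_normal((3, 3))         # grad v (direction)
I = np.eye(3)
Phi = I + Gu
h = 1e-6

cof = lambda F: np.linalg.det(F) * np.linalg.inv(F).T
E_of = lambda G: 0.5 * ((I + G).T @ (I + G) - I)

# E'(u).v against a central difference of E
dE = 0.5 * (Phi.T @ Gv + Gv.T @ Phi)
dE_fd = (E_of(Gu + h * Gv) - E_of(Gu - h * Gv)) / (2 * h)
assert np.allclose(dE, dE_fd, atol=1e-6)

# sigma_N(grad_u).grad_v = (1/det Phi)[(cof Phi : grad_v) cof Phi - cof Phi grad_v^T cof Phi]
C = cof(Phi)
dcof = (np.sum(C * Gv) * C - C @ Gv.T @ C) / np.linalg.det(Phi)
dcof_fd = (cof(I + Gu + h * Gv) - cof(I + Gu - h * Gv)) / (2 * h)
assert np.allclose(dcof, dcof_fd, atol=1e-5)
```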
Assumptions $\mathbf{A1}$--$\mathbf{A2}$ are quite natural. As for Assumption $\mathbf{A3}$, we have from [\[tensor-lin1\]](#tensor-lin1){reference-type="eqref" reference="tensor-lin1"} the following expression: $$\sigma_L(0).\nabla v = \nabla v \Sigma(0) + \displaystyle \frac{\partial^2\mathcal{W}}{\partial E^2}(0).(E'(0).v) = \nabla v \Sigma(0) + \displaystyle \frac{1}{2} \frac{\partial^2\mathcal{W}}{\partial E^2}(0).(\nabla v + \nabla v^T).$$ In section [7.2](#sec-app-energies){reference-type="ref" reference="sec-app-energies"} we show that well-known examples of strain energies from the literature fulfill these assumptions. ### On the control operator {#sec-control-operator} The control operator, denoted as follows $$f: \mathcal{X}_{p,T}(\omega)\ni \xi \mapsto f(\xi) \in \mathcal{F}_{p,T}(\Omega),$$ is distributed on a subdomain $\omega \subset\subset \Omega$, as it appears in equation [\[sysmain1\]](#sysmain1){reference-type="eqref" reference="sysmain1"}. We assume that $f$ is Fréchet-differentiable on $\mathcal{X}_{p,T}(\omega)$, with values in $\mathcal{F}_{p,T}(\Omega)$. We refer to section [7.4](#sec-app-control){reference-type="ref" reference="sec-app-control"} for modeling-related comments on the control operator. In particular, $f$ may possibly be linear. # Wellposedness results {#sec-well} The goal of this section is to establish existence of solutions for system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"}, and also for its linearized version around a non-trivial state, which will be used in section [4](#sec-optcond){reference-type="ref" reference="sec-optcond"}. We first show in section [3.1](#sec-well1){reference-type="ref" reference="sec-well1"} that the linearized system around $0$ is well-posed in the context of $L^p$-maximal regularity.
Via the inverse function theorem, we deduce in section [3.2](#sec-well2){reference-type="ref" reference="sec-well2"} that the same property holds for the state system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} under smallness assumptions on the data, and also for the non-autonomous linear system, namely system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} linearized around a non-trivial state $(u,\mathfrak{p})$. We rely on these results to study the adjoint system in section [3.3](#sec-well3){reference-type="ref" reference="sec-well3"}. ## $L^p$-maximal regularity for the linear autonomous system {#sec-well1} System [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} linearized around $(u,\mathfrak{p}) = (0,0)$ reads formally as follows: [\[siszero\]]{#siszero label="siszero"} $$\begin{aligned} %\begin{array} {rcl} \ddot{u} - \kappa \Delta \dot{u} -\mathop{\mathrm{div}}(\sigma_L(0).\nabla u) = f & & \text{in $\Omega \times (0,T)$}, \label{siszero1}\\ \kappa \displaystyle \frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u)n + \mathfrak{p}\, n = g & & \text{on $\Gamma_N \times (0,T)$},\\ \displaystyle \int_{\Gamma_N} u\cdot n\, \mathrm{d}\Gamma_N = h & & \text{on $(0,T)$}, \label{siszero3}\\ u = 0 & & \text{on $\Gamma_D \times (0,T)$}, \\ u(\cdot,0) = u_0, \quad \dot{u}(\cdot,0) = \dot{u}_0& & \text{in $\Omega$.} %\end{array}\end{aligned}$$ The goal of this subsection is to provide wellposedness for system [\[siszero\]](#siszero){reference-type="eqref" reference="siszero"}, which is stated in Corollary [Corollary 4](#coro-well-auto-NH){reference-type="ref" reference="coro-well-auto-NH"}. Let us first omit constraint [\[siszero3\]](#siszero3){reference-type="eqref" reference="siszero3"}, and the associated pressure $\mathfrak{p}$. We state the following result dealing with a general second-order linear parabolic system with non-homogeneous right-hand sides and initial conditions.
**Proposition 2**. *Let $T \in (0,\infty)$. Assume that $f\in \mathcal{F}_{p,T}(\Omega), \quad g\in \mathcal{G}_{p,T}(\Gamma_N), \quad (u_0,\dot{u}_0) \in \mathcal{U}^{(0,1)}_p(\Omega)$ with the compatibility condition $\displaystyle \kappa\frac{\partial\dot{u}_0}{\partial n} + (\sigma_L(0).\nabla u_0)n = g(\cdot,0)$ on $\Gamma_N$. Then the following system $$\begin{array} {rcl} \ddot{u} - \kappa \Delta \dot{u} -\mathop{\mathrm{div}}(\sigma_L(0).\nabla u) = f & & \text{in $\Omega \times (0,T)$},\\ \kappa \displaystyle \frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u) n = g & & \text{on $\Gamma_N \times (0,T)$},\\ u = 0 & & \text{on $\Gamma_D \times (0,T)$},\\ u(\cdot,0) = u_0, \quad \dot{u}(\cdot,0) = \dot{u}_0& & \text{in $\Omega$},\\ \end{array} \label{sys-begin}$$ admits a unique solution $u\in \mathcal{U}_{p,T}(\Omega)$. Further, the following estimate holds $$\|u\|_{\mathcal{U}_{p,T}(\Omega)} \leq C(T)\left( \|f\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} \right),$$ where the constant $C(T)>0$ is non-decreasing with respect to $T$.* *Proof.* Such a result falls within the framework of the so-called $L^p$-maximal parabolic regularity [@AMS2003]. Given Assumption $\mathbf{A3}$, the result stated in [@Arendt2007 Theorem 6.1] addresses the question of existence of solutions for this type of second-order autonomous equation with homogeneous Dirichlet conditions on $\partial\Omega$. System [\[sys-begin\]](#sys-begin){reference-type="eqref" reference="sys-begin"} can be rewritten with $\dot{u}$ as the main unknown, and thus becomes a first-order parabolic system. Then the results provided in [@Pruess2002] when considering mixed boundary conditions apply, and thus we obtain the $L^p$-maximal regularity property for system [\[sys-begin\]](#sys-begin){reference-type="eqref" reference="sys-begin"}.
◻ Now we establish the same type of result for system [\[siszero\]](#siszero){reference-type="eqref" reference="siszero"}, when constraint [\[siszero3\]](#siszero3){reference-type="eqref" reference="siszero3"} is homogeneous: **Proposition 3**. *Let $T \in (0,\infty)$, and assume the hypotheses of Proposition [Proposition 2](#prop-well-auto0){reference-type="ref" reference="prop-well-auto0"}, with additionally $$\displaystyle \int_{\Gamma_N} u_0 \cdot n \, \mathrm{d}\Gamma_N = 0.$$ Then the following system* *[\[syslih\]]{#syslih label="syslih"} $$\begin{aligned} %\begin{array} {rcl} \ddot{u} - \kappa \Delta \dot{u} -\mathop{\mathrm{div}}(\sigma_L(0).\nabla u) = f & & \text{in $\Omega \times (0,T)$}, \label{syslih1}\\ \kappa \displaystyle \frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u) n +\mathfrak{p}\, n = g & & \text{on $\Gamma_N \times (0,T)$}, \label{syslih15}\\ \displaystyle \int_{\Gamma_N} u\cdot n\, \mathrm{d}\Gamma_N = 0 & & \text{in $(0,T)$} , \label{syslih2}\\ u = 0 & & \text{on $\Gamma_D \times (0,T)$},\label{syslih4}\\ u(\cdot,0) = u_0, \quad \dot{u}(\cdot,0) = \dot{u}_0& & \text{in $\Omega$}, %\end{array}\end{aligned}$$* *admits a unique solution $(u,\mathfrak{p})\in \mathcal{U}_{p,T}(\Omega)\times \mathcal{P}_{p,T}$. Further, the following estimate holds $$\|u\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{p}\|_{\mathcal{P}_{p,T}}\leq C(T)\left( \|f\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} \right), \label{est-prop32}$$ where the constant $C(T)>0$ is non-decreasing with respect to $T$.* The proof of Proposition [Proposition 3](#prop-well-auto){reference-type="ref" reference="prop-well-auto"} is given in section [7.1.1](#sec-app-tek1){reference-type="ref" reference="sec-app-tek1"}. We deduce the same result when the constraint [\[syslih2\]](#syslih2){reference-type="eqref" reference="syslih2"} is non-homogeneous.
More precisely, we consider constraint [\[siszero3\]](#siszero3){reference-type="eqref" reference="siszero3"} with $h\in \mathcal{H}_{p,T} = \mathrm{W}^{1-1/(2p),p}(0,T;\mathbb{R})$. **Corollary 4**. *Let $T \in (0,\infty)$, and assume $f\in \mathcal{F}_{p,T}(\Omega), \quad g\in \mathcal{G}_{p,T}(\Gamma_N), \quad h\in \mathcal{H}_{p,T}, \quad (u_0,\dot{u}_0) \in \mathcal{U}^{(0,1)}_p(\Omega)$ satisfying the compatibility conditions $$\displaystyle g(\cdot,0) = \kappa\frac{\partial\dot{u}_0}{\partial n} +(\sigma_L(0).\nabla u_0)n \ \text{on } \Gamma_N, \quad h(0) = \displaystyle \int_{\Gamma_N} u_0 \cdot n\, \mathrm{d}\Gamma_N \quad \text{and} \quad \dot{h}(0) = \displaystyle \int_{\Gamma_N} \dot{u}_0 \cdot n\, \mathrm{d}\Gamma_N.$$ Then there exists a unique solution $(u,\mathfrak{p})$ to the following system* *[\[siszeroNH\]]{#siszeroNH label="siszeroNH"} $$\begin{aligned} %\begin{array} {rcl} \ddot{u} - \kappa \Delta \dot{u} -\mathop{\mathrm{div}}(\sigma_L(0).\nabla u) = f & & \text{in $\Omega \times (0,T)$},\\ \kappa \displaystyle \frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u) n +\mathfrak{p}\, n = g & & \text{on $\Gamma_N \times (0,T)$},\\ \displaystyle \int_{\Gamma_N} u\cdot n\, \mathrm{d}\Gamma_N = h & & \text{in $(0,T)$} , \label{siszeroNH3}\\ u = 0 & & \text{on $\Gamma_D \times (0,T)$},\\ u(\cdot,0) = u_0, \quad \dot{u}(\cdot,0) = \dot{u}_0& & \text{in $\Omega$}, %\end{array}\end{aligned}$$* *and it satisfies the following estimate $$\|u\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{p}\|_{\mathcal{P}_{p,T}}\leq C(T)\left( \|f\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \|h\|_{\mathcal{H}_{p,T}} + \|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} \right),$$ where the constant $C(T)>0$ is non-decreasing with respect to $T$.* The technical proof of Corollary [Corollary 4](#coro-well-auto-NH){reference-type="ref" reference="coro-well-auto-NH"} is given in section [7.1.2](#sec-app-tek2){reference-type="ref"
reference="sec-app-tek2"}. ## Local existence result for the state system {#sec-well2} Define the mapping $$\begin{array} {rrcc} \mathcal{K}: & %\text{\begin{small}$ \mathcal{U}_{p,T}(\Omega) \times \mathcal{P}_{p,T} %$\end{small}} & \rightarrow & %\text{\begin{small}$ \mathcal{F}_{p,T}(\Omega) \times \mathcal{G}_{p,T}(\Gamma_N) \times \mathcal{H}_{p,T} \times \mathcal{U}_p^{(0,1)}(\Omega) %$\end{small}} \\[5pt] & (u,\mathfrak{p}) & \mapsto & \left( \begin{array}{c} \ddot{u} - \kappa \Delta\dot{u} - \mathop{\mathrm{div}}(\sigma(\nabla u))\\ \kappa \displaystyle \frac{\partial\dot{u}}{\partial n} + \sigma(\nabla u)n +\mathfrak{p}\, \mathrm{cof}(\Phi(u))n \\ \displaystyle \int_{\Omega} \mathrm{det}(\Phi(u)) \mathrm{d}\Omega\\ (u(\cdot,0), \dot{u}(\cdot,0)) \end{array} \right). \end{array}$$ We have $\mathcal{K}(0,0) = (-\mathop{\mathrm{div}}(\sigma(0)),\sigma(0)n,|\Omega|,0)^T$. From Corollary [Corollary 4](#coro-well-auto-NH){reference-type="ref" reference="coro-well-auto-NH"}, the differential of the mapping $\mathcal{K}$ at $(u,\mathfrak{p}) = (0,0)$ is an isomorphism. Therefore, by the inverse function theorem, system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} is locally wellposed. More precisely, we state the following result: **Proposition 5**. *Let $T\in(0,\infty)$.
There exists $\eta>0$ such that if $$\|f+ \mathop{\mathrm{div}}(\sigma(0))\|_{\mathcal{F}_{p,T}(\Omega)} + \|g-\sigma(0)n\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} \leq \eta$$ with the compatibility condition $\displaystyle \kappa \frac{\partial\dot{u}_0}{\partial n} + \sigma(\nabla u_0)n = g(\cdot,0)$ on $\Gamma_N$, then system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} admits a unique solution $(u,\mathfrak{p}) \in \mathcal{U}_{p,T}(\Omega) \times \mathcal{P}_{p,T}$.* *Proof.* Note that the smallness assumption on $u_0 \in \mathbf{W}^{2,p}(\Omega)$ implies $$\begin{array} {rcl} \displaystyle \left||\Omega| - \int_{\Omega}\mathrm{det}(\Phi(u_0))\mathrm{d}\Omega \right| = \left| \int_{\Omega}\mathrm{det}(\Phi(0))\mathrm{d}\Omega - \int_{\Omega}\mathrm{det}(\Phi(u_0))\mathrm{d}\Omega \right| & \leq & \|\nabla u_0\|_{\mathbb{W}^{1,p}(\Omega)} \displaystyle \sup_{\alpha \in [0,1]} \int_{\Omega} \| \mathrm{cof}(\mathrm{I}+ \alpha \nabla u_0)\|\mathrm{d}\Omega\\ & \leq & \displaystyle C\eta\sum_{i=0}^{d-1} \eta^i, \end{array}$$ where we have used the mean-value theorem in the algebra $\mathbb{W}^{1,p}(\Omega)$. Therefore, provided that $\eta>0$ is chosen small enough, the assumptions of the inverse function theorem are satisfied, which yields the result. ◻ Further, the differential of the mapping $\mathcal{K}$ is also locally an isomorphism, which means that system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} linearized around some state $(u,\mathfrak{p})$ is well-posed, under smallness assumptions on $(u,\mathfrak{p})$. Such smallness is ensured by assuming the data small enough, by virtue of Proposition [Proposition 5](#prop-well-sysmain){reference-type="ref" reference="prop-well-sysmain"}.
This non-autonomous linear system reads formally as follows, where $(v,\mathfrak{q})$ denotes its unknown: [\[syslin-nonauto\]]{#syslin-nonauto label="syslin-nonauto"} $$\begin{aligned} %\begin{array} {rcl} \ddot{v} - \kappa \Delta\dot{v} -\mathop{\mathrm{div}}(\sigma_L(\nabla u).\nabla v) = f & & \text{in $\Omega \times (0,T)$},\\ \kappa \displaystyle \frac{\partial\dot{v}}{\partial n} + \Big(\big(\sigma_L(\nabla u)+\mathfrak{p}\, \sigma_N(\nabla u)\big).\nabla v \Big) n + \mathfrak{q}\, \mathrm{cof}(\Phi(u))n = g & & \text{on $\Gamma_N \times (0,T)$},\\ \displaystyle \int_{\Gamma_N} v\cdot \mathrm{cof}(\Phi(u))n\, \mathrm{d}\Gamma_N = 0 & & \text{on $(0,T)$}, \label{syslin-nonauto3}\\ v = 0 & & \text{on $\Gamma_D \times (0,T)$},\\ v(\cdot,0) = 0,%v_0, \quad \dot{v}(\cdot,0) = 0, & & %\dot{v}_0 & & \text{in $\Omega$}. %\end{array}\end{aligned}$$ Recall that $\sigma_N$ is introduced in [\[tensor-lin2\]](#tensor-lin2){reference-type="eqref" reference="tensor-lin2"}. Note that in the formulation of system [\[syslin-nonauto\]](#syslin-nonauto){reference-type="eqref" reference="syslin-nonauto"}, only $u$ appears, not $\dot{u}$, as the terms involving $\dot{u}$ are linear in system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"}. We state the following result: **Proposition 6**. *Let $T\in(0,\infty)$, and assume that $(u,\mathfrak{p})$ is small enough in $\mathcal{U}_{p,T}(\Omega) \times \mathcal{P}_{p,T}$.
Then for all $f\in \mathcal{F}_{p,T}(\Omega)$, $g \in \mathcal{G}_{p,T}(\Gamma_N)$ satisfying the compatibility condition $g(\cdot,0) = 0$, system [\[syslin-nonauto\]](#syslin-nonauto){reference-type="eqref" reference="syslin-nonauto"} admits a unique solution $(v,\mathfrak{q}) \in \mathcal{U}_{p,T}(\Omega) \times \mathcal{P}_{p,T}$, and it satisfies $$\|v\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{q} \|_{\mathcal{P}_{p,T}} \leq C(u,\mathfrak{p}) \left( \|f\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} %+\|(v_0,\dot{v}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} \right). \label{est-prop-nonauto}$$ The constant $C(u,\mathfrak{p})>0$ is independent of $(v,\mathfrak{q})$.* ## The adjoint system {#sec-well3} Let us first rewrite the second-order linear system [\[syslin-nonauto\]](#syslin-nonauto){reference-type="eqref" reference="syslin-nonauto"} in the form of a first-order parabolic system, by setting $(y_0,y_1) = (u,\dot{u})$ and $(z_0,z_1) = (v,\dot{v})$. More generally, we consider the following system: [\[sysx\]]{#sysx label="sysx"} $$\begin{aligned} \dot{z}_0 - z_1 = f_0 & & \text{in $\Omega \times(0,T)$}, \label{sysx_0} \\ \dot{z}_1 - \kappa \Delta z_1 - \mathop{\mathrm{div}}(\sigma_L(\nabla y_0).\nabla z_0) = f_1 & & \text{in $\Omega \times(0,T)$}, \label{sysx1} \\ \kappa \frac{\partial z_1}{\partial n} + \Big((\sigma_L + \mathfrak{p}\, \sigma_N)(\nabla y_0).\nabla z_0\Big)n + \mathfrak{q}\, \mathrm{cof}\left(\Phi(y_0) \right) n = g & & \text{on $\Gamma_N\times (0,T)$}, \label{sysx2}\\ \displaystyle \int_{\Gamma_N} z_0 \cdot \mathrm{cof}(\Phi(y_0))n\, \mathrm{d}\Gamma_N = 0 & & \text{in } (0,T),\\ z_1 = 0 & & \text{on $\Gamma_D\times (0,T)$}, \\ z_0(\cdot, 0) = 0, \displaystyle \quad z_1(\cdot, 0) = 0 & & \text{in $\Omega$}.\end{aligned}$$ Note that only $y_0 = u$ appears in the formulation of system [\[sysx\]](#sysx){reference-type="eqref" reference="sysx"}, not $y_1$.
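The change of unknowns $(z_0,z_1) = (v,\dot{v})$ is the standard reduction of the second-order strongly damped system to first order. As a purely illustrative sketch (1-D scalar analogue with finite differences and arbitrary coefficients; none of this belongs to the analysis), the same reduction can be fed to a stiff ODE solver, and the strong Kelvin--Voigt-type damping $-\kappa \Delta \dot{u}$ is seen to dissipate the energy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D scalar sketch of the reduction to first order: the model problem
# u_tt - kappa * u_txx - c * u_xx = 0 on (0,1) with homogeneous Dirichlet
# conditions, discretized by centered finite differences, then rewritten
# in the first-order unknowns (z0, z1) = (u, u_t).  All values illustrative.
n, kappa, c = 50, 0.1, 1.0
h = 1.0 / (n + 1)
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2

def rhs(t, y):
    z0, z1 = y[:n], y[n:]
    return np.concatenate([z1, kappa * lap @ z1 + c * lap @ z0])

x = np.linspace(h, 1.0 - h, n)
y_init = np.concatenate([np.sin(np.pi * x), np.zeros(n)])
sol = solve_ivp(rhs, (0.0, 2.0), y_init, method="BDF", rtol=1e-8, atol=1e-10)

def energy(y):
    """Kinetic plus elastic energy of the semi-discrete solution."""
    u, ut = y[:n], y[n:]
    du = np.diff(np.concatenate([[0.0], u, [0.0]]))
    return 0.5 * h * np.sum(ut**2) + 0.5 * c * np.sum(du**2) / h

assert sol.success and energy(sol.y[:, -1]) < energy(y_init)
```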
Using Proposition [Proposition 6](#prop-well-syslin){reference-type="ref" reference="prop-well-syslin"}, we state the following proposition: **Proposition 7**. *Let $T\in(0,\infty)$, and assume that $(y_0,\mathfrak{p})$ is small enough in $\mathcal{U}_{p,T}(\Omega) \times \mathcal{P}_{p,T}$. Then for all $f_0 \in \dot{\mathcal{U}}_{p,T}(\Omega), \quad f_1\in \mathcal{F}_{p,T}(\Omega), \quad g \in \mathcal{G}_{p,T}(\Gamma_N)$ satisfying the compatibility condition $g(\cdot,0) = 0$, system [\[sysx\]](#sysx){reference-type="eqref" reference="sysx"} admits a unique solution $(z_0,z_1,\mathfrak{q}) \in \mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega) \times \mathcal{P}_{p,T}$, and it satisfies $$\|z_0\|_{\mathcal{U}_{p,T}(\Omega)} + \|z_1\|_{\dot{\mathcal{U}}_{p,T}(\Omega)} + \|\mathfrak{q} \|_{\mathcal{P}_{p,T}} \leq C(y_0,\mathfrak{p}) \left( \|f_0\|_{\dot{\mathcal{U}}_{p,T}(\Omega)} + \|f_1\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} \right).$$ The constant $C(y_0,\mathfrak{p})>0$ is independent of $(z_0,z_1,\mathfrak{q})$.* *Proof.* Equation [\[sysx_0\]](#sysx_0){reference-type="eqref" reference="sysx_0"} differentiated in time and combined with [\[sysx1\]](#sysx1){reference-type="eqref" reference="sysx1"} shows that the variable $z_0$ satisfies system [\[syslin-nonauto\]](#syslin-nonauto){reference-type="eqref" reference="syslin-nonauto"}, with respectively $z_0$ in the role of $v$, $f_1+\dot{f}_0 -\kappa \Delta f_0 \in \mathcal{F}_{p,T}(\Omega)$ in the role of $f$, and $g +\kappa \displaystyle \frac{\partial f_0}{\partial n} \in \mathcal{G}_{p,T}(\Gamma_N)$ in the role of $g$.
Proposition [Proposition 6](#prop-well-syslin){reference-type="ref" reference="prop-well-syslin"} states the existence and uniqueness of $z_0 \in \mathcal{U}_{p,T}(\Omega)$, which satisfies $$\begin{aligned} %\begin{array}{rcl} \|z_0\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{q}\|_{\mathcal{P}_{p,T}} & \leq & C(y_0,\mathfrak{p}) \left( \|f_1\|_{\mathcal{F}_{p,T}(\Omega)} + \|\dot{f}_0\|_{\mathcal{F}_{p,T}(\Omega)} + \|\Delta f_0\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \displaystyle \left\|\frac{\partial f_0}{\partial n}\right\|_{\mathcal{G}_{p,T}(\Gamma_N)} \right) \nonumber \\ & \leq & CC(y_0,\mathfrak{p}) \left( \|f_0\|_{\dot{\mathcal{U}}_{p,T}(\Omega)}+ \|f_1\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} \right), \label{est-z0} %\end{array}\end{aligned}$$ where the constant $C(y_0,\mathfrak{p})$ is the one of estimate [\[est-prop-nonauto\]](#est-prop-nonauto){reference-type="eqref" reference="est-prop-nonauto"}. We still denote by $C(y_0,\mathfrak{p})$ constants of the type $CC(y_0,\mathfrak{p})$. Further, equation [\[sysx_0\]](#sysx_0){reference-type="eqref" reference="sysx_0"}, namely $z_1 = \dot{z}_0 - f_0$, yields $$\|z_1\|_{\dot{\mathcal{U}}_{p,T}(\Omega)} \leq \|z_0\|_{\mathcal{U}_{p,T}(\Omega)} + \|f_0\|_{\dot{\mathcal{U}}_{p,T}(\Omega)}.$$ Combined with [\[est-z0\]](#est-z0){reference-type="eqref" reference="est-z0"}, we deduce the announced estimate and complete the proof. ◻ We stress that solutions $((z_0,z_1),\mathfrak{q})$ of system [\[sysx\]](#sysx){reference-type="eqref" reference="sysx"} are continuous on $[0,T]$, with values in $\mathcal{U}_p^{(0,1)}(\Omega) \times \mathbb{R}$.
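The adjoint system introduced next is defined by *transposition*. In finite dimension, the mechanism reduces to a duality identity for ODEs: if $\dot{z} = Az + f$, $z(0)=0$, and $-\dot{\zeta} = A^T\zeta + g$, $\zeta(T)=0$, then $\int_0^T \langle \zeta, f\rangle\, \mathrm{d}t = \int_0^T \langle g, z\rangle\, \mathrm{d}t$, since $\frac{\mathrm{d}}{\mathrm{d}t}\langle \zeta, z\rangle = \langle \zeta, f\rangle - \langle g, z\rangle$ integrates to zero. A numerical sketch (illustrative matrix and sources) of this identity:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Finite-dimensional analogue of a solution defined by transposition:
# for z' = A z + f, z(0) = 0, and the backward adjoint equation
# -zeta' = A^T zeta + g, zeta(T) = 0, check  int <zeta,f> dt = int <g,z> dt.
rng = np.random.default_rng(0)
n, T = 4, 1.0
A = rng.standard_normal((n, n))
f = lambda t: np.array([np.sin(t), np.cos(2 * t), t, 1.0])
g = lambda t: np.array([1.0, t**2, np.cos(t), np.sin(3 * t)])

ts = np.linspace(0.0, T, 2001)
z = solve_ivp(lambda t, y: A @ y + f(t), (0.0, T), np.zeros(n),
              t_eval=ts, rtol=1e-10, atol=1e-12).y
# solve the adjoint backward in time via the substitution s = T - t
zeta = solve_ivp(lambda s, y: A.T @ y + g(T - s), (0.0, T), np.zeros(n),
                 t_eval=ts, rtol=1e-10, atol=1e-12).y[:, ::-1]

def trapezoid(vals, ts):
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts)))

fs = np.array([f(t) for t in ts]).T
gs = np.array([g(t) for t in ts]).T
lhs = trapezoid(np.sum(zeta * fs, axis=0), ts)
rhs = trapezoid(np.sum(gs * z, axis=0), ts)
assert np.isclose(lhs, rhs, rtol=1e-5, atol=1e-6)
```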
Recall that our goal is to address Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"}, involving the functionals $c: \mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega) \times \mathcal{X}_{p,T}(\omega)\rightarrow \mathbb{R}$, $\phi^{(1)}: \mathcal{U}^{(0,1)}_{p}(\Omega) \rightarrow \mathbb{R}$ and $\phi^{(2)}: \mathcal{U}^{(0,1)}_{p}(\Omega) \rightarrow \mathbb{R}$. Given $\tau \in (0,T)$, $\xi \in \mathcal{X}_{p,T}(\omega)$ and $(y_0,y_1, \mathfrak{p}) \in \mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega)\times \mathcal{P}_{p,T}$, we introduce the adjoint system, namely [\[sysadjoint-init\]]{#sysadjoint-init label="sysadjoint-init"} $$\begin{aligned} -\dot{\zeta}_0 - \mathop{\mathrm{div}}(\sigma_L(\nabla y_0)^{\ast}.\nabla \zeta_1) = -c'_{y_0}(y_0, y_1, \xi) & & \text{in $\Omega \times\left((0,\tau) \cup (\tau,T)\right)$}, \label{sysadjoint-init1} \\ -\dot{\zeta}_1 - \zeta_0 -\kappa \Delta \zeta_1 = -c'_{y_1}(y_0,y_1, \xi) & & \text{in $\Omega \times(0,T)$}, \label{sysadjoint-init2} \\ %\quad \text{and} \quad \Big((\sigma_L + \mathfrak{p}\, \sigma_N)(\nabla y_0)^{\ast}.\nabla \zeta_1\Big)n + \pi\, \mathrm{cof}(\Phi(y_0)) n = 0 & & \text{on $\Gamma_N\times (0,T)$}, \label{sysadjoint-init3} \label{Neu-adj-init0}\\ \kappa \frac{\partial\zeta_1}{\partial n} =0 & & \text{on $\Gamma_N\times (0,T)$}, \label{Neu-adj-init1}\\ \zeta_1 = 0 & & \text{on $\Gamma_D\times(0,T)$}, \\ \displaystyle \left\langle \zeta_1 \, ; \mathrm{cof}(\Phi(y_0))n\right\rangle_{\mathbf{W}^{1/(p'),p}(\Gamma_N)',\mathbf{W}^{1/(p'),p}(\Gamma_N)} %\displaystyle \int_{\Gamma_N} \zeta_1 \cdot \cof(\Phi(y_0) )n\, \d\Gamma_N = 0 & & \text{in } (0,T)\\ \left[ \zeta_0 \right]_{\tau} = \phi^{(1)'}_{y_0}(y_0,y_1) (\tau) \quad \text{and} \quad \left[ \zeta_1 \right]_{\tau} = \phi^{(1)'}_{y_1}(y_0,y_1) (\tau) & & \text{in $\Omega$}, \label{sysadjoint-init4}\\ \zeta_0(\cdot, T) = -\phi^{(2)'}_{y_0}(y_0,y_1)(T), \displaystyle \quad \zeta_1(\cdot, T) = 
-\phi^{(2)'}_{y_1}(y_0,y_1)(T) & & \text{in $\Omega$}.\end{aligned}$$ We have introduced the notation $\left[ \zeta \right]_{\tau} := \displaystyle\lim_{t\rightarrow \tau^+} \zeta(t) - \lim_{t\rightarrow \tau^-} \zeta(t)$, which describes the jump of a variable $\zeta$ at time $t=\tau$. We define solutions of the adjoint system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} by *transposition*. **Definition 8**. *Let $\tau \in (0,T)$ and $(y_0, y_1,\mathfrak{p},\xi) \in \mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega)\times \mathcal{P}_{p,T}\times \mathcal{X}_{p,T}(\omega)$. We say that $(\zeta_0,\zeta_1,\pi)$ is a solution of [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} associated with $(y_0,y_1,\mathfrak{p})$, if for all $(f_0, f_1,g) \in \dot{\mathcal{U}}_{p,T}(\Omega) \times \mathcal{F}_{p,T}(\Omega) \times \mathcal{G}_{p,T}(\Gamma_N)$ we have $$\begin{array} {l} \displaystyle \left\langle \zeta_0 ;f_0 \right\rangle_{\dot{\mathcal{U}}_{p,T}(\Omega)',\dot{\mathcal{U}}_{p,T}(\Omega)} +\left\langle \zeta_1 ; f_1 \right\rangle_{\mathcal{F}_{p,T}(\Omega)',\mathcal{F}_{p,T}(\Omega)} + \left\langle \zeta_1 ; g \right\rangle_{\mathcal{G}_{p,T}(\Gamma_N)',\mathcal{G}_{p,T}(\Gamma_N)} \\ %+ \langle \zeta_0(0) ; v_0 \rangle_{\mathcal{U}_p_^{(0)}(\Omega)',\mathcal{U}_p^{(0)}(\Omega)} %+ \langle \zeta_1(0) ; v_1 \rangle_{\mathcal{U}_p^{(1)}(\Omega)',\mathcal{U}_p^{(1)}(\Omega)} \\ \displaystyle = -\left\langle c'_{y_0}(y_0,y_1, \xi)\, ; z_0 \right\rangle_{\mathcal{U}_{p,T}(\Omega)',\mathcal{U}_{p,T}(\Omega)} - \left\langle c'_{y_1}(y_0,y_1, \xi)\, ; z_1 \right\rangle_{\dot{\mathcal{U}}_{p,T}(\Omega)',\dot{\mathcal{U}}_{p,T}(\Omega)} \\ - \left\langle(\phi^{(1)'}_{y_0}(y_0,y_1), \phi^{(1)'}_{y_1}(y_0,y_1) )\, ; (z_0(\cdot,\tau),z_1(\cdot,\tau)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)} \\ -\left\langle (\phi^{(2)'}_{y_0}(y_0,y_1),
\phi^{(2)'}_{y_1}(y_0,y_1) )\, ; (z_0(\cdot,T),z_1(\cdot,T)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)} , \label{id-def-trans-init} \end{array}$$ where $(z_0,z_1,\mathfrak{q})$ is the solution of system [\[sysx\]](#sysx){reference-type="eqref" reference="sysx"} with $(y_0,y_1,\mathfrak{p})$ and $(f_0,f_1,g)$ as data.* **Remark 9**. *Solutions $(\zeta_0,\zeta_1)$ in the sense of Definition [Definition 8](#def-trans-init){reference-type="ref" reference="def-trans-init"} lie in $\dot{\mathcal{U}}_{p,T}(\Omega)' \times \mathcal{F}_{p,T}(\Omega)'$, and therefore satisfy $$\begin{array}{l} %\begin{array} {l} \zeta_0 \in \mathrm{L}^{p'}(0,T;\mathbf{W}^{2,p}(\Omega)'), \quad \zeta_1 \in \mathrm{L}^{p'}(0,T;\mathbf{L}^{p'}(\Omega)), \quad {\zeta_1}_{|\Gamma_N} \in \mathrm{L}^{p'}(0,T; \mathbf{W}^{1/(p'),p}(\Gamma_N)') \\ \nabla \zeta_1 \in \mathrm{L}^{p'}(0,T; \mathbb{W}^{1,p}(\Omega)'), \quad \displaystyle \frac{\partial\zeta_1}{\partial n} \in \mathrm{L}^{p'}(0,T; \mathbf{W}^{2-1/p,p}(\Gamma_N)'). \end{array}$$ It is unnecessary to comment on the regularity of the variable $\pi$, which plays the role of a Lagrange multiplier for the constraint [\[Neu-adj-init0\]](#Neu-adj-init0){reference-type="eqref" reference="Neu-adj-init0"}.* We prove the existence and uniqueness of a *very weak* solution for system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}, in the sense of Definition [Definition 8](#def-trans-init){reference-type="ref" reference="def-trans-init"}. **Proposition 10**. *Let $(y_0,y_1,\mathfrak{p},\xi) \in \mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega)\times \mathcal{P}_{p,T}\times \mathcal{X}_{p,T}(\omega)$.
If $(y_0,\mathfrak{p})$ is small enough in $\mathcal{U}_{p,T}(\Omega) \times \mathcal{P}_{p,T}$, then system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} admits a unique solution $(\zeta_0,\zeta_1,\pi) \in \dot{\mathcal{U}}_{p,T}(\Omega)' \times \mathcal{F}_{p,T}(\Omega)' \times \mathcal{G}_{p,T}(\Gamma_N)'$, in the sense of Definition [Definition 8](#def-trans-init){reference-type="ref" reference="def-trans-init"}. Moreover, there exists a constant $C(y_0, \mathfrak{p}) >0$ depending only on $(y_0,\mathfrak{p})$ such that $$\begin{array} {rcl} \|(\zeta_0,\zeta_1) \|_{\dot{\mathcal{U}}_{p,T}(\Omega)' \times \mathcal{F}_{p,T}(\Omega)'} & \leq & C(y_0, \mathfrak{p}) \left( \| c'_{y_0}(y_0,y_1,\xi) \|_{\mathcal{U}_{p,T}(\Omega)'} + \| c'_{y_1}(y_0,y_1,\xi) \|_{\dot{\mathcal{U}}_{p,T}(\Omega)'} \right.\\ & & \left. + \| (\phi^{(1)'}_{y_0}(y_0,y_1)(\tau), \phi^{(1)'}_{y_1}(y_0,y_1)(\tau)) \|_{\mathcal{U}^{(0,1)}_p(\Omega)'} \right. \\ & & \left. +\| (\phi^{(2)'}_{y_0}(y_0,y_1)(T), \phi^{(2)'}_{y_1}(y_0,y_1)(T)) \|_{\mathcal{U}^{(0,1)}_p(\Omega)'} \right). \end{array}$$ In particular, $C(y_0,\mathfrak{p})$ is independent of $c$, $\phi^{(1)}$ and $\phi^{(2)}$.* *Proof.* Define the operator $$\Lambda(y_0,y_1,\mathfrak{p}): (f_0,f_1,g) \mapsto \big(z_0,z_1, (z_0(\cdot,\tau),z_1(\cdot,\tau)), (z_0(\cdot,T),z_1(\cdot,T))\big),$$ where $(z_0,z_1)$ is the solution of system [\[sysx\]](#sysx){reference-type="eqref" reference="sysx"}. From Proposition [Proposition 7](#prop-well-syslin-init){reference-type="ref" reference="prop-well-syslin-init"}, the linear operator $\Lambda(y_0,y_1,\mathfrak{p})$ is bounded from $\dot{\mathcal{U}}_{p,T}(\Omega) \times \mathcal{F}_{p,T}(\Omega) \times \mathcal{G}_{p,T}(\Gamma_N)$ into $\mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega) \times \mathcal{U}_p^{(0,1)}(\Omega) \times \mathcal{U}_p^{(0,1)}(\Omega)$. 
Therefore $\Lambda(y_0,y_1,\mathfrak{p})^{\ast}$ is bounded from $\mathcal{U}_{p,T}(\Omega)' \times \dot{\mathcal{U}}_{p,T}(\Omega)' \times \mathcal{U}_p^{(0,1)}(\Omega)' \times \mathcal{U}_p^{(0,1)}(\Omega)'$ into $\dot{\mathcal{U}}_{p,T}(\Omega)' \times \mathcal{F}_{p,T}(\Omega)' \times \mathcal{G}_{p,T}(\Gamma_N)'$. Defining $$(\zeta_0,\zeta_1,\pi) = -\Lambda(y_0,y_1,\mathfrak{p})^{\ast}\left( c_{y_0}'(y_0,y_1,\xi), c_{y_1}'(y_0,y_1,\xi), \phi^{(1)'}_{y_0}(y_0,y_1)(\tau), \phi^{(1)'}_{y_1}(y_0,y_1)(\tau), \phi^{(2)'}_{y_0}(y_0,y_1)(T), \phi^{(2)'}_{y_1}(y_0,y_1)(T)\right),$$ we can verify that $(\zeta_0, \zeta_1,\pi)$ is a solution of system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} in the sense of Definition [Definition 8](#def-trans-init){reference-type="ref" reference="def-trans-init"}. To check this, we apply the Green formula twice and integrate by parts on $(0,\tau) \cup (\tau,T)$. Uniqueness follows from the linearity of system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}. Indeed, if $$\left(c_{y_0}'(y_0,y_1,\xi), c_{y_1}'(y_0,y_1,\xi), \phi^{(1)'}_{y_0}(y_0,y_1)(\tau), \phi^{(1)'}_{y_1}(y_0,y_1)(\tau), \phi^{(2)'}_{y_0}(y_0,y_1)(T), \phi^{(2)'}_{y_1}(y_0,y_1)(T)\right) = (0,0,0,0,0,0),$$ then from [\[id-def-trans-init\]](#id-def-trans-init){reference-type="eqref" reference="id-def-trans-init"} we deduce that $(\zeta_0, \zeta_1) = (0,0)$ in $\dot{\mathcal{U}}_{p,T}(\Omega)' \times \mathcal{F}_{p,T}(\Omega)'$, which, in view of [\[sysadjoint-init3\]](#sysadjoint-init3){reference-type="eqref" reference="sysadjoint-init3"}, also implies that $\pi = 0$, completing the proof.
◻ # Optimal control formulation and optimality conditions {#sec-optcond} The purpose of this section is to derive necessary optimality conditions for the original optimal control problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"}: $$\tag{$\mathcal{P}$} \left\{ \begin{array} {l} \displaystyle \max_{(\xi,\tau) \in \mathcal{X}_{p,T}(\omega) \times (0,T)} \left( J(\xi,\tau) = \int_0^T c(u,\dot{u}, \xi)\, \mathrm{d}t + \phi^{(1)}(u,\dot{u})(\tau) + \phi^{(2)}(u,\dot{u})(T) \right) \\[10pt] \text{where $(u, \dot{u})$ satisfies~\eqref{sysmain}.} \end{array} \right. \label{optcontprob}$$ **Remark 11**. *In order to guarantee the existence of solutions $(u,\dot{u})$ to system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} independently of the set of controls/parameters $(\xi,\tau)\in \mathcal{X}_{p,T}(\omega)\times(0,T)$ considered in Problem [\[optcontprob\]](#optcontprob){reference-type="eqref" reference="optcontprob"}, we may need to add norm constraints on the control function $\xi$. More precisely, by virtue of Proposition [Proposition 5](#prop-well-sysmain){reference-type="ref" reference="prop-well-sysmain"}, we could consider in addition the following constraint $$\|f(\xi)+ \mathop{\mathrm{div}}(\sigma(0))\|_{\mathcal{F}_{p,T}(\Omega)} + \|g-\sigma(0)n\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} \leq \eta,$$ for some $\eta>0$ chosen small enough. We would then proceed as in [@Maxmax3] to derive the corresponding optimality conditions, incorporating a Lagrange multiplier to take such a constraint into account.
To avoid making the exposition heavier, we choose to omit this point in what follows.* For example, as in the illustrations presented in section [5](#sec-num){reference-type="ref" reference="sec-num"}, one can choose $c(u,\dot{u}, \xi)= - \frac{1}{2} \| \xi\|_{\mathrm{L}^2(\omega)}^2$ as the cost functional. Due to a lack of smoothness of the state at a (possible) optimal time $\tau$, we need to make a change of variables in order to decouple the state variable $(u,\dot{u})$ and the time parameter $\tau$. ## Transformation of the problem and new formulation {#sec-transformation} Let $\tilde{\varepsilon} \in (0,1)$ be a fixed parameter chosen small enough, typically of the order of the time step used in the numerical simulations of section [5](#sec-num){reference-type="ref" reference="sec-num"}. When considering functionals $\phi^{(1)}$ such as [\[sysmainpressure\]](#sysmainpressure){reference-type="eqref" reference="sysmainpressure"}, for example, we introduce the change of variables $\mu :[0,2] \rightarrow [0,T]$ given as follows (see Figure [\[fig-graph-mu\]](#fig-graph-mu){reference-type="ref" reference="fig-graph-mu"}): $$\mu(s,\tau) = \left\{ \begin{array} {ll} \tau s & \text{if } s\in [0,1], \\[5pt] \displaystyle \frac{\varepsilon}{\tilde{\varepsilon}}(s-1) + \tau & \text{if } s\in [1,1+\tilde{\varepsilon}], \\[5pt] \displaystyle T- (2-s) \frac{T-(\tau+\varepsilon)}{1-\tilde{\varepsilon}} & \text{if } s\in [1+\tilde{\varepsilon}, 2]. \end{array} \right.$$ This change of variables is designed such that $\mu(\cdot,\tau)$ is bijective from $[0,2]$ to $[0,T]$, and $$\mu(0,\tau) = 0, \quad \mu(2,\tau) = T, \quad \mu(1,\tau)= \tau, \quad \mu(1+\tilde{\varepsilon},\tau) = \tau+ \varepsilon.$$ **Remark 12**. *This kind of change of variables corresponds to functionals $\phi^{(1)}$ that involve evaluations of the state variables at times $\tau$ and $\tau+\varepsilon$.
Of course these changes of variables must be adapted when considering evaluations at other times (still expressed in terms of $\tau$). When the functional $\phi^{(1)}$ involves only evaluations at time $\tau$, the one given above remains valid by choosing $\varepsilon = \tilde{\varepsilon} = 0$.* The time-derivative $\dot{\mu}$ of $\mu$ (with respect to $s$), as well as its partial derivative $\dot{\mu}_\tau$ with respect to $\tau$, are given as follows: $$\dot{\mu}(s,\tau) = \left\{ \begin{array} {ll} \tau & \text{if } s\in [0,1), \\[5pt] \displaystyle \varepsilon/\tilde{\varepsilon} & \text{if } s \in (1, 1+\tilde{\varepsilon}), \\[5pt] \displaystyle \frac{T-(\tau+\varepsilon)}{1-\tilde{\varepsilon}} & \text{if } s\in (1+\tilde{\varepsilon}, 2], \end{array} \right. \qquad \dot{\mu}_\tau(s,\tau) = \left\{ \begin{array} {ll} 1 & \text{if } s\in [0,1), \\ 0 & \text{if } s \in (1, 1+\tilde{\varepsilon}), \\ -1/(1-\tilde{\varepsilon}) & \text{if } s\in (1+\tilde{\varepsilon}, 2]. \end{array} \right.$$ Note that $\dot{\mu}_\tau$ is actually independent of $\tau$, and that $\ddot{\mu} = 0$. For a given switching time $\tau$, we introduce the following change of unknowns and variables $$\tilde{u}:s \mapsto u (\cdot,\mu(s,\tau)), \quad \tilde{\mathfrak{p}}:s \mapsto \mathfrak{p}( \mu(s,\tau)), \quad \tilde{\xi}:s \mapsto \xi(\cdot,\mu(s,\tau)), \quad s \in [0,2].
\label{eqChangeOfVar}$$ We then transform problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"} into the following one: $$\tag{$\tilde{\mathcal{P}}$} \left\{ \begin{array}{l}\displaystyle \max_{\tilde{\xi} \in \mathcal{X}_{p,2}(\omega), \tau \in (0,T)} \int_0^2 \dot{\mu}(s,\tau)c(\tilde{u},\dot{\tilde{u}}/\dot{\mu}, \tilde{\xi})\, \mathrm{d}s + \phi^{(1)}(\tilde{u},\dot{\tilde{u}}/\dot{\mu})(1) + \phi^{(2)}(\tilde{u},\dot{\tilde{u}}/\dot{\mu})(2), \\[10pt] \text{subject to~\eqref{sysmaintilde}}, \end{array} \right. \label{mainpbtilde}$$ where [\[sysmaintilde\]](#sysmaintilde){reference-type="eqref" reference="sysmaintilde"} is the system satisfied by $(\tilde{u}, \tilde{\mathfrak{p}})$, namely [\[sysmaintilde\]]{#sysmaintilde label="sysmaintilde"} $$\begin{aligned} \ddot{\tilde{u}} - \kappa \dot{\mu}\Delta \dot{\tilde{u}} - \dot{\mu}^2 \mathop{\mathrm{div}}\sigma(\nabla \tilde{u}) = \dot{\mu}^2 f(\tilde{\xi}) & & \text{in $\Omega \times(0,2)$}, \label{sysmaintilde1} \\ \frac{\kappa}{\dot{\mu}} \frac{\partial\dot{\tilde{u}}}{\partial n} + \sigma(\nabla \tilde{u})n + \tilde{\mathfrak{p}}\, \mathrm{cof}\left(\Phi(\tilde{u}) \right) n = \tilde{g} & & \text{on $\Gamma_N\times (0,2)$}, \\ \displaystyle \int_{\Omega} \mathrm{det}(\Phi(\tilde{u})) \mathrm{d}\Omega = \int_{\Omega} \mathrm{det}(\Phi(u_0)) \mathrm{d}\Omega & & \text{in } (0,2)\\ \dot{\tilde{u}} = 0 & & \text{on $\Gamma_D\times(0,2)$}, \\ \tilde{u}(\cdot, 0) = u_0, \displaystyle \quad \dot{\tilde{u}}(\cdot, 0) = \dot{\mu}(0)\dot{u}_0 & & \text{in $\Omega$},\end{aligned}$$ with $\tilde{g}(\cdot,s) :=
g(\cdot, \mu(s,\tau))$. The interest of this change of unknowns lies in the fact that in the new optimal control problem [\[mainpbtilde\]](#mainpbtilde){reference-type="eqref" reference="mainpbtilde"}, the two variables to be optimized, namely the time parameter $\tau$ and the control function $\tilde{\xi}$, are no longer coupled. Let us rewrite [\[sysmaintilde\]](#sysmaintilde){reference-type="eqref" reference="sysmaintilde"} as a first-order system of evolution equations, by introducing $$(\tilde{y}_0, \tilde{y}_1) = \displaystyle \big(\tilde{u}, \dot{\tilde{u}}/\dot{\mu}\big), \label{introduce-y}$$ so that $(\tilde{y}_0,\tilde{y}_1) = (u\circ \mu, \dot{u} \circ \mu)$. Then $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ satisfies the following system: [\[sysmaintilde2\]]{#sysmaintilde2 label="sysmaintilde2"} $$\begin{aligned} \dot{\tilde{y}}_0 - \dot{\mu} \tilde{y}_1 = 0 & & \text{in $\Omega \times(0,2)$}, \label{sysmaintilde20} \\ \dot{\tilde{y}}_1 - \kappa \dot{\mu}\Delta \tilde{y}_1 - \dot{\mu} \mathop{\mathrm{div}}\sigma(\nabla \tilde{y}_0) = \dot{\mu} f(\tilde{\xi}) & & \text{in $\Omega \times(0,2)$}, \label{sysmaintilde21} \\ \kappa \frac{\partial\tilde{y}_1}{\partial n} + \sigma(\nabla \tilde{y}_0)n + \tilde{\mathfrak{p}}\, \mathrm{cof}\left(\Phi(\tilde{y}_0) \right) n = \tilde{g} & & \text{on $\Gamma_N\times (0,2)$}, \label{sysmaintilde22}\\ \displaystyle \int_{\Omega} \mathrm{det}(\Phi(\tilde{y}_0)) \mathrm{d}\Omega = \int_{\Omega} \mathrm{det}(\Phi(u_0)) \mathrm{d}\Omega & & \text{in } (0,2) \label{etc1}\\ \tilde{y}_1 = 0 & & \text{on $\Gamma_D\times(0,2)$}, \\ \tilde{y}_0(\cdot, 0) = u_0, \displaystyle \quad \tilde{y}_1(\cdot, 0) = \dot{u}_0 & & \text{in $\Omega$}.\end{aligned}$$ **Remark 13**.
*Note that differentiating [\[etc1\]](#etc1){reference-type="eqref" reference="etc1"} with respect to time, combined with [\[sysmaintilde20\]](#sysmaintilde20){reference-type="eqref" reference="sysmaintilde20"}, yields $\displaystyle \dot{\mu}\int_{\Omega} \mathrm{cof}(\Phi(\tilde{y}_0)): \nabla \tilde{y}_1 \, \mathrm{d}\Omega = 0$. Further, in the same way as we obtained [\[sysmain4bis\]](#sysmain4bis){reference-type="eqref" reference="sysmain4bis"}, we deduce that $$\int_{\Gamma_N} \tilde{y}_1 \cdot \mathrm{cof}(\Phi(\tilde{y}_0))n\, \mathrm{d}\Gamma_N = 0.$$* Problem [\[mainpbtilde\]](#mainpbtilde){reference-type="eqref" reference="mainpbtilde"} is equivalent to the following one, for which we use the same notation: $$\tag{$\tilde{\mathcal{P}}$} \left\{ \begin{array}{l}\displaystyle \max_{\tilde{\xi} \in \mathcal{X}_{p,2}(\omega), \tau \in (0,T)} \left( J(\tilde{\xi},\tau) = \int_0^2 \dot{\mu}(s,\tau)c(\tilde{y}_0,\tilde{y}_1, \tilde{\xi})\, \mathrm{d}s + \phi^{(1)}(\tilde{y}_0,\tilde{y}_1)(1) + \phi^{(2)}(\tilde{y}_0,\tilde{y}_1)(2)\right), \\[10pt] \text{subject to~\eqref{sysmaintilde2}}. \end{array} \right. \label{mainpbtilde2}$$ ## The control-to-state mapping {#sec-cts} We first state a result for a general linear system that will be used several times in what follows. **Proposition 14**. *Let $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}) \in \mathcal{U}_{p,2}(\Omega) \times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$ and $\tau \in (0,T)$ be given. Assume that $\tilde{f}_0 \in \mathcal{U}_{p,2}(\Omega)$, $\tilde{f}_1 \in \mathcal{F}_{p,2}(\Omega)$, and $\tilde{g} \in \mathcal{G}_{p,2}(\Gamma_N)$ with the compatibility condition $\tilde{g}(\cdot,0) = 0$. Recall that the tensor fields $\sigma_L$ and $\sigma_N$ have been introduced in [\[tensor-linear\]](#tensor-linear){reference-type="eqref" reference="tensor-linear"}.
Then, if $(\tilde{y}_0,\tilde{\mathfrak{p}})$ is small enough in $\mathcal{U}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$, the following system* *[\[syssuperlinear\]]{#syssuperlinear label="syssuperlinear"} $$\begin{aligned} \dot{\tilde{z}}_0 - \dot{\mu} \tilde{z}_1 = \dot{\mu}\tilde{f}_0 & & \text{in $\Omega \times(0,2)$}, \label{syssuperlinear0} \\ \dot{\tilde{z}}_1 - \kappa \dot{\mu}\Delta \tilde{z}_1 - \dot{\mu} \mathop{\mathrm{div}}(\sigma_L(\nabla \tilde{y}_0).\nabla \tilde{z}_0) = \dot{\mu} \tilde{f}_1 & & \text{in $\Omega \times(0,2)$}, \label{syssuperlinear1} \\ \kappa \frac{\partial\tilde{z}_1}{\partial n} + \Big((\sigma_L + \tilde{\mathfrak{p}}\, \sigma_N)(\nabla \tilde{y}_0).\nabla \tilde{z}_0\Big)n + \tilde{\mathfrak{q}}\, \mathrm{cof}\left(\Phi(\tilde{y}_0) \right) n = \tilde{g} & & \text{on $\Gamma_N\times (0,2)$}, \label{syssuperlinear2}\\ \tilde{z}_1 = 0 & & \text{on $\Gamma_D\times(0,2)$}, \\ \displaystyle \int_{\Gamma_N} \tilde{z}_0 \cdot \mathrm{cof}(\Phi(\tilde{y}_0))n\, \mathrm{d}\Gamma_N = 0 & & \text{in } (0,2)\\ \tilde{z}_0(\cdot, 0) = 0, \displaystyle \quad \tilde{z}_1(\cdot, 0) = 0 & & \text{in $\Omega$}.\end{aligned}$$* *admits a unique solution $(\tilde{z}_0, \tilde{z}_1,\tilde{\mathfrak{q}}) \in \mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$.
Moreover, it satisfies $$\|(\tilde{z}_0,\tilde{z}_1)\|_{\mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega)} \leq C(\tilde{y}_0,\tilde{\mathfrak{p}}) \left( \| \tilde{f}_0\|_{\mathcal{U}_{p,2}(\Omega)} + \| \tilde{f}_1\|_{\mathcal{F}_{p,2}(\Omega)} + \| \tilde{g}\|_{\mathcal{G}_{p,2}(\Gamma_N)} \right),$$ where the constant $C(\tilde{y}_0,\tilde{\mathfrak{p}})>0$ depends only on $(\tilde{y}_0,\tilde{\mathfrak{p}})$.* Note here again that solutions $(\tilde{z}_0,\tilde{z}_1,\tilde{\mathfrak{q}})$ of system [\[syssuperlinear\]](#syssuperlinear){reference-type="eqref" reference="syssuperlinear"} are continuous on $[0,2]$, with values in $\mathcal{U}_p^{(0,1)}(\Omega) \times \mathbb{R}$. *Proof.* Notice that $(\tilde{z}_0,\tilde{z}_1,\tilde{\mathfrak{q}})$ is a solution of system [\[syssuperlinear\]](#syssuperlinear){reference-type="eqref" reference="syssuperlinear"} with $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ and $(\tilde{f}_0,\tilde{f}_1,\tilde{g})$ as data if and only if $(z_0,z_1,\mathfrak{q})$ is a solution of system [\[sysx\]](#sysx){reference-type="eqref" reference="sysx"} with $(y_0,y_1,\mathfrak{p})$ and $(f_0,f_1,g)$ as data, where we have $$\begin{array} {l} y_0 = \tilde{y}_0(\cdot, \mu^{-1}(\cdot,\tau)), \quad y_1 = \tilde{y}_1 (\cdot, \mu^{-1}(\cdot,\tau)), \quad \mathfrak{p} = \tilde{\mathfrak{p}}(\mu^{-1}(\cdot,\tau)), \\ f_0 = \tilde{f}_0 (\cdot, \mu^{-1}(\cdot,\tau)), \quad f_1 = \tilde{f}_1 (\cdot, \mu^{-1}(\cdot,\tau)), \quad g = \tilde{g} (\cdot, \mu^{-1}(\cdot,\tau)). \end{array}$$ This is due to the regularity of the change of variables $\mu(\cdot,\tau)$, in particular the fact that its derivatives and those of $\mu^{-1}(\cdot,\tau)$ are in $\mathrm{L}^{\infty}(0,2;\mathbb{R})$ and $\mathrm{L}^{\infty}(0,T;\mathbb{R})$, respectively.
The result, together with the announced estimate, then follows from Proposition [Proposition 7](#prop-well-syslin-init){reference-type="ref" reference="prop-well-syslin-init"}, which concludes the proof. ◻ ### Regularity In this subsection we study the regularity of the control-to-state mapping, given as $$\begin{array} {rrcl} \mathbb{S}: & \mathcal{X}_{p,2}(\omega) \times (0,T) & \rightarrow & \mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2} \\ & (\tilde{\xi} , \tau) & \mapsto & (\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}}), \end{array}$$ where $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ is the solution of system [\[sysmaintilde2\]](#sysmaintilde2){reference-type="eqref" reference="sysmaintilde2"} corresponding to $\tilde{\xi}$ and $\dot{\mu} = \dot{\mu}(\cdot,\tau)$. Let us show that $\mathbb{S}$ is *locally* well-defined. More precisely, we state: **Proposition 15**. *Let $T\in(0,\infty)$ be given. There exists $\eta>0$ such that if $$\|\dot{\mu}f(\tilde{\xi})+ \mathop{\mathrm{div}}(\sigma(0))\|_{\mathcal{F}_{p,2}(\Omega)} + \|\tilde{g}-\sigma(0)n\|_{\mathcal{G}_{p,2}(\Gamma_N)} + \|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} \leq \eta$$ with the compatibility condition $\displaystyle \kappa\frac{\partial\dot{u}_0}{\partial n} + \sigma(\nabla u_0)n = \tilde{g}(\cdot,0)$ on $\Gamma_N$, then system [\[sysmaintilde2\]](#sysmaintilde2){reference-type="eqref" reference="sysmaintilde2"} admits a unique solution $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}) \in \mathcal{U}_{p,2}(\Omega) \times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$.* *Proof.* From [\[eqChangeOfVar\]](#eqChangeOfVar){reference-type="eqref" reference="eqChangeOfVar"} and [\[introduce-y\]](#introduce-y){reference-type="eqref" reference="introduce-y"} we have $$\tilde{y}_0(\cdot,s) = u(\cdot, \mu(s,\tau)), \quad \tilde{y}_1(\cdot,s) = \dot{u}(\cdot, \mu(s,\tau)), \quad \tilde{\mathfrak{p}}(s) = \mathfrak{p}(\mu(s,\tau)), \quad
\tilde{\xi}(\cdot,s) = \xi(\cdot,\mu(s,\tau)),$$ and then it is clear that $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ is a solution of [\[sysmaintilde2\]](#sysmaintilde2){reference-type="eqref" reference="sysmaintilde2"} with $(\tilde{\xi},\tau)$ as control if and only if $(u, \dot{u}, \mathfrak{p})$ is a solution of [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} with $\xi$ as control, provided that $\tilde{g}(\cdot,s) = g(\cdot, \mu(s,\tau))$. Therefore we conclude by invoking Proposition [Proposition 5](#prop-well-sysmain){reference-type="ref" reference="prop-well-sysmain"}. ◻ We are now in a position to prove regularity of the control-to-state mapping. **Theorem 16**. *The control-to-state mapping $\mathbb{S}$ is locally of class $\mathcal{C}^1$ from $\mathcal{X}_{p,2}(\omega) \times (0,T)$ into $\mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$.* *Proof.* The result is an application of the implicit function theorem. Define the mapping $$\begin{array} {rrcc} e: & \text{\begin{small}$\mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2} \times \mathcal{X}_{p,2}(\omega) \times (0,T)$\end{small}} & \rightarrow & \text{\begin{small}$\dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{F}_{p,2}(\Omega) \times \mathcal{G}_{p,2}(\Gamma_N) \times \mathcal{H}_{p,2} \times \mathcal{U}_p^{(0,1)}(\Omega)$\end{small}} \\[5pt] & (\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}},\tilde{\xi},\tau) & \mapsto & \left( \begin{array}{c} \dot{\tilde{y}}_0 - \dot{\mu} \tilde{y}_1 \\ \dot{\tilde{y}}_1 - \dot{\mu}\left(\kappa \Delta \tilde{y}_1 + \mathop{\mathrm{div}}(\sigma(\nabla \tilde{y}_0)) + f(\tilde{\xi})\right) \\ \kappa \displaystyle \frac{\partial\tilde{y}_1}{\partial n} + \sigma(\nabla \tilde{y}_0)n +\tilde{\mathfrak{p}}\, \mathrm{cof}(\Phi(\tilde{y}_0))n - \tilde{g} \\[5pt] \displaystyle \int_{\Omega} \mathrm{det}(\Phi(\tilde{y}_0))
\mathrm{d}\Omega -\int_{\Omega} \mathrm{det}(\Phi(u_0))\mathrm{d}\Omega\\ (\tilde{y}_0(\cdot,0), \tilde{y}_1(\cdot,0)) - (u_0,\dot{u}_0) \end{array} \right), \end{array}$$ where the dependence on $\tau$ lies in $\dot{\mu} = \dot{\mu}(\cdot,\tau)$. From Proposition [Proposition 15](#prop-genesis){reference-type="ref" reference="prop-genesis"}, the mapping $e$ is locally well-defined, and the equality $e(\mathbb{S}(\tilde{\xi},\tau), \tilde{\xi},\tau) = 0$ holds for all $(\tilde{\xi}, \tau) \in \mathcal{X}_{p,2}(\omega) \times (0,T)$. Furthermore, from assumption $\mathbf{A1}$, the mapping $e$ is of class $\mathcal{C}^1$. Proposition [Proposition 14](#prop-superlinear){reference-type="ref" reference="prop-superlinear"} shows that the derivative of $e$ with respect to $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ is invertible. The implicit function theorem then provides a $\mathcal{C}^1$ mapping, defined locally on $\mathcal{X}_{p,2}(\omega) \times (0,T)$ with values in $\mathcal{U}_{p,2}(\Omega) \times \dot{\mathcal{U}}_{p,2}(\Omega)\times \mathcal{P}_{p,2}$, that coincides with $\mathbb{S}$, which concludes the proof. ◻ We can now describe the partial derivatives of the control-to-state mapping.
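All the systems of this section depend on the time parameter $\tau$ only through $\dot{\mu}(\cdot,\tau)$ and $\dot{\mu}_\tau$. The following minimal numerical sketch, with illustrative values of $T$, $\tau$, $\varepsilon$ and $\tilde{\varepsilon}$ that are not taken from the paper, checks the endpoint identities $\mu(0,\tau)=0$, $\mu(1,\tau)=\tau$, $\mu(1+\tilde{\varepsilon},\tau)=\tau+\varepsilon$, $\mu(2,\tau)=T$ of section [4.1](#sec-transformation){reference-type="ref" reference="sec-transformation"}, and that $\dot{\mu}_\tau$ agrees with a difference quotient of $\dot{\mu}$ in $\tau$ (exactly, since $\mu$ is affine in $\tau$ on each piece):

```python
# Sketch of the piecewise-linear reparametrisation mu(s, tau): [0,2] -> [0,T]
# and its derivatives; T, tau, eps, eps_t are illustrative values only.

def mu(s, tau, T, eps, eps_t):
    """mu(., tau) maps [0,2] onto [0,T] with mu(1)=tau, mu(1+eps_t)=tau+eps."""
    if s <= 1.0:
        return tau * s
    if s <= 1.0 + eps_t:
        return (eps / eps_t) * (s - 1.0) + tau
    return T - (2.0 - s) * (T - (tau + eps)) / (1.0 - eps_t)

def mu_dot(s, tau, T, eps, eps_t):
    """Derivative of mu with respect to s (piecewise constant)."""
    if s < 1.0:
        return tau
    if s < 1.0 + eps_t:
        return eps / eps_t
    return (T - (tau + eps)) / (1.0 - eps_t)

def mu_dot_tau(s, eps_t):
    """Partial derivative of mu_dot with respect to tau (independent of tau)."""
    if s < 1.0:
        return 1.0
    if s < 1.0 + eps_t:
        return 0.0
    return -1.0 / (1.0 - eps_t)

T, tau, eps, eps_t = 2.0, 0.7, 0.05, 0.1

# Endpoint identities: mu(0)=0, mu(1)=tau, mu(1+eps_t)=tau+eps, mu(2)=T.
assert abs(mu(0.0, tau, T, eps, eps_t)) < 1e-12
assert abs(mu(1.0, tau, T, eps, eps_t) - tau) < 1e-12
assert abs(mu(1.0 + eps_t, tau, T, eps, eps_t) - (tau + eps)) < 1e-12
assert abs(mu(2.0, tau, T, eps, eps_t) - T) < 1e-12

# On each open interval, mu_dot_tau matches the difference quotient of mu_dot
# in tau, since mu is affine in tau on each piece.
h = 1e-3
for s in (0.5, 1.05, 1.7):
    dq = (mu_dot(s, tau + h, T, eps, eps_t) - mu_dot(s, tau, T, eps, eps_t)) / h
    assert abs(dq - mu_dot_tau(s, eps_t)) < 1e-9
```

In the same way, one can check numerically that $\mu(\cdot,\tau)$ is strictly increasing, hence bijective, as soon as $0<\tau$ and $\tau+\varepsilon<T$.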
### Partial derivatives Let us introduce the linear system satisfied by $(\tilde{v}_0,\tilde{v}_1,\tilde{\mathfrak{q}}):=\mathbb{S}'_{\tilde{\xi}}(\tilde{\xi},\tau).\hat{\xi}$, denoting the sensitivity of $\mathbb{S}$ with respect to the variable $\tilde{\xi}$ in the direction $\hat{\xi}$, at the point $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}) = \mathbb{S}(\tilde{\xi}, \tau)$: [\[sysprimexi\]]{#sysprimexi label="sysprimexi"} $$\begin{aligned} \dot{\tilde{v}}_0 - \dot{\mu} \tilde{v}_1 = 0 & & \text{in $\Omega \times(0,2)$}, \label{sysprimexi0} \\ \dot{\tilde{v}}_1 - \kappa \dot{\mu}\Delta \tilde{v}_1 - \dot{\mu} \mathop{\mathrm{div}}(\sigma_L(\nabla \tilde{y}_0).\nabla \tilde{v}_0) = \dot{\mu} f'(\tilde{\xi}).\hat{\xi} & & \text{in $\Omega \times(0,2)$}, \label{sysprimexi1} \\ \kappa \frac{\partial\tilde{v}_1}{\partial n} + \Big((\sigma_L + \tilde{\mathfrak{p}}\, \sigma_N)(\nabla \tilde{y}_0).\nabla \tilde{v}_0\Big)n + \tilde{\mathfrak{q}}\, \mathrm{cof}(\Phi(\tilde{y}_0)) n = 0 & & \text{on $\Gamma_N\times (0,2)$}, \label{sysprimexi2}\\ \tilde{v}_1 = 0 & & \text{on $\Gamma_D\times(0,2)$}, \\ \displaystyle \int_{\Gamma_N} \tilde{v}_0 \cdot \mathrm{cof}(\Phi(\tilde{y}_0))n\, \mathrm{d}\Gamma_N = 0 & & \text{in } (0,2)\\ \tilde{v}_0(\cdot, 0) = 0, \displaystyle \quad \tilde{v}_1(\cdot, 0) = 0 & & \text{in $\Omega$}.\end{aligned}$$ Note that when $\xi \mapsto f(\xi)$ is linear (see section [7.4](#sec-app-control){reference-type="ref" reference="sec-app-control"}), we obviously have $f'(\tilde{\xi}).\hat{\xi} = f(\hat{\xi})$ for all $\tilde{\xi} \in \mathcal{X}_{p,2}(\omega)$. Let us state that system [\[sysprimexi\]](#sysprimexi){reference-type="eqref" reference="sysprimexi"} is well-posed. **Proposition 17**.
*Assume that $(\tilde{\xi},\tau) \in \mathcal{X}_{p,2}(\omega) \times (0,T)$, and denote $(\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}}) = \mathbb{S}(\tilde{\xi},\tau) \in \mathcal{U}_{p,2}(\Omega) \times \dot{\mathcal{U}}_{p,2}(\Omega)\times \mathcal{P}_{p,2}$. Then, if $(\tilde{y}_0,\tilde{\mathfrak{p}})$ is small enough in $\mathcal{U}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$, system [\[sysprimexi\]](#sysprimexi){reference-type="eqref" reference="sysprimexi"} admits a unique solution $(\tilde{v}_0, \tilde{v}_1, \tilde{\mathfrak{q}}) \in \mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$ for all $\hat{\xi} \in \mathcal{X}_{p,2}(\omega)$. Moreover, there exists a constant $C(\tilde{y}_0,\tilde{\mathfrak{p}})$ depending only on $(\tilde{y}_0,\tilde{\mathfrak{p}})$ such that $$\| \tilde{v}_0\|_{\mathcal{U}_{p,2}(\Omega)}+ \| \tilde{v}_1\|_{\dot{\mathcal{U}}_{p,2}(\Omega)} + \| \tilde{\mathfrak{q}} \|_{\mathcal{P}_{p,2}} \leq C(\tilde{y}_0,\tilde{\mathfrak{p}}) \|f'(\tilde{\xi}).\hat{\xi}\|_{\mathcal{F}_{p,2}(\Omega)}.$$* *Proof.* This is a consequence of Proposition [Proposition 14](#prop-superlinear){reference-type="ref" reference="prop-superlinear"} with $\tilde{f}_0 = 0$, $\tilde{f}_1 = f'(\tilde{\xi}).\hat{\xi}$ and $\tilde{g}=0$.
◻ We also introduce the linear system satisfied by $(\tilde{w}_0,\tilde{w}_1,\tilde{\mathfrak{r}}):=\mathbb{S}'_{\tau}(\tilde{\xi},\tau)$, denoting the sensitivity of $\mathbb{S}$ with respect to $\tau$ at the point $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}) = \mathbb{S}(\tilde{\xi},\tau)$: [\[sysprimetau\]]{#sysprimetau label="sysprimetau"} $$\begin{aligned} \dot{\tilde{w}}_0 - \dot{\mu} \tilde{w}_1 = \dot{\mu}_{\tau} \tilde{y}_1 & & \text{in $\Omega \times(0,2)$}, \label{sysprimetau0} \\ \dot{\tilde{w}}_1 - \kappa \dot{\mu}\Delta \tilde{w}_1 - \dot{\mu} \mathop{\mathrm{div}}(\sigma_L(\nabla \tilde{y}_0).\nabla \tilde{w}_0) = \dot{\mu}_{\tau} \left( \kappa \Delta \tilde{y}_1 + \mathop{\mathrm{div}}(\sigma(\nabla \tilde{y}_0)) + f(\tilde{\xi}) \right) & & \text{in $\Omega \times(0,2)$}, \label{sysprimetau1} \\ \kappa \frac{\partial\tilde{w}_1}{\partial n} + \Big((\sigma_L + \tilde{\mathfrak{p}}\, \sigma_N)(\nabla \tilde{y}_0).\nabla \tilde{w}_0\Big)n + \tilde{\mathfrak{r}} \, \mathrm{cof}\left(\Phi(\tilde{y}_0) \right) n = 0 & & \text{on $\Gamma_N\times (0,2)$}, \label{sysprimetau2}\\ \tilde{w}_1 = 0 & & \text{on $\Gamma_D\times(0,2)$}, \\ \displaystyle \int_{\Gamma_N} \tilde{w}_0 \cdot \mathrm{cof}(\Phi(\tilde{y}_0))n\, \mathrm{d}\Gamma_N = 0 & & \text{in } (0,2)\\ \tilde{w}_0(\cdot, 0) = 0, \displaystyle \quad \tilde{w}_1(\cdot, 0) = 0 & & \text{in $\Omega$}.\end{aligned}$$ We show that system [\[sysprimetau\]](#sysprimetau){reference-type="eqref" reference="sysprimetau"} is also well-posed. **Proposition 18**. *Assume that $(\tilde{\xi},\tau) \in \mathcal{X}_{p,2}(\omega) \times (0,T)$, and denote $(\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}}) = \mathbb{S}(\tilde{\xi},\tau) \in \mathcal{U}_{p,2}(\Omega) \times \dot{\mathcal{U}}_{p,2}(\Omega)\times \mathcal{P}_{p,2}$.
Then, if $(\tilde{y}_0,\tilde{\mathfrak{p}})$ is small enough in $\mathcal{U}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$, system [\[sysprimetau\]](#sysprimetau){reference-type="eqref" reference="sysprimetau"} admits a unique solution $(\tilde{w}_0,\tilde{w}_1,\tilde{\mathfrak{r}}) \in \mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$. Moreover, there exists a constant $C(\tilde{y}_0,\tilde{\mathfrak{p}})$ depending only on $(\tilde{y}_0,\tilde{\mathfrak{p}})$ such that $$\| \tilde{w}_0\|_{\mathcal{U}_{p,2}(\Omega)} + \|\tilde{w}_1\|_{\dot{\mathcal{U}}_{p,2}(\Omega)} + \| \tilde{\mathfrak{r}} \|_{\mathcal{P}_{p,2}} \leq C(\tilde{y}_0,\tilde{\mathfrak{p}}) \left(1+ \|f(\tilde{\xi})\|_{\mathcal{F}_{p,2}(\Omega)}\right).$$* *Proof.* This is a consequence of Proposition [Proposition 14](#prop-superlinear){reference-type="ref" reference="prop-superlinear"} with $$\tilde{f}_0 = \frac{\dot{\mu}_{\tau}}{\dot{\mu}}\tilde{y}_1 \in \dot{\mathcal{U}}_{p,2}(\Omega), \quad \tilde{f}_1 = \displaystyle \frac{\dot{\mu}_{\tau}}{\dot{\mu}} \left( \kappa \Delta \tilde{y}_1 + \mathop{\mathrm{div}}(\sigma(\nabla \tilde{y}_0)) + f(\tilde{\xi}) \right) \in \mathcal{F}_{p,2}(\Omega),$$ and $\tilde{g}=0$. Since $\dot{\mu}$ and $\dot{\mu}_{\tau}$ are in $\mathrm{L}^{\infty}(0,2;\mathbb{R})$, the parameter $\tau$ does not appear in the dependence of the constant $C(\tilde{y}_0,\tilde{\mathfrak{p}})$. ◻ ## The adjoint system {#sec-adjoint} Recall the notation introduced in system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}: for a function $\varphi$ continuous on $(0,1)\cup(1,2)$ we define the jump of $\varphi$ at $s=1$ as follows: $$\left[\varphi\right]_1 := \lim_{s\rightarrow 1^+} \varphi(s) - \lim_{s\rightarrow 1^-} \varphi(s).$$ Let $(\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}}) \in \mathcal{U}_{p,2}(\Omega)\times \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{P}_{p,2}$ be given.
We associate with $(\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}})$ the adjoint state $(\tilde{\zeta}_0, \tilde{\zeta}_1,\tilde{\pi})$, assumed to satisfy the following system [\[sysadjoint\]]{#sysadjoint label="sysadjoint"} $$\begin{aligned} -\dot{\tilde{\zeta}}_0 - \dot{\mu} \mathop{\mathrm{div}}(\sigma_L(\nabla \tilde{y}_0)^{\ast}.\nabla \tilde{\zeta}_1) = -\dot{\mu}c'_{y_0}(\tilde{y}_0, \tilde{y}_1, \tilde{\xi}) & & \text{in $\Omega \times\left((0,1) \cup (1,2)\right)$}, \label{sysadjoint1} \\ -\dot{\tilde{\zeta}}_1 - \dot{\mu} \tilde{\zeta}_0 -\kappa \dot{\mu} \Delta \tilde{\zeta}_1 = -\dot{\mu}c'_{y_1}(\tilde{y}_0,\tilde{y}_1, \tilde{\xi}) & & \text{in $\Omega \times\left((0,1) \cup (1,2)\right)$}, \label{sysadjoint2} \\ \Big((\sigma_L + \tilde{\mathfrak{p}}\, \sigma_N)(\nabla \tilde{y}_0)^{\ast}.\nabla \tilde{\zeta}_1\Big)n + \tilde{\pi}\, \mathrm{cof}(\Phi(\tilde{y}_0)) n = 0 & & \text{on $\Gamma_N\times \left((0,1) \cup (1,2)\right)$}, \label{sysadjoint3} \label{Neu-adj0}\\ \kappa \frac{\partial\tilde{\zeta}_1}{\partial n} =0 & & \text{on $\Gamma_N\times \left((0,1) \cup (1,2)\right)$}, \label{Neu-adj1}\\ \tilde{\zeta}_1 = 0 & & \text{on $\Gamma_D\times\left((0,1) \cup (1,2)\right)$}, \\ \left\langle \tilde{\zeta}_1 \, ; \mathrm{cof}(\Phi(\tilde{y}_0))n\right\rangle_{\mathbf{W}^{1/(p'),p}(\Gamma_N)',\mathbf{W}^{1/(p'),p}(\Gamma_N)} = 0 & & \text{in } \left((0,1) \cup (1,2)\right) \label{sysadjoint5}\\ \left[ \tilde{\zeta}_0 \right]_1 = \phi^{(1)'}_{y_0}(\tilde{y}_0,\tilde{y}_1)(1) \quad \text{and} \quad \left[ \tilde{\zeta}_1 \right]_1 = \phi^{(1)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)(1) & & \text{in $\Omega$}, \label{sysadjoint4}\\ \tilde{\zeta}_0(\cdot, 2) = -\phi^{(2)'}_{y_0}(\tilde{y}_0,\tilde{y}_1)(2), \displaystyle \quad \tilde{\zeta}_1(\cdot, 2) = -\phi^{(2)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)(2) & & \text{in $\Omega$}.\end{aligned}$$ Note
that the derivatives of the mapping $\phi^{(1)}$ induce jumps for the variables $\tilde{\zeta}_0$ and $\tilde{\zeta}_1$ at time $s=1$. Similarly to Definition [Definition 8](#def-trans-init){reference-type="ref" reference="def-trans-init"}, which deals with solutions of system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}, we define solutions of system [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} by *transposition* as follows: **Definition 19**. *Let $(\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}},\tilde{\xi}, \tau) \in \mathcal{U}_{p,2}(\Omega) \times \dot{\mathcal{U}}_{p,2}(\Omega)\times \mathcal{P}_{p,2}\times \mathcal{X}_{p,2}(\omega) \times (0,T)$ be given. We say that $(\tilde{\zeta}_0,\tilde{\zeta}_1,\tilde{\pi})$ is a solution of [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} associated with $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$, if for all $(\tilde{f}_0, \tilde{f}_1,\tilde{g}) \in \dot{\mathcal{U}}_{p,2}(\Omega) \times \mathcal{F}_{p,2}(\Omega) \times \mathcal{G}_{p,2}(\Gamma_N)$ we have $$\begin{array} {l} \displaystyle \left\langle \tilde{\zeta}_0 ; \dot{\mu}\tilde{f}_0 \right\rangle_{\dot{\mathcal{U}}_{p,2}(\Omega)',\dot{\mathcal{U}}_{p,2}(\Omega)} +\left\langle \tilde{\zeta}_1 ; \dot{\mu}\tilde{f}_1 \right\rangle_{\mathcal{F}_{p,2}(\Omega)',\mathcal{F}_{p,2}(\Omega)} + \left\langle \tilde{\zeta}_1 ; \dot{\mu}\tilde{g} \right\rangle_{\mathcal{G}_{p,2}(\Gamma_N)',\mathcal{G}_{p,2}(\Gamma_N)} \\ \displaystyle = -\int_0^2\dot{\mu}\left\langle c'_{y_0}(\tilde{y}_0,\tilde{y}_1, \tilde{\xi})\, ; \tilde{z}_0 \right\rangle_{\mathbf{W}^{2,p}(\Omega)',\mathbf{W}^{2,p}(\Omega)} \mathrm{d}s - \int_0^2\dot{\mu}\left\langle c'_{y_1}(\tilde{y}_0,\tilde{y}_1,
\tilde{\xi})\, ; \tilde{z}_1 \right\rangle_{\mathbf{W}^{2,p}(\Omega)',\mathbf{W}^{2,p}(\Omega)} \mathrm{d}s \\ - \left\langle(\phi^{(1)'}_{y_0}(\tilde{y}_0,\tilde{y}_1), \phi^{(1)'}_{y_1}(\tilde{y}_0,\tilde{y}_1) )\, ; (\tilde{z}_0(\cdot,1),\tilde{z}_1(\cdot,1)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)} \\ -\left\langle (\phi^{(2)'}_{y_0}(\tilde{y}_0,\tilde{y}_1), \phi^{(2)'}_{y_1}(\tilde{y}_0,\tilde{y}_1) )\, ; (\tilde{z}_0(\cdot,2),\tilde{z}_1(\cdot,2)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)} , \label{id-def-trans} \end{array}$$ where $(\tilde{z}_0,\tilde{z}_1,\tilde{\mathfrak{q}})$ is the solution of system [\[syssuperlinear\]](#syssuperlinear){reference-type="eqref" reference="syssuperlinear"} with $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ and $(\tilde{f}_0,\tilde{f}_1,\tilde{g})$ as data.* It is clear that $(\tilde{\zeta}_0,\tilde{\zeta}_1,\tilde{\pi})$ is a solution of system [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} associated with $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ in the sense of Definition [Definition 19](#def-trans){reference-type="ref" reference="def-trans"} if and only if $(\zeta_0,\zeta_1,\pi)$ is a solution of system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} associated with $(y_0,y_1,\mathfrak{p})$ in the sense of Definition [Definition 8](#def-trans-init){reference-type="ref" reference="def-trans-init"}, provided that $$\begin{array} {l} \tilde{y}_0(\cdot,s) = y_0(\cdot, \mu(s,\tau)), \quad \tilde{y}_1(\cdot,s) = y_1(\cdot, \mu(s,\tau)), \quad \tilde{\mathfrak{p}}(s) = \mathfrak{p}(\mu(s,\tau)), \quad \tilde{\xi}(\cdot,s) = \xi(\cdot,\mu(s,\tau)), \\ \tilde{f}_0(\cdot,s) = f_0(\cdot,\mu(s,\tau)), \quad \tilde{f}_1(\cdot,s) = f_1(\cdot,\mu(s,\tau)), \quad \tilde{g}(\cdot,s) = g(\cdot,\mu(s,\tau)).
\end{array}$$ The solutions of systems [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} and [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} then satisfy the relations $$\tilde{\zeta}_0(\cdot,s) = \zeta_0(\cdot, \mu(s,\tau)), \quad \tilde{\zeta}_1(\cdot,s) = \zeta_1(\cdot, \mu(s,\tau)), \quad \tilde{\pi}(s) = \pi(\mu(s,\tau)).$$ Therefore we can rely on Proposition [Proposition 10](#propadj-init){reference-type="ref" reference="propadj-init"} to state the following result: **Proposition 20**. *Let $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}},\tilde{\xi},\tau) \in \mathcal{U}_{p,2}(\Omega) \times \dot{\mathcal{U}}_{p,2}(\Omega)\times \mathcal{P}_{p,2}\times \mathcal{X}_{p,2}(\omega) \times (0,T)$ be given. System [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} admits a unique solution $(\tilde{\zeta}_0,\tilde{\zeta}_1,\tilde{\pi}) \in \dot{\mathcal{U}}_{p,2}(\Omega)' \times \mathcal{F}_{p,2}(\Omega)' \times \mathcal{G}_{p,2}(\Gamma_N)'$, in the sense of Definition [Definition 19](#def-trans){reference-type="ref" reference="def-trans"}. Moreover, there exists a constant $C(\tilde{y}_0,\tilde{\mathfrak{p}}) >0$ depending only on $(\tilde{y}_0,\tilde{\mathfrak{p}},\tau)$ such that $$\begin{array} {rcl} \|(\tilde{\zeta}_0,\tilde{\zeta}_1) \|_{\dot{\mathcal{U}}_{p,2}(\Omega)' \times \mathcal{F}_{p,2}(\Omega)'} & \leq & C(\tilde{y}_0, \tilde{\mathfrak{p}}) \left( \| c'_{y_0}(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}) \|_{\mathcal{U}_{p,2}(\Omega)'} + \| c'_{y_1}(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}) \|_{\dot{\mathcal{U}}_{p,2}(\Omega)'} \right.\\ & & \left. + \| (\phi^{(1)'}_{y_0}(\tilde{y}_0,\tilde{y}_1), \phi^{(1)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)) \|_{\mathcal{U}^{(0,1)}_p(\Omega)'} + \| (\phi^{(2)'}_{y_0}(\tilde{y}_0,\tilde{y}_1), \phi^{(2)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)) \|_{\mathcal{U}^{(0,1)}_p(\Omega)'} \right).
\end{array}$$ In particular, $C(\tilde{y}_0,\tilde{\mathfrak{p}})$ is independent of $c$, $\phi^{(1)}$ and $\phi^{(2)}$.* ## First-order necessary optimality conditions {#sec-optcond_1st} We introduce the cost functional of problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"}: $$\tilde{J}: \mathcal{X}_{p,2}(\omega) \times (0,T) \ni (\tilde{\xi},\tau) \mapsto \int_0^2 \dot{\mu}(s,\tau)c(\mathbb{S}(\tilde{\xi}, \tau)(s),\tilde{\xi}(s))\mathrm{d}s + \phi^{(1)}(\mathbb{S}(\tilde{\xi}, \tau))(1) + \phi^{(2)}(\mathbb{S}(\tilde{\xi}, \tau))(2). \label{def-J}$$ The Hamiltonian $\mathcal{H}$ for problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"} is formally defined by $$\begin{array} {rcl} \mathcal{H} \big(y_0,y_1,\mathfrak{p},\xi, \zeta_0, \zeta_1, \pi\big) & := & c(y_0,y_1, \xi) - \left\langle \zeta_1 ; f(\xi)\right\rangle_{\mathbf{L}^{p'}(\Omega), \mathbf{L}^{p}(\Omega)} - \left\langle \zeta_1; g \right\rangle_{\mathbf{W}^{1/(p'),p}(\Gamma_N)',\mathbf{W}^{1/(p'),p}(\Gamma_N)}\\ & &- \langle \zeta_0 \, ; y_1 \rangle_{\mathbf{W}^{2,p}(\Omega)', \mathbf{W}^{2,p}(\Omega)} + \left\langle \nabla\zeta_1 ; \kappa\nabla y_1 + \sigma(\nabla y_0)\right\rangle_{\mathbb{W}^{1,p}(\Omega)', \mathbb{W}^{1,p}(\Omega)} \\ & & + \pi \displaystyle \int_{\Omega} \mathrm{det}(\Phi(y_0)) \mathrm{d}\Omega + \mathfrak{p} \left\langle \zeta_1 \, ; \mathrm{cof}(\Phi(y_0))n\right\rangle_{\mathbf{W}^{1/(p'),p}(\Gamma_N)',\mathbf{W}^{1/(p'),p}(\Gamma_N)}. \end{array} \label{eqHami}$$ We use the results of sections [4.2](#sec-cts){reference-type="ref" reference="sec-cts"} and [4.3](#sec-adjoint){reference-type="ref" reference="sec-adjoint"} to calculate the first-order derivatives of $\tilde{J}$. **Proposition 21**.
*The functional $\tilde{J}$ is of class $\mathcal{C}^1$ and its first-order derivatives are given as follows:* *[\[id-deriv\]]{#id-deriv label="id-deriv"} $$\begin{aligned} \tilde{J}'_{\tilde{\xi}}(\tilde{\xi},\tau).\hat{\xi} & = & \dot{\mu} \mathcal{H}_{\xi}\big(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}, \tilde{\xi}, \tilde{\zeta}_0, \tilde{\zeta}_1,\tilde{\pi}\big).\hat{\xi}, \label{id-deriv1} \\ \tilde{J}'_{\tau}(\tilde{\xi},\tau) & = & \int_0^2\dot{\mu}_{\tau} (s,\tau) \mathcal{H}\big(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}, \tilde{\xi}, \tilde{\zeta}_0, \tilde{\zeta}_1,\tilde{\pi}\big)(s) \mathrm{d}s, \label{id-deriv2}\end{aligned}$$* *for all $\hat{\xi} \in \mathcal{X}_{p,2}(\omega)$, where $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ satisfies system [\[sysmaintilde2\]](#sysmaintilde2){reference-type="eqref" reference="sysmaintilde2"} corresponding to $(\tilde{\xi},\tau)$, and $(\tilde{\zeta}_0, \tilde{\zeta}_1,\tilde{\pi})$ satisfies system [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} associated with $(\tilde{y}_0, \tilde{y}_1,\tilde{\mathfrak{p}})$ and $(\tilde{\xi},\tau)$.* *Proof.* Denote $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}) = \mathbb{S}(\tilde{\xi},\tau)$. For the sake of concision, we write $\varphi(s)$ for $\varphi(\cdot,s)$, for any $s\in[0,2]$.
Differentiating the functional $\tilde{J}$ with respect to the variable $\tilde{\xi}$ gives $$\begin{array} {rcl} \tilde{J}'_{\tilde{\xi}}(\tilde{\xi},\tau).\hat{\xi} & = & \displaystyle \int_0^2 \dot{\mu}(s,\tau) c'_{\xi}(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s)).\hat{\xi}(s)\, \mathrm{d}s \\ & & +\displaystyle \int_0^2 \dot{\mu}(s,\tau) \left(c'_{y_0}(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s)).\tilde{v}_0(s) + c'_{y_1}(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s)).\tilde{v}_1(s)\right) \mathrm{d}s \\[10pt] & & + \left\langle\left(\tilde{\phi}^{(1)'}_{y_0}(\tilde{y}_0,\tilde{y}_1)(1), \tilde{\phi}^{(1)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)(1)\right);(\tilde{v}_0(1),\tilde{v}_1(1)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)}\\ & & + \left\langle\left(\tilde{\phi}^{(2)'}_{y_0}(\tilde{y}_0,\tilde{y}_1)(2), \tilde{\phi}^{(2)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)(2)\right);(\tilde{v}_0(2),\tilde{v}_1(2)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)}, \end{array}$$ where $(\tilde{v}_0,\tilde{v}_1) := \mathbb{S}_{\tilde{\xi}}(\tilde{\xi}, \tau).\hat{\xi}$ satisfies system [\[sysprimexi\]](#sysprimexi){reference-type="eqref" reference="sysprimexi"}.
Taking the duality product in $\mathbf{L}^{p'}(\Omega)\times \mathbf{L}^p(\Omega)$ of [\[sysprimexi1\]](#sysprimexi1){reference-type="eqref" reference="sysprimexi1"} with $\tilde{\zeta}_1$, integrating by parts on $(0,1) \cup (1,2)$, and using the Green formula twice, leads to $$\tilde{J}'_{\tilde{\xi}}(\tilde{\xi},\tau).\hat{\xi} = \int_0^2 \dot{\mu}(s,\tau)c'_{\xi}(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s)).\hat{\xi}(s)\, \mathrm{d}s - \int_0^2 \langle \tilde{\zeta}_1(s)\, ; \dot{\mu}(s,\tau) f'(\tilde{\xi}(s)).\hat{\xi}(s) \rangle_{\mathbf{L}^{p'}(\Omega), \mathbf{L}^p(\Omega)} \mathrm{d}s.$$ Noticing that $$\mathcal{H}_{\xi}\big(\tilde{y}_0,\tilde{y}_1, \tilde{\mathfrak{p}},\tilde{\xi}, \tilde{\zeta}_0, \tilde{\zeta}_1,\tilde{\pi}\big).\hat{\xi} = c'_{\xi}(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}).\hat{\xi} - \langle \tilde{\zeta}_1; f'(\tilde{\xi}).\hat{\xi} \rangle_{\mathbf{L}^{p'}(\Omega),\mathbf{L}^p(\Omega)} = \left\langle c'_{\xi}(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}) - f'(\tilde{\xi})^{\ast}.\tilde{\zeta}_1; \hat{\xi} \right\rangle_{\mathcal{X}_{p,2}(\omega)',\mathcal{X}_{p,2}(\omega)},$$ identity [\[id-deriv1\]](#id-deriv1){reference-type="eqref" reference="id-deriv1"} follows.
Differentiating $\tilde{J}$ with respect to $\tau$ gives $$\begin{aligned} \begin{array} {rcl} \tilde{J}'_{\tau}(\tilde{\xi},\tau) & = & \displaystyle \int_0^2 \dot{\mu}_{\tau}(s,\tau) c(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s))\, \mathrm{d}s\\ & & + \displaystyle \int_0^2 \dot{\mu}(s,\tau) \left(c'_{y_0}(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s)).\tilde{w}_0(s) + c'_{y_1}(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s)).\tilde{w}_1(s)\right) \mathrm{d}s \\[10pt] & & + \left\langle\left(\tilde{\phi}^{(1)'}_{y_0}(\tilde{y}_0,\tilde{y}_1)(1), \tilde{\phi}^{(1)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)(1)\right);(\tilde{w}_0(1),\tilde{w}_1(1)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)}\\ & & + \left\langle\left(\tilde{\phi}^{(2)'}_{y_0}(\tilde{y}_0,\tilde{y}_1)(2), \tilde{\phi}^{(2)'}_{y_1}(\tilde{y}_0,\tilde{y}_1)(2)\right);(\tilde{w}_0(2),\tilde{w}_1(2)) \right\rangle_{\mathcal{U}_p^{(0,1)}(\Omega)',\mathcal{U}_p^{(0,1)}(\Omega)}, \end{array}\end{aligned}$$ where $(\tilde{w}_0,\tilde{w}_1) := \mathbb{S}_{\tau}(\tilde{\xi}, \tau)$ satisfies system [\[sysprimetau\]](#sysprimetau){reference-type="eqref" reference="sysprimetau"}.
Taking the inner product of [\[sysprimetau1\]](#sysprimetau1){reference-type="eqref" reference="sysprimetau1"} with $\tilde{\zeta}_1$, integrating by parts and using the Green formula twice, leads to $$\begin{array} {rcl} \tilde{J}'_{\tau}(\tilde{\xi},\tau) & = & \displaystyle \int_0^2 \dot{\mu}_{\tau}(s,\tau)c(\tilde{y}_0(s),\tilde{y}_1(s),\tilde{\xi}(s)) \mathrm{d}s - \int_0^2 \langle \tilde{\zeta}_0(s) ; \dot{\mu}_{\tau}(s,\tau) \tilde{y}_1(s) \rangle_{\mathbf{W}^{2,p}(\Omega)', \mathbf{W}^{2,p}(\Omega)} \mathrm{d}s \\ & & \displaystyle - \int_0^2 \left\langle \tilde{\zeta}_1(s) ; \dot{\mu}_{\tau}(s,\tau) \left(\kappa \Delta \tilde{y}_1(s) + \mathop{\mathrm{div}}(\sigma(\nabla \tilde{y}_0(s))) + f(\tilde{\xi}(s)) \right) \right\rangle_{\mathbf{L}^{p'}(\Omega), \mathbf{L}^{p}(\Omega)} \mathrm{d}s, \end{array}$$ where in particular we have used the identity $\displaystyle \int_{\Gamma_N} \tilde{y}_1 \cdot \mathrm{cof}(\Phi(\tilde{y}_0))n\, \mathrm{d}\Gamma_N = 0$ (see Remark [Remark 13](#remark-id-boundary){reference-type="ref" reference="remark-id-boundary"}). We have also used $\displaystyle \left\langle \tilde{\zeta}_1 \, ; \mathrm{cof}(\Phi(\tilde{y}_0))n\right\rangle_{\mathbf{W}^{1/(p'),p}(\Gamma_N)',\mathbf{W}^{1/(p'),p}(\Gamma_N)} = 0$, imposed by [\[sysadjoint5\]](#sysadjoint5){reference-type="eqref" reference="sysadjoint5"} in system [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"}.
Further, using the Green formula we obtain $$\begin{array} {l} \dot{\mu}_{\tau}c(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}) - \langle \tilde{\zeta}_0 ; \dot{\mu}_{\tau} \tilde{y}_1 \rangle_{\mathbf{W}^{2,p}(\Omega)', \mathbf{W}^{2,p}(\Omega)} - \left\langle \tilde{\zeta}_1 ; \dot{\mu}_{\tau} \left(\kappa \Delta \tilde{y}_1 + \mathop{\mathrm{div}}(\sigma(\nabla \tilde{y}_0)) + f(\tilde{\xi}) \right) \right\rangle_{\mathbf{L}^{p'}(\Omega), \mathbf{L}^{p}(\Omega)} \\ = \dot{\mu}_{\tau}\left(c(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}) - \langle \tilde{\zeta}_0 ; \tilde{y}_1 \rangle_{\mathbf{W}^{2,p}(\Omega)', \mathbf{W}^{2,p}(\Omega)} - \langle \tilde{\zeta}_1 ; f(\tilde{\xi}) \rangle_{\mathbf{L}^{p'}(\Omega), \mathbf{L}^p(\Omega)} + \left\langle \nabla \tilde{\zeta}_1 ; \kappa \nabla \tilde{y}_1 + \sigma(\nabla \tilde{y}_0) \right\rangle_{\mathbb{W}^{1,p}(\Omega)', \mathbb{W}^{1,p}(\Omega)} \right)\\ - \displaystyle \dot{\mu}_{\tau}\left\langle \tilde{\zeta}_1 ; \kappa\frac{\partial\tilde{y}_1}{\partial n} + \sigma(\nabla \tilde{y}_0)n \right\rangle_{\mathbf{W}^{1/(p'),p}(\Gamma_N)',\mathbf{W}^{1/(p'),p}(\Gamma_N)} \\ = \dot{\mu}_{\tau} \mathcal{H}(\tilde{y}_0,\tilde{y}_1, \tilde{\mathfrak{p}},\tilde{\xi},\tilde{\zeta}_0,\tilde{\zeta}_1, \tilde{\pi}). \end{array}$$ Thus we obtain [\[id-deriv2\]](#id-deriv2){reference-type="eqref" reference="id-deriv2"}, which completes the proof. ◻ **Remark 22**.
*Since the chosen control operator appears in [\[eqHami\]](#eqHami){reference-type="eqref" reference="eqHami"} in the specific form $\tilde{\xi} \mapsto f(\tilde{\xi})$, the derivative [\[id-deriv1\]](#id-deriv1){reference-type="eqref" reference="id-deriv1"} reduces to $$\tilde{J}'_{\tilde{\xi}}(\tilde{\xi},\tau) = -\dot{\mu}f'(\tilde{\xi})^{\ast}.\tilde{\zeta}_1.$$ We keep the general form [\[id-deriv1\]](#id-deriv1){reference-type="eqref" reference="id-deriv1"} because this formula applies in more general cases (see [@Maxmax2]).* Then the first main result follows, namely the first-order optimality conditions for problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"}: **Theorem 23**. *Let $(\tilde{\xi}, \tau) \in \mathcal{X}_{p,2}(\omega) \times (0,T)$ be an optimal solution of problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"}. Then we have* *[\[ende\]]{#ende label="ende"} $$\begin{aligned} c'_{\xi}(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}) - f'(\tilde{\xi})^{\ast}.\tilde{\zeta}_1 & = & 0, \label{ende1}\\ \displaystyle\int_0^2\dot{\mu}_{\tau} (s,\tau) \mathcal{H}\big(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}, \tilde{\xi}, \tilde{\zeta}_0, \tilde{\zeta}_1,\tilde{\pi}\big)(s)\, \mathrm{d}s & = & 0, \label{ende2}\end{aligned}$$* *where $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}) = \mathbb{S}(\tilde{\xi}, \tau)$ is the solution of [\[sysmaintilde2\]](#sysmaintilde2){reference-type="eqref" reference="sysmaintilde2"} and $(\tilde{\zeta}_0,\tilde{\zeta}_1,\tilde{\pi})$ is the solution of [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} associated with $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$.* *Proof.* Problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"} consists in minimizing the functional $\tilde{J}$ defined in [\[def-J\]](#def-J){reference-type="eqref" reference="def-J"}.
From Theorem [Theorem 16](#th-cts){reference-type="ref" reference="th-cts"}, the functional $\tilde{J}$ is of class $\mathcal{C}^1$. Its derivatives with respect to $\tilde{\xi}$ and $\tau$ are given in Proposition [Proposition 21](#prop-optcond){reference-type="ref" reference="prop-optcond"}. As mentioned in the proof of the latter, we have $\mathcal{H}_{\xi}\big(\tilde{y}_0,\tilde{y}_1, \tilde{\mathfrak{p}},\tilde{\xi}, \tilde{\zeta}_0, \tilde{\zeta}_1,\tilde{\pi}\big).\hat{\xi} = c'_{\xi}(\tilde{y}_0,\tilde{y}_1,\tilde{\xi}).\hat{\xi} - \langle \tilde{\zeta}_1; f'(\tilde{\xi}).\hat{\xi} \rangle_{\mathbf{L}^{p'}(\Omega),\mathbf{L}^p(\Omega)}$. We conclude by applying the Karush-Kuhn-Tucker conditions. ◻ ## Another formulation of the optimality conditions {#sec-optcond-final} The optimality conditions stated in Theorem [Theorem 23](#th-optcond){reference-type="ref" reference="th-optcond"} deal with transformed systems whose time variable is $s\in(0,2)$. For practical purposes, such as implementation, it might be more convenient to rewrite them in terms of variables satisfying systems whose time variable is $t\in (0,T)$. We state the second main result of the paper as a corollary to Theorem [Theorem 23](#th-optcond){reference-type="ref" reference="th-optcond"}. **Corollary 24**. *Let $(\tilde{\xi}, \tau) \in \mathcal{X}_{p,2}(\omega) \times (0,T)$ be an optimal solution of problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"}.
Then we have* *[\[endebis\]]{#endebis label="endebis"} $$\begin{aligned} c'_{\xi}(u,\dot{u},\xi) - f'(\xi)^{\ast}.\zeta_1 & = & 0, \label{ende3}\\ \displaystyle \int_0^T \left(\dot{\mu}_{\tau}\circ \mu^{-1} (t,\tau)\right) \mathcal{H}\big(u,\dot{u},\mathfrak{p}, \xi, \zeta_0, \zeta_1,\pi\big)(t)\, \mathrm{d}t & = & 0, \label{ende4}\end{aligned}$$* *where $(u,\dot{u},\mathfrak{p}, \xi) = (\mathbb{S}(\tilde{\xi}, \tau),\tilde{\xi})\circ \mu^{-1}(\cdot,\tau)$ satisfies [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"}, and $(\zeta_0,\zeta_1,\pi)$ is the solution of system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} associated with $(y_0,y_1,\mathfrak{p}) = (u,\dot{u},\mathfrak{p})$ in the sense of Definition [Definition 8](#def-trans-init){reference-type="ref" reference="def-trans-init"}.* *Proof.* Recall from section [4.1](#sec-transformation){reference-type="ref" reference="sec-transformation"} that we have $$\tilde{y}_0(\cdot,s) = u(\cdot,\mu(s,\tau)), \quad \tilde{y}_1(\cdot,s) = \dot{u}(\cdot,\mu(s,\tau)), \quad \tilde{\mathfrak{p}}(s) = \mathfrak{p}(\mu(s,\tau)), \quad \tilde{\xi}(\cdot,s) = \xi(\cdot,\mu(s,\tau)), \quad s\in[0,2], \label{change-var-proof}$$ so that $(u,\dot{u},\mathfrak{p})$ satisfies [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} if and only if $(\tilde{y}_0,\tilde{y}_1, \tilde{\mathfrak{p}})$ satisfies [\[sysmaintilde2\]](#sysmaintilde2){reference-type="eqref" reference="sysmaintilde2"}. Now introduce $$\zeta_0(\cdot,t) = \tilde{\zeta}_0(\cdot,\mu^{-1}(t,\tau)), \quad \zeta_1(\cdot,t) = \tilde{\zeta}_1(\cdot,\mu^{-1}(t,\tau)), \quad \pi(t) = \tilde{\pi}(\mu^{-1}(t,\tau)), \quad t\in(0,T),$$ where the notation $\mu^{-1}(t,\tau)$ refers to the inverse of $s\mapsto \mu(s,\tau)$.
Composing system [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} (satisfied by $(\tilde{\zeta}_0,\tilde{\zeta}_1,\tilde{\pi})$) with $\mu^{-1}(\cdot,\tau)$ yields system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} (satisfied by $(\zeta_0,\zeta_1,\pi)$) corresponding to $(y_0,y_1,\mathfrak{p}) = (u,\dot{u},\mathfrak{p})$. Then we compose [\[ende1\]](#ende1){reference-type="eqref" reference="ende1"} with $\mu^{-1}(\cdot,\tau)$ in order to obtain [\[ende3\]](#ende3){reference-type="eqref" reference="ende3"}, and use the change of variables formula in the integral of [\[ende2\]](#ende2){reference-type="eqref" reference="ende2"} in order to obtain [\[ende4\]](#ende4){reference-type="eqref" reference="ende4"}, which concludes the proof. ◻ **Remark 25**. *The term $\dot{\mu}_{\tau}\circ \mu^{-1}(t,\tau)$, which appears in [\[ende4\]](#ende4){reference-type="eqref" reference="ende4"}, is an Eulerian representation of the sensitivity of $\dot{\mu}$ with respect to $\tau$. Further, choosing $\mu$ as in section [4.1](#sec-transformation){reference-type="ref" reference="sec-transformation"}, this term is actually piecewise constant: $$\dot{\mu}_\tau(\mu^{-1}(t,\tau),\tau) = \left\{ \begin{array} {ll} 1 & \text{if } t\in [0,\tau), \\ 0 & \text{if } t \in (\tau, \tau+\varepsilon), \\ -1/(1-\tilde{\varepsilon}) & \text{if } t\in (\tau+\varepsilon, T]. \end{array} \right.$$* **Remark 26**.
*The functional $\tilde{J}(\tilde{\xi},\tau)$ of Problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"}, which deals with the variables $(\tilde{y}_0,\tilde{y}_1,\tilde{\xi})$, can also be expressed in terms of the original variables as $$J(\xi,\tau) = \int_0^T c(u,\dot{u}, \xi)\, \mathrm{d}t + \phi^{(1)}(u,\dot{u})(\tau) + \phi^{(2)}(u,\dot{u})(T), \label{myfunctional}$$ using the change of variables [\[change-var-proof\]](#change-var-proof){reference-type="eqref" reference="change-var-proof"}.* # Numerical illustrations {#sec-num} We propose to illustrate the optimality conditions obtained in Corollary [Corollary 24](#cor-optcond){reference-type="ref" reference="cor-optcond"} by performing numerical simulations that rely on finite element formulations for the space discretization. While a strong functional framework has been considered for the theoretical study of Problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"}, the variational formulation corresponding to the finite element discretization only requires weaker regularity. ## Variational formulations {#sec-variational} Solving numerically the optimality conditions for Problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"} -- for example those provided by Corollary [Corollary 24](#cor-optcond){reference-type="ref" reference="cor-optcond"} -- requires solving the state system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} and the adjoint system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}. Let us write their respective variational formulations, whose space discretizations lead to their respective finite element formulations. #### Weak formulation of the state system.
The variational formulation of the state system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} is given, for all test functions $\varphi \in \mathbf{W}^{1,p'}(\Omega)$ such that $\varphi_{|\Gamma_D} = 0$, and for all multipliers $\mathfrak{q} \in \mathbb{R}$, as follows: $$\begin{aligned} & & \langle \ddot{u}; \varphi\rangle_{\mathbf{L}^p(\Omega),\mathbf{L}^{p'}(\Omega)} + \kappa \langle \nabla \dot{u}; \nabla \varphi \rangle_{\mathbb{L}^p(\Omega),\mathbb{L}^{p'}(\Omega)} + \displaystyle\left\langle \frac{\partial\mathcal{W}}{\partial E}(E(u));(E'(u).\varphi) \right\rangle_{\mathbb{L}^p(\Omega),\mathbb{L}^{p'}(\Omega)} \nonumber \\ & & = \langle f; \varphi\rangle_{\mathbf{L}^p(\Omega),\mathbf{L}^{p'}(\Omega)} + \langle g; \varphi\rangle_{\mathbf{L}^p(\Gamma_N),\mathbf{L}^{p'}(\Gamma_N)} , \label{varfor1}\\ & & \mathfrak{q} \int_{\Omega} \mathrm{det}(\Phi(u))\, \mathrm{d}\Omega = \mathfrak{q} \int_{\Omega} \mathrm{det}(\Phi(u_0))\, \mathrm{d}\Omega,\end{aligned}$$ almost everywhere in $(0,T)$. We obtained the bilinear form associated with the strain energy by using the symmetry of the tensor $\displaystyle \frac{\partial\mathcal{W}}{\partial E}$, provided by Assumption $\mathbf{A2}$ (we refer to Remark [Remark 28](#remark-sym){reference-type="ref" reference="remark-sym"} for more details). Note that the Neumann condition [\[sysmain2\]](#sysmain2){reference-type="eqref" reference="sysmain2"} is implicitly contained in [\[varfor1\]](#varfor1){reference-type="eqref" reference="varfor1"}: by using the Green formula on [\[varfor1\]](#varfor1){reference-type="eqref" reference="varfor1"}, we deduce both [\[sysmain1\]](#sysmain1){reference-type="eqref" reference="sysmain1"} and [\[sysmain2\]](#sysmain2){reference-type="eqref" reference="sysmain2"}. #### Weak formulation of the adjoint system.
Using the Hamiltonian functional introduced in [\[eqHami\]](#eqHami){reference-type="eqref" reference="eqHami"}, we notice that the weak formulation of the adjoint system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} can be written as $$\left\{\begin{array} {rcl} -\dot{\zeta}_0 + \mathcal{H}_{y_0}(y_0,y_1, \mathfrak{p},\xi,\zeta_0, \zeta_1,\pi) = 0 & & \text{in $\Omega \times\left((0,\tau) \cup (\tau, T)\right)$},\\ -\dot{\zeta}_1 + \mathcal{H}_{y_1}(y_0,y_1,\mathfrak{p},\xi,\zeta_0, \zeta_1,\pi) = 0 & & \text{in $\Omega \times\left((0,\tau) \cup (\tau, T)\right)$}, \\ \mathcal{H}_{\mathfrak{p}}(y_0,y_1,\mathfrak{p},\xi,\zeta_0, \zeta_1,\pi) = 0 & & \text{in $\left((0,\tau) \cup (\tau, T)\right)$}, \end{array} \right.$$ where, using the Green formula, and denoting $X=(y_0,y_1, \mathfrak{p},\xi,\zeta_0, \zeta_1,\pi)$, we have $$\begin{array} {rcl} \mathcal{H}_{y_0}(X) .\varphi_0 & = & c'_{y_0}(y_0,y_1,\xi).\varphi_0 + \left\langle \nabla\zeta_1 ; \sigma_L(\nabla y_0).\nabla \varphi_0\right\rangle_{\mathbb{W}^{1,p}(\Omega)', \mathbb{W}^{1,p}(\Omega)} \\ & & + \left\langle \varphi_0 ; \pi\, \mathrm{cof}(\Phi(y_0))n \right\rangle_{\mathbf{L}^2(\Gamma_N)} + \left\langle \mathfrak{p}\, \zeta_1 \, ; (\sigma_N(\nabla y_0).\nabla \varphi_0)n\right\rangle_{\mathbf{L}^2(\Gamma_N)} \\ & = & c'_{y_0}(y_0,y_1,\xi).\varphi_0 - \left\langle \mathop{\mathrm{div}}(\sigma_L(\nabla y_0)^{\ast}. \nabla \zeta_1) ; \varphi_0\right\rangle_{\mathbf{L}^{p'}(\Omega), \mathbf{L}^{p}(\Omega)} \\ & & +\left\langle \varphi_0 ; ((\sigma_L+\mathfrak{p}\, \sigma_N)(\nabla y_0)^{\ast}.
\nabla \zeta_1)n + \pi\, \mathrm{cof}(\Phi(y_0))n \right\rangle_{\mathbf{L}^2(\Gamma_N)} , \\ \mathcal{H}_{y_1}(X) .\varphi_1 & = & c'_{y_1}(y_0,y_1,\xi).\varphi_1 - \langle \zeta_0 \, ; \varphi_1 \rangle_{\mathbf{W}^{2,p}(\Omega)', \mathbf{W}^{2,p}(\Omega)} + \kappa\left\langle \nabla\zeta_1 ; \nabla \varphi_1 \right\rangle_{\mathbb{W}^{1,p}(\Omega)', \mathbb{W}^{1,p}(\Omega)} \\ & = & c'_{y_1}(y_0,y_1,\xi).\varphi_1 - \langle \zeta_0 + \kappa \Delta \zeta_1 \, ; \varphi_1 \rangle_{\mathbf{W}^{2,p}(\Omega)', \mathbf{W}^{2,p}(\Omega)} + \left\langle \displaystyle\kappa\frac{\partial\zeta_1}{\partial n}\, ; \varphi_1 \right\rangle_{\mathbf{W}^{2-1/p,p}(\Gamma_N)',\mathbf{W}^{2-1/p,p}(\Gamma_N)}, \\ \mathcal{H}_{\mathfrak{p}}(X) & = & \left\langle \zeta_1 \, ; \mathrm{cof}(\Phi(y_0))n\right\rangle_{\mathbf{W}^{1/(p'),p}(\Gamma_N)',\mathbf{W}^{1/(p'),p}(\Gamma_N)}. \end{array}$$ It has actually been shown in [@Maxmax1] that the variational formulation of the adjoint system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} can be derived from the Hamiltonian functional as above. Note that the adjoint system is solved backward in time. It remains to comment on the initial values of system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}. #### Expressions of the initial value conditions for the adjoint system.
As mentioned in the introduction, we aim to maximize the variations of the pressure, namely $$\phi^{(1)}(u,\dot{u})(\tau) = (\mathfrak{p}(\tau+\varepsilon) - \mathfrak{p}(\tau))/\varepsilon,$$ where $\mathfrak{p}$ is a function of $(u,\dot{u})$, as follows $$\mathfrak{p} = -\frac{1}{|\Gamma_N|}\int_{\Gamma_N} (\mathrm{det}(\Phi(u)))^{-1} \Phi(u)^T \left( \kappa \frac{\partial\dot{u}}{\partial n} + \sigma(\nabla u) n \right) \mathrm{d}\Gamma_N.$$ The sensitivity of $\mathfrak{p}$ with respect to $u$ and $\dot{u}$ is given in the variational sense by $$\begin{array} {rcl} \displaystyle \frac{\partial\mathfrak{p}}{\partial u}.v & = & \displaystyle -\frac{1}{|\Gamma_N|}\int_{\Gamma_N} (\mathrm{det}(\Phi(u)))^{-1} \left( \left(\nabla v^T - (\Phi(u)^{-T}:\nabla v)\Phi(u)^T\right)\kappa \frac{\partial\dot{u}}{\partial n} + \Phi(u)^T\left(\sigma_L(\nabla u).\nabla v\right)n \right) \mathrm{d}\Gamma_N,\\[10pt] %& = & %\displaystyle %-\frac{1}{|\Gamma_N|}\int_{\Gamma_N} %(\det(\Phi(u)))^{-1} \left( %\kappa\nabla v^T \frac{\p \dot{u}}{\p n} %+ \Phi^T\left(\sigma_L(\nabla u).\nabla v\right)n \right) % \d \Gamma_N %\\ \displaystyle\frac{\partial\mathfrak{p}}{\partial\dot{u}}.\dot{v} & = & \displaystyle -\frac{\kappa}{|\Gamma_N|}\int_{\Gamma_N} (\mathrm{det}(\Phi(u)))^{-1} \Phi(u)^T \frac{\partial\dot{v}}{\partial n} \mathrm{d}\Gamma_N, \end{array}$$ and consequently the first-order derivatives of functional $\phi^{(1)}$ are expressed as $$\phi^{(1)'}_u(u,\dot{u}) = \frac{1}{\varepsilon}\left( \frac{\partial\mathfrak{p}}{\partial u}(\tau+\varepsilon) - \frac{\partial\mathfrak{p}}{\partial u}(\tau) \right), \quad \phi^{(1)'}_{\dot{u}}(u,\dot{u}) = \frac{1}{\varepsilon}\left( \frac{\partial\mathfrak{p}}{\partial\dot{u}}(\tau+\varepsilon) - \frac{\partial\mathfrak{p}}{\partial\dot{u}}(\tau)\right).$$ These expressions are needed when implementing the numerical solution of the adjoint system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}. 
#### Implementation of the jump conditions for the adjoint state. In practice we consider a subdivision of the time interval $(0,2)$ when addressing the optimality conditions given in Theorem [Theorem 23](#th-optcond){reference-type="ref" reference="th-optcond"}, or of the time interval $(0,T)$ when addressing those given in Corollary [Corollary 24](#cor-optcond){reference-type="ref" reference="cor-optcond"}. The adjoint system [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"} deals with jump conditions [\[sysadjoint4\]](#sysadjoint4){reference-type="eqref" reference="sysadjoint4"} at the fixed time $s=1$, and therefore if the time subdivision of the interval $(0,2)$ meets $s=1$, then implementing the jump conditions at $s=1$ does not present any difficulties. On the other hand, the adjoint system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} involves jumps [\[sysadjoint-init4\]](#sysadjoint-init4){reference-type="eqref" reference="sysadjoint-init4"} at the free time $t=\tau$, which in general does not coincide with a point of the time subdivision. Therefore we need to approximate the values of the right-hand sides of these jump conditions at time $\tau$. For example, if $\tau \in [t_i,t_{i+1}]$, where the $\{t_i\}_{i\in I}$ define the time subdivision, one can use the following linear approximation: $$\phi^{(1)}(y_0,y_1)(\tau) \approx \displaystyle \frac{t_{i+1}-\tau}{t_{i+1}-t_i}\phi^{(1)}(y_0,y_1)(t_i) + \frac{\tau-t_i}{t_{i+1}-t_i}\phi^{(1)}(y_0,y_1)(t_{i+1}).$$ Such an approximation introduces an error of order $1$ in the time scheme.
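This linear approximation can be sketched as follows; a minimal Python illustration, where the function and variable names are hypothetical and `phi_vals` holds the values of $\phi^{(1)}(y_0,y_1)$ on the time grid.

```python
import bisect

def interp_jump(phi_vals, t_grid, tau):
    """Linearly interpolate the jump datum at the free time tau from its
    values on the time grid (first-order approximation in the time step)."""
    # locate the interval [t_i, t_{i+1}] containing tau
    i = bisect.bisect_right(t_grid, tau) - 1
    i = min(max(i, 0), len(t_grid) - 2)
    t_i, t_ip1 = t_grid[i], t_grid[i + 1]
    w = (tau - t_i) / (t_ip1 - t_i)
    # convex combination with weights (t_{i+1}-tau)/(t_{i+1}-t_i) and (tau-t_i)/(t_{i+1}-t_i)
    return (1.0 - w) * phi_vals[i] + w * phi_vals[i + 1]
```

In a solver, this value would then be used as the jump imposed on the adjoint state at the grid node closest to $\tau$.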
In the numerical realizations presented in section [5.3](#sec-results){reference-type="ref" reference="sec-results"}, we chose to deal with the transformed systems [\[sysmaintilde\]](#sysmaintilde){reference-type="eqref" reference="sysmaintilde"} and [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"}, and thus with Problem [\[mainpbtilde\]](#mainpbtilde){reference-type="ref" reference="mainpbtilde"}. ## Algorithm {#sec-algorithm} We adopt the so-called *optimize-discretize* approach, meaning that we discretize the optimality conditions initially obtained in Corollary [Corollary 24](#cor-optcond){reference-type="ref" reference="cor-optcond"} for the continuous problem. The other approach would consist in first discretizing Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"}, and next deriving optimality conditions for the corresponding discretized problem. Note that the optimality conditions so derived would be specific to the discretization chosen for the state system and the objective functional. The optimality conditions obtained in Corollary [Corollary 24](#cor-optcond){reference-type="ref" reference="cor-optcond"} provide a gradient that must vanish, namely $$\mathcal{G}(\xi, \tau) := \left(\begin{array} {c} c'_{\xi}(u,\dot{u},\xi) - f'(\xi)^{\ast}.\zeta_1 \\ \displaystyle \int_0^T \left(\dot{\mu}_{\tau}\circ \mu^{-1} (t,\tau)\right) \mathcal{H}\big(u,\dot{u},\mathfrak{p}, \xi, \zeta_0, \zeta_1,\pi\big)(t)\, \mathrm{d}t \end{array} \right), \label{mygradient}$$ where $(u,\dot{u}, \mathfrak{p}) = (y_0,y_1,\mathfrak{p})$ satisfies [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} and $(\zeta_0,\zeta_1,\pi)$ satisfies [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}. We solve Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"} with a gradient method, more specifically the Barzilai-Borwein algorithm [@BarBor].
The corresponding method is given in Algorithm [\[algo-super\]](#algo-super){reference-type="ref" reference="algo-super"} below.

Initialization:

:   Initialize $(\xi_{0}, \tau_0)= (0,T/2)$.

Initial gradient:

:   From $(\xi_0, \tau_0)$, compute the (initial) gradient in 3 steps:

    -   Compute the state $(u,\dot{u},\mathfrak{p})$ corresponding to $\xi_0$, by solving [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"}.
    -   Compute the adjoint state $(\zeta_0,\zeta_1,\pi)$ corresponding to $(\xi_0,\tau_0)$, by solving [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"}.
    -   Compute the gradient $\mathcal{G}_0 := \mathcal{G}(\xi_0,\tau_0)$, using the expression [\[mygradient\]](#mygradient){reference-type="eqref" reference="mygradient"}.

    Store $(\xi_0, \tau_0)$ and $\mathcal{G}_0$.

Armijo rule:

:   Choose $\alpha = 0.5$.

    -   Find the smallest $n\in\mathbb{N}$ such that $J((\xi_0,\tau_0)-\alpha^n \mathcal{G}_0) < J(\xi_0,\tau_0)$, using expression [\[myfunctional\]](#myfunctional){reference-type="eqref" reference="myfunctional"}.
    -   Define $(\xi_1,\tau_1) = (\xi_0,\tau_0) - \alpha^n \mathcal{G}_0$.
    -   Compute the gradient $\mathcal{G}_1$ as above, corresponding to $(\xi_1,\tau_1)$.
    -   Store $(\xi_1,\tau_1)$ and $\mathcal{G}_1$.

Barzilai-Borwein gradient steps:

:   Initialization with $((\xi_0,\tau_0),\mathcal{G}_0)$ and $((\xi_1,\tau_1),\mathcal{G}_1)$.\
    Compute iteratively $(\xi_n,\tau_n)$ ($n\geq 2$) with the Barzilai-Borwein steps.\
    While $\| \mathcal{G}(\xi_n,\tau_n)\|_{\mathbf{L}^2(\Omega) \times \mathbb{R}} > 10^{-10}$, do gradient steps.

End:

:   Obtain $(\xi,\tau)$, an approximate solution of [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"}.
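To make the Barzilai-Borwein steps concrete, here is a minimal finite-dimensional sketch, assuming the discretized control and the time parameter have been gathered into a single vector; all names are hypothetical, and the Euclidean inner product stands in for the discrete $\mathbf{L}^2(\Omega)\times\mathbb{R}$ product.

```python
import numpy as np

def barzilai_borwein(grad, x0, x1, tol=1e-10, max_iter=500):
    """Barzilai-Borwein gradient iteration: the step length
    alpha_n = <s, s> / <s, y> with s = x_n - x_{n-1} and y = g_n - g_{n-1}
    satisfies a secant condition at negligible cost per iteration."""
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    g_prev = grad(x_prev)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:  # stopping test, as in the algorithm above
            break
        s, y = x - x_prev, g - g_prev
        alpha = s.dot(s) / s.dot(y) if s.dot(y) != 0.0 else 1e-3
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```

In our setting, each call to `grad` would amount to one state solve and one adjoint solve, followed by the evaluation of [\[mygradient\]](#mygradient){reference-type="eqref" reference="mygradient"}.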
## Implementation and results {#sec-results} Consider a 1D toy-model for which $\Omega = (0,1)$, $\Gamma_D = \{0\}$ and $\Gamma_N = \{1\}$. While the theoretical analysis provided in this article may *a priori* not apply to the 1D case, the goal of this subsection is to illustrate the implementation of a solution method for Problem [\[mainpb\]](#mainpb){reference-type="eqref" reference="mainpb"}. For the strain energy, we consider as an example the Saint Venant-Kirchhoff model (see section [7.2](#sec-app-energies){reference-type="ref" reference="sec-app-energies"}), namely $$\mathcal{W}(E) = \mu_L \mathrm{tr}(E^2) + \frac{\lambda_L}{2} \mathrm{tr}(E)^2, \quad \displaystyle \frac{\partial\mathcal{W}}{\partial E}(E) = 2\mu_L E + \lambda_L \mathrm{tr}(E) \mathrm{I}, \label{Lame-model}$$ where $\mu_L$ and $\lambda_L$ are given Lamé coefficients. We refer to section [7.2](#sec-app-energies){reference-type="ref" reference="sec-app-energies"} for further details on this model. As mentioned previously, the space discretization is realized with finite elements, more specifically P1-elements. The control is distributed on the subdomain $\omega = [0.75,1.00]$. The cost function and the objective functionals are chosen to be $$c(u,\dot{u},\xi) = -\displaystyle\frac{\alpha}{2}\|\xi\|_{\mathrm{L}^2(\omega)}^2, \quad \phi^{(1)}(u,\dot{u}) = \mathfrak{p}, \quad \phi^{(2)}(u,\dot{u}) = 0,$$ with $\alpha >0$. Regarding the choice of the functional $\phi^{(1)}$, unlike [\[sysmainpressure\]](#sysmainpressure){reference-type="eqref" reference="sysmainpressure"} where we aim at maximizing the variations of the pressure at some time $\tau \in (0,T)$, we rather aim at maximizing the pressure itself, because initially the pressure is equal to zero, in view of the choice we made for the initial conditions in Table [\[tab-param\]](#tab-param){reference-type="ref" reference="tab-param"}.
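As a quick sanity check of the energy/stress pair above, the stress formula can be compared with a centered finite-difference derivative of the energy. The sketch below uses the Lamé values of the experiments; the function names `W` and `stress` are ours:

```python
import numpy as np

mu_L, lam_L = 0.05, 0.05   # Lamé coefficients (the values used in the experiments)

def W(E):
    """Saint Venant-Kirchhoff strain energy W(E) = mu tr(E^2) + lambda/2 tr(E)^2."""
    return mu_L * np.trace(E @ E) + 0.5 * lam_L * np.trace(E) ** 2

def stress(E):
    """Claimed derivative dW/dE = 2 mu E + lambda tr(E) I."""
    return 2.0 * mu_L * E + lam_L * np.trace(E) * np.eye(E.shape[0])

# Centered finite differences of W on a random symmetric strain E.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
E = 0.5 * (M + M.T)
h = 1e-6
fd = np.zeros_like(E)
for i in range(3):
    for j in range(3):
        dE = np.zeros_like(E)
        dE[i, j] = h
        fd[i, j] = (W(E + dE) - W(E - dE)) / (2.0 * h)
```

Since $\mathcal{W}$ is quadratic in $E$, the centered difference agrees with the stress formula up to rounding.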
The time discretization for the state system [\[sysmain\]](#sysmain){reference-type="eqref" reference="sysmain"} corresponds to the Crank-Nicolson method (that is the $\theta$-method with $\theta = 0.5$), while the time discretization for the adjoint system [\[sysadjoint-init\]](#sysadjoint-init){reference-type="eqref" reference="sysadjoint-init"} is an implicit Euler scheme. At each time step, the nonlinearity due to the strain energy terms is treated with the Newton method. The choices for the different parameters are summarized in Table [\[tab-param\]](#tab-param){reference-type="ref" reference="tab-param"}. $$\begin{array} {|c|c|c|c|c|c|c|c|c|c|} \hline \alpha & \kappa & \lambda_L & \mu_L & g & u_0 & \dot{u}_0 & T & \text{time step} & \text{mesh size} \\ \hline 2\cdot 10^{-3} & 2\cdot 10^{-4} & 0.05 & 0.05 & 0 & 0 & 0 & 15.0 & 0.02 & 0.01\\ \hline \end{array}$$ Using Algorithm [\[algo-super\]](#algo-super){reference-type="ref" reference="algo-super"}, we obtain that the optimal time parameter is approximately $\tau \approx 7.9$. We provide snapshots representing the time evolution of the different variables in Figures [\[figControl\]](#figControl){reference-type="ref" reference="figControl"}, [\[figDisp\]](#figDisp){reference-type="ref" reference="figDisp"} and [\[figVelo\]](#figVelo){reference-type="ref" reference="figVelo"}. The time evolution of the pressure is represented in Figure [1](#fig-pressure){reference-type="ref" reference="fig-pressure"}. Note that in view of the parameters given in Table [\[tab-param\]](#tab-param){reference-type="ref" reference="tab-param"}, with no control we would obtain the trivial states $u=0$ and $\dot{u} =0$ on $(0,T)$.
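The time-stepping choices above ($\theta$-method, with the implicit stage solved by Newton's method) can be sketched on a stand-in nonlinear second-order ODE. The toy dynamics, step size and tolerances below are illustrative only, not the discretized elastodynamics system:

```python
import numpy as np

def theta_step(F, dF, y, dt, theta=0.5, newton_tol=1e-12, max_newton=20):
    """One theta-method step for dy/dt = F(y): theta = 0.5 is Crank-Nicolson
    (used for the state), theta = 1 is implicit Euler (used for the adjoint).
    The implicit stage is solved with Newton's method, as in the text."""
    z = y.copy()                          # Newton initial guess
    for _ in range(max_newton):
        R = z - y - dt * (theta * F(z) + (1.0 - theta) * F(y))   # residual
        if np.linalg.norm(R) < newton_tol:
            break
        J = np.eye(len(y)) - dt * theta * dF(z)                  # Jacobian
        z = z - np.linalg.solve(J, R)
    return z

# Stand-in nonlinear oscillator u'' + u + u^3 = 0, written as y = (u, u'),
# a toy surrogate for the nonlinear state system.
F = lambda y: np.array([y[1], -y[0] - y[0] ** 3])
dF = lambda y: np.array([[0.0, 1.0], [-1.0 - 3.0 * y[0] ** 2, 0.0]])

dt, y = 0.02, np.array([1.0, 0.0])
energy0 = 0.5 * y[1] ** 2 + 0.5 * y[0] ** 2 + 0.25 * y[0] ** 4
for _ in range(500):                      # integrate to T = 10 with Crank-Nicolson
    y = theta_step(F, dF, y, dt, theta=0.5)
energy = 0.5 * y[1] ** 2 + 0.5 * y[0] ** 2 + 0.25 * y[0] ** 4
```

On this conservative toy problem the symmetric choice $\theta = 0.5$ keeps the energy drift small over long times, which is one reason for preferring it on the state system.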
![image](./images/control1.png) $t = 0.02$ ![image](./images/control50.png) $t = 1.00$ ![image](./images/control75.png) $t = 1.5$ ![image](./images/control100.png) $t = 2.0$ \
![image](./images/control125.png) $t = 2.5$ ![image](./images/control150.png) $t = 3.0$ ![image](./images/control175.png) $t = 3.5$ ![image](./images/control200.png) $t = 4.0$ \
![image](./images/control225.png) $t = 4.5$ ![image](./images/control250.png) $t = 5.0$ ![image](./images/control275.png) $t = 5.5$ ![image](./images/control300.png) $t = 6.0$ \
![image](./images/control325.png) $t = 6.5$ ![image](./images/control350.png) $t = 7.0$ ![image](./images/control373.png) $t = 7.46$ ![image](./images/control374.png) $t = 7.48$

In Figure [\[figControl\]](#figControl){reference-type="ref" reference="figControl"} we observe that the control function, distributed on the segment $[0.75,1.00]$, is sparse in time, in the sense that it becomes inactive from $t=7.48$, a short time before the optimal time parameter $t=\tau \approx 7.9$. This time sparsity is explained by the fact that the terminal objective functional $\phi^{(2)}$ is chosen to be equal to zero. The fact that the control function becomes inactive a short time before $\tau$ could be explained by the propagation effect that makes the control useless during a (short) time interval before $\tau$. Note also that the sign of the control function changes rapidly, which we could explain by the necessity of creating a wave phenomenon (see Figure [\[figVelo\]](#figVelo){reference-type="ref" reference="figVelo"}).
![image](./images/state1.png) $t = 0.02$ ![image](./images/state50.png) $t = 1.00$ ![image](./images/state125.png) $t = 2.5$ ![image](./images/state150.png) $t = 3.0$ \
![image](./images/state200.png) $t = 4.0$ ![image](./images/state250.png) $t = 5.0$ ![image](./images/state300.png) $t = 6.0$ ![image](./images/state315.png) $t = 6.3$ \
![image](./images/state350.png) $t = 7.0$ ![image](./images/state400.png) $t = 8.0$ ![image](./images/state450.png) $t = 9.0$ ![image](./images/state500.png) $t = 10.0$ \
![image](./images/state600.png) $t = 12.0$ ![image](./images/state650.png) $t = 13.0$ ![image](./images/state700.png) $t = 14.0$ ![image](./images/state750.png) $t = 15.0$

In Figure [\[figDisp\]](#figDisp){reference-type="ref" reference="figDisp"}, where the time evolution of the displacement field is represented, we observe that the state $u$ tends to become steady, and then defines a deformed domain $(\mathrm{Id}+u)(\Omega)$ of steady shape. ![image](./images/velo1.png) $t = 0.02$ ![image](./images/velo50.png) $t = 1.00$ ![image](./images/velo125.png) $t = 2.5$ ![image](./images/velo150.png) $t = 3.0$ \
![image](./images/velo200.png) $t = 4.0$ ![image](./images/velo250.png) $t = 5.0$ ![image](./images/velo300.png) $t = 6.0$ ![image](./images/velo315.png) $t = 6.3$ \
![image](./images/velo350.png) $t = 7.0$ ![image](./images/velo400.png) $t = 8.0$ ![image](./images/velo450.png) $t = 9.0$ ![image](./images/velo500.png) $t = 10.0$ \
![image](./images/velo600.png) $t = 12.0$ ![image](./images/velo650.png) $t = 13.0$ ![image](./images/velo700.png) $t = 14.0$ ![image](./images/velo750.png) $t = 15.0$

The influence of the control is more visible on the time evolution of the velocity field $\dot{u}$ represented in Figure [\[figVelo\]](#figVelo){reference-type="ref" reference="figVelo"}: at the beginning the state $\dot{u}$ becomes non-positive, before changing sign in order to create the profile of an oscillation.
Next, after $t=\tau$, it adopts a profile that is translated to the left. ![Values of the pressure variable.](./images/pressure.png){#fig-pressure}

In Figure [1](#fig-pressure){reference-type="ref" reference="fig-pressure"} we observe that under the action of the control, the pressure presents small oscillations, until approximately $t=5$ when it starts increasing steeply, reaching its first maximum around $t \approx 6.3$. Next the pressure decreases, before bouncing back and increasing even more steeply, reaching another maximum around $t=\tau \approx 7.9$. Next, the pressure oscillates while decreasing, and seems to reach a pseudo-steady state from $t=13.5$. The final value of the pressure is still larger than its initial value (that is $0$). # Conclusion {#sec-conclusion} In this article we proposed a mathematical framework for the modeling of the mechanical aspects of defibrillation, based on the application of a distributed control on a part of the heart tissue. In particular, we developed an approach based on optimal control theory, in order to maximize a class of functionals at a free time parameter, which is optimized along with the distributed control. We were able to rigorously derive first-order optimality conditions, which can be exploited for numerical realizations. We believe that our approach can pave the way to the development of robust and realistic numerical realizations. Further, based on the mathematical analysis we provide for the elastodynamics system with global injectivity condition, other physics-related aspects of the defibrillation problem, such as the electrical activity of the heart tissue, could also be coupled to it and addressed in the same fashion. Let us finally mention that the complexity and the inherent technicalities seem *a priori* to be unavoidable, as we aim at modeling phenomena with hyperelastic behavior.
# Appendix {#sec-apendix} ## Proofs of intermediate results This subsection is dedicated to the technical proofs of results related to the wellposedness of linear systems, namely Proposition [Proposition 3](#prop-well-auto){reference-type="ref" reference="prop-well-auto"} and Corollary [Corollary 4](#coro-well-auto-NH){reference-type="ref" reference="coro-well-auto-NH"}. The methodology is similar to that of [@Court2023], but it deals with different systems. ### Proof of Proposition [Proposition 3](#prop-well-auto){reference-type="ref" reference="prop-well-auto"} {#sec-app-tek1} Introduce first the following Hilbert spaces $$\mathcal{V}(\Omega) = \left\{ v \in \mathbf{H}^1(\Omega) \mid v_{|\Gamma_D} = 0, \ \displaystyle \int_{\Gamma_N} v\cdot n \, \mathrm{d}\Gamma_N = 0 \right\}, \quad \mathcal{V}(\Gamma_N) = \left\{ v \in \mathbf{H}^{1/2}(\Gamma_N) \mid \displaystyle \int_{\Gamma_N} v\cdot n \, \mathrm{d}\Gamma_N = 0 \right\}.$$ Remark that these spaces are included in those introduced in Definition [Definition 1](#def-ass-op){reference-type="ref" reference="def-ass-op"}, namely $\mathcal{V}_0(\Omega)$ and $\mathcal{V}_0(\Gamma_N)$, respectively. We characterize the boundary functionals that vanish on $\mathcal{V}(\Gamma_N)$: **Lemma 27**. *A function $\chi \in \mathbf{H}^{-1/2}(\Gamma_N)$ satisfies $\chi = 0$ in $\mathcal{V}(\Gamma_N)'$ if and only if there exists $\mathfrak{p} \in \mathbb{R}$ such that $\chi = \mathfrak{p}\, n$.* *Proof.* Set $\mathfrak{p} = \displaystyle \frac{1}{|\Gamma_N|} \int_{\Gamma_N} \chi \cdot n\, \mathrm{d}\Gamma_N$. It is easy to verify that $\chi -\mathfrak{p}\, n = 0$ in $\mathbf{H}^{-1/2}(\Gamma_N)$, which ends the proof.
◻ Define the following operators $$\langle Av ; \varphi \rangle_{\mathcal{V}(\Omega)', \mathcal{V}(\Omega)} = \displaystyle \kappa\int_{\Omega} \nabla v : \nabla \varphi \, \mathrm{d}\Omega, \quad \langle Bw; \psi\rangle_{\mathcal{V}(\Omega)', \mathcal{V}(\Omega)} = \displaystyle \int_{\Omega} (\sigma_L(0).\nabla w):\nabla \psi \, \mathrm{d}\Omega.$$ Denoting $y_0=u$, $y_1 = \dot{u}$ and $y:=(y_0,y_1)^T$, the variational formulation of system [\[syslih\]](#syslih){reference-type="eqref" reference="syslih"} is given for all $\varphi = (\varphi_0,\varphi_1) \in \mathbf{L}^2(\Omega) \times\mathcal{V}(\Omega)$ as follows: $$\begin{aligned} %\begin{array} {rcl} \langle\dot{y}, \varphi\rangle_{\mathbf{L}^2(\Omega)\times \mathcal{V}(\Omega)', \mathbf{L}^2(\Omega)\times \mathcal{V}(\Omega)} & = & \langle \dot{y}_0,\varphi_0 \rangle_{\mathbf{L}^2(\Omega)} + \langle \dot{y}_1,\varphi_1 \rangle_{\mathcal{V}(\Omega)',\mathcal{V}(\Omega)} \nonumber\\ & = & \langle y_1,\varphi_0 \rangle_{\mathbf{L}^2(\Omega)} - \kappa \langle \nabla y_1, \nabla \varphi_1\rangle_{\mathbb{L}^2(\Omega)} -\langle \sigma_L(0).\nabla y_0,\nabla \varphi_1 \rangle_{\mathbb{L}^2(\Omega)} \nonumber \\ & & + \langle f, \varphi_1\rangle_{\mathcal{V}(\Omega)',\mathcal{V}(\Omega)} + \langle g, \varphi_1 \rangle_{\mathcal{V}(\Gamma_N)',\mathcal{V}(\Gamma_N)}, \nonumber\\ \langle\dot{y}, \varphi\rangle_{\mathbf{L}^2(\Omega)\times \mathcal{V}(\Omega)', \mathbf{L}^2(\Omega)\times \mathcal{V}(\Omega)} & = & \langle y_1,\varphi_0 \rangle_{\mathbf{L}^2(\Omega)} - \langle A y_1, \varphi_1\rangle_{\mathcal{V}(\Omega)', \mathcal{V}(\Omega)} -\langle B y_0, \varphi_1 \rangle_{\mathcal{V}(\Omega)', \mathcal{V}(\Omega)} \nonumber\\ & & + \langle f, \varphi_1\rangle_{\mathcal{V}(\Omega)',\mathcal{V}(\Omega)} + \langle g, \varphi_1 \rangle_{\mathcal{V}(\Gamma_N)',\mathcal{V}(\Gamma_N)}. 
\label{varform} %\end{array}\end{aligned}$$ Using Assumption $\mathbf{A3}$, we deduce that there exists $y_1 = \dot{u}\in \mathrm{L}^2(0,T;\mathcal{V}(\Omega)) \cap \mathrm{H}^1(0,T; \mathcal{V}(\Omega)')$ such that [\[varform\]](#varform){reference-type="eqref" reference="varform"} holds for all $\varphi \in \mathcal{V}(\Omega)$, almost everywhere in $(0,T)$. After integration by parts, we obtain $$\ddot{u} -\kappa \Delta \dot{u} - \mathop{\mathrm{div}}(\sigma_L(0).\nabla u)- f = 0 \quad \text{in } \mathcal{V}(\Omega)', \qquad \text{and} \qquad \kappa \frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u)n- g = 0 \quad \text{in } \mathcal{V}(\Gamma_N)'.$$ From the second equation we deduce by Lemma [Lemma 27](#lemma-dual){reference-type="ref" reference="lemma-dual"} the existence of $\mathfrak{p}(t) \in \mathbb{R}$ such that $$\kappa \frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u)n + \mathfrak{p}\, n- g = 0 \quad \text{in } \mathbf{H}^{-1/2}(\Gamma_N).$$ Note that differentiating [\[syslih2\]](#syslih2){reference-type="eqref" reference="syslih2"}-[\[syslih4\]](#syslih4){reference-type="eqref" reference="syslih4"} in time yields $\langle \dot{u},n\rangle_{\mathbf{H}^{-1/2}(\Gamma_N),\mathbf{H}^{1/2}(\Gamma_N)} = 0$ and $\dot{u} = 0$ on $\Gamma_D$ for almost every $t\in[0,T]$.
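The way Lemma 27 produces the pressure can be mimicked discretely: on a quadrature surrogate of $\Gamma_N$ (the weights and normals below are synthetic and purely illustrative), a constant multiple of the normal annihilates every test field of zero normal mean, and the constant $\mathfrak{p}$ is recovered by averaging against $n$:

```python
import numpy as np

# Synthetic quadrature surrogate for Gamma_N: weights w_k and unit normals n_k.
rng = np.random.default_rng(1)
K = 50
w = np.full(K, 1.0 / K)                                   # |Gamma_N| = sum(w) = 1
angles = rng.uniform(0.0, 2.0 * np.pi, K)
n = np.stack([np.cos(angles), np.sin(angles)], axis=1)    # unit normals
area = w.sum()

pair = lambda a, b: (w * np.einsum('ki,ki->k', a, b)).sum()   # int_GammaN a . b

# 'If' direction of the lemma: p*n pairs to zero with any v such that int v.n = 0.
p0 = 0.37                                                 # arbitrary constant
v = rng.normal(size=(K, 2))
v = v - (pair(v, n) / area) * n                           # enforce int v.n = 0
pairing = pair(p0 * n, v)                                 # vanishes (up to rounding)

# Recovery formula of the proof: p = (1/|Gamma_N|) int chi . n, with chi = p0 n.
chi = p0 * n
p = pair(chi, n) / area
residual = np.abs(chi - p * n).max()
```

This is only a finite-dimensional analogue of the duality argument, not a discretization of the trace spaces involved.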
Taking the scalar product of [\[syslih1\]](#syslih1){reference-type="eqref" reference="syslih1"} with $\dot{u}$ and using the Green formula, we obtain $$\frac{\mathrm{d}}{\mathrm{d}t} \left( \frac{1}{2}\|\dot{u}\|_{\mathbf{L}^2(\Omega)}^2 + \frac{1}{2}\int_{\Omega} \sigma_L(0).\nabla u: \nabla u \, \mathrm{d}\Omega \right) + \kappa \| \nabla \dot{u}\|_{\mathbb{L}^2(\Omega)}^2 = \langle f,\dot{u} \rangle_{\mathbf{L}^2(\Omega)} + \langle g,\dot{u} \rangle_{\mathbf{H}^{1/2}(\Gamma_N),\mathbf{H}^{-1/2}(\Gamma_N)}.$$ Integrating this equality in time and using Assumption $\mathbf{A2}$, we deduce via Young's inequality $$\nabla u \in \mathrm{L}^{\infty}(0,T;\mathbb{L}^2(\Omega)), \quad \dot{u} \in \mathrm{L}^{\infty}(0,T;\mathbf{L}^2(\Omega)), \quad \nabla \dot{u} \in \mathrm{L}^2(0,T;\mathbb{L}^2(\Omega)).$$ Next, proceeding as previously with $\ddot{u}$ in the role of $\dot{u}$, we obtain similarly $$\begin{array}{rl} \displaystyle\|\ddot{u}\|_{\mathrm{L}^2(0,T;\mathbf{L}^2(\Omega))}^2 + \frac{\kappa}{2} \|\nabla \dot{u}(T)\|_{\mathbb{L}^2(\Omega)}^2 + \displaystyle\int_0^T \langle\sigma_L(0).\nabla u , \nabla \ddot{u}\rangle_{\mathcal{V}(\Omega),\mathcal{V}(\Omega)'} \mathrm{d}t = & \displaystyle \frac{\kappa}{2} \|\nabla \dot{u}_0\|_{\mathbb{L}^2(\Omega)}^2 + \langle f,\ddot{u}\rangle_{\mathbf{L}^2(\Omega)} \\ & + \langle g,\ddot{u} \rangle_{\mathbf{H}^{1/2}(\Gamma_N),\mathbf{H}^{-1/2}(\Gamma_N)}.
\end{array}$$ Further, by integration by parts on $(0,T)$, we deduce $$\|\ddot{u}\|_{\mathrm{L}^2(0,T;\mathbf{L}^2(\Omega))}^2 + \frac{\kappa}{2} \|\nabla \dot{u}(T)\|_{\mathbb{L}^2(\Omega)}^2 = \int_0^T \langle\sigma_L(0).\nabla \dot{u} , \nabla \dot{u}\rangle_{\mathbb{L}^2(\Omega)} \mathrm{d}t + \langle f,\ddot{u}\rangle_{\mathbf{L}^2(\Omega)} + \langle g,\ddot{u} \rangle_{\mathbf{H}^{1/2}(\Gamma_N),\mathbf{H}^{-1/2}(\Gamma_N)},$$ up to terms controlled by the initial data, which shows that $$\ddot{u} \in \mathrm{L}^2(0,T;\mathbf{L}^2(\Omega)), \quad \nabla \dot{u} \in \mathrm{L}^{\infty}(0,T;\mathbb{L}^2(\Omega)).$$ Further, from [\[syslih1\]](#syslih1){reference-type="eqref" reference="syslih1"} we have $-\kappa \Delta \dot{u} = -\ddot{u} + \mathop{\mathrm{div}}(\sigma_L(0).\nabla u) + f \in \mathrm{L}^2(0,T;\mathbf{L}^2(\Omega))$, which yields $\dot{u} \in \mathrm{L}^2(0,T;\mathbf{H}^2(\Omega))$ and therefore $$\dot{u} \in \mathrm{L}^2(0,T;\mathbf{H}^2(\Omega)) \cap \mathrm{H}^1(0,T;\mathbf{L}^2(\Omega)).$$ Thus system [\[syslih\]](#syslih){reference-type="eqref" reference="syslih"} enjoys the $L^p$-maximal regularity property for $p=2$, and so for any $p>3$, namely $$\begin{array} {l} \dot{u} \in \mathrm{L}^p(0,T;\mathbf{H}^2(\Omega)) \cap \mathrm{W}^{1,p}(0,T;\mathbf{L}^2(\Omega)),\\ \begin{array} {rcl} \|\dot{u}\|_{\mathrm{L}^p(0,T;\mathbf{H}^2(\Omega)) \cap \mathrm{W}^{1,p}(0,T;\mathbf{L}^2(\Omega))} & \leq & \displaystyle C \left( \|f\|_{\mathrm{L}^p(0,T;\mathbf{L}^2(\Omega))} +\|g\|_{\mathrm{L}^p(0,T;\mathbf{H}^{1/2}(\Gamma_N))\cap\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{L}^2(\Gamma_N))} \right. \\ & & \displaystyle \left. + \|(u_0,\dot{u}_0)\|_{\mathbf{H}^2(\Omega)\times \mathbf{H}^{2/(p')}(\Omega)} \right).
\end{array} \end{array}$$ Further, estimate [\[est-trace-emb3\]](#est-trace-emb3){reference-type="eqref" reference="est-trace-emb3"} yields also $$\begin{array} {rcl} \displaystyle \left\|\frac{\partial\dot{u}}{\partial n} \right\|_{\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{H}^{1/2-1/p}(\Gamma_N)')} & \leq & C \left(\|f\|_{\mathrm{L}^p(0,T;\mathbf{L}^2(\Omega))} +\|g\|_{\mathrm{L}^p(0,T;\mathbf{H}^{1/2}(\Gamma_N))\cap\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{L}^2(\Gamma_N))} \right. \\ & & \displaystyle \left. + \|(u_0,\dot{u}_0)\|_{\mathbf{H}^2(\Omega)\times \mathbf{H}^{2/(p')}(\Omega)} \right). \end{array} \label{est-norm-deriv}$$ Since the coefficients of $\sigma_L(0)$ are in $\mathrm{L}^{\infty}(\Omega)$, we also have $$\displaystyle \|(\sigma_L(0).\nabla u)n \|_{\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{H}^{1/2-1/p}(\Gamma_N)')} \leq C \left( \|f\|_{\mathrm{L}^p(0,T;\mathbf{L}^2(\Omega))} +\|g\|_{\mathrm{L}^p(0,T;\mathbf{H}^{1/2}(\Gamma_N))\cap\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{L}^2(\Gamma_N))} \right). 
\label{est-norm-deriv2}$$ Next, using equation [\[syslih15\]](#syslih15){reference-type="eqref" reference="syslih15"}, since $n\in \mathbf{W}^{2-1/p,p}(\Gamma_N) \hookrightarrow \mathbf{H}^{2-1/p}(\Gamma_N) \hookrightarrow \mathbf{H}^{1/2-1/p}(\Gamma_N)$ we deduce for the pressure $$\begin{array} {l} \mathfrak{p} = \displaystyle \frac{1}{|\Gamma_N|}\left( \int_{\Gamma_N} g \cdot n \, \mathrm{d}\Gamma_N -\left\langle\kappa\frac{\partial\dot{u}}{\partial n}+ (\sigma_L(0).\nabla u)n; n\right\rangle_{\mathbf{H}^{1/2-1/p}(\Gamma_N)',\mathbf{H}^{1/2-1/p}(\Gamma_N)} \right) \in \mathrm{W}^{1/(2p'),p}(0,T;\mathbb{R}), \\ \displaystyle \| \mathfrak{p}\|_{\mathrm{W}^{1/(2p'),p}(0,T;\mathbb{R})} \leq C\left( \|g\|_{\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{L}^2(\Gamma_N))} + \left\|\kappa\frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u)n \right\|_{\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{H}^{1/2-1/p}(\Gamma_N)')} \right). \end{array}$$ Combined with estimates [\[est-norm-deriv\]](#est-norm-deriv){reference-type="eqref" reference="est-norm-deriv"}-[\[est-norm-deriv2\]](#est-norm-deriv2){reference-type="eqref" reference="est-norm-deriv2"}, this yields $$\|\mathfrak{p} \|_{\mathcal{P}_{p,T}} \leq C \left( \|f\|_{\mathrm{L}^p(0,T;\mathbf{L}^2(\Omega))} +\|g\|_{\mathrm{L}^p(0,T;\mathbf{H}^{1/2}(\Gamma_N))\cap\mathrm{W}^{1/(2p'),p}(0,T;\mathbf{L}^2(\Gamma_N))} + \|(u_0,\dot{u}_0)\|_{\mathbf{H}^2(\Omega)\times \mathbf{H}^{2/(p')}(\Omega)} \right).
\label{est-p-prop32}$$ Therefore from equation [\[syslih15\]](#syslih15){reference-type="eqref" reference="syslih15"} we have $\displaystyle \kappa\frac{\partial\dot{u}}{\partial n} + (\sigma_L(0).\nabla u)n = g -\mathfrak{p}\, n \in \mathcal{G}_{p,T}(\Gamma_N)$, and then we deduce the regularity of $(u,\dot{u})$ in $\mathcal{U}_{p,T}(\Omega) \times \dot{\mathcal{U}}_{p,T}(\Omega)$ from Proposition [Proposition 3](#prop-well-auto){reference-type="ref" reference="prop-well-auto"}. Further, estimate [\[est-prop32\]](#est-prop32){reference-type="eqref" reference="est-prop32"} yields $$\begin{array} {rcl} \|u\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{p}\|_{\mathcal{P}_{p,T}} & \leq & C\left(\|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} + \|f\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \| \mathfrak{p}\, n\|_{\mathcal{G}_{p,T}(\Gamma_N)} \right) \\ & \leq & C\left(\|(u_0,\dot{u}_0)\|_{\mathcal{U}^{(0,1)}_p(\Omega)} + \|f\|_{\mathcal{F}_{p,T}(\Omega)} + \|g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \| \mathfrak{p}\|_{\mathcal{P}_{p,T}} \right). \end{array}$$ Combined with [\[est-p-prop32\]](#est-p-prop32){reference-type="eqref" reference="est-p-prop32"}, this yields the announced estimate and completes the proof. ### Proof of Corollary [Corollary 4](#coro-well-auto-NH){reference-type="ref" reference="coro-well-auto-NH"} {#sec-app-tek2} Notice that the constraint [\[siszeroNH3\]](#siszeroNH3){reference-type="eqref" reference="siszeroNH3"} can also be written as $$\int_{\Gamma_N} \left(u-\frac{1}{|\Gamma_N|}hn\right) \cdot n\, \mathrm{d}\Gamma_N = 0 \quad \text{in $(0,T)$.}$$ We then proceed with a lifting method. We need to define an extension of $\displaystyle \frac{1}{|\Gamma_N|}hn$ in $\Omega$. Let us first define extensions of $\displaystyle \frac{1}{|\Gamma_N|}h(0)n$ and $\displaystyle \frac{1}{|\Gamma_N|}\dot{h}(0)n$. Recall that $h(0) \in \mathbb{R}$ and $\dot{h}(0)\in \mathbb{R}$ do not depend on the space variable.
Since by assumption $n \in \mathbf{W}^{2-1/p,p}(\Gamma_N) \hookrightarrow\mathbf{W}^{2/{p'}-1/p,p}(\Gamma_N)$, there exists $H_0 \in \mathbf{W}^{2,p}(\Omega)$ and $\dot{H}_0 \in \mathbf{W}^{2/{p'},p}(\Omega)$ respective extensions of $\displaystyle \frac{1}{|\Gamma_N|} h(0)n$ and $\displaystyle \frac{1}{|\Gamma_N|} \dot{h}(0)n$ such that [\[lifteq\]]{#lifteq label="lifteq"} $$\begin{aligned} %\begin{array} {l} {H_0}_{| \Gamma_N} = \displaystyle \frac{1}{|\Gamma_N|} h(0)n, \qquad \| H_0 \|_{\mathbf{W}^{2,p}(\Omega)} \leq C\|h(0) \|_{\mathbb{R}},\label{lifteq1}\\[5pt] {\dot{H}}_{0| \Gamma_N} = \displaystyle \frac{1}{|\Gamma_N|} \dot{h}(0)n, \qquad \| \dot{H}_0 \|_{\mathbf{W}^{2/{p'},p}(\Omega)} \leq C\|\dot{h}(0) \|_{\mathbb{R}}.\label{lifteq2} %\end{array}\end{aligned}$$ We now define an extension $H$ of $\displaystyle \frac{1}{|\Gamma_N|}hn$ by solving first the following heat equation with mixed boundary conditions, dealing with $\dot{H}$ as unknown: $$%\begin{array} {rcl} \ddot{H}- \kappa\Delta \dot{H} = 0 \text{ in } \Omega \times (0,T), \quad \dot{H}_{|\Gamma_N} = \displaystyle \frac{1}{|\Gamma_N|}\dot{h} n \ \text{on } \Gamma_N \times (0,T), \quad \dot{H} = 0 \ \text{on } \Gamma_D \times (0,T), \quad \dot{H}(0) = \dot{H}_0 \ \text{in } \Omega. %\end{array} %\right.$$ Since $n \in \mathbf{W}^{2-1/p,p}(\Gamma_N)$, we have $\displaystyle \frac{1}{|\Gamma_N|}\dot{h}n \in \mathcal{H}_{p,T}(\Gamma_N)$. From [@Pruess2002] the solution of this equation satisfies $$\| \dot{H}\|_{\dot{\mathcal{U}}_{p,T}(\Omega)} \leq C\left( \| \dot{h}n\|_{\mathcal{H}_{p,T}(\Gamma_N)} + \| \dot{H}_0\|_{\mathbf{W}^{2/{p'},p}(\Omega)}\right) \leq C\left( \|\dot{h} \|_{\mathrm{W}^{1-1/{2p},p}(0,T;\mathbb{R})} + \|\dot{h}(0)\|_{\mathbb{R}}\right) = C\|\dot{h}\|_{\mathcal{H}_{p,T}}, \label{estliftx}$$ where we have used [\[lifteq2\]](#lifteq2){reference-type="eqref" reference="lifteq2"}. 
Using [\[est-trace-emb2\]](#est-trace-emb2){reference-type="eqref" reference="est-trace-emb2"}, we deduce in particular $$\|\dot{H}(0)\|_{\mathbf{W}^{2/{p'},p}(\Omega)} %+ \|\dot{\bar{h}}\|_{\mathcal{F}_{p,T}(\Omega)} +\kappa\left\|\frac{\partial\dot{H}}{\partial n}\right\|_{\mathcal{G}_{p,T}(\Gamma_N)} \leq C \|\dot{h} \|_{\mathcal{H}_{p,T}}. \label{est-dirichlet-hbar}$$ Further, we set $H(\cdot,t) = H_0 + \displaystyle \int_0^t \dot{H}(\cdot,s)\mathrm{d}s$, which implies $$\|H\|_{\mathcal{U}_{p,T}(\Omega)} \leq C\left( \|H_0\|_{\mathbf{W}^{2,p}(\Omega)} + \|\dot{H}\|_{\dot{\mathcal{U}}_{p,T}(\Omega)} \right) \leq C \left( \|h(0)\|_{\mathbb{R}} + \|\dot{h}\|_{\mathcal{H}_{p,T}} \right), \label{est-HU}$$ where we have used [\[lifteq1\]](#lifteq1){reference-type="eqref" reference="lifteq1"} and [\[estliftx\]](#estliftx){reference-type="eqref" reference="estliftx"}, and thus we obtain in particular $$%\begin{array} {rcl} \|H\|_{\mathrm{L}^p(0,T;\mathbf{W}^{2,p}(\Omega))} +\|(\sigma_L(0).\nabla H)n\|_{\mathcal{G}_{p,T}(\Gamma_N)} \displaystyle %+ \kappa\left\|\frac{\p \dot{H}}{\p n}\right\|_{\mathcal{G}_{p,T}(\Gamma_N)} %& \leq & C\left( %\|H_0\|_{\WW^{2,p}(\Omega)} %+ \|\dot{H}\|_{\dot{\mathcal{U}}_{p,T}(\Omega)} %\right) \\ \leq C\left( \|h(0)\|_{\mathbb{R}} + \| \dot{h}\|_{\mathcal{H}_{p,T}} \right). %\end{array} \label{est-dirichlet-hbar2}$$ Now define $\bar{u} := u - H$. 
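The step $H(\cdot,t) = H_0 + \int_0^t \dot{H}(\cdot,s)\,\mathrm{d}s$ used above is straightforward to realize discretely. The following sketch (with a synthetic, purely illustrative $\dot{H}$ in place of the heat-equation solution) integrates it with the trapezoidal rule and checks the result against a closed-form antiderivative:

```python
import numpy as np

# Time grid and a stand-in for the already-computed time derivative Hdot
# (in the proof, Hdot solves a heat equation; here it is an arbitrary
# smooth function of time at a handful of spatial nodes).
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
x = np.linspace(0.0, 1.0, 5)                       # spatial nodes (illustrative)
Hdot = np.sin(2.0 * np.pi * t)[:, None] * x[None, :]

H0 = np.ones_like(x)                               # stand-in initial extension H_0

# H(., t) = H_0 + int_0^t Hdot(., s) ds via the trapezoidal rule.
increments = 0.5 * dt * (Hdot[1:] + Hdot[:-1])
H = H0 + np.concatenate([np.zeros((1, x.size)), np.cumsum(increments, axis=0)])

# Exact antiderivative of sin(2 pi s) is (1 - cos(2 pi t)) / (2 pi).
H_exact = H0 + (1.0 - np.cos(2.0 * np.pi * t))[:, None] / (2.0 * np.pi) * x[None, :]
err = np.abs(H - H_exact).max()
```

The trapezoidal rule is second-order in the time step, which matches the smoothness available for $\dot{H}$ in this construction.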
We rewrite system [\[siszeroNH\]](#siszeroNH){reference-type="eqref" reference="siszeroNH"} as $$\begin{aligned} \ddot{\bar{u}} - \kappa \Delta \dot{\bar{u}} -\mathop{\mathrm{div}}(\sigma_L(0).\nabla \bar{u}) = f + \mathop{\mathrm{div}}(\sigma_L(0).\nabla H) & & \text{in $\Omega \times (0,T)$},\\ \kappa \displaystyle \frac{\partial\dot{\bar{u}}}{\partial n} + (\sigma_L(0).\nabla \bar{u}) n +\mathfrak{p}\, n = g - \kappa\frac{\partial\dot{H}}{\partial n}- (\sigma_L(0).\nabla H)n & & \text{on $\Gamma_N \times (0,T)$},\\ \displaystyle \int_{\Gamma_N} \bar{u}\cdot n\, \mathrm{d}\Gamma_N = 0 & & \text{in $(0,T)$} ,\\ \bar{u} = 0 & & \text{on $\Gamma_D \times (0,T)$},\\ \bar{u}(\cdot,0) = u_0-H_0, \quad \dot{\bar{u}}(\cdot,0) = \dot{u}_0- \dot{H}_0 & & \text{in $\Omega$}. \end{aligned}$$ We recognize system [\[syslih\]](#syslih){reference-type="eqref" reference="syslih"} satisfied by $(\bar{u},\mathfrak{p})$. Then from Proposition [Proposition 3](#prop-well-auto){reference-type="ref" reference="prop-well-auto"} there exists a unique $(\bar{u},\mathfrak{p}) \in \mathcal{U}_{p,T}(\Omega) \times \mathcal{P}_{p,T}$, and it satisfies $$\begin{array} {rcl} \|\bar{u}\|_{\mathcal{U}_{p,T}(\Omega)} + \| \mathfrak{p} \|_{\mathcal{P}_{p,T}} & \leq & \displaystyle C \left(\displaystyle \|f\|_{\mathcal{F}_{p,T}} + \|H\|_{\mathrm{L}^p(0,T;\mathbf{W}^{2,p}(\Omega))} \right. \\ & & \displaystyle \left. + \| g\|_{\mathcal{G}_{p,T}(\Gamma_N)} +\|(\sigma_L(0).\nabla H)n\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \kappa\left\|\frac{\partial\dot{H}}{\partial n}\right\|_{\mathcal{G}_{p,T}(\Gamma_N)} \right. \\ & & \displaystyle \left.
+ \|(u_0,\dot{u}_0)\|_{\mathcal{U}_{p}^{(0,1)}(\Omega)} + \|(H_0,\dot{H}_0)\|_{\mathcal{U}_{p}^{(0,1)}(\Omega)} \right), \\ & \leq & C\left( \displaystyle \|f\|_{\mathcal{F}_{p,T}} + \| g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \|(u_0,\dot{u}_0)\|_{\mathcal{U}_{p}^{(0,1)}(\Omega)} + \|h(0)\|_{\mathbb{R}} + \| \dot{h}\|_{\mathcal{H}_{p,T}} \right) \end{array}$$ where we have used [\[est-dirichlet-hbar\]](#est-dirichlet-hbar){reference-type="eqref" reference="est-dirichlet-hbar"}-[\[est-dirichlet-hbar2\]](#est-dirichlet-hbar2){reference-type="eqref" reference="est-dirichlet-hbar2"} and [\[lifteq1\]](#lifteq1){reference-type="eqref" reference="lifteq1"}-[\[lifteq2\]](#lifteq2){reference-type="eqref" reference="lifteq2"}. Further, the couple $(u,\mathfrak{p}) = (\bar{u}+ H,\mathfrak{p})$ satisfies $$\|u\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{p}\|_{\mathcal{P}_{p,T}} \leq \|\bar{u}\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{p}\|_{\mathcal{P}_{p,T}} + \|H\|_{\mathcal{U}_{p,T}(\Omega)},$$ which, combined with the previous estimate and [\[est-HU\]](#est-HU){reference-type="eqref" reference="est-HU"}, yields $$\|u\|_{\mathcal{U}_{p,T}(\Omega)} + \|\mathfrak{p}\|_{\mathcal{P}_{p,T}} \leq C\left( \displaystyle \|f\|_{\mathcal{F}_{p,T}} + \| g\|_{\mathcal{G}_{p,T}(\Gamma_N)} + \|h(0)\|_{\mathbb{R}} + \|\dot{h}\|_{\mathcal{H}_{p,T}} +\|(u_0,\dot{u}_0)\|_{\mathcal{U}_{p}^{(0,1)}(\Omega)} \right).$$ This provides the existence of the solution $(u,\mathfrak{p})$. Uniqueness follows from the linearity of system [\[siszeroNH\]](#siszeroNH){reference-type="eqref" reference="siszeroNH"}: considering two solutions of [\[siszeroNH\]](#siszeroNH){reference-type="eqref" reference="siszeroNH"}, their difference satisfies system [\[syslih\]](#syslih){reference-type="eqref" reference="syslih"} with zero right-hand sides, and is therefore equal to zero, since solutions of [\[syslih\]](#syslih){reference-type="eqref" reference="syslih"} are unique. This concludes the proof.
## Examples of strain energies {#sec-app-energies} Let us give a set of examples of classical strain energies from the literature, and show that they satisfy the assumptions $\mathbf{A1}$--$\mathbf{A3}$. #### The Saint Venant-Kirchhoff model. It corresponds to the following strain energy $$\mathcal{W}_1(E) = \mu_L \mathrm{tr}\left( E^2 \right) + \frac{\lambda_L}{2} \mathrm{tr}(E)^2,$$ where $\mu_L >0$ and $\lambda_L \geq 0$ are the so-called Lamé coefficients. The energy is clearly twice differentiable; the first and second derivatives of $\mathcal{W}_1$ are given respectively by $$\check{\Sigma}_1(E) = 2\mu_L E + \lambda_L \mathrm{tr}(E)\mathrm{I}, \qquad \frac{\partial^2\mathcal{W}_1}{\partial E^2}(E) = \frac{\partial\check{\Sigma}_1}{\partial E}(E) = 2\mu_L \mathbb{I}+ \lambda_L \mathrm{I}\otimes \mathrm{I},$$ where $\mathbb{I}\in \mathbb{R}^{d\times d \times d \times d}$ denotes the identity tensor of order $4$. In particular, we see that if the matrix $E$ is symmetric, then $\check{\Sigma}_1(E)$ defines a symmetric matrix. Therefore Assumptions $\mathbf{A1}$-$\mathbf{A2}$ are satisfied by this strain energy. Further, regarding Assumption $\mathbf{A3}$, we see that $$\sigma_L(0).\nabla v = \nabla v \check{\Sigma}_1(0) + \displaystyle \frac{1}{2} \frac{\partial^2\mathcal{W}_1}{\partial E^2}(0).(\nabla v + \nabla v^T) = 2\mu_L\epsilon(v) + \lambda_L \mathrm{tr}(\epsilon(v)) \mathrm{I},$$ where we have introduced the notation $\epsilon(v) = \frac{1}{2}(\nabla v + \nabla v^T)$ for the symmetric part of $\nabla v$. The operator $\sigma_L(0)$ then corresponds to the well-known linearized Lamé operator, which defines the coercive operator $-\mathop{\mathrm{div}}(\sigma_L(0).\nabla v)$ under the condition $v_{|\Gamma_D} = 0$, in virtue of the Peetre-Tartar lemma [@Ern Lemma A.38 page 469]. We can also refer to Korn's inequality for this claim. Therefore Assumption $\mathbf{A3}$ is satisfied for this example. #### Fung's model.
It corresponds to the following strain energy $$\mathcal{W}_2(E) = \mathcal{W}_2(0) + \beta \left(\exp\left(\gamma \ \mathrm{tr}(E^2)\right) - 1\right),$$ where $\mathcal{W}_2(0) \geq 0$, $\beta >0$ and $\gamma > 0$ are given coefficients. The space $\mathbb{W}^{1,p}(\Omega)$ is invariant under composition with the exponential function when $p>d$ (see [@BB1974 Lemma A.2, page 359]). The first and second derivatives of $\mathcal{W}_2$ are given respectively by $$\check{\Sigma}_2(E) = 2\gamma\beta \exp\left(\gamma \ \mathrm{tr}(E^2) \right) E, \qquad \frac{\partial\check{\Sigma}_2}{\partial E}(E) = \beta \exp\left(\gamma \ \mathrm{tr}(E^2) \right) \left( 2\gamma \mathbb{I}+ (2\gamma)^2 E \otimes E\right).$$ Again, if $E$ is symmetric, then $\check{\Sigma}_2(E)$ is symmetric. Assumptions $\mathbf{A1}$-$\mathbf{A2}$ are then satisfied. For Assumption $\mathbf{A3}$, we need to evaluate $$\sigma_L(0).\nabla v = \nabla v \check{\Sigma}_2(0) + \displaystyle \frac{\partial^2\mathcal{W}_2}{\partial E^2}(0).(\epsilon(v)) = 2\beta \gamma \epsilon(v).$$ As in the previous example, the operator $-\mathop{\mathrm{div}}(\sigma_L(0).\nabla v)$ is coercive, and thus Assumption $\mathbf{A3}$ is satisfied. #### Ogden's model. The strain energies corresponding to this model are linear combinations of energies of the following form $$\mathcal{W}_3(E) = \mathrm{tr}\left((2E+\mathrm{I})^{\gamma} - \mathrm{I}\right),$$ where $\gamma \in \mathbb{R}$. Since the tensor $2E+\mathrm{I}$ is real and symmetric, the expression $(2E+\mathrm{I})^{\beta}$ makes sense for any $\beta \in \mathbb{R}$ by diagonalizing $2E+\mathrm{I}$, and the energy $\mathcal{W}_3(E)$ can be expressed in terms of the eigenvalues of $2E+\mathrm{I}$.
Since $2E(u)+\mathrm{I}= (\mathrm{I}+\nabla u)^T(\mathrm{I}+\nabla u)$, if $(\lambda_i)_{1\leq i \leq d}$ denote the singular values of $\mathrm{I}+\nabla u$, and $(\mu_i)_{1\leq i \leq d}$ the eigenvalues of $E(u)$, we have $$\mathcal{W}_3(E) = \sum_{i=1}^d \left(\lambda_i^{2\gamma} -1\right) = \sum_{i=1}^d \left((1+2\mu_i)^\gamma -1\right), \qquad \check{\Sigma}_3(E) = 2\gamma (2E+\mathrm{I})^{\gamma-1}.$$ Denoting by $(v_i)_{1\leq i\leq d}$ the normalized orthogonal eigenvectors of $E$, we further write $$\check{\Sigma}_3(E) = \sum_{i=1}^d 2\gamma(2\mu_i+1)^{\gamma-1}v_i \otimes v_i.$$ Note that the operator $v_i \otimes v_i$ is the projection on $\mathrm{Span}(v_i)$, and Assumption $\mathbf{A2}$ is satisfied by $\check{\Sigma}_3$. The sensitivity of the eigenvalues and eigenvectors with respect to the matrix can be derived for example from [@Stewart] (see Theorem IV.2.3 page 183, and Remark 2.9 page 239). Thus, after calculations, we get $$\begin{array} {rcl} \displaystyle \frac{\partial\check{\Sigma}_3}{\partial E}(E) & = & \displaystyle 2\gamma \sum_{i=1}^d 2(\gamma-1)(2\mu_i+1)^{\gamma-2} (v_i\otimes v_i) \otimes (v_i\otimes v_i) \displaystyle \\ & & + \displaystyle 2\gamma\sum_{i=1}^d (2\mu_i+1)^{\gamma-1} \sum_{j\neq i} \frac{1}{\mu_i-\mu_j}(v_j\otimes v_i) \otimes (v_j\otimes v_i) . \end{array}$$ This expression shows that the strain energy $\mathcal{W}_3$ also fulfills Assumption $\mathbf{A1}$.
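The three stress tensors above admit a quick numerical sanity check: each $\check{\Sigma}$ should coincide with a finite-difference gradient of the corresponding energy $\mathcal{W}$. The sketch below is only an illustration; the coefficient values and the test matrix are arbitrary choices, not taken from the paper, and for the Ogden-type term the matrix power is evaluated through the spectral decomposition, as above.

```python
import numpy as np

def num_grad(W, E, h=1e-6):
    """Central finite-difference gradient of the scalar energy W at the matrix E."""
    G = np.zeros_like(E)
    for i in range(E.shape[0]):
        for j in range(E.shape[1]):
            dE = np.zeros_like(E)
            dE[i, j] = h
            G[i, j] = (W(E + dE) - W(E - dE)) / (2 * h)
    return G

d = 3
rng = np.random.default_rng(0)
B = rng.standard_normal((d, d))
E = 0.05 * (B + B.T)            # small symmetric strain (arbitrary test data)
I = np.eye(d)

# Saint Venant-Kirchhoff (Lame coefficients chosen arbitrarily)
mu_L, lam_L = 1.3, 0.7
W1 = lambda E: mu_L * np.trace(E @ E) + 0.5 * lam_L * np.trace(E) ** 2
S1 = 2 * mu_L * E + lam_L * np.trace(E) * I

# Fung (beta, gamma chosen arbitrarily)
beta, gam = 0.4, 0.9
W2 = lambda E: beta * (np.exp(gam * np.trace(E @ E)) - 1.0)
S2 = 2 * gam * beta * np.exp(gam * np.trace(E @ E)) * E

# Ogden-type term with gamma = 3/2, powers taken via the spectral decomposition;
# the argument is symmetrized because finite-difference probes are not symmetric
g = 1.5
def mat_pow(M, p):
    S = 0.5 * (M + M.T)
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** p) @ V.T
W3 = lambda E: np.trace(mat_pow(2 * E + I, g) - I)
S3 = 2 * g * mat_pow(2 * E + I, g - 1)

for W, S in [(W1, S1), (W2, S2), (W3, S3)]:
    assert np.allclose(num_grad(W, E), S, atol=1e-5)
```

Each check compares the closed-form tensor with the gradient of the full (unsymmetrized) energy, which agrees with it at symmetric $E$.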
Finally, considering the vectors $(v_i)_{1\leq i\leq d}$ of the canonical basis as normalized orthogonal eigenvectors of the matrix $E=0$ (with eigenvalues $\mu_i = 0$), we evaluate $$\begin{array} {rcl} \sigma_L(0).\nabla u = \nabla u \check{\Sigma}_3(0) + \displaystyle \frac{\partial\check{\Sigma}_3}{\partial E}(0).(\epsilon(u)) & = & 2\gamma \nabla u + \displaystyle 4\gamma(\gamma-1) \sum_{i=1}^d \Big((v_i\otimes v_i):\epsilon(u) \Big) (v_i\otimes v_i), \\ (\sigma_L(0).\nabla u) : \nabla u & = & 2\gamma |\nabla u|^2_{\mathbb{R}^{d\times d}} +4\gamma(\gamma-1) \displaystyle \sum_{i=1}^d \Big((v_i\otimes v_i):\epsilon(u) \Big)^2. \end{array}$$ We then obtain a coercive operator, provided for instance that $\gamma\geq 1$, and deduce that Assumption $\mathbf{A3}$ is also satisfied for this model. ## A Lagrangian mechanics perspective {#sec-app-Lag} Let us formally introduce the Lagrangian functional associated with Problem [\[mainpbtilde2\]](#mainpbtilde2){reference-type="eqref" reference="mainpbtilde2"} as follows: $$\begin{array} {rcl} \tilde{\mathcal{L}}(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}, \tilde{\xi}, \tau,\tilde{\zeta}_0, \tilde{\zeta}_1, \tilde{\pi}) & = & \displaystyle \int_0^2 \dot{\mu}c(\tilde{y}_0,\tilde{y}_1,\tilde{\xi})\mathrm{d}s + \phi^{(1)}(\tilde{y}_0,\tilde{y}_1)(1) + \phi^{(2)}(\tilde{y}_0,\tilde{y}_1)(2) \\ & & +\displaystyle \int_0^2 \left(\langle \dot{\tilde{y}}_1 - \dot{\mu} f(\tilde{\xi}) , \tilde{\zeta}_1\rangle_{\mathbf{L}^p(\Omega),\mathbf{L}^{p'}(\Omega)} + \dot{\mu}\langle \kappa \nabla \tilde{y}_1 + \sigma(\nabla \tilde{y}_0), \nabla \tilde{\zeta}_1\rangle_{\mathbf{W}^{1,p}(\Omega),\mathbf{W}^{1,p}(\Omega)'} \right) \mathrm{d}s\\ & & \displaystyle + \langle \dot{\tilde{y}}_0-\dot{\mu}\tilde{y}_1,\tilde{\zeta}_0 \rangle_{\dot{\mathcal{U}}_{p,2}(\Omega), \dot{\mathcal{U}}_{p,2}(\Omega)'} - \langle \tilde{g},\tilde{\zeta}_1\rangle_
{\mathcal{G}_{p,2}(\Gamma_N),\mathcal{G}_{p,2}(\Gamma_N)'}\\ & & \displaystyle + \int_0^2 \dot{\mu}\tilde{\pi} \int_{\Omega} \mathrm{det}\Phi(\tilde{y}_0) \, \mathrm{d}\Omega\, \mathrm{d}s + \int_0^2 \dot{\mu}\tilde{\mathfrak{p}}\int_{\Gamma_N} \tilde{\zeta}_1 \cdot \mathrm{cof}(\Phi(\tilde{y}_0))n \, \mathrm{d}\Gamma_N\, \mathrm{d}s. \end{array}$$ Recall that the dependence of $\tilde{\mathcal{L}}$ with respect to $\tau$ is represented by the change of variables $\dot{\mu} = \dot{\mu}(\cdot,\tau)$. Coming back to the original variables, namely $$\begin{array} {llll} \tilde{y}_0(\cdot,s) = y_0(\cdot,\mu(s,\tau)), & \tilde{y}_1(\cdot,s) = y_1(\cdot,\mu(s,\tau)), & \tilde{\mathfrak{p}}(s) = \mathfrak{p}(\mu(s,\tau)), & \tilde{\xi}(\cdot,s) = \xi(\cdot,\mu(s,\tau)),\\ \tilde{\zeta}_0(\cdot,s) = \zeta_0(\cdot,\mu(s,\tau)), & \tilde{\zeta}_1(\cdot,s) = \zeta_1(\cdot,\mu(s,\tau)), & \tilde{\pi}(s) = \pi(\mu(s,\tau)), & \\ \tilde{f}(\cdot,s) = f(\cdot,\mu(s,\tau)), & \tilde{g}(\cdot,s) = g(\cdot,\mu(s,\tau)), & & \end{array}$$ we have $$\tilde{\mathcal{L}}(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}}, \tilde{\xi}, \tau,\tilde{\zeta}_0, \tilde{\zeta}_1, \tilde{\pi}) = \mathcal{L}(y_0,y_1,\mathfrak{p}, \xi, \tau,\zeta_0, \zeta_1, \pi),$$ where $$\begin{array} {rcl} \mathcal{L}(y_0,y_1,\mathfrak{p}, \xi,\tau,\zeta_0, \zeta_1, \pi) & = & \displaystyle \int_0^T c(y_0,y_1,\xi)\mathrm{d}t + \phi^{(1)}(y_0,y_1)(\tau) + \phi^{(2)}(y_0,y_1)(T) \\ & & + \displaystyle \int_0^T \left(\langle \dot{y}_1 - f(\xi), \zeta_1\rangle_{\mathbf{L}^p(\Omega),\mathbf{L}^{p'}(\Omega)} + \langle \kappa \nabla y_1 + \sigma(\nabla y_0), \nabla \zeta_1 \rangle_{\mathbf{W}^{1,p}(\Omega),\mathbf{W}^{1,p}(\Omega)'} \right)\mathrm{d}t\\ & & \displaystyle + \langle \dot{y}_0-y_1,\zeta_0 \rangle_{\dot{\mathcal{U}}_{p,T}(\Omega), \dot{\mathcal{U}}_{p,T}(\Omega)'} - \langle
g,\zeta_1\rangle_{\mathcal{G}_{p,T}(\Gamma_N),\mathcal{G}_{p,T}(\Gamma_N)'}\\ & & \displaystyle + \int_0^T \pi \int_{\Omega} \mathrm{det}\Phi(y_0) \, \mathrm{d}\Omega\, \mathrm{d}t + \int_0^T \mathfrak{p}\int_{\Gamma_N} \zeta_1 \cdot \mathrm{cof}(\Phi(y_0))n \, \mathrm{d}\Gamma_N\, \mathrm{d}t. \end{array}$$ Differentiating $\tilde{\mathcal{L}}$ with respect to $(\tilde{\zeta}_0,\tilde{\zeta}_1,\tilde{\pi})$ yields system [\[sysmaintilde2\]](#sysmaintilde2){reference-type="eqref" reference="sysmaintilde2"}. Differentiating $\tilde{\mathcal{L}}$ with respect to $(\tilde{y}_0,\tilde{y}_1,\tilde{\mathfrak{p}})$ yields system [\[sysadjoint\]](#sysadjoint){reference-type="eqref" reference="sysadjoint"}. Finally, differentiating $\tilde{\mathcal{L}}$ with respect to $(\tilde{\xi},\tau)$ yields [\[id-deriv\]](#id-deriv){reference-type="eqref" reference="id-deriv"}. Therefore a critical point of the functional $\tilde{\mathcal{L}}$ satisfies the optimality conditions stated in Theorem [Theorem 23](#th-optcond){reference-type="ref" reference="th-optcond"}. Actually, following the approach adopted in [@Maxmax1], we could show that an optimal solution of Problem [\[mainpbtilde\]](#mainpbtilde){reference-type="eqref" reference="mainpbtilde"} is necessarily a critical point of the functional $\tilde{\mathcal{L}}$. Further, the optimality conditions stated in Corollary [Corollary 24](#cor-optcond){reference-type="ref" reference="cor-optcond"} correspond to a critical point of the functional $\mathcal{L}$. **Remark 28**.
*Let us detail the term of $\mathcal{L}$ derived from the strain energy, namely $$\begin{array} {rcl} \langle \sigma(\nabla y_0), \nabla \zeta_1 \rangle_{\mathbf{W}^{1,p}(\Omega),\mathbf{W}^{1,p}(\Omega)'} & = & \langle(\mathrm{I}+\nabla y_0) \check{\Sigma}(E(y_0)), \nabla \zeta_1 \rangle_{\mathbf{W}^{1,p}(\Omega),\mathbf{W}^{1,p}(\Omega)'} \\ & = & \langle \check{\Sigma}(E(y_0)), (\mathrm{I}+\nabla y_0)^T\nabla \zeta_1 \rangle_{\mathbf{W}^{1,p}(\Omega),\mathbf{W}^{1,p}(\Omega)'} \\ & = & \langle \check{\Sigma}(E(y_0)), E'(y_0).\zeta_1 \rangle_{\mathbf{W}^{1,p}(\Omega),\mathbf{W}^{1,p}(\Omega)'} \end{array}$$ with $E'(y_0).\zeta_1 = \frac{1}{2}\left((\mathrm{I}+\nabla y_0)^T\nabla \zeta_1 + \nabla \zeta_1^T(\mathrm{I}+\nabla y_0)\right)$, because the tensor $\check{\Sigma}$ is assumed to be symmetric in Assumption $\mathbf{A2}$. Further, recall that $\check{\Sigma}(E(y_0)) = \displaystyle \frac{\partial\mathcal{W}}{\partial E}(E(y_0))$, and thus $$\langle \sigma(\nabla y_0), \nabla \zeta_1 \rangle_{\mathbf{W}^{1,p}(\Omega),\mathbf{W}^{1,p}(\Omega)'} = \displaystyle \frac{\partial}{\partial y_0}(\mathcal{W}( E(y_0))).\zeta_1.$$* ## The control operator in the context of cardiac electrophysiology {#sec-app-control} The control is realized through a distributed right-hand side $f$ in equation [\[sysmain1\]](#sysmain1){reference-type="eqref" reference="sysmain1"}. In practice this function is expressed in terms of the *fiber direction*, denoted by $\hat{\mathfrak{f}}$, namely a vector tangent to the tissue, depending on the geometry and considered as a part of the data. More precisely, $f$ is chosen in the form $$f = \mathop{\mathrm{div}}(s_a \hat{\mathfrak{f}} \otimes \hat{\mathfrak{f}} ),$$ where $s_a$ is a scalar function, depending on space and time, that we choose as being the command, denoted formally by $\xi$ throughout the paper. The tensor $s_a \hat{\mathfrak{f}} \otimes \hat{\mathfrak{f}}$ is the so-called *active stress tensor*.
Since the vector $\hat{\mathfrak{f}}$ is tangent on $\partial\Omega$, by Green's formula the inner product with any test function $\varphi$ simply reads $$\langle f(\xi) ; \varphi \rangle_{\mathbf{L}^2(\Omega)} = \int_{\Omega} \mathop{\mathrm{div}}(\xi \hat{\mathfrak{f}} \otimes \hat{\mathfrak{f}})\cdot \varphi \, \mathrm{d}\Omega = -\int_{\Omega} \xi( \hat{\mathfrak{f}} \otimes \hat{\mathfrak{f}} : \nabla \varphi)\, \mathrm{d}\Omega.$$ Denoting by $\omega \subset \Omega$ the control domain, an example of control space for the distributed control function $\xi$ on $\omega$ is the following $$\mathcal{X}_{p,T}(\omega) = \mathrm{L}^p(0,T;\mathrm{W}^{1,p}(\omega)).$$ In this example the control function $\xi$ is only scalar, but since the quantity to maximize, namely the variations of the pressure $\mathfrak{p}$, is also scalar, and moreover depends only on time, we expect that the set of controls is rich enough for our purpose.
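The pointwise identity behind this integration by parts, namely $\mathop{\mathrm{div}}(T)\cdot\varphi + T:\nabla\varphi = \mathop{\mathrm{div}}(T^T\varphi)$ with $T = \xi\,\hat{\mathfrak{f}}\otimes\hat{\mathfrak{f}}$, can be verified symbolically; the boundary flux $(T^T\varphi)\cdot n = \xi(\hat{\mathfrak{f}}\cdot n)(\hat{\mathfrak{f}}\cdot\varphi)$ then vanishes precisely because $\hat{\mathfrak{f}}$ is tangent to $\partial\Omega$. A hedged two-dimensional sketch with arbitrary smooth fields (not the cardiac data of the paper):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Arbitrary smooth command xi, fiber field fhat, and vector test function phi
xi = sp.sin(x) * y ** 2
fhat = sp.Matrix([sp.cos(x * y), 1 + x ** 2])
phi = sp.Matrix([x * y, sp.exp(x) * sp.sin(y)])

T = xi * (fhat * fhat.T)   # active stress tensor  xi * fhat (x) fhat

def div_mat(Mat):
    """Row-wise divergence of a 2x2 matrix field."""
    return sp.Matrix([sp.diff(Mat[i, 0], x) + sp.diff(Mat[i, 1], y) for i in range(2)])

grad_phi = phi.jacobian([x, y])                  # (grad phi)_{ij} = d phi_i / d x_j
lhs = (div_mat(T).T * phi)[0, 0] \
    + sum(T[i, j] * grad_phi[i, j] for i in range(2) for j in range(2))
rhs = sp.diff((T.T * phi)[0], x) + sp.diff((T.T * phi)[1], y)   # div(T^T phi)
assert sp.simplify(lhs - rhs) == 0
```

Integrating the identity over $\Omega$ and dropping the vanishing boundary term recovers the formula displayed above.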
{ "id": "2309.12973", "title": "A damped elastodynamics system under the global injectivity condition: A\n hybrid optimal control problem", "authors": "S\\'ebastien Court", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We analyze the conforming approximation of the time-harmonic Maxwell's equations using Nédélec (edge) finite elements. We prove that the approximation is asymptotically optimal, i.e., the approximation error in the energy norm is bounded by the best-approximation error times a constant that tends to one as the mesh is refined and/or the polynomial degree is increased. Moreover, under the same conditions on the mesh and/or the polynomial degree, we establish discrete inf-sup stability with a constant that corresponds to the continuous constant up to a factor of two at most. Our proofs apply under minimal regularity assumptions on the exact solution, so that general domains, material coefficients, and right-hand sides are allowed. author: - "T. Chaumont-Frelet[^1] and A. Ern[^2]" bibliography: - biblio.bib title: Asymptotic optimality of the edge finite element approximation of the time-harmonic Maxwell's equations --- # Introduction This work analyzes the conforming approximation of the time-harmonic Maxwell's equations using Nédélec (edge) finite elements. In this section, we present the model problem, outline the main challenges in its finite element approximation, and discuss the present contributions in view of the existing literature. ## Setting Let $\Dom\subset \Real^d$, $d=3$, be an open, bounded, Lipschitz polyhedron with boundary $\front$ and outward unit normal $\bn_\Dom$. We do not make any simplifying assumption on the topology of $\Dom$. We use boldface fonts for vectors, vector fields, and functional spaces composed of such fields. More details on the notation are given in Section [2](#sec:continuous){reference-type="ref" reference="sec:continuous"}. Given a positive real number $\omega>0$ representing a frequency and a source term $\bJ: \Dom \to \mathbb R^3$, and focusing for simplicity on homogeneous Dirichlet boundary conditions (a.k.a. 
perfect electric conductor boundary conditions), the model problem consists in finding $\bE: \Dom \to \mathbb R^3$ such that [\[eq_maxwell_strong\]]{#eq_maxwell_strong label="eq_maxwell_strong"} $$\begin{aligned} {2} -\omega^2 \eps \bE+\ROT(\bmu^{-1}\ROT \bE) &= \bJ &\quad&\text{in $\Dom$}, \\ \bE \CROSS \bn_\Dom &= \bzero &\quad&\text{on $\partial \Dom$},\end{aligned}$$ where $\eps$ represents the electric permittivity of the materials contained in $\Dom$ and $\bmu$ their magnetic permeability. Both material properties can vary in $\Dom$ and take symmetric positive-definite values with eigenvalues uniformly bounded from above and from below away from zero. We assume that $\omega$ is not a resonant frequency, so that [\[eq_maxwell_strong\]](#eq_maxwell_strong){reference-type="eqref" reference="eq_maxwell_strong"} is uniquely solvable in $\Hrotz$ for every $\bJ$ in the topological dual space $\Hrotz'$. The time-harmonic Maxwell's equations [\[eq_maxwell_strong\]](#eq_maxwell_strong){reference-type="eqref" reference="eq_maxwell_strong"} are one of the central models of electrodynamics. Therefore, efficient discretizations are a cornerstone for the computational modelling of electromagnetic wave propagation [@Hiptmair_Acta_Num_2002; @Monk_book_2003]. ## Main challenges when approximating [\[eq_maxwell_strong\]](#eq_maxwell_strong){reference-type="eqref" reference="eq_maxwell_strong"} To highlight the main challenges associated with the finite element approximation of [\[eq_maxwell_strong\]](#eq_maxwell_strong){reference-type="eqref" reference="eq_maxwell_strong"}, let us first briefly discuss the Helmholtz problem that arises when considering polarized electromagnetic waves. 
In this case, a two-dimensional domain $\widehat \Dom \subset \mathbb R^2$ is considered, and the component of the electric field normal to $\widehat \Dom$, $\widehat E: \widehat \Dom \to \mathbb R$, satisfies [\[eq_helmholtz_strong\]]{#eq_helmholtz_strong label="eq_helmholtz_strong"} $$\begin{aligned} {2} -\omega^2 \widehat\epsilon \widehat E-\DIV(\widehat\bmu^{-1}\GRAD \widehat E) &= \widehat J &\quad&\text{in $\widehat \Dom$}, \label{eq:EDP_Helmholtz} \\ \widehat E &= 0&\quad&\text{on $\partial \widehat \Dom$},\end{aligned}$$ where $\widehat J: \widehat \Dom \to \mathbb R$ is the component of $\bJ$ normal to $\widehat \Dom$, $\widehat\epsilon$ the normal-normal component of $\eps$, and $\widehat\bmu$ the transpose-adjugate of the tangent-tangent components of $\bmu$. The Helmholtz problem also arises in other contexts, such as (three-dimensional) acoustic wave propagation. The Laplace operator in [\[eq_helmholtz_strong\]](#eq_helmholtz_strong){reference-type="eqref" reference="eq_helmholtz_strong"} is coercive over $H^1_0(\widehat \Dom)$, and the compact embedding $H^1_0(\widehat \Dom) \hookrightarrow L^2(\widehat \Dom)$ ensures that the negative term in [\[eq:EDP_Helmholtz\]](#eq:EDP_Helmholtz){reference-type="eqref" reference="eq:EDP_Helmholtz"} is a compact perturbation. As a result, at the continuous level, [\[eq_helmholtz_strong\]](#eq_helmholtz_strong){reference-type="eqref" reference="eq_helmholtz_strong"} falls into the framework of the Fredholm alternative, and the Helmholtz problem is well-posed in $H^1_0(\widehat \Dom)$ if and only if $\omega$ is not a resonant frequency. Compactness also has direct implications at the discrete level. The most common manifestation is probably the celebrated Aubin--Nitsche duality argument. 
Specifically, setting $\omega = 0$ and considering conforming Lagrange finite elements for simplicity, it is well-known that the finite element approximation, $\widehat E_h$, converges to $\widehat E$ faster in the $L^2(\widehat \Dom)$-norm than in the $H^1_0(\widehat \Dom)$-norm. When $\omega > 0$, this observation can be leveraged into a technique often called Schatz's argument [@Schatz:74]. The key idea is that the negative $L^2(\widehat \Dom)$-term in [\[eq_helmholtz_strong\]](#eq_helmholtz_strong){reference-type="eqref" reference="eq_helmholtz_strong"} becomes negligible on sufficiently fine meshes, leaving only a coercive term and thus enabling the derivation of a Céa-like lemma. More specifically, considering the Lagrange finite element space $\widehat V_h \subset H^1_0(\widehat \Dom)$ with mesh size $h$ and polynomial degree $k\ge1$, one can show that if the mesh is sufficiently refined and/or the polynomial degree is sufficiently increased, the finite element approximation, $\widehat E_h$, is uniquely defined and satisfies the following error bound: $$\label{eq_optimality_helmholtz} (1-\gamma) \tnorm{\widehat E-\widehat E_h} \leq \min_{\widehat v_h \in \widehat V_h} \tnorm{\widehat E-\widehat v_h},$$ where the approximation factor $\gamma$ satisfies $\lim_{{h/k} \to 0} \gamma = 0$, and $$\tnorm{w}^2 \eqq \omega^2\|\epsilon^{\frac12}w\|_{\Ldeux}^2 + \|\bmu^{-\frac12}\GRAD w\|_{\Ldeuxd}^2,$$ is the natural energy norm for [\[eq_helmholtz_strong\]](#eq_helmholtz_strong){reference-type="eqref" reference="eq_helmholtz_strong"}. Crucially, [\[eq_optimality_helmholtz\]](#eq_optimality_helmholtz){reference-type="eqref" reference="eq_optimality_helmholtz"} implies that the finite element approximation is *asymptotically optimal*. The original argument of Schatz in [@Schatz:74] for conforming finite elements requires some extra smoothness on the solution, and it has been extended in a number of ways. 
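The asymptotic optimality expressed by [\[eq_optimality_helmholtz\]](#eq_optimality_helmholtz){reference-type="eqref" reference="eq_optimality_helmholtz"} can be observed on a one-dimensional analogue. The sketch below is only an illustration with arbitrary parameters, not part of the analysis: it solves $-u'' - \omega^2 u = f$ on $(0,1)$ with $\widehat\epsilon = \widehat\mu = 1$, exact solution $u(x)=\sin(\pi x)$ and $\omega = 2$ (not a resonance), using P1 elements, and compares the Galerkin error with the energy-norm best-approximation error.

```python
import numpy as np

# 1D analogue of the Helmholtz problem: -u'' - w^2 u = f on (0,1), u(0)=u(1)=0,
# with exact solution u(x) = sin(pi*x), hence f = (pi^2 - w^2) sin(pi*x).
w = 2.0            # frequency (resonances sit at k*pi, so w = 2 is admissible)
n = 100            # number of cells, P1 Lagrange elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Stiffness A and consistent mass M on the n-1 interior nodes
A = (np.diag(np.full(n - 1, 2.0)) + np.diag(np.full(n - 2, -1.0), 1)
     + np.diag(np.full(n - 2, -1.0), -1)) / h
M = (np.diag(np.full(n - 1, 4.0)) + np.diag(np.full(n - 2, 1.0), 1)
     + np.diag(np.full(n - 2, 1.0), -1)) * h / 6.0

# Closed-form loads: s_i = (u', phi_i') and m_i = (u, phi_i) = s_i / pi^2
sinx = np.sin(np.pi * x)
s = (2 * sinx[1:-1] - sinx[:-2] - sinx[2:]) / h
m = s / np.pi ** 2

uh = np.linalg.solve(A - w ** 2 * M, (np.pi ** 2 - w ** 2) * m)  # Galerkin solution
ub = np.linalg.solve(A + w ** 2 * M, s + w ** 2 * m)             # energy-norm best approximation

def energy_err(v):
    # |||u - v_h|||, using int u'^2 = pi^2/2 and int u^2 = 1/2 (exact values)
    e_grad = np.pi ** 2 / 2 + v @ A @ v - 2 * v @ s
    e_mass = 0.5 + v @ M @ v - 2 * v @ m
    return np.sqrt(e_grad + w ** 2 * e_mass)

ratio = energy_err(uh) / energy_err(ub)   # approaches 1 as h -> 0
assert 1.0 - 1e-9 <= ratio < 1.2
```

Refining the mesh drives the ratio closer to one, consistent with $\lim_{h/k\to 0}\gamma = 0$.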
Of particular importance is the seminal work [@melenk_sauter_2010a] which tracks the dependence of $\gamma$ on the frequency $\omega$, the mesh size $h$ and the polynomial degree $k$. This has later been extended to non-smooth domains and varying coefficients [@chaumontfrelet_nicaise_2020a; @LaSpW:22; @melenk_sauter_2011a]. The challenges encountered in the Helmholtz problem [\[eq_helmholtz_strong\]](#eq_helmholtz_strong){reference-type="eqref" reference="eq_helmholtz_strong"} are also present in the time-harmonic Maxwell's equations [\[eq_maxwell_strong\]](#eq_maxwell_strong){reference-type="eqref" reference="eq_maxwell_strong"}, with one key additional difficulty: the lack of compactness of the embedding $\Hrotz \hookrightarrow \Ldeuxd$. This is remedied by working in the subspace $\Hrotz \cap \Hdiveps$, with $\Hdiveps := \{ \bv \in \bL^2(D); \DIV(\eps\bv) \in L^2(D) \}$, which compactly embeds into $\Ldeuxd$ [@Birman_Solomyak_1987; @Costabel:90; @Weber:80]. Therefore, a crucial ingredient in the analysis of any finite element approximation to [\[eq_maxwell_strong\]](#eq_maxwell_strong){reference-type="eqref" reference="eq_maxwell_strong"} is to derive some suitable control on the divergence of the discrete solution. This is discussed in the context of Lagrange finite elements, e.g., in [@Costabel:91; @CosDa:02; @BoGuL:16; @BuCiJ:09]. However, a somewhat more popular approach to approximate the time-harmonic Maxwell's equations in a conforming setting hinges on Nédélec finite elements [@Nedel:80; @Nedel:86]. We now discuss our main contributions to this topic. ## Main contributions The Nédélec finite element approximation to [\[eq_maxwell_strong\]](#eq_maxwell_strong){reference-type="eqref" reference="eq_maxwell_strong"} is $\Hrotz$-conforming, but only weakly $\Hdiveps$-conforming. The lack of $\Hdiveps$-conformity must be taken into account in the error analysis. Early contributions on the topic are based on the concept of collective compactness. 
A seminal work in this regard is [@Kikuchi:89] for lowest-order Nédélec elements, and extensions to higher-order elements have been carried out [@BCDDH:11; @MonDe:01]. We also refer the reader to [@Buffa:05; @CaFeR:00], and to [@Monk_book_2003 Section 7.3] for an overview. Later on, duality proofs in the spirit of Schatz were proposed. For Maxwell's equations, the Aubin--Nitsche trick cannot be applied directly, so that an intermediate step involving a commuting interpolation operator is added to the proof. This argument seems to date back to the work of Girault [@Girault:88] using canonical interpolation operators, and has been used for Maxwell's equations in [@Monk:92; @ZhSWX:09]. As pointed out in [@Girault:88 Remark 3.1], one drawback of using the canonical interpolation operators is the extra regularity requirement on the exact solution. This limitation was lifted following the development of commuting quasi-interpolation operators working under minimal regularity assumptions (see [@Schoberl:01; @ArnFW:06; @Christiansen:07; @ChrWi:08] and [@EG_volI Chap. 22-23]). The application to Maxwell's equations can be found in [@chaumontfrelet_nicaise_pardo_2018a; @ErnGu:18], see also [@EG_volII Chap. 44]. The current state-of-the-art using the above techniques shows that the Nédélec finite element approximation is uniquely defined for fine enough meshes, and allows for error estimates of the form $$\label{eq_quasi_optimality_maxwell} (1-\gamma) \tnorm{\bE-\bE_h} \leq c \min_{\bv_h \in \bV_h\upc} \tnorm{\bE-\bv_h},$$ where $\bV_h\upc$ is the Nédélec finite element space with mesh size $h$ and polynomial degree $k\ge0$, the approximation factor $\gamma$ again satisfies $\lim_{h/(k+1) \to 0} \gamma = 0$, and the natural energy norm is $$\label{eq:energy_norm} \tnorm{\bw}^2 \eqq \omega^2\|\eps^{\frac12}\bw\|_{\Ldeuxd}^2 + \|\bmu^{-\frac12}\ROT \bw\|_{\Ldeuxd}^2.$$ Unfortunately, $c$ is a generic constant that depends on the shape-regularity of the mesh. 
The key contribution of this work is to show that [\[eq_quasi_optimality_maxwell\]](#eq_quasi_optimality_maxwell){reference-type="eqref" reference="eq_quasi_optimality_maxwell"} actually holds true with $c = 1$. In other words, Nédélec finite element discretizations of the time-harmonic Maxwell's equations are *asymptotically optimal*. Moreover, we establish discrete inf-sup stability with a constant that corresponds, as the mesh is refined, to the continuous constant up to a factor of two at most. These results are, to the best of our knowledge, a novel contribution to the literature. Our proofs are valid irrespective of the topology of the domain $\Omega$ and apply under a minimal regularity assumption on the exact solution, so that general domains, material coefficients and right-hand sides are allowed. In addition, the dependence of $\gamma$ on key parameters can be traced following [@ChaVe:22; @melenk_sauter_2021]. ## Outline The paper is organized as follows. In Section [2](#sec:continuous){reference-type="ref" reference="sec:continuous"}, we briefly present the continuous setting, and in Section [3](#sec:disc_setting){reference-type="ref" reference="sec:disc_setting"}, we do the same for the discrete setting. In Section [4](#sec:conforming){reference-type="ref" reference="sec:conforming"}, we present the error and stability analysis. The main results in this section are Theorem [Theorem 9](#th:est_err_c){reference-type="ref" reference="th:est_err_c"} and Theorem [Theorem 11](#th:inf_sup_c){reference-type="ref" reference="th:inf_sup_c"}. Finally, in Section [5](#sec:bnd_app_fac){reference-type="ref" reference="sec:bnd_app_fac"}, we establish bounds on the approximation and divergence conformity factors introduced in the analysis, proving that these quantities tend to zero as the mesh is refined. 
Incidentally, we notice that the results established in Section [4](#sec:conforming){reference-type="ref" reference="sec:conforming"} hold more generally when working on a generic subspace of $\Hrotz$ which is not necessarily constructed using finite elements. The finite element structure is used in Section [5](#sec:bnd_app_fac){reference-type="ref" reference="sec:bnd_app_fac"}. # Continuous setting {#sec:continuous} In this section, we briefly recall the functional setting for the time-harmonic Maxwell's equations, formulate the model problem and examine its inf-sup stability. ## Functional spaces We use standard notation for Lebesgue and Sobolev spaces. To alleviate the notation, the inner product and associated norm in the spaces $\Ldeux$ and $\Ldeuxd$ are denoted by $(\SCAL,\SCAL)$ and $\|\SCAL\|$, respectively. The material properties $\eps$ and $\bnu\eqq \bmu^{-1}$ are measurable functions that take symmetric positive-definite values in $\Dom$ with eigenvalues uniformly bounded from above and from below away from zero. It is convenient to introduce the inner product and associated norm weighted by either $\eps$ or $\bnu$, leading to the notation $(\SCAL,\SCAL)_\eps$, $\|\SCAL\|_\eps$, $(\SCAL,\SCAL)_{\bnu}$ and $\|\SCAL\|_{\bnu}$. Whenever no confusion can arise, we use the symbol $^\perp$ to denote orthogonality with respect to the inner product $(\SCAL,\SCAL)_\eps$. Moreover, all the projection operators denoted using the symbol $\bPi$ are meant to be orthogonal with respect to this inner product; we say that the projections are $\bL^2_\eps$-orthogonal. 
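In finite dimensions, an $\bL^2_\eps$-orthogonal projection is simply an orthogonal projection for the inner product induced by a symmetric positive-definite matrix. The following sketch (matrix sizes and data are arbitrary, purely illustrative) records the two properties used repeatedly in what follows: the residual is $\eps$-orthogonal to the subspace, and the Pythagoras identity holds in the weighted norm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
B = rng.standard_normal((n, n))
Eps = B @ B.T + n * np.eye(n)       # SPD matrix playing the role of eps
V = rng.standard_normal((n, k))     # basis of the subspace to project onto
v = rng.standard_normal(n)

# eps-weighted projection onto range(V): minimize ||v - V c||_eps, i.e. solve
# the normal equations (V^T Eps V) c = V^T Eps v
c = np.linalg.solve(V.T @ Eps @ V, V.T @ Eps @ v)
Pv = V @ c
res = v - Pv

norm2 = lambda u: u @ Eps @ u       # squared weighted norm ||u||_eps^2
assert np.allclose(V.T @ Eps @ res, 0.0)             # residual is eps-orthogonal
assert np.isclose(norm2(v), norm2(Pv) + norm2(res))  # Pythagoras in the eps-norm
```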
We consider the Hilbert Sobolev spaces [\[eq:Hrot_spaces\]]{#eq:Hrot_spaces label="eq:Hrot_spaces"} $$\begin{aligned} \Hrot & \eqq \{\bv \in \Ldeuxd \st \ROT\bv\in \Ldeuxd\},\\ \Hrotrotz &\eqq \{\bv \in \Hrot \st \ROT\bv=\bzero\}, \\ \Hrotz &\eqq \{\bv \in \Hrot \st \gamma\upc_\front(\bv)=\bzero\}, \\ \Hrotzrotz &\eqq \{\bv \in \Hrotz \st \ROTZ\bv=\bzero\}, \end{aligned}$$ where the tangential trace operator $\gamma\upc_{\front}:\Hrot\rightarrow \bH^{-\frac12}(\front)$ is the extension by density of the map such that $\gamma\upc_\front(\bv)=\bv|_{\front}\CROSS \bn_\Dom$ for smooth fields. The subscript ${}_0$ indicates the curl operator acting on fields respecting homogeneous Dirichlet conditions. Notice that $\ROT$ and $\ROTZ$ are adjoint to each other, i.e., $(\ROTZ\bv,\bw)=(\bv,\ROT\bw)$ for all $(\bv,\bw)\in \Hrotz\times\Hrot$. We consider the subspace $$\bX\upc_0\eqq \Hrotz\cap \Hrotzrotz^\perp,$$ and we introduce the $\bL^2_\eps$-orthogonal projection $$\bPi\upc_0: \Ldeuxd \to \Hrotzrotz.$$ Since $\GRAD\Hunz \subset \Hrotzrotz$, any field $\bxi \in \bX\upc_0$ is such that $\DIV(\eps\bxi)=0$ in $\Dom$. Hence, $\bX\upc_0$ compactly embeds into $\Ldeuxd$ [@Weber:80]. **Remark 1** (Topology of $\Dom$).
*We have $\Hrotzrotz^\perp\subset \{\bv\in \Ldeuxd, \; \DIV(\eps\bv)=0\}$ with equality if and only if $\front$ is connected (see, e.g., [@AmBDG:98]).* ## Model problem We focus for simplicity on homogeneous Dirichlet boundary conditions and consider the functional space $$\bV_0\upc \eqq \Hrotz.$$ Given a positive real number $\omega>0$ and a source term $\bJ \in (\bV_0\upc)'$ (the topological dual space of $\bV_0\upc$), the model problem amounts to finding $\bE\in\bV_0\upc$ such that $$\label{eq:weak} b(\bE,\bw) = \langle\bJ,\bw\rangle \qquad \forall \bw \in \bV_0\upc,$$ with the bilinear form defined on $\bV_0\upc\times\bV_0\upc$ such that $$b(\bv,\bw) \eqq -\omega^2(\bv,\bw)_\eps + (\ROTZ\bv,\ROTZ\bw)_{\bnu},$$ and where the brackets on the right-hand side of [\[eq:weak\]](#eq:weak){reference-type="eqref" reference="eq:weak"} denote the duality product between $(\bV_0\upc)'$ and $\bV_0\upc$. In what follows, we assume that $\omega^2$ is not an eigenvalue of the $\ROT(\bnu\ROTZ {\cdot})$ operator in $\Dom$. As a result, the model problem [\[eq:weak\]](#eq:weak){reference-type="eqref" reference="eq:weak"} is well-posed. We equip the space $\Hrot$ and its subspaces defined in [\[eq:Hrot_spaces\]](#eq:Hrot_spaces){reference-type="eqref" reference="eq:Hrot_spaces"} with the energy norm defined in [\[eq:energy_norm\]](#eq:energy_norm){reference-type="eqref" reference="eq:energy_norm"}, and observe that the bilinear form $b$ satisfies $|b(\bv,\bw)|\le \tnorm{\bv}\tnorm{\bw}$. ## Inf-sup stability For all $\bg \in \Ldeuxd$, let $\bv_{\bg}\in \bV_0\upc$ denote the unique solution to [\[eq:weak\]](#eq:weak){reference-type="eqref" reference="eq:weak"} with right-hand side $(\bg,\bw)_\eps$, i.e., $b(\bv_\bg,\bw)=(\bg,\bw)_\eps$ for all $\bw\in \bV_0\upc$. We introduce the (nondimensional) stability constant $$\label{eq:def_bst} \bst \eqq \sup_{\substack{\bg \in \Hrotzrotz^\perp \\ \|\bg\|_\eps = 1}} \omega \tnorm{\bv_{\bg}}.$$ **Lemma 2** (Inf-sup stability).
*The following holds: $$\label{eq:infsup_exact} \frac{1}{1+2\bst} \le \inf_{\substack{\bv \in \bV_0\upc\\ \tnorm{\bv}=1}} \sup_{\substack{\bw\in \bV_0\upc \\ \tnorm{\bw}=1}} |b(\bv,\bw)| \le \frac{1}{\bst}.$$* *Proof.* (1) Lower bound. Let $\bv \in \bV_0\upc$ and let us set $\bv=\bv_0+\bv_\Pi$ with $\bv_0\eqq (I-\bPi\upc_0)(\bv)$ and $\bv_\Pi\eqq \bPi\upc_0(\bv)$. Let $\bxi_0 \in \bV_0\upc$ be the adjoint solution such that $b(\bw,\bxi_0)=\omega^2 (\bw,\bv_0)_\eps$ for all $\bw\in\bV_0\upc$. Since $\bv_0\in \Hrotzrotz^\perp$ by construction, we have $$\label{eq:pty_bxi} \tnorm{\bxi_0} \le \bst \omega \|\bv_0\|_\eps \le \bst \tnorm{\bv_0},$$ owing to the symmetry of $b$ and the definition of $\bst$ for the first bound, and the definition of the $\tnorm{\SCAL}$-norm for the second bound. Moreover, taking the test function $\bw\eqq \bv\in\bV_0\upc$ in the adjoint problem, we infer that $$b(\bv,\bxi_0) = \omega^2(\bv,\bv_0)_\eps=\omega^2\|\bv_0\|_\eps^2,$$ since $\bv_0$ and $\bv_\Pi$ are $\bL^2_\eps$-orthogonal. In addition, invoking the symmetry of $b$ and since $\bv_\Pi$ is curl-free, we have $$\begin{aligned} b(\bv,\bv_0-\bv_\Pi) &=b(\bv_0+\bv_\Pi,\bv_0-\bv_\Pi) =b(\bv_0,\bv_0)-b(\bv_\Pi,\bv_\Pi) \\ &=\tnorm{\bv_0}^2 -2\omega^2\|\bv_0\|_\eps^2 + \omega^2\|\bv_\Pi\|_\eps^2 =\tnorm{\bv}^2-2\omega^2\|\bv_0\|_\eps^2.\end{aligned}$$ Combining the above two identities proves that $$b(\bv,\bv_0+2\bxi_0-\bv_\Pi) = \tnorm{\bv}^2.$$ Finally, owing to [\[eq:pty_bxi\]](#eq:pty_bxi){reference-type="eqref" reference="eq:pty_bxi"}, we have $$\begin{aligned} \tnorm{\bv_0+2\bxi_0-\bv_\Pi}^2 &= \tnorm{\bv_0+2\bxi_0}^2 + \omega^2\|\bv_\Pi\|_\eps^2 \\ &\le (1+2\bst)^2 \tnorm{\bv_0}^2 + \omega^2\|\bv_\Pi\|_\eps^2 \le (1+2\bst)^2 \tnorm{\bv}^2,\end{aligned}$$ since $\tnorm{\bv_0}^2 + \omega^2\|\bv_\Pi\|_\eps^2 = \tnorm{\bv}^2$. This proves the lower bound. \(2\) Upper bound. 
For all $\bphi\in (\bV_0\upc)'$, let $\bv_\bphi$ denote the unique solution to [\[eq:weak\]](#eq:weak){reference-type="eqref" reference="eq:weak"} with right-hand side $\langle \bphi,\bw\rangle$. Let $\alpha$ denote the inf-sup constant in [\[eq:infsup_exact\]](#eq:infsup_exact){reference-type="eqref" reference="eq:infsup_exact"}. Then $\alpha>0$ since [\[eq:weak\]](#eq:weak){reference-type="eqref" reference="eq:weak"} is well posed, and we have $\alpha^{-1} = \sup_{\bphi\in (\bV_0\upc)'}\frac{\tnorm{\bv_\bphi}}{\|\bphi\|_{(\bV_0\upc)'}}$ (see, e.g., [@EG_volII Lem. C.51]). We consider the linear forms $\bphi_\bg\in(\bV_0\upc)'$ such that $\langle \bphi_\bg,\bw\rangle = (\bg,\bw)_\eps$ for all $\bw\in\bV_0\upc$ for some function $\bg\in\Hrotzrotz^\perp$. Owing to the Cauchy--Schwarz inequality and the definition of the $\tnorm{\SCAL}$-norm, we have $$\|\bphi_\bg\|_{(\bV_0\upc)'} = \sup_{\bw\in \bV_0\upc} \frac{|\langle \bphi_\bg,\bw\rangle|}{\tnorm{\bw}} \le \sup_{\bw\in \bV_0\upc} \frac{\|\bg\|_\eps\|\bw\|_\eps}{\tnorm{\bw}}\le \omega^{-1}\|\bg\|_\eps.$$ Restricting the supremum defining $\alpha^{-1}$ to the above linear forms, we infer that $$\alpha^{-1} \ge \sup_{\bg\in \Hrotzrotz^\perp}\frac{\tnorm{\bv_\bg}}{\omega^{-1}\|\bg\|_\eps} = \bst.$$ This proves the upper bound. ◻ **Remark 3** (Inf-sup condition). *Since the stability constant $\bst$ is expected to be large (it scales as $\omega$ times the reciprocal of the distance of $\omega$ to the spectrum of the $\ROT(\bnu\ROTZ{\cdot})$ operator [@ChaVe:22]), [\[eq:infsup_exact\]](#eq:infsup_exact){reference-type="eqref" reference="eq:infsup_exact"} means that, up to at most a factor of two, the inf-sup constant of the bilinear form $b$ can be estimated as $(1+2\bst)^{-1}$.* # Discrete setting {#sec:disc_setting} In this section, we present the setting to formulate the discrete problem, and we introduce two (nondimensional) quantities to be used in the error and stability analysis presented in the next section. 
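Before turning to the discrete setting, the two-sided estimate [\[eq:infsup_exact\]](#eq:infsup_exact){reference-type="eqref" reference="eq:infsup_exact"} can be checked on a finite-dimensional analogue in which the curl-free component is absent, so that $b$ and the energy norm diagonalize in a common basis of generalized eigenvectors. The sketch below (matrices and frequency are arbitrary, purely illustrative) computes the inf-sup constant $\alpha$ and the stability constant directly and verifies the bracketing of $\alpha$ between $(1+2\bst)^{-1}$ and $\bst^{-1}$.

```python
import numpy as np

# Finite-dimensional analogue of Lemma 2 without curl-free kernel:
# b(v, w) = w^T (A - w2*M) v, energy norm |||v|||^2 = v^T (A + w2*M) v,
# for arbitrary SPD "stiffness" A and "mass" M (playing the role of (.,.)_eps).
rng = np.random.default_rng(2)
n = 8
Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)
R = rng.standard_normal((n, n))
A = R @ R.T + np.eye(n)
w2 = 1.7                      # omega^2, assumed away from the spectrum

# Generalized eigenvalues of A z = lam M z (the z_i diagonalize b and the energy norm)
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
lam = np.linalg.eigvalsh(Linv @ A @ Linv.T)

# In the eigenbasis: b(z_i, z_i) = lam_i - w2 and |||z_i|||^2 = lam_i + w2, hence
alpha = np.min(np.abs(lam - w2) / (lam + w2))                       # inf-sup constant
beta = np.sqrt(w2) * np.max(np.sqrt(lam + w2) / np.abs(lam - w2))   # stability constant

assert 1.0 / (1.0 + 2.0 * beta) - 1e-12 <= alpha <= 1.0 / beta + 1e-12
```

The two inequalities hold for every positive spectrum and frequency, mirroring the proof of Lemma 2 in the absence of the curl-free subspace.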
## Mesh and finite element spaces Let $\calT_h$ be an affine simplicial mesh covering $\Dom$ exactly. A generic mesh cell is denoted $K$, its diameter $h_K$ and its outward unit normal $\bn_K$. Let $k\ge0$ be the polynomial degree. Let $\polP_{k,d}$ be the space composed of $d$-variate polynomials of total degree at most $k$ and set $\bpolP_{k,d}\eqq [\polP_{k,d}]^d$. Let $\bpolN_{k,d}$ be the space composed of the $d$-variate Nédélec polynomials of order $k$ of the first kind (recall that $\bpolP_{k,d}\subsetneq \bpolN_{k,d} \subsetneq \bpolP_{k+1,d}$). We consider the discrete subspace: $$\bV_{h0}\upc \eqq \bset \bv_h\in \bV_0\upc \st \bv_h|_K\in \bpolN_{k,d},\, \forall K\in\calT_h\eset. \label{eq:def_Vhc}$$ Moreover, we let $$\bV\upc_{h0}(\ceqz) \eqq \bset \bv_h \in \bV_{h0}\upc \st \ROTZ \bv_h = \bzero \eset.$$ The $\bL^2_\eps$-orthogonal projection $$\bPi\upc_{h0} : \Ldeuxd \to \bV\upc_{h0}(\ceqz)$$ plays a key role in what follows. In particular, we introduce the subspace $$\label{eq:disc_involution} \bX\upc_{h0}\eqq \bV_{h0}\upc \cap\bV\upc_{h0}(\ceqz)^\perp,$$ which is composed of fields $\bv_h$ such that $\bPi\upc_{h0}(\bv_h)=\bzero$. Loosely speaking, discrete fields in $\bX\upc_{h0}$ are discretely divergence-free (and satisfy a finite number of additional constraints when $\front$ is not connected). ## Discrete problem The discrete problem reads as follows: Find $\bE_h\in \bV_{h0}\upc$ such that $$\label{eq:disc_pb_c} b(\bE_h,\bw_h)=\langle \bJ,\bw_h\rangle \qquad \forall \bw_h\in \bV_{h0}\upc.$$ The well-posedness of [\[eq:disc_pb_c\]](#eq:disc_pb_c){reference-type="eqref" reference="eq:disc_pb_c"} is established in Section [4](#sec:conforming){reference-type="ref" reference="sec:conforming"}. ## Approximation and divergence conformity factors {#sec:def_factors_c} In this section, we define two factors, the approximation factor and the divergence conformity factor. Both factors are used in the error and inf-sup stability analysis. 
We prove in Section [5](#sec:bnd_app_fac){reference-type="ref" reference="sec:bnd_app_fac"} that both factors tend to zero as the mesh size tends to zero or the polynomial degree is increased. For all $\btheta \in \Hrotzrotz^\perp$, we consider the adjoint problem consisting of finding $\bzeta_\btheta \in \bV_0\upc$ such that $$\label{eq:adjoint} b(\bw,\bzeta_\btheta) = (\bw,\btheta)_\eps \qquad \forall \bw \in \bV_0\upc.$$ Taking any test function $\bw \in \Hrotzrotz\subset \bV_0\upc$ shows that $$-\omega^2(\bw,\bzeta_\btheta)_\eps=b(\bw,\bzeta_\btheta)=(\bw,\btheta)_\eps=0,$$ where the first equality follows from $\ROTZ\bw=\bzero$, the second from the definition of the adjoint solution, and the third from the assumption $\btheta \in \Hrotzrotz^\perp$. Since $\bw$ is arbitrary in $\Hrotzrotz$, this proves that $\bzeta_\btheta \in \Hrotzrotz^\perp$. Thus, $\bzeta_\btheta \in \bX\upc_0$, and owing to the compact embedding $\bX\upc_0\hookrightarrow \Ldeuxd$, it is reasonable to expect that the (nondimensional) approximation factor $$\gpc\eqq \sup_{\substack{\btheta \in \Hrotzrotz^\perp\\ \|\btheta\|_\eps=1}} \min_{\bv_h\upc\in \bV\upc_{h0}} \omega \tnorm{\bzeta_\btheta-\bv_h\upc}, \label{eq:def_gamma_c}$$ exhibits some decay rate as the mesh size tends to zero and/or the polynomial degree is increased. The second useful quantity is the (nondimensional) divergence conformity factor $$\label{eq_gamma_bX} \gdivc \eqq \sup_{\substack{\bv_h \in \bX\upc_{h0} \\ \|\ROTZ \bv_h\|_\bnu = 1}} \omega \|\bPi\upc_0(\bv_h)\|_\eps.$$ Loosely speaking, the adopted terminology indicates that $\gdivc$ essentially measures the extent to which a discretely divergence-free field fails to be pointwise divergence-free. It is again reasonable to expect, under the same conditions as above, that $\gdivc$ tends to zero.
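Both $\gpc$ and $\gdivc$ are sup-min (or sup) quantities that, in a finite-dimensional surrogate, reduce to generalized eigenvalue problems. The following Python sketch computes a factor of the same type as $\gpc$; the matrices `G`, `Me`, `S`, and `V` are random placeholders standing in, respectively, for the Gram matrix of the $\tnorm{\SCAL}$-norm, the $\eps$-weighted mass matrix, the adjoint solution map $\btheta\mapsto\bzeta_\btheta$, and a basis of the discrete space.

```python
import numpy as np
from scipy.linalg import eigh, solve

rng = np.random.default_rng(1)
n, m, omega = 10, 4, 2.0

A = rng.standard_normal((n, n)); G  = A @ A.T + n * np.eye(n)  # triple-norm Gram matrix (SPD)
B = rng.standard_normal((n, n)); Me = B @ B.T + n * np.eye(n)  # eps-weighted mass matrix (SPD)
S = rng.standard_normal((n, n))   # placeholder for the adjoint solution map
V = rng.standard_normal((n, m))   # basis of the discrete subspace

# The inner minimum is attained by the G-orthogonal projection onto span(V),
# so theta -> (I - P) S theta maps a datum to its best-approximation error.
P = V @ solve(V.T @ G @ V, V.T @ G)
R = (np.eye(n) - P) @ S

# gpc = omega * sup over unit-Me data of the G-norm of R theta, i.e., the
# square root of the largest eigenvalue of R^T G R relative to Me.
gpc = omega * np.sqrt(eigh(R.T @ G @ R, Me, eigvals_only=True)[-1])

# Sanity check against a random datum:
theta = rng.standard_normal(n)
v = R @ theta
ratio = omega * np.sqrt(v @ G @ v) / np.sqrt(theta @ Me @ theta)
assert ratio <= gpc + 1e-9
```

The same reduction applies to a $\gdivc$-type factor, with `R` replaced by the matrix of $\bPi\upc_0$ restricted to the discrete subspace and `Me` by the Gram matrix of $\|\ROTZ\SCAL\|_\bnu$.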
# Error and stability analysis {#sec:conforming} In this section, we analyze the conforming approximation [\[eq:disc_pb_c\]](#eq:disc_pb_c){reference-type="eqref" reference="eq:disc_pb_c"} of the model problem [\[eq:weak\]](#eq:weak){reference-type="eqref" reference="eq:weak"}. As usual in duality arguments, we first establish an error estimate under a smallness condition on the mesh size (this estimate implies the uniqueness, and thus also the existence, of the discrete solution under the same condition on the mesh size), and then establish an inf-sup condition again under the same assumptions. ## Error decomposition and preliminary bounds Let us define the bilinear form $b^+$ on $\bV_0\upc\times \bV_0\upc$ such that $$b^+(\bv,\bw) \eqq \omega^2(\bv,\bw)_\eps + (\ROTZ\bv,\ROTZ\bw)_\bnu.$$ We define the best-approximation operator $\bestc:\bV_0\upc\rightarrow \bV\upc_{h0}$ as follows: For all $\bv \in \bV_0\upc$, $\bestc(\bv) \in \bV\upc_{h0}$ is such that $$\label{eq:def_best_c} b^+(\bv-\bestc(\bv),\bw_h) = 0 \quad \forall \bw_h\in \bV\upc_{h0}.$$ **Lemma 4** (Properties of $\bestc$). *The best-approximation operator $\bestc$ defined in [\[eq:def_best_c\]](#eq:def_best_c){reference-type="eqref" reference="eq:def_best_c"} enjoys the following two properties:* *$$\begin{aligned} {2} &\tnorm{\bestc(\bv)} \le \tnorm{\bv},&\qquad&\forall \bv\in \bV_0\upc, \label{eq:stab_best} \\ &\bestc(\bv) \in \bV\upc_{h0}(\ceqz)^\perp,&\qquad&\forall \bv \in \bV\upc_{h0}(\ceqz)^\perp. \label{eq:useful_pty}\end{aligned}$$* *Proof.* [\[eq:stab_best\]](#eq:stab_best){reference-type="eqref" reference="eq:stab_best"} follows from the fact that the bilinear form $b^+$ is the inner product associated with the $\tnorm{\SCAL}$-norm. 
To prove [\[eq:useful_pty\]](#eq:useful_pty){reference-type="eqref" reference="eq:useful_pty"}, take any $\bw_h \in \bV\upc_{h0}(\ceqz)$ in [\[eq:def_best_c\]](#eq:def_best_c){reference-type="eqref" reference="eq:def_best_c"} and observe that $\omega^2(\bv-\bestc(\bv),\bw_h)_\eps = b^+(\bv-\bestc(\bv),\bw_h) = 0$. ◻ We define the approximation error and the best-approximation error as follows: $$\be\eqq \bE-\bE_h, \qquad \beeta\eqq \bE-\bestc(\bE).$$ We consider the error decomposition $\be = \btheta_0 + \btheta_\Pi$, with $$\label{eq:def_theta} \btheta_0\eqq(I-\bPi\upc_0)(\be)\in \Hrotzrotz^\perp, \qquad \btheta_\Pi\eqq\bPi\upc_0(\be) \in \Hrotzrotz.$$ The motivation behind this decomposition is that $\omega\|\btheta_0\|_\eps$ can be bounded by a duality argument, whereas $\omega\|\btheta_\Pi\|_\eps$ represents a divergence conformity error. **Lemma 5** (Bound on $\btheta_0$). *We have $$\label{eq:bnd_thet1_c} \omega \|\btheta_0\|_\eps \le \gpc\, \tnorm{\be},$$ with the approximation factor $\gpc$ defined in [\[eq:def_gamma_c\]](#eq:def_gamma_c){reference-type="eqref" reference="eq:def_gamma_c"}.* *Proof.* We consider the adjoint problem [\[eq:adjoint\]](#eq:adjoint){reference-type="eqref" reference="eq:adjoint"} with data $\btheta\eqq\btheta_0$. Notice that $\btheta\in\Hrotzrotz^\perp$ by construction. Let $\bzeta_\btheta \in \bX\upc_0$ denote the unique adjoint solution to this problem. 
Since $\be \in \bV_0\upc$, we have $$b(\be,\bzeta_\btheta)=(\be,\btheta_0)_\eps.$$ Using Galerkin's orthogonality together with $(\btheta_0,\btheta_\Pi)_\eps=0$, and multiplying by $\omega^2$ gives $$\omega^2\|\btheta_0\|_\eps^2 = \omega^2(\be,\btheta_0)_\eps = \omega^2b(\be,\bzeta_\btheta-\bv_h) \qquad \forall \bv_h\in \bV\upc_{h0}.$$ Owing to the boundedness of the bilinear form $b$, we infer that $$\omega^2\|\btheta_0\|_\eps^2 \le \omega^2 \tnorm{\be} \, \tnorm{\bzeta_\btheta-\bv_h}.$$ Invoking the approximation factor $\gpc$ gives $$\omega^2\|\btheta_0\|_\eps^2\le \tnorm{\be}\, \gpc \omega \|\btheta_0\|_\eps,$$ since $\bv_h$ is arbitrary. This proves the claim. ◻ **Lemma 6** (Bound on $\btheta_\Pi$). *We have $$\label{eq:bnd_thet2_c} (1-\gdivc)\omega \|\btheta_\Pi\|_\eps \le \omega \|\bPi\upc_0(\beeta)\|_\eps + \gdivc\tnorm{\btheta_0}.$$* *Proof.* We observe that $\bPi\upc_{h0}(\btheta_\Pi)=\bPi\upc_{h0}(\bPi\upc_0(\be)) =\bPi\upc_{h0}(\be)=\bzero$ owing to Galerkin's orthogonality. Indeed, $-\omega^2(\be,\bw_h)_\eps=b(\be,\bw_h)=0$ for all $\bw_h\in \bV\upc_{h0}(\ceqz)$. Hence, we have $$\label{eq:theta_Pi_perp_c} \btheta_\Pi \in \bV\upc_{h0}(\ceqz)^\perp.$$ To bound $\btheta_\Pi$, we write $$\|\btheta_\Pi\|_\eps^2 = (\btheta_\Pi,\bestc(\btheta_\Pi))_\eps + (\btheta_\Pi,\btheta_\Pi-\bestc(\btheta_\Pi))_\eps \qqe \Theta_1+\Theta_2,$$ and estimate the two terms on the right-hand side.
Since $\btheta_\Pi=\bPi\upc_0(\btheta_\Pi)$ and $\bPi\upc_0$ is self-adjoint for the inner product $(\SCAL,\SCAL)_\eps$, we obtain $$\begin{aligned} \Theta_1 = (\btheta_\Pi,\bestc(\btheta_\Pi))_\eps &= (\btheta_\Pi,\bPi\upc_0(\bestc(\btheta_\Pi)))_\eps \\ &\le \|\btheta_\Pi\|_\eps \, \|\bPi\upc_0(\bestc(\btheta_\Pi))\|_\eps \\ &\le \|\btheta_\Pi\|_\eps \,\gdivc \omega^{-1} \|\ROTZ\bestc(\btheta_\Pi)\|_\bnu,\end{aligned}$$ where we used the Cauchy--Schwarz inequality in the second line and the divergence conformity factor defined in [\[eq_gamma_bX\]](#eq_gamma_bX){reference-type="eqref" reference="eq_gamma_bX"} in the third line (recall that $\bestc(\btheta_\Pi)\in \bV\upc_{h0}$ by construction and observe that $\bestc(\btheta_\Pi) \in \bV\upc_{h0}(\ceqz)^\perp$ owing to [\[eq:useful_pty\]](#eq:useful_pty){reference-type="eqref" reference="eq:useful_pty"} and [\[eq:theta_Pi_perp_c\]](#eq:theta_Pi_perp_c){reference-type="eqref" reference="eq:theta_Pi_perp_c"}). Since $$\|\ROTZ\bestc(\btheta_\Pi)\|_\bnu \le \tnorm{\bestc(\btheta_\Pi)} \le \tnorm{\btheta_\Pi} = \omega\|\btheta_\Pi\|_\eps,$$ owing to the definition of the $\tnorm{\SCAL}$-norm and the stability property [\[eq:stab_best\]](#eq:stab_best){reference-type="eqref" reference="eq:stab_best"} of $\bestc$, we infer that $$|\Theta_1| \le \gdivc \|\btheta_\Pi\|_\eps^2.$$ Furthermore, recalling that $\btheta_\Pi = \bE-\btheta_0-\bE_h$ and the definition of $\beeta$, we obtain $$\Theta_2 = (\btheta_\Pi,\beeta)_\eps - (\btheta_\Pi,\btheta_0-\bestc(\btheta_0))_\eps \qqe \Theta_{2,1}-\Theta_{2,2},$$ where we used that $\bE_h-\bestc(\bE_h)=\bzero$. 
The Cauchy--Schwarz inequality gives $$|\Theta_{2,1}| = |(\btheta_\Pi,\beeta)_\eps| = |(\btheta_\Pi,\bPi\upc_0(\beeta))_\eps| \le \|\btheta_\Pi\|_\eps\, \|\bPi\upc_0(\beeta)\|_\eps.$$ Concerning $\Theta_{2,2}$, we have $$\begin{aligned} |\Theta_{2,2}| &= |(\btheta_\Pi,\btheta_0-\bestc(\btheta_0))_\eps| = |(\btheta_\Pi,\bestc(\btheta_0))_\eps| = |(\btheta_\Pi,\bPi\upc_0(\bestc(\btheta_0)))_\eps| \\ &\le \|\btheta_\Pi\|_\eps\, \|\bPi\upc_0(\bestc(\btheta_0))\|_\eps \le \|\btheta_\Pi\|_\eps\, \gdivc\omega^{-1} \|\ROTZ\bestc(\btheta_0)\|_\bnu \le \|\btheta_\Pi\|_\eps\, \gdivc\omega^{-1} \tnorm{\btheta_0}.\end{aligned}$$ Notice that we can again use the divergence conformity factor $\gdivc$ since $\btheta_0 \in \Hrotzrotz^\perp$ implies by [\[eq:useful_pty\]](#eq:useful_pty){reference-type="eqref" reference="eq:useful_pty"} that $\bestc(\btheta_0)\in \bV\upc_{h0}(\ceqz)^\perp$. Putting the above bounds on $\Theta_{2,1}$ and $\Theta_{2,2}$ together gives $$|\Theta_2| \le \|\btheta_\Pi\|_\eps\, \omega^{-1} \Big( \omega \|\bPi\upc_0(\beeta)\|_\eps + \gdivc \tnorm{\btheta_0}\Big).$$ The estimate [\[eq:bnd_thet2_c\]](#eq:bnd_thet2_c){reference-type="eqref" reference="eq:bnd_thet2_c"} follows by combining the above bounds on $\Theta_1$ and $\Theta_2$. ◻ **Remark 7** (Lemma [Lemma 6](#lem:bnd_thet2_c){reference-type="ref" reference="lem:bnd_thet2_c"}). *Obviously, the estimate [\[eq:bnd_thet2_c\]](#eq:bnd_thet2_c){reference-type="eqref" reference="eq:bnd_thet2_c"} is meaningful only if $\gdivc<1$, which holds true if the mesh size is small enough and/or the polynomial degree is large enough; see Section [5](#sec:bnd_app_fac){reference-type="ref" reference="sec:bnd_app_fac"} for further insight.* **Lemma 8** (Bound on $\ROTZ\btheta_0$). 
*We have $$\label{eq:bnd_rot_theta_c} \|\ROTZ\btheta_0\|_\bnu^2\le \tnorm{(I-\bPi\upc_0)(\beeta)}^2 + (2\gdivc+3\gpc^2)\tnorm{\be}^2 + 2\gdivc\omega^2\|\btheta_\Pi\|_\eps^2.$$* *Proof.* Recalling that $\btheta_0=(I-\bPi\upc_0)(\be)$, a straightforward calculation shows that $$\begin{aligned} b(\btheta_0,\btheta_0) &= b(\btheta_0,(I-\bPi\upc_0)(\be)) \\ &=b(\btheta_0,(I-\bPi\upc_0)(\beeta))-b(\btheta_0,(I-\bPi\upc_0)(\bE_h-\bestc(\bE))) \\ &=b(\btheta_0,(I-\bPi\upc_0)(\beeta))-b(\be,(I-\bPi\upc_0)(\bE_h-\bestc(\bE))) \\ &\le \tnorm{\btheta_0} \, \tnorm{(I-\bPi\upc_0)(\beeta)}-b(\be,(I-\bPi\upc_0)(\bE_h-\bestc(\bE))),\end{aligned}$$ where we used that $b(\btheta_\Pi,(I-\bPi\upc_0)(\SCAL)) = 0$ on the third line and the boundedness of $b$ on the fourth line. Focusing on the second term on the right-hand side, we notice using Galerkin's orthogonality that $$\begin{aligned} -b(\be,(I-\bPi\upc_0)(\bE_h-\bestc(\bE))) &= b(\be,\bPi\upc_0(\bE_h-\bestc(\bE))) \\ &=-\omega^2 (\btheta_\Pi,\bPi\upc_0(\bE_h-\bestc(\bE)))_\eps \\ &\le \omega^2 \|\btheta_\Pi\|_\eps\, \|\bPi\upc_0(\bE_h-\bestc(\bE))\|_\eps.\end{aligned}$$ For all $\bv_h\in \bV\upc_{h0}(\ceqz)$, we have $$\begin{aligned} \omega^2(\bE_h-\bestc(\bE),\bv_h)_\eps = -b(\bE_h,\bv_h)-b^+(\bestc(\bE),\bv_h) &= -b(\bE,\bv_h)-b^+(\bE,\bv_h) \\ &= \omega^2(\bE,\bv_h)_\eps-\omega^2(\bE,\bv_h)_\eps = 0,\end{aligned}$$ where the first and third equalities follow from the fact that $\bv_h$ is curl-free, and the second from the definition of $\bestc$ and Galerkin's orthogonality.
This shows that $$\bE_h-\bestc(\bE) \in \bV\upc_{h0}(\ceqz)^\perp.$$ Using the divergence conformity factor $\gdivc$ defined in [\[eq_gamma_bX\]](#eq_gamma_bX){reference-type="eqref" reference="eq_gamma_bX"} then yields $$|b(\be,(I-\bPi\upc_0)(\bE_h-\bestc(\bE)))| \le \gdivc \omega \|\btheta_\Pi\|_\eps \|\ROTZ(\bE_h-\bestc(\bE))\|_\bnu.$$ Invoking the triangle inequality and the stability property [\[eq:stab_best\]](#eq:stab_best){reference-type="eqref" reference="eq:stab_best"} of $\bestc$ gives $$\begin{aligned} \|\ROTZ(\bE_h-\bestc(\bE))\|_\bnu & \le \|\ROTZ(\bE_h-\bE)\|_\bnu+\|\ROTZ(\bE-\bestc(\bE))\|_\bnu \\ & \le \tnorm{\be}+\tnorm{\bE-\bestc(\bE)} \le 2\tnorm{\be},\end{aligned}$$ since $\tnorm{\bE-\bestc(\bE)} \le \tnorm{\bE-\bE_h}=\tnorm{\be}$ by definition of the best-approximation operator $\bestc$. Putting everything together gives $$b(\btheta_0,\btheta_0) \le \tnorm{\btheta_0} \, \tnorm{(I-\bPi\upc_0)(\beeta)} + 2\gdivc \omega \|\btheta_\Pi\|_\eps\tnorm{\be}.$$ As a result, we have $$\begin{aligned} \|\ROTZ\btheta_0\|_\bnu^2 &= b(\btheta_0,\btheta_0) + \omega^2\|\btheta_0\|_\eps^2 \\ & \le \tnorm{\btheta_0} \, \tnorm{(I-\bPi\upc_0)(\beeta)} + 2\gdivc \omega \|\btheta_\Pi\|_\eps\tnorm{\be}+ \omega^2\|\btheta_0\|_\eps^2.\end{aligned}$$ Invoking Young's inequality for the first and second terms on the right-hand side and using that $\tnorm{\btheta_0}^2=\|\ROTZ\btheta_0\|_\bnu^2+\omega^2\|\btheta_0\|_\eps^2$, we infer that $$\|\ROTZ\btheta_0\|_\bnu^2 \le \tnorm{(I-\bPi\upc_0)(\beeta)}^2 + 2\gdivc\tnorm{\be}^2 + 2\gdivc\omega^2 \|\btheta_\Pi\|_\eps^2 + 3\omega^2\|\btheta_0\|_\eps^2.$$ The assertion now follows from the bound on $\btheta_0$ established in Lemma [Lemma 5](#lem:bnd_thet1_c){reference-type="ref" reference="lem:bnd_thet1_c"}. ◻ ## Error estimate We are now ready to establish our main error estimate. **Theorem 9** (A priori error estimate and discrete well-posedness). *Assume that $\gdivc \le 1$. 
The following holds: $$\label{eq:opt_err_c} (1-15\gdivc-4\gpc^2)\tnorm{\bE-\bE_h} \le \min_{\bv_h \in \bV_{h0}\upc}\tnorm{\bE-\bv_h}.$$ Consequently, if the mesh size is small enough and/or the polynomial degree is large enough so that $15\gdivc+4\gpc^2<1$, the discrete problem [\[eq:disc_pb_c\]](#eq:disc_pb_c){reference-type="eqref" reference="eq:disc_pb_c"} is well-posed.* *Proof.* (1) In this first step, we establish some preliminary bounds. Since $\tnorm{\btheta_0}\le \tnorm{\be}$ (because $\tnorm{\be}^2=\tnorm{\btheta_0}^2 + \omega^2\|\btheta_\Pi\|_\eps^2$), the estimate [\[eq:bnd_thet2_c\]](#eq:bnd_thet2_c){reference-type="eqref" reference="eq:bnd_thet2_c"} implies that $$\label{eq:bnd_theta22a_c} (1-\gdivc) \omega \|\btheta_\Pi\|_\eps \le \omega \|\bPi\upc_0(\beeta)\|_\eps + \gdivc\tnorm{\be}.$$ Moreover, we have $$\omega \|\bPi\upc_0(\beeta)\|_\eps \le \omega \|\beeta\|_\eps \le \tnorm{\beeta} \le \tnorm{\bE-\bE_h} = \tnorm{\be}.$$ Squaring [\[eq:bnd_theta22a_c\]](#eq:bnd_theta22a_c){reference-type="eqref" reference="eq:bnd_theta22a_c"} (recall that $\gdivc\le 1$ by assumption) and using the above bound in the double product, we obtain $$\begin{aligned} (1-\gdivc)^2 \omega^2 \|\btheta_\Pi\|_\eps^2 \le {}& \omega^2 \|\bPi\upc_0(\beeta)\|_\eps^2 + (2\gdivc+\gdivc^2)\tnorm{\be}^2 \nonumber \\ \le {}& \omega^2 \|\bPi\upc_0(\beeta)\|_\eps^2 + 3\gdivc\tnorm{\be}^2, \label{eq:bnd_prep}\end{aligned}$$ where the last bound follows from $\gdivc\le1$. Since $\omega \|\bPi\upc_0(\beeta)\|_\eps \le \tnorm{\be}$ and $\gdivc\le1$, [\[eq:bnd_prep\]](#eq:bnd_prep){reference-type="eqref" reference="eq:bnd_prep"} implies that $$\label{eq:bnd_prepp} (1-\gdivc)^2 \omega^2 \|\btheta_\Pi\|_\eps^2 \le 4 \tnorm{\be}^2.$$ \(2\) We are now ready to prove [\[eq:opt_err_c\]](#eq:opt_err_c){reference-type="eqref" reference="eq:opt_err_c"}.
Multiplying the estimate [\[eq:bnd_rot_theta_c\]](#eq:bnd_rot_theta_c){reference-type="eqref" reference="eq:bnd_rot_theta_c"} from Lemma [Lemma 8](#lem:bnd_rot_theta_c){reference-type="ref" reference="lem:bnd_rot_theta_c"} by $(1-\gdivc)^2$ (which is $\le1$) and using [\[eq:bnd_prepp\]](#eq:bnd_prepp){reference-type="eqref" reference="eq:bnd_prepp"} gives $$\begin{aligned} (1-\gdivc)^2\|\ROTZ\btheta_0\|_\bnu^2 \le {}& \tnorm{(I-\bPi\upc_0)(\beeta)}^2 + (2\gdivc+3\gpc^2)\tnorm{\be}^2 + 2\gdivc(1-\gdivc)^2\omega^2\|\btheta_\Pi\|_\eps^2 \nonumber \\ \le {}&\tnorm{(I-\bPi\upc_0)(\beeta)}^2 + (10\gdivc+3\gpc^2)\tnorm{\be}^2. \label{eq:prelim_bnd_c}\end{aligned}$$ Since $\tnorm{\be}^2 = \omega^2\|\btheta_0\|_\eps^2 + \omega^2\|\btheta_\Pi\|_\eps^2 + \|\ROTZ\btheta_0\|_\bnu^2$, we infer that $$\begin{aligned} (1-\gdivc)^2\tnorm{\be}^2 \le{}& \omega^2\|\btheta_0\|_\eps^2 + (1-\gdivc)^2\omega^2\|\btheta_\Pi\|_\eps^2 + (1-\gdivc)^2\|\ROTZ\btheta_0\|_\bnu^2 \nonumber \\ \le{}& \gpc^2 \tnorm{\be}^2 + \omega^2 \|\bPi\upc_0(\beeta)\|_\eps^2 + 3\gdivc\tnorm{\be}^2 \nonumber \\ &+\tnorm{(I-\bPi\upc_0)(\beeta)}^2 + (10\gdivc+3\gpc^2)\tnorm{\be}^2 \nonumber \\ = {}& \tnorm{\beeta}^2 + (13\gdivc+4\gpc^2)\tnorm{\be}^2, \label{eq:bnd_e_eta_e}\end{aligned}$$ where the second bound follows from Lemma [Lemma 5](#lem:bnd_thet1_c){reference-type="ref" reference="lem:bnd_thet1_c"}, [\[eq:bnd_prep\]](#eq:bnd_prep){reference-type="eqref" reference="eq:bnd_prep"}, and [\[eq:prelim_bnd_c\]](#eq:prelim_bnd_c){reference-type="eqref" reference="eq:prelim_bnd_c"}, and the last equality follows from $\tnorm{\beeta}^2=\tnorm{(I-\bPi\upc_0)(\beeta)}^2+\omega^2 \|\bPi\upc_0(\beeta)\|_\eps^2$. The error estimate [\[eq:opt_err_c\]](#eq:opt_err_c){reference-type="eqref" reference="eq:opt_err_c"} follows by observing that $(1-\gdivc)^2 \ge 1-2\gdivc$. 
\(3\) If the mesh size is small enough so that $15\gdivc + 4\gpc^2<1$, the error estimate [\[eq:opt_err_c\]](#eq:opt_err_c){reference-type="eqref" reference="eq:opt_err_c"} implies the uniqueness of the discrete solution. Existence then follows from the fact that [\[eq:disc_pb_c\]](#eq:disc_pb_c){reference-type="eqref" reference="eq:disc_pb_c"} amounts to a square linear system. ◻ **Remark 10** (Asymptotic optimality). *Notice that in [\[eq:opt_err_c\]](#eq:opt_err_c){reference-type="eqref" reference="eq:opt_err_c"}, we have $\gdivc \to 0$ and $\gpc \to 0$ as $h\to0$ or $k\to\infty$. Hence, we have $$\tnorm{\bE-\bE_h} \leq (1+\theta(h)) \min_{\bv_h \in \bV_{h0}\upc}\tnorm{\bE-\bv_h},$$ with $\lim_{h/(k+1) \to 0} \theta(h) = 0$.* ## Inf-sup stability We are now ready to establish our main stability result. **Theorem 11** (Inf-sup stability). *We have $$\label{eq:inf_sup_c} \min_{\substack{\bv_h \in \bV_{h0}\upc \\ \tnorm{\bv_h} = 1}} \max_{\substack{\bw_h \in \bV_{h0}\upc \\ \tnorm{\bw_h} = 1}} |b(\bv_h,\bw_h)| \geq \frac{1-2(\gdivc^2+\gpc)}{1 + 2\bst}.$$* *Proof.* We adapt to the discrete setting the arguments of the proof of Lemma [Lemma 2](#lem:infsup){reference-type="ref" reference="lem:infsup"}. Let $\bv_h \in \bV_{h0}\upc$ and set $\bv_h=\bv_{h0}+\bv_{h\Pi}$ with $\bv_{h0}\eqq (I-\bPi\upc_{h0})(\bv_h) \in \bX\upc_{h0}$ and $\bv_{h\Pi}\eqq \bPi\upc_{h0}(\bv_h)$. \(1\) In this first step, we gain control on $\omega\|\bv_{h0}\|_\eps$. Since $\bv_{h0}$ is (loosely speaking) discretely divergence-free, but not pointwise divergence-free, we need to consider a further decomposition of $\bv_{h0}$. Let us set $\bv_{h0}=\bphi_0+\bphi_\Pi$ with $\bphi_0\eqq (I-\bPi\upc_0)(\bv_{h0})$ and $\bphi_\Pi\eqq \bPi\upc_0(\bv_{h0})$.
Notice that $$\omega\|\bphi_0\|_\eps \le \omega\|\bv_{h0}\|_\eps \le \omega\|\bv_{h}\|_\eps \le \tnorm{\bv_h}.$$ Let $\bxi_0\in \bX\upc_0$ be the unique adjoint solution such that $b(\bw,\bxi_0)=\omega^2(\bw,\bphi_0)_\eps$ for all $\bw\in \bV_0\upc$ (notice that $\bphi_0 \in\Hrotzrotz^\perp$). We have $$\tnorm{\bxi_0} \le \bst \omega\|\bphi_0\|_\eps\le \bst \omega \|\bv_{h0}\|_\eps.$$ Let us set $\bxi_{h0}\eqq \bestc(\bxi_0)$. Then $\bxi_{h0}\in \bV_{h0}\upc$ by definition, and $\bxi_{h0}\in \bV\upc_{h0}(\ceqz)^\perp$ by [\[eq:useful_pty\]](#eq:useful_pty){reference-type="eqref" reference="eq:useful_pty"} since $\bxi_0\in \Hrotzrotz^\perp$. Moreover, owing to [\[eq:stab_best\]](#eq:stab_best){reference-type="eqref" reference="eq:stab_best"}, we have $\tnorm{\bxi_{h0}} \le \tnorm{\bxi_0}$. Using these properties, we infer that $$\begin{aligned} b(\bv_h,\bxi_{h0}) &= b(\bv_{h0},\bxi_{h0})+b(\bv_{h\Pi},\bxi_{h0}) = b(\bv_{h0},\bxi_{h0}) \\ &=b(\bv_{h0},\bxi_{0})+b(\bv_{h0},\bxi_{h0}-\bxi_0) \\ &\ge \omega^2\|\bphi_0\|_\eps^2 - \gpc \tnorm{\bv_h}^2,\end{aligned}$$ since $b(\bv_{h0},\bxi_{0})=\omega^2(\bv_{h0},\bphi_0)_\eps=\omega^2\|\bphi_0\|_\eps^2$ and $$|b(\bv_{h0},\bxi_{h0}-\bxi_0)| \le \tnorm{\bv_{h0}}\, \tnorm{\bxi_{h0}-\bxi_0} \le \tnorm{\bv_{h0}}\, \gpc \omega\|\bphi_0\|_\eps \le \gpc\tnorm{\bv_h}^2,$$ owing to the boundedness of the bilinear form $b$, the above bound on $\omega\|\bphi_0\|_\eps$, and since $\tnorm{\bv_{h0}}\le\tnorm{\bv_h}$. 
Moreover, using the divergence conformity factor, we infer that $$\omega^2\|\bphi_\Pi\|_\eps^2 = \omega^2 \|\bPi\upc_0(\bv_{h0})\|_\eps^2 \le \gdivc^2\|\ROTZ\bv_{h0}\|_\bnu^2 \le \gdivc^2 \tnorm{\bv_h}^2.$$ Since $\|\bv_{h0}\|_\eps^2=\|\bphi_0\|_\eps^2+\|\bphi_\Pi\|_\eps^2$, putting everything together gives $$\label{eq:lower_infsup_c} b(\bv_h,\bxi_{h0}) \ge \omega^2\|\bv_{h0}\|_\eps^2 - (\gdivc^2+\gpc)\tnorm{\bv_h}^2.$$ \(2\) Since $b(\bv_h,\bv_{h0}-\bv_{h\Pi})=\tnorm{\bv_h}^2 - 2\omega^2\|\bv_{h0}\|_\eps^2$, using [\[eq:lower_infsup_c\]](#eq:lower_infsup_c){reference-type="eqref" reference="eq:lower_infsup_c"} yields $$b(\bv_h,\bv_{h0}+2\bxi_{h0}-\bv_{h\Pi}) \ge \tnorm{\bv_h}^2 -2(\gdivc^2+\gpc)\tnorm{\bv_h}^2.$$ Moreover, using the same arguments as in the proof of Lemma [Lemma 2](#lem:infsup){reference-type="ref" reference="lem:infsup"} and recalling that $\tnorm{\bxi_{h0}}\le \tnorm{\bxi_0}$, we obtain $$\tnorm{\bv_{h0}+2\bxi_{h0}-\bv_{h\Pi}}^2 = \tnorm{\bv_{h0}+2\bxi_{h0}}^2 + \tnorm{\bv_{h\Pi}}^2 \le (1+2\bst)^2 \tnorm{\bv_h}^2.$$ Since $\bv_{h0}+2\bxi_{h0}-\bv_{h\Pi}\in \bV\upc_{h0}$, this concludes the proof. ◻ **Remark 12** (Discrete inf-sup constant). *Since $\gpc$ and $\gdivc$ tend to zero as the mesh size decreases and/or the polynomial degree increases, the discrete inf-sup constant appearing on the left-hand side of [\[eq:inf_sup_c\]](#eq:inf_sup_c){reference-type="eqref" reference="eq:inf_sup_c"} tends to $(1+2\bst)^{-1}$.
Recall from Remark [Remark 3](#rem:exact_infsup){reference-type="ref" reference="rem:exact_infsup"} that this quantity corresponds, up to a factor of two at most, to the inf-sup constant of the bilinear form $b$ in the continuous setting.* # Bound on approximation and divergence conformity factors {#sec:bnd_app_fac} In this section, we bound the two (nondimensional) quantities introduced in Section [3.3](#sec:def_factors_c){reference-type="ref" reference="sec:def_factors_c"} and used in Section [4](#sec:conforming){reference-type="ref" reference="sec:conforming"}: the approximation factor $\gpc$ and the divergence conformity factor $\gdivc$. For this purpose, we consider the commuting quasi-interpolation operators $\calJ\upc_{h0}: \bL^2(\Dom) \to \bV\upc_{h0}$ and $\calJ\upd_{h0}: \bL^2(\Dom) \to \bV\upd_{h0}$ (the Raviart--Thomas finite element space of order $k\ge0$ satisfying zero normal boundary conditions); see [@Schoberl:01; @ArnFW:06; @Christiansen:07; @ChrWi:08] and also [@EG_volI Chap. 22-23]. Both operators are bounded in $\bL^2(\Dom)$, they are projections, and they satisfy the commuting property $\ROT(\calJ\upc_{h0}(\bv))=\calJ\upd_{h0}(\ROT\bv)$ for all $\bv\in\Hrotz$. For positive real numbers $A$ and $B$, we abbreviate as $A\lesssim B$ the inequality $A\le CB$ with a generic constant $C$ whose value can change at each occurrence as long as it is independent of the mesh size, the frequency parameter $\omega$, and, whenever relevant, any function involved in the bound. The constant $C$ can depend on the shape-regularity of the mesh and the polynomial degree $k$ as well as on the domain $\Omega$ and on the coefficients $\eps$ and $\bnu$.
We introduce the notation $$\epsmax := \max_{\bx \in \Dom} \max_{\substack{\bu \in \mathbb R^d \\ |\bu| = 1}} \max_{\substack{\bv \in \mathbb R^d \\ |\bv| = 1}} \eps(\bx) \bu \cdot \bv \qquad \epsmin := \min_{\bx \in \Dom} \min_{\substack{\bu \in \mathbb R^d \\ |\bu| = 1}} \eps(\bx) \bu \cdot \bu$$ and define $\numax$ and $\numin$ similarly. Then, $\velmin = \sqrt{\numin/\epsmax}$ stands for the minimum wavespeed in the domain. ## Piecewise smooth coefficients For the sake of simplicity, we start by assuming that the coefficients are piecewise smooth in $\Dom$. Then, the following regularity results from [@costabel_dauge_nicaise_1999a; @Jochmann_maxwell_1999; @BoGuL:13] will be useful: there exists $s \in (0,1]$ such that, for all $\bv \in \Hrotz$ with $\DIV(\eps\bv) = 0$ and all $\bw \in \Hdivzdivz$ with $\bnu \bw \in \Hrot$, we have $\bv,\bw \in \bH^s(\Dom)$ with the estimates $$\label{eq:extra_regularity} |\bv|_{\bH^s(\Dom)} \lesssim \ell_\Dom^{1-s} \numin^{-\frac12}\|\ROTZ \bv\|_\bnu, \qquad |\bw|_{\bH^s(\Dom)} \lesssim \ell_\Dom^{1-s} \frac{1}{\velmin} \numin^{-\frac12} \|\ROT (\bnu \bw)\|_{\eps^{-1}}.$$ If $\Dom$ is convex and $\eps$ and $\bnu$ are (globally) Lipschitz continuous, we can take $s = 1$. **Lemma 13** (Bound on approximation factor). *Let $\gpc$ be defined in [\[eq:def_gamma_c\]](#eq:def_gamma_c){reference-type="eqref" reference="eq:def_gamma_c"}. The following holds: $$\label{eq:regularity_c} \gpc \lesssim (1+\bst)\left (\frac{\omega\ell_\Dom}{\velmin}\right )^{1-s} \left (\frac{\omega h}{\velmin}\right )^s,$$ with the stability constant $\bst$ defined in [\[eq:def_bst\]](#eq:def_bst){reference-type="eqref" reference="eq:def_bst"}.* *Proof.* Let $\btheta \in \Hrotzrotz^\perp$ and let $\bzeta_\btheta \in \bX\upc_0$ solve the adjoint problem [\[eq:adjoint\]](#eq:adjoint){reference-type="eqref" reference="eq:adjoint"}. 
On the one hand, invoking [\[eq:extra_regularity\]](#eq:extra_regularity){reference-type="eqref" reference="eq:extra_regularity"}, using the stability constant $\bst$, we infer that $$|\bzeta_\btheta|_{\bH^s(\Dom)} \lesssim \ell_\Dom^{1-s} \numin^{-\frac12}\|\ROTZ \bzeta_\btheta\|_\bnu \le \ell_\Dom^{1-s} \numin^{-\frac12}\tnorm{\bzeta_\btheta} \lesssim \bst \ell_\Dom^{1-s} \omega^{-1} \numin^{-\frac12}\|\btheta\|_\eps.$$ Invoking the approximation properties of $\calJ\upc_{h0}$ leads to $$\begin{aligned} \omega^2 \|\bzeta_\btheta-\calJ\upc_{h0}(\bzeta_\btheta)\|_\eps &\leq \omega^2\epsmax^{\frac12} \|\bzeta_\btheta-\calJ\upc_{h0}(\bzeta_\btheta)\| \nonumber \\ & \lesssim \omega^2 h^s \epsmax^{\frac12}|\bzeta_\btheta|_{\bH^s(\Dom)} \lesssim \bst \left (\frac{\omega\ell_\Dom}{\velmin}\right )^{1-s} \left (\frac{\omega h}{\velmin}\right )^s \|\btheta\|_\eps. \label{tmp_gpc_l2}\end{aligned}$$ On the other hand, we have $\eps^{-1} \ROT(\bnu\ROTZ\bzeta_\btheta) = \btheta + \omega^2 \bzeta_\btheta$, so that $$\|\ROT(\bnu\ROTZ\bzeta_\btheta)\|_{\eps^{-1}} = \|\eps^{-1} \ROT(\bnu\ROTZ\bzeta_\btheta)\|_{\eps} \leq \|\btheta\|_\eps + \omega^2 \|\bzeta_\btheta\|_\eps \leq (1+\bst)\|\btheta\|_\eps.$$ Since $\bw\eqq \ROTZ \bzeta_\btheta \in \Hdivzdivz$ with $\bnu\bw\in\Hrot$, we can again invoke [\[eq:extra_regularity\]](#eq:extra_regularity){reference-type="eqref" reference="eq:extra_regularity"}, giving $$|\ROTZ\bzeta_\btheta|_{\bH^s(\Dom)} \lesssim \ell_\Dom^{1-s} \frac{\numin^{-\frac12}}{\velmin} \|\ROT(\bnu\ROTZ\bzeta_\btheta)\|_{\eps^{-1}} \leq (1+\bst) \ell_\Dom^{1-s} \frac{\numin^{-\frac12}}{\velmin} \|\btheta\|_\eps.$$ Owing to the commuting property $\ROTZ\calJ\upc_{h0}(\SCAL)=\calJ\upd_{h0}(\ROTZ\SCAL)$ where $\calJ\upd_{h0}$ is the commuting quasi-interpolation operator mapping onto the Raviart--Thomas finite element space with zero normal component on the boundary, we infer that $$\begin{aligned} \omega \|\ROTZ(\bzeta_\btheta-\calJ\upc_{h0}(\bzeta_\btheta))\|_\bnu 
&= \omega \|\ROTZ\bzeta_\btheta-\calJ\upd_{h0}(\ROTZ\bzeta_\btheta)\|_\bnu \nonumber \\ &\lesssim \omega \numax^{\frac12} h^s|\ROTZ\bzeta_\btheta|_{\bH^s(\Dom)} \nonumber \\ &\lesssim (1+\bst) \left (\frac{\omega \ell_\Dom}{\velmin}\right )^{1-s} \left (\frac{\omega h}{\velmin}\right )^{s} \|\btheta\|_\eps. \label{tmp_gpc_rot}\end{aligned}$$ (Recall that the ratio $\numax/\numin$ can be hidden in the generic constant $C$.) The conclusion follows from [\[tmp_gpc_l2\]](#tmp_gpc_l2){reference-type="eqref" reference="tmp_gpc_l2"} and [\[tmp_gpc_rot\]](#tmp_gpc_rot){reference-type="eqref" reference="tmp_gpc_rot"}. ◻ **Lemma 14** (Bound on divergence conformity factor). *[\[lemma_gdivc_smooth\]]{#lemma_gdivc_smooth label="lemma_gdivc_smooth"} Let $\gdivc$ be defined in [\[eq_gamma_bX\]](#eq_gamma_bX){reference-type="eqref" reference="eq_gamma_bX"}. The following holds: $$\gdivc \lesssim \left (\frac{\omega\ell_\Dom}{\velmin}\right )^{1-s} \left (\frac{\omega h}{\velmin}\right )^s.$$* *Proof.* (1) Let $\bv_h \in \bX\upc_{h0}= \bV\upc_{h0}\cap \bV\upc_{h0}(\ceqz)^\perp$. Let us write $$\bv_h = \bw + \bPi\upc_0(\bv_h),$$ with $\bw \eqq (I-\bPi\upc_0)(\bv_h)$. By construction, $\bw\in \Hrotzrotz^\perp$, and we have $\bw\in \Hrotz$ since $\bv_h\in \bV\upc_{h0} \subset \Hrotz$; hence, $\bw\in \bX\upc_0$. Invoking [\[eq:extra_regularity\]](#eq:extra_regularity){reference-type="eqref" reference="eq:extra_regularity"} and observing that $\ROTZ \bw = \ROTZ \bv_h$, we infer that $$|\bw|_{\bH^s(\Dom)} \lesssim \ell_\Dom^{1-s} \numin^{-\frac12} \|\ROTZ\bv_h\|_\bnu.$$ Moreover, we have $\bPi\upc_{h0}(\bPi\upc_0(\bv_h)) = \bPi\upc_{h0}(\bv_h) = \bzero$ since $\bv_h\in \bV\upc_{h0}(\ceqz)^\perp$; hence, $\bPi\upc_0(\bv_h)\in \bV\upc_{h0}(\ceqz)^\perp$ as well. \(2\) Recall the commuting quasi-interpolation operator $\calJ\upc_{h0}: \bL^2(\Dom) \to \bV\upc_{h0}$. 
Since $\calJ\upc_{h0}$ leaves $\bV\upc_{h0}$ pointwise invariant, we have $(I-\calJ\upc_{h0})(\bv_h)=\bzero$, so that $$(I-\calJ\upc_{h0})(\bPi\upc_0(\bv_h)) = -(I - \calJ\upc_{h0})(\bw).$$ Moreover, since $\bPi\upc_0(\bv_h)$ is curl-free by construction, the commuting property of $\calJ\upc_{h0}$ implies that $$\calJ\upc_{h0}(\bPi\upc_0(\bv_h)) \in \bV\upc_{h0}(\ceqz).$$ Recalling that $\bPi\upc_0(\bv_h)\in \bV\upc_{h0}(\ceqz)^\perp$, we infer that $$\begin{aligned} \|\bPi\upc_0(\bv_h)\|_\eps^2 = (\bPi\upc_0(\bv_h),\bPi\upc_0(\bv_h))_\eps &= (\bPi\upc_0(\bv_h),\bPi\upc_0(\bv_h)-\calJ\upc_{h0}(\bPi\upc_0(\bv_h)))_\eps \\ &= -(\bPi\upc_0(\bv_h),\bw-\calJ\upc_{h0}(\bw))_\eps.\end{aligned}$$ The Cauchy--Schwarz inequality together with the approximation properties of $\calJ\upc_{h0}$ gives $$\|\bPi\upc_0(\bv_h)\|_\eps \lesssim h^s \epsmax^{\frac12}|\bw|_{\bH^s(\Dom)},$$ and we conclude using the above bound on $|\bw|_{\bH^s(\Dom)}$. ◻ **Remark 15** (Bound on $\gdivc$). *The above proof can be rewritten as the following statement: $$\label{eq:gamma_calJ} \gdivc \le \gamma_\calJ := \sup_{\substack{\bw \in \bX\upc_0 \\ \|\ROTZ \bw\|_\bnu = 1}} \omega \|\bw-\calJ\upc_{h0}(\bw)\|_\eps \lesssim \left (\frac{\omega\ell_\Dom}{\velmin}\right )^{1-s} \left (\frac{\omega h}{\velmin}\right )^s.$$ This shows that $\gdivc$ is bounded by an approximation factor on $\bX\upc_0$ using the commuting quasi-interpolation operator $\calJ\upc_{h0}$. Notice that only the rightmost bound uses [\[eq:extra_regularity\]](#eq:extra_regularity){reference-type="eqref" reference="eq:extra_regularity"}.* **Remark 16** (Convex domain). *For a convex domain $\Dom$, the factors are bounded as $$\gpc \lesssim (1+\bst) \frac{\omega h}{\velmin}, \qquad \gdivc \lesssim \frac{\omega h}{\velmin}.$$ The quantity $(\omega h)/\velmin$ is inversely proportional to the (minimal) number of mesh elements per wavelength. It is therefore reasonable to assume that $\gdivc \lesssim 1$.
We also see that $\gpc$ is typically not bounded for all frequencies assuming a constant number of elements per wavelength, since $\bst$ can be large. This is the standard manifestation of dispersion errors, also known in this context as pollution effect. This is completely standard, and also happens in the (simpler) case of Helmholtz problems. It is interesting to notice that the constraint that $\gdivc$ is small, which is specific to Maxwell's equations, is less restrictive than the constraint that $\gpc$ is small, which is common to Maxwell and Helmholtz equations.* **Remark 17** (Reduced dispersion for high-order elements). *When the domain $\Dom$ and the coefficients are smooth, it is shown in [@ChaVe:22] that $$\gpc \lesssim \frac{\omega h}{\velmin} + (1+\bst) \left (\frac{\omega h}{\velmin}\right )^{(k+1)},$$ so that $\gpc$ is small if $$\label{eq_dof_reduction} \frac{\omega h}{\velmin} \lesssim \bst^{-1/(k+1)}.$$ For large frequencies (or frequencies close to resonant frequencies), $\bst$ becomes large, so that the number of elements per wavelength needs to be increased. Nevertheless, [\[eq_dof_reduction\]](#eq_dof_reduction){reference-type="eqref" reference="eq_dof_reduction"} expresses that the required increase is less important for higher order elements, which corresponds to numerical observations. It is also expected that such a result remains true for general domains and piecewise smooth coefficients if the mesh is suitably refined locally, but this claim has only been established for two-dimensional problems in [@chaumontfrelet_nicaise_2020a]. We finally refer the reader to [@melenk_sauter_2021] where stronger results explicit in the polynomial degree $k$ are established, but under stronger assumptions on the domain.* **Remark 18** ($k$ convergence). 
*When $s=1$, we can consider the interpolation operators from [@MelRo:20], instead of the quasi-interpolation operators $\calJ\upc_{h0}$ and $\calJ\upd_{h0}$ considered above, thereby showing that $\gpc$ and $\gdivc$ tend to zero (and optimally so) as $k$ is increased (and $h$ is fixed). More generally, a similar strategy can be employed as long as a piecewise version of the regularity shift in [\[eq:extra_regularity\]](#eq:extra_regularity){reference-type="eqref" reference="eq:extra_regularity"} holds true with $s > 1/2$. In such a case, we can use, e.g., the interpolation operators from [@DeBuffa05].* ## General coefficients Here, we consider general coefficients $\eps$ and $\bnu$ for which [\[eq:extra_regularity\]](#eq:extra_regularity){reference-type="eqref" reference="eq:extra_regularity"} may not hold for any $s > 0$. **Lemma 19** (Convergence of approximation factor). *We have $\gpc \to 0$ as $h \to 0$.* *Proof.* Let us set $$\begin{aligned} \calB_\eps &:= \left \{ \bv \in \Hrotz \; | \; \DIV(\eps\bv) = 0, \; \|\ROTZ \bv\|_{\bmu} \leq \bst \right \}, \\ \calB_\bnu &:= \left \{ \bw \in \Hdivzdivz \; | \; \|\ROT (\bnu\bw)\|_{\eps^{-1}} \leq 1+\bst \right \}.\end{aligned}$$ Owing to the definition [\[eq:def_bst\]](#eq:def_bst){reference-type="eqref" reference="eq:def_bst"} of the stability constant $\bst$, if $\btheta \in \Hrotzrotz^\perp$ with $\|\btheta\|_\eps = 1$ and $\bxi_\btheta \in \bV_0\upc$ satisfy $b(\bw,\bxi_\btheta) = (\bw,\btheta)_\eps$ for all $\bw \in \bV_0\upc$, we have $\bxi_\btheta \in \calB_\eps$ and $\bPhi_\btheta := \ROTZ \bxi_\btheta \in \calB_\bnu$. 
Owing to the compact injections established in [@Weber:80 Theorem 2.2], given any $\delta > 0$, there exists a finite number $N_\delta$ of functions $\bv_j \in \calB_\eps$ and $\bw_\ell \in \calB_\bnu$ such that, for all $\bv \in \calB_\eps$ and all $\bw \in \calB_\bnu$, there exist indices $j,\ell \in \{1{:} N_\delta\}$ such that $$\|\bv-\bv_j\|_\eps \leq \delta, \qquad \|\bw-\bw_\ell\|_\bnu \leq \delta.$$ Furthermore, the density of $\bC^\infty_{\rm c}(\Dom)$ in $\bL^2(\Dom)$ implies that we can find $\widetilde \bv_j,\widetilde \bw_\ell \in \bC^\infty_{\rm c}(\Dom)$ such that $$\|\bv-\widetilde \bv_j\|_\eps \leq 2\delta, \qquad \|\bw-\widetilde \bw_\ell\|_\bnu \leq 2\delta.$$ We then write that $$\min_{\bv\upc_h \in \bV_{h0}\upc} \tnorm{\bxi_{\btheta}-\bv\upc_h}^2 \leq \tnorm{\bxi_{\btheta}-\calJ\upc_{h0}(\bxi_{\btheta})}^2 = \omega^2\|\bxi_{\btheta}-\calJ\upc_{h0}(\bxi_{\btheta})\|^2_\eps + \|\bPhi_{\btheta}-\calJ\upd_{h0}(\bPhi_{\btheta})\|^2_\bnu.$$ Invoking the triangle inequality and the above bounds (with $\bv:=\bxi_{\btheta}$) gives $$\begin{aligned} \|\bxi_{\btheta}-\calJ\upc_{h0}(\bxi_{\btheta})\|_\eps &\leq \|\bxi_{\btheta}-\widetilde \bv_j\|_\eps + \|\widetilde \bv_j-\calJ\upc_{h0}(\widetilde \bv_j)\|_\eps + \|\calJ\upc_{h0}(\widetilde \bv_j-\bxi_{\btheta})\|_\eps \\ &\lesssim \|\bxi_{\btheta}-\widetilde \bv_j\|_{\eps} + \|\widetilde \bv_j-\calJ\upc_{h0}(\widetilde \bv_j)\|_\eps \leq 2\delta + \|\widetilde \bv_j-\calJ\upc_{h0}(\widetilde \bv_j)\|_\eps,\end{aligned}$$ where we used the $\bL^2$-stability of $\calJ\upc_{h0}$. 
Similarly, we obtain $$\|\bPhi_{\btheta}-\calJ\upd_{h0}(\bPhi_{\btheta})\|_\bnu \lesssim \|\bPhi_{\btheta}-\widetilde \bw_\ell\|_\bnu + \|\widetilde \bw_\ell-\calJ\upd_{h0}(\widetilde \bw_\ell)\|_\bnu \leq 2\delta + \|\widetilde \bw_\ell-\calJ\upd_{h0}(\widetilde \bw_\ell)\|_\bnu.$$ Since the functions $\widetilde \bv_j$ and $\widetilde \bw_\ell$ are finitely many and smooth, we can assume that, if the mesh is sufficiently refined, $$\|\widetilde \bv_j-\calJ\upc_{h0}(\widetilde \bv_j)\|_\eps \leq \delta, \qquad \|\widetilde \bw_\ell-\calJ\upd_{h0}(\widetilde \bw_\ell)\|_\bnu \leq \delta.$$ This completes the proof since $\delta>0$ is arbitrary. ◻ **Lemma 20** (Convergence of divergence conformity factor). *We have $\gdivc \to 0$ as $h \to 0$.* *Proof.* We use the bound $\gdivc \le \gamma_\calJ$ with $\gamma_\calJ$ defined in [\[eq:gamma_calJ\]](#eq:gamma_calJ){reference-type="eqref" reference="eq:gamma_calJ"}. Thus, it suffices to show that $\gamma_\calJ\to0$ as $h\to0$. We consider the unit ball $\calB := \{ \bw \in \Hrotz; \DIV(\eps\bw) = 0, \; \|\ROTZ\bw\|_\bnu \leq 1\}$. It is established in [@Weber:80 Theorem 2.2] that the embedding $\Hrotz \cap \Hdiveps \hookrightarrow \bL^2(\Dom)$ is compact. As a result, given any $\delta > 0$, there exists a finite number $N_\delta$ of elements $\bv_j \in \calB$ such that, for all $\bw \in \calB$, $\|\bw-\bv_j\|_\eps \leq \delta$ for some index $j\in \{1{:}N_\delta\}$. Moreover, since $\bC^\infty_{\rm c}(\Dom)$ is dense in $\bL^2(\Dom)$, for each $j\in \{1{:}N_\delta\}$, there exists $\widetilde \bv_j \in \bC^\infty_{\rm c}(\Dom)$ such that $\|\bv_j-\widetilde \bv_j\|_\eps \leq \delta$. We have therefore shown that for all $\bw \in \calB$, there exists an index $j\in\{1{:}N_\delta\}$ such that $$\|\bw-\widetilde \bv_j\|_\eps \leq 2\delta.$$ We can now conclude by using the same arguments as in the above proof (notice that $\{ \bw \in \bX\upc_0, \; \|\ROTZ\bw\|_\bnu \leq 1\} \subset \calB$). ◻ [^1]: Inria Univ. 
Lille and Laboratoire Paul Painlevé, 59655 Villeneuve-d'Ascq, France [^2]: CERMICS, Ecole des Ponts, 77455 Marne-la-Vallée Cedex 2, France and Inria Paris, 75589 Paris, France
{ "id": "2309.14189", "title": "Asymptotic optimality of the edge finite element approximation of the\n time-harmonic Maxwell's equations", "authors": "T. Chaumont-Frelet and A. Ern", "categories": "math.NA cs.NA math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider operators acting in $L^2(\mathbb{R}^d)$ with $d\geq3$ that locally behave as a magnetic Schrödinger operator. For the magnetic Schrödinger operators we suppose the magnetic potentials are smooth and the electric potential is twice differentiable with Hölder continuous second derivatives. Under these assumptions we establish sharp spectral asymptotics for localised counting functions and Riesz means. author: - Søren Mikkelsen bibliography: - Bib_paperB.bib title: Sharp semiclassical spectral asymptotics for local magnetic Schrödinger operators on $\mathbb{R}^d$ without full regularity. --- # Introduction We will here consider sharp semiclassical spectral asymptotics for operators $\mathcal{H}_{\hbar,\mu}$ that locally are given by a magnetic Schrödinger operator acting in $L^2(\mathbb{R}^d)$ for $d\geq3$. What we precisely mean by "locally given by" will be clarified below. That is, we consider operators that locally are of the form $$\label{def_op_intro} H_{\hbar,\mu} = (-i\hbar\nabla - \mu a)^2 + V,$$ where $\hbar\in(0,1]$ is the semiclassical parameter, $\mu\geq0$ is the intensity of the magnetic field, $a$ is the magnetic vector potential and $V$ is the electric potential. Our exact assumptions on the potentials and intensity $\mu$ will be stated below. We will here for $\gamma\in[0,1]$ be interested in the asymptotics as $\hbar$ goes to zero of the following traces $$\label{traces_to_consider} \mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})],$$ where $\varphi\in C_0^\infty(\mathbb{R}^d)$. The function $g_\gamma$ is given by $$g_\gamma(t) = \begin{cases} \boldsymbol{1}_{(-\infty,0]}(t) &\gamma=0 \\ (t)_{-}^\gamma &\gamma\in(0,1], \end{cases}$$ where we have used the notation $(x)_{-} = \max(0,-x)$ and $\boldsymbol{1}_{(-\infty,0]}$ is the characteristic function for the set $(-\infty,0]$. 
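Concretely, $g_\gamma$ is elementary to implement; the following Python sketch (written for illustration, not taken from the paper) may help readers who want to experiment numerically with the traces considered below.

```python
def g(gamma, t):
    """The function g_gamma from the introduction: the indicator of
    (-infty, 0] for gamma = 0, and (t)_-^gamma for gamma in (0, 1],
    where (t)_- = max(0, -t)."""
    if gamma == 0:
        return 1.0 if t <= 0 else 0.0
    return max(0.0, -t) ** gamma
```

For instance, `g(1, -3.0)` evaluates to `3.0`, while `g(0, 1.0)` evaluates to `0.0`.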
To ensure that the leading order term in the asymptotics is independent of the magnetic field we will assume that $\hbar\mu\leq C$, where $C$ is some positive constant. Understanding these localised traces is a crucial step towards understanding the global quantity $$\label{traces_to_consider_global} \mathop{\mathrm{Tr}}[g_\gamma(H_{\hbar,\mu})].$$ Especially the case $\gamma=1$ has physical motivation both with and without a magnetic vector potential. For details see e.g. [@MR1272387; @MR1266071; @PhysRevLett.69.749; @MR1181242; @MR428944; @MR2013804]. The case $\gamma=0$ is also of interest. Recently in [@MR4182014] sharp estimates for the trace norm of commutators between spectral projections and position and momentum operators were obtained using asymptotics for [\[traces_to_consider\]](#traces_to_consider){reference-type="eqref" reference="traces_to_consider"} for $\gamma=0$. This type of bound first appeared as an assumption in [@MR3248060], where the mean-field evolution of fermionic systems was studied. The assumption has also appeared in [@MR3570479; @MR3202863; @MR3381147; @MR3461406; @MR4009687; @MR4602009]. The asymptotics used in [@MR4182014] were obtained in [@MR1343781]. Before we state our main result we will specify our assumptions on the operator $\mathcal{H}_{\hbar,\mu}$ and what we mean by "locally given by a magnetic Schrödinger operator". That we only locally assume $\mathcal{H}_{\hbar,\mu}$ is acting as a magnetic Schrödinger operator is due to the presence of the cut-off function. This type of assumption first appeared in [@MR1343781], to the knowledge of the author. Our exact assumptions are given below. **Assumption 1**. Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$, where $\hbar>0$ and $\mu\geq0$. Moreover, let $\gamma\in[0,1]$. Suppose that 1. [\[G.L.1.1\]]{#G.L.1.1 label="G.L.1.1"} $\mathcal{H}_{\hbar,\mu}$ is self-adjoint and lower semibounded. 2. 
[\[G.L.1.2\]]{#G.L.1.2 label="G.L.1.2"} Suppose there exists an open set $\Omega\subset\mathbb{R}^d$ and real valued functions $V\in C^{2,\kappa}_0(\mathbb{R}^d)$ with $\kappa>\gamma$, $a_j\in C_0^\infty(\mathbb{R}^d)$ for $j\in\{1,\dots,d\}$ such that $C_0^\infty(\Omega)\subset\mathcal{D}(\mathcal{H})$ and $$\mathcal{H}_{\hbar,\mu} \varphi = H_{\hbar,\mu}\varphi \quad\text{for all $\varphi\in C_0^\infty(\Omega)$},$$ where $H_{\hbar,\mu}= (-i\hbar\nabla - \mu a)^2 + V$. In the assumption we have used the notation $C_0^{2,\kappa}(\mathbb{R}^d)$. This is the space of compactly supported functions that are twice differentiable with second derivatives that are uniformly Hölder continuous with parameter $\kappa$. That is, for $f\in C_0^{2,\kappa}(\mathbb{R}^d)$ there exists a constant $C>0$ such that for all $x,y\in\mathbb{R}^d$ it holds that $$\begin{aligned} |\partial_x^{\alpha} f(x) - \partial_x^{\alpha} f(y) | \leq C |x-y|^{\kappa} \quad\text{for all $\alpha \in\mathbb{N}_0^d$ with $\left| \alpha \right|=2$.} \end{aligned}$$ Note that we here and in the following are using the convention that $\mathbb{N}$ does not contain $0$ and we will use the notation $\mathbb{N}_0 = \mathbb{N}\cup\{0\}$. Moreover, for the cases where $\kappa>1$ we use the convention that $$C_0^{2,\kappa}(\mathbb{R}^d) \coloneqq C_0^{2 + \lfloor \kappa \rfloor, \kappa-\lfloor \kappa \rfloor}(\mathbb{R}^d),$$ where $C_0^{k,\kappa}(\mathbb{R}^d)$ is the space of compactly supported functions that are $k$ times differentiable and whose $k$-th derivatives are uniformly Hölder continuous with parameter $\kappa$. The assumptions we make on the operator $\mathcal{H}_{\hbar,\mu}$ are very similar to those made in [@MR1343781]. The difference is that we do not require $V$ to be smooth, but instead assume that it is twice differentiable with uniformly Hölder continuous second derivatives. With this assumption in place we can state our main result. **Theorem 2**. 
*Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$ and let $\gamma\in[0,1]$. If $\gamma=0$ we assume $d\geq3$ and if $\gamma\in(0,1]$ we assume $d\geq4$. Suppose that $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the set $\Omega$ and the functions $V$ and $a_j$ for $j\in\{1,\dots,d\}$. Then for any $\varphi\in C_0^\infty(\Omega)$ it holds that $$\Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V(x)) \varphi(x) \,dx dp \Big| \leq C \langle \mu \rangle^{1+\gamma} \hbar^{1+\gamma-d}$$ for all $\hbar\in(0,\hbar_0]$ and $\mu\leq C\hbar^{-1}$, where $\hbar_0$ is sufficiently small. Here we use the notation $\langle \mu \rangle=(1+\mu^2)^{\frac12}$. The constant $C$ depends on the dimension $d$, on $\gamma$, on $\lVert \varphi \rVert_{L^\infty(\mathbb{R}^d)}$, on $\lVert \partial^\alpha_x \varphi \rVert_{L^\infty(\mathbb{R}^d)}$ and $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$, and on $\lVert \partial_x^\alpha V \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq 2$.* *Remark 3*. We remark that the error term is independent of $\lVert a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $j\in\{1,\dots,d\}$. This is also the case for the results in [@MR1343781]. As remarked in [@MR1343781], this is not surprising, as the magnitude of $a_j$ can easily be changed by a gauge transform. The assumptions on the dimension are needed to ensure convergence of certain integrals. As mentioned above, asymptotics in the case where $V\in C_0^\infty(\mathbb{R}^d)$ were obtained in [@MR1343781]. In [@MR1412359] sharp asymptotics were also obtained; however, the potential was allowed to be singular at the origin but otherwise smooth. 
In [@MR2179891] non-smooth potentials are also considered in the presence of a magnetic field. These results are also given in [@ivrii2019microlocal1 Vol IV]. In some cases the results presented in [@MR2179891] and [@ivrii2019microlocal1 Vol IV] require less smoothness than here. However, to the knowledge of the author, the results presented here do not appear in either [@MR2179891] or [@ivrii2019microlocal1 Vol IV]. In Section [2](#sec:Pre){reference-type="ref" reference="sec:Pre"} we specify the notation we use and describe the operators we will be working with. Moreover, we recall some definitions and results that we will need later. At the end of the section we describe how we approximate the non-smooth potential by a smooth potential. In Section [3](#sec:Rough_pseudo_diff_op){reference-type="ref" reference="sec:Rough_pseudo_diff_op"} we recall some results and definitions on rough $\hbar$-pseudo-differential operators. We also prove some specific results for rough Schrödinger operators. In Section [4](#sec:Aux_est){reference-type="ref" reference="sec:Aux_est"} we establish a number of estimates for operators satisfying Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"}. The ideas and techniques used here are inspired by those used in [@MR1343781]. Some of the results will also be taken directly from [@MR1343781]. These auxiliary results are needed to prove a version of the main theorem under an additional non-critical condition. This version is proven in Section [5](#sec:model_prob){reference-type="ref" reference="sec:model_prob"}. Finally, in Section [6](#sec:proof_main){reference-type="ref" reference="sec:proof_main"} we give the proof of the main theorem in two steps: first in the case where $\mu\leq\mu_0<1$ and then in the general case. 
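As a sanity check on the leading term in Theorem 2, the momentum integral can be evaluated in closed form: for $v<0$ one has $\int_{\mathbb{R}^d} g_\gamma(p^2+v)\,dp = \pi^{d/2}\,\Gamma(\gamma+1)/\Gamma(\gamma+1+\tfrac d2)\,(-v)^{\gamma+\frac d2}$, which for $\gamma=0$ is the volume of the ball of radius $\sqrt{-v}$. The following Python sketch (an illustration of the author of this note, not part of the paper's arguments) compares this closed form with a direct radial quadrature:

```python
import math

def radial_weyl_term(d, gamma, v, n=100_000):
    """Numerically evaluate int_{R^d} (|p|^2 + v)_-^gamma dp by reducing
    to a radial integral over [0, sqrt(-v)] (midpoint rule)."""
    if v >= 0:
        return 0.0
    R = math.sqrt(-v)
    # |S^{d-1}| = d * omega_d, with omega_d the volume of the unit ball
    surface = d * math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    h = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h          # midpoints satisfy r < R, so -(r^2+v) > 0
        total += (-(r * r + v)) ** gamma * r ** (d - 1)
    return surface * total * h

def closed_form(d, gamma, v):
    """pi^{d/2} Gamma(gamma+1)/Gamma(gamma+1+d/2) * (-v)^{gamma+d/2}."""
    return (math.pi ** (d / 2) * math.gamma(gamma + 1)
            / math.gamma(gamma + 1 + d / 2) * (-v) ** (gamma + d / 2))
```

For $d=3$, $\gamma=1$, $v=-1$ both expressions give $8\pi/15$.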
## Acknowledgement {#acknowledgement .unnumbered} The author is grateful to the Leverhulme Trust for their support via Research Project Grant 2020-037. # Preliminaries {#sec:Pre} We start by specifying some notation. For an open set $\Omega\subset\mathbb{R}^d$ we will in the following by $\mathcal{B}^\infty(\Omega)$ denote the space $$\mathcal{B}^\infty(\Omega) \coloneqq \big\{ \psi\in C^\infty(\Omega) \, \big| \, \lVert \partial^\alpha \psi \rVert_{L^\infty(\Omega) }<\infty \, \forall \alpha\in\mathbb{N}_0^d \big\}.$$ We will for an operator $A$ acting in a Hilbert space $\mathscr{H}$ denote the operator norm by $\lVert A \rVert_{\mathrm{op}}$ and the trace norm by $\lVert A \rVert_1$. Next we describe the operators we will be working with. If we have $a_j\in L^2_{loc}(\mathbb{R}^d)$ for all $j\in\{1,\dots,d\}$ then we can consider the following form $$\mathfrak{h}_0[f,g] = \sum_{j=1}^d \int_{\mathbb{R}^d} (-i\hbar \partial_{x_j} -\mu a_j(x))f(x) \overline{(-i\hbar \partial_{x_j} -\mu a_j(x))g(x)} \,dx \quad f,g\in \mathcal{D}[\mathfrak{h}_0]$$ for $\mu\geq0$ and $\hbar>0$, where $\mathcal{D}[\mathfrak{h}_0]$ is the domain for the form. Note that $C_0^\infty(\mathbb{R}^d)\subset \mathcal{D}[\mathfrak{h}_0]$. Moreover, this form is closable and lower semibounded (by zero); see [@MR526289] for details. Hence there exists a positive self-adjoint operator associated to the form (the Friedrichs extension). For details see e.g. [@MR0493420] or [@MR526289]. We will by $\mathcal{Q}_j$ denote the square root of this operator. 
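As an aside, the gauge invariance mentioned in Remark 3 can be seen directly on the form $\mathfrak{h}_0$: replacing $a$ by $a+\nabla\chi$ and $f$ by $e^{i\mu\chi/\hbar}f$ leaves $\mathfrak{h}_0[f,f]$ unchanged. A one-dimensional numerical sketch (every concrete choice of $f$, $a$, $\chi$, $\hbar$, $\mu$ below is purely for illustration):

```python
import cmath
import math

hbar, mu = 0.5, 2.0

def a(x):    return math.sin(x)      # vector potential
def chi(x):  return math.cos(x)      # gauge function
def dchi(x): return -math.sin(x)     # chi' cancels a: a + chi' = 0

def f(x):  return math.exp(-x * x)   # test function in the form domain
def df(x): return -2.0 * x * math.exp(-x * x)

def g(x):  # gauge-transformed function e^{i mu chi / hbar} f
    return cmath.exp(1j * mu * chi(x) / hbar) * f(x)

def dg(x):  # derivative of the gauge-transformed function
    return cmath.exp(1j * mu * chi(x) / hbar) * (
        1j * mu * dchi(x) / hbar * f(x) + df(x))

def form(u, du, A, n=4000, L=8.0):
    """Midpoint quadrature of int |(-i hbar u' - mu A u)|^2 dx in 1-d."""
    h = 2.0 * L / n
    s = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        s += abs(-1j * hbar * du(x) - mu * A(x) * u(x)) ** 2 * h
    return s

val_original = form(f, df, a)
val_gauged = form(g, dg, lambda x: a(x) + dchi(x))
# the two values agree up to floating-point rounding
```

Here $\chi'=-\sin$ removes $a=\sin$ entirely, so the transformed vector potential vanishes while the form value is preserved, illustrating that the magnitude of $a$ carries no invariant meaning.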
When we also have a potential $V\in L^\infty(\mathbb{R}^d)$ we can define the operator $H_{\hbar,\mu}$ as the Friedrichs extension of the quadratic form $$\mathfrak{h}[f,g] = \int_{\mathbb{R}^d} \sum_{j=1}^d(-i\hbar \partial_{x_j} -\mu a_j(x))f(x) \overline{(-i\hbar \partial_{x_j} -\mu a_j(x))g(x)} + V(x)f(x)\overline{g(x)} \,dx \quad f,g\in \mathcal{D}[\mathfrak{h}]$$ for $\mu\geq0$ and $\hbar>0$, where $\mathcal{D}[\mathfrak{h}]$ is the domain for the form. This construction gives us that $H_{\hbar,\mu}$ is self-adjoint and lower semibounded. Again, for details see e.g. [@MR0493420] or [@MR526289]. When working with the Fourier transform we will use the following semiclassical version for $\hbar>0$ $$\mathcal{F}_\hbar[ \varphi ](p) \coloneqq \int_{\mathbb{R}^d} e^{-i\hbar^{-1} \langle x,p \rangle}\varphi(x) \, dx,$$ with inverse given by $$\mathcal{F}_\hbar^{-1}[\psi] (x) \coloneqq \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^d} e^{i\hbar^{-1} \langle x,p \rangle}\psi(p) \, dp,$$ where $\varphi$ and $\psi$ are elements of $\mathcal{S}(\mathbb{R}^d)$. Here $\mathcal{S}(\mathbb{R}^d)$ denotes the Schwartz space. Some of the results below hold for a larger class of functions containing $g_\gamma$. These classes were first defined in [@MR1343781; @MR1272980] and we recall the definition here. **Definition 4**. A function $g\in C^\infty(\mathbb{R}\setminus\{0\})$ is said to belong to the class $C^{\infty,\gamma}(\mathbb{R})$, $\gamma\in[0,1]$, if $g\in C(\mathbb{R})$ for $\gamma>0$ and, for some constants $C >0$ and $r>0$, it holds that $$\begin{aligned} g(t) &= 0, \qquad\text{for all $t\geq C$} \\ |\partial_t^m g(t)| &\leq C_m |t|^r, \qquad\text{for all $m\in\mathbb{N}_0$ and $t\leq -C$} \\ |\partial_t^m g(t)| &\leq \begin{cases} C_m & \text{if $\gamma=0,1$} \\ C_m|t|^{\gamma-m} &\text{if $\gamma\in(0,1)$} \end{cases}, \qquad\text{for all $m\in\mathbb{N}$ and $t\in [ -C,C]\setminus\{0\}$}. 
\end{aligned}$$ A function $g$ is said to belong to $C^{\infty,\gamma}_0(\mathbb{R})$ if $g\in C^{\infty,\gamma}(\mathbb{R})$ and $g$ has compact support. We will in our analysis need different ways of expressing functions of self-adjoint operators. One of these is the Helffer-Sjöstrand formula. Before we state it we will recall the definition of an almost analytic extension. **Definition 5** (Almost analytic extension). For $f\in C_0^\infty(\mathbb{R})$ we call a function $\tilde{f} \in C_0^\infty(\mathbb{C})$ an almost analytic extension if it has the properties $$\begin{aligned} |\bar{\partial} \tilde{f}(z)| &\leq C_n |\mathop{\mathrm{\mathrm{Im}}}(z)|^n, \qquad \text{for all $n\in\mathbb{N}_0$} \\ \tilde{f}(t)&=f(t) \qquad \text{for all $t\in\mathbb{R}$}, \end{aligned}$$ where $\bar{\partial} = \frac12 (\partial_x +i\partial_y)$. For how to construct the almost analytic extension for a given $f\in C_0^\infty(\mathbb{R})$ see e.g. [@MR2952218; @MR1735654]. The following theorem is a simplified version of a theorem in [@MR1349825]. **Theorem 6** (The Helffer-Sjöstrand formula). *Let $H$ be a self-adjoint operator acting on a Hilbert space $\mathscr{H}$ and $f$ a function from $C_0^\infty(\mathbb{R})$. Then the bounded operator $f(H)$ is given by the equation $$f(H) =- \frac{1}{\pi} \int_\mathbb{C}\bar{\partial }\tilde{f}(z) (z-H)^{-1} \, L(dz),$$ where $L(dz)=dxdy$ is the Lebesgue measure on $\mathbb{C}$ and $\tilde{f}$ is an almost analytic extension of $f$.* ## Approximation of the potential In our analysis we will need to approximate the potential with a smooth potential. How we choose this approximation is the content of the next lemma. **Lemma 7**. *Let $V\in C_0^{k,\kappa}(\mathbb{R}^d)$ be real valued, where $k\in\mathbb{N}_0$ and $\kappa\in[0,1]$. 
Then for all $\varepsilon >0$ there exists a rough potential $V_\varepsilon \in C_0^{\infty}(\mathbb{R}^d)$ such that $$\label{EQ:rough_potential_local} \begin{aligned} \big| \partial_x^\alpha V(x) - \partial_x^\alpha V_\varepsilon(x)\big| \leq C_\alpha \varepsilon^{k+\kappa - |\alpha|} \quad\text{for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq k$} \\ \big| \partial_x^\alpha V_\varepsilon(x)\big| \leq C_\alpha \varepsilon^{k+\kappa - |\alpha|} \quad\text{for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|> k$}, \end{aligned}$$ where the constants $C_\alpha$ are independent of $\varepsilon$ but depend on $\lVert \partial^\beta V \rVert_{L^\infty(\mathbb{R}^d)}$ for $\beta\in\mathbb{N}_0^d$ with $|\beta|\leq \min(|\alpha|,k)$. Moreover, suppose that for some open set $\Omega$ and a constant $c>0$ it holds that $$|V(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in\Omega$}.$$ Then there exists a constant $\tilde{c}$ such that for all $\varepsilon$ sufficiently small it holds that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq \tilde{c} \qquad\text{for all $x\in \Omega$}.$$* *Proof.* A proof of the estimates in [\[EQ:rough_potential_local\]](#EQ:rough_potential_local){reference-type="eqref" reference="EQ:rough_potential_local"} can be found in either [@MR1974450 Proposition 1.1] or [@ivrii2019microlocal1 Proposition 4.A.2]. The second part of the lemma is a direct consequence of the estimates in [\[EQ:rough_potential_local\]](#EQ:rough_potential_local){reference-type="eqref" reference="EQ:rough_potential_local"}. To see this note that $$|V_\varepsilon(x) -V(x)| \leq C_0 \varepsilon^{k+\kappa} \implies |V_\varepsilon(x)| \geq |V(x)| - C_0 \varepsilon^{k+\kappa}.$$ Hence for $C_0\varepsilon^{k+\kappa} < \frac{c}{2}$ we obtain the desired estimate. This concludes the proof. ◻ In the following we will call potentials depending on the parameter $\varepsilon$ rough potentials. *Remark 8*. 
Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$ and assume it satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with some open set $\Omega$, numbers $\hbar>0$, $\mu\geq0$ and $\gamma\in[0,1]$. Whenever we have such an operator we have by assumption the associated magnetic Schrödinger operator $H_{\hbar,\mu}= (-i\hbar\nabla - \mu a)^2 + V$, where $V\in C_0^{2,\kappa}(\mathbb{R}^d)$. Applying Lemma [Lemma 7](#LE:rough_potential_local){reference-type="ref" reference="LE:rough_potential_local"} to $V$ we can also associate the approximating rough Schrödinger operator $H_{\hbar,\mu,\varepsilon}= (-i\hbar\nabla - \mu a)^2 + V_\varepsilon$ to $\mathcal{H}_{\hbar,\mu}$. In what follows, when an operator $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"}, we will simply speak of the associated rough Schrödinger operator $H_{\hbar,\mu,\varepsilon}$. This will always be the operator we get from replacing $V$ by $V_\varepsilon$ from Lemma [Lemma 7](#LE:rough_potential_local){reference-type="ref" reference="LE:rough_potential_local"}. One thing to observe is that sharp spectral asymptotics without full regularity are often proved by comparing quadratic forms. See e.g. [@MR1974450; @MR2179891; @MR1631419; @MR1974451; @MR2105486; @mikkelsen2022optimal]. This is due to the observation that if we have an operator $A(\hbar)$ and two approximating or framing operators $A^{\pm}(\hbar)$ such that $$A^{-}(\hbar) \leq A(\hbar) \leq A^{+}(\hbar)$$ in the sense of quadratic forms. 
Then, by the min-max theorem, we obtain the relation $$\label{EQ:com_quad_tr} \mathop{\mathrm{Tr}}[\boldsymbol{1}_{(-\infty,0]}(A^{+}(\hbar))] \leq \mathop{\mathrm{Tr}}[\boldsymbol{1}_{(-\infty,0]}(A(\hbar)) ] \leq \mathop{\mathrm{Tr}}[\boldsymbol{1}_{(-\infty,0]}(A^{-}(\hbar))].$$ The aim is then to choose the approximating operators such that sharp asymptotics can be obtained for these and then use [\[EQ:com_quad_tr\]](#EQ:com_quad_tr){reference-type="eqref" reference="EQ:com_quad_tr"} to deduce it for the original operator $A(\hbar)$. In the situation currently under consideration we also have a localisation. This implies that we cannot obtain a relation like [\[EQ:com_quad_tr\]](#EQ:com_quad_tr){reference-type="eqref" reference="EQ:com_quad_tr"} from the min-max theorem. What we will do instead is estimate the difference directly and prove that the trace for our original problem is sufficiently close to the trace with the approximation inserted. # Rough $\hbar$-pseudo-differential operators {#sec:Rough_pseudo_diff_op} Our proof is based on the theory of $\hbar$-pseudo-differential operators ($\hbar$-$\Psi$DO's). To be precise we will need a rough version of the general theory. We will here recall properties and results concerning rough $\hbar$-$\Psi$DO's. A more complete discussion of these operators can be found in [@mikkelsen2022optimal]. A version of rough $\hbar$-$\Psi$DO theory can be found in [@ivrii2019microlocal1]; it first appears there in Vol. 1, Section 2.3. 
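Before turning to the symbol classes, we note that the smoothing in Lemma 7 is, as in the references cited in its proof, obtained by mollification: convolving $V$ with a rescaled bump function. The following one-dimensional Python sketch (an illustration only, using $k=0$, $\kappa=1$, i.e. a Lipschitz hat potential) checks the predicted rate $|V-V_\varepsilon|\lesssim\varepsilon^{k+\kappa}=\varepsilon$ at the kink: halving $\varepsilon$ halves the error.

```python
import math

def bump(u):
    # standard compactly supported mollifier profile on (-1, 1)
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

# normalisation constant so that the mollifier integrates to one
_N = 20_000
_h = 2.0 / _N
Z = sum(bump(-1.0 + (i + 0.5) * _h) for i in range(_N)) * _h

def mollified_at_zero(V, eps, n=20_000):
    """V_eps(0) = (phi_eps * V)(0) with phi_eps(y) = bump(y/eps)/(Z*eps)."""
    h = 2.0 * eps / n
    s = 0.0
    for i in range(n):
        y = -eps + (i + 0.5) * h
        s += bump(y / eps) / (Z * eps) * V(y)
    return s * h

V = lambda x: max(0.0, 1.0 - abs(x))       # Lipschitz hat: k = 0, kappa = 1

err_1 = 1.0 - mollified_at_zero(V, 0.10)   # |V(0) - V_eps(0)| at the kink
err_2 = 1.0 - mollified_at_zero(V, 0.05)
# err_1 / err_2 is close to 2: the error scales like eps^(k + kappa) = eps
```

Near the kink one has $V(0)-V_\varepsilon(0)=\varepsilon\int\phi(u)|u|\,du$, so the ratio of the two errors is exactly $2$ up to quadrature error.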
## Definitions and basic properties By a rough pseudo-differential operator $A_\varepsilon(\hbar) = \mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon)$ of regularity $\tau$ we mean the operator $$\label{EQ_def_PDO_op} \mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon)\psi(x) = \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} e^{i\hbar^{-1} \langle x-y,p\rangle} a_\varepsilon( \tfrac{x +y}{2},p) \psi(y) \, d y \, d p \quad\text{for $\psi\in\mathcal{S}(\mathbb{R}^d)$},$$ where $a_\varepsilon(x,p)$ is a rough symbol of regularity $\tau \in \mathbb{Z}$ and satisfies for all $\alpha,\beta\in\mathbb{N}^d_0$ that $$\label{symbolb_est_def} \begin{aligned} |\partial_x^\alpha\partial_p^\beta a_\varepsilon(x,p)| \leq C_{\alpha\beta} \varepsilon^{\min(0,\tau-\left| \alpha \right|)} m (x,p) \quad\text{for all $(x,p)\in\mathbb{R}^d\times\mathbb{R}^d$}, \end{aligned}$$ where $C_{\alpha\beta}$ is independent of $\varepsilon$ and $m$ is a tempered weight function. A tempered weight function is in some parts of the literature called an order function. The integral in [\[EQ_def_PDO_op\]](#EQ_def_PDO_op){reference-type="eqref" reference="EQ_def_PDO_op"} should be understood as an oscillatory integral. For $\varepsilon>0$, $\tau\in\mathbb{Z}$ and a tempered weight function $m$ we will use the notation $\Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})$ for the set of all $a_\varepsilon(x,p)\in C^\infty(\mathbb{R}^{2d})$ which satisfies [\[symbolb_est_def\]](#symbolb_est_def){reference-type="eqref" reference="symbolb_est_def"} for all $\alpha,\beta\in\mathbb{N}^d_0$. As we are interested in traces of our operators it will be important for us to know when the operator is bounded and trace class. This is the content of the following two theorems. **Theorem 9**. *Let $a_\varepsilon \in \Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})$, where we assume $m\in L^\infty(\mathbb{R}^{2d})$ and $\tau\geq0$. 
Suppose $\hbar\in(0,\hbar_0]$ and there exists a $\delta$ in $(0,1)$ such that $\varepsilon\geq\hbar^{1-\delta}$. Then there exists a constant $C_d$ and an integer $k_d$ only depending on the dimension such that $$\lVert \mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon)\psi \rVert_{L^2(\mathbb{R}^d)} \leq C_d \sup_{\substack{\left| \alpha \right|,\left| \beta \right|\leq k_d \\ (x,p)\in \mathbb{R}^{2d}}} \varepsilon^{\left| \alpha \right|} \left| \partial_x^\alpha \partial_p^\beta a_\varepsilon(x,p) \right| \lVert \psi \rVert_{L^2(\mathbb{R}^d)} \quad\text{for all $\psi\in\mathcal{S}(\mathbb{R}^d)$}.$$ In particular, $\mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon)$ can be extended to a bounded operator on $L^2(\mathbb{R}^d)$.* **Theorem 10**. *There exists a constant $C(d)$ only depending on the dimension such that $$\lVert \mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon) \rVert_{\mathop{\mathrm{Tr}}} \leq \frac{ C(d)} {\hbar^d} \sum_{\left| \alpha \right|+\left| \beta \right|\leq 2d+2} \varepsilon^{\left| \alpha \right|} \hbar^{\delta\left| \beta \right|} \int_{\mathbb{R}^{2d}} |\partial_x^\alpha \partial_p^\beta a_\varepsilon(x,p)| \,dxdp$$ for every rough symbol $a_\varepsilon\in\Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})$ with $\tau\geq0$, $\hbar\in(0,\hbar_0]$ and $\varepsilon\geq\hbar^{1-\delta}$ for some $\delta\in(0,1)$.* Both of these theorems can be found in [@mikkelsen2022optimal], where they are Theorem 3.25 and Theorem 3.26 respectively. We will also need to calculate the trace of a rough $\hbar$-$\Psi$DO. This is the content of the next theorem. **Theorem 11**. *Let $a_\varepsilon$ be in $\Gamma_{\varepsilon}^{m,\tau}(\mathbb{R}^{2d})$ with $\tau\geq0$ and suppose $\partial_x^\alpha \partial_p^\beta a_\varepsilon(x,p)$ is an element of $L^1(\mathbb{R}^{2d})$ for all $\left| \alpha \right|+\left| \beta \right|\leq 2d+2$. 
Then $\mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon)$ is trace class and $$\mathop{\mathrm{Tr}}(\mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon))=\frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} a_\varepsilon(x,p) \,dxdp.$$* This theorem is Theorem 3.27 from [@mikkelsen2022optimal]. We will also need to compose operators. The following theorem is a simplified version of Theorem 3.24 from [@mikkelsen2022optimal] on composing rough $\hbar$-$\Psi$DO's. **Theorem 12**. *Let $a_\varepsilon$ be in $\Gamma_{\varepsilon}^{m_1,\tau_{a}}(\mathbb{R}^{2d})$ and $b_\varepsilon$ be in $\Gamma_{\varepsilon}^{m_2,\tau_{b}}(\mathbb{R}^{2d})$ with $\tau_a,\tau_b\geq0$ and $m_1,m_2\in L^\infty(\mathbb{R}^{2d})$. Suppose $\hbar\in(0,\hbar_0]$ and $\varepsilon \geq \hbar^{1-\delta}$ for a $\delta \in(0,1)$ and let $\tau=\min(\tau_a,\tau_b)$. Then there exists a sequence of rough symbols $\{c_{\varepsilon,j}\}_{j\in\mathbb{N}_0}$ such that $c_{\varepsilon,j} \in \Gamma_{\varepsilon}^{m_1 m_2,\tau-j}(\mathbb{R}^{2d})$ for all $j\in\mathbb{N}_0$ and for every $N\in\mathbb{N}$ there exists $N_\delta\geq N$ such that $$\mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon) \mathop{\mathrm{Op_\hbar^w}}(b_\varepsilon) = \sum_{j=0}^{N_\delta} \hbar^j \mathop{\mathrm{Op_\hbar^w}}(c_{\varepsilon,j}) + \hbar^{N_\delta+1} \mathcal{R}_\varepsilon(N_\delta;\hbar),$$ where $\mathcal{R}_\varepsilon(N_\delta;\hbar)$ is a rough $\hbar$-$\Psi$DO which satisfies the bound $$\hbar^{N_\delta+1} \lVert \mathcal{R}_\varepsilon(N_\delta;\hbar) \rVert_{\mathrm{op}} \leq C_N \hbar^N,$$ where $C_N$ is independent of $\varepsilon$ but depends on the numbers $N$, $\lVert m_1 \rVert_{L^\infty(\mathbb{R}^{2d})}$, $\lVert m_2 \rVert_{L^\infty(\mathbb{R}^{2d})}$ and the constants $C_{\alpha\beta}$ from [\[symbolb_est_def\]](#symbolb_est_def){reference-type="eqref" reference="symbolb_est_def"} for both $a_\varepsilon$ and $b_\varepsilon$. 
The rough symbols $c_{\varepsilon,j}$ are explicitly given by $$c_{\varepsilon,j}(x,p) = (-i)^j \sum_{\left| \alpha \right|+\left| \beta \right|=j} \frac{1}{\alpha!\beta!}\Big(\frac{1}{2} \Big)^{\left| \alpha \right|}\Big(-\frac{1}{2} \Big)^{\left| \beta \right|} (\partial_p^\alpha \partial_x^\beta a_{\varepsilon})(x,p) (\partial_p^\beta \partial_x^\alpha b_{\varepsilon})(x,p).$$* *Remark 13*. Assume we are in the setting of Theorem [Theorem 12](#THM:composition-weyl-thm){reference-type="ref" reference="THM:composition-weyl-thm"}. If we additionally assume that at least one of the tempered weight functions $m_1$ or $m_2$ is in $L^\infty(\mathbb{R}^{2d})\cap L^1(\mathbb{R}^{2d})$, then the error term is bounded not just in operator norm but also in trace norm. That is, $$\hbar^{N_\delta+1} \lVert \mathcal{R}_\varepsilon(N_\delta;\hbar) \rVert_{1} \leq C_N \hbar^{N-d}.$$ The following lemma is an easy consequence of the result on composition of $\hbar$-$\Psi$DO's. It can also be found in [@MR1343781] as Lemma 2.2. **Lemma 14**. *Let $\theta_1,\theta_2\in \mathcal{B}^\infty(\mathbb{R}^{2d})$ and suppose that there exists a constant $c>0$ such that $$\label{EQ:disjoint_supp_PDO} \mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(\theta_1),\mathop{\mathrm{supp}}(\theta_2)) \geq c.$$ Then for all $N\in\mathbb{N}$ it holds that $$\lVert \mathop{\mathrm{Op_\hbar^w}}(\theta_1) \mathop{\mathrm{Op_\hbar^w}}(\theta_2) \rVert_{\mathrm{op}} \leq C_N \hbar^N.$$ If we further assume $\theta_1\in C_0^\infty(\mathbb{R}^{2d})$ it holds for all $N\in\mathbb{N}$ that $$\lVert \mathop{\mathrm{Op_\hbar^w}}(\theta_1) \mathop{\mathrm{Op_\hbar^w}}(\theta_2) \rVert_{1} \leq C_N \hbar^N.$$ In both cases the constant $C_N$ depends on the numbers $\lVert \partial^\alpha_x \partial^\beta_p \theta_1 \rVert_{L^\infty(\mathbb{R}^{2d})}$ and $\lVert \partial^\alpha_x \partial^\beta_p \theta_2 \rVert_{L^\infty(\mathbb{R}^{2d})}$ for all $\alpha,\beta\in\mathbb{N}_0^d$. 
In the second case the constant $C_N$ will also depend on $c$ from [\[EQ:disjoint_supp_PDO\]](#EQ:disjoint_supp_PDO){reference-type="eqref" reference="EQ:disjoint_supp_PDO"}.* ## Properties of rough Schrödinger operators We will in the following consider rough Schrödinger operators that satisfy the following assumption. **Assumption 15**. Let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla - \mu a)^2 + V_\varepsilon$ be a rough Schrödinger operator acting in $L^2(\mathbb{R}^d)$. Suppose that the $a_j$ are real valued and belong to $C_0^\infty(\mathbb{R}^d)$ for all $j\in\{1,\dots,d\}$. Moreover, suppose that $V_\varepsilon$ is a rough potential of regularity $\tau\geq0$ such that 1. [\[R.P.1\]]{#R.P.1 label="R.P.1"} $V_\varepsilon$ is real, smooth and $\min_{x\in\mathbb{R}^d} V_\varepsilon(x)>-\infty$. 2. [\[R.P.2\]]{#R.P.2 label="R.P.2"} There exists a $\zeta>0$ such that for all $\alpha\in\mathbb{N}_0^d$ there exists a constant $C_\alpha$ such that $$|\partial_x^\alpha V_\varepsilon(x)| \leq C_\alpha \varepsilon^{\min(0,\tau-|\alpha|)} (V_\varepsilon(x) +\zeta) \qquad\text{for all $x\in\mathbb{R}^d$}.$$ 3. [\[R.P.3\]]{#R.P.3 label="R.P.3"} There exist two constants $C,M>0$ such that $$|V_\varepsilon(x)| \leq C(V_\varepsilon(y)+\zeta) (1+ |x-y|)^M \qquad\text{for all $x,y\in\mathbb{R}^d$}.$$ *Remark 16*. When a rough Schrödinger operator $H_{\hbar,\mu,\varepsilon}$ satisfies Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}, it can be shown that, as an $\hbar$-$\Psi$DO, it is essentially self-adjoint. For details see e.g. [@mikkelsen2022optimal Section 4]. We will in these cases denote the closure by $H_{\hbar,\mu,\varepsilon}$ as well.
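To illustrate the scaling in the second condition of the assumption above, consider the model case (a sketch, not required by the assumption) of a mollified Hölder potential: take $V\in C^{0,\tau}(\mathbb{R}^d)$ with $\tau\in(0,1]$, a standard mollifier $\phi_\varepsilon(x)=\varepsilon^{-d}\phi(x/\varepsilon)$ and set $V_\varepsilon = V*\phi_\varepsilon$. For $|\alpha|\geq1$ we have $\int_{\mathbb{R}^d}\partial^\alpha\phi_\varepsilon(y)\,dy=0$, and hence $$|\partial_x^\alpha V_\varepsilon(x)| = \Big| \int_{\mathbb{R}^d} \big[V(x-y)-V(x)\big]\, \partial^\alpha\phi_\varepsilon(y) \,dy \Big| \leq [V]_{C^{0,\tau}} \int_{\mathbb{R}^d} |y|^\tau\, |\partial^\alpha\phi_\varepsilon(y)| \,dy \leq C_\alpha \varepsilon^{\tau-|\alpha|},$$ which exhibits exactly the factor $\varepsilon^{\min(0,\tau-|\alpha|)}$ appearing in the assumption; the remaining two conditions have to be checked separately for a given model.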
In the case where $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla - \mu a)^2 + V_\varepsilon$ with $a_j\in C_0^\infty(\mathbb{R}^d)$ for all $j\in\{1,\dots,d\}$ and $V_\varepsilon$ having compact support, we have that $H_{\hbar,\mu,\varepsilon}$ satisfies Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}. The following theorem is a simplified version of a more general theorem that can be found in [@mikkelsen2022optimal]. **Theorem 17**. *Let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla - \mu a)^2 + V_\varepsilon$ be a rough Schrödinger operator of regularity $\tau \geq 1$ acting in $L^2(\mathbb{R}^d)$ with $\hbar$ in $(0,\hbar_0]$ and $\mu\in[0,\mu_0]$. Suppose that $H_{\hbar,\mu,\varepsilon}$ satisfies Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"} and there exists a $\delta$ in $(0,1)$ such that $\varepsilon\geq\hbar^{1-\delta}$. Then for any function $f\in C_0^\infty(\mathbb{R})$ and every $N\in\mathbb{N}$ there exists an $N_\delta\in\mathbb{N}$ such that $$f(H_{\hbar,\mu,\varepsilon}) = \sum_{j = 0}^{N_\delta} \hbar^j \mathop{\mathrm{Op_\hbar^w}}(a_{\varepsilon,j}^f) + \hbar^{N_\delta+1} \mathcal{R}_\varepsilon(N_\delta,f;\hbar),$$ where $$\hbar^{N_\delta+1} \lVert \mathcal{R}_\varepsilon(N_\delta,f;\hbar) \rVert_{\mathrm{op}} \leq C_N \hbar^N,$$ and $$\label{B.func_cal_sym} \begin{aligned} a_{\varepsilon,0}^f(x,p) &= f( (p- \mu a(x))^2 + V_\varepsilon(x) ), \\ a_{\varepsilon,1}^f(x,p) &= 0, \\ a_{\varepsilon,j}^f (x,p)&= \sum_{k=1}^{2j-1} \frac{(-1)^k}{k!} d_{\varepsilon,j,k}(x,p) f^{(k)}( (p- \mu a(x))^2 + V_\varepsilon(x)) \quad\quad\text{for $j\geq2$}, \end{aligned}$$ where $d_{\varepsilon, j ,k}$ are universal polynomials in $\partial_p^\alpha\partial_x^\beta [ (p- \mu a(x))^2 + V_\varepsilon(x)]$ for $\left| \alpha \right|+\left| \beta \right|\leq j$.
In particular, $a_{\varepsilon,j}^f (x,p)$ is a rough symbol of regularity $\tau-j$ for all $j\in\mathbb{N}_0$.* *Remark 18*. In order to prove the following theorem one will need to understand the Schrödinger propagator associated to $H_{\hbar,\mu,\varepsilon}$, that is, the operator $e^{i\hbar^{-1}t H_{\hbar,\mu,\varepsilon}}$. Under the assumptions of the following theorem we can find an operator with an explicit kernel that locally approximates $e^{i\hbar^{-1}t H_{\hbar,\mu,\varepsilon}}$ in a suitable sense. This local construction is only valid for times of order $\hbar^{1-\frac{\delta}{2}}$. But if we locally have a non-critical condition, the approximation can be extended to a small time interval $[-T_0,T_0]$. For further details see [@mikkelsen2022optimal]. In the following we will reference this remark and the number $T_0$. **Theorem 19**. *Let $H_{\hbar,\mu,\varepsilon}$ be a rough Schrödinger operator of regularity $\tau \geq 2$ acting in $L^2(\mathbb{R}^d)$ with $\hbar$ in $(0,\hbar_0]$ and $\mu\in[0,\mu_0]$ which satisfies Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}. Suppose there exists a $\delta$ in $(0,1)$ such that $\varepsilon\geq\hbar^{1-\delta}$. Assume that $\theta\in C_0^\infty(\mathbb{R}^{2d})$ and there exist two constants $\eta,c>0$ such that $$|\nu -V_\varepsilon(x)| + \hbar^{\frac{2}{3}} \geq c \qquad\text{for all $(x,p)\in \mathop{\mathrm{supp}}(\theta)$ and $\nu\in(-2\eta,2\eta)$}.$$ Let $\chi$ be in $C^\infty_0((-T_0,T_0))$ with $\chi=1$ in a neighbourhood of $0$, where $T_0$ is the number from Remark [Remark 18](#RE:propagator){reference-type="ref" reference="RE:propagator"}.
Then for every $f$ in $C_0^\infty((-\eta,\eta))$ we have $$\Big| \mathop{\mathrm{Tr}}\big[\mathop{\mathrm{Op_\hbar^w}}(\theta)f(H_{\hbar,\mu,\varepsilon}) \mathcal{F}_{\hbar}^{-1}[\chi](H_{\hbar,\mu,\varepsilon}-s)\big] - \frac{1}{(2\pi\hbar)^{d}} f(s) \int_{\{a_{\varepsilon,0}=s\}} \frac{\theta }{\left| \nabla{a_{\varepsilon,0}} \right|} \,dS_s\Big| \leq C\hbar^{2-d},$$ where $a_{\varepsilon,0}(x,p)=(p- \mu a(x))^2 + V_\varepsilon(x)$ and $S_s$ is the Euclidean surface measure on the surface $\{a_{\varepsilon,0}(x,p)=s\}$. The error term is uniform with respect to $s \in (-\eta,\eta)$, but the constant $C$ depends on the dimension $d$, the numbers $\mu_0$, $\lVert \partial_x^\alpha\partial_p^\beta\theta \rVert_{L^\infty(\mathbb{R}^{2d})}$ for all $\alpha,\beta\in \mathbb{N}_0^d$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$, $\lVert V_\varepsilon \rVert_{L^\infty(\mathop{\mathrm{supp}}(\theta))}$ and the numbers $C_\alpha$ from Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}.* This theorem is a special case of [@mikkelsen2022optimal Theorem 6.1]. One thing to observe is that in the formulation of Theorem 6.1 the assumption on the principal symbol $a_{\varepsilon,0}$ is $$|\nabla_p a_{\varepsilon,0}(x,p)| \geq c \quad\text{for all } (x,p)\in a_{\varepsilon,0}^{-1}([-2\eta,2\eta]).$$ This is technically the same assumption as the one we make in Theorem [Theorem 19](#THM:Expansion_of_trace){reference-type="ref" reference="THM:Expansion_of_trace"} up to a square root. To see this, note that we here have that $a_{\varepsilon,0}(x,p)= (p-\mu a(x))^2 + V_\varepsilon(x)$. Hence we have that $$\label{EQ:non_critial_assumption_com_1} |\nabla_p a_{\varepsilon,0}(x,p)|^2 = 4(p-\mu a(x))^2 = 4(\nu -V_\varepsilon(x))$$ for all $(x,p)\in\mathbb{R}^{2d}$ such that $a_{\varepsilon,0}(x,p)=\nu$.
From [\[EQ:non_critial_assumption_com_1\]](#EQ:non_critial_assumption_com_1){reference-type="eqref" reference="EQ:non_critial_assumption_com_1"} we see that the two assumptions are indeed equivalent. Furthermore, if we had assumed the operator was of regularity $1$, we could obtain an error that is slightly better than $\hbar^{1-d}$ but not $\hbar^{2-d}$. Before we continue we will need the following remark to set some notation and the following proposition, which is a Tauberian-type result. *Remark 20*. Let $T\in(0,T_0]$, where $T_0$ is the number from Remark [Remark 18](#RE:propagator){reference-type="ref" reference="RE:propagator"}, and let $\hat{\chi}\in C_0^\infty((-T,T))$ be a real valued function such that $\hat{\chi}(s)=\hat{\chi}(-s)$ and $\hat{\chi}(s)=1$ for all $s\in(-\frac{T}{2},\frac{T}{2})$. Define $$\chi_1(t) = \frac{1}{2\pi} \int_{\mathbb{R}} \hat{\chi}(s) e^{ist} \,ds.$$ We assume that $\chi_1(t)\geq 0$ for all $t\in\mathbb{R}$ and there exist $T_1\in(0,T)$ and $c>0$ such that $\chi_1(t)\geq c$ for all $t\in[-T_1,T_1]$. We can guarantee these assumptions by (possibly) replacing $\hat{\chi}$ by $\hat{\chi}*\hat{\chi}$. We will by $\chi_\hbar(t)$ denote the function $$\chi_\hbar(t) = \tfrac{1}{\hbar} \chi_1(\tfrac{t}{\hbar}) = \mathcal{F}_\hbar^{-1}[\hat{\chi}](t).$$ Moreover, for any function $g\in L^1_{loc}(\mathbb{R})$ we will use the notation $$g^{(\hbar)}(t) =g*\chi_\hbar(t) = \int_\mathbb{R}g(s) \chi_\hbar(t-s) \,ds.$$ **Proposition 21**. *Let $A$ be a self-adjoint operator acting in a Hilbert space $\mathscr{H}$ and $g\in C_{0}^{\infty,\gamma}(\mathbb{R})$. Let $\chi_1$ be defined as in Remark [Remark 20](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}.
If for a Hilbert-Schmidt operator $B$ $$\label{EQ:PRO:Tauberian_1} \sup_{t\in \mathcal{D}(\delta)} \lVert B^{*}\chi_\hbar (A-t) B \rVert_1 \leq Z(\hbar),$$ where $\mathcal{D}(\delta) = \{ t \in\mathbb{R}\,|\, \mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(g),t)\leq \delta \}$, $Z(\hbar)$ is some positive function and $\delta$ is a strictly positive number. Then it holds that $$\lVert B^{*}(g(A)-g^{(\hbar)}(A)) B \rVert_1 \leq C \hbar^{1+\gamma} Z(\hbar) + C_{N}' \hbar^N \lVert B^{*}B \rVert_1 \quad\text{for all $N\in\mathbb{N}$},$$ where the constants $C$ and $C_N'$ depend on the number $\delta$ and the functions $g$ and $\chi_1$ only.* The proposition is taken from [@MR1343781], where it is Proposition 2.6. It first appeared in [@MR1272980] for $\gamma\in(0,1]$. In order to apply this proposition we will establish a case where we have a bound of the type [\[EQ:PRO:Tauberian_1\]](#EQ:PRO:Tauberian_1){reference-type="eqref" reference="EQ:PRO:Tauberian_1"} from Proposition [Proposition 21](#PRO:Tauberian){reference-type="ref" reference="PRO:Tauberian"}. **Lemma 22**. *Let $H_{\hbar,\mu,\varepsilon}$ be a rough Schrödinger operator of regularity $\tau \geq 2$ acting in $L^2(\mathbb{R}^d)$ with $\hbar$ in $(0,\hbar_0]$ and $\mu\in[0,\mu_0]$ which satisfies Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}. Suppose there exists a $\delta$ in $(0,1)$ such that $\varepsilon\geq\hbar^{1-\delta}$. Assume that $\theta\in C_0^\infty(\mathbb{R}^{2d})$ and there exist two constants $\eta,c>0$ such that $$|\nu -V_\varepsilon(x)| + \hbar^{\frac{2}{3}} \geq c \qquad\text{for all $(x,p)\in \mathop{\mathrm{supp}}(\theta)$ and $\nu\in(-2\eta,2\eta)$}.$$ Let $\chi_\hbar$ be the function from Remark [Remark 20](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}.
Then for every $f$ in $C_0^\infty((-\eta,\eta))$ we have $$\lVert \mathop{\mathrm{Op_\hbar^w}}(\theta)f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) f(H_{\hbar,\mu,\varepsilon}) \mathop{\mathrm{Op_\hbar^w}}(\theta) \rVert_1 \leq C\hbar^{-d},$$ where the constant depends on the dimension, the numbers $\mu_0$, $\lVert \partial_x^\alpha\partial_p^\beta\theta \rVert_{L^\infty(\mathbb{R}^{2d})}$ for all $\alpha,\beta\in \mathbb{N}_0^d$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$, $\lVert V_\varepsilon \rVert_{L^\infty(\mathop{\mathrm{supp}}(\theta))}$, the numbers $C_\alpha$ from Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"} and $\lVert \partial^\alpha f \rVert_{L^\infty(\mathbb{R})}$ for all $\alpha\in\mathbb{N}_0$.* *Proof.* Since we assume that $\chi_\hbar(t)\geq0$ for all $t\in\mathbb{R}$, the composition of the operators is a positive operator, and hence we have that $$\label{EQ:verification_of_assumption_1} \begin{aligned} \MoveEqLeft \lVert \mathop{\mathrm{Op_\hbar^w}}(\theta)f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) f(H_{\hbar,\mu,\varepsilon}) \mathop{\mathrm{Op_\hbar^w}}(\theta) \rVert_1 \\ &= \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta)f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) f(H_{\hbar,\mu,\varepsilon}) \mathop{\mathrm{Op_\hbar^w}}(\theta)] \\ &= \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta)\mathop{\mathrm{Op_\hbar^w}}(\theta) f^2(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) ], \end{aligned}$$ where we in the last equality have used cyclicality of the trace.
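The passage from $\mathop{\mathrm{Op_\hbar^w}}(\theta)\mathop{\mathrm{Op_\hbar^w}}(\theta)$ to a single quantization rests on the leading terms of the composition formula from Theorem [Theorem 12](#THM:composition-weyl-thm){reference-type="ref" reference="THM:composition-weyl-thm"}. As a sketch (with the bracket $\{a,b\}=\nabla_p a\cdot\nabla_x b-\nabla_x a\cdot\nabla_p b$ introduced here only for illustration), the explicit formula for $c_{\varepsilon,j}$ gives $$c_{\varepsilon,0}= a_\varepsilon b_\varepsilon \qquad\text{and}\qquad c_{\varepsilon,1} = \frac{1}{2i}\{a_\varepsilon,b_\varepsilon\},$$ so that $$\mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon) \mathop{\mathrm{Op_\hbar^w}}(b_\varepsilon) = \mathop{\mathrm{Op_\hbar^w}}(a_\varepsilon b_\varepsilon) + \frac{\hbar}{2i}\mathop{\mathrm{Op_\hbar^w}}(\{a_\varepsilon,b_\varepsilon\}) + \mathcal{O}(\hbar^2).$$ In particular, for $a_\varepsilon=b_\varepsilon=\theta$ the subprincipal term vanishes, since $\{\theta,\theta\}=0$.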
From applying Theorem [Theorem 12](#THM:composition-weyl-thm){reference-type="ref" reference="THM:composition-weyl-thm"} and Remark [Remark 13](#Re:composition-weyl-thm){reference-type="ref" reference="Re:composition-weyl-thm"} we obtain that $$\label{EQ:verification_of_assumption_2} \begin{aligned} \MoveEqLeft | \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta)\mathop{\mathrm{Op_\hbar^w}}(\theta) f^2(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) ]| \\ &\leq | \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta^2) f^2(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) ]| +C\hbar^{-d}, \end{aligned}$$ where the constant $C$ depends on the numbers $\lVert \partial^\alpha_x \partial^\beta_p \theta \rVert_{L^\infty(\mathbb{R}^{2d})}$ and $\lVert f \rVert_{L^\infty(\mathbb{R})}$. Applying Theorem [Theorem 19](#THM:Expansion_of_trace){reference-type="ref" reference="THM:Expansion_of_trace"} we get that $$\label{EQ:verification_of_assumption_3} \begin{aligned} | \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta^2) f^2(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) ]| \leq C\hbar^{-d}, \end{aligned}$$ where the constant $C$ depends on the quantities listed in Theorem [Theorem 19](#THM:Expansion_of_trace){reference-type="ref" reference="THM:Expansion_of_trace"}. Finally, by combining [\[EQ:verification_of_assumption_1\]](#EQ:verification_of_assumption_1){reference-type="eqref" reference="EQ:verification_of_assumption_1"}, [\[EQ:verification_of_assumption_2\]](#EQ:verification_of_assumption_2){reference-type="eqref" reference="EQ:verification_of_assumption_2"} and [\[EQ:verification_of_assumption_3\]](#EQ:verification_of_assumption_3){reference-type="eqref" reference="EQ:verification_of_assumption_3"} we obtain the desired estimate and this concludes the proof. ◻ **Theorem 23**.
*Let $H_{\hbar,\mu,\varepsilon}$ be a rough Schrödinger operator of regularity $\tau \geq 2$ acting in $L^2(\mathbb{R}^d)$ with $\hbar$ in $(0,\hbar_0]$ and $\mu\in[0,\mu_0]$ which satisfies Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}. Suppose there exists a $\delta$ in $(0,1)$ such that $\varepsilon\geq\hbar^{1-\delta}$. Moreover, suppose there exists some $c>0$ such that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Then for $\gamma\in[0,1]$, any $g \in C^{\infty,\gamma}(\mathbb{R})$ and any $\theta\in C_0^\infty(\Omega\times\mathbb{R}^d)$ it holds that $$\Big|\mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) g(H_{\hbar,\mu,\varepsilon})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g((p- \mu a(x))^2+V_\varepsilon(x))\theta(x,p) \,dx dp \Big| \leq C \hbar^{1+\gamma-d},$$ where the constant $C$ depends on the dimension, $\mu_0$, the numbers $\lVert \partial_x^\alpha\partial_p^\beta\theta \rVert_{L^\infty(\mathbb{R}^{2d})}$ for all $\alpha,\beta\in \mathbb{N}_0^d$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$, $\lVert V_\varepsilon \rVert_{L^\infty(\Omega)}$ and the numbers $C_\alpha$ from Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}.* *Proof.* By continuity there exists an $\eta>0$ such that $$|\nu -V_\varepsilon(x)| + \hbar^{\frac{2}{3}} \geq \frac{c}{2} \qquad\text{for all $x\in \Omega$ and $\nu\in(-2\eta,2\eta)$}.$$ Let $f_1,f_2 \in C_0^\infty(\mathbb{R})$ be such that $\mathop{\mathrm{supp}}(f_2)\subset (-\eta,\eta)$ and $$\label{EQ:Loc_rough_prob_0} g(H_{\hbar,\mu,\varepsilon}) = f_1(H_{\hbar,\mu,\varepsilon}) + f_2^2(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon}).$$ We can ensure this since $H_{\hbar,\mu,\varepsilon}$ is lower semibounded.
With these functions we have that $$\label{EQ:Loc_rough_prob_1} \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) g(H_{\hbar,\mu,\varepsilon})] = \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_1(H_{\hbar,\mu,\varepsilon}) ] + \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2^2(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon})].$$ We will consider each term separately and start by considering the first term on the right-hand side of [\[EQ:Loc_rough_prob_1\]](#EQ:Loc_rough_prob_1){reference-type="eqref" reference="EQ:Loc_rough_prob_1"}. Here we get by applying Theorem [Theorem 17](#THM:func_calc){reference-type="ref" reference="THM:func_calc"} that $$\label{EQ:Loc_rough_prob_2} \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_1(H_{\hbar,\mu,\varepsilon}) ] = \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) \mathop{\mathrm{Op_\hbar^w}}(a^{f_1}_{\varepsilon,0}) ] + \mathcal{O}(\hbar^{2-d}),$$ where the implicit constant depends on the numbers $\lVert \partial^\alpha_x \partial^\beta_p \theta \rVert_{L^\infty(\mathbb{R}^{2d})}$ for all $\alpha,\beta\in\mathbb{N}_0^d$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}^d$ and $j\in\{1,\dots,d\}$ and $\lVert \partial_x^\alpha V_\varepsilon \rVert_{L^\infty(\Omega)}$ for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq 2$. Moreover, we have used the notation $a_{\varepsilon,0}^{f_1}(x,p) = f_1( (p- \mu a(x))^2 + V_\varepsilon(x) )$.
From applying Theorem [Theorem 12](#THM:composition-weyl-thm){reference-type="ref" reference="THM:composition-weyl-thm"} and Theorem [Theorem 11](#BTHM:trace_formula){reference-type="ref" reference="BTHM:trace_formula"} we get that $$\label{EQ:Loc_rough_prob_3} \begin{aligned} \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) \mathop{\mathrm{Op_\hbar^w}}(a^{f_1}_{\varepsilon,0}) ] = {}& \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} f_1( (p- \mu a(x))^2 + V_\varepsilon(x) ) \theta(x,p) \,dxdp \\ &- \frac{i\hbar}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} c_{\varepsilon,1}(x,p) \,dxdp +\mathcal{O}(\hbar^{2-d}), \end{aligned}$$ where $c_{\varepsilon,1}$ is the subprincipal symbol we get from composing the operators. Since the left-hand side of [\[EQ:Loc_rough_prob_3\]](#EQ:Loc_rough_prob_3){reference-type="eqref" reference="EQ:Loc_rough_prob_3"} is real and $c_{\varepsilon,1}$ is real, the purely imaginary second term on the right-hand side has to be of lower order. Hence we have that $$\label{EQ:Loc_rough_prob_4} \begin{aligned} \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) \mathop{\mathrm{Op_\hbar^w}}(a^{f_1}_{\varepsilon,0}) ] = {}& \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} f_1( (p- \mu a(x))^2 + V_\varepsilon(x) ) \theta(x,p) \,dxdp +\mathcal{O}(\hbar^{2-d}). \end{aligned}$$ Now we turn to the second term on the right-hand side of [\[EQ:Loc_rough_prob_1\]](#EQ:Loc_rough_prob_1){reference-type="eqref" reference="EQ:Loc_rough_prob_1"}. When we consider this term, we may due to the support properties of $f_2$ assume that $\mathop{\mathrm{supp}}(g)\subset (-\frac{3}{2}\eta,0]$, that is, $g\in C_0^{\infty,\gamma}(\mathbb{R})$. Let $g^\hbar = g^{(\hbar)}$ be the smoothed version of $g$ as described in Remark [Remark 20](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}.
We then have $$\label{EQ:Loc_rough_prob_5} \begin{aligned} \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2^2(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon})] = {}& \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2^2(H_{\hbar,\mu,\varepsilon})g^\hbar (H_{\hbar,\mu,\varepsilon})] \\ &+ \mathop{\mathrm{Tr}}\big[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2(H_{\hbar,\mu,\varepsilon}) [g (H_{\hbar,\mu,\varepsilon})-g^\hbar (H_{\hbar,\mu,\varepsilon})] f_2(H_{\hbar,\mu,\varepsilon})\big]. \end{aligned}$$ Let $\theta_1\in C_0^\infty(\Omega\times\mathbb{R}^d)$ be such that $\theta \theta_1=\theta$. Then from applying Lemma [Lemma 14](#LE:disjoint_supp_PDO){reference-type="ref" reference="LE:disjoint_supp_PDO"} twice we get for all $N\in\mathbb{N}$ that $$\label{EQ:Loc_rough_prob_6} \begin{aligned} \MoveEqLeft \big| \mathop{\mathrm{Tr}}\big[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2(H_{\hbar,\mu,\varepsilon}) [g(H_{\hbar,\mu,\varepsilon})-g^\hbar (H_{\hbar,\mu,\varepsilon})] f_2(H_{\hbar,\mu,\varepsilon})\big] \big| \\ &\leq \lVert \mathop{\mathrm{Op_\hbar^w}}(\theta) \rVert_{\mathrm{op}} \big\lVert\mathop{\mathrm{Op_\hbar^w}}(\theta_1) f_2(H_{\hbar,\mu,\varepsilon}) [g (H_{\hbar,\mu,\varepsilon})-g^\hbar (H_{\hbar,\mu,\varepsilon})] f_2(H_{\hbar,\mu,\varepsilon})\mathop{\mathrm{Op_\hbar^w}}(\theta_1)\big\rVert_{1} + C_N \hbar^N. \end{aligned}$$ From Lemma [Lemma 22](#LE:verification_of_assumption){reference-type="ref" reference="LE:verification_of_assumption"} we have that assumption [\[EQ:PRO:Tauberian_1\]](#EQ:PRO:Tauberian_1){reference-type="eqref" reference="EQ:PRO:Tauberian_1"} from Proposition [Proposition 21](#PRO:Tauberian){reference-type="ref" reference="PRO:Tauberian"} is satisfied with $B=f_2(H_{\hbar,\mu,\varepsilon})\mathop{\mathrm{Op_\hbar^w}}(\theta_1)$.
Hence Proposition [Proposition 21](#PRO:Tauberian){reference-type="ref" reference="PRO:Tauberian"} gives us that $$\label{EQ:Loc_rough_prob_7} \begin{aligned} \big\lVert\mathop{\mathrm{Op_\hbar^w}}(\theta_1) f_2(H_{\hbar,\mu,\varepsilon}) [g (H_{\hbar,\mu,\varepsilon})-g^\hbar (H_{\hbar,\mu,\varepsilon})] f_2(H_{\hbar,\mu,\varepsilon})\mathop{\mathrm{Op_\hbar^w}}(\theta_1)\big\rVert_{1} \leq C \hbar^{1+\gamma-d}. \end{aligned}$$ Using the definition of $g^\hbar$ and applying Theorem [Theorem 19](#THM:Expansion_of_trace){reference-type="ref" reference="THM:Expansion_of_trace"} we have that $$\label{EQ:Loc_rough_prob_8} \begin{aligned} \MoveEqLeft \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2^2(H_{\hbar,\mu,\varepsilon})g^\hbar (H_{\hbar,\mu,\varepsilon})] \\ ={}& \int_\mathbb{R}g(s) \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2^2(H_{\hbar,\mu,\varepsilon})\chi_\hbar (H_{\hbar,\mu,\varepsilon}-s)] \,ds \\ ={}& \frac{1}{(2\pi\hbar)^{d}} \int_\mathbb{R}g(s) f_2^2(s) \int_{\{a_{\varepsilon,0}=s\}} \frac{\theta }{\left| \nabla{a_{\varepsilon,0}} \right|} \,dS_s \,ds + \mathcal{O}(\hbar^{2-d}) \\ ={}& \frac{1}{(2\pi\hbar)^{d}} \int_{\mathbb{R}^{2d}} f_2^2g( (p- \mu a(x))^2 + V_\varepsilon(x) ) \theta(x,p) \,dxdp + \mathcal{O}(\hbar^{2-d}).
\end{aligned}$$ From combining [\[EQ:Loc_rough_prob_5\]](#EQ:Loc_rough_prob_5){reference-type="eqref" reference="EQ:Loc_rough_prob_5"}, [\[EQ:Loc_rough_prob_6\]](#EQ:Loc_rough_prob_6){reference-type="eqref" reference="EQ:Loc_rough_prob_6"}, [\[EQ:Loc_rough_prob_7\]](#EQ:Loc_rough_prob_7){reference-type="eqref" reference="EQ:Loc_rough_prob_7"} and [\[EQ:Loc_rough_prob_8\]](#EQ:Loc_rough_prob_8){reference-type="eqref" reference="EQ:Loc_rough_prob_8"} we obtain that $$\label{EQ:Loc_rough_prob_9} \begin{aligned} \MoveEqLeft \mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) f_2^2(H_{\hbar,\mu,\varepsilon})g(H_{\hbar,\mu,\varepsilon})] \\ = {}& \frac{1}{(2\pi\hbar)^{d}} \int_{\mathbb{R}^{2d}} f_2^2g( (p- \mu a(x))^2 + V_\varepsilon(x) ) \theta(x,p) \,dxdp + \mathcal{O}(\hbar^{1+\gamma-d}). \end{aligned}$$ Recalling the identity in [\[EQ:Loc_rough_prob_0\]](#EQ:Loc_rough_prob_0){reference-type="eqref" reference="EQ:Loc_rough_prob_0"} and combining [\[EQ:Loc_rough_prob_1\]](#EQ:Loc_rough_prob_1){reference-type="eqref" reference="EQ:Loc_rough_prob_1"}, [\[EQ:Loc_rough_prob_4\]](#EQ:Loc_rough_prob_4){reference-type="eqref" reference="EQ:Loc_rough_prob_4"} and [\[EQ:Loc_rough_prob_9\]](#EQ:Loc_rough_prob_9){reference-type="eqref" reference="EQ:Loc_rough_prob_9"} we obtain that $$\Big|\mathop{\mathrm{Tr}}[\mathop{\mathrm{Op_\hbar^w}}(\theta) g(H_{\hbar,\mu,\varepsilon})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g((p- \mu a(x))^2+V_\varepsilon(x))\theta(x,p) \,dx dp \Big| \leq C \hbar^{1+\gamma-d},$$ where the constant depends on the numbers stated in the theorem. This concludes the proof. 
◻ # Auxiliary estimates {#sec:Aux_est} We will in this section establish bounds on trace norms of the form $\lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \rVert_1$, where $f\in C_0^\infty(\mathbb{R})$, $\varphi\in C_0^\infty(\Omega)$ and $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with some set $\Omega$ and the numbers $\hbar>0$ and $\mu\geq0$. The results in this section are based on ideas originating in [@MR1343781]. The main estimate, from which the other estimates are deduced, is contained in the following lemma. The lemma is taken from [@MR1343781], where it is Lemma 3.6. **Lemma 24**. *Let $H_{\hbar,\mu} = (-i\hbar\nabla-\mu a)^2 +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$ and assume that $V\in L^\infty(\mathbb{R}^d)$ and $a_j\in L^2_{loc}(\mathbb{R}^d)$ for $j\in\{1,\dots,d\}$. Moreover, suppose that $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small. Let $\varphi_1\in C^\infty_0(\mathbb{R}^d)$ and $\varphi_2\in \mathcal{B}^{\infty}(\mathbb{R}^d)$ such that $$\mathop{\mathrm{dist}}\big\{\mathop{\mathrm{supp}}(\varphi_1),\mathop{\mathrm{supp}}(\varphi_2)\big\} \geq c>0,$$ and let $r,m\in\{0,1\}$. Then for any $N>\frac{d}{2}$ it holds that $$\lVert \varphi_1 Q_l^r (H_{\hbar,\mu} - z)^{-1} (Q_q^{*})^m \varphi_2 \rVert_1 \leq C_N \frac{\langle z \rangle^{\frac{m+r}{2}}}{d(z)} \frac{\langle z \rangle^{\frac{d}{2}}}{\hbar^d} \frac{\langle z \rangle^N \hbar^{2N}}{d(z)^{2N}},$$ where $Q_l = -i\hbar\partial_{x_l}-\mu a_l$. The constant $C_N$ depends only on the numbers $N$, $\lVert \partial^\alpha\varphi_1 \rVert_{L^\infty(\mathbb{R}^d)}$, $\lVert \partial^\alpha\varphi_2 \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ and the constant $c$.* The next lemma is also from [@MR1343781], where it is Lemma 3.9. **Lemma 25**.
*Let $H_{\hbar,\mu} = (-i\hbar\nabla-\mu a)^2 +V$ be a Schrödinger operator acting in $L^2(\mathbb{R}^d)$ and assume that $V\in L^\infty(\mathbb{R}^d)$ and $a_j\in L^2_{loc}(\mathbb{R}^d)$ for $j\in\{1,\dots,d\}$. Moreover, suppose that $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small. Let $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\mathbb{R}^d)$. Then $$\lVert \varphi f(H_{\hbar,\mu}) \rVert_1 \leq C \hbar^{-d}.$$ Suppose further that $\varphi_1\in C^\infty_0(\mathbb{R}^d)$ and $\varphi_2\in \mathcal{B}^{\infty}(\mathbb{R}^d)$ are such that $$\mathop{\mathrm{dist}}\big\{\mathop{\mathrm{supp}}(\varphi_1),\mathop{\mathrm{supp}}(\varphi_2)\big\} \geq c>0.$$ Then for any $N\geq0$ it holds that $$\lVert \varphi_1 f(H_{\hbar,\mu}) \varphi_2 \rVert_1 \leq C_N \hbar^N.$$ The constant $C_N$ depends only on the numbers $N$, $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi_1 \rVert_{L^\infty(\mathbb{R}^d)}$, $\lVert \partial^\alpha\varphi_2 \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ and the constant $c$.* A result almost identical to the next lemma can be found in [@MR1343781], where it is Theorem 3.12. The difference between the two results is that our constant does not depend directly on the number $\lambda_0$ (with the notation from [@MR1343781]). This is because we use the Helffer-Sjöstrand formula instead of the representation formula for $f(A)$ used in [@MR1343781], where $f\in C_0^\infty(\mathbb{R})$ and $A$ is some self-adjoint lower semibounded operator. **Lemma 26**. *Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$ which satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and the local operator $H_{\hbar,\mu} = (-i\hbar\nabla-\mu a)^2 +V$. Assume that $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small.
Then for $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\Omega)$ we have for any $N\in\mathbb{N}_0$ that $$\lVert \varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})] \rVert_1\leq C_N \hbar^N,$$ and $$\lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \rVert_1\leq C \hbar^{-d}.$$ The constants $C$ and $C_N$ depend only on the numbers $N$, $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ and the constant $c$.* *Proof.* Using the Helffer-Sjöstrand formula (Theorem [Theorem 6](#THM:Helffer-Sjostrand){reference-type="ref" reference="THM:Helffer-Sjostrand"}) we obtain that $$\label{EQ:Comparision_Loc_infty_1} \varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})] =- \frac{1}{\pi} \int_\mathbb{C}\bar{\partial }\tilde{f}(z) \varphi [(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}] \, L(dz),$$ where $\tilde{f}$ is an almost analytic extension of $f$. Since we assume that $\varphi\in C_0^\infty(\Omega)$, there exists a positive constant $c$ such that $$\mathop{\mathrm{dist}}\big(\mathop{\mathrm{supp}}(\varphi),\partial\Omega\big) \geq 4c.$$ Let $\varphi_1 \in C_0^\infty(\mathbb{R}^d)$ be such that $\varphi_1(x)\in[0,1]$ for all $x\in\mathbb{R}^d$. Moreover, we choose $\varphi_1$ such that $\varphi_1(x)=1$ on the set $\{x\in\mathbb{R}^d \,|\, \mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(\varphi),x) \leq c \}$ and $$\mathop{\mathrm{supp}}(\varphi_1)\subset \{x\in\mathbb{R}^d \,|\, \mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(\varphi),x) \leq 3c \}.$$ With this function we have that $$\label{EQ:Comparision_Loc_infty_2} \begin{aligned} \MoveEqLeft \varphi [(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}] \\ &= \varphi [\varphi_1(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}\varphi_1] - \varphi (z-H_{\hbar,\mu})^{-1}(1-\varphi_1).
\end{aligned}$$ For the second term on the right-hand side of [\[EQ:Comparision_Loc_infty_2\]](#EQ:Comparision_Loc_infty_2){reference-type="eqref" reference="EQ:Comparision_Loc_infty_2"} we have by Lemma [Lemma 24](#LE:Resolvent_est_local){reference-type="ref" reference="LE:Resolvent_est_local"} for all $N>\frac{d}{2}$ that $$\label{EQ:Comparision_Loc_infty_3} \lVert \varphi (z-H_{\hbar,\mu})^{-1}(1-\varphi_1) \rVert_1 \leq C_N \frac{\langle z \rangle^{N+\frac{d}{2}} \hbar^{2N-d}}{d(z)^{2N+1}},$$ where $C_N$ depends only on the number $N$, the functions $\varphi$, $\varphi_1$ and the constant $c$. For the first term on the right-hand side of [\[EQ:Comparision_Loc_infty_2\]](#EQ:Comparision_Loc_infty_2){reference-type="eqref" reference="EQ:Comparision_Loc_infty_2"} we have by the resolvent identity that $$\label{EQ:Comparision_Loc_infty_4} \begin{aligned} \MoveEqLeft \varphi_1(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}\varphi_1 \\ &= \sum_{j=1}^d (z-H_{\hbar,\mu})^{-1} [Q_j^{*}Q_j, \varphi_1 ] (z-\mathcal{H}_{\hbar,\mu})^{-1} \\ &= \sum_{j=1}^d (z-H_{\hbar,\mu})^{-1} \big(-i\hbar Q_j \partial_{x_j}\varphi_1 -\hbar^2\partial_{x_j}^2\varphi_1 \big) (z-\mathcal{H}_{\hbar,\mu})^{-1}, \end{aligned}$$ where $\partial_{x_j}\varphi_1$ and $\partial_{x_j}^2\varphi_1$ are the derivatives of $\varphi_1$ with respect to $x_j$ once or twice respectively.
Notice that due to our choice of $\varphi_1$ we have that $$\mathop{\mathrm{dist}}\big( \mathop{\mathrm{supp}}(\partial_{x_j}\varphi_1),\mathop{\mathrm{supp}}(\varphi) \big) \geq c \quad\text{and}\quad \mathop{\mathrm{dist}}\big( \mathop{\mathrm{supp}}(\partial_{x_j}^2\varphi_1),\mathop{\mathrm{supp}}(\varphi) \big) \geq c.$$ Using [\[EQ:Comparision_Loc_infty_4\]](#EQ:Comparision_Loc_infty_4){reference-type="eqref" reference="EQ:Comparision_Loc_infty_4"} we have by Lemma [Lemma 24](#LE:Resolvent_est_local){reference-type="ref" reference="LE:Resolvent_est_local"} for all $N>\frac{d}{2}$ that $$\label{EQ:Comparision_Loc_infty_5} \begin{aligned} \MoveEqLeft \lVert \varphi [\varphi_1(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}\varphi_1] \rVert_1 \\ &\leq \big\lVert (z-\mathcal{H}_{\hbar,\mu})^{-1} \big\rVert_{\mathrm{op}} \sum_{j=1}^d \hbar \big\lVert \varphi (z-H_{\hbar,\mu})^{-1} Q_j \partial_{x_j}\varphi_1 \big\rVert_1 + \hbar^2 \big\lVert \varphi (z-H_{\hbar,\mu})^{-1} \partial_{x_j}^2\varphi_1 \big\rVert_1 \\ &\leq C_N \frac{\langle z \rangle^{N+\frac{d+1}{2}} \hbar^{2N-d}}{d(z)^{2N+1}}\frac{\hbar+\hbar^2}{|\mathop{\mathrm{\mathrm{Im}}}(z)|}, \end{aligned}$$ where $C_N$ depends only on the dimension, the number $N$, the functions $\varphi$, $\varphi_1$ and the constant $c$.
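The finiteness of the Helffer-Sjöstrand integral used to conclude the proof relies on the defining property of the almost analytic extension $\tilde{f}$: it is supported in a complex neighbourhood of $\mathop{\mathrm{supp}}(f)$ and, for every $M\in\mathbb{N}$, there is a constant $C_M$ such that (a standard fact, recalled here as a sketch) $$\big|\bar{\partial }\tilde{f}(z)\big| \leq C_M \left| \mathop{\mathrm{\mathrm{Im}}}(z) \right|^{M} \qquad\text{for all $z\in\mathbb{C}$}.$$ Choosing $M=2N+2$, the singular factor $|\mathop{\mathrm{\mathrm{Im}}}(z)|^{-2N-2}$ is cancelled on the compact support of $\tilde{f}$, so the resulting integral is bounded by a constant depending only on $f$ and $N$.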
Combining [\[EQ:Comparision_Loc_infty_2\]](#EQ:Comparision_Loc_infty_2){reference-type="eqref" reference="EQ:Comparision_Loc_infty_2"}, [\[EQ:Comparision_Loc_infty_3\]](#EQ:Comparision_Loc_infty_3){reference-type="eqref" reference="EQ:Comparision_Loc_infty_3"} and [\[EQ:Comparision_Loc_infty_5\]](#EQ:Comparision_Loc_infty_5){reference-type="eqref" reference="EQ:Comparision_Loc_infty_5"} we obtain that $$\label{EQ:Comparision_Loc_infty_6} \big \lVert \varphi [(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}] \big\rVert_1 \leq C_N \frac{\langle z \rangle^{N+\frac{d+1}{2}} \hbar^{2N-d}}{|\mathop{\mathrm{\mathrm{Im}}}(z)|^{2N+2}}.$$ Combining [\[EQ:Comparision_Loc_infty_1\]](#EQ:Comparision_Loc_infty_1){reference-type="eqref" reference="EQ:Comparision_Loc_infty_1"}, [\[EQ:Comparision_Loc_infty_6\]](#EQ:Comparision_Loc_infty_6){reference-type="eqref" reference="EQ:Comparision_Loc_infty_6"} and using the triangle inequality for the integral we get for all $N>\frac{d}{2}$ that $$\label{EQ:Comparision_Loc_infty_7} \begin{aligned} \big \lVert \varphi[f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})] \big \rVert_1 \ &\leq \frac{1}{\pi} \int_\mathbb{C}\big| \bar{\partial }\tilde{f}(z) \big| \big \lVert \varphi [(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}] \big\rVert_1 \, L(dz) \\ &\leq C_N \frac{\hbar^{2N-d}}{\pi} \int_\mathbb{C}\big| \bar{\partial }\tilde{f}(z) \big| \frac{\langle z \rangle^{N+\frac{d+1}{2}} }{|\mathop{\mathrm{\mathrm{Im}}}(z)|^{2N+2}} \, L(dz) \leq \tilde{C}_N \hbar^{2N-d}, \end{aligned}$$ where the constant $\tilde{C}_N$ depends on the dimension, the number $N$, the functions $\varphi$, $\varphi_1$, $f$ and the constant $c$. In the last inequality we have used the properties of the almost analytic extension $\tilde{f}$. The estimate in [\[EQ:Comparision_Loc_infty_7\]](#EQ:Comparision_Loc_infty_7){reference-type="eqref" reference="EQ:Comparision_Loc_infty_7"} concludes the proof. ◻ **Lemma 27**.
*Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$. Suppose $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla-\mu a)^2 +V_\varepsilon$ be the associated rough Schrödinger operator. Assume that $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small. Let $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\Omega)$. Then it holds that $$\label{LE:EQ:Func_com_model_reg_2} \lVert \varphi [f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})] \rVert_1 \leq C \varepsilon^{2+\kappa} \hbar^{-d}.$$ The constant $C$ depends only on the dimension, the numbers $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}^d_0$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$ and $\lVert \partial_x^\alpha V \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq 2$.* *Proof.* Let $H_{\hbar,\mu}$ be the magnetic Schrödinger operator associated to $\mathcal{H}_{\hbar,\mu}$.
We then have that $$\label{EQ:Func_com_model_reg_1} \lVert \varphi [f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})] \rVert_1 \leq \lVert \varphi [f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})] \rVert_1 + \lVert \varphi [f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})] \rVert_1.$$ By Lemma [Lemma 26](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"} it follows for all $N\in\mathbb{N}$ that $$\label{EQ:Func_com_model_reg_2} \lVert \varphi [f(\mathcal{H}_{\hbar,\mu})-f(H_{\hbar,\mu})] \rVert_1 \leq C_N \hbar^N.$$ To estimate the second term on the right-hand side of [\[EQ:Func_com_model_reg_1\]](#EQ:Func_com_model_reg_1){reference-type="eqref" reference="EQ:Func_com_model_reg_1"} let $f_1\in C_0^\infty(\mathbb{R})$ be such that $f_1(t)f(t)=f(t)$ for all $t\in \mathbb{R}$. Moreover, let $\varphi_1\in C_0^\infty (\Omega)$ be such that $\varphi_1(x) = 1$ for all $x\in\overline{\mathop{\mathrm{supp}}(\varphi)}$. We then have for each $N\in\mathbb{N}_0$ that $$\label{EQ:Func_com_model_reg_3} \begin{aligned} \MoveEqLeft \lVert \varphi [f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon})] \rVert_1 \\ \leq{}& \lVert \varphi [f_1(H_{\hbar,\mu})-f_1(H_{\hbar,\mu,\varepsilon})] \varphi_1 f(H_{\hbar,\mu}) \rVert_1 + \lVert \varphi f_1(H_{\hbar,\mu,\varepsilon})[f(H_{\hbar,\mu}) -f(H_{\hbar,\mu,\varepsilon})] \rVert_1+C_N\hbar^N \\ \leq{}& C\hbar^{-d}\big[ \lVert f_1(H_{\hbar,\mu})-f_1(H_{\hbar,\mu,\varepsilon}) \rVert_{\mathrm{op}} + \lVert f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon}) \rVert_{\mathrm{op}}\big]+C_N\hbar^N, \end{aligned}$$ where we have used Lemma [Lemma 25](#LE:Func_cal_est_inf_pon){reference-type="ref" reference="LE:Func_cal_est_inf_pon"} three times. This is applicable since $V,V_\varepsilon\in L^\infty(\mathbb{R}^d)$ and since the functions $\varphi$ and $1-\varphi_1$ have disjoint supports.
Applying Theorem [Theorem 6](#THM:Helffer-Sjostrand){reference-type="ref" reference="THM:Helffer-Sjostrand"} and the resolvent formalism we get that $$\label{EQ:Func_com_model_reg_4} \begin{aligned} \lVert f(H_{\hbar,\mu})-f(H_{\hbar,\mu,\varepsilon}) \rVert_{\mathrm{op}} &\leq \frac{1}{\pi} \int_{\mathbb{C}} | \bar{\partial }\tilde{f}(z)| \lVert (z-H_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu,\varepsilon})^{-1} \rVert_{\mathrm{op}} \, L(dz) \\ &\leq \frac{1}{\pi} \int_{\mathbb{C}} \frac{| \bar{\partial }\tilde{f}(z)| }{|\mathop{\mathrm{\mathrm{Im}}}(z)|^2} \lVert V-V_{\varepsilon} \rVert_{\mathrm{op}} \, L(dz) \\ &\leq C \varepsilon^{2+\kappa}, \end{aligned}$$ where in the last inequality we have used that $\tilde{f}$ is an almost analytic extension of $f$ with compact support, together with Lemma [Lemma 7](#LE:rough_potential_local){reference-type="ref" reference="LE:rough_potential_local"}. Analogously we obtain that $$\label{EQ:Func_com_model_reg_5} \begin{aligned} \lVert f_1(H_{\hbar,\mu})-f_1(H_{\hbar,\mu,\varepsilon}) \rVert_{\mathrm{op}} \leq C \varepsilon^{2+\kappa}. \end{aligned}$$ Combining the estimates in [\[EQ:Func_com_model_reg_1\]](#EQ:Func_com_model_reg_1){reference-type="eqref" reference="EQ:Func_com_model_reg_1"}, [\[EQ:Func_com_model_reg_2\]](#EQ:Func_com_model_reg_2){reference-type="eqref" reference="EQ:Func_com_model_reg_2"}, [\[EQ:Func_com_model_reg_3\]](#EQ:Func_com_model_reg_3){reference-type="eqref" reference="EQ:Func_com_model_reg_3"}, [\[EQ:Func_com_model_reg_4\]](#EQ:Func_com_model_reg_4){reference-type="eqref" reference="EQ:Func_com_model_reg_4"} and [\[EQ:Func_com_model_reg_5\]](#EQ:Func_com_model_reg_5){reference-type="eqref" reference="EQ:Func_com_model_reg_5"} we obtain the estimate in [\[LE:EQ:Func_com_model_reg_2\]](#LE:EQ:Func_com_model_reg_2){reference-type="eqref" reference="LE:EQ:Func_com_model_reg_2"}. This concludes the proof. ◻ Before we proceed we will need a technical lemma.
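The Helffer--Sjöstrand representation $f(H) = -\frac{1}{\pi}\int_{\mathbb{C}}\bar{\partial}\tilde{f}(z)\,(z-H)^{-1}\,L(dz)$ used above can be sanity-checked numerically in finite dimensions. The following sketch (our own illustration, not part of the argument) verifies it for a $2\times 2$ self-adjoint matrix with $f(x)=e^{-x^2}$ and the first-order almost analytic extension $\tilde{f}(x+iy)=(f(x)+iyf'(x))\tau(y)$, where $\tau$ is a compactly supported cutoff; the matrix, cutoff and quadrature grid are arbitrary choices.

```python
import numpy as np

def cutoff(y):
    # C^1 cutoff in Im z: equal to 1 for |y| <= 1 and 0 for |y| >= 2
    t = np.clip(np.abs(y) - 1.0, 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

def cutoff_prime(y):
    t = np.abs(y) - 1.0
    inside = (t > 0.0) & (t < 1.0)
    return np.where(inside, -6.0 * np.sign(y) * t * (1.0 - t), 0.0)

def hs_gaussian_of_matrix(A, X=6.0, nx=601, ny=601):
    """Approximate f(A), f(x) = exp(-x^2), by the Helffer-Sjostrand integral
    f(A) = -(1/pi) * int_C  dbar f~(z) (z - A)^{-1} dx dy."""
    x = np.linspace(-X, X, nx)
    y = np.linspace(-2.0, 2.0, ny)
    xx, yy = np.meshgrid(x, y, indexing="ij")
    f = np.exp(-xx ** 2)
    fp = -2.0 * xx * f                  # f'
    fpp = (4.0 * xx ** 2 - 2.0) * f     # f''
    tau, taup = cutoff(yy), cutoff_prime(yy)
    # dbar f~ = (i/2) [ y f''(x) tau(y) + (f(x) + i y f'(x)) tau'(y) ]
    dbar = 0.5j * (yy * fpp * tau + (f + 1j * yy * fp) * taup)
    z = xx + 1j * yy
    # closed-form resolvent (z - A)^{-1} of the 2x2 matrix A on the grid
    a, b = A[0, 0], A[0, 1]
    c, d = A[1, 0], A[1, 1]
    det = (z - a) * (z - d) - b * c
    R = np.empty(z.shape + (2, 2), dtype=complex)
    R[..., 0, 0] = (z - d) / det
    R[..., 0, 1] = b / det
    R[..., 1, 0] = c / det
    R[..., 1, 1] = (z - a) / det
    hx, hy = x[1] - x[0], y[1] - y[0]
    return -(dbar[..., None, None] * R).sum(axis=(0, 1)) * hx * hy / np.pi

A = np.array([[0.3, 0.2], [0.2, -0.4]])
w, U = np.linalg.eigh(A)
exact = U @ np.diag(np.exp(-w ** 2)) @ U.T   # f(A) by diagonalisation
approx = hs_gaussian_of_matrix(A)
```

Note that $\bar{\partial}\tilde{f}$ vanishes to first order at the real axis, so the integrand stays bounded despite the resolvent blow-up $|\mathop{\mathrm{\mathrm{Im}}}(z)|^{-1}$; this is the same mechanism exploited in the trace-norm estimates above.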
This lemma gives us a version of the estimate [\[EQ:PRO:Tauberian_1\]](#EQ:PRO:Tauberian_1){reference-type="eqref" reference="EQ:PRO:Tauberian_1"} from Proposition [Proposition 21](#PRO:Tauberian){reference-type="ref" reference="PRO:Tauberian"}. **Lemma 28**. *Let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla-\mu a)^2 +V_\varepsilon$ be a rough Schrödinger operator acting in $L^2(\mathbb{R}^d)$ of regularity $\tau\geq2$ with $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, $\hbar_0$ sufficiently small. Assume that $a_j\in C_0^\infty(\mathbb{R}^d)$ for all $j\in\{1,\dots,d\}$ and $V_\varepsilon \in C_0^\infty(\mathbb{R}^d)$. Suppose there is an open set $\Omega \subset \mathop{\mathrm{supp}}(V_\varepsilon)$ and a $c>0$ such that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Let $\chi_\hbar(t)$ be the function from Remark [Remark 20](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}, $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\Omega)$. Then it holds for $s\in\mathbb{R}$ that $$\lVert \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \rVert_1 \leq C \hbar^{-d}.$$ The constant $C$ depends only on the dimension and the numbers $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}^d_0$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$, $\lVert V_\varepsilon \rVert_{L^\infty(\mathbb{R}^d)}$ and the numbers $C_\alpha$ from Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}.* *Proof.* Under the assumptions of the lemma we have that $a$ and $V_\varepsilon$ satisfy Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}.
Hence if we can find $\theta\in C_0^\infty(\Omega \times \mathbb{R}^d)$ such that for all $N\in\mathbb{N}$ we have that $$\label{EQ:assump_est_func_loc_1} \begin{aligned} \MoveEqLeft \big\lVert \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \\ &- \varphi \mathop{\mathrm{Op_\hbar^w}}(\theta) f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \mathop{\mathrm{Op_\hbar^w}}(\theta) \varphi \big\rVert_1 \leq C_N \hbar^{N}, \end{aligned}$$ for some constants $C_N$, then the result will follow from Lemma [Lemma 22](#LE:verification_of_assumption){reference-type="ref" reference="LE:verification_of_assumption"}, since this lemma gives us that $$\begin{aligned} \lVert \varphi \mathop{\mathrm{Op_\hbar^w}}(\theta) f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \mathop{\mathrm{Op_\hbar^w}}(\theta) \varphi \rVert_1 \leq C \hbar^{-d}. \end{aligned}$$ In order to find such a $\theta$ we observe that since $V_\varepsilon$ and $a_j$ are bounded for all $j\in\{1,\dots,d\}$ there exists a $K>1$ such that $$f(a_{\varepsilon,0}^f(x,p)) =0 \qquad\text{if $|p|\geq K-1$},$$ where we have also used that $f$ is compactly supported and the notation $a_{\varepsilon,0}^f(x,p) = f( (p- \mu a(x))^2 + V_\varepsilon(x) )$. Hence we will choose $\theta\in C_0^\infty(\Omega\times B(0,K+1))$ such that $$\mathop{\mathrm{supp}}(\varphi)\cap \mathop{\mathrm{supp}}(1-\theta) \cap \mathop{\mathrm{supp}}(f(a_{\varepsilon,0}^f) )=\emptyset.$$ Hence from applying Lemma [Lemma 14](#LE:disjoint_supp_PDO){reference-type="ref" reference="LE:disjoint_supp_PDO"} and Theorem [Theorem 17](#THM:func_calc){reference-type="ref" reference="THM:func_calc"} we obtain that $$\label{EQ:assump_est_func_loc_2} \begin{aligned} \lVert \varphi (1- \mathop{\mathrm{Op_\hbar^w}}(\theta)) f(H_{\hbar,\mu,\varepsilon}) \rVert_{\mathrm{op}} \leq C_N\hbar^N.
\end{aligned}$$ By Theorem [Theorem 10](#THM:thm_est_tr){reference-type="ref" reference="THM:thm_est_tr"} and Lemma [Lemma 25](#LE:Func_cal_est_inf_pon){reference-type="ref" reference="LE:Func_cal_est_inf_pon"} we have that $\lVert \mathop{\mathrm{Op_\hbar^w}}(\theta) \rVert_1\leq C\hbar^{-d}$ and $\lVert \varphi f(H_{\hbar,\mu,\varepsilon}) \rVert_{1}\leq C\hbar^{-d}$, respectively. Hence we get that $$\begin{aligned} \MoveEqLeft \big\lVert \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi - \varphi \mathop{\mathrm{Op_\hbar^w}}(\theta) f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \mathop{\mathrm{Op_\hbar^w}}(\theta) \varphi \big\rVert_1 \\ \leq {}& \big\lVert \varphi (1-\mathop{\mathrm{Op_\hbar^w}}(\theta)) f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \big\rVert_1 \\ &+\big\lVert \varphi \mathop{\mathrm{Op_\hbar^w}}(\theta) f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon})(1- \mathop{\mathrm{Op_\hbar^w}}(\theta)) \varphi \big\rVert_1 \\ \leq {}& C\hbar^{-d} \lVert \varphi (1- \mathop{\mathrm{Op_\hbar^w}}(\theta)) f(H_{\hbar,\mu,\varepsilon}) \rVert_{\mathrm{op}} \leq C_N\hbar^N, \end{aligned}$$ where we have used [\[EQ:assump_est_func_loc_2\]](#EQ:assump_est_func_loc_2){reference-type="eqref" reference="EQ:assump_est_func_loc_2"}. This establishes [\[EQ:assump_est_func_loc_1\]](#EQ:assump_est_func_loc_1){reference-type="eqref" reference="EQ:assump_est_func_loc_1"} and concludes the proof. ◻ In the same manner as the previous lemma we will prove an asymptotic formula for the case with a compactly supported potential. **Lemma 29**.
*Let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla-\mu a)^2 +V_\varepsilon$ be a rough Schrödinger operator acting in $L^2(\mathbb{R}^d)$ of regularity $\tau\geq2$ with $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, $\hbar_0$ sufficiently small. Assume that $a_j\in C_0^\infty(\mathbb{R}^d)$ for all $j\in\{1,\dots,d\}$ and $V_\varepsilon \in C_0^\infty(\mathbb{R}^d)$. Suppose there is an open set $\Omega \subset \mathop{\mathrm{supp}}(V_\varepsilon)$ and a $c>0$ such that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Then for $g\in C^{\infty,\gamma}(\mathbb{R})$ with $\gamma\in[0,1]$ and any $\varphi\in C_0^\infty(\Omega)$ it holds that $$\Big|\mathop{\mathrm{Tr}}[\varphi g(H_{\hbar,\mu,\varepsilon})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g(p^2+V_\varepsilon(x))\varphi(x) \,dx dp \Big| \leq C \hbar^{1+\gamma-d}.$$ The constant $C$ depends only on the dimension, the function $g$ and the numbers $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}^d_0$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$, $\lVert V_\varepsilon \rVert_{L^\infty(\mathbb{R}^d)}$ and the numbers $C_\alpha$ from Assumption [Assumption 15](#Assumption:Rough_schrodinger){reference-type="ref" reference="Assumption:Rough_schrodinger"}.* *Proof.* Since $H_{\hbar,\mu,\varepsilon}$ is lower semi-bounded we may assume that $g$ is compactly supported, and we let $f\in C_0^\infty(\mathbb{R})$ be such that $f(t)g(t)=g(t)$ for all $t\in\mathbb{R}$. As in the proof of Lemma [Lemma 28](#LE:assump_est_func_loc){reference-type="ref" reference="LE:assump_est_func_loc"} we let $\theta\in C_0^\infty(\Omega\times B(0,K+1))$ be such that $$\mathop{\mathrm{supp}}(\varphi)\cap \mathop{\mathrm{supp}}(1-\theta) \cap \mathop{\mathrm{supp}}(f(a_{\varepsilon,0}^f) )=\emptyset,$$ where $a_{\varepsilon,0}^f(x,p) = f( (p- \mu a(x))^2 + V_\varepsilon(x) )$.
Then, as in the proof of Lemma [Lemma 28](#LE:assump_est_func_loc){reference-type="ref" reference="LE:assump_est_func_loc"}, we get for all $N\in\mathbb{N}$ that $$\label{EQ:asymp_est_func_loc_1} \mathop{\mathrm{Tr}}[\varphi g(H_{\hbar,\mu,\varepsilon})] = \mathop{\mathrm{Tr}}[\varphi \mathop{\mathrm{Op_\hbar^w}}(\theta) g(H_{\hbar,\mu,\varepsilon})] + \mathcal{O}(\hbar^N).$$ This choice of $\theta$ ensures that the assumptions of Theorem [Theorem 23](#THM:Loc_rough_prob){reference-type="ref" reference="THM:Loc_rough_prob"} are satisfied. Hence we get that $$\label{EQ:asymp_est_func_loc_2} \Big|\mathop{\mathrm{Tr}}[\varphi \mathop{\mathrm{Op_\hbar^w}}(\theta) g(H_{\hbar,\mu,\varepsilon})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g((p- \mu a(x))^2+V_\varepsilon(x))\varphi(x)\theta(x,p) \,dx dp \Big| \leq C \hbar^{1+\gamma-d}.$$ From the support properties of $\theta$ we have that $$\label{EQ:asymp_est_func_loc_3} \int_{\mathbb{R}^{2d}}g((p- \mu a(x))^2+V_\varepsilon(x))\varphi(x)\theta(x,p) \,dx dp = \int_{\mathbb{R}^{2d}}g(p^2+V_\varepsilon(x))\varphi(x) \,dx dp.$$ Combining [\[EQ:asymp_est_func_loc_1\]](#EQ:asymp_est_func_loc_1){reference-type="eqref" reference="EQ:asymp_est_func_loc_1"}, [\[EQ:asymp_est_func_loc_2\]](#EQ:asymp_est_func_loc_2){reference-type="eqref" reference="EQ:asymp_est_func_loc_2"} and [\[EQ:asymp_est_func_loc_3\]](#EQ:asymp_est_func_loc_3){reference-type="eqref" reference="EQ:asymp_est_func_loc_3"} we obtain the desired estimate. This concludes the proof. ◻ **Lemma 30**. *Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$. Suppose $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla-\mu a)^2 +V_\varepsilon$ be the associated rough Schrödinger operator. Assume that $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small.
Moreover, let $\chi_\hbar(t)$ be the function from Remark [Remark 20](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"}, $f\in C_0^\infty(\mathbb{R})$ and $\varphi\in C_0^\infty(\Omega)$. Then it holds for $s\in\mathbb{R}$ that $$\label{EQLE:Func_moll_com_model_reg} \lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi -\varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \rVert_1 \leq C \varepsilon^{2+\kappa} \hbar^{-d-1}.$$ Moreover, suppose there exists some $c>0$ such that $$|V(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}.$$ Then it holds that $$\label{LEEQ:Func_moll_com_model_reg} \lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi \rVert_1 \leq C \hbar^{-d}.$$ The constant $C$ depends only on the dimension and the numbers $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}^d_0$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$ and $\lVert \partial_x^\alpha V \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq 2$.* *Proof.* Let $H_{\hbar,\mu}$ be the magnetic Schrödinger operator associated to $\mathcal{H}_{\hbar,\mu}$.
We then have $$\label{EQ:Func_moll_com_model_reg_0} \begin{aligned} \MoveEqLeft \lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi -\varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \rVert_1 \\ \leq{}& \lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi -\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s)f(H_{\hbar,\mu}) \varphi \rVert_1 \\ &+\lVert \varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s)f(H_{\hbar,\mu}) \varphi - \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \rVert_1. \end{aligned}$$ We start by estimating the first term on the right-hand side of [\[EQ:Func_moll_com_model_reg_0\]](#EQ:Func_moll_com_model_reg_0){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_0"}. By applying Lemma [Lemma 26](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"} we get that $$\label{EQ:Func_moll_com_model_reg_1} \begin{aligned} \MoveEqLeft\lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi -\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s)f(H_{\hbar,\mu}) \varphi \rVert_1 \\ \leq{}& C\hbar^{-d} \lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s) -\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s) \rVert_{\mathrm{op}} \\ &+ C\hbar^{-1} \lVert f(\mathcal{H}_{\hbar,\mu}) \varphi -f(H_{\hbar,\mu}) \varphi \rVert_1 \\ \leq{}& C\hbar^{-d}\lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s) -\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s) \rVert_{\mathrm{op}} + C \hbar^N, \end{aligned}$$ where in the first inequality we have added and subtracted the term $\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi$, used the
triangle inequality, Lemma [Lemma 26](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"} and that $\sup_{t\in\mathbb{R}}\chi_\hbar(t) \leq C\hbar^{-1}$. In the second inequality we have used Lemma [Lemma 26](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"}. We observe that we can write the function $\chi_\hbar(z-s)$ as $$\chi_\hbar(z-s) = \mathcal{F}_\hbar^{-1}[\chi] (z-s).$$ Since $\chi \in C_0^\infty(\mathbb{R})$, it follows from this expression that $z\mapsto\chi_\hbar(z-s)$ extends to an entire function. Hence using the Helffer-Sjöstrand formula (Theorem [Theorem 6](#THM:Helffer-Sjostrand){reference-type="ref" reference="THM:Helffer-Sjostrand"}) we get that $$\label{EQ:Func_moll_com_model_reg_2} \begin{aligned} \MoveEqLeft \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s) -\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s) \\ &=- \frac{1}{\pi} \int_\mathbb{C}\bar{\partial }\tilde{f}(z) \chi_\hbar(z-s) \varphi [(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}] \, L(dz), \end{aligned}$$ where $\tilde{f}$ is an almost analytic extension of $f$. From the proof of Lemma [Lemma 26](#LE:Comparision_Loc_infty){reference-type="ref" reference="LE:Comparision_Loc_infty"} we have the estimate $$\label{EQ:Func_moll_com_model_reg_3} \big \lVert \varphi [(z-\mathcal{H}_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu})^{-1}] \big\rVert_{\mathrm{op}} \leq C_N \frac{\langle z \rangle^{N+\frac{d+1}{2}} \hbar^{2N-d}}{|\mathop{\mathrm{\mathrm{Im}}}(z)|^{2N+2}},$$ since the trace norm dominates the operator norm.
Combining [\[EQ:Func_moll_com_model_reg_2\]](#EQ:Func_moll_com_model_reg_2){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_2"}, [\[EQ:Func_moll_com_model_reg_3\]](#EQ:Func_moll_com_model_reg_3){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_3"} and using the properties of $\tilde{f}$ and $\chi_\hbar$ we obtain that $$\label{EQ:Func_moll_com_model_reg_4} \begin{aligned} \lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s) -\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s) \rVert_{\mathrm{op}} \leq C_N\hbar^N, \end{aligned}$$ where $C_N$ depends on the dimension, the number $N$ and the functions $f$, $\varphi$. Combining the estimates in [\[EQ:Func_moll_com_model_reg_1\]](#EQ:Func_moll_com_model_reg_1){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_1"} and [\[EQ:Func_moll_com_model_reg_4\]](#EQ:Func_moll_com_model_reg_4){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_4"} we obtain that $$\label{EQ:Func_moll_com_model_reg_5} \begin{aligned} \lVert \varphi f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi -\varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s)f(H_{\hbar,\mu}) \varphi \rVert_1 \leq C_N \hbar^N. \end{aligned}$$ We now turn to the second term on the right-hand side of [\[EQ:Func_moll_com_model_reg_0\]](#EQ:Func_moll_com_model_reg_0){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_0"}.
Here we do the same type of estimates as in [\[EQ:Func_moll_com_model_reg_1\]](#EQ:Func_moll_com_model_reg_1){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_1"}: $$\label{EQ:Func_moll_com_model_reg_6} \begin{aligned} \MoveEqLeft \lVert \varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s)f(H_{\hbar,\mu}) \varphi - \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \rVert_1 \\ \leq{}& C\hbar^{-d} \lVert \varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s) - \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) \rVert_{\mathrm{op}} \\ &+ C\hbar^{-1} \lVert f(H_{\hbar,\mu}) \varphi -f(H_{\hbar,\mu,\varepsilon}) \varphi \rVert_1 \\ \leq{}&C\hbar^{-d} \lVert \varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s) - \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) \rVert_{\mathrm{op}}+ C \varepsilon^{2+\kappa} \hbar^{-d-1}, \end{aligned}$$ where the last inequality follows from the proof of Lemma [Lemma 27](#LE:Func_com_model_reg){reference-type="ref" reference="LE:Func_com_model_reg"}. As above we again use the Helffer-Sjöstrand formula and the resolvent formalism and obtain that $$\label{EQ:Func_moll_com_model_reg_7} \begin{aligned} \MoveEqLeft \lVert \varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s) - \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s) \rVert_{\mathrm{op}} \\ &= \frac{1}{\pi} \big\lVert \int_\mathbb{C}\bar{\partial }\tilde{f}(z) \chi_\hbar(z-s) \varphi [(z-H_{\hbar,\mu})^{-1}-(z-H_{\hbar,\mu,\varepsilon})^{-1}] \, L(dz) \big\rVert_{\mathrm{op}} \\ &\leq \frac{1}{\pi} \int_{\mathbb{C}} \frac{| \bar{\partial }\tilde{f}(z)| }{|\mathop{\mathrm{\mathrm{Im}}}(z)|^2} |\chi_\hbar(z-s)| \lVert V-V_{\varepsilon} \rVert_{\mathrm{op}} \, L(dz) \leq C\hbar^{-1} \varepsilon^{2+\kappa}.
\end{aligned}$$ Combining the estimates in [\[EQ:Func_moll_com_model_reg_6\]](#EQ:Func_moll_com_model_reg_6){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_6"} and [\[EQ:Func_moll_com_model_reg_7\]](#EQ:Func_moll_com_model_reg_7){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_7"} we get that $$\label{EQ:Func_moll_com_model_reg_8} \begin{aligned} \lVert \varphi f(H_{\hbar,\mu}) \chi_\hbar(H_{\hbar,\mu}-s)f(H_{\hbar,\mu}) \varphi - \varphi f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi \rVert_1 \leq C \varepsilon^{2+\kappa} \hbar^{-d-1}. \end{aligned}$$ Finally, by combining the estimates in [\[EQ:Func_moll_com_model_reg_0\]](#EQ:Func_moll_com_model_reg_0){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_0"}, [\[EQ:Func_moll_com_model_reg_5\]](#EQ:Func_moll_com_model_reg_5){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_5"} and [\[EQ:Func_moll_com_model_reg_8\]](#EQ:Func_moll_com_model_reg_8){reference-type="eqref" reference="EQ:Func_moll_com_model_reg_8"} we obtain the estimate stated in [\[EQLE:Func_moll_com_model_reg\]](#EQLE:Func_moll_com_model_reg){reference-type="eqref" reference="EQLE:Func_moll_com_model_reg"}. By combining the estimate in [\[EQLE:Func_moll_com_model_reg\]](#EQLE:Func_moll_com_model_reg){reference-type="eqref" reference="EQLE:Func_moll_com_model_reg"} with Lemma [Lemma 28](#LE:assump_est_func_loc){reference-type="ref" reference="LE:assump_est_func_loc"} we can obtain the estimate [\[LEEQ:Func_moll_com_model_reg\]](#LEEQ:Func_moll_com_model_reg){reference-type="eqref" reference="LEEQ:Func_moll_com_model_reg"}. This concludes the proof. ◻ **Lemma 31**. *Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$.
Suppose $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla-\mu a)^2 +V_\varepsilon$ be the associated rough Schrödinger operator. Assume that $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small. Moreover, suppose there exists some $c>0$ such that $$|V(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in \Omega$}$$ and let $\varphi\in C_0^\infty(\Omega)$. Then for $g\in C^{\infty,\gamma}(\mathbb{R})$ with $\gamma\in[0,1]$ it holds that $$\Big|\mathop{\mathrm{Tr}}[\varphi g(\mathcal{H}_{\hbar,\mu})] -\mathop{\mathrm{Tr}}[\varphi g(H_{\hbar,\mu,\varepsilon})] \Big| \leq C \hbar^{1+\gamma-d} + C' \varepsilon^{2+\kappa} \hbar^{-d-1}.$$ The constants $C$ and $C'$ depend on the dimension and the numbers $\lVert f \rVert_{L^\infty(\mathbb{R})}$, $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}^d_0$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$ and $\lVert \partial_x^\alpha V \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq 2$.* *Proof.* Since both operators are lower semi-bounded we may assume that $g$ is compactly supported. Let $f\in C_0^\infty(\mathbb{R})$ be such that $f(t)g(t)= g(t)$ for all $t\in \mathbb{R}$. Moreover, let $\varphi_1\in C_0^\infty(\Omega)$ be such that $\varphi(x)\varphi_1(x) = \varphi(x)$ for all $x\in\mathbb{R}^d$. Finally, let $\chi_\hbar(t)$ be the function from Remark [Remark 20](#RE:mollyfier_def){reference-type="ref" reference="RE:mollyfier_def"} and set $g^{(\hbar)}(t) = g*\chi_\hbar(t)$.
With this notation set up we have that $$\label{EQ:trace_com_model_reg_1} \begin{aligned} \MoveEqLeft \Big|\mathop{\mathrm{Tr}}[\varphi g(\mathcal{H}_{\hbar,\mu})] -\mathop{\mathrm{Tr}}[\varphi g(H_{\hbar,\mu,\varepsilon})] \Big| \\ \leq {}& \lVert \varphi \varphi_1f(\mathcal{H}_{\hbar,\mu}) (g(\mathcal{H}_{\hbar,\mu}) - g^{(\hbar)}(\mathcal{H}_{\hbar,\mu}))f(\mathcal{H}_{\hbar,\mu})\varphi_1 \rVert_1 \\ &+\lVert \varphi \varphi_1 f(H_{\hbar,\mu,\varepsilon}) (g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon}))f(H_{\hbar,\mu,\varepsilon})\varphi_1 \rVert_1 + \lVert \varphi \rVert_{L^\infty(\mathbb{R}^d)} \int_{\mathbb{R}} g_\gamma(s) \,ds \\ &\times \sup_{s\in\mathbb{R}} \lVert\varphi \varphi_1 f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi_1 -\varphi_1 f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi_1\rVert_1. \end{aligned}$$ Lemma [Lemma 28](#LE:assump_est_func_loc){reference-type="ref" reference="LE:assump_est_func_loc"} and Lemma [Lemma 30](#LE:Func_moll_com_model_reg){reference-type="ref" reference="LE:Func_moll_com_model_reg"} give us that the assumptions of Proposition [Proposition 21](#PRO:Tauberian){reference-type="ref" reference="PRO:Tauberian"} are fulfilled with $B$ equal to $\varphi_1 f(\mathcal{H}_{\hbar,\mu})$ and $\varphi_1 f(H_{\hbar,\mu,\varepsilon})$, respectively. Hence we have that $$\label{EQ:trace_com_model_reg_2} \begin{aligned} \lVert \varphi \varphi_1f(\mathcal{H}_{\hbar,\mu}) (g(\mathcal{H}_{\hbar,\mu}) - g^{(\hbar)}(\mathcal{H}_{\hbar,\mu}))f(\mathcal{H}_{\hbar,\mu})\varphi_1 \rVert_1 \leq C \hbar^{1+\gamma-d} \end{aligned}$$ and $$\label{EQ:trace_com_model_reg_3} \begin{aligned} \lVert \varphi \varphi_1 f(H_{\hbar,\mu,\varepsilon}) (g(H_{\hbar,\mu,\varepsilon})-g^{(\hbar)}(H_{\hbar,\mu,\varepsilon}))f(H_{\hbar,\mu,\varepsilon})\varphi_1 \rVert_1 \leq C \hbar^{1+\gamma-d}.
\end{aligned}$$ From applying Lemma [Lemma 30](#LE:Func_moll_com_model_reg){reference-type="ref" reference="LE:Func_moll_com_model_reg"} we get that $$\label{EQ:trace_com_model_reg_4} \begin{aligned} \MoveEqLeft \sup_{s\in\mathbb{R}} \lVert\varphi \varphi_1 f(\mathcal{H}_{\hbar,\mu}) \chi_\hbar(\mathcal{H}_{\hbar,\mu}-s)f(\mathcal{H}_{\hbar,\mu}) \varphi_1 -\varphi_1 f(H_{\hbar,\mu,\varepsilon}) \chi_\hbar(H_{\hbar,\mu,\varepsilon}-s)f(H_{\hbar,\mu,\varepsilon}) \varphi_1\rVert_1 \\ &\leq C\varepsilon^{2+\kappa} \hbar^{-d-1}. \end{aligned}$$ Finally, combining the estimates in [\[EQ:trace_com_model_reg_1\]](#EQ:trace_com_model_reg_1){reference-type="eqref" reference="EQ:trace_com_model_reg_1"}, [\[EQ:trace_com_model_reg_2\]](#EQ:trace_com_model_reg_2){reference-type="eqref" reference="EQ:trace_com_model_reg_2"}, [\[EQ:trace_com_model_reg_3\]](#EQ:trace_com_model_reg_3){reference-type="eqref" reference="EQ:trace_com_model_reg_3"} and [\[EQ:trace_com_model_reg_4\]](#EQ:trace_com_model_reg_4){reference-type="eqref" reference="EQ:trace_com_model_reg_4"} we obtain the desired estimate and this concludes the proof. ◻ # Local model problem {#sec:model_prob} Before we state and prove our local model problem we will state a result on comparison of phase-space integrals that we will need later. **Lemma 32**. *Suppose $\Omega\subset\mathbb{R}^d$ is an open set and let $\varphi\in C_0^\infty(\Omega)$. Moreover, let $\varepsilon>0$, $\hbar\in(0,\hbar_0]$ and $V,V_\varepsilon\in L^1_{loc}(\mathbb{R}^d)\cap C(\Omega)$.
Suppose that $$\label{EQLE:comparison_phase_space_int} \lVert V-V_\varepsilon \rVert_{L^\infty(\Omega)}\leq c\varepsilon^{k+\mu}.$$ Then for $\gamma\in[0,1]$ and $\varepsilon$ sufficiently small it holds that $$\label{LEEQ:Loc_mod_prob_5} \begin{aligned} \Big| \int_{\mathbb{R}^{2d}} [g_\gamma(p^2+V_\varepsilon(x))-g_\gamma(p^2+V(x))]\varphi(x) \,dx dp \Big| \leq C\varepsilon^{k+\mu}, \end{aligned}$$ where the constant $C$ depends on the dimension and the numbers $\gamma$ and $c$ in [\[EQLE:comparison_phase_space_int\]](#EQLE:comparison_phase_space_int){reference-type="eqref" reference="EQLE:comparison_phase_space_int"}.* *Proof.* Firstly we observe that due to [\[EQLE:comparison_phase_space_int\]](#EQLE:comparison_phase_space_int){reference-type="eqref" reference="EQLE:comparison_phase_space_int"} we have that $$\label{EQ:comparison_phase_space_int_1} \sup_{x\in\Omega}\big|V(x)_{-}-V_\varepsilon(x)_{-}\big|\leq c\varepsilon^{k+\mu}.$$ To compare the phase-space integrals we start by evaluating the integral in $p$. This yields $$\label{EQ:comparison_phase_space_int_2} \begin{aligned} \MoveEqLeft \int_{\mathbb{R}^{2d}} [g_\gamma(p^2+V_\varepsilon(x))-g_\gamma(p^2+V(x))]\varphi(x) \,dx dp \\ &= (2\pi)^d L_{\gamma,d}^{\mathrm{cl}} \int_{\mathbb{R}^d} \big[V_\varepsilon(x)_{-}^{\frac{d}{2}+\gamma} - V(x)_{-}^{\frac{d}{2}+\gamma}\big] \varphi(x) \,dx, \end{aligned}$$ where the constant $L_{\gamma,d}^{\mathrm{cl}}$ is given by $$\begin{aligned} L_{\gamma,d}^{\mathrm{cl}} = \frac{\Gamma(\gamma+1)}{(4\pi)^{\frac{d}{2}}\Gamma(\gamma+\frac{d}{2}+1)}, \end{aligned}$$ and $\Gamma$ denotes the standard gamma function. Since both $V_\varepsilon(x)_{-}$ and $V(x)_{-}$ are non-negative and bounded on $\mathop{\mathrm{supp}}(\varphi)$ and $d\geq3$, we can use that the map $r\mapsto r^{\frac{d}{2}+\gamma}$ is uniformly Lipschitz continuous when restricted to a compact domain.
This gives us that $$\label{EQ:comparison_phase_space_int_3} \begin{aligned} \int_{\mathbb{R}^d} \big[V_\varepsilon(x)_{-}^{\frac{d}{2}+\gamma} - V(x)_{-}^{\frac{d}{2}+\gamma}\big] \varphi(x) \,dx \leq C_\gamma \int_{\mathbb{R}^d} \big|V_\varepsilon(x)_{-} - V(x)_{-}\big| \varphi(x) \,dx \leq \tilde{C}_\gamma \varepsilon^{k+\mu}, \end{aligned}$$ where we have used [\[EQ:comparison_phase_space_int_1\]](#EQ:comparison_phase_space_int_1){reference-type="eqref" reference="EQ:comparison_phase_space_int_1"} and that $\mathop{\mathrm{supp}}(\varphi)\subset \Omega$. From combining [\[EQ:comparison_phase_space_int_2\]](#EQ:comparison_phase_space_int_2){reference-type="eqref" reference="EQ:comparison_phase_space_int_2"} and [\[EQ:comparison_phase_space_int_3\]](#EQ:comparison_phase_space_int_3){reference-type="eqref" reference="EQ:comparison_phase_space_int_3"} we obtain the desired estimate and this concludes the proof. ◻ With this established we can now state our model problem. **Theorem 33**. *Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$ and let $\gamma\in[0,1]$. Suppose $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and let $H_{\hbar,\mu} = (-i\hbar\nabla-\mu a)^2 +V$ be the associated Schrödinger operator. Assume that $\mu\leq\mu_0<1$ and $\hbar\in(0,\hbar_0]$, with $\hbar_0$ sufficiently small. 
Moreover, suppose there exists some $c>0$ such that $$|V(x)| +\hbar^{\frac{2}{3}} \geq c \qquad\text{for all $x\in\Omega$}.$$ Then for any $\varphi\in C_0^\infty(\Omega)$ it holds that $$\Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V(x)) \varphi(x) \,dx dp \Big| \leq C \hbar^{1+\gamma-d},$$ where the constant $C$ depends on the dimension, the numbers $\lVert \partial^\alpha\varphi \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}^d_0$, $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ with $|\alpha|\geq1$ and $j\in\{1,\dots,d\}$ and $\lVert \partial_x^\alpha V \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq 2$.* *Proof of Theorem [Theorem 33](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"}.* Let $H_{\hbar,\mu,\varepsilon} = (-i\hbar\nabla-\mu a)^2 +V_\varepsilon$ be the rough Schrödinger operator associated to $\mathcal{H}_{\hbar,\mu}$. In the construction of $V_\varepsilon$ we have chosen $\varepsilon=\hbar^{1-\delta}$, where $$\delta=\frac{\kappa-\gamma}{2+\kappa}.$$ Note that since we assume $\kappa>\gamma$, we have that $1>\delta>0$.
With this choice of $\varepsilon$ and $\delta$ we have that $$\label{EQ:Loc_mod_prob_1} \varepsilon^{2+\kappa} = \hbar^{(1-\delta)(2+\kappa)} = \hbar^{2+\gamma}.$$ Moreover, since we have assumed a non-critical condition for our original problem we get that there exists a constant $\tilde{c}$ such that for all $\varepsilon$ sufficiently small it holds that $$|V_\varepsilon(x)| +\hbar^{\frac{2}{3}} \geq \tilde{c} \qquad\text{for all $x\in\Omega$}.$$ With this in place we have that $$\label{EQ:Loc_mod_prob_2} \begin{aligned} \MoveEqLeft \Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V(x)) \varphi(x) \,dx dp \Big| \\ \leq{}& \Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] -\mathop{\mathrm{Tr}}[\varphi g_\gamma(H_{\hbar,\mu,\varepsilon})] \Big| \\ &+ \Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(H_{\hbar,\mu,\varepsilon})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V_\varepsilon(x)) \varphi(x) \,dx dp \Big| \\ &+ \Big|\frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} (g_\gamma(p^2+V_\varepsilon(x))- g_\gamma(p^2+V(x))) \varphi(x) \,dx dp \Big|. \end{aligned}$$ We have by Lemma [Lemma 31](#LE:trace_com_model_reg){reference-type="ref" reference="LE:trace_com_model_reg"} that $$\label{EQ:Loc_mod_prob_3} \Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] -\mathop{\mathrm{Tr}}[\varphi g_\gamma(H_{\hbar,\mu,\varepsilon})] \Big| \leq C \hbar^{1+\gamma-d} + C \varepsilon^{2+\kappa} \hbar^{-d-1} = \tilde{C} \hbar^{1+\gamma-d},$$ where we in the last equality have used [\[EQ:Loc_mod_prob_1\]](#EQ:Loc_mod_prob_1){reference-type="eqref" reference="EQ:Loc_mod_prob_1"}.
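The exponent identity $\varepsilon^{2+\kappa}=\hbar^{2+\gamma}$ for $\varepsilon=\hbar^{1-\delta}$ with $\delta=(\kappa-\gamma)/(2+\kappa)$ is easy to misplace by one term, so it can be sanity-checked numerically. The sketch below uses arbitrary sample ranges (not values from the paper):

```python
import random

# Numerical sanity check of eps^(2+kappa) = hbar^(2+gamma) for
# eps = hbar^(1-delta), delta = (kappa - gamma)/(2 + kappa).
# The sampled parameter ranges are arbitrary test values.
random.seed(1)
max_rel_err = 0.0
for _ in range(100):
    gamma = random.uniform(0.0, 1.0)
    kappa = random.uniform(gamma + 0.01, 3.0)   # the assumption kappa > gamma
    hbar = random.uniform(0.01, 1.0)
    delta = (kappa - gamma) / (2 + kappa)
    assert 0 < delta < 1                        # as noted in the proof
    eps = hbar ** (1 - delta)
    rel_err = abs(eps ** (2 + kappa) - hbar ** (2 + gamma)) / hbar ** (2 + gamma)
    max_rel_err = max(max_rel_err, rel_err)
```

Algebraically, $(1-\delta)(2+\kappa) = 2+\kappa-(\kappa-\gamma) = 2+\gamma$, which is exactly what the check confirms.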
From Lemma [Lemma 29](#LE:asymp_est_func_loc){reference-type="ref" reference="LE:asymp_est_func_loc"} we get that $$\label{EQ:Loc_mod_prob_4} \begin{aligned} \Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(H_{\hbar,\mu,\varepsilon})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V_\varepsilon(x)) \varphi(x) \,dx dp \Big| \leq C\hbar^{1+\gamma-d}. \end{aligned}$$ To estimate the last contribution in [\[EQ:Loc_mod_prob_2\]](#EQ:Loc_mod_prob_2){reference-type="eqref" reference="EQ:Loc_mod_prob_2"} we first notice that by construction of $V_\varepsilon$ we have that $$\sup_{x\in\Omega}\big|V(x)_{-}-V_\varepsilon(x)_{-}\big|\leq C\varepsilon^{2+\kappa} = C \hbar^{2+\gamma}.$$ Hence it follows from Lemma [Lemma 32](#LE:comparison_phase_space_int){reference-type="ref" reference="LE:comparison_phase_space_int"} that $$\label{EQ:Loc_mod_prob_9} \begin{aligned} \Big|\frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}} (g_\gamma(p^2+V_\varepsilon(x))- g_\gamma(p^2+V(x))) \varphi(x) \,dx dp \Big| \leq C\hbar^{2+\gamma-d}. \end{aligned}$$ Finally by combining [\[EQ:Loc_mod_prob_2\]](#EQ:Loc_mod_prob_2){reference-type="eqref" reference="EQ:Loc_mod_prob_2"}, [\[EQ:Loc_mod_prob_3\]](#EQ:Loc_mod_prob_3){reference-type="eqref" reference="EQ:Loc_mod_prob_3"}, [\[EQ:Loc_mod_prob_4\]](#EQ:Loc_mod_prob_4){reference-type="eqref" reference="EQ:Loc_mod_prob_4"} and [\[EQ:Loc_mod_prob_9\]](#EQ:Loc_mod_prob_9){reference-type="eqref" reference="EQ:Loc_mod_prob_9"} we obtain the desired estimate and this concludes the proof. ◻ # Proof of Theorem [Theorem 2](#THM:Local_two_derivative){reference-type="ref" reference="THM:Local_two_derivative"} {#sec:proof_main} This section is devoted to the proof of Theorem [Theorem 2](#THM:Local_two_derivative){reference-type="ref" reference="THM:Local_two_derivative"}. The proof is based on the multi-scale techniques of [@MR1343781] (see also [@MR1631419; @MR1240575]).
Before we start the proof we will recall the following lemma, which is Lemma 5.4 in [@MR1343781]. **Lemma 34**. *Let $\Omega\subset\mathbb{R}^d$ be an open set and let $l$ be a function in $C^1(\bar{\Omega})$ such that $l>0$ on $\bar{\Omega}$ and assume that there exists $\rho$ in $(0,1)$ such that $$\left| \nabla_x l(x) \right| \leq \rho,$$ for all $x$ in $\Omega$.* *Then* 1. *There exists a sequence $\{x_k\}_{k=0}^\infty$ in $\Omega$ such that the open balls $B(x_k,l(x_k))$ form a covering of $\Omega$. Furthermore, there exists a constant $N_\rho$, depending only on the constant $\rho$, such that the intersection of more than $N_\rho$ balls is empty.* 2. *One can choose a sequence $\{\varphi_k\}_{k=0}^\infty$ such that $\varphi_k \in C_0^\infty(B(x_k,l(x_k)))$ for all $k$ in $\mathbb{N}$. Moreover, for all multiindices $\alpha$ and all $k$ in $\mathbb{N}$ $$\left| \partial_x^\alpha \varphi_k(x) \right|\leq C_\alpha l(x_k)^{-{\left| \alpha \right|}},$$ and $$\sum_{k=1}^\infty \varphi_k(x) = 1,$$ for all $x$ in $\Omega$.* The proof of the lemma is analogous to the proof of [@MR1996773 Theorem 1.4.10]. Before we give a proof of Theorem [Theorem 2](#THM:Local_two_derivative){reference-type="ref" reference="THM:Local_two_derivative"} we will prove the following theorem, where we have an additional assumption on the magnetic field compared to Theorem [Theorem 2](#THM:Local_two_derivative){reference-type="ref" reference="THM:Local_two_derivative"}. **Theorem 35**. *Let $\mathcal{H}_{\hbar,\mu}$ be an operator acting in $L^2(\mathbb{R}^d)$ and let $\gamma\in[0,1]$. If $\gamma=0$ we assume $d\geq3$ and if $\gamma\in(0,1]$ we assume $d\geq4$. Suppose that $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the set $\Omega$ and the functions $V$ and $a_j$ for $j\in\{1,\dots,d\}$.
Then for any $\varphi\in C_0^\infty(\Omega)$ it holds that $$\Big|\mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma(p^2+V(x)) \varphi(x) \,dx dp \Big| \leq C \hbar^{1+\gamma-d}$$ for all $\hbar\in(0,\hbar_0]$ and $\mu\leq \mu_0<1$, $\hbar_0$ sufficiently small. The constant $C$ depends on the dimension, the numbers $\lVert \partial^\alpha \varphi \rVert_{L^\infty(\mathbb{R}^d)}$ and $\lVert \partial^\alpha a_j \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ and $j\in\{1,\dots,d\}$ and $\lVert \partial_x^\alpha V \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$ such that $|\alpha|\leq 2$.* *Proof.* Since $\varphi\in C_0^\infty(\Omega)$ there exists a number $\epsilon>0$ such that $$\mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(\varphi), \Omega^{c}) >\epsilon.$$ We need this number to ensure we stay in the region where $\mathcal{H}_{\hbar,\mu}$ behaves as a magnetic Schrödinger operator. We let $$l(x) = A^{-1}\sqrt{ |V(x)|^2 + \hbar^\frac{4}{3}} \quad\text{and}\quad f(x)=\sqrt{l(x)},$$ where we choose $A >0$ sufficiently large such that $$\label{EQ:Global_one_derivative_1} l(x) \leq \min\big(\tfrac{\epsilon}{11},1\big) \quad\text{and}\quad |\nabla l(x)|\leq \rho <\frac{1}{8}$$ for all $x\in\overline{\mathop{\mathrm{supp}}(\varphi)}$. Note that we can choose $A$ independent of $\hbar$ and uniformly for $\hbar\in(0,\hbar_0]$; indeed, $|\nabla l(x)|\leq A^{-1}|\nabla V(x)|$. Moreover, since $Al(x)=\sqrt{|V(x)|^2+\hbar^{\frac43}}\geq|V(x)|$, we have that $$\label{EQ:Global_one_derivative_2} |V(x)| \leq A l(x).$$ We use Lemma [Lemma 34](#LE:partition_lemma){reference-type="ref" reference="LE:partition_lemma"} with the set $\mathop{\mathrm{supp}}(\varphi)$ and the function $l(x)$. We can do this since, due to the presence of $\hbar$ in the definition of $l$, we have $l>0$.
By Lemma [Lemma 34](#LE:partition_lemma){reference-type="ref" reference="LE:partition_lemma"} with the set $\mathop{\mathrm{supp}}(\varphi)$ and the function $l(x)$ there exists a sequence $\{x_k\}_{k=1}^\infty$ in $\mathop{\mathrm{supp}}(\varphi)$ such that $\mathop{\mathrm{supp}}(\varphi) \subset \cup_{k\in\mathbb{N}} B(x_k,l(x_k))$ and there exists a constant $N_{\frac{1}{8}}$ such that at most $N_{\frac{1}{8}}$ of the sets $B(x_k,l(x_k))$ can have a non-empty intersection. Moreover, there exists a sequence $\{\varphi_{k}\}_{k=1}^\infty$ such that $\varphi_k\in C_0^\infty(B(x_k,l(x_k)))$, $$\label{EQ:Global_one_derivative_2.5} \big| \partial_x^\alpha \varphi_k(x) \big| \leq C_\alpha l(x_k)^{-|\alpha|} \qquad\text{for all $\alpha\in\mathbb{N}_0^d$},$$ and $$\sum_{k=1}^\infty \varphi_k(x) =1 \qquad\text{for all $x\in\mathop{\mathrm{supp}}(\varphi)$}.$$ We have that $\cup_{k\in\mathbb{N}} B(x_k,l(x_k))$ is an open covering of $\mathop{\mathrm{supp}}(\varphi)$ and since this set is compact there exists a finite subset $\mathcal{I}'\subset \mathbb{N}$ such that $$\mathop{\mathrm{supp}}(\varphi) \subset \bigcup_{k\in\mathcal{I}'} B(x_k,l(x_k)).$$ In order to ensure that we have a finite partition of unity over the set $\mathop{\mathrm{supp}}(\varphi)$ we define the set $$\mathcal{I} = \bigcup_{j\in\mathcal I'} \Big\{ k\in\mathbb{N}\,\big|\, B(x_k,l(x_k))\cap B(x_j,l(x_j)) \neq \emptyset \Big\}.$$ We have that $\mathcal{I}$ is still finite since at most $N_{\frac{1}{8}}$ balls can have non-empty intersection. Moreover, we have that $$\sum_{k\in\mathcal{I}} \varphi_k(x) =1 \qquad\text{for all $x\in\mathop{\mathrm{supp}}(\varphi)$}.$$ From this we get the following identity $$\label{EQ:Rough_weyl_asymptotics_2.6} \mathop{\mathrm{Tr}}[\varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] = \sum_{k\in\mathcal{I}} \mathop{\mathrm{Tr}}[\varphi_k \varphi g_\gamma(\mathcal{H}_{\hbar,\mu})] ,$$ where we have used linearity of the trace.
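The covering construction of Lemma 34 can be illustrated concretely. The one-dimensional sketch below uses a hypothetical scale function satisfying the gradient bound $|l'|\leq\rho$, greedily places centers, and checks covering and bounded overlap on a grid; all concrete values are arbitrary choices for illustration:

```python
# Sketch (1D analogue of the covering in Lemma 34): cover (0, 1) with
# balls B(x_k, l(x_k)) for a slowly varying scale function l, and check
# covering and bounded overlap. The scale function l is a hypothetical
# example with |l'| = 0.05 <= rho = 1/8.
rho = 1 / 8
l = lambda x: 0.05 * (1 + x)

# Greedy centers: step by half the local scale, so consecutive balls
# overlap and together cover [0, 1].
centers = [0.0]
while centers[-1] < 1.0:
    centers.append(centers[-1] + l(centers[-1]) / 2)

# Count, for each grid point, how many balls contain it.
grid = [i / 1000 for i in range(1001)]
counts = [sum(1 for c in centers if abs(y - c) < l(c)) for y in grid]
```

Every grid point lies in at least one ball (covering), while no point lies in more than a bounded number of balls, mirroring the constant $N_\rho$ in the lemma.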
In what follows we will use the following notation $$l_k=l(x_k), \quad f_k=f(x_k), \quad h_k = \frac{\hbar}{l_kf_k} \quad\text{and}\quad \mu_k = \frac{\mu l_k}{f_k}.$$ We have that $h_k$ is uniformly bounded from above since $$l(x)f(x) = A^{-\frac32}(\left| V(x) \right|^2+\hbar^{\frac43})^{\frac34} \geq A^{-\frac32} \hbar,$$ for all $x\in\mathbb{R}^d$. Moreover, due to our choice of $f$ and $l$ we have that $\mu_k$ is bounded from above by $\mu_0$ since for all $x\in\mathbb{R}^d$ we have that $$\label{EQ:Rough_weyl_asymptotics_3} \frac{ l(x)}{f(x)} = \sqrt{l(x)} \leq 1.$$ We define the two unitary operators $U_l$ and $T_z$ by $$U_l f(x) = l^{\frac{d}{2}} f( l x) \quad\text{and}\quad T_zf(x)=f(x+z) \qquad\text{for $f\in L^2(\mathbb{R}^d)$}.$$ Moreover we set $$\begin{aligned} \tilde{\mathcal{H}}_{h_k,\mu_k} = f_k^{-2} (T_{x_k} U_{l_k}) \mathcal{H}_{\hbar,\mu} (T_{x_k} U_{l_k})^{*}. \end{aligned}$$ Since $\mathcal{H}_{\hbar,\mu}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $\Omega$ and the functions $V$ and $a_j$ for all $j\in\{1,\dots,d\}$, the rescaled operator $\tilde{\mathcal{H}}_{h_k,\mu_k}$ satisfies Assumption [Assumption 1](#Assumption:local_potential_1){reference-type="ref" reference="Assumption:local_potential_1"} with the open set $B(0,10)$ and the functions $\tilde{V}$ and $\tilde{a}_j$ for all $j\in\{1,\dots,d\}$, where $$\tilde{V}(x)=f_k^{-2} V(l_kx+x_k) \quad\text{and}\quad \tilde{a}_j(x) = l_k^{-1} a_j(l_kx+x_k) \quad\text{for all $j\in\{1,\dots,d\}$}.$$ We will here need to establish that this rescaled operator satisfies the assumptions of Theorem [Theorem 33](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} with the parameters $h_k$ and $\mu_k$ and the set $B(0,8)$.
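As a quick check that $U_l$ is indeed unitary on $L^2$, the one-dimensional sketch below verifies that $f\mapsto l^{1/2}f(l\,\cdot)$ preserves the $L^2(\mathbb{R})$ norm for a Gaussian test function (the test function, grid, and value of $l$ are arbitrary choices):

```python
import math

# Check numerically that (U_l f)(x) = l^(1/2) f(l x) preserves the
# L^2(R) norm (the d = 1 case of the unitary U_l), using a Gaussian
# test function and a Riemann sum on a large symmetric grid.
l, h, R = 0.3, 0.001, 60.0
xs = [-R + i * h for i in range(int(2 * R / h) + 1)]
f = lambda x: math.exp(-x ** 2)
norm_f = sum(f(x) ** 2 for x in xs) * h
norm_Ulf = sum(l * f(l * x) ** 2 for x in xs) * h
```

Both sums approximate $\int_{\mathbb{R}} e^{-2x^2}\,dx = \sqrt{\pi/2}$, and the change of variables $u = lx$ shows the agreement is exact in the continuum.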
Since $h_k$ is bounded from above and $\mu_k\leq\mu_0$, as established above, what remains is to verify that we have a non-critical condition. To establish this we firstly observe that by [\[EQ:Global_one_derivative_1\]](#EQ:Global_one_derivative_1){reference-type="eqref" reference="EQ:Global_one_derivative_1"} we have $$\label{EQ:Global_one_derivative_3.5} (1-8\rho) l_k \leq l(x) \leq (1+8\rho) l_k \qquad\text{for all $x \in B(x_k,8l_k)$}.$$ Using [\[EQ:Global_one_derivative_3.5\]](#EQ:Global_one_derivative_3.5){reference-type="eqref" reference="EQ:Global_one_derivative_3.5"} we have for $x$ in $B(0,8)$ that $$\begin{aligned} \left| \tilde{V}(x) \right| + h_k^{\frac{2}{3}} &= f_k^{-2} \left| V(l_kx+x_k) \right| + (\tfrac{\hbar}{f_k l_k})^{\frac{2}{3}} =l_k^{-1}( \left| V(l_kx+x_k) \right| +\hbar^{\frac23}) \\ &\geq l_k^{-1} A l(l_k x+x_k) \geq (1-8\rho) A. \end{aligned}$$ Hence we have a non-critical condition for all $x\in B(0,8)$. What remains is to verify that the norms of the functions $\widetilde{\varphi_k\varphi}=(T_{x_k} U_{l_k})\varphi_k\varphi(T_{x_k} U_{l_k})^{*}$, $\tilde{V}$ and $\tilde{a}_j$ for all $j\in\{1,\dots,d\}$ are independent of $\hbar$ and $k$. Due to [\[EQ:Global_one_derivative_2\]](#EQ:Global_one_derivative_2){reference-type="eqref" reference="EQ:Global_one_derivative_2"} and the fact that $l$ is slowly varying, [\[EQ:Global_one_derivative_3.5\]](#EQ:Global_one_derivative_3.5){reference-type="eqref" reference="EQ:Global_one_derivative_3.5"}, we have that $$\lVert \tilde{V} \rVert_{L^\infty(B(0,8))} = \sup_{x\in B(0,8)} \big| f_k^{-2} V(l_kx+x_k) \big| \leq (1+8\rho)A.$$ For $\alpha\in\mathbb{N}^d_0$ with $1\leq|\alpha|\leq2$ we have that $$\begin{aligned} \lVert \partial_x^\alpha \tilde{V}(x) \rVert_{L^\infty(B(0,8))} = f_k^{-2} l_k^{|\alpha|} \sup_{x\in B(0,8)} |(\partial_x^\alpha V)(l_kx+x_k)| \leq \lVert \partial_x^\alpha V(x) \rVert_{L^\infty(\mathbb{R}^d)}.
\end{aligned}$$ For $\alpha\in\mathbb{N}^d_0$ with $|\alpha|\geq1$ we have that $$\begin{aligned} \lVert \partial_x^\alpha \tilde{a}_j(x) \rVert_{L^\infty(B(0,8))} = l_k^{|\alpha|-1} \sup_{x\in B(0,8)} |(\partial_x^\alpha a_j)(l_kx+x_k)| \leq \lVert \partial_x^\alpha a_j(x) \rVert_{L^\infty(\mathbb{R}^d)}, \end{aligned}$$ for all $j\in\{1,\dots,d\}$. Both bounds are independent of $k$ and $\hbar$. The last numbers to check are $\lVert \partial_x^\alpha \widetilde{\varphi_k\varphi} \rVert_{L^\infty(\mathbb{R}^d)}$ for all $\alpha\in\mathbb{N}_0^d$. Here we have by construction of $\varphi_k$ ([\[EQ:Global_one_derivative_2.5\]](#EQ:Global_one_derivative_2.5){reference-type="eqref" reference="EQ:Global_one_derivative_2.5"}) for all $\alpha\in\mathbb{N}_0^d$ that $$\begin{aligned} \lVert \partial_x^\alpha \widetilde{\varphi_k\varphi} \rVert_{L^\infty(\mathbb{R}^d)} &=\sup_{x\in\mathbb{R}^d} \left| l_k^{\left| \alpha \right|} \sum_{\beta\leq \alpha} {\binom{\alpha}{\beta}} (\partial_x^{\beta}\varphi_k)(l_k x+x_k) (\partial_x^{\alpha - \beta}\varphi)(l_kx+x_k) \right| \\ &\leq C_\alpha \sup_{x\in\mathbb{R}^d} \sum_{\beta\leq \alpha} {\binom{\alpha}{\beta}} l_k^{\left| \alpha-\beta \right| }\left| (\partial_x^{\alpha - \beta} \varphi)(l_kx+x_k) \right| \leq \widetilde{C}_\alpha. \end{aligned}$$ With this we have established that all the numbers on which the constant from Theorem [Theorem 33](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} depends are independent of $\hbar$ and $k$.
From applying Theorem [Theorem 33](#THM:Loc_mod_prob){reference-type="ref" reference="THM:Loc_mod_prob"} we get that $$\label{EQ:Global_one_derivative_6} \begin{aligned} \MoveEqLeft \big| \mathop{\mathrm{Tr}}[\varphi g_\gamma (\mathcal{H}_{\hbar,\mu}) ] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi(x) \,dx dp \big| \\ \leq {}& \sum_{k\in\mathcal{I}}\big| \mathop{\mathrm{Tr}}[\varphi_k\varphi g_\gamma (\mathcal{H}_{\hbar,\mu}) ] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi_k\varphi(x) \,dx dp \big| \\ \leq {} & \sum_{k\in\mathcal{I}} f_k^{2\gamma} \big|\mathop{\mathrm{Tr}}[ g_\gamma(\tilde{\mathcal{H}}_{h_k,\mu_k}) \widetilde{\varphi_k\varphi} ] - \frac{1}{(2\pi h_k)^d} \int_{\mathbb{R}^{2d}} g_\gamma( p^2+\tilde{V}(x))\widetilde{\varphi_k\varphi}(x) \,dx dp \big| \\ \leq {} & C \sum_{k\in\mathcal{I}} h_k^{1+\gamma-d}f_k^{2\gamma}. \end{aligned}$$ When we consider the sum over the error terms we have $$\label{EQ:Global_one_derivative_7} \begin{aligned} \sum_{k\in\mathcal{I}} C h_k^{1+\gamma-d}f_k^{2\gamma} &= \sum_{k\in\mathcal{I}} \tilde{C} \hbar^{1+\gamma-d} \int_{B(x_k,l_k)} l_k^{-d} f_k^{2\gamma}(l_kf_k)^{d-1-\gamma} \,dx \\ & = \sum_{k\in\mathcal{I}} \tilde{C} \hbar^{1+\gamma-d} \int_{B(x_k,l_k)} l_k^{\gamma-d} l_k^{\frac{3d-3-3\gamma}{2}} \,dx \\ &\leq \sum_{k\in\mathcal{I}} \hat{C} \hbar^{1+\gamma-d} \int_{B(x_k,l_k)} l(x)^{\frac{d -3 -\gamma}{2}}\,dx \leq C \hbar^{1+\gamma-d}, \end{aligned}$$ where we have used the definition of $f_k$ and that $l$ is slowly varying. In the last inequality we have used that $\mathop{\mathrm{supp}}(\varphi)$ is compact, which ensures that the constant is finite, together with our assumptions on the dimension, which guarantee that the exponent $\frac{d-3-\gamma}{2}$ is non-negative.
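The power counting in the last display can be checked numerically: with $f=\sqrt{l}$ as in the proof, $l^{-d}f^{2\gamma}(lf)^{d-1-\gamma}=l^{(d-3-\gamma)/2}$. The sketch below verifies this over arbitrary random samples:

```python
import math
import random

# Check the exponent bookkeeping behind the error sum: with f = l^(1/2),
#   l^(-d) * f^(2*gamma) * (l*f)^(d-1-gamma) = l^((d-3-gamma)/2).
# The sampled ranges are arbitrary test values.
random.seed(0)
max_rel_err = 0.0
for _ in range(100):
    l = random.uniform(0.01, 1.0)
    d = random.choice([3, 4, 5])
    gamma = random.uniform(0.0, 1.0)
    f = math.sqrt(l)
    lhs = l ** (-d) * f ** (2 * gamma) * (l * f) ** (d - 1 - gamma)
    rhs = l ** ((d - 3 - gamma) / 2)
    max_rel_err = max(max_rel_err, abs(lhs - rhs) / rhs)
```

On the level of exponents this is $-d + \gamma + \tfrac{3}{2}(d-1-\gamma) = \tfrac{d-3-\gamma}{2}$, which is the quantity that must be non-negative for the final bound.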
Combining the estimates in [\[EQ:Global_one_derivative_6\]](#EQ:Global_one_derivative_6){reference-type="eqref" reference="EQ:Global_one_derivative_6"} and [\[EQ:Global_one_derivative_7\]](#EQ:Global_one_derivative_7){reference-type="eqref" reference="EQ:Global_one_derivative_7"} we obtain the desired estimate. This concludes the proof. ◻ We are now ready to give a proof of our main theorem. Most of the work has already been done in establishing Theorem [Theorem 35](#THM:Local_two_derivative_almost_there){reference-type="ref" reference="THM:Local_two_derivative_almost_there"}. Compared to Theorem [Theorem 35](#THM:Local_two_derivative_almost_there){reference-type="ref" reference="THM:Local_two_derivative_almost_there"}, what remains in establishing Theorem [Theorem 2](#THM:Local_two_derivative){reference-type="ref" reference="THM:Local_two_derivative"} is to allow $\mu\leq C\hbar^{-1}$ for some positive constant $C$, instead of requiring $\mu$ to be bounded by $1$. The argument is identical to the one used in [@MR1343781] for this purpose; we have included it for the sake of completeness. *Proof of Theorem [Theorem 2](#THM:Local_two_derivative){reference-type="ref" reference="THM:Local_two_derivative"}.* Since the theorem has already been established for $\mu\leq\mu_0<1$, we can without loss of generality assume that $\mu\geq \mu_0$, where $\mu_0<1$. We will use the same scaling technique as in the proof of Theorem [Theorem 35](#THM:Local_two_derivative_almost_there){reference-type="ref" reference="THM:Local_two_derivative_almost_there"}. Again we have an $\epsilon>0$ such that $$\mathop{\mathrm{dist}}(\mathop{\mathrm{supp}}(\varphi), \Omega^{c}) >\epsilon$$ since $\varphi\in C_0^\infty(\Omega)$.
This time however, we let $$l(x) = \min\big(1,\tfrac{\epsilon}{11}\big) \frac{ \mu_0}{ \mu} \quad\text{and}\quad f(x)=1.$$ We can again use Lemma [Lemma 34](#LE:partition_lemma){reference-type="ref" reference="LE:partition_lemma"} with $l$ from above to construct the partition of unity for $\mathop{\mathrm{supp}}(\varphi)$. After this we do the rescaling as above with unitary conjugations. For this case we get $$h_k = \frac{\hbar}{l_kf_k} = \frac{\hbar\mu}{\min\big(1,\tfrac{\epsilon}{11}\big)\mu_0} \leq C\hbar \mu \leq \tilde{C} \quad\text{and}\quad \mu_k = \frac{\mu l_k}{f_k} = \mu \min\big(1,\tfrac{\epsilon}{11}\big) \frac{ \mu_0}{ \mu} \leq \mu_0.$$ Moreover, we can analogously to above verify that all norm bounds are independent of $k$, $\mu$ and $\hbar$. So after rescaling we have operators satisfying the assumptions of Theorem [Theorem 35](#THM:Local_two_derivative_almost_there){reference-type="ref" reference="THM:Local_two_derivative_almost_there"}. From applying this theorem we get, analogously to the calculation in [\[EQ:Global_one_derivative_6\]](#EQ:Global_one_derivative_6){reference-type="eqref" reference="EQ:Global_one_derivative_6"}, that $$\label{EQ:main_proof_1} \begin{aligned} \big| \mathop{\mathrm{Tr}}[\varphi g_\gamma (\mathcal{H}_{\hbar,\mu}) ] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi(x) \,dx dp \big| \leq C \sum_{k\in\mathcal{I}} h_k^{1+\gamma-d}. \end{aligned}$$ Since the balls $B(x_k,l_k)$ all have radius of order $\mu^{-1}$ and have bounded overlap, the finite set $\mathcal{I}$ contains at most $C\mu^d$ elements. Hence, by our choice of the functions $l$ and $f$, we have that $$\label{EQ:main_proof_2} \begin{aligned} \sum_{k\in\mathcal{I}} h_k^{1+\gamma-d} \leq C \mu^{1+\gamma} \hbar^{1+\gamma-d}, \end{aligned}$$ where $C$ depends on $\mu_0$, $\epsilon$ and $\mathop{\mathrm{supp}}(\varphi)$.
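The parameter bounds used in this rescaling amount to a few lines of arithmetic, which can be verified directly; the values of $\epsilon$, $\mu_0$ and $C$ below are arbitrary samples:

```python
# Check the rescaled parameters in the large-mu regime: with the constant
# scale l = min(1, eps/11) * mu0 / mu and f = 1, we get mu_k <= mu0 and
# h_k = hbar / l bounded uniformly for mu <= C / hbar.
# The values of eps, mu0 and C are arbitrary samples.
eps, mu0, C = 0.5, 0.9, 10.0
m = min(1.0, eps / 11)
checks = []
for hbar in [0.001, 0.01, 0.1]:
    for t in [1.0, 2.0, 5.0, 10.0]:
        mu = C / (t * hbar)              # any mu <= C / hbar
        l = m * mu0 / mu
        mu_k = mu * l
        h_k = hbar / l
        checks.append(mu_k <= mu0 + 1e-12 and h_k <= C / (m * mu0) + 1e-12)
```

Here $\mu_k = \min(1,\epsilon/11)\,\mu_0$ is even constant in $\mu$ and $\hbar$, while $h_k$ is bounded exactly because $\mu\leq C\hbar^{-1}$.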
Combining the estimates in [\[EQ:main_proof_1\]](#EQ:main_proof_1){reference-type="eqref" reference="EQ:main_proof_1"} and [\[EQ:main_proof_2\]](#EQ:main_proof_2){reference-type="eqref" reference="EQ:main_proof_2"} we obtain that $$\begin{aligned} \big| \mathop{\mathrm{Tr}}[\varphi g_\gamma (\mathcal{H}_{\hbar,\mu}) ] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi(x) \,dx dp \big| \leq C \mu^{1+\gamma} \hbar^{1+\gamma-d}. \end{aligned}$$ Recalling the results from Theorem [Theorem 35](#THM:Local_two_derivative_almost_there){reference-type="ref" reference="THM:Local_two_derivative_almost_there"} we get for all $\mu\leq C\hbar^{-1}$ that $$\begin{aligned} \big| \mathop{\mathrm{Tr}}[\varphi g_\gamma (\mathcal{H}_{\hbar,\mu}) ] - \frac{1}{(2\pi\hbar)^d} \int_{\mathbb{R}^{2d}}g_\gamma( p^2+V(x))\varphi(x) \,dx dp \big| \leq C \langle \mu \rangle^{1+\gamma} \hbar^{1+\gamma-d}, \end{aligned}$$ where $\langle \mu \rangle = (1+|\mu|^2)^{\frac{1}{2}}$. This concludes the proof. ◻
--- abstract: | We study Ricci-flat perturbations of gravitational instantons of Petrov type D. Analogously to the Lorentzian case, the Weyl curvature scalars of extreme spin-weight satisfy a Riemannian version of the separable Teukolsky equation. As a step towards rigidity of the type D Kerr and Taub-bolt families of instantons, we prove mode stability, i.e. that the Teukolsky equation admits no solutions compatible with regularity and asymptotic (local) flatness. author: - | Gustav Nilsson\ Max Planck Institute for Gravitational Physics (Albert Einstein Institute)\ Am Mühlenberg 1, D-14476 Potsdam, Germany bibliography: - \\jobname.bib title: Mode Stability for Gravitational Instantons of Type D --- # Introduction A gravitational instanton is a complete and non-compact Ricci-flat Riemannian four-manifold with quadratic curvature decay. There are a number of families of known examples, such as the Riemannian Kerr instanton, and the Taub--NUT and Taub-bolt instantons. There are some known results about gravitational instantons, a large part of which hold under various symmetry assumptions, such as the existence of a $U(1)$ or $U(1)\times U(1)$ isometry group, see e.g. [@aksteiner2023gravitational; @nilsson2023topology; @Biquard_2022; @aksteiner2022gravitational]. In the compact case, we have the Besse conjecture [@MR867684], stating that all compact Ricci-flat manifolds have special holonomy. This is a wide-open conjecture; there are no known examples of compact Ricci-flat four-manifolds with generic holonomy, i.e. holonomy group $\mathit{SO}(4)$. This is in contrast to the non-compact case, since there are examples of gravitational instantons with generic holonomy. However, all known examples still satisfy the weaker requirement of Hermiticity, and it is therefore natural to conjecture that all gravitational instantons are Hermitian. A first step towards such a result is given by proving *rigidity*, i.e.
for various known examples of gravitational instantons, showing that there are no other Ricci-flat metrics close to the metric in question. It was shown in [@MR0995773] that for perturbations of the Lorentzian Kerr metric whose frequency lies in the upper half plane and which satisfy certain boundary conditions, the perturbations of the Weyl scalars of extreme spin weight vanish identically. This result is known as *mode stability*. Furthermore, mode stability for frequencies on the real axis was shown in [@Andersson_2017]. As was shown in [@10.1063/1.1666203], see also [@andersson2022mode], perturbations of the Lorentzian Kerr metric whose Weyl scalars of extreme spin weight vanish identically must be perturbations within the Kerr family, modulo gauge. The following two main theorems of this paper show that the Riemannian analog of mode stability holds in the ALF type D case.[^1] **Theorem 1**. *For Ricci-flat AF perturbations of the Riemannian Kerr metric, the perturbed Weyl scalars $\dot{\Psi}_0,\dot{\tilde{\Psi}}_0$ vanish identically.[\[vryfcgpvhdetgswyzzcn\]]{#vryfcgpvhdetgswyzzcn label="vryfcgpvhdetgswyzzcn"}* **Theorem 2**. *For Ricci-flat ALF perturbations of the Taub-bolt metric, the perturbed Weyl scalars $\dot{\Psi}_0,\dot{\tilde{\Psi}}_0$ vanish identically.[\[wnqtrcvwqbjaufatjvne\]]{#wnqtrcvwqbjaufatjvne label="wnqtrcvwqbjaufatjvne"}* As a consequence, one might conjecture that rigidity in the above mentioned sense holds for these two instantons. ## Acknowledgements {#acknowledgements .unnumbered} The author would like to thank Lars Andersson, Mattias Dahl, Oliver Petersen and Klaus Kröncke for helpful comments and discussion. Special thanks should be given to Steffen Aksteiner for assistance with computations using the computer algebra package xAct for Mathematica, and for assistance with understanding the Newman--Penrose formalism. For the latter, the author would also like to give a special thanks to Bernardo Araneda.
This research was supported by the IMPRS for Mathematical and Physical Aspects of Gravitation, Cosmology and Quantum Field Theory. # The Newman--Penrose Formalism in Riemannian Signature {#aifeebkthgighqdqooow} The Newman--Penrose formalism [@MR141500], commonly used in general relativity, can be adapted to a Riemannian signature (cf. [@MR1669188; @MR1294497]). Let $(l,\overline{l},m,\overline{m})$ be a tetrad of vector fields with complex coefficients, in which the metric has the form $$\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&0&1\\0&0&1&0\end{pmatrix}.$$ When viewed as first order differential operators, we denote the vector fields $l,\overline{l},m,\overline{m}$ by $D,\Delta ,\delta,-\tilde{\delta}$, respectively. With respect to the tetrad $(l,\overline{l},m,\overline{m})$, the Levi-Civita connection is represented by $24$ *spin coefficients*, denoted by Greek letters and defined to be the coefficients in the right hand sides of the equations $$\begin{aligned} \frac{1}{2}(\overline{l}^a\nabla_bl_a-m^a\nabla_b\overline{m}_a)&=\gamma l_b+\epsilon\overline{l}_b-\alpha m_b+\beta\overline{m}_b,\\ \overline{l}^a\nabla_b\overline{m}_a&=-\nu l_b-\pi\overline{l}_b+\lambda m_b-\mu\overline{m}_b,\\ m^a\nabla_bl_a&=\tau l_b+\kappa\overline{l}_b-\rho m_b+\sigma\overline{m}_b,\\ \frac{1}{2}(\overline{l}^a\nabla_bl_a+\overline{m}^a\nabla_bm_a)&=\tilde{\gamma}l_b+\tilde{\epsilon}\overline{l}_b-\tilde{\beta}m_b+\tilde{\alpha}\overline{m}_b,\\ \overline{l}^a\nabla_bm_a&=\tilde{\nu}l_b+\tilde{\pi}\overline{l}_b-\tilde{\mu}m_b+\tilde{\lambda}\overline{m}_b,\\ \overline{m}^a\nabla_bl_a&=-\tilde{\tau}l_b-\tilde{\kappa}\overline{l}_b+\tilde{\sigma}m_b-\tilde{\rho}\overline{m}_b.\end{aligned}$$ We also have the Weyl scalars: $$\begin{gathered} \begin{aligned} \Psi_0&=-W(l,m,l,m),& \Psi_1&=-W(l,\overline{l},l,m),& \Psi_2&=W(l,m,\overline{l},\overline{m}),\\ \Psi_3&=W(l,\overline{l},\overline{l},\overline{m}),& \Psi_4&=-W(\overline{l},\overline{m},\overline{l},\overline{m}), \end{aligned}\\
\begin{aligned} \tilde{\Psi}_0&=-W(l,\overline{m},l,\overline{m}),& \tilde{\Psi}_1&=-W(l,\overline{l},l,\overline{m}),& \tilde{\Psi}_2&=W(l,\overline{m},\overline{l},m),\\ \tilde{\Psi}_3&=W(l,\overline{l},\overline{l},m),& \tilde{\Psi}_4&=-W(\overline{l},m,\overline{l},m), \end{aligned}\end{gathered}$$ where $W$ denotes the Weyl curvature tensor. From the definitions, one sees immediately that $\overline{\Psi}_k=\Psi_{4-k}$ and $\overline{\tilde{\Psi}}_k=\tilde{\Psi}_{4-k}$, so that the Weyl tensor is determined by the six scalars $\Psi_0,\Psi_1,\Psi_2,\tilde{\Psi}_0,\tilde{\Psi}_1,\tilde{\Psi}_2$, and that $\Psi_2$ and $\tilde{\Psi}_2$ are real. The Weyl scalars of *extreme spin weight* are defined to be $\Psi_0,\Psi_4,\tilde{\Psi}_0$ and $\tilde{\Psi}_4$. A tetrad $(l,\overline{l},m,\overline{m})$ is said to be *principal* if $\Psi_0=\Psi_1=\tilde{\Psi}_0=\tilde{\Psi}_1=0$. It can be shown that a Ricci-flat four-manifold admits a principal tetrad if and only if it has type D. A proof of this fact in the Lorentzian case can be found in [@MR838301 Chapter 7]. ## The Perturbation Equations When referring to a *perturbation* $\dot{g}$ of a metric $g$, we are referring to a linear perturbation of $g$, i.e. a symmetric two-tensor $\dot{g}$. When $g$ is Ricci-flat, we say that $\dot{g}$ is a *Ricci-flat perturbation* if it is Ricci-flat to first order, i.e. if $\dot{g}\in\ker((D\operatorname{Ric})_g)$. In general, for a quantity depending on the metric $g$, we let a dot above the quantity denote its derivative in the direction $\dot{g}$. Then $\dot{g}$ is a Ricci-flat perturbation if and only if $\dot{\operatorname{Ric}}=0$. Ricci-flat perturbations of the Lorentzian Kerr metric have been studied extensively, and in [@1973ApJ...185..635T], Teukolsky derived a well-known equation for the perturbation of the Weyl scalars of extreme spin weight, for such perturbations of the metric. The following theorem gives a Riemannian analog of that perturbation equation. 
**Theorem 3**. *Consider a Ricci-flat perturbation $\dot{g}$ of a Ricci-flat type D metric $g$. Relative to a principal tetrad, the perturbation $\dot{\Psi}_0$ satisfies the equation $$((D-3\epsilon+\tilde{\epsilon}-\tilde{\rho}-4\rho)(\Delta-4\gamma+\mu)-(\delta-\tilde{\alpha}-3\beta+\tilde{\pi}-4\tau)(\tilde{\delta}-4\alpha+\pi)-3\Psi_2)\dot{\Psi}_0=0,\label{gtrpatiwyutgcvfpswpn}$$ and the perturbation $\dot{\tilde{\Psi}}_0$ satisfies the equation $$((D-3\tilde{\epsilon}+\epsilon-\rho-4\tilde{\rho})(\Delta -4\tilde{\gamma}+\tilde{\mu})-(\tilde{\delta}-\alpha-3\tilde{\beta}+\pi-4\tilde{\tau})(\delta-4\tilde{\alpha}+\tilde{\pi})-3\tilde{\Psi}_2)\dot{\tilde{\Psi}}_0=0.\label{cwtixuyczqdydtnkkwrz}$$* *Proof.* Since we have a principal tetrad, $\Psi_0=\tilde{\Psi}_0=\Psi_1=\tilde{\Psi}_1=\kappa=\tilde{\kappa}=\sigma=\tilde{\sigma}=0$. The perturbed versions of [\[dgzjedhogf\]](#dgzjedhogf){reference-type="eqref" reference="dgzjedhogf"} and [\[tffqwgbzjf\]](#tffqwgbzjf){reference-type="eqref" reference="tffqwgbzjf"} become $$(\Delta-4\gamma+\mu)\dot{\Psi}_0=(\delta-4\tau-2\beta)\dot{\Psi}_1+3\dot{\sigma}\Psi_2\label{frdoryujnwxibijhaopi}$$ and $$(\tilde{\delta}-4\alpha+\pi)\dot{\Psi}_0=(D-4\rho-2\epsilon)\dot{\Psi}_1+3\dot{\kappa}\Psi_2\label{tjoberdmujyzyhouxdxg}$$ respectively. 
Operating on [\[frdoryujnwxibijhaopi\]](#frdoryujnwxibijhaopi){reference-type="eqref" reference="frdoryujnwxibijhaopi"} with $D$ and on $\eqref{tjoberdmujyzyhouxdxg}$ with $\delta$, subtracting the resulting equations and using the commutator relation [\[cljxbmfrlnvouvtstprh\]](#cljxbmfrlnvouvtstprh){reference-type="eqref" reference="cljxbmfrlnvouvtstprh"}, we get $$\begin{aligned} (D(\Delta-4\gamma+\mu)-\delta(\tilde{\delta}-4\alpha+\pi))\dot{\Psi}_0 &= ([D,\delta]-4D\tau-2D\beta+4\delta\rho+2\delta\epsilon)\dot{\Psi}_1+(3D\dot{\sigma}-3\delta\dot{\kappa})\Psi_2 \\ &= \begin{multlined}[t] (-(\tilde{\alpha} + 3\beta - \tilde{\pi}+4\tau) D+ (3\epsilon - \tilde{\epsilon} + \tilde{\rho}+4\rho) \delta \\ -(D(4\tau+2\beta))+(\delta(4\rho+2\epsilon)))\dot{\Psi}_1+(3D\dot{\sigma}-3\delta\dot{\kappa})\Psi_2. \end{multlined}\end{aligned}$$ We eliminate the first two terms in the first bracket on the right: $$((D-3\epsilon+\tilde{\epsilon}-\tilde{\rho}-4\rho)(\Delta-4\gamma+\mu)-(\delta-\tilde{\alpha}-3\beta+\tilde{\pi}-4\tau)(\tilde{\delta}-4\alpha+\pi))\dot{\Psi}_0=A_1+A_2,$$ where $$\begin{gathered} A_1=((-3\epsilon+\tilde{\epsilon}-\tilde{\rho}-4\rho)(-4\tau-2\beta)-(-\tilde{\alpha}-3\beta+\tilde{\pi}-4\tau)(-4\rho-2\epsilon)\\ -(D(4\tau+2\beta))+(\delta(4\rho+2\epsilon)))\end{gathered}$$ and $$A_2=(3(D-3\epsilon+\tilde{\epsilon}-\tilde{\rho}-4\rho)\dot{\sigma}-3(\delta-\tilde{\alpha}-3\beta+\tilde{\pi}-4\tau)\dot{\kappa})\Psi_2.$$ By using [\[ztdbwahxrsqwsbvwjemy\]](#ztdbwahxrsqwsbvwjemy){reference-type="eqref" reference="ztdbwahxrsqwsbvwjemy"}, [\[dxjlcufhwofmcfykmlmk\]](#dxjlcufhwofmcfykmlmk){reference-type="eqref" reference="dxjlcufhwofmcfykmlmk"} and [\[ylsqrpzctcwthxvmlbxg\]](#ylsqrpzctcwthxvmlbxg){reference-type="eqref" reference="ylsqrpzctcwthxvmlbxg"}, we see that $A_1=0$. 
Also, from the fact that our metric has type D, together with [\[lwtfleqbiqyqbugeyliv\]](#lwtfleqbiqyqbugeyliv){reference-type="eqref" reference="lwtfleqbiqyqbugeyliv"} and a suitable linear combination of [\[dozkyraifwwbixgillgs\]](#dozkyraifwwbixgillgs){reference-type="eqref" reference="dozkyraifwwbixgillgs"} and [\[drypwpymxoihwivvzutw\]](#drypwpymxoihwivvzutw){reference-type="eqref" reference="drypwpymxoihwivvzutw"}, we have $$D\Psi_2=3\rho\Psi_2,\qquad\delta\Psi_2=3\tau\Psi_2.$$ Therefore, by the Leibniz rule, $$\begin{aligned} A_2&=3\dot{\sigma}D\Psi_2-3\dot{\kappa}\delta\Psi_2+(3((D-3\epsilon+\tilde{\epsilon}-\tilde{\rho}-4\rho)\dot{\sigma})-3((\delta-\tilde{\alpha}-3\beta+\tilde{\pi}-4\tau)\dot{\kappa}))\Psi_2\\ &=3(((D-3\epsilon+\tilde{\epsilon}-\tilde{\rho}-\rho)\dot{\sigma})-((\delta-\tilde{\alpha}-3\beta+\tilde{\pi}-\tau)\dot{\kappa}))\Psi_2\\ &=3\dot{\Psi}_0\Psi_2,\end{aligned}$$ where we used the linearization of [\[bepkreovuhsolsavrkpy\]](#bepkreovuhsolsavrkpy){reference-type="eqref" reference="bepkreovuhsolsavrkpy"} in the last step, showing that [\[gtrpatiwyutgcvfpswpn\]](#gtrpatiwyutgcvfpswpn){reference-type="eqref" reference="gtrpatiwyutgcvfpswpn"} holds. The proof of [\[cwtixuyczqdydtnkkwrz\]](#cwtixuyczqdydtnkkwrz){reference-type="eqref" reference="cwtixuyczqdydtnkkwrz"} is similar, referring to the tilded equivalents of the NP equations instead. ◻ # The Riemannian Kerr Instanton In Boyer-Lindquist coordinates $(t,r,\theta,\phi)$, the Riemannian Kerr family of metrics is given by the expression $$g=\frac{\Sigma}{\Delta}\,dr^2+\Sigma\,d\theta^2+\frac{\Delta}{\Sigma}(dt-a\sin^2\theta\,d\phi)^2+\frac{\sin^2\theta}{\Sigma}((r^2-a^2)\,d\phi+a\,dt)^2.\label{xneemhurgawbdbmwulrk}$$ Here, $M>0$ and $a\in\mathbb{R}$ are the parameters of the family, $\Delta=\Delta(r)=r^2-2Mr-a^2$ and $\Sigma=r^2-a^2\cos^2\theta$, and the coordinates have the ranges $r>r_+$, $0<\theta<\pi$, where $r_\pm=M\pm\sqrt{M^2+a^2}$ are the roots of $\Delta$.
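As a quick numerical sanity check (an illustrative sketch of our own, not part of the argument; the helper names are ours), one can confirm that $r_\pm$ are indeed the roots of $\Delta$, and that $r_+>|a|$, the inequality invoked later in the mode stability proof:

```python
import math

# Illustrative sanity check (not part of the paper's argument):
# r_± = M ± sqrt(M² + a²) are the two roots of Δ(r) = r² - 2Mr - a²,
# and r_+ > |a| holds for every M > 0.
def delta(r, M, a):
    return r**2 - 2*M*r - a**2

def roots(M, a):
    s = math.sqrt(M**2 + a**2)
    return M + s, M - s

M, a = 1.0, 0.7
r_plus, r_minus = roots(M, a)
```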
Like its Lorentzian counterpart, this metric is Ricci-flat. Note that $\Delta$ has a different meaning than in Section [2](#aifeebkthgighqdqooow){reference-type="ref" reference="aifeebkthgighqdqooow"}, and it will retain this new meaning throughout this section. Now define new coordinates $(\tilde{t},\tilde{r},\theta,\tilde{\phi})$ by $$\begin{cases}r&=M+\sqrt{M^2+a^2}\cosh\tilde{r},\\t&=\frac{1}{\kappa}\tilde{t},\\\phi&=\tilde{\phi}-\frac{\Omega}{\kappa}\tilde{t},\end{cases}$$ where $\kappa=\frac{\sqrt{M^2+a^2}}{2Mr_+}$ and $\Omega=\frac{a}{2Mr_+}$. Then $r$ is a smooth function of $\tilde{r}^2$, and [\[xneemhurgawbdbmwulrk\]](#xneemhurgawbdbmwulrk){reference-type="eqref" reference="xneemhurgawbdbmwulrk"} gives $$g=\Sigma(d\tilde{r}^2+d\theta^2+(\tilde{r}^2+O(\tilde{r}^4))\,d\tilde{t}^2+(\sin^2\theta+O(\sin^4\theta))\,d\tilde{\phi}^2).$$ Letting $(\tilde{r},\tilde{t})$ be polar coordinates on $\mathbb{R}^2$ and letting $(\theta,\tilde{\phi})$ be spherical coordinates on $S^2$, it follows that $g$ extends to a complete metric on $\mathbb{R}^2\times S^2$, provided that we identify $\tilde{t}$ and $\tilde{\phi}$ with period $2\pi$ independently. Note that this is equivalent to performing the identifications $(t,\phi)\sim(t+\frac{2\pi}{\kappa},\phi-\frac{2\pi\Omega}{\kappa})\sim(t,\phi+2\pi)$. ## The Separated Perturbation Equations in Coordinates {#gsopptqotnpieuvyzrsp} We shall be interested in a particular choice of complex null tetrad $(l,\overline{l},m,\overline{m})$, called the *Carter tetrad*, defined by $$\begin{aligned} l&=\frac{1}{\sqrt{2\Delta\Sigma}}\left((r^2-a^2)\frac{\partial}{\partial t}-a\frac{\partial}{\partial\phi}\right)+i\sqrt{\frac{\Delta}{2\Sigma}}\frac{\partial}{\partial r},\\ m&=\frac{1}{\sqrt{2\Sigma}}\frac{\partial}{\partial\theta}-\frac{i}{\sqrt{2\Sigma}}\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}+a\sin\theta\frac{\partial}{\partial t}\right).\end{aligned}$$ Note that $|l|_g=|m|_g=1$. 
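The smooth extension above rests on the fact that the substitution $r=M+\sqrt{M^2+a^2}\cosh\tilde{r}$ collapses $\Delta$ to $(M^2+a^2)\sinh^2\tilde{r}$, and on the normalization of $\kappa$. The following symbolic sketch (our own check; the reading of $\kappa$ as $\Delta'(r_+)/(4Mr_+)$ is ours, the text only states $\kappa=\sqrt{M^2+a^2}/(2Mr_+)$) verifies both identities:

```python
import sympy as sp

# Symbolic check of the coordinate change r = M + sqrt(M²+a²)·cosh(r̃):
# Δ collapses to (M²+a²)·sinh²(r̃), so r is a smooth function of r̃²,
# and κ as defined equals Δ'(r_+)/(4·M·r_+).
M, a, rt = sp.symbols('M a rt', positive=True)
s = sp.sqrt(M**2 + a**2)
r = M + s*sp.cosh(rt)
Delta = r**2 - 2*M*r - a**2
identity = sp.simplify(sp.expand((Delta - (M**2 + a**2)*sp.sinh(rt)**2).rewrite(sp.exp)))

rv = sp.Symbol('rv', positive=True)
r_plus = M + s
kappa = s/(2*M*r_plus)
dDelta = sp.diff(rv**2 - 2*M*rv - a**2, rv).subs(rv, r_plus)
kappa_check = sp.simplify(kappa - dDelta/(4*M*r_plus))
```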
The spin coefficients for the Carter tetrad are given explicitly in Section [5.1](#lkxvvmyizsfanehcgbyb){reference-type="ref" reference="lkxvvmyizsfanehcgbyb"}. For this tetrad we have $$\Psi_2=\frac{M}{(r-a\cos\theta)^3}, \qquad \tilde{\Psi}_2=\frac{M}{(r+a\cos\theta)^3},$$ and all other Weyl scalars vanish. In particular, this is a principal tetrad. We shall now analyze the perturbation equations in the Carter tetrad. The relevant properties of the equations are given in the following four lemmas. **Lemma 1**. *For the Carter tetrad, the perturbation equation [\[gtrpatiwyutgcvfpswpn\]](#gtrpatiwyutgcvfpswpn){reference-type="eqref" reference="gtrpatiwyutgcvfpswpn"} is equivalent to the equation[^2] $\mathbf{L}\Phi=0$, where $\Phi=\Psi_2^{-2/3}\dot{\Psi}_0$ and $$\begin{gathered} \mathbf{L}=\frac{\partial}{\partial r}\Delta\frac{\partial}{\partial r}+\frac{1}{\Delta}\left((r^2-a^2)\frac{\partial}{\partial t}-a\frac{\partial}{\partial\phi}+2i(r-M)\right)^2+8i(r+a\cos\theta)\frac{\partial}{\partial t}\\+\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}+\frac{1}{\sin^2\theta}\left(a\sin^2\theta\frac{\partial}{\partial t}+\frac{\partial}{\partial\phi}-2i\cos\theta\right)^2.\end{gathered}$$* *Furthermore, if $\Phi$ is a solution to this equation coming from a perturbation of the metric, then we can write $$\Phi(t,r,\theta,\phi)=\sum_{m,\omega,\Lambda}e^{i(m\phi-\omega t)}R_{m,\omega,\Lambda}(r)S_{m,\omega,\Lambda}(\theta),\label{hlketuuyfnuahxwbkgcr}$$ where $m$ runs over $\mathbb{Z}$, $\omega$ runs over $\Omega+\kappa\mathbb{Z}$, and for each choice of $m,\omega,\Lambda$, the function $R=R_{m,\omega,\Lambda}$ solves the equation $\mathbf{R}R=0$. 
The function $S=S_{m,\omega,\Lambda}$ is the unique solution to the boundary value problem $\mathbf{S}S=0$, $S'(0)=S'(\pi)=0$, where $$\mathbf{R}=\frac{d}{dr}\Delta\frac{d}{dr}+U(r),\label{btffnmnitezlpabydygn}$$$$U(r)=-\frac{((r^2-a^2)\omega+am+2(r-M))^2}{\Delta}+8r\omega-\Lambda$$ and $$\mathbf{S}=\frac{1}{\sin\theta}\frac{d}{d\theta}\sin\theta\frac{d}{d\theta}+V(\cos\theta),$$$$V(x)=8a\omega x-\frac{1}{1-x^2}\left(a\omega(1-x^2)-m+2x\right)^2+\Lambda.$$ Here, $S$ is normalized with respect to the $L^2$ product with measure $\sin\theta\,d\theta$, and the separation constant $\Lambda$ runs over the (countable set of) values for which such an $S$ exists.* *The same statement holds for the perturbation equation [\[cwtixuyczqdydtnkkwrz\]](#cwtixuyczqdydtnkkwrz){reference-type="eqref" reference="cwtixuyczqdydtnkkwrz"}, if $\Phi$ is replaced by $\tilde{\Phi}$, where $\tilde{\Phi}=\tilde{\Psi}_2^{-2/3}\dot{\tilde{\Psi}}_0$.* *Proof.* The fact that [\[gtrpatiwyutgcvfpswpn\]](#gtrpatiwyutgcvfpswpn){reference-type="eqref" reference="gtrpatiwyutgcvfpswpn"} is equivalent to $\mathbf{L}\Phi=0$ follows from a direct computation, using the expressions for the spin coefficients in Section [5.1](#lkxvvmyizsfanehcgbyb){reference-type="ref" reference="lkxvvmyizsfanehcgbyb"}. Now note that the boundary value problem $\mathbf{S}S=0$, $S'(0)=S'(\pi)=0$ is a Sturm-Liouville problem. Thus, there exists an orthonormal $L^2$ basis of functions $\{S_{m,\omega,\Lambda}\}_\Lambda$ solving it, and furthermore, we can perform a Fourier series decomposition in the coordinates $(t,\phi)$.
From these considerations, we can write [\[hlketuuyfnuahxwbkgcr\]](#hlketuuyfnuahxwbkgcr){reference-type="eqref" reference="hlketuuyfnuahxwbkgcr"}, where $$R_{m,\omega,\Lambda}(r)=\frac{\kappa}{4\pi^2}\int_0^{2\pi/\kappa}\int_0^{2\pi}\int_0^\pi e^{-i(m\phi-\omega t)}\Phi(t,r,\theta,\phi) S_{m,\omega,\Lambda}(\theta)\sin\theta\,d\theta\,d\phi\,dt.\label{dnctdzzlskuhirvgitbn}$$ The fact that $R=R_{m,\omega,\Lambda}$ satisfies $\mathbf{R}R=0$ now follows directly from [\[dnctdzzlskuhirvgitbn\]](#dnctdzzlskuhirvgitbn){reference-type="eqref" reference="dnctdzzlskuhirvgitbn"}, along with the fact that $\mathbf{L}\Phi=0$. For the statement involving $\tilde{\Phi}$, the proof is entirely analogous. ◻ **Lemma 2**. *The equation $\mathbf{R}R=0$ is an ordinary differential equation in a complex variable $r$, which has regular singular points at $r=r_\pm$. The point $r=\infty$ is an irregular singular point of rank $1$, except when $\omega=0$, in which case it is a regular singular point. Thus, the equation $\mathbf{R}R=0$ is a confluent Heun equation (see [@MR1858237 Section 3]) when $\omega\neq 0$, and a hypergeometric equation (see [@MR1858237 Section 2]) when $\omega=0$. 
The characteristic exponents at $r=r_+$ are $$\pm\left(1+\frac{2Mr_++am}{r_+-r_-}\right),$$ and those at $r=r_-$ are $$\pm\left(-1+\frac{2Mr_-+am}{r_+-r_-}\right).$$ When $\omega=0$, we have $\Lambda\geq0$, and the characteristic exponents at $r=\infty$ are $$-\frac{3}{2}\pm i\sqrt{\frac{7}{2}+\Lambda}.$$ When $\omega\neq0$, the equation $\mathbf{R}R=0$ admits normal solutions (see [@MR0078494 Section 3.2]), near $r=\infty$, of the asymptotic form $$R\sim e^{\pm r\omega}r^{-1\pm2(M\omega-1)}.$$* *Proof.* The fact that $r=r_\pm$ are regular singular points follows directly from the fact that $\Delta=(r-r_+)(r-r_-)$, the statement about the type and rank of the singular point at $r=\infty$ follows directly from the discussion in [@MR0078494 Section 3.1], and the expressions for the characteristic exponents can be seen from the discussion in [@MR1858237 Section 1.1.3]. Letting $R=y/\sqrt{\Delta}$, the equation $\mathbf{R}R=0$ is transformed into $$\frac{d^2y}{dr^2}+qy=0,$$ where $$q(r)=\frac{U(r)}{\Delta}+\left(\frac{r_+-r_-}{2\Delta}\right)^2=-\omega^2-\frac{4\omega(M\omega-1)}{r}+O(r^{-2}).$$ Following [@MR0078494 Section 3.2], the equation $\mathbf{R}R=0$ therefore has normal solutions of the asymptotic form $$R\sim e^{\pm r\omega}r^{-1\pm2(M\omega-1)}.$$ ◻ **Lemma 3**. *For a solution to the equation $\mathbf{R}R=0$ coming from a (globally smooth) perturbation of the Kerr metric, the corresponding characteristic exponent at $r=r_+$ is $$\left|1+\frac{2Mr_++am}{r_+-r_-}\right|.$$* *Proof.* Since the set corresponding to $r=r_+$ is compact, and by assumption, the perturbation $\dot{W}$ of the Weyl tensor is continuous, $\dot{W}$ has bounded norm in a neighborhood of this set. Consequently, since $l$ and $m$ have norm $1$, it follows that $\dot{\Psi}_0=-\dot{W}(l,m,l,m)$ is bounded near $r=r_+$. Since $\Psi_2^{-2/3}=O(r^2)$, it follows that $\Phi$, and therefore $R$, is bounded near $r=r_+$. The statement now follows immediately. ◻ **Lemma 4**. 
*Let $R$ be a solution to the equation $\mathbf{R}R=0$ coming from an asymptotically flat perturbation of the Kerr metric. When $\omega=0$, none of the characteristic exponents at $r=\infty$ are compatible with the asymptotic flatness assumption. When $\omega\neq 0$, exactly one of the asymptotic normal solutions is compatible with this assumption, namely $$R\sim e^{-r|\omega|}r^{-1-2(M\omega-1)\operatorname{sgn}(\omega)}.$$* *Proof.* The assumption of asymptotic flatness means that $\dot{g}=O(r^{-1})$ as $r\to\infty$, with corresponding decay on derivatives. In particular, $\dot{W}=O(r^{-3})$, which means that $\Phi$, and therefore $R$, decays as $r^{-1}$ as $r\to\infty$. Consequently, we must have $\lim_{r\to\infty}R(r)=0$, and the result now follows immediately. ◻ ## Mode Stability Equipped with the lemmas of the previous subsection, we are now in a position to prove Theorem [\[vryfcgpvhdetgswyzzcn\]](#vryfcgpvhdetgswyzzcn){reference-type="ref" reference="vryfcgpvhdetgswyzzcn"}. *Proof of Theorem [\[vryfcgpvhdetgswyzzcn\]](#vryfcgpvhdetgswyzzcn){reference-type="ref" reference="vryfcgpvhdetgswyzzcn"}.* For $r>r_+$ and $-1<x<1$, note that $$\begin{aligned} \begin{split}U(r)+V(x)&=\begin{multlined}[t]-\frac{16 M (r+a x)}{(r-a x)^2}\\-\frac{(a^2 x (m x-2)+2 a (x^2-1) (M (r \omega -1)+r)+r (m-2 x) (2 M-r))^2}{(1-x^2)(\Delta+(1-x^2)a^2)\Delta}\\-\frac{(2 M (a x+3 r)+(r-a x) (r+a x) (-a x \omega +r \omega -2))^2}{(r-a x)^2(\Delta+(1-x^2)a^2)}\end{multlined}\\&<0.\end{split}\label{encwddkayatjihlzkuhh}\end{aligned}$$ Here, the strict negativity follows from that of the first term, which holds because $r_+>|a|$.
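The rearrangement above is tedious to verify by hand; the following numerical spot-check (our own sketch, with arbitrarily chosen parameter ranges) computes $U(r)+V(\cos\theta)$ directly from the definitions in Lemma 1 and samples it over the region $r>r_+$, $-1<x<1$. Note that the separation constant $\Lambda$ cancels in the sum, so it is omitted:

```python
import math, random

# Numerical spot-check (illustrative, not a proof): sample
# U(r) + V(x), built directly from U and V of Lemma 1, over
# r > r_+ and -1 < x < 1; Λ cancels between U and V.
def u_plus_v(r, x, M, a, m, omega):
    Delta = r**2 - 2*M*r - a**2
    U = -(((r**2 - a**2)*omega + a*m + 2*(r - M))**2)/Delta + 8*r*omega
    V = 8*a*omega*x - (a*omega*(1 - x**2) - m + 2*x)**2/(1 - x**2)
    return U + V

random.seed(0)
M = 1.0
violations = 0
for _ in range(10000):
    a = random.uniform(-0.9, 0.9)
    m = random.randrange(-5, 6)
    omega = random.uniform(-3.0, 3.0)
    r_plus = M + math.sqrt(M**2 + a**2)
    r = r_plus + random.uniform(1e-3, 50.0)
    x = random.uniform(-0.999, 0.999)
    if u_plus_v(r, x, M, a, m, omega) >= 0:
        violations += 1
```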
Integrating by parts, we have $$\begin{aligned} U(r)&\leq U(r)+\int_0^\pi\left(\frac{dS}{d\theta}\right)^2\sin\theta\,d\theta=U(r)+\int_0^\pi\left(\mathbf{S}S-\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{dS}{d\theta}\right)\right)S\sin\theta\,d\theta\\&=U(r)+\int_0^\pi V(\cos\theta)S^2\sin\theta\,d\theta=\int_0^\pi(U(r)+V(\cos\theta))S^2\sin\theta\,d\theta<0,\end{aligned}$$ where the last equality follows from the normalization of $S$, and the final inequality from [\[encwddkayatjihlzkuhh\]](#encwddkayatjihlzkuhh){reference-type="eqref" reference="encwddkayatjihlzkuhh"}. Multiplying [\[btffnmnitezlpabydygn\]](#btffnmnitezlpabydygn){reference-type="eqref" reference="btffnmnitezlpabydygn"} by $\overline{R}$ and integrating, the first term being integrated by parts, we get $$0=\left[\Delta\frac{dR}{dr}\overline{R}\right]_{r=r_+}^{r=\infty}-\int_{r_+}^\infty\left(\Delta\left|\frac{dR}{dr}\right|^2-U|R|^2\right)dr.$$ We claim that the first term vanishes; to see this, we consider the endpoints separately. Near $r=\infty$, we have $\Delta\sim r^2$, while $R$ and its derivative decay exponentially. Thus, the term in square brackets decays exponentially, and in particular it goes to zero as $r\to\infty$. Near $r=r_+$, we know that $\overline{R}$ is bounded. The characteristic exponent corresponding to $R$ is either positive, in which case $\frac{dR}{dr}=o((r-r_+)^{-1})$, or $R$ is analytic in a neighborhood of $r=r_+$, in which case $\frac{dR}{dr}$ is bounded. In either case, the product $\Delta\frac{dR}{dr}$ goes to zero as $r\to r_+$, and from this it immediately follows that the term in square brackets goes to zero. We have thus shown that $$\int_{r_+}^\infty\left(\Delta\left|\frac{dR}{dr}\right|^2-U|R|^2\right)\,dr=0.$$ Since $U<0$, the terms in the integrand are both non-negative, and must therefore vanish. We conclude that $R$ vanishes identically.
◻ # The Taub-Bolt Instanton The general *Taub--NUT* family of Ricci-flat metrics, depending on two parameters $M,N>0$, is given in coordinates $(t,r,\theta,\phi)$ by $$g=\frac{\Sigma}{\Delta}\,dr^2+4N^2\frac{\Delta}{\Sigma}(dt+\cos\theta\,d\phi)^2+\Sigma(d\theta^2+\sin^2\theta\,d\phi^2),$$ where $\Delta=r^2-2Mr+N^2$ and $\Sigma=r^2-N^2$. Setting $M=N$ yields the self-dual Taub--NUT metric, a complete metric on $\mathbb{R}^4$. Another metric of interest, the *Taub-bolt metric*, arises by letting $M=\frac{5}{4}N$, which we shall do from now on. We will now give a brief account of the regularity of this metric. Introducing the coordinate system $(\tilde{t},\tilde{r},\tilde{\theta},\phi)$ by $$\begin{cases}r&=\frac{N}{4}(5+3\cosh\tilde{r}),\\t&=2\tilde{t}-\phi,\\\theta&=2\arctan(\frac{\tilde{\theta}}{2}),\end{cases}$$ we see that $r$ is smooth as a function of $\tilde{r}^2$, and that $$g=\Sigma(d\tilde{r}^2+(\tilde{r}^2+O(\tilde{r}^4))\,d\tilde{t}^2+(1+O(\tilde{\theta}^2))\,d\tilde{\theta}^2+(\tilde{\theta}^2+O(\tilde{\theta}^4))\,d\phi^2+O(\tilde{r}^2\tilde{\theta}^2)\,d\tilde{t}d\phi).$$ Viewing $(\tilde{\theta},\phi)$ as polar coordinates on $\mathbb{R}^2\times\{y\}\subseteq\mathbb{R}^4$, and viewing $(\tilde{r},\tilde{t})$ as polar coordinates on $\{x\}\times\mathbb{R}^2\subseteq\mathbb{R}^4$, it follows that $g$ extends to a smooth metric on $\mathbb{R}^4\cong\mathbb{C}^2$, provided that we identify $\tilde{t}$ and $\phi$ with period $2\pi$ independently. This is equivalent to making the identifications $(t,\phi)\sim(t+4\pi,\phi)\sim(t+2\pi,\phi+2\pi)$.
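A short symbolic check (our own sketch, using sympy) confirms two facts underlying this construction: with $M=\frac{5}{4}N$ the polynomial $\Delta$ factors as $(r-2N)(r-N/2)$, and the substitution $r=\frac{N}{4}(5+3\cosh\tilde{r})$ turns $\Delta$ into $\frac{9}{16}N^2\sinh^2\tilde{r}$, which is why $r$ extends smoothly through the bolt $\tilde{r}=0$:

```python
import sympy as sp

# Symbolic check (illustrative): with M = 5N/4, the Taub-bolt Δ
# factors as (r - 2N)(r - N/2), and r = N(5 + 3·cosh(r̃))/4 turns
# Δ into (9/16)·N²·sinh²(r̃), vanishing to second order at r̃ = 0;
# the same mechanism as for the Kerr instanton.
N, r, rt = sp.symbols('N r rt', positive=True)
M = sp.Rational(5, 4)*N
Delta = r**2 - 2*M*r + N**2
factor_check = sp.expand(Delta - (r - 2*N)*(r - N/2))

rb = N*(5 + 3*sp.cosh(rt))/4
Delta_bolt = rb**2 - 2*M*rb + N**2
sinh_check = sp.simplify(sp.expand((Delta_bolt - sp.Rational(9, 16)*N**2*sp.sinh(rt)**2).rewrite(sp.exp)))
```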
We can also introduce another coordinate system $(\hat{t},\tilde{r},\hat{\theta},\phi)$ by $$\begin{cases}t&=2\hat{t}+\phi,\\\theta&=2\mathop{\mathrm{arccot}}(\frac{\hat{\theta}}{2}),\end{cases}$$ so that $$g=\Sigma(d\tilde{r}^2+(\tilde{r}^2+O(\tilde{r}^4))\,d\hat{t}^2+(1+O(\hat{\theta}^2))\,d\hat{\theta}^2+(\hat{\theta}^2+O(\hat{\theta}^4))\,d\phi^2+O(\tilde{r}^2\hat{\theta}^2)\,d\hat{t}d\phi).$$ In the same way as for the previous coordinate system, this shows that $g$ extends to a smooth metric on another copy of $\mathbb{C}^2$. Note that the identifications made to ensure regularity in the coordinate system $(\tilde{t},\tilde{r},\tilde{\theta},\phi)$ also ensure regularity in the coordinate system $(\hat{t},\tilde{r},\hat{\theta},\phi)$. When defined on the union of these copies of $\mathbb{C}^2$, this metric is complete. Computing the transition map between the two coordinate systems, we see that they are related by $(\hat{t},\tilde{r},\hat{\theta},\phi)=(\tilde{t}-\phi,\tilde{r},\frac{4}{\tilde{\theta}},\phi)$. In other words, the two copies of $\mathbb{C}^2$ are glued together according to the map $$\begin{aligned} (\mathbb{C}\setminus\{0\})\times\mathbb{C}&\to(\mathbb{C}\setminus\{0\})\times\mathbb{C},\\(z_1,z_2)&\mapsto\left(\frac{4}{\overline{z_1}},z_2\cdot\frac{|z_1|}{z_1}\right).\end{aligned}$$ Topologically, this is the same thing as gluing two such copies along the map $(z_1,z_2)\mapsto(\frac{1}{z_1},z_2\cdot\frac{|z_1|}{z_1})$, or equivalently, gluing together two copies of $\overline{D}^2\times\mathbb{C}$ along the map $$\begin{aligned} \begin{split}S^1\times\mathbb{C}&\to S^1\times\mathbb{C},\\(z_1,z_2)&\mapsto\left(\frac{1}{z_1},\frac{z_2}{z_1}\right).\end{split}\label{oswcizfgjdkkzzkzmmkm}\end{aligned}$$ We now claim that the manifold is diffeomorphic to $\mathbb{C}P^2$ minus a point. 
To see this, consider two of the projective coordinate charts for $\mathbb{C}P^2$, $(U_0,\phi_0)$ and $(U_1,\phi_1)$, where $$U_i=\{[Z_0:Z_1:Z_2]\in\mathbb{C}P^2\mid Z_i\neq 0\},$$ and $$\begin{aligned} \phi_0:U_0&\to\mathbb{C}^2,\\ [Z_0:Z_1:Z_2]&\mapsto\left(\frac{Z_1}{Z_0},\frac{Z_2}{Z_0}\right),\\\phi_1:U_1&\to\mathbb{C}^2,\\ [Z_0:Z_1:Z_2]&\mapsto\left(\frac{Z_0}{Z_1},\frac{Z_2}{Z_1}\right).\end{aligned}$$ Since $U_0\cup U_1=\mathbb{C}P^2\setminus\{[0:0:1]\}$, it follows that the latter is topologically equivalent to two copies of $\mathbb{C}^2$, glued together along the transition map $$\begin{aligned} (\mathbb{C}\setminus\{0\})\times\mathbb{C}&\to(\mathbb{C}\setminus\{0\})\times\mathbb{C},\\(z_1,z_2)&\mapsto\left(\frac{1}{z_1},\frac{z_2}{z_1}\right).\end{aligned}$$ Again, topologically this is the same thing as gluing together two copies of $\overline{D}^2\times\mathbb{C}$ along the map [\[oswcizfgjdkkzzkzmmkm\]](#oswcizfgjdkkzzkzmmkm){reference-type="eqref" reference="oswcizfgjdkkzzkzmmkm"}. This shows that the manifold is *homeomorphic* to $\mathbb{C}P^2$ minus a point. To show that these are *diffeomorphic*, we can replace the closed disk by an open disk of radius slightly larger than $1$, gluing the two spaces together along a thin open strip around $S^1$. The gluing map will then be isotopic to the corresponding transition map in $\mathbb{C}P^2$. 
## The Separated Perturbation Equations in Coordinates {#the-separated-perturbation-equations-in-coordinates} As for Kerr, we are interested in a particular choice of complex null tetrad $(l,\overline{l},m,\overline{m})$, in this case given by $$\begin{aligned} l&=\frac{1}{\sqrt{2\Sigma}}\left(\frac{1}{\sin\theta}\left(\cos\theta\frac{\partial}{\partial t}-\frac{\partial}{\partial\phi}\right)+i\frac{\partial}{\partial\theta}\right),\label{gphzicgjqwnoeqtpedqy}\\ m&=\sqrt{\frac{\Delta}{2\Sigma}}\frac{\partial}{\partial r}+\frac{i\sqrt{\Sigma/2\Delta}}{2N}\frac{\partial}{\partial t},\label{evgrekozfhalihhtkzwj}\end{aligned}$$ satisfying $|l|_g=|m|_g=1$. The spin coefficients for this tetrad are given explicitly in Section [5.2](#qbpavailpgpsninudcgc){reference-type="ref" reference="qbpavailpgpsninudcgc"}. For this tetrad, we have $$\Psi_2=\frac{N}{4(r-N)^3},\qquad\tilde{\Psi}_2=\frac{9N}{4(r+N)^3},$$ and the rest of the Weyl scalars vanish. Thus, this is a principal tetrad, and we see that the Taub-bolt metric is of type D. The following four lemmas give the relevant properties of the perturbation equations [\[gtrpatiwyutgcvfpswpn\]](#gtrpatiwyutgcvfpswpn){reference-type="eqref" reference="gtrpatiwyutgcvfpswpn"} and [\[cwtixuyczqdydtnkkwrz\]](#cwtixuyczqdydtnkkwrz){reference-type="eqref" reference="cwtixuyczqdydtnkkwrz"} for our analysis. The proofs are entirely analogous to those in Section [3.1](#gsopptqotnpieuvyzrsp){reference-type="ref" reference="gsopptqotnpieuvyzrsp"}, and are therefore omitted. **Lemma 5**.
*For the tetrad given in [\[gphzicgjqwnoeqtpedqy\]](#gphzicgjqwnoeqtpedqy){reference-type="eqref" reference="gphzicgjqwnoeqtpedqy"} and [\[evgrekozfhalihhtkzwj\]](#evgrekozfhalihhtkzwj){reference-type="eqref" reference="evgrekozfhalihhtkzwj"}, the perturbation equation [\[gtrpatiwyutgcvfpswpn\]](#gtrpatiwyutgcvfpswpn){reference-type="eqref" reference="gtrpatiwyutgcvfpswpn"} is equivalent to the equation $\mathbf{L}\Phi=0$, where $\Phi=\Psi_2^{-2/3}\dot{\Psi}_0$ and $$\begin{gathered} \mathbf{L}=\frac{\partial}{\partial r}\Delta\frac{\partial}{\partial r}-\frac{4N(r+N)}{(r-N)^2}+\frac{\Sigma^2}{4N^2\Delta}\left(\frac{\partial}{\partial t}-i\frac{N(4r^2-11Nr+3N^2)}{\Sigma(r-N)}\right)^2\\+\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}+\frac{1}{\sin^2\theta}\left(\cos\theta\frac{\partial}{\partial t}-\frac{\partial}{\partial\phi}-2i\cos\theta\right)^2.\end{gathered}$$* *Furthermore, if $\Phi$ is a solution to this equation coming from a perturbation of the metric, then we can write $$\Phi(t,r,\theta,\phi)=\sum_{m,\omega,\Lambda}e^{i(m\phi-\omega t)}R_{m,\omega,\Lambda}(r)S_{m,\omega,\Lambda}(\theta),$$ where $m$ runs over $\frac{1}{2}\mathbb{Z}$, and $\omega$ runs over $m+\mathbb{Z}$, and for each choice of $m,\omega,\Lambda$, the function $R=R_{m,\omega,\Lambda}$ solves the equation $\mathbf{R}R=0$, and the function $S=S_{m,\omega,\Lambda}$ solves the boundary value problem $\mathbf{S}S=0$, $S'(0)=S'(\pi)=0$, where $$\mathbf{R}=\frac{d}{dr}\Delta\frac{d}{dr}+U(r),$$$$U(r)=-\frac{4N(r+N)}{(r-N)^2}-\frac{\Sigma^2}{4N^2\Delta}\left(\omega+\frac{N(4r^2-11Nr+3N^2)}{\Sigma(r-N)}\right)^2-\Lambda\label{brscghflsvddvilvaxss}$$ and $$\mathbf{S}=\frac{1}{\sin\theta}\frac{d}{d\theta}\sin\theta\frac{d}{d\theta}+V(\cos\theta),$$$$V(x)=-\frac{((\omega+2)x+m)^2}{1-x^2}+\Lambda.$$ The separation constant $\Lambda$ runs over the (countable set of) values for which such an $S$ exists, all of which are non-negative.* *The same statement 
holds if $\Phi$ is replaced by $\tilde{\Phi}=\tilde{\Psi}_2^{-2/3}\dot{\tilde{\Psi}}_0$, the operator $\mathbf{L}$ is replaced by $\tilde{\mathbf{L}}$, and the operator $\mathbf{R}$ replaced by $\tilde{\mathbf{R}}$, defined in the same way but using a potential $\tilde{U}$ in place of $U$. Here, $$\begin{gathered} \tilde{\mathbf{L}}=\frac{\partial}{\partial r}\Delta\frac{\partial}{\partial r}-\frac{36N(r-N)}{(r+N)^2}+\frac{\Sigma^2}{4N^2\Delta}\left(\frac{\partial}{\partial t}+i\frac{N(4r^2-19Nr+13N^2)}{\Sigma(r+N)}\right)^2\\+\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}+\frac{1}{\sin^2\theta}\left(\cos\theta\frac{\partial}{\partial t}-\frac{\partial}{\partial\phi}-2i\cos\theta\right)^2\end{gathered}$$ and $$\tilde{U}(r)=-\frac{36N(r-N)}{(r+N)^2}-\frac{\Sigma^2}{4N^2\Delta}\left(\omega-\frac{N(4r^2-19Nr+13N^2)}{\Sigma(r+N)}\right)^2-\Lambda.$$* **Lemma 6**. *The equation $\mathbf{R}R=0$ is an ordinary differential equation in a complex variable $r$, which has regular singular points at $r=2N$ and $r=N/2$. The point $r=\infty$ is an irregular singular point of rank $1$, except when $\omega=0$, in which case it is a regular singular point. Thus, the equation $\mathbf{R}R=0$ is a confluent Heun equation (see [@MR1858237 Section 3]) when $\omega\neq 0$, and a hypergeometric equation (see [@MR1858237 Section 2]) when $\omega=0$. The characteristic exponents at $r=2N$ are $$\pm(\omega-1),\label{ktwoepgjxhbeqkcvopco}$$ and those at $r=N/2$ are $$\pm\left(\frac{\omega}{4}-1\right).\label{zuksiezerycagdpuerxi}$$ When $\omega=0$, the characteristic exponents at $r=\infty$ are $$-\frac{3}{2}\pm i\sqrt{\frac{7}{2}+\Lambda}.\label{yproqligyhfppalbpbpv}$$ When $\omega\neq0$, the equation $\mathbf{R}R=0$ admits normal solutions (see [@MR0078494 Section 3.2]), near $r=\infty$, of the asymptotic form $$R\sim e^{\pm r\omega/2N}r^{-1\pm(5\omega/4-2)}.\label{uhibwswvrmbwarwyqxuc}$$* **Lemma 7**. 
*For a solution to the equation $\mathbf{R}R=0$ coming from a (globally smooth) perturbation of the Taub-bolt metric, the corresponding characteristic exponent at $r=2N$ is $|\omega-1|$.* **Lemma 8**. *Let $R$ be a solution to the equation $\mathbf{R}R=0$ coming from an asymptotically locally flat perturbation of the Taub-bolt metric. When $\omega=0$, none of the characteristic exponents at $r=\infty$ are compatible with the assumption of asymptotic local flatness. When $\omega\neq 0$, exactly one of the asymptotic normal solutions is compatible with this assumption, namely $$R\sim e^{-r|\omega|/2N}r^{-1-(5\omega/4-2)\operatorname{sgn}(\omega)}.$$* Corresponding lemmas regarding the asymptotics of the equation $\tilde{\mathbf{R}}R=0$ also hold. We omit them, since they are entirely analogous. ## Mode Stability *Proof of Theorem [\[wnqtrcvwqbjaufatjvne\]](#wnqtrcvwqbjaufatjvne){reference-type="ref" reference="wnqtrcvwqbjaufatjvne"}.* In this case, we see directly that $U(r)<0$, and integrating the equation $\mathbf{R}R=0$ by parts as in the proof of Theorem [\[vryfcgpvhdetgswyzzcn\]](#vryfcgpvhdetgswyzzcn){reference-type="ref" reference="vryfcgpvhdetgswyzzcn"}, we see that $R$ vanishes identically. The case involving the equation $\tilde{\mathbf{R}}R=0$ is entirely analogous.
◻ # Newman--Penrose Equations We have the Newman--Penrose commutation relations: $$\begin{aligned} [\Delta,D]\eta &= (\gamma + \tilde{\gamma}) D\eta + (\epsilon + \tilde{\epsilon}) \Delta \eta - (\pi + \tilde{\tau}) \delta \eta - (\tilde{\pi} + \tau) \tilde{\delta}\eta, \\ [D,\delta] \eta &= -(\tilde{\alpha} + \beta - \tilde{\pi}) D\eta - \kappa \Delta\eta + (\epsilon - \tilde{\epsilon} + \tilde{\rho}) \delta \eta + \sigma \tilde{\delta}\eta \label{cljxbmfrlnvouvtstprh}, \\ [\delta,\Delta] \eta &= - \tilde{\nu} D\eta - (\tilde{\alpha} + \beta - \tau) \Delta \eta + (- \gamma + \tilde{\gamma} + \mu) \delta \eta + \tilde{\lambda} \tilde{\delta}\eta, \\ [\tilde{\delta},D]\eta &= (\alpha + \tilde{\beta} - \pi) D\eta + \tilde{\kappa} \Delta \eta - \tilde{\sigma} \delta \eta + (\epsilon - \tilde{\epsilon} - \rho) \tilde{\delta}\eta, \\ [\tilde{\delta},\Delta] \eta &= - \nu D\eta - (\alpha + \tilde{\beta} - \tilde{\tau}) \Delta \eta + \lambda \delta \eta + (\gamma - \tilde{\gamma} + \tilde{\mu}) \tilde{\delta}\eta, \\ [\tilde{\delta},\delta] \eta &= (- \mu + \tilde{\mu}) D\eta + (- \rho + \tilde{\rho}) \Delta \eta + (\alpha - \tilde{\beta}) \delta \eta + (- \tilde{\alpha} + \beta) \tilde{\delta}\eta.\end{aligned}$$ In terms of the spin coefficients, the vacuum Einstein equations become $$\begin{aligned} - D\gamma + \Delta\epsilon ={}&\Psi_{2}{} - \tilde{\gamma} \epsilon - \gamma (2 \epsilon + \tilde{\epsilon}) - \kappa \nu + \beta \pi + \alpha \tilde{\pi} + \alpha \tau + \pi \tau + \beta \tilde{\tau},\\ - D\tilde{\gamma} + \Delta\tilde{\epsilon}={}&\tilde{\Psi}_{2}{} - \gamma \tilde{\epsilon} - \tilde{\gamma} (\epsilon + 2 \tilde{\epsilon}) - \tilde{\kappa} \tilde{\nu} + \tilde{\alpha} \pi + \tilde{\beta} \tilde{\pi} + \tilde{\beta} \tau + \tilde{\alpha} \tilde{\tau} + \tilde{\pi} \tilde{\tau},\\ - D\tau + \Delta\kappa ={}&\Psi_{1}{} - 3 \gamma \kappa - \tilde{\gamma} \kappa + \tilde{\pi} \rho + \pi \sigma + \epsilon \tau - \tilde{\epsilon} \tau + \rho \tau + \sigma
\tilde{\tau},\label{ztdbwahxrsqwsbvwjemy} \\ - D\tilde{\tau} + \Delta\tilde{\kappa}={}&\tilde{\Psi}_{1}{} - \gamma \tilde{\kappa} - 3 \tilde{\gamma} \tilde{\kappa} + \pi \tilde{\rho} + \tilde{\pi} \tilde{\sigma} + \tilde{\sigma} \tau - \epsilon \tilde{\tau} + \tilde{\epsilon} \tilde{\tau} + \tilde{\rho} \tilde{\tau},\\ - D\nu + \Delta\pi ={}&\Psi_{3}{} - 3 \epsilon \nu - \tilde{\epsilon} \nu + \gamma \pi - \tilde{\gamma} \pi + \mu \pi + \lambda \tilde{\pi} + \lambda \tau + \mu \tilde{\tau},\\ - D\tilde{\nu} + \Delta\tilde{\pi}={}&\tilde{\Psi}_{3}{} - \epsilon \tilde{\nu} - 3 \tilde{\epsilon} \tilde{\nu} + \tilde{\lambda} \pi - \gamma \tilde{\pi} + \tilde{\gamma} \tilde{\pi} + \tilde{\mu} \tilde{\pi} + \tilde{\mu} \tau + \tilde{\lambda} \tilde{\tau},\\ - \Delta\beta + \delta\gamma ={}&\tilde{\alpha} \gamma + 2 \beta \gamma - \alpha \tilde{\lambda} - \beta (\tilde{\gamma} + \mu) + \epsilon \tilde{\nu} + \nu \sigma - \gamma \tau - \mu \tau ,\\ \Delta\tilde{\alpha} - \delta\tilde{\gamma}={}&\tilde{\Psi}_{3}{} - \beta \tilde{\gamma} + \tilde{\beta} \tilde{\lambda} + \tilde{\alpha} (- \gamma + \mu) - \tilde{\epsilon} \tilde{\nu} - \tilde{\nu} \tilde{\rho} + \tilde{\gamma} \tau + \tilde{\lambda} \tilde{\tau},\\ - D\beta + \delta\epsilon ={}&\Psi_{1}{} - \tilde{\alpha} \epsilon - \beta \tilde{\epsilon} - \gamma \kappa - \kappa \mu + \epsilon \tilde{\pi} + \beta \tilde{\rho} + \alpha \sigma + \pi \sigma ,\label{dxjlcufhwofmcfykmlmk} \\ - D\tilde{\alpha} + \delta\tilde{\epsilon}={}&- \beta \tilde{\epsilon} - \tilde{\gamma} \kappa - \tilde{\kappa} \tilde{\lambda} + \tilde{\epsilon} \tilde{\pi} + \tilde{\pi} \tilde{\rho} + \tilde{\alpha} (\epsilon - 2 \tilde{\epsilon} + \tilde{\rho}) + \tilde{\beta} \sigma ,\\ - D\sigma + \delta\kappa ={}&\Psi_{0}{} - \tilde{\alpha} \kappa - 3 \beta \kappa + \kappa \tilde{\pi} + 3 \epsilon \sigma - \tilde{\epsilon} \sigma + \rho \sigma + \tilde{\rho} \sigma - \kappa \tau ,\label{bepkreovuhsolsavrkpy}\\ D\tilde{\rho} - \delta\tilde{\kappa}={}&3 
\tilde{\alpha} \tilde{\kappa} + \beta \tilde{\kappa} - \tilde{\kappa} \tilde{\pi} - \epsilon \tilde{\rho} - \tilde{\epsilon} \tilde{\rho} - \tilde{\rho}^2 - \sigma \tilde{\sigma} + \kappa \tilde{\tau},\\ \Delta\mu - \delta\nu ={}&\lambda \tilde{\lambda} + \gamma \mu + \tilde{\gamma} \mu + \mu^2 - \tilde{\alpha} \nu - 3 \beta \nu - \tilde{\nu} \pi + \nu \tau ,\\ - \Delta\tilde{\lambda} + \delta\tilde{\nu}={}&- \tilde{\Psi}_{4}{} + \gamma \tilde{\lambda} - 3 \tilde{\gamma} \tilde{\lambda} - \tilde{\lambda} \mu - \tilde{\lambda} \tilde{\mu} + \tilde{\nu} (3 \tilde{\alpha} + \beta + \tilde{\pi}) - \tilde{\nu} \tau ,\\ - D\mu + \delta\pi ={}&\Psi_{2}{} - \epsilon \mu - \tilde{\epsilon} \mu - \kappa \nu - \tilde{\alpha} \pi + \beta \pi + \pi \tilde{\pi} + \mu \tilde{\rho} + \lambda \sigma ,\\ - D\tilde{\lambda} + \delta\tilde{\pi}={}&\epsilon \tilde{\lambda} - 3 \tilde{\epsilon} \tilde{\lambda} - \kappa \tilde{\nu} + \tilde{\alpha} \tilde{\pi} - \beta \tilde{\pi} + \tilde{\pi}^2 + \tilde{\lambda} \tilde{\rho} + \tilde{\mu} \sigma ,\\ - \Delta\sigma + \delta\tau ={}&\kappa \tilde{\nu} - \tilde{\lambda} \rho + 3 \gamma \sigma - \tilde{\gamma} \sigma - \mu \sigma + \tilde{\alpha} \tau - \beta \tau - \tau^2,\\ \Delta\tilde{\rho} - \delta\tilde{\tau}={}&\tilde{\Psi}_{2}{} - \tilde{\kappa} \tilde{\nu} - \gamma \tilde{\rho} - \tilde{\gamma} \tilde{\rho} + \mu \tilde{\rho} + \tilde{\lambda} \tilde{\sigma} + \tilde{\alpha} \tilde{\tau} - \beta \tilde{\tau} + \tau \tilde{\tau},\\ - \delta\tilde{\beta} + \tilde{\delta}\tilde{\alpha}={}&\tilde{\Psi}_{2}{} - \alpha \tilde{\alpha} + 2 \tilde{\alpha} \tilde{\beta} - \beta \tilde{\beta} + \tilde{\epsilon} \mu - \tilde{\epsilon} \tilde{\mu} + \tilde{\gamma} \rho - \tilde{\gamma} \tilde{\rho} - \tilde{\mu} \tilde{\rho} + \tilde{\lambda} \tilde{\sigma},\\ \delta\alpha - \tilde{\delta}\beta ={}&\Psi_{2}{} - \alpha \tilde{\alpha} + 2 \alpha \beta - \beta \tilde{\beta} - \epsilon \mu + \epsilon \tilde{\mu} - \gamma \rho - \mu \rho + \gamma 
\tilde{\rho} + \lambda \sigma ,\\ \Delta\alpha - \tilde{\delta}\gamma ={}&\Psi_{3}{} - \tilde{\beta} \gamma - \alpha \tilde{\gamma} + \beta \lambda + \alpha \tilde{\mu} - \epsilon \nu - \nu \rho + \lambda \tau + \gamma \tilde{\tau},\\ - \Delta\tilde{\beta} + \tilde{\delta}\tilde{\gamma}={}&\alpha \tilde{\gamma} - \tilde{\alpha} \lambda - \tilde{\beta} (\gamma - 2 \tilde{\gamma} + \tilde{\mu}) + \tilde{\epsilon} \nu + \tilde{\nu} \tilde{\sigma} - \tilde{\gamma} \tilde{\tau} - \tilde{\mu} \tilde{\tau},\\ D\alpha - \tilde{\delta}\epsilon ={}&2 \alpha \epsilon + \tilde{\beta} \epsilon + \gamma \tilde{\kappa} + \kappa \lambda - \epsilon \pi - \pi \rho - \alpha (\tilde{\epsilon} + \rho) - \beta \tilde{\sigma},\\ - D\tilde{\beta} + \tilde{\delta}\tilde{\epsilon}={}&\tilde{\Psi}_{1}{} - \alpha \tilde{\epsilon} - \tilde{\gamma} \tilde{\kappa} - \tilde{\kappa} \tilde{\mu} + \tilde{\epsilon} \pi + \tilde{\beta} (- \epsilon + \rho) + \tilde{\alpha} \tilde{\sigma} + \tilde{\pi} \tilde{\sigma},\\ D\rho - \tilde{\delta}\kappa ={}&3 \alpha \kappa + \tilde{\beta} \kappa - \kappa \pi - \epsilon \rho - \tilde{\epsilon} \rho - \rho^2 - \sigma \tilde{\sigma} + \tilde{\kappa} \tau ,\\ - D\tilde{\sigma} + \tilde{\delta}\tilde{\kappa}={}&\tilde{\Psi}_{0}{} - \alpha \tilde{\kappa} - 3 \tilde{\beta} \tilde{\kappa} + \tilde{\kappa} \pi - \epsilon \tilde{\sigma} + 3 \tilde{\epsilon} \tilde{\sigma} + \rho \tilde{\sigma} + \tilde{\rho} \tilde{\sigma} - \tilde{\kappa} \tilde{\tau},\\ - \delta\tilde{\mu} + \tilde{\delta}\tilde{\lambda}={}&\tilde{\Psi}_{3}{} - \alpha \tilde{\lambda} + 3 \tilde{\beta} \tilde{\lambda} - \tilde{\alpha} \tilde{\mu} - \beta \tilde{\mu} + \mu \tilde{\pi} - \tilde{\mu} \tilde{\pi} + \tilde{\nu} \rho - \tilde{\nu} \tilde{\rho},\\ \delta\lambda - \tilde{\delta}\mu ={}&\Psi_{3}{} - \tilde{\alpha} \lambda + 3 \beta \lambda - \alpha \mu - \tilde{\beta} \mu - \mu \pi + \tilde{\mu} \pi - \nu \rho + \nu \tilde{\rho},\\ - \Delta\lambda + \tilde{\delta}\nu ={}&- \Psi_{4}{} - 3 
\gamma \lambda + \tilde{\gamma} \lambda - \lambda \mu - \lambda \tilde{\mu} + \nu (3 \alpha + \tilde{\beta} + \pi) - \nu \tilde{\tau},\\ \Delta\tilde{\mu} - \tilde{\delta}\tilde{\nu}={}&\lambda \tilde{\lambda} + \gamma \tilde{\mu} + \tilde{\gamma} \tilde{\mu} + \tilde{\mu}^2 - \alpha \tilde{\nu} - 3 \tilde{\beta} \tilde{\nu} - \nu \tilde{\pi} + \tilde{\nu} \tilde{\tau},\\ D\lambda - \tilde{\delta}\pi ={}&3 \epsilon \lambda - \tilde{\epsilon} \lambda + \tilde{\kappa} \nu - \alpha \pi + \tilde{\beta} \pi - \pi^2 - \lambda \rho - \mu \tilde{\sigma},\\ - D\tilde{\mu} + \tilde{\delta}\tilde{\pi}={}&\tilde{\Psi}_{2}{} - \epsilon \tilde{\mu} - \tilde{\epsilon} \tilde{\mu} - \tilde{\kappa} \tilde{\nu} - \alpha \tilde{\pi} + \tilde{\beta} \tilde{\pi} + \pi \tilde{\pi} + \tilde{\mu} \rho + \tilde{\lambda} \tilde{\sigma},\\ - \delta\tilde{\sigma} + \tilde{\delta}\tilde{\rho}={}&\tilde{\Psi}_{1}{} + \tilde{\kappa} (\mu - \tilde{\mu}) - \alpha \tilde{\rho} - \tilde{\beta} \tilde{\rho} + 3 \tilde{\alpha} \tilde{\sigma} - \beta \tilde{\sigma} + \rho \tilde{\tau} - \tilde{\rho} \tilde{\tau},\\ \delta\rho - \tilde{\delta}\sigma ={}&\Psi_{1}{} + \kappa (- \mu + \tilde{\mu}) - \tilde{\alpha} \rho - \beta \rho + 3 \alpha \sigma - \tilde{\beta} \sigma - \rho \tau + \tilde{\rho} \tau ,\label{ylsqrpzctcwthxvmlbxg} \\ \Delta\rho - \tilde{\delta}\tau ={}&\Psi_{2}{} - \kappa \nu - \gamma \rho - \tilde{\gamma} \rho + \tilde{\mu} \rho + \lambda \sigma + \alpha \tau - \tilde{\beta} \tau + \tau \tilde{\tau},\\ - \Delta\tilde{\sigma} + \tilde{\delta}\tilde{\tau}={}&\tilde{\kappa} \nu - \lambda \tilde{\rho} - \gamma \tilde{\sigma} + 3 \tilde{\gamma} \tilde{\sigma} - \tilde{\mu} \tilde{\sigma} + \alpha \tilde{\tau} - \tilde{\beta} \tilde{\tau} - \tilde{\tau}^2.\end{aligned}$$ Finally, we have the Bianchi identities: $$\begin{aligned} \begin{split}D\tilde{\Psi}_{3}{} - \Delta\Psi_{1}{} + \delta\Psi_{2}{} - \delta\tilde{\Psi}_{2}{}={}&\tilde{\Psi}_{4}{} \tilde{\kappa} + 2 \tilde{\Psi}_{1}{} 
\tilde{\lambda} + 2 \Psi_{1}{} (\gamma - \mu) + \Psi_{0}{} \nu - 3 \tilde{\Psi}_{2}{} \tilde{\pi}\\ & + 2 \tilde{\Psi}_{3}{} (\tilde{\epsilon} - \tilde{\rho}) + 2 \Psi_{3}{} \sigma - 3 \Psi_{2}{} \tau ,\end{split}\\ - \Delta\Psi_{0}{} + \delta\Psi_{1}{}={}&4 \Psi_{0}{} \gamma - \Psi_{0}{} \mu + 3 \Psi_{2}{} \sigma - 2 \Psi_{1}{} (\beta + 2 \tau),\label{dgzjedhogf}\\ D\tilde{\Psi}_{2}{} - \delta\tilde{\Psi}_{1}{}={}&2 \tilde{\Psi}_{3}{} \tilde{\kappa} + \tilde{\Psi}_{0}{} \tilde{\lambda} + 2 \tilde{\Psi}_{1}{} (\tilde{\alpha} - \tilde{\pi}) - 3 \tilde{\Psi}_{2}{} \tilde{\rho},\label{dozkyraifwwbixgillgs}\\ D\tilde{\Psi}_{4}{} - \delta\tilde{\Psi}_{3}{}={}&4 \tilde{\Psi}_{4}{} \tilde{\epsilon} + 3 \tilde{\Psi}_{2}{} \tilde{\lambda} - 2 \tilde{\Psi}_{3}{} (\tilde{\alpha} + 2 \tilde{\pi}) - \tilde{\Psi}_{4}{} \tilde{\rho},\\ - \Delta\Psi_{2}{} + \delta\Psi_{3}{}={}&-3 \Psi_{2}{} \mu + 2 \Psi_{1}{} \nu + \Psi_{4}{} \sigma + 2 \Psi_{3}{} (\beta - \tau),\\ - \Delta\Psi_{1}{} + \delta\Psi_{2}{}={}&2 \Psi_{1}{} (\gamma - \mu) + \Psi_{0}{} \nu + 2 \Psi_{3}{} \sigma - 3 \Psi_{2}{} \tau ,\label{lwtfleqbiqyqbugeyliv}\\ D\Psi_{1}{} - \tilde{\delta}\Psi_{0}{}={}&3 \Psi_{2}{} \kappa + \Psi_{0}{} (4 \alpha - \pi) - 2 \Psi_{1}{} (\epsilon + 2 \rho),\label{tffqwgbzjf}\\ \begin{split}D\Psi_{2}{} + D\tilde{\Psi}_{2}{} - \delta\tilde{\Psi}_{1}{} - \tilde{\delta}\Psi_{1}{}={}&2 \Psi_{3}{} \kappa + 2 \tilde{\Psi}_{3}{} \tilde{\kappa} + \Psi_{0}{} \lambda + \tilde{\Psi}_{0}{} \tilde{\lambda} + 2 \Psi_{1}{} (\alpha - \pi)\\ & + 2 \tilde{\Psi}_{1}{} (\tilde{\alpha} - \tilde{\pi}) - 3 \Psi_{2}{} \rho - 3 \tilde{\Psi}_{2}{} \tilde{\rho},\end{split}\label{drypwpymxoihwivvzutw}\\ \begin{split}D\Psi_{3}{} - \Delta\tilde{\Psi}_{1}{} - \tilde{\delta}\Psi_{2}{} + \tilde{\delta}\tilde{\Psi}_{2}{}={}&\Psi_{4}{} \kappa + 2 \Psi_{1}{} \lambda + 2 \tilde{\Psi}_{1}{} (\tilde{\gamma} - \tilde{\mu}) + \tilde{\Psi}_{0}{} \tilde{\nu} - 3 \Psi_{2}{} \pi\\ & + 2 \Psi_{3}{} (\epsilon - \rho) + 2 
\tilde{\Psi}_{3}{} \tilde{\sigma} - 3 \tilde{\Psi}_{2}{} \tilde{\tau},\end{split}\\ \begin{split}- \Delta\Psi_{2}{} - \Delta\tilde{\Psi}_{2}{} + \delta\Psi_{3}{} + \tilde{\delta}\tilde{\Psi}_{3}{}={}&-3 \Psi_{2}{} \mu - 3 \tilde{\Psi}_{2}{} \tilde{\mu} + 2 \Psi_{1}{} \nu + 2 \tilde{\Psi}_{1}{} \tilde{\nu} + \Psi_{4}{} \sigma\\ & + \tilde{\Psi}_{4}{} \tilde{\sigma} + 2 \Psi_{3}{} (\beta - \tau) + 2 \tilde{\Psi}_{3}{} (\tilde{\beta} - \tilde{\tau}),\end{split}\\ - \Delta\tilde{\Psi}_{3}{} + \tilde{\delta}\tilde{\Psi}_{4}{}={}&-2 \tilde{\Psi}_{3}{} (\tilde{\gamma} + 2 \tilde{\mu}) + 3 \tilde{\Psi}_{2}{} \tilde{\nu} + \tilde{\Psi}_{4}{} (4 \tilde{\beta} - \tilde{\tau}),\\ D\tilde{\Psi}_{1}{} - \delta\tilde{\Psi}_{0}{}={}&3 \tilde{\Psi}_{2}{} \tilde{\kappa} + \tilde{\Psi}_{0}{} (4 \tilde{\alpha} - \tilde{\pi}) - 2 \tilde{\Psi}_{1}{} (\tilde{\epsilon} + 2 \tilde{\rho}),\\ D\tilde{\Psi}_{2}{} - \delta\tilde{\Psi}_{1}{}={}&2 \tilde{\Psi}_{3}{} \tilde{\kappa} + \tilde{\Psi}_{0}{} \tilde{\lambda} + 2 \tilde{\Psi}_{1}{} (\tilde{\alpha} - \tilde{\pi}) - 3 \tilde{\Psi}_{2}{} \tilde{\rho},\\ - \Delta\tilde{\Psi}_{0}{} + \tilde{\delta}\tilde{\Psi}_{1}{}={}&4 \tilde{\Psi}_{0}{} \tilde{\gamma} - \tilde{\Psi}_{0}{} \tilde{\mu} + 3 \tilde{\Psi}_{2}{} \tilde{\sigma} - 2 \tilde{\Psi}_{1}{} (\tilde{\beta} + 2 \tilde{\tau}),\\ - \Delta\Psi_{3}{} + \delta\Psi_{4}{}={}&-2 \Psi_{3}{} (\gamma + 2 \mu) + 3 \Psi_{2}{} \nu + \Psi_{4}{} (4 \beta - \tau),\\ - \Delta\Psi_{2}{} + \delta\Psi_{3}{}={}&-3 \Psi_{2}{} \mu + 2 \Psi_{1}{} \nu + \Psi_{4}{} \sigma + 2 \Psi_{3}{} (\beta - \tau),\\ D\Psi_{4}{} - \tilde{\delta}\Psi_{3}{}={}&4 \Psi_{4}{} \epsilon + 3 \Psi_{2}{} \lambda - 2 \Psi_{3}{} (\alpha + 2 \pi) - \Psi_{4}{} \rho ,\\ - \Delta\Psi_{1}{} + \delta\Psi_{2}{}={}&2 \Psi_{1}{} (\gamma - \mu) + \Psi_{0}{} \nu + 2 \Psi_{3}{} \sigma - 3 \Psi_{2}{} \tau ,\\ D\Psi_{3}{} - \tilde{\delta}\Psi_{2}{}={}&\Psi_{4}{} \kappa + 2 \Psi_{1}{} \lambda - 3 \Psi_{2}{} \pi + 2 \Psi_{3}{} (\epsilon - \rho),\\ 
D\Psi_{3}{} - \tilde{\delta}\Psi_{2}{}={}&\Psi_{4}{} \kappa + 2 \Psi_{1}{} \lambda - 3 \Psi_{2}{} \pi + 2 \Psi_{3}{} (\epsilon - \rho).\end{aligned}$$ ## Spin Coefficients for Kerr {#lkxvvmyizsfanehcgbyb} $$\begin{aligned} \alpha ={}&\frac{r\cos\theta-a}{(r-a\cos\theta)2\sqrt{2\Sigma}\sin\theta},\\ \beta ={}&\frac{r\cos\theta-a}{(r-a\cos\theta)2\sqrt{2\Sigma}\sin\theta},\\ \gamma ={}&\frac{i(-\Delta/(r-a\cos\theta)+r-M)}{2\sqrt{2\Delta\Sigma}},\\ \epsilon ={}&\frac{i(-\Delta/(r-a\cos\theta)+r-M)}{2\sqrt{2\Delta\Sigma}},\\ \kappa ={}&0,\\ \lambda ={}&0,\\ \mu ={}&-\frac{i\sqrt{\Delta/2\Sigma}}{r-a\cos\theta},\\ \nu ={}&0,\\ \pi ={}&-\frac{a\sin\theta}{(r-a\cos\theta)\sqrt{2\Sigma}},\\ \rho ={}&-\frac{i\sqrt{\Delta/2\Sigma}}{r-a\cos\theta},\\ \sigma ={}&0,\\ \tau ={}&-\frac{a\sin\theta}{(r-a\cos\theta)\sqrt{2\Sigma}},\\ \tilde{\alpha}={}&-\frac{r\cos\theta+a}{(r+a\cos\theta)2\sqrt{2\Sigma}\sin\theta},\\ \tilde{\beta}={}&-\frac{r\cos\theta+a}{(r+a\cos\theta)2\sqrt{2\Sigma}\sin\theta},\\ \tilde{\gamma}={}&\frac{i(-\Delta/(r+a\cos\theta)+r-M)}{2\sqrt{2\Delta\Sigma}},\\ \tilde{\epsilon}={}&\frac{i(-\Delta/(r+a\cos\theta)+r-M)}{2\sqrt{2\Delta\Sigma}},\\ \tilde{\kappa}={}&0,\\ \tilde{\lambda}={}&0,\\ \tilde{\mu}={}&- \frac{i \sqrt{\Delta/2\Sigma}}{r+a \cos\theta},\\ \tilde{\nu}={}&0,\\ \tilde{\pi}={}&- \frac{a \sin\theta}{(r+a \cos\theta) \sqrt{2\Sigma}},\\ \tilde{\rho}={}&- \frac{i \sqrt{\Delta/2\Sigma}}{r+a \cos\theta},\\ \tilde{\sigma}={}&0,\\ \tilde{\tau}={}&- \frac{a \sin\theta}{(r+a \cos\theta) \sqrt{2\Sigma}}.\end{aligned}$$ ## Spin Coefficients for Taub-Bolt {#qbpavailpgpsninudcgc} $$\begin{aligned} \alpha ={}&\frac{N(r+N)^2}{8\Sigma\sqrt{2\Delta\Sigma}},\\ \beta ={}&\frac{N(r+N)^2}{8\Sigma\sqrt{2\Delta\Sigma}},\\ \gamma ={}&\frac{i\cot\theta}{2\sqrt{2\Sigma}},\\ \epsilon ={}&\frac{i\cot\theta}{2\sqrt{2\Sigma}},\\ \kappa ={}&0,\\ \lambda ={}&0,\\ \mu ={}&0,\\ \nu ={}&0,\\ \pi ={}&-\frac{(r+N)\sqrt{\Delta/2\Sigma}}{\Sigma},\\ \rho ={}&0,\\ \sigma ={}&0,\\ \tau
={}&-\frac{(r+N)\sqrt{\Delta/2\Sigma}}{\Sigma},\\ \tilde{\alpha}={}&-\frac{9N(r-N)}{8(r+N)\sqrt{\Delta\Sigma}},\\ \tilde{\beta}={}&-\frac{9N(r-N)}{8(r+N)\sqrt{\Delta\Sigma}},\\ \tilde{\gamma}={}&\frac{i\cot\theta}{2\sqrt{2\Sigma}},\\ \tilde{\epsilon}={}&\frac{i\cot\theta}{2\sqrt{2\Sigma}},\\ \tilde{\kappa}={}&0,\\ \tilde{\lambda}={}&0,\\ \tilde{\mu}={}&0,\\ \tilde{\nu}={}&0,\\ \tilde{\pi}={}&\frac{\sqrt{\Delta/2\Sigma}}{r+N},\\ \tilde{\rho}={}&0,\\ \tilde{\sigma}={}&0,\\ \tilde{\tau}={}&\frac{\sqrt{\Delta/2\Sigma}}{r+N}.\end{aligned}$$ [^1]: By type D, we mean Petrov type $\mathrm{D}^+\mathrm{D}^-$, cf. [@Biquard_2022; @aksteiner2022gravitational]. The only ALF instantons of type D are the Riemannian Kerr and the Taub-Bolt metrics. [^2]: *Note that the equation $\mathbf{L}\Phi=0$ is the same equation as that occurring in the Lorentzian case (see [@MR0995773; @Andersson_2017]), but with $t$ replaced with $it$, $a$ replaced with $-ia$, and with $s=-2$.*
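As a sanity check on the Kerr table above, the product identities implied by the tilde symmetry can be verified symbolically. The sketch below is not from the paper: it assumes the standard Riemannian-Kerr abbreviation $\Sigma = r^2 - a^2\cos^2\theta$ (the Lorentzian $r^2 + a^2\cos^2\theta$ with $a$ replaced by $-ia$, in line with the footnote; neither $\Sigma$ nor $\Delta$ is defined in this excerpt) and leaves $\Delta$ symbolic.

```python
import sympy as sp

r, a, theta = sp.symbols('r a theta', positive=True)
Delta = sp.Symbol('Delta', positive=True)  # left symbolic; its explicit form is not needed

# Assumption (not stated in this excerpt): Sigma = r^2 - a^2 cos^2(theta),
# so that Sigma = (r - a*cos(theta)) * (r + a*cos(theta)).
Sigma = r**2 - a**2 * sp.cos(theta)**2

# Kerr spin coefficients as listed in the table:
rho     = -sp.I * sp.sqrt(Delta / (2 * Sigma)) / (r - a * sp.cos(theta))
rho_til = -sp.I * sp.sqrt(Delta / (2 * Sigma)) / (r + a * sp.cos(theta))
tau     = -a * sp.sin(theta) / ((r - a * sp.cos(theta)) * sp.sqrt(2 * Sigma))
tau_til = -a * sp.sin(theta) / ((r + a * sp.cos(theta)) * sp.sqrt(2 * Sigma))

# Each product of a coefficient with its tilded partner collapses to a
# rational function of Sigma alone:
assert sp.simplify(rho * rho_til + Delta / (2 * Sigma**2)) == 0
assert sp.simplify(tau * tau_til - a**2 * sp.sin(theta)**2 / (2 * Sigma**2)) == 0
```

Since $(r-a\cos\theta)(r+a\cos\theta)=\Sigma$ under the assumed abbreviation, both products lose all dependence on the choice of spin structure, which is the expected behaviour for tilde-symmetric combinations.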
--- abstract: - | In [@Hal23], we showed that the blowup of a weighted extremal Kähler manifold at a relatively stable fixed point admits a weighted extremal metric. Using this result, we prove that a weighted extremal manifold is relatively weighted K-polystable. In particular, a weighted cscK manifold is weighted K-polystable. This strengthens both the weighted K-semistability proved by Lahdili [@Lah19] and Inoue [@Ino20], and the weighted K-polystability with respect to smooth degenerations by Apostolov--Jubert--Lahdili [@AJL21], allowing for possibly singular degenerations. - | I thank Eiji Inoue and Abdellah Lahdili for kindly answering some questions on their works. I thank Ruadhaí Dervan for comments on the paper and discussions on the existence of Chow stable points, and Zakarias Sjöström Dyrefelt for comments on the paper. I also thank Eveline Legendre and Lars Sektnan for their interest in the project. address: Department of Mathematics, Aarhus University, Ny Munkegade 118, DK-8000 Aarhus C, Denmark. author: - Michael Hallam bibliography: - references.bib title: Stability of weighted extremal manifolds through blowups --- # Introduction The question of when a Kähler manifold admits a canonical Kähler metric is a cornerstone of Kähler geometry, and has yielded fascinating links with algebraic geometry. The problem was first posed by Calabi in the case of Kähler--Einstein metrics [@Cal54], and later in greater generality for constant scalar curvature Kähler (cscK) metrics and extremal Kähler metrics [@Cal82]. The existence of a cscK or extremal metric is conjecturally equivalent to various notions of K-stability, through the Yau--Tian--Donaldson conjecture [@Yau93; @Tia97; @Don02]. The earliest forms of K-stability were introduced in [@Tia97; @Don02]; also relevant to us will be the condition of relative K-stability [@Sze07], as well as the generalisations of K-stability to non-algebraic Kähler manifolds [@SD18; @DR17; @SD20; @Der18]. 
By now there are various approaches to proving K-stability of manifolds with canonical Kähler metrics, such as employing geodesic rays and asymptotic slope formulae of energy functionals; see [@PRS08; @BHJ19; @BDL20; @SD18; @SD20]. Although some of these methods will be important in this work, we instead focus on proofs which make use of existence results for canonical Kähler metrics on blowups. The first result of this kind was proven for cscK manifolds with discrete automorphisms [@AP06], and was later generalised to extremal manifolds in [@APS11; @Sze12; @Sze15; @DS21]. Using such existence results, one can "upgrade" K-semistability to K-stability. This argument was first carried out by Stoppa in the polarised cscK case [@Sto09; @Sto11], and later by Stoppa--Székelyhidi in the polarised extremal case [@SS11]; see also [@DR17; @Der18] for the non-polarised Kähler setting. Alongside cscK and extremal Kähler metrics, many other important notions of canonical Kähler metric have been introduced. Examples include Kähler--Ricci solitons, extremal Sasaki metrics [@BGS08], as well as conformally Kähler Einstein--Maxwell metrics [@LeB10]. It was shown by Lahdili that these all fit under the umbrella of *weighted extremal metrics* [@Lah19]. Independently, Inoue introduced an important subclass of Lahdili's metrics called *$\mu$-cscK metrics*, which provide a convenient generalisation of both extremal metrics and Kähler--Ricci solitons [@Ino22]. To give a brief description of weighted cscK and extremal metrics, let $(X,\omega)$ be a compact Kähler manifold, and let $T$ be a compact real torus acting on $X$ by hamiltonian isometries. Let $\mu:X\to\mathfrak{t}^*$ be a moment map for the action, and write $P:=\mu(X)$ for the moment polytope. The *weight functions* that give rise to the weighted extremal equation are positive smooth functions $v,w:P\to\mathbb{R}_{>0}$.
Using these, one defines a *weighted scalar curvature* $S_{v,w}(\omega)$, which is a deformation of the usual scalar curvature $S(\omega)$; we refer to Section [2.1](#subsec:weighted_extremal_metrics){reference-type="ref" reference="subsec:weighted_extremal_metrics"} for the precise definitions and examples. If $S_{v,w}(\omega)$ is constant we say $\omega$ is *weighted cscK*, and if $\nabla^{1,0}S_{v,w}(\omega)$ is a holomorphic vector field then we call $\omega$ a *weighted extremal metric*. All of the canonical Kähler metrics above can be recovered as weighted cscK or extremal metrics. Alongside weighted cscK metrics, Lahdili introduced a notion of weighted K-stability with respect to smooth degenerations [@Lah19]. The definition involves integrating the weighted scalar curvature of a Kähler metric over the total space of a test configuration, which requires the test configuration be smooth. While this assumption is sufficient to recover a good notion of weighted K-semistability, for weighted K-(poly)stability one must also define stability with respect to test configurations that have singular total space. Such a definition was achieved by Inoue [@Ino20], who used the theory of equivariant cohomology to define the weighted Donaldson--Futaki invariant of a singular test configuration. One slight restriction we make in our work, as was made in [@Ino20], is that we assume the weight functions $v$ and $w$ can be written as the composition of a linear functional $\ell_\xi:P\to\mathbb{R}$ with real analytic functions $f_v$ and $f_w$ on $\ell_\xi(P)\subset\mathbb{R}$. In practice, most examples of weighted cscK and extremal metrics satisfy this condition. The assumption on the weights is laid out clearly in Assumption [Assumption 9](#ass:weights){reference-type="ref" reference="ass:weights"}. We now state the main results.
Let $(X,\alpha_T)$ be a compact Kähler manifold equipped with a hamiltonian isometric $T$-action; here $\alpha$ is a fixed Kähler class on $X$ and $\alpha_T$ a fixed extension of $\alpha$ to a $T$-equivariant degree 2 cohomology class (such an extension is equivalent to a choice of normalisation for the moment map of the $T$-action). In [@Hal23], we showed that if $X$ admits a weighted extremal metric, so too does its blowup at a relatively stable point that is fixed by both the $T$-action and the extremal field. Using this, we will show that a weighted extremal manifold is relatively weighted K-polystable. **Theorem 1**. *Let $(X,\alpha_T)$ be a $(v,w)$-extremal manifold, where $T$ is maximal in the hamiltonian isometry group of the weighted extremal metric, and the weight functions $v,w$ satisfy Assumption [Assumption 9](#ass:weights){reference-type="ref" reference="ass:weights"}. Then $(X,\alpha_T)$ is relatively weighted K-polystable.* From this, we will easily obtain the following corollary: **Corollary 2**. *Let $(X,\alpha_T)$ be a $(v,w)$-cscK manifold, where $T$ is maximal and the weight functions $v,w$ satisfy Assumption [Assumption 9](#ass:weights){reference-type="ref" reference="ass:weights"}. Then $(X,\alpha_T)$ is weighted K-polystable.* These results rely on the already proven fact that a weighted extremal manifold is relatively weighted K-semistable. This was first shown with respect to smooth degenerations by Lahdili using asymptotic slope techniques [@Lah19], and then extended to singular degenerations by Inoue [@Ino20], who used a continuity argument to reduce to Lahdili's result. Then, Apostolov--Jubert--Lahdili proved that weighted cscK manifolds are weighted K-polystable with respect to smooth degenerations [@AJL21]; Corollary [Corollary 2](#cor:main){reference-type="ref" reference="cor:main"} strengthens this stability result to possibly singular test configurations, using a completely different approach.
For the proof of the main result, we follow the basic strategy of previous works [@Sto09; @SS11; @Der18]. Namely, we first use that a weighted extremal manifold is relatively weighted K-semistable. We then take a test configuration with vanishing relative Donaldson--Futaki invariant, and seek to show it is a product test configuration; see Section [2](#sec:background){reference-type="ref" reference="sec:background"} for the relevant definitions here. Assuming it is not a product, we show that there exists a $T$-invariant point $p\in X$ such that $\mathrm{Bl}_pX$ has a destabilising test configuration, which is obtained from the original test configuration through a standard process. This contradicts the existence result in [@Hal23] for weighted extremal metrics on blowups, and hence implies the test configuration must have been a product. One novelty in our approach is that we compute the change in weighted Donaldson--Futaki invariant of a twisted test configuration using equivariant localisation techniques, in place of asymptotic slope formulae for the weighted Mabuchi functional used in [@Der18]. The use of equivariant localisation techniques has become increasingly prominent in K-stability, in particular through the work of Legendre [@Leg21] and Inoue [@Ino20]. Whereas Legendre's work computes the Donaldson--Futaki invariant directly via equivariant localisation on the total space of the test configuration, Inoue's approach instead pushes forward the equivariant classes to $\mathbb{C}$ first, and then applies equivariant localisation. This latter approach was first seen in the work of Wang in the unweighted setting [@Wan12], and it is this method that we use here. Along the way, we also correct some minor inaccuracies in [@Der18]. In particular, we introduce a new variant of the Chow weight that is invariant under twisting the test configuration by the torus action.
In the polarised unweighted case, this invariant is nothing new as it can be obtained by twisting the test configuration by a rational one-parameter subgroup so that it is orthogonal to the torus. However, when the Kähler class is not rational, or in the presence of weight functions, this new Chow weight is an essential ingredient in expanding the Donaldson--Futaki invariant orthogonal to the torus. To prove the existence of a point with positive $T$-invariant Chow weight, we prove a kind of uniform Chow stability, namely that one can find a point with positive Chow weight bounded below by a positive constant times the norm of the test configuration. This eventually allows one to reduce to the rational case via an approximation argument. # Background {#sec:background} ## Weighted extremal metrics {#subsec:weighted_extremal_metrics} Here we very briefly review the definitions and our conventions on weighted cscK metrics; a more thorough treatment was already given in [@Hal23]. Let $(X,\omega)$ be a compact $n$-dimensional Kähler manifold. The Ricci curvature of $\omega$ is $$\mathrm{Ric}(\omega):=-\frac{i}{2\pi}\partial\overline{\partial}\log\omega^n,$$ and the scalar curvature is $$S(\omega):=\Lambda_\omega\mathrm{Ric}(\omega)=\frac{n\,\mathrm{Ric}(\omega)\wedge\omega^{n-1}}{\omega^n}.$$ Take: 1. $T$ a real torus acting effectively on $(X,\omega)$ by hamiltonian isometries, 2. $\mu:X\to \mathfrak{t}^*$ a moment map for the $T$-action, 3. $P:=\mu(X)\subset\mathfrak{t}^*$ the moment polytope [@Ati82; @GS82], 4. $v,w:P\to\mathbb{R}_{>0}$ positive smooth functions. Our sign convention for moment maps is $$\langle d\mu,\xi\rangle = -\omega(\xi,-)$$ for all $\xi\in\mathfrak{t}$, where the pairing $\langle-,-\rangle$ is the natural one on $\mathfrak{t}^*\otimes\mathfrak{t}$, and we abuse notation by writing $\xi$ for the vector field it generates on $X$. **Definition 3**. 
*Given the above data, we define the *$v$-weighted scalar curvature* of $\omega$ to be $$S_v(\omega):=v(\mu)S(\omega)-2\Delta(v(\mu))+\frac{1}{2}\mathrm{Tr}(g\circ\mathrm{Hess}(v)(\mu)).$$ Here $S(\omega)=\Lambda_\omega\mathrm{Ric}(\omega)$ is the scalar curvature, $\Delta=-\partial^*\partial$ is the Kähler Laplacian of $\omega$, and $g$ is the Riemannian metric determined by $\omega$.* Concretely, the term $\mathrm{Tr}(g\circ\mathrm{Hess}(v)(\mu))$ may be written $$\sum_{a,b}v_{,ab}(\mu)g(\xi_a,\xi_b),$$ where $\xi_1,\ldots,\xi_r$ is a basis of $\mathfrak{t}$, and $v_{,ab}$ denotes the $ab$-partial derivative of $v$ with respect to the dual basis of $\mathfrak{t}^*$. **Definition 4** ([@Lah19]). *The metric $\omega$ is:* 1. *a *$(v,w)$-weighted cscK metric*, if $$S_v(\omega)=c_{v,w}w(\mu),$$ where $c_{v,w}$ is a constant,* 2. *a *$(v,w)$-weighted extremal metric* if the function $$S_{v,w}(\omega):=S_v(\omega)/w(\mu)$$ is a holomorphy potential with respect to $\omega$.* Sometimes we shorten the full name to just a $(v,w)$-cscK metric, or a $(v,w)$-extremal metric. If the weight functions $v$ and $w$ are understood or irrelevant, we may also refer to such a metric simply as a weighted cscK metric, or a weighted extremal metric. For $\xi\in\mathfrak{t}$, we write $\mu^\xi:=\langle\mu,\xi\rangle=\ell_\xi\circ\mu$, where $\ell_\xi\in(\mathfrak{t}^*)^*$ is the element corresponding to $\xi\in\mathfrak{t}$. Given a basis $\{\xi_a\}$ for $\mathfrak{t}$, we also write $\mu^a$ in place of $\mu^{\xi_a}$. The function $\mu^\xi$ is then a hamiltonian for the infinitesimal action of $\xi$ on $X$. We are interested in finding a weighted extremal metric in the class $\alpha:=[\omega]$. Given a $T$-invariant Kähler potential $\varphi\in\mathcal{H}^T$, let $$\mu_\varphi:=\mu+d^c\varphi.$$ That is, for any $\xi\in\mathfrak{t}$, $$\mu_\varphi^\xi:=\mu^\xi+d^c\varphi(\xi),$$ where we again abuse notation by writing $\xi$ for the vector field it generates on $X$. 
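As a quick illustration of the sign convention for moment maps, here is a standard worked example (not taken from the text): take $X=\mathbb{C}$ with $\omega=dx\wedge dy$ and the rotation action $e^{i\theta}\cdot z$, whose generator is the vector field $\xi=x\partial_y-y\partial_x$. Then

```latex
\iota_\xi\omega = dx\wedge dy\,(\xi,-)
               = -y\,dy - x\,dx
               = -d\left(\tfrac{x^2+y^2}{2}\right),
\qquad\text{so}\qquad
\langle d\mu^\xi,\xi\rangle\,\text{-convention gives}\quad
d\mu^\xi = -\omega(\xi,-) = d\left(\tfrac{|z|^2}{2}\right).
```

Hence $\mu^\xi=|z|^2/2$ up to an additive constant, exhibiting $\mu^\xi$ as a hamiltonian for the rotation, as above.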
Our convention here is $d^c:=\frac{i}{2}(\overline{\partial}-\partial)$, so that $dd^c=i\partial\overline{\partial}$. **Lemma 5** ([@Lah19 Lemma 1]). *With the above definition, $\mu_\varphi$ is a moment map for the $T$-action with respect to $\omega_\varphi$, and $\mu_\varphi(X)=P$, where $P$ is the moment polytope for $\mu=\mu_0$.* With this lemma in mind, it then makes sense to search for a $(v,w)$-extremal metric in the class $\alpha$, which is $T$-invariant by fiat. **Lemma 6** ([@Lah19 Lemma 2]). *With the above choice of moment map $\mu_\varphi$, the following quantities are independent of the choice of $\varphi\in\mathcal{H}^T$:* 1. *$\int_Xv(\mu_\varphi)\,\omega_\varphi^n$,* 2. *$\int_Xv(\mu_\varphi)\,\mathrm{Ric}(\omega_\varphi)\wedge\omega_\varphi^{n-1}+\int_X\langle dv(\mu_\varphi),-\Delta_{\varphi}\mu_\varphi\rangle\,\omega_\varphi^n$,* 3. *$\int_X S_v(\omega_\varphi)\,\omega_\varphi^n$.* *It follows that the constant $c_{v,w}$ of Definition [Definition 4](#def:weighted_extremal){reference-type="ref" reference="def:weighted_extremal"} is fixed, given by $$c_{v,w}=\frac{\int_XS_v(\omega)\,\omega^n}{\int_Xw(\mu)\,\omega^n}.$$* We will henceforth write $$\label{eq:average_weighted_scalar_curvature} \hat{S}_{v,w}:=\frac{\int_X S_{v,w}(\omega) w(\mu) \omega^n}{\int_X w(\mu) \omega^n}$$ in place of $c_{v,w}$, to indicate that this is the average of the weighted scalar curvature $S_{v,w}(\omega)$ with respect to the measure $w(\mu)\omega^n$. **Remark 7**. The significance of $-\Delta_\varphi\mu_\varphi$ in *(2)* of Lemma [Lemma 6](#lem:const_cvw){reference-type="ref" reference="lem:const_cvw"} is that it is a moment map for the Ricci curvature $\mathrm{Ric}(\omega_\varphi)$, see [@Lah19 Lemma 5]. That is, for any $\xi\in\mathfrak{t}$, $$\langle d(-\Delta_\varphi\mu_\varphi)(x),\xi\rangle = \mathrm{Ric}(\omega_\varphi)(x)(-,\xi_x).$$ **Example 8**. 
Fix an element $\xi\in\mathfrak{t}$, and denote by $\ell_\xi:\mathfrak{t}^*\to\mathbb{R}$ the corresponding element of $(\mathfrak{t}^*)^*$. Let $a$ be a constant such that $a+\ell_\xi>0$ on $P$. Many standard canonical metrics can be obtained from certain choices of the functions $v,w$ [@Lah19 Section 3]: 1. **CscK:** Taking $v$ and $w$ constant, the weighted cscK equation reduces to $S(\omega)=const$. 2. **Extremal:** Taking $v$ and $w$ constant again, a weighted extremal metric is precisely an extremal metric in the usual sense, meaning $\overline{\partial}\nabla^{1,0}S(\omega)=0$. 3. **Kähler--Ricci soliton:** For $X$ Fano and $\alpha = c_1(X)$, a weighted extremal metric in $\alpha$ with weights $v=w=e^{\ell_\xi}$ and extremal field $\xi$ is a Kähler--Ricci soliton. 4. **Extremal Sasaki:** Suppose that $[\omega]$ is the first Chern class $c_1(L)$ of an ample line bundle $L\to X$. A choice of Kähler metric $\omega_\varphi\in\alpha$ then corresponds to a Sasaki metric on the unit circle bundle $S$ of $L^*$. Letting $$v:=(a+\ell_\xi)^{-n-1},\quad w:=(a+\ell_\xi)^{-n-3},$$ a $(v,w)$-extremal metric on $X$ then corresponds to an extremal Sasaki metric on $S$ with Sasaki--Reeb field $\xi$ [@AC21; @ACL21]. 5. **Conformally Kähler--Einstein Maxwell:** Letting $$v=(a+\ell_\xi)^{-2n+1},\quad w=(a+\ell_\xi)^{-2n-1},$$ a $(v,w)$-cscK metric on $X$ then corresponds to a conformally Kähler Einstein--Maxwell metric [@Lah20]. 6. **$v$-soliton:** Suppose $X$ is Fano and $\alpha=c_1(X)$. Taking $v$ arbitrary and defining $$w(p)=2v(p)(n+\langle d\log v(p),p\rangle),$$ the $(v,w)$-cscK equation then becomes $$\mathrm{Ric}(\omega)-\omega=i\partial\overline{\partial}\log v(\mu),$$ which is the $v$-soliton equation [@HL20]. 7. **$\mu$-cscK:** In [@Ino22; @Ino20], Inoue introduced and studied a class of *$\mu$-cscK metrics*. 
These are a special class of weighted extremal metrics, given by the same weight functions $v=w=e^{\ell_\xi}$ and extremal field $\xi$ as for Kähler--Ricci solitons, only one drops the condition of $X$ being Fano [@Ino22 Section 2.1.6]. Aside from the general $v$-solitons, notice that all of these examples are of a particular form, namely the weight functions are the composition of a linear functional $\ell_\xi:\mathfrak{t}^*\to\mathbb{R}$ with a real analytic function $\widetilde{v}$ or $\widetilde{w}$ on a subset of $\mathbb{R}$. In order to apply Inoue's equivariant intersection theory for weighted cscK metrics [@Ino20], we will assume that our weight functions take this form in later sections. Furthermore, suppose we choose another moment map $\mu'=\mu+\eta$ where $\eta\in\mathfrak{t}^*$ is a constant. We then define $v',w':P'\to\mathbb{R}$ by $v'(p'):=v(p'-\eta)$ and $w'(p'):=w(p'-\eta)$, where $P'=P+\eta$ is the moment polytope of $\mu'$. The class $\alpha$ then admits a weighted extremal (resp. weighted cscK) metric for the weight functions $v,w$ and moment map $\mu$ if and only if it admits a weighted extremal (resp. weighted cscK) metric for the weight functions $v',w'$ and moment map $\mu'$. It follows we may freely choose our normalisation of the moment map in what follows. **Assumption 9**. We assume: 1. The weight functions $v,w:P\to\mathbb{R}$ are of the form $v(p)=\widetilde{v}(\langle\xi,p\rangle)$, $w(p)=\widetilde{w}(\langle\xi,p\rangle)$, where: 1. $\xi\in\mathfrak{t}$ is a fixed element of the Lie algebra, 2. $\widetilde{v}=f^{(n)}$ and $\widetilde{w}=g^{(n+1)}$, where $f$ and $g$ are real analytic functions defined on a neighbourhood of $\langle P,\xi\rangle\subset\mathbb{R}$. 2. 
The moment polytope $P$ is translated so that the closed interval $\langle P,\xi\rangle\subset\mathbb{R}$ has $0$ as its midpoint, hence the analytic functions $f$ and $g$ have power series expansions about $0\in\mathbb{R}$ that converge on a neighbourhood of $\langle P,\xi\rangle$. As a remark, we will later assume that the torus $T$ is maximal. This does not impose any condition on the existence of solutions: suppose that $\omega$ is a weighted cscK metric for the torus $T$, and let $T'\supset T$ be a larger torus in the hamiltonian isometry group of $(X,\omega)$. There is a natural projection $p:(\mathfrak{t}')^*\to\mathfrak{t}^*$, which is compatible with the moment maps $\mu:X\to\mathfrak{t}^*$ and $\mu':X\to(\mathfrak{t}')^*$. Defining $v':= v\circ p$ and $w':= w\circ p$, we see that $v'(\mu') = v(\mu)$ and $w'(\mu') = w(\mu)$. It follows easily from the definitions that a $(v,w)$-cscK (resp. extremal) metric for $T$ is a $(v',w')$-cscK (resp. extremal) metric for $T'$, and vice versa. ## Kähler test configurations We next review the theory of Kähler test configurations, which was introduced in [@SD18; @DR17]. We take a particular interest in the equivariant geometry of test configurations, via moment maps. We will assume some familiarity with equivariant cohomology (including Cartan's formulation) [@Ino20 Appendix] as well as Kähler complex spaces [@Fis76] and Bott--Chern cohomology [@BG13 Section 4.6.1]. Let $(X,\omega)$ be a compact Kähler manifold, and $T$ be a compact torus acting on $(X,\omega)$ by hamiltonian isometries. Note the action extends to a $T^{\mathbb{C}}$ action. We write $\alpha=[\omega]$ for the Kähler class and identify $\mathbb{P}^1=\mathbb{C}\cup\{\infty\}$. **Definition 10**. *A *$T$-equivariant Kähler test configuration* for $(X,\alpha)$ is a pair $(\mathcal{X},\mathcal{A})$, where:* 1. *$\mathcal{X}$ is a normal compact Kähler complex space,* 2. 
*$\mathcal{A}\in H^{1,1}_{\mathrm{BC}}(\mathcal{X};\mathbb{R})$ is a Kähler class on $\mathcal{X}$,* *along with the following data:* 1. *A $T^{\mathbb{C}}\times\mathbb{C}^*$-action on $\mathcal{X}$ which preserves the class $\mathcal{A}$,* 2. *A $T^{\mathbb{C}}\times\mathbb{C}^*$-equivariant flat surjection $\pi:\mathcal{X}\to\mathbb{P}^1$, where $T^{\mathbb{C}}$ acts trivially on $\mathbb{P}^1$ and $\mathbb{C}^*$ acts on $\mathbb{P}^1$ by scalar multiplication on $\mathbb{C}\subset\mathbb{P}^1$,* 3. *A $T^{\mathbb{C}}\times\mathbb{C}^*$-equivariant isomorphism $$\label{eq:general_fibre} \Psi:(\mathcal{X},\mathcal{A})|_{\pi^{-1}(\mathbb{P}^1\backslash\{0\})}\cong(X\times\mathbb{P}^1\backslash\{0\},p^*\alpha),$$ where $p:X\times\mathbb{P}^1\backslash\{0\}\to X$ is the projection, and $T^{\mathbb{C}}\times\mathbb{C}^*$ acts diagonally on $X\times\mathbb{P}^1\backslash\{0\}$.* *Two $T$-equivariant Kähler test configurations are *isomorphic* if they are $T^{\mathbb{C}}\times\mathbb{C}^*$-equivariantly isomorphic to each other.* Since the $T$-action on $(X,\omega)$ is hamiltonian, it admits a moment map $\mu:X\to\mathfrak{t}^*$. A choice of moment map determines an extension of $\alpha$ to an equivariant cohomology class $\alpha_T:=[\omega+\mu]\in H^2_T(X;\mathbb{R})$, by Cartan's formulation of equivariant cohomology. Given such an extension, we will show that the class $\mathcal{A}$ of a $T$-equivariant Kähler test configuration $(\mathcal{X},\mathcal{A})$ admits a unique extension to an equivariant class $\mathcal{A}_T$ that is compatible with $\alpha_T$ under the isomorphism [\[eq:general_fibre\]](#eq:general_fibre){reference-type="eqref" reference="eq:general_fibre"}; for this reason, we will later write $T$-equivariant Kähler test configurations as $(\mathcal{X},\mathcal{A}_T)$, and refer to them simply as *test configurations*. Let $\Omega$ be a $T\times S^1$-invariant Kähler form on $\mathcal{X}$ representing the class $\mathcal{A}$. 
To find $\mathcal{A}_T$, we will construct a moment map on $\mathcal{X}$ for the $T\times S^1$-action with respect to $\Omega$. In order to do this, it suffices to construct a hamiltonian function for the infinitesimal action of any one-parameter subgroup $\beta:\mathbb{C}^*\to T^{\mathbb{C}}\times\mathbb{C}^*$ on $\mathcal{X}$. Since $\beta(\mathbb{C}^*)$ preserves the class $\mathcal{A}$, for all $t\in\mathbb{R}$ there exists a smooth function $\varphi_t:\mathcal{X}\to\mathbb{R}$ such that $$\beta(e^{-t})^*\Omega-\Omega=i\partial\overline{\partial}\varphi_t,$$ by definition of Bott--Chern cohomology. **Lemma 11**. *There exist $T\times S^1$-invariant smooth functions $\varphi_t:\mathcal{X}\to\mathbb{R}$ such that:* 1. *For all $t\in\mathbb{R}$, $$\beta(e^{-t})^*\Omega-\Omega=i\partial\overline{\partial}\varphi_t,$$* 2. *The $\varphi_t$ depend smoothly on $t$,* 3. *On the regular locus $\mathcal{X}_{\mathrm{reg}}$ of $\mathcal{X}$, the function $-\dot{\varphi}_0/2$ is hamiltonian for the real holomorphic vector field $V$ generating the $\beta(S^1)$-action: $$\Omega(-,V)=-d\dot{\varphi}_0/2,$$* 4. *For all $t\in\mathbb{R}$, $$\beta(e^{-t})^*\dot{\varphi}_0=\dot{\varphi}_t.$$* *Proof.* Let us first prove this assuming $\mathcal{X}$ is smooth. In this case, the map $i\partial\overline{\partial}:C_0^\infty(\mathcal{X};\mathbb{R})\to\Omega^{1,1}(\mathcal{X};\mathbb{R})$ has a $T\times S^1$-equivariant left-inverse $L$, since the Laplacian $\Delta_\Omega:=\Lambda_{\Omega}\circ i\partial\overline{\partial}:C^\infty_0(\mathcal{X};\mathbb{R})\to C^\infty_0(\mathcal{X};\mathbb{R})$ is invertible and $T\times S^1$-equivariant. We define $$\varphi_t:=L(\beta(e^{-t})^*\Omega-\Omega),$$ which is clearly smooth in $t$ and $T\times S^1$-invariant. Since $\beta$ preserves $\mathcal{A}$, there exist $\varphi_t'$ satisfying $\beta(e^{-t})^*\Omega-\Omega=i\partial\overline{\partial}\varphi_t'$. 
Hence $$\varphi_t=L(i\partial\overline{\partial}\varphi'_t)=\varphi'_t-\overline{\varphi}'_t,$$ where $\overline{\varphi}'_t$ is the average of $\varphi'_t$. It follows that $\beta(e^{-t})^*\Omega-\Omega=i\partial\overline{\partial}\varphi_t$, and so the $\varphi_t$ are $T\times S^1$-invariant and satisfy (1) and (2). Differentiating the equation $\beta(e^{-t})^*\Omega-\Omega=i\partial\overline{\partial}\varphi_t$ at $t=0$, we produce $$\mathcal{L}_{\mathcal{J}V}\Omega=i\partial\overline{\partial}\dot{\varphi}_0,$$ where $V=V^{1,0}+V^{0,1}$ is the real holomorphic vector field generating the $\beta(S^1)$-action and $\mathcal{J}$ is the almost complex structure of $\mathcal{X}$. This can be rearranged using Cartan's magic formula to get $$d(\Omega(\mathcal{J}V,-)-d^c\dot{\varphi}_0)=0,$$ where $d^c:=\frac{i}{2}(\overline{\partial}-\partial)$. Since the vector field $V$ is hamiltonian, it has a fixed point. Hence by [@LS94 Theorem 1], there exists a smooth function $f:\mathcal{X}\to\mathbb{C}$ such that $\Omega(V^{1,0},-)=\overline{\partial}f$. Since $V$ is Killing, $f$ is real-valued by [@Sze14 p. 64]. It follows that $\Omega(\mathcal{J}V,-)=2d^cf$, which in turn gives $\Omega(-,V)=-df$. Hence $dd^c(2f-\dot{\varphi}_0)=0$, so $2f$ and $\dot{\varphi}_0$ differ by a constant. Therefore $\Omega(-,V)=-d\dot{\varphi}_0/2$ and (3) holds. Finally, to show (4), differentiating $\beta(e^{-t})^*\Omega-\Omega=i\partial\overline{\partial}\varphi_t$ at arbitrary $t$ yields $$\beta(e^{-t})^*i\partial\overline{\partial}\dot{\varphi}_0=i\partial\overline{\partial}\dot{\varphi}_t.$$ Hence $\beta(e^{-t})^*\dot{\varphi}_0=\dot{\varphi}_t+C(t)$, where $C:\mathbb{R}\to\mathbb{R}$ is a smooth function such that $C(0)=0$. If we replace $\varphi_t$ with $\varphi_t+\int_0^tC(s)\,ds$ then each of the conditions (1)--(3) is maintained, and (4) is also satisfied.
In the case where $\mathcal{X}$ is singular, begin by choosing smooth functions $\varphi_t$ satisfying $\beta(e^{-t})^*\Omega-\Omega=i\partial\overline{\partial}\varphi_t$, which we do not require to vary smoothly with $t$. Let $\widetilde{\mathcal{X}}$ be an equivariant resolution of singularities, and denote by $\widetilde{\Omega}$ (resp. $\widetilde{\varphi}_t$) the pullback of $\Omega$ (resp. $\varphi_t$) to $\widetilde{\mathcal{X}}$. Choose an invariant Kähler form $\eta$ on $\widetilde{\mathcal{X}}$. By the smooth case, there exist functions $g_t$ varying smoothly with $t$ such that $$\beta(e^{-t})^*\eta-\eta=i\partial\overline{\partial}g_t,$$ and for each $\epsilon>0$ there are functions $f_{\epsilon,t}$ varying smoothly with $t$ satisfying $$\beta(e^{-t})^*(\widetilde{\Omega}+\epsilon\eta)-(\widetilde{\Omega}+\epsilon\eta)=i\partial\overline{\partial}f_{\epsilon,t}.$$ Note we also have $$\beta(e^{-t})^*(\widetilde{\Omega}+\epsilon\eta)-(\widetilde{\Omega}+\epsilon\eta)=i\partial\overline{\partial}(\widetilde{\varphi}_t+\epsilon g_t).$$ It follows that $\widetilde{\varphi}_t+\epsilon g_t=f_{\epsilon,t}-c_{\epsilon,t}$, where $c_{\epsilon,t}$ is a constant depending on $\epsilon$ and $t$. Rearranging we get $\widetilde{\varphi}_t+c_{\epsilon,t}=f_{\epsilon,t}-\epsilon g_t$. Since the right-hand side depends smoothly on $t$, we can fix $\epsilon$ and replace $\varphi_t$ with $\varphi_t+c_{\epsilon,t}$ so that the functions $\varphi_t$ vary smoothly with $t$. Finally since $f_{\epsilon,t}$ and $g_t$ are $T\times S^1$-invariant and satisfy (3) and (4) for their respective Kähler metrics, the $\varphi_t$ are also $T\times S^1$-invariant and satisfy (3) and (4). ◻ **Corollary 12**. *There exists a smooth function $m:\mathcal{X}\to\mathfrak{t}^*\oplus\mathbb{R}$ that is a $T\times S^1$-moment map for $\Omega$, in the sense that:* 1. *$m$ is $T\times S^1$-equivariant, and* 2. 
*For all $\xi\in\mathfrak{t}\oplus \mathbb{R}$, $\langle dm,\xi\rangle = \Omega(-,\xi)$ on the regular locus $\mathcal{X}_{\mathrm{reg}}$ of $\mathcal{X}$.* *Proof.* As the Lie algebra of a compact torus, $\mathfrak{t}\oplus\mathbb{R}$ has a natural lattice. We therefore choose an integral basis for $\mathfrak{t}\oplus\mathbb{R}$, and apply the previous lemma to show that each basis vector has a $T\times S^1$-invariant hamiltonian function. Collecting these hamiltonian functions in a vector using the dual basis for $\mathfrak{t}^*\oplus\mathbb{R}^*$, we get the moment map $m$ which is automatically equivariant since the adjoint action on $\mathfrak{t}\oplus\mathbb{R}$ is trivial. ◻ When we wish to refer to the component of $m$ taking values in $\mathfrak{t}^*$ we shall write $m_T$, and similarly write $m_{S^1}$ for the component in $\mathbb{R}^*=\mathrm{Lie}(S^1)^*$. We are interested in normalising the moment map $m_T$ so that it is suitably compatible with the original moment map $\mu:X\to\mathfrak{t}^*$. This is achievable by the following result, which follows easily from [@Lah19 Lemma 7]: **Lemma 13**. *Denote by $m_T:\mathcal{X}\to\mathfrak{t}^*$ the $T$-moment map for $\Omega$. We may normalise $m_T$ so that for each $\tau\in\mathbb{P}^1$, $m_T(\mathcal{X}_\tau)=P:=\mu(X)$. 
In particular, under the isomorphism $\Psi$ from equation [\[eq:general_fibre\]](#eq:general_fibre){reference-type="eqref" reference="eq:general_fibre"}, $$\Psi_*[\Omega+m_T]|_{\pi^{-1}(\mathbb{P}^1\backslash\{0\})}=p^*[\omega+\mu]\in H^2_T(X\times\mathbb{P}^1\backslash\{0\};\mathbb{R}).$$* *Proof.* The result [@Lah19 Lemma 7] states the equality $m_T(\mathcal{X}_\tau)=P:=\mu(X)$ holds for all $\tau\in\mathbb{C}^*$ for a suitable normalisation of $m_T$; we note this extends to all $\tau\in\mathbb{P}^1$ by continuity of $m_T$ and compactness of $\mathcal{X}$.[^1] For the statement on equivariant cohomology, first note for any $\tau\in\mathbb{P}^1\backslash\{0\}$ the condition $m_T(\mathcal{X}_\tau)=P:=\mu(X)$ implies the restriction $[\Omega+m_T]|_{\mathcal{X}_\tau}$ maps to $[\omega+\mu]$ under $\Psi$; see [@Lah19 Lemma 1]. To see this implies an equality of equivariant classes on all of $X\times\mathbb{P}^1\backslash\{0\}$, we merely note the $T$-action on $\mathbb{P}^1\backslash\{0\}$ is trivial, so there is an equality $$H^2_T(X\times\mathbb{P}^1\backslash\{0\};\mathbb{R})\cong H^2_T(X;\mathbb{R})\otimes H^0(\mathbb{P}^1\backslash\{0\})=H^2_T(X;\mathbb{R}),$$ where the first isomorphism follows from the Kunneth theorem since the higher cohomology groups of $\mathbb{P}^1\backslash\{0\}$ vanish. ◻ We would like to say the moment map $m$ gives us an extension of $\mathcal{A}$ to an equivariant class $\mathcal{A}_{T}:=[\Omega+m_T]\in H^2_{T}(\mathcal{X};\mathbb{R})$. However, the Cartan model for equivariant cohomology is not well suited to singular spaces. 
Instead, given an equivariant resolution of singularities $\widetilde{\mathcal{X}}\to\mathcal{X}$, we pull back the equivariant form $\Omega+m_T$ to $\widetilde{\Omega}+\widetilde{m}_T$ on $\widetilde{\mathcal{X}}$, which determines an equivariant cohomology class $\widetilde{\mathcal{A}}_{T}:=[\widetilde{\Omega}+\widetilde{m}_T]\in H^2_{T}(\widetilde{\mathcal{X}};\mathbb{R})$ extending the pulled back class $\widetilde{\mathcal{A}}$. We thus abuse notation by writing $$\mathcal{A}_{T\times S^1}:=[\Omega+m]\in H^2_{T\times S^1}(\mathcal{X};\mathbb{R}),\quad\quad\mathcal{A}_T:=[\Omega+m_T]\in H^2_{T}(\mathcal{X};\mathbb{R}),$$ even though we only define the classes $$\widetilde{\mathcal{A}}_{T\times S^1}:=[\widetilde{\Omega}+\widetilde{m}]\in H^2_{T\times S^1}(\widetilde{\mathcal{X}};\mathbb{R}),\quad\quad\widetilde{\mathcal{A}}_{T}:=[\widetilde{\Omega}+\widetilde{m}_T]\in H^2_{T}(\widetilde{\mathcal{X}};\mathbb{R}).$$ In particular, $T$-equivariant test configurations will henceforth be denoted $(\mathcal{X},\mathcal{A}_T)$, and we will often call these simply "test configurations", taking the $T$-action as implicit in the notation. **Remark 14**. The class $\widetilde{\mathcal{A}}_T$ on the resolution is independent of the form $\Omega$ and moment map $m$, which follows from the choice of normalisation in Lemma [Lemma 13](#lem:moment_map_normalisation){reference-type="ref" reference="lem:moment_map_normalisation"}. We will need the following generalisation of [@Lah19 Lemma 7]; the proof follows largely the same approach: **Lemma 15**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, with $\Omega+m_T$ an equivariant representative for $\mathcal{A}_T$. Let $h:P\to\mathbb{R}$ be an arbitrary smooth function on the moment polytope. Then for all $\tau\in\mathbb{P}^1\backslash\{0\}$, $$\int_{\mathcal{X}_\tau}h(m_\tau)\Omega_\tau^n=\int_{\mathcal{X}_1}h(m_1)\Omega_1^n=\int_Xh(\mu)\omega^n,$$ where $\Omega_\tau$ (resp.
$m_\tau$) denotes the restriction of $\Omega$ (resp. $m_T$) to $\mathcal{X}_\tau$. Furthermore, for all $\tau\in\mathbb{P}^1\backslash\{0\}$ and $\xi\in\mathfrak{t}$, $$\int_{\mathcal{X}_\tau}S_v(\Omega_\tau)\ell_\xi(m_\tau)\Omega_\tau^n=\int_{\mathcal{X}_1}S_v(\Omega_1)\ell_\xi(m_1)\Omega_1^n=\int_XS_v(\omega)\ell_\xi(\mu)\omega^n.$$* *Proof.* It suffices to prove these equalities for $\tau\in\mathbb{C}^*$; the equalities at $\tau=\infty$ will follow from continuity. Denote by $\lambda$ the $\mathbb{C}^*$-action on the test configuration. For $\tau\in\mathbb{C}^*$ define $\omega_\tau:=\lambda(\tau)^*\Omega_\tau$ and $\mu_\tau:=\lambda(\tau)^*m_\tau$, so that $\mu_\tau$ is a moment map for $\omega_\tau$ on $\mathcal{X}_1\cong X$. Our first goal is then to show that $$\int_{\mathcal{X}_1}h(\mu_\tau)\omega_\tau^n$$ is independent of $\tau$. Let $t:=-\log|\tau|$; by circle invariance it suffices to show the derivative in $t$ of the integral vanishes. Denote by $V$ the infinitesimal generator of the circle action on $\mathcal{X}$. Then $$\begin{aligned} \frac{d}{dt}\mu_\tau^\xi &= \frac{d}{dt}(\lambda(e^{-t})^*m_\tau^\xi) \\ &=\lambda(e^{-t})^*\mathcal{L}_{\mathcal{J}V}m_T^\xi \\ &=\lambda(e^{-t})^*dm_T^\xi(\mathcal{J}V) \\ &=-\lambda(e^{-t})^*(dm_T^\xi,dm_{S^1})_{\Omega} \\ &=\lambda(e^{-t})^*dm_{S^1}(\mathcal{J}\xi) \\ &=-\lambda(e^{-t})^*((d_{\mathcal{V}}m_{S^1})^{\#}, \xi)_{\Omega_\tau}, \end{aligned}$$ where $(-,-)_{\Omega}$ is the Hermitian inner product determined by $\Omega$, $d_{\mathcal{V}}$ denotes the vertical (fibrewise) derivative, and $\#$ is conversion of a 1-form to a vector field on $\mathcal{X}_\tau$ via $\Omega_\tau$. 
Similarly, using (1), (3) and (4) of Lemma [Lemma 11](#lem:hamiltonians){reference-type="ref" reference="lem:hamiltonians"}, $$\frac{d}{dt}\omega_\tau = -2\lambda(e^{-t})^*(i\partial\overline{\partial}m_{S^1}|_{\mathcal{X}_\tau}).$$ It follows that $$\begin{aligned} \frac{d}{dt}\int_{\mathcal{X}_1}h(\mu_\tau)\omega_\tau^n =& -\sum_a\int_{\mathcal{X}_1}h_{,a}(\mu_\tau)\lambda(e^{-t})^*((d_{\mathcal{V}}m_{S^1})^{\#}, \xi_a)_{\Omega_\tau}\lambda(e^{-t})^*\Omega_\tau^n \\ &-2\int_{\mathcal{X}_1}h(\mu_\tau)\lambda(e^{-t})^*\Delta_{\Omega_\tau}(m_{S^1})\lambda(e^{-t})^*\Omega_\tau^n \\ =& -\sum_a\int_{\mathcal{X}_\tau}h_{,a}(m_\tau)((d_{\mathcal{V}}m_{S^1})^{\#}, \xi_a)_{\Omega_\tau}\Omega_\tau^n \\ &+ \sum_a\int_{\mathcal{X}_\tau}h_{,a}(m_\tau)((d_{\mathcal{V}}m_{S^1})^{\#}, \xi_a)_{\Omega_\tau}\Omega_\tau^n \\ =&0. \end{aligned}$$ Here $\{\xi_a\}$ is a basis for $\mathfrak{t}$, and $h_{,a}$ denotes the partial derivative of $h$ in the $\xi^a$-direction, where $\{\xi^a\}$ is the dual basis for $\mathfrak{t}^*$. The equality $$\int_{\mathcal{X}_1}h(m_1)\Omega_1^n=\int_Xh(\mu)\omega^n$$ then follows from the general fact that such integrals are independent of the choice of representative for the equivariant class $[\omega+\mu]$, which can easily be seen by differentiating along a straight line path between two representatives. We now consider the integrals involving the $v$-weighted scalar curvature; our aim is to show that the derivative of $$\int_{\mathcal{X}_1}S_v(\omega_\tau)\ell_\xi(\mu_\tau)\omega_\tau^n$$ in $t$ vanishes. Lahdili computes the derivative of $S_v(\omega_\tau)$ in $t$ as $$\frac{d}{dt}S_v(\omega_\tau)=-\mathcal{D}_\tau^*(v(\mu_\tau)\mathcal{D}_\tau\dot{\varphi}_t)+\frac{1}{2}(dS_v(\omega_\tau),d\dot{\varphi}_t),$$ where $\mathcal{D}_\tau:=\overline{\partial}\nabla^{1,0}_{\omega_\tau}$; see [@Lah19 Lemma B.1]. Here $\mathcal{D}_\tau^*$ refers to the formal $L^2$-adjoint with respect to the measure $\omega_\tau^n$.
From this we have $$\begin{aligned} \frac{d}{dt}\int_{\mathcal{X}_1}S_v(\omega_\tau)\ell_\xi(\mu_\tau)\omega_\tau^n =&-\int_{\mathcal{X}_1}\mathcal{D}_\tau^*(v(\mu_\tau)\mathcal{D}_\tau\dot{\varphi}_t)\ell_\xi(\mu_\tau)\omega_\tau^n \\ &+\frac{1}{2}\int_{\mathcal{X}_1}(dS_v(\omega_\tau),d\dot{\varphi}_t)\ell_\xi(\mu_\tau)\omega_\tau^n \\ &-\int_{\mathcal{X}_1}S_v(\omega_\tau)\lambda(e^{-t})^*((d_{\mathcal{V}}m_{S^1})^{\#}, \xi)_{\Omega_\tau} \omega_\tau^n \\ &+\int_{\mathcal{X}_1}S_v(\omega_\tau)\ell_\xi(\mu_\tau)\Delta_{\omega_\tau}(\dot{\varphi}_t)\omega_\tau^n \\ =&\,\,0. \end{aligned}$$ Here the first line vanishes since $\ell_\xi(\mu_\tau)$ is in the kernel of $\mathcal{D}_\tau$, and the remaining three lines cancel via integration by parts similarly to above, using that $\dot{\varphi}_t=-2\lambda(e^{-t})^*m_{S^1}$. The final equality $$\int_{\mathcal{X}_1}S_v(\Omega_1)\ell_\xi(m_1)\Omega_1^n=\int_XS_v(\omega)\ell_\xi(\mu)\omega^n$$ is another straightforward exercise in differentiating along a straight line path between equivariant representatives. ◻ ## Weighted K-stability Here we review the definition of weighted K-(semi/poly)stability. This concept was initially introduced by Lahdili [@Lah19] in the case where the test configuration is smooth and Kähler. Later, Inoue [@Ino20] introduced weighted K-stability for arbitrary singular test configurations, and this is the theory we will use here. We assume the reader is familiar with equivariant cohomology and the de Rham representation of this theory, as well as equivariant locally finite homology and its duality with equivariant cohomology; see the appendix of [@Ino20] for all the necessary details. Let $Y$ be a $T$-equivariant complex manifold of dimension $d$; later we will either take $Y$ to be the Kähler manifold $X$, an equivariant resolution $\widetilde{\mathcal{X}}$ of a test configuration $\mathcal{X}$ for $X$, or the restriction $\widetilde{\mathcal{X}}|_{\mathbb{C}}$ of such a resolution.
In particular, we do not require $Y$ to be compact, for which reason we use locally finite homology in what follows. The torus $T$ will either be $T$ from previous sections, or we may later take $T\times S^1$ for the action on a test configuration. The map $\pi:Y\to\mathrm{pt}$ is $T$-equivariant, so induces a pushforward map on equivariant locally finite homology: $$\pi_*:H_*^{\mathrm{lf},T}(Y;\mathbb{R})\to H_*^{\mathrm{lf},T}(\mathrm{pt};\mathbb{R}).$$ Since $Y$ and $\mathrm{pt}$ are smooth, there are Poincaré duality isomorphisms in the equivariant theory: $$H^{\mathrm{lf},T}_{2k}(Y;\mathbb{R})\cong H_T^{2d-2k}(Y;\mathbb{R}),$$ $$H^{\mathrm{lf},T}_{2k}(\mathrm{pt};\mathbb{R})\cong H_T^{-2k}(\mathrm{pt};\mathbb{R}).$$ Using these isomorphisms, the map $\pi_*$ on equivariant homology induces an integration map on equivariant cohomology, which we denote by the same symbol: $$\pi_*:H^{2k}_T(Y;\mathbb{R})\to H^{2k-2d}_T(\mathrm{pt};\mathbb{R}).$$ The term "integration map" is justified; indeed, if $Y$ is compact and we take an equivariant form $u$ on $Y$ representing a class $[u]\in H^{2k}_T(Y;\mathbb{R})$, then $\int_Y u$ represents $\pi_*[u]\in H^{2k-2d}_T(\mathrm{pt};\mathbb{R})$. It will be convenient to reserve the notation $\pi$ for test configurations, so we will instead denote $\pi_*(\alpha)$ by just $(\alpha)$. Unlike in the non-equivariant case, we can have $(\alpha)\neq0$ even when $\deg\alpha> 2d$. The following special case of the projection formula is derived from the observation that to integrate a smooth differential form, it suffices to integrate the form over a dense open subset. This will later be applied to show that the weighted Donaldson--Futaki invariant is well-defined. **Lemma 16** (Projection formula).
*Let $p:\widetilde{Y}\to Y$ be a proper $T$-equivariant morphism of $T$-equivariant complex manifolds, and assume there are $T$-invariant dense open subsets $U\subset\widetilde{Y}$ and $V\subset Y$ such that $p(U)\subset V$ and $p:U\to V$ is a biholomorphism. Let $\beta\in H^{2k}_T(Y;\mathbb{R})$. Then $$(\beta)=(p^*\beta),$$ where the former is an integral over $Y$ and the latter an integral over $\widetilde{Y}$.* Let $h(x) = \sum_{k=0}^\infty\frac{a_k}{k!}x^k$ be a formal power series with coefficients $a_k\in\mathbb{R}$. Given $\beta\in H^2_T(Y;\mathbb{R})$, we define $$h(\beta):=\sum_{k=0}^\infty\frac{a_k}{k!}\beta^k\in \hat{H}^{*,\,\mathrm{even}}_T(Y,\mathbb{R})$$ in the completion of the even-degree equivariant cohomology. The integration map $\pi_*$ on cohomology then induces $$\pi_*:\hat{H}_T^{*,\,\mathrm{even}}(Y;\mathbb{R})\to\hat{H}_T^{*-2d,\,\mathrm{even}}(\mathrm{pt};\mathbb{R}).$$ Given an element $\zeta\in \hat{H}_T^{*,\,\mathrm{even}}(Y;\mathbb{R})$, we similarly write $(\zeta)\in \hat{H}_T^{*,\,\mathrm{even}}(\mathrm{pt};\mathbb{R})$ in place of $\pi_*(\zeta)$. In particular, we have $(h(\beta))\in \hat{H}_T^{*,\,\mathrm{even}}(\mathrm{pt};\mathbb{R})$, and if $\rho\in H^2_T(Y;\mathbb{R})$ is an auxiliary class, we also have $(\rho\,h(\beta))\in \hat{H}_T^{*,\,\mathrm{even}}(\mathrm{pt};\mathbb{R})$. The Chern--Weil theorem furnishes an isomorphism $$\hat{H}^{*,\mathrm{even}}_{T}(\mathrm{pt};\mathbb{R})\cong\mathbb{R}\llbracket\mathfrak{t}\rrbracket,$$ where $\mathfrak{t}$ is the Lie algebra of $T$ and homogeneous polynomials on $\mathfrak{t}$ of degree $k$ are assigned degree $2k$ in the ring. It follows that we can identify $(\zeta)$ with a formal power series on the Lie algebra $\mathfrak{t}$; we shall do this from now on. Let $(X,\alpha_T)$ be a $T$-equivariant compact Kähler manifold, and let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$.
We take $(\widetilde{\mathcal{X}},\widetilde{\mathcal{A}}_T)$ to be a $T^{\mathbb{C}}\times \mathbb{C}^*$-equivariant resolution of singularities of $(\mathcal{X},\mathcal{A}_T)$. Recall that we have a moment map $m$ for the $T\times S^1$-action on $\mathcal{X}$ that is compatible with $\mu$, in the sense that $m_T(\mathcal{X}_\tau)=P:=\mu(X)$ for all $\tau\in\mathbb{P}^1$, and $\widetilde{\mathcal{A}}_T:=[\widetilde{\Omega}+\widetilde{m}_T]$. Define $c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T:=c_1(\widetilde{\mathcal{X}})_T-c_1(\mathbb{P}^1)_T$, where the class $c_1(\mathbb{P}^1)_T$ is tacitly pulled back from $\mathbb{P}^1$ to $\widetilde{\mathcal{X}}$. We note that the first Chern classes here arise from line bundles (namely the anticanonical bundles) which have canonical $T$-actions, so are automatically extended to equivariant cohomology. Given all this data, we can define the equivariant intersections $$(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_Th(\widetilde{\mathcal{A}}_T)),\: (h(\widetilde{\mathcal{A}}_T)),\: (c_1(X)_Th(\alpha_T)),\:(h(\alpha_T)),$$ all of which take values in $\mathbb{R}\llbracket\mathfrak{t}\rrbracket$. It is a natural question to ask whether these power series converge. The following result of Inoue addresses this, and is the essential ingredient in defining the weighted Donaldson--Futaki invariant of a $T$-equivariant Kähler test configuration. **Proposition 17** ([@Ino20 Proposition 3.8]). *Let $\xi\in\mathfrak{t}$, and let $h$ be a real analytic function on the subset $\langle P,\xi\rangle\subset\mathbb{R}$, where $P\subset\mathfrak{t}^*$ is the moment polytope of $(X,\alpha_T)$. As in Assumption [Assumption 9](#ass:weights){reference-type="ref" reference="ass:weights"}, we assume the interval $\langle P,\xi\rangle$ has $0$ as its midpoint, so $h$ has a convergent power series expansion about $0$.
Then the power series $$(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_Th(\widetilde{\mathcal{A}}_T)),\: (h(\widetilde{\mathcal{A}}_T)),\: (c_1(X)_Th(\alpha_T)),\:(h(\alpha_T))\in\mathbb{R}\llbracket\mathfrak{t}\rrbracket$$ each converge absolutely on a neighbourhood of $\xi\in\mathfrak{t}$. Furthermore, the first two are independent of the choice of equivariant resolution of singularities. Lastly, if $h^{(n)}>0$ (resp. $h^{(n+1)}>0$) then $(h(\alpha_T))(\xi)>0$ (resp. $(h(\mathcal{A}_T))(\xi)>0$).* In [@Ino20 Proposition 3.8], the function $h$ is assumed to be entire on $\mathbb{R}$; however, the proof carries over exactly the same in the case described here. To see this, we will briefly describe how the proof works and how to adapt it to this setting. *Proof.* Write $h(x) = \sum_{k=0}^\infty\frac{a_k}{k!}x^k$. For $\eta\in\mathfrak{t}$ and $N\geq n+1$, $$\begin{aligned} \sum_{k=0}^N \frac{a_k}{k!} (\widetilde{\mathcal{A}}_T^k) (\eta) &=\sum_{k=0}^N \frac{a_k}{k!} \int_{\widetilde{\mathcal{X}}} (\widetilde{\Omega} + \langle \widetilde{m}_T, \eta \rangle)^k \\ &= \int_{\widetilde{\mathcal{X}}} \sum_{k=0}^N \frac{a_k}{k!} (\widetilde{\Omega} + \langle \widetilde{m}_T, \eta \rangle)^k \\ &= \frac{1}{(n+1)!} \int_{\widetilde{\mathcal{X}}} \left( \sum_{k=0}^{N-n-1} \frac{a_{k+n+1}}{k!} \langle \widetilde{m}_T, \eta \rangle^k \right) \widetilde{\Omega}^{n+1}. \end{aligned}$$ Since $h$ is absolutely convergent on a neighbourhood of $\langle P,\xi\rangle=\langle\widetilde{m}_T(\widetilde{\mathcal{X}}),\xi\rangle$, the sum $\sum_{k=0}^\infty\frac{a_{k+n+1}}{k!}\langle\widetilde{m}_T,\eta\rangle^k$ converges absolutely and uniformly for all $\eta$ in a neighbourhood of $\xi$. Therefore, sending $N\to\infty$ we may pass the limit through the integral to get $$\sum_{k=0}^\infty\frac{a_k}{k!}(\widetilde{\mathcal{A}}_T^k)(\eta)=\frac{1}{(n+1)!}\int_{\widetilde{\mathcal{X}}}h^{(n+1)}(\langle \widetilde{m}_T,\eta\rangle)\widetilde{\Omega}^{n+1},$$ and the convergence of the sum is absolute.
From this formula, we also see the positivity of $(h(\widetilde{\mathcal{A}}_T))(\eta)$, given positivity of $h^{(n+1)}$. Independence of the choice of resolution is straightforward using Lemma [Lemma 16](#lem:projection_formula){reference-type="ref" reference="lem:projection_formula"} as well as the fact that given two resolutions $\mathcal{X}_1\to\mathcal{X}$ and $\mathcal{X}_2\to\mathcal{X}$, there is a third resolution $\mathcal{X}_3\to\mathcal{X}$ dominating $\mathcal{X}_1$ and $\mathcal{X}_2$. For the intersection $(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_Th(\widetilde{\mathcal{A}}_T))$, choose a Cartan representative $\Sigma+\sigma$ for $c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T$. We similarly compute $$\begin{aligned} \sum_{k=0}^N\frac{a_k}{k!}(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T\widetilde{\mathcal{A}}_T^k)(\eta) =&\sum_{k=0}^N\frac{a_k}{k!}\int_{\widetilde{\mathcal{X}}}(\Sigma+\langle\sigma,\eta\rangle)\wedge(\widetilde{\Omega}+\langle\widetilde{m}_T,\eta\rangle)^k \\ =& \int_{\widetilde{\mathcal{X}}}\sum_{k=0}^N\frac{a_k}{k!}(\Sigma+\langle\sigma,\eta\rangle)\wedge(\widetilde{\Omega}+\langle\widetilde{m}_T,\eta\rangle)^k \\ =& \frac{1}{n!}\int_{\widetilde{\mathcal{X}}}\sum_{k=0}^{N-n}\frac{a_{k+n}}{k!}\langle\widetilde{m}_T,\eta\rangle^k\Sigma\wedge\widetilde{\Omega}^n \\ &+\frac{1}{(n+1)!}\int_{\widetilde{\mathcal{X}}}\sum_{k=0}^{N-n-1}\frac{a_{k+n+1}}{k!}\langle\sigma,\eta\rangle\langle\widetilde{m}_T,\eta\rangle^k\widetilde{\Omega}^{n+1}. \end{aligned}$$ Sending $N\to\infty$, we have uniform convergence in the integrals and $$\begin{aligned} \sum_{k=0}^\infty\frac{a_k}{k!}(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T\widetilde{\mathcal{A}}_T^k)(\eta)=&\frac{1}{n!}\int_{\widetilde{\mathcal{X}}}h^{(n)}(\langle\widetilde{m}_T,\eta\rangle)\Sigma\wedge\widetilde{\Omega}^n \\ &+\frac{1}{(n+1)!}\int_{\widetilde{\mathcal{X}}}\langle\sigma,\eta\rangle h^{(n+1)}(\langle\widetilde{m}_T,\eta\rangle)\widetilde{\Omega}^{n+1}. 
\end{aligned}$$ Hence the intersection $(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_Th(\widetilde{\mathcal{A}}_T))$ converges absolutely on a neighbourhood of $\xi$. It remains to see this is independent of the choice of resolution. It suffices to show that if we have a tower $\widetilde{\mathcal{X}}'\to\widetilde{\mathcal{X}}\to\mathcal{X}$ of equivariant resolutions, then the intersection numbers on $\widetilde{\mathcal{X}}'$ and $\widetilde{\mathcal{X}}$ agree. This is straightforward, however: the difference $c_1(\widetilde{\mathcal{X}}'/\mathbb{P}^1)_T-c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T$ is a sum of equivariant divisors on $\widetilde{\mathcal{X}}'$, which all map to analytic subvarieties of $\mathcal{X}$ with dimension at most $n-2$. It follows from the projection formula that the intersection of $(\widetilde{\mathcal{A}}_T')^{n-1}$ with these divisors vanishes, so the divisors do not contribute to the equivariant intersections. ◻ **Definition 18** ([@Ino20 Proposition 3.26]). *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, and choose a $T^{\mathbb{C}}\times\mathbb{C}^*$-equivariant resolution of singularities $\widetilde{\mathcal{X}}\to\mathcal{X}$. The *weighted Donaldson--Futaki invariant* of $(\mathcal{X},\mathcal{A}_T)$ is $$\mathrm{DF}_{v, w}(\mathcal{X}, \mathcal{A}_T) := \left[-(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T f(\widetilde{\mathcal{A}}_T))+\frac{(c_1(X)_T f'(\alpha_T))}{(g'(\alpha_T))}(g(\widetilde{\mathcal{A}}_T))\right](\xi),$$ and is independent of the choice of equivariant resolution.* Here the expression between the square brackets is an analytic function on $\mathfrak{t}$ defined in a neighbourhood of $\xi\in\mathfrak{t}$, and the expression is evaluated at $\xi$ to produce a real number that is the weighted Donaldson--Futaki invariant. We remark that $(g')^{(n)}=g^{(n+1)}>0$, so $(g'(\alpha_T))(\xi)>0$ and hence $(g'(\alpha_T))$ is non-zero on a neighbourhood of $\xi$. **Remark 19**.
Lahdili has also introduced a weighted Donaldson--Futaki invariant in the case the total space $\mathcal{X}$ of the Kähler test configuration is smooth [@Lah19 Definition 11]. By [@Ino20 Proposition 3.26 (3)], this agrees with the invariant $\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)$ defined above, and hence Inoue's invariant provides an extension of Lahdili's invariant to singular test configurations. From the proof of [@Ino20 Proposition 3.6] we have: **Lemma 20** ([@Ino20]). *$$\frac{(c_1(X)_T f'(\alpha_T))(\xi)}{(g'(\alpha_T))(\xi)}=\hat{S}_{v,w},$$ where $\hat{S}_{v,w}$ is the average weighted scalar curvature defined in equation [\[eq:average_weighted_scalar_curvature\]](#eq:average_weighted_scalar_curvature){reference-type="eqref" reference="eq:average_weighted_scalar_curvature"}.* We will also need some more general results concerning the equivariant calculus. In particular, instead of pushing forward equivariant classes to a point, it will be useful to push them to $\mathbb{P}^1$ or $\mathbb{C}$, to apply localisation in equivariant cohomology. Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, with a choice of equivariant resolution $\widetilde{\mathcal{X}}\to\mathcal{X}$. We denote by $\widetilde{\mathcal{X}}_{\mathbb{C}}$ the restriction of $\widetilde{\mathcal{X}}$ to $\mathbb{C}$, and write $\pi:\widetilde{\mathcal{X}}_{\mathbb{C}}\to\mathbb{C}$ for the projection. This induces an integration over the fibres map $$\pi_*:H^{2k}_{T\times S^1}(\widetilde{\mathcal{X}}_{\mathbb{C}};\mathbb{R})\to H^{2k-2n}_{T\times S^1}(\mathbb{C};\mathbb{R}),$$ where the $T$-action on $\mathbb{C}$ is taken to be trivial. 
Given an equivariant class $\rho\in H^{2k}_{T\times S^1}(\widetilde{\mathcal{X}}_{\mathbb{C}};\mathbb{R})$ we write $(\rho)_{\mathbb{C}}\in H^{2k-2n}_{T\times S^1}(\mathbb{C};\mathbb{R})$ for the image of $\rho$ under $\pi_*$, and extend this to the completion of the equivariant cohomology rings of $\widetilde{\mathcal{X}}$ and $\mathbb{C}$. In particular, we have $$(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_{T\times S^1}f(\widetilde{\mathcal{A}}_{T\times S^1}))_{\mathbb{C}},\:(g(\widetilde{\mathcal{A}}_{T\times S^1}))_{\mathbb{C}}\in \hat{H}^*_{T\times S^1}(\mathbb{C};\mathbb{R}).$$ Notice we use $T\times S^1$-equivariant cohomology since we are working with the non-compact test configuration $\mathcal{X}_{\mathbb{C}}$, whereas previously the information in the $S^1$-action was already encoded in the compactification. Now, since the $T$-action on $\mathbb{C}$ is trivial, there is a Kunneth decomposition $$H^{2\ell}_{T\times S^1}(\mathbb{C};\mathbb{R})=\bigoplus_{j+k=\ell}H^{2j}_T(\mathrm{pt};\mathbb{R})\otimes H^{2k}_{S^1}(\mathbb{C};\mathbb{R}).$$ By the Chern--Weil isomorphism, $H^{2j}_T(\mathrm{pt};\mathbb{R})$ is the space $S^{j}\mathfrak{t}^*$ of homogeneous degree $j$ polynomials on $\mathfrak{t}$. Thus, given an element $\xi\in\mathfrak{t}$, there is an evaluation map $$\mathrm{el}_\xi:H^*_{T\times S^1}(\mathbb{C};\mathbb{R})\to H^*_{S^1}(\mathbb{C};\mathbb{R}),$$ given by evaluating the $H^*_T(\mathrm{pt};\mathbb{R})$-component at $\xi$. Following [@Ino20], define $$D_\xi:H^*_{T\times S^1}(\mathbb{C};\mathbb{R})\to H^2_{S^1}(\mathbb{C};\mathbb{R})$$ to be the composition of $\mathrm{el}_\xi$ with projection to the degree 2 component in equivariant cohomology. 
Note there is an $S^1$-equivariant deformation retraction $\mathbb{C}\to\mathrm{pt}$, so $$H^2_{S^1}(\mathbb{C};\mathbb{R})\cong H^2_{S^1}(\mathrm{pt};\mathbb{R})=H^2(\mathbb{CP}^\infty;\mathbb{R})=\mathbb{R}\cdot c_1(\mathcal{O}(1)),$$ where $c_1(\mathcal{O}(1))$ is the first Chern class of the universal line bundle $\mathcal{O}(1)$ on $\mathbb{CP}^\infty$. Thus, taking the coefficient of $c_1(\mathcal{O}(1))$, we may instead consider $D_\xi$ as a map $$D_\xi:H^*_{T\times S^1}(\mathbb{C};\mathbb{R})\to\mathbb{R}.$$ The following result of Inoue is a consequence of the localisation formula in equivariant cohomology: **Proposition 21** ([@Ino20 Proposition 3.19]). *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$. Let $\xi\in\mathfrak{t}$, and suppose that $f$ is a power series with real coefficients that converges absolutely on the subset $\langle P,\xi\rangle\subset\mathbb{R}$, where $P\subset\mathfrak{t}^*$ is the moment polytope of $(X,\alpha_T)$. Choose a $T^{\mathbb{C}}\times \mathbb{C}^*$-equivariant resolution of singularities $\widetilde{\mathcal{X}}\to\mathcal{X}$. Then $$D_\xi(f(\widetilde{\mathcal{A}}_{T\times S^1}))_{\mathbb{C}} = (f(\widetilde{\mathcal{A}}_T))(\xi),$$ and $$D_\xi(c_1(\widetilde{\mathcal{X}}_{\mathbb{C}}/\mathbb{C})_{T\times S^1}f(\widetilde{\mathcal{A}}_{T\times S^1}))_{\mathbb{C}}=(c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T f(\widetilde{\mathcal{A}}_T))(\xi).$$* **Remark 22**. In order to compute the intersections over $\mathbb{C}$ we can proceed exactly as in the proof of Proposition [Proposition 17](#prop:convergence){reference-type="ref" reference="prop:convergence"}.
That is, we take a resolution of singularities $p:\widetilde{\mathcal{X}}\to\mathcal{X}$, pull back $\Omega$ and its moment map $m:\mathcal{X}\to\mathfrak{t}^*\oplus\mathbb{R}$ for the $T\times S^1$-action to get an equivariant representative $\widetilde{\Omega}+\widetilde{m}$ for $\widetilde{\mathcal{A}}_{T\times S^1}:=p^*\mathcal{A}_{T\times S^1}$. For the class $c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_{T\times S^1}$, we simply choose an equivariant representative $\Sigma+\sigma$. Then we can integrate equivariant forms over the fibre to produce an equivariant representative on the base $\mathbb{C}$. We comment that this is only a *continuous* representative in general; since $\widetilde{\mathcal{X}}\to\mathbb{C}$ is not a submersion, the fibre integrals will not necessarily produce smooth forms on the base, hence we only have an equivariant *current* on the base representing the equivariant intersection. # Relative weighted K-stability In this section we study relative weighted K-stability. One such definition was previously given by Lahdili [@Lah20b], who related the condition to the existence of a weighted extremal metric. Here we give a separate treatment using inner products associated to test configurations. After showing that the two definitions are equivalent, we use Lahdili's work to argue that a weighted extremal manifold is relatively weighted K-semistable with respect to our definition. This will be upgraded to relative weighted K-polystability in the next section. ## Twisting test configurations Since we are dealing with non-projective Kähler manifolds, we will need a mild generalisation of test configurations that allows one to incorporate irrational elements of the Lie algebra $\mathfrak{t}$; such test configurations were used in [@Der18]. We first describe how integral elements of $\mathfrak{t}$ allow one to twist test configurations, and then we will extend the definition to arbitrary elements of $\mathfrak{t}$.
Let $\beta:\mathbb{C}^*\to T^{\mathbb{C}}$ be a 1-parameter subgroup. Such a $\beta$ determines a *product test configuration* $(\mathcal{X}_\beta,\mathcal{A}_T)$, first by letting $$(\mathcal{X}_\beta,\mathcal{A}_T)|_{\mathbb{C}}:=(X\times\mathbb{C},p^*\alpha_T)$$ where the $\mathbb{C}^*$-action on the right-hand side is $s(x,t):=(\beta(s)x,st)$ for $s\in\mathbb{C}^*$, and then compactifying trivially over $\infty\in\mathbb{P}^1$, in the sense that the restriction to $\mathbb{P}^1\backslash\{0\}$ is trivial. Letting $h_\beta:X\to\mathbb{R}$ be a hamiltonian function for the generator of $\beta$, the Donaldson--Futaki invariant $\mathrm{DF}_{v,w}(\mathcal{X}_\beta,\mathcal{A}_T)$ of the product test configuration is equal to $$\label{eq:weighted_Futaki} F_{v,w}(\beta):=\frac{1}{n!}\int_X(\hat{S}_{v,w}-S_{v,w}(\omega))h_\beta w(\mu) \omega^n,$$ by [@Lah19 Proposition 3]. Identifying $\beta$ with an integral element of the Lie algebra $\mathfrak{t}$ of $T$, note this formula extends naturally to all of $\mathfrak{t}$, even to irrational elements which do not define test configurations. It may be that such an irrational element destabilises the manifold $(X,\omega)$ (see for example [@ACGTF08]) so we will incorporate these into our definition of stability. **Definition 23**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a $T$-equivariant Kähler test configuration for $(X,\alpha_T)$, and let $\beta\in\mathfrak{t}$. Then we define $$\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,\beta):=\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)+F_{v,w}(\beta),$$ where $F_{v,w}(\beta)$ is defined in [\[eq:weighted_Futaki\]](#eq:weighted_Futaki){reference-type="eqref" reference="eq:weighted_Futaki"}.* Given a test configuration $(\mathcal{X},\mathcal{A}_T)$ for $(X,\alpha_T)$ and a 1-parameter subgroup $\beta:\mathbb{C}^*\to T^{\mathbb{C}}$, we can *twist* the test configuration by the 1-parameter subgroup to produce a new test configuration. 
To do this, we restrict $(\mathcal{X},\mathcal{A}_T)$ to $\mathbb{C}$, then define a new $\mathbb{C}^*$-action to be the original $\mathbb{C}^*$-action on $\mathcal{X}$ multiplied by the 1-parameter subgroup $\beta$. Compactifying trivially over infinity, we then produce a new test configuration $(\mathcal{X}',\mathcal{A}'_T)$ for $(X,\alpha_T)$. We claim: **Lemma 24**. *Let $(\mathcal{X}',\mathcal{A}'_T)$ denote the twist of a test configuration $(\mathcal{X},\mathcal{A}_T)$ by a one-parameter subgroup $\beta:\mathbb{C}^*\to T^{\mathbb{C}}$. Then $$\mathrm{DF}_{v,w}(\mathcal{X}',\mathcal{A}_T')=\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)+F_{v,w}(\beta).$$* *Proof.* By Proposition [Proposition 21](#prop:localisation){reference-type="ref" reference="prop:localisation"}, we have $$(g(\widetilde{\mathcal{A}}_T'))(\xi)=D_\xi(g(\widetilde{\mathcal{A}}'_{T\times S^1_\beta}))_{\mathbb{C}},$$ where we write $S^1_\beta$ to remind ourselves that the circle action is twisted by $\beta$. The class $\widetilde{\mathcal{A}}'_{T\times S^1_\beta}|_{\mathbb{C}}$ has as an equivariant representative $$\widetilde{\Omega}+\widetilde{m}_T+\widetilde{m}_{S^1_\beta}=\widetilde{\Omega}+\widetilde{m}_T+(\widetilde{m}_{S^1}+\langle \widetilde{m}_T,\beta\rangle),$$ where $\widetilde{m}_T+(\widetilde{m}_{S^1}+\langle \widetilde{m}_T,\beta\rangle)$ takes values in $\mathfrak{t}^*\oplus \mathbb{R}=\mathrm{Lie}(T\times S^1)^*$. For the remainder of the proof we shall omit the excessive tilde notation, although the reader should keep in mind that objects are always integrated over the resolution $\widetilde{\mathcal{X}}$.
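For the reader's convenience, we record the elementary reindexing identity used repeatedly in the computations below. For commuting classes $A$ of equivariant degree 2 and $B$ of degree 0, and $g(x)=\sum_k\frac{b_k}{k!}x^k$ as before, $$\sum_{k=n+1}^\infty\frac{b_k}{k!}\binom{k}{n+1}A^{n+1}B^{k-(n+1)}=\frac{1}{(n+1)!}\sum_{k=0}^\infty\frac{b_{k+n+1}}{k!}A^{n+1}B^{k}=\frac{1}{(n+1)!}\,g^{(n+1)}(B)\,A^{n+1},$$ since $\frac{1}{k!}\binom{k}{n+1}=\frac{1}{(n+1)!\,(k-n-1)!}$; the analogous identity with $n+1$ replaced by $n$, and $g$ by $f$, is used in the same way.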
Evaluating at $\xi\in\mathfrak{t}$, an equivariant current representative for $D_\xi(g(\mathcal{A}'_{T\times S^1_\beta}))_{\mathbb{C}}$ is $$\begin{aligned} \label{eq:dummy} &\left[\int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} \sum_{k=0}^\infty \frac{b_k}{k!} (\Omega + m_{S^1} + \langle m_T, \beta \rangle + \langle m_T, \xi \rangle)^k\right]_{\mathrm{deg=2}} \nonumber \\ =&\int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} \sum_{k=n+1}^\infty \frac{b_k}{k!} \binom{k}{n+1} (\Omega + m_{S^1} + \langle m_T, \beta \rangle)^{n+1} \langle m_T, \xi \rangle^{k-(n+1)} \nonumber \\ =&\frac{1}{(n+1)!} \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} \sum_{k=0}^\infty \frac{b_{k+n+1}}{k!} (\Omega + m_{S^1} + \langle m_T, \beta \rangle)^{n+1} \langle m_T, \xi \rangle^k \nonumber \\ =&\frac{1}{(n+1)!} \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} g^{(n+1)}(\langle m_T, \xi \rangle) (\Omega + m_{S^1} + \langle m_T,\beta \rangle)^{n+1} \nonumber \\ =& \frac{1}{(n+1)!} \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} w(m_T) (\Omega + m_{S^1})^{n+1} + \frac{1}{n!} \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} w(m_T) \langle m_T, \beta \rangle \Omega^n. \end{aligned}$$ A similar calculation (or setting $\beta=0$ above) shows that the first term of [\[eq:dummy\]](#eq:dummy){reference-type="eqref" reference="eq:dummy"} is an equivariant representative for $$D_\xi(g(\mathcal{A}_{T\times S^1}))_{\mathbb{C}}=(g(\mathcal{A}_T))(\xi),$$ i.e. the corresponding invariant for the untwisted test configuration (note the restrictions $\mathcal{X}'_{\mathbb{C}}$ and $\mathcal{X}_{\mathbb{C}}$ are identical, by definition of $\mathcal{X}'$). By Lemma [Lemma 15](#lem:independence){reference-type="ref" reference="lem:independence"}, the second term of [\[eq:dummy\]](#eq:dummy){reference-type="eqref" reference="eq:dummy"} is a constant function on the base, equal to $\frac{1}{n!}\int_X h_\beta w(\mu)\omega^n$. 
We therefore have $$(g(\mathcal{A}'_T))(\xi)=(g(\mathcal{A}_T))(\xi)+\frac{1}{n!}\int_X h_\beta w(\mu)\omega^n.$$ We next calculate $$(c_1(\mathcal{X}'/\mathbb{P}^1)_Tf(\mathcal{A}'_T))(\xi)=D_\xi(c_1(\mathcal{X}'_{\mathbb{C}}/\mathbb{C})_{T\times S^1_\beta}f(\mathcal{A}'_{T\times S^1_\beta}))_{\mathbb{C}}.$$ Choosing an equivariant representative $$\Sigma+\sigma_T+\sigma_{S^1_\beta}=\Sigma+\sigma_T+(\sigma_{S^1}+\langle\sigma_T,\beta\rangle)$$ of $c_1(\mathcal{X}'_{\mathbb{C}}/\mathbb{C})_{T\times S^1_\beta}$ on $\mathcal{X}'_{\mathbb{C}}$, we get $$\begin{aligned} &\left[\int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} (\Sigma + \sigma_{S^1_\beta} + \langle \sigma_T, \xi \rangle) \sum_{k=0}^\infty \frac{a_k}{k!} (\Omega + m_{S^1_\beta} + \langle m_T, \xi \rangle)^k\right]_{\mathrm{deg}=2} \\ =& \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} \langle \sigma_T, \xi \rangle \sum_{k=n+1}^\infty \frac{a_k}{k!} \binom{k}{n+1} (\Omega+m_{S^1} + \langle m_T, \beta \rangle)^{n+1} \langle m_T, \xi \rangle^{k-(n+1)} \\ &+\int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} (\Sigma + \sigma_{S^1} + \langle \sigma_T, \beta \rangle) \sum_{k=n}^\infty \frac{a_k}{k!} \binom{k}{n} (\Omega + m_{S^1} + \langle m_T, \beta \rangle)^n \langle m_T, \xi \rangle^{k-n} \\ =&\frac{1}{(n+1)!} \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} \langle \sigma_T, \xi \rangle \sum_{k=0}^\infty \frac{a_{k+n+1}}{k!} (\Omega + m_{S^1} + \langle m_T, \beta \rangle)^{n+1} \langle m_T, \xi \rangle^{k} \\ &+\frac{1}{n!} \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} (\Sigma + \sigma_{S^1} + \langle \sigma_T, \beta \rangle) \sum_{k=0}^\infty \frac{a_{k+n}}{k!} (\Omega + m_{S^1} + \langle m_T, \beta \rangle)^n \langle m_T, \xi \rangle^k \\ =& \frac{1}{(n+1)!} \int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}} \langle \sigma_T, \xi \rangle f^{(n+1)}(\langle m_T, \xi \rangle) (\Omega + m_{S^1} + \langle m_T, \beta \rangle)^{n+1} \\ &+\frac{1}{n!}\int_{\mathcal{X}'_{\mathbb{C}} / \mathbb{C}}f^{(n)}(\langle m_T, \xi 
\rangle)(\Sigma + \sigma_{S^1} + \langle \sigma_T, \beta \rangle)(\Omega + m_{S^1} + \langle m_T, \beta \rangle)^n. \end{aligned}$$Expanding out these final lines, we have an equivariant representative of $$D_\xi(c_1(\mathcal{X}_{\mathbb{C}}/\mathbb{C})_{T\times S^1}f(\mathcal{A}_{T\times S^1}))_{\mathbb{C}}=(c_1(\mathcal{X}/\mathbb{P}^1)_Tf(\mathcal{A}_T))(\xi)$$ for the untwisted test configuration, plus the remaining terms involving $\beta$: $$\begin{aligned} \label{eq:remaining_terms} &\frac{1}{n!} \int_{\mathcal{X}_{\mathbb{C}}/\mathbb{C}}\langle m_T,\beta\rangle\langle\sigma_T,\xi\rangle \widetilde{v}'(\langle m_T,\xi\rangle)\Omega^n \\ +&\frac{1}{n!} \int_{\mathcal{X}_{\mathbb{C}}/\mathbb{C}}\langle\sigma_T,\beta\rangle\widetilde{v}(\langle m_T,\xi\rangle)\Omega^n \nonumber \\ +&\frac{1}{(n-1)!} \int_{\mathcal{X}_{\mathbb{C}}/\mathbb{C}}\langle m_T,\beta\rangle \widetilde{v}(\langle m_T,\xi\rangle)\Sigma\wedge\Omega^{n-1}. \nonumber \end{aligned}$$ We claim that these remaining terms are (fibrewise) independent of the choice of representative $\Sigma+\sigma_T$, and are in fact constant. To see the independence, suppose we have another representative $\Sigma'+\sigma_T'$. We may write $\Sigma' = \Sigma+i\partial\overline{\partial}\psi$, in which case $\sigma'_T = \sigma_T+d^c\psi$. Differentiating along the straight line $t(\Sigma'+\sigma_T')+(1-t)(\Sigma+\sigma_T)$ joining these representatives, we get $$\begin{aligned} &\frac{1}{n!} \int_{\mathcal{X}_{\mathbb{C}}/\mathbb{C}}\langle m_T,\beta\rangle d^c\psi(\xi) \widetilde{v}'(\langle m_T,\xi\rangle)\Omega^n \\ +&\frac{1}{n!} \int_{\mathcal{X}_{\mathbb{C}}/\mathbb{C}}d^c\psi(\beta)\widetilde{v}(\langle m_T,\xi\rangle)\Omega^n \\ +&\frac{1}{n!} \int_{\mathcal{X}_{\mathbb{C}}/\mathbb{C}}\langle m_T,\beta\rangle \widetilde{v}(\langle m_T,\xi\rangle)n\,i\partial\overline{\partial}\psi\wedge\Omega^{n-1}. 
\end{aligned}$$ Integrating by parts and using the moment map property, the final term cancels with the first two terms, hence we have independence. To see that the terms [\[eq:remaining_terms\]](#eq:remaining_terms){reference-type="eqref" reference="eq:remaining_terms"} are constant, we may choose on any given fibre $\mathcal{X}_\tau$ for $\tau\neq0$ the representative $\mathrm{Ric}(\Omega_\tau)+\Delta m_T$, where $\Delta$ is the Laplacian of $\Omega_\tau$. The terms [\[eq:remaining_terms\]](#eq:remaining_terms){reference-type="eqref" reference="eq:remaining_terms"} then reduce to $$\begin{aligned} &\frac{1}{n!} \int_{\mathcal{X}_\tau}\langle m_T,\beta\rangle((\Delta m_T^\xi) \widetilde{v}'(m_T^\xi) + \Delta(\widetilde{v}(m_T^\xi))+\widetilde{v}(m_T^\xi)S(\Omega_\tau))\Omega_\tau^n \\ =&\frac{1}{n!}\int_{\mathcal{X}_\tau}\langle m_T,\beta\rangle S_v(\Omega_\tau)\Omega_\tau^n. \end{aligned}$$ By Lemma [Lemma 15](#lem:independence){reference-type="ref" reference="lem:independence"}, this is a constant function on $\mathbb{C}^*$. 
By continuity, the terms [\[eq:remaining_terms\]](#eq:remaining_terms){reference-type="eqref" reference="eq:remaining_terms"} are therefore constant on $\mathbb{C}$, and equal to $$\frac{1}{n!}\int_X h_\beta S_v(\omega)\omega^n=\frac{1}{n!}\int_X S_{v,w}(\omega)h_\beta w(\mu)\omega^n.$$ We now substitute these equalities into the formula for the weighted Donaldson--Futaki invariant: $$\begin{aligned} \mathrm{DF}_{v, w}(\mathcal{X}', \mathcal{A}'_T) :=& \left[-(c_1(\mathcal{X}' / \mathbb{P}^1)_T f(\mathcal{A}'_T))+\frac{(c_1(X)_T f'(\alpha_T))}{(g'(\alpha_T))}(g(\mathcal{A}'_T))\right](\xi) \\ =&-(c_1(\mathcal{X}/\mathbb{P}^1)_T f(\mathcal{A}_T))(\xi)-\frac{1}{n!}\int_X S_{v,w}(\omega)h_\beta w(\mu)\omega^n \\ &+\frac{(c_1(X)_T f'(\alpha_T))}{(g'(\alpha_T))} \left(g(\mathcal{A}_T)(\xi)+\frac{1}{n!}\int_X h_\beta w(\mu)\omega^n\right) \\ =& \mathrm{DF}_{v, w}(\mathcal{X}, \mathcal{A}_T) + \frac{1}{n!}\int_X(\hat{S}_{v,w}-S_{v,w}(\omega))h_\beta w(\mu) \omega^n\\ =&\mathrm{DF}_{v, w}(\mathcal{X}, \mathcal{A}_T) + F_{v,w}(\beta). \end{aligned}$$ Here we have used Lemma [Lemma 20](#lem:average_weighted_scalar_curvature){reference-type="ref" reference="lem:average_weighted_scalar_curvature"}, and the proof is complete. ◻ **Remark 25**. In [@Der18 Proposition 4.3], the corresponding result in the unweighted case was proved using asymptotics of the Mabuchi functional; the same proof works in this setting with the weighted Mabuchi functional. **Definition 26** ([@Ino20 Definition 1.8]). *A $T$-equivariant Kähler manifold $(X,\alpha_T)$ is:* 1. **weighted K-semistable* if $\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,\beta)\geq0$ for all $T$-equivariant normal Kähler test configurations $(\mathcal{X},\mathcal{A}_T)$ for $(X,\alpha_T)$ and all elements $\beta\in\mathfrak{t}$,* 2. 
**weighted K-polystable* if it is weighted K-semistable, and $\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,\beta)=0$ only if $(\mathcal{X},\mathcal{A}_T)$ is isomorphic to a $T$-equivariant product test configuration,* 3. **weighted K-stable* if it is weighted K-polystable, and $\mathrm{Aut}_0(X)^T=T^{\mathbb{C}}$, where $\mathrm{Aut}_0(X)^T$ is the group of $T$-equivariant reduced automorphisms of $X$.* Note that if $\beta$ is a rational element of $\mathfrak{t}$ and $k\in\mathbb{N}$ is such that $k\beta$ is integral, then $$\begin{aligned} \mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,\beta) :=&\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)+F_{v,w}(\beta) \\ =& \frac{1}{k}(k\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)+kF_{v,w}(\beta)) \\ \geq& \frac{1}{k}(\mathrm{DF}_{v,w}(\rho_k^*(\mathcal{X},\mathcal{A}_T))+ F_{v,w}(k\beta)),\end{aligned}$$ where $\rho_k$ is the map $\mathbb{P}^1\to\mathbb{P}^1$, $z\mapsto z^k$, and we write $\rho_k^*(\mathcal{X},\mathcal{A}_T)$ for the normalised pulled-back test configuration under $\rho_k$; see [@BHJ19 Proposition 7.14] for the final inequality in the unweighted case; the weighted case is proved entirely similarly by introducing a weighted analogue of the non-Archimedean Mabuchi functional: $$M^{\mathrm{NA}}_{v,w}(\mathcal{X},\mathcal{A}_T) := \mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T) + \frac{((\mathcal{X}_{0,\mathrm{red}}-\mathcal{X}_0)_T g'(\mathcal{A}_T))(\xi)}{(g'(\alpha_T))(\xi)}.$$ By Lemma [Lemma 24](#lem:integral_twist){reference-type="ref" reference="lem:integral_twist"}, the bottom line $\frac{1}{k}(\mathrm{DF}_{v,w}(\rho_k^*(\mathcal{X},\mathcal{A}_T))+ F_{v,w}(k\beta))$ is $\frac{1}{k}$ times the weighted Donaldson--Futaki invariant of the test configuration $\rho_k^*(\mathcal{X},\mathcal{A}_T)$ twisted by $k\beta$. By approximating an irrational $\beta$ by rational elements of $\mathfrak{t}$, we can therefore deduce: **Lemma 27**.
*$(X,\alpha_T)$ is weighted K-semistable if and only if $\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)\geq0$ for all test configurations $(\mathcal{X},\mathcal{A}_T)$ for $(X,\alpha_T)$.* That is, to test weighted K-semistability one need not consider twists by elements of the Lie algebra of $T$. ## Relative weighted K-(semi/poly)stability In this section we introduce notions of relative weighted K-(semi/poly)stability. In the non-weighted setting, relative K-stability was initially introduced by Székelyhidi [@Sze07] for algebraic varieties, and later extended to the Kähler setting by Dervan [@Der18]. Although relative weighted K-semistability does not appear to be explicitly defined in Lahdili's work, it is actually encompassed by the definition of weighted K-semistability in [@Lah19; @Lah20b]. The reason for the difference in terminology here is that in Lahdili's work, the weight function $w$ is permitted to take non-positive values. Thus, in [@Lah19; @Lah20b], a weighted extremal metric is a special example of a weighted cscK metric. Here we will use different terminology, restricting the weight function $w$ to be always positive, and explicitly separating out the cscK case from the extremal case. Furthermore, we will give a slightly different definition of relative weighted K-semistability to what is in [@Lah19; @Lah20b], more along the lines of [@Sze07; @Der18]. To define relative weighted K-stability, we first introduce weighted inner products determined by test configurations. Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration, and let $\zeta_1, \zeta_2\in\mathrm{Lie}(T\times S^1)$. Denote by $h_1$ and $h_2$ the hamiltonian functions on $\mathcal{X}$ of $\zeta_1$ and $\zeta_2$ constructed in Lemma [Lemma 11](#lem:hamiltonians){reference-type="ref" reference="lem:hamiltonians"}, and let $\hat{h}_1$ and $\hat{h}_2$ be their fibrewise averages with respect to the measure $w(m_T|_{\mathcal{X}_\tau})\Omega_\tau^n$. **Definition 28**. 
*The *weighted inner product* on $\mathrm{Lie}(T\times S^1) = \mathfrak{t}\oplus \mathbb{R}$ determined by $(\mathcal{X},\mathcal{A}_T)$ is $$\langle\zeta_1,\zeta_2\rangle_{(\mathcal{X},\mathcal{A}_T)}:=\int_{\mathcal{X}_0}(h_1-\hat{h}_1)(h_2-\hat{h}_2)w(m_0)\Omega_0^n.$$* It is clear that this is a genuine inner product, i.e. is bilinear and positive definite. **Lemma 29**. *The inner product $\langle-,-\rangle_{(\mathcal{X},\mathcal{A}_T)}$ is independent of the choice of equivariant representative $\Omega+m_T$ for $\mathcal{A}_T$. Furthermore, when $\zeta_1,\zeta_2\in\mathfrak{t}$, the inner product is independent of the test configuration itself, and may be computed on $(X,\alpha_T)$.* Hence, when we write the inner product of elements of $\mathfrak{t}$, we shall omit the test configuration from the notation. *Proof.* First assume $\zeta_1,\zeta_2\in\mathfrak{t}$. By Lemma [Lemma 15](#lem:independence){reference-type="ref" reference="lem:independence"}, the integral of $(h_1-\hat{h}_1)(h_2-\hat{h}_2)w(m_\tau)\Omega_\tau^n$ over the general fibre $\mathcal{X}_\tau$ of $\mathcal{X}$ is independent of $\tau\in\mathbb{C}^*$, and is equal to the integral of $(h'_1-\hat{h}'_1)(h'_2-\hat{h}'_2)w(\mu)\omega^n$ over $X$, where $h_j'$ is the hamiltonian for $\zeta_j$ on $(X,\omega)$. Since $$\int_{\mathcal{X}_0}(h_1-\hat{h}_1)(h_2-\hat{h}_2)w(m_0)\Omega_0^n = \lim_{\tau\to0}\int_{\mathcal{X}_\tau}(h_1-\hat{h}_1)(h_2-\hat{h}_2)w(m_\tau)\Omega_\tau^n,$$ we have $$\langle\zeta_1,\zeta_2\rangle_{(\mathcal{X},\mathcal{A}_T)} = \int_X(h_1'-\hat{h}'_1)(h_2'-\hat{h}_2')w(\mu)\omega^n,$$ and the inner product depends only on $(X,\alpha_T)$. In the case where $\zeta_1,\zeta_2$ may also take values in $\mathrm{Lie}(S^1)$, we can only compute the integral over $\mathcal{X}_0$. In this case, the integral takes the form $$\int_{\mathcal{X}_0}h(m_{T\times S^1})\Omega_0^n,$$ where $h$ is a function on the moment polytope of $T\times S^1$. 
Let $\widetilde{\mathcal{X}}\to\mathcal{X}$ be a $T^{\mathbb{C}}\times\mathbb{C}^*$-equivariant resolution of singularities so that the central fibre $\widetilde{\mathcal{X}}_0$ is a simple normal crossings divisor. The integral can then be computed as $$\int_{\widetilde{\mathcal{X}}_0}h(\widetilde{m}_{T\times S^1})\widetilde{\Omega}_0^n = \sum_{j = 1}^N a_j \int_{\mathcal{Y}_j}h(\widetilde{m}_{T\times S^1})\widetilde{\Omega}_0^n,$$ where $\mathcal{Y}_1,\ldots,\mathcal{Y}_N$ denote the reduced components of the central fibre $\widetilde{\mathcal{X}}_0$, which have multiplicities $a_1,\ldots,a_N$ respectively. Each $\mathcal{Y}_j$ is a smooth manifold, and the integral over $\mathcal{Y}_j$ is equivariant-cohomological, and so is independent of the choice of representative for the equivariant cohomology class $[\widetilde{\Omega}+\widetilde{m}]|_{\mathcal{Y}_j}$. Hence the original integral does not depend on the choice of equivariant representative $\Omega+m$ for $\mathcal{A}_T$. ◻ We now introduce the weighted $L^1$-norm of a test configuration, although we will not need this until later. **Definition 30**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, and let $\lambda$ denote the $\mathbb{C}^*$-action of the test configuration, with hamiltonian function $h_\lambda$. Then the *weighted $L^1$-norm* of the test configuration is $$\|(\mathcal{X},\mathcal{A}_T)\|_1^w := \int_{\mathcal{X}_0}|h_\lambda - \hat{h}_\lambda| w(m_0) \Omega_0^n.$$* Similar to the inner product, one may prove that the norm is independent of the choice of equivariant representative for $\mathcal{A}_T$. **Definition 31**. *Let $(X,\alpha_T)$ be a $T$-equivariant Kähler manifold, and let $(\mathcal{X},\mathcal{A}_T)$ be a $T$-equivariant Kähler test configuration for $(X,\alpha_T)$. Denote by $\lambda$ the generator of the $\mathbb{C}^*$-action of the test configuration, and let $\{\beta_j\}_{j=1}^r$ be an orthonormal basis for $\mathfrak{t}$.
We define $$\mathrm{DF}^T_{v,w}(\mathcal{X},\mathcal{A}_T):=\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)-\sum_{j=1}^r\langle\lambda,\beta_j\rangle_{(\mathcal{X},\mathcal{A}_T)}F_{v,w}(\beta_j).$$* This is easily observed to be independent of the choice of orthonormal basis for $\mathfrak{t}$. Notice that the summation is equal to the weighted Futaki invariant of the orthogonal projection of $\lambda$ onto $\mathfrak{t}$ with respect to the inner product $\langle-,-\rangle_{(\mathcal{X},\mathcal{A}_T)}$. Thus we can consider this as the Donaldson--Futaki invariant of a test configuration "orthogonal to $\mathfrak{t}$\". In fact, letting $\beta := \sum_{j=1}^r \langle \lambda, \beta_j \rangle_{(\mathcal{X}, \mathcal{A}_T)} \beta_j$, we write $(\mathcal{X},\mathcal{A}_T)^\perp := (\mathcal{X},\mathcal{A}_T, -\beta)$, and then $$\mathrm{DF}_{v,w}((\mathcal{X},\mathcal{A}_T)^\perp) = \mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,-\beta) = \mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T).$$ **Remark 32**. Given an element $\beta\in\mathfrak{t}$ we could also define $$\mathrm{DF}^T_{v,w}(\mathcal{X},\mathcal{A}_T,\beta):=\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,\beta)-\sum_{j=1}^r\langle\lambda+\beta,\beta_j\rangle_{(\mathcal{X},\mathcal{A}_T)}F_{v,w}(\beta_j),$$ however, this is equal to $\mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T)$, by bilinearity of the inner product and linearity of $F_{v,w}$ on $\mathfrak{t}$. Thus, $\mathrm{DF}^T_{v,w}$ is the "twist invariant\" version of $\mathrm{DF}_{v,w}$. **Definition 33**. *A $T$-equivariant Kähler manifold $(X,\alpha_T)$ is:* 1. **relatively weighted K-semistable* if $\mathrm{DF}^T_{v,w}(\mathcal{X},\mathcal{A}_T)\geq0$ for all test configurations $(\mathcal{X},\mathcal{A}_T)$ for $(X,\alpha_T)$,* 2. **relatively weighted K-polystable* if it is relatively weighted K-semistable, and $\mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T)=0$ only if $(\mathcal{X},\mathcal{A}_T)$ is isomorphic to a product test configuration,* 3.
**relatively weighted K-stable* if it is relatively weighted K-polystable, and $\mathrm{Aut}_0(X)^T=T^{\mathbb{C}}$.* **Remark 34**. For the definition of relative weighted K-polystability, an equivalent condition to being a product test configuration is that the usual $L^1$-norm of the projection of the test configuration orthogonal to the torus is positive, by [@SD20 Appendix]. This is further equivalent to positivity of the weighted $L^1$-norm. In particular, we will write $\|(\mathcal{X},\mathcal{A}_T)^\perp\|_1^w > 0$ in this situation, where $$\label{eq:weighted_orthogonal_projection_norm} \|(\mathcal{X},\mathcal{A}_T)^\perp\|_1^w := \int_{\mathcal{X}_0}|(h_\lambda-h_\beta) - (\hat{h}_\lambda - \hat{h}_\beta)|w(m_0) \Omega_0^n,$$ where $\beta$ denotes the orthogonal projection of $\lambda$ to $\mathfrak{t}$ via the inner product $\langle-,-\rangle_{(\mathcal{X},\mathcal{A}_T)}$ on $\mathrm{Lie}(S^1\times T)$. We remarked that Lahdili gave another definition of relative weighted K-semistability, which we will now cover. By [@Lah19 Section 3.2], there exists a unique candidate $\chi\in\mathfrak{t}$ for the weighted extremal field, regardless of whether a weighted extremal metric exists. This is obtained by taking the $L^2$-orthogonal projection of $S_{v,w}(\omega)$ onto the hamiltonian generators of $\mathfrak{t}$, with respect to the inner product defined by the measure $\frac{1}{n!}w(\mu)\omega^n$; this projection generates a unique $\chi\in\mathfrak{t}$, independent of the choice of representative $\omega+\mu \in \alpha_T$. The weighted extremal equation is then $$S_{v,w}(\omega)=\mu^\chi+a$$ for a suitable constant $a\in\mathbb{R}$ depending only on the equivariant cohomology class $[\omega+\mu]$, the weights $v,w$, and $\chi\in\mathfrak{t}$. From this observation, one can think of a weighted extremal metric as a solution to the equation $$S_v(\omega)=w(\mu)w_{\mathrm{ext}}(\mu),$$ where $w_{\mathrm{ext}}(p) := \langle p,\chi\rangle + a$. 
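As a brief sanity check (this specialisation is classical and included only for orientation), take $v=w=1$. Then $S_{v,w}(\omega)=S(\omega)$ is the usual scalar curvature, and the weighted extremal equation becomes $$S(\omega)=\mu^\chi+a,$$ i.e. the scalar curvature is, up to an additive constant, a hamiltonian for the vector field generated by $\chi$. This is precisely Calabi's extremal equation, with $\chi$ the classical extremal vector field.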
In particular, we may define $w':=ww_{\mathrm{ext}}$. Even though $w'$ need not be positive, we may still define $\mathrm{DF}_{v,w'}(\mathcal{X},\mathcal{A}_T)$ when $\mathcal{X}$ is smooth via the formula $$\begin{aligned} \label{eq:DF_not_positive} \mathrm{DF}_{v,w'}(\mathcal{X},\mathcal{A}_T) :=&-\frac{1}{(n+1)!}\int_{\mathcal{X}}\left(S_v(\Omega)-\hat{S}_{v,w'}w'(m_T)\right)\Omega^{n+1} \\ &+\frac{2}{n!}\int_{\mathcal{X}}v(m_T)\pi^*\omega_{\mathrm{FS}}\wedge\Omega^n,\nonumber\end{aligned}$$ where $\omega_{\mathrm{FS}}$ denotes the Fubini--Study metric on $\mathbb{P}^1$ [@Lah19 Definition 11].[^2] Lahdili then shows that a weighted extremal manifold satisfies $$\mathrm{DF}_{v,w'}(\mathcal{X},\mathcal{A}_T)\geq0$$ for all smooth test configurations $(\mathcal{X},\mathcal{A}_T)$. We shall show that this result is equivalent to relative weighted K-semistability of weighted extremal manifolds, in the sense of Definition [Definition 33](#def:weighted_K-stability){reference-type="ref" reference="def:weighted_K-stability"}: **Theorem 35** ([@Lah20b Corollary 2]). *Let $(X,\alpha_T)$ be a weighted extremal manifold. Then $(X,\alpha_T)$ is relatively weighted K-semistable.* To prove this, we first show the following: **Lemma 36**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a smooth test configuration. Then $$\mathrm{DF}_{v,ww_{\mathrm{ext}}}(\mathcal{X},\mathcal{A}_T) = \mathrm{DF}^T_{v,w}(\mathcal{X},\mathcal{A}_T),$$ where $\mathrm{DF}_{v,ww_{\mathrm{ext}}}(\mathcal{X},\mathcal{A}_T)$ is defined in equation [\[eq:DF_not_positive\]](#eq:DF_not_positive){reference-type="ref" reference="eq:DF_not_positive"}.* *Proof.* Recall that the extremal function $w_{\mathrm{ext}}$ is defined via the $L^2$-orthogonal projection of $S_{v,w}(\omega)$ onto the generators of $\mathfrak{t}$, with respect to the inner product defined by the measure $\frac{1}{n!}w(\mu)\omega^n$.
It follows that the weighted Futaki invariant is given by $$\begin{aligned} F_{v,w}(\beta) &= \frac{1}{n!}\int_X(\hat{S}_{v,w}-S_{v,w}(\omega))h_\beta w(\mu)\omega^n \\ &=-\frac{1}{n!}\int_X (w_{\mathrm{ext}}(\mu)-\hat{S}_{v,w})(h_\beta - \hat{h}_\beta) w(\mu)\omega^n \\ &=-\langle w_{\mathrm{ext}},\beta\rangle. \end{aligned}$$ Now, in the definition of $\mathrm{DF}^T_{v,w}(\mathcal{X},\mathcal{A}_T)$, we compute the weighted Futaki invariant $F_{v,w}(\beta_i)$ of a basis element $\beta_i$. In the proof of Lemma [Lemma 29](#lem:inner_product){reference-type="ref" reference="lem:inner_product"} we observed that the inner product of elements of $\mathfrak{t}$ can be computed on the central fibre of an arbitrary test configuration. Thus, we have $$\begin{aligned} \mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T) &= \mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T) - \sum_j\langle \lambda,\beta_j\rangle_{(\mathcal{X},\mathcal{A}_T)} F_{v,w}(\beta_j) \\ &= \mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T) + \sum_j\langle \lambda,\beta_j\rangle_{(\mathcal{X},\mathcal{A}_T)} \langle w_{\mathrm{ext}},\beta_j\rangle_{(\mathcal{X},\mathcal{A}_T)} \\ &= \mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T) + \langle \lambda,w_{\mathrm{ext}}\rangle_{(\mathcal{X},\mathcal{A}_T)}, \end{aligned}$$ where $\lambda$ generates the $\mathbb{C}^*$-action on the test configuration. It remains to show that this final line is equal to $\mathrm{DF}_{v,ww_{\mathrm{ext}}}(\mathcal{X},\mathcal{A}_T)$. To see this, we use an asymptotic slope formula due to Lahdili for the functional $\mathcal{E}_u(\varphi)$ defined by its first variation along smooth paths: $$\frac{d}{dt}\mathcal{E}_u(\varphi_t) := \int_X \dot{\varphi}_t u(\mu_t)\omega_t^n,$$ and normalised so that $\mathcal{E}_u(0)=0$; here $u:P\to\mathbb{R}$ is an arbitrary smooth function on the moment polytope. 
On $X\cong\mathcal{X}_1$ we write $\omega_t := \omega + i\partial\overline{\partial}\varphi_t = \lambda(e^{-t})^*\Omega|_{\mathcal{X}_1}$, and write $\mu_t$ for the corresponding moment map induced by $m_T$. By [@Lah19 Lemma 9], $$\lim_{t\to\infty}\frac{d}{dt}\mathcal{E}_u(\varphi_t)=\frac{1}{(n+1)!}\int_{\mathcal{X}}u(m_T)\Omega^{n+1}.$$ In particular, using equation [\[eq:DF_not_positive\]](#eq:DF_not_positive){reference-type="eqref" reference="eq:DF_not_positive"}, $$\begin{aligned} \mathrm{DF}_{v,ww_{\mathrm{ext}}}(\mathcal{X},\mathcal{A}_T) - \mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T) &= \frac{1}{(n+1)!}\int_{\mathcal{X}}(w_{\mathrm{ext}}(m_T)-\hat{S}_{v,w})w(m_T)\Omega^{n+1} \\ &= \lim_{t\to\infty}\frac{d}{dt}\mathcal{E}_{(w_{\mathrm{ext}}-\hat{S}_{v,w})w}(\varphi_t) \\ &= \lim_{t\to\infty}\int_X\dot{\varphi}_t(w_{\mathrm{ext}}(\mu_t)-\hat{S}_{v,w})w(\mu_t)\omega_t^n \\ &= \int_{\mathcal{X}_0}h_\lambda(w_{\mathrm{ext}}(m_0)-\hat{S}_{v,w})w(m_0)\Omega_0^n \\ &= \langle \lambda, w_{\mathrm{ext}}\rangle_{(\mathcal{X},\mathcal{A}_T)}. \end{aligned}$$ Thus, $\mathrm{DF}_{v,w'}(\mathcal{X},\mathcal{A}_T)$ and $\mathrm{DF}^T_{v,w}(\mathcal{X},\mathcal{A}_T)$ are both computed as $\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)+\langle \lambda, w_{\mathrm{ext}} \rangle_{(\mathcal{X},\mathcal{A}_T)}$, and we are done. ◻ *Proof of Theorem [Theorem 35](#thm:weighted_semistable){reference-type="ref" reference="thm:weighted_semistable"}.* Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$. Take an equivariant resolution of singularities $\widetilde{\mathcal{X}}\to\mathcal{X}$, let $E_T\subset\widetilde{\mathcal{X}}$ be the exceptional divisor of the resolution, and consider $(\widetilde{\mathcal{X}},\widetilde{\mathcal{A}}_T - \epsilon E_T)$, which is a smooth test configuration for $(X,\alpha_T)$ for all $\epsilon > 0$ sufficiently small.
Sending $\epsilon\to0$, we observe $$\mathrm{DF}_{v,w}^T(\widetilde{\mathcal{X}},\widetilde{\mathcal{A}}_T - \epsilon E_T) \to \mathrm{DF}^T_{v,w}(\mathcal{X},\mathcal{A}_T).$$ Since $\mathrm{DF}^T_{v,w}(\widetilde{\mathcal{X}}, \widetilde{\mathcal{A}}_T - \epsilon E_T)\geq 0$ for all $\epsilon > 0$ sufficiently small, by Lemma 36 combined with Lahdili's result for smooth test configurations, it follows that $\mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T)\geq0$. ◻ # Relative K-polystability of weighted extremal manifolds In this section we prove Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}, that a weighted extremal manifold is relatively weighted K-polystable. The style of argument we use was pioneered by Stoppa for polarised cscK manifolds with discrete automorphism group [@Sto09; @Sto11]. It was then generalised to the setting of polarised extremal manifolds by Stoppa--Székelyhidi [@SS11]. In the non-polarised Kähler setting, the argument was extended by Dervan--Ross to the cscK case [@DR17], then later by Dervan to the extremal case [@Der18]. Before beginning the proof of the main theorem, we first show that it implies Corollary [Corollary 2](#cor:main){reference-type="ref" reference="cor:main"}, that a weighted cscK manifold is weighted K-polystable: *Proof of Corollary [Corollary 2](#cor:main){reference-type="ref" reference="cor:main"}.* Let $(X,\alpha_T)$ be a weighted cscK manifold. Then by [@Lah20b Corollary 2] and [@Ino20 Theorem A], the manifold $(X,\alpha_T)$ is weighted K-semistable. Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, and $\beta\in\mathfrak{t}$ be such that $\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,\beta)=0$. Since $(X,\alpha_T)$ is weighted cscK, we have $F_{v,w}=0$ on $\mathfrak{t}$, and so $\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T,\beta)=\mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T)=0$.
Being weighted cscK, $(X,\alpha_T)$ is in particular weighted extremal, and so by Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is relatively weighted K-polystable. It follows that $(\mathcal{X},\mathcal{A}_T)$ is a product test configuration. ◻ ## Expansions of weighted invariants To prove the main theorem, we will find a suitable $T$-invariant point $p\in X$, blow up the test configuration $(\mathcal{X},\mathcal{A}_T)$ along the orbit closure $C:=\overline{\mathbb{C}^*p}\subset\mathcal{X}$ of $p$, and compute the expansion of the $\mathrm{DF}_{v,w}^T$-invariant of the blown up test configuration as $\epsilon\to0$, where $\epsilon$ is the coefficient of the exceptional divisor. When expanding the unweighted Donaldson--Futaki invariant, the subleading coefficient is given by the Chow weight $$\mathrm{Ch}_p(\mathcal{X},\mathcal{A}) := \frac{1}{n+1}\frac{\mathcal{A}^{n+1}}{\alpha^n} - \int_C\Omega,$$ see [@DR17 Proposition 5.4]. In our setting, it will be given by the following weighted analogue: **Definition 37**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$. Let $p\in X$ be a $T$-invariant point, and define $C:=\overline{\mathbb{C}^*p}\subset\mathcal{X}$. The *weighted Chow weight of $p$* is $$\mathrm{Ch}_p^w(\mathcal{X},\mathcal{A}_T):=\frac{(g(\mathcal{A}_T))(\xi)}{(g'(\alpha_T))(\xi)}-\int_C\Omega.$$* We then have the following expansion of the weighted Donaldson--Futaki invariant. We first consider the case where both the test configuration and the curve $C$ are smooth for notational simplicity, and later explain how to compute the expansion in the general case. **Proposition 38**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a smooth test configuration for $(X,\alpha_T)$, and let $p\in X$ be a $T$-fixed point. 
Denote by $C$ the orbit closure of $p\in\mathcal{X}_1\cong X$, and let $(\mathrm{Bl}_C\mathcal{X},\widetilde{\mathcal{A}}_T-\epsilon\mathcal{E}_T)$ be the blown up test configuration with $T$-invariant exceptional divisor $\mathcal{E}_T$, depending on a parameter $0<\epsilon\ll1$; here $\widetilde{\mathcal{A}}_T$ denotes the pullback of $\mathcal{A}_T$ to $\mathrm{Bl}_C\mathcal{X}$. Suppose that $C$ is smooth. Then $$\begin{aligned} &\mathrm{DF}_{v,w}(\mathrm{Bl}_C\mathcal{X},\widetilde{\mathcal{A}}_T-\epsilon\mathcal{E}_T) \\ =&\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T)- \frac{v(\mu(p))}{(n-2)!}\mathrm{Ch}_p^w(\mathcal{X},\mathcal{A}_T)\epsilon^{n-1}+\mathrm{O}(\epsilon^n). \end{aligned}$$* *Proof.* Write $$f(x) = \sum_k \frac{a_k}{k!} x^k;\quad\quad g(x) = \sum_k \frac{b_k}{k!} x^k,$$ so that $$f^{(\ell)}(x) = \sum_{k=0}^\infty \frac{a_{k+\ell}}{k!} x^k;\quad\quad g^{(\ell)}(x) = \sum_{k=0}^\infty \frac{b_{k+\ell}}{k!} x^k.$$ We also write $\widetilde{\alpha}_T$ for the pullback of $\alpha_T$ to $\widetilde{X}:=\mathrm{Bl}_pX$, and $\widetilde{\mathcal{A}}_T$ for the pullback of $\mathcal{A}_T$ to $\widetilde{\mathcal{X}}:=\mathrm{Bl}_C\mathcal{X}$. Denote by $E_T$ the exceptional divisor of $\mathrm{Bl}_pX$ and $\mathcal{E}_T$ the exceptional divisor of $\mathrm{Bl}_C\mathcal{X}$. Then $$\begin{aligned} g'(\widetilde{\alpha}_T-\epsilon E_T) &= g'(\widetilde{\alpha}_T) + (-\epsilon E_T) \sum_{k=1}^\infty \frac{b_{k+1}}{k!} \binom{k}{1} \widetilde{\alpha}_T^{k-1} + (-\epsilon E_T)^2\sum_{k=2}^\infty \frac{b_{k+1}}{k!}\binom{k}{2}\widetilde{\alpha}_T^{k-2} +\cdots \\ &= g'(\widetilde{\alpha}_T)+(-\epsilon E_T)\sum_{k=0}^\infty \frac{b_{k+2}}{k!} \widetilde{\alpha}_T^k +\frac{1}{2!}(-\epsilon E_T)^2\sum_{k=0}^\infty \frac{b_{k+3}}{k!}\widetilde{\alpha}_T^k+\cdots \\ &= g'(\widetilde{\alpha}_T)+(-\epsilon E_T)g^{(2)}(\widetilde{\alpha}_T)+\frac{1}{2!}(-\epsilon E_T)^2g^{(3)}(\widetilde{\alpha}_T)+\cdots. 
\end{aligned}$$ Now, note that $\widetilde{\alpha}_T|_{E_T} = [0+\mu(p)]$ is constant, so $$((-E_T)^k\widetilde{\alpha}_T^\ell)=0$$ whenever $k<n$ and $$((-E_T)^n\widetilde{\alpha}_T^\ell) = \mu(p)^\ell((-E)^n)=-\mu(p)^\ell$$ for $\ell\geq1$, where the intersection $((-E)^n)$ is computed in the usual non-equivariant cohomology ring. Hence, $$\begin{aligned} (g'(\widetilde{\alpha}_T-\epsilon E_T))(\xi) &= (g'(\widetilde{\alpha}_T))(\xi) + \frac{1}{n!}((-\epsilon E_T)^ng^{(n+1)}(\widetilde{\alpha}_T))(\xi) + \mathrm{O}(\epsilon^{n+1}) \\ &= (g'(\alpha_T))(\xi) - \frac{w(\mu(p))}{n!} \epsilon^n + \mathrm{O}(\epsilon^{n+1}). \end{aligned}$$ Next, note that $\widetilde{\mathcal{A}}_T|_{\mathcal{E}_T}$ is the pullback of $\mathcal{A}_T|_{C}$ under the map $\mathcal{E}_T\to C$. It follows that $$((-\mathcal{E}_T)^k\widetilde{\mathcal{A}}_T^\ell)=0$$ whenever $k\leq n-1$, and $$((-\mathcal{E}_T)^n\widetilde{\mathcal{A}}_T^\ell) = -\ell \int_Cm_T^{\ell-1} \Omega$$ for $\ell\geq1$, noting that $\mathcal{A}_T=[\Omega+m_T]$. But note the $T$-action on $C$ is trivial, hence $m_T$ is constant on $C$ (equal to $\mu(p)$) and can be pulled out of the integral. Thus, similar to the calculation on $\mathrm{Bl}_pX$, $$(g(\widetilde{\mathcal{A}}_T-\epsilon \mathcal{E}_T))(\xi) = (g(\mathcal{A}_T))(\xi) - \frac{w(\mu(p))\int_C\Omega}{n!}\epsilon^n + \mathrm{O}(\epsilon^{n+1}).$$ Next we compute $(c_1(\widetilde{X})_T f'(\widetilde{\alpha}_T - \epsilon E_T))$. Similarly to $g$, $$f'(\widetilde{\alpha}_T-\epsilon E_T) = f'(\widetilde{\alpha}_T) + (-\epsilon E_T)f^{(2)}(\widetilde{\alpha}_T)+\frac{1}{2!}(-\epsilon E_T)^2f^{(3)}(\widetilde{\alpha}_T)+\cdots.$$ Next note $c_1(\widetilde{X})_T = \widetilde{c}_1(X)_T - (n-1)[E_T]$, where we write $\widetilde{c}_1(X)_T$ for the pull back of $c_1(X)_T$ to $\widetilde{X}$. 
Since $\widetilde{c}_1(X)_T|_{E_T} = [0+\Delta\mu (p)]$ is constant, $$(\widetilde{c}_1(X)_T f'(\widetilde{\alpha}_T-\epsilon E_T))(\xi) = (c_1(X)_T f'(\alpha_T))(\xi) + \mathrm{O}(\epsilon^n).$$ Next, $$\begin{aligned} (E_Tf'(\widetilde{\alpha}_T-\epsilon E_T)) (\xi) &= \frac{1}{(n-1)!}\epsilon^{n-1}(E_T(-E_T)^{n-1}f^{(n)}(\widetilde{\alpha}_T))(\xi) + \mathrm{O}(\epsilon^n) \\ &= \frac{v(\mu(p))}{(n-1)!}\epsilon^{n-1}+\mathrm{O}(\epsilon^n). \end{aligned}$$ Hence $$(\widetilde{c}_1(X)_T f'(\widetilde{\alpha}_T - \epsilon E_T))(\xi) = (c_1(X)_T f'(\alpha_T))(\xi) - \frac{v(\mu(p))}{(n-2)!} \epsilon^{n-1} + \mathrm{O}(\epsilon^n).$$ In the same manner we readily compute $$\begin{aligned} & (c_1(\widetilde{\mathcal{X}}/\mathbb{P}^1)_T f(\widetilde{\mathcal{A}}_T - \epsilon \mathcal{E}_T))(\xi) \\ =& (c_1(\mathcal{X}/\mathbb{P}^1)_T f(\mathcal{A}_T))(\xi) - \frac{v(\mu(p)) \int_C\Omega}{(n-2)!} \epsilon^{n-1} + \mathrm{O}(\epsilon^{n+1}), \end{aligned}$$ and we will give more details of this in the more general singular case below. From these calculations, the expansion of the weighted Donaldson--Futaki invariant follows immediately. ◻ When $\mathcal{X}$ and $C$ are not necessarily smooth, we use the following modification: **Proposition 39**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, let $p\in X$ be a $T$-fixed point, and define $C:=\overline{\mathbb{C}^*p}\subset\mathcal{X}$. Let $r:\mathcal{Y}\to\mathcal{X}$ be a $T^{\mathbb{C}}\times\mathbb{C}^*$-equivariant resolution of singularities of $\mathcal{X}$ with exceptional divisor $\mathcal{F}$ supported on $\mathcal{Y}_0$, such that $\mathcal{Y}_0$ is a simple normal crossings divisor[^3] and the proper transform $\hat{C}\subset\mathcal{Y}$ of $C$ is smooth. Let $b:\mathcal{B}\to\mathcal{Y}$ be the blowup of $\mathcal{Y}$ along $\hat{C}$, with exceptional divisor $\mathcal{E}$. 
Then the weighted Donaldson--Futaki invariant of the test configuration $(\mathcal{B},b^*r^*\mathcal{A}_T-\epsilon\mathcal{E}_T-\epsilon^n b^*\mathcal{F}_T)$ has an expansion $$\begin{aligned} &\mathrm{DF}_{v,w}(\mathcal{B},b^*r^*\mathcal{A}_T-\epsilon\mathcal{E}_T-\epsilon^n b^*\mathcal{F}_T) \\ =&\mathrm{DF}_{v,w}(\mathcal{X},\mathcal{A}_T) - \frac{v(\mu(p))}{(n-2)!}\mathrm{Ch}_p^w(\mathcal{X},\mathcal{A}_T)\epsilon^{n-1}+\mathrm{O}(\epsilon^n). \end{aligned}$$* *Proof.* The same calculation as in the proof of Proposition [Proposition 38](#prop:DF_expansion_smooth){reference-type="ref" reference="prop:DF_expansion_smooth"} carries through, only it is more notationally cumbersome. First, note that $\epsilon^n b^*\mathcal{F}_T$ can be effectively ignored in the calculation, since any term involving it will be $\mathrm{O}(\epsilon^n)$. Hence the calculation of most terms is essentially unchanged; we will however calculate the term $(c_1(\mathcal{B}/\mathbb{P}^1)_T f(b^*r^*\mathcal{A}_T - \epsilon\mathcal{E}_T-\epsilon^n b^*\mathcal{F}_T))(\xi)$. First, note $c_1(\mathcal{B}/\mathbb{P}^1)_T = b^*c_1(\mathcal{Y}/\mathbb{P}^1)_T - (n-1)[\mathcal{E}_T]$. For $f(b^*r^*\mathcal{A}_T - \epsilon \mathcal{E}_T - \epsilon^n b^* \mathcal{F}_T)$, ignoring $\epsilon^n b^* \mathcal{F}_T$ we compute $$\begin{aligned} f(b^*r^*\mathcal{A}_T - \epsilon \mathcal{E}_T) &= f(b^*r^*\mathcal{A}_T) + \frac{1}{1!} (-\epsilon \mathcal{E}_T) \sum_{k=0}^\infty \frac{a_{k+1}}{k!}(b^*r^*\mathcal{A}_T)^k + \cdots \\ &+ \frac{1}{(n-1)!} (-\epsilon \mathcal{E}_T)^{n-1} \sum_{k=0}^\infty \frac{a_{k+n-1}}{k!}(b^*r^*\mathcal{A}_T)^k +\mathrm{O}(\epsilon^n). \end{aligned}$$ Note that $(-\mathcal{E}_T)^\ell b^*r^*\mathcal{A}_T^k$ has trivial integral whenever $\ell<n$, since $b^*r^*\mathcal{A}_T|_{\mathcal{E}_T}$ is pulled back from $\hat{C}$. 
Furthermore, since $\mathcal{A}_T = [\Omega + m_T]$ and $r^* m_T|_{\hat{C}} = \mu(p)$ is constant, for $k\geq1$ we have $$((-\mathcal{E}_T)^n b^*r^*\mathcal{A}_T^k)=-k\mu(p)^{k-1}\int_{\hat{C}}r^*\Omega.$$ Therefore $$\begin{aligned} &((n-1)\mathcal{E}_T f(b^*r^*\mathcal{A}_T-\epsilon\mathcal{E}_T))(\xi) \\ =& \epsilon^{n-1}\frac{n-1}{(n-1)!}\sum_{k=0}^\infty\frac{a_{k+n-1}}{k!}(\mathcal{E}_T(-\mathcal{E}_T)^{n-1}(b^*r^*\mathcal{A}_T)^k)(\xi) + \mathrm{O}(\epsilon^n) \\ =& \epsilon^{n-1}\frac{1}{(n-2)!}\sum_{k=0}^\infty\frac{a_{k+n}}{k!}\langle\mu(p),\xi\rangle^k\int_{\hat{C}}r^*\Omega + \mathrm{O}(\epsilon^n) \\ =& \frac{v(\mu(p))\int_{C}\Omega}{(n-2)!}\epsilon^{n-1} + \mathrm{O}(\epsilon^n), \end{aligned}$$ where we used $\int_{\hat{C}}r^*\Omega = \int_C\Omega$, which can be seen by integrating over the smooth locus of $C$. Next, $$\begin{aligned} (b^*c_1(\mathcal{Y}/\mathbb{P}^1)_T f(b^*r^*\mathcal{A}_T - \epsilon \mathcal{E}_T))(\xi) &= (b^*c_1(\mathcal{Y}/\mathbb{P}^1)_T f(b^*r^*\mathcal{A}_T))(\xi) + \mathrm{O}(\epsilon^n) \\ &= (c_1(\mathcal{Y}/\mathbb{P}^1)_T f(r^*\mathcal{A}_T))(\xi) + \mathrm{O}(\epsilon^n). \end{aligned}$$ Here we used the projection formula for $b:\mathcal{B}\to\mathcal{Y}$ to reduce the first term. The higher order terms are order $\epsilon^n$, since $b^*c_1(\mathcal{Y}/\mathbb{P}^1)_T|_{\mathcal{E}_T}$ and $b^*r^*\mathcal{A}_T|_{\mathcal{E}_T}$ are pulled back from $\hat{C}$, which is 1-dimensional. Combining these computations, we deduce $$\begin{aligned} &(c_1(\mathcal{B}/\mathbb{P}^1)_T f(b^*r^*\mathcal{A}_T - \epsilon\mathcal{E}_T-\epsilon^n b^*\mathcal{F}_T))(\xi) \\ =& (c_1(\mathcal{Y}/\mathbb{P}^1)_T f(r^*\mathcal{A}_T))(\xi) - \frac{v(\mu(p))\int_{C}\Omega}{(n-2)!}\epsilon^{n-1} + \mathrm{O}(\epsilon^n). 
\end{aligned}$$ Recall that the intersections in the weighted Donaldson--Futaki invariant are defined by pulling back to a resolution of singularities, so this first term is the appropriate one appearing in the weighted Donaldson--Futaki invariant of $(\mathcal{X},\mathcal{A}_T)$. ◻ Since we wish to compute relative stability, we must also compute how the inner products $\langle-,-\rangle_{(\mathcal{X},\mathcal{A}_T)}$ and the weighted Futaki invariants $F_{v,w}$ change on the blowup. First, we state the following analogue of [@Sze15 Proposition 37] in the weighted setting: **Lemma 40**. *Let $(X,\alpha_T)$ be a Kähler manifold with $T$-action, and let $p\in X$ be a $T$-invariant point. Denote by $\langle-,-\rangle$ the inner product on $\mathfrak{t}$ defined on the base manifold $(X,\alpha_T)$, and by $\langle-,-\rangle_\epsilon$ the inner product on $\mathfrak{t}$ defined on $(\mathrm{Bl}_pX,\alpha_T - \epsilon E_T)$. Then $$\langle\beta_1, \beta_2\rangle_\epsilon = \langle\beta_1, \beta_2\rangle + \mathrm{O}(\epsilon^{n-\delta})$$ for all $\delta > 0$ sufficiently small.* The exact same proof from [@Sze15] carries over with minimal modification, since the measure $w(\mu)\omega^n$ is equivalent to the measure $\omega^n$. We only note that the result in [@Sze15] is stated in the case $\langle\beta_1,\beta_2\rangle = 0$, but the proof works equally well without this assumption. **Lemma 41**. *Given the setup of Proposition [Proposition 39](#prop:blowup_general){reference-type="ref" reference="prop:blowup_general"}, denote by $\lambda$ the generator of the $\mathbb{C}^*$-action of the test configuration $\mathcal{X}$, and let $\beta\in\mathfrak{t}$. 
Then $$\begin{aligned} \langle \lambda,\beta\rangle_{(\mathcal{B}, b^*r^*\mathcal{A}_T-\epsilon \mathcal{E}_T-\epsilon^{n}b^*\mathcal{F}_T)} = \langle \lambda, \beta \rangle_{(\mathcal{X},\mathcal{A}_T)} + \mathrm{O}(\epsilon^{n-\delta}), \end{aligned}$$ for all $\delta>0$ sufficiently small.* The proof is unchanged from [@Der18 Proposition 4.16], where one uses the fact that $\mathcal{Y}_0$ is a simple normal crossings divisor to reduce the computation to the smooth case, and applies Lemma [Lemma 40](#lem:T-ip_blowup){reference-type="ref" reference="lem:T-ip_blowup"}. **Lemma 42**. *Let $(X,\alpha_T)$ be a Kähler manifold with $T$-action, and let $p\in X$ be a $T$-invariant point. Denote by $F_{v,w}^\epsilon$ the weighted Futaki invariant on the blown up manifold $\mathrm{Bl}_pX$ for the class $\alpha_T-\epsilon E_T$, where $E$ is the exceptional divisor of the blowup. Then the weighted Futaki invariant has an expansion $$F_{v,w}^\epsilon(\beta) = F_{v,w}(\beta) + \frac{v(\mu(p))}{(n-2)!}(h_\beta(p) - \hat{h}_\beta)\epsilon^{n-1} + \mathrm{O}(\epsilon^n).$$* *Proof.* This is a straightforward consequence of Proposition [Proposition 39](#prop:blowup_general){reference-type="ref" reference="prop:blowup_general"}.[^4] In particular, in the case where $\beta\in\mathfrak{t}$ is integral, we let $(\mathcal{X},\mathcal{A}_T)$ be the product test configuration associated to $\beta$. By Lemma [Lemma 50](#lem:twisted_Chow_weight){reference-type="ref" reference="lem:twisted_Chow_weight"} below, the weighted Chow weight is computed as $h_\beta(p)-\hat{h}_\beta$. The weighted Donaldson--Futaki invariant of a product test configuration is merely the weighted Futaki invariant of the corresponding 1-parameter subgroup, so the expansion follows immediately. For a general element $\beta\in\mathfrak{t}$, we write $\beta$ as a linear combination of integral elements, and apply the expansion in the integral case.
◻ We will see that in the expansion of $\mathrm{DF}_{v,w}^T$, rather than the weighted Chow weight of a point we get a $T$-orthogonal analogue; that is, a variant of the weighted Chow weight that is invariant under twisting the test configuration by one-parameter subgroups of $T$. **Definition 43**. *The *$T$-orthogonal weighted Chow weight* is $$\mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T):=\frac{(g(\mathcal{A}_T))(\xi)}{(g'(\alpha_T))(\xi)}-\int_C\Omega+\sum_{i=1}^r\langle\lambda,\beta_i\rangle_{(\mathcal{X},\mathcal{A}_T)}(h_{\beta_i}(p)-\hat{h}_{\beta_i}).$$* We will study this invariant in more detail in the following subsection. For now, we describe the expansion of $\mathrm{DF}_{v,w}^T$ in terms of the $T$-orthogonal Chow weight. This is an immediate consequence of the definition of $\mathrm{DF}^T_{v,w}$ and the expansions of Proposition [Proposition 39](#prop:blowup_general){reference-type="ref" reference="prop:blowup_general"}, Lemma [Lemma 40](#lem:T-ip_blowup){reference-type="ref" reference="lem:T-ip_blowup"}, Lemma [Lemma 41](#lem:blowup_inner_product){reference-type="ref" reference="lem:blowup_inner_product"}, and Lemma [Lemma 42](#lem:Futaki_blowup){reference-type="ref" reference="lem:Futaki_blowup"}. **Proposition 44**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, let $p\in X$ be a $T$-fixed point, and define $C:=\overline{\mathbb{C}^*p}\subset\mathcal{X}$. Let $r:\mathcal{Y}\to\mathcal{X}$ be an equivariant resolution of singularities of $\mathcal{X}$ with exceptional divisor $\mathcal{F}_T$ supported on $\mathcal{Y}_0$, such that $\mathcal{Y}_0$ is a simple normal crossings divisor and the proper transform $\hat{C}\subset\mathcal{Y}$ of $C$ is smooth. Let $b:\mathcal{B}\to\mathcal{Y}$ be the blowup of $\mathcal{Y}$ along $\hat{C}$, with exceptional divisor $\mathcal{E}_T$. 
Then the $\mathrm{DF}^T_{v,w}$-invariant of the test configuration $(\mathcal{B},b^*r^*\mathcal{A}_T-\epsilon\mathcal{E}_T-\epsilon^nb^*\mathcal{F}_T)$ for $(\mathrm{Bl}_pX,\alpha_T-\epsilon E_T)$ has an expansion $$\begin{aligned} &\mathrm{DF}_{v,w}^T(\mathcal{B},b^*r^*\mathcal{A}_T-\epsilon\mathcal{E}_T-\epsilon^{n}b^*\mathcal{F}_T) \\ =&\mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T) - \frac{v(\mu(p))}{(n-2)!} \mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T) \epsilon^{n-1} + \mathrm{O}(\epsilon^\kappa), \end{aligned}$$ where $\kappa > n-1$.* The last thing we require is the existence of a $T$-invariant point with positive weighted $T$-Chow weight. We state the result here, and prove it in the next subsection. **Proposition 45**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a $T$-equivariant test configuration for $(X,\alpha_T)$ with $\|(\mathcal{X},\mathcal{A}_T)^\perp\|^w_1 > 0$, where $(\mathcal{X},\mathcal{A}_T)^\perp$ denotes the component of the test configuration orthogonal to the torus and the norm is defined in equation [\[eq:weighted_orthogonal_projection_norm\]](#eq:weighted_orthogonal_projection_norm){reference-type="eqref" reference="eq:weighted_orthogonal_projection_norm"}. Then there exists a $T$-invariant point $p\in X$ such that $$\mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T) > 0.$$* Given all this, we can finally complete the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}: *Proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}.* Let $(X,\alpha_T)$ be a weighted extremal manifold. Then $(X,\alpha_T)$ is relatively weighted K-semistable by Theorem [Theorem 35](#thm:weighted_semistable){reference-type="ref" reference="thm:weighted_semistable"}. Suppose that $(\mathcal{X},\mathcal{A}_T)$ is a test configuration for $(X,\alpha_T)$ which satisfies $$\mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T)=0.$$ We wish to show that this test configuration must be a product test configuration.
If it is not a product test configuration, then by Remark [Remark 34](#rem:positive_norm){reference-type="ref" reference="rem:positive_norm"}, we have $$\|(\mathcal{X},\mathcal{A}_T)^\perp\|^w_1 > 0.$$ By Proposition [Proposition 45](#prop:Chow){reference-type="ref" reference="prop:Chow"}, there exists a $T$-fixed point $p\in X$ with $$\mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T) > 0.$$ Define $C:=\overline{\mathbb{C}^*p}\subset \mathcal{X}$. Let $r:\mathcal{Y}\to\mathcal{X}$ be an equivariant resolution of singularities of $\mathcal{X}$ with exceptional divisor $\mathcal{F}_T$ supported on $\mathcal{Y}_0$, such that $\mathcal{Y}_0$ is a simple normal crossings divisor and the proper transform $\hat{C}\subset\mathcal{Y}$ of $C$ is smooth. Let $b:\mathcal{B}\to\mathcal{Y}$ be the blowup of $\mathcal{Y}$ along $\hat{C}$, with exceptional divisor $\mathcal{E}_T$. By Proposition [Proposition 44](#prop:DFT_blowup){reference-type="ref" reference="prop:DFT_blowup"}, $$\begin{aligned} &\mathrm{DF}_{v,w}^T(\mathcal{B},b^*r^*\mathcal{A}_T-\epsilon\mathcal{E}_T-\epsilon^nb^*\mathcal{F}_T) \\ =&\mathrm{DF}_{v,w}^T(\mathcal{X},\mathcal{A}_T)- \frac{v(\mu(p))}{(n-2)!}\mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T)\epsilon^{n-1}+\mathrm{O}(\epsilon^\kappa), \end{aligned}$$ for some $\kappa > n-1$. Since the weighted Chow weight of $p$ is strictly positive, it follows that $$\mathrm{DF}_{v,w}^T(\mathcal{B},b^*r^*\mathcal{A}_T-\epsilon\mathcal{E}_T-\epsilon^nb^*\mathcal{F}_T)<0$$ for $\epsilon>0$ sufficiently small. Since $T$ is maximal, the point $p$ is relatively stable, so by [@Hal23 Theorem 1.1], $(\mathrm{Bl}_pX,\alpha_T-\epsilon E_T)$ admits a weighted extremal metric for all $\epsilon>0$ sufficiently small. It is therefore relatively weighted K-semistable, and we have reached a contradiction. 
◻ ## Existence of a Chow stable point In this final section we prove Proposition [Proposition 45](#prop:Chow){reference-type="ref" reference="prop:Chow"}, which was used in the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}. **Lemma 46**. *For $a\in\mathbb{R}$, $$\mathrm{Ch}_p(\mathcal{X},\mathcal{A}_T+a\mathcal{O}_{\mathbb{P}^1}(1))=\mathrm{Ch}_p(\mathcal{X},\mathcal{A}_T).$$* *Proof.* The restriction of $\mathcal{A}_T+a\mathcal{O}_{\mathbb{P}^1}(1)$ to the general fibre is still $\alpha_T$, and so the denominator $(g'(\alpha_T))(\xi)$ is unchanged by this perturbation. Since $C\to\mathbb{P}^1$ is a biholomorphism away from $0\in\mathbb{P}^1$, $$\int_C(\Omega+a\pi^*\omega_{\mathrm{FS}}) = a + \int_C\Omega.$$ On the other hand, $$\begin{aligned} (g(\mathcal{A}_T+a\mathcal{O}_{\mathbb{P}^1}(1)))(\xi) &= \frac{1}{(n+1)!}\int_{\mathcal{X}}w(m_T)(\Omega+a\pi^*\omega_{\mathrm{FS}})^{n+1} \\ &= \frac{1}{(n+1)!}\int_{\mathcal{X}}w(m_T)\Omega^{n+1}+\frac{a}{n!}\int_{\mathcal{X}}w(m_T)\Omega^n\wedge\pi^*\omega_{\mathrm{FS}} \\ &= (g(\mathcal{A}_T))(\xi) + \frac{a}{n!} \int_{\mathcal{X}_1}w(m_1)\Omega_1^n \\ &= (g(\mathcal{A}_T))(\xi) + a (g'(\alpha_T))(\xi). \qedhere \end{aligned}$$ ◻ It follows that we may normalise $\mathcal{A}_T$ so that $(g(\mathcal{A}_T))(\xi)=0$, in which case $\mathrm{Ch}_p(\mathcal{X},\mathcal{A}_T)=-\int_C\Omega$. Note that in the non-weighted setting, the usual Chow weight can also be normalised as such, although with a potentially different normalisation, so we cannot say that the weighted and unweighted Chow weights coincide. In order to prove the existence of a $T$-invariant point $p$ with $\mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T) > 0$, we will first prove a kind of uniform Chow stability, in terms of the weighted $L^1$-norm of the test configuration.
To do this, we will describe the weighted $L^1$-norm in terms of the weak geodesic ray associated to the test configuration; see [@Che00; @PS07] and [@Ber16 Section 2.4] for generalities on geodesic rays. Such a description was first found in the unweighted projective setting by Hisamoto [@His16 Theorem 1.2]. **Lemma 47**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, and let $\psi_t$ be the associated weak geodesic ray. Then $$\|(\mathcal{X},\mathcal{A}_T)\|_1^w = \lim_{t\to\infty}\int_X|\dot{\psi}_t|w(\mu_t)\omega_t^n,$$ where $\omega_t := \omega + i\partial\overline{\partial}\psi_t$ and $\mu_t := \mu + d^c\psi_t$ is the associated moment map. Similarly $$(g(\mathcal{A}_T))(\xi) = \lim_{t\to\infty}\int_X\dot{\psi}_t w(\mu_t) \omega_t^n.$$* *Proof.* Note that for any continuous function $f:\mathbb{R}\to\mathbb{R}$, $$\lim_{t\to\infty}\int_Xf(\dot{\psi}_t) w(\mu_t)\omega_t^n = \lim_{\tau\to0}\int_{\mathcal{X}_\tau}f(h_{\Psi}) w(m_{T,\Psi}) \Omega_{\Psi,\tau}^n = \int_{\mathcal{X}_0}f(h_{\Psi}) w(m_{T,\Psi}) \Omega_{\Psi,0}^n,$$ where $\Omega_\Psi = \Omega + i\partial\overline{\partial}\Psi$ is the solution to the geodesic equation of the test configuration, $h_\Psi := h_\lambda + d^c\Psi$ is the hamiltonian for the $\mathbb{C}^*$-action with respect to $\Omega_\Psi$, and $m_{T,\Psi}$ the $T$-moment map with respect to $\Omega_\Psi$. We therefore wish to show that $$\int_{\mathcal{X}_0}f(h_{\Psi}) w(m_{T,\Psi}) \Omega_{\Psi,0}^n = \int_{\mathcal{X}_0}f(h_{\lambda}) w(m_0) \Omega_{0}^n,$$ and this will prove both of the claims. The idea is that these are both "equivariant cohomological" integrals; however, there are three obstructions: the function $f$ may only be continuous rather than smooth, the variety $\mathcal{X}_0$ may not be smooth, and the form $\Omega_\Psi$ may also fail to be smooth. We can deal with these problems in a similar manner to the proof of [@Der18 Theorem 3.14]. First, we assume $\mathcal{X}_0$ and $f$ are smooth.
By Lemma [Lemma 15](#lem:independence){reference-type="ref" reference="lem:independence"}, the integral is independent of the choice of smooth representative of the equivariant cohomology class $[\Omega_0+m_0]$. We may approximate $\Omega_{\Psi,0}$ by a sequence of smooth Kähler metrics $\Omega_{\Psi^\epsilon,0}$ with moment maps $m_T^\epsilon$. Sending $\epsilon\to0$, the integrals $\int_{\mathcal{X}_0}f(h_{\Psi^\epsilon})w(m_{T,\Psi^\epsilon})\Omega_{\Psi^\epsilon, 0}^n$ are independent of $\epsilon$ and converge to the limiting integral $\int_{\mathcal{X}_0}f(h_{\Psi})w(m_{T,\Psi})\Omega_{\Psi, 0}^n$. Still supposing that $\mathcal{X}_0$ is smooth, we can deal with $f$ not being differentiable by simply approximating it by smooth functions, then applying Lemma [Lemma 15](#lem:independence){reference-type="ref" reference="lem:independence"} to the $T\times S^1$-action on $\mathcal{X}_0$. Finally, to deal with $\mathcal{X}_0$ not being smooth, we can do exactly as in [@Der18 Theorem 3.14], namely we find a resolution of singularities $\mathcal{Y}\to \mathcal{X}$ such that the central fibre $\mathcal{Y}_0$ is a simple normal crossings divisor, then compute the integrals on the resolution. Decomposing $\mathcal{Y}_0$ into its smooth components and integrating over these, we see that the integrals are indeed equal. ◻ **Lemma 48**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a $T$-equivariant test configuration for $(X,\alpha_T)$ with $\|(\mathcal{X},\mathcal{A}_T)\|^w_1>0$. Normalise the test configuration so that $$(g(\mathcal{A}_T))(\xi) = 0.$$ There exists a point $p\in X$ (not necessarily $T$-invariant) such that $$\int_C\Omega \leq -\frac{1}{3V}\|(\mathcal{X},\mathcal{A})\|^w_1,$$ where $C$ is the orbit closure of $p$ in $\mathcal{X}$ under the $\mathbb{C}^*$-action, and $V$ is the volume of $(X,\alpha_T)$ with respect to the measure $w(\mu)\omega^n$.* *Proof.* Denote by $\psi_t$ the geodesic associated to the test configuration.
Write $\omega_t := \omega + i\partial\overline{\partial}\psi_t$, which has associated moment map $\mu_t := \mu + d^c \psi_t$. From Lemma [Lemma 47](#lem:integral_limits){reference-type="ref" reference="lem:integral_limits"}, $$\lim_{t\to\infty}\int_X |\dot{\psi}_t| w(\mu_t) \omega_t^n = \|(\mathcal{X},\mathcal{A})\|_1^w,$$ and $$\lim_{t\to\infty} \int_X \dot{\psi}_t w(\mu_t) \omega_t^n = 0,$$ where this last equality follows from the normalisation $(g(\mathcal{A}_T))(\xi) = 0$. Let $\dot{\psi}_t = \dot{\psi}_t^+ + \dot{\psi}_t^-$ be the decomposition of $\dot{\psi}_t$ into its positive and negative parts. Then $$\int_X |\dot{\psi}_t| w(\mu_t) \omega_t^n = \int_X \dot{\psi}_t^+ w(\mu_t) \omega_t^n - \int_X \dot{\psi}_t^- w(\mu_t) \omega_t^n$$ and $$\int_X \dot{\psi}_t w(\mu_t) \omega_t^n = \int_X \dot{\psi}_t^+ w(\mu_t) \omega_t^n + \int_X \dot{\psi}_t^- w(\mu_t) \omega_t^n.$$ Subtracting the first equation from the second and sending $t\to\infty$, it follows that $$\label{eq2} \lim_{t\to\infty}\int_X \dot{\psi}_t^- w(\mu_t) \omega_t^n = -\frac{1}{2}\|(\mathcal{X},\mathcal{A})\|_1^w.$$ Now, note that $w(\mu_t) \omega_t^n$ is a non-negative Borel measure with fixed volume $V := \int_X w(\mu) \omega^n$. If $\dot{\psi}_t^-$ everywhere satisfied $$\dot{\psi}_t^- \geq -\frac{1}{3V}\|(\mathcal{X},\mathcal{A})\|_1^w,$$ then we would have $$\int_X \dot{\psi}_t^- w(\mu_t) \omega_t^n \geq -\frac{1}{3}\|(\mathcal{X},\mathcal{A})\|_1^w,$$ contradicting the above equality [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}. It follows that for each $t$ there exists a point $p_t$ such that $$\dot{\psi}_t(p_t) \leq -\frac{1}{3V}\|(\mathcal{X},\mathcal{A})\|_1^w.$$ We wish to choose $p_t$ independent of $t$. To do this, we use the property of convexity along geodesics, which implies that $\dot{\psi}_t(p)$ is non-decreasing in $t$, for each fixed $p$. 
Suppose that for every $p$ there existed some $T_p$ such that $$\dot{\psi}_t(p) \geq c := -\frac{1}{3V}\|(\mathcal{X},\mathcal{A})\|_1^w$$ for all $t \geq T_p$. Then the functions $\varphi_t := \min(\dot{\psi}_t,c)$ are continuous and increase pointwise to the constant function $c$, hence $\varphi_t \to c$ uniformly. Thus for any $\epsilon > 0$ there exists $T$ such that $\dot{\psi}_t(p) \geq c - \epsilon$ for all $p\in X$ and $t \geq T$, again contradicting equation [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}. It follows that there exists $p\in X$ such that $$\lim_{t\to\infty}\dot{\psi}_t^-(p) \leq - \frac{1}{3V}\|(\mathcal{X},\mathcal{A})\|_1^w,$$ but the left hand side equals $\int_C\Omega$ by the one-dimensional case of [@DR17 Theorem 4.14] (which is also the one-dimensional and unweighted case of Lemma [Lemma 47](#lem:integral_limits){reference-type="ref" reference="lem:integral_limits"}) and we are done. ◻ The point $p$ we constructed in the previous lemma may not be torus invariant; however, a simple intersection-theoretic argument by Dervan implies that there exists such a point which is $T$-invariant; we refer to [@Der18 Proposition 4.14] for the proof.[^5] **Lemma 49**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a $T$-equivariant test configuration for $(X,\alpha_T)$ with $\|(\mathcal{X},\mathcal{A}_T)\|_1^w > 0$, normalised so that $(g(\mathcal{A}_T))(\xi) = 0$.
Then there exists a $T$-invariant point $p\in X$ such that $$\int_C \Omega \leq -\frac{1}{3V}\|(\mathcal{X},\mathcal{A})\|_1^w,$$ where $V$ is the volume of $(X,\alpha_T)$ with respect to the measure $w(\mu)\omega^n$.* We recall that the $T$-orthogonal weighted Chow weight of a $T$-invariant point $p$ is $$\mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T):=\mathrm{Ch}_p^w(\mathcal{X},\mathcal{A}_T)+\sum_{i=1}^r\langle\lambda,\beta_i\rangle_{(\mathcal{X},\mathcal{A}_T)}(h_{\beta_i}(p)-\hat{h}_{\beta_i}).$$ Here $\lambda$ is the $\mathbb{C}^*$-action of the test configuration, the $\beta_i$ form an orthonormal basis for the Lie algebra $\mathfrak{t}$ of $T$, and $h_{\beta_i}$ is a hamiltonian function on $X$ for $\beta_i$ with average $\hat{h}_{\beta_i}$. **Lemma 50**. *Let $(\mathcal{X},\mathcal{A}_T)$ be a test configuration for $(X,\alpha_T)$, and let $\beta:\mathbb{C}^*\to T^{\mathbb{C}}$ be a one-parameter subgroup. Denote by $(\mathcal{X},\mathcal{A}_T,\beta)$ the twist of the original test configuration by $\beta$. Then given a $T$-invariant point $p\in X$, the weighted Chow weight of $p$ changes by $$\mathrm{Ch}_p^w(\mathcal{X},\mathcal{A}_T,\beta) = \mathrm{Ch}_p^w(\mathcal{X},\mathcal{A}_T) - h_\beta(p) + \hat{h}_\beta.$$* *Proof.* This is an easy consequence of a localisation formula already computed in Lemma [Lemma 24](#lem:integral_twist){reference-type="ref" reference="lem:integral_twist"}. In particular, we observed in that proof that, for $\mathcal{A}_T'$ the Kähler class of the twisted test configuration, $$(g(\mathcal{A}_T'))(\xi) = (g(\mathcal{A}_T))(\xi) + \frac{1}{n!}\int_X h_\beta w(\mu)\omega^n.$$ Hence $$\frac{(g(\mathcal{A}_T'))(\xi)}{(g'(\alpha_T))(\xi)} = \frac{(g(\mathcal{A}_T))(\xi)}{(g'(\alpha_T))(\xi)} + \hat{h}_\beta.$$ Using this exact same formula in the one-dimensional case, we observe that $$\int_{C'}\Omega' = \int_C\Omega + h_\beta(p). \qedhere$$ ◻ **Remark 51**. It follows easily from this result that $\mathrm{Ch}^T_p(\mathcal{X},\mathcal{A}_T)$ is twist invariant.
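Granting Lemma 50, the twist invariance claimed in Remark 51 is a finite-dimensional computation: twisting by $\beta$ shifts $\mathrm{Ch}_p^w$ by $-(h_\beta(p)-\hat{h}_\beta)$, the generator of the test configuration becomes $\lambda+\beta$, and linearity of $\beta\mapsto h_\beta$ together with orthonormality of the $\beta_i$ makes the two changes cancel in $\mathrm{Ch}_p^T$. The following is a minimal numerical sketch of this cancellation, with random hypothetical data standing in for the geometric quantities; it is not part of the proof, and it assumes that twisting by $\beta$ replaces $\lambda$ by $\lambda+\beta$.

```python
import random

random.seed(0)
r = 4
# Hypothetical data: lam[i] = <lambda, beta_i>, grad[i] = h_{beta_i}(p) - hhat_{beta_i},
# chw = Ch^w_p(X, A_T); all with respect to an orthonormal basis beta_1, ..., beta_r of t.
lam = [random.uniform(-1, 1) for _ in range(r)]
grad = [random.uniform(-1, 1) for _ in range(r)]
chw = random.uniform(-1, 1)

def chT(ch_w, lam_coords):
    # Definition 43: Ch^T_p = Ch^w_p + sum_i <lambda, beta_i>(h_{beta_i}(p) - hhat_{beta_i})
    return ch_w + sum(l * g for l, g in zip(lam_coords, grad))

# Twist by beta = sum_i beta[i] * beta_i: by Lemma 50, Ch^w shifts by -(h_beta(p) - hhat_beta);
# by linearity of beta -> h_beta, that shift is -sum_i beta[i] * grad[i].
beta = [random.uniform(-1, 1) for _ in range(r)]
h_beta = sum(b * g for b, g in zip(beta, grad))
chw_twisted = chw - h_beta
lam_twisted = [l + b for l, b in zip(lam, beta)]  # assumed: generator lambda -> lambda + beta

assert abs(chT(chw_twisted, lam_twisted) - chT(chw, lam)) < 1e-12
```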
Using all of this, we can now prove the existence of a $T$-invariant point $p$ with $\mathrm{Ch}_p^T(\mathcal{X},\mathcal{A}_T) > 0$ whenever $\|(\mathcal{X},\mathcal{A}_T)^\perp\|_1^w > 0$. *Proof of Proposition [Proposition 45](#prop:Chow){reference-type="ref" reference="prop:Chow"}.* Let $$\beta := \sum_{i=1}^r \langle\lambda, \beta_i\rangle_{(\mathcal{X},\mathcal{A}_T)} \beta_i$$ be the orthogonal projection of $\lambda$ to $\mathfrak{t}$. Choose a sequence of rational points $\beta_n$ in the Lie algebra tending towards $\beta$. For each $n$ we twist the test configuration by $-\beta_n$, and by Lemma [Lemma 49](#lem:T-invariant_point){reference-type="ref" reference="lem:T-invariant_point"} there exists a $T$-invariant point $p_n$ such that $$\mathrm{Ch}_{p_n}^w(\mathcal{X},\mathcal{A}_T,-\beta_n) \geq \frac{1}{3V}\|(\mathcal{X},\mathcal{A}_T,-\beta_n)\|_1^w.$$ The right hand side tends towards $$\frac{1}{3V}\|(\mathcal{X},\mathcal{A}_T)^\perp\|_1^w$$ as $n\to\infty$. On the other hand, by Lemma [Lemma 50](#lem:twisted_Chow_weight){reference-type="ref" reference="lem:twisted_Chow_weight"} the left hand side satisfies $$\begin{aligned} \mathrm{Ch}_{p_n}^w(\mathcal{X},\mathcal{A}_T,-\beta_n) - \mathrm{Ch}_{p_n}^T(\mathcal{X},\mathcal{A}_T) &= h_{\beta_n}(p_n) - \hat{h}_{\beta_n} - \sum_{i=1}^r\langle\lambda,\beta_i\rangle_{(\mathcal{X},\mathcal{A}_T)}( h_{\beta_i}(p_n) - \hat{h}_{\beta_i}) \\ &= h_{\beta_n}(p_n) - \hat{h}_{\beta_n} - (h_{\beta}(p_n) - \hat{h}_{\beta}) \\ &\to 0 \end{aligned}$$ as $n\to\infty$ since $\beta_n \to \beta$ as $n\to\infty$. It follows that for $n$ sufficiently large, $$\mathrm{Ch}_{p_n}^T(\mathcal{X},\mathcal{A}_T)\geq \frac{1}{6V}\|(\mathcal{X},\mathcal{A}_T)^\perp\|_1^w > 0,$$ and we are done. ◻
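The expansions in Propositions 38 and 39 repeatedly rely on the coefficient-shift identity $g^{(\ell)}(x)=\sum_{k\geq0}\frac{b_{k+\ell}}{k!}x^k$ for $g(x)=\sum_k\frac{b_k}{k!}x^k$, and on the resulting formal Taylor expansion $g'(x+h)=\sum_{j\geq0}\frac{h^j}{j!}g^{(j+1)}(x)$ obtained by resumming the binomial double sum. As a sanity check, not part of the argument, one can verify this identity numerically on an arbitrary truncated series (the coefficients below are arbitrary, and the identity is exact for a polynomial).

```python
from math import factorial

def eval_series(coeffs, x):
    # evaluate g(x) = sum_k coeffs[k] * x**k / k!
    return sum(c * x**k / factorial(k) for k, c in enumerate(coeffs))

def deriv(coeffs, l):
    # coefficients of g^{(l)}: the shift b_k -> b_{k+l}
    return coeffs[l:]

b = [3.0, -1.0, 4.0, 1.5, -9.0, 2.6, 5.0]  # arbitrary truncated series
x0, h = 0.7, 0.2

lhs = eval_series(deriv(b, 1), x0 + h)                     # g'(x0 + h)
rhs = sum(h**j / factorial(j) * eval_series(deriv(b, j + 1), x0)
          for j in range(len(b)))                          # formal Taylor expansion of g'
assert abs(lhs - rhs) < 1e-10
```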
[^3]: *The assumption that $\mathcal{Y}_0$ is a simple normal crossings divisor is not needed for this particular expansion, however it will be needed for later results.* [^4]: I thank Ruadhaí Dervan for pointing this out to me. [^5]: The proof of Proposition 4.14 has an error, although the alternative argument given after the proof is correct.
---
author:
- Theodora Bourni and Benjamin Richards
bibliography:
- ACSFbib.bib
title: Convex Ancient Solutions to Anisotropic Curve Shortening Flow
---

# Abstract {#abstract .unnumbered}

We construct a translating solution to anisotropic curve shortening flow and show that for a given anisotropic factor $g:S^1\to\mathbb{R}_+$, and a given direction and speed, this translator is unique. We then construct an ancient compact solution to anisotropic curve shortening flow, and show that this solution and the appropriate translating solutions are the only solutions to anisotropic curve shortening flow that lie in a slab of a given width and in no smaller slab.\

# Introduction

In what follows, $M^1$ will denote a 1-dimensional manifold, generally either $\mathbb{R}$ or $S^1$, and $I$ will be some interval of the real line, possibly infinite. We say that a family of curves $X(u,t):M^1\times I\to\mathbb{R}^2$ is a solution to Anisotropic Curve Shortening Flow (ACSF), with respect to the factor $g$, if\ $$\dpd{X}{t}(u,t)=-g(\mathsf{N})\kappa(u,t) \mathsf{N}(u,t),\text{ for all }(u,t)\in M^1\times I,$$\ where $g$ is some smooth, positive function defined on $S^1$, $\mathsf{N}(u,t)$ is a choice of normal vector, and $\kappa(u,t)$ is the curvature with respect to this normal. We will require that our curves be embedded and strictly convex, i.e., that $\kappa>0$. We will choose $\mathsf{N}$ to be pointing towards the non-convex region of the plane. Our sign convention is such that a circle has positive curvature with respect to the outward pointing unit normal.\ ACSF is a generalization of Curve Shortening Flow (CSF), introduced by Taylor [@taylor] and Angenent-Gurtin [@angenentgurtin] as a physical model for certain crystal interfaces.
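For intuition, in the isotropic case $g\equiv 1$ ACSF reduces to CSF, under which a round circle of radius $r_0$ stays round and shrinks according to $r'=-1/r$, so that $r(t)=\sqrt{r_0^2-2t}$. A minimal numerical sketch of this special case (our own illustration, not part of the paper):

```python
import math

def shrink_circle(r0: float, t_end: float, steps: int = 100_000) -> float:
    """Euler-integrate r' = -g*kappa = -1/r, the radius ODE for a round
    circle under the flow in the isotropic case g == 1."""
    dt = t_end / steps
    r = r0
    for _ in range(steps):
        r += dt * (-1.0 / r)
    return r

r0, t = 2.0, 1.0
numeric = shrink_circle(r0, t)
exact = math.sqrt(r0 * r0 - 2.0 * t)  # closed-form radius sqrt(r0^2 - 2t)
print(numeric, exact)
```

For a genuinely anisotropic $g$ a circle does not remain a circle, which is what makes the translating and ancient solutions constructed below nontrivial.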
Gage [@gage] studied the case where $g$ is $\pi$-periodic as a way to study regular curve shortening flow on the Euclidean plane equipped with a Minkowski norm, and proved that there is a unique self-similar solution to ACSF when $g$ exhibits this particular symmetry. While it has been proved that self-similar solutions exist without this symmetry [@GageLi], it remains an open question whether these solutions are unique.\ We say that $X(u,t)$ is an *ancient solution* if $I=(-\infty,a)$ for some $a\in\mathbb{R}$, and we say that it is an *eternal solution* if $I=\mathbb{R}$. The study of ancient solutions for CSF arose from the investigation of singularity formation, as after normalization, the limiting shape of a curve approaching a singularity is that of an ancient solution. Thus, the classification of all ancient solutions is a useful tool in the study of the behavior of a flow. In the case of CSF, compact, convex ancient solutions were classified by Daskalopoulos-Hamilton-Sesum [@DHS], and this classification was extended to all convex curves by Bourni-Langford-Tinaglia [@BLT]. Classification of convex ancient solutions for curves solving a flow based on the curvature raised to certain powers was done by Bourni et al. [@betal].\ In this paper we will adapt and extend some of the methods used in these previous works in order to construct convex ancient solutions to ACSF that lie within a slab of a given width. We then show that the solutions constructed here are the only such ancient solutions to ACSF. This makes up the content of our main result, Theorem 5.3.\ **Theorem [Theorem 12](#thm){reference-type="ref" reference="thm"} 1**. *Let $g:S^1\to\mathbb{R}_+$ be a smooth and strictly positive function, $v\in S^1$, and $w\in\mathbb{R}_+$. There exists a unique, up to translation, compact ancient solution to ACSF with respect to $g$ that lies within a slab parallel to $v$ of width $w$ and in no smaller slab.
There exist two, up to translation, translating solutions to ACSF with respect to $g$ that lie within a slab parallel to $v$ of width $w$ and in no smaller slab, one that travels in the $v$ direction and one that travels in the $-v$ direction.*

# Acknowledgements

We would like to thank Mat Langford for many valuable discussions on the topic. Both authors were supported by grant DMS-2105026 of the National Science Foundation.

# Preliminaries

In this section we fix some notation and calculate some useful evolution equations.\ We will let $\theta=\theta(u,t)$ be the tangent-angle, that is, the angle that the tangent to $X(M\times I)$ at $X(u,t)$ makes with the $x$-axis. We will denote this tangent by $\mathsf{T}(u,t)$, and we parametrize our curves counterclockwise. Thus we have that\ $$\mathsf{T}=(\cos \theta,\sin \theta)$$\ and\ $$\mathsf{N}=(\sin \theta,-\cos \theta).$$ We will usually use $\theta$ as the argument for $g$, and we will often abuse notation and fail to write the arguments at all. We will use $u$ for an arbitrary parametrization, and reserve the use of $s$ for arc-length parametrization.\ We have the Frenet-Serret formulas $$\begin{aligned} \dpd{\mathsf{T}}{s}=&-\kappa\mathsf{N}\\ \dpd{\mathsf{N}}{s}=&\kappa\mathsf{T}.\end{aligned}$$ In general, the arc-length parametrization $s$ will depend on time $t$. Thus, given a function $f$ defined on our family of curves, we have the following commutator formula\ $$\md{f}{2}{s}{ }{t}{ }=\md{f}{2}{t}{ }{s}{ }+g\kappa^2\dpd{f}{s}.$$\ Moving forward, we will denote partial derivatives using subscripts, unless doing so would be made particularly annoying by the existence of other indices.\ **Proposition 1**.
*We have the following evolution equations for a family of curves that satisfy ACSF.* *$$\begin{aligned} \mathsf{T}_t=&-(g\kappa)_s\mathsf{N}\\ \mathsf{N}_t=&(g\kappa)_s\mathsf{T}\\ \theta_t=&(g\kappa)_s\\ \kappa_t=&(g\kappa)_{ss}+g\kappa^3.\end{aligned}$$* *When our curves are strictly convex, we may parametrize our curves with respect to $\theta$. When we do so, we will let $t=\tau$, and take our partials with $\theta$ fixed instead of $u$ or $s$. With this parametrization, we have $$\kappa_{\tau}=\kappa^2((g\kappa)_{\theta\theta}+g\kappa).$$* *If our curve is compact, and the area contained within the curve is denoted by $A(t)$, we have\ $$A_t=-\int_{S^1}g(\theta)\dif \theta.\label{(0.1)}$$* *Proof.* The first four evolution equations follow directly from the commutator formula and the Frenet-Serret formulas. The fifth follows by the chain rule, for we have\ $$\kappa_t=\kappa_{\tau}+\kappa_{\theta}\theta_t=\kappa_{\tau}+\kappa\kappa_{\theta}(g\kappa)_s=\kappa_{\tau}+\kappa\kappa_{\theta}(g\kappa)_{\theta},$$\ while\ $$(g\kappa)_{ss}=\kappa(\kappa(g\kappa)_{\theta})_{\theta}=\kappa\kappa_{\theta}(g\kappa)_{\theta}+\kappa^2(g\kappa)_{\theta\theta}.$$\ Substitution of these two into the expression for $\kappa_t$ above gives us our claim.\ The final evolution equation is given by direct computation and Green's formula for the area inside of a closed curve. ◻ We also have a Harnack type inequality for curves undergoing ACSF. **Proposition 2**. *For a solution to ACSF defined on $[\alpha,T)$, for all $\tau\in (\alpha,T)$ we have* 1. *$\kappa((g\kappa)_{\theta\theta}+g\kappa)+\frac{1}{2(\tau-\alpha)}\geq 0$\ * 2. *$\kappa\sqrt{\tau-\alpha}$ is increasing with respect to $\tau$.* *Proof.* The proof of (i) follows the one in the textbook by Andrews et al. [@EGF].
Defining a function\ $$Q=\frac{(g\kappa)_\tau}{g\kappa}=\frac{\kappa_\tau}{\kappa}=\kappa((g\kappa)_{\theta\theta}+g\kappa),$$\ we have that its evolution is given by\ $$Q_\tau=\kappa^2Q_{\theta\theta}+2\kappa\kappa_\theta Q_\theta+2Q^2.$$\ The result then follows by an ODE comparison principle with the solution $q(\tau)$ of $Q_\tau=2Q^2$:\ $$q(\tau)=-\frac{1}{2(\tau-\alpha)}.$$\ For the proof of (ii), note that (i) and the evolution of $\kappa$ given in Proposition 1.1 imply that\ $$(\log\kappa)_\tau+(\log\sqrt{\tau-\alpha})_\tau\geq 0.$$\ Since the logarithm is an increasing function, it then follows that $\kappa\sqrt{\tau-\alpha}$ must also be increasing with respect to $\tau$. ◻ As a useful corollary, we have that with a strictly convex ancient solution to ACSF, the curvature as a function of the tangent angle is nondecreasing with respect to time. **Corollary 3**. *If $X:M\times I\to\mathbb{R}^2$ is a strictly convex ancient solution to ACSF, then $\kappa_\tau\geq 0$ at all points of the solution.* *Proof.* We have that (i) from Proposition 1.2 is true for all $\alpha$ for which the solution is defined on $[\alpha,T)$. Since the solution is ancient, the solution is defined for all $\alpha\in (-\infty,T)$, and by taking $\alpha\to-\infty$ we have that\ $$Q=\frac{\kappa_\tau}{\kappa}\geq 0.$$ ◻

# Translators

We now look at the existence of translators under this flow. Suppose we have a unit vector $v\in S^1$, and we take $\psi$ to be the angle $v$ makes with the $x$-axis, i.e., $v=(\cos \psi,\sin \psi)$. If we had a family of curves that translated in the direction $v$ under our flow, we could write\ $$X_t=(\cos \psi,\sin\psi)=-g\kappa\mathsf{N}+\Phi\mathsf{T}$$\ where the tangential term is a result of reparametrizing.
Then we have $$\begin{aligned} -g\kappa=&\langle X_t,\mathsf{N}\rangle\\=&\langle (\cos \psi,\sin\psi),(\sin\theta,-\cos\theta)\rangle\\=&\cos\psi\sin\theta-\sin\psi\cos\theta\\=&\sin(\theta-\psi).\end{aligned}$$ Solving for $\kappa$ gives us\ $$\kappa(\theta)=\frac{\sin(\psi-\theta)}{g(\theta)}.$$ To find an initial curve for this translating solution, we write $$\begin{aligned} x(\theta)=&x\left(\psi-\frac{\pi}{2}\right)+\int_{\psi-\frac{\pi}{2}}^{\theta}\frac{\cos (u)}{\kappa}\dif u\nonumber\\ =&x\left(\psi-\frac{\pi}{2}\right)+\int_{\psi-\frac{\pi}{2}}^{\theta}\frac{\cos (u)g(u)}{\sin (\psi-u)}\dif u\label{(2.1)},\end{aligned}$$ and $$\begin{aligned} y(\theta)&=y\left(\psi-\frac{\pi}{2}\right)+\int_{\psi-\frac{\pi}{2}}^{\theta}\frac{\sin (u)}{\kappa}\dif u\nonumber\\ &=y\left(\psi-\frac{\pi}{2}\right)+\int_{\psi-\frac{\pi}{2}}^{\theta}\frac{\sin (u)g(u)}{\sin (\psi-u)}\dif u\label{(2.2)}.\end{aligned}$$ For convenience, we will write $x_0=x\left(\psi-\frac{\pi}{2}\right)$ and $y_0=y\left(\psi-\frac{\pi}{2}\right)$. We note that these curves are asymptotic to straight lines in the $v$ direction as $\theta\to\psi$ or $\theta\to\psi-\pi$. To see this, we take inner products and have $$\begin{aligned} \langle (x(\theta),y(\theta)),v\rangle=&\langle(x(\theta),y(\theta)),(\cos \psi,\sin\psi)\rangle\\ =&x_0\cos\psi+y_0\sin\psi+\int_{\psi-\frac{\pi}{2}}^{\theta}\frac{\left(\cos (u)\cos(\psi)+\sin(u)\sin(\psi)\right)g(u)}{\sin(\psi-u)}\dif u\\ =&x_0\cos\psi+y_0\sin\psi+\int_{\psi-\frac{\pi}{2}}^\theta\cot (\psi-u)g(u)\dif u\\ \geq&x_0\cos\psi+y_0\sin\psi-(\min g)\log (\sin(\psi-\theta)).\end{aligned}$$ Then note that as $\theta\uparrow\psi$ or $\theta\downarrow(\psi-\pi)$, we have that $\log (\sin(\psi-\theta))\downarrow-\infty$.\ We can also use (2) and (3) to show that the translator lives in a slab of a given width, which we will denote $w_g^{v}$. 
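As a quick numerical sanity check of the curvature formula and the integral parametrisation above (our own illustration, not part of the paper): for $g\equiv 1$ and $\psi=\frac{\pi}{2}$, the integrand of $x$ reduces to $\cos(u)/\cos(u)=1$ and that of $y$ to $\tan(u)$, so with $x_0=y_0=0$ the curve should satisfy $x(\theta)=\theta$ and $y(\theta)=-\log\cos\theta$.

```python
import math

def trapezoid(f, a, b, n=2000):
    """Simple composite trapezoid rule (stdlib only)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

# Translator data for psi = pi/2 and the sample (isotropic) factor g == 1.
psi = math.pi / 2
g = lambda th: 1.0
kappa = lambda th: math.sin(psi - th) / g(th)  # curvature of the translator

# x(theta), y(theta) from the integral parametrisation, with x_0 = y_0 = 0.
x = lambda th: trapezoid(lambda u: math.cos(u) * g(u) / math.sin(psi - u), 0.0, th)
y = lambda th: trapezoid(lambda u: math.sin(u) * g(u) / math.sin(psi - u), 0.0, th)

th = 1.0  # any angle in (-pi/2, pi/2)
print(x(th), th)                       # x(theta) = theta when g == 1
print(y(th), -math.log(math.cos(th)))  # y = -log cos x, the grim reaper
```

The same routine can be pointed at any smooth positive $g$; only the closed-form comparison is special to $g\equiv 1$.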
Taking $v^{\perp}=(\sin \psi,-\cos\psi)$, we have that $$\begin{aligned} \langle (x(\theta),y(\theta)),v^{\perp}\rangle=&\langle(x(\theta),y(\theta)),(\sin \psi,-\cos\psi)\rangle\\ =&x_0\sin\psi- y_0\cos\psi+\int_{\psi-\frac{\pi}{2}}^{\theta}\frac{\left(\cos (u)\sin(\psi)-\sin(u)\cos(\psi)\right)g(u)}{\sin(\psi-u)}\dif u\\ =&x_0\sin\psi-y_0\cos\psi+\int_{\psi-\frac{\pi}{2}}^\theta g(u)\dif u.\\\end{aligned}$$ Performing similar calculations on $-v^{\perp}$ and combining, we find that $$w_g^{\psi}=\int_{\psi-\pi}^{\psi}g(u)\dif u.\label{2.3}$$ Note that if $v=e_2$, we have $\psi=\frac{\pi}{2}$. (1) and (2) then simplify to\ $$x(\theta)=x(0)+\int_{0}^{\theta} g(u)\dif u\hspace{1in}y(\theta)=y(0)+\int_{0}^{\theta}\tan (u)g(u)\dif u.$$\ In the case where $v=e_2$, we will denote $w_g^{e_2}$ by $w_g$. Note that for $g\equiv 1$, and $x(0)=y(0)=0$, our construction recovers the famous grim reaper solution for CSF.\ **Proposition 4**. *Given a smooth, strictly positive function $g:S^1\to\mathbb{R}$, and vector $v\in S^1$, there exists a translating solution to ACSF that travels in the direction of $v$ with speed $1$. Furthermore, up to translations in $\mathbb{R}^2$ and time, this translator is unique.* *Proof.* If we let $\Gamma_{g,v}$ be the initial curve defined by (2) and (3) with $v=(\cos \psi,\sin \psi)$, the family of curves given by\ $$X(\cdot,t)=\Gamma_{g,v}+tv$$\ is a translating solution to ACSF, and any translating solution must be of this form, up to the addition of a constant multiple of $v$ and choice of initial points on our curve.
◻ In what follows we will mainly concern ourselves with translators in the $e_2$ and $-e_2$ directions, though by an orthogonal change of coordinates, the results will follow for translators in the $v$ and $-v$ directions as well.\ Given a translator moving in the $e_2$ direction at speed $1$, we have by (4) that $$w_g=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}g(u)\dif u\label{(2.4)}.$$ Given a vertical slab of arbitrary width $w$, we can find a translator moving in the $e_2$ direction of width $w$ by having it move in the $e_2$ direction at speed $\frac{w_g}{w}$. This gives us the following. **Corollary 5**. *Given the hypotheses of the previous proposition, and a slab of width $w$ parallel to a vector $v\in S^1$, there exists a unique translating solution to ACSF that lies in that slab and no smaller slab. Furthermore, this translator travels with speed $\frac{w^v_g}{w}$.*

# Compact Solutions

This section is dedicated to proving the following theorem.\ **Theorem 6**. *Given a smooth, strictly positive function $g:S^1\to\mathbb{R}$, and a slab of width $w$ parallel to a vector $v\in S^1$, there exists a compact ancient solution to ACSF with respect to $g$ that lies in the slab and no smaller slab.* By the reasoning in previous sections, it suffices to prove this theorem for $v=e_2$ and $w=w_g$, with $w_g$ as in (4).\ Given a smooth function $g:S^1\to\mathbb{R}^{>0}$, and using the construction developed in section 2, we will denote by $G^+$ the $t=0$ timeslice of the translator that moves at speed 1 in the $e_2$ direction as in Proposition 2.1, with $y(0)=0$ and with $x(0)$ chosen so that the curve is 'centered' about the $y$-axis, i.e., $x(0)$ will be such that $$-\left(x(0)+\int_{0}^{\frac{-\pi}{2}}g(u)\dif u\right)=x(0)+\int_{0}^{\frac{\pi}{2}}g(u)\dif u.\label{3.1}$$ We will denote by $\{G^+_t\}_{t\in(-\infty,\infty)}$ the translating solution to ACSF that moves at speed $1$ in the $e_2$ direction such that $G^+_0=G^+$, and as above, we denote by $w_g$ the width of this
translator.\ The idea of constructing an ancient solution is as follows. We construct an appropriate sequence of "old-but-not-ancient" solutions (these are flows that live in longer and longer time intervals) and show that one can extract a limit. In [@betal] the initial curves of the sequence were constructed by considering timeslices of the translating solutions further and further in the past and reflecting these across the $x$-axis to create a compact, convex curve. As our flow depends heavily on the direction of the normal vector, this method will not work for us, and we must adapt our procedure in various ways to construct the sequence of flows. After constructing these so-called 'old-but-not-ancient' solutions, we wish to take a limit of the corresponding flows. We will show that this limit does, in fact, exist, and that it is an ancient solution to ACSF. To construct the initial curves of the "old-but-not-ancient" solutions, instead of reflecting our translator we will join together translators moving in the $e_2$ direction with those moving in the $-e_2$ direction. One issue that immediately arises is that those translators may lie in slabs of different width. In order to resolve this issue, we require that our translator in the $-e_2$ direction moves at a speed $\sigma$ with $\sigma$ chosen so that it lives in a slab of width $w_g$, and thus the curves match up appropriately. Note that in the construction in section 2 this corresponds to $v=\sigma(\cos \frac{3\pi}{2},\sin\frac{3\pi}{2})$, and so by (1) and (2), we have the following expression for the initial curve of the translator that moves in the $-e_2$ direction:\ $$x(\theta)=x(\pi)-\frac{1}{\sigma}\int_{\pi}^{\theta}g(u)\dif u\hspace{0.1in}y(\theta)=y(\pi)-\frac{1}{\sigma}\int_{\pi}^{\theta}g(u)\tan(u)\dif u.$$\ We then have that the width of this translator is $\frac{1}{\sigma}\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}g(u)\dif u$, and we can solve for $\sigma$ to determine the appropriate speed.
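This width-matching condition for $\sigma$ can be checked numerically; the sample anisotropy $g$ below is an arbitrary illustrative choice of ours, not from the paper:

```python
import math

def trapezoid(f, a, b, n=4000):
    """Simple composite trapezoid rule (stdlib only)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

# Sample smooth, strictly positive anisotropy factor.
g = lambda th: 1.0 + 0.3 * math.cos(th)

w_plus = trapezoid(g, -math.pi / 2, math.pi / 2)          # w_g, width of the upward translator
w_minus_unscaled = trapezoid(g, math.pi / 2, 3 * math.pi / 2)
sigma = w_minus_unscaled / w_plus                          # speed making the two widths agree

# At speed sigma the downward translator has width (1/sigma) * int g = w_g.
print(w_plus, w_minus_unscaled / sigma, sigma)
```

For this $g$ the two half-integrals are $\pi+0.6$ and $\pi-0.6$ in closed form, so $\sigma=(\pi-0.6)/(\pi+0.6)<1$: the downward translator must move more slowly to occupy the same slab.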
In particular, we pick $\sigma$ so that\ $$\frac{1}{\sigma}\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}g(u)\dif u=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}g(u)\dif u.$$\ Similar to the above, we will denote by $G^-$ the initial curve for the translator that moves at speed $\sigma$ in the $-e_2$ direction with $y(0)=0$ and $x(0)$ chosen so that the curve is centered as in (6), and we have $\{G^{-}_t\}_{t\in(-\infty,\infty)}$ as the corresponding translating solution to ACSF in the $-e_2$ direction with speed $\sigma$ and with $G^{-}_0=G^{-}$.\ Now, for $R>0$ we wish to construct a compact curve by taking some combination of $G^+_{-R}$ and $G^{-}_{-R}$. Note, however, that even though we have constructed the translators to have the same width, we have no guarantee that they will intersect the $x$-axis at the same points for any given $R$ (and, indeed, they almost certainly won't). So we cannot just take the intersections of our timeslice curves with the respective half-planes. Instead, we will take the union of both curves, and discard the pieces of the curves after their two intersections. We will denote the resulting compact curve by $G^R$. So $G^R$ is the boundary of the compact convex region bounded by $G_{-R}^{\pm}$.\ We further note that while the resulting curve is not smooth, as a consequence of a theorem by Andrews [@andrews], there exists a smooth solution to ACSF whose initial curve is $G^R$. We will translate in time so that for every $R$ the flow becomes extinct at $t=0$, and will denote these flows by $$\{G^R_t\}_{t\in(t_R,0)},\label{3.2}$$ with\ $$\lim_{t\downarrow t_R} G^R_t=G^R,$$\ where the limit is in the $C^{0}$-topology. **Proposition 7**. *Let $A_R(0)$ be the area bounded by the curve $G^R$. Then there exists a constant $C$, depending only on $g$ and $\sigma$, such that $w_g(R+\sigma R)-C\leq A_R(0)\leq w_g(R+\sigma R)$.* *Proof.* Let $P$ be the rectangle $\{\envert{x}\leq \frac{w_g}{2}\}\times \{-R\leq y \leq \sigma R\}$.
Note that we have $G^R\subset P$ and thus $A_R(0)\leq w_g(R+\sigma R)$.\ To obtain the lower bound, we will estimate the areas between the two translators and the vertical edges of $P$. For the area between the curve $G^+_{-R}$ and the rectangle, we note that this area is smaller than the area between $G^+$ and the two lines defined by $x=\pm\frac{w_g}{2}$ in the halfplane $\{y\geq 0\}$. This is given by\ $$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}g(\theta)\int_{0}^{\theta}g(u)\tan (u)\dif u\dif \theta$$\ and we have $$\begin{aligned} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}g(\theta)\int_{0}^{\theta}g(u)\tan (u)\dif u\dif \theta&\leq\enVert{g}_{L^{\infty}\left(S^1\right)}^2\int_{\frac{-\pi}{2}}^{\frac{\pi}{2}}\int_{0}^{\theta}\tan (u)\dif u\dif\theta\\ &=\enVert{g}^2_{L^{\infty}\left(S^1\right)}\pi\log 2.\end{aligned}$$ A similar calculation for the area between $G^-_{-R}$ and the rectangle gives us an upper bound of $\sigma^{-2}\enVert{g}_{L^{\infty}\left(S^1\right)}^2\pi\log 2$, and this gives us our desired lower bound for $A_R(0)$ with $C=\enVert{g}_{L^{\infty}\left(S^1\right)}^2(1+\sigma^{-2})\pi\log 2.$ ◻ We define the horizontal reach $h_R$ of the curve $G^R$ to be the distance between the two vertical supporting lines of the curve. That is, $h_R$ is such that $G^R$ is contained in a vertical slab of width $h_R$ but in no thinner vertical slab. **Proposition 8**. *There exists a constant $C$, dependent on $g$ and $\sigma$, such that for all sufficiently large $R>0$ we have\ $$h_R\geq w_g-2C\arcsin \left(\frac{C}{R}\right).$$* *Proof.* What we will find are inner bounds for the intersection points of $G^{+}_{-R}$ and $G^{-}_{-R}$ with the $x$-axis. Note that even though there is no reason to believe that the intersection points are symmetric about the $y$-axis, our inner bounds will be.
Thus, the bounds for $G^{+}_{-R}$ and $G^{-}_{-R}$ will necessarily be nested (one set of bounds will lie within the other set), and the innermost set will therefore serve as a bound for all four intersection points. The supporting hyperplanes are certainly at least as far from the $y$-axis as the corresponding intersection point of at least one of $G^{+}_{-R}$ and $G^{-}_{-R}$ (and exactly one unless the two curves meet at the $x$-axis), so our result will follow.\ Let $\theta_R^{-}<\theta_R^+$ be the tangent angles for the points at which $G_{-R}^+$ intersects the $x$-axis. Then note that we have\ $$R=\int_0^{\theta_R^+}g(u)\tan (u)\dif u\leq\enVert{g}_{L^{\infty}\left(S^1\right)}\ln\left(\sec \left(\theta_R^+\right)\right)\leq \enVert{g}_{L^{\infty}\left(S^1\right)}\sec\left(\theta_R^+\right).$$\ Hence $\sec \left(\theta_R^+\right)\geq\frac{R}{\enVert{g}_{L^{\infty}\left(S^1\right)}}$. A similar calculation gives the same bound for $\sec \left(\theta_R^{-}\right)$. Then note that $$\begin{aligned} \int_{\theta_R^-}^{\theta_R^+}g(u)\dif u&=w_g-\int_{-\frac{\pi}{2}}^{\theta_R^{-}}g(u)\dif u-\int_{\theta_R^+}^{\frac{\pi}{2}}g(u)\dif u\\ &\geq w_g-\enVert{g}_{L^{\infty}\left(S^1\right)}\left(\theta_R^- +\frac{\pi}{2}\right)-\enVert{g}_{L^{\infty}\left(S^1\right)}\left(\frac{\pi}{2}-\theta_R^+\right)\\ &=w_g-\enVert{g}_{L^{\infty}\left(S^1\right)} \mathop{\mathrm{arccsc}}\left(\sec\left(\theta_R^-\right)\right)-\enVert{g}_{L^{\infty}\left(S^1\right)}\mathop{\mathrm{arccsc}}\left(\sec\left(\theta_R^+\right)\right)\\ &\geq w_g-\enVert{g}_{L^{\infty}\left(S^1\right)}\mathop{\mathrm{arccsc}}\left(\frac{R}{\enVert{g}_{L^{\infty}\left(S^1\right)}}\right)-\enVert{g}_{L^{\infty}\left(S^1\right)}\mathop{\mathrm{arccsc}}\left(\frac{R}{\enVert{g}_{L^{\infty}\left(S^1\right)}}\right)\\ &=w_g-2\enVert{g}_{L^{\infty}\left(S^1\right)}\arcsin \left(\frac{\enVert{g}_{L^{\infty}\left(S^1\right)}}{R}\right).\end{aligned}$$ If we define $\psi_R^-$ and $\psi_R^+$ to be the tangent angles for the
intersection points for $G_{-R}^-$, a similar process gives us\ $$\int_{\psi_R^-}^{\psi_R^+}g(u)\dif u\geq w_g-2\frac{\enVert{g}_{L^{\infty}\left(S^1\right)}}{\sigma}\arcsin\left(\frac{\enVert{g}_{L^{\infty}\left(S^1\right)}}{\sigma R}\right).$$\ The claim then follows, taking $C$ to be the larger of $\enVert{g}_{L^{\infty}\left(S^1\right)}$ and $\frac{\enVert{g}_{L^{\infty}\left(S^1\right)}}{\sigma}$, depending on the value of $\sigma$. ◻ Similar to our definition of $h_R$, we call $\ell_R$ the vertical reach of the curve $G^R$ and define it to be the distance between the horizontal supporting lines of the curve. We note that by construction we have $\ell_R=(1+\sigma)R$.\ We call $A_R(t)$, $h_R(t)$, $\ell_R(t)$ the enclosed area, horizontal reach, and vertical reach with respect to time of the flow defined in (7). We now want to find bounds on $A_R, h_R$, and $\ell_R$ as the curve evolves under ACSF. **Proposition 9**. *Let $C$ be the constant from Proposition 3.2.* 1. *$A_R(t)=-tw_g(1+\sigma)$.\ * 2. *$-R\leq t_R\leq -R+\frac{C}{w_g(\sigma+1)}$\ * 3. *$-t(1+\sigma)\leq \ell_R(t)\leq -t(1+\sigma)+\frac{2C}{w_g(1+\sigma)}$\ * 4. *$w_g-\frac{C}{-t(1+\sigma)+\frac{C}{w_g}}\leq h_R(t)\leq w_g$\ * 5.
*$\kappa(\theta,t)\leq \frac{2}{\min_{\theta\in S^1}g(\theta)}\left[(1+\sigma)-\frac{C_{\kappa}}{t}\right]$, for all $t>\frac{3}{4}t_R$,\ * *where $C_{\kappa}=w_g+\frac{2C}{w_g(1+\sigma)}.$* *Proof.* We find (i) by integrating the evolution of $A(t)$ from $t$ to $0$, and we obtain $$\begin{aligned} A(t)=&-t\int_{S^1}g(u)\dif u\\ =&-t\left(\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}g(u)\dif u+\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}g(u)\dif u\right)\\ =&-t(w_g+\sigma w_g).\end{aligned}$$ The inequalities from (ii) follow from (i) and the estimates in Proposition 3.2.\ To prove the second inequality for (iii), let $\{\Gamma_t(\theta)\}_{t\in(-\infty,\infty)}$ be the family of curves, parametrized by tangent angle, that solves ACSF by translating at speed 1 in the $e_2$ direction such that $\Gamma_t\cap \{y\leq 0\}\neq\emptyset$ for all $t\leq 0$ and $\Gamma_t\cap\{y\leq 0\}=\emptyset$ for all $t>0$. We have for all $\epsilon>0$ that $\Gamma_{-R-\epsilon}(0)<G^R(0)$ and $\Gamma_{-R-\epsilon}\cap G^R=\emptyset$. Thus, by the avoidance principle, for all $t\in(t_R,0)$ we have $\Gamma_{-R-\epsilon+(t-t_R)}\cap G^R_t=\emptyset$. Using a similar argument with a translator traveling in the $-e_2$ direction at speed $\sigma$, taking $\epsilon$ to $0$, and using the estimate for $t_R$ in (ii) gives the inequality.\ The second part of (iv) is clear since the initial curve lies in a slab of width $w_g$. The first part of (iv) follows from noting that we must have $A_R(t)\leq h_R(t)\ell_R(t)$ by simple geometry, then using the second inequality in (iii) and the inequality in (ii). The first inequality in (iii) follows similarly using the first part of (iv).\ To prove (v), let $\eta(\theta,t)=\langle G^R_t(\theta),\mathsf{N}(\theta,t)\rangle$ be the support function. Note that by the convexity of the curve we have $\left(\ell^2(t)+h^2(t)\right)^{\frac{1}{2}}\geq\eta(\theta,t)$ for all $\theta\in S^1$.
We also have by the evolution of the curve that $-\dpd{ }{t}\eta(\theta,t)=g(\theta)\kappa(\theta,t)$. Then, by (ii) of Proposition 1.2 we have $$\begin{aligned} \eta(\theta,t)=&\int_{t}^0g(\theta)\kappa(\theta,\tau)\dif \tau\\ =&\int_{t}^0g(\theta)\kappa(\theta,\tau)\frac{(\tau-t_R)^{\frac{1}{2}}}{(\tau-t_R)^{\frac{1}{2}}}\dif \tau\\ \geq&g(\theta)\kappa(\theta,t)(t-t_R)^{\frac{1}{2}}\int_{t}^0\frac{1}{(\tau-t_R)^{\frac{1}{2}}}\dif \tau\\ =&g(\theta)\kappa(\theta,t)(t-t_R)^{\frac{1}{2}}\left[2\left((-t_R)^{\frac{1}{2}}-(t-t_R)^{\frac{1}{2}}\right)\right]\\ =&g(\theta)\kappa(\theta,t)(t-t_R)^{\frac{1}{2}}\frac{-2t}{(-t_R)^{\frac{1}{2}}+(t-t_R)^{\frac{1}{2}}}\\ \geq&-g(\theta)\kappa(\theta,t)t\frac{(-t_R)^{\frac{1}{2}}}{2(-t_R)^{\frac{1}{2}}}\\ =&-\frac{1}{2}g(\theta)\kappa(\theta,t)t.\\\end{aligned}$$ This, and our bounds for $\ell(t)$ and $h(t)$ give us our bound for $\kappa$. ◻ With these bounds, particularly the one on $\kappa(\theta,t)$, and by the parabolic evolution equation for $\kappa$, we have bounds on higher derivatives of $\kappa$ as well, which allows us to take $R\to\infty$ and, passing to a subsequence as necessary, claim that there exists a limiting ancient solution to ACSF lying in a slab of width $w_g$ and no smaller.

# Uniqueness

In this section we prove that the only convex ancient solutions that live in a given slab are the translator constructed in section 2, and the ancient solution constructed in section 3. A key result that we will need involves the asymptotic behavior of such solutions as $t\to-\infty$. In particular, we show that the asymptotic behavior of convex ancient solutions living in a slab of width $w_g$ and no smaller slab is that of translators of width $w_g$ in the appropriate direction. This was proved for CSF in the paper by Bourni, Langford, and Tinaglia [@BLT]. For the proofs of the statements used in [@BLT], one can also follow the book by Andrews et al.
[@EGF].\ We define $\Pi$ to be the slab $\Pi=\{(x,y):\envert{x}<\frac{w_g}{2}\}$, and we let $\{\Gamma_t\}_{t\in(-\infty,0)}$ be a convex ancient solution, parametrized by tangent angle, that lies in the slab $\Pi$ and no smaller slab. Note that in the compact case, we have that the turning angle $\theta$ takes on values in all of $S^1$, while in the noncompact case we either have $\theta\in(-\frac{\pi}{2},\frac{\pi}{2})$ or we have $\theta\in(\frac{\pi}{2},\frac{3\pi}{2}).$\ Define $p_{-}(t)=\Gamma_t(0)$ when $0$ is in the domain of the turning angle, and similarly define $p_{+}(t)=\Gamma_t(\pi)$. By translating in space and time, we can arrange it so that we have $y(p_{\pm}(0))=0$ in the corresponding noncompact cases, and $\lim_{t\to 0^{-}}y(p_{\pm}(t))=0$ in the compact case, where $y(p_{\pm}(t)):=\langle p_{\pm}(t),e_2\rangle$.\ As in section 3 above, we let $\{G^+_t\}_{t\in(-\infty,\infty)}$ be the translator moving at speed 1 in the positive $e_2$ direction, and we let $\{G_t ^-\}_{t\in(-\infty,\infty)}$ be the translator moving at speed $\sigma$ in the negative $e_2$ direction, so that they both lie in the slab $\Pi$ and no smaller slab.\ We first have, as a preliminary result, a slight adaptation of a result from [@BLT] (cf. [@EGF]), the proofs of which follow exactly as in the $g=1$ case (CSF) considered in the paper, so we omit them here. **Lemma 10**.
*The translated family $\{\Gamma^+_{s,t}\}_{t\in(-\infty,-s)}$ defined by\ $$\Gamma_{s,t}^+=\Gamma_{t+s}-p_-(s)$$\ converges locally uniformly in the smooth topology as $s\to-\infty$ to the translator\ $$\{r_-G_{r_-^{-2}t}\}_{t\in(-\infty,\infty)}$$\ where $r_-=\lim_{s\to-\infty}(g(0)\kappa(0,s))^{-1}.$* *Similarly, the translated family $\{\Gamma^-_{s,t}\}_{t\in(-\infty,-s)}$ defined by\ $$\Gamma_{s,t}^-=\Gamma_{t+s}-p_+(s)$$\ converges locally uniformly in the smooth topology as $s\to-\infty$ to the translator\ $$\{r_+G_{r_+^{-2}t}\}_{t\in(-\infty,\infty)}$$\ where $r_+=\sigma \lim_{s\to-\infty}(g(\pi)\kappa(\pi,s))^{-1}$.\ Additionally, the solution $\{\Gamma_{t}\}_{t\in(-\infty,a)}$, where $a=0$ in the compact case and $a=\infty$ in the noncompact case, sweeps out all of $\Pi$.* Since our convex ancient solutions lie in the slab $\Pi$, it is clear that we have $r_{\pm}\leq 1$. We now show that we have $r_{\pm}=1$, or in other words, that the asymptotic translators are of maximal width. The idea, again following and adapting the proof in [@BLT], is to estimate the area enclosed by the ancient solution by inscribed trapezia, then show that the rate of growth of the enclosed area of the solution as we take $t\to-\infty$ would be too great if $r<1$. **Lemma 11**. *The asymptotic translators are of maximal width, i.e., $r_{\pm}=1$.* *Proof.* For notational convenience, let $\kappa_{\infty}^-=\lim_{s\to-\infty}\kappa(0,s)$ and let $\kappa_{\infty}^+=\lim_{s\to-\infty}\kappa(\pi,s)$. We have by the Harnack type inequality and Proposition 3.4 that\ $$-y(p_-(t))\geq-\left(g(0)\kappa_{\infty}^-\right)t=-r_-^{-1}t$$\ and\ $$y(p_+(t))\geq- \left(g(\pi)\kappa_{\infty}^+\right)t=-\sigma r_+^{-1}t.$$\ First, suppose that $\{\Gamma_t\}_{t\in(-\infty,\infty)}$ is noncompact with turning angle $\theta\in(-\frac{\pi}{2},\frac{\pi}{2})$. Let $A(t)$ and $B(t)$ be two points on $\Gamma_t$ such that $y(A(t))=y(B(t))=0$ and $x(A(t))>x(B(t))$.
Let $A_-(t)$ be the area enclosed by $\Gamma_t$ and the $x$-axis. We then have\ $$-A_-'(t)=\int_{\theta(B(t))}^{\theta(A(t))}g(\xi)\dif \xi\leq w_g.$$\ Integrating from $t$ to $0$, we have $A_-(t)\leq-w_gt$. Let $\delta\in(0,1)$. Since the ancient solution sweeps out the whole slab $\Pi$, we can find some $t_{\delta}<0$ such that $w_g\geq x(A(t))-x(B(t))\geq w_g-\delta$ for all $t<t_{\delta}$. Taking $t_\delta$ smaller if necessary, we can also find points $\underline{q}^{\pm}(t)\in\Gamma_t$ and a constant $C_\delta$ such that $y(\underline{q}^+(t))=y(\underline{q}^-(t))$, while $w_gr_--\delta<x(\underline{q}^+(t))-x(\underline{q}^-(t))<w_gr_-$ and $0<y(\underline{q}^{\pm}(t))-y(p_-(t))<C_\delta$.\ The area enclosed by $\Gamma_t$ and the $x$-axis is bounded below by the area of the inscribed trapezoid with vertices $A(t), B(t)$ and $\underline{q}^{\pm}(t)$. So we have\ $$-w_gt\geq A_-(t)\geq \frac{1}{2}(w_gr_-+w_g-2\delta)(-r_-^{-1}t-C_{\delta}).$$\ Multiplying both sides by $2r_-$ and rearranging, we get\ $$C_{\delta}r_-(w_g(r_-+1)-2\delta)\geq-t(w_g(1-r_-)-2\delta)$$\ for all $t<t_\delta$. Taking $t\to -\infty$, we have that\ $$w_g(1-r_-)-2\delta\leq 0$$\ for $\delta\in (0,1)$. Taking $\delta\to 0$ gives us that $r_-=1$. Taking the other noncompact case, we have $-A_+'(t)\leq \sigma w_g$ and so $A_+(t)\leq -\sigma w_gt$. Taking $\delta\in (0,1)$ and defining $\overline{q}^{\pm}$ and a new constant $C_\delta$ similarly as above we get\ $$-\sigma w_g t\geq A_+(t)\geq \frac{1}{2}(w_gr_++w_g-2\delta)(-\sigma r_+^{-1}t-C_{\delta}).$$\ Factoring out a $\sigma$, we proceed exactly as in the other noncompact case to get $r_+=1$. The compact case is proved by bounding the area by the sum of the two trapezia to get the inequality\ $$-(1+\sigma)w_gt\geq-\frac{1}{2}(w_gr_{-}+w_g-2\delta)(r_{-}^{-1}t+C_{\delta})-\frac{1}{2}(w_gr_{+}+w_g-2\delta)(\sigma r_{+}^{-1}t+C_{\delta})$$\ for all $t<t_{\delta}$.
Carrying out similar calculations as above, we have that\ $$\left[-w_g(r_{-}^{-1}+\sigma r_{+}^{-1}-(1+\sigma))+2\delta(r_{-}^{-1}+\sigma r_{+}^{-1})\right]t\leq\left[w_g(r_{-}+r_{+}+2)-4\delta\right]C_{\delta}$$\ for all $t<t_{\delta}$. Taking $t\to-\infty$ implies that\ $$w_g(r_{-}^{-1}+\sigma r_{+}^{-1}-(1+\sigma))\leq 2\delta (r_{-}^{-1}+\sigma r_{+}^{-1})$$\ for all $\delta>0$. Taking $\delta\to0$ then gives us the result for the compact case. ◻ **Theorem 12**. *Let $g:S^1\to\mathbb{R}_+$ be a smooth and strictly positive function, $v\in S^1$, and $w\in\mathbb{R}_+$. There exists a unique, up to translation, compact ancient solution to ACSF with respect to $g$ that lies within a slab parallel to $v$ of width $w$ and in no smaller slab. There exist two, up to translation, translating solutions to ACSF with respect to $g$ that lie within a slab parallel to $v$ of width $w$ and in no smaller slab, one that travels in the $v$ direction and one that travels in the $-v$ direction.* *Proof.* Once again, we prove that this holds for $w=w_g$ and $v=e_2$. The proof is an adaptation of that found in Bourni et al. [@betal].\ Let $\{G_t\}_{t\in(-\infty,0)}$ be the solution constructed in section 3, and let $\{\Gamma_t\}_{t\in(-\infty,0)}$ be any other compact, convex ancient solution to ACSF that lies in the vertical slab of width $w_g$ and no smaller. Parametrize both by their respective tangent angles and define the quantities\ $$L(t)=-\langle \Gamma_t(0),e_2\rangle+\langle\Gamma_t(\pi),e_2\rangle$$\ and\ $$L_0(t)=-\langle G_t(0),e_2\rangle+\langle G_t(\pi),e_2\rangle.$$\ Note that $L_0(t)$ corresponds to $\ell(t)$ in section 3.
Since the backwards limits of our ancient solutions are both translators, curvature is nondecreasing for ancient solutions to ACSF, and $g(0)\kappa(0,t)\geq 1$ and $g(\pi)\kappa(\pi,t)\geq \sigma$ for each of our solutions, we have that\ $$\dod{ }{t}(L(t)+t(1+\sigma))\leq 0,$$\ so when we take the limit $t\to-\infty$, the limit exists (though it may be infinite). Let $L=\lim_{t\to-\infty}L(t)$ and let $L_0=\lim_{t\to-\infty}L_0(t)$, and note that by Proposition 3.4 we have that $L_0<\infty$. Note also that $-\langle \Gamma_t(0),e_2\rangle+t$ and $\langle \Gamma_t(\pi),e_2\rangle+\sigma t$ both have (possibly infinite) limits as $t\to-\infty$, while $-\langle G_t(0),e_2\rangle+t$ and $\langle G_t(\pi),e_2\rangle+\sigma t$ have finite limits as $t\to-\infty$.\ We first show that $L=L_0$. To that end, suppose instead that $L>L_0$. Let $\tilde{\Gamma}_t^{\epsilon}=\Gamma_t\big{|}_{\theta\in(-\pi,0)}-\epsilon e_1$ and let $\tilde{G}_t=G_t\big{|}_{\theta\in(-\pi,0)}$. Since $L>L_0$, there exists some $t_0$ such that $L(t)>L_0(t)$ for all $t<t_0$. Thanks to the existence of the limits above, and the fact that these limits are finite in the case of our constructed solution $\{G_t\}_{t\in(-\infty,0)}$, we can find some constant $c$ such that $y(\Gamma_t(\pi))+c>y(G_t(\pi))$ and $y(\Gamma_t(0))+c<y(G_t(0))$ for all $t<t_0$. Further, since both solutions must converge at their tips to the appropriate translating solutions, there exists some $t_\epsilon<t_0$ such that $(\tilde{\Gamma}_{t_\epsilon}^\epsilon+ce_2)\cap\tilde{G}_{t_{\epsilon}}=\emptyset$. By the maximum principle, we then have $(\tilde{\Gamma}_t^{\epsilon}+ce_2)\cap\tilde{G}_t=\emptyset$ for all $t\in(t_{\epsilon},t_0)$. Taking $\epsilon\to 0$ gives us that $\tilde{\Gamma}_{t_0}+ce_2$ lies outside $\tilde{G}_{t_0}$.\ We then repeat this argument, restricting to $\theta\in(0,\pi)$, to get that $\Gamma_{t_0}+ce_2$ lies outside $G_{t_0}$ on that side as well, and so $G_{t_0}$ lies within $\Gamma_{t_0}$.
By the strong maximum principle they cannot intersect at all, but this contradicts the fact that they both expire at time $t=0$. Thus we must have $L=L_0$.\ Now, for $\tau>0$ define $\Gamma_t^\tau$ to be the translation of $\Gamma$ in time by $\tau$, i.e., $\Gamma_t^{\tau}=\Gamma_{t-\tau}$. Then note that $L_\tau>L=L_0$, and we can repeat the above argument to show that $\Gamma_t^\tau$ lies outside of $G_t$ for all $t$. Taking $\tau\to 0$ gives that $\Gamma_t$ lies outside of $G_t$ for all $t$, but as they both expire at $t=0$, this can only happen if $\Gamma_t$ and $G_t$ coincide.\ The proof that the translator is the unique noncompact solution that lies in a slab parallel to $v$ of width $w$ follows similarly, with the additional observation that $$\lim_{\theta\to\frac{\pi}{2}^+}x(\tilde{\Gamma}_t^{\epsilon}(\theta))<-\frac{w_g}{2}$$ for all $\epsilon>0$, which allows us to use the avoidance principle despite both curves involved being noncompact. ◻
{ "id": "2309.00712", "title": "Convex Ancient Solutions to Anisotropic Curve Shortening Flow", "authors": "Theodora Bourni, Benjamin Richards", "categories": "math.DG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- author: - "Weiye Gan[^1]" - "Qiang Du[^2]" - "Zuoqiang Shi[^3]" title: "$\\Gamma$-convergence of Nonlocal Dirichlet Energies With Penalty Formulations of Dirichlet Boundary Data [^4]" --- [^1]: Department of Mathematical Sciences, Tsinghua University, Beijing, 100084, China. Email: *gwy18\@mails.tsinghua.edu.cn*. [^2]: Department of Applied Physics and Applied Mathematics and Data Science Institute, Columbia University, New York, NY 10027, USA. Email: *qd2125\@columbia.edu*. [^3]: Yau Mathematical Sciences Center, Tsinghua University, Beijing, 100084, China & Yanqi Lake Beijing Institute of Mathematical Sciences and Applications, Beijing, 101408, China. Email: *zqshi\@tsinghua.edu.cn* (Corresponding author). [^4]: This work was supported by National Natural Science Foundation of China under grant 12071244 and US National Science Foundation DMS-2309245 and DMS-1937254.
{ "id": "2309.10352", "title": "$\\Gamma$-convergence of Nonlocal Dirichlet Energies With Penalty\n Formulations of Dirichlet Boundary Data", "authors": "Weiye Gan and Qiang Du and Zuoqiang Shi", "categories": "math.AP cs.NA math.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We prove that for any $\eta$ that belongs to the closure of the interior of the Markov and Lagrange spectra, the sets $k^{-1}((-\infty,\eta])$ and $k^{-1}(\eta)$, which are the sets of irrational numbers with best constant of Diophantine approximation bounded by $\eta$ and exactly $\eta$ respectively, have the same Hausdorff dimension. We also show that, as $\eta$ varies in the interior of the spectra, this Hausdorff dimension is a strictly increasing function. address: - "Carlos Gustavo Moreira: IMPA, Estrada Dona Castorina 110, 22460-320, Rio de Janeiro, Brazil " - "Christian Villamil: IMPA, Estrada Dona Castorina 110, 22460-320, Rio de Janeiro, Brazil" author: - Carlos Gustavo Moreira - Christian Villamil title: Concentration of dimension in extremal points of left-half lines in the Lagrange spectrum --- # Introduction The classical Lagrange and Markov spectra are closed subsets of the real line related to Diophantine approximations. They arise naturally in the study of rational approximations of irrational numbers and of indefinite binary quadratic forms, respectively. Given $\alpha\in\mathbb R\setminus\mathbb Q$, set $$\begin{aligned} k(\alpha)&=&\sup \left\{k>0:\left|\alpha -\frac{p}{q}\right|<\frac{1}{kq^2} \ \mbox{has infinitely many rational solutions} \ \frac{p}{q}\right \}\\& =&\limsup_{p\in \mathbb{Z},q\in \mathbb{N}, p,q\to \infty}|q(q\alpha-p)|^{-1}\in \mathbb{R}\cup \{\infty\}\end{aligned}$$ for the best constant of Diophantine approximations of $\alpha$.
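As a quick numerical sanity check of this definition (an illustration, not part of the paper), one can use the classical fact that the best rational approximations of $\alpha$ are its continued fraction convergents, so $|q_n(q_n\alpha-p_n)|^{-1}$ computed along the convergents approaches $k(\alpha)$. The helper names below are ours; the digit lists are the continued fraction expansions of the chosen numbers.

```python
import math

def convergents(digits):
    """Convergents p_k/q_k of the continued fraction [d0; d1, d2, ...]."""
    p_prev, q_prev = 1, 0          # p_{-1}, q_{-1}
    p, q = digits[0], 1            # p_0, q_0
    out = [(p, q)]
    for d in digits[1:]:
        p_prev, p = p, d * p + p_prev
        q_prev, q = q, d * q + q_prev
        out.append((p, q))
    return out

def approx_best_constant(alpha, digits):
    # |q(q*alpha - p)|^{-1} evaluated at the last convergent of `digits`
    p, q = convergents(digits)[-1]
    return 1.0 / abs(q * (q * alpha - p))

golden = (1 + math.sqrt(5)) / 2    # [1; 1, 1, ...], k(alpha) = sqrt(5)
assert abs(approx_best_constant(golden, [1] * 20) - math.sqrt(5)) < 1e-5

root2 = math.sqrt(2)               # [1; 2, 2, ...], k(alpha) = 2*sqrt(2)
assert abs(approx_best_constant(root2, [1] + [2] * 12) - 2 * math.sqrt(2)) < 1e-5
```

The truncation depths are kept modest so that double-precision rounding in $q\alpha-p$ stays well below the quantity being measured.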
The *classical Lagrange spectrum* is the set $$L=\{k(\alpha): \alpha \in \mathbb{R}\setminus\mathbb{Q}, k(\alpha)<\infty\},$$ and the *classical Markov spectrum* is the set $$M=\left\{\left(\inf\limits_{(x,y)\in\mathbb{Z}^2-\{(0,0)\}} |q(x,y)|\right)^{-1} < \infty: q(x,y)=ax^2+bxy+cy^2, b^2-4ac=1\right\}$$ that consists of the reciprocals of the minimal values over non-trivial integer vectors $(x,y)\in\mathbb{Z}^2-\{(0,0)\}$ of indefinite binary quadratic forms $q(x,y)$ with unit discriminant. Perron gave in [@P] the following dynamical characterizations of these classical spectra in terms of symbolic dynamical systems: Given a bi-infinite sequence $\theta=(a_n)_{n\in\mathbb{Z}}\in(\mathbb{N}^*)^{\mathbb{Z}}$, let $$\lambda_i(\theta):=[0;a_{i+1},a_{i+2},\dots]+a_i+[0;a_{i-1}, a_{i-2},\dots].$$ The Markov value $m(\theta)$ of $\theta$ is $m(\theta):=\sup\limits_{i\in\mathbb{Z}} \lambda_i(\theta)$ and the Lagrange value of $\theta$ is $\ell(\theta):=\limsup \limits_{i\to \infty} \lambda_i(\theta).$ Then the Markov spectrum is the set $$M=\{m(\theta)<\infty: \theta\in(\mathbb{N}^*)^{\mathbb{Z}}\}$$ and the Lagrange spectrum is the set $$L=\{\ell(\theta)<\infty: \theta\in(\mathbb{N}^*)^{\mathbb{Z}}\}.$$ It follows from these characterizations that $M$ and $L$ are closed subsets of $\mathbb R$ and that $L\subset M$. Markov showed in [@M79] that $$L\cap (-\infty, 3)=M\cap (-\infty, 3)=\{k_1=\sqrt{5}<k_2=2\sqrt{2}<k_3=\frac{\sqrt{221}}{5}<...\},$$ where $k^2_n\in \mathbb{Q}$ for every $n\in \mathbb{N}$ and $k_n\to 3$ when $n\to \infty$. M. Hall proved in [@Hall] that $$C_4+C_4=[\sqrt{2}-1,4(\sqrt{2}-1)],$$ where for each positive integer $N$, $C_N$ is the set of numbers in $[0,1]$ in whose continued fractions the coefficients are bounded by $N$, i.e., $C_N=\{x=[0;a_1,...,a_n,...]\in [0,1]: a_i\le N, \ \forall i\ge 1\}.$ Together with Perron's characterizations, this implies that $L$ and $M$ contain the whole half-line $[6,+\infty)$.
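To make Perron's characterization concrete, here is a small numerical illustration (ours, not from the paper): for a periodic bi-infinite sequence, $m(\theta)$ can be approximated by truncating the two continued fraction tails, and the classical periods $\overline{1}$, $\overline{2}$ and $\overline{2,2,1,1}$ recover the first three Markov values $\sqrt{5}$, $2\sqrt{2}$ and $\sqrt{221}/5$.

```python
import math

def cf(seq):
    """Numerical value of the finite continued fraction [0; s_0, s_1, ...]."""
    x = 0.0
    for a in reversed(seq):
        x = 1.0 / (a + x)
    return x

def markov_value(period, depth=200):
    """m(theta) for the periodic bi-infinite sequence theta with this period:
    the maximum over i of a_i + [0; a_{i+1}, ...] + [0; a_{i-1}, ...]."""
    n = len(period)
    best = 0.0
    for i in range(n):
        right = [period[(i + 1 + k) % n] for k in range(depth)]
        left = [period[(i - 1 - k) % n] for k in range(depth)]
        best = max(best, period[i] + cf(right) + cf(left))
    return best

assert abs(markov_value([1]) - math.sqrt(5)) < 1e-10            # k_1
assert abs(markov_value([2]) - 2 * math.sqrt(2)) < 1e-10        # k_2
assert abs(markov_value([2, 2, 1, 1]) - math.sqrt(221) / 5) < 1e-10  # k_3
```

Truncation at 200 digits is far more than enough: tails of bounded continued fractions converge exponentially fast, so the error is dominated by floating-point precision.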
Freiman in [@F] determined the precise beginning of Hall's ray (the biggest half-line contained in $L$), which is $$\frac{2221564096 + 283748\sqrt{462}}{491993569}=4.52782956616\dots$$ The first author proved in [@M3] several results on the geometry of the Markov and Lagrange spectra, for example that the map $d:\mathbb{R} \rightarrow [0,1]$, given by $$d(\eta)=HD(L\cap(-\infty,\eta))= HD(M\cap(-\infty,\eta))$$ is continuous, surjective and such that $d(3)=0$ and $d(\sqrt{12})=1$. Moreover, $$d(\eta)=\min \{1,2D(\eta)\},$$ where $D(\eta)=HD(k^{-1}(-\infty,\eta))=HD(k^{-1}(-\infty,\eta])$ is a continuous surjective function from $\mathbb{R}$ to $[0,1)$, and $$\lim_{\eta\rightarrow \infty}HD(k^{-1}(\eta))=1.$$ Recently, the estimate $$t^*_1:=\sup \{s\in \mathbb{R}:d(s)<1\}=3.334384...$$ was given in [@MMPV]. In particular, any $t\in \mathbb{R}$ that belongs to the interior of the Markov and Lagrange spectra must satisfy $t>t^*_1.$ Now, let $\varphi:S\rightarrow S$ be a diffeomorphism of a $C^{\infty}$ compact surface $S$ with a mixing horseshoe $\Lambda$ and let $f:S\rightarrow \mathbb{R}$ be a differentiable function. For $x\in S$, following the above characterization of the classical spectra, we define the *Lagrange value* of $x$ associated to $f$ and $\varphi$ as being the number $\ell_{\varphi,f}(x)=\limsup_{n\to \infty}f(\varphi^n(x))$ and also the *Markov value* of $x$ associated to $f$ and $\varphi$ as the number $m_{\varphi,f}(x)=\sup_{n\in \mathbb{Z}}f(\varphi^n(x))$. The sets $$L_{\varphi,f}(\Lambda)=\{\ell_{\varphi,f}(x):x\in \Lambda\}$$ and $$M_{\varphi,f}(\Lambda)=\{m_{\varphi,f}(x):x\in \Lambda\}$$ are called the *Lagrange Spectrum* of $(\varphi,f,\Lambda)$ and the *Markov Spectrum* of $(\varphi,f, \Lambda)$. It turns out that dynamical Markov and Lagrange spectra associated to hyperbolic dynamics are closely related to the classical Markov and Lagrange spectra.
Several results on the Markov and Lagrange dynamical spectra associated to horseshoes in dimension 2, analogous to previously known results on the classical spectra, were obtained recently: in [@MR] it is shown that typical dynamical spectra associated to horseshoes with Hausdorff dimension larger than one have non-empty interior (as the classical ones do). In [@M4] it is shown that typical Markov and Lagrange dynamical spectra associated to horseshoes have the same minimum, which is an isolated point in both spectra and is the image under the function of a periodic point of the horseshoe. In [@GC1], in the context of *conservative* diffeomorphisms, it is proven (as a generalization of the results in [@CMM]) that for typical choices of the dynamics and of the function, the intersections of the corresponding dynamical Markov and Lagrange spectra with half-lines $(-\infty,t)$ have the same Hausdorff dimensions, and this defines a continuous function of $t$ whose image is $[0,\min \{1,D\}]$, where $D$ is the Hausdorff dimension of the horseshoe. For more information and results on classical and dynamical Markov and Lagrange spectra, we refer to the books [@CF] and [@LMMR]. In this paper, we use the fact that dynamical Markov and Lagrange spectra associated to conservative horseshoes in surfaces are natural generalizations of the classical Markov and Lagrange spectra. In fact, the classical Markov and Lagrange spectra are not compact sets, so they cannot be dynamical spectra associated to horseshoes.
However, it is shown in [@LM2] that, for any $N\ge 2$ with $N\neq 3$, the initial segments of the classical spectra until $\sqrt{N^2+4N}$ (i.e., $M\cap (-\infty,\sqrt{N^2+4N}]$ and $L\cap (-\infty,\sqrt{N^2+4N}]$) coincide with the sets $M(N)$ and $L(N)$, given, in the notation we used in Perron's characterization of $M$ and $L$, by $$M(N)=m(\Sigma(N))=\{m(\theta): \theta\in \Sigma(N)\}$$ and $$L(N)=\ell(\Sigma(N))=\{\ell(\theta): \theta\in\Sigma(N)\}$$ where $\Sigma(N):=\{1,2,\dots,N \}^{\mathbb{Z}}.$ It is also proved there that $M(N)$ and $L(N)$ are dynamical Markov and Lagrange spectra associated to a smooth real function $f$ and to a horseshoe $\Lambda(N)$ defined by a smooth conservative diffeomorphism $\varphi$, and that they are naturally associated to continued fractions with coefficients bounded by $N$. Here we use this relation between classical and dynamical spectra in order to better understand the fractal geometry (Hausdorff dimension) of the preimage of half-lines by the function $k$. We can state our main result as: **Theorem 1**. *Define $T:=int(L)=int(M)$. For any $\eta\in \overline{T}$, $D(\eta)=HD(k^{-1}(\eta))$, i.e.,
$$HD(k^{-1}((-\infty,\eta)))=HD(k^{-1}((-\infty,\eta]))=HD(k^{-1}(\eta)).$$ Even more,* - *if $\eta$ is accumulated from the left by points of $T$, then $$D(\eta)>D(t),\ \ \forall t<\eta$$* - *if $\eta$ is accumulated from the right by points of $T$, then $$D(\eta)<D(t),\ \ \forall t>\eta.$$* *In particular, $D|_{X}$ is strictly increasing, where $X$ is $T$ or any interval contained in $\overline{T}$.* # Preliminaries ## Continued fractions and regular Cantor sets The continued fraction expansion of an irrational number $\alpha$ is denoted by $$\alpha=[a_0;a_1,a_2,\dots] = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac{1}{\ddots}}},$$ so that the Gauss map $G:(0,1)\to[0,1)$, $G(x)=\dfrac{1}{x}-\left\lfloor \dfrac{1}{x}\right\rfloor$ acts on continued fraction expansions by $$G([0;a_1,a_2,\dots]) = [0;a_2,\dots].$$ For an irrational number $\alpha=\alpha_0 \in (0,1)$, the continued fraction expansion $\alpha=[0;a_1,\dots]$ is recursively obtained by setting $a_n=\lfloor\alpha_n\rfloor$ and $\alpha_{n+1} = \frac{1}{\alpha_n-a_n} = \frac{1}{G^n(\alpha_0)}$. The rational approximations $$\frac{p_n}{q_n}:=[0;a_1,\dots,a_n]\in\mathbb{Q}$$ of $\alpha$ satisfy the recurrence relations $$\label{q_n} p_n=a_n p_{n-1}+p_{n-2} \ \mbox{and} \ q_n=a_n q_{n-1}+q_{n-2},\ \ n\geq0$$ with the convention that $p_{-2}=q_{-1}=0$ and $p_{-1}=q_{-2}=1$.
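The recurrence above can be checked exactly with rational arithmetic. The sketch below (our helper names; we use the equivalent indexing convention $p_0/q_0=0/1$, $p_{-1}=1$, $q_{-1}=0$, so that $p_1/q_1=[0;a_1]$) verifies both that the recurrence reproduces the convergents and the determinant identity $p_nq_{n-1}-p_{n-1}q_n=(-1)^{n-1}$ used later for the interval lengths.

```python
from fractions import Fraction

def cf_eval(digits):
    """Exact value of the finite continued fraction [0; a_1, ..., a_n]."""
    x = Fraction(0)
    for a in reversed(digits):
        x = 1 / (a + x)
    return x

def convergents(digits):
    """(p_n, q_n) for n = 0, ..., len(digits), via the recurrence
    p_n = a_n p_{n-1} + p_{n-2}, q_n = a_n q_{n-1} + q_{n-2}."""
    p_old, q_old = 1, 0    # p_{-1}, q_{-1}
    p, q = 0, 1            # p_0, q_0 (empty continued fraction)
    out = [(p, q)]
    for a in digits:
        p_old, p = p, a * p + p_old
        q_old, q = q, a * q + q_old
        out.append((p, q))
    return out

digits = [1, 2, 2, 2, 1, 3, 1, 2]
cs = convergents(digits)
for n in range(1, len(cs)):
    p_n, q_n = cs[n]
    p_m, q_m = cs[n - 1]
    assert Fraction(p_n, q_n) == cf_eval(digits[:n])    # recurrence gives p_n/q_n
    assert p_n * q_m - p_m * q_n == (-1) ** (n - 1)     # p_n q_{n-1} - p_{n-1} q_n
```

The same determinant identity gives the interval-length formula appearing just below.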
If $0<a_j\leq N$ for all $j$, it follows that $$\frac{p_n}{N+1}\leq p_{n-1}\leq p_n \ \mbox{and} \ \frac{q_n}{N+1}\leq q_{n-1}\leq q_n,\ \ n\geq1.$$ Given a finite sequence $(a_1,a_2, \dots, a_n)\in (\mathbb{N}^*)^n$, we define $$I(a_1,a_2, \dots, a_n)=\{x\in [0,1]: x=[0;a_1,a_2, \dots, a_n,\alpha_{n+1}],\ \alpha_{n+1}\geq 1 \};$$ then by [\[q_n\]](#q_n){reference-type="ref" reference="q_n"}, $I(a_1,a_2, \dots, a_n)$ is the interval with extremities $[0;a_1,a_2, \dots, a_n]=\frac{p_n}{q_n}$ and $[0;a_1,a_2, \dots, a_n+1]=\frac{p_n+p_{n-1}}{q_n+q_{n-1}}$ and so $$\lvert I(a_1,a_2, \dots, a_n)\rvert=\left |\frac{p_n}{q_n}-\frac{p_n+p_{n-1}}{q_n+q_{n-1}}\right |=\frac{1}{q_n(q_n+q_{n-1})},$$ because $p_n q_{n-1}-p_{n-1} q_n=(-1)^{n-1}.$ Also, for $(a_0,a_1, \dots, a_n)\in (\mathbb{N}^*)^{n+1}$ we set $$I(a_0;a_1, \dots, a_n)=\{x\in [a_0,a_0+1]: x=[a_0;a_1,a_2, \dots, a_n,\alpha_{n+1}],\ \alpha_{n+1}\geq 1 \};$$ clearly, as $I(a_0;a_1, \dots, a_n)=a_0+I(a_1,a_2, \dots, a_n)$, we have $$\label{intervals} \lvert I(a_0;a_1, \dots, a_n)\rvert=\lvert I(a_1,a_2, \dots, a_n)\rvert.$$ An elementary result for comparing continued fractions is the following lemma. **Lemma 2**. *Let $\alpha=[a_0;a_1,\dots, a_n, a_{n+1},\dots]$ and $\tilde{\alpha}=[a_0;a_1,\dots, a_n, b_{n+1},\dots]$. Then:* - *$\lvert\alpha-\tilde{\alpha}\rvert<1/2^{n-1}$;* - *if $a_{n+1}\neq b_{n+1}$, then $\alpha>\tilde{\alpha}$ if and only if $(-1)^{n+1}(a_{n+1}-b_{n+1})>0.$* The next lemma is from [@M3] (see lemma A.1). **Lemma 3**.
*If $a_0,a_1,a_2\dots, a_n, a_{n+1}, \dots$ and $b_{n+1},b_{n+2}, \dots$ are positive integers bounded by $N\in \mathbb{N}$ and $a_{n+1}\neq b_{n+1}$, then $$\begin{aligned} \lvert[a_0;a_1,a_2\dots, a_n, a_{n+1}, \dots]-[a_0;a_1,a_2\dots,a_n,b_{n+1}, \dots]\rvert &>& c(N)/q^2_{n-1} \\ &>& c(N)\lvert I(a_1,a_2, \dots, a_n)\rvert \end{aligned}$$ for some positive constant $c(N).$* For the sequel, the following application of lemma [Lemma 2](#leminha){reference-type="ref" reference="leminha"} will also be useful. **Lemma 4**. *Given $R, N\in \mathbb{N}$, let $\beta^1,\beta^2,\beta^3\in \Sigma(N)^+:=\{1,2,\dots,N \}^{\mathbb{N}}$ be such that $[0;\beta^1]<[0;\beta^2]<[0;\beta^3]$. Suppose that for two sequences $\alpha=(\alpha_n)_{n\in \mathbb{Z}}\ \mbox{and}\ \tilde{\alpha}=(\tilde{\alpha}_n)_{n\in \mathbb{Z}}\ \mbox{in}\ \Sigma(N)$ we have $\alpha_{0},\dots ,\alpha_{2R+1}=\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{2R+1}$. Then for all $j\leq 2R+1$ we have $$\begin{aligned} \lambda_0(\sigma^j(\dots,\alpha_{-2},\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^2))< \max \{m(\dots,\alpha_{-2},\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^1), \\ m(\dots ,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{2R+1},\beta^3)\}+1/2^{R-1}.\end{aligned}$$* *Proof.* It is just an application of lemma [Lemma 2](#leminha){reference-type="ref" reference="leminha"}. Indeed, for $j \leq R+1$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^2))< \lambda_0(\sigma^j(\dots,\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^1))+ 1/2^{R-1} \\ \leq \max \{m(\dots,\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^1), m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{2R+1},\beta^3)\} +1/2^{R-1}.
\end{aligned}$$ For $R+1<j\leq 2R+1$, if $[\alpha_{j};\dots ,\alpha_{2R+1},\beta^2]<[\tilde{\alpha}_{j};\dots ,\tilde{\alpha}_{2R+1},\beta^3]$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\alpha_{-2},\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^2))< \lambda_0(\sigma^j(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{2R+1},\beta^3)) + 1/2^{R}\\ \leq \max \{m(\dots,\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^1), m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{2R+1},\beta^3)\}+1/2^{R}.\end{aligned}$$ And for $R+1<j\leq 2R+1$, if $[\alpha_{j};\dots ,\alpha_{2R+1},\beta^2]<[\alpha_{j};\dots ,\alpha_{2R+1},\beta^1]$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^2))< \lambda_0(\sigma^j(\dots ,\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^1))\\ \leq \max \{m(\dots,\alpha_{-1};\alpha_{0},\dots ,\alpha_{2R+1},\beta^1), m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{2R+1},\beta^1)\}. \end{aligned}$$ Then we have proved the result. ◻ We end this subsection with one definition. **Definition 5**. *A set $K\subset \mathbb{R}$ is called a $C^{1+\alpha}$-*regular Cantor set*, $\alpha > 0$, if there exists a collection $\mathcal{P}=\{I_1,I_2,...,I_r\}$ of compact intervals and a $C^{1+\alpha}$-expanding map $\psi$, defined in a neighbourhood of $\displaystyle \cup_{1\leq j\leq r}I_j$, such that* 1. *$K\subset \cup_{1\leq j\leq r}I_j$ and $\cup_{1\leq j\leq r}\partial I_j\subset K$,* 2. *For every $1\leq j\leq r$ we have that $\psi(I_j)$ is the convex hull of a union of some of the intervals $I_1,\dots,I_r$, for $l$ sufficiently large $\psi^l(K\cap I_j)=K$ and $$K=\bigcap_{n\geq 0}\psi^{-n}(\bigcup_{1\leq j\leq r}I_j).$$* *More precisely, we also say that the triple $(K, \mathcal{P}, \psi)$ is a $C^{1+\alpha}$-regular Cantor set.* For example, in our context of sets of continued fractions, let, as before, $G$ be the Gauss map and $C_N=\{x=[0;a_1,a_2,...]: a_i\le N, \forall i\ge 1\}$.
Then, $$C_N=\bigcap_{n\ge 0}G^{-n}(I_N\cup... \cup I_1),$$ where $I_j=[a_j,b_j]$ with $a_j=[0;j,\overline{1,N}]$ and $b_j=[0;j,\overline{N,1}].$ That is, $C_N$ is a regular Cantor set. ## Results on Dynamical Markov and Lagrange spectra Let $\varphi:S\rightarrow S$ be a diffeomorphism of a $C^{\infty}$ compact surface $S$ with a mixing horseshoe $\Lambda$ and let $f:S\rightarrow \mathbb{R}$ be a differentiable function. Fix a Markov partition $\{R_a\}_{a\in \mathcal{A}}$ with sufficiently small diameter consisting of rectangles $R_a \sim I_a^s \times I_a^u$ delimited by compact pieces $I_a^s$, $I_a^u$, of stable and unstable manifolds of certain points of $\Lambda$, see [@PT93] theorem 2, page 172. Recall that the stable and unstable manifolds of $\Lambda$ can be extended to locally invariant $C^{1+\alpha}$ foliations in a neighborhood of $\Lambda$ for some $\alpha>0$. Using these foliations it is possible to define projections $\pi^u_a: R_a\rightarrow I^s_a \times \{i^u_a\}$ and $\pi^s_a: R_a\rightarrow \{i^s_a\}\times I^u_a$ of the rectangles into the connected components $I^s_a \times \{i^u_a\}$ and $\{i^s_a\}\times I^u_a$ of the stable and unstable boundaries of $R_a$, where $i^u_a\in \partial I^u_a$ and $i^s_a\in \partial I^s_a$ are fixed arbitrarily. In this way, we have the unstable and stable Cantor sets $$K^u=\bigcup_{a\in \mathcal{A}}\pi^s_a(\Lambda\cap R_a) \ \mbox{and} \ K^s=\bigcup_{a\in \mathcal{A}}\pi^u_a(\Lambda\cap R_a).$$ In fact, $K^u$ and $K^s$ are $C^{1+\alpha}$ dynamically defined Cantor sets, associated to some expanding maps $\psi_s$ and $\psi_u$. The stable and unstable Cantor sets, $K^s$ and $K^u$, respectively, are closely related to the fractal geometry of the horseshoe $\Lambda$.
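As a hedged numerical illustration (ours, not part of the paper), one can check that the intervals $I_j=[a_j,b_j]$ above really bracket the points of $C_N$ whose first coefficient is $j$: among digit sequences bounded by $N$, the tail $[0;c_1,c_2,\dots]$ is maximized by $\overline{1,N}$ and minimized by $\overline{N,1}$, which yields the stated endpoints.

```python
import random

def cf(seq):
    """Numerical value of the finite continued fraction [0; s_0, s_1, ...]."""
    x = 0.0
    for a in reversed(seq):
        x = 1.0 / (a + x)
    return x

def periodic(head, block, depth=120):
    # [0; head, block, block, ...] truncated at `depth` repeated digits
    return cf(list(head) + (block * depth)[:depth])

N = 4
random.seed(0)
for j in range(1, N + 1):
    a_j = periodic([j], [1, N])   # [0; j, 1, N, 1, N, ...]
    b_j = periodic([j], [N, 1])   # [0; j, N, 1, N, 1, ...]
    assert a_j < b_j
    for _ in range(200):          # random points of C_N starting with digit j
        tail = [random.randint(1, N) for _ in range(120)]
        x = cf([j] + tail)
        assert a_j - 1e-9 <= x <= b_j + 1e-9
```

The small tolerance only absorbs truncation of the infinite expansions; the endpoints themselves are attained exactly by the periodic tails.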
For instance, it is well-known that $$HD(\Lambda)=HD(K^s)+HD(K^u)$$ and that in the conservative case $$HD(K^s)=HD(K^u).$$ The study of the intersection of the spectra with half-lines is related to the study of fractal dimensions of the set $$\Lambda_t=\bigcap\limits_{n\in\mathbb{Z}}\varphi^{-n}(\{y\in\Lambda: f(y)\leq t\}) = \{x\in\Lambda: m_{\varphi, f}(x)=\sup\limits_{n\in\mathbb{Z}}f(\varphi^n(x))\leq t\}$$ for $t\in\mathbb{R}$. Following this, we also consider the subsets $\Lambda_t$ through their projections on the stable and unstable Cantor sets of $\Lambda$ $$K^u_t=\bigcup_{a\in \mathcal{A}} \pi^s_a(\Lambda_t\cap R_a) \ \mbox{and} \ K^s_t=\bigcup_{a\in \mathcal{A}}\pi^u_a(\Lambda_t\cap R_a).$$ The following result is shown in [@GC1]. **Theorem 6**. *Let $r\geq 2$ and let $\varphi\in \text{Diff}^{2}(S)$ be a conservative diffeomorphism preserving a smooth form $\omega$, and take $\Lambda$ a mixing horseshoe of $\varphi$. If $f\in C^r(S,\mathbb{R})$ satisfies that $\forall \ z\in \Lambda, \ \nabla f(z)\neq 0$, then the functions $$t\mapsto HD(K^u_t) \ \mbox{and} \ t\mapsto HD(K^s_t)$$ are equal and continuous. Even more, one has $$HD(\Lambda_t)=2HD(K^u_t).$$* ## The horseshoe $\Lambda(N)$ Given an integer $N\ge 2$, write $\tilde{C}_N=\{1,2,...,N\}+C_N$ and define $$\Lambda(N)=C_N\times \tilde{C}_N.$$ If $x=[0;a_1,a_2,...]$ and $y=[a_0;a_{-1},a_{-2},...]$ then we take $\varphi:\Lambda(N) \rightarrow \Lambda(N)$ given by $$\begin{aligned} \label{dif} \varphi(x,y)&=&(G(x),a_1+1/y)\\ &=&([0;a_2,a_3,...],a_1+[0;a_0,a_{-1},...]). \end{aligned}$$ Also, equip $\Lambda(N)$ with the real map $f(x,y)=x+y$. We note that $\varphi$ can be extended to a $C^{\infty}$-diffeomorphism on a diffeomorphic copy of the 2-dimensional sphere $\mathbb{S}^2$.
Notice also that $\varphi$ is conjugate to the restriction to $C_N\times C_N$ of the map $\psi:(0,1)\times(0,1)\to [0,1)\times(0,1)$ given by $$\psi(x,y)=\left(G(x),\frac1{y+\lfloor 1/x\rfloor}\right)$$ and following [@Ar] and [@S.ITO] we know that $\psi$ has an invariant measure equivalent to the Lebesgue measure. In particular, $\varphi$ also has an invariant measure equivalent to the Lebesgue measure and then $\varphi$ is conservative. Indeed, if $\mathcal{S}=\{(x,y)\in {\mathbb R}^2|0<x<1, 0<y<1/(1+x)\}$ and $T:\mathcal{S}\to\mathcal{S}$ is given by $$T(x,y)=(G(x),x-x^2 y),$$ then $T$ preserves the Lebesgue measure in the plane. If $h:\mathcal{S}\to [0,1)\times(0,1)$ is given by $h(x,y)=(x,y/(1-xy))$ then $h$ is a conjugation between $T$ and $\psi$ (and thus $\psi$ preserves the smooth measure $h_*(\mathrm{Leb})$). For $\Lambda(N)$ we have the Markov partition $\{R_a\}_{a\in \mathcal{A}}$ where $\mathcal{A}=\{1,2, \dots,N\}$ and $R_a$ is such that $R_a\cap \Lambda(N)=C_N\times(C_N+a)=C_N\times C_N+(0,a)$. It is clear then that $\varphi|_{\Lambda(N)}$ is topologically conjugate to $\sigma:\Sigma(N)\rightarrow \Sigma(N)$ (via a map $\Pi: \Lambda(N) \to \Sigma(N)$), and that, in terms of sequences, $f$ becomes $\tilde{f}:\Sigma(N)\rightarrow \mathbb{R}$ given by $$\tilde{f}(\theta)=[0;a_1(\theta),a_2(\theta),...]+a_0(\theta)+[0;a_{-1}(\theta),a_{-2}(\theta),...]=\lambda_0(\theta),$$ where $\theta=(a_i(\theta))_{i\in \mathbb{Z}},$ and so $$L_{\varphi,f}(\Lambda(N))=L(N)\ \ \mbox{and}\ \ M_{\varphi,f}(\Lambda(N))=M(N).$$ In this context, let $\alpha=(a_{s_1},a_{s_1+1},...,a_{s_2})\in \mathcal{A}^{s_2-s_1+1}$ be any word, where $s_1, s_2 \in \mathbb{Z} , \ s_1 < s_2$, and fix $s_1\le s\le s_2$. Define then $$R(\alpha;s):=\bigcap_{m=s_1-s}^{s_2-s} \varphi^{-m}(R_{a_{m+s}}).$$ Note that if $x\in R(\alpha;s)\cap \Lambda(N)$ then the symbolic representation of $x$ is of the form $(...a_{s_1}...a_{s-1};a_{s},a_{s+1}...a_{s_2}...)$, where the position just to the right of the ; is the $0$-th position.
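The conjugation identity $h\circ T=\psi\circ h$ can be verified directly, since all four maps are explicit. The sketch below (our helper names) checks it numerically on random points of $\mathcal{S}$; both sides compute the same $\lfloor 1/x\rfloor$, so agreement to machine precision is expected.

```python
import math
import random

def G(x):                       # Gauss map
    return 1.0 / x - math.floor(1.0 / x)

def T(x, y):                    # Lebesgue-measure-preserving model map on S
    return G(x), x - x * x * y

def psi(x, y):
    return G(x), 1.0 / (y + math.floor(1.0 / x))

def h(x, y):                    # claimed conjugation h: S -> [0,1) x (0,1)
    return x, y / (1.0 - x * y)

random.seed(1)
for _ in range(1000):
    x = random.uniform(0.05, 0.95)
    y = random.uniform(1e-3, 1.0 / (1.0 + x) - 1e-3)   # (x, y) in S
    lhs = h(*T(x, y))
    rhs = psi(*h(x, y))
    assert abs(lhs[0] - rhs[0]) < 1e-9 and abs(lhs[1] - rhs[1]) < 1e-9
```

Since $h\circ T=\psi\circ h$ and $T$ preserves Lebesgue measure, $\psi$ preserves $h_*(\mathrm{Leb})$, as stated in the text.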
Finally, let us consider $A_N=[0;\overline{N,1}]$ and $B_N=[0;\overline{1,N}]$. As $$NA_N+ A_NB_N=1\ \mbox{and}\ B_N+B_NA_N=1,$$ we have $A_N=\dfrac{B_N}{N}.$ Thus $B_N=\frac{-N+\sqrt{N^2+4N}}{2} \ \mbox{,} \ A_N=\frac{-N+\sqrt{N^2+4N}}{2N}$ and then $$\max f|_{\Lambda(N)}=2B_N+N=\sqrt{N^2+4N},\ \min f|_{\Lambda(N)}=2A_N+1=\frac{\sqrt{N^2+4N}}{N}.$$ # Proof of the result ## Connection of subhorseshoes For what follows, it will be useful to give the following definition. **Definition 7**. *Given subhorseshoes $\Lambda^1$ and $\Lambda^2$ of a horseshoe $\Lambda$ and $t\in \mathbb{R}$, we say that $\Lambda^1$ *connects* with $\Lambda^2$, or that $\Lambda^1$ and $\Lambda^2$ *connect* before $t$, if there exist a subhorseshoe $\tilde{\Lambda}\subset \Lambda$ and some $q< t$ with $\Lambda^1 \cup \Lambda^2 \subset \tilde{\Lambda}\subset \Lambda_q$.* **Lemma 8**. *Suppose $\Lambda^1$ and $\Lambda^2$ are subhorseshoes of $\Lambda$ and for some $x,y \in \Lambda$ we have $x\in W^u(\Lambda^1)\cap W^s(\Lambda^2)$ and $y\in W^u(\Lambda^2)\cap W^s(\Lambda^1)$. If for some $t\in \mathbb{R}$,  it is true that $$\Lambda^1 \cup \Lambda^2 \cup \mathcal{O}(x) \cup \mathcal{O}(y) \subset \Lambda_t,$$ then for every $\epsilon >0$, $\Lambda^1$ and $\Lambda^2$ connect before $t+\epsilon$.* *Proof.* Take a Markov partition $\mathcal{P}$ for $\Lambda$ with diameter small enough such that $\max f|_{\bigcup\limits_{P\in \mathcal{R}} P}< t+\epsilon,$ where $\mathcal{R}=\{P\in \mathcal{P}: P\cap (\Lambda^1 \cup \Lambda^2 \cup \mathcal{O}(x) \cup \mathcal{O}(y))\neq \emptyset \}$ and consider $$\Lambda_{\mathcal{R}}=\bigcap \limits_{n \in \mathbb{Z}} \varphi ^{-n}(\bigcup \limits_{P\in \mathcal{R}} P).$$ Evidently $\Lambda^1 \cup \Lambda^2 \cup \mathcal{O}(x) \cup \mathcal{O}(y)\subset \Lambda_{\mathcal{R}}\subset \Lambda_{t+\epsilon}$, so the lemma will be proved if we show that $\Lambda^1$ and $\Lambda^2$ form part of the same transitive component of $\Lambda_{\mathcal{R}}$.
Let $x_1\in \Lambda^1$,  $x_2\in \Lambda^2$ and $\rho_1,\rho_2>0$. Take $$\eta=\frac{1}{2}\min \{\rho_1, \rho_2, \min \{d(P,Q):P,Q\in \mathcal{R}\ \mbox{and}\ P\neq Q \} \}.$$ By the shadowing lemma there exists $0<\delta \leq \eta$ such that every $\delta$-pseudo orbit of $\Lambda$ is $\eta$-shadowed by the orbit of some element of $\Lambda$. On the other hand, as $\varphi|_{\Lambda^1}$ is transitive and $x\in W^u(\Lambda^1)$, there exist $y_1\in \Lambda^1 \cap B(x_1,\delta)$ and $N_1, M_1\in \mathbb{N}$ such that $d(\varphi^{M_1}(y_1),\varphi^{-N_1}(x))<\delta$; analogously, as $\varphi|_{\Lambda^2}$ is transitive and $x\in W^s(\Lambda^2)$, there exist $y_2\in \Lambda^2$ and $N_2, M_2\in \mathbb{N}$ such that $d(\varphi^{N_2}(x),y_2)<\delta$ and $d(x_2,\varphi^{M_2}(y_2))<\delta$. Define then the $\delta$-pseudo orbit: $$\dots ,\varphi^{-1}(y_1);y_1,\varphi(y_1), \dots, \varphi^{M_1-1}(y_1), \varphi^{-N_1}(x),\dots, \varphi^{N_2-1}(x), y_2, \varphi(y_2), \dots$$ Then there exists $w\in \Lambda$ that $\eta$-shadows this pseudo-orbit. Moreover, as the $\delta$-pseudo orbit has all its terms in $\bigcup \limits_{P\in \mathcal{R}} P$ and $\eta \leq \frac{1}{2} \min \{d(P,Q):P,Q\in \mathcal{R}\ \mbox{and}\ P\neq Q \}$, we also have $\mathcal{O}(w)\subset \bigcup \limits_{P\in \mathcal{R}} P\ ;$ that is, $w\in \Lambda_{\mathcal{R}}$, and furthermore $$w\in B(x_1,\rho_1) \quad \mbox{and} \quad \varphi^{M_1+N_1-1+N_2+M_2}(w)\in B(x_2,\rho_2).$$ The proof that there exists $w\in B(x_2, \rho_2)$ and $M\in \mathbb{N}$ such that $\varphi^M(w)\in B(x_1, \rho_1)$ is analogous. ◻ **Corollary 9**. *Suppose $\Lambda^1$ and $\Lambda^2$ are subhorseshoes of $\Lambda$ with $\Lambda^1 \cup \Lambda^2 \subset \Lambda_t$ for some $t\in \mathbb{R}$.
If $\Lambda^1 \cap \Lambda^2\neq \emptyset$, then for every $\epsilon >0$,  $\Lambda^1$ and $\Lambda^2$ connect before $t+\epsilon$.* *Proof.* If $\Lambda^1 \cap \Lambda^2\neq \emptyset$, then every $w\in \Lambda^1 \cap \Lambda^2$ satisfies $w\in W^u(\Lambda^1)\cap W^s(\Lambda^2)$ and $w\in W^u(\Lambda^2)\cap W^s(\Lambda^1)$, and then we have the conclusion. ◻ **Corollary 10**. *Let $\Lambda^1$, $\Lambda^2$ and $\Lambda^3$ be subhorseshoes of $\Lambda$ and $t\in \mathbb{R}$. If $\Lambda^1$ connects with $\Lambda^2$ before $t$ and $\Lambda^2$ connects with $\Lambda^3$ before $t$, then also $\Lambda^1$ connects with $\Lambda^3$ before $t$.* *Proof.* By hypothesis we have two subhorseshoes $\Lambda^{1,2}$ and $\Lambda^{2,3}$ and $q_1,q_2<t$ with $$\Lambda^1 \cup \Lambda^2 \subset \Lambda^{1,2}\subset \Lambda_{q_1} \ \mbox{and }\ \Lambda^2 \cup \Lambda^3 \subset \Lambda^{2,3}\subset \Lambda_{q_2}.$$ Applying corollary [Corollary 9](#connection2){reference-type="ref" reference="connection2"} to $\Lambda^{1,2}$ and $\Lambda^{2,3}$, with $\tilde{t}=\max \{q_1,q_2\}$ and $\epsilon=t-\tilde{t}$, we have the result. ◻ ## Dimension estimates {#section3} Fix an integer $m\geq 1$ and consider the horseshoe $$\Lambda:=\Lambda(m+3)=C(m+3)\times\tilde{C}(m+3)$$ equipped with the diffeomorphism $\varphi$ and the map $f$ given in the previous section. Also, consider $$\eta \in (m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],m+4)\cap \overline{T}$$ which is accumulated from the left by points of $T$.
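As a hedged numerical sanity check (ours, not part of the paper), the parameter window $(m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],\, m+4)$ is indeed a nonempty interval, and its right endpoint $m+4$ stays below $\max f|_{\Lambda(m+3)}=\sqrt{N^2+4N}$ with $N=m+3$; the helper names below are ours and the periodic tails are evaluated by truncation.

```python
import math

def cf(seq):
    """Numerical value of the finite continued fraction [0; s_0, s_1, ...]."""
    x = 0.0
    for a in reversed(seq):
        x = 1.0 / (a + x)
    return x

def left_endpoint(m, depth=100):
    golden_tail = cf([1] * depth)                          # [0; 1, 1, 1, ...]
    tail = cf([1, m + 2] + ([1, m + 3] * depth)[:depth])   # [0; 1, m+2, overline{1, m+3}]
    return m + 1 + golden_tail + tail

for m in range(1, 6):
    lo = left_endpoint(m)
    hi = m + 4
    assert lo < hi                                         # the window is nonempty
    # the window lies below max f on Lambda(m+3) = sqrt(N^2 + 4N), N = m+3:
    assert hi < math.sqrt((m + 3) ** 2 + 4 * (m + 3))
```

For instance, for $m=1$ the window is roughly $(3.41, 5)$, well inside the range of $f$ on $\Lambda(4)$.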
Given $t\in(m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],\eta)\cap T$ and $0<\epsilon<\eta-t$, take $\ell(t,\epsilon)\in \mathbb{N}$ sufficiently large such that, for the set $$C(t,\epsilon)=\{\alpha=(a_0, a_{1}, \cdots, a_{2\ell(t,\epsilon)})\in \{1,2, \cdots, m+3\}^{2\ell(t,\epsilon)+1}:R(\alpha;\ell(t,\epsilon))\cap \Lambda_{t+\epsilon/4}\neq \emptyset \},$$ if $\alpha \in C(t,\epsilon)\ \mbox{and}\ x,y\in R(\alpha;\ell(t,\epsilon))\ \mbox{then}\ \lvert f(x)-f(y)\rvert<\epsilon/4.$ Define $$P(t,\epsilon):=\bigcap \limits_{n \in \mathbb{Z}} \varphi ^{-n}(\bigcup \limits_{\alpha \in C(t,\epsilon)} R(\alpha;\ell(t,\epsilon))).$$ Note that by construction, $\Lambda_{t+\epsilon/4}\subset P(t,\epsilon)\subset \Lambda_{t+\epsilon/2}$ and, being $P(t,\epsilon)$ a hyperbolic set of finite type, it admits a decomposition $$P(t,\epsilon)=\bigcup \limits_{x\in \mathcal{X}} \tilde{\Lambda}_x$$ where $\mathcal{X}$ is a finite index set and, for $x\in \mathcal{X}$, $\tilde{\Lambda}_x$ is a subhorseshoe or a transient set, i.e., a set of the form $\tau=\{x\in M: \alpha(x)\subset \tilde{\Lambda}_{i_1} \ \mbox{and } \ \omega(x)\subset \tilde{\Lambda}_{i_2}\}$, where $\tilde{\Lambda}_{i_1}$ and $\tilde{\Lambda}_{i_2}$ with $i_1, i_2 \in \mathcal{X}$ are subhorseshoes.
For every transient set $\tau$ as before, we have $$HD(\tau)=HD(K^s(\tilde{\Lambda}_{i_1}))+HD(K^u(\tilde{\Lambda}_{i_2}))$$ and, for every subhorseshoe $\tilde{\Lambda}_{i}$, since $\varphi$ is conservative, one has $$HD(\tilde{\Lambda}_{i})=HD(K^s(\tilde{\Lambda}_{i}))+HD(K^u(\tilde{\Lambda}_{i}))=2HD(K^u(\tilde{\Lambda}_{i}))$$ therefore $$HD(P(t,\epsilon))=\max\limits_{x\in \mathcal{X}} HD(\tilde{\Lambda}_{x})=\max \limits_{\substack{x\in \mathcal{X}: \ \tilde{\Lambda}_x \ is\\ subhorseshoe }}HD(\tilde{\Lambda}_{x}).$$ Now, it was proved in [@M3] that, for $s\leq \max f|_{\Lambda}$, $$D(s)=HD(k^{-1}(-\infty,s])=HD(K^u_s)$$ and, by theorem [Theorem 6](#janelas){reference-type="ref" reference="janelas"}, we have $$HD(K^u_s)=\frac{1}{2}HD(\Lambda_{s}).$$ Then, for some $x\in \mathcal{X}$, $HD(\tilde{\Lambda}_x)\geq1$, because $\Lambda_t\subset P(t,\epsilon)$ and $$t^*_1=\sup \{s\in \mathbb{R}:\min \{1,HD(\Lambda_s)\}<1\}=\sup \{s\in \mathbb{R}:HD(\Lambda_s)<1\}<t.$$ We will show that any subhorseshoe contained in $P(t,\epsilon)$ with Hausdorff dimension greater than or equal to $1$ connects with the periodic orbit $\xi$, given by the kneading sequence $(1)_{i\in \mathbb{Z}}$, before any time bigger than $t+\epsilon/2$. 
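For concreteness, the quantity attached to a bi-infinite sequence, $\lambda_0(\sigma^j\theta)=[a_j;a_{j+1},\dots]+[0;a_{j-1},a_{j-2},\dots]$, whose supremum along the orbit gives the Markov value, can be approximated by truncating both tails. A minimal numerical sketch (the helper names `cf_value` and `lambda0` are ours, not from the text): for the constant sequence $(1)_{i\in\mathbb{Z}}$ defining $\xi$, the value is $[1;\overline{1}]+[0;\overline{1}]=\sqrt{5}$, the smallest value of the classical Markov spectrum.

```python
from fractions import Fraction

def cf_value(digits):
    """Finite continued fraction [d0; d1, ..., dn], evaluated exactly."""
    val = Fraction(digits[-1])
    for d in reversed(digits[:-1]):
        val = d + 1 / val
    return val

def lambda0(theta, j, depth=60):
    """Truncated [a_j; a_{j+1}, ...] + [0; a_{j-1}, a_{j-2}, ...] for a
    bi-infinite sequence given as a function theta: Z -> digits."""
    future = [theta(j + n) for n in range(depth)]
    past = [theta(j - 1 - n) for n in range(depth)]
    return cf_value(future) + 1 / cf_value(past)

# Constant sequence of 1's: phi + (phi - 1) = sqrt(5).
val = float(lambda0(lambda n: 1, 0))
assert abs(val - 5 ** 0.5) < 1e-9
```

The truncation error decays like the square of the reciprocal of the convergents' denominators, so depth 60 is far more than enough here.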
To do that, take any $\delta>0$ and write $$\tilde{P}(t,\epsilon)=\bigcup\limits_{\substack{x\in \mathcal{X}: \ \tilde{\Lambda}_x \ is\\ subhorseshoe }}\tilde{\Lambda}_x= \bigcup \limits_{i\in \mathcal{I}} \tilde{\Lambda}_i \cup \bigcup \limits_{j\in \mathcal{J}} \tilde{\Lambda}_j$$ where $$\mathcal{I}=\{i\in \mathcal{X}: \tilde{\Lambda}_i \ \mbox{is a subhorseshoe and it connects with}\ \xi\ \mbox{before}\ t+\epsilon/2+\delta \}$$ and $$\mathcal{J}=\{j\in \mathcal{X}: \tilde{\Lambda}_j \ \mbox{is a subhorseshoe and it doesn't connect with}\ \xi\ \mbox{before}\ t+\epsilon/2+\delta \}.$$ By lemma [Lemma 8](#connection){reference-type="ref" reference="connection"}, given $j\in \mathcal{J}$, as $\tilde{\Lambda}_j\cup \xi \subset \Lambda_{t+\epsilon/2}$, we cannot have at the same time the existence of two points $x\in W^u(\tilde{\Lambda}_j)\cap W^s(\xi)$ and $y\in W^u(\xi)\cap W^s(\tilde{\Lambda}_j)$ such that $\mathcal{O}(x) \cup \mathcal{O}(y) \subset \Lambda_{t+\epsilon/2+\delta/2}$. Without loss of generality, suppose that there is no $x\in W^u(\tilde{\Lambda}_j)\cap W^s(\xi)$ with $m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2$ (the argument for the other case is similar). We will show that this condition restricts the letters that may appear in the sequences determining the unstable Cantor set of $\tilde{\Lambda}_j$. Let us begin by fixing $R\in \mathbb{N}$ large enough such that $1/2^{R-1}<\delta/2$, and consider the set $\mathcal{C}_{2R+1}=\{I(a_0;a_1, \dots, a_{2R+1}): I(a_0;a_1, \dots, a_{2R+1})\cap K^u(\tilde{\Lambda}_j)\neq \emptyset\}$; clearly $\mathcal{C}_{2R+1}$ is a covering of $K^u(\tilde{\Lambda}_j)$. We will give a mechanism to construct coverings $\mathcal{C}_k$, with $k\geq 2R+1$, that can be used to *efficiently* cover $K^u(\tilde{\Lambda}_j)$ as $k$ goes to infinity. Indeed, suppose first that for some $k\geq 2R+1$ and some $I(a_0; a_1, \dots,a_k)\in \mathcal{C}_k$, the word $(a_0, a_1, \dots,a_k)$ has continuations with forced first letter. 
That is, every $\alpha=(\alpha_n)_{n\in \mathbb{Z}}\in \Pi(\tilde{\Lambda}_j)$ with $\alpha_0,\alpha_1, \dots, \alpha_k=a_0,a_1, \dots, a_k$ satisfies $\alpha_{k+1}=a_{k+1}$ for some fixed $a_{k+1}$. In this case, we refine the original cover $\mathcal{C}_k$ by replacing the interval $I(a_0;a_1, \dots, a_k)$ with the interval $I(a_0;a_1, \dots, a_k, a_{k+1})$. On the other hand, suppose that $(a_0, a_1, \dots,a_k)$ has two continuations with different initial letters, say $\gamma_{k+1}=(a_{k+1}, a_{k+2},\dots)$ and $\beta_{k+1}=(a^*_{k+1}, a^*_{k+2},\dots)$ with $a_{k+1} \neq a^*_{k+1}$. Take $\alpha=(\alpha_n)_{n\in \mathbb{Z}}\in \Pi(\tilde{\Lambda}_j)$ and $\tilde{\alpha}=(\tilde{\alpha}_n)_{n\in \mathbb{Z}}\in \Pi(\tilde{\Lambda}_j)$ such that $\alpha=(\dots,\alpha_{-2},\alpha_{-1};a_0, a_1, \dots,a_k,\gamma_{k+1})$ and $\tilde{\alpha}=(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};a_0, a_1, \dots,a_k,\beta_{k+1})$. If $a_{k+1}=i$, then necessarily either $a^*_{k+1}=i+1$ or $a^*_{k+1}=i-1$, because if, for example, $a_{k+1}+1<a^*_{k+1}$, we can set $s=a_{k+1}+1$ and therefore, by lemma [Lemma 11](#lemao){reference-type="ref" reference="lemao"}, as $[0;\beta_{k+1}]<[0;s,\overline{1}]<[0;\gamma_{k+1}]$, we would have for all $j\leq k$ $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},s,\overline{1}))&\leq&\max \{m(\dots, \alpha_{-1};\alpha_{0},\dots ,\alpha_{k},\gamma_{k+1}),\\&& m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})\}+1/2^{R-1} \\ &<& t+\epsilon/2+\delta/2.\end{aligned}$$ For $j=k+1$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},s,\overline{1}))&=& [0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+s+[0;\overline{1}]\\&<& [0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+s+1\\&<&[0;\tilde{\alpha}_{k},\dots, 
\tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+a^*_{k+1}\\ && +[0;a^*_{k+2},a^*_{k+3}, \dots]\\&=& \lambda_0(\sigma^{k+1}(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})) \\&\leq &m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})\\&\leq&t+\epsilon/2 \end{aligned}$$ and for $j> k+1$, clearly $$\lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},s,\overline{1}))< 3 < t+\epsilon/2.$$ Then taking $x=\Pi^{-1}((\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},s,\overline{1}))$ one would have $$x\in W^u(\tilde{\Lambda}_j)\cap W^s(\xi)\ \mbox{and}\ m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2$$ which is a contradiction. The case $a_{k+1}-1>a^*_{k+1}$ is quite similar. Now, suppose $a_{k+1}=i$ and $a^*_{k+1}=i+1$. We affirm that $a_{k+2}=1$, because otherwise, by lemma [Lemma 11](#lemao){reference-type="ref" reference="lemao"}, as $[0;\beta_{k+1}]<[0;i,\overline{1}]<[0;\gamma_{k+1}]$, we would again have for all $j\leq k$ $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,\overline{1}))< t+\epsilon/2+\delta/2.\end{aligned}$$ For $j> k+1$, once more, $$\lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,\overline{1}))< t+\epsilon/2$$ and for $j=k+1$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,\overline{1}))&=& [0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+i+[0;\overline{1}]\\&<& [0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+i+1\\&<&[0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+a^*_{k+1}\\ && +[0;a^*_{k+2},a^*_{k+3}, \dots]\\&=& 
\lambda_0(\sigma^{k+1}(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})) \\&\leq &m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})\\&\leq&t+\epsilon/2. \end{aligned}$$ Then for $x=\Pi^{-1}((\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,\overline{1}))$ one would get the contradiction $$x\in W^u(\tilde{\Lambda}_j)\cap W^s(\xi)\ \mbox{and}\ m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2.$$ Even more, we have $a_{k+3}\in \{m+1,m+2,m+3\}$ because if $a_{k+3}=\ell\leq m$, then $[0;\beta_{k+1}]<[0;i,1,\ell+1,\overline{1}]<[0;\gamma_{k+1}]$ and by lemma [Lemma 11](#lemao){reference-type="ref" reference="lemao"} we would have for all $j\leq k$ $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,1,\ell+1, \overline{1}))<t+\epsilon/2+\delta/2.\end{aligned}$$ For $j=k+1$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,1,\ell+1,\overline{1}))&=&[0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+i+\\&& [0;1,\ell+1,\overline{1}]\\&<&[0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+a^*_{k+1}\\ && +[0;a^*_{k+2},a^*_{k+3}, \dots]\\&=& \lambda_0(\sigma^{k+1}(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})) \\&\leq &m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})\\&\leq&t+\epsilon/2 \end{aligned}$$ and for $j> k+1$, $$\lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,1,\ell+1,\overline{1}))< m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}]< t+\epsilon/2$$ then taking $x=\Pi^{-1}((\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i,1,\ell+1,\overline{1}))$ one would have $$x\in 
W^u(\tilde{\Lambda}_j)\cap W^s(\xi)\ \mbox{and}\ m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2$$ which is again a contradiction. In a similar way, we must have $a^*_{k+2}\in \{m+1,m+2,m+3\}$, because if $a^*_{k+2}=\ell\leq m$, then $[0;\beta_{k+1}]<[0;i+1,\ell+1,\overline{1}]<[0;\gamma_{k+1}]$ and, as before, we would have for all $j\leq k$ $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i+1,\ell+1, \overline{1}))<t+\epsilon/2+\delta/2,\end{aligned}$$ for $j=k+1$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i+1,\ell+1,\overline{1}))&=&[0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+i+1+\\&& [0;\ell+1,\overline{1}]\\&<&[0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+a^*_{k+1}\\ && +[0;a^*_{k+2},a^*_{k+3}, \dots]\\&=& \lambda_0(\sigma^{k+1}(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})) \\&\leq &m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})\\&\leq&t+\epsilon/2 \end{aligned}$$ and for $j> k+1$, $$\lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},i+1,\ell+1,\overline{1}))< m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}]< t+\epsilon/2$$ which yields a contradiction again. In particular, in this case, we can refine the cover $\mathcal{C}_k$ by replacing the interval $I(a_0; a_1, \dots,a_k)$ with the six intervals $I(a_0;a_1, \dots, a_k,i,1,m+1),I(a_0;a_1, \dots, a_k,i,1,m+2), I(a_0;a_1, \dots, a_k,i,1,m+3), I(a_0;a_1, \dots, a_k,i+1,m+1), I(a_0;a_1, \dots, a_k,i+1,m+2)\ \mbox{and}\ I(a_0;a_1, \dots, a_k,i+1,m+3)$ for one and only one $i=1,\dots, m+2.$ Observe that, in fact, some of the intervals considered in the last paragraph may not actually occur. 
For example, if $\eta=m+3$ then $t+\epsilon/2<m+3$; therefore the letter $m+3$ cannot appear in the kneading sequence of any point of $\tilde{\Lambda}_j$. But this will not affect our argument. Indeed, we affirm that this procedure doesn't increase the $0.49$-sum $H_{0.49}(\mathcal{C}_k)= \sum \limits_{I\in \mathcal{C}_k} \lvert I\rvert^{0.49}$ of the cover $\mathcal{C}_k$ of $K^u(\tilde{\Lambda}_j)$. That is, by [\[intervals\]](#intervals){reference-type="ref" reference="intervals"} we need to prove that $$\sum\limits_{j=m+1}^{m+3}\lvert I(a_1, \dots, a_k,i,1,j)\rvert^{0.49} + \sum\limits_{j=m+1}^{m+3} \lvert I(a_1, \dots, a_k,i+1,j)\rvert^{0.49} < \lvert I(a_1, \dots, a_k)\rvert^{0.49}$$ or $$\label{sum} \sum\limits_{j=m+1}^{m+3}\left(\frac{\lvert I(a_1, \dots, a_k,i,1,j)\rvert}{\lvert I(a_1, \dots, a_k)\rvert}\right)^{0.49}+ \sum\limits_{j=m+1}^{m+3}\left(\frac{\lvert I(a_1, \dots, a_k,i+1,j)\rvert}{\lvert I(a_1, \dots, a_k)\rvert}\right)^{0.49}<1$$ where $i=1,\dots, m+2.$ In this direction, we have the following lemmas. **Lemma 11**. *Given $a_0,a_1, \dots, a_n, a,b,c \in \{1, \dots, m+3 \}$ we have $$\frac{\lvert I(a_1,\dots ,a_n,a,b)\rvert}{\lvert I(a_1,\dots ,a_n)\rvert}=\frac{1+r}{(ab+1+br)(ab+a+1+(b+1)r)}$$ and $$\frac{\lvert I(a_1,\dots ,a_n,a,b,c)\rvert}{\lvert I(a_1,\dots ,a_n)\rvert}=\frac{1+r}{(abc+c+a+(bc+1)r)(abc+c+a+ab+1+(bc+b+1)r)}$$ where $r\in (0,1).$* *Proof.* Recall that the length of $I(b_1,\dots, b_m)$ is $$\lvert I(b_1,\dots, b_m)\rvert=\frac{1}{q_m(q_m+q_{m-1})},$$ where $q_s$ is the denominator of $[0;b_1,\dots, b_s]$. 
Recall also the recurrence formula $$q_{s+2}=b_{s+2}q_{s+1}+q_s.$$ Using it two and three times, respectively, we obtain $$\lvert I(a_1,\dots ,a_n,a,b)\rvert= \frac{1}{((ab+1)q_n+bq_{n-1})((ab+a+1)q_n+(b+1)q_{n-1})}$$ and $$\lvert I(a_1,\dots ,a_n,a,b,c)\rvert=\frac{1}{((abc+c+a)q_n+(bc+1)q_{n-1})((abc+c+a+ab+1)q_n+(bc+b+1)q_{n-1})}$$ so we conclude $$\begin{aligned} \frac{\lvert I(a_1,\dots ,a_n,a,b)\rvert}{\lvert I(a_1,\dots ,a_n)\rvert}&=&\frac{q_n(q_n+q_{n-1})}{((ab+1)q_n+bq_{n-1})((ab+a+1)q_n+(b+1)q_{n-1})}\\ &=&\frac{1+r}{(ab+1+br)(ab+a+1+(b+1)r)} \end{aligned}$$ and $$\begin{aligned} \frac{\lvert I(a_1,\dots ,a_n,a,b,c)\rvert}{\lvert I(a_1,\dots ,a_n)\rvert}&=&\frac{q_n(q_n+q_{n-1})}{((abc+c+a)q_n+(bc+1)q_{n-1})((abc+c+a+ab+1)q_n+(bc+b+1)q_{n-1})}\\ &=&\frac{1+r}{(abc+c+a+(bc+1)r)(abc+c+a+ab+1+(bc+b+1)r)}\end{aligned}$$ with $r=\frac{q_{n-1}}{q_n}\in (0,1)$. ◻ **Lemma 12**. *Fix $x,y,z,w>0$. Then $$\frac{d}{dr}\left( \frac{1+r}{(x+yr)(z+wr)}\right)= \frac{(x-y)(z-w)-yw(r+1)^2}{(ywr^2+(xw+yz)r+xz)^2}<\frac{(x-y)(z-w)-yw}{(ywr^2+(xw+yz)r+xz)^2}$$ for $r\geq 0$.* *Proof.* It is a straightforward computation. 
◻ Using the previous lemmas, the facts that $i\geq 1$, $m\geq1$ and $r\in (0,1)$, and that for $j\in \{m+1,m+2,m+3\}$ $$(2j+1-(j+1))(2j+3-(j+2))-(j+1)(j+2)=j(j-1)-(j+1)(j+2)<0,$$ one has for the first sum $$\begin{aligned} \sum\limits_{j=m+1}^{m+3}\left(\frac{\lvert I(a_1, \dots, a_k,i,1,j)\rvert}{\lvert I(a_1, \dots, a_k)\rvert}\right)^{0.49}&=& \sum\limits_{j=m+1}^{m+3} \left(\frac{1+r}{(ij+j+i+(j+1)r)(ij+j+2i+1+(j+2)r)}\right)^{0.49}\\ &\leq& \sum\limits_{j=m+1}^{m+3} \left(\frac{1+r}{(2j+1+(j+1)r)(2j+3+(j+2)r)}\right)^{0.49} \\ &<& \sum\limits_{j=m+1}^{m+3} \left(\frac{1}{(2j+1)(2j+3)}\right)^{0.49}\\&\leq& \left(\frac{1}{5\times7}\right)^{0.49}+\left(\frac{1}{7\times9}\right)^{0.49}+\left(\frac{1}{9\times11}\right)^{0.49}\\ &<& 0.412\end{aligned}$$ and for the second sum $$\begin{aligned} \sum\limits_{j=m+1}^{m+3}\left(\frac{\lvert I(a_1, \dots, a_k,i+1,j)\rvert}{\lvert I(a_1, \dots, a_k)\rvert}\right)^{0.49} &=& \sum\limits_{j=m+1}^{m+3} \left(\frac{1+r}{((i+1)j+1+jr)((i+1)j+i+2+(j+1)r)}\right)^{0.49} \\ &\leq& \sum\limits_{j=m+1}^{m+3} \left(\frac{1+r}{(2j+1+jr)(2j+3+(j+1)r)}\right)^{0.49}\\ &<& \sum\limits_{j=m+1}^{m+3} \left(\frac{2}{(2j+1)(2j+3)}\right)^{0.49} \\ &\leq& \left(\frac{2}{5\times7}\right)^{0.49}+\left(\frac{2}{7\times9}\right)^{0.49}+\left(\frac{2}{9\times11}\right)^{0.49}\\ &<& 0.579 \end{aligned}$$ This proves [\[sum\]](#sum){reference-type="ref" reference="sum"} and lets us conclude that $HD(K^u(\tilde{\Lambda}_j))\leq0.49$. Finally, as we are in the conservative setting, $$HD(\tilde{\Lambda}_j)=2HD(K^u(\tilde{\Lambda}_j))<0.99.$$ Fix $\delta=\epsilon/2$. 
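The numerical bounds just used can be double-checked directly. The script below (ours, not from the text) verifies them in the case $m=1$, i.e. $j\in\{2,3,4\}$, which gives the largest sums, and also samples $r\in(0,1)$ to confirm the crude bound $(1+r)/((2j+1+(j+1)r)(2j+3+(j+2)r))<1/((2j+1)(2j+3))$ used for the first sum.

```python
# Check the 0.49-sum bounds from the text in the worst case m = 1 (j = 2, 3, 4).
first = sum((1 / ((2 * j + 1) * (2 * j + 3))) ** 0.49 for j in (2, 3, 4))
second = sum((2 / ((2 * j + 1) * (2 * j + 3))) ** 0.49 for j in (2, 3, 4))
assert first < 0.412 and second < 0.579
assert first + second < 1  # the refinement does not increase the 0.49-sum

# Sample r in (0, 1): the exact ratio (taking i = 1) stays below its r = 0 value,
# as guaranteed by the negative derivative from Lemma 12.
for j in (2, 3, 4):
    for k in range(1, 100):
        r = k / 100
        exact = (1 + r) / ((2 * j + 1 + (j + 1) * r) * (2 * j + 3 + (j + 2) * r))
        assert exact < 1 / ((2 * j + 1) * (2 * j + 3))
```

Since the summands are decreasing in $j$, the case $m=1$ indeed dominates every $m\geq 1$.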
By definition, for $i\in \mathcal{I}$, $\tilde{\Lambda}_i$ connects with $\xi$ before $t+\epsilon$, so we can apply proposition 4 at most $\lvert\mathcal{I}\rvert-1$ times to see that there exists a subhorseshoe $\tilde{\Lambda}(t,\epsilon)\subset \Lambda$ and some $q(t,\epsilon)<t+\epsilon$ such that $$\bigcup \limits_{i\in \mathcal{I}} \tilde{\Lambda}_i\subset \tilde{\Lambda}(t,\epsilon)\subset \Lambda_{q(t,\epsilon)}.$$ Now, remember that any subhorseshoe $\tilde{\Lambda} \subset \Lambda$, being locally maximal, satisfies $$W^s(\tilde{\Lambda})= \bigcup \limits_{y\in \tilde{\Lambda}}W^s(y) \ \ \mbox{and} \ \ W^u(\tilde{\Lambda})= \bigcup \limits_{y\in \tilde{\Lambda}}W^u(y).$$ Then, for every $x\in \Lambda$ with $\omega(x)\subset \tilde{\Lambda}$, there exists $y\in \tilde{\Lambda}$ with $\lim \limits_{n\to\infty}d(f(\varphi^n(x)),f(\varphi^n(y)))=0$, and so $\ell_{\varphi,f}(x)=\ell_{\varphi,f}(y)$. Using this, we have $$\ell_{\varphi,f}(P(t,\epsilon))=\ell_{\varphi,f}(\tilde{P}(t,\epsilon))=\bigcup \limits_{i\in \mathcal{I}} \ell_{\varphi,f}(\tilde{\Lambda}_i)\cup\bigcup \limits_{j\in \mathcal{J}} \ell_{\varphi,f}(\tilde{\Lambda}_j).$$ On the other hand, $$HD(\bigcup \limits_{j\in \mathcal{J}} \ell_{\varphi,f}(\tilde{\Lambda}_j))= \max\limits_{j\in \mathcal{J}}HD(\ell_{\varphi,f}(\tilde{\Lambda}_j))\leq \max\limits_{j\in \mathcal{J}}HD(f(\tilde{\Lambda}_j))\leq \max\limits_{j\in \mathcal{J}}HD(\tilde{\Lambda}_j)<1$$ so $int(\bigcup \limits_{j\in \mathcal{J}} \ell_{\varphi,f}(\tilde{\Lambda}_j))=\emptyset.$ Also, as was proved in lemma 5.2 of [@GC1], for $\tilde{t}\leq \max f|_{\Lambda}$, $$L\cap (-\infty,\tilde{t})= \bigcup \limits_{s<\tilde{t}} \ell_{\varphi,f}(\Lambda_s),$$ therefore $$\begin{aligned} t\in int(m_{\varphi,f}(\Lambda_{t+\epsilon/4}))&=&int(M\cap (-\infty,t+\epsilon/4))=int(L\cap (-\infty,t+\epsilon/4))\\&=&int(\bigcup \limits_{s<t+\epsilon/4} \ell_{\varphi,f}(\Lambda_s))= int(\ell_{\varphi,f}(\Lambda_{t+\epsilon/4}))\\ &\subset& 
int(\ell_{\varphi,f}(P(t,\epsilon))) \end{aligned}$$ and then we must have $$t<\sup (\bigcup \limits_{i\in \mathcal{I}} \ell_{\varphi,f}(\tilde{\Lambda}_i))\leq \sup(\ell_{\varphi,f}(\tilde{\Lambda}(t,\epsilon)))\leq \sup f(\tilde{\Lambda}(t,\epsilon))=\max f|_{\tilde{\Lambda}(t,\epsilon)}.$$ We have thus proved the following result. **Proposition 13**. *Given $t\in(m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],\eta)\cap T$ and $\epsilon<\eta-t$, there exist some $q(t,\epsilon)<t+\epsilon$ and a subhorseshoe $\tilde{\Lambda}(t,\epsilon)\subset \Lambda_{q(t,\epsilon)}$ with $HD(\tilde{\Lambda}(t,\epsilon))\geq 1$ such that* 1. *$HD(\Lambda_t)\leq HD(\tilde{\Lambda}(t,\epsilon))$;* 2. *for every subhorseshoe $\tilde{\Lambda} \subset \Lambda_t$ with $HD(\tilde{\Lambda})\geq0.99$ one has $\tilde{\Lambda}\subset \tilde{\Lambda}(t,\epsilon)$;* 3. *$t<\max f|_{\tilde{\Lambda}(t,\epsilon)}.$* ## Putting unstable Cantor sets into $k^{-1}(\eta)$ Let $\eta\in (m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],m+4)\cap \overline{T}$ be accumulated from the left by points of $T$, and let $\epsilon>0$ be such that $m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}]<\eta-\epsilon$. Take any strictly increasing sequence $\{t_n\}_{n\geq 0}$ of points of $T$ such that $t_0>\eta-\epsilon$ and $\lim\limits_{n\rightarrow\infty}t_n=\eta$. Proposition [Proposition 13](#lemmaiterativo){reference-type="ref" reference="lemmaiterativo"} lets us find a sequence of subhorseshoes $\{\Lambda^n\}_{n\geq0}=\{\tilde{\Lambda}(t_n,(t_{n+1}-t_n)/2)\}_{n\geq0}$ with the following properties: 1. $HD(\Lambda_{t_n})\leq HD(\Lambda^n)$ 2. $\Lambda^n\subset \Lambda^{n+1}$ 3. $t_n<\max f|_{\Lambda^n}<t_{n+1}.$ Now, we will construct a local homeomorphism $\theta:K^u(\Lambda^0) \rightarrow k^{-1}(\eta)$ with local Hölder inverse and exponent arbitrarily close to one. 
Given $n\geq 0$, since $\Lambda^n$ is a mixing horseshoe (because $\xi \subset \Lambda^n$), we can find some $c(n)\in \mathbb{N}$ such that, given two letters $a$ and $b$ in the alphabet $\mathcal{A}(\Lambda^n)$ of $\Lambda^n$, there exists some finite word $(a_1,\dots ,a_{c(n)})$ of size $c(n)$ (in the letters of $\mathcal{A}(\Lambda^n)$) such that $(a,a_1,\dots ,a_{c(n)},b)$ is admissible; for each pair $a$ and $b$, we fix one such word $(a_1,\dots ,a_{c(n)})$ once and for all. Also, as $\Lambda^n$ is a subhorseshoe of $\Lambda$, it is the invariant set in some rectangles determined by a set of words of size $2p(n)+1$ for some $p(n)\in \mathbb{N}$. Now, take $n\geq 1$ and consider the kneading sequence $\{x^n_r\}_{r\in \mathbb{Z}}$ of some point $x_n\in \Lambda^n$ such that $f(x_n)=\max f|_{\Lambda^n}$. Also take $r(n)>p(n+1)+p(n)+p(n-1)$ big enough such that for any $\alpha=(a_0, a_{1}, \cdots, a_{2r(n)})\in \{1,2, \cdots, m+3\}^{2r(n)+1}$ and $x,y\in R(\alpha;r(n))$ we have $\lvert f(x)-f(y)\rvert<\min \{(t_{n+1}-\max f|_{\Lambda^n})/2,(\max f|_{\Lambda^n}-t_n)/2\}.$ Finally, set $s(n)=\sum_{k=1}^{n}2r(k)+2c(k)+1$. Given $a=[a_0;a_1,a_2,\dots ]\in K^u(\Lambda^0)$, for $n\geq 1$ set $a^{(n)}:=(a_{s(n)!+1},\dots, a_{s(n+1)!})$, so one has $$a=[a_0;a_1,a_2,\dots ]=[a_0;a_1,\dots,a_{s(1)!},a^{(1)}, a^{(2)},\dots ,a^{(n)}, \dots].$$ Define then $$\theta(a):=[a_0;a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},h_{2},a^{(2)},\dots ,h_{n},a^{(n)},h_{n+1}, \dots]$$ where $$h_n=(c_1^n, x^n_{-r(n)},\dots ,x^n_{-1},x^n_0,x^n_1,\dots,x^n_{r(n)},c_2^n)$$ and $c_1^n$ and $c_2^n$ are words in the original alphabet $\mathcal{A}=\{1,\dots,m+3 \}$ with $\lvert c_1^n\rvert=\lvert c_2^n\rvert=c(n)$ such that $(a_0,a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},h_{2},\dots ,h_{n},a^{(n)})$ appears in the kneading sequence of some point of $\Lambda^n$. 
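The splicing that defines $\theta$ can be sketched schematically: connecting blocks $h_n$ are inserted at the factorial positions $s(n)!$, so that agreement of the first $s$ digits of $a$ propagates to agreement of the first $s+s(n)$ digits of $\theta(a)$. A toy version (our own index bookkeeping, ignoring the exact $+1$ offsets of the text and using placeholder blocks instead of real kneading words):

```python
from math import factorial

def splice(a, h_words):
    """Toy version of theta: insert the block h_n before each block a^(n)
    that fits, cutting a at the factorial positions s(1)!, s(2)!, ...
    where s(n) = |h_1| + ... + |h_n| (offsets simplified)."""
    s = [0]
    for h in h_words:
        s.append(s[-1] + len(h))
    out = list(a[:factorial(s[1])])      # initial block of a
    prev = factorial(s[1])
    for n in range(1, len(h_words)):
        out += list(h_words[n - 1])      # connecting block h_n
        nxt = factorial(s[n + 1])
        out += list(a[prev:nxt])         # block a^(n)
        prev = nxt
    return out

# With |h_1| = 2 and |h_2| = 1: s = (0, 2, 3), so h_1 is inserted after
# digit 2! = 2 and the next block of a runs up to digit 3! = 6.
assert splice(list(range(10)), [["A", "A"], ["B"]]) == [0, 1, "A", "A", 2, 3, 4, 5]
```

The rapidly growing factorial gaps are what make the inserted blocks a vanishing proportion of any long prefix, which is the source of the Hölder exponent $1-\rho$ below.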
It is easy to see, using the construction of $\theta$, that for every $a\in K^u(\Lambda^0)$, $k(\theta(a))=\eta$, so we have defined the map $$\begin{aligned} \theta:K^u(\Lambda^0) &\rightarrow& k^{-1}(\eta) \\ a &\rightarrow& \theta(a)\end{aligned}$$ which is clearly continuous and injective. On the other hand, given any small $\rho>0$, because of the growth of the factorial map, we have $\lvert\tilde{a}_1-\tilde{a}_2\rvert=O(\lvert\theta(\tilde{a}_1)-\theta(\tilde{a}_2)\rvert^{1-\rho})$ for any $\tilde{a}_1, \tilde{a}_2\in K^u(\Lambda^0)$ with $\lvert\tilde{a}_1-\tilde{a}_2\rvert$ small. Indeed, if $\tilde{a}_1$ and $\tilde{a}_2$ are such that the letters in their continued fraction expressions are equal up to the $s$-th letter, and $n \in \mathbb{N}$ is maximal such that $s(n)!<s$, then, because $\lvert h_k\rvert=2r(k)+2c(k)+1$, $\theta(\tilde{a}_1)$ and $\theta(\tilde{a}_2)$ coincide exactly in their first $$s+\sum_{k=1}^{n}2r(k)+2c(k)+1=s+s(n)$$ letters. Now, let $\alpha$ and $\beta$ be finite words of positive integers bounded by $N\in \mathbb{N}$, and let $I_N(\alpha)$ denote the convex hull of $I(\alpha)\cap C_N$. 
The so-called bounded distortion property lets us conclude that, for some constant $C_N>1$, $$C_N^{-1}\lvert I_N(\alpha)\rvert\lvert I_N(\beta)\rvert\leq \lvert I_N(\alpha\beta)\rvert\leq C_N\lvert I_N(\alpha)\rvert\lvert I_N(\beta)\rvert$$ and also that, for some positive constants $\lambda_1,\lambda_2<1$, one has $$C_N^{-1} \lambda_1^{\lvert\alpha\rvert}\leq\lvert I_N(\alpha)\rvert\leq C_N \lambda_2^{\lvert\alpha\rvert}.$$ So, if $s$ is big enough that $s(n)/(s+s(n))<\frac{\rho \log \lambda_2}{\log \lambda_1-4\log C_{m+3}}$, using lemma [Lemma 3](#gugu1){reference-type="ref" reference="gugu1"}, we have for some constant $\tilde{C}(m+3)$ $$\begin{aligned} \lvert\theta(\tilde{a}_1)-\theta(\tilde{a}_2)\rvert^{1-\rho}&\geq&\tilde{C}(m+3)^{1-\rho}\lvert I(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots ,a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{1-\rho}\\ &\geq& \tilde{C}(m+3)^{1-\rho}\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots ,a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{1-\rho}\\ &=& \tilde{C}(m+3)^{1-\rho}\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots ,a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert\\ && \lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots ,a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{-\rho} \\ &\geq& \frac{1}{C_{m+3}^{2n}}\tilde{C}(m+3)^{1-\rho}\lvert I_{m+3}(a_1,\dots,a_{s(1)!})\rvert\lvert I_{m+3}(a^{(1)})\rvert\dots \lvert I_{m+3}(a^{(n-1)})\rvert\\ &&\lvert I_{m+3}(a_{s(n)!+1},\dots, a_s)\rvert\lvert I_{m+3}(h_1)\rvert\dots \lvert I_{m+3}(h_n)\rvert\\&& \lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots ,a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{-\rho} \\ &\geq&\frac{1}{C_{m+3}^{3n}}\tilde{C}(m+3)^{1-\rho}\lvert I_{m+3}(a_1,a_2,\dots,a_s)\rvert\lvert I_{m+3}(h_1)\rvert\dots \lvert I_{m+3}(h_n)\rvert \\ &&\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{-\rho}\\ &\geq&\tilde{C}(m+3)^{1-\rho}\lvert I_{m+3}(a_1,a_2,\dots,a_s)\rvert e^{-4n\log C_{m+3}}e^{s(n)\log 
\lambda_1}\\ &&\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{-\rho}\\ &\geq&\tilde{C}(m+3)^{1-\rho}\lvert I_{m+3}(a_1,a_2,\dots,a_s)\rvert e^{(\log \lambda_1-4\log C_{m+3})s(n)}\\ &&\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{-\rho}\end{aligned}$$ $$\begin{aligned} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &\geq&\tilde{C}(m+3)^{1-\rho}\lvert I_{m+3}(a_1,a_2,\dots,a_s)\rvert e^{\rho (s+s(n))\log \lambda_2}\\ &&\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{-\rho}\\ &\geq&\frac{\tilde{C}(m+3)^{1-\rho}}{C_{m+3}^{\rho}}\lvert I_{m+3}(a_1,a_2,\dots,a_s)\rvert\\ &&\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots ,a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{\rho}\\ &&\lvert I_{m+3}(a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},\dots a^{(n-1)},h_{n},a_{s(n)!+1},\dots, a_s)\rvert^{-\rho}\\ &\geq&\frac{\tilde{C}(m+3)^{1-\rho}}{C_{m+3}^{\rho}}\lvert\tilde{a}_1-\tilde{a}_2\rvert.\end{aligned}$$ Therefore the map $\theta^{-1}:\theta(K^u(\Lambda^0)) \rightarrow K^u(\Lambda^0)$ is locally a Hölder map with exponent $1-\rho$, and then $$\begin{aligned} HD(K^u(\Lambda^0))=HD(\theta^{-1}(\theta(K^u(\Lambda^0))))&\leq& 1/(1-\rho)HD(\theta(K^u(\Lambda^0)))\\ &\leq& 1/(1-\rho)HD(k^{-1}(\eta)).\end{aligned}$$ Letting $\rho$ go to zero, we obtain 
$$HD(K^u(\Lambda^0))\leq HD(k^{-1}(\eta)).$$ Now, as we indicated before, for $s\leq \max f|_{\Lambda}$ one has $$HD(k^{-1}(-\infty,s])=\frac{1}{2}HD(\Lambda_{s}),$$ therefore $$\begin{aligned} HD(k^{-1}(-\infty,\eta-\epsilon])=\frac{1}{2}HD(\Lambda_{\eta-\epsilon})&\leq&\frac{1}{2}HD(\Lambda_{t_0}) \leq\frac{1}{2}HD(\Lambda^0)\\ &=& HD(K^u(\Lambda^0))\leq HD(k^{-1}(\eta)). \end{aligned}$$ Letting $\epsilon$ tend to zero, we have $$HD(k^{-1}(-\infty,\eta])\leq HD(k^{-1}(\eta))$$ and, as the other inequality clearly holds, the first part of the result is proven for $\eta\in (m+1+ [0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],m+4)\cap \overline{T}$ which is accumulated from the left by points of $T$. For the second part of the theorem, we need the following lemma, whose proof is essentially the same as that of lemma 2.5 of [@LM]. **Lemma 14**. *Given $(K, \mathcal{P}, \psi)$ a $C^{\alpha}$-regular Cantor set, if $\mathcal{P}^{'} \neq \mathcal{P}$ is a finite subcollection of $\mathcal{P}$ that is also a Markov partition of $\psi$, then the Cantor set determined by $\psi$ and $\mathcal{P}^{'}$, $$\tilde{K}=\bigcap \limits_{n\geq 0}\psi^{-n}\left( \bigcup \limits_{I\in \mathcal{P}^{'}}I \right),$$ satisfies $HD(\tilde{K})<HD(K).$* **Corollary 15**. *Let $\Lambda$ be a mixing horseshoe associated with a $C^2$-diffeomorphism $\varphi:S\rightarrow S$ of some surface $S$. Then for any proper mixing subhorseshoe $\tilde{\Lambda}\subset \Lambda$ $$HD(\tilde{\Lambda})<HD(\Lambda).$$* *Proof.* Refine the original Markov partition $\mathcal{P}$ of $\Lambda$ in such a way that some $\mathcal{P}^{'} \subset \mathcal{P}$, $\mathcal{P}^{'} \neq \mathcal{P}$, is a Markov partition for $\tilde{\Lambda}$. 
Use lemma [Lemma 14](#perdadimen){reference-type="ref" reference="perdadimen"} with the maps $\psi_s$ and $\psi_u$ that define the stable and unstable Cantor sets, in order to obtain $$HD(\tilde{\Lambda})=HD(K^s(\tilde{\Lambda}))+HD(K^u(\tilde{\Lambda}))<HD(K^s(\Lambda))+HD(K^u(\Lambda))=HD(\Lambda).$$ ◻ Given any $t<\eta$, take $n\in \mathbb{N}$ big enough such that $t<t_n$. Now, as $\max f|_{\Lambda^n}<t_{n+1}$ and $t_{n+1}<\max f|_{\Lambda^{n+1}}$, $\Lambda^n$ is a proper subhorseshoe of $\Lambda^{n+1}$; therefore $$\begin{aligned} HD(k^{-1}(-\infty,t])= \frac{1}{2}HD(\Lambda_t) &\leq& \frac{1}{2}HD(\Lambda^n)<\frac{1}{2}HD(\Lambda^{n+1})\\ &\leq& \frac{1}{2}HD(\Lambda_{t_{n+2}})\leq \frac{1}{2}HD(\Lambda_{\eta}) \\&=&HD(k^{-1}(-\infty,\eta]). \end{aligned}$$ This shows that the map $t\mapsto HD(k^{-1}(-\infty,t])$ is strictly monotone. As $m\geq 1$ was arbitrary, we have the result for $\eta \in (2+ [0;\overline{1}]+[0;1,3,\overline{1,4}],\infty)\cap \overline{T}=(3.4109...,\infty)\cap \overline{T}$ which is accumulated from the left by points of $T$. For $\eta \in (t^*_1, 3.4109...]\cap \overline{T}$ accumulated from the left by points of $T$, consider the horseshoe $\Lambda=\Lambda(2)$ (note that $\max f|_{\Lambda(2)}=\sqrt{12}>3.4109...$). As before, given $t \in (t^*_1,\eta)\cap T$, $0<\epsilon<\eta-t$ and $\delta>0$, we consider the set $$\tilde{P}(t,\epsilon)=\bigcup\limits_{\substack{x\in \mathcal{X}: \ \tilde{\Lambda}_x \ is\\ subhorseshoe }}\tilde{\Lambda}_x= \bigcup \limits_{i\in \mathcal{I}} \tilde{\Lambda}_i \cup \bigcup \limits_{j\in \mathcal{J}} \tilde{\Lambda}_j$$ where, for $i\in \mathcal{I}$, $\tilde{\Lambda}_i$ connects with $\xi$ before $t+\epsilon/2+\delta$ and, for $j\in \mathcal{J}$, $\tilde{\Lambda}_j$ doesn't connect with $\xi$ before $t+\epsilon/2+\delta$. One more time, given $j\in \mathcal{J}$, we will suppose that there is no $x\in W^u(\tilde{\Lambda}_j)\cap W^s(\xi)$ with $m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2$. 
Following the above procedure, given $k\in \mathbb{N}$ large enough, we construct the covers $\mathcal{C}_k$ of $K^u(\tilde{\Lambda}_j)$ in such a way that, given $I(a_0; a_1, \dots,a_k)\in \mathcal{C}_k$, if $(a_0, a_1, \dots,a_k)$ has continuations with forced first letter $a_{k+1}$, we replace the interval $I(a_0;a_1, \dots, a_k)$ with the interval $I(a_0;a_1, \dots, a_k, a_{k+1})$. On the other hand, suppose that $(a_0, a_1, \dots,a_k)$ has two continuations with different initial letters, say $\gamma_{k+1}=(1, a_{k+2},\dots)$ and $\beta_{k+1}=(2, a^*_{k+2},\dots)$. Take $\alpha=(\alpha_n)_{n\in \mathbb{Z}}\in \Pi(\tilde{\Lambda}_j)$ and $\tilde{\alpha}=(\tilde{\alpha}_n)_{n\in \mathbb{Z}}\in \Pi(\tilde{\Lambda}_j)$ such that $\alpha=(\dots,\alpha_{-2},\alpha_{-1};a_0, a_1, \dots,a_k,\gamma_{k+1})$ and $\tilde{\alpha}=(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};a_0, a_1, \dots,a_k,\beta_{k+1})$. We affirm that $a_{k+2}=1$, because otherwise, by lemma [Lemma 11](#lemao){reference-type="ref" reference="lemao"}, as $[0;\beta_{k+1}]<[0;\overline{1}]<[0;\gamma_{k+1}]$, we would have for all $j\leq k$ $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\overline{1}))< t+\epsilon/2+\delta/2,\end{aligned}$$ and for $j\geq k+1$ $$\lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\overline{1}))<3 < t+\epsilon/2.$$ Then for $x=\Pi^{-1}((\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\overline{1}))$ one would get the contradiction $$x\in W^u(\tilde{\Lambda}_j)\cap W^s(\xi)\ \mbox{and}\ m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2.$$ In a similar way, we must have $a^*_{k+2}=2$, because if $a^*_{k+2}=1$, then $[0;\beta_{k+1}]<[0;2,2,\overline{1}]<[0;\gamma_{k+1}]$ and by lemma [Lemma 11](#lemao){reference-type="ref" reference="lemao"} we would have for all $j\leq k$ $$\begin{aligned} 
\lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},2,2, \overline{1}))<t+\epsilon/2+\delta/2,\end{aligned}$$ for $j=k+1$, $$\begin{aligned} \lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},2,2,\overline{1}))&=&[0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+2+\\&& [0;2,\overline{1}]\\&<&[0;\tilde{\alpha}_{k},\dots, \tilde{\alpha}_{0},\tilde{\alpha}_{-1}, \dots ]+a^*_{k+1}\\ && +[0;a^*_{k+2},a^*_{k+3}, \dots]\\&=& \lambda_0(\sigma^{k+1}(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})) \\&\leq &m(\dots ,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},\beta_{k+1})\\&\leq&t+\epsilon/2 \end{aligned}$$ and for $j> k+1$, $$\lambda_0(\sigma^j(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},2,2,\overline{1}))< 2+ [0;\overline{1}]+[0;2,\overline{2,1}]=3.0406...< t+\epsilon/2$$ then taking $x=\Pi^{-1}((\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots ,\tilde{\alpha}_{k},2,2,\overline{1}))$ one would have $$x\in W^u(\tilde{\Lambda}_j)\cap W^s(\xi)\ \mbox{and}\ m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2$$ which is again a contradiction. In particular, in this case, we can refine the cover $\mathcal{C}_k$ by replacing the interval $I(a_0; a_1, \dots,a_k)$ with the intervals $I(a_0;a_1, \dots, a_k,1,1)$ and $I(a_0;a_1, \dots, a_k,2,2)$. 
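The periodic continued-fraction constant $2+[0;\overline{1}]+[0;2,\overline{2,1}]=3.0406\ldots$ used above can be checked numerically with a small helper (`periodic_cf` is ours, approximating an eventually periodic expansion by a long truncation):

```python
def periodic_cf(period, pre=(), depth=2000):
    """Approximate [0; pre, period, period, ...] by backward evaluation
    of a long truncation of the continued fraction."""
    digits = (list(pre) + list(period) * depth)[:depth]
    val = 0.0
    for d in reversed(digits):
        val = 1.0 / (d + val)
    return val

# 2 + [0; 1,1,1,...] + [0; 2, 2,1,2,1,...]
c = 2 + periodic_cf((1,)) + periodic_cf((2, 1), pre=(2,))
assert abs(c - 3.0406) < 1e-3
assert c < 12 ** 0.5  # stays below max f|_{Lambda(2)} = sqrt(12)
```

Exact closed forms are also available here ($[0;\overline{1}]=(\sqrt{5}-1)/2$ and $[0;\overline{2,1}]=(\sqrt{3}-1)/2$), but the truncation converges quadratically and suffices for a sanity check.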
By Lemma [Lemma 11](#lemao){reference-type="ref" reference="lemao"} we have the inequality $$\begin{aligned} && \left(\frac{\lvert I(a_1, \dots, a_k,1,1)\rvert}{\lvert I(a_1, \dots, a_k)\rvert}\right)^{0.49}+\left(\frac{\lvert I(a_1, \dots, a_k,2,2)\rvert}{\lvert I(a_1, \dots, a_k)\rvert}\right)^{0.49} = \\ && \left(\frac{1+r}{(2+r)(3+2r)}\right) ^{0.49}+ \left(\frac{1+r}{(5+2r)(7+3r)}\right) ^{0.49}< \\ && \left(\frac{2}{2\times 3}\right) ^{0.49}+ \left(\frac{2}{5\times 7}\right) ^{0.49} < 0.9, \\\end{aligned}$$ which lets us conclude, again, that $HD(\tilde{\Lambda}_j)<0.99$. The rest of the proof follows the same lines as the previous one. Finally, if $\eta \in \overline{T}$ is accumulated from the right by points of $T$, as before we can consider (depending on the region to which $\eta$ belongs) some horseshoe $\Lambda=\Lambda(\eta)$. Take any strictly decreasing sequence $\{t_n\}_{n\geq 1}$ of points of $T$ such that $\lim\limits_{n\rightarrow\infty}t_n=\eta$ and $t_1<\max f|_{\Lambda}$, take $\epsilon>0$ small enough that $HD(\Lambda_{\eta-\epsilon})>0.99$, and take any $t_0\in(\eta-\epsilon,\eta)$. The techniques we have developed then allow us to construct a sequence $\{\Lambda^n\}_{n\geq0}$ of subhorseshoes of $\Lambda$ with the following properties:

1. $\max f|_{\Lambda^0}<\eta$

2. $\max f|_{\Lambda^1}<\max f|_{\Lambda}$

3. $t_{n+1}<\max f|_{\Lambda^{n+1}}<t_n$,  $\forall n\geq 1$

4. $HD(\Lambda_{t_n})\leq HD(\Lambda^n)$,  $\forall n\geq 0$

5.
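The final numerical inequality in the display above can be verified directly; the small script below (ours) also checks the $r$-dependent middle expression on a grid, under the assumption that the remainder $r$ ranges in $[0,1]$:

```python
# Verify (2/6)^0.49 + (2/35)^0.49 < 0.9, and that the r-dependent expression
# stays below this crude majorant for r on a grid in [0, 1] (our assumption on r).
def ratio_sum(r):
    a = (1 + r) / ((2 + r) * (3 + 2 * r))
    b = (1 + r) / ((5 + 2 * r) * (7 + 3 * r))
    return a ** 0.49 + b ** 0.49

crude = (2 / 6) ** 0.49 + (2 / 35) ** 0.49
print(round(crude, 4))  # ≈ 0.8297, indeed below 0.9
print(all(ratio_sum(i / 100) < crude for i in range(101)))
```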
$\Lambda^0\subset\Lambda^{n+1}\subset \Lambda^n$,   $\forall n\geq 1.$

Therefore, we can define a map $$\begin{aligned} \theta:K^u(\Lambda^0) &\rightarrow& k^{-1}(\eta) \\ a &\mapsto& \theta(a)\end{aligned}$$ given by $$\theta(a)=[a_0;a_1,\dots,a_{s(1)!},h_{1}, a^{(1)},h_{2},a^{(2)},\dots ,h_{n},a^{(n)},h_{n+1}, \dots]$$ where $$a=[a_0;a_1,a_2,\dots ]=[a_0;a_1,\dots,a_{s(1)!},a^{(1)}, a^{(2)},\dots ,a^{(n)}, \dots]$$ and the sequences $\{s(n)\}_{n\geq1}$ and $\{h_n\}_{n\geq1}$ are defined as before and are such that $(a^{(n)},h_{n+1},a^{(n+1)},h_{n+2}, \dots)$ appears in the kneading sequence of some point of $\Lambda^{n+1}$. Using the construction of $\theta$, it is easy to see that $k(\theta(a))=\eta$ for every $a\in K^u(\Lambda^0)$ and, arguing as before, that $\theta$ is a local homeomorphism with local Hölder inverse whose exponent is arbitrarily close to one. This lets us show that $HD(k^{-1}(-\infty,\eta])=HD(k^{-1}(\eta)).$ For the second part, Corollary [Corollary 15](#perdadimen2){reference-type="ref" reference="perdadimen2"} lets us conclude, once more, that $HD(\Lambda^{n+1})<HD(\Lambda^n)$ for $n\geq 1$, and then that $HD(k^{-1}(-\infty,\eta])<HD(k^{-1}(-\infty,t])$ for all $t>\eta$, as we wanted to show.
--- abstract: | We consider the so-called field-road diffusion model in a bounded domain, consisting of two parabolic PDEs posed on sets of different dimensions (a *field* and a *road* in a population dynamics context) and coupled through exchange terms on the road, which makes its analysis quite involved. We propose a TPFA finite volume scheme. In both the continuous and the discrete settings, we prove the exponential decay of an entropy, and thus the long time convergence to the stationary state selected by the total mass of the initial data. To deal with the problem of different dimensions, we artificially "thicken" the road and, then, establish a rather unconventional Poincaré-Wirtinger inequality. Numerical simulations confirm and complete the analysis, and raise new issues.\ \ bibliography: - biblio.bib title: "**Long time behavior of the field-road diffusion model: an entropy method and a finite volume scheme**" --- **Matthieu Alfaro[^1] and Claire Chainais-Hillairet[^2]** # Introduction {#s:intro} Phenomena of spatial spread are highly relevant to understanding biological invasions, spreads of emergent diseases, as well as spatial shifts in distributions in the context of climate change. Let us refer, among many others, to the seminal books by Shigesada and Kawasaki [@Shi-Kaw-97], by Murray [@Murray1; @Murray2]. More recently, there has been a growing recognition of the importance of *fast diffusion channels* in biological invasions: for instance, an accidental transportation via human activities of some individuals towards northern and eastern France may be the cause of the accelerated propagation of the pine processionary moth [@Rob-et-al-12]. In Canada, GPS data revealed that wolves travel faster along seismic lines (i.e. narrow strips cleared for energy exploration), thus increasing their chances of meeting prey [@MacKen-et-al-12]. It is also acknowledged that fast diffusion channels (roads, airlines, etc.)
play a central role in the propagation of epidemics. As is well known, the spread of the black plague, which killed about a third of the European population in the 14th century, was favoured by the trade routes, especially the Silk Road, see [@PesteNoireRoutesSoie]. More recently, evidence of the radiation of the COVID epidemic along highways and transportation infrastructures was found [@Gat-et-al-20]. The so-called *field-road* model was introduced by Berestycki, Roquejoffre and Rossi [@Ber-Roq-Ros-13-1] in order to describe such spread of diseases or invasive species in the presence of networks with fast propagation. It is set on an unbounded domain. We will recall it hereafter and review the main established mathematical results. The current work is devoted to the theoretical and numerical analysis of a *purely diffusive* field-road model set on a bounded domain. We focus on the analysis of its long time behavior. ## The continuous field-road diffusion model {#ss:continuous} The *field-road* model introduced by Berestycki, Roquejoffre and Rossi [@Ber-Roq-Ros-13-1] writes as $$\label{syst-BRR} \left\{ \begin{array}{ll} \partial_{t} v = d \Delta v + f(v), &\quad t>0, \; x \in \mathbb R^{N-1}, \; y>0, \vspace{4pt}\\ - d\, \partial_{y} v|_{y=0} = \mu u - \nu v|_{y=0}, &\quad t>0, \; x \in \mathbb R^{N-1}, \vspace{4pt}\\ \partial_{t} u = D \Delta u + \nu v|_{y=0} - \mu u, &\quad t>0, \; x \in \mathbb R^{N-1}. \end{array} \right.$$ The mathematical problem then amounts to describing survival and propagation in a non-standard physical space: the geographical domain consists of the half-space (the "field") $x\in \mathbb R^{N-1}$, $y>0$, bordered by the hyperplane (the "road") $x\in \mathbb R^{N-1}$, $y=0$. In the field, individuals diffuse with coefficient $d>0$ and their density is given by $v=v(t,x,y)$. In particular $\Delta v$ has to be understood as $\Delta_x v+\partial_{yy}v$.
On the road, individuals typically diffuse faster ($D>d$) and their density is given by $u=u(t,x)$. In particular $\Delta u$ has to be understood as $\Delta_x u$. The exchanges of population between the road and the field are described by the second equation in system [\[syst-BRR\]](#syst-BRR){reference-type="eqref" reference="syst-BRR"}, where $\mu>0$ and $\nu >0$. These boundary conditions, and the zeroth-order term on the road, link the field and the road equations and are the core of the model. In a series of works [@Ber-Roq-Ros-13-2; @Ber-Roq-Ros-13-1; @Ber-Roq-Ros-shape; @Ber-Roq-Ros-tw], Berestycki, Roquejoffre and Rossi studied the field-road system with $N=2$ and $f$ a Fisher-KPP nonlinearity. They shed light on an *acceleration phenomenon*: when $D>2d$, the road enhances the global diffusion and the spreading speed exceeds the standard Fisher-KPP invasion speed. This new feature has stimulated many works and, since then, many related problems taking into account heterogeneities, more complex geometries, nonlocal diffusions, etc. have been studied [@AC1; @AC2], [@Gil-Mon-Zho-15], [@PauthierLongRangeExchanges1; @PauthierLongRangeExchanges2; @PauthierLongRangeExchanges3], [@Tel-16], [@Ros-Tel-Val-17], [@Duc-18], [@BDR1; @Ber-Duc-Ros-20], [@Zha-21], [@Bog-Gil-Tel-21], [@Aff-22]. Very recently, the authors in [@Alf-Duc-Tre-23] considered the *purely diffusive* field-road system --- obtained by letting $f\equiv 0$ in [\[syst-BRR\]](#syst-BRR){reference-type="eqref" reference="syst-BRR"} --- as a starting point. They obtained an explicit expression for both the fundamental solution and the solution to the associated Cauchy problem, and a sharp (possibly up to a logarithmic term) decay rate of the $L^\infty$ norm of the solution. 
From now on, we consider the purely diffusive field-road model on a bounded domain, namely $\Omega\subset \mathbb R^N$ ($N\geq 2$) a bounded cylinder of the form $$\Omega=\omega \times (0,L), \quad \omega \text{ a bounded convex and open set of } \mathbb R^{N-1},\; L>0.$$ We still denote by $v=v(t,x,y)$ and $u=u(t,x)$ the densities of species respectively in the field and on the road. They are smooth solutions to the system $$\label{syst} \left\{ \begin{array}{ll} \partial_{t} v = d \Delta v, &\quad t>0, \; x \in \omega, \; y\in(0,L), \vspace{5pt}\\ - d\, \partial_{y} v|_{y=0}= \mu u - \nu v|_{y=0}, &\quad t>0, \; x \in \omega,\vspace{5pt}\\ \partial_{t} u = D \Delta u + \nu v|_{y=0} - \mu u, &\quad t>0, \; x \in \omega, \vspace{5pt}\\ \frac{\partial u}{\partial n'} = 0, &\quad t>0, \; x \in \partial \omega,\vspace{5pt}\\ \frac{\partial v}{\partial n} = 0, &\quad t>0, \; x \in \partial \omega, \;y\in(0,L), \text{ and } \; x\in \omega, \; y=L,\\ \end{array} \right.$$ supplemented with an initial condition $(v_0,u_0)\in L^\infty(\Omega)\times L^\infty(\omega)$. As in the classical model, $d$ and $D$ are positive diffusion coefficients, while $\mu$ and $\nu$ are positive transfer coefficients. For $u$ we impose the zero Neumann boundary conditions on the boundary $\partial \omega$ ($n'$ denotes the unit outward normal vector to $\partial \omega$). For $v$, we impose the zero Neumann boundary conditions on the lateral boundary $\partial \omega \times (0,L)$ and on the upper boundary $\omega \times \{L\}$ ($n$ denotes the unit outward normal vector to $\partial \Omega$). On the lower boundary $\omega \times \{0\}$, we impose the aforementioned boundary conditions linking $v$ and $u$, which is the essence of the model. As the system [\[syst\]](#syst){reference-type="eqref" reference="syst"} is made of two diffusive equations coupled through the transfer terms, we expect convergence towards a steady-state in long time. 
This convergence result comes from the dissipative structure of the model. Moreover, we aim at designing a numerical scheme for [\[syst\]](#syst){reference-type="eqref" reference="syst"} that preserves such a dissipative structure. ## The TPFA finite volume scheme {#ss:TPFA} In system [\[syst\]](#syst){reference-type="eqref" reference="syst"}, the diffusion processes on the road and in the field are obviously isotropic and homogeneous. Moreover, we can consider "nice" geometries for the road and the field, so that the construction of meshes for the domains is not a challenge (in many cases, Cartesian grids would be sufficient). Therefore, Two-Point Flux Approximation Finite Volume schemes seem well suited to the discretization of [\[syst\]](#syst){reference-type="eqref" reference="syst"}. We refer to the book by Eymard, Gallouët and Herbin [@Eym-Gal-Her-00], and to the references therein, for a detailed presentation of finite volume methods. In many different frameworks, these methods have proved to be well adapted to the preservation of the long time behavior of diffusive problems, see for instance [@Cha-Jun-Sch-16], [@Cha-Her-20]. In order to write a numerical scheme for the field-road model, we have to define two meshes, one for the field and one for the road, with a compatibility relation between them. This is a crucial point for correctly treating the exchanges between the field and the road. We emphasize that the design of the scheme is driven by the will to preserve the main features of the model (mass conservation, positivity of the densities, steady-states, long-time behavior, etc.). Let us first consider a mesh $\mathcal M_{\Omega}$ of the field $\Omega$ made of a family of control volumes $\mathcal T_\Omega$, a family of faces (or edges) $\mathcal E_\Omega$ and a family of points $\mathcal P_\Omega$, so that $\mathcal M_{\Omega}=(\mathcal T_{\Omega}, \mathcal E_{\Omega},\mathcal P_{\Omega})$.
The mesh of $\omega$ is also made of a family of control volumes, a family of edges and a family of points. It is denoted $\mathcal M_{\omega}=(\mathcal T_{\omega}, \mathcal E_{\omega},\mathcal P_{\omega})$. We use classical notations: - $K\in\mathcal T_{\Omega}$ for a control volume, $\sigma\in\mathcal E_{\Omega}$ for an edge, $x_K\in{\mathcal P}_{\Omega}$ for an interior point of $K$ (named as the center of $K$), - $K^*\in\mathcal T_{\omega}$ for a control volume, $\sigma^*\in\mathcal E_{\omega}$ for an edge (it can be a point when $N=2$), $x_{K^*}\in{\mathcal P}_{\omega}$ for an interior point of $K^*$. In $\mathcal T_{\Omega}$, we can distinguish the control volumes that have an edge on the road from the ones that are strictly included in the field, which writes $\mathcal T_{\Omega}= \mathcal T_{\Omega}^r\cup \mathcal T_{\Omega}^f$. For the edges of $\mathcal E_{\Omega}$ we can also distinguish the interior edges from the boundary edges, included in $\omega$ or included in $\partial \Omega\setminus\omega$ (considered as exterior edges), which writes $\mathcal E_{\Omega}=\mathcal E_{\Omega}^{\rm int}\cup \mathcal E_\Omega^r\cup \mathcal E_\Omega^{\rm ext}$. For an interior edge $\sigma \in \mathcal E_{\Omega}^{\rm int}$, we may write $\sigma =K|L$ as it is an edge between the control volumes $K$ and $L$. Similarly, we can split $\mathcal E_\omega$ into $\mathcal E_\omega= \mathcal E_\omega^{\rm int}\cup \mathcal E_\omega^{\rm ext}$ and denote each interior edge $\sigma^*\in \mathcal E_\omega^{\rm int}$ as $\sigma^*=K^*|L^*$. The main notations are illustrated on Figure [\[fig:mesh\]](#fig:mesh){reference-type="ref" reference="fig:mesh"} in a two-dimensional case. We assume that both meshes are admissible in the sense that they satisfy the usual orthogonality property, see [@Eym-Gal-Her-00]. 
This means that for each edge $\sigma= K|L$ (respectively $\sigma^*=K^*|L^*$), the line joining $x_K$ to $x_L$ (respectively $x_{K^*}$ to $x_{L^*}$) is perpendicular to $\sigma$ (respectively $\sigma^*$). Moreover, we assume the compatibility of the two meshes $\mathcal M_{\Omega}$ and $\mathcal M_{\omega}$: every control volume of $\mathcal T_{\omega}$ must coincide with an edge of $\mathcal E_\Omega^r$. More precisely, for all $\sigma\in\mathcal E_{\Omega}^r$, there exists a unique $K\in\mathcal T_{\Omega}^r$ such that $\sigma$ is an edge of $K$ and a unique $K^*\in\mathcal T_{\omega}$ such that $\sigma$ coincides with $K^*$. Therefore, we will use the notation $\sigma=K|K^*$ for $\sigma\in \mathcal E_{\Omega}^r$. The measures of control volumes or edges are denoted by $m_{K}$, $m_{K^*}$, $m_\sigma$, $m_{\sigma^*}$ (which is set equal to 1 if the road has dimension 1). We also denote by $d_{\sigma}$ or $d_{\sigma^*}$ the distance associated with an edge $\sigma\in \mathcal E_\Omega$ or $\sigma^*\in \mathcal E_{\omega}$, usually defined as the distance between the centers of two neighboring cells (or the distance from the center to the boundary), so that the transmissivities are defined by $$\tau_\sigma:=\displaystyle\frac{m_\sigma}{d_\sigma} \, \text{ for any } \sigma \in \mathcal E_\Omega,\quad \tau_{\sigma^*}:=\displaystyle\frac{m_{\sigma^*}}{d_{\sigma^*}} \, \text{ for any } \sigma^* \in \mathcal E_\omega.$$ Last, in view of time discretization, we consider a time step $\delta t>0$. Let us denote by $((v_K^n)_{K\in\mathcal T_{\Omega}, n\geq 0}, (v_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 1}, (u_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 0})$ the discrete unknowns.
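For concreteness, on a uniform Cartesian grid for $N=2$ (an illustration of ours, with arbitrary mesh sizes), with field cells of size $\Delta x\times\Delta y$ and road cells of length $\Delta x$, the transmissivities read:

```python
# Uniform Cartesian mesh of Omega = omega x (0, L) in dimension N = 2;
# the road cells K* coincide with the bottom edges of the bottom row of field cells.
dx, dy = 0.1, 0.05

tau_x = dy / dx                 # interior field edge between horizontal neighbors
tau_y = dx / dy                 # interior field edge between vertical neighbors
tau_road_edge = dx / (dy / 2)   # sigma = K|K*: center of K at distance dy/2 from the road
tau_road = 1.0 / dx             # road edge sigma*: m_{sigma*} = 1, centers dx apart

print(tau_x, tau_y, tau_road_edge, tau_road)
```

Note that the road edge $\sigma=K|K^*$ uses the half-cell distance $d_\sigma=\Delta y/2$, which is where the compatibility between the two meshes enters.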
We start with the discretization of the initial conditions by letting $$\label{init_scheme} v_K^0=\frac{1}{m_K}\int_K v_0(x,y) dx dy, \ \forall K\in\mathcal T_\Omega\ \mbox{ and }\ u_{K^*}^0=\frac{1}{m_{K^*}}\int_{K^*} u_0(x) dx, \ \forall K^*\in \mathcal T_\omega.$$ The scheme we propose is a backward Euler scheme in time and a TPFA finite volume scheme in space. It writes as [\[scheme\]]{#scheme label="scheme"} $$\begin{aligned} & m_K\displaystyle\frac{v_K^n-v_K^{n-1}}{\delta t} + d\displaystyle\sum_{\sigma=K|L}\tau_\sigma (v_K^n-v_L^n)+ d\displaystyle\sum_{\sigma=K|K^*}\tau_\sigma(v_K^n-v_{K^*}^n)=0,\ \forall K\in\mathcal T_{\Omega},\label{scheme.v}\\ & -d\tau_\sigma(v_K^n-v_{K^*}^n)=m_{K^*}(\mu u_{K^*}^n-\nu v_{K^*}^n),\ \forall \sigma\in \mathcal E_{\Omega}^{r}, \sigma =K|K^*,\label{scheme.interf}\\ &m_{K^*}\displaystyle\frac{u_{K^*}^n-u_{K^*}^{n-1}}{\delta t}+D \displaystyle\sum_{{\sigma^*}=K^*|L^*}\tau_{\sigma^*}(u_{K^*}^n-u_{L^*}^n)+m_{K^*}(\mu u_{K^*}^n-\nu v_{K^*}^n)=0,\ \forall K^*\in\mathcal T_{\omega}.\label{scheme.u}\end{aligned}$$ At each time step, the scheme consists of a square linear system of equations of size $\# \mathcal T_{\Omega}+2\# \mathcal T_{\omega}$.
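To illustrate the structure of the scheme, here is a minimal sketch (ours, with arbitrary parameter values) in the special case of initial data that do not depend on $x$: the field then reduces to the interval $(0,L)$ in $y$ and the road to a single cell with $m_{K^*}=1$, so the lateral diffusion terms drop out. We check numerically the two properties the scheme is designed to preserve: exact mass conservation and convergence to the steady-state [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}.

```python
import numpy as np

# Backward Euler / TPFA sketch for x-homogeneous data (our reduction, not the
# full 2-d scheme): field cells in y, one interface trace v_*, one road value u.
d, mu, nu, L = 1.0, 0.5, 1.0, 1.0
M, dt, n_steps = 50, 1e-2, 4000
dy = L / M
y = (np.arange(M) + 0.5) * dy                  # field cell centers

# unknowns per time step: v_0..v_{M-1} (field), v_* (trace on the road), u (road)
A = np.zeros((M + 2, M + 2))
tau, tau_half = 1.0 / dy, 2.0 / dy             # interior / half-cell transmissivities
for i in range(M):
    A[i, i] = dy / dt
    for j in (i - 1, i + 1):
        if 0 <= j < M:                         # zero flux at y = L is automatic
            A[i, i] += d * tau
            A[i, j] -= d * tau
A[0, 0] += d * tau_half                        # exchange flux through the bottom edge
A[0, M] -= d * tau_half
# interface relation: -d*tau_half*(v_0 - v_*) = mu*u - nu*v_*
A[M, 0], A[M, M], A[M, M + 1] = -d * tau_half, d * tau_half + nu, -mu
# road equation (m_{K*} = 1, no lateral term left after the reduction)
A[M + 1, M + 1], A[M + 1, M] = 1.0 / dt + mu, -nu

v, u = np.exp(-10.0 * (y - 0.5) ** 2), 0.0     # some initial data
M0 = dy * v.sum() + u                          # initial total mass
for _ in range(n_steps):
    rhs = np.concatenate([dy * v / dt, [0.0], [u / dt]])
    sol = np.linalg.solve(A, rhs)
    v, u = sol[:M], sol[M + 1]

# steady state with m_omega = 1 and m_Omega = L
v_inf, u_inf = mu * M0 / (nu + mu * L), nu * M0 / (nu + mu * L)
print(abs(dy * v.sum() + u - M0))              # mass conserved up to roundoff
print(np.max(np.abs(v - v_inf)), abs(u - u_inf))  # distance to the steady state
```

Summing the field rows, the interface relation and the road row shows that the discrete mass $\Delta y\sum_K v_K^n + u^n$ telescopes exactly, which is what the first printed quantity checks.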
Classically, this requires relating the *dissipation* to the entropy via some functional inequalities. However, the originality of this work comes from the difference in dimension between the field and the road, and from the exchange terms coupling them. In particular, some refinements of the Poincaré-Wirtinger inequality are required for the analysis, see Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"} and Theorem [**Theorem** 8](#th:unconv-PW-dis){reference-type="ref" reference="th:unconv-PW-dis"}. The paper is organized as follows. In Section [2](#s:continuous){reference-type="ref" reference="s:continuous"} we study the long time behavior of the continuous model [\[syst\]](#syst){reference-type="eqref" reference="syst"}. In Section [3](#s:discrete){reference-type="ref" reference="s:discrete"} we study the long time behavior of the TPFA scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}. Last, in Section [4](#s:num){reference-type="ref" reference="s:num"}, we perform some numerical simulations that not only confirm the theoretical results but also raise new issues. # Long time behavior of the continuous model {#s:continuous} In this section, we consider $v_0\in L^\infty(\Omega)$, $u_0\in L^\infty(\omega)$, both nonnegative and not simultaneously trivial. As a result, the total mass is initially positive: $$M_0:=\int _\Omega v_0(x,y)\,dxdy+\int_\omega u_0(x)\,dx >0.$$ Throughout this section, we denote by $(v=v(t,x,y), u=u(t,x))$ the smooth solution of [\[syst\]](#syst){reference-type="eqref" reference="syst"} starting from $(v_0=v_0(x,y), u_0=u_0(x))$. ## Mass conservation, positivity and steady-states {#ss:first prop} First of all, it follows from the strong maximum principle, see [@Ber-Roq-Ros-13-1 Proposition 3.2], that both $v(t,x,y)$ and $u(t,x)$ are positive as soon as $t>0$.
Next, let us consider two test functions: $\varphi\in C^1(\mathbb R\times{\bar\Omega},\mathbb R)$ and $\psi\in C^1(\mathbb R\times\bar \omega,\mathbb R)$. We multiply the equation on $v$ in [\[syst\]](#syst){reference-type="eqref" reference="syst"} by $\varphi$ and the equation on $u$ by $\psi$, and we integrate respectively over $\Omega$ and $\omega$. After some integrations by parts, we obtain, due to the boundary conditions, $$\begin{gathered} \label{weak_form} \int_\Omega \partial_t v\varphi (t,x,y) \,dx dy + \int_\omega \partial_t u \psi(t,x) \,dx=-d\int_\Omega \nabla v\cdot\nabla \varphi (t,x,y) \, dxdy \\ -D \int_\omega \nabla u\cdot \nabla \psi(t,x) \,dx -\int_\omega (\nu v(t,x,0)-\mu u(t,x))(\varphi(t,x,0)-\psi(t,x)) \,dx.\end{gathered}$$ We emphasize that, in [\[weak_form\]](#weak_form){reference-type="eqref" reference="weak_form"}, $\nabla v$ stands for $\nabla_{x,y} v$ while $\nabla u$ stands for $\nabla_x u$. In the sequel, we often omit the variables $t$, $x$ and $y$ in the integrands when using [\[weak_form\]](#weak_form){reference-type="eqref" reference="weak_form"} or similar relations. When $v$ (or its derivatives) appears in an integrand over $\omega$, this obviously means $v|_{y=0}$. Choosing for $\varphi$ and $\psi$ in [\[weak_form\]](#weak_form){reference-type="eqref" reference="weak_form"} the constant functions equal to 1 over $\mathbb R\times{\bar\Omega}$ and $\mathbb R\times\bar \omega$, we obtain that the total mass of the system $\int_\Omega v(t,x,y)\,dxdy+\int_\omega u(t,x)\,dx$ is constant, namely $$\int_\Omega v(t,x,y)\,dxdy+\int_\omega u(t,x)\,dx=M_0, \quad \forall t>0.$$ Let us now investigate the existence of steady-states $(v=v(x,y), u= u(x))$ to [\[syst\]](#syst){reference-type="eqref" reference="syst"}.
Using $\varphi=\nu v$ and $\psi=\mu u$ in [\[weak_form\]](#weak_form){reference-type="eqref" reference="weak_form"}, we obtain that $$\int_\Omega |\nabla v|^2 \,dx dy= \int_\omega |\nabla u|^2 \,dx =\int_\omega (\nu v(\cdot,0)-\mu u(\cdot))^2 \,dx = 0,$$ so that $v$ and $u$ must be constant in space and satisfy $\nu v-\mu u=0$. The system [\[syst\]](#syst){reference-type="eqref" reference="syst"} thus has infinitely many steady-states, but only one with the prescribed mass $M_0$. The constant steady-state $(v^\infty,u^\infty)$ with mass $M_0$ (and therefore associated with the initial state $(v_0,u_0)$) is given by $$\label{eq-pour-steady} \nu v^\infty-\mu u^\infty=0, \quad m_\Omega v^\infty+m_\omega u^\infty=M_0,$$ that is $$\label{def-steady-cst} v^\infty=\frac{\mu}{m_\omega\nu + m_\Omega\mu}M_0, \quad u^\infty=\frac{\nu}{m_\omega\nu +m_\Omega\mu}M_0.$$ The positivity of $M_0$ implies the positivity of $v^\infty$ and $u^\infty$. ## Exponential decay of relative entropy {#ss:entropy} Our aim is now to establish that $(v=v(t,x,y), u=u(t,x))$, the smooth solution of [\[syst\]](#syst){reference-type="eqref" reference="syst"} starting from $(v_0=v_0(x,y), u_0=u_0(x))$ with an initial total mass $M_0$, converges in large time towards the associated steady-state $(v^\infty,u^\infty)$ defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. To do so, we apply a relative entropy method, as presented for instance in the book by Jüngel [@Jungel_2016].
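The closed form [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"} is just the solution of the $2\times2$ linear system [\[eq-pour-steady\]](#eq-pour-steady){reference-type="eqref" reference="eq-pour-steady"}; a one-line check (with illustrative parameter values of ours):

```python
import numpy as np

# Solve (eq-pour-steady) numerically and compare with (def-steady-cst).
mu, nu, m_Omega, m_omega, M0 = 0.5, 1.0, 2.0, 1.0, 3.0
A = np.array([[nu, -mu], [m_Omega, m_omega]])
v_inf, u_inf = np.linalg.solve(A, np.array([0.0, M0]))
print(v_inf, u_inf)  # matches mu*M0/(m_omega*nu + m_Omega*mu) and nu*M0/(...)
```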
For any twice differentiable function $\Phi:[0,+\infty)\to [0,+\infty)$ such that $$\Phi''>0, \quad \Phi'(1)=0, \quad \Phi(1)=0,$$ we define an entropy, relative to the steady-state $(v^\infty,u^\infty)$, by $$\label{def-H} \mathcal H(t):=\int _\Omega v^\infty\Phi\left(\frac{v(t,x,y)}{v^\infty}\right)\,dxdy+\int_\omega u^\infty\Phi \left(\frac{u(t,x)}{u^\infty}\right)\,dx,$$ which is obviously nonnegative and vanishes at time $t$ if and only if $v(t,\cdot,\cdot)\equiv v^\infty$ and $u(t,\cdot)\equiv u^\infty$. Our first result states that the entropy is dissipated by the field-road model. ****Proposition** 1** (Entropy dissipation). *Let $v_0\in L^\infty(\Omega)$ and $u_0\in L^\infty(\omega)$ be both nonnegative and satisfying $M_0>0$. Let $(v=v(t,x,y), u=u(t,x))$ be the solution to [\[syst\]](#syst){reference-type="eqref" reference="syst"} starting from $(v_0=v_0(x,y), u_0=u_0(x))$, and $(v^\infty,u^\infty)$ the associated steady-state defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. 
Then the entropy defined by [\[def-H\]](#def-H){reference-type="eqref" reference="def-H"} is dissipated along time, namely $$\label{diss-entropy} \frac{d}{dt}\mathcal H(t)=-\mathcal D(t)\leq 0, \quad \forall t>0,$$ where $$\begin{gathered} \label{def-Dissipation} \mathcal D(t):=d \int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty}\Phi''\left(\frac{v}{v^\infty}\right)\,dxdy+D \int_\omega \frac{\vert \nabla u\vert ^2}{u^\infty}\, \Phi''\left(\frac{u}{u^\infty}\right) \,dx\\ \qquad +\mu u^\infty\int _\omega \left(\Phi'\left(\frac{v}{v^\infty}\right)-\Phi'\left(\frac{u}{u^\infty}\right)\right)\left(\frac v{v^\infty}-\frac u{u^\infty}\right)\, dx\end{gathered}$$ is the so-called dissipation.* *Proof.* The derivative of the entropy function $\mathcal H$ is given by $$\frac{d}{dt} \mathcal H(t)= \int_\Omega \partial_t v \Phi'\left(\frac{v}{v^\infty}\right) \, dxdy+ \int_\omega \partial_t u \Phi'\left(\frac{u}{u^\infty}\right)\, dx.$$ Therefore, we apply [\[weak_form\]](#weak_form){reference-type="eqref" reference="weak_form"} with $\varphi= \Phi'(\frac{v}{v^\infty})$ and $\psi= \Phi'(\frac{u}{u^\infty})$. 
Since $\nabla \varphi=\frac{1}{v^\infty} \Phi''\left(\frac{v}{v^\infty}\right)\nabla v$ and $\nabla \psi=\frac{1}{u^\infty} \Phi''\left(\frac{u}{u^\infty}\right)\nabla u$, we obtain $$\begin{gathered} \frac{d}{dt} \mathcal H(t)=-d \int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty}\, \Phi''\left(\frac{v}{v^\infty}\right) \,dxdy -D\int_\omega \frac{\vert \nabla u\vert ^2}{u^\infty}\, \Phi''\left(\frac{u}{u^\infty}\right) \,dx\\ - \int_\omega (\nu v(t,x,0)-\mu u(t,x))\left(\Phi'\left(\frac{v(t,x,0)}{v^\infty}\right)-\Phi'\left(\frac{u(t,x)}{u^\infty}\right)\right)\,dx.\end{gathered}$$ Using [\[eq-pour-steady\]](#eq-pour-steady){reference-type="eqref" reference="eq-pour-steady"}, this can be recast as $$\begin{gathered} \frac{d}{dt} \mathcal H(t)=-d \int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty}\, \Phi''\left(\frac{v}{v^\infty}\right) \,dxdy -D\int_\omega \frac{\vert \nabla u\vert ^2}{u^\infty}\, \Phi''\left(\frac{u}{u^\infty}\right) \,dx\\ - \mu u^\infty \int_{\omega}\left(\frac{v(t,x,0)}{v^\infty}-\frac{u(t,x)}{u^\infty}\right)\left(\Phi'\left(\frac{v(t,x,0)}{v^\infty}\right)-\Phi'\left(\frac{u(t,x)}{u^\infty}\right)\right)\,dx,\end{gathered}$$ which concludes the proof. ◻ Proposition [**Proposition** 1](#prop:dissipation){reference-type="ref" reference="prop:dissipation"} ensures that any relative entropy $\mathcal H$ is nonincreasing in time, while being nonnegative. In order to obtain the exponential decay in time of the entropy, we look for a relation between the entropy and the dissipation of the form $\mathcal D\geq \Lambda \mathcal H$ for some $\Lambda>0$. Indeed, combined with [\[diss-entropy\]](#diss-entropy){reference-type="eqref" reference="diss-entropy"}, this would ensure $\mathcal H(t)\leq \mathcal H(0)\exp(-\Lambda t)$. We now specify the choice of the function $\Phi$ and therefore of the entropy.
We select $$\label{phi-2} \Phi(s)=\Phi_2(s):=\frac 12 (s-1)^2,$$ so that $$\label{H-2} \mathcal H_2(t)=\frac 12 \int_\Omega \frac{(v-v^\infty)^2}{v^\infty}\,dxdy+\frac 12 \int_\omega \frac{(u-u^\infty)^2}{u^\infty}\,dx,$$ and $$\label{D-2} \mathcal D_2(t)= d \int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty} \,dxdy+ D \int_\omega \frac{\vert \nabla u\vert ^2}{u^\infty} \,dx+\mu u^\infty\int _\omega \left(\frac{v}{v^\infty}-\frac{u}{u^\infty}\right)^2\,dx.$$ Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"} states the expected relation between the entropy $\mathcal H_2$ and its dissipation $\mathcal D_2$. Its proof will be given in the next subsection. As we will see, it is based on the proof of the Poincaré-Wirtinger inequality, while not being the combination of the classical Poincaré-Wirtinger inequality applied to $v$ and to $u$. ****Theorem** 2** (Relating entropy and dissipation). *Let $v_0\in L^\infty(\Omega)$ and $u_0\in L^\infty(\omega)$ be both nonnegative and satisfying $M_0>0$. Let $(v=v(t,x,y), u=u(t,x))$ be the solution to [\[syst\]](#syst){reference-type="eqref" reference="syst"} starting from $(v_0=v_0(x,y), u_0=u_0(x))$, and $(v^\infty,u^\infty)$ the associated steady-state defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. 
Then, for any $t>0$ (that we omit to write below), there holds $$\begin{aligned} && \frac 12 \int_\Omega \frac{(v-v^\infty)^2}{v^\infty}\,dxdy+\frac 12 \int_\omega \frac{(u-u^\infty)^2}{u^\infty}\,dx \nonumber \\ &&\qquad \leq \frac 1{\Lambda _2} \left(d \int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty} \,dxdy+ D \int_\omega \frac{\vert \nabla u\vert ^2}{u^\infty} \,dx+\mu u^\infty\int _\omega \left(\frac{v}{v^\infty}-\frac{u}{u^\infty}\right)^2\,dx\right),\label{PW}\end{aligned}$$ for some positive constant $\Lambda_2$ depending on the dimension $N$, the domain $\Omega$ (including $\omega$ and $L$), the transfer rates $\mu$, $\nu$, and the diffusion coefficients $d$, $D$, see [\[Lambda-2\]](#Lambda-2){reference-type="eqref" reference="Lambda-2"} for further details.* As [\[PW\]](#PW){reference-type="eqref" reference="PW"} means nothing else than $\mathcal D_2(t)\geq {\Lambda _2} \mathcal H_2(t)$ for all $t>0$, we deduce from Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"} and Proposition [**Proposition** 1](#prop:dissipation){reference-type="ref" reference="prop:dissipation"} the exponential decay of the entropy $\mathcal H_2$, as stated in Theorem [**Theorem** 3](#th:H2-decroit){reference-type="ref" reference="th:H2-decroit"}. ****Theorem** 3** (Exponential decay of entropy). *Let $v_0\in L^\infty(\Omega)$ and $u_0\in L^\infty(\omega)$ be both nonnegative and satisfying $M_0>0$. Let $(v=v(t,x,y), u=u(t,x))$ be the solution to [\[syst\]](#syst){reference-type="eqref" reference="syst"} starting from $(v_0=v_0(x,y), u_0=u_0(x))$, and $(v^\infty,u^\infty)$ the associated steady-state defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. 
Then the entropy defined by [\[H-2\]](#H-2){reference-type="eqref" reference="H-2"} decays exponentially, namely $$0\leq \mathcal H_2(t)\leq \mathcal H_2(0) e^{-\Lambda _2 t}, \quad \forall t\geq 0,$$ where $\Lambda_2$ comes from Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"}.* Due to the definition of $\mathcal H_2$, a direct consequence of Theorem [**Theorem** 3](#th:H2-decroit){reference-type="ref" reference="th:H2-decroit"} is the exponential decay of $v$ (*resp.* $u$) towards $v^\infty$ (*resp.* $u^\infty$) in the $L^2(\Omega)$- (*resp.* $L^2(\omega)$-) norm. ## Relating entropy and dissipation, proof of Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"} {#relating-entropy-and-dissipation-proof-of-theorem-thunconv-pw} At first glance, the relation between entropy and dissipation [\[PW\]](#PW){reference-type="eqref" reference="PW"} has similarities with the Poincaré-Wirtinger inequality. However, if we use the Poincaré-Wirtinger inequality twice, once for the term $\int_\Omega \vert \nabla v\vert ^2\,dxdy$ and once for the term $\int_\omega \vert \nabla u\vert ^2 \, dx$, then we fail to reconstruct $\mathcal H_2$ as a lower bound for the dissipation $\mathcal D_2$. We obviously have to take into account that the quantity preserved along time is the total mass $M_0$; we also have to manage the fact that $v$ and $u$ are defined on domains of different dimensions. Roughly speaking, we will first "thicken the road" from a subset of $\mathbb R^{N-1}$ to a subset of $\mathbb R^N$ and define an "enlarged" domain made of the field and the thickened road. On this enlarged domain, we may define a function based on $v$ on the field and $u$ on the thickened road. Next, we follow the main steps of the proof of the classical Poincaré-Wirtinger inequality, hoping that the constant does not blow up as the thickness tends to zero.
It turns out that an additional term appears and that it is precisely the non-gradient term in $\mathcal D_2$. Finally, [\[PW\]](#PW){reference-type="eqref" reference="PW"} can be interpreted as a kind of unconventional Poincaré-Wirtinger inequality. We start by recalling the very classical Poincaré-Wirtinger inequality in a bounded convex open set and take the liberty to present briefly the main steps of a possible proof. To state this precisely, we define the dimensional constant $$\label{C_N} C_d:=\left\{\begin{array}{ll} \ln 2 & \text{ if } d=1,\\ \frac{2^{d-1}-1}{d-1} & \text{ if } d\geq 2,\\ \end{array} \right.$$ which increases with $d$. ****Theorem** 4** (Poincaré-Wirtinger inequality). *Let $U$ be a bounded convex open set of $\mathbb R^d$ ($d\geq 1$). Let $f :U\to \mathbb R$ be a given function in $H^1(U)$. Define its mean as $\langle f \rangle:=\frac{1}{m_U}\int_ U f\,dx$. Then $$\label{ineg-PW} \Vert f-\langle f \rangle\Vert _{L^2(U)}^2=\frac 1 {2m_U}\iint _{U^2} \left(f(x)-f(y)\right)^2\,dxdy \leq C_d\, (\text{Diam } U)^2\, \int _U \vert \nabla f(z)\vert ^2\,dz,$$ with $C_d$ the dimensional constant defined in [\[C_N\]](#C_N){reference-type="eqref" reference="C_N"}.* *Proof.* The equality in [\[ineg-PW\]](#ineg-PW){reference-type="eqref" reference="ineg-PW"} is classical and can be straightforwardly checked. 
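The equality in question is the continuous form of the familiar identity relating the variance to the mean of pairwise squared differences; it has an exact discrete analogue that can be verified numerically. A minimal sketch, where the uniform grid and the random cell values are arbitrary illustrative choices:

```python
import numpy as np

# Discrete analogue of the equality in (ineg-PW): on a uniform 1D grid,
# ||f - <f>||^2 equals (1/(2 m_U)) times the double sum of (f_i - f_j)^2.
rng = np.random.default_rng(0)
n, h = 200, 0.01                       # n cells of width h, so m_U = n*h
f = rng.standard_normal(n)             # arbitrary cell values
m_U = n * h

mean = f.mean()                        # <f> = (1/m_U) * h * sum(f)
lhs = h * np.sum((f - mean) ** 2)      # midpoint rule for ||f - <f>||^2
rhs = h * h / (2 * m_U) * np.sum((f[:, None] - f[None, :]) ** 2)

assert np.isclose(lhs, rhs)            # the identity holds exactly
```

At the discrete level the identity is purely algebraic, so the two sides agree up to rounding errors, independently of the grid.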
Next, for sufficiently smooth $f$ (the general case being later obtained by density arguments), $$\begin{aligned} \iint _{U^2} \left(f(x)-f(y)\right)^2\,dxdy &=& \iint _{U^2} \left(\int_0^1 \nabla f\left((1-t)x+ty\right)\cdot (x-y)\,dt\right)^2\,dxdy\\ & \leq & \iint _{U^2} \int_0^1 \vert \nabla f\left((1-t)x+ty\right)\cdot (x-y)\vert^2 \,dtdxdy \\ & \leq & (\text{Diam } U)^2 \iint_{U^2} \int_0^1 \vert \nabla f\left((1-t)x+ty\right)\vert^2 \,dtdxdy .\end{aligned}$$ We cut the integral over $t\in(0,1)$ into two pieces and write $$\begin{aligned} \iint_{U^2} \int_0^1 \vert \nabla f\left((1-t)x+ty\right)\vert^2 \,dtdxdy &\leq & \int_{y\in U} \int_0^{1/2} \int _{x\in U} \vert \nabla f\left((1-t)x+ty\right)\vert^2\,dxdtdy\\ && + \int_{x\in U} \int_{1/2}^1 \int _{y\in U} \vert \nabla f\left((1-t)x+ty\right)\vert^2\,dydtdx \\ &\leq & \int_{y\in U} \int_0^{1/2} \int _{z \in V_{t,y}} \vert \nabla f(z) \vert^2\,\frac{dz}{(1-t)^d}\,dtdy\\ && + \int_{x\in U} \int_{1/2}^1 \int _{z\in W_{t,x} } \vert \nabla f (z)\vert^2\,\frac{dz}{t^d}\,dtdx.\end{aligned}$$ Since $U$ is convex, both domains of integration over $z$, namely $V_{t,y}$ and $W_{t,x}$, are subsets of $U$. Since $\int_{1/2}^1 t^{-d}\,dt$ is nothing else than the dimensional constant $C_d$ defined in [\[C_N\]](#C_N){reference-type="eqref" reference="C_N"}, we get $$\iint_{U^2} \int_0^1 \vert \nabla f\left((1-t)x+ty\right)\vert^2 \,dtdxdy\leq 2\,m_U\, C_d \int_U \vert \nabla f(z)\vert ^2\,dz.$$ Putting everything together, we get [\[ineg-PW\]](#ineg-PW){reference-type="eqref" reference="ineg-PW"}. ◻ Having in mind these classical moves, we now turn to the proof of the unconventional Poincaré-Wirtinger inequality. *Proof of Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"}.* For $\ell>0$, we "enlarge" $\Omega=\omega \times (0,L)$ to $\Omega^+=\omega\times (-\ell,L)$. We denote by $\Omega_\ell =\omega \times (-\ell,0)$ the so-called thickened road.
Reference points in $\Omega^+$ will be denoted $X=(x,y)$, $X'=(x',y')$, with $x$, $x'$ in $\omega$ and $y$, $y'$ in $(-\ell,L)$. We work with $$\label{d-sigma} d\rho= \left( \frac{v^\infty}{M_0}\mathbf 1_{\Omega}(x,y)+\frac{1}{\ell}\frac{u^\infty}{M_0}\mathbf 1_{\Omega_\ell}(x,y) \right)\,dxdy,$$ which is a probability measure as can be checked thanks to [\[eq-pour-steady\]](#eq-pour-steady){reference-type="eqref" reference="eq-pour-steady"}, and with $$\label{f(x,y)} f(x,y)=\frac{v(x,y)}{v^\infty}\mathbf 1_{\Omega}(x,y)+ \frac{u(x)}{u^\infty}\mathbf 1_{\Omega_\ell}(x,y), \quad (x,y)\in \Omega^+=\omega \times (-\ell,L),$$ where we have omitted the $t$ variable. The point is that, as $\ell \to 0$, the $L^\infty$ norm of the density of the measure blows up. Fortunately, as $\ell \to 0$, the domain $\Omega^+$ shrinks to $\Omega$ and moreover we only need to consider $f(x,y)$ given by [\[f(x,y)\]](#f(x,y)){reference-type="eqref" reference="f(x,y)"} (in particular, $f$ is independent of $y$ in the thickened road).
Observe first that $$\langle f \rangle:=\int_{\Omega ^+} f\, d\rho = \frac{1}{M_0}\int_\Omega v(x,y)\,dxdy+\frac{1}{\ell M_0}\int_{\Omega _\ell } u(x)\,dxdy=\frac{1}{M_0}\int_\Omega v\,dxdy+\frac{1}{M_0}\int_{\omega } u\,dx= 1,$$ and that $$\label{1} \Vert f -\langle f \rangle\Vert ^2_{L^2(\Omega^+,d\rho)}=\int_\Omega \left(\frac{v}{v^\infty}-1\right)^2\frac{v^\infty}{M_0}\,dxdy+\int_\omega \left(\frac{u}{u^\infty}-1\right)^2\frac{u^\infty}{M_0}\,dx=\frac{2}{M_0}\mathcal H_2(t).$$ Next, similarly to the equality in [\[ineg-PW\]](#ineg-PW){reference-type="eqref" reference="ineg-PW"}, it is straightforward to check that $$\label{2} 2 \Vert f-\langle f \rangle\Vert ^2_{L^2(\Omega^+,d\rho)}= I_{\Omega,\Omega}+2I_{\Omega,\Omega_\ell}+I_{\Omega_\ell,\Omega_\ell},$$ where $$I_{A,B}:=\int _{X\in A}\int_{X'\in B} \left(f(X)-f(X')\right)^2 \rho(X)\rho(X')\, dX'dX.$$ $(i)$ We start with the term $$\begin{aligned} I_{\Omega,\Omega}&=&\iint_{(X,X')\in \Omega^2}\left(\frac{v(x,y)}{v^\infty}-\frac{v(x',y')}{v^\infty}\right)^2\frac{(v^\infty)^2}{M_0^2}\,dXdX'\\ &=& \frac{1}{M_0^2} \iint _{(X,X')\in \Omega ^2}\left(v(x,y)-v(x',y')\right)^2 \,dXdX',\end{aligned}$$ and we can follow in the footsteps of the classical case. Using [\[ineg-PW\]](#ineg-PW){reference-type="eqref" reference="ineg-PW"} we get $$\label{3} I_{\Omega,\Omega}\leq \frac{1}{M_0^2}2m_\Omega C_{N}(\text{Diam } \Omega)^2 \int_\Omega \vert \nabla v\vert ^2\,dxdy=\frac{m_\Omega v^\infty}{M_0^2}2 C_{N}(\text{Diam } \Omega)^2 \int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty}\,dxdy.$$ $(ii)$ Let us now turn to the term $$\begin{aligned} I_{\Omega_\ell,\Omega_\ell}&=&\iint_{(X,X')\in \Omega_\ell^2}\left(\frac{u(x)}{u^\infty}-\frac{u(x')}{u^\infty}\right)^2\frac{1}{\ell ^2}\frac{(u^\infty)^2}{M_0^2}\,dXdX'\\ &=& \frac{1}{M_0^2} \iint _{(x,x')\in \omega ^2}(u(x)-u(x'))^2\,dxdx',\end{aligned}$$ and, again, we can follow in the footsteps of the classical case.
Using [\[ineg-PW\]](#ineg-PW){reference-type="eqref" reference="ineg-PW"} we get $$\label{4} I_{\Omega_\ell,\Omega_\ell}\leq \frac{1 }{M_0^2}2m_\omega C_{N-1}(\text{Diam } \omega)^2 \int_\omega \vert \nabla u\vert ^2\,dx=\frac{m_\omega u^\infty}{M_0^2}2 C_{N-1}(\text{Diam } \omega)^2 \int_\omega \frac{\vert \nabla u\vert ^2}{u^\infty}\,dx.$$ $(iii)$ It remains to estimate the so-called unconventional term involving crossed terms, namely $$\begin{aligned} I_{\Omega,\Omega_\ell}&=&\int_{X\in \Omega}\int_{X'\in \Omega_\ell}\left(\frac{v(x,y)}{v^\infty}-\frac{u(x')}{u^\infty}\right)^2 \frac{v^\infty u^\infty}{M_0^2}\frac 1 \ell\, dX'dX\\ &=& \int_{X\in \Omega}\int_{x'\in \omega}\left(\frac{v(x,y)}{v^\infty}-\frac{u(x')}{u^\infty}\right)^2 \frac{v^\infty u^\infty}{M_0^2}\, dx'dxdy.\end{aligned}$$ We can split it into three pieces, thanks to the following inequality: $$\left(\frac{v(x,y)}{v^\infty}-\frac{u(x')}{u^\infty}\right)^2\leq 3\left(\left(\frac{v(x,y)}{v^\infty}-\frac{v(x,0)}{v^\infty}\right)^2 + \left(\frac{v(x,0)}{v^\infty}-\frac{u(x)}{u^\infty}\right)^2+ \left(\frac{u(x)}{u^\infty}-\frac{u(x')}{u^\infty}\right)^2\right).$$ This implies $I_{\Omega,\Omega_\ell} \leq 3 (I_{\Omega,\Omega_\ell}^1+I_{\Omega,\Omega_\ell}^2+I_{\Omega,\Omega_\ell}^3)$ with obvious notations for these three terms, for which we now give a bound. For the first one, we have $$\begin{aligned} I_{\Omega,\Omega_\ell}^1&=\frac{u^\infty}{v^\infty M_0^2}\int_{X\in \Omega}\int_{x'\in \omega}\left(v(x,y)-v(x,0)\right)^2 dx' dxdy,\\ &= \frac{u^\infty m_\omega}{v^\infty M_0^2}\int_{x \in \omega}\int_{y \in (0,L)}\left(v(x,y)-v(x,0)\right)^2 \, dydx. 
\end{aligned}$$ We can then apply the classical procedure used to prove the one-dimensional Poincaré inequality, namely $$\begin{aligned} \int_{x \in \omega}\int_{y \in (0,L)}\left(v(x,y)-v(x,0)\right)^2 \, dydx&=& \int_{x\in \omega}\int_{y\in(0,L)} \left(\int _0 ^y \frac{\partial v}{\partial s}(x,s)\,ds\right)^2\,dydx\\ &\leq & \int_{x\in \omega}\int_{y\in(0,L)} \int_0^y \left(\frac{\partial v}{\partial s}(x,s)\right)^2\,ds \times y \, dydx\\ &\leq & \frac{L^2}2 \int_{x\in \omega}\int_ 0 ^L \left(\frac{\partial v}{\partial s}(x,s)\right)^2\,dsdx,\end{aligned}$$ which yields $$I_{\Omega,\Omega_\ell}^1\leq \frac{u^\infty m_\omega L^2}{2M_0^2}\int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty}\,dxdy.$$ Let us now consider $$\begin{aligned} I_{\Omega,\Omega_\ell}^2&=\frac{ v^\infty u^\infty}{M_0^2} \int_{X\in \Omega}\int_{x'\in \omega}\left(\frac{v(x,0)}{v^\infty}-\frac{u(x)}{u^\infty}\right)^2\, dx'dxdy,\\ &=\frac{ v^\infty u^\infty m_\Omega}{M_0^2}\int_\omega \left(\frac{v(x,0)}{v^\infty}-\frac{u(x)}{u^\infty}\right)^2\,dx \end{aligned}$$ and we recover, up to a multiplicative constant, the non-gradient term in the definition of the dissipation, see [\[D-2\]](#D-2){reference-type="eqref" reference="D-2"}. Finally, for the third term, we have $$\begin{aligned} I_{\Omega,\Omega_\ell}^3&=\frac{ v^\infty u^\infty}{M_0^2}\int_{X\in \Omega}\int_{x'\in \omega}\left(\frac{u(x)}{u^\infty}-\frac{u(x')}{u^\infty}\right)^2\, dx'dxdy\,\\ &=\frac{ v^\infty L}{u^\infty M_0^2}\int_{x\in \omega}\int_{x' \in \omega}\left(u(x)-u(x')\right)^2 \, dx'dx, \end{aligned}$$ which is nothing else than $\frac{v^\infty}{ u^\infty} L\times I_{\Omega_\ell,\Omega_\ell}$, with $I_{\Omega_\ell,\Omega_\ell}$ already estimated in [\[4\]](#4){reference-type="eqref" reference="4"}.
As a result, we obtain $$\begin{gathered} \label{5} I_{\Omega,\Omega_\ell} \leq \frac 3 2 \frac{ u^\infty}{M_0^2} m_\omega L^2 \int_\Omega \frac{\vert \nabla v\vert ^2}{v^\infty}\,dxdy +3 \frac{ v^\infty u^\infty}{M_0^2}m_\Omega\int_\omega \left(\frac{v(x,0)}{v^\infty}-\frac{u(x)}{u^\infty}\right)^2\,dx\\ +6C_{N-1} \frac{v^\infty}{M_0^2}m_\Omega(\text{Diam } \omega)^2 \int_\omega \frac{\vert \nabla u\vert^2}{u^\infty}\, dx .\end{gathered}$$ Now combining [\[1\]](#1){reference-type="eqref" reference="1"}, [\[2\]](#2){reference-type="eqref" reference="2"}, [\[3\]](#3){reference-type="eqref" reference="3"}, [\[4\]](#4){reference-type="eqref" reference="4"} and [\[5\]](#5){reference-type="eqref" reference="5"} we reach $$\begin{aligned} \mathcal H_2(t)&\leq &\frac 1 4\frac{m_\Omega}{m_\omega\nu+m_\Omega\mu}\left(2\mu C_N (\text{Diam } \Omega)^2+ 3\nu L \right) \int _\Omega \frac{\vert \nabla v\vert^2}{v^\infty}\,dxdy\\ &&+ \frac 1 2 \frac{m_\omega}{m_\omega\nu+m_\Omega\mu}C_{N-1} (\text{Diam } \omega)^2\left(\nu+6\mu L \right)\int_\omega \frac{\vert \nabla u\vert^2}{u^\infty}\,dx\\ &&+\frac 3 2 \frac{m_\Omega}{m_\omega\nu+m_\Omega\mu} \mu u^\infty\int_\omega \left(\frac{v}{v^\infty}-\frac{u}{u^\infty}\right)^2\,dx,\end{aligned}$$ where we have also used the relations [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"} for $v^\infty$ and $u^\infty$. Defining $$\begin{gathered} \label{Lambda-2} \Lambda_2:=\min \Bigg\{\frac{4}{2\mu C_N (\text{Diam } \Omega)^2+ 3\nu L }\, \frac{m_\omega\nu+m_\Omega\mu}{m_\Omega} \, d\,;\\ \frac{2}{C_{N-1} (\text{Diam } \omega)^2\left(\nu+6\mu L\right)}\, \frac{m_\omega\nu+m_\Omega\mu}{m_\omega}\, D \,;\, \frac 2 3 \frac{m_\omega\nu+m_\Omega\mu}{m_\Omega}\Bigg\}, \end{gathered}$$ we reach the conclusion of Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"}. ◻ ***Remark** 5*. 
Applying the following scaling in time $$v(t,x,y)= V(dt,x,y),\ u(t,x)=U(dt,x),$$ we obtain that $(V,U)$ is a solution to [\[syst\]](#syst){reference-type="eqref" reference="syst"} for the set of parameters $(d=1, D,\mu,\nu)$ if and only if $(v,u)$ is a solution to [\[syst\]](#syst){reference-type="eqref" reference="syst"} for the set of parameters $(d,dD,d\mu,d\nu)$, so that the *actual* decay rate should satisfy $$\Lambda(d,dD,d\mu,d\nu)=d\Lambda(1,D,\mu,\nu).$$ With the scaling $v(t,x,y)=V(Dt,x,y), u(t,x)=U(Dt,x)$, we get $$\Lambda(Dd,D,D\mu,D\nu)=D\Lambda(d,1,\mu,\nu).$$ We observe that $\Lambda_2$ given by [\[Lambda-2\]](#Lambda-2){reference-type="eqref" reference="Lambda-2"} satisfies these two expected scaling properties. # Long time behavior of the TPFA scheme {#s:discrete} In this section, we consider the finite volume scheme [\[init_scheme\]](#init_scheme){reference-type="eqref" reference="init_scheme"}-[\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} for the field-road diffusion model. The assumptions on the initial data are the same as in the continuous case: $v_0\in L^\infty(\Omega)$ and $u_0\in L^\infty(\omega)$ are nonnegative and satisfy $M_0>0$. We also assume that the meshes ${\mathcal M}_\Omega$ and ${\mathcal M}_\omega$ satisfy the admissibility and compatibility assumptions introduced in subsection [1.2](#ss:TPFA){reference-type="ref" reference="ss:TPFA"}. We start with some preliminary results: existence and uniqueness of a solution to the scheme, positivity, mass conservation and steady-states. Then, we will focus on the long time behavior of the scheme. As in the continuous case, we will establish the exponential decay of the approximate solutions towards the steady-state, a result based on a discrete counterpart of the entropy-dissipation relation [\[PW\]](#PW){reference-type="eqref" reference="PW"}. 
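Before turning to the discrete analysis, the two scaling identities of Remark [**Remark** 5](#rem){reference-type="ref" reference="rem"} can be checked directly on the explicit expression [\[Lambda-2\]](#Lambda-2){reference-type="eqref" reference="Lambda-2"}. Below is a small sketch; the geometry ($\omega=(0,1)$, $L$, the diameters) and the parameter values are illustrative assumptions, since only the homogeneity in $(d,D,\mu,\nu)$ matters:

```python
import math

def C(d):
    # dimensional constant C_d = int_{1/2}^1 t^(-d) dt, as in (C_N)
    return math.log(2.0) if d == 1 else (2.0 ** (d - 1) - 1.0) / (d - 1)

def Lambda2(d, D, mu, nu, N=2, L=1.0, m_omega=1.0, diam_omega=1.0):
    # Lambda_2 from its explicit formula; omega=(0,1) and L=1 are assumptions
    m_Omega = m_omega * L                      # Omega = omega x (0, L)
    diam_Omega = math.hypot(diam_omega, L)     # diameter of the cylinder
    s = m_omega * nu + m_Omega * mu
    t1 = 4.0 / (2.0 * mu * C(N) * diam_Omega ** 2 + 3.0 * nu * L) * s / m_Omega * d
    t2 = 2.0 / (C(N - 1) * diam_omega ** 2 * (nu + 6.0 * mu * L)) * s / m_omega * D
    t3 = 2.0 / 3.0 * s / m_Omega
    return min(t1, t2, t3)

d, D, mu, nu = 0.7, 2.0, 1.3, 0.5              # arbitrary positive parameters
# Lambda(d, d*D, d*mu, d*nu) = d * Lambda(1, D, mu, nu)
assert math.isclose(Lambda2(d, d * D, d * mu, d * nu), d * Lambda2(1.0, D, mu, nu))
# Lambda(D*d, D, D*mu, D*nu) = D * Lambda(d, 1, mu, nu)
assert math.isclose(Lambda2(D * d, D, D * mu, D * nu), D * Lambda2(d, 1.0, mu, nu))
```

Each of the three terms inside the minimum is separately homogeneous of the expected degree, which is why the minimum itself obeys both scaling laws.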
## Preliminary results {#ss:preliminary} As already noticed in the introduction, the scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} consists, at each time step, of a square linear system of equations of size $\# \mathcal T_{\Omega}+2\# \mathcal T_{\omega}$. We can obtain a weak formulation of the scheme by multiplying the equations in [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} by some test values and summing over $\mathcal T_\Omega$, $\mathcal E_{\Omega}^r$, $\mathcal T_{\omega}$. For a given vector $$((\varphi_K)_{K\in\mathcal T_{\Omega}}, (\varphi_{K^*})_{K^*\in \mathcal T_{\omega}}, (\psi_{K^*})_{K^*\in \mathcal T_{\omega}}),$$ we obtain $$\begin{gathered} \label{scheme_weak} \displaystyle\sum_{K\in\mathcal T_\Omega} m_K\varphi_K\frac{v_K^n-v_K^{n-1}}{\delta t} +\displaystyle\sum_{K^*\in\mathcal T_\omega} m_{K^*}\psi_{K^*}\frac{u_{K^*}^n-u_{K^*}^{n-1}}{\delta t}\\ +d\!\!\!\sum_{\sigma=K|K^*}\!\!\!\tau_\sigma (v_K^n-v_{K^*}^n)(\varphi_K-\varphi_{K^*})= -D\!\!\!\sum_{\sigma^*=K^*|L^*}\!\!\!\tau_{\sigma^*}(u_{K^*}^n-u_{L^*}^n) (\psi_{K^*}-\psi_{L^*})\\-d\!\!\!\sum_{\sigma=K|L}\!\!\!\tau_\sigma(v_K^n-v_L^n)(\varphi_K-\varphi_L) -\!\!\!\sum_{K^*\in\mathcal T_\omega}\!\!\! m_{K^*}(\mu u_{K^*}^n-\nu v_{K^*}^n)(\psi_{K^*}-\varphi_{K^*}).\end{gathered}$$ This weak formulation [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"} is equivalent to the scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}. Indeed, setting one test value equal to 1 and the other ones equal to 0 allows one to recover the scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} from [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"}. ****Lemma** 6** (Well-posedness and basic facts).
*There exists a unique solution $$\left((v_K^n)_{K\in\mathcal T_{\Omega}, n\geq 0}, (v_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 1}, (u_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 0}\right)$$ to the scheme [\[init_scheme\]](#init_scheme){reference-type="eqref" reference="init_scheme"}-[\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}. Moreover, it is nonnegative, positive as soon as $n\geq 1$, and preserves the total mass $M_0$, namely $$\label{mass_cons} \displaystyle\sum_{K\in\mathcal T_\Omega} m_K v_K^n+\displaystyle\sum_{K^*\in\mathcal T_\omega} m_{K^*}u_{K^*}^n=M_0, \quad \forall n\geq 0.$$* *Proof.* Since the scheme is, at each time step, a square linear system, existence and uniqueness follow once the associated homogeneous system is shown to admit only the trivial solution. Assume thus $v_K^{n-1}=0$ for all $K\in\mathcal T_\Omega$ and $u_{K^*}^{n-1}=0$ for all $K^*\in\mathcal T_\omega$. Choosing $\varphi_K=\nu v_K^n$, $\varphi_{K^*}=\nu v_{K^*}^n$ and $\psi_{K^*}=\mu u_{K^*}^n$ in [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"} turns every term into a nonnegative one; since their sum vanishes, each term vanishes, which forces $v_K^n=0$, $v_{K^*}^n=0$ and $u_{K^*}^n=0$. This yields existence and uniqueness of a solution to the scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} at each time step. Assume now $v_K^{n-1}\geq 0$ for all $K\in\mathcal T_\Omega$ and $u_{K^*}^{n-1}\geq 0$ for all $K^*\in\mathcal T_\omega$. Choosing $\varphi_K=\nu(v_K^n)^-$, $\varphi_{K^*}=\nu (v_{K^*}^n)^-$ and $\psi_{K^*}=\mu (u_{K^*}^n)^-$ (where $x^-$ denotes the negative part of $x\in\mathbb R$) in [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"} yields by induction the nonnegativity of the solution to the scheme [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"} at each time step: $$v_K^n\geq 0, \; \forall K\in\mathcal T_\Omega,\quad v_{K^*}^n\geq 0,\, u_{K^*}^n\geq 0, \; \forall K^*\in\mathcal T_\omega.$$ Now that we have established the nonnegativity of the solution to [\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}, let us prove its positivity. Let $n\geq 1$ be given.
Assume by contradiction that there is a $K_0\in \mathcal T_\Omega$ such that $v_{K_0}^n=0$ (the case $u_{K_0^*}^n=0$ for a $K_0^*\in \mathcal T_\omega$ being treated similarly). From [\[scheme.v\]](#scheme.v){reference-type="eqref" reference="scheme.v"}, we deduce $v_L^n=0$ for all $L\in \mathcal T_\Omega$ neighbouring $K_0$ and $v_{K^*}^n=0$ for all $K^*\in \mathcal T_\omega$ bordering $K_0$. By repeating this, we get $v_K^n=v_{K^*}^n=0$ for all $(K,K^*)\in \mathcal T_\Omega\times \mathcal T_\omega$. From [\[scheme.interf\]](#scheme.interf){reference-type="eqref" reference="scheme.interf"}, we get $u_{K^*}^n=0$ for all $K^*\in \mathcal T_\omega$. By induction, we obtain $v_K^0=u_{K^*}^0=0$ for all $(K,K^*)\in \mathcal T_\Omega\times \mathcal T_\omega$, which contradicts $M_0>0$. If there is a ${K_0^*}\in\mathcal T_\omega$ such that $v_{K_0^*}^n=0$, [\[scheme.interf\]](#scheme.interf){reference-type="eqref" reference="scheme.interf"} implies that $v_{K_0}^n=0$ for $K_0$ such that $K_0|K_0^* \in\mathcal E_{\Omega}^r$ and we come back to the preceding case. Finally, we obtain the positivity of the discrete solutions as soon as $n\geq 1$. Last, choosing all test values equal to 1 in [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"} leads to the conservation of the total mass [\[mass_cons\]](#mass_cons){reference-type="eqref" reference="mass_cons"}. ◻ A steady-state is a solution of the form $\left((v_K^\infty)_{K\in\mathcal T_{\Omega}}, (v_{K^*}^\infty)_{K^*\in\mathcal T_{\omega}}, (u_{K^*}^\infty)_{K^*\in\mathcal T_{\omega}}\right)$, which is independent of $n$.
Choosing $\varphi_K=\nu v_K^\infty$, $\varphi_{K^*}=\nu v_{K^*}^\infty$ and $\psi_{K^*}=\mu u_{K^*}^\infty$ in [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"}, we get that the steady-state is actually constant in space: there are $v^\infty\geq 0$ and $u^\infty\geq 0$ such that $v_K^\infty=v^\infty=v_{K^*}^\infty$ and $u_{K^*}^\infty=u^\infty$, for all $K\in \mathcal T_\Omega$, $K^*\in \mathcal T_\omega$. We also obtain that $\nu v^\infty-\mu u^\infty=0$ and, from the mass conservation, that $m_\Omega v^\infty+m_\omega u^\infty=M_0$. Finally, the steady-state of the scheme coincides with the steady-state of the continuous problem defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. ## Exponential decay of discrete relative entropy {#ss:entropy-dis} As in the continuous case, we investigate the decay in time of some discrete relative entropies applied to the solution to the scheme. For any twice differentiable function $\Phi:[0,+\infty)\to [0,+\infty)$ such that $\Phi''>0,\ \Phi(1)=0,\ \Phi'(1)=0$, we define a discrete entropy, relative to the steady-state $(v^\infty,u^\infty)$, by $$\label{entropy} {\mathcal H}_\Phi^n:=\displaystyle\sum_{K\in\mathcal T_\Omega} m_K v^\infty\Phi (\displaystyle\frac{v_K^n}{v^\infty}) +\displaystyle\sum_{K^*\in\mathcal T_\omega} m_{K^*} u^\infty\Phi(\displaystyle\frac{u_{K^*}^n}{u^\infty}), \quad \forall n\geq 0.$$ This is obviously the discrete counterpart of [\[def-H\]](#def-H){reference-type="eqref" reference="def-H"} and Proposition [**Proposition** 7](#prop:dissipation-dis){reference-type="ref" reference="prop:dissipation-dis"} states that it is dissipated along time. ****Proposition** 7** (Entropy dissipation). 
*Let $$((v_K^n)_{K\in\mathcal T_{\Omega}, n\geq 0}, (v_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 1}, (u_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 0})$$ be the solution to the scheme [\[init_scheme\]](#init_scheme){reference-type="eqref" reference="init_scheme"}-[\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}, and $(v^\infty,u^\infty)$ the associated steady-state defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. Then the discrete entropy defined by [\[entropy\]](#entropy){reference-type="eqref" reference="entropy"} is dissipated along time, namely $$\label{entropy-dissipation} \displaystyle\frac{{\mathcal H}_\Phi^n-{\mathcal H}_\Phi^{n-1}}{\delta t} \leq -{\mathcal D}_\Phi^n \leq 0, \quad \forall n\geq 1,$$ where $$\begin{gathered} \label{dissipation} {\mathcal D}_\Phi^n:= d\sum_{\sigma=K|K^*}\tau_\sigma (v_K^n-v_{K^*}^n)\left(\Phi'(\frac{v_K^n}{v^\infty})-\Phi'(\frac{v_{K^*}^n}{v^\infty})\right) \\ +d\sum_{\sigma=K|L}\tau_\sigma(v_K^n-v_L^n)\left(\Phi'(\frac{v_K^n}{v^\infty})-\Phi'(\frac{v_L^n}{v^\infty})\right)\\ +D\sum_{\sigma^*=K^*|L^*}\tau_{\sigma^*}(u_{K^*}^n-u_{L^*}^n) \left(\Phi'(\frac{u_{K^*}^n}{u^\infty})-\Phi'(\frac{u_{L^*}^n}{u^\infty})\right)\\ +\mu u^\infty\sum_{K^*\in\mathcal T_\omega} m_{K^*}\left(\displaystyle\frac{u_{K^*}^n}{u^{\infty}}-\frac{v_{K^*}^n}{v^\infty}\right)\left(\Phi'(\frac{u_{K^*}^n}{u^{\infty}})-\Phi'(\frac{v_{K^*}^n}{v^\infty})\right)\end{gathered}$$ is the so-called dissipation.* *Proof.* Due to the convexity of $\Phi$, we have $$\displaystyle\frac{{\mathcal H}_\Phi^n-{\mathcal H}_\Phi^{n-1}}{\delta t}\leq \displaystyle\sum_{K\in\mathcal T_\Omega} m_K\frac{v_K^n-v_K^{n-1}}{\delta t}\Phi'(\frac{v_K^n}{v^\infty}) +\displaystyle\sum_{K^*\in\mathcal T_\omega} m_{K^*}\frac{u_{K^*}^n-u_{K^*}^{n-1}}{\delta t} {\Phi'}(\frac{u_{K^*}^n}{u^\infty}).$$ Then, we apply [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"} with
$$\varphi_K=\Phi'(\displaystyle\frac{v_K^n}{v^\infty}),\ \varphi_{K^*}=\Phi'(\displaystyle\frac{v_{K^*}^n}{v^\infty}),\ \psi_{K^*}=\Phi'(\displaystyle\frac{u_{K^*}^n}{u^\infty}),$$ which leads to the entropy-dissipation relation [\[entropy-dissipation\]](#entropy-dissipation){reference-type="eqref" reference="entropy-dissipation"}, with the dissipation term ${\mathcal D}_\Phi^n$ rewritten as [\[dissipation\]](#dissipation){reference-type="eqref" reference="dissipation"} thanks to [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. Moreover the dissipation is nonnegative thanks to the monotonicity of $\Phi'$. ◻ In the special case where $\Phi(s)=\Phi_2(s)=\frac 12 (s-1)^2$, we denote by ${\mathcal H}_2^n$ and ${\mathcal D}_2^n$ the corresponding entropy and dissipation at step $n$. They rewrite as $$\begin{aligned} {\mathcal H}_2^n&=&\displaystyle\frac{1}{2} \displaystyle\sum_{K\in\mathcal T_\Omega} m_K\frac{(v_K^n-v^\infty)^2}{v^\infty}+\displaystyle\frac{1}{2} \displaystyle\sum_{K^*\in\mathcal T_\omega} m_{K^*}\frac{(u_{K^*}^n-u^\infty)^2}{u^\infty},\label{H-2-bis}\\ {\mathcal D}_2^n&=&d\sum_{\sigma=K|K^*}\tau_\sigma\frac{(v_K^n-v_{K^*}^n)^2}{v^\infty}+d\sum_{\sigma=K|L}\tau_\sigma\frac{(v_K^n-v_L^n)^2}{v^\infty}\nonumber \\ &&+D\sum_{\sigma^*=K^*|L^*}\tau_{\sigma^*}\frac{(u_{K^*}^n-u_{L^*}^n)^2}{u^\infty}+ \mu u^\infty\sum_{K^*\in\mathcal T_\omega} m_{K^*}\left(\displaystyle\frac{u_{K^*}^n}{u^{\infty}}-\frac{v_{K^*}^n}{v^\infty}\right)^2.\label{D-2-bis}\end{aligned}$$ We note that the relative entropy ${\mathcal H}_2$ corresponds to a weighted $L^2$ distance between the solution to the scheme and the constant steady-state having the same total mass, while the dissipation ${\mathcal D}_2$ corresponds to a weighted $L^2$ norm of a discrete gradient of the solution on the field and the road, with additional exchange terms. 
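To make these discrete objects concrete, the following sketch assembles the scheme from the weak formulation [\[scheme_weak\]](#scheme_weak){reference-type="eqref" reference="scheme_weak"} on a toy uniform Cartesian mesh and checks, on the quadratic entropy [\[H-2-bis\]](#H-2-bis){reference-type="eqref" reference="H-2-bis"}, the mass conservation of Lemma [**Lemma** 6](#lem){reference-type="ref" reference="lem"} and the entropy decay granted by Proposition [**Proposition** 7](#prop:dissipation-dis){reference-type="ref" reference="prop:dissipation-dis"}. The geometry ($\omega=(0,1)$, $L=1$), the transmissibilities $\tau_\sigma=m_\sigma/d_\sigma$ of a standard TPFA Cartesian mesh, and all numerical values are illustrative assumptions:

```python
import numpy as np

# Toy assembly of the field-road scheme from the weak formulation:
# assumed geometry omega=(0,1), L=1, uniform nx x ny field cells,
# TPFA transmissibilities tau = m_sigma / d_sigma; values illustrative.
nx, ny = 4, 4
hx, hy = 1.0 / nx, 1.0 / ny
d, D, mu, nu, dt = 1.0, 1.0, 2.0, 1.0, 0.05

nv = nx * ny                                   # field unknowns v_K
idx = lambda i, j: i + nx * j                  # cell (i,j), j=0 next to the road
n_unk = nv + 2 * nx                            # + interface v_{K*} + road u_{K*}
A = np.zeros((n_unk, n_unk))                   # stiffness + exchange bilinear form
Mdiag = np.zeros(n_unk)                        # masses (none on the traces v_{K*})
Mdiag[:nv] = hx * hy
Mdiag[nv + nx:] = hx

def add_edge(p, q, t):                         # contributes t*(x_p-x_q)*(y_p-y_q)
    A[p, p] += t; A[q, q] += t; A[p, q] -= t; A[q, p] -= t

for j in range(ny):                            # field/field edges
    for i in range(nx):
        if i + 1 < nx: add_edge(idx(i, j), idx(i + 1, j), d * hy / hx)
        if j + 1 < ny: add_edge(idx(i, j), idx(i, j + 1), d * hx / hy)
for i in range(nx):                            # field/road interface edges K|K*
    add_edge(idx(i, 0), nv + i, d * 2.0 * hx / hy)
for i in range(nx - 1):                        # road edges K*|L*
    add_edge(nv + nx + i, nv + nx + i + 1, D / hx)
for i in range(nx):                            # exchange m_{K*}(mu u - nu v*)(psi-phi)
    u_i, vs_i = nv + nx + i, nv + i
    A[u_i, u_i] += hx * mu;  A[u_i, vs_i] -= hx * nu
    A[vs_i, u_i] -= hx * mu; A[vs_i, vs_i] += hx * nu

x = np.zeros(n_unk)
x[nv + nx:] = 1.0                              # initial data: all the mass on the road
M0 = Mdiag @ x                                 # = sum m_K v_K + sum m_{K*} u_{K*}
v_inf = mu * M0 / (mu * 1.0 + nu * 1.0)        # steady state: nu v = mu u, mass M0
u_inf = nu * M0 / (mu * 1.0 + nu * 1.0)        # (here m_Omega = m_omega = 1)

def H2(x):                                     # discrete quadratic entropy (H-2-bis)
    v, u = x[:nv], x[nv + nx:]
    return (0.5 * hx * hy * np.sum((v - v_inf) ** 2) / v_inf
            + 0.5 * hx * np.sum((u - u_inf) ** 2) / u_inf)

S = np.diag(Mdiag / dt) + A                    # implicit Euler: S x^n = M x^{n-1}/dt
ent = [H2(x)]
for _ in range(50):
    x = np.linalg.solve(S, Mdiag * x / dt)
    ent.append(H2(x))

assert np.isclose(Mdiag @ x, M0)               # mass conservation (Lemma 6)
assert all(b <= a + 1e-12 for a, b in zip(ent, ent[1:]))  # entropy decay (Prop. 7)
```

The columns of the assembled matrix sum to zero, which is the algebraic counterpart of testing the weak formulation with the constant vector equal to 1; this is why the total mass is conserved exactly, up to rounding errors.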
As in the continuous case, there exists a relation between the entropy ${\mathcal H}_2$ and its dissipation ${\mathcal D}_2$ given in Theorem [**Theorem** 8](#th:unconv-PW-dis){reference-type="ref" reference="th:unconv-PW-dis"}, which yields the exponential decay of ${\mathcal H}_2$ stated next in Theorem [**Theorem** 9](#th:H2-decroit-dis){reference-type="ref" reference="th:H2-decroit-dis"}. ****Theorem** 8** (Relating entropy and dissipation). *Let $$((v_K^n)_{K\in\mathcal T_{\Omega}, n\geq 0}, (v_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 1}, (u_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 0})$$ be the solution to the scheme [\[init_scheme\]](#init_scheme){reference-type="eqref" reference="init_scheme"}-[\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}, and $(v^\infty,u^\infty)$ the associated steady-state defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. Then, there holds $$\label{rel-HD} {\mathcal H}_2^n\leq \displaystyle\frac{1}{\Lambda} {\mathcal D}_2^n, \quad \forall n\geq 1,$$ for some positive constant $\Lambda$ depending on the dimension $N$, the domain $\Omega$ (including $\omega$ and $L$), the transfer rates $\mu$, $\nu$, and the diffusion coefficients $d$, $D$.* ****Theorem** 9** (Exponential decay of discrete entropy ). *Let $$((v_K^n)_{K\in\mathcal T_{\Omega}, n\geq 0}, (v_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 1}, (u_{K^*}^n)_{K^*\in\mathcal T_{\omega}, n\geq 0})$$ be the solution to the scheme [\[init_scheme\]](#init_scheme){reference-type="eqref" reference="init_scheme"}-[\[scheme\]](#scheme){reference-type="eqref" reference="scheme"}, and $(v^\infty,u^\infty)$ the associated steady-state defined by [\[def-steady-cst\]](#def-steady-cst){reference-type="eqref" reference="def-steady-cst"}. 
Then the entropy defined by [\[H-2-bis\]](#H-2-bis){reference-type="eqref" reference="H-2-bis"} decays exponentially, namely $$0\leq {\mathcal H}_2^n\leq (1+\Lambda \,\delta t)^{-n} {\mathcal H}_2^0, \quad \forall n\geq 0,$$ where $\Lambda$ comes from Theorem [**Theorem** 8](#th:unconv-PW-dis){reference-type="ref" reference="th:unconv-PW-dis"}.* Note that, as easily checked via the Cauchy-Schwarz inequality, the initial discrete entropy ${\mathcal H}_2^0$ is smaller than the initial continuous one, so that $$0\leq {\mathcal H}_2^n\leq (1+\Lambda \,\delta t)^{-n} \left( \frac{1}{2v^\infty} \int_\Omega (v^0-v^\infty)^2\, dxdy+\displaystyle\frac{1}{2u^{\infty}}\int_\omega (u^0-u^{\infty})^2\,dx\right), \quad \forall n\geq 0.$$ Moreover, the exponential decay of ${\mathcal H}_2$ implies the exponential decay of the discrete densities towards the steady-state in $L^2$. ## Relating discrete entropy and discrete dissipation, proof of Theorem [**Theorem** 8](#th:unconv-PW-dis){reference-type="ref" reference="th:unconv-PW-dis"} {#relating-discrete-entropy-and-discrete-dissipation-proof-of-theorem-thunconv-pw-dis} Just as the proof of Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"} is based on the proof of the Poincaré-Wirtinger inequality, the proof of Theorem [**Theorem** 8](#th:unconv-PW-dis){reference-type="ref" reference="th:unconv-PW-dis"} is based on the proof of the discrete Poincaré-Wirtinger inequality given in [@Eym-Gal-Her-00]. The discrete Poincaré-Wirtinger inequality applies to functions which are piecewise constant in space on a bounded domain $U$ and therefore do not belong to $H^1(U)$.
More precisely, if ${\mathcal M}=(\mathcal T,\mathcal E,\mathcal P)$ is an admissible mesh of $U$, we denote by $X(\mathcal T)$ the set of piecewise constant functions defined by $$f\in X(\mathcal T) \Longleftrightarrow \exists (f_K)_{K\in \mathcal T} \in \mathbb R^\mathcal T, \, f=\sum_{K\in\mathcal T} f_K {\mathbf 1}_K.$$ We start by recalling in Lemma [**Lemma** 10](#lem:PW-dis){reference-type="ref" reference="lem:PW-dis"} a key inequality in the proof of the discrete Poincaré-Wirtinger inequality, see (10.13) in [@Eym-Gal-Her-00 Proof of Lemma 10.2]. ****Lemma** 10**. *Let $U$ be a polygonal bounded convex open set of $\mathbb R^d$ ($d\geq 1$) and ${\mathcal M}= (\mathcal T,\mathcal E,\mathcal P)$ be an admissible mesh of $U$. Then, for any $f\in X(\mathcal T)$, we have $$\label{ineg-PW-dis} \iint_{U^2} \left(f(x)-f(y)\right)^2 \,dxdy\leq C_{d,U}\, (\text{Diam } U)^2\, \sum_{\sigma=K|L} \tau_\sigma (f_K-f_L)^2,$$ with $C_{d,U}$ the measure in $\mathbb R^d$ of balls of radius $\text{Diam } U$.* Having in mind this classical result, we now turn to the proof of the so-called unconventional discrete Poincaré-Wirtinger inequality [\[rel-HD\]](#rel-HD){reference-type="eqref" reference="rel-HD"}, which parallels that of Theorem [**Theorem** 2](#th:unconv-PW){reference-type="ref" reference="th:unconv-PW"}. *Proof of Theorem [**Theorem** 8](#th:unconv-PW-dis){reference-type="ref" reference="th:unconv-PW-dis"}.* For $\ell>0$, we "enlarge" $\Omega=\omega \times (0,L)$ to $\Omega^+=\omega\times (-\ell,L)$. We denote by $\Omega_\ell =\omega \times (-\ell,0)$ the so-called thickened road. Reference points in $\Omega^+$ will be denoted $X=(x,y)$, $X'=(x',y')$, with $x$, $x'$ in $\omega$ and $y$, $y'$ in $(-\ell,L)$. Let us note that, based on the two meshes $\mathcal M_\Omega$ and $\mathcal M_\omega$, we can easily define a mesh of $\Omega^+$ just by "enlarging" the control volumes of $\omega$ to $\omega \times (-\ell,0)$.
We work with the probability measure already defined in [\[d-sigma\]](#d-sigma){reference-type="eqref" reference="d-sigma"}, namely $$d\rho=\left(\displaystyle\frac{v^\infty}{M_0} {\mathbf 1}_{\Omega}(x,y)+\frac{1}{\ell}\frac{u^\infty}{M_0} {\mathbf 1}_{\Omega_\ell}(x,y)\right)\,dxdy.$$ We omit the time variable $n$ in the sequel. We consider $f$ the piecewise constant function on $\Omega^+$ defined by $$\label{f(x,y)-dis} f(x,y)=\displaystyle\sum_{K\in\mathcal T_\Omega} \frac{v_K}{v^\infty} {\mathbf 1}_K(x,y)+\displaystyle\sum_{K^*\in\mathcal T_\omega} \frac{u_{K^*}}{u^\infty} {\mathbf 1}_{K^*\times (-\ell,0)}(x,y), \quad (x,y)\in \Omega^+.$$ It satisfies $$\langle f \rangle:=\int_{\Omega ^+} f\, d\rho=1\mbox{ and } {\mathcal H}_2=\frac{M_0}{2}\Vert f-\langle f \rangle\Vert_{L^2(\Omega^+, d\rho)}^2.$$ Let us also introduce $v\in X(\mathcal T_\Omega)$, $u\in X(\mathcal T_{\omega})$ and $v^*\in X(\mathcal T_{\omega})$ defined by $$v= \sum_{K\in\mathcal T_\Omega} v_K{\mathbf 1}_K,\quad u=\displaystyle\sum_{K^*\in\mathcal T_\omega} {u_{K^*}} {\mathbf 1}_{K^*},\quad v^*=\displaystyle\sum_{K^*\in\mathcal T_\omega}{v_{K^*}} {\mathbf 1}_{K^*}.$$ As in the continuous case, see [\[1\]](#1){reference-type="eqref" reference="1"} and [\[2\]](#2){reference-type="eqref" reference="2"}, ${\mathcal H}_2$ can be rewritten as $${\mathcal H}_2= \frac{M_0}{4}(I_{\Omega,\Omega}+2I_{\Omega,\Omega_\ell}+I_{\Omega_\ell,\Omega_\ell}),$$ where $$I_{A,B}:=\int _{X\in A}\int_{X'\in B} \left(f(X)-f(X')\right)^2 \rho(X)\rho(X')\, dX dX'.$$ The terms $I_{\Omega, \Omega}$ and $I_{\Omega_\ell, \Omega_\ell}$ can be estimated as in the proof of the discrete mean Poincaré inequality. 
Indeed, applying Lemma [**Lemma** 10](#lem:PW-dis){reference-type="ref" reference="lem:PW-dis"}, we get $$\begin{aligned} I_{\Omega, \Omega}&=&\frac{1}{M_0^2}\int _{X\in \Omega}\int_{X'\in \Omega} \left(v(X)-v(X')\right)^2 \, dX dX'\\ &\leq &\frac{v^\infty}{M_0^2}C_{N,\Omega} \, (\text{Diam } \Omega)^2\displaystyle\sum_{\sigma=K|L}\tau_\sigma\frac{(v_K-v_L)^2}{v^\infty},\end{aligned}$$ and $$\begin{aligned} I_{\Omega_\ell, \Omega_\ell}&=&\frac{1}{M_0^2}\int _{x\in \omega}\int_{x'\in \omega} \left(u(x)-u(x')\right)^2 \, dx dx'\\ &\leq & \frac{u^\infty}{M_0^2}C_{N-1,\omega} \, (\text{Diam } \omega)^2\, \displaystyle\sum_{\sigma^*=K^*|L^*}\tau_{\sigma^*}\frac{(u_{K^*}-u_{L^*})^2}{u^\infty}.\end{aligned}$$ It remains to estimate the so-called unconventional term involving crossed terms, namely $I_{\Omega, \Omega_\ell}$. We write $$\begin{aligned} I_{\Omega,\Omega_\ell}&=&\int_{X\in \Omega}\int_{X'\in \Omega_\ell}\left(f(X)-f(X')\right)^2 \frac{v^\infty u^\infty}{M_0^2}\frac 1 \ell\, dX'dX\\ &=& \int_{X\in \Omega}\int_{x'\in \omega}\left(\frac{v(x,y)}{v^\infty}-\frac{u(x')}{u^\infty}\right)^2 \frac{v^\infty u^\infty}{M_0^2}\, dx'dxdy.\end{aligned}$$ Introducing $v^*$ as in the continuous case, we obtain $I_{\Omega,\Omega_\ell}\leq 3(I_{\Omega,\Omega_\ell}^1+I_{\Omega,\Omega_\ell}^2+I_{\Omega,\Omega_\ell}^3)$, with $$\begin{aligned} I_{\Omega,\Omega_\ell}^1&=&\frac{u^\infty m_\omega}{v^\infty M_0^2}\int_{x\in\omega}\int_{y\in (0,L)}(v(x,y)-v^*(x))^2 \, dy dx,\\ I_{\Omega,\Omega_\ell}^2&=&\frac{v^\infty u^\infty m_\Omega}{M_0^2} \int_{x\in\omega}\left(\frac{v^*(x)}{v^\infty}-\frac{u(x)}{u^\infty}\right)^2 \, dx,\\ I_{\Omega,\Omega_\ell}^3&=&\frac{v^\infty L}{u^\infty}I_{\Omega_\ell, \Omega_\ell}.\end{aligned}$$ Hence, the estimate of $I_{\Omega,\Omega_\ell}^3$ follows from that of $I_{\Omega_\ell, \Omega_\ell}$ above.
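The factor $3$ in the bound on $I_{\Omega,\Omega_\ell}$ comes from splitting $\frac{v(x,y)}{v^\infty}-\frac{u(x')}{u^\infty}$ into three differences through the intermediate quantities $\frac{v^*(x)}{v^\infty}$ and $\frac{u(x)}{u^\infty}$, together with the elementary Young-type bound $(a+b+c)^2\leq 3(a^2+b^2+c^2)$, which the following one-line check confirms on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.normal(size=(3, 100_000))
# Young-type inequality used to split the crossed term into I^1, I^2, I^3.
assert np.all((a + b + c) ** 2 <= 3.0 * (a ** 2 + b ** 2 + c ** 2))
```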
Next, it is obvious that $$I_{\Omega,\Omega_\ell}^2=\frac{v^\infty u^\infty m_\Omega}{M_0^2} \displaystyle\sum_{K^*\in\mathcal T_\omega} m_{K^*}\left(\frac{v_{K^*}}{v^\infty}-\frac{u_{K^*}}{u^\infty}\right)^2,$$ which is, up to a multiplicative constant, the non-gradient term in the definition of the dissipation, see [\[D-2-bis\]](#D-2-bis){reference-type="eqref" reference="D-2-bis"}. It thus only remains to estimate $I_{\Omega,\Omega_\ell}^1$. To do so, we adapt the proof of the discrete Poincaré inequality in [@Eym-Gal-Her-00 Lemma 10.2]. For $\sigma \in \mathcal E_\Omega$, we define the function $\chi_{\sigma}:\Omega=\omega\times (0,L) \to \{0, 1\}$ by $$\chi_\sigma(x,y):=\left\{\begin{array}{ll} 1 & \text{ if $\sigma$ intersects the vertical segment connecting $(x,y)$ to $(x,0)$},\\ 0 & \text{ if not}.\\ \end{array} \right.$$ Therefore, for all $x\in\omega$ and all $y\in (0,L)$, $$\vert v(x,y)-v^*(x)\vert \leq \sum_{\sigma =K|L} \vert v_K-v_L\vert \chi_{\sigma}(x,y)+\sum_{\sigma =K|K^*}\vert v_K-v_{K^*}\vert \chi_{\sigma}(x,y).$$ Let $c_{\sigma}=\vert {\mathbf e}\cdot {\mathbf n}_{\sigma}\vert$, where ${\mathbf e}$ is a unit vector along the vertical direction and ${\mathbf n}_{\sigma}$ is a unit normal vector to $\sigma$. By the Cauchy-Schwarz inequality, we have $$\begin{gathered} \label{diff-vv*} ( v(x,y)-v^*(x))^2\leq \left(\sum_{\sigma =K|L} \frac{(v_K-v_L)^2}{d_{\sigma} c_{\sigma}}\chi_{\sigma}(x,y)+\sum_{\sigma =K|K^*}\frac{( v_K-v_{K^*})^2}{d_{\sigma} c_{\sigma}} \chi_{\sigma}(x,y)\right)\\ \times \left(\sum_{\sigma =K|L}d_{\sigma} c_{\sigma}\chi_{\sigma}(x,y)+\sum_{\sigma =K|K^*}d_{\sigma} c_{\sigma}\chi_{\sigma}(x,y)\right).\end{gathered}$$ As in the proof of [@Eym-Gal-Her-00 Lemma 10.2], the second factor in the above right-hand side is smaller than the "vertical diameter", that is $L$. Let us now integrate [\[diff-vv\*\]](#diff-vv*){reference-type="eqref" reference="diff-vv*"} over $(x,y)\in \omega\times (0,L)$.
Noticing that $$\int_{x\in\omega}\int_{y\in (0,L)}\chi_{\sigma}(x,y)\, dxdy \leq m_{\sigma} c_{\sigma} L,$$ we get $$I_{\Omega,\Omega_\ell}^1\leq\frac{u^\infty m_\omega}{v^\infty M_0^2} L^2 \left(\sum_{\sigma =K|L}\tau_\sigma (v_K-v_L)^2 + \sum_{\sigma =K|K^*} \tau_\sigma (v_K-v_{K^*})^2\right).$$ Gathering all the bounds, we obtain [\[rel-HD\]](#rel-HD){reference-type="eqref" reference="rel-HD"} with a decay rate $$\Lambda=\min(c_1d; c_2D; c_3),$$ where the $c_i$'s ($1\leq i\leq 3$) are positive constants depending only on $N$, $\Omega$, $\mu$ and $\nu$. ◻

# Numerical experiments {#s:num}

For all the numerical experiments performed in this section, we consider the one-dimensional road $\omega=(-2L,2L)$, and the two-dimensional field $\Omega=\omega\times(0,L)$, where we fix $L=20$.

## Test cases and profiles {#ss:test}

In this subsection we choose one set of parameters, but consider different initial conditions that lead to different test cases. Recently, the effect of the founding population, in particular of its fragmentation, on the success rate of an invasion has received a lot of attention, see [@Dru-et-al-07], [@Gar-Roq-Ham-12], [@Maz-Nad-Tol-21], [@Alf-Ham-Roq-preprint] and the references therein. Related to this question, we want to check how convergence to the steady state depends on the initial distribution of individuals, using the following four test cases. Initially, the road is empty and individuals are grouped together in the field, say: $$\left\{\begin{aligned} v_0(x,y)&=100\cdot{\mathbf 1}_{[-2.5,2.5]\times [2.5,7.5]}(x,y),\\ u_0(x)&=0. \end{aligned} \right.$$ Initially, individuals are grouped together on the road and in the field, say: $$\left\{\begin{aligned} v_0(x,y)&=150\cdot {\mathbf 1}_{[-2.5,2.5]\times [2.5,5]}(x,y),\\ u_0(x)&=125\cdot {\mathbf 1}_{[-2.5,2.5]}(x).
\end{aligned} \right.$$ Initially, the road is empty and individuals are scattered in the field, say: $$\left\{\begin{aligned} &v_0(x,y)=100\cdot {\mathbf 1}_{[-10,-7.5]\cup[-5,-2.5]\cup[2.5,5]\cup[7.5,10]}(x)\cdot {\mathbf 1}_{[7.5,10]}(y),\\ &u_0(x)=0. \end{aligned} \right.$$ Initially, individuals are scattered on the road and in the field, say: $$\left\{\begin{aligned} &v_0(x,y)=150\cdot{\mathbf 1}_{[-10,-7.5]\cup[-5,-2.5]\cup[2.5,5]\cup[7.5,10]}(x)\cdot {\mathbf 1}_{[8.75,10]}(y),\\ &u_0(x)=62.5\cdot {\mathbf 1}_{[-10,-7.5]\cup[-5,-2.5]\cup[2.5,5]\cup[7.5,10]}(x). \end{aligned} \right.$$ Note that all these initial conditions lead to the same total mass $M_0=2500$. We fix $\mu=1$ and $\nu=5$, so that all four test cases have the same steady state, namely $(v^\infty,u^\infty)=(1.25,6.25)$. We also set the diffusion coefficient in the field to $d=1$ and on the road to $D=1$. The mesh used for the simulations is a triangular mesh with 14336 triangles, and the time step is equal to $10^{-1}$. Figures [8](#fig:CT12_champ){reference-type="ref" reference="fig:CT12_champ"} and [16](#fig:CT12_route){reference-type="ref" reference="fig:CT12_route"} show the density profiles in the field and on the road respectively for Test Cases 1 and 2 at different times, while Figures [24](#fig:CT34_champ){reference-type="ref" reference="fig:CT34_champ"} and [32](#fig:CT34_route){reference-type="ref" reference="fig:CT34_route"} show the density profiles for Test Cases 3 and 4. We observe that the solutions to Test Cases 1 and 2 quickly show comparable behavior, as do the solutions to Test Cases 3 and 4. This suggests that the initial presence or absence of individuals on the road has little effect on the solutions. On the other hand, we observe that the homogenization is slightly faster in Test Cases 3-4 than in Test Cases 1-2 (compare at $T=50$ for $v$ and at $T=100$ for $u$).
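The common mass and steady state quoted above can be recomputed directly from the data of the four test cases. The sketch below (illustrative Python, assuming the balance relation $\mu u^\infty=\nu v^\infty$ between road and field densities together with mass conservation, which is consistent with the values given) checks $M_0$, $v^\infty$ and $u^\infty$:

```python
# Supports and total masses of the four test cases: L = 20, omega = (-2L, 2L),
# Omega = omega x (0, L); all four initial conditions carry the mass M0 = 2500.
L = 20.0
m_omega = 4 * L              # |omega| = 80
m_Omega = m_omega * L        # |Omega| = 1600

masses = {
    1: 100 * (5.0 * 5.0),                           # grouped, field only
    2: 150 * (5.0 * 2.5) + 125 * 5.0,               # grouped, field + road
    3: 100 * (4 * 2.5) * 2.5,                       # scattered, field only
    4: 150 * (4 * 2.5) * 1.25 + 62.5 * (4 * 2.5),   # scattered, field + road
}
assert all(m == 2500.0 for m in masses.values())

# Steady state: u_inf = (nu/mu) * v_inf and v_inf*|Omega| + u_inf*|omega| = M0.
mu, nu, M0 = 1.0, 5.0, 2500.0
v_inf = M0 / (m_Omega + (nu / mu) * m_omega)
u_inf = (nu / mu) * v_inf
assert (v_inf, u_inf) == (1.25, 6.25)
```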
The reason is that Test Cases 3-4 correspond to more fragmented founding populations, for which it is easier to invade the whole domain (especially in the presence of moderate diffusion coefficients as chosen here, $D=d=1$). The first observations above deal with transient dynamics. In order to compute the decay rates towards the steady state, we have to move to larger time horizons. Figure [33](#fig:CT1to4_entropies){reference-type="ref" reference="fig:CT1to4_entropies"} shows the time decay of the relative entropies (divided by the value at the first time step) for the four test cases. The computations were stopped when the computed ratio of the relative entropies reached the value $10^{-5}$. We observe that, in accordance with Theorems [**Theorem** 3](#th:H2-decroit){reference-type="ref" reference="th:H2-decroit"} and [**Theorem** 9](#th:H2-decroit-dis){reference-type="ref" reference="th:H2-decroit-dis"}, the computed decay rate is the same for the four test cases, namely $\Lambda_{num}= 0.0123$, even though the transient behavior is slightly different, as already observed above.
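For reference, a rate such as $\Lambda_{num}$ can be read off an entropy time series by a least-squares fit of $\log {\mathcal H}_2$ against time. The helper below is a hypothetical sketch of this post-processing step (not the code actually used for the experiments), checked on synthetic exponentially decaying data:

```python
import numpy as np

def decay_rate(times, entropies):
    """Least-squares slope of log(entropy) vs. time, returned as a positive rate."""
    slope, _ = np.polyfit(times, np.log(entropies), 1)
    return -slope

# Synthetic check: an entropy decaying like exp(-Lambda * t) is recovered.
t = np.linspace(0.0, 400.0, 4000)
H = 3.7 * np.exp(-0.0123 * t)
assert abs(decay_rate(t, H) - 0.0123) < 1e-8
```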
![Profiles of $v$, density in the field, for Test Case 1 (on the left) and Test Case 2 (on the right) at different times: $T=1$, $T=10$, $T=50$, $T=100$ (from top to bottom).](CT1_instant1_champ.png){#fig:CT12_champ width="45%"}
![](CT2_instant1_champ.png){width="45%"}
![](CT1_instant10_champ.png){width="45%"}
![](CT2_instant10_champ.png){width="45%"}
![](CT1_instant50_champ.png){width="45%"}
![](CT2_instant50_champ.png){width="45%"}
![](CT1_instant100_champ.png){width="45%"}
![](CT2_instant100_champ.png){width="45%"}

![Profiles of $u$, density on the road, for Test Case 1 (on the left) and Test Case 2 (on the right) at different times: $T=1$, $T=10$, $T=50$, $T=100$ (from top to bottom).](CT1_instant1_route.png){#fig:CT12_route width="35%"}
![](CT2_instant1_route.png){width="35%"}
![](CT1_instant10_route.png){width="35%"}
![](CT2_instant10_route.png){width="35%"}
![](CT1_instant50_route.png){width="35%"}
![](CT2_instant50_route.png){width="35%"}
![](CT1_instant100_route.png){width="35%"}
![](CT2_instant100_route.png){width="35%"}

![Profiles of $v$, density in the field, for Test Case 3 (on the left) and Test Case 4 (on the right) at different times: $T=1$, $T=10$, $T=50$, $T=100$ (from top to bottom).](CT3_instant1_champ.png){#fig:CT34_champ width="45%"}
![](CT4_instant1_champ.png){width="45%"}
![](CT3_instant10_champ.png){width="45%"}
![](CT4_instant10_champ.png){width="45%"}
![](CT3_instant50_champ.png){width="45%"}
![](CT4_instant50_champ.png){width="45%"}
![](CT3_instant100_champ.png){width="45%"}
![](CT4_instant100_champ.png){width="45%"}

![Profiles of $u$, density on the road, for Test Case 3 (on the left) and Test Case 4 (on the right) at different times: $T=1$, $T=10$, $T=50$, $T=100$ (from top to bottom).](CT3_instant1_route.png){#fig:CT34_route width="35%"}
![](CT4_instant1_route.png){width="35%"}
![](CT3_instant10_route.png){width="35%"}
![](CT4_instant10_route.png){width="35%"}
![](CT3_instant50_route.png){width="35%"}
![](CT4_instant50_route.png){width="35%"}
![](CT3_instant100_route.png){width="35%"}
![](CT4_instant100_route.png){width="35%"}

![Evolution in time of the relative entropies (divided by the value at the first time step) for the four test cases.](CT1to4_entropies.png){#fig:CT1to4_entropies width="50%"}

## Entropy decay rate as a function of the different parameters {#ss:entropy-decay-rate}

In the light of the four test cases in subsection [4.1](#ss:test){reference-type="ref" reference="ss:test"}, we believe that the founding population, in particular its location and fragmentation, has an effect on the behavior for small times but not on the asymptotic decay rate. Therefore, to study the latter, we now restrict ourselves to Test Case 1. We fix $\mu=1$ and $\nu=5$ as before. Next, we fix $d=1$, respectively $D=1$, and compute the decay rate $\Lambda_{num}$ as a function of $D>0$, respectively $d>0$. The results are shown in Figure [35](#fig:D-d){reference-type="ref" reference="fig:D-d"}. They are obtained with a time step of $\Delta t = 10^{-1}$ and with a mesh of 3584, respectively 896, triangles for the dependence on $D$, respectively $d$. As the decay rate tends to $0$, a very large number of time steps is needed to compute a relevant value, and the coarser the mesh, the faster the computation. First we note that, as expected, the decay rate is increasing w.r.t. both $D>0$ and $d>0$. Also, we observe that $\Lambda_{num}(d=1,D)\in (9.50\cdot 10^{-3}, 2.51 \cdot 10^{-2})$ while $\Lambda_{num}(d,D=1)\in (0, 2.36)$. In view of these ranges of values taken by $\Lambda_{num}(d,D)$, $d$ seems to have a larger influence on the decay rate than $D$. Next, we compare the numerically computed $\Lambda_{num}=\Lambda_{num}(d,D)$ with the rate $\Lambda_2=\Lambda_2(d,D)$ provided by [\[Lambda-2\]](#Lambda-2){reference-type="eqref" reference="Lambda-2"}. Indeed, [\[Lambda-2\]](#Lambda-2){reference-type="eqref" reference="Lambda-2"} can be recast as $$\Lambda_2(d,D)=\left\{ \begin{array}{ll} \min\left(c_1; c_2 D\right) & \mbox{ if } d=1,\vspace{3pt} \\ \min\left(c_3 d; c_4\right) & \mbox{ if } D=1,\end{array} \right.$$ for some $c_i>0$ ($1\leq i \leq 4$). First, we consider the case of large diffusion coefficients.
Both $\Lambda_2(1,+\infty)$ and $\Lambda_2(+\infty,1)$ are positive constants, and so are $\Lambda_{num}(1,+\infty)$ and $\Lambda_{num}(+\infty,1)$, which is qualitatively satisfactory. Next, we consider the case of small diffusion coefficients. We observe that $\Lambda_{num}(d,1)\to 0$ as $d\to 0$, which was expected since $\Lambda_2(d,D)\to 0$ as $d\to 0$. The reason is that, if $0<d \ll 1$, individuals invade the whole field (in particular the zone far from the road) very slowly. More interestingly, we observe that $\Lambda_{num}(1,D)$ tends to a nonzero value as $D\to 0$. This can be understood as follows: even if $0<D\ll 1$, individuals are still able to invade the whole road via a combination of "invasion of the field" and "exchange terms". However, notice that $\Lambda_2(d,D)\to 0$ as $D\to 0$, revealing that our analysis is far from optimal in the regime $0<D\ll 1$. This sheds light on the (different) roles of $d$ and $D$. The singular limit problem $D\to 0$ would also deserve further investigation.
![Computed decay rate $\Lambda_{num}$ as a function of $D$ (on the left, in log-lin scale) and as a function of $d$ (on the right, in log-log scale).](Decayrates_vs_coeff_D.png){#fig:D-d width="45%"}
![](Decayrates_vs_coeff_d_petit.png){width="44%"}

M. A. is supported by the *région Normandie* project BIOMA-NORMAN 21E04343 and the ANR project DEEV ANR-20-CE40-0011-01. C. C.-H. acknowledges support from the Labex CEMPI (ANR-11-LABX-0007-01).

[^1]: Univ. Rouen Normandie, CNRS, LMRS UMR 6085, F-76000 Rouen, France.

[^2]: Univ. Lille, CNRS, Inria, UMR 8524 - Laboratoire Paul Painlevé, F-59000 Lille, France.
--- abstract: | We use Lagrangian specialization to compute the degree of the Gauss map on Theta divisors with transversal $\mathrm{A}_1$ singularities. This computes the Gauss degree for a general abelian variety in the loci $\mathcal{A}^\delta_{t,g-t}$ that form some of the irreducible components of the Andreotti-Mayer loci. We also prove that the first coefficient of the Lagrangian specialization is the Samuel multiplicity of the singular locus. author: - Constantin Podelski bibliography: - main.bib title: The Gauss Map on Theta Divisors with Transversal $\mathrm{A}_1$ Singularities --- # Introduction The Gauss map makes use of the linear nature of abelian varieties by attaching to a smooth point of a divisor, its tangent space translated at the origin. This map was already used by Andreotti [@Andreotti1958] in his beautiful proof of Torelli's theorem, and its geometry is intimately connected with the singularities of the theta divisor. Let $\mathcal{A}_g$ be the moduli space of principally polarized abelian varieties of dimension $g$ over the complex numbers. Let $(A,\Theta)\in \mathcal{A}_g$, then the Gauss map $$\mathcal{G}_\Theta:\Theta \dashrightarrow \mathbb{P}(T_0^\vee A)\simeq \mathbb{P}^{g-1}$$ is the rational map defined by the complete linear system $|L|$ where $L=\mathcal{O}_A(\Theta)\raisebox{-.5ex}{$|$}_{\Theta}$ denotes the normal bundle to the hypersurface $\Theta\subset A$. The Gauss map is generically finite if and only if $(A,\Theta)$ is indecomposable as a ppav (see [@Birkenhake2004 Sec. 4.4] for generalities about $\mathcal{G}_\Theta$). The generic degree of $\mathcal{G}_\Theta$ is unknown beyond a few cases: 1. For smooth Theta divisors, the degree is $[\Theta]^{g}=g!$ (Ex. [Example 1](#Ex: Deg Gauss map smooth theta){reference-type="ref" reference="Ex: Deg Gauss map smooth theta"}). 2. For non-hyperelliptic (resp. hyperelliptic) Jacobians, the degree is $\binom{2g-2}{g-1}$ (resp. $2^{g-1}$) [@arbarello 247]. 3. 
For a general Prym variety the degree is $D(g+1)+2^{g-2}$, where $D(g)$ is the degree of the variety of all quadrics of rank $\leq 3$ in $\mathbb{P}^{g-1}$ [@Verra98]. Another case where the degree of the Gauss map is straightforward to compute is when $\Theta$ has isolated singularities. In this case [@Gru17 Rem. 2.8] $$\deg \mathcal{G}_\Theta = g! - \sum_{z\in \mathrm{Sing}(\Theta)} \mathrm{mult}_z \Theta \,,$$ where $\mathrm{mult}_z \Theta$ is the Samuel multiplicity as defined for example in [@Fulton1998 Sec 4.3]. For a complex variety $X$ of dimension $n$, we denote by $$\chi(X)=\sum_{i=0}^{2n} (-1)^i \dim_\mathbb{Q}\mathrm{H}^i(X,\mathbb{Q})$$ the topological Euler characteristic. We say that $X$ has *transversal $\mathrm{A}_1$ singularities* if for all points $x\in \mathrm{Sing}(X)$, there is a local analytic isomorphism $$(X,x) \simeq (V(x_1^2+\cdots+x_k^2),0)\subset (\mathbb{C}^{n+1},0) \,,$$ for some $k\geq 2$, where $x_1,\dots,x_{n+1}$ are coordinates on $\mathbb{C}^{n+1}$. In a sense, this is the simplest kind of singularities to handle after isolated singularities. We have the following: **Theorem 1** ([Theorem 1](#Theorem: Gauss degree theta divisor with smooth singular locus){reference-type="ref" reference="Theorem: Gauss degree theta divisor with smooth singular locus"}). *Let $(A,\Theta)\in \mathcal{A}_g$ such that $\Theta$ has transversal $\mathrm{A}_1$ singularities, then $$\deg \mathcal{G}_\Theta = g! 
- 2 (-1)^{\dim B}\chi(B)-(-1)^{\dim C} \chi(C) \,,$$ where $B=\mathrm{Sing}(\Theta)$ and $C\in| L\raisebox{-.5ex}{$|$}_{B}|$ is a general divisor in the linear system.* Recently, Codogni, Grushevsky and Sernesi [@Gru17] introduced the stratification of $\mathcal{A}_g$ by the *Gauss loci* $$\mathcal{G}^{(g)}_d \coloneqq \{ (A,\Theta)\in \mathcal{A}_g \,|\, \deg \mathcal{G}_\Theta \leq d \} \,.$$ These loci are closed by [@KraemerCodogni], and the Jacobian locus $\mathcal{J}_g$ is an irreducible component of $\mathcal{G}^{(g)}_d$ for $$d = \binom{2g-2}{g-1} \,.$$ It is interesting to study how the Gauss loci interact with the stratification introduced by Andreotti and Mayer in [@Andreotti1967], which consists of the loci $$\mathcal{N}^{(g)}_k=\{ (A,\Theta)\in \mathcal{A}_g \,|\, \dim \mathrm{Sing}(\Theta)\geq k \} \,.$$ Andreotti and Mayer prove that the Jacobian locus $\mathcal{J}_g$ is an irreducible component of $\mathcal{N}^{(g)}_{g-4}$. For $g\geq 5$, the known irreducible components of $\mathcal{N}^{(g)}_{g-4}$ away from the locus of decomposable ppav's are, by [@Donagi88SchottkyProb] and [@Debarre1988]:

- the locus of Jacobians $\mathcal{J}_g$,

- two loci $\mathscr{E}_{g,0}$ and $\mathscr{E}_{g,1}$ arising from Prym varieties of certain étale double covers of bielliptic curves (for a definition see [@Debarre1988]),

- for $2\leq t\leq g/2$, the loci $\mathcal{A}^{2}_{t,g-t}$ of ppav's containing two complementary abelian varieties of dimension $t$ and $g-t$ respectively, such that the induced polarization is of type $(2)$ (defined by Proposition [Proposition 1](#PropComplAbVar){reference-type="ref" reference="PropComplAbVar"}).

It turns out that, by a result of Debarre, a general member of $\mathcal{A}^{2}_{t,g-t}$ for $2\leq t\leq g/2$ satisfies the conditions of Theorem [Theorem 1](#maintheorem: Gauss degree smooth singular locus){reference-type="ref" reference="maintheorem: Gauss degree smooth singular locus"}.
As a consequence we have: **Theorem 2** ([Theorem 1](#Thm: Gauss Degree on A^d_g_1,g_2 ){reference-type="ref" reference="Thm: Gauss Degree on A^d_g_1,g_2 "}). *Let $2\leq t\leq g/2$; then, for a general $(A,\Theta)\in \mathcal{A}^{2}_{t,g-t}$, we have $$\begin{aligned} \deg \mathcal{G}_\Theta= t!(g-t)! g \,.\end{aligned}$$* In particular, the degree of the Gauss map separates the components $\mathcal{A}^{2}_{t,g-t}$ from $\mathcal{J}_g$. The construction of $\mathcal{A}^{2}_{t,g-t}$ can be generalized to any polarization type $\delta=(a_1,\dots,a_k)$. One has $$\mathcal{A}^\delta_{t,g-t} \subset \mathcal{N}^{(g)}_{g-2d} \,, \quad \text{for $d\coloneqq \deg \delta \leq t \leq g/2$,}$$ where $\deg \delta \coloneqq a_1\cdots a_k$. Suppose $\delta \in \{(2),(3),(2,2)\}$ and let $d=\deg \delta \leq t \leq g/2$; then $\mathcal{A}^\delta_{t,g-t}$ is an irreducible component of $\mathcal{N}^{(g)}_{g-2d}$ [@Debarre1988]. We compute the degree of the Gauss map for a general member of these loci as well, see Theorem [Theorem 1](#Thm: Gauss Degree on A^d_g_1,g_2 ){reference-type="ref" reference="Thm: Gauss Degree on A^d_g_1,g_2 "}. Using different techniques, it is also possible to compute the degree of the Gauss map on the loci $\mathscr{E}_{g,0}$ and $\mathscr{E}_{g,1}$, see the forthcoming paper [@podelski2023GaussEgt]. The main tool in the proof of Theorem [Theorem 1](#maintheorem: Gauss degree smooth singular locus){reference-type="ref" reference="maintheorem: Gauss degree smooth singular locus"} is the notion of Lagrangian specialization, which was already employed by Codogni and Krämer to prove that the Gauss loci are closed [@KraemerCodogni]. Let us quickly recall the setup: Let $W$ be a smooth variety.
One defines the conormal variety to a closed subvariety $X\subset W$ as the Zariski closure $$\Lambda_{X} \coloneqq \overline{\{ (x,\xi)\in T^{\vee}(W) \,|\, x\in \mathrm{Sm}(X)\,, \xi \bot T_x (X) \}} \subset T^\vee W\,.$$ This can be done in a relative setting as well: Let $S$ be a smooth curve, $q:\mathcal{W}\to S$ a smooth morphism and $\mathcal{X}\subset \mathcal{W}$ a subvariety that is flat over $S$. By replacing the tangent spaces in the above definition by the relative tangent spaces over $S$, one obtains the relative conormal variety $$\Lambda_{\mathcal{X}/S} \subset T^\vee (\mathcal{W}/S)\,.$$ Let $0\in S$ be a point, and let $W\coloneqq \mathcal{W}\raisebox{-.5ex}{$|$}_{0}$ and $X\coloneqq \mathcal{X}\raisebox{-.5ex}{$|$}_{0}$ be the fibers above $0$. By [@FultonKleimanMacPherson1983], the specialization of $\Lambda_{\mathcal{X}/S}$ at $0$ is a formal sum of conormal varieties to subvarieties $Z\subset W$, i.e. $$\mathrm{sp}_0 (\Lambda_{\mathcal{X}/S})\coloneqq \Lambda_{\mathcal{X}/S}\raisebox{-.5ex}{$|$}_{0}=\sum_{Z\subset W} m_Z \Lambda_Z \,,$$ for some positive integers $m_Z$. Our next result describes, in a sense, the leading term of the Lagrangian specialization in the codimension-$1$ case ([Proposition 1](#Prop: Lagrangian specialization non-reduced special fiber case){reference-type="ref" reference="Prop: Lagrangian specialization non-reduced special fiber case"}):

**Proposition 1** (Leading term of the Lagrangian specialization). *In the above setting assume that $\mathcal{X}\subset \mathcal{W}$ is of codimension $1$.
Then $$\mathrm{sp}_0(\Lambda_{\mathcal{X}/S})=\sum_i (\mathrm{mult}_{X_i} X) \cdot \Lambda_{X_{i,\mathrm{red}}} + \sum_{Z \varsubsetneq X_{i,\mathrm{red}} } m_Z \Lambda_Z \,,$$ where $X_i$ are the reduced irreducible components of $X$ and $\mathrm{mult}_{X_i} X=\mathrm{len}(\mathcal{O}_{X,X_i})$ is the geometric multiplicity.*

When $X$ is reduced, we can go one step further ([Theorem 1](#theorem: Second order Lagrangian Specialization){reference-type="ref" reference="theorem: Second order Lagrangian Specialization"}):

**Theorem 3** (Second term of the Lagrangian specialization). *In the above setting assume moreover that $X$ is reduced and $\mathcal{X}\raisebox{-.5ex}{$|$}_{s}$ is smooth for $s\neq 0$. Let $\mathrm{Sing}(X)=\cup_i Z_i$ be the decomposition of the singular locus into its scheme-theoretic irreducible components. Then $$\mathrm{sp}_0 \Lambda_{\mathcal{X}/S}=\Lambda_X+\sum_i (\mathrm{mult}_{Z_i} X ) \cdot \Lambda_{Z_{i,\mathrm{red}}} + \sum_{Y\varsubsetneq Z_{i,\mathrm{red}}} m_{Y} \Lambda_{Y}$$ where $\mathrm{mult}_{Z_i} X$ is the Samuel multiplicity of $Z_i$ in $X$ as defined for example in [@Fulton1998 Sec. 4.3].*

As a corollary of the above theorem one obtains an upper bound on the degree of the Gauss map in terms of the degree of the conormal variety to the singular locus ([Corollary 1](#Cor: bound on degree Gauss map by Lag specialization){reference-type="ref" reference="Cor: bound on degree Gauss map by Lag specialization"}):

**Corollary 1**. *Let $(A,\Theta)\in \mathcal{A}_g$ and let $\mathrm{Sing}(\Theta)=\cup_i Z_i$ be the decomposition of the singular locus of $\Theta$ into its scheme-theoretic irreducible components.
We have $$\deg \mathcal{G}_\Theta \leq g!-\sum_i (\mathrm{mult}_{Z_i} \Theta )\deg (\Lambda_{Z_{i,\mathrm{red}}} )\,,$$ where $$\deg \Lambda_Z\coloneqq [\Lambda_Z] \cdot [W]\in \mathrm{H}_0(T^\vee W,\mathbb{Z})$$ is the degree of the intersection with the zero section $W\hookrightarrow T^\vee W$.*

It is interesting to compare this with the formula obtained by Codogni, Grushevsky and Sernesi in [@Gru17 Cor. 2.6]: $$\deg \mathcal{G}\leq g!- \sum_i (\mathrm{mult}_{Z_i} \Theta) \deg (L\raisebox{-.5ex}{$|$}_{Z_{i,\mathrm{red}}})\,,$$ where $\deg(L\raisebox{-.5ex}{$|$}_{Z})=c_1(L)^{\dim Z}\cap [Z]$ is the degree of the polarization $L=\mathcal{O}_A(\Theta)$ restricted to $Z$. Although the similarity is striking, there is no obvious relation between the two bounds: Indeed, let $2\leq t \leq g/2$ and let $(A,\Theta)\in \mathcal{A}^{2}_{t,g-t}$ be general. Then $\mathrm{Sing}(\Theta)$ is smooth and by [Lemma 1](#Lem: Euler characteristic of B and C){reference-type="ref" reference="Lem: Euler characteristic of B and C"} we have $$\deg \Lambda_{\mathrm{Sing}(\Theta)}=t!(g-t)! (t-1)(g-t-1) \,.$$ A direct computation using [Theorem 1](#Thm: Debarre: Cases where Ard verifies Star){reference-type="ref" reference="Thm: Debarre: Cases where Ard verifies Star"} shows $$\begin{aligned} \deg L\raisebox{-.5ex}{$|$}_{\mathrm{Sing}(\Theta)} &=t!(g-t)! \binom{g-4}{t-2} \,,\end{aligned}$$ which is less than $\deg \Lambda_{\mathrm{Sing}(\Theta)}$ for small values of $t$ and greater than $\deg \Lambda_{\mathrm{Sing}(\Theta)}$ for large values of $t$. The proof of Codogni, Grushevsky and Sernesi relies on Vogel cycles, and it is not clear how the two techniques relate. It would also be interesting to know how the next coefficients in the Lagrangian specialization relate to known invariants of the singularity.
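The separation statement and the incomparability of the two bounds can be checked numerically from the closed-form degrees quoted above. The following Python sketch is ours, not part of the paper; the function names are ad hoc labels for the formulas $t!(g-t)!g$, $\binom{2g-2}{g-1}$, $t!(g-t)!(t-1)(g-t-1)$ and $t!(g-t)!\binom{g-4}{t-2}$.

```python
from math import comb, factorial

def gauss_degree_A2(g, t):
    # deg G for a general member of A^2_{t,g-t}
    return factorial(t) * factorial(g - t) * g

def gauss_degree_jacobian(g):
    # deg G of the theta divisor of a Jacobian
    return comb(2 * g - 2, g - 1)

def deg_conormal_sing(g, t):
    # deg Lambda_{Sing(Theta)} = t!(g-t)!(t-1)(g-t-1)
    return factorial(t) * factorial(g - t) * (t - 1) * (g - t - 1)

def deg_L_restricted(g, t):
    # deg L|_{Sing(Theta)} = t!(g-t)! * C(g-4, t-2)
    return factorial(t) * factorial(g - t) * comb(g - 4, t - 2)

# the Gauss degree separates A^2_{t,g-t} from the Jacobian locus
for g in range(5, 13):
    for t in range(2, g // 2 + 1):
        assert gauss_degree_A2(g, t) != gauss_degree_jacobian(g)

# the two upper bounds on deg G are incomparable: the conormal term
# dominates for small t, the polarization term for large t
print(deg_conormal_sing(12, 2) > deg_L_restricted(12, 2))  # True
print(deg_conormal_sing(12, 6) < deg_L_restricted(12, 6))  # True
```

For instance $g=5$, $t=2$ gives $\deg\mathcal{G}_\Theta = 60$ on $\mathcal{A}^{2}_{2,3}$ versus $\binom{8}{4}=70$ on $\mathcal{J}_5$.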
The text is organized as follows: In Section [2](#Section: Lagrangian Specialization){reference-type="ref" reference="Section: Lagrangian Specialization"}, we recall some well-known facts about the Lagrangian specialization, and prove Theorem [Theorem 3](#maintheorem: Second Order Lagrangian Specialization){reference-type="ref" reference="maintheorem: Second Order Lagrangian Specialization"}. In Section [3](#Sec: Theta Divisors with a Smooth Singular Locus){reference-type="ref" reference="Sec: Theta Divisors with a Smooth Singular Locus"} we then prove Theorem [Theorem 1](#maintheorem: Gauss degree smooth singular locus){reference-type="ref" reference="maintheorem: Gauss degree smooth singular locus"}. Finally, in Section [4](#Sec: Gauss degree on Ard){reference-type="ref" reference="Sec: Gauss degree on Ard"} we prove Theorem [Theorem 2](#mainthm: Gauss Degree on A^d_g_1,g_2 ){reference-type="ref" reference="mainthm: Gauss Degree on A^d_g_1,g_2 "}, and analyse the result numerically. Throughout this paper we work over the field of complex numbers.

# Lagrangian Specialization {#Section: Lagrangian Specialization}

## Generalities on Lagrangian Specialization

We recall some facts about Lagrangian specialization; see [@KraemerCodogni] for an introduction. Let $W$ be a smooth variety (i.e. an integral scheme over $\mathbb{C}$) of dimension $n$. To a closed subvariety $X\subset W$ we associate its *conormal variety* $$\Lambda_{X} \coloneqq \overline{\{ (x,\xi)\in T^{\vee}W \,|\, x\in \mathrm{Sm}(X)\,, \xi \bot T_x (X) \}}\subset T^\vee W \,.$$ The degree of a conormal variety is defined by $$\deg(\Lambda_{X}) \coloneqq \deg([\Lambda_{X}] \cdot [W ])\,,$$ where $W\hookrightarrow T^\vee (W)$ is embedded as the zero section and the product is in the Chow ring of $T^\vee W$. The same construction can be carried out in families: Suppose $S$ is a smooth (quasi-projective) curve, and $$q: \mathcal{W} \to S$$ is a smooth dominant morphism of varieties.
Let $\mathcal{X}\subset \mathcal{W}$ be a closed subvariety, flat over $S$. One defines the *relative conormal variety* to $\mathcal{X}$ as the closure $$\Lambda_{\mathcal{X}/S}\coloneqq \overline{ \{ (x,\xi)\in T^\vee (\mathcal{W}/S) \, | \, x\in \mathrm{Sm}(\mathcal{X}/S),\, \xi \bot T_x \mathcal{X}_{q(x)} \} }\subset T^\vee (\mathcal{W}/S)\,,$$ where $$T^\vee(\mathcal{W}/S) \coloneqq T^\vee\mathcal{W} / q^{\ast} T^\vee(S)$$ is the relative cotangent bundle. Let $$\mathscr{L}(\mathcal{W}/S)= \bigoplus_{\mathcal{X}\subset \mathcal{W}} \mathbb{Z} \cdot \Lambda_{\mathcal{X}/S}$$ denote the free abelian group generated by relative conormal varieties to closed subvarieties $\mathcal{X}\subset \mathcal{W}$ that are flat over $S$. The Lagrangian specialization of $\Lambda_{\mathcal{X}/S}\in \mathscr{L}(\mathcal{W}/S)$ is the intersection with the fiber above $s\in S$. This is again a Lagrangian cycle on $\mathcal{W}_s$ [@FultonKleimanMacPherson1983; @Trng1988LimitesDT]: $$\mathrm{sp}_s(\Lambda_{\mathcal{X}/S})\coloneqq \Lambda_{\mathcal{X}/S} \cap T^\vee \mathcal{W}_s = m_{\mathcal{X}_s} \Lambda_{\mathcal{X}_s}+ \sum_{Z\subset \mathrm{Sing}(\mathcal{X}_s)} m_Z \Lambda_Z\,,$$ where $m_{\mathcal{X}_s},m_Z>0$ and the sum runs over finitely many subvarieties $Z\subset \mathrm{Sing}(\mathcal{X}_s)$. Moreover, for a general $s\in S$, $$\mathrm{sp}_s(\Lambda_{\mathcal{X}/S})=\Lambda_{\mathcal{X}_s} \,.$$

*Remark 1*. Note that the definitions of the conormal variety and of the Lagrangian specialization are local. Thus, we can compute the coefficients $m_Z$ above locally. We define the *projectivised* conormal variety $\mathbb{P}\Lambda_{Z/S}$ by taking the image in the projectivised cotangent space $\mathbb{P}T^\vee (\mathcal{W}/S)$.

From now on we will assume $\mathcal{X}\subset \mathcal{W}$ to be of codimension $1$.
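As a toy illustration of where the extra summands in the specialization come from, one can look at the simplest degeneration, the smoothing of a node. The following SymPy sketch is ours, not from the paper: it computes the locus where a defining equation $F$ and its fiberwise partials vanish, which is exactly the locus over which the fiberwise Gauss map is undefined.

```python
import sympy as sp

x1, x2, s = sp.symbols('x1 x2 s')

# family X = {x1*x2 = s} over S = A^1: the fiber X_s is smooth for
# s != 0, while the special fiber X_0 = {x1*x2 = 0} has a node at 0
F = x1 * x2 - s

# F together with the fiberwise partials cuts out the locus of
# fiberwise-singular points of the family
ideal = [F, sp.diff(F, x1), sp.diff(F, x2)]
G = sp.groebner(ideal, x1, x2, s, order='lex')

# the reduced Groebner basis is {x1, x2, s}: the node of X_0, so the
# specialization at 0 acquires an extra conormal summand supported there
print(sorted(G.exprs, key=str))
```

This matches the displayed formula: for general $s$ one finds $\mathrm{sp}_s(\Lambda_{\mathcal{X}/S})=\Lambda_{\mathcal{X}_s}$, while at $s=0$ an extra term supported at the node appears.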
Recall that the *relative singular locus* is the scheme defined locally by $$\mathrm{Sing}(\mathcal{X}/S)=V(F, \partial_1 F,\dots,\partial_n F)\subset \mathcal{W} \,,$$ where $F$ is a holomorphic function defining $\mathcal{X}$ and $\partial_i$ generate the relative tangent space $T(\mathcal{W}/S)$ (recall that $\mathcal{W}$ is smooth, thus $\mathcal{X}$ is a Cartier divisor). We have the following:

**Proposition 1**. *Let $S$ be a curve or a point, and suppose $\mathcal{X}\subset \mathcal{W}$ is of codimension $1$. There is a canonical identification $$\mathbb{P}\Lambda_{\mathcal{X}/S}=\mathop{\mathrm{Bl}}_{\mathrm{Sing}(\mathcal{X}/S)} \mathcal{X} \,.$$*

*Proof.* Let $Z=\mathrm{Sing}(\mathcal{X}/S)$. The Gauss map is the rational map defined on the relative smooth locus by $$\begin{aligned} \mathscr{G}: \mathcal{X}&\dashrightarrow \mathbb{P}T^\vee(\mathcal{W}/S) \\ x &\mapsto (T_x \mathcal{X}_{q(x)})^\ast \,.\end{aligned}$$ Locally on some open set $U\subset \mathcal{W}$ there are coordinates $x_1,\dots,x_n$ (and $s$ in the relative case), and a function $F$ defining $\mathcal{X}$. Then $\mathbb{P}T^\vee (U/S)=U\times \mathbb{P}^{n-1}$ and $$\mathscr{G}(x)=\left(x, \frac{\partial F}{\partial x_1}(x):\dots:\frac{\partial F}{\partial x_n}(x) \right)\,.$$ Thus $Z$ is exactly the base locus of $\mathscr{G}$. It is well known that after blowing up $Z$ we can extend $\mathscr{G}$ to an embedding $\tilde{\mathscr{G}}$ of the blow-up [@Fulton1998 Sec. 4.4]. This completes the proof, as $$\mathbb{P}\Lambda_{\mathcal{X}/S}=\overline{\mathscr{G}(\mathcal{X})}=\tilde{\mathscr{G}}(\mathop{\mathrm{Bl}}_Z \mathcal{X})\simeq \mathop{\mathrm{Bl}}_Z \mathcal{X}\,.$$ ◻

In the case of ppav's we have the following:

**Proposition 1**. *Let $(A,\Theta)$ be a polarized abelian variety, with $\Theta$ reduced and let $\mathcal{G}:\Theta \dashrightarrow \mathbb{P}T^\vee_0 A$ be the Gauss map.
Then $$\mathbb{P}\Lambda_\Theta = \Gamma_\mathcal{G}\subset A\times \mathbb{P}T^\vee_0 A \simeq \mathbb{P}T^\vee A\,,$$ where $\Gamma_\mathcal{G}$ is the closure of the graph of $\mathcal{G}$. In particular: $$\deg \Lambda_\Theta= \deg \mathcal{G} \,.$$*

*Proof.* The first part of the proposition follows immediately from the proof of Proposition [Proposition 1](#Prop: Lagrangian is blowup){reference-type="ref" reference="Prop: Lagrangian is blowup"}. Let $v\in T^\vee_0 A$ be general. Then $$\begin{aligned} \deg \Lambda_\Theta = [\Lambda_\Theta]\cdot[A\times \{0\}]&=[\Lambda_\Theta] \cdot [A\times \{v\}]\\ &=[\mathbb{P}\Lambda_\Theta]\cdot[A\times \{\overline{v}\}]\\ &=\deg \mathcal{G}\,.\end{aligned}$$ ◻

**Example 1**. *Let $(A,\Theta)\in\mathcal{A}_g$ and suppose that $\Theta$ is smooth. Then $$\deg \Lambda_\Theta= \deg \mathcal{G}= g! \,.$$ Recall that the Gauss map corresponds to the complete linear system $|L\raisebox{-.5ex}{$|$}_{\Theta}|$ with $L=\mathcal{O}_A(\Theta)$. If $\Theta$ is smooth, the Gauss map is defined everywhere, thus $$\deg \mathcal{G}= [\Theta]^{\cdot g}=g!$$ by Riemann-Roch.*

We end the section with the following computation of Lagrangian specialization:

**Example 1**.
*Let $n\geq 3$ and consider the following deformation $$\begin{tikzcd} \mathcal{X}=\{ x_1^2 + \cdots + x_{n-1}^2 + x_{n} s =0 \}\arrow[r,phantom,"\subset"] \arrow[dr] & \mathbb{A}^n\times \mathbb{A}\arrow[d,"q"] \\ & \mathbb{A} \end{tikzcd}\,.$$ Then $$\mathrm{sp}_0 \Lambda_{\mathcal{X}/S} = \Lambda_{X}+2\Lambda_B+\Lambda_C \,,$$ where $$\begin{aligned} B&=\{s=x_1=\cdots=x_{n-1}=0 \} \quad \text{is the singular locus of $X=\mathcal{X}_0$,} \\ C&=\{s=x_1=\cdots=x_{n}=0 \} \quad \text{is the singular locus of $\mathcal{X}$.}\end{aligned}$$*

As this example is central in our proof, we carry out the computation: By [Proposition 1](#Prop: Lagrangian is blowup){reference-type="ref" reference="Prop: Lagrangian is blowup"}, and since $\mathrm{Sing}(\mathcal{X}/S)=B$, we have $$\begin{aligned} \mathbb{P}\Lambda_{\mathcal{X}/S}&= \mathop{\mathrm{Bl}}_B \mathcal{X}\\ &=V(u_1 x_1+\cdots+u_{n} x_n, \, x_i u_j-x_j u_i,\, x_i u_n- s u_i)_{1\leq i,j\leq n-1}\\ &\subset \mathcal{X}\times \mathbb{P}^{n-1} \,, \end{aligned}$$ where $u_i$ are homogeneous coordinates on $\mathbb{P}^{n-1}$. Specializing to $s=0$ and restricting to the open set $U=\{u_1\neq 0\}$ we have (with $a_i=u_i/u_1$ for $2\leq i \leq n$) $$\begin{aligned} \mathrm{sp}_0 \mathbb{P}\Lambda_{\mathcal{X}/S}\raisebox{-.5ex}{$|$}_{U} &= V\left(x_1^2(1+a_2^2+\cdots+a_{n-1}^2),\right.\\ & \qquad \qquad \left. x_1(1+a_2^2+\cdots+a_{n-1}^2)+a_nx_n,\,\, x_1a_n\right)\\ & \subset \mathbb{A}^2 \times \mathbb{A}^{n-1} \,.\end{aligned}$$ Thus $\mathrm{sp}_0 \mathbb{P}\Lambda_{\mathcal{X}/S}$ has $3$ irreducible components: $$\begin{aligned} \mathbb{P}\Lambda_X&=V(1+a_2^2+\cdots + a_{n-1}^2, a_n) \subset \mathbb{A}^2\times \mathbb{A}^{n-1} &\text{with multiplicity $1$,}\\ \mathbb{P}\Lambda_B&=V(x_1,a_n) & \text{with multiplicity $2$,} \\ \mathbb{P}\Lambda_C&=V(x_1,x_n) & \text{with multiplicity $1$.}\end{aligned}$$

## First Coefficients in the Lagrangian Specialization

Let $q:\mathcal{W}\to S$ be a smooth morphism to a quasi-projective curve $S$. Let $\mathcal{X}\subset \mathcal{W}$ be a variety of codimension $1$, flat over $S$.
Let $0\in S$ be a point and $X=\mathcal{X}_0$, $W=\mathcal{W}_0$ be the special fibers $$\mathcal{X}\subset \mathcal{W} \overset{q}{\longrightarrow} S\,.$$ We have the following:

**Proposition 1** (Zeroth-Order Approximation of the Lagrangian Specialization). *Let $X=\cup_i X_i$ be the decomposition of $X$ into its scheme-theoretic irreducible components. We have $$\mathrm{sp}_0(\Lambda_{\mathcal{X}/S})=\sum_i (\mathrm{mult}_{X_i} X) \cdot \Lambda_{X_{i,\mathrm{red}}} + \sum_{Z \varsubsetneq X_{i,\mathrm{red}} } m_Z \Lambda_Z \,,$$ where $\mathrm{mult}_{X_i} X=\mathrm{len}(\mathcal{O}_{X,X_i})$ is the geometric multiplicity and the last sum runs over subvarieties $Z\subset \mathrm{Sing}(X_{\mathrm{red}})\cup (\mathrm{Sing}(\mathcal{X})\cap X)$.*

*Proof.* By the principle of Lagrangian specialization [@KraemerCodogni Lem. 2.3], we have $$\label{Equ: Prop non-reduced fiber Lag specialization} \mathrm{sp}_0(\Lambda_{\mathcal{X}/S})=\sum_i m_{X_i} \Lambda_{X_{i,\mathrm{red}}}+\sum_{Z\subset \mathrm{Sing}(X)} m_Z \Lambda_Z \,,$$ for some coefficients $m_{X_i}$, $m_Z$. The definition of the coefficients $m_{X_i}$ is local, thus we can assume that we are working on an affine neighborhood where $X_\mathrm{red}$ is smooth and irreducible. Let $x_1,\dots, x_n,s$ be coordinates on $\mathcal{W}$ such that $q$ is the projection onto $s$. Then $\mathcal{X}$ is defined locally by a function $F(x_1,\dots,x_n,s)$. We will show that the ideal of the relative singular locus $\mathrm{Sing}(\mathcal{X}/S)$ $$I\coloneqq \left\langle F ,\frac{\partial F}{\partial x_i} \right\rangle_{1\leq i \leq n}$$ is locally principal in the affine coordinate ring of $\mathcal{X}$ away from a proper subset $Z\subseteq X\cap \mathrm{Sing}(\mathcal{X})$. If $X$ is reduced, then it is smooth and $I=\langle 1 \rangle$, so there is nothing to prove. We assume now that $X$ is non-reduced.
Since $X$ is a Cartier divisor in $W$, it is defined by the vanishing of $f^k$ for some $k\geq 2$, where $X_\mathrm{red}$ is defined by the vanishing of $f(x_1,\dots,x_n)$. We have $$F=f^k+s^l\cdot g \,,$$ for some function $g$ defined on $\mathcal{W}$ not divisible by $s$, and $l\geq 1$. The function $g$ does not vanish identically on $X_\mathrm{red}$, otherwise $g$ would be divisible by $f$ and $\mathcal{X}$ would not be integral. Notice that $V(g\raisebox{-.5ex}{$|$}_{X})\subset \mathrm{Sing}(\mathcal{X})\cap X$, thus we can restrict to $\{g\neq 0\}$ and assume $g$ is a unit. We have $$\begin{aligned} I&=\left\langle f^k+s^l\cdot g , \frac{\partial f}{\partial x_i} f^{k-1}+ s^l \frac{\partial g}{\partial x_i} \right\rangle_{1\leq i \leq n}\,.\end{aligned}$$ As $X_\mathrm{red}$ is smooth, we have $\langle f, \partial_i f \rangle=\langle 1 \rangle$, thus $(f^{k-1}+s^l\cdot h)\in I$ for some function $h$. Thus $s^l(g-fh)\in I$. As $g-fh$ is a unit near $X$, after restricting to a smaller open set we can assume $$s^l\in I\,, \quad \text{thus} \quad I=\langle f^{k-1},F\rangle \,.$$ In particular, $\mathrm{Sing}(\mathcal{X}/S)$ is principal in $\mathcal{X}$ (defined by $f^{k-1}$), thus by Proposition [Proposition 1](#Prop: Lagrangian is blowup){reference-type="ref" reference="Prop: Lagrangian is blowup"} we have $$\begin{aligned} \mathrm{sp}_0(\Lambda_{\mathcal{X}/S})&=(\mathop{\mathrm{Bl}}_{\mathrm{Sing}(\mathcal{X}/S)} \mathcal{X})\raisebox{-.5ex}{$|$}_{0} \\ &\simeq \mathcal{X}\raisebox{-.5ex}{$|$}_{0}\\ &=X\\ &= \mathrm{len}(\mathcal{O}_{X,X_\mathrm{red}}) \cdot X_{\mathrm{red}}\\ &\simeq\mathrm{len}(\mathcal{O}_{X,X_\mathrm{red}}) \cdot \Lambda_{X_{\mathrm{red}}} \,. \end{aligned}$$ This proves the claim on the coefficients of the $\Lambda_{X_{i,\mathrm{red}}}$.
Since we only needed to restrict to complements of closed subsets of $\mathrm{Sing}(X_\mathrm{red})\cup \mathrm{Sing}(\mathcal{X})$ during the proof, every other cycle $\Lambda_Z$ in the specialization must verify $Z\subset \mathrm{Sing}(X_\mathrm{red})\cup (\mathrm{Sing}(\mathcal{X})\cap X)$. ◻

When the special fiber is reduced, we can go one step further:

**Theorem 1** (First-Order Approximation of the Lagrangian Specialization). *Assume that $X$ is reduced and $\mathcal{X}\raisebox{-.5ex}{$|$}_{s}$ is smooth for $s\neq 0$. Let $\mathrm{Sing}(X)=\cup_i Z_i$ be the decomposition of the singular locus into its scheme-theoretic irreducible components. Then $$\mathrm{sp}_0 \Lambda_{\mathcal{X}/S}=\Lambda_X+\sum_i (\mathrm{mult}_{Z_i} X ) \cdot \Lambda_{Z_{i,\mathrm{red}}} + \sum_{Y\varsubsetneq Z_{i,\mathrm{red}}} m_{Y} \Lambda_{Y}$$ where $\mathrm{mult}_{Z_i} X$ is the Samuel multiplicity of $Z_i$ in $X$ as defined for example in [@Fulton1998 Sec. 4.3].*

*Remark 2*. If the singular locus is $0$-dimensional, this computes the full Lagrangian specialization.

*Proof.* By Proposition [Proposition 1](#Prop: Lagrangian specialization non-reduced special fiber case){reference-type="ref" reference="Prop: Lagrangian specialization non-reduced special fiber case"}, we have $$\mathrm{sp}_0 \Lambda_{\mathcal{X}/S}=\Lambda_X+\sum_i \left( m_{Z_i} \Lambda_{Z_{i,\mathrm{red}}} + \sum_{Y\subset Z_{i,\mathrm{red}}} m_{Y} \Lambda_{Y}\right)$$ for some coefficients $m_{Z_i}$, $m_Y$. The coefficients $m_{Z_i}$ are defined locally, thus after restricting to an open set we can assume that $Z=\mathrm{Sing}(X)$ is irreducible and $$\mathrm{sp}_0 \Lambda_{\mathcal{X}/S}=\Lambda_X+m_{Z}\Lambda_{Z_{\mathrm{red}}} \,.$$ Let $\mathcal{Z}\coloneqq \mathrm{Sing}(\mathcal{X}/S)$. Note that by assumption $\mathrm{Supp}(\mathcal{Z})=\mathrm{Supp}(Z)$ and $Z=\mathcal{Z}\cap X$.
By Proposition [Proposition 1](#Prop: Lagrangian is blowup){reference-type="ref" reference="Prop: Lagrangian is blowup"} we have $\mathbb{P}\Lambda_{\mathcal{X}/S}=\mathop{\mathrm{Bl}}_{\mathcal{Z}} \mathcal{X}$. Write $f:\mathbb{P}\Lambda_{\mathcal{X}/S}\to \mathcal{X}$ for the blowup and $E$ for its exceptional divisor, and consider the two fiber squares formed by the inclusions $j: E\hookrightarrow \mathbb{P}\Lambda_{\mathcal{X}/S}$ and $i:\mathcal{Z}\hookrightarrow \mathcal{X}$ together with the projection $g: E\to \mathcal{Z}$. By definition, we have $$f^\ast [X] = [\mathbb{P}\Lambda_{\mathcal{X}/S}\raisebox{-.5ex}{$|$}_{0}] \eqqcolon \mathrm{sp}_0(\mathbb{P}\Lambda_{\mathcal{X}/S})=[\mathbb{P}\Lambda_X]+m_Z[\mathbb{P}\Lambda_{Z_\mathrm{red}}] \in \mathrm{CH}^1(\mathbb{P}\Lambda_{\mathcal{X}/S})\,.$$ Let $\mathcal{O}(1)=\mathcal{O}_{\mathbb{P}\Lambda_{\mathcal{X}/S}}(-E)\raisebox{-.5ex}{$|$}_{E}$ denote the tautological line bundle on $E$ associated to the blowup. By [Proposition 1](#Prop: Lagrangian is blowup){reference-type="ref" reference="Prop: Lagrangian is blowup"} and [@Fulton1998 B.6.9] there is a canonical embedding $$\mathbb{P}\Lambda_X = \mathop{\mathrm{Bl}}_Z X \hookrightarrow \mathop{\mathrm{Bl}}_\mathcal{Z} \mathcal{X}= \mathbb{P}\Lambda_{\mathcal{X}/S}$$ and the restriction $E'\coloneqq E\cap \mathop{\mathrm{Bl}}_Z X$ of the exceptional divisor is the exceptional divisor of $\mathop{\mathrm{Bl}}_Z X$. Thus, setting $d\coloneqq \mathrm{codim}_W(Z)$, $$\begin{aligned} g_\ast \left(j^\ast [\mathbb{P}\Lambda_X] \cap c_1(\mathcal{O}(1))^{d-2}\right) &=g_\ast\left( j^\ast [\mathop{\mathrm{Bl}}_{Z} X] \cap c_1(\mathcal{O}(1))^{d-2} \right) \\ &= g_\ast \left( [E'] \cap c_1(\mathcal{O}(1))^{d-2} \right) \\ &= \mathrm{mult}_{Z} X \cdot [Z_\mathrm{red}]\in \mathrm{CH}^0(Z) \,,\end{aligned}$$ by [@Fulton1998 Sec. 4.3]. Notice that $\mathbb{P}\Lambda_{Z_\mathrm{red}}\subset E$. We make the abuse of notation to write $[\mathbb{P}\Lambda_{Z_\mathrm{red}}]$ for the cycle in $\mathrm{CH}^\bullet(\mathbb{P}\Lambda_{\mathcal{X}/S})$ as well as in $\mathrm{CH}^\bullet(E)$, when it is clear from the context which Chow ring we mean.
Recall $\mathcal{O}_{\mathbb{P}\Lambda_{\mathcal{X}/S}}(E)\raisebox{-.5ex}{$|$}_{E}=\mathcal{O}(-1)$, thus $$\begin{aligned} g_\ast \left( j^\ast [\mathbb{P}\Lambda_{Z_\mathrm{red}}] \cap c_1(\mathcal{O}(1))^{d-2} \right) &= g_\ast \left( - [\mathbb{P}\Lambda_{Z_\mathrm{red}}] \cap c_1(\mathcal{O}(1))^{d-1} \right) \\ &=-[Z_\mathrm{red}]\in \mathrm{CH}^0(Z)\,,\end{aligned}$$ as a generic fiber of $\mathbb{P}\Lambda_{Z_\mathrm{red}}\to Z_\mathrm{red}$ is a $(d-1)$-plane. By definition $[X]=q^\ast[0]\in \mathrm{CH}^1(\mathcal{X})$. Since $\mathcal{Z}$ is supported on a fiber of $q$ we have $$0= i^\ast q^\ast [0]=i^\ast [X]\,.$$ Putting all of this together we have $$\begin{aligned} 0 &= g_\ast \left( g^\ast i^\ast [X] \cap c_1(\mathcal{O}(1))^{d-2} \right) \\ &= g_\ast \left( j^\ast f^\ast [X] \cap c_1(\mathcal{O}(1))^{d-2} \right) \\ &= g_\ast \left( j^\ast([\mathbb{P}\Lambda_X]+m_Z[\mathbb{P}\Lambda_{Z_\mathrm{red}}]) \cap c_1(\mathcal{O}(1))^{d-2} \right) \\ &=(\mathrm{mult}_{Z} X - m_Z)[Z_\mathrm{red}] \in \mathrm{CH}^0(Z) \,.\end{aligned}$$ Thus $m_Z=\mathrm{mult}_{Z} X$, as $\mathrm{CH}^0(Z)=\mathbb{Z}\cdot[Z_{\mathrm{red}}]$. ◻

## Application to Theta Divisors

We have the following corollary to Theorem [Theorem 1](#theorem: Second order Lagrangian Specialization){reference-type="ref" reference="theorem: Second order Lagrangian Specialization"}:

**Corollary 1**. *Let $(A,\Theta)\in \mathcal{A}_g$ and $\cup_i Z_i = \mathrm{Sing}(\Theta)$ the decomposition of the singular locus of $\Theta$ into its scheme-theoretic irreducible components. Then $$\deg \mathcal{G}\leq g!-\sum_i (\mathrm{mult}_{Z_i} \Theta )\deg (\Lambda_{Z_{i,\mathrm{red}}} )\,,$$ where $\mathcal{G}:\Theta \dashrightarrow \mathbb{P}^{g-1}$ is the Gauss map.*

*Proof.* Let $(A_S,\Theta_S)$ be a $1$-dimensional deformation of $(A,\Theta)$, i.e. an abelian scheme over a smooth curve $S$ with special fiber $(A,\Theta)$, such that $\Theta_s$ is smooth for general $s$.
The degree is invariant in flat families [@KraemerCodogni Prop. 2.4], thus $$\begin{aligned} g!&= \deg \Lambda_{\Theta_s} & \text{(Ex. \ref{Ex: Deg Gauss map smooth theta})} \\ &= \deg(\mathrm{sp}_0 \Lambda_{\Theta_S/S})& \\ &= \deg( \Lambda_\Theta)+ \sum_i (\mathrm{mult}_{Z_i} \Theta ) \deg(\Lambda_{Z_{i,\mathrm{red}}}) +\sum_{Z\subset Z_i} m_Z \deg( \Lambda_Z ) & \text{(Thm. \ref{theorem: Second order Lagrangian Specialization})}\\ &\geq \deg( \mathcal{G})+ \sum_i (\mathrm{mult}_{Z_i} \Theta ) \deg(\Lambda_{Z_{i,\mathrm{red}}}) \,.& \end{aligned}$$ The last inequality follows from the fact that $\deg \Lambda_Z \geq 0$ for a subvariety $Z$ of an abelian variety [@KraemerCodogni Lem. 5.1]. ◻

*Remark 3*. If the singular locus of $\Theta$ is $0$-dimensional, there are no other terms in the Lagrangian specialization and one recovers a result of [@Gru17 Rem. 2.8]: $$\deg \mathcal{G}= g!- \sum_{z\in \mathrm{Sing}(\Theta)} \mathrm{mult}_z \Theta \,.$$

# Theta Divisors with Transversal $\mathrm{A}_1$ Singularities {#Sec: Theta Divisors with a Smooth Singular Locus}

The idea of the proof of Theorem [Theorem 1](#Theorem: Gauss degree theta divisor with smooth singular locus){reference-type="ref" reference="Theorem: Gauss degree theta divisor with smooth singular locus"} is to deform a given ppav to a ppav with a smooth theta divisor. Using the heat equation satisfied by theta functions, it is then possible to compute the Lagrangian specialization explicitly. Finally, we use the fact that the degree of Lagrangian cycles is invariant in flat families.

## Deformation of PPAV's {#Sec: Deformation of ppav's}

Let $(A,\Theta)\in \mathcal{A}_g$ and denote by $T_A$ the tangent bundle on $A$. It is well known that there is a canonical identification between $\mathrm{H}^0(A,\mathop{\mathrm{Sym}}_2(T_A))$ and infinitesimal deformations of $(A,\Theta)$; see [@Welters1983] and [@Ciliberto1999 Sec. 3].
Specifically, given $$\mathscr{D}=\sum_{i,j} \lambda_{ij} \frac{\partial^2}{\partial z_i \partial z_j}\in \mathrm{H}^0(A,\mathop{\mathrm{Sym}}_2(T_A))\,,$$ there exists a deformation of $(A,\Theta)$, i.e. an abelian scheme over a smooth quasi-projective curve $S$ $$\Theta_S\subset A_S \to S \,,$$ such that the fiber above $0\in S$ is $(A,\Theta)$. Moreover, locally there are coordinates $(z_1,\dots,z_g,s)$ on $A_S$ such that the map to $S$ is given by the projection onto the last coordinate, and if $\theta$ is the theta-function defining $\Theta$ we have $$\label{Equ: heat equation theta} \mathscr{D}(\theta)=\sum_{i,j}\lambda_{ij} \frac{\partial^2 \theta}{\partial z_i \partial z_j} = \frac{\partial\theta}{\partial s} \,.$$ We call this a deformation in the $\mathscr{D}$ direction.

## Computation of the Gauss Degree

Let $(X,0)=V(f)\subset (\mathbb{C}^n,0)$ be a hypersurface singularity germ. The *scheme-theoretic* singular locus of $X$ is defined as $$\label{Equ: scheme theoretic singular locus} \mathrm{Sing}(X)\coloneqq V\left(f, \frac{\partial f}{\partial x_1},\dots,\frac{\partial f}{\partial x_n} \right)\subset (\mathbb{C}^n,0)\,,$$ where $x_1,\dots,x_n$ are some coordinates on $\mathbb{C}^n$. We have the following:

**Proposition 1**. *Let $(X,0)=V(f)\subset (\mathbb{C}^n,0)$ be a hypersurface singularity germ, and $d= \mathrm{codim}_{\mathbb{C}^n} \mathrm{Sing}(X)$. The Hessian of $f$ $$H(f) \coloneqq \left( \frac{\partial^2 f}{\partial x_i \partial x_j} \right)_{1\leq i,j\leq n}$$ is of rank at most $d$ at $0$. The following conditions are equivalent:*

i) *$\mathrm{Sing}(X)$ is smooth at $0$.*

ii) *$H(f)$ is of rank $d$ at $0$.*

iii) *There is a holomorphic change of coordinates $z_1,\dots,z_n$ such that $$f(z)=z_1^2+\cdots+z_d^2 \,.$$*

*In this case, we say that $X$ has a transversal $\mathrm{A}_1$ singularity at $0$.*

*Proof.* $(i \iff ii)$ and $(iii\implies i)$ are trivial. Let us show $(i \text{ and } ii) \implies iii$.
After a first change of coordinates we can assume $\mathrm{Sing}(X)=V(x_1,\dots,x_d)$. We apply the Morse Lemma with parameters (recalled below) with $x_{d+1},\dots,x_n$ as parameters and the result follows. ◻

For a proof of the following Lemma, see for example [@zoladek2006monodromy].

**Lemma 1** (Morse Lemma with parameters). *Let $f(x;s):(\mathbb{C}^n\times \mathbb{C}^k,0)\to (\mathbb{C},0)$ be a holomorphic function such that the Hessian matrix in the first $n$ coordinates $$\mathrm{H}(f)_0= \left( \frac{\partial^2 f}{\partial x_i\partial x_j}(0) \right)_{1\leq i,j \leq n}$$ is non-degenerate. Then there is a local holomorphic change of coordinates $h_s:(\mathbb{C}^n,0) \to (\mathbb{C}^n,0)$ such that $$f(h_s (y) ; s)= f(0;s)+ \sum_{i=1}^n y_i^2 \,.$$*

We say that a variety $X$ has transversal $\mathrm{A}_1$ singularities if the equivalent conditions of Proposition [Proposition 1](#Prop: Appendix: Hessian max rank is equiv to smooth sing locus){reference-type="ref" reference="Prop: Appendix: Hessian max rank is equiv to smooth sing locus"} hold at every singular point of $X$. We now compute the degree of the Gauss map using the results of the previous section:

**Theorem 1**. *Let $(A,\Theta)\in \mathcal{A}_g$ be a ppav such that $\Theta$ has transversal $\mathrm{A}_1$ singularities, and set $B=\mathrm{Sing}(\Theta)$. Let $$C\in |\mathcal{O}_A(\Theta)\raisebox{-.5ex}{$|$}_{B}|$$ be any smooth member of the linear system. Then the degree of the Gauss map $\mathcal{G}:\Theta \dashrightarrow \mathbb{P}^{g-1}$ is $$\deg \mathcal{G}= g! - 2 (-1)^{\dim B}\chi(B)-(-1)^{\dim C} \chi(C) \,,$$ where $\chi$ denotes the usual topological Euler characteristic.*

*Proof.* Let $d=g-\dim(B)$ be the codimension of $B$ in $A$. Let $\theta\in \mathrm{H}^0(A,\mathcal{O}_A(\Theta))$ be a non-zero section. Let $\partial z_1,\dots,\partial z_g$ be a basis of $\mathrm{H}^0(A,T_A)$.
Consider the linear system on $B$ $$T\coloneqq \left| \frac{\partial^2 \theta}{\partial z_i \partial z_j}\raisebox{-.5ex}{$|$}_{B} \right|_{1\leq i,j\leq g} \subset |\mathcal{O}_A(\Theta)\raisebox{-.5ex}{$|$}_{B}| \,.$$ Notice that by Proposition [Proposition 1](#Prop: Appendix: Hessian max rank is equiv to smooth sing locus){reference-type="ref" reference="Prop: Appendix: Hessian max rank is equiv to smooth sing locus"}, the Hessian of $\theta$ is of rank $d$, in particular $T$ is base-point free and by Bertini's theorem, a general divisor $C\in T$ is smooth. We have $$C=\mathop{\mathrm{div}}\left( \sum_{i,j} \lambda_{ij} \frac{\partial^2 \theta}{\partial z_i \partial z_j}\raisebox{-.5ex}{$|$}_{B}\right)$$ for some $\lambda_{ij}$. By the previous section, there is a deformation $q:(A_S,\Theta_S)\to S$ in the $\mathscr{D}=\sum_{i,j} \lambda_{ij} \partial_i \partial_j$ direction. By [\[Equ: heat equation theta\]](#Equ: heat equation theta){reference-type="ref" reference="Equ: heat equation theta"}, there are locally coordinates $z_1,\dots,z_g,s$ on $A_S$ such that $q$ is the projection onto the last coordinate and $$C= V\left(\frac{\partial\theta}{\partial s}\raisebox{-.5ex}{$|$}_{B}\right) \,.$$ Let $\Lambda_{\Theta_S/S}$ be the relative conormal variety. By the Lemma below and Example [Example 1](#Ex: Lagrangian Specialization, rank $d$ singular quadric case){reference-type="ref" reference="Ex: Lagrangian Specialization, rank $d$ singular quadric case"} we have $$\mathrm{sp}_0 \Lambda_{\Theta_S/S} = \Lambda_{\Theta_0}+2\Lambda_B+\Lambda_C \,.$$

**Lemma 1**. *Let $f(z;s):(\mathbb{C}^n \times \mathbb{C},0) \to (\mathbb{C},0)$ be a holomorphic function such that the singular locus $B$ of $f\raisebox{-.5ex}{$|$}_{s=0}$ and the zero locus $C$ of $\partial_s f$ on $B$, $$B \coloneqq V(f,\partial_1 f,\dots,\partial_n f,s)\,,\quad C \coloneqq V(\partial_s f\raisebox{-.5ex}{$|$}_{B})\subset B\,,$$ are smooth.
Then there is a local holomorphic change of coordinates $z=h_s(\tilde{z})$ such that $$f(h_s(\tilde{z});s)= \tilde{z}_1^2+\cdots +\tilde{z}_d^2 + \tilde{z}_{d+1} s \,.$$*

We prove the Lemma below. By generality of $C$ and smoothness of the theta divisor of a general ppav (it is also apparent from the normal form above), $\Theta_s$ is smooth for $s\neq 0$. Thus $$\begin{aligned} g!&=\deg \Lambda_{\Theta_s} &\text{(Ex. \ref{Ex: Deg Gauss map smooth theta})} \\ &= \deg (\mathrm{sp}_s \Lambda_{\Theta_S/S} ) & \\ &= \deg (\mathrm{sp}_0 \Lambda_{\Theta_S/S}) &\text{(\cite[Prop. 2.4]{KraemerCodogni})}\\ &= \deg\left( \Lambda_{\Theta_0}+2\Lambda_B+\Lambda_C \right)\\ &= \deg (\mathcal{G}) + 2(-1)^{\dim B} \chi(B) + (-1)^{\dim C}\chi(C) &\text{(Prop. \ref{Prop: Degree Lagrangian is Gauss degree})}\,.\end{aligned}$$ ◻

*Proof of Lemma [Lemma 1](#Lemma: Normal Form for Morse functions with a 1-dimensional parameter and smooth critical locus){reference-type="ref" reference="Lemma: Normal Form for Morse functions with a 1-dimensional parameter and smooth critical locus"}.* After a change of coordinates in $\mathbb{C}^n$ we can assume $B=\{z_1=\cdots=z_d=s=0\}$. We have $f\raisebox{-.5ex}{$|$}_{B}=0$ and $\partial_i f\raisebox{-.5ex}{$|$}_{B}=0$ for all $i$, thus $$\frac{\partial^2 f}{\partial z_i \partial z_j}\raisebox{-.5ex}{$|$}_{0}=0 \quad \text{for $1\leq i \leq n$ and $d+1\leq j \leq n$.}$$ Thus the Hessian of $f$ in the first $n$ coordinates is $$H(f)_0= \begin{pmatrix} H(f\raisebox{-.5ex}{$|$}_{\mathbb{C}^d})_0 & 0 \\ 0 & 0 \end{pmatrix}\,.$$ By Proposition [Proposition 1](#Prop: Appendix: Hessian max rank is equiv to smooth sing locus){reference-type="ref" reference="Prop: Appendix: Hessian max rank is equiv to smooth sing locus"}, $H(f)$ and thus $H(f\raisebox{-.5ex}{$|$}_{\mathbb{C}^d})$ is of rank $d$ at $0$.
By the Morse Lemma with parameters $(z_{d+1},\dots,z_n,s)$, there is a holomorphic change of coordinates $(z_1,\dots,z_d)=h_{(z',s)}(\tilde{z}_1,\dots,\tilde{z}_d)$ where $z'=(z_{d+1},\dots,z_n)$, such that $$f(h_{(z',s)}(\tilde{z}),z',s)=\sum_{i=1}^{d} \tilde{z}_i^2 + f(0,z',s) \,.$$ We have $$f(0,z',0)=0 \,,$$ and $\frac{\partial f}{\partial s}\raisebox{-.5ex}{$|$}_{B}$ has a simple zero at the origin, thus after a change of coordinates (in $z'$) we can assume $$\frac{\partial f}{\partial s}\raisebox{-.5ex}{$|$}_{B}= z_{d+1} \,.$$ Thus $$f(0,z',s)=s(z_{d+1}+s g(z',s) ) \,,$$ for some holomorphic $g$. Making the coordinate change $$\tilde{z}_{d+1}=z_{d+1}+s g(z',s) \,,$$ the lemma follows. ◻ # The Family $\mathcal{A}^\delta_{t,g-t}$ {#Sec: Gauss degree on Ard} We apply Theorem [Theorem 1](#Theorem: Gauss degree theta divisor with smooth singular locus){reference-type="ref" reference="Theorem: Gauss degree theta divisor with smooth singular locus"} to the families $\mathcal{A}^\delta_{t,g-t}$ studied by Debarre in [@Debarre1988]. First we recall the definition and known results about $\mathcal{A}^\delta_{t,g-t}$. Then we compute the Gauss degree for a general member of these families. Finally we analyse the degree numerically and show that it separates the corresponding components of the Andreotti-Mayer locus. ## Definition of the Family Let $A$ be an abelian variety and $L$ an ample line bundle on $A$. Recall that the type $\delta=(a_1,\dots,a_k)$ of $L$ is defined by $$\mathrm{Ker}(\Phi_L)\simeq \bigoplus_{i=1}^k (\mathbb{Z}/a_i\mathbb{Z})^2\,, \quad \text{and} \quad a_i|a_{i+1} \quad \text{for $1\leq i <k$}\,,$$ where $\Phi_L: A \to \hat{A}$ is the polarization induced by $L$. We have the following [@Debarre1988], [@Birkenhake2004 Th. 5.3.5]: **Proposition 1** (Complementary Abelian Varieties). *Let $(A,\Theta)\in \mathcal{A}_{g}$ and $\delta$ be a polarization type. 
Suppose there is an abelian subvariety $X \subset A$ of dimension $t$ such that the induced polarization $L_X=L\raisebox{-.5ex}{$|$}_{X}$ is of type $\delta$. Then there is a unique abelian subvariety $Y \subset A$ (of dimension $g-t$) such that:* a) *The morphism $\pi : X\times Y \overset{i_X+i_Y}{\longrightarrow} A$ is an isogeny.* b) *We have $$\pi^\star L = L_X \boxtimes L_Y \,, \quad \text{where } \quad L_Y=L\raisebox{-.5ex}{$|$}_{Y} \,.$$* *Moreover $L_Y$ is also of type $\delta$. We define $\mathcal{A}^\delta_{t,g-t}\subset \mathcal{A}_g$ to be the set of ppav's verifying the above conditions.* Conversely, if $(X,L_X)$, $(Y,L_Y)$ are two abelian varieties of the same type $\delta$, of dimension $t$ and $g-t$ respectively, and $\psi:\mathrm{Ker}(\Phi_{L_X}) \to \mathrm{Ker}(\Phi_{L_Y})$ is an antisymplectic isomorphism, then $$A \coloneqq X\times Y/{K} \in \mathcal{A}^\delta_{t,g-t}\quad \text{where} \quad K\coloneqq \{(x,\psi x)\,|\,x\in \mathrm{Ker}(\Phi_{L_X})\} \,.$$ Thus $\mathcal{A}^\delta_{t,g-t}$ is an irreducible locus of codimension $t(g-t)$ in $\mathcal{A}_g$ [@Debarre1988 Sec. 9.3]. Clearly $\mathcal{A}^\delta_{t,g-t}=\mathcal{A}^\delta_{g-t,t}$, so from now on we will assume $t\leq g/2$. The loci $\mathcal{A}^\delta_{t,g-t}$ are all distinct. Let $B_X$ (resp. $B_Y$) be the base locus of $L_X$ (resp. $L_Y$). Recall that by the Riemann-Roch theorem, $$\mathrm{h}^0(X,L_X)=\mathrm{h}^0(Y,L_Y)=\deg \delta \eqqcolon d \,.$$ Thus for $d\leq t$, the base loci $B_X$ and $B_Y$ are non-empty of codimension at most $d$ in $X$ and $Y$ respectively. Let $s^X_1,\dots,s^X_d$ and $s^Y_1,\dots,s^Y_d$ denote bases of $\mathrm{H}^0(X,L_X)$ and $\mathrm{H}^0(Y,L_Y)$ respectively. Let $s$ be a generator of $\mathrm{H}^0(A,L)$. Then $$\pi^\ast s = \sum_{i,j} \lambda_{ij} s^X_i \boxtimes s^Y_j$$ for some $\lambda_{ij}$. 
Differentiating, we have $$\begin{aligned} \mathrm{d}(\pi^\ast s) = \sum_{i,j} \lambda_{ij}( (\mathrm{d}s^X_i) \boxtimes s^Y_j + s^X_i \boxtimes (\mathrm{d}s^Y_j) )\,, \end{aligned}$$ which vanishes on $B_X\times B_Y$. Thus $$\begin{aligned} \pi (B_X\times B_Y) &\subset \mathrm{Sing}(\Theta) \,, \label{Equ: Sing Locus Art and base locus} \\ \text{and} \quad \mathcal{A}^\delta_{t,g-t}&\subset \mathcal{N}^g_{g-2d} \,.\end{aligned}$$ The main result of Debarre concerning the families $\mathcal{A}^\delta_{t,g-t}$ is the following: **Theorem 1** ([@Debarre1988 Thm. 10.4 and 12.1]). *Let $\delta \in \{(2),(3),(2,2) \}$, and $d=\deg \delta$.* i) *If $t\geq d$, then $\mathcal{A}^{\delta}_{t,g-t}$ is an irreducible component of $\mathcal{N}^{(g)}_{g-2d}$. Moreover for a general $(A,\Theta)\in \mathcal{A}^\delta_{t,g-t}$, there is equality in [\[Equ: Sing Locus Art and base locus\]](#Equ: Sing Locus Art and base locus){reference-type="ref" reference="Equ: Sing Locus Art and base locus"} and $\Theta$ has transversal $\mathrm{A}_1$ singularities.* ii) *If $t= \lfloor d/2 \rfloor$, a general $(A,\Theta)\in \mathcal{A}^\delta_{t,g-t}$ has smooth theta divisor. In this case, $\deg \mathcal{G}\left(\mathcal{A}^\delta_{t,g-t}\right)=g!$.* We end this section with a result on the dimension of the fibers of the Gauss map. This is a slight improvement on [@AuffarthCodogni2019 Thm. 1.1] (the bound on the dimension is stronger): **Proposition 1**. *Let $(A,\Theta)\in \mathcal{A}^\delta_{t,g-t}$, $d\coloneqq \deg \delta$, and suppose $2\leq d \leq t \leq g/2$. Suppose there is a divisor $D\in |L_X|$ such that $D$ is smooth at some point $x\in B_X$. Then some fibers of the Gauss map $\mathcal{G}:\Theta\dashrightarrow \mathbb{P}^{g-1}$ are of dimension at least $g-t-d+1$.* *Proof.* Let $\pi:X\times Y\to A$ denote the isogeny of [Proposition 1](#PropComplAbVar){reference-type="ref" reference="PropComplAbVar"}. Let $\tilde{\Theta}\coloneqq \pi^\ast \Theta \subset X\times Y$. 
By [@Debarre1988 Prop. 9.1], there is a basis $s_1^X,\dots,s_d^X$ (resp. $s_1^Y,\dots,s_d^Y$) of $\mathrm{H}^0(X,L_X)$ (resp. $\mathrm{H}^0(Y,L_Y)$), such that $$\tilde{\Theta} = \mathop{\mathrm{div}}s \,, \quad \text{where} \quad s=\sum_{i=1}^d s^X_i \otimes s^Y_i \,.$$ We can assume that $\mathop{\mathrm{div}}(s^X_d)$ is smooth at some point $x\in B_X$. Let $F=V(s^Y_1,\dots,s^Y_{d-1})\setminus V(s^Y_d)\subset Y$. For $y\in F$, we have $$\begin{aligned} \mathrm{d}_{x,y} s &= \sum_i \left( \mathrm{d}_x s^X_i \otimes s^Y_i(y)+ s^X_i(x)\otimes \mathrm{d}_y s^Y_i \right) = \mathrm{d}_x s^X_d \otimes s^Y_d(y) \neq 0\,.\end{aligned}$$ Moreover, for $y\in F$ this defines a constant conormal vector $v\in \mathbb{P}T_0^\vee (X\times Y)$. Thus the preimage of $v$ by the Gauss map contains $\{x\}\times F$ which is of dimension at least $$\dim Y - (d-1) = g-t-d+1 \,.$$ ◻ ## Gauss Degree on $\mathcal{A}^\delta_{t,g-t}$ {#Section: Gauss degree on Art} Knowing [Theorem 1](#Theorem: Gauss degree theta divisor with smooth singular locus){reference-type="ref" reference="Theorem: Gauss degree theta divisor with smooth singular locus"} and [Theorem 1](#Thm: Debarre: Cases where Ard verifies Star){reference-type="ref" reference="Thm: Debarre: Cases where Ard verifies Star"}, the computation of the Gauss degree on a general $(A,\Theta)\in \mathcal{A}^\delta_{t,g-t}$ boils down to a relatively simple Euler characteristic computation: **Lemma 1**. 
*Let $(A,\Theta)\in \mathcal{A}^\delta_{t,g-t}$, and assume that $\Theta$ has transversal $\mathrm{A}_1$ singularities and equality holds in [\[Equ: Sing Locus Art and base locus\]](#Equ: Sing Locus Art and base locus){reference-type="ref" reference="Equ: Sing Locus Art and base locus"}, then $$\chi(\mathrm{Sing}(\Theta))=(-1)^{g-2d} t!(g-t)!\binom{t-1}{d-1}\binom{g-t-1}{d-1} \,.$$ If $C\in | L\raisebox{-.5ex}{$|$}_{\mathrm{Sing}(\Theta)}|$ is smooth, then $$\chi(C)=(-1)^{g-2d-1} t!(g-t)!c_{t-d,g-t-d}\,,$$ where $c_{m,n}$ is defined by the generating series $$\frac{x+y}{(1-x)^d(1-y)^d(1-x-y)}=\sum_{m,n} c_{m,n} x^m y^n \,.$$* *Proof.* We keep the notation of the previous section. By assumption there is an isogeny of degree $d^2$, $\pi: B_X\times B_Y\to \mathrm{Sing}(\Theta)$, thus $$\chi(\mathrm{Sing}(\Theta))=\chi(B_X\times B_Y)/d^2\,.$$ The Euler characteristic of $B_X$ is the degree of the top Chern class $c_{t-d}(T_{B_X})$ of the tangent bundle. $B_X$ is the complete intersection of $d$ divisors in $|L_X|$ thus $N_{B_X/X}=L_X^{\oplus d}\raisebox{-.5ex}{$|$}_{B_X}$ and by [@Fulton1998 Ex. 3.2.12] and Riemann-Roch we have $$\begin{aligned} \deg(c(T_{B_X}))&=\deg \left(c(T_X)\raisebox{-.5ex}{$|$}_{B_X}\cdot (c(L_X\raisebox{-.5ex}{$|$}_{B_X}))^{-d} \right)\\ &=\deg \left(1\cdot (1+c_1(L_X))^{-d}\cap [B_X]\right) \\ &=\deg \sum_{k\geq 0} \binom{d+k-1}{d-1}(-1)^k c_1(L_X)^{k+d}\cap [X] \\ &= (-1)^{t-d}d\binom{t-1}{d-1} t! \,.\end{aligned}$$ The same computation applies to $B_Y$, thus $$\chi(B_X\times B_Y)=(-1)^{g-2d}d^2 \binom{t-1}{d-1}\binom{g-t-1}{d-1} t!(g-t)! \,.$$ We now compute $\chi(C)$. Let $C'=\pi^\ast C\subset X\times Y$. Let $x =c_1(p_X^\ast L_X)\in \mathrm{CH}^1(X\times Y)$ and $y= c_1(p_Y^\ast L_Y)\in \mathrm{CH}^1(X\times Y)$. 
We have $C'\in|(L_X\boxtimes L_Y)\raisebox{-.5ex}{$|$}_{B_X\times B_Y}|$, $$\left[ C' \right]=x^d y^d(x+y) \in \mathrm{CH}^{2d+1}(X\times Y)\,,$$ and $N_{C'/X\times Y}=\left((p_X^\ast L_X)^{\oplus d}\oplus (p_Y^\ast L_Y)^{\oplus d} \oplus (L_X\boxtimes L_Y)\right)\raisebox{-.5ex}{$|$}_{C'}$. By [@Fulton1998 Ex. 3.2.12] we have $$\begin{aligned} c(T_{C'})&= {c(T_{X \times Y}\raisebox{-.5ex}{$|$}_{C'})}\cdot{c(N_{C'/X\times Y})^{-1}}\\ &=(1+x)^{-d}(1+y)^{-d}(1+x+y)^{-1} \cap [C'] \\ &=\frac{x^{d}y^d(x+y)}{(1+x)^d(1+y)^d(1+x+y)} \cap [X\times Y] \\ &=x^dy^d\sum_{m,n}(-1)^{m+n+1}c_{m,n} x^m y^n\,. \end{aligned}$$ The only term of degree $g$ in the above series which does not vanish is $x^t y^{g-t}$ and $$\deg ( x^{t} y^{g-t})=d^2t!(g-t)!$$ by Riemann-Roch. Thus $$\chi(C)=\chi(C')/d^2=(-1)^{g-2d-1} t!(g-t)! c_{t-d,g-t-d}\,.$$ ◻ We have the following: **Theorem 1**. *Let $\delta\in \{(2),(3),(2,2)\}$, let $t\geq d \coloneqq \deg \delta$, let $(A,\Theta)\in \mathcal{A}^\delta_{t,g-t}$ be general and $\mathcal{G}:\Theta \dashrightarrow \mathbb{P}^{g-1}$ be the Gauss map. Then $$\begin{aligned} \deg \mathcal{G}= g! - t!(g-t)! a_{t-d,g-t-d} \,. \end{aligned}$$ where $a_{m,n}$ is defined by the generating series $$\frac{1}{(1-x)^d(1-y)^d}+\frac{1}{(1-x)^d(1-y)^d(1-x-y)}=\sum_{m,n} a_{m,n} x^m y^n \,.$$ More explicitly, $$\deg \mathcal{G}= t!(g-t)! \left( \binom{t-1}{d-1} \binom{g-t-1}{d-2} + \sum_{k=2}^d \binom{t-k}{d-k}\binom{g-t-1+k}{d-1} \right) \,.$$* *Remark 4*. The theorem above holds more generally when $(A,\Theta)\in \mathcal{A}^\delta_{t,g-t}$, $\mathrm{Sing}(\Theta)$ is smooth of dimension $g-2d$ and equality holds in [\[Equ: Sing Locus Art and base locus\]](#Equ: Sing Locus Art and base locus){reference-type="ref" reference="Equ: Sing Locus Art and base locus"}, but we do not know for which values of $\delta$, $t$ and $g$ this happens in general. 
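Independently of the proof, the two expressions for $\deg \mathcal{G}$ in the Theorem can be cross-checked numerically. Below is a small sketch (the helper names are mine, not from the text): the series coefficient $a_{m,n}$ is computed by convolving $[x^m]\,(1-x)^{-d}=\binom{m+d-1}{d-1}$ with $[x^iy^j]\,(1-x-y)^{-1}=\binom{i+j}{i}$.

```python
from math import comb, factorial

def a_coeff(m, n, d):
    """Coefficient a_{m,n} of the series
    1/((1-x)^d (1-y)^d) + 1/((1-x)^d (1-y)^d (1-x-y)),
    obtained by convolving the three factors."""
    first = comb(m + d - 1, d - 1) * comb(n + d - 1, d - 1)
    second = sum(
        comb(i + d - 1, d - 1) * comb(j + d - 1, d - 1) * comb(m - i + n - j, m - i)
        for i in range(m + 1) for j in range(n + 1)
    )
    return first + second

def deg_gauss_series(g, t, d):
    # deg G = g! - t!(g-t)! a_{t-d, g-t-d}
    return factorial(g) - factorial(t) * factorial(g - t) * a_coeff(t - d, g - t - d, d)

def deg_gauss_explicit(g, t, d):
    # the closed binomial form stated in the Theorem
    return factorial(t) * factorial(g - t) * (
        comb(t - 1, d - 1) * comb(g - t - 1, d - 2)
        + sum(comb(t - k, d - k) * comb(g - t - 1 + k, d - 1) for k in range(2, d + 1))
    )
```

For instance, `deg_gauss_series(6, 3, 2)` evaluates to $216$, in agreement with the value of $\deg\mathcal{G}\big(\mathcal{A}^{2}_{3,3}\big)$ quoted at the end of this section.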
*Proof.* By Theorem [Theorem 1](#Thm: Debarre: Cases where Ard verifies Star){reference-type="ref" reference="Thm: Debarre: Cases where Ard verifies Star"} and Theorem [Theorem 1](#Theorem: Gauss degree theta divisor with smooth singular locus){reference-type="ref" reference="Theorem: Gauss degree theta divisor with smooth singular locus"}, there is a smooth divisor $C\in |{L_A\raisebox{-.5ex}{$|$}_{\Theta_{\mathrm{sing}}}} |$ such that $$\begin{aligned} \deg \mathcal{G}&= g!-2(-1)^{g-2d}\chi(\mathrm{Sing}(\Theta) )-(-1)^{g-2d-1}\chi(C)\,.\end{aligned}$$ By [Lemma 1](#Lem: Euler characteristic of B and C){reference-type="ref" reference="Lem: Euler characteristic of B and C"} we have $$\begin{aligned} (-1)^{g-2d}\chi(\mathrm{Sing}(\Theta))&=t!(g-t)! \binom{t-1}{d-1}\binom{g-t-1}{d-1} \\ &= t!(g-t)! \left\{ \frac{1}{(1-x)^d(1-y)^d} \right\}_{x^{t-d}y^{g-t-d}}\,.\end{aligned}$$ Thus $$2(-1)^{g-2d}\chi(\mathrm{Sing}(\Theta))+(-1)^{g-2d-1}\chi(C)=t!(g-t)!\, a_{t-d,g-t-d} \,,$$ where $$\begin{aligned} \sum_{m,n\geq 0} a_{m,n}x^m y^n &= \frac{2}{(1-x)^d(1-y)^d}+\frac{x+y}{(1-x)^d(1-y)^d(1-x-y)} \\ &=\frac{1}{(1-x)^d(1-y)^d}+\frac{1}{(1-x)^d(1-y)^d(1-x-y)}\,.\end{aligned}$$ We use the combinatorial Lemma [Lemma 1](#Lem: generating series Amn){reference-type="ref" reference="Lem: generating series Amn"} below to conclude $$\begin{aligned} \deg \mathcal{G}&= g!-t!(g-t)! a_{t-d,g-t-d} \\ &=g!-t!(g-t)! \left( \binom{t-1}{d-1} \binom{g-t-1}{d-1} + \binom{g}{t} \right. \\ &\quad \left.- \sum_{k=1}^d \binom{t-k}{t-d}\binom{g-t-1+k}{d-1} \right) \\ &=t!(g-t)! \left( \binom{t-1}{d-1} \binom{g-t-1}{d-2} + \sum_{k=2}^d \binom{t-k}{d-k}\binom{g-t-1+k}{d-1} \right)\,.\end{aligned}$$ ◻ The generating series of the theorem has the following combinatorial interpretation: **Lemma 1**. 
*Consider the generating series $$\frac{1}{(1-x)^{d}(1-y)^{d}(1-x-y)}= \sum_{m,n\geq 0} A_{m,n} x^m y^n \,.$$ Then the coefficient $A_{m,n}$ is equal to the number of (weak) $(m+d+1)$-compositions of $n+d$ $$0 \leq a_1 \leq \cdots\leq a_{m+d} \leq a_{m+d+1}=n+d \,,$$ such that $a_{m+1} \geq d$. This number is equal to $$A_{m,n}= \binom{m+n+2d}{m+d} - \sum_{k=1}^d \binom{m+d-k}{m} \binom{n+d-1+k}{d-1} \,.$$* *Proof.* Recall that by an $m$-composition of $n$ (compositions are always weak here) we mean an $m$-tuple $(a_1,\dots , a_m)$ such that $$0 \leq a_1 \leq \cdots \leq a_{m-1} \leq a_{m} = n \,.$$ The number of $m$-compositions of $n$ is equal to $$\binom{n+m-1}{m-1} \,.$$ We know that $$\frac{1}{(1-y)^d}= \sum_{n\geq 0} \binom{n+d-1}{d-1} y^n$$ is the generating series for the $d$-compositions of $n$. Moreover, $$\frac{1}{1-x-y}= \sum_{m,n \geq 0} \binom{m+n}{m} x^m y^n$$ is the generating series for the $(m+1)$-compositions of $n$. Thus $$\frac{1}{(1-y)^d(1-x-y)}$$ is the generating series for the $(m+d+1)$-compositions of $n$. Now we can interpret $1/(1-x)^d$ as the generating series of the $(m+1)$-compositions of $d-1$. Thus the coefficient $A_{m,n}$ equals the cardinality of the set $$\bigsqcup_{k=0}^m \{ (k+1) \text{-compositions of } d-1 \} \times \{(m-k+1+d) \text{-compositions of } n \} \,.$$ Now to a $(k+1)$-composition $(a_1,\dots , a_{k+1})$ of $d-1$ and an $(m-k+1+d)$-composition $(b_1,\dots,b_{m-k+1+d})$ of $n$, we associate an $(m+d+1)$-composition of $n+d$ in the following way: $$\begin{aligned} \tilde{a}_i & =a_i \quad \text{for} \quad 1\leq i \leq k \\ \tilde{a}_i & = b_{i-k} + a_{k+1} + 1 \quad \text{for} \quad k+1 \leq i \leq m+d+1 \,.\end{aligned}$$ Now it is immediate that this gives a bijection onto all the $(m+d+1)$-compositions of $n+d$ such that $a_{m+1} \geq d$. 
The inverse map is given by letting $k+1$ be the index of the first coefficient of the composition that is at least $d$. Thus $$\begin{aligned} A_{m,n}&= \# \{ m+d+1 \text{ compositions of } n+d \} \\ & \quad - \sum_{k=0}^{d-1} \# \{ m+d+1 \text{ compositions of } n+d \text{ such that } a_{m+1}=k \} \\ &= \# \{ m+d+1 \text{ compositions of } n+d \} \\ & \quad - \sum_{k=0}^{d-1} \# \{ m+1 \text{ compositions of } k \} \times \# \{ d \text{ compositions of } n+d-k \} \\ &= \binom{m+n+2d}{m+d} - \sum_{k=0}^{d-1} \binom{m+k}{m} \binom{n+2d-1-k}{d-1}\,.\end{aligned}$$ ◻ ## Numerical Analysis of the Degree {#Sec: Numerical Analysis Degree} For an irreducible locus $Z\subset \mathcal{A}_g$, we denote by $\deg \mathcal{G}(Z)$ the degree of the Gauss map for a general $(A,\Theta)\in Z$. We close this section with a numerical analysis of the degree $\deg \mathcal{G} \left( \mathcal{A}^\delta_{t,g-t} \right)$ as $t$ varies in $[\![ d, \lfloor g/2 \rfloor ]\!]$. We have the following: **Proposition 1**. *For $\delta\in \{(2),(3),(2,2)\}$ and $g\geq 2d\coloneqq 2\deg \delta$, the degree of the Gauss map on the loci $\mathcal{A}^\delta_{t,g-t}$, $$\begin{aligned} [\![ d, \lfloor g/2 \rfloor ]\!] &\to \mathbb{N}\\ t &\mapsto \deg \mathcal{G}\left( \mathcal{A}^\delta_{t,g-t} \right)\end{aligned}$$ is a strictly decreasing function of $t$. In particular, the degree of the Gauss map separates these loci.* 
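The composition count in the combinatorial Lemma above can also be double-checked by brute-force enumeration. A small sketch (the helper names are mine): the closed form is compared against a direct count of the nondecreasing tuples with $a_{m+1}\geq d$.

```python
from itertools import combinations_with_replacement
from math import comb

def A_bruteforce(m, n, d):
    """Count weak (m+d+1)-compositions 0 <= a_1 <= ... <= a_{m+d} <= a_{m+d+1} = n+d
    with a_{m+1} >= d, by direct enumeration (the last entry a_{m+d+1} = n+d is fixed)."""
    return sum(
        1
        for a in combinations_with_replacement(range(n + d + 1), m + d)
        if a[m] >= d  # a[m] is a_{m+1} in the 1-based indexing of the text
    )

def A_closed(m, n, d):
    # closed form stated in the Lemma
    return comb(m + n + 2 * d, m + d) - sum(
        comb(m + d - k, m) * comb(n + d - 1 + k, d - 1) for k in range(1, d + 1)
    )
```

For example, for $d=2$, $m=n=1$ both counts give $A_{1,1}=10$.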
This is not true anymore when $g$ varies, as one can check that the lowest pair of $g$'s where we have an equality of degrees is $g=28$ and $g=30$, with $$\deg \mathcal{G}\left( \mathcal{A}^{3}_{5,28-5}\right)=\deg \mathcal{G}\left( \mathcal{A}^{2}_{7,30-7}\right)=3908824930919408467968000000 \,.$$ *Proof.* We will prove this by looking at the explicit description of the degree. Recall that by [Theorem 1](#Thm: Gauss Degree on A^d_g_1,g_2 ){reference-type="ref" reference="Thm: Gauss Degree on A^d_g_1,g_2 "} the degree is given by $$\begin{aligned} F_g(t)&= t!(g-t)!\left( \binom{t-1}{d-1} \binom{g-t-1}{d-2} + \sum_{k=2}^d \binom{t-k}{t-d}\binom{g-t-1+k}{d-1} \right) \,. \end{aligned}$$ We will now prove the proposition by treating each possible value of $\delta$ separately. *Case $\delta=(2)$*. In this case the formula becomes $$F_g(t)=t!(g-t)!g \,,$$ and this is obviously a strictly decreasing function of $t$ in the range $2\leq t\leq \lfloor g/2 \rfloor$. *Case $\delta=(3)$*. In this case, $$F_g(t)=t!(g-t)!(-t^2+gt+3-g) \,.$$ Let $f(x)=-x^2+gx+3-g$, so that $F_g(t)=t!(g-t)!\,f(t)$. We have $$\begin{aligned} \Delta F_g(t) &\coloneqq F_g(t+1)-F_g(t)= t!(g-t-1)!(g-2t-1)h_g(t)\,,\end{aligned}$$ with $h_g(t)=t^2-(g-1)t+g-2$. Evaluating, we have $$\begin{aligned} h_g(3)&=10-2g<0 \quad \text{for }g\geq 6 \\ h_g\left(\frac{g-1}{2}\right)&=(-g^2+6g-9)/4 <0 \quad \text{for }g\geq 6 \,.\end{aligned}$$ $h_g$ is convex, thus strictly negative on $[3,(g-1)/2]$, and so $F_g$ is strictly decreasing. *Case $\delta=(2,2)$*. Now $$F_g(t)=t!(g-t)!\frac{g}{12}(2t+1-g) h_g(t)\,,$$ where $$h_g(x)=x^4+x^3(-2g+2)+x^2(g^2+g-7)+x(-3g^2+11g-8)+2g^2-10g+12 \,.$$ We compute $$\frac{\partial h_g}{\partial x}=( 2 x+1-g) ( 2 x^2+2 x(1 - g )+ 3 g -8) \,,$$ which is positive for $4\leq x \leq (g-1)/2$. 
Evaluating at $x=4$ we have $$h_g(4)=6(g^2-13g+42) >0 \quad \text{for } g\geq 8 \,.$$ Thus $$\Delta F_g(t) <0 \quad \text{for } 4\leq t \leq \lfloor g/2 \rfloor -1 \quad \text{and } g\geq 8 \,,$$ and thus the degree of the Gauss map is strictly decreasing on this range. ◻ Finally, we study how this degree compares with the degree of the Gauss map on Jacobians. **Proposition 1**. *The degree of the Gauss map on Jacobians is always different from that on a general member of the loci $\mathcal{A}^\delta_{t,g-t}$. Namely, for $g\geq 7$, $\delta\in \{(2),(3),(2,2)\}$ and $t\geq d$, $$\deg \mathcal{G}\left( \mathcal{A}^\delta_{t,g-t} \right) > \deg \mathcal{G} \left( \mathcal{J}_g \right) \,.$$ For $g=5$ or $g=6$ the above inequality fails, but the degrees are still different.* *Proof.* By Proposition [Proposition 1](#Prop: Variation deg G Ard){reference-type="ref" reference="Prop: Variation deg G Ard"}, the smallest value of the left hand side of the inequality is achieved when $\delta=(2)$ and $t= \lfloor g/2 \rfloor$. Thus we have to study $$\deg \mathcal{G}\left(\mathcal{A}^{2}_{\lfloor g/2 \rfloor,g- \lfloor g/2 \rfloor}\right)-\deg \mathcal{G}(\mathcal{J}_g )\geq g (g/2)!^2-\binom{2g-2}{g-1}\,.$$ Using Stirling's lower bound for the factorial we have, for $g\geq 22 >8e$ $$\begin{aligned} g(g/2)!^2 > (g/2)!^2 > \binom{2g-2}{g-1}\,.\end{aligned}$$ The remaining values can be checked by hand. We get for example for $g=7$ $$\deg \mathcal{G}\left(\mathcal{A}^{2}_{3,4}\right)=1008>\deg \mathcal{G} \left( \mathcal{J}_7 \right)=924 \,.$$ For $g=6$ $$\deg \mathcal{G}\left(\mathcal{A}^{2}_{3,3}\right)=216 \,, \quad \deg \mathcal{G}\left(\mathcal{A}^{2}_{2,4}\right)=288\,, \quad \deg \mathcal{G} \left( \mathcal{J}_6 \right)=252 \,.$$ For $g=5$ $$\deg \mathcal{G}\left(\mathcal{A}^{2}_{2,3}\right)=60 \,, \quad \deg \mathcal{G} \left( \mathcal{J}_5 \right)=70 \,.$$ ◻
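The numerical claims of this subsection (the strict decrease in $t$, and the coincidence of degrees for $g=28$ and $g=30$ noted in the Remark above) are easy to reproduce from the series formula for $\deg\mathcal{G}$. A sketch (the helper names are mine):

```python
from math import comb, factorial

def a_coeff(m, n, d):
    # coefficient of x^m y^n in 1/((1-x)^d (1-y)^d) + 1/((1-x)^d (1-y)^d (1-x-y))
    first = comb(m + d - 1, d - 1) * comb(n + d - 1, d - 1)
    second = sum(
        comb(i + d - 1, d - 1) * comb(j + d - 1, d - 1) * comb(m - i + n - j, m - i)
        for i in range(m + 1) for j in range(n + 1)
    )
    return first + second

def deg_gauss(g, t, d):
    # degree of the Gauss map on a general member of A^delta_{t,g-t}, with d = deg(delta)
    return factorial(g) - factorial(t) * factorial(g - t) * a_coeff(t - d, g - t - d, d)

# the coincidence: g = 28, delta = (3), t = 5  versus  g = 30, delta = (2), t = 7
coincidence = deg_gauss(28, 5, 3) == deg_gauss(30, 7, 2) == 3908824930919408467968000000
```

For $\delta=(2)$ the formula collapses to $g\,t!(g-t)!$, which makes the $g=30$ value, $30\cdot 7!\cdot 23!$, easy to check by hand.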
{ "id": "2309.09885", "title": "The Gauss Map on Theta Divisors with Transversal $\\mathrm{A}_1$\n Singularities", "authors": "Constantin Podelski", "categories": "math.AG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we prove the uniqueness of blowup at the maximum point of the coincidence set for the superconductivity problem, based mainly on the Weiss-type and Monneau-type monotonicity formulas; the proof of the main results in this paper is inspired by the recent paper [@CFL22] by Chen-Feng-Li. **Keywords** Superconductivity, Obstacle problem, Singular points, Blowup, Uniqueness address: - School of Mathematical Science, Shenzhen University, Shenzhen, 518061, P.R. China. and Department of Mathematics, Sichuan University, Chengdu, 610064, P.R. China. - Department of Mathematics, Sichuan University, Chengdu, 610064, P.R. China. - Department of Mathematics, Sichuan University, Chengdu, 610064, P.R. China. author: - Lili Du - Xu Tang$^*$ - Cong Wang title: "**Uniqueness of blowup at singular points for superconductivity problem**" --- [^1] [^2] # Introduction In this paper, we consider the superconductivity problem $$\label{11} \Delta u=\chi_{\{|\nabla u|>0\}} \quad \text{in} ~B_1,$$ which is an obstacle-type problem derived from superconducting models, with the more general form $$\label{12} \Delta u=f(x, u) \chi_{\{|\nabla u|>0\}} \quad \text { in } B_1,$$ where $B_1$ is the unit ball in $\mathbb{R}^n$ and the function $f>0$ satisfies $f \in C^{0,1}\left(\mathbb{R}^n \times \mathbb{R}\right)$. In analyzing the evolution of vortices generated in the mean-field model of a magnetic field passing through a superconductor, we obtain a degenerate parabolic-elliptic system. The simplified stationary model of the problem (in a local setting) is reduced to the problem [\[12\]](#12){reference-type="eqref" reference="12"} with appropriate boundary conditions (see [@Cha95]). Berestycki-Bonnet-Chapman [@BBC94] and Chapman-Rubinstein-Schatzman [@CRS96] proposed a related model, with a rigorous derivation from the Ginzburg-Landau model by Sandier-Serfaty [@SS00]. 
We would like to refer the interested readers to the references [@PSU12] and [@Ro] for the physical background. From the structure of the equation, the problem [\[11\]](#11){reference-type="eqref" reference="11"} is more general than the no-sign obstacle problem, because the function $u$ may take different constant values in different connected branches of *the coincidence set* $\{|\nabla u| =0\}$, which also leads to the complexity of the free boundary of the problem. Elliott-Schatzle-Stoth [@ESS98] studied the above general degenerate parabolic-elliptic system; they proved the existence and uniqueness of the viscosity solution in two dimensions, and found special solutions of the stationary problem. Caffarelli-Salazar [@CS02] constructed the viscosity solution of a fully nonlinear elliptic equation more general than problem [\[12\]](#12){reference-type="eqref" reference="12"} and obtained some properties of the viscosity solution. Moreover, based on the results in [@CS02], existence and regularity can be proved by an appropriate use of the Alexandroff-Bakelman approximation technique. Bonnet-Monneau [@BM00] and Monneau [@Mon04] investigated the free boundary of a specific configuration (with single patches). To be specific, Bonnet-Monneau [@BM00] showed existence and regularity via Nash-Morse theory, and Monneau [@Mon04] proved the regularity of the free boundary when it is close enough to the fixed boundary; they also gave a stability result for the free boundary and a bound on the Hausdorff measure of the free boundary. The free boundary in general was first studied by Caffarelli-Salazar [@CS02] and then by Caffarelli-Salazar-Shahgholian [@CSS04]. In particular, based on a refined analysis, Caffarelli-Salazar-Shahgholian reduced the problem to the one-patch case, whose global solutions were characterized with the help of Weiss's monotonicity formula. 
In [@PSU12], the authors systematically studied the problem [\[11\]](#11){reference-type="eqref" reference="11"}, and proved the optimal regularity and nondegeneracy of the solution. Moreover, they established the free boundary regularity near regular points and the structure of the regular point set by the blowup method. However, as mentioned in [@PSU12 Chapter 7, Notes], for the problem [\[11\]](#11){reference-type="eqref" reference="11"}, very little is known about the singular set; all the existing methods seem to fail. This paper attempts to make a preliminary analysis of the singular set of the problem [\[11\]](#11){reference-type="eqref" reference="11"}. It is well known that the possible dependence of the blowup limit on subsequences is one of the main difficulties in the study of free boundary problems. This paper is devoted to the uniqueness of blowup at singular points for the superconductivity problem [\[11\]](#11){reference-type="eqref" reference="11"}. For the classical obstacle problem, the structure of the singular set was discovered by Caffarelli [@Caf98]. Subsequently, Monneau [@Mon03] introduced a concise method to prove the uniqueness of blowup at singular points; the essential tool is Monneau's monotonicity formula, with which one can prove the continuous dependence of blowups and study the structure of the singular set (see [@PSU12 Chapter 7.4]). Recently, Chen-Feng-Li [@CFL22] noted that by means of Monneau's monotonicity formula it is also possible to prove the uniqueness of blowup at singular points of the no-sign obstacle problem. Moreover, this paper attempts to generalize Monneau's monotonicity formula to the superconductivity problem [\[11\]](#11){reference-type="eqref" reference="11"}. Throughout this paper, we denote by $\Gamma:=\partial\{ |\nabla u|>0\} \cap B_1$ the free boundary for the problem [\[11\]](#11){reference-type="eqref" reference="11"}, and $\Sigma$ *the singular set* of $\Gamma$, i.e. 
$x^{0} \in \Sigma$ if and only if there exists a sequence $r_j \rightarrow 0$ such that $u_{x^0, r_j}(x):=\frac{u\left(x^0+r_j x\right)-u(x^0)}{r_j^2}$ converges to a homogeneous quadratic polynomial $q(x)$ with $\Delta q=1$. We call such a $q(x)$ a *blowup* of $u$ at $x^0$. Without loss of generality, let us set $x^0=0$ below. For convenience, we denote $$\begin{aligned} \mathcal{Q}:=&\{q(x) \text { homogeneous quadratic polynomial}: \Delta q=1\}\end{aligned}$$ and $$\begin{aligned} \mathcal{Q}^{+}:=&\{q \in \mathcal{Q}: q \geq 0\}.\end{aligned}$$ Our main result reads **Theorem 1**. *Let $u$ be any solution to the problem [\[11\]](#11){reference-type="eqref" reference="11"}, and assume that $0 \in \Sigma$ satisfies $$\begin{aligned} \label{13} u \leq u(0) \quad \text{on} \quad \{|\nabla u|=0\}.\end{aligned}$$ Then there is a $q_0(x) \in \mathcal{Q}$ such that $$\begin{aligned} u_r(x):=\frac{u(r x)-u(0)}{r^2} \rightarrow q_0 ~~\text { in } C_{\rm{loc}}^{1,\alpha}\left(\mathbb{R}^n\right) \quad \text { as } r \rightarrow 0+\end{aligned}$$ for any $\alpha \in (0,1)$. Moreover, there holds $$\begin{aligned} \label{15} u(x)-u(0)=q_{0}(x)+o\left(|x|^2\right).\end{aligned}$$* **Remark 1**. *It should be noted that $u$ being a solution to the problem [\[11\]](#11){reference-type="eqref" reference="11"} means that $u \in C_{\rm{loc}}^{1,1}\left(B_1\right)$ satisfies $\left\|D^2 u\right\|_{ L^{\infty}\left(B_1\right)} \leq M$ for some constant $M>0$ and that $0 \in \Gamma$ (see [@PSU12 Definition 3.15]).* **Remark 2**. *Whether blowup is unique at a singular point is usually a priori unknown, i.e. maybe $u_r$ will sub-converge to a different polynomial for another sequence $\tilde{r}_j \rightarrow 0$. 
In Theorem [Theorem 1](#Thm11){reference-type="ref" reference="Thm11"}, there is no need to take a subsequence for $r \rightarrow 0$, indicating the uniqueness of blowup.* Our idea to prove the uniqueness of blowup at singular points is to construct a corresponding Monneau-type monotonicity formula for the problem [\[11\]](#11){reference-type="eqref" reference="11"}. It is worth noting that a common point between the classical obstacle problem and the no-sign obstacle problem is that both the solution $u$ and its first partial derivatives vanish at free boundary points. However, in the problem [\[11\]](#11){reference-type="eqref" reference="11"}, only the gradient of $u$ is zero on the free boundary, and there is no information about the value of $u$ itself, so Monneau's monotonicity formula for the classical obstacle problem (e.g. see [@PSU12 Theorem 7.4]) is not valid for the problem [\[11\]](#11){reference-type="eqref" reference="11"}. Inspired by the research on the superconductivity problem in [@PSU12], when we derive the monotonicity formula in the next section, we add the hypothesis [\[13\]](#13){reference-type="eqref" reference="13"}. Furthermore, we consider $$\begin{aligned} \frac{1}{r^{n+3}} \int_{\partial B_r}(u-u(0)-q)^2 d \mathcal{H}^{n-1}\end{aligned}$$ as the Monneau-type energy functional, where $q \in \mathcal{Q}^{+}$. We will show that the derivative of the functional with respect to $r$, which equals (see Lemma [Lemma 3](#Lem23){reference-type="ref" reference="Lem23"} below for details) $$\begin{aligned} \frac{2}{r^{n+4}} \int_{\partial B_r} w(x \cdot \nabla w-2 w) d \mathcal{H}^{n-1},\end{aligned}$$ is nonnegative, where $w=u-u(0)-q$. 
According to the construction of Weiss's energy functional (see [\[21\]](#21){reference-type="eqref" reference="21"} below), there is the identity $$\begin{aligned} \frac{1}{r^{n+3}} \int_{\partial B_r} w(x \cdot \nabla w-2 w) d \mathcal{H}^{n-1}=W(r, u)-W(0+, u)+\frac{1}{r^{n+2}} \int_{B_r} w \Delta w d x,\end{aligned}$$ so the problem is transformed into proving that the right hand side of the above equation is nonnegative. That the sum of the first two terms is nonnegative follows from Weiss's monotonicity formula; therefore, a key fact is $$\begin{aligned} \label{14} w \Delta w \geq 0 \quad \text { in } \quad B_r.\end{aligned}$$ This fact also plays an important role in the discussion of the classical obstacle problem. In addition, it should be noted that the limit $W(0+, u)$ is required to exist in our proof, so the nondecreasing monotonicity of Weiss's energy functional is a sufficient condition. The hypothesis [\[13\]](#13){reference-type="eqref" reference="13"} guarantees that Weiss's monotonicity formula and [\[14\]](#14){reference-type="eqref" reference="14"} hold simultaneously. Since the blowup of the problem [\[11\]](#11){reference-type="eqref" reference="11"} is known to be a homogeneous quadratic polynomial in $\mathcal{Q}$ at every singular point, in order to prove the uniqueness of $q_0 \in \mathcal{Q}$ in Theorem [Theorem 1](#Thm11){reference-type="ref" reference="Thm11"}, it is only necessary to prove that the corresponding coefficient matrix is unique. It is worth noting that $q \in \mathcal{Q}^{+}$ is arbitrary. We use a method similar to that in [@CFL22] and select a special family $\{q^{t}\}_{t \in (-1,1)}$, where $q^{t} \in \mathcal{Q}^{+}$. Applying the Monneau-type monotonicity formula, we get an equation with parameter $t$. Differentiating both sides of this equation with respect to $t$ yields another identity, and the latter, somewhat subtly, leads to the desired uniqueness. 
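The formula for the derivative of the Monneau-type functional quoted above is, in fact, a pure scaling identity, valid for any smooth $w$: $\frac{d}{dr}\big[r^{-(n+3)}\int_{\partial B_r}w^2\big]=\frac{2}{r^{n+4}}\int_{\partial B_r}w(x\cdot\nabla w-2w)$. It can be checked numerically in dimension $n=2$; a sketch (the sample $w$ and all helper names are mine):

```python
import math

def circle_avg(f, r, n=256):
    """Average of f over the circle of radius r; the equally spaced (trapezoid)
    rule is exact for trigonometric polynomials of degree < n."""
    return sum(
        f(r * math.cos(2 * math.pi * j / n), r * math.sin(2 * math.pi * j / n))
        for j in range(n)
    ) / n

# a sample smooth function w and its gradient (the identity holds for any smooth w)
w  = lambda x, y: x**2 + x * y**3
wx = lambda x, y: 2 * x + y**3
wy = lambda x, y: 3 * x * y**2

N = 2  # space dimension

def monneau(r):
    # r^{-(N+3)} \int_{\partial B_r} w^2 dH^{N-1}; in N = 2 the circle has length 2*pi*r
    return r ** (-(N + 3)) * 2 * math.pi * r * circle_avg(lambda x, y: w(x, y) ** 2, r)

def claimed_derivative(r):
    # 2 r^{-(N+4)} \int_{\partial B_r} w (x . grad w - 2 w) dH^{N-1}
    f = lambda x, y: w(x, y) * (x * wx(x, y) + y * wy(x, y) - 2 * w(x, y))
    return 2 * r ** (-(N + 4)) * 2 * math.pi * r * circle_avg(f, r)

h = 1e-5
fd_derivative = (monneau(1 + h) - monneau(1 - h)) / (2 * h)  # central difference at r = 1
```

Here `fd_derivative` agrees with `claimed_derivative(1.0)` up to the finite-difference error.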
In addition, estimates such as [\[15\]](#15){reference-type="eqref" reference="15"} are the starting point for studying the structure of the singular set in problem [\[11\]](#11){reference-type="eqref" reference="11"}. If we can show that the bound $o(r^2)$ on the right hand side of [\[15\]](#15){reference-type="eqref" reference="15"} is uniform over all singular points, or derive a more accurate quantitative characterization of $o(r^2)$, we can expect to obtain results on the structure of the singular set somewhat similar to those for the classical obstacle problem (see [@FR22; @Fig18; @Fig18b; @PSU12] and the references therein). The structure of this paper is arranged as follows. In Section [2](#secion2){reference-type="ref" reference="secion2"}, we derive the monotonicity formulas. In Section [3](#section3){reference-type="ref" reference="section3"}, we prove the main result of this paper. # Monotonicity formulas {#secion2} In the present section, we first introduce Weiss's monotonicity formula and then derive the Monneau-type monotonicity formula from it. **Lemma 1** (Weiss's monotonicity formula). *Let $u$ be any solution to the problem [\[11\]](#11){reference-type="eqref" reference="11"}, and assume that $$\begin{aligned} \label{25} u \leq u(0) \quad \text{on} \quad \{|\nabla u|=0\}.\end{aligned}$$ Then $$\begin{aligned} \label{21} r \mapsto W(r, u):=\frac{1}{r^{n+2}} \int_{B_r}\left(|\nabla u|^2+2(u-u(0))\right) d x-\frac{2}{r^{n+3}} \int_{\partial B_r}(u-u(0))^2 d\mathcal{H}^{n-1}\end{aligned}$$ is a nondecreasing absolutely continuous function for $0<r<1$ and $$\begin{aligned} \label{22} \frac{d}{d r} W\left(r, u\right) \geq \frac{2}{r^{n+4}} \int_{\partial B_r}\left|x \cdot \nabla u-2(u-u(0))\right|^2 d \mathcal{H}^{n-1}\end{aligned}$$ for a.e. $0<r<1$.* **Remark 3**. 
*It is worth mentioning that the functional $W$ has the following scaling property $$\begin{aligned} \label{23} W\left(r s, u\right)=W(s, u_r)\end{aligned}$$ for any $0<r<1,~0<s<\frac{1}{r}$, where $$\begin{aligned} \label{24} u_r(x)=\frac{u(r x)-u(0)}{r^2} .\end{aligned}$$ In particular, $$\begin{aligned} \label{212} W\left(r, u\right)=W(1, u_r).\end{aligned}$$* The proof can be found in [@PSU12 Theorem 3.26], but for completeness, we also give it below. *Proof.* It follows from [\[212\]](#212){reference-type="eqref" reference="212"} that $$\begin{aligned} \frac{d}{d r} W(r, u)&=\frac{d}{d r} W\left(1, u_r\right)\\ & =\int_{B_1}\frac{d}{d r}\left(\left|\nabla u_r\right|^2+2 u_r\right) d x-2 \int_{\partial B_1} \frac{d}{d r}\left(u_r^2\right) d \mathcal{H}^{n-1} \\ & =\int_{B_1}\left(2 \nabla u_r \cdot \nabla \frac{d u_r}{d r}+2 \frac{d u_r}{d r}\right) d x-4 \int_{\partial B_1} u_r \frac{d u_r}{d r} d\mathcal{H}^{n-1} .\end{aligned}$$ Integrating by parts, we get $$\begin{aligned} \int_{B_1} \nabla u_r \cdot \nabla \frac{d u_r}{d r} d x= \int_{\partial B_1} \frac{d u_r}{d r} \frac{\partial u_r}{\partial \nu} d\mathcal{H}^{n-1}-\int_{B_1} \Delta u_r \frac{d u_r}{d r} d x,\end{aligned}$$ where $\frac{\partial u_r}{\partial \nu}$ is the outer normal derivative of $u_r$ on $\partial B_1$, and so $$\begin{aligned} \label{219} \frac{d}{d r} W(r, u) =2 \int_{B_1} \frac{d u_r}{d r}\left(1-\Delta u_r\right) d x+2 \int_{\partial B_1} \frac{d u_r}{d r}\left(\frac{\partial u_r}{\partial \nu}-2 u_r\right) d \mathcal{H}^{n-1}.\end{aligned}$$ Note that $$\begin{aligned} 1-\Delta u_r=\chi_{\left\{\left|\nabla u_r\right|=0\right\}}\end{aligned}$$ and $$\begin{aligned} \frac{d u_r}{d r} & =\frac{r x \cdot \nabla u(r x)-2(u(rx)-u(0))}{r^3} \notag\\ & =\frac{x}{r} \cdot \nabla u_r-\frac{2}{r} u_r \label{26}\end{aligned}$$ implies that $$\begin{aligned} \frac{d u_r}{d r}=-\frac{2}{r} u_r \quad \text { on }\quad \left\{\left|\nabla u_r\right|=0\right\} .\end{aligned}$$ Hence, we
obtain $$\begin{aligned} \int_{B_1} \frac{d u_r}{d r}\left(1-\Delta u_r\right) d x & =-\frac{2}{r} \int_{\{|\nabla u_r|=0\}} u_r d x \notag\\ & =-\frac{2}{r} \int_{\left\{\left|\nabla u_r\right|=0\right\}} \frac{u(r x)-u(0)}{r^2} d x \notag\\ & \geq 0, \label{220}\end{aligned}$$ since the assumption [\[25\]](#25){reference-type="eqref" reference="25"} leads to $u(rx) \leq u(0)$ on $\left\{\left|\nabla u_r\right|=0\right\}$. It follows from [\[219\]](#219){reference-type="eqref" reference="219"} and [\[220\]](#220){reference-type="eqref" reference="220"} that $$\begin{aligned} \frac{d}{d r} W(r, u) & \geq 2 \int_{\partial B_1} \frac{d u_r}{d r}\left(\frac{\partial u_r}{\partial \nu}-2 u_r\right) d \mathcal{H}^{n-1} \\ & =2 \int_{\partial B_1} \frac{d u_r}{d r}\left(x \cdot \nabla u_r-2 u_r\right) d \mathcal{H}^{n-1} \\ & =\frac{2}{r} \int_{\partial B_1}\left(x \cdot \nabla u_r-2 u_r\right)^2 d \mathcal{H}^{n-1} \\ & =\frac{2}{r^{n+4}} \int_{\partial B_r} (x \cdot \nabla u-2(u-u(0)))^2 d \mathcal{H}^{n-1},\end{aligned}$$ where we have used [\[26\]](#26){reference-type="eqref" reference="26"}. ◻ The following results are needed to derive Monneau's monotonicity formula. **Lemma 2**. *Assume that $u_{r_j} \rightarrow u_0$ in $C_{\rm{loc}}^{1,\alpha}\left(\mathbb{R}^n\right)$ for some sequence $r_j \rightarrow 0$. Then $$\begin{aligned} \label{27} W\left(r, u_0\right)=W(r, q)\end{aligned}$$ for any $q \in \mathcal{Q}$ and any $r \in(0,1)$.* *Proof.* Noting that $0$ is a singular point for the problem [\[11\]](#11){reference-type="eqref" reference="11"}, we know that $u_0$ is a $2$-homogeneous polynomial (see [@PSU12 Theorem 3.23]), i.e. $u_0 \in \mathcal{Q}$.
Thanks to Lemma [Lemma 1](#Lem21){reference-type="ref" reference="Lem21"} and the scaling property [\[23\]](#23){reference-type="eqref" reference="23"}, we obtain $$\begin{aligned} \label{28} W\left(r, u_0\right)=\lim _{j \rightarrow \infty} W\left(r, u_{r_j}\right)=\lim _{j \rightarrow \infty} W\left(r r_j, u\right)=W(0+, u)\end{aligned}$$ for any $r>0$, which gives that $W\left(r, u_0\right)$ is constant. In particular, $$\begin{aligned} \label{29} W\left(r, u_0\right) \equiv W\left(1, u_0\right) =W(0+, u) .\end{aligned}$$ Taking $r=1$ in [\[28\]](#28){reference-type="eqref" reference="28"}, we get $$\begin{aligned} W(0+, u) & =W\left(1, u_0\right) \\ & =\int_{B_1}\left(\left|\nabla u_0\right|^2+2\left(u_0-u_0(0)\right)\right) d x-2 \int_{\partial B_1}\left(u_0-u_0(0)\right)^2 d \mathcal{H}^{n-1} \\ & =\int_{B_1}\left(\left|\nabla u_0\right|^2+2 u_0\right) d x-2 \int_{\partial B_1} u_0^2 d \mathcal{H}^{n-1} \\ & =\int_{\partial B_1} u_0 \frac{\partial u_0}{\partial \nu} d \mathcal{H}^{n-1}-\int_{B_1} u_0 \Delta u_0 d x+2 \int_{B_1} u_0 d x-2 \int_{\partial B_1} u_0^2 d \mathcal{H}^{n-1} \\ & =\int_{B_1}\left(-\Delta u_0+2\right) u_0 d x+\int_{\partial B_1} u_0\left(\frac{\partial u_0}{\partial \nu}-2 u_0\right) d \mathcal{H}^{n-1}.\end{aligned}$$ Since $u_0 \in \mathcal{Q}$, we have $$\frac{\partial u_0}{\partial \nu}-2 u_0=x \cdot \nabla u_0-2 u_0=0 \quad \text{on}\quad \partial B_1.$$ In addition, $$\begin{aligned} \Delta u_0=\chi_{\left\{\left|\nabla u_0\right|>0\right\}}.\end{aligned}$$ Hence, $$\begin{aligned} W(0+, u) & =\int_{B_1}\left(-\Delta u_0+2\right) u_0 d x \\ & =\int_{B_1 \cap\left\{\left|\nabla u_0\right|>0\right\}}\left(-\Delta u_0+2\right) u_0 d x+\int_{B_1 \cap\left\{\left|\nabla u_0\right|=0\right\}}\left(-\Delta u_0+2\right) u_0 d x \\ & =\int_{B_1 \cap\left\{\left|\nabla u_0\right|>0\right\}} u_0 d x+\int_{B_1 \cap\left\{\left|\nabla u_0\right|=0\right\}} 2 u_0 d x \\ & =\int_{B_1} u_0 d x+\int_{B_1 \cap\left\{\left|\nabla
u_0\right|=0\right\}} u_0 d x \\ & =\int_{B_1} u_0 d x,\end{aligned}$$ where the last equality follows from the fact that the set $B_1 \cap\{|\nabla u_0|=0\}$ has measure zero. Now let $p=\frac{1}{2} x \cdot A x \in \mathcal{Q}$, then $$\begin{aligned} \label{211} W(r, p)=W(1, p)\end{aligned}$$ for any $r \in (0,1)$, according to the scaling property. Next we compute $$\begin{aligned} W(1, p) & =\int_{B_1}\left(|\nabla p|^2+2 p\right) d x-2 \int_{\partial B_1} p^2 d \mathcal{H}^{n-1} \\ & =\left(\int_{\partial B_1} p \frac{\partial p}{\partial \nu} d \mathcal{H}^{n-1}-\int_{B_1} p \Delta p d x\right)+2 \int_{B_1} p d x-2 \int_{\partial B_1} p^2 d \mathcal{H}^{n-1} \\ & =\int_{B_1} p d x+\int_{\partial B_1} p (x \cdot \nabla p-2 p) d \mathcal{H}^{n-1} \\ & =\int_{B_1} p d x .\end{aligned}$$ Finally, a direct computation shows that there exists a dimensional constant $\alpha_n>0$ such that $$\begin{aligned} \int_{B_1} p d x=\int_{B_1} u_0 d x=\alpha_n.\end{aligned}$$ Thus, we conclude that [\[27\]](#27){reference-type="eqref" reference="27"} holds. ◻ **Lemma 3** (Monneau's monotonicity formula). *Let $u$ be any solution to the problem [\[11\]](#11){reference-type="eqref" reference="11"}, and assume that $0 \in \Sigma$ satisfies $u \leq u(0)$ on $\{|\nabla u|=0\}$. Then for any $q \in \mathcal{Q}^{+}$, the functional $$\begin{aligned} \label{213} r \mapsto M(r, u, q):=\frac{1}{r^{n+3}} \int_{\partial B_r}(u-u(0)-q)^2 d \mathcal{H}^{n-1}\end{aligned}$$ is monotone nondecreasing for $r \in(0,1)$.* **Remark 4**. *The functional $M$ has the following nice rescaling property $$\begin{aligned} \label{214} M(r, u, q)=M\left(1, u_r, q\right) \quad \text{for all} \quad q \in \mathcal{Q}^{+}.\end{aligned}$$* **Remark 5**.
*The polynomial $q$ in Lemma [Lemma 3](#Lem23){reference-type="ref" reference="Lem23"} need not be a blowup limit for the problem [\[11\]](#11){reference-type="eqref" reference="11"}.* *Proof of Lemma [Lemma 3](#Lem23){reference-type="ref" reference="Lem23"}.* Let $w=u-u(0)-q$, then $$\begin{aligned} \frac{d}{d r} M(r, u, q) & =\frac{d}{d r}\left(\frac{1}{r^{n+3}} \int_{\partial B_r} w^2(x) d \mathcal{H}^{n-1}\right) \notag\\ & =\frac{d}{d r} \int_{\partial B_1} \frac{w^2(r y)}{r^4} d \mathcal{H}^{n-1} \notag\\ & =\int_{\partial B_1} \frac{2 w(r y)(r y \cdot \nabla w(r y)-2 w(r y))}{r^5} d \mathcal{H}^{n-1} \notag\\ & =\frac{2}{r^{n+4}} \int_{\partial B_r} w(x \cdot \nabla w-2 w) d \mathcal{H}^{n-1} . \label{215}\end{aligned}$$ By Lemma [Lemma 2](#Lem22){reference-type="ref" reference="Lem22"}, we have $$\begin{aligned} W(0+, u)=W(r, q)=\alpha_n .\end{aligned}$$ Then $$\begin{aligned} W(r, u)-W(0+, u) = & W(r, u)-W(r, q) \notag\\ = & \frac{1}{r^{n+2}} \int_{B_r}\left(|\nabla u|^2-|\nabla q|^2+2(u-u(0)-q)\right) d x \notag\\ & -\frac{2}{r^{n+3}} \int_{\partial B_r}\left((u-u(0))^2-q^2\right) d \mathcal{H}^{n-1} \notag\\ = & \frac{1}{r^{n+2}} \int_{B_r}\left(|\nabla w|^2+2 \nabla w \cdot \nabla q+2 w\right) d x \notag\\ & -\frac{2}{r^{n+3}} \int_{\partial B_r} w(w+2 q) d \mathcal{H}^{n-1}\notag\\ =&\frac{1}{r^{n+2}} \int_{B_r}|\nabla w|^2 d x-\frac{2}{r^{n+3}} \int_{\partial B_r} w^2 d \mathcal{H}^{n-1}\notag\\ &+\frac{2}{r^{n+3}} \int_{\partial B_r} w(x \cdot \nabla q-2 q) d \mathcal{H}^{n-1} \notag\\ =&\frac{1}{r^{n+2}} \int_{B_r}|\nabla w|^2 d x-\frac{2}{r^{n+3}} \int_{\partial B_r} w^2 d \mathcal{H}^{n-1} \notag\\ =&\frac{1}{r^{n+2}} \int_{B_r}(-w \Delta w) d x+\frac{1}{r^{n+3}} \int_{\partial B_r} w(x \cdot \nabla w-2 w) d \mathcal{H}^{n-1} . \label{216}\end{aligned}$$ On the other hand, we have $$\begin{aligned} w \Delta w & =(u-u(0)-q)(\Delta u-1) \\ &=\left\{\begin{array}{lr} 0 & \text { on }\{|\nabla u|>0\}, \\ q-(u-u(0)) & \text { on }\{|\nabla u|=0\} .
\end{array}\right.\end{aligned}$$ Since $\Delta u=0$ and $u \leq u(0)$ on $\{|\nabla u|=0\}$, and $q \geq 0$ because $q \in \mathcal{Q}^{+}$, we obtain $$\begin{aligned} \label{217} w \Delta w \geq 0 \quad \text { in } ~B_1.\end{aligned}$$ Combining [\[216\]](#216){reference-type="eqref" reference="216"} and [\[217\]](#217){reference-type="eqref" reference="217"}, we arrive at $$\begin{aligned} \label{218} W(r, u)-W(0+, u) \leq \frac{1}{r^{n+3}} \int_{\partial B_r} w (x \cdot \nabla w-2 w) d \mathcal{H}^{n-1}.\end{aligned}$$ Thus, it follows from the monotonicity of Weiss's energy functional together with [\[215\]](#215){reference-type="eqref" reference="215"} and [\[218\]](#218){reference-type="eqref" reference="218"} that $$\begin{aligned} \frac{d}{d r} M(r, u, q) \geq \frac{2}{r}(W(r, u)-W(0+, u)) \geq 0 .\end{aligned}$$ ◻ # Proof of the main result {#section3} With the previous preparation, we can now prove the main result of this paper. *Proof of Theorem [Theorem 1](#Thm11){reference-type="ref" reference="Thm11"}.* Let us first prove the uniqueness of the blowup. Assume that $$\begin{aligned} \label{31} u_{r_j} \rightarrow q, ~u_{\tilde{r}_j} \rightarrow \tilde{q} \quad \text { in } C_{\rm{loc}}^{1, \alpha}\left(\mathbb{R}^n\right),\end{aligned}$$ for two sequences $r_j \rightarrow 0, \tilde{r}_j \rightarrow 0$, where $q=\frac{1}{2} x \cdot A x \in \mathcal{Q}$ and $\tilde{q}=\frac{1}{2} x \cdot \tilde{A} x \in \mathcal{Q}$. Note that $A$ and $\tilde{A}$ are two symmetric matrices with $\operatorname{Tr}(A)=\operatorname{Tr}(\tilde{A})=1$. We just need to prove that $A=\tilde{A}$.
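A small numerical sanity check of the dimensional constant $\alpha_n$ from Lemma 2 — the value $\int_{B_1} p\,dx$ is the same for every $p=\frac{1}{2}x \cdot Ax$ with $\operatorname{Tr}(A)=1$ — can be done by Monte Carlo integration. The sketch below works in dimension $n=3$; the function and variable names are ours and purely illustrative:

```python
import math
import random

def ball_integral_quadratic(A, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the integral of p(x) = (1/2) x.Ax over the
    unit ball B_1 in R^3, sampling uniformly from the cube [-1, 1]^3 and
    keeping only the points that land in the ball."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        if x[0] ** 2 + x[1] ** 2 + x[2] ** 2 <= 1.0:
            total += 0.5 * sum(A[i][j] * x[i] * x[j]
                               for i in range(3) for j in range(3))
    return total / n_samples * 8.0  # 8 = volume of the sampling cube

# two different symmetric matrices, both with trace 1
A1 = [[1 / 3, 0, 0], [0, 1 / 3, 0], [0, 0, 1 / 3]]
A2 = [[1.0, 0, 0], [0, 0.0, 0], [0, 0, 0.0]]

v1 = ball_integral_quadratic(A1)
v2 = ball_integral_quadratic(A2, seed=1)
exact = 2 * math.pi / 15  # (1/2) * Tr(A) * |B_1| / (n + 2) for n = 3
```

Both estimates should agree, up to Monte Carlo error, with the closed-form value $\frac{1}{2}\operatorname{Tr}(A)\,|B_1|/(n+2)$, reflecting that $\alpha_n$ depends only on $\operatorname{Tr}(A)$ and the dimension.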
Given any $\bar{q}=\frac{1}{2} x \cdot B x \in \mathcal{Q}^{+}$, Lemma [Lemma 3](#Lem23){reference-type="ref" reference="Lem23"} gives that $$\begin{aligned} \label{32} M\left(1, u_r, \bar{q}\right)=\int_{\partial B_1}\left(u_r-\bar{q}\right)^2 d \mathcal{H}^{n-1} \text { is monotone nondecreasing for } r \in(0,1) .\end{aligned}$$ In particular, the monotone function $r \mapsto M\left(1, u_r, \bar{q}\right)$ has a limit as $r \rightarrow 0+$, which can be computed along either of the two sequences in [\[31\]](#31){reference-type="eqref" reference="31"}. Hence $$\begin{aligned} \label{33} \int_{\partial B_1}\left(\frac{1}{2} x \cdot(A-B) x\right)^2 d \mathcal{H}^{n-1}=\int_{\partial B_1}\left(\frac{1}{2} x \cdot(\tilde{A}-B) x\right)^2 d \mathcal{H}^{n-1} .\end{aligned}$$ Since $A-\tilde{A}$ is symmetric, by rotating the coordinates, we may assume that $A-\tilde{A}$ is diagonalized with the eigenvalues $\lambda_1, \cdots, \lambda_n$. Now, we choose $B=B^t=\left(b_{i j}^t\right)_{i, j=1}^n$, where $b_{11}^t=\frac{1}{2}(1-t), b_{22}^t=$ $\frac{1}{2}(1+t)$, $t \in(-1,1)$, and $b_{i j}^t=0$ for all other $i, j$. It is easy to verify that $B^t \geq 0$ and $\operatorname{Tr}\left(B^t\right)=1$, and hence $\frac{1}{2} x \cdot B^t x \in \mathcal{Q}^{+}$.
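That the family $B^t$ stays in $\mathcal{Q}^{+}$, and that the quadratic form $x \cdot (A+\tilde{A}-2B^t)x$ is affine in $t$ with $t$-derivative $x_1^2-x_2^2$, can be verified mechanically. The short check below uses arbitrary illustrative values (all names are ours, not the paper's):

```python
import random

def B_t(t, n):
    """The diagonal matrix with entries (1-t)/2, (1+t)/2, 0, ..., 0."""
    B = [[0.0] * n for _ in range(n)]
    B[0][0] = 0.5 * (1 - t)
    B[1][1] = 0.5 * (1 + t)
    return B

def quad(M, x):
    """The quadratic form x.Mx."""
    m = len(x)
    return sum(M[i][j] * x[i] * x[j] for i in range(m) for j in range(m))

n = 4
checks = []
for t in [-0.99, -0.5, 0.0, 0.5, 0.99]:
    B = B_t(t, n)
    trace = sum(B[i][i] for i in range(n))
    # B^t is diagonal, so its eigenvalues are just the diagonal entries
    psd = all(B[i][i] >= 0 for i in range(n))
    checks.append(abs(trace - 1) < 1e-12 and psd)

# only the B^t part of x.(A + Atilde - 2 B^t)x depends on t, and it is
# affine in t, so a central difference recovers the derivative exactly
rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(n)]
h = 1e-3
g = lambda t: -2 * quad(B_t(t, n), x)
deriv = (g(h) - g(-h)) / (2 * h)  # should equal x_1^2 - x_2^2
```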
Now, recalling the identity [\[33\]](#33){reference-type="eqref" reference="33"} gives that $$\begin{aligned} \label{34} f(t):=\int_{\partial B_1}(x \cdot(A-\tilde{A}) x)\left(x \cdot\left(A+\tilde{A}-2 B^t\right) x\right) d \mathcal{H}^{n-1}=0 .\end{aligned}$$ Note that $$\begin{aligned} \label{35} x \cdot(A-\tilde{A}) x=\sum_{i=1}^n \lambda_i x_i^2\end{aligned}$$ and $$\begin{aligned} \label{38} \operatorname{Tr}(A-\tilde{A})=\sum_{i=1}^n \lambda_i=0.\end{aligned}$$ Let $A=(a_{ij})$ and $\tilde{A}=(\tilde{a}_{ij})$; it can be calculated directly that $$\begin{aligned} x \cdot\left(A+\tilde{A}-2 B^t\right) x=\left(a_{11}+\tilde{a}_{11}-(1-t)\right) x_1^2+\left(a_{22}+\tilde{a}_{22}-(1+t)\right)x_2^2+R,\end{aligned}$$ where $R$ does not depend on $t$, and then $$\begin{aligned} \label{36} \frac{d}{dt}\left(x \cdot\left(A+\tilde{A}-2 B^t\right) x\right)=x_1^2-x_2^2 .\end{aligned}$$ It follows from [\[34\]](#34){reference-type="eqref" reference="34"}, [\[35\]](#35){reference-type="eqref" reference="35"} and [\[36\]](#36){reference-type="eqref" reference="36"} that $$\begin{aligned} 0=\frac{d f}{d t} & =\int_{\partial B_1}\left(\sum_{i=1}^n \lambda_i x_i^2\right)\left(x_1^2-x_2^2\right) d \mathcal{H}^{n-1} \\ & =\int_{\partial B_1}\left(\lambda_1 x_1^2+\lambda_2 x_2^2\right)\left(x_1^2-x_2^2\right) d \mathcal{H}^{n-1} \\ & \quad+\int_{\partial B_1}\left(\sum_{i=3}^n \lambda_i x_i^2\right)\left(x_1^2-x_2^2\right) d \mathcal{H}^{n-1}\\ & =\int_{\partial B_1}\left(\lambda_1 x_1^4-\lambda_2 x_2^4-\left(\lambda_1-\lambda_2\right) x_1^2 x_2^2\right) d \mathcal{H}^{n-1} \\ & \quad +\int_{\partial B_1}\left(\sum_{i=3}^n \lambda_i x_i^2\right)\left(x_1^2-x_2^2\right) d \mathcal{H}^{n-1}.\end{aligned}$$ Due to the symmetry of $\partial B_1$ we have that $$\int_{\partial B_1} x_1^4 d \mathcal{H}^{n-1}=\int_{\partial B_1} x_2^4 d \mathcal{H}^{n-1},$$ and that $$\begin{aligned} \int_{\partial B_1} x_1^2 x_i^2 d \mathcal{H}^{n-1}=\int_{\partial B_1} x_2^2 x_i^2 d
\mathcal{H}^{n-1} \quad \text { for all } ~~i=3, \cdots, n .\end{aligned}$$ Hence from the above calculation we have $$\begin{aligned} \label{37} \frac{d f}{d t}=\left(\lambda_1-\lambda_2\right)\left(\int_{\partial B_1} x_1^4 d \mathcal{H}^{n-1}-\int_{\partial B_1} x_1^2 x_2^2 d \mathcal{H}^{n-1}\right)=0.\end{aligned}$$ We observe that $$\begin{aligned} \int_{\partial B_1} x_1^4 d \mathcal{H}^{n-1} =\frac{1}{4} \int_{\partial B_1} \frac{\partial\left(x_1^4\right)}{\partial \nu} d \mathcal{H}^{n-1}=\frac{1}{4} \int_{ B_1} \Delta\left(x_1^4\right) d x =3 \int_{B_1} x_1^2 d x\end{aligned}$$ and that $$\begin{aligned} \int_{\partial B_1} x_1^2 x_2^2 d \mathcal{H}^{n-1} & =\frac{1}{4} \int_{\partial B_1} \frac{\partial\left(x_1^2 x_2^2\right)}{\partial \nu} d \mathcal{H}^{n-1} =\frac{1}{4} \int_{B_1} \Delta\left(x_1^2 x_2^2\right) d x \\ & =\frac{1}{2} \int_{B_1}\left(x_1^2+x_2^2\right) d x =\int_{B_1} x_1^2 d x.\end{aligned}$$ Hence, $$\begin{aligned} \int_{\partial B_1} x_1^4 d \mathcal{H}^{n-1}-\int_{\partial B_1} x_1^2 x_2^2 d \mathcal{H}^{n-1}=2 \int_{B_1} x_1^2 d x>0,\end{aligned}$$ which implies that $\lambda_1=\lambda_2$ from [\[37\]](#37){reference-type="eqref" reference="37"}. Similarly, for any $1 \leq i_0<j_0 \leq n$, we can choose $B^t=$ $\left(b_{i j}^t\right)_{i, j=1}^n$, where $b_{i_0 i_0}^t=\frac{1}{2}(1-t), b_{j_0 j_0}^t=\frac{1}{2}(1+t)$, $t \in(-1,1)$, and $b_{i j}^t=0$ for all other $i, j$. From an argument similar to the one above we get $\lambda_{i_0}=\lambda_{j_0}$. Hence $\lambda_1=\lambda_2=\cdots=\lambda_n$. By the fact [\[38\]](#38){reference-type="eqref" reference="38"}, we conclude that $\lambda_i=0$ for all $i=1, \cdots, n$. Therefore $A=\tilde{A}$.
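The strict positivity used in the last step, $\int_{\partial B_1} x_1^4\,d\mathcal{H}^{n-1}-\int_{\partial B_1} x_1^2 x_2^2\,d\mathcal{H}^{n-1}=2\int_{B_1} x_1^2\,dx>0$, can also be confirmed by direct Monte Carlo integration. The sketch below works in dimension $n=3$ (helper names are ours, purely illustrative):

```python
import math
import random

def sphere_moments(n_samples=200_000, seed=0):
    """Estimate E[x1^4] and E[x1^2 x2^2] for the uniform probability
    measure on S^2, sampling directions as normalized Gaussian vectors."""
    rng = random.Random(seed)
    m4, m22 = 0.0, 0.0
    for _ in range(n_samples):
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        r = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        x1, x2 = v[0] / r, v[1] / r
        m4 += x1 ** 4
        m22 += (x1 * x2) ** 2
    return m4 / n_samples, m22 / n_samples

def ball_second_moment(n_samples=200_000, seed=1):
    """Estimate the integral of x1^2 over the unit ball in R^3 by
    rejection sampling from the cube [-1, 1]^3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        if x[0] ** 2 + x[1] ** 2 + x[2] ** 2 <= 1.0:
            total += x[0] ** 2
    return total / n_samples * 8.0  # 8 = volume of the sampling cube

sigma = 4.0 * math.pi  # surface area of S^2
m4, m22 = sphere_moments()
lhs = sigma * (m4 - m22)          # estimate of the sphere-moment difference
rhs = 2.0 * ball_second_moment()  # estimate of 2 * integral of x1^2 over B_1
```

Up to Monte Carlo error, `lhs` and `rhs` agree and are strictly positive, in accordance with the identity displayed above.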
Finally, we show that $$\begin{aligned} u(x)-u(0)=q_0(x)+o\left(|x|^2\right) .\end{aligned}$$ This is equivalent to $$\begin{aligned} r^{-2}\left\|u-u(0)-q_0\right\|_{L^{\infty}\left(B_r\right)} \rightarrow 0 \quad \text { as } r \rightarrow 0 .\end{aligned}$$ Indeed, assume by contradiction that there is a subsequence $r_k \rightarrow 0$ along which $$\begin{aligned} r_k^{-2}\|u-u(0)-q_0\|_{L^{\infty}\left(B_{r_k}\right)} \geq c\end{aligned}$$ for some constant $c>0$. Then, there is a subsequence $r_{k_j}$ such that $u_{r_{k_j}} \rightarrow u_0$ in $C_{\mathrm{loc}}^{1,\alpha}\left(\mathbb{R}^n\right)$, for a certain blowup $u_0$ satisfying $$\begin{aligned} \left\|u_0-q_0\right\|_{L^{\infty}\left(B_1\right)} \geq c.\end{aligned}$$ On the other hand, the uniqueness of the blowup implies that $u_0=q_0$, and hence we reach a contradiction. This completes the proof of Theorem [Theorem 1](#Thm11){reference-type="ref" reference="Thm11"}. ◻ H. Berestycki, A. Bonnet, and S. J. Chapman, A semi-elliptic system arising in the theory of type-II superconductivity, *Comm. Appl. Nonlinear Anal*. 1 (1994), no. 3, 1-21. A. Bonnet and R. Monneau, Distribution of vortices in a type-II superconductor as a free boundary problem: existence and regularity via Nash-Moser theory, *Interfaces Free Bound*. 2 (2000), no. 2, 181-200. L. A. Caffarelli, The obstacle problem revisited, *J. Fourier Anal. Appl*. 4 (1998), no. 4-5, 383-402. L. A. Caffarelli and J. Salazar, Solutions of fully nonlinear elliptic equations with patches of zero gradient: existence, regularity and convexity of level curves, *Trans. Amer. Math. Soc.* 354 (2002), no. 8, 3095-3115. L. A. Caffarelli and H. Shahgholian, The structure of the singular set of a free boundary in potential theory, *Izv. Nats. Akad. Nauk Armenii Mat.* 39 (2004), no. 2, 43-58. L. A. Caffarelli, J. Salazar, and H. Shahgholian, Free-boundary regularity for a problem arising in superconductivity, *Arch. Ration. Mech. Anal.* 171 (2004), no.
1, 115-128. S. J. Chapman, A mean-field model of superconducting vortices in three dimensions, *SIAM J. Appl. Math*. 55 (1995), no. 5, 1259-1274. S. J. Chapman, J. Rubinstein, and M. Schatzman, A mean-field model of superconducting vortices, *European J. Appl. Math.* 7 (1996), no. 2, 97-111. S. Chen, Y. Feng, and Y. Li, A note on the singular set of the no-sign obstacle problem, arXiv:2204.11426v2, 2022. M. Elliott, Schätzle, and E. E. Stoth, Viscosity solutions of a degenerate parabolic-elliptic system arising in the mean-field theory of superconductivity, *Arch. Ration. Mech. Anal.* 145 (1998), no. 2, 99-127. X. Fernandez-Real and X. Ros-Oton, Regularity Theory for Elliptic PDE, *Zurich Lectures in Advanced Mathematics. EMS Press, Berlin,* 2022. A. Figalli, Regularity of interfaces in phase transitions via obstacle problems, *Proceedings of the International Congress of Mathematicians--Rio de Janeiro 2018. Vol. I. Plenary lectures,* 225-247, *World Sci. Publ., Hackensack, NJ,* 2018. A. Figalli, Free boundary regularity in obstacle problems, *Journées équations aux dérivées partielles* (2018), no. 2, 1-26. R. Monneau, On the number of singularities for the obstacle problem in two dimensions, *J. Geom. Anal.* 13 (2003), no. 2, 359-389. R. Monneau, On the regularity of a free boundary for a nonlinear obstacle problem arising in superconductor modelling, *Ann. Fac. Sci. Toulouse Math.* 13 (2004), no. 2, 289-311. A. Petrosyan, H. Shahgholian, and N. Uraltseva, Regularity of free boundaries in obstacle-type problems, *Graduate Studies in Mathematics, Vol. 136. American Mathematical Society, Providence, RI,* 2012. J.-F. Rodrigues, Obstacle problems in mathematical physics, *North-Holland Mathematics Studies, Vol. 134, North-Holland Publishing Co., Amsterdam*, 1987. E. Sandier and S. Serfaty, A rigorous derivation of a free-boundary problem arising in superconductivity, *Ann. Sci. Ecole Norm. Sup.* 33 (2000), no. 4, 561-592.
[^1]: \*Corresponding author: tangxu8988\@163.com [^2]: This work is supported by the National Natural Science Foundation of China grant 11971331, 12125102, 12301258, and Sichuan Youth Science and Technology Foundation 2021JDTD0024.
{ "id": "2309.07642", "title": "Uniqueness of blowup at singular points for superconductivity problem", "authors": "Lili Du, Xu Tang, Cong Wang", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/" }
--- abstract: | We show that for each $k$, any critical point for the $C^2$-Morse function $\mathop{\mathrm{sys_T}}$ or the systole function that is topologically Morse on $\mathcal M_{g,n}$ has index greater than $k$ when $g$ or $n$ is sufficiently large. In other words, there are no critical points of index $\le k$ in those moduli spaces, and all critical points for $\mathop{\mathrm{sys_T}}$ of index $\le k$ live in the Deligne-Mumford boundary. In the Morse handle decomposition given by $\mathop{\mathrm{sys_T}}$, all $k'$-handles live in the boundary of such $\overline{\mathcal M}_{g,n}$ for $k'\le k$. author: - Changjie Chen bibliography: - main.bib title: "No low index critical points for $\mathop{\mathrm{sys}}$ and $\mathop{\mathrm{sys_T}}$ in large $\mathcal M_{g,n}$" --- # Introduction In [@chen2023c], the author introduces a family of $C^2$-Morse functions that are closely related to the systole function. The *systole* function assigns to a hyperbolic surface $X$ the length of its shortest geodesics, namely $$\mathop{\mathrm{sys}}(X)=\min_{\gamma \text{ s.c.g. on } X}l_\gamma(X),$$ and $$\mathop{\mathrm{sys_T}}(X):=-T\log\sum_{\gamma \text{ s.c.g. on } X} e^{-\frac1Tl_\gamma(X)},$$ where s.c.g. stands for simple closed geodesic. In the following section, we review the key properties of $\mathop{\mathrm{sys_T}}$, which constitute the main result of the previous paper. For Morse theory, one can see [@milnor2016morse].\ \ In $\mathcal M_{g,n}$, the critical points of $\mathop{\mathrm{sys_T}}$ are in natural bijection with those for $\mathop{\mathrm{sys}}$ by the critical point attracting property, which enables us to study one through the other. We prove a result on low index critical points in large $\mathcal M_{g,n}$. Let $\mathop{\mathrm{Crit}}(f,\le k)$ be the set of critical points of $f$ of index $\le k$, then **Main Theorem 1** (= **Theorem [Theorem 35](#main theorem down){reference-type="ref" reference="main theorem down"}**).
*For any $k$, there exist $g_0=g_0(k)$ and $n_0=n_0(k)$ such that $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}},\le k)\cap\mathcal M_{g,n}=\emptyset$$ and $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys}},\le k)\cap\mathcal M_{g,n}=\emptyset$$ for $g\ge g_0$ or $n\ge n_0$. As a result, $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}},\le k)\subset\partial\mathcal M_{g,n}.$$* Another way to state this result is that all critical points in $\mathcal M_{g,n}$ have index greater than $k$.\ \ One can construct a handle decomposition of the compactified moduli space $\overline{\mathcal M}_{g,n}$ based on the $C^2$-Morse function $\mathop{\mathrm{sys_T}}$. The main theorem implies that all $k$-handles live in the Deligne-Mumford boundary for $k$ small compared to $g$ or $n$.\ \ We prove the main theorem by establishing a more general statement on the rank of gradient vectors of geodesic length functions. The main theorem then follows as a corollary, via Akrout's rank theorem for $\mathop{\mathrm{sys}}$ and the author's rank theorem comparing $\mathop{\mathrm{sys_T}}$ to $\mathop{\mathrm{sys}}$.\ \ Let a $j$-*curve set* on a hyperbolic surface be a set of simple closed geodesics that pairwise intersect each other at most $j$ times, then **Theorem 1** (= **Theorem [Theorem 29](#induction){reference-type="ref" reference="induction"}**). *For any $j$, there exists a sequence $(r_i)$ such that for any $j$-curve set $S$ of $r_i$ curves on any hyperbolic surface $X$ with $g(X)$ or $n(X)$ large depending on $i$, we have $$\mathop{\mathrm{rank}}\{\nabla\gamma\}_{\gamma\in S}\ge i.$$* The proof is mainly about topological constructions and estimates.
Besides the notion of a $j$-curve set, we will introduce and study the *subsurface hull*, *filling sets*, the $j$-*capacity*, and the *essentialness of subsurfaces*.\ \ This paper is organized as follows. In Section [2](#Morse section){reference-type="ref" reference="Morse section"}, we review the Morse properties of the systole function and the $\mathop{\mathrm{sys_T}}$ functions, together with results on the index of a critical point. In Section [3](#basicdefinitions){reference-type="ref" reference="basicdefinitions"}, we study some basic concepts that will be used in later proofs, and in Section [4](#essentialsection){reference-type="ref" reference="essentialsection"} we show a rank result on curves filling non-essential subsurfaces, which will lead to the main theorem after a study of shortest geodesics in Section [5](#systole section){reference-type="ref" reference="systole section"}. As an application, we will classify all critical points of index 0, 1 and 2 in the last section. # Morse Properties of $\mathop{\mathrm{sys}}$ and $\mathop{\mathrm{sys_T}}$ {#Morse section} Here we review Akrout's eutacticity conditions and his theorem on the systole function, and the author's theorem on $\mathop{\mathrm{sys_T}}$ functions. Definitions of the two functions can be found at the very beginning of this paper. **Definition 2** (Eutacticity). A point $X\in\mathcal T$ is called *eutactic* (*semi-eutactic*) if the origin is contained in the interior (boundary) of the convex hull of $\{\nabla l_\gamma\}_{\gamma\in S(X)}$, the set of gradient vectors of the geodesic length functions associated to the shortest geodesics, in the tangent space $T_X\mathcal T$. **Definition 3** (Topological Morse function). Let $f:M^n\to\mathbb R$ be a continuous function. A point $x\in M$ is called ($C^0$-)*ordinary* if $f$ is a coordinate function under some $C^0$-chart near $x$; otherwise it is called ($C^0$-)*critical*.
A critical point $x$ is *nondegenerate* if there is a local $C^0$-chart $(x^i)$ such that $f-f(x)=(x^1)^2+\cdots+(x^r)^2-(x^{r+1})^2-\cdots-(x^n)^2$. In this case the *index* $\mathop{\mathrm{ind}}_f(x)$ of $f$ at $x$ is defined to be $n-r$. A continuous function is called *topologically Morse* if all critical points are nondegenerate. For more, see [@morse1959topologically]. **Theorem 4** ([@akrout2003singularites]). *The systole function is topologically Morse on $\mathcal M_{g,n}$. $X$ is a critical point if and only if $X$ is eutactic, and in that case the index is equal to $\mathop{\mathrm{rank}}\{\nabla l_\gamma\}_{\gamma\in S(X)}$.* In [@chen2023c], we prove **Theorem 5**. *As $T$ decreases to 0, $\mathop{\mathrm{sys_T}}$ decreases and converges to $\mathop{\mathrm{sys}}$. Moreover, for all sufficiently small $T$, $\mathop{\mathrm{sys_T}}$ has the following properties:\ (1) Every $\mathop{\mathrm{sys_T}}$ is a $C^2$-Morse function on the Deligne-Mumford compactification $\overline{\mathcal M}_{g,n}$ (with altered differential structure).\ (2) $\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}}:\mathcal M_{g,n}\to\mathbb R)$ with $\mathop{\mathrm{ind}}(\mathop{\mathrm{sys_T}})$ respects the stratification: More precisely, let $\mathcal S\subset\overline{\mathcal M}_{g,n}$ be a stratum that is isomorphic to $\mathcal M_{g',n'}$, then under the isomorphism, $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}}:\mathcal S\to\mathbb R) \text{ and } \mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}}:\mathcal M_{g',n'}\to\mathbb R)$$ are the same, counted with index.\ (3) There is a natural stratum-wise correspondence: $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}})\leftrightarrow\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys}}).$$ More precisely, let $\mathcal S\subset\overline{\mathcal M}_{g,n}$ be a stratum that is isomorphic to $\mathcal M_{g',n'}$, then there is a bijection $$\begin{aligned} \mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}}|_{\mathcal 
S})&\leftrightarrow\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys}}|_{\mathcal M_{g',n'}})\\ p_T&\leftrightarrow p \end{aligned}$$ with the property that $$d_{\text{WP}}(p,p_T)<CT,$$ which implies $p_T\to p$ and consequently $\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}}|_{\mathcal S})\to\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys}}|_{\mathcal M_{g',n'}})$, and $$\mathop{\mathrm{ind}}_{\mathop{\mathrm{sys_T}}}(p_T)=\mathop{\mathrm{ind}}_{\mathop{\mathrm{sys}}}(p).$$ (4) The Weil-Petersson gradient flow of $\mathop{\mathrm{sys_T}}$ on $\overline{\mathcal M}_{g,n}$ is well defined.* *Remark 6*. The rank statement in both Akrout's theorem and the author's theorem is what we will use to calculate the index at a critical point. Since the $\mathop{\mathrm{sys_T}}$ functions are Morse on the compactified moduli space, and not just on $\mathcal M_{g,n}$ itself, (2) and (3) in the theorem yield a description of the critical points in the Deligne-Mumford boundary $\partial\mathcal M_{g,n}$. For a stratum $\mathcal S\subset\partial\mathcal M_{g,n}$, write $\mathcal S=\oplus\mathcal S_i$ as a decomposition by connected components of the base surface away from the nodes, with each $\mathcal S_i$ isomorphic to some moduli space $\mathcal M_i$. Any critical point $X\in\mathcal S$ is a nodal surface that has the decomposition $X=\cup X_i$ plus the nodes, such that each $X_i$ is a critical point in $\mathcal M_i$. This way we can decompose a critical point in the boundary as the union of smaller surfaces that are critical in their respective moduli spaces. Conversely, we can construct a critical point by connecting critical points in smaller $\mathcal M_{g,n}$'s by nodes. Because of that, the study of critical points on $\overline{\mathcal M}_{g,n}$ comes down to the study on each smaller $\mathcal M_{g,n}$. # Subsurface Hull, Filling Set and $j$-capacity {#basicdefinitions} **Convention 1**. *Many of the notions on hyperbolic surfaces to be defined below are invariant under diffeomorphisms or hyperbolic isometries.
If $P$ is such a notion and $X$ is a $[g,n]$-surface, we make the convention $P(g,n)=P(X)$ by abuse of notation.* **Definition 7**. A $(g,n)$-surface is a complete hyperbolic surface of genus $g$ with $n$ punctures. A $(g,n,b)$-surface is a hyperbolic surface of genus $g$ with $n$ punctures and $b$ geodesic boundary components. A $[g,n]$-surface is a hyperbolic surface of genus $g$ with the number of punctures and geodesic boundary components totalling $n$. For convenience, we use $[0,2]$-surface to refer to a circle or an annulus. **Definition 8**. A *subsurface* of a hyperbolic surface $X$ is some $[g,n]$-surface whose interior is isometrically embedded in $X$. The *subsurface hull* $\mathop{\mathrm{SSH}}(S)$ of a set of simple closed geodesics $S=\{\gamma_1,\cdots,\gamma_r\}$ on $X$ is the minimal subsurface that contains $S$. *Remark 9*. The definition is well posed in view of the uniqueness of such a minimal subsurface. If two subsurfaces $X_1$ and $X_2$ intersect, there is the unique subsurface $X_0$ that 'supports' $X_1\cap X_2$ by pulling straight the piecewise geodesic boundaries. Note that if a simple closed geodesic $\gamma\subset X_1\cap X_2$, then $\gamma\subset X_0$. **Definition 10**. (1) A $j$*-curve set* on a hyperbolic surface is a set of simple closed geodesics that pairwise intersect at most $j$ times.\ (2) A set of simple closed geodesics *fills* a surface if every complementary region is a polygon or once-punctured polygon. When the base surface has geodesic boundary, a complementary region is also allowed to be a once-holed polygon where the hole is a boundary component of the surface.\ (3) A set of simple closed geodesics is *minimal filling* if no proper subset is filling. *Remark 11*. A set $S$ of curves is minimal filling if and only if for any $\gamma\in S$, $S\setminus\{\gamma\}$ is not filling. **Lemma 12**.
*A filling set of simple closed geodesics on a (connected) hyperbolic surface is connected as a graph.* *Proof.* Note that the boundary of any complementary region is a path in the graph of the simple closed geodesics. If the graph were not connected, then the surface reassembled from the complementary regions along the graph would not be connected. ◻ **Definition 13**. (1) For a subsurface $Y$ of a hyperbolic surface, let $\#^p(Y)$ be the number of pants in a pants decomposition of $Y$.\ (2) Let $M(g,n)$ be the maximum cardinality of a minimal filling set, and $m^j(g,n)$ the minimum cardinality of a filling $j$-curve set, on a $[g,n]$-surface, when $[g,n]\neq[0,3]$. *Remark 14*. $\#^p(Y)=-e(Y)$, where $e$ is the Euler characteristic. **Lemma 15**. *We have the following estimate on the size of the subsurface hull: $$\#^p(\mathop{\mathrm{SSH}}(\{\gamma_1,\cdots,\gamma_r\}))\le j\binom{r}{2}$$ for a $j$-curve set $\{\gamma_1,\cdots,\gamma_r\}$.* *Proof.* We calculate the Euler characteristic of the subsurface hull via the graph formed by the curves. Every vertex of this graph is a transverse intersection of two curves and hence has valence four, so $E=2V$, and each pair of curves contributes at most $j$ vertices. Therefore $$\begin{aligned} \#^p=-e=&-V+E-F\\ =&V-F\le V\le j\binom{r}{2}. \end{aligned}$$ ◻ **Lemma 16**. *We have the following estimate: $$m^j(g,n)>\sqrt{\frac{4g-4+2n}{j}}.$$* *Proof.* Let $S=\{\gamma_1,\cdots,\gamma_r\}$ be a $j$-curve set that fills a $[g,n]$-surface $X$, then $\mathop{\mathrm{SSH}}(S)=X$. By the remark and lemma above we have $$2g-2+n\le j\binom{r}{2},$$ which implies $$r>\sqrt{\frac{4g-4+2n}{j}}.$$ ◻ *Remark 17*. For better estimates, one can see [@anderson2011small] and [@fanoni2015filling]. **Lemma 18**. *Suppose $[g,n](X)\neq[0,3]$, then there exists a proper $[g',n']$-subsurface of $X$, unless $[g,n](X)=[0,2]$, such that $$M(g,n)\le 1+M(g',n').$$* *Proof.* Let $S=\{\gamma_1,\cdots,\gamma_r\}$ be a minimal filling set such that $r=M(g,n)$, and set $Y_1=\mathop{\mathrm{SSH}}(S\setminus\{\gamma_r\})$, then $Y_1\subsetneqq X$ by minimality.
To show the minimality of $S\setminus\{\gamma_r\}$, we remove a curve, say $\gamma_{r-1}$, and set $Y_2=\mathop{\mathrm{SSH}}(S\setminus\{\gamma_{r-1},\gamma_r\})$. Note that $Y_2\subsetneqq Y_1$, otherwise we have $$\begin{aligned} &\mathop{\mathrm{SSH}}(S\setminus\{\gamma_{r-1}\})=\mathop{\mathrm{SSH}}(S\setminus\{\gamma_{r-1},\gamma_r\},\gamma_r)\\ =&\mathop{\mathrm{SSH}}(Y_2,\gamma_r)=\mathop{\mathrm{SSH}}(Y_1,\gamma_r)=X, \end{aligned}$$ which contradicts the minimality of $S$. Let $[g',n']$ be the type of $Y_1$; then the minimality of $S\setminus\{\gamma_r\}$ implies that $$\begin{aligned} M(g,n)=\#(S)=1+\#(S\setminus\{\gamma_r\})\le 1+M(g',n'). \end{aligned}$$ ◻ *Remark 19*. This process will not yield any $[0,3]$-subsurfaces. **Theorem 20**. *We have the following estimates: $$M(0,2)=1$$ and $$M(g,n)\le 3g+n.$$* *Proof.* For a $[g,n]$-surface $X$, there are two types of maximal proper subsurfaces: $[g-1,n+2]$ and $[g,n-1]$, as long as the numbers are nonnegative. They are obtained by cutting $X$ along a non-separating or separating curve. Note that any proper subsurface can be obtained through a chain of maximal proper subsurfaces. Let $f(g,n)=3g+n$; then $f(Y)<f(X)$ for any proper subsurface $Y\subset X$. We use Lemma [Lemma 18](#iteration){reference-type="ref" reference="iteration"} to get a sequence of subsurfaces $Y_k\subsetneqq Y_{k-1}\subsetneqq\cdots\subsetneqq Y_1\subsetneqq X$, where $Y_k$ is a $[0,2]$-subsurface. Therefore, $$3g+n=f(X)\ge f(Y_1)+1\ge \cdots \ge f(Y_k)+k= 2+k$$ and $$M(X)\le 1+M(Y_1)\le\cdots\le k+M(0,2)=k+1\le 3g+n-1.$$ Note that $[0,3]$ is skipped in the descending process, so a modification yields the final estimate $$M(g,n)\le 3g+n.$$ ◻ **Definition 21**. The *j-capacity* $\mathop{\mathrm{Cap^j}}(Y)$ of a subsurface $Y$ is the maximum cardinality of a $j$-curve set on $Y$. **Theorem 22**.
*We have the following estimate on $j$-capacity: $$\mathop{\mathrm{Cap^j}}(g,n)\le M(g,n)+(2jM(g,n)(M(g,n)-1))^{jM(g,n)}.$$* *Proof.* Note that, in a filling $j$-curve set, a filling subset of smallest cardinality is always a minimal filling subset. Let $S$ be a $j$-curve set on a $[g,n]$-surface $X$; then there exists a minimal filling subset $S_0\subset S$, and any $\gamma\in S\setminus S_0$ can be described as follows:\ \ List all the curves in $S_0$ that intersect $\gamma$ in the order of intersection along $\gamma$: $$\delta_1,\delta_2,\cdots,\delta_l,$$ where the $\delta_i$'s are not necessarily distinct but each appears at most $j$ times. Consequently, $l\le jM$, where $M=M(g,n)$. Let $\gamma\setminus\cup S_0=\cup\gamma_i$, where $\gamma_i$ is a segment of $\gamma$ that connects $\delta_i$ and $\delta_{i+1}$. The segment $\gamma_i$ lives in a convex polygon or once-punctured convex polygon that has segments of $\delta_i$ and $\delta_{i+1}$ cut by $S_0$ as two sides. Note that there are at most $M\cdot j(M-1)$ segments of the graph $S_0$. Given the initial and terminal point of $\gamma_i$, there are at most two topological possibilities for $\gamma_i$ as $\gamma$ is simple; therefore we get an upper bound on the number of topological possibilities for such $\gamma$: $(jM(M-1))^l\cdot 2^{l}$, and thus $$\mathop{\mathrm{Cap^j}}(g,n)\le M+(2jM(M-1))^{jM}.$$ ◻ # Non-essential subsurfaces {#essentialsection} On a hyperbolic surface $X$, passing from a subsurface $Y_1$ to another subsurface $Y_2$ in which $Y_1$ is properly contained increases the dimension of the corresponding tangent subspace of the Teichmüller space. This can be observed by taking enough geodesics and computing the rank of the gradient vectors of the associated geodesic length functions.
If we take a curve set $S_i$ on $Y_i$ (a special case is when $\mathop{\mathrm{SSH}}(S_i)=Y_i$), with $S_1\subset S_2$, we hope to find a way to determine when the rank gets strictly larger, i.e., when we have $$\mathop{\mathrm{rank}}\{\nabla l_\gamma\}_{\gamma\in S_1}<\mathop{\mathrm{rank}}\{\nabla l_\gamma\}_{\gamma\in S_2}.$$ **Definition 23**. (1) A subsurface is called *essential* if no complementary region contains a $[1,1]$- or $[0,4]$-subsurface; otherwise it is *non-essential*. See Figure [1](#fig:Nonessential subsurface){reference-type="ref" reference="fig:Nonessential subsurface"} for an example.\ (2) For a subsurface $Y\subset X$, the *essential closure* $\overline Y$ of $Y$ is the largest subsurface of $X$ in which $Y$ is essential, with $\partial\overline Y\subset\partial Y$. We also write $\overline{\mathop{\mathrm{SSH}}}(\cdot)=\overline{\mathop{\mathrm{SSH}}(\cdot)}$. **Lemma 24**. *Let $Y$ be a subsurface; then $\#^p(\overline Y)\le 2\#^p(Y)+2$.* *Proof.* Note that to get $\overline Y$, one attaches $[0,3]$-complements to $Y$ along its boundary components, and every attaching operation decreases the number of boundary components by 1 or 2 and increases $\#^p$ by 1. Therefore, there can be at most $n(Y)$ attaching operations, and thus $$\#^p(\overline Y)\le n(Y)+\#^p(Y)=2g(Y)+2n(Y)-2\le2\#^p(Y)+2.$$ ◻ We show there is a 'leap' of the rank of the gradient vectors of enough geodesic length functions when expanding a subsurface non-essentially. **Lemma 25**. *Let $S_1\subset S_2$ be two sets of curves on $X$ and $Y_i=\mathop{\mathrm{SSH}}(S_i)$, $i=1,2$.
Suppose\ (1) $Y_1\subsetneqq Y_2$,\ (2) $Y_1$ is not essential in $Y_2$.\ Then $$\mathop{\mathrm{rank}}\{\nabla l_\gamma\}_{\gamma\in S_1}<\mathop{\mathrm{rank}}\{\nabla l_\gamma\}_{\gamma\in S_2}.$$* If there are two curves $\alpha\in S_2\setminus S_1$ and $\delta\subset Y_2\setminus Y_1$ such that they intersect each other exactly once and non-orthogonally, then by Kerckhoff's geodesic length-twist formula that can be found in [@kerckhoff1983nielsen], $$\langle\nabla l_\alpha,\tau_\delta\rangle=\cos\theta(\alpha,\delta)\neq 0.$$ In plain words, $\nabla l_\alpha$ will create an extra dimension on top of the space spanned by $S_1$. However, that is not always the case for randomly picked $\alpha$ and $\delta$. To create such a pair with that nonzero Weil-Petersson pairing, we pick an auxiliary curve $\lambda$ and do Dehn twists on $\delta$ along $\lambda$ until we find an eligible curve. ![The right subsurface is non-essential](Nonessential_subsurface.pdf){#fig:Nonessential subsurface width="7cm"} For this purpose, we have the following lemma: **Lemma 26**. *Suppose $\alpha$, $\delta$ and $\lambda$ are three simple closed geodesics on a hyperbolic surface, as shown in Figure [1](#fig:Nonessential subsurface){reference-type="ref" reference="fig:Nonessential subsurface"}, satisfying:\ (1) $\delta$ and $\alpha$ intersect,\ (2) $\delta$ and $\lambda$ intersect.\ Let $\alpha'$ be the geodesic arc obtained from $\alpha$ by twisting the base surface along $\lambda$ by $t$, and $\theta$ be the angle of $\delta$ and $\alpha'$ at a given intersection, then $\theta$ is monotone along the earthquake path $\mathcal E_\lambda(t)$.* Note that $\alpha'=\alpha$ if $\alpha$ and $\lambda$ are disjoint. This can be seen as a corollary to Lemma 3.6 in [@kerckhoff1983nielsen] where Kerckhoff proved the Nielsen realization theorem. We restate that lemma as the following with our notations. 
Figure [2](#fig:Intersection along earthquake){reference-type="ref" reference="fig:Intersection along earthquake"} below is adapted from Kerckhoff's original picture, in which $\tilde\lambda$ is the preimage of $\lambda$, the $\tilde\delta_i$'s are segments of a lift of $\delta$ cut by $\tilde\lambda$, and $\tilde\alpha'$ is a lift of $\alpha'$. The earthquake is realized in the picture by shearing the components complementary to $\tilde\lambda$ along $\tilde\lambda$, where we fix the component containing $\tilde\delta_0$. Let $\bar\delta(t)$ be the corresponding lift of $\mathcal E_{\lambda}(t)({\delta})$, i.e., the geodesic with endpoints being $\lim_{n\to\pm\infty}{\tilde{\delta_n}}$. Then $\theta$ is an intersection angle of $\bar\delta$ and $\tilde\alpha'$. **Lemma 27** ([@kerckhoff1983nielsen]). *The endpoints of $\bar\delta(t)$ move strictly to the left when $t$ increases.* ![New endpoints are to the left of the old ones](Intersection_along_earthquake.pdf){#fig:Intersection along earthquake width="7cm"} *Proof of Lemma [Lemma 25](#essential){reference-type="ref" reference="essential"}.* Let $Z$ be a connected component of $Y_2\setminus Y_1$ that contains a $[1,1]$ or $[0,4]$-subsurface; then for any geodesic on $Z$, there exists a geodesic intersecting it. As $Y_2\supsetneqq Y_1$, pick $\alpha\in S_2$ crossing $Z$; then there exists $\delta$ on $Z$ intersecting $\alpha$. Pick $\lambda$ on $Z$ intersecting $\delta$. The conditions in Lemma [Lemma 26](#threecurves){reference-type="ref" reference="threecurves"} are satisfied. Let $\theta_i$ be the intersection angles measured from $\alpha$ to $\delta$; then the $\theta_i$'s have the same monotonicity along the earthquake path $\mathcal E_\lambda(t)$, and therefore $\sum\cos\theta_i$ is monotone. There are only finitely many values of $t$ for which $\sum\cos\theta_i=0$, so there exists an integer $t=n$ such that $\langle\nabla l_\alpha,\tau_{\delta_n}\rangle=\sum\cos\theta_i\neq0$ on $X$, where $\delta_n=\mathcal E_{\lambda}(n)(\delta)$.
On the other hand, since $\delta_n\subset Z$ is disjoint from $Y_1$, we have $\langle\nabla l_\gamma,\tau_{\delta_n}\rangle=0$ for any $\gamma\in S_1$. The lemma follows. ◻ The proof implies the following: **Lemma 28**. *Let $Y\subset X$ be a subsurface, and $\gamma$ a simple closed geodesic. Suppose $\gamma\not\subset\overline Y$; then $\nabla l_\gamma\not\in T_X^Y\mathcal T$, the tangent subspace given by $Y$ of the Teichmüller space of $X$.* Given a hyperbolic surface $X$, we take the set of shortest geodesics on it, denoted by $S(X)$. To prove the main theorem, we shall prove the following more general statement on $j$-curve sets, as we will see that $S(X)$ is a 2-curve set in the following section. **Theorem 29**. *For any $j$, there exists a sequence $(r_i)$ such that for any $j$-curve set $S$ of $r_i$ curves on any hyperbolic surface $X$ with $g(X)$ or $n(X)$ large depending on $i$ (and $j$), we have $$\mathop{\mathrm{rank}}\{\nabla l_\gamma\}_{\gamma\in S}\ge i.$$* *Proof.* We construct the sequence by induction.\ The case $i=1$ is trivial with $r_1=1$.\ \ For any $S_i$ of $r_i$ curves on $X$, when $g(X)$ or $n(X)$ is large depending on $i$, the inductive assumption gives that $$\text{rank}\{\nabla l_\gamma\}_{\gamma\in S_i}\ge i.$$ Note that $\#^p(\mathop{\mathrm{SSH}}(S_i))$ is bounded from above in $r_i$ by Lemma [Lemma 15](#maxhull){reference-type="ref" reference="maxhull"}. By [Lemma 24](#essential closure upper bound){reference-type="ref" reference="essential closure upper bound"}, there exist $g(r_i)$ and $n(r_i)$ such that $\mathop{\mathrm{SSH}}(S_i)$ is not essential in any $(g,n)$-surface $X$ when $g>g(r_i)$ or $n>n(r_i)$. $\mathop{\mathrm{Cap^j}}(\overline{\mathop{\mathrm{SSH}}}(S_i))$ is bounded in $r_i$ (and $j$) uniformly in $S_i$ by Theorem [Theorem 22](#maxcap){reference-type="ref" reference="maxcap"}.
Pick $$r_{i+1}>\max_{S_i}\mathop{\mathrm{Cap^j}}(\overline{\mathop{\mathrm{SSH}}}(S_i)),$$ then for any $j$-curve sets $S_i\subset S_{i+1}$ with $\#S_i=r_i$ and $\#S_{i+1}=r_{i+1}$, by definition of $j$-capacity, $$\mathop{\mathrm{SSH}}(S_{i+1})\supsetneqq\overline{\mathop{\mathrm{SSH}}}(S_i),$$ and by Lemma [Lemma 25](#essential){reference-type="ref" reference="essential"} and Lemma [Lemma 28](#extra rank){reference-type="ref" reference="extra rank"}, $$\text{rank}\{\nabla l_\gamma\}_{\gamma\in S_{i+1}}>\text{rank}\{\nabla l_\gamma\}_{\gamma\in S_i}\ge i.$$ Therefore, $$\text{rank}\{\nabla l_\gamma\}_{\gamma\in S_{i+1}}\ge i+1.$$ This completes the induction. ◻ # Shortest Geodesics and Main Theorem {#systole section} Given a hyperbolic surface $X$, $S(X)$ denotes the set of shortest geodesics on it. As a curve set, $S(X)$ satisfies certain conditions for combinatorial and geometric reasons. We say that a curve *bounds two cusps* if the curve and the two cusps together bound a pair of pants. **Lemma 30**. *Suppose $\gamma_1,\gamma_2\in S(X)$; then $i(\gamma_1,\gamma_2)\le 2$, i.e., $S(X)$ is a 2-curve set. If $i(\gamma_1,\gamma_2)=2$, then at least one of them bounds two cusps.* For a proof, see [@fanoni2016systoles]. *Remark 31*. $S(X)$ is a 1-curve set when $n(X)=0,1$. **Corollary 32**. *Suppose $S(X)$ is filling; then $\gamma\in S(X)$ is separating if and only if it bounds two cusps.* **Lemma 33**. *If two distinct geodesics $\gamma_1$ and $\gamma_2$ bound the same two cusps on a surface $X$, then $i(\gamma_1,\gamma_2)\ge4$.* *Proof.* For $i=1,2$, since $\gamma_i$ bounds two cusps, say $p$ and $q$, it separates the surface into two parts. Consider $p$ and $q$ as two marked points on the surface. Let $X_i$ denote the closed $[0,3]$-subsurface bounded by $\gamma_i$ that contains the cusps $p$ and $q$; then $p,q\in X_1\cap X_2$. Note that $X_1\cap X_2$ is not path connected, as $p$ and $q$ cannot be joined by a path, otherwise both $X_1$ and $X_2$ would contract to that path.
Since the boundary of the path component containing $p$ or $q$ is contributed by both $\gamma_1$ and $\gamma_2$, it contains at least two intersections of $\gamma_1$ and $\gamma_2$. The lemma follows. ◻ **Corollary 34**. *Let $\gamma_1,\gamma_2\in S(X)$; then $\gamma_1$ and $\gamma_2$ cannot bound the same two cusps.* Per Remark [Remark 6](#remark rank statement){reference-type="ref" reference="remark rank statement"}, we apply Theorem [Theorem 29](#induction){reference-type="ref" reference="induction"} to $S(X)$, which is a 2-curve set, for a hyperbolic surface $X$ that is large enough. We have **Theorem 35**. *For any $k$, there exist $g_0=g_0(k)$ and $n_0=n_0(k)$ such that $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}},\le k)\cap\mathcal M_{g,n}=\emptyset$$ and $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys}},\le k)\cap\mathcal M_{g,n}=\emptyset$$ for $g\ge g_0$ or $n\ge n_0$. As a result, $$\mathop{\mathrm{Crit}}(\mathop{\mathrm{sys_T}},\le k)\subset\partial\mathcal M_{g,n}.$$* *Proof.* Let $p$ be a critical point for the systole function, and $p_T$ a critical point for $\mathop{\mathrm{sys_T}}$, as in Theorem [Theorem 5](#Morsemain){reference-type="ref" reference="Morsemain"}. Note that $$\mathop{\mathrm{ind}}_{\mathop{\mathrm{sys_T}}}(p_T)=\mathop{\mathrm{ind}}_{\mathop{\mathrm{sys}}}(p)=\mathop{\mathrm{rank}}\{\nabla l_\gamma\}_{\gamma\in S(p)}.$$ The theorem follows from Theorem [Theorem 29](#induction){reference-type="ref" reference="induction"}. ◻ # Classification of Low Index Critical Points Based on the discussion at the end of Section [2](#Morse section){reference-type="ref" reference="Morse section"}, we shall study critical points of the $\mathop{\mathrm{sys_T}}$ function in the main stratum $\mathcal M_{g,n}$, so we introduce the following definition: **Definition 36**. A critical point of $\mathop{\mathrm{sys_T}}$ on $\overline{\mathcal M}_{g,n}$ is *primitive* if it is in $\mathcal M_{g,n}$.
Following Theorem [Theorem 29](#induction){reference-type="ref" reference="induction"}, we establish finer results for shortest geodesics in the special cases below, and then give a classification of primitive critical points of some low indices. **Corollary 37**. *Suppose $(g,n)(X)\neq (1,1),(0,4)$; then for distinct $\gamma_1,\gamma_2\in S(X)$, $$\mathop{\mathrm{rank}}\{\nabla l_1,\nabla l_2\}=2.$$* *Proof.* It is trivial that $$\mathop{\mathrm{rank}}\{\nabla l_1\}=\mathop{\mathrm{rank}}\{\nabla l_2\}=1.$$ Note that $\gamma_1=\mathop{\mathrm{SSH}}(\gamma_1)$, and that $\gamma_1$ is non-essential in any hyperbolic $X$ except when $(g,n)(X)=(1,1)$ or $(0,4)$. In any non-exceptional case, by Lemma [Lemma 28](#extra rank){reference-type="ref" reference="extra rank"}, we have $$\mathop{\mathrm{rank}}\{\nabla l_1,\nabla l_2\}>\mathop{\mathrm{rank}}\{\nabla l_1\}=1,$$ i.e., $\mathop{\mathrm{rank}}\{\nabla l_1,\nabla l_2\}=2$. ◻ **Corollary 38**. *Suppose $(g,n)(X)\neq (1,1),(0,4),(1,2),(0,5)$; then for distinct $\gamma_1,\gamma_2,\gamma_3\in S(X)$, $$\mathop{\mathrm{rank}}\{\nabla l_1,\nabla l_2,\nabla l_3\}=3.$$* *Proof.* If $\gamma_1,\gamma_2,\gamma_3$ are not connected as a graph, it reduces to the case of two curves for the same reason as above. Suppose then, after relabeling, that $\gamma_1$ intersects both $\gamma_2$ and $\gamma_3$. We take $Y_{12}:=\mathop{\mathrm{SSH}}\{\gamma_1,\gamma_2\}$, and consider the following two cases:\ \ (1) When $\#(\gamma_1\cap\gamma_2)=1$, $[g,n](Y_{12})=[1,1]$, and $Y_{12}$ is non-essential in any $X$ when $(g,n)(X)\neq(1,2)$.\ (2) When $\#(\gamma_1\cap\gamma_2)=2$, $[g,n](Y_{12})=[0,4]$, and at least one of $\gamma_1$ and $\gamma_2$ bounds two cusps, so $Y_{12}$ has at most two punctures.
$Y_{12}$ is non-essential in any $X$ when $(g,n)(X)\neq(1,3),(0,5)$ or $(0,6)$.\ \ If $\gamma_3\in Y_{12}$, then, since the three curves have equal length, $Y_{12}$ is determined as a $(1,0,1)$- or $(0,3,1)$-subsurface and has a $\mathbb Z/3$ rotational symmetry; then $\nabla l_1,\nabla l_2,\nabla l_3$ have rank 2 when projected onto the 2-dimensional tangent subspace at $X$ of $\mathcal T(X)$ given by $Y_{12}$ (boundary not considered). Let $\delta$ be the geodesic boundary of $Y_{12}$; then $\langle\nabla l_i,\nabla l_\delta\rangle>0$ by Riera's formula, see [@riera2005formula] or [@wolpert1987geodesic]. Therefore, $$\mathop{\mathrm{rank}}\{\nabla l_1,\nabla l_2,\nabla l_3\}=3.$$ Now suppose $\gamma_3\not\in Y_{12}$. For any type of $X$ other than those mentioned above, the conclusion follows from the previous corollary and Lemma [Lemma 25](#essential){reference-type="ref" reference="essential"}. There are two types still to be considered to complete the proof:\ \ When $X$ is (1,3): Consider $S^2:=\{\gamma_i:\gamma_i \text{ bounds two cusps}\}$. If $\#S^2=0$ or 1, suppose $\gamma_1,\gamma_2\not\in S^2$; then $\gamma_1$ and $\gamma_2$ are non-separating. If $\gamma_1$ and $\gamma_2$ intersect, then $Y_{12}=\mathop{\mathrm{SSH}}\{\gamma_1,\gamma_2\}$ is non-essential in $X$ as the complement is $[0,4]$. If $\gamma_1$ and $\gamma_2$ are disjoint, then $Y_{12}$ is non-essential as a component of the complement is $[0,4]$.\ \ If $\#S^2=2$ or 3, suppose $\gamma_1,\gamma_2\in S^2$; then each of $\gamma_1$ and $\gamma_2$ bounds two cusps, which are not the same two by Corollary [Corollary 34](#same two cusps){reference-type="ref" reference="same two cusps"}.
Therefore, $Y_{12}$ is a $(0,3,1)$-surface, and is non-essential in $X$.\ \ When $X$ is (0,6): Since any closed geodesic is separating, and in any pair there is at least one that bounds two cusps by Lemma [Lemma 30](#intersection=2){reference-type="ref" reference="intersection=2"}, at least two of $\gamma_i$'s bound two cusps, say $\gamma_1$ and $\gamma_2$. Then $\mathop{\mathrm{SSH}}\{\gamma_1,\gamma_2\}$ is $(0,3,1)$ and therefore is non-essential in $X$. As $\gamma_3\not\subset\overline{\mathop{\mathrm{SSH}}}\{\gamma_1,\gamma_2\}$, $$\mathop{\mathrm{rank}}\{\nabla l_1,\nabla l_2,\nabla l_3\}\ge\mathop{\mathrm{rank}}\{\nabla l_1,\nabla l_2\}+1=3.$$ ◻ The two corollaries above imply that no primitive critical points of respective index exist in those non-exceptional moduli spaces. For the exceptional cases, the critical points for the systole function are known thanks to Schmutz-Schaller in his paper [@schaller1999systoles]. We are going to classify primitive critical points of some low indices, namely 0,1 and 2, by listing all such surfaces. For each figure in the following theorems, there exists a unique surface with the colored curves as the shortest geodesics, with the given information on intersection or symmetry. **Theorem 39**. *Index 0 primitive critical points: Figure [3](#fig:(0,3)_0){reference-type="ref" reference="fig:(0,3)_0"}* ![$(g,n)=(0,3),\#S(X)=0$](0,3_0.pdf){#fig:(0,3)_0 width="6cm"} **Theorem 40**. *Index 1 primitive critical points: Figure [4](#fig:(1,1)_1){reference-type="ref" reference="fig:(1,1)_1"}, [5](#fig:(0,4)_1){reference-type="ref" reference="fig:(0,4)_1"}* ![$(g,n)=(1,1),\#S(X)=2$, $\frac{\pi}{2}$-intersection](1,1_1.pdf){#fig:(1,1)_1 width="6cm"} ![$(g,n)=(0,4),\#S(X)=2$, $\frac{\pi}{2}$-intersection](0,4_1.pdf){#fig:(0,4)_1 width="5cm"} **Theorem 41**. 
*Index 2 primitive critical points: Figure [6](#fig:(1,1)_2){reference-type="ref" reference="fig:(1,1)_2"}, [7](#fig:(0,4)_2){reference-type="ref" reference="fig:(0,4)_2"}, [8](#fig:(1,2)_2a){reference-type="ref" reference="fig:(1,2)_2a"}, [9](#fig:(1,2)_2b){reference-type="ref" reference="fig:(1,2)_2b"}, [10](#fig:(0,5)_2){reference-type="ref" reference="fig:(0,5)_2"}* ![$(g,n)=(1,1),\#S(X)=3$](1,1_2.pdf){#fig:(1,1)_2 width="6cm"} ![$(g,n)=(0,4),\#S(X)=3$](0,4_2.pdf){#fig:(0,4)_2 width="5cm"} ![$(g,n)=(1,2),\#S(X)=3$, $\frac{\pi}{2}$-intersections](1,2_2a.pdf){#fig:(1,2)_2a width="7cm"} ![$(g,n)=(1,2),\#S(X)=3$, $\mathbb Z/2$ rotational and $\mathbb Z/3$ permutational symmetry](1,2_2b.pdf){#fig:(1,2)_2b width="5cm"} ![$(g,n)=(0,5),\#S(X)=5$, $\mathbb Z/5$ rotational symmetry](0,5_2.pdf){#fig:(0,5)_2 width="6cm"}
{ "id": "2309.05801", "title": "No low index critical points for the systole function and sys_T\n functions in large M_{g,n}", "authors": "Changjie Chen", "categories": "math.DG math.GT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Given a linear unknown system with $m$ inputs, $p$ outputs, $n$-dimensional state vector, and $q$-dimensional exosystem, the problem of the adaptive optimal output regulation of this system boils down to iteratively solving a set of linear equations, and each of these equations contains $\frac{n (n+1)}{2} + (m+q)n$ unknown variables. In this paper, we refine the existing algorithm by decoupling each of these linear equations into two lower-dimensional linear equations. The first one contains $nq$ unknown variables, and the second one contains $\frac{n (n+1)}{2} + mn$ unknown variables. As a result, the solvability conditions for these equations are also significantly weakened. author: - "Liquan Lin and Jie Huang,  [^1] [^2]" title: A Refined Algorithm for the Adaptive Optimal Output Regulation Problem --- # Introduction Recently, reference [@gao2016adaptive] studied the optimal output regulation problem for linear unknown systems by state feedback control. The approach in [@gao2016adaptive] boils down to iteratively solving a set of linear equations. For a linear system with $m$ inputs, $p$ outputs, $n$-dimensional state vector, and $q$-dimensional exosystem, each of these equations contains $\frac{n (n+1)}{2} + (m+q)n$ unknown variables. The main objective of this paper is to refine the algorithm of [@gao2016adaptive]. We first show that the $\frac{n (n+1)}{2} + (m+q)n$ unknown variables governed by each linear equation in [@gao2016adaptive] can be separated into two groups. The first group consists of $nq$ unknown variables and the second group consists of $\frac{n (n+1)}{2}+mn$ unknown variables. As a result, the refined algorithm significantly reduces the computational complexity of the algorithm in [@gao2016adaptive]. Moreover, the success of the algorithm in [@gao2016adaptive] critically hinges on the satisfaction of $(n-p)q+2$ rank conditions for linear equations with $\frac{n (n+1)}{2} + (m+q)n$ columns.
In contrast, the refined algorithm only needs to satisfy one rank condition for one linear equation with $\frac{n (n+1)}{2} + (m+q)n$ columns and $(n-p)q+1$ rank conditions of lower-dimensional matrices. It is noted that the refined algorithm also applies to the adaptive cooperative optimal output regulation of linear multi-agent unknown systems developed in [@gao2017cooperative]. **Notation** Throughout this paper, $\mathbb{R}, \mathbb{N} , \mathbb{N}_+$ and $\mathbb{C}_-$ represent the sets of real numbers, nonnegative integers, positive integers and the open left-half complex plane, respectively. $|\cdot|$ represents the Euclidean norm for vectors and the induced norm for matrices. For $b=[b_1, b_2, \cdots, b_n]^T\in \mathbb{R}^n$, $\text{vecv}(b)=[b_1^2,b_1b_2,\cdots,b_1b_n,b_2^2,b_2b_3,\cdots, b_{n-1}b_n,b_n^2]^T \in \mathbb{R}^{\frac{n(n+1)}{2}}$. For a symmetric matrix $P=[p_{ij}]_{n\times n}\in \mathbb{R}^{n\times n}$, $\text{vecs}(P)=[p_{11},2p_{12},\cdots,2p_{1n},p_{22},2p_{23},\cdots, 2p_{n-1,n}, p_{nn}]^T\in \mathbb{R}^{\frac{n(n+1)}{2}}$. For $v\in \mathbb{R}^n$, $|v|_P=v^TPv$. For column vectors $a_i, i=1,\cdots,s$, $\mbox{col} (a_1,\cdots,a_s )= [a_1^T,\cdots,a_s^T ]^T,$ and, if $A = (a_1,\cdots,a_s )$, then vec$(A)=\mbox{col} (a_1,\cdots,a_s )$. For $A\in \mathbb{R}^{n\times n}$, $\sigma(A)$ denotes the spectrum of $A$, and Tr$(A)$ the trace of $A$. 'blockdiag' denotes the block diagonal matrix operator. # Preliminaries {#sec2} In this section, we review some existing results on the adaptive optimal output regulation problem based on [@gao2016adaptive] [@su2011cooperative] [@huang2004nonlinear] [@Kleinman].
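As a quick sanity check of the notation above, note that for any symmetric $P$ and any $v$, $|v|_P=v^TPv=\text{vecs}(P)^T\text{vecv}(v)$: the factor $2$ on the off-diagonal entries of $\text{vecs}(P)$ pairs with the single occurrence of each product $v_iv_j$ in $\text{vecv}(v)$. A minimal NumPy sketch (illustrative only; the helper names `vecv` and `vecs` are ours, mirroring the operators defined above):

```python
import numpy as np

def vecv(b):
    # vecv(b) = [b_1^2, b_1 b_2, ..., b_1 b_n, b_2^2, ..., b_n^2]^T
    n = len(b)
    return np.array([b[i] * b[j] for i in range(n) for j in range(i, n)])

def vecs(P):
    # vecs(P) stacks p_ii and 2*p_ij (i < j) for a symmetric matrix P
    n = P.shape[0]
    return np.array([P[i, j] if i == j else 2.0 * P[i, j]
                     for i in range(n) for j in range(i, n)])

# Check |v|_P = v^T P v = vecs(P)^T vecv(v) on random data
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
P = (G + G.T) / 2          # random symmetric matrix
v = rng.standard_normal(4)
assert np.isclose(v @ P @ v, vecs(P) @ vecv(v))
```

This identity is what later allows quadratic terms such as $|\bar{x}_i|_{P_j}$ to enter the data equations linearly in $\text{vecs}(P_j)$.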
## Output Regulation Problem (ORP) Consider a continuous-time linear system in the following form: $$\begin{aligned} \label{single} \begin{split} \dot{x}&=Ax+Bu+Dv\\ e&=Cx+Fv \end{split}\end{aligned}$$ where $x\in \mathbb{R}^{n}, u\in \mathbb{R}^{m}$ and $e\in \mathbb{R}^{p}$ are the state, control input and tracking error of the system, and $v\in \mathbb{R}^{q}$ is the state generated by the following exosystem: $$\begin{aligned} \dot{v}&=Ev \label{leaders} \end{aligned}$$ Without loss of generality, we assume $C$ is of full row rank. ****Problem** 1**. ***\[The State Feedback Output Regulation Problem\]** Design a state feedback control law of the following form: $$\begin{aligned} \label{controllersin} u=-Kx+Lv \end{aligned}$$ where $K$ is called the feedback gain matrix and $L$ is called the feedforward gain matrix such that the closed-loop system is exponentially stable in the sense that $\sigma(A-BK)\subset \mathbb{C}_-$ and the tracking error $e$ satisfies $\lim\limits_{t\to \infty}e=0$.* The solvability of Problem [**Problem** 1](#outregu){reference-type="ref" reference="outregu"} entails the following assumptions. ****Assumption** 1**. *$(A,B)$ is stabilizable.* ****Assumption** 2**. *There exists a pair of matrices $(X,U)$ that satisfies the following so-called regulator equations: $$\begin{aligned} \label{regusin1} XE=&AX+BU+D\\ \label{regusin2} 0=&CX+F \end{aligned}$$* The following result gives the solvability of Problem [**Problem** 1](#outregu){reference-type="ref" reference="outregu"} [@huang2004nonlinear]: ****Theorem** 1**.
*Under Assumptions [**Assumption** 1](#ass5){reference-type="ref" reference="ass5"} and [**Assumption** 2](#ass6){reference-type="ref" reference="ass6"}, there exist a feedback gain $K$ and a feedforward gain $L=U+KX$ such that ([\[controllersin\]](#controllersin){reference-type="ref" reference="controllersin"}) solves Problem [**Problem** 1](#outregu){reference-type="ref" reference="outregu"}.* To see the role of Assumption [**Assumption** 2](#ass6){reference-type="ref" reference="ass6"}, let $\bar{x}:=x -X^*v, \bar{u}:=u-U^*v$, where the pair $(X^*,U^*)$ is any one of the solutions to [\[regusin1\]](#regusin1){reference-type="eqref" reference="regusin1"} and [\[regusin2\]](#regusin2){reference-type="eqref" reference="regusin2"}. Then, we obtain the error system as follows: $$\begin{aligned} \label{errsyssin} \dot{\bar{x}}&=A\bar{x}+B\bar{u}\\ e&=C\bar{x} \end{aligned}$$ Thus, as long as the state feedback control $\bar{u} = - K \bar{x}$ stabilizes [\[errsyssin\]](#errsyssin){reference-type="eqref" reference="errsyssin"}, we have $\lim_{t \rightarrow \infty} e (t) = 0$. That is, $u = \bar{u} + U^*v = - K (x -X^*v) + U^*v = - K x + L v$ where $L = U^*+ K X^*$ solves the ORP. Under Assumption [**Assumption** 1](#ass5){reference-type="ref" reference="ass5"}, the following algebraic Riccati equation $$\begin{aligned} \label{ARE} A^TP^*+P^*A+C^TQC-P^*BR^{-1}B^TP^*=0 \end{aligned}$$ has a unique positive definite solution $P^*$. Then, a particular feedback gain $K^*$ is given by $K^*=R^{-1}B^TP^*$. This $K^*$ is such that the state feedback controller $\bar{u}^*=-K^*\bar{x}$ solves the following LQR problem: $$\begin{aligned} \begin{split}\label{costp2sin} &\min_{\bar{u}}\int_{0}^{\infty}(|e|_{Q}+|\bar{u}|_{R})dt\\ &\textup{subject to } (\ref{errsyssin}) \end{split} \end{aligned}$$ where $Q=Q^T\geq0, R=R^T>0$, with $(A, \sqrt{Q}C)$ observable.
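As a concrete numerical illustration of the ARE and the gain $K^*=R^{-1}B^TP^*$ (a sketch of ours, not part of the paper; the double-integrator data $A$, $B$, $Q$, $R$ below are assumptions chosen for the example), one can compute the stabilizing solution $P^*$ from the stable invariant subspace of the Hamiltonian matrix associated with the ARE:

```python
import numpy as np

# Assumed toy data (double integrator), not from the paper
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
Q = np.eye(2)
R = np.array([[1.0]])

# Stabilizing solution of A^T P + P A + C^T Q C - P B R^{-1} B^T P = 0:
# take the invariant subspace of the Hamiltonian for its n stable eigenvalues.
S = B @ np.linalg.inv(R) @ B.T
H = np.block([[A, -S], [-(C.T @ Q @ C), -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]              # columns spanning the stable subspace
n = A.shape[0]
X1, X2 = stable[:n, :], stable[n:, :]
P = np.real(X2 @ np.linalg.inv(X1))
P = (P + P.T) / 2                      # symmetrize away rounding noise

K = np.linalg.inv(R) @ B.T @ P         # K* = R^{-1} B^T P*

# Sanity checks: ARE residual vanishes and A - B K* is Hurwitz
residual = A.T @ P + P @ A + C.T @ Q @ C - P @ S @ P
assert np.allclose(residual, 0, atol=1e-8)
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
```

For this data the solution works out to $P^*=\begin{bmatrix}\sqrt{3} & 1\\ 1 & \sqrt{3}\end{bmatrix}$ and $K^*=[1,\ \sqrt{3}]$, so $\sigma(A-BK^*)\subset\mathbb{C}_-$ as required.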
An algorithm to obtain $P^*$ is given by iteratively solving the following linear Lyapunov equations [@Kleinman]: $$\begin{aligned} \label{aleq1sin} \begin{split} &0=(A-BK_{j})^TP_{j} +P_{j}(A-BK_{j})+C^TQC+K_{j}^TRK_{j} \end{split}\\ \label{aleq2sin} &K_{j+1}=R^{-1} B^TP_{j} \end{aligned}$$ where $j=0,1,\cdots$, and $K_{0}$ is such that $A-BK_{0}$ is a Hurwitz matrix. The algorithm [\[aleq1sin\]](#aleq1sin){reference-type="eqref" reference="aleq1sin"} and [\[aleq2sin\]](#aleq2sin){reference-type="eqref" reference="aleq2sin"} guarantees the following properties for $j\in \mathbb{N}$: 1. $\sigma(A-BK_{j}) \subset \mathbb{C}_-$; 2. $P^*\leq P_{j+1}\leq P_j$; 3. $\lim\limits_{j\to \infty}K_j=K^*,\lim\limits_{j\to \infty}P_j=P^*$. ## Adaptive Optimal Output Regulation Problem (AOORP) When the matrices $A,B$ and $D$ are unknown, reference [@gao2016adaptive] proposed the integral reinforcement learning (IRL) method to solve the ORP and the LQR problem [\[costp2sin\]](#costp2sin){reference-type="eqref" reference="costp2sin"}. This subsection summarizes the approach in [@gao2016adaptive] as follows. Given constant matrices $X_{0},X_{1}$ such that $X_{0}=0_{n\times q}, CX_{1}+F=0$, let $h=(n-p)q$ and select matrices $X_{i}, i=2,3,\cdots, h+1$, such that the vectors $\text{vec}(X_{i})$ form a basis for $\text{ker}(I_{q}\otimes C)$, which has dimension $h$. Then a representation of the general solution to ([\[regusin2\]](#regusin2){reference-type="ref" reference="regusin2"}) is as follows: $$\begin{aligned} \label{eq1} X&=X_{1}+\sum_{i=2}^{h+1}\alpha_{i}X_{i} \end{aligned}$$ where $\alpha_{2}, \cdots, \alpha_{h+1}\in \mathbb{R}$. Define a Sylvester map $\mathcal{S}(X_i)=X_{i}E-AX_{i}$.
Then, ([\[regusin1\]](#regusin1){reference-type="ref" reference="regusin1"}) can be put as follows: $$\begin{aligned} \label{eq2} \mathcal{S}(X)&=\mathcal{S}(X_{1})+\sum_{i=2}^{h+1}\alpha_{i}\mathcal{S}(X_{i})=BU+D \end{aligned}$$ Combining ([\[aleq2sin\]](#aleq2sin){reference-type="ref" reference="aleq2sin"}), ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}) and ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) gives $$\begin{aligned} \label{regunew} \mathcal{A}\chi=b \end{aligned}$$ where $$\begin{aligned} \mathcal{A}&=\begin{bmatrix} \mathcal{A}_{1} & \mathcal{A}_{2} \end{bmatrix}\\ \mathcal{A}_{1}&=\begin{bmatrix} \text{vec}(\mathcal{S}(X_{2})) & \cdots & \text{vec}(\mathcal{S}(X_{h+1}))\\ \text{vec}(X_{2}) & \cdots & \text{vec}(X_{h+1}) \end{bmatrix}\\ \mathcal{A}_{2}&= \begin{bmatrix} 0 & -I_q \otimes (P_{j}^{-1}K_{j+1}^TR)\\ -I_{nq} & 0 \end{bmatrix}\\ \chi&=[\alpha_{2}, \cdots , \alpha_{h+1}, \text{vec}(X)^T, \text{vec}(U)^T]^T\\ b&=\begin{bmatrix} \text{vec}(D-\mathcal{S}(X_{1}))\\ -\text{vec}(X_{1}) \end{bmatrix} \end{aligned}$$ ****Remark** 1**. *Equation ([\[regunew\]](#regunew){reference-type="ref" reference="regunew"}) indicates that, with the information of $X_i,\mathcal{S}(X_{i}),\; i=1,\cdots,h+1$, $D$, $P_j$ and $K_{j+1}$ for some $j$, one can obtain the solution to the regulator equations ([\[regusin1\]](#regusin1){reference-type="ref" reference="regusin1"}) and ([\[regusin2\]](#regusin2){reference-type="ref" reference="regusin2"}) without knowing the matrices $A$ and $B$.* Let $A_{j}=A-BK_{j}$ with $K_{j}$ obtained from [\[aleq2sin\]](#aleq2sin){reference-type="eqref" reference="aleq2sin"}, and let $\bar{x}_{i}=x-X_{i}v$ for $i=0,1,\cdots, h+1$.
Then the dynamics of the error systems can be derived as follows: $$\begin{aligned} \label{errsin} \begin{split} \dot{\bar{x}}_{i}=&Ax+Bu+Dv-X_{i}Ev\\ =&A_{j}\bar{x}_{i}+B(K_{j}\bar{x}_{i}+u)+(D-\mathcal{S}(X_{i}))v \end{split} \end{aligned}$$ For any $t \geq 0$, $\delta t >0$, using ([\[aleq1sin\]](#aleq1sin){reference-type="ref" reference="aleq1sin"}), ([\[aleq2sin\]](#aleq2sin){reference-type="ref" reference="aleq2sin"}) and [\[errsin\]](#errsin){reference-type="eqref" reference="errsin"} gives $$\begin{aligned} \begin{split} \label{IRLsin1} &|\bar{x}_{i}(t+\delta t)|_{P_{j}}-|\bar{x}_{i}(t)|_{P_{j}}\\ =& \int_{t}^{t+\delta t} [|\bar{x}_{i}|_{A_{j}^TP_{j}+P_{j}A_{j}}+2(u+K_{j}\bar{x}_{i})^TB^TP_{j}\bar{x}_{i}\\ & +2v^T(D-\mathcal{S}(X_{i}))^TP_{j}\bar{x}_{i}]d\tau\\ =& \int_{t}^{t+\delta t} [-|\bar{x}_{i}|_{C^TQC+K_{j}^TRK_{j}}+2(u+K_{j}\bar{x}_{i})^TRK_{j+1}\bar{x}_{i}\\ &+2v^T(D-\mathcal{S}(X_{i}))^TP_{j}\bar{x}_{i}]d\tau\\ \end{split} \end{aligned}$$ For any vector-valued signals $a(t)\in \mathbb{R}^{n_a}, b(t) \in \mathbb{R}^{n_b}$ with $n_a, n_b\in \mathbb{N}_+$ and any integer $s\in \mathbb{N}_+$, define $$\begin{aligned} \begin{split}\label{newdefi} \delta _a=&[\text{vecv}(a(t_1))-\text{vecv}(a(t_0)), \cdots ,\\ &\text{vecv}(a(t_s))-\text{vecv}(a(t_{s-1}))]^T\\ \Gamma_{ab}=&[\int_{t_0}^{t_1}a\otimes b d\tau , \int_{t_1}^{t_2}a\otimes b d\tau, \cdots , \int_{t_{s-1}}^{t_s}a\otimes b d\tau]^T \end{split} \end{aligned}$$ Then, using the notation ([\[newdefi\]](#newdefi){reference-type="ref" reference="newdefi"}), equation ([\[IRLsin1\]](#IRLsin1){reference-type="ref" reference="IRLsin1"}) can be arranged as the following linear equation: $$\begin{aligned} \label{IRLsinlinear} \Psi_{ij} \begin{bmatrix} \text{vecs} (P_{j})\\ \text{vec}(K_{j+1})\\ \text{vec}((D-\mathcal{S}(X_{i}))^TP_{j}) \end{bmatrix}=\Phi_{ij} \end{aligned}$$ where $$\begin{aligned} \Psi_{ij}&=[\delta_{\bar{x}_{i}}, -2\Gamma_{\bar{x}_{i}\bar{x}_{i}}(I_{n}\otimes K_{j}^TR)-2\Gamma_{\bar{x}_{i}u}(I_{n}\otimes R), -2\Gamma_{\bar{x}_{i}v}] \notag \\
\Phi_{ij}&=-\Gamma_{\bar{x}_{i}\bar{x}_{i}}\text{vec} (C^TQC+K_{j}^TRK_{j})\label{psijk} \end{aligned}$$ The following lemma [@gao2016adaptive] gives a condition that guarantees the uniqueness of the solution to ([\[IRLsinlinear\]](#IRLsinlinear){reference-type="ref" reference="IRLsinlinear"}). **Lemma 1**. *For $i\in \{0,1,2,\cdots, h+1\}$, if there exists an $s^*\in \mathbb{N}_+$ such that for all $s>s^*$ and all sequences $t_0<t_1<\cdots <t_s$, $$\begin{aligned} \label{rankconsinpre} \textup{rank} ([\Gamma_{\bar{x}_{i}\bar{x}_{i}}, \Gamma_{\bar{x}_{i}u}, \Gamma_{\bar{x}_{i}v}])= \frac{n(n+1)}{2}+(m+q)n \end{aligned}$$ then the matrix $\Psi_{ij}$ has full column rank for all $j\in \mathbb{N}$.* # Refinements of the Algorithm for AOORP {#sec3} In this section, we refine the algorithm for solving AOORP. Let us first give a specific way to find a basis of the null space of $C X=0$ where $X \in \mathbb{R}^{n \times q}$: **Proposition 1**. *Let $\{y_1,y_2,\cdots, y_{n-p}\}$ form a basis for the null space of $Cy=0$ with $y \in \mathbb{R}^{n}$. Then the following matrices form a basis for the null space of $C X=0$ where $X \in \mathbb{R}^{n \times q}$: $$\begin{aligned} \label{Xi} X_{(n-p)k+i}=[0_{n\times k}, y_i, 0_{n\times (q-k-1)}] \end{aligned}$$ for $k=0,\cdots, q-1 \text{ and } i=1,\cdots,n-p$.* Next we show that ([\[IRLsinlinear\]](#IRLsinlinear){reference-type="ref" reference="IRLsinlinear"}) can be reduced to two lower-dimensional linear equations. 1. Obtain $D, P_{0}, K_{1}$ by solving the following equation $$\begin{aligned} \label{IRLsinlinear0} \Psi_{00} \begin{bmatrix} \text{vecs} (P_{0})\\ \text{vec}(K_{1})\\ \text{vec}(D^T P_{0}) \end{bmatrix}=\Phi_{00} \end{aligned}$$ which is obtained from ([\[IRLsinlinear\]](#IRLsinlinear){reference-type="ref" reference="IRLsinlinear"}) by letting $j=0,i=0$.
By Lemma [1](#ranconsinlem){reference-type="ref" reference="ranconsinlem"} with $i=0$, [\[IRLsinlinear0\]](#IRLsinlinear0){reference-type="eqref" reference="IRLsinlinear0"} is solvable if $$\begin{aligned} \label{rankconmul10} \textup{rank} ([\Gamma_{\bar{x}_{0}\bar{x}_{0}}, \Gamma_{\bar{x}_{0}u}, \Gamma_{\bar{x}_{0}v}])= \frac{n(n+1)}{2}+(m+q)n \end{aligned}$$ 2. Let $\Psi_{ij}=[\Psi_{ij}^1, \Psi_{ij}^2, \Psi_{ij}^3]$ with $\Psi_{ij}^1=\delta_{\bar{x}_{i}}, \Psi_{ij}^2=-2\Gamma_{\bar{x}_{i}\bar{x}_{i}}(I_{n}\otimes K_{j}^TR)-2\Gamma_{\bar{x}_{i}u}(I_{n}\otimes R), \Psi_{ij}^3=-2\Gamma_{\bar{x}_{i}v}$. Then, from [\[IRLsinlinear\]](#IRLsinlinear){reference-type="eqref" reference="IRLsinlinear"} with $j=0$, we have $$\begin{aligned} & \Psi_{i0}^3 \text{vec}((D-\mathcal{S}(X_{i}))^TP_{0}) \notag \\ =& -\Psi_{i0}^1\text{vecs} (P_{0})- \Psi_{i0}^2\text{vec}(K_{1}) + \Phi_{i0} \end{aligned}$$ which can be put in the following form: $$\begin{aligned} \label{new2} & \Gamma_{\bar{x}_{i}v} \text{vec}((D-\mathcal{S}(X_{i}))^TP_{0}) \notag \\ =& \frac{1}{2} (\Psi_{i0}^1\text{vecs} (P_{0})+\Psi_{i0}^2\text{vec}(K_{1})- \Phi_{i0}) \end{aligned}$$ If $$\begin{aligned} \label{rankconsin2} \textup{rank}(\Gamma_{\bar{x}_{i}v})=qn, \forall i=1,2,\cdots, h+1, \end{aligned}$$ then we can solve for $\mathcal{S}(X_{i})$, $i = 1, \cdots, h+1$, from [\[new2\]](#new2){reference-type="eqref" reference="new2"}. 3. Let $M\in \mathbb{R}^{n^2\times\frac{n(n+1)}{2}}$ be a constant matrix such that $M\text{vecs}(P_{j})=\text{vec}(P_{j})$ and $M_{D}=(I_{n}\otimes D^T) M$.
Then, we have $$\begin{aligned} \label{new1} &\text{vec}(D^TP_{j}) \notag \\=&(I_{n}\otimes D^T )\text{vec}(P_{j}) \notag \\ =&(I_{n}\otimes D^T )M\text{vecs}(P_{j}) \notag \\ =&M_{D}\text{vecs}(P_{j}) \end{aligned}$$ As a result, [\[IRLsinlinear\]](#IRLsinlinear){reference-type="eqref" reference="IRLsinlinear"} with $i=0$ reduces to the following: $$\begin{aligned} \label{new3} &\Psi_{0j} \begin{bmatrix} \text{vecs} (P_{j}) \\ \text{vec}(K_{j+1})\\ \text{vec} (D^T P_{j}) \end{bmatrix} \notag \\ =&\Psi_{0j}^1\text{vecs} (P_{j})+\Psi_{0j}^2\text{vec}(K_{j+1}) \notag \\&+\Psi_{0j}^3\text{vec} (D^T P_{j}) \notag \\ =&\Psi_{0j}^1\text{vecs} (P_{j})+\Psi_{0j}^2\text{vec}(K_{j+1}) \notag \\&+\Psi_{0j}^3M_{D}\text{vecs}(P_{j}) \notag \\ =& [\Psi_{0j}^1+\Psi_{0j}^3M_{D}, \Psi_{0j}^2]\begin{bmatrix} \text{vecs} (P_{j})\\ \text{vec}(K_{j+1}) \end{bmatrix} =\Phi_{0j} \end{aligned}$$ The solvability of ([\[new3\]](#new3){reference-type="ref" reference="new3"}) is given as follows. **Lemma 2**. *If there exists an $s^*\in \mathbb{N}_+$ such that for all $s>s^*$ and all sequences $t_0<t_1<\cdots <t_s$, $$\begin{aligned} \label{rankconsin} \textup{rank} ([\Gamma_{\bar{x}_{0}\bar{x}_{0}}, \Gamma_{\bar{x}_{0}u}])= \frac{n(n+1)}{2}+mn \end{aligned}$$ then the matrix $[\Psi_{0j}^1+\Psi_{0j}^3M_{D}, \Psi_{0j}^2]$ has full column rank for all $j\in \mathbb{N}$.* **Remark 2**. *The original algorithm needs to solve ([\[IRLsinlinear\]](#IRLsinlinear){reference-type="ref" reference="IRLsinlinear"}) for $D, P_j$ and $K_{j+1}$ with $i = 0$ and $j=0, \cdots, j^*$ where $j^*$ is such that $||P_{j^*} - P_{j^*-1}||$ is sufficiently small, and then solve ([\[IRLsinlinear\]](#IRLsinlinear){reference-type="ref" reference="IRLsinlinear"}) for ${S}(X_i)$ with $j =j^*$ and $i=1,2,\cdots,h+1$. In particular, it needs the rank condition [\[rankconsinpre\]](#rankconsinpre){reference-type="eqref" reference="rankconsinpre"} to be satisfied for $i\in \{0,1,2,\cdots, h+1\}$.
In contrast, in the refined algorithm, we only need the rank condition [\[rankconmul10\]](#rankconmul10){reference-type="eqref" reference="rankconmul10"}, which is obtained from the rank condition [\[rankconsinpre\]](#rankconsinpre){reference-type="eqref" reference="rankconsinpre"} with $i=0$, and the rank conditions ([\[rankconsin2\]](#rankconsin2){reference-type="ref" reference="rankconsin2"}) with $i=1,2,\cdots,h+1$ to be satisfied.* W. Gao and Z. P. Jiang, "Adaptive dynamic programming and adaptive optimal output regulation of linear systems", *IEEE Transactions on Automatic Control*, vol. 61, no. 12, pp. 4164-4169, 2016. W. Gao, Z. P. Jiang, F. L. Lewis and Y. Wang, "Cooperative optimal output regulation of multi-agent systems using adaptive dynamic programming", in *2017 American Control Conference (ACC)*, 2017, pp. 2674-2679. Y. Su and J. Huang, "Cooperative output regulation of linear multi-agent systems", *IEEE Transactions on Automatic Control*, vol. 57, no. 4, pp. 1062-1066, 2011. J. Huang, *Nonlinear output regulation: theory and applications*, SIAM, 2004. D. Kleinman, "On an iterative technique for Riccati equation computations", *IEEE Transactions on Automatic Control*, vol. 13, no. 1, pp. 114-115, 1968. [^1]: This work was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region under grant No. 14201420 and in part by the National Natural Science Foundation of China under Project 61973260. [^2]: The authors are with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong (e-mail: lqlin\@mae.cuhk.edu.hk; jhuang\@mae.cuhk.edu.hk). Corresponding author: Jie Huang.
--- abstract: | A numerical study of fractional Camassa-Holm equations is presented. Smooth solitary waves are constructed numerically. Their stability is studied as well as the long time behavior of solutions for general localised initial data from the Schwartz class of rapidly decreasing functions. The appearance of dispersive shock waves is explored. address: - | Institut de Mathématiques de Bourgogne, UMR 5584\ Institut Universitaire de France\ Université de Bourgogne-Franche-Comté, 9 avenue Alain Savary, 21078 Dijon Cedex, France\ E-mail Christian.Klein\@u-bourgogne.fr - | Institut de Mathématiques de Bourgogne, UMR 5584\ Université de Bourgogne-Franche-Comté, 9 avenue Alain Savary, 21078 Dijon Cedex, France\ E-mail topkarci\@itu.edu.tr author: - Christian Klein$^{*}$ - Goksu Oruc title: Numerical study of fractional Camassa-Holm equations --- [^1] # Introduction This paper is concerned with the numerical study of solutions to the fractional Camassa-Holm (fCH) equation given by $$\label{fCH} u_t+ \kappa_1 u_x + 3 uu_x + D^{\alpha} u_t = -\kappa_2 [2D^{\alpha} (u u_x)+ uD^{\alpha}u_x ],$$ where $\kappa_{1}$, $\kappa_{2}$ are real constants. The fractional derivative $D^{\alpha}$ (also called fractional Laplacian) is defined via its Fourier symbol $$\mathcal{F}D^{\alpha} = |k|^{\alpha} \label{Da},$$ where $\mathcal{F}$ denotes the Fourier transform and where $k$ is the dual Fourier variable, see ([\[fourierdef\]](#fourierdef){reference-type="ref" reference="fourierdef"}). ## Background In the case $\alpha=2$, $\kappa_1= 2\omega$ and $\kappa_2=\frac{1}{3}$, the equation [\[fCH\]](#fCH){reference-type="eqref" reference="fCH"} turns into the Camassa-Holm (CH) equation $$\label{ch} u_t+ 2 \omega u_x + 3 u u_x - u_{xxt} = 2 u_x u_{xx} + u u_{xxx}.$$ The CH equation was first introduced in [@fokas] in a formal study of a class of integrable equations.
In [@camhol] the CH equation was presented to model unidirectional propagation of small-amplitude shallow water waves above a flat bottom. It has also been derived as a geodesic flow on the circle in [@kolev02; @kolev03; @kouranbaeva] and recently in the context of nonlinear dispersive elastic waves in [@EEE]. The CH equation is completely integrable and has an infinite number of local conserved quantities [@fisher], three of which are given in the following form: $$H_0=\int_{\mathbb{R}} udx, \hspace{15pt} H_1=\frac{1}{2} \int_{\mathbb{R}} (u^2 + u_x^2)dx, \hspace{15pt}H_2=\frac{1}{2} \int_{\mathbb{R}} (u^3+ u u_x^2+ 2 \omega u^2)dx.$$ The CH equation has smooth solitary wave solutions and peaked solitons (*peakons*) in the cases $\omega > 0$ and $\omega =0$, respectively. The existence of peaked solitary waves has been established in [@alber]. A classification of travelling wave solutions that also contain cusped solitons (*cuspons*) has been proposed in [@lenells]. The orbital stability of the travelling waves has been obtained for smooth solitons in [@Constantin2002], for peakons in [@Constantin2000] and for periodic peakons in [@lenells1; @lenells2]. Wave breaking phenomena, which cause solutions to remain bounded while their slopes blow up in finite time, have been investigated in [@conswave]. Many numerical approaches for the CH equation have been developed, such as finite difference methods, finite-volume methods, pseudo-spectral methods and local discontinuous Galerkin methods; see for instance [@finite1; @finite2; @finite3; @artebrant; @kalisch; @galerkin; @fengfeng; @camassahuang; @GK; @AGK]. Even though the CH equation has been studied extensively, there are only a few results on the fractional CH equation, which was obtained in [@EEE] via a multiscales expansion for a fractional Boussinesq equation appearing in elasticity.
In [@duruk] the local well-posedness of the Cauchy problem for the following form of the fractional CH equation $$\label{fchv2} u_t+ u_x + \frac{1}{2} (u^{2})_x + \frac{3}{4}D^{\alpha} u_x + \frac{5}{4}D^{\alpha} u_t = -\frac{1}{4}[2D^{\alpha} (u u_x)+ uD^{\alpha}u_x ]$$ has been proven for initial data $u_0\in H^s(\mathbb{R})$, $s>\frac{5}{2}$, when $\alpha>2$ via Kato's semigroup approach for quasilinear evolution equations. In [@Besov] the local well-posedness criterion for the same Cauchy problem has been refined in an appropriate Besov space $B_{2,1}^{s_0}$ with $s_0=2 \alpha - \frac{1}{2}$ for $\alpha > 3$ and $s_0=\frac{5}{2}$ for $2 < \alpha \leq 3$. In [@Besov2] the local well-posedness results given in [@Besov] have recently been extended to the Cauchy problem for the generalized fractional CH equation $$\label{gfchv2} u_t+ u_x + \frac{1}{2} (u^{p+1})_x + \frac{3}{4}D^{\alpha} u_x + \frac{5}{4}D^{\alpha} u_t = -\frac{p+1}{8}[2D^{\alpha} (u^p u_x)+ u^pD^{\alpha}u_x ]$$ with $\alpha>2$ and $p \in \mathbb{N}^+$. A blow-up criterion for the solutions has been obtained. We note here that in the standard case ($\alpha=2$) the equation [\[fchv2\]](#fchv2){reference-type="eqref" reference="fchv2"} is related to the well-known integrable shallow water equation derived in [@dullin]. To the best of our knowledge, there is no study of the fractional CH equation for $\alpha \leq 2$, and the current paper is the first numerical study of the fractional CH equation in the literature. ## Main results In this paper we always consider positive values of the constant $\kappa_{1}=2\omega$. For the CH equation it is known that the solitary waves with velocity $c>2\omega$ are always smooth, see [@Joh]. Our numerical study indicates that there might not be smooth solitary waves for positive $\omega$ and all values of $\alpha$. If $\alpha$ and thus the dispersion is too small, there might be no such solitary waves.
Thus it could be that for $\omega>0$, there exists a minimal value of $\alpha$ depending on $\omega$ and $c$, $\alpha_{s}(\omega,c)>0$, such that only for $\alpha>\alpha_{s}(\omega,c)$ there exist smooth solitary waves. It also appears that for given $\alpha$ and $\omega>0$, there exists a minimal velocity $c_{s}(\omega,\alpha)$ such that there exist smooth solitary waves for $c>c_{s}$. Since we construct the solitary waves numerically with a Newton iteration, the failure of the iteration only gives an indication that there are no smooth solutions for certain parameters $\alpha$, $c$, $\omega$. However, this could also mean that there are no appropriate initial iterates known. By studying perturbations of the numerically constructed solitary waves, we get\ **Main conjecture I:**\ The smooth solitary waves are orbitally stable. For the CH equation, it was shown in [@McK] that solutions for smooth initial data $u_{0}$ subject to the condition $u_{0}-\partial_{xx}u_{0}+\omega>0$ will stay smooth for all times. The precise nature of the singularity that can appear in finite time for initial data not satisfying this non-breaking condition does not appear to be known. Our numerical experiments indicate that for sufficiently small $\alpha$, initial data satisfying a condition of the form $u_{0}+D^{\alpha}u_{0}+\omega>0$ can still develop a cusp in finite time. We have\ **Main conjecture II:**\ For sufficiently small $\alpha<\alpha_{c}(\omega)$, initial data of sufficiently large mass lead to a blow-up of the fCH solution in finite time. The blow-up near a point $x_{s}$ is a cusp of the form $u\propto\sqrt{|x-x_{s}|}$. The precise value of $\alpha_{c}(\omega)$ and the conditions on the initial data are not known. This paper is organized as follows: in Section 2 basic facts on the standard CH and the fractional CH equation are gathered and some useful notation is reviewed. In Section 3 the numerical approach is introduced for solitary waves of the fractional CH equation.
In Section 4 we investigate numerically the stability properties of solitary waves. The long time behavior of solutions to the fractional CH equation is studied in the case of initial data from the Schwartz class in Section 5. In Section 6, we study the appearance of rapidly modulated oscillations, *dispersive shock waves*, in the vicinity of shocks to the corresponding dispersionless equation. We add some concluding remarks in Section 7. # Preliminaries In this section we collect some basic facts on the standard CH and the fractional CH equation. We apply the standard definition of the Fourier transform for tempered distributions $u(x)$, denoted by $(\mathcal{F} u)(k)$ with dual variable $k$, and of its inverse, $$\begin{aligned} (\mathcal{F}u)(k)=\hat{u}& = \int_{\mathbb{R}}^{}u(x)e^{-ikx}dx,\quad k\in \mathbb{R}, \nonumber\\ u(x) & =\frac{1}{2\pi}\int_{\mathbb{R}}^{}(\mathcal{F}u)(k)e^{ikx} dk,\quad x\in\mathbb{R}. \label{fourierdef}\end{aligned}$$ ## Conserved Quantities We give the derivation of the conserved quantities for the equation [\[fCH\]](#fCH){reference-type="eqref" reference="fCH"}. To this end we consider sufficiently smooth solutions which tend to $0$ as $x\rightarrow \pm \infty$.
Integrating equation [\[fCH\]](#fCH){reference-type="eqref" reference="fCH"} over the real line, we get $$\label{fCH1} \frac{d}{dt} \int_{\mathbb{{R}}} \left( I+ D^{\alpha} \right) u dx + \int_{\mathbb{{R}}}\left(\kappa_1 u + \frac{3}{2} u^2 + \kappa_2{D^{\alpha}u^2} \right)_x dx + \kappa_2 \int_{\mathbb{{R}}} u D^{\alpha}u_x dx =0.$$ By using Plancherel's theorem, we can rewrite the last integral in equation [\[fCH1\]](#fCH1){reference-type="eqref" reference="fCH1"} as $$\begin{aligned} \int_{\mathbb{{R}}} u(x,t) D^{\alpha}u_x(x,t) dx &=& \int_{\mathbb{{R}}} \hat{u}(k,t) |k|^{\alpha} \overline{ i k \hat{u}(k,t)} \frac{dk}{2\pi} , \nonumber \\ &=& \int_{\mathbb{{R}}} |k|^{\frac{\alpha}{2}} \hat{u} |k|^{\frac{\alpha}{2}} \overline{ i k \hat{u}} \frac{d k}{2\pi} , \nonumber \\ &=& \int_{\mathbb{{R}}} D^{\frac{\alpha}{2}} u D^{\frac{\alpha}{2}} u_x dx,\nonumber \\ &=& \frac{1}{2} \int_{\mathbb{{R}}} \left(|D^{\frac{\alpha}{2}} u(x,t)|^2\right)_x dx.\end{aligned}$$ This implies for equation [\[fCH1\]](#fCH1){reference-type="eqref" reference="fCH1"} $$\label{fCH2} \frac{d}{dt} \int_{\mathbb{{R}}} \left( I+ D^{\alpha} \right) u dx + \int_{\mathbb{{R}}}\left(\kappa_1 u + \frac{3}{2}u^2 + \kappa_2 {D^{\alpha}u^2} +\frac{\kappa_2}{2} |D^{\frac{\alpha}{2}} u|^2 \right)_x dx =0,$$ which gives the conserved mass of the fCH equation in the following form $$\label{mass} I_1= \int_{\mathbb{{R}}} \left( u(x,t)+ D^{\alpha} u(x,t) \right) dx.$$ For the second conserved quantity, we first multiply both sides of the fCH equation by $u$ and integrate over $\mathbb{{R}}$.
Then we have $$\frac{1}{2} \frac{d}{dt} \int_{\mathbb{{R}}} \left( u^2+ |D^{\frac{\alpha}{2}}u|^2 \right) dx + \int_{\mathbb{{R}}}\left( \frac{\kappa_1}{2}u^2 + {u^3} +\kappa_2 u^2 D^{\alpha}u \right)_x dx=0.$$ Here we have used $$\int_{\mathbb{{R}}} u(x,t) D^{\alpha}(u^2(x,t))_x dx = \int_{\mathbb{{R}}} D^{\alpha} u(x,t) (u^2(x,t))_x dx.$$ Finally, the following identity is obtained as the formal conserved energy of the fCH equation $$\label{energy} I_2= \int_{\mathbb{{R}}} \left(u^2(x,t) + |D^{\frac{\alpha}{2}}u(x,t) |^2 \right) dx.$$ ## Solitary Waves of the CH equation Solitary waves are localised traveling waves, i.e., solutions of the form $u(x,t)=Q_c(\xi), ~~\xi=x-ct$ with constant propagation speed $c$ and fall-off condition $\displaystyle{\lim_{|\xi| \rightarrow \infty} Q_c(\xi)=0}$. This ansatz leads to equation ([\[sw0\]](#sw0){reference-type="ref" reference="sw0"}) for the fCH equation. For the integrable CH equation ($\alpha=2$, $\kappa_{1}=2\omega$, $\kappa_{2}=1/3$), equation ([\[sw0\]](#sw0){reference-type="ref" reference="sw0"}) can be integrated explicitly, leading to $$(-c(1-\partial_{xx})+2\omega)Q_{c}+\frac{3}{2}Q_{c}^{2}=Q_{c}Q_{c}''+\frac{1}{2}(Q_{c}')^{2}. \label{QCH}$$ As discussed in [@Joh], the soliton is given implicitly by $$Q_{c} = \frac{(c-2\omega)\mbox{sech}^2(\theta)}{\mbox{sech}^2(\theta)+2(\omega/c) \tanh^2(\theta)} \label{QCH2}$$ where $$x-ct = \sqrt{\frac{4c}{c-2\omega}}\theta+\ln\frac{\cosh(\theta-\theta_{0})}{\cosh(\theta+\theta_{0})} \label{QCH3}$$ with $\theta_{0}=\mbox{arctanh}(\sqrt{1 - (2\omega/c)})$. There is a smooth soliton for $\omega>0$ and $c>2\omega$. These solitons are orbitally stable, see [@Joh]. # Solitary Waves In this section we numerically construct solitary waves of the fractional CH equation.
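The implicit CH solution ([\[QCH2\]](#QCH2){reference-type="ref" reference="QCH2"}), ([\[QCH3\]](#QCH3){reference-type="ref" reference="QCH3"}) is convenient to evaluate parametrically in $\theta$; it also serves as the initial iterate of the tracing technique described below. A short Python sketch (the $\theta$ grid is an illustrative choice):

```python
import numpy as np

def ch_soliton(theta, c, omega):
    """Smooth CH soliton (QCH2)-(QCH3), parametrised by theta.

    Returns (xi, Q) with xi = x - c*t; requires omega > 0 and c > 2*omega.
    """
    theta0 = np.arctanh(np.sqrt(1.0 - 2.0 * omega / c))
    sech2 = 1.0 / np.cosh(theta) ** 2
    Q = (c - 2.0 * omega) * sech2 / (sech2 + 2.0 * (omega / c) * np.tanh(theta) ** 2)
    xi = (np.sqrt(4.0 * c / (c - 2.0 * omega)) * theta
          + np.log(np.cosh(theta - theta0) / np.cosh(theta + theta0)))
    return xi, Q

# example: the soliton for c = 2, omega = 3/5 used repeatedly below
theta = np.linspace(-20.0, 20.0, 2001)
xi, Q = ch_soliton(theta, c=2.0, omega=0.6)
```

The map $\theta\mapsto\xi$ is strictly monotone for $c>2\omega$, so $(\xi,Q)$ can be interpolated onto any uniform grid in $x$; the crest sits at $\theta=0$ with amplitude $c-2\omega$.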
## Defining equations With the traveling wave ansatz $u(x,t)=Q_c(\xi), ~~\xi=x-ct$, equation [\[fCH\]](#fCH){reference-type="eqref" reference="fCH"} reduces to the following equation $$\label{sw0} -cQ_c' + \kappa_1 Q_c' + \frac{3}{2}(Q_c^{2})' -cD^{\alpha} Q_c' + \kappa_2 [ D^{\alpha} (Q_c^2)' + Q_c D^{\alpha} Q_c' ] =0.$$ Here $^\prime$ denotes the derivative with respect to $\xi$. Integrating and using the fall-off condition at infinity, we get $$(-c(1+D^{\alpha})+\kappa_{1})Q_{c}+\frac{3}{2}Q_{c}^{2}+\kappa_{2}(D^{\alpha}Q_{c}^{2}+ \partial_{\xi}^{-1}(Q_{c}D^{\alpha}Q_{c}'))=0. \label{Q}$$ Here the antiderivative is defined via its Fourier symbol, $\mathcal{F}\partial_{x}^{-1}=1/(ik)$, i.e., $\partial_{x}^{-1}=\frac{1}{2}(\int_{-\infty}^{x}-\int_{x}^{\infty})$. ## Numerical approach To construct solitary waves numerically, we apply, as in [@KS15], a Fourier spectral method. We study equation ([\[Q\]](#Q){reference-type="ref" reference="Q"}) in the Fourier domain $$(-c(1+|k|^{\alpha})+\kappa_{1})\mathcal{F}Q_{c}+\frac{3}{2}\mathcal{F}(Q_{c}^{2})+\kappa_{2}(|k|^{\alpha}\mathcal{F}(Q_{c}^{2}) +\frac{1}{ik}\mathcal{F}(Q_{c}D^{\alpha}Q_{c}'))=0. \label{Qfourier}$$ The Fourier transform is approximated on a sufficiently large torus ($x\in L[-\pi,\pi]$ with $L\gg1$) via a discrete Fourier transform (DFT), which is conveniently computed by a fast Fourier transform (FFT). This means we introduce the standard discretisation $x_{n}=-\pi L+nh$, $n=1,\ldots,N$, $h=2\pi L/N$ with $N\in \mathbb{N}$ the number of Fourier modes. The dual variable is then given by $k=(-N/2+1,\ldots,N/2)/L$. In an abuse of notation, we will use the same symbols for the DFT as for the standard Fourier transform. With the DFT discretisation, equation ([\[Qfourier\]](#Qfourier){reference-type="ref" reference="Qfourier"}) becomes a system of $N$ nonlinear equations for $\mathcal{F}Q_{c}$, which is solved with a Newton-Krylov method.
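The operators entering ([\[Qfourier\]](#Qfourier){reference-type="ref" reference="Qfourier"}) are straightforward to realise with the FFT; the only delicate point is the $1/(ik)$ factor, whose $k=0$ mode is filled with the de l'Hospital limit discussed below. The following Python sketch (with a smaller torus and coarser grid than in the actual computations, chosen purely for illustration) evaluates the residual of the integrated travelling-wave equation ([\[Q\]](#Q){reference-type="ref" reference="Q"}); as a consistency check, plugging in the explicit CH soliton for $\alpha=2$ gives a residual at the level of the interpolation error:

```python
import numpy as np
from scipy.interpolate import CubicSpline

Lt, N = 20.0, 1024                      # illustrative torus x in Lt*[-pi, pi)
x = -np.pi * Lt + 2.0 * np.pi * Lt * np.arange(N) / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi * Lt / N)

def Dalpha(u, a):                       # fractional Laplacian, Fourier symbol |k|^a
    return np.real(np.fft.ifft(np.abs(k) ** a * np.fft.fft(u)))

def dx(u):                              # spectral derivative
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def pinv(u):                            # antiderivative, Fourier symbol 1/(ik)
    uhat = np.fft.fft(u)
    out = np.zeros_like(uhat)
    out[1:] = uhat[1:] / (1j * k[1:])
    out[0] = -np.sum(x * u)             # de l'Hospital limit of uhat/(ik) at k = 0
    return np.real(np.fft.ifft(out))

def residual(Q, c, alpha, kap1=1.2, kap2=1.0 / 3.0):
    """Left-hand side of the integrated travelling-wave equation (Q)."""
    return ((-c) * (Q + Dalpha(Q, alpha)) + kap1 * Q + 1.5 * Q ** 2
            + kap2 * (Dalpha(Q ** 2, alpha) + pinv(Q * Dalpha(dx(Q), alpha))))

# consistency check: the explicit CH soliton (alpha = 2, omega = 0.6, c = 2)
c, om = 2.0, 0.6
th = np.linspace(-25.0, 25.0, 200001)
t0 = np.arctanh(np.sqrt(1.0 - 2.0 * om / c))
sech2 = 1.0 / np.cosh(th) ** 2
Qth = (c - 2.0 * om) * sech2 / (sech2 + 2.0 * (om / c) * np.tanh(th) ** 2)
xith = (np.sqrt(4.0 * c / (c - 2.0 * om)) * th
        + np.log(np.cosh(th - t0) / np.cosh(th + t0)))
Qch = CubicSpline(xith, Qth)(x)         # sample the soliton on the uniform grid
res = residual(Qch, c, alpha=2.0)
```

For the Newton-Krylov iteration, `residual` (or its Fourier-domain analogue ([\[Qfourier\]](#Qfourier){reference-type="ref" reference="Qfourier"})) is the map whose zero is sought.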
This means that the action of the inverse of the Jacobian in a standard Newton method is computed iteratively with the Krylov subspace method GMRES [@SS86]. Since the convergence of a Newton method is local, the choice of the initial iterate is important. Therefore we apply a tracing technique: the implicit solution for CH ($\alpha=2$) for some value of the velocity $c$ is taken as the initial iterate for a slightly smaller value of $\alpha$. The result of this iteration is then taken as the initial iterate for an even smaller value of $\alpha$. Numerically challenging is the computation of the last term in ([\[Qfourier\]](#Qfourier){reference-type="ref" reference="Qfourier"}) since there is a division by the dual variable $k$ that can vanish. We apply the same approach as in [@KMS]: the limiting value for $k=0$ is computed via de l'Hospital's rule, $\lim_{k\to0} \hat{f}(k)/k = \hat{f}'(0)$, which applies here since $\hat{f}(0)=0$ for the function in question. The derivative for $k=0$ is computed in the standard way via the sum of the inverse Fourier transform $f$ of the function $\hat{f}$ times $x$ sampled at the collocation points in $x$, see [@KMS], i.e., $\hat{f}'(0) \approx \sum_{n=1}^{N}x_{n}f_{n}$. ## Examples To study concrete examples, we work with $L=100$ and $N=2^{16}$ as in [@KS15] for fractional Korteweg-de Vries (fKdV) equations. We consider the case $\kappa_{2}=1/3$, $\kappa_{1}=2\omega$, which is integrable for $\alpha=2$. We choose $\omega=3/5$ as in [@EEE]. We show solitary waves for $c=2$ for several values of $\alpha$ in Fig. [1](#Qc2){reference-type="ref" reference="Qc2"}. It can be seen that the smaller $\alpha$ and thus the dispersion, the more peaked the solitary wave is and the slower its decay towards infinity. We were not able to numerically find a solution for even smaller values of $\alpha$ since the iteration did not converge to a smooth solution. The behavior of the DFT coefficients in the iteration indicates that it converges to a peakon or cuspon.
This does not mean that there are no smooth solitary waves for smaller values of $\alpha$ for this velocity; we just could not find them with a Newton iteration. But this could indicate that there might be for a given $c$ a lower limit $\alpha_{s}(\omega,c)$ for $\alpha$ below which there are no smooth solitons; we recall that there are smooth solitons for CH for all positive $\omega$ with $c>2\omega$. Note, however, that it is difficult to identify such a limit with an iterative method since the failure of the latter to converge does not mean that there is no such solution. It can be that the initial iterate was just not sufficiently close. ![Solitary waves ([\[Q\]](#Q){reference-type="ref" reference="Q"}) for $c=2$, $\kappa_{1}=1.2$, $\kappa_{2}=1/3$ for several values of $\alpha$. ](fCHsolc2alpha.eps){#Qc2 width="\\textwidth"} The amplitude of the solitary waves increases with the velocity $c$. We show this behavior on the left of Fig. [3](#Qa15){reference-type="ref" reference="Qa15"} for $\alpha=1.5$, $\omega=0.6$ and several values of $c$. For CH solitons one has $c>2\omega$. There is clearly also a lower bound on the velocity for smooth solitary waves, but it is not clear whether it is larger than the limit $c>2\omega$ for CH. Once more such a value cannot be determined with an iterative approach. If one fixes $c$ and $\alpha$, there does not appear to be an obvious dependence between $\omega$ and the amplitude as in the case of CH, as can be seen on the right of Fig. [3](#Qa15){reference-type="ref" reference="Qa15"}. ![Solitary waves ([\[Q\]](#Q){reference-type="ref" reference="Q"}) for $\kappa_{2}=1/3$ and $\alpha=1.5$, on the left for $\omega=0.6$ and several values of $c$, on the right for $c=2$ and several values of $\omega$.
](Qa15.eps "fig:"){#Qa15 width="49%"} ![Solitary waves ([\[Q\]](#Q){reference-type="ref" reference="Q"}) for $\kappa_{2}=1/3$ and $\alpha=1.5$, on the left for $\omega=0.6$ and several values of $c$, on the right for $c=2$ and several values of $\omega$. ](Qa15k.eps "fig:"){#Qa15 width="49%"} Note that the precise fall-off of the solitary waves for $|x|\to\infty$ appears to be unknown; it should be the same $|x|^{-(1+\alpha)}$ rate as for solitary waves of the fractional Korteweg-de Vries and nonlinear Schrödinger equations, see [@FL]. But it is numerically difficult to determine the exact fall-off rate on the real line via an approximation on a torus. # Stability of the solitary waves In this section we study the stability of the solitary waves under small perturbations. For various perturbations, the solitary waves appear to be stable, providing numerical evidence for the first part of the Main Conjecture. ## Numerical approach To study the time evolution of solutions to the fractional CH equation ([\[fCH\]](#fCH){reference-type="ref" reference="fCH"}), we use the same spatial discretisation as in the previous section, i.e., a standard FFT approach. The time integration is done with the well-known explicit Runge-Kutta method of fourth order. The accuracy of the time integration is controlled via the conserved energy ([\[energy\]](#energy){reference-type="ref" reference="energy"}), which will numerically depend on time due to unavoidable numerical errors. As discussed in [@etna], this numerically computed energy will typically overestimate the accuracy of the numerical solution by 2-3 orders of magnitude. Thus we will always track in the following the relative energy and the DFT coefficients to control the numerical resolution. As a test we propagate the solitary wave for $c=2$, $\alpha=1.5$, $\omega=3/5$ and use $N_{t}=10^{4}$ time steps for $t\in[0,1]$. The relative energy is conserved during the whole computation to the order of $10^{-15}$.
The DFT coefficients decrease to machine precision (here $10^{-16}$), which means that the solution is well resolved both in space and in time. The difference between the solitary wave travelling with velocity $c=2$ and the numerically computed solution for $t=1$ is of the order $10^{-15}$. This shows that the code is able to propagate solitary waves with machine precision, as indicated by the numerical conservation of the energy and the DFT coefficients. In addition, it shows that the solitary waves numerically constructed in the previous section are indeed solutions to equation ([\[Q\]](#Q){reference-type="ref" reference="Q"}) with a similar accuracy. ## Perturbed solitary waves We consider perturbations of the solitary waves in the form of perturbed initial data, $$u(x,0) = Q_{c}+A e^{-x^{2}}, \label{gausspert}$$ where $A$ is a small real constant. In all examples below, the DFT coefficients always decrease to machine precision, and the relative energy is conserved to better than $10^{-6}$. First we consider the case $\alpha=1.5$ for $Q_{2}$ and $A=\pm 0.08$. This corresponds to a perturbation of the order of 10%. This is not a small perturbation, but in order to see numerical effects of a perturbation in finite time, it is convenient to consider perturbations that are of the order of a few percent. We use $N_{t}=10^{4}$ time steps for $t\leq40$. In Fig. [5](#Qa15gausspert){reference-type="ref" reference="Qa15gausspert"} we show on the left the solution for $t=40$. The solution appears to be a solitary wave with some radiation propagating towards $-\infty$. This is confirmed by the $L^{\infty}$ norm of the solution on the right of the same figure. After some time the $L^{\infty}$ norm appears to reach an asymptotic value corresponding to a slightly faster soliton. This is due to the fact that we considered a perturbation of almost 10%. The final state is thus a solitary wave with larger mass; the solitary wave appears to be stable.
![Solution to the fractional CH equation for initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}) with $A=0.08$ for $\alpha=1.5$ and $c=2$: on the left the solution for $t=40$, on the right the $L^{\infty}$ norm of the solution in dependence of time. ](fCHsolc2a1508gausst40.eps "fig:"){#Qa15gausspert width="49%"} ![Solution to the fractional CH equation for initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}) with $A=0.08$ for $\alpha=1.5$ and $c=2$: on the left the solution for $t=40$, on the right the $L^{\infty}$ norm of the solution in dependence of time. ](fCHsolc2a1508gaussmax.eps "fig:"){#Qa15gausspert width="49%"} We obtain a similar result if we consider a perturbation of the solitary wave with slightly smaller mass than the unperturbed solitary wave. In Fig. [7](#Qa15mgausspert){reference-type="ref" reference="Qa15mgausspert"}, we choose initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}) with $A=-0.08$ with the same parameters as in Fig. [5](#Qa15gausspert){reference-type="ref" reference="Qa15gausspert"}. The solution at the final time $t=40$ appears to be again a solitary wave plus radiation. This is confirmed by the $L^{\infty}$ norm on the right of the same figure. The final state of the solution appears to be a solitary wave with a slightly smaller mass and velocity than $Q_{2}$. The solitary wave seems to be again stable even against comparatively large perturbations. ![Solution to the fractional CH equation for initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}) with $A=-0.08$ for $\alpha=1.5$ and $c=2$: on the left the solution for $t=40$, on the right the $L^{\infty}$ norm of the solution in dependence of time. 
](fCHsolc2a15m08gausst40.eps "fig:"){#Qa15mgausspert width="49%"} ![Solution to the fractional CH equation for initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}) with $A=-0.08$ for $\alpha=1.5$ and $c=2$: on the left the solution for $t=40$, on the right the $L^{\infty}$ norm of the solution in dependence of time. ](fCHsolc2a15m08gaussmax.eps "fig:"){#Qa15mgausspert width="49%"} It is an interesting question whether a similar behavior can be observed for smaller values of $\alpha$, i.e., for a fractional CH equation with less dispersion. We consider the case $\alpha=0.9$, for which we could construct solitary waves with $c=2$. Here we consider smaller perturbations of the order of 1% for larger times than before. We apply $N_{t}=2\times10^{4}$ time steps for $t\leq100$ to initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}). In Fig. [9](#Qa09gausspert){reference-type="ref" reference="Qa09gausspert"} we show on the left the $L^{\infty}$ norm of the solution for $A=0.01$ and on the right for $A=-0.01$. In both cases the final state of the solution appears to be a solitary wave plus radiation. The small oscillations in the $L^{\infty}$ norm are due to the fact that we are working on a torus where the radiation can reenter the computational domain on the other side and that we determine the maximum of the solution on grid points. ![$L^{\infty}$ norm of the solution to the fractional CH equation for initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}) for $\alpha=0.9$ and $c=2$: on the left for $A=0.01$, on the right for $A=-0.01$.
](fCHsolc2a09001gaussmax.eps "fig:"){#Qa09gausspert width="49%"} ![$L^{\infty}$ norm of the solution to the fractional CH equation for initial data of the form ([\[gausspert\]](#gausspert){reference-type="ref" reference="gausspert"}) for $\alpha=0.9$ and $c=2$: on the left for $A=0.01$, on the right for $A=-0.01$. ](fCHsolc2a09m001gaussmax.eps "fig:"){#Qa09gausspert width="49%"} The picture is very similar for different types of perturbations. We consider the solution for initial data of the form $u(x,0) = \lambda Q_{2}(x)$ for $\alpha=0.9$ and real $\lambda$ close to 1. In Fig. [11](#Qa09lapert){reference-type="ref" reference="Qa09lapert"}, the $L^{\infty}$ norms of the solution in dependence of time are shown, on the left for $\lambda=0.99$, on the right for $\lambda=1.01$. In both cases the final state appears to be a solitary wave of slightly different mass. ![$L^{\infty}$ norm of the solution to the fractional CH equation for initial data of the form $u(x,0)=\lambda Q_{2}(x)$ for $\alpha=0.9$ and $c=2$: on the left for $\lambda=0.99$, on the right for $\lambda=1.01$. ](fCHsolc2a09099max.eps "fig:"){#Qa09lapert width="49%"} ![$L^{\infty}$ norm of the solution to the fractional CH equation for initial data of the form $u(x,0)=\lambda Q_{2}(x)$ for $\alpha=0.9$ and $c=2$: on the left for $\lambda=0.99$, on the right for $\lambda=1.01$. ](fCHsolc2a09101max.eps "fig:"){#Qa09lapert width="49%"} # Localized initial data In this section we study the long time behavior of initial data from the Schwartz class of smooth rapidly decreasing functions. We are interested in whether solitary waves appear in the solution asymptotically, as expected from the *soliton resolution conjecture*, or whether there is a *blow-up*, a loss of regularity of the solution in finite time. Concretely we will study Gaussian initial data, $$u(x,0) = A\exp(-x^{2}) \label{gauss},$$ where $A>0$ is constant. We will use $N_{t}=10^{4}$ time steps and $N=2^{14}$ Fourier modes for $x\in 3[-\pi,\pi]$.
We first study the case $\alpha=1.5$. Small initial data will be simply radiated towards infinity. But if we take initial data of the form ([\[gauss\]](#gauss){reference-type="ref" reference="gauss"}) with $A=1$, we get the solution shown in Fig. [12](#fCHa15gausswater){reference-type="ref" reference="fCHa15gausswater"}. The initial hump breaks up into several humps. At least one of them, possibly as many as three, appear to be solitary waves. It seems that the solution indeed decomposes into solitary waves and radiation. The precise number of solitary waves appearing for large times is unknown. ![Solution to the fractional CH equation with $\alpha=1.5$ for initial data $u(x,0)=\exp(-x^{2})$. ](fCHa15gausswater.eps){#fCHa15gausswater width="\\textwidth"} The interpretation of the largest hump as a solitary wave is confirmed by the $L^{\infty}$ norm of the solution shown in Fig. [14](#fCHa15gauss){reference-type="ref" reference="fCHa15gauss"} on the right. It seems to reach a constant asymptotic value as expected for a solitary wave. Since the solitary waves for the fractional CH equation, unlike the fKdV solitary waves, do not have a simple scaling with the velocity $c$, a fit of the hump to the solitary waves is not obvious. The solution for $t=20$ is shown on the left of Fig. [14](#fCHa15gauss){reference-type="ref" reference="fCHa15gauss"}. ![Solution to the fractional CH equation with $\alpha=1.5$ for initial data $u(x,0)=\exp(-x^{2})$, on the right the $L^{\infty}$ norm, on the left the solution for $t=20$. ](fCHa15gausst20.eps "fig:"){#fCHa15gauss width="49%"} ![Solution to the fractional CH equation with $\alpha=1.5$ for initial data $u(x,0)=\exp(-x^{2})$, on the right the $L^{\infty}$ norm, on the left the solution for $t=20$. ](fCHa15gaussmax.eps "fig:"){#fCHa15gauss width="49%"} For smaller values of $\alpha$, the picture can change.
If we take $\alpha=0.9$ and initial data of the form ([\[gauss\]](#gauss){reference-type="ref" reference="gauss"}) with $A=0.5$, the initial hump will be radiated away. The solution for $t=40$ is shown on the left of Fig. [16](#fCHa0905gauss){reference-type="ref" reference="fCHa0905gauss"}. The $L^{\infty}$ norm of the solution on the right of the same figure appears to decrease monotonically with time. ![Solution to the fractional CH equation with $\alpha=0.9$ for initial data $u(x,0)=0.5\exp(-x^{2})$, on the left the solution for $t=40$, on the right the $L^{\infty}$ norm. ](fCHa0905gausst40.eps "fig:"){#fCHa0905gauss width="49%"} ![Solution to the fractional CH equation with $\alpha=0.9$ for initial data $u(x,0)=0.5\exp(-x^{2})$, on the left the solution for $t=40$, on the right the $L^{\infty}$ norm. ](fCHa0905gaussmax.eps "fig:"){#fCHa0905gauss width="49%"} However, for initial data of the form ([\[gauss\]](#gauss){reference-type="ref" reference="gauss"}) of larger mass, i.e., larger values of $A$, we do not find solitary waves in the long time behavior of the solution. Instead, for $A=1$, a cusp appears to form in finite time, for $t\sim 1.7667$ in this case. The solution at this time can be seen in Fig. [18](#fCHa09gauss){reference-type="ref" reference="fCHa09gauss"} on the left. ![Solution to the fractional CH equation with $\alpha=0.9$ for initial data $u(x,0)=\exp(-x^{2})$, on the left the solution for $t=1.7667$, on the right the DFT coefficients for this solution. ](fCHa09gausst17667.eps "fig:"){#fCHa09gauss width="49%"} ![Solution to the fractional CH equation with $\alpha=0.9$ for initial data $u(x,0)=\exp(-x^{2})$, on the left the solution for $t=1.7667$, on the right the DFT coefficients for this solution. ](fCHa09gausst17667fourier.eps "fig:"){#fCHa09gauss width="49%"} It is not only the figure that indicates a cusp formation; this is also confirmed by the Fourier coefficients (more precisely the DFT of $u$) on the right of the same figure.
The algebraic decay of these coefficients with the index $|k|$ indicates that a singularity of the analytic continuation of the function $u$ to the complex plane will hit the real axis at the critical time. It is well known that a branch-point singularity in the complex plane of the form $u\sim (z-z_{j})^{\mu_{j}}$, $\mu_{j}\notin \mathbb{Z}$, with $z_{j}=\alpha_{j}-i\delta_{j}$ in the lower half plane ($\delta_{j}\geq 0$) implies for $k\to\infty$ the following asymptotic behavior of the Fourier coefficients (see e.g. [@asymbook]), $$\hat{u}\sim \sqrt{2\pi}\mu_{j}^{\mu_{j}+\frac{1}{2}}e^{-\mu_{j}}\frac{(-i)^{\mu_{j}+1}}{k^{\mu_{j}+1}} e^{-ik\alpha_{j}-k\delta_{j}} \label{fourierasym}.$$ For a single such singularity with positive $\delta_{j}$, the modulus of the Fourier coefficients decreases exponentially for large $k$ until $\delta_{j}=0$, when the modulus of the Fourier coefficients has an algebraic dependence on $k$. As first shown in [@SSF], one can use ([\[fourierasym\]](#fourierasym){reference-type="ref" reference="fourierasym"}) to numerically characterize the singularity as discussed in detail in [@KR] (the DFT coefficients behave similarly to the Fourier transform). Essentially a least squares method is applied to $\ln|\hat{u}|$ to fit the parameters for $k>100$. The reader is referred to [@KR] for details. For the case shown in Fig. [18](#fCHa09gauss){reference-type="ref" reference="fCHa09gauss"}, we find $\mu=0.5007$. This means that a square-root-type singularity is observed in this case, which provides numerical evidence for the second part of the main conjecture in the introduction. Since global existence in time does not appear to hold for solutions to initial data of sufficiently large mass, no solitary waves are observed in this case. Note that this is in apparent contradiction to the stability of the solitary waves observed numerically in the previous section.
However, it has to be remembered that the solitary waves have a slow algebraic fall-off towards infinity whereas we consider exponentially decaying data in this section. For $L^{2}$-critical generalized Korteweg-de Vries equations, a blow-up in finite time is only observed if the initial data are sufficiently rapidly decreasing, see [@MMR] and references therein. The situation appears to be similar for fCH solutions for sufficiently small $\alpha$. # Dispersive shock waves In this section we study the appearance of dispersive shock waves (DSWs) in fCH solutions. A convenient way to study the formation of zones of rapid oscillations in the solutions of dispersive PDEs is to consider the solution for large times on large scales. This can be done by introducing a small parameter $\epsilon\ll1$ and rescaling $t$ and $x$ according to $t\mapsto t/\epsilon$, $x\mapsto x/\epsilon$. This leads for equation ([\[fCH\]](#fCH){reference-type="ref" reference="fCH"}) to $$\label{fCHe} u_t+ \kappa_1 u_x + 3 uu_x + \epsilon^{\alpha}D^{\alpha} u_t = -\kappa_2 \epsilon^{\alpha}[2D^{\alpha} (u u_x)+ uD^{\alpha}u_x ],$$ where we have kept the same notation as for the case $\epsilon=1$. Thus equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) is simply equation ([\[fCH\]](#fCH){reference-type="ref" reference="fCH"}) with $D^{\alpha}$ replaced by $\epsilon^{\alpha} D^{\alpha}$. In the formal limit $\epsilon\to0$, equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) reduces to the Hopf equation $$u_t+ \kappa_1 u_x + 3 uu_x = 0, \label{hopf}$$ where the term linear in $u_{x}$ can be absorbed by a Galilei transformation. The Hopf equation is known to develop a gradient catastrophe for hump-like initial data in finite time.
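The breaking time of the Hopf equation can be read off from the characteristics: after removing the term $\kappa_1 u_x$ by a Galilei transformation, $u$ is constant along the lines $x=\xi+3u_0(\xi)t$, which first cross at $t_c=-1/\min_\xi\left(3u_0'(\xi)\right)$. The following minimal sketch (Python with NumPy; an illustration of ours, not the code used for the computations in this paper) evaluates this for $u_0(x)=\mbox{sech}^{2}x$ and reproduces the critical time $t_{c}\sim 0.433$ quoted below:

```python
import numpy as np

# Gradient catastrophe of the Hopf equation u_t + 3 u u_x = 0:
# u is constant along the characteristics x = xi + 3 u0(xi) t,
# which first cross at t_c = -1 / min_xi (3 u0'(xi)).
x = np.linspace(-10.0, 10.0, 200_001)
u0 = 1.0 / np.cosh(x) ** 2        # sech^2 hump
du0 = np.gradient(u0, x)          # numerical derivative of u0
t_c = -1.0 / (3.0 * du0.min())
```

For this initial datum the minimum of $3u_0'$ is $-4/\sqrt{3}$, so that $t_c=\sqrt{3}/4\approx 0.4330$.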
Dispersive regularisations of this equation such as ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) are expected to lead to solutions with zones of rapid oscillations in the vicinity of the shocks of the Hopf solution for the same initial data. In [@GK], we have studied numerically the onset of oscillations in solutions of the CH equation. It was conjectured that a special solution to the second equation in the Painlevé I hierarchy, see for instance [@GKK], provides an asymptotic description of the break-up of CH solutions in this case. For larger times the oscillatory zone was studied in [@AGK]. A conjecture to describe the leading edge of the oscillatory zone in terms of a Painlevé transcendent was given. We will study below similar examples to those in [@GK; @AGK] for fCH, with initial data of the form $u(x,0)=\mbox{sech}^{2}x$ for several values of $\epsilon$. In Fig. [22](#fChsech1e2){reference-type="ref" reference="fChsech1e2"}, we show the fCH solution for $\alpha=1.5$ and $\epsilon=10^{-2}$ for several values of $t$. We use $N=2^{14}$ Fourier modes and $N_{t}=10$ time steps for $t\leq1$. A first oscillation forms for $t\sim 0.4$ (the critical time for the Hopf solution is $t_{c}\sim 0.433$), then a well-defined zone of oscillations, also called the Whitham zone, appears. ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=1.5$, $\omega=0.6$ and $\epsilon=10^{-2}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for several values of time (on the top for $t=0$, $t=0.35$, in the bottom for $t=0.7$, $t=1$). ](fCHa15seche2t0.eps "fig:"){#fChsech1e2 width="49%"} ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=1.5$, $\omega=0.6$ and $\epsilon=10^{-2}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for several values of time (on the top for $t=0$, $t=0.35$, in the bottom for $t=0.7$, $t=1$).
](fCHa15seche2t35.eps "fig:"){#fChsech1e2 width="49%"}\ ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=1.5$, $\omega=0.6$ and $\epsilon=10^{-2}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for several values of time (on the top for $t=0$, $t=0.35$, in the bottom for $t=0.7$, $t=1$). ](fCHa15seche2t7.eps "fig:"){#fChsech1e2 width="49%"} ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=1.5$, $\omega=0.6$ and $\epsilon=10^{-2}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for several values of time (on the top for $t=0$, $t=0.35$, in the bottom for $t=0.7$, $t=1$). ](fCHa15seche2t1.eps "fig:"){#fChsech1e2 width="49%"} The Whitham zone becomes more defined and more oscillatory the smaller $\epsilon$ is. Thus there is no strong limit $\epsilon\to0$ for DSWs. Note that DSWs in CH solutions are numerically more demanding than the corresponding KdV solutions, for which special integrators exist, see for instance [@etna]. For CH, time integration is more problematic since the dispersive terms are nonlinear, in contrast to KdV. The reduced dispersion in CH compared to KdV, due to the nonlocal term $(1-\partial_{xx})u_{t}$, leads to fewer oscillations in CH than in similar KdV situations, but to stronger gradients. This is amplified in fCH solutions since the dispersion is smaller than in CH. To treat the case $\epsilon=10^{-3}$ for the situation shown in Fig. [22](#fChsech1e2){reference-type="ref" reference="fChsech1e2"}, we apply $N=2^{18}$ Fourier modes and $N_{t}=10^{5}$ time steps. We show the fCH solution for three values of $\epsilon$ in Fig. [25](#fChseche){reference-type="ref" reference="fChseche"} for the same initial data at the same time.
![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=1.5$, $\omega=0.6$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for $t=1$ for $\epsilon=10^{-1}$, $\epsilon=10^{-2}$, $\epsilon=10^{-3}$ from left to right. ](fCHa15seche1t1.eps "fig:"){#fChseche width="32%"} ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=1.5$, $\omega=0.6$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for $t=1$ for $\epsilon=10^{-1}$, $\epsilon=10^{-2}$, $\epsilon=10^{-3}$ from left to right. ](fCHa15seche2tf.eps "fig:"){#fChseche width="32%"} ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=1.5$, $\omega=0.6$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for $t=1$ for $\epsilon=10^{-1}$, $\epsilon=10^{-2}$, $\epsilon=10^{-3}$ from left to right. ](fCHa15seche3tf.eps "fig:"){#fChseche width="32%"} For smaller $\alpha$, the dispersion is even weaker. For the same initial data as in Fig. [25](#fChseche){reference-type="ref" reference="fChseche"}, we get for $\alpha=0.9$ and $\epsilon=10^{-1}$ again a DSW as can be seen in Fig. [27](#fChsecha09e1){reference-type="ref" reference="fChsecha09e1"}. As expected the smaller dispersion than in Fig. [25](#fChseche){reference-type="ref" reference="fChseche"} leads to less oscillatory behavior. ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=0.9$, $\omega=0.6$ and $\epsilon=10^{-1}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for $t=1.005$ on the left and $t=1.5$ on the right. ](fCHa15seche1t1005.eps "fig:"){#fChsecha09e1 width="49%"} ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=0.9$, $\omega=0.6$ and $\epsilon=10^{-1}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for $t=1.005$ on the left and $t=1.5$ on the right. 
](fCHa15seche1t15.eps "fig:"){#fChsecha09e1 width="49%"} However, the situation is different for smaller $\epsilon$ than in Fig. [27](#fChsecha09e1){reference-type="ref" reference="fChsecha09e1"}, $\epsilon=10^{-2}$, as can be seen in Fig. [29](#fChsecha09e2){reference-type="ref" reference="fChsecha09e2"}. We use $N=2^{15}$ Fourier modes and $N_{t}=10^{4}$ time steps for $t\leq 0.7$. There is once more a DSW forming, but the first peak appears to develop in finite time into a cusp, see the left of Fig. [29](#fChsecha09e2){reference-type="ref" reference="fChsecha09e2"}. It is not surprising that a smaller $\epsilon$ leads to the cusp formation already observed in the previous section for initial data of sufficient mass. Since the formal rescaling of $x$ and $t$ with $\epsilon$ leads also to a rescaling of the mass, the same initial data will have more mass in the original setting ([\[fCH\]](#fCH){reference-type="ref" reference="fCH"}) the smaller $\epsilon$ is. The cusp formation is confirmed by the DFT coefficients on the right of the figure, which gives further numerical evidence for the second part of the Main Conjecture. A fitting of the coefficients to ([\[fourierasym\]](#fourierasym){reference-type="ref" reference="fourierasym"}) indicates an exponent $\mu\sim 0.68$. This is slightly larger than the factor $1/2$ found in the previous section, but the accuracy in identifying the exponent $\mu$ is always less than for the exponent $\delta$ in ([\[fourierasym\]](#fourierasym){reference-type="ref" reference="fourierasym"}), in particular here where the DSW already leads to a slower decay of the DFT coefficients with the index $k$. Thus there is strong evidence for cusp formation, but the exact character of the singularity needs to be justified analytically.
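The singularity tracing via ([\[fourierasym\]](#fourierasym){reference-type="ref" reference="fourierasym"}) amounts to a linear least squares fit of $\ln|\hat{u}_{k}|\approx c-(\mu+1)\ln k-\delta k$ over a range of wavenumbers. The following sketch (Python with NumPy; our own illustration with a synthetic test function, not the code used for the figures) applies such a fit to $u(x)=|\sin(x/2)|$, whose corner at $x=0$ corresponds to a singularity on the real axis with $\mu=1$ and $\delta=0$:

```python
import numpy as np

def fit_singularity(u_hat, k_lo, k_hi):
    """Least squares fit of ln|u_hat[k]| = c - (mu+1) ln k - delta k."""
    k = np.arange(k_lo, k_hi)
    y = np.log(np.abs(u_hat[k_lo:k_hi]))
    A = np.column_stack([np.ones_like(k, dtype=float), np.log(k), k])
    c, slope_log, slope_lin = np.linalg.lstsq(A, y, rcond=None)[0]
    return -slope_log - 1.0, -slope_lin   # mu, delta

N = 2 ** 14
x = 2.0 * np.pi * np.arange(N) / N
u = np.abs(np.sin(x / 2.0))               # corner at x = 0: mu = 1, delta = 0
u_hat = np.fft.fft(u) / N
mu, delta = fit_singularity(u_hat, 100, 500)   # mu near 1, delta near 0
```

In the time evolution, the fitted $\delta$ tracks the distance of the singularity from the real axis, and the critical time is reached when $\delta$ vanishes within numerical precision.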
![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=0.9$, $\omega=0.6$ and $\epsilon=10^{-2}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for $t=0.6819$ on the left and the DFT coefficients on the right. ](fCHa09seche2t06905.eps "fig:"){#fChsecha09e2 width="49%"} ![Solution to the fractional CH equation ([\[fCHe\]](#fCHe){reference-type="ref" reference="fCHe"}) with $\alpha=0.9$, $\omega=0.6$ and $\epsilon=10^{-2}$ for initial data $u(x,0)=\mbox{sech}^{2} x$ for $t=0.6819$ on the left and the DFT coefficients on the right. ](fCHa09seche2t06819fourier.eps "fig:"){#fChsecha09e2 width="49%"} # Outlook In this paper, we have started a numerical study of the fractional CH equation. Solitary waves have been numerically constructed, and indications have been found that there could be a minimal value of $\alpha$ for given velocity $c$ and positive parameter $\omega$ below which there are no smooth solitary waves. It was shown that the numerically constructed smooth solitary waves are stable. A study of initial data from the Schwartz class of smooth rapidly decreasing functions led to scattering for initial data of small mass. For higher mass and sufficiently large $\alpha$, solitary waves appear for large times, in accordance with the soliton resolution conjecture. However, for smaller values of $\alpha$, cusps can appear in finite time for such initial data. We also studied the formation of dispersive shock waves. An interesting question raised by this study is to identify the region of the parameter space of $\alpha$, $\omega$ and $c$ for which smooth solitary waves exist. The fall-off behavior of these solutions remains to be proven. The orbital stability of these solitary waves is a further question to be addressed analytically.
Of particular interest is the question, already for the CH equation and even more so for fCH, for which initial data a globally smooth solution in time can be expected, and for which data a blow-up in finite time is to be expected. The type of blow-up appears to be a gradient catastrophe, but this needs to be confirmed analytically. The formation of DSWs was shown numerically. In [@GK; @AGK], the onset of the oscillations as well as the boundary of the Whitham zone was conjectured to be asymptotically given by certain Painlevé transcendents. It is an interesting question whether there are fractional ODEs that play a role in this context for the fCH equation. It will be the subject of further research to address such questions. S. Abenda, T. Grava and C. Klein, Numerical Solution of the Small Dispersion Limit of the Camassa-Holm and Whitham Equations and Multiscale Expansions, SIAM J. Appl. Math. 70(8) (2010) 2797--2821. M.S. Alber, R. Camassa, D.D. Holm, J.E. Marsden, The geometry of peaked solitons and billiard solutions of a class of integrable PDE's, Lett. Math. Phys. 32 (1994) 137--151. R. Artebrant, H.J. Schroll, Numerical simulation of Camassa-Holm peakons by adaptive upwinding, Appl. Numer. Math. 56(5) (2006) 695--711. R. Camassa, D. Holm, An integrable shallow water equation with peaked solitons, Phys. Rev. Lett. (1993) 1661--1664. R. Camassa, J. Huang, L. Lee, On a completely integrable numerical scheme for a nonlinear shallow-water wave equation, J. Non. Math. Phys. 12(sup1) (2005) 146--162. G. Carrier, M. Krook, C. Pearson, Functions of a Complex Variable, Theory and Technique, SIAM, 2005. G.M. Coclite, K.H. Karlsen, N.H. Risebro, A Convergent Finite Difference Scheme for the Camassa--Holm Equation with General $H^1$ Initial Data, SIAM J. Num. Anal. 46(3) (2008) 1554--1579. G.M. Coclite, K.H. Karlsen, N.H. Risebro, An explicit finite difference scheme for the Camassa-Holm equation, (2008) 681--732. A. Constantin, J.
Escher, Wave breaking for nonlinear nonlocal shallow water equations, Acta Math. 181(2) (1998) 229--243. A. Constantin, B. Kolev, On the geometric approach to the motion of inertial mechanical systems, J. Phys. A Math. 35(32) (2002) R51. A. Constantin, B. Kolev, Geodesic flow on the diffeomorphism group of the circle, Comment. Math. Helvetici 78 (2003) 787--804. A. Constantin, W. Strauss, Stability of peakons, Commun. Pure Appl. Math. (2000) 603--610. A. Constantin, W. Strauss, Stability of Camassa-Holm solitons, J. Nonlinear Sci. (2002) 412--422. H.R. Dullin, G.A. Gottwald, D.D. Holm, An integrable shallow water equation with linear and nonlinear dispersion, Phys. Rev. Lett. 87(19) (2001) 194501. N. Duruk Mutlubas, On the Cauchy problem for the fractional Camassa--Holm equation, Monatsh. Math. 190(4) (2019) 755--768. H.A. Erbay, S. Erbay, A. Erkip, Derivation of the Camassa--Holm equations for elastic waves, Phys. Lett. A 379(12--13) (2015) 956--961. L. Fan, H. Gao, The Cauchy problem for generalized fractional Camassa--Holm equation in Besov space, Monatsh. Math. 195 (2021) 451--475. L. Fan, H. Gao, J. Wang, W. Yan, The Cauchy problem for fractional Camassa-Holm equation in Besov space, Nonlinear Anal. Real World Appl. 61 (2021) 103348. B.-F. Feng, K. Maruno, Y. Ohta, A self-adaptive moving mesh method for the Camassa-Holm equation, J. Comput. Appl. Math. 235(1) (2010) 229--243. M. Fisher, J. Schiff, The Camassa-Holm equation: conserved quantities and the initial value problem, Phys. Lett. A 259(5) (1999) 371--376. A. Fokas, B. Fuchssteiner, Symplectic structures, their Bäcklund transformations and hereditary symmetries, Physica D 4 (1981) 44--66. R.L. Frank, E. Lenzmann, On the uniqueness and non-degeneracy of ground states of $(-\Delta)^{s}Q + Q - Q^{\alpha+1} = 0$ in $\mathbb{R}$, Acta Math. (http://arxiv.org/abs/1009.4042). T. Grava, C.
Klein, Numerical study of a multiscale expansion of KdV and Camassa-Holm equation, in Integrable Systems and Random Matrices, ed. by J. Baik, T. Kriecherbauer, L.-C. Li, K.D.T-R. McLaughlin, C. Tomei, Contemp. Math. 458 (2008) 81--99. T. Grava, A. Kapaev, C. Klein, On the tritronquée solutions of P$_I^2$, Constr. Approx. 41 (2015) 425--466. H. Holden, X. Raynaud, Convergence of a finite difference scheme for the Camassa--Holm equation, SIAM J. Num. Anal. 44(4) (2006) 1655--1680. R. Johnson, On solutions of the Camassa--Holm equation, Proc. Royal Soc. A 456(2035) (2003) 1687--1708. H. Kalisch, J. Lenells, Numerical study of traveling-wave solutions for the Camassa--Holm equation, Chaos Solit. Fractals 25(2) (2005) 287--298. C. Klein, Fourth order time-stepping for low dispersion Korteweg-de Vries and nonlinear Schrödinger equation, Electron. Trans. Numer. Anal. 29 (2008) 116--135. C. Klein, K. McLaughlin, N. Stoilov, High precision numerical approach for Davey-Stewartson II type equations for Schwartz class initial data, Proc. Royal Soc. A 476(2239) (2020) 20190864. C. Klein, K. Roidot, Numerical study of shock formation in the dispersionless Kadomtsev-Petviashvili equation and dispersive regularizations, Physica D 265 (2013) 1--25. C. Klein, J.-C. Saut, A numerical approach to blow-up issues for dispersive perturbations of Burgers' equation, Physica D 295 (2015) 46--65. S. Kouranbaeva, The Camassa--Holm equation as a geodesic flow on the diffeomorphism group, J. Math. Phys. 40(2) (1999) 857--868. J. Lenells, Stability of periodic peakons, Int. Math. Res. Notices, 10 (2004) 485--499. J. Lenells, A variational approach to the stability of periodic peakons, J. Math. Phys. 11(2) (2004) 151--163. J. Lenells, Traveling wave solutions of the Camassa--Holm equation, J. Differ. Equ. 217(2) (2005) 393--430. Y. Martel, F. Merle, P. Raphaël, Blow up for the critical gKdV equation I: dynamics near the soliton, Acta Math. 212 (2014) 59--140. H. P.
McKean, Breakdown of the Camassa-Holm equation, Comm. Pure Appl. Math. 57 (2004) 416--418. Y. Saad, M. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 7(3) (1986) 856--869. C. Sulem, P. Sulem, H. Frisch, Tracing complex singularities with spectral methods, J. Comp. Phys. 50 (1983) 138--161. Y. Xu, C.-W. Shu, A local discontinuous Galerkin method for the Camassa--Holm equation, SIAM J. Numer. Anal. 46(4) (2008) 1998--2021. [^1]: This work was partially supported by the ANR-17-EURE-0002 EIPHI, the Bourgogne Franche-Comté Region, the European fund FEDER, and by the European Union Horizon 2020 research and innovation program under the Marie Sklodowska-Curie RISE 2017 grant agreement no. 778010 IPaDEGAN.
--- abstract: | The generalized Ramsey number $f(n, p, q)$ is the smallest number of colors needed to color the edges of the complete graph $K_n$ so that every $p$-clique spans at least $q$ colors. Erdős and Gyárfás showed that $f(n, p, q)$ grows linearly in $n$ when $p$ is fixed and $q=q_{\text{lin}}(p):=\binom p2-p+3$. Similarly, they showed that $f(n, p, q)$ is quadratic in $n$ when $p$ is fixed and $q=q_{\text{quad}}(p):=\binom p2-\frac p2+2$. In this note we improve on the known estimates for $f(n, p, q_{\text{lin}})$ and $f(n, p, q_{\text{quad}})$. Our proofs involve establishing a significant strengthening of a previously known connection between $f(n, p, q)$ and another extremal problem first studied by Brown, Erdős and Sós, as well as building on some recent progress on this extremal problem by Delcourt and Postle and by Shangguan. Also, our upper bound on $f(n, p, q_{\text{lin}})$ follows from an application of the recent forbidden submatchings method of Delcourt and Postle. address: - Department of Mathematics, Western Michigan University, Kalamazoo, MI, USA - Department of Mathematics, University of Wisconsin-Eau Claire, Eau Claire, WI, USA - Department of Mathematics, Western Michigan University, Kalamazoo, MI, USA author: - Patrick Bennett - Ryan Cushman - Andrzej Dudek bibliography: - quad.bib title: Generalized Ramsey numbers at the linear and quadratic thresholds --- # Introduction Erdős and Shelah [@E75] first considered the following generalization of the classical Ramsey problem. **Definition 1**. *Fix integers $p, q$ such that $p \ge 3$ and $2 \le q \le \binom p2$. A *$(p, q)$-coloring* of $K_n$ is a coloring of the edges of $K_n$ such that every $p$-clique has at least $q$ distinct colors among its edges. The generalized Ramsey number $f(n, p, q)$ is the minimum number of colors such that $K_n$ has a $(p, q)$-coloring.* Erdős and Gyárfás [@EG97] systematically studied $f(n, p, q)$ for fixed $p, q$ as $n \rightarrow \infty$.
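A small brute-force check of Definition 1 (our own illustration, not taken from the literature cited here): for odd $n$, coloring the edge $\{i,j\}$ of $K_n$ by $(i+j) \bmod n$ is a proper edge coloring, so every triangle spans three colors; this gives a $(3,3)$-coloring of $K_n$ with $n$ colors, in line with $f(n,3,3)=n+O(1)$.

```python
from itertools import combinations

def is_pq_coloring(n, color, p, q):
    """Check that every p-clique of K_n spans at least q edge colors.
    `color` maps an edge (i, j) with i < j to its color."""
    return all(
        len({color[e] for e in combinations(S, 2)}) >= q
        for S in combinations(range(n), p)
    )

n = 11
# Two edges of a triangle share a vertex, so their colors (i+j) mod n differ.
coloring = {(i, j): (i + j) % n for i, j in combinations(range(n), 2)}
ok = is_pq_coloring(n, coloring, 3, 3)   # True: every triangle is rainbow
```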
In this paper all asymptotic statements are as $n \rightarrow \infty$. Among other results, Erdős and Gyárfás [@EG97] proved that for arbitrary $p$ and $$q=q_{\text{lin}}(p):=\binom p2-p+3,$$ $f(n,p,q)$ is linear, but $f(n,p,q-1)$ is sublinear. Similarly, they showed in [@EG97] that for $$q=q_{\text{quad}}(p):=\binom p2-\lfloor p/2\rfloor+2,$$ $f(n,p,q)$ is quadratic, but $f(n,p,q-1)$ is subquadratic. Thus for fixed $p$, we call the value $q_{\text{lin}}$ the *linear threshold* and $q_{\text{quad}}$ the *quadratic threshold*. The main goal of this note is to estimate $f(n, p, q)$ when $q$ is at the linear or quadratic threshold. In terms of explicit general bounds, we prove the following. **Theorem 1**. *For all $p \ge 3$ we have $$\label{eqn:estlin} \frac{3p-7}{4p-10}n +o(n) \le f(n, p, q_{\text{lin}}) \le n+o(n).$$ For even $p \ge 6$ we have $$\label{eqn:estquad} \frac{2p-7}{5p-18}n^2 +o(n^2) \le f(n, p, q_{\text{quad}}) \le \frac 5{12} n^2+o(n^2).$$* Some of the content of Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"} was also recently obtained independently by Gómez-Leos, Heath, Parker, Schwieder, and Zerbib [@GHPSZ]. Since the initial investigation by Erdős and Gyárfás [@EG97], the asymptotic behavior of $f(n,p,q)$ has attracted a considerable amount of attention. See, for example, the recent paper of the first author, third author and English [@BDE22] for some history of the problem. However, except for the trivial case of $f(n, 3, 3) = n+O(1)$, there have only been two results where $f(n,p,q)$ is known with a $(1+o(1))$ multiplicative error. Erdős and Gyárfás [@EG97] stated that it "can be easily determined" that $$\label{eqn:6,14} f(n, 6, 14)=\frac{5}{12} n^2+O(n).$$ More recently, the present authors with Prałat [@BCDP22] proved that $f(n, 4, 5) = \frac 56 n + o(n)$. 
In this note we provide a proof for [\[eqn:6,14\]](#eqn:6,14){reference-type="eqref" reference="eqn:6,14"} and also obtain $f(n, 6, 14)$ exactly when $n \equiv 1, 4 \pmod{12}$ (see Theorem [Theorem 6](#thm:6,14){reference-type="ref" reference="thm:6,14"}). We also obtain two more explicit and asymptotically sharp estimates for generalized Ramsey numbers at the quadratic threshold. **Theorem 2**. *We have $$f(n, 8, 26) = \frac{9}{22} n^2 +o(n^2) \quad\text{and}\quad f(n, 10, 42) = \frac{5}{12}n^2+o(n^2).$$* The proofs of Theorems [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"} and [Theorem 2](#thm:8,26){reference-type="ref" reference="thm:8,26"} will involve establishing certain connections between $f(n, p, q)$ and the following extremal problem first studied by Brown, Erdős and Sós [@BES1973]. **Definition 2**. *Let $\mathcal{H}$ be an $r$-uniform hypergraph. A *$(s, k)$-configuration* in $\mathcal{H}$ is a set of $s$ vertices inducing $k$ or more edges. We say $\mathcal{H}$ is *$(s, k)$-free* if it has no $(s, k)$-configuration. Let $F^{(r)}(n; s, k)$ be the largest possible number of edges in an $(s, k)$-free $r$-uniform hypergraph with $n$ vertices. In terms of classical extremal numbers, $$F^{(r)}(n; s, k) = \mbox{ex}_{r}(n, \mathcal{G}_{s, k}),$$ where $\mathcal{G}_{s, k}$ is the family of all $r$-uniform hypergraphs on $s$ vertices and $k$ edges.* In fact, three of the four explicit bounds in Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"} follow by first bounding $f(n, p, q)$ implicitly in terms of some values $F^{(r)}(n; s, k)$ and then using explicit bounds on the latter. Thus, further improvements on the estimates for $F^{(r)}(n; s, k)$ would in some cases automatically give improved estimates for $f(n, p, q)$.
In the case of the quadratic threshold (and even $p$), we actually show that the problem of asymptotically estimating $f(n, p, q)$ completely reduces to asymptotically estimating a certain value $F^{(r)}(n; s, k)$. **Theorem 3**. *For all even $p \ge 6$ we have $$\label{eqn:quadexact} \lim_{n \rightarrow \infty} \frac{f\left(n, p, q_{\text{quad}}\right)}{n^2} = \frac12 - \lim_{n \rightarrow \infty} \frac{F^{(4)}\left(n; p, \frac p2-1\right)}{n^2}.$$ In particular, the limit on the left exists. Furthermore, there exist asymptotically optimal $\left(p, q_{\text{quad}}\right)$-colorings that use no color more than twice.* It is perhaps surprising that we need not use any color more than twice. Indeed a $(p, q_{\text{quad}})$-coloring is allowed to use a color up to $\frac p2 -1$ times, and it would seem more efficient to use the same color as many times as possible. The existence of the limit on the right-hand side of [\[eqn:quadexact\]](#eqn:quadexact){reference-type="eqref" reference="eqn:quadexact"} was proved only recently by Shangguan [@S22]. Shangguan's proof generalizes another recent result by Delcourt and Postle [@DP22], which resolved a conjecture from Brown, Erdős and Sós [@BES1973] regarding the existence of a similar limit involving the function $F^{(3)}$ for 3-uniform hypergraphs. In particular, Delcourt and Postle [@DP22] proved the existence (for fixed $\ell \ge 3$) of the limit $$\lim_{n \rightarrow \infty} \frac{F^{(3)}\left(n; \ell, \ell-2\right)}{n^2}.$$ Interestingly, the proofs of Delcourt and Postle [@DP22] and Shangguan [@S22] do not seem to shed much light (at least, not as much as one might hope) on how to actually find the limits whose existence they establish. However, these limits are known in two cases relevant to us. 
In particular, it is known due to Shangguan and Tamo [@ST2019] that $$\label{eqn:chong} F^{(4)}(n; 8, 3) = \frac{1}{11} n^2 +o(n^2).$$ It is also known due to Glock, Joos, Kim, Kühn, Lichev and Pikhurko [@GJKKLP2022] that $$\label{eqn:oleg} F^{(4)}(n; 10, 4) = \frac{1}{12}n^2 +o(n^2).$$ Thus, Theorem [Theorem 2](#thm:8,26){reference-type="ref" reference="thm:8,26"} follows from Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"} together with [\[eqn:chong\]](#eqn:chong){reference-type="eqref" reference="eqn:chong"} and [\[eqn:oleg\]](#eqn:oleg){reference-type="eqref" reference="eqn:oleg"}. The lower bound on $f(n, p, q_{\text{lin}})$ in Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"} is similar to the quadratic case, in the sense that it follows from an upper bound on $F^{(3)}(n;p, p-2)$. In particular, we prove the following. **Theorem 4**. *For all $p\ge 3$ we have $$\label{eqn:linlower} \liminf_{n \rightarrow \infty} \frac{f\left(n, p, q_{\text{lin}}\right)}{n} \ge 1 - \lim_{n \rightarrow \infty} \frac{F^{(3)}\left(n; p, p-2\right)}{n^2}.$$* In light of Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"}, one might suspect that there is a matching upper bound for [\[eqn:linlower\]](#eqn:linlower){reference-type="eqref" reference="eqn:linlower"}, but unfortunately this is not the case. Indeed, Glock [@G19] proved that $$\lim_{n \rightarrow \infty} \frac{F^{(3)}\left(n; 5, 3\right)}{n^2} = \frac 15,$$ which together with [\[eqn:linlower\]](#eqn:linlower){reference-type="eqref" reference="eqn:linlower"} yields $f(n, 5, 8) \ge \frac 45 n + o(n)$. However, this lower bound is not close to the truth, as we show in our next theorem. **Theorem 5**. *We have $$f(n, 5, 8) \ge \frac 78 n + o(n).$$* ## Comparison to previous bounds Here we compare the bounds in Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"} to what was previously known.
For the linear threshold, Erdős and Gyárfás [@EG97] showed that $$\label{eqn:linprev} (n-1)/(p-2) \le f(n, p, q_{\text{lin}}) \le c_p n$$ for some coefficient $c_p$. The lower bound in [\[eqn:linprev\]](#eqn:linprev){reference-type="eqref" reference="eqn:linprev"} follows from the simple fact that in a $(p, q_{\text{lin}})$-coloring each vertex is incident to at most $p-2$ edges of each color. The upper bound in [\[eqn:linprev\]](#eqn:linprev){reference-type="eqref" reference="eqn:linprev"} follows from the Local Lemma. The constant $c_p$ is not explicitly discussed in [@EG97], but it is easy to see from their proof that $c_p \rightarrow \infty$ as $p \rightarrow \infty$. Thus we see that in Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"}, [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"} is a significant improvement on previous bounds. Indeed, the gap between the coefficients in [\[eqn:linprev\]](#eqn:linprev){reference-type="eqref" reference="eqn:linprev"} grows without bound with $p$, whereas the coefficients in [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"} are always between $3/4$ and $1$. Likewise, for the quadratic threshold (and even $p$) the trivial bounds are $$\frac{\binom n2}{\frac p2 -1} \le f(n, p, q_{\text{quad}}) \le \binom n2.$$ The upper bound follows since we can give every edge its own color, and the lower bound follows from the fact that each color must be used at most $\frac p2 -1$ times. Thus we see that [\[eqn:estquad\]](#eqn:estquad){reference-type="eqref" reference="eqn:estquad"} in Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"} is a significant improvement. ## Structure of the note The structure of this note is as follows. In Section [2](#sec:quad){reference-type="ref" reference="sec:quad"} we address the quadratic threshold.
We start with a proof of a more precise version of [\[eqn:6,14\]](#eqn:6,14){reference-type="eqref" reference="eqn:6,14"}. We go on to prove Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"} and [\[eqn:estquad\]](#eqn:estquad){reference-type="eqref" reference="eqn:estquad"} from Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"}. In Section [3](#sec:lin){reference-type="ref" reference="sec:lin"} we address the linear threshold. There we prove Theorem [Theorem 4](#thm:linlower){reference-type="ref" reference="thm:linlower"}, [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"} from Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"}, and Theorem [Theorem 5](#thm:5,8lower){reference-type="ref" reference="thm:5,8lower"}. # Quadratic Threshold {#sec:quad} In this section we address the quadratic threshold. First we introduce some terminology. Suppose we are given a coloring of the edges of $K_n$. For a set of vertices $S$, let $c(S)$ be the number of colors appearing on edges within $S$, and let $r(S)$ be $\binom{|S|}{2}-c(S)$. We call $r(S)$ the *number of color repetitions (or just repeats) in $S$*. Sometimes it may help the reader to imagine counting $r(S)$ by examining each edge of $S$ in some order and counting a repeat whenever we see a color we have already seen. ## Estimating $f(n, 6, 14)$ In this subsection we state and prove our more precise result for $f(n, 6, 14)$. As we noted, Erdős and Gyárfás [@EG97] stated that $f(n, 6, 14)=\frac{5}{12} n^2+O(n)$ without proof. To help the reader gain familiarity with the concepts in this note, we present a proof of a more precise version of this result. **Theorem 6**.
*We have $$\frac{5}{6}\binom{n}{2} \le f(n, 6, 14) \le \frac{5}{6}\binom{n}{2}+O(n).$$ Furthermore, the lower bound above is the exact value of $f(n, 6, 14)$ whenever $n$ is congruent to $1$ or $4$ modulo $12$.* *Proof.* Starting with the lower bound, suppose we have any $(6,14)$-coloring. Since $\binom{6}{2}=15$, any set of $6$ vertices is allowed to have only one repeat, which implies that we cannot have $3$ edges of the same color. Indeed, taking the union of these edges would be a set of at most $6$ vertices with more than one repeat. This also means that there can be at most one monochromatic path $P_3$ on three vertices, since the union of two of them would be a set of at most 6 vertices with at least two repeats. If our coloring contains a monochromatic $P_3$, then we remove it and get a coloring of $K_{n-3}$. So we have a $(6, 14)$-coloring of $K_{n'}$ with $n' \in \{n-3, n\}$ with no monochromatic $P_3$. Suppose the color $c$ is used twice, say on the (nonincident) edges $ab$ and $xy$. Then the other four edges in $\{a, b, x, y\}$ must all have different colors which are only used once in the whole graph. Let $C_1$ be the set of colors used once and $C_2$ the colors used twice. For each $c \in C_2$ let $K_c$ be the set of 4 vertices consisting of both endpoints of both edges of color $c$. Note that for $c, c' \in C_2$ we have $|K_c \cap K_{c'}| \le 1$, since otherwise $K_c \cup K_{c'}$ is a set of at most 6 vertices with too many repeats. Thus the sets $K_c$ induce edge-disjoint 4-cliques. Hence, if we did not remove any $P_3$, we have that $|C_2|$, the number of such cliques, is at most $$\label{eq:f614-lb-1} \frac16\binom{n}2.$$ On the other hand, if we did remove a $P_3$, this would contribute one additional color to $C_2$ beyond those arising from the sets $K_c$. From our discussion above, we note that this $P_3$ is vertex disjoint from all the $K_c$ and does not share a color with any other edges.
Thus, in this case, $|C_2|$ is at most $$\label{eq:f614-lb-2} 1 + \frac16\binom{n-3}{2}.$$ But since [\[eq:f614-lb-1\]](#eq:f614-lb-1){reference-type="eqref" reference="eq:f614-lb-1"} is at least [\[eq:f614-lb-2\]](#eq:f614-lb-2){reference-type="eqref" reference="eq:f614-lb-2"} for $n\ge 4$, we conclude that the number of colors used is at least $$|C_1| + |C_2| = \left(\binom{n}{2}-2|C_2|\right) + |C_2|=\binom{n}{2}-|C_2| \ge \frac56 \binom{n}2.$$ Thus we are done with the lower bound for Theorem [Theorem 6](#thm:6,14){reference-type="ref" reference="thm:6,14"}. We move on to the upper bound. If $n \equiv 1$ or $4 \mod 12$, then we are guaranteed a perfect packing of $\frac16\binom{n}{2}$ edge-disjoint 4-cliques by Hanani's result [@H61]. Then for each clique in the packing, color two nonadjacent edges the same color and give a unique color to the remaining edges. Since we use exactly 5 colors for each clique, we use exactly $\frac56\binom{n}{2}$ colors to color all the edges. Otherwise, let $i = (n \mod 12)$ (taking $i = 12$ when $n \equiv 0 \pmod{12}$) and partition the vertices into $K_{n-i+1} \cup K_{i-1}$, and find a perfect packing of edge-disjoint 4-cliques for $K_{n-i+1}$. Follow the same coloring as above for the perfect packing, and then color the remaining $(n-i+1)(i-1)+\binom{i-1}{2} = O(n)$ edges with a different color for each edge. Thus, we use $\frac56\binom{n-i+1}{2}+O(n)=\frac56\binom{n}{2} + O(n)$ colors. Notice that in either case, the resulting coloring satisfies the $(6,14)$-coloring condition. If not, then there exists a set $S$ of $6$ vertices with at least two repeats. In our coloring, this means that $S$ must contain two cliques from the packing. But since the cliques must be edge-disjoint, this implies that $|S|\ge 7$, a contradiction.
◻ ## Proof of Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"} {#proof-of-theorem-thmquadexact} In this subsection we will prove Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"} after some discussion. We consider the case of $(p, q)$-coloring, where $$p=2\ell \quad{\text{and}}\quad q= q_{\text{quad}}(p) = \binom{2\ell}{2}-\ell+2.$$ This choice of parameters allows using a color $\ell-1$ times but not $\ell$ times. Erdős and Gyárfás [@EG97] showed that for this choice of parameters $f(n, p, q)$ is quadratic in $n$. Of course the upper bound $f(n, p, q) \le \binom n2$ is trivial, but [@EG97] also gives a nontrivial upper bound of $(1/2-\varepsilon)n^2$ for some $\varepsilon>0$. Specifically, Erdős and Gyárfás [@EG97] used a 4-uniform $(2 \ell, \ell-1)$-free hypergraph $\mathcal{H}$ to give an appropriate coloring. Crucially, every color repetition in the coloring corresponds to a hyperedge of $\mathcal{H}$. Specifically each color is used at most twice, and for any color used on two edges, the union of those two edges is a hyperedge of $\mathcal{H}$. The existence of a suitable hypergraph had already been established by Brown, Erdős and Sós [@BES1973]. The same basic connection between $(p, q)$-coloring near the quadratic threshold and 4-uniform $(s, k)$-free hypergraphs (for the appropriate $s, k$) was exploited by Sárközy and Selkow [@SS03] and again by Conlon, Gishboliner, Levanzov and Shapira [@CGLS23]. However, this connection as it was used in [@CFLS15; @EG97; @SS03] is not precise enough to prove Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"}. Indeed, all these previous results give away a constant factor in the main term of their estimate of $f(n, p, q)$, while we want an asymptotically tight estimate. 
Thus, we will have to significantly refine these previously established connections between the Erdős-Gyárfás coloring problem and the Brown-Erdős-Sós packing problem. Now we will define some functions related to $F^{(4)}(n; 2\ell, \ell-1)$. The first one relaxes the problem to multi-hypergraphs. **Definition 3**. *Let $\mathcal{H}$ be an $r$-uniform *multi-hypergraph*, meaning that $\mathcal{H}$ can have edges with multiplicity (but each edge has $r$ distinct vertices). An *$(s, k)$-configuration* in $\mathcal{H}$ is a set of $s$ vertices inducing $k$ or more edges (counted with multiplicity). Let $G^{(r)}(n; s, k)$ be the largest possible number of edges in an $(s, k)$-free $r$-uniform multi-hypergraph with $n$ vertices.* Next we define a function that restricts the extremal problem for $F^{(4)}(n; 2\ell, \ell-1)$ to a smaller family of hypergraphs. **Definition 4**. *Let $H^{(4)}(n; 2\ell, \ell-1)$ be the largest possible number of edges in a $4$-uniform hypergraph $\mathcal{H}$ on $n$ vertices which satisfies the following conditions:* 1. *[\[item:1\]]{#item:1 label="item:1"} $\mathcal{H}$ is $(2\ell, \ell-1)$-free,* 2. *[\[item:2\]]{#item:2 label="item:2"} $\mathcal{H}$ is $(2i+1, i)$-free for $i=2, \ldots, \ell-2$, and* 3. *[\[item:3\]]{#item:3 label="item:3"} for every vertex $v$ of $\mathcal{H}$, either $v$ has degree $0$ or degree at least $\ell-1$.* Using Shangguan's notation [@S22], our function $H^{(4)}(n; 2\ell, \ell-1)$ defined above is the same as what Shangguan refers to as $f_r^{(t)}(n; er-(e-1)k, e)$, where $4$ is substituted for $r$, $2$ for $k$, $2$ for $t$, and $\ell-1$ for $e$. Since $H^{(4)}$ is a restriction and $G^{(4)}$ is a relaxation, we have $$H^{(4)}(n; 2\ell, \ell-1) \le F^{(4)}(n; 2\ell, \ell-1)\le G^{(4)}(n; 2\ell, \ell-1).$$ Shangguan [@S22] proved (see Lemma 5.5 and the discussion above it) that **Lemma 1** (Lemma 5.5 in [@S22]).
*$$\label{eqn:FH} \lim_{n \rightarrow \infty} \frac{H^{(4)}(n; 2\ell, \ell-1)}{n^2} = \lim_{n \rightarrow \infty} \frac{F^{(4)}(n; 2\ell, \ell-1)}{n^2}.$$* Now we will easily see that $G$ is likewise asymptotically the same as the others. **Claim 1**. *$$\label{eqn:FG} \lim_{n \rightarrow \infty} \frac{G^{(4)}(n; 2\ell, \ell-1)}{n^2} = \lim_{n \rightarrow \infty} \frac{F^{(4)}(n; 2\ell, \ell-1)}{n^2}.$$* *Proof.* Let $\mathcal{H}$ be an extremal multi-hypergraph for the $G^{(4)}(n; 2\ell, \ell-1)$ problem, i.e., $\mathcal{H}$ has $G^{(4)}(n; 2\ell, \ell-1)$ edges and is $(2\ell, \ell-1)$-free. We form a new hypergraph $\mathcal{H}'$ by simply deleting all multiple edges in $\mathcal{H}$. Clearly $\mathcal{H}'$ is $(2\ell, \ell-1)$-free, so it has at most $F^{(4)}(n; 2\ell, \ell-1)$ edges. We show that $\mathcal{H}'$ has almost the same number of edges as $\mathcal{H}$. Indeed, suppose we enumerate all the multiple edges of $\mathcal{H}$, say $\{e_1, \ldots, e_a\}$ where $e_i$ has multiplicity $m_i \ge 2$. It is easy to see that $a \le \ell$ and each $m_i \le \ell$ (otherwise there is a $(2\ell, \ell-1)$-configuration). Thus, we remove at most $\ell^2$ edges from $\mathcal{H}$ to get $\mathcal{H}'$. Consequently, we have $$F^{(4)}(n; 2\ell, \ell-1) \le G^{(4)}(n; 2\ell, \ell-1)\le F^{(4)}(n; 2\ell, \ell-1) + \ell^2$$ and [\[eqn:FG\]](#eqn:FG){reference-type="eqref" reference="eqn:FG"} follows (recall we already knew that the limit on the right exists due to Lemma [Lemma 1](#lem:shangguan){reference-type="ref" reference="lem:shangguan"}). ◻ Now we start to attack the lower bound for the coloring problem. **Claim 2**. *$$f\left(n, p, q_{\text{quad}}\right) \ge \binom n2 - G^{(4)}(n; 2\ell, \ell-1).$$* We will prove this claim directly, by using a $\left( p, q_{\text{quad}}\right)$-coloring to construct a $(2\ell, \ell-1)$-free multi-hypergraph. Towards that end we define the following. **Definition 5**. *Consider any coloring $C$ of the edges of $K_n$. 
We say a 4-uniform hypergraph $\mathcal{H}$ is a *repeat multi-hypergraph* for the coloring $C$ if it is formed as follows. $\mathcal{H}$ has the same vertex set as $K_n$. For each color $c$ used in the coloring, let $E(c)\neq \emptyset$ be the set of edges of color $c$ and let $e_c$ be some particular (arbitrary) edge of color $c$. Then $\mathcal{H}$ will have all the edges $\{e \cup e_c: e \in E(c) \setminus\{e_c\} \}$. Of course, $e \cup e_c$ might only have 3 vertices (when we claimed $\mathcal{H}$ would be 4-uniform), but we fix this by arbitrarily adding vertices to edges of size 3.* Note that $\mathcal{H}$ can have multiple edges since a single set of 4 vertices can contain, say, two red edges and also two blue edges. Also, since the construction of $\mathcal{H}$ potentially involves some arbitrary choices (in particular, the choice of the edges $e_c$ as well as the choice of vertices used to enlarge 3-edges), in general a coloring $C$ may give rise to several possible repeat multi-hypergraphs $\mathcal{H}$. We now make the key observation about repeat multi-hypergraphs. Essentially it says that edges in $\mathcal{H}$ count color repetitions of $C$ "faithfully," i.e., without under- or over-counting. **Observation 1**. *Let $\mathcal{H}$ be a repeat multi-hypergraph for the coloring $C$. Then for all sets of vertices $S$ we have $$r(S) = |E(\mathcal{H}[S])|.$$* *Proof of Observation [Observation 1](#obs:faithful){reference-type="ref" reference="obs:faithful"}.* Recall that each hyperedge of $\mathcal{H}$ contains $e \cup e_c$ for some color $c$ and some edge $e$ that has color $c$. Now if a set of vertices $S$ spans $b$ hyperedges all corresponding to the same color $c$, then $S$ contains $e_i \cup e_c$ for $1 \le i \le b$ and some edges $e_1, \ldots, e_b$ which all have color $c$. In particular $S$ contains $b+1$ edges, namely $e_c, e_1, \ldots, e_b$, which all have color $c$, i.e., $S$ spans $b$ repeats in the color $c$.
Now if $S$ spans $b$ hyperedges (which now need not all correspond to the same color), we likewise conclude that $S$ spans $b$ repeats by simply summing over the colors. ◻ We are now ready to prove Claim [Claim 2](#clm:quadlower){reference-type="ref" reference="clm:quadlower"}. *Proof of Claim [Claim 2](#clm:quadlower){reference-type="ref" reference="clm:quadlower"}.* Consider a $(p, q_{\text{quad}})$-coloring $C$ of $K_n$ that is optimal, i.e., uses $f\left(n, p, q_{\text{quad}}\right)$ colors. In such a coloring, any set of $p=2\ell$ vertices spans at most $\ell-2$ repeats. Let $\mathcal{H}$ be a repeat multi-hypergraph for $C$. By Observation [Observation 1](#obs:faithful){reference-type="ref" reference="obs:faithful"}, a $(2\ell, \ell-1)$-configuration in $\mathcal{H}$ would be a set of $2 \ell$ vertices spanning at least $\ell-1$ repeats. Since $C$ is a $(2\ell, \binom{2\ell}{2}-\ell+2)$-coloring, $\mathcal{H}$ is $(2\ell, \ell-1)$-free. In particular, $|E(\mathcal{H})| \le G^{(4)}(n; 2\ell, \ell-1).$ But now applying Observation [Observation 1](#obs:faithful){reference-type="ref" reference="obs:faithful"} where $S$ is the set of all vertices we have $|E(\mathcal{H})| = \binom n2 - f\left(n, p, q_{\text{quad}}\right)$ since $C$ uses $f\left(n, p, q_{\text{quad}}\right)$ colors. The claim now follows from $$\binom n2 - f\left(n, p, q_{\text{quad}}\right) = |E(\mathcal{H})| \le G^{(4)}(n; 2\ell, \ell-1)$$ ◻ Next we attack the upper bound for the coloring problem. To get a bound that comes close to matching Claim [Claim 2](#clm:quadlower){reference-type="ref" reference="clm:quadlower"}, we will have to "reverse" the procedure we used to turn a coloring $C$ into a repeat multi-hypergraph $\mathcal{H}$. We must be careful for a few reasons. First, as we discussed earlier, a single coloring $C$ can give rise to many different $\mathcal{H}$. 
Second, although we saw that if $C$ is a $(p, q_{\text{quad}})$-coloring then $\mathcal{H}$ must be $(2\ell, \ell-1)$-free, in general the converse does not hold. In particular, if some set of vertices $S$ does not contain the edge $e_c$, then $S$ could have many repeats in the color $c$ but not span even one edge of $\mathcal{H}$. We will get around these issues by ensuring that our coloring uses each color at most twice, and we never use the same color on adjacent edges. For such a coloring $C$, the repeat multi-hypergraph $\mathcal{H}$ is unique. Furthermore, such a coloring $C$ is a $(p, q_{\text{quad}})$-coloring if and only if $\mathcal{H}$ is $(2\ell, \ell-1)$-free. **Claim 3**. *$$f\left(n, p, q_{\text{quad}}\right) \le \binom n2 - H^{(4)}(n; 2\ell, \ell-1).$$* *Proof.* We will construct a $\left(p, q_{\text{quad}}\right)$-coloring that uses $\binom n2 - H^{(4)}(n; 2\ell, \ell-1)$ colors. We start with an extremal hypergraph $\mathcal{H}$ for the $H^{(4)}(n; 2\ell, \ell-1)$ problem. In other words, $\mathcal{H}$ has $n$ vertices, $H^{(4)}(n; 2\ell, \ell-1)$ edges, and properties [\[item:1\]](#item:1){reference-type="eqref" reference="item:1"}--[\[item:3\]](#item:3){reference-type="eqref" reference="item:3"}. We construct a coloring as follows. Start with an edge $h_1\in \mathcal{H}$ and choose two arbitrary, disjoint pairs $e_{1}, f_{1} \subseteq h_1$ and assign them the color $c_{1}$. Assign each of the other pairs in $h_1$ its own unique color. Let the set of "active" pairs after step 1 be $A_1=\{e_1,f_1\}$. Then define $$H_1=\{h\in E(\mathcal{H})\setminus \{h_1\}: e \subseteq h \text{ for some } e\in A_1\}.$$ In general, assume that we have defined colors in $h_1, h_2, \ldots, h_{k-1}$ such that - $A_{k-1}= \{e_1, \ldots, e_{k-1}\} \cup \{f_1,\ldots,f_{k-1}\}$, - $H_{k-1}=\{h\in E(\mathcal{H})\setminus \{h_1, \ldots, h_{k-1}\}: e \subseteq h \text{ for some } e\in A_{k-1}\}$, and - $|\bigcup_{i=1}^{k-1} h_i| = 2k$. Notice that these are true for step 1.
We choose an arbitrary edge $h_k$ from $H_{k-1}$. Notice that $\left(\bigcup_{i=1}^{k-1} h_i \right) \cap h_k = e$ for some $e\in A_{k-1}$. Indeed, $e$ is clearly a subset of the expression and by property [\[item:2\]](#item:2){reference-type="eqref" reference="item:2"} if the cardinality of the intersection were $3$, then $\bigcup_{i=1}^{k} h_i$ would have $2k+4-3 = 2k+1$ vertices that induce at least $k$ edges, violating the $(2k+1,k)$-free condition in $\mathcal{H}$. Thus, $\left|\bigcup_{i=1}^{k} h_i\right|=2(k+1)$. Then pick two disjoint, uncolored pairs $e_{k},f_{k}\subseteq h_k$ (there are two such choices), color them $c_{k}$, and give each of the other uncolored pairs in $h_k$ its own unique color. Finally, define $$A_{k}= A_{k-1} \cup \{e_k,f_k\}$$ and $$H_{k}=\{h\in E(\mathcal{H})\setminus \{h_1, \ldots, h_{k}\}: e \subseteq h \text{ for some } e\in A_{k}\}.$$ Continue in this way until $H_k = \emptyset$ for some $k$. Notice that $k < \ell-1$ since otherwise there would be a set of $2\ell$ vertices on $\ell-1$ edges, violating property [\[item:1\]](#item:1){reference-type="eqref" reference="item:1"}. Then repeat this process with any edge other than $h_1, \ldots, h_k$. Continue until all edges have been processed, and give any uncolored pairs a unique color. Notice that each newly chosen edge will intersect the union of any of the former edges in at most $2$ vertices by property [\[item:2\]](#item:2){reference-type="eqref" reference="item:2"}. Note that for each edge in $\mathcal{H}$, the coloring has exactly one repeat, and there are $H^{(4)}(n; 2\ell, \ell-1)$ edges. Thus when considering the coloring of the pairs, we obtain a coloring $C$ of $K_n$ with $$|C| = \binom{n}{2} - H^{(4)}(n; 2\ell, \ell-1).$$ To verify that $C$ is a $(p, q_{\text{quad}})$-coloring, choose any set $S$ of $p=2\ell$ vertices. Clearly, in $\mathcal{H}$, the set $S$ induces at most $\ell-2$ hyperedges. And our coloring defines exactly one repeat per hyperedge, and none elsewhere.
So the total number of distinct colors among the edges of $K_n[S]$ is at least $q_{\text{quad}} = \binom{2\ell}{2}-\ell+2$. ◻ Finally observe that Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"} follows from Lemma [Lemma 1](#lem:shangguan){reference-type="ref" reference="lem:shangguan"} and Claims [Claim 1](#clm:FG){reference-type="ref" reference="clm:FG"}, [Claim 2](#clm:quadlower){reference-type="ref" reference="clm:quadlower"} and [Claim 3](#clm:quadupper){reference-type="ref" reference="clm:quadupper"}. ## Proof of [\[eqn:estquad\]](#eqn:estquad){reference-type="eqref" reference="eqn:estquad"} from Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"} {#proof-of-eqnestquad-from-theorem-thmest} In this subsection we establish the explicit bounds [\[eqn:estquad\]](#eqn:estquad){reference-type="eqref" reference="eqn:estquad"}. They will follow from Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"} together with explicit bounds for the function $F^{(4)}$. As we mentioned before, Delcourt and Postle [@DP22] proved some very general and powerful results to the effect that certain hypergraphs have almost-perfect matchings which avoid certain forbidden submatchings. Similar results were independently proved by Glock, Joos, Kim, Kühn and Lichev [@GJKKL]. Each team of researchers was motivated in part by finding approximate designs of high "girth". In particular, it follows just as well from either [@DP22] (Theorem 1.3) or [@GJKKL] (Theorem 1.1) that for any $\ell \ge 3$ there exists a linear 4-uniform hypergraph $\mathcal{H}$ on $n$ vertices with $\frac 1{12} n^2 +o(n^2)$ edges which is also $(2\ell, \ell-1)$-free. In other words, $F^{(4)}(n; 2\ell, \ell-1) \ge \frac 1{12} n^2 +o(n^2)$. Now the upper bound in [\[eqn:estquad\]](#eqn:estquad){reference-type="eqref" reference="eqn:estquad"} follows from Theorem [Theorem 3](#thm:quadexact){reference-type="ref" reference="thm:quadexact"}. 
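The upper-bound constructions above (in Theorem 6 and in Claim 3) share one mechanism: a sparse 4-uniform hypergraph is converted into a coloring in which each hyperedge contributes exactly one color repeat. As a sanity check, the following sketch (our own illustration, not code from the paper) runs the Theorem 6 construction on the classical $K_4$-decomposition of $K_{13}$ arising from the planar difference set $\{0, 1, 3, 9\}$ modulo $13$, yielding a coloring with exactly $\binom{13}{2} - 13 = \frac56\binom{13}{2} = 65$ colors.

```python
from itertools import combinations

def coloring_from_packing(n, cliques):
    """Turn a family of edge-disjoint 4-cliques of K_n into an edge coloring
    in which each clique contributes exactly one color repeat: two disjoint
    pairs inside the clique share a color, and every remaining pair of K_n
    gets its own fresh color."""
    color, fresh = {}, 0
    for q in cliques:
        a, b, c, d = sorted(q)
        color[(a, b)] = color[(c, d)] = fresh  # the single repeat of this clique
        fresh += 1
    for e in combinations(range(n), 2):
        if e not in color:
            color[e] = fresh
            fresh += 1
    return color

def repeats(color, S):
    """r(S) = binom(|S|, 2) - c(S): the number of color repetitions in S."""
    cols = [color[p] for p in combinations(sorted(S), 2)]
    return len(cols) - len(set(cols))

# K_4-decomposition of K_13 from the planar difference set {0, 1, 3, 9} mod 13:
# the 13 translates cover every pair of vertices exactly once (an S(2,4,13)).
blocks = [tuple(sorted({i % 13, (i + 1) % 13, (i + 3) % 13, (i + 9) % 13}))
          for i in range(13)]
col = coloring_from_packing(13, blocks)
```

Checking `repeats(col, S) <= 1` over all $\binom{13}{6}$ six-vertex subsets confirms the $(6,14)$-coloring condition: two repeats would force two full cliques inside $S$, hence at least $7$ vertices.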
We move on to the lower bound in [\[eqn:estquad\]](#eqn:estquad){reference-type="eqref" reference="eqn:estquad"}, which will follow from an upper bound on $H^{(4)}(n; 2\ell, \ell-1)$ (which of course gives an upper bound on $F^{(4)}(n; 2\ell, \ell-1)$ by [\[eqn:FH\]](#eqn:FH){reference-type="eqref" reference="eqn:FH"}). In particular, we will be done when we prove the following: **Claim 4**. *For $\ell \ge 2$ we have $$H^{(4)}(n; 2\ell, \ell-1) \le \frac{\ell - 2}{10\ell - 18} n^2 + o(n^2).$$* The proof is a straightforward adaptation of Delcourt and Postle's proof of their Lemma 1.9 in [@DP22]. *Proof.* Let $\mathcal{H}$ be a 4-uniform hypergraph on $n$ vertices which is $(2\ell, \ell-1)$-free and $(2i+1, i)$-free for $i=2, \ldots, \ell-2$ (recall Definition [Definition 4](#def:H){reference-type="ref" reference="def:H"}). Define a graph $G$ with $V(G)=E(\mathcal{H})$, where $e_1e_2 \in E(G)$ whenever $|e_1 \cap e_2| \ge 2$. Each component of $G$ must have order at most $\ell-2$ since $\mathcal{H}$ is $(2\ell, \ell-1)$-free. Let $\{e_1, \ldots, e_b\}$ be a component in $G$ for some $1 \le b \le \ell-2$. Assume that the ordering $\{e_1, \ldots, e_b\}$ is chosen so that for each $2\le i\le b$ there is $1\le j\le i-1$ such that $|e_i \cap e_j|\ge 2$. We claim that for each $i \ge 2$, $e_i$ has two vertices (in $V(\mathcal{H})$) which are not in $e_1 \cup \ldots \cup e_{i-1}$; otherwise, we would have a $(2i+1, i)$-configuration in $\mathcal{H}$. On the other hand, due to our choice of the ordering, there is an edge $e_j \in \{e_1, \ldots, e_{i-1}\}$ such that $|e_i\cap e_j|\ge 2$. Consequently, $e_i$ shares exactly one pair of vertices with $e_1 \cup \ldots \cup e_{i-1}$ and so $e_i$ contains five pairs of vertices which are not subsets of any $e_{j}, j <i$. Of course $e_1$ contains six pairs and each edge after that has five more, so the total number of pairs contained in some $e_j, j \le b$ is at least $5b+1$.
Note that for two edges of $\mathcal{H}$, if they are in different components of $G$ then they do not share any pair of vertices in $\mathcal{H}$. For $1 \le b \le \ell-2$ let $C_b$ be the number of components of $G$ of order $b$. Then we have $$\label{eqn:bcb} |E(\mathcal{H})| = \sum_{1 \le b \le \ell-2} b C_b,$$ which implies $$\label{eqn:cb} \sum_{1 \le b \le \ell-2} C_b \ge \frac{1}{\ell-2} |E(\mathcal{H})|.$$ But now by summing the vertex-pairs in $\mathcal{H}$ we have $$\binom n2 \ge \sum_{1 \le b \le \ell-2} (5b + 1) C_b \ge \left(5 + \frac 1{\ell-2}\right) |E(\mathcal{H})|,$$ where the last inequality uses [\[eqn:bcb\]](#eqn:bcb){reference-type="eqref" reference="eqn:bcb"} and [\[eqn:cb\]](#eqn:cb){reference-type="eqref" reference="eqn:cb"}. It follows that $$|E(\mathcal{H})| \le \frac{\ell - 2}{10\ell - 18} n^2 + o(n^2),$$ which completes the proof. ◻ # Linear threshold {#sec:lin} In this section we address the linear threshold. First we prove Theorem [Theorem 4](#thm:linlower){reference-type="ref" reference="thm:linlower"} and the lower bound in [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"}. ## Proof of Theorem [Theorem 4](#thm:linlower){reference-type="ref" reference="thm:linlower"} and the lower bound in [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"} {#proof-of-theorem-thmlinlower-and-the-lower-bound-in-eqnestlin} We start by comparing $F^{(3)}$ with $G^{(3)}$. This is analogous to Claim [Claim 1](#clm:FG){reference-type="ref" reference="clm:FG"}. **Claim 5**. *For all $p\ge 3$, $$G^{(3)}(n; p, p-2) = F^{(3)}(n;p,p-2)+O(n).$$* *Proof.* Let $\mathcal H$ be an extremal multi-hypergraph for the $G^{(3)}(n;p,p-2)$ problem. So $\mathcal H$ has $G^{(3)}(n;p,p-2)$ edges and is $(p,p-2)$-free. Then $\mathcal H$ has at most $Cn$ edges of multiplicity at least 2, where $3C=2x$ and $x$ is whichever of $(p-2)/2$ or $(p-1)/2$ is an integer. Suppose to the contrary. 
Let $\mathcal H_2$ be the multi-hypergraph with $V(\mathcal H_2)=V(\mathcal H)$ and all edges from $\mathcal H$ with multiplicity at least 2. Then the average degree in $\mathcal H_2$ is at least $3C=2x$. So there is a set of $2x$ edges on at most $1+2x$ vertices. But $2x \ge p-2$ and $2x+1\le p$, so this contradicts the fact that $\mathcal H$ is $(p,p-2)$-free. Now we form $\mathcal H'$ by deleting the edges with multiplicity at least $2$ that appear in $\mathcal H$. Since $\mathcal H'$ is also $(p,p-2)$-free, it has at most $F^{(3)}(n;p,p-2)$ edges. In addition, we must delete at most $Cn$ edges of $\mathcal H$ to obtain $\mathcal H'$, so $$F^{(3)}(n; p, p-2) \le G^{(3)}(n; p, p-2) \le F^{(3)}(n; p, p-2)+Cn,$$ proving the claim. ◻ The next claim is analogous to Claim [Claim 2](#clm:quadlower){reference-type="ref" reference="clm:quadlower"}. **Claim 6**. *For all $p \ge 3$ $$f\left(n, p, q_{\text{lin}}\right) \ge n -1- \frac 1n G^{(3)}(n; p, p-2).$$* *Proof.* Consider any $(p, q_{\text{lin}})$-coloring using color set $C$. So any set of $p$ vertices spans at most $p-3$ repeats. We form the 3-uniform hypergraph $\mathcal{H}$ as follows. For each vertex $v$ and color $c$ used on at least one edge incident to $v$, say $\{e_1, \ldots, e_\ell\}$ is the set of edges incident to $v$ and colored $c$. Then $\mathcal{H}$ has the edges $e_1 \cup e_i$ for $i=2, \ldots, \ell$. $\mathcal{H}$ is $(p, p-2)$-free, but it might have multiple edges which come from monochromatic triangles. Therefore $$|E(\mathcal{H})| \le G^{(3)}(n; p, p-2).$$ Each hyperedge of $\mathcal{H}$ corresponds to two edges of the same color sharing a vertex $v$, and so some particular vertex $v$ plays that role at most $$\frac 1n |E(\mathcal{H})| \le \frac 1n G^{(3)}(n; p, p-2)$$ times. But these hyperedges of $\mathcal{H}$ count all of the color repeats on edges incident with $v$.
Thus the number of different colors used on edges incident with $v$ is at least $$n-1 - \frac 1n G^{(3)}(n; p, p-2).$$ ◻ Theorem [Theorem 4](#thm:linlower){reference-type="ref" reference="thm:linlower"} now follows from Claims [Claim 5](#clm:GFlin){reference-type="ref" reference="clm:GFlin"} and [Claim 6](#clm:linlower){reference-type="ref" reference="clm:linlower"}. In turn, the lower bound in [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"} follows from Theorem [Theorem 4](#thm:linlower){reference-type="ref" reference="thm:linlower"} and Lemma 1.9 from Delcourt and Postle [@DP22], which states that $$F^{(3)}(n; p, p-2) \le \frac{p-3}{4p-10}n^2 + o(n^2).$$ ## Proof of the upper bound in [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"} We now turn to the upper bound at the linear threshold found in Theorem [Theorem 1](#thm:est){reference-type="ref" reference="thm:est"}. We use the forbidden submatchings method of Delcourt and Postle [@DP22b]. To introduce this method, we require the following definitions from [@DP22b]. **Definition 6**. 1. *The $i$-degree of a vertex $v$ of $H$, denoted $d_{H,i}(v)$, is the number of edges of $H$ of size $i$ containing $v$. The maximum $i$-degree of $H$, denoted $\Delta_i(H)$, is the maximum of $d_{H,i}(v)$ over all vertices $v$ of $H$.* 2. *Let $G$ be a (multi)-hypergraph. We say a hypergraph $H$ is a configuration hypergraph for $G$ if $V (H) = E(G)$ and $E(H)$ consists of a set of matchings of $G$ of size at least two. We say a matching of $G$ is $H$-avoiding if it spans no edge of $H$.* 3. *Let $G$ be a hypergraph and $H$ be a configuration hypergraph of $G$. We define the codegree of a vertex $v \in V(G)$ and $e \in E(G)=V(H)$ with $v \not \in e$ as the number of edges of $H$ that contain $e$ and an edge incident with $v$.
We then define the *maximum codegree* of $G$ with $H$ as the maximum codegree over vertices $v \in V(G)$ and edges $e\in E(G)=V(H)$ with $v \not \in e$.* 4. *We say a hypergraph $G = (A, B)$ is *bipartite with parts $A$ and $B$* if $V (G) = A \cup B$ and every edge of $G$ contains exactly one vertex from $A$. We say a matching of $G$ is *$A$-perfect* if every vertex of $A$ is in an edge of the matching.* 5. *Let $H$ be a hypergraph. The *maximum $(k,\ell)$-codegree* of $H$ is $$\Delta_{k,\ell}(H)=\max_{S\in \binom{V(H)}{\ell}} |\lbrace e \in E(H) : S \subset e, |e|=k\rbrace|$$* 6. *Let $G$ be a hypergraph and let $H$ be a configuration hypergraph of $G$. We define the *$i$-codegree* of a vertex $v\in V(G)$ and $e\in E(G) = V(H)$ with $v \not \in e$ as the number of edges of $H$ of size $i$ that contain $e$ and an edge incident with $v$. We then define the *maximum $i$-codegree* of $G$ with $H$ as the maximum $i$-codegree over vertices $v \in V(G)$ and edges $e \in E(G) = V(H)$ with $v \not \in e$.* **Theorem 7** (Delcourt and Postle [@DP22b]). *For all integers $r, g \ge 2$ and real $\beta \in (0, 1)$, there exist an integer $D_\beta > 0$ and real $\alpha > 0$ such that the following holds for all $D \ge D_\beta$:* *Let $G = (A, B)$ be a bipartite $r$-bounded (multi)-hypergraph with* 1. *codegrees at most $D^{1-\beta}$, and [\[thm-dp:A\]]{#thm-dp:A label="thm-dp:A"}* 2. *every vertex in $A$ has degree at least $(1+D^{-\alpha})D$ and every vertex in $B$ has degree at most $D$. [\[thm-dp:B\]]{#thm-dp:B label="thm-dp:B"}* *Let $H$ be a $g$-bounded configuration hypergraph of $G$ with* 1. *$\Delta_i(H) \le \alpha \cdot D^{i-1} \log D$ for all $2\le i \le g$; [\[thm-dp:C\]]{#thm-dp:C label="thm-dp:C"}* 2. *$\Delta_{k,\ell}(H) \le D^{k-\ell-\beta}$ for all $2 \le \ell < k \le g$; and [\[thm-dp:D\]]{#thm-dp:D label="thm-dp:D"}* 3. *the maximum $2$-codegree of $G$ with $H$ and the maximum common $2$-degree of $H$ are both at most $D^{1-\beta}$.
[\[thm-dp:E\]]{#thm-dp:E label="thm-dp:E"}* *Then there exists an $H$-avoiding $A$-perfect matching of $G$ and indeed even a set of $D_A-D^{1-\alpha}$ $(\ge D)$ disjoint $H$-avoiding $A$-perfect matchings of $G$.* *Proof of upper bound in [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"}.* Fix some $\beta \in (0, 1)$, set $r=3$ and $g=\binom p2$ and let $\alpha>0$ be the value guaranteed by Theorem [Theorem 7](#thm:dp){reference-type="ref" reference="thm:dp"}. Fix some $\varepsilon$ with $0 < \varepsilon<\alpha$. Let $C$ be a set of $n+n^{1-\varepsilon}$ colors. Let $G=(A,B)$ be a bipartite hypergraph with parts $A=E(K_n)$ and $B=\left\lbrace v_c : v \in V(K_n), c\in C\right\rbrace$, and with edge set $$E(G)=\left\lbrace \lbrace uv, u_c, v_c\rbrace : u,v \in V(K_n), c\in C\right\rbrace.$$ Note that $G$ is $3$-uniform (and thus $3$-bounded). We intend to find an $A$-perfect matching $M$ in $G$, which will give us a coloring of the edges of $K_n$ as follows. For each edge $\{uv, u_c, v_c\} \in M$ we just color the edge $uv$ with the color $c$. Since $M$ is $A$-perfect, every edge of $K_n$ gets exactly one color. Note that since $M$ is a matching, no two incident edges $uv$ and $vw$ in $K_n$ can get the same color $c$. We now define $H$, our configuration hypergraph of $G$. Of course we let $V(H)=E(G)$. Suppose $S\subseteq E(G)=V(H)$ is a matching, so $S$ corresponds to a coloring $c_S$ of some of the edges of $K_n$. We will let $S$ be an edge of $H$ if we have that > the number of vertices of $K_n$ spanned by edges that are colored by $c_S$ is $V(S)$ where $4 \le V(S) \le p$ and the number of color repetitions in $c_S$ is $R(S) \ge V(S)-2$. If any edge in $E(H)$ is not minimal (i.e., it properly contains another edge) we remove it. When $V(S)=p$, an edge of $H$ corresponds to a violation of the $(p,q_{\text{lin}})$-condition in $K_n$.
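As a sanity check (not part of the proof), the bipartite hypergraph $G$ above can be built for a toy instance and its degree and codegree counts verified directly; the choices $n=5$ and a 7-element color set below are arbitrary:

```python
from itertools import combinations

n = 5
colors = list(range(7))  # plays the role of the color set C
A = [frozenset(e) for e in combinations(range(n), 2)]   # A = E(K_n)
B = [(v, c) for v in range(n) for c in colors]          # (v, c) models v_c

# each hyperedge {uv, u_c, v_c} couples one edge of K_n with a color
edges = [(e, (u, c), (v, c)) for e in A for c in colors
         for u, v in [tuple(sorted(e))]]

def degree(x):
    # number of hyperedges of G containing the vertex x
    return sum(x in g for g in edges)

# every vertex of A has degree |C|; every vertex of B has degree n - 1
assert all(degree(e) == len(colors) for e in A)
assert all(degree(b) == n - 1 for b in B)

# all pairwise codegrees are at most 1
for x, y in combinations(A + B, 2):
    assert sum(x in g and y in g for g in edges) <= 1
```

This mirrors the codegree case analysis given next: two $A$-vertices never share a hyperedge, while a pair involving a $B$-vertex shares at most one.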
Including the edges of $H$ corresponding to $4 \le V(S) \le p-1$ is important in order to verify the conditions of Theorem [Theorem 7](#thm:dp){reference-type="ref" reference="thm:dp"}. It is easy to see that $H$ is $\binom{p}{2}$-bounded. We now verify that condition [\[thm-dp:A\]](#thm-dp:A){reference-type="ref" reference="thm-dp:A"} holds. Define $D = n$. Let $x,y \in V(G)$. Clearly, if $x,y \in A$, then the codegree is zero since there is exactly one member of $A$ in each edge of $G$. If $x \in A$ and $y \in B$ then $x=uv$ for some $u,v \in V(K_n)$. If $y=u_c$ or $v_c$, then the codegree is $1$. Otherwise, the codegree is 0. Finally, if $x,y \in B$, then the codegree is either 0 or 1, depending on whether they share the same color subscript $c$. Thus, all codegrees in $G$ are at most $1$, verifying condition [\[thm-dp:A\]](#thm-dp:A){reference-type="ref" reference="thm-dp:A"}. Next, we verify that condition [\[thm-dp:B\]](#thm-dp:B){reference-type="ref" reference="thm-dp:B"} holds. For any vertex $uv\in A$, the degree of $uv$ in $G$ is exactly $|C|=n+n^{1-\varepsilon}=D(1+D^{-\varepsilon}) \ge D(1+D^{-\alpha})$. In addition, for any vertex $u_c\in B$, the degree of $u_c$ in $G$ is exactly $n-1 \le D$. So condition [\[thm-dp:B\]](#thm-dp:B){reference-type="ref" reference="thm-dp:B"} is verified. Next, we verify that condition [\[thm-dp:C\]](#thm-dp:C){reference-type="ref" reference="thm-dp:C"} holds. Let $e=\{uv, u_c, v_c\} \in V(H)=E(G)$. We count edges $I$ of $H$ of size $i$ with $e \in I$. For some $4 \le k \le p$, the number $V(I)$ of vertices of $K_n$ spanned by edges of $I$ in $G$ is $k$ and the number of color repetitions $R(I)$ is at least $k-2$. So besides $u$ and $v$, the set of vertices spanned by $I$ must contain exactly $k-2$ other vertices of $K_n$. Let $x$ be the number of colors induced by $I$ other than $c$. We count $R(I)$ by taking the difference between the number of edges colored by $c_I$ and the number of distinct colors used. Thus we have $i-(1+x) \ge k-2$, so $x\le i-k+1$.
Since the edges of $K_n$ appearing in $I$ all lie on the chosen vertices and use the chosen colors, there are only $O(1)$ choices for the remaining colored edges of $K_n$ in $I$. $I$ is determined by choosing $k-2$ vertices of $K_n$ and at most $i-k+1$ colors, so $$\Delta_i(H) = O\left(\sum_{k=4}^p n^{k-2} \cdot n^{i-k+1}\right) = O( n^{i-1}) \le \alpha \cdot D^{i-1} \log D$$ for all $2\le i \le g$, verifying condition [\[thm-dp:C\]](#thm-dp:C){reference-type="ref" reference="thm-dp:C"}. To verify condition [\[thm-dp:D\]](#thm-dp:D){reference-type="ref" reference="thm-dp:D"}, fix $k$ and $\ell$, and let $L \subseteq V(H)$ have size $\ell$. We count the number of $K \in E(H)$ such that $L \subset K$ and $|K|=k$. If $V(L) > p$ there is no possible $K$, so we assume $V(L) \le p$. If $V(L)$ is 2 or 3 then $R(L)=0$. Otherwise we have $V(L) \ge 4$. If $R(L) \ge V(L)-2$ then there is no possible $K$, since we removed nonminimal edges from $H$; so assume $R(L) \le V(L)-3$. Let us count possible edges $K$ such that $V(K)=t$. Since $K$ is an edge of $H$, we have $R(K) \ge t-2$. The coloring $c_L$ uses exactly $\ell - R(L)$ distinct colors; say there are $x$ colors used by $c_K$ that are not used by $c_L$. Then the number of colors used by $c_K$ is $$x + \ell - R(L) = k - R(K) \le k - t+2$$ and so $$x \le R(L) - \ell + k-t+2 \le V(L) - \ell + k - t - 1.$$ To determine $K$ we choose $t-V(L)$ vertices of $K_n$ which are not touched by the coloring $c_L$, and then we choose $x$ many colors. Given that choice there are only a constant number of ways to choose which edges are colored and which colors they get. Therefore, $$\Delta_{k,\ell}(H) \le O \left(\sum_{t \le p} n^{t-V(L)} \cdot n^{V(L) - \ell + k - t - 1}\right) = O(n^{k-\ell-1})\le D^{k-\ell-\beta}$$ for all $2 \le \ell < k \le g$, verifying condition [\[thm-dp:D\]](#thm-dp:D){reference-type="ref" reference="thm-dp:D"}.
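Both displayed estimates rest on exponent bookkeeping that is easy to check mechanically; the following sketch (with arbitrary sample parameters, not tied to the proof) verifies the geometric-sum identity behind the $\Delta_i(H)$ bound and the exponent cancellation behind the $\Delta_{k,\ell}(H)$ bound:

```python
# Condition (C): each summand n^(k-2) * n^(i-k+1) equals n^(i-1),
# so the sum over k = 4, ..., p is (p-3) * n^(i-1).
n, i, p = 10, 6, 7  # arbitrary sample values
total = sum(n ** (k - 2) * n ** (i - k + 1) for k in range(4, p + 1))
assert total == (p - 3) * n ** (i - 1)

# Condition (D): choosing t - V(L) vertices and at most
# V(L) - l + k - t - 1 colors gives total exponent k - l - 1,
# independent of both t and V(L).
for k in range(3, 12):
    for ell in range(2, k):
        for VL in range(2, 9):          # VL plays the role of V(L)
            for t in range(VL, 12):     # t = V(K)
                assert (t - VL) + (VL - ell + k - t - 1) == k - ell - 1
```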
Finally, we turn to condition [\[thm-dp:E\]](#thm-dp:E){reference-type="ref" reference="thm-dp:E"}. But note that for all $e\in E(H)$, $|e|\ge 4$. So the maximum $2$-codegree of $G$ with $H$ and the maximum common $2$-degree of $H$ are both $0$. Therefore, there exists an $H$-avoiding $A$-perfect matching of $G$, which corresponds to a $\left(p,\binom p2-p+3\right)$-coloring of $K_n$ using our set $C$ of $n+n^{1-\varepsilon}$ colors. ◻ ## Proof of Theorem [Theorem 5](#thm:5,8lower){reference-type="ref" reference="thm:5,8lower"} {#proof-of-theorem-thm58lower} Here we prove that $$f(n, 5, 8) \ge \frac 78 n + o(n).$$ Consider a $(5, 8)$-coloring of $K_n$ using color set $C$. We will determine a partition $\mathcal{P}$ of the edge set as follows. First, for $j=2, 3, 4$ we let $\mathcal{S}_j$ be the family of all maximal $j$-sets of vertices spanning $j-2$ repeats. When we say "maximal" here we mean that no set $S \in \mathcal{S}_j$ is contained in a set $S' \in \mathcal{S}_{j'}$ for $j' > j$. Each part $P$ of our partition $\mathcal{P}$ will be some set of edges that are all inside some set $S$ in $\mathcal{S}_2, \mathcal{S}_3$ or $\mathcal{S}_4$. More specifically, for each $S \in \mathcal{S}_2 \cup \mathcal{S}_3$ we will have one part $P \in \mathcal{P}$ where $P$ is just the set containing all the edges from $S$. When $S \in \mathcal{S}_4$ the corresponding part $P \in \mathcal{P}$ will be a little more complicated, and the edges inside $S$ will break down into some singleton parts (i.e., parts with just one edge) and one larger part $P$ which we will now describe. This larger part $P$ will include any edge in $S$ of a color that appears more than once in $S$, as well as any edge $uv$ in $S$ such that there is a third vertex $w\in S$ where $uw, vw$ have the same color. Any edges of $S$ which are not in the large part $P$ will be in singleton parts. So we have our partition $\mathcal{P}$.
Consider a pair $(v, c)$ where $v$ is a vertex of $K_n$ and $c$ is a color. We say $(v, c)$ is *hit* by the part $P \in \mathcal{P}$ if $P$ contains an edge $e$ of color $c$ such that either $e$ is incident with $v$ or else the two edges from $v$ to the two endpoints of $e$ have the same color. Below we draw representations of every possible part of $\mathcal{P}$. For each one we count how many edges it has and how many pairs $(v, c)$ it hits. For $i=1, \ldots, 6$ let $x_i$ be the number of parts in $\mathcal{P}$ that look like Figure [\[fig:P\]](#fig:P){reference-type="ref" reference="fig:P"}$(i)$. Then summing the edges we have $$x_1 + 3 x_2 + 5x_3 + 5x_4+ 4x_5 + 4x_6 = \binom n2.$$ Now since each pair $(v, c)$ is hit by at most one part of $\mathcal{P}$ we have $$2x_1 + 6x_2 + 10x_3 + 10x_4 + 8x_5 + 7x_6 \le n|C|.$$ But then, since each coefficient in the second display is at least $\frac 74$ times the corresponding coefficient in the first, we have $$n|C| \ge \frac 74 \binom n2$$ completing the proof. # Concluding remarks {#sec:con} We think that there exists a construction that gives a matching upper bound for Theorem [Theorem 5](#thm:5,8lower){reference-type="ref" reference="thm:5,8lower"}. We have made progress using the forbidden submatchings method, but it seems to require a much more complicated application of this method when compared to the proof of the upper bound in [\[eqn:estlin\]](#eqn:estlin){reference-type="eqref" reference="eqn:estlin"}. We hope to prove the following conjecture soon. **Conjecture 1**. *$$f(n, 5,8) = \frac78 n + o(n).$$* It is also plausible to conjecture the following. Currently it is known only for $p=3, 4$. **Conjecture 2**. *The limit $$\lim_{n \rightarrow \infty} \frac{f(n, p, q_{\text{lin}})}{n}$$ exists for all $p\ge 3$.* Finally, at the quadratic threshold, recall that we only proved that the analogous limit exists when $p$ is even.
The same should likely also hold when $p$ is odd, as well as when $q$ is above the quadratic threshold. **Conjecture 3**. *The limit $$\lim_{n \rightarrow \infty} \frac{f(n, p, q)}{n^2}$$ exists for all $p\ge 4$ and $q \ge q_{\text{quad}}$.*
--- abstract: | Quadratic Conjecture is a strengthening of Oliver's $p$-group conjecture. Let $G$ be a $p$-group of maximal class of order $p^n$. We prove that if $n\le 8$ or $n\ge \max\{2p-6,p+2\}$ then $G$ satisfies Quadratic Conjecture. Hence Quadratic Conjecture holds if $G$ is a $p$-group of maximal class where $p\le 7$. *Keywords:* Quadratic Conjecture; F-module; quadratic offender; quadratic element; $p$-group of maximal class. *MSC:* 20D15. author: - | Jingjing Duan and Lijian An[^1]\ Department of Mathematics, Shanxi Normal University\ Taiyuan, Shanxi 030031, P. R. China\ title: "On quadratic conjecture[^2]" --- # Introduction An open question in the theory of $p$-local finite groups asks whether every fusion system has a unique $p$-completed classifying space. In [@O], Oliver introduced the characteristic subgroup $\mathfrak{X}(S)$ for a finite $p$-group $S$. For odd primes he demonstrated that the conjecture outlined below would imply the existence and uniqueness of the classifying space. Recall that $J(S)$ denotes the Thompson subgroup of $S$ generated by all elementary abelian subgroups of maximal rank. **Conjecture 1**. *[@O Conjecture 3.9][\[oliver\]]{#oliver label="oliver"} For any odd prime $p$ and any $p$-group $S$, $J(S)\le \mathfrak{X}(S)$.* Conjecture [\[oliver\]](#oliver){reference-type="ref" reference="oliver"} is called Oliver's $p$-group conjecture. Let $G$ be a finite $p$-group and $V$ an elementary abelian $p$-group on which $G$ acts faithfully. Then $V$ is an *F-module* for $G$ if there exists a non-trivial elementary abelian subgroup $E$ of $G$ such that $|E|\cdot |C_V (E)| \ge |V|$. In this case, we call $E$ an *offender*. In [@GHL], Green, Héthelyi and Lilienthal obtained the following reformulation of Conjecture [\[oliver\]](#oliver){reference-type="ref" reference="oliver"}. **Conjecture 2**. *[@GHL Conjecture 1.3][\[Oliver conjecture\]]{#Oliver conjecture label="Oliver conjecture"} Let $p$ be an odd prime and $G$ a finite $p$-group.
If the faithful $\mathbb{F}_p[G]$-module $V$ is an $F$-module, then there is an element $1\ne g\in\Omega_1(Z(G))$ such that the minimal polynomial of the action of $g$ on $V$ divides $(x-1)^{p-1}$.* An element $g\in G$ is said to be *quadratic* on the $\mathbb{F}_p[G]$-module $V$ if its action has minimal polynomial $(x-1)^2$. Note that if $V$ is faithful then quadratic elements must have order $p$. In [@GHM2], Green, Héthelyi and Mazza propose the following strengthening of Conjecture [\[Oliver conjecture\]](#Oliver conjecture){reference-type="ref" reference="Oliver conjecture"}. **Conjecture** (Quadratic Conjecture). *[@GHM2 Conjecture 1.4][\[quadratic conjecture\]]{#quadratic conjecture label="quadratic conjecture"} Let $p$ be a prime and $G$ a finite $p$-group. If the faithful $\mathbb{F}_p[G]$-module $V$ is an F-module, then there are quadratic elements in $\Omega_1(Z(G))$.* Observe that Quadratic Conjecture is trivially true for $p=2$. For $p=3$, Quadratic Conjecture is just Conjecture [\[Oliver conjecture\]](#Oliver conjecture){reference-type="ref" reference="Oliver conjecture"}. By [@GHM Theorem 1.2], Quadratic Conjecture holds if $G$ is metabelian or of (nilpotence) class at most four. By [@GHM Theorem 1.3], Conjecture [\[Oliver conjecture\]](#Oliver conjecture){reference-type="ref" reference="Oliver conjecture"} holds if $G$ is a $p$-group of maximal class. Hence Quadratic Conjecture holds if $G$ is a $3$-group of maximal class. In this paper, we obtain the following main result: **Theorem 3**. *Let $p$ be an odd prime and $G$ a $p$-group of maximal class of order $p^n$. If $n\le 8$ or $n\ge \max\{2p-6,p+2\}$ then $G$ satisfies Quadratic Conjecture.* By Theorem [Theorem 3](#main){reference-type="ref" reference="main"}, Quadratic Conjecture holds if $G$ is a $p$-group of maximal class where $p\le 7$. # Quadratic Offenders **Definition 4**. *[@MS 11, 2.3] Let $G$ be a finite group and $V$ a faithful $\mathbb{F}_p[G]$-module.
For a subgroup $H\le G$ one sets $$j_H(V):=\frac{|H||C_V(H)|}{|V|}\in \mathbb{Q}.$$* Note that $j_1(V) = 1$. Suppose that $E$ is an elementary abelian subgroup of $G$. Then $E$ is an offender on $V$ if and only if $j_E(V)\ge 1$. **Definition 5**. *[@GLS 26.5] Let $G$ be a finite $p$-group and $V$ a faithful $\mathbb{F}_p[G]$-module. We denote by $\mathscr{E}(G)$ the poset of non-trivial elementary abelian subgroups of $G$. Define $$\mathscr{P}(G, V):= \{E \le G \mid E\in\mathscr{E}(G) \ \mbox{and}\ j_E(V)\ge j_F(V)\ \forall \ 1 \le F\le E\}.$$* Note that $V$ is an F-module if and only if $\mathscr{P}(G, V)$ is non-empty. The subgroups in $\mathscr{P}(G, V)$ are sometimes called *best offenders*. A *quadratic offender* is a subgroup which is both quadratic ($[V, E, E] = 0$) and an offender. **Theorem 6**. *(Timmesfeld's replacement theorem)[@GHM Theorem 4.1] Let $V$ be a faithful $\mathbb{F}_p[G]$-module and $E\in\mathscr{P}(G, V)$. Then there is a quadratic offender $F\in\mathscr{P}(G, V)$ which satisfies $j_F(V) = j_E(V)$ and $F\le E$.* By Timmesfeld's replacement theorem, a minimal offender must be a quadratic offender. Following [@GHM2], an abelian subgroup $A$ of $G$ is said to be *weakly closed* if $[A, A^g]\ne 1$ holds for every $G$-conjugate $A^g\ne A$. The following theorem shows that there is also a weakly closed quadratic offender on an F-module $V$. **Theorem 7**. *[@GHM2 Proposition 4.5][\[weakly\]]{#weakly label="weakly"} Suppose that the faithful $\mathbb{F}_p[G]$-module $V$ is an F-module. Set $$j_0 =\max\{ j_E(V)\mid E \ \mbox{an offender}\}.$$ Then there is a weakly closed quadratic offender $E$ with $j_E(V) = j_0$. Moreover if $D\le G$ is any offender with $j_D(V)=j_0$, then there is such an $E$ which is a subgroup of the normal closure of $D$.* **Lemma 8**. *Let $G$ be a finite $p$-group and $V$ an $F$-module. Suppose that there is no quadratic element in $\Omega_1(Z(G))$.
Then* - *$|E|\ge p^2$ for any offender $E$ on $V$;* - *$|E|\ge p^3$ for any weakly closed offender $E$ on $V$.* \(1\) Otherwise, there is an offender $E$ of order $p$. Since $V$ is faithful, $C_V(E)\ne V$. Since $E$ is an offender, $|E||C_V(E)|\ge |V|$. Hence $C_V(E)$ is maximal in $V$. Since $C_V(E)$ is $Z(G)$-invariant, $[V,Z(G)]\le C_V(E)$. Since $[V,Z(G)]\le C_V(E^g)$ for all $g\in G$, $[V,Z(G)]\le C_V(E^G)$. Take $1\ne z\in Z(G)\cap E^G$. Then $[V,z,z]=0$. Hence $z$ is a quadratic element in $\Omega_1(Z(G))$. This contradicts the hypothesis. \(2\) Otherwise, there is a weakly closed offender $E$ of order $p^2$. By (1), $E$ is a minimal offender. Hence $E$ is also a quadratic offender. If $E\trianglelefteq G$, then $E\cap Z(G)\ne 1$. Hence there is a quadratic element in $\Omega_1(Z(G))$. This contradicts the hypothesis. In the following, we may assume that $N_G(E)\ne G$. Let $g\in N_G(N_G(E))\setminus N_G(E)$. Then $N_G(E^g)=N_G(E)^g=N_G(E)$ and $E^g\ne E$. Hence $E$ and $E^g$ normalize each other. It follows that $[E,E^g]\le E\cap E^g$. Since $E$ is weakly closed, $[E,E^g]\ne 1$ and hence $E\cap E^g\ne 1$. Let $F=E\cap E^g$. By (1), $F$ is not an offender. Hence $|C_V(E)|\le |C_V(F)|\le \frac{1}{p^2}|V|$. On the other hand, since $E$ is an offender, $|C_V(E)|\ge \frac{1}{p^2}|V|$. Hence $|C_V(E)|=\frac{1}{p^2}|V|$ and $C_V(F)=C_V(E)$. The same reasoning gives that $C_V(F)=C_V(E^g)$. Since $E$ and $E^g$ are both quadratic, $[V,E]\le C_V(E)=C_V(F)=C_V(E^g)$ and $[V,E^g]\le C_V(E^g)=C_V(F)=C_V(E)$. It follows that $[V,E,E^g]=[V,E^g,E]=0$. By the Three Subgroups Lemma, $[V,[E,E^g]]=0$. Hence $V$ is not faithful. This is a contradiction. $\Box$ # Quadratic elements For finding quadratic elements, Lemma 4.1 of [@GHL] is a key result. In the following, we refine this lemma and give an elementary proof. **Lemma 9**. *Suppose that $p$ is an odd prime, that $G$ is a non-trivial $p$-group, and that $V$ is a faithful $\mathbb{F}_p[G]$-module.
Suppose that $a, b\in G$ are such that $c:=[a,b]$ is a non-trivial element of $C_G (a,b)$. If $a$ is quadratic, then $[V,E,E]=0$ where $E=\langle a,c\rangle$.* Let $d=a^b=ac$ and $e=a^{b^2}=ac^2=a^{-1}d^2$. Then $E=\langle a\rangle\times \langle d\rangle$. Since $a$ is quadratic, $d$ and $e$ are also quadratic. Since $[a,d]=1$, $[v,a,d]=[v,d,a]$ for all $v\in V$. Let $v\in V$ and $i,j,k,l\in \mathbb{Z}$. Then $$[v,a^id^j,a^kd^l]=(il+jk)[v,a,d].$$ In particular, $[v,e,e]=-4[v,a,d]$. It follows from $[v,e,e]=0$ that $[v,a,d]=0$. Hence $[v,a^id^j,a^kd^l]=0$ for all $v\in V$ and $i,j,k,l\in \mathbb{Z}$. Thus we have $[V,E,E]=0$. $\Box$ Recall the following terminology. Given a finite group $G$, the ascending central series is defined inductively by $Z_0(G)=1$ and $Z_{r+1}(G)$ is the normal subgroup of $G$ containing $Z_r(G)$ and such that $Z_{r+1}(G)/Z_r(G)=Z(G/Z_r(G))$, for all $r\ge 0$. The class $c$ of $G$ is the smallest number such that $Z_c(G)=G$. In this paper we use $G>G_2=G'>\cdots>G_{c+1} = 1$ to denote the descending central series of $G$, where $G_{r+1}=[G_r,G]$ for all $r\ge 2$. **Lemma 10**. *Let $G$ be a $p$-group, $V$ an F-module, and $E$ a weakly closed quadratic offender. If there is no quadratic element in $\Omega_1(Z(G))$, then* - *there is no quadratic element in $Z_2(G)$;* - *$Z_2(G)\cap E=1$, $[Z_2(G),E]=1$;* - *$[Z_3(G),E]=1$;* - *$[Z_4(G),E]\le Z_3(G)\cap E$.* \(1\) Otherwise, let $a$ be a quadratic element in $Z_2(G)$. Then $a\not\in Z(G)$ since there is no quadratic element in $Z(G)$. Hence there is $b\in G$ such that $c:=[a,b]\ne 1$. Since $c\in Z(G)\le C_G(a,b)$, by Lemma [Lemma 9](#quadratic element){reference-type="ref" reference="quadratic element"}, $c$ is also a quadratic element. This is a contradiction. \(2\) By (1), $Z_2(G)\cap E=1$. Since $[Z_2(G),E]\le Z(G)$, $[E^g, E]=1$ for all $g\in Z_2(G)$. Since $E$ is weakly closed, $E^g=E$ for all $g\in Z_2(G)$. It follows that $[Z_2(G),E]\le Z(G)\cap E=1$.
\(3\) Since $[Z_3(G),E]\le Z_2(G)$, by (2), $[Z_3(G),E,E]=1$. It follows that $[E^g, E]=1$ for all $g\in Z_3(G)$. Since $E$ is weakly closed, $E^g=E$ for all $g\in Z_3(G)$. Furthermore, $[Z_3(G),E]\le Z_2(G)\cap E=1$. \(4\) Since $[Z_4(G),E]\le Z_3(G)$, by (3), $[Z_4(G),E,E]=1$. It follows that $[E^g, E]=1$ for all $g\in Z_4(G)$. Since $E$ is weakly closed, $E^g=E$ for all $g\in Z_4(G)$. Furthermore, $[Z_4(G),E]\le Z_3(G)\cap E$. $\Box$ **Corollary 11**. *[@GHM Theorem 5.2][\[class 4\]]{#class 4 label="class 4"} Suppose that $G$ is a $p$-group and $V$ is a faithful $\mathbb{F}_p[G]$-module such that $\Omega_1(Z(G))$ has no quadratic elements. If $G$ has class at most four, then $V$ cannot be an F-module.* Otherwise, $V$ is an F-module. By Theorem [\[weakly\]](#weakly){reference-type="ref" reference="weakly"}, there is a weakly closed quadratic offender $E$ on $V$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (4), $G=Z_4(G)$ normalizes $E$. Hence $E\cap Z(G)\ne 1$, which is a contradiction. $\Box$ **Theorem 12**. *[@GHM Theorem 1.5][\[thm1\]]{#thm1 label="thm1"} Let $G$ be a $p$-group and $V$ a faithful $\mathbb{F}_p[G]$-module such that there is no quadratic element in $\Omega_1(Z(G))$.* *(1) If $A$ is an abelian normal subgroup of $G$, then $A$ does not contain any offender.* *(2) Suppose that $E$ is an offender. Then $[G', E]\ne 1$.* **Theorem 13**. *Let $G$ be a $p$-group and $V$ a faithful $\mathbb{F}_p[G]$-module. If there is a quadratic offender $E$ such that $E^G=EK$ where $K$ is an abelian normal subgroup of $G$, then there is a quadratic element in $\Omega_1(Z(G))$.* Choose $K$ maximal among the abelian normal subgroups of $G$ satisfying $E^G=EK$. If $K=E^G$, then $K$ contains an offender $E$. By Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} (1), there is a quadratic element in $\Omega_1(Z(G))$. If $K\ne E^G$, then, since $E^G/K$ is a non-trivial normal subgroup of $G/K$, there is an $a\in E$ such that $N=\langle a\rangle K\unlhd G$. By the maximality of $K$, $N$ is not abelian, so $N'\ne 1$ and hence $N'\cap \Omega_1(Z(G))\ne 1$. Since $K$ is abelian, $N'=\{[a,x]\mid x\in K\}$. Hence there exists an $x\in K$ such that $1\ne [a,x]\in \Omega_1(Z(G))$.
By Lemma [Lemma 9](#quadratic element){reference-type="ref" reference="quadratic element"}, $[a,x]$ is quadratic. $\Box$ # $p$-groups of maximal class Firstly we recall some terminology. Let $G$ be a $p$-group. The (characteristic) subgroup of $G$ generated by $\{g^p \mid g\in G\}$ is denoted by $\mho_1(G)$. Now suppose that $G$ has order $p^n$ and is of maximal class. Set $G_1 = C_G(G_2/G_4)$. One says that $G$ is *exceptional* if $n\ge 5$ and there is some $3\le i \le n-2$ such that $C_G(G_i/G_{i+2})\ne G_1$. **Definition 14**. *[@LM Definition 3.2.1] The degree of commutativity $l$ of a $p$-group $G$ of maximal class is defined to be the maximum integer such that $[G_i,G_j]\le G_{i+j+l}$ for all $i,j\ge 1$ if $G_1$ is not abelian, and $l=n-3$ if $G_1$ is abelian.* **Theorem 15**. *[@LM Corollary 3.2.7][\[1\]]{#1 label="1"} The degree of commutativity of $G$ is positive if and only if $G$ is not exceptional.* **Theorem 16**. *[@LM Corollary 3.3.4 (i)][\[2\]]{#2 label="2"} Let $G$ be a $p$-group of maximal class of order $p^n$ with $n>p+1$. Then $\Omega_1(G_1)=G_{n-p+1}$.* **Theorem 17**. *[@LM Theorem 3.3.5][\[3\]]{#3 label="3"} Let $G$ be a $p$-group of maximal class of order $p^n$ with $n>p+1$. Then $G$ has positive degree of commutativity.* **Theorem 18**. *[@LM Theorem 3.2.11][\[4\]]{#4 label="4"} Let $G$ be a $p$-group of maximal class of order $p^n$ where $n$ is odd and $5\le n\le 2p+1$. Then $G$ has positive degree of commutativity.* **Theorem 19**. *Let $p$ be an odd prime and $G$ a $p$-group of maximal class of order $p^n$. If $n\ge \max\{2p-6,p+2\}$ then $G$ satisfies Quadratic Conjecture.* Otherwise, there is an F-module $V$ such that there is no quadratic element in $\Omega_1(Z(G))$. By Theorem [\[weakly\]](#weakly){reference-type="ref" reference="weakly"}, there is a weakly closed quadratic offender $E$ on $V$. If $E\trianglelefteq G$, then $G_{n-1}=Z(G)\le E$ and hence there is a quadratic element in $\Omega_1(Z(G))$. 
In the following, we may assume that $E<E^G$. By Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} (1), we may assume that $E^G$ is not abelian. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (3), $[Z_3(G),E]=1$. Hence $E\le C_G(G_{n-3}/G_{n-1})$. Since $n\ge p+2$, by Theorem [\[1\]](#1){reference-type="ref" reference="1"} $\&$ [\[3\]](#3){reference-type="ref" reference="3"}, $G$ is not exceptional. Hence $E\le G_1$. Furthermore, $E\le \Omega_1(G_1)$. By Theorem [\[2\]](#2){reference-type="ref" reference="2"}, $\Omega_1(G_1)=G_{n-p+1}$. It follows that $E\le G_{n-p+1}$. Since $E^G$ is not abelian, $1\ne [E,E^G]$. Since $G$ has positive degree of commutativity, $[G_{n-p+1},G_{n-p+1}]=[G_{n-p+1},G_{n-p+2}]\le G_{2n-2p+4}$. Since $n\ge 2p-6$, $2n-2p+4\ge n-2$. Hence $1\ne [E,E^G]\le [G_{n-p+1},G_{n-p+1}]\le G_{n-2}=Z_2(G)$. Thus there are $a\in E$ and $b\in E^G$ such that $1\ne c:=[a,b]\in Z_2(G)$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (2), $c\in C_G(a,b)$. By Lemma [Lemma 9](#quadratic element){reference-type="ref" reference="quadratic element"}, $c$ is a quadratic element. This contradicts Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (1). $\Box$ By Theorem [Theorem 19](#maximal class){reference-type="ref" reference="maximal class"}, if $G$ is a $5$-group of maximal class of order $5^n$ with $n\ge 7$, then $G$ satisfies Quadratic Conjecture; and if $G$ is a $7$-group of maximal class of order $7^n$ with $n\ge 9$, then $G$ satisfies Quadratic Conjecture. By Corollary [\[class 4\]](#class 4){reference-type="ref" reference="class 4"}, if $G$ is a $p$-group of maximal class of order $p^n$ with $n\le 5$, then $G$ satisfies Quadratic Conjecture. In the following, we prove that if $G$ is a $p$-group of maximal class of order $p^n$ with $6\le n\le 8$, then $G$ satisfies Quadratic Conjecture. 
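As a quick sanity check of the case analysis above, the threshold $\max\{2p-6,\,p+2\}$ of Theorem 19 can be evaluated at small odd primes (an illustration only, not part of the proof; the values match the remarks above):

```python
# Evaluate the threshold n >= max(2p - 6, p + 2) from Theorem 19.
def threshold(p):
    return max(2 * p - 6, p + 2)

assert threshold(3) == 5
assert threshold(5) == 7   # 5-groups: Theorem 19 applies once n >= 7
assert threshold(7) == 9   # 7-groups: Theorem 19 applies once n >= 9

# Together with the cases n <= 8, every order p^n is covered for p <= 7,
# since threshold(p) <= 9 for these primes.
assert all(threshold(p) <= 9 for p in (3, 5, 7))
```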
This will imply that Quadratic Conjecture holds if $G$ is a $p$-group of maximal class where $p\le 7$. **Theorem 20**. *Let $G$ be a $p$-group of maximal class with $|G|=p^6$. Then $G$ satisfies Quadratic Conjecture.* Otherwise, there is an F-module $V$ such that there is no quadratic element in $\Omega_1(Z(G))$. By Theorem [\[weakly\]](#weakly){reference-type="ref" reference="weakly"}, there is a weakly closed quadratic offender $E$ on $V$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (4), $G'=Z_4(G)$ normalizes $E$ and $[G',E]\le G_3\cap E$. By Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} (2), $[G',E]\ne 1$. Hence $E\cap G_3\ne 1$. Since there is no quadratic element in $Z(G)=G_5$, $[E\cap G_3,G']\le E\cap G_5=1$. It follows that $E\cap G_3\le Z(G')$. By [@GHM Theorem 6.2], $G'$ is not abelian. Hence $Z(G')\le G_4=Z_2(G)$. Thus $E\cap G_3\le Z_2(G)$. This contradicts Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (2). $\Box$ **Theorem 21**. *Let $p$ be an odd prime and $G$ a $p$-group of maximal class with $|G|=p^7$. Then $G$ satisfies Quadratic Conjecture.* Otherwise, there is an F-module $V$ such that there is no quadratic element in $\Omega_1(Z(G))$. By Theorem [\[weakly\]](#weakly){reference-type="ref" reference="weakly"}, there is a weakly closed quadratic offender $E$ on $V$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (4), $G_3=Z_4(G)$ normalizes $E$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (3), $[Z_3(G),E]=1$. Hence $E\le C_G(G_{4}/G_{6})$. By Theorem [\[1\]](#1){reference-type="ref" reference="1"} $\&$ [\[4\]](#4){reference-type="ref" reference="4"}, $G$ is not exceptional. Hence $E\le G_1$. It follows that $[G_3,E]\le E\cap [G_3,G_1]\le E\cap G_5=1$. Since $[G_3,G_3]\le G_7=1$, $G_3$ is abelian. By Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} (1), $E\not\le G_3$. Hence $G'\le E^{G}\le G_{1}$.
Since $[E,G_{3}]=1$, $[E^{G},G_{3}]=1$. It follows that $[G',G']=[G',G_{3}]\le [E^G,G_3]=1$. Hence $G'$ is abelian. This contradicts [@GHM Theorem 6.2]. $\Box$ **Lemma 22**. *Suppose that $A$ is an abelian normal subgroup of a non-abelian $p$-group $G$, $G/A=\langle xA\rangle$ is cyclic, $B\le A$ and $B\unlhd G$. Then $[B,G]\cong B/B\cap Z(G)$.* Since $G/A$ is cyclic, $G'\le A$. Let $\varphi: B\to [B,G]$ be the mapping defined by $\varphi(b)=[b,x]$ for $b\in B$. If $b_1,b_2\in B$, then $$\varphi(b_1b_2)=[b_1b_2,x]=[b_1,x]^{b_2}[b_2,x]=[b_1,x][b_2,x]=\varphi(b_1)\varphi(b_2),$$ since $[b_1,x], {b_2}\in A$ and $A$ is abelian. Hence $\varphi$ is a homomorphism. If $b\in B$ and $g\in G$, then $$[b,x]^g=[b^g,x^g]=[b^g,x[x,g]]=[b^g,[x,g]][b^g,x]^{[x,g]}=[b^g,x],$$ since $b^g,[x,g],[b^g,x]\in A$ and $A$ is abelian. It follows that $\hbox{\rm Im}\varphi$ is normal in $G$. Since $B$ centralizes $G/\hbox{\rm Im}\varphi$, $[B,G]\le \hbox{\rm Im}\varphi$. Hence $\hbox{\rm Im}\varphi=[B,G]$. Next, $\hbox{\rm Ker}\varphi=\{ b\in B\mid [b,x]=1\}=C_B(x)=B\cap Z(G)$. Hence $\hbox{\rm Im}\varphi=[B,G]\cong B/B\cap Z(G)$. $\Box$ **Theorem 23**. *Let $p$ be an odd prime and $G$ a $p$-group of maximal class with $|G|=p^8$. Then $G$ satisfies Quadratic Conjecture.* Otherwise, there is an F-module $V$ such that there is no quadratic element in $\Omega_1(Z(G))$. By Theorem [\[weakly\]](#weakly){reference-type="ref" reference="weakly"}, there is a weakly closed quadratic offender $E$ on $V$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (4), $G_4=Z_4(G)$ normalizes $E$. For convenience, let $G^*=C_G(G_{5}/G_{7})$, a maximal subgroup of $G$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (3), $[G_5,E]=[Z_3(G),E]=1$. Hence $$E\le C_G(G_{5}/G_{7})=G^*.$$ By [@GHM Theorem 6.2], $G'$ is not abelian. Hence $$Z(G')\le G_4.$$ We claim that $E\not\le G_3$. Otherwise, $E\le G_3$. In this case, $[E,G_3]\le [G_3,G_3]=[G_3,G_4]\le G_7$.
Since $[G_3,E,E]=1$, $G_3$ normalizes $E$. Hence $[E,G_3]\le E\cap G_7=1$. It follows that $E\le Z(G_3)$. This contradicts Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} (1). We claim that $E\not\le G'$. Otherwise, $E^G=G'$. In this case, $[E,G_4]\le [G',G_4]\le G_6$. Since $G_4$ normalizes $E$, $[E,G_4]\le E\cap G_6=1$. It follows that $[E^G,G_4]=1$. Hence $G_4=Z(G')$. Note that $G_3$ is abelian and $E^G=EG_3$. By Theorem [Theorem 13](#EK){reference-type="ref" reference="EK"}, there is a quadratic element in $\Omega_1(Z(G))$. This contradicts the hypothesis. By the above argument, $$\label{4.3} E^G=G^*=EG'.$$ Since $G'$ is not abelian, $[E^G,G_3]\ge [G',G_3]=G''\ne1$. Hence $$\label{4.4} [E,G_3]\neq1.$$ **Case 1.** $[E\cap G', G']\ne 1$. In this case, we claim that $$\label{4.5} E\cap G'\not\le G_4.$$ Otherwise, $1\ne [E\cap G', G']\le [G_4,G']\le G_6$. Hence there are $a\in E\cap G'$ and $b\in G'$ such that $1\ne c:=[a,b]\in G_6=Z_2(G)$. Since $c\in C_G(a,b)$, by Lemma [Lemma 9](#quadratic element){reference-type="ref" reference="quadratic element"}, $c$ is a quadratic element. This contradicts Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (1). Since $G_4$ normalizes $E$, $[E\cap G', G_4]\le G_6\cap E$. By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (2), $G_6\cap E=Z_2(G)\cap E=1$. Hence $[E\cap G',G_4]=1$. Furthermore, $$\label{4.6} [(E\cap G')^G, G_4]=1.$$ We claim that $E\cap G'\le G_3$. Otherwise, $(E\cap G')^G=G'$. By ([\[4.6\]](#4.6){reference-type="ref" reference="4.6"}), $G_4\le Z(G')$. It follows that $G_3$ is abelian. By Lemma [Lemma 22](#tec){reference-type="ref" reference="tec"}, $G''=[G',G_3]\cong G_3/G_3\cap Z(G')=G_3/G_4$. Hence $G''=G_7$. Since $1\ne [E\cap G',G']=G_7=Z(G)$, there are $a\in E\cap G'$ and $b\in G'$ such that $1\ne c:=[a,b]\in Z(G)$. By Lemma [Lemma 9](#quadratic element){reference-type="ref" reference="quadratic element"}, $c$ is a quadratic element.
This contradicts the hypothesis. By ([\[4.5\]](#4.5){reference-type="ref" reference="4.5"}), $$\label{4.7} (E\cap G')^G=G_3.$$ By ([\[4.6\]](#4.6){reference-type="ref" reference="4.6"}) and ([\[4.7\]](#4.7){reference-type="ref" reference="4.7"}), $[G_3,G_3]=[G_3,G_4]=[(E\cap G')^G,G_4]=1$. Hence $G_3$ is abelian. By Lemma [Lemma 22](#tec){reference-type="ref" reference="tec"}, $[G',G_4]\cong G_4/G_4\cap Z(G')=G_4/Z(G')$. Since $[G_5,E]=1$, $[G_5,E^G]=1$. Hence $G_5\le Z(G')$ and $|[G',G_4]|\le p$. Thus $$[G',G_4]\le G_7.$$ Let $K=EG_3$. Then $K/G_3\cong E/E\cap G_3=E/E\cap G'\cong EG'/G'=G^*/G'$ is cyclic. By Lemma [Lemma 22](#tec){reference-type="ref" reference="tec"}, $[K,G_4]\cong G_4/G_4\cap Z(K)$. Since $G_5\le Z(K)$, $|[K,G_4]|\le p$. It follows that $[G^*,G_4]=[KG',G_4]=[K,G_4][G',G_4]$ is of order at most $p^2$. Hence $$[E,G_4]\le [G^*,G_4]\le G_6.$$ Furthermore, $[E,G_4]\le G_6\cap E=1$. By ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}), $G_4\le Z(G^*)\le Z(G')$. By Lemma [Lemma 22](#tec){reference-type="ref" reference="tec"}, $G''=[G',G_3]\cong G_3/G_3\cap Z(G')=G_3/G_4$. Hence $G''=G_7$. Furthermore, $1\ne [E\cap G',G']=G''=G_7=Z(G)$. Hence there are $a\in E\cap G'$ and $b\in G'$ such that $1\ne c:=[a,b]\in Z(G)$. By Lemma [Lemma 9](#quadratic element){reference-type="ref" reference="quadratic element"}, $c$ is a quadratic element. This contradicts the hypothesis. **Case 2.** $[E\cap G', G']=1$. In this case, $E\cap G'\le Z(G^*)$ since $[E\cap G',E]=[E\cap G',G']=1$ and $G^*=EG'$. Hence $E\cap G'\le G_4$ and $$E\cap G'=E\cap G_4.$$ By Lemma [Lemma 10](#Z_i(G)){reference-type="ref" reference="Z_i(G)"} (2), $E\cap G_6=1$. It follows that $|E\cap G_4|\le p^2$. On the other hand, by Lemma [Lemma 8](#p3){reference-type="ref" reference="p3"}, $|E|\ge p^3$. It follows that $|E\cap G_4|=|E\cap G'|\ge p^2$. Thus $|E|=p^3$, $|E\cap G_4|=p^2$, $|E\cap G_5|=p$, and $E\cap G'\not\le G_5$. Hence $$Z(G^*)=G_4=Z(G'),$$ and $G_3$ is abelian. Note that $EG_3/G_3\cong E/E\cap G_3=E/E\cap G'\cong EG'/G'$ is cyclic. 
By Lemma [Lemma 22](#tec){reference-type="ref" reference="tec"}, $[EG_3,G_3]\cong G_3/G_3\cap Z(EG_3)=G_3/G_4$, and $[G',G_3]\cong G_3/G_3\cap Z(G')=G_3/G_4$. Hence $[E,G_3]$ and $[G',G_3]$ are of order $p$. It follows that $[G^*,G_3]=[EG',G_3]=[E,G_3][G',G_3]\le G_6$. In this case, $[G_3,E,E]=1$. Hence $G_3$ normalizes $E$. Thus $[E,G_3]\le E\cap G_6=1$. This contradicts ([\[4.4\]](#4.4){reference-type="ref" reference="4.4"}).$\Box$

D. J. Green, L. Héthelyi and M. Lilienthal, On Oliver's $p$-group conjecture, *Algebra Number Theory* **2** (2008), 969-977.

D. J. Green, L. Héthelyi and N. Mazza, On Oliver's $p$-group conjecture: II, *Math. Ann.* **347** (2010), 111-122.

D. J. Green, L. Héthelyi and N. Mazza, On a strong form of Oliver's $p$-group conjecture, *J. Algebra* **342** (2011), 1-15.

D. Gorenstein, R. Lyons and R. Solomon, The classification of the finite simple groups, Number 2, Mathematical Surveys and Monographs, vol. 40, American Mathematical Society, Providence, RI, 1996.

C. R. Leedham-Green and S. McKay, The structure of groups of prime power order, London Mathematical Society Monographs, New Series, vol. 27, Oxford University Press, Oxford, 2002.

U. Meierfrankenfeld and B. Stellmacher, The other $\mathscr{P}(G, V)$-theorem, *Rend. Sem. Mat. Univ. Padova* **115** (2006), 41-50.

B. Oliver, Equivalences of classifying spaces completed at odd primes, *Math. Proc. Cambridge Philos. Soc.* **137** (2004), 321-347.

[^1]: Corresponding author. e-mail: anlj@sxnu.edu.cn

[^2]: This work was supported by NSFC (No. 11971280 & 11771258)
--- bibliography: - main.bib --- **Optimal Averaging for Functional Linear Quantile Regression Models[^1]** Wenchao Xu$^1$, Xinyu Zhang$^{1,2}$ and Jeng-Min Chiou$^{3,4}$\ $^1$Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China\ $^2$International Institute of Finance, School of Management, University of Science and Technology of China, Hefei, China\ $^3$Institute of Statistics and Data Science, National Taiwan University, Taiwan\ $^4$Institute of Statistical Science, Academia Sinica, Taiwan ABSTRACT To reduce the dimensionality of the functional covariate, functional principal component analysis plays a key role; however, there is uncertainty about the number of principal components. Model averaging addresses this uncertainty by taking a weighted average of the predictions obtained from a set of candidate models. In this paper, we develop an optimal model averaging approach that selects the weights by minimizing a $K$-fold cross-validation criterion. We prove the asymptotic optimality of the selected weights in terms of minimizing the excess final prediction error, which greatly improves the usual asymptotic optimality in terms of minimizing the final prediction error in the literature. When the true regression relationship belongs to the set of candidate models, we establish the consistency of the averaged estimators. Numerical studies indicate that in most cases the proposed method performs better than other model selection and averaging methods, especially for extreme quantiles. *Keywords:* Functional data, model averaging, quantile regression, excess final prediction error, asymptotic optimality. # Introduction {#sec:intro} Functional data analysis deals with data in the form of functions and has become increasingly important over the past two decades. 
See, e.g., the monographs by @ramsay:2005, @ferratyandvieu:2006, @horvath2012, @zhang2013ANOVA, and @hsing2015 and review articles by @cuevas2014, [@Morris2015review], and @wang2016review. This paper focuses on quantile regression for scalar responses and functional-valued covariates. Specifically, we investigate the functional linear quantile regression (FLQR) model, introduced in @cardot2005, in which the conditional quantile for a fixed quantile index is modeled as a linear function of the functional covariate. Since the seminal work of @koenker1978QR, quantile regression has become one of the most important statistical methods for measuring the impact of covariates on response variables. An attractive feature of quantile regression is that it provides much more information about the conditional distribution of a response variable than traditional mean regression. See @koenker2005 for a comprehensive review of quantile regression and its applications. Functional quantile regression extends the standard quantile regression framework to account for functional covariates and has been extensively studied in the literature; see, for example, @cardot2005 [@kato2012; @li2022jma; @ferraty2005; @chen2012]. @cardot2005 considered a smoothing splines-based approach to represent the functional covariates for the FLQR model and established a convergence rate result. @kato2012 studied functional principal component analysis (FPCA)-based estimation for the FLQR model and established minimax optimal convergence rates for regression estimation and prediction. @li2022jma and @sang.shang.du considered statistical inference for the FLQR model based on FPCA and on a reproducing kernel Hilbert space framework, respectively. @ferraty2005 and @chen2012 estimated the conditional quantile function by inverting the corresponding conditional distribution function. 
@yao2017jma and @ma2019 studied a high-dimensional partially FLQR model that contains vector and functional-valued covariates. Since functional data are often intrinsically infinite-dimensional, FPCA is a key dimension reduction tool for functional data analysis, which has been widely used in functional regression [@cai2006; @yao2005FLR; @li2010; @kato2012; @muller2005]. When using the FPCA approach to study functional regression, a crucial step is to select the number of principal components retained for the covariate. This essentially corresponds to a model selection problem, which aims to find the best model among a set of candidate models. A variety of model selection criteria have been proposed in functional regression, including the fraction of variance explained [FVE, @chen2012], leave-one-curve-out cross-validation [@yao2005FLR; @kato2012], the Akaike information criterion [AIC, @kato2012; @muller2005; @yao2005FLR], and the Bayesian information criterion [BIC, @kato2012]. More recently, @li.wang.carroll.2013 proposed various information criteria to select the number of principal components in functional data and established their consistency. Model averaging combines a set of candidate models by taking a weighted average of them and can potentially reduce risk relative to model selection [@magnus2010comparison; @yuan2005; @peng2022; @liu2015joe; @liu2013ma]. @hansen2012 proposed jackknife (leave-one-out cross-validation) model averaging for least squares regression that selects the weights by minimizing a cross-validation criterion. Jackknife model averaging was further developed for models with dependent data [@zhang2013JMA] and for linear quantile regression [@lu2015; @wang.QR]. @cheng2015 extended the jackknife model averaging method to leave-$h$-out cross-validation criteria for forecasting in combination with factor-augmented regression, and @gao2016 further extended it to leave-subject-out cross-validation under a longitudinal data setting. 
Recently, model averaging methods have been developed for functional data analysis. For example, @zhang2018 developed a cross-validation model averaging estimator for functional linear regression in which both the response and the covariate are random functions. @zhang.zou2020 proposed a jackknife model averaging method for a generalized functional linear model. In this paper, we investigate the FLQR model using FPCA and model averaging methods. Instead of selecting the number of principal components [@kato2012], we propose model averaging by taking a weighted average of the estimator or prediction obtained from a set of candidate models. In addition, we use the principal components analysis through conditional expectation technique [@yao2005jasa] to estimate functional principal component scores. Therefore, our method can deal with both sparsely and densely observed functional covariates. We select the model averaging weights by minimizing the $K$-fold cross-validation prediction error. We show that when there do not exist correctly specified models (i.e., true models) in the set of candidate models, the proposed method is asymptotically optimal in the sense that its excess final prediction error is as small as that of the infeasible best possible prediction. In particular, we find that asymptotic optimality in the sense of minimizing the final prediction error, as studied in the literature such as @lu2015 and @wang.QR, does not make much sense when there exists at least one true model in the set of candidate models. In the situation with true candidate models, we show that the averaged estimators for the model parameters are consistent. Our Monte Carlo simulations indicate the superiority of the proposed approach compared with other model averaging and selection methods for extreme quantiles. We apply our method to predict the quantile of the maximum hourly log-return of the price of Bitcoin in a financial data example. The remainder of the paper is organized as follows. 
Section [2](#sec:flqr){reference-type="ref" reference="sec:flqr"} presents the model and FPCA-based estimation and conditional quantile prediction. Section [3](#sec:ma){reference-type="ref" reference="sec:ma"} proposes a model averaging procedure for conditional quantile prediction. Section [4](#sec:asy){reference-type="ref" reference="sec:asy"} establishes the asymptotic properties of the resulting model averaging estimator. Section [5](#sec:simu){reference-type="ref" reference="sec:simu"} conducts simulation studies to illustrate its finite sample performance. Section [6](#sec:realdata){reference-type="ref" reference="sec:realdata"} applies the proposed method to a real financial data example. Section [7](#sec:dis){reference-type="ref" reference="sec:dis"} concludes the paper. The proofs of the main results are relegated to the Supplementary Material. # Model and Estimation {#sec:flqr} ## Model Set-up Let $\{(Y_i, X_i)\}_{i=1}^n$ be an independent and identically distributed (i.i.d.) sample, where $Y_i$ is a scalar random variable and $X_i=\{X_i(t): t\in \mathcal{T}\}$ is a random function defined on a compact interval $\mathcal{T}$ of $\mathbb{R}$. Let $Q_{\tau}(X_i)$ denote the $\tau$th conditional quantile of $Y_i$ given $X_i$, where $\tau\in (0,1)$ is the quantile index. An FLQR model implies that $Q_{\tau}(X_i)$ can be written as a linear functional of $X_i$, that is, there exist a scalar constant $a(\tau)\in \mathbb{R}$ and a slope function $b(\cdot, \tau)\in L_2(\mathcal{T})$ such that $$\label{eq:model} Q_{\tau}(X_i)=a(\tau)+\int_{\mathcal{T}} b(t, \tau)X_i^c(t)\,dt,$$ where $L_2(\mathcal{T})$ is the set of square-integrable functions defined on $\mathcal{T}$, $X_i^c(t)=X_i(t)-\mu(t)$, and $\mu(t)=E\{X_i(t)\}$ is the mean function of $X_i$. For notational simplicity, we suppress the dependence of $a(\tau)$ and $b(\cdot,\tau)$ on $\tau$. 
As a result, we have the following FLQR model $$Y_i=Q_{\tau}(X_i)+\varepsilon_i=a+\int_{\mathcal{T}} b(t)X_i^c(t)\,dt+\varepsilon_i,$$ where $\varepsilon_i\equiv Y_i-Q_{\tau}(X_i)$ satisfies the quantile restriction $P(\varepsilon_i\leq 0|X_i)=\tau$ almost surely (a.s.). Denote the covariance function of $X_i(t)$ by $G(s,t)=\mathrm{Cov}\{X_i(s),X_i(t)\}$. Since $G$ is a symmetric and nonnegative definite function, Mercer's theorem [@hsing2015 Theorem 4.6.5] ensures that there exist a non-increasing sequence of eigenvalues $\kappa_1 \geq \kappa_2 \geq \cdots > 0$ and a sequence of corresponding eigenfunctions $\{\phi_j\}_{j=1}^\infty$, such that $$G(s,t)=\sum_{j=1}^\infty \kappa_j\phi_j(s)\phi_j(t).$$ Since $\{\phi_j\}_{j=1}^\infty$ is an orthonormal basis of $L_2(\mathcal{T})$, we have the following expansions in $L_2(\mathcal{T})$: $$X_i^c(t)=\sum_{j=1}^\infty \xi_{ij}\phi_j(t),\quad b(t)=\sum_{j=1}^\infty b_j\phi_j(t),$$ where $b_j=\int_{\mathcal{T}} b(t)\phi_j(t)\,dt$ and $\xi_{ij}=\int_\mathcal{T} X_i^c(t)\phi_j(t)\,dt$. The $\xi_{ij}$'s are called functional principal component scores and satisfy $E(\xi_{ij})=0$, $E(\xi_{ij}^2)=\kappa_j$, and $E(\xi_{ij}\xi_{ik})=0$ for all $j\neq k$. The expansion for $X_i^c(t)$ is called the Karhunen-Loève expansion [@hsing2015 Theorem 7.3.5]. Then, the model [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} can be transformed into a linear quantile regression model with an infinite number of covariates: $$\label{eq:infty} Q_{\tau}(X_i)=a+\sum_{j=1}^\infty b_j\xi_{ij}.$$ A nonlinear ill-posed inverse problem would be encountered when we want to fit the model [\[eq:infty\]](#eq:infty){reference-type="eqref" reference="eq:infty"} with a finite number of observations; see @kato2012 for a detailed discussion. We follow @kato2012 and address the ill-posed problem by truncating the model [\[eq:infty\]](#eq:infty){reference-type="eqref" reference="eq:infty"} to a feasible linear quantile regression model. 
It is common to keep only the first $J_n<\infty$ eigenbases, which leads to the following truncated model $$\label{eq:trun} Q_{\tau,J_n}(X_i)=a+\sum_{j=1}^{J_n} b_j\xi_{ij},$$ where we allow $J_n$ to increase to infinity as $n\to \infty$ to reduce approximation errors. The regression parameters $a, b_1,\ldots,b_{J_n}$ in model [\[eq:trun\]](#eq:trun){reference-type="eqref" reference="eq:trun"} need to be estimated. ## FPCA-Based Quantile Prediction In practice, we cannot observe the whole predictor trajectory $X_i(t)$. We typically assume that $X_i(t)$ is only observed at a discrete set of sampling points with additional measurement errors, i.e., we observe data $$\label{eq:disdata} U_{il}=X_i(T_{il})+\epsilon_{il},\quad T_{il}\in \mathcal{T};~l=1,\ldots,N_i,$$ where $\epsilon_{il}$ are i.i.d. measurement errors with mean zero and finite variance $\sigma_u^2$, and $X_i(\cdot)$ are independent of $\epsilon_{il}$ ($i=1,\ldots,n$). Depending on the number of observations $N_i$ within each curve, functional data are typically classified as sparse or dense; see @li2010pca and @zhang.wang:aos2016 for details. Many functions and parameters in the expressions given previously must be estimated from the data. We first use local linear smoothing to obtain the estimated mean function $\widehat{\mu}(t)$ and the estimated covariance function $\widehat{G}(s,t)$, which are well documented in the functional data analysis literature. For example, see @yao2005jasa [@yao2005FLR], @li2010pca, and @zhang.wang:aos2016 for the detailed calculations, which we omit here. Let the spectral decomposition of $\widehat{G}(s,t)$ be $\widehat{G}(s,t)=\sum_{j=1}^\infty \widehat{\kappa}_j\widehat{\phi}_j(s)\widehat{\phi}_j(t)$, where $\widehat{\kappa}_1\geq \widehat{\kappa}_2\geq \cdots \geq 0$ are the eigenvalues and $\{\widehat{\phi}_1,\widehat{\phi}_2,\ldots\}$ are corresponding eigenfunctions. 
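For intuition, the eigen-decomposition step can be reproduced on simulated dense data. The sketch below is an illustration under toy assumptions (two known components, no measurement error, plain Riemann-sum integration), not the local-linear smoothing procedure used in the paper: it discretizes the curves, eigendecomposes the sample covariance on the grid, and recovers the eigenvalues $\kappa_j$ and scores $\xi_{ij}$ by the integration formula.

```python
# Toy FPCA for densely observed curves (illustration only).  Curves are
# simulated with two known orthonormal components; the discretized sample
# covariance is eigendecomposed and scores recovered by numerical integration.
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 101
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]

phi1 = np.sqrt(2.0) * np.sin(2 * np.pi * t)          # orthonormal in L2[0,1]
phi2 = np.sqrt(2.0) * np.cos(2 * np.pi * t)
xi = rng.normal(size=(n, 2)) * np.array([2.0, 1.0])  # score sds 2 and 1
X = xi[:, [0]] * phi1 + xi[:, [1]] * phi2            # X_i = sum_j xi_ij phi_j

Xc = X - X.mean(axis=0)                    # centred curves
G_hat = Xc.T @ Xc / n                      # sample covariance on the grid
evals, evecs = np.linalg.eigh(G_hat)       # eigenvalues in ascending order
kappa = evals[::-1] * dt                   # eigenvalues of the integral operator
phi_hat = evecs[:, ::-1].T / np.sqrt(dt)   # eigenfunctions, L2-normalised

xi_hat = Xc @ phi_hat[:2].T * dt           # scores by the integration formula
print(np.round(kappa[:2], 2))              # close to the true values (4, 1)
```

The factor `dt` converts between matrix eigenvalues and eigenvalues of the covariance operator; up to sign, the first two rows of `phi_hat` recover $\phi_1$ and $\phi_2$.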
When $X_i(t)$ are sparsely observed, $\xi_{ij}$ cannot be estimated by the integration formula. Here, we use the principal components analysis through conditional expectation (PACE) technique proposed by @yao2005jasa to obtain an estimate of $\xi_{ij}$. To be more specific, let $\mathbf{U}_i=(U_{i1},\ldots,U_{iN_i})^{\top}$, $\widehat{\bm{\phi}}_{ij}=\{\widehat{\phi}_j(T_{i1}),\ldots,\widehat{\phi}_j(T_{iN_i})\}^{\top}$, $\widehat{\bm{\mu}}_i=\{\widehat{\mu}(T_{i1}),\ldots,\widehat{\mu}(T_{iN_i})\}^{\top}$, and $\widehat{\mathbf{\Sigma}}_{\mathbf{U}_i}=\{\widehat{G}(T_{il},T_{il'})+\widehat{\sigma}_u^2 \delta_{ll'}\}_{l,l'=1}^{N_i}$, where $\widehat{\sigma}_u^2$ is an estimate of $\sigma_u^2$ and $\delta_{ll'}=1$ if $l=l'$ and 0 otherwise. Then, the PACE estimator of $\xi_{ij}$ is given by $$\label{eq:score} \widehat{\xi}_{ij}=\widehat{\kappa}_j\widehat{\bm{\phi}}_{ij}^{\top}\widehat{\mathbf{\Sigma}}_{\mathbf{U}_i}^{-1} (\mathbf{U}_i-\widehat{\bm{\mu}}_i).$$ Note that this method can be applied to both sparsely and densely observed functional data. When $X_i(t)$ are densely observed, an alternative method is to use the "smoothing first, then estimation\" procedure proposed by @ramsay:2005, i.e., we smooth each curve first and then estimate $\xi_{ij}$ using the integration formula; see @li2010 for details. Let $\rho_{\tau}(e)=[\tau-\mathbf{1}\{e\leq 0\}]e$ be the check function, where $\mathbf{1}\{\cdot\}$ denotes the usual indicator function. The coefficients $a$ and $b_1,\ldots,b_{J_n}$ are estimated by $$\label{eq:qreg} (\widehat{a}_{J_n},\widehat{b}_{1,J_n},\ldots,\widehat{b}_{J_n,J_n})=\mathop{\mathrm{\arg \min}}_{a,b_1,\ldots,b_{J_n}} \sum_{i=1}^n \rho_{\tau} \left(Y_i-a-\sum_{j=1}^{J_n} b_j\widehat{\xi}_{ij}\right).$$ The optimization problem [\[eq:qreg\]](#eq:qreg){reference-type="eqref" reference="eq:qreg"} can be transformed into a linear programming problem and can be solved by using standard statistical software. 
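The transformation to a linear program can be made concrete. The sketch below solves the check-function minimization with `scipy.optimize.linprog` on simulated scores; it is an illustration under assumed toy data, not the paper's implementation. The trick is to split the residual into nonnegative slacks $u_i - v_i$ so that $\rho_\tau$ becomes the linear objective $\tau\mathbf{1}^\top u + (1-\tau)\mathbf{1}^\top v$.

```python
# Quantile regression on (estimated) FPC scores as a linear program:
#   min  tau*1'u + (1-tau)*1'v   s.t.   Z beta + u - v = y,  u, v >= 0.
# Simulated data; a sketch, not the paper's implementation.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
tau = 0.5
n, J = 300, 2
xi = rng.normal(size=(n, J))                       # stand-ins for FPC scores
y = 1.0 + xi @ np.array([2.0, -1.0]) + rng.normal(size=n)

Z = np.column_stack([np.ones(n), xi])              # design: intercept + scores
p = Z.shape[1]

c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
A_eq = np.hstack([Z, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)  # beta free, slacks >= 0
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")

beta_hat = res.x[:p]
print(np.round(beta_hat, 1))                       # near (1, 2, -1) at tau = 0.5
```

For $\tau=0.5$ with symmetric errors this is median regression, so the estimate should be close to the coefficients used in the simulation; other quantile indices shift only the intercept here.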
Then, the resulting estimator of $b(t)$ is given by $$\widehat{b}_{J_n}(t)=\sum_{j=1}^{J_n} \widehat{b}_{j,J_n}\widehat{\phi}_j(t).$$ Let $(Y_0,X_0)$ be an independent copy of $\{(Y_i,X_i)\}_{i=1}^n$. Similar to $X_i(t)$, $X_0(t)$ is observed with measurement error, i.e., only $\{(T_{0l},U_{0l})\}_{l=1}^{N_0}$ satisfying [\[eq:disdata\]](#eq:disdata){reference-type="eqref" reference="eq:disdata"} with $i=0$ is observed. As a result, the $\tau$th conditional quantile of $Y_0$ given $X_0$ is predicted by a plug-in method: $$\widehat{Q}_{\tau, J_n}(X_0)=\widehat{a}_{J_n}+\int_{\mathcal{T}} \widehat{b}_{J_n}(t)\left\{\widehat{X}_{0,J_n}(t)-\widehat{\mu}(t)\right\}\,dt=\widehat{a}_{J_n}+\sum_{j=1}^{J_n} \widehat{b}_{j,J_n}\widehat{\xi}_{0j},$$ where $\widehat{\xi}_{0j}$ is defined in [\[eq:score\]](#eq:score){reference-type="eqref" reference="eq:score"} with $i=0$ and $\widehat{X}_{0,J_n}(t)=\widehat{\mu}(t)+\sum_{j=1}^{J_n}\widehat{\xi}_{0j}\widehat{\phi}_j(t)$. The predicted conditional quantile $\widehat{Q}_{\tau, J_n}(X_0)$ depends on the number of principal components $J_n$, and thus the prediction performance varies with $J_n$. In the case where $\{X_i(t)\}_{i=0}^n$ are observed at dense discrete points without measurement errors, @kato2012 established the minimax optimal rates of convergence of $\widehat{b}_{J_n}(t)$ and $\widehat{Q}_{\tau, J_n}(X_0)$ with a proper choice of $J_n$. # Model Averaging for FLQR Model {#sec:ma} ## Weighted Average Quantile Prediction {#subsec:waqp} The choice of the best $J_n$ is usually made based on a model selection criterion such as FVE, AIC, or BIC. Let the prediction of $Q_{\tau}(X_0)$ from a fixed $J_n$ choice be $\widehat{Q}_{\tau, J_n}(X_0)$, where $J_n\in \mathcal{J}$ for a candidate set $\mathcal{J}$. Typically, $\mathcal{J}=\{J_L,J_L+1,\ldots,J_U\}$, where $J_L$ and $J_U$ are lower and upper bounds, respectively. Let $w_{J_n}$ be a weight assigned to the model with $J_n$ eigenbases. 
Let $\mathbf{w}$ be a vector formed by all such weights $w_{J_n}$. For example, $\mathbf{w}=(w_{J_L},w_{J_L+1},\ldots,w_{J_U})^{\top}$ if $\mathcal{J}=\{J_L,J_L+1,\ldots,J_U\}$. Let $\mathcal{W}=\{\mathbf{w}: w_{J_n}\geq 0, J_n\in \mathcal{J},\sum_{J_n\in \mathcal{J}} w_{J_n}=1\}$. Then, the model averaging prediction with weights $\mathbf{w}$ is $$\widehat{Q}_{\tau}(X_0,\mathbf{w})=\sum_{J_n\in \mathcal{J}} w_{J_n} \widehat{Q}_{\tau, J_n}(X_0).$$ We define the final prediction error (FPE, or the out-of-sample prediction error) used by @lu2015 and @wang.QR as follows $$\mathrm{FPE}_n(\mathbf{w})=E \Bigl\{\rho_{\tau} \Bigl(Y_0-\widehat{Q}_{\tau}(X_0,\mathbf{w})\Bigr)\Big|\mathcal{D}_n\Bigr\},$$ where $\mathcal{D}_n=\{(Y_i, T_{il},U_{il}): l=1,\ldots,N_i;i=1,\ldots,n\}$ is the observed sample. Let $F(\cdot|X_i)$ denote the conditional distribution function of $\varepsilon_i$ given $X_i$. Using the identity [@knight1998] $$\label{eq:knid} \rho_{\tau}(u-v)-\rho_{\tau}(u)=-v\psi_{\tau}(u)+\int_0^{v}\left[\mathbf{1}\{u\leq s\}-\mathbf{1}\{u\leq 0\}\right]\,ds,$$ where $\psi_{\tau}(u)=\tau-\mathbf{1}\{u\leq 0\}$, we have $$\begin{aligned} &\mathrm{FPE}_n(\mathbf{w})-E\{\rho_{\tau}(\varepsilon_0)\} \notag \\ & \quad =E \Bigl\{\rho_{\tau} \left(\varepsilon_0+Q_{\tau}(X_0)-\widehat{Q}_{\tau}(X_0,\mathbf{w})\right)-\rho_{\tau}(\varepsilon_0)\Big|\mathcal{D}_n\Bigr\} \notag \\ &\quad =E\left[\left\{Q_{\tau}(X_0)-\widehat{Q}_{\tau}(X_0,\mathbf{w})\right\} \psi_{\tau}(\varepsilon_0)\Bigl|\mathcal{D}_n\right] \notag \\ &\qquad +E\left(\int_0^{\widehat{Q}_{\tau}(X_0,\mathbf{w})-Q_{\tau}(X_0)} \left[\mathbf{1}\{\varepsilon_0\leq s\}-\mathbf{1}\{\varepsilon_0\leq 0\}\right]\,ds\biggl|\mathcal{D}_n\right) \notag \\ &\quad =E\left[\int_0^{\widehat{Q}_{\tau}(X_0,\mathbf{w})-Q_{\tau}(X_0)} \{F(s|X_0)-F(0|X_0)\}\,ds\biggl|\mathcal{D}_n\right]\geq 0.\label{eq:efpe}\end{aligned}$$ where the detailed derivation of the last equality of [\[eq:efpe\]](#eq:efpe){reference-type="eqref" 
reference="eq:efpe"} is given in the Supplementary Material. Since $E\{\rho_{\tau}(\varepsilon_0)\}$ is unrelated to $\mathbf{w}$, minimizing $\mathrm{FPE}_n(\mathbf{w})$ is equivalent to minimizing the following excess final prediction error (EFPE): $$\mathrm{EFPE}_n(\mathbf{w})=\mathrm{FPE}_n(\mathbf{w})-E\{\rho_{\tau}(\varepsilon_0)\}.$$ From [\[eq:efpe\]](#eq:efpe){reference-type="eqref" reference="eq:efpe"}, it is easy to see that $\mathrm{EFPE}_n(\mathbf{w})\geq 0$ for each $\mathbf{w}\in \mathcal{W}$. A similar predictive risk for quantile regression models is considered by @giessing2019. In addition, EFPE is closely related to the usual mean squared prediction error $R_n(\mathbf{w})=E[\{\widehat{Q}_{\tau}(X_0,\mathbf{w})-Q_{\tau}(X_0)\}^2|\mathcal{D}_n]$. To see this, note that under Conditions 2 and 5 in Section [4](#sec:asy){reference-type="ref" reference="sec:asy"}, it is easy to prove that $\underline{C}\leq \mathrm{EFPE}_n(\mathbf{w})/R_n(\mathbf{w})\leq \bar{C}$ for each $\mathbf{w}\in \mathcal{W}$, where $0<\underline{C}\leq \bar{C}<\infty$ are defined in Condition 5. In what follows, we develop a procedure to select the weights and establish its asymptotic optimality by examining its performance in terms of minimizing $\mathrm{EFPE}_n(\mathbf{w})$ rather than $\mathrm{FPE}_n(\mathbf{w})$. We also discuss the advantages of using EFPE over FPE. ## Weight Choice Criterion We use $K$-fold cross-validation to choose the weights. Specifically, we divide the dataset into $K\geq 2$ groups such that the sample size of each group is $M=\lfloor n/K \rfloor$, where $\lfloor a \rfloor$ denotes the greatest integer less than or equal to $a$. When $K=n$, we have $M=1$, and $K$-fold cross-validation becomes the leave-one-curve-out cross-validation used by @lu2015 and @wang.QR, which is not computationally feasible when $n$ is large. 
For each fixed $J_n\in \mathcal{J}$, we consider the $k$th step of the $K$-fold cross-validation ($k=1,\ldots,K$), where we leave out the $k$th group and use all of the remaining observations to obtain the intercept and the slope function estimates $\widehat{a}_{J_n}^{[-k]}$ and $\widehat{b}_{J_n}^{[-k]}(t)$, respectively. Now, for $i=(k-1)M+1,\ldots,kM$, we estimate $Q_{\tau}(X_i)$ by $$\widehat{Q}_{\tau, J_n}^{[-k]}(X_i)=\widehat{a}_{J_n}^{[-k]}+\int_{\mathcal{T}} \widehat{b}_{J_n}^{[-k]}(t)\left\{\widehat{X}_{i,J_n}(t)-\widehat{\mu}(t)\right\}\,dt,$$ where $\widehat{X}_{i,J_n}(t)=\widehat{\mu}(t)+\sum_{j=1}^{J_n} \widehat{\xi}_{ij}\widehat{\phi}_j(t)$. Note that $\widehat{\mu}(t)$ and $\widehat{X}_{i,J_n}(t)$ are obtained by using all data. Then, the weighted average prediction with weights $\mathbf{w}$ is $$\widehat{Q}_{\tau}^{[-k]}(X_i,\mathbf{w})=\sum_{J_n\in \mathcal{J}} w_{J_n} \widehat{Q}_{\tau, J_n}^{[-k]}(X_i).$$ The $K$-fold cross-validation criterion is formulated as $$\mathrm{CV}_K(\mathbf{w})=\frac{1}{n}\sum_{k=1}^K \sum_{m=1}^M \rho_{\tau} \left(Y_{(k-1)M+m}-\widehat{Q}_{\tau}^{[-k]}(X_{(k-1)M+m},\mathbf{w})\right).$$ The resulting weight vector is obtained as $$\label{eq:what} \widehat{\mathbf{w}}=\mathop{\mathrm{\arg \min}}_{\mathbf{w}\in \mathcal{W}} \mathrm{CV}_K(\mathbf{w}).$$ As a result, the proposed model average $\tau$th conditional quantile prediction of $Y_0$ given $X_0$ is $\widehat{Q}_{\tau}(X_0,\widehat{\mathbf{w}})$. 
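The weight search over the simplex is itself a quantile-type minimization and can be solved exactly as a linear program (the paper formalizes this reformulation next). A toy sketch, assuming simulated stand-ins for the cross-validated predictions (an illustration only, not the paper's implementation):

```python
# Toy weight choice: minimise the K-fold CV check loss over the simplex by
# linear programming.  q[:, j] stands in for the cross-validated predictions
# of candidate model j; they are simulated here with different accuracies.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
tau, n, nmod = 0.5, 200, 3
truth = rng.normal(size=n)                          # stand-in for Q_tau(X_i)
y = truth + rng.normal(size=n)
q = truth[:, None] + np.array([0.1, 1.0, 2.0]) * rng.normal(size=(n, nmod))

# min  tau*1'u + (1-tau)*1'v  s.t.  q w + u - v = y,  1'w = 1,  w, u, v >= 0
c = np.concatenate([np.zeros(nmod), tau * np.ones(n), (1 - tau) * np.ones(n)])
A_eq = np.hstack([q, np.eye(n), -np.eye(n)])
A_eq = np.vstack([A_eq, np.concatenate([np.ones(nmod), np.zeros(2 * n)])])
b_eq = np.concatenate([y, [1.0]])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")

w_hat = res.x[:nmod]
print(np.round(w_hat, 2))       # weights sum to 1; most mass on the best model
```

The only changes relative to plain quantile regression are the nonnegativity of the weights and the extra equality row enforcing $\sum_{J_n} w_{J_n}=1$.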
The constrained minimization problem [\[eq:what\]](#eq:what){reference-type="eqref" reference="eq:what"} can be reformulated as a linear programming problem, namely $$\begin{aligned} &\min\nolimits_{\mathbf{w},\mathbf{u},\mathbf{v}}\quad \tau \bm{1}_n^{\top} \mathbf{u}+(1-\tau) \bm{1}_n^{\top} \mathbf{v} \\ &\text{s.t.}\begin{cases} \sum_{J_n\in \mathcal{J}} w_{J_n} \widehat{Q}_{\tau, J_n}^{[-k]}(X_{(k-1)M+m})+u_{(k-1)M+m}-v_{(k-1)M+m}=Y_{(k-1)M+m}, \\ \sum_{J_n\in \mathcal{J}} w_{J_n}=1; w_{J_n}\geq 0~\text{for all}~ J_n\in \mathcal{J}, \\ u_{(k-1)M+m}\geq 0; v_{(k-1)M+m}\geq 0~\text{for all}~k=1,\ldots,K~\text{and}~m=1,\ldots,M, \end{cases}\end{aligned}$$ where $\mathbf{u}=(u_1,\ldots,u_n)^{\top}$ and $\mathbf{v}=(v_1,\ldots,v_n)^{\top}$ are the positive and negative slack variables and $\bm{1}_n$ is the $n\times 1$ vector of ones. This linear programming can be implemented in standard software through the simplex method or the interior point method. For example, we may use the linprog command in MATLAB. The MATLAB code for our method is available from <https://github.com/wcxstat/MAFLQR>. ## Choice of the Candidate Set {#subsec:choice} In practice, the candidate set $\mathcal{J}$ can be chosen heuristically, coupled with a model selection criterion. Let $d$ be a fixed small positive integer, and consider the set $\mathcal{J}=\{j\in \mathbb{N}:|\widehat{J}_n-j|\leq d\}$, where $\widehat{J}_n$ can be selected by the FVE, AIC, or BIC criterion. 
Specifically, following @chen2012, FVE is defined as $$\mathrm{FVE}_{J_n}=\sum_{j=1}^{J_n} \widehat{\kappa}_j \Biggl/ \sum_{j=1}^{\infty} \widehat{\kappa}_j,$$ and following @kato2012 and @lu2015, for model $J_n$, the AIC and BIC are respectively defined as $$\begin{aligned} \mathrm{AIC}_{J_n}&=2n\log \left\{\frac{1}{n}\sum_{i=1}^n \rho_{\tau}\left(Y_i-a-\sum_{j=1}^{J_n} \widehat{b}_{j,J_n}\widehat{\xi}_{ij}\right)\right\}+2(J_n+1), \\ \mathrm{BIC}_{J_n}&=2n\log \left\{\frac{1}{n}\sum_{i=1}^n \rho_{\tau}\left(Y_i-a-\sum_{j=1}^{J_n} \widehat{b}_{j,J_n}\widehat{\xi}_{ij}\right)\right\}+(J_n+1)\log n.\end{aligned}$$ Given a threshold $\gamma$ (e.g., 0.90 or 0.95), FVE selects $\widehat{J}_n$ as the smallest integer that satisfies $\mathrm{FVE}_{J_n} \geq \gamma$; AIC and BIC select $\widehat{J}_n$ by minimizing $\mathrm{AIC}_{J_n}$ and $\mathrm{BIC}_{J_n}$, respectively. We refer to the resultant procedure as the $d$-divergence model averaging method. When $d=0$, it reduces to the selection criterion without model averaging. In Section [5](#sec:simu){reference-type="ref" reference="sec:simu"}, we compare the numerical performance of various $d$-divergence model averaging methods under different settings of $\mathcal{J}$. For the following theoretical studies, we utilize a predetermined $\mathcal{J}$ in our method. # Asymptotic Results {#sec:asy} ## Asymptotic Weight Choice Optimality {#subsec:opt} In this subsection, we present an important result on the asymptotic optimality of the selected weights. Denote by $|\mathcal{J}|$ the cardinality of the set $\mathcal{J}$, which is also called the number of candidate models. We first introduce some conditions as follows. All limiting processes are studied with respect to $n\to \infty$. 
Condition 1 : There exist a constant $a_{J_n}^*$, functions $b_{J_n}^*(t)$, $X_{i,J_n}^*(t)$, and $X_{0,J_n}^*(t)$, and a series $c_n\to 0$ such that $\widehat{a}_{J_n}-a_{J_n}^*=O_p(c_n)$, $\widehat{b}_{J_n}(t)-b_{J_n}^*(t)=O_p(c_n)$, $\widehat{\mu}(t)-\mu(t)=O_p(c_n)$, $\widehat{X}_{i,J_n}(t)-X_{i,J_n}^*(t)=O_p(c_n)$, and $E\{|\widehat{X}_{0,J_n}(t)-X_{0,J_n}^*(t)||\mathcal{D}_n\}=O_p(c_n)$ hold uniformly for $t\in \mathcal{T}$, $J_n\in \mathcal{J}$, and $i\in \{1,\ldots,n\}$. Let $Q_{\tau, J_n}^*(X_i)=a_{J_n}^*+\int_{\mathcal{T}} b_{J_n}^*(t)\{X_{i,J_n}^*(t)-\mu(t)\}\,dt$ and $Q_{\tau}^*(X_i,\mathbf{w})=\sum_{J_n\in \mathcal{J}} w_{J_n} Q_{\tau, J_n}^*(X_i)$, $i=0,1,\ldots,n$. Define $$\begin{aligned} \mathrm{EFPE}_n^*(\mathbf{w})&=E \{\rho_{\tau} \left(Y_0-Q_{\tau}^*(X_0,\mathbf{w})\right)\}-E\{\rho_{\tau}(\varepsilon_0)\} \\ &=E\left[\int_0^{Q_{\tau}^*(X_0,\mathbf{w})-Q_{\tau}(X_0)} \{F(s|X_0)-F(0|X_0)\}\,ds\right]\geq 0,\end{aligned}$$ where the second equality is due to Knight's identity [\[eq:knid\]](#eq:knid){reference-type="eqref" reference="eq:knid"}. Let $\eta_n=n\inf_{\mathbf{w}\in \mathcal{W}} \mathrm{EFPE}_n^*(\mathbf{w})$. Condition 2 : There exists a constant $\varrho>0$ such that $|Q_{\tau,J_n}^*(X_i)-Q_{\tau}(X_i)|\leq \varrho$ a.s. uniformly for $i\in\{0,1,\ldots,n\}$ and $J_n\in \mathcal{J}$. Condition 3 : There exists a series $g_n\to 0$ such that $\widehat{Q}_{\tau}(X_{(k-1)M+m},\mathbf{w})-\widehat{Q}_{\tau}^{[-k]}(X_{(k-1)M+m},\mathbf{w})=O_p(g_n)$ holds uniformly for $\mathbf{w}\in \mathcal{W}$, $k\in \{1,\ldots,K\}$, and $m\in\{1,\ldots,M\}$. Condition 4 : $\eta_n^{-1}n^{1/2}|\mathcal{J}|^{1/2}=o(1)$, $\eta_n^{-1}nc_n=o(1)$, and $\eta_n^{-1}ng_n=o(1)$. Condition 1 describes the convergence rates of the estimators $\widehat{a}_{J_n}$, $\widehat{b}_{J_n}(t)$, $\widehat{\mu}(t)$, and $\widehat{X}_{i,J_n}(t)$ under each model. They need not have the same convergence rate. 
For example, when $\widehat{a}_{J_n}-a_{J_n}^*=O_p(n^{-\alpha_1})$, $\widehat{b}_{J_n}(t)-b_{J_n}^*(t)=O_p(n^{-\alpha_2})$, $\widehat{\mu}(t)-\mu(t)=O_p(n^{-\alpha_3})$, $\widehat{X}_{i,J_n}(t)-X_{i,J_n}^*(t)=O_p(n^{-\alpha_4})$, and $E\{|\widehat{X}_{0,J_n}(t)-X_{0,J_n}^*(t)||\mathcal{D}_n\}=O_p(n^{-\alpha_5})$, $c_n$ can be $n^{-\min_{i=1,\ldots, 5} \alpha_i}$. This condition is very mild and is verified in the functional data literature; see, e.g., @yao2005jasa [@yao2005FLR], @kato2012, and @li2022jma. Note that when model $J_n$ is a true model, $a_{J_n}^*$, $b_{J_n}^*(t)$, $X_{i,J_n}^*(t)$, and $X_{0,J_n}^*(t)$ are naturally the true parameter values. Condition 2 excludes some pathological cases in which $Q_{\tau,J_n}^*(X_i)-Q_{\tau}(X_i)$ explodes, and it also implies that $\mathrm{EFPE}_n^*(\mathbf{w})\leq \varrho$ for any $\mathbf{w}\in \mathcal{W}$. Condition 3 is similar to the condition 6 of @zhang2018, which requires the difference between the regular prediction $\widehat{Q}_{\tau}(X_{(k-1)M+m},\mathbf{w})$ and the leave-$M$-out prediction $\widehat{Q}_{\tau}^{[-k]}(X_{(k-1)M+m},\mathbf{w})$ to decrease with the rate $g_n$ as the sample size increases. Since $$\begin{aligned} &\widehat{Q}_{\tau}(X_{(k-1)M+m},\mathbf{w})-\widehat{Q}_{\tau}^{[-k]}(X_{(k-1)M+m},\mathbf{w}) \\ &\quad =\sum_{J_n\in \mathcal{J}} w_{J_n} \left\{\widehat{Q}_{\tau, J_n}(X_{(k-1)M+m})-\widehat{Q}_{\tau, J_n}^{[-k]}(X_{(k-1)M+m})\right\} \\ &\quad =\sum_{J_n\in \mathcal{J}} w_{J_n} \left(\widehat{a}_{J_n}-\widehat{a}_{J_n}^{[-k]}\right)+\sum_{J_n\in \mathcal{J}} w_{J_n} \int_{\mathcal{T}} \left\{\widehat{b}_{J_n}(t)-\widehat{b}_{J_n}^{[-k]}(t)\right\} \left\{\widehat{X}_{(k-1)M+m,J_n}(t)-\widehat{\mu}(t)\right\}\,dt,\end{aligned}$$ the explicit expression of $g_n$ can be obtained by analyzing the orders of $\max_{J_n,k}|\widehat{a}_{J_n}-\widehat{a}_{J_n}^{[-k]}|$ and $\sup_{t\in \mathcal{T}}\max_{J_n,k}|\widehat{b}_{J_n}(t)-\widehat{b}_{J_n}^{[-k]}(t)|$. 
Condition 4 requires that $\eta_n$ grows at a rate no slower than $\max\{n^{1/2}|\mathcal{J}|^{1/2}, nc_n, ng_n\}$. Note that when there is at least one true model included in the set of candidate models, $\eta_n\equiv 0$. Therefore, Condition 4 implies that all candidate models are misspecified. This condition is similar to condition (21) of @zhang2013JMA, condition (7) of @ando2014, and Condition 3 of @zhang2018. The first part of Condition 4 also implies that the number of candidate models $|\mathcal{J}|$ is allowed to grow to infinity as the sample size increases. Condition 4 thus rules out the fortunate situation in which one of the candidate models happens to be true. In the next subsection, we show that when there is at least one true model included in the set of candidate models, our averaged regression estimation is consistent. We now establish the asymptotic optimality of the selected weights $\widehat{\mathbf{w}}$ in terms of minimizing $\mathrm{EFPE}_n(\mathbf{w})$ in the following theorem. [\[thm:asyopt\]]{#thm:asyopt label="thm:asyopt"} Suppose Conditions 1--4 hold. Then, $$\frac{\mathrm{EFPE}_n(\widehat{\mathbf{w}})}{\inf_{\mathbf{w}\in \mathcal{W}} \mathrm{EFPE}_n(\mathbf{w})}=1+O_p\left\{\eta_n^{-1}n\left(c_n+g_n+n^{-1/2}|\mathcal{J}|^{1/2}\right)\right\}=1+o_p(1).$$ Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} shows that the selected weight vector $\widehat{\mathbf{w}}$ is asymptotically optimal in the sense that its EFPE is asymptotically identical to that of the infeasible best weight vector minimizing $\mathrm{EFPE}_n(\mathbf{w})$. The asymptotic optimality in Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} is stronger than the existing results in the literature such as @hansen2012, @lu2015, and @zhang2018, because it provides an explicit convergence rate, which those results do not.
The rate is determined by $\eta_n$, the convergence rate $c_n$ of the estimators under each candidate model, the difference $g_n$ between the regular prediction $\widehat{Q}_{\tau}(X_{(k-1)M+m},\mathbf{w})$ and the leave-$M$-out prediction $\widehat{Q}_{\tau}^{[-k]}(X_{(k-1)M+m},\mathbf{w})$, and the number of candidate models $|\mathcal{J}|$. @lu2015 and @wang.QR established asymptotic optimality in terms of minimizing $\mathrm{FPE}_n(\mathbf{w})$ instead of $\mathrm{EFPE}_n(\mathbf{w})$. We obtain a similar result in the following corollary. [\[cor:fpe\]]{#cor:fpe label="cor:fpe"} Suppose Conditions 1--3 hold. If $n^{-1}|\mathcal{J}|\to 0$ as $n\to \infty$, then $$\frac{\mathrm{FPE}_n(\widehat{\mathbf{w}})}{\inf_{\mathbf{w}\in \mathcal{W}} \mathrm{FPE}_n(\mathbf{w})}=1+O_p\left(c_n+g_n+n^{-1/2}|\mathcal{J}|^{1/2}\right)=1+o_p(1).$$ Corollary [\[cor:fpe\]](#cor:fpe){reference-type="ref" reference="cor:fpe"} can be proved by arguments analogous to those used in the proof of Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"}, and thus we omit its proof. Since $\inf_{\mathbf{w}\in \mathcal{W}} \mathrm{FPE}_n(\mathbf{w})\geq E\{\rho_{\tau}(\varepsilon_0)\}>0$ from [\[eq:efpe\]](#eq:efpe){reference-type="eqref" reference="eq:efpe"}, the asymptotic optimality in Corollary [\[cor:fpe\]](#cor:fpe){reference-type="ref" reference="cor:fpe"} is implied by that in Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} together with $n^{-1}|\mathcal{J}|\to 0$ as $n\to \infty$. Therefore, the asymptotic optimality in Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} improves on the results of @lu2015 and @wang.QR. This is an important contribution to model averaging in quantile regression. From [\[eq:thpf1\]](#eq:thpf1){reference-type="eqref" reference="eq:thpf1"} in the Supplementary Material, we have $\inf_{\mathbf{w}\in \mathcal{W}} \mathrm{EFPE}_n(\mathbf{w})=\eta_n n^{-1}\{1+o_p(1)\}$.
Compared to the convergence rate in Corollary [\[cor:fpe\]](#cor:fpe){reference-type="ref" reference="cor:fpe"}, Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} has the extra multiple $\eta_n^{-1}n$. The reason is that $\inf_{\mathbf{w}\in \mathcal{W}} \mathrm{EFPE}_n(\mathbf{w})$ may converge to zero in probability (under scenario (i) or (ii) discussed in Subsection [3.1](#subsec:waqp){reference-type="ref" reference="subsec:waqp"}). The results of Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} and Corollary [\[cor:fpe\]](#cor:fpe){reference-type="ref" reference="cor:fpe"} are for a single chosen quantile $\tau$. Let $\mathcal{U}$ be a given subset of $(0, 1)$ that is away from 0 and 1. Typical examples of $\mathcal{U}$ are $\mathcal{U}=\{\tau_1,\ldots,\tau_d\}$ with $0<\tau_1<\cdots<\tau_d<1$ and $\mathcal{U}=[\tau_L,\tau_U]$ with $0<\tau_L<\tau_U<1$. If Conditions 1--3 hold uniformly for $\tau\in \mathcal{U}$, it is easy to show that the asymptotic optimality in Theorem [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} and Corollary [\[cor:fpe\]](#cor:fpe){reference-type="ref" reference="cor:fpe"} hold uniformly for $\tau\in \mathcal{U}$. We finalize this subsection by discussing the advantage of using EFPE over FPE. Observe that for $J_n\in \mathcal{J}$, $$\begin{aligned} & E\left\{\left|\widehat{Q}_{\tau,J_n}(X_0)-Q_{\tau}(X_0)\right|\Big|\mathcal{D}_n\right\} \notag \\ & \quad \leq |\widehat{a}_{J_n}-a|+\int_{\mathcal{T}} \left|\widehat{b}_{J_n}(t)-b(t)\right| E\left\{\left|\widehat{X}_{0,J_n}(t)-\widehat{\mu}(t)\right|\Big|\mathcal{D}_n\right\}\,dt \notag \\ &\qquad +\int_{\mathcal{T}} |b(t)| \left[E\left\{\left|\widehat{X}_{0,J_n}(t)-X_0(t)\right|\Big|\mathcal{D}_n\right\}+|\widehat{\mu}(t)-\mu(t)|\right] \,dt. 
\label{eq:Qtau}\end{aligned}$$ Let $J^*$ correspond to a true model (we allow $J^*=\infty$), i.e., $$\label{truemodel} b(t)=\sum_{j=1}^{J^*} b_j\phi_j(t).$$ Note that this may not be the only true model. For example, any model containing model $J^*$ is also true. We consider two scenarios: - there is at least one true model (say $J_n$) in the set of candidate models; - all candidate models are misspecified, but one of the candidate models (say $J_n$) converges to a true model at some rate. Under scenario (i) or (ii), one can prove under some conditions that $\widehat{a}_{J_n}-a=o_p(1)$, $\widehat{b}_{J_n}(t)-b(t)=o_p(1)$, $E\{|\widehat{X}_{0,J_n}(t)-X_{0,J_n}(t)||\mathcal{D}_n\}=o_p(1)$, and $\widehat{\mu}(t)-\mu(t)=o_p(1)$ hold uniformly for $t\in \mathcal{T}$, where $X_{0,J_n}(t)=\mu(t)+\sum_{j=1}^{J_n} \xi_{0j}\phi_j(t)$; see, e.g., @yao2005jasa, @kato2012, and @li2022jma. Combined with [\[eq:Qtau\]](#eq:Qtau){reference-type="eqref" reference="eq:Qtau"}, these rates yield $E\{|\widehat{Q}_{\tau,J_n}(X_0)-Q_{\tau}(X_0)||\mathcal{D}_n\}\to 0$ in probability, i.e., $Q_{\tau}(X_0)$ can be consistently predicted by a single candidate model. This, along with [\[eq:efpe\]](#eq:efpe){reference-type="eqref" reference="eq:efpe"}, implies that under scenario (i) or (ii), $\inf_{\mathbf{w}\in \mathcal{W}} \mathrm{EFPE}_n(\mathbf{w})\to 0$ in probability. Compared to $\mathrm{EFPE}_n(\mathbf{w})$, $\mathrm{FPE}_n(\mathbf{w})$ has the extra term $E\{\rho_{\tau}(\varepsilon_0)\}$, which is a positive number unrelated to $n$ and $\mathbf{w}$. Therefore, under scenario (i) or (ii), $E\{\rho_{\tau}(\varepsilon_0)\}$ is the dominant term in FPE, which renders any asymptotic optimality built on FPE meaningless. Existing literature such as @lu2015 and @wang.QR has not noted this phenomenon.
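The contrast between FPE and EFPE can be checked numerically. The following minimal sketch (our own toy setup, with a uniform error whose $\tau$th quantile is zero) verifies that the excess expected check loss of a predictor shifted by $\delta$ equals the Knight-identity integral $\int_0^{\delta}\{F(s)-F(0)\}\,ds$, while the constant term $E\{\rho_{\tau}(\varepsilon)\}$ in FPE stays bounded away from zero:

```python
import numpy as np

def rho(u, tau):
    """Check loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0.0))

rng = np.random.default_rng(0)
tau, delta = 0.3, 0.2
# Error whose tau-th quantile is 0: eps ~ Uniform(-tau, 1 - tau),
# so F(s) = s + tau near 0 and F(0) = tau.
eps = rng.uniform(-tau, 1.0 - tau, size=2_000_000)

# Excess expected check loss of a predictor shifted by delta ...
excess_mc = np.mean(rho(eps - delta, tau)) - np.mean(rho(eps, tau))
# ... equals the Knight-identity integral int_0^delta {F(s) - F(0)} ds,
# which here is int_0^delta s ds = delta^2 / 2.
excess_knight = delta ** 2 / 2.0

print(excess_mc, excess_knight)   # both close to 0.02
print(np.mean(rho(eps, tau)))     # E{rho_tau(eps)} = tau(1 - tau)/2 = 0.105 > 0
```

Shifting a correctly specified predictor costs only the small Knight term, yet the positive constant $E\{\rho_{\tau}(\varepsilon)\}$ dominates FPE, which is exactly the phenomenon discussed above.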
## Estimation Consistency Define the model averaging estimators of $a$ and $b(t)$ with weights $\mathbf{w}$ as $$\widehat{a}_{\mathbf{w}}=\sum_{J_n\in \mathcal{J}} w_{J_n}\widehat{a}_{J_n}\quad\text{and}\quad \widehat{b}_{\mathbf{w}}(t)=\sum_{J_n\in \mathcal{J}} w_{J_n}\widehat{b}_{J_n}(t),$$ respectively. Let $J^*<\infty$ correspond to a true model, defined by ([\[truemodel\]](#truemodel){reference-type="ref" reference="truemodel"}). Assume that there exists a sequence $d_n\to 0$ such that $$\label{eq:tmrate} \widehat{a}_{J^*}-a=O_p(d_n)\quad\text{and}\quad \int_{\mathcal{T}} \left\{\widehat{b}_{J^*}(t)-b(t)\right\}^2\,dt=O_p(d_n^2).$$ This states that the estimators $\widehat{a}_{J^*}$ and $\widehat{b}_{J^*}(t)$ are consistent, as verified in the literature [see, e.g., @li2022jma; @kato2012]. In the ideal case where the functional covariate $X_i(t)$ is fully observed, $d_n=n^{-1/2}$ for fixed $J^*$. In the more common case where the curves are observed at discrete points, $d_n$ may be slower than $n^{-1/2}$, since $\mu(t)$ and $G(s,t)$ must first be estimated by some nonparametric smoothing methods. Let $f(\cdot|X_i)$ denote the conditional density function of $\varepsilon_i$ given $X_i$, and let $\widehat{X}_{i}(t)=\widehat{\mu}(t)+\sum_{j=1}^{\infty} \widehat{\xi}_{ij}\widehat{\phi}_j(t)$, $i=1,\ldots,n$. We further impose the following conditions. Condition 5 : There exist constants $0<\underline{C}\leq \bar{C}<\infty$ such that $\underline{C}\leq f(s|X_i)\leq \bar{C}$ holds uniformly for $|s|\leq \varrho$ and $i=1,\ldots,n$, where $\varrho$ is defined in Condition 2. Condition 6 : $\widehat{X}_i(t)$ and $\widehat{\mu}(t)$ are $O_p(1)$ uniformly for $i\in \{1,\ldots,n\}$ and $t\in \mathcal{T}$.
Condition 7 : There exists $\lambda_{\min}>0$ such that, uniformly for $\mathbf{w}\in \mathcal{W}$ and for almost all $i\in \{1,\ldots,n\}$, $$\left[\widehat{a}_{\mathbf{w}}-a+\int_{\mathcal{T}} \left\{\widehat{b}_{\mathbf{w}}(t)-b(t)\right\} \left\{\widehat{X}_{i}(t)-\widehat{\mu}(t)\right\}\,dt\right]^2 \geq \lambda_{\min} \left[(\widehat{a}_{\mathbf{w}}-a)^2+\int_{\mathcal{T}} \left\{\widehat{b}_{\mathbf{w}}(t)-b(t)\right\}^2\,dt\right].$$ Condition 5 is mild, and it allows conditional heteroskedasticity in the FLQR model. Condition 6 is mild and the same as Condition 8 of @zhang2018, which excludes some pathological cases in which $\widehat{X}_i(t)$ and $\widehat{\mu}(t)$ explode. Condition 7 is similar to Condition 7 of @zhang2018, which states that most $\widehat{X}_i(t)$'s do not degenerate in the sense that their inner products with $\widehat{b}_{\mathbf{w}}(t)-b(t)$ do not approach zero. We now describe the performance of the weighted estimators when there is at least one true model among the candidate models. [\[thm:cons\]]{#thm:cons label="thm:cons"} When there is at least one true model among the candidate models, under Conditions 1--3 and 5--7 and the assumption $n^{-1}|\mathcal{J}|\to 0$ as $n\to \infty$, the weighted average estimators $\widehat{a}_{\widehat{\mathbf{w}}}$ and $\widehat{b}_{\widehat{\mathbf{w}}}(t)$ satisfy $$(\widehat{a}_{\widehat{\mathbf{w}}}-a)^2+\int_{\mathcal{T}} \left\{\widehat{b}_{\widehat{\mathbf{w}}}(t)-b(t)\right\}^2 \,dt=O_p\left(d_n^2+c_n+g_n+n^{-1/2}|\mathcal{J}|^{1/2}\right),$$ where $c_n$ and $g_n$ are defined in Conditions 1 and 3, respectively. Theorem [\[thm:cons\]](#thm:cons){reference-type="ref" reference="thm:cons"} establishes a convergence rate for $\widehat{a}_{\widehat{\mathbf{w}}}$ and $\widehat{b}_{\widehat{\mathbf{w}}}(t)$ when there is at least one true model among the candidate models.
In this situation, the rate is determined by two parts: the convergence rate $d_n$ of $(\widehat{a}_{J^*}, \widehat{b}_{J^*})$ under the true model and the extra term $c_n+g_n+n^{-1/2}|\mathcal{J}|^{1/2}$, where the latter is caused by the estimated weights. This rate may not be optimal and might be improved with further effort; we leave this as a direction for future research. Note that if [\[eq:tmrate\]](#eq:tmrate){reference-type="eqref" reference="eq:tmrate"} and Conditions 1--3 and 7 hold uniformly for $\tau\in \mathcal{U}$, we can also show that the result in Theorem [\[thm:cons\]](#thm:cons){reference-type="ref" reference="thm:cons"} holds uniformly for $\tau\in \mathcal{U}$. # Simulation Studies {#sec:simu} In this section, we conduct simulation studies to evaluate the finite sample performance of the proposed model averaging quantile prediction and the estimation of the slope function. We consider two simulation designs. In the first design, all candidate models are misspecified, whereas in the second, there are true models included in the set of candidate models. We compare the proposed procedure with several model selection methods and several other existing model averaging methods. The model selection methods include FVE with $\gamma\in\{0.90, 0.95\}$, AIC, and BIC described in Subsection [3.3](#subsec:choice){reference-type="ref" reference="subsec:choice"}. The two existing model averaging methods are the smoothed AIC (SAIC) and smoothed BIC (SBIC) proposed by @buckland1997.
Specifically, the SAIC and SBIC model averaging methods assign weights $$w_{\mathrm{AIC},J_n}=\frac{\exp\left(-\frac{1}{2}\mathrm{AIC}_{J_n}\right)}{\sum_{J_n\in \mathcal{J}} \exp\left(-\frac{1}{2}\mathrm{AIC}_{J_n}\right)}\quad \mathrm{and}\quad w_{\mathrm{BIC},J_n}=\frac{\exp\left(-\frac{1}{2}\mathrm{BIC}_{J_n}\right)}{\sum_{J_n\in \mathcal{J}} \exp\left(-\frac{1}{2}\mathrm{BIC}_{J_n}\right)}$$ to model $J_n$, respectively, where $\mathrm{AIC}_{J_n}$ and $\mathrm{BIC}_{J_n}$ are defined in Subsection [3.3](#subsec:choice){reference-type="ref" reference="subsec:choice"}. ## Simulation Design I {#subsec:simd1} We assume that the functional covariate $X_i$ is observed at time points $T_{il}$ with measurement errors for $l=1,\ldots,N_i$. We consider the following simulation designs. - Consider sparse designs for $X_i$. Set $n_T=n+n_0$ as the total sample size, where the first $n$ observations are used as training data and the remaining $n_0=100$ as test data. The $N_i$ are sampled from the discrete uniform distribution on $\{10,11,12\}$. For each $i$, $T_{il}$, $l=1,\ldots,N_i$, are sampled from the uniform distribution on $\mathcal{T}=[0,1]$. - Set the eigenfunctions $\phi_j(t)=\sqrt{2}\cos(j\pi t)$ and the eigenvalues $\kappa_j=j^{-1.2}$, $j=1,2,\ldots$. Set $\mu(t)=0$ and generate $X_i(t)=\mu(t)+\sum_{j=1}^{J_{\max}} \kappa_j^{1/2} Z_{ij}\phi_j(t)$, where $J_{\max}=20$ and $Z_{ij}$ are independently sampled from a uniform distribution on $[-3^{1/2},3^{1/2}]$ for each $j$. - Obtain the observations $U_{il}$ by adding measurement errors $\epsilon_{il}$ to $X_i(T_{il})$, i.e., $U_{il}=X_i(T_{il})+\epsilon_{il}$, where $\epsilon_{il}$ are sampled from $N(0,0.8)$. - Set $b^{[1]}(t)=\sum_{j=1}^{J_{\max}} j^{-1}\phi_j(t)$, $a^{[2]}=2\sum_{j=1}^{J_{\max}} j^{-1.5}\kappa_j^{1/2}$, and $b^{[2]}(t)=\sum_{j=1}^{J_{\max}} j^{-1.5}\phi_j(t)$.
Generate the response observation $Y_i$ by the following heteroscedastic functional linear model $$\label{eq:dgp} Y_i=\theta \int_0^1 b^{[1]}(t)X_i(t)\,dt+\sigma(X_i)e_i,$$ where $\sigma(X_i)=a^{[2]}+\int_0^1 b^{[2]}(t)X_i(t)\,dt$ and $e_i$ are sampled from $N(0,1)$. It is easy to see that $\sigma(X_i)>0$ for $X_i$ in its domain, and [\[eq:dgp\]](#eq:dgp){reference-type="eqref" reference="eq:dgp"} leads to an FLQR model of the form [\[eq:model\]](#eq:model){reference-type="eqref" reference="eq:model"} with $$a=F_e^{-1}(\tau) a^{[2]}\quad \text{and}\quad b(t)=\theta b^{[1]}(t)+F_e^{-1}(\tau) b^{[2]}(t),$$ where $F^{-1}_e(\cdot)$ is the quantile function of the distribution of $e_i$. As in @lu2015, we define the population $R^2$ as $R^2=[\mathrm{var}(Y_i)-\mathrm{var}\{\sigma(X_i)e_i\}]/\mathrm{var}(Y_i)$. We consider $\tau=0.5, 0.05$ and different choices of $\theta$ such that $R^2$ ranges from $0.1$ to $0.9$ with an increment of 0.1. To evaluate each method, we compute the excess final prediction error. We do this by computing averages across 200 replications. Specifically, by [\[eq:efpe\]](#eq:efpe){reference-type="eqref" reference="eq:efpe"}, the excess final prediction error of our method is computed as $$\begin{aligned} \mathrm{EFPE}&=\frac{1}{200 n_0}\sum_{r=1}^{200}\sum_{i=n+1}^{n_T} \int_0^{\widehat{Q}_{\tau}(X_i,\widehat{\mathbf{w}})^{(r)}-Q_{\tau}(X_i)} \{F(s|X_i)-F(0|X_i)\}\,ds \\ &=\frac{1}{200 n_0}\sum_{r=1}^{200}\sum_{i=n+1}^{n_T} \int_0^{\widehat{Q}_{\tau}(X_i,\widehat{\mathbf{w}})^{(r)}-Q_{\tau}(X_i)} \left\{\Phi\left(\frac{s}{\sigma(X_i)}+F_e^{-1}(\tau)\right)-\tau\right\}\,ds,\end{aligned}$$ where $\widehat{Q}_{\tau}(\cdot,\widehat{\mathbf{w}})^{(r)}$ denotes the prediction based on the training data in the $r$th replication and $\Phi(\cdot)$ is the cumulative distribution function of $N(0,1)$. Similarly, we can calculate the EFPE of the other six methods. 
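The algebra linking the heteroscedastic data-generating process to the FLQR form can be verified numerically. Below is a minimal sketch of design I for a single curve (the grid size, the value $\theta=1$, the trapezoid quadrature, and the illustrative value $z_{\tau}\approx F_e^{-1}(0.05)$ are our own choices), checking that $\theta\int b^{[1]}X\,dt+\sigma(X)F_e^{-1}(\tau)$ coincides with $a+\int b(t)X(t)\,dt$:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 501)                       # grid on T = [0, 1]
J_max, theta = 20, 1.0                               # theta chosen arbitrarily here
j = np.arange(1, J_max + 1)

phi = np.sqrt(2.0) * np.cos(np.outer(j, np.pi * t))  # eigenfunctions phi_j(t)
kappa = j ** (-1.2)                                  # eigenvalues kappa_j

# One covariate curve X(t) = sum_j kappa_j^{1/2} Z_j phi_j(t), Z_j ~ U[-3^{1/2}, 3^{1/2}]
Z = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), J_max)
X = (np.sqrt(kappa) * Z) @ phi

b1 = (j ** -1.0) @ phi                               # b^{[1]}(t)
b2 = (j ** -1.5) @ phi                               # b^{[2]}(t)
a2 = 2.0 * np.sum(j ** -1.5 * np.sqrt(kappa))        # a^{[2]}

trapz = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # quadrature

z_tau = -1.645                                       # approx. F_e^{-1}(0.05) for N(0,1)
sigma = a2 + trapz(b2 * X)                           # sigma(X) in the DGP; positive
q_dgp = theta * trapz(b1 * X) + sigma * z_tau        # tau-quantile implied by the DGP
a = z_tau * a2                                       # FLQR intercept
b = theta * b1 + z_tau * b2                          # FLQR slope function b(t)
q_flqr = a + trapz(b * X)                            # a + int b(t) X(t) dt (mu = 0)
print(abs(q_dgp - q_flqr))                           # ~ 0: the two forms coincide
```

The match holds exactly for any $z_{\tau}$, since $\sigma(X)$ is affine in $X$, which is precisely why the heteroscedastic model induces an FLQR model at every quantile level.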
We first compare the performance of the $d$-divergence model averaging under various settings of $\mathcal{J}$ described in Subsection [3.3](#subsec:choice){reference-type="ref" reference="subsec:choice"}. Specifically, let $d=0,1,2,4$, and $\widehat{J}_n$ is selected by FVE with $\gamma\in\{0.90, 0.95\}$, AIC, or BIC. We fix $n=300$ and $R^2=0.5$. In each design, we present the results with $K=2, 4$ for the $K$-fold cross-validation method to obtain the optimal weights. Figures [1](#fig:diff05){reference-type="ref" reference="fig:diff05"} and [2](#fig:diff0.05){reference-type="ref" reference="fig:diff0.05"} show the boxplots of EFPE with $\tau=0.5$ and $\tau=0.05$, respectively. When $\tau=0.5$, compared to each model selection method, the corresponding model averaging method provides little further improvement. When $\tau=0.05$, compared to each model selection method, the corresponding model averaging method can indeed further reduce the excess final prediction error. ![Boxplots of EFPE in simulation design I with $\tau=0.5$. FVE90, FVE with $\gamma=0.90$; FVE95, FVE with $\gamma=0.95$; MA(FVE90$\pm \alpha$, $K\beta$), model averaging method with $\widehat{J}_n$ determined by FVE90, with $d=\alpha$, and with weights selected by $\mathrm{CV}_{K=\beta}$; MA(FVE95$\pm \alpha$, $K\beta$), MA(AIC$\pm \alpha$, $K\beta$), and MA(BIC$\pm \alpha$, $K\beta$) have similar definitions.](diffheter05.pdf){#fig:diff05} ![Boxplots of EFPE in simulation design I with $\tau=0.05$. FVE90, FVE with $\gamma=0.90$; FVE95, FVE with $\gamma=0.95$; MA(FVE90$\pm \alpha$, $K\beta$), model averaging method with $\widehat{J}_n$ determined by FVE90, with $d=\alpha$, and with weights selected by $\mathrm{CV}_{K=\beta}$; MA(FVE95$\pm \alpha$, $K\beta$), MA(AIC$\pm \alpha$, $K\beta$), and MA(BIC$\pm \alpha$, $K\beta$) have similar definitions.](diffheter005.pdf){#fig:diff0.05} Next, we compare the proposed model averaging prediction with the other six model averaging and selection methods. 
We choose $d$-divergence model averaging with $d=4$ coupled with the FVE(0.90) criterion as our model averaging method. We consider $n=100, 200, 400$ and $\tau=0.5, 0.05$, and $K$ is fixed at 4. In each simulation setting, we normalize the EFPEs of the other six methods by dividing them by the EFPE of our method. The results are presented in Figure [3](#fig:normFPE){reference-type="ref" reference="fig:normFPE"}. The performance of different methods becomes more similar when $R^2$ is large. When $\tau=0.5$, no method clearly dominates the others. Our method is the best in most cases for $n=100, 200$. The performances of AIC and BIC seem to be the worst. When $\tau=0.05$, it is clear that our method significantly dominates all the other six methods, and SBIC and FVE(0.90) seem to be the second and third best, respectively, when $R^2 \leq 0.5$. A likely reason why our method is more advantageous at extreme quantiles is that data at extreme quantiles are usually sparse, which makes prediction more challenging, and model averaging addresses this challenge by pooling information from more models. To examine this performance more comprehensively, we conduct additional simulation studies with $\tau=0.2$ and $0.1$. The results are displayed in Figure [\[fig:normFPE_suppl\]](#fig:normFPE_suppl){reference-type="ref" reference="fig:normFPE_suppl"} in the Supplementary Material. From Figures [3](#fig:normFPE){reference-type="ref" reference="fig:normFPE"} and [\[fig:normFPE_suppl\]](#fig:normFPE_suppl){reference-type="ref" reference="fig:normFPE_suppl"}, the advantage of model averaging over the other six methods increases as $\tau$ decreases. Overall, the simulation study illustrates the advantage of model averaging over model selection in terms of prediction for extreme quantiles. ![Normalized EFPE in simulation design I for $\tau=0.5$ and $0.05$.
MA denotes MA(FVE90$\pm 4$, $K4$).](normEFPE_new.pdf){#fig:normFPE} ## Simulation Design II {#subsec:simd2} We consider the same simulation design as that in Subsection [5.1](#subsec:simd1){reference-type="ref" reference="subsec:simd1"} except for $J_{\max}$, $b^{[1]}(t)$, $a^{[2]}$, and $b^{[2]}(t)$. Specifically, we set $J_{\max}=8$, $b^{[1]}(t)=\sum_{j=1}^3 j^{-1}\phi_j(t)$, $a^{[2]}=2\sum_{j=1}^3 j^{-1.5}\kappa_j^{1/2}$, and $b^{[2]}(t)=\sum_{j=1}^3 j^{-1.5}\phi_j(t)$. The candidate set $\mathcal{J}$ is fixed to be $\{0,1,\ldots,6\}$. Accordingly, the true models correspond to $J^*=3, 4, 5$, or $6$, all of which are included in the candidate set $\mathcal{J}$. We first evaluate the finite sample performance of the proposed model averaging prediction in the scenario with true candidate models. As in Subsection [5.1](#subsec:simd1){reference-type="ref" reference="subsec:simd1"}, we compare it with the other six model averaging and selection methods by computing the excess final prediction errors. We set $K=4$ and consider $n=100, 200, 400$ and $\tau=0.5, 0.05$. In each simulation setting, we normalize the EFPEs of the other six methods by dividing them by the EFPE of our method. The results are presented in Figure [4](#fig:mise){reference-type="ref" reference="fig:mise"}. The performance of different methods becomes more similar when $R^2$ is large. When $\tau=0.5$, the performance of our method is the best in some cases (e.g., $R^2=0.1, 0.4, 0.5$) for $n=100$, while it shows no advantage for $n=200, 400$. When $\tau=0.05$, our method significantly outperforms the other six methods when $R^2 \leq 0.6$. To further illustrate the advantage of our method in terms of prediction for extreme quantiles, we conduct additional simulation studies with $\tau=0.2$ and $0.1$. The results are displayed in Figure [\[fig:mise_suppl\]](#fig:mise_suppl){reference-type="ref" reference="fig:mise_suppl"} in the Supplementary Material.
From Figures [4](#fig:mise){reference-type="ref" reference="fig:mise"} and [\[fig:mise_suppl\]](#fig:mise_suppl){reference-type="ref" reference="fig:mise_suppl"}, the advantage of our method over the other six methods increases as $\tau$ decreases. This illustrates that even when there is at least one true model in the set of candidate models, our method leads to smaller EFPEs compared with the other six methods for extreme quantiles. ![Normalized EFPE in simulation design II for $\tau=0.5$ and $0.05$. MA denotes MA(FVE90$\pm 4$, $K4$).](normEFPE_design2_new.pdf){#fig:mise} Next, we verify the consistency of $\widehat{b}_{\widehat{\mathbf{w}}}(t)$ in Theorem [\[thm:cons\]](#thm:cons){reference-type="ref" reference="thm:cons"}. We do this by computing the mean integrated squared error (MISE) based on 200 replications. Specifically, the MISE of our estimator is computed as $$\mathrm{MISE}=\frac{1}{200}\sum_{r=1}^{200}\int_0^1 \left\{\widehat{b}_{\widehat{\mathbf{w}}}(t)^{(r)}-b(t)\right\}^2 \,dt,$$ where $\widehat{b}_{\widehat{\mathbf{w}}}(t)^{(r)}$ denotes the model averaging estimator of $b(t)$ in the $r$th replication. We set $K=4$ and consider $n=50, 100, 300, 500, 700, 900, 1100$, $R^2=0.1, 0.5, 0.9$, and $\tau=0.5, 0.05$. Figure [5](#fig:mise_heter){reference-type="ref" reference="fig:mise_heter"} plots MISE against $n$ for each combination of $R^2$ and $\tau$. These plots show that as $n$ increases, the MISE of $\widehat{b}_{\widehat{\mathbf{w}}}(t)$ decreases to zero, which reflects the consistency of $\widehat{b}_{\widehat{\mathbf{w}}}(t)$. In addition, we observe that the MISE increases as $R^2$ increases. This is not counterintuitive since $\theta$ and $b(t)$ become larger as $R^2$ becomes larger. 
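In practice the MISE is evaluated on a time grid. A minimal sketch (with hypothetical replicated estimates; the trapezoid rule is our choice of quadrature) is:

```python
import numpy as np

def mise(b_hat_reps, b_true, t):
    """MISE = (1/R) sum_r int {b_hat^(r)(t) - b(t)}^2 dt via the trapezoid rule.
    b_hat_reps has shape (R, len(t)); b_true has shape (len(t),)."""
    sq = (b_hat_reps - b_true) ** 2
    areas = 0.5 * (sq[:, 1:] + sq[:, :-1]) * np.diff(t)   # per-interval trapezoids
    return float(np.mean(np.sum(areas, axis=1)))

# Toy check with hypothetical replicated estimates b_hat^(r)(t) = b(t) + c_r:
t = np.linspace(0.0, 1.0, 1001)
b_true = np.sin(np.pi * t)
b_hat_reps = b_true + np.array([[0.1], [-0.2]])           # constant error per replication
print(mise(b_hat_reps, b_true, t))                        # (0.1^2 + 0.2^2)/2 = 0.025
```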
We also compare the performance of $\widehat{b}_{\widehat{\mathbf{w}}}(t)$ with the other six methods; to save space, the results are presented in Section [\[sec:simsty\]](#sec:simsty){reference-type="ref" reference="sec:simsty"} of the Supplementary Material. ![MISE of $\widehat{b}_{\widehat{\mathbf{w}}}(t)$ with $\tau=0.5$ (left) and $\tau=0.05$ (right) under simulation design II.](mise_heter.pdf){#fig:mise_heter} # Real Data Analysis {#sec:realdata} We demonstrate the performance of the proposed model averaging prediction through data on the price of the Bitcoin cryptocurrency. The data set[^2] was collected between January 1, 2012 and January 8, 2018 from the exchange platform Bitstamp. Based on the hourly log-returns of the Bitcoin price on a given day, we use our method to predict the $\tau$th quantile of the maximum hourly log-return of the Bitcoin price on the next day. To reduce the temporal dependence in the data, as in @girard2022, we construct our sample by keeping a gap of one day between observations. Specifically, in the $i$th sample $(Y_i, X_i)$, the functional covariate $X_i$ is the curve of hourly log-returns on day $2i-1$ and the scalar response $Y_i$ is the maximum hourly log-return on day $2i$. After deleting missing data, $n=917$ samples are kept in the study. The left panel of Figure [6](#fig:realdata){reference-type="ref" reference="fig:realdata"} provides the hourly log-returns of the Bitcoin price, and the right panel plots the histogram of the maximum hourly log-return. The distribution of the maximum hourly log-return is clearly highly right-skewed; hence a simple statistic such as the sample mean cannot adequately describe the (conditional) distribution of $Y_i$ given $X_i$. We randomly sample 70% of the observations as training data to build the FLQR model and use the remaining 30% as test data to evaluate the prediction performance.
To better assess prediction accuracy, we repeat this $B=200$ times based on random partitions of the data set. ![Financial data on the price of Bitcoin. Left: hourly log-returns of Bitcoin price. Right: histogram of the maximum hourly log-return of Bitcoin price.](bitcoin.pdf){#fig:realdata} We consider the $\tau$th quantile prediction with $\tau=0.5$, $0.05$, and $0.01$. To perform prediction using model averaging, we choose the $d$-divergence model averaging described in Subsection [3.3](#subsec:choice){reference-type="ref" reference="subsec:choice"}, where $d=0$, 2, 4, 8, 10, 16, 22 and $\widehat{J}_n$ is selected by FVE(0.90), FVE(0.95), AIC, and BIC. Table [1](#tab:ms){reference-type="ref" reference="tab:ms"} summarizes the averages and standard errors of $\widehat{J}_n$, where we can see that BIC selects the smallest number of principal components. We then obtain the optimal weights through $K$-fold cross-validation with $K=2$, $4$, and $10$.

                AIC             BIC            FVE(0.90)       FVE(0.95)
  ------------- --------------- -------------- --------------- ---------------
  $\tau=0.5$    15.160(5.831)   0.910(1.085)   17.345(0.477)   19.875(0.332)
  $\tau=0.05$   14.670(5.360)   2.055(1.911)   17.335(0.473)   19.870(0.337)
  $\tau=0.01$   17.545(3.950)   6.820(2.625)   17.290(0.466)   19.865(0.343)

  : Results for financial data on the price of Bitcoin. Averages and standard errors (in parentheses) of $\widehat{J}_n$ selected by AIC, BIC, FVE(0.90), and FVE(0.95) across the 200 random partitions.

To evaluate each method, we calculate the final prediction error, computed as $$\mathrm{FPE}(\mathbf{w})=\frac{1}{\lfloor 0.3n\rfloor B} \sum_{r=1}^B\sum_{i=1}^{\lfloor 0.3n\rfloor} \rho_{\tau} \left(Y_{0,i}^{(r)}-\widehat{Q}_{\tau}(X_{0,i}^{(r)},\mathbf{w})^{(r)}\right),$$ where $\{(Y_{0,i}^{(r)}, X_{0,i}^{(r)})\}_{i=1}^{\lfloor 0.3n\rfloor}$ is the test data and $\widehat{Q}_{\tau}(\cdot,\mathbf{w})^{(r)}$ denotes the prediction based on the training data in the $r$th partition.
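In code, a single partition's contribution to the FPE is just the average check loss on the held-out pairs; a minimal sketch (with hypothetical test responses and predictions) is:

```python
import numpy as np

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0.0))

def fpe(y_test, q_hat, tau):
    """Average check loss of predictions q_hat on test responses y_test,
    i.e. one partition's contribution to the FPE; averaging this over the
    B random partitions gives the reported value."""
    return float(np.mean(check_loss(y_test - q_hat, tau)))

# Toy example with hypothetical test responses and predictions:
y = np.array([1.0, 2.0, 0.5])
q = np.array([0.5, 2.5, 0.5])
print(fpe(y, q, tau=0.05))   # (0.05*0.5 + 0.95*0.5 + 0)/3 = 0.5/3
```

Note the asymmetry for small $\tau$: under-predictions are penalized with weight $\tau$ and over-predictions with weight $1-\tau$.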
By [\[eq:efpe\]](#eq:efpe){reference-type="eqref" reference="eq:efpe"}, the above FPE also measures the EFPE. Figure [7](#fig:bitcoin){reference-type="ref" reference="fig:bitcoin"} illustrates the final prediction errors for $K=2$. To save space, the full results for $K=2$, $4$, and $10$ are summarized in Figure [\[fig:bitcoin_all\]](#fig:bitcoin_all){reference-type="ref" reference="fig:bitcoin_all"} in the Supplementary Material; they suggest that the performance depends little on $K$. Note that the result of $d=0$ corresponds to that of model selection with $\widehat{J}_n$. Table [1](#tab:ms){reference-type="ref" reference="tab:ms"} and Figure [7](#fig:bitcoin){reference-type="ref" reference="fig:bitcoin"} suggest that for each $\tau$, the FLQR model requires only a small number of principal components, since the model selected by BIC yields a relatively small prediction error and the model averaging approaches only slightly reduce it. The results using FVE(0.90), FVE(0.95), and AIC have large prediction errors since these criteria overestimate the number of principal components, but the results improve markedly when model averaging with $d=10$ or larger is used. The data example demonstrates that model averaging improves the prediction performance and can protect against the potential prediction loss caused by relying on a single selection criterion. ![Results for financial data on the price of Bitcoin. Line charts for the $\text{FPEs}\times 100$ with $\tau=0.5$ (left), $\tau=0.05$ (middle), and $\tau=0.01$ (right), where $K=2$.](bitcoin_all.pdf){#fig:bitcoin} Finally, we compare our method with the other six model averaging and selection methods described in Section [5](#sec:simu){reference-type="ref" reference="sec:simu"}. We fix $K=2$ and choose $\mathcal{J}$ in Subsection [3.3](#subsec:choice){reference-type="ref" reference="subsec:choice"} with $d=8$ and $\widehat{J}_n$ selected by BIC as the candidate set.
The boxplots of FPEs based on the seven methods are displayed in Figure [8](#fig:bitcoin_bp){reference-type="ref" reference="fig:bitcoin_bp"}. The results demonstrate that our method has the best performance for $\tau=0.05, 0.01$. Once again, these outcomes highlight the advantages of our method, especially for extreme quantiles. ![Boxplots of the $\text{FPEs}\times 100$ for financial data on the price of Bitcoin. MA denotes MA(BIC$\pm 8$, $K2$).](bitcoin_bp.pdf){#fig:bitcoin_bp} # Discussion {#sec:dis} This paper has developed a $K$-fold cross-validation model averaging method for the FLQR model. When there is no true model included in the set of candidate models, the weight vector chosen by our method is asymptotically optimal in terms of minimizing the excess final prediction error, whereas when there is at least one true model included, the model averaging parameter estimates are consistent. Our simulations indicate that the proposed method outperforms the other model averaging and selection methods, especially for extreme quantiles. We applied the proposed method to financial data on the price of Bitcoin. There are some open questions for future research. First, it is of interest to improve the convergence rate for $\widehat{a}_{\widehat{\mathbf{w}}}$ and $\widehat{b}_{\widehat{\mathbf{w}}}(t)$ in Theorem [\[thm:cons\]](#thm:cons){reference-type="ref" reference="thm:cons"}. Second, while we proposed several choices of candidate models and compared their finite sample performances, a theoretical comparison of these choices is still lacking and is left for future work. Another possible research direction is the asymptotic behavior of the selected weights. It is worth noting that @zhang2019 have studied this issue for jackknife model averaging in linear regression, but its application to linear quantile regression remains unexplored.
Finally, an interesting extension for future studies would be to develop model averaging prediction and estimation for a partial FLQR model [@yao2017jma]. # Supplementary Material {#supplementary-material .unnumbered} Supplementary material related to this article can be found online. It contains the detailed derivation of the last equality of Equation [\[eq:efpe\]](#eq:efpe){reference-type="eqref" reference="eq:efpe"}, proofs of Theorems [\[thm:asyopt\]](#thm:asyopt){reference-type="ref" reference="thm:asyopt"} and [\[thm:cons\]](#thm:cons){reference-type="ref" reference="thm:cons"}, a simulation study continued from Section [5](#sec:simu){reference-type="ref" reference="sec:simu"}, and additional figures in Sections [5](#sec:simu){reference-type="ref" reference="sec:simu"} and [6](#sec:realdata){reference-type="ref" reference="sec:realdata"}. [^1]: The work was performed when the first author worked as a postdoctoral fellow at Academy of Mathematics and Systems Science, Chinese Academy of Sciences. [^2]: Available at <https://github.com/FutureSharks/financial-data/tree/master/pyfinancialdata/data/cryptocurrencies/bitstamp/BTC_USD>.
---
abstract: |
  In this paper, we deal with the torsion log-Minkowski problem without symmetry assumptions via an approximation argument.
address: School of Mathematics, Hunan University, Changsha, 410082, Hunan Province, China
author:
- Jinrong Hu
title: The torsion log-Minkowski problem
---

# Introduction {#Sec1}

In the framework of the Brunn-Minkowski theory of convex bodies, the classical Minkowski problem is of central importance. It was introduced and solved by Minkowski himself in [@M897; @M903], and there are many excellent subsequent works (see, e.g., [@A39; @A42; @FJ38; @B87]). With the development of the Minkowski problem, the $L_p$ Minkowski problem was first posed and solved by Lutwak [@L93] as an analogue of the classical Minkowski problem within the $L_p$ Brunn-Minkowski theory. Since then, the $L_{p}$ Minkowski problem has proved fertile ground, yielding many valuable results in a series of papers [@B17; @CL17; @B19; @F62; @L04; @Zhu14; @Zhu15; @Zh15]. In the critical case $p=0$, the $L_{p}$ Minkowski problem reduces to the logarithmic Minkowski problem prescribing the cone-volume measure, which was first solved by Böröczky-Lutwak-Yang-Zhang [@BLYZ12] for symmetric convex bodies. Later, Zhu [@Zhu14] solved the log-Minkowski problem for polytopes, and Chen-Li-Zhu [@CL19] treated the general case. There is also important work on extensions and analogues of the $L_p$ Minkowski problem for other Borel measures associated with boundary-value problems (for example, capacity and torsional rigidity); see, e.g., [@J96; @J962; @CF10; @C15; @FZH20; @H181; @Xiong19; @XX22] and their references. In this paper, we focus on solving the log-Minkowski problem for torsional rigidity.
Recall that the torsional rigidity $T(\Omega)$ of a convex body $\Omega$ in the $n$-dimensional Euclidean space ${\mathbb R^{n}}$ is defined as $$\label{tordef} \frac{1}{T(\Omega)}=\inf\left\{\frac{\int_{\Omega}|\nabla u|^{2}{d}x}{(\int_{\Omega}|u|{d}x)^{2}}: \ u\in W^{1,2}_{0}(\Omega),\ \int_{\Omega}|u|{d}x> 0\right\},$$ where $W^{1,2}_{0}(\Omega)$ denotes the closure in $W^{1,2}(\Omega)$ of the smooth functions with compact support in $\Omega$, and $W^{1,2}(\Omega)$ is the Sobolev space of functions in $L^{2}(\Omega)$ having weak derivatives up to first order in $L^{2}(\Omega)$. From the perspective of physics, when $\Omega\subset \mathbb R^{n-1}$ is the cross section of a cylindrical rod $\Omega \times \mathbb R$ under torsion, the torsional rigidity of $\Omega$ is the torque required for a unit angle of twist per unit length. From the analytic point of view, torsional rigidity can be expressed through the solution of an elliptic boundary-value problem (see, e.g., [@CA05]). To be more specific, let $u$ be the unique solution of $$\label{torlapu} \left\{ \begin{array}{lr} \Delta u= -2, & x\in \Omega, \\ u=0, & x\in \partial \Omega. \end{array}\right.$$ Then $$\label{tordef2} T(\Omega)=\int_{\Omega}|\nabla u|^{2}{d}x=2\int_{\Omega}udx.$$ If the boundary $\partial\Omega$ is of class $C^{2}$, then by the standard regularity theory for elliptic equations (see, e.g., Gilbarg-Trudinger [@GT01]), $\nabla u$ can be suitably defined $\mathcal{H}^{n-1}$ a.e.
on $\partial\Omega$, and $T(\Omega)$ can be expressed as (see, e.g., Proposition 18 of [@CA05]) $$\begin{split}\label{tordef3} T(\Omega)&=\frac{1}{n+2}\int_{\partial\Omega}h(\Omega,g_{\Omega}(x))|\nabla u(x)|^{2}{d}\mathcal{H}^{n-1}(x)\\ &=\frac{1}{n+2}\int_{ {\mathbb{S}^{n-1}}}h(\Omega,v)|\nabla u(g^{-1}_{\Omega}(v))|^{2}{d}S(\Omega,v), \end{split}$$ where $S(\Omega,\cdot)$ is the surface area measure of $\Omega$. Colesanti-Fimiani [@CF10 Theorem 3.1] showed that $~\eqref{tordef3}$ holds for any convex body in ${\mathbb R^{n}}$ and established the Hadamard variational formula for $T(\cdot)$ (see [@CF10 Theorem 4.1]): $$\label{torhadma} \frac{d}{dt}T(\Omega+t\Omega_{1})\Big|_{t=0}=\int_{ {\mathbb{S}^{n-1}}}h(\Omega_{1},v){d}\mu^{tor}(\Omega,v),$$ where the torsion measure $\mu^{tor}(\Omega,\cdot)$ is defined by $$\label{tormes2} \mu^{tor}(\Omega,\eta)=\int_{g^{-1}_{\Omega}(\eta)}|\nabla u(x)|^{2}{d}\mathcal{H}^{n-1}(x)=\int_{\eta}|\nabla u(g^{-1}_{\Omega}(v))|^{2}{d}S(\Omega,v)$$ for every Borel subset $\eta\subset { {\mathbb{S}^{n-1}}}$. Proposition 2.5 of [@CF10], extending estimates of harmonic functions due to Dahlberg [@D77], showed that $\nabla u$ has finite non-tangential limits $\mathcal{H}^{n-1}$ a.e. on $\partial \Omega$ and that $|\nabla u|\in L^{2}(\partial \Omega, \mathcal{H}^{n-1})$ without any smoothness assumption; hence $~\eqref{tormes2}$ is well defined, not only in the smooth case, and determines a Borel measure on the unit sphere ${ {\mathbb{S}^{n-1}}}$. In [@CF10], the Minkowski problem for torsional rigidity was first posed: if $\mu$ is a finite Borel measure on ${ {\mathbb{S}^{n-1}}}$, what are necessary and sufficient conditions on $\mu$ such that $\mu$ is the torsion measure $\mu^{tor}(\Omega,\cdot)$ of a convex body $\Omega$ in ${\mathbb R^{n}}$?
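For orientation, the centered unit ball admits an explicit solution of [\[torlapu\]](#torlapu){reference-type="eqref" reference="torlapu"}, which can be used to check the formulas above (a standard computation, not taken from [@CF10], carried out under the normalization $\Delta u=-2$ fixed in [\[torlapu\]](#torlapu){reference-type="eqref" reference="torlapu"}; under the alternative normalization $\Delta u=-1$, also used in the literature, every value below is divided by $4$):

```latex
% On B^n, with \Delta u = -2 as in \eqref{torlapu}:
u(x)=\frac{1-|x|^{2}}{n},\qquad \nabla u(x)=-\frac{2x}{n},\qquad
u\big|_{\partial B^{n}}=0,\qquad |\nabla u|\equiv\frac{2}{n}\ \text{on }\partial B^{n},
% so, using \int_{B^{n}}|x|^{2}\,dx=\tfrac{n\omega_{n}}{n+2},
T(B^{n})=\int_{B^{n}}|\nabla u|^{2}\,dx
        =\frac{4}{n^{2}}\cdot\frac{n\omega_{n}}{n+2}
        =\frac{4\omega_{n}}{n(n+2)}
        =2\int_{B^{n}}u\,dx.
```

Since $h(B^{n},\cdot)\equiv 1$ and $g_{B^{n}}(x)=x$, formula [\[tordef3\]](#tordef3){reference-type="eqref" reference="tordef3"} returns the same value, and [\[tormes2\]](#tormes2){reference-type="eqref" reference="tormes2"} shows that $\mu^{tor}(B^{n},\cdot)=\frac{4}{n^{2}}\,S(B^{n},\cdot)$ is a constant multiple of spherical Lebesgue measure.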
Colesanti-Fimiani [@CF10] proved the existence, and the uniqueness up to translations, of the solution via the variational method, which was first introduced by Aleksandrov [@A39; @A42] and later adopted by Jerison [@J96] and Colesanti-Nyström-Salani-Xiao-Yang-Zhang [@C15]. In analogy with the $L_{p}$ surface area measure (see, e.g., [@S14]), Chen-Dai [@Chen20] recently defined the $L_{p}$ torsion measure $\mu^{tor}_{p}(\Omega,\cdot)$ by $$\label{pmea1} \mu^{tor}_{p}(\Omega,\eta)=\int_{g^{-1}_{\Omega}(\eta)}h(\Omega,g_{\Omega}(x))^{1-p}|\nabla u(x)|^{2}{d}\mathcal{H}^{n-1}(x)=\int_{\eta}h(\Omega,v)^{1-p}{d}\mu^{tor}(\Omega,v)$$ for every Borel subset $\eta$ of ${ {\mathbb{S}^{n-1}}}$. Naturally, the $L_{p}$ Minkowski problem for torsional rigidity, first introduced in [@Chen20], arises: for $p\in {\mathbb{R}}$, given a finite Borel measure $\mu$ on ${ {\mathbb{S}^{n-1}}}$, what are the necessary and sufficient conditions on $\mu$ such that $\mu$ is the $L_{p}$ torsion measure $\mu^{tor}_{p}(\Omega,\cdot)$ of a convex body $\Omega$ in ${\mathbb R^{n}}$? In [@Chen20], the authors proved the existence and uniqueness of the solution when $p>1$. For the case $0<p<1$, Hu-Liu [@HL21] gave a sufficient condition. To our knowledge, there are few results on the existence of solutions to the torsion log-Minkowski problem without symmetry assumptions in the case $p=0$. We enrich this topic in the present paper.
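As a quick sanity check of [\[pmea1\]](#pmea1){reference-type="eqref" reference="pmea1"} (an immediate consequence of the definition, not a result of [@Chen20]): on the unit ball, where $h(B^{n},\cdot)\equiv 1$, the parameter $p$ disappears,

```latex
\mu^{tor}_{p}(B^{n},\eta)
  =\int_{\eta}h(B^{n},v)^{1-p}\,{d}\mu^{tor}(B^{n},v)
  =\mu^{tor}(B^{n},\eta)
  \qquad\text{for every }p\in\mathbb{R},
```

so all the $L_{p}$ torsion measures of $B^{n}$ coincide, and $p$ only becomes visible for bodies with non-constant support function. More generally, since $h(\Omega,\cdot)$ is homogeneous of degree $1$ and $\mu^{tor}(\Omega,\cdot)$ of degree $n+1$ in $\Omega$, the measure $\mu^{tor}_{p}(\Omega,\cdot)$ is homogeneous of degree $n+2-p$.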
We first introduce the cone-torsion measure $G^{tor}(\Omega,\cdot)$ of a convex body $\Omega$, motivated by [\[torhadma\]](#torhadma){reference-type="eqref" reference="torhadma"}; it is defined for a Borel set $\eta\subset {\mathbb{S}^{n-1}}$ by $$\label{CTC} G^{tor}(\Omega,\eta)=\frac{1}{n+2}\int_{g^{-1}_{\Omega}(\eta)}h(\Omega,g_{\Omega}(x))|\nabla u(x)|^{2}{d}\mathcal{H}^{n-1}(x)=\frac{1}{n+2}\int_{\eta}h(\Omega,v){d}\mu^{tor}(\Omega,v).$$ The Minkowski problem prescribing the cone-torsion measure is stated as follows: **The Torsion Log-Minkowski Problem.** If $\mu$ is a finite Borel measure on ${\mathbb{S}^{n-1}}$, what are necessary and sufficient conditions on $\mu$ to guarantee the existence of a convex body $\Omega\subset \mathbb R^{n}$ containing the origin that solves the equation $$\label{Gq} G^{tor}(\Omega,\cdot)=\mu?$$ It was shown in [@LU23] that if, in addition, $\mu$ is even and satisfies the following *subspace mass inequality* $$\label{SB} \frac{\mu(\xi_{k}\cap {\mathbb{S}^{n-1}})}{|\mu|}<\frac{k}{n}$$ for each $k$-dimensional subspace $\xi_{k}\subset \mathbb R^{n}$ and each $k=1,\ldots,n-1$, then there exists an origin-symmetric convex body $\Omega$ in $\mathbb R^{n}$ such that [\[Gq\]](#Gq){reference-type="eqref" reference="Gq"} holds. In this paper, we show that the symmetry assumption can be removed. We first deal with the polytopal case, when the given normal vectors are in *general position*. **Theorem 1**. *Let $\mu$ be a discrete measure on ${\mathbb{S}^{n-1}}$ whose support is not contained in any closed hemisphere and is in general position in dimension $n$. Then there exists a polytope $P$ containing the origin in its interior such that $$G^{tor}(P,\cdot)=\mu.$$* By an approximation scheme, we then give a sufficient condition for solving the general torsion log-Minkowski problem without symmetry assumptions. **Theorem 2**. *Let $\mu$ be a finite, non-zero Borel measure on ${\mathbb{S}^{n-1}}$ satisfying the subspace mass inequality [\[SB\]](#SB){reference-type="eqref" reference="SB"}.
Then there exists a convex body $\Omega$ in $\mathbb R^{n}$ with $o\in \Omega$ such that $$G^{tor}(\Omega,\cdot)=\mu.$$* We remark that the uniqueness part of the torsion log-Minkowski problem is challenging. The key is to discover a logarithmic Brunn-Minkowski inequality for torsional rigidity, resembling the classical logarithmic Brunn-Minkowski inequality first proved by Böröczky-Lutwak-Yang-Zhang [@BLZ12] for origin-symmetric plane convex bodies. Very recently, Crasta-Fragalà [@CF23] gave some partial answers on this topic: they showed that if such a measure is absolutely continuous with constant density, then the underlying body is a ball. This paper is organized as follows. In Sec. [2](#Sec2){reference-type="ref" reference="Sec2"}, we collect some facts about convex bodies, torsional rigidity and the torsion measure. In Sec. [3](#Sec3){reference-type="ref" reference="Sec3"}, the associated extremal problem is introduced. In Sec. [4](#Sec4){reference-type="ref" reference="Sec4"}, we solve the discrete torsion log-Minkowski problem. In Sec. [5](#Sec5){reference-type="ref" reference="Sec5"}, the general torsion log-Minkowski problem is studied by an approximation technique.

# Backgrounds {#Sec2}

In this section, we list some basic facts about convex bodies and torsional rigidity that we shall use in what follows.

## Convex bodies

Standard references on convex bodies include Gardner [@G06] and Schneider [@S14]. Denote by ${\mathbb R^{n}}$ the $n$-dimensional Euclidean space and by $o$ the origin of ${\mathbb R^{n}}$. Let $\omega_{n}$ be the $n$-dimensional volume of the unit ball $B^{n}$ in $\mathbb R^{n}$. We also write $|\mu|$ for the total mass of a measure $\mu$. Let $G_{n,k}$ denote the Grassmannian of $k$-dimensional subspaces of $\mathbb R^{n}$. For $x,y\in {\mathbb R^{n}}$, $x\cdot y$ denotes the standard inner product.
For $x\in{\mathbb R^{n}}$, denote by $|x|=\sqrt{x\cdot x}$ the Euclidean norm. The origin-centered unit ball $B$ is $\{x\in {\mathbb R^{n}}:|x|\leq 1\}$, and its boundary is ${ {\mathbb{S}^{n-1}}}$. Denote by $C({ {\mathbb{S}^{n-1}}})$ the set of continuous functions on the unit sphere ${ {\mathbb{S}^{n-1}}}$, and by $C^{+}({ {\mathbb{S}^{n-1}}})$ the set of strictly positive functions in $C({ {\mathbb{S}^{n-1}}})$. A compact convex set of ${\mathbb R^{n}}$ with non-empty interior is called a convex body. The set of all convex bodies in $\mathbb R^{n}$ is denoted by $\mathcal K^n$, the set of all convex bodies containing the origin in the interior by $\mathcal K_0^n$, and the set of all origin-symmetric convex bodies by $\mathcal K_e^n$. If $\Omega$ is a compact convex set in ${\mathbb R^{n}}$, the support function of $\Omega$ is defined, for $x\in{\mathbb R^{n}}$, by $$h(\Omega,x)=\max\{x\cdot y:y \in \Omega\}.$$ For compact convex sets $\Omega$ and $L$ in ${\mathbb R^{n}}$ and any reals $a_{1},a_{2}\geq 0$, the Minkowski combination $a_{1}\Omega+a_{2}L$ in ${\mathbb R^{n}}$ is defined by $$a_{1}\Omega+a_{2}L=\{a_{1}x+a_{2}y:x\in \Omega,\ y\in L\},$$ and its support function is given by $$h({a_{1}\Omega+a_{2}L},\cdot)=a_{1}h(\Omega,\cdot)+a_{2}h(L,\cdot).$$ For any compact convex set $\Omega$ in ${\mathbb R^{n}}$ containing $o$, the radial function $\rho(\Omega,u)$ of $\Omega$ with respect to $o$ is given by $$\rho(\Omega,u)=\max\{\lambda:\lambda u\in \Omega\},\ \forall u\in {\mathbb R^{n}} \backslash\{o\}.$$ The map $g_{\Omega}:\partial \Omega\rightarrow {\mathbb{S}^{n-1}}$ denotes the Gauss map of $\partial \Omega$.
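The linearity of support functions under Minkowski combinations, $h(a_{1}\Omega+a_{2}L,\cdot)=a_{1}h(\Omega,\cdot)+a_{2}h(L,\cdot)$, is easy to verify numerically for polytopes, since the support function is then a maximum over vertices. The following is a minimal sketch with hypothetical vertex lists (a unit square and a segment in $\mathbb R^{2}$), not an object from the paper:

```python
def dot(x, y):
    # Standard inner product x . y in R^n
    return sum(a * b for a, b in zip(x, y))

def support(vertices, v):
    # For a polytope P = conv(vertices): h(P, v) = max_{x in P} x . v,
    # which is attained at a vertex.
    return max(dot(x, v) for x in vertices)

def minkowski_comb(A, B, a1, a2):
    # Vertex candidates of the Minkowski combination a1*A + a2*B:
    # all pairwise combinations a1*x + a2*y, x in A, y in B.
    return [tuple(a1 * xi + a2 * yi for xi, yi in zip(x, y))
            for x in A for y in B]

# Hypothetical bodies in R^2: the unit square and a diagonal segment.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
segment = [(0.0, 0.0), (1.0, 1.0)]

v = (0.6, 0.8)  # a unit direction
a1, a2 = 2.0, 3.0
lhs = support(minkowski_comb(square, segment, a1, a2), v)
rhs = a1 * support(square, v) + a2 * support(segment, v)
# lhs == rhs, illustrating h(a1*K + a2*L, .) = a1*h(K, .) + a2*h(L, .)
```

The identity holds because maximizing $a_{1}x\cdot v+a_{2}y\cdot v$ over pairs $(x,y)$ decouples into two separate maximizations.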
The Hausdorff metric $\mathcal{D}(\Omega,L)$ between two compact convex sets $\Omega$ and $L$ in ${\mathbb R^{n}}$ is given by $$\mathcal{D}(\Omega,L)=\max\{|h(\Omega,v)-h(L,v)|:v\in { {\mathbb{S}^{n-1}}}\}.$$ Let $\Omega_{j}$ be a sequence of compact convex sets in ${\mathbb R^{n}}$ and let $\Omega_{0}$ be a compact convex set in ${\mathbb R^{n}}$; we say that $\Omega_{j}$ converges to $\Omega_{0}$ if $\mathcal{D}(\Omega_{j},\Omega_{0})\rightarrow 0$. For each $h\in C^{+}( {\mathbb{S}^{n-1}})$, the *Wulff shape* generated by $h$, denoted by $[h]$, is the convex body defined by $$[h]=\{x\in \mathbb R^{n}: x \cdot v\leq h(v), {\rm for \ all} \ v\in {\mathbb{S}^{n-1}}\}.$$ For a compact convex set $\Omega$ in ${\mathbb R^{n}}$, the diameter of $\Omega$ is $$diam(\Omega)=\max\{|x-y|:x,y \in \Omega\}.$$ For any convex body $\Omega$ in ${\mathbb R^{n}}$ and $v\in { {\mathbb{S}^{n-1}}}$, the support hyperplane $H(\Omega,v)$ in the direction $v$ is defined by $$H(\Omega,v)=\{x\in {\mathbb R^{n}}:x\cdot v=h(\Omega,v)\},$$ the half-space $H^{-}(\Omega,v)$ in the direction $v$ by $$H^{-}(\Omega,v)=\{x\in {\mathbb R^{n}}:x\cdot v\leq h(\Omega,v)\},$$ and the support set $F(\Omega,v)$ in the direction $v$ by $$F(\Omega,v)=\Omega\cap H(\Omega,v).$$ Denote by $\mathcal{P}$ the set of polytopes in ${\mathbb R^{n}}$. Suppose that the unit vectors $v_{1},\ldots,v_{N}$ $(N\geq n+1)$ are not concentrated on any closed hemisphere of ${ {\mathbb{S}^{n-1}}}$, and denote by $\mathcal{P}(v_{1},\ldots,v_{N})$ the set of polytopes $P$ satisfying $$P=\bigcap_{k=1}^{N}H^{-}(P,v_{k}).$$ It is easy to see that every $P\in \mathcal{P}(v_{1},\ldots,v_{N})$ has at most $N$ facets, and the outer normals of $P$ form a subset of $\{v_{1},\ldots,v_{N}\}$. A special class of polytopes consists of those whose facet normals are in *general position*.
We say that $v_{1},\ldots, v_{N}$ are in general position in dimension $n$ if, for any $n$-tuple $1\leq i_{1}< i_{2}< \ldots<i_{n}\leq N$, the vectors $v_{i_{1}},\ldots,v_{i_{n}}$ are linearly independent.

## Torsional rigidity and torsion measure

We first list some properties of the solution $u$ of [\[torlapu\]](#torlapu){reference-type="eqref" reference="torlapu"}. **Lemma 3**. *[@K85; @CF10][\[CE\]]{#CE label="CE"} Let $\Omega$ be a compact convex set of $\mathbb R^{n}$ and let $u$ be the solution of [\[torlapu\]](#torlapu){reference-type="eqref" reference="torlapu"} in $\Omega$. Then $\sqrt{u}$ is a concave function in $\Omega$.* Let $M_{\Omega}=\max_{\overline{\Omega}}u$. For every $t\in [0,M_{\Omega}]$, define $$\Omega_{t}=\{x\in \Omega:\ u(x)>t\}.$$ By Lemma [\[CE\]](#CE){reference-type="ref" reference="CE"}, $\Omega_{t}$ is convex for every $t$. **Lemma 4**. *[@CF10][\[argu\]]{#argu label="argu"} Let $\Omega$ be a compact convex set in $\mathbb R^{n}$ and let $u$ be the solution of [\[torlapu\]](#torlapu){reference-type="eqref" reference="torlapu"} in $\Omega$. Then $$\label{grdguji} |\nabla u(x)|\leq diam(\Omega),\ \forall x\in \Omega.$$* For any $x\in\partial \Omega$ and fixed $0<b<1$, the non-tangential cone at $x$ is defined as $$\label{capm2} \Gamma(x)=\left\{y\in \Omega:dist(y,\partial \Omega)> b|x-y|\right\}.$$ **Lemma 5**. *[@CF10; @D77][\[Nonfin\]]{#Nonfin label="Nonfin"} Let $\Omega$ be a compact convex set in ${\mathbb R^{n}}$ and let $u$ be the solution of [\[torlapu\]](#torlapu){reference-type="eqref" reference="torlapu"} in $\Omega$. Then the non-tangential limit $$\label{capm3} \nabla u(x)=\lim_{y\rightarrow x, \ y \in \Gamma(x)}\nabla u(y)$$ exists for $\mathcal{H}^{n-1}$ almost all $x\in\partial \Omega$.
Furthermore, for $\mathcal{H}^{n-1}$ almost all $x\in \partial \Omega$, $$\nabla u(x)=-|\nabla u(x)|g_{\Omega}(x)\quad{\rm and}\quad |\nabla u|\in L^{2}(\partial \Omega, \mathcal{H}^{n-1}).$$* Let $\{\Omega_{i}\}_{i=0}^{\infty}$ be a sequence of convex bodies in ${\mathbb R^{n}}$. On the one hand, torsional rigidity has the following properties. **Lemma 6**. *[@CF10][\[T1\]]{#T1 label="T1"} (i) It is positively homogeneous of order $n+2$, i.e., $$\label{torhom} T(m\Omega_{0})=m^{n+2}T(\Omega_{0}),\ m> 0.$$* *(ii) It is translation invariant, that is, $$\label{tranin} T(\Omega_{0} +x_{0})=T(\Omega_{0}),\ \forall x_{0}\in {\mathbb R^{n}}.$$* *(iii) If $\Omega_{i}$ converges to $\Omega_{0}$ in the Hausdorff metric as $i\rightarrow \infty$ (i.e., $\mathcal{D}(\Omega_{i},\Omega_{0})\rightarrow 0$ as $i\rightarrow \infty$), then $$\label{Thousdo} \lim_{i\rightarrow \infty}T(\Omega_{i})=T(\Omega_{0}).$$* *(iv) It is monotone increasing, i.e., $T(L)\leq T(\Omega)$ if $L\subset \Omega$.* Recall that torsional rigidity satisfies the following isoperimetric-type inequality. **Lemma 7**. *[@PS51 de Saint Venant inequality] [\[JKT\]]{#JKT label="JKT"} Let $\Omega$ be a convex body in $\mathbb R^{n}$. Then the de Saint Venant inequality states that $$\label{IS2} \left(\frac{T(\Omega)}{T(B^{n})} \right)^{\frac{1}{n+2}}\leq \left( \frac{|\Omega|}{|B^{n}|} \right)^{\frac{1}{n}}.$$ More precisely, since $T(B^{n})=\frac{\omega_{n}}{n(n+2)}$, [\[IS2\]](#IS2){reference-type="eqref" reference="IS2"} becomes $$\label{IS} T(\Omega)\leq \frac{|\Omega|^{\frac{n+2}{n}}}{n(n+2)\omega^\frac{2}{n}_{n}},$$ with equality if and only if $\Omega$ is a ball.* As shown in [@CF10 Theorem 4.1], the torsion measure arises from the differential of torsional rigidity. **Lemma 8**. *Let $\Omega$, $\Omega_{1}$ be convex bodies in ${\mathbb R^{n}}$ and let $h(\Omega_{1},\cdot)$ be the support function of $\Omega_{1}$.
For sufficiently small $|t|$, let $\Omega_{t}$ be the Wulff shape of $h_{t}$ defined by $$h_{t}(v)=h(\Omega,v)+th(\Omega_{1},v)+o(t,v),\quad v\in {\mathbb{S}^{n-1}},$$ where $o(t,\cdot)/t\rightarrow 0$ uniformly on ${\mathbb{S}^{n-1}}$ as $t\rightarrow 0$. Then $$\label{Thadama66} \frac{d}{dt}T(\Omega_{t})\big|_{t=0}=\int_{{ {\mathbb{S}^{n-1}}}}h(\Omega_{1},v){d}\mu^{tor}(\Omega,v).$$* On the other hand, the torsion measure has the following properties. **Lemma 9**. *[@CF10][\[meau1\]]{#meau1 label="meau1"} (a) It is positively homogeneous of order $n+1$, i.e., $$\label{Tmeahom} \mu^{tor}(m\Omega_{0},\cdot)=m^{n+1}\mu^{tor}(\Omega_{0},\cdot),\ m> 0.$$* *(b) It is translation invariant, that is, $$\label{Tmeatranin} \mu^{tor}(\Omega_{0} +x_{0},\cdot)=\mu^{tor}(\Omega_{0},\cdot),\ \forall x_{0}\in {\mathbb R^{n}}.$$* *(c) It is absolutely continuous with respect to the surface area measure.* *(d) If $\Omega_{i}$ converges to $\Omega_{0}$ in the Hausdorff metric as $i\rightarrow \infty$ (i.e., $\mathcal{D}(\Omega_{i},\Omega_{0})\rightarrow 0$ as $i\rightarrow \infty$), then $\mu^{tor}(\Omega_{i},\cdot)$ converges weakly to $\mu^{tor}(\Omega_{0},\cdot)$ as $i\rightarrow \infty$.*

## Cone-torsion measure

Analogous to the variational formula [\[Thadama66\]](#Thadama66){reference-type="eqref" reference="Thadama66"} for the torsion measure, a logarithmic variation of torsional rigidity produces the cone-torsion measure. **Lemma 10**. *Let $\Omega$, $\Omega_{1}$ be convex bodies in ${\mathbb R^{n}}$ and let $h(\Omega_{1},\cdot)$ be the support function of $\Omega_{1}$.
For sufficiently small $|t|$, let $\Omega_{t}$ be the Wulff shape of $h_{t}$ defined by $$\log h_{t}(v)= \log h(\Omega,v)+th(\Omega_{1},v)+o(t,v),\quad v\in {\mathbb{S}^{n-1}},$$ where $o(t,\cdot)/t\rightarrow 0$ uniformly on ${\mathbb{S}^{n-1}}$ as $t\rightarrow 0$. Then $$\label{CO} \frac{d}{dt}T(\Omega_{t})\big|_{t=0}=(n+2)\int_{{ {\mathbb{S}^{n-1}}}}h(\Omega_{1},v){d}G^{tor}(\Omega,v).$$* We remark that the cone-torsion measure $G^{tor}(\Omega,\cdot)$ is positively homogeneous of degree $n+2$ and that $T(\Omega)=G^{tor}(\Omega, {\mathbb{S}^{n-1}})$.

# The associated extremal problem {#Sec3}

In this section, we study an extremal problem associated with the discrete torsion log-Minkowski problem. Suppose that $\beta_{1},\ldots,\beta_{N}\in (0,\infty)$, that the unit vectors $v_{1},\ldots,v_{N}$ $(N\geq n+1)$ are not concentrated on any closed hemisphere, and that $\mu$ is the discrete measure on ${ {\mathbb{S}^{n-1}}}$ given by $$\label{bordef} \mu=\sum_{k=1}^{N}\beta_{k}\delta_{v_{k}}(\cdot),$$ where $\delta_{v_{k}}$ denotes the delta measure concentrated at the point $v_{k}$. Following [@Zhu14], for $P\in \mathcal{P}(v_{1},\ldots,v_{N})$ we define the functional $\Phi_{P}:P\rightarrow {\mathbb{R}}$ by $$\label{phimax} \Phi_{P}(\gamma)=\sum_{k=1}^{N}\beta_{k}\log(h(P,v_{k})-\gamma\cdot v_{k}).$$ We consider the following extremal problem: $$\label{extre3} \inf\left\{\max_{\gamma\in Q}\Phi_{Q}(\gamma):Q\in \mathcal{P}(v_{1},\ldots,v_{N}),T(Q)=1\right\}.$$ In light of [\[extre3\]](#extre3){reference-type="eqref" reference="extre3"}, we begin by showing that there exists a unique $\gamma(P)\in Int(P)$ (the interior of $P$) such that $$\label{extre4} \Phi_{P}(\gamma(P))=\max_{\gamma\in Int (P)}\Phi_{P}(\gamma).$$ **Lemma 11**.
*Let $\beta_{1},\ldots,\beta_{N}\in (0,\infty)$ and suppose that the unit vectors $v_{1},\ldots,v_{N}$ $(N\geq n+1)$ are not concentrated on any closed hemisphere. If $P\in \mathcal{P}(v_{1},\ldots,v_{N})$, then there exists a unique maximum point $\gamma(P)\in Int(P)$ such that $$\Phi_{P}(\gamma(P))=\max_{\gamma\in P}\Phi_{P}(\gamma).$$* *Proof.* Firstly, we show the existence and uniqueness of the maximum point. By the concavity of $\log t$ on $(0,\infty)$, for $0<\alpha<1$ and $\gamma_{1}, \gamma_{2}\in P$, we obtain $$\begin{aligned} \label{} &\Phi_{P}(\alpha\gamma_{1}+(1-\alpha)\gamma_{2})\notag\\ &=\sum_{k=1}^{N}\beta_{k}\log [h(P,v_{k})-(\alpha\gamma_{1}+(1-\alpha)\gamma_{2})\cdot v_{k}]\notag\\ &\geq \alpha\sum_{k=1}^{N}\beta_{k}\log(h(P,v_{k})-\gamma_{1}\cdot v_{k})+ (1-\alpha)\sum_{k=1}^{N}\beta_{k}\log(h(P,v_{k})-\gamma_{2}\cdot v_{k})\notag\\ &=\alpha\Phi_{P}(\gamma_{1})+(1-\alpha)\Phi_{P}(\gamma_{2}).\end{aligned}$$ Since $P$ is convex, $\alpha\gamma_{1}+(1-\alpha)\gamma_{2}\in P$. Equality holds if and only if $$h(P,v_{k})-\gamma_{1}\cdot v_{k}=h(P,v_{k})-\gamma_{2}\cdot v_{k},\quad k=1,\ldots,N,$$ namely, $$\gamma_{1}\cdot v_{k}=\gamma_{2}\cdot v_{k},\quad k=1,\ldots,N.$$ Since the unit vectors $v_{1},\ldots,v_{N}$ $(N\geq n+1)$ are not concentrated on any closed hemisphere, they span $\mathbb R^{n}$, so $\gamma_{1}=\gamma_{2}$; this shows that $\Phi_{P}$ is strictly concave. The existence and uniqueness of the maximum point then follow from the continuity and strict concavity of $\Phi_{P}$ and the compactness of $P$. Secondly, we show that the maximum point lies in $Int(P)$. Indeed, if a sequence of interior points satisfies $\gamma_{j}\rightarrow \gamma_{0}\in\partial P$, then some factor $h(P,v_{k})-\gamma_{j}\cdot v_{k}$ tends to $0$, and hence $\Phi_{P}(\gamma_{j})\rightarrow - \infty$. This completes the proof.
◻ Next, we make some preparations. **Lemma 12**. *Let $\beta_{1},\ldots,\beta_{N}\in (0,\infty)$ and suppose that the unit vectors $v_{1},\ldots,v_{N}$ $(N\geq n+1)$ are not concentrated on any closed hemisphere. Suppose $P_{i}\in \mathcal{P}(v_{1},\ldots, v_{N})$ and $P_{i}\rightarrow P$ as $i\rightarrow \infty$. Then $$\lim_{i\rightarrow \infty}\gamma(P_{i})=\gamma(P),\quad \lim_{i\rightarrow \infty}\Phi_{P_{i}}(\gamma (P_{i}))=\Phi_{P}(\gamma (P)).$$* *Proof.* We argue by contradiction, along similar lines to [@Zhu14]. Let $P_{i_{j}}$ be a subsequence of $P_{i}$ converging to $P$ and satisfying $\gamma(P_{i_{j}})\rightarrow \gamma_{0}$, but with $\gamma_{0}\neq \gamma(P)$. Clearly, $\gamma_{0}\in P$. *We first show that $\gamma_{0}$ is an interior point of $P$. Since $\gamma(P)\in Int (P)$ and $P_{i}$ converges to $P$, there exists an $N_{0}>0$ such that $$h(P_{i},v_{k})-\gamma(P)\cdot v_{k}> c_{0}$$ for all $k=1,\ldots, N$ and $i>N_{0}$, where $c_{0}=\frac{1}{2}\min_{v\in {\mathbb{S}^{n-1}}}\{ h(P,v)-\gamma(P)\cdot v\}>0$.
It then follows that $$\label{iuq} \Phi_{P_{i}}(\gamma(P_{i}))\geq\Phi_{P_{i}}(\gamma(P))>\left(\sum^{N}_{k=1}\beta_{k}\right)\log \frac{c_{0}}{2}, \ {\rm for } \ i> N_{0}.$$ If $\gamma_{0}$ were a boundary point of $P$, then $\lim_{j\rightarrow\infty}\Phi_{P_{i_{j}}}(\gamma(P_{i_{j}}))=-\infty$, which contradicts [\[iuq\]](#iuq){reference-type="eqref" reference="iuq"}.* *Since $\gamma_{0}$ is an interior point of $P$ with $\gamma_{0}\neq \gamma(P)$, by [@S14 Theorem 1.8.8] and the definition of $\gamma(P)$ in [\[extre4\]](#extre4){reference-type="eqref" reference="extre4"}, we have $$\label{P1} \Phi_{P}(\gamma_{0})< \Phi_{P}(\gamma(P)).$$ On the other hand, by continuity, we have $$\label{P2} \lim_{j\rightarrow \infty}\Phi_{P_{i_{j}}}(\gamma(P_{i_{j}}))=\Phi_{P}(\gamma_{0}).$$ Meanwhile, $$\label{P3} \lim_{j\rightarrow \infty}\Phi_{P_{i_{j}}}(\gamma(P))=\Phi_{P}(\gamma(P)).$$ Combining [\[P1\]](#P1){reference-type="eqref" reference="P1"}, [\[P2\]](#P2){reference-type="eqref" reference="P2"} and [\[P3\]](#P3){reference-type="eqref" reference="P3"}, we get $$\label{ctrat} \lim_{j\rightarrow \infty}\Phi_{P_{i_{j}}}(\gamma(P_{i_{j}}))<\lim_{j\rightarrow \infty}\Phi_{P_{i_{j}}}(\gamma(P)).$$ But for every $P_{i_{j}}$ we have $$\Phi_{P_{i_{j}}}(\gamma(P_{i_{j}}))\geq \Phi_{P_{i_{j}}}(\gamma(P)),$$ and therefore $$\lim_{j\rightarrow \infty}\Phi_{P_{i_{j}}}(\gamma(P_{i_{j}}))\geq \lim_{j\rightarrow \infty}\Phi_{P_{i_{j}}}(\gamma(P)).$$ This contradicts [\[ctrat\]](#ctrat){reference-type="eqref" reference="ctrat"}. So $\lim_{i\rightarrow \infty}\gamma(P_{i})=\gamma(P)$. Using the continuity of $\Phi$, we get $$\lim_{i\rightarrow \infty}\Phi_{P_{i}}(\gamma(P_{i}))=\Phi_{P}(\gamma(P)).$$ The proof is completed.
◻* The following key lemma, proved in [@GXZ23], concerns polytopes whose facet normals are in general position: if such a polytope gets large, then it has to get large uniformly in all directions, which is helpful for obtaining uniform a priori bounds. **Lemma 13**. *Let $v_{1},\ldots, v_{N}$ be $N$ unit vectors that are not contained in any closed hemisphere and let $P_{i}$ be a sequence of polytopes in $\mathcal{P}(v_{1},\ldots,v_{N})$. Assume the vectors $v_{1},\ldots, v_{N}$ are in general position in dimension $n$. If the outer radii of $P_{i}$ are not uniformly bounded in $i$, then their inner radii are not uniformly bounded in $i$ either.* Using Lemma [Lemma 13](#GGH){reference-type="ref" reference="GGH"}, we obtain the following result. **Corollary 14**. *Let $v_{1},\ldots, v_{N}\in {\mathbb{S}^{n-1}}$ be $N$ unit vectors that are not contained in any closed hemisphere and let $P_{i}$ be a sequence of polytopes in $\mathcal{P}(v_{1},\ldots,v_{N})$. Assume that $v_{1},\ldots,v_{N}$ are in general position in dimension $n$. If the outer radii of $P_{i}$ are not uniformly bounded, then the torsional rigidities $T(P_{i})$ are also unbounded.* *Proof.* This follows from Lemma [Lemma 13](#GGH){reference-type="ref" reference="GGH"}, the homogeneity, translation invariance and monotonicity of $T$, and the fact that $T(B^{n})$ is positive for the centered unit ball $B^{n}$. ◻ To establish the existence of a solution to the torsion log-Minkowski problem, we need the following lemma. **Lemma 15**.
*Let $\beta_{1},\ldots,\beta_{N}\in (0,\infty)$ and suppose that the unit vectors $v_{1},\ldots,v_{N}$ $(N\geq n+1)$ are in general position in dimension $n$. Then there exists a polytope $P\in \mathcal{P}(v_{1},\ldots,v_{N})$ solving [\[extre3\]](#extre3){reference-type="eqref" reference="extre3"} such that $P$ has exactly $N$ facets, $\gamma(P)=o$, $T(P)=1$ and $$\Phi_{P}(o)=\inf\left\{\max_{\gamma\in Q}\Phi_{Q}(\gamma):Q\in \mathcal{P}(v_{1},\ldots,v_{N}),T(Q)=1\right\}.$$* *Proof.* By the translation invariance of $\Phi_{P}$, we can choose a minimizing sequence $P_{i}\in \mathcal{P}(v_{1},\ldots,v_{N})$ for problem [\[extre3\]](#extre3){reference-type="eqref" reference="extre3"} with $\gamma(P_{i})=o$ and $T(P_{i})=1$. Corollary [Corollary 14](#GGH2){reference-type="ref" reference="GGH2"} shows that $P_{i}$ is bounded. Together with the Blaschke selection theorem (see, e.g., [@S14 Theorem 1.8.7]), this shows that there exists a subsequence of $P_{i}$ converging to a polytope $P$. By the continuity of $T$, we have $T(P)=1$; combined with the de Saint Venant inequality of Lemma [\[JKT\]](#JKT){reference-type="ref" reference="JKT"}, this gives $|P|\geq c_{0}>0$, so $P$ is non-degenerate. Now, by Lemma [Lemma 12](#Pcvg){reference-type="ref" reference="Pcvg"}, we have $\gamma (P)=\lim_{i\rightarrow \infty}\gamma(P_{i})=o$, and by the definition of $\Phi_{P}$, $$\Phi_{P}(o)=\lim_{i\rightarrow\infty}\Phi_{P_{i}}(o)=\inf\left\{\max_{\gamma\in Q}\Phi_{Q}(\gamma):Q\in \mathcal{P}(v_{1},\ldots,v_{N}),T(Q)=1\right\}.$$ Secondly, we prove that $P$ has exactly $N$ facets. Suppose, by contradiction, that there exists $i_{0}\in \{1,\ldots,N\}$ such that $F(P,v_{i_{0}})=P\cap H(P,v_{i_{0}})$ is not a facet of $P$.
Following [@S14 Section 2.4] and the arguments of [@Xiong19 Lemma 5.1], for sufficiently small $\delta>0$, define the polytope $$P_{\delta}=P\cap \{ \gamma: \gamma\cdot v_{i_{0}}\leq h(P,v_{i_{0}})-\delta\}\in \mathcal{P}(v_{1},\ldots,v_{N}),$$ and set $$\alpha P_{\delta}=\alpha(\delta)P_{\delta}=T(P_{\delta})^{-\frac{1}{n+2}}P_{\delta}.$$ Then $T(\alpha P_{\delta})=1$ and, as $\delta\rightarrow 0^{+}$, $\alpha P_{\delta}\rightarrow P$. Moreover, by Lemma [Lemma 12](#Pcvg){reference-type="ref" reference="Pcvg"}, we get $$\gamma(P_{\delta})\rightarrow \gamma(P)=o\in Int (P),\ {\rm as}\ \delta \rightarrow 0^{+}.$$ Hence, for sufficiently small $\delta> 0$, we have $$\gamma (P_{\delta})\in Int(P)$$ and $$h(P,v_{k})> \gamma(P_{\delta})\cdot v_{k}+\delta,\ {\rm for} \ k\in \{ 1,\ldots, N\}.$$ Next, we show that $$\label{ctdo} \Phi_{\alpha P_{\delta}}(\gamma (\alpha P_{\delta}))< \Phi_{P}(\gamma(P))=\Phi_{P}(o).$$ In view of the fact that $\gamma(\alpha P_{\delta})=\alpha \gamma(P_{\delta})$, it follows that $$\begin{aligned} \label{} &\Phi_{\alpha P_{\delta}}(\gamma(\alpha P_{\delta}))\notag\\ &=\sum_{k=1}^{N}\beta_{k}\log(h(\alpha P_{\delta},v_{k})-\gamma(\alpha P_{\delta})\cdot v_{k})\notag\\ &=\log\alpha\sum_{k=1}^{N}\beta_{k}+\sum_{k=1}^{N}\beta_{k}\log(h(P_{\delta},v_{k})-\gamma(P_{\delta})\cdot v_{k})\notag\\ &=\log\alpha\sum_{k=1}^{N}\beta_{k}+\sum_{k=1}^{N}\beta_{k}\log(h(P,v_{k})-\gamma(P_{\delta})\cdot v_{k})-\beta_{i_{0}}\log(h(P,v_{i_{0}})-\gamma(P_{\delta})\cdot v_{i_{0}})\notag\\ &\quad+\beta_{i_{0}}\log(h(P,v_{i_{0}})-\gamma(P_{\delta})\cdot v_{i_{0}}-\delta)\notag\\ &=\Phi_{P}(\gamma(P_{\delta}))+H(\delta),\end{aligned}$$ where $$\begin{aligned} \label{Heq} H(\delta)&=\log\alpha\sum_{k=1}^{N}\beta_{k}-\beta_{i_{0}}\log(h(P,v_{i_{0}})-\gamma(P_{\delta})\cdot v_{i_{0}})\notag\\ &\quad+\beta_{i_{0}}\log(h(P,v_{i_{0}})-\gamma(P_{\delta})\cdot v_{i_{0}}-\delta).\end{aligned}$$ It remains to show that $H(\delta)< 0$, which yields
[\[ctdo\]](#ctdo){reference-type="eqref" reference="ctdo"}. Let $q_{0}$ be the diameter of $P$; then $$0<h(P,v_{i_{0}})-\gamma(P_{\delta})\cdot v_{i_{0}}-\delta<h(P,v_{i_{0}})-\gamma(P_{\delta})\cdot v_{i_{0}}< q_{0}.$$ Thus, by the concavity of $\log t$ on $(0,\infty)$, we get $$\log(h(P,v_{i_{0}})-\gamma (P_{\delta})\cdot v_{i_{0}}-\delta)-\log(h (P,v_{i_{0}})-\gamma (P_{\delta})\cdot v_{i_{0}})<\log(q_{0}-\delta)-\log q_{0}.$$ Hence, together with [\[Heq\]](#Heq){reference-type="eqref" reference="Heq"}, we have $$\begin{aligned} H(\delta)<M(\delta),\end{aligned}$$ where $$M(\delta)=-\frac{1}{n+2}\log T(P_{\delta})\left(\sum_{k=1}^{N}\beta_{k}\right)+\beta_{i_{0}}(\log(q_{0}-\delta)-\log q_{0}).$$ Now, exploiting the Hadamard variational formula for torsional rigidity given by Lemma [Lemma 8](#argu2){reference-type="ref" reference="argu2"}, we get $$\begin{aligned} \label{Hlim} M^{'}(\delta)&=-\frac{1}{n+2}\left(\sum^{N}_{k=1}\beta_{k} \right)\frac{1}{T(P_{\delta})}\frac{d T(P_{\delta})}{d \delta}-\frac{\beta_{i_{0}}}{q_{0}-\delta}\notag\\ &=-\frac{1}{n+2}\left(\sum^{N}_{k=1}\beta_{k} \right)\frac{1}{T(P_{\delta})}\sum^{N}_{k=1}h^{'}(P_{\delta},v_{k})\mu^{tor}(P_{\delta},\{v_{k}\})-\frac{\beta_{i_{0}}}{q_{0}-\delta}.\end{aligned}$$ Suppose $\mu^{tor}(P,\{v_{k}\})\neq 0$ for some $k\in \{1,\ldots, N\}$. By the absolute continuity of $\mu^{tor}(P,\cdot)$ with respect to $S(P,\cdot)$, we know $S(P,\{v_{k}\})\neq 0$.
As a result, $P$ has a facet with outer normal vector $v_{k}$, and by the definition of $P_{\delta}$, for sufficiently small $\delta> 0$, we have $h(P_{\delta},v_{k})=h({P},v_{k})$, which shows that $h^{'}(v_{k},0^{+})=0$, where $$h^{'}(v_{k},0^{+})=\lim_{\delta\rightarrow 0^{+}}\frac{h(P_{\delta},v_{k})-h({P},v_{k})}{\delta}.$$ Together with $P_{\delta}\rightarrow P$ as $\delta \rightarrow 0^{+}$, this gives $$\label{ulim} \sum^{N}_{k=1}h^{'}(P_{\delta},v_{k})\mu^{tor}(P_{\delta},\{v_{k}\})\rightarrow \sum^{N}_{k=1}h^{'}(v_{k},0^{+})\mu^{tor}(P,\{v_{k}\})=0, \ as \ \delta\rightarrow 0^{+},$$ which shows that, for sufficiently small $\delta$, $$\label{alim} M^{'}(\delta)< 0.$$ Since $M(0)=0$, for sufficiently small $\delta> 0$ we obtain $M(\delta)<0$, which directly yields $H(\delta)< 0$. So, there exists a $\delta_{0}> 0$ such that $P_{\delta_{0}}\in \mathcal{P}(v_{1},\ldots,v_{N})$ and $$\Phi_{\alpha_{0}P_{\delta_{0}}}(\gamma(\alpha_{0}P_{\delta_{0}}))<\Phi_{P}(\gamma(P_{\delta_{0}}))\leq\Phi_{P}(\gamma(P)) =\Phi_{P}(o),$$ where $\alpha_{0}=T(P_{\delta_{0}})^{-\frac{1}{n+2}}$. Set $P_{0}=\alpha_{0}P_{\delta_{0}}-\gamma(\alpha_{0}P_{\delta_{0}})\in \mathcal{P}(v_{1},\ldots,v_{N})$; then we get $$T(P_{0})=1,\ \gamma(P_{0})=o,\ \Phi_{P_{0}}(o)< \Phi_{P}(o).$$ This is a contradiction. So, $P$ has exactly $N$ facets. ◻ # Dealing with the discrete torsion log-Minkowski problem {#Sec4} In this section, we first address the discrete torsion log-Minkowski problem. **Theorem 16**. *Let $\beta_{1},\ldots,\beta_{N}\in (0,\infty)$, and let the unit vectors $v_{1},\ldots,v_{N}$ $(N\geq n+1)$ be in general position in dimension $n$. 
Suppose that there exists a polytope $P\in \mathcal{P}(v_{1},\ldots,v_{N})$ satisfying $\gamma(P)=o$ and $T(P)=1$ such that $$\label{} \Phi_{P}(o)=\inf\left\{\max_{\gamma\in Q}\Phi_{Q}(\gamma):Q\in \mathcal{P}(v_{1},\ldots,v_{N}),T(Q)=1\right\}.$$ Then there exists a polytope $P_{0}$ such that $$h(P_{0},\cdot)\mu^{tor}(P_{0},\cdot)=\sum_{k=1}^{N}\beta_{k}\delta_{v_{k}}(\cdot).$$* *Proof.* For $\delta_{1},\ldots,\delta_{N}\in {\mathbb{R}}$ and sufficiently small $|t|>0$, define the polytope $P_{t}$ as $$\label{} P_{t}=\bigcap_{j=1}^{N}\{x: x\cdot v_{j}\leq h(P,v_{j})+t\delta_{j},\ j=1,\ldots,N\},$$ and let $\alpha(t)$ satisfy $$\label{} \alpha(t)P_{t}=T(P_{t})^{-\frac{1}{n+2}}P_{t}.$$ Thus, from part (i) of Lemma [\[T1\]](#T1){reference-type="ref" reference="T1"}, it is easy to see that $T(\alpha(t)P_{t})=1$, $\alpha(t)P_{t}\in \mathcal{P}(v_{1},\ldots,v_{N})$ and $\alpha(t)P_{t}\rightarrow P$ as $t\rightarrow 0$. Meanwhile, employing$~\eqref{extre4}$, let $\gamma(t)=\gamma_{p}(\alpha(t)P_{t})$ and $$\begin{aligned} \label{pthmax6} \Phi(t)&=\max_{\gamma\in \alpha(t)P_{t}}\sum_{k=1}^{N}\beta_{k}\log(\alpha(t)h(P_{t},v_{k})-\gamma\cdot v_{k})\notag\\ &=\sum_{k=1}^{N}\beta_{k}\log(\alpha(t)h(P_{t},v_{k})-\gamma(t)\cdot v_{k}).\end{aligned}$$ By  [\[pthmax6\]](#pthmax6){reference-type="eqref" reference="pthmax6"}, Lemma [Lemma 11](#INT){reference-type="ref" reference="INT"}, and the fact that $\gamma(t)$ is an interior point of $\alpha(t)P_{t}$, we get $$\label{gam} \frac{\partial \Phi(t)}{\partial \gamma({t})}=0.$$ Consequently, we have $$\label{gamva} \sum_{k=1}^{N}\frac{\beta_{k}{v_{k,i}}}{\alpha(t)h(P_{t},v_{k})-\gamma(t)\cdot v_{k}}=0,$$ for $i=1,\ldots,n$, $\gamma=(\gamma_{1},\ldots,\gamma_{n})^{T}$, and $v_{k}=(v_{k,1},\ldots,v_{k,n})^{T}$. 
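The condition [\[gamva\]](#gamva){reference-type="eqref" reference="gamva"} is simply the vanishing of the gradient of $\sum_{k}\beta_{k}\log(h_{k}-\gamma\cdot v_{k})$ with respect to $\gamma$. As a quick numerical sanity check (not part of the proof), the following sketch compares that closed-form gradient with a central finite difference; all values of $\beta_{k}$, $v_{k}$ and $h_{k}$ below are hypothetical.

```python
import math

# Hypothetical data (illustration only): weights beta_k, unit normals v_k,
# and support numbers h_k standing in for alpha(t)h(P_t, v_k).
beta = [1.0, 0.7, 1.3]
v = [(1.0, 0.0), (-0.5, 0.8), (-0.3, -0.9)]
h = [2.0, 1.5, 1.8]

def phi(g):
    # Phi(gamma) = sum_k beta_k log(h_k - gamma . v_k)
    return sum(b * math.log(hk - (g[0] * vx + g[1] * vy))
               for b, (vx, vy), hk in zip(beta, v, h))

def grad(g):
    # d Phi / d gamma_i = - sum_k beta_k v_{k,i} / (h_k - gamma . v_k)
    return [-sum(b * vk[i] / (hk - (g[0] * vk[0] + g[1] * vk[1]))
                 for b, vk, hk in zip(beta, v, h)) for i in range(2)]

g, eps = (0.1, -0.2), 1e-6
fd = [(phi((g[0] + eps, g[1])) - phi((g[0] - eps, g[1]))) / (2 * eps),
      (phi((g[0], g[1] + eps)) - phi((g[0], g[1] - eps))) / (2 * eps)]
an = grad(g)
print(all(abs(a - b) < 1e-6 for a, b in zip(fd, an)))   # True
```

In particular, the sum in [\[gamva\]](#gamva){reference-type="eqref" reference="gamva"} vanishing at the maximizer is exactly this first-order condition.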
Setting $t=0$ in  [\[gamva\]](#gamva){reference-type="eqref" reference="gamva"}, with $\alpha(0)=1$ and $\gamma(0)=o$, we get $$\label{t0equ} \sum_{k=1}^{N}\frac{\beta_{k}v_{k,i}}{h(P,v_{k})}=0,\quad i=1,\ldots,n.$$ Next, we show that $\gamma^{'}(t)\big|_{t=0}$ exists. Let $$\label{} F_{i}(t,\gamma_{1},\ldots,\gamma_{n})=\sum_{k=1}^{N}\frac{\beta_{k}v_{k,i}}{\alpha(t)h(P_{t},v_{k})-(\gamma_{1}v_{k,1}+\ldots+\gamma_{n}v_{k,n})}$$ for $i=1,\ldots,n$. Then, we have $$\label{} \frac{\partial F_{i}}{\partial \gamma_{j}}\Big|_{(0,\ldots,0)}=\sum_{k=1}^{N}\frac{\beta_{k}v_{k,i}v_{k,j}}{h(P,v_{k})^{2}}$$ for $j=1,\ldots, n$. Hence $$\label{} \left(\frac{\partial F}{\partial \gamma}\Big|_{(0,\ldots,0)}\right)_{n\times n}=\sum_{k=1}^{N}\frac{\beta_{k}}{h(P,v_{k})^{2}}v_{k}v_{k}^{T},$$ where $v_{k}v_{k}^{T}$ is an $n\times n$ matrix. On the one hand, since $v_{1},\ldots,v_{N}$ are not concentrated on any closed hemisphere, for any $x\in{\mathbb R^{n}}$ with $x\neq 0$, there exists $v_{i_{0}}\in \{{v_{1},\ldots,v_{N}}\}$ such that $v_{i_{0}}\cdot x\neq 0$, and hence $$\begin{aligned} \label{} &x^{T}\left(\sum_{k=1}^{N}\frac{\beta_{k}}{h(P,v_{k})^{2}}v_{k}v_{k}^{T}\right)x\notag\\ &=\sum_{k=1}^{N}\frac{\beta_{k}(x\cdot v_{k})^{2}}{h(P,v_{k})^{2}}\notag\\ &\geq \frac{\beta_{i_{0}}(x\cdot v_{i_{0}})^{2}}{h(P,v_{i_{0}})^{2}}>0,\end{aligned}$$ which shows that $\frac{\partial F}{\partial \gamma}\big|_{(0,\ldots,0)}$ is positive definite. By the implicit function theorem, we assert that $\gamma^{'}(0)=(\gamma_{1}^{'}(0),\ldots,\gamma_{n}^{'}(0))$ exists. 
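The positive definiteness just established can be illustrated numerically. The sketch below uses hypothetical $\beta_{k}$, $h_{k}$ and unit normals $v_{k}$ spread over the full circle (hence not concentrated on any closed hemisphere), assembles the matrix $\sum_{k}\beta_{k}h_{k}^{-2}v_{k}v_{k}^{T}$ for $n=2$, and checks the $2\times 2$ positive definiteness criterion.

```python
import math

# Hypothetical data: unit normals v_k spread over the full circle (so not
# concentrated on a closed hemisphere), weights beta_k > 0, and support
# numbers h_k = h(P, v_k) > 0.
angles = [0.3, 1.7, 2.9, 4.2, 5.5]
v = [(math.cos(a), math.sin(a)) for a in angles]
beta = [1.0, 0.5, 2.0, 1.5, 0.7]
h = [1.2, 0.9, 1.1, 1.4, 1.0]

# Assemble the 2x2 matrix  sum_k beta_k / h_k^2 * v_k v_k^T .
A = [[0.0, 0.0], [0.0, 0.0]]
for (vx, vy), b, hk in zip(v, beta, h):
    w = b / hk**2
    A[0][0] += w * vx * vx
    A[0][1] += w * vx * vy
    A[1][0] += w * vy * vx
    A[1][1] += w * vy * vy

# A symmetric 2x2 matrix is positive definite iff trace > 0 and det > 0.
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(trace > 0 and det > 0)   # True
```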
On the other hand, since $\Phi(t)$ attains its minimum at $t=0$, by using [\[t0equ\]](#t0equ){reference-type="eqref" reference="t0equ"}, we obtain $$\begin{aligned} \label{} 0&=\frac{d\Phi(t)}{dt}\Bigg|_{t=0}\notag\\ &=\sum_{k=1}^{N}\beta_{k}h(P,v_{k})^{-1}\left[h(P,v_{k})\left(-\frac{1}{n+2}\right)\frac{d T(P_{t})}{dt}\Bigg|_{t=0}+\delta_{k}-\gamma^{'}(0)\cdot v_{k}\right]\notag\\ &=\sum_{k=1}^{N}\beta_{k}h(P,v_{k})^{-1}\left[-\frac{1}{n+2}h(P,v_{k})\left(\sum_{i=1}^{N}\delta_{i}\mu^{tor}(P,\{v_{i}\})\right)+\delta_{k}\right]-\gamma^{'}(0)\cdot\left[\sum_{k=1}^{N}\beta_{k}h(P,v_{k})^{-1}v_{k}\right]\notag\\ &=\sum_{k=1}^{N}\beta_{k}h(P,v_{k})^{-1}\left[-\frac{1}{n+2}h(P,v_{k})\left(\sum_{i=1}^{N}\delta_{i}\mu^{tor}(P,\{v_{i}\})\right)+\delta_{k}\right]\notag\\ &=\sum_{k=1}^{N}\delta_{k}\left[\beta_{k}h(P,v_{k})^{-1}-\frac{1}{n+2}\left(\sum_{i=1}^{N}\beta_{i}\right)\mu^{tor}(P,\{v_{k}\})\right].\end{aligned}$$ Since $\delta_{1},\ldots,\delta_{N}$ are arbitrary, we have $$\label{} \beta_{k}h(P,v_{k})^{-1}=\frac{1}{n+2}\left(\sum_{i=1}^{N}\beta_{i}\right)\mu^{tor}(P,\{v_{k}\})$$ for all $k=1,\ldots,N$. 
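The last equality in the display above is a purely algebraic rearrangement of finite sums, so it can be checked mechanically. The snippet below verifies it for random synthetic values of $\beta_{k}$, $h_{k}$, $\delta_{k}$ and masses $m_{k}$ standing in for $\mu^{tor}(P,\{v_{k}\})$ (all data hypothetical, for illustration only).

```python
import random

random.seed(1)
n, N = 3, 7
beta  = [random.uniform(0.1, 2.0) for _ in range(N)]
h     = [random.uniform(0.5, 2.0) for _ in range(N)]
delta = [random.uniform(-1.0, 1.0) for _ in range(N)]
m     = [random.uniform(0.0, 1.0) for _ in range(N)]   # stand-ins for mu^tor(P,{v_k})

S = sum(d * mk for d, mk in zip(delta, m))
# penultimate line: sum_k beta_k/h_k * ( -h_k/(n+2) * S + delta_k )
lhs = sum(b / hk * (-(1 / (n + 2)) * hk * S + d)
          for b, hk, d in zip(beta, h, delta))
# last line: sum_k delta_k * ( beta_k/h_k - (sum_i beta_i)/(n+2) * m_k )
rhs = sum(d * (b / hk - (1 / (n + 2)) * sum(beta) * mk)
          for b, hk, d, mk in zip(beta, h, delta, m))
print(abs(lhs - rhs) < 1e-12)   # True
```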
In view of the fact that $P$ is $n$-dimensional and $o\in Int(P)$, we have $h(P,v_{k})>0$, and therefore $$\label{Icon} \beta_{k}=h(P,v_{k})\frac{1}{n+2}\left(\sum_{i=1}^{N}\beta_{i}\right)\mu^{tor}(P,\{v_{k}\}).$$ Let $c=\frac{1}{n+2}\left(\sum_{i=1}^{N}\beta_{i}\right)$; applying  [\[bordef\]](#bordef){reference-type="eqref" reference="bordef"} to  [\[Icon\]](#Icon){reference-type="eqref" reference="Icon"}, we have $$\label{Icon2} ch(P,\cdot)d\mu^{tor}(P,\cdot)=d\mu.$$ For $m>0$ and $P\in \mathcal{P}(v_{1},\ldots,v_{N})$, by part (a) of Lemma [\[meau1\]](#meau1){reference-type="ref" reference="meau1"}, we get $$\label{} d\mu^{tor}_{0}(mP,\cdot)=m^{n+2}h(P,\cdot)d\mu^{tor}(P,\cdot).$$ Then, letting $P_{0}=\left[\frac{1}{n+2}(\sum_{i=1}^{N}\beta_{i})\right]^{\frac{1}{n+2}}P$, we get $$\label{} d\mu^{tor}_{0}(P_{0},\cdot)=d\mu,$$ which verifies that $$\mu^{tor}_{0}(P_{0},\cdot)=\sum_{k=1}^{N}\beta_{k}\delta_{v_{k}}(\cdot).$$ This shows that $P_{0}$ is the desired polytope, which completes the proof. ◻ Applying Lemma [Lemma 15](#mexist){reference-type="ref" reference="mexist"} and Theorem [Theorem 16](#maint2){reference-type="ref" reference="maint2"}, we obtain the existence of a solution to the discrete torsion log-Minkowski problem. **Theorem 17**. *Let $\mu$ be a discrete measure on ${\mathbb{S}^{n-1}}$ whose support set is not contained in any closed hemisphere and is in general position in dimension $n$. Then there exists a polytope $P$ containing the origin in its interior such that $$G^{tor}(P,\cdot)=\mu.$$* # The general torsion log-Minkowski problem {#Sec5} In this section, our aim is to deal with the general torsion log-Minkowski problem by an approximation technique. 
Given a finite Borel measure $\mu$ on ${\mathbb{S}^{n-1}}$ that is not concentrated on any closed hemisphere, we first construct a sequence of discrete measures, whose support sets are in general position, converging to $\mu$ weakly. For each positive integer $j$, divide ${\mathbb{S}^{n-1}}$ into a finite disjoint union $${\mathbb{S}^{n-1}}=\bigcup^{N_{j}}_{i=1}U_{i,j}$$ with $N_{j}>0$, such that each $U_{i,j}$ has diameter less than $\frac{1}{j}$ and nonempty interior. Then we can choose $v_{i,j}\in U_{i,j}$ such that $v_{1,j},\ldots, v_{N_{j},j}$ are in general position. For sufficiently large $j$, the vectors $v_{1,j},\ldots, v_{N_{j},j}$ are not contained in any closed hemisphere. We are in a position to define the discrete measure $\mu_{j}$ on ${\mathbb{S}^{n-1}}$ as $$\mu_{j}=\sum^{N_{j}}_{i=1}\left( \mu(U_{i,j})+\frac{1}{N^{2}_{j}} \right)\delta_{v_{i,j}},$$ and define $$\label{UJ} \bar{\mu}_{j}=\frac{|\mu|}{|\mu_{j}|}\mu_{j}.$$ From the above construction, we see that $\bar{\mu}_{j}$ is a discrete measure on ${\mathbb{S}^{n-1}}$ satisfying the conditions of Theorem [Theorem 16](#maint2){reference-type="ref" reference="maint2"}, and $\bar{\mu}_{j} \rightharpoonup \mu$ weakly. 
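For concreteness, the construction of $\mu_{j}$ and $\bar{\mu}_{j}$ can be sketched in the circle case $n=2$, taking $\mu$ to be arclength measure; the number of arcs and the offset used to pick $v_{i,j}$ below are illustrative choices, not prescribed by the text.

```python
import math

# Illustrative sketch for n = 2 with mu = arclength measure on S^1
# (total mass 2*pi), mimicking the construction of mu_j and bar(mu)_j.
j = 10
N = int(math.ceil(2 * math.pi * j)) + 1      # arcs of length < 1/j
arc = 2 * math.pi / N
mu_total = 2 * math.pi

# one sample direction v_{i,j} inside each arc U_{i,j} (offset chosen so
# the picked points are in "general position")
points = [(i + 0.37) * arc for i in range(N)]

# mu(U_{i,j}) = arc for arclength measure; add the 1/N^2 perturbation
weights = [mu_total / N + 1.0 / N**2 for _ in range(N)]

mu_j_total = sum(weights)
scale = mu_total / mu_j_total        # bar(mu)_j = |mu|/|mu_j| * mu_j
bar_weights = [scale * w for w in weights]

print(abs(sum(bar_weights) - mu_total) < 1e-9)   # True
```

By construction $|\bar{\mu}_{j}|=|\mu|$, which is the normalization used repeatedly in the estimates that follow.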
Consequently, by Theorem [Theorem 16](#maint2){reference-type="ref" reference="maint2"}, there is a polytope $\tilde{P}_{j}$ such that $$\label{ajlim} h(\tilde{P}_{j},\cdot)\mu^{tor}(\tilde{P}_{j},\cdot)=\bar{\mu}_{j},$$ and $\tilde{P}_{j}$ is a rescaling of $P_{j}$, i.e., $$\label{PPJ} \tilde{P}_{j}=\left(\frac{1}{n+2}|\bar{\mu}_{j}| \right)^{\frac{1}{n+2}}P_{j},$$ where $P_{j}$ satisfies $P_{j}\in \mathcal{P}(v_{1,j},\ldots,v_{N_{j},j})$, $\gamma_{j}(P_{j})=o$, $T(P_{j})=1$, and solves $$\label{LP} \Phi_{P_{j}}(o)=\inf\left\{\max_{\gamma_{j}\in Int (Q_{j})}\Phi_{Q_{j}}(\gamma_{j}):Q_{j}\in \mathcal{P}(v_{1,j},\ldots,v_{N_{j},j}),T(Q_{j})=1\right\}.$$ Now, we introduce the following key lemma, which shall be used below. **Lemma 18**. *Let $v_{1,j},\ldots,v_{N_{j},j}\in {\mathbb{S}^{n-1}}$ be as given above. Set $$\label{SJ} \mathcal{S}_{j}=\bigcap^{N_{j}}_{i=1}\{x\in \mathbb R^{n}: \ x\cdot v_{i,j}\leq 1 \}.$$ Then, for sufficiently large $j$, we get $$B^{n}\subset \mathcal{S}_{j} \subset 3 B^{n}.$$* *Proof.* The inclusion $B^{n}\subset \mathcal{S}_{j}$ is clear, since $x\cdot v_{i,j}\leq |x|\leq 1$ for every $x\in B^{n}$. For the other inclusion, since $\{U_{i,j}\}_{i}$ is a partition of ${\mathbb{S}^{n-1}}$, for each $u\in {\mathbb{S}^{n-1}}$ there exists $i_{j}$ such that $u\in U_{i_{j},j}$. Since $diam(U_{i,j})<\frac{1}{j}$, we can choose $N_{0}>0$ such that for each $j>N_{0}$, $$u\cdot v_{i_{j},j}>\frac{1}{3}.$$ Since $\rho(\mathcal{S}_{j},u)u\in \mathcal{S}_{j}$, we have $$\rho(\mathcal{S}_{j},u)/3<\rho(\mathcal{S}_{j},u)u\cdot v_{i_{j},j}\leq 1.$$ Consequently, $\rho(\mathcal{S}_{j},\cdot)< 3$ for each $j>N_{0}$, which gives the desired result. ◻ In what follows, for $\gamma \in \Omega$, we write $$\label{DD2} \Phi_{\Omega,\mu}(\gamma)=\int_{ {\mathbb{S}^{n-1}}}\log h(\Omega-\gamma,\cdot)d\mu.$$ Note that, when $\mu$ is a discrete measure and $\{v_{1},\ldots, v_{N}\}$ is the support of $\mu$, then [\[DD2\]](#DD2){reference-type="eqref" reference="DD2"} is precisely [\[bordef\]](#bordef){reference-type="eqref" reference="bordef"} given in Sec. 
[3](#Sec3){reference-type="ref" reference="Sec3"}. By means of Lemma [Lemma 18](#N1){reference-type="ref" reference="N1"}, we have the following result. **Lemma 19**. *Let $\tilde{P}_{j}$ be as given in [\[ajlim\]](#ajlim){reference-type="eqref" reference="ajlim"}, and let $P_{j}$ be the minimizer of [\[LP\]](#LP){reference-type="eqref" reference="LP"} with $\gamma_{j}(P_{j})=o$. If $|\mu|$ is positive, then there exists a constant $c_{0}>0$, independent of $j$, such that $$\label{UYT} \Phi_{\tilde{P}_{j},\bar{\mu}_{j}}(o)<c_{0}.$$* *Proof.* Set $\mathcal{S}_{j}$ as defined in [\[SJ\]](#SJ){reference-type="eqref" reference="SJ"}. Using Lemma [Lemma 18](#N1){reference-type="ref" reference="N1"}, we know that, for $r>0$ and sufficiently large $j$, $rB^{n}\subset r \mathcal{S}_{j}\subset 3r B^{n}$. By virtue of the homogeneity of $T$, there exists $r_{0}(j)>0$ such that $$T(r_{0}(j)\mathcal{S}_{j})=1.$$ Since $r B^{n}\subset r \mathcal{S}_{j}$, we have $$r_{0}(j)^{n+2}T(B^{n})=T(r_{0}(j)B^{n})\leq T(r_{0}(j)\mathcal{S}_{j})=1.$$ Thus, $r_{0}(j)\leq r_{0}$ for some constant $r_{0}$ independent of $j$, and $$\begin{split} \label{PPJ2} \Phi_{P_{j},\bar{\mu}_{j}}(o)&\leq \int_{ {\mathbb{S}^{n-1}}}\log h(r_{0}(j)\mathcal{S}_{j}-\gamma_{j} (r_{0}(j)\mathcal{S}_{j}),\cdot)d \bar{\mu}_{j}\\ &\leq \int_{ {\mathbb{S}^{n-1}}} \log h(6r_{0}B^{n},\cdot)d\bar{\mu}_{j}=\log(6r_{0})|\bar{\mu}_{j}|=\log(6r_{0})|\mu|. \end{split}$$ Hence, the desired bound [\[UYT\]](#UYT){reference-type="eqref" reference="UYT"} follows from $\eqref{PPJ}$ and [\[PPJ2\]](#PPJ2){reference-type="eqref" reference="PPJ2"}. ◻ As elaborated before, a Borel measure $\mu$ on ${\mathbb{S}^{n-1}}$ satisfies the *subspace mass inequality* provided $$\label{space} \frac{\mu(\xi_{k}\cap {\mathbb{S}^{n-1}})}{|\mu|}< \frac{k}{n}$$ for each $\xi_{k}\in G_{n,k}$, and for each $k=1,\ldots,n-1$. **Theorem 20**. 
*[@LU23] If $\mu$ is an even, finite, non-zero Borel measure on ${\mathbb{S}^{n-1}}$ satisfying the subspace mass inequality [\[space\]](#space){reference-type="eqref" reference="space"}, then there exists an origin-symmetric convex body $\Omega\subset \mathbb R^{n}$ such that $$G^{tor}(\Omega,\cdot)=\mu.$$* Our aim is to derive the above theorem without the symmetry assumption via an approximation argument. For this purpose, we need the following preparation. **Lemma 21**. *Let $\tilde{P}_{j}$ be as given in [\[ajlim\]](#ajlim){reference-type="eqref" reference="ajlim"}. Then there exists $c_{0}>0$ such that $$T(\tilde{P}_{j})>c_{0}$$ for every $j$.* *Proof.* With the aid of the definitions of $G^{tor}(\cdot)$ and $T(\cdot)$, one sees that $$\label{TTR} T(\tilde{P}_{j})=|G^{tor}(\tilde{P}_{j}, {\mathbb{S}^{n-1}})|=\frac{1}{n+2}|\bar{\mu}_{j}|=\frac{1}{n+2}|\mu|:=c_{0}>0.$$ Thus, we get the desired result. ◻ Next, we shall prove that $\tilde{P}_{j}$ is uniformly bounded when $\mu$ (not necessarily even) satisfies the subspace mass inequality [\[space\]](#space){reference-type="eqref" reference="space"}. For convenience, we write $$\chi_{k}=\frac{k}{n}.$$ For each $\omega\subset {\mathbb{S}^{n-1}}$ and $\eta >0$, we define $$\Re_{\eta}(\omega)=\{v\in {\mathbb{S}^{n-1}}: |v-u|< \eta, \ {\rm for \ some \ } u\in \omega\}.$$ Motivated by [@CL19 Lemma 4.1] and [@GXZ23 Lemma 5.10], we get a slightly stronger subspace mass inequality for sufficiently large $j$. **Lemma 22**. 
*Let $\mu$ be a finite Borel measure on ${\mathbb{S}^{n-1}}$ and let $\bar{\mu}_{j}$ be as established in [\[UJ\]](#UJ){reference-type="eqref" reference="UJ"}. If $\mu$ satisfies the subspace mass inequality [\[space\]](#space){reference-type="eqref" reference="space"}, then there exist $\tilde{\chi}_{k}\in (0,\chi_{k})$, $N_{0}>0$, and $\eta_{0}\in (0,1)$ such that for all $j>N_{0}$, $$\label{kkk} \frac{\bar{\mu}_{j}(\Re_{\eta_{0}}(\xi_{k}\cap {\mathbb{S}^{n-1}}))}{|\mu|}<\tilde{\chi}_{k}$$ for each $k$-dimensional subspace $\xi_{k}\subset \mathbb R^{n}$ and $k=1,\ldots,n-1$.* *Proof.* We argue by contradiction. Suppose that [\[kkk\]](#kkk){reference-type="eqref" reference="kkk"} does not hold. Then there exist a sequence of $k$-dimensional subspaces $\xi^{i}_{k}$ and sequences $j_{i}\rightarrow \infty$, $\eta_{i}\rightarrow 0$, $\chi^{i}_{k}\rightarrow \chi_{k}$ as $i\rightarrow \infty$, such that $$\label{kkk1} \frac{\bar{\mu}_{j_{i}}(\Re_{\eta_{i}}(\xi^{i}_{k}\cap {\mathbb{S}^{n-1}}))}{|\mu|}\geq\chi^{i}_{k}.$$ Now, let $\{e_{1,i},\ldots, e_{k,i}\}$ be an orthonormal basis of $\xi^{i}_{k}$. By choosing suitable subsequences, we may suppose that $e_{1,i}\rightarrow e_{1},\ldots, e_{k,i}\rightarrow e_{k}$ as $i\rightarrow \infty$. Let $\xi_{k}={\rm span} \{{e_{1},\ldots, e_{k}}\}$. Since $\eta_{i}\rightarrow 0$ as $i\rightarrow \infty$, given a small positive $\eta$, we get $$\begin{split} \label{qw} \frac{\bar{\mu}_{j_{i}}(\overline{\Re_{\eta}(\xi_{k}\cap {\mathbb{S}^{n-1}})})}{|\mu|}\geq\frac{\bar{\mu}_{j_{i}}(\Re_{\eta_{i}}(\xi^{i}_{k}\cap {\mathbb{S}^{n-1}}))}{|\mu|}\geq\chi^{i}_{k} \end{split}$$ for large enough $i$. 
Since $\bar{\mu}_{j_{i}}$ converges weakly to $\mu$, $\overline{\Re_{\eta}(\xi_{k}\cap {\mathbb{S}^{n-1}})}$ is compact, and $\chi^{i}_{k}\rightarrow \chi_{k}$ as $i\rightarrow \infty$, we obtain $$\frac{\mu(\overline{\Re_{\eta}(\xi_{k}\cap {\mathbb{S}^{n-1}})})}{|\mu|}\geq \chi_{k}.$$ Since $\eta$ is arbitrary, letting $\eta\rightarrow 0$, we get $$\frac{\mu(\xi_{k}\cap {\mathbb{S}^{n-1}})}{|\mu|}\geq \chi_{k}.$$ This contradicts [\[space\]](#space){reference-type="eqref" reference="space"}, which completes the proof. ◻ Based on Lemma [Lemma 22](#IU){reference-type="ref" reference="IU"}, we will estimate the functional $\Phi_{E_{j},\bar{\mu}_{j}}(o)$, where $\{E_{j}\}$ is a sequence of centered ellipsoids. The key technique is to use an appropriate spherical partition. The general argument was introduced in [@BLYZ12]. Let $e_{1},\ldots, e_{n}$ be an orthonormal basis in $\mathbb R^{n}$. For each $\delta\in (0,\frac{1}{\sqrt{n}})$, define the partition $\{A_{1,\delta},\ldots,A_{n,\delta}\}$ of ${\mathbb{S}^{n-1}}$, with respect to $e_{1},\ldots,e_{n}$, by $$\label{ER1} A_{k,\delta}=\{ v\in {\mathbb{S}^{n-1}}: |v\cdot e_{k}|\geq \delta \ {\rm and} \ |v \cdot e_{j}|\leq \delta \ {\rm for \ all }\ j>k\},\ k=1,\ldots,n.$$ Let $$\xi_{k}={\rm span}\{e_{1},\ldots,e_{k}\},\ k=1,\ldots,n,$$ and $\xi_{0}=\{0\}$. It was shown in [@BLYZ12] that for any non-zero finite Borel measure $\mu$ on ${\mathbb{S}^{n-1}}$, $$\label{STE} \lim_{\delta\rightarrow 0^{+}}\mu(A_{k,\delta})=\mu((\xi_{k}\backslash \xi_{k-1})\cap {\mathbb{S}^{n-1}}),$$ and therefore, $$\label{ST2} \lim_{\delta\rightarrow 0^{+}}(\mu(A_{1,\delta})+\ldots+\mu(A_{k,\delta}))=\mu(\xi_{k}\cap {\mathbb{S}^{n-1}}).$$ The following lemma from [@BLYZ19] shall be needed. **Lemma 23**. 
*[@BLYZ19 Lemma 4.1][\[BL\]]{#BL label="BL"} Suppose $\lambda_{1},\ldots, \lambda_{m}\in [0,1]$ are such that $$\lambda_{1}+\ldots+\lambda_{m}=1.$$ Suppose further that $a_{1},\ldots, a_{m}\in \mathbb{R}$ are such that $$a_{1}\leq a_{2}\leq \ldots \leq a_{m}.$$ Assume that there exist $\sigma_{0},\sigma_{1},\ldots,\sigma_{m}\in [0,\infty)$, with $\sigma_{0}=0$ and $\sigma_{m}=1$, such that $$\lambda_{1}+\ldots+\lambda_{k}\leq \sigma_{k}, \quad {\rm for}\ k=1,\ldots,m.$$ Then $$\sum^{m}_{k=1}\lambda_{k}a_{k}\geq \sum^{m}_{k=1}(\sigma_{k}-\sigma_{k-1})a_{k}.$$* **Lemma 24**. *Suppose $\varepsilon_{0}>0$. Suppose further that $(e_{1,j},\ldots,e_{n,j})$, where $j=1,2,\ldots,$ is a sequence of ordered orthonormal bases of $\mathbb R^{n}$ converging to the ordered orthonormal basis $(e_{1},\ldots,e_{n})$, while $(a_{1,j},\ldots,a_{n,j})$ is a sequence of $n$-tuples satisfying $$0<a_{1,j}\leq a_{2,j}\leq \ldots\leq a_{n,j} \ and \ a_{n,j}\geq \varepsilon_{0}.$$ For each $j=1,2,\ldots$, let $$E_{j}=\left\{x\in \mathbb R^{n}:\frac{(x\cdot e_{1,j})^{2}}{a^{2}_{1,j}}+\ldots+\frac{(x\cdot e_{n,j})^{2}}{a^{2}_{n,j}}\leq 1\right\}$$ denote the ellipsoid generated by $(e_{1,j},\ldots,e_{n,j})$ and $(a_{1,j},\ldots,a_{n,j})$. Let $\mu$ be a nonzero finite Borel measure on ${\mathbb{S}^{n-1}}$ satisfying the subspace mass inequality [\[space\]](#space){reference-type="eqref" reference="space"}. Then there exist $\delta_{0}, t_{0}\in (0,1)$ and $N_{0}>0$ such that for each $j>N_{0}$, we have $$\begin{split} \label{EP} \frac{1}{|\bar{\mu}_{j}|}\Phi_{E_{j},\bar{ \mu}_{j} }(o)\geq\log\left(\frac{\delta_{0}}{2}\right)+t_{0}\log a_{n,j}+\frac{1}{n}(1-t_{0})\log |E_{j}|-\frac{1}{n}(1-t_{0})\log\omega_{n}. \end{split}$$* *Proof.* For each $\delta\in (0,1/\sqrt{n})$, let $\{A_{1,\delta},\ldots, A_{n,\delta}\}$ be the partition of ${\mathbb{S}^{n-1}}$ as in [\[ER1\]](#ER1){reference-type="eqref" reference="ER1"}. 
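That the sets $A_{k,\delta}$ of [\[ER1\]](#ER1){reference-type="eqref" reference="ER1"} really cover the sphere when $\delta<1/\sqrt{n}$ can be sanity-checked numerically: every unit vector has a coordinate of size at least $1/\sqrt{n}>\delta$, and the largest index $k$ with $|v\cdot e_{k}|\geq\delta$ places $v$ in $A_{k,\delta}$. The sketch below samples random unit vectors for $n=3$ (an illustrative choice).

```python
import math, random

random.seed(2)
n, delta = 3, 0.4      # 0.4 < 1/sqrt(3) ~ 0.577, as required

def random_unit():
    # uniform random unit vector via normalized Gaussians
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(sum(t * t for t in x))
    return [t / r for t in x]

def membership(v):
    # indices k with v in A_{k,delta}:
    # |v . e_k| >= delta and |v . e_j| <= delta for all j > k
    return [k for k in range(n)
            if abs(v[k]) >= delta
            and all(abs(v[j]) <= delta for j in range(k + 1, n))]

covered = all(len(membership(random_unit())) >= 1 for _ in range(5000))
print(covered)   # True
```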
Since $\mu$ satisfies the subspace mass inequality $\eqref{space}$, by Lemma [Lemma 22](#IU){reference-type="ref" reference="IU"}, we know that there exist $N_{0}>0$, $\eta_{0}\in (0,1)$, and $\tilde{\chi}_{k}\in (0,\chi_{k})$ such that for all $j>N_{0}$, [\[kkk\]](#kkk){reference-type="eqref" reference="kkk"} holds for each $k$-dimensional proper subspace $\xi_{k}\subset \mathbb R^{n}$. Let $t_{0}$ be sufficiently small so that $$(1-t_{0})\chi_{k}> \tilde{\chi}_{k}.$$ Thus, for all $j>N_{0}$, we have $$\label{II} \frac{\bar{\mu}_{j}(\Re_{\eta_{0}}(\xi_{k}\cap {\mathbb{S}^{n-1}}))}{|\mu|}< (1-t_{0})\chi_{k}$$ for each $k$-dimensional subspace $\xi_{k}\subset \mathbb R^{n}$ and $k=1,\ldots,n-1$. In particular, we let $\xi_{k}={\rm span}\{e_{1},\ldots,e_{k}\}$. Note that for sufficiently small $\delta_{0}\in (0,1)$, one sees $$\bigcup^{k}_{i=1}A_{i,\delta_{0}}\subset \Re_{\eta_{0}}(\xi_{k}\cap {\mathbb{S}^{n-1}}).$$ Now, let $$\lambda_{i,\delta_{0}}=\frac{\bar{\mu}_{j}(A_{i,\delta_{0}})}{|\bar{\mu}_{j}|}.$$ Note that $$\lambda_{1,\delta_{0}}+\ldots+\lambda_{n,\delta_{0}}=1.$$ By virtue of [\[II\]](#II){reference-type="eqref" reference="II"}, we obtain $$\lambda_{1,\delta_{0}}+\ldots+\lambda_{k,\delta_{0}}=\frac{\sum^{k}_{i=1}\bar{\mu}_{j}(A_{i,\delta_{0}})}{|\mu|}<(1-t_{0})\chi_{k}$$ for each $k=1,\ldots,n-1$. Let $\sigma_{0}=0$, $\sigma_{n}=1$, and $\sigma_{k}=(1-t_{0})\chi_{k}=(1-t_{0})\frac{k}{n}$ for $k=1,\ldots,n-1$. Then $$\label{W1} \sigma_{1}-\sigma_{0}=\frac{1}{n}(1-t_{0}),$$ $$\label{W2} \sigma_{k}-\sigma_{k-1}=\frac{1}{n}(1-t_{0}), \ k=2,\ldots,n-1,$$ and $$\label{W3} \sigma_{n}-\sigma_{n-1}=\frac{1}{n}(1-t_{0})+t_{0}.$$ On the other hand, since $\lim_{j\rightarrow \infty}e_{k,j}=e_{k}$ for each $k=1,\ldots,n$, we may choose $j_{0}>0$ so that $|e_{k,j}-e_{k}|< \delta_{0}/2$ for each $j>j_{0}$ and each $k=1,\ldots,n$. 
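The weights $\sigma_{k}$ in [\[W1\]](#W1){reference-type="eqref" reference="W1"}--[\[W3\]](#W3){reference-type="eqref" reference="W3"} are exactly the data fed into Lemma [\[BL\]](#BL){reference-type="ref" reference="BL"} in the next step. The following randomized check (synthetic data only, not quantities from the proof) confirms the inequality of Lemma 23 for admissible $\lambda_{k}$ whose partial sums stay below $\sigma_{k}$.

```python
import random

random.seed(3)
n, t0 = 4, 0.2
# sigma_0 = 0, sigma_k = (1 - t0) k / n for k < n, sigma_n = 1, as in (W1)-(W3)
sigma = [0.0] + [(1 - t0) * k / n for k in range(1, n)] + [1.0]

ok = True
for _ in range(1000):
    a = sorted(random.uniform(-5.0, 5.0) for _ in range(n))  # a_1 <= ... <= a_n
    u = random.random()
    # admissible lambdas: partial sums p_k = u * sigma_k <= sigma_k, p_n = 1
    p = [u * sigma[k] for k in range(1, n)] + [1.0]
    lam = [p[0]] + [p[k] - p[k - 1] for k in range(1, n)]
    lhs = sum(l * x for l, x in zip(lam, a))
    rhs = sum((sigma[k + 1] - sigma[k]) * a[k] for k in range(n))
    ok = ok and lhs >= rhs - 1e-9
print(ok)   # True
```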
Since $(e_{1,j},\ldots,e_{n,j})$ is orthonormal, the definition of $E_{j}$ gives $\pm a_{k,j}e_{k,j}\in E_{j}$; hence, for $v\in A_{k,\delta_{0}}$, we have $$\label{W4} h(E_{j},v)\geq a_{k,j}|e_{k,j}\cdot v|\geq a_{k,j}(|v\cdot e_{k}|-|e_{k,j}-e_{k}|)\geq a_{k,j}\frac{\delta_{0}}{2}.$$ Since $\{A_{k,\delta_{0}}\}^{n}_{k=1}$ is a partition of ${\mathbb{S}^{n-1}}$, together with [\[W4\]](#W4){reference-type="eqref" reference="W4"} and Lemma [\[BL\]](#BL){reference-type="ref" reference="BL"}, for $j>j_{0}$, we have $$\begin{split} \label{TOT} \frac{1}{|\bar{\mu}_{j}|}\Phi_{E_{j},\bar{ \mu}_{j} }(o)&=\frac{1}{|\bar{\mu}_{j}|}\int_{ {\mathbb{S}^{n-1}}}\log h(E_{j},\cdot)d\bar{\mu}_{j}\\ &\geq\sum^{n}_{k=1}\frac{\bar{\mu}_{j} (A_{k,\delta_{0}})}{|\bar{\mu}_{j}|}\log \left(a_{k,j}\frac{\delta_{0}}{2}\right)\\ &=\log \left(\frac{\delta_{0}}{2}\right)+\sum^{n}_{k=1}\lambda_{k,\delta_{0}}\log a_{k,j}\\ &\geq\log \left(\frac{\delta_{0}}{2}\right)+\sum^{n}_{k=1}(\sigma_{k}-\sigma_{k-1})\log a_{k,j}. \end{split}$$ Now, applying [\[W1\]](#W1){reference-type="eqref" reference="W1"}, [\[W2\]](#W2){reference-type="eqref" reference="W2"} and [\[W3\]](#W3){reference-type="eqref" reference="W3"} to [\[TOT\]](#TOT){reference-type="eqref" reference="TOT"}, we have $$\begin{split} \label{EP11} \frac{1}{|\bar{\mu}_{j}|}\Phi_{E_{j},\bar{ \mu}_{j} }(o)&\geq \log \left(\frac{\delta_{0}}{2}\right)+\frac{1}{n}(1-t_{0})\sum^{n}_{k=1}\log a_{k,j}+t_{0}\log a_{n,j}. \end{split}$$ Then [\[EP\]](#EP){reference-type="eqref" reference="EP"} holds by substituting $|E_{j}|=\omega_{n}a_{1,j}a_{2,j}\ldots a_{n,j}$ into [\[EP11\]](#EP11){reference-type="eqref" reference="EP11"}. ◻ Now, we prove that $\tilde{P}_{j}$ is uniformly bounded. **Lemma 25**. *Let $\mu$ be a nonzero finite Borel measure on ${\mathbb{S}^{n-1}}$ and $\bar{\mu}_{j}$ be as given in [\[UJ\]](#UJ){reference-type="eqref" reference="UJ"}. 
Let $\tilde{P}_{j}$ be as constructed in [\[ajlim\]](#ajlim){reference-type="eqref" reference="ajlim"}. If $\mu$ satisfies the subspace mass inequality $\eqref{space}$, then $\tilde{P}_{j}$ is uniformly bounded.* *Proof.* We argue by contradiction and assume that $\tilde{P}_{j}$ is not uniformly bounded. Let $E_{j}$ be the John ellipsoid of $\tilde{P}_{j}$; then $$\label{EJJ} E_{j}\subset \tilde{P}_{j}\subset n(E_{j}-o_{j})+o_{j},$$ where the ellipsoid $E_{j}$ centered at $o_{j}\in Int (\tilde{P}_{j})$ is given by $$E_{j}=\left\{x\in \mathbb R^{n}:\frac{|(x-o_{j})\cdot e_{1,j}|^{2}}{a^{2}_{1,j}}+\ldots+\frac{|(x-o_{j})\cdot e_{n,j}|^{2}}{a^{2}_{n,j}}\leq 1\right\}$$ for a sequence of ordered orthonormal bases $(e_{1,j},\ldots,e_{n,j})$ of $\mathbb R^{n}$, with $0<a_{1,j}\leq \ldots \leq a_{n,j}$. Since $\tilde{P}_{j}$ is not uniformly bounded, by choosing a subsequence, we may assume $a_{n,j}\rightarrow \infty$ and $a_{n,j}\geq 1$. By means of the compactness of ${\mathbb{S}^{n-1}}$, we can take a subsequence and assume that $\{e_{1,j},\ldots, e_{n,j}\}$ converges to $\{e_{1},\ldots, e_{n}\}$ as an orthonormal basis in $\mathbb R^{n}$. Using Lemma [\[JKT\]](#JKT){reference-type="ref" reference="JKT"}, Lemma [Lemma 24](#EE){reference-type="ref" reference="EE"}, and [\[EJJ\]](#EJJ){reference-type="eqref" reference="EJJ"}, there exist $\delta_{0}$, $t_{0}$, $N_{0}>0$ and a constant $c(n,t_{0})$ such that $$\begin{split} \label{QWE} \frac{1}{|\bar{\mu}_{j}|}\Phi_{\tilde{P}_{j},\bar{ \mu}_{j} }(o_{j})&\geq \frac{1}{|\bar{\mu}_{j}|}\Phi_{E_{j},\bar{ \mu}_{j} }(o_{j})\\ &= \frac{1}{|\bar{\mu}_{j}|}\Phi_{E_{j}-o_{j},\bar{ \mu}_{j} }(o)\\ &\geq\log\left(\frac{\delta_{0}}{2}\right)+t_{0}\log a_{n,j}+\frac{1}{n}(1-t_{0})\log |E_{j}|-\frac{1}{n}(1-t_{0})\log\omega_{n}\\ &\geq\log\left(\frac{\delta_{0}}{2}\right)+t_{0}\log a_{n,j}+\frac{1}{n+2}(1-t_{0})\log T(E_{j})+c(n,t_{0}), \end{split}$$ where $c(n,t_{0})$ is not necessarily positive. 
To proceed further, with the aid of the homogeneity, monotonicity and translation invariance of $T$, [\[IS\]](#IS){reference-type="eqref" reference="IS"} and [\[EJJ\]](#EJJ){reference-type="eqref" reference="EJJ"}, we obtain $$\begin{split} \label{JIU} T(E_{j})=T(E_{j}-o_{j})&=n^{-(n+2)}T(n(E_{j}-o_{j})+o_{j})\\ &\geq n^{-(n+2)} T(\tilde{P}_{j})=n^{-(n+2)}\frac{1}{n+2}|\bar{\mu}_{j}|. \end{split}$$ Recall that $P_{j}$ is the minimizer of [\[LP\]](#LP){reference-type="eqref" reference="LP"} with $\gamma_{j}(P_{j})=o$; by [\[PPJ\]](#PPJ){reference-type="eqref" reference="PPJ"}, we know that $\gamma_{j}(\tilde{P}_{j})=o$. Combining this with $|\bar{\mu}_{j}|=|\mu|$, [\[JIU\]](#JIU){reference-type="eqref" reference="JIU"}, $a_{n,j}\rightarrow \infty$, and [\[QWE\]](#QWE){reference-type="eqref" reference="QWE"}, we have $$\label{} \Phi_{\tilde{P}_{j},\bar{\mu}_{j}}(o)\geq\Phi_{\tilde{P}_{j},\bar{\mu}_{j}}(o_{j})\rightarrow \infty, \ {\rm as} \ j\rightarrow \infty,$$ which contradicts Lemma [Lemma 19](#PES){reference-type="ref" reference="PES"}. The proof is completed. ◻ **Lemma 26**. *If $\tilde{P}_{j}$ in [\[ajlim\]](#ajlim){reference-type="eqref" reference="ajlim"} are uniformly bounded and $T(\tilde{P}_{j})>c_{0}$ for some constant $c_{0}>0$, then there exists a convex body $\Omega\in \mathcal K^n$ with $o\in \Omega$ such that $$\label{HG} h(\Omega,\cdot)\mu^{tor}(\Omega,\cdot)=\mu.$$* *Proof.* Since $\tilde{P}_{j}$ is uniformly bounded, with the aid of the Blaschke selection theorem, there exists a subsequence $\tilde{P}_{j_{i}}$ such that $\tilde{P}_{j_{i}}\rightarrow \Omega$ for some compact convex set $\Omega$ containing the origin. By virtue of the continuity of $T$ and the fact that $T(\tilde{P}_{j})>c_{0}>0$, one sees that $T(\Omega)>0$. From Lemma [\[JKT\]](#JKT){reference-type="ref" reference="JKT"}, we know that $|\Omega|\geq c_{1}>0$, which implies that $\Omega$ has nonempty interior. 
Finally, by taking the limit of [\[ajlim\]](#ajlim){reference-type="eqref" reference="ajlim"} on both sides and employing the weak convergence of $\mu^{tor}(\cdot)$ and the uniform continuity of the support function, we conclude that [\[HG\]](#HG){reference-type="eqref" reference="HG"} holds. ◻ From Lemma [Lemma 21](#UY){reference-type="ref" reference="UY"}, Lemma [Lemma 25](#PKL){reference-type="ref" reference="PKL"} and Lemma [Lemma 26](#POI){reference-type="ref" reference="POI"}, we obtain a solution to the general torsion log-Minkowski problem as follows. **Theorem 27**. *Let $\mu$ be a finite, non-zero Borel measure on ${\mathbb{S}^{n-1}}$ satisfying the subspace mass inequality [\[space\]](#space){reference-type="eqref" reference="space"}. Then there exists a convex body $\Omega\subset \mathbb R^{n}$ with $o\in \Omega$ such that $$G^{tor}(\Omega,\cdot)=\mu.$$* 20 A. D. Aleksandrov, On the surface area measure of convex bodies, Mat. Sb. (N.S.) **6** (1939), 167--174. A. D. Aleksandrov, On the theory of mixed volumes. III. Extensions of two theorems of Minkowski on convex polyhedra to arbitrary convex bodies, Mat. Sb. **3** (1938), 27--46. G. Bianchi, K. J. Böröczky, A. Colesanti and D. Yang, The $L_p$-Minkowski problem for $-n<p<1$, Adv. Math. **341** (2019), 493--535. T. Bonnesen and W. Fenchel, *Theory of convex bodies*, translated from the German and edited by L. Boron, C. Christenson and B. Smith, BCS Associates, Moscow, ID, 1987. K. Böröczky, E. Lutwak, D. Yang and G. Zhang, The log-Brunn-Minkowski inequality, Adv. Math. **231** (2012), no. 3-4, 1974--1997. K. Böröczky, E. Lutwak, D. Yang and G. Zhang, The logarithmic Minkowski problem, J. Amer. Math. Soc. **26** (2013), no. 3, 831--852. K. Böröczky, E. Lutwak, D. Yang, G. Zhang and Y. Zhao, The dual Minkowski problem for symmetric convex bodies, Adv. Math. **356** (2019), 106805, 30 pp. K. Böröczky, M. Henk and H. 
--- author: - "Xi Xie, Nian Li, Qiang Wang and Xiangyong Zeng [^1]" title: On constructing bent functions from cyclotomic mappings --- > **Abstract:** We study a new method of constructing Boolean bent functions from cyclotomic mappings. Three generic constructions are obtained by considering different branch functions such as Dillon functions, Niho functions and Kasami functions over multiplicative cosets and additive cosets, respectively. As a result, several new explicit infinite families of bent functions and their duals are derived. We demonstrate that some previous constructions are special cases of our simple constructions. In addition, by studying their polynomial forms, we observe that the last construction provides some examples which are EA-inequivalent to five classes of monomial, Dillon-type and Niho-type polynomials. > > **Keywords:** Bent functions, Walsh transform, Multiplicative cyclotomic mappings, Additive cyclotomic mappings. # Introduction Boolean bent functions were first introduced by Rothaus in 1976 [@R] as interesting combinatorial objects with maximum Hamming distance to the set of all affine functions. Over the last four decades, bent functions have attracted a lot of research interest due to their important applications in cryptography [@C2010], sequences [@OSW] and coding theory [@CHLL; @DFZ]. Kumar, Scholtz and Welch in [@KSW] generalized the notion of Boolean bent functions to functions over an arbitrary finite field. Since then, a great deal of research has been devoted to the construction of bent functions; we refer the reader to [@CM], [@Mbook], and Chapter 6 of [@Cbook]. The construction methods can be divided into two categories: primary and secondary constructions. Primary constructions build bent functions from scratch, see, e.g., the papers [@CCK; @D; @Do; @Do2; @L; @LK; @Mc; @R].
In contrast, secondary constructions provide bent functions from known ones, and a non-exhaustive list of references is [@C1994; @C2004; @C2006; @C2012; @D; @LKMPTZ; @M; @T; @X]. Though there are many works on bent functions, their classification and general structure are still not well understood, so it remains attractive to continue studying bent functions. Cyclotomic mappings of the first order were first introduced by Evans [@Evans] and Niederreiter and Winterhof [@NW]. They were further generalized by Wang [@W2007; @W]. Let $\mathbb{F}_{p^{n}}$ be the finite field with $p^n$ elements. The so-called index $d$ generalized cyclotomic mappings of $\mathbb{F}_{p^n}$ are functions $\mathbb{F}_{p^n}\rightarrow \mathbb{F}_{p^n}$ that agree with a suitable monomial function $x \rightarrow ax^{r}$ (for a fixed $a\in\mathbb{F}_{p^n}$ and non-negative integer $r$) on each cyclotomic coset of the index $d$ subgroup of $\mathbb{F}_{p^n}^*$. Such a mapping is called a cyclotomic mapping if the monomial exponents on all the cosets coincide. It turns out that every polynomial fixing $0$ can be represented uniquely by a cyclotomic mapping according to its index [@AGW2009]. Cyclotomic mappings are used extensively to study the permutation behaviour of polynomials over finite fields [@AW; @W2007; @W]. Recently they were also used to construct linear codes with few weights [@FSWW]. In this paper, we explore a new application of cyclotomic mappings in the construction of bent functions. Let $m$ be a positive integer and $q=2^m$. In the first and second generic constructions (see Theorems [Theorem 2](#thm.bent-dillon){reference-type="ref" reference="thm.bent-dillon"} and [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"}), we use Dillon functions and Niho functions as the branch functions over index $q+1$ cyclotomic cosets of ${\mathbb F}_{q^2}$, respectively.
Some explicit classes of bent functions are provided and their duals are determined as well. Using Magma we also demonstrate that many examples can be derived from these classes. For the third construction, we introduce a new variation of cyclotomic mappings. Namely, we partition the finite field ${\mathbb F}_{q^2}$ as a union of additive cosets and then use Kasami functions as branch functions over these additive cosets. As a result, we obtain another generic construction of bent functions (see Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"}). Because the conditions in our construction are easy to meet, we demonstrate our construction by obtaining some explicit classes of non-quadratic bent functions (see Corollaries [Corollary 3](#cor.kasami0){reference-type="ref" reference="cor.kasami0"} and [Corollary 4](#cor.kasami1){reference-type="ref" reference="cor.kasami1"}). Finally, switching between the cyclotomic form and polynomial form, we illustrate that our constructions can produce new infinite classes of bent polynomials, as well as several previously known classes. In fact, these generic constructions produce bent polynomials belonging to the Partial Spread class, the class $\mathcal{H}$ and the Maiorana-McFarland class, respectively. The rest of this paper is organized as follows. Some preliminaries are given in Section [2](#prel){reference-type="ref" reference="prel"}. Section [3](#cons1){reference-type="ref" reference="cons1"} and Section [4](#cons2){reference-type="ref" reference="cons2"} construct bent functions using Dillon functions, Niho functions and Kasami functions as the branch functions over multiplicative cyclotomic cosets and additive cyclotomic cosets, respectively. In Section [5](#sec.poly){reference-type="ref" reference="sec.poly"}, we study the polynomial forms of our newly constructed bent functions and briefly discuss the EA-equivalence between our functions and known ones.
Finally, Section [6](#conc){reference-type="ref" reference="conc"} concludes this paper. # Preliminaries {#prel} Throughout this paper, let $\mathbb{Z}_d$ denote the set $\{0,1,\cdots,d-1\}$. In addition, let $\mathbb{F}_{p^{n}}$ be the finite field with $p^n$ elements and $\mu_e=\{x\in\mathbb{F}_{p^{n}}: x^{e}=1\}$ be the set of $e$-th roots of unity in $\mathbb{F}_{p^{n}}$ for $e\,|\,p^n-1$, where $p$ is a prime and $n$ is a positive integer. The (absolute) trace function ${\rm Tr}_1^n:\mathbb F_{p^n}\longrightarrow \mathbb F_p$ is defined by ${\rm Tr}_1^n(x)=\sum_{i=0}^{n-1} x^{p^{i}}$ for all $x\in\mathbb F_{p^n}$. Given a function $f(x)$ mapping from $\mathbb{F}_{p^{n}}$ to $\mathbb{F}_{p}$, the Walsh transform of $f(x)$ is defined by $$\widehat{f}(b)=\sum\nolimits_{x\in\mathbb{F}_{p^n}}{\xi_p}^{f(x)-{\rm Tr}_1^n(bx)}, \, b\in\mathbb{F}_{p^n},$$ where $\xi_p=e^{\frac{2\pi\sqrt{-1}}{p}}$ is a complex primitive $p$-th root of unity. Then $f(x)$ is called a $p$-ary bent function if all its Walsh coefficients satisfy $|\widehat{f}(b) |=p^{n/2}$ [@R; @KSW]. A $p$-ary bent function $f(x)$ is called regular if $\widehat{f}(b)=p^{n/2}\xi_p^{\widetilde{f}(b)}$ holds for some function $\widetilde{f}(x)$ mapping $\mathbb{F}_{p^n}$ to $\mathbb{F}_p$, and it is called weakly regular if there exists a complex number $\mu$ of unit magnitude such that $\widehat{f}(b)=\mu^{-1}p^{n/2}\xi_p^{\widetilde{f}(b)}$ for all $b\in\mathbb{F}_{p^n}$. The function $\widetilde{f}(x)$ is called the dual of $f(x)$ and it is also bent. The following lemmas are helpful for the subsequent sections. **Lemma 1**. *([@HK])[\[lem.km\]]{#lem.km label="lem.km"} Let $n=2m$ and $a\in{\mathbb F}_{q^2}^*$, where $q=p^m$ and $p$ is a prime. Then $$\sum\nolimits_{x\in\mu_{q+1}}\xi_p^{{\rm Tr}_1^n(a x)}=1-K_m(a^{q+1}).$$ Here $K_m(a^{q+1}):=\sum_{x\in{\mathbb F}_{q}}\xi_p^{{\rm Tr}_1^m( a^{q+1} x+x^{q-2})}$ is the Kloosterman sum over ${\mathbb F}_{q}$.* **Lemma 2**.
*([@TZLH])[\[lem.eq.x\^2\]]{#lem.eq.x^2 label="lem.eq.x^2"} Let $n=2m$ be an even positive integer and $a,b\in\mathbb{F}_{2^n}^*$ satisfying ${\rm Tr}_1^n(b/a^2)=0$. Then the quadratic equation $x^2+ax+b=0$ has* *(1) both solutions in $\mu_{2^m+1}$ if and only if $b=a^{1-2^m}$ and ${\rm Tr}_1^m(b/a^2)=1$.* *(2) exactly one solution in $\mu_{2^m+1}$ if and only if $b\ne a^{1-2^m}$ and $$(1+b^{2^m+1})(1+a^{2^m+1}+b^{2^m+1})+a^2b^{2^m}+a^{2^m}b=0.$$* # Bent functions from multiplicative cyclotomic mappings {#cons1} Let $p$ be a prime, $d,\,n$ be positive integers such that $d\mid p^n-1$, and $\omega$ be a primitive element of $\mathbb{F}_{p^n}$. Let $C$ be the (unique) index $d$ subgroup of $\mathbb{F}_{p^n}^*$. Then the cosets of $C$ in $\mathbb{F}_{p^n}^*$ are of the form $C_i:=\omega^i C$ for $i\in\mathbb{Z}_{d}$. It can be seen that $\mathbb{F}_{p^n}=(\bigcup_{i=0}^{d-1}C_i)\bigcup\{0\}$ and $C_i\bigcap C_j=\emptyset$ for $i\ne j$. Let $(a_0,a_1,\cdots,a_{d-1})\in\mathbb{F}_{p^n}^{d}$ and $r_0,r_1,\cdots,r_{d-1}$ be $d$ non-negative integers. A generalized cyclotomic mapping [@AW; @W] of $\mathbb{F}_{p^n}$ of index $d$ is defined as follows: $$\label{F} F(x)=\left \{\begin{array}{ll} 0, & {\rm if} \,\, x=0,\\ a_i x^{r_i}, &{\rm if} \,\, x\in C_i,\,i\in\mathbb{Z}_{d}. \end{array} \right.$$ Because the finite field is partitioned into the union of $0$ and the multiplicative cosets, we call these cyclotomic mappings multiplicative cyclotomic mappings. In what follows, we investigate the bentness of the function $f(x)={\rm Tr}_1^n(F(x))$. To study the Walsh transform of $f(x)$ at a point $b\in\mathbb{F}_{p^n}$, we first define $$\label{Sij} S_{i}(b)=\sum\nolimits_{x\in C_i}\xi_{p}^{{\rm Tr}_1^n(a_i x^{r_i})-{\rm Tr}_1^n(bx)}$$ for $i\in\mathbb{Z}_{d}$.
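For concreteness, the branch-wise evaluation in [\[F\]](#F){reference-type="eqref" reference="F"} can be sketched in a few lines of Python. This is a toy illustration of our own, not code from the paper: the field $\mathbb{F}_{2^4}$, the index $d=5$, and the branch data $a_i$, $r_i$ below are arbitrary choices.

```python
# Toy sketch (our own illustration): a generalized cyclotomic mapping of
# index d = 5 on GF(2^4), with field elements encoded as 4-bit integers
# and arithmetic done modulo the primitive polynomial x^4 + x + 1.

MOD = 0b10011  # x^4 + x + 1, primitive over GF(2)

def gf_mul(x, y):
    """Carry-less multiplication in GF(2^4), reducing modulo MOD."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        if x & 0b10000:
            x ^= MOD
        y >>= 1
    return r

def gf_pow(x, e):
    """Repeated-multiplication power in GF(2^4)."""
    r = 1
    for _ in range(e):
        r = gf_mul(r, x)
    return r

w = 0b0010                                   # the class of x: a primitive element of GF(16)*
d = 5                                        # index of the subgroup C = {x : x^3 = 1} = GF(4)*
log = {gf_pow(w, k): k for k in range(15)}   # discrete logarithm table on GF(16)*

def coset_index(x):
    """The i with x in C_i = w^i * C (cosets of the index-5 subgroup)."""
    return log[x] % d

a = [1, w, gf_pow(w, 2), 1, w]  # branch coefficients a_0..a_4 (arbitrary)
r = [3, 1, 2, 3, 1]             # branch exponents r_0..r_4 (arbitrary)

def F(x):
    """Generalized cyclotomic mapping: 0 -> 0, and x in C_i -> a_i * x^{r_i}."""
    if x == 0:
        return 0
    i = coset_index(x)
    return gf_mul(a[i], gf_pow(x, r[i]))
```

Here $C=\mathbb{F}_4^*$ is the index-$5$ subgroup of $\mathbb{F}_{16}^*$, each coset $C_i$ has three elements, and `F` applies the monomial $a_i x^{r_i}$ attached to the coset containing $x$.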
This gives $$\begin{aligned} % \nonumber to remove numbering (before each equation) \widehat{f}(b) &=&\sum\nolimits_{x \in \mathbb{F}_{p^n}}\xi_p^{{\rm Tr}_1^n(F(x))-{\rm Tr}_1^n(bx)}\nonumber \\ &=&1+\sum_{i\in\mathbb{Z}_{d}}\sum_{x\in C_i}\xi_{p}^{{\rm Tr}_1^n(a_i x^{r_i})-{\rm Tr}_1^n(bx)}\nonumber \\ &=&1+\sum\nolimits_{i\in\mathbb{Z}_{d}}S_{i}(b). \label{fb-mul-ge}\end{aligned}$$ Therefore, to determine $\widehat{f}(b)$, it suffices to calculate $S_{i}(b)$ for all $i\in\mathbb{Z}_{d}$. In this section, we focus on the construction of bent functions from multiplicative cyclotomic mappings of index $q+1$ over $\mathbb{F}_{q^2}$, where $q=p^m$ and $n=2m$. In this case, $C={\mathbb F}_{q}^{*}$ and $C_i=\omega^i{\mathbb F}_{q}^*$. Note that $\omega^i{\mathbb F}_{q}^*=\{\omega^{i+k(q+1)}: k \in \mathbb{Z}_{q-1}\}=\{\omega^{i(q-1)(2^{n-1}-1)+(2^{n-1}i+k)(q+1)}: k \in \mathbb{Z}_{q-1}\}=u^i{\mathbb F}_{q}^*$ with the notation $u=\omega^{(q-1)(2^{n-1}-1)}$. Then $S_{i}(b)$ defined by [\[Sij\]](#Sij){reference-type="eqref" reference="Sij"} becomes $$\begin{aligned}S_{i}(b) &=\sum\nolimits_{x\in u^i{\mathbb F}_{q}^*}\xi_{p}^{{\rm Tr}_1^n(a_i x^{r_i})-{\rm Tr}_1^n(bx)} \\ &=\sum\nolimits_{y \in {\mathbb F}_{q}^*}\xi_p^{{\rm Tr}_1^n(a_i (u^iy)^{r_i})-{\rm Tr}_1^n(bu^iy)} \\ &=\sum\nolimits_{y \in {\mathbb F}_{q}^*}\xi_p^{{\rm Tr}_1^m(\alpha_i y^{r_i})-{\rm Tr}_1^m((bu^i+b^qu^{-i})y)} \end{aligned}$$ with $\alpha_i:=a_iu^{ir_i}+a_i^qu^{-ir_i}$ due to $u\in \mu_{q+1}$. Since the general case is difficult to calculate, we shall study $S_{i}(b)$ in the following two cases: If $r_i\equiv 0 \,({\rm mod}\, q-1)$, then $$\begin{aligned} % \nonumber to remove numbering (before each equation) S_{i}(b)&=&\xi_p^{{\rm Tr}_1^m(\alpha_i)}\sum\nolimits_{y \in {\mathbb F}_{q}^*}\xi_p^{-{\rm Tr}_1^m((bu^i+b^qu^{-i})y)} \nonumber\\ &=&\left \{\begin{array}{ll} (q-1)\xi_p^{{\rm Tr}_1^m(\alpha_i)}, & {\rm if} \,\, bu^i+b^qu^{-i}=0,\\ -\xi_p^{{\rm Tr}_1^m(\alpha_i)}, &{\rm otherwise}.
\end{array} \right. \label{Sii-R0}\end{aligned}$$ If $r_i\equiv p^{t_i} \,({\rm mod}\, q-1)$ for some integer $t_i$, then $$\begin{aligned}S_{i}(b) &=\sum\nolimits_{y \in {\mathbb F}_{q}^*}\xi_p^{{\rm Tr}_1^m((\alpha_i^{p^{-t_i}}-(bu^i+b^qu^{-i}))y)} \\ &=\left \{\begin{array}{ll} q-1, & {\rm if} \,\, bu^i+b^qu^{-i}=\alpha_i^{p^{-t_i}},\\ -1, &{\rm otherwise}, \end{array} \right. \end{aligned}$$ which implies that $S_{i}(b)=q T_{i}(b)-1$ with the notation $$\label{Ti-b} T_{i}(b)=\left \{\begin{array}{ll} 1, & {\rm if} \,\, bu^i+b^qu^{-i}=\alpha_i^{p^{-t_i}},\\ 0, &{\rm otherwise}. \end{array} \right.$$ In the sequel, we always assume that $n=2m$, $q=2^m$ and $u=\omega^{(q-1)(2^{n-1}-1)}$ is a primitive element of $\mu_{q+1}$ with $u^{\infty}=0$. Define $R_0:=\{i\in\mathbb{Z}_{q+1}: r_i\equiv 0 \,({\rm mod}\, q-1)\}$ and $R_1:=\{i\in\mathbb{Z}_{q+1}: r_i\equiv p^{t_i} \,({\rm mod}\, q-1)\}$. Then we characterize the Walsh transform of a Boolean function $f(x)$ in the case of $\# R_0+\# R_1=q+1$. **Theorem 1**. *Let $r_i$ be $q+1$ non-negative integers satisfying $\# R_0+\# R_1=q+1$ and $a_i\in{\mathbb F}_{q^2}$ for $i\in\mathbb{Z}_{q+1}$. 
Define $$\label{f-mul} f(x)={\rm Tr}_1^n(a_ix^{r_i}),\,\,{\rm if} \,\, x\in u^i\mathbb{F}_{q}^*,\,i\in\{\infty\}\cup\mathbb{Z}_{q+1}.$$ Then for any $b\in{\mathbb F}_{q^2}$, $$\widehat{f}(b)=\left \{\begin{array}{lll} q(M_0+\sum_{i\in R_1} T_{i}(b))-(M_0+\# R_1-1), & {\rm if} \,\, b=0,\\ q((-1)^{{\rm Tr}_1^m(\alpha_t)}+\sum_{i\in R_1} T_{i}(b))-(M_0+\# R_1-1), & {\rm if} \,\, b^{q-1}=u^{2t},\,t\in R_0,\\ q\sum_{i\in R_1} T_{i}(b)-(M_0+\# R_1-1), & {\rm if} \,\, b^{q-1}= u^{2t},\,t\in R_1,\\ \end{array} \right.$$ where $M_0:=\sum_{i\in R_0}(-1)^{{\rm Tr}_1^m(\alpha_i)}$ with $\alpha_i:=a_iu^{ir_i}+a_i^qu^{-ir_i}$ and $T_{i}(b)$ is defined by [\[Ti-b\]](#Ti-b){reference-type="eqref" reference="Ti-b"}.* *Proof.* In this case, for any $b\in{\mathbb F}_{q^2}$, [\[fb-mul-ge\]](#fb-mul-ge){reference-type="eqref" reference="fb-mul-ge"} turns into $$\begin{aligned} % \nonumber to remove numbering (before each equation) \widehat{f}(b) &=&1+\sum_{i\in\mathbb{Z}_{q+1}}S_{i}(b)=1+\sum_{i\in R_0}S_{i}(b)+\sum_{i\in R_1}S_{i}(b) \nonumber\\ &=&1+\sum_{i\in R_0}S_{i}(b)+\sum_{i\in R_1}(q T_{i}(b)-1) \nonumber\\ &=&\sum_{i\in R_0}S_{i}(b)+q\sum_{i\in R_1} T_{i}(b)-\# R_1+1. \label{fb-mul}\end{aligned}$$ Next, we calculate $\sum_{i\in R_0}S_{i}(b)$ for $b\in{\mathbb F}_{q^2}$ by considering the following three cases. **Case 1**: $b=0$. In this case, [\[Sii-R0\]](#Sii-R0){reference-type="eqref" reference="Sii-R0"} gives $S_{i}(b)=(q-1)(-1)^{{\rm Tr}_1^m(\alpha_i)}$ for all $i\in R_0$. This means $\sum_{i\in R_0}S_{i}(b)=(q-1)M_0$ with the notation $M_0=\sum_{i\in R_0}(-1)^{{\rm Tr}_1^m(\alpha_i)}$. **Case 2**: $b^{q-1}=u^{2t}$ for some $t\in R_0$. If this case happens, then $b^qu^{-t}+bu^t=0$. Further, [\[Sii-R0\]](#Sii-R0){reference-type="eqref" reference="Sii-R0"} implies $S_{t}(b)=(q-1)(-1)^{{\rm Tr}_1^m(\alpha_t)}$. Next we claim that $b^qu^{-i}+bu^i\ne0$ for any $i\in R_0\backslash\{t\}$. Suppose that there exists $t'\in R_0\backslash\{t\}$ such that $b^qu^{-t'}+bu^{t'}=0$. 
Then $u^{2t'}=b^{q-1}=u^{2t}$, i.e., $u^{2(t-t')}=1$, which is impossible due to $\gcd(2,\,q+1)=1$ and $t\ne t'$. Thus $b^qu^{-i}+bu^i\ne0$ for any $i\in R_0\backslash\{t\}$. From [\[Sii-R0\]](#Sii-R0){reference-type="eqref" reference="Sii-R0"} we know that $S_{i}(b)=-(-1)^{{\rm Tr}_1^m(\alpha_i)}$ for $i\in R_0\backslash\{t\}$. Hence one concludes $$\sum_{i\in R_0}S_{i}(b)=(q-1)(-1)^{{\rm Tr}_1^m(\alpha_t)}-\sum_{i\in R_0\backslash\{t\}}(-1)^{{\rm Tr}_1^m(\alpha_i)}=q(-1)^{{\rm Tr}_1^m(\alpha_t)}-M_0.$$ **Case 3**: $b^{q-1}=u^{2t}$ for some $t\in R_1$. In this case, $b^{q-1}\ne u^{2i}$, i.e., $b^qu^{-i}+bu^i\ne0$ for all $i\in R_0$. Then [\[Sii-R0\]](#Sii-R0){reference-type="eqref" reference="Sii-R0"} yields $S_{i}(b)=-(-1)^{{\rm Tr}_1^m(\alpha_i)}$ for $i\in R_0$. Therefore we have $\sum_{i\in R_0}S_{i}(b)=-M_0$. Substituting the values of $\sum_{i\in R_0}S_{i}(b)$ in Cases 1-3 into [\[fb-mul\]](#fb-mul){reference-type="eqref" reference="fb-mul"} gives the desired result. ◻ Next we consider the construction of bent functions from two classes of multiplicative cyclotomic mappings of index $q+1$ over $\mathbb{F}_{q^2}$. ## Bent functions from Dillon functions {#cons1.1} First of all, we focus on the branch functions with Dillon exponents. Here we take $R_0=\mathbb{Z}_{q+1}$ and $R_1=\emptyset$, and we assume $q>2$ throughout this subsection. Then we state our first result on bent functions. **Theorem 2**. *Let $a_i\in{\mathbb F}_{q^2}$ and $l_i$ be non-negative integers for $i\in\mathbb{Z}_{q+1}$.
Then $$f(x)={\rm Tr}_1^n(a_ix^{l_i(q-1)}),\,\,{\rm if} \,\, x\in u^i\mathbb{F}_{q}^*,\,i\in\{\infty\}\cup\mathbb{Z}_{q+1}$$ is bent if and only if $$\sum\nolimits_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^n(a_iu^{-2il_i})}=1.$$ Moreover, the dual function of $f(x)$ is $$\widetilde{f}(x)={\rm Tr}_1^n(a_ix^{l_i(1-q)}),\,\, {\rm if} \,\, x^{q-1}=u^{2i},\,i\in\{\infty\}\cup\mathbb{Z}_{q+1}.$$* *Proof.* From Theorem [Theorem 1](#thm.wf-mul){reference-type="ref" reference="thm.wf-mul"}, for any $b\in{\mathbb F}_{q^2}$, one has $$\label{fb-Dillon-ge} \widehat{f}(b)=\left \{\begin{array}{lll} qM_0-(M_0-1), & {\rm if} \,\, b=0,\\ q(-1)^{{\rm Tr}_1^m(\alpha_t)}-(M_0-1), & {\rm if} \,\, b^{q-1}=u^{2t},\,t\in\mathbb{Z}_{q+1}\\ \end{array} \right.$$ since $\# R_1=0$ and $\sum_{i\in R_1} T_{i}(b)=0$. Here $\alpha_i=a_iu^{il_i(q-1)}+a_i^qu^{-il_i(q-1)}=a_iu^{-2il_i}+a_i^qu^{2il_i}$ and $M_0=\sum_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^m(\alpha_i)}=\sum_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^n(a_iu^{-2il_i})}$. Note that when $i$ runs over $\mathbb{Z}_{q+1}$, $u^{2i}$ runs over $\mu_{q+1}$, which implies $\{0\}\bigcup\{b\in{\mathbb F}_{q^2}: b^{q-1}=u^{2i}, i\in \mathbb{Z}_{q+1}\}={\mathbb F}_{q^2}$. We claim that $|\widehat{f}(b)|=q$ for any $b\in{\mathbb F}_{q^2}$ if and only if $M_0=1$. Obviously, if $M_0=1$, then [\[fb-Dillon-ge\]](#fb-Dillon-ge){reference-type="eqref" reference="fb-Dillon-ge"} yields $|\widehat{f}(b)|=q$ for any $b\in{\mathbb F}_{q^2}$. On the other hand, if $|\widehat{f}(b)|=q$ for any $b\in{\mathbb F}_{q^2}$, then $|\widehat{f}(0)|=q$, and [\[fb-Dillon-ge\]](#fb-Dillon-ge){reference-type="eqref" reference="fb-Dillon-ge"} indicates $\widehat{f}(0)=qM_0-(M_0-1)=(q-1)M_0+1=\pm q$, which leads to $M_0=1$ since $q>2$. Thus $|\widehat{f}(b)|=q$ for any $b\in{\mathbb F}_{q^2}$ if and only if $M_0=1$.
More precisely, [\[fb-Dillon-ge\]](#fb-Dillon-ge){reference-type="eqref" reference="fb-Dillon-ge"} gives $\widehat{f}(0)=q$, and if $b^{q-1}=u^{2t}$ for some $t\in\mathbb{Z}_{q+1}$, then $$\widehat{f}(b)=q(-1)^{{\rm Tr}_1^m(\alpha_t)}=q(-1)^{{\rm Tr}_1^n(a_tu^{-2tl_t})}=q(-1)^{{\rm Tr}_1^n(a_tb^{l_t(1-q)})}.$$ This completes the proof. ◻ A slight reformulation of the condition in Theorem [Theorem 2](#thm.bent-dillon){reference-type="ref" reference="thm.bent-dillon"} for $f(x)$ to be bent allows us to construct bent functions explicitly. Note that $$\sum_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^n(a_iu^{-2il_i})}=\sum_{i\in\mathbb{Z}_{q}}((-1)^{{\rm Tr}_1^n(a_iu^{-2il_i})}-(-1)^{{\rm Tr}_1^n(a_qu^{-2il_q})})+\sum_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^n(a_qu^{-2il_q})}$$ and when $\gcd(l_q,\,q+1)=1$ and $a_q\ne 0$, $$\sum_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^n(a_qu^{-2il_q})}=\sum_{z\in\mu_{q+1}}(-1)^{{\rm Tr}_1^n(a_qz)}=1-K_m(a_q^{q+1})$$ due to Lemma [\[lem.km\]](#lem.km){reference-type="ref" reference="lem.km"}. That means $f(x)$ in Theorem [Theorem 2](#thm.bent-dillon){reference-type="ref" reference="thm.bent-dillon"} is bent if and only if $$\sum\nolimits_{i\in\mathbb{Z}_{q}}((-1)^{{\rm Tr}_1^n(a_iu^{-2il_i})}-(-1)^{{\rm Tr}_1^n(a_qu^{-2il_q})})=K_m(a_q^{q+1}).$$ By using this observation, we obtain the following construction of bent functions for the case $\#\{a_i: i\in\mathbb{Z}_{q+1}\}\leq 2$ directly. **Theorem 3**. *Let $a_1\in{\mathbb F}_{q^2},a_2\in{\mathbb F}_{q^2}^*$, $l_1,\,l_2$ be non-negative integers and $\gcd(l_2,\,q+1)=1$. Denote $N:=\{u^{i}{\mathbb F}_{q}^*: i \in\mathbb{Z} \}$, where $\mathbb{Z}$ is a subset of $\mathbb{Z}_{q}$.
Then $$f(x)=\left \{\begin{array}{ll} {\rm Tr}_1^n(a_1 x^{l_1(q-1)}), & {\rm if} \,\, x\in N,\\ {\rm Tr}_1^n(a_2 x^{l_2(q-1)}), &{\rm otherwise} \end{array} \right.$$ is a bent function if and only if $$\label{con.dillon1}\sum\nolimits_{i\in\mathbb{Z}}((-1)^{{\rm Tr}_1^n(a_1u^{-2il_1})}-(-1)^{{\rm Tr}_1^n(a_2u^{-2il_2})})=K_m(a_2^{q+1}).$$ Moreover, the dual function of $f(x)$ is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^n(a_1x^{l_1(1-q)}), & {\rm if} \,\, x^{q-1}=u^{2i},\,i\in\mathbb{Z},\\ {\rm Tr}_1^n(a_2x^{l_2(1-q)}), &{\rm otherwise} . \end{array} \right.$$* It is easy to construct bent functions by selecting a suitable set $N$ and parameters $a_1,\,a_2$ when $\gcd(l_1,\,q+1)=\gcd(l_2,\,q+1)=1$. In the case of $N=\mu_{r(q-1)}$ with $r\mid (q+1)$, one has $\mathbb{Z}=\{i\in\mathbb{Z}_{q}:(q+1)/r \mid i \}$. Then [\[con.dillon1\]](#con.dillon1){reference-type="eqref" reference="con.dillon1"} becomes $$\sum\nolimits_{i\in\mathbb{Z}_{q},(q+1)/r|i}((-1)^{{\rm Tr}_1^n(a_1 u^{-2il_1})}-(-1)^{{\rm Tr}_1^n(a_2 u^{-2il_2})})=K_m(a_2^{q+1}),$$ that is, $$\label{eq.dillon2} \sum\nolimits_{i\in\mathbb{Z}_{r}}((-1)^{{\rm Tr}_1^n(a_1\varepsilon^{i})}-(-1)^{{\rm Tr}_1^n(a_2\varepsilon^{i})})=K_m(a_2^{q+1})$$ due to $\gcd(l_i,\,q+1)=1$ for $i=1,\,2$, where $\varepsilon$ is a primitive element of $\mu_{r}$. First, if we set $a_1=\varepsilon^{j} a_2$ for some $j\in \mathbb{Z}_{r}$, then $$\sum\nolimits_{i\in\mathbb{Z}_{r}}(-1)^{{\rm Tr}_1^n(a_1\varepsilon^{i})}=\sum\nolimits_{i\in\mathbb{Z}_{r}}(-1)^{{\rm Tr}_1^n(a_2\varepsilon^{i+j})}=\sum\nolimits_{i\in\mathbb{Z}_{r}}(-1)^{{\rm Tr}_1^n(a_2\varepsilon^{i})}.$$ Then the following result can be obtained from Theorem [Theorem 3](#thm.bent-dillon2){reference-type="ref" reference="thm.bent-dillon2"} and [\[eq.dillon2\]](#eq.dillon2){reference-type="eqref" reference="eq.dillon2"} directly. **Corollary 1**.
*Let $c\in{\mathbb F}_{q^2}^*$, $\epsilon\in\mu_{r}$, $r$ and $l_i$ be positive integers satisfying $r|(q+1)$ and $\gcd(l_i,\,q+1)=1$ for $i=1,\,2$. Define $$f(x)=\left \{\begin{array}{ll} {\rm Tr}_1^n(\epsilon c x^{l_1(q-1)}), & {\rm if} \,\, x\in \mu_{r(q-1)},\\ {\rm Tr}_1^n(c x^{l_2(q-1)}), &{\rm otherwise}. \end{array} \right.$$ Then $f(x)$ is a bent function if and only if $K_m(c^{q+1})=0$. Moreover, the dual function of $f(x)$ is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^n(\epsilon c x^{l_1(1-q)}), & {\rm if} \,\, x\in \mu_{r(q-1)},\\ {\rm Tr}_1^n(cx^{l_2(1-q)}), &{\rm otherwise} . \end{array} \right.$$* **Remark 1**. *If we take $l_1=l_2$ and $\epsilon=1$ in Corollary [Corollary 1](#cor.bent-dillon3.1){reference-type="ref" reference="cor.bent-dillon3.1"}, then $f(x)$ is reduced to the monomial case, and Corollary [Corollary 1](#cor.bent-dillon3.1){reference-type="ref" reference="cor.bent-dillon3.1"} gives that ${\rm Tr}_1^n(c x^{l_1(q-1)})$ is bent if and only if $K_m(c^{q+1})=0$, which is consistent with the results presented by Dillon [@D] for $l_1=1$, and by Leander [@L] and Charpin and Gong [@CG] for $l_1$ with $\gcd(l_1,\,q+1)=1$. Incidentally, $f(x)$ in Corollary [Corollary 1](#cor.bent-dillon3.1){reference-type="ref" reference="cor.bent-dillon3.1"} is bent if and only if the function ${\rm Tr}_1^n(c x^{l_2(q-1)})$ is bent.* **Example 1**. *Let $q=2^6$, $l_1=l_2=1$, $r=5$. According to Magma, there are 3118 pairs $(\epsilon ,\,c)$ with $\epsilon\ne 1$ such that $f(x)$ in Corollary [Corollary 1](#cor.bent-dillon3.1){reference-type="ref" reference="cor.bent-dillon3.1"} is bent over $\mathbb{F}_{2^{12}}$. Take $\epsilon=\omega^{819}$ and $c=\omega^{5}$, where $\omega$ is a primitive element of $\mathbb{F}_{2^{12}}$. Then $\epsilon\in\mu_{5}$ and it can be checked that $K_6(\omega^{65})=0$.
Corollary [Corollary 1](#cor.bent-dillon3.1){reference-type="ref" reference="cor.bent-dillon3.1"} now establishes that $$f(x)=\left \{\begin{array}{ll} {\rm Tr}_1^{12}(\omega^{824}x^{63}), & {\rm if} \,\, x\in \mu_{315},\\ {\rm Tr}_1^{12}(\omega^{5}x^{63}), &{\rm otherwise} \end{array} \right.$$ is a bent function over $\mathbb{F}_{2^{12}}$ and its dual is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^{12}(\omega^{824} x^{-63}), & {\rm if} \,\, x\in \mu_{315},\\ {\rm Tr}_1^{12}(\omega^{5} x^{-63}), &{\rm otherwise} . \end{array} \right.$$ * Secondly, by setting $r=3$, we give the following result. **Corollary 2**. *Let $q=2^m$ for an odd integer $m>1$, $a_1\in{\mathbb F}_{q^2}$, $a_2\in{\mathbb F}_{q^2}^*$ and $l_i$ be positive integers satisfying $\gcd(l_i,\,q+1)=1$ for $i=1,\,2$. Then $$f(x)=\left \{\begin{array}{ll} {\rm Tr}_1^n(a_1x^{l_1(q-1)}), & {\rm if} \,\, x\in \mu_{3(q-1)},\\ {\rm Tr}_1^n(a_2x^{l_2(q-1)}), &{\rm otherwise} \end{array} \right.$$ is a bent function if and only if $$K_m(a_{2}^{q+1})=4\big((1-{\rm Tr}_1^n(a_1))(1-{\rm Tr}_1^n(a_1\theta))-(1-{\rm Tr}_1^n(a_2))(1-{\rm Tr}_1^n(a_2\theta))\big),$$ where $\theta$ is a primitive 3-rd root of unity. Moreover, the dual function of $f(x)$ is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^n(a_1 x^{l_1(1-q)}), & {\rm if} \,\, x\in \mu_{3(q-1)},\\ {\rm Tr}_1^n(a_2 x^{l_2(1-q)}), &{\rm otherwise} . \end{array} \right.$$* *Proof.* According to Theorem [Theorem 3](#thm.bent-dillon2){reference-type="ref" reference="thm.bent-dillon2"} and [\[eq.dillon2\]](#eq.dillon2){reference-type="eqref" reference="eq.dillon2"}, $f(x)$ is bent if and only if $$\label{eq.dillon3.2} \sum_{i=0}^{2}((-1)^{{\rm Tr}_1^n(a_1\theta^{i})}-(-1)^{{\rm Tr}_1^n(a_{2}\theta^{i})})=K_m(a_{2}^{q+1}),$$ where $\theta$ is a primitive $3$-rd root of unity. Note that $\theta^2+\theta+1=0$, which means ${\rm Tr}_1^n(a_j\theta^{2})={\rm Tr}_1^n(a_j)+{\rm Tr}_1^n(a_j\theta)$ for $j=1,\,2$.
Observe that for two Boolean functions $f_1(x)$ and $f_2(x)$, $$(-1)^{f_1(x)}+(-1)^{f_2(x)}+(-1)^{f_1(x)+f_2(x)}=4(1-f_1(x))(1-f_2(x))-1.$$ This implies $$\sum_{i=0}^{2}(-1)^{{\rm Tr}_1^n(a_j\theta^{i})}=4(1-{\rm Tr}_1^n(a_j))(1-{\rm Tr}_1^n(a_j\theta))-1$$ for $j=1,\,2$. The desired result then follows from [\[eq.dillon3.2\]](#eq.dillon3.2){reference-type="eqref" reference="eq.dillon3.2"}. This completes the proof. ◻ **Example 2**. *Let $q=2^5$, $l_1=l_2=1$. According to Magma, there are 218715 pairs $(a_1,\,a_2)$ with $a_1\ne a_2$ such that $f(x)$ in Corollary [Corollary 2](#cor.bent-dillon3.2){reference-type="ref" reference="cor.bent-dillon3.2"} is bent over $\mathbb{F}_{2^{10}}$. For example, take $a_1=1$ and $a_2=\omega^{9}$, where $\omega$ is a primitive element of $\mathbb{F}_{2^{10}}$. Then $\omega^{341}$ is a primitive 3-rd root of unity. It can be verified that $K_5(\omega^{33})=-4$, ${\rm Tr}_1^n(1)=0$, ${\rm Tr}_1^n(\omega^{341})=1$, ${\rm Tr}_1^n(\omega^{9})={\rm Tr}_1^n(\omega^{350})=0$. Corollary [Corollary 2](#cor.bent-dillon3.2){reference-type="ref" reference="cor.bent-dillon3.2"} now establishes that $$f(x)=\left \{\begin{array}{ll} {\rm Tr}_1^{10}(x^{31}), & {\rm if} \,\, x\in \mu_{93},\\ {\rm Tr}_1^{10}(\omega^{9}x^{31}), &{\rm otherwise} \end{array} \right.$$ is a bent function over $\mathbb{F}_{2^{10}}$ and its dual is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^{10}(x^{-31}), & {\rm if} \,\, x\in \mu_{93},\\ {\rm Tr}_1^{10}(\omega^{9}x^{-31}), &{\rm otherwise} . \end{array} \right.$$* ## Bent functions from Niho functions {#cons1.2} Secondly, we focus on the branch functions with Niho exponents. Without loss of generality, we assume that $r_i=s_i(q-1)+1$ for all $i\in\mathbb{Z}_{q+1}$. In this case, we consider $R_0=\emptyset$ and $R_1=\mathbb{Z}_{q+1}$. Then the second main result on bent functions follows from Theorem [Theorem 1](#thm.wf-mul){reference-type="ref" reference="thm.wf-mul"}. **Theorem 4**.
*Let $a_i\in{\mathbb F}_{q^2}$ and $s_i$ be integers with $0\leq s_i\leq q$ for $i\in\mathbb{Z}_{q+1}$. Define $$f(x)={\rm Tr}_1^n(a_ix^{s_i(q-1)+1}),\,\,{\rm if} \,\, x\in u^i\mathbb{F}_{q}^*,\,i\in\{\infty\}\cup\mathbb{Z}_{q+1}.$$ Then for any $b\in{\mathbb F}_{q^2}$, $\widehat{f}(b)=q(\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)-1)$, where $$\label{Ti-b-Niho} T_{i}(b)=\left \{\begin{array}{ll} 1, & {\rm if} \,\, bu^i+b^qu^{-i}=\alpha_i,\\ 0, &{\rm otherwise}. \end{array} \right.$$ with $\alpha_i=a_iu^{i(1-2s_i)}+a_i^qu^{i(2s_i-1)}$. Moreover, $f(x)$ is bent if and only if $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)=0$ or $2$.* We first give a simple case of the above construction. **Theorem 5**. *Let $a_i\in{\mathbb F}_{q^2}$ and $s_i$ be integers with $0\leq s_i\leq q$ satisfying $a_iu^{i(1-2s_i)}+a_i^qu^{i(2s_i-1)}=c$ for any $i\in\mathbb{Z}_{q+1}$, where $c$ is a fixed element in ${\mathbb F}_{q}^*$. Then $f(x)$ defined as in Theorem [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"} is bent and its dual is ${\rm Tr}_1^m(c^{-2}x^{q+1})+1$.* *Proof.* In this case, one knows $\alpha_i=c$ for any $i\in\mathbb{Z}_{q+1}$, and then for $b\in{\mathbb F}_{q^2}$, $T_{i}(b)$ given by [\[Ti-b-Niho\]](#Ti-b-Niho){reference-type="eqref" reference="Ti-b-Niho"} turns into $$T_{i}(b)=\left \{\begin{array}{ll} 1, & {\rm if} \,\, bu^i+b^qu^{-i}=c,\\ 0, &{\rm otherwise}. \end{array} \right.$$ Next we determine $T_{i}(b)$ for any $b\in{\mathbb F}_{q^2}$. It can be verified that $T_i(0)=0$ for any $i\in\mathbb{Z}_{q+1}$ due to $c\ne 0$. Hence $\sum_{i\in\mathbb{Z}_{q+1}}T_i(0)=0$ and then $\widehat{f}(0)=-q$ according to Theorem [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"}. For $b\in{\mathbb F}_{q^2}^*$, $T_{i}(b)=1$ if and only if $u^{2i}+b^{-1}c u^i+b^{q-1}=0$. Note that the equation $$z^2+b^{-1}c z+b^{q-1}=0$$ has 2 (resp. 0) solutions in $\mu_{q+1}$ if ${\rm Tr}_1^m(\frac{b^{q-1}}{(b^{-1}c)^2})={\rm Tr}_1^m(b^{q+1}c^{-2})=1$ (resp.
${\rm Tr}_1^m(b^{q+1}c^{-2})=0$); indeed, this follows from Lemma [\[lem.eq.x\^2\]](#lem.eq.x^2){reference-type="ref" reference="lem.eq.x^2"} because ${\rm Tr}_1^n(\frac{b^{q-1}}{(b^{-1}c)^2})=0$ and $b^{q-1}=(b^{-1}c)^{1-q}$ due to $c\in{\mathbb F}_{q}^*$. From this fact we deduce that either 2 or 0 of the $T_i$'s equal 1 and the others equal 0. By Theorem [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"}, $\widehat{f}(b)=q(\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)-1)=q$ if ${\rm Tr}_1^m(b^{q+1}c^{-2})=1$, or $\widehat{f}(b)=q(\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)-1)=-q$ if ${\rm Tr}_1^m(b^{q+1}c^{-2})=0$. Together with $\widehat{f}(0)=-q$, we conclude $\widehat{f}(b)=q(-1)^{{\rm Tr}_1^m(b^{q+1}c^{-2})+1}$ for any $b\in{\mathbb F}_{q^2}$. This completes the proof. ◻ **Remark 2**. *It can be seen that the bent function $f(x)$ given in Theorem [Theorem 5](#thm.bent-Niho1){reference-type="ref" reference="thm.bent-Niho1"} is a dual of the Kasami bent function. On the other hand, set $a_i=a$ with $a\in{\mathbb F}_{q^2}\backslash{\mathbb F}_{q}$ and $s_i=2^{m-1}+1$ for all $i\in\mathbb{Z}_{q+1}$; then $a_iu^{i(1-2s_i)}+a_i^qu^{i(2s_i-1)}=a+a^q=c\in{\mathbb F}_{q}^*$. In this case, $f(x)={\rm Tr}_1^n(ax^{2^{-1}(q+1)})={\rm Tr}_1^m(c^2 x^{q+1})$ and Theorem [Theorem 5](#thm.bent-Niho1){reference-type="ref" reference="thm.bent-Niho1"} implies that $f(x)$ is bent and its dual is ${\rm Tr}_1^m(c^{-2}x^{q+1})+1$. This coincides with the result obtained by Mesnager in [@M].* For the remaining cases, we obtain the following general result. **Theorem 6**. *Let $a_i\in{\mathbb F}_{q^2}$, $s_i$ be integers with $0\leq s_i\leq q$ and denote $\alpha_i=a_iu^{(1-2s_i)i}+a_i^qu^{(2s_i-1)i}$, where $i\in\mathbb{Z}_{q+1}$.
Then $f(x)$ defined as in Theorem [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"} is bent if and only if one of the following conditions is satisfied:* *(1) $\alpha_i\ne 0$ for all $i\in\mathbb{Z}_{q+1}$, and for any $t\in\mathbb{Z}_{q+1}$, each element in the multiset $\{\{\frac{\alpha_i}{u^i+u^{2t-i}}: i\in\mathbb{Z}_{q+1}\backslash\{t\}\}\}$ has multiplicity 2.* *(2) $\alpha_{i_1}=\alpha_{i_2}= 0$ for two distinct integers $i_1,i_2\in\mathbb{Z}_{q+1}$ and $\alpha_i\ne 0$ for $i\in\mathbb{Z}_{q+1}\backslash\{i_1,\,i_2\}$; all elements in the set $\{\frac{\alpha_i}{u^i+u^{2t-i}}: i \in \mathbb{Z}_{q+1}\backslash\{i_1,\,i_2\}\}$ are distinct for $t=i_1,i_2$; and each element in the multiset $\{\{\frac{\alpha_i}{u^i+u^{2t-i}}: i\in\mathbb{Z}_{q+1}\backslash\{t,\,i_1,\,i_2\}\}\}$ has multiplicity 2 for any $t\in\mathbb{Z}_{q+1}\backslash\{i_1,\,i_2\}$.* *Proof.* From Theorem [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"}, to prove $f(x)$ is bent, it suffices to prove $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)=0$ or $2$ for any $b\in{\mathbb F}_{q^2}$, where $T_i(b)$ is defined by [\[Ti-b-Niho\]](#Ti-b-Niho){reference-type="eqref" reference="Ti-b-Niho"}. We shall distinguish the cases $b=0$ and $b\ne0$ as follows. If $b=0$, then [\[Ti-b-Niho\]](#Ti-b-Niho){reference-type="eqref" reference="Ti-b-Niho"} yields $T_i(0)=1$ if and only if $\alpha_i=0$. This indicates $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(0)=\#\{i\in\mathbb{Z}_{q+1}: \alpha_i=0\}.$ Thus $f(x)$ is bent only if there are exactly 0 or 2 $\alpha_i$'s equal 0. If $b^{q-1}=u^{2t}$ for some $t\in\mathbb{Z}_{q+1}$, then $T_i(b)$ given by [\[Ti-b-Niho\]](#Ti-b-Niho){reference-type="eqref" reference="Ti-b-Niho"} turns into $$\label{Ti-b-Niho-1} T_{i}(b)=\left \{\begin{array}{ll} 1, & {\rm if} \,\, b(u^i+u^{2t-i})=\alpha_i,\\ 0, &{\rm otherwise}. \end{array} \right.$$ Note that $u^i+u^{2t-i}=0$ if and only if $i=t$. 
We consider the value of $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)$ as follows: \(1\) $\alpha_i\ne 0$ for any $i\in \mathbb{Z}_{q+1}$. In this case, one readily sees that $T_{t}(b)=0$ and $$\label{Ti-b-1} T_{i}(b)=\left \{\begin{array}{ll} 1, & {\rm if} \,\, b=\frac{\alpha_i}{u^i+u^{2t-i}},\\ 0, &{\rm otherwise} \end{array} \right.$$ for $i\in\mathbb{Z}_{q+1}\backslash\{t\}$ according to [\[Ti-b-Niho-1\]](#Ti-b-Niho-1){reference-type="eqref" reference="Ti-b-Niho-1"}. Then we can deduce that $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)=\#\{i\in\mathbb{Z}_{q+1}\backslash\{t\}: \frac{\alpha_i}{u^i+u^{2t-i}}=b\}$, which implies that $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)=0$ or $2$ if and only if each element in the multiset $\{\{\frac{\alpha_i}{u^i+u^{2t-i}}: i\in\mathbb{Z}_{q+1}\backslash\{t\}\}\}$ has multiplicity 2. \(2\) There are two distinct integers $i_1,\, i_2\in\mathbb{Z}_{q+1}$ such that $\alpha_{i_1}=\alpha_{i_2}= 0$. If $t=i_1$, then [\[Ti-b-Niho-1\]](#Ti-b-Niho-1){reference-type="eqref" reference="Ti-b-Niho-1"} yields $T_{i_1}(b)=1$, $T_{i_2}(b)=0$, and the other $T_i(b)$'s are given by [\[Ti-b-1\]](#Ti-b-1){reference-type="eqref" reference="Ti-b-1"} due to $u^i+u^{2t-i}\ne0$ for $i\ne t$. Hence $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)=1+\#\{i\in\mathbb{Z}_{q+1}\backslash\{i_1,\,i_2\}: \frac{\alpha_i}{u^i+u^{2t-i}}=b\}>0$, so this sum must equal $2$, which implies $\#\{i\in\mathbb{Z}_{q+1}\backslash\{i_1,\,i_2\}: \frac{\alpha_i}{u^i+u^{2t-i}}=b\}=1$. It can be verified that $(\frac{\alpha_i}{u^i+u^{2t-i}})^{q-1}=u^{2t}$, and that there are exactly $q-1$ elements $b\in{\mathbb F}_{q^2}^*$ satisfying $b^{q-1}=u^{2t}$. Thus $\#\{i\in\mathbb{Z}_{q+1}\backslash\{i_1,\,i_2\}: \frac{\alpha_i}{u^i+u^{2t-i}}=b\}=1$ for any $b$ with $b^{q-1}=u^{2t}$ if and only if all elements in the set $\{\frac{\alpha_i}{u^i+u^{2i_1-i}}: i \in \mathbb{Z}_{q+1}\backslash\{i_1,\,i_2\}\}$ are distinct. The case $t=i_2$ is similar.
If $t\ne i_1$ and $t\ne i_2$, then [\[Ti-b-Niho-1\]](#Ti-b-Niho-1){reference-type="eqref" reference="Ti-b-Niho-1"} indicates $T_{t}(b)=T_{i_1}(b)=T_{i_2}(b)=0$ and the other $T_i(b)$'s are given by [\[Ti-b-1\]](#Ti-b-1){reference-type="eqref" reference="Ti-b-1"}. Thus $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)=\#\{i\in\mathbb{Z}_{q+1}\backslash\{t,\,i_1,\,i_2\}: \frac{\alpha_i}{u^i+u^{2t-i}}=b\}$, which implies that $\sum_{i\in\mathbb{Z}_{q+1}}T_{i}(b)=0$ or $2$ if and only if each element in the multiset $\{\{\frac{\alpha_i}{u^i+u^{2t-i}}: i\in\mathbb{Z}_{q+1}\backslash\{t,\,i_1,\,i_2\}\}\}$ has multiplicity 2. This completes the proof. ◻ We do not have an efficient way to construct explicit classes of such bent functions. However, many bent examples can be found via a Magma search, and some of them are given below. **Example 3**. *Let $q=2^3$ and let $\omega$ be a primitive element of $\mathbb{F}_{q^2}$. According to Magma, it can be verified that the choices $s_i=4$ for all $i\in\mathbb{Z}_9$, $a_i=1$ for $i=0,2,4$, $a_7=\omega^5$ and $a_i=\omega^{2}$ otherwise, and $s_i=2$ for all $i\in\mathbb{Z}_9$, $a_i=1$ for $i=2,4,7,8$, $a_i=\omega^6$ for $i=1,3$ and $a_i=\omega^{2}$ otherwise, satisfy the conditions (1) and (2) in Theorem [Theorem 6](#thm.bent-Niho2){reference-type="ref" reference="thm.bent-Niho2"} respectively.
Then Theorem [Theorem 6](#thm.bent-Niho2){reference-type="ref" reference="thm.bent-Niho2"} establishes that $$f(x)=\left \{\begin{array}{lll} {\rm Tr}_1^6( x^{29}), & {\rm if} \,\, x\in \omega^{7i}{\mathbb F}_{q}^*,\,\,i \in\{0,2,4\},\\ {\rm Tr}_1^6(\omega^5 x^{29}), & {\rm if} \,\, x\in \omega^{7i}{\mathbb F}_{q}^*,\,\,i \in\{7\},\\ {\rm Tr}_1^6(\omega^{2}x^{29}), &{\rm otherwise} \end{array} \right.$$ and $$f(x)=\left \{\begin{array}{lll} {\rm Tr}_1^6( x^{15}), & {\rm if} \,\, x\in \omega^{7i}{\mathbb F}_{q}^*,\,\,i \in\{2,4,7,8\},\\ {\rm Tr}_1^6(\omega^6 x^{15}), & {\rm if} \,\, x\in \omega^{7i}{\mathbb F}_{q}^*,\,\,i \in\{1,3\},\\ {\rm Tr}_1^6(\omega^{2}x^{15}), &{\rm otherwise} \end{array} \right.$$ are bent functions over $\mathbb{F}_{2^6}$.* # Bent functions from additive cyclotomic mappings {#cons2} In this section, we construct bent functions from additive cyclotomic mappings. Let $p$ be a prime and $d,\,n$ be positive integers such that $d \mid p^n$. Let $C$ be a subgroup of index $d$ of the additive group of $\mathbb{F}_{p^n}$. Then the cosets of $\mathbb{F}_{p^n}$ modulo $C$ are of the form $C_i:=v_i+C$ for $i\in\mathbb{Z}_{d}$, where $v_i$ is a representative of $C_i$. It can be seen that $\mathbb{F}_{p^n}=\bigcup_{i=0}^{d-1}C_i$ and $C_i\cap C_j=\emptyset$ for $i\ne j$. Let $(a_0,a_1,\cdots,a_{d-1})\in\mathbb{F}_{p^n}^{d}$ and $r_0,r_1,\cdots,r_{d-1}$ be $d$ non-negative integers. Define a cyclotomic mapping of $\mathbb{F}_{p^n}$ of index $d$ as follows: $$\label{F.add} F(x)=a_i x^{r_i}, {\rm if} \,\, x\in C_i,\,i\in\mathbb{Z}_{d}.$$ This mapping is defined with respect to the additive cosets, and we call it an additive cyclotomic mapping. In what follows, we investigate the bentness of functions $f(x)={\rm Tr}_1^n(F(x))$.
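As a quick illustration of this coset partition (a Python sketch of our own, not part of the construction; the field model $\mathbb{F}_{2^6}$ built from the irreducible polynomial $x^6+x+1$ and the choice $C=\mathbb{F}_{2^3}$ are assumptions for the example), note that in a polynomial basis, addition in $\mathbb{F}_{2^n}$ is bitwise XOR, so the additive cosets $v_i+C$ are easy to enumerate:

```python
def gf_mul(a, b, poly, n):
    """Multiply in GF(2^n) in a polynomial basis; `poly` encodes the irreducible
    polynomial as a bitmask including the x^n term (here x^6 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly
    return r

def gf_pow(a, e, poly, n):
    """Square-and-multiply exponentiation in GF(2^n)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, poly, n)
        a = gf_mul(a, a, poly, n)
        e >>= 1
    return r

N, POLY = 6, 0b1000011                       # GF(2^6) via x^6 + x + 1
# Subfield F_8 = fixed points of the Frobenius power x -> x^8.
C = {x for x in range(1 << N) if gf_pow(x, 8, POLY, N) == x}

reps, covered = [], set()
for x in range(1 << N):
    if x not in covered:                     # x is a fresh coset representative v_i
        reps.append(x)
        covered |= {x ^ y for y in C}        # additive coset v_i + C (addition is XOR)
```

The greedy sweep picks one representative per coset and confirms that the $d=2^{n-k}=8$ cosets partition $\mathbb{F}_{2^6}$.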
For any $b\in\mathbb{F}_{p^n}$, a calculation gives $$\begin{aligned} \widehat{f}(b) &=\sum\nolimits_{x \in \mathbb{F}_{p^n}}\xi_p^{{\rm Tr}_1^n(F(x))-{\rm Tr}_1^n(bx)} \\ &=\sum_{i\in\mathbb{Z}_{d}}\sum_{x\in C_i}\xi_{p}^{{\rm Tr}_1^n(a_i x^{r_i})-{\rm Tr}_1^n(bx)} \\ &=\sum\nolimits_{i\in\mathbb{Z}_{d}}S_{i}(b), \end{aligned}$$ where $S_{i}(b)$ is defined as in [\[Sij\]](#Sij){reference-type="eqref" reference="Sij"}. Therefore, to calculate $\widehat{f}(b)$, it suffices to determine $S_{i}(b)$ for all $i\in\mathbb{Z}_{d}$. In the case of $C=\mathbb{F}_{2^k}$ and $r_i=2^{t_i}+1$ with $k|t_i$ and $k|n$, one knows $d=2^{n-k}$ and $S_{i}(b)$ defined by [\[Sij\]](#Sij){reference-type="eqref" reference="Sij"} becomes $$\begin{aligned}S_{i}(b) &=\sum\nolimits_{x\in v_i+\mathbb{F}_{2^k}}(-1)^{{\rm Tr}_1^n(a_i x^{2^{t_i}+1})+{\rm Tr}_1^n(bx)} \\ &=\sum\nolimits_{y \in \mathbb{F}_{2^k}}(-1)^{{\rm Tr}_1^n\big(a_i (v_i+y)^{2^{t_i}+1}\big)+{\rm Tr}_1^n(b(v_i+y))} \\ &=(-1)^{{\rm Tr}_1^n(a_i v_i^{2^{t_i}+1}+bv_i)}\sum\nolimits_{y \in \mathbb{F}_{2^k}}(-1)^{{\rm Tr}_1^k({\rm Tr}_k^n(a_iv_i^{2^{t_i}}+a_iv_i+a_i^{2^{-1}}+b)y)}, \end{aligned}$$ which gives $$S_{i}(b)=\left \{\begin{array}{ll} 2^k(-1)^{{\rm Tr}_1^n(a_i v_i^{2^{t_i}+1}+bv_i)}, & {\rm if} \,\, {\rm Tr}_k^n(a_i(v_i^{2^{t_i}}+v_i)+a_i^{2^{-1}}+b)=0,\\ 0, &{\rm otherwise}. \end{array} \right.$$ We then give the Walsh transform of $f(x)$ in the following theorem. **Theorem 7**. *Let $n$, $k$ and $t_i$ be positive integers satisfying $k|n$ and $k| t_i$ for $i\in\mathbb{Z}_{2^{n-k}}$. Let $$f(x)={\rm Tr}_1^n(a_ix^{2^{t_i}+1}),\,\, {\rm if} \,\, x\in v_i+\mathbb{F}_{2^k},\,i\in\mathbb{Z}_{2^{n-k}},$$ where $a_i\in\mathbb{F}_{2^n}$ and the $v_i$ are the representatives of the cosets of $\mathbb{F}_{2^n}$ modulo $\mathbb{F}_{2^k}$.
Then for any $b\in\mathbb{F}_{2^n}$, $$\widehat{f}(b)=2^k\sum_{i\in E(b)}(-1)^{{\rm Tr}_1^n(a_i v_i^{2^{t_i}+1}+bv_i)},$$ where $E(b)$ is defined by $$\label{Eb} E(b):=\{i\in\mathbb{Z}_{2^{n-k}}: {\rm Tr}_k^n(b)={\rm Tr}_k^n(a_i(v_i^{2^{t_i}}+v_i)+a_i^{2^{-1}})\}.$$* By using Kasami functions, we generate bent functions from a class of additive cyclotomic mappings of index $q$ over $\mathbb{F}_{q^2}$. **Theorem 8**. *Let $\xi$ be a primitive element of $\mathbb{F}_{q}$ and define $\xi^{\infty}=0$, where $q=2^m$ and $m$ is a positive integer. Define $$f(x)={\rm Tr}_1^m(\alpha_ix^{q+1}),\,\,{\rm if} \,\, x\in N_i,\,i\in\{\infty\}\cup\mathbb{Z}_{q-1},$$ where $\alpha_{i}\in{\mathbb F}_{q}$ and $N_i=\{x\in \mathbb{F}_{q^2}: x^q+x=\xi^i\}$ for $i\in\{\infty\}\cup\mathbb{Z}_{q-1}$. If $$\{\alpha_{i}^2\xi^{2i}+\alpha_{i}: i\in\{\infty,0,\cdots,q-2\}\}={\mathbb F}_{q},$$ then $f(x)$ is a bent function over ${\mathbb F}_{q^2}$. Moreover, for any $b\in{\mathbb F}_{q^2}$, the Walsh transform of $f(x)$ is $$\widehat{f}(b)=q(-1)^{\varphi_{t}(b)}, \,\,{\rm if} \,\, b^q+b=\alpha_t\xi^t+\alpha_t^{2^{-1}},\,\,t\in\{\infty\}\cup\mathbb{Z}_{q-1}$$ with $$\label{phi-b} \varphi_{t}(b)=\left \{\begin{array}{ll} {\rm Tr}_1^m(\xi^tb), & {\rm if} \,\, \alpha_t=0,\\ {\rm Tr}_1^m(\alpha_t^{-1}b^{q+1})+1, &{\rm if} \,\, \alpha_t\ne 0. \end{array} \right.$$* *Proof.* In this case, $k=t_i=m$, $\alpha_i=a_i+a_i^q$ for $i\in\mathbb{Z}_{q-1}$ and $\alpha_{\infty}=a_{q-1}+a_{q-1}^q$. Then we calculate the values of $v_i$ for $i\in\mathbb{Z}_{q}$ appearing in Theorem [Theorem 7](#thm.bent-add){reference-type="ref" reference="thm.bent-add"}. Suppose that $x_0$ is a solution of the equation $x^q+x=1$. It can be verified that $\xi^ix_0$ is a solution of the equation $x^q+x=\xi^i$. This allows us to write $N_i=\{x\in \mathbb{F}_{q^2}: x^q+x=\xi^i\}=\xi^ix_0+\mathbb{F}_{q}$ for each $i\in\{\infty\}\cup\mathbb{Z}_{q-1}$. That means $v_i=\xi^ix_0$ for $i\in\mathbb{Z}_{q-1}$ and $v_{q-1}=\xi^{\infty}x_0=0$.
Combining with the values of $v_i$, for any $b\in{\mathbb F}_{q^2}$, $E(b)$ given by [\[Eb\]](#Eb){reference-type="eqref" reference="Eb"} turns into $$\label{Eb-kasami} E(b)=\{i\in\mathbb{Z}_{q}: b^q+b=\beta_i\},$$ with $$\begin{aligned} \beta_{i} &={\rm Tr}_m^n(a_i(v_i^q+v_i)+a_i^{2^{-1}}) \\ &={\rm Tr}_m^n(a_i((\xi^ix_0)^q+\xi^ix_0))+(a_i+a_i^{q})^{2^{-1}} \\ &={\rm Tr}_m^n(a_i(x_0^q+x_0))\xi^i+\alpha_i^{2^{-1}}=\alpha_i\xi^i+\alpha_i^{2^{-1}} \end{aligned}$$ for $i\in\mathbb{Z}_{q-1}$ and $$\beta_{q-1}={\rm Tr}_m^n(a_{q-1}(v_{q-1}^q+v_{q-1})+a_{q-1}^{2^{-1}})=(a_{q-1}+a_{q-1}^q)^{2^{-1}} =\alpha_{\infty}\xi^{\infty}+\alpha_{\infty}^{2^{-1}}$$ due to $\alpha_i=a_i+a_i^q$ for $i\in\mathbb{Z}_{q-1}$ and $\alpha_{\infty}=a_{q-1}+a_{q-1}^q$. Therefore $\{\beta_{i}: i\in\mathbb{Z}_{q}\}=\{\alpha_{i}\xi^{i}+\alpha_{i}^{2^{-1}}: i\in\{\infty,0,\cdots,q-2\}\}={\mathbb F}_{q}$, which implies that for any $b\in{\mathbb F}_{q^2}$ there exists a unique $t\in\{\infty,0,\cdots,q-2\}$ such that $b^q+b=\alpha_t\xi^t+\alpha_t^{2^{-1}}$. Moreover, in the case of $b^q+b=\alpha_t\xi^t+\alpha_t^{2^{-1}}$, $E(b)=\{t\}$ and from Theorem [Theorem 7](#thm.bent-add){reference-type="ref" reference="thm.bent-add"}, one derives $$\begin{aligned}\widehat{f}(b)=&q(-1)^{{\rm Tr}_1^n(a_t (\xi^tx_0)^{q+1}+b\xi^tx_0)} \\ =&q(-1)^{{\rm Tr}_1^m(\alpha_t (\xi^tx_0)^{q+1}+(bx_0+(bx_0)^q)\xi^t)} \\ =&q(-1)^{{\rm Tr}_1^m(\alpha_t \xi^{2t}x_0^{q+1}+((b+b^q)x_0+b^q)\xi^t)} \end{aligned}$$ due to $x_0^q+x_0=1$ and $\xi\in \mathbb{F}_{q}$. Denote $\varphi_{t}(b):={\rm Tr}_1^m(\alpha_t \xi^{2t}x_0^{q+1}+((b+b^q)x_0+b^q)\xi^t)$. Then $\widehat{f}(b)=q(-1)^{\varphi_{t}(b)}$, which implies that $f(x)$ is bent. Furthermore, we claim that $\varphi_{t}(b)$ is given by [\[phi-b\]](#phi-b){reference-type="eqref" reference="phi-b"}. If $\alpha_t=0$, then $b^q+b=0$ and one readily gets $\varphi_{t}(b)={\rm Tr}_1^m(\xi^tb)$.
For $\alpha_t\ne 0$, combining with the facts $b^q+b=\alpha_t\xi^t+\alpha_t^{2^{-1}}$ and $x_0^q+x_0=1$, a calculation gives $$\begin{aligned} \varphi_{t}(b)=&{\rm Tr}_1^m\big(\alpha_t\xi^{2t}x_0(x_0+1)+\xi^t\big((\alpha_t\xi^t+\alpha_t^{2^{-1}})x_0+(\alpha_t\xi^t +\alpha_t^{2^{-1}}+b)\big)\big) \\ =&{\rm Tr}_1^m(\alpha_t\xi^{2t}x_0^2+\alpha_t^{2^{-1}}\xi^{t}x_0+\alpha_t\xi^{2t}+\alpha_t^{2^{-1}}\xi^{t} +b\xi^t) \\ =&\alpha_t^{2^{-1}}\xi^{t}(x_0^q+x_0)+\alpha_t^{2^{-1}}(\xi^{t}+\xi^{t})+ \sum\nolimits_{i=0}^{m-1}(b\xi^t)^{2^i} \\=&\alpha_t^{2^{-1}}\xi^{t}+\sum\nolimits_{i=0}^{m-1}(b\xi^t)^{2^i}. \end{aligned}$$ On the other hand, we have $$\begin{aligned} {\rm Tr}_1^m(\alpha_t^{-1}b^{q+1}) =&{\rm Tr}_1^m(\alpha_t^{-1}b(b+\alpha_t\xi^t+\alpha_t^{2^{-1}})) \\ =&{\rm Tr}_1^m(\alpha_t^{-1}b^2+\alpha_t^{-2^{-1}}b+b\xi^t) \\ =&\alpha_t^{-2^{-1}}(b^q+b)+\sum\nolimits_{i=0}^{m-1}(b\xi^t)^{2^i} \\=&\alpha_t^{-2^{-1}}(\alpha_t\xi^t+\alpha_t^{2^{-1}})+\sum\nolimits_{i=0}^{m-1}(b\xi^t)^{2^i} \\=&\alpha_t^{2^{-1}}\xi^{t}+\sum\nolimits_{i=0}^{m-1}(b\xi^t)^{2^i}+1. \end{aligned}$$ Therefore, $\varphi_{t}(b)={\rm Tr}_1^m(\alpha_t^{-1}b^{q+1})+1$. This completes the proof. ◻ **Remark 3**. *If one takes $\alpha_{\infty}=\alpha_0=\cdots=\alpha_{q-2}=a\in{\mathbb F}_{q}^*$, then $f(x)$ in Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"} is reduced to the monomial case, that is, $f(x)={\rm Tr}_1^m(a x^{q+1})$. It can be verified that $a^2\xi^{2i}+a$ are $q$ distinct elements in ${\mathbb F}_{q}$ when $i$ runs over $\{\infty,0,\cdots,q-2\}$. Indeed, if there were distinct $i,j\in\{\infty,0,\cdots,q-2\}$ such that $a^2\xi^{2i}+a=a^2\xi^{2j}+a$, then $\xi^{2(i-j)}=1$, which is impossible since $\gcd(2,\,q-1)=1$ and $i-j\ne 0$.
Thus $\{a^2\xi^{2i}+a: i\in\{\infty,0,\cdots,q-2\}\}={\mathbb F}_{q}.$ Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"} gives that $f(x)$ is bent with the dual function ${\rm Tr}_1^m(a^{-1}x^{q+1})+1$, which is consistent with the result obtained by Mesnager in [@M].* Note that it is easy to find parameters satisfying the condition given in Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"} such that $f(x)$ is bent. To show that, we provide an equivalent condition which helps us to explicitly construct bent functions. Without loss of generality, assume that $\alpha_{\infty}\ne0$. It can be verified that the condition $\{\alpha_{i}^2\xi^{2i}+\alpha_{i}: i\in\mathbb{Z}_{q-1}\}\bigcup\{\alpha_{\infty}\}={\mathbb F}_{q}$ is equivalent to $$\label{kasami.con2} \{\alpha_{\infty}^2\xi^{2i}+\alpha_{\infty}: i\in\mathbb{Z}_{q-1}\}=\{\alpha_{i}^2\xi^{2i}+\alpha_{i}: i\in\mathbb{Z}_{q-1}\}.$$ By using it, we give the following construction of bent functions for the case $\#\{\alpha_i: i\in\{\infty\}\cup\mathbb{Z}_{q-1}\}\leq 2$. Precisely, let $\alpha_i=a$ for $i\in\mathbb{Z}\subseteq \mathbb{Z}_{q-1}$ and $\alpha_i=c\ne0$ for the other $i$. Then [\[kasami.con2\]](#kasami.con2){reference-type="eqref" reference="kasami.con2"} becomes $$\label{kasami.con3} \{c^2\xi^{2i}+c: i\in\mathbb{Z}\}=\{a^2\xi^{2i}+a: i\in\mathbb{Z}\}.$$ We shall present the bent functions for two cases: $a=0$ and $a\ne0$. In the case of $a=0$, [\[kasami.con3\]](#kasami.con3){reference-type="eqref" reference="kasami.con3"} is equivalent to $c^2\xi^{2i}+c=0$, that is, $\xi^i=c^{2^{m-1}-1}$ for a unique $i\in\mathbb{Z}_{q-1}$. This means $\alpha_i=0$ for such $i$ and $\alpha_i=c\ne0$ for the other $i$. Then the following corollary can be obtained from Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"}. **Corollary 3**. *Let $q=2^m$ and $c\in\mathbb{F}_{q}^*$, where $m$ is a positive integer.
Then $$f(x)=\left \{\begin{array}{ll} 0, & {\rm if} \,\, x^q+x=c^{2^{m-1}-1},\\ {\rm Tr}_1^m(c x^{q+1}), &{\rm otherwise} \end{array} \right.$$ is a bent function over $\mathbb{F}_{q^2}$, and its dual is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^m(c^{2^{m-1}-1}x), & {\rm if} \,\, x\in{\mathbb F}_{q},\\ {\rm Tr}_1^m(c^{-1}x^{q+1})+1, &{\rm otherwise}. \end{array} \right.$$* If $a\ne 0$, then Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"} and [\[kasami.con3\]](#kasami.con3){reference-type="eqref" reference="kasami.con3"} give the following corollary directly. **Corollary 4**. *Let $q=2^m$ and let $a,\,c\in\mathbb{F}_{q}^*$ satisfy [\[kasami.con3\]](#kasami.con3){reference-type="eqref" reference="kasami.con3"}. Denote $N:=\{\xi^{i}: i \in\mathbb{Z} \}$, where $\xi$ is a primitive element of $\mathbb{F}_{q}$ and $\mathbb{Z}$ is a subset of $\mathbb{Z}_{q-1}$. Then $$f(x)=\left \{\begin{array}{ll} {\rm Tr}_1^m(a x^{q+1}), & {\rm if} \,\,x^q+x\in N,\\ {\rm Tr}_1^m(c x^{q+1}), &{\rm otherwise}, \end{array} \right.$$ is a bent function over $\mathbb{F}_{q^2}$ and its dual is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^m(a^{-1}x^{q+1})+1, & {\rm if} \,\, x^q+x=a\xi^i+a^{2^{-1}},\,i\in\mathbb{Z},\\ {\rm Tr}_1^m(c^{-1}x^{q+1})+1, &{\rm otherwise}. \end{array} \right.$$* It is clear that any $c\in{\mathbb F}_{q}^*$ gives a bent function as in Corollary [Corollary 3](#cor.kasami0){reference-type="ref" reference="cor.kasami0"}. Thus we only give an example for the construction in Corollary [Corollary 4](#cor.kasami1){reference-type="ref" reference="cor.kasami1"}. **Example 4**. *Let $q=2^4$ and $N=\{\xi^{2}\}$, where $\xi$ is a primitive element of $\mathbb{F}_{2^4}$. According to Magma, there are 14 pairs $(a,\,c)$ with $a\ne c$ such that $f(x)$ in Corollary [Corollary 4](#cor.kasami1){reference-type="ref" reference="cor.kasami1"} is bent over $\mathbb{F}_{2^{8}}$.
For example, take $a=\xi^{9}$ and $c=\xi^{2}$. It can be verified that $a^2\xi^{4}+a=c^2\xi^{4}+c=1$. Corollary [Corollary 4](#cor.kasami1){reference-type="ref" reference="cor.kasami1"} now establishes that $$f(x)=\left \{\begin{array}{ll} {\rm Tr}_1^4(\xi^{9} x^{17}), & {\rm if} \,\, x^q+x=\xi^{2},\\ {\rm Tr}_1^4(\xi^{2}x^{17}), &{\rm otherwise} \end{array} \right.$$ is a bent function over $\mathbb{F}_{2^8}$, and its dual is $$\widetilde{f}(x)=\left \{\begin{array}{ll} {\rm Tr}_1^4(\xi^{6}x^{17})+1, & {\rm if} \,\, x^q+x=1,\\ {\rm Tr}_1^4(\xi^{13}x^{17})+1, &{\rm otherwise}. \end{array} \right.$$* # Switching between cyclotomic form and polynomial form of bent functions {#sec.poly} Boolean functions $f,\,f':\mathbb{F}_{2^n}\rightarrow \mathbb{F}_2$ are extended-affine equivalent (EA-equivalent) if there exist an affine permutation $L$ of $\mathbb{F}_{2^n}$ and an affine function $l:\mathbb{F}_{2^n}\rightarrow\mathbb{F}_2$ such that $f'(x)=(f\circ L)(x)+l(x)$. A class of bent functions is called complete if it is globally invariant under EA-equivalence; the completed version of a class is the set of all functions EA-equivalent to the functions in the class. In this section, we first investigate the polynomial form of the bent functions proposed in Section [3](#cons1){reference-type="ref" reference="cons1"} and Section [4](#cons2){reference-type="ref" reference="cons2"}, and then study the EA-equivalence between the proposed bent functions and known ones. The switching between multiplicative cyclotomic form and polynomial form is characterized as follows. **Lemma 3**. *([@AW; @W])[\[lem.poly\]]{#lem.poly label="lem.poly"} Let $F(x)$ be an index $d$ generalized cyclotomic mapping defined as in [\[F\]](#F){reference-type="eqref" reference="F"}. Then the polynomial form of $F(x)$ is $$F(x)=\frac{1}{d}\sum_{i,j=0}^{d-1}\omega^{-ij\cdot\frac{p^n-1}{d}}a_i x^{j\cdot\frac{p^n-1}{d}+r_i}.$$* Recall that $u=\omega^{(q-1)(2^{n-1}-1)}$, i.e., $\omega^{q-1}=u^{-2}$.
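Before turning to the polynomial forms, the Kasami-type construction of Section 4 can be checked numerically. The following Python sketch (our illustration; the paper's own computations use Magma) verifies Corollary 3 for $m=3$ and $c=1$, modelling $\mathbb{F}_{q^2}=\mathbb{F}_{2^6}$ by the irreducible polynomial $x^6+x+1$; both the language and the field model are our assumptions:

```python
def gf_mul(a, b, poly, n):
    """Multiply in GF(2^n); `poly` is the irreducible polynomial as a bitmask
    including the x^n term (0b1000011 encodes x^6 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly
    return r

def gf_pow(a, e, poly, n):
    """Square-and-multiply exponentiation in GF(2^n)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, poly, n)
        a = gf_mul(a, a, poly, n)
        e >>= 1
    return r

N, POLY = 6, 0b1000011            # F_{q^2} = GF(2^6); q = 2^m = 8, m = 3

def tr(z, k):
    """Tr_1^k(z) = z + z^2 + ... + z^{2^{k-1}}; valid when z lies in F_{2^k}."""
    t = 0
    for _ in range(k):
        t ^= z
        z = gf_mul(z, z, POLY, N)
    return t

c = 1                             # any c in F_q^*; here c^{2^{m-1}-1} = c^3 = 1

def f(x):
    # f(x) = 0 on the coset x^q + x = c^{2^{m-1}-1}, else Tr_1^m(c x^{q+1}).
    if gf_pow(x, 8, POLY, N) ^ x == gf_pow(c, 3, POLY, N):
        return 0
    return tr(gf_mul(c, gf_pow(x, 9, POLY, N), POLY, N), 3)

def walsh(b):
    """Walsh transform of f at b over GF(2^6)."""
    return sum((-1) ** (f(x) ^ tr(gf_mul(b, x, POLY, N), N)) for x in range(1 << N))
```

Running the Walsh transform over all $64$ inputs confirms that every value has absolute value $2^{n/2}=8$, i.e., $f$ is bent.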
From Lemma [\[lem.poly\]](#lem.poly){reference-type="ref" reference="lem.poly"}, we give the polynomial forms of the bent functions proposed in Section [3](#cons1){reference-type="ref" reference="cons1"}. \(1\) Dillon case: First of all, the polynomial form of $f(x)$ proposed in Theorem [Theorem 2](#thm.bent-dillon){reference-type="ref" reference="thm.bent-dillon"} is $$f(x)=\sum_{i,j=0}^{q}{\rm Tr}_1^n(u^{2ij}a_i x^{(q-1)(j+l_i)}),$$ which is a Dillon type polynomial. Note that such $f(x)$ is constant on each coset $z{\mathbb F}_{q}^*$, where $z$ ranges over $\mu_{q+1}$, and thus belongs to the $\mathcal{PS}_{ap}$ class [@D]. In fact, Dillon in his thesis [@D] shows more precisely that a Boolean function over ${\mathbb F}_{q^2}$ of the form $g(x^{q-1})$ with $g(0)=0$ is bent if and only if $g(h)=1$ for exactly $2^{m-1}$ elements $h\in\mu_{q+1}$. Obviously, in Theorem [Theorem 2](#thm.bent-dillon){reference-type="ref" reference="thm.bent-dillon"}, $f(x)=g(x^{q-1})$ with $g(x)=\sum_{i,j=0}^{q}{\rm Tr}_1^n(u^{2ij}a_i x^{j+l_i})$ and the condition $\sum\nolimits_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^n(a_iu^{-2il_i})}=1$ implies that there are exactly $2^{m-1}$ $t$'s in $\mathbb{Z}_{q+1}$ such that $g(u^{t})=1$ if $u^{tl_t}$ runs over $\mu_{q+1}$ when $t$ runs over $\mathbb{Z}_{q+1}$. Thus our construction gives a subclass of the bent functions arising from Dillon's construction. We remark that some generalizations of Dillon's construction were given later; see, for example, [@H; @LL; @N]. In fact, our construction can also explain some previous infinite classes of Dillon type bent polynomials.
For example, from Algorithm 1 given in [@AW], the cyclotomic form of a general Dillon type polynomial $f(x)={\rm Tr}_1^n(\sum_{t=0}^q \gamma_t x^{t(q-1)})$ is $$f(x)={\rm Tr}_1^n(\sum\nolimits_{t=0}^q \gamma_t u^{-2it}),\,\,{\rm if} \,\, x\in u^i\mathbb{F}_{q}^*,\,i\in\{\infty\}\cup\mathbb{Z}_{q+1}.$$ Then Theorem [Theorem 2](#thm.bent-dillon){reference-type="ref" reference="thm.bent-dillon"} yields that $f(x)$ is bent if and only if $$\sum\nolimits_{i\in\mathbb{Z}_{q+1}}(-1)^{{\rm Tr}_1^n(\sum\nolimits_{t=0}^q \gamma_t u^{-2it} )}=\sum\nolimits_{z\in\mu_{q+1}}(-1)^{{\rm Tr}_1^n(\sum_{t=0}^q \gamma_t z^{t})}=1,$$ which is consistent with the result presented by Li et al. in [@LHTK]. Although our first construction coincides with some known ones in some sense, the choice of parameters in our construction is more flexible. This can help to construct new explicit infinite classes of Dillon type polynomials. For instance, set $r=3$; then for odd $m$ and $K_m(c^{q+1})=0$, Corollary [Corollary 1](#cor.bent-dillon3.1){reference-type="ref" reference="cor.bent-dillon3.1"} generates the bent function with the polynomial form $$f(x)={\rm Tr}_1^n\big(\sum\nolimits_{i=0}^{(q+1)/3-1}\big(c\epsilon x^{(3i+l_1)(q-1)}+cx^{(3i+l_2)(q-1)}\big)+c x^{l_2(q-1)}\big).$$ For $n=4$ and $n=6,\,8$, the $2$-cyclotomic coset representatives of $k(q-1)$, $k=1,\cdots,q$, modulo $q+1$ are $1$, and $1,3$, respectively, which means all Dillon type bent polynomials are of the forms ${\rm Tr}_1^n(a x^{q-1})$ over $\mathbb{F}_{2^4}$ and ${\rm Tr}_1^n(a x^{q-1}+b x^{3(q-1)})$ over $\mathbb{F}_{2^6}$ or $\mathbb{F}_{2^8}$. Li et al. [@LHTK] have characterized the bentness of these two types of functions. For $n\geq 10$, owing to limited computer memory, we cannot verify the EA-equivalence between two bent functions. Thus the equivalence between the newly proposed Dillon type bent functions and the previously known ones is not clear, and we leave it to interested readers.
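Dillon's criterion recalled above is easy to cross-check against the definition of bentness on a small field. The Python sketch below (our illustration; the field model $\mathbb{F}_{2^6}$ via $x^6+x+1$ is an assumption) verifies, for every $a\in\mathbb{F}_{2^6}^*$, that ${\rm Tr}_1^6(ax^{q-1})$ with $q=8$ is bent exactly when ${\rm Tr}_1^6(az)=1$ for precisely $2^{m-1}=4$ of the nine elements $z\in\mu_{q+1}$:

```python
def gf_mul(a, b, poly, n):
    """Multiply in GF(2^n); `poly` encodes the irreducible polynomial x^6 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly
    return r

def gf_pow(a, e, poly, n):
    """Square-and-multiply exponentiation in GF(2^n)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, poly, n)
        a = gf_mul(a, a, poly, n)
        e >>= 1
    return r

N, POLY = 6, 0b1000011                       # GF(2^6); q = 8, m = 3
MUL = [[gf_mul(a, b, POLY, N) for b in range(64)] for a in range(64)]

def _tr1(z):                                 # absolute trace Tr_1^6
    t = 0
    for _ in range(N):
        t ^= z
        z = MUL[z][z]
    return t

TR = [_tr1(z) for z in range(64)]
POW7 = [gf_pow(x, 7, POLY, N) for x in range(64)]            # x -> x^{q-1}
mu9 = [z for z in range(1, 64) if gf_pow(z, 9, POLY, N) == 1]  # mu_{q+1}

def is_bent(a):
    """Direct Walsh-spectrum test of Tr_1^6(a x^{q-1}): all |W(b)| = 2^{n/2} = 8."""
    fv = [TR[MUL[a][POW7[x]]] for x in range(64)]
    return all(abs(sum((-1) ** (fv[x] ^ TR[MUL[b][x]]) for x in range(64))) == 8
               for b in range(64))

def dillon_count(a):
    """#{z in mu_9 : Tr_1^6(a z) = 1} -- Dillon's criterion asks this to be 2^{m-1}."""
    return sum(TR[MUL[a][z]] for z in mu9)
```

Sweeping all $a\in\mathbb{F}_{2^6}^*$ confirms that the direct Walsh test and Dillon's counting criterion agree.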
\(2\) Niho case: Secondly, the polynomial form of $f(x)$ proposed in Theorem [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"} is $$f(x)=\sum_{i,j=0}^{q}{\rm Tr}_1^n(u^{2ij}a_i x^{(q-1)(j+s_i)+1}),$$ which is a Niho type polynomial. Such $f(x)$ is linear on each element of the Desarguesian spread and thus belongs to the class $\mathcal{H}$ [@D]. From the polynomial perspective, our construction recovers some previous infinite classes of Niho type bent polynomials. For example, from Algorithm 1 given in [@AW], the cyclotomic form of a general Niho type polynomial $f(x)={\rm Tr}_1^n(\sum_{t=1}^k \gamma_t x^{s_t(q-1)+1})$ is $$f(x)={\rm Tr}_1^n((\sum\nolimits_{t=1}^k \gamma_t u^{-2s_ti})x),\,\,{\rm if} \,\, x\in u^i\mathbb{F}_{q}^*,\,i\in\{\infty\}\cup\mathbb{Z}_{q+1}.$$ Then Theorem [Theorem 4](#thm.bent-Niho){reference-type="ref" reference="thm.bent-Niho"} yields that $f(x)$ is bent if and only if $bz+b^qz^{-1}+\sum_{t=1}^k (\gamma_t z^{1-2s_t}+\gamma_t^q z^{2s_t-1})=0$ has $0$ or $2$ solutions in $\mu_{q+1}$ for any $b\in{\mathbb F}_{q^2}$, which coincides with the result presented by Leander and Kholosha in [@LK]. In terms of the equivalence, Abdukhalikov [@A] has determined all the equivalence classes of Niho type bent functions for $m\leq 6$. For instance, Example [Example 3](#eq.Niho){reference-type="ref" reference="eq.Niho"} is equivalent to ${\rm Tr}_1^3(x^{36})+{\rm Tr}_1^6(x^{22})$ over $\mathbb{F}_{2^6}$. However, for $m\geq 7$, we have not yet found an efficient way to select appropriate parameters to search for new bent functions. We encourage interested readers to construct specific infinite classes of Niho type bent functions from cyclotomic mappings. As for the additive case, the polynomial form of bent functions from additive cyclotomic mappings can be obtained using Lagrange interpolation.
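The Lagrange-interpolation step just mentioned can be made concrete. The sketch below (illustrative Python; the field model $\mathbb{F}_{2^4}$ via $x^4+x+1$ and the branch parameters are our arbitrary choices) builds an additive cyclotomic mapping of index $4$ and checks that the Lagrange interpolant $P(x)=\sum_{p}F(p)\big(1-(x-p)^{q-1}\big)$ reproduces it; expanding the powers of $(x-p)$ would then yield the polynomial coefficients:

```python
def gf_mul(a, b, poly, n):
    """Multiply in GF(2^n); `poly` encodes the irreducible polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly
    return r

def gf_pow(a, e, poly, n):
    """Square-and-multiply exponentiation in GF(2^n)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, poly, n)
        a = gf_mul(a, a, poly, n)
        e >>= 1
    return r

N, POLY = 4, 0b10011                         # GF(2^4); q = 16
C = {x for x in range(16) if gf_pow(x, 4, POLY, N) == x}     # subfield F_4, index d = 4

coset_of, reps = {}, []
for x in range(16):
    if x not in coset_of:
        i = len(reps)
        reps.append(x)
        for y in C:
            coset_of[x ^ y] = i              # additive coset v_i + C

PARAMS = [(1, 3), (2, 1), (3, 5), (1, 2)]    # illustrative (a_i, r_i) on each coset

def F(x):
    """The piecewise (additive cyclotomic) map F(x) = a_i x^{r_i} on coset C_i."""
    a, r = PARAMS[coset_of[x]]
    return gf_mul(a, gf_pow(x, r, POLY, N), POLY, N)

def lagrange_eval(x):
    """Lagrange form P(x) = sum_p F(p) * delta_p(x) with delta_p(x) = 1 - (x-p)^{q-1};
    in characteristic 2, subtraction is XOR."""
    s = 0
    for p in range(16):
        delta = 1 ^ gf_pow(x ^ p, 15, POLY, N)   # equals 1 iff x == p
        s ^= gf_mul(F(p), delta, POLY, N)
    return s
```

Evaluating the interpolant at all $16$ points recovers $F$ exactly, as the uniqueness of interpolation by a polynomial of degree $<q$ guarantees.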
\(3\) Kasami case: The polynomial form of $f(x)$ proposed in Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"} is $$\label{eq.poly.kasami} f(x)=\sum_{i=0}^{q-2}{\rm Tr}_1^m(\alpha_i x^{q+1})((x^q+x+\xi^i)^{q-1}+1)+{\rm Tr}_1^m(\alpha_{\infty}x^{q+1})((x^q+x)^{q-1}+1).$$ According to the criterion, given in [@D], for a bent function to belong to the Maiorana-McFarland class $\mathcal{MM}$ [@D; @Mc], it can be checked that $f(x)$ in [\[eq.poly.kasami\]](#eq.poly.kasami){reference-type="eqref" reference="eq.poly.kasami"} belongs to the $\mathcal{MM}$ class. Note that Fernando and Hou [@FH] also characterized the polynomial form of functions of this type. **Lemma 4**. *([@FH])[\[lem.poly.add\]]{#lem.poly.add label="lem.poly.add"} Let $\xi$ be an element of order $k$ with $k|(p^n-1)$, and define $\xi^{\infty}=0$. Let $f_{\infty},\,f_0,\cdots,f_{k-1}\in\mathbb{F}_{p^n}[x]$ and $\phi: \mathbb{F}_{p^n}\rightarrow\{\xi^i: i=\infty,0,\cdots,k-1\}$. Then the polynomial form of $$H(x)=f_i(x) \,\,{\rm if}\,\, \phi(x)=\xi^i,\,i\in\{\infty,0,\cdots,k-1\}$$ is $$H(x)=f_{\infty}(x)(1-\phi(x)^{p^n-1})+\frac{1}{k}\sum_{i,j=0}^{k-1}\xi^{-ij}f_j(x) \phi(x)^i.$$* Taking $k=q-1$, $\phi(x)=x^q+x$ and $f_i(x)={\rm Tr}_1^m(\alpha_i x^{q+1})$ for $i=\infty,0,\cdots,k-1$, we see that $f(x)$ in Theorem [Theorem 8](#thm.bent-kasami){reference-type="ref" reference="thm.bent-kasami"} coincides with $H(x)$, which can be rewritten as $$f(x)={\rm Tr}_1^m(\alpha_{\infty} x^{q+1})(1+(x^q+x)^{2^n-1})+\sum_{i,j=0}^{q-2}\xi^{-ij}{\rm Tr}_1^m(\alpha_j x^{q+1})(x^q+x)^i.$$ In fact, this is consistent with the expansion of [\[eq.poly.kasami\]](#eq.poly.kasami){reference-type="eqref" reference="eq.poly.kasami"}. From a polynomial point of view, our construction produces infinite classes of bent functions with algebraic degrees higher than 2.
For instance, $f(x)$ in Corollary [Corollary 3](#cor.kasami0){reference-type="ref" reference="cor.kasami0"} is of the polynomial form $$\label{eq.Kasami.poly} f(x)={\rm Tr}_1^m(c x^{q+1})(x^q+x+c^{2^{m-1}-1})^{q-1}.$$ It can be verified that the algebraic degree of $f(x)$ is $m$ for $2\leq m\leq 10$ when $c=1$, which is the optimal algebraic degree. Next, we study the EA-equivalence between $f(x)$ in [\[eq.Kasami.poly\]](#eq.Kasami.poly){reference-type="eqref" reference="eq.Kasami.poly"} and known bent polynomials. Firstly, it can be verified by Magma that $f(x)$ is EA-inequivalent to the five classes of bent monomials. Recall from (1) and (2) that, when $m=3$, all known Dillon type bent polynomials are of the form ${\rm Tr}_1^6(a x^{q-1}+b x^{3(q-1)})$ and there are only two Niho type bent polynomials, ${\rm Tr}_1^3(x^{36})$ and ${\rm Tr}_1^3(x^{36})+{\rm Tr}_1^6(x^{22})$, up to equivalence. Magma shows that $f(x)$ is EA-inequivalent to these three classes of bent functions. We also note that the second construction yields a large number of bent functions, and thus we do not attempt to determine their equivalence with all known ones; we leave this problem for future study. # Concluding remarks {#conc} In this paper, we investigated the construction of Boolean bent functions from cyclotomic mappings. Firstly, using Dillon functions and Niho functions as the branch functions over index $q+1$ multiplicative cyclotomic cosets of ${\mathbb F}_{q^2}$ respectively, we obtained two generic constructions of bent functions and then derived several new explicit infinite families of bent functions. Secondly, a generic construction was presented by using Kasami functions as branch functions over index $q$ additive cyclotomic cosets of ${\mathbb F}_{q^2}$, from which we obtained some explicit constructions of bent functions.
Finally, switching between the cyclotomic form and the polynomial form, we showed that these three classes of bent functions belong to the $\mathcal{PS}$ class, the class $\mathcal{H}$, and the $\mathcal{MM}$ class, respectively. The EA-equivalence of these bent functions has been briefly discussed, and a further study is worth pursuing. # Acknowledgements {#acknowledgements .unnumbered} This work was supported by the National Key Research and Development Program of China (No. 2021YFA1000600), the National Natural Science Foundation of China (No. 62072162), the Natural Science Foundation of Hubei Province of China (No. 2021CFA079), the Knowledge Innovation Program of Wuhan-Basic Research (No. 2022010801010319), the Innovation Group Project of the Natural Science Foundation of Hubei Province of China (No. 2003AFA021), and the Natural Sciences and Engineering Research Council of Canada (RGPIN-2023-04673). # References Abdukhalikov K.: Equivalence classes of Niho bent functions, Des. Codes Cryptogr. 89, 1509-1534 (2021). Akbary A., Ghioca D., Wang Q.: On permutation polynomials of prescribed shape. Finite Fields Appl. 15 (2), 195-206 (2009). Bors A., Wang Q.: Generalized cyclotomic mappings: Switching between polynomial, cyclotomic, and wreath product form, Commun. Math. Res. (2021). Canteaut A., Charpin P., Kyureghyan G.: A new class of monomial bent functions, Finite Fields Appl. 14(1), 221-241 (2008). Carlet C.: Two new classes of bent functions, In: Helleseth T. (eds) Advances in EUROCRYPT 1993. LNCS 765, Springer, Berlin (1994). Carlet C.: On the secondary constructions of resilient and bent functions, In: Feng K., Niederreiter H., Xing C. (eds.) Proceedings of the Workshop on Coding, Cryptography and Combinatorics 2003, pp. 3-28. Birkhäuser Verlag (2004). Carlet C.: On bent and highly nonlinear balanced/resilient functions and their algebraic immunities, In: Fossorier M., Imai H., Lin S., Poli A. (eds.) Proceedings of AAECC 2006, LNCS 3857, pp. 1-28 (2006).
Carlet C.: Boolean functions for cryptography and error correcting codes, In: Crama Y., Hammer P.L. (eds.) Boolean Models and Methods in Mathematics, Computer Science, and Engineering, 1st edn, pp. 257-397. Cambridge University Press, New York (2010). Carlet C., Zhang, F., Hu, Y.: Secondary constructions of bent functions and their enforcement, Adv. Math. Commun. 6(3), 305-314 (2012). Carlet C.: Boolean functions for cryptography and coding theory, Cambridge, U.K.: Cambridge Univ. Press (2021). Carlet C., Mesnager S.: Four decades of research on bent functions, Des. Codes Cryptogr. 78(1), 5-50 (2016). Charpin P., Gong G.: Hyperbent functions, Kloosterman sums and Dickson polynomials, IEEE Trans. Inf. Theory 9(54), 4230-4238 (2008). Cohen G., Honkala I., Litsyn S., Lobstein A.: Covering codes, North-Holland Mathematical Library 54, North-Holland, Amsterdam (1997). Dillon J. F.: Elementary Hadamard difference sets, Ph.D. dissertation, Univ. Maryland, College Park (1974). Ding C., Fan C., Zhou Z.: The dimension and minimum distance of two classes of primitive BCH codes, Finite Fields Appl. 45, 237-263 (2017). Dobbertin H.: Construction of bent functions and balanced Boolean functions with high nonlinearity, in Fast Software Encryption, vol. 1008. Berlin, Germany: Springer, pp. 61-74 (1995). Dobbertin H., Leander G., Canteaut A., Carlet C., Felke P., Gaborit P.: Construction of bent functions via Niho power functions, J. Combin. Theory Ser. A 113(5), 779-798 (2006). Evans A. B: Orthomorphism graphs of groups. Lecture Notes in Mathematics, 1535. Springer-Verlag, Berlin, 1992. Fang J., Sun Y., Wang L., Wang Q: Two-weight or three-weight binary linear codes from cyclotomic mappings, Finite Fields Appl. 85, 102114 (2023). Fernando N., Hou X.: A piecewise construction of permutation polynomials over finite fields, Finite Fields Appl. 18, 1184-1194 (2012).
Helleseth T., Kholosha A.: Monomial and quadratic bent functions over the finite fields of odd characteristic, IEEE Trans. Inf. Theory 52(5), 2018-2032 (2006). Hou X.: $q$-ary bent functions constructed from chain rings, Finite Fields Appl. 4, 55-61 (1998). Kumar P. V., Scholtz R. A., Welch L. R.: Generalized bent functions and their properties, J. Combin. Theory Ser. A 40(1), 90-107 (1985). Leander N. G.: Monomial bent functions, IEEE Trans. Inf. Theory 52(2), 738-743 (2006). Leander N. G., Kholosha A.: Bent functions with $2^r$ Niho exponents, IEEE Trans. Inf. Theory 52(12), 5529-5532 (2006). Li N., Helleseth T., Tang X., Kholosha A.: Several new classes of bent functions from Dillon exponents, IEEE Trans. Inf. Theory 59(3), 1818-1831 (2013). Li Y., Kan H., Mesnager S., Peng J., Tan C. H., Zheng L.: Generic constructions of (Boolean and vectorial) bent functions and their consequences, IEEE Trans. Inf. Theory 68(4), 2735-2751 (2022). Lisoněk P., Lu H. Y.: Bent functions on partial spreads, Des. Codes Cryptogr. 73(1), 209-216 (2014). McFarland R. L.: A family of noncyclic difference sets, J. Combin. Theory Ser. A 15, 1-10 (1973). Mesnager S.: Several new infinite families of bent functions and their duals, IEEE Trans. Inf. Theory 60(7), 4397-4407 (2014). Mesnager S.: Bent functions: fundamentals and results, Cham, Switzerland: Springer, pp. 1-544 (2016). Nyberg K.: Perfect nonlinear S-boxes, In: Davies, D.W. (eds) Advances in EUROCRYPT 1991, LNCS 547, Springer, Berlin, pp. 378-386, (1992). Niederreiter H., Winterhof A.: Cyclotomic $\mathcal{R}$-orthomorphisms of finite fields, Discrete Math. 295, 161-171 (2005). Olsen J., Scholtz R., Welch L.: Bent-function sequences, IEEE Trans. Inf. Theory 28(6), 858-864 (1982). Rothaus O. S.: On bent functions, J. Combin. Theory Ser. A 20(3), 300-305 (1976). Tang C., Zhou Z., Qi Y., Zhang X., Fan C., Helleseth T.: Generic construction of bent functions and bent idempotents with any possible algebraic degrees. IEEE Trans. Inf.
Theory 63(10), 6149-6157 (2017). Tu Z., Zeng X., Li C., Helleseth T.: A class of new permutation trinomials, Finite Fields Appl. 50, 178-195 (2018). Wang Q.: Cyclotomic mapping permutation polynomials over finite fields, in: Golomb, S.W., Gong, G., Helleseth, T., Song, HY. (Eds.), Sequences, Subsequences, and Consequences, LNCS 4893, Springer, Berlin, pp. 119-128 (2007). Wang Q.: Cyclotomy and permutation polynomials of large indices, Finite Fields Appl. 22, 57-69 (2013). Xie X., Li N., Zeng X., Tang X., Yao Y.: Several classes of bent functions over finite fields, Des. Codes Cryptogr. 91, 309-332 (2023). Zieve M.: Some families of permutation polynomials over finite fields, Int. J. Number Theory 4, 851-857 (2008). Zieve M.: On some permutation polynomials over $\mathbb{F}_q$ of the form $x^r h(x^{(q-1)/d})$, Proc. Amer. Math. Soc. 137(7), 2209-2216 (2009). Zieve M.: Classes of permutation polynomials based on cyclotomy and an additive analogue, in: Additive Number Theory, Springer, New York, pp. 355-361 (2010). [^1]: X. Xie and X. Zeng are with the Hubei Key Laboratory of Applied Mathematics, Faculty of Mathematics and Statistics, Hubei University, Wuhan 430062, China. N. Li is with the Hubei Key Laboratory of Applied Mathematics, School of Cyber Science and Technology, Hubei University, Wuhan 430062, China. Q. Wang is with the School of Mathematics and Statistics, Carleton University, Ottawa, K1S 5B6, Canada. Email: xi.xie\@aliyun.com, nian.li\@hubu.edu.cn, wang\@math.carleton.ca, xiangyongzeng\@aliyun.com.
--- abstract: | We consider the Rademacher- and Sobolev-to-Lipschitz-type properties for arbitrary quasi-regular strongly local Dirichlet spaces. We discuss the persistence of these properties under localization, globalization, transfer to weighted spaces, tensorization, and direct integration. As byproducts we obtain: necessary and sufficient conditions to identify a quasi-regular strongly local Dirichlet form on an extended metric topological $\sigma$-finite possibly non-Radon measure space with the Cheeger energy of the space; the tensorization of intrinsic distances; the tensorization of the Varadhan short-time asymptotics. address: - Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria - Department of Mathematical Science, Durham University, Science Laboratories, South Road, DH1 3LE, United Kingdom author: - Lorenzo Dello Schiavo - Kohei Suzuki title: | Persistence of Rademacher-type and\ Sobolev-to-Lipschitz properties --- [^1] [^2] [^3] # Introduction The interplay between analysis, geometry, and stochastic analysis on non-smooth spaces has recently attracted a great deal of attention. Extraordinarily insightful theories have been developed connecting these three aspects on: Ricci-limit spaces; sub-Riemannian manifolds; Alexandrov and Cartan--Alexandrov--Topogonov spaces; metric measure spaces satisfying synthetic Ricci-curvature lower bounds à la Lott--Sturm--Villani and Ambrosio--Gigli--Savaré; Lorentzian metric measure spaces; configuration, Wasserstein, Wiener, and other infinite-dimensional spaces; to name only a few. On the one hand, every metric measure space $(X,\mssd,\mssm)$ may be endowed with a natural convex energy functional nicely capturing the properties of the metric measure structure, namely the *Cheeger energy* $\mathsf{Ch}_{\mssd,\mssm}$ on the space of real-valued functions on $X$.
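To make this functional concrete, here is a toy finite-space sketch (ours, not from the text: the finite state space, the crude gradient surrogate, and the normalization $\tfrac12$ are all illustrative assumptions) that mimics a Cheeger-type energy by squaring a discrete slope and integrating against the measure:

```python
# Toy discrete surrogate of a Cheeger-type energy on a finite metric measure
# space (X, d, m).  Illustration only: on a genuine metric measure space the
# Cheeger energy is defined via relaxation / weak upper gradients, not by the
# crude difference quotient used here.

def slope(f, d, x, points):
    """Discrete stand-in for the slope |Df|(x): sup over y != x of the
    difference quotient |f(x) - f(y)| / d(x, y)."""
    return max((abs(f[x] - f[y]) / d(x, y) for y in points if y != x), default=0.0)

def cheeger_energy(f, d, m, points):
    """Toy energy Ch(f) = (1/2) * sum_x |Df|(x)^2 m({x})."""
    return 0.5 * sum(slope(f, d, x, points) ** 2 * m[x] for x in points)

# Three points on the real line with unit masses.
points = [0.0, 1.0, 2.0]
d = lambda x, y: abs(x - y)
m = {x: 1.0 for x in points}

f = {x: 2.0 * x for x in points}        # restriction of a 2-Lipschitz function
print(cheeger_energy(f, d, m, points))  # every slope equals 2, so Ch(f) = 6.0
print(cheeger_energy({x: 1.0 for x in points}, d, m, points))  # constants cost 0.0
```

Constant functions have zero energy and the energy scales quadratically under dilation of $f$: the two basic features of $\mathsf{Ch}_{\mssd,\mssm}$ used throughout.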
Starting from $\mathsf{Ch}_{\mssd,\mssm}$, it is possible to build a sophisticated theory of non-smooth analysis on $(X,\mssd,\mssm)$ encompassing for instance: first- and second-order Sobolev spaces, a (non-linear) 'Laplacian', a (non-linear) 'heat flow', etc.; see e.g. [@AmbGigSav14; @Gig18]. It was a particularly fruitful intuition ---put forward in great generality by N. Gigli--- that the validity of the parallelogram identity for $\mathsf{Ch}_{\mssd,\mssm}$ ---equivalently, the linearity of the heat flow--- provides a setting most suitable to the study of the interplay mentioned above. On the other hand, however, other standard ---this time naturally *quadratic*--- energy forms appear in many settings and are in principle unrelated to the metric-measure structure of the underlying space. These include, in particular, Dirichlet energy forms on configuration spaces [@AlbKonRoe98; @RoeSch99; @ErbHue15; @LzDSSuz21; @LzDSSuz22a], spaces of measures [@LzDS17+; @ForSavSod22; @KonLytVer15], Wiener spaces [@AidKaw01; @AidZha02; @HinRam03], and others. Linking ---and possibly reconciling--- these two aspects of the theory of non-smooth spaces provides great insight, allowing us to import tools from stochastic analysis into metric measure geometry, and vice versa. To this end, in [@LzDSSuz20] we introduced several Rademacher- and Sobolev-to-Lipschitz-type properties comparing the domain of the energy form under consideration with the space of Lipschitz functions induced by an assigned distance. We used this comparison to give sufficient conditions for the validity of the integral-type Varadhan short-time asymptotics of the heat semigroup with respect to an assigned distance. Here, we address the persistence of these Rademacher- and Sobolev-to-Lipschitz-type properties under various constructions/transformations on arbitrary quasi-regular strongly local Dirichlet spaces, including: - *localization* to form restrictions (in the sense of e.g.
[@Kuw98 §3]); - *globalization* from form restrictions on coverings of the space; - transfer to *weighted spaces* (more general than Girsanov transforms); - *tensorization* to product spaces; - *direct integration* (in the sense of [@LzDS20; @LzDSWir21]). As anticipated, our first goal is to compare notions in Dirichlet-form theory (quasi-regular strongly local forms, energy measures, square field operators) with notions in metric measure geometry (Cheeger energies, minimal weak upper gradients, minimal relaxed slopes). Whenever possible, we do so in great generality, on extended metric topological $\sigma$-finite possibly non-Radon measure spaces; in particular, away from any assumption of geometric type (e.g., measure doubling, Poincaré inequalities, synthetic curvature bounds, etc.). *Applications to particle systems*. Apart from the theoretical motivations explained above, the persistence of the aforementioned properties is inspired also by geometric and analytic constructions of infinite particle systems. Each of the above transformations of Dirichlet forms plays a significant role in the construction of interacting particle systems of diffusions from a single diffusion process on the base space. Starting from a one-particle diffusion associated with the Dirichlet energy on the base space, the correspondence between each operation on Dirichlet spaces and diffusion processes is the following: - *localization* $\rightsquigarrow$ *killing* of a one-particle diffusion upon exiting a given set; - *globalization* $\rightsquigarrow$ *patching together* one-particle diffusions; - *tensorization* $\rightsquigarrow$ *independent copy* of a one-particle diffusion; - transfer to *weighted spaces* $\rightsquigarrow$ *interacting* many-particle system; - *direct integration* $\rightsquigarrow$ *superposition* of finite-particle systems into the space of infinite particles.
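For the tensorization step, the classical product of two Dirichlet forms acts on each variable separately and integrates over the other, $\mcE^{\scriptscriptstyle{\otimes}}(f)=\int \mcE\big(f({\,\cdot\,},y)\big)\,\mathrm{d}\mssm'(y)+\int \mcE'\big(f(x,{\,\cdot\,})\big)\,\mathrm{d}\mssm(x)$, which underlies the 'independent copy' correspondence above. A minimal finite-state sketch (ours; the two-point spaces and the toy forms are invented for illustration):

```python
# Finite-state sketch of the standard tensorized Dirichlet form
#   E_prod(f) = sum_y E(f(., y)) m'({y}) + sum_x E'(f(x, .)) m({x}).
# Illustration only; the actual construction in the text works on the
# L^2-spaces of general quasi-regular strongly local Dirichlet spaces.

def tensor_form(E, Esp, X, Xp, m, mp):
    """Return the product form of two quadratic functionals E, Esp given
    on functions over the finite sets X, Xp with weights m, mp."""
    def E_prod(f):
        # f: dict keyed by pairs (x, y)
        first = sum(E({x: f[x, y] for x in X}) * mp[y] for y in Xp)
        second = sum(Esp({y: f[x, y] for y in Xp}) * m[x] for x in X)
        return first + second
    return E_prod

# Two-point factor spaces with unit weights and nearest-neighbour energies.
X, Xp = [0, 1], [0, 1]
m = {0: 1.0, 1: 1.0}; mp = {0: 1.0, 1: 1.0}
E = lambda g: (g[1] - g[0]) ** 2    # toy form on X
Esp = lambda g: (g[1] - g[0]) ** 2  # toy form on X'

E_prod = tensor_form(E, Esp, X, Xp, m, mp)

# A function of the first variable only is constant in y, so only the
# first summand contributes:
f = {(x, y): float(x) for x in X for y in Xp}
print(E_prod(f))  # = E * (mp[0] + mp[1]) = 2.0
```

The design mirrors the "independent copy" heuristic: each summand sees one coordinate process while freezing the other.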
The study of the persistence under these transformations leads us to lift metric measure properties of the base space to the space of infinite particles such as the configuration space or the space of atomic probability measures, see e.g., [@LzDSSuz21; @LzDSSuz22a] for the application to the configuration space and [@LzDS17+] for the Wasserstein space. ## The Rademacher and Sobolev-to-Lipschitz properties Let $(\mcE,\mcF)$ be a quasi-regular strongly local Dirichlet form on the space $L^2(\mssm)$ of a topological Luzin space $(X,\tau)$ endowed with a Radon measure $\mssm$. Given a $\sigma$-finite Borel measure $\mu$ on $X$ (possibly different from $\mssm$), we denote by $\mbbL^{\mu}_{\loc,b}$ the space of bounded functions in the *broad local domain* (see §[2.4](#ss:BroadLoc){reference-type="ref" reference="ss:BroadLoc"}) of $(\mcE,\mcF)$ with *$\mu$-uniformly bounded $\mcE$-energy* (see §[2.4.2](#sss:LocDom){reference-type="ref" reference="sss:LocDom"}). In particular ---in order to exemplify this concept--- when $(\mcE,\mcF)$ admits carré du champ operator $\Gamma$, and $\mu= g\mssm$ is absolutely continuous w.r.t. $\mssm$, then $\mbbL^{\mu}_{\loc,b}$ is the space of all functions $f$ in the broad local domain of $(\mcE,\mcF)$ additionally satisfying $\Gamma(f)\leq g$ $\mssm$-a.e.. For $\mu$ as above, we introduced in [@LzDSSuz20] the *intrinsic distance* $\mssd_\mu$ of $(\mcE,\mcF)$ induced by $\mu$, see [\[eq:IntrinsicD\]](#eq:IntrinsicD){reference-type="eqref" reference="eq:IntrinsicD"}. It is an extended pseudo-distance, that is, possibly taking the value $+\infty$ and/or vanishing outside the diagonal in $X^{\scriptscriptstyle{\times 2}}$. When $\mu=\mssm$ and $(\mcE,\mcF)$ is a regular Dirichlet form, $\mssd_\mssm$ coincides with the standard intrinsic distance of $(\mcE,\mcF)$. Now, let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be any extended pseudo-distance. 
For the sake of notational simplicity, throughout this introduction we denote by $\mathrm{Lip}^1(\mssd)$ the space of *measurable* $\mssd$-Lipschitz functions on $X$ with Lipschitz constant less than $1$, postponing to later sections a discussion of measurability issues and the choice(s) of a $\sigma$-algebra. Finally, for $f\in L^0(\mssm)$, denote by $\hat f$ and $\tilde{f}$ any of its (everywhere defined) $\mssm$-representatives. After [@LzDSSuz20] (also cf. Dfn. [Definition 14](#d:RadStoL){reference-type="ref" reference="d:RadStoL"} below), we say that $(X,\mcE,\mssd,\mu)$ has: - the *Rademacher property* if, whenever $\hat f\in \mathrm{Lip}^1(\mssd)$, then $f\in \mbbL^{\mu}_{\loc}$; - the *distance-Rademacher property* if $\mssd\leq \mssd_\mu$; - the *Sobolev--to--continuous-Lipschitz property* if each $f\in\mbbL^{\mu}_{\loc}$ has a $\tau$-continuous $\mssm$-representative $\hat f\in\Lip^1(\mssd)$; - the *Sobolev--to--Lipschitz property* if each $f\in\mbbL^{\mu}_{\loc}$ has an $\mssm$-representative $\hat f\in\Lip^1(\mssd)$; - the *$\mssd$-continuous-Sobolev--to--Lipschitz property* if each $f\in \mbbL^{\mu}_{\loc}$ having a $\mssd$-continuous (measurable) representative $\hat f$ also has a representative $\tilde{f}\in \Lip^1(\mssd)$ (possibly, $\tilde{f}\neq \hat f$); - the *continuous-Sobolev--to--Lipschitz property* if each $\tau$-continuous $f\in \mbbL^{\mu}_{\loc}$ satisfies $f\in\Lip^1(\mssd)$; - the *distance Sobolev-to-Lipschitz property* if $\mssd\geq \mssd_\mu$. For implications between the above properties in the present setting see [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}. ## Main results Let us now summarize our main results. *Localization/globalization*. We prove that the Rademacher property is stable under localization (Prop. 
[Proposition 28](#p:LocRad){reference-type="ref" reference="p:LocRad"}), while the Sobolev--to--Lipschitz property is stable under globalization (in the sense of sheaves, see Prop. [Proposition 34](#p:GlobCSL){reference-type="ref" reference="p:GlobCSL"}) but not under localization (Ex. [Example 32](#ese:FailureLocSL){reference-type="ref" reference="ese:FailureLocSL"}). Let us stress that these stability results are trivial (or trivially false) if one replaces the local space $\mbbL^{\mssm}_{\loc,b}$ with its global counterpart $\mbbL^{\mssm}_b$ (see §[2.4.2](#sss:LocDom){reference-type="ref" reference="sss:LocDom"}), which is a first indication---see below for stronger ones---that the Rademacher and Sobolev-to-Lipschitz properties ought to be phrased in terms of *local* spaces (as opposed to: subspaces of the domain). *Weighted spaces*. Both the Rademacher and the Sobolev-to-Lipschitz property for $(\mcE,\mcF)$ on $L^2(\mssm)$ transfer to the weighted space $L^2(\theta\mssm)$ for any density $\theta$ bounded away from $0$ and infinity locally in the sense of quasi-open nests. For simplicity, we state here a single result under stronger assumptions than necessary, combining the transfer of the Rademacher property with the transfer of the Sobolev-to-Lipschitz property (for minimal assumptions see Prop.s [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} and [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"} respectively). **Theorem 1** (Cor. [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}). *Let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be quasi-regular strongly local Dirichlet spaces with the same underlying topological measurable space $(X,\tau,\Sigma)=(X',\tau',\Sigma')$ and possibly different measures $\mssm$ and $\mssm'$. Further let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance. Assume that:* 1.
*there exists a linear subspace $\mcD$ both $\mcF$-dense in $\mcF$ and $\mcF'$-dense in $\mcF'$, additionally so that $\Gamma=\Gamma'$ on $\mcD$.* 2. *$\mssm'=\theta\mssm$ for some $\theta\in L^0(\mssm)$, and there exist quasi-open nests $E_\bullet, G_\bullet$ for both $\mcE$ and $\mcE'$ with the following properties:* 1. *for each $k\in {\mathbb N}$ there exists a constant $a_k>0$ such that $$0<a_k\leq \theta \leq a_k^{-1}<\infty \quad \mssm\text{-a.e.} \quad \text{on } G_k\,\,\mathrm{;}\;\,$$ (that is, $\theta, \theta^{-1}\in {{\big({L^\infty(\mssm)}\big)}^\bullet_{\loc}}(G_\bullet)$.)* 2. *for each $k\in {\mathbb N}$ it holds $E_k\subset G_k$ $\mcE$- and $\mcE'$-quasi-everywhere, and there exists $\varrho_k\in \mcD$ with $\mathop{\mathrm{\mathds 1}}_{E_k}\leq \varrho_k\leq \mathop{\mathrm{\mathds 1}}_{G_k}$ $\mssm$-a.e..* *Then,* 1. *$\big({\Gamma, \mbbL^{\mssm}_{\loc,b}}\big)= \big({\Gamma',\mbbL^{\prime\, \mssm'}_{\loc,b}}\big)$ and $\mssd_\mssm= \mssd_{\mssm'}$;* 2. *$(\mathsf{ScL}_{\mssm,\tau,\mssd})$, resp. $(\mathsf{SL}_{\mssm,\mssd})$, $(\mathsf{cSL}_{\tau,\mssm,\mssd})$, $(\mathsf{Rad}_{\mssd,\mssm})$, holds if and only if $(\mathsf{ScL}_{\mssm',\tau,\mssd})$, resp. $(\mathsf{SL}_{\mssm',\mssd})$, $(\mathsf{cSL}_{\tau,\mssm',\mssd})$, $(\mathsf{Rad}_{\mssd,\mssm'})$ holds.* The above theorem provides us with the following general guidelines: - the broad local space of functions with uniformly bounded energy, the intrinsic distance, and various Rademacher and Sobolev-to-Lipschitz-type properties are all *completely determined by a dense subspace and in a local fashion*. - the notion of broad local space introduced by K. Kuwae in [@Kuw98] is the right one to address the interplay between the Dirichlet-space and metric measure space structures.
- the intrinsic distance, and therefore ---under the assumption of both the Rademacher and the Sobolev-to-Lipschitz property--- the Varadhan-type short-time asymptotics for the heat-semigroup/kernel, both transfer to weighted spaces in far greater generality than Girsanov transforms. When considering applications to metric measure geometry it is natural to choose the space $\mcD$ in the above Theorem to be some algebra of Lipschitz functions. In this case, as a consequence of the above result, the Rademacher, Sobolev-to-Lipschitz-type and related properties may be shown in the simplified setting of probability spaces and subsequently transferred to general $\sigma$-finite spaces. This applies in particular to the comparison of a Dirichlet space $(\mcE,\mcF)$ with the Cheeger energy $\mathsf{Ch}_{\mssd,\mssm}$ of the underlying extended metric measure space, for which we prove two comparison results (Prop. [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} and [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}) separately showing the inequalities $\mcE\leq \mathsf{Ch}_{\mssd,\mssm}$ and $\mcE\geq \mathsf{Ch}_{\mssd,\mssm}$ under minimal assumptions. Combining the two, we further have---in the general case of $\sigma$-finite extended metric spaces---the following identification of $\mcE$ with $\mathsf{Ch}_{\mssd,\mssm}$, previously shown by L. Ambrosio, N. Gigli, and G. Savaré [@AmbGigSav15] for energy-measure spaces, and by L. Ambrosio, M. Erbar, and G. Savaré [@AmbErbSav16] for extended metric-topological *probability* spaces. Namely we prove: **Theorem 2** (Cor. [Corollary 50](#c:RadStoLCheegerComparison){reference-type="ref" reference="c:RadStoLCheegerComparison"}). *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space admitting carré du champ operator $\Gamma$, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to[0,\infty]$ be an extended distance. 
Further assume that $(\mathsf{Rad}_{\mssd,\mssm})$ and $(\mathsf{cSL}_{\tau,\mssm,\mssd})$ hold. Then, $\mcE\leq \mathsf{Ch}_{\mssd,\mssm}$. The equality $\mcE=\mathsf{Ch}_{\mssd,\mssm}$ holds if and only if $(\mbbX,\mcE)$ is additionally $\tau$-upper regular (see Dfn. [Definition 46](#d:TUpperReg){reference-type="ref" reference="d:TUpperReg"}).* See Remark [Remark 51](#r:ComparisonAGS-AES){reference-type="ref" reference="r:ComparisonAGS-AES"} below for a thorough comparison of our results with those in [@AmbGigSav15; @AmbErbSav16]. Let us further stress that the above Theorem allows us to *deduce* the parallelogram identity for the Cheeger energy from that for the form $\mcE$, thus providing necessary and sufficient conditions to implement the *Riemannian* point of view in the sense of Gigli. *Tensorization*. In the case when $\mssd$ is a distance (*not* extended), we also discuss the tensorization of the Rademacher and Sobolev-to-Lipschitz properties. When $\mcE=\mathsf{Ch}_{\mssd,\mssm}$ is the Cheeger energy of a metric measure space, the tensorization of the Rademacher property is a byproduct of the tensorization of the Cheeger energy, discussed under different geometric assumptions in [@AmbGigSav14b; @AmbPinSpe15] and recently settled by S. Eriksson-Bique, T. Rajala, and E. Soultanis for infinitesimally (quasi-)Hilbertian metric measure spaces in [@EriRajSou22]; see §[5.2](#sss:TensorizationConseq){reference-type="ref" reference="sss:TensorizationConseq"} below for a detailed account. Here, we discuss the case of general quasi-regular strongly local Dirichlet spaces $(\mbbX,\mcE)$, without any geometric assumption. As it turns out, a product space inherits the Rademacher property from its factors. **Theorem 3** (Thm. [Theorem 61](#t:TensorRad){reference-type="ref" reference="t:TensorRad"}). *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$ be a metric measure space (Dfn.
[Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"}), and $(\mcE,\mcF)$ be a quasi-regular strongly local Dirichlet form on $\mbbX$ admitting carré du champ operator $\Gamma$ and satisfying $(\mathsf{Rad}_{\mssd,\mssm})$. Further let $(\mbbX',\mcE')$ satisfy the same assumptions as $(\mbbX,\mcE)$.* *Then, their product space $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$ satisfies $(\mathsf{Rad}_{\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }}})$.* As for the Sobolev-to-Lipschitz property, we show that a product space satisfies the continuous-Sobolev-to-Lipschitz property if the factors satisfy the Sobolev-to-Lipschitz property. **Theorem 4** (Thm. [Theorem 62](#t:TensorSL){reference-type="ref" reference="t:TensorSL"}). *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$ be a metric measure space (Dfn. [Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"}), and $(\mcE,\mcF)$ be a quasi-regular strongly local Dirichlet form on $\mbbX$ satisfying $(\mathsf{SL}_{\mssm,\mssd})$. Further let $(\mbbX',\mcE')$ satisfy the same assumptions as $(\mbbX,\mcE)$. Then, their product space $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$ satisfies $(\mathsf{cSL}_{\tau^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }},\mssd^{\scriptscriptstyle{\otimes }}})$.* *Notation*. For a measure $\mu$ on a measurable space $(X,\Sigma)$ we denote by $\mu A$, resp. $\mu f$, the $\mu$-measure of $A\in\Sigma$, resp. the integral with respect to $\mu$ of a $\Sigma$-measurable function $f$ (whenever the integral makes sense). # Setting {#s:Preliminaries} ## Metric and topological spaces Let $X$ be any non-empty set. A function $\mssd\colon X^{\scriptscriptstyle{\times 2}}\rightarrow[0,\infty]$ is an *extended pseudo-distance* if it is symmetric and satisfies the triangle inequality. Any such $\mssd$ is: a *pseudo-distance* if it is everywhere finite, i.e.
$\mssd\colon X^{{\scriptscriptstyle{\times 2}}}\rightarrow[0,\infty)$; an *extended distance* if it does not vanish outside the diagonal in $X^{{\scriptscriptstyle{\times 2}}}$, i.e. $\mssd(x,y)=0$ iff $x=y$; a *distance* if it is both finite and non-vanishing outside the diagonal. Let $x_0\in X$ and $r\in (0,\infty]$. We write $B^\mssd_r(x_0)\mathop{\mathrm{\coloneqq}}\left\{\mssd_{x_0}<r\right\}$. We call $B^\mssd_\infty(x_0)$ the *$\mssd$-accessible component* of $x_0$ in $X$. Note that, if $\mssd$ is an extended pseudo-distance, then both of the inclusions $\left\{x_0\right\}\subset \cap_{r>0} B^\mssd_r(x_0)$ and $B^\mssd_\infty(x_0)\subset X$ may be strict ones. We say that an extended metric space is *complete* if $B^\mssd_\infty(x)$ is complete for each $x\in X$. Finally set $$\begin{aligned} \mssd({\,\cdot\,}, A)\mathop{\mathrm{\coloneqq}}& \inf_{x\in A} \mssd({\,\cdot\,},x) \colon X\longrightarrow[0,\infty] \,\,\mathrm{,}\;\,\qquad A\subset X\,\,\mathrm{.}\end{aligned}$$ **Lemma 1**. *Let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance. Further fix $x\in X$ and let $\msA_x$ be any family of subsets with $x\in\cap\msA_x$ and $\inf_{A\in\msA_x}\diam_\mssd A=0$. Then, $$\sup_{A\in\msA_x} \mssd({\,\cdot\,}, A)= \mssd({\,\cdot\,},x) \,\,\mathrm{.}$$* **Proof.* The inequality '$\leq$' is always satisfied by definition of point-to-set extended pseudo-distance, thus it suffices to show the converse inequality. Up to restricting to an at most countable sub-family of $\msA_x$, it suffices to show the assertion in the case when $\msA_x\mathop{\mathrm{\coloneqq}}\left(A_n\right)_n$ is at most countable.* *Now, fix $y\in X$ and $\varepsilon>0$. For each $n\in{\mathbb N}_1$ let $x_n\in A_n$ be so that $\mssd(y,A_n)\geq \mssd(y,x_n)-\varepsilon$.
Then, since $x,x_n\in A_n$, $$\begin{aligned} \sup_n \mssd(y,A_n) \geq&\ \sup_n \mssd(y,x_n) -\varepsilon \geq \sup_n \big({\mssd(y,x)-\mssd(x,x_n)}\big) -\varepsilon \geq \mssd(y,x) - \inf_n\diam_\mssd(A_n) -\varepsilon \\ =&\ \mssd(y,x)-\varepsilon \,\,\mathrm{.}\end{aligned}$$ By arbitrariness of $\varepsilon>0$ we conclude that $\sup_n\mssd(y,A_n)\geq\mssd(y,x)$, whence equality, and the conclusion follows by arbitrariness of $y$. ◻* For an extended pseudo-distance $\mssd$ on $X$, let $\tau_\mssd$ denote the (possibly *not* Hausdorff) topology on $X$ induced by the pseudo-distance $\mssd\wedge 1$. The topology $\tau_\mssd$ is Hausdorff if and only if $\mssd$ is an extended distance. The topology $\tau_\mssd$ is separable if and only if there exists a countable family of points $\left(x_n\right)_n\subset X$ so that $X=\cup_n B^\mssd_\infty(x_n)$ and $(B^\mssd_\infty(x_n),\mssd)$ is a separable pseudo-metric space for every $n\in {\mathbb N}$. *Lipschitz functions*. A function $\hat f\colon X\rightarrow{\mathbb R}$ is $\mssd$-Lipschitz if there exists a constant $L>0$ so that $$\begin{aligned} \label{eq:Lipschitz} \big\lvert\hat f(x)-\hat f(y)\big\rvert\leq L\, \mssd(x,y) \,\,\mathrm{,}\;\,\qquad x,y\in X \,\,\mathrm{.}\end{aligned}$$ The smallest constant $L$ so that [\[eq:Lipschitz\]](#eq:Lipschitz){reference-type="eqref" reference="eq:Lipschitz"} holds is the (global) *Lipschitz constant of $\hat f$*, denoted by $\mathrm{L}_{\mssd}(\hat f)$. Further let the *slope* of $\hat f$ at $x$ be defined as $$\big\lvert\mathrm{D}\hat f\big\rvert_{\mssd}(x)\mathop{\mathrm{\coloneqq}}\limsup_{\mssd(x,y)\to 0} \frac{\left\lvert \hat f(x)-\hat f(y)\right\rvert}{\mssd(x,y)}\,\,\mathrm{.}$$ Conventionally, $\big\lvert\mathrm{D}\hat f\big\rvert_{\mssd}(x)=0$ whenever $x$ is isolated relative to $\mssd$. We omit the specification of $\mssd$ whenever apparent from context. *Remark 2*.
It is worth stressing that ---*conventionally*--- in [\[eq:Lipschitz\]](#eq:Lipschitz){reference-type="eqref" reference="eq:Lipschitz"} we set $0\cdot\infty\mathop{\mathrm{\coloneqq}}\infty$. We further note that, for $\hat f\colon X\to {\mathbb R}$, having $\mathrm{L}_{\mssd}(\hat f)=0$ does not imply that $\hat f$ is constant, unless $\mssd$ were in fact a distance (*not*: extended). Indeed, if $\mssd$ has accessible components $\left(X_i\right)_{i\in I}$, then every function $\hat f$ constant on each $X_i$ is $\mssd$-Lipschitz with $\mathrm{L}_{\mssd}(\hat f)=0$. For any non-empty $A\subset X$ we write $\Lip(A,\mssd)$, resp. $\mathrm{Lip}_b(A,\mssd)$ for the family of all finite, resp. bounded, $\mssd$-Lipschitz functions on $A$. For simplicity of notation, further let $\Lip(\mssd)\mathop{\mathrm{\coloneqq}}\Lip(X,\mssd)$, resp. $\mathrm{Lip}_b(\mssd)\mathop{\mathrm{\coloneqq}}\mathrm{Lip}_b(X,\mssd)$. *Topological spaces*. A Hausdorff topological space $(X,\tau)$ is: 1. [\[i:Top:2\]]{#i:Top:2 label="i:Top:2"} a *topological Luzin space* if it is a continuous injective image of a Polish space; 2. [\[i:Top:3\]]{#i:Top:3 label="i:Top:3"} a *metrizable Luzin space* if it is homeomorphic to a Borel subset of a compact metric space. Let $(X,\tau)$ be a Hausdorff topological space. A family of pseudo-distances $\UP$ is a *uniformity* (*of pseudo-distances*) if: it is directed, i.e., $\mssd_1\vee \mssd_2\in\UP$ for every $\mssd_1,\mssd_2\in \UP$; and it is order-closed, i.e., $\mssd_2\in \UP$ and $\mssd_1\leq \mssd_2$ implies $\mssd_1\in \UP$ for every pseudo-distance $\mssd_1$ on $X$. A uniformity is *Hausdorff* if it separates points. *Extended metric-topological space*. The next definition is a reformulation of [@AmbErbSav16 Dfn. 4.1]. **Definition 3** (Extended metric-topological space). Let $(X,\tau)$ be a Hausdorff topological space. 
An extended pseudo-distance $\mssd\colon X^{{\scriptscriptstyle{\times 2}}}\rightarrow[0,\infty]$ is $\tau$-*admissible* if there exists a uniformity $\UP$ of *$\tau^{\scriptscriptstyle{\times 2}}$-continuous* pseudo-distances $\mssd'\colon X^{\scriptscriptstyle{\times 2}}\rightarrow[0,\infty)$, so that $$\begin{aligned} \label{eq:d=supUP} \mssd=\sup\left\{\mssd':\mssd'\in\UP\right\}\,\,\mathrm{.}\end{aligned}$$ The triple $(X,\tau,\mssd)$ is an *extended metric-topological space* if $\mssd$ is $\tau$-admissible and there exists a uniformity $\UP$ witnessing the $\tau$-admissibility of $\mssd$ which is additionally Hausdorff and generates $\tau$. Let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to[0,\infty]$ be an extended pseudo-distance on $X$. We denote by $\tau_\mssd$ the topology induced by $\mssd$ and we note that, even in the case when $\mssd$ is $\tau$-admissible, $\tau_\mssd$ is in general strictly finer than $\tau$. If $\mssd$ is a distance, however, then $\tau_\mssd=\tau$. ## Measure spaces {#ss:MeasureTopSp} Let $(X,\tau)$ be a Hausdorff topological space. We denote by $\msB_{\tau}$ the Borel $\sigma$-algebra of $(X,\tau)$. Given a Borel measure $\mu$ on $(X,\msB_{\tau})$, we denote by $\msB_{\tau}^\mu$ the (Carathéodory) completion of $\msB_{\tau}$ with respect to $\mu$. Given $\sigma$-finite measures $\mu_0$, $\mu_1$ on $(X,\msB_{\tau})$, we write $\mu_0\leq \mu_1$ to indicate that $\mu_0 A\leq \mu_1 A$ for every $A\in\msB_{\tau}$. Every Borel measure on a strongly Lindelöf space has support, e.g. [@MaRoe92 p. 148]. Let $\Sigma$ be a $\sigma$-algebra over $X$. We denote by $\mcL^0(\Sigma)$, resp. $\mcL^\infty(\Sigma)$, the vector space of all everywhere-defined real-valued, resp. uniformly bounded, ($\Sigma$-)measurable functions on $X$. When $\mu$ is a measure on $(X,\Sigma)$, we denote by $L^0(\mu)$ the corresponding vector space of $\mu$-classes.
We denote by $\mcL^2(\mu)$ the vector space of all $\mu$-square-integrable functions in $\mcL^0(\Sigma)$, by $L^2(\mu)$ the corresponding space of $\mu$-classes. Let the corresponding definition of $\mcL^p(\mu)$, resp. $L^p(\mu)$, be given for all $p\in [1,\infty)$. As a general rule, we denote measurable functions by either $\hat f$ or $\tilde{f}$, and classes of measurable functions up to a.e. equality simply by $f$. When $\mu$ has full support on $X$, we drop this distinction for $\tau$-continuous functions, simply writing $f$ for both the class and its unique $\tau$-continuous representative. *Measurability and continuity of Lipschitz functions*. Let $(X,\tau)$ be a Hausdorff space, $\mssd\colon X^{{\scriptscriptstyle{\times 2}}}\rightarrow[0,\infty]$ be an extended distance on $X$. Let $\hat f\colon X \rightarrow[-\infty,\infty]$ be $\mssd$-Lipschitz with $\hat f\not\equiv \pm\infty$. In general, $\hat f$ is *neither* everywhere finite, nor $\tau$-continuous, nor $\msB_{\tau}^\mssm$-measurable, see [@LzDSSuz20]. For a given $\sigma$-algebra $\Sigma$ on $X$, this motivates setting $$\begin{aligned} \Lip(\mssd,\Sigma)\mathop{\mathrm{\coloneqq}}\Lip(\mssd)\ \cap&\ \mcL^0(\Sigma)\,\,\mathrm{,}\;\,& \mathrm{Lip}_b(\mssd,\Sigma)\mathop{\mathrm{\coloneqq}}\mathrm{Lip}_b(\mssd)\ \cap&\ \mcL^0(\Sigma)\,\,\mathrm{.} \\ \Lip(\mssd,\tau)\mathop{\mathrm{\coloneqq}}\Lip(\mssd)\ \cap&\ \mcC(\tau)\,\,\mathrm{,}\;\,& \mathrm{Lip}_b(\mssd,\tau)\mathop{\mathrm{\coloneqq}}\mathrm{Lip}_b(\mssd)\ \cap&\ \mcC(\tau)\,\,\mathrm{.}\end{aligned}$$ *Main assumptions*. Everywhere in the following, $\mbbX$ is a quadruple $(X,\tau,\Sigma,\mssm)$ so that $\msB_{\tau}\subset \Sigma\subset \msB_{\tau}^\mssm$, the reference measure $\mssm$ is positive $\sigma$-finite on $(X,\Sigma)$, and one of the following holds: 1. [\[ass:Hausdorff\]]{#ass:Hausdorff label="ass:Hausdorff"} $(X,\tau)$ is a Hausdorff space; 2.
[\[ass:Luzin\]]{#ass:Luzin label="ass:Luzin"} $(X,\tau)$ is a topological Luzin space and $\supp[\mssm]=X$; 3. [\[ass:Polish\]]{#ass:Polish label="ass:Polish"} $(X,\tau)$ is a second countable locally compact Hausdorff space, $\mssm$ is Radon and $\supp[\mssm]=X$. ## Dirichlet spaces Given a bilinear form $(Q,\msD({Q}))$ on a Hilbert space $H$, we write $$\begin{aligned} Q(h)\mathop{\mathrm{\coloneqq}}Q(h,h)\,\,\mathrm{,}\;\,\qquad Q_\alpha(h_0,h_1)\mathop{\mathrm{\coloneqq}}Q(h_0,h_1)+\alpha\left\langle h_0 \,\middle |\, h_1\right\rangle\,\,\mathrm{,}\;\,\alpha>0\,\,\mathrm{.}\end{aligned}$$ Let $\mbbX$ be satisfying Assumption [\[ass:Hausdorff\]](#ass:Hausdorff){reference-type="ref" reference="ass:Hausdorff"}. A *Dirichlet form on $L^2(\mssm)$* is a non-negative definite densely defined closed symmetric bilinear form $(\mcE,\mcF)$ on $L^2(\mssm)$ satisfying the Markov property $$\begin{aligned} f_0\mathop{\mathrm{\coloneqq}}0\vee f \wedge 1\in \mcF\qquad \text{and} \qquad \mcE(f_0)\leq \mcE(f)\,\,\mathrm{,}\;\,\qquad f\in\mcF\,\,\mathrm{.}\end{aligned}$$ If not otherwise stated, $\mcF$ is always regarded as a Hilbert space with norm $\left\lVert{\,\cdot\,}\right\rVert_\mcF\mathop{\mathrm{\coloneqq}}\mcE_1({\,\cdot\,})^{1/2}=\sqrt{\mcE({\,\cdot\,})+\left\lVert{\,\cdot\,}\right\rVert_{L^2(\mssm)}^2}$. A *Dirichlet space* is a pair $(\mbbX,\mcE)$, where $\mbbX$ satisfies [\[ass:Hausdorff\]](#ass:Hausdorff){reference-type="ref" reference="ass:Hausdorff"} and $(\mcE,\mcF)$ is a Dirichlet form on $L^2(\mssm)$. A *pseudo-core* is any $\mcF$-dense linear subspace of $\mcF$. ### Quasi-notions For any $A\in\msB_{\tau}$ set $\mcF_A\mathop{\mathrm{\coloneqq}}\left\{u\in \mcF: u= 0 \text{~$\mssm$-a.e.~on~} X\setminus A\right\}$. A sequence $\left(A_n\right)_n\subset \msB_{\tau}$ is a *Borel $\mcE$-nest* if $\cup_n \mcF_{A_n}$ is dense in $\mcF$. For any $A\in\msB_{\tau}$, let $(p)$ be a proposition defined with respect to $A$. 
We say that '$(p_A)$ holds' if $A$ satisfies $(p)$. A *$(p)$-$\mcE$-nest* is a Borel nest $\left(A_n\right)$ so that $(p_{A_n})$ holds for every $n$. In particular, a *closed $\mcE$-nest*, henceforth simply referred to as an *$\mcE$-nest*, is a Borel $\mcE$-nest consisting of closed sets. A set $N\subset X$ is *$\mcE$-polar* if there exists an $\mcE$-nest $\left(F_n\right)_n$ so that $N\subset X\setminus \cup_n F_n$. A set $G\subset X$ is *$\mcE$-quasi-open* if there exists an $\mcE$-nest $\left(F_n\right)_n$ so that $G\cap F_n$ is relatively open in $F_n$ for every $n\in {\mathbb N}$. A set $F$ is *$\mcE$-quasi-closed* if $X\setminus F$ is $\mcE$-quasi-open. Without loss of generality, and without explicit mention, we will assume that $\mcE$-quasi-open/-closed sets are additionally Borel measurable, see [@LzDSSuz20 Lem. 2.6]. Any countable union or finite intersection of $\mcE$-quasi-open sets is $\mcE$-quasi-open; analogously, any countable intersection or finite union of $\mcE$-quasi-closed sets is $\mcE$-quasi-closed; see [@Fug71 Lem. 2.3]. A property $(p_x)$ depending on $x\in X$ holds $\mcE$-*quasi-everywhere* (in short: $\mcE$-q.e.) if there exists an $\mcE$-polar set $N$ so that $(p_x)$ holds for every $x\in X\setminus N$. Given sets $A_0,A_1\subset X$, we write $A_0\subset A_1$ $\mcE$-q.e. if $\mathop{\mathrm{\mathds 1}}_{A_0}\leq \mathop{\mathrm{\mathds 1}}_{A_1}$ $\mcE$-q.e. Let the analogous definition of $A_0=A_1$ $\mcE$-q.e. be given. A function $\hat f\in \mcL^0(\Sigma)$ is *$\mcE$-quasi-continuous* if there exists an $\mcE$-nest $\left(F_n\right)_n$ so that $\hat f\big\lvert_{F_n}$ is continuous for every $n\in {\mathbb N}$. Equivalently, $\hat f$ is $\mcE$-quasi-continuous if and only if it is $\mcE$-q.e. finite and $\hat f^{-1}(U)$ is $\mcE$-quasi-open for every open $U\subset {\mathbb R}$, see e.g. [@FukOshTak11 p. 70]. Whenever $f\in L^0(\mssm)$ has an $\mcE$-quasi-continuous $\mssm$-version, we denote it by $\tilde{f}\in \mcL^0(\Sigma)$.
*Spaces of measures*. We write $\mfM^+_b(\Sigma)$, resp. $\mfM^+_\sigma(\Sigma)$, $\mfM^\pm_b(\Sigma)$, $\mfM^\pm_\sigma(\Sigma)$, for the space of finite, resp. $\sigma$-finite, finite signed, extended $\sigma$-finite signed, measures on $(X,\Sigma)$. A further subscript '$\text{\textsc{r}}$' indicates (sub-)spaces of Radon measures, e.g. $\mfM^+_{b\text{\textsc{r}}}(\Sigma)$. We write $\mfM^\pm_\sigma(\Sigma,\msN_{\mcE})$ for the space of extended $\sigma$-finite signed measures not charging sets in the family $\msN_{\mcE}$ of $\mcE$-polar Borel subsets of $X$. ### General properties Let $\mbbX$ be a topological measure space as in §[2.2](#ss:MeasureTopSp){reference-type="ref" reference="ss:MeasureTopSp"}. When $(\mcE,\mcF)$ is a Dirichlet form on $L^2(\mssm)$, we say that $(\mbbX,\mcE)$ is a *Dirichlet space*. A Dirichlet space $(\mbbX,\mcE)$ is *quasi-regular* if each of the following holds: 1. [\[i:QR:1\]]{#i:QR:1 label="i:QR:1"} there exists an $\mcE$-nest $\left(F_n\right)_n$ consisting of $\tau$-compact sets; 2. [\[i:QR:2\]]{#i:QR:2 label="i:QR:2"} there exists a dense subset of $\mcF$ the elements of which all have $\mcE$-quasi-continuous $\mssm$-versions; 3. [\[i:QR:3\]]{#i:QR:3 label="i:QR:3"} there exists an $\mcE$-polar set $N$ and a countable family $\left(u_n\right)_n$ of functions $u_n\in\mcF$ having $\mcE$-quasi-continuous versions $\tilde{u}_n$ so that $\left(\tilde{u}_n\right)_n$ separates points in $X\setminus N$. Let $(\mbbX,\mcE)$ be a quasi-regular Dirichlet space, $\left(F_n\right)_n$ be an $\mcE$-nest witnessing its quasi-regularity, and set $X_0\mathop{\mathrm{\coloneqq}}\cup_n F_n$, endowed with the trace topology $\tau_0$, $\sigma$-algebra $\Sigma_0$, and the restriction $\mssm_0$ of $\mssm$ to $\Sigma_0$. 
Then, $\mbbX_0$ satisfies [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, and the space $L^p(\mssm)$ may be canonically identified with the space $L^p(\mssm_0)$, $p\in[0,\infty]$, since $X\setminus X_0$ is $\mcE$-polar, hence $\mssm$-negligible. By letting $\mcE_0$ denote the image of $\mcE$ under this identification, $(\mbbX_0,\mcE_0)$ is a quasi-regular Dirichlet space, and $\mcF_0$ is canonically linearly isometrically isomorphic to $\mcF$. See [@MaRoe92 Rmk. IV.3.2(iii)] for the details of this construction. *Remark 4*. When considering a quasi-regular Dirichlet space $(\mbbX,\mcE)$, we may and shall therefore assume, with no loss of generality, that $\mbbX$ satisfies [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. In particular $(X,\tau)$ is separable. A Dirichlet space $(\mbbX,\mcE)$ with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} is - *local* if $\mcE(f,g)=0$ for every $f,g\in\mcF$ with $\supp[f]$, $\supp[g]$ compact, $\supp[f]\cap\supp[g]=\mathop{\mathrm{\varnothing}}$; - *strongly local* if $\mcE(f,g)=0$ for every $f,g\in\mcF$ with $\supp[f]$, $\supp[g]$ compact and $f$ constant on a neighborhood of $\supp[g]$; - *regular* if $\mbbX$ satisfies [\[ass:Polish\]](#ass:Polish){reference-type="ref" reference="ass:Polish"}, and $\mcC_0(\tau)\cap \mcF$ is both dense in $\mcF$ and dense in the space $\mcC_0(\tau)$ of all $\tau$-continuous functions on $X$ vanishing at infinity. *Domains*. Let $(\mbbX,\mcE)$ be a Dirichlet space. We write $\mcF_b\mathop{\mathrm{\coloneqq}}\mcF\cap L^\infty(\mssm)$. The *extended domain* $\mcF_e$ of $(\mcE,\mcF)$ is the space of all functions $f\in L^0(\mssm)$ so that there exists an $\mcE^{1/2}$-fundamental (i.e. Cauchy) sequence $\left(f_n\right)_n\subset \mcF$ with $\mssm$-a.e.-$\lim_{n}f_n=f$. We write $\mcF_{eb}\mathop{\mathrm{\coloneqq}}\mcF_e\cap L^\infty(\mssm)$. 
The bilinear form $\mcE$ on $\mcF$ extends to a (non-relabeled) bilinear form on $\mcF_e$, [@Kuw98 Prop. 3.1]. Furthermore, - $\mcF_b$ is an algebra with respect to the pointwise multiplication, [@BouHir91 Prop. I.2.3.2]; - if $(\mbbX,\mcE)$ is quasi-regular, then $\mcF_b$ is dense in $\mcF$, [@Kuw98 Cor. 2.1]; - if $(\mbbX,\mcE)$ is quasi-regular, then $\mcF$ is separable, [@MaRoe92 Prop. IV.3.3, p. 102]. *Quasi-interior, quasi-closure*. Let $(\mbbX,\mcE)$ be a quasi-regular Dirichlet space. Every $f\in\mcF$ has an *$\mcE$-q.e.-unique* $\mcE$-quasi-continuous $\mssm$-representative, denoted by $\tilde{f}$, [@MaRoe92 Prop. IV.3.3.(iii)]. For $A\subset X$ set $$\begin{aligned} \msU(A)\mathop{\mathrm{\coloneqq}}& \left\{G : G \text{ is an~$\mcE$-quasi-open subset of } A\right\} \,\,\mathrm{,}\;\, \\ \msF(A)\mathop{\mathrm{\coloneqq}}& \left\{F : F \text{ is an~$\mcE$-quasi-closed superset of } A \right\}\,\,\mathrm{.}\end{aligned}$$ By [@Fug71 Thm. 2.7], $\msU(A)$ has an $\mcE$-q.e.-maximal element denoted by $\mathop{\mathrm{int}}_\mcE A$, $\mcE$-quasi-open, and called the $\mcE$-*quasi-interior* of $A$. Analogously, $\msF(A)$ has an $\mcE$-q.e.-minimal element denoted by $\cl_\mcE A$, $\mcE$-quasi-closed, and called the $\mcE$-*quasi-closure* of $A$. *Carré du champ operators*.
Let $(\mbbX,\mcE)$ be a Dirichlet space with $\mbbX$ satisfying [\[ass:Hausdorff\]](#ass:Hausdorff){reference-type="ref" reference="ass:Hausdorff"}, and set $$\begin{aligned} \boldsymbol\Gamma_{f,g}(h)\mathop{\mathrm{\coloneqq}}\mcE(fh,g)+\mcE(gh,f)-\mcE(fg,h)\,\,\mathrm{,}\;\,\qquad \boldsymbol\Gamma_{f}(h)\mathop{\mathrm{\coloneqq}}\boldsymbol\Gamma_{f,f}(h) \,\,\mathrm{,}\;\,\qquad f,g,h\in\mcF_b\,\,\mathrm{.}\end{aligned}$$ We say that $(\mcE,\mcF)$ admits a *carré du champ operator* $\Gamma$ if there exists $\Gamma\colon \mcF_b^{\scriptscriptstyle{\times 2}}\to L^1(\mssm)$ such that $$\boldsymbol\Gamma_{f,g}(h)=2 \int h\, \Gamma(f,g)\mathop{}\!\mathrm{d}\mssm\,\,\mathrm{,}\;\,\qquad f,g,h\in\mcF_b\,\,\mathrm{.}$$ *Energy measures*. Not every quasi-regular strongly local Dirichlet space admits a carré du champ operator. However, we have the following: **Theorem 5** (Cf. [@Kuw98 Thm. 5.2, Lem.s 5.1, 5.2]). *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space. Then, a bilinear form $\mu_{{\,\cdot\,},{\,\cdot\,}}$ is defined on $\mcF_b^{{\scriptscriptstyle{\times 2}}}$ with values in $\mfM^\pm_{b\text{\textsc{r}}}(\msB_{\tau},\msN_{\mcE})$ by $$\begin{aligned} \label{eq:EnergyMeas} 2\int \tilde{h}\mathop{}\!\mathrm{d}\mu_{f,g}= \boldsymbol\Gamma_{f,g}(h) \,\,\mathrm{,}\;\,\qquad f,g,h\in\mcF_b\,\,\mathrm{.}\end{aligned}$$* The bilinear form $\mu_{{\,\cdot\,},{\,\cdot\,}}$ constructed above is called the *energy measure* of $(\mbbX,\mcE)$. When $(\mbbX,\mcE)$ is quasi-regular strongly local and additionally admits a carré du champ operator, then $\mu_{f,g} \ll \mssm$ for every $f,g\in\mcF_b$, in which case $\Gamma(f,g)\mathop{\mathrm{\coloneqq}}\frac{\mathop{}\!\mathrm{d}\mu_{f,g}}{\mathop{}\!\mathrm{d}\mssm}$ is the Radon--Nikodym derivative of $\mu_{f,g}$ w.r.t. $\mssm$. ## Broad local spaces and energy moderance {#ss:BroadLoc} Let $(\mbbX,\mcE)$ be a quasi-regular Dirichlet space.
For any $\mcE$-quasi-open $E\subset X$ set $$\begin{aligned} \label{eq:Xi0} \msG(E)\mathop{\mathrm{\coloneqq}}&\ \left\{ G_\bullet\mathop{\mathrm{\coloneqq}}\left(G_n\right)_n : G_n \text{~$\mcE$-quasi-open,~} G_n\subset G_{n+1} \text{~$\mcE$-q.e.,~} \cup_n G_n=E \;{\mcE}\text{-q.e.}\right\} \,\,\mathrm{.}\end{aligned}$$ When $E=X$ we simply write $\msG$ in place of $\msG(X)$. We say that $G_\bullet\in\msG(E)$ is $\mcE$-moderate if for each $n\in{\mathbb N}_0$ there exists $e_n\in\mcF$ with $e_n\geq 1$ $\mssm$-a.e. on $G_n$, and we set $$\label{eq:Nests} \begin{aligned} \msG_0(E)\mathop{\mathrm{\coloneqq}}&\ \left\{G_\bullet \in\msG(E): G_\bullet \text{~is $\mcE$-moderate}\right\} \,\,\mathrm{,}\;\, \\ \msG_c(E)\mathop{\mathrm{\coloneqq}}&\ \left\{G_\bullet \in\msG_0(E) : \cl_\tau G_n \text{~is $\tau$-compact for all~$n$}\right\}\,\,\mathrm{.} \end{aligned}$$ For $G_\bullet\in\msG_0(E)$, we write $e_\bullet\mathop{\mathrm{\coloneqq}}\left(e_n\right)_n$ for any sequence of functions witnessing the $\mcE$-moderance of $G_\bullet$. When the sequence $e_\bullet$ is relevant, we write as well $(G_\bullet,e_\bullet)\in\msG_0(E)$. As usual, we omit the specification of $E=X$. Since $\mathop{\mathrm{\mathds 1}}\in{{\mcF}^\bullet_{\loc}}$ by [\[eq:E(1)=0\]](#eq:E(1)=0){reference-type="eqref" reference="eq:E(1)=0"}, we have $\msG_0\neq \mathop{\mathrm{\varnothing}}$. Clearly, $\msG_c\subset \msG_0\subsetneq \msG$. **Definition 6** ($\mcE$-moderance, [@LzDSSuz20 Dfn. 2.22]). Let $(\mbbX,\mcE)$ be a quasi-regular Dirichlet space. For a measure $\mu\in\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$ we say that $(G_\bullet,e_\bullet)\in\msG_0$ is *$\mu$-moderated* if $e_\bullet$ is additionally so that $\mu \tilde{e}_n <\infty$ for every $n$. We say that $\mu$ is: - *$\mcE$-moderate* if there exists a $\mu$-moderated $G_\bullet\in\msG_0$; - *absolutely $\mcE$-moderate* if for every $G_\bullet\in\msG_0$ there exists $e_\bullet$ so that $(G_\bullet, e_\bullet)$ is $\mu$-moderated.
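For instance (an elementary observation, immediate from the above definitions rather than taken from [@LzDSSuz20]): if $\mathop{\mathrm{\mathds 1}}\in\mcF$, then every finite $\mu\in\mfM^+_b(\msB_{\tau},\msN_{\mcE})$ is absolutely $\mcE$-moderate. Indeed, for any $G_\bullet\in\msG_0$ the constant choice $e_n\mathop{\mathrm{\coloneqq}}\mathop{\mathrm{\mathds 1}}$, $n\in{\mathbb N}_0$, witnesses the $\mcE$-moderance of $G_\bullet$, and $$\begin{aligned} \mu \tilde{e}_n=\mu X<\infty\,\,\mathrm{,}\;\,\qquad n\in{\mathbb N}_0\,\,\mathrm{,}\;\,\end{aligned}$$ so that $(G_\bullet,e_\bullet)$ is $\mu$-moderated.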
We refer the reader to [@LzDSSuz20 §2.5.2] for the heuristics behind all the above definitions. ### Broad local spaces For $G_\bullet\in\msG(E)$ and $\msA\subset L^0(\mssm)$, we say that $f\in L^0(\mssm_{E})$ is in the *broad local space* ${{\msA}^\bullet_{\loc}}(E,G_\bullet)$ if for every $n$ there exists $f_n\in \msA$ so that $f_n=f$ $\mssm$-a.e. on $G_n$. The *broad local space* ${{\msA}^\bullet_{\loc}}(E)$ of $(\mbbX,\mcE)$ relative to $E$ is the space [@Kuw98 §4, p. 696], $$\begin{aligned} \label{eq:Xi} {{\msA}^\bullet_{\loc}}(E)\mathop{\mathrm{\coloneqq}}\bigcup_{G_\bullet\in \msG(E)} {{\msA}^\bullet_{\loc}}(E,G_\bullet) \,\,\mathrm{.}\end{aligned}$$ The set ${{\msA}^\bullet_{\loc}}(E,G_\bullet)$ depends on $G_\bullet$. Again, we omit the specification of $E=X$. **Proposition 7** (Extension of energy measure, [@LzDSSuz20 Prop. 2.12]). *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space. Then, the quadratic form $\mu_{{\,\cdot\,}}\colon \mcF\rightarrow\mfM^+_{b\text{\textsc{r}}}(\msB_{\tau},\msN_{\mcE})$ associated to the bilinear form $\mu_{{\,\cdot\,},{\,\cdot\,}}$ in [\[eq:EnergyMeas\]](#eq:EnergyMeas){reference-type="eqref" reference="eq:EnergyMeas"} uniquely extends to a non-relabeled form on ${{\mcF}^\bullet_{\loc}}$ with values in $\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$, satisfying:* 1. *[\[i:p:PropertiesLoc:00\]]{#i:p:PropertiesLoc:00 label="i:p:PropertiesLoc:00"} the representation property $$\begin{aligned} \label{eq:RepresentationLoc} \mcE(f,g)=\tfrac{1}{2}\mu_{f,g} X \,\,\mathrm{,}\;\,\qquad f,g\in\mcF_e\,\,\mathrm{;}\;\,\end{aligned}$$* 2. 
*[\[i:p:PropertiesLoc:1\]]{#i:p:PropertiesLoc:1 label="i:p:PropertiesLoc:1"} the truncation property $$\label{eq:TruncationLoc} \begin{aligned} f\wedge g \in {{\mcF}^\bullet_{\loc}} \quad\text{and}\quad \mu_{f\wedge g}=\mathop{\mathrm{\mathds 1}}_{\{\tilde{f}\leq \tilde{g}\}}\mu_{f}+\mathop{\mathrm{\mathds 1}}_{\{\tilde{f}> \tilde{g}\}} \mu_{g}\,\,\mathrm{,}\;\,\qquad f,g\in{{\mcF}^\bullet_{\loc}} \,\,\mathrm{;}\;\, \\ f\vee g \in {{\mcF}^\bullet_{\loc}} \quad\text{and}\quad \mu_{f\vee g}=\mathop{\mathrm{\mathds 1}}_{\{\tilde{f}\leq \tilde{g}\}}\mu_{g}+\mathop{\mathrm{\mathds 1}}_{\{\tilde{f}> \tilde{g}\}} \mu_{f}\,\,\mathrm{,}\;\,\qquad f,g\in{{\mcF}^\bullet_{\loc}} \,\,\mathrm{;}\;\, \end{aligned}$$* 3. *[\[i:p:PropertiesLoc:2\]]{#i:p:PropertiesLoc:2 label="i:p:PropertiesLoc:2"} the chain rule $$\label{eq:ChainRuleLoc} \varphi\circ f \in {{\mcF}^\bullet_{\loc}} \quad\text{and}\quad \mu_{\varphi\circ f}=(\varphi'\circ \tilde{f})^2 \cdot \mu_{f}\,\,\mathrm{,}\;\,\qquad f\in{{\mcF}^\bullet_{\loc}} \,\,\mathrm{,}\;\,\quad \begin{aligned}&\varphi\in \mcC^1({\mathbb R})\,\,\mathrm{,}\;\,\\ &\varphi(0)=0\end{aligned} \,\,\mathrm{;}\;\,$$* 4. *[\[i:p:PropertiesLoc:5\]]{#i:p:PropertiesLoc:5 label="i:p:PropertiesLoc:5"} the strong locality property $$\begin{aligned} \label{eq:SLoc:2} \mathop{\mathrm{\mathds 1}}_G \mu_{f}=\mathop{\mathrm{\mathds 1}}_G \mu_{g}\,\,\mathrm{,}\;\,\qquad G \text{~$\mcE$-quasi-open}\,\,\mathrm{,}\;\,f,g\in{{\mcF}^\bullet_{\loc}}\,\,\mathrm{,}\;\,f\equiv g \text{~$\mssm$-a.e.\ on $G$}\,\,\mathrm{.}\end{aligned}$$* *Furthermore, ${{\mcF}^\bullet_{\loc}}$ is an algebra for the pointwise multiplication, and $$\begin{aligned} \label{eq:E(1)=0} \mathop{\mathrm{\mathds 1}}\in{{\mcF}^\bullet_{\loc}}\,\,\mathrm{,}\;\,\qquad \mu_{\mathop{\mathrm{\mathds 1}}}\equiv 0 \,\,\mathrm{.}\end{aligned}$$* **Corollary 8** (Extension of carré du champ operator). 
*Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space admitting a carré du champ operator $(\Gamma,\mcF_b)$. Then, the bilinear form $\Gamma\colon \mcF_b^{\scriptscriptstyle{\times 2}}\to L^1(\mssm)$ extends to a non-relabeled bilinear form $\Gamma\colon{{\mcF}^\bullet_{\loc}}^{\scriptscriptstyle{\times 2}}\to L^0(\mssm)$ representing the quadratic form $\mu_{{\,\cdot\,}}\colon {{\mcF}^\bullet_{\loc}}\to \mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$ in Proposition [Proposition 7](#p:PropertiesLoc){reference-type="ref" reference="p:PropertiesLoc"} in the sense that $\tfrac{\mathop{}\!\mathrm{d}\mu_{f}}{\mathop{}\!\mathrm{d}\mssm}=\Gamma(f,f)$ for every $f\in{{\mcF}^\bullet_{\loc}}$.* *Furthermore, the form $\Gamma\colon{{\mcF}^\bullet_{\loc}}^{\scriptscriptstyle{\times 2}}\to L^0(\mssm)$ satisfies the strong locality property $$\label{eq:c:BH:0} \Gamma(f,h)=\Gamma(g,h) \quad \mssm\text{-a.e.} \quad \text{on} \quad \left\{f\equiv g\right\} \,\,\mathrm{,}\;\,\qquad f,g,h\in{{\mcF}^\bullet_{\loc}}\,\,\mathrm{.}$$* **Proof.* Since ${{\mcF}^\bullet_{\loc}}={{(\mcF_b)}^\bullet_{\loc}}$, e.g. [@LzDSSuz20 Eqn. (2.15)], the extension is an immediate consequence of the definition of $\Gamma\colon\mcF_b\to L^1(\mssm)$ together with the strong locality property of $\mu_{{\,\cdot\,}}$ in Proposition [Proposition 7](#p:PropertiesLoc){reference-type="ref" reference="p:PropertiesLoc"}[\[i:p:PropertiesLoc:5\]](#i:p:PropertiesLoc:5){reference-type="ref" reference="i:p:PropertiesLoc:5"}. Noting that our definition of broad local space ${{\mcF}^\bullet_{\loc}}$ is more restrictive than the definition of local space in [@BouHir91 Dfn. I.7.1.3], the equality [\[eq:c:BH:0\]](#eq:c:BH:0){reference-type="eqref" reference="eq:c:BH:0"} is a standard consequence of e.g. [@BouHir91 Prop. I.7.1.4]. ◻* ### Local domains of bounded-energy and intrinsic distances {#sss:LocDom} Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $\mu\in\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$.
For $\mcE$-quasi-open $E\subset X$ and any $G_\bullet\in\msG_0(E)$, set $$\begin{aligned} \mbbL^{\mu}_{\loc}(E,G_\bullet)\mathop{\mathrm{\coloneqq}}& \left\{f\in {{\mcF}^\bullet_{\loc}}(E,G_\bullet): \mu_{f}\leq \mu\right\}\,\,\mathrm{,}\;\,\qquad \mbbL^{\mu}_{\loc}(E)\mathop{\mathrm{\coloneqq}}\big\{f\in {{\mcF}^\bullet_{\loc}}(E): \mu_{f}\leq \mu\big\}\,\,\mathrm{,}\;\, \\ \mbbL^{\mu}(E)\mathop{\mathrm{\coloneqq}}&\ \mbbL^{\mu}_{\loc}(E)\cap \mcF\,\,\mathrm{,}\;\,\quad \ \mbbL^{\mu,\tau}_{\loc}(E)\mathop{\mathrm{\coloneqq}}\mbbL^{\mu}_{\loc}(E)\cap \mathop{\mathrm{\mcC}}(E,\tau) \,\,\mathrm{,}\;\,\quad \ \mbbL^{\mu,\tau}(E)\mathop{\mathrm{\coloneqq}}\mbbL^{\mu,\tau}_{\loc}(E)\cap \mcF\,\,\mathrm{.}\end{aligned}$$ As usual, we omit $E$ from the notation whenever $E=X$. For $G_\bullet\in\msG_0(E)$ we additionally denote by $$\begin{aligned} \mbbL^{\mu}_{\loc,b}(E,G_\bullet)\mathop{\mathrm{\coloneqq}}\mbbL^{\mu}_{\loc}(E,G_\bullet)\cap L^\infty(\mssm)\end{aligned}$$ the space of $\mssm$-essentially uniformly bounded functions in $\mbbL^{\mu}_{\loc}(E,G_\bullet)$. Let the analogous definitions for $\mbbL^{\mu}_{\loc,b}(E)$, $\mbbL^{\mu}_b(E)$, $\mbbL^{\mu,\tau}_{\loc,b}(E)$, and $\mbbL^{\mu,\tau}_b(E)$ be given. **Lemma 9** ([@LzDSSuz20 Prop. 2.26]). *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $\mu\in\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$ be absolutely $\mcE$-moderate. Then, for every $G_\bullet, G_\bullet'\in\msG_0$, $$\mbbL^{\mu}_{\loc,b}(G_\bullet) = \mbbL^{\mu}_{\loc,b}(G_\bullet')\,\,\mathrm{.}$$ In particular, if $f\in\mbbL^{\mu}_{\loc,b}$, then there exists $G_\bullet\in \msG_c$ additionally so that $\cl_\tau G_k\subset G_{k+1}$ for all $k\in{\mathbb N}$, and $f_\bullet\subset \mcF$, such that $(G_\bullet,f_\bullet)$ witnesses that $f\in\mbbL^{\mu}_{\loc,b}$.* **Proof.* The first assertion is shown in [@LzDSSuz20 Prop. 2.26]. The second follows from [@Kuw98 Lem. 3.5(iii)]. ◻* **Lemma 10**. 
*Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $\mu\in\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$ be absolutely $\mcE$-moderate. If $\mathop{\mathrm{\mathds 1}}\in\mcF$, then $$\mbbL^{\mu}_{\loc,b}=\mbbL^{\mu}_b\,\,\mathrm{.}$$* **Proof.* Since $\mathop{\mathrm{\mathds 1}}\in\mcF$, the constant sequence $X_\bullet\mathop{\mathrm{\coloneqq}}\left(X\right)_k$ satisfies $X_\bullet \in\msG_0$. Since $\mu$ is absolutely $\mcE$-moderate, $X_\bullet$ is $\mu$-moderated, and we have that $\mbbL^{\mu}_{\loc,b}=\mbbL^{\mu}_{\loc,b}(X_\bullet)$ by Lemma [Lemma 9](#l:2.26){reference-type="ref" reference="l:2.26"}. On the other hand, it follows from the definition that $\mbbL^{\mu}_{\loc,b}(X_\bullet)=\mbbL^{\mu}_b$. ◻* ### Intrinsic distances We recall a definition of generalized intrinsic distance introduced in [@LzDSSuz20]. For more information on intrinsic distances, we refer the reader to [@LzDSSuz20 §2.6]. **Definition 11** (Intrinsic distance). Let $\mu\in\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$. The *intrinsic distance generated by $\mu$* is the extended pseudo-distance $\mssd_\mu\colon X^{\times 2}\rightarrow[0,\infty]$ defined as $$\begin{aligned} \label{eq:IntrinsicD} \mssd_\mu(x,y)\mathop{\mathrm{\coloneqq}}\sup\big\{f(x)-f(y) : f\in \mbbL^{\mu,\tau}_{\loc,b}\big\} \,\,\mathrm{.}\end{aligned}$$ Note that $\mssd_\mu$ is always $\tau^{\scriptscriptstyle{\times 2}}$-l.s.c., hence $\msB_{\tau^{\scriptscriptstyle{\times 2}}}$- and $\Sigma^{{\scriptscriptstyle{\otimes 2}}}$-measurable, for it is the supremum of a family of $\tau^{\scriptscriptstyle{\times 2}}$-continuous functions. 
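As a classical sanity check (sketched here only for orientation, and not part of [@LzDSSuz20]): take $X\mathop{\mathrm{\coloneqq}}{\mathbb R}^n$ with the Euclidean topology $\tau$, the Lebesgue measure $\mssm$, and the standard Dirichlet energy $\mcE(f)\mathop{\mathrm{\coloneqq}}\tfrac{1}{2}\int \left\lvert\nabla f\right\rvert^2\mathop{}\!\mathrm{d}\mssm$, for which $\mu_f=\left\lvert\nabla f\right\rvert^2\mssm$. For $\mu\mathop{\mathrm{\coloneqq}}\mssm$, the space $\mbbL^{\mssm,\tau}_{\loc,b}$ consists (up to the usual identifications) of the bounded continuous $f$ with $\left\lvert\nabla f\right\rvert\leq 1$ $\mssm$-a.e., and [\[eq:IntrinsicD\]](#eq:IntrinsicD){reference-type="eqref" reference="eq:IntrinsicD"} becomes $$\begin{aligned} \mssd_\mssm(x,y)=\sup\big\{f(x)-f(y): \left\lvert\nabla f\right\rvert\leq 1 \ \mssm\text{-a.e.}\big\}=\left\lvert x-y\right\rvert\,\,\mathrm{,}\;\,\end{aligned}$$ recovering the Euclidean distance: the upper bound holds since every such $f$ is $1$-Lipschitz, the lower bound follows by testing with bounded truncations of $z\mapsto \left\lvert z-y\right\rvert$.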
Furthermore, $\mssd_\mu$ is $\tau$-admissible by definition, as witnessed by the bounded uniformity $$\begin{aligned} \UP_\mu\mathop{\mathrm{\coloneqq}}\left\{\mssd_f(x,y)\mathop{\mathrm{\coloneqq}}\left\lvert f(x)-f(y)\right\rvert: f\in\mbbL^{\mu,\tau}_{\loc,b}\right\}\,\,\mathrm{.}\end{aligned}$$ In the case when $(\mbbX,\mcE)$ is a regular strongly local Dirichlet space and $\mu\mathop{\mathrm{\coloneqq}}\mssm$, [\[eq:IntrinsicD\]](#eq:IntrinsicD){reference-type="eqref" reference="eq:IntrinsicD"} coincides with the standard definition of intrinsic distance, see [@LzDSSuz20 Prop. 2.31]. ### Localizability Let us recall (a version of) the definition of *$\mu$-uniform $\tau$-localizability* given in [@LzDSSuz20 §3.4], also cf. [@LzDSSuz20 §2.5.2], which amounts to the existence of good Sobolev cut-off functions. **Definition 12**. Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $\mu\in \mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$. We say that $(\mbbX,\mcE,\mu)$ is *$\mu$-uniformly latticially*, resp. *algebraically*, *$\tau$-localizable*, in short: $(\mbbX,\mcE,\mu)$ satisfies $(\mathsf{Loc}_{\mu,\tau})$, resp. $(\mathsf{Loc}^{\times}_{\mu,\tau})$, if $\mu$ is $\mcE$-moderate, and there exists a latticial approximation to the identity $\left(\theta_n\right)_n$, resp. an algebraic approximation to the identity $\left(\psi_n\right)_n$, uniformly bounded by $\mu$ in energy measure, viz. $$\begin{aligned} \tag*{$(\mathsf{Loc}_{\mu,\tau})$}\label{ass:Loc} && 0 \leq \theta_n\leq \theta_{n+1} \nearrow_n \infty\,\,\mathrm{,}\;\,\qquad \theta_n \in \mbbL^{\mu,\tau}_b \quad \big({\subset \mcC_b(\tau)}\big)\,\,\mathrm{.} \\ \tag*{$(\mathsf{Loc}^{\times}_{\mu,\tau})$}\label{ass:ALoc} \text{resp.}&& 0 \leq \psi_n\leq \psi_{n+1} \nearrow_n 1\,\,\mathrm{,}\;\,\qquad \psi_n \in \mbbL^{\mu,\tau}_b \quad \big({\subset \mcC_b(\tau)}\big)\,\,\mathrm{.}\end{aligned}$$ *Remark 13* ([@LzDSSuz20 Rmk. 3.28, 3.29]). Let us note the following.
We may additionally assume with no loss of generality that $\mathop{\mathrm{int}}_\tau\left\{\theta_n=n\right\}\neq \mathop{\mathrm{\varnothing}}$ and that $\theta_n\leq n$ for every $n\in{\mathbb N}$. If $\mathop{\mathrm{\mathds 1}}\in\mcF$, then [\[ass:ALoc\]](#ass:ALoc){reference-type="ref" reference="ass:ALoc"} is trivially satisfied with $\psi_n\mathop{\mathrm{\coloneqq}}\mathop{\mathrm{\mathds 1}}$ for all $n$, and the same holds for [\[ass:Loc\]](#ass:Loc){reference-type="ref" reference="ass:Loc"} for the functions $\theta_n$ in [@LzDSSuz20 Rmk. 3.29]. [\[ass:Loc\]](#ass:Loc){reference-type="ref" reference="ass:Loc"} implies [\[ass:ALoc\]](#ass:ALoc){reference-type="ref" reference="ass:ALoc"} by setting $\psi_n\mathop{\mathrm{\coloneqq}}\theta_n\wedge 1$. ## Rademacher and Sobolev-to-Lipschitz properties {#sss:RadStoL} Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, $\mu\in \mfM^+_\sigma(\msB_{\tau})$, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance. **Definition 14** (Rademacher and Sobolev-to-Lipschitz properties [@LzDSSuz20]).
We say that $(\mbbX,\mcE,\mssd,\mu)$ has the: - *Rademacher property* (*for* $\Sigma$) if, whenever $\hat f\in \mathrm{Lip}^1(\mssd,\Sigma)$, then $f\in \mbbL^{\mu}_{\loc}$; - *distance-Rademacher property* if $\mssd\leq \mssd_\mu$; - *Sobolev--to--continuous-Lipschitz property* if each $f\in\mbbL^{\mu}_{\loc}$ has an $\mssm$-representative $\hat f\in\Lip^1(\mssd,\tau)$; - *Sobolev--to--Lipschitz property* (*for* $\Sigma$) if each $f\in\mbbL^{\mu}_{\loc}$ has an $\mssm$-representative $\hat f\in\Lip^1(\mssd,\Sigma)$; - *$\mssd$-continuous-Sobolev--to--Lipschitz property* (*for* $\Sigma$) if each $f\in \mbbL^{\mu}_{\loc}$ having a $\mssd$-continuous $\Sigma$-measurable representative $\hat f$ also has a representative $\tilde{f}\in\Lip^1(\mssd,\Sigma^\mssm)$ (possibly, $\tilde{f}\neq \hat f$); - *continuous-Sobolev--to--Lipschitz property* if each $f\in \mbbL^{\mu,\tau}_{\loc}$ satisfies $f\in\Lip^1(\mssd,\tau)$; - *distance Sobolev-to-Lipschitz property* if $\mssd\geq \mssd_\mu$. As noted in [@LzDSSuz20], in all the above definitions we may equivalently additionally require that $\hat f$ be uniformly bounded. In the following, we shall make use of this fact without further mention. We refer the reader to [@LzDSSuz20 Rmk.s 3.2, 4.3] for comments on the terminology and to [@LzDSSuz20 Lem. 3.6, Prop. 4.2] for the interplay of all such properties. 
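To fix ideas (a classical sketch, not part of the abstract framework above): on $X\mathop{\mathrm{\coloneqq}}{\mathbb R}^n$ with the Lebesgue measure $\mssm$, the Euclidean distance $\mssd$, and the standard Dirichlet energy, for which $\mu_f=\left\lvert\nabla f\right\rvert^2\mssm$, the Rademacher property for $\mu\mathop{\mathrm{\coloneqq}}\mssm$ is precisely Rademacher's theorem: every $\hat f\in\Lip^1(\mssd)$ is $\mssm$-a.e. differentiable with $$\begin{aligned} \left\lvert\nabla \hat f\right\rvert\leq 1\quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\end{aligned}$$ whence $\mu_f\leq \mssm$, i.e. $f\in\mbbL^{\mssm}_{\loc}$. Conversely, the continuous-Sobolev--to--Lipschitz property holds because a continuous $f$ with $\left\lvert\nabla f\right\rvert\leq 1$ $\mssm$-a.e. is $1$-Lipschitz.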
For a quasi-regular strongly local Dirichlet space $(\mbbX,\mcE)$, they reduce to the following scheme: $$\label{eq:EquivalenceRadStoL} \begin{tikzcd} & (\mathsf{Rad}_{\mssd,\mu}) \arrow[r, Rightarrow] & ({\mssd}\textrm{-}\mathsf{Rad}_{\mu}) & & \text{\cite[Lem.~3.6]{LzDSSuz20}}\,\,\mathrm{,}\;\, \\ (\mathsf{ScL}_{\mu,\tau,\mssd}) \arrow[r, Rightarrow] & (\mathsf{SL}_{\mu,\mssd}) \arrow[r, Rightarrow] & (\mathsf{cSL}_{\mu,\tau,\mssd}) \arrow[r, Leftrightarrow] & ({\mssd}\textrm{-}\mathsf{SL}_{\mu}) & \text{\cite[Prop.~4.2]{LzDSSuz20}}\,\,\mathrm{.} \end{tikzcd}$$ *Remark 15* (About $({\mssd}\textrm{-}\mathsf{cSL}_{\mu,\mssd})$). Let us note that, whereas both $({\mssd}\textrm{-}\mathsf{cSL}_{\mu,\mssd})$ and $(\mathsf{cSL}_{\mu,\tau,\mssd})$ are implied by $(\mathsf{SL}_{\mu,\mssd})$ and coincide on metric spaces, they do *not* --- at least in principle --- imply each other on extended metric spaces. In particular, while the $\mssd$-Lipschitz representative in $(\mathsf{cSL}_{\mu,\tau,\mssd})$ is taken to coincide with the given $\tau$-continuous one, it is important in the definition of $({\mssd}\textrm{-}\mathsf{cSL}_{\mu,\mssd})$ to allow for the $\mssd$-Lipschitz representative to be different from the $\mssd$-continuous one, and for the former to be only $\Sigma^\mssm$-measurable, rather than $\Sigma$-measurable. **Lemma 16** ([@LzDSSuz20 Prop. 3.7]). *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance on $X$, and $\mu\in \mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$ be $\mcE$-moderate. Further assume that $(\mbbX,\mcE)$ satisfies $({\mssd}\textrm{-}\mathsf{Rad}_{\mu})$. If $(X,\tau,\mssd)$ is a complete extended metric-topological space (Dfn.
[Definition 3](#d:AES){reference-type="ref" reference="d:AES"}), then so is $(X,\tau,\mssd_\mu)$.* ### Metric completions Let $(\mbbX,\mcE)$ be a Dirichlet space, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\rightarrow[0,\infty]$ be an extended distance. Further let $(X^\iota,\mssd^\iota)$ be the abstract completion of $(X,\mssd)$ and denote by $\iota$ the completion embedding $\iota\colon X\rightarrow X^\iota$. If $\iota(X)$ is a Borel subset of $X^\iota$, then $\iota$ is $\msB_{\tau_{\mssd}}/\msB_{\tau_{\mssd^\iota}}$-measurable, and the image form $(\mcE^\iota,\mcF^\iota)$ of $(\mcE,\mcF)$ via $\iota$ is well-defined on the image space $\mbbX^\iota$. When $(\mbbX^\iota,\mcE^\iota)$ is a quasi-regular strongly local Dirichlet space, we denote its intrinsic distance by $\mssd_{\mssm^\iota}$. *Remark 17*. Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended distance on $X$. In order to discuss the interplay between $(\mbbX,\mcE)$ and $\mssd$ --- precisely: the relation between $\mssd$ and $\mssd_\mssm$ ---, it is *natural* to assume that $(X,\mssd)$ be complete. In particular, it is usually not enough to resort to the completion of $(X,\mssd)$, as we now show. Indeed, let $A\mathop{\mathrm{\coloneqq}}X^\iota\setminus\iota(X)$ be the complement of the $\iota$-image of $X$ inside its completion. The set $A$ is in general not $\mcE^\iota$-polar, that is, $(\mbbX,\mcE)$ and $(\mbbX^\iota,\mcE^\iota)$ are in general not quasi-homeomorphic. Furthermore, even in the case when $(\mbbX,\mcE)$ and $(\mbbX^\iota,\mcE^\iota)$ are quasi-homeomorphic, the corresponding intrinsic distances may not be related in any way.
That is, the metric completion $(X^\iota,\mssd_\mssm^\iota)$ of $(X,\mssd_\mssm)$ may differ (even as a set) from the (extended) metric space $(X^\iota,\mssd_{\mssm^\iota})$, where now we denote by $X^\iota$ the $\mssd$-completion of $(X,\mssd)$ and by $\mssd_{\mssm^\iota}$ the intrinsic distance of the Dirichlet form $(\mcE^\iota,\mcF^\iota)$. For examples and counterexamples to the above statements, and for their relationship with the Sobolev-to-Lipschitz property, we refer the reader to [@LzDSSuz20 Ex. 4.7, §4.1]. ## Cheeger energies {#sss:CheegerE} The content of this section is mostly taken from the detailed discussion of extended metric measure spaces put forward by L. Ambrosio, N. Gigli, and G. Savaré in [@AmbGigSav14], by Ambrosio, M. Erbar, and Savaré in [@AmbErbSav16], and from the one --- more general still --- in Savaré's monograph [@Sav19]. **Definition 18** (Extended metric-topological measure space, [@AmbErbSav16 Dfn. 4.7]). By an *extended metric-topological measure space* $(X,\tau,\mssd,\mssm)$ we mean an extended metric-topological space (Dfn. [Definition 3](#d:AES){reference-type="ref" reference="d:AES"}) together with a Radon measure $\mssm$ restricted to the Borel $\sigma$-algebra $\msB_{\tau}$. We use the expression *extended metric-topological probability space* to indicate that $\mssm$ is additionally a probability measure. Let $(X,\tau,\mssd,\mssm)$ be an extended metric-topological measure space. *Minimal relaxed slopes*. A $\msB_{\tau}$-measurable function $G\colon X\to [0,\infty]$ is a ($\mssm$-)*relaxed slope* of $f\in L^2(\mssm)$ if there exist $\left(f_n\right)_n\subset \Lip(\mssd,\msB_{\tau})$ so that - $L^2(\mssm)$-$\lim_{n}f_n=f$ and $\left\lvert\mathrm{D}f_n\right\rvert_{}$ $L^2(\mssm)$-weakly converges to $\tilde G\in L^2(\mssm)$; - $\tilde G\leq G$ $\mssm$-a.e..
We say that $G$ is the ($\mssm$-)*minimal* ($\mssm$-)*relaxed slope* of $f\in L^2(\mssm)$, denoted by $\left\lvert\mathrm{D}f\right\rvert_{*}$, if its $L^2(\mssm)$-norm is minimal among those of all relaxed slopes. The notion is well-posed, and $\left\lvert\mathrm{D}f\right\rvert_{*}$ is in fact $\mssm$-a.e. minimal as well, see [@AmbGigSav14 §4.1]. *Minimal weak upper gradients*. A Borel probability measure ${\boldsymbol \pi}$ on $\AC^2(I;X)$ is a *test plan of bounded compression* if there exists a constant $C=C_{\boldsymbol \pi}>0$ such that $$\begin{aligned} (\ev_t)_\sharp{\boldsymbol \pi}\leq C\mssm \,\,\mathrm{,}\;\,\qquad \ev_t\colon \AC^2(I;X)\to X\,\,\mathrm{,}\;\,\ev_t\colon x_{\,\cdot\,}\mapsto x_t\,\,\mathrm{.}\end{aligned}$$ A Borel subset $A\subset \AC^2(I;X)$ is called *negligible* if ${\boldsymbol \pi}(A)=0$ for every test plan of bounded compression. A property of $\AC^2(I;X)$-curves is said to hold *for a.e.-curve* if it holds for every curve in a co-negligible set. A $\msB_{\tau}$-measurable function $G\colon X\to [0,\infty]$ is a ($\mssm$-)*weak upper gradient* of $\hat f\in \mcL^0(\mssm)$ if $$\begin{aligned} \big\lvert\hat f(x_1)-\hat f(x_0)\big\rvert\leq \int_0^1 G(x_r)\, \left\lvert\dot x\right\rvert_r\mathop{}\!\mathrm{d}r<\infty \quad \text{for a.e.\ curve } \left(x_t\right)_t\in \AC^2(I;X)\,\,\mathrm{.}\end{aligned}$$ We say that $G$ is the ($\mssm$-)*minimal* ($\mssm$-)*weak upper gradient* of $f\in L^2(\mssm)$, denoted by $\left\lvert\mathrm{D}f\right\rvert_{w}$, if it is $\mssm$-a.e.-minimal among the weak upper gradients of $\hat f$ for every representative $\hat f$ of $f$. See e.g. [@AmbGigSav14 Dfn. 2.12] for the well-posedness of this notion, independently of the representatives of $f$. *Asymptotic Lipschitz constants and asymptotic slopes*.
For $f\in \mathrm{Lip}_b(\mssd,\tau)$ set $$\begin{aligned} \mathrm{Lip}^a_{\mssd}[f](x,r)\mathop{\mathrm{\coloneqq}}\sup_{\substack{y,z\in B^\mssd_r(x)\\ \mssd(y,z)>0}} \frac{\left\lvert f(z)-f(y)\right\rvert}{\mssd(z,y)}\,\,\mathrm{,}\;\,\qquad x\in X\,\,\mathrm{,}\;\,r>0 \,\,\mathrm{.}\end{aligned}$$ The *asymptotic Lipschitz constant* $\mathrm{Lip}^a_{\mssd}[f]\colon X\rightarrow[0,\infty]$ is defined as $$\begin{aligned} \mathrm{Lip}^a_{\mssd}[f](x)\mathop{\mathrm{\coloneqq}}\lim_{r\downarrow 0} \mathrm{Lip}^a_{\mssd}[f](x,r) \,\,\mathrm{,}\;\,\end{aligned}$$ with the usual convention that $\mathrm{Lip}^a_{\mssd}[f](x)=0$ whenever $x$ is a $\mssd$-isolated point in $X$. We drop the subscript $\mssd$ whenever apparent from context. Note that $\mathrm{Lip}^a_{}[f]$ is $\mssd$-u.s.c. by construction, thus if $\mssd$ additionally metrizes $\tau$, then $\mathrm{Lip}^a_{}[f]$ is $\tau$-u.s.c. as well, and therefore it is $\msB_{\tau}$-measurable. *Cheeger energies*. Each extended metric-topological measure space $(X,\tau,\mssd,\mssm)$ is naturally endowed with a convex local energy functional, the l.s.c. $L^2(\mssm)$-relaxation of the natural energy on Lipschitz functions, called the *Cheeger energy* of the space; e.g. [@AmbGigSav14; @Sav19]. Several --- *a priori* inequivalent --- definitions of Cheeger energy are possible. We collect here three of them, referring to [@Sav19] for additional ones. **Definition 19** ([@AmbGigSav14 Thm. 4.5]). The $\mathsf{Ch}_{*}$-*Cheeger energy* of $f\in L^2(\mssm)$ is defined as $$\begin{gathered} \mathsf{Ch}_{*,\mssd,\mssm}(f)\mathop{\mathrm{\coloneqq}}\int \left\lvert\mathrm{D}f\right\rvert_{*}^2 \mathop{}\!\mathrm{d}\mssm\,\,\mathrm{,}\;\,\qquad \msD({\mathsf{Ch}_{*,\mssd,\mssm}})\mathop{\mathrm{\coloneqq}}\left\{f\in L^2(\mssm) : \mathsf{Ch}_{*,\mssd,\mssm}(f)<\infty\right\} \,\,\mathrm{.}\end{gathered}$$ **Definition 20** ([@AmbGigSav14 Rmk. 5.12]). 
The $\mathsf{Ch}_{w}$-*Cheeger energy* of $f\in L^2(\mssm)$ is defined as $$\begin{gathered} \mathsf{Ch}_{w,\mssd,\mssm}(f)\mathop{\mathrm{\coloneqq}}\int \left\lvert\mathrm{D}f\right\rvert_{w}^2 \mathop{}\!\mathrm{d}\mssm\,\,\mathrm{,}\;\,\qquad \msD({\mathsf{Ch}_{w,\mssd,\mssm}})\mathop{\mathrm{\coloneqq}}\left\{f\in L^2(\mssm) : \mathsf{Ch}_{w,\mssd,\mssm}(f)<\infty\right\} \,\,\mathrm{.}\end{gathered}$$ **Definition 21** ([@AmbErbSav16 Dfn. 6.1]). The $\mathsf{Ch}_{a}$-*Cheeger energy* of $f\in L^2(\mssm)$ is defined as $$\begin{gathered} \mathsf{Ch}_{a,\mssd,\mssm}(f)\mathop{\mathrm{\coloneqq}}\inf\liminf_{n }\int g_n^2 \mathop{}\!\mathrm{d}\mssm\,\,\mathrm{,}\;\,\quad \msD({\mathsf{Ch}_{a,\mssd,\mssm}})\mathop{\mathrm{\coloneqq}}\left\{f\in L^2(\mssm) : \mathsf{Ch}_{a,\mssd,\mssm}(f)<\infty\right\} \,\,\mathrm{,}\;\,\end{gathered}$$ where the infimum is taken over all sequences $\left(f_n\right)_n \subset \mathrm{Lip}_b(\mssd,\tau)$, $L^2(\mssm)$-strongly converging to $f$, and all sequences $\left(g_n\right)_n$ of $\msB_{\tau}^\mssm$-measurable functions satisfying $g_n\geq \mathrm{Lip}^a_{}[f_n]$ $\mssm$-a.e.. In all cases, we shall omit the specification of either $\mssd$, $\mssm$ or both, whenever not relevant or apparent from context. We refer the reader to [@AmbGigSav14 §4] for a thorough treatment of $\mathsf{Ch}_{*}$, and to [@AmbErbSav16 §6] for a thorough treatment of $\mathsf{Ch}_{a}$ in the setting of extended metric-topological probability spaces. In the following, in order to refer to results in the literature, we shall need to make use of all the above definitions on some extended metric-topological *probability* space. To this end, we first show that they coincide on every such space. **Proposition 22**. *Let $(X,\tau,\mssd,\mssm)$ be an extended metric-topological *probability* space. 
Then, $$\mathsf{Ch}_{*,\mssd,\mssm}=\mathsf{Ch}_{a,\mssd,\mssm}=\mathsf{Ch}_{w,\mssd,\mssm}\,\,\mathrm{,}\;\,$$ and each of these functionals is densely defined on $L^2(\mssm)$.* **Proof.* Since $\mssm X<\infty$, the domain of $\mathsf{Ch}_{*,\mssd,\mssm}$ contains $\mathrm{Lip}_b(\mssd,\msB_{\tau})$, and the latter is dense in $L^2(\mssm)$ by e.g. [@AmbGigSav14 Prop. 4.1]. Let us show the identification. Firstly, note that our Definition [Definition 21](#d:Cheeger){reference-type="ref" reference="d:Cheeger"} (i.e. [@AmbErbSav16 Dfn. 6.1]) differs from [@Sav19 Dfn. 5.1] --- as do our definition of the asymptotic Lipschitz constant (again after [@AmbErbSav16]) and the one in [@Sav19]. Thus, we need to show that our definition of $\mathsf{Ch}_{a}$ coincides with the one of $\mathsf{CE}_{2}$ in [@Sav19 Dfn. 5.1]. Once this identity is established, the assertion will be a consequence of the identification of both $\mathsf{Ch}_{*}$ and $\mathsf{CE}_{2}$ with $\mathsf{Ch}_{w}$.* *The identification of $\mathsf{CE}_{2}$ with $\mathsf{Ch}_{w}$ is shown in [@Sav19 Thm. 11.7] for the choice $\msA\mathop{\mathrm{\coloneqq}}\mathrm{Lip}_b(\mssd,\tau)$. The identification of $\mathsf{Ch}_{*}$ with $\mathsf{Ch}_{w}$ is shown in [@AmbGigSav14 Thm. 6.2]. The assumption in Equation (4.2) there is trivially satisfied since $\mssm X=1$. Thus, the proof is concluded if we show that $$\label{eq:p:Cheeger:1} \mathsf{Ch}_{w}\leq \mathsf{Ch}_{a}\leq \mathsf{CE}_{2}\,\,\mathrm{.}$$* *Since $\tau_\mssd$ is finer than $\tau$, for each $x\in X$ and for each neighborhood $U$ of $x$, there exists $r>0$ so that $B^\mssd_r(x)\subset U$. 
Thus, for any $\msB_{\tau}$-measurable $f\colon X\rightarrow\overline{{\mathbb R}}$, $$\begin{aligned} \sup_{\substack{y,z\in B^\mssd_r(x)\\ \mssd(y,z)>0}} \frac{\left\lvert f(z)-f(y)\right\rvert}{\mssd(z,y)} \leq \sup_{\substack{y,z\in U\\ \mssd(y,z)>0}} \frac{\left\lvert f(z)-f(y)\right\rvert}{\mssd(z,y)} \,\,\mathrm{.}\end{aligned}$$ As a consequence, $\mathrm{Lip}^a_{\mssd}[f]$ is dominated by the asymptotic Lipschitz constant of $f$ as defined in [@Sav19 Eqn. (2.48)], and the second inequality in [\[eq:p:Cheeger:1\]](#eq:p:Cheeger:1){reference-type="eqref" reference="eq:p:Cheeger:1"} follows.* *The first inequality in [\[eq:p:Cheeger:1\]](#eq:p:Cheeger:1){reference-type="eqref" reference="eq:p:Cheeger:1"} is a consequence of [@AmbErbSav16 Prop. 6.3$(b)$ and $(g)$]. Importantly, we note that [@AmbErbSav16] denotes by $\left\lvert\mathrm{D}f\right\rvert_{w}$ the minimal relaxed slope $\left\lvert\mathrm{D}f\right\rvert_{*}$. ◻* # Localization and globalization {#s:LocGlob} Let $(\mbbX,\mcE)$ be a quasi-regular Dirichlet space, and $E\subset X$ be $\mcE$-quasi-open, with $\mssm E>0$. Further set $$\label{eq:LocalizationKuwae} \mcF_E\mathop{\mathrm{\coloneqq}}\left\{u\in\mcF: \tilde{u} \equiv 0\ \;{\mcE}\text{-q.e.} \text{ on } X\setminus E\right\} \,\,\mathrm{,}\;\,\qquad \mcE_E(u,v)\mathop{\mathrm{\coloneqq}}\mcE(u,v)\,\,\mathrm{,}\;\,\quad u,v\in\mcF_E\,\,\mathrm{.}$$ Note that $\mcF_E\subset L^2(\mssm_{E})$. Let us gather here various results proved by K. Kuwae in [@Kuw98]. **Proposition 23** (Kuwae). *Let $(\mbbX,\mcE)$ be a quasi-regular Dirichlet space, and $E\subset X$ be $\mcE$-quasi-open, with $\mssm E>0$. Then,* 1. *[\[i:p:Kuwae:1\]]{#i:p:Kuwae:1 label="i:p:Kuwae:1"} [@Kuw98 Lem. 3.4(i)] $\mcF_E$ is dense in $L^2(\mssm_{E})$ and $(\mcE_E,\mcF_E)$ is a Dirichlet form on $L^2(\mssm_{E})$;* 2. *[\[i:p:Kuwae:2\]]{#i:p:Kuwae:2 label="i:p:Kuwae:2"} [@Kuw98 Lem. 3.4(ii), Lem. 3.5(ii)&(iv)] $A\subset E$ is $\mcE$-polar, resp. $\mcE$-quasi-open, if and only if it is $\mcE_E$-polar, resp. $\mcE_E$-quasi-open;* 3. 
*[\[i:p:Kuwae:3\]]{#i:p:Kuwae:3 label="i:p:Kuwae:3"} [@Kuw98 Lem. 3.4(ii)] the restriction to $E$ of any $\mcE$-quasi-continuous function is $\mcE_E$-quasi-continuous;* 4. *[\[i:p:Kuwae:4\]]{#i:p:Kuwae:4 label="i:p:Kuwae:4"} [@Kuw98 Lem. 3.4(ii)] $(\mcE_E,\mcF_E)$ is quasi-regular;* 5. *[\[i:p:Kuwae:5\]]{#i:p:Kuwae:5 label="i:p:Kuwae:5"} [@Kuw98 Thm. 4.2] ${{\mcF}^\bullet_{\loc}}(E)={{(\mcF_E)}^\bullet_{\loc}}$. In particular, $\mcF\big\lvert_{E}\mathop{\mathrm{\coloneqq}}\left\{f\big\lvert_E: f\in\mcF\right\}\subset {{(\mcF_E)}^\bullet_{\loc}}$.* We denote by $\mbbX_E\mathop{\mathrm{\coloneqq}}(E,\tau_E,\Sigma_E,\mssm_{E})$ the restricted space of $\mbbX$ to $E$, defined in the obvious way, and by $(\mbbX_E,\mcE_E)$ the corresponding quasi-regular Dirichlet space constructed in Proposition [Proposition 23](#p:Kuwae){reference-type="ref" reference="p:Kuwae"}[\[i:p:Kuwae:1\]](#i:p:Kuwae:1){reference-type="ref" reference="i:p:Kuwae:1"}, [\[i:p:Kuwae:4\]](#i:p:Kuwae:4){reference-type="ref" reference="i:p:Kuwae:4"}. When $(\mbbX_E,\mcE_E)$ is a quasi-regular strongly local Dirichlet space, we denote by $\mu^E_{{\,\cdot\,},{\,\cdot\,}}\colon (\mcF_E)_b^{\scriptscriptstyle{\times 2}}\to \mfM^+_{b\text{\textsc{r}}}(\msB_{\tau},\msN_{\mcE_E})$ its energy measure, and by $\mu^E_{{\,\cdot\,}}$ the corresponding extension to ${{(\mcF_E)}^\bullet_{\loc}}$ with values in $\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE_E})$ constructed in Proposition [Proposition 7](#p:PropertiesLoc){reference-type="ref" reference="p:PropertiesLoc"}. **Corollary 24** (Kuwae). *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $E\subset X$ be $\mcE$-quasi-open, with $\mssm E>0$. Then, $(\mbbX_E,\mcE_E)$ is a quasi-regular strongly local Dirichlet space. 
Furthermore, $$\mu^E_{f,g}=\mathop{\mathrm{\mathds 1}}_E\mu_{f,g}\,\,\mathrm{,}\;\,\quad f,g\in\mcF_E \qquad \text{and} \qquad \mu^E_{f}=\mathop{\mathrm{\mathds 1}}_E\mu_{f} \,\,\mathrm{,}\;\,\quad f\in{{(\mcF_E)}^\bullet_{\loc}} \,\,\mathrm{.}$$* **Proof.* The quasi-regularity of $(\mbbX_E,\mcE_E)$ was noted in Proposition [Proposition 23](#p:Kuwae){reference-type="ref" reference="p:Kuwae"}[\[i:p:Kuwae:4\]](#i:p:Kuwae:4){reference-type="ref" reference="i:p:Kuwae:4"}. By (the proof[^4] of) [@Kuw98 Cor. 5.1], the form $(\mcE_E,\mcF_E)$ is strongly local as well, and $\mu^E_{f,g}=\mathop{\mathrm{\mathds 1}}_E \mu_{f,g}$ for every $f,g\in\mcF_E$. The last assertion follows by extension to the broad local domain as in Proposition [Proposition 7](#p:PropertiesLoc){reference-type="ref" reference="p:PropertiesLoc"}[\[i:p:PropertiesLoc:5\]](#i:p:PropertiesLoc:5){reference-type="ref" reference="i:p:PropertiesLoc:5"}. ◻* *Remark 25* (*Caveat*). Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $E\subset X$ be $\mcE$-quasi-open with $0<\mssm E<\infty$. Then, $(\mcE_E,\mcF_E)$ is conservative (and $\mathop{\mathrm{\mathds 1}}_E\in\mcF_E$) if and only if $E$ is additionally $\mcE$-quasi-closed. Indeed, if $E$ is both $\mcE$-quasi-open and $\mcE$-quasi-closed, then it is $\mcE$-invariant, see e.g. [@FukOshTak11 Cor. 4.6.3, p. 194]. Thus, $\mathop{\mathrm{\mathds 1}}_E\in\mcF$ by $\mcE$-invariance of $E$, e.g. [@FukOshTak11 p. 53], and therefore $\mathop{\mathrm{\mathds 1}}_E\in\mcF_E$ by definition of $\mcF_E$. If, on the other hand, $E$ is not $\mcE$-quasi-closed, then it is not $\mcE$-invariant, hence $\mathop{\mathrm{\mathds 1}}_E\not\in\mcF\supset \mcF_E$. **Corollary 26**. *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, $E\subset X$ be $\mcE$-quasi-open, with $\mssm E>0$, and $\mu\in\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$. 
Then, with the obvious meaning of the notation, $$\begin{gathered} \mbbL^{\mu}_{\loc}(E)= \mbbL^{\!\!E\ \mu_{E}}_{\loc}\mathop{\mathrm{\coloneqq}}\left\{f\in{{(\mcF_E)}^\bullet_{\loc}}: \mu^E_{f}\leq \mu_{E}\right\}\,\,\mathrm{,}\;\, \\ \mbbL^{\mu}_{\loc,b}(E)= \mbbL^{\!\!E\ \mu_{E}}_{\loc,b}\,\,\mathrm{,}\;\,\qquad \mbbL^{\mu,\tau}_{\loc,b}(E)=\ \mbbL^{\!\!E\ \mu_{E},\tau}_{\loc,b} \,\,\mathrm{.}\end{gathered}$$* *Further assume that $\mu$ is additionally absolutely $\mcE$-moderate, and that there exists $e_E\in\mcF_b$ with $0\leq \tilde{e}_E\leq 1$ and $\tilde{e}_E\equiv 1$ $\mcE$-q.e. on $E$. Then, $$\label{eq:c:RestrictionDzLoc:1} \mbbL^{\mu}_{\loc,b}\big\lvert_E\mathop{\mathrm{\coloneqq}}\left\{f\big\lvert_E : f\in \mbbL^{\mu}_{\loc,b}\right\}\subset \mbbL^{\!\!E\ \mu_{E}}_{\loc} \,\,\mathrm{.}$$* **Proof.* We only show the first equality, all others being trivial consequences of the first one. Firstly, note that $\mu_{E}\in\mfM^+_\sigma(\msB_{\tau_E},\msN_{\mcE_E})$ as a consequence of Proposition [Proposition 23](#p:Kuwae){reference-type="ref" reference="p:Kuwae"}[\[i:p:Kuwae:2\]](#i:p:Kuwae:2){reference-type="ref" reference="i:p:Kuwae:2"}. Thus, the statement is well-posed. Let $f\in\mbbL^{\mu}_{\loc}(E)$. Then, $\mu^E_{f}=\mathop{\mathrm{\mathds 1}}_E\mu_{f}\leq \mathop{\mathrm{\mathds 1}}_E \mu\eqqcolon\mu_{E}$, where the first equality is shown in Corollary [Corollary 24](#c:Kuwae){reference-type="ref" reference="c:Kuwae"} and the inequality holds by definition of $\mbbL^{\mu}_{\loc}(E)$. This concludes the first assertion in light of Proposition [Proposition 23](#p:Kuwae){reference-type="ref" reference="p:Kuwae"}[\[i:p:Kuwae:5\]](#i:p:Kuwae:5){reference-type="ref" reference="i:p:Kuwae:5"}.* *We show the second assertion. Let $(G_\bullet,e_\bullet)\in\msG_0$ be $\mu$-moderated, and set $G_k'\mathop{\mathrm{\coloneqq}}G_k\cup E$ for every $k\in {\mathbb N}$. Since $e_E\equiv 1$ $\mssm$-a.e. 
on $E$, the function $e_k\vee e_E$ satisfies $e_k\vee e_E \in\mcF_b$ and $e_k\vee e_E\equiv 1$ $\mssm$-a.e. on $G_k'$. Thus, $G_\bullet'\in\msG_0$. Now, let $(G^f_\bullet, f_\bullet)$ with $G^f_\bullet\in\msG_0$ witness that $f\in \mbbL^{\mu}_{\loc,b}$. Since $\mu$ is absolutely $\mcE$-moderate, both $G^f_\bullet$ and $G'_\bullet$ are $\mu$-moderated. Thus, $f\in\mbbL^{\mu}_{\loc,b}(G^f_\bullet)=\mbbL^{\mu}_{\loc,b}(G'_\bullet)$ by Lemma [Lemma 9](#l:2.26){reference-type="ref" reference="l:2.26"}. By definition of the latter space, there exists $f^E\in \mcF$ with $f^E\equiv f$ $\mssm$-a.e. on $E$ and $\mu_{f}\leq \mu$. By Proposition [Proposition 23](#p:Kuwae){reference-type="ref" reference="p:Kuwae"}[\[i:p:Kuwae:5\]](#i:p:Kuwae:5){reference-type="ref" reference="i:p:Kuwae:5"} we conclude that $f^E\big\lvert_E \in {{(\mcF_E)}^\bullet_{\loc}}$. In fact, $f^E\big\lvert_{E}\in \mbbL^{\!\!E\ \mu_{E}}_{\loc,b}$ by locality of $\mu_{f^E}$ as in the first assertion, which concludes the second assertion and the proof. ◻* *Remark 27*. In general, $\mbbL^{\!\!E\ \mu_{E},\tau}_{\loc}\not\subset\mbbL^{\mu,\tau}_{\loc,b}\big\lvert_E$. Indeed, let $\mcE$ be the $0$-form on $L^2(\mssm)$. Then, $\mbbL^{\!\!E\ \mu_{E},\tau}_{\loc}=\mcC_b(\tau_E)\not\subset \mcC_b(\tau)\big\lvert_{E}=\mbbL^{\mu,\tau}_{\loc}\big\lvert_{E}$, unless $E$ is additionally $\tau$-closed. In general, $\mbbL^{\mu}_b\big\lvert_E \not\subset \mbbL^{\!\!E\ \mu_{E}}$, since $\mathop{\mathrm{\mathds 1}}\big\lvert_E=\mathop{\mathrm{\mathds 1}}_E\in \mbbL^{\mu}_b\big\lvert_E$ yet $\mathop{\mathrm{\mathds 1}}_E\notin \mcF\supset \mcF_E$, unless $E$ is additionally $\mcE$-quasi-closed. Let us state here the following localization property for $(\mathsf{Rad}_{\mssd,\mu})$. **Proposition 28** (Localization of the Rademacher property). 
*Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, $\mu\in \mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$ be absolutely $\mcE$-moderate, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance on $X$. Further assume that* 1. *[\[i:p:LocRad:1\]]{#i:p:LocRad:1 label="i:p:LocRad:1"} $\tau_\mssd$ is separable;* 2. *[\[i:p:LocRad:2\]]{#i:p:LocRad:2 label="i:p:LocRad:2"} $\mssd({\,\cdot\,}, x_0)\colon X\to [0,\infty]$ is $\Sigma$-measurable for every $x_0\in X$.* *If $(\mbbX,\mcE,\mssd,\mu)$ satisfies $(\mathsf{Rad}_{\mssd,\mu})$, then $(\mbbX_E,\mcE_E,\mssd,\mu_{E})$ satisfies $(\mathsf{Rad}_{\mssd,\mu_{E}})$ for every $\mcE$-quasi-open $E\subset X$ with $\mssm E>0$.* We need two preparatory results. The first one is a version of McShane's Extension Theorem for Lipschitz functions. We note that our proof of the following statement in [@LzDSSuz20 Lem. 2.1] contains a gap, which we fill in the proof below. **Lemma 29** (Constrained McShane extensions, [@LzDSSuz20 Lem. 2.1]). *Let $(X,\mssd)$ be an extended metric space. Fix $A\subset X$, $A\neq \mathop{\mathrm{\varnothing}}$, and let $\hat f\colon A\rightarrow{\mathbb R}$ be a function in $\mathrm{Lip}_b(A,\mssd)$. Further set $$\label{eq:McShane} \begin{aligned} \overline{f}\colon x&\longmapsto \sup_A \hat f\wedge \inf_{a\in A} \big({\hat f(a)+\mathrm{L}_{\mssd}(\hat f)\,\mssd(x,a)}\big) \,\,\mathrm{,}\;\,& 0\cdot\infty&\mathop{\mathrm{\coloneqq}}\infty\,\,\mathrm{,}\;\, \\ \underline{f}\colon x&\longmapsto \inf_A \hat f \vee \sup_{a\in A} \big({\hat f(a)-\mathrm{L}_{\mssd}(\hat f)\,\mssd(x,a)}\big)\,\,\mathrm{,}\;\,& 0\cdot\infty&\mathop{\mathrm{\coloneqq}}\infty \,\,\mathrm{.} \end{aligned}$$* *Then,* 1. 
*[\[i:l:McShane:1\]]{#i:l:McShane:1 label="i:l:McShane:1"} $\underline{f}=\hat f=\overline{f}$ on $A$ and $\inf_A \hat f\leq \underline{f}\leq \overline{f}\leq \sup_A \hat f$ on $X$;* 2. *[\[i:l:McShane:2\]]{#i:l:McShane:2 label="i:l:McShane:2"} $\underline{f}$, $\overline{f}\in \mathrm{Lip}_b(\mssd)$ with $\mathrm{L}_{\mssd}(\underline{f})=\mathrm{L}_{\mssd}(\overline{f})=\mathrm{L}_{\mssd}(\hat f)$;* 3. *[\[i:l:McShane:3\]]{#i:l:McShane:3 label="i:l:McShane:3"} $\underline{f}$, resp. $\overline{f}$, is the minimal, resp. maximal, function satisfying [\[i:l:McShane:1\]](#i:l:McShane:1){reference-type="ref" reference="i:l:McShane:1"}-[\[i:l:McShane:2\]](#i:l:McShane:2){reference-type="ref" reference="i:l:McShane:2"}, that is, for every $\hat g\in \mathrm{Lip}_b(\mssd)$ with $\inf_A \hat f\leq \hat g\leq \sup_A \hat f$, $\hat g\big\lvert_A=\hat f$ on $A$ and $\mathrm{L}_{\mssd}(\hat g)\leq \mathrm{L}_{\mssd}(\hat f)$, it holds that $\underline{f}\leq \hat g \leq \overline{f}$.* **Proof.* It is erroneously claimed in [@LzDSSuz20] that, if $\hat f$ is non-constant, one can assume that $\mathrm{L}_{\mssd}(\hat f)>0$ with no loss of generality, cf. Remark [Remark 2](#r:McShane){reference-type="ref" reference="r:McShane"}. Since this does not hold, we treat separately the case when $\mathrm{L}_{\mssd}(\hat f)=0$. In this case, partition $A$ into its (pairwise disjoint) $\mssd$-accessible components $\left(A_i\right)_{i\in I}$. Since $\mathrm{L}_{\mssd}(\hat f)=0$, then $\hat f\big\lvert_{A_i}\equiv a_i$ is constant for every $i$. Let $X_i$ be the unique $\mssd$-accessible component of $X$ containing $A_i$. Set $\underline{f}(x)\mathop{\mathrm{\coloneqq}}a_i$ if $x\in X_i$ and $\underline{f}(x)\mathop{\mathrm{\coloneqq}}\inf_A \hat f$ otherwise, and similarly $\overline{f}(x)\mathop{\mathrm{\coloneqq}}a_i$ if $x\in X_i$ and $\overline{f}(x)\mathop{\mathrm{\coloneqq}}\sup_A \hat f$ otherwise. 
Then, $\underline{f}$ and $\overline{f}$ satisfy the required properties.* *The rest of the proof is as in [@LzDSSuz20]. ◻* **Corollary 30**. *Let $(X,\mssd)$ be a separable extended pseudo-metric space. Further let $\Sigma$ be a $\sigma$-algebra on $X$ and assume that $\mssd({\,\cdot\,},x_0)\colon X\to [0,\infty]$ is $\Sigma$-measurable for every $x_0\in X$. Then, $\Lip(\mssd)=\Lip(\mssd,\Sigma)$.* **Proof.* By a standard truncation argument, it suffices to show that every $\hat f\in\mathrm{Lip}_b(\mssd)$ is $\Sigma$-measurable. Choosing $A=X$ in Lemma [Lemma 29](#l:McShane){reference-type="ref" reference="l:McShane"} we have that $\hat f=\overline{f}$, thus it suffices to show that $\overline{f}$ is measurable. Since $(X,\mssd)$ is separable, there exists a countable $\mssd$-dense set $D\subset X$. Since $(x,a)\mapsto\big({\hat f(a)+\mathrm{L}_{\mssd}(\hat f)\, \mssd(x,a)}\big)$ is jointly $\mssd$-continuous, $$\inf_{a\in X} \big({\hat f(a)+\mathrm{L}_{\mssd}(\hat f)\, \mssd(x,a)}\big)=\inf_{a\in D} \big({\hat f(a)+\mathrm{L}_{\mssd}(\hat f)\, \mssd(x,a)}\big) \,\,\mathrm{.}$$ Since the infimum of a countable family of $\Sigma$-measurable functions is $\Sigma$-measurable, the conclusion follows since $x\mapsto \big({\hat f(a)+\mathrm{L}_{\mssd}(\hat f)\, \mssd(x,a)}\big)$ is $\Sigma$-measurable for every $a\in X$ by assumption. ◻* *Proof of Proposition [Proposition 28](#p:LocRad){reference-type="ref" reference="p:LocRad"}.* Let $\hat f\in \mathrm{Lip}^1_b(E,\mssd,\Sigma_E)$. 
By Lemma [Lemma 29](#l:McShane){reference-type="ref" reference="l:McShane"}[\[i:l:McShane:2\]](#i:l:McShane:2){reference-type="ref" reference="i:l:McShane:2"} and Corollary [Corollary 30](#c:McShane){reference-type="ref" reference="c:McShane"}, the upper constrained McShane extension $\overline{f}\colon X\to {\mathbb R}$ of $\hat f\colon E\to {\mathbb R}$ defined in [\[eq:McShane\]](#eq:McShane){reference-type="eqref" reference="eq:McShane"} satisfies $\overline f\in \mathrm{Lip}^1_b(\mssd,\Sigma)$. By $(\mathsf{Rad}_{\mssd,\mu})$ we conclude that $\big [\overline f\big]_{\mssm}\in \mbbL^{\mu}_{\loc,b}$. Thus, $\big [\overline f\big\lvert_E\big]_{\mssm_{E}}\in \mbbL^{\!\!E\ \mu_E}_{\loc,b}$ by [\[eq:c:RestrictionDzLoc:1\]](#eq:c:RestrictionDzLoc:1){reference-type="eqref" reference="eq:c:RestrictionDzLoc:1"} in Corollary [Corollary 26](#c:RestrictionDzLoc){reference-type="ref" reference="c:RestrictionDzLoc"}. This concludes the assertion since $\overline f=\hat f$ everywhere on $E$ by Lemma [Lemma 29](#l:McShane){reference-type="ref" reference="l:McShane"}[\[i:l:McShane:1\]](#i:l:McShane:1){reference-type="ref" reference="i:l:McShane:1"}. ◻ *Remark 31*. In Proposition [Proposition 28](#p:LocRad){reference-type="ref" reference="p:LocRad"}, assumption [\[i:p:LocRad:1\]](#i:p:LocRad:1){reference-type="ref" reference="i:p:LocRad:1"} is usually stronger than [\[i:p:LocRad:2\]](#i:p:LocRad:2){reference-type="ref" reference="i:p:LocRad:2"}. For instance, if $\mssd_\mu$ satisfies the assumption in Proposition [Proposition 28](#p:LocRad){reference-type="ref" reference="p:LocRad"}[\[i:p:LocRad:1\]](#i:p:LocRad:1){reference-type="ref" reference="i:p:LocRad:1"}, then it satisfies as well the assumption in Proposition [Proposition 28](#p:LocRad){reference-type="ref" reference="p:LocRad"}[\[i:p:LocRad:2\]](#i:p:LocRad:2){reference-type="ref" reference="i:p:LocRad:2"} by [@LzDSSuz20 Thm. 3.13]. 
[\[i:r:LocRad:2\]]{#i:r:LocRad:2 label="i:r:LocRad:2"} It is essential for the validity of the above proposition that the Rademacher property be phrased with the broad *local* space $\mbbL^{\!\!E\ \mu_{E}}_{\loc}$ (as opposed to $\mbbL^{\!\!E\ \mu_{E}}$). Indeed, $\mathop{\mathrm{\mathds 1}}_E\in \mathrm{Lip}_b(E,\mssd,\Sigma_E)$, yet in general $\mathop{\mathrm{\mathds 1}}_E\notin \mcF_E$ as in Remark [Remark 25](#r:Caveat){reference-type="ref" reference="r:Caveat"}. The statement analogous to Proposition [Proposition 28](#p:LocRad){reference-type="ref" reference="p:LocRad"} with, e.g., $(\mathsf{SL}_{\mu,\mssd})$ in place of $(\mathsf{Rad}_{\mssd,\mu})$ does *not* hold. This is a consequence of the failure of the converse inclusion in [\[eq:c:RestrictionDzLoc:1\]](#eq:c:RestrictionDzLoc:1){reference-type="eqref" reference="eq:c:RestrictionDzLoc:1"}. In the next example we discuss a quasi-regular strongly local Dirichlet space $(\mbbX,\mcE)$ satisfying $(\mathsf{SL}_{\mu,\mssd})$ for which there exists a $\tau$-open set $E$ with $\mssm E=\mssm X$ and such that $(\mbbX_E,\mcE_E,\mssd,\mu_{E})$ does not satisfy $(\mathsf{SL}_{\mu_{E},\mssd})$. *Example 32*. Let $\mbbX$ be the standard interval $[-1,1]$, and $\mssd$ be the standard Euclidean distance on $X$. Set $E\mathop{\mathrm{\coloneqq}}[-1,1]\setminus \left\{0\right\}$, and let $\mu\mathop{\mathrm{\coloneqq}}\mssm={\mathrm{Leb}}^1$ be the standard Lebesgue measure on $X$. Further let $(\mcE,\mcF)$ be the regular Dirichlet form properly associated with the Brownian motion on $X$ with reflecting boundary conditions. Now, let $\hat f\colon E\to {\mathbb R}$ be defined by $\hat f\mathop{\mathrm{\coloneqq}}\mathop{\mathrm{\mathds 1}}_{(0,1]}-\mathop{\mathrm{\mathds 1}}_{[-1,0)}$. 
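The claims of this example also admit a quick numerical sanity check. The sketch below is purely illustrative: the grid, the choice $k=10$, the floating-point margin, the function names, and the identification of the energy measure of the reflected Brownian motion with $\lvert f'\rvert^2\,\mathrm{d}x$ are assumptions of the illustration, not part of the example. It checks that the cut-offs $f^E_k(x)=-1\vee kx\wedge 1$ agree with $\hat f$ and are locally constant on $G_k^E=[-1,1]\setminus[-\tfrac1k,\tfrac1k]$ --- so the restricted energy measure vanishes there --- while the jump of $\hat f$ across $0$ rules out any global Lipschitz bound; it also exercises the upper constrained McShane extension [\[eq:McShane\]](#eq:McShane){reference-type="eqref" reference="eq:McShane"} on $f^E_k\big\lvert_{G_k^E}$, as in the proof of Proposition [Proposition 28](#p:LocRad){reference-type="ref" reference="p:LocRad"}.

```python
import numpy as np

# f_hat = 1 on (0,1], -1 on [-1,0): the function of this example.
def f_hat(x):
    return np.where(np.asarray(x, dtype=float) > 0, 1.0, -1.0)

# The jump across 0 obstructs any global Lipschitz (or continuous) extension:
eps = 1e-3
jump_ratio = float(abs(f_hat(eps) - f_hat(-eps)) / (2 * eps))
assert jump_ratio > 100  # blows up as eps -> 0

# Cut-offs f_k = -1 v (k x) ^ 1 and grid points of G_k^E:
def f_k(x, k):
    return np.clip(k * np.asarray(x, dtype=float), -1.0, 1.0)

k = 10
xs = np.linspace(-1.0, 1.0, 2001)
mask = np.abs(xs) >= 1.0 / k + 1e-9  # G_k^E, with a small float margin

# On G_k^E the cut-off agrees with f_hat and is locally constant (values +-1),
# so its energy measure restricted to G_k^E vanishes and the broad local
# bound on E holds trivially.
assert np.allclose(f_k(xs[mask], k), f_hat(xs[mask]))
assert set(np.unique(f_k(xs[mask], k))) == {-1.0, 1.0}

# Upper constrained McShane extension of f_k|_{G_k^E} back to all of [-1,1]:
# bar f(x) = sup_A f  ^  inf_{a in A} ( f(a) + L * d(x, a) )
def mcshane_upper(xa, fa, L, xq):
    cand = fa[None, :] + L * np.abs(xq[:, None] - xa[None, :])
    return np.minimum(fa.max(), cand.min(axis=1))

ext = mcshane_upper(xs[mask], f_k(xs[mask], k), k, xs)
assert np.allclose(ext[mask], f_k(xs[mask], k))  # extends the data on G_k^E
assert np.max(np.abs(np.diff(ext))) <= k * (xs[1] - xs[0]) + 1e-9  # k-Lipschitz on the grid
```

The last two assertions mirror Lemma [Lemma 29](#l:McShane){reference-type="ref" reference="l:McShane"}: the extension restricts to the given data and does not increase the (discrete) Lipschitz constant.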
Setting $G_k^E\mathop{\mathrm{\coloneqq}}[-1,1]\setminus[-\tfrac{1}{k},\tfrac{1}{k}]$ and $f^E_k(x)\mathop{\mathrm{\coloneqq}}-1 \vee k x \wedge 1$, it is straightforwardly verified that $(G^E_\bullet, f^E_\bullet)$ witnesses that $f\mathop{\mathrm{\coloneqq}}\big[\hat f\big]_{\mssm_E}\in \mbbL^{\!\!E\ \mu_E}_{\loc,b}$. On the other hand, $f\notin \mbbL^{\mu}_{\loc,b}$. Indeed, suppose by contradiction that there exists $(G_\bullet, f_\bullet)$, with $G_\bullet\in\msG_0$, witnessing that $f\in \mbbL^{\mu}_{\loc,b}$. Since $\left\{0\right\}$ is *not* $\mcE$-polar, there exist $G_k\in G_\bullet$ and $f_k\in\mcF$ such that $f\equiv f_k$ $\mssm$-a.e. on $G_k$ and $0\in G_k$. This is a contradiction since every $h\in \mcF$ has a $\tau$-continuous $\mssm$-representative, yet $f$ has no $\mssm$-representative continuous at $0$. *Remark 33*. Again, dually to the case of the Rademacher property discussed in Remark [Remark 31](#r:LocRad){reference-type="ref" reference="r:LocRad"}, if we had phrased, e.g., $(\mathsf{SL}_{\mu,\mssd})$ with $\mbbL^{\mu}_b$ in place of $\mbbL^{\mu}_{\loc,b}$, then the localization of $(\mathsf{SL}_{\mu,\mssd})$ to $E$ would be immediate, since $\mbbL^{\!\!E\ \mu_E}_b\subset \mbbL^{\mu}_b$ because $\mcF_E\subset \mcF$. Example [Example 32](#ese:FailureLocSL){reference-type="ref" reference="ese:FailureLocSL"} shows that the lack of a *Zygmund Lemma* --- '$W^{1,\infty}\hookrightarrow\Lip$' --- for $E$ is an obstruction to the localization to $E$ of the Sobolev-to-Lipschitz property on $X$. In fact, the situation for $(\mathsf{SL}_{\mu,\mssd})$ is opposite to that for $(\mathsf{Rad}_{\mssd,\mu})$, in the sense that one should rather expect *globalization* (as opposed to: *localization*), which takes the following form. **Proposition 34** (Globalization of Sobolev--to--Lipschitz-type properties). 
*Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, $\mu\in\mfM^+_\sigma(\msB_{\tau},\msN_{\mcE})$ be absolutely $\mcE$-moderate, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended distance on $X$. Further assume that* 1. *[\[i:p:GlobCSL:1\]]{#i:p:GlobCSL:1 label="i:p:GlobCSL:1"} $\tau_{\mssd}=\tau$;* 2. *[\[i:p:GlobCSL:2\]]{#i:p:GlobCSL:2 label="i:p:GlobCSL:2"} $(X,\mssd)$ is a (possibly: extended) length space;* 3. *[\[i:p:GlobCSL:3\]]{#i:p:GlobCSL:3 label="i:p:GlobCSL:3"} there exists a $\tau$-open covering $\msE$ of $X$, with the following properties:* 1. * $\mssm E>0$ for each $E\in\msE$;* 2. *$\msE$ is *$\mcE$-moderate*, i.e. for each $E\in\msE$ there exists $e_E\in\mcF$ with $0\leq e_E\leq 1$ and $e_E\equiv 1$ $\mssm$-a.e. on $E$, cf. [@LzDSSuz20 Dfn. 2.19];* 3. * $(\mathsf{SL}_{\mu_{E},\mssd})$, resp. $(\mathsf{cSL}_{\tau_E,\mu_{E},\mssd})$, holds for $(\mbbX_E,\mcE_E,\mssd,\mu_{E})$ for every $E\in\msE$.* *Then, $(\mathsf{SL}_{\mu,\mssd})$, resp. $(\mathsf{cSL}_{\tau,\mu,\mssd})$, holds for $(\mbbX,\mcE,\mssd,\mu)$.* **Proof.* Arguing as in the proof of [@LzDSSuz20 Lem. 3.23] with $\mssd$ in place of $\mssd_\mu$, we may restrict ourselves to the case when $\mssd$ is everywhere finite.* *We start with the case of $(\mathsf{SL}_{\mu,\mssd})$. Let $f\in \mbbL^{\mu}_{\loc,b}$. Since $\msE$ is $\mcE$-moderate and $\mu$ is absolutely $\mcE$-moderate, [\[eq:c:RestrictionDzLoc:1\]](#eq:c:RestrictionDzLoc:1){reference-type="eqref" reference="eq:c:RestrictionDzLoc:1"} in Corollary [Corollary 26](#c:RestrictionDzLoc){reference-type="ref" reference="c:RestrictionDzLoc"} holds, and it implies that $f\big\lvert_E\in \mbbL^{\!\!E\ \mu_E}_{\loc,b}$ for every $E\in\msE$. 
We may now apply $(\mathsf{SL}_{\mu_E,\mssd})$ to conclude that for every $E\in\msE$ there exists $\hat f^E\in \mathrm{Lip}^1_b(E,\mssd,\Sigma_E)$ with $\big [\hat f^E\big]_{\mssm_E}\equiv f\big\lvert_{E}$ $\mssm_E$-a.e.. As a consequence of [\[i:p:GlobCSL:1\]](#i:p:GlobCSL:1){reference-type="ref" reference="i:p:GlobCSL:1"}, $\mathrm{Lip}^1_b(E,\mssd,\Sigma_E)=\mathrm{Lip}^1_b(E,\mssd,\tau_E)$, thus we may omit the notation for representatives, and simply write $f^E$ in place of both $\hat f^E$ and $\big [\hat f^E\big]_{\mssm}$. Now, since $f^{E_1} \equiv f \equiv f^{E_2}$ $\mssm$-a.e. on $E_1\cap E_2$ for every $E_1,E_2\in\msE$, and since $\mssm$ has full $\tau$-support by [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, it follows from the $\tau_{E_i}$-continuity of $f^{E_i}$ that $f^{E_1}\equiv f^{E_2}$ everywhere on $E_1\cap E_2$. Therefore, since $\mcC(\tau)$ is a sheaf on $(X,\tau)$, there exists a unique $\hat f\in\mcC(\tau)$ with $\hat f\big\lvert_{E}\equiv f^E$ for every $E\in\msE$. Since $\big [\hat f\big]_{\mssm}\equiv f^E\equiv f$ $\mssm$-a.e. on $E$ for each $E$ in the covering $\msE$ of $X$, we conclude that $\hat f$ is a (in fact: *the* unique) $\tau$-continuous $\mssm$-representative of $f$. As customary, we may therefore drop the distinction between $\hat f$ and $f$. It remains to show that $f\in \mathrm{Lip}^1_b(\mssd)$. Since $(X,\mssd)$ is a length space, it is $1$-quasi-convex (see e.g. [@CobMicNic19 Dfn. 2.5.4]), and the fact that $f\big\lvert_E\in \mathrm{Lip}^1_b(E,\mssd)$ for every $E$ in the $\tau_\mssd$-open covering $\msE$ implies that $f\in \mathrm{Lip}^1_b(\mssd)$, see e.g. [@CobMicNic19 Thm. 
2.5.6], which concludes the proof.* *For the case of $(\mathsf{cSL}_{\tau,\mu,\mssd})$ it suffices to note that [\[eq:c:RestrictionDzLoc:1\]](#eq:c:RestrictionDzLoc:1){reference-type="eqref" reference="eq:c:RestrictionDzLoc:1"} still holds when the broad local spaces of functions with bounded energy are replaced by their additionally continuous counterparts, since the restriction to $E$ of any $\tau$-continuous function is $\tau_E$-continuous. ◻* *Remark 35*. One relevant case for the application of Proposition [Proposition 34](#p:GlobCSL){reference-type="ref" reference="p:GlobCSL"} is when $\mssd=\mssd_\mssm$, in which case the assumption in Proposition [Proposition 34](#p:GlobCSL){reference-type="ref" reference="p:GlobCSL"}[\[i:p:GlobCSL:1\]](#i:p:GlobCSL:1){reference-type="ref" reference="i:p:GlobCSL:1"} amounts to the *strict locality* of the Dirichlet space $(\mbbX,\mcE)$ in the sense of [@LzDSSuz20 Dfn. 3.19]. In this case, the assumption in Proposition [Proposition 34](#p:GlobCSL){reference-type="ref" reference="p:GlobCSL"}[\[i:p:GlobCSL:2\]](#i:p:GlobCSL:2){reference-type="ref" reference="i:p:GlobCSL:2"} holds as soon as $(X,\mssd_\mssm)$ is locally complete, see [@LzDSSuz20 Thm. 3.24]. *Remark 36*. When $\mssd=\mssd_\mu$, it would not be difficult to show that Proposition [Proposition 34](#p:GlobCSL){reference-type="ref" reference="p:GlobCSL"} (even without the assumption in [\[i:p:GlobCSL:2\]](#i:p:GlobCSL:2){reference-type="ref" reference="i:p:GlobCSL:2"}) holds as well when, e.g., $(\mathsf{cSL}_{\tau,\mu,\mssd_\mu})$ is replaced by $(\mathsf{Rad}_{\mssd_\mu,\mu})$ and $(\mathsf{cSL}_{\tau_E,\mu_E,\mssd_\mu})$ is replaced by $(\mathsf{Rad}_{\mssd_\mu,\mu_E})$. However, this new statement should not be interpreted as a globalization result for the Rademacher property. Rather, since $\mbbX$ satisfies [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, the topology $\tau=\tau_{\mssd_\mu}$ is separable. 
Thus, $(\mathsf{Rad}_{\mssd_\mu,\mu})$ is simply *verified* for $(\mbbX,\mcE,\mssd_\mu,\mu)$ ---that is: *independently* of the Rademacher property for $(\mbbX_E,\mcE_E,\mssd_\mu,\mu_E)$--- by virtue of [@LzDSSuz20 Cor. 3.15]. # Weighted spaces {#s:Transfer} In this section, let $\mbbX$ and $\mbbX'$ be structures with the same underlying topological measurable space $(X,\tau,\Sigma)=(X',\tau',\Sigma')$ and different measures $\mssm$ and $\mssm'$ with $\mssm\sim\mssm'$. Further let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended distance. Now, let $(\mcE,\mcF)$, resp. $(\mcE',\mcF')$, be a quasi-regular strongly local Dirichlet space with underlying space $\mbbX$, resp. $\mbbX'$, defined on $L^2(\mssm)$, resp. $L^2(\mssm')$, and admitting a carré du champ operator $\Gamma$, resp. $\Gamma'$. We write $$\big({\Gamma, \msA}\big)\leq \big({\Gamma',\msA'}\big)$$ to indicate that $\msA\supset \msA'$ and $\Gamma'\geq \Gamma$ on $\msA'$, and analogously for the opposite inequality. ## Rademacher and Sobolev-to-Lipschitz properties for weighted spaces Let $(\mssP)$ denote any of $(\mathsf{Rad}_{\mssd,\mssm})$, $(\mathsf{ScL}_{\mssm,\tau,\mssd})$, $(\mathsf{SL}_{\mssm,\mssd})$, or $(\mathsf{cSL}_{\tau,\mssm,\mssd})$. Note that $\mathrm{Lip}^1(\mssd,\Sigma)$ and $\mcC_b(\tau)$ depend neither on $(\mcE,\mcF)$ nor on $\mssm$. Furthermore, since $\mssm\sim\mssm'$, we have that $L^\infty(\mssm)= L^\infty(\mssm')$ as Banach spaces. As a consequence of the facts above, $(\mssP)$ is a *local* property in the sense of the following proposition, a proof of which is straightforward. **Proposition 37** (Weighted spaces). *Retain the notation above. Then,* 1. *[\[i:p:Locality:1\]]{#i:p:Locality:1 label="i:p:Locality:1"} if $\big({\Gamma, \mbbL^{\mssm}_{\loc,b}}\big)\leq \big({\Gamma',\mbbL^{\prime\, \mssm'}_{\loc,b}}\big)$ and $(\mathsf{ScL}_{\mssm,\tau,\mssd})$, resp. 
$(\mathsf{SL}_{\mssm,\mssd})$, $(\mathsf{cSL}_{\tau,\mssm,\mssd})$ holds, then $(\mathsf{ScL}_{\mssm',\tau,\mssd})$, resp. $(\mathsf{SL}_{\mssm',\mssd})$, $(\mathsf{cSL}_{\tau,\mssm',\mssd})$ holds as well;* 2. *[\[i:p:Locality:2\]]{#i:p:Locality:2 label="i:p:Locality:2"} if $\big({\Gamma, \mbbL^{\mssm}_{\loc,b}}\big)\geq \big({\Gamma',\mbbL^{\prime\, \mssm'}_{\loc,b}}\big)$ and $(\mathsf{Rad}_{\mssd,\mssm})$ holds, then $(\mathsf{Rad}_{\mssd,\mssm'})$ holds as well.* Let us now show that it suffices to verify the assumptions in Proposition [Proposition 37](#p:Locality){reference-type="ref" reference="p:Locality"} only on a common pseudo-core (i.e. a linear subspace dense in both domains). In the statement of the next result, let $\msG_0\mathop{\mathrm{\coloneqq}}\msG_0^\mcE$ and $\msG_0'\mathop{\mathrm{\coloneqq}}\msG_0^{\mcE'}$ be defined as in [\[eq:Nests\]](#eq:Nests){reference-type="eqref" reference="eq:Nests"}. **Proposition 38** (Comparison of square fields). *Let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be quasi-regular strongly local Dirichlet spaces with the same underlying topological measurable space $(X,\tau,\Sigma)=(X',\tau',\Sigma')$ and possibly different measures $\mssm$ and $\mssm'$, both satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. Further assume that:* 1. *[\[i:p:LocalityProbab:3\]]{#i:p:LocalityProbab:3 label="i:p:LocalityProbab:3"} $\mssm'=\theta\mssm$ for some $\theta\in L^0(\mssm)$, and there exist $E_\bullet, G_\bullet\in\msG_0\cap \msG_0'$ with the following properties:* 1. *[\[i:p:LocalityProbab:3.1\]]{#i:p:LocalityProbab:3.1 label="i:p:LocalityProbab:3.1"} for each $k\in {\mathbb N}$ there exists a constant $a_k$ such that $\theta\geq a_k>0$ $\mssm$-a.e. on $G_k$; (that is, $\theta^{-1}\in {{\big({L^\infty(\mssm)}\big)}^\bullet_{\loc}}(G_\bullet)$.)* 2. 
*[\[i:p:LocalityProbab:2\]]{#i:p:LocalityProbab:2 label="i:p:LocalityProbab:2"} for each $k\in {\mathbb N}$ it holds $E_k\subset G_k$ $\mcE$- and $\mcE'$-quasi-everywhere, and there exists $\varrho_k\in \mcF$ with $\mathop{\mathrm{\mathds 1}}_{E_k}\leq \varrho_k\leq \mathop{\mathrm{\mathds 1}}_{G_k}$ $\mssm$-a.e., and $\Gamma(\varrho_k)\in L^1(\mssm'_{G_k})$;* 2. *[\[i:p:LocalityProbab:4\]]{#i:p:LocalityProbab:4 label="i:p:LocalityProbab:4"} there exists $\mcD$ a pseudo-core for both $(\mcE,\mcF)$ on $L^2(\mssm)$ and $(\mcE',\mcF')$ on $L^2(\mssm')$, additionally so that $\Gamma\leq \Gamma'$ on $\mcD$.* *Then, $\big({\Gamma, \mbbL^{\mssm}_{\loc,b}}\big)\leq \big({\Gamma',\mbbL^{\prime\, \mssm'}_{\loc,b}}\big)$ and $\mssd_\mssm\geq \mssd_{\mssm'}$.* *Remark 39*. 1. Assumption [\[i:p:LocalityProbab:2\]](#i:p:LocalityProbab:2){reference-type="ref" reference="i:p:LocalityProbab:2"} in Proposition [Proposition 38](#p:LocalityProbab){reference-type="ref" reference="p:LocalityProbab"} is implied by the following: - for each $k\in{\mathbb N}$ it holds $E_k\subset G_k$ $\mcE$- and $\mcE'$-quasi-everywhere and there exists $\varrho_k\in\mcD$ with $\mathop{\mathrm{\mathds 1}}_{E_k}\leq \varrho_k\leq \mathop{\mathrm{\mathds 1}}_{G_k}$ $\mssm$-a.e.; indeed, in this case $\Gamma(\varrho_k)\leq \Gamma'(\varrho_k)\in L^1(\mssm')$ by assumption [\[i:p:LocalityProbab:4\]](#i:p:LocalityProbab:4){reference-type="ref" reference="i:p:LocalityProbab:4"} in Proposition [Proposition 38](#p:LocalityProbab){reference-type="ref" reference="p:LocalityProbab"} and since $\mcD\subset \mcF'$, whence $\Gamma(\varrho_k)\in L^1(\mssm'_{G_k})$ as required. Depending on the choice of $\mcD$, assumption $(a_2')$ is possibly easier to verify; in particular, it is immediately satisfied if $\mcD=\mcF'$. 2.
The existence of the cut-off functions in Proposition [Proposition 38](#p:LocalityProbab){reference-type="ref" reference="p:LocalityProbab"}[\[i:p:LocalityProbab:2\]](#i:p:LocalityProbab:2){reference-type="ref" reference="i:p:LocalityProbab:2"} is quite a mild assumption. For instance, let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty)$ be a distance metrizing $\tau$ and assume that $(\mbbX,\mcE)$ satisfies $(\mathsf{Rad}_{\mssd,\mssm})$. We say that $A_1$, $A_2\subset X$ are $\tau$-*well-separated* if $\cl_\tau A_1\cap \cl_\tau A_2=\mathop{\mathrm{\varnothing}}$. Then it is readily verified that, for *every* pair of sets $E,G$ with -  $E\subset G$, and $G$ (hence $E$) $\mssd$-bounded and of finite $\mssm'$-measure, -  $E$ and $G^\mathrm{c}$ $\tau$-well-separated, the function $\varrho_{E,G}\mathop{\mathrm{\coloneqq}}0\vee \big({1-\mssd(E,G^\mathrm{c})^{-1}\mssd({\,\cdot\,},E)}\big)$ satisfies $$\mathop{\mathrm{\mathds 1}}_{\cl_\tau E} \leq \varrho_{E,G} \leq \mathop{\mathrm{\mathds 1}}_{\cl_\tau G}\,\,\mathrm{,}\;\,\qquad \Gamma(\varrho_{E,G})\leq \mssd(E,G^\mathrm{c})^{-2}\mathop{\mathrm{\mathds 1}}_{\cl_\tau G}\in L^1(\mssm')\,\,\mathrm{.}$$ *Proof of Proposition [Proposition 38](#p:LocalityProbab){reference-type="ref" reference="p:LocalityProbab"}.* Let $\varrho_k$ be given by [\[i:p:LocalityProbab:2\]](#i:p:LocalityProbab:2){reference-type="ref" reference="i:p:LocalityProbab:2"}. Since $G_\bullet\in\msG_0$ (resp. $G_\bullet\in\msG_0'$), we have in particular that $\mssm G_k<\infty$ (resp. $\mssm'G_k<\infty$) for every $k\in {\mathbb N}$. Thus, $\varrho_k\in L^p(\mssm)\cap L^p(\mssm')$ for every $p\in [1,\infty]$ by interpolation, for every $k\in{\mathbb N}$. Now, fix $f\in \mbbL^{\prime\, \mssm'}_{\loc,b}$ and let $(G'_\bullet, f_\bullet)$, with $G'_\bullet \in\msG_0'$, witness this.
With no loss of generality, by truncation, we may assume that $\sup_n\left\lVert f_n\right\rVert_{L^\infty(\mssm)}\leq M\mathop{\mathrm{\coloneqq}}\left\lVert f\right\rVert_{L^\infty(\mssm)}<\infty$. In light of Lemma [Lemma 9](#l:2.26){reference-type="ref" reference="l:2.26"} and since $G_\bullet\in\msG_0\cap\msG_0'$, we may assume with no loss of generality that $G'_\bullet= G_\bullet$. For each $k\in{\mathbb N}$, since $f_k\in\mcF'$, there exists $\left(f^{\scriptscriptstyle{(n)}}_k\right)_n\subset \mcD\subset\mcF'$ a sequence of functions $\mcF'$-converging to $f_k$. With no loss of generality, up to replacing $\left(f^{\scriptscriptstyle{(n)}}_k\right)_n$ with a non-relabeled subsequence, we may further assume that $\lim_{n}f^{\scriptscriptstyle{(n)}}_k=f_k$ $\mssm'$- (hence $\mssm$-)a.e.. By a standard truncation argument, by the Markov property and by the closability of $(\mcE',\mcF')$, we may finally also assume that $\left\lvert f^{\scriptscriptstyle{(n)}}_k\right\rvert\leq \left\lvert f_k\right\rvert\leq M$ $\mssm'$-a.e. for every $k,n\in {\mathbb N}$. 
As a consequence, by Dominated Convergence with dominating function $M\mathop{\mathrm{\mathds 1}}_{G_k}\in L^2(\mssm')$ we have in particular that $$\label{eq:p:LocalityProbab:1} L^2(\mssm')\text{-}\lim_{n}f^{\scriptscriptstyle{(n)}}_k\varrho_k= f\varrho_k\,\,\mathrm{.}$$ Furthermore, by the Leibniz rule and the Cauchy--Schwarz inequality for $\Gamma$, and by the assumption, we have that $\left(f^{\scriptscriptstyle{(n)}}_k\varrho_k\right)_n\subset \mcF_b$ satisfies $$\begin{aligned} \Gamma(f^{\scriptscriptstyle{(n)}}_k\varrho_k-f^{\scriptscriptstyle{(m)}}_k\varrho_k) \leq&\ 2\left\lvert f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k\right\rvert^2 \Gamma(\varrho_k) + 2\left\lvert\varrho_k\right\rvert^2 \Gamma(f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k) \\ \leq&\ 2\left\lvert f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k\right\rvert^2\Gamma(\varrho_k)+2\mathop{\mathrm{\mathds 1}}_{G_k}\Gamma'(f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k) \,\,\mathrm{.}\end{aligned}$$ Since $\varrho_k\equiv 0$ $\mcE$-q.e. on $G_k^\mathrm{c}$, and since $G_k$ is $\mcE$-quasi-open, by [\[eq:SLoc:2\]](#eq:SLoc:2){reference-type="eqref" reference="eq:SLoc:2"} we conclude that $$\begin{aligned} \label{eq:p:LocalityProbab:2} \Gamma(f^{\scriptscriptstyle{(n)}}_k\varrho_k-f^{\scriptscriptstyle{(m)}}_k\varrho_k) \leq&\ 2\,\mathop{\mathrm{\mathds 1}}_{G_k}\big({\left\lvert f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k\right\rvert^2\Gamma(\varrho_k)+ \Gamma'(f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k)}\big) \,\,\mathrm{.}\end{aligned}$$ Now, since $\theta\geq a_k>0$ on $G_k$, the $L^2(\mssm'_{G_k})$-convergence implies the $L^2(\mssm_{G_k})$-convergence.
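Indeed, since $\mathop{}\!\mathrm{d}\mssm=\theta^{-1}\mathop{}\!\mathrm{d}\mssm'$ on $\left\{\theta>0\right\}$, one has the elementary estimate (spelled out here for convenience), valid for every $g\in L^2(\mssm'_{G_k})$:

```latex
% On G_k one has theta >= a_k > 0, hence dm <= a_k^{-1} dm' there:
\begin{equation*}
  \int_{G_k} \lvert g \rvert^2 \mathop{}\!\mathrm{d}\mssm
  \;=\; \int_{G_k} \lvert g \rvert^2\, \theta^{-1} \mathop{}\!\mathrm{d}\mssm'
  \;\leq\; a_k^{-1} \int_{G_k} \lvert g \rvert^2 \mathop{}\!\mathrm{d}\mssm' \,\,\mathrm{.}
\end{equation*}
```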
Thus, there exists $$\label{eq:p:LocalityProbab:2.5} L^2(\mssm)\text{-}\lim_{n}f^{\scriptscriptstyle{(n)}}_k\varrho_k = L^2(\mssm')\text{-}\lim_{n}f^{\scriptscriptstyle{(n)}}_k\varrho_k = f_k\varrho_k = f\varrho_k\,\,\mathrm{,}\;\,$$ where the last equality holds by definition of $(G_\bullet,f_\bullet)$. Furthermore, integrating [\[eq:p:LocalityProbab:2\]](#eq:p:LocalityProbab:2){reference-type="eqref" reference="eq:p:LocalityProbab:2"} we see that $$\label{eq:p:LocalityProbab:2.75} \begin{aligned} \lim_{n,m} &\int \Gamma(f^{\scriptscriptstyle{(n)}}_k\varrho_k-f^{\scriptscriptstyle{(m)}}_k\varrho_k) \mathop{}\!\mathrm{d}\mssm \\ &\leq 2a_k^{-1}\lim_{n,m}\int_{G_k} \left\lvert f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k\right\rvert^2\Gamma(\varrho_k) \mathop{}\!\mathrm{d}\mssm' +2a_k^{-1}\lim_{n,m}\mcE'(f^{\scriptscriptstyle{(n)}}_k-f^{\scriptscriptstyle{(m)}}_k)=0\,\,\mathrm{,}\;\, \end{aligned}$$ where the first limit on the right-hand side vanishes by Dominated Convergence with dominating function $4M^2 \Gamma(\varrho_k)$, and the second vanishes by definition of $\left(f^{\scriptscriptstyle{(n)}}_k\right)_n$. Combining [\[eq:p:LocalityProbab:2.5\]](#eq:p:LocalityProbab:2.5){reference-type="eqref" reference="eq:p:LocalityProbab:2.5"} and [\[eq:p:LocalityProbab:2.75\]](#eq:p:LocalityProbab:2.75){reference-type="eqref" reference="eq:p:LocalityProbab:2.75"} implies the existence of $$\begin{aligned} \mcF\text{-}\lim_{n}f^{\scriptscriptstyle{(n)}}_k\varrho_k=f\varrho_k\in\mcF_b\,\,\mathrm{.}\end{aligned}$$ Thus, since $G_\bullet$ is increasing to $X$ $\mcE$-quasi-everywhere, and $\left(f\varrho_k\right)_k\subset \mcF$ is a sequence of functions satisfying $f\varrho_k\equiv f$ on $G_k$, we conclude that $f\in{{\mcF}^\bullet_{\loc}}$ by definition of broad local space. Since $\varrho_k\equiv 1$ on $E_k$, we have $\mathop{\mathrm{\mathds 1}}_{E_k}\Gamma(\varrho_k)\equiv 0$ by [\[eq:SLoc:2\]](#eq:SLoc:2){reference-type="eqref" reference="eq:SLoc:2"}.
Thus, again by the Leibniz rule and by the assumption (the mixed term vanishing $\mssm$-a.e. on $E_k$ by Cauchy--Schwarz, since $\mathop{\mathrm{\mathds 1}}_{E_k}\Gamma(\varrho_k)\equiv 0$), $$\begin{aligned} \mathop{\mathrm{\mathds 1}}_{E_k}\Gamma(f^{\scriptscriptstyle{(n)}}_k\varrho_k)= \mathop{\mathrm{\mathds 1}}_{E_k}\left\lvert f^{\scriptscriptstyle{(n)}}_k\right\rvert^2\Gamma(\varrho_k)+\mathop{\mathrm{\mathds 1}}_{E_k}\left\lvert\varrho_k\right\rvert^2\Gamma(f^{\scriptscriptstyle{(n)}}_k)=\mathop{\mathrm{\mathds 1}}_{E_k} \Gamma(f^{\scriptscriptstyle{(n)}}_k)\leq \mathop{\mathrm{\mathds 1}}_{E_k} \Gamma'(f^{\scriptscriptstyle{(n)}}_k)\,\,\mathrm{,}\;\,\end{aligned}$$ and, taking the limit as $n\to\infty$ (possibly, up to choosing a suitable non-relabeled subsequence), $$\label{eq:p:LocalityProbab:3} \begin{aligned} \mathop{\mathrm{\mathds 1}}_{E_k} \Gamma(f\varrho_k)=& \mathop{\mathrm{\mathds 1}}_{E_k} \Gamma(f_k\varrho_k)= \mathop{\mathrm{\mathds 1}}_{E_k} \lim_{n}\Gamma(f^{\scriptscriptstyle{(n)}}_k\varrho_k)\leq \mathop{\mathrm{\mathds 1}}_{E_k} \lim_{n}\Gamma'(f^{\scriptscriptstyle{(n)}}_k) \\ =& \mathop{\mathrm{\mathds 1}}_{E_k} \Gamma'(f_k)=\mathop{\mathrm{\mathds 1}}_{E_k}\Gamma'(f) \leq 1 \end{aligned} \quad\quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,$$ where the last equality holds by locality [\[eq:SLoc:2\]](#eq:SLoc:2){reference-type="eqref" reference="eq:SLoc:2"} of $\Gamma'$ and definition of $(G_\bullet,f_\bullet)$.
Again by [\[eq:SLoc:2\]](#eq:SLoc:2){reference-type="eqref" reference="eq:SLoc:2"}, and by [\[eq:p:LocalityProbab:3\]](#eq:p:LocalityProbab:3){reference-type="eqref" reference="eq:p:LocalityProbab:3"}, $$\begin{aligned} \mathop{\mathrm{\mathds 1}}_{E_k} \Gamma(f) =\mathop{\mathrm{\mathds 1}}_{E_k} \Gamma(f\varrho_k)\leq \mathop{\mathrm{\mathds 1}}_{E_k} \Gamma'(f) \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\end{aligned}$$ hence, letting $k\to\infty$ and since $\mssm\big({ \cap_k E_k^\mathrm{c}}\big)=0$, $$\begin{aligned} \Gamma(f) \leq \Gamma'(f) \leq 1 \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\end{aligned}$$ which also shows that $f\in\mbbL^{\mssm}_{\loc,b}$, and thus concludes the proof of the first assertion. The assertion on the intrinsic distances is an immediate consequence of the first assertion together with the definition [\[eq:IntrinsicD\]](#eq:IntrinsicD){reference-type="eqref" reference="eq:IntrinsicD"} of intrinsic distance. ◻ Symmetrizing the assumptions in Proposition [Proposition 38](#p:LocalityProbab){reference-type="ref" reference="p:LocalityProbab"} and combining it with Proposition [Proposition 37](#p:Locality){reference-type="ref" reference="p:Locality"}, we obtain the following. **Corollary 40** (Mutual implications for weighted spaces). *Let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be quasi-regular strongly local Dirichlet spaces with same underlying topological measurable space $(X,\tau,\Sigma)=(X',\tau',\Sigma')$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} and different measures $\mssm$ and $\mssm'$. Further let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance. Assume that:* 1. *[\[i:c:LocalityDistances:4\]]{#i:c:LocalityDistances:4 label="i:c:LocalityDistances:4"} there exists $\mcD$ a pseudo-core for both $(\mcE,\mcF)$ on $L^2(\mssm)$ and $(\mcE',\mcF')$ on $L^2(\mssm')$, additionally so that $\Gamma=\Gamma'$ on $\mcD$.* 2.
*[\[i:c:LocalityDistances:3\]]{#i:c:LocalityDistances:3 label="i:c:LocalityDistances:3"} $\mssm'=\theta\mssm$ for some $\theta\in L^0(\mssm)$, and there exist $E_\bullet, G_\bullet\in\msG_0\cap \msG_0'$ with the following properties:* 1. *[\[i:c:LocalityDistances:1\]]{#i:c:LocalityDistances:1 label="i:c:LocalityDistances:1"} for each $k\in {\mathbb N}$ there exists a constant $a_k>0$ such that $$0<a_k\leq \theta \leq a_k^{-1}<\infty \quad \mssm\text{-a.e.} \quad \text{on } G_k\,\,\mathrm{;}\;\,$$ that is, $\theta, \theta^{-1}\in {{\big({L^\infty(\mssm)}\big)}^\bullet_{\loc}}(G_\bullet)$.* 2. *[\[i:c:LocalityDistances:2\]]{#i:c:LocalityDistances:2 label="i:c:LocalityDistances:2"} for each $k\in {\mathbb N}$ it holds $E_k\subset G_k$ $\mcE$- and $\mcE'$-quasi-everywhere, and there exists $\varrho_k\in \mcD$ with $\mathop{\mathrm{\mathds 1}}_{E_k}\leq \varrho_k\leq \mathop{\mathrm{\mathds 1}}_{G_k}$ $\mssm$-a.e..* *Then,* 1. *$\big({\Gamma, \mbbL^{\mssm}_{\loc,b}}\big)= \big({\Gamma',\mbbL^{\prime\, \mssm'}_{\loc,b}}\big)$ and $\mssd_\mssm= \mssd_{\mssm'}$;* 2. *$(\mathsf{ScL}_{\mssm,\tau,\mssd})$, resp. $(\mathsf{SL}_{\mssm,\mssd})$, $(\mathsf{cSL}_{\tau,\mssm,\mssd})$, $(\mathsf{Rad}_{\mssd,\mssm})$, holds if and only if $(\mathsf{ScL}_{\mssm',\tau,\mssd})$, resp. $(\mathsf{SL}_{\mssm',\mssd})$, $(\mathsf{cSL}_{\tau,\mssm',\mssd})$, $(\mathsf{Rad}_{\mssd,\mssm'})$ holds.* ## Form comparison In this section we provide a full comparison of a Dirichlet form $(\mcE,\mcF)$ with a Cheeger energy $\mathsf{Ch}_{*,\mssd,\mssm}$ defined on the same space. Roughly speaking, we show that $\mcE$ is dominated by $\mathsf{Ch}_{*,\mssd,\mssm}$ under the Rademacher property for $\mssd$, and that the reverse domination holds under the continuous-Sobolev--to--Lipschitz property for $\mssd$ and the *$\tau$-upper regularity* property for $(\mcE,\mcF)$ (Dfn. [Definition 46](#d:TUpperReg){reference-type="ref" reference="d:TUpperReg"}).
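Before entering the comparison, it may help to keep in mind the model case, recorded here purely as an illustration of ours (with the normalization of the Cheeger energy used in this paper): for the classical Dirichlet form on Euclidean space both dominations are in fact equalities.

```latex
% Model case: X = R^n, tau the Euclidean topology, d the Euclidean distance,
% m the Lebesgue measure, (E,F) the classical Dirichlet energy on W^{1,2}(R^n):
\begin{equation*}
  \Gamma(f) \;=\; \lvert \nabla f \rvert^2
  \;=\; \left\lvert\mathrm{D}f\right\rvert_{w,\mssd}^2
  \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad
  \mssd_{\mssm} \;=\; \mssd \,\,\mathrm{,}\;\,
\end{equation*}
% the second equality being the classical identification of the minimal weak
% upper gradient with the modulus of the distributional gradient on R^n.
```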
### Form comparison under the Rademacher property The next Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} is an extension of the same result obtained for *energy-measure spaces* by L. Ambrosio, N. Gigli, and G. Savaré in [@AmbGigSav15], and for extended metric-topological *probability* spaces by L. Ambrosio, M. Erbar, and G. Savaré in [@AmbErbSav16]. In the proof we make use of assertions proven in [@AmbErbSav16] for extended metric-topological probability spaces, and of assertions proven in [@AmbGigSav14]. The compatibility between different definitions in these references is granted by Proposition [Proposition 22](#p:ConsistencyCheeger){reference-type="ref" reference="p:ConsistencyCheeger"}. **Lemma 41**. *Let $(X,\tau,\mssd)$ be an extended metric-topological space (Dfn. [Definition 3](#d:AES){reference-type="ref" reference="d:AES"}) and $K\subset X$ be $\tau$-compact. Then, $\mssd({\,\cdot\,}, K)$ is $\tau$-lower semicontinuous (in particular, $\msB_{\tau}$-measurable).* *Remark 42*. When $(X,\tau)$ is additionally Polish, Lemma [Lemma 41](#l:AGS){reference-type="ref" reference="l:AGS"} is claimed without proof in (the proof of) [@AmbGigSav14 Lem. 4.11]. We provide a proof for completeness and for reference. *Proof of Lemma [Lemma 41](#l:AGS){reference-type="ref" reference="l:AGS"}.* By [@AmbErbSav16 Eqn. (4.3)] the $\tau$-admissible extended distance $\mssd$ is $\tau^{\scriptscriptstyle{\times 2}}$-lower semicontinuous. Let $\left(x_\alpha\right)_\alpha\subset X$ be any net $\tau$-converging to $x\in X$. For each $\varepsilon>0$ there exists $y_{\alpha(\varepsilon)}\in K$ so that $\mssd(x_\alpha,K)\geq \mssd(x_\alpha,y_{\alpha(\varepsilon)})-\varepsilon$. By $\tau$-compactness of $K$, for each $\varepsilon>0$ there exists $y_\varepsilon\in K$ a $\tau$-accumulation point of $\left(y_{\alpha(\varepsilon)}\right)_{\alpha(\varepsilon)}$. 
Thus, by the above inequality, by $\tau^{\scriptscriptstyle{\times 2}}$-lower-semicontinuity of $\mssd$, and since $y_\varepsilon\in K$, $$\begin{aligned} \liminf_\alpha \mssd(x_\alpha,K)\geq \liminf_{\alpha(\varepsilon)}\liminf_\alpha \mssd(x_\alpha,y_{\alpha(\varepsilon)})-\varepsilon\geq \mssd(x,y_\varepsilon) -\varepsilon\geq \mssd(x,K) -\varepsilon\,\,\mathrm{,}\;\,\end{aligned}$$ which concludes the assertion by arbitrariness of $\varepsilon>0$. ◻ The following is a rewriting of [@AmbGigSav14 Lem. 4.11]. While our assumptions are milder, the proof in [@AmbGigSav14] applies with minor modifications. **Lemma 43** ([@AmbGigSav14 Lem. 4.11]). *Let $(X,\tau,\mssd)$ be a complete extended metric-topological space, and $\mssm$ be a $\sigma$-finite $\tau$-Radon measure on $\msB_{\tau}$. Further let $\mssm'\mathop{\mathrm{\coloneqq}}\theta \mssm$ be another $\sigma$-finite measure on $\msB_{\tau}$ with density $\theta$ satisfying the following condition: there exists a sequence of $\tau$-compact sets $K_i$ with $K_i\subset K_{i+1}$, and constants $r_i,c_i,C_i>0$ such that $$\label{eq:l:AGS2:0} \mssm \big({\cap_i K_i^\mathrm{c}}\big)=0 \qquad \text{and} \qquad 0<c_i\leq \theta \leq C_i <\infty \quad \quad \mssm\text{-a.e.} \text{ on } \left\{\mssd({\,\cdot\,}, K_i) < r_i\right\} \,\,\mathrm{.}$$ Then, the relaxed gradient $\left\lvert\mathrm{D}f\right\rvert_{*,\mssd,\mssm'}$ induced by $\mssm'$ coincides $\mssm$-a.e. with $\left\lvert\mathrm{D}f\right\rvert_{*,\mssd,\mssm}$ for every $f\in \msD({\mathsf{Ch}_{*,\mssd,\mssm'}})\cap \msD({\mathsf{Ch}_{*,\mssd,\mssm}})$.
If moreover there exists a single $r>0$ such that [\[eq:l:AGS2:0\]](#eq:l:AGS2:0){reference-type="eqref" reference="eq:l:AGS2:0"} holds with $r$ in place of $r_i$, then $$f\in \msD({\mathsf{Ch}_{*,\mssd,\mssm}})\,\,\mathrm{,}\;\,\quad f\,\,\mathrm{,}\;\,\left\lvert\mathrm{D}f\right\rvert_{*,\mssd,\mssm} \in L^2(\mssm') \implies f\in \msD({\mathsf{Ch}_{*,\mssd,\mssm'}}) \,\,\mathrm{.}$$* *Proof.* Since $\mssm$ is $\sigma$-finite $\tau$-Radon, there exists a sequence of $\tau$-compact sets $K_i$ satisfying $\mssm\big({\cap_i K_i^\mathrm{c}}\big)=0$. The rest of the proof holds exactly as in [@AmbGigSav14], taking care to substitute the $\tau$-compact set $K$ there with $K_i$ such that $\mssm (K\cap K_i)>0$, which exists since $\left(K_i\right)_i$ exhausts $X$ up to an $\mssm$-negligible set. ◻ **Proposition 44**. *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} and admitting carré du champ operator, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended distance. Further assume that* 1. *[\[i:p:IneqCdC:A\]]{#i:p:IneqCdC:A label="i:p:IneqCdC:A"} $(X,\tau,\mssd)$ is a complete extended metric-topological space (Dfn. [Definition 3](#d:AES){reference-type="ref" reference="d:AES"});* 2.
*[\[i:p:IneqCdC:B\]]{#i:p:IneqCdC:B label="i:p:IneqCdC:B"} $\mssm$ is *$\mssd$-moderate* in the following sense: there exists a $\tau$-compact $\mcE$-nest $\left(K_i\right)_i$ and $\varepsilon>0$ such that $$\label{eq:p:IneqCdC:00} \kappa_i\mathop{\mathrm{\coloneqq}}\mssm\left\{\mssd({\,\cdot\,}, K_i)< \varepsilon\right\} <\infty\,\,\mathrm{.}$$* *If $(\mbbX,\mcE)$ satisfies $(\mathsf{Rad}_{\mssd,\mssm})$ for $\msB_{\tau}$, then $$\label{eq:p:IneqCdC:0} \Gamma(f)\leq \left\lvert\mathrm{D}f\right\rvert_{w,\mssd_{\mssm}}^2 \leq \left\lvert\mathrm{D}f\right\rvert_{w,\mssd}^2 \quad \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mathrm{Lip}_b(\mssd,\msB_{\tau})\,\,\mathrm{.}$$ In particular, $$\label{eq:p:IneqCdC:001} \mcE\leq \mathsf{Ch}_{w,\mssd_{\mssm},\mssm}\leq \mathsf{Ch}_{w,\mssd,\mssm}\,\,\mathrm{.}$$* *Remark 45*. The existence of a $\tau$-compact $\mcE$-nest in [\[i:p:IneqCdC:B\]](#i:p:IneqCdC:B){reference-type="ref" reference="i:p:IneqCdC:B"} is always satisfied as a consequence of the quasi-regularity of $(\mbbX,\mcE)$. The second condition is a form of *moderance* of $\mssm$ ---in the sense of [@LzDSSuz20 §2.5.2]--- compatible with $\mssd$. Together with the assumption of the Rademacher property $(\mathsf{Rad}_{\mssd,\mssm})$, it is tantamount to the $\mssm$-uniform algebraic $\tau$-localizability (Dfn. [Definition 12](#d:Localizability){reference-type="ref" reference="d:Localizability"}) of $(\mbbX,\mcE)$. Consistently with Remark [Remark 13](#r:AlgLocTrivial){reference-type="ref" reference="r:AlgLocTrivial"}, the assumption in [\[eq:p:IneqCdC:00\]](#eq:p:IneqCdC:00){reference-type="eqref" reference="eq:p:IneqCdC:00"} is trivially satisfied whenever $\mssm X<\infty$, which is why this assumption does not appear in [@AmbErbSav16], addressing only probability spaces. 
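By way of example (ours, and conditional on the availability of a $\tau$-compact $\mcE$-nest $\left(K_i\right)_i$ of bounded sets): on $X={\mathbb R}^n$ with the Euclidean distance, every Radon measure $\mssm$ is $\mssd$-moderate along any such nest.

```latex
% Say K_i is contained in the ball B_i(0); for any eps in (0,1] one then has
\begin{equation*}
  \kappa_i \;=\; \mssm\left\{ \mssd({\,\cdot\,}, K_i) < \varepsilon \right\}
  \;\leq\; \mssm\big({B_{i+1}(0)}\big) \;<\; \infty \,\,\mathrm{,}\;\,
\end{equation*}
% since Radon measures on R^n are finite on bounded sets.
```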
*Proof of Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}.* By [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"} the space $(\mbbX,\mcE)$ satisfies $({\mssd}\textrm{-}\mathsf{Rad}_{\mssm})$, which implies the second inequality by definition of the objects involved. Thus, it suffices to show the first inequality. Firstly, let us note that $(X,\tau,\mssd_{\mssm})$ is a complete extended metric-topological space by Lemma [Lemma 16](#l:RadCompleteness){reference-type="ref" reference="l:RadCompleteness"}. Suppose for the moment that $\mssm$ is a probability measure. Since $(X,\tau)$ is a topological Luzin space, it is a Radon space. In particular, $\mssm$ is Radon. Then, we can apply [@AmbErbSav16 Thm. 12.5] to the complete extended metric-topological Radon probability space $(X,\tau,\mssd,\mssm)$ and conclude the assertion. *Heuristics*. Now, let us show how to extend the statement to the case of any $\sigma$-finite measure $\mssm$. We need to treat simultaneously the square field operator $\Gamma$ and the minimal weak upper gradient $\left\lvert\mathrm{D}{\,\cdot\,}\right\rvert_{w,\mssd_\mssm,\mssm}$. To this end, we find a probability density $\theta\in L^1(\mssm)$, set $\mssm'\mathop{\mathrm{\coloneqq}}\theta\mssm$, and show that $\Gamma=\Gamma'$ and $\left\lvert\mathrm{D}{\,\cdot\,}\right\rvert_{w,\mssd_\mssm,\mssm}=\left\lvert\mathrm{D}{\,\cdot\,}\right\rvert_{w,\mssd_\mssm,\mssm'}$. The conclusion follows from these equalities together with the inequality established in the probability case applied to $\mssm'$. For the square field, we make use of the result on the Girsanov transform of $(\mcE,\mcF)$ by a factor $\sqrt{\theta}\in\mcF$, thoroughly discussed in the generality of quasi-regular Dirichlet spaces by C.-Z. Chen and W. Sun in [@CheSun06].
For the minimal weak upper gradient, we make use of the locality result for $\left\lvert\mathrm{D}{\,\cdot\,}\right\rvert_{w,\mssd_\mssm,\mssm}$ under transformation of the reference measure by a factor $\theta$ locally bounded away from $0$ and infinity on neighborhoods of compact sets, Lemma [Lemma 43](#l:AGS2){reference-type="ref" reference="l:AGS2"}. *Reduction*. We may and do assume that $\mssm X=\infty$, the case $\mssm X<\infty$ being covered by the probability case upon rescaling $\mssm$ to a probability measure. Then, there exists $i_*$ such that $\mssm K_i\geq 1$ for every $i\geq i_*$. Thus, up to discarding the first $i_*$ elements of $\left(K_i\right)_i$, we may and will assume with no loss of generality that $\kappa_i\geq1$ for every $i$. As a consequence, $\kappa_i^{-1/2}\leq 1$ for every $i\in{\mathbb N}$. We shall further assume that $\varepsilon\leq 1$, again with no loss of generality up to replacing $\varepsilon$ by $\varepsilon\wedge 1$. Throughout the proof we let $(K_i)_\delta\mathop{\mathrm{\coloneqq}}\left\{\mssd({\,\cdot\,},K_i)<\delta\right\}$ for all $\delta>0$. *Construction of a density*. We start by showing that there exists a $\msB_{\tau}$-measurable $\theta\colon X\to {\mathbb R}$ satisfying: $$\label{eq:IneqCdC:0.1} \theta\in L^1(\mssm)\cap L^\infty(\mssm)\,\,\mathrm{,}\;\,\qquad \left\lVert\theta\right\rVert_{L^1(\mssm)}=1\,\,\mathrm{,}\;\,\qquad \theta>0 \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad \sqrt\theta\in \mcF\,\,\mathrm{.}$$ Let $\left(K_i\right)_i$ and $\varepsilon$ be as in [\[eq:p:IneqCdC:00\]](#eq:p:IneqCdC:00){reference-type="eqref" reference="eq:p:IneqCdC:00"}, and $S\colon [0,\infty]\to [0,1]$ be defined by $$\label{eq:MultiTrunc} S(t)\mathop{\mathrm{\coloneqq}} \begin{cases} -\frac{1}{2}t+1 & \text{if } 0\leq t\leq 1 \\ \frac{1}{2} & \text{if } 1\leq t \leq 2 \\ -\frac{1}{2}t+\frac{3}{2} & \text{if } 2\leq t\leq 3 \\ 0 & \text{if } 3\leq t\leq +\infty \end{cases} \,\,\mathrm{.}$$ Further set (cf.
Figs. [1](#fig:FunctionPsi){reference-type="ref" reference="fig:FunctionPsi"} and [2](#fig:Phi3){reference-type="ref" reference="fig:Phi3"} below) $$\psi_i\mathop{\mathrm{\coloneqq}}S\big({\tfrac{3}{\varepsilon}\,\mssd({\,\cdot\,}, K_i)}\big)\,\,\mathrm{,}\;\,\qquad f_i\mathop{\mathrm{\coloneqq}}\frac{\varepsilon}{3\sqrt{\kappa_i}}\,\psi_i \,\,\mathrm{,}\;\,\qquad \phi_n\mathop{\mathrm{\coloneqq}}\sum_{i=0}^n 2^{-i-1} f_i \,\,\mathrm{.}$$ All the above functions are $\msB_{\tau}$-measurable (in fact: $\tau$-upper semicontinuous) by Lemma [Lemma 41](#l:AGS){reference-type="ref" reference="l:AGS"}. ![The functions $\psi_i$.](Figure1.pdf){#fig:FunctionPsi} ![The function $\phi_3$, taking value $\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}$ on $K_1$.](Figure2.pdf){#fig:Phi3} Since $\kappa_i\mathop{\mathrm{\coloneqq}}\mssm \left\{\mssd({\,\cdot\,},K_i)<\varepsilon\right\}<\infty$ we have that $f_i\in L^2(\mssm)$ and $\left\lVert f_i\right\rVert_{L^2(\mssm)}\leq \tfrac{\varepsilon}{3}\leq 1$ for every $i$. Thus, the sequence $\left(\phi_n\right)_n$ satisfies $$\begin{aligned} \left\lVert\phi_n\right\rVert_{L^2(\mssm)}\leq 1\,\,\mathrm{,}\;\,\qquad L^2(\mssm)\text{-}\lim_{n}\phi_n= \sum_{i=0}^\infty 2^{-i-1} f_i \eqqcolon\phi\in L^2(\mssm)\end{aligned}$$ by Monotone Convergence. Furthermore, $f_i\in \mathrm{Lip}_b(\mssd,\msB_{\tau})$ and $\mathrm{L}_{\mssd}(f_i)\leq (\kappa_i)^{-1/2}\leq 1$ for every $i$ by definition, hence $$\begin{aligned} \label{eq:IneqCdC:0.15} \phi_n\in \mathrm{Lip}_b(\mssd,\msB_{\tau}) \qquad \text{and} \qquad \mathrm{L}_{\mssd}(\phi_n)\leq 1\end{aligned}$$ for every $n$ by triangle inequality for the Lipschitz semi-norm $\mathrm{L}_{\mssd}({\,\cdot\,})$. As a consequence, $\left(\phi_n\right)_n$ is uniformly bounded in $\mcF$ by $(\mathsf{Rad}_{\mssd,\mssm})$ and thus $\phi\in \mcF$ by e.g. [@MaRoe92 Lem. I.2.12].
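The Lipschitz bound $\mathrm{L}_{\mssd}(f_i)\leq \kappa_i^{-1/2}$ used above follows from the explicit form of $S$; for convenience, the elementary computation reads:

```latex
% S is piecewise affine with slopes in {-1/2, 0}, so L(S) = 1/2, and
% d(. , K_i) is 1-Lipschitz w.r.t. d by the triangle inequality; hence
\begin{equation*}
  \mathrm{L}_{\mssd}(\psi_i) \;\leq\; \tfrac{1}{2}\cdot\tfrac{3}{\varepsilon}
  \;=\; \tfrac{3}{2\varepsilon} \,\,\mathrm{,}\;\,\qquad
  \mathrm{L}_{\mssd}(f_i) \;\leq\; \tfrac{\varepsilon}{3\sqrt{\kappa_i}}\cdot\tfrac{3}{2\varepsilon}
  \;=\; \tfrac{1}{2\sqrt{\kappa_i}} \;\leq\; \kappa_i^{-1/2} \,\,\mathrm{.}
\end{equation*}
```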
Since $$\label{eq:l:IneqCdC:0.25} \phi\big\lvert_{(K_i)_{\varepsilon/3}}\geq 2^{-i-1}f_i\big\lvert_{(K_i)_{\varepsilon/3}}=\frac{2^{-i-2}\varepsilon}{3\sqrt{\kappa_i}}>0$$ and since $\left(K_i\right)_i$ exhausts $X$ up to an $\mssm$-negligible set, then $\phi>0$ $\mssm$-a.e. on $X$. Letting $\theta\mathop{\mathrm{\coloneqq}}\phi^2/\left\lVert\phi\right\rVert_{L^2(\mssm)}^2$ shows the assertion in [\[eq:IneqCdC:0.1\]](#eq:IneqCdC:0.1){reference-type="eqref" reference="eq:IneqCdC:0.1"}. *Dirichlet forms*. Now, let us set $\mssm'\mathop{\mathrm{\coloneqq}}\theta\mssm\sim \mssm$. Since $\sqrt\theta\in\mcF_b$, by [@CheSun06 Thm. 2.2] the form $$\begin{aligned} \mcE'(f)\mathop{\mathrm{\coloneqq}}\int_X \Gamma(f)\mathop{}\!\mathrm{d}\mssm'\,\,\mathrm{,}\;\,\qquad f\in\mcF\,\,\mathrm{,}\;\,\end{aligned}$$ is closable, and its closure $\big({\mcE',\mcF'}\big)$ is a Dirichlet form on $L^2(\mssm')$ with square field operator $\Gamma'$ satisfying $$\begin{aligned} \label{eq:l:IneqCdC:0.5} \Gamma'(f)=\Gamma(f) \quad \mssm'\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mcF\cap L^\infty(\mssm')=\mcF_b\,\,\mathrm{,}\;\,\end{aligned}$$ and the equality extends to all $f\in\mcF_b'$ by $\mcF'$-density of $\mcF$ in $\mcF'$. In particular, since $\mssm'$ is a finite measure, $\mcF_b'\supset \mathrm{Lip}_b(\mssd,\msB_{\tau})$ by $(\mathsf{Rad}_{\mssd,\mssm})$, and [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"} holds for every $f\in \mathrm{Lip}_b(\mssd,\msB_{\tau})$. *Minimal relaxed slopes*. 
On the complete extended metric-topological measure space $(X,\tau,\mssd_\mssm,\mssm)$ we have that $$\label{eq:l:IneqCdC:1} \left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm}=\left\lvert\mathrm{D}f\right\rvert_{w,\mssd_\mssm,\mssm}\,\,\mathrm{,}\;\,\qquad f\in\mathrm{Lip}_b(\mssd_\mssm,\msB_{\tau})\,\,\mathrm{.}$$ This readily follows from the locality of both $\left\lvert\mathrm{D}{\,\cdot\,}\right\rvert_{*}$ and $\left\lvert\mathrm{D}{\,\cdot\,}\right\rvert_{w}$ in the sense of e.g. [@AmbGigSav14 Prop. 4.8(b)] and [@AmbGigSav14b Eqn. (2.18)]. Secondly, let us verify that the space $(X,\tau,\mssd_\mssm,\mssm)$ satisfies the assumptions of Lemma [Lemma 43](#l:AGS2){reference-type="ref" reference="l:AGS2"}. The fact that $(X,\tau,\mssd_\mssm)$ is an extended metric-topological space was already noted in the beginning of the proof. Thus, it suffices to show that $\theta$ as above satisfies [\[eq:l:AGS2:0\]](#eq:l:AGS2:0){reference-type="eqref" reference="eq:l:AGS2:0"} with $\mssd_\mssm$ in place of $\mssd$ there. In fact, since $\theta\in L^\infty(\mssm)$, it suffices to show the existence of $c_i$ and $r_i$. We show that there exists $c_i$ satisfying [\[eq:l:AGS2:0\]](#eq:l:AGS2:0){reference-type="eqref" reference="eq:l:AGS2:0"} with $r_i\mathop{\mathrm{\coloneqq}}\varepsilon/3$. Since $\mssd\leq \mssd_\mssm$ by $(\mathsf{Rad}_{\mssd,\mssm})$ and [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}, we have that $\left\{\mssd_\mssm({\,\cdot\,},K_i)<\varepsilon/3\right\}\subset (K_i)_{\varepsilon/3}$ for every $i\in{\mathbb N}$. By [\[eq:l:IneqCdC:0.25\]](#eq:l:IneqCdC:0.25){reference-type="eqref" reference="eq:l:IneqCdC:0.25"}, it suffices to set $c_i\mathop{\mathrm{\coloneqq}}\left(\frac{2^{-i-2} \varepsilon}{3\sqrt{\kappa_i}\left\lVert\phi\right\rVert_{L^2(\mssm)}}\right)^2$. 
Now, applying Lemma [Lemma 43](#l:AGS2){reference-type="ref" reference="l:AGS2"} to the probability measure $\mssm'$, we have that $$\begin{aligned} \label{eq:l:IneqCdC:2} \left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm}=\left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm'}\,\,\mathrm{,}\;\,\qquad f\in \mathrm{Lip}_b(\mssd_\mssm,\msB_{\tau})\supset\mathrm{Lip}_b(\mssd,\msB_{\tau})\,\,\mathrm{.}\end{aligned}$$ *Intrinsic distances*. We claim that $\mssd_\mssm=\mssd_{\mssm'}$. Let us verify the assumptions of Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"} with $\mcD\mathop{\mathrm{\coloneqq}}\mcF$. Indeed, $\mcF$ is a (pseudo-)core for both $(\mcE,\mcF)$ and $(\mcE',\mcF')$ and $\Gamma=\Gamma'$ on $\mcF$ by [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"}. This verifies assumption [\[i:c:LocalityDistances:4\]](#i:c:LocalityDistances:4){reference-type="ref" reference="i:c:LocalityDistances:4"} in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}. In light of [\[eq:l:AGS2:0\]](#eq:l:AGS2:0){reference-type="eqref" reference="eq:l:AGS2:0"} we see that $\theta$ satisfies the assumptions in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}[\[i:c:LocalityDistances:1\]](#i:c:LocalityDistances:1){reference-type="ref" reference="i:c:LocalityDistances:1"} with $E_k\mathop{\mathrm{\coloneqq}}\mathop{\mathrm{int}}_\mcE K_k$, $G_k\mathop{\mathrm{\coloneqq}}\mathop{\mathrm{int}}_\mcE(K_k)_{\varepsilon/3}$, and $a_k\mathop{\mathrm{\coloneqq}}c_k$. Indeed $E_\bullet,G_\bullet\in\msG_0$ since $K_\bullet$ is an $\mcE$-nest, and $E_\bullet,G_\bullet\in\msG_0'$ since $\mcF$ is $\mcF'$-dense in $\mcF'$ by construction of $(\mcE',\mcF')$.
Now, let $\varrho_k\mathop{\mathrm{\coloneqq}}\left[1-\tfrac{3}{\varepsilon}\mssd({\,\cdot\,}, K_k)\right]_+$ and note that $\varrho_k\in \mathrm{Lip}_b(\mssd,\msB_{\tau})$ by Lemma [Lemma 41](#l:AGS){reference-type="ref" reference="l:AGS"}. Thus, $\Gamma(\varrho_k)\leq \mathrm{L}_{\mssd}(\varrho_k)^2= 9\varepsilon^{-2}$ by $(\mathsf{Rad}_{\mssd,\mssm})$ and in fact $\Gamma(\varrho_k)\leq 9\varepsilon^{-2}\, \mathop{\mathrm{\mathds 1}}_{G_k}$ by [\[eq:SLoc:2\]](#eq:SLoc:2){reference-type="eqref" reference="eq:SLoc:2"}. Since $\theta\geq c_k>0$ on $(K_k)_{\varepsilon/3}\supset G_k$, we conclude that $\Gamma(\varrho_k)\in L^1(\mssm)$ too. Thus, the assumptions in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}[\[i:c:LocalityDistances:2\]](#i:c:LocalityDistances:2){reference-type="ref" reference="i:c:LocalityDistances:2"} hold as well. This concludes the verification of the assumptions in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}, and thus $\mssd_\mssm=\mssd_{\mssm'}$. *Conclusion*. Respectively by [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"}, [@AmbErbSav16 Thm. 12.5], [\[eq:l:IneqCdC:1\]](#eq:l:IneqCdC:1){reference-type="eqref" reference="eq:l:IneqCdC:1"} and $\mssd_\mssm=\mssd_{\mssm'}$, [\[eq:l:IneqCdC:2\]](#eq:l:IneqCdC:2){reference-type="eqref" reference="eq:l:IneqCdC:2"}, and again [\[eq:l:IneqCdC:1\]](#eq:l:IneqCdC:1){reference-type="eqref" reference="eq:l:IneqCdC:1"}, $$\begin{aligned} \Gamma(f)=\Gamma'(f)\leq \left\lvert\mathrm{D}f\right\rvert_{w,\mssd_{\mssm'},\mssm'}^2 = \left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm'}^2 = \left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm}^2 = \left\lvert\mathrm{D}f\right\rvert_{w,\mssd_\mssm,\mssm}^2\,\,\mathrm{,}\;\,\qquad f\in\mathrm{Lip}_b(\mssd,\msB_{\tau})\,\,\mathrm{,}\;\,\end{aligned}$$ which concludes the proof. 
◻ ### Form comparison under the Sobolev-to-Lipschitz property The chain of inequalities [\[eq:p:IneqCdC:0\]](#eq:p:IneqCdC:0){reference-type="eqref" reference="eq:p:IneqCdC:0"} in Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} is completed by including the slope of a Lipschitz function, viz. $$\Gamma(f)\leq \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd_{\mssm}}^2 \leq \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd}^2\leq \left\lvert\mathrm{D}f\right\rvert_{\mssd}^2 \quad \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mathrm{Lip}_b(\mssd,\msB_{\tau})\,\,\mathrm{,}\;\,$$ where the last inequality holds by the very definition of the minimal weak upper gradient. In the same spirit of duality as in §[3](#s:LocGlob){reference-type="ref" reference="s:LocGlob"}, this suggests that the opposite chain of inequalities, viz. $$\label{eq:SLChain1} \Gamma(f)\geq \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd_{\mssm}}^2 \geq \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd}^2\geq \left\lvert\mathrm{D}f\right\rvert_{\mssd}^2 \quad \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mbbL^{\mssm,\tau}_{\loc,b}\,\,\mathrm{,}\;\,$$ may be satisfied if some Sobolev-to-Lipschitz-type property holds. Indeed, under the assumption of $(\mathsf{cSL}_{\tau,\mssm,\mssd})$---hence, in light of [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}, under $(\mathsf{SL}_{\mssm,\mssd})$ as well---the second inequality is a consequence of $({\mssd}\textrm{-}\mathsf{SL}_{\mssm})$, given by [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}, together with the definition of the objects involved. The last inequality is in fact an equality, since the opposite inequality always holds, as detailed above. 
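If both chains of inequalities hold simultaneously (the first under the Rademacher property, [\[eq:SLChain1\]](#eq:SLChain1){reference-type="eqref" reference="eq:SLChain1"} under a Sobolev-to-Lipschitz-type property), they collapse into a chain of identities, viz. $$\Gamma(f)= \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd_{\mssm}}^2 = \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd}^2= \left\lvert\mathrm{D}f\right\rvert_{\mssd}^2 \quad \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mathrm{Lip}_b(\mssd,\msB_{\tau})\cap \mbbL^{\mssm,\tau}_{\loc,b}\,\,\mathrm{,}\;\,$$ the form-level counterpart of which is the identification $\mcE=\mathsf{Ch}_{w,\mssd,\mssm}$ established in Corollary [Corollary 50](#c:RadStoLCheegerComparison){reference-type="ref" reference="c:RadStoLCheegerComparison"} below.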
Thus, [\[eq:SLChain1\]](#eq:SLChain1){reference-type="eqref" reference="eq:SLChain1"} in fact reads $$\label{eq:SLChain2} \Gamma(f)\geq \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd_{\mssm}}^2 \geq \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd}^2= \left\lvert\mathrm{D}f\right\rvert_{\mssd}^2 \quad \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mbbL^{\mssm,\tau}_{\loc,b}\,\,\mathrm{.}$$ In order to discuss the validity of [\[eq:SLChain2\]](#eq:SLChain2){reference-type="eqref" reference="eq:SLChain2"} we shall further need the following definition, introduced by L. Ambrosio, N. Gigli, and G. Savaré in [@AmbGigSav15 Dfn. 3.13] (also cf. [@AmbErbSav16 Dfn. 12.4] for the generality of extended metric-topological measure spaces). **Definition 46** ($\tau$-upper regularity). Let $(\mbbX,\mcE)$ be a (quasi-regular strongly local) Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} and admitting carré du champ operator $\Gamma$. We say that $(\mcE,\mcF)$ is *$\tau$-upper regular* if for every $f$ in a pseudo-core of $(\mcE,\mcF)$ there exists $\left(f_n\right)_n \subset \mcF_b\cap \mcC_b(\tau)$ and a sequence of bounded $\tau$-upper semi-continuous functions $g_n\colon X\to {\mathbb R}$ such that $$\label{eq:d:TUpperReg:0} L^2(\mssm)\text{-}\lim_{n}f_n =f\,\,\mathrm{,}\;\,\qquad \sqrt{\Gamma(f_n)}\leq g_n \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad \limsup_{n }\,\int_X g_n^2\mathop{}\!\mathrm{d}\mssm \leq \mcE(f) \,\,\mathrm{.}$$ *Remark 47*. It is noted in the proof of [@AmbGigSav15 Thm. 3.14] (after the approximation results for the asymptotic Lipschitz constant in [@AmbGigSav12B §8.3]), resp. in [@AmbErbSav16 Thm. 9.2], that the Cheeger energy $\mathsf{Ch}_{w,\mssd,\mssm}$ of a metric measure space, resp. of an extended metric-topological probability space, is always $\tau$-upper regular. 
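To sketch the mechanism behind Remark 47 in the metric-measure case: by the approximation results in [@AmbGigSav12B §8.3], for every $f$ in the domain of the Cheeger energy there exist bounded $\mssd$-Lipschitz functions $f_n\in\mcC_b(\tau)$ such that $$L^2(\mssm)\text{-}\lim_{n}f_n =f\,\,\mathrm{,}\;\,\qquad \lim_{n}\,\int_X \mathrm{Lip}_a(f_n)^2\mathop{}\!\mathrm{d}\mssm = \mathsf{Ch}_{w,\mssd,\mssm}(f)\,\,\mathrm{,}\;\,$$ where $\mathrm{Lip}_a(f_n)$ denotes the asymptotic Lipschitz constant of $f_n$, a bounded $\tau$-upper semi-continuous function dominating the minimal relaxed slope of $f_n$. The choice $g_n\mathop{\mathrm{\coloneqq}}\mathrm{Lip}_a(f_n)$ then witnesses [\[eq:d:TUpperReg:0\]](#eq:d:TUpperReg:0){reference-type="eqref" reference="eq:d:TUpperReg:0"} for $\mathsf{Ch}_{w,\mssd,\mssm}$.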
As a consequence, in comparing a general quasi-regular Dirichlet form $(\mcE,\mcF)$ with a Cheeger energy on the same (extended metric-topological) space, it is in fact necessary to discuss the $\tau$-upper regularity of $(\mcE,\mcF)$. Under the assumption of $\tau$-upper regularity, we now show the dual statement to Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}. Again, we adapt to the $\sigma$-finite case the corresponding result in [@AmbErbSav16] for extended metric-topological probability spaces. **Proposition 48**. *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} and admitting carré du champ operator, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended distance. Further assume that* 1. *[\[i:p:IneqCdC2:B\]]{#i:p:IneqCdC2:B label="i:p:IneqCdC2:B"} $\mbbL^{\mssm,\tau}_{\loc,b}$ generates $\tau$;* 2. *[\[i:p:IneqCdC2:C\]]{#i:p:IneqCdC2:C label="i:p:IneqCdC2:C"} $(\mbbX,\mcE,\mssm)$ satisfies $(\mathsf{Loc}_{\mssm,\tau})$;* 3. *[\[i:p:IneqCdC2:D\]]{#i:p:IneqCdC2:D label="i:p:IneqCdC2:D"} $(\mcE,\mcF)$ is $\tau$-upper regular.* *If $(\mbbX,\mcE)$ satisfies $(\mathsf{cSL}_{\tau,\mssm,\mssd})$, then $$\label{eq:p:IneqCdC2:0} \Gamma(f)\geq \left\lvert\mathrm{D}f\right\rvert_{w,\mssd_{\mssm}}^2 \geq \left\lvert\mathrm{D}f\right\rvert_{w,\mssd}^2 \quad \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mbbL^{\mssm,\tau}_{\loc,b}\,\,\mathrm{.}$$ In particular, $\mcE\geq \mathsf{Ch}_{w,\mssd_{\mssm},\mssm}\geq \mathsf{Ch}_{w,\mssd,\mssm}$.* *Remark 49* (Comparison with Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}). As anticipated above, Proposition [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"} ought to be understood as 'dual' to Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}. 
In this respect: 1. [\[i:r:ComparisonSL:1\]]{#i:r:ComparisonSL:1 label="i:r:ComparisonSL:1"} The assumption in Proposition [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}[\[i:p:IneqCdC2:B\]](#i:p:IneqCdC2:B){reference-type="ref" reference="i:p:IneqCdC2:B"} is one about the largeness of $\mbbL^{\mssm,\tau}_{\loc,b}$. It is required to guarantee the validity of [@AmbErbSav16 Eqn. (12.1b)] for the ---in our case: *given*--- topology $\tau$. In Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} such largeness is granted directly by the Rademacher property. Furthermore, let us note that this assumption is to be compared with the one in Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}[\[i:p:IneqCdC:A\]](#i:p:IneqCdC:A){reference-type="ref" reference="i:p:IneqCdC:A"}. Indeed, recall that $\mssd_\mssm$ is always $\tau^{\scriptscriptstyle{\times 2}}$-lower semicontinuous and $\tau$-admissible, as noted after Definition [Definition 11](#d:IntrinsicD){reference-type="ref" reference="d:IntrinsicD"}. Now, since $\mbbL^{\mssm,\tau}_{\loc,b}$ generates $\tau$, it in particular separates points (since $(X,\tau)$ is Hausdorff), hence $\mssd_\mssm$ separates points as well. Thus, $(X,\tau,\mssd_\mssm)$ is an extended metric-topological space (Dfn. [Definition 3](#d:AES){reference-type="ref" reference="d:AES"}), which translates into the assumption in Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}[\[i:p:IneqCdC:A\]](#i:p:IneqCdC:A){reference-type="ref" reference="i:p:IneqCdC:A"} when $\mssd$ is replaced by $\mssd_\mssm$. 2. 
[\[i:r:ComparisonSL:b\]]{#i:r:ComparisonSL:b label="i:r:ComparisonSL:b"} When $\mbbX$ is additionally locally compact, the assumption in Proposition [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}[\[i:p:IneqCdC2:B\]](#i:p:IneqCdC2:B){reference-type="ref" reference="i:p:IneqCdC2:B"} may be directly replaced with - $\mssd_\mssm$ separates points, i.e. it is an extended distance (as opposed to: extended *pseudo*-distance). 3. As already commented in Remark [Remark 45](#r:IneqCdC){reference-type="ref" reference="r:IneqCdC"}, the assumption in Proposition [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}[\[i:p:IneqCdC2:C\]](#i:p:IneqCdC2:C){reference-type="ref" reference="i:p:IneqCdC2:C"} is dual to the one in Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}[\[i:p:IneqCdC:B\]](#i:p:IneqCdC:B){reference-type="ref" reference="i:p:IneqCdC:B"}. 4. As for $\tau$-upper regularity, we refer the reader to [@AmbGigSav15; @AmbErbSav16] for a detailed explanation of this condition. As pointed out in [@AmbGigSav15; @AmbErbSav16], the assumption of $\tau$-upper regularity is *necessary* to the validity of (the first inequality in [\[eq:p:IneqCdC2:0\]](#eq:p:IneqCdC2:0){reference-type="eqref" reference="eq:p:IneqCdC2:0"} in) Proposition [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}. *Proof of Remark [Remark 49](#r:ComparisonSL){reference-type="ref" reference="r:ComparisonSL"}[\[i:r:ComparisonSL:b\]](#i:r:ComparisonSL:b){reference-type="ref" reference="i:r:ComparisonSL:b"}.* Since generating $\tau$ is a local property, it suffices to show the statement when $(X,\tau)$ is compact. Since $\mssd_\mssm$ separates points, so does $\mbbL^{\mssm,\tau}_{\loc,b}$. Fix a $\tau$-closed set $K\subset X$, and note that it is $\tau$-compact. For fixed $x\in K^\mathrm{c}$ and each $y\in K$ let $f_y\in \mbbL^{\mssm,\tau}_{\loc,b}$ separate $x$ from $y$. 
Without loss of generality ---up to possibly changing the sign of $f_y$ and adding to it a constant function in $\mbbL^{\mssm,\tau}_{\loc,b}$---, we may assume that $f_y(x)=0$ and $f_y(y)=a_y>0$. (We may however *not* directly assume that $f_y(y)=1$, since $\mbbL^{\mssm,\tau}_{\loc,b}$ is not a linear space.) Since each $f_y$ is $\tau$-continuous, the family $\left\{f_y> a_y/2\right\}_{y\in K}$ is a $\tau$-open cover of $K$. Let $\left(y_i\right)_{i\leq n}$ define a finite subcover and set $f_{x,K}\mathop{\mathrm{\coloneqq}}\vee_{i\leq n} f_{y_i}$. Then, $f_{x,K}\in \mbbL^{\mssm,\tau}_{\loc,b}$, $f_{x,K}(x)=0$, and $f_{x,K}>0$ everywhere on $K$; that is, $f_{x,K}$ separates $x$ from $K$. Since $K$ was arbitrary, $\mbbL^{\mssm,\tau}_{\loc,b}$ separates points from $\tau$-closed sets. By standard arguments, any family of $\tau$-continuous functions separating points from $\tau$-closed sets generates $\tau$. ◻ *Proof of Proposition [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}.* Since $(\mathsf{cSL}_{\tau,\mssm,\mssd})$ implies $({\mssd}\textrm{-}\mathsf{SL}_{\mssm})$ by [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}, the second inequality in [\[eq:p:IneqCdC2:0\]](#eq:p:IneqCdC2:0){reference-type="eqref" reference="eq:p:IneqCdC2:0"} holds by definition of the objects involved. In order to show the first inequality, we argue similarly to the proof of Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}. *Construction of a density*. Let $\left(\theta_i\right)_i$ be a latticial approximation of the identity witnessing $(\mathsf{Loc}_{\mssm,\tau})$. Without loss of generality, we may and will assume that $\left\{\theta_1\geq 3\right\}\neq \mathop{\mathrm{\varnothing}}$, hence that $\left\{\theta_i\geq 3\right\}\neq \mathop{\mathrm{\varnothing}}$ for every $i\in{\mathbb N}$. 
Further let $S\colon [0,\infty]\to [0,1]$ be defined as in [\[eq:MultiTrunc\]](#eq:MultiTrunc){reference-type="eqref" reference="eq:MultiTrunc"}, and set $$\psi_i\mathop{\mathrm{\coloneqq}}S\big({3-(\theta_i({\,\cdot\,})\wedge 3)}\big)\,\,\mathrm{,}\;\,\qquad i\in {\mathbb N}\,\,\mathrm{.}$$ Note that $\psi_i$ has the same shape as in Figure [1](#fig:FunctionPsi){reference-type="ref" reference="fig:FunctionPsi"}, and, for each $i\in{\mathbb N}$, 1. [\[i:p:IneqCdC2:proof1\]]{#i:p:IneqCdC2:proof1 label="i:p:IneqCdC2:proof1"} since $\mathop{\mathrm{\mathds 1}}\in \mbbL^{\mssm,\tau}_{\loc,b}$ and $\Gamma(\mathop{\mathrm{\mathds 1}})\equiv 0$ by locality, and since $\theta_i\in\mbbL^{\mssm,\tau}_b$, we have $\psi_i\in \mbbL^{\mssm,\tau}_b\subset \mcF$ by [\[eq:ChainRuleLoc\]](#eq:ChainRuleLoc){reference-type="eqref" reference="eq:ChainRuleLoc"} and [\[eq:SLoc:2\]](#eq:SLoc:2){reference-type="eqref" reference="eq:SLoc:2"}. 2. [\[i:p:IneqCdC2:proof2\]]{#i:p:IneqCdC2:proof2 label="i:p:IneqCdC2:proof2"} since $\theta_i\nearrow_i \infty$, the sets $E_i\mathop{\mathrm{\coloneqq}}\mathop{\mathrm{int}}_\tau\left\{\psi_i\equiv 1\right\}$ form a $\tau$-open exhaustion of $X$; 3. [\[i:p:IneqCdC2:proof3\]]{#i:p:IneqCdC2:proof3 label="i:p:IneqCdC2:proof3"} since $\theta_i\nearrow_i \infty$ and $\theta_i\in\mcC_b(\tau)$ (hence $\psi_i\in\mcC_b(\tau)$), the sets $G_i\mathop{\mathrm{\coloneqq}}\left\{\psi_i>\tfrac{1}{2}\right\}$ form a $\tau$-open exhaustion of $X$; 4. [\[i:p:IneqCdC2:proof4\]]{#i:p:IneqCdC2:proof4 label="i:p:IneqCdC2:proof4"} $\cl_\tau E_i\subset G_i$; 5. [\[i:p:IneqCdC2:proof5\]]{#i:p:IneqCdC2:proof5 label="i:p:IneqCdC2:proof5"} since $\psi_i\in L^2(\mssm)$, we have $\mssm G_i<\infty$. Since $\left(G_i\right)_i$ is an exhaustion of $X$, we may and will assume with no loss of generality ---up to dropping some of the first elements of all sequences above--- that $\mssm G_1\geq 1$, in such a way that $(\mssm G_i)^{-1/2}\leq 1$ for every $i\in{\mathbb N}$. 
Now, set $$f_i\mathop{\mathrm{\coloneqq}}\frac{1}{\sqrt{\mssm G_i}}\psi_i \,\,\mathrm{,}\;\,\qquad \phi_n\mathop{\mathrm{\coloneqq}}\sum_{i=1}^n 2^{-i-1} f_i \,\,\mathrm{,}\;\,$$ and note that $f_i\in \mbbL^{\mssm,\tau}_{\loc,b}$, hence $f_i\in \Lip(\mssd,\tau)$ and $\mathrm{L}_{\mssd}(f_i)\leq \mathrm{L}_{\mssd}(\psi_i)\leq 1$ by $(\mathsf{cSL}_{\tau,\mssm,\mssd})$ and by [\[i:p:IneqCdC2:proof1\]](#i:p:IneqCdC2:proof1){reference-type="ref" reference="i:p:IneqCdC2:proof1"} above, hence [\[eq:IneqCdC:0.15\]](#eq:IneqCdC:0.15){reference-type="eqref" reference="eq:IneqCdC:0.15"} holds. Furthermore, similarly to the proof of Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}, we have $\left\lVert\phi_n\right\rVert_{L^2(\mssm)}\leq 1$ for every $n$. Thus, similarly to the proof of Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}, the monotone limit $\phi\in \mcF$ exists. Since $\psi_i\nearrow_i 1$, we additionally have $\phi>0$ $\mssm$-a.e. Letting $\theta\mathop{\mathrm{\coloneqq}}\phi^2/\left\lVert\phi\right\rVert_{L^2(\mssm)}^2$ shows the assertion in [\[eq:IneqCdC:0.1\]](#eq:IneqCdC:0.1){reference-type="eqref" reference="eq:IneqCdC:0.1"}. *Dirichlet forms*. Arguing exactly as in the proof of Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} we conclude [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"}. Since $\mssm'$ is a finite measure, $\mcF_b'\supset \mbbL^{\mssm,\tau}_{\loc,b}$, and thus [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"} holds for every $f\in \mbbL^{\mssm,\tau}_{\loc,b}$. *Intrinsic distances*. We claim that $\mssd_\mssm=\mssd_{\mssm'}$. Let us verify the assumptions in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"} with $\mcD\mathop{\mathrm{\coloneqq}}\mcF$. 
Indeed, $\mcF$ is a (pseudo-)core for both $(\mcE,\mcF)$ and $(\mcE',\mcF')$ and $\Gamma=\Gamma'$ on $\mcF$ by [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"}. This verifies assumption [\[i:c:LocalityDistances:4\]](#i:c:LocalityDistances:4){reference-type="ref" reference="i:c:LocalityDistances:4"} in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}. The sequences $E_\bullet$ and $G_\bullet$ are $\tau$-open exhaustions of $X$, and thus satisfy $E_\bullet, G_\bullet\in\msG_0\cap\msG_0'$ (see [\[i:p:IneqCdC2:proof2\]](#i:p:IneqCdC2:proof2){reference-type="ref" reference="i:p:IneqCdC2:proof2"}, [\[i:p:IneqCdC2:proof3\]](#i:p:IneqCdC2:proof3){reference-type="ref" reference="i:p:IneqCdC2:proof3"} above). Furthermore $a_i\mathop{\mathrm{\coloneqq}}\big({\frac{2^{-i-2}}{\sqrt{\mssm G_i} \left\lVert\phi\right\rVert_{L^2(\mssm)}}}\big)^2\leq \theta$ everywhere on $E_i$. This verifies assumption [\[i:c:LocalityDistances:1\]](#i:c:LocalityDistances:1){reference-type="ref" reference="i:c:LocalityDistances:1"} in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}. Finally, let $\varrho_i\mathop{\mathrm{\coloneqq}}\big[\theta_i\wedge 3 -2\big]_+\in \mcF$ and note that $\varrho_i\in \mbbL^{\mssm,\tau}_b$ by [\[eq:TruncationLoc\]](#eq:TruncationLoc){reference-type="eqref" reference="eq:TruncationLoc"} and [\[eq:SLoc:2\]](#eq:SLoc:2){reference-type="eqref" reference="eq:SLoc:2"}, and that $\mathop{\mathrm{\mathds 1}}_{E_i}\leq \varrho_i \leq \mathop{\mathrm{\mathds 1}}_{G_i}$ since $G_i=\left\{\psi_i\geq \tfrac{1}{2}\right\}=\left\{\theta_i\geq 1\right\}$. That is, assumption [\[i:c:LocalityDistances:2\]](#i:c:LocalityDistances:2){reference-type="ref" reference="i:c:LocalityDistances:2"} in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"} is satisfied. 
This concludes the verification of the assumptions in Corollary [Corollary 40](#c:LocalityDistances){reference-type="ref" reference="c:LocalityDistances"}, and thus $\mssd_\mssm=\mssd_{\mssm'}$. *Minimal relaxed slopes*. In light of Remark [Remark 49](#r:ComparisonSL){reference-type="ref" reference="r:ComparisonSL"}[\[i:r:ComparisonSL:1\]](#i:r:ComparisonSL:1){reference-type="ref" reference="i:r:ComparisonSL:1"}, the space $(X,\tau,\mssd_\mssm)$ is an extended metric-topological space. As in the proof of Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}, let us verify that the space $(X,\tau,\mssd_\mssm,\mssm)$ satisfies the assumptions of Lemma [Lemma 43](#l:AGS2){reference-type="ref" reference="l:AGS2"}. By quasi-regularity of $(\mcE,\mcF)$ there exists a $\tau$-compact $\mcE$-nest $\left(K'_i\right)_i$. For each $i\in{\mathbb N}$, the set $K_i\mathop{\mathrm{\coloneqq}}\cl_\tau(K_i'\cap E_i)$ is $\tau$-compact, being the closure of a relatively $\tau$-compact set, satisfies $K_i\subset G_i$ by [\[i:p:IneqCdC2:proof4\]](#i:p:IneqCdC2:proof4){reference-type="ref" reference="i:p:IneqCdC2:proof4"} above, and $\left(K_i\right)_i$ is thus a $\tau$-compact $\mcE$-nest since $\left(E_i\right)_i$ and $\left(K'_i\right)_i$ are $\mcE$-nests. Thus, since $\theta\in L^\infty(\mssm)$, it suffices to verify that there exists $\varepsilon>0$ and, for every $i\in{\mathbb N}$, there exists $c_i>0$, so that $\theta>c_i$ on $(K_i)_\varepsilon\mathop{\mathrm{\coloneqq}}\left\{\mssd_\mssm({\,\cdot\,}, K_i)<\varepsilon\right\}$. 
Since $\psi_i\in \mbbL^{\mssm,\tau}_b$ by [\[i:p:IneqCdC2:proof1\]](#i:p:IneqCdC2:proof1){reference-type="ref" reference="i:p:IneqCdC2:proof1"} above, $$\mssd_\mssm({\,\cdot\,}, K_i)\geq \inf_{x\in K_i} \big({\psi_i(x)-\psi_i({\,\cdot\,})}\big)=1-\psi_i({\,\cdot\,})\,\,\mathrm{,}\;\,$$ hence $$(K_i)_\varepsilon\subset\left\{\psi_i\geq 1-\varepsilon\right\}=\left\{f_i\geq \frac{1-\varepsilon}{\sqrt{\mssm G_i}}\right\}=\left\{\frac{2^{-i-1}f_i}{\left\lVert\phi\right\rVert_{L^2(\mssm)}^2}\geq \frac{2^{-i-1}(1-\varepsilon)}{\sqrt{\mssm G_i}\left\lVert\phi\right\rVert^2_{L^2(\mssm)}}\right\}\,\,\mathrm{,}\;\,$$ and choosing $\varepsilon\mathop{\mathrm{\coloneqq}}1/2$ and $c_i\mathop{\mathrm{\coloneqq}}a_i$ as above we conclude that $(K_i)_{1/2}\subset \left\{\theta\geq c_i\right\}$. Thus, [\[eq:l:AGS2:0\]](#eq:l:AGS2:0){reference-type="eqref" reference="eq:l:AGS2:0"} holds and the assumptions of Lemma [Lemma 43](#l:AGS2){reference-type="ref" reference="l:AGS2"} are satisfied. Now, applying (the second assertion in) Lemma [Lemma 43](#l:AGS2){reference-type="ref" reference="l:AGS2"} to the probability measure $\mssm'$, we have that $$\begin{aligned} \label{eq:l:IneqCdC2:2} \left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm}=\left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm'}\,\,\mathrm{,}\;\,\qquad f\in\msD({\mathsf{Ch}_{*,\mssd_\mssm,\mssm}}) \,\,\mathrm{.}\end{aligned}$$ *Conclusion*. When $\mssm$ is a probability measure, the sought inequality $\Gamma(f)\geq\left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd_\mssm,\mssm}^2$ is shown in [@AmbErbSav16] under the assumption of $\tau$-upper regularity. Thus, it suffices to verify that $\tau$-upper regularity too is transferred from $(\mcE,\mcF)$ to $(\mcE',\mcF')$. Let $\mcD$ be a pseudo-core for $(\mcE,\mcF)$ witnessing its $\tau$-upper regularity, and note that $\mcD$ is also a pseudo-core of $(\mcE',\mcF')$ by definition of the latter. 
Now let $f$, $\left(f_n\right)_n$, and $\left(g_n\right)_n$ be as in [\[eq:d:TUpperReg:0\]](#eq:d:TUpperReg:0){reference-type="eqref" reference="eq:d:TUpperReg:0"}. Since $\Gamma'\equiv\Gamma$ by [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"}, it is not difficult to show that [\[eq:d:TUpperReg:0\]](#eq:d:TUpperReg:0){reference-type="eqref" reference="eq:d:TUpperReg:0"} holds as well with $\Gamma'$ in place of $\Gamma$ and $\mssm'$ in place of $\mssm$. Respectively by definition of $\mbbL^{\mssm,\tau}_b$, construction of $\mcF'$, and by [@AmbErbSav16 Thm. 12.5], we have $\mbbL^{\mssm,\tau}_b\subset \mcF\subset \mcF'\subset\msD({\mathsf{Ch}_{*,\mssd_\mssm,\mssm'}})$. Respectively by [\[eq:l:IneqCdC:0.5\]](#eq:l:IneqCdC:0.5){reference-type="eqref" reference="eq:l:IneqCdC:0.5"}, [@AmbErbSav16 Thm. 12.5], [\[eq:l:IneqCdC:1\]](#eq:l:IneqCdC:1){reference-type="eqref" reference="eq:l:IneqCdC:1"} and $\mssd_\mssm=\mssd_{\mssm'}$, we see that $$\Gamma(f)=\Gamma'(f)\geq\left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd_{\mssm'},\mssm'}^2=\left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm'}^2\,\,\mathrm{,}\;\,\qquad f\in\mbbL^{\mssm,\tau}_b\,\,\mathrm{.}$$ Applying Lemma [Lemma 43](#l:AGS2){reference-type="ref" reference="l:AGS2"} while exchanging the roles of $\mssm$ and $\mssm'$ with $\theta^{-1}$ in place of $\theta$ we conclude from the above inequality that, in fact, $\mbbL^{\mssm,\tau}_b\subset \msD({\mathsf{Ch}_{*,\mssd_\mssm,\mssm}})$. 
Thus, continuing the above chain of inequalities, we conclude by [\[eq:l:IneqCdC2:2\]](#eq:l:IneqCdC2:2){reference-type="eqref" reference="eq:l:IneqCdC2:2"}, and again by [\[eq:l:IneqCdC:1\]](#eq:l:IneqCdC:1){reference-type="eqref" reference="eq:l:IneqCdC:1"}, that $$\Gamma(f)\geq \left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm'}^2=\left\lvert\mathrm{D}f\right\rvert_{*,\mssd_\mssm,\mssm}^2=\left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd_\mssm,\mssm}^2\,\,\mathrm{,}\;\,\qquad f\in\mbbL^{\mssm,\tau}_b\,\,\mathrm{.}$$ This last chain of inequalities extends to $\mbbL^{\mssm,\tau}_{\loc,b}$ by locality of all the objects involved, and the conclusion follows. ◻ Combining Propositions [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} and [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"} we further obtain the following identification of $\mcE$ with $\mathsf{Ch}_{w,\mssd,\mssm}$ (or, equivalently, with $\mathsf{Ch}_{*,\mssd,\mssm}$). **Corollary 50**. *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} and admitting carré du champ operator $\Gamma$, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to[0,\infty]$ be an extended distance. Further assume that $(\mathsf{Rad}_{\mssd,\mssm})$ and $(\mathsf{cSL}_{\tau,\mssm,\mssd})$ hold. Then, $\mcE\leq \mathsf{Ch}_{w,\mssd,\mssm}$, and the equality holds if and only if $(\mbbX,\mcE)$ is additionally $\tau$-upper regular.* *Remark 51* (Comparison with [@AmbGigSav15; @AmbErbSav16]). Corollary [Corollary 50](#c:RadStoLCheegerComparison){reference-type="ref" reference="c:RadStoLCheegerComparison"} ought to be compared with [@AmbGigSav15 Thm. 3.14] and [@AmbErbSav16 Thm. 12.5]. 
Let us first note that the two approaches have different premises: in [@AmbGigSav15; @AmbErbSav16] the authors compare the form $(\mcE,\mcF)$ with the Cheeger energy $\mathsf{Ch}_{w,\mssd_\mssm,\mssm}$ constructed from (the intrinsic distance $\mssd_\mssm$ of) $(\mcE,\mcF)$. We rather compare $(\mcE,\mcF)$ and $\mathsf{Ch}_{w,\mssd,\mssm}$ for some *assigned* $\mssd$, *a priori* unrelated to $\mssd_\mssm$. Beyond this difference in perspective, the main difference among the results in [@AmbGigSav15], those in [@AmbErbSav16], and ours lies in the generality of the respective assumptions, as we now detail. *Comparison with [@AmbGigSav15]*. Under the standing assumptions in [@AmbGigSav15]: 1. [\[i:r:ComparisonAGS-AES:1\]]{#i:r:ComparisonAGS-AES:1 label="i:r:ComparisonAGS-AES:1"} $\mssd_\mssm$ is a finite distance metrizing $\tau$ and $(X,\mssd_\mssm)$ is separable and complete; 2. [\[i:r:ComparisonAGS-AES:2\]]{#i:r:ComparisonAGS-AES:2 label="i:r:ComparisonAGS-AES:2"} $(\mbbX,\mcE,\mssm)$ satisfies (a slightly stronger version of) $(\mathsf{Loc}_{\mssm,\tau})$ (see [@AmbGigSav15 Eqn. (3.28)]); it is in fact possible to *prove* $(\mathsf{Rad}_{\mssd_\mssm,\mssm})$ ---see [@AmbGigSav15 Thm. 3.9] or (in a more general setting than in [@AmbGigSav15]) [@LzDSSuz20 Cor. 3.15]---, while $(\mathsf{cSL}_{\tau,\mssm,\mssd_\mssm})$ holds by definition of $\mssd_\mssm$ (cf. the proof of [@AmbGigSav15 Thm. 3.9]). Furthermore, thanks to the Rademacher property $(\mathsf{Rad}_{\mssd_\mssm,\mssm})$ and to [\[i:r:ComparisonAGS-AES:1\]](#i:r:ComparisonAGS-AES:1){reference-type="ref" reference="i:r:ComparisonAGS-AES:1"} above, the form $(\mcE,\mcF)$ is quasi-regular by [@LzDSSuz20 Prop. 3.21]. Thus, our assumptions are weaker (and our result stronger) than those in [@AmbGigSav15]. *Comparison with [@AmbErbSav16]*. It is (part of) the assumptions in [@AmbErbSav16] that 1. $(X,\tau,\mssd)$ is a completely regular Hausdorff *extended* metric-topological space; 2. 
$\mssm$ is a probability measure and ---roughly--- $\mbbL^{\mssm,\tau}_{\loc,b}$ generates $\tau$. Again, these assumptions trivially imply $(\mathsf{cSL}_{\tau,\mssm,\mssd_\mssm})$. They are however skew to ours at the level of generality of the spaces involved. Indeed, whereas we always assume that $(X,\tau)$ satisfies [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} (which is less restrictive than [@AmbGigSav15] but more restrictive than [@AmbErbSav16]), we are able to address the case of *extended* distances (as in [@AmbErbSav16]) on $\sigma$-finite spaces (as in [@AmbGigSav15], and as opposed to [@AmbErbSav16], addressing only probability spaces). *Proof of Corollary [Corollary 50](#c:RadStoLCheegerComparison){reference-type="ref" reference="c:RadStoLCheegerComparison"}.* It suffices to verify the assumptions in Propositions [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} and [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}. ◻ Finally, let us now comment on the last equality in [\[eq:SLChain2\]](#eq:SLChain2){reference-type="eqref" reference="eq:SLChain2"}, viz. $$\label{eq:SLChain3} \left\lvert\mathrm{D}f\right\rvert_{w,\, \mssd}^2= \left\lvert\mathrm{D}f\right\rvert_{\mssd}^2 \quad \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\,\qquad f\in \mbbL^{\mssm,\tau}_{\loc,b}\,\,\mathrm{.}$$ *Remark 52* (On the validity of [\[eq:SLChain3\]](#eq:SLChain3){reference-type="eqref" reference="eq:SLChain3"}). When $\mssd$ is a distance (as opposed to: an extended distance), the equality [\[eq:SLChain3\]](#eq:SLChain3){reference-type="eqref" reference="eq:SLChain3"} is known to hold for all $\mssd$-Lipschitz functions under some assumptions on the metric measure space $(X,\mssd,\mssm)$. In particular, it is satisfied whenever $(X,\mssd,\mssm)$ is a (measure) doubling metric measure space (see Dfn. 
[Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"} below) supporting a weak $(1,2)$-Poincaré inequality (see Dfn. [Definition 69](#d:DP){reference-type="ref" reference="d:DP"} below). # Tensorization In this section, we will mostly confine ourselves to *metric measure spaces* in the sense of the following definition. **Definition 53** (Metric measure space). A triple $(X,\mssd,\mssm)$ is called a *metric measure space* if $(X,\mssd)$ is a complete and separable metric space, and $\mssm$ is a Borel measure on $X$ finite on $\mssd$-bounded sets. For the purpose of comparison with known results in the literature, we recall the definition of infinitesimal Hilbertianity, introduced by N. Gigli in [@Gig13]. **Definition 54** (Infinitesimal Hilbertianity). A metric measure space $(X,\mssd,\mssm)$ is *infinitesimally Hilbertian* if the Cheeger energy $\mathsf{Ch}_{*,\mssd,\mssm}$ satisfies the parallelogram identity on $L^2(\mssm)$, viz. $$\begin{aligned} \tag{$\mathsf{IH}_{\mssd,\mssm}$} 2\big({\mathsf{Ch}_{*,\mssd,\mssm}(f)+\mathsf{Ch}_{*,\mssd,\mssm}(g)}\big)= \mathsf{Ch}_{*,\mssd,\mssm}(f+g) + \mathsf{Ch}_{*,\mssd,\mssm}(f-g) \,\,\mathrm{.}\end{aligned}$$ In the case when $(X,\mssd,\mssm)$ is infinitesimally Hilbertian, the Cheeger energy $\mathsf{Ch}_{*,\mssd,\mssm}$ is a quadratic functional. The non-relabelled bilinear form induced on $L^2(\mssm)$ by polarization is in fact a local Dirichlet form. ## Product structures {#sss:Products} Let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be Dirichlet spaces. Further let $$\mbbX^{\scriptscriptstyle{\otimes }}=(X^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }},\Sigma^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }})\mathop{\mathrm{\coloneqq}}(X\times X', \tau\times\tau',\Sigma\widehat{\otimes}\Sigma',\mssm\widehat{\otimes}\mssm')$$ be the corresponding product space. 
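As a guiding example (not needed in the sequel), take $X=X'=(0,1)$ with the Euclidean distance and the Lebesgue measure, and let both $\mcE$ and $\mcE'$ be the standard Dirichlet energy $f\mapsto \int_0^1 (f')^2\mathop{}\!\mathrm{d}x$ with domain $W^{1,2}(0,1)$. Then $\mbbX^{\scriptscriptstyle{\otimes }}$ is the open unit square with the two-dimensional Lebesgue measure, and the product form constructed below is the standard Dirichlet energy of the square, viz. $$\mcE^{\scriptscriptstyle{\otimes }}(f)=\int_{(0,1)^2} \left\lvert\nabla f\right\rvert^2\mathop{}\!\mathrm{d}x\mathop{}\!\mathrm{d}x'\,\,\mathrm{,}\;\,\qquad \mcF^{\scriptscriptstyle{\otimes }}=W^{1,2}\big((0,1)^2\big)\,\,\mathrm{,}\;\,\qquad \Gamma^{\scriptscriptstyle{\otimes }}(f)=(\partial_x f)^2+(\partial_{x'} f)^2\,\,\mathrm{.}$$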
For a function $\hat f^{\scriptscriptstyle{\otimes }}\colon X^{\scriptscriptstyle{\otimes }}\to{\mathbb R}$ and any $\mbfx\mathop{\mathrm{\coloneqq}}(x,x')\in X^{\scriptscriptstyle{\otimes }}$, define the sections of $\hat f^{\scriptscriptstyle{\otimes }}$ at $\mbfx$ by $\hat f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\colon y\mapsto \hat f^{\scriptscriptstyle{\otimes }}(y, x')$ and $\hat f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\colon y'\mapsto \hat f^{\scriptscriptstyle{\otimes }}(x,y')$. Since no confusion may arise, we also write $f^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}\big [\hat f^{\scriptscriptstyle{\otimes }}\big]_{\mssm^{\scriptscriptstyle{\otimes }}}$, $f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\mathop{\mathrm{\coloneqq}}\big [\hat f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\big]_{\mssm}$ and $f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\mathop{\mathrm{\coloneqq}}\big [\hat f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\big]_{\mssm'}$. We stress that the subscript number indicates the *free* coordinate. ### Products of Dirichlet spaces Set $$\begin{aligned} \mcD^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}\left\{f\in L^2(\mssm^{\scriptscriptstyle{\otimes }}): \begin{gathered} f^{\scriptscriptstyle{\otimes }}_{\mbfx,1} \in \mcF\ {\textrm{\,for ${\mssm'}$-a.e.\,}} x'\in X' \\ f^{\scriptscriptstyle{\otimes }}_{\mbfx,2} \in \mcF'\ {\textrm{\,for ${\mssm}$-a.e.\,}} x\in X \\ \mcE^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }})\mathop{\mathrm{\coloneqq}}\int\mcE(f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}) \mathop{}\!\mathrm{d}\mssm'\ +\int \mcE'(f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}) \mathop{}\!\mathrm{d}\mssm<\infty \end{gathered}\right\}\,\,\mathrm{.}\end{aligned}$$ **Proposition 55** (Product structures). *The following assertions hold:* 1. 
*[\[i:p:Products:1\]]{#i:p:Products:1 label="i:p:Products:1"} If both $\mbbX$ and $\mbbX'$ satisfy either [\[ass:Hausdorff\]](#ass:Hausdorff){reference-type="ref" reference="ass:Hausdorff"}, [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, or [\[ass:Polish\]](#ass:Polish){reference-type="ref" reference="ass:Polish"}, then so does $\mbbX^{\scriptscriptstyle{\otimes }}$;* 2. *[\[i:p:Products:2\]]{#i:p:Products:2 label="i:p:Products:2"} the quadratic form $(\mcE^{\scriptscriptstyle{\otimes }},\mcD^{\scriptscriptstyle{\otimes }})$ is closable, and its closure is a Dirichlet form on $\mbbX^{\scriptscriptstyle{\otimes }}$ with domain $$\label{eq:ProductDomain} \mcF^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}\left\{f\in L^2(\mssm^{\scriptscriptstyle{\otimes }}) : \begin{gathered} x'\mapsto f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\in L^2(\mssm'; \mcF) \\ x\mapsto f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\in L^2(\mssm; \mcF') \end{gathered}\right\}\,\,\mathrm{;}\;\,$$* 3. *[\[i:p:Products:3\]]{#i:p:Products:3 label="i:p:Products:3"} if both $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ are strongly local, then so is $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$;* 4. *[\[i:p:Products:4\]]{#i:p:Products:4 label="i:p:Products:4"} the algebraic tensor product $\mcF\otimes_{{\mathbb R}}\mcF'$ is $\mcF^{\scriptscriptstyle{\otimes }}$-dense in $\mcF^{\scriptscriptstyle{\otimes }}$.* 5. *[\[i:p:Products:5\]]{#i:p:Products:5 label="i:p:Products:5"} if both $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ are either quasi-regular or regular, then so is $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$;* 6. *[\[i:p:Products:6\]]{#i:p:Products:6 label="i:p:Products:6"} if $(\mbbX,\mcE)$, resp. $(\mbbX',\mcE')$, admits carré du champ operator $\Gamma$, resp. 
$\Gamma'$, then $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$ admits carré du champ operator $$\label{eq:CdCProduct} \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }})(x,x')\mathop{\mathrm{\coloneqq}}\Gamma(f^{\scriptscriptstyle{\otimes }}_{\mbfx,1})(x)+\Gamma'(f^{\scriptscriptstyle{\otimes }}_{\mbfx,2})(x')\,\,\mathrm{,}\;\,\qquad \mbfx\mathop{\mathrm{\coloneqq}}(x,x') \,\,\mathrm{.}$$* **Proof.* A proof of [\[i:p:Products:1\]](#i:p:Products:1){reference-type="ref" reference="i:p:Products:1"} is standard. Proofs of [\[i:p:Products:2\]](#i:p:Products:2){reference-type="ref" reference="i:p:Products:2"}, [\[i:p:Products:3\]](#i:p:Products:3){reference-type="ref" reference="i:p:Products:3"}, and [\[i:p:Products:6\]](#i:p:Products:6){reference-type="ref" reference="i:p:Products:6"} are found in [@BouHir91 Prop. V.2.1.2, p. 201]. A proof of [\[i:p:Products:4\]](#i:p:Products:4){reference-type="ref" reference="i:p:Products:4"} is found in [@BouHir91 Prop. V.2.1.3(b), p. 201]. A proof of [\[i:p:Products:5\]](#i:p:Products:5){reference-type="ref" reference="i:p:Products:5"} in the regular case follows from [\[i:p:Products:4\]](#i:p:Products:4){reference-type="ref" reference="i:p:Products:4"}. The quasi-regular case follows from the regular case via the transfer method, cf. e.g. [@Kuw98; @CheMaRoe94]. ◻* Combining [\[eq:CdCProduct\]](#eq:CdCProduct){reference-type="eqref" reference="eq:CdCProduct"} with Corollary [Corollary 8](#c:BH){reference-type="ref" reference="c:BH"}, we see that [\[eq:CdCProduct\]](#eq:CdCProduct){reference-type="eqref" reference="eq:CdCProduct"} extends to $f\in{{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$. **Corollary 56**. *Let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be quasi-regular strongly local Dirichlet spaces with $\mbbX,\mbbX'$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. 
Then, $$\label{eq:t:Tensor:5} \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }})(x,x')\mathop{\mathrm{\coloneqq}}\Gamma(f^{\scriptscriptstyle{\otimes }}_{\mbfx,1})(x)+\Gamma'(f^{\scriptscriptstyle{\otimes }}_{\mbfx,2})(x')\,\,\mathrm{,}\;\,\qquad \mbfx\mathop{\mathrm{\coloneqq}}(x,x')\,\,\mathrm{,}\;\,\qquad f\in{{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}} \,\,\mathrm{.}$$* As a consequence, we further have $$\begin{aligned} \nonumber f\otimes\mathop{\mathrm{\mathds 1}}\in&\ \mbbL^{\mssm^{\scriptscriptstyle{\otimes }}}_{\loc,b} \,\,\mathrm{,}\;\,\mathop{\mathrm{\mathds 1}}\otimes f'\in \mbbL^{\mssm^{\scriptscriptstyle{\otimes }}}_{\loc,b} \,\,\mathrm{,}\;\,& f\in&\ \mbbL^{\mssm}_{\loc,b}\,\,\mathrm{,}\;\,f'\in \mbbL^{\mssm'}_{\loc,b} \,\,\mathrm{,}\;\, \\ \label{eq:TensorDzLocBT} f\otimes\mathop{\mathrm{\mathds 1}}\in&\ \mbbL^{\mssm^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }}}_{\loc,b} \,\,\mathrm{,}\;\,\mathop{\mathrm{\mathds 1}}\otimes f'\in \mbbL^{\mssm^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }}}_{\loc,b} \,\,\mathrm{,}\;\,& f\in&\ \mbbL^{\mssm,\tau}_{\loc,b}\,\,\mathrm{,}\;\,f'\in \mbbL^{\mssm',\tau'}_{\loc,b} \,\,\mathrm{.}\end{aligned}$$ *Sectioning*. Denote the sections of a set $A^{\scriptscriptstyle{\otimes }}\subset X^{\scriptscriptstyle{\otimes }}$ by $$A^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\mathop{\mathrm{\coloneqq}}\left\{x\in X: (x,x')\in A^{\scriptscriptstyle{\otimes }}\right\} \quad \text{and}\quad A^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\mathop{\mathrm{\coloneqq}}\left\{x'\in X': (x,x')\in A^{\scriptscriptstyle{\otimes }}\right\}\,\,\mathrm{,}\;\,\qquad \mbfx\mathop{\mathrm{\coloneqq}}(x,x')\in X^{\scriptscriptstyle{\otimes }} \,\,\mathrm{.}$$ **Lemma 57** (Sectioning of quasi-notions).
*Let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be quasi-regular strongly local Dirichlet spaces with $\mbbX,\mbbX'$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. Then, the following assertions hold:* 1. *[\[i:l:Quasi-Sections:1\]]{#i:l:Quasi-Sections:1 label="i:l:Quasi-Sections:1"} if $\left(F^{\scriptscriptstyle{\otimes }}_k\right)_k$ is an $\mcE^{\scriptscriptstyle{\otimes }}$-nest, then ${\big((F^{\scriptscriptstyle{\otimes }}_k)_{\mbfx,1}\big)}_k$ is an $\mcE$-nest for $\mssm'$-a.e. $x'\in X'$, resp. ${\big((F^{\scriptscriptstyle{\otimes }}_k)_{\mbfx,2}\big)}_k$ is an $\mcE'$-nest for $\mssm$-a.e. $x\in X$;* 2. *if $P^{\scriptscriptstyle{\otimes }}\subset X^{\scriptscriptstyle{\otimes }}$ is $\mcE^{\scriptscriptstyle{\otimes }}$-polar, then $P^{\scriptscriptstyle{\otimes }}_{\mbfx,1}$ is $\mcE$-polar for $\mssm'$-a.e. $x'\in X'$, resp. $P^{\scriptscriptstyle{\otimes }}_{\mbfx,2}$ is $\mcE'$-polar for $\mssm$-a.e. $x\in X$;* 3. *[\[i:l:Quasi-Sections:3\]]{#i:l:Quasi-Sections:3 label="i:l:Quasi-Sections:3"} if $G^{\scriptscriptstyle{\otimes }}\subset X^{\scriptscriptstyle{\otimes }}$ is $\mcE^{\scriptscriptstyle{\otimes }}$-quasi-open, then $G^{\scriptscriptstyle{\otimes }}_{\mbfx,1}$ is $\mcE$-quasi-open for $\mssm'$-a.e. $x'\in X'$, resp. $G^{\scriptscriptstyle{\otimes }}_{\mbfx,2}$ is $\mcE'$-quasi-open for $\mssm$-a.e. $x\in X$;* 4. *if $f^{\scriptscriptstyle{\otimes }}$ is $\mcE^{\scriptscriptstyle{\otimes }}$-quasi-continuous, then $f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}$ is $\mcE$-quasi-continuous for $\mssm'$-a.e. $x'\in X'$, resp. $f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}$ is $\mcE'$-quasi-continuous for $\mssm$-a.e. $x\in X$.* **Proof.* We only show [\[i:l:Quasi-Sections:1\]](#i:l:Quasi-Sections:1){reference-type="ref" reference="i:l:Quasi-Sections:1"}, the other assertions being straightforward consequences. Let $\left(F^{\scriptscriptstyle{\otimes }}_k\right)_k$ be an $\mcE^{\scriptscriptstyle{\otimes }}$-nest.
For each $x'\in X'$ and for each $k\in{\mathbb N}$ let $F_k\mathop{\mathrm{\coloneqq}}\left\{x\in X: (x,x')\in F^{\scriptscriptstyle{\otimes }}_k\right\}$ be the $x'$-section of $F^{\scriptscriptstyle{\otimes }}_k$. We show that $\left(F_k\right)_k$ is an $\mcE$-nest for $\mssm'$-a.e. $x'\in X'$. Since $F^{\scriptscriptstyle{\otimes }}_k$ is $\tau^{\scriptscriptstyle{\otimes }}$-closed and sectioning preserves closedness, $F_k$ is $\tau$-closed for every $k$. It suffices to show that $\cap_k F_k^\mathrm{c}$ is $\mcE$-polar. When $\mbbX$ and $\mbbX'$ are probability spaces, this is claimed in, e.g., [@BouHir91 Exercise V.2.1(2), p. 208].* *In order to address the case of $\sigma$-finite $\mssm^{\scriptscriptstyle{\otimes }}$, let $\varphi\in\mcF$, resp. $\varphi'\in\mcF'$, with $\varphi>0$ $\mssm$-a.e., resp. $\varphi'>0$ $\mssm'$-a.e., and set $\varphi^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}\varphi\otimes \varphi'\in\mcF^{\scriptscriptstyle{\otimes }}$. Consider the Girsanov transforms $(\mcE^\varphi,\mcF^\varphi)$ of $(\mcE,\mcF)$, $(\mcE^{\varphi'},\mcF^{\varphi'})$ of $(\mcE',\mcF')$, and $(\mcE^{\varphi^{\scriptscriptstyle{\otimes }}},\mcF^{\varphi^{\scriptscriptstyle{\otimes }}})$ of $(\mcE^{\scriptscriptstyle{\otimes }},\mcF^{\scriptscriptstyle{\otimes }})$. The assertion follows since Girsanov transforms by quasi-everywhere strictly positive functions preserve nests, hence polarity; see e.g. [@CheSun06 p. 449]. ◻* **Corollary 58** (Sectioning of broad local domains). *Let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be quasi-regular strongly local Dirichlet spaces with $\mbbX,\mbbX'$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. If $f^{\scriptscriptstyle{\otimes }}\in {{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$, then $f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\in {{\mcF}^\bullet_{\loc}}$ for $\mssm'$-a.e.
$x'\in X'$ and $f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\in {{\mcF'}^\bullet_{\loc}}$ for $\mssm$-a.e. $x\in X$.* **Proof.* Let $(G^{\scriptscriptstyle{\otimes }}_\bullet, f^{\scriptscriptstyle{\otimes }}_\bullet)$ witness that $f^{\scriptscriptstyle{\otimes }}\in{{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$. We show the statement for $X$-sections, the one for $X'$-sections being analogous. It suffices to note that, for $\mssm'$-a.e. $x'\in X'$,* *$(G^{\scriptscriptstyle{\otimes }}_n)_{\mbfx,1}$ is $\mcE$-quasi-open by Lemma [Lemma 57](#l:Quasi-Sections){reference-type="ref" reference="l:Quasi-Sections"}[\[i:l:Quasi-Sections:3\]](#i:l:Quasi-Sections:3){reference-type="ref" reference="i:l:Quasi-Sections:3"};* *$(f^{\scriptscriptstyle{\otimes }}_n)_{\mbfx,1}\in \mcF$ by definition [\[eq:ProductDomain\]](#eq:ProductDomain){reference-type="eqref" reference="eq:ProductDomain"} of $\mcF^{\scriptscriptstyle{\otimes }}$;* *$(f^{\scriptscriptstyle{\otimes }}_n)_{\mbfx,1}\equiv f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}$ $\mssm$-a.e. on $(G^{\scriptscriptstyle{\otimes }}_n)_{\mbfx,1}$ by definition of $f^{\scriptscriptstyle{\otimes }}_\bullet$.* * ◻* **Corollary 59** (Sectioning of broad local spaces of uniformly bounded energy). *Let $(\mbbX,\mcE)$, resp. $(\mbbX',\mcE')$, be a quasi-regular strongly local Dirichlet space with $\mbbX$, resp. $\mbbX'$, satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. If $f^{\scriptscriptstyle{\otimes }}\in \mbbL^{\mssm^{\scriptscriptstyle{\otimes }}}_{\loc,b}$, resp. $\mbbL^{\mssm^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }}}_{\loc,b}$, then $f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\in \mbbL^{\mssm}_{\loc,b}$, resp. $\mbbL^{\mssm,\tau}_{\loc,b}$, for $\mssm'$-a.e. $x'\in X'$ and $f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\in \mbbL^{\mssm'}_{\loc,b}$, resp. $\mbbL^{\mssm',\tau'}_{\loc,b}$, for $\mssm$-a.e.
$x\in X$.* **Proof.* Straightforward consequence of Corollary [Corollary 58](#c:Sectioning-BLD){reference-type="ref" reference="c:Sectioning-BLD"} and [\[eq:CdCProduct\]](#eq:CdCProduct){reference-type="eqref" reference="eq:CdCProduct"}. ◻* ### Products of metric objects {#sss:MetricProduct} Let $X,X'$ be any (non-empty) sets. For extended pseudo-distances $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ and $\mssd'\colon X^{\prime{\scriptscriptstyle{\times 2}}}\to [0,\infty]$, we denote by $$\mssd^{\scriptscriptstyle{\otimes }}\big({(x,x'),(y,y')}\big)=(\mssd\otimes \mssd')\big({(x,x'),(y,y')}\big)\mathop{\mathrm{\coloneqq}}\sqrt{\mssd(x,y)^2+ \mssd'(x',y')^2}$$ the ($\ell^2$-)product extended pseudo-distance on $X\times X'$, and by $$(\mssd\oplus\mssd')\big({(x,x'),(y,y')}\big)\mathop{\mathrm{\coloneqq}}\mssd(x,y)+ \mssd'(x',y')$$ the ($\ell^1$-)product extended pseudo-distance on $X\times X'$. Note that $\mssd^{\scriptscriptstyle{\otimes }}$ and $\mssd\oplus\mssd'$ induce the same topology on $X\times X'$. **Lemma 60**. *Let $(\mbbX,\mcE)$, resp. $(\mbbX',\mcE')$, be a quasi-regular strongly local Dirichlet space with $\mbbX$, resp. $\mbbX'$, satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. Then, $$\label{eq:l:TensorIntrinsicD:0} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}} \leq \mssd_\mssm\oplus\mssd_{\mssm'} \quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.} \,\,\mathrm{.}$$* **Proof.* Set $\mbfx\mathop{\mathrm{\coloneqq}}(x,x')$ and $\mbfy\mathop{\mathrm{\coloneqq}}(y,y')$.
Then, $$\begin{aligned} \nonumber \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}\big({(x,x'),(y,y')}\big)\leq&\ \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}\big({(x,x'),(y,x')}\big)+\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}\big({(y,x'),(y,y')}\big) \\ \label{eq:l:TensorIntrinsicD:1} =&\ \inf_{f^{\scriptscriptstyle{\otimes }}\in\mbbL^{\mssm^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }}}_{\loc,b}} f^{\scriptscriptstyle{\otimes }}(x,x')-f^{\scriptscriptstyle{\otimes }}(y,x')+ \inf_{g^{\scriptscriptstyle{\otimes }}\in\mbbL^{\mssm^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }}}_{\loc,b}} g^{\scriptscriptstyle{\otimes }}(y,x')-g^{\scriptscriptstyle{\otimes }}(y,y') \,\,\mathrm{.}\end{aligned}$$ Choosing $f^{\scriptscriptstyle{\otimes }}$ of the form $f\otimes\mathop{\mathrm{\mathds 1}}$, resp. $\mathop{\mathrm{\mathds 1}}\otimes f'$, in the first, resp. second, summand of [\[eq:l:TensorIntrinsicD:1\]](#eq:l:TensorIntrinsicD:1){reference-type="eqref" reference="eq:l:TensorIntrinsicD:1"}, we may continue [\[eq:l:TensorIntrinsicD:1\]](#eq:l:TensorIntrinsicD:1){reference-type="eqref" reference="eq:l:TensorIntrinsicD:1"} with $$\begin{aligned} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}\big({(x,x'),(y,y')}\big)\leq&\ \inf_{\substack{\hat f\colon X\to{\mathbb R}\\ \hat f\otimes \mathop{\mathrm{\mathds 1}}\in\mbbL^{\mssm^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }}}_{\loc,b}}} \hat f(x)-\hat f(y) + \inf_{\substack{\hat g\colon X'\to{\mathbb R}\\ \mathop{\mathrm{\mathds 1}}\otimes \hat g \in\mbbL^{\mssm^{\scriptscriptstyle{\otimes }},\tau^{\scriptscriptstyle{\otimes }}}_{\loc,b}}} \hat g(x')-\hat g(y') \\ \leq&\ \inf_{f \in\mbbL^{\mssm,\tau}_{\loc,b}} f(x)-f(y) + \inf_{f'\in\mbbL^{\mssm',\tau'}_{\loc,b}} f'(x')-f'(y') \\ =&\ \mssd_\mssm(x,y)+ \mssd_{\mssm'}(x',y')\,\,\mathrm{,}\;\,\end{aligned}$$ where the second inequality holds in light of [\[eq:TensorDzLocBT\]](#eq:TensorDzLocBT){reference-type="eqref"
reference="eq:TensorDzLocBT"}. ◻* Now, let $(X,\tau,\mssd,\mssm)$ and $(X',\tau',\mssd',\mssm')$ be extended metric-topological measure spaces in the sense of Definition [Definition 3](#d:AES){reference-type="ref" reference="d:AES"}. We denote by $$\begin{aligned} \left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}\right\rvert_{w,\, \mssd,\mssd'}^{{\scriptscriptstyle{\otimes }}}(x,x')\mathop{\mathrm{\coloneqq}}\sqrt{\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\right\rvert_{w,\, \mssd}^2(x)+\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\right\rvert_{w,\, \mssd'}^2(x')} \,\,\mathrm{,}\;\,\qquad f^{\scriptscriptstyle{\otimes }}\colon X^{\scriptscriptstyle{\otimes }}\to {\mathbb R}\,\,\mathrm{,}\;\,\end{aligned}$$ the *Cartesian gradient* in [@AmbGigSav14b p. 1477], by $$\begin{aligned} \label{eq:CartesianGrad} \left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}\right\rvert_{c,\mssd,\mssd'}^{{\scriptscriptstyle{\otimes }}}(x,x')\mathop{\mathrm{\coloneqq}}\sqrt{\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_{\mbfx,1}\right\rvert_{\mssd}^2(x)+\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_{\mbfx,2}\right\rvert_{\mssd'}^2(x')} \,\,\mathrm{,}\;\,\qquad f^{\scriptscriptstyle{\otimes }}\colon X^{\scriptscriptstyle{\otimes }}\to {\mathbb R}\,\,\mathrm{,}\;\,\end{aligned}$$ the *Cartesian slope* in [@AmbPinSpe15 Eqn. 
(3.3)], and by $\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}\right\rvert_{*,c,\mssd,\mssd'}^{{\scriptscriptstyle{\otimes }}}$ the minimal relaxed gradient associated to the Cheeger energy $$\begin{aligned} \mathsf{Ch}_{*,c,\mssm^{\scriptscriptstyle{\otimes }}}(f^{\scriptscriptstyle{\otimes }})\mathop{\mathrm{\coloneqq}}\inf\left\{\liminf_n \int_{X^{\scriptscriptstyle{\otimes }}} \big({\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_n\right\rvert_{c,\mssd,\mssd'}^{{\scriptscriptstyle{\otimes }}}}\big)^2 \mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }}\right\}=\int_{X^{\scriptscriptstyle{\otimes }}} \big({\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}\right\rvert_{*,c,\mssd,\mssd'}^{{\scriptscriptstyle{\otimes }}}}\big)^2 \mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }} \,\,\mathrm{,}\;\,\end{aligned}$$ where the infimum is taken over all sequences $\left(f^{\scriptscriptstyle{\otimes }}_n\right)_n\subset \Lip(\mssd^{\scriptscriptstyle{\otimes }})$ with $L^2(\mssm^{\scriptscriptstyle{\otimes }})\text{-}\lim_{n}f_n^{\scriptscriptstyle{\otimes }}=f^{\scriptscriptstyle{\otimes }}$. Recall that, by e.g. [@AmbPinSpe15 Thm.
2.2(ii)], for every $f^{\scriptscriptstyle{\otimes }}\in\Lip_{bs}(\mssd^{\scriptscriptstyle{\otimes }})\subset L^2(\mssm^{\scriptscriptstyle{\otimes }})$ there exists a sequence of functions $\left(f^{\scriptscriptstyle{\otimes }}_n\right)_n\subset \Lip_{bs}(\mssd^{\scriptscriptstyle{\otimes }})$ such that $$\begin{aligned} \label{eq:APS} L^2(\mssm^{\scriptscriptstyle{\otimes }})\text{-}\lim_{n}f^{\scriptscriptstyle{\otimes }}_n= f^{\scriptscriptstyle{\otimes }} \qquad \text{and} \qquad \left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}\right\rvert_{*,c,\mssd,\mssd'}^{{\scriptscriptstyle{\otimes }}}= L^2(\mssm^{\scriptscriptstyle{\otimes }})\text{-}\lim_{n}\left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_n\right\rvert_{c,\mssd,\mssd'}^{{\scriptscriptstyle{\otimes }}}\,\,\mathrm{.}\end{aligned}$$ ## Tensorization of the Rademacher property {#sss:TensorizationConseq} In this section we show the tensorization of the Rademacher property. In the literature, it has been addressed in the case when $\mcE=\mathsf{Ch}_{\mssd,\mssm}$ is the Cheeger energy, in connection with the tensorization of the Cheeger energy: - under infinitesimal Hilbertianity (Dfn. [Definition 54](#d:IH){reference-type="ref" reference="d:IH"}), measure doubling, and a weak $(1,2)$-Poincaré inequality (see Dfn. [Definition 69](#d:DP){reference-type="ref" reference="d:DP"} below), in [@AmbPinSpe15]; - under the Riemannian Curvature-Dimension condition $\RCD(K,\infty)$, in [@AmbGigSav14b]; - more recently, under *infinitesimal quasi-Hilbertianity*, in [@EriRajSou22]. Here, we discuss the case of general quasi-regular strongly local Dirichlet spaces $(\mbbX,\mcE)$, without any geometric assumption. In particular, we never require a two-sided comparison of $(\mcE,\mcF)$ with $\mathsf{Ch}_{*,\mssd,\mssm}$, as is implicit in the definition of infinitesimal (quasi-)Hilbertianity in [@EriRajSou22].
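Before turning to the proofs, the product formula [\[eq:CdCProduct\]](#eq:CdCProduct){reference-type="eqref" reference="eq:CdCProduct"} admits an elementary sanity check in the finite-state setting, where every object is explicitly computable. The sketch below is not part of the text and rests on hypothetical choices: finite factor spaces with uniform reference measures, Dirichlet forms generated by symmetric Q-matrices $L$, $L'$, the standard finite-state carré du champ $\Gamma(f)=\tfrac{1}{2}\big(Lf^2-2fLf\big)$, and the product generator $L\otimes I+I\otimes L'$.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(n):
    # Symmetric Q-matrix (reversible w.r.t. the uniform measure):
    # nonnegative symmetric off-diagonal rates, zero row sums.
    A = rng.random((n, n))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    return A - np.diag(A.sum(axis=1))

def gamma(L, f):
    # Carre du champ: Gamma(f) = (L f^2 - 2 f L f) / 2.
    return 0.5 * (L @ f**2 - 2 * f * (L @ f))

n, m = 3, 4
L1, L2 = generator(n), generator(m)

# Product generator on X x X': L1 (x) I + I (x) L2.
Lp = np.kron(L1, np.eye(m)) + np.kron(np.eye(n), L2)

F = rng.random((n, m))      # a function on the product space
f = F.reshape(-1)           # row-major flattening matches np.kron ordering

lhs = gamma(Lp, f).reshape(n, m)

# Right-hand side of the product formula: carre du champ of the sections.
rhs = np.array([[gamma(L1, F[:, j])[i] + gamma(L2, F[i, :])[j]
                 for j in range(m)] for i in range(n)])

print(np.allclose(lhs, rhs))
```

By bilinearity of $(f,g)\mapsto \tfrac12\big(L(fg)-fLg-gLf\big)$, the identity holds exactly here, mirroring the splitting of $\mcE^{\scriptscriptstyle{\otimes }}$ into its two marginal integrals.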
Similarly to the discussion on transfer by weight in §[4](#s:Transfer){reference-type="ref" reference="s:Transfer"}, our assumptions are weaker than those in [@AmbGigSav14b; @AmbPinSpe15; @EriRajSou22], since the Cheeger energy $\mathsf{Ch}_{*,\mssd,\mssm}$ of a metric measure space is always quasi-regular by [@Sav14 Thm. 4.1] or [@LzDSSuz20 Prop. 3.21], cf. Remark [Remark 51](#r:ComparisonAGS-AES){reference-type="ref" reference="r:ComparisonAGS-AES"}. **Theorem 61** (Tensorization of $(\mathsf{Rad})$). *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$ be a metric measure space, and $(\mcE,\mcF)$ be a quasi-regular strongly local Dirichlet form on $\mbbX$ admitting carré du champ operator $\Gamma$ and satisfying $(\mathsf{Rad}_{\mssd,\mssm})$. Further, let $(\mbbX',\mcE')$ satisfy the same assumptions as $(\mbbX,\mcE)$.* *Then, their product space $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$ satisfies $(\mathsf{Rad}_{\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }}})$.* **Proof.* Since $(X,\mssd,\mssm)$ and $(X',\mssd',\mssm')$ are metric measure spaces in the sense of Definition [Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"}, their product space $(X^{\scriptscriptstyle{\otimes }},\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }})$ is so as well. Since $\tau_\mssd=\tau$, all Lipschitz functions in this proof are continuous (in particular, Borel) for the respective distances/topologies, and we thus omit the specification of measure-representatives of such functions.* *Let $\left(G_k\right)_k$, resp. $\left(G_k'\right)_k$, be a monotone exhaustion of $X$, resp. $X'$, consisting of well-separated (see Rmk. [Remark 39](#r:CutOff){reference-type="ref" reference="r:CutOff"}) bounded open sets.
Set $G_k^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}G_k\times G_k'$, and note that $\left(G_k^{\scriptscriptstyle{\otimes }}\right)_k$ is a monotone exhaustion of $X^{\scriptscriptstyle{\otimes }}$ consisting of well-separated bounded open sets. Similarly to Remark [Remark 39](#r:CutOff){reference-type="ref" reference="r:CutOff"}, for each $k$ there exists $\varrho_k^{\scriptscriptstyle{\otimes }}\in\mathrm{Lip}_b(\mssd^{\scriptscriptstyle{\otimes }})$ satisfying condition [\[i:p:LocalityProbab:2\]](#i:p:LocalityProbab:2){reference-type="ref" reference="i:p:LocalityProbab:2"} in Proposition [Proposition 38](#p:LocalityProbab){reference-type="ref" reference="p:LocalityProbab"}, that is, such that $\mathop{\mathrm{\mathds 1}}_{G_k^{\scriptscriptstyle{\otimes }}}\leq\varrho_k^{\scriptscriptstyle{\otimes }}\leq \mathop{\mathrm{\mathds 1}}_{G_{k+1}^{\scriptscriptstyle{\otimes }}}$ and $$\mathrm{L}_{\mssd^{\scriptscriptstyle{\otimes }}}(\varrho_k^{\scriptscriptstyle{\otimes }})\leq c_k\mathop{\mathrm{\coloneqq}}\mssd^{\scriptscriptstyle{\otimes }}(G_k^{\scriptscriptstyle{\otimes }}, {G_{k+1}^{\scriptscriptstyle{\otimes }}}^\mathrm{c})^{-1}<\infty\,\,\mathrm{.}$$* *Throughout the proof, let $\mbfx\mathop{\mathrm{\coloneqq}}(x,x')\in X^{\scriptscriptstyle{\otimes }}$.* **Step I: $\mathrm{Lip}_b(\mssd^{\scriptscriptstyle{\otimes }})\subset {{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$*. Without loss of generality, we show that $\mathrm{Lip}^1_b(\mssd^{\scriptscriptstyle{\otimes }})\subset {{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$. Let $f^{\scriptscriptstyle{\otimes }}\in \mathrm{Lip}^1_b(\mssd^{\scriptscriptstyle{\otimes }})$ and set $M\mathop{\mathrm{\coloneqq}}\sup_{X^{\scriptscriptstyle{\otimes }}} \left\lvert f^{\scriptscriptstyle{\otimes }}\right\rvert$.
Note that, for each fixed $\mbfx\mathop{\mathrm{\coloneqq}}(x,x')\in X^{\scriptscriptstyle{\otimes }}$, $$\label{eq:t:Tensor:1} \begin{aligned} f_{k,\mbfx,1}\mathop{\mathrm{\coloneqq}}&\ (f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})_{\mbfx,1}\in \mathrm{Lip}_b(\mssd) \,\,\mathrm{,}\;\,\quad \mathrm{L}_{\mssd}(f_{k,\mbfx,1})\leq Mc_k+1\,\,\mathrm{,}\;\, \\ f_{k,\mbfx,2}\mathop{\mathrm{\coloneqq}}&\ (f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})_{\mbfx,2}\in \mathrm{Lip}_b(\mssd') \,\,\mathrm{,}\;\,\quad \mathrm{L}_{\mssd'}(f_{k,\mbfx,2})\leq Mc_k+1 \,\,\mathrm{.} \end{aligned}$$ By $(\mathsf{Rad}_{\mssd,\mssm})$, resp. $(\mathsf{Rad}_{\mssd',\mssm'})$, we have $$\label{eq:t:Tensor:2} \begin{aligned} f_{k,\mbfx,1}\in&\ {{\mcF}^\bullet_{\loc}}\,\,\mathrm{,}\;\,& \Gamma(f_{k,\mbfx,1})\leq&\ \mathrm{L}_{\mssd}(f_{k,\mbfx,1})^2 \quad \mssm\text{-a.e.}\,\,\mathrm{,}\;\, \\ f_{k,\mbfx,2}\in&\ {{\mcF'}^\bullet_{\loc}}\,\,\mathrm{,}\;\,& \Gamma'(f_{k,\mbfx,2})\leq&\ \mathrm{L}_{\mssd'}(f_{k,\mbfx,2})^2 \quad \mssm'\text{-a.e.} \,\,\mathrm{.} \end{aligned}$$ Furthermore, by definition of $\varrho_k^{\scriptscriptstyle{\otimes }}$ we have $f_{k,\mbfx,1}\equiv 0$ everywhere on $G_{k+1}^\mathrm{c}$, resp. $f_{k,\mbfx,2}\equiv 0$ everywhere on ${G_{k+1}'}^\mathrm{c}$.
Combining this fact with [\[eq:t:Tensor:1\]](#eq:t:Tensor:1){reference-type="eqref" reference="eq:t:Tensor:1"} and [\[eq:t:Tensor:2\]](#eq:t:Tensor:2){reference-type="eqref" reference="eq:t:Tensor:2"}, we conclude from Corollary [Corollary 8](#c:BH){reference-type="ref" reference="c:BH"} that $$\label{eq:t:Tensor:3} \begin{aligned} \Gamma(f_{k,\mbfx,1})\leq& \begin{cases} (Mc_k+1)^2 & \quad \mssm\text{-a.e.}\ \text{on } G_{k+1} \\ 0 & \quad \mssm\text{-a.e.}\ \text{on } {G_{k+1}}^\mathrm{c} \end{cases}\,\,\mathrm{,}\;\, \\ \Gamma'(f_{k,\mbfx,2})\leq& \begin{cases} (Mc_k+1)^2 & \quad \mssm'\text{-a.e.}\ \text{on } G'_{k+1} \\ 0 & \quad \mssm'\text{-a.e.}\ \text{on } {G'_{k+1}}^\mathrm{c} \end{cases} \,\,\mathrm{.} \end{aligned}$$ Thus, since $G_k$, resp. $G_k'$, has finite $\mssm$-, resp. $\mssm'$-measure for all $k$, [\[eq:t:Tensor:3\]](#eq:t:Tensor:3){reference-type="eqref" reference="eq:t:Tensor:3"} implies that, for each $k$ and every $\mbfx\in X^{\scriptscriptstyle{\otimes }}$, $$\label{eq:t:Tensor:4} \begin{aligned} f_{k,\mbfx,1}\in&\ \mcF\,\,\mathrm{,}\;\,& \left\lVert f_{k,\mbfx,1}\right\rVert_\mcF^2\leq&\ \mssm G_{k+1} \big({(Mc_k+1)^2+ M^2}\big)\,\,\mathrm{,}\;\, \\ f_{k,\mbfx,2}\in&\ \mcF'\,\,\mathrm{,}\;\,& \left\lVert f_{k,\mbfx,2}\right\rVert_{\mcF'}^2\leq&\ \mssm' G_{k+1}' \big({(Mc_k+1)^2+ M^2}\big)\,\,\mathrm{.} \end{aligned}$$ As a consequence, $$\begin{aligned} \left\lVert x'\mapsto f_{k,\mbfx,1}\right\rVert_{L^2(\mssm';\mcF)}^2\leq \mssm^{\scriptscriptstyle{\otimes }} G_{k+1}^{\scriptscriptstyle{\otimes }} \big({(Mc_k+1)^2+ M^2}\big)\,\,\mathrm{,}\;\, \\ \left\lVert x\mapsto f_{k,\mbfx,2}\right\rVert_{L^2(\mssm;\mcF')}^2\leq \mssm^{\scriptscriptstyle{\otimes }} G_{k+1}^{\scriptscriptstyle{\otimes }} \big({(Mc_k+1)^2+ M^2}\big)\,\,\mathrm{,}\;\, \end{aligned}$$ which finally shows that $f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }}\in \mcF^{\scriptscriptstyle{\otimes }}$ for each $k$ by definition
[\[eq:ProductDomain\]](#eq:ProductDomain){reference-type="eqref" reference="eq:ProductDomain"} of $\mcF^{\scriptscriptstyle{\otimes }}$. Since $\left(G_k^{\scriptscriptstyle{\otimes }}\right)_k$ is an open exhaustion of $X^{\scriptscriptstyle{\otimes }}$, the sequence $\left(G_k^{\scriptscriptstyle{\otimes }}, f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }}\right)_k$ witnesses that $f^{\scriptscriptstyle{\otimes }}\in{{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$.* **Step II: sharp estimate*. We now proceed to the estimate of $\Gamma^{\scriptscriptstyle{\otimes }}({\,\cdot\,})$ by $\mathrm{L}_{\mssd^{\scriptscriptstyle{\otimes }}}({\,\cdot\,})$. Fix $f^{\scriptscriptstyle{\otimes }}\in \mathrm{Lip}^1_b(\mssd^{\scriptscriptstyle{\otimes }})$. Since $f^{\scriptscriptstyle{\otimes }}\in{{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$ by Step I, by Corollary [Corollary 8](#c:BH){reference-type="ref" reference="c:BH"} we have $$\label{eq:t:Tensor:6} \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }})(\mbfx)= \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})(\mbfx) \quad \text{on } G_k^{\scriptscriptstyle{\otimes }}\,\,\mathrm{.}$$ Thus, it suffices to show that, for each $k\in{\mathbb N}$, $$\label{eq:t:Tensor:7} \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\leq 1 \quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.}\ \text{on }G_k^{\scriptscriptstyle{\otimes }}\,\,\mathrm{.}$$* *Let $k$ be fixed.
Since $f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }}$ is $\mssd^{\scriptscriptstyle{\otimes }}$-Lipschitz with $\mssd^{\scriptscriptstyle{\otimes }}$-bounded support, we can find a sequence ${\big(f^{\scriptscriptstyle{\otimes }}_{k,n}\big)}_n$ satisfying [\[eq:APS\]](#eq:APS){reference-type="eqref" reference="eq:APS"} with $f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }}$ in place of $f^{\scriptscriptstyle{\otimes }}$. By Step I we further have $f^{\scriptscriptstyle{\otimes }}_{k,n}\in {{\mcF^{\scriptscriptstyle{\otimes }}}^\bullet_{\loc}}$ for each $n$, thus we may compute $\Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}_{k,n})$ pointwise.* *By [\[eq:t:Tensor:5\]](#eq:t:Tensor:5){reference-type="eqref" reference="eq:t:Tensor:5"}, and applying Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"} on $X$ and $X'$, $$\begin{aligned} \nonumber \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}_{k,n})\leq&\ \Gamma\big({(f^{\scriptscriptstyle{\otimes }}_{k,n})_{{\,\cdot\,},1}}\big) + \Gamma'\big({(f^{\scriptscriptstyle{\otimes }}_{k,n})_{{\,\cdot\,},2}}\big) \leq \big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}_{k,n})_{{\,\cdot\,},1}\big\rvert_{w,\, \mssd}^2+\big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}_{k,n})_{{\,\cdot\,},2}\big\rvert_{w,\, \mssd'}^2 \\ \label{eq:t:Tensor:9} \leq&\ \big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}_{k,n})_{{\,\cdot\,},1}\big\rvert_{\mssd}^2+\big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}_{k,n})_{{\,\cdot\,},2}\big\rvert_{\mssd'}^2 \eqqcolon\left(\big\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_{k,n}\big\rvert_{c,\mssd,\mssd'}^{\scriptscriptstyle{\otimes }}\right)^2 \,\,\mathrm{,}\;\,\end{aligned}$$ where the inequality [\[eq:t:Tensor:9\]](#eq:t:Tensor:9){reference-type="eqref" reference="eq:t:Tensor:9"} holds since the slope of a Lipschitz function is a weak upper gradient. 
Since the right-hand side of [\[eq:t:Tensor:9\]](#eq:t:Tensor:9){reference-type="eqref" reference="eq:t:Tensor:9"} converges in $L^2(\mssm^{\scriptscriptstyle{\otimes }})$ by [\[eq:APS\]](#eq:APS){reference-type="eqref" reference="eq:APS"}, integrating [\[eq:t:Tensor:9\]](#eq:t:Tensor:9){reference-type="eqref" reference="eq:t:Tensor:9"} w.r.t. $\mssm^{\scriptscriptstyle{\otimes }}$ shows that ${\big(f^{\scriptscriptstyle{\otimes }}_{k,n}\big)}_n$ is uniformly bounded in $\mcF^{\scriptscriptstyle{\otimes }}$. Thus, up to possibly choosing a suitable non-relabeled subsequence, we may and will assume with no loss of generality that ${\big(f^{\scriptscriptstyle{\otimes }}_{k,n}\big)}_n$ is $\mcF^{\scriptscriptstyle{\otimes }}$-weakly convergent, in which case its $\mcF^{\scriptscriptstyle{\otimes }}$-weak limit is $f^{\scriptscriptstyle{\otimes }}\varrho^{\scriptscriptstyle{\otimes }}_k$ in light of its $L^2(\mssm^{\scriptscriptstyle{\otimes }})$-convergence to $f^{\scriptscriptstyle{\otimes }}\varrho^{\scriptscriptstyle{\otimes }}_k$ and [@MaRoe92 Lem. I.2.12].* *Fix now $h^{\scriptscriptstyle{\otimes }}\in{\mcF_b^{\scriptscriptstyle{\otimes }}}^+$. Then, by e.g. [@AriHin05 Lem. 
3.3(ii)], and by [\[eq:t:Tensor:9\]](#eq:t:Tensor:9){reference-type="eqref" reference="eq:t:Tensor:9"}, $$\begin{aligned} \label{eq:t:Tensor:10} \int h^{\scriptscriptstyle{\otimes }}\, \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }}) \mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }}\leq&\ \liminf_{n }\int h^{\scriptscriptstyle{\otimes }}\, \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}_{k,n})\mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }} \leq \liminf_{n }\int h^{\scriptscriptstyle{\otimes }} \left(\big\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_{k,n}\big\rvert_{c,\mssd,\mssd'}^{\scriptscriptstyle{\otimes }}\right)^2\mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }} \,\,\mathrm{.}\end{aligned}$$ Since $h^{\scriptscriptstyle{\otimes }}\in L^\infty(\mssm^{\scriptscriptstyle{\otimes }})$, by [\[eq:APS\]](#eq:APS){reference-type="eqref" reference="eq:APS"} there exists $$\begin{aligned} \label{eq:t:Tensor:12} \lim_{n}\int h^{\scriptscriptstyle{\otimes }} \left(\big\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}_{k,n}\big\rvert_{c,\mssd,\mssd'}^{\scriptscriptstyle{\otimes }}\right)^2 \mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }} = \int h^{\scriptscriptstyle{\otimes }}\left(\big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\big\rvert_{*,c,\mssd,\mssd'}^{\scriptscriptstyle{\otimes }}\right)^2 \mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }} \,\,\mathrm{.}\end{aligned}$$ Combining [\[eq:t:Tensor:10\]](#eq:t:Tensor:10){reference-type="eqref" reference="eq:t:Tensor:10"} with [\[eq:t:Tensor:12\]](#eq:t:Tensor:12){reference-type="eqref" reference="eq:t:Tensor:12"} we thus have $$\begin{aligned} \int h^{\scriptscriptstyle{\otimes }}\, \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }}) \mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes 
}} \leq& \int h^{\scriptscriptstyle{\otimes }}\left(\big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\big\rvert_{*,c,\mssd,\mssd'}^{\scriptscriptstyle{\otimes }}\right)^2 \mathop{}\!\mathrm{d}\mssm^{\scriptscriptstyle{\otimes }} \,\,\mathrm{,}\;\,\end{aligned}$$ whence, by arbitrariness of $h^{\scriptscriptstyle{\otimes }}$, we conclude that $$\begin{aligned} \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\leq&\ \left(\big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\big\rvert_{*,c,\mssd,\mssd'}^{\scriptscriptstyle{\otimes }}\right)^2 = \big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\big\rvert_{*,\mssd^{\scriptscriptstyle{\otimes }}}^2 \leq \big\lvert\mathrm{D}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\big\rvert_{\mssd^{\scriptscriptstyle{\otimes }}}^2 \,\,\mathrm{,}\;\,\end{aligned}$$ where the equality is shown in [@AmbPinSpe15 Thm. 3.2], and the last inequality holds again by definition of the objects involved. 
Since $\varrho^{\scriptscriptstyle{\otimes }}_k\equiv \mathop{\mathrm{\mathds 1}}$ on $G^{\scriptscriptstyle{\otimes }}_k$, by locality of $\left\lvert\mathrm{D}{\,\cdot\,}\right\rvert_{\mssd^{\scriptscriptstyle{\otimes }}}$ and locality of $\Gamma^{\scriptscriptstyle{\otimes }}$ as in Corollary [Corollary 8](#c:BH){reference-type="ref" reference="c:BH"}, we finally obtain $$\begin{aligned} \Gamma^{\scriptscriptstyle{\otimes }}(f^{\scriptscriptstyle{\otimes }}\varrho_k^{\scriptscriptstyle{\otimes }})\mathop{\mathrm{\mathds 1}}_{G_k^{\scriptscriptstyle{\otimes }}}\leq \left\lvert\mathrm{D}f^{\scriptscriptstyle{\otimes }}\right\rvert_{\mssd^{\scriptscriptstyle{\otimes }}} \mathop{\mathrm{\mathds 1}}_{G_k^{\scriptscriptstyle{\otimes }}}\leq \mathrm{L}_{\mssd^{\scriptscriptstyle{\otimes }}}(f^{\scriptscriptstyle{\otimes }}) \leq 1\end{aligned}$$ by the assumption that $f^{\scriptscriptstyle{\otimes }}\in \mathrm{Lip}^1_b(\mssd^{\scriptscriptstyle{\otimes }})$, which concludes [\[eq:t:Tensor:7\]](#eq:t:Tensor:7){reference-type="eqref" reference="eq:t:Tensor:7"} and therefore the assertion of the theorem. ◻* ## Tensorization of the Sobolev-to-Lipschitz property {#sss:TensorSL} In this section we show a slightly weaker form of tensorization of the Sobolev-to-Lipschitz property. Precisely, we show that the Sobolev-to-Lipschitz property on the factors implies the *continuous*-Sobolev-to-Lipschitz property on their product space. **Theorem 62**. *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$ be a metric measure space (Dfn. [Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"}), and $(\mcE,\mcF)$ be a quasi-regular strongly local Dirichlet form on $\mbbX$ satisfying $(\mathsf{SL}_{\mssm,\mssd})$. Further let $(\mbbX',\mcE')$ be satisfying the same assumptions as $(\mbbX,\mcE)$. 
Then, their product space $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$ satisfies $(\mathsf{cSL}_{\tau^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }},\mssd^{\scriptscriptstyle{\otimes }}})$.* Before discussing a proof of Theorem [Theorem 62](#t:TensorSL){reference-type="ref" reference="t:TensorSL"}, let us explain why both its statement and the proof presented below are not dual to the tensorization of the Rademacher property in Theorem [Theorem 61](#t:TensorRad){reference-type="ref" reference="t:TensorRad"}. Indeed, the proof of Theorem [Theorem 61](#t:TensorRad){reference-type="ref" reference="t:TensorRad"} crucially relies on Proposition [Proposition 44](#p:IneqCdC){reference-type="ref" reference="p:IneqCdC"}. A dual proof for the tensorization of the Sobolev-to-Lipschitz property would then rely on the chain of inequalities in [\[eq:SLChain2\]](#eq:SLChain2){reference-type="eqref" reference="eq:SLChain2"}. Now, the validity of the first inequality in [\[eq:SLChain2\]](#eq:SLChain2){reference-type="eqref" reference="eq:SLChain2"} is implied by ---in essence, equivalent to--- the $\tau$-upper regularity of the form $(\mcE,\mcF)$, as discussed in (the proof of) Proposition [Proposition 48](#p:IneqCdC2){reference-type="ref" reference="p:IneqCdC2"}. The validity of the last equality in [\[eq:SLChain2\]](#eq:SLChain2){reference-type="eqref" reference="eq:SLChain2"}, that is, [\[eq:SLChain3\]](#eq:SLChain3){reference-type="eqref" reference="eq:SLChain3"}, was already discussed for *Lipschitz* $f$ in Remark [Remark 52](#r:Validity1){reference-type="ref" reference="r:Validity1"} under further assumptions on the metric measure structure, such as, e.g., measure doubling and a weak $(1,2)$-Poincaré inequality. 
Deducing the same equality for $f\in\mbbL^{\mssm,\tau}_{\loc,b}$ from the equality for $f\in\mathrm{Lip}_b(\mssd)$ is equivalent to having already shown the continuous-Sobolev-to-Lipschitz property $(\mathsf{cSL}_{\tau,\mssm,\mssd})$. As a consequence, it seems difficult to establish the chain of inequalities in [\[eq:SLChain2\]](#eq:SLChain2){reference-type="eqref" reference="eq:SLChain2"} without assuming further structural properties of the product metric measure structure. ### Semigroups, irreducibility, and maximal functions {#sss:Semigroups} Let us collect here some definitions and simple facts about them. Throughout this section, let $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ be any Dirichlet spaces. *Semigroups*. We denote by $\left(T_t\right)_{t\geq 0}$, $T_t\colon L^2(\mssm)\to L^2(\mssm)$, the (strongly continuous contraction) $L^2(\mssm)$-semigroup associated to $(\mcE,\mcF)$. We denote again by $T_t\colon L^\infty(\mssm)\to L^\infty(\mssm)$ the extension of $T_t\colon L^2(\mssm)\to L^2(\mssm)$ to a weakly\* continuous semigroup on $L^\infty(\mssm)$. We denote by $T_t^{\scriptscriptstyle{\otimes }}$ the semigroup(s) associated to $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$. We note that $T_t^{\scriptscriptstyle{\otimes }}=T_t\otimes T_t'$, that is, in particular $$\label{eq:TensorSemigroup} T_t^{\scriptscriptstyle{\otimes }}(f\otimes f')(x,x')=(T_t f)(x) (T_t' f')(x') \,\,\mathrm{,}\;\,\qquad t\geq 0\,\,\mathrm{,}\;\,\quad f\in L^\infty(\mssm)\,\,\mathrm{,}\;\,f'\in L^\infty(\mssm')\,\,\mathrm{.}$$ *Invariance and irreducibility*. A set $A\in\Sigma$ is *$\mcE$-invariant* if $$T_t(\mathop{\mathrm{\mathds 1}}_A f)=\mathop{\mathrm{\mathds 1}}_A T_tf\,\,\mathrm{,}\;\,\qquad f\in L^2(\mssm) \,\,\mathrm{.}$$ For a characterization of $\mcE$-invariance in the present setting, see e.g. [@LzDSSuz21 Prop. 2.38]. 
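The product formula [\[eq:TensorSemigroup\]](#eq:TensorSemigroup){reference-type="eqref" reference="eq:TensorSemigroup"} has an elementary finite-dimensional analogue that can be checked by direct computation: for generators $L$, $L'$ of finite-state Markov semigroups, the product semigroup is generated by the Kronecker sum $L\otimes I+I\otimes L'$, and $e^{t(L\otimes I+I\otimes L')}=e^{tL}\otimes e^{tL'}$ because the two summands commute. The following is a minimal numerical sketch of this identity (purely illustrative, in plain Python with a truncated-series matrix exponential; it is not part of the theory developed here):

```python
from math import factorial

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(A, c):
    return [[c * x for x in row] for row in A]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def expm(A, terms=40):
    # Truncated power series of the matrix exponential e^A.
    result, power = identity(len(A)), identity(len(A))
    for k in range(1, terms):
        power = mat_mul(power, A)
        result = mat_add(result, scale(power, 1.0 / factorial(k)))
    return result

def kron(A, B):
    # Kronecker product A ⊗ B.
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

# Generators of two symmetric two-state Markov chains (rows sum to zero).
L1 = [[-1.0, 1.0], [1.0, -1.0]]
L2 = [[-2.0, 2.0], [2.0, -2.0]]
t = 0.3

# Generator of the product chain: the Kronecker sum L1 ⊗ I + I ⊗ L2.
Lsum = mat_add(kron(L1, identity(2)), kron(identity(2), L2))

lhs = expm(scale(Lsum, t))                          # T_t on the product space
rhs = kron(expm(scale(L1, t)), expm(scale(L2, t)))  # T_t ⊗ T_t'

err = max(abs(x - y) for rl, rr in zip(lhs, rhs) for x, y in zip(rl, rr))
assert err < 1e-10
```

That the check succeeds to machine precision reflects exactly the commutation of $L\otimes I$ and $I\otimes L'$, the finite-dimensional shadow of the independence of the two factor semigroups.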
We say that a Dirichlet space $(\mbbX,\mcE)$ is *irreducible* if every $\mcE$-invariant $A\in\Sigma$ satisfies either $\mssm A=0$ or $\mssm A^\mathrm{c}=0$. **Lemma 63**. *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty)$ be a distance (not: extended) on $X$. If $(\mbbX,\mcE)$ satisfies $(\mathsf{SL}_{\mssm,\mssd})$, then it is irreducible.* **Proof.* Suppose, by contradiction, that $(\mbbX,\mcE)$ is *not* irreducible while $(\mathsf{SL}_{\mssm,\mssd})$ holds. Since $(\mcE,\mcF)$ is not irreducible, there exists an $\mcE$-invariant $A\in\Sigma$ with $\mssm A>0$ and $\mssm A^\mathrm{c}>0$. By $\mcE$-invariance of $A$ and [@LzDSSuz21 Lem. 2.39] we have $\mathop{\mathrm{\mathds 1}}_A\in{{\mcF}^\bullet_{\loc}}$ and $\mu_{\mathop{\mathrm{\mathds 1}}_A}\equiv 0$. Hence, by $(\mathsf{SL}_{\mssm,\mssd})$, the function $\mathop{\mathrm{\mathds 1}}_A$ has a $\mssd$-Lipschitz representative $\widehat{\mathop{\mathrm{\mathds 1}}_A}$ with $\mathrm{L}_{\mssd}(\widehat{\mathop{\mathrm{\mathds 1}}_A})=0$. Since $\mssd$ is a distance (not: extended), $\widehat{\mathop{\mathrm{\mathds 1}}_A}$ is a constant, which is a contradiction since $\mssm A,\mssm A^\mathrm{c}>0$. ◻* See [@LzDSSuz21 §2.6.2] for relations between $\mcE$-invariance and metric notions (in particular, $\mssd$-accessibility) in the non-irreducible case. *Maximal functions*. We recall the following adaptation to our setting of results in [@AriHin05]. Set $$\begin{aligned} \mbbL^{\mssm,A}_{\loc,r}\mathop{\mathrm{\coloneqq}}\left\{f\in\mbbL^{\mssm}_{\loc}: f=0 \quad \mssm\text{-a.e.} \text{~on~} A\,\,\mathrm{,}\;\,\left\lvert f\right\rvert\leq r \quad \mssm\text{-a.e.}\right\}\subset \mbbL^{\mssm}_{\loc,b}\,\,\mathrm{,}\;\,\qquad r>0\,\,\mathrm{.}\end{aligned}$$ **Proposition 64** ([@LzDSSuz21 Prop. 4.14]). *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space. For each $A\in\Sigma$ there exists an $\mssm$-a.e.
unique $\Sigma$-measurable function $\bar\mssd_{\mssm,A}\colon X\rightarrow[0,\infty]$ so that $\bar\mssd_{\mssm,A}\wedge r$ is the $\mssm$-a.e. maximal element of $\mbbL^{\mssm,A}_{\loc, r}$.* We call the function $\bar\mssd_{\mssm,A}$ constructed in Proposition [Proposition 64](#p:Hino){reference-type="ref" reference="p:Hino"} the *maximal function* of $A\in\Sigma$. Note that $\bar\mssd_{\mssm,A}$ is generally not an element of ${{L^\infty(\mssm)}^\bullet_{\loc}}$, unless $(\mbbX,\mcE)$ is irreducible. In proving the tensorization of the Sobolev-to-Lipschitz property, we shall need the following specialization to our setting of a very general result by T. Ariyoshi and M. Hino. **Theorem 65** ([@AriHin05 Thm. 5.2], irreducible case). *Let $(\mbbX,\mcE)$ be an irreducible quasi-regular strongly local Dirichlet space. Then, for every $A\in\Sigma$ with $0<\mssm A<\infty$, $$\begin{aligned} \nu\text{-}\lim_{t\downarrow 0} \big({-2t\log T_t\mathop{\mathrm{\mathds 1}}_A}\big) = \bar\mssd_{\mssm, A}^2\end{aligned}$$ for every probability measure $\nu$ on $\Sigma$ equivalent to $\mssm$.* ### Tensorization of the Sobolev-to-Lipschitz property {#tensorization-of-the-sobolev-to-lipschitz-property} Let us now show a form of tensorization of the Sobolev-to-Lipschitz property for metric measure spaces. We recall that, on every such space $(X,\mssd,\mssm)$, the topology $\tau$ is always assumed to be the one induced by $\mssd$, and no other topology is considered. We start with two preparatory lemmas. **Lemma 66**. *Let $(\mbbX,\mcE)$, resp. $(\mbbX',\mcE')$, be quasi-regular strongly local Dirichlet spaces with $\mbbX$, resp. $\mbbX'$, satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}. Further let $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$, resp. $\mssd'\colon X^{\prime{\scriptscriptstyle{\times 2}}}\to [0,\infty]$, be a separable jointly $\tau$-, resp.
$\tau'$-, continuous extended pseudo-distance on $X$, resp. $X'$, and assume $({\mssd}\textrm{-}\mathsf{SL}_{\mssm})$, resp. $({\mssd'}\textrm{-}\mathsf{SL}_{\mssm'})$. Then, $$\label{eq:l:IntrinsicIneq:0} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}\leq \mssd_\mssm\oplus \mssd_{\mssm'} \quad \text{everywhere on~$X^{\scriptscriptstyle{\otimes }}$}\,\,\mathrm{,}\;\,$$ and the topology induced by $\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}$ on $X^{\scriptscriptstyle{\otimes }}$ is separable.* **Proof.* Firstly, note that by $({\mssd}\textrm{-}\mathsf{SL}_{\mssm})$, $({\mssd'}\textrm{-}\mathsf{SL}_{\mssm'})$ and joint $\tau^{\scriptscriptstyle{\otimes }}$-continuity of $\mssd\oplus\mssd'$, the extended pseudo-distance $\mssd_\mssm\oplus \mssd_{\mssm'}$ too is jointly $\tau^{\scriptscriptstyle{\otimes }}$-continuous. Now, let $\Omega^{\scriptscriptstyle{\otimes }}\in\Sigma^{\scriptscriptstyle{\otimes }}$ be a set of full $\mssm^{\scriptscriptstyle{\otimes }}$-measure on which [\[eq:l:TensorIntrinsicD:0\]](#eq:l:TensorIntrinsicD:0){reference-type="eqref" reference="eq:l:TensorIntrinsicD:0"} holds. By $\tau^{\scriptscriptstyle{\otimes }}$-density of $\Omega^{\scriptscriptstyle{\otimes }}$ in $X^{\scriptscriptstyle{\otimes }}$, for every $\mbfx,\mbfy\in X^{\scriptscriptstyle{\otimes }}$ there exists $\left(\mbfx_n\right)_n,\left(\mbfy_n\right)_n\subset \Omega^{\scriptscriptstyle{\otimes }}$ with $\tau^{\scriptscriptstyle{\otimes }}$-$\lim_{n}\mbfx_n=\mbfx$ and $\tau^{\scriptscriptstyle{\otimes }}$-$\lim_{n}\mbfy_n=\mbfy$. 
Thus, by joint $\tau^{\scriptscriptstyle{\otimes }}$-lower semi-continuity of $\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}$ and joint $\tau^{\scriptscriptstyle{\otimes }}$-continuity of $\mssd_\mssm \oplus \mssd_{\mssm'}$, $$\begin{aligned} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfx,\mbfy) \leq \liminf_{n }\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfx_n,\mbfy_n) \leq \liminf_{n }(\mssd_\mssm \oplus \mssd_{\mssm'})(\mbfx_n,\mbfy_n) = (\mssd_\mssm\oplus \mssd_{\mssm'})(\mbfx,\mbfy) \,\,\mathrm{.}\end{aligned}$$* **Separability of $\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}$*. Since $(X,\mssd)$, resp. $(X',\mssd')$, is separable, so is $(X,\mssd_\mssm)$, resp. $(X',\mssd_{\mssm'})$, by $({\mssd}\textrm{-}\mathsf{SL}_{\mssm})$, resp. $({\mssd'}\textrm{-}\mathsf{SL}_{\mssm'})$. Thus, $(X^{\scriptscriptstyle{\otimes }},\mssd_\mssm\oplus \mssd_{\mssm'})$ is separable as well, and $(X^{\scriptscriptstyle{\otimes }},\mssd_{\mssm^{\scriptscriptstyle{\otimes }}})$ is separable as a consequence of [\[eq:l:IntrinsicIneq:0\]](#eq:l:IntrinsicIneq:0){reference-type="eqref" reference="eq:l:IntrinsicIneq:0"}. ◻* Let us recall the following results from [@LzDSSuz21]. **Lemma 67**. *Let $(\mbbX,\mcE)$ be a quasi-regular strongly local Dirichlet space with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, and assume that the topology generated on $X$ by $\mssd_\mssm$ is separable. Then,* 1. *[\[i:l:Lemma:1\]]{#i:l:Lemma:1 label="i:l:Lemma:1"} $(\mbbX,\mcE)$ satisfies $(\mathsf{Rad}_{\mssd_\mssm,\mssm})$;* 2. *[\[i:l:Lemma:2\]]{#i:l:Lemma:2 label="i:l:Lemma:2"} $\mssd_\mssm({\,\cdot\,}, A)$ is $\Sigma$-measurable for every $A\in\Sigma$;* 3. *[\[i:l:Lemma:3\]]{#i:l:Lemma:3 label="i:l:Lemma:3"} $\mssd_\mssm({\,\cdot\,}, A)\leq \bar\mssd_{\mssm,A}$ $\mssm$-a.e. for every $A\in\Sigma$.* **Proof.* Assertion [\[i:l:Lemma:1\]](#i:l:Lemma:1){reference-type="ref" reference="i:l:Lemma:1"} is [@LzDSSuz21 Cor. 3.15].
In order to show [\[i:l:Lemma:2\]](#i:l:Lemma:2){reference-type="ref" reference="i:l:Lemma:2"}, it suffices to show that $\mssd_\mssm({\,\cdot\,},x)$ is $\Sigma$-measurable for every $x\in X$. Indeed, this implies the $\Sigma$-measurability of $\mssd_\mssm({\,\cdot\,}, A)$ for every $A\in\Sigma$ as in the proof of [@LzDSSuz21 Rmk. 3.12(i)]. Since $\mssd_\mssm({\,\cdot\,},x)$ is $\tau$-lower semi-continuous for every $x\in X$, it is also $\Sigma$-measurable, since $\Sigma\supset\msB_{\tau}$. This concludes the proof of [\[i:l:Lemma:2\]](#i:l:Lemma:2){reference-type="ref" reference="i:l:Lemma:2"}. Assertion [\[i:l:Lemma:3\]](#i:l:Lemma:3){reference-type="ref" reference="i:l:Lemma:3"} follows from [\[i:l:Lemma:1\]](#i:l:Lemma:1){reference-type="ref" reference="i:l:Lemma:1"} and [\[i:l:Lemma:2\]](#i:l:Lemma:2){reference-type="ref" reference="i:l:Lemma:2"} by [@LzDSSuz21 Lem. 4.16]. ◻* *Proof of Theorem [Theorem 62](#t:TensorSL){reference-type="ref" reference="t:TensorSL"}.* We prove $({\mssd^{\scriptscriptstyle{\otimes }}}\textrm{-}\mathsf{SL}_{\mssm^{\scriptscriptstyle{\otimes }}})$ and conclude the assertion by [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}. *Step I: comparison with the maximal function*. By $(\mathsf{SL}_{\mssm,\mssd})$, resp. $(\mathsf{SL}_{\mssm',\mssd'})$, the Dirichlet spaces $(\mbbX,\mcE)$ and $(\mbbX',\mcE')$ are both irreducible. Fix $A\in\tau$ and $A'\in\tau'$ with $\mssm A, \mssm'A'\in (0,\infty)$ to be chosen later, and set $A^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}A\times A'$. Let $\nu$, resp. $\nu'$, be any probability measure on $(X,\Sigma)$, resp. $(X',\Sigma')$, equivalent to $\mssm$, resp. $\mssm'$, and observe that $\nu^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}\nu\otimes\nu'$ is a probability measure on $\Sigma^{\scriptscriptstyle{\otimes }}$ equivalent to $\mssm^{\scriptscriptstyle{\otimes }}$. 
On the one hand, by [\[eq:TensorSemigroup\]](#eq:TensorSemigroup){reference-type="eqref" reference="eq:TensorSemigroup"} and by Theorem [Theorem 65](#t:AriHino){reference-type="ref" reference="t:AriHino"} applied to $(\mcE,\mcF)$, resp. $(\mcE',\mcF')$, $$\begin{aligned} \label{eq:t:SLTensor:1} \nu^{\scriptscriptstyle{\otimes }}\text{-}\lim_{t\to 0} \left(-2t\log T_t^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\mathds 1}}_{A^{\scriptscriptstyle{\otimes }}}\right)= \nu\text{-}\lim_{t\to 0} \left(-2t\log T_t\mathop{\mathrm{\mathds 1}}_{A}\right) + \nu'\text{-}\lim_{t\to 0} \left(-2t\log T_t'\mathop{\mathrm{\mathds 1}}_{A'}\right) = \bar\mssd_{\mssm,A}^2+\bar\mssd_{\mssm',A'}^2 \,\,\mathrm{.}\end{aligned}$$ Since $(X,\mssd,\mssm)$, resp. $(X',\mssd',\mssm')$, is a metric measure space, $(\mathsf{ScL}_{\mssm,\tau,\mssd})$ coincides with $(\mathsf{SL}_{\mssm,\mssd})$, resp. $(\mathsf{ScL}_{\mssm',\tau',\mssd'})$ with $(\mathsf{SL}_{\mssm',\mssd'})$. Furthermore, since $A$ is $\tau$-open, resp. $A'$ is $\tau'$-open, we have $A\cap \cl_\tau\mathop{\mathrm{int}}_\tau A=A$ and analogously for $A'$ in place of $A$. Thus, combining $(\mathsf{SL}_{\mssm,\mssd})$, resp. $(\mathsf{SL}_{\mssm',\mssd'})$, with [@LzDSSuz21 Lem. 4.19, Rem. 
4.20(d)], it follows from [\[eq:t:SLTensor:1\]](#eq:t:SLTensor:1){reference-type="eqref" reference="eq:t:SLTensor:1"} that $$\begin{aligned} \label{eq:t:SLTensor:2} \nu^{\scriptscriptstyle{\otimes }}\text{-}\lim_{t\to 0} \left(-2t\log T_t^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\mathds 1}}_{A^{\scriptscriptstyle{\otimes }}}\right) \leq \mssd({\,\cdot\,}, A)^2+\mssd'({\,\cdot\,},A')^2 \quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.}\,\,\mathrm{.}\end{aligned}$$ On the other hand, by Theorem [Theorem 65](#t:AriHino){reference-type="ref" reference="t:AriHino"} applied to $(\mcE^{\scriptscriptstyle{\otimes }},\mcF^{\scriptscriptstyle{\otimes }})$, $$\begin{aligned} \label{eq:t:SLTensor:3} \nu^{\scriptscriptstyle{\otimes }}\text{-}\lim_{t\to 0} \left(-2t\log T_t^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\mathds 1}}_{A^{\scriptscriptstyle{\otimes }}}\right)= \bar\mssd_{\mssm^{\scriptscriptstyle{\otimes }},A^{\scriptscriptstyle{\otimes }}}^2 \quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.} \,\,\mathrm{.}\end{aligned}$$ Combining [\[eq:t:SLTensor:2\]](#eq:t:SLTensor:2){reference-type="eqref" reference="eq:t:SLTensor:2"} and [\[eq:t:SLTensor:3\]](#eq:t:SLTensor:3){reference-type="eqref" reference="eq:t:SLTensor:3"}, for $\mssm^{\scriptscriptstyle{\otimes }}$-a.e. 
$\mbfx\mathop{\mathrm{\coloneqq}}(x,x')$, $$\begin{aligned} \bar\mssd_{\mssm^{\scriptscriptstyle{\otimes }},A^{\scriptscriptstyle{\otimes }}}(\mbfx)^2 \leq&\ \mssd(x,A)^2+\mssd'(x',A')^2 \\ =&\ \inf_{y\in A} \mssd(x,y)^2+\inf_{y'\in A'} \mssd'(x',y')^2 \\ \leq&\ \inf_{(y,y')\in A^{\scriptscriptstyle{\otimes }}} \big({\mssd(x,y)^2+\mssd'(x',y')^2}\big) \\ =&\ \left(\inf_{\mbfy\in A^{\scriptscriptstyle{\otimes }}} \mssd^{\scriptscriptstyle{\otimes }}(\mbfx,\mbfy)\right)^2 = \mssd^{\scriptscriptstyle{\otimes }}(\mbfx,A^{\scriptscriptstyle{\otimes }})^2 \,\,\mathrm{,}\;\,\end{aligned}$$ where the second line holds by definition of the distance from a set; that is $$\begin{aligned} \label{eq:t:SLTensor:4} \bar\mssd_{\mssm^{\scriptscriptstyle{\otimes }},A^{\scriptscriptstyle{\otimes }}} \leq \mssd^{\scriptscriptstyle{\otimes }}({\,\cdot\,}, A^{\scriptscriptstyle{\otimes }}) \quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.}\end{aligned}$$ *Step II: comparison with the intrinsic distance*. By Lemma [Lemma 66](#l:IntrinsicIneq){reference-type="ref" reference="l:IntrinsicIneq"} the topology induced by $\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}$ is separable.
Thus, by Lemma [Lemma 67](#l:Lemma){reference-type="ref" reference="l:Lemma"}[\[i:l:Lemma:3\]](#i:l:Lemma:3){reference-type="ref" reference="i:l:Lemma:3"} applied to $(\mbbX^{\scriptscriptstyle{\otimes }},\mcE^{\scriptscriptstyle{\otimes }})$, $$\begin{aligned} \label{eq:t:SLTensor:5} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}({\,\cdot\,}, A^{\scriptscriptstyle{\otimes }})\leq \bar\mssd_{\mssm^{\scriptscriptstyle{\otimes }},A^{\scriptscriptstyle{\otimes }}} \quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.} \,\,\mathrm{.}\end{aligned}$$ Combining [\[eq:t:SLTensor:4\]](#eq:t:SLTensor:4){reference-type="eqref" reference="eq:t:SLTensor:4"} and [\[eq:t:SLTensor:5\]](#eq:t:SLTensor:5){reference-type="eqref" reference="eq:t:SLTensor:5"} we thus have, for every $A^{\scriptscriptstyle{\otimes }}\mathop{\mathrm{\coloneqq}}A\times A'$ with $A\in\tau$, $A'\in\tau'$, $$\begin{aligned} \label{eq:t:SLTensor:6} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}({\,\cdot\,}, A^{\scriptscriptstyle{\otimes }})\leq \mssd^{\scriptscriptstyle{\otimes }}({\,\cdot\,}, A^{\scriptscriptstyle{\otimes }})\quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.} \,\,\mathrm{.}\end{aligned}$$ Now, fix $\mbfx\mathop{\mathrm{\coloneqq}}(x,x')\in X^{\scriptscriptstyle{\otimes }}$, choose $A^{\scriptscriptstyle{\otimes }}=A^{\scriptscriptstyle{\otimes }}_r\mathop{\mathrm{\coloneqq}}B^{\mssd}_r(x)\times B^{\mssd'}_r(x')$ in [\[eq:t:SLTensor:6\]](#eq:t:SLTensor:6){reference-type="eqref" reference="eq:t:SLTensor:6"}, and note that $$\begin{aligned} \label{eq:t:SLTensor:7} \inf_{r>0} \diam_{\mssd^{\scriptscriptstyle{\otimes }}}(A^{\scriptscriptstyle{\otimes }}_r)=0\,\,\mathrm{.}\end{aligned}$$ Then, by Lemma [Lemma 1](#l:SupDistanceShrinking){reference-type="ref" reference="l:SupDistanceShrinking"} $$\begin{aligned} \label{eq:t:SLTensor:8} \sup_{r>0}\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}({\,\cdot\,}, A^{\scriptscriptstyle{\otimes }}_r)\leq \sup_{r>0} \mssd^{\scriptscriptstyle{\otimes }}({\,\cdot\,},
A^{\scriptscriptstyle{\otimes }}_r) = \mssd^{\scriptscriptstyle{\otimes }}({\,\cdot\,},\mbfx) \quad \mssm^{\scriptscriptstyle{\otimes }}\text{-a.e.} \,\,\mathrm{.}\end{aligned}$$ Let $\Omega^{\scriptscriptstyle{\otimes }}\in \Sigma^{\scriptscriptstyle{\otimes }}$ be a set of full $\mssm^{\scriptscriptstyle{\otimes }}$-measure on which [\[eq:t:SLTensor:8\]](#eq:t:SLTensor:8){reference-type="eqref" reference="eq:t:SLTensor:8"} holds, and fix $\mbfy\in \Omega^{\scriptscriptstyle{\otimes }}$ and $\varepsilon>0$. For each $r>0$, further let $\mbfx_r\in A^{\scriptscriptstyle{\otimes }}_r$ satisfy $\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy,\mbfx_r)\leq \varepsilon+\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy,A^{\scriptscriptstyle{\otimes }}_r)$. Since $\mbfx_r\in A^{\scriptscriptstyle{\otimes }}_r$, there exists $\tau^{\scriptscriptstyle{\otimes }}$-$\lim_{r\downarrow 0} \mbfx_r=\mbfx$ by [\[eq:t:SLTensor:7\]](#eq:t:SLTensor:7){reference-type="eqref" reference="eq:t:SLTensor:7"}. Thus, for every $\mbfy\in\Omega^{\scriptscriptstyle{\otimes }}$, $$\begin{aligned} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy,\mbfx)-\varepsilon\leq \liminf_{r\downarrow 0} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy,\mbfx_r) -\varepsilon\leq \liminf_{r\downarrow 0} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy,A^{\scriptscriptstyle{\otimes }}_r) = \sup_{r>0} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy,A^{\scriptscriptstyle{\otimes }}_r) \leq \mssd^{\scriptscriptstyle{\otimes }}(\mbfy,\mbfx)\,\,\mathrm{,}\;\,\end{aligned}$$ where the first and last inequality hold by $\tau^{\scriptscriptstyle{\otimes }}$-lower semi-continuity of $\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}$ and [\[eq:t:SLTensor:8\]](#eq:t:SLTensor:8){reference-type="eqref" reference="eq:t:SLTensor:8"} respectively.
Since $\varepsilon>0$ was arbitrary, we conclude $$\begin{aligned} \label{eq:t:SLTensor:9} \mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy,\mbfx)\leq \mssd^{\scriptscriptstyle{\otimes }}(\mbfy,\mbfx)\,\,\mathrm{,}\;\,\qquad \mbfy\in\Omega^{\scriptscriptstyle{\otimes }}\,\,\mathrm{,}\;\,\mbfx\in X^{\scriptscriptstyle{\otimes }} \,\,\mathrm{.}\end{aligned}$$ Since $(X,\mssd,\mssm)$, resp. $(X',\mssd',\mssm')$, is a metric measure space in the sense of Definition [Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"}, the set $\Omega^{\scriptscriptstyle{\otimes }}$ is $\tau^{\scriptscriptstyle{\otimes }}$-dense in $X^{\scriptscriptstyle{\otimes }}$. For each fixed $\mbfy_0\in X^{\scriptscriptstyle{\otimes }}$ we may thus find a sequence $\left(\mbfy_n\right)_n\subset \Omega^{\scriptscriptstyle{\otimes }}$ with $\tau^{\scriptscriptstyle{\otimes }}$-$\lim_{n}\mbfy_n=\mbfy_0$. By, respectively, the $\tau^{\scriptscriptstyle{\otimes }}$-lower semi-continuity of $\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}$, the inequality [\[eq:t:SLTensor:9\]](#eq:t:SLTensor:9){reference-type="eqref" reference="eq:t:SLTensor:9"}, and the $\tau^{\scriptscriptstyle{\otimes }}$-continuity of $\mssd^{\scriptscriptstyle{\otimes }}$, $$\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy_0,\mbfx) \leq \liminf_{n }\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfy_n,\mbfx) \leq \liminf_{n }\mssd^{\scriptscriptstyle{\otimes }}(\mbfy_n,\mbfx)= \mssd^{\scriptscriptstyle{\otimes }}(\mbfy_0,\mbfx) \,\,\mathrm{,}\;\,$$ which concludes the proof by arbitrariness of $\mbfx$ and $\mbfy_0$. ◻ ### Tensorization of the intrinsic distance As a consequence of the tensorization of the Sobolev-to-Lipschitz property we further obtain the following identification of the intrinsic distance of product forms. **Corollary 68**. *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$ be a metric measure space (Dfn.
[Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"}), and $(\mcE,\mcF)$ be a quasi-regular strongly local Dirichlet form on $\mbbX$ satisfying $(\mathsf{Rad}_{\mssd,\mssm})$ and $(\mathsf{SL}_{\mssm,\mssd})$. Further let $(\mbbX',\mcE')$ be satisfying the same assumptions as $(\mbbX,\mcE)$. Then, $$\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}=\mssd^{\scriptscriptstyle{\otimes }}=\mssd_\mssm\oplus\mssd_{\mssm'} \,\,\mathrm{.}$$* **Proof.* By Theorem [Theorem 61](#t:TensorRad){reference-type="ref" reference="t:TensorRad"} we conclude $(\mathsf{Rad}_{\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }}})$, hence $({\mssd^{\scriptscriptstyle{\otimes }}}\textrm{-}\mathsf{Rad}_{\mssm^{\scriptscriptstyle{\otimes }}})$ by [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}. By Theorem [Theorem 62](#t:TensorSL){reference-type="ref" reference="t:TensorSL"} we conclude $({\mssd^{\scriptscriptstyle{\otimes }}}\textrm{-}\mathsf{SL}_{\mssm^{\scriptscriptstyle{\otimes }}})$. Thus, the first equality holds. By $(\mathsf{Rad}_{\mssd,\mssm})$, resp. $(\mathsf{Rad}_{\mssd',\mssm'})$, and [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"} we conclude $({\mssd}\textrm{-}\mathsf{Rad}_{\mssm})$, resp. $({\mssd'}\textrm{-}\mathsf{Rad}_{\mssm'})$. Together with the assumption of $({\mssd}\textrm{-}\mathsf{SL}_{\mssm})$ and $({\mssd'}\textrm{-}\mathsf{SL}_{\mssm'})$, this implies $\mssd=\mssd_\mssm$ and $\mssd'=\mssd_{\mssm'}$, which shows the second equality. ◻* ### Doubling-and-Poincaré spaces {#sss:DandP} Let $(X,\mssd,\mssm)$ be a metric measure space in the sense of Definition [Definition 53](#d:MMSp){reference-type="ref" reference="d:MMSp"}. In this short section we show that, under measure doubling and a weak Poincaré inequality for the factors, the Sobolev-to-Lipschitz property of the Cheeger energy tensorizes.
We start by recalling the necessary definitions. **Definition 69** (Measure doubling and Poincaré inequality). We say that $(X,\mssd,\mssm)$ is (*measure*) *doubling* if there exists a constant $C>0$ such that $$\tag{$\mathsf{D}_{\mssd,\mssm}$}\label{eq:Doubling} \mssm B_{2r}(x) \leq C\, \mssm B_r(x)\,\,\mathrm{,}\;\,\qquad x\in X\,\,\mathrm{,}\;\,r>0\,\,\mathrm{.}$$ We say that $(X,\mssd,\mssm)$ satisfies a *weak $(1,2)$-Poincaré inequality* if, for some constants $c,\lambda>0$, $$\tag{$\mathsf{P}_{\mssd,\mssm}$}\label{eq:Poincare} -\!\!\!\!\!\!\int_{B_r(x)} \left\lvert f-f_{x,r}\right\rvert \mathop{}\!\mathrm{d}\mssm \leq c r \left(-\!\!\!\!\!\!\int_{B_{\lambda r}(x)}\left\lvert\mathrm{D}f\right\rvert_{\mssd}^2\mathop{}\!\mathrm{d}\mssm\right)^{1/2}\,\,\mathrm{,}\;\,\qquad f\in \Lip_\loc(\mssd)\,\,\mathrm{,}\;\,\quad x\in X\,\,\mathrm{,}\;\,r>0\,\,\mathrm{.}$$ (Here, $-\!\!\!\!\!\!\int_A f\mathop{}\!\mathrm{d}\mssm$ denotes the averaged integral of $f$ over a Borel set $A$, and $f_{x,r}\mathop{\mathrm{\coloneqq}}-\!\!\!\!\!\!\int_{B_r(x)} f\mathop{}\!\mathrm{d}\mssm$.) We write $(\mathsf{DP}_{\mssd,\mssm})$ to indicate that $(X,\mssd,\mssm)$ satisfies both $(\mathsf{D}_{\mssd,\mssm})$ and $(\mathsf{P}_{\mssd,\mssm})$. On a metric measure space satisfying $(\mathsf{DP}_{\mssd,\mssm})$ we have the following self-improvement of the continuous-Sobolev-to-Lipschitz property to the Sobolev-to-Lipschitz property. **Lemma 70** (Self-improvement of $(\mathsf{cSL}_{\tau,\mssm,\mssd})$ to $(\mathsf{SL}_{\mssm,\mssd})$). *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$ be an infinitesimally Hilbertian metric measure space satisfying $(\mathsf{DP}_{\mssd,\mssm})$. Then, $(\mbbX,\mathsf{Ch}_{w,\mssd,\mssm})$ satisfies $(\mathsf{SL}_{\mssm,\mssd})$ if and only if it satisfies $(\mathsf{cSL}_{\tau,\mssm,\mssd})$.* **Proof.* One implication holds by definition, cf. [\[eq:EquivalenceRadStoL\]](#eq:EquivalenceRadStoL){reference-type="eqref" reference="eq:EquivalenceRadStoL"}.
For the reverse implication it suffices to note that every $f\in\mbbL^{\mssm}_{\loc,b}$ admits a $\tau$-continuous (in fact $\mssd$-Lipschitz) representative, by e.g. [@HajKos00], also cf. [@AmbPinSpe15 Prop. 2.11]. ◻* In light of this self-improvement, for metric measure spaces satisfying $(\mathsf{DP}_{\mssd,\mssm})$ the tensorization of the Sobolev-to-Lipschitz property takes the following strong form. **Corollary 71**. *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$, resp. $\mbbX'\mathop{\mathrm{\coloneqq}}(X',\mssd',\mssm')$, be an infinitesimally Hilbertian metric measure space satisfying $(\mathsf{DP}_{\mssd,\mssm})$, resp. $(\mathsf{DP}_{\mssd',\mssm'})$. If $(\mbbX,\mathsf{Ch}_{w,\mssd,\mssm})$, resp. $(\mbbX',\mathsf{Ch}_{w,\mssd',\mssm'})$, satisfies $(\mathsf{cSL}_{\tau,\mssm,\mssd})$, resp. $(\mathsf{cSL}_{\tau',\mssm',\mssd'})$, then $\mathsf{Ch}_{w,\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }}}$ satisfies $(\mathsf{SL}_{\mssm^{\scriptscriptstyle{\otimes }},\mssd^{\scriptscriptstyle{\otimes }}})$.* **Proof.* Note that $(\mathsf{DP}_{\mssd,\mssm})$ and $(\mathsf{DP}_{\mssd',\mssm'})$ together imply $(\mathsf{DP}_{\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }}})$. The assertion holds combining Theorem [Theorem 62](#t:TensorSL){reference-type="ref" reference="t:TensorSL"} with Lemma [Lemma 70](#l:SLcSL){reference-type="ref" reference="l:SLcSL"}. ◻* ### Tensorization of the Varadhan short-time asymptotics As a further consequence of the tensorization of both the Rademacher and the Sobolev-to-Lipschitz property, we further obtain the tensorization of Varadhan's short-time asymptotics for the heat flow. Recall the notation for semigroups in §[5.3.1](#sss:Semigroups){reference-type="ref" reference="sss:Semigroups"}. Whenever it exists, we denote by $p_t(x,y)$ the *heat kernel* on $X$, i.e. the integral kernel of $T_t$. 
The kernel $p_t^{\scriptscriptstyle{\otimes }}(\mbfx,\mbfy)$ on $X^{\scriptscriptstyle{\otimes }}$ is defined analogously. **Corollary 72**. *Let $\mbbX\mathop{\mathrm{\coloneqq}}(X,\mssd,\mssm)$ be an infinitesimally Hilbertian metric measure space satisfying $(\mathsf{DP}_{\mssd,\mssm})$, and [\[ass:Polish\]](#ass:Polish){reference-type="ref" reference="ass:Polish"}, with $\mathsf{Ch}_{\mssd,\mssm}$ satisfying $(\mathsf{SL}_{\mssm,\mssd})$. Further let $\mbbX'$ be satisfying the same assumptions as $\mbbX$. Then, for every $\mbfx\mathop{\mathrm{\coloneqq}}(x,x'), \mbfy\mathop{\mathrm{\coloneqq}}(y,y')\in X^{\scriptscriptstyle{\otimes }}$, $$\lim_{t\downarrow 0} \big({-2t \log p_t^{\scriptscriptstyle{\otimes }}(\mbfx,\mbfy)}\big) =\mssd_{\mssm^{\scriptscriptstyle{\otimes }}}(\mbfx,\mbfy)^2= \mssd^{\scriptscriptstyle{\otimes }}(\mbfx,\mbfy)^2 = \mssd_\mssm(x,y)^2+\mssd_{\mssm'}(x',y')^2\,\,\mathrm{.}$$* *Proof.* It suffices to verify the assumptions in [@Ram01 Thm. 4.1] and [@Stu95b Thm. 0.1]. Firstly, $\mbbX^{\scriptscriptstyle{\otimes }}$ satisfies [\[ass:Polish\]](#ass:Polish){reference-type="ref" reference="ass:Polish"} since so do its factors. On every space satisfying [\[ass:Polish\]](#ass:Polish){reference-type="ref" reference="ass:Polish"}, our definition [\[eq:IntrinsicD\]](#eq:IntrinsicD){reference-type="eqref" reference="eq:IntrinsicD"} of intrinsic distance coincides with the one in [@Ram01 p. 282] by [@LzDSSuz20 Prop. 2.31]. Secondly, the Cheeger energy $\mathsf{Ch}_{\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }}}$ is strongly regular on $\mbbX^{\scriptscriptstyle{\otimes }}$ in light of Corollary [Corollary 68](#c:IntrinsicDTensor){reference-type="ref" reference="c:IntrinsicDTensor"}. Finally, the validity of $(\mathsf{DP}_{\mssd^{\scriptscriptstyle{\otimes }},\mssm^{\scriptscriptstyle{\otimes }}})$ follows from that of $(\mathsf{DP}_{\mssd,\mssm})$ and $(\mathsf{DP}_{\mssd',\mssm'})$. 
The inequality $\geq$ for the first equality then follows from [@Ram01 Thm. 4.1]. The inequality $\leq$ is a consequence of the upper heat-kernel estimate [@Stu95b Thm. 0.1]. The second and third equalities follow from Corollary [Corollary 68](#c:IntrinsicDTensor){reference-type="ref" reference="c:IntrinsicDTensor"}. ◻ # Direct integrals In this section we study the validity of the Rademacher and Sobolev-to-Lipschitz properties for direct integrals of Dirichlet forms as introduced in [@LzDS20], to which we refer the reader for a complete discussion; see also [@LzDSWir21] for additional results, and [@Kuw21] for a different but related decomposition. We briefly recall the main definitions. For the sake of simplicity, we confine ourselves to the case of probability spaces. We expect that the result in this section can be adapted to the case of $\sigma$-finite measures by means of Propositions [Proposition 37](#p:Locality){reference-type="ref" reference="p:Locality"} and [Proposition 38](#p:LocalityProbab){reference-type="ref" reference="p:LocalityProbab"}. Throughout this section, let $\mbbX$ be satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} with $\mssm$ a *probability* measure, and $(Z,\mcZ,\nu)$ be a countably generated probability space. A *disintegration* of $\mssm$ over $(Z,\mcZ,\nu)$ is a family $\left(\mssm_\zeta\right)_{\zeta\in Z}$ of non-zero measures on $(X,\Sigma)$ so that $\zeta\mapsto \mssm_\zeta A$ is $\nu$-measurable for every $A\in\Sigma$ and $$\label{eq:Disintegration} \mssm A= \int_Z \mssm_\zeta A \mathop{}\!\mathrm{d}\nu(\zeta) \,\,\mathrm{,}\;\,\qquad A\in\Sigma\,\,\mathrm{.}$$ A disintegration is *separated* if there exists a family of pairwise disjoint sets $\left(A_\zeta\right)_{\zeta\in Z}\subset \Sigma^\mssm$ so that $A_\zeta$ is $\mssm_\zeta$-conegligible for $\nu$-a.e. $\zeta\in Z$. 
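To fix ideas, the simplest separated disintegration is the slicing of the unit square: take $X=[0,1]^2$ with $\mssm\mathop{\mathrm{\coloneqq}}{\mathrm{Leb}}^2$, $(Z,\nu)\mathop{\mathrm{\coloneqq}}([0,1],{\mathrm{Leb}}^1)$ with the Borel $\sigma$-algebra, and $\mssm_\zeta\mathop{\mathrm{\coloneqq}}{\mathrm{Leb}}^1\otimes\delta_\zeta$; the sets $A_\zeta\mathop{\mathrm{\coloneqq}}[0,1]\times\{\zeta\}$ are pairwise disjoint and $\mssm_\zeta$-conegligible, so this disintegration is separated. (The same slice measures reappear in Example 78 below.) A minimal numerical sketch, ours and not part of the paper, checking [\[eq:Disintegration\]](#eq:Disintegration){reference-type="eqref" reference="eq:Disintegration"} on rectangles $A=[0,a]\times[0,b]$:

```python
# Numerical check of the disintegration identity  m(A) = ∫_Z m_ζ(A) dν(ζ)
# for m = Leb^2 on X = [0,1]^2, ν = Leb^1 on Z = [0,1],
# m_ζ = Leb^1 ⊗ δ_ζ (mass spread on the horizontal slice x2 = ζ),
# and A = [0,a] x [0,b] a rectangle.

def m(a: float, b: float) -> float:
    """Two-dimensional Lebesgue measure of [0, a] x [0, b]."""
    return a * b

def m_zeta(zeta: float, a: float, b: float) -> float:
    """Slice measure: m_ζ([0,a] x [0,b]) = a if ζ <= b, else 0."""
    return a if zeta <= b else 0.0

def disintegrated(a: float, b: float, n: int = 100_000) -> float:
    """Midpoint quadrature of ζ ↦ m_ζ(A) against ν = Leb^1 on [0, 1]."""
    return sum(m_zeta((k + 0.5) / n, a, b) for k in range(n)) / n

a, b = 0.3, 0.7
assert abs(disintegrated(a, b) - m(a, b)) < 1e-4  # both sides equal 0.21
```

Here the quadrature agrees with $\mssm A$ up to floating-point error, since $\zeta\mapsto\mssm_\zeta A$ is piecewise constant on $[0,1]$.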
Henceforth, let $\left(\mssm_\zeta\right)_{\zeta\in Z}$ be a separated disintegration of $\mssm$ over $(Z,\mcZ,\nu)$ as above. For a fixed Borel $\mssm$-representative $\hat f$ of $f\in L^2(\mssm)$, write $f_\zeta\mathop{\mathrm{\coloneqq}}\big [\hat f\big]_{\mssm_\zeta}$. As it turns out, the choice of the $\mssm$-representative $\hat f$ of $f$ is immaterial, and the assignment $\iota\colon f\mapsto (\zeta\mapsto f_\zeta)$ defines a unitary isomorphism providing a representation of $L^2(\mssm)$ as the direct integral of the Hilbert spaces $L^2(\mssm_\zeta)$, see [@LzDS20 Prop. 2.25(i)], viz. $$L^2(\mssm) \cong \sideset{^{}\!\!\!}{_{Z}^{\scriptstyle\oplus}}\int L^2(\mssm_\zeta) \,\,\mathrm{.}$$ Again since $\left(\mssm_\zeta\right)_{\zeta\in Z}$ is separated, this isomorphism is additionally a Riesz homomorphism, i.e. it satisfies $$\label{eq:OrderIsoL1} f\geq 0 \quad \mssm\text{-a.e.} \quad \iff \quad f_\zeta \geq 0 \quad \mssm_\zeta\text{-a.e.} \quad {\textrm{\,for ${\nu}$-a.e.\,}} \zeta\in Z \,\,\mathrm{.}$$ **Lemma 73**. *Let $f\in L^\infty(\mssm)\subset L^2(\mssm)$. Then, $$\left\lVert f\right\rVert_{L^\infty(\mssm)}=\nu\text{-}\mathop{\mathrm{esssup}}\left\lVert f_\zeta\right\rVert_{L^\infty(\mssm_\zeta)} \,\,\mathrm{.}$$* *Proof.* It follows from [\[eq:OrderIsoL1\]](#eq:OrderIsoL1){reference-type="eqref" reference="eq:OrderIsoL1"} that if $\left\lvert f\right\rvert\leq a$ $\mssm$-a.e. for some constant $a>0$, then $\left\lvert f_\zeta\right\rvert\leq a$ $\mssm_\zeta$-a.e. for $\nu$-a.e. $\zeta\in Z$. On the one hand, as a consequence, $\left\lVert f_\zeta\right\rVert_{L^\infty(\mssm_\zeta)}\leq \left\lVert f\right\rVert_{L^\infty(\mssm)}$ for $\nu$-a.e. $\zeta\in Z$, which shows the inequality '$\geq$' in the assertion. On the other hand, for every $a<\left\lVert f\right\rVert_{L^\infty(\mssm)}$ there exists a set $A_a\in\Sigma$ with $\mssm A_a>0$ and $\left\lvert f\right\rvert\geq a$ $\mssm$-a.e. on $A_a$. 
Thus, by [\[eq:Disintegration\]](#eq:Disintegration){reference-type="eqref" reference="eq:Disintegration"} and since the disintegration is separated, there exists a set $B\in\mcZ$ of positive $\nu$-measure so that $\mssm_\zeta (A_a\cap A_\zeta)>0$ for every $\zeta\in B$, and $\left\lvert f_\zeta\right\rvert\geq a$ $\mssm_\zeta$-a.e. on $A_a\cap A_\zeta$ again by [\[eq:OrderIsoL1\]](#eq:OrderIsoL1){reference-type="eqref" reference="eq:OrderIsoL1"}. Since $\mssm_\zeta( A_a\cap A_\zeta)>0$, we have $\left\lVert f_\zeta\right\rVert_{L^\infty(\mssm_\zeta)}\geq a$, and since $\nu B>0$ we conclude that $\nu$-$\mathop{\mathrm{esssup}}\left\lVert f_\zeta\right\rVert_{L^\infty(\mssm_\zeta)}\geq a$. The inequality '$\leq$' in the assertion follows by arbitrariness of $a<\left\lVert f\right\rVert_{L^\infty(\mssm)}$. ◻ The next definition of direct integrals of Dirichlet forms is taken from [@LzDS20]. We present here a simplified definition for probability spaces, and refer the reader to [@LzDS20 §2] for a detailed discussion. **Definition 74** (Direct integrals of Dirichlet forms, [@LzDS20 Dfn.s 2.11, 2.26, and 2.31]). For each $\zeta\in Z$ let $(\mcE_\zeta,\mcF_\zeta)$ be a quasi-regular strongly local Dirichlet form on $L^2(\mssm_\zeta)$, and assume that $\zeta\mapsto \mcF_\zeta$ is a $\nu$-measurable field of Hilbert subspaces of $\zeta\mapsto L^2(\mssm_\zeta)$. 
The *direct integral of Dirichlet forms* of the field $\zeta\mapsto (\mcE_\zeta,\mcF_\zeta)$ is the Dirichlet form $$\begin{aligned} \label{eq:d:DirInt:0} \begin{aligned} \mcF\mathop{\mathrm{\coloneqq}}&\ \left\{f\in L^2(\mssm)\,\,\mathrm{,}\;\,\int_Z \left(\mcE_\zeta(f_\zeta) + \left\lVert f_\zeta\right\rVert_{L^2(\mssm_\zeta)}^2\right) \mathop{}\!\mathrm{d}\nu(\zeta) <\infty\right\}\,\,\mathrm{,}\;\, \\ \mcE(f,g)\mathop{\mathrm{\coloneqq}}&\ \int_Z \mcE_\zeta(f_\zeta,g_\zeta)\mathop{}\!\mathrm{d}\nu(\zeta) \,\,\mathrm{,}\;\,\qquad f,g\in\mcF\,\,\mathrm{.} \end{aligned}\end{aligned}$$ In the rest of this section, we will work under the following standing assumption. *Assumption 75*. Let $(\mbbX,\mcE)$ be a *conservative* quasi-regular strongly local Dirichlet space, with $\mbbX$ satisfying [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"} and $\mssm$ a *probability* measure. We assume that there exists a countably generated probability space $(Z,\mcZ,\nu)$ with the following property: for every $\zeta\in Z$ there exists a quasi-regular strongly local Dirichlet space $(\mbbX_\zeta,\mcE_\zeta)$ with $\mbbX_\zeta=\mbbX$ for $\nu$-a.e. $\zeta\in Z$, and $(\mcE,\mcF)$ is the direct-integral Dirichlet form of $\zeta\mapsto (\mcE_\zeta,\mcF_\zeta)$. Everywhere in the following, without loss of generality up to removing a $\nu$-negligible set from $Z$, we may and will assume that $\mbbX_\zeta=\mbbX$ for *every* $\zeta\in Z$. Whereas we require $\mbbX_\zeta=\mbbX$, we stress that we do not require $(\mbbX,\mcE_\zeta)$ to satisfy [\[ass:Luzin\]](#ass:Luzin){reference-type="ref" reference="ass:Luzin"}, that is, we do *not* require that $\supp[\mssm_\zeta]=X$. **Lemma 76**. *Let $(\mbbX,\mcE)$ be satisfying Assumption [Assumption 75](#ass:Superposition){reference-type="ref" reference="ass:Superposition"}. 
Then, $$\begin{aligned} \label{eq:CdCDirInt:3} f \in \mbbL^{\mssm}_{\loc,b} \iff f_\zeta\in \mbbL^{\mssm_\zeta}_{\loc,b} \quad {\textrm{\,for ${\nu}$-a.e.\,}} \zeta\in Z\,\,\mathrm{.}\end{aligned}$$* **Proof.* Since $(\mcE,\mcF)$ is conservative by assumption, and since $\mssm$ is a finite measure, we have $\mathop{\mathrm{\mathds 1}}\in \mcF$, hence $\mathop{\mathrm{\mathds 1}}\in\mcF_\zeta$ for every (without loss of generality, as opposed to: $\nu$-a.e.) $\zeta\in Z$. By Lemma [Lemma 10](#l:ModerateDomain){reference-type="ref" reference="l:ModerateDomain"} we have that $$\begin{aligned} \label{eq:CdCDirInt:1} \mbbL^{\mssm}_{\loc,b}=\mbbL^{\mssm}_b \qquad \text{and} \qquad \mbbL^{\mssm_\zeta}_{\loc,b}=\mbbL^{\mssm_\zeta}_b \,\,\mathrm{,}\;\,\quad \zeta\in Z\,\,\mathrm{.}\end{aligned}$$ By [@BouHir91 §V.3, Exercise 3.2(2), p. 216] the form $(\mcE,\mcF)$ admits a carré du champ operator $$\Gamma(f)(x)=\int \Gamma_\zeta(f_\zeta)(x) \mathop{}\!\mathrm{d}\nu(\zeta) \,\,\mathrm{,}\;\,\qquad f\in \mcF_b\,\,\mathrm{.}$$ By Lemma [Lemma 73](#l:OrderPreserving){reference-type="ref" reference="l:OrderPreserving"} we conclude $$\label{eq:CdCDirInt:2} \left\lVert\Gamma(f)\right\rVert_{L^\infty(\mssm)}=\nu\text{-}\mathop{\mathrm{esssup}}\left\lVert\Gamma_\zeta(f_\zeta)\right\rVert_{L^\infty(\mssm_\zeta)}\,\,\mathrm{,}\;\,\qquad f\in \mcF_b\,\,\mathrm{.}$$ Combining [\[eq:d:DirInt:0\]](#eq:d:DirInt:0){reference-type="eqref" reference="eq:d:DirInt:0"} and [\[eq:CdCDirInt:2\]](#eq:CdCDirInt:2){reference-type="eqref" reference="eq:CdCDirInt:2"} we therefore have that $$\begin{aligned} \mbbL^{\mssm}_b= \left\{f\in L^2(\mssm)\,\,\mathrm{,}\;\,f_\zeta \in \mbbL^{\mssm_\zeta}_b {\textrm{\,for ${\nu}$-a.e.\,}} \zeta\in Z\right\} \,\,\mathrm{.}\end{aligned}$$ By [\[eq:CdCDirInt:1\]](#eq:CdCDirInt:1){reference-type="eqref" reference="eq:CdCDirInt:1"} we finally conclude that $$\begin{aligned} f \in \mbbL^{\mssm}_{\loc,b} \iff f_\zeta\in 
\mbbL^{\mssm_\zeta}_{\loc,b} {\textrm{\,for ${\nu}$-a.e.\,}} \zeta\in Z\,\,\mathrm{.}&\qedhere\end{aligned}$$ ◻* **Theorem 77**. *Let $(\mbbX,\mcE)$ be satisfying Assumption [Assumption 75](#ass:Superposition){reference-type="ref" reference="ass:Superposition"}, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance. Then, the following are equivalent:* 1. *$(\mbbX,\mcE)$ satisfies $(\mathsf{Rad}_{\mssd,\mssm})$;* 2. *$(\mbbX,\mcE_\zeta)$ satisfies $(\mathsf{Rad}_{\mssd,\mssm_\zeta})$ for $\nu$-a.e. $\zeta\in Z$.* **Proof.* Fix $\hat f\in \mathrm{Lip}^1_b(\mssd,\Sigma)$ and set $f\mathop{\mathrm{\coloneqq}}\big[\hat f\big]_{\mssm}$. Assume first $(\mathsf{Rad}_{\mssd,\mssm_\zeta})$ for $\nu$-a.e. $\zeta\in Z$. Then, $\Gamma_\zeta(f_\zeta)\leq 1$ $\mssm_\zeta$-a.e. for $\nu$-a.e. $\zeta\in Z$ by the assumption, and combining this fact with [\[eq:CdCDirInt:3\]](#eq:CdCDirInt:3){reference-type="eqref" reference="eq:CdCDirInt:3"} proves $(\mathsf{Rad}_{\mssd,\mssm})$.* *Vice versa, assume $(\mathsf{Rad}_{\mssd,\mssm})$. Then $\Gamma(f)\leq 1$ $\mssm$-a.e. by assumption, and combining this fact with [\[eq:CdCDirInt:3\]](#eq:CdCDirInt:3){reference-type="eqref" reference="eq:CdCDirInt:3"} proves $(\mathsf{Rad}_{\mssd,\mssm_\zeta})$ for $\nu$-a.e. $\zeta\in Z$. ◻* No statement analogous to Theorem [Theorem 77](#t:RadDInt){reference-type="ref" reference="t:RadDInt"} holds for the Sobolev-to-Lipschitz property. This is discussed in some detail in [@LzDSSuz20 Ex. 4.7] for the simplest possible direct-integral form, namely the direct sum of two forms each defined on a quasi-connected component of the resulting direct-sum space. In [@LzDSSuz20 Ex. 4.7] however, the measures $\mssm_\zeta$, $\zeta\in\left\{\pm 1\right\}$, are absolutely continuous w.r.t. $\mssm$, and one might expect the situation to improve in the case when $\mssm$ is properly 'disintegrated' into $\mssm$-negligible fibers (as opposed to: split into $\mssm$-non-negligible components). 
The next example shows that this is not the case, and again the Sobolev-to-Lipschitz property is not preserved by direct integration. *Example 78* (cf., e.g., [@LzDS20 Ex. 2.32]). Let $I\mathop{\mathrm{\coloneqq}}[0,1]$, and $X\mathop{\mathrm{\coloneqq}}I^{\scriptscriptstyle{\times 2}}$ with standard topology, $2$-dimensional Euclidean distance $\mssd_{\scriptscriptstyle{\times 2}}$, Borel $\sigma$-algebra, and endowed with the $2$-dimensional Lebesgue measure $\mssm\mathop{\mathrm{\coloneqq}}{\mathrm{Leb}}^2$. Consider the form $$\begin{aligned} \mcF\mathop{\mathrm{\coloneqq}}& \left\{f\in L^2(X): f({\,\cdot\,},x_2)\in W^{1,2}(I) \quad {\textrm{\,for ${{\mathrm{Leb}}^1}$-a.e.\,}} x_2\in I\right\} \\ \mcE(f)\mathop{\mathrm{\coloneqq}}& \int_{X} \left\lvert\partial_1 f(x_1,x_2)\right\rvert^2 {\mathrm{dLeb}}^2(x_1,x_2) \,\,\mathrm{.}\end{aligned}$$ The form $(\mcE,\mcF)$ is the direct integral over $(Z,\nu)\mathop{\mathrm{\coloneqq}}(I,{\mathrm{Leb}}^1)$ of the forms $$\mcE_{\zeta}(f)\mathop{\mathrm{\coloneqq}}\int_I \left\lvert\mathop{}\!\mathrm{d}f({\,\cdot\,},\zeta)\right\rvert^2{\mathrm{dLeb}}^1\,\,\mathrm{,}\;\,\qquad f\in \mcF_\zeta\mathop{\mathrm{\coloneqq}}W^{1,2}(I) \,\,\mathrm{,}\;\,$$ with $\mssm_\zeta\mathop{\mathrm{\coloneqq}}{\mathrm{Leb}}^1\otimes \delta_{\zeta}$. In particular, $\mcF= L^2(Z; W^{1,2}(I))$. Now, since $\mathop{\mathrm{\mathds 1}}\in\mcF$ and $\mathop{\mathrm{\mathds 1}}_\zeta\in W^{1,2}(I)\eqqcolon\mcF_\zeta$ for every $\zeta\in Z$, by [\[eq:CdCDirInt:3\]](#eq:CdCDirInt:3){reference-type="eqref" reference="eq:CdCDirInt:3"} we have that $\mbbL^{\mssm,\tau}_{\loc,b}=\mbbL^{\mssm,\tau}_b$, and analogously for $\mssm_\zeta$ in place of $\mssm$ for every $\zeta\in Z$. Thus, it suffices to show that there exists $f\in \mbbL^{\mssm,\tau}_b$ not admitting any $\mssd_{{\scriptscriptstyle{\times 2}}}$-Lipschitz $\mssm$-representative. 
In fact, it is readily seen that $\zeta\mapsto \big({\mcE_\zeta, \mcF_\zeta}\big)$ is the ergodic decomposition of $(\mcE,\mcF)$ in the sense of [@LzDS20], hence $$\mbbL^{\mssm,\tau}_b= \left\{f\in \mcC_b(X) : f_\zeta\in \mbbL^{\mssm_\zeta,\tau}_b {\textrm{\,for ${\nu}$-a.e.\,}} \zeta\in Z\right\} \,\,\mathrm{.}$$ In particular, every function of the form $f=\mathop{\mathrm{\mathds 1}}\otimes g$ with $g\in \mcC_b(I)$ satisfies $f\in \mbbL^{\mssm,\tau}_b$ and $\mcE(f)=0$. Taking $g\notin \mathrm{Lip}^1_b(I,\mssd)$ shows that $f$ is in general not $\mssd_{{\scriptscriptstyle{\times 2}}}$-Lipschitz. A positive result in the spirit of Theorem [Theorem 77](#t:RadDInt){reference-type="ref" reference="t:RadDInt"} holds only for $(\mathsf{cSL}_{\tau,\mssm,\mssd})$ under further assumptions, granting a consistent choice of representatives. **Theorem 79**. *Let $(\mbbX,\mcE)$ be satisfying Assumption [Assumption 75](#ass:Superposition){reference-type="ref" reference="ass:Superposition"}, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance. Further assume that $(\mbbX,\mcE_\zeta)$ satisfies $(\mathsf{cSL}_{\tau,\mssm_\zeta,\mssd})$ and $\supp[\mssm_\zeta]=X$ for every $\zeta\in B$ for some $B\in\mcZ$ with $\nu B>0$. Then, $(\mbbX,\mcE)$ satisfies $(\mathsf{cSL}_{\tau,\mssm,\mssd})$.* **Proof.* Fix $f\in \mbbL^{\mssm,\tau}_b$. Then $f_\zeta\in \mbbL^{\mssm_\zeta,\tau}_b$ for $\nu$-a.e. $\zeta\in Z$ by [\[eq:CdCDirInt:3\]](#eq:CdCDirInt:3){reference-type="eqref" reference="eq:CdCDirInt:3"}. In particular, there exists $\zeta\in B$ with $f_\zeta\in \mbbL^{\mssm_\zeta,\tau}_b$. For such fixed $\zeta$, by $(\mathsf{cSL}_{\tau,\mssm_{\zeta},\mssd})$, there exists a $\tau$-continuous $\mssm_\zeta$-representative of $f_\zeta$, additionally an element of $\mathrm{Lip}^1(\mssd,\tau)$. 
Since $\mssm_\zeta$ has full $\tau$-support, this representative is unique, again denoted by $f_\zeta$, and we conclude that $f_\zeta\equiv f$ everywhere on $X$. Thus, $f\in \mathrm{Lip}^1(\mssd,\tau)$ and the proof is concluded. ◻* With a proof similar to that of Theorem [Theorem 79](#t:cSLDInt){reference-type="ref" reference="t:cSLDInt"}, one can show the following. **Proposition 80**. *Let $(\mbbX,\mcE)$ be satisfying Assumption [Assumption 75](#ass:Superposition){reference-type="ref" reference="ass:Superposition"}, and $\mssd\colon X^{\scriptscriptstyle{\times 2}}\to [0,\infty]$ be an extended pseudo-distance. Further assume that $\mssd\text{-}\supp[\mssm]$ exists with $\mssd\text{-}\supp[\mssm]=X$, and that $(\mbbX,\mcE_\zeta)$ satisfies $({\mssd}\textrm{-}\mathsf{cSL}_{\mssm_\zeta,\mssd})$ and $\mssd\text{-}\supp[\mssm_\zeta]$ exists with $\mssd\text{-}\supp[\mssm_\zeta]=X$ for every $\zeta\in B$ for some $B\in\mcZ$ with $\nu B>0$. Then, $(\mbbX,\mcE)$ satisfies $({\mssd}\textrm{-}\mathsf{cSL}_{\mssm,\mssd})$.* *Remark 81*. The assumption in Theorem [Theorem 79](#t:cSLDInt){reference-type="ref" reference="t:cSLDInt"} that $\mssm_\zeta$ have full $\tau$-support for all $\zeta$ in a set of positive $\nu$-measure may at first sight seem to contradict our standing assumption that $\left(\mssm_\zeta\right)_{\zeta\in Z}$ be separated. This is however not the case, since $\mssm_\zeta$ may in general be concentrated on sets much smaller than its support. For instance, let $Y$ be a locally compact Polish space, and denote by $\Upsilon$ the configuration space over $Y$, i.e. the space of all locally finite point measures on $Y$, endowed with the vague topology and the corresponding Borel $\sigma$-algebra. Further let $\sigma$ be a Radon measure on $Y$ and, for $s\in{\mathbb R}_+$, let $\pi_{s\sigma}$ be the Poisson measure on $\Upsilon$ with intensity measure $s\sigma$. Finally, let $\lambda$ be a diffuse Borel probability measure on ${\mathbb R}_+$. 
The mixed Poisson measure on $\Upsilon$ with Lévy measure $\lambda$ and intensity measure $\sigma$ is the probability measure $\mu_{\lambda,\sigma}\mathop{\mathrm{\coloneqq}}\int_{{\mathbb R}_+} \pi_{s\sigma}\mathop{}\!\mathrm{d}\lambda(s)$. Then, $\left(\pi_{s\sigma}\right)_{s\in{\mathbb R}_+}$ is a *separated* disintegration of $\mu_{\lambda,\sigma}$ over $({\mathbb R}_+,\lambda)$, yet $\mu_{\lambda,\sigma}$ and each $\pi_{s\sigma}$ have *full* topological support on $\Upsilon$; for details on all these constructions see e.g. [@LzDSSuz21]. We refer the reader to [@LzDS20 Ex. 3.13] for further details and for the construction of a relevant direct integral of Dirichlet forms in this case. [^1]: [^2]: This research was funded in whole, or in part, by the Austrian Science Fund (FWF) ESPRIT 208. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. [^3]: K.S. gratefully acknowledges funding by the Alexander von Humboldt Stiftung, Humboldt-Forschungsstipendium. [^4]: *Mistakenly marked as 'Proof of Corollary 6.1', see [@Kuw98 p. 702].*
--- abstract: | The work of this paper is devoted to obtaining strong laws for intermediately trimmed sums of random variables with infinite means. Particularly, we provide conditions under which the intermediately trimmed sums of independent but not identically distributed random variables converge almost surely. Moreover, by dropping the assumption of independence we provide a corresponding convergence result for a special class of Oppenheim expansions. We highlight that the results of this paper generalize the results provided in the recent work of [@KS] while the convergence of intermediately trimmed sums of generalized Oppenheim expansions is studied for the first time. Keywords: intermediately trimmed sums, generalized Oppenheim expansions, almost sure convergence\ MSC: 60F15, 60G50, 11K55 author: - "Rita Giuliano[^1]" - "Milto Hadjikyriakou[^2]" title: " Intermediately Trimmed Sums of Oppenheim Expansions: a Strong Law" --- # Introduction Consider the following framework: Let $(\Omega, \mathcal{A}, P)$ be the probability space where $\Omega =[0,1]$, $\mathcal{A}$ is the $\sigma$-algebra of the Borel subsets of $[0,1]$ and $P$ is the Lebesgue measure on $[0,1]$. Moreover, let $(B_n)_{n\geq 1}$ be a sequence of integer-valued random variables defined on $(\Omega, \mathcal{A}, P)$, $F$ be a distribution function with $F(0)=0$ and $F(1)=1$, $\varphi_n:\mathbb{N}^*\to \mathbb{R}^+$ be a sequence of functions and assume that $(y_n)_{n\geq 1}$ is a sequence of nonnegative numbers with $y_n=y_n(h_1, \dots, h_n)$ (i.e. 
possibly depending on the $n$ integers $h_1, \dots, h_n$) such that, for $h_1 \geq 1$ and $h_j\geq \varphi_{j-1} (h_{j-1})$, $j=2, \dots, n$, we have $$\label{densitacondizionale} P\big(B_{n+1}=h_{n+1}|B_{n}=h_{n}, \dots, B_{1}=h_{1}\big)= F(\beta_n)-F(\alpha_n),$$ where we set $$\delta_j(h,k, y) = \frac{ \varphi_j (h )(1+y )}{k+\varphi_j (h ) y }, \qquad j= 1, \dots, n$$ $$\alpha_n=\delta_n(h_n, h_{n+1}+1, y_n) ,\quad \beta_n=\delta_n(h_n, h_{n+1}, y_n) .$$ Let $Y_n= y_n(B_1, \dots, B_n)$ and $$R_{n}= \frac{ B_{n+1}+\varphi_n(B_n) Y_n}{\varphi_n(B_n)(1+Y_n) }= \frac{1}{\delta_n(B_n, B_{n+1}, Y_n)}.$$ In what follows the sequence $(R_n)_{n\geq 1}$ will be called an *Oppenheim expansion sequence with related distribution function $F$*. We refer to the paper [@Gi] and to the references therein for examples of Oppenheim expansion sequences. It is known (see the seminal paper [@F] and also [@E] and [@K]) that when a sequence of random variables $(X_n)_{n\in \mathbb{N}}$ has infinite means, there is no normalizing sequence $(d_n)_{n\in \mathbb{N}}$ such that $\lim \dfrac{1}{d_n}\sum_{i=1}^{n}X_i = 1$ almost everywhere. On the other hand, if there is a sequence such that $\lim \dfrac{1}{d_n}\sum_{i=1}^{n}X_i = 1$ in probability, then it might be possible to establish a strong law of large numbers after neglecting finitely many of the largest summands from the partial sum of $n$ terms (see for instance [@M1], [@M2], [@HMM], [@M]); in such a case one says that the sum has been *trimmed*. The Oppenheim expansion sequences defined above are typically random variables with infinite means; nevertheless, in [@Gi] it is proven that, when $F$ has a density $f$, the sequence of partial sums $S_n = \sum_{k= 1}^n R_k$, normalized by $n \log n$, converges in probability to the constant 1, i.e. 
$$\lim_{n \to \infty}\frac{S_n}{n \log n}=1\qquad \hbox{in probability}.$$ Moreover, in [@GH2021], the same weak law has been obtained in the special case where the involved distribution functions are not continuous (see Theorems 4 and 5 therein). Motivated by this result, the question arises whether a strong law can be obtained for the trimmed sums of Oppenheim expansions, possibly with the same normalizing sequence $n \log n$. By using the notation introduced in [@KS], let $(X_n)_{n\geq 1}$ be a sequence of random variables and, for each $n$, let $\sigma$ be a permutation from the symmetric group $\mathcal{S}_n$ acting on $\{1,2,\dots, n\}$ such that $X_{\sigma(1)} \geq X_{\sigma(2)}\geq \cdots \geq X_{\sigma(n)}$. For a given sequence $(r_n)_{n\geq 1 }$ of integers with $r_n< n$ for every $n$ we set $$S_n^{r_n}:=\sum_{k = r_n +1}^n X_{\sigma(k)}.$$ As mentioned above, according to the standard terminology $(S_n^{r_n})$ is a *trimmed sum process*; the trimming is called *light* if $r_n = r\in \mathbb{N}^*$ for every $n$; *moderate* or *intermediate* if $r_n\to \infty$ and $\frac{r_n}{n}\to 0$. In [@GH] a strong law of large numbers has been proved for lightly trimmed sums of Oppenheim expansion sequences. In the present paper we are interested in establishing a strong law of large numbers for intermediately trimmed sums of a special class of generalized Oppenheim expansions, and the main result of this work, Theorem [Theorem 10](#teorema2){reference-type="ref" reference="teorema2"}, is presented in Section 4. The paper is structured as follows: Section 2 contains some preliminary results; in Section 3 we prove a strong law of large numbers for independent variables (Theorem [Theorem 6](#teorema1){reference-type="ref" reference="teorema1"}), to be used for the proof of Theorem [Theorem 10](#teorema2){reference-type="ref" reference="teorema2"}. 
We point out that Theorem [Theorem 6](#teorema1){reference-type="ref" reference="teorema1"} is of independent interest and it can be considered as a generalization of the corresponding result presented in [@KS] since the convergence here is established for random variables that are not identically distributed. The proof of the main result, Theorem [Theorem 10](#teorema2){reference-type="ref" reference="teorema2"}, is discussed in Section 4. It is important to highlight that Theorem [Theorem 10](#teorema2){reference-type="ref" reference="teorema2"} is proven without any independence assumption since Oppenheim expansions are not, in general, sequences of independent random variables. Thus, Theorem [Theorem 10](#teorema2){reference-type="ref" reference="teorema2"} generalizes results obtained in the classical framework of independence (as in [@KS]). Throughout the paper the notation $\lceil x\rceil$ (resp. $\lfloor x \rfloor$) stands for the least integer greater than or equal to (resp. the greatest integer less than or equal to) $x$, while $a_n \sim b_n$ as $n \to \infty$, means that $\lim_{n \to \infty}\frac{a_n}{b_n}=1$. The symbols $C$, $c$ (possibly equipped with indices) denote positive absolute constants that can take different values in different appearances, while the notation $1_{A}(x)$ represents the indicator function for the set $A$. # Preliminaries In this section we present some results that are essential for establishing the strong laws of the following sections. We start by providing, without a proof, a well-known inequality, namely the generalized Bernstein inequality, which can be found in [@H]. **Lemma 1**. **Let $Y_1, \dots, Y_n$ be independent random variables for which there exists a positive constant $M$ such that $|Y_k - EY_k|\leq M$ for every $k =1, \dots, n$. Let $Z_n = \sum_{k=1}^n Y_k$. 
Then, for every $t> 0$ $$P \Big(\max_{1\leq i \leq n}\big|Z_i - EZ_i\big|\geq t\Big)\leq 2 \exp \Big(- \frac{t^2}{2 Var (Z_n) + \frac{2}{3}Mt}\Big).$$** Bernstein's inequality is the main tool for the proof of a convergence result for the truncated random variables defined as follows. Let $(t_n)_{n \geq 1}$ be a sequence of positive numbers and let $$Z_n = \sum_{k=1}^n X_k 1_{\{X_k \leq t_n\}}$$ be the $n$-th partial sum of the corresponding truncated sequence. In Theorem [Theorem 2](#theorem6){reference-type="ref" reference="theorem6"} we give conditions on $(t_n)_{n \geq 1}$ that allow us to establish a strong law for the sequence $(Z_n)_{n \geq 1}$. **Theorem 2**. **Let $(X_n)_{n \geq 1}$ be a sequence of nonnegative independent random variables and $(t_n)_{n \geq 1}$ a sequence of positive numbers. Assume that there exists $C_0 > 0$ such that $$\label{assumption1} \sum_n \exp\Big(- C\frac{d_n^2}{n t_n^2}\Big)< + \infty$$ for all $0 <C<C_0$, where $d_n = EZ_n$. Then $$\lim_{n \to \infty}\frac{Z_n}{d_n }=1.$$** **Proof.** For simplicity, denote $W_{k,n} = X_k 1_{\{X_k \leq t_n\}}$ for $k =1, \dots, n$. Then, for any integer $k$ we have $$\big|W_{k,n}- EW_{k,n}\big|\leq \big|W_{k,n}\big|+ \big|EW_{k,n}\big|\leq 2 t_n,$$ $$d_n = EZ_n= \sum_{k=1}^n EW_{k,n}\leq \sum_{k=1}^n t_n = n t_n,$$ and $$Var (W_{k,n}) = E \big[(W_{k,n}- EW_{k,n})^2\big]\leq 2E\big[W^2_{k,n}+E^2W_{k,n}\big]\leq 4 t_n^2.$$ Hence, by independence $$Var (Z_n) = \sum_{k=1}^n Var (W_{k,n}) \leq 4 nt_n^2 .$$ Apply the Bernstein inequality to $W_{k,n}-EW_{k,n}$ ($k = 1, \dots, n$) to obtain, for every $\varepsilon >0$, $$P\big(|Z_n -EZ_n|\geq \varepsilon EZ_n \big) \leq 2 \exp \Big(- \frac{\varepsilon^2 d_n ^2}{8 nt_n^2 + \frac{4}{3}\varepsilon t_nd_n}\Big)\leq 2\exp \Big(- \frac{3 \varepsilon^2}{24 + 4 \varepsilon}\cdot\frac{ d_n ^2}{ nt_n^2 }\Big).$$ The statement follows from the Borel--Cantelli lemma, due to assumption [\[assumption1\]](#assumption1){reference-type="eqref" reference="assumption1"}.   
------------------------------------------------------------------------ We introduce the following notation: for $n \in \mathbb{N}^*$ and $k = 1, \dots, n$ denote $$a_{k,n}= P(X_k > t_n) , \qquad b_{k,n}= P(X_k \geq t_n), \qquad A_n = \sum_{k=1}^n a_{k,n} , \qquad B_n =\sum_{k=1}^n b_{k,n} .$$ The lemma that follows can easily be obtained by employing the generalized Bernstein inequality again. **Lemma 3**. **Let $(X_n)_{n \geq 1}$ be a sequence of nonnegative independent random variables and $(t_n)_{n \geq 1}$ a sequence of positive numbers. Assume that there exists $c_0 > 0$ such that $$\label{assumption2} \sum_n \exp\Big(- c A_n\Big)< + \infty$$ for all $0 <c<c_0$. Then, for every sufficiently small $\varepsilon >0$, $$\label{statement1} P\left(\left|\sum_{k=1}^n 1_{\{X_k > t_n\}}- A_n \right|\geq \varepsilon A_n \hbox{ i.o.}\right)=0$$ and $$\label{statement2} P\left(\left|\sum_{k=1}^n 1_{\{X_k \geq t_n\}}-B_n \right|\geq \varepsilon B_n \hbox{ i.o.}\right)=0.$$** **Proof.** Observe that $\left|1_{\{X_k > t_n\}}-a_{k,n} \right|= \left|1_{\{X_k > t_n\}}- P(X_k > t_n) \right|\leq 2$ and $\mathop{\mathrm{Var}}\left(1_{\{X_k > t_n\}}-a_{k,n} \right)= a_{k,n} -a^2_{k,n}$. By independence $$\mathop{\mathrm{Var}}\left( \sum_{k=1}^n \left(1_{\{X_k > t_n\}}-a_{k,n}\right)\right) =\sum_{k=1}^n (a_{k,n} -a^2_{k,n}).$$ Applying the Bernstein inequality we get $$\begin{aligned} & P\left(\left|\sum_{k=1}^n 1_{\{X_k > t_n\}}- A_n \right|\geq \varepsilon A_n \right)\leq 2 \exp \left(- \frac{\varepsilon^2 A_n ^2}{2 \sum_{k=1}^n (a_{k,n} -a^2_{k,n}) + \frac{4}{3}\varepsilon \sum_{k=1}^n a_{k,n} }\right)\\&\leq 2\exp\left(- \frac{3 \varepsilon^2}{6 + 4 \varepsilon} A_n\right), \end{aligned}$$ and, by assumption [\[assumption2\]](#assumption2){reference-type="eqref" reference="assumption2"}, $\displaystyle\sum_{n} \exp\left(- \frac{3 \varepsilon^2}{6 + 4 \varepsilon} A_n\right)$ is finite if $\varepsilon$ is sufficiently small. 
Thus, statement [\[statement1\]](#statement1){reference-type="eqref" reference="statement1"} follows from the Borel--Cantelli lemma. The argument is identical for statement [\[statement2\]](#statement2){reference-type="eqref" reference="statement2"}, since $A_n\leq B_n ,$ which implies $$\sum_n\exp\Big(- c B_n \Big) \leq \sum_n \exp\Big(- c A_n\Big).$$   ------------------------------------------------------------------------ # A strong law for independent random variables Recall the notation introduced for the needs of Lemma [Lemma 3](#lemma1){reference-type="ref" reference="lemma1"}, i.e. $$A_n = \sum_{k=1}^n P(X_k > t_n), \qquad B_n =\sum_{k=1}^n P(X_k \geq t_n) .$$ Observe that [\[statement1\]](#statement1){reference-type="eqref" reference="statement1"} and [\[statement2\]](#statement2){reference-type="eqref" reference="statement2"} imply respectively that, for every $\varepsilon >0$, a.e. $$\label{statement3} (1-\varepsilon)A_n\leq\# \{i \leq n: X_i> t_n\} \leq (1+\varepsilon) A_n\quad \hbox{eventually}$$ and $$\label{statement4} (1-\varepsilon) B_n\leq \# \{i \leq n: X_i\geq t_n\} \leq (1+\varepsilon) B_n \quad \hbox{eventually}.$$ Moreover, [\[statement3\]](#statement3){reference-type="eqref" reference="statement3"} and [\[statement4\]](#statement4){reference-type="eqref" reference="statement4"} yield that, a.e.$$\begin{aligned} &\label{statement5}\nonumber \lfloor (1-\varepsilon) B_n\rfloor - \lceil (1+\varepsilon) A_n\rceil\leq (1-\varepsilon)B_n- (1+\varepsilon) A_n \leq \# \{i \leq n: X_i\geq t_n\}-\# \{i \leq n: X_i> t_n\} \\&= \# \{i \leq n: X_i= t_n\}\quad \hbox{eventually} .\end{aligned}$$ Before stating and proving the main theorem of this section we prove two lemmas that are essential tools for obtaining the convergence result we are interested in. **Lemma 4**. 
**For every pair of integers $r\geq 1$ and $n \geq 1$ we have $$Z_n-S_n^{r } = \sum_{k=1}^{r} X_{\sigma(k)}-\sum_{k=1}^n X_k 1_{\{X_k > t_n\}}.$$** **Proof.** Without loss of generality we may assume that the sequence $(X_n)_{n\in\mathbb{N}}$ is already ordered in decreasing order, i.e. that $\sigma(k)=k$: if not, just put $\tilde{X}_k = X_{\sigma(k)}$ and notice that $$\sum_{k=1}^n X_k 1_{\{X_k > t_n\}}= \sum_{k=1}^n \tilde X_k 1_{\{\tilde X_k > t_n\}} \mbox{ and } Z_n = \sum_{k=1}^n \tilde X_k 1_{\{\tilde X_k \leq t_n\}}.$$ Thus, we write $$\begin{aligned} & S_n^{r }= \sum_{k= r +1}^n X_k= \sum_{k= 1}^n X_k- \sum_{k= 1}^{r } X_k = \sum_{k= 1}^nX_k 1_{\{ X_k \leq t_n\}}+\sum_{k= 1}^nX_k 1_{\{ X_k > t_n\}}- \sum_{k=1}^{r }X_k\\&= Z_n+\sum_{k= 1}^nX_k 1_{\{ X_k > t_n\}}- \sum_{k=1}^{r}X_k. \end{aligned}$$   ------------------------------------------------------------------------ **Lemma 5**. **[\[lemma5\]]{#lemma5 label="lemma5"} For every pair of integers $r\geq 1$ and $n \geq 1$ such that $r \geq \# \{k \leq n:X_k > t_n \}$ and for every $\varepsilon > 0$ we have $$Z_n- S_n^{r } \leq \big(r-(1-\varepsilon) A_n \big)t_n.$$** **Proof.** Assume again that $\sigma(k) =k$; by Lemma [Lemma 4](#lemma3){reference-type="ref" reference="lemma3"}, it is sufficient to prove that $$\begin{aligned} & \sum_{k=1}^{r } X_{k}-\sum_{k=1}^n X_k 1_{\{X_k > t_n\}}\leq \big(r-(1-\varepsilon) A_n \big)t_n. \end{aligned}$$ We have $$\begin{aligned} & \sum_{k=1}^{r } X_{k}-\sum_{k=1}^n X_k 1_{\{X_k > t_n\}}= \sum_{k=1}^{r } X_{k}1_{\{X_k \leq t_n\}} + \sum_{k=1}^{r } X_{k}1_{\{X_k > t_n\}}- \sum_{k=1}^n X_k 1_{\{X_k > t_n\}}=\\ &=\sum_{k=1}^{r } X_{k}1_{\{X_k \leq t_n\}}-\sum_{k=r+1}^n X_k 1_{\{X_k > t_n\}} =\sum_{k=1}^{r } X_{k}1_{\{X_k \leq t_n\}}, \end{aligned}$$ since, if $k > r ,$ $X_k$ cannot be $>t_n$ (since all the summands $>t_n$ have been trimmed before $r$, due to the assumption in the present lemma). 
Continuing and denoting $\# \{k \leq n:X_k > t_n \}= \ell_n$, we find $$\sum_{k=1}^{r } X_{k}1_{\{X_k \leq t_n\}}= \sum_{k=1}^{\ell_n } X_{k}1_{\{X_k \leq t_n\}}+ \sum_{k=\ell_n+1}^{r } X_{k}1_{\{X_k \leq t_n\}}= \sum_{k=\ell_n+1}^{r } X_{k}1_{\{X_k \leq t_n\}},$$ since, if $k \leq \ell_n$, then $X_k$ is still $>t_n$ (by the very definition of $\ell_n$). Then, $$\sum_{k=\ell_n+1}^{r } X_{k}1_{\{X_k \leq t_n\}} \leq t_n\sum_{k=\ell_n+1}^{r} 1=(r -\ell_n)t_n\leq \big(r-(1-\varepsilon) A_n \big) t_n,$$ by the left inequality in [\[statement3\]](#statement3){reference-type="eqref" reference="statement3"}.   ------------------------------------------------------------------------ Next, we state and prove the main result of this section, which is a strong law for trimmed sums of independent random variables. As already mentioned in the introduction, the convergence result obtained below is proven for random variables that are not necessarily identically distributed, so it can be considered a generalization of the corresponding result obtained in [@KS]. **Theorem 6**. **Let $(X_n)_{n \geq 1}$ be a sequence of nonnegative independent random variables and $(r_n)_{n \geq 1}$ a sequence of natural numbers tending to $\infty$ and with $r_n = o(n)$. Let $(t_n)_{n \geq 1}$ be a sequence of positive real numbers for which [\[assumption1\]](#assumption1){reference-type="eqref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="eqref" reference="assumption2"} hold. Assume that there exists $\varepsilon_0 > 0$ such that $$\label{assumption3} r_n \geq \lceil(1+\varepsilon_0)A_n \rceil$$ for $n$ large enough. Moreover, assume that $$\label{assumption4} \limsup_{n \to \infty}\frac{ A_nt_n}{d_n}=C< \infty \qquad \hbox{and}\qquad \lim_{n \to \infty}\frac{(r_n-A_n)t_n}{d_n}=0.$$ Then $$\lim_{n \to \infty}\frac{ S_n^{r_n}}{d_n}=1.$$** **Proof.** Notice that $Z_n\geq S_n^{\lceil (1+\varepsilon_0) A_n \rceil }$. 
In fact, by the right inequality in [\[statement3\]](#statement3){reference-type="eqref" reference="statement3"}, in $S_n^{\lceil (1+\varepsilon_0) A_n \rceil }$ there are only summands not greater than $t_n$, so the sum is not larger than the sum of all the summands not greater than $t_n$, i.e. $Z_n$. Since the assumption [\[assumption3\]](#assumption3){reference-type="eqref" reference="assumption3"} implies that $S_n^{r_n}\leq S_n^{\lceil (1+\varepsilon_0) A_n \rceil }$, we also get $$\label{statement14} S_n^{r_n}\leq Z_n.$$ Now observe that [\[assumption3\]](#assumption3){reference-type="eqref" reference="assumption3"} holds also for every $\varepsilon < \varepsilon_0$; thus by the right inequality in [\[statement3\]](#statement3){reference-type="eqref" reference="statement3"} we have $$r_n \geq \lceil(1+\varepsilon) A_n \rceil \geq \# \{k \leq n: X_k> t_n\}.$$ Thus Lemma [\[lemma5\]](#lemma5){reference-type="ref" reference="lemma5"} can be applied, yielding $$\label{statement11} Z_n- S_n^{r_n} \leq (r_n-(1-\varepsilon) A_n )t_n.$$ Combining relations [\[statement14\]](#statement14){reference-type="eqref" reference="statement14"} and [\[statement11\]](#statement11){reference-type="eqref" reference="statement11"} we obtain $$0 \leq Z_n- S_n^{r_n} \leq (r_n-(1-\varepsilon ) A_n )t_n.$$ Finally, the assumptions in [\[assumption4\]](#assumption4){reference-type="eqref" reference="assumption4"} ensure that $$0 \leq \limsup_{n \to \infty} \frac{Z_n- S_n^{r_n}}{d_n}\leq C\varepsilon,$$ which concludes the proof, by the arbitrariness of $\varepsilon$ and by applying Theorem [Theorem 2](#theorem6){reference-type="ref" reference="theorem6"}.   ------------------------------------------------------------------------ # A strong law for a special class of generalized Oppenheim expansions In this section, we provide a strong law for the trimmed sums of a special class of generalized Oppenheim expansions. 
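The bookkeeping carried into this section rests on the identity of Lemma 4, which holds exactly for any nonnegative sample and any threshold. A minimal numerical check in Python (with hypothetical data; the threshold `t_n` below is arbitrary) reads:

```python
import random

random.seed(0)
n, r = 12, 4
X = [random.expovariate(1.0) for _ in range(n)]
t_n = sorted(X)[-3]  # an arbitrary threshold, here the third largest value

# ordered sample: X_{sigma(1)} >= X_{sigma(2)} >= ...
Xs = sorted(X, reverse=True)
S_n_r = sum(Xs[r:])                       # trimmed sum: the r largest terms removed
Z_n = sum(x for x in X if x <= t_n)       # sum of the terms not exceeding t_n

# Lemma 4: Z_n - S_n^r = sum_{k<=r} X_{sigma(k)} - sum_k X_k 1_{X_k > t_n}
lhs = Z_n - S_n_r
rhs = sum(Xs[:r]) - sum(x for x in X if x > t_n)
assert abs(lhs - rhs) < 1e-12
```

The identity is purely algebraic, so the check passes for any choice of sample, trimming level $r$, and threshold.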
Recall that Theorem [Theorem 6](#teorema1){reference-type="ref" reference="teorema1"}, proven in the previous section, provides conditions under which a convergence result is established for sequences of independent random variables. However, the independence assumption is not satisfied by all Oppenheim expansions, and therefore we need to establish a convergence result without this constraint; thus, first we describe the class of Oppenheim expansions we shall deal with. Call *good* a strictly increasing sequence $\Lambda = (\lambda_j)_{j \in \mathbb{N}}$ tending to $+ \infty$ with $\lambda_j\geq 1$ for every $j\geq 1$, $\lambda_0 =0$ and such that $$\label{as1F} \sup_n (\lambda_{n+1}- \lambda_n)= \ell< + \infty.$$ For $u \in [1, + \infty)$ let $j_u$ be the unique integer such that $\lambda_{j_u-1}\leq u < \lambda_{j_u}$, i.e. $\lambda_{j_u}$ is the minimum element in $\Lambda$ greater than $u$. The sequence defined above is instrumental in defining a class of Oppenheim expansions with optimal properties. This class is presented and studied in depth in the paper in progress [@GH], where the proof of the following proposition can be found together with some examples. **Proposition 7**. **[\[indepTn\]]{#indepTn label="indepTn"} Let $(R_n)_{n \geq 1}$ be an Oppenheim expansion sequence with related distribution function $F$; assume that there exists a good sequence $\Lambda$ such that for every $x \in \Lambda$ and for every $n$, $x \phi_n(B_n)+(x -1)Y_n\phi_n(B_n)$ is an integer. Denote $$\label{Tn} X_n := \lambda_{j_{R_n}}.$$ Then, the variables $X_n$ take values in $\Lambda$, are independent, and $X_n$ has discrete density given by $$P(X_n = \lambda_s)=F\Big(\frac{1}{\lambda_{s-1}}\Big)-F\Big(\frac{1}{\lambda_s}\Big),\quad s\in \mathbb{N}^*.$$** Next, we define the function $$\phi(u) := \sum_{j=2}^{j_u-1} \frac{\lambda_j - \lambda_{j-1}}{\lambda_{j-1}},$$ which plays a crucial role in the proof of the main result of this section. 
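The function $\phi$ grows like $\log u$, which the next lemma makes precise. A quick numerical sketch, assuming the hypothetical good sequence $\lambda_j = j$ (so $\ell = 1$ and $\lambda_1 = 1$):

```python
import math

def lam(j):
    # hypothetical good sequence lambda_j = j (lambda_0 = 0, all gaps equal to ell = 1)
    return float(j)

def j_u(u):
    # the unique index with lambda_{j_u - 1} <= u < lambda_{j_u}
    j = 1
    while lam(j) <= u:
        j += 1
    return j

def phi(u):
    # phi(u) = sum_{j=2}^{j_u - 1} (lambda_j - lambda_{j-1}) / lambda_{j-1}
    return sum((lam(j) - lam(j - 1)) / lam(j - 1) for j in range(2, j_u(u)))

ell = 1.0
for u in [5.0, 50.0, 500.0]:
    assert phi(u) >= math.log(u) - math.log(lam(1)) - ell        # logarithmic lower bound
assert phi(10_000.0) <= 1.1 * (math.log(10_000.0) - math.log(lam(1)))  # upper bound, eps = 0.1
```

For this sequence $\phi(u)$ is a harmonic sum, so both logarithmic bounds can be seen directly.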
In the lemma that follows, we provide lower and upper bounds for this important function. **Lemma 8**. **For the function $\phi$ the following inequalities hold true.** 1. *For every $u > 0$, $$\phi(u) \geq \log u - \log\lambda_1-\ell.$$* 2. *Let $\varepsilon >0$ be fixed. Then, for sufficiently large $u > 0$, $$\phi(u) \leq (1+\varepsilon)\{\log u - \log\lambda_1\}.$$* **Proof.** For the first inequality we start by considering the function $$f(x)=\sum_{j = 2}^{j_u} \frac{1}{\lambda_{j-1}}1_{[\lambda_{j-1},\lambda_j )}(x),$$ defined on the interval $[\lambda_1, \lambda_{{j_u}}]$. Since $f(x) \geq \frac{1}{x}$ on each interval $[\lambda_{j-1},\lambda_j)$, clearly $$\int_{\lambda_1}^{\lambda_{j_u}} f(x) \, {\rm d}x \geq \int_{\lambda_1}^{\lambda_{j_u}} \frac{1}{x}\, {\rm d}x =\log \lambda_{j_u} - \log\lambda_1\geq \log u - \log\lambda_1.$$ Hence $$\phi(u) = \int_{\lambda_1}^{\lambda_{j_u}} f(x) \, {\rm d}x - \frac{\lambda_{j_u}- \lambda_{j_u-1}}{\lambda_{j_u-1}}\geq \log u - \log\lambda_1 - \ell.$$ For the second part consider the sequences $$C_n= \sum_{j = 2}^{n} \frac{1}{\lambda_{j}}(\lambda_j-\lambda_{j-1} ),$$ $$D_n= \sum_{j = 2}^{n} \frac{1}{\lambda_{j-1}} (\lambda_j-\lambda_{j-1} ).$$ Notice that, by an argument similar to the one in the proof of the first inequality, $$D_n\geq \int_{\lambda_1}^{ \lambda_n}\frac{1}{x} {\rm d}x = \log \lambda_n - \log \lambda_1 \to \infty,$$ and $$C_n\leq \int_{\lambda_1}^{ \lambda_n}\frac{1}{x} {\rm d}x = \log \lambda_n - \log \lambda_1.$$ Thus, by applying the Cesàro theorem, we have $$\lim_{n \to \infty} \frac{C_n}{D_n}=\lim_{n \to \infty}\frac{\frac{1}{\lambda_{n}}(\lambda_n-\lambda_{n-1} )}{ \frac{1}{\lambda_{n-1}} (\lambda_n-\lambda_{n-1} )}= \lim_{n \to \infty}\frac{\lambda_{n-1}}{\lambda_{n}}=1,$$ since $$\frac{\lambda_{n-1}}{\lambda_{n}}= 1 - \frac{\lambda_{n}-\lambda_{n-1}}{\lambda_{n}}$$ and $$0 \leq\frac{\lambda_{n}-\lambda_{n-1}}{\lambda_{n}}\leq \frac{\ell}{\lambda_{n}}\to 0, \qquad n \to \infty.$$ As a consequence, for sufficiently large $n$ we have $$D_n \leq (1+\varepsilon)C_n\leq 
(1+\varepsilon)(\log \lambda_n - \log \lambda_1).$$ Hence, for $n= j_u -1$ and sufficiently large $u$, we obtain $$\phi(u)\leq (1+\varepsilon)(\log \lambda_{j_u -1} - \log \lambda_1)\leq (1+\varepsilon)(\log u - \log \lambda_1)$$ (observe that $j_u \to \infty$ as $u\to \infty$).   ------------------------------------------------------------------------ In this section the notation $S_n^{r_n}$ is reserved for the trimmed sums of $( R_n)_{n \geq 1}$, i.e. $$S_n^{r_n}:=\sum_{k = r_n +1}^n R_{\sigma(k)};$$ for $X_n$ defined as in [\[Tn\]](#Tn){reference-type="eqref" reference="Tn"} we denote $$\tilde S_n^{r_n}:=\sum_{k = r_n +1}^n X_{\sigma(k)}.$$ We recall again the notation $$A_n:= \sum_{k =1}^n P(X_k > t_n), \qquad d_n := \sum_{k =1}^nE[X_k 1_{\{X_k \leq t_n\}}],$$ where $(t_n)_{n \geq 1}$ is a sequence of positive numbers. For future use in this section, we investigate which further properties $(t_n)_{n \geq 1}$ must have in order to satisfy [\[assumption1\]](#assumption1){reference-type="eqref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="eqref" reference="assumption2"}. **Lemma 9**. **Let $(R_n)_{n \geq 1}$ be an Oppenheim expansion sequence with related distribution function $F$; assume that there exists a good sequence $\Lambda$ such that for every $x \in \Lambda$ and for every $n$, $x \phi_n(B_n)+(x -1)Y_n\phi_n(B_n)$ is an integer; assume in addition that two positive constants $C_1 <C_2$ exist such that $$C_1 \leq \frac{F(x)}{x}\leq C_2 \qquad \forall \, x \in (0,1].$$ Let $(t_n)_{n \geq 1}$ be a nondecreasing sequence of positive numbers such that $\displaystyle\lim_{n \to \infty} t_n = + \infty$; then** 1. *for every $n \geq 1$, $$\label{statement6} d_n \geq C n \log t_n;$$* 2. *assumption [\[assumption1\]](#assumption1){reference-type="eqref" reference="assumption1"} is satisfied if $$\label{assumption5} \sum_n \exp \left (-C \,\frac{n \log^2 t_n}{t_n ^{2 }}\right)< + \infty,$$ for sufficiently small $C >0;$* 3. 
*assumption [\[assumption2\]](#assumption2){reference-type="eqref" reference="assumption2"} is satisfied if $$\label{assumption6}\sum_n \exp \left(- C\, \frac{n }{t_n }\right)< +\infty,$$ for sufficiently small $C >0.$* **Proof.** Interpreting $F(\frac{1}{0})$ as $1$, we can write $$d_n= - \sum_{k =1}^n \sum_{j=1}^{ j_{t_n}-1}\lambda_j\left[ F\left(\frac{1}{\lambda_j}\right)- F\left(\frac{1}{\lambda_{j-1}}\right) \right].$$ Recalling Abel's summation formula, i.e. $$\sum_{j=m}^r a_j (b_{j+1}- b_j)= a_r b_{r+1}- a_mb_m -\sum_{j=m+1}^rb_j (a_j - a_{j-1}),$$ where we take $$a_j = \lambda_j, \qquad b_j=F\left(\frac{1}{\lambda_{j-1}}\right), \qquad m =1, \qquad r = j_{t_n}-1,$$ the expression $d_n$ becomes $$\begin{aligned} & \nonumber d_n=-\sum_{k =1}^{n}\left[\lambda_{j_{t_n}-1} F\left(\frac{1}{\lambda_{j_{t_n}}}\right)- \lambda_1 - \sum_{j=2}^{ {j_{t_n}-1}} F\left(\frac{1}{\lambda_{j-1}}\right) (\lambda_j-\lambda_{j-1})\right]\\&= \nonumber n \left(\sum_{j=1}^{ {j_{t_n}-1}} F\left(\frac{1}{\lambda_{j-1}}\right)(\lambda_j-\lambda_{j-1}) -\lambda_{j_{t_n}-1} F\left(\frac{1}{\lambda_{j_{t_n}}} \right) \right)\\& \geq n\left(C_1 \sum_{j=2}^{j_{t_n}-1 }\frac{\lambda_j - \lambda_{j-1}}{\lambda_{j-1}}+\lambda_1- C_2\right) =n\left(C_1 \phi(t_n)+\lambda_1- C_2\right)\geq C_3 n \log t_n, \end{aligned}$$ for every $n \geq 1$ by the first part of Lemma [Lemma 8](#lemma4){reference-type="ref" reference="lemma4"} (the first inequality holds since $\lambda_{j_{t_n}} \geq\lambda_{j_{t_n}-1}$). 
Then $$\begin{aligned} & \nonumber \frac{ d_n^2}{n t_n^2} = \frac{n}{t_n ^{2 }}\left( \sum_{j=1}^{ {j_{t_n}-1}} F\left(\frac{1}{\lambda_{j-1}}\right)(\lambda_j-\lambda_{j-1}) -\lambda_{j_{t_n}-1} F\left(\frac{1}{\lambda_{j_{t_n}}} \right) \right)^2 \geq C_3 \,\frac{n \log^2 t_n}{t_n ^{2 }}.\end{aligned}$$ Hence assumption [\[assumption1\]](#assumption1){reference-type="eqref" reference="assumption1"} is satisfied if [\[assumption5\]](#assumption5){reference-type="eqref" reference="assumption5"} holds for sufficiently small $C >0$. Concerning assumption [\[assumption2\]](#assumption2){reference-type="eqref" reference="assumption2"}, note that we can write $$A_n = \sum_{k =1}^n \sum_{j=j_ {t_n} }^{+ \infty}\left( F\left(\frac{1}{\lambda_{j-1}}\right)- F\left(\frac{1}{\lambda_j}\right)\right)= n F\left(\frac{1}{\lambda_{j_ {t_n} -1}}\right)\geq C_1\frac{n}{\lambda_{j_ {t_n} -1}}\geq C_1 \frac{n}{t_n},$$ thus we obtain assumption [\[assumption2\]](#assumption2){reference-type="eqref" reference="assumption2"} by imposing that [\[assumption6\]](#assumption6){reference-type="eqref" reference="assumption6"} holds for sufficiently small $C >0$.   ------------------------------------------------------------------------ The main result of this section (and of the paper) is presented below; for its proof we will employ Theorem [Theorem 6](#teorema1){reference-type="ref" reference="teorema1"}, in which we take $X_n =\lambda_{j_{R_n}}$. Furthermore, for the purpose of using the same Theorem, we need a sequence $(t_n)_{n \geq 1}$ which satisfies assumptions [\[assumption1\]](#assumption1){reference-type="eqref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="eqref" reference="assumption2"}. 
Based on the discussion presented in Lemma [Lemma 9](#lemma6){reference-type="ref" reference="lemma6"}, a suitable sequence is $t_n = n^\gamma$, with $0< \gamma < \frac{1}{2}$; thus in the sequel we take $t_n = n^\gamma$ and $d_n$ means $$d_n = \sum_{k =1}^n E[X_k 1_{\{X_k \leq n^\gamma\}}].$$ **Theorem 10**. **Let $(R_n)_{n \geq 1}$ be an Oppenheim expansion sequence with related distribution function $F$. Assume that there exists a good sequence $\Lambda$ such that, for every $x \in \Lambda$ and for every $n$, $x \phi_n(B_n)+(x -1)Y_n\phi_n(B_n)$ is an integer.** 1. *Assume that there exist two positive constants $C_1 <C_2$ with $$C_1 \leq \frac{F(x)}{x}\leq C_2 \qquad \forall \, x \in (0,1].$$ Then there exists a constant $C>0$ such that, letting $r_n = \lceil \beta n^{1-\gamma}\rceil$, where $0< \gamma < \frac{1}{2}$ and $\beta >C$, we have $$\lim_{n \to \infty} \frac{S^{r_n}_n}{ d_n}= 1.$$* 2. *Assume that $$\lim_{x \to 0} \frac{F(x)}{x}=\alpha >0.$$ Then, for the same sequence $(r_n)_{n \geq 1}$ as in part (a), we have $$\lim_{n \to \infty} \frac{S^{r_n}_n}{n \log n}= \alpha\gamma .$$* **Proof.** For part (a), we start by observing that for every $n$, $$X_n - \ell \leq R_n\leq X_n,$$ which leads to $$\sum_{k=r_n+1}^{n}(X_{\sigma(k)}-\ell)\leq \sum_{k=r_n+1}^{n}R_{\sigma(k)}\leq \sum_{k=r_n+1}^{n}X_{\sigma(k)}.$$ Thus $$\tilde S_n^{r_n}- \ell n\leq \tilde S_n^{r_n}- \ell n+\ell r_n \leq S_n^{r_n}\leq \tilde S_n^{r_n},$$ due to [\[as1F\]](#as1F){reference-type="eqref" reference="as1F"}. Hence, it suffices to prove that $$\lim_{n \to \infty} \frac{\tilde S^{r_n}_n}{d_n }=1\qquad \hbox{\rm and} \qquad \lim_{n \to \infty}\frac{n}{d_n }=0.$$ The second relation is an easy consequence of [\[statement6\]](#statement6){reference-type="eqref" reference="statement6"}. 
For the first one, note that $$\label{statement8} (n-1)^\gamma < \lambda_{j_ {t_{n-1} }}\leq \lambda_{j_ {t_n} }\leq C_4\lambda_{j_ {t_{n }-1 }},$$ (recall that $\lim_{n \to \infty}\frac{\lambda_n}{\lambda_{n-1}}=1,$ see the proof of Lemma [Lemma 8](#lemma4){reference-type="ref" reference="lemma4"}) whence $$A_n =nF\left(\frac{1}{\lambda_{j_ {t_n} -1}}\right)\leq C_2\frac{n}{\lambda_{j_ {t_n} -1}} \leq C_5\frac{n}{(n-1)^\gamma }\leq C_6 n^{1-\gamma} ,$$ for sufficiently large $n$. So for any $\varepsilon_0>0$ we have that $$(1+ \varepsilon_0)A_n\leq C_6(1+ \varepsilon_0)n^{1- \gamma} ,$$ ultimately. Let $C=C_6$; choose $\beta > C$ and $\varepsilon_0 \leq \frac{\beta - C}{ C}$; then $\beta \geq C(1+ \varepsilon_0)$ and $$r_n = \lceil\beta n^{1-\gamma}\rceil\geq \beta n^{1-\gamma} \geq C(1+ \varepsilon_0)n^{1- \gamma}\geq (1+ \varepsilon_0)A_n$$ which proves that assumption [\[assumption3\]](#assumption3){reference-type="eqref" reference="assumption3"} is satisfied. Furthermore, $$0 \leq \frac{ A_n t_n}{d_n}\leq \frac{Cn^{1- \gamma}n^\gamma}{n \left(\sum_{j=1}^{ {j_{t_n}-1}} F\left(\frac{1}{\lambda_{j-1}}\right)(\lambda_j-\lambda_{j-1}) -\lambda_{j_{t_n}-1} F\left(\frac{1}{\lambda_{j_{t_n}}} \right) \right)} \leq \frac{ C }{C_3 \log t_n}= \frac{C_7}{\log n}\to 0, \qquad n \to \infty$$ and $$\begin{aligned} & 0 \leq\frac{(r_n - A_n)t_n }{ d_n }\leq \frac{\left(\lceil\beta n^{1-\gamma} \rceil- nF\left(\frac{1}{ \lambda_{j_{t_n}-1}}\right)\right)n^\gamma}{n\left(C_1 \displaystyle\sum_{j=2}^{j_{t_n}-1} \frac{\lambda_j-\lambda_{j-1}}{\lambda_{j-1}} + \lambda_1- C_2\right)}\leq\frac{1}{C_8\log n }\left(\frac{\lceil\beta n^{1-\gamma} \rceil}{n^{1-\gamma}}-C_1\frac{n^\gamma}{ \lambda_{j_{t_n}-1}}\right)\to 0, \quad n \to \infty, \end{aligned}$$ by [\[statement6\]](#statement6){reference-type="eqref" reference="statement6"} and since the term in parenthesis is bounded (recall that $\frac{(n-1)^{\gamma}}{C_4} \leq \lambda_{j_{t_n}-1} \leq n^{\gamma}$, see the last inequality 
in [\[statement8\]](#statement8){reference-type="eqref" reference="statement8"}). An application of Theorem [Theorem 6](#teorema1){reference-type="ref" reference="teorema1"} concludes the proof of part (a). For the proof of part (b), it suffices to use part (a) and notice that $$\label{statement7} \lim_{n \to \infty}\frac{d_n}{n \log n}= \alpha \gamma.$$ In order to prove this relation, let $\epsilon > 0$; then, for sufficiently small $x$, we have $$(\alpha - \epsilon)x \leq F(x) \leq (\alpha + \epsilon)x.$$ Using these inequalities for estimating $d_n$ and the inequalities proved in Lemma [Lemma 8](#lemma4){reference-type="ref" reference="lemma4"}, in a similar way as in [\[statement6\]](#statement6){reference-type="eqref" reference="statement6"} we have that for sufficiently large $n$, $$\begin{aligned} &d_n =n \left(\sum_{j=1}^{ {j_{t_n}-1}} F\left(\frac{1}{\lambda_{j-1}}\right)(\lambda_j-\lambda_{j-1}) -\lambda_{j_{t_n}-1} F\left(\frac{1}{\lambda_{j_{t_n}}} \right) \right){ \leq n(\alpha+\epsilon)\sum_{j=2}^{j_{t_n}-1}\dfrac{\lambda_j-\lambda_{j-1}}{\lambda_{j-1}}+\lambda_1}\\ &\leq n(\alpha+\epsilon)\phi(t_n)+n(\alpha+\epsilon)\lambda_1 \leq n(\alpha+\epsilon)(1+\epsilon)\log t_n+n(\alpha+\epsilon)\lambda_1\\&= (\alpha + \epsilon)(1+\epsilon)\gamma n \log n+n(\alpha+\epsilon)\lambda_1 \end{aligned}$$ and $$\begin{aligned} &d_n =n \left(\sum_{j=1}^{ {j_{t_n}-1}} F\left(\frac{1}{\lambda_{j-1}}\right)(\lambda_j-\lambda_{j-1}) -\lambda_{j_{t_n}-1} F\left(\frac{1}{\lambda_{j_{t_n}}} \right) \right) \geq { n((\alpha-\epsilon)\phi(t_n) +(\alpha-\epsilon)\lambda_1-(\alpha+\epsilon))}\\ &{\geq n((\alpha-\epsilon)\gamma\log n-(\alpha-\epsilon)\log\lambda_1-(\alpha-\epsilon)\ell+(\alpha-\epsilon)\lambda_1-(\alpha+\epsilon))}. \end{aligned}$$ The statement [\[statement7\]](#statement7){reference-type="eqref" reference="statement7"} follows immediately by the arbitrariness of $\epsilon$.   
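A small Monte Carlo experiment is consistent with part (b). The sketch below is a simplified stand-in for the Oppenheim setting: it samples independent variables directly from the discrete density of Proposition 7 with the hypothetical choices $\lambda_j = j$ and $F(x) = x$ (so $\alpha = 1$ and $P(X_k > s) = 1/s$ on the integers), takes $t_n = n^{0.4}$ and $r_n = \lceil 2 n^{0.6}\rceil$, and checks that the trimmed average $S_n^{r_n}/(n \log n)$ is of the order $\alpha\gamma = 0.4$; the wide tolerance reflects the slow $\log n$ normalization.

```python
import math
import random

random.seed(42)
n = 200_000
gamma = 0.4

# With lambda_j = j and F(x) = x one gets P(X_k > s) = 1/s for integer s,
# so X_k can be sampled as ceil(1/U) with U uniform on (0, 1)
X = [math.ceil(1.0 / max(random.random(), 1e-12)) for _ in range(n)]

r_n = math.ceil(2.0 * n ** (1.0 - gamma))   # r_n = ceil(beta * n^{1 - gamma}), beta = 2
trimmed = sorted(X)[: n - r_n]              # remove the r_n largest terms
ratio = sum(trimmed) / (n * math.log(n))    # compare with alpha * gamma = 0.4

assert 0.25 < ratio < 0.55
```

Without trimming, the sum is dominated by the largest terms and no such stabilization occurs, since this distribution has infinite mean.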
------------------------------------------------------------------------ **Remark 11**. *Observe that in part (b) of Theorem [Theorem 10](#teorema2){reference-type="ref" reference="teorema2"} the normalization sequence is $n\log n$, which is the one used in [@Gi] and [@GH2021] for obtaining a weak law.* Erickson, K. B., (1973), The Strong Law of Large Numbers When the Mean is Undefined, *Trans. Amer. Math. Soc.*, **185**, 371--381. Feller, W., (1946), A limit theorem for random variables with infinite moments, *Amer. J. Math.*, **68**, 257--262. Giuliano, R., (2018), Convergence results for Oppenheim expansions, *Monatsh. Math.*, **187**, 509--530. Giuliano, R., Hadjikyriakou, M., (2021), On Exact Laws of Large Numbers for Oppenheim Expansions with Infinite Mean, *J. Theor. Probab.*, **34**, 1579--1606. Giuliano, R., Hadjikyriakou, M., (2023), Strong laws of large numbers for lightly trimmed sums of generalized Oppenheim expansions, *Manuscript in preparation.* Hoeffding, W., (1963), Probability Inequalities for Sums of Bounded Random Variables, *J. Am. Stat. Assoc.*, **58**, 13--30, https://doi.org/10.2307/2282952. Hatori, H., Maejima, M., Mori, T., (1979), Convergence rates in the law of large numbers when extreme terms are excluded, *Z. Wahrscheinlichkeitstheorie verw. Geb.*, **47**, 1--12. Kesten, H., (1970), The limit points of a normalized random walk, *Ann. Math. Stat.*, **41**, 1173--1205. Kesseböhmer, M., Schindler, T., (2019), Strong Laws of Large Numbers for Intermediately Trimmed Sums of i.i.d. Random Variables with Infinite Mean, *J. Theor. Probab.*, **32**, 702--720, https://doi.org/10.1007/s10959-017-0802-0. Maller, R. A., (1978), Relative Stability and the Strong Law of Large Numbers, *Z. Wahrscheinlichkeitstheorie verw. Geb.*, **43**, 141--148. Mori, T., (1976), The strong law of large numbers when extreme terms are excluded from sums, *Z. Wahrscheinlichkeitstheorie verw. Geb.*, **36**, 189--194. Mori, T., (1977), Stability for sums of i.i.d. random variables when extreme terms are excluded, *Z. Wahrscheinlichkeitstheorie verw. Geb.*, **40**, 159--167. [^1]: Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, I-56127 Pisa, Italy (email: `rita.giuliano@unipi.it`) [^2]: University of Central Lancashire, 12--14 University Avenue Pyla, 7080 Larnaka, Cyprus (email: `MHadjikyriakou@uclan.ac.uk`)
--- abstract: | We consider approximating solutions to parameterized linear systems of the form $A(\mu_1,\mu_2) x(\mu_1,\mu_2) = b$, where $(\mu_1, \mu_2) \in \mathbb{R}^2$. Here the matrix $A(\mu_1,\mu_2) \in \mathbb{R}^{n \times n}$ is nonsingular, large, and sparse and depends nonlinearly on the parameters $\mu_1$ and $\mu_2$. Specifically, the system arises from a discretization of a partial differential equation and $x(\mu_1,\mu_2) \in \mathbb{R}^n$, $b \in \mathbb{R}^n$. This work combines companion linearization with the Krylov subspace method preconditioned bi-conjugate gradient (BiCG) and a decomposition of a tensor matrix of precomputed solutions, called snapshots. As a result, a reduced order model of $x(\mu_1,\mu_2)$ is constructed, and this model can be evaluated in a cheap way for many values of the parameters. The decomposition is performed efficiently using the sparse grid based higher-order proper generalized decomposition (HOPGD) presented in \[Lu, Blal, and Gravouil, *Internat. J. Numer. Methods Engrg.*, 114:1438--1461, 2018\], and the snapshots are generated as one variable functions of $\mu_1$ or of $\mu_2$, as proposed in \[Correnty, Jarlebring, and Szyld, *Preprint on arXiv*, 2022 `https://arxiv.org/abs/2212.04295`\]. Tensor decompositions performed on a set of snapshots can fail to reach a certain level of accuracy, and it is not possible to know a priori if the decomposition will be successful. This method offers a way to generate a new set of solutions on the same parameter space at little additional cost. An interpolation of the model is used to produce approximations on the entire parameter space, and this method can be used to solve a parameter estimation problem. Numerical examples of a parameterized Helmholtz equation show the competitiveness of our approach. The simulations are reproducible, and the software is available online. author: - "Siobhán Correnty[^1]" - "Melina A. Freitag[^2]" - "Kirk M. 
Soodhalter[^3]" bibliography: - siobhanbib.bib - eliasbib.bib title: Sparse grid based Chebyshev HOPGD for parameterized linear systems --- Krylov methods, companion linearization, shifted linear systems, reduced order model, tensor decomposition, parameter estimation 65F10, 65N22, 65F55 # Introduction We are interested in approximating solutions to linear systems of the form $$\begin{aligned} \label{our-problem} A(\mu_1,\mu_2) x(\mu_1,\mu_2) = b,\end{aligned}$$ for many different values of the parameters $\mu_1, \mu_2$. Here $A(\mu_1,\mu_2) \in \mathbb{R}^{n \times n}$ is a large and sparse nonsingular matrix with a nonlinear dependence on $\mu_1 \in [a_1,b_1] \subset \mathbb{R}$ and $\mu_2 \in [a_2,b_2] \subset \mathbb{R}$ and $x(\mu_1,\mu_2) \in \mathbb{R}^n$, $b \in \mathbb{R}^n$. This work combines companion linearization, a technique from the study of nonlinear eigenvalue problems, with the Krylov subspace method bi-conjugate gradient (BiCG) [@BiCG76; @Lanczos52] and a tensor decomposition to construct a reduced order model. This smaller model can be evaluated in an inexpensive way to approximate the solution to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} for many values of the parameters. Additionally, the model can be used to solve a parameter estimation problem, i.e., to simultaneously estimate $\mu_1$ and $\mu_2$ for a given solution vector where these parameters are not known. Specifically, our proposed method is based on a decomposition of a tensor matrix of precomputed solutions to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, called snapshots, where the systems arise from discretizations of parameterized partial differential equations (PDEs). In this way, building the reduced order model can be divided into two main parts: 1. Generate the snapshots. 2. Perform the tensor decomposition. 
We assume further that the system matrix can be expressed as the sum of products of matrices and functions, i.e., $$\begin{aligned} \label{form-of-A} A(\mu_1,\mu_2) = C_1 f_1(\mu_1,\mu_2) + \cdots + C_{n_f} f_{n_f} (\mu_1,\mu_2), \end{aligned}$$ where $n_f \ll n$ and $f_1,\ldots,f_{n_f}$ are nonlinear scalar functions in the parameters $\mu_1$ and $\mu_2$. Previously proposed methods of this variety, e.g., [@PODcit; @HOPGD2; @SparseHOPGD; @HOPGD; @HOPGD3; @HarborAgitation], generate the snapshots in an offline stage, and, thus, a linear system of dimension $n \times n$ must be solved for each pair $(\mu_1,\mu_2)$ in the tensor matrix. Here we instead compute the snapshots with the method proposed in [@CorrentyEtAl2], Preconditioned Chebyshev BiCG for parameterized linear systems. This choice allows for greater flexibility in the selection of the set of snapshots included in the tensor, as the approximations are generated as one variable functions of $\mu_1$ or of $\mu_2$, i.e., with one parameter frozen. These functions are cheap to evaluate for different values of the parameter, and this method is described in Section [2](#sec:ChebyshevBiCG){reference-type="ref" reference="sec:ChebyshevBiCG"}. The tensor matrix of precomputed snapshots has a particular structure, as we consider sparse grid sampling in the parameter space. This is a way to overcome the so-called *curse of dimensionality* since the number of snapshots to generate and store grows exponentially with the dimension when the sampling is performed on a conventional full grid. The decomposition is performed with a variant of the higher-order proper generalized decomposition (HOPGD), as proposed in [@SparseHOPGD]. The method HOPGD [@HarborAgitation] decomposes a tensor of snapshots sampled on a full grid. The approach here has been adapted for this particular setting with fewer snapshots and, as in the standard implementation, results in a separated expression in $\mu_1$ and $\mu_2$. 
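Once the separated factors are available, the online stage reduces to one-dimensional interpolations and a short sum, at a cost independent of the original linear solves. The following schematic sketch uses hypothetical factors, grids, and dimensions (it is not the actual HOPGD output):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 500, 3, 21                  # state dimension, separation rank, grid points
mu1_grid = np.linspace(0.0, 1.0, p)
mu2_grid = np.linspace(0.0, 1.0, p)
V = rng.standard_normal((n, m))       # spatial modes
A = np.cos(np.outer(mu1_grid, np.arange(1, m + 1)))        # parametric factors in mu1
B = np.sin(np.outer(mu2_grid, np.arange(1, m + 1)) + 1.0)  # parametric factors in mu2

def evaluate(mu1, mu2):
    # x(mu1, mu2) ~ sum_i a_i(mu1) * b_i(mu2) * v_i, with 1-D interpolated factors
    a = np.array([np.interp(mu1, mu1_grid, A[:, i]) for i in range(m)])
    b = np.array([np.interp(mu2, mu2_grid, B[:, i]) for i in range(m)])
    return V @ (a * b)                # cost O(n * m), no linear solve involved

x = evaluate(0.37, 0.62)
assert x.shape == (n,)
```

Evaluating such a model for a new parameter pair costs a handful of interpolations plus a rank-$m$ sum, which is why the reduced order model can be queried for many parameter values cheaply.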
Once constructed, the decomposition is interpolated to approximate solutions to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} corresponding to all parameters $(\mu_1,\mu_2) \in [a_1,b_1] \times [a_2,b_2]$. More precisely, a greedy, alternating-directions algorithm is used to perform the decomposition, where the cost of each step of the method grows linearly with the number of unknowns $n$. The basis vectors for the decomposition are not known beforehand, similar to the established proper generalized decomposition (PGD), often used to create reduced order models of PDEs which are separated in the time and space variables. The method PGD has been used widely in the study of highly nonlinear PDEs [@PGD1; @PGDoverview2014; @PGD2] and has been generalized to solve multiparametric problems; see, for instance, [@AmmarEtAl1; @AmmarEtAl2]. Since the decomposition can be performed efficiently, generating the snapshots is the dominant cost of building the reduced order model. In general, we cannot guarantee that the error in a tensor decomposition for a certain set of snapshots will reach a specified accuracy level. Additionally, it is not possible to know a priori which sets of snapshots will lead to a successful decomposition, even with modest standards for the convergence [@KoldaBader]. If the decomposition fails to converge for a given tensor, our proposed method offers an efficient way to generate a new set of snapshots on the same parameter space $[a_1,b_1] \times [a_2,b_2]$ with little extra computation. These snapshots can be used to construct a new reduced order model for the same parameter space in an identical, efficient way. This is the main contribution developed in this work. Evaluating the resulting reduced order model is in general much more efficient than solving the systems individually for each choice of the parameters [@SparseHOPGD]. 
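One enrichment step of such a greedy, alternating-directions scheme can be sketched as a rank-one alternating least squares update; this is a generic illustration on a hypothetical residual tensor, not the exact HOPGD implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.standard_normal((50, 11, 11))   # hypothetical residual: space x mu1 x mu2

# fixed-point (alternating directions) iteration for min || R - v x a x b ||_F:
# each update is the exact least squares solution with the other two factors frozen
v = rng.standard_normal(50)
a = rng.standard_normal(11)
b = rng.standard_normal(11)
for _ in range(30):
    v = np.einsum('ijk,j,k->i', R, a, b) / ((a @ a) * (b @ b))
    a = np.einsum('ijk,i,k->j', R, v, b) / ((v @ v) * (b @ b))
    b = np.einsum('ijk,i,j->k', R, v, a) / ((v @ v) * (a @ a))

# in a greedy method the converged rank-one term is subtracted from R and the
# process is repeated until the residual is small enough
fit = np.linalg.norm(R - np.einsum('i,j,k->ijk', v, a, b))
assert fit < np.linalg.norm(R)
```

Each update touches the tensor once, so the per-step cost scales linearly in the number of unknowns along each direction, consistent with the linear growth in $n$ noted above.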
Reliable and accurate approximations are available for a variety of parameter choices, and parameter estimation can be performed in a similarly competitive way. Details regarding the construction of the reduced order model are found in Section [3](#sec:SparseHOPGD){reference-type="ref" reference="sec:SparseHOPGD"}, and numerical simulations of a parameterized Helmholtz equation and a parameterized advection-diffusion equation are presented in Sections [4](#sec:simulations){reference-type="ref" reference="sec:simulations"} and [5](#sec:advec-diff){reference-type="ref" reference="sec:advec-diff"}, respectively. All experiments were carried out on a 2.3 GHz Dual-Core Intel Core i5 processor with 16 GB RAM, and the corresponding software can be found online.[^4] # Generating snapshots with Preconditioned Chebyshev BiCG {#sec:ChebyshevBiCG} Constructing an accurate reduced order model of $x(\mu_1,\mu_2)$ requires sampling of solutions to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} for many different values of the parameters $\mu_1$ and $\mu_2$. Parameterized linear systems have been studied in several prior works, for example, in the context of Tikhonov regularization for ill-posed problems [@FrommerMaass99:20; @KilmerOleary01], as well as in [@KressnerToblerLR], where the solutions were approximated by low-rank tensors, and in [@KirkUnrelated] with parameterized right-hand sides. Additionally, approaches based on companion linearization were proposed in [@GuSimoncini], as well as in [@CorrentyEtAl; @JarlebringCorrenty1] with a linearization based on an infinite Taylor series expansion. Reduced order models based on sampling of snapshot solutions are typically constructed from finite element approximations, generated in an offline stage at a significant computational cost. 
We base our approach to obtain the snapshot solutions on an adapted version of the method Preconditioned Chebyshev BiCG for parameterized linear systems, originally proposed in [@CorrentyEtAl2]. Here the unique solution to a companion linearization, formed from a Chebyshev approximation using `Chebfun` [@ChebfunGuide], is generated in a preconditioned BiCG [@BiCG76; @Lanczos52] setting. Two executions of this method generate all solutions corresponding to the values of the parameters shown in Figure [2](#tikz2){reference-type="ref" reference="tikz2"}. Specifically, one execution produces all solutions on the line $\mu_1 = \mu_1^*$, and the other produces all solutions on the line $\mu_2 = \mu_2^*$, in the plane $[a_1,b_1] \times [a_2,b_2]$. In this way, we generate solutions fixed in one parameter. Moreover, the sampling represented here is sufficient to build a reliable reduced order model with the strategy presented in [@SparseHOPGD]. We summarize the method Preconditioned Chebyshev BiCG, for the sake of self-containment, as follows. A Chebyshev approximation of $A(\mu_1,\mu_2)$ for a fixed $\mu_2^* \in \mathbb{R}$ is given by $\hat{P}(\mu_1)$, where $$\begin{aligned} \label{cheb1} A(\mu_1,\mu_2^*) \approx \hat{P}(\mu_1) \coloneqq \hat{P}_0 \hat{\tau}_0 (\mu_1) + \ldots + \hat{P}_{d_1} \hat{\tau}_{d_1} (\mu_1)\end{aligned}$$ and, for a fixed $\mu_1^* \in \mathbb{R}$, by $\tilde{P}(\mu_2)$, i.e., $$\begin{aligned} \label{cheb2} A(\mu_1^*, \mu_2) \approx \tilde{P}(\mu_2) \coloneqq \tilde{P}_0 \tilde{\tau}_0 (\mu_2) + \ldots + \tilde{P}_{d_2} \tilde{\tau}_{d_2} (\mu_2).\end{aligned}$$ Here $\hat{\tau}_{\ell}$ is the degree $\ell$ Chebyshev polynomial on $[-c_1,c_1]$, $\tilde{\tau}_{\ell}$ is the degree $\ell$ Chebyshev polynomial on $[-c_2,c_2]$, and $\hat{P}_{\ell}$, $\tilde{P}_{\ell} \in \mathbb{R}^{n \times n}$ are the corresponding interpolation coefficients. 
These coefficients are computed efficiently by a discrete cosine transform of the one-variable scalar functions $$\begin{aligned} \label{scalar-functions} f_k(\mu_1,\mu_2^*)\eqqcolon\hat{f}_k(\mu_1), \qquad f_k(\mu_1^*,\mu_2)\eqqcolon\tilde{f}_k(\mu_2),\qquad k=1,\ldots,n_f, \end{aligned}$$ in [\[form-of-A\]](#form-of-A){reference-type="eqref" reference="form-of-A"}. Note, $c_l \in \mathbb{R}$ is chosen such that $[a_l,b_l] \subseteq [-c_l,c_l]$, for $l=1,2$. For a fixed $\mu_2^*$, a companion linearization based on the linear system $$\begin{aligned} \label{cheby-approx} \hat{P} (\mu_1) \hat{x} (\mu_1) = b\end{aligned}$$ with $\hat{x}(\mu_1) = \hat{x}(\mu_1,\mu_2^*) \approx x(\mu_1,\mu_2^*)$, where $x(\mu_1,\mu_2^*)$ is as in [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, is given by $$\begin{aligned} \left( \begin{bmatrix} 0 & I & & & & & \\ I & 0 & I & & & & \\ & I & 0 & I & & & \\ & & & \ddots & & & \\ & & & & I & 0 & I \\ \hat{P}_0 & \hat{P}_1 & \cdots & \cdots & \hat{P}_{d_1-3} & (-\hat{P}_{d_1} + \hat{P}_{d_1-2}) & \hat{P}_{d_1-1} \end{bmatrix} - \frac{\mu_1}{c_1} \begin{bmatrix} I & & & & & \\ & 2 I & & & & \\ & & 2 I & & & \\ & & & \ddots & & \\ & & & & 2 I & \\ & & & & & -2 \hat{P}_{d_1} \end{bmatrix} \right) \begin{bmatrix} u_0(\mu_1) \\ u_1(\mu_1) \\ u_2(\mu_1) \\ \vdots \\ u_{d_1-2}(\mu_1) \\ u_{d_1-1}(\mu_1) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ b \end{bmatrix}\end{aligned}$$ with $u_{\ell}(\mu_1) \coloneqq \hat{\tau}_{\ell}(\mu_1) \hat{x}(\mu_1,\mu_2^*)$, for $\ell=0,\ldots,d_1-1$. The linearization has the form $$\begin{aligned} \label{linear} \left( \hat{K} - \mu_1 \hat{M} \right) u(\mu_1) = c,\end{aligned}$$ where $\hat{K}, \hat{M} \in \mathbb{R}^{d_1 n \times d_1 n}$ are coefficient matrices, independent of the parameter $\mu_1$, $c \in \mathbb{R}^{d_1 n}$ is a constant vector, and the solution $u(\mu_1)$ is unique. This linearization, inspired by the work [@ChebyEffenKress] and studied fully in [@CorrentyEtAl2], relies on the well-known three-term recurrence of the Chebyshev polynomials: $$\begin{aligned} \label{recurrence} \hat{\tau}_0 (\mu_1) \coloneqq 1, \quad \hat{\tau}_1(\mu_1) \coloneqq \frac{1}{c_1} \mu_1, \quad \hat{\tau}_{\ell+1} (\mu_1) \coloneqq \frac{2}{c_1} \mu_1 \hat{\tau}_{\ell} (\mu_1) - \hat{\tau}_{\ell-1}(\mu_1). \end{aligned}$$ Solutions to the systems in [\[linear\]](#linear){reference-type="eqref" reference="linear"} for many different $\mu_1$ are approximated with the Krylov subspace method bi-conjugate gradient (BiCG) for shifted systems [@PreCondAhmed; @Frommer2003res]. Specifically, we approximate an equivalent right preconditioned system, where the system matrix incorporates a shift with the identity matrix.
This system is given by [\[precond\]]{#precond label="precond"} $$\begin{aligned} &&( \hat{K} - \mu_1 \hat{M} )(\hat{K}-\hat{\sigma} \hat{M})^{-1} \hat{u}(\mu_1) = c \\ &\iff& ( \hat{K} - \mu_1 \hat{M} + \hat{\sigma}\hat{M} - \hat{\sigma} \hat{M} )(\hat{K}-\hat{\sigma} \hat{M})^{-1} \hat{u}(\mu_1) = c \\ &\iff& \left( I + (-\mu_1 + \hat{\sigma}) \hat{M} (\hat{K} - \hat{\sigma} \hat{M})^{-1} \right)\hat{u}(\mu_1) = c \label{precond-2}\end{aligned}$$ with $\hat{u}(\mu_1) = (\hat{K} - \hat{\sigma} \hat{M}) u (\mu_1)$. The $k$th approximation comes from the Krylov subspace generated from the matrix $\hat{M} (\hat{K} - \hat{\sigma} \hat{M})^{-1}$ and the vector $c$, defined by $$\begin{aligned} \label{our-krylov} \mathcal{K}_k \coloneqq \text{span} \{ c, \hat{M} (\hat{K} - \hat{\sigma} \hat{M})^{-1} c, \ldots, (\hat{M} (\hat{K} - \hat{\sigma} \hat{M})^{-1} )^{k-1} c \}.\end{aligned}$$ Here the shift- and scaling-invariance properties of Krylov subspaces have been exploited, i.e., $\mathcal{K}_k = \tilde{\mathcal{K}}_k$, where $\tilde{\mathcal{K}}_k$ is the Krylov subspace generated from the matrix $( I + (-\mu_1 + \hat{\sigma}) \hat{M} (\hat{K} - \hat{\sigma} \hat{M})^{-1})$ and the vector $c$. Note, several Krylov methods have been developed specifically to approximate shifted systems of the form [\[precond-2\]](#precond-2){reference-type="eqref" reference="precond-2"}. See, for example, [@Freund:NLAproc92; @FrommerGlassner98; @ParksEtAl2006; @SOODHALTER2014105], as well as [@bakhos2017; @Baumann2015NestedKM], where shift-and-invert preconditioners were considered.
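Both the algebraic identity behind [\[precond-2\]](#precond-2){reference-type="eqref" reference="precond-2"} and the shift invariance that allows one Krylov subspace to serve every $\mu_1$ can be checked directly on small dense matrices. The following Python sketch uses hypothetical dimensions and parameter values, with a dense inverse standing in for the preconditioner action:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu1, sigma = 8, 0.3, 0.5
K = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
G = np.linalg.inv(K - sigma * M)  # dense stand-in for the preconditioner action

# the algebraic identity behind the preconditioned shifted system
lhs = (K - mu1 * M) @ G
rhs = np.eye(n) + (-mu1 + sigma) * (M @ G)
assert np.allclose(lhs, rhs)

# shift/scaling invariance: Krylov spaces of M G and I + (sigma - mu1) M G agree
c = rng.standard_normal(n)

def krylov_basis(A, v, k):
    # orthonormal basis of span{v, A v, ..., A^{k-1} v}; each new column is
    # normalized before the next product, which preserves the span
    cols = [v / np.linalg.norm(v)]
    for _ in range(k - 1):
        w = A @ cols[-1]
        cols.append(w / np.linalg.norm(w))
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q

Q1 = krylov_basis(M @ G, c, 4)
Q2 = krylov_basis(np.eye(n) + (-mu1 + sigma) * (M @ G), c, 4)
assert np.allclose(Q1 @ Q1.T, Q2 @ Q2.T, atol=1e-8)  # equal orthogonal projectors
```

The projector comparison is one simple way to test subspace equality; any basis-independent criterion would serve equally well.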
Since we work in a BiCG setting, we also require a basis matrix for the subspace defined by $$\begin{aligned} \label{adj-krylov} \mathcal{L}_k \coloneqq \text{span} \{ \tilde{c}, \left(\hat{M} (\hat{K} - \hat{\sigma} \hat{M})^{-1} \right)^T \tilde{c}, \ldots, \left((\hat{M} (\hat{K} - \hat{\sigma} \hat{M})^{-1} )^T\right)^{k-1} \tilde{c} \},\end{aligned}$$ where $\tilde{c} \in \mathbb{R}^{d_1 n}$ and $c^T \tilde{c} \neq 0$. In this way, a basis matrix for the Krylov subspace [\[our-krylov\]](#our-krylov){reference-type="eqref" reference="our-krylov"} and a second basis matrix for the subspace [\[adj-krylov\]](#adj-krylov){reference-type="eqref" reference="adj-krylov"} are built once and reused for the computation of solutions to [\[precond\]](#precond){reference-type="eqref" reference="precond"} for all $\mu_1$ of interest. More concretely, a Lanczos biorthogonalization process generates the matrices $V_k$, $W_k \in \mathbb{R}^{d_1 n \times k}$, $T_k \in \mathbb{R}^{k \times k}$, and $\underline{T}_k$, $\bar{T}^T_k \in \mathbb{R}^{(k+1) \times k}$ such that the relations [\[relations\]]{#relations label="relations"} $$\begin{aligned} {5} \hat{M}( \hat{K}-\hat{\sigma} \hat{M})^{-1} V_{k} &= V_{k} &&T_k &&+ \beta_k v_{k+1} e_k^T &&= V_{k+1} &&\underline{T}_k, \label{relations-a} \\ \big(\hat{M}(\hat{K}-\hat{\sigma} \hat{M})^{-1}\big)^T W_{k} &= W_{k} &&T_k^T &&+ \gamma_k w_{k+1} e_k^T &&= W_{k+1} &&\bar{T}^T_k\end{aligned}$$ hold. The columns of $V_{k}$ span the subspace [\[our-krylov\]](#our-krylov){reference-type="eqref" reference="our-krylov"}, the columns of $W_k$ span the subspace [\[adj-krylov\]](#adj-krylov){reference-type="eqref" reference="adj-krylov"}, and the biorthogonalization procedure gives $W_k^T V_k = I_k$, where $I_k \in \mathbb{R}^{k \times k}$ is the identity matrix of dimension $k \times k$.
Here $e_k$ in [\[relations\]](#relations){reference-type="eqref" reference="relations"} denotes the $k$th column of $I_k$ and the matrices in [\[relations\]](#relations){reference-type="eqref" reference="relations"} are generated independently of the parameter $\mu_1$. The square matrix $T_k$ is of the form $$\begin{aligned} \label{square-T} T_{k} \coloneqq \begin{bmatrix} \alpha_1 & \gamma_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \gamma_{k-1} \\ & & \beta_{k-1} & \alpha_{k} \end{bmatrix} \in \mathbb{R}^{k \times k},\end{aligned}$$ and the tridiagonal rectangular Hessenberg matrices $\underline{T}_k$ and $\bar{T}^T_k$ are given by $$\begin{aligned} \label{Ts} \underline{T}_k \coloneqq \begin{bmatrix} \alpha_1 & \gamma_1 & & \\ \beta_1 & \ddots & \ddots & \\ & \ddots & \ddots & \gamma_{k-1} \\ & & \beta_{k-1} & \alpha_k \\ & & & \beta_k \end{bmatrix}, \quad \bar{T}^T_k \coloneqq \begin{bmatrix} \alpha_1 & \beta_1 & & \\ \gamma_1 & \ddots & \ddots & \\ & \ddots & \ddots & \beta_{k-1} \\ & & \gamma_{k-1} & \alpha_k \\ & & & \gamma_k \end{bmatrix},\end{aligned}$$ where only the $k \times k$ principal submatrices of $\underline{T}_k$ and $\bar{T}^T_k$ are transposes of each other. The Lanczos biorthogonalization process has the advantage of the so-called short-term recurrence of the Krylov basis vectors, i.e., the matrices $\underline{T}_k$ and $\bar{T}^T_k$ in [\[relations\]](#relations){reference-type="eqref" reference="relations"} are tridiagonal. In this way, the basis vectors are computed recursively at each iteration of the algorithm, and no additional orthogonalization procedure is required. The same shift-and-invert preconditioner and its adjoint must be applied at each iteration of the Lanczos biorthogonalization algorithm.
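A compact sketch of a two-sided Lanczos biorthogonalization may clarify the relations [\[relations\]](#relations){reference-type="eqref" reference="relations"}. This is a generic textbook variant in Python, not the implementation used in this work: a random matrix stands in for $\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1}$, and serious breakdown ($\delta \approx 0$) is not handled:

```python
import numpy as np

def lanczos_biortho(A, v1, w1, k):
    # Generic two-sided Lanczos: builds V, W with W^T V = I_k and a
    # tridiagonal (k+1) x k matrix T_under such that A V_k = V_{k+1} T_under.
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    W = np.zeros((n, k + 1))
    s = w1 @ v1                      # must be nonzero (no breakdown handling)
    V[:, 0] = v1 / np.sqrt(abs(s))
    W[:, 0] = np.sign(s) * w1 / np.sqrt(abs(s))
    alpha, beta, gamma = np.zeros(k), np.zeros(k), np.zeros(k)
    for j in range(k):
        av = A @ V[:, j]
        aw = A.T @ W[:, j]
        alpha[j] = W[:, j] @ av
        vhat = av - alpha[j] * V[:, j] - (gamma[j - 1] * V[:, j - 1] if j else 0)
        what = aw - alpha[j] * W[:, j] - (beta[j - 1] * W[:, j - 1] if j else 0)
        delta = what @ vhat          # serious breakdown if this vanishes
        beta[j] = np.sqrt(abs(delta))
        gamma[j] = delta / beta[j]
        V[:, j + 1] = vhat / beta[j]
        W[:, j + 1] = what / gamma[j]
    T_under = np.zeros((k + 1, k))
    T_under[np.arange(k), np.arange(k)] = alpha               # diagonal
    T_under[np.arange(1, k + 1), np.arange(k)] = beta         # subdiagonal
    T_under[np.arange(k - 1), np.arange(1, k)] = gamma[:k - 1]  # superdiagonal
    return V, W, T_under

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12))   # stand-in for the preconditioned operator
V, W, T_under = lanczos_biortho(A, rng.standard_normal(12), rng.standard_normal(12), 5)
assert np.allclose(W[:, :5].T @ V[:, :5], np.eye(5), atol=1e-6)  # biorthogonality
assert np.allclose(A @ V[:, :5], V @ T_under, atol=1e-6)         # first relation
```

In exact arithmetic the two final assertions are exactly the statements $W_k^T V_k = I_k$ and $A V_k = V_{k+1} \underline{T}_k$; in floating point they hold to roundoff for these small dimensions.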
We consider an efficient application, derived in [@Amiraslani2009] and adapted in [@KressRoman], via a block LU decomposition of the matrix $L_{\hat{\sigma}} U_{\hat{\sigma}}=(\hat{K} - \hat{\sigma} \hat{M})\Pi$, where $\Pi\coloneqq \footnotesize \begin{bmatrix} & I_n \\ I_{(d_1-1)n}\end{bmatrix} \in \mathbb{R}^{d_1 n \times d_1 n}$ is a permutation matrix, $$L_{\hat{\sigma}} \coloneqq \begin{bmatrix} I& & & & & \\ -\frac{2\hat{\sigma}}{c_1}I& I& & & & \\ I&-\frac{2\hat{\sigma}}{c_1}I &I & & & \\ & & \ddots& & & \\ & &I & -\frac{2\hat{\sigma}}{c_1}I& I& \\ \hat{P}_1& \cdots & \hat{P}_{d_1-3}& (-\hat{P}_{d_1}+\hat{P}_{d_1-2})& (\hat{P}_{d_1-1}+\frac{2\hat{\sigma}}{c_1}\hat{P}_{d_1})& \hat{P}(\hat{\sigma})\\ \end{bmatrix} \in \mathbb{R}^{d_1 n \times d_1 n},$$ and $$U_{\hat{\sigma}} \coloneqq \begin{bmatrix} I& & & & & -\hat{\tau}_1(\hat{\sigma})I \\ & I& & & & -\hat{\tau}_2 (\hat{\sigma})I\\ & & I& & & -\hat{\tau}_3 (\hat{\sigma})I\\ & & & \ddots& & \vdots\\ & & & & I& -\hat{\tau}_{d_1-1} (\hat{\sigma})I\\ & & & & & I\\ \end{bmatrix} \in \mathbb{R}^{d_1 n \times d_1 n}.$$ Specifically, the action of the preconditioner $(\hat{K}- \hat{\sigma} \hat{M})^{-1}$ to a vector $y \in \mathbb{R}^{d_1 n}$ is given by $$\begin{aligned} \label{apply1} (\hat{K}- \hat{\sigma} \hat{M})^{-1} y = \Pi U_{\hat{\sigma}}^{-1} L_{\hat{\sigma}}^{-1} y,\end{aligned}$$ and the adjoint preconditioner is applied analogously, i.e., $$\begin{aligned} \label{apply2} (\hat{K}- \hat{\sigma} \hat{M})^{-T} y = L_{\hat{\sigma}}^{-T} U_{\hat{\sigma}}^{-T} \Pi ^{T} y. \end{aligned}$$ The matrix $U_{\hat{\sigma}}^{-1}$ is identical to $U_{\hat{\sigma}}$, except for a sign change in the first $d_1-1$ blocks in the last block column.
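The stated structure of $U_{\hat{\sigma}}^{-1}$ follows since $U_{\hat{\sigma}} = I - t\, e_{d_1}^T$ with $e_{d_1}^T t = 0$, and it can be checked numerically. The Python sketch below uses scalar blocks ($n = 1$) and hypothetical values of $d_1$, $c_1$, and $\hat{\sigma}$:

```python
import numpy as np

d1, c1, sigma = 6, 2.0, 0.4          # hypothetical degree, interval bound, target
tau = np.zeros(d1)
tau[0], tau[1] = 1.0, sigma / c1     # Chebyshev three-term recurrence at sigma
for ell in range(1, d1 - 1):
    tau[ell + 1] = (2.0 * sigma / c1) * tau[ell] - tau[ell - 1]

# U_sigma with scalar (n = 1) blocks: identity except for the last column
U = np.eye(d1)
U[: d1 - 1, d1 - 1] = -tau[1:d1]

# the claim: inv(U_sigma) equals U_sigma with those block signs flipped
U_inv = U.copy()
U_inv[: d1 - 1, d1 - 1] = tau[1:d1]
assert np.allclose(U @ U_inv, np.eye(d1))
```

Applying $U_{\hat{\sigma}}^{-1}$ therefore costs only $d_1-1$ scaled vector additions, with no factorization of $U_{\hat{\sigma}}$ required.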
Applying $L_{\hat{\sigma}}^{-1}$ to a vector amounts to recursively computing the first $d_1-1$ block elements and performing one linear solve with system matrix $\hat{P}(\hat{\sigma}) \in \mathbb{R}^{n \times n}$, analogous to solving a block lower triangular system with Gaussian elimination. This linear solve can be achieved, for example, by computing one LU decomposition of $\hat{P}(\hat{\sigma})$. Note, an LU decomposition of $\hat{P}(\hat{\sigma})$ can be reused in the application of $L_{\hat{\sigma}}^{-T}$. After the Krylov subspace basis matrices of a desired dimension $k$ have been constructed, approximations to [\[linear\]](#linear){reference-type="eqref" reference="linear"} and, equivalently, approximations to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, can be calculated efficiently for many values of the parameter $\mu_1$. In particular, we reuse the matrices $V_k$ and $T_k$ in [\[relations\]](#relations){reference-type="eqref" reference="relations"} for the computation of each approximation to $x(\mu_1^i, \mu_2^*)$, $i=1,\ldots,n_1$, of interest. This requires the calculations [\[each-one-needs\]]{#each-one-needs label="each-one-needs"} $$\begin{aligned} y_k (\mu_1^i) &=& \left(I_k + (-\mu_1^i + \hat{\sigma}) T_k\right)^{-1} (\beta e_1),\label{solve-tri} \\ z_k (\mu_1^i) &=& V_k y_k(\mu_1^i), \\ x_k (\mu_1^i,\mu_2^*) &=& \left( (\hat{K} - \hat{\sigma} \hat{M})^{-1} z_k(\mu_1^i) \right)_{1:n}, \label{krylov-approx}\end{aligned}$$ for $i=1,\ldots,n_1$. Here $\beta \coloneqq \left\lVert b\right\rVert$, where $b$ is as in [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, and $x_k(\mu_1,\mu_2^*)$ denotes the approximation to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} from the Krylov subspace [\[our-krylov\]](#our-krylov){reference-type="eqref" reference="our-krylov"} of dimension $k$ corresponding to $(\mu_1,\mu_2^*)$.
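The dominant per-parameter cost is the small tridiagonal solve in [\[solve-tri\]](#solve-tri){reference-type="eqref" reference="solve-tri"}. A Python sketch with a generic tridiagonal $T_k$ (standing in for the matrix produced by the biorthogonalization) and hypothetical parameter values illustrates the $O(k)$ banded solve:

```python
import numpy as np
from scipy.linalg import solve_banded

rng = np.random.default_rng(3)
k, sigma, beta = 30, 0.5, 1.3
# a generic tridiagonal T_k, stored by its three diagonals
diag = rng.standard_normal(k)
sub = rng.standard_normal(k - 1)
sup = rng.standard_normal(k - 1)

def y_k(mu):
    # solve (I_k + (-mu + sigma) T_k) y = beta * e_1 in O(k) flops
    s = sigma - mu
    ab = np.zeros((3, k))          # banded storage for solve_banded
    ab[0, 1:] = s * sup            # superdiagonal
    ab[1, :] = 1.0 + s * diag      # main diagonal of the shifted system
    ab[2, :-1] = s * sub           # subdiagonal
    rhs = np.zeros(k)
    rhs[0] = beta
    return solve_banded((1, 1), ab, rhs)

# dense reference check for one value of mu
T = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
mu = 0.8
e1 = np.zeros(k)
e1[0] = beta
assert np.allclose(y_k(mu), np.linalg.solve(np.eye(k) + (sigma - mu) * T, e1))
```

Since only the scalar shift changes between parameter values, `y_k` can be called for every $\mu_1^i$ at negligible cost compared with building the basis matrices.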
Equivalently, $x_k(\mu_1,\mu_2^*) \approx \hat{x}(\mu_1,\mu_2^*)$, where $\hat{x}(\mu_1,\mu_2^*)$ is the solution to the system [\[cheby-approx\]](#cheby-approx){reference-type="eqref" reference="cheby-approx"}. Note, the subscript on the right-hand side of [\[krylov-approx\]](#krylov-approx){reference-type="eqref" reference="krylov-approx"} denotes the first $n$ elements in the vector, and the preconditioner in [\[krylov-approx\]](#krylov-approx){reference-type="eqref" reference="krylov-approx"} is applied as in [\[apply1\]](#apply1){reference-type="eqref" reference="apply1"}. Since $k \ll n$, solving the tridiagonal system in [\[solve-tri\]](#solve-tri){reference-type="eqref" reference="solve-tri"} is not computationally demanding. Thus, once we build a sufficiently large Krylov subspace via one execution of the main algorithm, we have access to approximations $x_k(\mu_1^i,\mu_2^*)$ for all $\mu_1^i$ in the interval $[a_1,b_1]$. The process of approximating [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} for a fixed $\mu_1^*$ and $\mu_2^j$, for $j=1,\ldots,n_2$, is completely analogous to the above procedure and, thus, we provide just a summary here. A companion linearization is formed for a fixed $\mu_1^*$, which is solved to approximate $x(\mu_1^*,\mu_2^j)$, where $\mu_2^j \in [a_2,b_2]$, for $j=1,\ldots,n_2$. This linearization has the form $$\begin{aligned} \label{linear2} \left( \tilde{K} - \mu_2 \tilde{M} \right) u(\mu_2) = c \end{aligned}$$ and is based on the Chebyshev approximation $$\begin{aligned} \label{cheby-approx2} \tilde{P} (\mu_2) \tilde{x} (\mu_2) = b.\end{aligned}$$ Here $\tilde{P}(\mu_2)$ is as in [\[cheb2\]](#cheb2){reference-type="eqref" reference="cheb2"} and $\tilde{x}(\mu_2) = \tilde{x}(\mu_1^*,\mu_2) \approx x(\mu_1^*,\mu_2)$, where $x(\mu_1^*,\mu_2)$ is as in [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}. 
The linearization [\[linear2\]](#linear2){reference-type="eqref" reference="linear2"} is based on a three-term recurrence as in [\[recurrence\]](#recurrence){reference-type="eqref" reference="recurrence"}, and we consider a shift-and-invert preconditioner of the form $(\tilde{K} - \tilde{\sigma} \tilde{M})^{-1}$. As in [\[apply1\]](#apply1){reference-type="eqref" reference="apply1"} and [\[apply2\]](#apply2){reference-type="eqref" reference="apply2"}, the application of this particular preconditioner and its adjoint each require the solution to a linear system with matrices $\tilde{P}(\tilde{\sigma})$ and $\tilde{P}(\tilde{\sigma})^T$, respectively, and this must be done at each iteration of the Lanczos biorthogonalization. Analogous computations to [\[each-one-needs\]](#each-one-needs){reference-type="eqref" reference="each-one-needs"} must be performed for each $\mu_2^j$ of interest. Thus, using a second application of the method Preconditioned Chebyshev BiCG for parameterized linear systems to solve [\[cheby-approx2\]](#cheby-approx2){reference-type="eqref" reference="cheby-approx2"}, we have access to approximations $x_k(\mu_1^*,\mu_2^j)$ for all $\mu_2^j$ on the interval $[a_2,b_2]$, obtained in a similarly efficient way. **Remark 1** (Choice of target parameter). *The use of shift-and-invert preconditioners with a well-chosen $\hat{\sigma}$ ($\tilde{\sigma}$) generally results in fast convergence, i.e., a few iterations of the shifted BiCG algorithm. This is because $(\hat{K}-\mu_1 \hat{M})(\hat{K}-\hat{\sigma}\hat{M})^{-1} \approx I$, for $\mu_1 \approx \hat{\sigma}$ (and similarly $(\tilde{K}-\mu_2 \tilde{M})(\tilde{K}-\tilde{\sigma}\tilde{M})^{-1} \approx I$, for $\mu_2 \approx \tilde{\sigma}$). The result of this is that, typically, only a few matrix-vector products and linear (triangular) solves of dimension $n \times n$ are required before the algorithm terminates.
Thus, we have accurate approximations to $x(\mu_1^i,\mu_2^*)$, $i=1,\ldots,n_1$ (and $x(\mu_1^*,\mu_2^j)$, $j=1,\ldots,n_2$), obtained cheaply. We refer to $\hat{\sigma}$ and $\tilde{\sigma}$ as target parameters for this reason.* **Remark 2** (Inexact application of the preconditioner). *The LU decomposition of the matrices $\hat{P}(\hat{\sigma})$ and $\tilde{P} (\tilde{\sigma})$ of dimension $n \times n$ can be avoided entirely by considering the inexact version of Preconditioned Chebyshev BiCG for parameterized linear systems, derived and fully analyzed in [@CorrentyEtAl2]. This method applies the preconditioner and its adjoint approximately via iterative methods and is suitable for systems where the dimension $n$ is very large. In practice, the corresponding systems can be solved rather inexactly once the relative residual of the outer method is sufficiently low.* **Remark 3** (Structure of the companion linearization). *Though we are interested in the solution to systems which depend on two parameters, we consider an interpolation in one variable. Interpolations of functions in two variables have been studied; however, the error of such approximations outside of the interpolation nodes tends to be too large for our purposes. Additionally, the recursion in [\[recurrence\]](#recurrence){reference-type="eqref" reference="recurrence"} is essential for the structure of the companion linearizations [\[linear\]](#linear){reference-type="eqref" reference="linear"} and, analogously, in [\[linear2\]](#linear2){reference-type="eqref" reference="linear2"}. In particular, solutions to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} must only appear in the solution vector, and the matrices $\hat{K}$, $\hat{M}$ ($\tilde{K}$, $\tilde{M}$) and right-hand side vector $c$ must be constant with respect to the parameters $\mu_1$ and $\mu_2$.* **Remark 4** (Error introduced by the Chebyshev approximation).
*This work utilizes `Chebfun` [@ChebfunGuide], which computes approximations to the true Chebyshev coefficients. In particular, convergence is achieved as the approximate coefficients decay to zero, and only coefficients greater in magnitude than $10^{-16}$ are used in the approximation. In the examples which follow, we consider only twice continuously differentiable functions for $\hat{f}_k$ and $\tilde{f}_k$ in [\[scalar-functions\]](#scalar-functions){reference-type="eqref" reference="scalar-functions"}. Thus, the error introduced by a Chebyshev approximation is very small.* # Sparse grid based HOPGD {#sec:SparseHOPGD} Let $X (\mu_1,\mu_2) \in \mathbb{R}^{n \times n_1 \times n_2}$ be a sparse three-dimensional matrix of precomputed snapshots. These approximations to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, generated by two executions of the preconditioned Krylov subspace method described in Section [2](#sec:ChebyshevBiCG){reference-type="ref" reference="sec:ChebyshevBiCG"}, correspond to the pairs of parameters $(\mu_1^i,\mu_2^*)$, $i=1,\ldots,n_1$, and $(\mu_1^*,\mu_2^j)$, $j=1,\ldots,n_2$. Here $\mu_1^i$ are in the interval $[a_1,b_1]$, $\mu_2^j$ are in the interval $[a_2,b_2]$, and $\mu_1^* \in [a_1,b_1]$, $\mu_2^* \in [a_2,b_2]$ are fixed values. More precisely, [\[snapshots\]]{#snapshots label="snapshots"} $$\begin{aligned} {2} X(\mu_1^i,\mu_2^*) &= \hat{x}(\mu_1^i,\mu_2^*) \in \mathbb{R}^n, \quad i&&=1,\ldots,n_1, \\ X(\mu_1^*,\mu_2^j) &= \tilde{x}(\mu_1^*,\mu_2^j) \in \mathbb{R}^n,\quad j&&=1,\ldots,n_2,\end{aligned}$$ and the remaining entries of $X(\mu_1,\mu_2)$ are zeros; see Figure [1](#tikz1){reference-type="ref" reference="tikz1"} for a visualization of a tensor of this form. 
Note, $\hat{x}(\mu_1^i,\mu_2^*)=\hat{x} (\mu_1^i)$ are approximations to the linear systems described in [\[cheby-approx\]](#cheby-approx){reference-type="eqref" reference="cheby-approx"} and $\tilde{x}(\mu_1^*,\mu_2^j)=\tilde{x}(\mu_2^j)$ are approximations to the systems in [\[cheby-approx2\]](#cheby-approx2){reference-type="eqref" reference="cheby-approx2"}. We refer to the set $$\begin{aligned} \label{nodes} \bm{\mu} \coloneqq (\mu_1^i, \mu_2^*) \cup (\mu_1^*, \mu_2^j), \end{aligned}$$ for $i = 1,\ldots,n_1$, $j=1,\ldots,n_2$, as the nodes; note that $\bm{\mu} \subset [a_1,b_1] \times [a_2,b_2]$. The set of nodes corresponding to the tensor matrix $X(\mu_1,\mu_2)$ in Figure [1](#tikz1){reference-type="ref" reference="tikz1"} is visualized in Figure [2](#tikz2){reference-type="ref" reference="tikz2"}. Sampling the parameter space in this way mitigates the so-called *curse of dimensionality* in terms of the number of snapshots to generate and store. ![Example of tensor matrix $X(\mu_1,\mu_2)$ consisting of 9 snapshot solutions (dots connected by vertical lines) and reduced order model $X^m(\mu_1,\mu_2)$ of rank $m$ as in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"}, where $\Phi_n^k \in \mathbb{R}^n$, $\bm{f_1^k}\coloneqq\begin{bmatrix} F_1^k(\mu_1^1),\ldots,F_1^k(\mu_1^5)\end{bmatrix} \in \mathbb{R}^5$, $\bm{f_2^k}\coloneqq\begin{bmatrix}F_2^k(\mu_2^1),\ldots,F_2^k(\mu_2^5) \end{bmatrix} \in \mathbb{R}^5$. The approximation is a sum of rank-one tensors.](figures/cp_tensor.pdf){#tikz1} ![Sparse grid sampling in the parameter space. Nodes [\[nodes\]](#nodes){reference-type="eqref" reference="nodes"} correspond to the 9 snapshot solutions in Figure [1](#tikz1){reference-type="ref" reference="tikz1"} with $\mu_1 \in [a_1,b_1]$, $\mu_2 \in [a_2,b_2]$, $\mu_1^* = (a_1+b_1)/2$, $\mu_2^* = (a_2 + b_2)/2$.
All snapshots generated via two executions of Preconditioned Chebyshev BiCG.](figures/cp_tensor2.pdf){#tikz2} Sparse grid based higher order proper generalized decomposition (HOPGD) [@SparseHOPGD] is a method which generates an approximation $X^m(\mu_1,\mu_2) \in \mathbb{R}^{n \times n_1 \times n_2}$ to the tensor matrix $X(\mu_1,\mu_2) \in \mathbb{R}^{n \times n_1 \times n_2}$ [\[snapshots\]](#snapshots){reference-type="eqref" reference="snapshots"}. Specifically, this expression is separated in the parameters $\mu_1$ and $\mu_2$ and is of the form [\[tensor\]]{#tensor label="tensor"} $$\begin{aligned} {2} X (\mu_1^i,\mu_2^*) \approx X^m (\mu_1^i,\mu_2^*) &= \sum_{k=1}^m \Phi_n^k F_1^k (\mu_1^i) F_2^k (\mu_2^*) \in \mathbb{R}^{n},\quad i&&=1,\ldots,n_1, \\ X (\mu_1^*,\mu_2^j) \approx X^m (\mu_1^*,\mu_2^j) &= \sum_{k=1}^m \Phi_n^k F_1^k (\mu_1^*) F_2^k (\mu_2^j) \in \mathbb{R}^{n},\quad j&&=1,\ldots,n_2\end{aligned}$$ at each node in the set [\[nodes\]](#nodes){reference-type="eqref" reference="nodes"}, where $\Phi_n^k \in \mathbb{R}^n$ and $$\begin{aligned} \label{funcs} F_1^k: \mathbb{R} \rightarrow \mathbb{R}, \quad F_2^k: \mathbb{R} \rightarrow \mathbb{R}\end{aligned}$$ are one variable scalar functions of the parameters $\mu_1$ and $\mu_2$, respectively. Here $m$ is the rank of the approximation and $F_{l}^k$, $l=1,2$, are the unknown functions of the $k$th mode. In this way, the original function $X(\mu_1,\mu_2)$, evaluated at the nodes $\bm{\mu}$, is estimated by a linear combination of products of lower-dimensional functions. The decomposition in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} generates only the evaluations of $F_{l}^k$, and the reduced basis vectors for the approximation are not known a priori. Instead, these vectors are determined on-the-fly, in contrast to other methods based on the similar proper orthogonal decomposition [@PODcit; @PGDoverview2014]. 
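The separated structure of [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} is easy to state in code. In the Python sketch below, the modes $\Phi_n^k$ and the functions $F_1^k$, $F_2^k$ are arbitrary stand-ins (the real ones are produced by the decomposition itself), so only the form of the evaluation is illustrated:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 40, 3
Phi = rng.standard_normal((n, m))   # columns are stand-ins for the modes Phi_n^k
F1 = [np.cos, np.sin, np.exp]       # stand-ins for the functions F_1^k
F2 = [np.sin, np.exp, np.cos]       # stand-ins for the functions F_2^k

def model(mu1, mu2):
    # X^m(mu1, mu2) = sum_k Phi_n^k F_1^k(mu1) F_2^k(mu2)
    weights = np.array([F1[k](mu1) * F2[k](mu2) for k in range(m)])
    return Phi @ weights

x = model(0.2, 0.7)
assert x.shape == (n,)
assert np.allclose(x, sum(Phi[:, k] * F1[k](0.2) * F2[k](0.7) for k in range(m)))
```

Each call evaluates $m$ scalar products of parametric functions and one matrix-vector product of size $n \times m$, which is the sense in which the model is "reduced": the cost is independent of the original system solve.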
A visualization of this decomposition on a sparse tensor of snapshots appears in Figure [1](#tikz1){reference-type="ref" reference="tikz1"}. The spacing of the nodes is equidistant in both the $\mu_1$ and $\mu_2$ directions, conventional for methods of this type. We note the similarity between this method and the well-known higher-order singular value decomposition (HOSVD) [@HOSVDcite]. Additionally, the reduced order model generated via the method sparse grid based HOPGD and visualized in Figure [1](#tikz1){reference-type="ref" reference="tikz1"} is of the same form as the established CANDECOMP/PARAFAC (CP) decomposition. Specifically, the approximations consist of a sum of rank-one tensors [@KoldaBader]. The model as expressed in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} can only be evaluated on the set of nodes [\[nodes\]](#nodes){reference-type="eqref" reference="nodes"} corresponding to the snapshots included in the tensor matrix [\[snapshots\]](#snapshots){reference-type="eqref" reference="snapshots"}. To access other approximations, interpolations of $F_l^k$ will be required; see Section [3.2](#sec:interpolation){reference-type="ref" reference="sec:interpolation"}. The particular structure of the sparse grid sampling incorporated here is very similar to that which is used in traditional sparse grid methods for PDEs [@ActaSparse]. Our approach differs from these methods, which are based on an interpolation of high-dimensional functions using a hierarchical basis not considered here. See also [@HierBasisROM] for a method in this style, where a reduced order model was constructed using hierarchical collocation to solve parametric problems. **Remark 5** (Evaluation of the model at a point in space).
*The approximation in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} is expressed using vectors of length $n$, as the examples which follow in Sections [4](#sec:simulations){reference-type="ref" reference="sec:simulations"} and [5](#sec:advec-diff){reference-type="ref" reference="sec:advec-diff"} concern parameterized PDEs discretized on a spatial domain denoted by $\Omega$. In particular, we are interested in the solutions to these PDEs for many values of the parameters $\mu_1$ and $\mu_2$. The approximation [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"}, expressed at a point in space, is denoted by $$\begin{aligned} \label{one-point} X(x_0,\mu_1,\mu_2) \approx X^m (x_0,\mu_1,\mu_2) = \sum_{k=1}^m \Phi^k(x_0) F_1^k (\mu_1) F_2^k (\mu_2) \in \mathbb{R},\end{aligned}$$ for a particular $(\mu_1,\mu_2)\in\bm{\mu}$ [\[nodes\]](#nodes){reference-type="eqref" reference="nodes"}, where $x_0 \in \Omega \subset \mathbb{R}^d$ is a spatial parameter, $d$ is the dimension of the spatial domain, and $\Phi^k(x_0) \in \mathbb{R}$ is an entry of the vector $\Phi_n^k \in\mathbb{R}^n$ in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"}. Specifically, $\Phi_n^k$ denotes $\Phi^k$ evaluated on a set of $n$ spatial discretization points $\Omega_{dp}$, where $\Omega_{dp} \subset \Omega$. As in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"}, to evaluate the model in [\[one-point\]](#one-point){reference-type="eqref" reference="one-point"} for values of $\mu_1$ and $\mu_2$ outside of the set of nodes, interpolations must be performed.* ## Computation of the separated expression Traditionally, tensor decomposition methods require sampling performed on full grids in the parameter space, which entails the computation and storage of many snapshot solutions. See [@HOPGD2; @HOPGD; @HarborAgitation] for HOPGD performed in this setting, as well as an example in Section [4.2](#sec:simulation-full){reference-type="ref" reference="sec:simulation-full"}.
The tensor decomposition proposed in [@SparseHOPGD] is specifically designed for sparse tensors. In particular, the sampling is of the same form considered in Section [2](#sec:ChebyshevBiCG){reference-type="ref" reference="sec:ChebyshevBiCG"} and shown in Figures [1](#tikz1){reference-type="ref" reference="tikz1"} and [2](#tikz2){reference-type="ref" reference="tikz2"}. This approach is more efficient, as it requires fewer snapshots and, as a result, the decomposition is performed in fewer computations. The method in [@SparseHOPGD], adapted to our particular setting, is summarized here, and details on the full derivation appear in Appendix [8](#sec:deriv-sep-ex){reference-type="ref" reference="sec:deriv-sep-ex"}. We seek an approximation to the nonzero vertical columns of $X(\mu_1,\mu_2)$ described in [\[snapshots\]](#snapshots){reference-type="eqref" reference="snapshots"}, separated in the parameters $\mu_1 \in [a_1,b_1]$ and $\mu_2 \in [a_2,b_2]$. The full parameter domain is defined by $\mathcal{D} \coloneqq \Omega_n \times [a_1,b_1] \times [a_2,b_2]$, where $\Omega_n \subset \mathbb{R}^n$. We express solutions as vectors of length $n$, i.e., as approximations to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} evaluated at a particular node $(\mu_1,\mu_2)$; cf. [\[one-point\]](#one-point){reference-type="eqref" reference="one-point"}, where solutions are expressed at a single point in the spatial domain. The approximations satisfy [\[approx-xm\]]{#approx-xm label="approx-xm"} $$\begin{aligned} X(\mu_1,\mu_2) &\approx& X^m(\mu_1,\mu_2), \\ &=& \sum_{k=1}^m \Phi_n^k F_1^k (\mu_1) F_2^k (\mu_2), \label{second-line}\\ &=& X^{m-1}(\mu_1,\mu_2) + \Phi_n^m F_1^m (\mu_1) F_2^m (\mu_2), \label{part-b}\end{aligned}$$ for $(\mu_1,\mu_2) \in \bm{\mu}$, where $\bm{\mu}$ is the set of nodes defined in [\[nodes\]](#nodes){reference-type="eqref" reference="nodes"}, and are generated via an alternating directions strategy. 
Specifically, at each enrichment step $m$, all components $\Phi_n^m \in \mathbb{R}^n$, $F_1^m(\mu_1) \in \mathbb{R}$, $F_2^m(\mu_2)\in \mathbb{R}$ in [\[part-b\]](#part-b){reference-type="eqref" reference="part-b"} are initialized then updated sequentially to construct an $L^2$ projection of $X(\mu_1,\mu_2)$. This is done via a greedy algorithm, where all components are assumed fixed, except the one we seek to compute. The process is equivalent to the minimization problem described as follows. At step $m$, find $X^m(\mu_1,\mu_2) \in V_m \subset L^2(\mathcal{D})$ as in [\[approx-xm\]](#approx-xm){reference-type="eqref" reference="approx-xm"} such that $$\begin{aligned} \label{min-prob} \min_{X^m(\mu_1,\mu_2)} \left( \frac{1}{2} \left\lVert w X(\mu_1,\mu_2) - w X^m(\mu_1,\mu_2)\right\rVert_{L^2 (\mathcal{D})}^2 \right), \end{aligned}$$ where $w$ is the sampling index, i.e., $$\begin{cases} w=1, & (\mu_1,\mu_2) \in \bm{\mu}, \\ w=0, & \text{otherwise}, \end{cases}$$ $\mathcal{D}$ is the full parameter domain, and $V_m$ represents a set of test functions. Equivalently, using the weak formulation, we seek the solution $X^m(\mu_1,\mu_2)$ to $$\begin{aligned} \label{weak1} (wX^m(\mu_1,\mu_2), \delta X)_{\mathcal{D}} = (wX(\mu_1,\mu_2), \delta X)_{\mathcal{D}}, \quad \forall \delta X \in V_m.\end{aligned}$$ Here $(\cdot,\cdot)_{\mathcal{D}}$ denotes the integral of the scalar product over the domain $\mathcal{D}$, and [\[weak1\]](#weak1){reference-type="eqref" reference="weak1"} can be written as $$\begin{aligned} \label{solve} \left( w \Phi_n^m F_1^m F_2^m, \delta X \right)_{\mathcal{D}} = \left( w X(\mu_1,\mu_2) - wX^{m-1}(\mu_1,\mu_2), \delta X \right)_{\mathcal{D}}, \quad \forall \delta X \in V_m;\end{aligned}$$ see details in Appendix [8](#sec:deriv-sep-ex){reference-type="ref" reference="sec:deriv-sep-ex"}. 
We have simplified the notation in [\[solve\]](#solve){reference-type="eqref" reference="solve"} with $F_1^m \coloneqq F_1^m(\mu_1)$ and $F_2^m \coloneqq F_2^m(\mu_2)$ for readability. In practice, approximations to [\[solve\]](#solve){reference-type="eqref" reference="solve"} are computed successively via a least squares procedure until a fixed point is detected. The $m$th test function is given by $\delta X \coloneqq \delta \Phi^m F_1^m F_2^m + \Phi_n^m \delta F_1^m F_2^m + \Phi_n^m F_1^m \delta F_2^m$, and the approximation [\[part-b\]](#part-b){reference-type="eqref" reference="part-b"} is updated with the resulting components. More concretely, let the initializations $\Phi_n^{m}$, $\bm{f_1^m} \coloneqq \begin{bmatrix} F_1^{m}(\mu_1^1),\ldots,F_1^m(\mu_1^{n_1}) \end{bmatrix}\in\mathbb{R}^{n_1}$, $\bm{f_2^m} \coloneqq \begin{bmatrix} F_2^{m}(\mu_2^1),\ldots,F_2^m(\mu_2^{n_2}) \end{bmatrix}\in\mathbb{R}^{n_2}$ be given. We seek first an update of $\Phi_n^{m_\text{old}}\coloneqq\Phi_n^{m}$, assuming $F_1^{m_{\text{old}}} \coloneqq F_1^{m}$, $F_2^{m_{\text{old}}} \coloneqq F_2^{m}$ are known function evaluations for all $(\mu_1,\mu_2) \in \bm{\mu}$. We update $\Phi_n^m$ as $$\begin{aligned} \label{phi-m} \Phi_n^m = \frac{\sum_{\bm{\mu}} r_{m-1}(\mu_1,\mu_2)} {\sum_{\bm{\mu}} F_1^m (\mu_1) F_2^m (\mu_2)},\end{aligned}$$ where the $(m-1)$th residual vector is given by $$\begin{aligned} \label{res-u} r_{m-1}(\mu_1,\mu_2) \coloneqq X(\mu_1,\mu_2) - X^{m-1}(\mu_1,\mu_2) \in \mathbb{R}^n. \end{aligned}$$ Denote the vector obtained in [\[phi-m\]](#phi-m){reference-type="eqref" reference="phi-m"} by $\Phi_n^{m_{\text{new}}}$ and seek an update of $F_1^{m_\text{old}}(\mu_1^i)$, assuming $\Phi_n^m$, $F_2^m(\mu_2^j)$ known, using $$\begin{aligned} \label{sol-f} F_1^m(\mu_1^i) = \frac{(\Phi_n^m)^T r_{m-1}(\mu_1^i,\mu_2^*)}{(\Phi_n^m)^T \Phi_n^m F_2^m(\mu_2^*)}, \end{aligned}$$ for $i=1,\ldots,n_1$. 
We denote the solutions found in [\[sol-f\]](#sol-f){reference-type="eqref" reference="sol-f"} by $F_1^{m_{\text{new}}}(\mu_1^i)$. Note, $\Phi_n^{m_\text{new}}$ and $F_2^{m_\text{old}}(\mu_2^j)$ are used in the computations in [\[sol-f\]](#sol-f){reference-type="eqref" reference="sol-f"}. The updates $F_2^{m}(\mu_2^j)$ are computed as $$\begin{aligned} \label{sol-f2} F_2^m(\mu_2^j) = \frac{(\Phi_n^m)^T r_{m-1}(\mu_1^*,\mu_2^j)}{(\Phi_n^m)^T \Phi_n^m F_1^m(\mu_1^*)}, \end{aligned}$$ for $j=1,\ldots,n_2$, which we denote by $F_2^{m_\text{new}}(\mu_2^j)$, where $\Phi_n^{m_\text{new}}$ and $F_1^{m_{\text{new}}}(\mu_1^i)$ were used in the computations in [\[sol-f2\]](#sol-f2){reference-type="eqref" reference="sol-f2"}. For a given tolerance $\varepsilon_1$, if the relation $$\begin{aligned} \label{convergence1} \left\lVert\delta(\mu_1,\mu_2)\right\rVert < \varepsilon_1\end{aligned}$$ holds for all $(\mu_1,\mu_2) \in \bm{\mu}$, where $$\begin{aligned} \label{delta-mus} \delta (\mu_1,\mu_2) \coloneqq \Phi_n^{m_{\text{old}}} F_1^{m_{\text{old}}}(\mu_1) F_2^{m_{\text{old}}} (\mu_2)- \Phi_n^{m_{\text{new}}} F_1^{m_{\text{new}}}(\mu_1) F_2^{m_{\text{new}}}(\mu_2),\end{aligned}$$ then a fixed point has been reached. In this case, the approximation in [\[part-b\]](#part-b){reference-type="eqref" reference="part-b"} is updated with the components $\Phi_n^m$, $F_1^m(\mu_1^i)$, and $F_2^m(\mu_2^j)$. If the condition [\[convergence1\]](#convergence1){reference-type="eqref" reference="convergence1"} is not met, we set $\Phi_n^{m_{\text{old}}} \coloneqq \Phi_n^{m_{\text{new}}}$, $F_1^{m_{\text{old}}} (\mu_1^i) \coloneqq F_1^{m_{\text{new}}}(\mu_1^i)$, $F_2^{m_{\text{old}}}(\mu_2^j) \coloneqq F_2^{m_{\text{new}}}(\mu_2^j)$ and repeat the process described above.
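A compact way to see the alternating updates is on a full grid of samples, where each component update is the exact least-squares solution with the other two held fixed. The sketch below (a numpy stand-in with illustrative names; it is the full-grid analogue of the pointwise updates above, not the sparse-sampling formulas themselves) fits one enrichment term to a residual tensor $R$:

```python
import numpy as np

def rank1_fixed_point(R, tol=1e-10, max_iter=200):
    """One enrichment step: fit phi * outer(f1, f2) to the residual
    snapshots R (shape n x n1 x n2) by alternating least squares.
    Simplified full-grid analogue of the updates for Phi_n^m,
    F_1^m(mu_1^i), F_2^m(mu_2^j); all names are illustrative."""
    n, n1, n2 = R.shape
    phi, f1, f2 = np.ones(n), np.ones(n1), np.ones(n2)
    for _ in range(max_iter):
        old = np.einsum('i,j,k->ijk', phi, f1, f2)
        # update phi with f1, f2 fixed (least squares over all samples)
        w = np.outer(f1, f2)
        phi = np.einsum('ijk,jk->i', R, w) / (w**2).sum()
        # update f1 with phi, f2 fixed
        f1 = np.einsum('ijk,i,k->j', R, phi, f2) / ((phi @ phi) * (f2 @ f2))
        # update f2 with phi, f1 fixed
        f2 = np.einsum('ijk,i,j->k', R, phi, f1) / ((phi @ phi) * (f1 @ f1))
        # fixed point detected when the rank-1 term stops changing
        if np.abs(old - np.einsum('i,j,k->ijk', phi, f1, f2)).max() < tol:
            break
    return phi, f1, f2
```

On an exactly rank-1 residual the iteration reaches a fixed point after one sweep; in general several sweeps are needed before the enrichment term is accepted.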
If $$\frac{\left\lVert X(\mu_1,\mu_2) - X^{m}(\mu_1,\mu_2)\right\rVert}{\left\lVert X(\mu_1,\mu_2) \right\rVert} < \varepsilon_2$$ is met for a specified $\varepsilon_2$ and all $(\mu_1,\mu_2) \in \bm{\mu}$, the algorithm terminates. Otherwise, we seek $\Phi_n^{m+1}$, $F_1^{m+1}(\mu_1^i)$, $F_2^{m+1}(\mu_2^j)$ using the same procedure, initialized with the most recent updates. This process is summarized in Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}. The most expensive operations in Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"} are the inner products of two vectors of dimension $n$, and, thus, the cost of each step in the tensor decomposition scales linearly with the number of unknowns in [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}. In general, decompositions of many sets of snapshots can be performed to an acceptable accuracy level with $m \ll n$, where the parameter $m$ depends on the regularity of the exact solution [@PGDoverview2014]. Efficiency and robustness have been shown for the standard PGD method generated with a greedy algorithm like the one described here [@PGD2; @HOPGD; @Nouy10]. Note, the accuracy of the approximations here depends strongly on the quality of the separated functions in the model [@SparseHOPGD]. Additionally, the decomposition produced by the standard HOPGD method is optimal when the separation involves two parameters [@HarborAgitation]. The reduced order model described here can be used to approximate solutions to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} for many $(\mu_1, \mu_2)$ outside of the set of nodes. This is achieved through interpolating the one variable functions in [\[funcs\]](#funcs){reference-type="eqref" reference="funcs"}. 
Consider also a solution vector $x (\mu_1,\mu_2)\in \mathbb{R}^n$ to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, where $(\mu_1,\mu_2)$ are unknown. The interpolation of $X^m(\mu_1,\mu_2)$ can be used to estimate the parameters $\mu_1$, $\mu_2$ which produce this particular solution. **Remark 1** (Separated expression in more than two parameters). *The reduced order model [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} is separated in the two parameters $\mu_1$ and $\mu_2$. In [@SparseHOPGD], models separated in as many as six parameters were constructed, where the sampling was performed on a sparse grid in the parameter space. The authors note that this particular approach for the decomposition may not be optimal in the sense of finding the best approximation, though such a decomposition is still possible. Our proposed way of computing the snapshots could be generalized to a setting separated in $s$ parameters. In this way, the procedure would fix $(s-1)$ parameters at a time and execute Chebyshev BiCG a total of $s$ times.* **Remark 1** (Comparison of HOPGD to similar methods). *In [@HarborAgitation], HOPGD was studied alongside the similar HOSVD method. The decompositions were separated in as many as six parameters, and the sampling was performed on a full grid in the parameter space. In general, HOPGD produced separable representations with fewer terms compared with HOSVD. In this way, the model constructed by HOPGD can be evaluated in a more efficient manner.
Furthermore, the method HOPGD does not require the number of terms in the separated solution to be set a priori, as in a CP decomposition.* (Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"} summarizes the procedure: initialize $m=1$ and $X^0(\mu_1^i,\mu_2^j) \coloneqq 0$; at each enrichment step, compute the components by the fixed point iteration above and update the approximation with the new term $\Phi_n^{m} F_1^{m}(\mu_1^i) F_2^{m}(\mu_2^j)$; upon termination, return the interpolation $\bm{X}^m(\mu_1,\mu_2)$ as in [\[int\]](#int){reference-type="eqref" reference="int"}.) ## Interpolated model {#sec:interpolation} The tensor representation in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} consists, in part, of one-dimensional functions $F_1^k$ and $F_2^k$, for $k=1,\ldots,m$. Note, we have only evaluations of these functions at $\mu_1^i$, $i=1,\ldots,n_1$ and $\mu_2^j$, $j=1,\ldots,n_2$, respectively, and no information about these functions outside of these points. In this way, we cannot approximate the solution to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} for all $(\mu_1,\mu_2) \in [a_1,b_1] \times [a_2,b_2]$ using [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} as written. We can use the evaluations $F_1^k(\mu_1^i)$ and $F_2^k(\mu_2^j)$ in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} to compute an interpolation of these one-dimensional functions in a cheap way. In practice, any interpolation is possible and varying the type of interpolation does not contribute significantly to the overall error in the approximation. Thus, we make the following interpolation of the representation in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"}: $$\begin{aligned} \label{int} \bm{X}^m (\mu_1,\mu_2) &= \sum_{k=1}^m \Phi_n^k \bm{F}_1^k (\mu_1) \bm{F}_2^k (\mu_2) \in \mathbb{R}^{n},\end{aligned}$$ where $\bm{F}_1^k$, $\bm{F}_2^k$ are spline interpolations of $F_1^k$, $F_2^k$ in [\[funcs\]](#funcs){reference-type="eqref" reference="funcs"}, respectively, and $\bm{X}^m(\mu_1,\mu_2): [a_1,b_1]\times[a_2,b_2]\rightarrow\mathbb{R}^n$.
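Evaluating [\[int\]](#int){reference-type="eqref" reference="int"} can be sketched as follows; cubic splines stand in for the (unspecified) spline interpolation, and all tabulated data below is synthetic — in the method, the tables $F_1^k(\mu_1^i)$, $F_2^k(\mu_2^j)$ come from the decomposition.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic stand-ins for the decomposition output: modes Phi^k and
# tabulated one-variable functions F_1^k(mu_1^i), F_2^k(mu_2^j).
mu1_nodes = np.linspace(1.0, 2.0, 7)
mu2_nodes = np.linspace(1.0, 2.0, 7)
Phi = [np.array([1.0, -0.5, 0.25]), np.array([0.1, 0.2, 0.3])]  # Phi^k, k=1..m
F1_tab = [np.cos(mu1_nodes), mu1_nodes**2]                      # F_1^k(mu_1^i)
F2_tab = [np.sin(mu2_nodes), mu2_nodes]                         # F_2^k(mu_2^j)

# Replace the tables by splines, as in (int).
F1 = [CubicSpline(mu1_nodes, t) for t in F1_tab]
F2 = [CubicSpline(mu2_nodes, t) for t in F2_tab]

def X_m(mu1, mu2):
    """Evaluate the interpolated reduced order model: only scalar and
    vector operations, so this is cheap even for large n."""
    return sum(phi * f1(mu1) * f2(mu2) for phi, f1, f2 in zip(Phi, F1, F2))
```

At the sampling nodes the splines reproduce the tabulated values exactly, so the interpolated model agrees with [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} there.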
The interpolation in [\[int\]](#int){reference-type="eqref" reference="int"} can be evaluated to approximate [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} for other $(\mu_1, \mu_2) \in [a_1,b_1] \times [a_2,b_2]$ in a cheap way. This approximation to $x(\mu_1, \mu_2)$ is denoted as $$\begin{aligned} \label{approx} x^m (\mu_1,\mu_2) \in \mathbb{R}^n,\end{aligned}$$ and we compute the corresponding relative error as follows: $$\frac{\left\lVert x^m(\mu_1,\mu_2) - x(\mu_1,\mu_2)\right\rVert_2}{\left\lVert x(\mu_1,\mu_2)\right\rVert_2}.$$ Note, simply interpolating several snapshots to estimate solutions to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} for all $(\mu_1,\mu_2)$ in the parameter space is not a suitable approach, as the solutions tend to be extremely sensitive to the parameters [@HarborAgitation]. To solve the parameter estimation problem for a given solution $x(\mu_1,\mu_2) \in \mathbb{R}^n$ which depends on unknown parameters $(\mu_1,\mu_2)$, we use the [Matlab]{.smallcaps} routine `fmincon`, which uses the sequential quadratic programming algorithm [@NumerOptim] to find the pair of values $(\mu_1, \mu_2)$ in the domain which minimizes the quantity $$\begin{aligned} \label{min-prob2} \left\lVert x(\mu_1,\mu_2) - x^m (\mu_1,\mu_2)\right\rVert_2. \end{aligned}$$ Simulations from a parameterized Helmholtz equation and a parameterized advection-diffusion equation appear in Sections [4](#sec:simulations){reference-type="ref" reference="sec:simulations"} and [5](#sec:advec-diff){reference-type="ref" reference="sec:advec-diff"}, respectively. In these experiments, an interpolation of a reduced order model is constructed to approximate the solutions to the parameterized PDEs. **Remark 1** (Offline and online stages).
*In practice, the fixed point method in Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"} can require many iterations until convergence, though the cost of each step scales linearly with the number of unknowns. Residual-based accelerators have been studied to reduce the number of these iterations [@SparseHOPGD]. This strategy, though outside the scope of this work, has been shown to be beneficial in situations where the snapshots depend strongly on the parameters. Only one successful decomposition is required to construct [\[int\]](#int){reference-type="eqref" reference="int"}. Thus, the offline stage of the method consists of generating the snapshots with Chebyshev BiCG and executing Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}.* *Evaluating the reduced order model [\[int\]](#int){reference-type="eqref" reference="int"} requires only scalar and vector operations and, therefore, a variety of approximations to [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"} can be obtained very quickly, even when the number of unknowns is large. We consider this part of the proposed method the online stage. In the experiment corresponding to Figure [\[fig3\]](#fig3){reference-type="ref" reference="fig3"}, evaluating the reduced order model took $0.010601$ CPU seconds. Generating the corresponding finite element solution with backslash in [Matlab]{.smallcaps} took $1.197930$ CPU seconds.* **Remark 1** (Sources of error). *The approximations generated by Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"} contain error from the Chebyshev approximation, the iterative method Preconditioned Chebyshev BiCG, the low-rank approximation, as well as the interpolations in [\[int\]](#int){reference-type="eqref" reference="int"}.
As noted in Remark [Remark 1](#cheb-remark){reference-type="ref" reference="cheb-remark"}, the Chebyshev approximation has a very small impact on the overall error, and Figure [\[fig1b\]](#fig1b){reference-type="ref" reference="fig1b"} shows that the iterative method obtains relatively accurate approximations to the true solutions. In practice, the interpolations performed in [\[int\]](#int){reference-type="eqref" reference="int"} are done on smooth functions, as visualized in Figure [7](#fig-int){reference-type="ref" reference="fig-int"}. The largest source of error from our proposed algorithm stems from the tensor decomposition, i.e., lines $1$--$17$ in Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}. This can be seen, for example, in Figure [9](#fig2){reference-type="ref" reference="fig2"}.* # Numerical simulations of a parameterized Helmholtz equation {#sec:simulations} We consider both a reduced order model for a parameterized Helmholtz equation and a parameter estimation problem for the solution to a parameterized Helmholtz equation. Such settings occur naturally, for example, in the study of geophysics; see [@HarborAgitation; @InverseHelmPar; @GeophysicsCite]. These prior works were also based on a reduced order model, constructed using PGD. Similarly, the method PGD was used in the study of thermal process in [@ThermalProcessPGD], and a reduced basis method for solving parameterized PDEs was considered in [@ReducedBasisMeth]. In the simulations which follow, the matrices $A_i$ arise from a finite element discretization, and $b$ is the corresponding load vector. All matrices and vectors here were generated using the finite element software FEniCS [@Alnaes:2015:FENICS]. 
The solutions to these discretized systems were approximated with a modified version of the Krylov subspace method Preconditioned Chebyshev BiCG [@CorrentyEtAl2], as described in Section [2](#sec:ChebyshevBiCG){reference-type="ref" reference="sec:ChebyshevBiCG"}. This strategy requires a linear solve with a matrix of dimension $n \times n$ on each application of the preconditioner and of its adjoint. We have chosen to perform the linear solve by computing one LU decomposition per execution of the main algorithm, which is reused at each subsequent iteration. This can be avoided by considering the inexact version of Preconditioned Chebyshev BiCG; see Remark [Remark 1](#inexact-rem){reference-type="ref" reference="inexact-rem"}. This work proposes a novel improvement in generating snapshot solutions necessary for constructing models of this type. We choose to include three different examples in this section in order to fully capture the versatility of our strategy. Once the interpolation [\[int\]](#int){reference-type="eqref" reference="int"} has been constructed, evaluating the approximations for many values of the parameters $\mu_1$ and $\mu_2$ can be done in an efficient manner; see Remark [Remark 1](#online-offline-remark){reference-type="ref" reference="online-offline-remark"}. If the tensor decomposition fails to converge sufficiently, a new set of snapshots can be generated with little extra work and a new decomposition can be attempted; see Section [2](#sec:ChebyshevBiCG){reference-type="ref" reference="sec:ChebyshevBiCG"}.
## First simulation, snapshots on a sparse grid {#sec:simulation-1} Consider the Helmholtz equation given by [\[model-prob\]]{#model-prob label="model-prob"} $$\begin{aligned} {2} \nabla^2 u(x) +f(\mu_1,\mu_2) u(x) &= h(x), \quad &&x \in \Omega, \\ u(x) &= 0, \quad &&x \in \partial \Omega,\end{aligned}$$ where $\Omega \subset \mathbb{R}^2$ is as in Figure [\[fig3\]](#fig3){reference-type="ref" reference="fig3"}, $\mu_1 \in [1,2]$, $\mu_2 \in [1,2]$, $f(\mu_1,\mu_2) = 2 \pi^2 + \cos(\mu_1) + \mu_1^4 + \sin(\mu_2) + \mu_2$, and $h(x) = \sin(\pi x_1) \sin(\pi x_2)$. A discretization of [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"} is of the form [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, where $$A(\mu_1,\mu_2) \coloneqq A_0 + f(\mu_1,\mu_2) A_1.$$ We are interested in approximating $u(x)$ in [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"} for many different pairs of $(\mu_1,\mu_2)$ simultaneously. The reduced order model $\bm{X}^m(\mu_1,\mu_2)$ is constructed as described in Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}. We can evaluate this model for $(\mu_1,\mu_2)$ outside of the nodes corresponding to the snapshot solutions. The particular $13$ nodes considered in this simulation are plotted in the parameter space, shown in Figure [3](#fig1){reference-type="ref" reference="fig1"}. Figure [\[fig1b\]](#fig1b){reference-type="ref" reference="fig1b"} displays the convergence of the two executions of Preconditioned Chebyshev BiCG required to generate the snapshots corresponding to the nodes in Figure [3](#fig1){reference-type="ref" reference="fig1"}.
Specifically, in Figure [4](#chebyplot1){reference-type="ref" reference="chebyplot1"}, we see faster convergence for approximations $\hat{x}(\mu_1,\mu_2^*)$ where the value $\mu_1$ is closer to the target parameter $\hat{\sigma}$ in [\[precond\]](#precond){reference-type="eqref" reference="precond"}. An analogous result holds in Figure [5](#chebyplot2){reference-type="ref" reference="chebyplot2"} for approximations $\tilde{x}(\mu_1^*,\mu_2)$ where $\mu_2$ is closer to $\tilde{\sigma}$; see Remark [Remark 1](#remark-near-id){reference-type="ref" reference="remark-near-id"}. We require the LU decompositions of $2$ different matrices of dimension $n \times n$ for this simulation. In Figure [7](#fig-int){reference-type="ref" reference="fig-int"}, we see the interpolations $\bm{F}_1^1(\mu_1)$ and $\bm{F}_2^1(\mu_2)$ as in [\[int\]](#int){reference-type="eqref" reference="int"} plotted along with the function evaluations $F_1^1(\mu_1^i)$ and $F_2^1(\mu_2^j)$ generated by Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}, where $\mu_1^i$ and $\mu_2^j$ are as in Figure [3](#fig1){reference-type="ref" reference="fig1"}. Figure [9](#fig2){reference-type="ref" reference="fig2"} shows the percent relative error for approximations to [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"} for a variety of pairs $(\mu_1,\mu_2)$ different from the nodes plotted in Figure [3](#fig1){reference-type="ref" reference="fig1"}. As in [@HOPGD], we consider approximate solutions with percent relative error below $6 \%$ reliable and approximations with percent relative error below $1.5 \%$ accurate, where the error is computed by comparing the approximation to the solution obtained using backslash in [Matlab]{.smallcaps}. 
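The affine structure $A(\mu_1,\mu_2) = A_0 + f(\mu_1,\mu_2) A_1$ used in this simulation can be sketched as follows; small random matrices stand in for the FEniCS finite element matrices, and a dense direct solve stands in for Preconditioned Chebyshev BiCG (all sizes and names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A0 = rng.standard_normal((n, n)) + n * np.eye(n)  # stand-in stiffness part
A1 = np.diag(rng.uniform(0.5, 1.5, n))            # stand-in mass part
b = rng.standard_normal(n)

def f(mu1, mu2):
    # Parameter function from the first simulation.
    return 2 * np.pi**2 + np.cos(mu1) + mu1**4 + np.sin(mu2) + mu2

def snapshot(mu1, mu2):
    """One snapshot x(mu_1, mu_2) of the parameterized system:
    here a dense direct solve stands in for the Krylov method."""
    return np.linalg.solve(A0 + f(mu1, mu2) * A1, b)
```

Assembling $A(\mu_1,\mu_2)$ for a new parameter pair only rescales $A_1$; this affine dependence is what the Chebyshev approach exploits to return snapshots for a whole interval of parameter values per execution.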
Figures [\[fig3\]](#fig3){reference-type="ref" reference="fig3"}, [\[fig4\]](#fig4){reference-type="ref" reference="fig4"}, and [\[fig5\]](#fig5){reference-type="ref" reference="fig5"} show both the finite element solution and the solution generated from [\[int\]](#int){reference-type="eqref" reference="int"} for the sake of comparison. Here we display these solutions for $3$ pairs of $(\mu_1,\mu_2)$. These approximations all have percent relative error below $1.5 \%$. A variety of solutions can be obtained from this reduced order model. ![Locations of $13$ snapshots $(\mu_1^i,\mu_2^*)$ and $(\mu_1^*,\mu_2^j)$ used to construct $X^m$ in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"} to approximate [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"}, sampling on a sparse grid in the parameter space.](figures/interp_points.pdf){#fig1} ![Approximations to [\[cheby-approx\]](#cheby-approx){reference-type="eqref" reference="cheby-approx"} with target parameter $\hat{\sigma} = 1.48$ as in [\[precond\]](#precond){reference-type="eqref" reference="precond"}, fixed parameter $\mu_2^* = 1.5$.](figures/figure0.pdf){#chebyplot1} ![Approximations to [\[cheby-approx2\]](#cheby-approx2){reference-type="eqref" reference="cheby-approx2"} with target parameter $\tilde{\sigma} = 1.48$, analogous to $\hat{\sigma}$ in [\[precond\]](#precond){reference-type="eqref" reference="precond"}, fixed parameter $\mu_1^*=1.5$.](figures/figure1.pdf){#chebyplot2} ![Function evaluations $F_1^1(\mu_1^i)$ and $F_2^1(\mu_2^j)$ produced by Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"} to approximate [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"}, plotted with corresponding interpolations $\bm{F}_1^1(\mu_1)$ and $\bm{F}_2^1(\mu_2)$ as in [\[int\]](#int){reference-type="eqref" reference="int"} with $\mu_1^i$ and $\mu_2^j$ as in Figure [3](#fig1){reference-type="ref" 
reference="fig1"}.](figures/figure2.pdf){#fig-int} ![Function evaluations $F_1^1(\mu_1^i)$ and $F_2^1(\mu_2^j)$ produced by Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"} to approximate [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"}, plotted with corresponding interpolations $\bm{F}_1^1(\mu_1)$ and $\bm{F}_2^1(\mu_2)$ as in [\[int\]](#int){reference-type="eqref" reference="int"} with $\mu_1^i$ and $\mu_2^j$ as in Figure [3](#fig1){reference-type="ref" reference="fig1"}.](figures/figure3.pdf){#fig-int} ![Percent relative error of $x^m(\mu_1,\mu_2)$ [\[approx\]](#approx){reference-type="eqref" reference="approx"} to approximate [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"} for 400 equidistant pairs of $(\mu_1,\mu_2)$, $m=14$, $n= 112995$. Here accurate: $< 1.5 \%$, reliable: $< 6 \%$. Note, same model in both figures.](figures/relerr2.pdf "fig:"){#fig2} [\[\]]{label=""} ![Percent relative error of $x^m(\mu_1,\mu_2)$ [\[approx\]](#approx){reference-type="eqref" reference="approx"} to approximate [\[model-prob\]](#model-prob){reference-type="eqref" reference="model-prob"} for 400 equidistant pairs of $(\mu_1,\mu_2)$, $m=14$, $n= 112995$. Here accurate: $< 1.5 \%$, reliable: $< 6 \%$. Note, same model in both figures.](figures/relerr2b.pdf "fig:"){#fig2} [\[\]]{label=""} ![Finite element solution](figures/fig3fenics.pdf) ![Reduced order model solution](figures/fig3rom.pdf) ![Finite element solution](figures/fig2fenics.pdf) ![Reduced order model solution](figures/fig2rom.pdf) ![Finite element solution](figures/fig1fenics.pdf) ![Reduced order model solution](figures/fig1rom.pdf) ## Second simulation, snapshots on a full grid {#sec:simulation-full} To obtain an accurate model over the entire parameter space, we consider using HOPGD with sampling performed on a full grid. This is the approach taken in [@HOPGD2; @HOPGD] to approximate parametric PDEs. 
The snapshots are obtained using the Krylov subspace method described in Section [2](#sec:ChebyshevBiCG){reference-type="ref" reference="sec:ChebyshevBiCG"}, and the model is constructed in a way that is completely analogous to the method described in Section [3](#sec:SparseHOPGD){reference-type="ref" reference="sec:SparseHOPGD"}. As the decomposition is performed on a tensor containing more snapshots, the method is more costly. Specifically, we build a reduced order model to approximate the solution to the Helmholtz equation given by [\[helmholtz1\]]{#helmholtz1 label="helmholtz1"} $$\begin{aligned} {2} \left( \nabla^2 + f_1(\mu_1) + f_2(\mu_2) \alpha(x) \right) u(x) &= h(x), \quad &&x \in \Omega, \\ u(x) &= 0, \quad &&x \in \partial \Omega,\end{aligned}$$ where $\Omega \subset \mathbb{R}^2$ is as in Figure [\[fig7\]](#fig7){reference-type="ref" reference="fig7"}, $\alpha(x) = x_2$, $\mu_1 \in [1,2]$, $\mu_2 \in [1,2]$, and $f_1(\mu_1) = \cos(\mu_1) + \mu_1^3$, $f_2(\mu_2) = \sin(\mu_2) + \mu_2^2$, $h(x)=\sin(\pi x_1) \sin(\pi x_2)$. A discretization of [\[helmholtz1\]](#helmholtz1){reference-type="eqref" reference="helmholtz1"} is of the form [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, where $$A(\mu_1,\mu_2) \coloneqq A_0 + f_1(\mu_1) A_1 + f_2 (\mu_2) A_2,$$ and approximating $u(x)$ in [\[helmholtz1\]](#helmholtz1){reference-type="eqref" reference="helmholtz1"} for many different pairs of $(\mu_1,\mu_2)$ is of interest. We can still exploit the structure of the sampling when generating the snapshots by fixing one parameter and returning solutions as a one-variable function of the other parameter. As in the simulations based on sparse grid sampling, if the decomposition fails to reach a certain accuracy level, a new set of snapshots can be generated with little extra computational effort.
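The full-grid sampling can be sketched as follows: each iteration of the outer loop fixes one $\mu_2^*$ and sweeps over all $\mu_1^i$, mirroring one execution of the solver per fixed $\mu_2^*$ (an $11 \times 7$ grid gives the $77$ nodes of this simulation); dense solves on small random stand-in matrices replace the Krylov method here, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A0 = rng.standard_normal((n, n)) + n * np.eye(n)  # stand-ins for the FE matrices
A1 = np.eye(n)
A2 = np.diag(rng.uniform(0.5, 1.5, n))
b = rng.standard_normal(n)
f1 = lambda m: np.cos(m) + m**3
f2 = lambda m: np.sin(m) + m**2

mu1_grid = np.linspace(1, 2, 11)
mu2_grid = np.linspace(1, 2, 7)
snapshots = np.empty((n, len(mu1_grid), len(mu2_grid)))
for j, mu2 in enumerate(mu2_grid):        # one "execution" per fixed mu_2^*
    for i, mu1 in enumerate(mu1_grid):    # sweep mu_1 as a one-variable family
        snapshots[:, i, j] = np.linalg.solve(
            A0 + f1(mu1) * A1 + f2(mu2) * A2, b)
```

The resulting third-order array of snapshots is exactly the input the tensor decomposition operates on in the full-grid case.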
Figure [10](#fig6a){reference-type="ref" reference="fig6a"} shows the locations of the nodes corresponding to $77$ different snapshot solutions used to construct a reduced order model. These snapshots were generated via $7$ executions of a modified form of Preconditioned Chebyshev BiCG, i.e., we consider $7$ different fixed $\mu_2^*$, equally spaced on $[a_2,b_2]$. This requires the LU decompositions of $7$ different matrices of size $n \times n$. Figure [11](#fig6b){reference-type="ref" reference="fig6b"} shows the percent relative error of the interpolated approximation, analogous to [\[int\]](#int){reference-type="eqref" reference="int"} constructed using Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}, for $400$ pairs of $(\mu_1,\mu_2) \in [1,2] \times [1,2]$, different from the nodes corresponding to the snapshot solutions. We note that the percent relative error of the approximation is below $1 \%$ for all pairs $(\mu_1,\mu_2)$, indicating that the approximations are accurate. For the sake of comparison, Figures [\[fig7\]](#fig7){reference-type="ref" reference="fig7"}, [\[fig8\]](#fig8){reference-type="ref" reference="fig8"} and [\[fig9\]](#fig9){reference-type="ref" reference="fig9"} show both the finite element solutions and corresponding reduced order model solutions for $3$ pairs of $(\mu_1,\mu_2)$. In general, accurate solutions can be produced on a larger parameter space with full grid sampling. Furthermore, the model can produce a variety of solutions. 
![$77$ nodes, corresponding snapshots generated via $7$ executions of Preconditioned Chebyshev BiCG, $7$ fixed values $\mu_2^*$.](figures/fullgrid.pdf){#fig6a} ![Percent relative error of interpolation of reduced order model for 400 equidistant pairs of $(\mu_1,\mu_2)$.](figures/fullgridres.pdf){#fig6b} ![Finite element solution](figures/fullfig1fenics.pdf "fig:") [\[\]]{label=""} ![Reduced order model solution](figures/fullfig1rom.pdf "fig:") [\[\]]{label=""} ![Finite element solution](figures/fullfig3fenics.pdf "fig:") [\[\]]{label=""} ![Reduced order model solution](figures/fullfig3rom.pdf "fig:") [\[\]]{label=""} ![Finite element solution](figures/fullfig4fenics.pdf "fig:") [\[\]]{label=""} ![Reduced order model solution](figures/fullfig4rom.pdf "fig:") [\[\]]{label=""} ## Third simulation, parameter estimation with snapshots on a sparse grid {#sec:param-est} We consider now an application of our method to solve a parameter estimation problem. Similar methods have been constructed in the context of parameterized PDEs, for example, [@HOPGD3; @ParEstSR; @InverseHelmPar]. Our approach is analogous to the experiments performed in [@HOPGD3], where several reduced order models were constructed using the method sparse grid based HOPGD [@SparseHOPGD]. Consider the Helmholtz equation given by [\[helmholtz\]]{#helmholtz label="helmholtz"} $$\begin{aligned} {2} \left( \nabla^2 + f_1(\mu_1) + f_2(\mu_2) \alpha(x) \right) u(x) &= h(x), \quad &&x \in \Omega, \\ u(x) &= 0, \quad &&x \in \partial \Omega,\end{aligned}$$ where $\Omega = [0,1] \times [0,1]$, $\alpha(x) = x_1$, $f_1(\mu_1) = \sin^2 (\mu_1)$, $f_2(\mu_2) = \cos^2(\mu_2)$, $\mu_1 \in [0,1]$, $\mu_2 \in [0,1]$, and $h(x)=\exp(-x_1 x_2)$. 
A discretization of [\[helmholtz\]](#helmholtz){reference-type="eqref" reference="helmholtz"} is of the form [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, where $$A(\mu_1,\mu_2) \coloneqq A_0 + f_1(\mu_1) A_1 + f_2 (\mu_2) A_2.$$ We consider a parameter estimation problem, i.e., we have a solution to the discretized problem, where the parameters $(\mu_1,\mu_2)$ are unknown. In particular, the solution is obtained using backslash in [Matlab]{.smallcaps}. Figures [12](#fig4a){reference-type="ref" reference="fig4a"}, [13](#fig4b){reference-type="ref" reference="fig4b"}, and [14](#fig4c){reference-type="ref" reference="fig4c"} together show a method similar to the one proposed in [@HOPGD3] for approximating this problem, performed by constructing $3$ successive HOPGD models with sparse grid sampling. This simulation executes a modified form of Preconditioned Chebyshev BiCG $6$ times, requiring the LU factorizations of $6$ different matrices of dimension $n \times n$ to generate the $39$ snapshot solutions used to create the $3$ reduced order models. Once the models have been constructed, an interpolation is performed, followed by the minimization of [\[min-prob2\]](#min-prob2){reference-type="eqref" reference="min-prob2"}. Table [1](#table:1){reference-type="ref" reference="table:1"} shows the percent relative error in the estimated values of $\mu_1$ and $\mu_2$ for each run. Each successive run refines the sampled region and ultimately leads to a better estimation of the pair of parameters. As before, if the decomposition fails to reach a certain level of accuracy on a set of snapshots, a new set of approximations in the same parameter space can be generated with little extra computational effort.
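The minimization of [\[min-prob2\]](#min-prob2){reference-type="eqref" reference="min-prob2"} can be sketched as follows; a bounded quasi-Newton minimizer from scipy stands in for the [Matlab]{.smallcaps} routine `fmincon`, and the toy function `x_m` below stands in for the interpolated reduced order model (true parameters, bounds, and starting point are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

def x_m(mu):
    """Toy stand-in for the interpolated reduced order model x^m(mu)."""
    mu1, mu2 = mu
    return np.array([np.cos(mu1) + mu2, mu1 * mu2, np.sin(mu2)])

# "Observed" solution with (hidden) true parameters (0.3, 0.7).
x_obs = x_m(np.array([0.3, 0.7]))

# Minimize the (squared) misfit over the parameter box [0,1] x [0,1].
res = minimize(lambda mu: 0.5 * np.sum((x_obs - x_m(mu))**2),
               x0=np.array([0.5, 0.5]),
               bounds=[(0.0, 1.0), (0.0, 1.0)])
mu_est = res.x
```

Because each evaluation of the reduced order model is cheap, the optimizer can afford many function evaluations, which is what makes this estimation loop practical.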
![First run, $m=34$.](figures/figure4.pdf){#fig4a} ![Second run, $m=25$.](figures/figure5.pdf){#fig4b} ![Third run, $m=7$.](figures/figure6.pdf){#fig4c}

                       rel err $\%$ $\mu_1$   rel err $\%$ $\mu_2$
  -------------------- ---------------------- ----------------------
  First run, $m=34$    41.87417               97.40212
  Second run, $m=25$   2.66555                105.09263
  Third run, $m=7$     0.05706                1.69921

  : Percent relative error for parameter estimation in $\mu_1$ and $\mu_2$ for solution to [\[helmholtz\]](#helmholtz){reference-type="eqref" reference="helmholtz"}, $n=10024$.

# Numerical simulations of a parameterized advection-diffusion equation {#sec:advec-diff} The advection-diffusion equation can be used to model particle transport, for example, the distribution of air pollutants in the atmosphere; see [@1d11d082-9fb9-3fdc-a755-55dde241b26e; @Ulfah_2018]. In particular, we consider the parameterized advection-diffusion equation given by $$\begin{aligned} \label{ad-diff} \frac{\partial}{\partial t} u(x,t) = f_1(\mu_1) \frac{\partial^2}{\partial x^2} u(x,t) + f_2(\mu_2) \frac{\partial}{\partial x}u(x,t),\end{aligned}$$ where $f_1(\mu_1) = 1 + \sin(\mu_1)$, $f_2 (\mu_2) = 10 + \cos(\mu_2) + \pi \mu_2$ and $x\in[0,1]$, $t \in [0,T]$. The boundary conditions $u(0,t)=0$, $u(1,t)=0$ are enforced, as well as the initial condition $u_0(x) = u(x,0)=\sin (\pi x)$. Here the function evaluation $f_1(\mu_1)$ is referred to as the diffusion coefficient, and the value $f_2(\mu_2)$ is the advection parameter.
Discretizing [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"} in space with a finite difference scheme and in time with the implicit Euler method gives a parameterized linear system of the form [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}, where $$A(\mu_1,\mu_2) \coloneqq A_0 + f_1(\mu_1) A_1 + f_2(\mu_2) A_2.$$ Note, the solution to this linear system gives an approximation to [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"} at a specific time-step. As in Section [4](#sec:simulations){reference-type="ref" reference="sec:simulations"}, we construct a reduced order model via a tensor decomposition [@SparseHOPGD], where the snapshots are generated efficiently with a modified version of the method Preconditioned Chebyshev BiCG [@CorrentyEtAl2]. Here the sampling is performed on a sparse grid in the parameter space. If the tensor decomposition is not successful on a given set of snapshots, a new set can be generated on the same parameter space with little extra computation. Similar model problems appear in, for example, [@Nouy10], where the method PGD was used to construct approximate solutions to the parameterized PDE. Once the corresponding reduced order model [\[int\]](#int){reference-type="eqref" reference="int"} has been constructed, approximating the solutions for many $(\mu_1,\mu_2)$ can be done very cheaply; see Remark [Remark 1](#online-offline-remark){reference-type="ref" reference="online-offline-remark"}. ## First simulation, snapshots on a sparse grid {#first-simulation-snapshots-on-a-sparse-grid} We construct approximations to [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"} with Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}, where the sparse grid sampling is performed as displayed in Figure [19](#fig11a){reference-type="ref" reference="fig11a"}.
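One implicit Euler step of such a discretization can be sketched as follows; the grid size, time-step, and parameter values are illustrative, and central differences are one possible (assumed) choice for the spatial derivatives.

```python
import numpy as np

# Implicit Euler for u_t = f1(mu1) u_xx + f2(mu2) u_x on [0,1] with
# homogeneous Dirichlet BCs: each step solves
# (I/dt - f1 D2 - f2 D1) u_new = u_old / dt,
# which has the affine form A_0 + f1(mu1) A_1 + f2(mu2) A_2.
n, dt = 99, 1e-4
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)                        # interior grid points
D2 = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2         # second derivative
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
f1 = lambda m: 1 + np.sin(m)                        # diffusion coefficient
f2 = lambda m: 10 + np.cos(m) + np.pi * m           # advection parameter

def euler_step(u, mu1, mu2):
    A = np.eye(n) / dt - f1(mu1) * D2 - f2(mu2) * D1
    return np.linalg.solve(A, u / dt)

u = np.sin(np.pi * x)                               # initial condition
for _ in range(100):                                # advance to t = 0.01
    u = euler_step(u, 0.3, 0.2)
```

Each step is one solve with $A(\mu_1,\mu_2)$, so a snapshot at a fixed time-step is exactly a solution of [\[our-problem\]](#our-problem){reference-type="eqref" reference="our-problem"}.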
Figure [18](#fig10){reference-type="ref" reference="fig10"} shows the percent relative error for approximations to [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"} for $400$ different pairs $(\mu_1,\mu_2) \in [0,0.5]\times[0,0.5]$, where, specifically, we consider the solutions at $t=0.01$. As in [@HOPGD], approximate solutions with percent relative error below $6 \%$ are considered reliable, and approximations with percent relative error below $1.5 \%$ are considered accurate. Here the error is computed by comparing the approximation to the solution obtained using backslash in [Matlab]{.smallcaps}. The $9$ snapshots used to construct the reduced order model here were generated with $2$ executions of Chebyshev BiCG, requiring the LU decompositions of $2$ matrices of dimension $n \times n$. Note, solutions to the advection-diffusion equation [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"} are time-dependent. An all-at-once procedure could be utilized to approximate solutions to this equation at many different time-steps, though this approach would result in a larger number of unknowns and a longer simulation time. Alternatively, the decomposition could be modified to construct an approximation similar to the one in [\[tensor\]](#tensor){reference-type="eqref" reference="tensor"}, separated in the time variable $t$ as well as the parameters $\mu_1$ and $\mu_2$. Such an approach would, however, require specific testing. Furthermore, a decomposition of this form performed with HOPGD and sparse grid sampling in the parameter space may not be optimal [@SparseHOPGD]. ![Percent relative error of $x^m(\mu_1,\mu_2)$ [\[approx\]](#approx){reference-type="eqref" reference="approx"} to approximate [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"} for $400$ pairs of $(\mu_1,\mu_2)$, $m=50$, $n=9999$. Here accurate: $<1.5 \%$, reliable $<6 \%$.
Note, same model in both figures.](figures/melina_model1.pdf "fig:"){#fig10} [\[fig10a\]]{#fig10a label="fig10a"} ![Percent relative error of $x^m(\mu_1,\mu_2)$ [\[approx\]](#approx){reference-type="eqref" reference="approx"} to approximate [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"} for $400$ pairs of $(\mu_1,\mu_2)$, $m=50$, $n=9999$. Here accurate: $<1.5 \%$, reliable $<6 \%$. Note, same model in both figures.](figures/melina_model2.pdf "fig:"){#fig10} [\[fig10b\]]{#fig10b label="fig10b"}

## Second simulation, parameter estimation with snapshots on a sparse grid

Consider again a parameter estimation problem. Specifically, we have an approximation to $u(x,t) \in \mathbb{R}^n$ in [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"}, where $t=0.01$ is known, but the parameters $\mu_1$ and $\mu_2$ are not. This approximation corresponds to the solution to the discretized problem and is obtained using backslash in [Matlab]{.smallcaps}. Additionally, there is some uncertainty in the measurement of the given solution. In this way, we express our observed solution $u^{\text{obs}} \in \mathbb{R}^n$ as $$\begin{aligned} \label{obs-solution-with-error} u^{\text{obs}} \coloneqq u(x,t) + \varepsilon \Delta, \end{aligned}$$ where $\Delta \in \mathbb{R}^n$ is a random vector with $\Delta \sim \mathcal{N}(0,I)$ and $\varepsilon = 10^{-2}$. Figure [\[fig11\]](#fig11){reference-type="ref" reference="fig11"} and Table [2](#table:2){reference-type="ref" reference="table:2"} show the result of this parameter estimation problem, using a similar strategy to the one described in [@HOPGD3]. Here $3$ successive HOPGD models are constructed from snapshots sampled on a sparse grid in the parameter space $(\mu_1,\mu_2) \in [0,0.5]\times[0,0.5]$. This simulation executes a modified version of Chebyshev BiCG $6$ times, generating the $23$ snapshots efficiently. Note, this requires the LU decompositions of $6$ matrices of dimension $n \times n$.
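The essence of such a parameter estimation can be sketched in a few lines. The sketch below replaces the successive HOPGD models by a brute-force grid search, and the hypothetical `solve_model` is a small direct solver standing in for the reduced order model; the discretization inside it is also an illustrative assumption, not the paper's scheme.

```python
import numpy as np

def solve_model(mu1, mu2, n=60, T=0.01, steps=5):
    """Direct solve standing in for the reduced order model x^m(mu1, mu2)
    (illustrative finite-difference / implicit-Euler discretization)."""
    f1 = 1.0 + np.sin(mu1)
    f2 = 10.0 + np.cos(mu2) + np.pi * mu2
    h = 1.0 / (n + 1)
    dt = T / steps
    x = np.linspace(h, 1.0 - h, n)
    A1 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    A2 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
    u = np.sin(np.pi * x)
    M = np.eye(n) - dt * (f1 * A1 + f2 * A2)
    for _ in range(steps):
        u = np.linalg.solve(M, u)
    return u

def estimate_parameters(u_obs, grid):
    """Return the (mu1, mu2) pair on the grid minimizing the misfit to u_obs."""
    return min(((m1, m2) for m1 in grid for m2 in grid),
               key=lambda p: np.linalg.norm(solve_model(*p) - u_obs))

# noisy observation u_obs = u + eps * Delta, as in the text
rng = np.random.default_rng(0)
u_obs = solve_model(0.30, 0.12) + 1e-2 * rng.standard_normal(60)
est = estimate_parameters(u_obs, np.linspace(0.0, 0.5, 26))
```

With noise of size $\varepsilon = 10^{-2}$ the recovered pair is only approximate, which mirrors the observation below that the achievable parameter accuracy is tied to the noise level.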
After the third run of Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"}, the estimated parameters are very close to the actual parameters. Note, the relative error in $\mu_1$ and $\mu_2$ after the third run of the simulation is of the same order of magnitude as the noise in the observed solution [\[obs-solution-with-error\]](#obs-solution-with-error){reference-type="eqref" reference="obs-solution-with-error"}.

![First run, $m=50$.](figures/figure7.pdf){#fig11a} ![Second run, $m=25$.](figures/figure8.pdf){#fig11b} ![Third run, $m=15$.](figures/figure9.pdf){#fig11c}

                       rel err $\%$ $\mu_1$   rel err $\%$ $\mu_2$
  -------------------- ---------------------- ----------------------
  First run, $m=50$    1.9004                 16.1248
  Second run, $m=25$   4.8542                 23.2504
  Third run, $m=15$    1.2313                 4.2588

  : Percent relative error for parameter estimation in $\mu_1$ and $\mu_2$ for the noisy solution to [\[ad-diff\]](#ad-diff){reference-type="eqref" reference="ad-diff"}, $n=9999$.

# Conclusions and future work

This work proposes a novel way to generate the snapshot solutions required to build a reduced order model. The model is based on an efficient decomposition of a snapshot tensor, where the sampling is performed on a sparse grid. An adaptation of the previously proposed method Preconditioned Chebyshev BiCG is used to generate many snapshots simultaneously. Tensor decompositions may fail to reach a certain level of accuracy on a given set of snapshot solutions. Our approach offers a way to generate a new set of snapshots on the same parameter space with little extra computation. This is advantageous, as it is not possible to know a priori if a decomposition will converge or not on a given set of snapshots, and generating snapshots is computationally demanding in general. The reduced order model can also be used to solve a parameter estimation problem. Numerical simulations show competitive results.
In [@SparseHOPGD], a residual-based accelerator was used in order to decrease the number of iterations required by the fixed point method when computing the updates described in [\[phi-m\]](#phi-m){reference-type="eqref" reference="phi-m"}, [\[sol-f\]](#sol-f){reference-type="eqref" reference="sol-f"}, and [\[sol-f2\]](#sol-f2){reference-type="eqref" reference="sol-f2"}. Techniques of this variety are especially effective when the snapshot solutions have a strong dependence on the parameters. Such a strategy could be directly incorporated into this work, though specific testing of the effectiveness would be required. # Acknowledgements The Erasmus programme of the European Union funded the first author's extended visit to Trinity College Dublin to carry out this research. The authors thank Elias Jarlebring (KTH Royal Institute of Technology) for fruitful discussions and for providing feedback on the manuscript. Additionally, the authors wish to thank Anna-Karin Tornberg (KTH Royal Institute of Technology) and her research group for many supportive, constructive discussions. # Derivation of the update formulas, Algorithm [\[alg:ChebyshevHOPGD\]](#alg:ChebyshevHOPGD){reference-type="ref" reference="alg:ChebyshevHOPGD"} {#sec:deriv-sep-ex} We are interested in the approximation $X^m(\mu_1,\mu_2)$ as in [\[approx-xm\]](#approx-xm){reference-type="eqref" reference="approx-xm"}, such that [\[min-prob\]](#min-prob){reference-type="eqref" reference="min-prob"} is satisfied. Here an alternating directions algorithm is used, where $X^{m-1} (\mu_1,\mu_2)$ as in [\[part-b\]](#part-b){reference-type="eqref" reference="part-b"} is given, and we assume, after an initialization, $F_1^m\coloneqq F_1^m(\mu_1)$, $F_2^m\coloneqq F_2^m(\mu_2)$ are known. Equivalently, we seek the update $\Phi_n^m$ in [\[solve\]](#solve){reference-type="eqref" reference="solve"} via a least squares procedure. 
The left-hand side of [\[solve\]](#solve){reference-type="eqref" reference="solve"} can be expressed equivalently as $$\begin{aligned} \label{part1} \left( w \Phi_n^m F_1^m F_2^m, \delta X \right)_{\mathcal{D}} = w \Phi_n^m (F_1^m)^2 (F_2^m)^2 \int_{\mathcal{D}} \delta \Phi^m (x) \mathop{dx} \mathop{d\mu_1} \mathop{d\mu_2,} \end{aligned}$$ and the right-hand side can be written as $$\begin{aligned} \label{part2} \left( w r_{m-1}(\mu_1,\mu_2), \delta X \right)_{\mathcal{D}} = w r_{m-1}(\mu_1,\mu_2) F_1^m F_2^m \int_{\mathcal{D}} \delta \Phi^m (x) \mathop{dx} \mathop{d\mu_1} \mathop{d\mu_2,}\end{aligned}$$ with $r_{m-1}$ [\[res-u\]](#res-u){reference-type="eqref" reference="res-u"}. Thus, from equating [\[part1\]](#part1){reference-type="eqref" reference="part1"} and [\[part2\]](#part2){reference-type="eqref" reference="part2"}, we obtain the overdetermined system $$\begin{aligned} \label{relation} \begin{bmatrix} F_1^m(\mu_1^1) F_2^m(\mu_2^*) I_n \\ \vdots \\ F_1^m(\mu_1^*) F_2^m(\mu_2^{n_2}) I_n \end{bmatrix} \Phi^m_n = \begin{bmatrix} r_{m-1}(\mu_1^1,\mu_2^*) \\ \vdots \\ r_{m-1}(\mu_1^*,\mu_2^{n_2}) \end{bmatrix},\end{aligned}$$ where $I_n \in \mathbb{R}^{n \times n}$ is the identity matrix of dimension $n$, and the update for $\Phi_n^m$ described in [\[phi-m\]](#phi-m){reference-type="eqref" reference="phi-m"} is determined via the solution to the corresponding normal equations. Note, the linear system [\[relation\]](#relation){reference-type="eqref" reference="relation"} contains function evaluations corresponding to all the nodes in $\bm{\mu}$ [\[nodes\]](#nodes){reference-type="eqref" reference="nodes"}. Assume now $\Phi_n^m$, $F_2^m$ in [\[part-b\]](#part-b){reference-type="eqref" reference="part-b"} are known, and seek an update of $F_1^m$. 
Rewriting the left-hand side of [\[solve\]](#solve){reference-type="eqref" reference="solve"} yields $$\begin{aligned} \label{two-a} \left( w \Phi_n^m F_1^m F_2^m, \delta X \right)_{\mathcal{D}} = w (\Phi_n^m)^T (\Phi_n^m) (F_1^m) (F_2^m)^2 \int_{\mathcal{D}} \delta F_1^m (\mu_1) \mathop{dx} \mathop{d\mu_1} \mathop{d\mu_2,}\end{aligned}$$ and the right-hand side is given by $$\begin{aligned} \label{two-b} \left( w r_{m-1}(\mu_1,\mu_2), \delta X \right)_{\mathcal{D}} = w (r_{m-1}(\mu_1,\mu_2))^T (\Phi_n^m) F_2^m \int_{\mathcal{D}} \delta F_1^m (\mu_1) \mathop{dx} \mathop{d\mu_1} \mathop{d\mu_2}.\end{aligned}$$ Approximations to $F_1^m$ in [\[solve\]](#solve){reference-type="eqref" reference="solve"} are found by computing the least squares solutions to the overdetermined systems $$\begin{aligned} \begin{bmatrix} \Phi_n^m F_2^m(\mu_2^*) \end{bmatrix} F_1^m(\mu_1^i) = \begin{bmatrix} r_{m-1}(\mu_1^i,\mu_2^*) \end{bmatrix}, \end{aligned}$$ given in [\[sol-f\]](#sol-f){reference-type="eqref" reference="sol-f"}. Proceeding analogously to [\[two-a\]](#two-a){reference-type="eqref" reference="two-a"} and [\[two-b\]](#two-b){reference-type="eqref" reference="two-b"}, updating $F_2^{m}$ yields the overdetermined systems $$\begin{aligned} \begin{bmatrix} \Phi_n^m F_1^m(\mu_1^*) \end{bmatrix} F_2^m(\mu_2^j) = \begin{bmatrix} r_{m-1}(\mu_1^*,\mu_2^j) \end{bmatrix}, \end{aligned}$$ with least squares solutions [\[sol-f2\]](#sol-f2){reference-type="eqref" reference="sol-f2"}. In practice, each of the vectors depicted in the approximation in Figure [1](#tikz1){reference-type="ref" reference="tikz1"} is normalized. A constant is computed for each low-rank update, in a process analogous to the one described above. We leave this out of the derivation for the sake of brevity. This is consistent with the algorithm derived in [@SparseHOPGD].
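The structure of these alternating least-squares updates can be condensed into a small fixed-point loop. The sketch below works on a full grid of residual snapshots (the paper samples a sparse cross with fixed $\mu_1^\ast$, $\mu_2^\ast$) and omits the weights and the normalization constants, so it illustrates the shape of the updates rather than the exact algorithm.

```python
import numpy as np

def rank_one_als(R, n1, n2, iters=50, tol=1e-10):
    """One rank-one enrichment step of an HOPGD-style decomposition.

    R: array of shape (n, n1*n2) whose column (i + n1*j) holds the residual
    snapshot r_{m-1}(mu1^i, mu2^j). Returns Phi (n,), F1 (n1,), F2 (n2,)
    with R[:, i + n1*j] ~ Phi * F1[i] * F2[j]. Full-grid simplification of
    the sparse-cross updates derived above."""
    n = R.shape[0]
    T = R.reshape(n, n1, n2, order='F')   # unfold snapshots into a 3-way tensor
    F1, F2 = np.ones(n1), np.ones(n2)
    Phi = np.zeros(n)
    for _ in range(iters):
        Phi_old = Phi
        # each sweep solves the normal equations of a linear least-squares problem
        Phi = np.einsum('xij,i,j->x', T, F1, F2) / ((F1 @ F1) * (F2 @ F2))
        F1 = np.einsum('xij,x,j->i', T, Phi, F2) / ((Phi @ Phi) * (F2 @ F2))
        F2 = np.einsum('xij,x,i->j', T, Phi, F1) / ((Phi @ Phi) * (F1 @ F1))
        if np.linalg.norm(Phi - Phi_old) <= tol * np.linalg.norm(Phi):
            break
    return Phi, F1, F2
```

For exactly rank-one data a single sweep already reproduces the snapshots, which is why the fixed-point iteration typically converges in few steps when the residual is close to separable.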
[^1]: Department of Mathematics, Royal Institute of Technology (KTH), SeRC Swedish e-Science Research Center, Lindstedtsvägen 25, Stockholm, Sweden `correnty@kth.se` [^2]: Institute of Mathematics, University of Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany `melina.freitag@uni-potsdam.de` [^3]: School of Mathematics, Trinity College Dublin, The University of Dublin, College Green, Dublin 2, Ireland `ksoodha@maths.tcd.ie` [^4]: `https://github.com/siobhanie/ChebyshevHOPGD`
{ "id": "2309.14178", "title": "Sparse grid based Chebyshev HOPGD for parameterized linear systems", "authors": "Siobh\\'an Correnty, Melina A. Freitag, and Kirk M. Soodhalter", "categories": "math.NA cs.NA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Seismic travel time tomography is a geophysical imaging method to infer the 3-D interior structure of the solid Earth. Most commonly formulated as a linear(ized) inverse problem, it maps differences between observed and expected wave travel times to interior regions where waves propagate faster or slower than the expected average. The Earth's interior is typically parametrized by a single kind of localized basis function. Here we present an alternative approach that uses matching pursuits on large dictionaries of basis functions.\ Within the past decade, the (Learning) Inverse Problem Matching Pursuits ((L)IPMPs) have been developed. They combine global and local trial functions. An approximation is built in a so-called best basis, chosen iteratively from an intentionally overcomplete set or dictionary. In each iteration, the choice for the next best basis element reduces the Tikhonov--Phillips functional. This is in contrast to classical methods that use either global or local basis functions. The LIPMPs have proven their applicability in inverse problems like the downward continuation of the gravitational potential as well as the MEG-/EEG-problem from medical imaging.\ Here, we remodel the Learning Regularized Functional Matching Pursuit (LRFMP), which is one of the LIPMPs, for travel time tomography in a ray theoretical setting. In particular, we introduce the operator, some possible trial functions and the regularization. We show a numerical proof of concept for artificial travel time delays obtained from a contrived model for velocity differences. The corresponding code is available at [<https://doi.org/10.5281/zenodo.8227888>]{style="color: blue"} under the licence CC-BY-NC-SA 3.0 DE. author: - "N. Schneider$^\\ast$, V. Michel[^1], K. Sigloch[^2] and E. J.
Totten[^3]" bibliography: - biblioneu.bib title: | A matching pursuit approach\ to the geophysical inverse problem\ of seismic travel time tomography\ under the ray theory approximation ---

#### Keywords

inverse problems, travel time tomography, seismology, matching pursuits, numerical modelling

#### MSC(2020)

*41A45, 45Q05, 65D15, 65J20, 65R32, 68T05, 86-10, 86A15, 86A22*

#### Acknowledgments

The authors gratefully acknowledge the financial support by the German Research Foundation (DFG; Deutsche Forschungsgemeinschaft), project MI 655/14-1. K. Sigloch was supported by the French government through the UCAJEDI Investments in the Future project, reference number ANR-15-IDEX-01. We thank Maria Tsekhmistrenko, PhD, who owns the relevant DETOX code on GitHub, and Afsaneh Mohammadzaheri, who assisted E.J. Totten with some data during the Corona pandemic. Last but not least, we are grateful for the use of the HPC clusters Horus and Omni maintained by the ZIMT of the University of Siegen for our numerical results.

# Introduction {#sect:intro}

Seismic travel time tomography serves to infer the 3-D interior structure of the solid Earth. For this, it measures the characteristics of seismic waves that propagate through the Earth's deep interior, between sources (earthquakes) and receivers (seismometers) that are located at or near the surface. The most widely practised approach solves a linearized inverse problem, where measured travel times of waves propagating through the real, heterogeneous interior deviate moderately from forward-modelled travel times through a spherically symmetric reference Earth model. This reference velocity model is updated with moderate 3-D variations, in a linear inversion step that attributes travel time anomalies (observed minus predicted) to discrete regions where wave velocities must be moderately faster or slower than in the reference model.
The quantitative connection is made by means of efficiently computed sensitivity kernels, see [@AkiRichards2009; @Abel2007; @BenMenahemSingh2000; @DahlenTromp1998; @Dahlenetal2000; @Nolet2008]. In particular, a **s**pherically symmetric, **n**on-**r**otating, **e**lastic and **i**sotropic spherical shell (SNREI) model like the IASP91, see [@Kennettetal1990], is usually used. When comparing modelled travel times with true measurements, we expect a travel time difference caused by anomalies within the Earth. Specifically, for total body-wave travel times of hundreds to 1000 s, the unexplained travel time delay is typically fractions of a second to several seconds. Travel time tomography aims to approximate these deviations in the velocity field from the corresponding delays. Thus, travel time tomography is an inverse problem. Unfortunately, there are not many theoretical results known about it. In particular, for practical purposes, we are lacking a singular value decomposition. Moreover, it is known that, in practice, the inverse problem is ill-posed because, e.g., the solution is non-unique. For details, see, e.g., [@Abel2007]. Nonetheless, it is approached either with an infinite-frequency strategy or with a more accurate, but also more demanding, finite-frequency strategy, see [@DahlenTromp1998; @Hosseinietal2020; @Marqueringetal1998; @Marqueringetal1999; @Nolet2008; @Sigloch2008; @Tian2007; @Tian2007-2; @Yomogida1992]. As can be seen at [<https://www.earth.ox.ac.uk/~smachine/cgi/index.php>]{style="color: blue"}, currently there exists a range of detailed interior Earth velocity deviation models instead of one standard model. Taking a closer look at these models raises a few questions: tetrahedra, voxels or spherical harmonics are mostly chosen for computational convenience and not due to a physical motivation. Moreover, mantle anomalies are a-priori expected to be wider in horizontal dimensions than in the vertical dimension.
This puts into question not only the type of the chosen basis but also its specific geometry. Further, the characteristics of the structure of the Earth are still unknown. Hence, it might be better to restrict the basis functions as little as possible and let the data itself choose suitable functions. This is especially interesting since some areas of the mantle are very well illuminated, others very poorly. However, we do not expect fundamental changes in convection style in different places due to the general viscosity of the mantle. Hence, unsampled blanks might be filled more physically plausibly if the approximation method is free to choose types of basis functions taking into account the whole data distribution at once. This motivated us to apply a different type of approximation method for inverse problems to travel time tomography, which appears suitable to tackle these challenges of parameterization, underdeterminedness and regularization. Here, we propose to use the Learning Regularized Functional Matching Pursuit (LRFMP), which is one of the (Learning) Inverse Problem Matching Pursuit ((L)IPMP) algorithms. The LRFMP is the realization of the RFMP with a learning add-on. The RFMP algorithm iteratively approximates the solution of an inverse problem by minimizing the corresponding Tikhonov--Phillips functional. The unique characteristic of the (L)IPMPs, and thus of the RFMP, is that the obtained approximation is given in a so-called best basis of dictionary elements. The dictionary is an intentionally redundant set of diverse trial functions. Here, we consider orthogonal polynomials and linear tesseroid-based finite element hat functions (FEHFs). The best basis is usually also made of all types of trial functions given in the dictionary. The learning add-on enables the method to choose the best basis out of infinitely many trial functions.
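To make the greedy idea concrete, the following is a schematic, fully discrete sketch of an RFMP-type iteration: in each step, the dictionary element and its coefficient are chosen to maximally reduce the Tikhonov--Phillips functional. The discrete operator, the plain $\ell^2$ penalty, and the fixed finite dictionary are simplifications for illustration; the paper's LRFMP additionally learns dictionary elements from infinitely many candidates and uses a Sobolev-type regularization.

```python
import numpy as np

def rfmp(T, y, D, lam, iters=100):
    """Schematic RFMP-type greedy iteration (simplified).

    T: (ell, p) discretized forward operator, y: (ell,) data,
    D: (p, K) dictionary columns d_k, lam > 0: regularization parameter.
    Greedily builds f so that ||T f - y||^2 + lam ||f||^2 decreases
    monotonically in every iteration."""
    f = np.zeros(T.shape[1])
    R = y.copy()                      # data residual y - T f
    TD = T @ D                        # precompute operator images of the dictionary
    for _ in range(iters):
        # decrease of the Tikhonov--Phillips functional for atom k is num[k]^2/den[k]
        num = TD.T @ R - lam * (D.T @ f)
        den = np.einsum('ij,ij->j', TD, TD) + lam * np.einsum('ij,ij->j', D, D)
        k = np.argmax(num**2 / den)   # best basis element of this iteration
        alpha = num[k] / den[k]       # optimal coefficient for that element
        f += alpha * D[:, k]
        R -= alpha * TD[:, k]
    return f
```

The selection rule follows from minimizing $\|R_n - \alpha T d\|^2 + \lambda\|f_n + \alpha d\|^2$ over $\alpha$ for each candidate $d$, which is what makes the iteration a pursuit of a best basis rather than a projection onto a fixed one.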
More details on the methods can be found in [@Berkeletal2011; @Fischer2011; @Fischeretal2012; @Fischeretal2013-1; @Fischeretal2013-2; @Guttingetal2017; @Kontak2018; @Kontaketal2018-2; @Kontaketal2018-1; @Michel2015-2; @Micheletal2017-1; @Micheletal2018-1; @Micheletal2014; @Micheletal2016-1; @Leweke2018; @Prakashetal2020; @Schneider2020; @Schneideretal2022; @Telschow2014; @Telschowetal2018]. These publications also show that these methods have already been successfully applied to inverse problems in geodesy and medical imaging as well as for normal mode tomography. Here, we will also briefly introduce them in [4](#sect:ipmps){reference-type="ref" reference="sect:ipmps"}, with an emphasis on the adjustments made for travel time tomography. Regarding the theory, a focus is set on the derivation of certain gradients and inner products needed for this adjustment given in [3.3](#ssect:tfcs:reg){reference-type="ref" reference="ssect:tfcs:reg"}. These computations are provided as supplementary material in the appendix of this paper. We demonstrate this new method on inversions for synthetic (i.e., invented) whole-mantle test structures, using realistic sampling geometries, i.e. actual earthquake-to-receiver paths that have yielded high-quality travel time data for tomography in the ISC-EHB catalogue. For the latter, see, e.g., [@ISC-EHB; @Westonetal2018]. In [2](#sect:problem){reference-type="ref" reference="sect:problem"}, we introduce the mathematical inverse problem of travel time tomography. Afterwards, we present our choices of dictionary elements in [3](#sect:tfcs){reference-type="ref" reference="sect:tfcs"}. Further, we compute regularization terms related to a Sobolev space of these trial functions. Next, we introduce the LRFMP in [4](#sect:ipmps){reference-type="ref" reference="sect:ipmps"} and show a first proof of concept in [5](#sect:numerics){reference-type="ref" reference="sect:numerics"}.
In [8](#sect:app){reference-type="ref" reference="sect:app"}, the interested reader will find more details on certain mathematical derivations regarding the trial functions and the methods. The corresponding code is available at [<https://doi.org/10.5281/zenodo.8227888>]{style="color: blue"} under the licence CC-BY-NC-SA 3.0 DE.

## Notation {#ssect:intro:nota}

The set of positive integers is denoted by $\mathbb{N}$. If we include 0, we use $\mathbb{N}_0$. For integers, we write $\mathbb{Z}$. For the set of real numbers, we use $\mathbb{R}$. The $d$-dimensional Euclidean space is called $\mathbb{R}^d$. Only positive real numbers are collected in $\mathbb{R}_+$. For $x\in\mathbb{R}^3$, we can use the common spherical coordinate transformation $$\begin{aligned} x(r,\varphi,\theta) = \left( \begin{matrix} r\sqrt{1-t^2}\cos(\varphi)\\ r\sqrt{1-t^2}\sin(\varphi)\\ rt \end{matrix} \right)\end{aligned}$$ for the radius $r\in\mathbb{R}_+$, the longitude $\varphi\in [0,2\pi[$ and the colatitude $\theta\in [0,\pi]$, which yields the polar distance $t = \cos(\theta) \in [-1,1]$. A radius is denoted by $\mathbf{R}\in \mathbb{R}_+$ and the corresponding ball with radius $\mathbf{R}$ is defined by $\mathbb{B}_\mathbf{R}\coloneqq \{x \in \mathbb{R}^3\ \colon |x|\leq \mathbf{R}\}$. If $\mathbf{R}= 1$, we also abbreviate $\mathbb{B}\coloneqq \mathbb{B}_1$.
Thus, a point $\xi(\varphi,t) \in \partial \mathbb{B}\subset \mathbb{R}^3$ is given by $$\begin{aligned} \xi(\varphi,t) = \left( \begin{matrix} \sqrt{1-t^2}\cos(\varphi)\\ \sqrt{1-t^2}\sin(\varphi)\\ t \end{matrix} \right).\end{aligned}$$ Moreover, a well-known local orthonormal basis in $\mathbb{R}^3$ is defined by the vectors $$\begin{aligned} \varepsilon^r(\varphi,t) = \left( \begin{matrix} \sqrt{1-t^2}\cos(\varphi)\\ \sqrt{1-t^2}\sin(\varphi)\\ t \end{matrix} \right), \ \varepsilon^\varphi(\varphi,t) = \left( \begin{matrix} -\sin(\varphi)\\ \phantom{-}\cos(\varphi)\\ 0 \end{matrix} \right), \ \varepsilon^t(\varphi,t) = \left( \begin{matrix} -t\cos(\varphi)\\ -t\sin(\varphi)\\ \sqrt{1-t^2} \end{matrix} \right),\end{aligned}$$ see, for instance, [@Michel2013]. # Seismic travel time tomography {#sect:problem} Seismic travel time tomography estimates the propagation velocity of seismic (P-)waves as a function of spatial location inside the solid Earth. The sensitivity of travel time measurements to Earth structures along the wave propagation path is volumetrically extended in practice, but can often be reasonably approximated as an (infinitely narrow) ray, in analogy to optical ray theory. Here we consider only (seismic) ray theory for P-waves, although our approach could be extended to less approximative sensitivity modelling, such as Born/finite-frequency approximations, see e. g. [@Dahlenetal2000], and to S- or other wave types. As we aim here for a proof of concept, we consider the accuracy of the classical ray theoretical infinite-frequency approach as adequate. For one seismic ray $\widetilde{X}$ between seismic source and receiver, the mathematical relationship between its travel time $\psi$ and the (P-)wave velocity $c$ (and its slowness $S$, respectively), is given by the Eikonal equation, see e.g. 
[@DahlenTromp1998; @Michel2020; @Nolet2008], $$\begin{aligned} \frac{1}{c(\widetilde{X}(s))} = S(\widetilde{X}(s)) = |\nabla_{\widetilde{X}} \psi(\widetilde{X}(s))|, \label{eq:Eikonal}\end{aligned}$$ where $\widetilde{X} \colon [0,R]\to\mathbb{R}^3$ and the arc length $s$ is chosen to parametrize $\widetilde{X}$. We reparameterize the ray here by a parameter $t\in[0,1]$ such that $X\colon [0,1]\to\mathbb{R}^3$ has the same curve, that is $X([0,1])=\widetilde{X}([0,R])$. The specific choice of the parameter transformation $s\leftrightarrow t$ depends on the implementation of the numerically calculated ray. The Eikonal equation yields the non-linear inverse problem $$\begin{aligned} \int_{\widetilde{X}} S(\widetilde{X}(s))\ \mathrm{d}\sigma(\widetilde{X}(s)) = \psi(\widetilde{X}(R)) \ &\Leftrightarrow \ \int_0^1 S(X(t))\left|X'(t)\right| \mathrm{d}t = \psi(X(1)) \label{eq:NLIPTT} \intertext{with} s(t) &= \int_0^t|X'(\tau)|d\tau. \label{eq:arclen}\end{aligned}$$ Due to the difficulty of non-linear inverse problems, [\[eq:NLIPTT\]](#eq:NLIPTT){reference-type="ref" reference="eq:NLIPTT"} is usually linearized: instead of approximating the slowness $S$ itself, we consider the deviation $\delta S$ between reality and a reference model, see e.g. [@Abel2007; @Nolet2008]. This yields the seismic ray perturbation operator (SPO) given by $$\begin{aligned} (\mathcal{T}\ \delta S)\left(\widetilde{X}_\mathrm{ref}\right) \coloneqq \int_0^R \delta S\left(\widetilde{X}_\mathrm{ref}(s)\right)\ \mathrm{d}s = \delta \psi\left(\widetilde{X}_\mathrm{ref}(R)\right) \label{eq:operatorgeneral}\end{aligned}$$ where $\widetilde{X}_\mathrm{ref}$ stands for the ray obtained from a reference model and parameterized with the arc length $s$ between 0 and its total arc length $R$. Moreover, $\delta \psi = \psi_{\mathrm{obs}} - \psi_{\mathrm{ref}}$ is the delay of the observed and (due to the reference model) expected travel time. 
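Numerically, applying the SPO to a given slowness perturbation amounts to a one-dimensional quadrature along the reference ray. A minimal sketch using the composite trapezoidal rule (the quadrature choice and the callable interface are illustrative assumptions; production codes integrate along precomputed ray paths):

```python
import numpy as np

def travel_time_delay(ray, dray, delta_S, n_quad=200):
    """Approximate (T delta_S)(X_ref) = int_0^1 delta_S(X(t)) |X'(t)| dt.

    ray, dray: callables t -> R^3 for the reference ray X_ref and its
    derivative X'_ref on the unit parameter interval [0, 1];
    delta_S: callable R^3 -> R, the slowness deviation."""
    t = np.linspace(0.0, 1.0, n_quad)
    # integrand delta_S(X(t)) * |X'(t)| at the quadrature nodes
    vals = np.array([delta_S(ray(s)) * np.linalg.norm(dray(s)) for s in t])
    # composite trapezoidal rule on the nodes t
    return 0.5 * np.sum((vals[1:] + vals[:-1]) * (t[1:] - t[:-1]))
```

For a straight ray and a constant deviation, the delay is just the deviation times the ray length, which gives a quick sanity check of any such implementation.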
The more rays we consider, the better our approximation of $\delta S$ can be. Thus, in practice, we use a family of rays $\{\widetilde{X}_{\mathrm{ref},i}\}_{i=1,...,\ell}$ with respect to a set of source-receiver pairs $$\begin{aligned} \left\{\left(\widetilde{X}_{\mathrm{ref},i}(0),\widetilde{X}_{\mathrm{ref},i}(R)\right)\right\}_{i=1,...,\ell} \label{eq:sourcereceiverpairs}\end{aligned}$$ and consider the Discretized SPO (DSPO) $$\begin{aligned} \left(\mathcal{T}_\daleth^i\ \delta S\right)\left(X_{\mathrm{ref},i}\right) &\coloneqq \int_0^{R} \delta S\left(\widetilde{X}_{\mathrm{ref},i}(s)\right) \mathrm{d}s = \delta \psi_i\left(\widetilde{X}_{\mathrm{ref},i}(R)\right)\\ &= \int_0^{1} \delta S\left(X_{\mathrm{ref},i}(t)\right)\left|X'_{\mathrm{ref},i}(t)\right| \mathrm{d}t = \delta \psi_i\left(X_{\mathrm{ref},i}(1)\right),\\ \mathcal{T}_\daleth\ \delta S &= \delta \psi_\daleth \label{eq:operatordiscret}\end{aligned}$$ with $s$ as in [\[eq:arclen\]](#eq:arclen){reference-type="ref" reference="eq:arclen"}, $s(1)=R$, $\mathcal{T}_\daleth \coloneqq (\mathcal{T}_\daleth^i)_{i=1,...,\ell}$ and $\delta \psi_\daleth \coloneqq (\delta \psi_i(X_{\mathrm{ref},i}(1)) )_{i=1,...,\ell}.$ Our aim is then to construct an approximation $$\begin{aligned} \delta S \approx \sum_{i=1}^I \alpha_id_i\end{aligned}$$ for $I\in\mathbb{N},\ \alpha_i\in\mathbb{R}$ and some trial functions $d_i$. Note that, in seismology, instead of considering $\delta S$, the velocity anomaly (deviation from the reference velocity) is often considered. In our approach, this means, we would have to reformulate our approximation in the following way: if we obtain $\delta S$ as the linear combination, we approximate $$\begin{aligned} \delta S = \frac{1}{c} - \frac{1}{c_{\mathrm{ref}}} = - \frac{c-c_{\mathrm{ref}}}{cc_{\mathrm{ref}}}. \label{eq:deltaS}\end{aligned}$$ This reformulates to $$\begin{aligned} \delta S + \frac{1}{c_{\mathrm{ref}}} = \frac{1}{c}. 
\label{eq:c_reciproc}\end{aligned}$$ Thus, after approximating $\delta S$, we would transform it via $$\begin{aligned} -\frac{\delta S}{\delta S + c_{\mathrm{ref}}^{-1}} = -\left(- \frac{c-c_{\mathrm{ref}}}{cc_{\mathrm{ref}}} \right)c = \frac{c-c_{\mathrm{ref}}}{c_{\mathrm{ref}}} \eqqcolon \frac{\mathrm{d}c}{c}\end{aligned}$$ for a better comparability in the geophysical community. Since we use an artificial (synthetic) deviation model here (see [5.1](#ssect:numerics:genset){reference-type="ref" reference="ssect:numerics:genset"}), we abstain from this explicit reformulation but consider the numerator naturally in our resolution test.

# Trial functions {#sect:tfcs}

We utilize two types of trial functions in our methods: global orthogonal polynomials and compactly supported linear finite element hat functions. We first present their definitions and give examples.

## Definition and examples {#ssect:tfcs:def}

### Tesseroid-based finite element hat functions {#sssect:tfcs:def:FE}

![Example of an FEHF. We show $N_{(5096.8,0,0),(955.65,\pi/4,0.25)}$ for the depth slices (left, middle and right) at the following radial distances to the centre of the Earth in km: 3193.1, 3482.0 and 3770.9 (first row), 4059.8, 4348.7 and 4637.6 (second row), 4926.5, 5215.4 and 5504.3 (third row), 5793.2, 6082.1 and 6371.0 (last row). The colour scales are adjusted for a better comparability.](./FEHFs-approximation3193.pdf "fig:"){#fig:FEHF width=".32\\textwidth"}
![](./FEHFs-approximation3481.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation3770.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation4059.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation4348.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation4637.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation4926.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation5215.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation5504.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation5793.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation6082.pdf "fig:"){width=".32\\textwidth"}
![](./FEHFs-approximation6371.pdf "fig:"){width=".32\\textwidth"}

For a general introduction to finite elements, see e.g. [@Braess2007; @Grossmannetal2007; @Johnson2009; @Schwarzetal2011]. Regarding trial functions, we are interested in composite hat functions on tesseroids as the finite elements. Thus, in the sequel, we speak of finite element hat functions (FEHFs). In seismology, using tetrahedra as finite elements is common, see e.g. [@Hosseinietal2020; @Sigloch2008; @Tian2007; @Tian2007-2]. We change the underlying finite element structure here because, from a seismological perspective, there is no physical need to use tetrahedra. Tesseroids, see [@Fukushima2018], are rectangular parallelepipeds defined by upper and lower bounds with respect to the radius, the longitude and the latitude. Thus, tesseroids may be even more reasonable than tetrahedra: such an underlying structure would be a realization of the interior of the Earth being structured or layered by gravitation. Moreover, the depth of the mantle ($\approx$ 2 900 km) is much smaller than the circumference of the Earth ($\approx$ 40 000 km).
Hence, the heterogeneities in the mantle extend less in depth than they do latitudinally and longitudinally. For the definition of a tesseroid, we obviously have six degrees of freedom. Due to the composite nature of the FEHFs, we move from the boundary-based view to a centre-based one. Then, the degrees of freedom are its radial centre $R$ and its distance to each side $\Delta R$ as well as the longitudinal and latitudinal centres $\Phi$ and $T$ and their respective distances to each side $\Delta \Phi$ and $\Delta T$. The natural constraints of a tesseroid are $$\begin{aligned} R &\in [\rho\mathbf{R},\mathbf{R}] = [R_{\mathrm{min}},R_{\mathrm{max}}],& 0<\epsilon_R&\leq \Delta R\leq \mathbf{R}/2,\\ \Phi &\in [0,2\pi] = [\Phi_{\mathrm{min}},\Phi_{\mathrm{max}}],& 0<\epsilon_\Phi &\leq \Delta \Phi\leq \pi, \label{def:N-A-DA-constraints}\\ T &\in [-1+\epsilon_T,1-\epsilon_T] = [T_{\mathrm{min}},T_{\mathrm{max}}],& 0<\epsilon_T&\leq \Delta T \leq 0.5.\end{aligned}$$ Note that, with $\rho\in [0,1]$, we control the maximal possible depth of the tesseroids. For instance, if we set $\rho = 3482/6371 \approx 0.54654$, we allow the FEHFs to reach as deep as the core-mantle boundary. Further note that, due to singularities at the poles, see [8.3](#ssect:app:H1IPs){reference-type="ref" reference="ssect:app:H1IPs"}, we need to stay away from the theoretically possible bounds $\pm 1$ in the latitudinal case.
We can identify (in polar coordinates) the ball with the domain $$\begin{aligned} D &\coloneqq [0,\mathbf{R}] \times [P,P+2\pi] \times [-1,1]\quad \mathrm{with} \quad P\coloneqq\left\lfloor \frac{\Phi-\Delta \Phi}{\pi} \right\rfloor \pi,\end{aligned}$$ the difference domain as $$\begin{aligned} \Delta D &\coloneqq [\epsilon_R,\mathbf{R}/2] \times [\epsilon_\Phi,\pi] \times [\epsilon_T,0.5]\\ &= [\epsilon_R,R_{\mathrm{max}}/2] \times [\epsilon_\Phi,\Phi_{\mathrm{max}}/2] \times [\epsilon_T, (T_{\mathrm{max}}+\epsilon_T)/2]\end{aligned}$$ and the tesseroid as $$\begin{aligned} E &\coloneqq [\max(R_{\mathrm{min}},R-\Delta R),\min(R_{\mathrm{max}},R+\Delta R)] \\ &\qquad \times [\Phi-\Delta \Phi,\Phi+\Delta \Phi] \\ &\qquad \times [\max(T_{\mathrm{min}},T-\Delta T),\min(T_{\mathrm{max}},T+\Delta T)]. \label{def:tesseroid}\end{aligned}$$ For the sake of brevity, let $a\coloneqq(a_j)_{j=1,2,3} \coloneqq (r,\varphi,t),\ A\coloneqq(A_j)_{j=1,2,3} \coloneqq (R,\Phi,T),\ \Delta A\coloneqq(\Delta A_j)_{j=1,2,3} \coloneqq (\Delta R,\Delta \Phi,\Delta T),$ $$\begin{aligned} A_{\mathrm{min}} &\coloneqq (\rho\mathbf{R},P,-1+\epsilon_T) = (R_{\mathrm{min}},\Phi_{\mathrm{min}},T_{\mathrm{min}}),\\ A_{\mathrm{max}} &\coloneqq (\mathbf{R},P+2\pi,1-\epsilon_T) = (R_{\mathrm{max}},\Phi_{\mathrm{max}},T_{\mathrm{max}}) \intertext{and the Cartesian product} \mathrm{supp}_{A,\Delta A} (a) &\coloneqq \prod_{j=1}^3 [\max(A_{\mathrm{min},j},A_j-(\Delta A)_j), \min(A_{\mathrm{max},j},A_j+(\Delta A)_j)] = E.\end{aligned}$$ With these definitions, we can consider the FEHFs. Commonly used in seismology are Lagrange finite element basis functions, see [@Tian2007; @Tian2007-2]. Finite element basis functions for cuboids are given, for instance, in [@Mazdziarz2010]. We take the linear examples from there.
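The clamping in the definition of $E$ above can be sketched directly. The following snippet (a minimal illustration with invented numbers, not code from our implementation) computes the per-dimension support $\mathrm{supp}_{A,\Delta A}$ by clamping the centre-based interval $[A_j - \Delta A_j, A_j + \Delta A_j]$ to the domain bounds.

```python
import math

# Illustrative constants (not values used in the paper's experiments).
R_EARTH = 6371.0                # ball radius R in km
rho = 3482.0 / 6371.0           # maximal depth: core-mantle boundary
eps_T = 0.05                    # latitudinal safety margin away from the poles

def tesseroid_support(A, dA, A_min, A_max):
    """Clamp the centre-based tesseroid [A - dA, A + dA] to the domain bounds."""
    return [(max(lo, a - da), min(hi, a + da))
            for a, da, lo, hi in zip(A, dA, A_min, A_max)]

A_min = (rho * R_EARTH, 0.0, -1.0 + eps_T)   # (R_min, Phi_min, T_min)
A_max = (R_EARTH, 2.0 * math.pi, 1.0 - eps_T)

# A tesseroid centred near the surface: the radial interval is clamped at R_max.
E = tesseroid_support(A=(6100.0, 3.0, 0.2), dA=(500.0, 0.5, 0.1),
                      A_min=A_min, A_max=A_max)
```

Here only the upper radial bound is active, so $E$ becomes $[5600, 6371] \times [2.5, 3.5] \times [0.1, 0.3]$.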
Via translation to $\mathrm{supp}_{A,\Delta A}$ and the general notation just introduced, we then obtain the dictionary elements $$\begin{aligned} &N_{A,\Delta A}(x(a)) \coloneqq N_{(R,\Phi,T),(\Delta R,\Delta \Phi, \Delta T)}(r\xi(\varphi,t)) \coloneqq \chi_{\mathrm{supp}_{A,\Delta A}} (a)\prod_{j=1}^3 \frac{\Delta A_j-|a_j-A_j|}{\Delta A_j} , \label{def:N-A-DA}\end{aligned}$$ where $\chi$ denotes the characteristic function and $x(a)=r\xi(\varphi,t)\in\mathbb{B}_\mathbf{R}$. The FEHF $N_{A,\Delta A}$ attains its maximum in $A$, which is the centre of the respective volume element. It holds $N_{A,\Delta A}(A)=1.$ The function decreases linearly towards zero when moving towards $A\pm\Delta A$ and is constant zero outside of the volume element. Thus, it is continuous, but only piecewise smooth. An example is given in [12](#fig:FEHF){reference-type="ref" reference="fig:FEHF"}. Note that, in this example, the hat shape is clearly visible. However, it also shows that, though the theoretical support of an FEHF is a tesseroid, its visible support appears smaller.

### Polynomials {#sssect:tfcs:def:POLY}

For polynomials on a ball with radius $\mathbf{R}$, we consider the system $$\begin{aligned} &G^{\mathrm{I}}_{m,n,j} (r\xi(\varphi,t)) \\ &\coloneqq p_{m,n}P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n} Y_{n,j} (\xi(\varphi,t)) \label{def:GI_2}\\ &= p_{m,n}P_m^{(0,n+1/2)}\left(2\left(\frac{r}{\mathbf{R}}\right)^2-1\right) \left(\frac{r}{\mathbf{R}}\right)^{n} q_{n,j}P_{n,|j|}(t) \left\{ \begin{matrix} \sqrt{2}\cos(j\varphi),&j<0\\ 1,&j=0\\ \sqrt{2}\sin(j\varphi),&j>0 \end{matrix} \right. \label{def:GI_3}\\ &\eqqcolon p_{m,n}P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n} q_{n,j}P_{n,|j|}(t) \mathrm{Trig}(j\varphi) \label{def:GI_4}\end{aligned}$$ with the normalization factors $$\begin{aligned} &p_{m,n} \coloneqq \sqrt{\frac{4m+2n+3}{\mathbf{R}^3}} \qquad \mathrm{and} \qquad q_{n,j} \coloneqq q_{n,|j|} \coloneqq \sqrt{\frac{2n+1}{4\pi}\frac{(n-|j|)!}{(n+|j|)!}}\end{aligned}$$ for $m,\ n \in \mathbb{N}_0,\ j \in \mathbb{Z},\ |j|\leq n$, where $P_m^{(\alpha,\beta)}$ denotes a Jacobi polynomial, $Y_{n,j}$ a spherical harmonic and $P_{n,|j|}$ an associated Legendre function. As can be seen here, we use fully normalized spherical harmonics in our implementation. For details on those composite functions, see e.g. [@AbramowitzStegun1965; @Freedenetal1998; @Freedenetal2013-1; @Freedenetal2009; @MagnusOberhettinger1966; @Michel2020; @Mueller1966; @Szegoe1975]. The functions $G^{\mathrm{I}}$ were first investigated by [@Ballanietal1993; @Dufour1977; @Michel2013]. For generalizations of this system, see [@DunkelXu2014] as well as [@Micheletal2016]. Note that every $G^{\mathrm{I}}$ is an algebraic polynomial in $\mathbb{R}^3$ and is, therefore, well-defined on the whole ball. An example is given in [24](#fig:GI){reference-type="ref" reference="fig:GI"}.

![Example of a polynomial. We show $G_{2,2,1}^{\mathrm{I}}$ for the depth slices (left, middle and right) at the following radial distances to the centre of the Earth in km: 3193.1, 3482.0 and 3770.9 (first row), 4059.8, 4348.7 and 4637.6 (second row), 4926.5, 5215.4 and 5504.3 (third row), 5793.2, 6082.1 and 6371.0 (last row). The colour scales are adjusted for better comparability.](./Polys-approximation3193.pdf "fig:"){#fig:GI width=".32\\textwidth"}

## A dictionary {#ssect:tfcs:dic}

With these two types of trial functions, we now build a so-called dictionary $\mathcal{D}$, which is an intentionally redundant set of functions.
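As a quick sanity check on the polynomial system just introduced: with $I(r)=2(r/\mathbf{R})^2-1$, the factor $p_{m,n}$ should make the radial part of $G^{\mathrm{I}}_{m,n,j}$ have unit $\mathrm{L}^2$ norm, so that the whole function is $\mathrm{L}^2(\mathbb{B})$-normalized once $Y_{n,j}$ is normalized on the sphere. The sketch below (illustrative only, using a hand-coded Jacobi recurrence and a simple trapezoidal rule rather than our implementation) verifies this numerically.

```python
import numpy as np

R = 6371.0  # illustrative ball radius in km

def jacobi_0b(m, b, x):
    """Jacobi polynomial P_m^{(0,b)}(x) via the standard three-term recurrence."""
    p_prev = np.ones_like(x)
    if m == 0:
        return p_prev
    p = ((b + 2.0) * x - b) / 2.0
    for k in range(2, m + 1):
        c = 2.0 * k + b
        p_next = ((c - 1.0) * (c * (c - 2.0) * x - b * b) * p
                  - 2.0 * (k - 1.0) * (k + b - 1.0) * c * p_prev) \
                 / (2.0 * k * (k + b) * (c - 2.0))
        p_prev, p = p, p_next
    return p

def radial_norm_sq(m, n, num=200001):
    """p_{m,n}^2 * int_0^R [P_m^{(0,n+1/2)}(I(r)) (r/R)^n]^2 r^2 dr."""
    r = np.linspace(0.0, R, num)
    f = (jacobi_0b(m, n + 0.5, 2.0 * (r / R)**2 - 1.0) * (r / R)**n)**2 * r**2
    integral = np.sum((f[1:] + f[:-1]) * np.diff(r)) / 2.0   # trapezoidal rule
    return (4 * m + 2 * n + 3) / R**3 * integral

# Each value should equal 1 up to quadrature error.
checks = [radial_norm_sq(m, n) for m in range(4) for n in range(4)]
```

The same check follows analytically from the Jacobi orthogonality relation with weight $(1+u)^{n+1/2}$ after substituting $u = I(r)$.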
In the sequel, we work with the following dictionary $$\begin{aligned} [\mathcal{G}_{M,N}]_{\mathrm{GI}} &\coloneqq \left\{G^{\mathrm{I}}_{m,n,j}\ \colon\ m,\ n \in \mathbb{N}_0,\ m\leq M,\ n\leq N,\ j\in\mathbb{Z},\ |j| \leq n\right\},\\ [\mathcal{A}]_{\mathrm{FEHF}} &\coloneqq \left\{N_{A,\Delta A}\ \colon\ A \in D,\ \Delta A \in \Delta D\right\},\\ \mathcal{D}&\coloneqq \mathcal{D}^{\mathrm{Inf}} \coloneqq [\mathcal{G}_{M,N}]_{\mathrm{GI}} \cup [\mathcal{A}]_{\mathrm{FEHF}}.\end{aligned}$$ We see that the trial function classes $[\mathcal{G}_{M,N}]_{\mathrm{GI}}$ and $[\mathcal{A}]_{\mathrm{FEHF}}$ are defined via the characteristic parameters of the trial functions. Note that the parameters of the FEHFs are continuous while those of the polynomials are discrete. We allow polynomials up to a maximum radial and angular degree. Theoretically, we could also allow all $m,\ n \in \mathbb{N}_0$ and, thus, let $[\mathcal{G}_{M,N}]_{\mathrm{GI}}$ be infinite. In practice, however, this is not sensible, as we will see later. Note, however, that $[\mathcal{A}]_{\mathrm{FEHF}}$ is truly infinite. We discuss the dictionary again in a larger context below when we introduce our approximation methods.

## Regularization terms {#ssect:tfcs:reg}

As we are considering an ill-posed inverse problem, it is, from a mathematical point of view, inevitable to include a regularization. For our approach, we use the Tikhonov--Phillips regularization. Thus, we need to determine a suitable function space for the penalty term. We decided to use the $\mathcal{H}_1\left(\mathbb{B}\right)$-Sobolev space, see [@Adamsetal2003; @Bhattacharyya2012; @Braess2007; @Heuser2006; @Schwarzetal2011; @Werner2018; @Yosida1995].
The regularization terms are then obtained via $$\begin{aligned} \langle d_i, d_j \rangle_{\mathcal{H}_1} &= \left\langle \mathrm{D}^{(0)}d_i, \mathrm{D}^{(0)} d_j \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)} + \left\langle \mathrm{D}^{(1)}d_i, \mathrm{D}^{(1)} d_j \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)}\\ &= \left\langle d_i, d_j \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)} + \left\langle \nabla_x d_i, \nabla_x d_j\right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)} \label{def:H1IP}\end{aligned}$$ for two dictionary elements $d_i,\ d_j \in \mathcal{D}$. We have to determine these inner products for the different cases of dictionary elements in $\mathcal{D}$. We start with the determination of the gradients. For the FEHFs, we obtain $$\begin{aligned} &\nabla_{r\xi(\varphi,t)} N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}(r,\varphi,t) \\&= \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t) \\ &\qquad\times \left( \varepsilon^r\frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\right.\\ &\qquad\qquad + \frac{1}{r}\varepsilon^\varphi\frac{1}{\sqrt{1-t^2}} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T}\\ &\qquad\qquad \left. + \frac{1}{r}\varepsilon^t\sqrt{1-t^2}\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T} \right).\end{aligned}$$ The derivation of this formula is straightforward and can be found in [8.1](#ssect:app:nablaFEHF){reference-type="ref" reference="ssect:app:nablaFEHF"}. For the polynomials, we have $$\begin{aligned} \nabla_{r\xi(\varphi,t)} &G^{\mathrm{I}}_{m,n,j}(r\xi(\varphi,t)) \\ &= \left(\frac{\partial}{\partial x_k}G^{\mathrm{I}}_{m,n,j} (r\xi(\varphi,t)) \right)_{k=1,2,3} \\ &= p_{m,n}q_{n,j}\left(\left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\right.\\ &\qquad\qquad + \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\\ &\qquad\qquad + \frac{j}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \frac{1}{\sqrt{1-t^2}} P_{n,|j|}(t) \mathrm{Trig}(-j\varphi) \varepsilon^\varphi(\varphi,t)\\ &\qquad\qquad \left.+ \frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \sqrt{1-t^2} P'_{n,|j|}(t)\mathrm{Trig}(j\varphi)\varepsilon^t(\varphi,t)\right)\\ &\eqqcolon p_{m,n}q_{n,j}\sum_{p=1}^4 G^{\mathrm{I}}_{m,n,j;p} (r\xi(\varphi,t)), \label{eq:nablaGI}\end{aligned}$$ with $$\begin{aligned} I'(r) = \frac{4r}{\mathbf{R}^2}. \label{eq:gradIr}\end{aligned}$$ The computation of this gradient is shown in detail in [8.2](#ssect:app:nablaGI){reference-type="ref" reference="ssect:app:nablaGI"}. In both cases, note that $\xi = \varepsilon^r$, $\varepsilon^\varphi$ and $\varepsilon^t$ are vectors. Then, we obtain the following values for the inner products. The detailed derivation is given in [8.3](#ssect:app:H1IPs){reference-type="ref" reference="ssect:app:H1IPs"}.
For two polynomials, we have $$\begin{aligned} &\left\langle G^{\mathrm{I}}_{m,n,j}, G^{\mathrm{I}}_{m',n',j'} \right\rangle_{\mathcal{H}_1(\mathbb{B})}\\ &= \delta_{m,m'}\delta_{n,n'}\delta_{j,j'} + \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n}\\ &\qquad\times\left[\frac{\mathbf{R}\sqrt{2}}{2^{n}}\int_{-1}^{1} \left(P_m^{(0,n+1/2)}(u)\right)'\left(P_{m'}^{(0,n+1/2)}(u)\right)' \left(1+u\right)^{n+3/2} \mathrm{d} u \right. \\ &\qquad\qquad + \frac{\mathbf{R}n}{2^{n}\sqrt{2}}\int_{-1}^{1} \left[ \left(P_m^{(0,n+1/2)}(u)\right)'P_{m'}^{(0,n+1/2)}(u) \right. \\ &\qquad\qquad\qquad\qquad\qquad\qquad \left. + P_m^{(0,n+1/2)}(u)\left(P_{m'}^{(0,n+1/2)}(u)\right)'\right] \left(1+u\right)^{n+1/2} \mathrm{d} u \\ &\qquad\qquad +\left. \frac{\mathbf{R}n(2n+1)}{2^{n+1}\sqrt{2}}\int_{-1}^{1} P_m^{(0,n+1/2)}(u)P_{m'}^{(0,n+1/2)}(u) \left(1+u\right)^{n-1/2} \mathrm{d} u \right],\end{aligned}$$ where $\delta_{a,b}$ denotes the Kronecker Delta. Note that the remaining integrals must be computed numerically due to the exponents of $(1+u)$. 
For two FEHFs, we obtain $$\begin{aligned} &\langle N_{A,\Delta A}, N_{A',(\Delta A)'} \rangle_{\mathcal{H}_1(\mathbb{B})}\\ &=\prod_{j=1}^{3} \int_{lb_{a_j}}^{ub_{a_j}} \frac{[\Delta A_j-|a_j-A_j|][(\Delta A_j)'-|a_j-A'_j|]}{\Delta A_j (\Delta A_j)'} \left\{\begin{matrix} a_1^2, & j=1,\\ 1, & j=2,3\end{matrix} \right\} \mathrm{d}a_j\\ &\qquad + \sum_{k=1}^3\int_{lb_{a_k}}^{ub_{a_k}} \frac{\mathop{\mathrm{sgn}}(a_k-A_k)}{\Delta A_k} \frac{\mathop{\mathrm{sgn}}(a_k-A_k')}{(\Delta A_k)'} \left\{\begin{matrix} a_k^2, &k=1,\\ 1,& k=2,\\ 1-a_k^2, & k=3 \end{matrix} \right\} \mathrm{d}a_k\\ &\qquad \times \prod_{j=1,\ j\not=k}^3 \int_{lb_{a_j}}^{ub_{a_j}} \frac{\Delta A_j - |a_j-A_j|}{\Delta A_j} \frac{(\Delta A_j)' - |a_j-A_j'|}{(\Delta A_j)'} \left\{\begin{matrix} \frac{1}{1-a_j^2}, &j=3,k=2,\\ 1,& \text{else} \end{matrix} \right\} \mathrm{d}a_j,\end{aligned}$$ where $lb_{a_i}$ and $ub_{a_i}$ are the lower and upper bounds with respect to the dimension $a_i$ of the intersection of the respective domains of the FEHFs $N_{A,\Delta A}$ and $N_{A',(\Delta A)'}$. Note that we need to determine these boundaries as well as the critical points between them. At last, we consider the mixed case of an FEHF and a polynomial.
This yields $$\begin{aligned} &\left\langle N_{A,\Delta A}, G^{\mathrm{I}}_{m,n,j} \right\rangle_{\mathcal{H}_1(\mathbb{B})}\\ &= p_{m,n}q_{n,j}\int_{lb_r}^{ub_r} \frac{\Delta R-|r-R|}{\Delta R} P_m^{(0,n+1/2)}(I(r))\left(\frac{r}{\mathbf{R}}\right)^nr^2 \mathrm{d} r\\ &\quad\times \int_{lb_\varphi}^{ub_\varphi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi) \mathrm{d} \varphi \int_{lb_t}^{ub_t} \frac{\Delta T-|t-T|}{\Delta T} P_{n,|j|}(t) \mathrm{d} t\\ & - p_{m,n}q_{n,j} \sum_{k=1}^3 \int_{lb_{a_k}}^{ub_{a_k}} \frac{\mathop{\mathrm{sgn}}(a_k-A_k)}{\Delta A_k} \left\{\begin{matrix} \left(P_m^{(0,n+1/2)}\left(I(a_k)\right)\right)'I'(a_k) \left(\frac{a_k}{\mathbf{R}}\right)^{n} a_k^2 &\\ + P_m^{(0,n+1/2)}\left(I(a_k)\right) \left(\frac{a_k}{\mathbf{R}}\right)^{n} n a_k, &k=1\\ j\mathrm{Trig}(-j a_k),&k=2\\ (1-a_k^2)P_{n,|j|}'(a_k),&k=3 \end{matrix}\right\} \mathrm{d}a_k\\ &\quad \times \prod_{i=1,i\not=k}^3 \int_{lb_{a_i}}^{ub_{a_i}} \frac{\Delta A_i-|a_i-A_i|}{\Delta A_i} \left\{\begin{matrix} P_m^{(0,n+1/2)}\left(I(a_i)\right) \left(\frac{a_i}{\mathbf{R}}\right)^{n}, &i=1\\ \mathrm{Trig}(j a_i), &i=2\\ P_{n,|j|}(a_i),&i=3,k=1\\ \frac{1}{1-a_i^2} P_{n,|j|}(a_i),&i=3,k=2 \end{matrix}\right\} \mathrm{d}a_i\end{aligned}$$ where $lb_r = lb_{a_1} = \max(R_{\mathrm{min}},R-\Delta R),\ ub_r = ub_{a_1} = \min(R_{\mathrm{max}},R+\Delta R),\ lb_\varphi= lb_{a_2} = \Phi-\Delta \Phi,\ ub_\varphi= ub_{a_2} = \Phi+\Delta \Phi,\ lb_t = lb_{a_3} = \max(T_{\mathrm{min}},T-\Delta T)$ and $ub_t = ub_{a_3} = \min(T_{\mathrm{max}},T+\Delta T)$. Note that we use the same notation as in [\[eq:nablaGI\]](#eq:nablaGI){reference-type="ref" reference="eq:nablaGI"}. The longitudinal integrals can be derived analytically (see [8.3](#ssect:app:H1IPs){reference-type="ref" reference="ssect:app:H1IPs"}). The radial and the latitudinal integrals must be computed numerically. 
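The bounds $lb_{a_i}$ and $ub_{a_i}$ appearing in the inner products above are the per-dimension intersections of the trial functions' domains. A minimal sketch (invented intervals, not our implementation):

```python
def intersection_bounds(support1, support2):
    """Per-dimension intersection of two tesseroid supports, or None if empty."""
    bounds = []
    for (lo1, hi1), (lo2, hi2) in zip(support1, support2):
        lb, ub = max(lo1, lo2), min(hi1, hi2)
        if lb >= ub:
            return None    # disjoint supports: all integrals over them vanish
        bounds.append((lb, ub))
    return bounds

# Two illustrative tesseroid supports in (r, phi, t) coordinates.
E1 = [(4000.0, 5000.0), (1.0, 2.0), (-0.2, 0.3)]
E2 = [(4500.0, 5600.0), (1.5, 2.5), (0.0, 0.4)]
common = intersection_bounds(E1, E2)
```

In practice, the kinks of the hat functions (at the centres $A_i$ and $A_i'$) lying inside these bounds serve as additional breakpoints for piecewise integration.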
Note that, in our experience, it is difficult but critical for our approach to determine the most efficient implementation of the latitudinal integrals.

# The (Learning) Inverse Problem Matching Pursuits {#sect:ipmps}

Next, we introduce our suggested algorithms for the approximation of ill-posed inverse problems: the (Learning) Inverse Problem Matching Pursuits ((L)IPMPs). The IPMPs comprise the Regularized Functional Matching Pursuit (RFMP) and the Regularized Orthogonal Functional Matching Pursuit (ROFMP). The LIPMPs are their respective counterparts that include a learning add-on: the LRFMP and the LROFMP. Note that there also exists the Regularized Weak Functional Matching Pursuit (RWFMP), whose idea can be included in the LRFMP and the LROFMP quite naturally, as we explain below. To the best of our knowledge, the Geomathematics Group Siegen is among the first to adapt these algorithms for inverse problems. For more details and applications, see [@Berkeletal2011; @Fischer2011; @Fischeretal2012; @Fischeretal2013-1; @Fischeretal2013-2; @Guttingetal2017; @Kontak2018; @Kontaketal2018-2; @Kontaketal2018-1; @Michel2015-2; @Micheletal2017-1; @Micheletal2018-1; @Micheletal2014; @Micheletal2016-1; @Leweke2018; @Prakashetal2020; @Schneider2020; @Schneideretal2022; @Telschow2014; @Telschowetal2018]. We concentrate here on the LRFMP because it appears to be more suitable for travel time tomography due to its efficiency. For this method, we introduce the characteristics relevant for understanding it as well as those newly adjusted to the particular problem at hand. Note, however, that the transfer of the latter to the LROFMP is straightforward, and an implementation of it is included in the corresponding source code. We also direct the reader to the notations given in [2](#sect:problem){reference-type="ref" reference="sect:problem"}.
## The (Learning) Regularized Functional Matching Pursuit {#ssect:ipmps}

The RFMPs tackle inverse problems, such as the discretized, linearized travel time tomography problem $T_\daleth \delta S = \delta \psi_\daleth$, which are often ill-posed by nature. The inevitable regularization for such kinds of problems is implemented as a Tikhonov--Phillips regularization in these methods. This is an established and well-performing choice for many ill-posed inverse problems, see e.g. [@Engletal1996; @Hofmann1999; @Kirsch1996; @Louis1989; @Rieder2003]. An approximation $f^\ast$ of the solution $f$ is then found as the minimizer of the Tikhonov--Phillips functional. In particular, with the $\mathcal{H}_1\left(\mathbb{B}\right)$ Sobolev space introduced above, in the RFMP, we aim to determine $f^\ast$ such that $$\begin{aligned} \left\| \delta \psi_\daleth - \mathcal{T}_\daleth f^\ast \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f^\ast\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)},\qquad \lambda >0, \label{eq:TF}\end{aligned}$$ is minimized. The first term is usually called the data misfit while the latter is the penalty term. Note that the corresponding Tikhonov--Phillips functional formally consists of only one penalty term instead of two (for smoothing and damping) as commonly used in seismology, see e.g. [@Charletyetal2013; @Hosseinietal2020]. We choose the $\mathcal{H}_1\left(\mathbb{B}\right)$-Sobolev space for regularization here because we consider FEHFs for the approximation and these are tightly connected to (classical) Sobolev spaces. As we saw in [\[def:H1IP\]](#def:H1IP){reference-type="ref" reference="def:H1IP"}, the respective inner product is a sum of the $\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)$- and the $\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)$-inner product, through which we re-enact the trade-off between smoothing and damping.
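To make the role of the penalty term concrete, the following toy sketch (invented numbers, a plain Euclidean penalty instead of the $\mathcal{H}_1$ norm, and not the dictionary-based RFMP itself) computes the minimizer of $\|y - Tf\|^2 + \lambda\|f\|^2$ for a fully discretized linear problem via the normal equations $(T^\top T + \lambda I)f^\ast = T^\top y$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((20, 5))       # toy discretized forward operator
f_true = np.arange(1.0, 6.0)           # toy solution
y = T @ f_true                         # noise-free toy data
lam = 1e-3                             # small regularization parameter

# Minimizer of ||y - T f||^2 + lam ||f||^2 via the normal equations.
f_star = np.linalg.solve(T.T @ T + lam * np.eye(5), T.T @ y)
```

For noise-free data and small $\lambda$, $f^\ast$ essentially recovers the toy solution; increasing $\lambda$ damps the solution towards zero.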
This difference in the penalty terms is thus only minor.

However, there is a more important difference between [\[eq:TF\]](#eq:TF){reference-type="ref" reference="eq:TF"} and the common seismological approach, and it lies within the data misfit: due to uncertainties within the data, seismologists usually consider the (reduced) $\chi^2$ in the computation process as well as for model selection (via the L-curve) instead of the pure residual $\delta \psi_\daleth - \mathcal{T}_\daleth f^\ast$, see e.g. [@Hosseinietal2020]. Mathematically, they are connected as follows: $$\begin{aligned} \chi_{\mathrm{red}}^2 &= \frac{1}{\ell} \sum_{i=1}^\ell \left( \frac{\left(\delta \psi_\daleth\right)_i - \mathcal{T}^i_\daleth f^\ast}{\sigma_i}\right)^2 = \frac{1}{\ell} \left\| \frac{\delta \psi_\daleth - \mathcal{T}_\daleth f^\ast}{\sigma}\right\|_{\mathbb{R}^\ell}^2 \intertext{with} \frac{\delta \psi_\daleth - \mathcal{T}_\daleth f^\ast}{\sigma} &\coloneqq \left( \frac{\left(\delta \psi_\daleth\right)_i - \mathcal{T}^i_\daleth f^\ast}{\sigma_i} \right)_{i=1,...,\ell},\end{aligned}$$ where $\sigma_i \in \mathbb{R}_+$ denotes the (known) uncertainty with respect to the $i$-th measurement. As the uncertainty within the data cannot be circumvented, we adjust the Tikhonov--Phillips functional considered in the RFMP for the case of travel time tomography as well. In the sequel, we therefore consider the noise-cognizant Tikhonov--Phillips functional $$\begin{aligned} \left\| \frac{\delta \psi_\daleth - \mathcal{T}_\daleth f^\ast}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f^\ast\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)},\qquad \lambda >0, \label{eq:ncTF}\end{aligned}$$ which shall be minimized in the RFMP. The minimizer $f^*$ is then obtained iteratively as a linear combination of weighted dictionary elements $d_n \in \mathcal{D}$. Let $N$ denote the current (or final) iteration.
Then we have $$\begin{aligned} f_N &= f_0 + \sum_{n=1}^N \alpha_n d_n \label{eq:fn}\end{aligned}$$ in the case of the RFMP. Here, $f_0$ denotes a first approximation, which is often the zero approximation in practice. As the dictionary is made of global polynomials and local FEHFs, the approximation also consists -- in all probability -- of both types of trial functions.

The noise-cognizant Tikhonov--Phillips functional of the $N$-th step then transfers to $$\begin{aligned} (\alpha, d) &\mapsto \left\| \frac{R^N - \alpha\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N+\alpha d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)},\qquad \lambda >0, \label{eq:TFNO}\\ &R^{N+1} \coloneqq R^N - \alpha_{N+1}\mathcal{T}_\daleth d_{N+1} = \delta \psi_\daleth - \mathcal{T}_\daleth f_{N+1} \label{def:RN}\end{aligned}$$ for the RFMP and with $R^0 = \delta \psi_\daleth-\mathcal{T}_\daleth f_0$, which yields $R^0 = \delta \psi_\daleth$ if $f_0 \equiv 0$. The main question is how to choose the dictionary element $d_{N+1} \in \mathcal{D}$ and the corresponding weight $\alpha_{N+1} \in \mathbb{R}$ such that the corresponding Tikhonov--Phillips functional is minimized.
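The iterative expansion and residual update above can be illustrated by a stripped-down toy version: a plain matching pursuit over a small finite dictionary of coefficient vectors, without the penalty term and not our actual implementation; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
ell, p = 30, 6
# toy forward operator with orthonormal columns, toy dictionary of unit vectors
T, _ = np.linalg.qr(rng.standard_normal((ell, p)))
D = [np.eye(p)[k] for k in range(p)]
f_true = np.array([3.0, 0.0, -1.0, 0.0, 2.0, 0.0])
delta_psi = T @ f_true                    # noise-free toy data
sigma = 1.0                               # unit uncertainty, as in the text

f = np.zeros(p)
R_N = delta_psi - T @ f                   # R^0 = delta_psi since f_0 = 0
for _ in range(10):
    # greedy choice: maximize the squared correlation of T d with the residual
    scores = [np.dot(R_N, T @ d) ** 2 / np.dot(T @ d, T @ d) for d in D]
    k = int(np.argmax(scores))
    alpha = np.dot(R_N, T @ D[k]) / np.dot(T @ D[k], T @ D[k])
    f = f + alpha * D[k]                  # f_{N+1} = f_N + alpha_{N+1} d_{N+1}
    R_N = delta_psi - T @ f               # residual update
chi2_red = np.mean((R_N / sigma) ** 2)    # reduced chi^2 of the final fit
```

With orthonormal columns and noise-free data, the toy iteration recovers the sparse coefficients exactly; the actual RFMP additionally accounts for the penalty term when selecting $d_{N+1}$ and $\alpha_{N+1}$.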
As in the literature on the (L)IPMPs, we can exchange the minimization of the noise-cognizant Tikhonov--Phillips functional for an equivalent maximization of the objective function, see [8.4](#sect:app:OF_IPMPs){reference-type="ref" reference="sect:app:OF_IPMPs"} for details: $$\begin{aligned} \mathrm{RFMP}(d;N) &\coloneqq \frac{\left( \left\langle \frac{R^N}{\sigma}, \frac{\mathcal{T}_\daleth d}{\sigma}\right\rangle_{\mathbb{R}^\ell} - \lambda\left\langle f_N,d \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} \right)^2}{\left\|\frac{\mathcal{T}_\daleth d}{\sigma}\right\|_{\mathbb{R}^\ell}^2 + \lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}} \eqqcolon \frac{\left(A_N(d)\right)^2}{B_N(d)}. \label{def:RFMP(d;N)}\end{aligned}$$ The weights are then easily obtained via $$\begin{aligned} \alpha_{N+1} &\coloneqq \frac{A_N(d_{N+1})}{B_N(d_{N+1})}. \label{def:alphan}\end{aligned}$$ In practice, the IPMPs need termination and model selection criteria. As implemented for the Tikhonov--Phillips functional, we adjust them here as well. Usually, in seismological experiments, we would strive to let $\chi^2_{\mathrm{red}}$ reach 1, as this should yield the best trade-off between data (mis)fit, accuracy and smoothing. For the contrived data we use here, however, the uncertainty is assumed to be $\sigma\equiv 1\,$s.
This enables us to consider the relative data error $\|R^N\|_{\mathbb{R}^\ell}/\|R^0\|_{\mathbb{R}^\ell}$ (as usually done in an IPMP) instead. In our experiments, we additionally perturb the delay vector with simulated noise. This allows us to terminate the algorithm if the relative data error falls below this noise level. To avoid endless iterations for inappropriate parameters, we also set a maximum number of iterations. Among the models $f_N$ obtained for diverse regularization parameters $\lambda$, we select the one (i.e. the regularization parameter) which yields the lowest relative root mean square error $$\begin{aligned} \left(\frac{\sum_{i=1}^\kappa \left(f\left(x^i\right)-f_N\left(x^i\right)\right)^2}{\sum_{i=1}^\kappa f\left(x^i\right)^2}\right)^{1/2} \label{def:rrmse}\end{aligned}$$ for $\kappa\in\mathbb{N}$, where $f$ is the (exact) solution which we use for our test and which is given on the points $x^i$. The IPMPs use by definition a finite dictionary $\mathcal{D}^{\mathrm{fin}} \subset \mathcal{D}= \mathcal{D}^{\mathrm{Inf}}$. In this case, the maximization of [\[def:RFMP(d;N)\]](#def:RFMP(d;N)){reference-type="ref" reference="def:RFMP(d;N)"} can be done by pairwise comparisons. However, this means that $\mathcal{D}^{\mathrm{fin}}$ must be chosen a-priori, either automatically or manually. The latter cannot be recommended in the case of travel time tomography because a) we are inexperienced regarding which trial functions are needed, as this is a novel application for the methods; b) the size of $\mathcal{D}^{\mathrm{fin}}$ grows tremendously due to the six characteristic parameters of the FEHFs; and c) a possible bias introduced by the choice of $\mathcal{D}^{\mathrm{fin}}$ cannot be quantified. To automate the a-priori choice, the LIPMPs were developed, see [@Micheletal2018-1; @Schneider2020; @Schneideretal2022]. They follow the same routine as the IPMPs but include an additional learning add-on.
With this add-on, an a-priori dictionary choice becomes obsolete. Though the LIPMPs produce a learnt dictionary which can be used as an automatically chosen one in the IPMPs, they have also proved to be useful as standalone approximation algorithms. Thus, in our experiment, we include the learning add-on, that is, we use the LRFMP. ## The learning add-on {#ssect:lipmps} The idea is to allow all possible trial functions and, thus, use the infinite dictionary $\mathcal{D}$. As we saw before, $\mathcal{D}$ includes infinitely many trial functions of those types with continuous characteristic parameters, i.e. here the FEHFs. For trial functions with discrete characteristic parameters, as here the polynomials, we still allow only a finite set. In each iteration, we determine optimized dictionary elements (or candidates) for each type of trial function separately. Together they form again a very small, finite dictionary of candidates from which we obtain the overall most suitable function. Thus, the learning add-on is the determination of the finite dictionary of candidates in each iteration.\ Recall that we allow polynomials up to a maximally possible radial and angular degree. The global trial functions in the dictionary are usually chosen to reconstruct global trends and, thus, high degrees and orders are often not needed. Hence, it suffices to consider some maximum radial and angular degree. Theoretically, these maximally possible degrees can be chosen to be very high such that the methods can learn a maximally needed radial and angular degree, see [@Schneider2020; @Schneideretal2022]. It remains to be seen whether this can be done in practice for travel time tomography due to efficiency reasons (the curse of dimensionality occurs here: the set of all (Cartesian) polynomials $G_{m,n,j}^{\mathrm{I}}$ with degree $\leq N$ has a size of $\mathcal{O}(N^3)$, see [@Michel1999]).
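To illustrate this cubic growth, a small count of the polynomial index triples $(m,n,j)$ — assuming the index ranges $m, n \leq N$ and $|j| \leq n$ that we also use for our starting dictionary — gives exactly $(N+1)^3$ elements. The helper `polynomial_count` is purely illustrative:

```python
# Count the index triples (m, n, j) of the polynomial part of a dictionary
# with radial degree m <= N, angular degree n <= N and order |j| <= n.
# Illustrative count only, not part of the actual implementation.
def polynomial_count(N):
    return sum(1 for m in range(N + 1)
                 for n in range(N + 1)
                 for j in range(-n, n + 1))

# For each of the N+1 radial degrees there are sum_{n=0}^{N} (2n+1) = (N+1)^2
# angular index pairs, i.e. (N+1)^3 triples in total -- cubic growth in N.
```

For a maximum degree of 5, this already yields 216 polynomials, which is why a moderate degree cap keeps the finite part of the dictionary manageable.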
Note that, due to the finiteness, we can still obtain the maximizer of [\[def:RFMP(d;N)\]](#def:RFMP(d;N)){reference-type="ref" reference="def:RFMP(d;N)"} by pairwise comparisons. Further, we can still use the common preprocessing routine of the IPMPs for the polynomials and improve efficiency in this way. Practically, this means we define a finite starting dictionary $\mathcal{D}^{s}$ which contains at least the chosen polynomials from $\mathcal{D}$.\ In the case of the FEHFs, we use the truly infinite set of possible trial functions $[\mathcal{A}]_{\mathrm{FEHF}}$. The main challenge here is to maximize [\[def:RFMP(d;N)\]](#def:RFMP(d;N)){reference-type="ref" reference="def:RFMP(d;N)"} among all possible FEHFs. As a matter of fact, this is a non-linear constrained optimization problem with the objective function $\mathrm{RFMP}(N_{A,\Delta A};N) \rightarrow \max !$ For maximizing $\mathrm{RFMP}(N_{A,\Delta A};N)$, recall the corresponding constraints with respect to $A$ and $\Delta A$ given in [\[def:N-A-DA-constraints\]](#def:N-A-DA-constraints){reference-type="ref" reference="def:N-A-DA-constraints"}. We can solve this maximization with any established optimization routine. For instance, methods from the NLOpt library, see [@NLOpt2019], can be utilized. Note, however, that a gradient-based approach cannot be used here because it would necessitate the computation of the gradient of $\mathrm{RFMP}(N_{A,\Delta A};N)$ with respect to $(A,\Delta A)$. Unfortunately, this is not possible for practical purposes due to the absolute value in the definition of the function, see [\[def:N-A-DA\]](#def:N-A-DA){reference-type="ref" reference="def:N-A-DA"}. Further, from experience, we propose a 2-step optimization procedure. First, we use a global method. Then we refine this solution by using a local counterpart starting from the former solution. This also enables us to soften the accuracy of the global optimization technique (i.e. its termination criteria), which decreases the runtime. Note that softening the termination criteria of the optimization is similar to including a weakness parameter as in the RWFMP. Also note that both solutions are inserted into the dictionary of candidates. Moreover, if the global method needs a starting point, we should insert a few FEHFs in the starting dictionary for this reason as well. Further note that this starting solution can also be included in the dictionary of candidates but should generally not be chosen, i.e. this starting solution should not have a major influence on the learnt approximation in general. ## Additional divide-and-conquer strategy for travel time tomography We experienced that the optimization within the learning add-on is slowed down -- among other things -- by the use of many rays at once. Thus, we considered how to sensibly use fewer rays at once while still taking a numerically significant number of rays into account. This yields an additional divide-and-conquer strategy for our challenge at hand, which proved to be helpful in practice. The main idea is to start with a low number of rays. In our experiments, we started with 1000 rays. If the relative data error falls below a certain threshold, such as $50\%$, we add the next package of 1000 rays to our consideration. We then consider the first 2000 rays in our algorithm, with the exception of the optimization of the FEHFs, where we consider only the latest package of 1000 rays. We are aware that this is a quite manual approach with certain seemingly arbitrary parameters. In the meantime, we have also addressed other efficiency aspects, so that it now seems possible to increase, for instance, the size of each ray package. This may form the basis of future research. # Numerical implementation and tests {#sect:numerics} In this section, we show a numerical proof of concept for our TTIPMP code.
We first introduce our experiment setting for reproducibility and afterwards our numerical results. Note that the corresponding code is available at [<https://doi.org/10.5281/zenodo.8227888>]{style="color: blue"} under the licence CC-BY-NC-SA 3.0 DE. ## Chosen Earthquake data and specific experiment setting {#ssect:numerics:genset} ![Resolution test input model for P-velocity consisting of two plumes below the Volcanic Eifel and Yellowstone volcano. We show the depth slices (left, middle and right) at the following radial distances to the centre of the Earth in km: 3193.1, 3482.0 and 3770.9 (first row), 4059.8, 4348.7 and 4637.6 (second row), 4926.5, 5215.4 and 5504.3 (third row), 5793.2, 6082.1 and 6371.0 (last row).](./Plumes-sol_at_DH3193.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH3481.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH3770.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH4059.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH4348.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH4637.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH4926.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH5215.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH5504.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH5793.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![](./Plumes-sol_at_DH6082.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} ![Resolution test input model for P-velocity consisting of two plumes below the Volcanic Eifel and Yellowstone volcano.
](./Plumes-sol_at_DH6371.pdf "fig:"){#fig:resolutiontests width=".32\\textwidth"} We use the unit ball for our practical computations and show here the approximation of synthetic input structures modelled on mantle plumes, i.e. seismically slow, vertical, columnar features extending from the core-mantle boundary to the surface, as a model for the deviation of the slowness in the interior of the Earth. That is, we show here a resolution test. The plumes lie below the Volcanic Eifel and the Yellowstone volcano. Note that thin, vertically continuous plume conduits are among the structures suspected to occur in the real Earth. For a visualization of this model, see [36](#fig:resolutiontests){reference-type="ref" reference="fig:resolutiontests"}.\ This enables us to compute a relative root mean square error as given in [\[def:rrmse\]](#def:rrmse){reference-type="ref" reference="def:rrmse"}, where we use a grid of $\kappa = 12 \cdot 65341 = 784\,092$ data points. This grid is given as 12 layers of an equi-angular grid, commonly also called a Driscoll-Healy grid, see e.g. [@DriscollHealy1994; @Michel2013], of 361 $\cdot$ 181 grid points.
The data is perturbed with $5\%$ Gaussian noise, such that we have perturbed data $y^\delta$ given by $$\begin{aligned} y^\delta_i = y_i \cdot \left( 1 + 0.05 \cdot \varepsilon_i\right) \label{eq:noise}\end{aligned}$$ for the unperturbed data $y_i = \left( \mathcal{T}f \right) \left(\sigma^i\eta^i\right)$ and a standard normally distributed random number $\varepsilon_i$.\ As we aim for a proof of concept, it suffices to consider a ray theoretical setting and the ISC-EHB seismic meta-data, see [@ISC-EHB; @Westonetal2018]. Note that these data were, among others, also used in [@Hosseinietal2020]. In particular, we use the ISC-EHB meta-data from 1998 to 2016, but only the P waves. Note, however, that due to the divide-and-conquer strategy, we might not use all rays from these years in practice; at most, we used 318 542 rays. According to our strategy, we start with rays in 2016 and add further rays by moving back in time.\ We correct these data with all necessary seismological corrections. The corrections are done in line with the tomography workflows exhibited in [@Hosseinietal2020; @Mohammadzaherietal2021; @Tsekhmistrenkoetal2021]. Moreover, with these meta-data, we have a constant uncertainty $\sigma_i = 1\,\mathrm{s}$, $i=1,...,\ell$. In general, we cannot expect the rays to illuminate the Earth evenly. That is, we have to take into account that all of our meta-data -- the receiving seismological stations as well as the rays themselves -- will be poorly distributed to a certain extent. In particular, the path coverage is sparse at shallow depths under the largely uninstrumented oceans as well as in the deepest parts of the mantle, since we exclude core-diffracted body-wave paths.
For a visualization of the ISC-EHB meta-data rays, see [\[fig:distrrays\]](#fig:distrrays){reference-type="ref" reference="fig:distrrays"}.\ We solve the latitudinal integrals in the inner products of a polynomial and an FEHF with a Gauß-Legendre quadrature rule of $10^{6}$ points and use an adaptive Gauß-Kronrod quadrature rule with an integration error of $10^{-4}$ everywhere else (i.e. for the DSPO as well as for other integrals in the inner products). In particular, the latter is necessary for efficiency reasons. We set $\epsilon_R = \epsilon_\Phi = \epsilon_T = 10^{-2}$ and $\rho=3482/6371$. Recall that this sets the lower bound for the value of $R$ of an FEHF to the core-mantle boundary in our setting. We use the GN_DIRECT_L and the LN_SBPLX algorithm for the global and the local optimization in the learning add-on, respectively. We terminate the global algorithm if successive iterates vary by less than $10^{-4}$ (i.e. xtol_rel = $10^{-4}$) or successive function values vary by less than $10^{0}$ (i.e. ftol_rel = $10^{0}$). We terminate the local algorithm if successive iterates vary by less than $10^{-8}$ or successive function values vary by less than $10^{-4}$. In analogy to [@Schneider2020; @Schneideretal2022], we terminate the optimization after 10 000 evaluations of the objective function or 600 s of computation time. From experience, these termination criteria ensure that the optimization happens within a suitable time frame. Further, the loss in accuracy is negligible for our proof of concept. For more information on the termination criteria of the optimization methods, see [@NLOpt2019].
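The 2-step procedure with softened global termination criteria can be sketched as follows. This is a self-contained stand-in only: a coarse grid search replaces GN_DIRECT_L and a simple compass search replaces LN_SBPLX, and the function `two_step_maximize` with its parameters is purely illustrative, not our actual implementation.

```python
import numpy as np

def two_step_maximize(obj, lower, upper, n_grid=8, tol=1e-8):
    """Two-step maximization sketch: a coarse global search over the parameter
    box (stand-in for GN_DIRECT_L), refined by a local compass/pattern search
    (stand-in for LN_SBPLX) started from the global candidate."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    # step 1: global stage -- evaluate obj on a coarse grid, keep the best point
    axes = [np.linspace(l, u, n_grid) for l, u in zip(lower, upper)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, len(lower))
    x = grid[np.argmax([obj(p) for p in grid])]
    # step 2: local stage -- axis-wise trial moves, halving the step on failure
    step = (upper - lower) / n_grid
    fx = obj(x)
    while np.max(step) > tol:
        improved = False
        for i in range(len(x)):
            for s in (+1.0, -1.0):
                y = x.copy()
                y[i] = np.clip(y[i] + s * step[i], lower[i], upper[i])
                fy = obj(y)
                if fy > fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2.0
    return x, fx
```

The loose accuracy of the global stage is compensated by the local refinement, mirroring the softened termination criteria discussed above.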
We terminate the LRFMP either after 300 iterations, or if $\|R^N\|_{\mathbb{R}^\ell}/\|y\|_{\mathbb{R}^\ell}$ is less than the noise level or greater than 2, or if $|\chi^2_{\mathrm{red}}-1|<10^{-8}$.\ As a finite (starting) dictionary, we utilize $$\begin{aligned} [\mathcal{G}]_{\mathrm{GI}} &= \left\{(m,n,j)\ \colon\ m,\ n \in \mathbb{N}_0,\ m\leq 5,\ n\leq 5,\ j\in\mathbb{Z},\ |j| \leq n\right\},\\ [\mathcal{A}]_{\mathrm{FEHF}} &=\left\{(A,\Delta A)\ \colon\ A \in D_p,\ \Delta A = \Delta D_p\right\},\\ D_p &= \left\{\frac{3482}{6371} + \frac{2889 i}{25484}\ \colon i=0,...,4\right\} \times \left\{\frac{\pi i}{2}\ \colon i=0,...,4\right\} \\ &\qquad \times \left\{-1+\epsilon_T + \frac{(1-\epsilon_T) i}{2}\ \colon i=0,...,4\right\},\\ \Delta D_p &= \left( \frac{2889}{25484}, \frac{\pi}{2}, \frac{1-\epsilon_T}{2}\right)^\mathrm{T},\\ \mathcal{D}&\coloneqq [\mathcal{G}]_{\mathrm{GI}} \cup [\mathcal{A}]_{\mathrm{FEHF}}.\end{aligned}$$ Note that $\rho\mathbf{R}= 3482/6371$, $\mathbf{R}= 1.0$ and, thus, $\mathbf{R}-\rho\mathbf{R}= 2889/6371$ in our setting. ![3571-3771](./3571-3771.pdf){width="\\textwidth"} ![3771-3971](./3771-3971.pdf){width="\\textwidth"} ![3971-4171](./3971-4171.pdf){width="\\textwidth"} ![4171-4371](./4171-4371.pdf){width="\\textwidth"} \ ![4371-4571](./4371-4571.pdf){width="\\textwidth"} ![4571-4771](./4571-4771.pdf){width="\\textwidth"} ![4771-4971](./4771-4971.pdf){width="\\textwidth"} ![4971-5171](./4971-5171.pdf){width="\\textwidth"} \ ![5171-5371](./5171-5371.pdf){width="\\textwidth"} ![5371-5571](./5371-5571.pdf){width="\\textwidth"} ![5571-5771](./5571-5771.pdf){width="\\textwidth"} ![5771-5971](./5771-5971.pdf){width="\\textwidth"} \ ![5971-6171](./5971-6171.pdf){width="\\textwidth"} ![6171-6371](./6171-6371.pdf){width="\\textwidth"} ## Synthetic inversion tests {#ssect:numerics:exps} ![Absolute approximation error for the approximation of the plumes model given in [36](#fig:resolutiontests){reference-type="ref" reference="fig:resolutiontests"}.
We show the depth slices (left, middle and right) at the following radial distances to the centre of the Earth in km: 3193.1, 3482.0 and 3770.9 (first row), 4059.8, 4348.7 and 4637.6 (second row), 4926.5, 5215.4 and 5504.3 (third row), 5793.2, 6082.1 and 6371.0 (last row). The colour scales are adjusted for a better comparability.](./Plumes-appr_err3193.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err3481.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err3770.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err4059.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err4348.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err4637.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err4926.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err5215.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err5504.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err5793.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![](./Plumes-appr_err6082.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![Absolute approximation error for the approximation of the plumes model given in [36](#fig:resolutiontests){reference-type="ref" reference="fig:resolutiontests"}. We show the depth slices (left, middle and right) at the following radial distances to the centre of the Earth in km: 3193.1, 3482.0 and 3770.9 (first row), 4059.8, 4348.7 and 4637.6 (second row), 4926.5, 5215.4 and 5504.3 (third row), 5793.2, 6082.1 and 6371.0 (last row).
The colour scales are adjusted for a better comparability.](./Plumes-appr_err6371.pdf "fig:"){#fig:apprerr width=".32\\textwidth"} ![$m=0$](./Polys_m_0.pdf){width="\\textwidth"} ![$m=1$](./Polys_m_1.pdf){width="\\textwidth"} ![$m=2$](./Polys_m_2.pdf){width="\\textwidth"} ![$m=3$](./Polys_m_3.pdf){width="\\textwidth"} ![$m=4$](./Polys_m_4.pdf){width="\\textwidth"} ![$m=5$](./Polys_m_5.pdf){width="\\textwidth"} ![3371 km - 3571 km](./3371-3571_DR.pdf "fig:"){width=".32\\textwidth"} ![3371 km - 3571 km](./3371-3571_DPH.pdf "fig:"){width=".32\\textwidth"} ![3371 km - 3571 km](./3371-3571_DT.pdf "fig:"){width=".32\\textwidth"} ![4171 km - 4371 km](./4171-4371_DR.pdf "fig:"){width=".32\\textwidth"} ![4171 km - 4371 km](./4171-4371_DPH.pdf "fig:"){width=".32\\textwidth"} ![4171 km - 4371 km](./4171-4371_DT.pdf "fig:"){width=".32\\textwidth"} ![4371 km - 4571 km](./4371-4571_DR.pdf "fig:"){width=".32\\textwidth"} ![4371 km - 4571 km](./4371-4571_DPH.pdf "fig:"){width=".32\\textwidth"} ![4371 km - 4571 km](./4371-4571_DT.pdf "fig:"){width=".32\\textwidth"} ![4571 km - 4771 km](./4571-4771_DR.pdf "fig:"){width=".32\\textwidth"} ![4571 km - 4771 km](./4571-4771_DPH.pdf "fig:"){width=".32\\textwidth"} ![4571 km - 4771 km](./4571-4771_DT.pdf "fig:"){width=".32\\textwidth"} ![4771 km - 4971 km](./4771-4971_DR.pdf "fig:"){width=".32\\textwidth"} ![4771 km - 4971 km](./4771-4971_DPH.pdf "fig:"){width=".32\\textwidth"} ![4771 km - 4971 km](./4771-4971_DT.pdf "fig:"){width=".32\\textwidth"} ![5171 km - 5371 km](./5171-5371_DR.pdf "fig:"){width=".32\\textwidth"} ![5171 km - 5371 km](./5171-5371_DPH.pdf "fig:"){width=".32\\textwidth"} ![5171 km - 5371 km](./5171-5371_DT.pdf "fig:"){width=".32\\textwidth"} ![5371 km - 5571 km](./5371-5571_DR.pdf "fig:"){width=".32\\textwidth"} ![5371 km - 5571 km](./5371-5571_DPH.pdf "fig:"){width=".32\\textwidth"} ![5371 km - 5571 km](./5371-5571_DT.pdf "fig:"){width=".32\\textwidth"} ![5571 km - 5771 km](./5571-5771_DR.pdf 
"fig:"){width=".32\\textwidth"} ![5571 km - 5771 km](./5571-5771_DPH.pdf "fig:"){width=".32\\textwidth"} ![5571 km - 5771 km](./5571-5771_DT.pdf "fig:"){width=".32\\textwidth"} ![6171 km - 6371 km](./6171-6371_DR.pdf "fig:"){width=".32\\textwidth"} ![6171 km - 6371 km](./6171-6371_DPH.pdf "fig:"){width=".32\\textwidth"} ![6171 km - 6371 km](./6171-6371_DT.pdf "fig:"){width=".32\\textwidth"} ![3371 km - 3571 km](./DS_3371-3571_DR.pdf "fig:"){width=".32\\textwidth"} ![3371 km - 3571 km](./DS_3371-3571_DPH.pdf "fig:"){width=".32\\textwidth"} ![3371 km - 3571 km](./DS_3371-3571_DT.pdf "fig:"){width=".32\\textwidth"} ![4171 km - 4371 km](./DS_4171-4371_DR.pdf "fig:"){width=".32\\textwidth"} ![4171 km - 4371 km](./DS_4171-4371_DPH.pdf "fig:"){width=".32\\textwidth"} ![4171 km - 4371 km](./DS_4171-4371_DT.pdf "fig:"){width=".32\\textwidth"} ![4771 km - 4971 km](./DS_4771-4971_DR.pdf "fig:"){width=".32\\textwidth"} ![4771 km - 4971 km](./DS_4771-4971_DPH.pdf "fig:"){width=".32\\textwidth"} ![4771 km - 4971 km](./DS_4771-4971_DT.pdf "fig:"){width=".32\\textwidth"} ![5571 km - 5771 km](./DS_5571-5771_DR.pdf "fig:"){width=".32\\textwidth"} ![5571 km - 5771 km](./DS_5571-5771_DPH.pdf "fig:"){width=".32\\textwidth"} ![5571 km - 5771 km](./DS_5571-5771_DT.pdf "fig:"){width=".32\\textwidth"} ![6171 km - 6371 km](./DS_6171-6371_DR.pdf "fig:"){width=".32\\textwidth"} ![6171 km - 6371 km](./DS_6171-6371_DPH.pdf "fig:"){width=".32\\textwidth"} ![6171 km - 6371 km](./DS_6171-6371_DT.pdf "fig:"){width=".32\\textwidth"} We chose the regularization parameter to be $10^{-3}\|y\|_{\mathbb{R}^\ell}$ as this produced the lowest RRMSE among the tested values of parameters. The experiment terminated after 300 iterations with an RRMSE of 0.881173 and a relative data error of 0.154958. The absolute approximation error is shown in [48](#fig:apprerr){reference-type="ref" reference="fig:apprerr"}. 
We scaled the figures to the colour range of the solution for better comparability (compare the colour scales in [36](#fig:resolutiontests){reference-type="ref" reference="fig:resolutiontests"} and [48](#fig:apprerr){reference-type="ref" reference="fig:apprerr"}). We note that the errors are low for intermediate depths, where our teleseismic body wave paths sample the mantle most extensively, and higher for large and shallow depths. Recall that we have very irregularly distributed data (see [\[fig:distrrays\]](#fig:distrrays){reference-type="ref" reference="fig:distrrays"}). In particular, at distances from 3770.9 km to 4926.5 km to the centre, we see only very small remaining errors, some of which are simply due to boundary effects since the method is unable to recover the precise geometry of the plumes. Moreover, those errors are close to the plumes, i.e. the region where our structure is given. Also at larger depths (distances 3193.1 km and 3482.0 km to the centre), the main errors are situated in the Northern hemisphere. Since there is practically no data at these depths (see [\[fig:distrrays\]](#fig:distrrays){reference-type="ref" reference="fig:distrrays"}), the method cannot register that the plumes are cut off at the core-mantle boundary; however, it does not introduce random artefacts there but continues the structures. We assume that additionally using core-diffracted waves would remove the errors at these depths. At shallower depths (distances 5215.4 km to 6082.1 km to the centre), we also see a similar continuation behaviour of the approximation. Comparing the approximation at these depths with the data distribution, we observe that the errors do not increase as rapidly as the data becomes sparse. Hence, here as well we have a continuation of structure within sparser data regions without many artefacts.
The errors increase in particular below the Southern Pacific at radii 5504.3 km and 5793.2 km, which is typically not a very seismically active region, and the data is not well-distributed there. Unfortunately, at the Earth's surface we have many artefacts. As the data sparsity there is extremely high, we assume that a certain sparsity level can also be a limit to the method. However, note that the Earth's surface also represents the boundary of the considered region, and boundaries are known to possibly introduce additional challenges in approximation tasks. In [\[fig:Polys\]](#fig:Polys){reference-type="ref" reference="fig:Polys"}, [\[fig:FEHFs1\]](#fig:FEHFs1){reference-type="ref" reference="fig:FEHFs1"} and [\[fig:FEHFs2\]](#fig:FEHFs2){reference-type="ref" reference="fig:FEHFs2"}, we show the chosen dictionary elements. Further, in [\[fig:DS_TFs\]](#fig:DS_TFs){reference-type="ref" reference="fig:DS_TFs"}, we provide a comparison to the FEHFs that were given in the starting dictionary. First of all, we note that the method chooses both local and global functions -- many more polynomials than FEHFs, to be precise. In particular, higher degrees and orders are generally preferred. This supports the idea that, due to the irregularly distributed data, the method uses global functions which fill data gaps with structures similar to those in regions with many data. This is at least the case as long as there are enough data points nearby, such as at medium depths. Thus, it would be interesting to investigate how the method would work with even higher degrees and orders. Perhaps it would then choose even fewer FEHFs. Regarding the chosen FEHFs, we see that the LRFMP finds optimal ones at depths where more data is given, though even there not many are selected. In regions where the data is sparse, the method falls back to the FEHFs given in the starting dictionary. In general, this is not desirable.
Note, however, that this also happens mostly near the Earth's surface, which represents a specific challenge as discussed above. Since the FEHFs are generally chosen less often, this outcome suggests that FEHFs may not be the best choice within the LRFMP for travel time tomography; with only sparse data, however, the polynomials are not suitable either. It remains a question for future research to determine which alternative types of trial functions in the dictionary can yield better numerical results. For other applications such as gravitational field modelling, the LRFMP proved to perform well for the combination of orthogonal polynomials (spherical harmonics in this case) and localized scaling functions and wavelets constructed from reproducing kernels, see e.g. [@Schneideretal2023-G]. However, the analogues of such kernels on balls come with essentially higher numerical costs, which is why we did not choose them for the first experiments demonstrated here. They might thus be promising, but it remains open to improve the efficiency of their numerical calculation and integration. Hence, we observe that the LRFMP provides the possibility to reconstruct plumes within the Earth and to constrain errors to their spatial locale, though the method certainly leaves potential for further improvements.

# Discussion

The underlying inverse problem, travel time tomography, is ill-posed. Unfortunately, we lack other helpful theoretical insights into this problem, such as a singular value decomposition of the corresponding operator, which makes the numerical computations rather time-consuming. Due to the lack of numerically exploitable properties of the respective operator, we have to take special care in choosing a sensible number of rays in practice. The number we use here is, as far as we know, sufficient for global models.
However, the number of rays has already posed a numerical challenge to the inversion method used.\
Thus, for now and since we have just started our development, we aimed for a proof of concept rather than novel seismological insights. Hence, the use of a ray-theoretical setting is sufficient at this stage, in particular since a finite-frequency ansatz would introduce even more computational challenges.\
All in all, the applicability of the LRFMP to travel time tomography is demonstrated quite fairly given the irregularly distributed data. Nonetheless, there are a number of open questions for future research.\
First of all, we note that we should harmonize the used perturbation noise and the uncertainty $\sigma$ in future experiments. For use with real data, corrections for earthquake hypocentres ('source relocations') will need to be added in an efficient manner. Another crucial aspect that emerges from the results is the question of which trial functions are best suited for this inverse problem. Though we tried FEHFs in order to obtain a model that is comparable with other approaches, the results point towards a different dictionary for future research. In general, we conclude that the LIPMPs promise to provide an alternative numerical regularization method in comparison to other methods for travel time tomography. However, to achieve a competitive status, the method still needs to be enhanced further.

# Conclusion and Outlook {#sect:cons}

Here, we proposed to use the LRFMP algorithm, which yields an approximation in a chosen best basis of dictionary elements. For the latter, we allowed polynomials as well as tesseroid-based linear finite element hat functions in order to include both global and local trial functions in the dictionary. As this is the first application of the LRFMP to travel time tomography, we aimed for a proof of concept and followed the ray-theoretical approach.
We presented the method in general with an emphasis on the aspects that have to be remodelled for travel time tomography. These are -- besides the choice of dictionary elements -- the choice of a regularization space and the practical use of a necessary amount of data. For the former, we chose a Sobolev space due to the deep mathematical connections between finite elements and these spaces. For the latter, we introduced an additional divide-and-conquer strategy which allowed us to consider packages of rays iteratively and, in this way, to consider a suitable number of rays overall for a proof of concept. In our experiments, we considered a contrived Earth model consisting of two cone-shaped plumes between the core-mantle boundary and the Earth's surface. Our results showed that the LRFMP is able to reconstruct such structures. The approximation contains more errors where the data is rather sparse. Thus, future research could deal with a parameter study for the divide-and-conquer strategy in combination with other error-lowering and efficiency-improving approaches, as well as with tackling more complicated contrived Earth models.

# Declarations {#declarations .unnumbered}

#### Funding

V. Michel gratefully acknowledges the support by the German Research Foundation (DFG; Deutsche Forschungsgemeinschaft), project MI 655/14-1. K. Sigloch was supported by the French government through the UCAJEDI Investments in the Future project, reference number ANR-15-IDEX-01.

#### Conflicts of interest/Competing interests

Not applicable.

#### Availability of data, material and code

The code is available at [<https://doi.org/10.5281/zenodo.8227888>]{style="color: blue"} under the licence CC-BY-NC-SA 3.0 DE. The solution shown here can be computed from it. The used and corrected rays accompany the code in a compressed format. Note that the ISC-EHB meta-data is also freely available online.

#### Authors' contributions

The research was carried out during N.
Schneider's postdoc project. In this DFG-funded project, V.  Michel was the principal investigator. K.  Sigloch was the project partner from geosciences. Both supervised the project and assisted N.  Schneider. E. J.  Totten provided available data and software and assisted in the preparation of the tests. # Mathematical derivation of specific terms {#sect:app} ## Derivation of $\nabla N_{A,\Delta A}$ {#ssect:app:nablaFEHF} As we are using the gradient in the $\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)$-integrals, we need the version of $\nabla$ in spherical coordinates: $$\begin{aligned} \nabla_{r\xi(\varphi,t)} &= \varepsilon^r\frac{\partial}{\partial r}+ \frac{1}{r} \nabla^* = \varepsilon^r\frac{\partial}{\partial r}+ \frac{1}{r} \left( \varepsilon^\varphi\frac{1}{\sqrt{1-t^2}}\frac{\partial}{\partial \varphi}+ \varepsilon^t\sqrt{1-t^2}\frac{\partial}{\partial t}\right)\end{aligned}$$ see, e.g. [@Freedenetal1998; @Freedenetal2013-1; @Michel2013]. Then, for an FEHF, we obtain $$\begin{aligned} \nabla_{r\xi(\varphi,t)} &N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}(r,\varphi,t) \\ &= \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t)\\ &\qquad \times \left( \varepsilon^r\frac{\partial}{\partial r}N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}(r,\varphi,t)\right.\\ &\qquad \qquad + \frac{1}{r}\varepsilon^\varphi\frac{1}{\sqrt{1-t^2}}\frac{\partial}{\partial \varphi}N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}(r,\varphi,t)\\ &\qquad \qquad \left. 
+ \frac{1}{r}\varepsilon^t\sqrt{1-t^2}\frac{\partial}{\partial t}N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}(r,\varphi,t) \right)\\ &= \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t) \\ &\qquad\times \left( \varepsilon^r\frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\right.\\ &\qquad\qquad + \frac{1}{r}\varepsilon^\varphi\frac{1}{\sqrt{1-t^2}} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T}\\ &\qquad\qquad \left. + \frac{1}{r}\varepsilon^t\sqrt{1-t^2}\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T} \right)\end{aligned}$$ almost everywhere because $$\begin{aligned} \frac{\partial}{\partial a_k} \frac{\Delta A_k-|a_k-A_k|}{\Delta A_k} = \frac{- \frac{\partial}{\partial a_k} |a_k-A_k|}{\Delta A_k} = \frac{-\mathop{\mathrm{sgn}}(a_k-A_k)}{\Delta A_k}.\end{aligned}$$ Note that this is only piecewise continuous with respect to the differentiated component but it is still continuous for the other components. 
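The sign-based almost-everywhere derivative derived above can be spot-checked numerically. The following minimal sketch (the helper names `hat` and `hat_prime` are ours, not from the implementation) compares the derivative $-\mathop{\mathrm{sgn}}(a-A)/\Delta A$ of one factor of an FEHF with central finite differences away from the kinks $a = A,\ A \pm \Delta A$:

```python
import numpy as np

def hat(a, A, dA):
    """One 1D factor of an FEHF: max(0, (dA - |a - A|) / dA)."""
    return np.maximum(0.0, (dA - np.abs(a - A)) / dA)

def hat_prime(a, A, dA):
    """Analytic a.e.-derivative: -sgn(a - A) / dA inside the support, else 0."""
    inside = np.abs(a - A) < dA
    return np.where(inside, -np.sign(a - A) / dA, 0.0)

# Compare with central finite differences away from the kinks a = A, A +/- dA.
A, dA, h = 0.3, 0.2, 1e-6
for a in (0.18, 0.25, 0.35, 0.42):
    fd = (hat(a + h, A, dA) - hat(a - h, A, dA)) / (2 * h)
    assert abs(fd - hat_prime(a, A, dA)) < 1e-5
```

Since the hat is piecewise linear, the finite differences agree with the sign formula up to rounding as long as the evaluation points avoid the kinks.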
## Derivation of $\nabla G_{m,n,j}^\mathrm{I}$ {#ssect:app:nablaGI} Similarly, for the polynomials with $n\geq 1$, we obtain $$\begin{aligned} &\nabla_{r\xi(\varphi,t)} G_{m,n,j}^{\mathrm{I}} (r\xi(\varphi,t))\\ &= p_{m,n}\left[ \varepsilon^r\frac{\partial}{\partial r}+ \frac{1}{r} \nabla^*\right] \left[P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n} Y_{n,j}(\xi(\varphi,t))\right]\\ &= p_{m,n}\varepsilon^r\frac{\partial}{\partial r}\left[ P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n}\right] Y_{n,j}(\xi(\varphi,t))\\ &\qquad + \frac{p_{m,n}}{r} \nabla^* \left[P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n} Y_{n,j}(\xi(\varphi,t))\right]\\ &= p_{m,n}\left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} + \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \right] \mu_n^{(1)}y^{(1)}_{n,j}(\xi(\varphi,t))\\ &\qquad + \frac{p_{m,n}}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \mu_n^{(2)}y^{(2)}_{n,j}(\xi(\varphi,t)) \label{eq:nablaGImnj}\end{aligned}$$ with $$\begin{aligned} \mu_n^{(i)} &\coloneqq \left\{\begin{matrix} 1,&i=1\\ \sqrt{n(n+1)},&i=2,3, \end{matrix} \right. \label{def:mu}\end{aligned}$$ and the vector spherical harmonics $y_{n,j}^{(i)}$, see e.g. [@DahlenTromp1998; @Freedenetal2013-1; @Freedenetal2009; @Michel2020; @MorseFeshbachI1953; @MorseFeshbachII1953]. 
In the case $n=0$, the angular derivative as well as the derivative of $(r/\mathbf{R})^{n}$ vanish: $$\begin{aligned} \nabla_{r\xi(\varphi,t)} G_{m,0,0}^{\mathrm{I}} (r\xi(\varphi,t)) &= p_{m,0}\left[ \varepsilon^r\frac{\partial}{\partial r}+ \frac{1}{r} \nabla^*\right] P_m^{(0,1/2)}\left(I(r)\right) Y_{0,0}(\xi(\varphi,t))\\ &= p_{m,0}\left[ \varepsilon^r\frac{\partial}{\partial r}+ \frac{1}{r} \nabla^*\right] P_m^{(0,1/2)}\left(I(r)\right) \frac{1}{\sqrt{4\pi}}\\ &= p_{m,0}\left(P_m^{(0,1/2)}\left(I(r)\right)\right)'I'(r) \left(\mu_0^{(1)}\right)y^{(1)}_{0,0}(\xi(\varphi,t)). \label{eq:nablaGIm00}\end{aligned}$$ Note that [\[eq:nablaGIm00\]](#eq:nablaGIm00){reference-type="ref" reference="eq:nablaGIm00"} can be written in the form [\[eq:nablaGImnj\]](#eq:nablaGImnj){reference-type="ref" reference="eq:nablaGImnj"} due to the multiplication with $0$ (2nd term) and by defining $y_{0,0}^{(2)} \coloneqq 0$ (3rd term). Hence, the gradient [\[eq:nablaGImnj\]](#eq:nablaGImnj){reference-type="ref" reference="eq:nablaGImnj"} is well-defined for all $r$ and all possible $m,\ n$ and $j$.
For practical purposes, we need to specify the gradient in more detail: $$\begin{aligned} &\nabla_{r\xi(\varphi,t)} G_{m,n,j}^{\mathrm{I}}(r\xi(\varphi,t)) \\ &=p_{m,n}\left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} + \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \right] y^{(1)}_{n,j}(\xi(\varphi,t))\\ &\qquad + \frac{p_{m,n}}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \mu_n^{(2)}y^{(2)}_{n,j}(\xi(\varphi,t))\label{eq:l2IPGI_1}\\ &= p_{m,n} \left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} q_{n,j} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t) \right.\\ &\qquad + \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} q_{n,j} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\\ &\qquad + \left. \frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right)\left(\frac{r}{\mathbf{R}}\right)^{n-1} q_{n,j} \nabla^* \left(P_{n,|j|}(t)\mathrm{Trig}(j\varphi)\right) \right]\\ &= p_{m,n}q_{n,j}\left[\left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\right.\\ &\qquad + \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\\ &\qquad + \frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \frac{1}{\sqrt{1-t^2}} P_{n,|j|}(t)\left(\mathrm{Trig}(j\varphi)\right)'\varepsilon^\varphi(\varphi,t)\\ &\qquad + \left. 
\frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \sqrt{1-t^2} P'_{n,|j|}(t)\mathrm{Trig}(j\varphi)\varepsilon^t(\varphi,t) \right]\end{aligned}$$ $$\begin{aligned} &= p_{m,n}q_{n,j} \left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\right.\\ &\qquad + \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\\ &\qquad + \frac{j}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \frac{1}{\sqrt{1-t^2}} P_{n,|j|}(t)\mathrm{Trig}(-j\varphi)\varepsilon^\varphi(\varphi,t)\\ &\qquad + \left.\frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \sqrt{1-t^2} P'_{n,|j|}(t)\mathrm{Trig}(j\varphi)\varepsilon^t(\varphi,t)\right]\\ &\eqqcolon p_{m,n}q_{n,j} \sum_{p=1}^4 G_{m,n,j;p}^{\mathrm{I}} (r\xi(\varphi,t)).\end{aligned}$$ Note that the well-definedness of the terms $$\begin{aligned} \frac{P_{n,|j|}(t)}{\sqrt{1-t^2}} \qquad \textrm{and} \qquad \sqrt{1-t^2} P'_{n,|j|}(t)\end{aligned}$$ used only for $n\geq 1$ was already discussed in [@Schneider2020]. The latter can also be computed with the respective algorithm given there. We discuss the former here in a bit more detail as this increases the efficiency in our implementation. Away from the poles, the term can be calculated straightforwardly. In a neighbourhood of the North and South Pole (i.e. for $t\to \pm 1$), we obtain $$\begin{aligned} \lim_{t\to\pm 1} \frac{P_{n,|j|}(t)}{\sqrt{1-t^2}} = \lim_{t\to\pm 1} \left(1-t^2\right)^{(|j|-1)/2} P^{(|j|)}_{n}(t) = \left\{ \begin{matrix} \lim_{t\to\pm 1} \left(1-t^2\right)^{0} P'_{n}(t) = P'_{n}(\pm 1), & |j|=1\\ 0, & |j|>1, \end{matrix}\right\}\end{aligned}$$ where the lower row holds because every derivative of a Legendre polynomial is bounded in $[-1,1]$ for trivial reasons. 
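The two limits above can also be verified symbolically. A small sketch with SymPy, assuming the unnormalized associated Legendre functions $P_{n,|j|}(t)=(1-t^{2})^{|j|/2}P_n^{(|j|)}(t)$ (the normalization factors $q_{n,j}$ are kept separate in the text and would only rescale the limit):

```python
import sympy as sp

t = sp.symbols('t')
n = 4  # sample degree; any fixed n >= 1 works the same way
lims = []
for j in (1, 2, 3):
    # unnormalized associated Legendre function P_{n,|j|}(t)
    Pnj = (1 - t**2)**sp.Rational(j, 2) * sp.diff(sp.legendre(n, t), t, j)
    # one-sided limit t -> 1 from inside [-1, 1]
    lims.append(sp.limit(Pnj / sp.sqrt(1 - t**2), t, 1, dir='-'))

# |j| = 1: the limit equals P_n'(1) = n(n+1)/2; |j| > 1: the limit vanishes.
assert lims == [sp.Rational(n * (n + 1), 2), 0, 0]
```

The one-sided direction `dir='-'` matters here, since $\sqrt{1-t^{2}}$ is only real for $|t|\leq 1$.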
## Derivation of regularization terms {#ssect:app:H1IPs} At first, we list certain trigonometric identities that we will need hereafter: $$\begin{aligned} &\sin(\alpha) + \sin(\beta) = 2\sin\left(\frac{\alpha+\beta}{2}\right)\cos\left(\frac{\alpha-\beta}{2}\right),& &\sin(\alpha) - \sin(\beta) = 2\cos\left(\frac{\alpha+\beta}{2}\right)\sin\left(\frac{\alpha-\beta}{2}\right),\\ &\cos(\alpha) + \cos(\beta) = 2\cos\left(\frac{\alpha+\beta}{2}\right)\cos\left(\frac{\alpha-\beta}{2}\right),& &\cos(\alpha) - \cos(\beta) = -2\sin\left(\frac{\alpha+\beta}{2}\right)\sin\left(\frac{\alpha-\beta}{2}\right),\\ &\sin(\alpha)\sin(\beta) = \frac{1}{2}\left(\cos\left(\alpha-\beta\right) - \cos\left(\alpha + \beta\right) \right),& &\cos(\alpha)\cos(\beta) = \frac{1}{2}\left(\cos\left(\alpha-\beta\right) + \cos\left(\alpha + \beta\right) \right),\\ &\sin(\alpha)\cos(\beta) = \frac{1}{2}\left(\sin\left(\alpha-\beta\right) + \sin\left(\alpha + \beta\right) \right),& &\int \sin(x)\cos(x) \mathrm{d}x = \frac{\sin^2(x)}{2},\\ &\int x\sin(ax) \mathrm{d}x = \frac{\sin(ax)}{a^2} - \frac{x\cos(ax)}{a},& &\int x\cos(ax) \mathrm{d}x = \frac{\cos(ax)}{a^2} + \frac{x\sin(ax)}{a},\\ &\int \sin^2(x) \mathrm{d}x = \frac{x}{2} - \frac{\sin(2x)}{4},& &\int \cos^2(x) \mathrm{d}x = \frac{x}{2} + \frac{\sin(2x)}{4}.\end{aligned}$$ We have to discuss the following inner products $$\begin{aligned} &\left\langle G_{m,n,j}^{\mathrm{I}}, G_{m',n',j'}^{\mathrm{I}} \right\rangle_{\mathcal{H}^1},& &\left\langle N_{A,\Delta A}, N_{A',(\Delta A)'} \right\rangle_{\mathcal{H}^1},& &\left\langle N_{A,\Delta A}, G_{m,n,j}^{\mathrm{I}} \right\rangle_{\mathcal{H}^1}, \intertext{which practically means computing} &\left\langle G_{m,n,j}^{\mathrm{I}}, G_{m',n',j'}^{\mathrm{I}} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)},& &\left\langle N_{A,\Delta A}, N_{A',(\Delta A)'} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)},& &\left\langle N_{A,\Delta A}, G_{m,n,j}^{\mathrm{I}} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)},\\ &\left\langle \nabla G_{m,n,j}^{\mathrm{I}}, \nabla G_{m',n',j'}^{\mathrm{I}} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)},& &\left\langle \nabla N_{A,\Delta A}, \nabla N_{A',(\Delta A)'} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)},& &\left\langle \nabla N_{A,\Delta A}, \nabla G_{m,n,j}^{\mathrm{I}} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)}.\end{aligned}$$ We start with two polynomials. We have $$\begin{aligned} \left\langle G_{m,n,j}^{\mathrm{I}}, G_{m',n',j'}^{\mathrm{I}} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)} = \delta_{m,m'}\delta_{n,n'}\delta_{j,j'} \end{aligned}$$ because these functions constitute an orthonormal basis system in $\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)$. Next, we consider the vectorial inner product of their gradients. Note that the vector spherical harmonics are orthonormal with respect to their degrees, orders and types.
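The trigonometric identities collected at the beginning of this subsection are elementary but easy to mistype; a minimal numerical spot-check of the sum and product formulas:

```python
import math
import random

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    # difference of cosines
    assert math.isclose(math.cos(a) - math.cos(b),
                        -2 * math.sin((a + b) / 2) * math.sin((a - b) / 2),
                        abs_tol=1e-12)
    # product-to-sum for sin * cos
    assert math.isclose(math.sin(a) * math.cos(b),
                        0.5 * (math.sin(a - b) + math.sin(a + b)),
                        abs_tol=1e-12)
    # product-to-sum for sin * sin and cos * cos
    assert math.isclose(math.sin(a) * math.sin(b),
                        0.5 * (math.cos(a - b) - math.cos(a + b)),
                        abs_tol=1e-12)
    assert math.isclose(math.cos(a) * math.cos(b),
                        0.5 * (math.cos(a - b) + math.cos(a + b)),
                        abs_tol=1e-12)
```

All quantities involved are bounded by 2 in absolute value, so an absolute tolerance of $10^{-12}$ is generous for double precision.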
With [\[eq:gradIr\]](#eq:gradIr){reference-type="ref" reference="eq:gradIr"}, [\[def:mu\]](#def:mu){reference-type="ref" reference="def:mu"} and the substitution $r=\mathbf{R}\sqrt{(1+u)/2}$, we obtain from [\[eq:l2IPGI_1\]](#eq:l2IPGI_1){reference-type="ref" reference="eq:l2IPGI_1"} $$\begin{aligned} &\left\langle \nabla G_{m,n,j}^{\mathrm{I}}, \nabla G_{m',n',j'}^{\mathrm{I}}\right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)}\\ &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\qquad \times \left[\int_0^{\mathbf{R}} \left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} + \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \right]\right.\\ &\qquad\qquad\qquad \times \left[ \left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} + \frac{n}{\mathbf{R}} P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \right] r^2 \mathrm{d}r \\ &\qquad\qquad + \left. \left(\mu_n^{(2)}\right)^2\int_0^{\mathbf{R}} \left[\frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \right]\left[\frac{1}{\mathbf{R}} P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \right] r^2 \mathrm{d}r \right]\\ &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\qquad \times \left[\left(\mu_n^{(1)}\right)^2\int_0^{\mathbf{R}} \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} \left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} r^2 \mathrm{d}r \right. 
\\ &\qquad\qquad + \left(\mu_n^{(1)}\right)^2\int_0^{\mathbf{R}} \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n}\frac{n}{\mathbf{R}} P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} r^2 \mathrm{d}r \\ &\qquad\qquad + \left(\mu_n^{(1)}\right)^2\int_0^{\mathbf{R}}\frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right)\left(\frac{r}{\mathbf{R}}\right)^{n-1} \left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)' I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} r^2 \mathrm{d}r \\ &\qquad\qquad + \left(\mu_n^{(1)}\right)^2\int_0^{\mathbf{R}} \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \frac{n}{\mathbf{R}} P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} r^2 \mathrm{d}r\\ &\qquad\qquad + \left. \left(\mu_n^{(2)}\right)^2\int_0^{\mathbf{R}} \frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1}\frac{1}{\mathbf{R}} P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} r^2 \mathrm{d}r \right]\\ &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\qquad \times \left[\int_0^{\mathbf{R}} \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'\left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)'(I'(r))^2 \left(\frac{r}{\mathbf{R}}\right)^{2n} r^2 \mathrm{d}r \right. \\ &\qquad\qquad\qquad + \frac{n}{\mathbf{R}}\int_0^{\mathbf{R}} \left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right.\\ &\qquad\qquad\qquad\qquad\qquad \left. + P_m^{(0,n+1/2)}\left(I(r)\right)\left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)'\right] I'(r) \left(\frac{r}{\mathbf{R}}\right)^{2n-1} r^2 \mathrm{d}r \\ &\qquad\qquad\qquad +\left. 
\left( n^2 + n(n+1)\right)\int_0^{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right)P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{2n} \mathrm{d}r \right]\end{aligned}$$ $$\begin{aligned} &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\qquad \times \left[\int_0^{\mathbf{R}} \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'\left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)'\frac{16r^2}{\mathbf{R}^4} \left(\frac{r}{\mathbf{R}}\right)^{2n} r^2 \mathrm{d}r \right. \\ &\qquad\qquad\qquad + \frac{n}{\mathbf{R}}\int_0^{\mathbf{R}} \left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right.\\ &\qquad\qquad\qquad\qquad\qquad \left. + P_m^{(0,n+1/2)}\left(I(r)\right)\left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)'\right] \frac{4r}{\mathbf{R}^2} \left(\frac{r}{\mathbf{R}}\right)^{2n-1} r^2 \mathrm{d}r \\ &\qquad\qquad\qquad +\left. \left( n^2 + n^2+n\right)\int_0^{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right)P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{2n} \mathrm{d}r \right]\\ &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\qquad \times \left[16\int_0^{\mathbf{R}} \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'\left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)' \left(\frac{r}{\mathbf{R}}\right)^{2n+4} \mathrm{d}r \right. \\ &\qquad\qquad\qquad + 4n\int_0^{\mathbf{R}} \left[ \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right.\\ &\qquad\qquad\qquad\qquad\qquad \left. + P_m^{(0,n+1/2)}\left(I(r)\right)\left(P_{m'}^{(0,n+1/2)}\left(I(r)\right)\right)'\right] \left(\frac{r}{\mathbf{R}}\right)^{2n+2} \mathrm{d}r \\ &\qquad\qquad\qquad +\left. 
\left( 2n^2 + n\right)\int_0^{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right)P_{m'}^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{2n} \mathrm{d}r \right]\\ &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\qquad \times \left[16\int_{-1}^{1} \left(P_m^{(0,n+1/2)}(u)\right)'\left(P_{m'}^{(0,n+1/2)}(u)\right)' \left(\frac{1+u}{2}\right)^{n+2} \frac{\mathbf{R}}{4}\sqrt{\frac{2}{1+u}}\mathrm{d}u \right. \\ &\qquad\qquad\qquad + 4n\int_{-1}^{1} \left[ \left(P_m^{(0,n+1/2)}(u)\right)'P_{m'}^{(0,n+1/2)}(u)\right.\\ &\qquad\qquad\qquad\qquad\qquad \left. + P_m^{(0,n+1/2)}(u)\left(P_{m'}^{(0,n+1/2)}(u)\right)'\right] \left(\frac{1+u}{2}\right)^{n+1} \frac{\mathbf{R}}{4}\sqrt{\frac{2}{1+u}}\mathrm{d}u \\ &\qquad\qquad\qquad +\left. \left( n(2n+1)\right)\int_{-1}^{1} P_m^{(0,n+1/2)}(u)P_{m'}^{(0,n+1/2)}(u) \left(\frac{1+u}{2}\right)^{n} \frac{\mathbf{R}}{4}\sqrt{\frac{2}{1+u}}\mathrm{d}u \right]\\ &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\quad \times \left[4\mathbf{R}\int_{-1}^{1} \left(P_m^{(0,n+1/2)}(u)\right)'\left(P_{m'}^{(0,n+1/2)}(u)\right)' \left(\frac{1+u}{2}\right)^{n+3/2} \mathrm{d}u \right. \\ &\qquad\qquad\qquad + \mathbf{R}n\int_{-1}^{1} \left[ \left(P_m^{(0,n+1/2)}(u)\right)'P_{m'}^{(0,n+1/2)}(u)\right.\\ &\qquad\qquad\qquad\qquad\qquad \left. + P_m^{(0,n+1/2)}(u)\left(P_{m'}^{(0,n+1/2)}(u)\right)'\right] \left(\frac{1+u}{2}\right)^{n+1/2} \mathrm{d}u \\ &\qquad\qquad\qquad +\left. \frac{\mathbf{R} n(2n+1)}{4}\int_{-1}^{1} P_m^{(0,n+1/2)}(u)P_{m'}^{(0,n+1/2)}(u) \left(\frac{1+u}{2}\right)^{n-1/2} \mathrm{d}u \right]\end{aligned}$$ $$\begin{aligned} &= \delta_{n,n'}\delta_{j,j'} p_{m,n}p_{m',n} \\ &\quad \times \left[\frac{\mathbf{R}\sqrt{2}}{2^{n}}\int_{-1}^{1} \left(P_m^{(0,n+1/2)}(u)\right)'\left(P_{m'}^{(0,n+1/2)}(u)\right)' \left(1+u\right)^{n+3/2} \mathrm{d}u \right.
\\ &\qquad\qquad\qquad + \frac{\mathbf{R}n}{2^{n}\sqrt{2}}\int_{-1}^{1} \left[ \left(P_m^{(0,n+1/2)}(u)\right)'P_{m'}^{(0,n+1/2)}(u)\right.\\ &\qquad\qquad\qquad\qquad\qquad \left. + P_m^{(0,n+1/2)}(u)\left(P_{m'}^{(0,n+1/2)}(u)\right)'\right] \left(1+u\right)^{n+1/2} \mathrm{d}u \\ &\qquad\qquad\qquad +\left. \frac{\mathbf{R} n(2n+1)}{2^{n+1}\sqrt{2}}\int_{-1}^{1} P_m^{(0,n+1/2)}(u)P_{m'}^{(0,n+1/2)}(u) \left(1+u\right)^{n-1/2} \mathrm{d}u \right].\end{aligned}$$ Next, we discuss the inner products of two finite element hat functions. Generally, we have $$\begin{aligned} &\left\langle N_{A,\Delta A}, N_{A',(\Delta A)'}\right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)}\\ &\qquad = \int_\mathbb{B}\chi_{\mathrm{supp}_{A,\Delta A}}(a) \chi_{\mathrm{supp}_{A',(\Delta A)'}}(a)\prod_{j=1}^{3} \frac{[\Delta A_j-|a_j-A_j|][(\Delta A_j)'-|a_j-A'_j|]}{\Delta A_j (\Delta A_j)'} \mathrm{d}a\\ &\qquad =\prod_{j=1}^{3} \int_{lb_{a_j}}^{ub_{a_j}} \frac{[\Delta A_j-|a_j-A_j|][(\Delta A_j)'-|a_j-A'_j|]}{\Delta A_j (\Delta A_j)'} \left\{\begin{matrix} a_1^2,\ j=1,\\ 1,\ j=2,3\end{matrix} \right\} \mathrm{d}a_j,\end{aligned}$$ where $lb_{a_j}$ denotes the lower and $ub_{a_j}$ the upper bound of the respective $a_j$-integral. That is, informally speaking, the domain of integration is principally the intersection $$\begin{aligned} \mathrm{supp}_{A,\Delta A} \cap \mathrm{supp}_{A',(\Delta A)'},\end{aligned}$$ though we have to take a detailed look at the case $a_2$ if $P\not=P'$ (see below). As a generalization, we consider the integrals $$\begin{aligned} \int_{lb_{x}}^{ub_{x}} \frac{[\Delta X-|x-X|][(\Delta X)'-|x-X'|]}{\Delta X (\Delta X)'} x^q \mathrm{d}x \label{def:intNNa}\end{aligned}$$ for $q\in\mathbb{N}_0$ (here, in particular, $q=0$ and $q=2$) in the sequel. Then, the $\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)$-integral is obtained as the product of this integral for all variables $x=a_j,\ j=1,2,3$. At first, we need to consider how the lower and upper bounds are defined.
Furthermore, because the FEHFs are piecewise defined, we also have to determine the critical points in between $lb_x$ and $ub_x$. Finally, we need to discuss the remaining integrals.\
For $x=a_1=r$ and $x=a_3=t$, there are three cases for each of the two supports, depending on $X,\ \Delta X,\ X'$ and $(\Delta X)'$:

- $\mathrm{supp}_{X,\Delta X}= [X-\Delta X, X+\Delta X]$, $\mathrm{supp}_{X,\Delta X}= [X_{\mathrm{min}}, X+\Delta X]$ or $\mathrm{supp}_{X,\Delta X}= [X-\Delta X, X_{\mathrm{max}}]$;
- $\mathrm{supp}_{X',(\Delta X)'}= [X'-(\Delta X)', X'+(\Delta X)']$, $\mathrm{supp}_{X',(\Delta X)'}= [X_{\mathrm{min}}, X'+(\Delta X)']$ or $\mathrm{supp}_{X',(\Delta X)'}= [X'-(\Delta X)', X_{\mathrm{max}}]$.

In any combination of these cases, we obtain the lower bound as $$\begin{aligned} lb_x &= \max[ \max(X_{\mathrm{min}},X-\Delta X), \max(X_{\mathrm{min}},X'-(\Delta X)')] \label{def:lbxgen}\end{aligned}$$ and the upper bound as $$\begin{aligned} ub_x &= \min [\min(X_{\mathrm{max}},X+\Delta X), \min(X_{\mathrm{max}},X'+(\Delta X)')] \label{def:ubxgen}\end{aligned}$$ Note that if $lb_x \geq ub_x$ (componentwise), the intersection is the empty set and the integral vanishes.
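As an illustration, the following sketch (our own helper names, not from the implementation) computes $lb_x$ and $ub_x$ as in [\[def:lbxgen\]](#def:lbxgen){reference-type="ref" reference="def:lbxgen"} and [\[def:ubxgen\]](#def:ubxgen){reference-type="ref" reference="def:ubxgen"} and evaluates the one-dimensional overlap integral [\[def:intNNa\]](#def:intNNa){reference-type="ref" reference="def:intNNa"} by a composite trapezoidal rule; disjoint supports yield a vanishing integral:

```python
import numpy as np

def overlap_bounds(X, dX, Xp, dXp, Xmin, Xmax):
    """lb_x and ub_x of the clipped support intersection."""
    lb = max(max(Xmin, X - dX), max(Xmin, Xp - dXp))
    ub = min(min(Xmax, X + dX), min(Xmax, Xp + dXp))
    return lb, ub

def hat_overlap_integral(X, dX, Xp, dXp, Xmin, Xmax, q=0, N=100001):
    """Trapezoidal value of the 1D hat-hat integral; zero if lb_x >= ub_x."""
    lb, ub = overlap_bounds(X, dX, Xp, dXp, Xmin, Xmax)
    if lb >= ub:
        return 0.0
    x = np.linspace(lb, ub, N)
    f = (np.maximum(0.0, dX - np.abs(x - X)) / dX
         * np.maximum(0.0, dXp - np.abs(x - Xp)) / dXp
         * x**q)
    return float(((f[:-1] + f[1:]) / 2.0 * np.diff(x)).sum())

# Disjoint supports: the integral vanishes.
assert hat_overlap_integral(0.2, 0.1, 0.8, 0.1, 0.0, 1.0) == 0.0
# Identical hats with q = 0: the exact value is 2 * dX / 3.
val = hat_overlap_integral(0.5, 0.3, 0.5, 0.3, 0.0, 1.0)
assert abs(val - 2 * 0.3 / 3) < 1e-6
```

In the paper the integrals are computed analytically; the quadrature here merely serves to make the bound logic testable.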
For $x=a_2=\varphi$, we also have three relevant cases, now depending on the signs $\mathop{\mathrm{sgn}}(P)$ and $\mathop{\mathrm{sgn}}(P')$:

- $\mathop{\mathrm{sgn}}(P)=-1 \Rightarrow \mathrm{supp}_{X,\Delta X}= [\max(-\pi,X-\Delta X), \min(\pi,X+\Delta X)]$;
- $\mathop{\mathrm{sgn}}(P)=0 \Rightarrow \mathrm{supp}_{X,\Delta X}= [\max(0,X-\Delta X), \min(2\pi,X+\Delta X)]$;
- $\mathop{\mathrm{sgn}}(P)=1 \Rightarrow \mathrm{supp}_{X,\Delta X}= [\max(\pi,X-\Delta X), \min(3\pi,X+\Delta X)]$;

and analogously for $\mathop{\mathrm{sgn}}(P')$ with $X'$ and $(\Delta X)'$, e.g. $\mathop{\mathrm{sgn}}(P')=-1 \Rightarrow \mathrm{supp}_{X',(\Delta X)'}= [\max(-\pi,X'-(\Delta X)'), \min(\pi,X'+(\Delta X)')]$. If $\mathop{\mathrm{sgn}}(P)=\mathop{\mathrm{sgn}}(P')$ (i.e. the cases $(-1,-1),\ (0,0)$ and $(1,1)$), we obtain the same lower and upper bound as in [\[def:lbxgen\]](#def:lbxgen){reference-type="ref" reference="def:lbxgen"} and [\[def:ubxgen\]](#def:ubxgen){reference-type="ref" reference="def:ubxgen"}, respectively, but with the respective $X_{\mathrm{min}} \in \{-\pi,0,\pi\}$.
If $\mathop{\mathrm{sgn}}(P)=-\mathop{\mathrm{sgn}}(P')\not=0$ (i.e. the cases $(1,-1)$ and $(-1,1)$), we shift one of them by $2\pi$ and obtain $\mathop{\mathrm{sgn}}(P)=\mathop{\mathrm{sgn}}(P')$ (and $P=P'$). Last but not least, we have the cases $(0,1),\ (1,0),\ (0,-1)$ and $(-1,0)$, i.e. where one, let it be $\mathop{\mathrm{sgn}}(P)$, equals zero and the other one, here $\mathop{\mathrm{sgn}}(P')$, is $\pm 1$. If $\mathop{\mathrm{sgn}}(P')=-1$, then we can cut the support $\mathrm{supp}_{X',(\Delta X)'} \subset [-\pi,\pi]$ at 0 and shift the part that is in $[-\pi,0]$ into $[\pi,2\pi]$. Then, we obtain $\mathrm{supp}_{X',(\Delta X)'}^{\mathrm{shifted}} = [X'-(\Delta X)'+2\pi,\min(X'+(\Delta X)' + 2\pi,2\pi)]\cup[\max(0,X'-(\Delta X)'),X'+(\Delta X)']$. As a consequence, we obtain possibly two lower and upper bounds via $$\begin{aligned} [X'-(\Delta X)'+2\pi,\min(X'+(\Delta X)' + 2\pi,2\pi)] \cap \mathrm{supp}_{X,\Delta X} \intertext{and} [\max(0,X'-(\Delta X)'),X'+(\Delta X)'] \cap \mathrm{supp}_{X,\Delta X}.\end{aligned}$$ In particular, we have $$\begin{aligned} lb_x &= \left( \begin{matrix} \max[ \max(0,X-\Delta X), X'-(\Delta X)'+2\pi]\\ \max[ \max(0,X-\Delta X), \max(0,X'-(\Delta X)')] \end{matrix} \right)\end{aligned}$$ and the upper bound as $$\begin{aligned} ub_x &= \left( \begin{matrix} \min [\min(2\pi,X+\Delta X), \min(X'+(\Delta X)' + 2\pi,2\pi)]\\ \min [\min(2\pi,X+\Delta X), X'+(\Delta X)'] \end{matrix} \right).\end{aligned}$$ If $\mathop{\mathrm{sgn}}(P')=1$, we cut its domain at $2\pi$ and shift the part that is in $[2\pi,3\pi]$ into $[0,\pi]$.
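The shifted-support intersection for the case $\mathop{\mathrm{sgn}}(P)=0$, $\mathop{\mathrm{sgn}}(P')=-1$ can be sketched as follows (a minimal Python illustration; the function name `shifted_overlap` is hypothetical):

```python
from math import pi, isclose

def shifted_overlap(X, dX, Xp, dXp):
    """Case sgn(P) = 0, sgn(P') = -1: supp' lies in [-pi, pi], is cut at 0,
    and its negative part is shifted into [pi, 2*pi]. The result is up to
    two intervals, each intersected with supp = [max(0, X-dX), min(2pi, X+dX)].
    Returns the list of non-empty (lb, ub) pairs."""
    supp = (max(0.0, X - dX), min(2 * pi, X + dX))
    pieces = [
        (Xp - dXp + 2 * pi, min(Xp + dXp + 2 * pi, 2 * pi)),  # shifted part
        (max(0.0, Xp - dXp), Xp + dXp),                        # part already >= 0
    ]
    out = []
    for a, b in pieces:
        lb, ub = max(supp[0], a), min(supp[1], b)
        if lb < ub:
            out.append((lb, ub))
    return out
```

Depending on the data, zero, one, or two intervals survive; the subsequent integration is then carried out on each surviving interval separately.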
Thus, analogously, we obtain $\mathrm{supp}_{X',(\Delta X)'}^{\mathrm{shifted}} = [X'-(\Delta X)',\min(2\pi,X'+(\Delta X)')]\cup[\max(0,X'-(\Delta X)'-2\pi),X'+(\Delta X)'-2\pi]$ and $$\begin{aligned} lb_x &= \left( \begin{matrix} \max[ \max(0,X-\Delta X), X'-(\Delta X)']\\ \max[ \max(0,X-\Delta X), \max(0,X'-(\Delta X)'-2\pi)] \end{matrix} \right)\end{aligned}$$ and the upper bound as $$\begin{aligned} ub_x &= \left( \begin{matrix} \min [\min(2\pi,X+\Delta X), \min(2\pi,X'+(\Delta X)')]\\ \min [\min(2\pi,X+\Delta X), X'+(\Delta X)'-2\pi] \end{matrix} \right).\end{aligned}$$ For the cases where $\mathop{\mathrm{sgn}}(P')=0$ and $\mathop{\mathrm{sgn}}(P)=\pm 1$, we obtain analogous solutions (exchange $X$ with $X'$, $\Delta X$ with $(\Delta X)'$ and vice versa). Note again that, in all cases, if $lb_x \geq ub_x$ (componentwise), the intersection is the empty set and the integral vanishes. We explain the following steps of determining critical points and deriving the remaining integrals only for the case where we have one integration interval $[lb_x,ub_x]$. If we have two, we can execute these steps for both separately and add the integral values to obtain the value of [\[def:intNNa\]](#def:intNNa){reference-type="ref" reference="def:intNNa"} with respect to $x=a_2=\varphi$.\ Critical points here are those points between the lower and upper bound(s) where one of the FEHFs turns from an increasing to a decreasing hat slope (always with respect to a fixed dimension). They are determined as follows. The critical points can obviously only come from $\{X,\ X\pm\Delta X,\ X',\ X'\pm(\Delta X)'\}.$ Thus, we sort these candidates in increasing order and then check for each value whether it lies in $[lb_x,ub_x]$. For practical purposes, it is sensible to count how many critical points -- including $lb_x$ and $ub_x$ -- we have. Let this count be $I\leq 6$. In the sequel, we write $p_i$ for a critical point: $lb_x \leq p_i \leq ub_x,\ i=1,...,I,\ I\leq 6$.
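The determination of critical points described above can be sketched in a few lines of Python (the function name `critical_points` is our own):

```python
def critical_points(lb, ub, X, dX, Xp, dXp):
    """Sorted breakpoints of the product of two hats on [lb, ub], including
    lb and ub themselves. Candidates strictly outside (lb, ub) are dropped,
    since outside the intersection they do not subdivide the integration
    interval; at most 6 points remain in total."""
    candidates = [X, X - dX, X + dX, Xp, Xp - dXp, Xp + dXp]
    inner = [p for p in candidates if lb < p < ub]
    return sorted([lb] + inner + [ub])
```

Consecutive pairs of the returned list are then the subintervals on which both hats are affine, so the remaining integrals are integrals of polynomials.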
Note that there cannot be more than 6 different critical points because of the definition of $lb_x$ and $ub_x$. The integrals in [\[def:intNNa\]](#def:intNNa){reference-type="ref" reference="def:intNNa"} are, thus, equal to $$\begin{aligned} \sum_{i=1}^{I-1} \int_{p_i}^{p_{i+1}} \frac{[\Delta X-|x-X|][(\Delta X)'-|x-X'|]}{\Delta X (\Delta X)'}x^q \mathrm{d}x. \label{eq:intNNasum}\end{aligned}$$ Note that if two critical points $p_i$ and $p_{i+1}$ coincide, the respective integral from $p_i$ to $p_{i+1}$ vanishes, so this situation requires no special treatment. We first consider the following $$\begin{aligned} \sum_{i=1}^{I-1} &\int_{p_i}^{p_{i+1}} \frac{[\Delta X-|x-X|][(\Delta X)'-|x-X'|]}{\Delta X (\Delta X)'}x^q \mathrm{d}x\\ &= \sum_{i=1}^{I-1} \int_{p_i}^{p_{i+1}} \frac{[\Delta X-\mathop{\mathrm{sgn}}(x-X)(x-X)][(\Delta X)'-\mathop{\mathrm{sgn}}(x-X')(x-X')]}{\Delta X (\Delta X)'}x^q \mathrm{d}x\\ &= \sum_{i=1}^{I-1}\int_{p_i}^{p_{i+1}} \frac{\left[\Delta X \left[\begin{matrix} +\\-\\-\\+ \end{matrix} \right](x-X)\right]\left[(\Delta X)'\left[\begin{matrix} +\\-\\+\\- \end{matrix}\right](x-X')\right]}{\Delta X (\Delta X)'} x^q \mathrm{d}x,\\\end{aligned}$$ where the last two cases coincide up to exchanging the roles of $X$ and $X'$. Note that we need to determine the values of $-\mathop{\mathrm{sgn}}(x-X)$ and $-\mathop{\mathrm{sgn}}(x-X')$ for this. For $x=a_1=r$ and $x=a_3=t$, the sign value can be obtained straightforwardly. Similarly, this holds for $x=a_2=\varphi$ if $(P,P') \not \in \{(0,1),\ (1,0),\ (0,-1),\ (-1,0)\}$, that is, if we exclude the cases where we shifted the support of one FEHF in order to obtain the upper and lower bound. In the latter cases, we have to shift $P$ or $P'$ here according to the shift done before in order to obtain the correct sign values. In practice, we can also shift $P$ or $P'$ once at the beginning of the computation of the inner product instead.
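Between consecutive critical points the integrand is a polynomial of degree $q+2$, so the sum over subintervals can be evaluated exactly with a low-order Gauss--Legendre rule. A minimal sketch, assuming NumPy (the names `hat` and `product_integral` are hypothetical):

```python
import numpy as np

def hat(x, X, dX):
    """Normalized hat (Delta X - |x - X|) / Delta X, clipped at zero."""
    return np.maximum(0.0, 1.0 - np.abs(x - X) / dX)

def product_integral(pts, X, dX, Xp, dXp, q):
    """Sum over consecutive critical points of int hat * hat' * x^q dx.
    A 5-node Gauss-Legendre rule is exact for polynomials up to degree 9,
    hence exact here for the piecewise-polynomial integrand (q + 2 <= 9)."""
    nodes, weights = np.polynomial.legendre.leggauss(5)
    total = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # map [-1, 1] -> [a, b]
        f = hat(x, X, dX) * hat(x, Xp, dXp) * x**q
        total += 0.5 * (b - a) * np.dot(weights, f)
    return total
```

For two identical hats on $[0,1]$ with peak $0.5$ and half-width $0.5$ and $q=0$, this reproduces the exact value $\int_0^1 (1-|x-0.5|/0.5)^2\,\mathrm{d}x = 1/3$.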
For readability, we only consider the integration of $x^qW\left(x;X,\Delta X,X',(\Delta X)'\right)$ defined in $$\begin{aligned} \int \frac{\left[\Delta X \left[\begin{matrix} +\\-\\-\end{matrix} \right](x-X)\right]\left[(\Delta X)'\left[\begin{matrix} +\\-\\+ \end{matrix}\right](x-X')\right]}{\Delta X (\Delta X)'}x^q\mathrm{d}x \eqqcolon &\int x^q\frac{W\left(x;X,\Delta X,X',(\Delta X)'\right)}{\Delta X (\Delta X)'}\mathrm{d}x\\ = \frac{1}{\Delta X (\Delta X)'} &\int x^qW\left(x;X,\Delta X,X',(\Delta X)'\right)\mathrm{d}x \label{eq:intNNafinal}\end{aligned}$$ in the sequel. We obtain $$\begin{aligned} &\int x^qW\left(x;X,\Delta X,X',(\Delta X)'\right)\mathrm{d}x\\ % &= \int \left[\begin{matrix} +\\+\\-\end{matrix} \right] x^{q+2} \left[\begin{matrix} +\\-\\+\end{matrix} \right] x^{q+1} \left(\left[\begin{matrix} -\\+\\+\end{matrix} \right] X\left[\begin{matrix} -\\+\\+\end{matrix} \right] X' \left[\begin{matrix} +\\+\\+\end{matrix} \right] \Delta X \left[\begin{matrix} +\\+\\-\end{matrix} \right](\Delta X)' \right)\\ &\qquad \qquad \left[\begin{matrix} +\\+\\-\end{matrix} \right] x^q \left(X \left[\begin{matrix} -\\+\\+\end{matrix} \right]\Delta X \right) \left( X' \left[\begin{matrix} -\\+\\-\end{matrix} \right](\Delta X') \right) \mathrm{d}x\\ \\ % &= \left[\begin{matrix} +\\+\\-\end{matrix} \right] \frac{x^{q+3}}{q+3} \left[\begin{matrix} +\\-\\+\end{matrix} \right] \frac{x^{q+2}}{q+2} \left(\left[\begin{matrix} -\\+\\+\end{matrix} \right] X\left[\begin{matrix} -\\+\\+\end{matrix} \right] X' \left[\begin{matrix} +\\+\\+\end{matrix} \right] \Delta X \left[\begin{matrix} +\\+\\-\end{matrix} \right](\Delta X)' \right)\\ &\qquad \qquad \left[\begin{matrix} +\\+\\-\end{matrix} \right] \frac{x^{q+1}}{q+1} \left(X \left[\begin{matrix} -\\+\\+\end{matrix} \right]\Delta X \right) \left( X' \left[\begin{matrix} -\\+\\-\end{matrix} \right](\Delta X') \right)\\ % &= \left( \left[\begin{matrix} +\\+\\-\end{matrix} \right] (q+2)(q+1)x^{q+3} \left[\begin{matrix} 
+\\-\\+\end{matrix} \right] (q+3)(q+1)x^{q+2} \left(\left[\begin{matrix} -\\+\\+\end{matrix} \right] X\left[\begin{matrix} -\\+\\+\end{matrix} \right] X' \left[\begin{matrix} +\\+\\+\end{matrix} \right] \Delta X \left[\begin{matrix} +\\+\\-\end{matrix} \right](\Delta X)' \right)\right.\\ &\qquad \qquad \left. \left[\begin{matrix} +\\+\\-\end{matrix} \right] (q+3)(q+2)x^{q+1} \left(X \left[\begin{matrix} -\\+\\+\end{matrix} \right]\Delta X \right)\left( X' \left[\begin{matrix} -\\+\\-\end{matrix} \right](\Delta X') \right)\right)\frac{1}{(q+3)(q+2)(q+1)}\\ % &= \left( \left[\begin{matrix} +\\+\\-\end{matrix} \right] (q+2)(q+1)x^{2} \left[\begin{matrix} +\\-\\+\end{matrix} \right] (q+3)(q+1)x \left(\left[\begin{matrix} -\\+\\+\end{matrix} \right] X\left[\begin{matrix} -\\+\\+\end{matrix} \right] X' \left[\begin{matrix} +\\+\\+\end{matrix} \right] \Delta X \left[\begin{matrix} +\\+\\-\end{matrix} \right](\Delta X)' \right)\right.\\ &\qquad \qquad \left. \left[\begin{matrix} +\\+\\-\end{matrix} \right] (q+3)(q+2) \left(X \left[\begin{matrix} -\\+\\+\end{matrix} \right]\Delta X \right) \left( X' \left[\begin{matrix} -\\+\\-\end{matrix} \right](\Delta X') \right)\right)\frac{x^{q+1}}{(q+3)(q+2)(q+1)}.\\\end{aligned}$$ With these values the $\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)$ inner product of two FEHFs is fully discussed. 
For the respective $\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)$ inner product, we obtain $$\begin{aligned} &\left\langle \nabla_{r\xi(\varphi,t)} N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}, \nabla_{r\xi(\varphi,t)} N_{(R',\Phi',T'),((\Delta R)',(\Delta \Phi)',(\Delta T)')}\right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)}\\ % %\nabla_{r\xi(\lon,t)} N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}(r,\lon,t) %\chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\lon,t) \\ %&\qquad\times \left( %\era \frac{[-\sgn(r-R)]}{\Delta R} \frac{\Delta \Phi-|\lon-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\right.\\ %&\qquad\qquad + \frac{1}{r}\ephi \frac{1}{\sqrt{1-t^2}} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\sgn(\lon-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T}\\ %&\qquad\qquad \left. + \frac{1}{r}\ete \sqrt{1-t^2}\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\lon-\Phi|}{\Delta \Phi}\frac{[-\sgn(t-T)]}{\Delta T} %\right) % &=\int_{\mathbb{B}} \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t)\\ &\qquad\qquad\times \chi_{\mathrm{supp}_{[(R',\Phi',T')-((\Delta R)',(\Delta \Phi)',(\Delta T)'),(R',\Phi',T')+((\Delta R)',(\Delta \Phi)',(\Delta T)')]}}(r,\varphi,t)\\ &\qquad\qquad\qquad\times \left( \frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\frac{[-\mathop{\mathrm{sgn}}(r-R')]}{(\Delta R)'} \frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'}\frac{(\Delta T)'-|t-T'|}{(\Delta T)'}\right.\\ &\qquad\qquad\qquad\qquad + \frac{1}{r^2} \frac{1}{1-t^2} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T} \frac{(\Delta R)'-|r-R'|}{(\Delta R)'} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi')]}{(\Delta \Phi)'} \frac{(\Delta T)'-|t-T'|}{(\Delta T)'}\\ 
&\qquad\qquad\qquad\qquad \left. + \frac{1}{r^2} (1-t^2)\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T}\frac{(\Delta R)'-|r-R'|}{(\Delta R)'}\frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'}\frac{[-\mathop{\mathrm{sgn}}(t-T')]}{(\Delta T)'} \right)\mathrm{d}x(r,\varphi,t)\\ % &=\int_{\mathbb{B}} \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t)\\ &\qquad\qquad\times \chi_{\mathrm{supp}_{[(R',\Phi',T')-((\Delta R)',(\Delta \Phi)',(\Delta T)'),(R',\Phi',T')+((\Delta R)',(\Delta \Phi)',(\Delta T)')]}}(r,\varphi,t)\\ &\qquad\qquad\qquad\times \frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\frac{[-\mathop{\mathrm{sgn}}(r-R')]}{(\Delta R)'} \frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'}\frac{(\Delta T)'-|t-T'|}{(\Delta T)'}\mathrm{d}x(r,\varphi,t)\\ &\qquad + \int_{\mathbb{B}} \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t)\\ &\qquad\qquad\qquad\times \chi_{\mathrm{supp}_{[(R',\Phi',T')-((\Delta R)',(\Delta \Phi)',(\Delta T)'),(R',\Phi',T')+((\Delta R)',(\Delta \Phi)',(\Delta T)')]}}(r,\varphi,t)\\ &\qquad\qquad\qquad\qquad\times \frac{1}{r^2} \frac{1}{1-t^2} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T} \frac{(\Delta R)'-|r-R'|}{(\Delta R)'} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi')]}{(\Delta \Phi)'} \frac{(\Delta T)'-|t-T'|}{(\Delta T)'}\mathrm{d}x(r,\varphi,t)\\ &\qquad + \int_{\mathbb{B}} \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t)\\ &\qquad\qquad\qquad\times \chi_{\mathrm{supp}_{[(R',\Phi',T')-((\Delta R)',(\Delta \Phi)',(\Delta T)'),(R',\Phi',T')+((\Delta R)',(\Delta \Phi)',(\Delta 
T)')]}}(r,\varphi,t)\\ &\qquad\qquad\qquad\qquad\times \frac{1}{r^2} (1-t^2)\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T}\frac{(\Delta R)'-|r-R'|}{(\Delta R)'}\frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'}\frac{[-\mathop{\mathrm{sgn}}(t-T')]}{(\Delta T)'}\mathrm{d}x(r,\varphi,t)\end{aligned}$$ $$\begin{aligned} &=\int_{lb_r}^{ub_r} \int_{lb_\varphi}^{ub_\varphi} \int_{lb_t}^{ub_t} \frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\frac{[-\mathop{\mathrm{sgn}}(r-R')]}{(\Delta R)'} \frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'}\frac{(\Delta T)'-|t-T'|}{(\Delta T)'} r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ &\quad + \int_{lb_r}^{ub_r} \int_{lb_\varphi}^{ub_\varphi} \int_{lb_t}^{ub_t} \frac{1}{r^2} \frac{1}{1-t^2} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T} \frac{(\Delta R)'-|r-R'|}{(\Delta R)'} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi')]}{(\Delta \Phi)'} \frac{(\Delta T)'-|t-T'|}{(\Delta T)'}r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ &\quad + \int_{lb_r}^{ub_r} \int_{lb_\varphi}^{ub_\varphi} \int_{lb_t}^{ub_t} \frac{1}{r^2} (1-t^2)\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T}\frac{(\Delta R)'-|r-R'|}{(\Delta R)'}\frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'}\frac{[-\mathop{\mathrm{sgn}}(t-T')]}{(\Delta T)'}r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ % &=\int_{lb_r}^{ub_r} \frac{\mathop{\mathrm{sgn}}(r-R)}{\Delta R} \frac{\mathop{\mathrm{sgn}}(r-R')}{(\Delta R)'} r^2 \mathrm{d}r \int_{lb_\varphi}^{ub_\varphi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'} \mathrm{d}\varphi \int_{lb_t}^{ub_t} \frac{\Delta T-|t-T|}{\Delta T} \frac{(\Delta T)'-|t-T'|}{(\Delta T)'} \mathrm{d}t\\ 
&\quad + \int_{lb_r}^{ub_r} \frac{\Delta R-|r-R|}{\Delta R} \frac{(\Delta R)'-|r-R'|}{(\Delta R)'} \mathrm{d}r \int_{lb_\varphi}^{ub_\varphi} \frac{\mathop{\mathrm{sgn}}(\varphi-\Phi)}{\Delta \Phi} \frac{\mathop{\mathrm{sgn}}(\varphi-\Phi')}{(\Delta \Phi)'} \mathrm{d}\varphi \int_{lb_t}^{ub_t} \frac{1}{1-t^2} \frac{\Delta T-|t-T|}{\Delta T} \frac{(\Delta T)'-|t-T'|}{(\Delta T)'} \mathrm{d}t\\ &\quad + \int_{lb_r}^{ub_r} \frac{\Delta R-|r-R|}{\Delta R} \frac{(\Delta R)'-|r-R'|}{(\Delta R)'} \mathrm{d}r \int_{lb_\varphi}^{ub_\varphi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \frac{(\Delta \Phi)'-|\varphi-\Phi'|}{(\Delta \Phi)'} \mathrm{d}\varphi \int_{lb_t}^{ub_t} (1-t^2) \frac{\mathop{\mathrm{sgn}}(t-T)}{\Delta T} \frac{\mathop{\mathrm{sgn}}(t-T')}{(\Delta T)'} \mathrm{d}t\\ &= \sum_{k=1}^3\int_{lb_{a_k}}^{ub_{a_k}} \frac{\mathop{\mathrm{sgn}}(a_k-A_k)}{\Delta A_k} \frac{\mathop{\mathrm{sgn}}(a_k-A_k')}{(\Delta A_k)'} \left\{\begin{matrix} a_k^2, &k=1,\\ 1,& k=2,\\ 1-a_k^2, & k=3 \end{matrix} \right\} \mathrm{d}a_k\\ &\qquad \times \prod_{j=1,\ j\not=k}^3 \int_{lb_{a_j}}^{ub_{a_j}} \frac{\Delta A_j - |a_j-A_j|}{\Delta A_j} \frac{(\Delta A_j)' - |a_j-A_j'|}{(\Delta A_j)'} \left\{\begin{matrix} \frac{1}{1-a_j^2}, &j=3,k=2,\\ 1,& \text{else} \end{matrix} \right\} \mathrm{d}a_j\end{aligned}$$ Note that $lb_r = lb_{a_1}, ub_r = ub_{a_1}, lb_\varphi= lb_{a_2}, ub_\varphi= ub_{a_2}, lb_t = lb_{a_3}$ and $ub_t = ub_{a_3}$. 
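The factorized sum above can be assembled from precomputed one-dimensional integrals. A minimal sketch (the function name `assemble_gradient_ip` and the dictionary layout are our own conventions, not from the text):

```python
def assemble_gradient_ip(I_sgn, I_hat):
    """Assemble sum_{k=1}^{3} I_sgn[k] * prod_{j != k} I_hat[(j, k)].

    I_sgn[k]      : the sgn-sgn integral in dimension k, with its weight
                    (a_k^2 for k = 1, 1 for k = 2, 1 - a_k^2 for k = 3)
                    already included,
    I_hat[(j, k)] : the hat-hat integral in dimension j for the k-th summand,
                    with the extra 1/(1 - a_j^2) weight for j = 3, k = 2
                    already included.
    Dimensions are indexed 1, 2, 3 for (r, phi, t)."""
    total = 0.0
    for k in (1, 2, 3):
        prod = 1.0
        for j in (1, 2, 3):
            if j != k:
                prod *= I_hat[(j, k)]
        total += I_sgn[k] * prod
    return total
```

Keying the hat-hat integrals by both $j$ and $k$ reflects that the $t$-integral carries an extra $1/(1-t^2)$ weight only in the $k=2$ summand; all other $(j,k)$ pairs with the same $j$ share the same value and can be cached.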
We see that this integral also factorizes with respect to the variables $a_j,\ j=1,2,3,$ and the only integrals left for a discussion are of the form $$\begin{aligned} \int \frac{\mathop{\mathrm{sgn}}(x-X)\mathop{\mathrm{sgn}}(x-X')}{\Delta X (\Delta X)'} x^q \mathrm{d}x,\end{aligned}$$ again for $q\in\mathbb{N}_0$, here in particular $q=0$ and $q=2$, and $$\begin{aligned} \int \frac{1}{1-t^2} \frac{\Delta T-|t-T|}{\Delta T} \frac{(\Delta T)'-|t-T'|}{(\Delta T)'} \mathrm{d}t.\end{aligned}$$ For each integral between two critical points, we obtain $$\begin{aligned} \int \frac{\mathop{\mathrm{sgn}}(x-X)\mathop{\mathrm{sgn}}(x-X')}{\Delta X (\Delta X)'} x^q \mathrm{d}x = \pm \frac{1}{\Delta X (\Delta X)'}\int x^q \mathrm{d}x = \pm \frac{x^{q+1}}{(q+1)\Delta X (\Delta X)'} \end{aligned}$$ in the first case and $$\begin{aligned} &\int \frac{W\left(t;T,\Delta T,T',(\Delta T)'\right)}{1-t^2}\mathrm{d}t\\ % &= \int \left[\begin{matrix} +\\+\\-\end{matrix} \right] \frac{t^{2}}{1-t^2} \left[\begin{matrix} +\\-\\+\end{matrix} \right] \frac{t}{1-t^2} \left(\left[\begin{matrix} -\\+\\+\end{matrix} \right] T\left[\begin{matrix} -\\+\\+\end{matrix} \right] T' \left[\begin{matrix} +\\+\\+\end{matrix} \right] \Delta T \left[\begin{matrix} +\\+\\-\end{matrix} \right](\Delta T)' \right)\\ &\qquad \qquad \left[\begin{matrix} +\\+\\-\end{matrix} \right] \frac{1}{1-t^2} \left(T \left[\begin{matrix} -\\+\\+\end{matrix} \right]\Delta T \right) \left( T' \left[\begin{matrix} -\\+\\-\end{matrix} \right](\Delta T)' \right) \mathrm{d}t\\ \\ % &= \left[\begin{matrix} +\\+\\-\end{matrix} \right] (\mathop{\mathrm{atanh}}(t)-t) \left[\begin{matrix} -\\+\\-\end{matrix} \right] \frac{\log\left(1-t^2\right)}{2} \left(\left[\begin{matrix} -\\+\\+\end{matrix} \right] T\left[\begin{matrix} -\\+\\+\end{matrix} \right] T' \left[\begin{matrix} +\\+\\+\end{matrix} \right] \Delta T \left[\begin{matrix} +\\+\\-\end{matrix} \right](\Delta T)' \right)\\ &\qquad \qquad \left[\begin{matrix} +\\+\\-\end{matrix}
\right] \mathop{\mathrm{atanh}}(t) \left(T \left[\begin{matrix} -\\+\\+\end{matrix} \right]\Delta T \right) \left( T' \left[\begin{matrix} -\\+\\-\end{matrix} \right](\Delta T)' \right)\\\end{aligned}$$ in the second case. Note that the latter is obviously not well-defined for $t=\pm 1$ due to the logarithm and the inverse hyperbolic tangent, which is why we have to shrink the domain for this variable in practice. Finally, we consider the mixed cases. We obtain $$\begin{aligned} &\left\langle N_{A,\Delta A}, G_{m,n,j}^{\mathrm{I}} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}\right)}\\ &\qquad =\prod_{k=1}^3 \int_{\max(A_{\mathrm{min}},A_k-\Delta A_k)}^{\min(A_{\mathrm{max}},A_k+\Delta A_k)} \frac{\Delta A_k-|a_k-A_k|}{\Delta A_k} G_{m,n,j}^{\mathrm{I}}(x(a_1,a_2,a_3))\left\{\begin{matrix}a^2_k,&k=1\\1,&k=2,3\end{matrix}\right\} \mathrm{d}a_k\\ &\qquad = p_{m,n}q_{n,j} \int_{\max(R_{\mathrm{min}},R-\Delta R)}^{\min(R_{\mathrm{max}},R+\Delta R)} \frac{\Delta R-|r-R|}{\Delta R} P_m^{(0,n+1/2)}(I(r))\left(\frac{r}{\mathbf{R}}\right)^nr^2 \mathrm{d}r\\ &\qquad\qquad \times \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi) \mathrm{d}\varphi\\ &\qquad\qquad \times \int_{\max(T_{\mathrm{min}},T-\Delta T)}^{\min(T_{\mathrm{max}},T+\Delta T)} \frac{\Delta T-|t-T|}{\Delta T} P_{n,|j|}(t) \mathrm{d}t.\end{aligned}$$ We immediately see that the integrals with respect to $r$ and $t$ cannot be calculated analytically. However, as they are one-dimensional integrals, they can easily be integrated numerically, e.g. with suitable software libraries.
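As a sketch of such a numerical treatment (assuming NumPy; the helper names are hypothetical), note that splitting the interval at the hat's kink keeps a standard quadrature rule accurate, since the integrand is smooth on each side:

```python
import numpy as np

def hat(x, X, dX):
    """Normalized hat (Delta X - |x - X|) / Delta X, clipped at zero."""
    return np.maximum(0.0, 1.0 - np.abs(x - X) / dX)

def mixed_1d_integral(f, lo, hi, X, dX, n=200):
    """Numerically integrate hat(.; X, dX) * f over [lo, hi] with a
    Gauss-Legendre rule applied separately on [lo, X] and [X, hi],
    because the integrand has a kink at x = X."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    mid = min(hi, max(lo, X))
    total = 0.0
    for a, b in ((lo, mid), (mid, hi)):
        if b <= a:
            continue
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.dot(weights, hat(x, X, dX) * f(x))
    return total
```

In the mixed cases above, `f` would be, e.g., $r\mapsto P_m^{(0,n+1/2)}(I(r))(r/\mathbf{R})^n r^2$ or $t\mapsto P_{n,|j|}(t)$.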
With respect to the $\varphi$-integral, we obtain $$\begin{aligned} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi) &=\frac{\Delta \Phi-\mathop{\mathrm{sgn}}(\varphi-\Phi)(\varphi-\Phi)}{\Delta \Phi} \mathrm{Trig}(j\varphi)\\ &= \left(1 +\frac{\Phi \mathop{\mathrm{sgn}}(\varphi-\Phi)}{\Delta \Phi}\right)\mathrm{Trig}(j\varphi) - \frac{\mathop{\mathrm{sgn}}(\varphi-\Phi)}{\Delta \Phi}\ \varphi\mathrm{Trig}(j\varphi). \end{aligned}$$ Thus, the following cases remain: $$\begin{aligned} I_1(j,\varphi) \coloneqq \int \varphi\mathrm{Trig}(j\varphi) \mathrm{d}\varphi &= \int\left\{ \begin{matrix} \sqrt{2}\varphi\cos(j\varphi),& j<0\\ \varphi,&j=0\\\sqrt{2}\varphi\sin(j\varphi),&j>0 \end{matrix} \right\} \mathrm{d}\varphi = \left\{ \begin{matrix} \sqrt{2} \left[ \frac{\cos(j\varphi)}{j^2} + \frac{\varphi\sin(j\varphi)}{j} \right],& j<0\\ \frac{1}{2} \varphi^2,&j=0\\ \sqrt{2}\left[ \frac{\sin(j\varphi)}{j^2} - \frac{\varphi\cos(j\varphi)}{j}\right],&j>0 \end{matrix} \right\} \intertext{and} I_2(j,\varphi) \coloneqq \int\mathrm{Trig}(j\varphi) \mathrm{d}\varphi &= \int\left\{ \begin{matrix} \sqrt{2} \cos(j\varphi),& j<0\\ 1,&j=0\\\sqrt{2}\sin(j\varphi),&j>0 \end{matrix} \right\} \mathrm{d}\varphi = \left\{ \begin{matrix} \frac{\sqrt{2}}{j} \sin(j\varphi),& j<0\\ \varphi,&j=0\\ -\frac{\sqrt{2}}{j}\cos(j\varphi),&j>0 \end{matrix} \right\}.\end{aligned}$$ This yields $$\begin{aligned} &\int_{\Phi-\Delta\Phi}^{\Phi+\Delta\Phi}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi)\mathrm{d}\varphi\\ &= \int_{\Phi-\Delta\Phi}^{\Phi}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi)\mathrm{d}\varphi+ \int_{\Phi}^{\Phi+\Delta\Phi}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi)\mathrm{d}\varphi\\ &= \int_{\Phi-\Delta\Phi}^{\Phi}\frac{\Delta \Phi + \varphi-\Phi}{\Delta \Phi} \mathrm{Trig}(j\varphi)\mathrm{d}\varphi + \int_{\Phi}^{\Phi+\Delta\Phi}\frac{\Delta \Phi-\varphi+\Phi}{\Delta \Phi} 
\mathrm{Trig}(j\varphi)\mathrm{d}\varphi\\ &= \int_{\Phi-\Delta\Phi}^{\Phi}\left(1 - \frac{\Phi}{\Delta \Phi} + \frac{\varphi}{\Delta \Phi}\right)\mathrm{Trig}(j\varphi)\mathrm{d}\varphi + \int_{\Phi}^{\Phi+\Delta\Phi}\left(1 + \frac{\Phi}{\Delta \Phi} - \frac{\varphi}{\Delta \Phi}\right) \mathrm{Trig}(j\varphi)\mathrm{d}\varphi\\ &= \frac{1}{\Delta\Phi}I_1(j,\varphi) |_{\Phi-\Delta\Phi}^{\Phi} + \left(1-\frac{\Phi}{\Delta\Phi}\right)I_2(j,\varphi) |_{\Phi-\Delta\Phi}^{\Phi} - \frac{1}{\Delta\Phi}I_1(j,\varphi) |_{\Phi}^{\Phi+\Delta\Phi} + \left(1+\frac{\Phi}{\Delta\Phi}\right)I_2(j,\varphi) |_{\Phi}^{\Phi+\Delta\Phi}\\ &= -\frac{1}{\Delta\Phi}I_1(j,\Phi-\Delta\Phi) - \left(1-\frac{\Phi}{\Delta\Phi}\right)I_2(j,\Phi-\Delta\Phi)\\ &\qquad + \frac{1}{\Delta\Phi}I_1(j,\Phi) + \left(1-\frac{\Phi}{\Delta\Phi}\right)I_2(j,\Phi) + \frac{1}{\Delta\Phi}I_1(j,\Phi) - \left(1+\frac{\Phi}{\Delta\Phi}\right)I_2(j,\Phi)\\ &\qquad - \frac{1}{\Delta\Phi}I_1(j,\Phi+\Delta\Phi) + \left(1+\frac{\Phi}{\Delta\Phi}\right)I_2(j,\Phi+\Delta\Phi)\\ &= -\frac{1}{\Delta\Phi}I_1(j,\Phi-\Delta\Phi) - \left(1-\frac{\Phi}{\Delta\Phi}\right)I_2(j,\Phi-\Delta\Phi) + \frac{2}{\Delta\Phi}I_1(j,\Phi) - \frac{2\Phi}{\Delta\Phi}I_2(j,\Phi)\\ &\qquad - \frac{1}{\Delta\Phi}I_1(j,\Phi+\Delta\Phi) + \left(1+\frac{\Phi}{\Delta\Phi}\right)I_2(j,\Phi+\Delta\Phi).\end{aligned}$$ For the gradients, we have similarly $$\begin{aligned} &\left\langle \nabla_{r\xi(\varphi,t)} N_{(R,\Phi,T),(\Delta R,\Delta \Phi,\Delta T)}, \nabla_{r\xi(\varphi,t)} G_{m,n,j}^{\mathrm{I}} \right\rangle_{\mathrm{L}^2\left(\mathbb{B},\mathbb{R}^3\right)}\\ &= \int_{\mathbb{B}} \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t) \left( \varepsilon^r\frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T} \right.\\ &\qquad\qquad + \left. 
\frac{1}{r}\varepsilon^\varphi\frac{1}{\sqrt{1-t^2}} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T} + \frac{1}{r}\varepsilon^t\sqrt{1-t^2}\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T} \right)\\ &\qquad\qquad\cdot\left( p_{m,n}q_{n,j} \sum_{p=1}^4 G_{m,n,j;p}^{\mathrm{I}} (r\xi(\varphi,t))\right)\mathrm{d}x(r,\varphi,t) \\ &= p_{m,n}q_{n,j} \int_{\mathbb{B}} \chi_{\mathrm{supp}_{[(R,\Phi,T)-(\Delta R,\Delta \Phi,\Delta T),(R,\Phi,T)+(\Delta R,\Delta \Phi,\Delta T)]}}(r,\varphi,t) \\ &\qquad\qquad\qquad \left[ \left( \varepsilon^r\frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\right) \cdot\left( \sum_{p=1}^2G_{m,n,j;p}^{\mathrm{I}} (r\xi(\varphi,t)) \right) \right.\\ &\qquad\qquad\qquad\qquad + \left( \frac{1}{r}\varepsilon^\varphi\frac{1}{\sqrt{1-t^2}} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T}\right) \cdot\left(G_{m,n,j;3}^{\mathrm{I}} (r\xi(\varphi,t))\right)\\ &\qquad\qquad\qquad\qquad + \left. 
\left( \frac{1}{r}\varepsilon^t\sqrt{1-t^2}\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T} \right) \cdot\left(G_{m,n,j;4}^{\mathrm{I}} (r\xi(\varphi,t)) \right)\right]\mathrm{d}x(r,\varphi,t)\\ % &= p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \int_{T-\Delta T}^{T+\Delta T} \left( \varepsilon^r\frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\right) \cdot\left( \sum_{p=1}^2 G_{m,n,j;p}^{\mathrm{I}} (r\xi(\varphi,t))\right)r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ &\qquad + p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \int_{T-\Delta T}^{T+\Delta T} \left( \frac{1}{r}\varepsilon^\varphi\frac{1}{\sqrt{1-t^2}} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T}\right) \cdot\left(G_{m,n,j;3}^{\mathrm{I}} (r\xi(\varphi,t)) \right)r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ &\qquad + p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \int_{T-\Delta T}^{T+\Delta T} \left( \frac{1}{r}\varepsilon^t\sqrt{1-t^2}\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T} \right)\cdot\left(G_{m,n,j;4}^{\mathrm{I}} (r\xi(\varphi,t)) \right) r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\\end{aligned}$$ $$\begin{aligned} &= p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \int_{T-\Delta T}^{T+\Delta T} \left( \varepsilon^r\frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{\Delta T-|t-T|}{\Delta T}\right) \\ &\qquad\qquad\qquad\qquad \cdot\left( \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t) 
+ \frac{n}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} P_{n,|j|}(t)\mathrm{Trig}(j\varphi) \xi(\varphi,t)\right)r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ &\qquad + p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \int_{T-\Delta T}^{T+\Delta T} \left( \frac{1}{r}\varepsilon^\varphi\frac{1}{\sqrt{1-t^2}} \frac{\Delta R-|r-R|}{\Delta R} \frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} \frac{\Delta T-|t-T|}{\Delta T}\right)\\ &\qquad\qquad\qquad\qquad \cdot\left( \frac{j}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \frac{1}{\sqrt{1-t^2}} P_{n,|j|}(t) \mathrm{Trig}(-j\varphi)\varepsilon^\varphi(\varphi,t)\right)r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ &\qquad + p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \int_{T-\Delta T}^{T+\Delta T} \left( \frac{1}{r}\varepsilon^t\sqrt{1-t^2}\frac{\Delta R-|r-R|}{\Delta R}\frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T} \right)\\ &\qquad\qquad\qquad\qquad \cdot\left( \frac{1}{\mathbf{R}} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n-1} \sqrt{1-t^2} P'_{n,|j|}(t)\mathrm{Trig}(j\varphi)\varepsilon^t(\varphi,t)\right) r^2 \mathrm{d}r \mathrm{d}\varphi\mathrm{d}t\\ % &= p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} \left(P_m^{(0,n+1/2)}\left(I(r)\right)\right)'I'(r) \left(\frac{r}{\mathbf{R}}\right)^{n} r^2 \mathrm{d}r \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi) \mathrm{d}\varphi \int_{T-\Delta T}^{T+\Delta T} \frac{\Delta T-|t-T|}{\Delta T} P_{n,|j|}(t)\mathrm{d}t\\ &\qquad + p_{m,n}q_{n,j}n \int_{R-\Delta R}^{R+\Delta R} \frac{[-\mathop{\mathrm{sgn}}(r-R)]}{\Delta R} r P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n}\mathrm{d}r \int_{\Phi-\Delta 
\Phi}^{\Phi+\Delta \Phi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\mathrm{Trig}(j\varphi) \mathrm{d}\varphi \int_{T-\Delta T}^{T+\Delta T} \frac{\Delta T-|t-T|}{\Delta T} P_{n,|j|}(t) \mathrm{d}t\\ &\qquad + p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R}\frac{\Delta R-|r-R|}{\Delta R}P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n} \mathrm{d}r \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi}\frac{[-\mathop{\mathrm{sgn}}(\varphi-\Phi)]}{\Delta \Phi} j \mathrm{Trig}(-j\varphi) \mathrm{d}\varphi \int_{T-\Delta T}^{T+\Delta T} \frac{\Delta T-|t-T|}{\Delta T} \frac{1}{1-t^2} P_{n,|j|}(t) \mathrm{d}t\\ &\qquad + p_{m,n}q_{n,j} \int_{R-\Delta R}^{R+\Delta R} \frac{\Delta R-|r-R|}{\Delta R} P_m^{(0,n+1/2)}\left(I(r)\right) \left(\frac{r}{\mathbf{R}}\right)^{n} \mathrm{d}r \int_{\Phi-\Delta \Phi}^{\Phi+\Delta \Phi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi}\mathrm{Trig}(j\varphi) \mathrm{d}\varphi \int_{T-\Delta T}^{T+\Delta T} \frac{[-\mathop{\mathrm{sgn}}(t-T)]}{\Delta T}(1-t^2) P'_{n,|j|}(t)\mathrm{d}t\\ &= p_{n,m}q_{n,j} \sum_{k=1}^3 \int_{A_k-\Delta A_k}^{A_k+\Delta A_k} \frac{-\mathop{\mathrm{sgn}}(a_k-A_k)}{\Delta A_k} \left\{\begin{matrix} \left(P_m^{(0,n+1/2)}\left(I(a_k)\right)\right)'I'(a_k) \left(\frac{a_k}{\mathbf{R}}\right)^{n} a_k^2 + P_m^{(0,n+1/2)}\left(I(a_k)\right) \left(\frac{a_k}{\mathbf{R}}\right)^{n} n a_k, &k=1\\ j\mathrm{Trig}(-j a_k),&k=2\\ (1-a_k^2)P_{n,|j|}'(a_k),&k=3 \end{matrix}\right\} \mathrm{d}a_k\\ &\qquad \times \prod_{i=1,i\not=k}^3 \int_{A_i-\Delta A_i}^{A_i+\Delta A_i} \frac{\Delta A_i-|a_i-A_i|}{\Delta A_i} \left\{\begin{matrix} P_m^{(0,n+1/2)}\left(I(a_i)\right) \left(\frac{a_i}{\mathbf{R}}\right)^{n}, &i=1\\ \mathrm{Trig}(j a_i), &i=2\\ P_{n,|j|}(a_i),&i=3,k=1\\ \frac{1}{1-a_i^2} P_{n,|j|}(a_i),&i=3,k=2 \end{matrix}\right\} \mathrm{d}a_i.\end{aligned}$$ Note that, for practical purposes, we still have to compute 8 different integrals for this term. 
Moreover, the integrals with respect to $a_1=r$ and $a_3=t$ can still only be computed via numerical integration. For the case $a_2=\varphi$, we obtain $$\begin{aligned} \int_{\Phi-\Delta\Phi}^{\Phi+\Delta\Phi} -\frac{\mathop{\mathrm{sgn}}(\varphi-\Phi)}{\Delta \Phi} j\mathrm{Trig}(-j\varphi) \mathrm{d}\varphi &= \int_{\Phi-\Delta\Phi}^{\Phi} \frac{1}{\Delta \Phi} j\mathrm{Trig}(-j\varphi) \mathrm{d}\varphi -\int_{\Phi}^{\Phi+\Delta\Phi} \frac{1}{\Delta \Phi} j\mathrm{Trig}(-j\varphi) \mathrm{d}\varphi\\ &= \frac{j}{\Delta \Phi} \left(\int_{\Phi-\Delta\Phi}^{\Phi} \mathrm{Trig}(-j\varphi) \mathrm{d}\varphi -\int_{\Phi}^{\Phi+\Delta\Phi} \mathrm{Trig}(-j\varphi) \mathrm{d}\varphi\right)\\ &= \frac{j}{\Delta \Phi} \left(2I_2(-j,\Phi) - I_2(-j,\Phi-\Delta \Phi) - I_2(-j,\Phi+\Delta \Phi)\right) \intertext{and} &\int_{\Phi-\Delta\Phi}^{\Phi+\Delta\Phi} \frac{\Delta \Phi-|\varphi-\Phi|}{\Delta \Phi} \mathrm{Trig}(j\varphi) \mathrm{d}\varphi,\end{aligned}$$ which we have already discussed above. ## Derivation of objective functions $\mathrm{RFMP}(\cdot;\cdot)$ and $\mathrm{ROFMP}(\cdot;\cdot)$ and related coefficients {#sect:app:OF_IPMPs} We start with the RFMP. The respective noise-cognizant Tikhonov-Phillips functional is given in [\[eq:TFNO\]](#eq:TFNO){reference-type="ref" reference="eq:TFNO"} by $$\begin{aligned} \mathcal{J}^{\mathrm{SM}} \left( f_N + \alpha d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right) &\coloneqq \left\| \frac{R^N - \alpha\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N+\alpha d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)},\qquad \lambda >0,\\ R^{N+1} &\coloneqq R^N - \alpha_{N+1} \mathcal{T}_\daleth d_{N+1} = \delta \psi - \mathcal{T}_\daleth f_{N+1}.\end{aligned}$$ We now aim to determine the pair $(\alpha,d),\ \alpha \in \mathbb{R},\ d\in\mathcal{D},$ that minimizes $\mathcal{J}^{\mathrm{SM}}$.
We start as follows: $$\begin{aligned} 0 &= \frac{\partial}{\partial \alpha} \mathcal{J}^{\mathrm{SM}} \left( f_N + \alpha d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right) = \frac{\partial}{\partial \alpha} \left[ \left\| \frac{R^N - \alpha\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N+\alpha d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}\right]\end{aligned}$$ $$\begin{aligned} &= \frac{\partial}{\partial \alpha} \left[ \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} -2\alpha\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} + \alpha^2\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} \right. \\ &\qquad\qquad \left. + \lambda\left\|f_N\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} + 2\alpha\lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} +\alpha^2\lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} \right]\\ &= -2\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} + 2\alpha\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + 2\lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} +2\alpha\lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}.\end{aligned}$$ This yields $$\begin{aligned} \alpha_{N+1} = \frac{\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell}-\lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}}{\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell}+\lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}}.\end{aligned}$$ Inserting this value into $\mathcal{J}^{\mathrm{SM}} \left( f_N + \alpha d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right)$, we obtain $$\begin{aligned} &\mathcal{J}^{\mathrm{SM}} \left( f_N + \alpha_{N+1} d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right)\\ &= \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} -2\alpha_{N+1}\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} + \alpha_{N+1}^2\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} \\ &\qquad\qquad + \lambda\left\|f_N\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} + 2\alpha_{N+1}\lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} +\alpha_{N+1}^2\lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}\\ &= \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}\\ &\qquad\qquad -2\frac{\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell}-\lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}}{\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell}+\lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}} \left[\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} - \lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} \right]\\ &\qquad\qquad +\left[ \frac{\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell}-\lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}}{\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell}+\lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}}\right]^2 \left[\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} \right]\\ &= \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} -\frac{\left(\left\langle \frac{R^N}{\sigma},\frac{\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell}-\lambda\left\langle f_N,d\right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}\right)^2}{\left\| \frac{\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell}+\lambda\left\|d\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}} \\ &= \mathcal{J}^{\mathrm{SM}} \left( f_{N-1} + \alpha_N d_N; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right) - \mathrm{RFMP}(d;N). \\\end{aligned}$$ Note that $\mathcal{J}^{\mathrm{SM}} \left( f_{N-1} + \alpha_N d_N; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right)$ is fixed in the $(N+1)$-th iteration. Thus, we see that, if we maximize $\mathrm{RFMP}(\cdot;\cdot)$ as defined in [\[def:RFMP(d;N)\]](#def:RFMP(d;N)){reference-type="ref" reference="def:RFMP(d;N)"}, we minimize the noise-cognizant Tikhonov-Phillips functional in the $(N+1)$-th iteration. Similarly, in the ROFMP, we consider the noise-cognizant Tikhonov-Phillips functional $$\begin{aligned} \mathcal{J}^{\mathrm{SM}}_O \left( f_{N}^{(N)} + \alpha d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right) &\eqqcolon \left\| \frac{R^N - \alpha\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N^{(N)}+\alpha \left(d-b_n^{(N)}(d) \right)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)},\\ R^{N+1} &\coloneqq R^N - \alpha_{N+1}^{(N+1)}\mathcal{P}_{\mathcal{V}_N^\perp} \mathcal{T}_\daleth d_{N+1} = \delta \psi - \mathcal{T}_\daleth f_{N+1}^{(N+1)},\end{aligned}$$ for $\lambda >0$. Confer [4.1](#ssect:ipmps){reference-type="ref" reference="ssect:ipmps"} for a full definition.
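The closed-form step size and the resulting decrease derived above for the RFMP can be sanity-checked numerically in a finite-dimensional analogue. The sketch below is illustrative only: a random matrix `T` stands in for the discretised operator $\mathcal{T}_\daleth$, a vector `sigma` for the componentwise noise weighting, and an identity Gram matrix `G` for the $\mathcal{H}_1(\mathbb{B})$ inner product (all of these are assumptions for the toy example, not the actual discretisation).

```python
import numpy as np

rng = np.random.default_rng(0)
ell, p = 12, 5                      # data size and coefficient size (toy dimensions)
T = rng.normal(size=(ell, p))       # stand-in for the discretised operator
sigma = 0.5 + rng.random(ell)       # componentwise noise levels
G = np.eye(p)                       # stand-in Gram matrix of the H_1(B) inner product
lam = 0.1

f_N = rng.normal(size=p)            # current iterate (coefficient vector)
d = rng.normal(size=p)              # trial dictionary element
psi = rng.normal(size=ell)          # data
R_N = psi - T @ f_N                 # current residual

def J(alpha):
    """Noise-cognizant Tikhonov-Phillips functional along the line f_N + alpha*d."""
    r = (R_N - alpha * T @ d) / sigma
    f = f_N + alpha * d
    return r @ r + lam * f @ G @ f

# closed-form minimiser from the derivation in the text
Td = (T @ d) / sigma
num = (R_N / sigma) @ Td - lam * f_N @ G @ d
den = Td @ Td + lam * d @ G @ d
alpha_star = num / den

# alpha_star beats any nearby step size, and the decrease equals num^2/den (the RFMP value)
assert all(J(alpha_star) <= J(alpha_star + h) + 1e-12 for h in (-0.1, -1e-3, 1e-3, 0.1))
assert np.isclose(J(0.0) - J(alpha_star), num ** 2 / den)
```

The same check applies verbatim to the ROFMP step size after replacing `T @ d` by its projection onto $\mathcal{V}_N^\perp$ and `d` by `d - b(d)`.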
Again, we first consider the minimizer with respect to $\alpha$: $$\begin{aligned} 0 &= \frac{\partial}{\partial \alpha} \mathcal{J}^{\mathrm{SM}}_O \left( f_{N}^{(N)} + \alpha d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right) \\ &= \frac{\partial}{\partial \alpha}\left[ \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} -2\alpha \left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} +\alpha^2\left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} \right. \\ &\qquad\qquad \left. + \lambda\left\|f_N^{(N)}\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} + 2\alpha\lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} + \alpha^2\lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} \right]\\ &= -2\left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} +2\alpha\left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} \\ &\qquad\qquad + 2\lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} + 2\alpha\lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)},\end{aligned}$$ which yields $$\begin{aligned} \alpha_{N+1} = \alpha_{N+1}^{(N+1)} \coloneqq \frac{\left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} - \lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}}{\left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}}. \end{aligned}$$ Inserting $\alpha_{N+1}^{(N+1)}$ in $\mathcal{J}^{\mathrm{SM}}_O \left( f_{N}^{(N)} + \alpha d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right)$, we obtain $$\begin{aligned} &\mathcal{J}^{\mathrm{SM}}_O \left( f_{N}^{(N)} + \alpha_{N+1}^{(N+1)} d; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right)\\ &= \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} -2\alpha_{N+1}^{(N+1)} \left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} +\left(\alpha_{N+1}^{(N+1)}\right)^2\left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} \\ &\qquad + \lambda\left\|f_N^{(N)}\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} + 2\alpha_{N+1}^{(N+1)}\lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} + \left(\alpha_{N+1}^{(N+1)}\right)^2\lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}\\ &= \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N^{(N)}\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}\\ &\qquad -2 \frac{\left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} - \lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}}{\left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}} \\ &\qquad\qquad\qquad\qquad \times \left[\left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} - \lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)} \right]\\ &\qquad +\left(\frac{\left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} - \lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}}{\left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}}\right)^2\\ &\qquad\qquad\qquad\qquad \times \left[ \left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} +\lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}\right] \end{aligned}$$ $$\begin{aligned} &= \left\| \frac{R^N}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|f_N^{(N)}\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)} - \frac{\left(\left\langle \frac{R^N}{\sigma},\frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\rangle_{\mathbb{R}^\ell} - \lambda\left\langle f_N^{(N)},d-b_n^{(N)}(d) \right\rangle_{\mathcal{H}_1\left(\mathbb{B}\right)}\right)^2}{\left\| \frac{\mathcal{P}_{\mathcal{V}_N^\perp}\mathcal{T}_\daleth d}{\sigma} \right\|^2_{\mathbb{R}^\ell} + \lambda\left\|d-b_n^{(N)}(d)\right\|^2_{\mathcal{H}_1\left(\mathbb{B}\right)}}\\ &= \mathcal{J}^{\mathrm{SM}}_O \left( f_{N-1}^{(N-1)} + \alpha_N^{(N)} d_N; \mathcal{T}_\daleth, \lambda, \delta \psi, \sigma\right) - \mathrm{ROFMP}(d;N).\end{aligned}$$ [^1]: Geomathematics Group Siegen, University of Siegen, michel\@mathematik.uni-siegen.de, naomi.schneider\@mathematik.uni-siegen.de [^2]: Université Côte d'Azur, CNRS, Observatoire de la Côte d'Azur, IRD, Géoazur, Sophia Antipolis, France [^3]: Earth Sciences Department, University of Oxford, UK, now at Dublin Institute for Advanced Studies
--- abstract: | Let $\psi:{\mathbb{N}}\to [0,\infty)$, $\psi(q)=q^{-(1+\tau)}$ and let $\psi$-badly approximable points be those vectors in ${\mathbb{R}}^{d}$ that are $\psi$-well approximable, but not $c\psi$-well approximable for arbitrarily small constants $c>0$. We establish that the $\psi$-badly approximable points have the Hausdorff dimension of the $\psi$-well approximable points, the dimension taking the value $(d+1)/(\tau+1)$ familiar from theorems of Besicovitch and Jarník. The method of proof is an entirely new take on the Mass Transference Principle by Beresnevich and Velani (Annals, 2006); namely, we use the colloquially named 'delayed pruning' to construct a sufficiently large $\liminf$ set and combine this with ideas inspired by the proof of the Mass Transference Principle to find a large $\limsup$ subset of the $\liminf$ set. Our results are a generalisation of some $1$-dimensional results due to Bugeaud and Moreira (Acta Arith, 2011), but our method of proof is nothing alike. author: - | Henna Koivusalo\ (Bristol) - | Jason Levesley\ (York) - | Benjamin Ward[^1]\ (La Trobe) - | Xintian Zhang\ (Bristol) bibliography: - biblio.bib title: The dimension of the set of $\psi$-badly approximable points in all ambient dimensions; on a question of Beresnevich and Velani. --- # Introduction {#intro} Throughout let $\psi:{\mathbb{N}}\to (0, \infty)$ be a monotonic decreasing function. We will use the notation $\psi_\tau$ when $\psi(q) = q^{-\tau}$ for $\tau \in (0, \infty)$. The set of *$\psi$-well approximable points*, denoted by $\mathcal{W}_{d}(\psi)$, is defined to be the set $$\mathcal{W}_{d}(\psi):=\left\{x \in [0,1]^{d} : \max_{1 \leq i \leq d} \left|x_{i}-\frac{p_{i}}{q} \right| < \frac{\psi(q)}{q} \quad \text{for i.m.} \, \, \, (p,q) \in {\mathbb{Z}}^{d} \times {\mathbb{N}}\right\}\, ,$$ where i.m. denotes infinitely many. 
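For $d=1$ the "infinitely many" condition can be made concrete on a computer by counting solutions up to a denominator bound. The following rough sketch (an illustration only; the threshold $q^{-(1+\tau)}$ is $\psi_\tau(q)/q$) contrasts $\tau=1=1/d$, where Dirichlet's theorem guarantees a solution at every continued-fraction convergent, with $\tau=2$, where a badly approximable number such as $\sqrt{2}-1$ admits only finitely many solutions.

```python
import math

def good_approximations(x, tau, Q):
    """All pairs (p, q) with q <= Q and |x - p/q| < psi_tau(q)/q = q^(-(1+tau))  (case d = 1)."""
    hits = []
    for q in range(1, Q + 1):
        p = round(x * q)  # best numerator for this denominator
        if abs(x - p / q) < q ** (-(1 + tau)):
            hits.append((p, q))
    return hits

x = math.sqrt(2) - 1  # continued fraction [0; 2, 2, 2, ...]: badly approximable
many = good_approximations(x, 1, 1000)  # tau = 1: solutions keep appearing
few = good_approximations(x, 2, 1000)   # tau = 2: only finitely many small-q solutions
```

Here `many` keeps growing as the denominator bound increases while `few` stabilises, reflecting that $x \in \mathcal{W}_1(\psi_1)$ but $x \notin \mathcal{W}_1(\psi_2)$.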
The main object of interest in this article is the *Hausdorff dimension* of the *$\psi$-badly approximable points*: those points of $\mathcal{W}_d(\psi)$ for which the approximation by $\psi$ cannot be improved by an arbitrarily small constant. More precisely, $$\mathbf{Bad}_{d}(\psi):= \mathcal{W}_{d}(\psi) \backslash \bigcap_{k=1}^{\infty} \mathcal{W}_{d}\left(\frac{1}{k}\psi \right).$$ In [@BV08 Question 1] Beresnevich and Velani asked for the Hausdorff dimension of $\mathbf{Bad}_{d}(\psi)$ and conjectured that it is equal to the dimension of $\mathcal{W}_{d}(\psi)$. We prove this conjecture true for a certain class of approximation functions. We recall the definition of Hausdorff measure and dimension. (For more details on such notions see for example [@Rog98; @F14].) For $s\geq 0$ the *Hausdorff $s$-measure* of a set $F \subset {\mathbb{R}}^{d}$ is defined as $${\cal H}^{s}(F):=\lim_{\rho \to 0^{+}} \inf \left\{ \sum_{i=1}^\infty |B_{i}|^{s} : F \subset \bigcup_{i=1}^\infty B_{i} \quad \text{ and }\quad |B_{i}|<\rho \, \, \forall \, \, i \right\} \,$$ where $|\cdot |$ denotes the diameter of a set. The *Hausdorff dimension* of a set $F \subset {\mathbb{R}}^{d}$ is defined as $$\mathop{\mathrm{\dim_H}}F := \inf \left\{ s \geq 0 : {\cal H}^{s}(F)=0 \right\}\, .$$ In the case when $s = d$, the ambient dimension of our setting, Hausdorff $s$-measure coincides (up to a constant) with Lebesgue measure, which we denote by $\lambda_d$. The literature surrounding the metric properties of $\mathbf{Bad}_d(\psi)$ is extensive, and we only highlight some of the main results below. For the approximation function $\psi_{1/d}$, the set $\mathbf{Bad}_{d}(\psi_{1/d})=\mathbf{Bad}_{d}$ is the classical set of badly approximable points. As a consequence of a landmark theorem of Khintchine [@K24] we know that $\lambda_{d}(\mathbf{Bad}_d) = 0$ (this can be proven by a range of methods; see [@BDGW23] for an overview of techniques).
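Hausdorff dimension is rarely computable by direct calculation, but for well-behaved self-similar sets it coincides with the box-counting dimension, which can be estimated numerically. The toy sketch below (purely illustrative, not part of our argument) covers the middle-third Cantor set by $2^n$ intervals of length $3^{-n}$ and recovers $\log 2/\log 3$ from the defining ratio $\log N(\rho)/\log(1/\rho)$.

```python
import math

def cantor_intervals(n):
    """Level-n cover of the middle-third Cantor set by 2^n intervals of length 3^-n."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            t = (b - a) / 3
            nxt += [(a, a + t), (b - t, b)]  # keep the two outer thirds
        intervals = nxt
    return intervals

n = 12
cover = cantor_intervals(n)
# log N(rho) / log(1/rho) with N(rho) = 2^n intervals of diameter rho = 3^-n
estimate = math.log(len(cover)) / math.log(3 ** n)  # log 2 / log 3, approx. 0.6309
```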
It is then natural to ask what the Hausdorff dimension of $\mathbf{Bad}_d$ is. Jarnı́k [@J28] and Schmidt [@Schmidt66] showed that despite the set being null, it has full Hausdorff dimension in the cases where $d=1$ and $d>1$, respectively. That is, $$\mathop{\mathrm{\dim_H}}\mathbf{Bad}_{d}=d\, .$$ In the direction of $\psi$-well approximable points, Besicovitch [@B34] and Jarnı́k [@J29] independently proved that whenever $\tau>\frac{1}{d}$ $$\mathop{\mathrm{\dim_H}}\mathcal{W}_{d}(\psi_{\tau}) = \frac{d+1}{\tau+1} \,.$$ Dodson extended this result to the case of general approximation functions $\psi$ [@Dod92 Theorem 2]. As $\mathbf{Bad}_{d}(\psi) \subset \mathcal{W}_{d}(\psi)$, these results give immediate upper bounds for the Hausdorff dimension of $\mathbf{Bad}_{d}(\psi)$. As is often the case, establishing the corresponding lower bound is the main obstacle in determining the exact dimension. The following refinements of $\mathcal{W}_d(\psi)$ and $\mathbf{Bad}_d(\psi)$, first introduced in [@BDV01], are useful, as they can be used to define sets whose points satisfy more delicate approximation properties. Let $\psi, \phi : {\mathbb{N}}\to (0, \infty)$ be monotonic decreasing functions with $\psi(q)>\phi(q)$ for all $q \in {\mathbb{N}}$ and denote $$\begin{aligned} D_{d}(\psi, \phi) & := \left\{ x \in [0,1]^{d}: \underset{\stackrel{\text{for all $(p,q) \in {\mathbb{Z}}^{d} \times {\mathbb{N}}$}}{\text{with $q$ sufficiently large}}}{\underbrace{\, \quad \phi(q)/q \, \, \leq \quad \, }}\max_{1 \leq i \leq d}\left|x_{i}-\frac{p_{i}}{q}\right| \underset{\text{for i.m. $(p,q) \in {\mathbb{Z}}^{d} \times {\mathbb{N}}$ }}{\underbrace{\, \quad < \, \, \psi(q)/q \quad \, }} \right\} \\ &=\mathcal{W}_{d}(\psi) \backslash \mathcal{W}_{d}(\phi).\end{aligned}$$ With these sets defined, the set of $\psi$-badly approximable points can now be alternatively written as $$\mathbf{Bad}_{d}(\psi)= \bigcup_{k \in {\mathbb{N}}} D_{d}\left(\psi, \frac{1}{k}\psi\right),$$ and the set of *exact approximation order $\psi$* is $$\mathbf{Exact}_{d}(\psi):= \bigcap_{k \in {\mathbb{N}}} D_{d}\left(\psi, \left(1-\frac{1}{k}\right)\psi \right) = \mathcal{W}_{d}(\psi) \backslash \bigcup_{k=1}^{\infty} \mathcal{W}_{d}\left(\left(1-\frac{1}{k}\right)\psi \right).$$ Trivially $\mathbf{Exact}_{d}(\psi) \subset \mathbf{Bad}_{d}(\psi) \subset \mathcal{W}_{d}(\psi)$ and so $\mathbf{Exact}_{d}(\psi)$ can be used for lower bounds of dimension. One of the first results on the set $\mathbf{Exact}_{1}(\psi)$ was given by Jarnı́k [@Jar31] who, using the theory of continued fractions, proved that $\mathbf{Exact}_{1}(\psi) \not= \emptyset$ for functions $\psi(q)=o(q^{-1})$. Without the condition $\psi(q)=o(q^{-1})$ the theory becomes significantly more complicated.
In particular, a classical result of Hurwitz tells us that $\mathcal{W}_{1}\left(\frac{1}{\sqrt{5}}\psi_{1}\right)=[0,1]$ and so $$\mathbf{Exact}_{1}(\psi_{1}) \subset \mathcal{W}_{1}(\psi_{1}) \backslash \mathcal{W}_{1}\left(\frac{1}{\sqrt{5}}\psi_{1}\right) = \emptyset\, .$$ Conversely, if $\psi(q)=cq^{-1}$ for some small $c>0$, it is not necessarily true that $\mathbf{Exact}_{1}(\psi)=\emptyset$. In fact, as $c\to 0$, Moreira showed that the Hausdorff dimension of $\mathbf{Exact}_{1}(\psi)$ tends to one [@Mor18 Theorem 2]. Nevertheless, throughout we will suppose the approximation function satisfies $\psi(q)=o\left(q^{-1/d}\right)$. Since the result of Jarnı́k, there has been gradual progress towards establishing the Hausdorff dimension of $\mathbf{Exact}_{1}(\psi)$. In [@G63] Güting proved that $$\mathop{\mathrm{\dim_H}}\left( \bigcup_{k \in {\mathbb{N}}} D_{1}\left(\psi_{\tau}, \psi_{\tau+\frac{1}{k}}\right) \right)=\mathop{\mathrm{\dim_H}}\mathcal{W}_{1}(\psi_{\tau})=\frac{2}{1+\tau},$$ for $\tau > 1$. In [@BDV01] Beresnevich, Dickinson and Velani improved upon Güting's result by calculating the Hausdorff measure at the dimension and showed that $${\cal H}^{\frac{2}{1+\tau}}\left( \bigcup_{k \in {\mathbb{N}}} D_{1}\left(\psi_{\tau}, \psi_{\tau+\frac{1}{k}}\right) \right)= \infty \, .$$ Furthermore they considered approximation functions with "logarithmic error" and showed that for $\tau>\frac{1}{d}$ $$\mathop{\mathrm{\dim_H}}\left( \bigcup_{k \in {\mathbb{N}}} D_{d}\left( \psi_{\tau},\, q \mapsto q^{-\tau}(\log q)^{-\frac{1}{k}}\right) \right)=\frac{d+1}{\tau+1}.$$ This was established by comparing the Hausdorff measures of the two approximation sets $\mathcal{W}_{d}( \psi_{\tau})$ and $\mathcal{W}_{d}\left(q \mapsto q^{-\tau}(\log q)^{-\varepsilon}\right)$ for small $\varepsilon>0$. However, the technique used here cannot be applied in the "constant error" case, as the Hausdorff measure of the two sets would be the same.
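Hurwitz's constant $1/\sqrt{5}$ above is extremal for the golden ratio $\varphi=(1+\sqrt{5})/2$. The following rough numerical check (illustrative only; floating point) tracks $q^2|\varphi - p/q|$ along the convergents $p/q$ of $\varphi$: the values oscillate around, and converge to, $1/\sqrt{5}$, so infinitely many convergents beat the Hurwitz threshold while no smaller constant would do.

```python
import math
from fractions import Fraction

def convergents(partial_quotients):
    """Convergents p/q of the continued fraction [a0; a1, a2, ...]."""
    h0, k0, h1, k1 = 0, 1, 1, 0   # seed values h_{-2}/k_{-2}, h_{-1}/k_{-1}
    out = []
    for a in partial_quotients:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append(Fraction(h1, k1))
    return out

phi = (1 + math.sqrt(5)) / 2               # golden ratio, [1; 1, 1, ...]
vals = [pq.denominator ** 2 * abs(phi - float(pq)) for pq in convergents([1] * 20)]
hurwitz = 1 / math.sqrt(5)
below = sum(v < hurwitz for v in vals)     # roughly every other convergent beats 1/sqrt(5)
```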
In a series of papers Bugeaud [@Bu03; @Bu08] and Bugeaud and Moreira [@BM11] proved a complete result in one dimension[^2], showing that $$\mathop{\mathrm{\dim_H}}\mathbf{Exact}_{1}(\psi_\tau)=\frac{2}{1+\tau}.$$ It is then a consequence of the inclusions $\mathcal{W}_{1}(\psi) \supset \mathbf{Bad}_{1}(\psi) \supset \mathbf{Exact}_{1}(\psi)$ that $$\mathop{\mathrm{\dim_H}}\mathbf{Bad}_{1}(\psi_\tau)=\frac{2}{1+\tau} \, .$$ Furthermore, in [@Bu03 Theorem 2] Bugeaud also showed that $${\cal H}^{\frac{2}{1+\tau}}(\mathbf{Bad}_{1}(\psi_{\tau}))=\infty\,.$$ The series of papers by Bugeaud and Moreira rely on results from the theory of continued fractions. Higher dimensional versions of continued fractions have been extensively studied; see, for example, [@Brentjes1981; @Schweiger2000] and [@BDKKKT2023; @Karpenkov2022] for more recent approaches. However, trying to apply arguments from one dimension to higher dimensions often fails; in particular, the methods of Bugeaud and Moreira do not seem to be applicable to the study of $\mathbf{Bad}_d(\psi_\tau)$. In the current work we compute the Hausdorff dimension of $\mathbf{Bad}_{d}(\psi_{\tau})$ for any dimension $d$. That is, we prove the following: **Theorem 1**. *Let $\psi_{\tau}(q)=q^{-\tau}$ with $\tau>\frac{1}{d}$, and let $B\subset [0,1]^{d}$ be any open ball. Then $$\mathop{\mathrm{\dim_H}}\mathbf{Bad}_{d}(\psi_{\tau})\cap B = \frac{d+1}{1+\tau}.$$* The proof has been inspired by the proof of the Mass Transference Principle (MTP) of Beresnevich and Velani [@BV06]. The MTP is a powerful technique, introduced to Diophantine approximation in 2006, when it was used to prove a Hausdorff measure version of the Duffin-Schaeffer conjecture. The original MTP states, loosely speaking, that if a limsup set of balls has full Lebesgue measure, then the limsup set of the same balls shrunk in a controlled way satisfies a dimension lower bound. The MTP has since been adapted, e.g.
for other measures, and other shapes [@AB19; @KR21; @WW19; @WWX15]. However, in studying $\psi$-badly approximable points, we are facing a set-up not covered by the many generalisations: we need a Hausdorff dimension estimate for the $\limsup$ set given by balls centred outside of the zero-measure set in which we would like the dimension lower bounds to hold! The proof we present relies on a few key ideas: we emulate $\mathbf{Bad}_d(\psi_\tau)$ by a set of almost full measure (see §  [3](#cC(N)){reference-type="ref" reference="cC(N)"}), and for 'positive' approximation we use balls centred at rationals that are not too far away from this set (see § [4](#subset_W(psi)){reference-type="ref" reference="subset_W(psi)"}). These allow us to approach the dimension bound in the spirit of the proof of the MTP, although there are some further geometric obstacles to overcome (see §  [5](#cD(B)){reference-type="ref" reference="cD(B)"}). Our main result is actually a consequence of the following result, which we also state for comparison to the series of results of Moreira and Bugeaud on the dimension of $\mathbf{Exact}_1(\psi_\tau)$. **Theorem 2**. *Let $\psi_{\tau}$ and $B\subset [0,1]^{d}$ be as above with $\tau>\frac{1}{d}$. Then there exists a constant $0<C<1$ depending only on $B$, $\tau$ and $d$ such that $$\mathop{\mathrm{\dim_H}}D_{d}\left(\psi_{\tau},C\psi_{\tau} \right)\cap B =\frac{d+1}{1+\tau}.$$* *Remark 1*. Note that Theorem [Theorem 2](#D-set){reference-type="ref" reference="D-set"} immediately implies Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, since $D_{d}\left(\psi_{\tau},C\psi_{\tau} \right) \subset \mathbf{Bad}_{d}(\psi_{\tau})$. The constant $C>0$ is implicit in our proofs and has not been optimised. However, we cannot make it arbitrarily close to $1$, and so we cannot prove any result on $\mathbf{Exact}_{d}(\psi_{\tau})$.
Very recently Fregoli has proven a restricted higher dimensional version of the results of Bugeaud and Moreira [@Bu03; @Bu08; @BM11] for $d\geq 3$ and $\tau>1$. The technique used by Fregoli took their results and lifted them to higher dimensions by a clever observation. Briefly, for $\boldsymbol{x}=(x_{1},\boldsymbol{y})\in{\mathbb{R}}^{d}$, if $x_{1} \in \mathbf{Exact}_{1}(\psi_{\tau})$ and $\boldsymbol{y} \in \mathcal{W}_{d-1}(\psi_{\tau})$ then $\boldsymbol{x} \in \mathbf{Exact}_{d}(\psi_{\tau})$. The dimension result can then be proven via the result of Bugeaud and Moreira and a theorem of Jarnı́k on fibres. See [@Fregoli2023] for more details; we only wish to point out that our technique is significantly different. For other notions of size, it was recently shown by Schleischitz that for certain functions $\psi$ the set $\mathbf{Exact}_{d}(\psi)$ has full packing dimension, see [@Schleischitz2023 Theorem 3.6 & Corollary 7]. *Remark 2*. For this set of approximation functions the restriction that $\tau>\frac{1}{d}$ is appropriate. If $\tau=\frac{1}{d}$ then, as previously mentioned, $\mathbf{Bad}_{d}(\psi_{\tau})=\mathbf{Bad}_{d}$, that is, the classical set of $d$-dimensional badly approximable points, which has full Hausdorff dimension [@Schmidt66]. If $\tau<\frac{1}{d}$ then $\mathbf{Bad}_{d}(\psi_{\tau})=\emptyset$ since $\mathcal{W}_{d}(c \psi_{\tau}) \supseteq \mathcal{W}_{d}(\psi_{1/d})=[0,1]^{d}$ for any constant $c>0$ and so $\mathbf{Bad}_{d}(\psi_{\tau})\subset \mathcal{W}_{d}(\psi_{\tau}) \backslash [0,1]^{d}= \emptyset$. *Remark 3*. In the current work we have not considered general approximation functions $\psi$. Analogous to the results of Bugeaud and Moreira [@Bu03; @Bu08; @BM11], we suspect that for monotonic decreasing $\psi(q)$ of order $o(q^{-\frac{1}{d}})$ $$\mathop{\mathrm{\dim_H}}\mathbf{Bad}_{d}(\psi)=\mathop{\mathrm{\dim_H}}\mathcal{W}_d(\psi),$$ where the Hausdorff dimension of $\mathcal{W}_d(\psi)$ is known [@Dod92].
We believe that it may be possible to adapt our argument to deduce this result in the case $\sum_{q=1}^{\infty} \psi(q)^{d}<\infty$. The case where $\sum_{q=1}^{\infty} \psi(q)^{d}=\infty$ appears significantly harder. *Remark 4*. The theory of exact approximation has also been studied in a range of other settings. ZL Zhang proved a version of Bugeaud and Moreira's result in the setting of the field of formal series [@Z12], and He & Xiong [@HX22] have proven an analogue in the setting of approximation by complex rational numbers. Both papers appeal to results from the theory of continued fractions in their respective setting to prove their main theorem. Very recently Bandi, Ghosh and Nandi [@BGN22] have developed a general theory in the setting of exact approximation. In their paper they obtain a result on exact approximation sets in hyperbolic metric spaces. However, as far as we can see, the $d$-dimensional real result presented here cannot be deduced from the general theory presented in their paper. The rest of the paper is organised as follows. In the following section we give an outline of the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. In § [3](#cC(N)){reference-type="ref" reference="cC(N)"} we find a subset ${\cal C}^\tau(N) \subset \mathbf{Bad}_d(\psi_\tau)$ of large measure, as quantified in § [3.1](#cC(N)_measure){reference-type="ref" reference="cC(N)_measure"}. In § [4](#subset_W(psi)){reference-type="ref" reference="subset_W(psi)"} we define the subset of rationals $Q(N, \tau)$ that we consider for positive approximation on ${\cal C}^\tau(N)$, and show that this subcollection of rationals is not too small. In § [5](#cD(B)){reference-type="ref" reference="cD(B)"} we present a construction of a Cantor set and a mass distribution, following the general outline of the proof of the Mass Transference Principle, but relying on a new geometric approximation Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"}.
# Outline of the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} {#proof_main} Before going into the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} we give an overview of the methodology used. Firstly, since $\mathbf{Bad}_{d}(\psi_{\tau}) \subseteq \mathcal{W}_{d}(\psi_{\tau})$ the upper bound follows from the Jarnı́k-Besicovitch Theorem. The main substance of the proof is in establishing a corresponding lower bound. To accomplish this we construct a Cantor subset ${\cal D}(B)$ of $\mathbf{Bad}_{d}(\psi_{\tau}) \cap B$, where $B$ is an arbitrary open ball in $(0,1)^{d}$. The construction of ${\cal D}(B)$ can be split into three main steps. **Step 1:** We construct a set that avoids 'dangerous balls', i.e. parts of the unit cube that lie 'too close' to rational points. The name 'dangerous balls' is taken from the analogous set that is removed when constructing a Cantor subset of badly approximable points; see for example [@BRV16 §7]. In this setting, given $N \in {\mathbb{N}}$ and $\tau> \frac{1}{d}$, we construct a set ${\cal C}^{\tau}(N)$ such that $${\cal C}^{\tau}(N) \cap \bigcup_{\frac{p}{q} \in {\mathbb{Q}}^{d} \cap [0,1]^{d} } B\left(\frac{p}{q}, c_{N}q^{-1-\tau} \right)= \emptyset \, ,$$ for some constant $c_{N}>0$, where $B(x,r)=\{y \in {\mathbb{R}}^{d}: \|x-y\|<r\}$ denotes the open ball with centre $x \in {\mathbb{R}}^{d}$ of radius $r>0$ and $\| \cdot \|$ is the standard supremum norm in $\mathbb{R}^d$. This set, and properties of ${\cal C}^{\tau}(N)$ that will be required later in the proof of the main result, are dealt with in §[3](#cC(N)){reference-type="ref" reference="cC(N)"}. **Step 2:** We then construct a $\limsup$ subset of $\mathcal{W}_{d}(\psi_{\tau})$ such that each ball in the $\limsup$ set retains a positive proportion of its mass when intersected with ${\cal C}^{\tau}(N)$.
We achieve this by constructing a set $Q(N,\tau) \subset {\mathbb{Q}}^{d}$ such that for any $\frac{p}{q} \in Q(N,\tau)$ the inequality $$\lambda_{d}\left( {\cal C}^{\tau}(N) \cap B\left(\frac{p}{q}, q^{-1-\tau} \right) \right) \geq \kappa \lambda_{d}\left( B\left(\frac{p}{q}, q^{-1-\tau} \right)\right)\,$$ holds for some fixed constant $\kappa>0$ independent of $\frac{p}{q}$. This is established in the course of proving Lemma [Lemma 5](#Q_approximations){reference-type="ref" reference="Q_approximations"}. **Step 3:** The final part of the argument is to construct a Cantor subset and a mass distribution, inspired by the Mass Transference Principle (a now standard tool for proving metric results on $\limsup$ sets, see [@AT19]). A key step in the construction of the Cantor set ${\cal D}(B)$ is to show that for every ball $\tilde{B}=B\left(\frac{p}{q}, q^{-1-\tau} \right)$ with $\frac{p}{q} \in {\mathbb{Q}}^{d}$ there exists a finite disjoint collection of balls of the form $B\left(\frac{p'}{q'}, q'^{-1-\tau} \right)$ with $\frac{p'}{q'} \in {\mathbb{Q}}^{d}$ that are contained in $\tilde{B}$ and intersect a positive proportion of ${\cal C}^{\tau}(N)$. A crucial lemma to ensure this happens is Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} of §[5](#cD(B)){reference-type="ref" reference="cD(B)"}. This statement is the counterpart of the $K_{G,B}$ lemma from the proof of the MTP [@BV06]. This methodology is sufficient to prove Theorem [Theorem 2](#D-set){reference-type="ref" reference="D-set"}, which in turn implies Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. # Step 1: The construction of ${\cal C}^{\tau}(N)$ {#cC(N)} The following 'Simplex lemma' is a crucial element in the construction of ${\cal C}^{\tau}(N)$. The proof can be found in [@KTV06]. **Lemma 1**. *Let $d \geq 1$ be an integer and $Q \in {\mathbb{N}}$.
Let $E \subset {\mathbb{R}}^{d}$ be a convex set of $d$-dimensional Lebesgue measure $$\lambda_{d}(E) \leq (d!)^{-1}Q^{-(d+1)}.$$ Suppose $E$ contains $d+1$ rational points with denominator $1 \leq q \leq Q$. Then these rational points lie on some hyperplane of ${\mathbb{R}}^{d}$.* By splitting up the construction of ${\cal C}^{\tau}(N)$ into suitable levels, this result tells us that regions containing 'dangerous balls' will look like thickened $(d-1)$-dimensional hyperplanes in $[0,1]^{d}$. Fix some $N \in {\mathbb{N}}$. (We will need to take this $N$ large enough along the course of the proofs, but it will eventually be fixed.) Construct ${\cal C}^{\tau}(N)$ as follows: 1. Split $[0,1]^{d}$ into $t^{d}$ cubes of sidelength $t^{-1}$ for some $t \in {\mathbb{N}}$ where $t$ is chosen to satisfy $t > (d!)^{1/d}$. Now split each of these cubes into $N^{d+1}$ cubes $I$ each with side-length $t^{-1}N^{-\frac{d+1}{d}}$ and volume $t^{-d}N^{-(d+1)}$. Let $C_{1}$ denote the set of cubes constructed in this way. Note there are $t^{d}N^{(d+1)}$ cubes in total. 2. Within each cube $I$ we want to eventually remove the rational points $(\frac{p_{1}}{q}, \dots , \frac{p_{d}}{q})$ with $q$ bounded by $1 \leq q < N$. Apply Lemma [Lemma 1](#simplex_lemma){reference-type="ref" reference="simplex_lemma"} to each cube $I \in C_{1}$ to establish that any such rational points are contained in some hyperplane $L$, which we choose to be the minimal such affine subspace. That is, if there is only a single point $\frac{p}{q} \in I$ then $L$ is a point, if there are two points $\frac{p}{q}, \frac{p'}{q'} \in I$ then $L$ is the unique line between $\frac{p}{q}$ and $\frac{p'}{q'}$ intersected with $I$, and so on. Lemma [Lemma 1](#simplex_lemma){reference-type="ref" reference="simplex_lemma"} tells us that at worst $L$ could be a $(d-1)$-dimensional affine hyperplane. For simplicity we will call any such $L$ a hyperplane.
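The Simplex lemma just applied can be sanity-checked numerically via the determinant argument that underlies it: any three rational points with common denominators at most $Q$ that do not lie on a line span a triangle of area at least $(d!)^{-1}Q^{-(d+1)}$, so a convex set of smaller measure forces collinearity. A minimal sketch in Python with exact rational arithmetic (the values $d=2$, $Q=4$ are illustrative choices, not taken from the text):

```python
from fractions import Fraction
from itertools import combinations

d, Q = 2, 4  # illustrative choices; the lemma holds for all d >= 1, Q in N

# All rational points (p1/q, p2/q) in [0,1]^2 with common denominator q <= Q.
points = {(Fraction(p1, q), Fraction(p2, q))
          for q in range(1, Q + 1)
          for p1 in range(q + 1)
          for p2 in range(q + 1)}

def triangle_area(a, b, c):
    # Exact area |det(b - a, c - a)| / 2 of the triangle spanned by a, b, c.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2

# Determinant argument behind the lemma: the area of any non-degenerate
# triangle with vertices of denominators q0, q1, q2 is a positive multiple
# of 1/(2 q0 q1 q2) >= (d!)^{-1} Q^{-(d+1)}, so a convex set of smaller
# measure cannot contain d + 1 = 3 rational points off a single line.
bound = Fraction(1, 2 * Q ** (d + 1))
min_area = min(a for a in (triangle_area(*T) for T in combinations(points, 3))
               if a > 0)
assert min_area >= bound
```

The brute-force minimum over all non-degenerate triples confirms the bound that makes each cube of $C_{1}$ (of volume $t^{-d}N^{-(d+1)}$ with $t>(d!)^{1/d}$) small enough for the lemma to apply with $Q=N$.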
Let ${\cal L}_{1}$ denote the collection of all hyperplanes $L$ constructed in this manner for each cube in $C_{1}$. Remember these sets for later. 3. Repeat this process: assume that levels up to $n-1$ have been constructed. At the $n$th level split each cube $I \in C_{n-1}$ into $N^{d+1}$ cubes each with side length $t^{-1} N^{-(n-1)\frac{d+1}{d}} \times N^{-\frac{d+1}{d}}=t^{-1} N^{-n\frac{d+1}{d}}$ and volume $t^{-d} N^{-n(d+1)}$. Let $C_{n}$ denote all cubes constructed in this layer. For a cube $I \in C_{j}$ for $j=1, \dots , n-1$ let $C_{n}(I)$ denote all cubes in $C_n$ which are contained in $I$. Apply Lemma [Lemma 1](#simplex_lemma){reference-type="ref" reference="simplex_lemma"} to each $I^\prime \in C_{n}$ to find a hyperplane $L$ containing all rational points $(\frac{p_{1}}{q}, \dots , \frac{p_{d}}{q}) \in I^\prime$ with $1 \leq q < N^{n}$. Let ${\cal L}_{n}$ denote the collection of all level $n$ hyperplanes $L$ constructed in this way. For a cube $I \in C_{j}$ for $j=1, \dots , n-1$, let ${\cal L}_{n}(I)$ denote the set of all hyperplanes that correspond to cubes $I^\prime \in C_{n}(I)$. Observe that a hyperplane $L \in {\cal L}_{n}$ contains, in particular, all the rational points of the corresponding cube with denominators $$N^{n-1} \leq q < N^{n}.$$ 4. *Removal of the hyperplanes*: Recall we eventually want to remove all dangerous balls $B\left(\tfrac{p}{q}, cq^{-(1+\tau)}\right)$ for some constant $c>0$. Choose $$c_{N}=t^{-1}N^{-u(1+\tau)}$$ with $u \in {\mathbb{N}}$ constant such that $$\label{u_def} u>3>2+\frac{(d+1)(d-1)}{(1+\tau)d^{2}}.$$ Then $$\bigcup_{N^{n-1} \leq q < N^{n}} B\left(\frac{p}{q} , c_{N}q^{-(1+\tau)}\right) \subset \bigcup_{N^{n-1} \leq q < N^{n}} B\left(\frac{p}{q}, t^{-1}N^{-(n-1+u)(1+\tau)}\right).$$ At layer $$\ell(1):=\left\lfloor u\frac{(1+\tau)d}{d+1}\right\rfloor$$ remove all cubes $I\in C_{\ell(1)}$ that intersect the $t^{-1}N^{-u(1+\tau)}$-thickening of some hyperplane contained in ${\cal L}_{1}$.
That is, for a hyperplane $L \in {\cal L}_{1}$ constructed in the cube $I' \in C_{1}$ and $$\delta(1)=t^{-1}N^{-u(1+\tau)}\, ,$$ let $$\label{thickened_hyperplane level 1} L^{\delta(1)}:= \left\{ \mathbf{x}\in I' : \operatorname{dist}(\mathbf{x}, L)=\inf_{(a_{1}, \dots , a_{d}) \in L}\left\{\max_{1 \leq i \leq d} |x_{i}-a_{i}| \right\} \leq \delta(1) \right\}\, .$$ Then, denoting ${\cal S}_{\ell(1)}$ as the first level surviving set of cubes in $C_{\ell(1)}$, we have $${\cal S}_{\ell(1)}:= \left\{ I \in C_{\ell(1)} :\, \, I \cap \bigcup_{L \in {\cal L}_{1}} L^{\delta(1)} = \emptyset \right\}.$$ 5. *Iterative removal of hyperplanes*: We follow the same steps as in the construction of the first layer of surviving cubes, with the small exception that we only remove hyperplanes that still contain 'significant' rational points after the previous level of pruning. That is, at layer $$\label{l(n)} \ell(n):= \left\lfloor (n-1+u)\frac{(1+\tau)d}{d+1} \right\rfloor$$ remove all cubes $I \in C_{\ell(n)}$ that intersect the $t^{-1}N^{-(n-1+u)(1+\tau)}$-thickening of some hyperplane contained in ${\cal L}_{n}^{*}$, where $${\cal L}_{n}^{*}:=\left\{ L \in {\cal L}_{n} : \, \, \exists \frac{p}{q} \in L \text{ with } \begin{cases} i) \quad N^{n-1}\leq q<N^{n}\, ,\\ ii)\quad B\left(\tfrac{p}{q},\delta(n)\right)\cap \bigcup\limits_{I\in {\cal S}_{\ell(n-1)}}I \neq \emptyset\, .
\end{cases} \right\}.$$ That is, for a hyperplane $L \in {\cal L}_{n}^{*}$ constructed in the cube $I' \in C_{n}$ and $$\delta(n)=t^{-1}N^{-(n-1+u)(1+\tau)}\, ,$$ let $$\label{thickened_hyperplane} L^{\delta(n)}:= \left\{ \mathbf{x}\in I' : \operatorname{dist}(\mathbf{x}, L)=\inf_{(a_{1}, \dots , a_{d}) \in L}\left\{\max_{1 \leq i \leq d} |x_{i}-a_{i}| \right\} \leq \delta(n) \right\}\, .$$ Then, denoting ${\cal S}_{\ell(n)}$ as the surviving set of cubes in $C_{\ell(n)}$, we have $${\cal S}_{\ell(n)}:= \left\{ I \in C_{\ell(n)} :\, \, I \cap \bigcup_{L \in {\cal L}_{n}^{*}} L^{\delta(n)} = \emptyset \quad \text{ \& } \quad \exists \, \, I' \in {\cal S}_{\ell(n-1)} \text{ s.t. } I \subset I'\right\} \, .$$ For ease of notation in later stages let $$\widehat{L^{\delta(n)}}:= \left\{ I\in C_{\ell(n)}: I\cap L^{\delta(n)} \neq \emptyset\right\}\, ,$$ so in particular $${\cal S}_{\ell(n)}=\left(\bigcup_{I\in {\cal S}_{\ell(n-1)}}C_{\ell(n)}(I) \right) \backslash \left( \bigcup_{L\in{\cal L}^{*}_{n}} \widehat{L^{\delta(n)}}\right).$$ For any cube $I \in C_{j}$ with $j=1, \dots , \ell(n)-1$ let ${\cal S}_{\ell(n)}(I)$ denote the set of cubes in ${\cal S}_{\ell(n)}$ that are contained in $I$. 6. Let $${\cal C}^{\tau}(N)=\bigcap_{n \in {\mathbb{N}}} \bigcup_{I \in {\cal S}_{\ell(n)}} I.$$ Observe that our constructed set ${\cal C}^{\tau}(N)$ does indeed avoid all dangerous balls. We have the following statement. **Lemma 2**. *$${\cal C}^{\tau}(N) \cap \bigcup_{\frac{p}{q} \in {\mathbb{Q}}^{d} \cap [0,1]^{d}} B\left( \frac{p}{q}, c_{N}q^{-(1 + \tau)} \right)= \emptyset \, .$$* *Proof.* Suppose there exists $\frac{p}{q}\in{\mathbb{Q}}^{d}$ such that $$B\left(\frac{p}{q},c_{N}q^{-(1+\tau)}\right)\cap {\cal C}^{\tau}(N) \neq \emptyset.$$ We will derive a contradiction. Suppose $N^{k-1}\leq q < N^{k}$ for some $k\in{\mathbb{N}}$.
Then $$\label{delta size relative to q} c_{N}q^{-(1+\tau)}\leq c_{N}N^{-(k-1)(1+\tau)}=t^{-1}N^{-(k-1+u)(1+\tau)}=\delta(k)\, .$$ By the Simplex Lemma there exists a hyperplane $L=L\left(\tfrac{p}{q}\right)\in{\cal L}_{k}$ containing $\frac{p}{q}$. If $L\left(\tfrac{p}{q}\right)\in {\cal L}^{*}_{k}$ then by construction we have that $$L\left(\tfrac{p}{q}\right)^{\delta(k)}\cap {\cal C}^{\tau}(N)=\emptyset \, ,$$ and so by [\[delta size relative to q\]](#delta size relative to q){reference-type="eqref" reference="delta size relative to q"} $$B\left(\frac{p}{q},c_{N}q^{-(1+\tau)}\right)\cap {\cal C}^{\tau}(N) \subseteq B\left(\frac{p}{q},\delta(k)\right)\cap {\cal C}^{\tau}(N) = \emptyset.$$ Thus we must have $L\left(\tfrac{p}{q}\right) \not \in {\cal L}^{*}_{k}$. We know that $L\left(\tfrac{p}{q}\right)$ contains a rational point with denominator satisfying $i)$ (the condition appearing in ${\cal L}^{*}_{k}$), thus since $L\left(\tfrac{p}{q}\right) \not \in {\cal L}^{*}_{k}$ it must hold that $ii)$ fails. That is $$B\left(\frac{p}{q},\delta(k)\right)\cap \bigcup_{I\in {\cal S}_{\ell(k-1)}}I = \emptyset\, .$$ Since ${\cal C}^{\tau}(N)$ is a nested Cantor set we have that $$\bigcup_{I\in {\cal S}_{\ell(k-1)}}I \supseteq {\cal C}^{\tau}(N)\, ,$$ and so, again, we are forced to conclude that $$B\left(\frac{p}{q},\delta(k)\right)\cap {\cal C}^{\tau}(N) = \emptyset.$$ Appealing to [\[delta size relative to q\]](#delta size relative to q){reference-type="eqref" reference="delta size relative to q"} we obtain that $B\left(\tfrac{p}{q},c_{N}q^{-(1+\tau)}\right)$ has empty intersection with ${\cal C}^{\tau}(N)$. This exhausts all possibilities for $\tfrac{p}{q}$ and so contradicts our initial assumption, thus ${\cal C}^{\tau}(N)$ does indeed avoid all dangerous neighbourhoods of rational points. ◻ ## Lebesgue measure of ${\cal C}^{\tau}(N)$ {#cC(N)_measure} The following measure theoretic properties on ${\cal C}^{\tau}(N)$ will be required later.
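The lemmas below hinge on the tail quantity $\varepsilon_{N}=3t^{-d}\sum_{n\geq 1}N^{\frac{d+1}{d}(n-\ell(n))}$ being small once $N$ is large; its decay can be checked numerically. A short Python sketch with purely illustrative parameters ($d=2$, $\tau=1$, $u=4$, $t=2$, which satisfy $\tau>\frac{1}{d}$, $t>(d!)^{1/d}$ and $u>3$):

```python
import math

def ell(n, d=2, tau=1.0, u=4):
    # l(n) = floor((n - 1 + u)(1 + tau) d / (d + 1)), as in the construction.
    return math.floor((n - 1 + u) * (1 + tau) * d / (d + 1))

def eps_N(N, d=2, tau=1.0, u=4, t=2, terms=200):
    # Truncation of eps_N = 3 t^{-d} sum_{n >= 1} N^{((d+1)/d)(n - l(n))};
    # when tau > 1/d the exponent decreases linearly in n, so the tail of
    # the series is negligible.
    return 3 * t ** -d * sum(N ** ((d + 1) / d * (n - ell(n, d, tau, u)))
                             for n in range(1, terms + 1))

for N in (5, 10, 100):
    print(N, eps_N(N))
```

The printed values drop rapidly with $N$, matching the requirement $\varepsilon_{N}\to 0$ as $N \to \infty$ used below.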
The first of these statements tells us that for large enough $N \in {\mathbb{N}}$ a significant proportion of $[0,1]^{d}$ is contained in ${\cal C}^{\tau}(N)$. **Lemma 3**. *Suppose $\tau>\frac{1}{d}$. Then for any $\varepsilon>0$ there exists $N \in {\mathbb{N}}$ such that $$\lambda_{d}({\cal C}^{\tau}(N)) \geq 1-\varepsilon.$$ Precisely, $$\lambda_{d}({\cal C}^{\tau}(N)) \geq 1- 3t^{-d}\sum_{n=1}^{\infty} N^{\frac{(d+1)}{d}(n-\ell(n))}.$$* *Proof.* Observe that $$\lambda_{d}({\cal C}^{\tau}(N)) \geq 1-\lambda_{d}\left(\bigcup_{n \in {\mathbb{N}}}\left\{ I \in C_{\ell(n)}: I \cap \bigcup_{L \in {\cal L}_{n}} L^{\delta(n)} \neq \emptyset \right\} \right).$$ While we may be removing significantly fewer cubes (recall that in later layers we only remove neighbourhoods of hyperplanes in the reduced set ${\cal L}_{n}^{*}$), the calculation below is sufficient. Note that $L$ is a hyperplane contained in a cube $I \in C_{n}$ of side length $t^{-1} N^{-n\frac{d+1}{d}}$. Hence the thickened hyperplane $L^{\delta(n)}$ (recall [\[thickened_hyperplane\]](#thickened_hyperplane){reference-type="eqref" reference="thickened_hyperplane"}) intersects at most $$3N^{(\ell(n)-n)\frac{(d+1)(d-1)}{d}}$$ cubes $I \in C_{\ell(n)}$. Hence $$\begin{aligned} \label{thick_hyperplanes_measure} \lambda_{d}\left(\bigcup_{n \in {\mathbb{N}}}\left\{ I \in C_{\ell(n)}: I \cap \bigcup_{L \in {\cal L}_{n}} L^{\delta(n)} \neq \emptyset \right\} \right) & \leq \sum_{n=1}^{\infty} \sum_{L \in {\cal L}_{n}} \lambda_{d}\left( \left\{ I \in C_{\ell(n)}: I \cap L^{\delta(n)} \neq \emptyset \right\}\right), \nonumber \\ & \leq \sum_{n=1}^{\infty} N^{n(d+1)}3N^{(\ell(n)-n)\frac{(d+1)(d-1)}{d}} \lambda_{d}(I), \nonumber \\ & \leq 3t^{-d}\sum_{n=1}^{\infty} N^{n(d+1)+(\ell(n)-n)\frac{(d+1)(d-1)}{d}-\ell(n)(d+1)}, \nonumber \\ & \leq 3t^{-d}\sum_{n=1}^{\infty} N^{\frac{(d+1)}{d}(n-\ell(n))} \, = \varepsilon_{N}.
\end{aligned}$$ Thus, provided $\tau>\frac{1}{d}$ (and so $\ell(n)-n>\rho n$ for some $\rho>0$), the above summation is convergent, and for a suitably large choice of $N$ we have that $$\lambda_{d}({\cal C}^{\tau}(N)) \geq 1-\varepsilon_{N}\, ,$$ with $\varepsilon_{N} \to 0$ as $N \to \infty$. ◻ The following lemma gives us an even stronger statement than Lemma [Lemma 3](#measure_cC){reference-type="ref" reference="measure_cC"}, namely that every surviving cube in the construction of ${\cal C}^{\tau}(N)$ retains much of its mass when later layers are removed. **Lemma 4**. *Suppose $\tau >\frac{1}{d}$. Let $n, N\in {\mathbb{N}}$. Then for all $I \in {\cal S}_{\ell(n)}$, $$\lambda_{d}(I \cap {\cal C}^{\tau}(N)) \geq \left(1-3\left( \sum_{k=n+1}^{\ell(n)}N^{(\ell(n)-\ell(k))\frac{d+1}{d}} + \sum_{k=\ell(n)+1}^{\infty} N^{(k-\ell(k))\frac{d+1}{d}} \right) \right) \lambda_{d}(I).$$* *Remark 5*. Note that the constant bound given here is larger than the constant proven in Lemma [Lemma 3](#measure_cC){reference-type="ref" reference="measure_cC"} for all $n \in {\mathbb{N}}$. So we could take $\varepsilon_{N}$ (given by [\[thick_hyperplanes_measure\]](#thick_hyperplanes_measure){reference-type="eqref" reference="thick_hyperplanes_measure"}) to be the universal constant for all surviving cubes $I \in \bigcup _{n \in {\mathbb{N}}} {\cal S}_{\ell(n)}$. *Proof.* The proof is similar to that of Lemma [Lemma 3](#measure_cC){reference-type="ref" reference="measure_cC"}.
Firstly, since $I \in {\cal S}_{\ell(n)}$ we clearly have that $$I \not \in \bigcup_{k=1}^{n} \left\{ I' \in C_{\ell(k)}: I' \cap \bigcup_{L \in {\cal L}_{k}} L^{\delta(k)} \neq \emptyset \right\}\, ,$$ and so $$\lambda_{d}\left(I \cap \bigcup_{k=1}^{n} \bigcup_{I' \in {\cal S}_{\ell(k)}} I'\right)=\lambda_{d}(I).$$ Hence $$\lambda_{d}(I \cap {\cal C}^{\tau}(N)) \geq \lambda_{d}(I) - \lambda_{d}\left( \bigcup_{k=n+1}^\infty \left\{ I' \in C_{\ell(k)}(I):I' \cap \bigcup_{L \in {\cal L}_{k}} L^{\delta(k)} \neq \emptyset \right\} \right).$$ To count the number of hyperplanes from $\bigcup_{k >n}\bigcup_{L \in {\cal L}_{k}} L$ contained in $I$ observe that $$\label{hyperplanes_1} \#\left\{L \in {\cal L}_{k}: I \cap L^{\delta(k)} \neq \emptyset \right\} \leq 1 \quad \quad n+1 \leq k \leq \ell(n),$$ since $I$ intersects a unique cube $I' \in C_{k}$ for $n+1 \leq k \leq \ell(n)$ (in this case $I$ is actually contained in such a cube). Furthermore, $$\label{hyperplanes_2} \#\left\{L \in {\cal L}_{k}: I \cap L^{\delta(k)} \neq \emptyset \right\} \leq N^{(k-\ell(n))(d+1)} \quad \quad k \geq \ell(n)+1,$$ since we are now looking at the number of cubes $I' \in C_{k}(I)$ that generate hyperplanes. Now we consider the number of subcubes $I' \in C_{\ell(k)}(I)$ that could intersect with a thickened hyperplane $L \in {\cal L}_{k}$. We have that $$\label{hyperplane_interesect_1} \#\{I' \in C_{\ell(k)}(I) : I' \cap L^{\delta(k)} \neq \emptyset\} \leq 3N^{(\ell(k)-\ell(n))\frac{(d+1)(d-1)}{d}} \quad \quad \text{ if } L \in {\cal L}_{k} \text{ for } n+1 \leq k \leq \ell(n),$$ since there are at most $N^{(\ell(k)-\ell(n))(d+1)}$ $\ell(k)$-level cubes in $I$. Also, $$\label{hyperplane_intersect_2} \#\{I' \in C_{\ell(k)}(I) : I' \cap L^{\delta(k)} \neq \emptyset\} \leq 3N^{(\ell(k)-k)\frac{(d+1)(d-1)}{d}} \quad \quad \text{ if $L \in {\cal L}_{k}$ for $k \geq \ell(n)+1$},$$ since each hyperplane of interest is now contained in some cube in $C_{k}$.
Bringing these together, we have that $$\begin{aligned} \lambda_{d}\left( \bigcup_{k \in {\mathbb{N}}_{> n}} \left\{ I' \in C_{\ell(k)}(I):I' \cap \bigcup_{L \in {\cal L}_{k}} L^{\delta(k)} \neq \emptyset \right\} \right) & \\ & \hspace{-5cm} \leq \sum_{k=n+1}^{\infty} \, \, \, \sum_{L \in {\cal L}_{k}:L \cap I \neq \emptyset} \lambda_{d}\left( \left\{ I' \in C_{\ell(k)}(I) : I' \cap L^{\delta(k)} \neq \emptyset \right\}\right), \\ & \hspace{-6cm} \overset{\eqref{hyperplanes_1}-\eqref{hyperplane_intersect_2}}{\leq} \sum_{k=n+1}^{\ell(n)} 3t^{-d}N^{(\ell(k)-\ell(n))\frac{(d+1)(d-1)}{d}} N^{-\ell(k)(d+1)} \quad + \\ & \hspace{-2cm} \sum_{k=\ell(n)+1}^{\infty} 3t^{-d}N^{(k-\ell(n))(d+1)} N^{(\ell(k)-k)\frac{(d+1)(d-1)}{d}} N^{-\ell(k)(d+1)} \\ & \hspace{-7cm} = t^{-d}N^{-\ell(n)(d+1)}3\left( \sum_{k=n+1}^{\ell(n)}N^{(\ell(n)-\ell(k))\frac{d+1}{d}} + \sum_{k=\ell(n)+1}^{\infty} N^{(k-\ell(k))\frac{d+1}{d}} \right). \end{aligned}$$ And so we have that for $I \in {\cal S}_{\ell(n)}$ $$\lambda_{d}(I \cap {\cal C}^{\tau}(N)) \geq \left( 1-3\left( \sum_{k=n+1}^{\ell(n)}N^{(\ell(n)-\ell(k))\frac{d+1}{d}} + \sum_{k=\ell(n)+1}^{\infty} N^{(k-\ell(k))\frac{d+1}{d}} \right) \right)\lambda_{d}(I).$$ ◻ # Step 2: A suitable subset of $\mathcal{W}_{d}(\psi_{\tau})$ {#subset_W(psi)} Recall we want to construct a subset of $\mathcal{W}_{d}(\psi_{\tau})$ such that any ball in the corresponding $\limsup$ set retains enough mass when intersected with ${\cal C}^{\tau}(N)$. To this end we construct the following subset of ${\mathbb{Q}}^{d}$. Define $$Q(N,\tau):=\left\{ \frac{p}{q} \in {\mathbb{Q}}^{d}: \exists \, n\in{\mathbb{N}} \, \text{ and } \, L\in{\cal L}^*_{n} \, \text{ with } \, \frac{p}{q}\in L \quad \text{ and } \begin{cases} i)\, N^{n-1} \leq q < N^{n} \, , \\ ii) \, B\left(\tfrac{p}{q},\delta(n)\right)\cap \bigcup_{I\in {\cal S}_{\ell(n-1)}}I \neq \emptyset\, .
\end{cases} \right\}\, .$$ We say $\tfrac{p}{q}$ is a *leading rational of a hyperplane $L \in {\cal L}_{n}$* if $L$ is the hyperplane on which $\tfrac{p}{q}$ lies while satisfying the above conditions. Essentially, to each hyperplane removed in the construction of ${\cal C}^{\tau}(N)$ we associate a set of rational points $\tfrac{p}{q}$ that lie on it and whose $\delta(n)$-neighbourhoods have significant intersection with surviving cubes of the previous layer. Note a few important properties of leading rationals: 1. Every hyperplane $L\in \bigcup_{n\in{\mathbb{N}}}{\cal L}^{*}_{n}$ contains at least one leading rational. This is clear from how we define the sets ${\cal L}^{*}_{n}$ and $Q(N,\tau)$. Namely, if $L$ does not contain a leading rational, then it would not have been removed. 2. If $\tfrac{p}{q}$ is a leading rational of a hyperplane $L \in {\cal L}^{*}_{n}$ that was constructed in cube $I\in C_{n}$ then $$B\left(\tfrac{p}{q}, 2q^{-1-\frac{1}{d}}\right) \supseteq I\, .$$ To see this observe that trivially $\tfrac{p}{q} \in I$, that the sidelength of $I$ is $t^{-1}N^{-n\frac{d+1}{d}} \leq N^{-n(1+\tfrac{1}{d})}$, and that $N^{n-1}\leq q \leq N^{n}$. 3. If $\tfrac{p}{q}$ is not a leading rational then there exists a hyperplane $L\in \bigcup_{k\in{\mathbb{N}}}{\cal L}^{*}_{k}$, say $L\in {\cal L}^{*}_{n}$, for which $\tfrac{p}{q} \in \widehat{L^{\delta(n)}}$ and any leading rational of $L$, say $\tfrac{r}{s}$, has $s<q$. This may not be so obvious. To see this, suppose $N^{n-1}< q \leq N^{n}$. Then $\tfrac{p}{q}$ must, by the simplex lemma, lie on a hyperplane of ${\cal L}_{n}$. If $\tfrac{p}{q}$ is not a leading rational of said hyperplane, then it must lie in some strip removed at a previous level. Any leading rational, say $\frac{r}{s}$, of a hyperplane from a previously removed layer must, by definition, have denominator $s<N^{n-1}<q$. Let $$\mathcal{W}_{d}(Q(N,\tau), \psi):= \left\{ \mathbf{x}\in [0,1]^{d} : \max_{1 \leq i \leq d}\left| x_{i}-\frac{p_{i}}{q} \right| < \frac{\psi(q)}{q} \quad \text{ for i.m.
} \, \, \frac{p}{q} \in Q(N,\tau) \right\}.$$ The following two lemmas are crucial properties of the set $Q(N,\tau)$. **Lemma 5**. *Let $\psi(q)=3q^{-\frac{1}{d}}$ and suppose that $\tau>\frac{1}{d}$. Then $$\mathcal{W}_{d}(Q(N,\tau), \psi)\supseteq {\cal C}^{\tau}(N)\, .$$* *Proof.* Take any $\mathbf{x}\in {\cal C}^{\tau}(N)$. By Dirichlet's Theorem $\mathbf{x}\in \mathcal{W}_{d}(\psi_{1/d})$. Let $A(\mathbf{x})$ be the set of all those rational points $\frac{p}{q}$ for which $$\mathbf{x}\in B\left(\frac{p}{q}, q^{-1-\frac{1}{d}}\right).$$ If the cardinality of $A(\mathbf{x}) \cap Q(N, \tau)$ is infinite, then clearly $\mathbf{x}\in \mathcal{W}_{d}(Q(N,\tau), \psi)$, so assume $A(\mathbf{x}) \cap Q(N,\tau)$ is finite. Let $\frac{p}{q} \in A(\mathbf{x}) \backslash Q(N,\tau)$. Let $N^{m-1}<q\leq N^m$. Since $\tfrac{p}{q}$ is not a leading rational, by property 3. there exists some hyperplane $L$, say $L\in{\cal L}^{*}_{n}$ with $n<m$, such that $\tfrac{p}{q} \in \widehat{L^{\delta(n)}}$. Let $\tfrac{r}{s}$ be a leading rational of $L$. Then by property 3. of leading rationals $s<q$, and by the triangle inequality $$\label{eq1} \left|\mathbf{x}-\frac{r}{s}\right| \leq \left|\mathbf{x}-\frac{p}{q}\right|+ \left|\frac{p}{q}-\frac{r}{s}\right|< 3s^{-1-\tfrac{1}{d}}.$$ If the sequence of rational points in $A(\mathbf{x})$ can be associated to infinitely many different leading rationals, then we are done by the above argument. We now prove that there must be hyperplanes $L\in {\cal L}_n^*$ from arbitrarily high levels $n$ associated to the points in the sequence $A(\mathbf{x})$. That is, they are leading rationals on these hyperplanes themselves, or we find a nearby leading rational by the above argument. Assume to the contrary that the sequence $A(\mathbf{x})$ is associated to finitely many hyperplanes. Suppose $m$ is the largest level in which this finite sequence of hyperplanes appears.
By definition of $\mathbf{x}\in {\cal C}^{\tau}(N)$ we have that $$\mathbf{x}\in [0,1]^{d} \backslash \left( \bigcup_{j\leq m} \bigcup_{L\in {\cal L}^{*}_{j}} \widehat{L^{\delta(j)}}\right)\, ,$$ which is an open set, and so there exists some $\varepsilon>0$ such that $$d\left(\mathbf{x}, \bigcup_{j\leq m} \bigcup_{L\in {\cal L}^{*}_{j}} \widehat{L^{\delta(j)}}\right)>\varepsilon\, .$$ However, for any $\frac{u}{v}\in A(\mathbf{x})$ with $v>\varepsilon^{-1}$ we have that $$\left| \mathbf{x}-\frac{u}{v}\right|<v^{-1-\tfrac{1}{d}}<\varepsilon\, ,$$ and so $$\frac{u}{v} \not \in \bigcup_{j\leq m} \bigcup_{L\in {\cal L}^{*}_{j}} \widehat{L^{\delta(j)}}\, ,$$ contradicting the finiteness of the sequence of associated hyperplanes. Thus we have an infinite sequence of associated hyperplanes to $A(\mathbf{x})$, which by [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} we can associate to an infinite sequence of rational points which sufficiently approximate $\mathbf{x}$ so that $\mathbf{x}\in \mathcal{W}_{d}(Q(N,\tau),\psi)$ as required. ◻ Lastly, the following lemma on the measure of balls in the construction of $\mathcal{W}_{d}(Q(N,\tau),\psi_{\tau})$ will be required later. It essentially tells us that each ball $B\left(\frac{p}{q},q^{-1-\tau}\right)$ with $\frac{p}{q}\in Q(N,\tau)$ contains a surviving cube that covers a constant proportion of $B\left(\frac{p}{q},q^{-1-\tau}\right)$ independent of $\frac{p}{q}$. **Lemma 6**. *Let $B(\tau)=B\left(\tfrac{p}{q},q^{-1-\tau}\right)$ for some $\frac{p}{q}\in Q(N,\tau)$. Then there exists $I \in \bigcup_{n\in {\mathbb{N}}} {\cal S}_{\ell(n)}$ with $I \subset B(\tau)$ and $$\lambda_{d}(B(\tau))\leq C\lambda_{d}(I),$$ with $C>0$ independent of $\frac{p}{q}$.* *Proof.* Suppose that $N^{n-1}\leq q< N^{n}$. 
Since $\frac{p}{q} \in Q(N,\tau)$ we have that $$B\left(\frac{p}{q}, \delta(n)\right)\cap \bigcup_{I\in {\cal S}_{\ell(n-1)}}I \neq \emptyset\, .$$ Let $I^{*}\in {\cal S}_{\ell(n-1)}$ be some cube with non-empty intersection with $B\left(\tfrac{p}{q},\delta(n)\right)$. Then, by considering the sidelength of $I^{*}$ we have that $$B\left(\frac{p}{q}, 3\max\left\{t^{-1}N^{-\ell(n-1)\left(1+\tfrac{1}{d}\right)},\delta(n)\right\} \right) \supseteq I^{*}\, .$$ Now observe that $$\begin{aligned} 3t^{-1}N^{-\ell(n-1)\left(1+\frac{1}{d}\right)} &\leq 3t^{-1}N^{-\left\lfloor(n-2+u)\frac{(1+\tau)d}{(d+1)}\right\rfloor\frac{d+1}{d}} \\ &\leq 3t^{-1}N^{-(n-1+u)\frac{(1+\tau)d}{(d+1)}\frac{d+1}{d}}\,\,\, \,\,\,(=3\delta(n))\\ &=3t^{-1}N^{-(u-1)(1+\tau)}N^{-n(1+\tau)}\\ &<q^{-(1+\tau)}\, . \end{aligned}$$ Hence $$B\left(\frac{p}{q},q^{-(1+\tau)}\right) \supseteq B\left(\frac{p}{q}, 3\max\left\{t^{-1}N^{-\ell(n-1)(1+\tfrac{1}{d})},\delta(n)\right\} \right) \supseteq I^{*}\, .$$ It remains to see that $$\begin{aligned} \lambda_{d}(I^{*})&= t^{-d}N^{-\ell(n-1)(d+1)}\\ &\geq t^{-d}N^{-(n-2+u)(1+\tau)d}\\ & \geq t^{-d}N^{-(u-1)(1+\tau)d}q^{-(1+\tau)d}\\ &=2^{-d}t^{-d}N^{-(u-1)(1+\tau)d} \lambda_{d}\left(B\left(\frac{p}{q},q^{-(1+\tau)}\right)\right)\, , \end{aligned}$$ and so taking $C=2^{d}t^{d}N^{(u-1)(1+\tau)d}$ in the lemma completes the proof. ◻ # Step 3: Construction of a Cantor subset of $\mathbf{Bad}_{d}(\psi_{\tau})$ {#cD(B)} The following lemma is crucial in the construction of our Cantor subset of $\mathbf{Bad}_{d}(\psi_{\tau})$. **Lemma 7** ($T_{G,I}$ Lemma). *Let $\{B_{i}\}_{i \in {\mathbb{N}}}$ be a sequence of balls in ${\mathbb{R}}^{d}$ with $r(B_{i}) \to 0$ as $i \to \infty$. Suppose that there exists some constant $C>0$ such that $$\label{partial_measure} \lambda_{d}\left( I \cap \limsup_{i \to \infty} B_{i}\right)\geq C \lambda_{d}(I)$$ for any $I \in \bigcup_{n \in {\mathbb{N}}} {\cal S}_{\ell(n)}$.
Then for any $I \in \bigcup_{n \in {\mathbb{N}}} {\cal S}_{\ell(n)}$ and any $G>1$ there is a finite subcollection $$T_{G,I} \subset \{B_{i}: i \geq G\}$$ such that the balls are disjoint, lie inside $I$, and $$\lambda_{d}\left( \bigcup_{\tilde{B} \in T_{G,I}}\tilde{B}\right) \geq \kappa_{1} \lambda_{d}(I),$$ with $\kappa_{1}=C5^{-d}4^{-1}$.* *Remark 6*. Note that Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} is applicable to our setting with $$\{B_{i}\}_{i \in {\mathbb{N}}}=\left\{ B\left(\frac{p}{q}, 2q^{-1-\frac{1}{d}} \right) \right\}_{\frac{p}{q} \in Q(N,\tau)}\, ,$$ i.e. $\limsup_{i \to \infty} B_{i}=\mathcal{W}_{d}\left(Q(N, \tau), 2 \psi_{1/d}\right)$, since $$\begin{aligned} \lambda_{d}\left(I \cap \mathcal{W}_{d}\left(Q(N, \tau), 2\psi_{1/d}\right) \right) & \overset{\text{ Lemma~\ref{Q_approximations}}}{\geq} \lambda_{d}\left(I \cap {\cal C}^{\tau}(N) \cap \mathcal{W}_{d}(\psi_{1/d}) \right) \\ & = \lambda_{d}\left(I \cap {\cal C}^{\tau}(N) \right)\\ & \overset{\text{ Lemma~\ref{measure_cC_cubes}}}{\geq} (1-\varepsilon_{N})\lambda_{d}(I).\end{aligned}$$ Note that we can order the collection of balls in a decreasing order. *Proof.* Suppose that $I \in {\cal S}_{\ell(n)}$ and let ${\cal S}_{\ell(n+1)}^{*}(I)$ denote the set of cubes $I' \in {\cal S}_{\ell(n+1)}$ contained in the **interior** of $I$ (remove the edge cubes). Observe that $$\label{eq1.1} \lambda_{d}\left(\bigcup_{I' \in {\cal S}_{\ell(n+1)}^{*}(I)} I' \right) \geq \frac{1}{2}\lambda_{d}(I)$$ since we are removing at most $2^{d}+1$ hyperplanes worth of cubes which (for sufficiently large $N \in {\mathbb{N}}$) is small relative to the size of $I$. 
Let $${\cal G}:=\left\{B_{i} : B_{i} \cap \bigcup_{I' \in {\cal S}_{\ell(n+1)}^{*}(I)} I' \neq \emptyset\, , \, \, i \geq G \right\}.$$ We may assume $r(B_{i})$ is monotonically decreasing (since we can order $B_{i}$ however we want) and so for sufficiently large $i$ ($i \geq i_{0}$ such that $r(B_{i_{0}}) < \frac{1}{2}N^{-\ell(n+1)\frac{d+1}{d}}$) any ball $B_{i} \in {\cal G}$ is contained in $I$. By the $5r$-covering lemma (see for example [@Hein01 Theorem 1.2] ) there exists a disjoint subcollection ${\cal G}' \subseteq {\cal G}$ such that $$\bigcup_{B_{i} \in{\cal G}} B_{i} \subset \bigcup_{B_{i} \in {\cal G}'}5B_{i}$$ and the balls in ${\cal G}'$ are disjoint. It follows that $$\begin{aligned} \lambda_{d}\left( \bigcup_{B_{i} \in {\cal G}'} 5B_{i} \right) & \geq \lambda_{d}\left( \bigcup_{I' \in {\cal S}_{\ell(n+1)}^{*}(I)}I' \cap \limsup_{i \to \infty} B_{i} \right) \\ & =\sum_{I' \in {\cal S}_{\ell(n+1)}^{*}(I)} \lambda_{d}\left( I' \cap \limsup_{i \to \infty} B_{i} \right) \\ & \overset{\eqref{partial_measure}}{\geq} C \sum_{I' \in {\cal S}_{\ell(n+1)}^{*}(I)} \lambda_{d}(I') \\ & \overset{\eqref{eq1.1}}{\geq} \frac{C}{2} \lambda_{d}(I).\end{aligned}$$ Furthermore, since ${\cal G}'$ is a disjoint collection of balls we have that $$\begin{aligned} \lambda_{d}\left( \bigcup_{B_{i} \in {\cal G}'} 5B_{i} \right) & \leq \sum_{B_{i} \in {\cal G}'} 5^{d} \lambda_{d}(B_{i}) \\ & = 5^{d}\lambda_{d}\left( \bigcup_{B_{i} \in {\cal G}'} B_{i} \right).\end{aligned}$$ Hence $$\lambda_{d}\left( \bigcup_{B_{i} \in {\cal G}'} B_{i} \right) \geq \frac{C}{5^{d}2} \lambda_{d}(I).$$ Since the balls of ${\cal G}'$ are disjoint and $r(B_{i}) \to 0$ as $i \to \infty$ we have that $$\lambda_{d}\left( \bigcup_{B_{i} \in {\cal G}': i \geq j} B_{i} \right) \to 0 \quad \text{ as } \quad j \to \infty.$$ Hence there exists $j_{0} \geq G$ such that $$\lambda_{d}\left( \bigcup_{B_{i} \in {\cal G}': i \leq j_{0}} B_{i} \right) \geq \frac{C}{5^{d}4} \lambda_{d}(I).$$ Letting 
$$T_{G,I}=\{B_{i} \in {\cal G}' : i \leq j_{0}\}$$ completes the proof. ◻ ## Constructing ${\cal D}(B)$ Armed with Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} we now proceed with the construction of a Cantor subset ${\cal D}(B)$ of $${\cal C}^{\tau}(N) \cap \mathcal{W}_{d}(Q(N,\tau), \psi_{\tau}) \cap B \,$$ for any ball $B \subset [0,1]^{d}$. A sketch of the construction is as follows: i) Firstly, given a ball $B$, we choose a suitable $N\in{\mathbb{N}}$ such that $B$ has non-empty intersection with ${\cal C}^{\tau}(N)$. (We will choose $N$ larger later when necessary.) ii) To a surviving cube $I\subset B$ we then apply Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} to obtain a disjoint collection of balls from $\left\{B\left(\frac{p}{q}, 2q^{-1-\frac{1}{d}} \right)\right\}_{\frac{p}{q} \in Q(N,\tau)}$ contained in $I$ that cover a positive proportion of $I$. iii) We then shrink each of these balls, i.e. $$B\left(\frac{p}{q}, 2q^{-1-\frac{1}{d}}\right) \mapsto B\left( \frac{p}{q}, q^{-1-\tau} \right)\, .$$ The collection of these shrunken balls completes the first layer. iv) We repeat the same process as above. That is, for each shrunken ball from the first layer we apply Lemma [Lemma 6](#I in B(t)){reference-type="ref" reference="I in B(t)"} to find a surviving cube $I'$ of ${\cal C}^{\tau}(N)$. We then choose suitable $G'>0$ and apply Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} to each surviving cube $I'$, and then shrink each of the balls in the set $T_{G',I'}$. This collection of balls then becomes the second layer. We continue inductively. Let us now present the full details of the construction. Fix a ball $B\subset [0,1]^{d}$. We construct ${\cal D}(B)$ as follows: 1.
Fix $N \in {\mathbb{N}}$ such that $$\label{N_choice} N \geq r(B)^{-\frac{d}{d+1}} \quad \& \quad 3t^{-d}\sum_{n=1}^{\infty} N^{\frac{(d+1)}{d}(n-\ell(n))}=\varepsilon_{N}< \frac{1}{4}\lambda_{d}(B)\, .$$ This ensures that $$\lambda_{d}(B\cap {\cal C}^{\tau}(N))\geq \frac{3}{4}\lambda_{d}(B).$$ Furthermore, assume $N$ is large enough such that there exists $I_{0}\in {\cal S}_{\ell(1)}$ with $I_{0} \subset B$. Set $E_{0}=I_{0}$.\ 2. Apply Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} to $I_{0}$ with $G_{0}\geq 1$ chosen sufficiently large such that $$\label{G bound} \left. \begin{array}{c} r(B_{i})<t^{-1}N^{-\ell(1)\frac{d+1}{d}} \\ 4r(B_{i}(\tau))<r(B_{i}) \end{array} \right\} \quad \forall \, \, i \geq G_{0}.$$ Let $T_{G_{0},I_{0}}$ denote the set of balls $B_{i}$ from Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"}.\ 3. *Shrinking:* For each $B_{i} \in T_{G_{0},I_{0}}$ shrink the balls $$B_{i}=B\left(\frac{p}{q},2q^{-\frac{d+1}{d}}\right) \mapsto B\left(\frac{p}{q},q^{-(1+\tau)}\right)=B_{i}(\tau),$$ and let the first layer $$E_{1}=T_{G_{0},I_{0}}^{\tau}=\left\{B_{i}(\tau):B_{i}\in T_{G_{0},I_{0}} \right\}.$$\ 4. *Induction:* Suppose $E_{n-1}$ has been constructed. For each $B_{i}(\tau)\in E_{n-1}$ we proceed as follows. 1. *nesting:* By Lemma [Lemma 6](#I in B(t)){reference-type="ref" reference="I in B(t)"} there exists some $I\in \bigcup_{n}{\cal S}_{\ell(n)}$, say $I(i)$, with $I(i)\subseteq B_{i}(\tau)$ and $$\lambda_{d}(B_{i}(\tau))\leq C(N,d,u,\tau)\lambda_{d}(I(i)).$$ For $I(i)$ related to $B_{i}(\tau)$ in this way write $I(i) \sim B_{i}(\tau)$.\ 2. *covering:* Choose $G_{n}> G_{n-1}$ such that $$4r(B_{j})<\min\{r(I(k)): I(k)\sim B_{k}(\tau)\in E_{n-1} \} \quad \forall \, \, j\geq G_{n}.$$ (Note that $r(B_j)\to 0$ with $j$.)
Again, note this is only a lower bound; $G_{n}$ can be chosen as large as we want and, indeed, we will increase it if necessary in the proof of Lemma [Lemma 8](#lem:msrballs){reference-type="ref" reference="lem:msrballs"}. Apply Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} to $I(i)$ with $G_{n}$ chosen as above. Let $T_{G_{n},I(i)}$ be the set of balls from Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"}.\ 3. *shrinking:* For each $B_{i} \in T_{G_{n},I(i)}$ shrink the ball and let $$T_{G_{n},I(i)}^{\tau}=\left\{ B_{j}(\tau): B_{j} \in T_{G_{n},I(i)} \right\}.$$ Define the $n$th layer to be $$E_{n}=\left\{ B_{j}(\tau) : B_{j}(\tau) \in \bigcup_{I(k)\sim B_{k}(\tau) \in E_{n-1}} T^{\tau}_{G_{n},I(k)} \right\}.$$ Let $${\cal D}(B)=\bigcap_{n\in {\mathbb{N}}} \bigcup_{B(\tau)\in E_{n}} B(\tau)\, .$$ This completes the construction of ${\cal D}(B)$.\ Note that ${\cal D}(B) \subset D_d(\psi_\tau, c_N\psi_\tau)\subset \mathbf{Bad}_d(\psi_\tau)$ since for any $\mathbf{x}\in {\cal D}(B)$ we have an associated sequence $$I_{0} \supset B_{i_{1}}(\tau) \supset I(i_{1}) \supset B_{i_{2}}(\tau) \supset I(i_{2}) \dots$$ such that $\mathbf{x}$ is contained in the limit set. So $\mathbf{x}\in {\cal C}^{\tau}(N) \cap \mathcal{W}_{d}(Q(N,\tau),\psi_{\tau}) \subset D_d(\psi_\tau, c_N\psi_\tau)\subset \mathbf{Bad}_{d}(\psi_{\tau})$. *Remark 7*. The methodology of the proof given here follows closely the usual proof of the Mass Transference Principle. The key additional step in the construction is the *nesting* step, which we require to ensure ${\cal D}(B)\subset {\cal C}^{\tau}(N)$, and replacing the $K_{G,B}$-lemma [@BV06 Lemma 5] by the $T_{G,I}$ Lemma [Lemma 7](#TGI){reference-type="ref" reference="TGI"} above. ## Constructing a measure on ${\cal D}(B)$ To obtain a lower bound on the Hausdorff dimension of ${\cal D}(B)$, and hence of $\mathbf{Bad}_d(\psi_\tau)$, we aim to use the mass distribution principle [@F14 Proposition 4.2].
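To recall the shape of such an argument, here is a toy illustration of the principle on the middle-thirds Cantor set (a classical example, not part of the construction above): the natural measure gives each of the $2^{n}$ level-$n$ construction intervals, of length $3^{-n}$, mass $2^{-n}$, so that $\mu(I)=|I|^{s}$ with $s=\log 2/\log 3$, and the mass distribution principle then yields $\dim_{H}\geq s$.

```python
import math

# Natural measure on the middle-thirds Cantor set: each of the 2^n level-n
# construction intervals (of length 3^{-n}) carries mass 2^{-n}.  With
# s = log 2 / log 3 this is exactly the Holder estimate mu(I) = |I|^s that
# the mass distribution principle converts into dim_H >= s.
s = math.log(2) / math.log(3)
for n in range(1, 16):
    mass, length = 2.0 ** -n, 3.0 ** -n
    assert abs(mass - length ** s) < 1e-12
```

In the present setting the estimate of Lemma 8 below plays the role of this Hölder bound, with exponent $s-\epsilon$ for $s=(d+1)/(1+\tau)$.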
Namely, we need to construct a measure $\mu$ supported on ${\cal D}(B)$ such that the measures of balls are well controlled. To that end, construct a mass distribution $\mu$ on ${\cal D}(B)$ as follows:\ Set $\mu(I_{0})=1$, then for each $B_{i}(\tau)\in E_{1}$ define $$\mu(B_{i}(\tau))=\frac{\lambda_{d}(B_{i})}{\sum_{B_{j} \in T_{G_{0},I_{0}}} \lambda_{d}(B_{j})}\times \mu(I_{0}).$$ Note $$\sum_{B_{i}(\tau) \in E_{1}}\mu(B_{i}(\tau))=\mu(I_{0}).$$ Now inductively suppose the mass has been distributed over balls in $E_{n-1}$. For each $B_{i}(\tau) \in E_{n}$ there exists, by construction, a unique $B_{k}(\tau) \in E_{n-1}$ such that $B_{i}(\tau)\subset B_{k}(\tau)$. Let $I(k)\sim B_{k}(\tau)$ and define $$\mu(B_{i}(\tau))= \frac{\lambda_{d}(B_{i})}{\sum_{B_{j} \in T_{G_{n},I(k)}} \lambda_{d}(B_{j})}\times \mu(B_{k}(\tau)).$$ Again the mass is preserved. By Proposition 1.7 of [@F14] this extends to a measure $\mu$ with support contained in ${\cal D}(B)$, and for a general subset $F\subset [0,1]^{d}$ $$\mu(F)=\inf\left\{ \sum_{i}\mu(B_{i}(\tau)): F\cap {\cal D}(B) \subset \bigcup_{i}B_{i}(\tau) \, \text{ with } \, B_{i}(\tau) \in \bigcup_{n}E_{n} \right\}.$$ Recall that $s=(1+d)/(1+\tau)$. **Lemma 8**. *Let $\epsilon>0$. Let $F$ be a ball contained in $B$. Then there is a choice of suitable $(G_n)$ in step 4 of the construction of ${\cal D}(B)$ such that $$\mu(F)\leq r(F)^{s-\epsilon}.$$* *Proof.* **Case 1: $B_{i}(\tau) \in E_{n}$.** Assume first that $F=B_{i}(\tau) \in E_{1}$, a ball of centre $\tfrac pq\in {\mathbb{Q}}^d$ and radius $q^{-(1+\tau)}$.
Then $$\begin{aligned} \mu(B_{i}(\tau))& =\frac{\lambda_{d}(B_{i})}{\sum_{B_{j} \in T_{G_{0},I_{0}}} \lambda_{d}(B_{j})}\times 1 \\ & \overset{\text{Lemma~\ref{TGI}}}{\leq} \lambda_{d}(B_{i}) \frac{1}{\kappa_{1}\lambda_d(I_{0})} \\ & \leq \frac{1}{\kappa_{1}\lambda_d(I_{0})} 2^{2d}q^{-(d+1)} \\ &= \frac{2^{2d}}{\kappa_{1}\lambda_d(I_{0})} q^{-(1+\tau)\frac{d+1}{1+\tau}} \\ &= \frac{2^{2d}}{\kappa_{1}\lambda_d(I_{0})} r(B_{i}(\tau))^{s}. \end{aligned}$$ For balls $B_{i}(\tau) \in E_{n}$ centred at some $\tfrac pq\in {\mathbb{Q}}^d$ with $n>1$ there exists $B_{k}(\tau) \in E_{n-1}$ with $B_{i}(\tau)\subset B_{k}(\tau)$. Then $$\begin{aligned} \mu(B_{i}(\tau))& =\frac{\lambda_{d}(B_{i})}{\sum_{B_{j} \in T_{G_{n},I(k)}} \lambda_{d}(B_{j})}\times \mu(B_{k}(\tau)) \\ & \overset{\text{Lemma~\ref{TGI}}}{\leq} \lambda_{d}(B_{i}) \frac{\mu(B_{k}(\tau))}{\kappa_{1}\lambda_d(I(k))} \\ & \leq \frac{\mu(B_{k}(\tau))}{\kappa_{1}\lambda_d(I(k))} 2^{2d}q^{-(d+1)} \\ &= \frac{2^{2d}}{\kappa_{1}} r(B_{i}(\tau))^{s} \frac{\mu(B_{k}(\tau))}{\lambda_d(I(k))}\\ &\overset{\text{Lemma~\ref{I in B(t)}}}{\leq} \frac{2^{2d}C(N,d,u,\tau)}{\kappa_{1}} r(B_{i}(\tau))^{s} \frac{\mu(B_{k}(\tau))}{\lambda_d(B_{k}(\tau))}. \end{aligned}$$ The value of $G_{n}$ can be chosen as large as we like, and hence for any $\epsilon>0$ we can suppose it was large enough to guarantee $$\label{assumption1} r(B_{i}(\tau))^{-\epsilon}>\frac{\mu(B_{k}(\tau))}{\lambda_d(B_{k}(\tau))}.$$ Note that this can be done in a uniform manner, since for any $B_i(\tau)\in E_n$ we have $i>G_n$ and the set $E_{n-1}$ is finite. Hence, $$\mu(B_{i}(\tau))\leq C'(N,d,u,\tau) r(B_{i}(\tau))^{s-\epsilon}.$$ Observe that the constant is independent of the layer from which $B_{i}(\tau)$ comes. **Case 2: $F$ a general ball.** Now fix $F$, a ball contained in $B$.
We prove the general statement that for every $\epsilon>0$ $$\label{general ball} \mu(F)\ll_{N,d,u,\tau} r(F)^{s-\epsilon}.$$ Firstly note that if $F\cap {\cal D}(B)=\emptyset$ then $\mu(F)=0$, so the statement is trivially satisfied. Hence we may assume $F\cap {\cal D}(B)\neq \emptyset$. Since $F$ has non-empty intersection with ${\cal D}(B)$, at least one ball in each layer $E_{n}$ must intersect $F$. If there is exactly one ball $B_{i_{n}}(\tau)$ per layer $E_{n}$ that intersects $F$, then note that $$\mu(B_{i_{n}}(\tau))\to 0 \quad \text{ as } \, n\to \infty,$$ and so [\[general ball\]](#general ball){reference-type="eqref" reference="general ball"} holds, since there exists some $n\in {\mathbb{N}}$ such that $$\mu(B_{i_{n}}(\tau))\ll r(F)^{s-\epsilon}.$$ Hence assume there exists $n_{F}\in {\mathbb{N}}$ such that at least two balls, say $B_{i}(\tau),B_{j}(\tau) \in E_{n_{F}}$, have non-empty intersection with $F$, and let $n_{F}$ be the first layer in which this happens. Note that $B_{i}(\tau),B_{j}(\tau)$ belong to the same ball $B_{k}(\tau) \in E_{n_{F}-1}$ (otherwise $F$ would intersect at least two balls of the $(n_{F}-1)$th layer, contradicting the choice of $n_{F}$). If $$r(F)\geq r(B_{k}(\tau)),$$ then $$\mu(F) \leq \underset{B_{i}(\tau)\cap F \neq \emptyset}{\sum_{B_{i}(\tau) \in E_{n_{F}-1} :}}\mu(B_{i}(\tau))=\mu(B_{k}(\tau)).$$ We can then use Case 1 of the proof to estimate $\mu(B_k(\tau))$ from above and obtain $$\mu(F) \leq C'(N,d,u,\tau) r(B_{k}(\tau))^{s-\epsilon} \leq C'(N,d,u,\tau) r(F)^{s-\epsilon},$$ as required. So assume $$\label{assumption2} r(F)< r(B_{k}(\tau)).$$ We have that $$\label{F count} \lambda_{d}(5F)\geq \sum_{B_{i}:B_{i}(\tau)\in {\cal F}} \lambda_{d}(B_{i}),$$ where $${\cal F}=\left\{B_{i}(\tau) \in T_{G_{n_{F}},I(k)}^{\tau}: B_{i}(\tau)\cap F \neq \emptyset\right\}.$$ For a justification of this statement see [@BV06 Lemma 7], setting $A=F$, $M=B_{i}(\tau)$ and $cM=B_{i}$.
Hence $$\begin{aligned} \mu(F)&\leq \sum_{B_{i}(\tau) \in {\cal F}} \mu(B_{i}(\tau)) \\ & \leq \sum_{B_{i}:B_{i}(\tau) \in {\cal F}} \frac{\lambda_{d}(B_{i})}{\sum_{B_{j}\in T_{G_{n_{F}},I(k)}}\lambda_{d}(B_{j})} \times \mu(B_{k}(\tau)) \\ & \overset{\text{Lemma~\ref{TGI}}}{\leq} \frac{1}{\kappa_{1}}\sum_{B_{i}:B_{i}(\tau)\in {\cal F}}\lambda_{d}(B_{i}) \times \frac{\mu(B_{k}(\tau))}{\lambda_{d}(I(k))}\\ &\overset{\text{Lemma~\ref{I in B(t)}}}{\leq} \frac{C(N,d,u,\tau)}{\kappa_{1}}\sum_{B_{i}:B_{i}(\tau)\in {\cal F}}\lambda_{d}(B_{i}) \times \frac{\mu(B_{k}(\tau))}{\lambda_{d}(B_{k}(\tau))} \\ & \overset{\eqref{F count}}{\leq} \frac{5^{d}C(N,d,u,\tau)}{\kappa_{1}} \lambda_{d}(F) \frac{\mu(B_{k}(\tau))}{\lambda_{d}(B_{k}(\tau))}. \end{aligned}$$ Finally, recalling that $d>s$, we continue with $$\begin{aligned} \mu(F)&\overset{(d>s)}{\leq} \frac{10^{d}C(N,d,u,\tau)}{\kappa_{1}} r(F)^{s} \frac{\mu(B_{k}(\tau))}{\lambda_{d}(B_{k}(\tau))}\\ &\leq \frac{10^{d}C(N,d,u,\tau)}{\kappa_{1}} r(F)^{s-\epsilon}, \end{aligned}$$ so [\[general ball\]](#general ball){reference-type="eqref" reference="general ball"} holds, where the last line follows from the observation that $$r(F)^{-\epsilon}\overset{\eqref{assumption2}}{>}r(B_{k}(\tau))^{-\epsilon}\overset{\eqref{assumption1}}{>} \frac{\mu(B_{k}(\tau))}{\lambda_{d}(B_{k}(\tau))}.$$ ◻ We can now finish the proof of Theorem [Theorem 2](#D-set){reference-type="ref" reference="D-set"}. By the mass distribution principle, see for example [@F14 Proposition 4.2], we have that $$\mathop{\mathrm{\dim_H}}{\cal D}(B) \geq s-\epsilon.$$ Since $\epsilon>0$ is arbitrary, the lower bound on the dimension in Theorem [Theorem 2](#D-set){reference-type="ref" reference="D-set"}, and thus Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, follows. [^1]: B. W. gratefully acknowledges support from the EPSRC research grant (EP/W522430/1), and Australian Research Council Discovery grant (No. 200100994).
[^2]: Indeed, this result of Bugeaud and Moreira holds not only for $\psi_\tau$ but also for general approximation functions $\psi$. We state this special case for simplicity, as it is the relevant statement for the purposes of the current work.
{ "id": "2310.01947", "title": "The dimension of the set of $\\psi$-badly approximable points in all\n ambient dimensions; on a question of Beresnevich and Velani", "authors": "Henna Koivusalo, Jason Levesley, Benjamin Ward, Xintian Zhang", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | This article solves Hume's problem of induction using a probabilistic approach. From the probabilistic perspective, the core task of induction is to estimate the probability of an event and judge the accuracy of the estimation. Following this principle, the article provides a method for calculating the confidence on a given confidence interval and, furthermore, the degree of confirmation. The law of large numbers shows that as the number of experiments tends to infinity, for any small confidence interval, the confidence approaches 100% in a probabilistic sense; thus Hume's problem of induction is solved. The foundation of this method is the existence of probability, or in other words, the identity of physical laws. The article points out that it cannot be guaranteed that all things possess identity, but humans only concern themselves with things that possess identity, and identity is built on the foundation of pragmatism. After solving Hume's problem, a novel demarcation of science is proposed, providing science with the legitimacy of being referred to as truth. author: - Xuezhi Yang, yangxuezhi\@hotmail.com bibliography: - IEEEfull.bib - philosophy.bib title: Solving the Problem of Induction --- # Introduction Hume's problem, named after the Scottish philosopher David Hume, is a philosophical conundrum pertaining to the rationality of induction and the foundation of science. ## What is Hume's Problem? Hume pointed out that inductive reasoning involves making predictions about future events based on past experiences. However, this kind of reasoning cannot logically ensure its accuracy because the future may differ from the past [@hume1739treatise]. Hume argued that no matter how many white swans we have observed, we cannot logically prove that "all swans are white" because future swans may have different colors.
This raises the question of why we should believe in the effectiveness of inductive reasoning, especially considering its widespread use in scientific research. The problem of induction is also illustrated vividly by the story of Russell's turkey. A turkey in a village observes the same thing for many days: a hand comes to feed it in the morning. For the turkey, this experience accumulates into such a strong pattern that it develops a high degree of confidence in the proposition "it will be fed every morning", until Thanksgiving arrives and shatters its belief [@russell1948human]. Closely related to induction is the concept of causality. Causality implies that one event occurs as a result of another. Hume argued that people tend to believe that if one event consistently follows another, the former is the cause of the latter. However, Hume contended that this view is not drawn through logical reasoning but rather is based on habit and psychological inclinations stemming from experience. Thus Hume believed that causality, which involves necessary connections between cause and effect, cannot be established through empirical observation or deduction. ## Responses to Hume's Problem in History ### Synthetic a priori To address Hume's problem, Kant argued in his famous work [@kant1998critique] that synthetic a priori knowledge is possible. Kant reversed the empiricist programme espoused by Hume and argued that experience only comes about through the 12 a priori categories of understanding, including the concept of causation. Causality becomes, in Kant's view, a priori and therefore universally necessary, thus sidestepping Hume's skepticism. However, if causality cannot be derived through induction, Kant's direct assertion of causality as an a priori category may seem even more arbitrary.
### Probabilistic Approaches The Bayes-Laplacian solution and Carnap's confirmation theory are probabilistic approaches to Hume's problem whose foundational ideas are essentially the same. Carnap's goal is to provide a more precise measure of confirmation for scientific theories to assess their credibility [@1950Logical]. The measure he introduced is $$\label{Bayes} P(h|e) = \frac{P(e|h) \cdot P(h)}{P(e)},$$ where $e$ is the observational evidence and $h$ is a hypothesis. A hypothesis can achieve a higher probability when supported by evidence, making it more confirmed. If the evidence is inconsistent with a hypothesis, the degree of confirmation will be lower. Eq. ([\[Bayes\]](#Bayes){reference-type="ref" reference="Bayes"}) is actually a conditional probability expressed in the Bayesian form. The Rule of Succession [@mood1974introduction] is a formula introduced by Laplace while working on the "sunrise problem". The question here is: if event $A$ occurred $N_A$ times in $N$ trials, how likely is it to happen in the next trial? Laplace considered this problem via the following random experiment of drawing balls, with replacement, from an urn: first a random probability $p$ is produced uniformly on the interval \[0,1\] and a proportion $p$ of the balls in the urn is set black; then balls are drawn. Event $A$ is that the drawn ball is black. In Carnap's language, the evidence $e$= "event $A$ occurred $N_A$ times in $N$ trials", and the hypothesis $h$= "event $A$ will happen next time"; then the degree of confirmation for $h$ based on $e$ is $P(h|e)$. Laplace worked out the result as $$\label{RuleofSuccession} P(h|e)=\frac{N_A+1}{N+2},$$ which is widely recognized because it matches people's expectation. For the sunrise problem, $N_A=N$, then $P(h|e)=(N+1)/(N+2)$, which is smaller than 1 and approaches 1 when $N$ approaches infinity.
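Laplace's urn model lends itself to a quick numerical check. The sketch below is a minimal Monte Carlo simulation (the function name and the choice $N=5$ are ours, purely illustrative): it draws the bias $p$ uniformly, conditions on observing $N$ successes in $N$ trials, and estimates the probability that the next trial also succeeds.

```python
import random

def rule_of_succession_mc(N, trials=200_000, seed=0):
    """Monte Carlo check of Laplace's urn model.

    Draw p ~ Uniform[0, 1]; among runs with N successes in N Bernoulli(p)
    trials, estimate the conditional probability of one further success."""
    rng = random.Random(seed)
    conditioned = further = 0
    for _ in range(trials):
        p = rng.random()  # the urn's random proportion of black balls
        if all(rng.random() < p for _ in range(N)):
            conditioned += 1             # evidence e observed
            further += rng.random() < p  # outcome of the (N+1)th draw
    return further / conditioned

N = 5
empirical = rule_of_succession_mc(N)
predicted = (N + 1) / (N + 2)  # Laplace's Rule of Succession
```

With enough samples the empirical frequency settles near $(N+1)/(N+2)=6/7\approx 0.857$, in agreement with the Rule of Succession.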
This matches exactly the expectation in people's minds that finite evidence does not confirm a hypothesis, but more evidence improves the degree of confirmation. Although promising and influential, Carnap's confirmation theory encountered a wide range of criticisms from scholars, among whom Popper, Quine and Putnam were the most prominent figures. The author will not elaborate on their controversies but will point out the essential problem with this approach. Carnap and Laplace's theory is based on a random experiment with an a priori probability distribution. But are we really doing such an experiment? If we were, it is easy to calculate that $e$ is an event with a very small probability. If the sun rises with a random probability $p$ which is uniformly distributed on \[0,1\], it would be very rare to see 100 successive sunrises. However, the sun has kept rising every day for millions of years. So the essential problem with this approach is that the model of the random experiment is wrong. That is why Carnap could not work out a clear and cogent theory of confirmation. ### Falsificationism Popper proposed "falsifiability" as the criterion demarcating science from pseudoscience [@popper1959logic]. In Popper's view, a scientific theory is a conjecture or hypothesis that can never be finally confirmed but can be falsified at any time. Propositions like "all swans are white" are not inductively derived but are conjectures. If all observed swans so far are white, the conjecture is provisionally accepted. However, once a black swan is discovered, the proposition is falsified. From a logical perspective, if a theory $h$ implies a specific observable statement $e$, verificationism asserts that "if $e$ then $h$", which commits the fallacy of affirming the consequent. On the other hand, falsificationism claims that "if $\neg e$, then $\neg h$", which is logically sound. This logical rigor has contributed to the widespread influence of falsificationism.
The idea that \"science can only be falsified, not verified\" has become something of a doctrine. However, falsificationism has its problems, with a major critique coming from Quine's holism[@1980From]. Holism suggests that when scientists design an experiment to test a scientific theory, they rely on various experimental equipments, which themselves embody a set of theories. For example, observing the trajectories of planets requires telescopes, which involve optical theories. In Popper's view, these theories used in experiments are treated as background knowledge and assumed to be correct. But from Quine's perspective, background knowledge can be wrong. If experimental results don't agree with theoretical predictions, it might be due to problems with the experimental equipment rather than the theory. Thus, using experiments to falsify a theory becomes problematic. In addition to the critique from holism, falsificationism has deeper weaknesses: how do you prove counterexamples are true? For instance, to falsify the proposition \"all swans are white\", you only need to find one black swan. Assume you observed a black swan, how can you be sure that the statement \"this is a black swan\" is true? Popper didn't realize the correctness of this statement relies on the universal proposition \"my senses are always correct\", therefore we have no way to evade Hume and ultimately have to face the problem of induction. This article uses modern probability theory to address the problem of induction and provides a method for calculating the confidence of a proposition over a given confidence interval, thereby solving the Hume's problem. The article emphasizes that identity is the foundation of this solution. Identity cannot be proven but is established on the basis of pragmatism. After addressing the problem of induction, the article proposes a novel demarcation criteria of science based on probabilistic verification. 
# Solving Hume's Problem Carnap attempted to solve Hume's problem using a probabilistic approach, which was a step in the right direction. Unfortunately, he failed to establish a viable inductive logic because he got the problem wrong in the first place. ## The Concept of Probability From a purely mathematical perspective, probability theory is an axiomatic system. In 1933, the Soviet mathematician Kolmogorov first provided the measure-theoretic definition of probability [@kolmogorov1933foundations]. Axiomatic Definition of Probability: Let $\Omega$ be the sample space of a random experiment and assign a real number $P(A)$ to every event $A$. $P(A)$ is the probability of event $A$ if $P(\cdot)$ satisfies the following properties: - Non-negativity: $\forall A, P(A)\geq 0$; - Normalization: $P(\Omega)=1$; - Countable Additivity: If $A_m\bigcap A_n=\emptyset$ for $m\neq n$, $P\left(\bigcup\limits_{n=1}^{+\infty}A_n\right)=\sum\limits_{n=1}^{+\infty}P(A_n)$. Although the debate on the interpretation of probability is still going on in philosophy [@sepprobabilityinterpret], probability theory has evolved into a rigorous branch of mathematics and has served as the foundation of information theory, which has been guiding the development of modern communication ever since its birth. When using the language of probability, a universal proposition is transformed into a statement about the probability of a singular proposition. For example, "all swans are white" is expressed in probability as "the probability that a swan is white is 100%". The central task of induction from the probability perspective is to estimate this probability and evaluate the accuracy of the estimation. ## How to estimate a probability? If the evidence $e$ is "event $A$ occurs $N_A$ times in an $N$-fold random experiment", our task is to estimate the probability of event $A$, which is $p$. The maximum likelihood algorithm is a well-accepted criterion for optimal estimation.
In this criterion, we choose the estimate that maximizes the probability of $e$, which is expressed as $$P(e)=\begin{pmatrix} N \\ N_A \end{pmatrix}p^{N_A}(1-p)^{N-N_A}.$$ Setting $$\frac{dP(e)}{dp}=0,$$ we get the maximum likelihood estimate of $p$, denoted as $$\hat{p}=\frac{N_A}{N}.$$ To assess the accuracy of this estimate, we need the concept of confidence on a confidence interval. ## Confidence on Confidence Interval Because $p$ is a real number, there is essentially no chance that the estimate is exactly equal to $p$. Therefore, confidence should be defined on a confidence interval with non-zero width. Confidence is the probability that the true value falls within this interval. From the perspective of maximum likelihood estimation, the optimization goal of this problem should be: given the width of the confidence interval, find a confidence interval that maximizes the confidence. Earlier, we obtained the maximum likelihood estimate of $p$ as $N_A/N$. In this paper, we simplify this problem by setting the confidence interval as $D$, so that $N_A/N \in D$. To calculate the confidence, we first need to make the assumption that, in the absence of any observed facts, $p$ is uniformly distributed on $[0, 1]$. Under this assumption, for each possible probability, we calculate the probability of $e$ to form a curve. The ratio of the area under the curve within $D$ to the total area under the curve is the probability that the true value falls within $D$, which is the confidence. Then, the confidence is given by $$c=\frac{\int_{D}x^{N_A}(1-x)^{N-N_A}dx}{\int_0^1 x^{N_A}(1-x)^{N-N_A}dx}.$$ Although the uniform prior over $p$ is the same as the Principle of Indifference in the Rule of Succession, the ideology is different. Unlike Laplace's solution, the random experiment in this approach is that event $A$ happens with a fixed probability rather than a random one. We simply do not know the true value of $p$. The task here is to use the evidence $e$ to estimate $p$.
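The confidence $c$ is a ratio of two polynomial integrals and is straightforward to evaluate numerically. The following sketch (function names are ours, purely illustrative; a composite Simpson-rule quadrature using only the standard library) computes it for the all-white-swan case $N_A=N$:

```python
def likelihood(x, N, N_A):
    """Unnormalised likelihood x^{N_A} (1 - x)^{N - N_A} of the evidence e."""
    return x ** N_A * (1 - x) ** (N - N_A)

def integrate(f, a, b, steps=10_000):
    """Composite Simpson's rule on [a, b]; `steps` must be even."""
    h = (b - a) / steps
    total = f(a) + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def confidence(N, N_A, lo, hi):
    """Confidence that p lies in [lo, hi], under a uniform prior on p."""
    num = integrate(lambda x: likelihood(x, N, N_A), lo, hi)
    den = integrate(lambda x: likelihood(x, N, N_A), 0.0, 1.0)
    return num / den

# All N observed swans white, confidence interval D = [0.9, 1]:
print(round(confidence(10, 10, 0.9, 1.0), 3))  # 0.686, i.e. 1 - 0.9**11
print(round(confidence(30, 30, 0.9, 1.0), 3))  # 0.962, i.e. 1 - 0.9**31
```

For $N_A=N$ and $D=[0.9,1]$ the ratio reduces to the closed form $c=1-0.9^{N+1}$, which the quadrature reproduces.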
The uniform prior is used here to define confidence, which is a subjective criterion of evaluation. In contrast, it is constitutive of the random experiment in Laplace's approach [@sepinductionproblem]. Let us take a simple example. Based on the fact that "all $N$ swans were observed to be white", calculate the confidence of "the probability of a white swan being greater than 90%", where the confidence interval is $[0.9, 1]$. The expression for this confidence is $$c=\frac{\int_{0.9}^1x^Ndx}{\int_0^1 x^Ndx}=1-0.9^{N+1}.$$ ![Confidence on $[0.9, 1]$ with respect to $N$](crdence.png){#whihtegoose width="3in"} As shown in Fig. 1, when $N=10$ the confidence is approximately 68.6%, and when $N=30$ the confidence increases to 96.2%. If the confidence interval is reduced to $[0.99, 1]$, then $N=300$ is required to achieve a confidence of 95.1%. With the concepts of confidence and confidence interval, we can respond to Russell's turkey. After two months of feeding by the farmer, the conclusion drawn by this turkey should have been "the confidence of 'the probability of feeding is greater than 99%' is $1- 0.99^{61}=45.8\%$", so it is not surprising that it was killed by the farmer on Thanksgiving. Besides, for a cow who has lived on the farm for 10 years, it is normal to conclude that a turkey has a high chance of being slaughtered on Thanksgiving. A person around 30 years old who has seen the sun rise 10000 times can conclude that the confidence of "the probability of sunrise is greater than 99.9%" is $1-0.999^{10000}=99.995\%$. If he also believes the historical records that humans have seen the sun rise for over a million years, then the confidence interval can be narrowed further while the confidence gets even closer to 100%. ## The Law of Large Numbers The law of large numbers is the fundamental law of probability theory, which comes in various forms, such as the Bernoulli, Khinchin and Chebyshev laws of large numbers, and the strong law of large numbers.
Let us take a look at the most fundamental form, Bernoulli's law of large numbers. Bernoulli's Law of Large Numbers: If a random experiment is conducted $N$ times, event $A$ occurs $N_A$ times, and $p$ is the probability of $A$, then for any $\varepsilon > 0$ $$\displaystyle \lim_{N\rightarrow \infty}P\left(\left|\frac{N_A}{N}-p\right|< \varepsilon\right)=1.$$ The proof of this theorem can be found in any textbook on probability theory. Bernoulli's law of large numbers states that when the number of trials is sufficiently large, the value of $N_A/N$ approaches $p$ in probability. From another perspective, $P\left(\left|N_A/N-p\right|< \varepsilon\right)$ is the confidence of $p$ being located in the confidence interval $[N_A/N-\varepsilon,N_A/N+\varepsilon]$. Bernoulli's law of large numbers thus states that for any small confidence interval, when $N$ is large enough, the confidence can be made arbitrarily close to 1. ## Degree of Confirmation We have discussed the concept of confidence on a confidence interval. If the two parameters still seem too complicated, we can further simplify them to one parameter. Suppose the width of the confidence interval is $d$; then we can introduce a metric, the degree of confirmation, as $$C=\max_d (1-d)\cdot c.$$ For the sunrise problem, $N_A=N$, and the confidence on $[x,1]$ is $1-x^{N+1}$. Writing the interval width as $d=1-x$, we maximize $x(1-x^{N+1})$ over $x$; setting the derivative to zero gives $x=(N+2)^{-\frac{1}{N+1}}$, whence $$C=\frac{1}{\sqrt[N+1]{N+2}}\frac{N+1}{N+2}.$$ When $N=2$, $C=0.47247$, and when $N=10000$, $C=0.9990$, while the results of the Rule of Succession are 0.75 and 0.9999, respectively. Though it is of little meaning to compare the detailed numbers, the confirmation provided by this solution is more conservative than that of the Rule of Succession. # Identity The basis for solving Hume's problem is the existence of probability. So let us further ask: does probability exist? This involves the concept of identity. The so-called identity means invariance.
Only things with identity can humans understand and use the acquired knowledge about them to guide practice. If a thing changes after people understand it, then the previous understanding becomes useless. We acknowledge that there are things in the real world that do not have identity. For example, a shooting star in the night sky disappears after a few seconds of brilliance. You see this scene and tell your friend, "Look, a meteor!". When your friend looks up, it is already gone. Although your friend may not think you are lying, he has no evidence to confirm your claim because this phenomenon is non-repeatable, and no one will care anymore after its disappearance. What humans are concerned about are things with identity. For example, if you see an apple in front of you, you look at it once, you look at it again, you look at it ten times, and it will keep being an apple in your eyes. If you let your friend look at it, he will also see an apple. So, the existence of the apple has identity. For this apple, you can say to your friend, "Do you want to eat this apple?", and your friend might say, "That's great!" and then pick it up and eat it. Humans are able to communicate and cooperate on objects with identity. Hume's questioning of induction and causality is essentially a negation of identity. Russell's turkey was killed on Thanksgiving, so how can we guarantee that the apple in front of us will not suddenly turn into a rabbit? We successfully responded to this challenge using a probabilistic approach. That is to say, our assumption of identity is not that the apple has been and will always be an apple, but rather that there is a probability that it is an apple, without limiting this probability to 100%; this preserves the possibility that "an apple suddenly becomes a rabbit" and avoids dogmatism to the utmost extent. The notion of identity assumes there is an invariant probability, which is the basis for our argument for all other propositions.
It is a presupposition and cannot be proven because there are no more fundamental propositions to prove it. # Pragmatism is the Foundation of Identity If $p$ exists, then according to the law of large numbers, $N_A/N$ approaches $p$ once $N$ is sufficiently large. If we conduct experiments, such as flipping a coin, we do observe a phenomenon where the ratio of the number of heads ($N_A$) to the total number ($N$) gradually approaches a constant value. However, this does not prove the existence of probability. According to Hume's query, although $N_A/N$ tends to a constant value, this is only the case for a finite number of trials and cannot be extrapolated to a conclusion about infinity. This is also the reason why we say identity cannot be proven. Let us do a thought experiment. Suppose Descartes' demon wants to subvert our belief in identity and manipulates our experiment of flipping coins. The demon's purpose is to prevent $N_A/N$ from having a limit by letting $N_A/N$ oscillate with constant amplitude. The demon first makes the coin flip normally, so $N_A/N$ will be around 0.5; then it manipulates the probability of heads to be 0.6 until $N_A/N$ reaches 0.55, then manipulates the probability of heads to be 0.4 until $N_A/N$ reaches 0.45, and cycles like this. Then $N_A/N$ oscillates between 0.45 and 0.55 without convergence, meaning there is no limit for $N_A/N$. So can the demon's approach overturn our belief in the existence of probability? It cannot. Because $N$ increases with each cycle, changing the overall statistical characteristics in the next cycle requires more trials, which results in an exponential increase in the number of trials per cycle. At the beginning, due to the short cycles, the results are not reliable, so these experimental results cannot guide human activities and no one would care.
Later on, as the cycles become longer, those who make short-term predictions will find that the current probability is 0.6 or 0.4, and they can arrange their practical activities based on this result and achieve success. It cannot be ruled out that all the physical laws currently mastered by humans are only invariants arranged by the demon within one cycle. For long-term observers, it is easy to detect the oscillation pattern of $N_A/N$, which is identity in a broader sense. This is actually the same as how we handle our daily lives. We believe an apple remains unchanged in the short term, and this identity can guide short-term activities, such as dealing with questions like "Do you want to eat the apple?". But over a long cycle, apples go through the process of freshness, loss of luster, wrinkling, decay and blackening. Although the apple has changed, the process of change follows the same pattern and remains unchanged, which is a long-term identity. Based on this identity, humans can derive the most favorable principles of action for themselves, such as eating apples before they become stale. Of course, the demon can also come up with more complex ways to subvert the assumption of identity, but this is not of much use. In the face of a constantly changing world, humans always extract those unchanging characteristics, identify their patterns, and use them to guide their practical activities. For those changes that cannot be mastered, the human approach is to ignore them and let them go. So, the assumption of identity is based on pragmatism, which is the final foundation of human knowledge [@Yang2023]. # Demarcation of Science The demarcation problem is a fundamental issue in the philosophy of science. The most influential demarcation criterion is Popper's falsification principle. According to Popper's theory, a proposition is scientific if it is falsifiable.
The problem with Popper's standard is that it is so loose that many nonsensical propositions are also considered scientific. Astrology, for example, which Popper rightly took as an example of pseudoscience, DOES have falsifiability, and has in fact been tested and refuted. Popper's falsification principle was a stopgap measure before Hume's problem was solved; now that it is solved, we can work out a precise definition of science. ![Definition of science](scienceE.png){#science width="3in"} The definition of science is shown in Figure [2](#science){reference-type="ref" reference="science"}. The left half plane represents the part of the world with identity, while the right half plane represents the subject, which is human. The third quadrant contains the miscellaneous phenomena that trigger our sensations and perceptions, while behind the phenomena are the physical laws, located in the second quadrant. Plato believed that there exists a world of ideas, and objects in the real world are copies of ideas. In our model, there exists a world of physical laws. Unlike Plato's ideas, physical laws are not blueprints of phenomena; rather, they constrain and regulate them. Popper believed scientific theories are conjectures of scientists about what the laws are, and these lie in the first quadrant. Induction is not a logical method but a way of conjecture. The results of conjecture appear in the form of axioms, becoming the starting point of reasoning. Conjectures should obey Occam's Razor, also known as the economic principle of thinking, by cutting off their unverifiable parts. Axioms are usually universal propositions and cannot be directly verified. It is then necessary to combine actual scenarios to deduce propositions that can be empirically confirmed or falsified. For example, we can conjecture that "the sun rises every day" and use this proposition as an axiom.
This axiom cannot be directly verified; verifiable propositions must be derived from it through logic, such as "the sun rose yesterday", "the sun will rise tomorrow", "the sun will rise the day after tomorrow", and so on. If all of a sufficiently large number of propositions derived from this axiom are verified to be correct, then "the sun rises every day" is confirmed with a high degree of confirmation. Newton's laws, the theory of relativity, and quantum mechanics are judged by the same standard. For example, Newton's laws have many application scenarios, such as free-falling bodies and the trajectories of cannonballs and planets. In each scenario, a series of verifiable propositions is derived; if these propositions are experimentally confirmed, then Newton's laws are verified with a high degree of confirmation in these scenarios. In scenarios in which Newton's laws are yet to be verified, we can conjecture that they will still be valid. Experiments have shown that Newton's laws do not hold in scenarios close to the speed of light or at the atomic scale, so they are falsified in those scenarios; but that does not affect the conclusions in low-speed macroscopic scenarios. Once a theory is confirmed with a high degree of confirmation in a scenario, it is only theoretically possible and practically impossible to falsify it again in that scenario. Therefore, Newton's laws deserve the title of TRUTH in confirmed scenarios. From the above discussion, we can summarize the four elements of science: Conjecture, Logic, empirical Verification, and Economics. The use of probabilistic methods to address Hume's problem is reflected in the verification part. This standard of demarcation is in fact a revival and development of logical positivism (logical empiricism), and it is much stricter than the falsification criterion.
Probabilistic verification requires a large number of experiments to obtain a high degree of confirmation, so nothing established by mere luck can pass the verification criterion. In addition, according to this standard, not only are Newton's laws, the theory of relativity, and quantum mechanics science, but everyday common-sense propositions such as "the sun rises every day" and "apples are edible" are science as well. In this way, the notion of science enters the everyday lives of ordinary people and can play a role in improving the scientific literacy of the general public.

# Conclusions

Hume's problem is a fundamental problem that runs through epistemology. Kant and Popper were unable to solve it and adopted an evasive attitude. Carnap's attempt was an endeavor in the right direction, but it ended in failure. The goal achieved in this article is exactly what Carnap attempted to achieve. For three hundred years, Hume's problem has been a dark cloud over science, relegating science to the position of "can never be confirmed and waiting to be falsified". With Hume's problem solved, science is confirmed to deserve the title of truth in a probabilistic sense.
--- abstract: | An irreducible polynomial $f\in\Bbb F_q[X]$ of degree $n$ is *normal* over $\Bbb F_q$ if and only if its roots $r, r^q,\dots,r^{q^{n-1}}$ satisfy the condition $\Delta_n(r, r^q,\dots,r^{q^{n-1}})\ne 0$, where $\Delta_n(X_0,\dots,X_{n-1})$ is the $n\times n$ circulant determinant. By finding a suitable *symmetrization* of $\Delta_n$ (a multiple of $\Delta_n$ which is symmetric in $X_0,\dots,X_{n-1}$), we obtain a condition on the coefficients of $f$ that is sufficient for $f$ to be normal. This approach works well for $n\le 5$ but encounters computational difficulties when $n\ge 6$. In the present paper, we consider irreducible polynomials of the form $f=X^n+X^{n-1}+a\in\Bbb F_q[X]$. For $n=6$ and $7$, by an indirect method, we are able to find simple conditions on $a$ that are sufficient for $f$ to be normal. In a more general context, we also explore the normal polynomials of a finite Galois extension through the irreducible characters of the Galois group. address: - Department of Mathematics, SUNY Geneseo, Geneseo, NY 14454 - Department of Mathematics, Dartmouth College, Hanover, NH 03755 - Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620 - Department of Mathematics and Statistics, Boston University, Boston, MA 02215 - Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620 author: - Darien Connolly - Calvin George - Xiang-dong Hou - Adam Madro - Vincenzo Pallozzi Lavorante title: An approach to normal polynomials through symmetrization and symmetric reduction \* --- [^1]

# Introduction

Let $K$ be a finite Galois extension over $F$ with Galois group $G=\text{Aut}(K/F)$. An element $\alpha\in K$ is said to be *normal* over $F$ (with respect to $K$) if $\{\sigma(\alpha):\sigma\in G\}$ forms a basis of $K$ over $F$; in this case, $\{\sigma(\alpha):\sigma\in G\}$ is called a *normal basis* of $K$ over $F$. Normal bases of finite Galois extensions always exist [@Lang-2002 §VI.13].
The notion of normal bases appears in the context of Galois modules and leads to a subtle refinement in algebraic number theory [@Frohlich-1983; @Snaith-1994]. If $\alpha\in K$ is normal over $F$ (with respect to $K$) and $f$ is its minimal polynomial over $F$, then $\alpha$ is separable over $F$, $F(\alpha)\subset K$, and $[F(\alpha):F]=\deg f=|G|=[K:F]$. It follows that $K=F(\alpha)$. Therefore, one may speak of a separable element $\alpha$ over $F$ being normal over $F$ without mentioning $K$. The minimal polynomial of a normal element over $F$ is called a *normal polynomial* over $F$. Thus, a normal polynomial over $F$ is a monic separable irreducible polynomial whose splitting field over $F$ is generated by one of its roots and whose roots are linearly independent over $F$. When $F$ is a finite field $\Bbb F_q$, a normal polynomial over $\Bbb F_q$ is simply a monic irreducible polynomial whose roots are linearly independent over $\Bbb F_q$. Normal bases and normal polynomials over finite fields have applications in many areas and they constitute a well-studied topic in the theory of finite fields [@Gao-thesis-1993; @Hou-DCC-2018; @Lenstra-Schoof-MC-1987; @Masuda-Moura-Panario-Thomson-IEEE-C-2008], [@Mullen-Panario-HF-2013 §§5.2 -- 5.4], [@Mullin-Onyszchuk-Vanstone-Wilson-DAM-1988; @Pei-Wang-Omura-IEEE-IT-1986; @Perlis-DMJ-1942]. The number of normal polynomials of a given degree over $\Bbb F_q$ is known [@Lidl-Niederreiter-FF-1997 Theorem 3.73].
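Normality of an irreducible polynomial over a finite field amounts to the linear independence of its conjugate roots, which for small cases can be tested directly on a computer: representing $\Bbb F_{p^n}$ as $\Bbb F_p[X]/(f)$, the conjugate roots are $X^{p^j}\bmod f$, and one checks that the matrix of their coefficient vectors is invertible mod $p$. The following minimal sketch in pure Python is our own illustration (the function names are not from any of the cited sources), for a monic irreducible $f$ over a prime field $\Bbb F_p$.

```python
def polymod(a, f, p):
    """Reduce a (coeff list, lowest degree first) modulo monic f over F_p."""
    a = [c % p for c in a]
    n = len(f) - 1
    while len(a) > n:
        c = a.pop()                      # leading coefficient of a
        if c:
            for i in range(n):           # subtract c * x^(deg a - n) * f
                a[len(a) - n + i] = (a[len(a) - n + i] - c * f[i]) % p
    return a + [0] * (n - len(a))

def polymulmod(a, b, f, p):
    """Product of a and b in F_p[X]/(f)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return polymod(res, f, p)

def powmod(a, e, f, p):
    """a^e in F_p[X]/(f) by square-and-multiply."""
    result = [1] + [0] * (len(f) - 2)
    while e:
        if e & 1:
            result = polymulmod(result, a, f, p)
        a = polymulmod(a, a, f, p)
        e >>= 1
    return result

def det_mod(M, p):
    """Determinant of a square matrix over F_p (p prime), Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] % p), None)
        if piv is None:
            return 0
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det = det * M[col][col] % p
        inv = pow(M[col][col], p - 2, p)   # inverse of the pivot mod p
        for r in range(col + 1, n):
            t = M[r][col] * inv % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - t * M[col][c]) % p
    return det % p

def is_normal(f, p):
    """f: monic irreducible over F_p, coeffs lowest degree first, deg f >= 2.
    The conjugate roots are X^(p^j) mod f; f is normal iff they are
    linearly independent over F_p."""
    n = len(f) - 1
    r = [0, 1] + [0] * (n - 2)             # the root r = X mod f
    rows = []
    for _ in range(n):
        rows.append(r)
        r = powmod(r, p, f, p)             # next Frobenius image
    return det_mod(rows, p) != 0

print(is_normal([1, 1, 1], 2), is_normal([1, 0, 1], 3))
```

For example, $X^2+X+1$ is normal over $\Bbb F_2$, while $X^2+1$ is irreducible over $\Bbb F_3$ but not normal, since its two roots sum to $0$ (zero trace) and are therefore linearly dependent.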
For an irreducible polynomial $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ with roots $r_0=r,r_1=r^q,\dots,r_{n-1}=r^{q^{n-1}}$, $f$ is normal over $\Bbb F_q$ if and only if $$\Delta_n(r_0,\dots,r_{n-1})\ne0,$$ where $$\Delta_n(X_0,\dots,X_{n-1})=\det\left[ \begin{matrix} X_0&X_1&\cdots&X_{n-1}\cr X_{n-1}&X_0&\cdots&X_{n-2}\cr \vdots&\vdots&\ddots&\vdots\cr X_1&X_2&\cdots&X_0 \end{matrix}\right]\in\Bbb Z[X_0,\dots,X_{n-1}].$$ However, it is more desirable to tell whether $f$ is normal from its coefficients $a_1,\dots,a_n$. This question was considered in [@Hou-preprint] with a naive approach which we briefly describe below. The approach in [@Hou-preprint] is based on the notions of symmetrization and symmetric reduction. Given a polynomial $f(X_0,\dots,X_{n-1})\in\Bbb Z[X_0,\dots,X_{n-1}]$, a *symmetrization* of $f$ is a multiple of $f$ in $\Bbb Z[X_0,\dots,X_{n-1}]$ which is symmetric in $X_0,\dots,X_{n-1}$. Given a symmetric polynomial $F(X_0,\dots,X_{n-1})\in\Bbb Z[X_0,\dots,X_{n-1}]$, its *symmetric reduction* is the polynomial $H\in \Bbb Z[s_1,\dots,s_n]$ such that $$F(X_0,\dots,X_{n-1})=H(s_1,\dots,s_n),$$ where $s_i$ is the $i$th elementary symmetric polynomial in $X_0,\dots,X_{n-1}$. (Note: The coefficient ring $\Bbb Z$ here can be replaced with any commutative ring.) Let $\epsilon_n=e^{2\pi\sqrt{-1}/n}$. The polynomial $\Delta_n$ affords a factorization $$\Delta_n(X_0,\dots,X_{n-1})=\prod_{m\mid n}\Psi_m(Y_{n0},\dots,Y_{n,m-1}),$$ where $$\label{Psi} \Psi_n(X_0,\dots,X_{n-1})=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_j\Bigr)\in\Bbb Z[X_0,\dots,X_{n-1}]$$ and $$Y_{ni}=\sum_{j=0}^{n/m-1}X_{i+mj},\quad 0\le i\le m-1.$$ Let the symmetric group $S_n$ act on $\Bbb Z[X_0,\dots,X_{n-1}]$ by permuting the indeterminates $X_0,\dots,X_{n-1}$. The stabilizer of $\Psi_m(Y_{n0},\dots,Y_{n,m-1})\in\Bbb Z[X_0,\dots,X_{n-1}]$ in $S_n$, denoted by $\text{Stab}(\Psi_m(Y_{n0},\dots,Y_{n,m-1}))$, is determined.
Consequently, $$\label{Theta} \Theta_{n,m}(X_0,\dots,X_{n-1}):=\prod_\phi\phi(\Psi_m(Y_{n0},\dots,Y_{n,m-1})),$$ where $\phi$ runs over a left transversal of $\text{Stab}(\Psi_m(Y_{n0},\dots,Y_{n,m-1}))$ in $S_n$ (i.e., a system of representatives of the left cosets of $\text{Stab}(\Psi_m(Y_{n0},\dots,Y_{n,m-1}))$ in $S_n$), is a symmetrization of $\Psi_m(Y_{n0},\dots,Y_{n,m-1})$. The explicit formula for $\Theta_{n,m}$ is given in [@Hou-preprint]. Let $\theta_{n,m}\in\Bbb Z[s_1,\dots,s_n]$ be the symmetric reduction of $\Theta_{n,m}$, that is, $$\label{theta} \Theta_{n,m}(X_0,\dots,X_{n-1})=\theta_{n,m}(s_1,\dots,s_n).$$ If $p$ is prime, then $\theta_{n,p^tl}$ is a power of $\theta_{n,l}$ in characteristic $p$ ([@Hou-preprint Remark 4.2]). The main results of [@Hou-preprint] can be summarized as follows: **Theorem 1**. *[@Hou-preprint Theorem 4.1] Let $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ be irreducible, where $p=\text{\rm char}\,\Bbb F_q$. If $\theta_{n,m}(a_1,\dots,a_n)\ne 0$ for all $m\mid n$ with $p\nmid m$, then $f$ is normal over $\Bbb F_q$.* To recap, $\Theta_{n,m}$ are the symmetrizations of the factors of $\Delta_n$; $\theta_{n,m}$ is the symmetric reduction of $\Theta_{n,m}$. While $\Theta_{n,m}$ is explicitly given, $\theta_{n,m}$ is difficult to compute. The polynomials $\theta_{n,m}$ give rise to the sufficient condition that we seek for the normality of polynomials. The usefulness of Theorem [Theorem 1](#T1.1){reference-type="ref" reference="T1.1"} depends on whether the polynomials $\theta_{n,m}$ (or their images in characteristic $p$) can be computed. In [@Hou-preprint], $\theta_{n,m}$ were computed for all $m\mid n$ with $n\le 6$ except for $\theta_{6,6}$. Hence Theorem [Theorem 1](#T1.1){reference-type="ref" reference="T1.1"} can be used to detect normality of irreducible polynomials of degree $\le 5$ and, in the cases $p=2$ or $3$, also of degree $6$.
However, for larger $n$ and $m$, the computation of the polynomials $\theta_{n,m}$, such as $\theta_{6,6}$ and $\theta_{7,7}$, becomes impractical as we will see in the next section. If $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ is normal over $\Bbb F_q$, then $a_1\ne 0$. Since $f(a_1X)=a_1^n(X^n+X^{n-1}+\cdots+a_n')$, we may assume without loss of generality that $a_1=1$. In the present paper, we consider irreducible polynomials of the form $$\label{f} f=X^n+X^{n-1}+a\in\Bbb F_q[X]$$ and we search for sufficient conditions on $a$ such that $f$ is normal over $\Bbb F_q$. From Theorem [Theorem 1](#T1.1){reference-type="ref" reference="T1.1"}, we have **Theorem 2**. *Assume that $f=X^n+X^{n-1}+a\in\Bbb F_q[X]$ is irreducible, where $p=\text{\rm char}\,\Bbb F_q$. If $\theta_{n,m}(1,0,\dots,0,a)\ne0$ for all $m\mid n$ with $m>1$ and $p\nmid m$, then $f$ is normal over $\Bbb F_q$.* In Theorem [Theorem 2](#T1.2){reference-type="ref" reference="T1.2"}, there is no need to consider $m=1$ since in general, $\theta_{n,1}(s_1,\dots,s_n)=s_1$, and hence $\theta_{n,1}(1,0,\dots,0,a)=1\ne0$. In Theorem [Theorem 2](#T1.2){reference-type="ref" reference="T1.2"}, $\theta_{n,m}(1,0,\dots,0,a)$ is a polynomial in $a$ with coefficients in $\Bbb F_p$. The usefulness of Theorem [Theorem 2](#T1.2){reference-type="ref" reference="T1.2"} of course depends on our ability to compute $\theta_{n,m}(1,0,\dots,0,a)$. Fortunately, there is an indirect method that allows us to compute $\theta_{n,m}(1,0,\dots,0,a)$ without having to compute $\theta_{n,m}(s_1,\dots,s_n)$ in general. We will describe this method in the next section. The polynomials $\theta_{6,6}(1,0,\dots,0,a)$ and $\theta_{7,7}(1,0,\dots,0,a)$ in $\Bbb F_p[a]$ for various primes $p$ are computed in Sections 3 and 4, respectively.
These results give rise to practical criteria for an irreducible polynomial $f=X^n+X^{n-1}+a\in\Bbb F_q[X]$ ($n=6,7$) to be normal over $\Bbb F_q$; see Theorems [Theorem 5](#T3.1){reference-type="ref" reference="T3.1"} and [Theorem 6](#T4.1){reference-type="ref" reference="T4.1"}. In Section 5, we discuss normal polynomials of an arbitrary finite Galois extension with Galois group $G$. Basic facts about normal polynomials over finite fields extend naturally to the general situation. The determinant $\Delta_n$ played a central role for normal polynomials over finite fields. In the general case, $\Delta_n$ is replaced by the *group determinant* $\mathcal D_G$ of $G$. (In fact, $\Delta_n$ is the group determinant of the cyclic group $\Bbb Z/n\Bbb Z$.) The factors of $\mathcal D_G$ are given by the irreducible characters of $G$. When $G$ is small, the symmetrizations of the factors of $\mathcal D_G$ can be determined, and in some cases, the symmetric reductions of these symmetrizations can be computed. In such cases, we have criteria for the normality of polynomials in terms of their coefficients similar to the case of finite fields. # An Indirect Method for Symmetric Reduction The main challenge that we face is the computation of the symmetric reduction $\theta_{n,m}$ (defined in [\[theta\]](#theta){reference-type="eqref" reference="theta"}) of the symmetric polynomial $\Theta_{n,m}$ (defined in [\[Theta\]](#Theta){reference-type="eqref" reference="Theta"}). We focus on $\theta_{6,6}$ and $\theta_{7,7}$. 
$\Theta_{6,6}$, given in [@Hou-preprint Appendix A3], is a product of 60 homogeneous polynomials of degree 2 in $X_0,\dots,X_5$: $$\Theta_{6,6}=\prod_{(i_1,i_2,i_3,i_4,i_5)}\Psi_6(X_0,X_{i_1},X_{i_2},X_{i_3},X_{i_4},X_{i_5}),$$ where $$\parbox[t]{\textwidth}{\raggedright$ \Psi_6=X_0^2+X_1 X_0-X_2 X_0-2 X_3 X_0-X_4 X_0+X_5 X_0+X_1^2+X_2^2+X_3^2+X_4^2+X_5^2+X_1 X_2-X_1 X_3+X_2 X_3-2 X_1 X_4-X_2 X_4+X_3 X_4-X_1 X_5-2 X_2 X_5-X_3 X_5+X_4 X_5, $}$$ and $$\begin{aligned} \label{i1-i5} &(i_1,i_2,i_3,i_4,i_5)=\\ &(1,3,4,5,2),(1,3,5,4,2),(1,4,3,5,2),(1,4,5,3,2),(1,5,3,4,2),(1,5,4,3,2),\cr &(1,2,4,5,3),(1,2,5,4,3),(1,4,2,5,3),(1,4,5,2,3),(1,5,2,4,3),(1,5,4,2,3),\cr &(1,2,3,5,4),(1,2,5,3,4),(1,3,2,5,4),(1,3,5,2,4),(1,5,2,3,4),(1,5,3,2,4),\cr &(1,2,3,4,5),(1,2,4,3,5),(1,3,2,4,5),(1,3,4,2,5),(1,4,2,3,5),(1,4,3,2,5),\cr &(2,1,4,5,3),(2,1,5,4,3),(2,4,1,5,3),(2,4,5,1,3),(2,5,1,4,3),(2,5,4,1,3),\cr &(2,1,3,5,4),(2,1,5,3,4),(2,3,1,5,4),(2,3,5,1,4),(2,5,1,3,4),(2,5,3,1,4),\cr &(2,1,3,4,5),(2,1,4,3,5),(2,3,1,4,5),(2,3,4,1,5),(2,4,1,3,5),(2,4,3,1,5),\cr &(3,1,2,5,4),(3,1,5,2,4),(3,2,1,5,4),(3,2,5,1,4),(3,5,1,2,4),(3,5,2,1,4),\cr &(3,1,2,4,5),(3,1,4,2,5),(3,2,1,4,5),(3,2,4,1,5),(3,4,1,2,5),(3,4,2,1,5),\cr &(4,1,2,3,5),(4,1,3,2,5),(4,2,1,3,5),(4,2,3,1,5),(4,3,1,2,5),(4,3,2,1,5).\nonumber\end{aligned}$$ $\Theta_{7,7}$ can be explicitly computed using equation (2.8) of [@Hou-preprint]; it is a product of 120 homogeneous polynomials of degree 6 in $X_0,\dots,X_6$; see Appendix A1. The computation of the symmetric reductions $\theta_{6,6}(s_1,\dots,s_6)$ and $\theta_{7,7}(s_1,\dots,s_7)$ (over $\Bbb Z$ or in characteristic $p$) is impractical, if not impossible. However, there is an indirect method that allows us to compute $\theta_{6,6}(1,0,\dots,0,s_6)$ and $\theta_{7,7}(1,0,\dots,0,s_7)$ in characteristic $p$. We describe this method in a more general setting.
Let $p$ be a prime and $\Theta(X_0,\dots,X_{n-1})\in\Bbb F_p[X_0,\dots,X_{n-1}]$ be a homogeneous symmetric polynomial of degree $d$. We have $$\begin{aligned} \Theta(X_0,\dots,X_{n-1}) \,&=\theta(s_1,\dots,s_n)\cr &=\sum_{j\le\lfloor d/n\rfloor}b_js_1^{d-jn}s_n^j+\text{terms involving $s_2,\dots,s_{n-1}$},\end{aligned}$$ where $\theta$ is the symmetric reduction of $\Theta$, $s_i$ is the $i$th elementary symmetric polynomial in $X_0,\dots,X_{n-1}$, and $b_j\in\Bbb F_p$. Then $$\label{theta-bj} \theta(1,0,\dots,0,s_n)=\sum_{j\le\lfloor d/n\rfloor}b_js_n^j$$ and hence the question is to find $b_0,\dots,b_{\lfloor d/n\rfloor}\in\Bbb F_p$. Given a polynomial $f\in\Bbb F_p[X]$ of relatively small degree, one can determine its roots explicitly in its splitting field. This is a unique feature not possessed by other fields. Our method exploits this distinctive advantage of finite fields. Take a "sample" polynomial $f=X^n-X^{n-1}+(-1)^nu\in\Bbb F_{p^k}[X]$ with $\Bbb F_p(u)=\Bbb F_{p^k}$, and let $r_0,\dots,r_{n-1}$ be the roots of $f$ (in some extension of $\Bbb F_{p^k}$). Then $$\label{bj} \Theta(r_0,\dots,r_{n-1})=\sum_{j\le\lfloor d/n\rfloor}b_ju^j.$$ Since $r_0,\dots,r_{n-1}$ are explicitly determined, we can compute $\Theta(r_0,\dots,r_{n-1})$. Since $\Theta(X_0,\dots,X_{n-1})$ is symmetric, $\Theta(r_0,\dots,r_{n-1})\in\Bbb F_{p^k}=\Bbb F_p(u)$. Hence $$\label{C(u)} \Theta(r_0,\dots,r_{n-1})=[u^0,\dots,u^{k-1}]C(u)$$ for some $C(u)\in\text{M}_{k\times 1}(\Bbb F_p)$. On the other hand, we can write $$\label{A(u)} [u^0,\dots,u^{\lfloor d/n\rfloor}]=[u^0,\dots,u^{k-1}]A(u),$$ where $A(u)\in\text{M}_{k\times(\lfloor d/n\rfloor+1)}(\Bbb F_p)$. 
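The matrix $A(u)$ just defined is produced mechanically by reducing the powers $u^0,\dots,u^{\lfloor d/n\rfloor}$ modulo the minimal polynomial $g$ of $u$ over $\Bbb F_p$. As a small illustrative sketch (our own code, not from the paper; for $\Theta_{6,6}$ one has $d=120$ and $n=6$, hence $\lfloor d/n\rfloor+1=21$ columns), the following reproduces the matrix $A(u_1)$ for $p=5$, $g_1=X^2+2$ that is tabulated in the next section.

```python
def a_matrix(g, p, N):
    """A(u) in M_{k x N}(F_p): column j is the coordinate vector of u^j in
    the basis u^0, ..., u^{k-1} of F_p(u), where g (monic, coeffs listed
    from lowest degree) is the minimal polynomial of u and k = deg g."""
    k = len(g) - 1
    cols, r = [], [1] + [0] * (k - 1)     # r holds u^j, starting at u^0 = 1
    for _ in range(N):
        cols.append(list(r))
        r = [0] + r                       # multiply by u ...
        c = r.pop()                       # ... and reduce the u^k term by g
        r = [(r[i] - c * g[i]) % p for i in range(k)]
    # transpose: rows indexed by basis element, columns by the power j
    return [[cols[j][i] for j in range(N)] for i in range(k)]

# p = 5, g_1 = X^2 + 2, N = floor(120/6) + 1 = 21
A = a_matrix([2, 0, 1], 5, 21)
for row in A:
    print(row)
```

The two printed rows agree with the first two rows of the block $[A(u_1)\ C(u_1)]$ listed for $p=5$ in Section 3 (without the final $C(u_1)$ column).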
By [\[C(u)\]](#C(u)){reference-type="eqref" reference="C(u)"} and [\[A(u)\]](#A(u)){reference-type="eqref" reference="A(u)"}, [\[bj\]](#bj){reference-type="eqref" reference="bj"} becomes $$[u^0,\dots,u^{k-1}]C(u)=[u^0,\dots,u^{k-1}]A(u)\left[\begin{matrix}b_0\cr\vdots\cr b_{\lfloor d/n\rfloor}\end{matrix}\right],$$ i.e., $$\label{LinEq} A(u)\left[\begin{matrix}b_0\cr\vdots\cr b_{\lfloor d/n\rfloor}\end{matrix}\right]=C(u).$$ Each sample polynomial $f=X^n-X^{n-1}+(-1)^nu$ gives rise to a linear system [\[LinEq\]](#LinEq){reference-type="eqref" reference="LinEq"} for $b_0,\dots,b_{\lfloor d/n\rfloor}$. Sufficiently many such linear systems allow us to solve for $b_0,\dots,b_{\lfloor d/n\rfloor}$. More precisely, we have the following algorithm. **Algorithm 3**. *(An algorithm for computing $b_0,\dots,b_{\lfloor d/n\rfloor}$ in [\[theta-bj\]](#theta-bj){reference-type="eqref" reference="theta-bj"})* - *Choose $u_1,\dots,u_m\in\overline\Bbb F_p$ (the algebraic closure of $\Bbb F_p$) with $[\Bbb F_p(u_i):\Bbb F_p]=k_i$ satisfying the following conditions:* - *The minimal polynomials of $u_1,\dots,u_m$ over $\Bbb F_p$, denoted by $g_1,\dots,g_m$, are all distinct.* - *$f_i:=X^n-X^{n-1}+(-1)^nu_i$ is irreducible over $\Bbb F_{p^{k_i}}$.* - *$\sum_{i=1}^mk_i\ge\lfloor d/n\rfloor+1$.* *To satisfy (ii), use the following fact: If $\prod_{j=0}^{k_i-1}(X^n-X^{n-1}+(-1)^nu_i^{p^j})$ is irreducible over $\Bbb F_p$, then $X^n-X^{n-1}+(-1)^nu_i$ is irreducible over $\Bbb F_{p^{k_i}}$.* - *For each $1\le i\le m$, compute the matrix $A(u_i)$ in [\[A(u)\]](#A(u)){reference-type="eqref" reference="A(u)"} by reducing $u^j$ ($0\le j\le\lfloor d/n\rfloor$) mod $g_i(u)$. Since $f_i$ is irreducible over $\Bbb F_{p^{k_i}}$, its roots are $r_j= X^{p^{k_ij}}$ modulo $f_i(X)$ and $g_i(u)$ ($0\le j\le n-1$).
Compute the matrix $C(u_i)$ in [\[C(u)\]](#C(u)){reference-type="eqref" reference="C(u)"} by reducing $\Theta(r_0,\dots,r_{n-1})$ mod $f_i(X)$ and $g_i(u)$.* - *Solve the linear system $$\label{LinSys} \left[ \begin{matrix} A(u_1)\cr\vdots\cr A(u_m)\end{matrix}\right] \left[ \begin{matrix} b_0\cr\vdots\cr b_{\lfloor d/n\rfloor}\end{matrix}\right]= \left[ \begin{matrix} C(u_1)\cr\vdots\cr C(u_m)\end{matrix}\right]$$ for $b_0,\dots, b_{\lfloor d/n\rfloor}$. By Lemma [Lemma 4](#L2.2){reference-type="ref" reference="L2.2"} below, $$\text{rank}\left[ \begin{matrix} A(u_1)\cr\vdots\cr A(u_m)\end{matrix}\right]=\lfloor d/n\rfloor+1.$$ Hence [\[LinSys\]](#LinSys){reference-type="eqref" reference="LinSys"} has a unique solution $b_0,\dots,b_{\lfloor d/n\rfloor}$.* Algorithm [Algorithm 3](#Alg2.1){reference-type="ref" reference="Alg2.1"} is applied to $\Theta_{6,6}$ and $\Theta_{7,7}$ in the next two sections. **Lemma 4**. *Let $N,m>0$ be integers and $u_1,\dots,u_m\in\overline\Bbb F_q$ (the algebraic closure of $\Bbb F_q$) be such that their minimal polynomials over $\Bbb F_q$ are all distinct. Let $[\Bbb F_q(u_i):\Bbb F_q]=k_i$ and write $$[u_i^0,u_i^1,\dots,u_i^{N-1}]=[u_i^0,\dots,u_i^{k_i-1}]A(u_i),$$ where $A(u_i)\in\text{\rm M}_{k_i\times N}(\Bbb F_q)$. Then $$\text{\rm rank}\left[ \begin{matrix} A(u_1)\cr\vdots\cr A(u_m)\end{matrix}\right]=\min\{k_1+\cdots+k_m,N\}.$$* *Proof.* Let $g_i$ be the minimal polynomial of $u_i$ over $\Bbb F_q$. Let $a=(a_0,\dots,a_{N-1})\in\Bbb F_q^N$. 
We have $$\begin{aligned} \left[\begin{matrix} A(u_1)\cr\vdots\cr A(u_m)\end{matrix}\right]a^T=0\ &\Leftrightarrow\ \left[\begin{matrix} u_1^0&\cdots&u_1^{N-1}\cr \vdots&&\vdots\cr u_m^0&\cdots&u_m^{N-1}\end{matrix}\right]a^T=0\cr &\Leftrightarrow\ g_i \,\Bigm | \,\sum_{j=0}^{N-1}a_jx^j\ \text{for all}\ 1\le i\le m\cr &\Leftrightarrow\ \prod_{i=1}^mg_i\,\Bigm | \,\sum_{j=0}^{N-1}a_jx^j.\end{aligned}$$ Since $\deg \prod_{i=1}^mg_i=\sum_{i=1}^mk_i$, we have $$\dim_{\Bbb F_q}\Bigl\{(a_0,\dots,a_{N-1})\in\Bbb F_q^N:\prod_{i=1}^mg_i\,\Bigm | \,\sum_{j=0}^{N-1}a_jx^j\Bigr\}=\max\Bigl\{N-\sum_{i=1}^mk_i,\,0\Bigr\}.$$ Hence $$\text{rank}\left[ \begin{matrix} A(u_1)\cr\vdots\cr A(u_m)\end{matrix}\right]=N-\max\Bigl\{N-\sum_{i=1}^mk_i,\,0\Bigr\}=\min\Bigl\{\sum_{i=1}^mk_i,\,N\Bigr\}.$$ ◻ In the above lemma, if $u_1$ and $u_2$ have the same minimal polynomial over $\Bbb F_q$, then $A(u_1)$ and $A(u_2)$ are row equivalent. The reason is that for $a=(a_0,\dots,a_{N-1})\in\Bbb F_q^N$, $$A(u_1)a^T=0\ \Leftrightarrow\ [u_1^0,\dots,u_1^{N-1}]a^T=0\ \Leftrightarrow\ [u_2^0,\dots,u_2^{N-1}]a^T=0\ \Leftrightarrow\ A(u_2)a^T=0.$$ More generally, the method described in this section can be used to compute the symmetric reduction $\theta(s_1,\dots,s_n)$ where most $s_i$ are 0. For example, instead of [\[theta-bj\]](#theta-bj){reference-type="eqref" reference="theta-bj"}, we may consider $$\theta(1,0,\dots,0,s_{n-1},s_n)=\sum_{i(n-1)+jn\le d}b_{ij}s_{n-1}^is_n^j,\quad b_{ij}\in\Bbb F_p.$$ The coefficients $b_{ij}$ can be determined using sample polynomials of the form $X^n-X^{n-1}+(-1)^{n-1}u+(-1)^nv$. Such results will produce criteria for irreducible polynomials of the form $X^n+X^{n-1}+aX+b\in\Bbb F_q[X]$ to be normal over $\Bbb F_q$ in terms of $a,b$. # The Case $n=6$ Let $f=X^6+X^5+a\in\Bbb F_q[X]$ be irreducible, where $\text{char}\,\Bbb F_q=p$. To apply Theorem 1.2, we need to compute $\theta_{6,m}(1,0,\dots,0,a)$ in characteristic $p$ for $m=2,3,6$. 
$\theta_{6,2}(s_1,\dots,s_6)$ and $\theta_{6,3}(s_1,\dots,s_6)$ have been determined in [@Hou-preprint Appendix A4] as polynomials over $\Bbb Z$ regardless of $p$. In particular, we have $$\label{theta62} \theta_{6,2}(1,0,\dots,0,a)=(320 a-1)^2,$$ $$\label{theta63} \theta_{6,3}(1,0,\dots,0,a)= 17222625 a^3+ 10935 a^2 + 1215 a+1.$$ To compute $\theta_{6,6}(1,0,\dots,0,a)$, we apply Algorithm [Algorithm 3](#Alg2.1){reference-type="ref" reference="Alg2.1"} with $n=6$. Without much difficulty, we computed $\theta_{6,6}(1,0,\dots,0,a)$ in characteristic $p$ for $5\le p<1000$. ($p=2,3$ are unnecessary since they divide $6$.) For example, for $p=5$, $$\label{p=5} \theta_{6,6}(1,0,\dots,0,a)=3(a^2+4a+2)^5.$$ For $p=7$, $$\begin{aligned} \label{p=7} &\theta_{6,6}(1,0,\dots,0,a)=\\ &3 a^{14}+4 a^{13}+6 a^{12}+a^{11}+4 a^{10}+a^8+4 a^7+3 a^5+4 a^4+6 a^2+a+1.\nonumber \end{aligned}$$ For $p=11$, $$\begin{aligned} \label{p=11} &\theta_{6,6}(1,0,\dots,0,a)=\\ &4 (a^{19}+3 a^{18}+10 a^{17}+5 a^{16}+7 a^{15}+4 a^{14}+4 a^{13}+6 a^{12}+8 a^{11}+10 a^{10}\cr &+4 a^9+10 a^8+7 a^7+5 a^6+5 a^5+8 a^4+2 a^3+a^2+2 a+3).\nonumber\end{aligned}$$ The formulas of $\theta_{6,6}(1,0,\dots,0,a)$ for $5\le p\le 97$ are given in Appendix A2. The following is Theorem 1.2 specified for $n=6$: **Theorem 5**. *Assume that $f=X^6+X^5+a\in\Bbb F_q[X]$ is irreducible, where $\text{\rm char}\,\Bbb F_q=p\ge 5$. If $\prod_{m=2,3,6}\theta_{6,m}(1,0,\dots,0,a)\ne 0$, then $f$ is normal over $\Bbb F_q$.* For example, when $p=5$, Theorem [Theorem 5](#T3.1){reference-type="ref" reference="T3.1"} states that if $f=X^6+X^5+a\in\Bbb F_q[X]$ is irreducible and $$a^2+4a+2\ne0,$$ then $f$ is normal over $\Bbb F_q$. When $p=7$, $\theta_{6,2}(1,0,\dots,0,a)=4(a+4)^2$ and $\theta_{6,3}(1,0,\dots,0,a)= a^2+4a+1$. 
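Since $\theta_{6,2}(1,0,\dots,0,a)$ and $\theta_{6,3}(1,0,\dots,0,a)$ are given by integer formulas, their characteristic-$p$ forms are obtained simply by reducing the coefficients modulo $p$. The following small sketch (our own illustration, not code from the paper) recovers the factored forms used in the examples of this section:

```python
def mod_coeffs(coeffs, p):
    """Reduce integer polynomial coefficients (lowest degree first) mod p."""
    return [c % p for c in coeffs]

# theta_{6,2}(1,0,...,0,a) = (320a - 1)^2 = 102400a^2 - 640a + 1
theta62 = [1, -640, 102400]
# theta_{6,3}(1,0,...,0,a) = 17222625a^3 + 10935a^2 + 1215a + 1
theta63 = [1, 1215, 10935, 17222625]

print(mod_coeffs(theta62, 7))    # 4a^2 + 4a + 1 = 4(a + 4)^2 over F_7
print(mod_coeffs(theta63, 7))    # a^2 + 4a + 1 (the a^3 term vanishes mod 7)
print(mod_coeffs(theta62, 11))   # a^2 + 9a + 1 = (a + 10)^2 over F_11
print(mod_coeffs(theta63, 11))   # 2a^3 + a^2 + 5a + 1 = 2(a+7)(a^2+10a+4)
```

The reduced coefficient lists match the factorizations $4(a+4)^2$ and $a^2+4a+1$ for $p=7$, and $(a+10)^2$ and $2(a+7)(a^2+10a+4)$ for $p=11$, quoted in this section.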
If $f=X^6+X^5+a\in\Bbb F_q[X]$ is irreducible and $$\begin{cases} a\ne 3,\cr a^2+4a+1\ne0,\cr 3 a^{14}+4 a^{13}+6 a^{12}+a^{11}+4 a^{10}+a^8+4 a^7+3 a^5+4 a^4+6 a^2+a+1 \ne 0, \end{cases}$$ then $f$ is normal over $\Bbb F_q$. When $p=11$, $\theta_{6,2}(1,0,\dots,0,a)=(a+10)^2$ and $\theta_{6,3}(1,0,\dots,0,a)=2 (a+7)(a^2+10a+4)$. If $f=X^6+X^5+a\in\Bbb F_q[X]$ is irreducible and $$\begin{cases} a\ne 1,4,\cr a^2+10a+4\ne0,\cr 4 (a^{19}+3 a^{18}+10 a^{17}+5 a^{16}+7 a^{15}+4 a^{14}+4 a^{13}+6 a^{12}+8 a^{11}+10 a^{10}\cr +4 a^9+10 a^8+7 a^7+5 a^6+5 a^5+8 a^4+2 a^3+a^2+2 a+3)\ne 0, \end{cases}$$ then $f$ is normal over $\Bbb F_q$. To conclude this section, we include the intermediate data in the computations of [\[p=5\]](#p=5){reference-type="eqref" reference="p=5"}, [\[p=7\]](#p=7){reference-type="eqref" reference="p=7"} and [\[p=11\]](#p=11){reference-type="eqref" reference="p=11"}. We follow the notation of Algorithm [Algorithm 3](#Alg2.1){reference-type="ref" reference="Alg2.1"} $\bullet$ $p=5$. Minimal polynomials of $u_i$: $$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|c|c} \hline $i$ &$k_i$ & $g_i(X)$\\ \hline $1$& 2&$X^2+2$\\ $2$& 3&$X^3+X^2+2$\\ $3$& 3&$X^3+2 X^2+2 X+2$\\ $4$& 3&$X^3+2 X^2+4 X+2$\\ $5$& 2&$X^2+2 X+3$\\ $6$& 3&$X^3+3 X^2+2 X+3$\\ $7$& 3&$X^3+3 X+3$\\ $8$& 3&$X^3+4 X+3$\\ \hline \end{tabular}$$ The matrices $[A(u_i)\ C(u_i)]$: $i=1$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 3 & 0 & 4 & 0 & 2 & 0 & 1 & 0 & 3 & 0 & 4 & 0 & 2 & 0 & 1 & 0 & 3 & 0 & 4 &\quad 0\\ 0 & 1 & 0 & 3 & 0 & 4 & 0 & 2 & 0 & 1 & 0 & 3 & 0 & 4 & 0 & 2 & 0 & 1 & 0 & 3 & 0 &\quad 3\\ \end{array}$$ $i=2$. 
$$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 0 & 3 & 2 & 3 & 1 & 0 & 4 & 4 & 1 & 1 & 1 & 2 & 1 & 2 & 4 & 4 & 2 & 0 & 2 &\quad 0\\ 0 & 1 & 0 & 0 & 3 & 2 & 3 & 1 & 0 & 4 & 4 & 1 & 1 & 1 & 2 & 1 & 2 & 4 & 4 & 2 & 0 &\quad 1\\ 0 & 0 & 1 & 4 & 1 & 2 & 0 & 3 & 3 & 2 & 2 & 2 & 4 & 2 & 4 & 3 & 3 & 4 & 0 & 4 & 3 &\quad 0\\ \end{array}$$ $i=3$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 0 & 3 & 4 & 1 & 4 & 2 & 1 & 1 & 2 & 2 & 0 & 2 & 2 & 2 & 3 & 1 & 3 & 1 & 0 &\quad 4\\ 0 & 1 & 0 & 3 & 2 & 0 & 0 & 1 & 3 & 2 & 3 & 4 & 2 & 2 & 4 & 4 & 0 & 4 & 4 & 4 & 1 &\quad 4\\ 0 & 0 & 1 & 3 & 2 & 3 & 4 & 2 & 2 & 4 & 4 & 0 & 4 & 4 & 4 & 1 & 2 & 1 & 2 & 0 & 4 &\quad 3\\ \end{array}$$ $i=4$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 0 & 3 & 4 & 0 & 3 & 1 & 1 & 3 & 3 & 0 & 2 & 0 & 2 & 2 & 3 & 2 & 0 & 1 & 4 &\quad 0\\ 0 & 1 & 0 & 1 & 1 & 4 & 1 & 0 & 3 & 2 & 4 & 3 & 4 & 2 & 4 & 1 & 3 & 2 & 2 & 2 & 4 &\quad 0\\ 0 & 0 & 1 & 3 & 0 & 1 & 2 & 2 & 1 & 1 & 0 & 4 & 0 & 4 & 4 & 1 & 4 & 0 & 2 & 3 & 1 &\quad 2\\ \end{array}$$ $i=5$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 2 & 1 & 2 & 3 & 3 & 0 & 1 & 3 & 1 & 4 & 4 & 0 & 3 & 4 & 3 & 2 & 2 & 0 & 4 &\quad 0\\ 0 & 1 & 3 & 1 & 4 & 4 & 0 & 3 & 4 & 3 & 2 & 2 & 0 & 4 & 2 & 4 & 1 & 1 & 0 & 2 & 1 &\quad 4\\ \end{array}$$ $i=6$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 0 & 2 & 4 & 4 & 4 & 3 & 1 & 4 & 2 & 3 & 0 & 3 & 2 & 3 & 3 & 4 & 3 & 4 & 0 &\quad 0\\ 0 & 1 & 0 & 3 & 3 & 0 & 0 & 1 & 2 & 2 & 2 & 4 & 3 & 2 & 1 & 4 & 0 & 4 & 1 & 4 & 4 &\quad 1\\ 0 & 0 & 1 & 2 & 2 & 2 & 4 & 3 & 2 & 1 & 4 & 0 & 4 & 1 & 4 & 4 & 2 & 4 & 2 & 0 & 4 &\quad 1\\ \end{array}$$ $i=7$. 
$$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 0 & 2 & 0 & 4 & 4 & 3 & 1 & 4 & 3 & 0 & 4 & 1 & 3 & 0 & 3 & 1 & 1 & 3 & 4 &\quad 3\\ 0 & 1 & 0 & 2 & 2 & 4 & 3 & 2 & 4 & 0 & 2 & 3 & 4 & 0 & 4 & 3 & 3 & 4 & 2 & 4 & 2 &\quad 4\\ 0 & 0 & 1 & 0 & 2 & 2 & 4 & 3 & 2 & 4 & 0 & 2 & 3 & 4 & 0 & 4 & 3 & 3 & 4 & 2 & 4 &\quad 4\\ \end{array}$$ $i=8$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{cccccccccccccccccccccc} 1 & 0 & 0 & 2 & 0 & 2 & 4 & 2 & 3 & 0 & 2 & 1 & 2 & 0 & 4 & 4 & 4 & 2 & 2 & 0 & 1 &\quad 1\\ 0 & 1 & 0 & 1 & 2 & 1 & 4 & 0 & 1 & 3 & 1 & 0 & 2 & 2 & 2 & 1 & 1 & 0 & 3 & 2 & 3 &\quad 0\\ 0 & 0 & 1 & 0 & 1 & 2 & 1 & 4 & 0 & 1 & 3 & 1 & 0 & 2 & 2 & 2 & 1 & 1 & 0 & 3 & 2 &\quad 3\\ \end{array}$$ $\bullet$ $p=7$. Minimal polynomials of $u_i$: $$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|c|c} \hline $i$ &$k_i$ & $g_i(X)$\\ \hline $1$& $3$& $X^3+5 X^2+X+1$\\ $2$& $3$& $X^3+2 X+1$\\ $3$& $2$& $X^2+3 X+1$\\ $4$& $3$& $X^3+4 X+1$\\ $5$& $3$& $X^3+6 X^2+4 X+1$\\ $6$& $2$& $X^2+2 X+2$\\ $7$& $3$& $X^3+3 X^2+2 X+2$\\ $8$& $3$& $X^3+2 X^2+3$\\ \hline \end{tabular}$$ The matrices $[A(u_i)\ C(u_i)]$: $i=1$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 6 & 5 & 4 & 4 & 6 & 4 & 5 & 0 & 5 & 5 & 5 & 0 & 4 & 3 & 2 & 4 & 3 & 0 & \quad 4 \\ 0 & 1 & 0 & 6 & 4 & 2 & 1 & 3 & 3 & 2 & 5 & 5 & 3 & 3 & 5 & 4 & 0 & 5 & 6 & 0 & 3 & \quad 3 \\ 0 & 0 & 1 & 2 & 3 & 3 & 1 & 3 & 2 & 0 & 2 & 2 & 2 & 0 & 3 & 4 & 5 & 3 & 4 & 0 & 0 & \quad 2 \\ \end{array}$$ $i=2$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 6 & 0 & 2 & 1 & 3 & 3 & 0 & 5 & 4 & 4 & 1 & 2 & 1 & 2 & 3 & 2 & 6 & 0 & \quad 3 \\ 0 & 1 & 0 & 5 & 6 & 4 & 4 & 0 & 2 & 3 & 3 & 6 & 5 & 6 & 5 & 4 & 5 & 1 & 0 & 0 & 6 & \quad 0 \\ 0 & 0 & 1 & 0 & 5 & 6 & 4 & 4 & 0 & 2 & 3 & 3 & 6 & 5 & 6 & 5 & 4 & 5 & 1 & 0 & 0 & \quad 2 \\ \end{array}$$ $i=3$. 
$$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 6 & 3 & 6 & 0 & 1 & 4 & 1 & 0 & 6 & 3 & 6 & 0 & 1 & 4 & 1 & 0 & 6 & 3 & 6 & \quad 4 \\ 0 & 1 & 4 & 1 & 0 & 6 & 3 & 6 & 0 & 1 & 4 & 1 & 0 & 6 & 3 & 6 & 0 & 1 & 4 & 1 & 0 & \quad 5 \\ \end{array}$$ $i=4$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 6 & 0 & 4 & 1 & 5 & 6 & 0 & 6 & 1 & 4 & 4 & 4 & 1 & 1 & 6 & 2 & 3 & 0 & \quad 4 \\ 0 & 1 & 0 & 3 & 6 & 2 & 1 & 0 & 1 & 6 & 3 & 3 & 3 & 6 & 6 & 1 & 5 & 4 & 0 & 0 & 3 & \quad 2 \\ 0 & 0 & 1 & 0 & 3 & 6 & 2 & 1 & 0 & 1 & 6 & 3 & 3 & 3 & 6 & 6 & 1 & 5 & 4 & 0 & 0 & \quad 3 \\ \end{array}$$ $i=5$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 6 & 6 & 3 & 1 & 4 & 4 & 1 & 2 & 1 & 6 & 0 & 3 & 4 & 6 & 1 & 1 & 5 & 0 & \quad 3 \\ 0 & 1 & 0 & 3 & 2 & 4 & 0 & 3 & 6 & 1 & 2 & 6 & 4 & 6 & 5 & 5 & 0 & 3 & 5 & 0 & 5 & \quad 4 \\ 0 & 0 & 1 & 1 & 4 & 6 & 3 & 3 & 6 & 5 & 6 & 1 & 0 & 4 & 3 & 1 & 6 & 6 & 2 & 0 & 0 & \quad 3 \\ \end{array}$$ $i=6$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 5 & 4 & 3 & 0 & 1 & 5 & 2 & 0 & 3 & 1 & 6 & 0 & 2 & 3 & 4 & 0 & 6 & 2 & 5 & \quad 1 \\ 0 & 1 & 5 & 2 & 0 & 3 & 1 & 6 & 0 & 2 & 3 & 4 & 0 & 6 & 2 & 5 & 0 & 4 & 6 & 1 & 0 & \quad 5 \\ \end{array}$$ $i=7$. $$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 5 & 6 & 0 & 6 & 5 & 1 & 3 & 0 & 6 & 4 & 4 & 3 & 3 & 5 & 1 & 2 & 3 & 6 & \quad 3 \\ 0 & 1 & 0 & 5 & 4 & 6 & 6 & 4 & 6 & 4 & 3 & 6 & 3 & 1 & 0 & 6 & 1 & 6 & 3 & 5 & 2 & \quad 6 \\ 0 & 0 & 1 & 4 & 0 & 4 & 1 & 3 & 2 & 0 & 4 & 5 & 5 & 2 & 2 & 1 & 3 & 6 & 2 & 4 & 0 & \quad 6 \\ \end{array}$$ $i=8$. 
$$\arraycolsep=1.6pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 4 & 6 & 2 & 5 & 0 & 1 & 4 & 6 & 6 & 4 & 2 & 6 & 4 & 0 & 3 & 3 & 1 & 3 & \quad 0 \\ 0 & 1 & 0 & 0 & 4 & 6 & 2 & 5 & 0 & 1 & 4 & 6 & 6 & 4 & 2 & 6 & 4 & 0 & 3 & 3 & 1 & \quad 2 \\ 0 & 0 & 1 & 5 & 4 & 3 & 0 & 2 & 1 & 5 & 5 & 1 & 4 & 5 & 1 & 0 & 6 & 6 & 2 & 6 & 5 & \quad 3 \\ \end{array}$$ $\bullet$ $p=11$. Minimal polynomials of $u_i$: $$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|c|c} \hline $i$ &$k_i$ & $g_i(X)$\\ \hline $1$& $3$& $X^3+5 X^2+1$\\ $2$& $3$& $X^3+7 X^2+1$\\ $3$& $3$& $X^3+3 X^2+5 X+1$\\ $4$& $3$& $X^3+6 X^2+5 X+1$\\ $5$& $3$& $X^3+4 X^2+6 X+1$\\ $6$& $3$& $X^3+8 X^2+6 X+1$\\ $7$& $3$& $X^3+8 X+1$\\ \hline \end{tabular}$$ The matrices $[A(u_i)\ C(u_i)]$: $i=1$. $$\arraycolsep=2.8pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 10 & 5 & 8 & 5 & 3 & 10 & 0 & 8 & 5 & 8 & 7 & 4 & 5 & 1 & 2 & 7 & 8 & 2 & \quad 0 \\ 0 & 1 & 0 & 0 & 10 & 5 & 8 & 5 & 3 & 10 & 0 & 8 & 5 & 8 & 7 & 4 & 5 & 1 & 2 & 7 & 8 & \quad 3 \\ 0 & 0 & 1 & 6 & 3 & 6 & 8 & 1 & 0 & 3 & 6 & 3 & 4 & 7 & 6 & 10 & 9 & 4 & 3 & 9 & 6 & \quad 10 \\ \end{array}$$ $i=2$. $$\arraycolsep=3pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 10 & 7 & 6 & 3 & 5 & 3 & 9 & 9 & 0 & 2 & 10 & 7 & 4 & 6 & 6 & 9 & 8 & 4 & \quad 3 \\ 0 & 1 & 0 & 0 & 10 & 7 & 6 & 3 & 5 & 3 & 9 & 9 & 0 & 2 & 10 & 7 & 4 & 6 & 6 & 9 & 8 & \quad 8 \\ 0 & 0 & 1 & 4 & 5 & 8 & 6 & 8 & 2 & 2 & 0 & 9 & 1 & 4 & 7 & 5 & 5 & 2 & 3 & 7 & 4 & \quad 1 \\ \end{array}$$ $i=3$. $$\arraycolsep=2.75pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 10 & 3 & 7 & 9 & 1 & 0 & 8 & 8 & 2 & 1 & 1 & 1 & 2 & 10 & 3 & 5 & 4 & 4 & \quad 5 \\ 0 & 1 & 0 & 6 & 3 & 5 & 8 & 3 & 1 & 7 & 4 & 7 & 7 & 6 & 6 & 0 & 8 & 3 & 6 & 3 & 2 & \quad 7 \\ 0 & 0 & 1 & 8 & 4 & 2 & 10 & 0 & 3 & 3 & 9 & 10 & 10 & 10 & 9 & 1 & 8 & 6 & 7 & 7 & 4 & \quad 3 \\ \end{array}$$ $i=4$. 
$$\arraycolsep=2.4pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 10 & 6 & 2 & 3 & 10 & 0 & 2 & 0 & 1 & 3 & 10 & 1 & 7 & 9 & 9 & 4 & 10 & 10 & \quad 7 \\ 0 & 1 & 0 & 6 & 7 & 5 & 6 & 9 & 10 & 10 & 2 & 5 & 5 & 9 & 4 & 3 & 8 & 10 & 7 & 10 & 5 & \quad 8 \\ 0 & 0 & 1 & 5 & 9 & 8 & 1 & 0 & 9 & 0 & 10 & 8 & 1 & 10 & 4 & 2 & 2 & 7 & 1 & 1 & 4 & \quad 1 \\ \end{array}$$ $i=5$. $$\arraycolsep=2pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 10 & 4 & 1 & 6 & 10 & 0 & 0 & 1 & 7 & 10 & 5 & 1 & 0 & 0 & 10 & 4 & 1 & 6 & \quad 9 \\ 0 & 1 & 0 & 5 & 1 & 10 & 4 & 0 & 10 & 0 & 6 & 10 & 1 & 7 & 0 & 1 & 0 & 5 & 1 & 10 & 4 & \quad 9 \\ 0 & 0 & 1 & 7 & 10 & 5 & 1 & 0 & 0 & 10 & 4 & 1 & 6 & 10 & 0 & 0 & 1 & 7 & 10 & 5 & 1 & \quad 1 \\ \end{array}$$ $i=6$. $$\arraycolsep=3pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 10 & 8 & 8 & 10 & 7 & 8 & 5 & 4 & 7 & 3 & 7 & 7 & 9 & 0 & 5 & 6 & 10 & 0 & \quad 7 \\ 0 & 1 & 0 & 5 & 3 & 1 & 2 & 8 & 0 & 5 & 7 & 2 & 3 & 1 & 5 & 6 & 9 & 8 & 8 & 0 & 10 & \quad 2 \\ 0 & 0 & 1 & 3 & 3 & 1 & 4 & 3 & 6 & 7 & 4 & 8 & 4 & 4 & 2 & 0 & 6 & 5 & 1 & 0 & 0 & \quad 5 \\ \end{array}$$ $i=7$. $$\arraycolsep=2.4pt%\def\arraystretch{2.2} \begin{array}{ccccccccccccccccccccccc} 1 & 0 & 0 & 10 & 0 & 8 & 1 & 2 & 6 & 5 & 5 & 9 & 10 & 0 & 10 & 1 & 8 & 4 & 1 & 4 & 10 & \quad 9 \\ 0 & 1 & 0 & 3 & 10 & 9 & 5 & 6 & 6 & 2 & 1 & 0 & 1 & 10 & 3 & 7 & 10 & 7 & 1 & 0 & 7 & \quad 8 \\ 0 & 0 & 1 & 0 & 3 & 10 & 9 & 5 & 6 & 6 & 2 & 1 & 0 & 1 & 10 & 3 & 7 & 10 & 7 & 1 & 0 & \quad 9 \\ \end{array}$$ # The Case $n=7$ For $n=7$, Theorem [Theorem 2](#T1.2){reference-type="ref" reference="T1.2"} becomes **Theorem 6**. *Assume that $f=X^7+X^6+a\in\Bbb F_q[X]$ is irreducible, where $\text{\rm char}\,\Bbb F_q=p\ne 7$. 
If $\theta_{7,7}(1,0,\dots,0,a)\ne0$, then $f$ is normal over $\Bbb F_q$.* Using Algorithm [Algorithm 3](#Alg2.1){reference-type="ref" reference="Alg2.1"}, we have computed $\theta_{7,7}(1,0,\dots,0,a)$ for $p\le 47$ ($p\ne 7$). (The algorithm can be implemented for larger $p$ with only incremental complexity.) When $p=2$, $$\begin{aligned} &\theta_{7,7}(1,0,\dots,0,a)=\cr &(a^4+a^3+a^2+a+1)^2 (a^5+a^4+a^3+a+1)^2 (a^8+a^7+a^2+a+1)^2\cr & (a^8+a^7+a^4+a^3+a^2+a+1)^2 (a^{10}+a^8+a^7+a^6+a^5+a^2+1)^2.\end{aligned}$$ When $p=3$, $$\begin{aligned} \theta_{7,7}(1,0,\dots,0,a)=\, &2 (a+2)^{18} (a^9+2 a^6+2 a^5+2 a^4+a^3+a+2)^3\cr & (a^{10}+2a^9+a^8+a^6+2 a^4+2 a^3+1)^3.\end{aligned}$$ When $p=5$, $$\begin{aligned} &\theta_{7,7}(1,0,\dots,0,a)=\cr &(a+1)^3 (a^3+4 a^2+3 a+4) (a^{89}+4 a^{88}+4 a^{87}+3 a^{86}+2 a^{85}+3 a^{83}+3 a^{82}+4 a^{81}\cr &+4 a^{80}+2 a^{79}+4 a^{78}+3 a^{77}+2 a^{76}+a^{75}+3 a^{74}+2 a^{73}+4 a^{72}+a^{70}+3 a^{69}+4 a^{68}\cr &+2 a^{67}+a^{66}+2 a^{65}+2 a^{64}+4 a^{62}+2 a^{61}+4 a^{60}+4 a^{59}+a^{58}+a^{56}+a^{55}+3 a^{53}\cr &+3 a^{51}+4 a^{50}+2 a^{49}+2 a^{47}+2 a^{46}+3 a^{45}+2 a^{44}+a^{43}+4 a^{42}+a^{41}+4 a^{40}+2 a^{38}\cr &+4a^{37}+4 a^{36}+2 a^{35}+2 a^{34}+3 a^{33}+3 a^{32}+2 a^{31}+3 a^{30}+a^{29}+a^{28}+a^{27}+2 a^{26}\cr &+2 a^{25}+a^{24}+4 a^{23}+3 a^{22}+4 a^{21}+a^{20}+2 a^{16}+3 a^{15}+2 a^{14}+3 a^{13}+2 a^{12}+a^{11}\cr &+a^{10}+3 a^9+4 a^6+2 a^5+2 a^4+2 a^3+a+4).\end{aligned}$$ The intermediate data in the computation of $\theta_{7,7}(1,0,\dots,0,a)$ with $p=2$ are included in Appendix A4. Theorems [Theorem 5](#T3.1){reference-type="ref" reference="T3.1"} and [Theorem 6](#T4.1){reference-type="ref" reference="T4.1"} are simple criteria for the normality of irreducible polynomials of the form $X^n+X^{n-1}+a\in\Bbb F_q[X]$ ($n=6,7$). Are the sufficient conditions in Theorems [Theorem 5](#T3.1){reference-type="ref" reference="T3.1"} and [Theorem 6](#T4.1){reference-type="ref" reference="T4.1"} close to being necessary? 
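The raw count of irreducible polynomials of the form $X^n+X^{n-1}+a$ over $\Bbb F_p$ that enters this comparison can be reproduced by brute force for prime $p$. The following Python sketch is illustrative only: it uses the standard Rabin irreducibility test rather than the algorithms of this paper, and all function names are ours.

```python
# Brute-force count of a in F_p with X^n + X^(n-1) + a irreducible (p prime).
# Polynomials are coefficient lists, lowest degree first.

def pstrip(c, p):
    c = [x % p for x in c]
    while len(c) > 1 and c[-1] == 0:
        c.pop()
    return c

def pmul(a, b, p):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return pstrip(r, p)

def prem(a, f, p):                       # remainder of a modulo monic f
    a = pstrip(a, p)
    while len(a) >= len(f) and a != [0]:
        c, s = a[-1], len(a) - len(f)
        for i, fi in enumerate(f):
            a[s + i] = (a[s + i] - c * fi) % p
        a = pstrip(a, p)
    return a

def psub(a, b, p):
    m = max(len(a), len(b))
    get = lambda c, i: c[i] if i < len(c) else 0
    return pstrip([get(a, i) - get(b, i) for i in range(m)], p)

def ppow_x(e, f, p):                     # X^e mod f, square-and-multiply
    r, b = [1], prem([0, 1], f, p)
    while e:
        if e & 1:
            r = prem(pmul(r, b, p), f, p)
        b = prem(pmul(b, b, p), f, p)
        e >>= 1
    return r

def pgcd(a, b, p):
    a, b = pstrip(a, p), pstrip(b, p)
    while b != [0]:
        inv = pow(b[-1], p - 2, p)       # make b monic before dividing
        a, b = b, prem(a, [x * inv % p for x in b], p)
    return a

def is_irreducible(f, p, n):             # Rabin's test for monic f of degree n
    x = [0, 1]
    if psub(ppow_x(p ** n, f, p), x, p) != [0]:
        return False
    for q in {d for d in (2, 3, 5, 7) if n % d == 0}:  # prime divisors, n <= 10
        g = pgcd(psub(ppow_x(p ** (n // q), f, p), x, p), f, p)
        if len(g) > 1:
            return False
    return True

def count_irr(p, n):                     # number of a with X^n + X^(n-1) + a irreducible
    return sum(is_irreducible([a] + [0] * (n - 2) + [1, 1], p, n)
               for a in range(p))
```

For example, over $\Bbb F_5$ exactly one value of $a$ makes $X^6+X^5+a$ irreducible, matching the first row of the $n=6$ table in Appendix A3.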
Let $$\begin{aligned} \label{Irr} \text{Irr}\,(q,n)=\,&\text{the number of irreducible polynomials}\\ &\text{of the form $X^n+X^{n-1}+a\in\Bbb F_q[X]$},\nonumber\end{aligned}$$ $$\begin{aligned} \label{Norm} \text{Norm}\,(q,n)=\,&\text{the number of normal polynomials}\\ &\text{of the form $X^n+X^{n-1}+a\in\Bbb F_q[X]$},\nonumber\end{aligned}$$ and for $n=6,7$, let $$\begin{aligned} \label{Suff} \text{Suff}\,(q,n)=\,&\text{the number of irreducible polynomials of the form}\\ &\text{$X^n+X^{n-1}+a\in\Bbb F_q[X]$ satisfying the conditions}\cr &\text{in Theorem~\ref{T3.1} or \ref{T4.1}}.\nonumber\end{aligned}$$

We computed these three numbers for $n=6,7$, $q\le 49$ and $\text{gcd}(q,n)=1$; see Appendix A3. In the cases that we computed, $\text{Irr}\,(q,n)=\text{Norm}\,(q,n)$ most of the time and $\text{Norm}\,(q,n)=\text{Suff}\,(q,n)$ all the time. However, since the three numbers in these cases are all quite small, it is difficult to tell whether they are indicative of a general pattern.

# Normal Polynomials of a Finite Galois Extension

Let $G=\{g_1,\dots,g_n\}$ be a finite group. To each $g\in G$ assign an indeterminate $X_g$. The *group determinant* of $G$ is $$\mathcal D_G(X_{g_1},\dots,X_{g_n})=\det(X_{g_ig_j^{-1}})\in\Bbb Z[X_{g_1},\dots,X_{g_n}],$$ where $(X_{g_ig_j^{-1}})$ is the $n\times n$ matrix whose $(i,j)$ entry is $X_{g_ig_j^{-1}}$. Note that when $G=\Bbb Z/n\Bbb Z$, $$\mathcal D_{\Bbb Z/n\Bbb Z}(X_0,\dots,X_{n-1})=\Delta_n(X_0,\dots,X_{n-1}).$$ Let $F$ be any field and $f$ be a monic, separable irreducible polynomial over $F$ such that $F(\alpha)=S_F(f)$, where $\alpha$ is any root of $f$ and $S_F(f)$ denotes the splitting field of $f$ over $F$. Let $G=\text{Aut}(S_F(f)/F)=\{g_1,\dots,g_n\}$ be the Galois group of $f$ over $F$. It is well known that $f$ is normal over $F$ if and only if $$\label{Dg1} \mathcal D_G(g_1(\alpha),\dots,g_n(\alpha))\ne 0.$$ (See [@Hou-preprint Lemma 3.2].)
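For $G=\Bbb Z/n\Bbb Z$, the matrix $(X_{g_ig_j^{-1}})$ is the circulant $(X_{(i-j)\bmod n})$, and its determinant factors as the product of the character sums $\sum_j\epsilon_n^{kj}X_j$ over $k\in\Bbb Z/n\Bbb Z$. A small numerical check in Python (illustrative only; floating-point arithmetic, and the function names are ours):

```python
import cmath

def group_det_cyclic(x):
    """det of the circulant (X_{(i-j) mod n}), i.e. D_{Z/nZ}(x_0,...,x_{n-1}),
    computed directly by cofactor expansion."""
    n = len(x)
    M = [[x[(i - j) % n] for j in range(n)] for i in range(n)]
    def det(A):
        if len(A) == 1:
            return A[0][0]
        return sum((-1) ** j * A[0][j]
                   * det([row[:j] + row[j + 1:] for row in A[1:]])
                   for j in range(len(A)))
    return det(M)

def character_product(x):
    """Product over k in Z/nZ of sum_j eps^(k*j) x_j, eps a primitive n-th root."""
    n = len(x)
    eps = cmath.exp(2j * cmath.pi / n)
    prod = 1
    for k in range(n):
        prod *= sum(x[j] * eps ** (k * j) for j in range(n))
    return prod
```

For $(x_0,x_1,x_2)=(1,2,3)$ both sides give $18$.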
Let $\rho_1,\dots,\rho_k$ be the irreducible representations of $G$ with $\deg\rho_i=m_i$. It is a fundamental fact of representation theory that $$\label{Dg2} \mathcal D_G=\prod_{i=1}^k\Bigl(\det\Bigl(\sum_{g\in G}X_g\rho_i(g)\Bigr)\Bigr)^{m_i}=\prod_{i=1}^k(\det\mathcal X_i)^{m_i},$$ where $$\label{Xi} \mathcal X_i=\sum_{g\in G}X_g\rho_i(g)\in\text{M}_{m_i\times m_i}(\Bbb C).$$ In fact, [\[Dg2\]](#Dg2){reference-type="eqref" reference="Dg2"} is the complete factorization of $\mathcal D_G$ in $\Bbb C[X_{g_1},\dots,X_{g_n}]$. For references on group determinants, see [@Formanek-Sibley-PAMS-1991; @Frobenius-1896a; @Frobenius-1896b; @Hawkins-1971; @Hawkins-1974; @Johnson-MPCPS-1991].

The elementary symmetric polynomials in $X_1,\dots,X_m$ can be expressed as polynomials in the power sums $p_j=X_1^j+\cdots+X_m^j$, $1\le j\le m$, with coefficients in $\Bbb Q$; see [@Macdonald-1995 (2.14)$'$]. In particular, $$\label{power-sum} X_1\cdots X_m=(-1)^m\sum_{1t_1+2t_2+\cdots+mt_m=m}\;\prod_{j=1}^m\frac{(-p_j)^{t_j}}{t_j!\,j^{t_j}}.$$ If $A$ is an $m\times m$ matrix over $\Bbb C$ with eigenvalues $\lambda_1,\dots,\lambda_m$, then $$p_j(\lambda_1,\dots,\lambda_m)=\lambda_1^j+\cdots+\lambda_m^j=\text{Tr}(A^j),$$ and hence by [\[power-sum\]](#power-sum){reference-type="eqref" reference="power-sum"}, $$\label{detA} \det A=\lambda_1\cdots\lambda_m=(-1)^m\sum_{1t_1+2t_2+\cdots+mt_m=m}\;\prod_{j=1}^m\frac{(-\text{Tr}(A^j))^{t_j}}{t_j!\,j^{t_j}}.$$ Therefore, $$\label{detX} \det \mathcal X_i=(-1)^{m_i}\sum_{1t_1+2t_2+\cdots+m_it_{m_i}=m_i}\;\prod_{j=1}^{m_i}\frac{(-\text{Tr}(\mathcal X_i^j))^{t_j}}{t_j!\,j^{t_j}}.$$ For $1\le j\le m_i$, $$\mathcal X_i^j=\sum_{g\in G}\Bigl(\sum_{\substack{h_1,\dots,h_j\in G\cr h_1\dots h_j=g}}X_{h_1}\cdots X_{h_j}\Bigr)\rho_i(g).$$ Let $\chi_i$ be the character of $\rho_i$.
Then $$\label{Tr} \text{Tr}(\mathcal X_i^j)=\sum_{g\in G}\Bigl(\sum_{h_1\dots h_j=g}X_{h_1}\cdots X_{h_j}\Bigr)\chi_i(g).$$ To summarize, $\mathcal D_G$ is given by [\[Dg2\]](#Dg2){reference-type="eqref" reference="Dg2"} in which $\det \mathcal X_i$ is given by [\[detX\]](#detX){reference-type="eqref" reference="detX"} and [\[Tr\]](#Tr){reference-type="eqref" reference="Tr"}. Let $$\label{Fi} F_i(X_{g_1},\dots,X_{g_n})=\det\mathcal X_i\in\Bbb C[X_{g_1},\dots,X_{g_n}].$$ By [\[detX\]](#detX){reference-type="eqref" reference="detX"} and [\[Tr\]](#Tr){reference-type="eqref" reference="Tr"}, the coefficients of $F_i$ are algebraic numbers; by [\[Xi\]](#Xi){reference-type="eqref" reference="Xi"}, $F_i$ is monic in $X_e$, where $e$ is the identity of $G$; and by [\[Dg2\]](#Dg2){reference-type="eqref" reference="Dg2"}, $\prod_{i=1}^kF_i^{m_i}=\mathcal D_G\in\Bbb Z[X_{g_1},\dots,X_{g_n}]$. From these facts it follows that the coefficients of $F_i$ are algebraic integers. Let $\frak o$ denote the ring of algebraic integers in $\Bbb C$ and let $E_i\in\frak o[X_{g_1},\dots,X_{g_n}]$ be a symmetrization of $F_i$. Let $H_i\in\frak o[s_1,\dots,s_n]$ be the symmetric reduction of $E_i$, i.e., $$\label{Ei} E_i(X_{g_1},\dots,X_{g_n})=H_i(s_1,\dots,s_n),$$ where $s_i$ is the $i$th elementary symmetric polynomial in $X_{g_1},\dots,X_{g_n}$. Then we have the following generalization of Theorem [Theorem 1](#T1.1){reference-type="ref" reference="T1.1"}:

**Theorem 7**. *Let $F$ be a field and $f=X^n+a_1X^{n-1}+\cdots+a_n$ be a separable irreducible polynomial over $F$ such that $F(\alpha)=S_F(f)$, where $\alpha$ is any root of $f$. Let $G=\text{\rm Aut}(S_F(f)/F)$. If $H_i(a_1,\dots,a_n)\ne0$ for all $1\le i\le k$, where $H_i$ is defined in [\[Ei\]](#Ei){reference-type="eqref" reference="Ei"}, then $f$ is normal over $F$.*

Since $H_i(s_1,\dots,s_n)\in\frak o[s_1,\dots,s_n]$, $H_i(a_1,\dots,a_n)$ is meaningful regardless of the characteristic of $F$.
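Formula [\[detA\]](#detA){reference-type="eqref" reference="detA"} can be exercised on an explicit matrix: the following Python sketch evaluates a determinant from the power traces $\text{Tr}(A^j)$ alone (illustrative only; the function name is ours).

```python
import math

def det_from_traces(A):
    """Evaluate det(A) from the power traces Tr(A^j), 1 <= j <= m, by summing
    over all tuples (t_1,...,t_m) with 1*t_1 + 2*t_2 + ... + m*t_m = m."""
    m = len(A)
    mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(m))
                         for j in range(m)] for i in range(m)]
    tr, P = [], A
    for _ in range(m):                   # Tr(A), Tr(A^2), ..., Tr(A^m)
        tr.append(sum(P[i][i] for i in range(m)))
        P = mul(P, A)
    total = 0.0
    def rec(j, rem, prod):               # enumerate the exponent tuples
        nonlocal total
        if j > m:
            if rem == 0:
                total += prod
            return
        t = 0
        while t * j <= rem:
            rec(j + 1, rem - t * j,
                prod * (-tr[j - 1]) ** t / (math.factorial(t) * j ** t))
            t += 1
    rec(1, m, 1.0)
    return (-1) ** m * total
```

For the matrix `[[2, 1], [1, 3]]` it returns `5.0`, its determinant.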
In Theorem [Theorem 7](#T5.1){reference-type="ref" reference="T5.1"}, the computation of $H_i$ depends only on the Galois group $G$. Hence, when $G$ is cyclic, the question is no different from the case of finite fields; see [@Hou-preprint Theorem 3.3]. We will further discuss Theorem [Theorem 7](#T5.1){reference-type="ref" reference="T5.1"} when $G$ is a noncyclic group of order $\le 8$. For this purpose, we need a technical lemma.

Let $m\mid n$ and partition $\Bbb Z/n\Bbb Z$ into blocks $I_i=\{i+mj:0\le j\le n/m-1\}$, $0\le i\le m-1$. Let $S_n$ be the permutation group of $\Bbb Z/n\Bbb Z$. The *wreath product* $S_m \kern1pt \wr S_{n/m}$ is the subgroup of $S_n$ consisting of all permutations that are obtained as follows: first permute the blocks $I_0,\dots, I_{m-1}$ using a permutation from $S_m$; then permute the elements in each block $I_i$ independently. More precisely, $$S_m\wr S_{n/m}=\{(\sigma;\sigma_0,\dots,\sigma_{m-1}):\sigma\in S_m, \sigma_i\in S_{n/m}, 0\le i\le m-1\},$$ where $(\sigma;\sigma_0,\dots,\sigma_{m-1})$ maps $i+mj$ to $\sigma(i)+m\sigma_i(j)$. A left transversal of $S_m\wr S_{n/m}$ in $S_n$ has been determined in [@Hou-preprint § 2]. Let $\mathcal P_{n,m}$ be the set of all unordered partitions of $\{0,1,\dots,n-1\}$ into $m$ parts of size $n/m$. For each $\{P_0,\dots,P_{m-1}\}\in\mathcal P_{n,m}$, choose a permutation $\phi_{\{P_0,\dots,P_{m-1}\}}\in S_n$ which maps $I_i$ to $P_i$, $0\le i\le m-1$. Then $$\label{Enm} \mathcal E_{n,m}:=\{\phi_{\{P_0,\dots,P_{m-1}\}}:\{P_0,\dots,P_{m-1}\}\in\mathcal P_{n,m}\}$$ is a left transversal of $S_m\wr S_{n/m}$ in $S_n$.

**Lemma 8**. *Let $m\mid n$ with $n\ge 3$. Let $A(Y_0,\dots,Y_{m-1})\in\Bbb C[Y_0,\dots,Y_{m-1}]$ be such that for each $\sigma\in S_m$, $$\label{+-A} A(Y_{\sigma(0)},\dots,Y_{\sigma(m-1)})=\pm A(Y_0,\dots,Y_{m-1}).$$ Let $B(Z_0,\dots,Z_{n/m-1})\in\Bbb C[Z_0,\dots,Z_{n/m-1}]$ be symmetric.
Let $$B_i=B(X_i,X_{i+m},\dots,X_{i+(n/m-1)m}),\quad 0\le i\le m-1,$$ and $$F=A(B_0,\dots,B_{m-1})\in\Bbb C[X_0,\dots,X_{n-1}].$$ Further assume that $$\label{FX0} \deg_{X_0}F(X_0,0,\dots,0)>0.$$ Then $$\prod_{\phi\in\mathcal E_{n,m}}\phi(F)\in\Bbb C[X_0,\dots,X_{n-1}]$$ is symmetric, where $\mathcal E_{n,m}$ is the left transversal of $S_m\wr S_{n/m}$ in $S_n$ given in [\[Enm\]](#Enm){reference-type="eqref" reference="Enm"}.*

*Proof.* For each $\sigma\in S_n$, $\{\sigma\phi:\phi\in\mathcal E_{n,m}\}$ is also a left transversal of $S_m\wr S_{n/m}$ in $S_n$, whence $\{\sigma\phi:\phi\in\mathcal E_{n,m}\}=\{\phi\sigma_\phi:\phi\in\mathcal E_{n,m}\}$, where each $\sigma_\phi\in S_m\wr S_{n/m}$. Therefore, $$\sigma\Bigl(\prod_{\phi\in\mathcal E_{n,m}}\phi(F)\Bigr)=\prod_{\phi\in\mathcal E_{n,m}}\phi(\sigma_\phi(F)).$$ By assumption [\[+-A\]](#+-A){reference-type="eqref" reference="+-A"} and the symmetry of $B$, $\sigma_\phi(F)=\pm F$. Thus $$\label{sign} \sigma\Bigl(\prod_{\phi\in\mathcal E_{n,m}}\phi(F)\Bigr)=\pm\prod_{\phi\in\mathcal E_{n,m}}\phi(F).$$ Since $S_n$ is generated by transpositions and all transpositions are conjugate in $S_n$, to prove that $\prod_{\phi\in\mathcal E_{n,m}}\phi(F)$ is symmetric, it suffices to show that [\[sign\]](#sign){reference-type="eqref" reference="sign"} holds with a $+$ sign for *some* transposition $\sigma$. By [\[FX0\]](#FX0){reference-type="eqref" reference="FX0"}, $$F(X_0,\dots,X_{n-1})=a_0X_0^t+\cdots+a_{n-1}X_{n-1}^t+\text{other terms},$$ where $t>0$, $a_0\cdots a_{n-1}\ne0$ and the "other terms" are those of the form $X_i^s$, $s<t$, and those involving at least two variables. It follows that $$\prod_{\phi\in\mathcal E_{n,m}}\phi(F)=a_0'X_0^{t'}+\cdots+a_{n-1}'X_{n-1}^{t'}+\text{other terms},$$ where $t'>0$ and $a_0'\cdots a_{n-1}'\ne0$. Assume that $$\sigma\Bigl(\prod_{\phi\in\mathcal E_{n,m}}\phi(F)\Bigr)=-\prod_{\phi\in\mathcal E_{n,m}}\phi(F)$$ for $\sigma=(0,1)$ and $(0,2)$. Then $a_1'=-a_0'=a_2'$.
Thus, when $\sigma=(1,2)$, we must have $$\sigma\Bigl(\prod_{\phi\in\mathcal E_{n,m}}\phi(F)\Bigr)=\prod_{\phi\in\mathcal E_{n,m}}\phi(F).$$ ◻

Now we take a closer look at Theorem [Theorem 7](#T5.1){reference-type="ref" reference="T5.1"} when $G$ is a noncyclic group of order $\le 8$. The character tables of these groups are taken from [@James-Liebeck-2001]. In each case, we compute $F_i$ in [\[Fi\]](#Fi){reference-type="eqref" reference="Fi"} and its symmetrization $E_i$. Whenever possible, we also compute the symmetric reduction $H_i$ of $E_i$.

$G=(\Bbb Z/2\Bbb Z)^2$. Table [\[Tb1\]](#Tb1){reference-type="ref" reference="Tb1"} is the character table of $(\Bbb Z/2\Bbb Z)^2$. In general, in the character table of a finite group $G$, the entries of the first row are the representatives of the conjugacy classes of $G$, and the entries of the second row are the sizes of the conjugacy classes. $\chi_1,\dots,\chi_k$ are the irreducible characters, $\chi_1$ is the principal character, and $\deg\chi_i=\chi_i(e)$, where $e$ is the identity of $G$. We always have $F_1=\sum_{g\in G}X_g$, $E_1=F_1$ and $H_1=s_1$. Hence we will ignore $F_1$. $$\renewcommand*{\arraystretch}{1.2} \begin{tabular}{c|cccc} \hline conj. cls. rep. & (0,0) & (0,1) & (1,0) & (1,1) \\ conj. cls. size & 1 & 1& 1& 1\\ \hline $\chi_1$ & $1$ & $1$ & $1$ & $1$ \\ $\chi_2$ & $1$ & $-1$ & $1$ & $-1$ \\ $\chi_3$ & $1$ & $1$ & $-1$ & $-1$ \\ $\chi_4$ & $1$ & $-1$ & $-1$ & $1$ \\ \hline \end{tabular}$$ We have $$\begin{aligned} F_2\,&=\sum_{(i,j)\in G}(-1)^jX_{(i,j)},\cr F_3\,&=\sum_{(i,j)\in G}(-1)^iX_{(i,j)},\cr F_4\,&=\sum_{(i,j)\in G}(-1)^{i+j}X_{(i,j)}.\end{aligned}$$ Since $F_2,F_3,F_4$ are in the same $S_4$-orbit, they have the same symmetrizations. We relabel the variables $X_g$ by renaming the group elements $g$: $$((0,0),(1,0),(0,1),(1,1))\to (0,2,1,3).$$ Then $F_2=X_0+X_2-(X_1+X_3)$.
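For this $F_2$, the conclusion of Lemma [Lemma 8](#L5.2){reference-type="ref" reference="L5.2"} can be verified numerically: $\mathcal E_{4,2}$ contains one representative for each of the three partitions of $\{0,1,2,3\}$ into two pairs, and the product of the translates of $F_2$ takes the same value at every permutation of a test point. A Python sketch (illustrative only; the names are ours):

```python
from itertools import permutations

# F_2 = X_0 + X_2 - (X_1 + X_3); blocks I_0 = {0, 2}, I_1 = {1, 3} (n = 4, m = 2)
def act(phi, x):
    # phi(F_2): substitute X_k -> X_{phi(k)} in F_2
    return x[phi[0]] + x[phi[2]] - (x[phi[1]] + x[phi[3]])

# E_{4,2}: one coset representative per partition of {0,1,2,3} into two pairs,
# each mapping the blocks I_0, I_1 onto the two parts of the partition
E42 = [(0, 1, 2, 3),   # parts {0,2}, {1,3} (identity)
       (0, 2, 1, 3),   # parts {0,1}, {2,3}
       (0, 1, 3, 2)]   # parts {0,3}, {1,2}

def E2(x):
    prod = 1
    for phi in E42:
        prod *= act(phi, x)
    return prod

# symmetry: E2 takes a single value on the whole S_4-orbit of a test point
point = (3, 1, 4, 7)
values = {E2([point[s[k]] for k in range(4)]) for s in permutations(range(4))}
```

At the point $(3,1,4,7)$ all $24$ permutations give the value $35$.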
Using Lemma [Lemma 8](#L5.2){reference-type="ref" reference="L5.2"} (with $A=Y_0-Y_1$, $B=Z_0+Z_1$), we may choose $$E_2=\prod_{\phi\in\mathcal E_{4,2}}\phi(F_2)=(X_0+X_1-X_2-X_3)(X_0+X_2-X_1-X_3)(X_0+X_3-X_1-X_2).$$ The symmetric reduction of $E_2$ is $$H_2=s_1^3-4s_1s_2+8s_3.$$

$G=S_3$. $$\renewcommand*{\arraystretch}{1.2} \begin{tabular}{c|ccc} \hline conj. cls. rep. & (1) & (12) & (123) \\ conj. cls. size & 1 & 3& 2\\ \hline $\chi_1$ & $1$ & $1$ & $1$ \\ $\chi_2$ & $1$ & $-1$ & $1$ \\ $\chi_3$ & $2$ & $0$ & $-1$ \\ \hline \end{tabular}$$ Let $$\begin{aligned} G_1\,&=\{(1),(123),(132)\}=A_3,\cr G_2\,&=\{(12),(23),(13)\}=S_3\setminus A_3.\end{aligned}$$ We have $$F_2=\sum_{g\in G_1}X_g-\sum_{g\in G_2}X_g.$$ To compute $F_3$, we first find $\text{Tr}(\mathcal X_3^1)$ and $\text{Tr}(\mathcal X_3^2)$: $$\text{Tr}(\mathcal X_3^1)=2X_{(1)}-X_{(123)}-X_{(132)},$$ $$\begin{aligned} &\text{Tr}(\mathcal X_3^2)=\sum_{g\in G}\Bigl(\sum_{h_1h_2=g}X_{h_1}X_{h_2}\Bigr)\chi_3(g)\cr &=2(X_{(1)}^2+X_{(12)}^2+X_{(23)}^2+X_{(13)}^2)-X_{(123)}^2-X_{(132)}^2-2(X_{(1)}X_{(123)}+X_{(1)}X_{(132)})\cr &\kern1.1em -2(X_{(12)}X_{(13)}+X_{(12)}X_{(23)}+X_{(13)}X_{(23)})+4X_{(123)}X_{(132)}.\end{aligned}$$ Then $$\begin{aligned} F_3\,&=\frac{(-\text{Tr}(\mathcal X_3^1))^2}{2!1^2}+\frac{(-\text{Tr}(\mathcal X_3^2))^1}{1!2^1}\cr &=\sum_{g\in G_1}X_g^2-\sum_{g\in G_2}X_g^2-\sum_{\substack{\{h_1,h_2\}\subset G_1\cr h_1\ne h_2}}X_{h_1}X_{h_2}+\sum_{\substack{\{h_1,h_2\}\subset G_2\cr h_1\ne h_2}}X_{h_1}X_{h_2}.\end{aligned}$$ Rename the elements of $G$ such that $G_1=\{0,2,4\}$ and $G_2=\{1,3,5\}$.
Then $$F_2=X_0+X_2+X_4-(X_1+X_3+X_5),$$ $$\begin{aligned} F_3=\,&X_0^2+X_2^2+X_4^2-X_0X_2-X_2X_4-X_0X_4\cr &-(X_1^2+X_3^2+X_5^2-X_1X_3-X_3X_5-X_1X_5).\end{aligned}$$ By Lemma [Lemma 8](#L5.2){reference-type="ref" reference="L5.2"}, $$E_2=\prod_{\phi\in\mathcal E_{6,2}}\phi(F_2)=\prod_{(i_1,\dots,i_5)}(X_0+X_{i_2}+X_{i_4}-X_{i_1}-X_{i_3}-X_{i_5})$$ and $$E_3=\prod_{\phi\in\mathcal E_{6,2}}\phi(F_3)=\prod_{(i_1,\dots,i_5)}F_3(X_0,X_{i_1},\dots,X_{i_5}),$$ where the range of $(i_1,\dots,i_5)$ is given in [\[i1-i5\]](#i1-i5){reference-type="eqref" reference="i1-i5"}. The symmetric reductions of $E_2$ and $E_3$ are $$\parbox[t]{\textwidth}{\raggedright$ H_2=s_1^{10}-12 s_1^8 s_2+24 s_1^7 s_3+48 s_1^6 s_2^2-16 s_1^6 s_4-192 s_1^5 s_2 s_3-32 s_1^5 s_5-64 s_1^4 s_2^3+128 s_1^4 s_2 s_4+192 s_1^4 s_3^2-320 s_1^4 s_6+384 s_1^3 s_2^2 s_3+256 s_1^3 s_2 s_5-256 s_1^3 s_3 s_4-256 s_1^2 s_2^2 s_4-768 s_1^2 s_2 s_3^2+1536 s_1^2 s_2 s_6-256 s_1^2 s_3 s_5-512 s_1 s_2^2 s_5+1024 s_1 s_2 s_3 s_4+512 s_1 s_3^3-2048 s_1 s_3 s_6-1024 s_2^2 s_6+1024 s_2 s_3 s_5-1024 s_3^2 s_4+4096 s_4 s_6-1024 s_5^2 $}$$ and $$\parbox[t]{\textwidth}{\raggedright$ H_3= s_1^{20}-24 s_2 s_1^{18}+33 s_3 s_1^{17}+246 s_2^2 s_1^{16}-7 s_4 s_1^{16}-669 s_2 s_3 s_1^{15}+46 s_5 s_1^{15}-1396 s_2^3 s_1^{14}+444 s_3^2 s_1^{14}+116 s_2 s_4 s_1^{14}-125 s_6 s_1^{14}+5631 s_2^2 s_3 s_1^{13}-127 s_3 s_4 s_1^{13}-803 s_2 s_5 s_1^{13}+4737 s_2^4 s_1^{12}-7383 s_2 s_3^2 s_1^{12}-66 s_4^2 s_1^{12}-763 s_2^2 s_4 s_1^{12}+1151 s_3 s_5 s_1^{12}+2100 s_2 s_6 s_1^{12}+3113 s_3^3 s_1^{11}-25191 s_2^3 s_3 s_1^{11}+1600 s_2 s_3 s_4 s_1^{11}+5593 s_2^2 s_5 s_1^{11}-129 s_4 s_5 s_1^{11}-2900 s_3 s_6 s_1^{11}-9612 s_2^5 s_1^{10}+48942 s_2^2 s_3^2 s_1^{10}+1002 s_2 s_4^2 s_1^{10}+356 s_5^2 s_1^{10}+2490 s_2^3 s_4 s_1^{10}-631 s_3^2 s_4 s_1^{10}-15917 s_2 s_3 s_5 s_1^{10}-13960 s_2^2 s_6 s_1^{10}+1000 s_4 s_6 s_1^{10}-40713 s_2 s_3^3 s_1^9-1779 s_3 s_4^2 s_1^9+63180 s_2^4 s_3 s_1^9-7473 s_2^2 s_3 s_4 s_1^9-19434 s_2^3 s_5 s_1^9+11172 
s_3^2 s_5 s_1^9+1218 s_2 s_4 s_5 s_1^9+37680 s_2 s_3 s_6 s_1^9-3000 s_5 s_6 s_1^9+10800 s_2^6 s_1^8+11988 s_3^4 s_1^8+477 s_4^3 s_1^8-161703 s_2^3 s_3^2 s_1^8-5544 s_2^2 s_4^2 s_1^8-3672 s_2 s_5^2 s_1^8-4032 s_2^4 s_4 s_1^8+5238 s_2 s_3^2 s_4 s_1^8+82431 s_2^2 s_3 s_5 s_1^8-621 s_3 s_4 s_5 s_1^8+45792 s_2^3 s_6 s_1^8-25110 s_3^2 s_6 s_1^8-10800 s_2 s_4 s_6 s_1^8+199098 s_2^2 s_3^3 s_1^7+18603 s_2 s_3 s_4^2 s_1^7+4374 s_3 s_5^2 s_1^7-84240 s_2^5 s_3 s_1^7+567 s_3^3 s_4 s_1^7+15336 s_2^3 s_3 s_4 s_1^7+33696 s_2^4 s_5 s_1^7-115263 s_2 s_3^2 s_5 s_1^7-2943 s_4^2 s_5 s_1^7-3780 s_2^2 s_4 s_5 s_1^7-180144 s_2^2 s_3 s_6 s_1^7+16200 s_3 s_4 s_6 s_1^7+32400 s_2 s_5 s_6 s_1^7-5184 s_2^7 s_1^6-115425 s_2 s_3^4 s_1^6-3402 s_2 s_4^3 s_1^6+266328 s_2^4 s_3^2 s_1^6+13284 s_2^3 s_4^2 s_1^6-15552 s_3^2 s_4^2 s_1^6+12636 s_2^2 s_5^2 s_1^6+4779 s_4 s_5^2 s_1^6+2592 s_2^5 s_4 s_1^6-13932 s_2^2 s_3^2 s_4 s_1^6+52974 s_3^3 s_5 s_1^6-189540 s_2^3 s_3 s_5 s_1^6+2754 s_2 s_3 s_4 s_5 s_1^6-73872 s_2^4 s_6 s_1^6+231336 s_2 s_3^2 s_6 s_1^6+38880 s_2^2 s_4 s_6 s_1^6-48600 s_3 s_5 s_6 s_1^6+24057 s_3^5 s_1^5-431568 s_2^3 s_3^3 s_1^5+5589 s_3 s_4^3 s_1^5-729 s_5^3 s_1^5-63180 s_2^2 s_3 s_4^2 s_1^5-29889 s_2 s_3 s_5^2 s_1^5+46656 s_2^6 s_3 s_1^5-7290 s_2 s_3^3 s_4 s_1^5-11664 s_2^4 s_3 s_4 s_1^5-23328 s_2^5 s_5 s_1^5+396576 s_2^2 s_3^2 s_5 s_1^5+20898 s_2 s_4^2 s_5 s_1^5+3888 s_2^3 s_4 s_5 s_1^5+5103 s_3^2 s_4 s_5 s_1^5-96228 s_3^3 s_6 s_1^5+373248 s_2^3 s_3 s_6 s_1^5-116640 s_2 s_3 s_4 s_6 s_1^5-116640 s_2^2 s_5 s_6 s_1^5+369603 s_2^2 s_3^4 s_1^4-729 s_4^4 s_1^4+5832 s_2^2 s_4^3 s_1^4-174960 s_2^5 s_3^2 s_1^4-11664 s_2^4 s_4^2 s_1^4+99873 s_2 s_3^2 s_4^2 s_1^4-14580 s_2^3 s_5^2 s_1^4+15309 s_3^2 s_5^2 s_1^4-33534 s_2 s_4 s_5^2 s_1^4+10935 s_3^4 s_4 s_1^4+11664 s_2^3 s_3^2 s_4 s_1^4-365229 s_2 s_3^3 s_5 s_1^4-35721 s_3 s_4^2 s_5 s_1^4+163296 s_2^4 s_3 s_5 s_1^4-2916 s_2^2 s_3 s_4 s_5 s_1^4+46656 s_2^5 s_6 s_1^4-682344 s_2^2 s_3^2 s_6 s_1^4-46656 s_2^3 s_4 s_6 s_1^4+87480 s_3^2 s_4 s_6 
s_1^4+349920 s_2 s_3 s_5 s_6 s_1^4-150903 s_2 s_3^5 s_1^3+349920 s_2^4 s_3^3 s_1^3- 17496 s_2 s_3 s_4^3 s_1^3+4374 s_2 s_5^3 s_1^3-52488 s_3^3 s_4^2 s_1^3+69984 s_2^3 s_3 s_4^2 s_1^3+52488 s_2^2 s_3 s_5^2 s_1^3+63423 s_3 s_4 s_5^2 s_1^3+17496 s_2^2 s_3^3 s_4 s_1^3+124659 s_3^4 s_5 s_1^3+8748 s_4^3 s_5 s_1^3-454896 s_2^3 s_3^2 s_5 s_1^3-34992 s_2^2 s_4^2 s_5 s_1^3-17496 s_2 s_3^2 s_4 s_5 s_1^3+524880 s_2 s_3^3 s_6 s_1^3-279936 s_2^4 s_3 s_6 s_1^3+ 209952 s_2^2 s_3 s_4 s_6 s_1^3+139968 s_2^3 s_5 s_6 s_1^3-262440 s_3^2 s_5 s_6 s_1^3+19683 s_3^6 s_1^2-393660 s_2^3 s_3^4 s_1^2+13122 s_3^2 s_4^3 s_1^2-19683 s_3 s_5^3 s_1^2-157464 s_2^2 s_3^2 s_4^2 s_1^2-59049 s_2 s_3^2 s_5^2 s_1^2-39366 s_4^2 s_5^2 s_1^2+52488 s_2^2 s_4 s_5^2 s_1^2-39366 s_2 s_3^4 s_4 s_1^2+629856 s_2^2 s_3^3 s_5 s_1^2+104976 s_2 s_3 s_4^2 s_5 s_1^2+19683 s_3^3 s_4 s_5 s_1^2-137781 s_3^4 s_6 s_1^2+629856 s_2^3 s_3^2 s_6 s_1^2-314928 s_2 s_3^2 s_4 s_6 s_1^2-629856 s_2^2 s_3 s_5 s_6 s_1^2+236196 s_2^2 s_3^5 s_1+78732 s_4 s_5^3 s_1+157464 s_2 s_3^3 s_4^2 s_1+19683 s_3^3 s_5^2 s_1-157464 s_2 s_3 s_4 s_5^2 s_1+19683 s_3^5 s_4 s_1-433026 s_2 s_3^4 s_5 s_1-78732 s_3^2 s_4^2 s_5 s_1-629856 s_2^2 s_3^3 s_6 s_1+157464 s_3^3 s_4 s_6 s_1+944784 s_2 s_3^2 s_5 s_6 s_1-59049 s_2 s_3^6-59049 s_5^4-59049 s_3^4 s_4^2+118098 s_3^2 s_4 s_5^2+118098 s_3^5 s_5+236196 s_2 s_3^4 s_6-472392 s_3^3 s_5 s_6. $}$$ $G=D_4=\langle a,b\mid a^4=b^2=1,\ b^{-1}ab=a^{-1}\rangle$. $$\renewcommand*{\arraystretch}{1.2} \begin{tabular}{c|ccccc} \hline conj. cls. rep. & $1$ & $a^2$ & $a$ & $b$ & $ab$ \\ conj. cls. 
size & 1 & 1& 2& 2& 2\\ \hline $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ $\chi_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$ \\ $\chi_3$ & $1$ & $1$ & $-1$ & $1$ & $-1$ \\ $\chi_4$ & $1$ & $1$ & $-1$ & $-1$ & $1$ \\ $\chi_5$ & $2$ & $-2$ & $0$ & $0$ & $0$ \\ \hline \end{tabular}$$ We have $$F_2=\sum_{g\in\langle a\rangle}X_g-\sum_{g\in\langle a\rangle b}X_g,$$ $$F_3=\sum_{g\in\langle a^2,b\rangle}X_g-\sum_{g\in a\langle a^2,b\rangle}X_g,$$ $$F_4=\sum_{g\in\langle a^2,ab\rangle}X_g-\sum_{g\in a\langle a^2,ab\rangle}X_g,$$ $$F_5=\sum_{g\in\langle a\rangle}X_g^2-\sum_{g\in\langle a\rangle b}X_g^2-2X_1X_{a^2}-2X_aX_{a^3}+2X_bX_{a^2b}+2X_{ab}X_{a^3b}.$$ $F_2,F_3,F_4$ are in the same $S_8$-orbit and hence have the same symmetrization. Rename the elements of $G$ as $$(1,a,a^2,a^3,b,ab,a^2b,a^3b)\to (0,2,4,6,1,3,5,7).$$ Then $$F_2=X_0+X_2+X_4+X_6-(X_1+X_3+X_5+X_7),$$ $$F_5=(X_0-X_4)^2+(X_2-X_6)^2-(X_1-X_5)^2-(X_3-X_7)^2.$$ By Lemma [Lemma 8](#L5.2){reference-type="ref" reference="L5.2"}, $$E_2=\prod_{\phi\in\mathcal E_{8,2}}\phi(F_2).$$ Let $B_i=(X_i-X_{i+4})^2$, $0\le i\le 3$. Then $F_5=B_0+B_2-B_1-B_3$. Let $$F_5'=(B_0+B_2-B_1-B_3)(B_1+B_2-B_0-B_3)(B_3+B_2-B_1-B_0),$$ which is symmetric in $B_0,B_1,B_2,B_3$. By Lemma [Lemma 8](#L5.2){reference-type="ref" reference="L5.2"}, $$E_5:=\prod_{\phi\in\mathcal E_{8,4}}\phi(F_5')$$ is a symmetrization of $F_5$. $E_2$ is a product of 35 homogeneous linear polynomials in 8 variables and $E_5$ is a product of 315 homogeneous quadratic polynomials in 8 variables. The computation of $H_2$ (the symmetric reduction of $E_2$) is impractical and the computation of $H_5$ (the symmetric reduction of $E_5$) is impossible.

$G=Q_8=\langle a,b\mid a^4=1,\ b^2=a^2,\ b^{-1}ab=a^{-1}\rangle$. $$\renewcommand*{\arraystretch}{1.2} \begin{tabular}{c|ccccc} \hline conj. cls. rep. & $1$ & $a^2$ & $a$ & $b$ & $ab$ \\ conj. cls.
size & 1 & 1& 2& 2& 2\\ \hline $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ $\chi_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$ \\ $\chi_3$ & $1$ & $1$ & $-1$ & $1$ & $-1$ \\ $\chi_4$ & $1$ & $1$ & $-1$ & $-1$ & $1$ \\ $\chi_5$ & $2$ & $-2$ & $0$ & $0$ & $0$ \\ \hline \end{tabular}$$ We have $$F_2=\sum_{g\in\langle a\rangle}X_g-\sum_{g\in\langle a\rangle b}X_g,$$ $$F_3=\sum_{g\in\langle a^2,b\rangle}X_g-\sum_{g\in a\langle a^2,b\rangle}X_g,$$ $$F_4=\sum_{g\in\langle a^2,ab\rangle}X_g-\sum_{g\in a\langle a^2,ab\rangle}X_g,$$ $$F_5=(X_1-X_{a^2})^2+(X_a-X_{a^3})^2+(X_b-X_{a^2b})^2+(X_{ab}-X_{a^3b})^2.$$ As in Case 3, rename the elements of $G$ as $$(1,a,a^2,a^3,b,ab,a^2b,a^3b)\to (0,2,4,6,1,3,5,7).$$ $F_2,F_3,F_4$ are in the same $S_8$-orbit and $E_2$ is the same as in Case 3. We have $$F_5=(X_0-X_4)^2+(X_1-X_5)^2+(X_2-X_6)^2+(X_3-X_7)^2.$$ By Lemma [Lemma 8](#L5.2){reference-type="ref" reference="L5.2"} (with $B_i=(X_i-X_{i+4})^2$, $0\le i\le 3$), $$E_5:=\prod_{\phi\in\mathcal E_{8,4}}\phi(F_5)$$ is a symmetrization of $F_5$. This is a product of 105 homogeneous quadratic polynomials in 8 variables. We are unable to compute the symmetric reductions $H_2$ and $H_5$ of $E_2$ and $E_5$.

$G=(\Bbb Z/2\Bbb Z)^3$. $$\renewcommand*{\arraystretch}{1.2} \begin{tabular}{c|cccccccc} \hline conj. cls. rep. & $(000)$ & $(001)$ & $(010)$ & $(011)$ & $(100)$ & $(101)$ & $(110)$ & $(111)$ \\ conj. cls.
size & 1 & 1& 1 & 1 & 1 & 1 & 1 & 1\\ \hline $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ $\chi_2$ & $1$ & $-1$ & $1$ & $-1$ & $1$ & $-1$ & $1$ & $-1$ \\ $\chi_3$ & $1$ & $1$ & $-1$ & $-1$ & $1$ & $1$ & $-1$ & $-1$ \\ $\chi_4$ & $1$ & $-1$ & $-1$ & $1$ & $1$ & $-1$ & $-1$ & $1$ \\ $\chi_5$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\ $\chi_6$ & $1$ & $-1$ & $1$ & $-1$ & $-1$ & $1$ & $-1$ & $1$ \\ $\chi_7$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ & $1$ & $1$ \\ $\chi_8$ & $1$ & $-1$ & $-1$ & $1$ & $-1$ & $1$ & $1$ & $-1$ \\ \hline \end{tabular}$$ In this case, $F_2,\dots,F_8$ are in the same $S_8$-orbit. $F_2$ and $E_2$ are the same as those in Case 3. $G=(\Bbb Z/2\Bbb Z)\times(\Bbb Z/4\Bbb Z)$. $$\renewcommand*{\arraystretch}{1.2} \begin{tabular}{c|cccccccc} \hline conj. cls. rep. & $(00)$ & $(01)$ & $(02)$ & $(03)$ & $(10)$ & $(11)$ & $(12)$ & $(13)$ \\ conj. cls. size & 1 & 1& 1 & 1 & 1 & 1 & 1 & 1\\ \hline $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ $\chi_2$ & $1$ & $-1$ & $1$ & $-1$ & $1$ & $-1$ & $1$ & $-1$ \\ $\chi_3$ & $1$ & $i$ & $-1$ & $-i$ & $1$ & $i$ & $-1$ & $-i$ \\ $\chi_4$ & $1$ & $-i$ & $-1$ & $i$ & $1$ & $-i$ & $-1$ & $i$ \\ $\chi_5$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\ $\chi_6$ & $1$ & $-1$ & $1$ & $-1$ & $-1$ & $1$ & $-1$ & $1$ \\ $\chi_7$ & $1$ & $i$ & $-1$ & $-i$ & $-1$ & $-i$ & $1$ & $i$ \\ $\chi_8$ & $1$ & $-i$ & $-1$ & $i$ & $-1$ & $i$ & $1$ & $-i$ \\ \hline \end{tabular}$$ In this case, $F_2,F_5,F_6$ are in the same $S_8$-orbit, $F_3,F_4,F_7,F_8$ are in the same $S_8$-orbit, and $F_2$ and $E_2$ are the same as those in Case 3. Rename the element $(a,b)\in G$ as $4a+b$ and let $B_i=X_i+X_{i+4}$, $0\le i\le 3$. Then $$F_3F_4=(B_0-B_2)^2+(B_1-B_3)^2.$$ Let $$F_3'=\bigl((B_0-B_2)^2+(B_1-B_3)^2\bigr)\bigl((B_1-B_2)^2+(B_0-B_3)^2\bigr)\bigl((B_3-B_2)^2+(B_1-B_0)^2\bigr),$$ which is symmetric in $B_0,B_1,B_2,B_3$. 
By Lemma [Lemma 8](#L5.2){reference-type="ref" reference="L5.2"}, $$E_3:=\prod_{\phi\in\mathcal E_{8,4}}\phi(F_3')$$ is a symmetrization of $F_3$. Its symmetric reduction is beyond reach.

# Appendix {#appendix .unnumbered}

A1. The expression of $\Theta_{7,7}$. Recall from [\[Psi\]](#Psi){reference-type="eqref" reference="Psi"} that $$\Psi_7(X_0,\dots,X_6)=\prod_{i\in(\Bbb Z/7\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/7\Bbb Z}\epsilon_7^{ij}X_j\Bigr)\in\Bbb Z[X_0,\dots,X_6].$$ The expansion of $\Psi_7$ is lengthy and is not included here. However, it can be easily generated by the following Mathematica code:

```mathematica
Array[x, 7, 0];
psi7 = 1;
For[i = 1, i < 7, i++,
  psi7 = psi7*Sum[e^(i*j) x[j], {j, 0, 6}];
];
psi7 = PolynomialMod[psi7, Cyclotomic[7, e]];
```

$\Theta_{7,7}$ is the symmetrization of $\Psi_7$. By equation (2.8) of [@Hou-preprint], $$\Theta_{7,7}=\prod_{(i_2,\dots,i_6)}\Psi_7(X_0,X_1,X_{i_2},X_{i_3},X_{i_4},X_{i_5},X_{i_6}),$$ where $(i_2,\dots,i_6)$ runs through all permutations of $2,\dots,6$.

A2. $\theta_{6,6}(1,0,\dots,0,a)$ in characteristic $p$, $5\le p\le 97$.
$$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|l} \hline $p$ & \hfil$\theta_{6,6}(1,0,\dots,0,a)$\hfil \\ \hline 5 & $3(a^2+4a+2)^5$ \\ 7 & $3 (a^{14}+6 a^{13}+2 a^{12}+5 a^{11}+6 a^{10}+5 a^8+6 a^7+a^5+6 a^4+2 a^2+5 a+5) $\\ 11 & $4 (a^{19}+3 a^{18}+10 a^{17}+5 a^{16}+7 a^{15}+4 a^{14}+4 a^{13}+6 a^{12}+8 a^{11}+10 a^{10} $\\ & $+4 a^9+10 a^8+7 a^7+5 a^6+5 a^5+8 a^4+2 a^3+a^2+2 a+3) $\\ 13& $7 (a+12) (a^{16}+4 a^{15}+7 a^{14}+5 a^{13}+2 a^{12}+5 a^{11}+9 a^{10}+5 a^9$\\ &$+8 a^8+12 a^7+a^6+11 a^5+8 a^4+10 a^3+3 a^2+10 a+11 )$\\ 17& $10 (a^3+4 a^2+13 a+8 ) (a^3+7 a^2+11 a+8 ) (a^3+14 a^2+4 a+1 )$\\ & $(a^{10}+14 a^9+2 a^8+6 a^7+6 a^6+2 a^5+2 a^4+5 a^3+7 a^2+8 a+14 )$\\ 19& $4 (a+15) (a^7+17 a^6+6 a^5+10 a^4+8 a^3+9 a^2+7 a+17 )$\\ &$ (a^{11}+12 a^{10}+18 a^9+4 a^8+3 a^7+17 a^6+17 a^5+12 a^4+a^3+18 a^2+3 )$\\ 23& $11 (a^2+12 a+3 ) (a^5+4 a^4+19 a^3+18 a^2+15 a+14 )(a^{12}+12 a^{11}+18 a^{10}+10 a^9$\\ &$+10 a^8+18 a^7+5 a^6+2 a^5+3 a^4+22 a^3+4 a^2+19 a+12 )$\\ 29& $28 (a^{19}+13 a^{18}+10 a^{17}+15 a^{16}+20 a^{15}+18 a^{14}+13 a^{13}+a^{12}+23 a^{11}+22 a^{10}$\\ &$+10 a^9+20 a^8+2 a^7+11 a^6+21 a^5+24 a^4+25 a^3+7 a^2+19 a+28 )$\\ 31& $(a+19) (a^5+5 a^4+18 a^3+13 a^2+4 a+16 ) (a^5+18 a^4+12 a^3+7 a^2+9 a+9 ) $\\ &$(a^8+27 a^7+6 a^6+27 a^5+25 a^4+20 a^3+9 a^2+21 a+4 )$\\ 37& $5 (a^2+7 a+9 ) (a^4+2 a^3+35 a^2+1 ) (a^{13}+4 a^{12}+31 a^{11}+4 a^{10}+26 a^9+26 a^8$\\ &$+5 a^7+5 a^6+3 a^5+a^4+16 a^3+33 a^2+36 a+14 )$\\ 41& $32 (a^3+33 a^2+9 a+31 ) (a^6+2 a^5+27 a^4+24 a^3+9 a^2+8 a+19 )$\\ &$ (a^{10}+8 a^9+17 a^8+28 a^7+33 a^6+23 a^5+6 a^4+37 a^3+26 a^2+39 a+17 )$\\ 43& $42 (a^2+a+21 ) (a^2+17 a+35 ) (a^2+37 a+18 ) (a^3+7 a^2+21 a+35 )$\\ &$ (a^4+39 a^3+5 a^2+24 a+30 ) (a^6+4 a^5+28 a^4+9 a^3+34 a^2+9 a+7 )$\\ 47& $15 (a^3+19 a^2+13 a+8 ) (a^{16}+41 a^{15}+8 a^{14}+9 a^{13}+21 a^{12}+12 a^{11}+10 a^{10}$\\ &$+17 a^9+45 a^8+42 a^7+43 a^5+16 a^4+32 a^3+36 a^2+3 a+38 )$\\ 53& $50 (a+6)^2 (a+25)^2 (a^2+46 a+20 ) (a^4+4 a^3+32 a^2+6 a+40 )$\\ &$ (a^4+37 a^3+9 a^2+7 
a+43 ) (a^5+19 a^4+3 a^3+12 a^2+20 a+49 )$\\ 59& $20 (a^{19}+47 a^{18}+45 a^{17}+51 a^{16}+53 a^{15}+25 a^{14}+51 a^{13}+31 a^{12}+58 a^{11}+18 a^{10}$\\ &$+52 a^9+42 a^8+15 a^7+10 a^6+22 a^5+16 a^4+38 a^3+3 a^2+49 a+3 )$\\ 61& $47 (a+33)^2 (a^3+36 a^2+29 a+35 ) (a^3+57 a^2+43 a+24 )$\\ &$ (a^4+27 a^3+3 a^2+12 a+53 ) (a^7+9 a^6+9 a^5+9 a^4+33 a^3+24 a^2+54 a+32 )$\\ 67& $32 (a+36) (a^4+2 a^3+30 a^2+12 a+5 ) (a^5+37 a^4+31 a^3+48 a^2+20 a+60 )$\\ &$(a^9+31 a^8+13 a^7+57 a^6+63 a^5+35 a^4+50 a^3+39 a^2+6 a+24 )$\\ 71& $57 (a^3+17 a^2+70 a+48 ) (a^{16}+67 a^{15}+26 a^{14}+12 a^{13}+5 a^{12}+47 a^{11}+60 a^{10}$\\ &$+69 a^9+29 a^8+14 a^7+13 a^6+25 a^5+65 a^4+15 a^3+67 a^2+49 a+43 )$\\ \hline \end{tabular}$$ $$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|l} \hline $p$ & \hfil$\theta_{6,6}(1,0,\dots,0,a)$\hfil \\ \hline 73& $62 (a^3+63 a^2+39 a+6 ) (a^5+6 a^4+14 a^3+57 a^2+62 a+58 ) $\\ &$ (a^{11}+12 a^{10}+40 a^9+32 a^8+39 a^7+69 a^6+68 a^5+13 a^4+38 a^3+6 a^2+59 a+57 )$\\ 79& $40 (a+29) (a+45) (a^2+26 a+27 ) (a^2+61 a+75 ) (a^{13}+71 a^{12}+44 a^{11}+41 a^{10}$\\ &$+47 a^9+2 a^8+2 a^7+53 a^6+48 a^5+52 a^4+70 a^3+46 a^2+64 a+39 )$\\ 83& $82 (a^{19}+33 a^{18}+37 a^{17}+12 a^{16}+12 a^{15}+75 a^{14}+3 a^{13}+a^{12}+24 a^{11}+65 a^{10}$\\ &$+45 a^9+57 a^8+35 a^7+14 a^6+49 a^5+27 a^4+8 a^3+28 a^2+23 a+82 )$\\ 89& $55 (a+22)^2 (a+40) (a+78) (a^{15}+43 a^{14}+51 a^{13}+6 a^{12}+29 a^{11}+82 a^{10}+14 a^9$\\ &$+13 a^8+53 a^7+43 a^6+22 a^5+21 a^4+2 a^3+33 a^2+79 a+2 )$\\ 97& $10 (a^3+37 a^2+a+7 ) (a^5+26 a^4+48 a^3+53 a^2+36 a+32 )$\\ &$ (a^5+73 a^4+78 a^3+85 a^2+33 a+30 ) (a^6+45 a^5+15 a^4+89 a^3+61 a^2+16 a+60 )$\\ \hline \end{tabular}$$ A3. Values of $\text{Irr}\,(q,n)$, $\text{Norm}\,(q,n)$, $\text{Suff}\,(q,n)$, $n=6,7$, $q\le 49$, $\text{gcd}(q,n)=1$. 
Refer to [\[Irr\]](#Irr){reference-type="eqref" reference="Irr"}, [\[Norm\]](#Norm){reference-type="eqref" reference="Norm"}, [\[Suff\]](#Suff){reference-type="eqref" reference="Suff"} for the definitions of $\text{Irr}\,(q,n)$, $\text{Norm}\,(q,n)$, $\text{Suff}\,(q,n)$. $$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|c|c|c} \multicolumn{4}{c}{$n=6$}\\ \hline $q$ & $\text{Irr}\,(q,6)$ &$\text{Norm}\,(q,6)$ & $\text{Suff}\,(q,6)$\\ \hline 5 & 1 & 1& 1\\ 7 & 0 & 0& 0\\ 11 & 2 & 2& 2\\ 13 & 2 & 1& 1\\ 17 & 4 & 3& 3\\ 19 & 3 & 2& 2\\ 23 & 2 & 2& 2\\ 25 & 4 & 4& 4\\ 29 & 7 & 7& 7\\ 31 & 4 & 3& 3\\ 37 & 6 & 6& 6\\ 41 & 6 & 6& 6\\ 43 & 5 & 5& 5\\ 47 & 10 & 10& 10\\ 49 & 10 & 10& 10\\ \hline \end{tabular}$$ $$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|c|c|c} \multicolumn{4}{c}{$n=7$}\\ \hline $q$ & $\text{Irr}\,(q,7)$ &$\text{Norm}\,(q,7)$ & $\text{Suff}\,(q,7)$\\ \hline 2 & 1 & 1& 1\\ 3 & 0 & 0& 0\\ 4 & 1 & 1& 1\\ 5 & 2 & 2& 2\\ 8 & 1 & 1& 1\\ 9 & 2 & 2& 2\\ 11 & 3 & 3& 3\\ 13 & 1 & 1& 1\\ 16 & 5 & 5& 5\\ 17 & 2 & 2& 2\\ 19 & 2 & 2& 2\\ 23 & 1 & 1& 1\\ 25 & 2 & 2& 2\\ 27 & 0 & 0& 0\\ 29 & 6 & 5& 5\\ 31 & 4 & 4& 4\\ 32 & 11 & 11& 11\\ 37 & 5 & 5& 5\\ 41 & 4 & 4& 4\\ 43 & 4 & 4& 4\\ 47 & 7 & 7& 7\\ \hline \end{tabular}$$ A4. Intermediate data in the computation of $\theta_{7,7}(1,0,\dots,0,a)$ with $p=2$. Minimal polynomials of $u_i$: $$\renewcommand*{\arraystretch}{1.4} \begin{tabular}{c|c|c} \hline $i$ &$k_i$ & $g_i(X)$\\ \hline $1$& $7$& $X^7+X^6+1$\\ $2$& $7$& $X^7+X^4+1$\\ $3$& $8$& $X^8+X^7+X^5+X^4+1$\\ $4$& $8$& $X^8+X^6+X^5+X^4+1$\\ $5$& $7$& $X^7+X^6+X^5+X^4+1$\\ $6$& $7$& $X^7+X^3+1$\\ $7$& $4$& $X^4+X^3+1$\\ $8$& $8$& $X^8+X^6+X^5+X^2+1$\\ $9$& $6$& $X^6+X^5+X^4+X^2+1$\\ $10$& $8$& $X^8+X^7+X^3+X^2+1$\\ $11$& $6$& $X^6+X+1$\\ $12$& $8$& $X^8+X^7+X^6+X^5+X^4+X+1$\\ $13$& $8$& $X^8+X^5+X^3+X+1$\\ $14$& $6$& $X^6+X^4+X^3+X+1$\\ $15$& $8$& $X^8+X^7+X^6+X^4+X^2+X+1$\\ \hline \end{tabular}$$ 2em The matrices $[A(u_i)\ C(u_i)]$: 1em $i=1$. 
$$\begin{aligned} 1 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 0 1 1 0 1 0 0 0 &\ 0 \cr 0 1 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 0 1 1 0 1 0 0 &\ 1 \cr 0 0 1 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 0 1 1 0 1 0 &\ 1 \cr 0 0 0 1 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 0 1 1 0 1 &\ 0 \cr 0 0 0 0 1 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 0 1 1 0 &\ 0 \cr 0 0 0 0 0 1 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 0 1 1 &\ 0 \cr 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 0 1 1 0 1 0 0 0 1 &\ 1 \cr\end{aligned}$$ $i=2$. 
$$\begin{aligned} 1 0 0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 1 0 0 1 0 1 1 1 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 1 0 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 &\ 1 \cr 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 1 0 0 1 0 1 1 1 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 1 0 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 &\ 0 \cr 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 1 0 0 1 0 1 1 1 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 1 0 0 1 0 1 0 1 1 0 0 1 1 0 0 0 &\ 1 \cr 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 1 0 0 1 0 1 1 1 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 1 0 0 1 0 1 0 1 1 0 0 1 1 0 0 &\ 1 \cr 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 1 0 0 1 0 1 1 1 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 1 0 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 1 1 0 &\ 1 \cr 0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 1 0 0 1 0 1 1 1 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 1 0 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 1 1 &\ 1 \cr 0 0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 1 0 0 1 0 1 1 1 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 1 0 1 1 1 1 0 0 1 1 1 0 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 1 &\ 1 \cr\end{aligned}$$ $i=3$. 
$$\begin{aligned} 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1 &\ 1 \cr 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 &\ 0 \cr 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 &\ 0 \cr 0 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 &\ 0 \cr 0 0 0 0 1 0 0 0 1 1 1 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 1 1 1 0 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 1 0 0 0 0 1 0 0 0 1 1 1 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 1 1 1 0 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 1 0 &\ 1 \cr 0 0 0 0 0 1 0 0 1 0 0 1 0 1 0 1 1 0 1 0 1 1 0 0 1 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 1 1 1 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 1 0 1 1 0 1 0 1 1 0 0 1 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 1 1 1 0 1 0 0 1 1 0 0 &\ 0 \cr 0 0 0 0 0 0 1 0 0 1 0 0 1 0 1 0 1 1 0 1 0 1 1 0 0 1 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 1 1 1 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 1 0 1 1 0 1 0 1 1 0 0 1 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 1 1 1 0 1 0 0 1 1 0 &\ 1 \cr 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 &\ 1 \cr\end{aligned}$$ $i=4$. 
$$\begin{aligned} 1 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 &\ 1 \cr 0 1 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 &\ 1 \cr 0 0 1 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 &\ 1 \cr 0 0 0 1 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 &\ 0 \cr 0 0 0 0 1 0 0 0 1 0 1 1 1 0 1 0 1 1 1 1 0 1 1 0 1 1 1 1 1 0 0 0 0 1 1 0 1 0 0 1 1 0 1 0 1 1 0 1 1 0 1 0 1 0 0 0 0 0 1 0 0 1 1 1 0 1 1 0 0 1 0 0 1 0 0 1 1 0 0 0 0 0 0 1 1 1 0 1 0 0 1 0 0 0 1 1 1 0 0 0 1 0 0 &\ 1 \cr 0 0 0 0 0 1 0 0 1 1 1 0 1 1 0 0 1 0 0 1 0 0 1 1 0 0 0 0 0 0 1 1 1 0 1 0 0 1 0 0 0 1 1 1 0 0 0 1 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 1 &\ 0 \cr 0 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 0 1 &\ 0 \cr 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 1 0 1 1 0 0 1 1 0 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 0 &\ 0 \cr\end{aligned}$$ $i=5$. 
$$\begin{aligned} 1 0 0 0 0 0 0 1 1 0 0 1 1 0 1 1 0 0 0 1 1 1 0 0 1 1 1 0 1 0 1 1 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 1 &\ 1 \cr 0 1 0 0 0 0 0 0 1 1 0 0 1 1 0 1 1 0 0 0 1 1 1 0 0 1 1 1 0 1 0 1 1 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 &\ 0 \cr 0 0 1 0 0 0 0 0 0 1 1 0 0 1 1 0 1 1 0 0 0 1 1 1 0 0 1 1 1 0 1 0 1 1 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 &\ 1 \cr 0 0 0 1 0 0 0 0 0 0 1 1 0 0 1 1 0 1 1 0 0 0 1 1 1 0 0 1 1 1 0 1 0 1 1 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 0 1 &\ 1 \cr 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 1 1 1 1 0 1 0 0 0 1 0 1 1 0 0 1 0 1 1 1 1 1 0 0 0 1 0 0 0 0 0 0 1 1 &\ 1 \cr 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 1 1 1 1 0 1 0 0 0 1 0 1 1 0 0 1 0 1 1 1 1 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 1 1 0 1 1 0 0 &\ 1 \cr 0 0 0 0 0 0 1 1 0 0 1 1 0 1 1 0 0 0 1 1 1 0 0 1 1 1 0 1 0 1 1 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 0 0 1 0 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 0 1 0 1 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 1 1 &\ 0 \cr\end{aligned}$$ $i=6$. 
$$\begin{aligned} 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1 &\ 0 \cr 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1 1 0 0 0 1 1 1 1 1 &\ 0 \cr 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1 1 0 0 0 1 1 1 1 &\ 1 \cr 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 &\ 0 \cr 0 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1 1 0 0 &\ 1 \cr 0 0 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1 1 0 &\ 0 \cr 0 0 0 0 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1 1 &\ 0 \cr\end{aligned}$$ $i=7$. 
$$\begin{aligned} 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 &\ 1 \cr 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 &\ 0 \cr 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 &\ 0 \cr 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 &\ 1 \cr\end{aligned}$$ -2.5em $i=8$. $$\begin{aligned} 1 0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 1 0 0 1 1 0 1 1 0 1 0 0 1 1 0 0 1 1 0 1 0 0 0 1 1 1 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 0 1 1 1 &\ 1 \cr 0 1 0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 1 0 0 1 1 0 1 1 0 1 0 0 1 1 0 0 1 1 0 1 0 0 0 1 1 1 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 0 1 1 &\ 0 \cr 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 0 0 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0 1 1 0 0 0 1 1 0 1 0 1 0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 &\ 0 \cr 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 0 0 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0 1 1 0 0 0 1 1 0 1 0 1 0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 &\ 1 \cr 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 0 0 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0 1 1 0 0 0 1 1 0 1 0 1 0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 &\ 0 \cr 
0 0 0 0 0 1 0 0 1 0 1 0 1 0 0 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0 1 1 0 0 0 1 1 0 1 0 1 0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 1 0 &\ 0 \cr 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 1 0 0 1 1 0 1 1 0 1 0 0 1 1 0 0 1 1 0 1 0 0 0 1 1 1 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 0 1 1 1 1 0 &\ 0 \cr 0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 0 1 0 0 1 1 0 1 1 0 1 0 0 1 1 0 0 1 1 0 1 0 0 0 1 1 1 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 0 1 1 1 1 &\ 0 \cr\end{aligned}$$ $i=9$. $$\begin{aligned} 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 &\ 1 \cr 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 &\ 1 \cr 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 &\ 1 \cr 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 1 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 1 1 1 1 0 &\ 0 \cr 0 0 0 0 1 0 1 1 1 0 1 0 1 0 1 1 0 1 1 1 1 0 0 0 0 1 0 1 1 1 0 1 0 1 0 1 1 0 1 1 1 1 0 0 0 0 1 0 1 1 1 0 1 0 1 0 1 1 0 1 1 1 1 0 0 0 0 1 0 1 1 1 0 1 0 1 0 1 1 0 1 1 1 1 0 0 0 0 1 0 1 1 1 0 1 0 1 0 1 1 0 1 1 &\ 1 \cr 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 1 1 0 1 0 0 1 1 0 0 1 0 0 1 &\ 0 \cr\end{aligned}$$ $i=10$. 
$$\begin{aligned} 1 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 1 0 1 0 1 0 1 1 1 0 0 1 0 0 0 1 1 0 0 1 0 0 1 0 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 &\ 0 \cr 0 1 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 1 0 1 0 1 0 1 1 1 0 0 1 0 0 0 1 1 0 0 1 0 0 1 0 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 &\ 0 \cr 0 0 1 0 0 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 0 0 1 0 1 0 1 0 1 0 0 1 1 0 0 0 0 0 1 0 1 1 1 0 1 0 1 1 1 1 1 0 1 1 0 1 1 1 0 1 1 0 0 0 0 1 0 1 0 0 1 0 1 1 1 1 0 1 0 0 1 1 1 1 1 1 1 1 0 1 0 1 0 0 0 0 1 1 1 0 0 1 &\ 1 \cr 0 0 0 1 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 0 0 1 0 1 0 1 0 1 0 0 1 1 0 0 0 0 0 1 0 1 1 1 0 1 0 1 1 1 &\ 0 \cr 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 0 0 1 0 1 0 1 0 1 0 0 1 1 0 0 0 0 0 1 0 1 1 1 0 1 0 1 1 &\ 1 \cr 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 0 0 1 0 1 0 1 0 1 0 0 1 1 0 0 0 0 0 1 0 1 1 1 0 1 0 1 &\ 0 \cr 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 0 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 0 0 1 0 1 0 1 0 1 0 0 1 1 0 0 0 0 0 1 0 1 1 1 0 1 0 &\ 0 \cr 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 1 0 1 0 1 0 1 1 1 0 0 1 0 0 0 1 1 0 0 1 0 0 1 0 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 0 &\ 0 \cr\end{aligned}$$ $i=11$. 
$$\begin{aligned} 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 1 0 0 1 1 0 1 0 1 0 1 1 1 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 &\ 1 \cr 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 1 0 0 1 1 0 1 0 1 0 1 1 1 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 &\ 0 \cr 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 1 0 0 1 1 0 1 0 1 0 1 1 1 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 &\ 0 \cr 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 1 0 0 1 1 0 1 0 1 0 1 1 1 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 &\ 1 \cr 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 1 0 0 1 1 0 1 0 1 0 1 1 1 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 &\ 0 \cr 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 1 0 0 1 1 0 1 0 1 0 1 1 1 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 1 &\ 0 \cr\end{aligned}$$ $i=12$. 
$$\begin{aligned} 1 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 1 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 1 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 &\ 0 \cr 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 0 &\ 1 \cr 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 &\ 0 \cr 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 &\ 1 \cr 0 0 0 0 1 0 0 0 1 1 0 1 0 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 1 0 0 1 1 1 0 1 0 0 0 1 1 1 0 0 0 1 0 1 0 1 0 0 0 0 1 0 0 0 1 1 0 1 0 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 1 0 0 1 1 1 0 1 0 0 0 1 1 1 0 0 0 1 0 1 0 1 0 &\ 0 \cr 0 0 0 0 0 1 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 1 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 1 0 1 0 &\ 1 \cr 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 0 &\ 1 \cr 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 1 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 1 0 1 0 0 0 0 0 1 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 0 &\ 0 \cr\end{aligned}$$ -2em $i=13$. 
$$\begin{aligned} 1 0 0 0 0 0 0 0 1 0 0 1 0 1 1 1 1 1 1 1 1 0 0 0 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 0 1 0 &\ 0 \cr 0 1 0 0 0 0 0 0 1 1 0 1 1 1 0 0 0 0 0 0 0 1 0 0 1 0 1 1 1 1 1 1 1 1 0 0 0 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 &\ 1 \cr 0 0 1 0 0 0 0 0 0 1 1 0 1 1 1 0 0 0 0 0 0 0 1 0 0 1 0 1 1 1 1 1 1 1 1 0 0 0 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 &\ 0 \cr 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 0 1 0 0 0 1 0 1 0 1 0 0 1 1 0 0 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 0 0 1 0 1 0 1 1 0 1 0 0 1 1 1 0 0 1 1 0 1 1 0 0 0 1 0 1 1 1 0 1 1 0 1 &\ 0 \cr 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 0 1 0 0 0 1 0 1 0 1 0 0 1 1 0 0 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 0 0 1 0 1 0 1 1 0 1 0 0 1 1 1 0 0 1 1 0 1 1 0 0 0 1 0 1 1 1 0 1 1 0 &\ 1 \cr 0 0 0 0 0 1 0 0 1 0 1 1 1 1 1 1 1 1 0 0 0 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 0 1 0 0 0 1 &\ 0 \cr 0 0 0 0 0 0 1 0 0 1 0 1 1 1 1 1 1 1 1 0 0 0 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 0 1 0 0 0 &\ 0 \cr 0 0 0 0 0 0 0 1 0 0 1 0 1 1 1 1 1 1 1 1 0 0 0 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 0 1 0 0 &\ 1 \cr\end{aligned}$$ $i=14$. 
$$\begin{aligned} 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 &\ 0 \cr 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 &\ 0 \cr 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 &\ 0 \cr 0 0 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 &\ 0 \cr 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 &\ 1 \cr 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 &\ 1 \cr\end{aligned}$$ $i=15$. 
$$\begin{aligned} 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 &\ 0 \cr 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 &\ 1 \cr 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 &\ 0 \cr 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 &\ 1 \cr 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 &\ 1 \cr 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 &\ 0 \cr 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 0 1 0 &\ 0 \cr 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 1 1 0 &\ 0 \cr\end{aligned}$$

[^1]: \* This work was supported by NSF REU Grant 2244488
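As a closing sanity check on the appendix data, each minimal polynomial $g_i(X)$ listed in table A4 should be irreducible over $\mathbb{F}_2$ of the stated degree $k_i$. The following self-contained sketch (our illustration, not part of the paper's computation) implements Rabin's irreducibility test, encoding $\mathbb{F}_2[X]$ polynomials as Python integer bitmasks, e.g. $X^7+X^6+1 \mapsto$ `0b11000001`:

```python
def pdeg(a):
    # degree of a GF(2)[X] polynomial stored as a bitmask (deg(0) treated as -1)
    return a.bit_length() - 1

def pmod(a, m):
    # remainder of a modulo m in GF(2)[X]
    dm = pdeg(m)
    while a and pdeg(a) >= dm:
        a ^= m << (pdeg(a) - dm)
    return a

def pmulmod(a, b, m):
    # product a*b modulo m in GF(2)[X], shift-and-add with eager reduction
    a, r = pmod(a, m), 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if pdeg(a) >= pdeg(m):
            a ^= m << (pdeg(a) - pdeg(m))
    return r

def pgcd(a, b):
    # Euclidean algorithm in GF(2)[X]
    while b:
        a, b = b, pmod(a, b)
    return a

def prime_factors(n):
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def is_irreducible(m):
    # Rabin's test: m of degree n is irreducible over GF(2) iff
    # X^(2^n) == X (mod m) and gcd(X^(2^(n/q)) + X, m) = 1 for every prime q | n
    n = pdeg(m)
    if n < 1:
        return False
    t, powers = 2, {}          # t encodes the polynomial X
    for i in range(1, n + 1):
        t = pmulmod(t, t, m)   # t = X^(2^i) mod m
        powers[i] = t
    if powers[n] != pmod(2, m):
        return False
    return all(pgcd(powers[n // q] ^ 2, m) == 1 for q in prime_factors(n))
```

For instance, `is_irreducible(0b11000001)` confirms the table entry $g_1(X)=X^7+X^6+1$, while the reducible $X^4+X^2+1=(X^2+X+1)^2$ is rejected.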
--- abstract: | We prove the sharp embedding between the spectral Barron space and the Besov space. Given the spectral Barron space as the target function space, we prove a dimension-free result that if the neural network contains $L$ hidden layers with $N$ units per layer, then the upper and lower bounds of the $L^2$-approximation error are $\mathcal{O}(N^{-sL})$ with $0 < sL\le 1/2$, where $s$ is the smoothness index of the spectral Barron space. address: LSEC, Institute of Computational Mathematics and Scientific/Engineering Computing, AMSS, Chinese Academy of Sciences, Beijing 100190, China; School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China author: - Yulei Liao - Pingbing Ming bibliography: - spectralBarron.bib title: Spectral Barron space and deep neural network approximation --- [^1] # Introduction {#sec:intro} A series of works have been devoted to studying the neural network approximation error and generalization error with the Barron class [@Barron:1992; @Barron:1993; @Barron:1994; @Barron:2018] as the target space. For $f$ a complex-valued function and $s\ge 0$, the spectral norm $\upsilon_{f,s}$ is defined as $$\upsilon_{f,s}{:}=\int_{\mathbb{R}^d}\lvert\xi\rvert^s\lvert\widehat{f}(\xi)\rvert\mathrm{d}\xi,$$ where $\widehat{f}$ is the Fourier transform of $f$ in the distribution sense. A function $f$ is said to belong to the Barron class if its spectral norm $\upsilon_{f,s}$ is finite and the Fourier inversion holds pointwise. However, it is important to note that this definition lacks rigor, as it does not specify the conditions under which the pointwise Fourier inversion is valid. Addressing this issue is a nontrivial matter, as discussed in [@Pinsky:1997]. 
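To get a concrete feel for the spectral norm, one can evaluate $\upsilon_{f,s}$ numerically. The sketch below (our illustration, not part of the paper's analysis) does so for the one-dimensional Gaussian $f(x)=e^{-x^2/2}$, whose Fourier transform $\widehat{f}(\xi)=\sqrt{2\pi}\,e^{-\xi^2/2}$ is known in closed form, and compares a trapezoidal quadrature against the exact value $\upsilon_{f,s}=\sqrt{2\pi}\,2^{(s+1)/2}\Gamma\bigl((s+1)/2\bigr)$:

```python
from math import exp, gamma, sqrt, pi

def upsilon_gaussian(s, xi_max=20.0, n=200001):
    """Trapezoidal quadrature for upsilon_{f,s} with f(x) = exp(-x^2/2),
    whose Fourier transform is fhat(xi) = sqrt(2*pi) * exp(-xi^2/2)."""
    h = 2 * xi_max / (n - 1)
    total = 0.0
    for k in range(n):
        xi = -xi_max + k * h
        w = 0.5 if k in (0, n - 1) else 1.0   # trapezoidal end-point weights
        total += w * abs(xi) ** s * sqrt(2 * pi) * exp(-xi * xi / 2)
    return total * h

def upsilon_gaussian_exact(s):
    """Closed form: sqrt(2*pi) * 2**((s+1)/2) * Gamma((s+1)/2)."""
    return sqrt(2 * pi) * 2 ** ((s + 1) / 2) * gamma((s + 1) / 2)
```

For $s=0$ both expressions reduce to $2\pi$, and for moderate $s$ the quadrature agrees with the closed form to high accuracy, since the integrand decays like a Gaussian.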
To sidestep this issue, the authors of [@Ma:2017; @Xu:2020; @Siegel:2022; @Siegel:2023] assume $f\in L^1(\mathbb{R}^d)$, and define $$\label{eq:hat-bs} \widehat{\mathscr{B}}^s(\mathbb{R}^d){:}=\left\{\,f\in L^1(\mathbb{R}^d)\,\mid\,\upsilon_{f,0}+\upsilon_{f,s}<\infty\,\right\}.$$ For functions in $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$, the Fourier transform and the pointwise Fourier inversion are valid. Unfortunately, we shall prove in Lemma [Lemma 3](#lema:hat-bs){reference-type="ref" reference="lema:hat-bs"} that $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$ equipped with the norm $\upsilon_{f,0}+\upsilon_{f,s}$ is not complete. Therefore, it does not qualify as a Banach space. To address this issue, an alternative class of function spaces has been proposed, which can be traced back to the work of Hörmander [@Hormander:1963]. It is defined as follows. $$\mathscr{F}L^s_p(\mathbb{R}^d){:}=\left\{\,f\in\mathscr{S}'(\mathbb{R}^d)\,\mid\,(1+\lvert\xi\rvert^s)\widehat{f}(\xi)\in L^p(\mathbb{R}^d)\,\right\}$$ for $1\le p\le\infty$ and $s\ge 0$. This space has been studied extensively and may be referred to by different names. Some call it the Hörmander space, as mentioned in works such as [@Hormander:1963; @Messina:2001; @DGSM60:2014; @Ivec:2021]; others refer to it as the Fourier Lebesgue space, as seen in [@Grochenig:2002; @Pilipovic:2010; @BenyiOh:2013; @Kato:2020]. We are interested in the case $p=1$, and call it the spectral Barron space: $$\mathscr{B}^s(\mathbb{R}^d){:}=\left\{\,f\in\mathscr{S}'(\mathbb{R}^d)\,\mid\,\upsilon_{f,0}+\upsilon_{f,s}<\infty\,\right\},$$ which is equipped with the norm $$\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}{:}=\upsilon_{f,0}+\upsilon_{f,s}=\int_{\mathbb{R}^d}(1+\lvert\xi\rvert^s)\lvert\widehat{f}(\xi)\rvert\mathrm{d}\xi.$$ We show in Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"} that the pointwise Fourier inversion is valid for functions in $\mathscr{B}^s(\mathbb{R}^d)$ with a nonnegative $s$.
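Since $\|f\|_{\mathscr{B}^s}$ is simply a weighted $L^1$-mass of $\widehat{f}$, it can also be estimated directly from samples of $f$ by discretizing the Fourier integral with an FFT. The sketch below is a numerical illustration only (the Gaussian test function, grid sizes, and helper name are our choices, not the paper's); in dimension one it reproduces the closed-form value $2\pi+\sqrt{2\pi}\,2^{(s+1)/2}\Gamma\bigl((s+1)/2\bigr)$ for $f(x)=e^{-x^2/2}$:

```python
import numpy as np
from math import gamma, sqrt, pi

def spectral_barron_norm_1d(f, s, half_width=20.0, n=4096):
    """Estimate int (1 + |xi|^s) |fhat(xi)| dxi for a rapidly decaying f on R,
    where fhat(xi) = int f(x) exp(-i*xi*x) dx is approximated by an FFT."""
    h = 2 * half_width / n                    # sample spacing in x
    x = -half_width + h * np.arange(n)
    fhat_abs = h * np.abs(np.fft.fft(f(x)))   # |fhat| on the FFT grid (phase drops out)
    xi = 2 * pi * np.fft.fftfreq(n, d=h)      # matching angular frequencies
    dxi = 2 * pi / (2 * half_width)           # frequency grid spacing
    return float(np.sum((1 + np.abs(xi) ** s) * fhat_abs) * dxi)
```

For the Gaussian and $s=2$ the estimate matches the exact norm $2\pi+\sqrt{2\pi}\,2^{3/2}\Gamma(3/2)=4\pi$ to several digits, since both the spatial truncation error and the frequency discretization error decay like a Gaussian.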
Some authors also refer to $\mathscr{B}^s(\mathbb{R}^d)$ as the Fourier algebra or Wiener algebra, whose algebraic properties, such as the Wiener-Levy theorem [@Wiener:1932; @Levy:1935; @Helson:1959], have been extensively studied in [@ReitherStegeman:2000; @Liflyand:2012]. Another popular space for analyzing shallow neural networks is the Barron space [@E:2019; @EMW:2022], which can be viewed as shallow neural networks with infinite width. The authors in [@Wojtowytsch:2022; @E:2022] claimed that the spectral Barron space is much smaller than the Barron space. As observed in [@Caragea:2023], this statement is not accurate because it does not distinguish the smoothness index $s$ in $\mathscr{B}^s(\mathbb{R}^d)$. In addition, the variation space, introduced in [@Barron:2008], has been studied in relation to the spectral Barron space $\mathscr{B}^s(\mathbb{R}^d)$ and the Barron space in [@SiegelXu:2022; @Siegel:2023]. These spaces have been exploited to study the regularity of partial differential equations [@ChenLuLu:2021; @Lu:2021; @E:2022; @Chen:2023]. Recently, a new space [@ParhiNowak:2022] originating from variational spline theory, which is closely related to the variation space, has also been exploited as the target function space for neural network approximation [@ParhiNowak:2023]. The first objective of the present work is to study the analytical properties of $\mathscr{B}^s(\mathbb{R}^d)$. In Lemma [Lemma 2](#lema:Banach){reference-type="ref" reference="lema:Banach"}, we show that $\mathscr{B}^s(\mathbb{R}^d)$ is complete, while Lemma [Lemma 3](#lema:hat-bs){reference-type="ref" reference="lema:hat-bs"} shows that $\mathscr{\widehat{B}}^s(\mathbb{R}^d)$ is not complete. This distinction highlights a key difference between these two spaces. Furthermore, Lemma [Lemma 5](#lema:ex3){reference-type="ref" reference="lema:ex3"} provides an example illustrating that functions in $\mathscr{B}^s(\mathbb{R}^d)$ may decay arbitrarily slowly.
This example, constructed elegantly using the generalized Hypergeometric function, reveals interesting relationships between the Fourier transform and the decay rate of the functions. Additionally, we study the relations between $\mathscr{B}^s(\mathbb{R}^d)$ and some classical function spaces. In Theorem [Theorem 9](#thm:Besov){reference-type="ref" reference="thm:Besov"}, we establish the connections between $\mathscr{B}^s(\mathbb{R}^d)$ and the Besov space. Furthermore, in Corollary [Corollary 12](#coro:Sobolev){reference-type="ref" reference="coro:Sobolev"}, we establish the connections between $\mathscr{B}^s(\mathbb{R}^d)$ and the Sobolev spaces. Notably, we prove the embedding relation $$B^{s+d/2}_{2,1}(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow B^s_{\infty,1}(\mathbb{R}^d),$$ which is an optimal result that appears to be missing in the existing literature. This embedding may serve as a bridge to study how the Barron space, the variation space and the space in [@ParhiNowak:2022] are related to classical function spaces such as the Besov space; cf. [@ParhiNowak:2022]\*§ 5. The second objective of the present work is to explore the neural network approximation on a bounded domain. Building upon Barron's seminal works on approximating functions in $\mathscr{B}^1(\mathbb{R}^d)$ in the $L^2$-norm, recent studies extended the approximation to functions in $\mathscr{B}^{k+1}(\mathbb{R}^d)$ in the $H^k$-norm, as demonstrated in [@Siegel:2020; @Xu:2020]. Furthermore, improved approximation rates have been achieved for functions in $\mathscr{B}^s(\mathbb{R}^d)$ with large $s$ in works such as [@Bresler:2020; @MaSiegelXu:2022; @Siegel:2022]. These advancements contribute to a deeper understanding of the approximation capabilities of neural networks.
The distinction between deep ReLU networks and shallow networks has been highlighted in the separation theorems presented in [@Eldan:2016; @Telgarsky:2016; @Shamir:2022]. These theorems provide examples that can be well approximated by three-layer ReLU neural networks but not by two-layer ReLU neural networks with a width that grows polynomially with the dimension. This sheds light on the differences in expressive power between shallow and deep networks. Moreover, the approximation rates for neural networks targeting mixed derivative Besov/Sobolev spaces, spectral Barron spaces, and Hölder spaces have also been investigated. These studies contribute to a broader understanding of the approximation capabilities of neural networks in various function spaces; see [@Du:2019; @BreslerNagaraj:2020; @Bolcskei:2021; @LuShenYangZhang:2021; @Suzuki:2021]. We focus on the $L^2$-approximation properties for functions in $\mathscr{B}^s(\mathbb{R}^d)$ when $s$ is small. In Theorem [Theorem 21](#thm:deep){reference-type="ref" reference="thm:deep"}, we establish that a neural network with $L$ hidden layers and $N$ units in each layer can approximate functions in $\mathscr{B}^s(\mathbb{R}^d)$ with a convergence rate of $\mathcal{O}(N^{-sL})$ when $0<sL\le 1/2$. This bound is sharp, as demonstrated in Theorem [Theorem 23](#thm:lower){reference-type="ref" reference="thm:lower"}. Importantly, our results provide optimal convergence rates compared to existing literature. For deep neural networks, a similar result has been presented in [@BreslerNagaraj:2020] with a convergence rate of $\mathcal{O}(N^{-sL/2})$. For shallow neural networks, i.e., $L=1$, convergence rates of $\mathcal{O}(N^{-1/2})$ have been established in [@MengMing:2022; @Siegel:2022] when $s=1/2$. However, it is worth noting that the constants in their estimates depend on the dimension at least polynomially, or even exponentially, and require other bounded norms besides $\upsilon_{f,s}$.
Our results provide a significant advancement by achieving optimal convergence rates without the additional dependency on dimension or other bounded norms. The remaining part of the paper is structured as follows. In Section 2, we demonstrate that the spectral Barron space is a Banach space and examine its relationship with other function spaces. This analysis provides a foundation for understanding the properties of the spectral Barron space. In Section 3, we delve into the error estimation for approximating functions in the spectral Barron space using deep neural networks with finite depth and infinite width. By investigating the convergence properties of these networks, we gain insights into their approximation capabilities and provide error bounds for their performance. Finally, in Section 4, we conclude our work by summarizing the key findings and contributions of this study. We also discuss potential avenues for future research and highlight the significance of our results in the broader context of function approximation using neural networks. Certain technical results are postponed to the Appendix. # Completeness of $\mathscr{B}^s$ and its relation to other function spaces {#sec:relation} This part discusses the completeness of the spectral Barron space and embedding relations to other classical function spaces. First, we fix some notation. Let $\mathscr{S}$ be the Schwartz space and let $\mathscr{S}'$ be its topological dual space, i.e., the space of tempered distributions. The Gamma function is $$\Gamma(s){:}=\int_0^\infty t^{s-1}e^{-t}\mathrm{d}t,\qquad s>0.$$ We denote the surface area of the unit sphere $\mathbb{S}^{d-1}$ by $\omega_{d-1}=2\pi^{d/2}/\Gamma(d/2)$. The volume of the unit ball is $\nu_d=\omega_{d-1}/d$.
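The constants $\omega_{d-1}$ and $\nu_d$ recur in all the explicit computations below; a quick numerical cross-check of the two formulas (an aside, not part of the original text):

```python
import math

def sphere_area(d):
    # surface area of the unit sphere S^{d-1} in R^d: 2 pi^{d/2} / Gamma(d/2)
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

def ball_volume(d):
    # volume of the unit ball in R^d: omega_{d-1} / d
    return sphere_area(d) / d

assert math.isclose(sphere_area(2), 2 * math.pi)      # length of S^1
assert math.isclose(sphere_area(3), 4 * math.pi)      # area of S^2
assert math.isclose(ball_volume(2), math.pi)          # area of the unit disk
assert math.isclose(ball_volume(3), 4 * math.pi / 3)  # volume of the unit ball
```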
The Beta function is $$B(\alpha,\beta){:}=\int_0^1t^{\alpha-1}(1-t)^{\beta-1}\mathrm{d}t=\dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)},\qquad \alpha,\beta>0.$$ The Bessel function of the first kind is defined by the series $$J_\nu(x){:}=(x/2)^\nu\sum_{k=0}^\infty(-1)^k\dfrac{(x/2)^{2k}}{\Gamma(\nu+k+1)k!}.$$ This definition may be found in [@Luke:1962]\*§ 1.4.1, Eq. (1). For $f\in L^1(\mathbb{R}^d)$, the Fourier transform of $f$ is defined as $$\widehat{f}(\xi){:}=\int_{\mathbb{R}^d}f(x)e^{-2\pi ix\cdot\xi}\mathrm{d}x,$$ and the inverse Fourier transform is defined as $$f^\vee(x){:}=\int_{\mathbb{R}^d}f(\xi)e^{2\pi ix\cdot\xi}\mathrm{d}\xi.$$ If $f\in\mathscr{S}'(\mathbb{R}^d)$, then the Fourier transform in the sense of distribution means $$\langle\,\widehat{f},\varphi\rangle=\langle\,f,\widehat{\varphi}\rangle\qquad \text{for any}\quad\varphi\in\mathscr{S}(\mathbb{R}^d)\subset L^1(\mathbb{R}^d).$$ We shall frequently use the following Hausdorff-Young inequality. Let $1\le p \le 2$ and $f\in L^p(\mathbb{R}^d)$, then $$\label{eq:hyineq} \|\,\widehat{f}\,\|_{L^{p'}(\mathbb{R}^d)}\le \|\,f\,\|_{L^p(\mathbb{R}^d)},$$ where $p'$ is the conjugate exponent of $p$; i.e., $1/p+1/p'=1$. We shall use the following pointwise Fourier inversion theorem. **Lemma 1**. *Let $g\in L^1(\mathbb{R}^d)$, then $\widehat{g^\vee}=g$ in $\mathscr{S}'(\mathbb{R}^d)$. Furthermore, let $f\in\mathscr{S}'(\mathbb{R}^d)$ and $\widehat{f}\in L^1(\mathbb{R}^d)$, then $(\widehat{f})^\vee=f$, a.e. on $\mathbb{R}^d$.* *Proof.* By definition, there holds $$\langle\,\widehat{g^\vee},\varphi\rangle=\langle\,g^\vee,\widehat{\varphi}\rangle =\langle\,g,\varphi\rangle\qquad\text{for any}\quad\varphi\in\mathscr{S}(\mathbb{R}^d).$$ Therefore, $\widehat{g^\vee}=g$ in $\mathscr{S}'(\mathbb{R}^d)$.
Since $\widehat{f}\in L^1(\mathbb{R}^d)$, $$\langle\,(\widehat{f})^\vee,\varphi\rangle=\langle\,\widehat{f},\varphi^\vee\rangle=\langle\,f,\varphi\rangle\qquad\text{for any}\quad\varphi\in\mathscr{S}(\mathbb{R}^d).$$ By the Hausdorff-Young inequality [\[eq:hyineq\]](#eq:hyineq){reference-type="eqref" reference="eq:hyineq"}, $$\|\,(\widehat{f})^\vee\,\|_{L^\infty(\mathbb{R}^d)}\le\|\,\widehat{f}\,\|_{L^1(\mathbb{R}^d)}.$$ Therefore, $f$ defines a bounded linear functional on $L^1(\mathbb{R}^d)$; i.e., $f\in [L^1(\mathbb{R}^d)]^*=L^\infty(\mathbb{R}^d)$, since $\mathscr{S}(\mathbb{R}^d)$ is dense in $L^1(\mathbb{R}^d)$ and $$\lvert\langle\,f,\varphi\rangle\rvert=\lvert\langle\,(\widehat{f})^\vee,\varphi\rangle\rvert\le\|\,(\widehat{f})^\vee\,\|_{L^\infty(\mathbb{R}^d)}\|\,\varphi\,\|_{L^1(\mathbb{R}^d)}\le\|\,\widehat{f}\,\|_{L^1(\mathbb{R}^d)}\|\,\varphi\,\|_{L^1(\mathbb{R}^d)}.$$ Hence, $(\widehat{f})^\vee=f$, a.e. on $\mathbb{R}^d$ because $(\widehat{f})^\vee-f\in L^\infty(\mathbb{R}^d)$ [@Brezis:2011]\*Corollary 4.24. ◻ A direct consequence of Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"} is that the pointwise Fourier inversion is valid for functions in $\mathscr{B}^s(\mathbb{R}^d)$. We shall frequently use this fact later on. ## Completeness of the spectral Barron space **Lemma 2**. 1. *$\mathscr{B}^s(\mathbb{R}^d)$ is a Banach space.* 2. *When $s>0$, $\mathscr{B}^s(\mathbb{R}^d)$ is not a Banach space if the norm $\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}$ is replaced by $\upsilon_{f,s}$.* *Proof.* We give a brief proof of the first claim for the reader's convenience; it has been stated in [@Hormander:1963]\*Theorem 2.2.1. It is sufficient to check the completeness of $\mathscr{B}^s(\mathbb{R}^d)$. For any Cauchy sequence $\{f_k\}_{k=1}^\infty\subset\mathscr{B}^s(\mathbb{R}^d)$, there exists $g\in L^1(\mathbb{R}^d)$ such that $\widehat{f}_k\to g$ in $L^1(\mathbb{R}^d)$.
Therefore there exists a sub-sequence of $\{f_k\}_{k=1}^\infty$ (still denoted by $f_k$) such that $\widehat{f}_k\to g$ a.e. on $\mathbb{R}^d$. Define the measure $\mu$ by setting, for any measurable set $E\subset\mathbb{R}^d$, $$\mu(E){:}=\int_E\lvert\xi\rvert^s\mathrm{d}\xi.$$ Then $\{\widehat{f}_k\}_{k=1}^\infty$ is a Cauchy sequence in $L^1(\mathbb{R}^d,\mu)$ and there exists $h\in L^1(\mathbb{R}^d,\mu)$ such that $\widehat{f}_k\to h$ in $L^1(\mathbb{R}^d,\mu)$. Therefore there exists a sub-sequence of $\{f_k\}_{k=1}^\infty$ (still denoted by $f_k$) such that $\widehat{f}_k\to h$ $\mu$-a.e. on $\mathbb{R}^d$. Note that for any measurable set $E\subset\mathbb{R}^d$, $\mu(E)=0$ is equivalent to $\lvert E\rvert=0$. Therefore $\widehat{f}_k\to h$ a.e. on $\mathbb{R}^d$. By the uniqueness of limits, $h=g$, a.e. on $\mathbb{R}^d$. Define $f=g^\vee$. Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"} shows that $\widehat{f}=g$ in $\mathscr{S}'(\mathbb{R}^d)$. Therefore $f\in\mathscr{B}^s(\mathbb{R}^d)$ and $f_k\to f$ in $\mathscr{B}^s(\mathbb{R}^d)$. Hence $\mathscr{B}^s(\mathbb{R}^d)$ is complete and it is a Banach space. The proof for (2) is a *reductio ad absurdum*. Suppose the claim does not hold, then there exists $C$ depending only on $s$ and $d$ such that for any $f\in\mathscr{B}^s(\mathbb{R}^d)$, $$\label{eq:banach1} \upsilon_{f,0}\le C\upsilon_{f,s}.$$ We shall show this is false by the following example.
For some $\delta>-1$, let $$f_n(x)=\left(\sum_{k=1}^n2^{kd}(1-2^{2k}\lvert\xi\rvert^2)_+^\delta\right)^\vee(x).$$ To bound $\upsilon_{f_n,0}$ and $\upsilon_{f_n,s}$, we introduce the Bochner-Riesz multipliers $$\phi_R=\left(\left(1-\dfrac{\lvert\xi\rvert^2}{R^2}\right)_+^\delta\right)^\vee,\qquad\delta>-1.$$ We claim $$\label{eq:phiR} \phi_R(x)=\dfrac{\Gamma(\delta+1)}{\pi^\delta\lvert x\rvert^{\delta+d/2}}R^{-\delta+d/2}J_{\delta+d/2}(2\pi\lvert x\rvert R),$$ and $$\label{eq:phiRs} \upsilon_{\phi_R,s}=\dfrac{\omega_{d-1}}2B\left(\dfrac{s+d}2,\delta+1\right)R^{s+d}.$$ The proof is postponed to Appendix [5.1](#apd:phiR){reference-type="ref" reference="apd:phiR"}. It follows from [\[eq:phiR\]](#eq:phiR){reference-type="eqref" reference="eq:phiR"} that $$\label{eq:ex1} f_n(x)=\dfrac{\Gamma(\delta+1)}{\pi^\delta\lvert x\rvert^{\delta+d/2}}\sum_{k=1}^n2^{k(\delta+d/2)}J_{\delta+d/2}(2^{1-k}\pi\lvert x\rvert),$$ and $f_n\in\mathscr{B}^s(\mathbb{R}^d)$ with $$\upsilon_{f_n,s}=\sum_{k=1}^n2^{kd}\upsilon_{\phi_{2^{-k}},s} =\dfrac{1-2^{-ns}}{2^{s+1}-2}\omega_{d-1}B\left(\dfrac{s+d}2,\delta+1\right),$$ and $$\upsilon_{f_n,0}=\sum_{k=1}^n2^{kd}\upsilon_{\phi_{2^{-k}},0} =\dfrac{\omega_{d-1}}2B\left(\dfrac{d}2,\delta+1\right)n,$$ where we have used [\[eq:phiRs\]](#eq:phiRs){reference-type="eqref" reference="eq:phiRs"}. It is clear that $$\dfrac{\omega_{d-1}}{2^{s+1}}B\left(\dfrac{s+d}2,\delta+1\right)\le\upsilon_{f_n,s}\le \dfrac{\omega_{d-1}}{2^{s+1}-2}B\left(\dfrac{s+d}2,\delta+1\right).$$ Hence $\upsilon_{f_n,0}=\mathcal{O}(n)$ while $\upsilon_{f_n,s}=\mathcal{O}(1)$. This shows that [\[eq:banach1\]](#eq:banach1){reference-type="eqref" reference="eq:banach1"} is invalid for sufficiently large $n$. This proves the second claim.
◻ Similar to $\mathscr{B}^s(\mathbb{R}^d)$, the space $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$ defined in [\[eq:hat-bs\]](#eq:hat-bs){reference-type="eqref" reference="eq:hat-bs"} has been exploited as the target space for neural network approximation by several authors [@Ma:2017; @Xu:2020; @Siegel:2022; @Siegel:2023]. The advantage of this space is that the Fourier transform is well-defined and the pointwise Fourier inversion is true for functions belonging to $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$. Unfortunately, $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$ is not a Banach space as we shall show below. **Lemma 3**. *The space $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$ defined in [\[eq:hat-bs\]](#eq:hat-bs){reference-type="eqref" reference="eq:hat-bs"} equipped with the norm $\upsilon_{f,0}+\upsilon_{f,s}$ is not a Banach space.* To prove Lemma [Lemma 3](#lema:hat-bs){reference-type="ref" reference="lema:hat-bs"}, we recall the Barron spectrum space defined by [Meng and Ming]{.smallcaps} in [@MengMing:2022]: For $s\in\mathbb{R}$ and $1\le p\le 2$, $$\label{eq:Bsp} \mathscr{B}^s_p(\mathbb{R}^d){:}=\left\{\,f\in L^p(\mathbb{R}^d)\,\mid\,\|\,f\,\|_{L^p(\mathbb{R}^d)}+\upsilon_{f,s}<\infty\,\right\}$$ equipped with the norm $\|\,f\,\|_{\mathscr{B}^s_p(\mathbb{R}^d)}{:}=\|\,f\,\|_{L^p(\mathbb{R}^d)}+\upsilon_{f,s}$. A useful interpolation inequality that compares the spectral norms of different orders has been proved in [@MengMing:2022]\*Lemma 2.1. For $1\le p\le 2$ and $-d/p<s_1<s_2$, there exists $C$ depending on $s_1,s_2,d$ and $p$ such that $$\label{eq:inter} \upsilon_{f,s_1}\le C\|\,f\,\|_{L^p(\mathbb{R}^d)}^{\gamma}\upsilon_{f,s_2}^{1-\gamma},$$ where $\gamma=(s_2-s_1)/(s_2+d/p)$.
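As a numerical aside (not from [@MengMing:2022]), the role of the exponent $\gamma$ can be seen with 1-D Gaussian dilates $f(x/\varepsilon)$, for which every quantity in the interpolation inequality above has a closed form: the ratio of the left-hand side to $\|\,f\,\|_{L^p(\mathbb{R}^d)}^{\gamma}\upsilon_{f,s_2}^{1-\gamma}$ is independent of $\varepsilon$, so no dilation can break the inequality.

```python
import math

def v(s):
    # closed form of v_{f,s} for the 1-D Gaussian f(x) = exp(-pi x^2)
    return math.pi ** (-(s + 1) / 2) * math.gamma((s + 1) / 2)

def ratio(eps, s1, s2):
    """v_{g,s1} / (||g||_{L^1}^gamma * v_{g,s2}^(1-gamma)) for the dilate
    g = f(./eps), with d = 1, p = 1 and gamma = (s2 - s1)/(s2 + d/p)."""
    gamma = (s2 - s1) / (s2 + 1.0)
    l1 = eps                      # ||f(./eps)||_{L^1} = eps, since ||f||_{L^1} = 1
    v1 = eps ** (-s1) * v(s1)     # v_{g,s} = eps^{-s} v_{f,s}
    v2 = eps ** (-s2) * v(s2)
    return v1 / (l1 ** gamma * v2 ** (1 - gamma))

# the ratio does not depend on the dilation parameter eps
r0 = ratio(1.0, 0.0, 2.0)
for eps in (0.01, 0.1, 10.0, 100.0):
    assert math.isclose(ratio(eps, 0.0, 2.0), r0, rel_tol=1e-12)
```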
For any $\varepsilon>0$, set $f_{\varepsilon}{:}=f(\cdot/\varepsilon)$. Using the fact $$\upsilon_{f_{\varepsilon},s}=\varepsilon^{-s}\upsilon_{f,s},$$ together with $\|\,f_{\varepsilon}\,\|_{L^p(\mathbb{R}^d)}=\varepsilon^{d/p}\|\,f\,\|_{L^p(\mathbb{R}^d)}$, one checks that both sides of [\[eq:inter\]](#eq:inter){reference-type="eqref" reference="eq:inter"} scale by the same power of $\varepsilon$ under the substitution $f\mapsto f_{\varepsilon}$; i.e., the inequality is dilation invariant. *Proof of Lemma [Lemma 3](#lema:hat-bs){reference-type="ref" reference="lema:hat-bs"}.* The authors in [@MengMing:2022] have proved that $\mathscr{B}^s_p(\mathbb{R}^d)$ equipped with the norm $\|\,f\,\|_{\mathscr{B}^s_p(\mathbb{R}^d)}$ is a Banach space. For any $f\in\mathscr{B}^s_1(\mathbb{R}^d)$, taking $s_1=0,s_2=s$ and $p=1$ in [\[eq:inter\]](#eq:inter){reference-type="eqref" reference="eq:inter"}, we obtain that there exists $C$ depending only on $d$ and $s$ such that $$\upsilon_{f,0}\le C\|\,f\,\|_{L^1(\mathbb{R}^d)}^{\gamma}\upsilon_{f,s}^{1-\gamma}\le C\|\,f\,\|_{\mathscr{B}^s_1(\mathbb{R}^d)},$$ where $\gamma=s/(s+d)$. We prove the assertion by *reductio ad absurdum*. Suppose that $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$ equipped with the norm $\upsilon_{f,0}+\upsilon_{f,s}$ is also a Banach space; then by the bounded inverse theorem and the above interpolation inequality, there exists $C$ depending only on $s$ and $d$ such that for any $f\in\mathscr{B}^s_1(\mathbb{R}^d)$, $$\label{eq:banach2} \|\,f\,\|_{L^1(\mathbb{R}^d)}\le C(\upsilon_{f,0}+\upsilon_{f,s}).$$ We shall show this is not the case by the following example.
For some $\delta>(d-1)/2$, we define $$f_n(x){:}=\left(\sum_{k=1}^n(1-2^{2k}\lvert\xi\rvert^2)_+^\delta\right)^\vee(x).$$ Using [\[eq:phiR\]](#eq:phiR){reference-type="eqref" reference="eq:phiR"} and noting $f_n=\sum_{k=1}^n\phi_{2^{-k}}$, we have the explicit form of $f_n$ as $$\label{eq:ex2} f_n(x)=\dfrac{\Gamma(\delta+1)}{\pi^\delta\lvert x\rvert^{\delta+d/2}}\sum_{k=1}^n2^{k(\delta-d/2)}J_{\delta+d/2}(2^{1-k}\pi\lvert x\rvert).$$ Using [\[eq:phiRs\]](#eq:phiRs){reference-type="eqref" reference="eq:phiRs"}, we get $$\upsilon_{f_n,s}=\sum_{k=1}^n\upsilon_{\phi_{2^{-k}},s}=\dfrac{1-2^{-n(s+d)}}{2^{s+d+1}-2}\omega_{d-1} B\left(\dfrac{s+d}2,\delta+1\right),$$ and $$\dfrac{\omega_{d-1}}{2^{s+d+1}} B\left(\dfrac{s+d}2,\delta+1\right)\le\upsilon_{f_n,s}\le\dfrac{\omega_{d-1}}{2^{s+d+1}-2} B\left(\dfrac{s+d}2,\delta+1\right).$$ Proceeding along the same line, we obtain $$\upsilon_{f_n,0}=\sum_{k=1}^n\upsilon_{\phi_{2^{-k}},0}=\dfrac{1-2^{-nd}}{2^{d+1}-2}\omega_{d-1} B\left(\dfrac{d}2,\delta+1\right),$$ and $$\dfrac{\omega_{d-1}}{2^{d+1}} B\left(\dfrac{d}2,\delta+1\right)\le\upsilon_{f_n,0}\le\dfrac{\omega_{d-1}}{2^{d+1}-2} B\left(\dfrac{d}2,\delta+1\right).$$ Hence, $$\label{eq:sumbd} \upsilon_{f_n,0}+\upsilon_{f_n,s}\le\dfrac{\omega_{d-1}}{2}\left(\dfrac{B(d/2,\delta+1)}{2^d-1}+\dfrac{B((s+d)/2,\delta+1)}{2^{s+d}-1}\right).$$ By [\[eq:phiR\]](#eq:phiR){reference-type="eqref" reference="eq:phiR"}, a direct calculation gives $$\|\,\phi_R\,\|_{L^1(\mathbb{R}^d)}=\dfrac{\Gamma(\delta+1)}{\pi^\delta R^{\delta-d/2}}\int_{\mathbb{R}^d}\dfrac{\lvert J_{\delta+d/2}(2\pi\lvert x\rvert R)\rvert}{\lvert x\rvert^{\delta+d/2}}\mathrm{d}x\\
=\dfrac{2^{\delta}\Gamma(\delta+1)}{\pi^{\delta+d/2}}\int_{\mathbb{R}^d} \dfrac{\lvert J_{\delta+d/2}(\lvert x\rvert)\rvert}{\lvert x\rvert^{\delta+d/2}}\mathrm{d}x.$$ Invoking [@Grafakos:2014]\*Appendix B.6, B.7, there exists $C$ that depends on $\nu$ such that $$\lvert J_{\nu}(x)\rvert\le C\begin{cases} \lvert x\rvert^{\nu}\qquad&\lvert x\rvert\le 1,\\ \lvert x\rvert^{-1/2}\qquad&\lvert x\rvert>1. \end{cases}$$ It follows that there exists $C$ depending only on $d$ and $\delta$ such that $$\begin{aligned} \|\,\phi_R\,\|_{L^1(\mathbb{R}^d)}&=\dfrac{2^{\delta}\Gamma(\delta+1)}{\pi^{\delta+d/2}}\left(\int_{\lvert x\rvert\le 1}\dfrac{\lvert J_{\delta+d/2}(\lvert x\rvert)\rvert}{\lvert x\rvert^{\delta+d/2}}\mathrm{d}x+\int_{\lvert x\rvert>1}\dfrac{\lvert J_{\delta+d/2}(\lvert x\rvert)\rvert}{\lvert x\rvert^{\delta+d/2}}\mathrm{d}x\right)\\ &\le C\left(\int_{\lvert x\rvert\le 1}\mathrm{d}x+\int_{\lvert x\rvert>1}\lvert x\rvert^{-1/2-\delta-d/2}\mathrm{d}x\right)\\ &\le C\left(1+\dfrac{1}{\delta-(d-1)/2}\right),\end{aligned}$$ where we have used the fact $\delta>(d-1)/2$ in the last step. Therefore, $\|\,\phi_R\,\|_{L^1(\mathbb{R}^d)}$ is bounded by a constant that depends only on $\delta$ and $d$ but is independent of $R$. Moreover, $$\|\,f_n\,\|_{L^1(\mathbb{R}^d)}\le\sum_{k=1}^n\|\,\phi_{2^{-k}}\,\|_{L^1(\mathbb{R}^d)}\le n\|\,\phi_1\,\|_{L^1(\mathbb{R}^d)},$$ and by the Hausdorff-Young inequality [\[eq:hyineq\]](#eq:hyineq){reference-type="eqref" reference="eq:hyineq"}, $$\|\,f_n\,\|_{L^1(\mathbb{R}^d)}\ge\|\,\widehat{f}_n\,\|_{L^\infty(\mathbb{R}^d)}=\widehat{f}_n(0)=n.$$ This means that $\|\,f_n\,\|_{L^1(\mathbb{R}^d)}$ grows linearly in $n$, which together with [\[eq:sumbd\]](#eq:sumbd){reference-type="eqref" reference="eq:sumbd"} immediately shows that the inequality [\[eq:banach2\]](#eq:banach2){reference-type="eqref" reference="eq:banach2"} cannot be true for sufficiently large $n$. Hence, we conclude that $\widehat{\mathscr{B}}^s(\mathbb{R}^d)$ is not a Banach space.
◻ ## Embedding relations of the spectral Barron spaces In this part we discuss the embedding of the spectral Barron spaces. **Lemma 4**. 1. *Interpolation inequality: For any $0\le s_1\le s\le s_2$ satisfying $s=\alpha s_1+(1-\alpha)s_2$ with $0\le\alpha\le 1$, and $f\in\mathscr{B}^{s_1}(\mathbb{R}^d)$, there holds $$\label{eq:inter2} \upsilon_{f,s}\le\upsilon_{f,s_1}^\alpha\upsilon_{f,s_2}^{1-\alpha},$$ and $$\label{eq:inter3} \|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}\le\|\,f\,\|_{\mathscr{B}^{s_1}(\mathbb{R}^d)}^\alpha\|\,f\,\|_{\mathscr{B}^{s_2}(\mathbb{R}^d)}^{1-\alpha}.$$* 2. *Let $0\le s_1\le s_2$, there holds $\mathscr{B}^{s_2}(\mathbb{R}^d)\hookrightarrow\mathscr{B}^{s_1}(\mathbb{R}^d)$ with $$\label{eq:monotone} \|\,f\,\|_{\mathscr{B}^{s_1}(\mathbb{R}^d)}\le\left(2-\dfrac{s_1}{s_2}\right)\|\,f\,\|_{\mathscr{B}^{s_2}(\mathbb{R}^d)}\qquad \forall f\in\mathscr{B}^{s_2}(\mathbb{R}^d).$$* The embedding [\[eq:monotone\]](#eq:monotone){reference-type="eqref" reference="eq:monotone"} has been stated in [@Hormander:1963]\*Theorem 2.2.2 without tracing the embedding constant. *Proof.* We start with the interpolation inequality [\[eq:inter2\]](#eq:inter2){reference-type="eqref" reference="eq:inter2"} for the spectral norm. For any $0\le s_1\le s\le s_2$ with $s=\alpha s_1+(1-\alpha)s_2$, using Hölder's inequality, we obtain $$\upsilon_{f,s}=\int_{\mathbb{R}^d}\left(\lvert\xi\rvert^{s_1}\lvert\widehat{f}(\xi)\rvert\right)^{\alpha}\left(\lvert\xi\rvert^{s_2}\lvert\widehat{f}(\xi)\rvert\right)^{1-\alpha}\mathrm d\xi \le\upsilon_{f,s_1}^{\alpha}\upsilon_{f,s_2}^{1-\alpha}.$$ This gives [\[eq:inter2\]](#eq:inter2){reference-type="eqref" reference="eq:inter2"}. 
Next, for $a,b,c>0$, by Young's inequality, we have $$\begin{aligned} \dfrac{a+b^\alpha c^{1-\alpha}}{(a+b)^\alpha(a+c)^{1-\alpha}}&=\left(\dfrac{a}{a+b}\right)^\alpha\left(\dfrac{a}{a+c}\right)^{1-\alpha}+\left(\dfrac{b}{a+b}\right)^\alpha\left(\dfrac{c}{a+c}\right)^{1-\alpha}\\ &\le\alpha\dfrac{a}{a+b}+(1-\alpha)\dfrac{a}{a+c}+\alpha\dfrac{b}{a+b}+(1-\alpha)\dfrac{c}{a+c}\\ &=1.\end{aligned}$$ This yields $$a+b^\alpha c^{1-\alpha}\le(a+b)^\alpha(a+c)^{1-\alpha}.$$ Let $a=\upsilon_{f,0},b=\upsilon_{f,s_1}$ and $c=\upsilon_{f,s_2}$, we obtain $$\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}=\upsilon_{f,0}+\upsilon_{f,s} \le\upsilon_{f,0}+\upsilon_{f,s_1}^\alpha\upsilon_{f,s_2}^{1-\alpha}\le\|\,f\,\|_{\mathscr{B}^{s_1}(\mathbb{R}^d)}^\alpha\|\,f\,\|_{\mathscr{B}^{s_2}(\mathbb{R}^d)}^{1-\alpha}.$$ This implies [\[eq:inter3\]](#eq:inter3){reference-type="eqref" reference="eq:inter3"}. Next, if we take $s_1=0$ in [\[eq:inter2\]](#eq:inter2){reference-type="eqref" reference="eq:inter2"} and $s=(1-\alpha)s_2$ with $\alpha=1-s/s_2$, then $$\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}\le\upsilon_{f,0}+\upsilon_{f,0}^\alpha\upsilon_{f,s_2}^{1-\alpha}\le(1+\alpha)\upsilon_{f,0}+(1-\alpha)\upsilon_{f,s_2}\le(1+\alpha)\|\,f\,\|_{\mathscr{B}^{s_2}(\mathbb{R}^d)}.$$ This leads to [\[eq:monotone\]](#eq:monotone){reference-type="eqref" reference="eq:monotone"} and completes the proof. ◻ The next lemma shows that $\mathscr{B}^s_p(\mathbb{R}^d)$ is a proper subspace of $\mathscr{B}^s(\mathbb{R}^d)$. **Lemma 5**. *For $s\ge 0$ and $1\le p\le 2$, there holds $\mathscr{B}^s_p(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)$, and the inclusion is proper in the sense that for any $1\le p<\infty$, there exists $f_p\in\mathscr{B}^s(\mathbb{R}^d)$ and $f_p\not\in L^p(\mathbb{R}^d)$.* *Proof.* It follows from the interpolation inequality [\[eq:inter\]](#eq:inter){reference-type="eqref" reference="eq:inter"} that $\upsilon_{f,0} \le C\|\,f\,\|_{\mathscr{B}^s_p(\mathbb{R}^d)}$. 
Hence $$\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}\le C\|\,f\,\|_{\mathscr{B}^s_p(\mathbb{R}^d)}.$$ This implies $\mathscr{B}^s_p(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)$ for any $s\ge 0$ and $1\le p\le 2$. We shall show below that the inclusion is proper. Let $$f_p(x){:}=\left(\lvert\xi\rvert^{-d/p'}\chi_{[0,1)}(\lvert\xi\rvert)\right)^\vee(x),$$ where $\chi_{\Omega}(t)$ is the characteristic function on $\mathbb{R}$ that equals one if $t\in\Omega$ and zero otherwise. It is straightforward to verify $f_p\in\mathscr{B}^s(\mathbb{R}^d)$. We shall show below that $f_p\notin L^p(\mathbb{R}^d)$, which is based on the following explicit formula for $f_p$ derived in Appendix [5.2](#apd:ex3){reference-type="ref" reference="apd:ex3"}: $$\label{eq:ex3} f_p(x)={}_1F_2(d/(2p);1+d/(2p),d/2;-\pi^2\lvert x\rvert^2)p\nu_d,$$ where the generalized Hypergeometric function ${}_nF_m$ is defined as follows. For nonnegative integers $n,m$, with none of the parameters $\{\beta_j\}_{j=1}^m$ a negative integer or zero, $${}_nF_m(\alpha_1,\dots,\alpha_n;\beta_1,\dots,\beta_m;x){:}=\sum_{k=0}^\infty\dfrac{\prod_{j=1}^n(\alpha_j)_k}{\prod_{j=1}^m(\beta_j)_k}\dfrac{x^k}{k!}.$$ The generalized Hypergeometric function ${}_nF_m$ converges for all finite $x$ if $n\le m$. In particular, ${}_nF_m(\alpha_1,\dots,\alpha_n;\beta_1,\dots,\beta_m;0)=1$. Hence $f_p(x)$ is finite for any $x$. Using [@MathaiSaxena:1973]\*Appendix, we obtain $${}_1F_2(\alpha;\beta,\gamma;-x^2/4)\simeq\mathcal{O}(\lvert x\rvert^{\alpha-\beta-\gamma+1/2}+\lvert x\rvert^{-2\alpha})\qquad\text{when}\quad\lvert x\rvert\to\infty.$$ Therefore, $$f_p(x)\simeq\mathcal{O}(\lvert x\rvert^{-(d+1)/2}+\lvert x\rvert^{-d/p})\qquad\text{when}\quad \lvert x\rvert\to\infty.$$ This immediately implies $f_p\not\in L^p(\mathbb{R}^d)$. ◻ *Remark 6*. The representation [\[eq:ex3\]](#eq:ex3){reference-type="eqref" reference="eq:ex3"} is rather complicated; we give explicit formulas for certain special cases.
When $d=1$, $$f_1(x)=\dfrac{\sin(2\pi x)}{\pi x},\qquad f_2(x)=\dfrac{2}{\sqrt{\lvert x\rvert}}C(2\sqrt{\lvert x\rvert}),$$ where $C$ is the Fresnel Cosine integral given by $$C(x)=\int_0^x\cos(\pi t^2/2)\mathrm{d}t\to\dfrac12\qquad\text{when}\quad x\to\infty.$$ Indeed, for $d=p=1$, using the relation [@Luke:1969]\*§ 6.2.1, Eq. (10) $$\sin x={}_0F_1(;3/2;-x^2/4)x,$$ we obtain $$f_1(x)=2{}_1F_2(1/2;3/2,1/2;-\pi^2x^2)=2{}_0F_1(;3/2;-\pi^2x^2)=\dfrac{\sin(2\pi x)}{\pi x}.$$ When $p=2$, using the identity [@Luke:1969]\*§ 6.2.11, Eq. (41) $$C(\sqrt{2x/\pi})=\sqrt{\dfrac{2x}{\pi}}{}_1F_2(1/4;5/4,1/2;-x^2/4)\qquad\text{when}\quad x>0,$$ we obtain $$f_2(x)=4{}_1F_2(1/4;5/4,1/2;-\pi^2x^2)=\dfrac2{\sqrt{\lvert x\rvert}}C(2\sqrt{\lvert x\rvert}).$$ ## Relations to some classical function spaces In this part, we establish the embedding between the spectral Barron space $\mathscr{B}^s(\mathbb{R}^d)$ and the Besov space, and hence we bridge $\mathscr{B}^s(\mathbb{R}^d)$ and the Sobolev space as in [@MengMing:2022]. We first recall the definition of the Besov space. **Definition 7** (Besov space). Let $\{\varphi_j\}_{j=0}^\infty\subset\mathscr{S}(\mathbb{R}^d)$ satisfy $0\le\varphi_j\le 1$ and $$\begin{cases} \text{supp}(\varphi_0)\subset\Gamma_0{:}=\left\{\,x\in\mathbb{R}^d\,\mid\,\lvert x\rvert\le 2\,\right\},\\ \text{supp}(\varphi_j)\subset\Gamma_j{:}=\left\{\,x\in\mathbb{R}^d\,\mid\,2^{j-1}\le\lvert x\rvert\le 2^{j+1}\,\right\},\qquad j=1,2,\dots. \end{cases}$$ For every multi-index $\alpha$, there exists a positive number $c_{\alpha}$ such that $$2^{j\lvert\alpha\rvert}\lvert\nabla^{\alpha}\varphi_j(x)\rvert\le c_{\alpha}\quad\text{for all}\quad j=0,\dots,\quad\text{for all}\quad x\in\mathbb{R}^d,$$ and $$\sum_{j=0}^\infty\varphi_j(x)=1\quad\text{for every}\quad x\in\mathbb{R}^d.$$ Let $\alpha\in\mathbb{R}$ and $1\le p,q\le\infty$.
Define the *Besov space* $$B^{\alpha}_{p,q}(\mathbb{R}^d){:}=\left\{\,f\in\mathscr{S}'(\mathbb{R}^d)\,\mid\,\|\,f\,\|_{B^{\alpha}_{p,q}(\mathbb{R}^d)}<\infty\,\right\}$$ equipped with the norm $$\|\,f\,\|_{B^{\alpha}_{p,q}(\mathbb{R}^d)}{:}=\left(\sum_{j=0}^\infty 2^{\alpha qj}\|\,(\varphi_j\widehat{f})^{\vee}\,\|_{L^p(\mathbb{R}^d)}^q\right)^{1/q}\qquad\text{when}\quad q<\infty,$$ and $$\|\,f\,\|_{B_{p,\infty}^\alpha(\mathbb{R}^d)}{:}=\sup_{j\ge 0}2^{\alpha j}\|\,(\varphi_j\widehat{f})^{\vee}\,\|_{L^p(\mathbb{R}^d)}.$$ We first recall the following embedding for the Besov space, which was first proved in a series of works by Taibleson [@Taibleson:1964; @Taibleson:1965; @Taibleson:1966]. We include the proof in Appendix [5.3](#apd:Besov){reference-type="ref" reference="apd:Besov"} for the reader's convenience. **Lemma 8**. *There holds $B_{p_1,q_1}^{\alpha_1}(\mathbb{R}^d)\hookrightarrow B_{p_2,q_2}^{\alpha_2}(\mathbb{R}^d)$ if and only if $p_1\le p_2$ and one of the following conditions holds:* 1. *$\alpha_1-d/p_1>\alpha_2-d/p_2$ and $q_1,q_2$ are arbitrary;* 2. *$\alpha_1-d/p_1=\alpha_2-d/p_2$ and $q_1\le q_2$.* The main embedding result is: **Theorem 9**. 1. *There holds $$\label{eq:besov} B_{2,1}^{s+d/2}(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow B_{\infty,1}^s(\mathbb{R}^d).$$* 2.
*The above embedding is optimal in the sense that $B_{2,1}^{s+d/2}(\mathbb{R}^d)$ is the largest of all $B_{p,q}^\alpha(\mathbb{R}^d)$ satisfying $B_{p,q}^\alpha(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)$, and $B_{\infty,1}^s(\mathbb{R}^d)$ is the smallest of all $B_{p,q}^\alpha(\mathbb{R}^d)$ satisfying $\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow B_{p,q}^\alpha(\mathbb{R}^d)$.* *Proof.* To prove (1), first, for any $f\in\mathscr{B}^s(\mathbb{R}^d)$, $$\begin{aligned} \|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}=&\sum_{j=0}^\infty\int_{\mathbb{R}^d}(1+\lvert\xi\rvert^s)\varphi_j(\xi)\lvert\widehat{f}(\xi)\rvert\mathrm{d}\xi\\ \le&\sum_{j=0}^\infty\left(\int_{\text{supp\;}\varphi_j}(1+\lvert\xi\rvert^s)^2\mathrm{d}\xi\right)^{1/2}\|\,\varphi_j\widehat{f}\,\|_{L^2(\mathbb{R}^d)}. \end{aligned}$$ A direct calculation gives: for $j=0,1,\dots$, $$\begin{aligned} \int_{\text{supp\;}\varphi_j}(1+\lvert\xi\rvert^s)^2\mathrm{d}\xi &\le\int_{0\le\lvert\xi\rvert\le 2^{j+1}}(1+\lvert\xi\rvert^s)^2\mathrm{d}\xi\\ &=\omega_{d-1}\int_0^{2^{j+1}}(1+r^s)^2r^{d-1}\mathrm{d}r\\ &\le 2\omega_{d-1}\int_0^{2^{j+1}}(1+r^{2s})r^{d-1}\mathrm{d}r\\ &\le2\omega_{d-1}\left(\dfrac{2^{(j+1)d}}{d}+\dfrac{2^{(j+1)(2s+d)}}{2s+d}\right)\\ &\le 4\nu_d2^{(j+1)(2s+d)}.\end{aligned}$$ Using Plancherel's theorem, we get $$\begin{aligned} \|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}&\le 2^{s+1+d/2}\sqrt{\nu_d} \sum_{j=0}^\infty2^{j(s+d/2)}\|\,\varphi_j\widehat{f}\,\|_{L^2(\mathbb{R}^d)}\\ &=2^{s+1+d/2}\sqrt{\nu_d} \sum_{j=0}^\infty2^{j(s+d/2)}\|\,(\varphi_j\widehat{f})^\vee\,\|_{L^2(\mathbb{R}^d)}\\ &= 2^{s+1+d/2}\sqrt{\nu_d}\|\,f\,\|_{B_{2,1}^{s+d/2}(\mathbb{R}^d)}.\end{aligned}$$ Next, for any $f\in \mathscr{B}^s(\mathbb{R}^d)$, by Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"}, we have $\varphi_j\widehat{f}\in L^1(\mathbb{R}^d)$; using the Hausdorff-Young inequality [\[eq:hyineq\]](#eq:hyineq){reference-type="eqref" reference="eq:hyineq"}, we obtain
$$\begin{aligned} \|\,f\,\|_{B_{\infty,1}^s(\mathbb{R}^d)}=&\sum_{j=0}^\infty2^{sj}\|\,(\varphi_j\widehat{f})^\vee\,\|_{L^\infty(\mathbb{R}^d)}\le\sum_{j=0}^\infty2^{sj}\|\,\varphi_j\widehat{f}\,\|_{L^1(\mathbb{R}^d)}\\ \le&\|\,\varphi_0\widehat{f}\,\|_{L^1(\mathbb{R}^d)}+2^s\sum_{j=1}^\infty\int_{\mathbb{R}^d}\varphi_j(\xi)\lvert\xi\rvert^s\lvert\widehat{f}(\xi)\rvert\mathrm{d}\xi\\ \le&\upsilon_{f,0}+2^s\upsilon_{f,s}.\end{aligned}$$ Therefore, $\|\,f\,\|_{B_{\infty,1}^s(\mathbb{R}^d)}\le2^s\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}$. This proves [\[eq:besov\]](#eq:besov){reference-type="eqref" reference="eq:besov"} with $$2^{-s}\|\,f\,\|_{B_{\infty,1}^s(\mathbb{R}^d)}\le\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}\le 2^{s+1+d/2}\sqrt{\nu_d}\|\,f\,\|_{B_{2,1}^{s+d/2}(\mathbb{R}^d)}.$$ It remains to show that the embedding [\[eq:besov\]](#eq:besov){reference-type="eqref" reference="eq:besov"} is optimal. Suppose that there exists $B_{p,q}^\alpha(\mathbb{R}^d)$ such that $$B_{2,1}^{s+d/2}(\mathbb{R}^d)\hookrightarrow B_{p,q}^\alpha(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow B_{\infty,1}^s(\mathbb{R}^d).$$ Then, by Lemma [Lemma 8](#lema:Besov){reference-type="ref" reference="lema:Besov"}, we would have $2\le p\le\infty$, $\alpha=s+d/p$ and $q=1$. In what follows, we shall use an example adapted from [@Lieb:2001]\*Ch. 5, Ex. 9 to show that $B_{p,1}^\alpha(\mathbb{R}^d)\not\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)$ when $2<p<\infty$ and $\alpha>0$. Therefore, $B^{s+d/p}_{p,1}(\mathbb{R}^d)\not\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)$ for any $2<p\le\infty$. Let $$\psi_n(x)=(1+in)^{-d/2}e^{-\pi\lvert x\rvert^2/(1+in)}.$$ A direct calculation gives $\widehat{\psi}_n(\xi)=e^{-\pi(1+in)\lvert\xi\rvert^2}$. Hence $\lvert\widehat{\psi}_n(\xi)\rvert=e^{-\pi\lvert\xi\rvert^2}\in\mathscr{S}(\mathbb{R}^d)$ and $$\|\,\psi_n\,\|_{\mathscr{B}^s(\mathbb{R}^d)}=1+\dfrac{\Gamma((s+d)/2)}{\Gamma(d/2)\pi^{s/2}},$$ which is independent of $n$.
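In dimension $d=1$ this constant can be checked numerically. The following sketch (our own code, with hypothetical function names, not part of the proof) compares a Riemann sum for $\int_{\mathbb{R}}(1+\lvert\xi\rvert^s)e^{-\pi\xi^2}\mathrm{d}\xi$ with the closed form $1+\Gamma((s+d)/2)/(\Gamma(d/2)\pi^{s/2})$; for $s=1$ both equal $1+1/\pi$.

```python
import math

def barron_norm_psi_1d(s, h=1e-4, xi_max=10.0):
    """Riemann sum for ||psi_n||_{B^s(R)} = int_R (1+|xi|^s) e^{-pi xi^2} d xi
    in d = 1; the value is independent of n since |hat{psi}_n| = e^{-pi xi^2}."""
    total, xi = 0.0, -xi_max
    while xi < xi_max:
        total += (1.0 + abs(xi) ** s) * math.exp(-math.pi * xi * xi) * h
        xi += h
    return total

def barron_norm_closed_form(s, d=1):
    """The closed form 1 + Gamma((s+d)/2) / (Gamma(d/2) pi^{s/2})."""
    return 1.0 + math.gamma((s + d) / 2) / (math.gamma(d / 2) * math.pi ** (s / 2))
```

The agreement for a non-integer $s$ such as $s=1/2$ illustrates that the constant really does not depend on $n$.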
We shall prove in Appendix [5.4](#apd:ex5){reference-type="ref" reference="apd:ex5"} that when $1\le p<\infty$ and $\alpha>0$, $$\label{eq:ex5} \|\,\psi_n\,\|_{B_{p,1}^\alpha(\mathbb{R}^d)}\le C(1+n^2)^{-d(p-2)/(4p)}.$$ Therefore $\|\,\psi_n\,\|_{B_{p,1}^\alpha(\mathbb{R}^d)}\to 0$ when $p>2$ and $n\to\infty$. On the other hand, we cannot expect that there exists some $p<\infty$ such that $\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow B^{s+d/p}_{p,1}(\mathbb{R}^d)$. Otherwise, we would have $$\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow B^{s+d/p}_{p,1}(\mathbb{R}^d)\hookrightarrow L^p(\mathbb{R}^d)$$ because of Lemma [Lemma 8](#lema:Besov){reference-type="ref" reference="lema:Besov"} and [@Triebel:1983]\*§ 2.5.7, Proposition. This contradicts the fact that $\mathscr{B}^s(\mathbb{R}^d)\not\hookrightarrow L^p(\mathbb{R}^d)$, which has been proved in Lemma [Lemma 5](#lema:ex3){reference-type="ref" reference="lema:ex3"}. ◻ As a consequence of Theorem [Theorem 9](#thm:Besov){reference-type="ref" reference="thm:Besov"} and Lemma [Lemma 5](#lema:ex3){reference-type="ref" reference="lema:ex3"}, we establish embeddings between the spectral Barron space and the Sobolev spaces. **Definition 10** (Fractional Sobolev space).
Let $1\le p<\infty$ and let $\alpha>0$ be a non-integer. The fractional Sobolev space is defined by $$W^\alpha_p(\mathbb{R}^d){:}=\left\{\,f\in W^{\lfloor\alpha\rfloor}_p(\mathbb{R}^d)\,\mid\,\iint_{\mathbb{R}^d\times\mathbb{R}^d}\dfrac{\lvert\nabla^{\lfloor\alpha\rfloor}f(x)-\nabla^{\lfloor\alpha\rfloor}f(y)\rvert^p}{\lvert x-y\rvert^{d+(\alpha-\lfloor\alpha\rfloor)p}}\mathrm{d}x\mathrm{d}y<\infty\,\right\}$$ equipped with the norm $$\|\,f\,\|_{W^\alpha_p(\mathbb{R}^d)}{:}=\|\,f\,\|_{W^{\lfloor\alpha\rfloor}_p(\mathbb{R}^d)}+\left(\iint_{\mathbb{R}^d\times\mathbb{R}^d}\dfrac{\lvert\nabla^{\lfloor\alpha\rfloor}f(x)-\nabla^{\lfloor\alpha\rfloor}f(y)\rvert^p}{\lvert x-y\rvert^{d+(\alpha-\lfloor\alpha\rfloor)p}}\mathrm{d}x\mathrm{d}y\right)^{1/p}.$$ We first recall the relation between the Sobolev space and $\mathscr{B}^s_p(\mathbb{R}^d)$, which has been proved in [@MengMing:2022]\*Theorem 4.3. **Lemma 11** ([@MengMing:2022]\*Theorem 4.3). 1. *If $1\le p\le 2$ and $\alpha>s+d/p>0$, then $$W^{\alpha}_p(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s_p(\mathbb{R}^d).$$* 2. *If $s>-d$ is not an integer or $s>-d$ is an integer and $d\ge 2$, then $$W^{s+d}_1(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s_1(\mathbb{R}^d).$$* It follows from the above lemma and Lemma [Lemma 5](#lema:ex3){reference-type="ref" reference="lema:ex3"} that **Corollary 12**. 1. *If $1\le p\le 2$ and $\alpha>s+d/p$, there holds $$\label{eq:sobolevembed} W^{\alpha}_p(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow C^s(\mathbb{R}^d).$$* 2. *If $s$ is not an integer or $s$ is an integer and $d\ge 2$, then $$W^{s+d}_1(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d).$$* The first embedding with $p=2$ and $s=1$ is implicit in [@Barron:1993]\* § II, Para. 7; § IX, 15.
*Proof.* By Lemma [Lemma 11](#lema:BarronSobolev){reference-type="ref" reference="lema:BarronSobolev"} and Lemma [Lemma 5](#lema:ex3){reference-type="ref" reference="lema:ex3"}, when $\alpha>s+d/p$ and $1\le p\le 2$, we have $$W^{\alpha}_p(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s_p(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d).$$ When $s$ is not an integer or $s$ is an integer and $d\ge 2$, there holds $$W^{s+d}_1(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s_1(\mathbb{R}^d)\hookrightarrow\mathscr{B}^s(\mathbb{R}^d).$$ It remains to prove the right-hand side of [\[eq:sobolevembed\]](#eq:sobolevembed){reference-type="eqref" reference="eq:sobolevembed"}. Indeed, $$\mathscr{B}^s(\mathbb{R}^d)\hookrightarrow B^s_{\infty,1}(\mathbb{R}^d)\hookrightarrow C^s(\mathbb{R}^d)$$ due to Theorem [Theorem 9](#thm:Besov){reference-type="ref" reference="thm:Besov"}, Lemma [Lemma 8](#lema:Besov){reference-type="ref" reference="lema:Besov"} and [@Triebel:1983]\*§ 2.3.5, Eq. (1); § 2.5.7, Eq. (2), (9), (11). ◻ # Application to deep neural network approximation {#sec:appro} The embedding results proved in Theorem [Theorem 9](#thm:Besov){reference-type="ref" reference="thm:Besov"} and Corollary [Corollary 12](#coro:Sobolev){reference-type="ref" reference="coro:Sobolev"} indicate that $s$ is a smoothness index. Consequently, we are interested in exploring the approximation rate when $s$ is small, with $\mathscr{B}^s$ as the target function space. To facilitate our analysis, we shall focus on the hypercube $\Omega{:}=[0,1]^d$, and the spectral norm for a function $f$ defined on $\Omega$ is $$\upsilon_{f,s,\Omega}=\inf_{Ef|_{\Omega}=f}\int_{\mathbb{R}^d}\|\,\xi\,\|_{1}^s\lvert\widehat{Ef}(\xi)\rvert\mathrm{d}\xi,$$ where the infimum is taken over all extensions $Ef$ of $f$ to $\mathbb{R}^d$. To simplify the notation, we use $f$ to denote $Ef$ subsequently.
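To make the definition concrete, here is a numerical sketch of ours (the function name is hypothetical): take $f(x)=e^{-\pi\lvert x\rvert^2}$ on $\Omega\subset\mathbb{R}^2$ with the trivial extension $Ef=f$, so that $\lvert\widehat{Ef}(\xi)\rvert=e^{-\pi\lvert\xi\rvert^2}$ and, by Fubini's theorem, $\upsilon_{f,1,\Omega}\le\int_{\mathbb{R}^2}\|\,\xi\,\|_1e^{-\pi\lvert\xi\rvert^2}\mathrm{d}\xi=2/\pi$. A midpoint rule reproduces this value.

```python
import math

def upsilon_bound_gauss_2d(s=1.0, h=0.02, m=4.0):
    """Midpoint rule for int_{R^2} ||xi||_1^s e^{-pi |xi|^2} d xi, an upper
    bound for upsilon_{f,s,Omega} when f(x) = e^{-pi |x|^2} and Ef = f."""
    n = int(round(2 * m / h))
    total = 0.0
    for i in range(n):
        x = -m + (i + 0.5) * h
        wx = math.exp(-math.pi * x * x)
        for j in range(n):
            y = -m + (j + 0.5) * h
            total += (abs(x) + abs(y)) ** s * wx * math.exp(-math.pi * y * y) * h * h
    return total
```

For $s=1$ the sum converges to $2/\pi\approx 0.63662$; the tail beyond $\lvert\xi_i\rvert\le 4$ is negligible since $e^{-16\pi}$ is far below machine precision.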
We have replaced $\lvert\xi\rvert$ by $\|\,\xi\,\|_{1}$ in the definition of $\upsilon_{f,s,\Omega}$; the latter seems more natural for studying approximation over the hypercube, as suggested by [@Barron:1993]\*§ V. **Definition 13**. A sigmoidal function is a bounded function $\sigma:\mathbb{R}\to\mathbb{R}$ such that $$\lim_{t\to-\infty}\sigma(t)=0,\qquad \lim_{t\to+\infty}\sigma(t)=1.$$ For example, the Heaviside function $\chi_{[0,\infty)}$ is a sigmoidal function. A classical idea for bounding the approximation error of neural networks with a sigmoidal activation function $\sigma$ is to use the Heaviside function $\chi_{[0,\infty)}$ as a transition. [Caragea et al.]{.smallcaps} [@Caragea:2023] pointed out that the gap between a sigmoidal function $\sigma$ and the Heaviside function $\chi_{[0,\infty)}$ cannot be dismissed in $L^\infty(\Omega)$, while it does not exist in $L^2(\Omega)$. **Lemma 14**. *For fixed $\omega\in\mathbb{R}^d\backslash\{0\}$ and $b\in\mathbb{R}$, $$\lim_{\tau\to\infty}\|\,\sigma(\tau(\omega\cdot x+b))-\chi_{[0,\infty)}(\omega\cdot x+b)\,\|_{L^2(\Omega)}=0.$$* *Proof.* Note that $$\lim_{t\to\pm\infty}\lvert\sigma(t)-\chi_{[0,\infty)}(t)\rvert=0.$$ We divide the cube $\Omega$ into $\Omega_1{:}=\left\{\,x\in\Omega\,\mid\,\lvert\omega\cdot x+b\rvert<\delta\,\right\}$ and $\Omega_2{:}=\Omega\setminus\Omega_1$. The measure of $\Omega_1$ is small when $\delta$ is small, while on $\Omega_2$ we have $\lvert\tau(\omega\cdot x+b)\rvert\ge\tau\delta$, so that $\sigma(\tau(\omega\cdot x+b))$ is uniformly close to $\chi_{[0,\infty)}(\omega\cdot x+b)$ on $\Omega_2$ for $\tau$ large. Hence, with a proper choice of $\delta>0$ and $\tau>0$ large enough, the $L^2$-distance between $\sigma(\tau(\omega\cdot x+b))$ and $\chi_{[0,\infty)}(\omega\cdot x+b)$ is arbitrarily small. ◻ For a shallow neural network, the following lemma in [@Barron:1993] is proved for real-valued functions, but it is straightforward to extend the proof to complex-valued functions. **Lemma 15** ([@Barron:1993]\*Theorem 1).
*Let $f\in\mathscr{B}^1(\mathbb{R}^d)$. Then there exists $$\label{eq:shallow} f_N(x)=\sum_{i=1}^Nc_i\sigma(\omega_i\cdot x+b_i)$$ with $\omega_i\in\mathbb{R}^d,b_i\in\mathbb{R}$ and $c_i\in\mathbb{C}$ such that $$\|\,f-f_N\,\|_{L^2(\Omega)}\le\dfrac{2\upsilon_{f,1,\Omega}}{\sqrt{N}}.$$* In this part, we shall establish the approximation error for deep neural networks. We use the $(L,N)$-network to describe a neural network with $L$ hidden layers and at most $N$ units per layer. Here $L$ denotes the number of hidden layers; e.g., the shallow neural network [\[eq:shallow\]](#eq:shallow){reference-type="eqref" reference="eq:shallow"} is a $(1,N)$-network. **Definition 16** ($(L,N)$-network). An $(L,N)$-network represents a neural network with $L$ hidden layers and at most $N$ units per layer. The activation functions of the first $L-1$ layers are all ReLU and the activation function of the last layer is the sigmoidal function. The connection weights between the input layer and the first hidden layer, and between consecutive hidden layers, are all real numbers. The connection weights between the last hidden layer and the output layer are complex numbers. Here we make some preparations for the rest of this work. The analysis in this part owes much to [@BreslerNagaraj:2020], with certain improvements that will be detailed later on. For any function $g$ defined on $[0,1]$ and symmetric about $x=1/2$, we use the notation $g_{,n}$ to denote the function obtained by repeating $g$ periodically $n$ times on $[0,1]$, i.e., $$\label{eq:gn} g_{,n}(t)=g(nt-j),\quad j=0,\dots,n-1,\quad 0\le nt-j\le 1.$$ Define $$\beta(t)=\text{ReLU}(2t)-2\text{ReLU}(2t-1)+\text{ReLU}(2t-2)=\begin{cases} 2t,&0\le t\le 1/2,\\ 2-2t,&1/2\le t\le 1,\\ 0,&\text{otherwise}.
\end{cases}$$ By definition [\[eq:gn\]](#eq:gn){reference-type="eqref" reference="eq:gn"}, $\beta_{,n}$ is a triangle wave with $n$ peaks and can be represented by $3n$ ReLUs: $$\beta_{,n}(t)=\sum_{j=0}^{n-1}\beta(nt-j),\quad 0\le t\le 1.$$ **Lemma 17**. *Let $g$ be a function defined on $[0,1]$ and symmetric about $x=1/2$, then $g_{,n_2}\circ\beta_{,n_1}=g_{,2n_1n_2}$ on $[0,1]$.* The above lemma is a rigorous statement of [@Telgarsky:2016]\*Proposition 5.1. A key example is $\cos(2\pi n_2\beta_{,n_1}(t))=\cos(4\pi n_1n_2t)$ when $t\in [0, 1]$. A geometrical explanation may be found in  [@Bolcskei:2021]\*Figure 3. We postpone the rigorous proof to Appendix [\[apd:comp\]](#apd:comp){reference-type="ref" reference="apd:comp"}. For $r\in(0,1)$, we define $$\alpha(t,r)=\chi_{[0,\infty)}(t-r/2)-\chi_{[0,\infty)}(t-(1-r)/2)=\begin{cases} \chi_{[r/2,(1-r)/2]}(t), & 0< r\le 1/2,\\ -\chi_{[(1-r)/2,r/2]}(t), & 1/2\le r< 1, \end{cases}$$ then supp$(\alpha(\cdot,r))\subset(0,1/2)$ and $\alpha(t,r)$ is symmetric about $t=1/4$.
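The composition rule in Lemma 17, which drives the depth gain below, can be checked numerically. The following sketch (our own code, with hypothetical names) builds $\beta$ from three ReLUs and verifies the special case $g=\beta$ of the lemma, namely $\beta_{,n_2}\circ\beta_{,n_1}=\beta_{,2n_1n_2}$ on a grid.

```python
def relu(t):
    return max(t, 0.0)

def beta(t):
    # the tent function 2t on [0,1/2], 2-2t on [1/2,1], built from three ReLUs
    return relu(2 * t) - 2 * relu(2 * t - 1) + relu(2 * t - 2)

def beta_n(t, n):
    # the n-peak triangle wave beta_{,n} on [0,1], a sum of n shifted tents (3n ReLUs)
    return sum(beta(n * t - j) for j in range(n))
```

Since $\beta$ is symmetric about $1/2$, Lemma 17 applies with $g=\beta$, so composing a $\beta_{,n_1}$ layer with a $\beta_{,n_2}$ layer multiplies the number of oscillations by $2n_1$.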
Define $$\gamma(t,r)=\alpha(t+1/4,r)-\alpha(t-1/4,r)+\alpha(t-3/4,r).$$ Then $\gamma(t,r)$ is symmetric about $t=1/2$ because $$\begin{aligned} \gamma(1-t,r)=&\alpha(5/4-t,r)-\alpha(3/4-t,r)+\alpha(1/4-t,r)\\ =&\alpha(t-3/4,r)-\alpha(t-1/4,r)+\alpha(t+1/4,r)=\gamma(t,r).\end{aligned}$$ By definition [\[eq:gn\]](#eq:gn){reference-type="eqref" reference="eq:gn"}, $\gamma_{,n}(\cdot,r)$ is well defined on $[0,1]$ and $$\begin{aligned} \gamma_{,n}(t,r)=&\begin{cases} \alpha(nt-j+1/4,r), & 0\le nt-j \le 1/4, \\ -\alpha(nt-j-1/4,r), & 1/4\le nt-j\le 3/4,\\ \alpha(nt-j-3/4,r), & 3/4\le nt-j\le 1, \end{cases}&& j=0,\dots,n-1\\ =&\begin{cases} \alpha(nt-j+1/4,r), & -1/4\le nt-j\le 1/4,\\ -\alpha(nt-j-1/4,r), & 1/4\le nt-j\le 3/4,\\ \end{cases} && j=0,\dots,n.\end{aligned}$$ Moreover, $\gamma_{,n}(\cdot,r)$ on $[0,1]$ can be represented by $4n$ Heaviside functions $\chi_{[0,\infty)}$, since $\alpha(nt+1/4,r)$ and $\alpha(nt-n+1/4,r)$ each need only one Heaviside function: $$\gamma_{,n}(t,r)=\sum_{j=0}^n\alpha(nt-j+1/4,r)-\sum_{j=0}^{n-1}\alpha(nt-j-1/4,r).$$ A direct consequence of the above construction is **Lemma 18**. *For $t\in[0,1]$, there holds $$\label{eq:cos} \dfrac\pi{2}\int_0^1\cos(\pi r)\gamma_{,n}(t,r)\mathrm{d}r=\cos(2\pi nt).$$* *Proof.* For any $t\in[0,1/2]$, a direct calculation gives $$\dfrac\pi{2}\int_0^1\cos(\pi r)\alpha(t,r)\mathrm{d}r=\pi\int_0^{2t}\cos(\pi r)\mathrm{d}r=\sin(2\pi t).$$ Fix a $t\in[0,1]$. If there exists an integer $j$ satisfying $0\le j\le n$ and $-1/4\le nt-j\le 1/4$, then $\gamma_{,n}(t,r)=\alpha(nt-j+1/4,r)$ and $$\dfrac\pi{2}\int_0^1\cos(\pi r)\alpha(nt-j+1/4,r)\mathrm{d}r=\sin(2\pi(nt-j+1/4))=\cos(2\pi nt).$$ Otherwise, there exists an integer $j$ satisfying $0\le j\le n-1$ and $1/4\le nt-j\le 3/4$. Then $\gamma_{,n}(t,r)=-\alpha(nt-j-1/4,r)$ and $$-\dfrac\pi{2}\int_0^1\cos(\pi r)\alpha(nt-j-1/4,r)\mathrm{d}r=-\sin(2\pi(nt-j-1/4))=\cos(2\pi nt).$$ This completes the proof of [\[eq:cos\]](#eq:cos){reference-type="eqref" reference="eq:cos"}.
◻ Now we are ready to give an approximation result for deep neural networks, which follows the framework of [@BreslerNagaraj:2020] but achieves a higher-order convergence rate with a dimension-free prefactor. **Lemma 19**. *Let $L$ be a positive integer and let $f\in\mathscr{B}^s(\mathbb{R}^d)$ with $0<sL\le 1/2$ and $\mathrm{supp}\;\widehat{f}\subset\left\{\,\xi\in\mathbb{R}^d\,\mid\,\|\,\xi\,\|_{1}\ge 1\,\right\}$. For any positive integer $N$, there exists an $(L,N)$-network $f_N$ such that $$\label{eq:L2deep} \|\,f-f_N\,\|_{L^2(\Omega)}\le\dfrac{22\upsilon_{f,s,\Omega}}{N^{sL}}.$$* *Proof.* By Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"}, for $f\in\mathscr{B}^s(\mathbb{R}^d)$, assuming first that $f$ is real-valued, we have $$f(x)=\int_{\mathbb{R}^d}\widehat{f}(\xi)e^{2\pi i\xi\cdot x}\mathrm{d}\xi=\int_{\mathbb{R}^d}\lvert\widehat{f}(\xi)\rvert\cos(2\pi(\xi\cdot x+\theta(\xi)))\mathrm{d}\xi,$$ with a proper choice of $\theta(\xi)$ such that $0\le\xi\cdot x+\theta(\xi)\le\|\,\xi\,\|_{1}+1$ for $x\in\Omega$.
For fixed $\xi$, choose $n_\xi=2^{L-1}\lceil(\|\,\xi\,\|_{1}+1)^{1/L}\rceil^L$ and $t_\xi(x)=(\xi\cdot x+\theta(\xi))/n_\xi$, then $0\le t_\xi(x)\le 1$ and by Lemma [Lemma 18](#lema:cos){reference-type="ref" reference="lema:cos"}, $$f(x)=\int_{\mathbb{R}^d}\lvert\widehat{f}(\xi)\rvert\cos(2\pi n_\xi t_\xi(x))\mathrm{d}\xi=\dfrac\pi{2}\int_{\mathbb{R}^d}\lvert\widehat{f}(\xi)\rvert\mathrm{d}\xi\int_0^1\cos(\pi r)\gamma_{,n_\xi}(t_\xi(x),r)\mathrm{d}r.$$ Define the probability measure $$\label{eq:prob} \mu(\mathrm{d}\xi,\mathrm{d}r)=\dfrac1Q\|\,\xi\,\|_{1}^{-s}\lvert\widehat{f}(\xi)\rvert\chi_{(0,1)}(r)\mathrm{d}\xi\mathrm{d}r,$$ where the normalization factor $Q$ satisfies $$Q=\int_{\mathbb{R}^d}\|\,\xi\,\|_{1}^{-s}\lvert\widehat{f}(\xi)\rvert\mathrm{d}\xi\int_0^1\mathrm{d}r\le\upsilon_{f,s,\Omega}.$$ Therefore $f(x)=\mathbb{E}_{(\xi,r)\sim\mu}F(x,\xi,r)$ with $$F(x,\xi,r)=\dfrac{\pi Q}{2}\|\,\xi\,\|_{1}^s\cos(\pi r)\gamma_{,n_\xi}(t_\xi(x),r).$$ If $\{\xi_i,r_i\}_{i=1}^m$ is an i.i.d. sequence of random samples from $\mu$, and $$\Tilde{f}=\dfrac1m\sum_{i=1}^mF(x,\xi_i,r_i),$$ then using Fubini's theorem, we obtain $$\begin{aligned} \mathbb{E}_{(\xi_i,r_i)\sim\mu}\|\,f-\Tilde{f}\,\|_{L^2(\Omega)}^2=&\int_\Omega\mathbb{E}_{(\xi_i,r_i)\sim\mu}\lvert\mathbb{E}_{(\xi,r)\sim\mu}F(x,\xi,r)-\Tilde{f}(x)\rvert^2\mathrm{d}x\\ =&\dfrac1m\int_\Omega\mathrm{Var}_{(\xi,r)\sim\mu}F(x,\xi,r)\mathrm{d}x\\ \le&\dfrac1m\mathbb{E}_{(\xi,r)\sim\mu}\|\,F(\cdot,\xi,r)\,\|_{L^\infty(\Omega)}^2.\end{aligned}$$ Since $$\|\,F(\cdot,\xi,r)\,\|_{L^\infty(\Omega)}\le\dfrac{\pi Q}{2}\|\,\xi\,\|_{1}^s,$$ we obtain $$\mathbb{E}_{(\xi_i,r_i)\sim\mu}\|\,f-\Tilde{f}\,\|_{L^2(\Omega)}^2\le\dfrac1m\mathbb{E}_{(\xi,r)\sim\mu}\|\,F(\cdot,\xi,r)\,\|_{L^\infty(\Omega)}^2\le\dfrac{\pi^2Q\upsilon_{f,s,\Omega}}{4m}.$$ By Markov's inequality, with probability at least $(1+\varepsilon)/(2+\varepsilon)$, for some $\varepsilon>0$ to be chosen later on, we obtain $$\label{eq:L2err}
\|\,f-\Tilde{f}\,\|_{L^2(\Omega)}^2\le\dfrac{(2+\varepsilon)\pi^2Q\upsilon_{f,s,\Omega}}{4m}.$$ It remains to calculate the number of units in each layer. For each $\gamma_{,n_\xi}(t_\xi(x),r)$, choose $n_1=\dots=n_L=\lceil(\|\,\xi\,\|_{1}+1)^{1/L}\rceil$, then $n_\xi=2^{L-1}n_1\dots n_L$, and by Lemma [Lemma 17](#lema:comp){reference-type="ref" reference="lema:comp"}, $\gamma_{,n_\xi}(\cdot,r)=\gamma_{,n_L}(\cdot,r)\circ\beta_{,n_{L-1}}\circ\dots\circ\beta_{,n_1}$ on $[0,1]$. Lemma [Lemma 14](#lema:sigmoidal){reference-type="ref" reference="lema:sigmoidal"} shows that the Heaviside function $\chi_{[0,\infty)}$ can be approximated by $\sigma$ in $L^2(\Omega)$, and we need at most $$\max\{3n_1,\dots,3n_{L-1},4n_L\}\le 4\lceil(\|\,\xi\,\|_{1}+1)^{1/L}\rceil\le 12\|\,\xi\,\|_{1}^{1/L}$$ units in each layer to represent $\gamma_{,n_\xi}(t_\xi(x),r)$. Denote by $N$ the total number of units in each layer; then $N\le 12\sum_{i=1}^m\|\,\xi_i\,\|_{1}^{1/L}$ and $$\mathbb{E}_{(\xi_i,r_i)\sim\mu}N^{2sL}\le 12\sum_{i=1}^m\mathbb{E}_{(\xi_i,r_i)\sim\mu}\|\,\xi_i\,\|_{1}^{2s}\le\dfrac{12m\upsilon_{f,s,\Omega}}{Q}.$$ Again, by Markov's inequality, with probability at least $(1+\varepsilon)/(2+\varepsilon)$, we obtain $$\label{eq:num} \dfrac{Q}{m}\le\dfrac{12(2+\varepsilon)\upsilon_{f,s,\Omega}}{N^{2sL}}.$$ Combining [\[eq:L2err\]](#eq:L2err){reference-type="eqref" reference="eq:L2err"} and [\[eq:num\]](#eq:num){reference-type="eqref" reference="eq:num"}, with probability at least $\varepsilon/(2+\varepsilon)$, there exists an $(L,N)$-network $f_N$ such that $$\|\,f-f_N\,\|_{L^2(\Omega)}\le\dfrac{\sqrt3(2+\varepsilon)\pi\upsilon_{f,s,\Omega}}{N^{sL}}\le\dfrac{11\upsilon_{f,s,\Omega}}{N^{sL}},$$ with a proper choice of $\varepsilon$ in the last step. Finally, if $f$ is complex-valued, we approximate the real and imaginary parts of the function separately to obtain [\[eq:L2deep\]](#eq:L2deep){reference-type="eqref" reference="eq:L2deep"}. ◻ *Remark 20*.
We assume $\mathrm{supp}\;\widehat{f}\subset\left\{\,\xi\in\mathbb{R}^d\,\mid\,\|\,\xi\,\|_{1}\ge 1\,\right\}$ in Lemma [Lemma 19](#lema:deep){reference-type="ref" reference="lema:deep"} because we want to obtain an upper bound depending only on $\upsilon_{f,s,\Omega}$. If we give up this condition, then the upper bound in [\[eq:L2deep\]](#eq:L2deep){reference-type="eqref" reference="eq:L2deep"} becomes $C\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}/N^{sL}$ for some dimension-free constant $C$. The proof is essentially the same provided that the probability measure [\[eq:prob\]](#eq:prob){reference-type="eqref" reference="eq:prob"} is replaced by $$\mu(\mathrm{d}\xi,\mathrm{d}r)=\dfrac1Q(1+\|\,\xi\,\|_{1})^{-s}\lvert\widehat{f}(\xi)\rvert\chi_{(0,1)}(r)\mathrm{d}\xi\mathrm{d}r.$$ We leave the details to the interested reader. There is relatively little work on the approximation rate of deep neural networks that employs the spectral Barron space as the target space. For deep ReLU networks, [@BreslerNagaraj:2020] has proven approximation results of $(sL/2)$-order. We shall show below that this may be improved to $sL$-order, at the cost that $\upsilon_{f,s,\Omega}$ appears in the estimate. **Theorem 21**. *Let $L$ be a positive integer and let $f\in\mathscr{B}^s(\mathbb{R}^d)$ with $0<sL\le 1/2$.
For any positive integer $N$, there exists an $(L,N+2)$-network $f_N$ such that $$\label{eq:thm2} \|\,f-f_N\,\|_{L^2(\Omega)}\le\dfrac{29\upsilon_{f,s,\Omega}}{N^{sL}}.$$ Moreover, if $f$ is a real-valued function, then the connection weights in $f_N$ are all real.* *Proof.* We may write $f=f_1+f_2$ with $$f_1(x)=\int_{\|\,\xi\,\|_{1}<1}\widehat{f}(\xi)e^{2\pi i\xi\cdot x}\mathrm{d}\xi,\qquad f_2(x)=\int_{\|\,\xi\,\|_{1}\ge 1}\widehat{f}(\xi)e^{2\pi i\xi\cdot x}\mathrm{d}\xi.$$ Then $\upsilon_{f_1,1,\Omega}\le\upsilon_{f,s,\Omega}$ and $\upsilon_{f_2,s,\Omega}\le\upsilon_{f,s,\Omega}$ because $$\widehat{f}_1(\xi)=\widehat{f}(\xi)\chi_{[0,1)}(\|\,\xi\,\|_{1})\qquad\text{and}\qquad \widehat{f}_2(\xi)=\widehat{f}(\xi)\chi_{[1,\infty)}(\|\,\xi\,\|_{1}).$$ We first approximate $f_1$ with $n_1=\lceil N/6\rceil$ units per layer. Applying Lemma [Lemma 15](#lema:low){reference-type="ref" reference="lema:low"}, we obtain a $(1,n_1)$-network $f_{1,n_1}$ such that $$\|\,f_1-f_{1,n_1}\,\|_{L^2(\Omega)}\le\dfrac{2\upsilon_{f_1,1,\Omega}}{n_1^{1/2}}\le\dfrac{2\sqrt{6}\upsilon_{f,s,\Omega}}{N^{sL}}.$$ We emphasize that a $(1,n_1)$-network can be represented by an $(L,n_1)$-network: we just need to fill the rest of the hidden layers with the identity $$t=\begin{cases} \text{ReLU}(t), & t\ge 0,\\ -\text{ReLU}(-t), & t< 0. \end{cases}$$ Next, we approximate $f_2$ with an $(L,n_2)$-network with $n_2=\lceil 5N/6\rceil$.
Applying Lemma [Lemma 19](#lema:deep){reference-type="ref" reference="lema:deep"}, we obtain an $(L,n_2)$-network $f_{2,n_2}$ such that $$\|\,f_2-f_{2,n_2}\,\|_{L^2(\Omega)}\le\dfrac{22\upsilon_{f_2,s,\Omega}}{n_2^{sL}}\le\dfrac{22\sqrt{6/5}\,\upsilon_{f,s,\Omega}}{N^{sL}}.$$ These together with the triangle inequality give the estimate [\[eq:thm2\]](#eq:thm2){reference-type="eqref" reference="eq:thm2"}, and the total number of units in each layer is $$n_1+n_2=\lceil N/6\rceil+\lceil 5N/6\rceil\le N+2.$$ If $f$ is a real-valued function, then we let $f_N=\mathrm{Re}(f_{1,n_1}+f_{2,n_2})$, and the upper bound [\[eq:thm2\]](#eq:thm2){reference-type="eqref" reference="eq:thm2"} still holds. ◻ As far as we know, the above theorem is the best available in the literature. For the shallow case $L=1$, the authors in [@MengMing:2022] have proven the $1/2$-convergence rate with $\mathscr{B}^{1/2}_p(\mathbb{R}^d)$ as the target function space, which is smaller than $\mathscr{B}^{1/2}(\mathbb{R}^d)$, and their estimate depends on the dimension as $d^{1/4}$. The upper bound in [@Siegel:2022] depends on $\upsilon_{f,0}+\upsilon_{f,1/2}$, while [\[eq:ex1\]](#eq:ex1){reference-type="eqref" reference="eq:ex1"} exemplifies that $\upsilon_{f,0}$ may be much larger than $\upsilon_{f,s}$ for some functions in $\mathscr{B}^s(\mathbb{R}^d)$, and the estimate depends upon the dimension exponentially. In contrast to these two results, the upper bound in Theorem [Theorem 21](#thm:deep){reference-type="ref" reference="thm:deep"} depends only on $\upsilon_{f,s,\Omega}$, and is independent of the dimension. For deep neural networks, a similar result for ReLU has been proven in [@BreslerNagaraj:2020] with $(sL/2)$-order, which is not optimal compared with our estimate. At first glance, our result may seem to contradict [@BreslerNagaraj:2020]\*Theorem 2.
This is not the case because the upper bound therein is $\sqrt{\upsilon_{f,0}\upsilon_{f,s}}+\upsilon_{f,0}$, which requires $f\in\mathscr{B}^s(\mathbb{R}^d)$, but is usually smaller than $\|\,f\,\|_{\mathscr{B}^s(\mathbb{R}^d)}$ for oscillatory functions; cf. Lemma [Lemma 24](#lema:decay){reference-type="ref" reference="lema:decay"}. *Remark 22*. The activation function of the last hidden layer of the $(L,N)$-network in Theorem [Theorem 21](#thm:deep){reference-type="ref" reference="thm:deep"} may be replaced by many other familiar activation functions, such as the hyperbolic tangent, SoftPlus, ELU, Leaky ReLU, ReLU$^k$ and so on, because all these activation functions can be reduced to sigmoidal functions by a shifting and scaling argument; e.g., for SoftPlus, we observe that $\text{SoftPlus}(t)-\text{SoftPlus}(t-1)$ is a sigmoidal function. Unfortunately, it is not easy to replace the ReLU in the first $L-1$ hidden layers with other activation functions. In what follows, we shall show that Theorem [Theorem 21](#thm:deep){reference-type="ref" reference="thm:deep"} is sharp if the activation function of the last hidden layer is the Heaviside function. This example is adapted from [@BreslerNagaraj:2020]. We state it briefly for completeness and postpone the proof to Appendix [\[apd:lower\]](#apd:lower){reference-type="ref" reference="apd:lower"}. **Theorem 23**. *For any fixed positive integers $L,N$ and real numbers $\varepsilon,s$ with $0<\varepsilon,sL\le 1/2$, there exists $f\in\mathscr{B}^s(\mathbb{R}^d)$ satisfying $\upsilon_{f,s,\Omega}\le1+\varepsilon$ such that for any $(L,N)$-network $f_N$ whose activation function $\sigma$ in the last layer is the Heaviside function $\chi_{[0,\infty)}$, there holds $$\label{eq:lower} \|\,f-f_N\,\|_{L^2(\Omega)}\ge\dfrac{1-\varepsilon}{8N^{sL}}.$$* # Conclusion  [\[sec:conclu\]]{#sec:conclu label="sec:conclu"} We have discussed analytical properties of the spectral Barron space.
Sharp embeddings between the spectral Barron spaces and various classical function spaces have been established. The approximation rate has been proved for deep ReLU neural networks when the spectral Barron space with a small smoothness index is employed as the target function space. There remain some open problems, such as the sup-norm error and higher-order convergence results for larger $s$, and the relations among Barron-type spaces, the variational space and the Radon bounded variation space, as well as how these spaces are related to the classical function spaces; these will be pursued in subsequent works. # Some proof details ## Proof for [\[eq:phiR\]](#eq:phiR){reference-type="eqref" reference="eq:phiR"} and [\[eq:phiRs\]](#eq:phiRs){reference-type="eqref" reference="eq:phiRs"} {#apd:phiR} *Proof.* Note that $\widehat{\phi}_R$ is a radial function. By Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"} and $\widehat{\phi}_R\in L^1(\mathbb{R}^d)$, $$\begin{aligned} \phi_R(x)=&\int_{B_R}\left(1-\dfrac{\lvert\xi\rvert^2}{R^2}\right)^\delta e^{2\pi ix\cdot\xi}\mathrm{d}\xi\\ =&\int_{-R}^Re^{2\pi i\lvert x\rvert\xi_1}\mathrm{d}\xi_1\int_{\xi_2^2+\dots+\xi_d^2<R^2-\xi_1^2}\left(1-\dfrac{\lvert\xi\rvert^2}{R^2}\right)^\delta\mathrm{d}\xi_2\dots\mathrm{d}\xi_d.\end{aligned}$$ Passing to polar coordinates and changing the variable $t=r^2/(R^2-\xi_1^2)$, we obtain $$\begin{aligned} &\quad\int_{\xi_2^2+\dots+\xi_d^2<R^2-\xi_1^2}\left(1-\dfrac{\lvert\xi\rvert^2}{R^2}\right)^\delta\mathrm{d}\xi_2\dots\mathrm{d}\xi_d\\ &=\omega_{d-2}\int_0^{\sqrt{R^2-\xi_1^2}}r^{d-2}\left(1-\dfrac{\xi_1^2+r^2}{R^2}\right)^\delta\mathrm{d}r\\ &=\dfrac{\omega_{d-2}}2R^{d-1}\left(1-\dfrac{\xi_1^2}{R^2}\right)^{\delta+(d-1)/2}\int_0^1t^{(d-3)/2}(1-t)^\delta\mathrm{d}t\\ &=\dfrac{\omega_{d-2}}2R^{d-1}\left(1-\dfrac{\xi_1^2}{R^2}\right)^{\delta+(d-1)/2}B\left(\dfrac{d-1}{2},\delta+1\right).\end{aligned}$$ Substituting this equation into
the previous one and changing the variable $\xi_1=R\cos\theta$, we get $$\begin{aligned} \phi_R(x)=&\dfrac{\omega_{d-2}}2R^{d-1}B\left(\dfrac{d-1}{2},\delta+1\right)\int_{-R}^R\left(1-\dfrac{\xi_1^2}{R^2}\right)^{\delta+(d-1)/2}e^{2\pi i\lvert x\rvert\xi_1}\mathrm{d}\xi_1\\ =&\dfrac{\pi^{(d-1)/2}\Gamma(\delta+1)R^d}{\Gamma(\delta+(d+1)/2)}\int_0^\pi\cos(2\pi\lvert x\rvert R\cos\theta)\sin^{2\delta+d}\theta\mathrm{d}\theta\\ =&\dfrac{\Gamma(\delta+1)}{\pi^\delta\lvert x\rvert^{\delta+d/2}}R^{-\delta+d/2}J_{\delta+d/2}(2\pi\lvert x\rvert R),\end{aligned}$$ where we have used $$J_\nu(x)=\dfrac{(x/2)^\nu}{\pi^{1/2}\Gamma(\nu+1/2)}\int_0^\pi\cos(x\cos\theta)\sin^{2\nu}\theta\mathrm{d}\theta,\qquad\nu>-\dfrac12$$ in the last step. The above integral representation of the Bessel function of the first kind may be found in [@Luke:1962]\*§ 1.4.5, Eq. (4). It remains to prove [\[eq:phiRs\]](#eq:phiRs){reference-type="eqref" reference="eq:phiRs"}. For $s\ge 0$, a direct calculation gives $$\begin{aligned} \upsilon_{\phi_R,s}&=\int_{B_R}\lvert\xi\rvert^s\left(1-\dfrac{\lvert\xi\rvert^2}{R^2}\right)^\delta\mathrm{d}\xi =\omega_{d-1}\int_0^Rr^{s+d-1}\left(1-\dfrac{r^2}{R^2}\right)^\delta\mathrm{d}r\\ &=\dfrac{\omega_{d-1}}2R^{s+d}\int_0^1t^{(s+d)/2-1}(1-t)^\delta\mathrm{d}t\\ &=\dfrac{\omega_{d-1}}2B\left(\dfrac{s+d}2,\delta+1\right)R^{s+d}.\end{aligned}$$ Therefore $\phi_R\in\mathscr{B}^s(\mathbb{R}^d)$. ◻ ## Proof for [\[eq:ex3\]](#eq:ex3){reference-type="eqref" reference="eq:ex3"} {#apd:ex3} *Proof.* To prove [\[eq:ex3\]](#eq:ex3){reference-type="eqref" reference="eq:ex3"}, we start with the following representation formula.
If $\widehat{f}\in L^1(\mathbb{R}^d)$ is a radial function with $\widehat{f}(\xi)=g_0(\lvert\xi\rvert)$, then $$\label{eq:radial} f(x)=\dfrac{2\pi}{\lvert x\rvert^{d/2-1}}\int_0^\infty g_0(r)r^{d/2}J_{d/2-1}(2\pi\lvert x\rvert r)\mathrm{d}r.$$ If $d=1$, then using Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"}, we obtain $$f(x)=\int_\mathbb{R}g_0(\lvert\xi\rvert)e^{2\pi ix\xi}\mathrm{d}\xi=2\int_0^\infty g_0(r)\cos(2\pi\lvert x\rvert r)\mathrm{d}r,$$ which gives [\[eq:radial\]](#eq:radial){reference-type="eqref" reference="eq:radial"}, where we have used the relation [@Luke:1962]\*§ 1.4.6, Eq. (7) $$J_{-1/2}(x)=\sqrt{\dfrac2{\pi x}}\cos(x)$$ in the last step. For $d\ge 2$, combining Lemma [Lemma 1](#lema:fourinv){reference-type="ref" reference="lema:fourinv"} and [@Stein:1971]\*Ch. IV, Theorem 3.3, we obtain [\[eq:radial\]](#eq:radial){reference-type="eqref" reference="eq:radial"}, which immediately implies $$\begin{aligned} f_p(x)&=\dfrac{2\pi}{\lvert x\rvert^{d/2-1}}\int_0^1 r^{d(1/2-1/p')}J_{d/2-1}(2\pi\lvert x\rvert r)\mathrm{d}r\\ &=\omega_{d-1}\int_0^1r^{d/p-1}{}_0F_1(;d/2;-\pi^2\lvert x\rvert^2r^2)\mathrm{d}r,\end{aligned}$$ where we have used the relation [@Luke:1962]\*§ 1.4.1, Eq. (1) $$J_\nu(x)=\dfrac{(x/2)^\nu}{\Gamma(\nu+1)}{}_0F_1(;\nu+1;-x^2/4)$$ in the last step. Changing the variable $t=r^2$ and using the identity [@Luke:1962]\*§ 1.3.2, Eq. (2) $${}_1F_2(\rho;\rho+\sigma,\beta;x)=\dfrac1{B(\rho,\sigma)}\int_0^1t^{\rho-1}(1-t)^{\sigma-1}{}_0F_1(;\beta;xt)\mathrm{d}t,$$ we get [\[eq:ex3\]](#eq:ex3){reference-type="eqref" reference="eq:ex3"}. ◻ ## Proof for Lemma [Lemma 8](#lema:Besov){reference-type="ref" reference="lema:Besov"} {#apd:Besov} *Proof.* The "if"-part is standard by [@Triebel:1983]\*§ 2.3.2, Proposition 2; § 2.7.1, Theorem. We illustrate the "only if"-part with an example, which is taken from [@Triebel:1983]\*§ 2.3.9, Proof of Theorem.
Let $f_0\in\mathscr{S}(\mathbb{R}^d)$ with supp$(\widehat{f}_0)\subset\left\{\,x\in\mathbb{R}^d\,\mid\,1\le\lvert x\rvert\le 3/2\,\right\}$. Let $f_n(x)=f_0(2^nx)$ for an integer $n$, then $$\widehat{f}_n(\xi)=2^{-dn}\widehat{f}_0(2^{-n}\xi)\qquad \text{and}\qquad \mathrm{supp}(\widehat{f}_n)\subset\left\{\,x\in\mathbb{R}^d\,\mid\,2^n\le\lvert x\rvert\le 3\times 2^{n-1}\,\right\}.$$ Choose a proper $\{\varphi_j\}_{j=0}^{\infty}\subset\mathscr{S}(\mathbb{R}^d)$ in the definition of the Besov space such that $\varphi_0(x)=1$ when $\lvert x\rvert\le 3/2$ and $\varphi_j=1$ on supp$(\widehat{f}_j)$ for $j\ge 1$; then $$\mathrm{supp}(\widehat{f}_n)\cap\mathrm{supp}(\varphi_j)=\emptyset\quad\text{if}\quad n\ge 0\quad\text{and}\quad n\neq j.$$ A direct calculation gives that $$(\varphi_j\widehat{f}_n)^\vee=\delta_{j0}f_n,\quad\text{if}\quad n\le 0\qquad\text{and}\qquad(\varphi_j\widehat{f}_n)^\vee=\delta_{jn}f_n,\quad\text{if}\quad n> 0.$$ By definition, when $n<0$, $$\|\,f_n\,\|_{B_{p,q}^\alpha(\mathbb{R}^d)}=\|\,f_n\,\|_{L^p(\mathbb{R}^d)}=2^{-dn/p}\|\,f_0\,\|_{L^p(\mathbb{R}^d)}.$$ Letting $n\to-\infty$ in the embedding relation $B_{p_1,q_1}^{\alpha_1}(\mathbb{R}^d)\hookrightarrow B_{p_2,q_2}^{\alpha_2}(\mathbb{R}^d)$ yields $p_1\le p_2$. Similarly, when $n>0$, $$\|\,f_n\,\|_{B_{p,q}^\alpha(\mathbb{R}^d)}=2^{\alpha n}\|\,f_n\,\|_{L^p(\mathbb{R}^d)}=2^{(\alpha-d/p)n}\|\,f_0\,\|_{L^p(\mathbb{R}^d)}.$$ Letting $n\to+\infty$ in the embedding relation yields $\alpha_1-d/p_1\ge\alpha_2-d/p_2$. Finally, if $\alpha_1-d/p_1=\alpha_2-d/p_2$, then $q_1\le q_2$, as proved in [@Triebel:1995]\*Theorem 3.2.1.
◻ ## Proof for [\[eq:ex5\]](#eq:ex5){reference-type="eqref" reference="eq:ex5"} {#apd:ex5} *Proof.* If $1\le p<\infty$ and $\alpha>0$, then $B_{p,1}^\alpha(\mathbb{R}^d)$ is equivalent to the space defined in [@Triebel:1983]\*§ 2.5.7, Theorem $$\Lambda_{p,1}^\alpha(\mathbb{R}^d){:}=\left\{\,f\in W^{[\alpha],p}(\mathbb{R}^d)\,\mid\,\int_{\mathbb{R}^d}\dfrac{\|\,\Delta_h^2(\nabla^{[\alpha]}f)\,\|_{L^p(\mathbb{R}^d)}}{\lvert h\rvert^{d+\{\alpha\}}}\mathrm{d}h<\infty\,\right\}$$ equipped with the norm $$\|\,f\,\|_{\Lambda_{p,1}^\alpha(\mathbb{R}^d)}{:}=\|\,f\,\|_{W^{[\alpha],p}(\mathbb{R}^d)}+\int_{\mathbb{R}^d}\dfrac{\|\,\Delta_h^2(\nabla^{[\alpha]}f)\,\|_{L^p(\mathbb{R}^d)}}{\lvert h\rvert^{d+\{\alpha\}}}\mathrm{d}h,$$ where $\alpha=[\alpha]+\{\alpha\}$ with integer $[\alpha]$ and $0<\{\alpha\}\le 1$, and $\Delta_h^2f(x)=f(x+2h)-2f(x+h)+f(x)$ [@Triebel:1983]\*§ 2.2.2, Eq. (9). For any nonnegative integer $k$, a direct calculation gives $$\nabla^k\psi_n(x)=\sum_{j=0}^k\dfrac{c_jx^{\beta_j}}{(1+in)^{d/2+j}}e^{-\pi\lvert x\rvert^2/(1+in)}$$ for some constants $\{c_j\}_{j=0}^k$, and the multi-index $\beta_j=(\beta_{j1},\dots,\beta_{jd})$ satisfies $\lvert\beta_j\rvert\le j$ with $x^{\beta_j}=x_1^{\beta_{j1}}\dots x_d^{\beta_{jd}}$. Then $$\|\,\nabla^k\psi_n\,\|_{L^p(\mathbb{R}^d)}\le C\sum_{j=0}^k\dfrac1{(1+n^2)^{d/4+j/2}}\|\,\lvert x\rvert^{\lvert\beta_j\rvert}e^{-\pi\lvert x\rvert^2/(1+n^2)}\,\|_{L^p(\mathbb{R}^d)}.$$ A direct calculation gives $$\begin{aligned} \int_{\mathbb{R}^d}\lvert x\rvert^{\lvert\beta_j\rvert p}e^{-\pi p\lvert x\rvert^2/(1+n^2)}\mathrm{d}x=&(1+n^2)^{(d+\lvert\beta_j\rvert p)/2}\int_{\mathbb{R}^d}\lvert y\rvert^{\lvert\beta_j\rvert p}e^{-\pi p\lvert y\rvert^2}\mathrm{d}y\\ =&\dfrac{\omega_{d-1}\Gamma\left((\lvert\beta_j\rvert p+d)/2\right)}{2(p\pi)^{(\lvert\beta_j\rvert p+d)/2}} (1+n^2)^{(d+\lvert\beta_j\rvert p)/2}. 
\end{aligned}$$ Therefore, there exists $C$ depending only on $d,p,k$ such that $$\label{eq:ex4} \|\,\nabla^k\psi_n\,\|_{L^p(\mathbb{R}^d)}\le C\sum_{j=0}^k\dfrac{(1+n^2)^{d/(2p)+\lvert\beta_j\rvert/2}}{(1+n^2)^{d/4+j/2}}\le C(1+n^2)^{-d(p-2)/(4p)}.$$ If $f\in W^2_p(\mathbb{R}^d)$, then $$\Delta_h^2f(x)=\int_0^1\mathrm{d}t\int_t^{1+t}\nabla^2f(x+sh)\mathrm{d}sh\cdot h=\int_0^2\nabla^2f(x+sh)\mathrm{d}s\int_{\max(s-1,0)}^{\min(s,1)}\mathrm{d}th\cdot h.$$ Therefore, $$\lvert\Delta_h^2f(x)\rvert\le\lvert h\rvert^2\int_0^2\lvert\nabla^2f(x+sh)\rvert\mathrm{d}s.$$ By Minkowski's inequality, we obtain $$\|\,\Delta_h^2f(x)\,\|_{L^p(\mathbb{R}^d)}\le\lvert h\rvert^2\int_0^2\|\,\nabla^2f(\cdot+sh)\,\|_{L^p(\mathbb{R}^d)}\mathrm{d}s=2\lvert h\rvert^2\|\,\nabla^2f\,\|_{L^p(\mathbb{R}^d)}.$$ Splitting the integral part of the $\Lambda_{p,1}^\alpha$-norm into two parts, we get $$\begin{aligned} &\int_{\lvert h\rvert<1}\dfrac{\|\,\Delta_h^2f\,\|_{L^p(\mathbb{R}^d)}}{\lvert h\rvert^{d+\{\alpha\}}}\mathrm{d}h+\int_{\lvert h\rvert>1}\dfrac{\|\,\Delta_h^2f\,\|_{L^p(\mathbb{R}^d)}}{\lvert h\rvert^{d+\{\alpha\}}}\mathrm{d}h\\ \le&2\|\,\nabla^2f\,\|_{L^p(\mathbb{R}^d)}\int_{\lvert h\rvert<1}\lvert h\rvert^{2-d-\{\alpha\}}\mathrm{d}h+4\|\,f\,\|_{L^p(\mathbb{R}^d)}\int_{\lvert h\rvert>1}\lvert h\rvert^{-d-\{\alpha\}}\mathrm{d}h\\ =&2\omega_{d-1}\left(\dfrac{\|\,\nabla^2f\,\|_{L^p(\mathbb{R}^d)}}{2-\{\alpha\}}+\dfrac{2\|\,f\,\|_{L^p(\mathbb{R}^d)}}{\{\alpha\}}\right)\\ \le&C\|\,f\,\|_{W^2_p(\mathbb{R}^d)}. \end{aligned}$$ Since $\nabla^{[\alpha]}\psi_n\in W^2_p(\mathbb{R}^d)$, combining the above inequality with [\[eq:ex4\]](#eq:ex4){reference-type="eqref" reference="eq:ex4"} yields $$\|\,\psi_n\,\|_{\Lambda_{p,1}^\alpha(\mathbb{R}^d)} \le C\|\,\psi_n\,\|_{W^{[\alpha]+2}_p(\mathbb{R}^d)}\le C(1+n^2)^{-d(p-2)/(4p)},$$ where $C$ is a constant depending on $p,\alpha$ and $d$ but independent of $n$. The same bound then holds for $\|\,\psi_n\,\|_{B_{p,1}^\alpha(\mathbb{R}^d)}$. 
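As an aside, the Gaussian-moment identity used above can be verified numerically in radial coordinates (our sketch, with $\omega_{d-1}=2\pi^{d/2}/\Gamma(d/2)$ the area of the unit sphere):

```python
import math
from scipy.integrate import quad

# int_{R^d} |x|^a e^{-pi p |x|^2} dx
#   = omega_{d-1} * Gamma((a + d)/2) / (2 (p pi)^{(a + d)/2})
for d, a, p in [(1, 0, 2.0), (2, 1, 2.0), (3, 2, 3.0)]:
    omega = 2 * math.pi ** (d / 2) / math.gamma(d / 2)
    lhs, _ = quad(lambda r: omega * r ** (a + d - 1)
                  * math.exp(-math.pi * p * r * r), 0, math.inf)
    rhs = omega * math.gamma((a + d) / 2) / (2 * (p * math.pi) ** ((a + d) / 2))
    assert abs(lhs - rhs) < 1e-7
```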
◻ ## Proof for Lemma [Lemma 17](#lema:comp){reference-type="ref" reference="lema:comp"} {#proof-for-lemma-lemacomp}  [\[apd:comp\]]{#apd:comp label="apd:comp"} *Proof.* First we show that $g_{,n}$ is symmetric about $t=1/2$. By definition and the symmetry of $g$, $$g_{,n}(1-t)=g(n(1-t)-j)=g(nt-n+j+1)=g(nt-k)=g_{,n}(t)$$ for some integers $j,k$ satisfying $0\le j,k\le n-1$ and $k=n-j-1$. For a fixed $t\in[0,1]$, there exist integers $j,k$ satisfying $0\le j\le 2n_1-1,0\le k\le n_2-1$ such that $0\le 2n_1n_2t-n_2j-k\le 1$; then $0\le n_2(2n_1t-j)-k\le 1$ and $$g_{,2n_1n_2}(t)=g(2n_1n_2t-n_2j-k)=g(n_2(2n_1t-j)-k)=g_{,n_2}(2n_1t-j).$$ By definition, $$\beta_{,n}(t)=\begin{cases} 2nt-2j,& 0\le nt-j\le 1/2,\\ 2+2j-2nt, & 1/2\le nt-j\le 1, \end{cases}\quad j=0,\dots,n-1.$$ If $j$ is even, then $j=2l$ for some integer $l$ satisfying $0\le l\le n_1-1$ and $0\le n_1t-l\le1/2$. Therefore $$g_{,n_2}(\beta_{,n_1}(t))=g_{,n_2}(2n_1t-2l)=g_{,n_2}(2n_1t-j).$$ Otherwise, $j$ is odd and $j=2l+1$ for some integer $l$ satisfying $0\le l\le n_1-1$ and $1/2\le n_1t-l\le 1$. Therefore $$g_{,n_2}(\beta_{,n_1}(t))=g_{,n_2}(2+2l-2n_1t)=g_{,n_2}(1+j-2n_1t)=g_{,n_2}(2n_1t-j).$$ This gives $g_{,n_2}\circ\beta_{,n_1}=g_{,2n_1n_2}$ on $[0,1]$. ◻ ## Proof for Theorem [Theorem 23](#thm:lower){reference-type="ref" reference="thm:lower"} {#proof-for-theorem-thmlower}  [\[apd:lower\]]{#apd:lower label="apd:lower"} **Lemma 24**. 
*Let $n,R>0$ and let $f(x)=\cos(2\pi nx_1)e^{-\pi\lvert x\rvert^2/R}$; then $f\in\mathscr{B}^s(\mathbb{R}^d)$ with $$\label{eq:decay} \upsilon_{f,s,\Omega}\le\left(n+\dfrac{d}{\pi\sqrt{R}}\right)^s\qquad\text{for}\quad 0\le s\le 1.$$* *Proof.* For any $R>0$, the Fourier transform of the dilated Gaussian $e^{-\pi x_j^2/R}$ reads as $$\widehat{e^{-\pi x_j^2/R}}=\sqrt{R}e^{-\pi R\xi_j^2}.$$ A direct calculation gives $$\begin{aligned} \widehat{f}(\xi)=&\dfrac12\int_{\mathbb{R}}e^{-\pi x_1^2/R}\Bigl(e^{-2\pi ix_1(\xi_1-n)}+e^{-2\pi ix_1(\xi_1+n)}\Bigr)\mathrm{d}x_1\prod_{j=2}^d\int_\mathbb{R}e^{-\pi x_j^2/R-2\pi ix_j\xi_j}\mathrm{d}x_j\\ =&\dfrac{R^{d/2}}2\Bigl(e^{-\pi R(\xi_1-n)^2}+e^{-\pi R(\xi_1+n)^2}\Bigr)\prod_{j=2}^de^{-\pi R\xi_j^2}\\ =&R^{d/2}e^{-\pi R(\lvert\xi\rvert^2+n^2)}\cosh(2\pi nR\xi_1).\end{aligned}$$ It is clear that $f,\widehat{f}\in L^1(\mathbb{R}^d)$ and the pointwise Fourier inversion theorem holds true, and $$\upsilon_{f,0,\Omega}=\int_{\mathbb{R}^d}\lvert\widehat{f}(\xi)\rvert\mathrm d\xi=\int_{\mathbb{R}^d}\widehat{f}(\xi)\mathrm d\xi=f(0)=1,$$ where we have used the positivity of $\widehat{f}$. 
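The closed form just computed can be confirmed numerically in the one-dimensional case (a sketch of ours, not part of the proof):

```python
import math
from scipy.integrate import quad

# d = 1 instance of the transform above: for f(x) = cos(2 pi n x) e^{-pi x^2 / R},
# fhat(xi) = sqrt(R) e^{-pi R (xi^2 + n^2)} cosh(2 pi n R xi).
n, R = 2.0, 1.0
for xi in (0.0, 0.7, 1.5, 2.0):
    # the imaginary part vanishes by symmetry, so integrate the real part only
    num, _ = quad(lambda t: math.cos(2 * math.pi * n * t)
                  * math.exp(-math.pi * t * t / R)
                  * math.cos(2 * math.pi * t * xi),
                  -12, 12, epsabs=1e-12, limit=400)
    closed = math.sqrt(R) * math.exp(-math.pi * R * (xi * xi + n * n)) \
        * math.cosh(2 * math.pi * n * R * xi)
    assert abs(num - closed) < 1e-8
```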
Next, using the elementary identities $$\sqrt{R}\int_{\mathbb{R}}e^{-\pi R\xi_j^2}\mathrm{d}\xi_j=1\quad\text{and}\quad \sqrt{R}\int_{\mathbb{R}}\lvert\xi_j\rvert e^{-\pi R\xi_j^2}\mathrm{d}\xi_j=\dfrac1{\pi\sqrt{R}},$$ we obtain $$\begin{aligned} \upsilon_{f,1,\Omega}&=R^{d/2}\int_{\mathbb{R}}\lvert\xi_1\rvert e^{-\pi R(\xi_1-n)^2}\mathrm{d}\xi_1\prod_{j=2}^d\int_{\mathbb{R}}e^{-\pi R\xi_j^2}\mathrm{d}\xi_j\\ &\quad+R^{d/2}\int_{\mathbb{R}}e^{-\pi R(\xi_1-n)^2}\mathrm{d}\xi_1\sum_{j=2}^d\int_{\mathbb{R}}\lvert\xi_j\rvert e^{-\pi R\xi_j^2}\mathrm{d}\xi_j\prod_{k\neq1,j}\int_{\mathbb{R}}e^{-\pi R\xi_k^2}\mathrm{d}\xi_k\\ &\le\sqrt{R}\int_{\mathbb{R}}(\lvert\xi_1-n\rvert+n)e^{-\pi R(\xi_1-n)^2}\mathrm{d}\xi_1+\dfrac{d-1}{\pi\sqrt{R}}\\ &=n+\dfrac{d}{\pi\sqrt{R}}.\end{aligned}$$ Using the interpolation inequality [\[eq:inter2\]](#eq:inter2){reference-type="eqref" reference="eq:inter2"}, we obtain [\[eq:decay\]](#eq:decay){reference-type="eqref" reference="eq:decay"}. ◻ *Proof for Theorem [Theorem 23](#thm:lower){reference-type="ref" reference="thm:lower"}.* Define $n=2^{L+2}N^L$ and $f(x)=n^{-s}\cos(2\pi nx_1)e^{-\pi\lvert x\rvert^2/R}$ with large enough $R$ such that $\upsilon_{f,s,\Omega}\le 1+\varepsilon$ by Lemma [Lemma 24](#lema:decay){reference-type="ref" reference="lema:decay"} and $e^{-\pi\lvert x\rvert^2/R}\ge 1-\varepsilon$ when $x\in\Omega$. Fix $x_2,\dots,x_d$; then any $(L,N)$-network $f_N$ can be viewed as a one-dimensional $(L,N)$-network, i.e. $f_N(\cdot,x_2,\dots,x_d):[0,1]\to\mathbb{C}$. Divide $[0,1]$ into the $n$ intervals $[j/n,(j+1)/n]$ with $j=0,\dots,n-1$. There exist at least $n-2^{L+1}N^L=2^{L+1}N^L$ intervals on which $f_N$ does not change sign [@Telgarsky:2016]\*Lemma 3.2. 
Without loss of generality, we assume $f_N(\cdot,x_2,\dots,x_d)\ge 0$ on some interval $[j/n,(j+1)/n]$; then $$\int_{j/n}^{(j+1)/n}(f(x)-f_N(x))^2\mathrm{d}x_1\ge\dfrac{(1-\varepsilon)^2}{n^{2s}}\int_{(4j+1)/(4n)}^{(4j+3)/(4n)}\cos^2(2\pi nx_1)\mathrm{d}x_1\ge\dfrac{(1-\varepsilon)^2}{4n^{2s+1}},$$ because $\cos(2\pi nx_1)\le 0$ when $2\pi j+\pi/2\le 2\pi nx_1\le 2\pi j+3\pi/2$. Summing over at least $n-2^{L+1}N^L=2^{L+1}N^L$ such intervals gives $$\begin{aligned} \|\,f-f_N\,\|_{L^2(\Omega)}^2\ge&\int_{[0,1]^{d-1}}\mathrm{d}x_2\dots\mathrm{d}x_d\int_0^1(f(x)-f_N(x))^2\mathrm{d}x_1\\ \ge&\dfrac{2^{L+1}N^L(1-\varepsilon)^2}{4n^{2s+1}}\ge\dfrac{(1-\varepsilon)^2}{2^{2sL+4s+3}N^{2sL}}\ge\dfrac{(1-\varepsilon)^2}{64N^{2sL}}. \end{aligned}$$ Taking the square root of both sides of the inequality, we obtain [\[eq:lower\]](#eq:lower){reference-type="eqref" reference="eq:lower"}. ◻ [^1]: The authors thank Professor Renjin Jiang at Capital Normal University for the helpful discussion. The work of Liao and Ming was supported by National Natural Science Foundation of China through Grants No. 11971467 and No. 12371438.
--- abstract: | We consider a network scenario in which parts of a secret are distributed among its nodes. Within the network, a group of attackers is actively trying to obtain the complete secret, while there is also the issue of some nodes malfunctioning or being absent. In this paper, we address this problem by employing graph multicoloring techniques, focusing on the case of a single attacker and varying numbers of malfunctioning nodes. author: - "Tanja Vojković [^1]" - Damir Vukičević date: May 2023 title: Graph multicoloring in solving node malfunction and attacks in a secret-sharing network --- Keywords: graph multicoloring, secret-sharing, networks # Introduction This paper builds upon our previous work titled 'Multicoloring of Graphs to Secure a Secret' [@nas], where we introduced a specific type of attack on secret-sharing networks and proposed a solution to protect secrets in such scenarios. In a secret-sharing network, a group divides a secret into parts and distributes them among participants, represented as nodes in a network of common acquaintances. Attackers infiltrate this network with dual objectives: obtaining all parts of the secret and disrupting network connections to prevent secret reconstruction. Our research seeks to answer questions such as how many secret pieces are required to ensure the secret's safety against a specified number of attackers, and what network topology and piece distribution configurations achieve this security. We model the secret pieces by colors and treat the problem as one of graph multicoloring. In our previous work [@nas], we tackled the problem of securing secrets for 1, 2, 3, and 4 attackers. In this paper, we extend our analysis to address the issue of malfunctioning network nodes.\ In real-world networks, node failures are common, and network design must account for this. 
If the participants (nodes) in a network are persons guarding a piece of the secret, one can imagine a node failure as a person simply giving up on the given task or being removed from the network by some outside means; if the nodes represent some piece of technology, then a failure can mean a simple malfunction. In Section [2](#def){reference-type="ref" reference="def"} we give definitions of basic concepts and preliminaries needed to study the problem, while Section [3](#main){reference-type="ref" reference="main"} provides our main results. # Definitions and preliminary results {#def} We will use standard definitions and notation from graph theory, [@gross]. By $N(u)=N_G(u)$ we denote the set of neighbors of vertex $u$ in the graph $G$, and for $A\subseteq V(G)$, by $N_G(A)$ we denote the set of neighbors of all the vertices in $A$. Further, we denote $$N_0(u)=N(u)\cup \{u\},$$ $$N_0(A)=N(A)\cup A.$$ By $G-u$ we denote a graph obtained from graph $G$ by deleting the vertex $u\in V(G)$ and all its incident edges, and for $A\subseteq V(G)$, by $G-A$, we denote a graph obtained from graph $G$ by deleting all the vertices in $A$ and their incident edges. Let us first define a graph multicoloring, and the specific multicoloring we used to solve the problem in [@nas].\ Let $k\in\mathbb{N}$. A **vertex $k$-multicoloring** of a graph $G$ is a function $\kappa:V(G)\to \mathcal{P}(\{1,...,k\})$. So, while in vertex coloring of a graph, each vertex is colored with one color, in multicoloring, each vertex is colored by some subset of the set of $k$ colors. Let $a\in\mathbb{N}$. 
A vertex $k$-multicoloring of a graph $G$ is called an **$a$-resistant vertex $k$-multicoloring** if the following holds: For each $A\subseteq V(G)$ with $a$ vertices, there is a component $H$ of the graph $G-N_0(A)$ such that $${\displaystyle\bigcup\limits_{u\in V(H)}}\kappa(u)=\{1,...,k\}\text{.}$$ An $a$-resistant vertex $k$-multicoloring is called a **highly $a$-resistant vertex $k$-multicoloring** if for each $A\subseteq V(G)$ with $a$ vertices it holds that ${\displaystyle\bigcup\limits_{u\in A}}\kappa(u)\neq\{1,...,k\}$. We can see that the definition of an $a$-resistant vertex $k$-multicoloring refers to any $a$ attackers not being able to disrupt the secret-sharing in the graph, because after the $a$ attackers and their neighbors leave the graph, there will exist a component $H$ of the remaining graph, which has all the colors, i.e. all parts of the secret. The condition that $a$ attackers do not collect all $k$ pieces of the secret is required for a multicoloring to be a highly $a$-resistant vertex $k$-multicoloring. We will denote by $a-HR$ the condition that ${\displaystyle\bigcup\limits_{u\in A}}\kappa(u)\neq\{1,...,k\}$, for each $A\subseteq V(G)$ with $a$ vertices.\ Now let us take into account possible missing or malfunctioning nodes in the network. Let $a,m\in\mathbb{N}_0$. A vertex $k$-multicoloring of a graph $G$ is called an **$(a,m)$-resistant vertex $k$-multicoloring** if the following holds: For all subsets $A,M\subseteq V(G)$, such that $|A|=a$, $|M|=m$, there exists a component $H$ of the graph $G-(N_{0}(A)\cup M)$ such that $${\displaystyle\bigcup\limits_{u\in V(H)}}\kappa(u)=\{1,...,k\}\text{.}$$ An $(a,m)$-resistant vertex $k$-multicoloring is called a **highly $(a,m)$-resistant vertex $k$-multicoloring** if the $a-HR$ condition holds.\ Let $a,m\in\mathbb{N}_0$. 
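The resistance definitions above can be tested mechanically on small graphs. The sketch below is our own illustration (not from the paper); all names in it are ours. It brute-forces the highly $(a,m)$-resistant property and checks it on two disjoint triangles with three colors, each vertex missing a distinct color.

```python
from itertools import combinations

def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in adj[u] if w in vertices)
        seen |= comp
        comps.append(comp)
    return comps

def is_highly_resistant(adj, kappa, a, m, k):
    """Brute-force test of the highly (a, m)-resistant property."""
    V, colors = set(adj), set(range(k))
    for A in combinations(V, a):
        # a-HR condition: the attackers never collect all k colors
        if set().union(*(kappa[v] for v in A)) == colors:
            return False
        removed = set(A).union(*(adj[v] for v in A))   # N_0(A)
        for M in combinations(V, m):
            survivors = V - removed - set(M)
            # some surviving component must carry every color
            if not any(set().union(*(kappa[v] for v in c)) == colors
                       for c in components(survivors, adj)):
                return False
    return True

# Two disjoint triangles; vertex i misses color i mod 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
kappa = {v: {0, 1, 2} - {v % 3} for v in adj}
assert is_highly_resistant(adj, kappa, a=1, m=1, k=3)      # 1 attacker + 1 failure
assert not is_highly_resistant(adj, kappa, a=1, m=2, k=3)  # but not 2 failures
```

Removing one attacker's closed neighborhood deletes a whole triangle, and each failure must hit the other components twice before any of them loses a color, which is why one failure is survivable but two are not.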
If there exists a graph $G$ with $n$ vertices, and a $k\in\mathbb{N}$, such that $G$ has a highly $(a,m)$-resistant vertex $k$-multicoloring $\kappa$, we say that the pair $(G,\kappa)$ **realizes** the quadruple $(a,m,n,k)$. Moreover, we will often say that $G$ **admits** a highly $(a,m)$-resistant vertex $k$-multicoloring $\kappa$. It is easy to see that a result analogous to Theorem 3.1 from [@nas] holds; we state it as a proposition and omit the proof. **Proposition 1**. *Let $(G,\kappa)$ be a pair that realizes a quadruple $(a,m,n,k)$, for some $a,m,n,k\in\mathbb N_0$.* 1. *There exists a graph $G'$ with $n+1$ vertices and a highly $(a,m)$-resistant vertex $k$-multicoloring $\kappa'$ of $G'$ such that $(G',\kappa')$ realizes the quadruple $(a,m,n+1,k)$.* 2. *There exists a highly $(a,m)$-resistant vertex $(k+1)$-multicoloring $\kappa'$ of $G$.* On the basis of this proposition, for given $a,m\in\mathbb{N}_0$, we can search for the minimal number of colors $k$ for which a highly $(a,m)$-resistant vertex $k$-multicoloring exists, together with a graph $G$ with $n$ vertices that admits such a multicoloring.\ Let $a,m\in\mathbb{N}_0$. By $K(a,m,n)$ we denote the minimal number of colors such that there exists a graph $G$ with $n$ vertices and a highly $(a,m)$-resistant vertex $K(a,m,n)$-multicoloring of $G$. If, for some $a,m,n\in\mathbb{N}_0$, a graph with $n$ vertices that admits a highly $(a,m)$-resistant vertex $k$-multicoloring does not exist for any $k\in\mathbb{N}_0$, we write $K(a,m,n)=\infty$.\ Note that in [@nas] we found $K(a,0,n)$, for $a=1,2,3,4$. # Main results {#main} The problem of finding the number $K(a,m,n)$ can be studied for any given $a$ and $m$. In this paper we solve the case of $a=1$ and $m\in\mathbb{N}$. It is easy to see that for $a=0$ the following holds. **Proposition 2**. *Let $m\in\mathbb{N}$. $$K(0,m,n)=\left\{ \begin{array} [c]{cc}% +\infty, & n\leq m;\\ 1, & n>m. 
\end{array} \right.$$* For the proof of Theorem [Theorem 5](#glavni){reference-type="ref" reference="glavni"} we first need two lemmas. **Lemma 3**. *Let $G$ be a graph with $n$ vertices, and $f,f':V(G)\rightarrow\mathcal{P}(\{1,2,...,k\})$, vertex multicoloring functions on $G$ for which the $1$-HR condition holds, such that for each $u\in V(G)$:* 1. *$|f(u)|=k-1$;* 2. *$f'(u)\subseteq f(u)$.* *If $(G,f)$ does not realize a quadruple $(1,m,n,k)$, for some $m\in\mathbb{N}$, then neither does $(G,f')$.* *Proof.* The function $f$ is obtained as an extension of $f'$: to each vertex $u\in V(G)$ with $|f'(u)|\leq k-2$ we add colors until it has all but one color. It is easy to see that if $f$ is not a highly $(1,m)$-resistant $k$-multicoloring of $G$, then neither is $f'$. ◻ **Lemma 4**. *Let $m\in\mathbb{N}$ and let $f:\left\langle 0,+\infty\right\rangle \rightarrow\mathbb{R}$ be a function defined with $$f(x)=x+\left\lfloor \frac{m}{x}\right\rfloor \text{.}$$ Let $x_{m}$ be such that $f(x_{m})\leq f(x)$, for all $x>0$, and $x_{m}$ is the smallest value for which that holds. Then $f|_{\left\langle 0,x_{m}\right\rangle }$ is monotonically decreasing.* *Proof.* Let $m=b^{2}+c$, $0\leq c\leq2b$. Let us prove that $x_{m}\in\{b,b+1,b+2\}$. 1\) Let $x=b$. We have $$x+\left\lfloor \dfrac{m}{x}\right\rfloor =b+\left\lfloor \dfrac{b^{2}+c}% {b}\right\rfloor =2b+\left\lfloor \frac{c}{b}\right\rfloor \text{.}$$ 2\) For $x=b-l$, $0<l<b$, we have $$x+\left\lfloor \dfrac{m}{x}\right\rfloor =b-l+\left\lfloor \dfrac{b^{2}% +c}{b-l}\right\rfloor=$$ $$=b-l+\left\lfloor \dfrac{b^{2}-l^{2}+l^{2}+c}% {b-l}\right\rfloor =2b+\left\lfloor \frac{l^{2}+c}{b-l}\right\rfloor \text{.}%$$ This expression is at least as large as the expression from 1). 
3\) For $x=b+l$, $l\in\mathbb{R}$ we have $$x+\left\lfloor \dfrac{m}{x}\right\rfloor=b+l+\left\lfloor \dfrac{b^{2}% +c}{b+l}\right\rfloor =b+l+\left\lfloor \dfrac{b^{2}-l^{2}+l^{2}+c}% {b+l}\right\rfloor =$$ $$=2b+\left\lfloor \frac{l^{2}+c}{b+l}\right\rfloor \text{.}%$$ By further calculation we can easily see that this expression is smaller or equal to the expression from 1) for $0\leq l\leq2$. This proves $x_{m}\in\{b,b+1,b+2\}$. Now let $y_{1},y_{2}\in\mathbb{R}$ such that $y_{1}\leq y_{2}\leq x_{m}$. Let us prove $f(y_{1})\geq f(y_{2})$. We consider some subcases depending on $x_{m}$. a\) $x_{m}=b$. Let $y_{1}+l_{1}=b$, $y_{2}+l_{2}=b$, $l_{1},l_{2}\in\mathbb{R}$. It holds $l_{1}\geq l_{2}$. We have $$f(y_{1})=y_{1}+\left\lfloor \frac{m}{y_{1}}\right\rfloor =b-l_{1}% +\left\lfloor \frac{m}{b-l_{1}}\right\rfloor =$$ $$=b-l_{1}+\left\lfloor \frac{b^{2}-l_{1}^{2}+l_{1}^{2}+c}{b-l_{1}% }\right\rfloor =2b+\left\lfloor \frac{l_{1}^{2}+c}{b-l_{1}}\right\rfloor \text{;}$$ $$f(y_{2})=y_{2}+\left\lfloor \frac{m}{y_{2}}\right\rfloor =b-l_{2}% +\left\lfloor \frac{m}{b-l_{2}}\right\rfloor =$$ $$=b-l_{2}+\left\lfloor \frac{b^{2}-l_{2}^{2}+l_{2}^{2}+c}{b-l_{2}% }\right\rfloor =2b+\left\lfloor \frac{l_{2}^{2}+c}{b-l_{2}}\right\rfloor .$$ Now the claim follows from $l_{1}\geq l_{2}$. b\) $x_{m}=b+1$. Let $y_{1}+l_{1}=b+1$, $y_{2}+l_{2}=b+1$, $l_{1},l_{2}\in\mathbb{R}$. It holds $l_{1}\geq l_{2}$. 
We have $$f(y_{1})=y_{1}+\left\lfloor \frac{m}{y_{1}}\right\rfloor =b+1-l_{1} +\left\lfloor \frac{m}{b+1-l_{1}}\right\rfloor =$$ $$=b+1-l_{1}+\left\lfloor \frac{(b+1)^{2}-l_{1}^{2}+l_{1}^{2}+c-2b-1}{b+1-l_{1}}\right\rfloor =$$ $$=2b+2+\left\lfloor \frac{l_{1}^{2}+c-2b-1}{b+1-l_{1}% }\right\rfloor \text{;}$$ $$f(y_{2})=y_{2}+\left\lfloor \frac{m}{y_{2}}\right\rfloor =b+1-l_{2}% +\left\lfloor \frac{m}{b+1-l_{2}}\right\rfloor =$$ $$=b+1-l_{2}+\left\lfloor \frac{(b+1)^{2}-l_{2}^{2}+l_{2}^{2}+c-2b-1}% {b+1-l_{2}}\right\rfloor =$$ $$=2b+2+\left\lfloor \frac{l_{2}^{2}+c-2b-1}{b+1-l_{2}% }\right\rfloor \text{,}%$$ and the claim again follows from $l_{1}\geq l_{2}$. c\) $x_{m}=b+2$. It is easy to see that this case follows analogously to case b). We have shown $f(y_{1})\geq f(y_{2})$ for $y_{1},y_{2}$ such that $y_{1}\leq y_{2}\leq x_{m}$, so it holds that $f|_{\left\langle 0,x_{m}\right\rangle }$ is monotonically decreasing. ◻ **Theorem 5**. *For $m\in\mathbb{N}$ it holds $$K(1,m,n)=\left\{ \begin{array} [c]{cc}\scriptstyle +\infty, & n\leq2+m+\sqrt{4m+1}\text{;}\\ \left\lfloor \scriptstyle\frac{1}{2}(n-m-\sqrt{n^{2}+m^{2}-4n-2mn+4})\right\rfloor \scriptstyle+1, & n>2+m+\sqrt{4m+1}\text{.} \end{array} \right.\ $$* *Proof.* Let us define the set $S$ of ordered pairs $(m,n)$ such that there is at least one $k$ such that $(1,m,n,k)$ can be realized. We will prove the theorem in several steps. First, we will show that for a given $\left( m,n\right) \in S$, the minimal $k$ for which $(1,m,n,k)$ can be realized, i.e. $k=K(1,m,n)$, satisfies the inequality $$n\geq k+m+\left\lfloor \frac{m}{k-1}\right\rfloor +2\text{.}%$$ Then, by solving that inequality we will obtain the minimal $n$ for which $K(1,m,n)\neq\infty$, for given $m\in\mathbb{N}$. Finally, we will find the minimal number of colors needed for given $m$ and $n$. First we prove two auxiliary claims. 
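(As an aside of ours, not part of the paper: the behavior of $x\mapsto x+\lfloor m/x\rfloor$ established in Lemma 4 can be spot-checked over integer arguments. Ties in the minimum may extend below $b$, so the check asserts that the minimum is attained at some point of $\{b,b+1,b+2\}$ and that the values are non-increasing up to the first minimizer.)

```python
from math import isqrt

# For f(x) = x + floor(m / x) over positive integers, writing m = b^2 + c
# with 0 <= c <= 2b (so b = isqrt(m)): the minimum is attained inside
# {b, b+1, b+2}, and f is non-increasing up to its first minimizer.
for m in range(1, 301):
    vals = [x + m // x for x in range(1, m + 4)]
    lo = min(vals)
    b = isqrt(m)
    assert lo == min(vals[b - 1], vals[b], vals[b + 1])   # x = b, b+1, b+2
    x_m = 1 + vals.index(lo)                              # first minimizer
    assert all(vals[i] >= vals[i + 1] for i in range(x_m - 1))
```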
**CLAIM 1)** Let $(G,\kappa)$ be a pair that realizes $(1,m,n,k)$ for some $m,n,k\in\mathbb{N}$ and let $\Delta(G)$ be the maximal degree in $G$. It holds $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{k-1}\right\rfloor +3\text{.}$$ Let us prove this. Suppose, to the contrary, that $$n\leq\Delta(G)+m+\left\lfloor \frac{m}{k-1}\right\rfloor +2\text{.}$$ Because the $1$-HR condition holds, $\kappa (v)\neq\{1,...,k\}$, for all $v\in V(G)$. So each vertex is missing at least one color, and without loss of generality we may assume that every vertex is missing exactly one color, by Lemma [Lemma 3](#prosirenje){reference-type="ref" reference="prosirenje"}. For $u\in V$ such that $d(u)=\Delta(G)$, $G^{\prime}=G-N_{0}(u)$ has $n-\Delta(G)-1$ vertices. Let us define the sets $V_{1},...,V_{k}\subseteq V(G^{\prime})$ so that a vertex $v$ is in the set $V_{i}$ if and only if $i\notin \kappa(v)$. Every vertex is missing exactly one color, so all the vertices in $G^{\prime}$ belong to precisely one of $V_{1},...,V_{k}$. Let $V_{l}$ be a set of maximal cardinality among $V_{1},...,V_{k}$. Note that $\left\vert V_{l}\right\vert \geq\left\lceil \frac{n-\Delta(G)-1}{k}\right\rceil$. For any set $M$, $|M|=m$, such that $M\supseteq V\left( G^{\prime}\right) \backslash V_{l}$, all the vertices in $G'-M$ are missing the color $l$, so the $k$-multicoloring of $G$ is not highly $(1,m)$-resistant. Such a set $M$ exists when $$n-\Delta(G)-1-\left\lceil \frac{n-\Delta(G)-1}{k}\right\rceil \leq m\text{.}$$ Let us prove that this holds. 
By the assumption we have $$n-\Delta(G)-1\leq m+\left\lfloor \frac{m}{k-1}\right\rfloor +1\text{,}$$ and if we write $m=b(k-1)+c$, $0\leq c<k-1$, we have $$n-\Delta(G)-1-\left\lceil \frac{n-\Delta(G)-1}{k}\right\rceil\leq$$ $$\leq m+\left\lfloor \frac{m}{k-1}\right\rfloor +1-\left\lceil \frac{m+\left\lfloor \frac{m}{k-1}\right\rfloor +1}{k}\right\rceil\leq$$ $$\leq m+b+1-\left\lceil \frac{b(k-1)+c+b+1}{k}\right\rceil=$$ $$=m+b+1-b-\left\lceil \frac{c+1}{k}\right\rceil \leq m\text{,}$$ which proves the claim. We conclude that $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{k-1}\right\rfloor +3\text{.}$$ **CLAIM 2)** Let $m,n\in\mathbb{N}$ and let $(G,\kappa)$ be a pair which realizes $(1,m,n,k)$, for some $k\in\mathbb{N}$. Let $\Delta(G)$ be the highest degree in $G$. It holds $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +3\text{.}$$ Let us prove this by contradiction. Let $u$ be the vertex with the highest degree in $G$. For $A=\{u\}$, $G-N_{0}(A)$ has $n-\Delta(G)-1$ vertices. We will show that if $$n\leq\Delta(G)+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +2$$ holds, then there will exist a set $M\subseteq V(G)$, $|M|=m$, such that $G-(N_{0}(A)\cup M)$ is an empty graph. Let us call a vertex pruned if all its neighbors are in the set $M$. Since $\Delta(G)$ is the highest degree, there exists a set $M$ such that there are at least $\left\lfloor \dfrac{m}{\Delta(G)}\right\rfloor$ pruned vertices in $G-(N_{0}(A)\cup M)$. If $G-N_{0}(A)$ has at most $\left\lfloor \dfrac{m}{\Delta(G)}\right\rfloor +m+1$ vertices, such an $M$ leaves at most one vertex of $G-(N_{0}(A)\cup M)$ that is not a pruned vertex. However, that vertex is then isolated in $G-(N_{0}(A)\cup M)$. So we conclude that if $$n\leq\Delta(G)+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +2$$ holds, no graph $G$ with $n$ vertices can realize $(1,m,n,k)$ for any $k\in\mathbb{N}$, because all the vertices in $G-(N_{0}(A)\cup M)$ will be isolated, and the $1$-HR condition must hold. 
Therefore $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +3\text{.}$$ This concludes the proof of Claim 2. Let $m\in\mathbb{N}$ and let $$n_{m}=\min_{\Delta_{0}\in\mathbb{N}}\{\Delta_{0}+m+\left\lfloor \frac{m}{\Delta_{0}}\right\rfloor +3\}\text{,}$$ and let $\Delta_{m}$ be a value of $\Delta_{0}$ for which this minimum is obtained. (If there is more than one such value choose the smallest one). Furthermore, let us prove that $(1,m,n_{m},\Delta_{m}+1)$ can be realized. The realization is a pair $(G_{m},\kappa)$ given as follows $$G_{m}=\underset{\left\lfloor \dfrac{n_{m}}{\Delta_{m}+1}\right\rfloor \text{ times}}{\underbrace{K_{\Delta_{m}+1}\cup K_{\Delta_{m}+1}\cup...\cup K_{\Delta_{m}+1}}}\cup K_{l},$$ $$\text{ where }l=n_{m}-\left\lfloor \dfrac{n_{m}% }{\Delta_{m}+1}\right\rfloor \cdot\left( \Delta_{m}+1\right) \text{,}$$ with function $\kappa$ that gives each vertex all but one color in such a way that every two vertices in the same component miss different colors. The realization of some examples is illustrated in Figure [1](#fig:1_11_16){reference-type="ref" reference="fig:1_11_16"}. ![Realizations for $(1,4,11,3)$ and $(1,8,16,4)$](1_11_16.png){#fig:1_11_16}    We will now prove that the smallest $k$, for which $(1,m,n,k)$ can be realized for given $\left( m,n\right) \in S$, satisfies the expression $$n\geq k+m+\left\lfloor \frac{m}{k-1}\right\rfloor +2\text{.}$$ Let $\left( m,n\right) \in S$ and let $k_{0}$ be the smallest $k$ for which $(1,m,n,k)$ can be realized. We consider 3 cases. 1\) $k_{0}>\Delta_{m}+1$. From Claim 2 we have $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +3\geq$$ $$\geq \Delta_{m}+m+\left\lfloor \frac{m}{\Delta_{m}}\right\rfloor +3\text{.}$$ We know that $(1,m,n_{m},\Delta_{m}+1)$ can be realized so because $n\geq n_{m}$, from Proposition [Proposition 1](#uvodna){reference-type="ref" reference="uvodna"} it follows that $(1,m,n,\Delta_{m}+1)$ can be realized. 
But now obviously the chosen $k_{0}$ is not the smallest $k$ for which $(1,m,n,k)$ can be realized, so this case cannot happen. 2\) $k_{0}=\Delta_{m}+1$. It now holds $$n\geq\Delta_{m}+m+\left\lfloor \frac{m}{\Delta_{m}}\right\rfloor +3=k_{0}+m+\left\lfloor \frac{m}{k_{0}-1}\right\rfloor +2\text{,}$$ so the claim stands. 3\) $k_{0}<\Delta_{m}+1$. Let $G$ be a graph which realizes $(1,m,n,k_{0})$, and let $\Delta(G)$ be the highest degree in $G$. We consider two subcases. 3.1.) $k_{0}\leq\Delta(G)+1$. Now from Claim 1 it holds $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{k_{0}-1}\right\rfloor +3\geq k_{0}+m+\left\lfloor \frac{m}{k_{0}-1}\right\rfloor +2\text{,}$$ so the claim stands. 3.2.) $k_{0}>\Delta(G)+1$. Now we have $$\Delta(G)+1<k_{0}<\Delta_{m}+1\text{,}$$ so from Claim 2 it follows $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +3\text{,}$$ and from Lemma [Lemma 4](#pad){reference-type="ref" reference="pad"} it follows that $$n\geq\Delta(G)+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +3\geq k_{0}+m+\left\lfloor \frac{m}{k_{0}-1}\right\rfloor +2\text{,}$$ because $\Delta(G)<k_{0}\leq\Delta_{m}$, and the function $f(\Delta (G))=\Delta(G)+1+m+\left\lfloor \frac{m}{\Delta(G)}\right\rfloor +2$ is monotonically decreasing on $\left\langle 0,\Delta_{m}\right\rangle$. We have proven that the minimal number of colors, $k_{0}\geq2$, such that $(1,m,n,k_{0})$ can be realized satisfies $$\label{eq:1} \centering n\geq k_{0}+m+\left\lfloor \frac{m}{k_{0}-1}\right\rfloor +2\text{, }$$ for given $\left( m,n\right) \in S$. Moreover, the graph $$G_{0}=\underset{\left\lfloor \dfrac{n}{k_{0}}\right\rfloor \text{ times} }{\underbrace{K_{k_{0}}\cup K_{k_{0}}\cup...\cup K_{k_{0}}}}\cup K_{l},\text{ where }l=n-\left\lfloor \dfrac{n}{k_{0}}\right\rfloor \cdot k_{0}\text{,}$$ with function $\kappa$ that gives each vertex all but one color, in such a way that every two vertices in the same component miss different colors, realizes $(1,m,n,k_{0})$. 
Hence indeed $$K(1,m,n)=k_{0}\text{.}$$ Further, for every $m,n\in\mathbb{N}$, if there is $k_{0}$ that satisfies (1), then $(m,n)\in S$. Moreover, (1) implies that for each $m\in\mathbb{N}$ there exists an $n\in\mathbb{N}$ such that $(m,n)\in S$. Now let us find $K(1,m,n)$, and the minimal $n$ for which $K(1,m,n)\neq \infty$. Let $m,n\in\mathbb{N}$. Note that the right-hand side of inequality (1) is an integer, hence (1) is equivalent to $$n+1>k_{0}+m+\frac{m}{k_{0}-1}+2\text{.}$$ This inequality is equivalent to $$k_{0}^{2}+k_{0}(m-n)+n-1<0\text{,}$$ and by solving it we get the interval $$k_{0}\in\left\langle \frac{1}{2}(n-m-\sqrt{D}),\frac{1}{2}(n-m+\sqrt {D})\right\rangle \text{,}$$ where $D=n^{2}+m^{2}-4n-2mn+4$. Let us denote that interval by $I_{k}$. So $K(1,m,n)\neq\infty$ if and only if there exists an integer $k_{0}>1$ in $I_{k}$. Let us find the conditions for $n$, for given $m\in\mathbb{N}$. Obviously, if $D<0$ then $K(1,m,n)=\infty$, because the expression $k_{0} ^{2}+k_{0}(m-n)+n-1$ is never negative. For $D=0$ we obtain $I_{k} =\emptyset$, and it remains to consider the case $D>0$. The values for $n$, for which $D>0$, are in the set $$\left( \left\langle -\infty,2+m-2\sqrt{m}\right\rangle \cup\left\langle 2+m+2\sqrt{m},+\infty\right\rangle \right) \cap\mathbb{N}\text{.}$$ Let us define functions $n_{1},n_{2}:\mathbb{N}\rightarrow\mathbb{R}$ with $$n_{1}(m)=2+m-2\sqrt{m},$$ $$n_{2}(m)=2+m+2\sqrt{m}.$$ If some $n<n_{1}(m)$ admitted a $k\in\mathbb{N}$ such that $(1,m,n,k)$ can be realized, then, because of Proposition [Proposition 1](#uvodna){reference-type="ref" reference="uvodna"}, such a $k$ could also be found for all $n\in\mathbb{N}$, $n\geq n_{1}(m)$, and we have already proven that this is impossible for $n\in\left[ n_{1}(m),n_{2}(m)\right] \cap\mathbb{N}$ (note that this set is non-empty). So we conclude that $n>n_{2}(m)$ must hold. 
Hence, $$k_{0}=\left\lfloor \frac{1}{2}(n-m-\sqrt{D})\right\rfloor +1,$$ and $$\left\lfloor \frac{1}{2}(n-m-\sqrt{D})\right\rfloor +1<\frac{1}{2}% (n-m+\sqrt{D}),\text{ and }n>n_{2}(m)\text{.}%$$ Let us distinguish two cases. CASE 1) $\frac{1}{2}(n-m-\sqrt{D})+1<\frac{1}{2}(n-m+\sqrt{D}).$\ This is equivalent to $D>1$, which is further equivalent to $$n\in\left( \left\langle -\infty,2+m-\sqrt{4m+1}\right\rangle \cup\left\langle 2+m+\sqrt{4m+1},+\infty\right\rangle \right) \cap\mathbb{N}\text{.}%$$ Let us define functions $n_{3},n_{4}:\mathbb{N}\rightarrow\mathbb{R}$ with $$n_{3}(m)=2+m-\sqrt{4m+1}\text{,}$$ $$n_{4}(m)=2+m+\sqrt{4m+1}\text{.}$$ Since $n_{3}(m)<n_{1}(m)<n_{2}(m)<n_{4}(m)$, for all $m\in\mathbb{N}$, this reduces to $n>n_{4}(m)$. CASE 2) $\frac{1}{2}(n-m-\sqrt{D})+1\geq\frac{1}{2}(n-m+\sqrt{D})$.\ Then $n_{2}(m)<n\leq n_{4}(m)$. We will now prove that $I_{k}\cap\mathbb{N}=\emptyset$ holds. Let us distinguish two subcases. 2.1) $n<n_{4}(m)$. We need to prove that the smallest integer larger than $n_{2}\left( m\right)$ is at least $n_{4}\left( m\right)$, i.e. that $$\left\lfloor n_{2}\left( m\right) \right\rfloor +1\geq n_{4}\left( m\right);$$ $$\left\lfloor 2+m+2\sqrt{m}\right\rfloor +1\geq2+m+\sqrt{4m+1};$$ $$1+\left\lfloor 2\sqrt{m}\right\rfloor\geq\sqrt{4m+1}.$$ Let us denote $\left\lfloor 2\sqrt{m}\right\rfloor =b$. We may write $4m=b^{2}+c$, $0\leq c\leq2b$. Now we have to prove $$1+b\geq\sqrt{b^{2}+c+1}.$$ Squaring both sides, this holds precisely when $2b\geq c$, so the claim is proven. 2.2) For $n=n_{4}(m)$ it holds $I_{k}\cap\mathbb{N}=\emptyset$. Let us consider the case $n=n_{4}(m)=2+m+\sqrt{4m+1}$. Since $n\in\mathbb{N}$, this is possible only if $\sqrt{4m+1}\in\mathbb{N}$. 
For this value of $n$, $I_{k}$ is equal to $$I_{k}=\left\langle \frac{1}{2}+\frac{1}{2}\sqrt{4m+1},\frac{3}{2}+\frac{1}{2}\sqrt{4m+1}\right\rangle \text{.}$$ It is easy to see that the length of $I_{k}$ is precisely $1$ and that the bounds of the interval are natural numbers. But this means $I_{k}\cap\mathbb{N}=\emptyset$. We have proven that no $n\in\left\langle n_{2}(m),n_{4}(m)\right] \cap\mathbb{N}$ can produce $k\in I_{k}\cap\mathbb{N}$, so we conclude that $K(1,m,n)=\infty$ if and only if $n\leq n_{4}(m)$, while for $n>n_{4}(m)$ we have $$K(1,m,n)=\left\lfloor \frac{1}{2}(n-m-\sqrt{n^{2}+m^{2}-4n-2mn+4})\right\rfloor +1\text{,}$$ which concludes the proof of the theorem. ◻ **Remark 6**. *Theorem [Theorem 5](#glavni){reference-type="ref" reference="glavni"} solves the problem of finding the minimal number of nodes and the minimal number of pieces a secret must be divided into, in order for the network to be resilient to the attack of $1$ agent that steals the pieces given to him and betrays his neighbors, and to $m\in\mathbb{N}$ missing or malfunctioning nodes. It also gives one possible network configuration and piece distribution. For example, if there are $8$ malfunctioning nodes, as well as $1$ agent, the network must have a minimum of $16$ nodes and the secret has to be divided into a minimum of $4$ parts in order to preserve it. One possible distribution is given in Figure [1](#fig:1_11_16){reference-type="ref" reference="fig:1_11_16"}.* **Remark 7**. *The case of $3$ attackers and $1$ malfunctioning node is analyzed in [@nas2].* Vojković, T., Vukičević, D., Zlatić, V. (2018). Multicoloring of graphs to secure a secret. Rad Hrvatske akademije znanosti i umjetnosti. Matematičke znanosti, (534=22), 1-22. Vojković, T., Vukičević, D. Highly resistant multicoloring with 3 attackers and 1 malfunctioning vertex. 2nd Croatian Combinatorial Days, 195. West, D. B., Introduction to Graph Theory (Vol. 2), Prentice Hall, Upper Saddle River, 2001.
[^1]: Corresponding author: tanja\@pmfst.hr, Rudjera Boškovića 33, 21000 Split, Croatia
--- abstract: | In this paper we focus on the solution of online problems with time-varying, linear equality and inequality constraints. Our approach is to design a novel online algorithm by leveraging the tools of control theory. In particular, for the case of equality constraints only, using robust control we design an online algorithm with asymptotic convergence to the optimal trajectory, differently from the alternatives that achieve non-zero tracking error. When also inequality constraints are present, we show how to modify the proposed algorithm to account for the wind-up induced by the nonnegativity constraints on the dual variables. We report numerical results that corroborate the theoretical analysis, and show how the proposed approach outperforms state-of-the-art algorithms both with equality and inequality constraints. address: - Department of Information Engineering (DEI), University of Padova, Italy - School of Electrical Engineering and Computer Science and Digital Futures, KTH Royal Institute of Technology, Sweden author: - Umberto Casti - Nicola Bastianello - Ruggero Carli - Sandro Zampieri bibliography: - references.bib title: | A Control Theoretical Approach to\ Online Constrained Optimization --- [^1] *Keywords:* online optimization, constrained optimization, control theory, anti-windup # Introduction {#sec:introduction} Recent advances in technology have made *online optimization* problems increasingly relevant in a wide range of applications, *e.g.* control [@liao-mcpherson_semismooth_2018; @paternain_realtime_2019], signal processing [@hall_online_2015; @fosson_centralized_2021; @natali_online_2021], and machine learning [@shalev_online_2011; @dixit_online_2019; @chang_distributed_2020]. Online problems are characterized by costs and constraints that change over time, mirroring changes in the dynamic environments that arise in these applications.
The focus then shifts from solving a single problem (as in static optimization) to solving a sequence of problems *in real time*. Specifically, in this paper we are interested in online optimization problems with linear equality and inequality constraints $$\label{eq:problem-intro} \begin{split} &{\mathbold{x}}_k^* = \mathop{\mathrm{arg\,min}}_{{\mathbold{x}}\in {\mathbb{R}}^n} f_k({\mathbold{x}})\\ &\text{s.t.} \ {\mathbold{G}}_k {\mathbold{x}}= {\mathbold{h}}_k \quad {\mathbold{G}}^{\prime}_k {\mathbold{x}}\leq {\mathbold{h}}_k^{\prime} \end{split} \qquad k \in {\mathbb{N}}$$ where consecutive cost functions and sets of constraints arrive every ${T_\mathrm{s}}> 0$ seconds. In the following, we assume that each problem in [\[eq:problem-intro\]](#eq:problem-intro){reference-type="eqref" reference="eq:problem-intro"} has a unique solution. Drawing from [@simonetto_timevarying_2020], we can distinguish two approaches to online optimization: *unstructured* and *structured*. Unstructured algorithms are designed by applying static optimization methods (*e.g.* gradient descent-ascent) to each (static) problem in the sequence [\[eq:problem-intro\]](#eq:problem-intro){reference-type="eqref" reference="eq:problem-intro"}. The convergence of these algorithms has been analyzed in *e.g.* [@dixit_online_2019; @shalev_online_2011; @hall_online_2015; @fosson_centralized_2021] for unconstrained problems, and [@selvaratnam_numerical_2018; @cao_online_2019; @zhang_online_2021] for constrained ones; see also the surveys [@dallanese_optimization_2020; @simonetto_timevarying_2020]. However, straightforwardly repurposing static methods for an online scenario does not leverage in any way knowledge of the dynamic nature of these problems. And indeed, usual convergence analyses guarantee that the output of the online algorithm tracks the optimal trajectory $\{ {\mathbold{x}}_k^* \}_{k \in {\mathbb{N}}}$ with only finite precision. 
As a consequence, the alternative approach of structured methods has received increasing attention. "Structured" refers to the fact that the online algorithms are designed by incorporating information -- a model -- of the cost and constraints variability, with the goal of tracking $\{ {\mathbold{x}}_k^* \}_{k \in {\mathbb{N}}}$ with improved precision. One widely studied class of structured methods is that of *prediction-correction* algorithms [@simonetto_class_2016; @simonetto_prediction_2017; @fazlyab_prediction_2018; @li_online_2021; @bastianello_extrapolation_2023], which leverage a set of assumptions on the rate of change of [\[eq:problem-intro\]](#eq:problem-intro){reference-type="eqref" reference="eq:problem-intro"} as a rudimentary model to provably increase tracking precision. Recently, some of the authors have proposed a structured algorithm for unconstrained problems, which is designed using a fully fledged model of the cost function [@bastianello_internal_2022]. Accessing a detailed model of the problem allows us to draw from control theory to further improve tracking precision over previous algorithms. In this paper we focus on the control theoretical design of online algorithms, with the goal of extending it to solve constrained problems as well. By taking this approach, we place our contribution at the productive intersection of control theory and optimization, both static and online. In particular, the design of optimization algorithms using control theoretical techniques has been explored in [@lessard_analysis_2016; @sundararajan_canonical_2019; @zhang_unified_2021; @scherer_convex_2021; @franca_gradient_2021; @scherer_optimization_2023] for static optimization, and [@shahrampour_online_2017; @shahrampour_distributed_2018; @simonetto_optimization_2023; @davydov_contracting_2023] for online optimization.
In particular, [@shahrampour_online_2017; @shahrampour_distributed_2018] exploit a linear model for the evolution of $\{ {\mathbold{x}}_k^* \}_{k \in {\mathbb{N}}}$ to analyze convergence of online algorithms, [@simonetto_optimization_2023] leverages Kalman-filtering for *stochastic* online optimization, and [@davydov_contracting_2023] uses contraction analysis to design and analyze continuous-time online algorithms. In turn, online optimization has also been used extensively as a tool to design controllers, giving rise to *feedback optimization*. As the name suggests, in feedback optimization an online optimization algorithm is connected in a feedback loop with a dynamical system [@bernstein_online_2019; @colombino_online_2020; @hauswirth_timescale_2021], as in the model predictive control (MPC) set-up [@liao-mcpherson_semismooth_2018]. Specifically, the output of the algorithm serves as control input to the system, whose output in turn acts on the online problem formulation. We are now ready to discuss the proposed contribution. As mentioned above, we focus on the solution of online problems with linear equality and inequality constraints, cf. [\[eq:problem-intro\]](#eq:problem-intro){reference-type="eqref" reference="eq:problem-intro"}. In particular, the goal is to extend the (structured) control theoretical approach of [@bastianello_internal_2022] to handle these problems. The key to this approach is interpreting the online problem as the plant, for which we have a model, that needs to be controlled. We start our development by working on quadratic online problems with linear equality constraints, which are relevant in and of themselves in signal processing and machine learning [@dixit_online_2019; @shalev_online_2011]. 
This class of problems can be reformulated as *linear control problems with uncertainties*, which allows us to design a novel online algorithm by leveraging the internal model principle and techniques from robust control, as well as linear algebra results on saddle matrices. The proposed algorithms can then be shown to achieve *zero* tracking error, differently from unstructured methods and alternative structured ones. However, the proposed algorithm cannot be applied in a straightforward manner when also inequality constraints are present. Indeed, these constraints translate into non-negativity constraints on the dual variables, and the problem cannot be cast as a linear control problem. But by noticing that non-negativity constraints act as a *saturation*, we are able to incorporate them in the algorithm with the use of an *anti wind-up* technique. We conclude the paper with numerical results that compare the proposed algorithm to alternative methods, showing its promising performance when both equality and inequality constraints are present. To summarize, we offer the following contributions: 1. We take a control theoretical approach to design online algorithms for problems with linear equality and inequality constraints. In particular, when the cost is quadratic and only equality constraints are present, we show how a zero tracking error can be achieved. 2. We extend the proposed algorithm to deal with inequality constraints as well, by leveraging an anti wind-up mechanism. 3. We present numerical simulations to show how the proposed algorithms can outperform alternative unstructured methods, both with equality and inequality constraints, and both for quadratic and non-quadratic costs. 
# Problem formulation and background {#sec:background} **Notation.** We denote by $\mathbb{N}$, $\mathbb{R}$ the natural and real numbers, respectively, and by $\mathbb{R}\left[z\right],\mathbb{R}\left(z\right)$ the set of polynomials and of rational functions in $z$ with real coefficients. Vectors and matrices are denoted by bold letters, *e.g.* ${\mathbold{x}}\in\mathbb{R}^n$ and ${\mathbold{A}}\in \mathbb{R}^{n\times n}$. The symbol ${\mathbold{I}}$ denotes the identity, ${\mathbf{1}}$ denotes the vector of all $1\mathrm{s}$ and ${\mathbf{0}}$ denotes the vector of all $0\mathrm{s}$. The $\mathrm{2}$-norm of a vector and the induced $\mathrm{2}$-norm of a matrix are both denoted by $\left\lVert\cdot\right\rVert$. Moreover, in the following, the entry-wise partial order in $\mathbb{R}^n$ is considered. Therefore, the notation ${\mathbold{x}}\leq {\mathbold{y}}$ denotes the fact that the inequality holds for every pair of corresponding entries in ${\mathbold{x}}$ and ${\mathbold{y}}$. The symbol ${\mathbold{A}}\succeq {\mathbold{B}}$ (or ${\mathbold{A}}\succ {\mathbold{B}}$) means that the matrix ${\mathbold{A}}-{\mathbold{B}}$ is positive semi-definite (or positive definite). $\otimes$ denotes the Kronecker product and $\mathop{\mathrm{diag}}$ denotes a (block) diagonal matrix built from the arguments. For a function $f({\mathbold{x}})$ from $\mathbb{R}^n$ to $\mathbb{R}$, we denote by $\nabla f$ its gradient while for a function $f\left({\mathbold{x}},{\mathbold{w}}\right)$ from $\mathbb{R}^{n}\times\mathbb{R}^m$ to $\mathbb{R}$, we denote $\nabla_{{\mathbold{x}}}f\left({\mathbold{x}},{\mathbold{w}}\right)$ and $\nabla_{{\mathbold{w}}}f\left({\mathbold{x}},{\mathbold{w}}\right)$ its gradient with respect to ${\mathbold{x}}$ and ${\mathbold{w}}$, respectively. 
$f({\mathbold{x}})$ is ${\underaccent{\bar}{\lambda}}$-strongly convex, ${\underaccent{\bar}{\lambda}}> 0$, iff $f({\mathbold{x}}) - \frac{{\underaccent{\bar}{\lambda}}}{2}\left\lVert{\mathbold{x}}\right\rVert^2$ is convex, and ${\bar{\lambda}}$-smooth iff $\nabla f({\mathbold{x}})$ is ${\bar{\lambda}}$-Lipschitz continuous. Given a symmetric matrix ${\mathbold{A}}$, with ${\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right)$ and ${\bar{\lambda}}\left({\mathbold{A}}\right)$ we denote the minimum and the maximum eigenvalue of ${\mathbold{A}}$, respectively, while for any matrix ${\mathbold{B}}$ with $\underaccent{\bar}{\sigma}\left({\mathbold{B}}\right)$ and $\bar{\sigma}\left({\mathbold{B}}\right)$ we denote the minimum and the maximum singular value of ${\mathbold{B}}$. Moreover, with $\kappa\left({\mathbold{B}}\right)$ we denote the condition number of ${\mathbold{B}}$. The map $\mathop{\mathrm{proj}}_{\geq 0}(\cdot): \mathbb{R}^n \to \mathbb{R}^n_{\geq 0}$ represents the function that returns the closest element (in Euclidean norm) in the non-negative orthant. Finally, we denote by $\mathcal{Z}\left[\cdot\right]$ the $\mathcal{Z}$-transform of a given signal.
## Problem formulation {#subsec:problem-formulation} We begin by focusing on a specific sub-class of [\[eq:problem-intro\]](#eq:problem-intro){reference-type="eqref" reference="eq:problem-intro"}, that is, *quadratic, linearly constrained, online* problems of the following form: $$\label{eq:quadratic-online-optimization-general} \begin{split} {\mathbold{x}}_k^* &= \mathop{\mathrm{arg\,min}}_{{\mathbold{x}}\in {\mathbb{R}}^n} f_k({\mathbold{x}})\\ &\ \begin{aligned} \text{s.t.} &\ {\mathbold{G}}{\mathbold{x}}= {\mathbold{h}}_k \\ &\ {\mathbold{G}}^{\prime} {\mathbold{x}}\leq {\mathbold{h}}_k^{\prime} \end{aligned} \end{split} \qquad k \in {\mathbb{N}},$$ where ${\mathbold{x}}\in {\mathbb{R}}^n$, $f_k({\mathbold{x}}) \coloneqq \frac{1}{2} {\mathbold{x}}^\top {\mathbold{A}}{\mathbold{x}}+ {\mathbold{b}}_k^{\top} {\mathbold{x}}$ and ${\mathbold{A}}\in {\mathbb{R}}^{n\times n}$, ${\mathbold{G}}\in {\mathbb{R}}^{p\times n}$ and ${\mathbold{G}}^{\prime} \in {\mathbb{R}}^{p^{\prime}\times n}$ are fixed matrices, while ${\mathbold{b}}_k \in \mathbb{R}^n$, ${\mathbold{h}}_k \in \mathbb{R}^p$ and ${\mathbold{h}}_k^{\prime} \in \mathbb{R}^{p^\prime}$ are time-varying. We make the following assumptions. [\[as:generic-online-problem-A\]]{#as:generic-online-problem-A label="as:generic-online-problem-A"} The symmetric matrix ${\mathbold{A}}$ is such that $$\label{eq:estA} {\underaccent{\bar}{\lambda}}{\mathbold{I}}\preceq {\mathbold{A}}\preceq {\bar{\lambda}}{\mathbold{I}},$$ with $0<{\underaccent{\bar}{\lambda}}\leq {\bar{\lambda}}<+\infty$. This is equivalent to imposing that the cost functions $\{ f_k \}_{k \in {\mathbb{N}}}$ are ${\underaccent{\bar}{\lambda}}$-strongly convex and ${\bar{\lambda}}$-smooth for any time $k \in {\mathbb{N}}$.
[\[as:constraints\]]{#as:constraints label="as:constraints"}(i) The constraint matrix $\left[{\mathbold{G}}^{\top} \,|\,{{\mathbold{G}}^{\prime}}^{\top}\right]^{\top} \in {\mathbb{R}}^{\left(p+p^{\prime}\right) \times n}$ is full row rank. (ii) The symmetric matrix ${\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}$ is such that $$\label{eq:estGAGT} \underaccent{\bar}{\mu}{\mathbold{I}}\preceq {\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top} \preceq \bar{\mu}{\mathbold{I}},$$ with $0<\underaccent{\bar}{\mu} \leq \bar{\mu} < + \infty$. Assumptions [\[as:generic-online-problem-A\]](#as:generic-online-problem-A){reference-type="ref" reference="as:generic-online-problem-A"} and [\[as:constraints\]](#as:constraints){reference-type="ref" reference="as:constraints"} imply that each problem in [\[eq:quadratic-online-optimization-general\]](#eq:quadratic-online-optimization-general){reference-type="eqref" reference="eq:quadratic-online-optimization-general"} has a unique minimizer, and we can define the optimal trajectory $\{ {\mathbold{x}}_k^* \}_{k \in {\mathbb{N}}}$. In particular, Assumption [\[as:generic-online-problem-A\]](#as:generic-online-problem-A){reference-type="ref" reference="as:generic-online-problem-A"} is widely used in online optimization [@dallanese_optimization_2020], since the strong convexity of the cost implies that, at any $k\in {\mathbb{N}}$, there is a unique primal solution ${\mathbold{x}}_k^*$. Moreover, Assumption [\[as:constraints\]](#as:constraints){reference-type="ref" reference="as:constraints"} (i) guarantees that the dual solution is also unique [@boyd_vandenberghe_2004 p. 523]. [\[rem:robust\]]{#rem:robust label="rem:robust"} In the following, we assume that the online algorithm only has access to an oracle for the gradient.
Moreover, we assume that in the design of this algorithm we can use only the values ${\underaccent{\bar}{\lambda}}$, ${\bar{\lambda}}$, $\underaccent{\bar}{\mu}$ and $\bar{\mu}$ appearing in [\[eq:estA\]](#eq:estA){reference-type="eqref" reference="eq:estA"} and [\[eq:estGAGT\]](#eq:estGAGT){reference-type="eqref" reference="eq:estGAGT"}. This assumption is made to align with gradient methods that our technique aims to improve, where the algorithm does not need to possess full knowledge of ${\mathbold{A}}$ and ${\mathbold{G}}$. Concerning the bound [\[eq:estGAGT\]](#eq:estGAGT){reference-type="eqref" reference="eq:estGAGT"} on ${\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}$ notice that the values of $\underaccent{\bar}{\mu}$ and $\bar{\mu}$ can be obtained from ${\underaccent{\bar}{\lambda}}$, ${\bar{\lambda}}$ and from bounds on the singular values of ${\mathbold{G}}$. Indeed, observe that $$\label{eq:BoundsGAinvGT} \frac{\underaccent{\bar}{\sigma}^2\left({\mathbold{G}}\right)}{{\bar{\lambda}}\left({\mathbold{A}}\right)} \leq {\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right) \leq {\bar{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right) \leq \frac{\bar{\sigma}^2\left({\mathbold{G}}\right)}{{\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right)}.$$ Our objective is to design an online algorithm that can *track the optimizer sequence* $\{ {\mathbold{x}}_k^* \}_{k \in {\mathbb{N}}}$ *in a real-time fashion*. Let $\{ {\mathbold{x}}_k \}_{k \in {\mathbb{N}}}$ be the output of such an algorithm. 
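As a quick numerical illustration of the sandwich bound above (a toy sketch of ours, with hand-picked matrices): for a diagonal ${\mathbold{A}}$ and a single constraint row ${\mathbold{G}}$, the scalar ${\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}$ can be computed in closed form and compared against $\underaccent{\bar}{\sigma}^2({\mathbold{G}})/{\bar{\lambda}}({\mathbold{A}})$ and $\bar{\sigma}^2({\mathbold{G}})/{\underaccent{\bar}{\lambda}}({\mathbold{A}})$.

```python
# Toy check of the bound on G A^{-1} G^T for a 1 x n constraint row.
# With A = diag(a_1, ..., a_n) and G = [g_1 ... g_n], G A^{-1} G^T is the
# scalar sum g_i^2 / a_i, and sigma_min(G) = sigma_max(G) = ||G||.
a = [2.0, 5.0]            # eigenvalues of A: lambda_min = 2, lambda_max = 5
g = [1.0, 2.0]            # constraint row G

GAG = sum(gi * gi / ai for gi, ai in zip(g, a))   # = 1/2 + 4/5 = 1.3
sigma2 = sum(gi * gi for gi in g)                 # ||G||^2 = 5 for a 1-row G

lower = sigma2 / max(a)   # sigma_min(G)^2 / lambda_max(A) = 1.0
upper = sigma2 / min(a)   # sigma_max(G)^2 / lambda_min(A) = 2.5
assert lower <= GAG <= upper
```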
The goal is to ensure that $$\label{eq:tracking-error} \limsup_{k \to \infty} \left\lVert{\mathbold{x}}_k - {\mathbold{x}}_k^*\right\rVert \leq B < \infty.$$ Notice how the dynamic nature of the problem imposes an important limitation on the algorithm, as it needs to compute the next output ${\mathbold{x}}_k$ within the time interval between the arrival of $f_{k-1}$ and $f_k$, namely $[(k-1) {T_\mathrm{s}}, k {T_\mathrm{s}})$. As discussed in section [1](#sec:introduction){reference-type="ref" reference="sec:introduction"}, in this paper we focus on a *control theoretical* approach to designing online algorithms that can achieve [\[eq:tracking-error\]](#eq:tracking-error){reference-type="eqref" reference="eq:tracking-error"}. We start our development by focusing on problems with equality constraints only, for which we propose a novel algorithm in section [3](#subsec:eqConstr){reference-type="ref" reference="subsec:eqConstr"}, and analyze its convergence. In section [4](#sec:ineq-constraints){reference-type="ref" reference="sec:ineq-constraints"} we then extend the applicability of the algorithm to inequality constraints as well, by leveraging anti wind-up techniques. # Online Optimization with Equality Constraints {#subsec:eqConstr} In this section we will analyze Problem [\[eq:quadratic-online-optimization-general\]](#eq:quadratic-online-optimization-general){reference-type="eqref" reference="eq:quadratic-online-optimization-general"} with only equality constraints, namely we will study the following problem: $$\label{eq:quadratic-online-optimization} \begin{split} {\mathbold{x}}_k^* &= \mathop{\mathrm{arg\,min}}_{{\mathbold{x}}\in {\mathbb{R}}^n} f_k({\mathbold{x}})\\ \text{s.t.} &\ {\mathbold{G}}{\mathbold{x}}= {\mathbold{h}}_k \end{split} \qquad k \in {\mathbb{N}},$$ where $f_k({\mathbold{x}}) \coloneqq \frac{1}{2} {\mathbold{x}}^\top {\mathbold{A}}{\mathbold{x}}+ {\mathbold{b}}_k^{\top} {\mathbold{x}}$.
We also suppose that we have a partial knowledge of the time-varying terms, as clarified by the following assumption. [\[as:modelsbh\]]{#as:modelsbh label="as:modelsbh"} The sequences of vectors $\left\lbrace{\mathbold{b}}_k \right\rbrace_{k \in {\mathbb{N}}}$ and $\left\lbrace {\mathbold{h}}_k \right\rbrace_{k \in {\mathbb{N}}}$ have rational $\mathcal{Z}$-transforms $$\label{eq:linearModels} \begin{aligned} {\mathbold{\hat{b}}}\left(z\right) &\coloneqq \mathcal{Z}\left[ {\mathbold{b}}_k \right]= \frac{{\mathbold{b}}_\mathrm{N} \left(z\right)}{p\left(z\right)},\\ {\mathbold{\hat{h}}}\left(z\right) &\coloneqq \mathcal{Z}\left[ {\mathbold{h}}_k \right] = \frac{{\mathbold{h}}_\mathrm{N}\left(z\right)}{p\left(z\right)} , \end{aligned}$$ where the polynomial $p\left(z\right) = z^m + \sum_{i = 0}^{m-1} p_i z^i$ is known, and its zeros are assumed to be all marginally unstable[^2]. The numerators, ${\mathbold{b}}_\mathrm{N}\left(z\right)$ and ${\mathbold{h}}_\mathrm{N}\left(z\right)$, are instead assumed to be unknown. Observe that imposing the same denominator polynomial on the rational $\mathcal{Z}$-transforms in Equation [\[eq:linearModels\]](#eq:linearModels){reference-type="eqref" reference="eq:linearModels"} is not restrictive. In fact, if this is not the case, it is always possible to choose the least common multiple of the denominators. Moreover, the assumption that all the zeros of $p\left(z\right)$ are marginally unstable is also not restrictive, as it is easy to extend the proposed methods to general polynomials, only at the cost of a more complicated notation. Taking inspiration from [@bastianello_internal_2022], we develop an algorithm based on the *primal-dual* problem that leverages the Internal Model Principle to address Problem [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"}.
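Concretely, Assumption [\[as:modelsbh\]](#as:modelsbh){reference-type="ref" reference="as:modelsbh"} says that each component of ${\mathbold{b}}_k$ and ${\mathbold{h}}_k$ obeys the linear recurrence with characteristic polynomial $p(z)$. A minimal sketch of ours (the helper `annihilated_by_p` is not from the paper): for $p(z)=(z-1)^2$, whose double root at $z=1$ is marginally unstable, exactly the affine ramps $b_k = a + ck$ satisfy $b_{k+2}-2b_{k+1}+b_k=0$.

```python
# Signals whose Z-transform has denominator p(z) = z^2 - 2z + 1 = (z - 1)^2
# satisfy the recurrence b_{k+2} - 2 b_{k+1} + b_k = 0: these are the ramps.
def annihilated_by_p(signal, p_coeffs):
    """Check p(q) signal = 0, where q is the forward shift and
    p(z) = z^m + p_{m-1} z^{m-1} + ... + p_0 (monic, coeffs low-to-high)."""
    m = len(p_coeffs)  # degree of the monic polynomial p
    return all(
        abs(signal[k + m] + sum(p_coeffs[i] * signal[k + i] for i in range(m))) < 1e-9
        for k in range(len(signal) - m)
    )

ramp = [3.0 + 0.5 * k for k in range(50)]        # b_k = 3 + k/2
quadratic = [float(k * k) for k in range(50)]    # not a ramp

p = [1.0, -2.0]   # p(z) = z^2 - 2z + 1, low-order coefficients (p_0, p_1)
assert annihilated_by_p(ramp, p)           # ramps are annihilated by p
assert not annihilated_by_p(quadratic, p)  # k^2 would require (z - 1)^3
```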
In this case, we are able to provide a precise and rigorous proof of the algorithm's convergence. ## The design of the algorithm {#subsec:algorithm} In this section we extend the model-based approach proposed in [@bastianello_internal_2022] to address the constrained problem [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"}. To this aim, we define the time-varying Lagrangian $$\label{eq:lagrangian} \mathcal{L}_k({\mathbold{x}}, {\mathbold{w}}) = f_k({\mathbold{x}}) +{\mathbold{w}}^{\top}\left({\mathbold{G}}{\mathbold{x}}- {\mathbold{h}}_k \right), \quad k \in {\mathbb{N}},$$ where ${\mathbold{w}}\in{\mathbb{R}}^p$ represents the Lagrange multiplier. The solution of the *primal-dual* problem [@boyd_vandenberghe_2004 p. 244] is given by the pair of vectors ${\mathbold{x}}_k^* \in {\mathbb{R}}^n$ and ${\mathbold{w}}_k^* \in {\mathbb{R}}^p$ satisfying the equation $$\label{eq:error_zero} \begin{bmatrix} {\mathbold{e}}_k\\ {\mathbold{f}}_k \end{bmatrix} \coloneqq \begin{bmatrix} \nabla_{\mathbold{x}}\mathcal{L}_k({\mathbold{x}}_k, {\mathbold{w}}_k) \\ \nabla_{\mathbold{w}}\mathcal{L}_k({\mathbold{x}}_k, {\mathbold{w}}_k) \end{bmatrix} = \begin{bmatrix} {\mathbf{0}} \\ {\mathbf{0}} \end{bmatrix},$$ where, in our case, $$\label{eq:gradLag} \begin{aligned} \nabla_{\mathbold{x}}\mathcal{L}_k({\mathbold{x}}_k, {\mathbold{w}}_k) &= {\mathbold{A}}{\mathbold{x}}_k + {\mathbold{b}}_k + {\mathbold{G}}^\top {\mathbold{w}}_k,\\ \nabla_{\mathbold{w}}\mathcal{L}_k({\mathbold{x}}_k, {\mathbold{w}}_k) &= {\mathbold{G}}{\mathbold{x}}_k - {\mathbold{h}}_k. \end{aligned}$$ A commonly employed algorithm for solving a convex problem with linear equality constraints is based on the gradient *descent-ascent* of the *primal-dual* problem [@bertsekas_constrained_2014]. The natural extension of this *primal-dual* algorithm to the online setting yields the following iterations (cf.
[@bernstein_online_2019]) $$\label{eq:primalDual} \begin{aligned} {\mathbold{x}}_{k+1} &= {\mathbold{x}}_k - \alpha\nabla_{{\mathbold{x}}_k} \mathcal{L}_{k}({\mathbold{x}}_{k}, {\mathbold{w}}_{k}),\\ {\mathbold{w}}_{k+1} &= {\mathbold{w}}_k + \beta\nabla_{{\mathbold{w}}_k} \mathcal{L}_{k}({\mathbold{x}}_{k}, {\mathbold{w}}_{k}), \end{aligned}$$ where $k \in \mathbb{N}$, and $\alpha$ and $\beta$ are two positive parameters. Under Assumption [\[as:modelsbh\]](#as:modelsbh){reference-type="ref" reference="as:modelsbh"}, we can establish that the asymptotic tracking error $\left\lVert{\mathbold{x}}_k-{\mathbold{x}}_k^*\right\rVert$ is bounded. However, this asymptotic tracking error is not zero in general [@bernstein_online_2019]. On the other hand, resorting to the control scheme illustrated in Figure [\[fig:block-diagram\]](#fig:block-diagram){reference-type="ref" reference="fig:block-diagram"}, we can obtain an algorithm able to find all the points where [\[eq:error_zero\]](#eq:error_zero){reference-type="eqref" reference="eq:error_zero"} is satisfied. The objective, therefore, is to design the transfer matrix, $$\label{eq:Cmatr} {\mathbold{C}}(z) = \begin{bmatrix} {\mathbold{C}}_{11}(z) & {\mathbold{C}}_{12}(z)\\ {\mathbold{C}}_{21}(z) & {\mathbold{C}}_{22}(z) \end{bmatrix} \in {\mathbb{R}}^{(n+p) \times (n+p)}(z),$$ of the controller able to drive the signals ${\mathbold{e}}_k$ and ${\mathbold{f}}_k$ to zero. A common alternative to the primal-dual algorithm in [\[eq:primalDual\]](#eq:primalDual){reference-type="eqref" reference="eq:primalDual"} is the dual-ascent algorithm where, in the update of ${\mathbold{w}}_{k+1}$, ${\mathbold{x}}_{k+1}$ is used in place of ${\mathbold{x}}_k$. In general, the performance of the dual-ascent algorithm is similar to that of the primal-dual algorithm. For this reason, in this paper, we decided to compare our novel strategy only with the primal-dual algorithm.
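The iterations above are straightforward to implement. Below is a minimal sketch (a toy problem with step sizes of our choosing, not from the paper) that runs the primal-dual updates on a small *static* instance, i.e. constant ${\mathbold{b}}_k$ and ${\mathbold{h}}_k$; in this static case the iterates converge to the KKT pair, whereas for genuinely time-varying data the tracking error is only bounded, as discussed above.

```python
# Primal-dual gradient descent-ascent for
#   min 1/2 x^T A x + b^T x  s.t.  G x = h,
# with A = diag(2, 1), G = [1 1], b = 0, h = 1.
# KKT solution: x* = (1/3, 2/3), w* = -2/3.
A = [[2.0, 0.0], [0.0, 1.0]]
G = [1.0, 1.0]
b = [0.0, 0.0]
h = 1.0
alpha, beta = 0.1, 0.1

x = [0.0, 0.0]
w = 0.0
for _ in range(2000):
    # gradients of the Lagrangian L(x, w) = f(x) + w (G x - h), at (x_k, w_k)
    gx = [A[i][0] * x[0] + A[i][1] * x[1] + b[i] + G[i] * w for i in range(2)]
    gw = G[0] * x[0] + G[1] * x[1] - h
    x = [x[i] - alpha * gx[i] for i in range(2)]   # primal descent
    w = w + beta * gw                              # dual ascent (uses old x_k)

assert abs(x[0] - 1/3) < 1e-6 and abs(x[1] - 2/3) < 1e-6
assert abs(w + 2/3) < 1e-6
```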
## Internal Model Let us define the following $\mathcal{Z}$-transforms $$\begin{aligned} {\mathbold{\hat{x}}}\left(z\right) &\coloneqq \mathcal{Z}[{\mathbold{x}}_k],\\ {\mathbold{\hat{w}}}\left(z\right) &\coloneqq \mathcal{Z}[{\mathbold{w}}_k],\\ {\mathbold{\hat{e}}}\left(z\right) &\coloneqq \mathcal{Z}[{\mathbold{e}}_k],\\ {\mathbold{\hat{f}}}\left(z\right)&\coloneqq \mathcal{Z}[{\mathbold{f}}_k].\end{aligned}$$ By referring to the control scheme shown in Figure [\[fig:block-diagram\]](#fig:block-diagram){reference-type="ref" reference="fig:block-diagram"} we argue that $$\label{eq:state} \begin{bmatrix} {\mathbold{\hat{x}}}\left(z\right)\\ {\mathbold{\hat{w}}}\left(z\right) \end{bmatrix}= {\mathbold{C}}\left(z\right) \begin{bmatrix} {\mathbold{\hat{e}}}\left(z\right)\\ {\mathbold{\hat{f}}}\left(z\right) \end{bmatrix}.$$ From Equation [\[eq:gradLag\]](#eq:gradLag){reference-type="eqref" reference="eq:gradLag"} we also have that $$\begin{aligned} \begin{bmatrix} {\mathbold{\hat{e}}}\left(z\right)\\ {\mathbold{\hat{f}}}\left(z\right) \end{bmatrix} &= \underbrace{\begin{bmatrix} {\mathbold{A}}& {\mathbold{G}}^\top \\ {\mathbold{G}}& {\mathbf{0}}_{p \times p} \end{bmatrix}}_{\coloneqq \mathcal{A}}\begin{bmatrix} {\mathbold{\hat{x}}}\left(z\right)\\ {\mathbold{\hat{w}}}\left(z\right) \end{bmatrix} + \begin{bmatrix} {\mathbold{\hat{b}}}\left(z\right)\\ -{\mathbold{\hat{h}}}\left(z\right) \end{bmatrix}\\ &= \mathcal{A}{\mathbold{C}}\left(z\right) \begin{bmatrix} {\mathbold{\hat{e}}}\left(z\right)\\ {\mathbold{\hat{f}}}\left(z\right) \end{bmatrix} + \begin{bmatrix} {\mathbold{\hat{b}}}\left(z\right)\\ -{\mathbold{\hat{h}}}\left(z\right) \end{bmatrix},\end{aligned}$$ where $\mathcal{A} \in {\mathbb{R}}^{(n+p) \times (n+p)}$. 
In this way we obtain $$\label{eq:closeLoopSystem} \begin{bmatrix} {\mathbold{\hat{e}}}\left(z\right)\\ {\mathbold{\hat{f}}}\left(z\right) \end{bmatrix} = \left[{\mathbold{I}}- \mathcal{A}{\mathbold{C}}\left(z\right)\right]^{-1}\begin{bmatrix} {\mathbold{\hat{b}}}\left(z\right)\\ -{\mathbold{\hat{h}}}\left(z\right) \end{bmatrix}.$$ We remark that $\mathcal{A}$ is a so-called *saddle matrix* [@benzi_numerical_2005; @benzi_eigenvalues_2006], and in the following we will leverage results for this class of matrices to design a suitable controller. Observe that from Assumption [\[as:modelsbh\]](#as:modelsbh){reference-type="ref" reference="as:modelsbh"} we know that ${\mathbold{\hat{b}}}\left(z\right),{\mathbold{\hat{h}}}\left(z\right)$ are rational and hence ${\mathbold{\hat{e}}}\left(z\right),{\mathbold{\hat{f}}}\left(z\right)$ are rational as well. Therefore, if we prove that their poles are stable, namely inside the unit circle, then we can argue that the signals ${\mathbold{e}}_k, {\mathbold{f}}_k$ converge to zero, namely $$\begin{bmatrix} {\mathbold{e}}_k\\ {\mathbold{f}}_k \end{bmatrix} \xrightarrow[k \to \infty]{}\begin{bmatrix} {\mathbf{0}} \\ {\mathbf{0}} \end{bmatrix}.$$ By applying the Internal Model Principle, the first step is to choose the controller ${\mathbold{C}}(z)$ able to cancel out the unstable poles of ${\mathbold{\hat{b}}}\left(z\right),{\mathbold{\hat{h}}}\left(z\right)$, which are the zeros of $p(z)$ introduced in Assumption [\[as:modelsbh\]](#as:modelsbh){reference-type="ref" reference="as:modelsbh"}. Since we assumed that all the zeros of $p(z)$ are marginally unstable, to this aim it is enough to choose $${\mathbold{C}}(z) = \frac{{\mathbold{C}}_\mathrm{N}(z)}{p\left(z\right)},$$ where ${\mathbold{C}}_\mathrm{N}(z)\in {\mathbb{R}}^{(n+p) \times (n+p)}[z]$ is a polynomial matrix.
Indeed, this choice yields $$\label{eq:error-z-transform} \begin{bmatrix} {\mathbold{\hat{e}}}\left(z\right)\\ {\mathbold{\hat{f}}}\left(z\right) \end{bmatrix} = \left[ p\left(z\right) {\mathbold{I}}- \mathcal{A} {\mathbold{C}}_\mathrm{N}(z) \right]^{-1} \begin{bmatrix} {\mathbold{b}}_\mathrm{N}\left(z\right)\\ -{\mathbold{h}}_\mathrm{N}(z) \end{bmatrix}.$$ Then the poles of ${\mathbold{\hat{e}}}\left(z\right),{\mathbold{\hat{f}}}\left(z\right)$ coincide with the poles of $\left[ p\left(z\right) {\mathbold{I}}- \mathcal{A} {\mathbold{C}}_\mathrm{N}(z) \right]^{-1}$, and so the goal is to determine ${\mathbold{C}}_\mathrm{N}(z)$ such that all the poles of $\left[ p\left(z\right) {\mathbold{I}}- \mathcal{A} {\mathbold{C}}_\mathrm{N}(z) \right]^{-1}$ are stable. ## Stabilizing controller Observe that the matrix $\mathcal{A}$ is in general indefinite, which implies that the control design approach proposed in [@bastianello_internal_2022] cannot be directly applied in our context. So we have to resort to a controller ${\mathbold{C}}_\mathrm{N}(z)$ with a more general structure. Until now, we have considered a controller without a particular structure. Even though imposing a structure can limit the performance of the algorithms that can be obtained, the synthesis of general controllers can be complex and intractable. For that reason, we introduce a more structured version of the controller in which we impose that $$\label{eq:alphaCtr} {\mathbold{C}}_\mathrm{N}\left(z\right) = c\left(z\right)\begin{bmatrix} {\mathbold{I}}& {\mathbf{0}} \\ {\mathbf{0}} & -\tau{\mathbold{I}}\end{bmatrix},$$ where $c(z) = \sum_{i = 0}^{m-1} c_i z^i \in {\mathbb{R}}[z]$ is a scalar polynomial of degree $m-1$ able to ensure strict properness of the controller transfer matrix, and $\tau$ is a positive parameter. Through this strategy we will see that it is possible to transform the design from a matrix to a scalar problem.
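To see the internal-model design at work in its simplest form, consider a toy *unconstrained* scalar instance of our own (hand-picked numbers, not from the paper): $f_k(x)=\frac{a}{2}x^2+b_k x$ with a ramp $b_k$, so $p(z)=(z-1)^2$, $e_k=a x_k+b_k$, and the controller acts through $p(q)x_k=c(q)e_k$ with $q$ the forward shift. Choosing $c(z)$ "deadbeat", i.e. so that $p(z)-a\,c(z)=z^2$, places the closed-loop poles at the origin and yields exactly zero tracking error after a finite transient, in contrast with the bounded-but-nonzero error of the plain primal-dual iteration.

```python
# Scalar instance: f_k(x) = (a/2) x^2 + b_k x, optimizer x_k* = -b_k / a,
# with ramp b_k (Z-transform denominator p(z) = (z - 1)^2).
# Controller C(z) = c(z)/p(z), c(z) = c1 z + c0, chosen "deadbeat":
# p(z) - a c(z) = z^2  =>  c1 = -2/a, c0 = 1/a.
a = 2.0
c1, c0 = -2.0 / a, 1.0 / a
b = lambda k: 0.5 * k + 1.0          # ramp: b_k = 1 + k/2

x = [0.0, 0.0]                       # controller initial conditions x_0, x_1
e = [a * x[0] + b(0)]                # e_k = grad f_k(x_k) = a x_k + b_k
for k in range(1, 30):
    e.append(a * x[k] + b(k))
    # time-domain controller: x_{k+1} = 2 x_k - x_{k-1} + c1 e_k + c0 e_{k-1}
    x.append(2 * x[k] - x[k - 1] + c1 * e[k] + c0 * e[k - 1])

# zero tracking error after a finite transient (here from k = 2 on)
assert all(abs(e[k]) < 1e-9 for k in range(2, 30))
assert all(abs(x[k] + b(k) / a) < 1e-9 for k in range(2, 30))
```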
With this choice of the controller we have that $$\label{eq:inverMatr} \left[ p\left(z\right) {\mathbold{I}}- \mathcal{A} {\mathbold{C}}_\mathrm{N}(z) \right]^{-1}= \left[ p\left(z\right) {\mathbold{I}}- c\left(z\right)\widetilde{\mathcal{A}}\left(\tau\right) \right]^{-1},$$ where $$\label{eq:AcalHat} \widetilde{\mathcal{A}}\left(\tau\right) = \mathcal{A}\begin{bmatrix} {\mathbold{I}}& {\mathbf{0}} \\ {\mathbf{0}} & -\tau{\mathbold{I}}\end{bmatrix} = \begin{bmatrix} {\mathbold{A}}& -\tau{\mathbold{G}}^\top \\ {\mathbold{G}}& {\mathbf{0}} \end{bmatrix}.$$ We will see that the application of robust control techniques, needed in the proposed design method, is simpler if we impose that the matrix $\widetilde{\mathcal{A}}\left(\tau\right)$ has real and positive eigenvalues. The control structure described in [\[eq:alphaCtr\]](#eq:alphaCtr){reference-type="eqref" reference="eq:alphaCtr"} guarantees this condition. The controller structure in Equation [\[eq:alphaCtr\]](#eq:alphaCtr){reference-type="eqref" reference="eq:alphaCtr"} can, in principle, be generalized to $${\mathbold{C}}_\mathrm{N}\left(z\right) = c\left(z\right)\begin{bmatrix} \alpha{\mathbold{I}}& {\mathbf{0}} \\ {\mathbf{0}} & -\beta{\mathbold{I}}\end{bmatrix},$$ where $\alpha, \beta > 0$. This controller can be transformed to $${\mathbold{C}}_\mathrm{N}\left(z\right) = \underbrace{c\left(z\right)\alpha}_{\coloneqq \tilde{c}\left(z\right)}\begin{bmatrix} {\mathbold{I}}& {\mathbf{0}} \\ {\mathbf{0}} & -\frac{\beta}{\alpha}{\mathbold{I}}\end{bmatrix}.$$ Since $c\left(z\right)$ is a polynomial that needs to be designed, we can always incorporate the extra degree of freedom $\alpha$ in it.
With respect to the analysis done in [@bastianello_internal_2022], the design of a polynomial $c(z)$ that makes all the poles of [\[eq:inverMatr\]](#eq:inverMatr){reference-type="eqref" reference="eq:inverMatr"} stable has the additional issue that the matrix $\widetilde{\mathcal{A}}\left(\tau\right)$ is not always diagonalizable. Consequently, we will approach the problem using a slightly different strategy. Consider the Schur decomposition of $\widetilde{\mathcal{A}}\left(\tau\right)$, given by $\widetilde{\mathcal{A}}\left(\tau\right) = {\mathbold{Q}}{\mathbold{U}}{\mathbold{Q}}^{\top}$, where ${\mathbold{Q}}$ is an orthogonal matrix and ${\mathbold{U}}$ is an upper triangular matrix. By substituting this decomposition in [\[eq:inverMatr\]](#eq:inverMatr){reference-type="eqref" reference="eq:inverMatr"} we obtain $$\begin{gathered} \label{eq:inverMatrSchur} \left[ p\left(z\right) {\mathbold{I}}- c\left(z\right)\widetilde{\mathcal{A}}\left(\tau\right) \right]^{-1} =\\ ={\mathbold{Q}}\left[ p\left(z\right) {\mathbold{I}}- c\left(z\right){\mathbold{U}}\right]^{-1}{\mathbold{Q}}^{\top} .\end{gathered}$$ Hence the poles of $\left[ p\left(z\right) {\mathbold{I}}- \mathcal{A} {\mathbold{C}}_\mathrm{N}(z) \right]^{-1}$ coincide with the poles of $\left[ p\left(z\right) {\mathbold{I}}- c\left(z\right){\mathbold{U}}\right]^{-1}$. Since the $\left(i,j\right)$ element of this matrix has the following form $$\begin{gathered} \label{eq:inverseMatr} \left(\left[ p\left(z\right) {\mathbold{I}}- c\left(z\right){\mathbold{U}}\right]^{-1}\right)_{ij} = \\=\frac{\left(\operatorname{adj}\left[ p\left(z\right) {\mathbold{I}}- c\left(z\right){\mathbold{U}}\right]\right)_{ij}}{\det\left[ p\left(z\right) {\mathbold{I}}- c\left(z\right){\mathbold{U}}\right]},\end{gathered}$$ the poles of this matrix are necessarily zeros of the polynomial $\det\left[ p\left(z\right) {\mathbold{I}}- c\left(z\right){\mathbold{U}}\right]$.
The triangular form of ${\mathbold{U}}$ implies that the set of these zeros coincides with the union of the zeros of the polynomials $$\label{eq:nPoly} p\left(z\right)-c\left(z\right)\lambda_i,$$ where $\lambda_i$, $i=1,\ldots,n+p$, are the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau\right)$. As emphasized in Remark [\[rem:robust\]](#rem:robust){reference-type="ref" reference="rem:robust"}, our knowledge of the eigenvalues of ${\mathbold{A}}$, and consequently also of $\widetilde{\mathcal{A}}\left(\tau\right)$, is incomplete. Therefore, we need to employ a robust control technique capable of producing stabilizing controllers for uncertain systems. In this regard, we rely on the findings of [@de_oliveira_new_1999], which employed an LMI-based approach to address this issue. To devise a robust controller for this problem, it is essential to obtain an estimate of the eigenvalues of the matrix $\widetilde{\mathcal{A}}\left(\tau\right)$. These eigenvalues depend on the specific matrices ${\mathbold{A}}$ and ${\mathbold{G}}$, as well as on the parameter $\tau$. The following lemma then provides bounds for the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau\right)$, and is proved by exploiting the fact that $\widetilde{\mathcal{A}}\left(\tau\right)$ is a post-conditioned saddle matrix [@benzi_numerical_2005; @benzi_eigenvalues_2006].
[\[lem:eigenvalues_A\_hat\]]{#lem:eigenvalues_A_hat label="lem:eigenvalues_A_hat"} Given $\widetilde{\mathcal{A}}\left(\tau\right)$ as in [\[eq:AcalHat\]](#eq:AcalHat){reference-type="eqref" reference="eq:AcalHat"}, if $\tau$ is such that $$\label{eq:optChoiceineq} 0<\tau \leq \tau^* \coloneqq \frac{{\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right)}{4{\bar{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)},$$ then the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau\right)$ are real and belong to the following interval $$\label{eq:interval} \left[ \tau{\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right), {\bar{\lambda}}\left({\mathbold{A}}\right)\right].$$ As proved in [@benzi_eigenvalues_2006 Corollary 2.7], if ${\underaccent{\bar}{\lambda}}\left( {\mathbold{A}}\right)\geq 4\tau{\bar{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)$, then $\widetilde{\mathcal{A}}\left(\tau\right)$ has all real eigenvalues; this condition is precisely inequality [\[eq:optChoiceineq\]](#eq:optChoiceineq){reference-type="eqref" reference="eq:optChoiceineq"}, and hence $\widetilde{\mathcal{A}}\left(\tau\right)$ has all real eigenvalues.
Moreover, thanks to the fact that ${\mathbold{G}}$ is full row rank and using [@benzi_eigenvalues_2006 Proposition 2.12] and [@shen_eigenvalue_2010 Theorem 2.1], we know that the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau\right)$ belong to the interval $$\left[ \min \left\lbrace{\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right),\tau{\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)\right\rbrace, {\bar{\lambda}}\left({\mathbold{A}}\right)\right].$$ Observe finally that, using [\[eq:optChoiceineq\]](#eq:optChoiceineq){reference-type="eqref" reference="eq:optChoiceineq"}, we can argue that $$\label{eq:ineqInterval1} \begin{aligned} \tau{\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)&\le\frac{{\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right)}{4{\bar{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)}{\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right) \\ &= \frac{{\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right)}{4\kappa\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)}\leq {\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right), \end{aligned}$$ which yields [\[eq:interval\]](#eq:interval){reference-type="eqref" reference="eq:interval"}. $\square$ Based on Lemma [\[lem:eigenvalues_A\_hat\]](#lem:eigenvalues_A_hat){reference-type="ref" reference="lem:eigenvalues_A_hat"}, we can determine $\tau$ such that the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau\right)$ are real and fall in the interval specified in Equation [\[eq:interval\]](#eq:interval){reference-type="eqref" reference="eq:interval"}.
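The lemma can be checked numerically. The following sketch builds a random instance (the sizes, spectra, and seed are illustrative assumptions) and verifies that the spectrum of $\widetilde{\mathcal{A}}(\tau)$ is real and contained in the claimed interval:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 3

# A symmetric positive definite; a generic random G has full row rank
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = V @ np.diag(rng.uniform(1.0, 10.0, n)) @ V.T
G = rng.standard_normal((p, n))

M = G @ np.linalg.solve(A, G.T)                  # G A^{-1} G^T
mu = np.linalg.eigvalsh(M)
tau_star = np.linalg.eigvalsh(A).min() / (4.0 * mu.max())
tau = 0.5 * tau_star                             # any 0 < tau <= tau* works

A_tilde = np.block([[A, -tau * G.T], [G, np.zeros((p, p))]])
eigs = np.linalg.eigvals(A_tilde)

print(np.abs(eigs.imag).max())                   # should be ~0: real spectrum
print(eigs.real.min(), tau * mu.min())           # lower bound of the interval
print(eigs.real.max(), np.linalg.eigvalsh(A).max())  # upper bound
```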
To stabilize the polynomials given by Equation [\[eq:nPoly\]](#eq:nPoly){reference-type="eqref" reference="eq:nPoly"}, it is sufficient to select an appropriate $c\left(z\right)$ that stabilizes the polynomials $$\label{eq:staPol} p\left(z\right)-c\left(z\right)\lambda,$$ for all $\lambda \in \left[{\underaccent{\bar}{\lambda}}\left(\widetilde{\mathcal{A}}\left(\tau\right)\right),{\bar{\lambda}}\left(\widetilde{\mathcal{A}}\left(\tau\right)\right)\right]$. But requiring the stability of the polynomials in [\[eq:staPol\]](#eq:staPol){reference-type="eqref" reference="eq:staPol"} coincides with requiring the stability of the associated companion matrices ${\mathbold{F}}_c\left(\lambda\right)\coloneqq {\mathbold{F}}+ \lambda \mathbold{C} {\mathbold{K}}$, where $$\label{eq:compMatr} \begin{aligned} {\mathbold{F}}&= \begin{bNiceMatrix} 0 & 1 & 0 & \Cdots & 0 \\ & \Ddots & \Ddots & \Ddots & \Vdots \\ \Vdots & & & & 0 \\ 0 & \Cdots & & 0 & 1 \\ -p_0 & & \Cdots & & -p_{m-1} \end{bNiceMatrix} \mathbold{C}&=\begin{bNiceMatrix} 0 \\ \Vdots\\ \\ \\ 0\\ 1 \end{bNiceMatrix} \\ {\mathbold{K}}&= \begin{bNiceMatrix}[columns-width=5.6mm] c_0 & & \Cdots & &c_{m-1} \end{bNiceMatrix} & . \end{aligned}$$ By denoting $\underaccent{\bar}{l} \coloneqq {\underaccent{\bar}{\lambda}}\left(\widetilde{\mathcal{A}}\left(\tau\right)\right)$ and $\bar{l} \coloneqq {\bar{\lambda}}\left(\widetilde{\mathcal{A}}\left(\tau\right)\right)$, we can express $\lambda$ as a convex combination of these two extreme values: $$\lambda = \alpha\left(\lambda\right)\underaccent{\bar}{l} + \left(1-\alpha\left(\lambda\right)\right)\bar{l}, \quad \alpha\left(\lambda\right) = \frac{\bar{l}-\lambda}{\bar{l}-\underaccent{\bar}{l}}.$$ With this expression, we can apply the following result ([@de_oliveira_new_1999 Theorem 3]) and subsequently derive the polynomial $c\left(z\right)$ by solving the two LMIs of size $m$ in the following lemma.
[\[lem:stabC\]]{#lem:stabC label="lem:stabC"} The matrix ${\mathbold{F}}_c\left(\lambda\right)$ is asymptotically stable for any $\lambda \in \left[\,\underaccent{\bar}{l},\bar{l}\,\right]$ if there exist symmetric matrices $\underline{{\mathbold{P}}}, \overline{{\mathbold{P}}} \succ 0$, a square matrix ${\mathbold{Q}}\in {\mathbb{R}}^{m\times m }$, and a row vector $\mathbold{R}\in\mathbb{R}^{1 \times m}$ such that $$\begin{aligned} \begin{bmatrix} \underline{{\mathbold{P}}} & {\mathbold{F}}{\mathbold{Q}}+ \underaccent{\bar}{l}\mathbold{C}\mathbold{R}\\ {\mathbold{Q}}^{\top}{\mathbold{F}}^{\top}+\underaccent{\bar}{l}\mathbold{R}^{\top}\mathbold{C}^{\top} & {\mathbold{Q}}+ {\mathbold{Q}}^{\top} - \underline{{\mathbold{P}}} \end{bmatrix} &\succ 0,\\ \begin{bmatrix} \overline{{\mathbold{P}}} & {\mathbold{F}}{\mathbold{Q}}+ \bar{l}\mathbold{C}\mathbold{R}\\ {\mathbold{Q}}^{\top}{\mathbold{F}}^{\top}+\bar{l}\mathbold{R}^{\top}\mathbold{C}^{\top} & {\mathbold{Q}}+ {\mathbold{Q}}^{\top} - \overline{{\mathbold{P}}} \end{bmatrix} &\succ 0.\end{aligned}$$ A stabilizing controller is then ${\mathbold{K}}= \mathbold{R}{\mathbold{Q}}^{-1}$. These LMIs depend on the minimum and maximum eigenvalues of $\widetilde{\mathcal{A}}\left(\tau\right)$, which are not exactly known. Using Lemma [\[lem:eigenvalues_A\_hat\]](#lem:eigenvalues_A_hat){reference-type="ref" reference="lem:eigenvalues_A_hat"}, however, we can obtain the polynomial $c\left(z\right)$ by solving the two LMIs at the extremes of the interval in [\[eq:interval\]](#eq:interval){reference-type="eqref" reference="eq:interval"}. The feasibility of the aforementioned robust control design procedure, based on [@de_oliveira_new_1999 Theorem 3], depends on the ratio between the extremes of the interval. Precisely, the likelihood of obtaining a solution increases as this ratio gets closer to one. Hence the best choice of the parameter $\tau$ is the maximum value ensuring that the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau\right)$ are real.
From Lemma [\[lem:eigenvalues_A\_hat\]](#lem:eigenvalues_A_hat){reference-type="ref" reference="lem:eigenvalues_A_hat"} this is $\tau=\tau^*$ and in this case we can argue that the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau^*\right)$ belong to the interval $$\left[\frac{{\underaccent{\bar}{\lambda}}\left({\mathbold{A}}\right){\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)}{4{\bar{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right)},{\bar{\lambda}}\left({\mathbold{A}}\right)\right],$$ and that the ratio between the extremes of this interval is $$4\kappa\left({\mathbold{A}}\right)\kappa\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right).$$ In case only the bounds given in Equations [\[eq:estA\]](#eq:estA){reference-type="eqref" reference="eq:estA"} and [\[eq:estGAGT\]](#eq:estGAGT){reference-type="eqref" reference="eq:estGAGT"} are available, we choose $$\tau=\tau^*_{\mathrm{est}} := \frac{{\underaccent{\bar}{\lambda}}}{4\bar{\mu}}.$$ In this case we can argue that the eigenvalues of $\widetilde{\mathcal{A}}\left(\tau^*_{\mathrm{est}}\right)$ belong to the interval $$\label{eq:intervalest} \left[\frac{{\underaccent{\bar}{\lambda}}\underaccent{\bar}{\mu}}{4\bar{\mu}},{\bar{\lambda}}\right],$$ and the ratio between the extremes of this interval is $$\label{eq:ratioest} 4\frac{{\bar{\lambda}}\bar{\mu}}{{\underaccent{\bar}{\lambda}}\underaccent{\bar}{\mu}}.$$ ## The online algorithm The algorithm consists of the time-domain translation of the relations among the signals ${\mathbold{e}}_k$, ${\mathbold{f}}_k$, ${\mathbold{x}}_k$ and ${\mathbold{w}}_k$ that in the $\mathcal{Z}$-transform domain are given by $${\mathbold{\hat{x}}}\left(z\right)=\frac{c(z)}{p(z)}{\mathbold{\hat{e}}}\left(z\right),\qquad {\mathbold{\hat{w}}}\left(z\right)=-\tau\frac{c(z)}{p(z)}{\mathbold{\hat{f}}}\left(z\right).$$ A state space realization of these relations is given by
[\[eq:control-based-algorithm\]]{#eq:control-based-algorithm label="eq:control-based-algorithm"} where the matrices ${\mathbold{F}}$, ${\mathbold{C}}$ and ${\mathbold{K}}$ are defined in [\[eq:compMatr\]](#eq:compMatr){reference-type="eqref" reference="eq:compMatr"} and ${\mathbold{z}}_k \in {\mathbb{R}}^{nm},{\mathbold{y}}_k \in {\mathbb{R}}^{pm}$ are the state vectors of the controller. Hence, equations [\[eq:control-based-algorithm\]](#eq:control-based-algorithm){reference-type="eqref" reference="eq:control-based-algorithm"} describe the online algorithm in the scheme of Figure [\[fig:block-diagram\]](#fig:block-diagram){reference-type="ref" reference="fig:block-diagram"}. ## Convergence {#subsec:convergence} In this section we prove the convergence of Algorithm [\[eq:control-based-algorithm\]](#eq:control-based-algorithm){reference-type="eqref" reference="eq:control-based-algorithm"} in solving the optimization problem described in [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"}. Convergence is established by the following proposition, which employs classical arguments based on the $\mathcal{Z}$-transform. [\[pr:convergence-control-algorithm\]]{#pr:convergence-control-algorithm label="pr:convergence-control-algorithm"} Consider the optimization problem [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"}, with $\{ {\mathbold{b}}_k \}_{k \in {\mathbb{N}}}$ and $\{ {\mathbold{h}}_k \}_{k \in {\mathbb{N}}}$ modeled by [\[eq:linearModels\]](#eq:linearModels){reference-type="eqref" reference="eq:linearModels"}.
Choose the controller [\[eq:alphaCtr\]](#eq:alphaCtr){reference-type="eqref" reference="eq:alphaCtr"} with $\tau$ satisfying [\[eq:optChoiceineq\]](#eq:optChoiceineq){reference-type="eqref" reference="eq:optChoiceineq"} and let $c(z)$ be such that ${\mathbold{F}}_{\mathrm{c}}(\lambda)$, defined in [\[eq:compMatr\]](#eq:compMatr){reference-type="eqref" reference="eq:compMatr"}, is asymptotically stable for all $\lambda \in\left[ \tau{\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right), {\bar{\lambda}}\left({\mathbold{A}}\right)\right]$. Let $\{ {\mathbold{x}}_k \}_{k \in {\mathbb{N}}}$ be the output of the online algorithm [\[eq:control-based-algorithm\]](#eq:control-based-algorithm){reference-type="eqref" reference="eq:control-based-algorithm"}. Then it holds $$\limsup_{k \to \infty} \left\lVert{\mathbold{x}}_k - {\mathbold{x}}_k^*\right\rVert = 0.$$ For the proof we follow the reasoning of [@bastianello_internal_2022 Proposition 1]. Given the control structure [\[eq:alphaCtr\]](#eq:alphaCtr){reference-type="eqref" reference="eq:alphaCtr"} and considering that the interval $\left[ \tau{\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right), {\bar{\lambda}}\left({\mathbold{A}}\right)\right]$ includes the interval $\left[{\underaccent{\bar}{\lambda}}\left(\widetilde{\mathcal{A}}\left(\tau\right)\right), {\bar{\lambda}}\left(\widetilde{\mathcal{A}}\left(\tau\right)\right)\right]$, selecting a controller $c\left(z\right)$ that stabilizes the matrix ${\mathbold{F}}_{\mathrm{c}}\left(\lambda\right)$ for $\lambda \in\left[ \tau{\underaccent{\bar}{\lambda}}\left({\mathbold{G}}{\mathbold{A}}^{-1}{\mathbold{G}}^{\top}\right), {\bar{\lambda}}\left({\mathbold{A}}\right)\right]$ ensures the asymptotic stability of the poles of ${\mathbold{\hat{e}}}\left(z\right)$ and of ${\mathbold{\hat{f}}}\left(z\right)$, cf.
[\[eq:error-z-transform\]](#eq:error-z-transform){reference-type="eqref" reference="eq:error-z-transform"}. Consequently, the gradient of the Lagrangian, given by $({\mathbold{e}}_k, {\mathbold{f}}_k)$, converges to zero, and this implies the thesis. $\square$ When the cost function $f_k$ is not quadratic and the constraints are time varying, the convergence to zero of the tracking error is not guaranteed. Similarly to what has been done in [@bastianello_internal_2022], in this case one can apply techniques based on the small gain theorem to establish bounds on this error. We opt against presenting this analysis here due to its significant intricacy. Furthermore, its results yield relatively loose bounds in comparison to what is observed in numerical experiments. As a result, we find it more suitable to substantiate the algorithm's good performance in addressing general online optimization problems through the evidence of the numerical experiments presented in Section [5](#sec:simulations){reference-type="ref" reference="sec:simulations"}. # Online Optimization with Equality and Inequality Constraints {#sec:ineq-constraints} In this section, we aim to extend the previous approach to also handle linear inequality constraints.
## Problem formulation {#subsec:problem-formulation-ineq} Consider now Problem [\[eq:quadratic-online-optimization-general\]](#eq:quadratic-online-optimization-general){reference-type="eqref" reference="eq:quadratic-online-optimization-general"}, where we assume that the signal ${\mathbold{h}}_k^{\prime}$ has a rational $\mathcal{Z}$-transform, given by $$\label{eq:hkPrime} {\mathbold{\hat{h}}}^{\prime}\left(z\right)\coloneqq\mathcal{Z}\left[ {\mathbold{h}}_k^{\prime} \right] = \frac{{\mathbold{h}}_\mathrm{N}^{\prime} \left(z\right)}{p\left(z\right)}.$$ A commonly employed approach to address this problem is to extend the *primal-dual* algorithm to the *projected primal-dual* [@bertsekas_constrained_2014; @boyd_vandenberghe_2004], which yields $$\label{eq:proPrimalDual} \begin{aligned} {\mathbold{x}}_{k+1} &= {\mathbold{x}}_k - \alpha\nabla_{{\mathbold{x}}_k} \mathcal{L}^{\prime}_{k}({\mathbold{x}}_{k}, {\mathbold{w}}_{k},{\mathbold{w}}_{k}^{\prime}),\\ {\mathbold{w}}_{k+1} &= {\mathbold{w}}_k + \beta\nabla_{{\mathbold{w}}_k} \mathcal{L}^{\prime}_{k}({\mathbold{x}}_{k}, {\mathbold{w}}_{k},{\mathbold{w}}_{k}^{\prime}),\\ {\mathbold{w}}_{k+1}^{\prime} &= \mathop{\mathrm{proj}}_{\geq 0}\left({\mathbold{w}}^{\prime}_k + \gamma\nabla_{{\mathbold{w}}^{\prime}_k} \mathcal{L}^{\prime}_{k}({\mathbold{x}}_{k}, {\mathbold{w}}_{k},{\mathbold{w}}_{k}^{\prime})\right).
\end{aligned}$$ Here, $\alpha$, $\beta$, and $\gamma$ are positive parameters, ${\mathbold{w}}_k \in \mathbb{R}^{p}$ and ${\mathbold{w}}_k^\prime \in \mathbb{R}^{p^\prime}$ are the Lagrange multipliers associated with the equality and the inequality constraints, and $\mathcal{L}_{k}^{\prime}$ is the time-varying Lagrangian defined as $$\begin{gathered} \mathcal{L}_{k}^{\prime}\left({\mathbold{x}},{\mathbold{w}},{\mathbold{w}}^{\prime}\right) \coloneqq\\ f_k\left({\mathbold{x}}\right) + {\mathbold{w}}^{\top}\left({\mathbold{G}}{\mathbold{x}}-{\mathbold{h}}_k\right) + {{\mathbold{w}}^{\prime}}^{\top}\left({\mathbold{G}}^{\prime}{\mathbold{x}}-{\mathbold{h}}^{\prime}_k\right).\end{gathered}$$ The main difference between Algorithms [\[eq:primalDual\]](#eq:primalDual){reference-type="eqref" reference="eq:primalDual"} and [\[eq:proPrimalDual\]](#eq:proPrimalDual){reference-type="eqref" reference="eq:proPrimalDual"} is the presence of a projection in the update of the dual variable associated with the inequality constraints. This projection acts as a saturation to zero of the negative entries of the signal. A first attempt at designing an online optimization algorithm based on the Internal Model Principle is to simply apply what we did in the case with only equality constraints. Precisely, we determine the controller using the internal model $p\left(z\right)$ as done in the previous section, treating the inequality constraints as if they were additional equality constraints, and we incorporate in the resulting algorithm the saturation as done in [\[eq:proPrimalDual\]](#eq:proPrimalDual){reference-type="eqref" reference="eq:proPrimalDual"}. However, if we do this, we observe in the experiments that the resulting algorithm achieves very poor tracking, worse than the unstructured online primal-dual algorithm.
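As a concrete reference point, the projected primal-dual iteration [\[eq:proPrimalDual\]](#eq:proPrimalDual){reference-type="eqref" reference="eq:proPrimalDual"} can be sketched on a toy static quadratic instance (the matrices, step sizes, and optimum below are illustrative assumptions; the optimum is $x^* = (1,1)$ with multipliers $w^* = -1$ and $w'^* = 1 \geq 0$):

```python
import numpy as np

# minimize x1^2 + x2^2 - 2 x1 - 4 x2  s.t.  x1 - x2 = 0  and  x1 + x2 <= 2
A = 2.0 * np.eye(2)
b = np.array([-2.0, -4.0])
G  = np.array([[1.0, -1.0]]); h  = np.array([0.0])   # equality constraint
Gp = np.array([[1.0,  1.0]]); hp = np.array([2.0])   # inequality constraint

x, w, wp = np.zeros(2), np.zeros(1), np.zeros(1)
alpha = beta = gamma = 0.05
for _ in range(5000):
    # all gradients evaluated at the current (x, w, w'), as in the paper
    x_new = x - alpha * (A @ x + b + G.T @ w + Gp.T @ wp)
    w     = w + beta * (G @ x - h)
    wp    = np.maximum(0.0, wp + gamma * (Gp @ x - hp))  # projection onto >= 0
    x     = x_new

print(x, w, wp)   # converges to the KKT point of the toy problem
```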
Indeed, it is well-known that a controller based on an internal model with marginally unstable poles, coupled with a saturation, gives rise to significant transient phenomena known as wind-up. In order to mitigate this negative effect of the saturation, an anti wind-up mechanism, called back-calculation [@astrom_antiwindup_1989], is added to the algorithm. The resulting algorithm is illustrated in Figure [\[fig:block-diagram-anti wind-up\]](#fig:block-diagram-anti wind-up){reference-type="ref" reference="fig:block-diagram-anti wind-up"} and described by the following equations [\[eq:control-based-algorithma-anti wind-up\]]{#eq:control-based-algorithma-anti wind-up label="eq:control-based-algorithma-anti wind-up"} $$\begin{aligned} {\mathbold{w}}_{k+1}^{\prime} &= \mathop{\mathrm{proj}}_{\geq 0}\left({\mathbold{\tilde{w}}}_{k+1}^{\prime}\right),\\ {\mathbold{e}}_{k+1} &= \nabla_{{\mathbold{x}}} \mathcal{L}^{\prime}_{k+1}({\mathbold{x}}_{k+1}, {\mathbold{w}}_{k+1},{\mathbold{w}}_{k+1}^{\prime}),\\ {\mathbold{f}}_{k+1} &= \nabla_{{\mathbold{w}}} \mathcal{L}^{\prime}_{k+1}({\mathbold{x}}_{k+1}, {\mathbold{w}}_{k+1},{\mathbold{w}}_{k+1}^{\prime}),\\ {\mathbold{f}}_{k+1}^{\prime} &= \nabla_{{\mathbold{w}}^{\prime}} \mathcal{L}^{\prime}_{k+1}({\mathbold{x}}_{k+1}, {\mathbold{w}}_{k+1},{\mathbold{w}}_{k+1}^{\prime}) + \rho\left({\mathbold{w}}_{k+1}^{\prime}-{\mathbold{\tilde{w}}}_{k+1}^{\prime}\right),\end{aligned}$$ [\[eq:satanti wind-up\]]{#eq:satanti wind-up label="eq:satanti wind-up"} [\[eq:errorDualanti wind-up\]]{#eq:errorDualanti wind-up label="eq:errorDualanti wind-up"} where $\rho$ is a positive tunable anti wind-up parameter, ${\mathbold{\tilde{w}}}_{k+1}^{\prime}$ denotes the dual update before the saturation [\[eq:satanti wind-up\]](#eq:satanti wind-up){reference-type="eqref" reference="eq:satanti wind-up"}, the matrices ${\mathbold{F}}$, ${\mathbold{C}}$ and ${\mathbold{K}}$ are defined in [\[eq:compMatr\]](#eq:compMatr){reference-type="eqref" reference="eq:compMatr"} and ${\mathbold{z}}_k \in {\mathbb{R}}^{nm},{\mathbold{y}}_k \in {\mathbb{R}}^{pm},{\mathbold{y}}_k ^{\prime}\in {\mathbb{R}}^{p^{\prime}m}$ are the state vectors of the controller. We are currently unable to prove the convergence of this algorithm, whose performance will therefore be analysed only by means of numerical experiments.
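The wind-up phenomenon and the back-calculation remedy can be illustrated on a scalar control loop (a sketch with an assumed plant and assumed gains, not the paper's algorithm): an integrator plant driven by a PI controller whose output saturates. Without anti wind-up the integral state keeps accumulating while the output is saturated, causing a large overshoot; the back-calculation term $\rho\,(u_\mathrm{sat} - u)$ bleeds it off.

```python
import numpy as np

def run(rho, N=4000, r=1.0, Kp=0.5, Ki=0.1):
    """Integrator plant y[k+1] = y[k] + 0.05*u_sat, PI control clipped to [-0.2, 0.2]."""
    y, s, peak = 0.0, 0.0, -np.inf
    for _ in range(N):
        e = r - y
        u = Kp * e + Ki * s
        u_sat = float(np.clip(u, -0.2, 0.2))
        s += e + rho * (u_sat - u)   # rho = 0: plain integrator (winds up)
        y += 0.05 * u_sat
        peak = max(peak, y)
    return y, peak

y_no_aw, peak_no_aw = run(rho=0.0)   # large overshoot
y_aw,    peak_aw    = run(rho=2.0)   # back-calculation keeps the overshoot small
print(peak_no_aw, peak_aw)
```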
# Numerical Experiments {#sec:simulations} In this section we compare the performance of the different proposed algorithms when applied to Problem [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"} and to Problem [\[eq:quadratic-online-optimization-general\]](#eq:quadratic-online-optimization-general){reference-type="eqref" reference="eq:quadratic-online-optimization-general"}. In addition to addressing Problem [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"}, we also provide a numerical example to demonstrate that the algorithm performs well even when the matrices ${\mathbold{A}}$ and ${\mathbold{G}}$ are time-varying, as well as in more general non-quadratic cases. ## Equality constraints {#subsec:sim-equality-constraints} First we consider Problem [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"} with ${\mathbold{x}}\in \mathbb{R}^n$, where $n = 10$. The matrix ${\mathbold{A}}$ is defined as ${\mathbold{A}}= {\mathbold{V}}{\mathbold{\Lambda}}{\mathbold{V}}^{\top}$, where ${\mathbold{V}}$ is a randomly generated orthogonal matrix, and ${\mathbold{\Lambda}}$ is a diagonal matrix with diagonal elements in the range $[1,10]$. On the other hand, ${\mathbold{G}}\in \mathbb{R}^{p\times n}$ is a randomly generated matrix with orthogonal rows. For the linear terms $\left\lbrace{\mathbold{b}}_k \right\rbrace_{k \in {\mathbb{N}}}$ and $\left\lbrace {\mathbold{h}}_k \right\rbrace_{k \in {\mathbb{N}}}$, we used the following models: 1. Triangular wave (Figure [1](#fig:triang){reference-type="ref" reference="fig:triang"}): ${\mathbold{b}}_k = \mathrm{triang}\left(\omega k\right){\mathbf{1}}$, ${\mathbold{h}}_k = \mathrm{triang}\left(\omega k\right){\mathbf{1}}$ with $\omega=10^{-4}\pi$; 2.
Sine: ${\mathbold{b}}_k = \sin\left(\omega k\right){\mathbf{1}}$, ${\mathbold{h}}_k = \sin\left(\omega k\right){\mathbf{1}}$ with $\omega=10^{-4}\pi$. [\[enum:case2-eq\]]{#enum:case2-eq label="enum:case2-eq"} ![The graph of $\mathrm{triang}\left(t\right)\coloneqq4 \left|\frac{t}{2\pi}-\lfloor \frac{t}{2\pi} + 3/4\rfloor+1/4\right|-1$, $t\in {\mathbb{R}}$.](figures/0-Triang_wave.pdf){#fig:triang} In Figure [2](#fig:comp-eq){reference-type="ref" reference="fig:comp-eq"}, we compare our control-based algorithm, which uses $p\left(z\right) = z^2 - 2\cos\left(\omega\right)z + 1$ as internal model for the sine and the double integrator $p\left(z\right) = \left(z - 1\right)^2$ for the triangular wave, with the standard *primal-dual* algorithm [\[eq:primalDual\]](#eq:primalDual){reference-type="eqref" reference="eq:primalDual"}. From a control perspective, the classical *primal-dual* algorithm can be seen as a controller with only an integrator as internal model. Therefore, for example, when considering the sine, that controller is unable to guarantee perfect tracking but only tracking with a bounded error. On the other hand, the control design based on the correct model guarantees perfect tracking. In the case of the triangular wave, we can see that the Internal Model Principle is verified piecewise, so tracking is guaranteed except when the slope of the signal changes. At these time instants the controller requires a transient to recover tracking of the reference.
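A miniature version of this equality-constrained experiment can be sketched as follows. The instance, the ramp signals (for which the double integrator is the correct internal model), and the hand-designed $c(z) = (1-2z)/\bar{l}$ replacing the LMI synthesis are all illustrative assumptions; a Jury-criterion check shows this $c(z)$ places the zeros of $p(z) - c(z)\lambda$ inside the unit circle for every $\lambda \in (0, \bar{l}\,]$, which covers the spectrum of $\widetilde{\mathcal{A}}(\tau)$.

```python
import numpy as np

# cost 0.5 x^T A x + b_k^T x, constraint G x = h_k, with b_k, h_k ramps
A = np.diag([1.0, 2.0, 4.0])
G = np.array([[1.0, 1.0, 1.0]])
n, p = 3, 1

M = G @ np.linalg.solve(A, G.T)
tau = np.linalg.eigvalsh(A).min() / (4.0 * np.linalg.eigvalsh(M).max())  # tau*

l_bar = np.linalg.eigvalsh(A).max()
K  = np.array([1.0 / l_bar, -2.0 / l_bar])   # coefficients [c_0, c_1] of c(z)
F  = np.array([[0.0, 1.0], [-1.0, 2.0]])     # companion matrix of p(z) = (z-1)^2
Cv = np.array([0.0, 1.0])

Z, Y = np.zeros((2, n)), np.zeros((2, p))    # controller states
b0, db = np.array([1.0, -2.0, 0.5]), np.array([1e-3, 2e-3, -1e-3])
h0, dh = np.array([0.5]), np.array([1e-3])

for k in range(3000):
    b_k, h_k = b0 + k * db, h0 + k * dh
    x_k = K @ Z                      # x_hat(z) = c(z)/p(z) * e_hat(z)
    w_k = -tau * (K @ Y)             # w_hat(z) = -tau * c(z)/p(z) * f_hat(z)
    e_k = A @ x_k + b_k + G.T @ w_k  # Lagrangian gradient in x
    f_k = G @ x_k - h_k              # Lagrangian gradient in w
    Z = F @ Z + np.outer(Cv, e_k)
    Y = F @ Y + np.outer(Cv, f_k)

# compare with the exact KKT solution at the final time: perfect tracking
KKT = np.block([[A, G.T], [G, np.zeros((p, p))]])
x_star = np.linalg.solve(KKT, np.concatenate([-b_k, h_k]))[:n]
print(np.linalg.norm(x_k - x_star))
```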
![Comparison between the *primal-dual* and the control-based algorithm in a semilogarithmic plot in case of sinusoidal or triangular wave signals.](figures/2-constrained_linear_Triang_Sine_TauOpt.pdf){#fig:comp-eq} ## Inequality constraints {#subsec:sim-inequality-constraints} In this section, we consider Problem [\[eq:quadratic-online-optimization-general\]](#eq:quadratic-online-optimization-general){reference-type="eqref" reference="eq:quadratic-online-optimization-general"} with only inequality constraints, and we compare three different approaches to solve this optimization problem. In the simulations we set $n = 10$, we use the same matrix ${\mathbold{A}}$ used in Example [5.1](#subsec:sim-equality-constraints){reference-type="ref" reference="subsec:sim-equality-constraints"}, and as ${\mathbold{G}}^{\prime}$ we take the matrix ${\mathbold{G}}$ used in Example [5.1](#subsec:sim-equality-constraints){reference-type="ref" reference="subsec:sim-equality-constraints"}. The signals ${\mathbold{b}}_k$ and ${\mathbold{h}}^{\prime}_k$ are triangular waves with $\omega=10^{-4}\pi$, and we take the double integrator as internal model. In Figure [3](#fig:comp-ineq){reference-type="ref" reference="fig:comp-ineq"}, we compare three different algorithms: - *Projected primal-dual* algorithm: This is the classical Algorithm [\[eq:proPrimalDual\]](#eq:proPrimalDual){reference-type="eqref" reference="eq:proPrimalDual"} used in both Figure [2](#fig:comp-eq){reference-type="ref" reference="fig:comp-eq"} and Figure [3](#fig:comp-ineq){reference-type="ref" reference="fig:comp-ineq"} as the baseline method.
- A modification of the control-based algorithm of Equations [\[eq:control-based-algorithm\]](#eq:control-based-algorithm){reference-type="eqref" reference="eq:control-based-algorithm"}, which includes the saturation block but, having $\rho = 0$, does not incorporate the anti wind-up mechanism shown in Figure [\[fig:block-diagram-anti wind-up\]](#fig:block-diagram-anti wind-up){reference-type="ref" reference="fig:block-diagram-anti wind-up"}. - The control-based approach with anti wind-up compensation and $\rho=1$, presented in Equations [\[eq:control-based-algorithma-anti wind-up\]](#eq:control-based-algorithma-anti wind-up){reference-type="eqref" reference="eq:control-based-algorithma-anti wind-up"} and realized in the control scheme depicted in Figure [\[fig:block-diagram-anti wind-up\]](#fig:block-diagram-anti wind-up){reference-type="ref" reference="fig:block-diagram-anti wind-up"}. To compare the tracking performance of the three different algorithms, we separate the analysis into two time windows, according to whether the inequality constraint is active or inactive. In other words, if we denote by ${\mathbold{x}}^*_k$ the optimal solution with inequality constraints and by ${\mathbold{x}}^*_{k,\mathrm{UNC}}$ the optimal solution without inequality constraints, we analyse what happens for the $k$'s when ${\mathbold{x}}^*_k={\mathbold{x}}^*_{k,\mathrm{UNC}}$ and when ${\mathbold{x}}^*_k\not={\mathbold{x}}^*_{k,\mathrm{UNC}}$.
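For a single inequality constraint the two regimes can be computed in closed form: $x^*_k$ equals the unconstrained optimum when the constraint is inactive, and solves an equality-constrained KKT system (with multiplier $w' > 0$) when the unconstrained optimum would violate it. The instance below is an illustrative assumption:

```python
import numpy as np

A = np.diag([2.0, 2.0])
b = np.array([-2.0, -4.0])
g = np.array([1.0, 1.0])            # single constraint g^T x <= h

def optimum(h):
    x_unc = np.linalg.solve(A, -b)              # x*_{k,UNC}
    if g @ x_unc <= h:                          # constraint inactive: w' = 0
        return x_unc, 0.0
    # constraint active: treat it as an equality and solve the KKT system
    KKT = np.block([[A, g[:, None]], [g[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(KKT, np.concatenate([-b, [h]]))
    return sol[:2], sol[2]

x_act, wp_act = optimum(h=2.0)   # unconstrained optimum (1, 2) violates 1 + 2 <= 2
x_in,  wp_in  = optimum(h=4.0)   # constraint inactive
print(x_act, wp_act, x_in, wp_in)
```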
When $k$ is such that ${\mathbold{x}}^*_k={\mathbold{x}}^*_{k,\mathrm{UNC}}$, and assuming that Equations [\[eq:control-based-algorithma-anti wind-up\]](#eq:control-based-algorithma-anti wind-up){reference-type="eqref" reference="eq:control-based-algorithma-anti wind-up"} converge to the optimal couple, $\left({\mathbold{x}}_k,{\mathbold{w}}_k^{\prime}\right)\to\left({\mathbold{x}}_k^*,{\mathbold{w}}_k^{\prime\,*}\right)$ as $k$ increases, we observe that both the control-based algorithms, with and without anti wind-up, have dual variable ${\mathbold{w}}_k^{\prime} = 0$, thanks to the complementary slackness condition [@bertsekas_constrained_2014]. Then, dynamics [\[eq:control-based-algorithm-y\]](#eq:control-based-algorithm-y){reference-type="eqref" reference="eq:control-based-algorithm-y"} is decoupled from [\[eq:control-based-algorithm-zp\]](#eq:control-based-algorithm-zp){reference-type="eqref" reference="eq:control-based-algorithm-zp"} and the algorithm does not utilize the dual variable ${\mathbold{w}}_k^{\prime}$ in computing the optimum. Therefore, Equations [\[eq:control-based-algorithm\]](#eq:control-based-algorithm){reference-type="eqref" reference="eq:control-based-algorithm"} and [\[eq:control-based-algorithma-anti wind-up\]](#eq:control-based-algorithma-anti wind-up){reference-type="eqref" reference="eq:control-based-algorithma-anti wind-up"} are practically equivalent, as we can see also from Figure [3](#fig:comp-ineq){reference-type="ref" reference="fig:comp-ineq"}.
Nevertheless, the error ${\mathbold{f}}^{\prime}_k$ in Equation [\[eq:errorDualanti wind-up\]](#eq:errorDualanti wind-up){reference-type="eqref" reference="eq:errorDualanti wind-up"}, and consequently ${\mathbold{y}}^{\prime}_k$ in Equation [\[eq:control-based-algorithm-zp\]](#eq:control-based-algorithm-zp){reference-type="eqref" reference="eq:control-based-algorithm-zp"}, are always influenced by the gradient $\nabla_{{\mathbold{w}}^{\prime}} \mathcal{L}^{\prime}_k({\mathbold{x}}_{k}, {\mathbold{w}}_{k}, {\mathbold{w}}^{\prime}_{k})$, even if ${\mathbold{w}}^{\prime}_{k} = 0$ and ${\mathbold{x}}_k$ is approaching the true optimum. From a control perspective, this is the main effect of the wind-up: certain internal state variables of the controller (in this case ${\mathbold{y}}^{\prime}_k$) continue to accumulate error even when the desired set point is reached. Hence, the absence of anti wind-up does not lead to a degradation of performance in these regions of Figure [3](#fig:comp-ineq){reference-type="ref" reference="fig:comp-ineq"}: the internal variable of the controller associated with the dual variable keeps growing, but it does not affect the primal variable. When $k$ is such that ${\mathbold{x}}^*_k\not={\mathbold{x}}^*_{k,\mathrm{UNC}}$, namely when the unconstrained optimum violates the linear inequality constraints in Problem [\[eq:quadratic-online-optimization-general\]](#eq:quadratic-online-optimization-general){reference-type="eqref" reference="eq:quadratic-online-optimization-general"}, the inequality constraints act as equality constraints.
Then the true optimum is identified by a pair $\left({\mathbold{x}}_k^*,{\mathbold{w}}_k^{\prime\,*}\right)$ with ${\mathbold{w}}_k^{\prime\,*} \neq 0$, and once ${\mathbold{w}}_k^{\prime}$ becomes nonzero, Equation [\[eq:satanti wind-up\]](#eq:satanti wind-up){reference-type="eqref" reference="eq:satanti wind-up"} reduces to an identity map, making Algorithm [\[eq:control-based-algorithm\]](#eq:control-based-algorithm){reference-type="eqref" reference="eq:control-based-algorithm"} and Algorithm [\[eq:control-based-algorithma-anti wind-up\]](#eq:control-based-algorithma-anti wind-up){reference-type="eqref" reference="eq:control-based-algorithma-anti wind-up"} identical. As discussed earlier, the control-based algorithm guarantees perfect tracking. However, as shown in Figure [3](#fig:comp-ineq){reference-type="ref" reference="fig:comp-ineq"}, there is a significant difference between the two algorithms when passing through the red region. This difference is due to the wind-up, which delays ${\mathbold{w}}_k^{\prime}$ relative to ${\mathbold{w}}_k^{\prime\,*}$ in the transition from ${\mathbold{w}}_k^{\prime} = 0$ to ${\mathbold{w}}_k^{\prime} \neq 0$. The main issue arises when the dual variable ${\mathbold{w}}^{\prime}_k$ is saturated. In that case, the algorithm continues to integrate the error ${\mathbold{f}}^{\prime}_k$ and to grow the internal state variables associated with the integral actions. Ideally, when saturation is active, we are within the bounds of the inequality constraints, so the optimization algorithm can operate as if it were unconstrained. Consequently, integrating the error ${\mathbold{f}}_k^{\prime}$ is not only redundant but detrimental: the integral actions retain memory of the previously integrated error and degrade the algorithm's performance even outside the saturation region. 
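From this control viewpoint, the wind-up and the back-calculation remedy can be illustrated on a scalar toy integrator. The following sketch is purely illustrative (the constant error signal, the gains and the horizon are hypothetical, not the paper's recursions); the saturation $\max(\cdot,0)$ plays the role of the nonnegativity constraint on the dual variable.

```python
# Illustrative sketch of wind-up in a saturated integrator (hypothetical
# gains and error signal; not the recursions of the paper).
def run(steps=200, gain=0.5, anti_windup=True, k_aw=1.0):
    y = 0.0  # internal integrator state (analogue of y'_k)
    for _ in range(steps):
        err = -1.0               # constant error pushing the state below zero
        y += gain * err          # integral action keeps accumulating the error
        w = max(y, 0.0)          # saturation: the dual variable must stay >= 0
        if anti_windup:
            y += k_aw * (w - y)  # back-calculation: discharge the excess state
    return y

# Without compensation the integrator state drifts to -steps*gain, so the dual
# variable lags once the constraint becomes active; with compensation the
# state stays at the saturation boundary and can react immediately.
```

The `k_aw` gain controls how aggressively the excess internal state is discharged; with `k_aw = 1` the state is reset to the saturation boundary at every step.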
The anti wind-up mechanism aims to suppress these integral actions when they are unnecessary, by directly compensating for the input ${\mathbold{f}}_k^{\prime}$. Finally, Figure [3](#fig:comp-ineq){reference-type="ref" reference="fig:comp-ineq"} also shows some transient effects, some related to the crossing of the red region and others linked to the changes of slope of the triangular wave. ![Comparison between the *projected primal-dual*, the control-based without anti wind-up, and the control-based with anti wind-up compensation in a semilogarithmic plot where ${\mathbold{b}}_k$ and ${\mathbold{h}}^{\prime}_k$ are assumed to be two triangular waves.](figures/2-constrained_linear_AntiWindup.pdf){#fig:comp-ineq} ## Time-varying quadratic term and constraint As done in [@bastianello_internal_2022], to assess the performance of our algorithm in more complex scenarios, we undertake an experimental validation. In these experiments we take $n = 10$ and $p = 2$. We begin by considering a time-varying quadratic term ${\mathbold{A}}_k = {\mathbold{A}}_1 + \sin\left(\omega k \right){\mathbold{A}}_2$, where ${\mathbold{A}}_1$ is symmetric positive definite with eigenvalues in the interval $\left[1,10\right]$, and ${\mathbold{A}}_2$ is a sparse symmetric matrix whose nonzero entries, all equal to one, make up ten percent of its entries. Moreover, we also consider time-varying constraints. Precisely, we set ${\mathbold{G}}_k = {\mathbold{G}}_1 + \sin\left(\omega k \right){\mathbold{G}}_2$, where ${\mathbold{G}}_1$ is a random rectangular matrix with singular values in the range $\left[1,3\right]$ and, as before, ${\mathbold{G}}_2$ is a sparse matrix with ten percent of its entries equal to one. The sinusoidal term $\sin\left(\omega k \right)$ has $\omega = 1/2$. 
The remaining terms in Problem [\[eq:quadratic-online-optimization\]](#eq:quadratic-online-optimization){reference-type="eqref" reference="eq:quadratic-online-optimization"} are held constant, namely ${\mathbold{b}}_k = \bar{{\mathbold{b}}}\in {\mathbb{R}}^n$ and ${\mathbold{h}}_k = \bar{{\mathbold{h}}}\in {\mathbb{R}}^p$. To implement our algorithm we require an internal model that captures the dynamic nature of $f_k({\mathbold{x}})$. In view of the periodic evolution of $f_k({\mathbold{x}})$, reasonable choices of the internal model are $$\label{eq:models-timeVar} p\left(z\right) = \left(z-1\right)\prod_{l=1}^{L}\left[z^2 - 2\cos\left(l\omega\right)z+ 1 \right], \; L \in \mathbb{N}.$$ Since $f_k({\mathbold{x}})$ is non-linear, we can only approximate the internal model. Utilizing model [\[eq:models-timeVar\]](#eq:models-timeVar){reference-type="eqref" reference="eq:models-timeVar"}, we aim to capture the fundamental periodicity of the problem data and its faster variations. The results of these simulations are depicted in Figure [4](#fig:comp-timeVar){reference-type="ref" reference="fig:comp-timeVar"}. As we can see, Algorithm [\[eq:control-based-algorithm\]](#eq:control-based-algorithm){reference-type="eqref" reference="eq:control-based-algorithm"} exhibits better performance than the *primal-dual* algorithm. Moreover, by taking more complex internal models, we can improve the tracking error at the cost of a slower convergence rate. This slower convergence is characteristic of the internal model principle, as the controller requires a certain amount of time to reach its steady-state response; this duration increases with the complexity of the internal model. Nevertheless, given our primary interest in the algorithm's asymptotic performance, this trade-off is acceptable. 
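All roots of the polynomial in Equation [\[eq:models-timeVar\]](#eq:models-timeVar){reference-type="eqref" reference="eq:models-timeVar"} lie on the unit circle: $z=1$ for the constant component and $z=e^{\pm i l\omega}$ for the $l$-th harmonic. A short numerical sketch of this fact (illustrative only, with $L$ and $\omega$ chosen arbitrarily):

```python
import numpy as np

def internal_model(L, omega):
    """Coefficients of p(z) = (z - 1) * prod_{l=1..L} (z^2 - 2cos(l*omega)z + 1)."""
    p = np.array([1.0, -1.0])                              # factor (z - 1)
    for l in range(1, L + 1):
        p = np.polymul(p, [1.0, -2.0 * np.cos(l * omega), 1.0])
    return p

p = internal_model(L=2, omega=0.5)
roots = np.roots(p)   # 2L + 1 = 5 roots, all of modulus 1 (marginally unstable)
```

This confirms that the internal model places a marginally unstable mode at each frequency $l\omega$ it must track, plus the integrator mode at $z=1$.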
![Comparison between the *primal-dual* algorithm, and the control-based with models of Equation [\[eq:models-timeVar\]](#eq:models-timeVar){reference-type="eqref" reference="eq:models-timeVar"} ($L = 1,2,3,6$) in the time-varying case.](figures/3-TimeVaryingAandC.pdf){#fig:comp-timeVar} ## Non-quadratic cost {#subsec:non-quadratic} In this section, we present an additional example of the application of our algorithm in a setting where the objective function is non-quadratic. Specifically, we use the following function taken from [@simonetto_class_2016] $$f_k({\mathbold{x}}) = \frac{1}{2}{\mathbold{x}}^{\top}{\mathbold{A}}{\mathbold{x}}+ {\mathbold{b}}^{\top}{\mathbold{x}}+ \sin\left(\omega k\right)\log\left[1+\exp\left({\mathbold{c}}^{\top}{\mathbold{x}}\right)\right],$$ subject to the constraint $${\mathbold{G}}{\mathbold{x}}= {\mathbold{h}}_k.$$ In the above equations we take $\omega = 1/2$, $n = 10$, and ${\mathbold{b}},{\mathbold{c}}\in {\mathbb{R}}^{n}$ with ${\mathbold{c}}$ such that $\left\lVert{\mathbold{c}}\right\rVert=1$. The matrices ${\mathbold{A}}\in {\mathbb{R}}^{n\times n}$ and ${\mathbold{G}}\in {\mathbb{R}}^{p\times n}$, with $p=1$, are generated as described in Section [5.1](#subsec:sim-equality-constraints){reference-type="ref" reference="subsec:sim-equality-constraints"}, and the term ${\mathbold{h}}_k$ is the same as in case [\[enum:case2-eq\]](#enum:case2-eq){reference-type="ref" reference="enum:case2-eq"} of Section [5.1](#subsec:sim-equality-constraints){reference-type="ref" reference="subsec:sim-equality-constraints"}. Finally, in Figure [5](#fig:comp-nonQuad){reference-type="ref" reference="fig:comp-nonQuad"}, we report the tracking performance of the *primal-dual* algorithm along with three different versions of the control-based algorithm that implement the internal models of Equation [\[eq:models-timeVar\]](#eq:models-timeVar){reference-type="eqref" reference="eq:models-timeVar"}. 
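Any first-order scheme applied to this cost uses its gradient, which by the softplus derivative is $\nabla f_k({\mathbold{x}}) = {\mathbold{A}}{\mathbold{x}} + {\mathbold{b}} + \sin(\omega k)\,\sigma({\mathbold{c}}^{\top}{\mathbold{x}})\,{\mathbold{c}}$, with $\sigma$ the logistic function. A sketch with made-up small data (the matrices and vectors below are hypothetical, not those of the simulation):

```python
import numpy as np

def f_k(x, k, A, b, c, omega=0.5):
    # non-quadratic cost; log1p(exp(.)) is the softplus function
    return 0.5 * x @ A @ x + b @ x + np.sin(omega * k) * np.log1p(np.exp(c @ x))

def grad_f_k(x, k, A, b, c, omega=0.5):
    sig = 1.0 / (1.0 + np.exp(-(c @ x)))   # logistic sigmoid = softplus'
    return A @ x + b + np.sin(omega * k) * sig * c

# finite-difference check on toy 2-dimensional data
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
c = np.array([0.6, 0.8])        # ||c|| = 1 as in the text
x = np.array([0.3, -0.2])
eps = 1e-6
num = np.array([(f_k(x + eps * e, 3, A, b, c) - f_k(x - eps * e, 3, A, b, c)) / (2 * eps)
                for e in np.eye(2)])
# num approximates grad_f_k(x, 3, A, b, c)
```

The central-difference approximation `num` agrees with the closed-form gradient to numerical precision.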
From Figure [5](#fig:comp-nonQuad){reference-type="ref" reference="fig:comp-nonQuad"}, we can see that the control-based method outperforms the *primal-dual* algorithm in terms of tracking performance, albeit with a slower convergence rate. ![Comparison between the *primal-dual* algorithm, and the control-based with models of Equation [\[eq:models-timeVar\]](#eq:models-timeVar){reference-type="eqref" reference="eq:models-timeVar"} ($L = 1,2,3$) in the non-quadratic case.](figures/2-non_quadratic.pdf){#fig:comp-nonQuad} # Conclusions In this paper we addressed the solution of online constrained optimization problems by leveraging control theory to design novel algorithms. When only equality constraints are present, we showed how to reformulate the problem as a robust linear control problem, and we designed a suitable controller to achieve zero tracking error. When inequality constraints are also imposed, we showed how the required nonnegativity of the dual variables can lead to a wind-up phenomenon. As a consequence, we proposed a modified version of the algorithm that incorporates an anti wind-up scheme. Overall, we showed with both theoretical and numerical results how the proposed approaches outperform state-of-the-art alternatives. [^1]: Corresponding author N. Bastianello. [^2]: A zero $\bar z\in{\mathbb C}$ of a polynomial is said to be marginally unstable if $|\bar z|=1$.
{ "id": "2309.15498", "title": "A Control Theoretical Approach to Online Constrained Optimization", "authors": "Umberto Casti, Nicola Bastianello, Ruggero Carli, Sandro Zampieri", "categories": "math.OC cs.SY eess.SY", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The *dichromatic number* of a digraph $D$ is the smallest $k$ such that $D$ can be partitioned into $k$ acyclic subdigraphs, and the dichromatic number of an undirected graph is the maximum dichromatic number over all its orientations. Extending a well-known result of Lovász, we show that the dichromatic number of the Kneser graph $KG(n,k)$ is $\Theta(n-2k+2)$ and that the dichromatic number of the Borsuk graph $BG(n+1,a)$ is $n+2$ if $a$ is large enough. We then study the list version of the dichromatic number. We show that, for any $\varepsilon>0$ and $2\leq k\leq n^{\frac{1}{2}-\varepsilon}$, the list dichromatic number of $KG(n,k)$ is $\Theta(n\ln n)$. This extends a recent result of Bulankina and Kupavskii on the list chromatic number of $KG(n,k)$, where the same behaviour was observed. We also show that for any $\rho>3$, $r\geq 2$ and $m\geq\ln^{\rho}r$, the list dichromatic number of the complete $r$-partite graph with $m$ vertices in each part is $\Theta(r\ln m)$, extending a classical result of Alon. Finally, we give a directed analogue of Sabidussi's theorem on the chromatic number of graph products. author: - "Ararat Harutyunyan[^1]        Gil Puig i Surroca[^2]" bibliography: - biblio.bib title: Colouring Complete Multipartite and Kneser-type Digraphs --- **2010 Mathematics Subject Classification:** 05C15, 05C20, 05C69, 05C76 **Keywords:** dichromatic number, list dichromatic number, complete multipartite graphs, Kneser graphs, Borsuk graphs # Introduction We consider graphs/digraphs without loops or multiple edges/arcs. They are all finite unless otherwise specified. A *proper $k$-colouring* of an undirected graph $G=(V,E)$ is a mapping $f:V\rightarrow [k]=\{1,...,k\}$ such that $f^{-1}(i)$ is an independent set for every $i\in[k]$. The *chromatic number* of $G$, denoted by $\chi(G)$, is the minimum $k$ for which $G$ has a proper $k$-colouring. 
A *proper $k$-colouring* of a digraph $D=(V,A)$ is a mapping $f:V\rightarrow [k]$ such that $f^{-1}(i)$ is acyclic (i.e. the subdigraph induced by $f^{-1}(i)$ has no directed cycles) for every $i\in[k]$, and the *dichromatic number* of $D$, denoted by $\vec\chi(D)$, is the minimum $k$ for which $D$ has a proper $k$-colouring. Note that this definition generalizes the usual colouring, in the sense that the chromatic number of a graph is equal to the dichromatic number of its corresponding bidirected digraph. The notion was introduced by Neumann-Lara in 1982 [@Neumann-Lara1982] and it was later rediscovered by Mohar [@Mohar2003]. Since then, it has been shown that many classical results hold also in this setting [@Betal04; @HM11b; @HM11a; @HM11c]. However, some fundamental questions remain unanswered. The *dichromatic number* of an undirected graph $G$, denoted by $\vec\chi(G)$, is the maximum dichromatic number over all its orientations. Erdős and Neumann-Lara conjectured the following. **Conjecture 1**. **[@Erdos1979]* For every integer $k$ there exists an integer $r(k)$ such that $\vec\chi(G)\geq k$ for any undirected graph $G$ satisfying $\chi(G)\geq r(k)$.* For instance, $r(1)=1$ and $r(2)=3$. But it is already unknown whether $r(3)$ exists. Mohar and Wu [@MoharWu2016] managed to prove the fractional analogue of Conjecture [Conjecture 1](#conj:E-NL){reference-type="ref" reference="conj:E-NL"}. Providing further evidence for the conjecture, they showed that Kneser graphs with large chromatic number have large dichromatic number. Improving their bound, we show that the dichromatic number of Kneser graphs is of the order of their chromatic number. In the 1970s Erdős, Rubin and Taylor [@ERT1979], and, independently, Vizing [@Vizing1976], introduced the list variant of the colouring problem, which can be carried over to the directed setting as well. 
A *$k$-list assignment* to a graph $G$ (or a digraph $D$) with vertex set $V$ is a mapping $L:V\rightarrow\binom{\mathbb Z^+}{k}=\{X\subseteq\mathbb Z^+\mid |X|=k\}$. A colouring (a mapping) $f:V\rightarrow\mathbb Z^+$ is said to be *accepted* by $L$ if $f(v)\in L(v)$ for every $v\in V$ (or just to be *acceptable* if the list assignment is understood). $G$ (resp. $D$) is *$k$-list colourable* if every $k$-list assignment accepts a proper colouring. The *list chromatic number* (or the *choice number*) of $G$ (resp. the *list dichromatic number* of $D$), denoted by $\chi_{\ell}(G)$ (resp. $\vec\chi_{\ell}(D)$), is the minimum $k$ such that $G$ (resp. $D$) is $k$-list colourable. Similarly, the *list dichromatic number* of $G$, denoted by $\vec\chi_{\ell}(G)$, is the maximum list dichromatic number over all its orientations. Bensmail, Harutyunyan and Le [@BHL2018] gave a sample of instances where the list dichromatic number of digraphs behaves as its undirected counterpart. Recently, Bulankina and Kupavskii [@BulankinaKupavskii2022] determined up to a constant factor the list chromatic number of a significant fraction of Kneser graphs. We show that their list dichromatic number is of the same order as the list chromatic number. The paper is organised as follows. In Section 2, we prove that Kneser graphs have large dichromatic number (at least a multiplicative constant of their chromatic number) and we show that the general lower bound for the chromatic number of Borsuk graphs is also a general lower bound for their dichromatic number. In Section 3, we study the list dichromatic number of the complete multipartite graph $K_{m*r}$, determining its asymptotics by extending a classical result of Alon [@Alon1992]. Then we extend Bulankina and Kupavskii's result on the list chromatic number of Kneser graphs to the list dichromatic number. Finally, in Section 4, we prove a directed analogue of Sabidussi's theorem on the chromatic number of Cartesian products of graphs. 
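For small digraphs, the dichromatic number can be computed by brute force directly from the definition, trying every partition into colour classes and checking acyclicity of each class. The following sketch (illustrative only, exponential-time) makes the definitions above concrete:

```python
from itertools import product

def is_acyclic(vertices, arcs):
    # Kahn-style topological check restricted to `vertices`
    vs = set(vertices)
    indeg = {v: 0 for v in vs}
    adj = {v: [] for v in vs}
    for (u, v) in arcs:
        if u in vs and v in vs:
            adj[u].append(v)
            indeg[v] += 1
    stack = [v for v in vs if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == len(vs)

def dichromatic_number(n, arcs):
    # smallest k such that vertices 0..n-1 split into k acyclic classes
    for k in range(1, n + 1):
        for colouring in product(range(k), repeat=n):
            classes = [[v for v in range(n) if colouring[v] == c] for c in range(k)]
            if all(is_acyclic(cls, arcs) for cls in classes):
                return k
    return n

# a directed 3-cycle needs 2 colours; a transitive tournament needs 1
assert dichromatic_number(3, [(0, 1), (1, 2), (2, 0)]) == 2
assert dichromatic_number(3, [(0, 1), (0, 2), (1, 2)]) == 1
```

The two assertions illustrate that the dichromatic number of a bidirected graph recovers the chromatic number, while acyclic orientations need only one colour.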
# The dichromatic number of Kneser graphs and Borsuk graphs The *Kneser graph* with parameters $n,k$, denoted by $KG(n,k)$, is the undirected graph with vertex set $\binom{[n]}{k}$ where two vertices $u,v$ are adjacent if and only if $u\cap v=\emptyset$. It is well-known [@Greene2002; @Lovasz1978; @Matousek2003] that $\chi(KG(n,k))=n-2k+2$ for $1\leq k\leq\frac{n}{2}$, as conjectured by Kneser [@Kneser1955; @Ziegler2021]. Mohar and Wu showed that, if $k$ is not too close to $\frac{n}{2}$, the dichromatic number of $KG(n,k)$ is unbounded as well. More precisely, they proved the following. **Theorem 2**. **[@MoharWu2016]* For any positive integers $n,k$ with $1\leq k\leq\frac{n}{2}$ we have $$\vec\chi(KG(n,k))\geq \left\lfloor\frac{n-2k+2}{8\log_2\frac{n}{k}}\right\rfloor.$$* Note that, since $\vec\chi(G)\leq\chi(G)$ for any graph $G$, the estimate in Theorem [Theorem 2](#thm:MW){reference-type="ref" reference="thm:MW"} is sharp up to a constant factor when $k$ is a constant fraction of $n$. Theorem [Theorem 6](#teo:Kneser){reference-type="ref" reference="teo:Kneser"} improves this bound for slower growth rates of $k$. Note that it cannot be extended to $k=1$ due to the following result on tournaments. **Theorem 3**. **[@EGK1991; @EM1964; @Harutyunyan2011]* Let $T$ be a tournament of order $n$. Then $\vec\chi(T)\leq\frac{n}{\log_2 n}(1+\text o(1))$.* For the proof of Theorem [Theorem 6](#teo:Kneser){reference-type="ref" reference="teo:Kneser"}, we shall adapt Greene's proof of Kneser's conjecture [@Greene2002]. The following version of Lusternik--Schnirelmann--Borsuk theorem plays a key role. **Lemma 4**. **[@Greene2002]* If the sphere $\mathbb S^n$ is covered with $n+1$ sets, each of which is either open or closed, then one of the sets contains a pair of antipodal points.* From now on, by $\mathbb S^n$ we will denote the embedded unit sphere $S(0,1)=\{x\in\mathbb R^{n+1}\mid \|x\|=1\}\subseteq\mathbb R^{n+1}$ centered at the origin. 
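As a small sanity check of the formula $\chi(KG(n,k))=n-2k+2$, one can build $KG(5,2)$ (the Petersen graph) and compute its chromatic number by brute force (illustrative sketch only):

```python
from itertools import combinations, product

def kneser_graph(n, k):
    verts = list(combinations(range(n), k))
    edges = [(i, j) for i in range(len(verts)) for j in range(i + 1, len(verts))
             if not set(verts[i]) & set(verts[j])]   # adjacent iff disjoint
    return verts, edges

def chromatic_number(num_v, edges):
    # brute force over all colourings; feasible only for tiny graphs
    for c in range(1, num_v + 1):
        for col in product(range(c), repeat=num_v):
            if all(col[u] != col[v] for (u, v) in edges):
                return c

verts, edges = kneser_graph(5, 2)        # KG(5,2) is the Petersen graph
assert chromatic_number(len(verts), edges) == 5 - 2 * 2 + 2   # = 3
```
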
The following probabilistic lemma will also be of help. **Lemma 5**. *Let $G$ be a graph of order $n\geq 2$ and $D$ the random orientation of $G$ obtained by orienting every edge independently with probability $\frac{1}{2}$. Let $E_{\ell}$ be the event that there exists a subgraph of $G$ isomorphic to $K_{\ell,\ell}$ which is acyclic in $D$. If $5\log_2 n\leq\ell$ then $\mathbb P(E_{\ell})<\frac{1}{2}$.* *Proof.* Each acyclic orientation of $K_{\ell,\ell}$ can be extended to a transitive tournament on the same vertex set, and different orientations always extend to different tournaments. Therefore, among the $2^{\ell^2}$ possible orientations of $K_{\ell,\ell}$, at most $(2\ell)!\leq(2\ell)^{2\ell}\leq n^{2\ell}$ are acyclic. Since $G$ has at most $\binom{n}{2\ell}(2\ell)!\leq n^{2\ell}$ copies of $K_{\ell,\ell}$, we have that $\mathbb P(E_{\ell})\leq n^{4\ell}2^{-\ell^2}\leq 2^{-\ell^2/5}\leq 2^{-5}$. ◻ **Theorem 6**. *There exists a positive integer $n_0$ such that, for all $n\geq n_0$ and $2\leq k\leq\frac{n}{2}$, we have $\vec\chi(KG(n,k))\geq\left\lfloor \frac{1}{16}\chi(KG(n,k))\right\rfloor$.* *Proof.* Let $0<c<\frac{1}{2}$ be a constant and set $t=\frac{-1}{8\log_2 c}$; we will show that $\vec\chi(KG(n,k))\geq\left\lfloor t\chi(KG(n,k))\right\rfloor$ if $c$ is smaller than a certain quantity. Picking $c=\frac{1}{4}$ will suffice, although there is some margin to choose larger values. If $cn\leq k\leq\frac{n}{2}$, then the result is implied by Theorem [Theorem 2](#thm:MW){reference-type="ref" reference="thm:MW"}. Now suppose that $2\leq k\leq cn$. We assume for a contradiction that, for any given orientation of $KG(n,k)$, we can find a partition of its vertex set into $d=\lfloor t(n-2k+2)\rfloor-1$ acyclic subsets $\mathcal A_1,...,\mathcal A_d$. Let $X\subseteq\mathbb S^{d}\subseteq\mathbb R^{d+1}$ be a set of $n$ points on the unit sphere centered at the origin. We assume that these points together with the origin are in general position. 
In particular, there are no $d+1$ points of $X$ in a common hyperplane through the origin. The set of vertices of $KG(n,k)$ is assumed to be $\binom{X}{k}$. Let $s=tk+(1-t)\left(\frac{n}{2}+1\right)$ and $\ell=\left\lceil\frac{1}{d}\binom{\lfloor s\rfloor}{k}\right\rceil$ (note that $d\geq 1$; otherwise, the result is immediate). We define $U_i$ as the set of points $x\in\mathbb S^d$ for which there exist $\ell$ different vertices $A_1,...,A_{\ell}\in\mathcal A_i$ such that $x\cdot y>0$ for every $y\in A_1\cup...\cup A_{\ell}$. That is, $U_i$ is the set of poles of the open hemispheres containing all the points of $\ell$ vertices of $\mathcal A_i$. It is clear that $U_i$ is an open set of $\mathbb S^d$. Additionally, we define $F=\mathbb S^d\setminus U_1\setminus...\setminus U_d$. By Lemma [Lemma 4](#lem:Greene){reference-type="ref" reference="lem:Greene"}, one of the sets $U_1,...,U_d,F$ contains two antipodal points. Suppose that $U_i$ contains two antipodal points $x,-x$. Then the hemispheres with pole $x,-x$ each contain the points of $\ell$ vertices of $\mathcal A_i$. Therefore $KG(n,k)[\mathcal A_i]$ has a subgraph isomorphic to $K_{\ell,\ell}$. By Lemma [Lemma 5](#lem:dense_bipartite){reference-type="ref" reference="lem:dense_bipartite"}, $\ell\leq 5\log_2\binom{n}{k}\leq 5 k\log_2 n$. On the other hand, $$\ell\geq\frac{\binom{\lfloor s\rfloor}{k}}{d}\geq\frac{1}{n}\frac{\lfloor s\rfloor(\lfloor s\rfloor-1)...(\lfloor s\rfloor-k+1)}{k!}\geq\frac{1}{n}\left(\frac{\lfloor s\rfloor}{k}\right)^k\geq\frac{1}{n}\left(\frac{s-1}{k}\right)^k\geq\frac{1}{n}\left(\frac{(1-t)n}{2k}\right)^k.$$ We distinguish two cases. **Case 1.** $2\leq k\leq n^{\frac{1}{5}}$. In this case $$\ell\geq\frac{1}{n}\left(\frac{(1-t)n}{2k}\right)^k\geq\frac{1}{n}\left(\frac{(1-t)n^{\frac{4}{5}}}{2}\right)^k\geq n^{\frac{1}{5}}\left(\frac{(1-t)n^{\frac{1}{5}}}{2}\right)^k,$$ contradicting, when $n$ is large, that $\ell\leq 5k\log_2 n\leq 5n^{\frac{1}{5}}\log_2 n$. 
**Case 2.** $n^{\frac{1}{5}}\leq k\leq cn$. In this case $$\ell\geq\frac{1}{n}\left(\frac{(1-t)n}{2k}\right)^k\geq\frac{1}{n}\left(\frac{1-t}{2c}\right)^{n^{\frac{1}{5}}}.$$ Provided that $1-t-2c>0$, this contradicts that $\ell\leq 5k\log_2 n\leq 5c n\log_2 n$ when $n$ is large. Note that by picking $c=\frac{1}{4}$ we have $1-t-2c=1-\frac{1}{16}-\frac{1}{2}>0$. In conclusion, $F$ must contain two antipodal points $x,-x$. But then the hemispheres with pole $x,-x$ each contain at most $\lfloor s\rfloor-1$ points of $X$. Indeed, if one of them contained $\lfloor s\rfloor$ points, it would contain the points of $\binom{\lfloor s\rfloor}{k}>\left(\left\lceil\frac{1}{d}\binom{\lfloor s\rfloor}{k}\right\rceil-1\right)d=(\ell-1)d$ vertices, so at least $\ell$ vertices of the same colour would be involved. Hence, there are at least $n-2(\lfloor s\rfloor-1)\geq n-2s+2=t(n-2k+2)\geq d+1$ points of $X$ on the hyperplane separating the two hemispheres, contradicting the general position of $X\cup\{0\}$. ◻ Kneser's conjecture remained open for more than two decades [@Kneser1955; @Ziegler2021]. The famous resolution by Lovász [@Lovasz1978] was inspired by the analogy between Kneser graphs and Borsuk graphs. Let $n$ be a natural number and $a\in(0,2)$ a real number. The *Borsuk graph* with parameters $n+1$ and $a$, denoted by $BG(n+1,a)$, is the undirected graph with vertex set $\mathbb S^n=\{x\in\mathbb R^{n+1}\mid \|x\|=1\}$ where two vertices $x,y$ are adjacent if and only if $\|y-x\|\geq a$. The study of the chromatic number of Borsuk graphs can be linked with geometric packing/covering problems. If $a$ is large enough, an $(n+2)$-colouring of $BG(n+1,a)$ can be obtained by projecting the faces of an inscribed $(n+1)$-dimensional simplex. It turns out that this cannot be improved, no matter how close $a$ is to $2$. Indeed, it is known that $\chi(BG(n+1,a))\geq n+2$ for every $a\in(0,2)$, which is in fact equivalent to the Borsuk--Ulam theorem. 
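The counting step of Lemma [Lemma 5](#lem:dense_bipartite){reference-type="ref" reference="lem:dense_bipartite"} used above, namely that every acyclic orientation of $K_{\ell,\ell}$ extends to a transitive tournament and hence there are at most $(2\ell)!$ of them, can be checked exhaustively for $\ell=2$ (illustrative sketch only):

```python
from itertools import product

# K_{2,2} with parts {0,1} and {2,3}
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]

def is_acyclic(arcs):
    # Kahn's algorithm on the 4 vertices
    verts = {0, 1, 2, 3}
    indeg = {v: 0 for v in verts}
    adj = {v: [] for v in verts}
    for (u, v) in arcs:
        adj[u].append(v)
        indeg[v] += 1
    stack = [v for v in verts if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == len(verts)

acyclic = sum(
    is_acyclic([(u, v) if c == 0 else (v, u) for (u, v), c in zip(edges, bits)])
    for bits in product([0, 1], repeat=len(edges))
)
# only the two directed 4-cycles are cyclic, so 14 of the 2^4 = 16
# orientations are acyclic, and indeed 14 <= (2*2)! = 24
```
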
The rest of the present section is devoted to proving that the dichromatic number of Borsuk graphs admits the same general lower bound. **Theorem 7**. *$\vec\chi(BG(n+1,a))\geq n+2$ for any $n\geq 1$ and any $a\in(0,2)$.* *Proof.* Let us denote by $B(x,r)$ the open ball $\{y\in\mathbb R^{n+1}\mid \|y-x\|<r\}$. Let $\delta\in (0,2)$ be such that every point in $B(x,\delta)\cap\mathbb S^n$ is adjacent to every point in $B(-x,\delta)\cap\mathbb S^n$ for any $x\in\mathbb S^n$. Let $\ell$ be an integer that for now remains unspecified, but that is assumed to be as large as desired. We define $m=\left\lceil\sqrt[n+1]{(\ell-1)(n+1)}\right\rceil+1\leq 2\sqrt[n+1]{(\ell-1)(n+1)}$ and $c=\frac{\delta}{m\sqrt{n+1}}$. An *open hypercube* of $\mathbb R^{n+1}$ is the image by a rigid transformation of a product of intervals $(0,\lambda)^{n+1}\subseteq\mathbb R^{n+1}$, where $\lambda\in\mathbb R^+$. The length of its *side* is $\lambda$ and the length of its *longest diagonal* is its diameter (i.e. $\lambda\sqrt{n+1}$). Let $\mathcal Q_c$ be the set of open hypercubes of the form $(ck_1,ck_1+c)\times...\times(ck_{n+1},ck_{n+1}+c)$ with $(k_1,...,k_{n+1})\in\mathbb Z^{n+1}$. We will make use of the following easy observations about $\mathcal Q_c$. **Observation 1.** For every $x\in\mathbb R^{n+1}$, $B(x,\delta)$ contains at least $m^{n+1}$ hypercubes of $\mathcal Q_c$. *Proof.* Consider an open hypercube $Q$ of longest diagonal $\delta$ with $x\in Q$. Clearly $Q\subseteq B(x,\delta)$ and the side of $Q$ is $\frac{\delta}{\sqrt{n+1}}$. This implies the claim. ◻ **Observation 2.** $B(0,1+2\delta)$ is contained in any open hypercube $Q$ of side $2c\left\lceil\frac{1+2\delta}{c}\right\rceil$ centered at the origin. Moreover, one (in fact exactly one) such $Q$ can be obtained as the interior of the closure of the union of $\left(2\left\lceil\frac{1+2\delta}{c}\right\rceil\right)^{n+1}$ hypercubes of $\mathcal Q_c$. 
Let $\mathcal Q'_c\subseteq\mathcal Q_c$ be the set of $\left(2\left\lceil\frac{1+2\delta}{c}\right\rceil\right)^{n+1}$ hypercubes from Observation 2. For each $Q\in\mathcal Q'_c$ choose a point $x_Q\in Q$. Let $y_Q$ be the point where the open ray starting at the origin and passing through $x_Q$ intersects $\mathbb S^n$. Since $n\geq 1$ we can assume that the points $x_Q$ have been chosen so that $y_Q\neq y_{Q'}$ if $Q\neq Q'$. Let $Y=\{y_Q\mid Q\in\mathcal Q'_c\}$. Note that $$|Y|=|\mathcal Q'_c|=\left(2\left\lceil\frac{(1+2\delta)m\sqrt{n+1}}{\delta}\right\rceil\right)^{n+1}\leq\left(\frac{8(1+2\delta)\sqrt{n+1}}{\delta}\right)^{n+1}(\ell-1)(n+1).$$ **Observation 3.** For every $x\in\mathbb S^{n}$, $B(x,\delta)$ contains at least $m^{n+1}$ points of $Y$. *Proof.* Since $B((1+\delta)x,\delta)\subseteq B(0,1+2\delta)$, all hypercubes of $\mathcal Q_c$ intersecting $B((1+\delta)x,\delta)$ are in $\mathcal Q'_c$. Hence, by Observation 1, $B((1+\delta)x,\delta)$ contains $m^{n+1}$ hypercubes of $\mathcal Q'_c$. The points in $Y$ corresponding to these hypercubes all lie in $B(x,\delta)$. ◻ We now consider the finite induced subgraph $H=BG(n+1,a)[Y]$ of $BG(n+1,a)$. It will be enough to show that $\vec\chi(H)\geq n+2$. Let us assume for a contradiction that each orientation of $H$ admits a partition of $Y$ into $n+1$ acyclic subsets $Y_1,...,Y_{n+1}$. For $i\in[n+1]$ let $U_i=\{x\in\mathbb S^n\mid |B(x,\delta)\cap Y_i|\geq\ell\}$. Clearly, $U_i$ is an open set of $\mathbb S^n$. Moreover, $\mathbb S^n=U_1\cup...\cup U_{n+1}$. Indeed, otherwise $B(x,\delta)$ would contain at most $(\ell-1)(n+1)<m^{n+1}$ points of $Y$ for some $x\in\mathbb S^n$, contradicting Observation 3. Therefore, by Lemma [Lemma 4](#lem:Greene){reference-type="ref" reference="lem:Greene"}, $U_i$ contains two antipodal points $x$ and $-x$ for some $i\in[n+1]$. By the choice of $\delta$, we know that in $H$ there is a copy of $K_{\ell,\ell}$ of colour $i$. 
Now, $5\log_2 |Y|\leq\ell$ if $\ell$ is large enough. By Lemma [Lemma 5](#lem:dense_bipartite){reference-type="ref" reference="lem:dense_bipartite"}, there is an orientation of $H$ such that every copy of $K_{\ell,\ell}$ in $H$ has a directed cycle, a contradiction. ◻ # The list dichromatic number of Kneser graphs and complete multipartite graphs {#sec:lists} The goal of this section is to study the list dichromatic number of complete multipartite graphs and Kneser graphs. In the first subsection, we study the list dichromatic number of complete multipartite graphs, obtaining tight upper and lower bounds (up to a multiplicative factor). Following this, in the second subsection, we consider Kneser graphs. ## List dichromatic number of complete multipartite graphs We denote by $K_{m*r}$ the complete $r$-partite graph with $m$ vertices in each part. Answering a question of Erdős, Rubin and Taylor [@ERT1979], Alon determined, up to a constant factor, the list chromatic number of $K_{m*r}$. **Theorem 8**. **[@Alon1992]* There exist two positive constants $c_1$ and $c_2$ such that for every $m\geq 2$ and for every $r\geq 2$ $$c_1 r\ln m\leq\chi_{\ell}(K_{m*r})\leq c_2 r\ln m.$$* More precise results were obtained in [@GK2006]. Adapting Alon's proof, we find an analogous bound for the list dichromatic number of $K_{m*r}$ when $r\geq 2$ and $m\geq\ln^{\rho}r$, for any $\rho>3$ (Theorem [Theorem 11](#teo:directed_multipartite){reference-type="ref" reference="teo:directed_multipartite"}). We remark that it is known that $\vec{\chi}(K_{m*r}) = r$, when $m$ is sufficiently large (see [@HHH23]; see also [@Erdos1979]). We will use the following probabilistic result, which is a consequence of the Hoeffding--Azuma inequality. **Theorem 9**. **(Simple Concentration Bound, [@MolloyReed2002])* Let $X$ be a random variable determined by $n$ independent trials, and satisfying the property that changing the outcome of any single trial can affect $X$ by at most $c$. 
Then $$\mathbb P(|X-\mathbb EX|>t)\leq 2e^{-\frac{t^2}{2c^2 n}}.$$* We will also invoke the following elementary fact. **Remark 10**. Let $a\in\mathbb R^+$. The function $f:(a,\infty)\rightarrow\mathbb R$ defined by $f(x)=\left(1-\frac{a}{x}\right)^x$ is increasing. *Proof.* $f'(x)=f(x)\left(\ln\left(1-\frac{a}{x}\right)+\frac{a}{x-a}\right)\geq f(x)\left(\ln\frac{x-a}{x}+\ln\frac{x}{x-a}\right)=0$. ◻ **Theorem 11**. *For every $\rho>3$ there exist constants $c_1,c_2\in\mathbb R^+$ such that if $r\geq 2$ and $m\geq\ln^{\rho} r$ then $$c_1 r\ln m\leq\vec\chi_{\ell}(K_{m*r})\leq c_2 r\ln m.$$* *Proof.* We may assume that $m$ is large enough. Let $V_1,...,V_r$ be the parts of $K_{m*r}$. The upper bound is implied by Theorem [Theorem 8](#thm:complete_multipartite){reference-type="ref" reference="thm:complete_multipartite"}. **Claim.** There is a constant $c$ and an orientation $D$ of $K_{m*r}$ such that, if $\ell\geq c\ln(rm)$, each subgraph of $K_{m*r}$ isomorphic to $K_{\ell}$ or to $K_{\ell,\ell}$ has a directed cycle in $D$. *Proof.* We orient the edges of $K_{m*r}$ at random, independently and with probability $\frac{1}{2}$. Let $E$ (resp. $E'$) be the event that each subgraph of $K_{m*r}$ isomorphic to $K_{\ell}$ (resp. $K_{\ell,\ell}$) has a directed cycle. By Lemma [Lemma 5](#lem:dense_bipartite){reference-type="ref" reference="lem:dense_bipartite"}, $\mathbb P(E),\mathbb P(E')>\frac{1}{2}$ if $c$ is sufficiently large. Hence $\mathbb P(E\cap E')>0$. ◻ Let $k=\lfloor Cr\ln m\rfloor$, where $0<C\leq 1$ is a constant for now unspecified. We start by showing that there exists an assignment of $k$-lists from a palette $\mathscr C$ of $\lfloor r\ln m\rfloor$ colours such that, for any given set $A\subseteq\mathscr C$ of at most $\frac{4}{3}\ln m$ colours, each part has at least $\frac{1}{2}m^{1-\delta}$ vertices that avoid the colours from $A$ on their lists, where $\delta=2C\ln 5$. 
We assign to each vertex $v$ of $D$ a random $k$-list $L(v)$ chosen independently and uniformly among the $\binom{|\mathscr C|}{k}$ possible $k$-lists. Given $i\in[r]$ and $A\subseteq\mathscr C$, consider the random variable $X_{i,A}=|\{v\in V_i\mid L(v)\cap A=\emptyset\}|$. Note that there are exactly $\binom{|\mathscr C|-|A|}{k}$ $k$-lists avoiding the colours in $A$. Devoting ourselves to the case $|A|=\left\lfloor\frac{4}{3}\ln m\right\rfloor$, we have that $$\mathbb E X_{i,A}=m\frac{\binom{|\mathscr C|-|A|}{k}}{\binom{|\mathscr C|}{k}}\geq m\left(\frac{|\mathscr C|-|A|-k}{|\mathscr C|-k}\right)^k=m\left(1-\frac{|A|}{|\mathscr C|-k}\right)^k$$ $$\geq m\left(1-\frac{\frac{4}{3}\ln m}{(1-C)r\ln m -1}\right)^{Cr\ln m}\geq m\left(1-\frac{4}{5}\right)^{2C\ln m}=m^{1-\delta}$$ if $m$ is large enough and $C$ is not too large, using Remark [Remark 10](#lem:calculus){reference-type="ref" reference="lem:calculus"}. By the Simple Concentration Bound (Theorem [Theorem 9](#teo:simple_concentration){reference-type="ref" reference="teo:simple_concentration"}), $$\mathbb P(X_{i,A}<\frac{1}{2}m^{1-\delta})\leq\mathbb P(|X_{i,A}-\mathbb E X_{i,A}|>\frac{1}{2}m^{1-\delta})\leq 2e^{-\frac{1}{8}m^{1-2\delta}}.$$ Let $E$ be the event that $X_{i,A}<\frac{1}{2}m^{1-\delta}$ for some $i\in[r]$ and $A\subseteq\mathscr C$ with $|A|\leq\frac{4}{3}\ln m$. We have that $$\mathbb P(E)\leq r\binom{|\mathscr C|}{\left\lfloor\frac{4}{3}\ln m\right\rfloor}2e^{-\frac{1}{8}m^{1-2\delta}}\leq (r\ln m)^{\frac{4}{3}\ln m+1}2e^{-\frac{1}{8}m^{1-2\delta}}$$ $$\leq 2e^{\left(m^{\frac{1}{\rho}}+\ln\ln m\right)\left(\frac{4}{3}\ln m+1\right)-\frac{1}{8}m^{1-2\delta}}\leq 2e^{2m^{\frac{1}{\rho}}\ln m-\frac{1}{8}m^{1-2\delta}}$$ if $m$ is large enough. Consequently, if $\delta<\frac{1}{2}(1-\frac{1}{\rho})$ and $m$ is large enough, there exists a list assignment $L'$ satisfying the desired property. This is the assignment that we are going to use. Now let $f$ be a proper colouring of $D$. 
We claim that there exists a set of indices $I\subseteq [r]$ of size at least $\frac{3r}{4}$ such that $|f(V_i)|\leq 4c\ln^2(rm)$ for each $i\in I$. Indeed, if more than $\frac{r}{4}$ parts are coloured with more than $4c\ln^2(rm)$ colours each, then one of the colours appears on more than $\frac{cr\ln^2(rm)}{|\mathscr C|}\geq c\frac{\ln^{2}(rm)}{\ln m}\geq c\ln(rm)$ parts. By the choice of $D$, $f$ is not proper, a contradiction. For each $i\in[r]$ define the set $A_i=\{\gamma\in\mathscr C\mid |V_i\cap f^{-1}(\gamma)|\geq c\ln(rm)\}$. We claim that if $f$ is acceptable then $|A_i|>\frac{4}{3}\ln m$ for every $i\in I$. Indeed, otherwise, by the choice of the lists, at least $\frac{1}{2}m^{1-\delta}$ vertices of $V_i$ have been coloured with colours not from $A_i$. Thus one of these colours is used at least $\frac{\frac{1}{2}m^{1-\delta}}{4c\ln^2(rm)}$ times on $V_i$; since that colour does not belong to $A_i$, it is used fewer than $c\ln(rm)$ times on $V_i$, whence $$\frac{\frac{1}{2}m^{1-\delta}}{4c\ln^2(rm)}\leq c\ln(rm).$$ If $m$ is large enough, this implies that $$m^{1-\delta}\leq 8c^2\ln^3(rm)\leq 8c^2(m^{\frac{1}{\rho}}+\ln m)^3\leq 9c^2m^{\frac{3}{\rho}}.$$ If we further assume that $\delta<1-\frac{3}{\rho}$, we get a contradiction when $m$ is large. Therefore $|A_i|>\frac{4}{3}\ln m$ for every $i\in I$. Now, by the choice of $D$, the sets $A_1,...,A_r$ are mutually disjoint. But then $$|\mathscr C|\geq\sum_{i=1}^r |A_i|\geq\sum_{i\in I} |A_i|>\frac{4}{3}|I|\ln m\geq r\ln m\geq |\mathscr C|.$$ This contradiction shows that there is no acceptable proper colouring for the $k$-list assignment $L'$. ◻ We do not know what happens for other values of $m,r$. What is clear is that Theorem [Theorem 11](#teo:directed_multipartite){reference-type="ref" reference="teo:directed_multipartite"} is not valid in general. Indeed, if $m\leq\ln r$ then the following theorem implies that $\vec\chi_{\ell}(K_{m*r})\leq \vec\chi_{\ell}(K_{mr})\leq cr$ for some constant $c$. **Theorem 12**. **[@BHL2018]* Let $T$ be a tournament of order $n$.
Then $\vec\chi_{\ell}(T)\leq\frac{n}{\log_2 n}(1+\text o(1))$.* ## List dichromatic number of Kneser graphs Here we investigate the list dichromatic number of Kneser graphs. The list chromatic number of Kneser graphs was recently studied by Bulankina and Kupavskii. They proved the following two results. **Theorem 13**. **[@BulankinaKupavskii2022]* For any positive integers $n,k$ with $1\leq k\leq\frac{n}{2}$ we have $\chi_{\ell}(KG(n,k))\leq n\ln\frac{n}{k}+n$.* **Theorem 14**. **[@BulankinaKupavskii2022]* For every $\varepsilon > 0$, there exists a constant $c_{\varepsilon} > 0$ such that $\chi_{\ell}(KG(n,k))\geq c_{\varepsilon}n\ln n$ for all $n,k$ with $2\leq k\leq n^{\frac{1}{2}-\varepsilon}$.* However, good lower bounds for larger values of $k$ are still unknown. Clearly, the upper bound of Theorem [Theorem 13](#thm:BK1){reference-type="ref" reference="thm:BK1"} carries over to the list dichromatic number. The rest of the subsection is devoted to the proof of the directed analogue of Theorem [Theorem 14](#thm:BK2){reference-type="ref" reference="thm:BK2"}; that is, we show that the analogous lower bound holds for the list dichromatic number. The proof is achieved by a sequence of lemmas, which build on the argument of Bulankina and Kupavskii, as well as on ideas of Mohar and Wu [@MoharWu2016]. As in Theorem [Theorem 14](#thm:BK2){reference-type="ref" reference="thm:BK2"}, we do not know if the bound on $k$ can be extended to $2\leq k\leq n^{1-\varepsilon}$ for an arbitrarily small $\varepsilon>0$. **Theorem 15**. *For every $\varepsilon > 0$ there exists a constant $c_{\varepsilon} > 0$ such that $\vec\chi_{\ell}(KG(n,k))\geq c_{\varepsilon}n\ln n$ for all $n,k$ with $2\leq k\leq n^{\frac{1}{2}-\varepsilon}$.* Let $G=(V,E)$ be a graph, $\mathcal C$ a collection of subsets of $V$ and $s,t$ positive integers. We say that $\mathcal C$ is an *$(s,t)$-collection* of $V$ if (i) $|\mathcal C|\leq s$; (ii) $\forall C\in\mathcal C\ \ \, |C|\leq t$.
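To make conditions (i) and (ii) concrete, here is a small Python sketch (the function and variable names are ours, not from the paper); the rows-and-columns example anticipates the collections of rows and columns used later in this subsection.

```python
def is_st_collection(collection, s, t):
    """Check that `collection` is an (s, t)-collection of a vertex set:
    (i) it has at most s members, and (ii) each member has at most t vertices."""
    return len(collection) <= s and all(len(C) <= t for C in collection)


# Example: the 3 rows and 3 columns of a 3x3 grid of vertices
# form a (6, 3)-collection, but not a (5, 3)-collection.
rows = [{(i, j) for j in range(3)} for i in range(3)]
cols = [{(i, j) for i in range(3)} for j in range(3)]
assert is_st_collection(rows + cols, s=6, t=3)
assert not is_st_collection(rows + cols, s=5, t=3)
```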
Given a list assignment $L$, we denote by $U = \cup_{v \in V(G)} L(v)$ the total set of colours, referred to as the *palette*, and we set $u=|U|$. The partitions $P$ of $V$ considered in the sequel will always have $u$ (not necessarily non-empty) parts (we always implicitly or explicitly assume that the partitions are *acyclic*, i.e., that each part of $P$ induces an acyclic digraph). It will be convenient to regard as distinct any two partitions arising from different colourings. Thus, partitions will be thought of as indexed by $U$ (but for simplicity we will continue to call them just "partitions"). We say that a partition $P$ of $V$ is *covered* by an $(s,t)$-collection $\mathcal C$ (or that $\mathcal C$ is an *$(s,t)$-cover* of $P$) if each part determined by $P$ is contained in some $C\in\mathcal C$. Let $P=(P_1,...,P_u)$ be a partition. We say that a list assignment $L$ of $G$ *accepts* $P$ if, for every $i\in[u]$ and every $v \in P_i$, $i \in L(v)$. Otherwise, we say that $L$ *rejects* $P$. In what follows, $\ell_1$ and $\ell_2$ are integers. Define the function $g(\ell_1,\ell_2,n,s,t,u):=s^u e^{-\frac{n}{2}2^{-\frac{4\ell_2 tu}{(\ell_1-\ell_2)n}}}$. **Lemma 16**. *Let $G=(V,E)$ be an undirected graph of order $n$, $\mathcal C$ an $(s,t)$-collection of $V$ and $\mathcal P$ the family of partitions of $V$ covered by $\mathcal C$. Let $L_1$ be an $\ell_1$-list assignment for $G$ from a palette of $u$ colours and $L_2$ a random $\ell_2$-list assignment for $G$ where, for every $v\in V$, $L_2(v)$ is chosen independently and equiprobably among $\binom{L_1(v)}{\ell_2}$. If $4tu\leq (\ell_1-\ell_2)n$, then $$\mathbb P(L_2\text{ accepts some }P\in\mathcal P)< g(\ell_1,\ell_2,n,s,t,u)=s^u e^{-\frac{n}{2}2^{-\frac{4\ell_2 tu}{(\ell_1-\ell_2)n}}}.$$* *Proof.* Let $\mathrm C=(C_1,...,C_u)\in\mathcal C^u$ be any $u$-tuple of elements of $\mathcal C$. For every $v\in V$, let $r_{\mathrm C}(v)$ be the number of indices $i\in[u]$ such that $v\in C_i$.
Consider the subset of vertices $W_{\mathrm C}=\{v\in V\mid r_{\mathrm C}(v)\leq\frac{2tu}{n}\}$. We claim that $|W_{\mathrm C}|>\frac{1}{2}n$. Indeed, otherwise $$tu\geq\sum_{i=1}^u|C_i|=\sum_{v\in V}r_{\mathrm C}(v)\geq\sum_{v\in V\setminus W_{\mathrm C}}r_{\mathrm C}(v)>tu.$$ Moreover, for any $v\in W_{\mathrm C}$ the probability $p_{\mathrm C}(v)$ that $v\notin\bigcup_{i\in L_2(v)} C_i$ is at least $$\frac{\binom{\ell_1-r_{\mathrm C}(v)}{\ell_2}}{\binom{\ell_1}{\ell_2}}=\prod_{k=1}^{\ell_2}\frac{\ell_1-\ell_2-r_{\mathrm C}(v)+k}{\ell_1-\ell_2+k}\geq\left(1-\frac{r_{\mathrm C}(v)}{\ell_1-\ell_2}\right)^{\ell_2}$$ $$\geq\left(1-\frac{2tu}{(\ell_1-\ell_2)n}\right)^{\ell_2}\geq\left(\frac{1}{2}\right)^{\frac{4\ell_2 tu}{(\ell_1-\ell_2)n}}=:p,$$ using Remark [Remark 10](#lem:calculus){reference-type="ref" reference="lem:calculus"} and the inequality $4tu\leq (\ell_1-\ell_2)n$. Therefore, the probability that there is some $u$-tuple $\mathrm C=(C_1,...,C_u)$ of elements of $\mathcal C$ such that $v\in\bigcup_{i\in L_2(v)}C_i$ for every $v\in V$ is at most $$\sum_{\mathrm C\in{\mathcal C}^u}\,\prod_{v\in W_\mathrm C}(1-p_\mathrm C(v))<s^u\left(1-p\right)^{\frac{1}{2}n}\leq s^u e^{-\frac{1}{2}np}.$$ Since every $P\in\mathcal P$ is covered by $\mathcal C$, each of the $u$ parts of any such $P$ is contained in some $C\in\mathcal C$, so the result follows. ◻ Let $G,H$ be graphs. The *tensor product* $G\times H$ of $G$ and $H$ is the graph with vertex set $V(G)\times V(H)$ where two vertices $(v,x)$ and $(w,y)$ are adjacent if and only if $v,w$ are adjacent in $G$ and $x,y$ are adjacent in $H$. The tensor product of complete graphs $K_n\times K_n$ is going to play an auxiliary role; we denote it by $G_n$. Given $S\subseteq V(G_n)$, we call $\pi_1(S)$ and $\pi_2(S)$ the projections of $S$ to the first and second coordinates, respectively. The *rows* (resp. *columns*) of $S$ are the subsets of $S$ of the form $S\cap(\{i\}\times [n])$ (resp.
$S\cap([n]\times\{i\})$), where $i\in[n]$. Now we give some properties of $G_n$. **Lemma 17**. *For any $n\geq 2$, there is an orientation $D_n$ of $G_n$ (resp. of $K_2\times G_n$) such that, for every $S,T\subseteq V(G_n)$ satisfying* i) *$|S|,|T|\geq 30\ln n$ and* ii) *$\pi_i(S)\cap\pi_i(T)=\emptyset$ for $i\in\{1,2\}$,* *the subdigraph of $D_n$ induced by $S\cup T$ (resp. by $(\{1\}\times S)\cup(\{2\}\times T)$) has a directed cycle.* *Proof.* Since in $G_n[S\cup T]$ (resp. in ($K_2\times G_n)[(\{1\}\times S)\cup(\{2\}\times T)]$) all edges between $S$ and $T$ (resp. between $\{1\}\times S$ and $\{2\}\times T$) are present, the conclusion follows from Lemma [Lemma 5](#lem:dense_bipartite){reference-type="ref" reference="lem:dense_bipartite"}. ◻ We define an $(s_n,t_n)$-collection $\mathcal C_{G_n}$ of $V(G_n)$ as follows. Let $\mathcal L_{G_n}=\{\{i\}\times[n]\mid i\in[n]\}\cup\{[n]\times \{i\}\mid i\in[n]\}$ be the set of rows and columns of $V(G_n)$ and $$\mathcal Q_{G_n}=\begin{cases}\{A\times B\mid A,B\in\binom{[n]}{\lfloor 124\ln n\rfloor}\} &\text{if } 1\leq\lfloor 124\ln n\rfloor\leq n\\ \{V(G_n)\} &\text{otherwise}.\end{cases}$$ We set $\mathcal C_{G_n}=\{L\cup Q\mid L\in\mathcal L_{G_n},\ Q\in\mathcal Q_{G_n}\}$. Note that $|\mathcal C_{G_n}|\leq s_n:=\max\{1,2n\binom{n}{\lfloor 124\ln n\rfloor}^2\}$ and $|C|\leq t_n:=n+\lfloor 124\ln n\rfloor^2$ for any $C\in\mathcal C_{G_n}$. **Lemma 18**. *There is an orientation $D_n$ of $G_n$ such that $\mathcal C_{G_n}$ covers all acyclic partitions of $D_n$.* *Proof.* It can be assumed that $1\leq\lfloor 124\ln n\rfloor\leq n$. Let $D_n$ be the orientation from Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"}, and let $S$ be an acyclic set of $D_n$. Assume for a contradiction that $S$ is not contained in any $C\in\mathcal C_{G_n}$. Let $L$ be a largest set among the rows and columns of $S$, and let $S'=S\setminus L$.
Then $S'$ is not contained in any $Q\in\mathcal Q_{G_n}$, so $|\pi_i(S')|>124\ln n>90\ln n+2$ for some $i\in\{1,2\}$. Assume that $i=1$ (if $i=2$, the argument below is repeated with rows instead of columns). Let $L'$ be the largest column of $S'$. We distinguish three cases. We will show that, in each case, we can find two sets in $S$ satisfying the hypotheses of Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"}. This will yield a contradiction since $S$ is acyclic. **Case 1.** $|L'|>60\ln n+2$. Recall that $|L|\geq |L'|$. Therefore we can find a subset of $L$ and a subset of $L'$ satisfying the hypotheses of Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"}. **Case 2.** $60\ln n+2\geq |L'|\geq 30\ln n$. Since $|\pi_i(S')|>90\ln n+2$, we can find a subset $T$ of $S'$ such that $T$ and $L'$ satisfy the hypotheses of Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"}. **Case 3.** $|L'|\leq 30\ln n$. Let $\{T_1,...,T_k\}$ be a minimal set of columns of $S'$ satisfying $|\pi_i(\bigcup^k_{j=1} T_j)|\geq 30\ln n$. By minimality, $|\pi_i(\bigcup^k_{j=1} T_j)|\leq 60\ln n$. Hence, as in Case 2, we can find a subset $T$ of $S'$ such that $T$ and $\bigcup^k_{j=1} T_j$ satisfy the hypotheses of Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"}. In any case, Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"} yields a cycle in $S$, the desired contradiction. ◻ We are now in a position to determine the order of $\vec\chi_{\ell}(KG(n,k))$ when $k$ is bounded by a constant. **Lemma 19**. *There is a constant $c\in\mathbb R^+$ such that $\vec{\chi}_\ell(KG(n,k))\geq c\frac{n}{k}\ln\frac{n}{k}$ for every $2\leq k\leq\frac{n}{2}$.* *Proof.* First note that $G_{\lfloor\frac{n}{k}\rfloor}$ is isomorphic to a subgraph of $KG(n,k)$.
Indeed, if we take $\left\lfloor\frac{n}{k}\right\rfloor+1$ pairwise disjoint subsets $I,J_1,...,J_{\lfloor\frac{n}{k}\rfloor}\subseteq [n]$ with $|I|=\left\lfloor\frac{n}{k}\right\rfloor$ and $|J_1|=...=|J_{\lfloor\frac{n}{k}\rfloor}|=k-1$, then the set of vertices $S=\{\{i\}\cup J_j\mid i\in I,\ 1\leq j\leq\left\lfloor\frac{n}{k}\right\rfloor\}\subseteq\binom{[n]}{k}$ induces a copy of $G_{\lfloor\frac{n}{k}\rfloor}$. Thus it suffices to show that $\vec\chi_{\ell}(G_{\tilde n})\geq c\tilde n\ln\tilde n$ for some $c > 0$. Assume that $\tilde n$ is large enough. Given $G_{\tilde n}$, consider the orientation $D_{\tilde n}$ from Lemma [Lemma 18](#lem:rook_cover){reference-type="ref" reference="lem:rook_cover"}; we know that $\mathcal C_{G_{\tilde n}}$ covers all acyclic partitions of $D_{\tilde n}$. Let $u_{\tilde n}=\ell_{1,\tilde n}=\lfloor\tilde n\ln\tilde n\rfloor$ and $\ell_{2,\tilde n}=\lfloor cu_{\tilde n}\rfloor$, where $c < 1$ is a positive constant to be defined later. Let $L_{1,\tilde n}$ be the canonical $\ell_{1,\tilde n}$-list assignment to $D_{\tilde n}$ (i.e. $L_{1,\tilde n}(v)=[u_{\tilde n}]$ for every $v\in V(D_{\tilde n})$). It is clear that $4t_{\tilde n} u_{\tilde n}\leq(\ell_{1,\tilde n}-\ell_{2,\tilde n})\tilde n^2$ and $$\ln g(\ell_{1,\tilde n},\ell_{2,\tilde n},{\tilde n}^2,s_{\tilde n},t_{\tilde n},u_{\tilde n})\leq 330\tilde n\ln^3 \tilde n-\frac{1}{2}{\tilde n}^{2-\frac{8c}{(1-c)\log_2 e}}<0$$ if $\tilde n$ is large enough and $c$ has been chosen so that $\frac{8c}{(1-c)\log_2 e}<1$. Now, by Lemma [Lemma 16](#lem:BK_general){reference-type="ref" reference="lem:BK_general"}, $\vec\chi_{\ell}(D_{\tilde n})>\ell_{2,\tilde n}$. ◻ The previous lemma handles the case when $k$ is small. For larger values of $k$, we need to modify the definition of a cover. Let $H=(V,E)$ be a graph, $\mathcal C$ an $(s,t)$-collection of $V$ and $\lambda\in\mathbb R^+$. Consider the graph $K_2\times H$ and one of its orientations $D$. 
We say that an acyclic partition $P$ of $V(D)$ is *semicovered* by the pair $(\mathcal C,\lambda)$ (or that $(\mathcal C,\lambda)$ is an *$(s,t)$-semicover* of $P$) if for every acyclic set $S=(\{1\}\times S_1)\cup(\{2\}\times S_2)\in P$ either $S_1\subseteq C_1$ and $S_2\subseteq C_2$ for some $C_1,C_2\in\mathcal C$, or $S_i\subseteq C$ and $|S_i|<\lambda$ for some $i\in\{1,2\}$ and some $C\in\mathcal C$. **Lemma 20**. *Let $G,H$ be graphs. Let $m_G$ be the size of $G$ and $n_H$ the order of $H$. Let $D$ be an orientation of $K_2\times H$ and $(\mathcal C,\lambda)$ an $(s,t)$-semicover of all acyclic partitions of $D$. Let $\ell_1,\ell_2$ be positive integers such that $8t\ell_1\leq (\ell_1-\ell_2)n_H$, $m_G\, g^2(\ell_1,\ell_2,n_H,s,t,2\ell_1)<1$ and $\lambda\ell_1\leq n_H$. If $\chi_{\ell}(G)>\ell_1$, then $\vec\chi_{\ell}(G\times H)>\ell_2$.* *Proof.* Suppose that $\vec\chi_{\ell}(G\times H)\leq \ell_2$. Let $L_1$ be any $\ell_1$-list assignment for $G$. Consider a random $\ell_2$-list assignment $L_2$ for $G\times H$, where, for each $v\in V(G)$ and each $x\in\{v\}\times V(H)$, $L_2(x)$ is chosen independently and equiprobably among $\binom{L_1(v)}{\ell_2}$. For each edge $\{v,w\}$ of $G$, let $\mathcal C_{\{v,w\}}=\{(\{v\}\times C_1)\cup(\{w\}\times C_2)\mid C_1,C_2\in\mathcal C\}$. We orient the subgraph induced by $\{v,w\}\times V(H)$ according to $D$ (in either of the two possible ways). This results in an orientation of $G\times H$ that we will call $\overrightarrow{G\times H}$. By applying Lemma [Lemma 16](#lem:BK_general){reference-type="ref" reference="lem:BK_general"} to $(G\times H)[\{v,w\}\times V(H)]$ with a palette of size $u=2\ell_1$, we see that the probability that ${L_2}\raisebox{-.5ex}{$|$}_{\{v,w\}\times V(H)}$ accepts some partition covered by $\mathcal C_{\{v,w\}}$ is smaller than $g(\ell_1,\ell_2,2n_H,s^2,2t,2\ell_1)=g^2(\ell_1,\ell_2,n_H,s,t,2\ell_1)$.
Therefore, the probability that this happens for some edge $\{v,w\}$ of $G$ is less than $m_G\, g^2(\ell_1,\ell_2,n_H,s,t,2\ell_1)< 1$. Thus we can find an $\ell_2$-list assignment $L'_2$ for $G\times H$ such that, for every $\{v,w\}\in E(G)$, ${L'_2}\raisebox{-.5ex}{$|$}_{\{v,w\}\times V(H)}$ rejects all partitions of $\{v,w\}\times V(H)$ covered by $\mathcal C_{\{v,w\}}$. Since $\vec\chi_{\ell}(G\times H)\leq\ell_2$, $\overrightarrow{G\times H}$ has a colouring $f'_2$ which is accepted by $L'_2$ and produces no monochromatic cycles. Let us define a colouring $f_1$ for $G$ as $$f_1(v)=\begin{cases}\gamma_v\!\!\! & \text{if }\exists\gamma\ \, \forall C\in\mathcal C\ \ {(f'_2)}^{-1}(\gamma)\cap(\{v\}\times V(H))\nsubseteq\{v\}\times C \text{, where $\gamma_v$ is any such $\gamma$} \\ \gamma^+_v\!\!\! & \text{otherwise, where }\gamma^+_v \text{ is any }\gamma\text{ maximizing } |{(f'_2)}^{-1}(\gamma)\cap(\{v\}\times V(H))|.\end{cases}$$ Note that $f_1(v) \in L_1(v)$ for each $v \in V(G)$. We will show that $f_1$ is a proper colouring of $G$, and this contradiction will finish the proof. Let $\{v,w\}$ be an edge of $G$, and suppose for a contradiction that $f_1(v)=f_1(w)$. Since ${L'_2}\raisebox{-.5ex}{$|$}_{\{v,w\}\times V(H)}$ rejects all partitions of $\{v,w\}\times V(H)$ covered by $\mathcal C_{\{v,w\}}$, either $f_1(v)=\gamma_v$ or $f_1(w)=\gamma_w$. Without loss of generality, we can assume that $f_1(w)=\gamma_w$. Since $f'_2$ produces no monochromatic cycles, if $f_1(v)=\gamma_v$ then $(\mathcal C,\lambda)$ does not semicover all acyclic partitions of $D$, a contradiction. On the other hand, if $f_1(v)=\gamma_v^+$ then $|{(f'_2)}^{-1}(\gamma_v^+)\cap(\{v\}\times V(H))|\geq\frac{n_H}{\ell_1}\geq\lambda$, also contradicting that $(\mathcal C,\lambda)$ semicovers all acyclic partitions of $D$. Therefore, we have found a proper colouring $f_1$ accepted by the $\ell_1$-list assignment $L_1$.
Since $L_1$ was arbitrary, we conclude that $\chi_{\ell}(G)\leq\ell_1$, ending the proof. ◻ **Lemma 21**. *For every $n$ there is an orientation $D_n$ of $K_2\times G_n$ such that all acyclic partitions of $D_n$ are semicovered by $(\mathcal C_{G_n},2^{13}\ln^2 n)$.* *Proof.* The proof is similar to that of Lemma [Lemma 18](#lem:rook_cover){reference-type="ref" reference="lem:rook_cover"}. Let $D_n$ be the orientation of $K_2\times G_n$ from Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"}. Assume that $1\leq \lfloor 124\ln n\rfloor\leq n$; otherwise the lemma is trivial. Let $S=(\{1\}\times S_1)\cup(\{2\}\times S_2)$ be an acyclic set in $D_n$. Assume that $S_1$ is not contained in any $C\in\mathcal C_{G_n}$ (if both $S_1$ and $S_2$ are contained in sets of $\mathcal C_{G_n}$ there is nothing to prove, and the roles of $S_1$ and $S_2$ are symmetric). As in the proof of Lemma [Lemma 18](#lem:rook_cover){reference-type="ref" reference="lem:rook_cover"} we can find in $S_1$ two sets $L_1,L_1'\subseteq S_1$ satisfying the hypotheses of Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"}. We can assume that $|L_1|=|L'_1|=\lceil 30\ln n\rceil$. We argue by contradiction. Suppose that $|S_2|\geq 2^{13}\ln^2 n$ or that $S_2$ is not contained in any $C\in\mathcal C_{G_n}$. Let $T=\{(j_1,j_2)\in S_2\mid j_1\notin\pi_1(L_1\cup L'_1)\text{ or }j_2\notin\pi_2(L_1\cup L'_1)\}$. We claim that $|T|\geq 60\ln n$. Let us consider two cases. **Case 1.** $|S_2|\geq 2^{13}\ln^2 n$. We have that $|\pi_1(L_1\cup L'_1)|,|\pi_2(L_1\cup L'_1)|\leq 2\lceil 30\ln n\rceil\leq 64\ln n$. Therefore $|T|\geq |S_2|-64^2\ln^2 n\geq 60\ln n$. **Case 2.** $S_2\nsubseteq C$ for any $C\in\mathcal C_{G_n}$. In this case, $S_2\nsubseteq Q$ for any $Q\in\mathcal Q_{G_n}$. Therefore, $|\pi_i(S_2)|>124\ln n$ for some $i\in\{1,2\}$. Since $|\pi_i(L_1\cup L'_1)|\leq 2\lceil 30\ln n\rceil$, we have that $|T|\geq 124\ln n-2\lceil 30\ln n\rceil\geq 60\ln n$. Hence $|T|\geq 60\ln n$ in any case.
Let $T_1=\{(j_1,j_2)\in T\mid j_1\notin\pi_1(L_1),\,j_2\notin\pi_2(L_1)\}$ and $T'_1=T\setminus T_1$. Note that $T'_1=\{(j_1,j_2)\in T\mid j_1\notin\pi_1(L'_1),\,j_2\notin\pi_2(L'_1)\}$ by the definition of $T$. Applying Lemma [Lemma 17](#lem:bipartite_in_rook){reference-type="ref" reference="lem:bipartite_in_rook"} yields the desired contradiction. Indeed, since $|T|\geq 60\ln n$, either $|T_1|\geq 30\ln n$, in which case $(\{1\}\times L_1)\cup(\{2\}\times T_1)$ has a directed cycle, or $|T'_1|\geq 30\ln n$, in which case $(\{1\}\times L'_1)\cup(\{2\}\times T'_1)$ has a directed cycle. ◻ *Proof of Theorem [Theorem 15](#thm:list_dichromatic_Kneser){reference-type="ref" reference="thm:list_dichromatic_Kneser"}.* We fix $\varepsilon$ and assume that $n$ is large enough. If $k$ is bounded by a constant, then Lemma [Lemma 19](#lem:k_bounded){reference-type="ref" reference="lem:k_bounded"} does the job. Therefore we can assume that $k\geq 4$. Note that $KG(\lfloor\frac{n}{2}\rfloor,k-2)\times G_{\lfloor\frac{n}{4}\rfloor}$ is a subgraph of $KG(n,k)$. Indeed, consider, for any positive integers $n_1,n_2,k_1,k_2$ satisfying $2k_1\leq n_1$, $2k_2\leq n_2$, $n_1+n_2\leq n$ and $k_1+k_2=k$, the set of vertices in $KG(n,k)$ of the form $$S=\left\{\{i_1,...,i_{k_1},j_1,...,j_{k_2}\}\ \left|\ \{i_1,...,i_{k_1}\}\in\binom{[n_1]}{k_1},\ \{j_1,...,j_{k_2}\}\in\binom{n_1+[n_2]}{k_2}\right.\right\};$$ the subgraph induced by $S$ is isomorphic to $KG(n_1,k_1)\times KG(n_2,k_2)$. By the proof of Lemma [Lemma 19](#lem:k_bounded){reference-type="ref" reference="lem:k_bounded"}, if $k_2\geq 2$ then $G_{\lfloor\frac{n_2}{k_2}\rfloor}$ is isomorphic to a subgraph of $KG(n_2,k_2)$, so we can just take $n_1=\lfloor\frac{n}{2}\rfloor$, $n_2=\lceil\frac{n}{2}\rceil$, $k_1=k-2$ and $k_2=2$. Note that $KG(\lfloor\frac{n}{2}\rfloor,k-2)$ has $m_n=\frac{1}{2}\binom{\lfloor\frac{n}{2}\rfloor}{k-2}\binom{\lfloor\frac{n}{2}\rfloor-k+2}{k-2}$ edges.
Provided that $n$ is sufficiently large, we can find an $\varepsilon'\in(0,1)$ such that $n^{\frac{1}{2}-\varepsilon}\leq {\lfloor\frac{n}{2}\rfloor}^{\frac{1}{2}-\varepsilon'}$. Let $c_{\varepsilon'}$ be the corresponding constant from Theorem [Theorem 14](#thm:BK2){reference-type="ref" reference="thm:BK2"}; let $c'_{\varepsilon}\in(0,1)$ be a constant to be specified later, $\ell_{1,n}=\lfloor c_{\varepsilon'}\lfloor\frac{n}{2}\rfloor\ln\lfloor\frac{n}{2}\rfloor\rfloor-1$ and $\ell_{2,n}=\lfloor c'_{\varepsilon}\ell_{1,n}\rfloor$. We apply Lemma [Lemma 20](#lem:MoharWu_general){reference-type="ref" reference="lem:MoharWu_general"} with the $(s_{\lfloor\frac{n}{4}\rfloor},t_{\lfloor\frac{n}{4}\rfloor})$-semicover $(\mathcal C_{G_{\lfloor\frac{n}{4}\rfloor}},2^{13}\ln^2\lfloor\frac{n}{4}\rfloor)$ of the family of acyclic partitions of $D_{\lfloor\frac{n}{4}\rfloor}$, the orientation of $K_2\times G_{\lfloor\frac{n}{4}\rfloor}$ from Lemma [Lemma 21](#lem:rook_cover2){reference-type="ref" reference="lem:rook_cover2"}. Clearly $$8t_{\lfloor\frac{n}{4}\rfloor}\ell_{1,n}\leq (\ell_{1,n}-\ell_{2,n})\left\lfloor\frac{n}{4}\right\rfloor^2,$$ $$2^{13}\ln^2\left\lfloor\frac{n}{4}\right\rfloor\, \ell_{1,n}\leq \left\lfloor\frac{n}{4}\right\rfloor^2\text{ and}$$ $$\ln m_n+2\ln g(\ell_{1,n},\ell_{2,n},\left\lfloor\frac{n}{4}\right\rfloor^2,s_{\lfloor\frac{n}{4}\rfloor},t_{\lfloor\frac{n}{4}\rfloor},2\ell_{1,n})\leq 2k\ln n+660c_{\varepsilon'} n\ln^3 n-\frac{1}{25}n^{2-\frac{50c_{\varepsilon'}c'_{\varepsilon}}{(1-c'_{\varepsilon})\log_2 e}}< 0$$ if $n$ is large enough and $c'_{\varepsilon}$ has been chosen so that $\frac{50c_{\varepsilon'}c'_{\varepsilon}}{(1-c'_{\varepsilon})\log_2 e}<1$.
Thus Lemma [Lemma 20](#lem:MoharWu_general){reference-type="ref" reference="lem:MoharWu_general"} and Theorem [Theorem 14](#thm:BK2){reference-type="ref" reference="thm:BK2"} imply that $\vec\chi_{\ell}(KG(\lfloor\frac{n}{2}\rfloor,k-2)\times G_{\lfloor\frac{n}{4}\rfloor})>\ell_{2,n}$ if $n$ is large enough. Since this graph is a subgraph of $KG(n,k)$, it follows that $\vec\chi_{\ell}(KG(n,k))>\ell_{2,n}\geq c_{\varepsilon}n\ln n$ for a suitable constant $c_{\varepsilon}>0$. ◻ # Sabidussi's theorem Tensor products are one of the leitmotifs of Section [3](#sec:lists){reference-type="ref" reference="sec:lists"}. Let us now take a look at another type of graph product. Let $G,H$ be graphs (resp. digraphs). The *Cartesian product* of $G$ and $H$ is the graph (resp. digraph) $G\square H$ with vertex set $V(G)\times V(H)$ where there is an edge between $(u,x)$ and $(v,y)$ (resp. an arc from $(u,x)$ to $(v,y))$ if and only if either $u=v$ and $\{x,y\}\in E(H)$ (resp. and $(x,y)\in A(H)$), or $x=y$ and $\{u,v\}\in E(G)$ (resp. and $(u,v)\in A(G)$). A well-known theorem of Sabidussi [@Sabidussi1957] states that for any two graphs $G$ and $H$ the chromatic number of their Cartesian product is $\chi(G\square H)=\max\{\chi(G),\chi(H)\}$. His proof can be adapted to show an analogous result for digraphs. **Theorem 22**. *Let $G$ and $H$ be digraphs. Then $\vec\chi(G\square H)=\max\{\vec\chi(G),\vec\chi(H)\}$.* *Proof.* Let $N=\max\{\vec\chi(G),\vec\chi(H)\}$. Note that both $G$ and $H$ are isomorphic to a subdigraph of $G\square H$. Therefore, $\vec\chi(G\square H)\geq N$. Now, let $f_G,f_H$ be $N$-colourings of $G,H$. Let $f:V(G\square H)\rightarrow [N]$ be the $N$-colouring of $G\square H$ defined by $f(g,h)\equiv f_G(g)+f_H(h)\mod N$. We claim that $f$ is a proper colouring of $G\square H$. We argue by contradiction. Let $(g_1,h_1),...,$ $(g_1,h_{s_1}),(g_2,h_{s_1+1}),...,(g_2,h_{s_2}),...,$ $(g_r,h_{s_{r-1}+1}),...,(g_r,h_{s_r})$ be the successive vertices of a monochromatic cycle, where $g_i\neq g_{i+1}$ and $h_{s_i}=h_{s_i+1}$ for $i\in[r-1]$, and $g_{r}\neq g_1$ and $h_{s_r}=h_1$.
The fact that $f$ is constant on these vertices implies that $f_H(h_1)=...=f_H(h_{s_1})=f_H(h_{s_1+1})=...=f_H(h_{s_r})$. If not all of $h_1,...,h_{s_r}$ are identical, we have found a monochromatic cycle in $H$. If they are all identical, then $r\geq 2$, and we can similarly see that $f_G(g_1)=...=f_G(g_r)$, yielding a monochromatic cycle in $G$. Therefore, $\vec\chi(G\square H)\leq N$. ◻ Hedetniemi's conjecture proposes a similar statement for tensor products: given any two graphs $G$ and $H$, the chromatic number of their tensor product is (conjectured to be) $\chi(G\times H)=\min\{\chi(G),\chi(H)\}$. This was refuted by Shitov [@Shitov2019]; at the time, the conjecture had been open for more than five decades. Before the first counterexamples were known, its directed version was formulated. Given two digraphs $G$ and $H$, their tensor product $G\times H$ is defined to be the digraph with vertex set $V(G)\times V(H)$ where there is an arc from $(g_1,h_1)$ to $(g_2,h_2)$ if and only if $(g_1,g_2)$ is an arc of $G$ and $(h_1,h_2)$ is an arc of $H$. **Conjecture 23**. **[@Harutyunyan2011]* Let $G$ and $H$ be digraphs. Then $\vec\chi(G\times H)=\min\{\vec\chi(G),\vec\chi(H)\}$.* Note that counterexamples to Conjecture [Conjecture 23](#conj:directed_Hedetniemi){reference-type="ref" reference="conj:directed_Hedetniemi"} can be obtained by taking any counterexample to Hedetniemi's conjecture and replacing all its edges with bidirected arcs. But what happens if $G,H$ are oriented graphs? In [@Harutyunyan2011] it was proved that Conjecture [Conjecture 23](#conj:directed_Hedetniemi){reference-type="ref" reference="conj:directed_Hedetniemi"} holds when $\min\{\vec\chi(G),\vec\chi(H)\}\leq 2$. On the positive side, Zhu proved the fractional version of Hedetniemi's conjecture [@Zhu11]. We wonder if this result can be generalized to the dichromatic number of digraphs, if the fractional dichromatic number of a digraph is defined in the natural way.
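The colouring used in the proof of Theorem 22 is easy to experiment with. The following Python sketch (all names are ours) builds the Cartesian product of two digraphs, combines proper dicolourings $f_G,f_H$ into $f(g,h)\equiv f_G(g)+f_H(h)\bmod N$, and checks on a small example that no colour class contains a directed cycle.

```python
from itertools import product

def cartesian_product(verts_g, arcs_g, verts_h, arcs_h):
    """Arc set of the Cartesian product of two digraphs."""
    arcs = [((u, x), (v, x)) for (u, v) in arcs_g for x in verts_h]
    arcs += [((u, x), (u, y)) for (x, y) in arcs_h for u in verts_g]
    return arcs

def has_directed_cycle(verts, arcs):
    """Iterative DFS cycle detection (GREY = vertex on the current path)."""
    adj = {v: [] for v in verts}
    for (u, v) in arcs:
        adj[u].append(v)
    WHITE, GREY, BLACK = 0, 1, 2
    state = {v: WHITE for v in verts}
    for root in verts:
        if state[root] != WHITE:
            continue
        state[root] = GREY
        stack = [(root, iter(adj[root]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if state[w] == GREY:
                    return True  # back arc: directed cycle found
                if state[w] == WHITE:
                    state[w] = GREY
                    stack.append((w, iter(adj[w])))
                    break
            else:
                state[v] = BLACK
                stack.pop()
    return False

# A directed triangle has dichromatic number 2; f is a proper 2-dicolouring
# (colour classes {0, 2} and {1} induce no directed cycle).
verts, tri = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
f, N = {0: 0, 1: 1, 2: 0}, 2
pv = list(product(verts, verts))
pa = cartesian_product(verts, tri, verts, tri)
colour = {(g, h): (f[g] + f[h]) % N for (g, h) in pv}
for c in range(N):
    cls = [v for v in pv if colour[v] == c]
    cls_arcs = [(u, v) for (u, v) in pa if colour[u] == c == colour[v]]
    assert not has_directed_cycle(cls, cls_arcs)  # no monochromatic cycle
```

The combined colouring passes the check, as Theorem 22 guarantees; replacing `colour` by a constant map makes the assertion fail, since the product of two directed triangles certainly contains directed cycles.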
**Acknowledgements.** We thank Stéphane Bessy and Stéphan Thomassé for many previous conversations around Conjecture [Conjecture 1](#conj:E-NL){reference-type="ref" reference="conj:E-NL"}, which motivated us to start this paper. We also wish to thank Denis Cornaz for exciting and encouraging discussions throughout the writing of this manuscript. The authors are supported by ANR grant 21-CE48-0012 DAGDigDec (DAGs and Digraph Decompositions). GPS was also partially supported by the Spanish Ministerio de Ciencia e Innovación through grant PID2019-194844GB-I00. [^1]: `ararat.harutyunyan@lamsade.dauphine.fr` [^2]: `gil.puig-i-surroca@dauphine.eu`
--- abstract: | In this article we discuss how *abstraction boundaries* can help tame complexity in mathematical research, with the help of an interactive theorem prover. While many of the ideas we present here have been used implicitly by mathematicians for some time, we argue that the use of an interactive theorem prover introduces additional qualitative benefits in the implementation of these ideas. address: - Albert--Ludwigs-Universität Freiburg, Mathematisches Institut, Ernst-Zermelo-Straße 1, 79104 Freiburg im Breisgau, Deutschland - Mathematical and Statistical Sciences, University of Alberta, 632 Central Academic Building, Edmonton AB T6G 2G1, Canada author: - Johan Commelin - Adam Topaz bibliography: - refs.bib title: Abstraction boundaries and spec driven development in pure mathematics --- # Introduction Modern research in pure mathematics has a clear tendency toward increasing complexity. New striking mathematical results may involve complex proof techniques, deep mastery of a subfield within mathematics, or nontrivial input from several areas of mathematics. All of these play a role in increasing the inherent complexity of a piece of modern mathematics. In many respects, such increasing complexity is an indication of *progress* in pure mathematics. However, with such complexity comes a significant increase in *cognitive load* for both readers and authors. It is now routine to see significant new papers in pure mathematics with one hundred pages or more. Similarly, the refereeing process of a complex mathematical result regularly takes multiple years. This state of affairs is clearly not sustainable. Mathematicians have long been using the concept of abstraction in order to tame such complexities. For example, they often break down proofs into more manageable steps, such as lemmas, propositions, etc. They also frequently introduce definitions that capture a curated selection of properties satisfied by the objects they wish to study. 
More generally, it is often helpful to create "abstraction boundaries", which separate the use of a mathematical object from its actual implementation. Staying within the boundaries of a given abstraction can help reduce cognitive load to some extent, but inherent complexities may nevertheless remain. In this article we discuss how the use of abstraction boundaries, in conjunction with an *interactive theorem prover* (ITP), can facilitate complex interdisciplinary mathematical collaboration. We argue that the implementation of these ideas within an ITP can have significant qualitative benefits in further reducing cognitive load for both authors and readers. The application of this methodology was a key ingredient in the success of the *liquid tensor experiment* (LTE), and we use examples from this project to illustrate our primary points. Here is a brief outline of the paper. In §2, we review some aspects of condensed mathematics and LTE, focusing primarily on the topics which are relevant for this paper. In §3, we give a more precise description of what we mean by an *abstraction boundary* along with a discussion on sources of complexity within mathematics. In §4, we highlight some of the qualitative benefits offered by the use of abstraction boundaries alongside an ITP, while illustrating *spec driven development* with several examples related to LTE. The issue of aligning formal and informal mathematics using abductive reasoning is addressed in §5. Finally, we summarize and conclude our discussion in §6. ## Acknowledgements {#acknowledgements .unnumbered} We would like to warmly thank all of the contributors to the liquid tensor experiment for the amazing collaboration and excellent discussions. We thank the Lean community at large for providing the technological and social environments which made this work possible. Many thanks to the referees of this article and the editors of this volume for their comments which helped improve this paper in various ways.
Finally, we would like to thank A. Venkatesh for proposing such an exciting topic for his Fields Medal Symposium. # An overview of the liquid tensor experiment The *liquid tensor experiment* (LTE) began with a challenge posed by P. Scholze [@zbMATH07566886], whose ultimate goal was to formally verify the proof of the *main theorem of liquid vector spaces*: **Theorem 1** (Clausen--Scholze). *Let $0 < p' < p \le 1$ be real numbers and let $S$ be a profinite set. Let $V$ be a $p$-Banach space, considered as a condensed abelian group. Write $\mathcal{M}_{p'}(S)$ for the space of real-valued $p'$-measures on $S$, also considered as a condensed abelian group. Then $\operatorname{Ext}^{i}_{\operatorname{Cond}(\mathrm{Ab})}(\mathcal{M}_{p'}(S),V) = 0$ for all $i \geq 1$.* This theorem is of utmost foundational importance; it is the technical core of the assertion that the real numbers, endowed with the collection of so-called *$p$-liquid vector spaces*, forms an *analytic ring* in the sense of Clausen--Scholze [@condensedpdf; @analyticpdf; @complexpdf]. In [@zbMATH07566886], Scholze mentions that this might be his "most important theorem to date." In this section, we will only give a brief overview and motivation for the objects involved in this theorem along with a general summary of the experiment itself, as will be relevant for the remainder of this paper. We refer the reader to [@zbMATH07566886] and to [@analyticpdf Theorem 9.1] for more details. ## Condensed and liquid mathematics Condensed mathematics is a new foundational system which replaces topological spaces with objects called *condensed sets*, which are set-valued sheaves on the category $\mathrm{Profinite}$ of profinite sets with respect to the coherent Grothendieck topology. The power of this new theory shines particularly in algebraic contexts, whereas the classical approaches using point-set topology have several defects. 
For example, the category of topological abelian groups fails (quite badly) to be an abelian category, while the category of condensed abelian groups, defined either as sheaves of abelian groups on $\mathrm{Profinite}$ or equivalently as the category of abelian group objects in condensed sets, is an exceptionally nice abelian category. Condensed mathematics and its applications are still under development by Clausen--Scholze [@condensedpdf; @analyticpdf; @complexpdf] et al., while a similar approach, albeit with a different overall focus, was introduced by Barwick--Haine in [@barwick2019pyknotic] under the name *pyknotic sets*.[^1]

Any topological space (resp. group, ring, module, etc.) can be considered as a condensed set (resp. group, ring, module, etc.) via a Yoneda-like construction. One may thus speak about condensed modules over condensed rings, which are the condensed incarnation of topological modules over topological rings. But in order to capture the building blocks of *analytic geometry*, one needs a notion of *completeness* for such modules that behaves well algebraically. This is exactly what the notion of an *analytic ring* entails: it is a condensed ring with a notion of "completeness" for condensed modules which behaves well in a suitable *derived* sense.

Examples of analytic rings include $\mathbb{Z}^{\blacksquare}$, whose underlying condensed ring is $\mathbb{Z}$ endowed with the discrete topology, where the notion of completeness for modules is modeled on $\mathbb{Z}[S]^{\blacksquare} := \varprojlim_{i} \mathbb{Z}[S_{i}]$ for profinite sets $S = \varprojlim_{i} S_{i}$ expressed as cofiltered limits of finite sets $S_{i}$. A similar construction makes $\mathbb{Z}_{p}$, with the $p$-adic topology, into an analytic ring denoted $\mathbb{Z}_{p}^{\blacksquare}$. The field $\mathbb{Q}_{p}$ can be considered as an analytic ring as well, essentially by base-changing from $\mathbb{Z}_{p}$.
The prototypical complete $\mathbb{Q}_{p}$-module in the last case turns out to be $\mathcal{M}(S,\mathbb{Q}_{p})$, the space of (bounded) $\mathbb{Q}_{p}$-valued measures on a profinite set $S$. However, the analogous assertion *fails* if one replaces $\mathbb{Q}_{p}$ with $\mathbb{R}$. The theory of *$p$-liquid vector spaces* solves this issue by replacing the prototype $\mathcal{M}(S,\mathbb{R})$ by a *smaller* space $\mathcal{M}_{< p}(S,\mathbb{R})$. Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"} eventually translates to the assertion that $\mathbb{R}$, endowed with the collection of such $p$-liquid vector spaces, forms an analytic ring, thus providing a family of analytic rings, parameterized by $0 < p \le 1$, whose underlying condensed ring is $\mathbb{R}$.

This result shows that real and complex geometry can be studied in the context of analytic spaces alongside objects from algebraic and nonarchimedean geometry; see [@analyticpdf; @complexpdf]. At the same time, classical objects from functional analysis, such as Banach spaces (or more generally, $p$-Banach spaces for $0 < p \le 1$), can fit in this framework as well, and thus many facets of functional analysis can now be studied *algebraically*, with the necessary analysis essentially encapsulated in Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"}. This work has already resulted in generalizations and new *algebraic* proofs of classical results from algebraic geometry which were originally analytic in nature, such as Serre's GAGA. Many additional applications will surely come in the future.

At this point, it should be quite clear that the proof of Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"} involves significant amounts of category theory, topos theory, and homological algebra, as well as analysis and topology.
Surprisingly, a key step in the proof (see [@386796; @Scholze-half-a-year]) brings combinatorics and (discrete) convex geometry into the picture as well.

## Verifying the theorem on a computer

The liquid tensor experiment (LTE) was completed on July 14, 2022 [@LTE-completion] with a complete formal verification of Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"} using the Lean proof assistant [@10.1007/978-3-319-21401-6_26] and its mathematics library `Mathlib` [@10.1145/3372885.3373824]. The formalization project, which was led by the authors of the present paper, was a collaboration involving about a dozen mathematicians and several computer scientists. A detailed exposition of the project itself will appear elsewhere.

To conclude this section, we would like to highlight again that the proof of Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"} involves many disparate subfields of pure mathematics, and at the same time, many relatively complicated mathematical objects. This significant source of complexity in the proof would have posed serious challenges in a traditional refereeing process. Indeed, this is one of the main reasons Scholze sought a formalization of the proof of Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"}, as explained in [@zbMATH07566886]. In [@Scholze-half-a-year], he writes:

> The Lean Proof Assistant was really that: An assistant in navigating through the thick jungle that this proof is. Really, one key problem I had when I was trying to find this proof was that I was essentially unable to keep all the objects in my "RAM", and I think the same problem occurs when trying to read the proof.

The Lean ITP helped wrangle this complex proof, and, as we explain below, abstraction boundaries played a key role in the process.
# Managing complexity with abstraction boundaries {#section:sources-of-complexity}

Complexity in mathematical research projects can come from various sources. We distinguish between two kinds. The first is *inherent complexity*, by which we mean the raw complexity of the Platonic ideal of a piece of mathematics. While this notion is hard to quantify, an approximate measure of inherent complexity could be the amount of time it takes a generic mathematician to understand the proof or definition in question starting from scratch. The second form of complexity we call *accidental complexity*. This encompasses all sources of complexity that are not essential to the mathematics, but rather imperfections that could be removed given enough time and energy. Both inherent and accidental complexity contribute to the total complexity of a piece of mathematics. It is this total complexity which can overwhelm both authors and readers.

## Inherent complexity

The main source of inherent complexity for a piece of mathematics comes from the scope of its prerequisites. This can be measured along the two axes of *breadth* and *depth*: a piece of mathematics may combine ingredients from many different areas of mathematics, or it may be a pinnacle within some subject, or a combination of both. Note that the complexity of a theorem statement can be unrelated to the complexity of its proof. Fermat's Last Theorem is an excellent example in this regard. The statement can be understood by anyone who understands the usual operations on natural numbers, but understanding the proof requires mastery of several subfields of mathematics. Conversely, an elementary property of a complicated definition may have a simple proof. In this case, the *statement* has a high inherent complexity, but the proof does not. Moreover, two different approaches to a proof of a given theorem may differ in their inherent complexities, even if both are written in an ideal manner.
Thus, inherent complexity does not measure the minimal complexity of any possible proof or argument, but rather the complexity of a particular argument or proof strategy. An excellent proof strategy that is executed poorly may have low inherent complexity and high accidental complexity (see below). Vice versa, a poorly chosen proof strategy that is executed perfectly may have high inherent complexity and low accidental complexity.

Complexity may also arise from the applications of proof *techniques* as opposed to applications of theorems. By this we are referring to proofs that apply variations of a common technique, but where it is unreasonable to capture all applicable variations in a single general theorem statement. This happens frequently in parts of analysis, probability theory, and combinatorics, to name some examples. A specific example, provided to us by Peter Nelson, is "connectivity reduction," where one proves theorems about combinatorial objects such as graphs, matroids, hypergraphs, etc., by reducing to certain "connected" objects. There are many variations in the objects involved, the notion of connectivity, the way connected objects are glued together, etc. While such connectivity reductions are common, it does not seem reasonable to unify them in a precise theorem statement. While it may appear as if this is more a form of accidental complexity, we argue that these proof techniques should rather be viewed as inherent complexity of the subfield of mathematics in which the proof takes place. Although it might be possible to rephrase such a proof technique as an actual theorem for the purpose of a given proof or mathematical argument, the theorem will not be sufficiently broadly applicable: only the proof strategy generalizes.

## Accidental complexity

We illustrate accidental complexity by some well-known examples that occur as part of the reading and writing of mathematical texts.
*Referring to the proof of a lemma, instead of its statement.* This may occur in the rather common situation where the proof of Lemma X.Y.Z in reference \[AB\] actually proves something stronger than what the statement of Lemma X.Y.Z claims. In this case we might write something like "We now see that our quasi-perfect gadget $G$ is strongly separated, by the proof of \[Lemma X.Y.Z, AB\]. Indeed, it is not hard to check that the proof shows $G$ is not only separated, but even *strongly* separated".

*Using modifications of source material.* Sometimes in the mathematical writing process, we need a verbatim copy of some source material modulo a handful of completely algorithmic modifications. A mild example of this might be: "In the following proof, we will rely on a slightly modified version of the material in section 5 of \[CD\], but with the condition that the field $k$ has characteristic $0$ replaced by the condition that $k$ is perfect." This can often occur in combination with the preceding point, where a modification of a proof of some result applies while the statement itself does not.

*Unwritten assumptions.* Sometimes statements of theorems or lemmas can omit assumptions. In the good cases, these assumptions were listed at the beginning of the section ("in this section we assume that $G$ is infinite") or at the beginning of the paper in a section on notation. But there might also be assumptions that are common practice within certain communities ("all rings are commutative") which might be hard to discover for the uninitiated reader.

*Unclear definitions or terminology.* A related point is that it may be hard to understand definitions or terminology. For example, definitions may change over time. In the early literature on scheme theory, all schemes were separated. In Whitney's excellent book on Geometric Integration Theory [@whitney p.15], a function is called smooth if it is $C^1$.
Such changes occur naturally over time, but can lead to significant confusion when studying older works, and even to a second degree when contemporary sources quote older works without properly warning about the change in terminology.

*Keeping track of side conditions.* When a lemma is applied, we verify the main assumptions of the lemma. But it would be burdensome and distracting to verify all the "trivial" side conditions (that some number is positive, or less than $\epsilon$, or that some map is continuous, etc.). At the same time, requiring the reader to keep track of these conditions implicitly increases the cognitive load.

*Editing a paper.* Making changes to a paper in its final stages can be tricky, because of all the possible ramifications downstream. Moving a lemma from section 2 to section 3 might require changing the statement, because the standing assumptions on $X$ differ between the two sections. And yet it is all too easy to make such changes in a careless manner, possibly even shortly before publication and after a refereeing process took place.

All of the examples above, among others, are well-known pitfalls in mathematical writing, and are often warned against; see [@halmos1970write; @knuth1989mathematical; @serre2007badly]. Nevertheless, these issues are still commonplace in the mathematical literature, increasing the accidental complexity of the work.

## What is an abstraction boundary?

An *abstraction boundary* is a formal separation of the implementation details of a concept from its extrinsic properties and behaviour. This involves introducing a *specification* (or *interface*) which describes how the object interacts with the outside environment. By using such specifications, one can work with the concepts they capture without relying on any actual implementation details.
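As a small Lean illustration (a toy sketch of ours, not code from LTE), a lemma proved purely against a specification, here the group axioms, never touches any particular implementation of a group, and therefore applies to all of them:

```lean
-- This proof uses only the specification (the group axioms), so it
-- holds in permutation groups, matrix groups, and so on; the name
-- `my_inv_unique` is ours, chosen for this illustration.
lemma my_inv_unique {G : Type*} [group G] (a b : G) (h : a * b = 1) :
  b = a⁻¹ :=
by rw [← one_mul b, ← inv_mul_self a, mul_assoc, h, mul_one]
```

No concrete group is mentioned anywhere: the abstraction boundary is the typeclass `group G`{.lean}.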
This concept is prevalent in software engineering, with prominent examples including `C` header files, public methods in object-oriented programming, and typeclasses in functional programming. In mathematics, abstraction boundaries play a fundamental role as well, with the mathematical concept of a *definition* being the principal example. For example, the definition of a "group" allows us to develop group theory, and apply results from it in the most varied circumstances, instead of only working with particular examples such as permutation groups and matrix groups. In this context, the specification is provided by the group axioms. Other examples of abstraction boundaries in mathematics include "black box" theorems, such as Cauchy's residue theorem, the existence of a Néron model, the law of large numbers, Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"}, etc. Such theorems can be effectively applied in particular situations without requiring an understanding of the argument.

## Abstraction boundaries in interactive theorem provers

Interactive theorem provers (ITPs) enforce certain abstraction boundaries rigidly. While this can be constraining at times, it also relieves the user from cognitive load arising from several sources of complexity. ITPs differ in the features that they offer, and certainly in the implementation details of those features. But most of this section applies to all modern ITPs, even though we use the Lean theorem prover as a running example. Lean is the system that we used for LTE, and it is the ITP that we are most familiar with.

The most well-known feature offered by ITPs is *proof checking*, often as an instance of *type checking*. This means that the ITP ensures that lemmas and theorems are always applied correctly, and that all side conditions are satisfied.
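As a toy illustration (our own example, not from LTE), Lean refuses to apply a lemma unless every hypothesis, including "trivial" side conditions, is actually supplied:

```lean
-- `nat.div_mul_cancel` requires a divisibility side condition;
-- omitting the hypothesis `h` would be rejected by the type checker.
example (h : 3 ∣ 12) : 12 / 3 * 3 = 12 := nat.div_mul_cancel h
```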
Certain side conditions that are amenable to automation, such as continuity proofs, functoriality proofs, etc., can often be handled in a way that is transparent to the user.

Another feature that most ITPs offer is *proof irrelevance*.[^2] This notion can be seen as an analogue of the common mathematical practice of treating certain theorem statements as black boxes. Essentially, this means that after an ITP finishes checking the proof of a lemma or theorem, it remembers only one bit of information: that the proof is valid. Any constructions performed within the proof become inaccessible as soon as the proof is complete. If it is necessary to refer to such a construction later on, it must be factored out of the proof into a standalone definition. In doing so, the construction becomes an object in its own right, that can be named, referred to, and reasoned about. Conversely, if a construction is local to a proof, it implicitly comes with the strict promise that it will not be used outside of the proof.

# Spec driven development {#section:spec-driven-dev}

*Spec driven development* is an approach for "doing mathematics" which aims to both use and capture the abstraction boundaries of the mathematical objects involved. For our purposes, by "doing mathematics" we mean the process of making mathematical objects such as definitions, constructions, etc., and proving theorems about these objects. As we explain below, the use of an ITP in conjunction with spec driven development provides substantial benefits in taming the various sources of complexity, particularly in complex and collaborative projects. While this approach arose naturally during the LTE, it is quite commonplace, although *implicit*, in usual mathematical practice. For example, an informal proof may assume some properties up front with the promise of a justification in a later section.
The main difference here is that organizing a proof in this way usually increases the accidental complexity and hence the cognitive load. For example, the reader may feel inclined to check that there is no circular reasoning in the final argument. The use of an ITP completely eliminates this source of complexity, and, as explained below, even *encourages* such uses of mathematical debt.

Spec driven development can be summarized as the following recursive process:

1. Isolate a desired mathematical definition, construction, theorem, etc., as a "target".
2. When appropriate, isolate an initial *specification* ("spec" for short) of the target.
3. Break down the object and its specification into parts with lower complexity.
4. Repeat the above, with each new part acting as the target.

Furthermore, at each step, definitions and/or theorems, including the targets and/or their specs, may be *refactored* as necessary. Refactoring refers to the process of restructuring the implementation of an existing object without changing its external behaviour. As long as these implementation details and external properties of the object are properly separated by an abstraction boundary, other parts of the environment will not be affected by such a restructuring.

Spec driven development is also deeply intertwined with the notion of an abstraction boundary discussed in the previous section. In broad terms, the relationship between the two can be broken up into two interconnected categories:

1. Spec driven development uses abstraction boundaries in order to control complexity.
2. New abstraction boundaries arise naturally in the process of spec driven development.

Of course, these two categories are far from disjoint. For instance, as item (2) produces abstraction boundaries, they naturally feed into item (1). At the same time there are situations where spec driven development can straddle both categories.
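Before turning to the actual LTE examples, here is a deliberately tiny toy illustration (our own, unrelated to LTE) of the first two steps of this process, where the `sorry` keyword acts as a placeholder for debt:

```lean
-- Step 1: isolate a target; this `sorry` stands in for *data*.
def collatz_step (n : ℕ) : ℕ := sorry

-- Step 2: an initial spec for the target; this `sorry` stands in
-- for a *proof*. Both placeholders become new targets.
lemma collatz_step_of_even (n : ℕ) (h : n % 2 = 0) :
  collatz_step n = n / 2 := sorry
```

A contributor can later repay the debt, e.g. with `def collatz_step (n : ℕ) : ℕ := if n % 2 = 0 then n / 2 else 3 * n + 1`{.lean}, after which the spec lemma is provable by `if_pos h`{.lean}; anything proved from the spec alone remains untouched.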
In the following subsections we discuss the idea behind spec driven development in more detail by going through a series of examples, based on constructions that were necessary in the LTE. We will present a few snippets of Lean code, although experience with the language or any other ITP will not be necessary. The precise meaning of the code we present, and of the mathematics involved, will not be important; the snippets are only meant to illustrate the implementation of spec driven development in an ITP. The only important concept needed from Lean's syntax is that of a `sorry`{.lean}; this is a keyword which should be thought of as a *placeholder* that needs to be filled in later. A `sorry`{.lean} can be used as a placeholder for both *data* and *proofs*, which is crucial for this approach.[^3]

## Using and extending abstraction boundaries {#subsection:abelian-cat-example}

The simplest use-case of spec driven development is in using previously established abstraction boundaries in order to reduce complexity for the overall objective at hand. As an example, consider the assertion that the category of condensed abelian groups is an abelian category, a property that is used in numerous places in LTE. In Lean, this might be initially formulated as follows:

```lean
instance : abelian (Condensed Ab) := sorry
```

In this example, the `sorry`{.lean} appearing should be considered as our initial *target*: it is a unit of mathematical debt that must be accounted for at some point in the future. At this point, all collaborators can immediately start using the assertion that the category `Condensed Ab`{.lean} is abelian, even before providing a proof of this fact. For example, we can now speak about kernels and cokernels of morphisms, exact sequences, etc. The abstraction boundary here is the notion of an *abelian category*. There are instances where we wish to extend the boundary in some way, and at the basic level, this can be accomplished by adding additional *spec lemmas/theorems*.
For instance, in the example above we may want to use the fact that morphisms of condensed abelian groups are morphisms of presheaves on `Profinite`{.lean} taking values in `Ab`{.lean}, and that sums of such morphisms are computed "object-wise". In practice, this assertion would be true by definition, but since the actual definition has yet to be provided, we cannot formally rely on this. Instead, we add this property with a *spec lemma*, which becomes an additional target that can again be used immediately:

```lean
lemma add_val_app {F G : Condensed Ab} {X : Profiniteᵒᵖ} (f g : F ⟶ G) :
  (f + g).val.app X = f.val.app X + g.val.app X := sorry
```

In the code above, if `f`{.lean} is a morphism of condensed abelian groups, then `f.val.app X`{.lean} denotes the corresponding morphism on sections over the profinite set `X`{.lean}. Note that if we *use* the targets above at this point, then we are *forced* to remain within their associated abstraction boundary. We can't rely on an implementation detail because there is no implementation yet!

In order to start repaying the existing debt, we can break down each target and its associated spec into smaller parts, and repeat the process until the individual targets can be handled directly. The initial target and its spec may thus become a collection of several parts, each of which becomes an additional target. In our example, recall that an abelian category is a preadditive category satisfying some additional conditions.
After we do this for the abelian category instance in the example above, we are left with the following, which summarizes all the current targets:

```lean
instance : preadditive (Condensed Ab) := sorry

instance : abelian (Condensed Ab) :=
{ normal_mono_of_mono := sorry,
  normal_epi_of_epi := sorry,
  has_finite_products := sorry,
  has_kernels := sorry,
  has_cokernels := sorry,
  ..(infer_instance : preadditive (Condensed Ab)) }

lemma add_val_app {F G : Condensed Ab} {X : Profiniteᵒᵖ} (f g : F ⟶ G) :
  (f + g).val.app X = f.val.app X + g.val.app X := sorry
```

At each step, a contributor may fill in some target(s) by replacing the `sorry`{.lean} with an actual construction/proof. For example, one contributor may construct the preadditive structure on `Condensed Ab`{.lean} in such a way that the spec lemma `add_val_app`{.lean} above is true by definition, resulting in a reduction of debt:

```lean
instance : preadditive (Condensed Ab) := /- actual proof omitted -/

lemma add_val_app {F G : Condensed Ab} {X : Profiniteᵒᵖ} (f g : F ⟶ G) :
  (f + g).val.app X = f.val.app X + g.val.app X := rfl
```

At this point, the remaining debt in our running example consists of `normal_mono_of_mono`{.lean}, `normal_epi_of_epi`{.lean}, etc., which appear as components of the proof that `Condensed Ab`{.lean} is an abelian category. These targets may again be handled directly, or further broken down as necessary.

## Related targets and specs {#subsection:ext-example}

In practice, the targets appearing in spec driven development may depend in nontrivial ways on others. To illustrate this, let's consider a more interesting example that came up during the LTE. In order to state Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"} formally, we must be able to talk about $\operatorname{Ext}$-groups, and a natural context for this construction is an abelian category with enough projectives.
For each natural number $n$, $\operatorname{Ext}^{n}(-,-)$ is a bifunctor taking values in abelian groups, which is contravariant in the first variable. We may thus consider the following as our target:

```lean
-- The arrow `⥤` is used to denote functors.
def Ext {A : Type*} [category A] [abelian A] [enough_projectives A] (n : ℕ) :
  Aᵒᵖ ⥤ A ⥤ Ab := sorry
```

In this case, it makes sense to immediately introduce specifications describing the intended behavior of this definition. For example, we may want to ensure that $\operatorname{Ext}^{0}$ is naturally isomorphic to $\operatorname{Hom}$, and to speak about long exact sequences of $\operatorname{Ext}$-groups, say in the first variable. This can all be accomplished by adding the following:

```lean
-- `preadditive_yoneda.flip` is the bifunctor `X ↦ (Y ↦ Hom(X,Y))`,
-- where `Hom(X,Y)` is considered as an abelian group.
def Ext_zero_iso_Hom {A : Type*} [category A] [abelian A] [enough_projectives A] :
  (Ext 0 : Aᵒᵖ ⥤ A ⥤ Ab) ≅ preadditive_yoneda.flip := sorry

def d {A : Type*} [category A] [abelian A] [enough_projectives A]
  (n : ℕ) (X Y : A) :
  ((Ext n).obj (op X)).obj Y ⟶ ((Ext (n+1)).obj (op X)).obj Y := sorry

lemma Ext_LES {A : Type*} [category A] [abelian A] [enough_projectives A]
  (n : ℕ) (X₁ X₂ X₃ Y : A) (f : X₁ ⟶ X₂) (g : X₂ ⟶ X₃)
  (ses : short_exact f g) :
  exact_seq Ab [((Ext n).map g.op).app Y, ((Ext n).map f.op).app Y, d n X₁ Y] :=
sorry
```

The definitions `Ext_zero_iso_Hom`{.lean}, `d`{.lean} and spec lemma `Ext_LES`{.lean} are additional targets, while `Ext_LES`{.lean} relates to *all three* targets `Ext`{.lean}, `Ext_zero_iso_Hom`{.lean} and `d`{.lean}. As additional targets and/or preexisting definitions/theorems are introduced into the context, the situation can quickly become unwieldy. It is at this stage where the use of an ITP really shines.
Essentially, the ITP keeps track of all the connections between the targets and existing definitions/theorems, while ensuring consistency *at all times.* One enormous qualitative benefit of using an ITP with this approach is that a contributor working on a given target only needs to keep track of the immediately relevant portions of this dependency graph of targets/definitions/theorems, thereby substantially reducing cognitive load. This approach also facilitates collaboration, as it is no longer necessary to remember (or even understand) the context of the project as a whole when working on individual targets. In collaborative projects involving various mathematical subfields, each contributor may thus choose to only work on targets within their own area(s) of expertise.

## Refactoring {#subsection:refactoring}

Another major benefit of using abstraction boundaries, which is highlighted in the process of spec driven development, is that *refactoring* becomes easier in most cases. For instance, in the example from §[4.1](#subsection:abelian-cat-example){reference-type="ref" reference="subsection:abelian-cat-example"}, the actual *implementation* of the fact that `Condensed Ab`{.lean} is an abelian category is irrelevant, as long as one only needs results which hold true in an arbitrary abelian category. Changing the implementation details will therefore not interfere with the validity of such results. Continuing with the example of $\operatorname{Ext}$ groups from §[4.2](#subsection:ext-example){reference-type="ref" reference="subsection:ext-example"}, in LTE there was a point where we had a working definition of `Ext`{.lean}, similar to the one described in §[4.2](#subsection:ext-example){reference-type="ref" reference="subsection:ext-example"}, which had to be refactored to accommodate $\operatorname{Ext}$-groups of (bounded above) *complexes* as opposed to just objects.
In other words, at one point we introduced targets resembling the following:

```lean
def bounded_above_complex.Ext {A : Type*} [category A] [abelian A]
  [enough_projectives A] (n : ℤ) :
  (bounded_above_complex A)ᵒᵖ ⥤ (bounded_above_complex A) ⥤ Ab := sorry

def bounded_above_complex.single_zero {A : Type*} [category A] [abelian A] :
  A ⥤ bounded_above_complex A := sorry
```

and *redefined* the `Ext`{.lean} from §[4.2](#subsection:ext-example){reference-type="ref" reference="subsection:ext-example"} using these, roughly as follows:

```lean
def Ext {A : Type*} [category A] [abelian A] [enough_projectives A] (n : ℕ) :
  Aᵒᵖ ⥤ A ⥤ Ab :=
(bounded_above_complex.single_zero A).op ⋙
  (bounded_above_complex.single_zero A ⋙
    (bounded_homotopy_category.Ext n).flip).flip
```

By relying on the given abstraction boundaries, as one is essentially *required* to do in the context of spec driven development, the process of resolving errors arising as a consequence of such a refactor becomes significantly easier. While this change in `Ext`{.lean} may indeed cause certain other targets/specs/definitions/theorems to fail, this failure will be *localized* to those constructions and proofs that depend on the *implementation*. Crucially, *the ITP will catch all such errors!* Any results elsewhere in the project that rely on the abstraction boundary in question, such as proofs that only rely on the lemma `Ext_LES`{.lean} from §[4.2](#subsection:ext-example){reference-type="ref" reference="subsection:ext-example"}, should still work without any further modification.

# Aligning the formal with the informal

Every mathematician has to make an ongoing effort to align internal mental models of mathematics with the pen-and-paper representations found in the literature. This alignment goes both ways: mental models have to be updated and adjusted, and expositions of mathematics can be streamlined and improved. This alignment is an integral part of the creative process.
It happens at the blackboard, during seminar talks, at the writing table, or during a walk in the park. Furthermore, it cannot be captured in formal rules: one cannot *prove* that a mental model aligns well with a pen-and-paper representation of some piece of mathematics.

## The alignment problem

With the use of ITPs, this alignment issue is extended in a new direction: besides mental models and the pen-and-paper representations, there are the additional *digital* representations. To illustrate the alignment problem, consider the statement of Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"} as it was formalized in LTE:

```lean
variables (p' p : ℝ≥0) [fact (0 < p')] [fact (p' < p)] [fact (p ≤ 1)]

theorem liquid_tensor_experiment (S : Profinite.{0}) (V : pBanach.{0} p) :
  ∀ i > 0, Ext i (ℳ_{p'} S) V ≅ 0 := /- proof omitted -/
```

At a superficial level, this code snippet shows a lot of similarity with the statement of Theorem [Theorem 1](#theorem:challenge){reference-type="ref" reference="theorem:challenge"}. However, there is a possibility that the definitions of the symbols `Profinite`{.lean}, `pBanach`{.lean}, `Ext`{.lean}, and so forth, do not mean what they should mean according to established mathematical tradition. In other words: do their formal definitions align with the pen-and-paper representations and/or with the mental model of anyone looking at the source code of LTE? If the contributors to LTE were evil, the symbol `Ext`{.lean} might be defined as the zero functor, trivializing the statement in the code snippet above. More innocently, there may be some subtle mistake in the definition of `Ext`{.lean} that implies the statement for completely uninteresting reasons. Even this is quite rare in practice, as a formal proof written by an honest human mathematician still relies on the mathematician's mental models of the objects involved.
Nevertheless, the alignment issue still needs to be addressed in order to efficiently *communicate* the results of a formalization to the greater mathematical community.

## Abductive reasoning

Although there is no formal system for verifying alignment of different representations of mathematics, we argue that *abductive reasoning*, along with purposefully designed abstraction boundaries, can be fruitfully applied to provide convincing evidence of alignment. Abductive reasoning is the technical term for a mode of reasoning that seeks to explain a set of facts or observations in the simplest way possible, and is also called *inference to the best explanation*. Unlike with deductive reasoning, an abduction is not a formal logical consequence of its premises. The method is concisely captured by the colloquial "Duck Test":

> If it looks like a duck, swims like a duck,\
> and quacks like a duck, then it probably is a duck.

We provide a curated selection of examples and lemmas, somewhat analogous to "test suites" in software development, that exhibit the ways in which a particular formal definition can be used. This is the input that allows others to abductively conclude that these formal symbols align with their mental models of the corresponding mathematical notions. In the terminology of the preceding section, this input would be a carefully crafted "spec" for the formal representation of a mathematical concept. The crucial aspect of such a spec is that it can be much shorter and easier to read than the actual definitions themselves. Indeed, typically the definitions will recursively unwind to a long list of prerequisite definitions.[^4] Furthermore, definitions might be made using general constructions that the reader is unfamiliar with.

For LTE we provided such specs for all the objects occurring in the main theorem of liquid vector spaces. We collected these specs in the subfolder `examples/`{.lean} of the project; see [@LTE-examples] for a detailed discussion.
For example, we show that the real numbers in Lean are a conditionally complete linearly ordered field, that $\text{Ext}^1(\mathbb{Z}/n\mathbb{Z}, \mathbb{Z}/n\mathbb{Z}) \cong \mathbb{Z}/n\mathbb{Z}$, and that $p$-Banach spaces $V$ can be given a norm that satisfies $\|\lambda v\| = |\lambda|^p\|v\|$ for all $\lambda \in \mathbb{R}$ and $v \in V$. Such examples are a *reality check*, which collectively are meant to provide a large degree of confidence that the formal representations of mathematical concepts align with corresponding mental models and pen-and-paper representations. Abductive reasoning is not unique to LTE or the use of ITPs, and many forms of it are used in mathematical practice at large. For example, the following common methods of establishing and/or strengthening belief in a mathematical argument or proof can be considered as instances of abductive reasoning:

- the ingredients and/or ideas seem strong enough to prove the claim;
- the claims hold up in well-chosen examples or numerical experiments;
- the proof is accepted by the community, including the experts in the field.

With such methods, a reader may obtain some level of trust in the validity of a piece of mathematics without necessarily digging into every detail of the exposition. The reader must balance their desired level of trust with the amount of effort they are willing to exert. The main difference between such forms of abductive reasoning and the strategy we put forth in LTE lies in the scope to which it is applied. While the three examples above focus on deciding whether or not to believe the *proofs* of certain results, in the case of formalized mathematics, the focus shifts to merely the validity of the *definitions* involved. Thus, in formal projects such as LTE, the "surface area" to which abductive reasoning applies is comparatively small and carefully delineated.
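As a loose analogy to the reality checks above (hypothetical Python, not LTE code), the scaling law $\|\lambda v\| = |\lambda|^p\|v\|$ can be checked as a property test against the $\ell^p$ quasi-norm $\|v\| = \sum_i |v_i|^p$ with $0 < p < 1$, which satisfies exactly this identity:

```python
import math
import random

def p_norm(v, p):
    """The l^p quasi-norm sum_i |v_i|^p (no p-th root), for 0 < p < 1."""
    return sum(abs(x) ** p for x in v)

def check_scaling(p, trials=100):
    """Spec-style check: ||lam * v|| == |lam|^p * ||v|| on random inputs."""
    rng = random.Random(0)  # seeded so the check is reproducible
    for _ in range(trials):
        v = [rng.uniform(-10, 10) for _ in range(5)]
        lam = rng.uniform(-5, 5)
        lhs = p_norm([lam * x for x in v], p)
        rhs = abs(lam) ** p * p_norm(v, p)
        if not math.isclose(lhs, rhs, rel_tol=1e-9):
            return False
    return True
```

Like the `examples/` specs, such a check does not prove the definition correct; it merely gives abductive evidence that it behaves as the mental model predicts.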
Once the reader is confident that the definitions align with their internal mental model, which can be done with abductive reasoning as discussed above, they may choose how far to dig into the associated proofs while remaining confident that the ITP ensures correctness. Overall, this has the effect of *separating* the exposition of the material from the issue of trusting its correctness.

# Conclusion

In §[3](#section:sources-of-complexity){reference-type="ref" reference="section:sources-of-complexity"} we discussed two kinds of sources of complexity in mathematics: *accidental* and *inherent*. We also saw that ITPs can manage and remove accidental complexity, essentially *by default*. On the other hand, in §[4](#section:spec-driven-dev){reference-type="ref" reference="section:spec-driven-dev"} we saw that spec driven development and abstraction boundaries, in conjunction with an ITP, can also help in taming *inherent* complexity. As discussed in that section, a fundamental feature of spec driven development is the concept of *mathematical debt*, which can be effectively tracked using an ITP. While this debt has to be repaid at some point, it is fundamentally *non-blocking* for the project as a whole. The targets arising in spec driven development, along with other mathematical objects in question, are frequently interconnected in highly nontrivial ways. With the use of an ITP, contributors only need to keep track of the objects which are *locally* relevant, thereby reducing the cognitive load significantly. At the same time, these targets can be recursively broken into smaller components until they can be handled directly. This is particularly useful in collaborations, as it allows the work to be easily distributed among contributors with potentially varying areas of expertise. Another key property of spec driven development is that reliance on abstraction boundaries becomes effectively *required* with this approach.
As a byproduct, refactors generally become much easier. As the ITP ensures consistency within the project at all times, it also keeps track of the changes that must be made as a consequence of such refactors. We feel that there is a lot of potential to develop tools to help facilitate spec driven development in pure mathematics, in conjunction with an ITP. Above we discussed how an ITP "keeps track" of the complicated dependency graph of objects (including targets) within a mathematical project. Right now, this is essentially done in the process of *type-checking*. However, a tool that does this in a more *explicit* manner, by identifying precisely the various objects associated with a given piece of mathematics within an ITP, would be even more beneficial, as it would tell contributors *precisely* what they need to focus on when working on individual targets. Patrick Massot's blueprint software [@blueprint-software], which was used in LTE, [^5] can be seen as an initial approximation to this idea. Another tool that would be useful in projects such as LTE is a system that would keep track of the "weights" of targets. Essentially, we envision that each `sorry`{.lean} in a given project would be tagged with a "weight" that is meant to act as a quantitative approximation of its complexity. Progress in the project as a whole can then be tracked by computing the total remaining weight. As the history of projects such as LTE is stored using a version control system (e.g. `git`), the evolution of the weights of targets (along with other metadata) could be mined and studied, and potentially used in machine learning applications. In this article, we have painted a very positive picture about the current use of ITPs in pure mathematics. The formal verification of significant results, such as Hales' proof of the Kepler conjecture [@zbMATH06750549] and the liquid tensor experiment, has clearly demonstrated that ITPs are valuable tools for the verification of hard proofs.
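The weight-tracking idea above could be prototyped with very little code. A hypothetical Python sketch, assuming an (invented, not existing in Lean) annotation convention where each `sorry`{.lean} carries a comment like `sorry -- weight: 8` and unannotated sorries default to weight 1:

```python
import re

# Hypothetical convention: `sorry -- weight: N`; bare `sorry` counts as 1.
SORRY_RE = re.compile(r"\bsorry\b(?:\s*--\s*weight:\s*(\d+))?")

def remaining_weight(source: str) -> int:
    """Total remaining weight of all sorries in a Lean source string."""
    return sum(int(m.group(1)) if m.group(1) else 1
               for m in SORRY_RE.finditer(source))

example = """
theorem target_one : A := sorry -- weight: 8
theorem target_two : B := sorry -- weight: 3
theorem target_three : C := sorry
"""
```

Run over every file at every commit, such a script would yield exactly the kind of per-target progress series described above.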
To summarize, we discussed how the use of abstraction boundaries and spec driven development within the rigid framework of an ITP helps - facilitate collaboration amongst mathematicians with different expertise; - manage accidental complexity and thus reduce cognitive load; - provide a guarantee of formal correctness of proofs, thereby reducing the amount of material that needs to be trusted. We envision that these methods can also provide similar benefits in original research. However, we acknowledge that there are still a number of high costs that come with the use of ITPs and related tools that pose a barrier to wider adoption within mathematics. For example, they currently have a steep learning curve, and the difference between an informal piece of mathematical exposition and its formalization is still too large. Nevertheless, ITPs and their formalized mathematics libraries have seen tremendous improvements in the last few years, while the successes in the field have helped accelerate progress. We are also optimistic that advances in metaprogramming, artificial intelligence, and type theory itself, will help resolve some of the remaining issues in the near future. [^1]: The actual definition of a condensed set involves cardinality bounds which we have omitted from the discussion, while the only distinction between condensed and pyknotic objects is in such set-theoretic issues. [^2]: ITPs built on a foundation of homotopy-type-theoretic nature will often not have proof irrelevance for *explicit reasons* arising from the foundational theory. The particular reasons for this choice are not relevant to this paper, and a discussion of homotopy type theory would lead us too far astray. [^3]: In type-theoretic terms, a `sorry`{.lean} can be used as a placeholder for an arbitrary *term* of an arbitrary type, including propositions. [^4]: The main result of LTE depends recursively on several thousands of definitions. 
Part of these definitions are made in the project itself, but the majority is imported from Lean's mathematical library `mathlib`. [^5]: <https://leanprover-community.github.io/liquid/>
---
abstract: |
  We develop a flexible online version of the permutation test. This allows us to test exchangeability as the data is arriving, where we can choose to stop or continue without invalidating the size of the test. Our methods generalize beyond exchangeability to other forms of invariance under a compact group. Our approach relies on constructing an $e$-process that is the running product of multiple $e$-values that are constructed on batches of observations. To construct $e$-values, we first develop an essentially complete class of admissible $e$-values into which one can flexibly 'plug' almost any desired test statistic. To find good $e$-values, we develop the theory of likelihood ratios for testing group invariance, yielding new optimality results for group invariance tests. These statistics turn out to exist in three different flavors, depending on the space on which we specify our alternative, and their induced $e$-processes satisfy attractive power properties. We apply these statistics to test against a Gaussian location shift, which yields connections to the $t$-test when testing sphericity, connections to the softmax function and its temperature when testing exchangeability, and an $e$-process that is valid under arbitrary dependence when testing sign-symmetry.
author:
- |
  Nick W. Koning\
  Econometric Institute, Erasmus University Rotterdam, the Netherlands
bibliography:
- Bibliography-MM-MC.bib
title: "**Online Permutation Tests and Likelihood Ratios for Testing Group Invariance**"
---

**Online Permutation Tests and Likelihood Ratios for Testing Group Invariance**

*Keywords:* $e$-values, sequential testing, online testing, group invariance test, permutation test.

# Introduction

We introduce and motivate our methodology through an example: a case-control study. In an idealized traditional case-control study, treatments are randomly allocated across $n$ units at the start of the study.
The null hypothesis $H_0^\text{static}$ that the treatment has no effect can be represented as $$\begin{aligned} H_0^\text{static} : X_{[1:n]} \text{ is exchangeable}, \end{aligned}$$ where $X_{[1:n]} = X_1, \dots, X_n$ denotes the observed outcomes. To test this null hypothesis at some level $\alpha$, we can select any desired test statistic $T$ and perform a permutation test. The $p$-value for this test is defined as $$\begin{aligned} p(X_{[1:n]}) = \mathbb{P}_{\overline{P}}\left(T(\overline{P}X_{[1:n]}) \geq T(X_{[1:n]})\right), \end{aligned}$$ where $\overline{P}$ is uniformly distributed over all possible permutations on $n$ elements. This $p$-value can be understood as the proportion of test statistics calculated from the rearranged ('permuted') data that exceed or match the original test statistic. Moreover, it is guaranteed to control the size in finite samples: $$\begin{aligned} \mathbb{P}\left[p(X_{[1:n]}) \leq \alpha\right] \leq \alpha, \text{ for all } n. \end{aligned}$$ Unfortunately, it is hard to determine a 'good' sample size $n$ prior to the study, as the magnitude of the treatment effect under the alternative is usually unknown. If the sample size $n$ is set too small, there is a high risk of non-rejection even if the treatment is effective. On the other hand, setting $n$ too large may come with high cost or ethical concerns. As $n$ is hard to specify ahead of time, we would ideally choose it in an 'online' fashion, based on the data that has already arrived. However, we cannot simply use this 'static' permutation test with a dynamic number of observations, as the resulting test would no longer have valid size. Indeed, we typically do not have $$\begin{aligned} \label{eq:opt_cont} \mathbb{P}\left[\inf_{n \in \mathcal{T}} p(X_{[1:n]}) \leq \alpha\right] \leq \alpha, \end{aligned}$$ where $\mathcal{T} = \{t_1, t_2, \dots\}$ contains the points in time at which we would like to perform our test.
This means that even if there is no treatment effect, our $p$-value will eventually drop below the threshold $\alpha$ with a probability that is typically much higher than $\alpha$. The primary contribution of this paper is to introduce an online generalization of the permutation test that does have valid size. The test we introduce is valid for two different types of sequential treatment allocation that lead to different null hypotheses. We will refer to these allocation schemes as *prior allocation* and *batch allocation*.\
**Prior allocation**. Here, the treatments are randomly allocated at the beginning of the study, with outcomes $X = X_1, X_2, \dots$ observed sequentially. The null hypothesis that posits no treatment effect can then be formulated as $$\begin{aligned} H_0^{\text{prior}} : X \text{ is sequentially exchangeable}, \end{aligned}$$ where sequential exchangeability means that $X_{[1:t]}$ is exchangeable for all $t$, and $[1$:$t]$ denotes the range of indices from $1$ to $t \geq 1$.\
**Batch allocation**. Here, the treatment allocation is not predetermined. Instead, treatments are only allocated to the first batch of $t_1$ units at the start, after which their outcomes $X_{[1\text{:}t_1]}$ are observed. Based on these outcomes, treatments may be independently allocated to new batches, and the choice to allocate further treatments can be made after every batch. We then formulate our null hypothesis of no treatment effect as $$\begin{aligned} H_0^{\text{batch}} : X \text{ is within-batch exchangeable and cross-batch independent}. \end{aligned}$$ This means that each batch is internally exchangeable, while being independent from other batches.

## Contributions

To construct an online version of the permutation test, we use the machinery of $e$-values and $e$-processes (sometimes called $e$-variables or $e$-statistics), which is detailed in Section [2](#sec:e-values){reference-type="ref" reference="sec:e-values"}.
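For concreteness, the static permutation test from the introduction can be sketched in a few lines of toy code (our own illustration; the statistic, the data, and the exhaustive enumeration of permutations are choices made only for this example):

```python
from itertools import permutations

def mean_diff(x, treated):
    """T: absolute difference in mean outcome between treated and control."""
    t = [xi for xi, d in zip(x, treated) if d]
    c = [xi for xi, d in zip(x, treated) if not d]
    return abs(sum(t) / len(t) - sum(c) / len(c))

def perm_p_value(x, treated, T=mean_diff):
    """Static permutation p-value: the proportion of permuted data sets
    whose statistic matches or exceeds the observed one."""
    t_obs = T(x, treated)
    perms = list(permutations(x))          # exhaustive; only for tiny n
    return sum(T(p, treated) >= t_obs for p in perms) / len(perms)

# Toy data: the first three units are treated and have visibly higher outcomes.
x = (5, 4, 6, 1, 2, 0)
treated = (1, 1, 1, 0, 0, 0)
p = perm_p_value(x, treated)               # = 72/720 = 0.1
```

In practice one samples permutations rather than enumerating all $n!$ of them; the point here is only the definition of the $p$-value, whose online pitfalls the rest of the paper addresses.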
In particular, we develop an $e$-process that is valid both for $H_0^\text{prior}$ and $H_0^\text{batch}$. Our $e$-process is the running product of $e$-values that are each based on batches of observed data. Such a construction is well-known to be valid if the $e$-values are independent, as in $H_0^\text{batch}$, but can also be shown to be valid under $H_0^\text{prior}$ by using the de Finetti-Hewitt-Savage theorem [@vovk2021testing]. Moreover, we show how to generalize this to validity under a generalization of $H_0^\text{batch}$ to within-batch invariance under arbitrary compact groups, and a generalization of $H_0^\text{prior}$ to sequential invariance under compact supergroups of permutations. In addition, we derive a corollary that can be viewed as a multiplicative version of the exchangeable Markov inequality by @ramdas2023randomized. To construct the $e$-values that we use as building blocks for $e$-processes, we first derive an essentially complete class of admissible $e$-values, which can be constructed from almost any (appropriately integrable) test statistic of choice $T$. To select a 'good' $e$-value from this class, we develop the theory of likelihood ratio statistics for testing exchangeability and more general forms of group invariance. We show that the $e$-processes built from these likelihood ratios satisfy an attractive power property introduced by @koolen2022log. Moreover, @ramdas2022admissible show that any admissible procedure for sequential inference must rely on likelihood ratios. As far as we are aware, likelihood ratios for group invariance have not been explicitly derived before. We find that these likelihood ratio statistics come in three flavors, which differ by the space on which we define the alternative. The first flavor is inspired by a proof in @lehmann1949theory, and relies on specifying an alternative on the sample space.
The second flavor generalizes the first, by showing that the same construction is valid if we only specify an alternative on certain invariant subsets of the sample space (orbits), without specifying a mixing distribution over the subsets. We note that the results of @lehmann1949theory also hold for such alternatives, allowing us to generalize their main result. Finally, we derive a third flavor that relies on specifying an alternative on the set of permutations (i.e. the group). This final flavor relies on the use of an inversion kernel (see Ch. 7 in @kallenberg2017random), which was recently introduced in the context of group invariance testing by @chiu2023non. We discuss the relationship between the three types of likelihood ratios, and also illustrate them in a toy example that can be found in the Appendix. We apply our likelihood ratio statistics to testing group invariance on $\mathbb{R}^d$, $1 \leq d < \infty$, against a simple alternative that is a location shift $\mu$ under normality. For the special case of sphericity, this generalizes an example from @lehmann1949theory relating to the $t$-test, where we derive a larger class of alternatives against which the $t$-test is uniformly most powerful. We also consider testing sign-symmetry, which turns out to produce an $e$-process that is valid under arbitrary dependence of the data. Moreover, we consider exchangeability, where we find that the softmax function is nested as a special case of the likelihood ratio. In addition, we provide some guidance in selecting $\mu$ under composite alternatives, which coincides with selecting the temperature in the softmax function. Specifically, the 'temperature' should be high if the data is informative, and low if the data is uninformative.
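The softmax connection can be made concrete with a small sketch (our own illustration, not the paper's code). Plugging the statistic $T(x) = \exp(x_1/\tau)$ into an $e$-value of the form "$T(x)$ divided by the average of $T$ over all permutations of $x$" gives $e(x) = n \cdot \mathrm{softmax}(x/\tau)_1$, and averaging this $e$-value over the permutation group returns exactly $1$:

```python
import math
from itertools import permutations

def e_softmax(x, tau):
    """e-value for exchangeability built from T(x) = exp(x_1 / tau):
    T(x) over its average across all permutations of x, which
    simplifies to n * softmax(x / tau)[0]."""
    weights = [math.exp(xi / tau) for xi in x]
    return len(x) * weights[0] / sum(weights)

x = (1.3, -0.2, 0.5, 0.1)
e = e_softmax(x, tau=1.0)       # > 1 here: the first coordinate is largest

# Exactness check: the group average of the e-value equals 1.
group_avg = sum(e_softmax(p, 1.0) for p in permutations(x)) / math.factorial(len(x))
```

Here $\tau$ plays the role of the temperature discussed above: small $\tau$ concentrates the softmax on the largest coordinate, large $\tau$ flattens it.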
While our $e$-processes are valid for any batch size, the batches need to contain at least two observations for the resulting test to have any power in case of exchangeability, as a single observation contains no information about the alternative. Luckily, the arrival of data in batches is typical in practice: there may be overhead costs involved in collecting data, so that data is preferably gathered in a batch; there may be computational restrictions that make it undesirable to test after every observation; the data collection and computation of the $e$-values may take place in parallel for computational or privacy reasons, after which the $e$-values are centrally combined into an $e$-process. See also @grunwald2023safe for a batch-based $e$-process approach. In our simulation study, we find that the power of our methods barely depends on the batch size. Moreover, the use of a dynamic sample size seems to come at the cost of a quite modest amount of power. Furthermore, we find that we should set a higher temperature if we are impatient about obtaining a result, and a lower temperature if we are patient. Finally, we verify that the $e$-process for symmetric data indeed controls size under a null hypothesis where the elements of $X = X_1, X_2, \dots$ are strongly dependent.

## Related literature {#sec:lit}

At first glance, our work may seem intimately related to @perez2022statistics. However, they consider invariance of *collections of distributions* (both the null, and the alternative), whereas we consider invariance of *distributions themselves*. Specifically, a collection of distributions $\mathcal{P}$ is said to be invariant under a transformation $g$ if for any $\mathbb{P} \in \mathcal{P}$, its transformation $g\mathbb{P}$ by $g$ is also in $\mathcal{P}$. In contrast, invariance of a distribution $\mathbb{P}$ means that $g\mathbb{P} = \mathbb{P}$.
Intuitively, their work can be interpreted as testing in the presence of an invariant model, whereas we consider testing invariance of the data generating process. As our null hypothesis consists exclusively of invariant distributions it is technically also invariant, so that one may believe their results may still apply under appropriate assumptions on the alternative. However, this invariance is of a very strong type which excludes the transitivity that @perez2022statistics require. In some sense, the strong type of invariance we consider is the complete opposite of transitivity. A closely related work is that of @ramdas2022testing, who consider testing $H_0^\text{prior}$. However, they focus primarily on the case where $X$ is a binary sequence, where they show an approach based on martingales is powerless without adapting the filtration, leading them to develop other types of $e$-processes. They do mention several ideas for $e$-processes that extend from binary to '$d$-ary' data, but their methods do not coincide with ours and mostly focus on Markovian-type alternatives while our methods are better suited to alternatives of a more stationary nature that are expressed on the individual batches as in Theorem [Theorem 3](#thm:loavev){reference-type="ref" reference="thm:loavev"}. In addition, Remark 3.3 in @vovk2021testing suggests that the negative result of @ramdas2022testing extends to non-binary data as well, but that this can be circumvented by 'impoverishing' the filtration. Our approach can indeed be interpreted as using an impoverished filtration, as we only permit optional continuation of data collection after batches of observations. @vovk2021testing proposes a different way to deal with the problem by using conformal $p$-values. His approach only permits optional continuation based on the observed $p$-values themselves, but bans inspection of the underlying data. In contrast, our approach permits a full inspection of all the arrived batches of data. 
The approach by @vovk2021testing seems especially fruitful for changepoint detection. For more recent work in the context of sequential case-control (or 'A/B') testing, see @johari2022always and @larsen2023statistical, who do not consider exchangeability.

## Notation and assumptions

We assume the random variables $X_1, X_2, \dots$ that we observe take on values in a separable complete metric space $\mathcal{X}$, which includes $\mathbb{R}^n$, $n \geq 1$. We equip $\mathcal{X}$ with its Borel $\sigma$-algebra $\mathcal{B}(\mathcal{X})$. We write $\mathcal{X}^m = \prod_{i=1}^m \mathcal{X}$ and $\mathcal{B}(\mathcal{X}^m) = \prod_{i=1}^m \mathcal{B}(\mathcal{X})$ for the product space and its Borel $\sigma$-algebra, $m \in \mathbb{N}_+ \cup \{\infty\}$. Let $\mathcal{P}_{\mathcal{X}}$ denote the collection of probability measures on $\mathcal{X}$, which is itself a measurable space. For a probability measure $\mathbb{P} \in \mathcal{P}_{\mathcal{X}}$, we write $\mathbb{P}^m = \prod_{i=1}^{m} \mathbb{P}$, $m \in \mathbb{N}_+ \cup \{\infty\}$. Moreover, we use $\mathcal{P}_{\mathcal{X}}^{\infty} = \{\mathbb{P}^\infty : \mathbb{P} \in \mathcal{P}_{\mathcal{X}}\}$ to denote the collection of countably infinite product measures on $\mathcal{X}$, and $\overline{\mathcal{P}}_\mathcal{X}^\infty \supseteq \mathcal{P}_{\mathcal{X}}^{\infty}$ for the exchangeable probability measures on $\mathcal{X}^\infty$. To avoid ambiguity, we sometimes write expectations $\mathbb{E}$ with a superscript and/or subscript $\mathbb{E}_X^{\mathbb{P}}$ to make explicit the measure with respect to which we integrate ($\mathbb{P}$), and the random variables over which the integration takes place ($X$). We use similar subscripts for probabilities. Moreover, we use $\mathbb{E}_\mathbb{P}^Q \mathbb{P}$ to denote a $Q$-mixture over $\mathbb{P}$.
For any collection of indices $\mathcal{I} \subset \mathbb{N}_+$, we use $X_{\mathcal{I}}$ to denote the corresponding subset of $X$, and we use $[s$:$t]$ to denote the set of indices from $s$ to $t$.

# Background: $e$-values and $e$-processes {#sec:e-values}

A new perspective on testing that has garnered much recent attention is the use of so-called $e$-values (or $e$-statistics, $e$-variables). See e.g. @ramdas2023gametheoretic for an overview of the quickly developing literature on $e$-values. For an arbitrary collection of distributions $\mathcal{P}$, typically a null hypothesis, an $e$-value is defined as some non-negative real-valued test statistic $e$ with the property $$\begin{aligned} \label{hyp:e} \sup_{\mathbb{P} \in \mathcal{P}}\mathbb{E}^{\mathbb{P}} e \leq 1. \end{aligned}$$ We call such an $e$-value 'valid', and if it satisfies $\sup_{\mathbb{P} \in \mathcal{P}} \mathbb{E}_e^{\mathbb{P}} e = 1$, we call it 'exact'. Based on an $e$-value, a finite sample valid test is easily constructed by rejecting if $e \geq 1/\alpha$. Indeed, Markov's inequality then tells us that $$\begin{aligned} \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{P}_e\left(e \geq 1/\alpha\right) \leq \alpha. \end{aligned}$$ The corresponding valid $p$-value, the smallest value of $\alpha$ for which the test rejects, is then given by $1/e$. Most importantly for our purposes, $e$-values have a sequential counterpart called $e$-processes. An $e$-process for a collection of distributions $\mathcal{P}$ is a non-negative process $(e_t)_{t\geq 1}$ with $$\begin{aligned} \sup_{\mathbb{P} \in \mathcal{P}} \sup_{\tau} \mathbb{E}_{e_{\tau}}^\mathbb{P}[e_\tau] \leq 1, \end{aligned}$$ where $\tau$ ranges over all stopping times [@ramdas2022testing]. The test that compares such an $e$-process to $1/\alpha$ satisfies the desired 'anytime validity' property: $$\begin{aligned} \label{ineq:ville} \sup_{\mathbb{P} \in \mathcal{P}} \mathbb{P}_{e_{t}}\left(\sup_{t \geq 1} e_t \geq 1/\alpha\right) \leq \alpha.
\end{aligned}$$ The corresponding valid $p$-value is $\inf_t 1/e_t$.

# $e$-processes for exchangeability

In this section, we discuss the construction of $e$-processes for $H_0^\text{prior}$ and $H_0^{\text{batch}}$. For the sake of presentation, we delay the introduction of the $e$-values for exchangeability that they are built from to Section [4](#sec:e-value-construction){reference-type="ref" reference="sec:e-value-construction"}. Moreover, we also delay generalizations to group invariance to the same section, as it will require introducing various concepts that are not relevant here. In Remark [Remark 1](#rmk:generalization_groups){reference-type="ref" reference="rmk:generalization_groups"} we discuss where generalizations are possible. To construct $e$-processes for our hypotheses, we will first assume that we have access to $e$-values for finite exchangeability $H_0$. In particular, we assume that for any finite collection or 'batch' of observations $X_B \subseteq X$ with indices in $B$, we can specify an $e$-value $e(X_B)$ for $H_0$: $$\begin{aligned} \sup_{\mathbb{P} \in H_0}\mathbb{E}_{X_B}^{\mathbb{P}}[e(X_{B})] \leq 1. \end{aligned}$$ For a collection of batches $B_1, B_2, \dots$, we then multiply these $e$-values together to construct an $e$-process $$\begin{aligned} e_t = \prod_{i=1}^{t} e(X_{B_i}). \end{aligned}$$ Under $H_0^\text{batch}$, the $e$-values in this product are independent, so that the product is a valid $e$-process. Under $H_0^\text{prior}$, these $e$-values are not independent. However, Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} shows that this $e$-process is nevertheless valid. We present the proof of the result in Appendix [11.1](#sec:technical){reference-type="ref" reference="sec:technical"}. Closely related results can be found in Remark 3.4 in @vovk2021testing for martingales and Theorem 3 in @ramdas2022testing for binary data. **Theorem 1**.
*Suppose that $X = X_1, X_2, \dots$ is an exchangeable sequence of random variables. Let $e^1, e^2, \dots$ be $e$-values that depend on non-empty and non-overlapping subsets of $X$. Then, $\prod_{i=1}^{t} e^i$ is an $e$-process for $H_0$.*

*Sketch of proof.* By the de Finetti-Hewitt-Savage-type theorem, any exchangeable sequence can be represented as a mixture over i.i.d. random variables. Moreover, the validity of an $e$-process is closed under taking mixtures. As a result, any $e$-process that is valid for any i.i.d. sequence is also valid for any exchangeable sequence. ◻

**Remark 1**. *Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} generalizes beyond exchangeability to any generalization of $H_0^{\text{prior}}$ to invariance under an algebraic supergroup of permutations. An example of a supergroup of permutations is the orthogonal group $O(t)$, for which the sequential analogue of exchangeability on $\mathbb{R}^\infty$ becomes sequential sphericity. Moreover, it can be generalized to any space $\mathcal{X}$ for which an appropriate de Finetti-Hewitt-Savage-type theorem is available: see e.g. @alam2023generalizing for generalized versions, and an excellent overview. Moreover, Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} generalizes from $H_0^{\text{batch}}$ to within-batch invariance under arbitrary compact groups.*

## Some corollaries to Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} {#some-corollaries-to-theorem-thme-process}

A direct implication of Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} is Corollary [Corollary 1](#cor:prod_e){reference-type="ref" reference="cor:prod_e"}.
To see that Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} is more general, observe that it, for example, also applies to $e$-values that are based on batches of different sizes or based on different test statistics, which may not be exchangeable.

**Corollary 1**. *Suppose that $e^1, e^2, \dots$ are sequentially exchangeable $e$-values, then $\prod_{i=1}^t e^i$ is an $e$-process for sequential exchangeability.*

As a further corollary to Corollary [Corollary 1](#cor:prod_e){reference-type="ref" reference="cor:prod_e"}, we obtain a type of exchangeable Markov inequality for a product of random variables. The result follows from Corollary [Corollary 1](#cor:prod_e){reference-type="ref" reference="cor:prod_e"} and [\[ineq:ville\]](#ineq:ville){reference-type="eqref" reference="ineq:ville"}. Corollary [Corollary 2](#cor:markov){reference-type="ref" reference="cor:markov"} is visually similar to the 'exchangeable Markov inequality' presented by @ramdas2023randomized, where $\frac{1}{E[Y_1]^t}\prod_{i=1}^t$ is replaced by $\frac{1}{t} \sum_{i=1}^t$. Together, their exchangeable Markov's inequality and Corollary [Corollary 2](#cor:markov){reference-type="ref" reference="cor:markov"} show that both averaging and multiplying are valid options to construct $e$-processes for exchangeability. However, we find that multiplication is typically the more powerful option.

**Corollary 2**. *Let $Y_1, Y_2, \dots$ be a sequence of non-negative integrable random variables with shared expectation $E[Y_1] > 0$. Then, for any $c > 0$, $$\begin{aligned} \mathbb{P}\left(\exists t \geq 1 : \frac{1}{E[Y_1]^t}\prod_{i=1}^t Y_i \geq 1/c \right) \leq c. \end{aligned}$$*

# Generic $e$-values for group invariance {#sec:e-value-construction}

In this section, we first explain how testing exchangeability can be seen as a special case of testing invariance under a compact group.
Then, we show that with any appropriate test statistic we can construct an exact $e$-value for invariance under a compact group. Moreover, we also show the converse: that any exact $e$-value can be written in such a form for some test statistic. We show that the class of such $e$-values forms an essentially complete class of $e$-values, so that we do not miss out on any 'good' $e$-values by restricting our attention to this class.

## Background: Testing group invariance {#sec:background}

Let $\mathcal{G}$ be a compact topological group, acting properly and topologically on some sample space $\mathcal{Y}$, say $\mathcal{Y} = \mathcal{X}^m$ for some $m \in \mathbb{N}_+$. The goal of testing group invariance is to test whether a random variable $Y$ is $\mathcal{G}$ invariant. That is, $$\begin{aligned} H_0^{\mathcal{G}} : Y \overset{d}{=} GY, \text{ for all } G \in \mathcal{G}. \end{aligned}$$ Exchangeability is a special case, where $\mathcal{G}$ is a group of permutations. Moreover, sequential analogues can be defined by constructing a compact group $\mathcal{G}_m$ for each $m$ that acts on $\mathcal{X}^m$. As with the permutation test, we define a test statistic $T$ that is 'large' under the alternative of interest. Then, we can define an exact $p$-value for $H_0^{\mathcal{G}}$, as $$\begin{aligned} \label{eq:gi_p_value} p(Y) = \mathbb{P}_{\overline{G}}\left[T(\overline{G}Y) > T(Y)\right] + \overline{u}\mathbb{P}_{\overline{G}}\left[T(\overline{G}Y) = T(Y)\right], \end{aligned}$$ where $\overline{G}$ is uniformly (Haar) distributed on $\mathcal{G}$, and $\overline{u}$ is uniform on $[0,1]$. Comparing this $p$-value to a significance level of interest $\alpha$ yields a so-called group invariance test.
This test is equivalent to rejecting the null if $$\begin{aligned} \label{ineq:quantile} T(Y) > q_\alpha^{\overline{G}}\left[T(\overline{G}Y)\right], \end{aligned}$$ and with some appropriate probability in case of equality, where $q_\alpha^{\overline{G}}\left[T(\overline{G}Y)\right]$ denotes the $\alpha$ upper-quantile of the distribution of $T(\overline{G}Y)$ where $Y$ is considered fixed. ## A flexible construction of $e$-values {#sec:ess_comp} For any test statistic $T$ for which it is well-defined, we consider the $e$-value $$\begin{aligned} \label{eq:exact_e_form} e_T(Y) = \frac{T(Y)}{\mathbb{E}_{\overline{G}}T(\overline{G}Y)}, \end{aligned}$$ where $\overline{G}$ is uniformly distributed on the group $\mathcal{G}$. The interpretation is that $e_T(Y)$ is large if $T(Y)$ is large compared to its average value under the group, for the observed value of $Y$. Moreover, as we shall show in Section [5](#sec:LR){reference-type="ref" reference="sec:LR"}, if $T \geq 0$ and $T$ is integrable, then $e_T$ can be interpreted as a likelihood ratio for $\mathcal{G}$ invariance against a density proportional to $T$. Theorem [Theorem 2](#thm:essentially_complete){reference-type="ref" reference="thm:essentially_complete"} not only shows that such an $e$-value is exact, but also the converse: any exact $e$-value can be written as in [\[eq:exact_e\_form\]](#eq:exact_e_form){reference-type="eqref" reference="eq:exact_e_form"}. Moreover, as any non-exact but valid $e$-value can be trivially improved by adding a sufficiently small deterministic constant to it, this constitutes an essentially complete class of admissible $e$-values for $H_0^{\mathcal{G}}$. The result is not obvious, as $H_0^{\mathcal{G}}$ is a large composite hypothesis, and the denominator in [\[eq:exact_e\_form\]](#eq:exact_e_form){reference-type="eqref" reference="eq:exact_e_form"} only takes the expectation over the group.
Its proof can be found in Appendix [11.2](#proof:essentially_complete){reference-type="ref" reference="proof:essentially_complete"}. **Theorem 2**. *The class of statistics of the type $e_T$ equals the class of all exact $e$-values for $H_0^{\mathcal{G}}$. Moreover, this constitutes an essentially complete class of admissible $e$-values for $H_0^{\mathcal{G}}$.* By Theorem [Theorem 2](#thm:essentially_complete){reference-type="ref" reference="thm:essentially_complete"}, we can turn any appropriately integrable test statistic $T$ into an exact $e$-value for $H_0^{\mathcal{G}}$, including a test statistic that is itself a non-exact $e$-value. This means we have a similarly large freedom in choosing our test statistic as we are used to with permutation tests. Moreover, it shows that by only considering $e$-values of the type in [\[eq:exact_e\_form\]](#eq:exact_e_form){reference-type="eqref" reference="eq:exact_e_form"}, we are not 'missing' any $e$-values that may be better. Unfortunately, the result gives us no guidance in the selection of a 'good' $e$-value within this class, which will be the topic of the remainder of the paper. ## Obtaining the normalization constant The main difficulty in using $e_T(y)$ is the computation of the normalization constant. As the group $\mathcal{G}$ can be enormous, the normalization constant $\mathbb{E}_{\overline{G}}T(\overline{G}y)$ in the denominator of [\[eq:exact_e\_form\]](#eq:exact_e_form){reference-type="eqref" reference="eq:exact_e_form"} may need to be estimated. To do this, we can borrow ideas from the literature on group invariance testing. The simplest idea is to use a Monte Carlo approach by replacing $\overline{G}$ with a random variable $\overline{G}_M$ that is uniformly distributed on a set of i.i.d. draws $\{\overline{G}^1, \overline{G}^2, \dots, \overline{G}^M\}$ of $\overline{G}$. Alternatively, we can replace $\overline{G}$ with $\overline{S}$ that is uniformly distributed on a compact subgroup $\mathcal{S}$ of $\mathcal{G}$ [@chung1958randomization].
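As a toy illustration of the subgroup idea (a sketch, not the paper's implementation): for the permutation group, the cyclic shifts form a compact subgroup with only $n$ elements, so the normalization can be averaged exactly over the subgroup instead of over all $n!$ permutations. The statistic $T(y) = y_1$ is an arbitrary illustrative choice.

```python
import numpy as np

def e_T_subgroup(y, T):
    """e-value T(y) / E_S[T(Sy)], with the normalization averaged over
    the subgroup of cyclic shifts (n elements) rather than the full
    permutation group (n! elements); a sketch of the subgroup idea."""
    y = np.asarray(y, dtype=float)
    shifts = [np.roll(y, k) for k in range(len(y))]  # the cyclic subgroup
    denom = np.mean([T(s) for s in shifts])
    return T(y) / denom

# With T(y) = y_1, averaging the first coordinate over the cyclic shifts
# gives mean(y), so the subgroup-normalized e-value is y_1 / mean(y).
e = e_T_subgroup([3.0, 1.0, 2.0], T=lambda v: v[0])
print(e)  # 3 / 2 = 1.5
```

Since a $\mathcal{G}$-invariant distribution is in particular $\mathcal{S}$-invariant, this subgroup-normalized statistic remains an exact $e$-value.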
As invariance under $\mathcal{G}$ implies invariance under a subgroup $\mathcal{S}$, this is still guaranteed to deliver a valid test, which is not clear for the Monte Carlo approach. Moreover, @koning2023more note that we can strategically select the subgroup based on the test statistic and alternative, so as to obtain high power. @koning2023power notes that this can even yield tests that are more powerful than if we use the entire group $\mathcal{G}$, and observes the curious phenomenon that tiny subgroups can sometimes dramatically outperform larger subgroups. Note that in the standard group invariance test, the goal is to estimate the $\alpha$-upper quantile of a reference distribution, as in [\[ineq:quantile\]](#ineq:quantile){reference-type="eqref" reference="ineq:quantile"}. In contrast, the normalization constant is the mean of this same distribution, which we expect to be somewhat easier to estimate in practice. Moreover, in Appendix [10](#sec:norm_analytical){reference-type="ref" reference="sec:norm_analytical"} we discuss that we can sometimes very efficiently approximate the normalization constant analytically. # Likelihood Ratios for Group Invariance {#sec:LR} The construction in the previous section gives us a lot of freedom in the choice of $e$-value. Unfortunately, this freedom also comes with the responsibility to select the $e$-value appropriately. For testing a simple null hypothesis against a simple alternative, $e$-processes built from likelihood ratio statistics have been shown to have attractive power properties [@koolen2022log]. Moreover, @ramdas2022admissible show that any admissible procedure for online inference must rely on likelihood ratios. In Theorem [Theorem 3](#thm:loavev){reference-type="ref" reference="thm:loavev"}, we show that the result by @koolen2022log extends to testing group invariance against a simple alternative.
Its proof can be found in Appendix [11.3](#appn:loavev){reference-type="ref" reference="appn:loavev"}, where the main complication is that group invariance is a composite null hypothesis. @koolen2022log speculate that their result (and by extension ours) can be generalized to infinite horizons as well. Moreover, the result extends to a (somewhat narrow) class of alternatives, specified there. **Theorem 3**. *Suppose we have i.i.d. batches $X_{B_1}, X_{B_2}, \dots, X_{B_T}$, $T < \infty$. Let us consider testing $H_0^\text{batch}$ against a simple alternative $H_1$ in which each batch is not $\mathcal{G}$ invariant in the same manner. Now, suppose that we can specify a likelihood ratio for $\mathcal{G}$ invariance against this type of non-invariance. Then, the $e$-process $(e_t)_{t\geq 0}$ that is the running product of these likelihood ratios maximizes $\mathbb{E}^{H_1}\log e_{t}$ over all $e$-processes, for all $t$.* Inspired by this property, we construct likelihood ratio statistics for group invariance, which are well-specified and exact $e$-values even though $H_0^\mathcal{G}$ is composite. It turns out that these likelihood ratio statistics come in three flavors. The first flavor relies on specifying an alternative on $\mathcal{Y}$ and was used by @lehmann1949theory, though not explicitly constructed. The second generalizes the first, by observing that the same approach works without fully specifying an alternative on $\mathcal{Y}$ but only specifying it on invariant subsets (orbits) without providing the mixing distribution over these subsets. The final flavor relies on specifying an alternative on the group $\mathcal{G}$, instead. We dedicate one subsection to each, and conclude the section with a comparison. The ideas in this section are applied in Section [6](#sec:exm_spherical){reference-type="ref" reference="sec:exm_spherical"}. 
Moreover, we include a toy example in Appendix [9](#sec:exm_permutation){reference-type="ref" reference="sec:exm_permutation"} to illustrate the concepts, which may be helpful for readers unfamiliar with invariance theory. **Remark 2**. *While the likelihood ratios mostly concern a simple alternative, this simple alternative itself may be constructed from a composite alternative with a Bayes factor-type approach. Indeed, suppose that we have a composite alternative $\mathcal{P}$ and want to put some additional emphasis on some alternatives, or that we have some prior information or beliefs about what alternatives are likely. Then, we can express this emphasis, information or belief by weighting the different alternatives in $\mathcal{P}$. Specifically, let $\Pi$ be a distribution on $\mathcal{P}$ that captures these weights. Computing the expectation $\mathbb{P}^* = \mathbb{E}_{P}^{\Pi} \mathbb{P}$ over $\mathbb{P} \in \mathcal{P}$, we can then use $\mathbb{P}^*$ as our alternative in the likelihood ratios.* ## Likelihood Ratio on $\mathcal{Y}$ Our approach here is inspired by the proofs of theorems 2 and 2$^\prime$ in @lehmann1949theory. Their results discuss conditions under which group invariance tests are optimal, though they do not explicitly mention algebraic groups. Moreover, they also do not explicitly construct the likelihood ratio statistic itself, which turns out to be quite elegant. In order to explain this likelihood ratio, we first introduce the notion of an orbit and orbit representative. Let $O_y = \{z \in \mathcal{Y}\ |\ z = Gy,\ \exists G \in \mathcal{G}\}$ denote the orbit of $y \in \mathcal{Y}$. We assign a single point $[y]$ on each orbit as the 'orbit representative' of $O_y$. That is, $[y] = Gy$ for some $G \in \mathcal{G}$. We use $[\mathcal{Y}]$ to denote the collection of orbit representatives, and we call the function $[\cdot] : \mathcal{Y} \to [\mathcal{Y}]$ that maps $y$ to its orbit representative $[y]$ an orbit selector.
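For the permutation group, one convenient orbit selector is sorting: the orbit of $y$ consists of all reorderings of $y$, so the sorted vector is a canonical representative. The following minimal sketch (not from the paper; sorting is just one valid choice of $[\cdot]$) makes this concrete.

```python
import numpy as np

def orbit_representative(y):
    """Orbit selector [.] for the permutation group: the orbit of y is
    the set of all reorderings of y, so the sorted vector is a
    canonical choice of representative [y]."""
    return np.sort(np.asarray(y, dtype=float))

# Two points on the same orbit share the same representative.
y = [3.0, 1.0, 2.0]
z = [2.0, 3.0, 1.0]
assert np.array_equal(orbit_representative(y), orbit_representative(z))
print(orbit_representative(y))  # [1. 2. 3.]
```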
Suppose that $\lambda$ is some dominating measure on $\mathcal{Y}$ of our alternative $\mathbb{P}_{\mathcal{Y}}$. Then, @lehmann1949theory consider the test that uses the density of $\mathbb{P}_{\mathcal{Y}}$ with respect to $\lambda$ as a test statistic for a group invariance test, which then rejects if $$\begin{aligned} \label{ineq:LSLR} d\mathbb{P}_{\mathcal{Y}}/d\lambda(y) > q_{\alpha}^{\overline{G}}\left(d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}y)\right), \end{aligned}$$ and with some probability under equality. To see that this is a likelihood ratio test with respect to $H_0$, first observe that $$\begin{aligned} q_{\alpha}^{\overline{G}}\left(d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}y)\right) = q_{\alpha}^{\overline{G}}(d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}[y])), \end{aligned}$$ as $\overline{G}G \overset{d}{=} \overline{G}$, since $\overline{G}$ is $\mathcal{G}$ invariant. As a result, the quantile only depends on $y$ through the orbit representative $[y]$. That is, the quantile is a constant function on the orbit. Hence, it is proportional to a uniform density on the orbit.[^1] As a consequence, the test can be interpreted as a likelihood ratio test between the density $d\mathbb{P}_{\mathcal{Y}}/d\lambda(y)$ and a uniform density on the orbit $O_y$. As the test is exact, @lehmann1949theory then apply the Neyman-Pearson lemma to conclude that the test is uniformly most powerful on each orbit, and hence on all of $\mathcal{Y}$.
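The orbit-invariance of the quantile can be checked numerically. In the sketch below (all choices illustrative: the permutation group on three observations, the statistic $T(y) = \exp(y_1)$ standing in for a density with a mean shift in the first coordinate, and a simple plug-in notion of upper quantile), the quantile computed at two points on the same orbit coincides.

```python
from itertools import permutations
import numpy as np

def upper_quantile_over_group(y, T, alpha=0.2):
    """A simple plug-in alpha upper-quantile of T(Gy), with G ranging
    over the full permutation group (exhaustive enumeration)."""
    vals = np.sort([T(np.array(p)) for p in permutations(y)])
    return vals[int(np.ceil((1.0 - alpha) * len(vals))) - 1]

# Illustrative statistic: up to constants, a Gaussian density with a
# mean shift in the first coordinate is T(y) = exp(y_1).
T = lambda v: np.exp(v[0])

y = np.array([3.0, 1.0, 2.0])
gy = np.array([1.0, 2.0, 3.0])  # a permutation of y: same orbit
q_y = upper_quantile_over_group(y, T)
q_gy = upper_quantile_over_group(gy, T)
assert np.isclose(q_y, q_gy)  # the quantile is constant on the orbit
print(q_y)
```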
In order to explicitly construct the likelihood ratio statistic, which @lehmann1949theory do not do, we can find the appropriate normalization constant by simply taking the expectation over the orbit: $$\begin{aligned} \mathbb{E}_{\overline{G}} \left[d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}y) / q_{\alpha}^{\overline{G}_2}(d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}_2[\overline{G}y]))\right] = \mathbb{E}_{\overline{G}} \left[d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}y)\right] / q_{\alpha}^{\overline{G}_2}(d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}_2[y])), \end{aligned}$$ where $\overline{G}$ and $\overline{G}_2$ are considered independent and uniform on $\mathcal{G}$. Hence, the likelihood ratio is given by $$\begin{aligned} \label{eq:LR} \frac{d\mathbb{P}_{\mathcal{Y}}/d\lambda(y) / q_{\alpha}^{\overline{G}}(d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}[y]))}{\mathbb{E}_{\overline{G}} \left[d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}y) / q_{\alpha}^{\overline{G}_2}(d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}_2[\overline{G}y]))\right]} &= \frac{d\mathbb{P}_{\mathcal{Y}}/d\lambda(y)}{\mathbb{E}_{\overline{G}} \left[d\mathbb{P}_{\mathcal{Y}}/d\lambda(\overline{G}y) \right]}. \end{aligned}$$ This is reminiscent of the $e$-values discussed in Section [4.2](#sec:ess_comp){reference-type="ref" reference="sec:ess_comp"}, and therefore an exact $e$-value by Theorem [Theorem 2](#thm:essentially_complete){reference-type="ref" reference="thm:essentially_complete"}. Indeed, if we choose our test statistic $T$ to be proportional to our density under the alternative, then the $e$-value $e_T$ as in [\[eq:exact_e\_form\]](#eq:exact_e_form){reference-type="eqref" reference="eq:exact_e_form"} is a likelihood ratio statistic of this type. **Remark 3**. *The key step in the reasoning above is that the quantile is $\mathcal{G}$ invariant. This step actually works for any non-negative $\mathcal{G}$ invariant function that is integrable on each orbit.
However, the choice of the quantile is highly computationally convenient, as a likelihood ratio $T$ based on any other $\mathcal{G}$ invariant function would still require us to explicitly compute the quantile $q_\alpha^{\overline{G}}(T(\overline{G}Y))$ as well. For numerical integration, that could amount to $|\mathcal{G}|^2$ computations of the test statistic. In Appendix [10](#sec:norm_analytical){reference-type="ref" reference="sec:norm_analytical"}, we discuss a special case where everything can be done analytically without much effort.* ## Likelihood Ratio on Orbits {#sec:LR_orbits} One thing that @lehmann1949theory did not observe is that their likelihood ratio test only compares the likelihood of $y$ to the likelihood of other values in its orbit $O_y$. So, while they do define a distribution on $\mathcal{Y}$, only the conditional distribution on the orbits is actually used in the test. That is, the mixing distribution over the orbits is unused. Indeed, they implicitly use the distribution on $\mathcal{Y}$ to induce a conditional distribution on orbits, and the resulting likelihood ratio statistic [\[eq:LR\]](#eq:LR){reference-type="eqref" reference="eq:LR"} is a likelihood ratio on $O_y$. As a consequence of this observation, we can generalize their main result, that the test is uniformly most powerful, from a simple alternative on $\mathcal{Y}$ to the composite alternative of all distributions on $\mathcal{Y}$ that share the same conditional distribution on the orbits. Moreover, we can also directly construct likelihood ratio statistics based on distributions on each orbit. Specifically, let us choose a distribution $\mathbb{P}_{z}$ on each orbit $O_{z}$ and select $\lambda_{z}$ as the unique $\mathcal{G}$ invariant distribution on $O_{z}$, $z \in [\mathcal{Y}]$.
Then, we can construct a likelihood ratio statistic for each orbit: $$\begin{aligned} \frac{d\mathbb{P}_{z}/d\lambda_z(y)}{\mathbb{E}_{\overline{G}} d\mathbb{P}_z / d\lambda_z(\overline{G}y)} = d\mathbb{P}_{z}/d\lambda_z(y), \end{aligned}$$ where $z \in [\mathcal{Y}]$, $y \in O_z$, and the equality follows from the fact that the denominator is equal to 1 on each orbit, as it is a density on the orbit. The latter argument did not apply in [\[eq:LR\]](#eq:LR){reference-type="eqref" reference="eq:LR"}, as $d\mathbb{P}_{\mathcal{Y}}/d\lambda$ is only a density on $\mathcal{Y}$, which need not integrate to 1 on each orbit. Moreover, this indeed yields an $e$-value for $H_0^\mathcal{G}$, as shown in the following proposition. **Proposition 1**. *The statistic $d\mathbb{P}_{z}/d\lambda_z(Y)$ is an exact $e$-value under $H_0^\mathcal{G}$.* *Proof.* Notice that $$\begin{aligned} \mathbb{E}_{Y} d\mathbb{P}_{Y}/d\lambda_Y(Y) &= \mathbb{E}_{Y}\mathbb{E}_{\overline{G}} d\mathbb{P}_{\overline{G}Y}/d\lambda_{\overline{G}Y}(\overline{G}Y) \\ &= \mathbb{E}_{Y}\mathbb{E}_{\overline{G}} d\mathbb{P}_{[Y]}/d\lambda_{[Y]}(\overline{G}Y) \\ &= \mathbb{E}_{Y} 1 = 1. \end{aligned}$$ Here, we use the $Y \overset{d}{=} \overline{G}Y$ characterization of invariance, where $\overline{G}$ is uniform on the group $\mathcal{G}$ independently of $Y$, and the fact that the distributions are specified on the orbits. ◻ ## Likelihood Ratio on the group $\mathcal{G}$ As noted in the previous section, the likelihood ratio statistics we considered so far are essentially constructed in an orbit-by-orbit fashion. In this section, we present an approach that works across orbits, by defining a likelihood ratio on the group $\mathcal{G}$ itself, and then mapping from $\mathcal{Y}$ to the group. In order to construct a likelihood ratio statistic for group invariance, we first introduce some tools from Chapter 7 of @kallenberg2017random.
@chiu2023non also provide an excellent survey of the tools that are relevant in the context of testing group invariance, and (as far as we are aware) they are the first to use the inversion kernel $\gamma$ (defined below) in this context. If the group acts freely on $\mathcal{Y}$, then we can uniquely define a map $\gamma : \mathcal{Y} \to \mathcal{G}$ that takes in an element $y$ and returns the element $G$ that carries the orbit representative $[y]$ to $y$. That is, $\gamma$ is defined as $\gamma(y) [y] = y$. A group action is free if on every orbit, each element corresponds to a unique element of the group, so that $Gy = y$ implies $G = I$. For example, if no element in $\mathcal{X}^m$ has duplicate entries, then the group of permutations acts freely on it. If the group action is not free, then there may exist multiple elements in $\mathcal{G}$ that carry $[y]$ to $y$, so that $\gamma(y)$ is not uniquely defined. In that case, $\gamma(y)$ will be viewed as uniformly drawn from the elements in $\mathcal{G}$ that carry $[y]$ to $y$, which is well-defined by Theorem 7.14 in @kallenberg2017random. This gives us $\gamma(y) [y] = y$ almost surely. Appendix [9](#sec:exm_permutation){reference-type="ref" reference="sec:exm_permutation"} contains a concrete illustration of a setting where $\gamma$ is randomized in this manner, and an intuition of why it is possible to construct a uniform draw from such elements. We can now use $\gamma$ to obtain an alternative characterization of $\mathcal{G}$ invariance of a random variable: $$\begin{aligned} \gamma(Y) \overset{d}{=} \overline{G}, \end{aligned}$$ where $\overline{G}$ is uniformly (Haar) distributed on $\mathcal{G}$ (see e.g. @chiu2023non). Let us denote this uniform probability measure by $\mathbb{P}_\mathcal{G}^0$.
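For a concrete instance of $\gamma$, consider again the permutation group acting on vectors with distinct entries, so that the action is free. Taking the sorted vector as the orbit representative $[y]$, the inversion kernel is the permutation that carries $\operatorname{sort}(y)$ to $y$, computable with a double argsort. This is a sketch; representing permutations as index arrays is an implementation choice, not notation from the paper.

```python
import numpy as np

def gamma(y):
    """Inversion kernel for the permutation group acting on vectors with
    distinct entries (a free action): the permutation, encoded as an
    index array, that carries the representative [y] = sort(y) to y."""
    return np.argsort(np.argsort(np.asarray(y)))

y = np.array([3.0, 1.0, 2.0])
rep = np.sort(y)                          # orbit representative [y]
assert np.array_equal(rep[gamma(y)], y)   # gamma(y)[y] = y
print(gamma(y))  # [2 0 1]
```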
We first consider a simple alternative hypothesis $\mathcal{P}_{\mathcal{G}}^1 = \{\mathbb{P}_{\mathcal{G}}^1\}$, where $\mathbb{P}_{\mathcal{G}}^1$ is some non-uniform probability measure on $\mathcal{G}$, that is absolutely continuous with respect to the Haar measure $\mathbb{P}_{\mathcal{G}}^0$. We can then define the likelihood ratio as $$\begin{aligned} \label{dfn:LRG} d\mathbb{P}_{\mathcal{G}}^1 / d\mathbb{P}_{\mathcal{G}}^0(G), \text{ for all } G \in \mathcal{G}. \end{aligned}$$ If we view $\mathcal{G}$ as our sample space, the resulting likelihood ratio test is equivalent to substituting the likelihood ratio as a test statistic into the test described in Section [4.1](#sec:background){reference-type="ref" reference="sec:background"}, which delivers the appropriate threshold since it is exact. By the Neyman-Pearson lemma, such a test is uniformly most powerful for testing invariance against $\mathbb{P}_{\mathcal{G}}^1$. While this likelihood ratio is defined on $\mathcal{G}$, it can easily be transferred to our original sample space $\mathcal{Y}$ through $\gamma$. Indeed, we can define the likelihood ratio statistic $$\begin{aligned} \label{dfn:LRX} d\mathbb{P}_{\mathcal{G}}^1 / d\mathbb{P}_{\mathcal{G}}^0(\gamma(y)). \end{aligned}$$ To understand what alternatives this statistic picks up, first notice that if the distribution $\mathbb{P}_{\mathcal{G}}^z$ of $\gamma(y)$ induced by a simple alternative $\mathbb{P}_{z}$ on $O_z$ is the same for each orbit $z \in [\mathcal{Y}]$, then the resulting group invariance test is uniformly most powerful. Moreover, if we specify an alternative on $\mathcal{Y}$ and the distributions do not coincide, then the test is most powerful against the average of the distributions $\mathbb{P}_{\mathcal{G}}^z$, weighted by the mixture distribution over the orbits. That is, it can be viewed as testing against the 'average' type of non-invariance that is expressed by the alternative. 
This indeed yields an exact $e$-value for $H_0^\mathcal{G}$. **Proposition 2**. *The statistic $d\mathbb{P}_{\mathcal{G}}^1 / d\mathbb{P}_{\mathcal{G}}^0(\gamma(Y))$ is an exact $e$-value for $H_0^\mathcal{G}$.* *Proof.* Using the fact that $\gamma(Y) \overset{d}{=} \overline{G}$ if $Y$ satisfies $H_0^{\mathcal{G}}$, we find $$\begin{aligned} \mathbb{E}_{Y} d\mathbb{P}_{\mathcal{G}}^1 / d\mathbb{P}_{\mathcal{G}}^0(\gamma(Y)) = \mathbb{E}_{\overline{G}} d\mathbb{P}_{\mathcal{G}}^1 / d\mathbb{P}_{\mathcal{G}}^0(\overline{G}) = 1, \end{aligned}$$ as $\overline{G} \sim \mathbb{P}_{\mathcal{G}}^0$. ◻ ## Relationship between the different likelihood ratios To see the connection between the three different types of likelihood ratios, suppose that we have a simple alternative $\mathbb{P}_{\mathcal{Y}}$ on $\mathcal{Y}$. Then, we can construct the likelihood ratio statistic of the first flavor, which was inspired by @lehmann1949theory. This likelihood ratio can be equivalently obtained by disintegrating this distribution into conditional distributions $\mathbb{P}_z$ on each orbit $z \in [\mathcal{Y}]$, and defining likelihood ratios on orbits as in Section [5.2](#sec:LR_orbits){reference-type="ref" reference="sec:LR_orbits"}. Alternatively, we can consider the distribution $\mathbb{P}_{\mathcal{G}}^z$ of $\gamma(Z)$ on $\mathcal{G}$, induced by $Z \sim \mathbb{P}_{z}$ on each orbit. Then, by mixing the resulting distributions $\mathbb{P}_{\mathcal{G}}^z$ by the induced distribution of $\mathbb{P}_{\mathcal{Y}}$ over the orbits, we obtain the alternative we can use for the likelihood ratio statistic based on $\mathcal{G}$. Of course, we may also specify such a distribution directly. If the group action of $\mathcal{G}$ on $\mathcal{Y}$ is transitive, which means that there is just one orbit, then our orbit-based approach coincides with that of @lehmann1949theory.
Moreover, if $\mathcal{Y} = \mathcal{G}$, then the group action is automatically transitive and free, so that all three approaches are equivalent. # LR for group invariance against normality {#sec:exm_spherical} In this section, we apply our likelihood ratios to test for invariance under a group of orthonormal matrices against a normal distribution with a location shift. If we include all orthonormal matrices, this yields clean connections to parametric theory and Student's $t$-test. Moreover, we also consider exchangeability, which reveals an interesting relationship to the softmax function. In addition, we consider sign-symmetry, which turns out to result in an $e$-process that works regardless of the dependence structure of $X = X_1, X_2, \dots$. Finally, we provide some practical guidance for what to do under a composite alternative. We start with an exposition of the invariance-based concepts for the orthogonal group $O(d)$ that consists of all orthonormal matrices. ## Sphericity Suppose that $\mathcal{Y} = \mathbb{R}^d\setminus\{0\}$ and $\mathcal{G} = O(d)$ is the orthogonal group, which can be represented as the collection of all orthonormal matrices. The orbits $O_y = \{z \in \mathcal{Y}\ |\ z = Gy, \exists G \in \mathcal{G}\}$ of $\mathcal{G}$ in $\mathbb{R}^d$ are the concentric $d$-dimensional hyperspheres. Each of these hyperspheres can be uniquely identified with its radius $\mu > 0$. To obtain a $\mathcal{Y}$-valued orbit representative, we multiply $\mu$ by an arbitrary unit $d$-vector $\iota$ to obtain $\mu\iota$. For example, $y$ lies on the orbit $O_y$ that is the $d$-dimensional hypersphere with radius $\|y\|_2$, and has orbit representative $[y] = \|y\|_2\iota$. For simplicity, we now first focus on the subgroup $SO(2)$ of $O(2)$, which exactly describes the (orientation-preserving) rotations of the circle, and has the same orbits as $O(2)$. The reason we focus on $SO(2)$ is that it acts freely on each concentric circle.
As a consequence, every element in the group can be uniquely identified with an element on the unit circle $S^2$. We choose to identify the identity element with $\iota$, and we identify every element of $SO(2)$ with the element on the circle that we obtain if that rotation is applied to $\iota$. We denote the induced group action of $S^2$ on $\mathcal{Y}$ by $\circ$. We can then define the inversion kernel $\gamma$ as $\gamma(y) = y/\|y\|_2$. To see that $\gamma$ indeed conforms to its definition, observe that $$\begin{aligned} \label{exm:eq:gamma} \gamma(y)[y] = [(y/\|y\|_2) \circ \iota]\|y\|_2 = (y/\|y\|_2)\|y\|_2 = y, \end{aligned}$$ where the second equality follows from the fact that the action of $(y/\|y\|_2)$ on $\iota$ rotates $\iota$ to $y/\|y\|_2$. Invariance of a $\mathcal{Y}$-valued random variable $Y$ under $\mathcal{G}$, also known as sphericity, can then be formulated as '$\gamma(Y)$ is uniform on $S^2$'. For $O(2)$ or the general $d > 2$ case, the group action is no longer free on each orbit. As a result, there may be multiple group elements that carry $\iota\|y\|_2$ to a point $y$ on the hypersphere. While this may superficially seem like a potentially serious issue, we view $\gamma(y)$ as uniformly drawn from all the 'rotations' that carry $\iota\|y\|_2$ to $y$. As a result, the only difference is that [\[exm:eq:gamma\]](#exm:eq:gamma){reference-type="eqref" reference="exm:eq:gamma"} will now hold almost surely, which suffices for our purposes. ## Likelihood ratio on $\mathcal{Y}$ {#sec:LR_spherical_Y} This section can be seen as a generalization of the example in the final paragraph of @lehmann1949theory, who only consider spherical invariance. Suppose that $Y \sim \mathcal{N}_d(\mu\iota, I)$ on $\mathbb{R}^d \setminus \{0\}$, $\mu \geq 0$ under the alternative and $\mathcal{G}$ invariance under the null hypothesis. The alternative is spherical if and only if $\mu = 0$. We start by considering $\mathcal{G} = O(d)$.
The $\mathcal{Y}$-based likelihood ratio test is given by $$\begin{aligned} 1/(2\pi)^{d/2}\exp\left\{-\frac{1}{2}\|y - \iota\mu\|_2^2\right\} &> q_{\alpha}^{\overline{G}}\left(1/(2\pi)^{d/2}\exp\left\{-\frac{1}{2}\|\overline{G}y - \iota\mu\|_2^2\right\}\right), \end{aligned}$$ where $\overline{G}$ is uniformly distributed on all orthonormal matrices. This is equivalent to $$\begin{aligned} -y'y + 2\mu\iota'y - \mu^2 &> q_{\alpha}^{\overline{G}}\left(-y'y + 2\mu\iota'\overline{G}y - \mu^2\right) \end{aligned}$$ and $$\begin{aligned} \iota'y &> q_{\alpha}^{\overline{G}}\left(\iota'\overline{G}y\right), \end{aligned}$$ which is independent of $\mu$ and equal to the $t$-test by Theorem 6 in @koning2023more. As already shown by @lehmann1949theory, the $t$-test is uniformly most powerful for testing spherical invariance against $\mathcal{N}_d(\mu\iota, I)$. Moreover, this test can also be written as $$\begin{aligned} \iota'y / \|y\|_2 > q_{\alpha}^{\overline{G}}\left(\iota'\overline{G}\iota\right), \end{aligned}$$ as $q_{\alpha}^{\overline{G}}\left(\iota'\overline{G}y\right) = q_{\alpha}^{\overline{G}}\left(\iota'\overline{G}\iota\|y\|_2\right) = \|y\|_2 q_{\alpha}^{\overline{G}}\left(\iota'\overline{G}\iota \right)$. Then, as the rejection event does not change if we apply a strictly increasing function to both sides, we can even conclude that the $t$-test is equivalent to any spherical group invariance test based on a test statistic that is increasing in $\iota'y/\|y\|_2$. A straightforward derivation shows that the likelihood ratio statistic is $$\begin{aligned} \label{eq:LR_spherical} \exp\left\{\mu y'\iota\right\} / \mathbb{E}_{\overline{G}} \left[\exp\left\{\mu y'\overline{G}\iota\right\}\right]. 
\end{aligned}$$ To obtain the likelihood ratio for other groups $\mathcal{G}$ of orthonormal matrices, we can simply compute the normalization constant in [\[eq:LR_spherical\]](#eq:LR_spherical){reference-type="eqref" reference="eq:LR_spherical"} with $\overline{G}$ uniform on the group of interest. This includes the group of permutation matrices for testing exchangeability against normality (see Section [6.4](#sec:softmax){reference-type="ref" reference="sec:softmax"}), and the group of sign-flipping matrices for testing symmetry against normality (see Section [6.5](#sec:symmetry){reference-type="ref" reference="sec:symmetry"}). The resulting likelihood ratio test is also uniformly most powerful for testing $\mathcal{G}$ invariance against $\mathcal{N}_d(\mu\iota, I)$. ## Likelihood ratio on orbits {#sec:LR_orbits_normal} The conditional distribution of $Y \sim \mathcal{N}_d(\mu\iota, I)$ on each orbit is proportional to $\exp(\mu\iota'y)$, where $y$ is on the orbit with radius $\|y\|_2$. For $\|y\|_2 = 1$, this coincides with the von Mises-Fisher distribution. Notice that this density is uniform on each orbit if and only if $\mu = 0$, so that the likelihood ratio with respect to sphericity is proportional to $\exp(\mu\iota'y)$, and coincides with the one from the previous section: $$\begin{aligned} \exp\left\{\mu y'\iota\right\} / \mathbb{E}_{\overline{G}} \left[\exp\left\{\mu y'\overline{G}\iota\right\}\right]. \end{aligned}$$ Applying our argument from Section [5.2](#sec:LR_orbits){reference-type="ref" reference="sec:LR_orbits"}, this implies that the $t$-test is uniformly most powerful against the composite alternative of all distributions on $\mathcal{Y}$ whose conditional distributions on the orbits of $O(d)$ are proportional to $\exp(\mu\iota'y)$. This generalizes the observation by @lehmann1949theory who only conclude optimality against $\mathcal{N}(\mu\iota, I)$.
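For $d = 2$, the normalization constant of this likelihood ratio is available in closed form: averaging $\exp\{\mu y'\overline{G}\iota\}$ over a uniformly random rotation reduces to $\frac{1}{2\pi}\int_0^{2\pi}\exp\{\mu\|y\|_2\cos\theta\}\,d\theta = I_0(\mu\|y\|_2)$, the modified Bessel function of order zero (a standard integral identity). The sketch below (parameters illustrative, not from the paper) double-checks this numerically.

```python
import numpy as np

mu = 1.5
y = np.array([2.0, 1.0])
z = mu * np.linalg.norm(y)

# Average exp{mu * y' G iota} over the rotation angle on a uniform grid
# (the trapezoidal rule is spectrally accurate for periodic integrands).
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
numeric = np.exp(z * np.cos(theta)).mean()

analytic = np.i0(z)  # modified Bessel function I_0
print(numeric, float(analytic))

# The likelihood-ratio e-value then has the closed form:
iota = np.array([1.0, 0.0])
e = np.exp(mu * y @ iota) / analytic
```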
## Relationship to softmax {#sec:softmax} The likelihood ratio in [\[eq:LR_spherical\]](#eq:LR_spherical){reference-type="eqref" reference="eq:LR_spherical"} is strongly related to the softmax function. Indeed, if we choose $\overline{G}$ to be uniform on permutation matrices (which form a subgroup of the orthonormal matrices) and $\iota = (1, 0, \dots, 0)$, this reduces to $$\begin{aligned} \frac{\exp\left\{\mu y_1\right\}}{\tfrac{1}{d}\sum_{i = 1}^d\exp\left\{\mu y_i\right\}}. \end{aligned}$$ This is exactly the softmax function with 'temperature' $\mu \geq 0$. Hence, the softmax function can be viewed as a likelihood ratio statistic for testing exchangeability (permutation invariance) against $\mathcal{N}((\mu, 0, \dots, 0), I)$. More generally, it is the likelihood ratio statistic for testing exchangeability on the orbit $O_Y = \{PY\ |\ P \in \text{permutations}\}$ against the conditional distribution of $\mathcal{N}((\mu, 0, \dots, 0), I)$ on $O_Y$. ## Testing symmetry {#sec:symmetry} Suppose $\mathcal{Y} = \mathbb{R}$ and $\mathcal{G} = \{-1, 1\}$. Then, invariance of $Y$ under $\mathcal{G}$ is also known as 'symmetry' about 0, defined as $Y \overset{d}{=} -Y$. For testing symmetry against our normal location model with $\iota = 1$, the likelihood ratio is $$\begin{aligned} \exp\{\mu \iota'y\} / \mathbb{E}_{\overline{G}} \exp\{\mu\iota'\overline{G}y\} = 2\exp\{\mu y\} / \left[\exp\{\mu y\} + \exp\{-\mu y\}\right]. \end{aligned}$$ This can be generalized to $\mathcal{Y} = \mathbb{R}^d$ and $\mathcal{G} = \{-1, 1\}^d$ and $\iota = d^{-1/2}(1, \dots, 1)'$. The likelihood ratio then becomes $$\begin{aligned} \exp\{\mu \iota'y\} / \mathbb{E}_{\overline{g}} \exp\{d^{-1/2}\mu\overline{g}'y\} = \prod_{i=1}^d \exp\{d^{-1/2}\mu y_i\} / \mathbb{E}_{\overline{g}_i} \exp\{d^{-1/2}\mu\overline{g}_iy_i\}, \end{aligned}$$ where $\overline{g}$ is a $d$-vector of i.i.d. Bernoulli distributed random variables on $\{-1, 1\}$ with probability $0.5$.
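Since this likelihood ratio factorizes over the coordinates, the running product across observations is straightforward to compute. A minimal sketch (with the $d^{-1/2}$ scaling absorbed into $\mu$, and the inputs illustrative):

```python
import numpy as np

def symmetry_e_process(x, mu):
    """Running product of the factorized symmetry likelihood ratios
    2 exp(mu x_i) / (exp(mu x_i) + exp(-mu x_i)); a sketch with the
    per-coordinate scaling absorbed into mu."""
    x = np.asarray(x, dtype=float)
    factors = 2.0 * np.exp(mu * x) / (np.exp(mu * x) + np.exp(-mu * x))
    return np.cumprod(factors)

# Each factor equals 1 at x_i = 0 and exceeds 1 exactly when x_i > 0,
# so the process grows under a positive location shift.
e = symmetry_e_process([0.0, 1.0, -1.0], mu=1.0)
print(e)
```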
For more work on testing symmetry sequentially, see @de1999general and @ramdas2022admissible. They consider different types of test statistics. **Remark 4**. *Interestingly, for this particular choice of $\iota$, the likelihood ratio nicely factorizes, so that the $e$-process constructed by multiplying these likelihood ratios is valid even if $X_1, X_2, \dots$ are not independent but only symmetric under the null hypothesis. This may be attractive in the presence of heteroskedasticity and dependence under the null hypothesis.*

## Selecting the temperature {#sec:temperature}

In practice, the alternative value of $\mu$ is typically unknown, so it must be chosen as a hyperparameter, which we will denote by $\kappa$ and refer to as the 'temperature', as in Section [6.4](#sec:softmax){reference-type="ref" reference="sec:softmax"}. The connection to the likelihood ratio gives us a practical guide for how to set this temperature. In particular, if we believe that the 'signal' $\mu$ is strong, then we should choose a high temperature $\kappa$. On the other hand, if we believe the signal is weak, then we should set a low temperature $\kappa$. In our simulation study, we see that choosing $\kappa = \mu$ yields good overall results. Moreover, we find that a higher temperature is better if we are impatient (i.e. want as much power as possible early), and a lower temperature is better if we are patient (i.e. want more power in the long run).

## Sphericity likelihood ratio on $\mathcal{G}$

In this section, we restrict ourselves to $d=2$ and $SO(2)$, so that the group action is free and the group is easy to represent. If $Y \sim \mathcal{N}_2(\mu\iota, I)$, then $\gamma(Y) = Y / \|Y\|_2$ follows a so-called projected normal distribution $\mathcal{P}\mathcal{N}_2(\mu\iota, I)$.
Its density with respect to the arc-length measure on $S^2$ is $$\begin{aligned} \frac{\exp\{-\tfrac{1}{2}\mu^2\}}{2\pi}\left(1 + \mu \iota'v\frac{\Phi(\mu \iota'v)}{\phi(\mu \iota'v)}\right), \end{aligned}$$ where $v \in S^2$, $\Phi$ is the normal cdf and $\phi$ the pdf (Presnell et al., 1998; Watson, 1983). For $\mu = 0$, this reduces to $1/2\pi$, so the likelihood ratio with respect to the uniform distribution on $S^2$ is $$\begin{aligned} \exp\{-\tfrac{1}{2}\mu^2\}\left(1 + \mu \iota'v\frac{\Phi(\mu \iota'v)}{\phi(\mu \iota'v)}\right). \end{aligned}$$ As a result, the likelihood ratio on $\mathcal{Y}$ is $$\begin{aligned} dP_{\mathcal{G}}^1 / dP_{\mathcal{G}}^0(\gamma(y)) = \exp\{-\tfrac{1}{2}\mu^2\}\left(1 + \mu \iota'\gamma(y)\frac{\Phi(\mu \iota'\gamma(y))}{\phi(\mu \iota'\gamma(y))}\right) = \exp\{-\tfrac{1}{2}\mu^2\}\left(1 + \mu \iota'y / \|y\|_2\frac{\Phi(\mu \iota'y / \|y\|_2)}{\phi(\mu \iota'y / \|y\|_2)}\right), \end{aligned}$$ which is an increasing function of $\iota'y / \|y\|_2$ if $\mu > 0$. As a consequence, the likelihood ratio test is again equivalent to the $t$-test.

# Simulations

## Spherical $e$-process and the $t$-test

In this simulation study, we compare the performance of Student's $t$-test to our proposed $e$-process for testing spherical invariance against a normal location shift. We sequentially observe draws $X = X_1, X_2, \dots$, where $X_i \sim \mathcal{N}(\mu, 1)$ for $i = 1, 2, \dots$, $\mu \geq 0$. We aim to test the null hypothesis that $X$ is sequentially spherical: the subvector $X_{[1:t]}$ is spherical for all $t$. For simplicity, we assume that we test at fixed intervals, so that the data arrives in equally sized batches $X_{B_1}$, $X_{B_2}$, $\dots$, and the level of our test is $\alpha = 0.05$. We consider our $e$-process based on the likelihood ratio for testing spherical invariance against $\mathcal{N}_{|B_i|}(|B_i|^{1/2}\mu\iota, I)$.
This $e$-process is given by $$\begin{aligned} \prod_{i=1}^t \exp\{\kappa\iota'X_{B_i}\} / \mathbb{E}_{\overline{G}}\exp\{\kappa\iota'\overline{G}X_{B_i}\}, \end{aligned}$$ where $\overline{G}$ is uniform on $O(|B_i|)$, for a temperature $\kappa \geq 0$. If $\mu$ is known under the alternative, the recommended temperature from Section [6.6](#sec:temperature){reference-type="ref" reference="sec:temperature"} is $\kappa^* = |B_i|^{1/2}\mu$. This $e$-process can be easily computed as explained in Appendix [10](#sec:norm_analytical){reference-type="ref" reference="sec:norm_analytical"}. Based on 10 000 simulations, Figure [2](#fig:t-test){reference-type="ref" reference="fig:t-test"} reports the proportion of $e$-processes that have exceeded $1/\alpha$ at any point before time $t$, for temperatures $\kappa = \beta \kappa^*$ under the alternative and $\kappa = \beta$ under the null, where $\beta \in \{.5, 1, 2\}$. The figure also shows the proportion of $t$-tests that were rejected at any point before time $t$, as well as the proportion of $t$-tests that were rejected at time $t$. The former can be viewed as the power of an invalid 'online $t$-test' that rejects as soon as the $p$-value dips below $\alpha$. In the left plot, we see that this invalid 'online $t$-test' quickly loses control of size. The 'static' $t$-test has exact size $\alpha$, as expected. Our $e$-processes also control size, with the higher temperatures having larger size. In the right plot, we see that the power curve of our $e$-process for $\kappa = \kappa^*$ runs roughly parallel to that of the $t$-test, but with somewhat lower power. This loss of power is of course traded against the online validity. For temperatures $\kappa^*/2$ and $2\kappa^*$, the power curves are no longer 'parallel' with those of the $t$-test, but neither is dominated by the choice $\kappa^*$.
In particular, the higher temperature tends to exceed $1/\alpha$ early on, whereas the lower temperature tends to exceed it in the long run. To summarize, a higher temperature setting is advisable if we are impatient, while a lower setting is preferable if we have more patience. We also conducted additional experiments varying the batch sizes from 1 to 100 and altering the significance levels. However, the outcomes largely mirrored the findings discussed above. ![Estimated size (left) and power (right) curves over the number of observations, for the $t$-test, our spherical $e$-process with $\kappa \in |B_i|^{1/2}\mu\times\{.5, 1, 2\}$ (left) and $\kappa \in |B_i|^{1/2}\times\{.5, 1, 2\}$ (right) ('low temp', '$e$-process', 'high temp'), and an 'invalid' version of the $t$-test ('online $t$-test') that rejects as soon as the $p$-value dips under the level $\alpha = .05$. The left plot is for $\mu = 0$ and the right plot for $\mu = .15$. Both are based on 10 000 simulations and a batch size of 25.](figures/t_test_size_b25.pdf "fig:"){#fig:t-test width="8cm"} ![Estimated size (left) and power (right) curves over the number of observations, for the $t$-test, our spherical $e$-process with $\kappa \in |B_i|^{1/2}\mu\times\{.5, 1, 2\}$ (left) and $\kappa \in |B_i|^{1/2}\times\{.5, 1, 2\}$ (right) ('low temp', '$e$-process', 'high temp'), and an 'invalid' version of the $t$-test ('online $t$-test') that rejects as soon as the $p$-value dips under the level $\alpha = .05$. The left plot is for $\mu = 0$ and the right plot for $\mu = .15$. Both are based on 10 000 simulations and a batch size of 25.](figures/t_test_power_b25.pdf "fig:"){#fig:t-test width="8cm"}

## Case-control experiment

In this simulation study, we consider a hypothetical case-control experiment, where units are assigned to each case uniformly at random.
In each interval of time, we receive the outcomes of a number of treated and control units, where the numbers of treated and control units are each Poisson distributed with parameter $\theta > 0$, with a minimum of 1. We will assume that the outcomes of the treated units are $\mathcal{N}(\mu_t, 1)$-distributed and the outcomes of the controls are $\mathcal{N}(\mu_c, 1)$-distributed. As a batch of data, we will consider the combined observations of both the treated and control units that arrived in the previous interval of time. As a result, a batch $X_B$ of $n$ outcomes, consisting of $n_t$ treated and $n_c$ control units, can be represented as $$\begin{aligned} X_B \sim \begin{bmatrix} 1_{n_{t}}\mu_t \\ 1_{n_{c}}\mu_c \end{bmatrix} + \mathcal{N}\left(0, I\right), \end{aligned}$$ where $1_{n_t}$ and $1_{n_c}$ denote a vector of $n_t$ and $n_c$ ones, respectively, where the first $n_t$ elements correspond to the treated units, without loss of generality. For our simulations, we consider the arrival of 40 batches with $\theta = 25$. Without loss of generality, we choose $\mu_c = 0$ and vary $\mu_t \geq 0$. We use our $e$-process based on the likelihood ratio for testing exchangeability against $\mathcal{N}_{n}(\kappa\iota, I)$, with $\iota = n^{-1/2}(1_{n_t}, -1_{n_c})$, for various choices of the temperature $\kappa \geq 0$. Based on the arguments in Section [5.2](#sec:LR_orbits){reference-type="ref" reference="sec:LR_orbits"}, a good temperature would be $\kappa^* = n^{1/2} \left(\mu_t - \mu_c\right) / 2$, in which case the $e$-values that our $e$-process consists of exactly coincide with the likelihood ratio statistics. We estimate the normalization constant $\mathbb{E}_{\overline{G}}\exp\left\{\kappa\iota'\overline{G}X\right\}$, where $\overline{G}$ is uniform on the permutation matrices, by using 100 Monte Carlo draws.
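A minimal sketch of this Monte Carlo estimate and the resulting batch $e$-value (function names and example numbers are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_norm_mc(kappa, iota, x, n_mc=100, rng=rng):
    # Monte Carlo estimate of E_G exp(kappa * iota' G x) for G uniform on
    # permutation matrices: applying a random G amounts to permuting x.
    return float(np.mean([np.exp(kappa * iota @ rng.permutation(x))
                          for _ in range(n_mc)]))

# One hypothetical batch: n_t treated outcomes first, then n_c controls.
n_t, n_c = 3, 4
x = np.concatenate([rng.normal(0.2, 1.0, n_t), rng.normal(0.0, 1.0, n_c)])
iota = np.concatenate([np.ones(n_t), -np.ones(n_c)]) / np.sqrt(n_t + n_c)
kappa = 0.6
e_value = float(np.exp(kappa * iota @ x)) / perm_norm_mc(kappa, iota, x)
```

The full $e$-process is then the running product of such batch $e$-values; for $\kappa = 0$ the estimator returns exactly 1, so the $e$-process is identically 1, which is a convenient sanity check.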
In Figure [4](#fig:power_cc){reference-type="ref" reference="fig:power_cc"}, we report the proportion of tests that have been rejected up until each point in time at level $\alpha = 0.05$ for various temperatures. Considering the plot on the left, all the tests seem to control size. Looking at the plot on the right, we see that the best performing temperature changes with the amount of data that has been observed. Indeed, higher values of $\kappa$ yield a lot of power early, but dwindle later on, whereas lower values of $\kappa$ do better in the long run. Moreover, we see that $\kappa^*$ seems to strike a good balance between both worlds. ![Proportion of tests that rejected so-far at each moment against the number of observed batches at $\alpha = 0.05$, with $\mu_t = 0$ (left) and $\mu_t = 0.2$ (right), for temperatures $\kappa \in 2^{\{-2, -1, 0, 1, 2\}}$ (left) and $\kappa = \kappa^* \times \beta$ with $\beta \in 2^{\{-2, -1, 0, 1, 2\}}$ (right), where $\kappa^* \approx .6$. Both plots are based on 1 000 simulations. The numbers of treatment and control units in each batch are Poisson distributed with parameter $\theta = 25$.](figures/case_control_absolute_H_0.pdf "fig:"){#fig:power_cc width="8cm"} ![Proportion of tests that rejected so-far at each moment against the number of observed batches at $\alpha = 0.05$, with $\mu_t = 0$ (left) and $\mu_t = 0.2$ (right), for temperatures $\kappa \in 2^{\{-2, -1, 0, 1, 2\}}$ (left) and $\kappa = \kappa^* \times \beta$ with $\beta \in 2^{\{-2, -1, 0, 1, 2\}}$ (right), where $\kappa^* \approx .6$. Both plots are based on 1 000 simulations. The numbers of treatment and control units in each batch are Poisson distributed with parameter $\theta = 25$.](figures/case_control_relative_H_1.pdf "fig:"){#fig:power_cc width="8cm"}

## Strongly dependent symmetric data

In this section, we consider testing sign-symmetry as in Section [6.5](#sec:symmetry){reference-type="ref" reference="sec:symmetry"}.
By Remark [Remark 4](#rmk:symmetry_dependent){reference-type="ref" reference="rmk:symmetry_dependent"}, this likelihood ratio results in a valid $e$-process regardless of the dependence structure in the sequence $X = X_1, X_2, \dots$. To illustrate this, we consider a simulation study for data that is strongly dependent and sign-symmetric under the null hypothesis. In particular, we generate $$\begin{aligned} \label{eq:null_dependent} X_i \sim \overline{b}X_{i-1} + \mathcal{N}_1(0, 1), \end{aligned}$$ with $X_1 \sim \mathcal{N}(0, 1)$ and where $\overline{b}$ is uniform on $\{-1, 1\}$. The results are displayed in the left plot of Figure [6](#fig:symmetry){reference-type="ref" reference="fig:symmetry"} for a batch size of $1$, level $\alpha = .05$, and temperatures $\kappa \in \{.5, 1, 2\}$. There, we see that our $e$-process is indeed valid. In the plot on the right, we show that the test is powerful against non-symmetric data, by considering $X_i \sim \mathcal{N}(.15, 1)$ with $\kappa \in \{.075, .15, .3\}$ and a batch size of 1. We also considered an alternative consisting of a mean-shifted version of [\[eq:null_dependent\]](#eq:null_dependent){reference-type="eqref" reference="eq:null_dependent"}. There, we found that the test has power, but we quickly ran into numerical issues, as the components of the likelihood ratio became extremely large, which was not easy to overcome in the denominator. ![Proportion of tests that rejected so-far at each moment against the number of observed batches at $\alpha = 0.05$, for the highly dependent null in [\[eq:null_dependent\]](#eq:null_dependent){reference-type="eqref" reference="eq:null_dependent"} (left) and normal alternative (right), using temperatures $\{.5, 1, 2\}$ (left) and $\{.075, .15, .3\}$ (right).
The plots are based on 1000 simulations and a batch size of 1.](figures/symmetry_null_b1.pdf "fig:"){#fig:symmetry width="8cm"} ![Proportion of tests that rejected so-far at each moment against the number of observed batches at $\alpha = 0.05$, for the highly dependent null in [\[eq:null_dependent\]](#eq:null_dependent){reference-type="eqref" reference="eq:null_dependent"} (left) and normal alternative (right), using temperatures $\{.5, 1, 2\}$ (left) and $\{.075, .15, .3\}$ (right). The plots are based on 1000 simulations and a batch size of 1.](figures/symmetry_alt_b1.pdf "fig:"){#fig:symmetry width="8cm"} # Acknowledgements We would like to thank Sam van Meer, Will Hartog, Yaniv Romano, Jake Soloff and Stan Koobs for useful discussions. Part of this research was performed while the author was visiting the Institute for Mathematical and Statistical Innovation (IMSI), which is supported by the National Science Foundation (Grant No. DMS-1929348). # Example: LR for exchangeability {#sec:exm_permutation} In this section, we discuss a toy example of permutations on a small and finite sample space. While not as statistically interesting as the examples in Section [6](#sec:exm_spherical){reference-type="ref" reference="sec:exm_spherical"}, it is more tangible as the group itself is finite and easy to understand. ## Exchangeability on a finite sample space Suppose our sample space $\mathcal{Y}$ consists of the vectors $[1, 2, 3]$, $[1, 1, 2]$ and all their permutations. As a group $\mathcal{G}$, we consider the permutations on 3 elements, which we will denote by $\{abc, acb, bac, bca, cab, cba\}$. For example, $bac$ represents the permutation that swaps the first two elements. The orbits are then given by all permutations of $[1,2,3]$ and $[1,1,2]$ $$\begin{aligned} O_{[1,1,2]} = \{[1,1,2], [1,2,1], [2,1,1]\}, \end{aligned}$$ and $$\begin{aligned} O_{[1,2,3]} = \{[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]\}. 
\end{aligned}$$ As $\mathcal{Y}$-valued orbit representatives, we pick the unique element in the orbit that is sorted in ascending order: $[1,1,2]$ and $[1,2,3]$. Notice that the group action of $\mathcal{G}$ is free on $O_{[1,2,3]}$ but not free on $O_{[1,1,2]}$. For simplicity, let us restrict ourselves to $O_{[1,2,3]}$ first. On this orbit, the map $\gamma$ is defined as the unique permutation that brings the element $[1,2,3]$ to $z \in O_{[1,2,3]}$. Moreover, on this orbit, the null hypothesis then states that $\gamma(Y)$ is uniform on the permutations, which in this case is equivalent to the hypothesis that $Y$ is uniform on $O_{[1,2,3]}$. Now let us restrict ourselves to $O_{[1,1,2]}$. On this orbit, there are multiple permutations that may bring a given element back to $[1,1,2]$. For example, both $bac$ and the identity permutation $abc$ bring $[1,1,2]$ to itself. More generally, any permutation that brings $[1,1,2]$ to $z \in O_{[1,1,2]}$ can be preceded by $bac$, and the result still brings $[1,1,2]$ to $z \in O_{[1,1,2]}$. More abstractly, let $\mathcal{S}_{[y]} = \{G \in \mathcal{G} : G[y] = [y]\}$ be the stabilizer subgroup of $[y]$ (the subgroup that leaves $[y]$ unchanged). Then, if $G^* \in \mathcal{G}$ carries $[y]$ to $y$, so does any element of $G^*\mathcal{S}_{[y]}$. To construct $\gamma$ on $O_{[1,1,2]}$, let $\overline{S}_{[y]}$ denote a uniformly distributed element of $\mathcal{S}_{[y]}$, here uniform on $\{abc, bac\}$; this is also well-defined in the general case, as $\mathcal{S}_{[y]}$ is a compact subgroup and so admits a Haar probability measure. Moreover, let $G_y$ be an arbitrary permutation that carries $[y]$ to $y$, say $G_{[1,1,2]} = abc$, $G_{[1,2,1]} = acb$ and $G_{[2,1,1]} = cba$. Then, we define $\gamma(y) = G_y\overline{S}_{[y]}$. Concretely, this means that $\gamma([1,1,2]) \sim \text{Unif}(abc, bac)$, $\gamma([1,2,1]) \sim \text{Unif}(acb, bca)$ and $\gamma([2,1,1]) \sim \text{Unif}(cba, cab)$.
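The stabilizer and the cosets above can be enumerated directly; a small sketch, with our own encoding of the permutation labels (applying $bac$ to $y$ returns $(y_b, y_a, y_c)$, matching the convention that $bac$ swaps the first two elements):

```python
labels = ["abc", "acb", "bac", "bca", "cab", "cba"]
# each label records which input position lands in each output slot:
# "bac" -> indices (1, 0, 2), so act("bac", y) = (y[1], y[0], y[2]).
perms = {lab: tuple("abc".index(ch) for ch in lab) for lab in labels}

def act(lab, y):
    return tuple(y[i] for i in perms[lab])

rep = (1, 1, 2)  # the sorted orbit representative [1, 1, 2]
orbit = {act(lab, rep) for lab in labels}

# stabilizer subgroup: the elements fixing the representative
stabilizer = [lab for lab in labels if act(lab, rep) == rep]

# gamma(z) is uniform on the coset of elements carrying rep to z;
# each coset has |stabilizer| elements.
carriers = {z: sorted(lab for lab in labels if act(lab, rep) == z) for z in orbit}
```

The enumerated cosets reproduce the pairs listed above: $\{abc, bac\}$ for $[1,1,2]$, $\{acb, bca\}$ for $[1,2,1]$, and $\{cab, cba\}$ for $[2,1,1]$.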
If $Y$ is indeed uniform on $O_{[1,1,2]}$, then $G_Y$ is uniform on $\{abc, acb, cba\}$ and so $\gamma(Y)$ is uniform on $\mathcal{G}$. The definition of $\gamma$ on the sample space $\mathcal{Y} = O_{[1,2,3]} \cup O_{[1,1,2]}$ is obtained by combining the definitions on the two separate orbits.

## Likelihood ratios

We start with the orbit $O_{[1,2,3]}$. Suppose that our alternative distribution $P_{\mathcal{Y}}^1$ conditional on $O_{[1,2,3]}$ is that $Y$ is uniform on $\{[1,2,3], [1,3,2]\}$ and all other arrangements happen with probability 0. As a density on $O_{[1,2,3]}$, we find $$\begin{aligned} \begin{cases} 0.5, &\text{ if } y \in \{[1,2,3], [1,3,2]\}, \\ 0, &\text{ otherwise}. \end{cases} \end{aligned}$$ As the density under the null is $1/6$ for each arrangement, the likelihood ratio is given by $$\begin{aligned} \begin{cases} 3, &\text{ if } y \in \{[1,2,3], [1,3,2]\}, \\ 0, &\text{ otherwise}. \end{cases} \end{aligned}$$ Since the group action is free on the orbit $O_{[1,2,3]}$, $\gamma$ is a bijection between $O_{[1,2,3]}$ and the group, so the likelihood ratio on $\mathcal{G}$ is $$\begin{aligned} \begin{cases} 3, &\text{ if } G \in \{abc, acb\}, \\ 0, &\text{ otherwise}. \end{cases} \end{aligned}$$ Now let us consider the orbit $O_{[1,1,2]}$. Suppose that our alternative $P_{\mathcal{Y}}^1$ conditional on $O_{[1,1,2]}$ is that $Y$ equals $[1,1,2]$ with probability 1. The likelihood ratio on our orbit then becomes $$\begin{aligned} \begin{cases} 3, &\text{ if } y = [1,1,2], \\ 0, &\text{ otherwise}. \end{cases} \end{aligned}$$ In this case, the group action is not free, as both $abc$ and $bac$ are permutations that carry $[1,1,2]$ to itself. As a consequence, $\gamma([1,1,2]) \sim \text{Unif}(abc, bac)$. This induces the following likelihood ratio on $\mathcal{G}$: $$\begin{aligned} \begin{cases} 3, &\text{ if } G \in \{abc, bac\}, \\ 0, &\text{ otherwise}. \end{cases} \end{aligned}$$ Now let us consider a likelihood ratio on $\mathcal{Y}$.
For this, it is insufficient to have an alternative on both $O_{[1,2,3]}$ and $O_{[1,1,2]}$ separately: we additionally need to specify the probability that $Y$ lands in $O_{[1,2,3]}$ and $O_{[1,1,2]}$ under the alternative. For simplicity, let us assume that the probability of each orbit is 1/2 under both the null and the alternative. The likelihood ratio on $\mathcal{Y}$ then coincides with the conditional likelihood ratios on the orbits: $$\begin{aligned} \begin{cases} 3, &\text{ if } y \in \{[1,2,3], [1,3,2], [1,1,2]\}, \\ 0, &\text{ otherwise}. \end{cases} \end{aligned}$$ The likelihood ratio this induces on $\mathcal{G}$ is $$\begin{aligned} \begin{cases} 3, &\text{ if } G = abc, \\ 3/2, &\text{ if } G \in \{acb, bac\}, \\ 0, &\text{ otherwise}, \end{cases} \end{aligned}$$ which is exactly the weighted average of the likelihood ratios on $\mathcal{G}$ that were induced on the individual orbits, weighted by the probability of each orbit.

# Finding the normalization constant analytically {#sec:norm_analytical}

For the likelihood ratio in Sections [6.2](#sec:LR_spherical_Y){reference-type="ref" reference="sec:LR_spherical_Y"} and [6.3](#sec:LR_orbits_normal){reference-type="ref" reference="sec:LR_orbits_normal"}, we can easily compute the normalization constant under sphericity. The key trick is to use the fact that $\iota'\overline{G}y / \|y\|_2$ follows a Beta$(\frac{d-1}{2}, \frac{d-1}{2})$ distribution rescaled to the interval $[-1, 1]$. The normalization constant then equals the moment generating function of $\widetilde{B}$ evaluated at $\mu\|y\|_2$: $$\begin{aligned} \mathbb{E}_{\overline{G}}\exp(\mu \iota'\overline{G}y) = 1 + \mu\|y\|_2 \mathbb{E}\widetilde{B} + \mu^2\|y\|_2^2 \mathbb{E}\widetilde{B}^2/2! + \dots, \end{aligned}$$ where $\widetilde{B} \sim \text{Beta}\left(\frac{d-1}{2}, \frac{d-1}{2}\right)$ rescaled to $[-1, 1]$. For this rescaled beta distribution, the odd moments are all 0, since it is symmetric about 0.
Moreover, the even moments satisfy the recursion $$\begin{aligned} \mathbb{E}\widetilde{B}^{k+2} = \mathbb{E}\widetilde{B}^{k}\, (k + 1) / (n + k), \end{aligned}$$ with $\mathbb{E}\widetilde{B}^{0} = 1$ and $n$ the dimension of $y$; for example, $\mathbb{E}\widetilde{B}^{2} = 1/n$ and $\mathbb{E}\widetilde{B}^{4} = 3/(n(n+2))$. This means the normalization constant can be easily approximated to high precision, which we exploit in our simulation studies.

# Omitted proofs {#sec:proofs}

## Proof of Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} {#sec:technical}

The proof of Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"} relies on Theorem [Theorem 5](#thm:convex){reference-type="ref" reference="thm:convex"}, which in turn relies on the de Finetti-Hewitt-Savage theorem [@barber2023finetti; @alam2023generalizing]. This theorem states that any exchangeable sequence of random variables can be represented as a mixture of i.i.d. random variables. **Theorem 4** (de Finetti-Hewitt-Savage). *Suppose that ${Q} \in \overline{\mathcal{P}}_{\mathcal{X}}^\infty$. Then, there exists a distribution $\lambda$ on $\mathcal{P}_{\mathcal{X}}$ so that ${Q} = \mathbb{E}_P^\lambda P^{\infty}$.* The following result is the key to linking Theorem [Theorem 4](#thm:fhs){reference-type="ref" reference="thm:fhs"} to Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"}. It states that any $e$-process for a sequence of i.i.d. random variables is also an $e$-process for a sequence of exchangeable random variables. **Theorem 5**.
*If $(e_{t})_{t \geq 1}$ is a $\mathcal{P}_{\mathcal{X}}^\infty$-$e$-process, then it is also a $\overline{\mathcal{P}}_{\mathcal{X}}^\infty$-$e$-process.* *Proof.* Let ${Q} \in \overline{\mathcal{P}}_{\mathcal{X}}^\infty$, and let $\lambda$ be a distribution on $\mathcal{P}_{\mathcal{X}}$ such that $Q = \mathbb{E}_P^\lambda P^{\infty}$, which exists by Theorem [Theorem 4](#thm:fhs){reference-type="ref" reference="thm:fhs"}. Then $$\begin{aligned} \sup_{\tau} \mathbb{E}^{Q} e_{\tau} = \sup_{\tau}\mathbb{E}_P^{\lambda} \mathbb{E}^{P^{\infty}} e_\tau \leq \mathbb{E}_P^{\lambda} \sup_{\tau} \mathbb{E}^{P^{\infty}} e_\tau \leq \mathbb{E}_P^{\lambda} 1 = 1. \end{aligned}$$ Here, the second inequality follows from the fact that $(e_t)_{t\geq1}$ is a $\mathcal{P}_{\mathcal{X}}^\infty$-$e$-process, and the first inequality is a simple property of suprema and expectations. ◻ With Theorem [Theorem 5](#thm:convex){reference-type="ref" reference="thm:convex"} in hand, we are now ready to prove Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"}, which comes down to observing that the $e$-process would be valid if the data were i.i.d., and so is also valid under exchangeability. *Proof of Theorem [Theorem 1](#thm:e-process){reference-type="ref" reference="thm:e-process"}.* Let $Q$ denote the law of $X$, and note that $Q \in \overline{\mathcal{P}}_{\mathcal{X}}^\infty$. If $Q \in \mathcal{P}_{\mathcal{X}}^{\infty} \subseteq \overline{\mathcal{P}}_{\mathcal{X}}^\infty$, then $\prod_{i = 1}^{t} e^i$ is a product of independent $e$-values, which is a valid $e$-process for $Q$. Hence, $\prod_{i = 1}^{t} e^i$ is a $\mathcal{P}_{\mathcal{X}}^{\infty}$-$e$-process, so by Theorem [Theorem 5](#thm:convex){reference-type="ref" reference="thm:convex"} it is also a $\overline{\mathcal{P}}_{\mathcal{X}}^\infty$-$e$-process. ◻ **Remark 5**.
*A special case of Theorem [Theorem 5](#thm:convex){reference-type="ref" reference="thm:convex"} was proven by @ramdas2022testing for the sample space $\mathcal{X} = \{0, 1\}$, while Theorem [Theorem 5](#thm:convex){reference-type="ref" reference="thm:convex"} covers arbitrary separable metric spaces. In addition, another special case of Theorem [Theorem 5](#thm:convex){reference-type="ref" reference="thm:convex"} is mentioned in Remark 3.4 of @vovk2021testing.*

## Proof of Theorem [Theorem 2](#thm:essentially_complete){reference-type="ref" reference="thm:essentially_complete"} {#proof:essentially_complete}

*Proof.* We start by showing that $e_T$ is exact on each orbit $O_y$, $y \in \mathcal{Y}$. We find $$\begin{aligned} \mathbb{E}_{\overline{G}} e_T(\overline{G}y) &= \mathbb{E}_{\overline{G}} \frac{T(\overline{G}y)}{\mathbb{E}_{\overline{G}_2}T(\overline{G}_2\overline{G}y)} \\ &= \mathbb{E}_{\overline{G}} \frac{T(\overline{G}y)}{\mathbb{E}_{\overline{G}_2}T(\overline{G}_2y)} \\ &= \frac{1}{\mathbb{E}_{\overline{G}_2}T(\overline{G}_2y)} \mathbb{E}_{\overline{G}} T(\overline{G}y) \\ &= 1, \end{aligned}$$ where $\overline{G}_2$ is independent from $\overline{G}$ and also uniform on $\mathcal{G}$; the second equality uses that $\overline{G}_2\overline{G} \overset{d}{=} \overline{G}_2$ conditionally on $\overline{G}$. As a result, we immediately find that $e_T$ is an exact $e$-value for $H_0^\mathcal{G}$, as for any $P \in H_0^{\mathcal{G}}$, we have $$\begin{aligned} \mathbb{E}_{Y}^P e_T(Y) = \mathbb{E}_{Y}^P\mathbb{E}_{\overline{G}} e_T(\overline{G}Y) = \mathbb{E}_{Y}^P 1 = 1, \end{aligned}$$ which proves the 'left-to-right' direction of the first claim. For the 'right-to-left' direction, observe that if $T$ is an exact $e$-value for $H_0^{\mathcal{G}}$, then $$\begin{aligned} \mathbb{E}_Y^P T(Y) = \mathbb{E}_Y^P \mathbb{E}_{\overline{G}} T(\overline{G}Y) = 1, \end{aligned}$$ for any distribution $P$ in $H_0^{\mathcal{G}}$.
In particular, we may apply this with $P$ equal to the law of $\overline{G}y$ for any fixed $y \in \mathcal{Y}$, which lies in $H_0^{\mathcal{G}}$ and is supported on the orbit $O_y$. As a consequence, $T$ is an exact $e$-value on every orbit: $$\begin{aligned} \mathbb{E}_{\overline{G}} T(\overline{G}y) = 1. \end{aligned}$$ Hence, $$\begin{aligned} e_T(y) = \frac{T(y)}{\mathbb{E}_{\overline{G}}T(\overline{G}y)} = T(y). \end{aligned}$$ That is, if $T$ is an exact $e$-value for $H_0^\mathcal{G}$, then it must be equal to some test statistic of the form $e_T$. For the final claim, it suffices to observe that if an $e$-value is valid but not exact, then we can improve its power by adding a sufficiently small deterministic constant so that it is still valid. ◻

## Proof of Theorem [Theorem 3](#thm:loavev){reference-type="ref" reference="thm:loavev"} {#appn:loavev}

*Proof.* We start with the orbit-based likelihood ratio. Let $\mathcal{Y}$ denote the shared sample space of $Y_i = X_{B_i}$, $i = 1, 2, \dots$, and condition on some orbit $O_z$, $z \in [\mathcal{Y}]$. Let $\mathbb{P}_z$ denote the alternative distribution on the orbit $O_z$. The likelihood ratio on $O_z$ is given by $$\begin{aligned} \frac{d\mathbb{P}_{z}}{d\lambda_z}(y), \end{aligned}$$ for $y \in O_z$. According to Theorem 8 in @koolen2022log, the $e$-process $$\begin{aligned} e_t^z = \prod_{i=1}^t \frac{d\mathbb{P}_{z}}{d\lambda_z}(Y_i) \end{aligned}$$ maximizes $\mathbb{E}^{\mathbb{P}_{z}}\log e_{t}$ over all $e$-processes, for all $t$. Choosing $\mathcal{Y} = \mathcal{G}$, there is only one orbit, so we immediately obtain the result for the group-based likelihood ratio. Moreover, as this holds for every $z$, it also holds for any mixture over orbits $\mathbb{E}_Z\mathbb{E}^{\mathbb{P}_{Z}}\log e_{t}$. As a consequence, the result also holds for the orbit-based and $\mathcal{Y}$-based likelihood ratios. ◻ **Remark 6**.
*The result easily extends to the composite alternative consisting of all distributions that induce the same conditional distributions on the orbits.* [^1]: See @lehmann1949theory for more details regarding measurability, integrability and corner cases.
{ "id": "2310.01153", "title": "Online Permutation Tests: $e$-values and Likelihood Ratios for Testing\n Group Invariance", "authors": "Nick W. Koning", "categories": "math.ST stat.ME stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | For a quasilinear $p$-form defined over a field $F$ of characteristic ${p>0}$, we prove that its defect over the field ${F(\!\sqrt[p^{n_1}]{a_1}, \dots, \sqrt[p^{n_r}]{a_r})}$ equals its defect over the field ${F(\!\sqrt[p]{a_1}, \dots, \sqrt[p]{a_r})}$, strengthening a result of Hoffmann from 2004. We also compute the full splitting pattern of some families of quasilinear $p$-forms. address: - Fakultät für Mathematik, Technische Universität Dortmund, D-44221 Dortmund, Germany - Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton T6G 2G1, Canada author: - Kristýna Zemková bibliography: - bibliography.bib title: "Isotropy and full splitting pattern of quasilinear $\\boldsymbol{p}$-forms" ---

# Introduction

Let $F$ be a field and $\varphi$ a quadratic form over $F$. Then the splitting pattern $\mathrm{SP}(\varphi)$ of $\varphi$ can be defined as the set of dimensions realized by the anisotropic part of $\varphi$ over all possible field extensions of $F$. In the case of fields of characteristic different from 2, the splitting pattern was first systematically studied by Knebusch in [@Kne76]; Knebusch presented an explicit tower of fields $F_0\subseteq F_1 \subseteq \dots \subseteq F_h$ (depending on $\varphi$), nowadays called the *standard splitting tower*, that has a generic property: $m\in\mathrm{SP}(\varphi)$ if and only if $m$ is the dimension of the anisotropic part of $\varphi$ over one of the fields $F_k$ in the standard splitting tower. We can also consider an analogous standard splitting tower over fields of characteristic 2. However, in characteristic 2, we have to consider several different types of quadratic forms: *nonsingular*, *totally singular*, and *singular* (the latter being the general case). For nonsingular quadratic forms, the standard splitting tower has been defined by Knebusch [@Kne77], and it has been extended to the general case by Laghribi [@Lag02-gensplit].
Unfortunately, the generic behavior of the standard splitting tower is guaranteed only in the case of nonsingular quadratic forms. In fact, Hoffmann and Laghribi [@HL04] proved that if $\varphi$ is a nonsingular quadratic form over a field of characteristic $2$, then the standard splitting tower provides all values from the set $\mathrm{SP}(\varphi)$, but the same does not have to hold for singular quadratic forms. More precisely, for (singular) quadratic forms over fields of characteristic 2, we distinguish between *Witt index* of $\varphi$ -- the number of hyperbolic planes split off by $\varphi$ -- and the *defect* of $\varphi$ -- the number of $\langle 0 \rangle$'s split off by (the quasilinear part of) $\varphi$. Hoffmann and Laghribi proved that the standard splitting tower gives all possible Witt indices ([@HL04 Prop. 4.6]), but not all possible defects ([@HL04 Ex. 8.15]). Therefore, for a quadratic form $\varphi$ defined over a field of characteristic $2$, we distinguish between the *standard splitting pattern* of $\varphi$ (denoted $\mathrm{sSP}(\varphi)$) -- the set of the dimensions of the anisotropic parts of $\varphi$ over the fields in the standard splitting tower -- and the *full splitting pattern* of $\varphi$ (denoted $\mathrm{fSP}(\varphi)$) -- the set of the dimensions of the anisotropic parts of $\varphi$ over all field extensions. To understand the full splitting pattern of a singular quadratic form, one has to first understand the splitting behavior of its quasilinear part; hence, the starting point is to examine the full splitting pattern of totally singular quadratic forms. As such, the study of splitting patterns of quadratic forms is closely related to the study of their isotropy behavior over different field extensions. For some results in characteristic 2, see [@Lag04; @HL06; @Hof22; @KZ22; @LagMuk23]. 
The object of study of this paper, quasilinear $p$-forms, is a generalization of the concept of totally singular quadratic forms over fields of characteristic $2$: Let $p$ be any prime integer, and let $F$ be a field of characteristic $p$. A *quasilinear $p$-form* can be expressed as a "diagonal" homogeneous polynomial of degree $p$ with coefficients in $F$, i.e., as $a_1X_1^p+\dots+a_nX_n^p$ with $a_1,\dots,a_n\in F$ and $X_1,\dots,X_n$ variables; we denote it by $\langle a_1,\dots,a_n \rangle$ (in analogy to totally singular quadratic forms). Many results (and proofs) on totally singular quadratic forms translate directly to quasilinear $p$-forms, but some need special care because of the increased complexity of the forms. (For example, a 1-fold *quasi-Pfister form* is defined as $\langle\!\langle a \rangle\!\rangle=\langle 1,a,\dots,a^{p-1} \rangle$. These forms are easy to work with when $p=2$, i.e., in the case of totally singular quadratic forms, but they become more complicated when $p>2$.) Quasilinear $p$-forms were first studied by Hoffmann [@Hof04] and subsequently by Scully [@Scu13; @Scu16-Hoff; @Scu16-Split]. In particular, [@Scu16-Split] analyzes the standard splitting pattern of quasilinear $p$-forms. In this article, we focus on two topics that are closely related. In the first part of the article, Section [3](#Sec:isotropy){reference-type="ref" reference="Sec:isotropy"}, we study the defect of quasilinear $p$-forms over purely inseparable field extensions. The "complexity" of a purely inseparable field extension $E$ of a field $F$ depends on its *exponent* -- the smallest integer $e$ (if it exists) such that $E^{p^e}\subseteq F$. It turns out that to fully describe the defects over all purely inseparable field extensions, we only need to understand the defects over purely inseparable extensions of exponent one. More precisely, we will prove a stronger version of Hoffmann's theorem [@Hof04 Th. 5.9]: **Theorem 1** (cf. 
Theorem [Theorem 21](#Th:IsotropyInsepExtKZ_pforms){reference-type="ref" reference="Th:IsotropyInsepExtKZ_pforms"}). *Let $r\geq1$. For each $1\leq i \leq r$, let $a_i\in F$, $n_i\geq1$, and $\alpha_i$, $\beta_i$ be such that $\alpha_i^p=\beta_i^{p^{n_i}}=a_i$. Furthermore, set $K=F(\alpha_1,\dots,\alpha_r)$, $L=F(\beta_1,\dots,\beta_r)$ and $\pi=\langle\!\langle a_1,\dots,a_r \rangle\!\rangle$. Let $\varphi$ be an anisotropic quasilinear $p$-form. Then* 1. *$\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\mathfrak{i}_{\mathrm{d}}(\varphi_L)$,* 2. *$\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\frac{1}{p^r}\mathfrak{i}_{\mathrm{d}}(\varphi\otimes\pi)$ if $\pi$ is anisotropic.* In the second part of the article, Section [4](#Sec:fsp){reference-type="ref" reference="Sec:fsp"}, we look directly at the full splitting pattern of quasilinear $p$-forms. There are two obvious bounds on the size of the set $\mathrm{fSP}(\varphi)$: The upper bound is given by the dimension of $\varphi$. On the other hand, the inclusion $\mathrm{sSP}(\varphi)\subseteq\mathrm{fSP}(\varphi)$ gives a lower bound $b:=\left|{\mathrm{sSP}(\varphi)}\right|$ on $\left|{\mathrm{fSP}(\varphi)}\right|$; in Corollary [Corollary 25](#Cor:FSPlowerBound){reference-type="ref" reference="Cor:FSPlowerBound"}, we provide an alternative proof of $b\leq\left|{\mathrm{fSP}(\varphi)}\right|$ by using purely inseparable field extensions instead of the standard splitting tower. In Examples [Example 27](#Ex:fspMin){reference-type="ref" reference="Ex:fspMin"} and [Example 30](#Ex:fspPF){reference-type="ref" reference="Ex:fspPF"}, we compute the full splitting patterns of the so-called *minimal* $p$-forms and of *quasi-Pfister forms*, showing that both of the above-mentioned bounds are optimal. After that, we focus on some quasilinear $p$-forms with nontrivial full splitting pattern, namely on a subfamily of quasi-Pfister neighbors: **Theorem 2** (cf. 
Theorem [Theorem 34](#Th:fullsplitpatSPNmin_pforms){reference-type="ref" reference="Th:fullsplitpatSPNmin_pforms"}). *Let $\pi$ be an $n$-fold quasi-Pfister form over $F$, $\sigma$ be a minimal subform of $\pi$ of dimension at least $2$ and $d\in F^*$ be such that the quasilinear $p$-form $\varphi\cong\pi\:\bot\:d\sigma$ is anisotropic. Then $m\in \mathrm{fSP}(\varphi)$ if and only if $m=p^k+l$ and one of the following holds:* 1. *$0\leq k\leq n$ and $l=0$;* 2. *$0\leq k\leq n$ and $\max\{1, k-n+\dim\sigma\}\leq l\leq \min\{\dim\sigma,p^k\}$.* We conclude the paper with Example [Example 36](#Ex:FullSplitPattSPN_pforms){reference-type="ref" reference="Ex:FullSplitPattSPN_pforms"}, in which we apply the previous theorem to a quasilinear $p$-form of dimension $p^4+4$; we compute its full splitting pattern and explicitly provide all the field extensions that we use to obtain it. This paper is based on the second chapter of the author's PhD thesis [@KZdis]. # Preliminaries {#Sec:Prel_p-forms} All fields in this article are of characteristic $p>0$. ## Quasilinear $p$-forms Quasilinear $p$-forms are a generalization of totally singular quadratic forms (defined over fields of characteristic $2$). For totally singular quadratic forms, see, e.g., [@HL04 Sec. 8]; for a detailed introduction to quasilinear $p$-forms in general, see [@Hof04]. **Definition 3**. Let $F$ be a field and $V$ a finite-dimensional vector space over $F$. A *quasilinear $p$-form* (or simply a *$p$-form*) over $F$ is a map ${\varphi:V\rightarrow F}$ with the following properties: (1) $\varphi(av)=a^p\varphi(v)$ for any $a\in F$ and $v\in V$, [\[Enum:Defpform1\]]{#Enum:Defpform1 label="Enum:Defpform1"} (2) $\varphi(v+w)=\varphi(v)+\varphi(w)$ for any $v,w\in V$. [\[Enum:Defpform2\]]{#Enum:Defpform2 label="Enum:Defpform2"} We define the dimension of $\varphi$ as $\dim\varphi=\dim V$. Let $V$ be an $F$-vector space with a basis $\{e_1,\dots,e_n\}$, and $\varphi$ a $p$-form on $V$. 
Let $a_i=\varphi(e_i)$; then, for a vector $v=\sum_{i=1}^nx_ie_i\in V$, we have $$\varphi(v)=\sum_{i=1}^na_ix_i^p.$$ Thus we can associate $\varphi$ with the "diagonal" homogeneous polynomial $\sum_{i=1}^na_iX_i^p\in F[X_1,\dots,X_n]$. We denote such a $p$-form by $\langle a_1,\dots,a_n \rangle$. Let $\langle a_1,\dots,a_n \rangle$ and $\langle b_1,\dots,b_m \rangle$ be $p$-forms over $F$. Then we define the orthogonal sum and the tensor product of these two $p$-forms in the obvious way: $$\begin{aligned} \langle a_1,\dots,a_n \rangle\:\bot\:\langle b_1,\dots,b_m \rangle&=\langle a_1,\dots,a_n,b_1,\dots,b_m \rangle,\\ \langle a_1,\dots,a_n \rangle\otimes\langle b_1,\dots,b_m \rangle&=a_1\langle b_1,\dots,b_m \rangle\:\bot\:\dots\:\bot\:a_n\langle b_1,\dots,b_m \rangle.\end{aligned}$$ Moreover, for $c\in F^*$, we have $c\langle a_1,\dots,a_n \rangle=\langle ca_1,\dots,ca_n \rangle$, and if $k$ is a positive integer, then we write $k\times\varphi$ for the $p$-form $\varphi\:\bot\:\dots\:\bot\:\varphi$ ($k$ copies). Let $\varphi:V\rightarrow F$ and $\psi:W\rightarrow F$ be two $p$-forms and $f:V\rightarrow W$ a vector space homomorphism satisfying $\varphi(v)=\psi(f(v))$ for any $v\in V$. If $f$ is bijective, then we call $\varphi$ and $\psi$ *isometric*, and write $\varphi\cong\psi$. If $f$ is injective, then $\varphi$ is a *subform* of $\psi$, which we denote by $\varphi\subseteq\psi$; in such a case, there exists a $p$-form $\tau$ over $F$ such that $\psi\cong\varphi\:\bot\:\tau$. If there exists $c\in F^*$ such that $\varphi\cong c\psi$, then we call $\varphi$ and $\psi$ *similar*, which we denote by $\varphi\stackrel{{\scriptsize{\mathrm{sim}}}}{\sim}\psi$. We call a $p$-form $\varphi:V\rightarrow F$ *isotropic* (or *defective*) if $\varphi(v)=0$ for some nonzero vector $v\in V$; otherwise, $\varphi$ is called *anisotropic* (or *nondefective*). 
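Many of the concrete forms appearing later have coefficients that are monomials in a rational function field $F=\mathbb{F}_p(t_1,\dots,t_k)$. As a purely illustrative aid (not part of the paper), such a diagonal form can be encoded by the integer exponent tuples of its coefficients; monomials are $F^p$-linearly independent exactly when their exponent tuples are pairwise distinct modulo $p$, so isotropy can be read off from the exponents. The helper names below (`orth_sum`, `tensor`, `is_anisotropic`) are ours, not the paper's.

```python
# Illustrative sketch (not from the paper): diagonal p-forms over
# F = F_p(t_1, ..., t_k) whose coefficients are monomials t^e, each
# encoded by its exponent tuple e.  Monomials are F^p-linearly
# independent iff their exponent tuples are pairwise distinct mod p.

def orth_sum(phi, psi):
    """Orthogonal sum: concatenate the coefficient lists."""
    return phi + psi

def tensor(phi, psi):
    """Tensor product: multiply every coefficient of phi by every
    coefficient of psi, i.e., add exponent tuples."""
    return [tuple(a + b for a, b in zip(u, v)) for u in phi for v in psi]

def is_anisotropic(phi, p):
    """A monomial diagonal form is anisotropic iff its coefficients are
    F^p-linearly independent, i.e., all exponent classes mod p differ."""
    return len({tuple(e % p for e in v) for v in phi}) == len(phi)

p = 2
phi = [(0, 0), (1, 0)]   # <1, t1> over F_2(t1, t2)
psi = [(0, 0), (0, 1)]   # <1, t2>
print(tensor(phi, psi))  # [(0, 0), (0, 1), (1, 0), (1, 1)], i.e. <<t1, t2>>
print(is_anisotropic(tensor(phi, psi), p))   # True
print(is_anisotropic([(0, 0), (2, 0)], p))   # False: <1, t1^2>, t1^2 lies in F^2
```

For instance, the last call reports that $\langle 1,t_1^2 \rangle$ is isotropic over $\mathbb{F}_2(t_1)$, since $t_1^2\in F^2$.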
The $p$-form $\varphi$ can be written as ${\varphi\cong\sigma\:\bot\:k\times\langle 0 \rangle}$ for some anisotropic $p$-form $\sigma$ over $F$ and a non-negative integer $k$. Then $\sigma$ is unique up to isometry; we call it the *anisotropic part of $\varphi$*, and denote it by $\varphi_{\mathrm{an}}$. The integer $k$ is called the *defect* of $\varphi$ and is denoted by $\mathfrak{i}_{\mathrm{d}}(\varphi)$. In particular, we have ${\varphi\cong\varphi_{\mathrm{an}}\:\bot\:\mathfrak{i}_{\mathrm{d}}(\varphi)\times\langle 0 \rangle}$. For a $p$-form $\varphi:V\rightarrow F$, we denote $$\begin{aligned} &D_F(\varphi)=\{\varphi(v)~|~v\in V\} \quad &\text{and}& \quad &D_F^*(\varphi)=D_F(\varphi)\setminus\{0\},\\ &G_F^*(\varphi)=\{x\in F^*~|~x\varphi\cong\varphi\} \quad &\text{and}& \quad &G_F(\varphi)=G_F^*(\varphi)\cup\{0\}.\end{aligned}$$ Then $D_F(\varphi)$ is the set of all elements of $F$ represented by $\varphi$ (including zero). Note that since $F^p$ is a field, it follows that $D_F(\varphi)$ is an $F^p$-vector space. On the other hand, for any $F^p$-vector space $U$, there exists a unique (up to isometry) anisotropic $p$-form $\sigma$ such that $D_F(\sigma)=U$ (see [@Hof04 Prop. 2.12]). Note that it follows that for any $a,b\in F$ and $x\in F^*$, we have $$\langle a \rangle\cong\langle ax^p \rangle, \quad \langle a,b \rangle\cong\langle a,a+b \rangle.$$ In particular, $\langle -a \rangle\cong\langle a \rangle$ for any $a\in F$, because $-1=(-1)^p$. Moreover, the set $W=\{v\in V~|~\varphi(v)=0\}$ is also an $F^p$-vector space, because for any $v,w\in W$, we have $\varphi(v+w)=\varphi(v)+\varphi(w)=0$; it follows that $W$ is the unique maximal isotropic subspace of $V$. It is easy to see that $\varphi$ is anisotropic if and only if $\dim\varphi=\dim_{F^p}D_F(\varphi)$. More precisely, we have the following lemma: **Lemma 4** ([@Hof04 Prop. 2.6]). *[\[Lem:p-subform\]]{#Lem:p-subform label="Lem:p-subform"} Let $\varphi$ be a $p$-form over $F$.* 1. 
*Let $\{c_1,\dots,c_k\}$ be any $F^p$-basis of the vector space $D_F(\varphi)$. Then we have $\varphi_{\mathrm{an}}\cong\langle c_1,\dots,c_k \rangle$.* 2. *If $a_1,\dots,a_m\in D_F(\varphi)$, then $\langle a_1,\dots,a_m \rangle_{\mathrm{an}}\subseteq\varphi$.* ## Field extensions In the following, if $V$ is a vector space over $F$ and $E/F$ a field extension, then we denote $V_E=E\otimes V$. We start with a lemma that is rather elementary but will be essential for the proof of Theorem [Theorem 21](#Th:IsotropyInsepExtKZ_pforms){reference-type="ref" reference="Th:IsotropyInsepExtKZ_pforms"}. **Lemma 5**. *Let $V$ be a vector space over $F$, and let $E/F$ be a field extension. Let $v_1,\dots,v_n\in V$ be $F$-linearly independent vectors. Then the vectors $1\otimes v_1,\dots,1\otimes v_n$ are linearly independent over $E$. In particular, $\dim_EV_E=\dim_FV$.* *Proof.* Let $\{\alpha_j~|~j\in J\}$, for a suitable index set $J$, be a basis of $E$ as an $F$-vector space. Assume that $$0=\sum_{i=1}^n\gamma_i(1\otimes v_i)$$ for some $\gamma_i\in E$, and for each $i$ write $$\gamma_i=\sum_{j\in J} x_{ij}\alpha_j$$ with $x_{ij}\in F$, such that $x_{ij}=0$ for almost all $j\in J$. Then we have $$0=\sum_{i=1}^n\gamma_i(1\otimes v_i)=\sum_{i=1}^n\sum_{j\in J} x_{ij}\alpha_j\otimes v_i=\sum_{j\in J} \alpha_j\otimes\Bigl(\underbrace{\sum_{i=1}^nx_{ij}v_i}_{\in V}\Bigr).$$ Since $\{\alpha_j~|~j\in J\}$ is linearly independent over $F$, we get $\sum_{i=1}^nx_{ij}v_i=0$ for each $j\in J$. But now we are in $V$, where $v_1,\dots,v_n$ are linearly independent; therefore, $x_{ij}=0$ for each $i$ and $j$. It follows that $\gamma_i=0$ for each $i$, and hence the vectors $1\otimes v_1,\dots,1\otimes v_n$ are linearly independent over $E$. Clearly, if $\mathop{\mathrm{span}}_F\{v_1,\dots,v_n\}=V$, it follows that $\mathop{\mathrm{span}}_E\{1\otimes v_1,\dots,1\otimes v_n\}=V_E$. Together with the first part of the lemma, we get $\dim_FV=\dim_EV_E$. 
◻ Let $\varphi:V\rightarrow F$ be a $p$-form and $E/F$ a field extension. Then we denote by $\varphi_E$ the $p$-form on the $E$-vector space $V_E$ defined by $\varphi_E(e\otimes v)=e^p\varphi(v)$ for any $e\in E$ and $v\in V$. By the previous lemma, we have $\dim\varphi=\dim\varphi_E$. In the following, we write $D_E(\varphi)$ for short instead of $D_E(\varphi_E)$. **Definition 6**. For any element $a\in F$, denote $\langle\!\langle a \rangle\!\rangle=\langle 1,a,\dots,a^{p-1} \rangle$. Let $a_1,\dots,a_n\in F$; then we set $\langle\!\langle a_1,\dots,a_n \rangle\!\rangle=\langle\!\langle a_1 \rangle\!\rangle\otimes\cdots\otimes\langle\!\langle a_n \rangle\!\rangle$ and call it an *$n$-fold quasi-Pfister form*. Moreover, for convenience, we call $\langle 1 \rangle$ the *$0$-fold quasi-Pfister form*. A $p$-form $\varphi$ over $F$ is called a *quasi-Pfister neighbor* if there exists a quasi-Pfister form $\pi$ over $F$ and $c\in F^*$ such that $c\varphi\subseteq\pi$ and $\dim\varphi>\frac{1}{p}\dim\pi$. Let $\pi$ be a quasi-Pfister form over $F$; then $G_F(\pi)=D_F(\pi)$ ([@Hof04 Prop. 4.6]). If $E/F$ is a field extension, then $(\pi_E)_{\mathrm{an}}$ is a quasi-Pfister form ([@Scu13 Lemma 2.6]). Moreover, if $\varphi$ is a quasi-Pfister neighbor of $\pi$, then it holds that $\varphi_E$ is isotropic if and only if $\pi_E$ is isotropic ([@Hof04 Prop. 4.14]). ## Field theory {#Subsec:FieldTheory} Since $F^p$ is a field, we can compare extensions of $F$ and of $F^p$. We will use the following lemma repeatedly. **Lemma 7** ([@Hof04 Lemma 7.5]). *Let $\mathop{\mathrm{char}}{F}=p$.* 1. *Let $E/F$ be a field extension and $\alpha\in E$. Then $\alpha$ is algebraic over $F$ if and only if $\alpha^p$ is algebraic over $F^p$. Moreover, it holds that $$[F(\alpha):F]=[F^p(\alpha^p):F^p].$$* 2. 
*If $a_1,\dots,a_r\in F$, then $$[F(\sqrt[p]{a_1},\dots,\sqrt[p]{a_r}):F]=[F^p(a_1,\dots,a_r):F^p].$$* We say that a finite set $\{a_1,\dots,a_n\}\subseteq F$ is *$p$-independent over $F$* if $[F^p(a_1,\dots,a_n):F^p]=p^n$; otherwise, we call the set (or the elements of the set) *$p$-dependent over $F$*. We say that a set $S\subseteq F$ is *$p$-independent over $F$* if any finite subset of $S$ is $p$-independent over $F$. **Lemma 8** ([@Pick50 pg. 27]). *Let $a_1,\dots,a_n\in F$. The following are equivalent:* 1. *The set $\{a_1,\dots,a_n\}$ is $p$-independent over $F$,* 2. *for any $1\leq i\leq n$, we have $a_i\notin F^p(a_1,\dots,a_{i-1}, a_{i+1},\dots,a_n)$,* 3. *the system $\bigl(a_1^{i_1}\cdots a_n^{i_n}~\bigl|~(i_1,\dots,i_n)\in\{0,\dots,p-1\}^n\bigr)$ is $F^p$-linearly independent.* **Example 9**. (i) It is clear directly from the definition that $\{a\}\subseteq F$ is $p$-independent if and only if $a\notin F^p$. \(ii\) Let $a,b\in F\setminus F^p$. Then, by Lemma [Lemma 8](#Lem:p-independence){reference-type="ref" reference="Lem:p-independence"}, $a,b$ are $p$-dependent if and only if $a\in F^p(b)$ if and only if $b\in F^p(a)$. **Corollary 10**. *Let $a_1,\dots,a_n\in F^*$. 
The set $\{a_1,\dots,a_n\}$ is $p$-independent over $F$ if and only if the quasi-Pfister form $\langle\!\langle a_1,\dots,a_n \rangle\!\rangle$ is anisotropic over $F$.* *Proof.* First of all, note that $$\langle\!\langle a_1,\dots,a_n \rangle\!\rangle\cong \mathop{\large{ \mathlarger{\:\bot\:}}}_{(i_1,\dots,i_n)\in\{0,\dots,p-1\}^n}\langle a_1^{i_1}\cdots a_n^{i_n} \rangle.$$ Now $\{a_1,\dots,a_n\}$ is $p$-independent over $F$ if and only if (by Lemma [Lemma 8](#Lem:p-independence){reference-type="ref" reference="Lem:p-independence"}) the system $(a_1^{i_1}\cdots a_n^{i_n}~|~(i_1,\dots,i_n)\in\{0,\dots,p-1\}^n)$ is $F^p$-linearly independent if and only if $\dim_{F^p}D_F(\langle\!\langle a_1,\dots,a_n \rangle\!\rangle)=p^n$ if and only if $\langle\!\langle a_1,\dots,a_n \rangle\!\rangle$ is anisotropic over $F$. ◻ Let $E$ be a field with $F^p\subseteq E\subseteq F$. Note that the extension $E/F^p$ is purely inseparable of exponent one (the *exponent* of a purely inseparable field extension $L/K$ is the smallest integer $n$, if it exists, such that $L^{p^n}\subseteq K$). We call a set $\mathcal{B}\subseteq E$ a *$p$-basis of $E$ over $F$* if $\mathcal{B}$ is $p$-independent over $F$ and $F^p(\mathcal{B})=E$. **Lemma 11** (cf. [@CSAandGC Cor. A.8.9]). *Let $E$ be a field with $F^p\subseteq E\subseteq F$. Then the following hold:* (i) *There exists a $p$-basis of $E$ over $F$.* (ii) *If $\{a_1,\dots,a_n\}\subseteq E$ is $p$-independent over $F$, then there exists a set $\mathcal{A}\subseteq E$ which is $p$-independent over $F$ and such that $\{a_1,\dots,a_n\}\cup\mathcal{A}$ is a $p$-basis of $E$ over $F$.* (iii) *Let $\mathcal{A}\subseteq E$ be such that $F^p(\mathcal{A})=E$. 
Then there exists $\mathcal{B}\subseteq\mathcal{A}$ which is a $p$-basis of $E$ over $F$.* *Proof.* The proof is an easy exercise in applying Zorn's lemma: We seek a maximal element of $\mathcal{S}$, where: $$\begin{aligned} \mathrm{(i)} \quad &\mathcal{S}=\left\{A~|~A\subseteq E,\ A\ p\text{-independent over } F\right\},\\ \mathrm{(ii)} \quad&\mathcal{S}=\left\{A~|~\{a_1,\dots,a_n\}\subseteq A\subseteq E,\ A\ p\text{-independent over } F\right\},\\ \mathrm{(iii)} \quad&\mathcal{S}=\left\{A~|~A\subseteq \mathcal{A}, \ A\ p\text{-independent over } F\right\}. \qedhere\end{aligned}$$ ◻ ## Norm fields and norm forms Let $\varphi$ be a $p$-form over $F$. We define the *norm field* of $\varphi$ over $F$ as the field $$N_F(\varphi)=F^p\left(\frac{a}{b}~\Bigl|~a,b\in D_F^*(\varphi)\right).$$ Note that $N_F(\varphi)$ is a finite field extension of $F^p$; we define the *norm degree* of $\varphi$ over $F$ as $\mathop{\mathrm{ndeg}}_F{\varphi}=[N_F(\varphi):F^p]$. It is obvious from the definition that $N_F(\varphi)=N_F(\varphi_\mathrm{an})$ and also that $N_F(\varphi)=N_F(c\varphi)$ for any $c\in F^*$. Moreover, if $\psi$ is another $p$-form over $F$ such that $\psi\cong\varphi$, then $N_F(\psi)=N_F(\varphi)$. Finally, if $\tau_\mathrm{an}\subseteq c\varphi_\mathrm{an}$ for some $p$-form $\tau$ over $F$ and $c\in F^*$, then $N_F(\tau)\subseteq N_F(\varphi)$. **Lemma 12** ([@Hof04 Prop. 4.8]). *Let $\varphi$ be a nonzero $p$-form with $\mathop{\mathrm{ndeg}}_F\varphi=p^n$. Then $n+1\leq\dim\varphi_\mathrm{an}\leq p^n$.* Note that the norm degree is always a $p$-power. Let us assume that $N_F(\varphi)=F^p(b_1,\dots,b_n)$ for some $b_i\in F$; then $\{b_1,\dots,b_n\}$ is $p$-independent over $F$ if and only if $\mathop{\mathrm{ndeg}}_F{\varphi}=p^n$. By Lemma [Lemma 11](#Lemma:p-bases){reference-type="ref" reference="Lemma:p-bases"}, such a $p$-basis of $N_F(\varphi)$ over $F$ always exists. **Lemma 13** ([@Hof04 Lemma 4.2 and Cor. 4.3]). 
*Let $\varphi$ be a $p$-form over $F$.* 1. *If $\varphi\cong\langle a_0,\dots,a_n \rangle$ for some $n\geq1$ and $a_0,\dots,a_n\in F$ with $a_0\neq0$, then $N_F(\varphi)=F^p\bigl(\frac{a_1}{a_0},\dots,\frac{a_n}{a_0}\bigr)$.* 2. *Suppose $N_F(\varphi)=F^p(b_1,\dots,b_m)$ for some $b_1,\dots,b_m\in F^*$ and let $E/F$ be a field extension. Then $N_E(\varphi)=E^p(b_1,\dots,b_m)$.* Recall that, by Lemma [Lemma 11](#Lemma:p-bases){reference-type="ref" reference="Lemma:p-bases"}, any $p$-generating set contains a $p$-basis. Therefore, part (i) of the previous lemma implies the following: **Corollary 14**. *Let $\varphi\cong\langle a_0,\dots,a_n \rangle$ for $n\geq1$ and $a_0,\dots,a_n\in F$ with $a_0\neq0$. Moreover, suppose that $\mathop{\mathrm{ndeg}}_F\varphi=p^k$. Then there exists a subset $\{i_1,\dots,i_k\}\subseteq\{1,\dots,n\}$ such that $\left\{\frac{a_{i_1}}{a_0},\dots,\frac{a_{i_k}}{a_0}\right\}$ is a $p$-basis of $N_F(\varphi)$ over $F$.* *In particular, if $1\in D_F^*(\varphi)$, then there exist $b_1,\dots,b_n\in F$ such that $\varphi\cong\langle 1,b_1,\dots,b_n \rangle$ and $N_F(\varphi)=F^p(b_1,\dots,b_k)$.* Let $\varphi$ be a $p$-form over $F$ with $\mathop{\mathrm{ndeg}}_F{\varphi}=p^n$ and ${N_F(\varphi)=F^p(a_1,\dots,a_n)}$ (so, in particular, $\{a_1,\dots,a_n\}$ is $p$-independent over $F$). Then we define the *norm form* of $\varphi$ over $F$, denoted by $\hat{\nu}_F(\varphi)$, as the quasi-Pfister form $\langle\!\langle a_1,\dots,a_n \rangle\!\rangle$. It follows that $\hat{\nu}_F(\varphi)$ is the smallest quasi-Pfister form that contains a scalar multiple of $\varphi_\mathrm{an}$ as a subform. In particular, if $\varphi$ is a quasi-Pfister form itself, then we have $\hat{\nu}_F(\varphi)\cong\varphi_\mathrm{an}$. We state a condition under which a tensor product of two anisotropic $p$-forms remains anisotropic. **Lemma 15**. 
*Let $\varphi$, $\psi$ be two anisotropic $p$-forms over $F$, and assume that ${\mathop{\mathrm{ndeg}}_F(\varphi\otimes\psi)=\mathop{\mathrm{ndeg}}_F\varphi\cdot\mathop{\mathrm{ndeg}}_F\psi}$. Then $\varphi\otimes\psi$ is anisotropic.* *Proof.* We have $\varphi\otimes\psi\subseteq\hat{\nu}_F(\varphi)\otimes\hat{\nu}_F(\psi)$, where $\hat{\nu}_F(\varphi)\otimes\hat{\nu}_F(\psi)$ is a quasi-Pfister form. As $\hat{\nu}_F(\varphi\otimes\psi)$ is the smallest quasi-Pfister form containing $\varphi\otimes\psi$, we have $\hat{\nu}_F(\varphi\otimes\psi)\subseteq\hat{\nu}_F(\varphi)\otimes\hat{\nu}_F(\psi)$. By the assumption, we have $\dim\hat{\nu}_F(\varphi\otimes\psi)=\dim\hat{\nu}_F(\varphi)\cdot\dim\hat{\nu}_F(\psi)$; thus, $\hat{\nu}_F(\varphi\otimes\psi)\cong\hat{\nu}_F(\varphi)\otimes\hat{\nu}_F(\psi)$, and this form is anisotropic. Therefore, its subform $\varphi\otimes\psi$ is anisotropic as well. ◻ **Remark 16**. The converse of the previous lemma does not hold in general: Let $\{a,b,c\}\subseteq F^*$ be a $p$-independent set over $F$, and consider the $p$-forms $\varphi\cong\langle 1,a,b,c \rangle$ and $\psi\cong\langle 1,abc \rangle$. Then $$\varphi\otimes\psi\cong\langle 1,a,b,c,abc, a^2bc, ab^2c, abc^2 \rangle\subseteq\langle\!\langle a,b,c \rangle\!\rangle,$$ and hence $\varphi\otimes\psi$ is anisotropic. But $$\mathop{\mathrm{ndeg}}_F(\varphi\otimes\psi)=p^3<p^3\cdot p=\mathop{\mathrm{ndeg}}_F\varphi\cdot\mathop{\mathrm{ndeg}}_F\psi.$$ ## Minimal ${p}$-forms {#Subsec:Minimalpforms} In this subsection, we introduce a new class of $p$-forms, the so-called *minimal* $p$-forms. These are the $p$-forms of minimal dimension with respect to their norm degree (cf. Lemma [Lemma 12](#Lemma:PrereqForNormForm){reference-type="ref" reference="Lemma:PrereqForNormForm"}): **Definition 17**. Let $\varphi$ be an anisotropic $p$-form over $F$. We call $\varphi$ *minimal* over $F$ if $\mathop{\mathrm{ndeg}}_F\varphi=p^{\dim\varphi-1}$. 
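Both Remark 16 and Definition 17 can be tested mechanically for forms whose coefficients are monomials over $F=\mathbb{F}_p(t_1,\dots,t_k)$: encoding each coefficient by its integer exponent tuple, Lemma 13(i) together with Lemma 8 reduces the norm degree to the $\mathbb{F}_p$-rank of the exponent vectors modulo $p$. The following sketch is purely illustrative and not part of the paper; `rank_mod_p` and `ndeg_exp` are our hypothetical helper names.

```python
# Illustrative sketch (not from the paper): a p-form with monomial
# coefficients over F_p(t_1, ..., t_k) is encoded by the exponent
# tuples of its coefficients.  By Lemma 13(i) and Lemma 8, the norm
# degree of <1, a_1, ..., a_n> is p^r, where r is the F_p-rank of the
# exponent vectors of a_1, ..., a_n reduced mod p.

def rank_mod_p(vectors, p):
    """Gaussian elimination over F_p (uses pow(x, -1, p), Python >= 3.8)."""
    rows = [[x % p for x in v] for v in vectors]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(x - f * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def ndeg_exp(phi, p):
    """Exponent r with ndeg = p^r; phi[0] must be the zero tuple (a_0 = 1)."""
    return rank_mod_p([list(v) for v in phi[1:]], p)

p = 3
phi = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # <1, a, b, c>
psi = [(0, 0, 0), (1, 1, 1)]                          # <1, abc>
prod = [tuple(x + y for x, y in zip(u, v)) for u in phi for v in psi]

# Remark 16: the product is anisotropic, yet ndeg is not multiplicative.
aniso = len({tuple(e % p for e in v) for v in prod}) == len(prod)
print(aniso, ndeg_exp(phi, p), ndeg_exp(psi, p), ndeg_exp(prod, p))  # True 3 1 3
# phi is also minimal in the sense of Definition 17: ndeg = p^(dim - 1)
print(ndeg_exp(phi, p) == len(phi) - 1)                              # True
```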
First, we state some rather obvious properties of minimal $p$-forms. **Lemma 18**. *Let $\varphi$ and $\psi$ be $p$-forms over $F$ of dimension at least two.* 1. *The $p$-form $\langle 1,a_1,\dots,a_n \rangle$ is minimal over $F$ if and only if the set $\{a_1,\dots,a_n\}$ is $p$-independent over $F$.* 2. *Minimality is not invariant under field extensions.* 3. *If $\varphi$ is minimal over $F$ and $\psi\stackrel{{\scriptsize{\mathrm{sim}}}}{\sim}\varphi$, then $\psi$ is minimal over $F$.* 4. *If $\varphi$ is minimal over $F$ and $\psi\subseteq c\varphi$ for some $c\in F^*$, then $\psi$ is minimal over $F$.* 5. *If $\mathop{\mathrm{ndeg}}_F\varphi=p^k$ with $k\geq1$, then $\varphi$ contains a minimal subform of dimension $k+1$.* *Proof.* (i) $\varphi=\langle 1,a_1,\dots,a_n \rangle$ is minimal over $F$ if and only if $\mathop{\mathrm{ndeg}}_F(\varphi)=p^{n}$ if and only if $[F^p(a_1,\dots,a_n):F^p]=p^n$ if and only if $\{a_1,\dots,a_n\}$ is $p$-independent over $F$ (by Lemma [Lemma 13](#Lemma:NormFieldBasics){reference-type="ref" reference="Lemma:NormFieldBasics"}). \(ii\) Let $\{a,b,c\}\subseteq F$ be a $p$-independent set over $F$. Let $E=F(\sqrt[p]{c})$ and consider $\varphi=\langle 1,a,b,abc \rangle$. Then $N_F(\varphi)=F^p(a,b,abc)=F^p(a,b,c)$, and hence $\varphi$ is minimal over $F$. Nevertheless, $\varphi_E\cong\langle 1,a,b,ab \rangle$ is anisotropic but not minimal over $E$, because $\mathop{\mathrm{ndeg}}_E\varphi=p^2$. \(iii\) This is a consequence of the fact that similar $p$-forms have the same dimension and the same norm field. \(iv\) Invoking (iii), we can assume $\psi\subseteq\varphi$ and $1\in D_F(\psi)$. So, there exist $a_1,\dots,a_n\in F^*$ and some $m\leq n$ such that $\psi\cong\langle 1,a_1,\dots,a_m \rangle$ and ${\varphi\cong\langle 1,a_1,\dots,a_n \rangle}$; see Lemma [\[Lem:p-subform\]](#Lem:p-subform){reference-type="ref" reference="Lem:p-subform"}. 
By (i), $\{a_1,\dots,a_n\}$ is $p$-independent over $F$; thus, $\{a_1,\dots,a_m\}$ has to be $p$-independent over $F$, and the claim follows by applying (i) again. \(v\) By Lemma [Lemma 12](#Lemma:PrereqForNormForm){reference-type="ref" reference="Lemma:PrereqForNormForm"}, it follows from the assumption on $k$ that $\dim\varphi_\mathrm{an}\geq2$. There exists $c\in F^*$ such that $1\in D_F^*(c\varphi_\mathrm{an})$; then, by Corollary [Corollary 14](#Cor:BasisSubform){reference-type="ref" reference="Cor:BasisSubform"}, $c\varphi_\mathrm{an}\cong\langle 1,b_1,\dots,b_n \rangle$ for some $b_1,\dots,b_n\in F^*$ with $n\geq k$ and $\{b_1,\dots,b_k\}$ $p$-independent over $F$. It follows that $c^{-1}\langle 1,b_1,\dots,b_k \rangle$ is a subform of $\varphi$ which is minimal over $F$ by parts (i) and (iii). ◻ # Isotropy over purely inseparable field extensions {#Sec:isotropy} Anisotropic $p$-forms remain anisotropic over purely transcendental and separable field extensions by [@Hof04 Prop. 5.3]. But the situation gets more interesting when we consider purely inseparable field extensions. The easiest one is a simple purely inseparable extension of exponent one; such an extension is of the form $F(\sqrt[p]{a})/F$ for some $a\in F\setminus F^p$. In the following lemma, we take a closer look at the behavior of $p$-forms over such extensions. **Lemma 19** ([@Scu16-Hoff Lemma 2.27]). *Let $\varphi$ be a $p$-form over $F$ and let $a\in F\setminus F^p$. Then:* 1. *$D_{F(\sqrt[p]{a})}(\varphi)=D_F(\langle\!\langle a \rangle\!\rangle\otimes\varphi)=\sum_{i=0}^{p-1}a^iD_F(\varphi)$. [\[Lemma:p-forms_BasicIsotropy_i\]]{#Lemma:p-forms_BasicIsotropy_i label="Lemma:p-forms_BasicIsotropy_i"}* 2. *$\mathfrak{i}_{\mathrm{d}}(\varphi_{F(\sqrt[p]{a})})=\frac1p\mathfrak{i}_{\mathrm{d}}(\langle\!\langle a \rangle\!\rangle\otimes\varphi)$.* For purely inseparable field extensions of higher exponents, the following is known. **Proposition 20** ([@Hof04 Prop. 5.7]). *Let $r\geq1$. 
For each $1\leq i \leq r$, let $a_i\in F$, $n_i\geq1$, and $\alpha_i$, $\beta_i$ be such that $\alpha_i^p=\beta_i^{p^{n_i}}=a_i$. Furthermore, set $K=F(\alpha_1,\dots,\alpha_r)$, $L=F(\beta_1,\dots,\beta_r)$, and assume that $[K:F]=p^r$. Then $[L:F]=p^{n_1+\dots+n_r}$ and the quasi-Pfister form $\pi=\langle\!\langle a_1,\dots,a_r \rangle\!\rangle$ is anisotropic.* *Let $\varphi$ be an anisotropic $p$-form. Then the following statements are equivalent.* 1. *$\varphi\otimes\pi$ is isotropic,* 2. *$\varphi_K$ is isotropic,* 3. *$\varphi_L$ is isotropic.* Recently, in [@LagMuk23 Cor. 3.3], it has been proved that, in the situation of Proposition [Proposition 20](#Prop:IsotropyInsepExtHof_pforms){reference-type="ref" reference="Prop:IsotropyInsepExtHof_pforms"} with $p=2$, not only isotropy but also quasi-hyperbolicity of totally singular quadratic forms is preserved. (A totally singular quadratic form $\varphi$ is called *quasi-hyperbolic* if $\mathfrak{i}_{\mathrm{d}}(\varphi)\geq\frac{\dim\varphi}{2}$.) Note that Proposition [Proposition 20](#Prop:IsotropyInsepExtHof_pforms){reference-type="ref" reference="Prop:IsotropyInsepExtHof_pforms"} does not cover all purely inseparable extensions of exponent higher than one; only those extensions $L/F$ are considered for which $L=F(\!\sqrt[p^{n_1}]{a_1},\dots,\sqrt[p^{n_r}]{a_r})$ with $\langle\!\langle a_1,\dots,a_r \rangle\!\rangle$ anisotropic (because the set $\{a_1,\dots,a_r\}$ is $p$-independent; this follows from the assumption $[K:F]=p^r$). Purely inseparable field extensions for which such elements $a_1,\dots,a_r$ can be found are called *modular*. A known example of a non-modular extension is $E/F$ with $F=\mathbb{F}_2(a,b,c)$ for some algebraically independent elements $a,b,c$, and $E=F(\sqrt[4]{a},\sqrt[4]{b^2a+c^2})$, see [@Nonmodular]. 
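For forms with monomial coefficients over $F=\mathbb{F}_p(t_1,\dots,t_k)$ (each coefficient encoded by its integer exponent tuple), the defect relation of Lemma 19(ii) can be checked directly: over $E=F(t_i^{1/p})$, every integer power of $t_i$ lies in $E^p$, so monomial coefficients are $E^p$-linearly dependent exactly when their exponents agree modulo $p$ in all remaining coordinates. The sketch below is purely illustrative and not from the paper; `defect` and `tensor` are our helper names.

```python
# Illustrative sketch (not from the paper): p-forms with monomial
# coefficients over F = F_p(t_1, ..., t_k), encoded by exponent tuples.
# Over E = F(t_i^(1/p) : i in S), every integer power of t_i (i in S)
# lies in E^p, so monomials are E^p-dependent iff their exponents agree
# mod p in all coordinates outside S.

def defect(phi, p, root_coords=frozenset()):
    classes = {tuple(e % p for i, e in enumerate(v) if i not in root_coords)
               for v in phi}
    return len(phi) - len(classes)

def tensor(phi, psi):
    return [tuple(a + b for a, b in zip(u, v)) for u in phi for v in psi]

p = 2
phi = [(0, 0), (1, 0), (0, 1), (1, 1)]   # <<t1, t2>> over F_2(t1, t2)
pf_t1 = [(0, 0), (1, 0)]                 # <<t1>> = <1, t1>

lhs = defect(phi, p, {0})                 # defect of phi over F(sqrt(t1))
rhs = defect(tensor(pf_t1, phi), p) // p  # (1/p) * defect of <<t1>> (x) phi
print(lhs, rhs)                           # 2 2
```

In this monomial model, adjoining $t_i^{1/p^n}$ for any $n\geq1$ yields the same dependence criterion, in line with part (i) of the theorem proved next.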
However, we can strengthen the results by comparing the values $\mathfrak{i}_{\mathrm{d}}(\varphi\otimes\pi)$, $\mathfrak{i}_{\mathrm{d}}(\varphi_K)$ and $\mathfrak{i}_{\mathrm{d}}(\varphi_L)$; it turns out that we can prove $\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\mathfrak{i}_{\mathrm{d}}(\varphi_L)$ even if the extension $L/F$ is not modular. **Theorem 21**. *Let $r\geq1$. For each $1\leq i \leq r$, let $a_i\in F$, $n_i\geq1$, and $\alpha_i$, $\beta_i$ be such that $\alpha_i^p=\beta_i^{p^{n_i}}=a_i$. Furthermore, set $K=F(\alpha_1,\dots,\alpha_r)$, $L=F(\beta_1,\dots,\beta_r)$ and $\pi=\langle\!\langle a_1,\dots,a_r \rangle\!\rangle$. Let $\varphi$ be an anisotropic $p$-form. Then* 1. *$\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\mathfrak{i}_{\mathrm{d}}(\varphi_L)$,* 2. *$\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\frac{1}{p^r}\mathfrak{i}_{\mathrm{d}}(\varphi\otimes\pi)$ if $\pi$ is anisotropic.* *Proof.* To prove (i), we proceed by induction on $r$. Let $r=1$, and write $a=a_1$, $n=n_1$, $\alpha=\alpha_1$ and $\beta=\beta_1$. Let $V$ be the vector space associated with $\varphi$, and write $V_K=K\otimes_F V$ and $V_L=L\otimes_F V\simeq L\otimes_K V_K$ as usual. Let $W_K=\{w\in V_K~|~\varphi_K(w)=0\}$ (resp. $W_L=\{w\in V_L~|~\varphi_L(w)=0\}$) be the maximal isotropic subspace of $V_K$ (resp. $V_L$); then $\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\dim_KW_K$ (resp. $\mathfrak{i}_{\mathrm{d}}(\varphi_L)=\dim_LW_L$). We will show that $W_L= L\otimes_K W_K$, which will not only justify the notation but also prove the claim because we have $\dim_L(L\otimes_KW_K)=\dim_KW_K$ by Lemma [Lemma 5](#Lemma:LinIndepOverExt){reference-type="ref" reference="Lemma:LinIndepOverExt"}. Let $w=\sum_{i=1}^k\gamma_i\otimes w_i\in L\otimes_KW_K$ for some $\gamma_i\in L$ and $w_i\in W_K$; then $$\varphi_L(w)=\varphi_L\left(\sum_{i=1}^k\gamma_i\otimes w_i\right)=\sum_{i=1}^k\gamma_i^p\varphi_K(w_i)=0.$$ Hence, $w\in W_L$, and so we get $L\otimes_KW_K\subseteq W_L$. 
For a proof of the opposite inclusion, first note that $\varphi_K(v)\in F$ for any $v\in V_K$ by Lemma [Lemma 19](#Lemma:p-forms_BasicIsotropy){reference-type="ref" reference="Lemma:p-forms_BasicIsotropy"}[\[Lemma:p-forms_BasicIsotropy_i\]](#Lemma:p-forms_BasicIsotropy_i){reference-type="ref" reference="Lemma:p-forms_BasicIsotropy_i"}. Now, let $w\in W_L$. Since $\{1,\beta,\beta^2,\dots,\beta^{p^{n-1}-1}\}$ is a basis of $L$ over $K$, we can write $$w=\sum_{i=0}^{p^{n-1}-1}\beta^i\otimes w_i$$ with $w_i\in V_K$. Then $$0=\varphi_L(w)=\sum_{i=0}^{p^{n-1}-1}\beta^{pi}\varphi_K(w_i).$$ Note that the set $B=\{1,\beta^p, \beta^{2p}, \dots,\beta^{p^n-p}\}$ is a subset of $\{1,\beta, \dots,\beta^{p^n-1}\}$, which is a basis of $L$ over $F$; therefore, $B$ is linearly independent over $F$. Since, as we noted, $\varphi_K(w_i)\in F$ for all the $i$'s, it follows that $\varphi_K(w_i)=0$. Thus, we have $w_i\in W_K$ for all $i$, and hence $w\in L\otimes_KW_K$. It follows that $W_L\subseteq L\otimes_KW_K$, which concludes the proof in the case $r=1$. Let $r>1$; for $0\leq i \leq r$, set $$L_i=F(\beta_1,\dots,\beta_i,\alpha_{i+1},\dots,\alpha_r).$$ Then $L_0=K$, $L_r=L$, and $\mathfrak{i}_{\mathrm{d}}(\varphi_{L_i})=\mathfrak{i}_{\mathrm{d}}(\varphi_{L_{i+1}})$ for all $0\leq i\leq r-1$ by the previous part of the proof. Therefore, $\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\mathfrak{i}_{\mathrm{d}}(\varphi_L)$. \(ii\) Set $K_i=F(\alpha_1,\dots,\alpha_i)$ for $0\leq i\leq r$. Since $\pi$ is anisotropic by the assumption, we have $[K_{i+1}:K_i]=p$ for all $0\leq i\leq r-1$. 
Now the equality $\mathfrak{i}_{\mathrm{d}}(\varphi_K)=\frac{1}{p^r}\mathfrak{i}_{\mathrm{d}}(\varphi\otimes\pi)$ follows by a repeated application of Lemma [Lemma 19](#Lemma:p-forms_BasicIsotropy){reference-type="ref" reference="Lemma:p-forms_BasicIsotropy"}: $$\begin{gathered} \mathfrak{i}_{\mathrm{d}}(\varphi_K)=\mathfrak{i}_{\mathrm{d}}(\varphi_{K_r})=\frac1p\,\mathfrak{i}_{\mathrm{d}}((\langle\!\langle a_r \rangle\!\rangle\otimes\varphi)_{K_{r-1}})=\dots\\ =\frac{1}{p^r}\mathfrak{i}_{\mathrm{d}}((\langle\!\langle a_1,\dots,a_r \rangle\!\rangle\otimes\varphi)_{K_{0}})=\frac{1}{p^r}\mathfrak{i}_{\mathrm{d}}(\varphi\otimes\pi). \qedhere\end{gathered}$$ ◻ Besides the standard and the full splitting pattern, we can define the *purely inseparable splitting pattern* of a $p$-form $\varphi$ over $F$ as $$\mathrm{piSP}(\varphi)=\{\dim(\varphi_L)_\mathrm{an}~|~L/F \text{ a purely inseparable extension}\}.$$ By the previous theorem, we only need to consider purely inseparable extensions of exponent one. **Corollary 22**. *Let $\varphi$ be a $p$-form over $F$. Then $$\mathrm{piSP}(\varphi)=\{\dim(\varphi_K)_{\mathrm{an}}~|~ K/F \text{ finite purely inseparable of exponent } 1\}.$$* **Remark 23**. We can restrict the field extensions necessary to determine $\mathrm{piSP}(\varphi)$ a bit further: Let $\varphi$ be anisotropic with $\mathop{\mathrm{ndeg}}_F\varphi=p^n$ and $N_F(\varphi)=F^p(a_1,\dots,a_n)$. Consider the field $K=F(\sqrt[p]{b_1},\dots,\sqrt[p]{b_r})$, write $\pi\cong\langle\!\langle b_1,\dots,b_r \rangle\!\rangle$, and assume that $\pi$ is anisotropic. Then we know from Proposition [Proposition 20](#Prop:IsotropyInsepExtHof_pforms){reference-type="ref" reference="Prop:IsotropyInsepExtHof_pforms"} that $\varphi_K$ is isotropic if and only if $\varphi\otimes\pi$ is isotropic. 
By Lemma [Lemma 15](#Lemma:AnisotropicProduct_pforms){reference-type="ref" reference="Lemma:AnisotropicProduct_pforms"}, this can happen only if $\mathop{\mathrm{ndeg}}_F(\varphi\otimes\pi)<\mathop{\mathrm{ndeg}}_F(\varphi)\cdot\mathop{\mathrm{ndeg}}_F(\pi)$, i.e., if the set $\{a_1,\dots,a_n,b_1,\dots,b_r\}$ is $p$-dependent. Thus, we have $$\mathrm{piSP}(\varphi)=\{\dim(\varphi_K)_\mathrm{an}~|~ [K^p(a_1,\dots,a_n):F^p]<p^n[K^p:F^p]\},$$ where $K$ runs over purely inseparable extensions of $F$ of exponent one. This also indicates that a $p$-form may become isotropic even over a field $E$ such that $E^p\cap N_F(\varphi)=F^p$. For example, consider a set $\{a_1,a_2,a_3,b_1\}$ $p$-independent over $F$, and write $\varphi\cong\langle 1,a_1,a_2,a_3 \rangle$ and $E=F(\sqrt[p]{b_1},\sqrt[p]{b_2})$ with $b_2=\frac{a_1b_1+a_3}{a_2}$. It can be shown that indeed $E^p\cap N_F(\varphi)=F^p$. Moreover, $\langle a_1b_1,a_2b_2,a_3 \rangle\subseteq\varphi\otimes\langle\!\langle b_1,b_2 \rangle\!\rangle$, and since $a_3=a_2b_2-a_1b_1$ by the choice of $b_2$, the form $\langle a_1b_1,a_2b_2,a_3 \rangle$ is isotropic (it vanishes on the vector $(1,-1,1)$). Therefore, the $p$-form $\varphi\otimes\langle\!\langle b_1,b_2 \rangle\!\rangle$ is isotropic, and so is $\varphi_E$. # Full splitting pattern {#Sec:fsp} We define the *full splitting pattern* of a $p$-form $\varphi$ over $F$ as $$\mathrm{fSP}(\varphi)=\{\dim(\varphi_E)_{\mathrm{an}}~|~E/F \text{ a field extension}\}.$$ The goal of this section is to determine the full splitting pattern for at least some families of $p$-forms. ## Bounds on the size of ${\mathrm{fSP}(\varphi)}$ Putting aside the trivial case when $\varphi_\mathrm{an}$ is the zero form, it is obvious that $\mathrm{fSP}(\varphi)\subseteq\{1,2,\dots,\dim\varphi\}$, and hence $\left|{\mathrm{fSP}(\varphi)}\right|\leq\dim\varphi$. We will show that this bound cannot be improved. 
Moreover, we provide a lower bound for $\left|{\mathrm{piSP}(\varphi)}\right|$; since obviously $\mathrm{piSP}(\varphi)\subseteq\mathrm{fSP}(\varphi)$, we will also get a lower bound for $\left|{\mathrm{fSP}(\varphi)}\right|$. For a given $p$-form $\varphi$, we construct a tower of fields over which the defect of $\varphi$ is strictly increasing, in analogy with the standard splitting tower. To do that, we impose a seemingly strong assumption on the $p$-form; but by Corollary [Corollary 14](#Cor:BasisSubform){reference-type="ref" reference="Cor:BasisSubform"}, every $p$-form representing $1$ fulfills this assumption. **Lemma 24**. *Let $\varphi=\langle 1,a_1,\dots,a_n \rangle$ be a $p$-form over $F$, and assume that the set $\{a_1, \dots, a_m\}$ is a $p$-basis of the field $N_F(\varphi)$ over $F$ for some $m\leq n$. We denote $E_0=F$ and $E_i=F\bigl(\sqrt[p]{a_1}, \dots, \sqrt[p]{a_i}\bigr)$ for $1\leq i \leq m$. Then we have $\mathfrak{i}_{\mathrm{d}}(\varphi_{E_{i+1}})>\mathfrak{i}_{\mathrm{d}}(\varphi_{E_i})$ and $\langle 1,a_{i+1},\dots,a_m \rangle_{E_i}\subseteq(\varphi_{E_i})_{\mathrm{an}}$ for any $0\leq i \leq m-1$.* *Proof.* Let $i\in\{0,\dots,m-1\}$ and set $\tau_i= (\varphi_{E_i})_{\mathrm{an}}$. Since $D_F(\varphi)\subseteq D_{E_i}(\tau_i)$, it follows from Lemma [\[Lem:p-subform\]](#Lem:p-subform){reference-type="ref" reference="Lem:p-subform"} that $$(\langle 1, a_{i+1},\dots,a_m \rangle_{E_i})_{\mathrm{an}}\subseteq \tau_i,$$ and so we only need to prove that the $p$-form $\langle 1,a_{i+1},\dots,a_m \rangle_{E_i}$ is anisotropic over $E_i$. Suppose not; then we can find $k\in\{i+1,\dots,m\}$, such that $$a_k\in\mathop{\mathrm{span}}_{E_i^p}\{1,a_{i+1}, \dots, \widehat{a_k}, \dots, a_m\}.$$ But that implies $a_k\in F^p(a_1,\dots,\widehat{a_k}, \dots, a_m)$, contradicting the choice of $\{a_1,\dots, a_m\}$ as a $p$-basis over $F$ by Lemma [Lemma 8](#Lem:p-independence){reference-type="ref" reference="Lem:p-independence"}. 
Furthermore, the $p$-form $\langle 1, a_{i+1},\dots,a_m \rangle$ becomes isotropic over the field $E_{i+1}$ (as $a_{i+1}\in E_{i+1}^p$), and hence so does $\tau_i$. It follows that $\mathfrak{i}_{\mathrm{d}}(\varphi_{E_{i+1}})>\mathfrak{i}_{\mathrm{d}}(\varphi_{E_i})$. ◻ As a corollary to the previous lemma, we get a lower bound on the size of the sets $\mathrm{piSP}(\varphi)$ and $\mathrm{fSP}(\varphi)$. **Corollary 25**. *Let $\varphi$ be a $p$-form over $F$, and suppose that $\mathop{\mathrm{ndeg}}_F\varphi=p^m$. Then $\left|{\mathrm{piSP}(\varphi)}\right|\geq m+1$ and $\left|{\mathrm{fSP}(\varphi)}\right|\geq m+1$.* *Proof.* Since the splitting patterns do not depend on the choice of the similarity class, we can assume without loss of generality that $1\in D_F(\varphi)$. Then, invoking Corollary [Corollary 14](#Cor:BasisSubform){reference-type="ref" reference="Cor:BasisSubform"}, we can suppose that $\varphi$ is as in Lemma [Lemma 24](#Lemma:TotInsepTower){reference-type="ref" reference="Lemma:TotInsepTower"}. Using the fields $E_i$ for $0\leq i\leq m$ from that lemma, it follows that we have $\dim(\varphi_{E_i})_{\mathrm{an}}\neq\dim(\varphi_{E_j})_{\mathrm{an}}$ for any $i\neq j$. The claim follows. ◻ **Remark 26**. (i) The same lower bound on $\left|{\mathrm{fSP}(\varphi)}\right|$ as in Corollary [Corollary 25](#Cor:FSPlowerBound){reference-type="ref" reference="Cor:FSPlowerBound"} could be obtained through the standard splitting tower: Denote $$\mathrm{sSP}(\varphi)=\{\dim(\varphi_K)_{\mathrm{an}}~|~K \text{ a field in the standard splitting tower of }\varphi\}.$$ By [@Scu16-Split Lemma 4.3], it holds that $\mathop{\mathrm{ndeg}}_{F(\varphi)}(\varphi_{F(\varphi)})_\mathrm{an}=\frac{1}{p}\mathop{\mathrm{ndeg}}_F\varphi$; therefore, it follows that $\left|{\mathrm{sSP}(\varphi)}\right|=m+1$ where $\mathop{\mathrm{ndeg}}_F\varphi=p^m$. Since $\mathrm{sSP}(\varphi)\subseteq\mathrm{fSP}(\varphi)$, we get $\left|{\mathrm{fSP}(\varphi)}\right|\geq m+1$. 
\(ii\) The tower of fields $F=E_0\subseteq E_1 \subseteq\dots\subseteq E_m$ in Lemma [Lemma 24](#Lemma:TotInsepTower){reference-type="ref" reference="Lemma:TotInsepTower"} might look like a purely inseparable analogue of the standard splitting tower (note that they are of the same length). But the values of $\mathfrak{i}_{\mathrm{d}}(\varphi_{E_i})$ depend on the ordering of $a_1,\dots,a_m$, so in particular we cannot expect that ${\mathfrak{i}_{\mathrm{d}}(\varphi_{E_1})=\mathfrak{i}_{\mathrm{d}}(\varphi_{F(\varphi)})}$. For example, if $\varphi\cong\langle\!\langle a_1,a_2 \rangle\!\rangle\:\bot\:a_3\langle 1,a_1 \rangle$ for some set $\{a_1,a_2,a_3\}\subseteq F^*$ that is $p$-independent over $F$, then $(\varphi_{F(\sqrt[p]{a_1})})_{\mathrm{an}}\cong(\langle\!\langle a_2 \rangle\!\rangle\:\bot\:a_3\langle 1 \rangle)_{F(\sqrt[p]{a_1})}$. The $p$-form $(\varphi_{F(\varphi)})_{\mathrm{an}}\cong\langle\!\langle a_1,a_2 \rangle\!\rangle_{F(\varphi)}$ corresponds rather to $(\varphi_{F(\sqrt[p]{a_3})})_{\mathrm{an}}\cong\langle\!\langle a_1,a_2 \rangle\!\rangle_{F(\sqrt[p]{a_3})}$. The following example shows that the lower bound from Corollary [Corollary 25](#Cor:FSPlowerBound){reference-type="ref" reference="Cor:FSPlowerBound"} is optimal. **Example 27**. Let $\varphi\cong\langle 1,a_1,\dots,a_m \rangle$ be a minimal $p$-form over $F$ (i.e., the set $\{a_1,\dots,a_m\}$ is $p$-independent over $F$). Then $\mathop{\mathrm{ndeg}}_F{\varphi}=p^m$ and by Lemma [Lemma 24](#Lemma:TotInsepTower){reference-type="ref" reference="Lemma:TotInsepTower"}, we have $$\mathrm{fSP}(\varphi)=\{1,2,\dots,m+1\}.$$ ## Full splitting pattern of quasi-Pfister forms To determine the full splitting pattern of quasi-Pfister forms, we start by looking at their behavior over purely inseparable field extensions. **Lemma 28**. 
*Let $\pi$ and $\pi'$ be anisotropic quasi-Pfister forms over $F$ such that $\pi\cong\langle\!\langle a \rangle\!\rangle\otimes\pi'$ for some $a\in F$, and let $E=F(\sqrt[p]{a})$. Then $\pi'_E$ is anisotropic.* *Proof.* Let $\pi'=\langle\!\langle b_1,\dots,b_m \rangle\!\rangle$. If $\pi'_E$ is isotropic, then, by Corollary [Corollary 10](#Cor:pIndAndAnisPF){reference-type="ref" reference="Cor:pIndAndAnisPF"}, $\{b_1,\dots,b_m\}$ is $p$-dependent over $E$; it follows by Lemma [Lemma 8](#Lem:p-independence){reference-type="ref" reference="Lem:p-independence"} that we can find $i\in\{1,\dots,m\}$ such that ${b_i\in E^p(b_1,\dots,b_{i-1},b_{i+1},\dots,b_m)}$. Then $b_i\in F^p(a, b_1,\dots,b_{i-1},b_{i+1},\dots,b_m)$, so the set $\{a, b_1,\dots,b_m\}$ is $p$-dependent over $F$, and hence $\pi$ is isotropic by Corollary [Corollary 10](#Cor:pIndAndAnisPF){reference-type="ref" reference="Cor:pIndAndAnisPF"}. ◻ By induction, we get: **Lemma 29**. *Let $\pi=\langle\!\langle a_1,\dots,a_m \rangle\!\rangle$ be an anisotropic quasi-Pfister form over $F$. We set $E_0=F$ and $E_i=F\bigl(\sqrt[p]{a_1}, \dots, \sqrt[p]{a_i}\bigr)$ for $1\leq i\leq m$. Then $(\pi_{E_i})_{\mathrm{an}}\cong\langle\!\langle a_{i+1},\dots,a_m \rangle\!\rangle$. (For $i=m$ we get the $0$-fold Pfister form $\langle 1 \rangle$.)* **Example 30**. Let $\pi\cong\langle\!\langle a_1,\dots,a_n \rangle\!\rangle$ be an anisotropic quasi-Pfister form. Since $(\pi_E)_{\mathrm{an}}$ is again a quasi-Pfister form (and hence its dimension is a $p$-power) for any field extension $E/F$ by [@Scu13 Lemma 2.6], it follows from Lemma [Lemma 29](#Lemma:PFinsepsplitting_pforms){reference-type="ref" reference="Lemma:PFinsepsplitting_pforms"} that $$\mathrm{fSP}(\pi)=\{1,p,\dots,p^n\}.$$ ## Full splitting pattern of quasi-Pfister neighbors We look at anisotropic $p$-forms of the form $\varphi\cong\pi\:\bot\:d\sigma$, where $\pi$ is a quasi-Pfister form and $\sigma\subseteq\pi$. 
Note that $\varphi$ is a quasi-Pfister neighbor of $\pi\otimes\langle\!\langle d \rangle\!\rangle$. **Lemma 31**. *Let $\pi$ be a quasi-Pfister form over $F$, $\sigma\subseteq\pi$ and $d\in F^*$ be such that the $p$-form $\varphi\cong\pi\:\bot\:d\sigma$ is anisotropic. Let $E/F$ be a field extension.* 1. *If $d\in D_E(\pi)$, then we have $(\varphi_E)_{\mathrm{an}}\cong(\pi_E)_{\mathrm{an}}$, and so, in particular, $\mathfrak{i}_{\mathrm{d}}(\varphi_E)=\mathfrak{i}_{\mathrm{d}}(\pi_E)+\dim\sigma$.* 2. *If $d\notin D_E(\pi)$, then $(\varphi_E)_{\mathrm{an}}\cong(\pi_E)_{\mathrm{an}}\:\bot\:d(\sigma_E)_{\mathrm{an}}$, and so, in particular, $\mathfrak{i}_{\mathrm{d}}(\varphi_E)=\mathfrak{i}_{\mathrm{d}}(\pi_E)+\mathfrak{i}_{\mathrm{d}}(\sigma_E)$.* *Proof.* If $d\in D_E(\pi)$, then $\varphi_E\cong d\pi_E\:\bot\:d\sigma_E$, and hence $(\varphi_E)_{\mathrm{an}}\cong(\pi_E)_{\mathrm{an}}$. Now assume $d\notin D_E(\pi)$; if $(\pi_E)_{\mathrm{an}}\:\bot\:d(\sigma_E)_{\mathrm{an}}$ were isotropic, then so would be $(\pi_E)_{\mathrm{an}}\otimes\langle 1,d \rangle$, which would imply $d\in D_E(\pi)$, a contradiction. Thus, the $p$-form $(\pi_E)_{\mathrm{an}}\:\bot\:d(\sigma_E)_{\mathrm{an}}$ is anisotropic. As clearly $(\varphi_E)_{\mathrm{an}}\cong((\pi_E)_{\mathrm{an}}\:\bot\:d(\sigma_E)_{\mathrm{an}})_{\mathrm{an}}$, we get $(\varphi_E)_{\mathrm{an}}\cong(\pi_E)_{\mathrm{an}}\:\bot\:d(\sigma_E)_{\mathrm{an}}$. ◻ **Corollary 32**. *Let $\pi$ be an $n$-fold quasi-Pfister form over $F$, $\sigma\subseteq\pi$ and $d\in F^*$ be such that the $p$-form $\varphi\cong\pi\:\bot\:d\sigma$ is anisotropic. Then $$\mathrm{fSP}(\varphi)\subseteq\{p^k+l~|~0\leq k\leq n, 0\leq l\leq \dim\sigma\}.$$* **Lemma 33**. *Let $\pi\cong\langle\!\langle a_1,\dots,a_n \rangle\!\rangle$ be an anisotropic quasi-Pfister form over $F$ and $\sigma\cong\langle 1,a_1,\dots,a_s \rangle$ for some $s\leq n$. Let $E/F$ be a field extension.* 1. 
*If $\dim(\sigma_E)_{\mathrm{an}}=l$, then $\dim(\pi_E)_{\mathrm{an}}\geq p^{\lceil\log_pl\rceil}$.* 2. *If $\dim(\pi_E)_{\mathrm{an}}=p^k$, then $\dim(\sigma_E)_{\mathrm{an}}\geq k-n+s+1$.* *Proof.* Part (i) follows from the facts that $(\sigma_E)_{\mathrm{an}}\subseteq(\pi_E)_{\mathrm{an}}$ and that the dimension of $(\pi_E)_{\mathrm{an}}$ is a $p$-power. To prove part (ii), we apply [@Hof04 Prop. 5.2] to $\pi_E$ to find a subset $\{i_1,\dots,i_k\}\subseteq\{1,\dots,n\}$ such that $(\pi_E)_{\mathrm{an}}\cong\langle\!\langle a_{i_1},\dots,a_{i_k} \rangle\!\rangle_E$. Let $$\{j_1,\dots,j_l\}=\{i_1,\dots,i_k\}\cap\{1,\dots,s\};$$ since the set $\{a_{j_1},\dots,a_{j_l}\}$ is $p$-independent over $E$, the $p$-form $\langle 1,a_{j_1},\dots,a_{j_l} \rangle_E$ is an anisotropic subform of $(\sigma_E)_{\mathrm{an}}$. In particular, $\dim(\sigma_E)_{\mathrm{an}}\geq l+1$. Since $$l=\left|{ \{i_1,\dots,i_k\}\cap\{1,\dots,s\}}\right|\geq k-(n-s),$$ the claim follows. ◻ If, in the situation of Lemma [Lemma 31](#Lemma:IsotropyIndicesSPN_pforms){reference-type="ref" reference="Lemma:IsotropyIndicesSPN_pforms"}, the form $\sigma$ is minimal, then it is possible to describe the full splitting pattern of $\varphi\cong\pi\:\bot\:d\sigma$. **Theorem 34**. *Let $\pi$ be an $n$-fold quasi-Pfister form over $F$, $\sigma$ be a minimal subform of $\pi$ of dimension at least $2$ and $d\in F^*$ be such that the $p$-form $\varphi\cong\pi\:\bot\:d\sigma$ is anisotropic. Then $m\in \mathrm{fSP}(\varphi)$ if and only if $m=p^k+l$ and one of the following holds:* 1. *$0\leq k\leq n$ and $l=0$;* 2. *$0\leq k\leq n$ and $\max\{1, k-n+\dim\sigma\}\leq l\leq \min\{\dim\sigma,p^k\}$.* *Proof.* If $x\in D_F(\sigma)$, then $x^{-1}\in G_F(\pi)$, and hence $x^{-1}\varphi\cong\pi\:\bot\:d(x^{-1}\sigma)$ with $1\in D_F(x^{-1}\sigma)$; therefore, we can assume without loss of generality that $1\in D_F(\sigma)$. 
Let $\sigma\cong\langle 1,a_1,\dots, a_s \rangle$ for some $1\leq s\leq n$; since $\sigma$ is assumed to be minimal, the set $\{a_1,\dots,a_s\}$ is $p$-independent by part (i) of Lemma [Lemma 18](#Lem:PropertiesOfMinimalForms_pforms){reference-type="ref" reference="Lem:PropertiesOfMinimalForms_pforms"}, and hence (by Lemma [Lemma 11](#Lemma:p-bases){reference-type="ref" reference="Lemma:p-bases"}) it can be extended to a $p$-independent set $\{a_1,\dots,a_n\}$ such that $N_F(\pi)=F^p(a_1,\dots, a_n)$. Then $\pi\cong\langle\!\langle a_1,\dots,a_n \rangle\!\rangle$ and $$\varphi\cong\langle\!\langle a_1,\dots,a_n \rangle\!\rangle\:\bot\:d\langle 1,a_1,\dots,a_s \rangle.$$ By Corollary [Corollary 32](#Cor:fspSPN_pforms){reference-type="ref" reference="Cor:fspSPN_pforms"}, we have $$\mathrm{fSP}(\varphi)\subseteq\{p^k+l~|~0\leq k\leq n, 0\leq l\leq s+1\}.$$ Furthermore, if $l\geq 1$ (i.e., if $\sigma_E$ does not disappear completely in $(\varphi_E)_{\mathrm{an}}$), then by Lemma [Lemma 33](#Lemma:SplitPatMinSubform_pforms){reference-type="ref" reference="Lemma:SplitPatMinSubform_pforms"}, it must hold that $k-n+s+1\leq l\leq p^k$ for any tuple $(k,l)$ such that $p^k+l\in\mathrm{fSP}(\varphi)$. Therefore, $\mathrm{fSP}(\varphi)\subseteq I$ where $$\begin{gathered} I=\{p^k+l~|~0\leq k\leq n, \max\{1, k-n+s+1\}\leq l\leq \min\{s+1,p^k\}\}\\ \cup\{p^k~|~0\leq k\leq n\}.\end{gathered}$$ So we only need to prove that all values in $I$ are realizable. We define $$\begin{aligned} I_0&=\{(k,0)~|~0\leq k\leq n\},\\ I_1&=\{(k,l)~|~0\leq k\leq n, \max\{1, k-n+s+1\}\leq l\leq\min\{s+1, k+1\}\},\\ I_2&=\{(k,l)~|~0\leq k\leq n, k+1<l\leq \min\{s+1,p^k\}\};\end{aligned}$$ then $I=\{p^k+l~|~(k,l)\in I_0\cup I_1\cup I_2\}$. If $(k,l)=(k,0)\in I_0$, then set $D_k=F(\sqrt[p]{a_{k+1}},\dots,\sqrt[p]{a_n}, \sqrt[p]{d})$. We get $$(\varphi_{D_k})_{\mathrm{an}}\cong\langle\!\langle a_1,\dots,a_k \rangle\!\rangle_{D_k},$$ i.e., $\dim(\varphi_{D_k})_{\mathrm{an}}=p^k$. 
To cover the values in the set $I_1$, we define $E_{n,s+1}=F$ and $$E_{k,l}=F(\sqrt[p]{a_l},\dots,\sqrt[p]{a_{n-(k-l)-1}})$$ for all tuples $(k,l)\in I_1$ such that $(k,l)\neq (n,s+1)$; note that since $k\leq n-1$ for such tuples, we have $l\leq n-(k-l)-1$. Moreover, the condition $l\leq k+1$ can be rewritten as $n-(k-l)-1 \leq n$. Therefore, $E_{k,l}$ is well-defined. Moreover, the inequality $k-n+s+1\leq l$ ensures that $s\leq n-(k-l)-1$, and hence $(\sigma_{E_{k,l}})_\mathrm{an}\cong\langle 1,a_1,\dots,a_{l-1} \rangle_{E_{k,l}}$. Thus, for any $(k,l)\in I_1$, we get $$(\varphi_{E_{k,l}})_{\mathrm{an}}\cong(\langle\!\langle a_1,\dots,a_{l-1} \rangle\!\rangle\otimes\langle\!\langle a_{n-(k-l)},\dots,a_n \rangle\!\rangle\:\bot\:d\langle 1,a_1,\dots,a_{l-1} \rangle)_{E_{k,l}}$$ and so $\dim(\varphi_{E_{k,l}})_{\mathrm{an}}=p^k+l$. Finally, let $(k,l)\in I_2$. We denote $S_k=\{\lambda:\{1,\dots,k\}\rightarrow\{0,\dots,p-1\}\}$ and write $a^\lambda=\prod_{i=1}^ka_i^{\lambda(i)}$ for any $\lambda\in S_k$. Furthermore, we pick a subset $\mathcal{L}_{k,l}\subseteq S_k$ such that $\left|{\mathcal{L}_{k,l}}\right|=l-k-1$ and $\sum_{i=1}^k\lambda(i)>1$ for each $\lambda\in\mathcal{L}_{k,l}$; note that this is possible since $l-k-1\leq p^k-k-1=\left|{\{\lambda\in S_k~|~\sum_{i=1}^k\lambda(i)>1\}}\right|$ (the only elements of $S_k$ with $\sum_{i=1}^k\lambda(i)\leq1$ are the zero map and the $k$ maps taking the value $1$ exactly once). Let $\mathcal{L}_{k,l}=\{\lambda_1,\dots,\lambda_{l-k-1}\}$. We define $$G_{k,l}=F\left(\sqrt[p]{\frac{a^{\lambda_1}}{a_{k+1}}}, \dots, \sqrt[p]{\frac{a^{\lambda_{l-k-1}}}{a_{l-1}}}, \sqrt[p]{a_l},\dots,\sqrt[p]{a_n}\right)$$ (the field depends on the choice of $\mathcal{L}_{k,l}$ and the ordering of its elements). Then $[G_{k,l}:F]\leq p^{n-k}$, and $$G_{k,l}^p(a_1,\dots,a_k) =F^p\left(a_1,\dots,a_k, \frac{a^{\lambda_1}}{a_{k+1}}, \dots, \frac{a^{\lambda_{l-k-1}}}{a_{l-1}}, a_l, \dots,a_n\right) =F^p(a_1,\dots,a_n)$$ because by the definition $a^{\lambda_i}\in F^p(a_1,\dots,a_k)$ for all $1\leq i\leq l-k-1$. 
As $[G^p_{k,l}:F^p]=[G_{k,l}:F]$ by Lemma [Lemma 7](#Lemma:AlgebraicToThepPower){reference-type="ref" reference="Lemma:AlgebraicToThepPower"}, we get $$p^n=[F^p(a_1,\dots,a_n):F^p]=[G_{k,l}^p(a_1,\dots,a_k):G_{k,l}^p]\cdot[G^p_{k,l}:F^p]\leq p^k\cdot p^{n-k}.$$ It follows that $[G_{k,l}:F]=p^{n-k}$ and $[G_{k,l}^p(a_1,\dots,a_k):G_{k,l}^p]=p^k$; hence, in particular, the set $\{a_1,\dots,a_k\}$ is $p$-independent over $G_{k,l}$, which means that the $p$-form $\langle\!\langle a_1,\dots,a_k \rangle\!\rangle$ is anisotropic over $G_{k,l}$. Moreover, note that we have $a^{\lambda_i}\equiv a_{k+i} \mod G_{k,l}^{*p}$ for any $1\leq i\leq l-k-1$; in particular $a_{k+1},\dots,a_{l-1}\in D_{G_{k,l}}(\langle\!\langle a_1,\dots,a_k \rangle\!\rangle)$. Therefore, $$\begin{aligned} (\pi_{G_{k,l}})_{\mathrm{an}}&\cong\langle\!\langle a_1,\dots,a_k \rangle\!\rangle_{G_{k,l}},\\ (\sigma_{G_{k,l}})_{\mathrm{an}}&\cong\langle 1,a_1,\dots,a_{l-1} \rangle_{G_{k,l}},\end{aligned}$$ where the anisotropy of $\langle 1,a_1,\dots,a_{l-1} \rangle_{G_{k,l}}$ follows from $$\langle 1,a_1,\dots,a_{l-1} \rangle_{G_{k,l}}\cong\bigl(\langle 1,a_1,\dots,a_k \rangle\:\bot\:\mathop{\large{ \mathlarger{\:\bot\:}}}\limits_{i=1}^{l-k-1}\langle a^{\lambda_i} \rangle\bigr)_{G_{k,l}}\subseteq\langle\!\langle a_1,\dots,a_k \rangle\!\rangle_{G_{k,l}}.$$ Since $d\notin F^p(a_1,\dots,a_n)=D_{G_{k,l}}(\pi)$, we get by Lemma [Lemma 31](#Lemma:IsotropyIndicesSPN_pforms){reference-type="ref" reference="Lemma:IsotropyIndicesSPN_pforms"} that $$(\varphi_{G_{k,l}})_{\mathrm{an}}\cong\left(\langle\!\langle a_1,\dots,a_k \rangle\!\rangle\:\bot\:d\langle 1,a_1,\dots,a_{l-1} \rangle\right)_{G_{k,l}};$$ in particular $\dim(\varphi_{G_{k,l}})_{\mathrm{an}}=p^k+l$. ◻ **Remark 35**. Note that we used in the proof of Theorem [Theorem 34](#Th:fullsplitpatSPNmin_pforms){reference-type="ref" reference="Th:fullsplitpatSPNmin_pforms"} only purely inseparable field extensions of exponent one. 
Thus, for any $p$-form $\varphi$ satisfying the conditions of that theorem, we have $\mathrm{piSP}(\varphi)=\mathrm{fSP}(\varphi)$. The following example illustrates both the result and the proof of Theorem [Theorem 34](#Th:fullsplitpatSPNmin_pforms){reference-type="ref" reference="Th:fullsplitpatSPNmin_pforms"}. **Example 36**. Let $a_1, a_2, a_3, a_4, d\in F^*$ be $p$-independent over $F$, and $$\varphi\cong\langle\!\langle a_1,a_2,a_3, a_4 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_2, a_3 \rangle.$$ Then we have by Theorem [Theorem 34](#Th:fullsplitpatSPNmin_pforms){reference-type="ref" reference="Th:fullsplitpatSPNmin_pforms"} $$\begin{aligned} \mathrm{fSP}(\varphi)=&\{p^0, p^1,p^2,p^3,p^4\}\\ &\cup\{p^0+1,p^1+1,p^1+2,p^2+2,p^2+3,p^2+4, p^3+3,p^3+4,p^4+4\}\\ &\cup\{p^1+3 ~|~\text{if } p\geq3\}\cup\{p^1+4~|~\text{if } p\geq4\}.\end{aligned}$$ Table [1](#Tab:FullSplitPattSPN_pforms){reference-type="ref" reference="Tab:FullSplitPattSPN_pforms"} provides the fields used in the proof of Theorem [Theorem 34](#Th:fullsplitpatSPNmin_pforms){reference-type="ref" reference="Th:fullsplitpatSPNmin_pforms"} and the obtained $p$-forms for the case $p\geq4$. 
[]{#Tab:FullSplitPattSPN_pforms} **Table 1.** Splitting of the $p$-form $\langle\!\langle a_1,a_2,a_3,a_4 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_2,a_3 \rangle$ in the case $p\geq4$ (see Example [Example 36](#Ex:FullSplitPattSPN_pforms){reference-type="ref" reference="Ex:FullSplitPattSPN_pforms"}). For each realizable dimension $p^k+l$, we list the field used in the proof of Theorem [Theorem 34](#Th:fullsplitpatSPNmin_pforms){reference-type="ref" reference="Th:fullsplitpatSPNmin_pforms"} together with the anisotropic kernel of $\varphi$ over that field; the remaining pairs $(k,l)$ are not realized.

- $p^0$: $D_0=F(\sqrt[p]{a_1},\sqrt[p]{a_2},\sqrt[p]{a_3},\sqrt[p]{a_4},\sqrt[p]{d})$, kernel $\langle 1 \rangle$.
- $p^0+1$: $E_{0,1}=F(\sqrt[p]{a_1},\sqrt[p]{a_2},\sqrt[p]{a_3},\sqrt[p]{a_4})$, kernel $\langle 1 \rangle\:\bot\:d\langle 1 \rangle$.
- $p^1$: $D_1=F(\sqrt[p]{a_1},\sqrt[p]{a_2},\sqrt[p]{a_3},\sqrt[p]{d})$, kernel $\langle\!\langle a_4 \rangle\!\rangle$.
- $p^1+1$: $E_{1,1}=F(\sqrt[p]{a_1},\sqrt[p]{a_2},\sqrt[p]{a_3})$, kernel $\langle\!\langle a_4 \rangle\!\rangle\:\bot\:d\langle 1 \rangle$.
- $p^1+2$: $E_{1,2}=F(\sqrt[p]{a_2},\sqrt[p]{a_3},\sqrt[p]{a_4})$, kernel $\langle\!\langle a_1 \rangle\!\rangle\:\bot\:d\langle 1,a_1 \rangle$.
- $p^1+3$: $G_{1,3}=F\bigl(\sqrt[p]{\frac{a_1^2}{a_2}},\sqrt[p]{a_3},\sqrt[p]{a_4}\bigr)$, kernel $\langle\!\langle a_1 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_1^2 \rangle$.
- $p^1+4$: $G_{1,4}=F\bigl(\sqrt[p]{\frac{a_1^2}{a_2}}, \sqrt[p]{\frac{a_1^3}{a_3}}, \sqrt[p]{a_4}\bigr)$, kernel $\langle\!\langle a_1 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_1^2, a_1^3 \rangle$.
- $p^2$: $D_2=F(\sqrt[p]{a_1},\sqrt[p]{a_2},\sqrt[p]{d})$, kernel $\langle\!\langle a_3,a_4 \rangle\!\rangle$.
- $p^2+2$: $E_{2,2}=F(\sqrt[p]{a_2},\sqrt[p]{a_3})$, kernel $\langle\!\langle a_1,a_4 \rangle\!\rangle\:\bot\:d\langle 1,a_1 \rangle$.
- $p^2+3$: $E_{2,3}=F(\sqrt[p]{a_3},\sqrt[p]{a_4})$, kernel $\langle\!\langle a_1,a_2 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_2 \rangle$.
- $p^2+4$: $G_{2,4}=F\bigl(\sqrt[p]{\frac{a_1a_2}{a_3}}, \sqrt[p]{a_4}\bigr)$, kernel $\langle\!\langle a_1,a_2 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_2,a_1a_2 \rangle$.
- $p^3$: $D_3=F(\sqrt[p]{a_1},\sqrt[p]{d})$, kernel $\langle\!\langle a_2,a_3,a_4 \rangle\!\rangle$.
- $p^3+3$: $E_{3,3}=F(\sqrt[p]{a_3})$, kernel $\langle\!\langle a_1,a_2,a_4 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_2 \rangle$.
- $p^3+4$: $E_{3,4}=F(\sqrt[p]{a_4})$, kernel $\langle\!\langle a_1,a_2,a_3 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_2,a_3 \rangle$.
- $p^4$: $D_4=F(\sqrt[p]{d})$, kernel $\langle\!\langle a_1,a_2,a_3,a_4 \rangle\!\rangle$.
- $p^4+4$: $E_{4,4}=F$, kernel $\langle\!\langle a_1,a_2,a_3,a_4 \rangle\!\rangle\:\bot\:d\langle 1,a_1,a_2,a_3 \rangle$.
--- abstract: | In the vertex colouring game on a graph $G$, Maker and Breaker alternately colour vertices of $G$ from a palette of $k$ colours, with no two adjacent vertices allowed the same colour. Maker seeks to colour the whole graph while Breaker seeks to make some vertex impossible to colour. The game chromatic number of $G$, $\chi_\text{g}(G)$, is the minimal number $k$ of colours for which Maker has a winning strategy for the vertex colouring game. Matsumoto proved in 2019 that $\chi_\text{g}(G)-\chi(G)\leq\floor{n/2} - 1$, and conjectured that the only equality cases are some graphs of small order and the Turán graph $T(2r,r)$ (i.e. $K_{2r}$ minus a perfect matching). We resolve this conjecture in the affirmative by considering a modification of the vertex colouring game wherein Breaker may remove a vertex instead of colouring it. Matsumoto further asked whether a similar result could be proved for the vertex marking game, and we provide an example to show that no such nontrivial result can exist. address: Department of Pure Mathematics and Mathematical Statistics (DPMMS), University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, United Kingdom author: - Lawrence Hollom bibliography: - main.bib title: On graphs with maximum difference between game chromatic number and chromatic number --- # Introduction {#sec:intro} The vertex colouring game on a graph is a widely studied combinatorial game, first invented by Brams and then discussed by Gardner in his "Mathematical Games" column of Scientific American [@gardner1981scientific] in 1981. The game was subsequently reinvented by Bodlaender [@bodlaender1991complexity] in 1991, who brought it into the combinatorics literature, in which it has since attracted significant attention. The game is played on a graph $G$ by two players, Maker and Breaker, often also referred to as Alice and Bob. 
The players alternately assign colours from a set $X$ of size $k$ to vertices of the graph, Maker going first, with the restriction that no two adjacent vertices may be given the same colour. Maker wins the game if the whole of $G$ can be coloured, and Breaker wins otherwise, i.e. if some vertex becomes impossible to colour. The *game chromatic number* $\chi_\text{g}(G)$ of $G$ is then the minimal number $k$ of colours for which Maker has a winning strategy. Calculating -- or even estimating -- $\chi_\text{g}(G)$ is very difficult, even for small graphs. Bodlaender first asked about the computational complexity of the vertex colouring game in 1991, and it took until 2020 before Costa, Pessoa, Sampaio, and Soares [@costa2020pspace] proved that both the vertex colouring game and the related greedy colouring game are PSPACE-complete. This result has since been extended to variants of the game where Breaker starts, or one or other of the players can pass their turn [@marcilon2020hardness]. Despite the difficulty of the problem in general, the game chromatic number of various classes of graphs has received significant attention; the cases of forests [@faigle1991game] and planar graphs [@kierstead1994planar; @zhu1999game; @sekiguchi2014game; @zhu2008refined; @nakprasit2018game] have been studied, as have cactuses [@sidorowicz2007game], and the relation to the acyclic chromatic number [@dinski1999bound] and game Grundy number [@havet2013game], to name a few avenues of research. Another significant direction of study, initiated by Bohman, Frieze, and Sudakov [@bohman2008game], is the study of the game chromatic number of the binomial random graph. One approach that can be taken in bounding $\chi_\text{g}(G)$ is to compare it to the usual chromatic number $\chi(G)$. Such an approach was taken by Matsumoto [@matsumoto2019difference], who in 2019 proved the following theorem. 
[\[thm:comparison-basic\]]{#thm:comparison-basic label="thm:comparison-basic"} For any graph $G$ of order $n\geq 2$, we have $$\begin{aligned} \chi_\text{g}(G)-\chi(G)\leq\floor{n/2}-1. \end{aligned}$$ Following on from this, Matsumoto conjectured a classification of the equality cases, as follows. [\[conj:matsumoto\]]{#conj:matsumoto label="conj:matsumoto"} For any graph $G$ with $n\geq 2$ vertices, $$\begin{aligned} \chi_\text{g}(G)-\chi(G)\leq\floor{n/2}-2, \end{aligned}$$ unless $G$ is isomorphic to either the Turán graph $T(2r,r)$ or $K_{2,3}$. Throughout this paper, all graphs we consider are finite and simple, but not necessarily connected. We resolve this conjecture (modulo some trivialities regarding graphs of small order) by first introducing a new, modified version of the vertex colouring game, wherein Breaker may choose to play a 'blank', which is equivalent to deleting a vertex. Neighbours of a vertex at which a blank has been played may still be given any colour, or even another blank. This game, and the corresponding graph invariant $\chi_\text{gb}(G)$ (the minimal number of colours, not counting blanks, for which Maker wins) are significantly more amenable to inductive methods than the standard vertex colouring game. It is this property which allows us to prove the following theorem. [\[thm:intro-comparison\]]{#thm:intro-comparison label="thm:intro-comparison"} For any graph $G$ of order at least 6 which is not isomorphic to the $r$-partite Turán graph $T(2r,r)$, we have $$\begin{aligned} \chi_\text{gb}(G)-\chi(G)\leq\floor{n/2}-2. \end{aligned}$$ Noting that $\chi_\text{gb}(G)\geq \chi_\text{g}(G)$, the conjecture follows from this theorem. Matsumoto also asked whether a similar inequality holds for the marking game, also called the graph colouring game, a definition of which is given later in the paper. We resolve this problem by presenting graphs with large game colouring number and small chromatic number. 
We give the definitions and preliminary results we shall need in our proof in the next section, then present the inductive step of the proof, and conclude with some open problems and directions for future work. # Definitions and preliminary results {#sec:preliminaries} The precise small graphs for which the conjectured bound does not hold are the $r$-partite Turán graph $T(2r,r)$, any connected subgraph of the complete bipartite graph $K_{2,3}$ which itself contains the path on four vertices $P_4$ as a subgraph, and any graph of order at most 3. We restrict our attention to graphs of order at least 6 to avoid the technicalities that come with these small cases. We in fact prove a statement slightly stronger than the one stated in the introduction, concerning a generalisation of the vertex colouring game, which we now define. [\[def:game-with-blanks\]]{#def:game-with-blanks label="def:game-with-blanks"} We define the *vertex colouring game with blanks* to be the usual vertex colouring game, except now Breaker can use his turn to mark a vertex as 'blank', rather than with a particular colour. When we refer to 'colours' in this game, we do not include blanks. We will refer to a player marking a vertex as blank as *playing a blank* at that vertex. Blanks do not restrict what colours may be played at adjacent vertices, and any such vertex may still be given any colour not prohibited by its own neighbours. Thus playing a blank is equivalent to deleting a vertex from the graph. The game ends when no vertex can legally be given any (non-blank) colour; in particular, if Breaker's only legal move is to play a blank, then the game ends and Breaker wins. The minimum number of colours for which Maker wins the vertex colouring game with blanks is denoted $\chi_\text{gb}(G)$. Throughout the rest of the paper we will refer to a player assigning a colour to a vertex as *playing a vertex*. We now generalise the game even further, as is necessary for our proof. 
[\[def:marked-for-blanks\]]{#def:marked-for-blanks label="def:marked-for-blanks"} The vertex colouring game with blanks on $G$ with independent sets $D_1,\dotsc,D_s \subseteq V(G)$ *marked for blanks* is played as the vertex colouring game with blanks, but with the following differences. Both Maker and Breaker may play blanks in $D\mathrel{\coloneqq}D_1\mathbin{\cup}\dots\mathbin{\cup}D_s$, and the game does not end while vertices in $D$ are left unplayed, even if the only valid move is to play a blank. Finally, whenever Breaker plays a blank at a vertex $v\in V(G)$, he may pick some class marked for blanks $D_i\not\ni v$ (if such a class exists), and remove it from the list of classes marked for blanks. We define $\chi_\text{gb}(G;D_1,\dotsc,D_s)$ to be the minimum number of colours for Maker to win the vertex colouring game with blanks, with independent sets $D_1,\dotsc,D_s$ marked for blanks. In particular, if $s=0$ and no classes are marked for blanks then the above game agrees with the vertex colouring game with blanks defined above. At the other extreme, if $D_1=V(G)$ (and so $G$ has no edges), $\chi_\text{gb}(G;D_1)=0$, as both players are forced to play blanks until there are no vertices left. Note that classes marked for blanks only help Maker: Maker can choose to play as if there are no classes marked for blanks, and the game will proceed exactly as if there are no such classes. The only exception to this is at the end of the game, when both players may be forced to play blanks, allowing Maker a chance to win when she would otherwise have lost. Therefore $\chi_\text{gb}(G)\geq\chi_\text{gb}(G;D_1,\dotsc,D_s)$. We now state our main theorem, which the remainder of this section and the next are devoted to proving. [\[thm:comparison-better\]]{#thm:comparison-better label="thm:comparison-better"} Let $G$ be a graph on $n$ vertices with $n\geq 2$, and $C_1,\dotsc,C_p$ be a partition of $V(G)$ into $p\geq\chi(G)$ disjoint independent sets. 
Let $0\leq s\leq p$, and let $D_1,\dotsc,D_s$ be some $s$ of the sets $C_1,\dotsc,C_p$. Assume that either $s\geq 1$, or that $n\geq 6$ and $G$ is not isomorphic to the Turán graph $T(2r,r)$. Then $$\begin{aligned} \label{eq:comparison-target} \chi_\text{gb}(G;D_1,\dotsc,D_s)\leq p+\floor{n/2}-2. \end{aligned}$$ Noting that we may take $s=0$ and $p=\chi(G)$ in the above, the main theorem stated in the introduction follows. We shall refer to the independent sets $C_1,\dotsc,C_p$ as the 'classes' of $G$, noting that they are colour classes in some proper colouring of $G$ (not necessarily using the minimal number of colours). Our proof of Theorem [\[thm:comparison-better\]](#thm:comparison-better){reference-type="ref" reference="thm:comparison-better"} proceeds by induction on $n$, and we use the following as our base case. [\[lem:base-case\]]{#lem:base-case label="lem:base-case"} All of the following finite simple graphs satisfy equation [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"} with $s=0$. 1. [\[case:matsumoto-6\]]{#case:matsumoto-6 label="case:matsumoto-6"} Graphs of order 6 except for $T(6,3)$. 2. [\[case:matsumoto-7\]]{#case:matsumoto-7 label="case:matsumoto-7"} Graphs of order 7. 3. [\[case:matsumoto-interesting\]]{#case:matsumoto-interesting label="case:matsumoto-interesting"} Strict subgraphs $G\subsetneqq T(2r,r)$ where $r\geq 3$, $\abs{V(G)}=2r$, and each $C_i$ has size 2. 4. [\[case:matsumoto-bipartite\]]{#case:matsumoto-bipartite label="case:matsumoto-bipartite"} Any subgraph of $K_{2,r}$ on $2+r$ vertices, with $p=2$ and $\abs{C_1}=2$, for $r\geq 4$. Furthermore, the following satisfy [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"} with $s\geq 1$. 1. [\[case:matsumoto-2\]]{#case:matsumoto-2 label="case:matsumoto-2"} Graphs of order 2. 2. [\[case:matsumoto-3\]]{#case:matsumoto-3 label="case:matsumoto-3"} Graphs of order 3.
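Although no computation is needed for the results above, the Turán graphs appearing in them are easy to generate and inspect. The following Python sketch (purely illustrative, not part of any proof, and with function names of our own choosing) builds $T(2r,r)$ as the complete multipartite graph with $r$ classes of size 2, and confirms by brute force that $T(6,3)$ has $2r(r-1)=12$ edges and chromatic number $r=3$.

```python
from itertools import combinations, product

def turan_graph(r):
    """Vertex count and edge list of the Turan graph T(2r, r): the
    complete r-partite graph with classes {0,1}, {2,3}, ..., each of
    size 2."""
    cls = [(2 * i, 2 * i + 1) for i in range(r)]
    edges = [(u, v) for a, b in combinations(cls, 2)
             for u, v in product(a, b)]
    return 2 * r, edges

def chromatic_number(n, edges):
    """Brute-force chromatic number; feasible only for tiny graphs."""
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k

n, edges = turan_graph(3)           # T(6, 3)
assert n == 6 and len(edges) == 12  # 2r(r-1) = 12 edges
assert chromatic_number(n, edges) == 3
```

Deleting any edge of the graph returned by `turan_graph(r)` produces exactly the strict subgraphs covered by part 3 of the lemma above.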
The proofs of parts [\[case:matsumoto-6\]](#case:matsumoto-6){reference-type="ref" reference="case:matsumoto-6"}, [\[case:matsumoto-7\]](#case:matsumoto-7){reference-type="ref" reference="case:matsumoto-7"}, [\[case:matsumoto-2\]](#case:matsumoto-2){reference-type="ref" reference="case:matsumoto-2"}, and [\[case:matsumoto-3\]](#case:matsumoto-3){reference-type="ref" reference="case:matsumoto-3"} of Lemma [\[lem:base-case\]](#lem:base-case){reference-type="ref" reference="lem:base-case"} are technical and not enlightening, and so are given in the appendix. Here we present first the proof of part [\[case:matsumoto-interesting\]](#case:matsumoto-interesting){reference-type="ref" reference="case:matsumoto-interesting"}, and then deal with part [\[case:matsumoto-bipartite\]](#case:matsumoto-bipartite){reference-type="ref" reference="case:matsumoto-bipartite"}. *Proof of part [\[case:matsumoto-interesting\]](#case:matsumoto-interesting){reference-type="ref" reference="case:matsumoto-interesting"}.* Let $G\subsetneqq T(2r,r)$ satisfy $\abs{V(G)}=2r$ and $\chi(G)=r$, and let $C_1=\set{x_1,y_1},\dotsc,C_r=\set{x_r,y_r}$ be the colour classes in an arbitrary proper $r$-colouring of $G$. We present a strategy for Maker to win the vertex colouring game with $2r-2$ colours. Note first that for Maker to win, it suffices that on two occasions during the game a colour is repeated or a blank is played. Consider the final play of the game, which is made by Breaker; say this play is at vertex $y_r$. The only reason why $y_r$ could not be given the same colour as $x_r$ is that some other vertex already has the same colour as $x_r$. Thus it suffices for Maker to force a single colour to be repeated, with both instances of that colour occurring in classes which are completely coloured before the final play of the game. For the same reason, if Breaker ever plays a blank before Maker's final play of the game, then Maker will win, so we may assume that Breaker does not play blanks.
Since $G$ is a strict subgraph of $T(2r,r)$, there must be some edge missing between colour classes. Relabelling if necessary, we may assume that edge $x_1x_2$ is not present. Maker's strategy is to give vertex $x_1$ colour 1, and we split into cases based on Breaker's response. If Breaker replies by playing at vertex $y_1$, then either he plays colour 1, in which case a colour is repeated and we are done, or he plays a different colour, in which case Maker plays colour 1 at vertex $x_2$. Wherever Breaker plays next, Maker then uses her next turn to ensure vertex $y_2$ is coloured with any colour, which is sufficient for Maker to win. If Breaker plays colour 1 at some vertex other than $y_1$, which we may w.l.o.g. assume is $x_2$, then Maker uses her next two turns to ensure vertices $y_1$ and $y_2$ are coloured (with arbitrary colours), which is again sufficient to win. If Breaker plays some other colour, say colour 2, at some vertex $x_i$ for $i\geq 2$, then Maker can reply by giving $y_i$ colour 2, which is again sufficient to win. Thus when there are $2r-2$ colours available, Maker has a winning strategy, as required. ◻ Next we deal with part [\[case:matsumoto-bipartite\]](#case:matsumoto-bipartite){reference-type="ref" reference="case:matsumoto-bipartite"}. This is achieved via the following result, which establishes a greedy strategy for Maker. [\[lem:greedy\]]{#lem:greedy label="lem:greedy"} If a graph $G$ has at most $\ceil{k/2}$ vertices of degree at least $k$, then $\chi_\text{gb}(G)\leq k$. *Proof.* Assume that $k$ colours are available. Maker's strategy on her first $\ceil{k/2}$ turns is to arbitrarily colour any vertex of degree at least $k$. Note that if all such vertices are coloured, then Maker wins, as a vertex of degree strictly less than $k$ will always be playable. Furthermore, just before Maker's $i$^th^ turn, $2(i-1)$ vertices have been coloured. 
Thus on Maker's $\ceil{k/2}$^th^ turn, $2(\ceil{k/2}-1)\leq k-1$ vertices have been coloured, so all vertices are still colourable. Therefore all vertices of degree at least $k$ can be coloured, and so this is a winning strategy for Maker. ◻ Note that Lemma [\[lem:greedy\]](#lem:greedy){reference-type="ref" reference="lem:greedy"} immediately implies that for any subgraph $H$ of $K_{2,r}$, $\chi_\text{gb}(H)\leq 3$, and part [\[case:matsumoto-bipartite\]](#case:matsumoto-bipartite){reference-type="ref" reference="case:matsumoto-bipartite"} follows. We now state and prove two further lemmas which we will need in our proof. The first lemma is the tool which allows us to use induction in our proof of Theorem [\[thm:comparison-better\]](#thm:comparison-better){reference-type="ref" reference="thm:comparison-better"}. Our proof of this lemma makes use of a so-called *imagination strategy*, as popularised by the seminal paper of Brešar, Klavžar, and Rall [@brevsar2010domination]. [\[lem:subgraph-imagination\]]{#lem:subgraph-imagination label="lem:subgraph-imagination"} Assume that Maker and Breaker have each played $t$ times in the vertex colouring game with blanks, and that $U\subseteq V(G)$ is the set of vertices which have been played (so $\abs{U}=2t$). Let $X$ be the set of colours that have been used (not including blanks) and let $c_1,\dotsc,c_s\in X$ be some distinct colours. Finally, let $D_1,\dotsc,D_s,E_1,\dots,E_q\subseteq V(G)$ be some disjoint non-empty independent sets, where $D_i$ includes all vertices which have been given colour $c_i$ for each $1\leq i \leq s$. If we let $G'=G-U$ be the subgraph induced by the unplayed vertices, then we have $$\begin{aligned} \label{eq:induction} \chi_\text{gb}(G;E_1,\dotsc,E_q)\leq \chi_\text{gb}(G';D_1',\dotsc,D_s',E_1',\dotsc,E_q')+\abs{X}, \end{aligned}$$ where $D_i'\mathrel{\coloneqq}D_i\mathbin{\cap}V(G')$ and $E_j'$ is either $E_j\mathbin{\cap}V(G')$, or $\varnothing$ if Breaker removed $E_j$ as a class marked for blanks, for all $1\leq i\leq s$ and $1\leq j\leq q$.
*Proof.* We prove this via an imagination strategy, constructing a winning strategy for Maker on $G$ with $E_1,\dotsc,E_q$ marked for blanks using $\chi_\text{gb}(G';D_1',\dotsc,D_s',E_1',\dotsc,E_q')+\abs{X}$ colours. After the first $2t$ plays of the game in $G$, Maker imagines playing on $G'\subseteq G$ with sets $D_1',\dotsc,D_s',E_1',\dotsc,E_q'$ marked for blanks using the remaining colours. Call the set of remaining colours $Y$. In short, Maker playing a blank in $D_i'$ in the imagined game is equivalent to her playing colour $c_i$ in the real game. To move Maker's plays from the imagined game on $G'$ to the real game on $G$ we do the following. If Maker plays a colour from $Y$ or a blank in some $E_j'$, then this move may be copied to $G$. Otherwise, Maker played a blank at $v\in D_i'$. In the real game on $G$, we instead let Maker play colour $c_i$ at $v$, which is playable on vertices in $D_i$ so long as Breaker has not played colour $c_i$ from move $2t+1$ onward. To move Breaker's plays from the real game on $G$ to the imagined game on $G'$, we do the following. If Breaker plays a colour from $Y$ or a blank, then this move is copied over into the imagined game. If Breaker plays a colour from $X$, we instead have Breaker play a blank in the imagined game. In particular, if Breaker plays $c_i\in X$ outside of $D_i$, then Breaker removes $D_i'$ from the list of classes marked for blanks, and otherwise Breaker does not remove a class marked for blanks. This means that if Breaker plays colour $c_i$ outside of $D_i$ from move $2t+1$ onward, possibly preventing colour $c_i$ from being played in $D_i'$, then in the imagined game Breaker will play a blank outside of $D_i'$ and disallow Maker from playing blanks in $D_i'$. Thus if Maker plays a blank at $x\in D_i'\subseteq G'$, then we know that Breaker has not yet played colour $c_i$ from move $2t+1$ onward in the real game, so playing colour $c_i$ at $x$ in the real game is a valid move. 
Therefore Maker's strategy can be converted from the imagined game to the real game, and the result follows. ◻ Our second lemma deals with a particular case in the induction, wherein the subgraph induced by the unplayed vertices is isomorphic to $T(2r,r)$. [\[lem:annotated-turan\]]{#lem:annotated-turan label="lem:annotated-turan"} If $G$ is a $2r$-vertex subgraph of $T(2r,r)$ for some $r$, and $D_1,\dotsc,D_s$ are some $s\geq 1$ of the vertex classes corresponding to colour classes in the unique (up to relabelling colours) $r$-colouring of $T(2r,r)$, then $$\begin{aligned} \chi_\text{gb}(G;D_1,\dotsc,D_s)\leq 2r - s - 1. \end{aligned}$$ *Proof.* We present a strategy for Maker whereby, for each $1\leq i\leq s$, either Maker plays a blank in $D_i$ or Breaker plays a blank to remove $D_i$ as a class marked for blanks, and from this Maker wins with $2r-s-1$ colours. Maker's strategy while there is still a class marked for blanks is to always play a blank (if possible), playing in the same class as Breaker's previous move if that class is not already fully coloured, and in an arbitrary class marked for blanks otherwise. Maker's aim is, for each original class $D_i$ marked for blanks, to have some blank played either in $D_i$ by Maker, or outside of $D_i$ by Breaker (who then removed $D_i$ from the list of classes marked for blanks). Consider some such $D_i$. The only way Maker could fail to achieve her goal is if $D_i$ was filled before she had a chance to play in it, and Breaker never removed it as a class marked for blanks. But as soon as Breaker plays in $D_i$, Maker will play a blank in $D_i$, and so Maker succeeds in her goal. In particular, at least $s$ blanks are played in the game, so it suffices to prove that either $s+1$ blanks were played or some colour was repeated. Consider the final move of the game, which is made by Breaker since there is an even number of vertices.
Assume that all colours have been played at least once, as otherwise Breaker has a legal move, and Maker wins. Say this final move is at vertex $x\in D_i=\set{x,y}$. Vertex $y$ has already been played. If $y$ was given some colour $c$ already, then either Breaker can give $x$ colour $c$, and Maker wins, or colour $c$ has already been given to another vertex in $G$, in which case Maker also wins. If $y$ has been played and is blank, then note that this blank must have been played by Maker, as otherwise Maker would already have played at $x$. But then either playing a blank is a legal move at $x$, meaning at least $s+1$ blanks are played, or Breaker played a blank which removed $D_i$ as a class marked for blanks. But in this case there were two blanks associated with class $D_i$, and so there must be a colour remaining for Breaker to play. Thus Maker wins with $2r-s-1$ colours, as required. ◻ Together with the proofs in the appendix, this completes our proof of Lemma [\[lem:base-case\]](#lem:base-case){reference-type="ref" reference="lem:base-case"}. Thus only the inductive step of our proof remains, which is presented in the following section. # Proof of Theorem [\[thm:comparison-better\]](#thm:comparison-better){reference-type="ref" reference="thm:comparison-better"} {#sec:main-proof} Let $G=(V,E)$ and let $V=C_1\mathbin{\cup}\dots\mathbin{\cup}C_p$ be a partition of $V$ into $p$ independent sets. We shall proceed by producing the first few moves of Maker's strategy, and then remove played vertices to proceed by means of Lemmas [\[lem:subgraph-imagination\]](#lem:subgraph-imagination){reference-type="ref" reference="lem:subgraph-imagination"} and [\[lem:annotated-turan\]](#lem:annotated-turan){reference-type="ref" reference="lem:annotated-turan"}, as required for the induction. When we apply Lemma [\[lem:subgraph-imagination\]](#lem:subgraph-imagination){reference-type="ref" reference="lem:subgraph-imagination"}, we will do so in such a way that a class marked for blanks is one of the independent sets $C_1,\dots,C_p$. The base case of our induction is provided by Lemma [\[lem:base-case\]](#lem:base-case){reference-type="ref" reference="lem:base-case"}, and so here we present only the inductive step. We first present the strategy which Maker will use to achieve the claimed bounds. ## Maker's strategy {#subsec:maker-strat} If there are $s\geq 1$ nonempty sets $D_1,\dotsc,D_s$ marked for blanks, with $\abs{D_1}\leq\dots\leq\abs{D_s}$, then Maker plays a blank in $D_1$.
Assuming instead that $s=0$, if there is some class with odd size, then Maker plays colour 1 at a vertex $u\in C_i$ of maximal degree in its class, where $C_i$ is a class of minimal odd size. Otherwise, either part [\[case:matsumoto-interesting\]](#case:matsumoto-interesting){reference-type="ref" reference="case:matsumoto-interesting"} of Lemma [\[lem:base-case\]](#lem:base-case){reference-type="ref" reference="lem:base-case"} applies, or there is a class of size at least 4, and Maker plays arbitrarily in a class of minimum even size. In any case, we may w.l.o.g. say that Maker played at vertex $u$ in class $C_1$. If further moves of Maker's strategy need to be specified, they depend on Breaker's reply to Maker's first move, and so are detailed in the analysis to follow. One strategy which may be employed by Maker from her second turn onward is to play in the same class as Breaker's most recent move, using the same colour. We refer to such a move as Maker 'copying Breaker'. We now show that this strategy works, starting with the case in which Maker plays a blank, followed by the case in which Maker plays a colour. We w.l.o.g. assume that Breaker plays at a vertex $v$ in either $C_1$ or $C_2$. Let $G'$ be the graph resulting from an application of Lemma [\[lem:subgraph-imagination\]](#lem:subgraph-imagination){reference-type="ref" reference="lem:subgraph-imagination"}. Furthermore, let $X$ be the set of colours played on $V(G)\setminus V(G')$, and $n$ and $n'$ be the orders of $G$ and $G'$ respectively. Let $\mathcal{D}$ and $\mathcal{D}'$ be the sets of nonempty classes marked for blanks in $G$ and $G'$ respectively. Finally, let $p'=\abs{\set{i\mathbin{\colon}C_i\mathbin{\cap}G'\neq \varnothing}}$ be the number of independent sets in our partition of $G'$. We need to show that equation [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"} holds for $G'$, that $n'\geq 2$, and that if $G'\simeq T(2r,r)$ or $n'\leq 5$ then $\abs{\mathcal{D}'}\geq 1$. Note that by rearranging equation [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"}, we see that it suffices to show the following inequality.
$$\begin{aligned} (p-p')+(n-n') / 2 &\geq \abs{X} + \max(\abs{\mathcal{D}} - \abs{\mathcal{D}'},0).\label{eq:general-induction-target}\end{aligned}$$ We thus see that if the number of classes marked for blanks decreases, then we may apply the induction hypothesis to conclude immediately, which is why it suffices to show that if $n'\leq 5$ then $\abs{\mathcal{D}'}\geq 1$. ## Maker plays a blank We will set $G'=G-\set{u,v}$, and so as we may assume that $n\geq 4$ (as otherwise we could conclude by Lemma [\[lem:base-case\]](#lem:base-case){reference-type="ref" reference="lem:base-case"}), we see that $n'\geq 2$. Thus all we need to do is prove that equation [\[eq:general-induction-target\]](#eq:general-induction-target){reference-type="eqref" reference="eq:general-induction-target"} holds. Indeed, if Breaker plays a blank, then we need to prove the following: $$\begin{aligned} \label{eq:induction-blank-blank} p-p'+1 \geq \max(\abs{\mathcal{D}} - \abs{\mathcal{D}'},0).\end{aligned}$$ Breaker can remove one class marked for blanks, but any further classes marked for blanks which are removed must come from sets $D_i$ being completely coloured, and thus contribute to both $p-p'$ and $\abs{\mathcal{D}} - \abs{\mathcal{D}'}$. From this we deduce equation [\[eq:induction-blank-blank\]](#eq:induction-blank-blank){reference-type="eqref" reference="eq:induction-blank-blank"}. If Breaker plays a colour, then we need to prove the following. $$\begin{aligned} \label{eq:induction-blank-col} p-p' \geq \max(\abs{\mathcal{D}} - \abs{\mathcal{D}'},0).\end{aligned}$$ Here, the only classes marked for blanks which are removed are those which are completely coloured. So for the same reason as in the previous case, we deduce equation [\[eq:induction-blank-col\]](#eq:induction-blank-col){reference-type="eqref" reference="eq:induction-blank-col"}. This completes this case of the induction. ## Maker plays a colour We assume by relabelling that Maker played colour 1, and again call the vertex played by Breaker $v$, and w.l.o.g. assume that either $v\in C_1$ or $v\in C_2$.
By relabelling colours, we may further assume that Breaker's play is of colour 1 or 2, or a blank. If we consider Maker's first $t$ turns, so we remove $2t$ vertices from $G$ to form $G'$, then we find by simplifying equation [\[eq:general-induction-target\]](#eq:general-induction-target){reference-type="eqref" reference="eq:general-induction-target"} that we need to prove one of the following inequalities. $$\begin{aligned} p-p'+t &\geq \abs{X} & &\text{ if }G'\not\simeq T(2r,r)\text{ or }n'\geq 6\text{ or }\abs{\mathcal{D}'}\geq 1\label{eq:induction-col},\\ p-p'+t &\geq \abs{X}+1 & &\text{ otherwise.}\label{eq:induction-col-safe}\end{aligned}$$ Note first that inequality [\[eq:induction-col\]](#eq:induction-col){reference-type="eqref" reference="eq:induction-col"} is implied by [\[eq:induction-col-safe\]](#eq:induction-col-safe){reference-type="eqref" reference="eq:induction-col-safe"}, and so proving the latter is sufficient in all cases. Indeed, when [\[eq:induction-col-safe\]](#eq:induction-col-safe){reference-type="eqref" reference="eq:induction-col-safe"} holds, Lemma [\[lem:subgraph-imagination\]](#lem:subgraph-imagination){reference-type="ref" reference="lem:subgraph-imagination"} can be used to immediately deduce the desired result. Next note that if there is a class of size 1, then Maker plays in it and we may set $G'=G-\set{u,v}$ (so $t=1$), so that $\abs{X}\leq 2$ and $p-p'\geq 1$. Furthermore, if $G'\simeq T(2r,r)$, then either Breaker replied in a class of size 1, so $p-p'=2$, or we may mark some class for blanks, as required. Thus we may assume that no class has size 1. We now distinguish an exhaustive list of five cases depending on what Breaker's reply to Maker might be, dealing with each case separately. ### Breaker plays colour 2 and no class is completely coloured We see that Breaker plays colour 2 in either $C_1$ with $\abs{C_1} \geq 3$, or in $C_2$ with $\abs{C_2}\geq 2$. In this case we need to track some further moves of the game, for which Maker's strategy is as follows.
For as long as Breaker plays previously-unplayed colours which can be copied, and no class is completely coloured, Maker keeps copying Breaker. Letting $U$ be the set of vertices played in the first $t$ rounds of the game, and $G'=G-U$, we take $t$ minimal so that either $\abs{X}=t$ or $p'<p$. We now consider four cases, depending on what vertices remain unplayed after the first $t$ rounds. 1. $G$ has only one vertex left uncoloured, in some class $C_i$. In this case we do not use the induction hypothesis, but instead conclude directly. First note that $p\leq 3$. If $p=1$, then $G$ has no edges and we are done immediately. If $p=3$, then on turn $t$ Maker and Breaker both filled in a class; say $C_1$ and $C_2$. But then, as $\abs{C_3}\geq 2$, there must be some colour $c$ played only in $C_3$, and so Maker can give the final vertex colour $c$. It remains to consider $p=2$. If $\abs{C_i}=2$ then we are done by Lemma [\[lem:greedy\]](#lem:greedy){reference-type="ref" reference="lem:greedy"}. If not, then as at most one colour appears in two different classes, there is some colour $c$ which has been played only in class $C_i$. Then Maker may give the final vertex colour $c$, and we may directly see that equation [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"} holds. 2. $G$ is completely coloured. We have $\abs{X}\leq t+1$, $n=2t$, and $p\geq 2$. However, either $p\geq 3$, Breaker repeated a colour at some point, or Breaker's final move was in a class containing some colour played in no other class, which Breaker can be forced to repeat if only $t$ colours are available. Thus $t$ colours suffice and we directly see that [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"} holds. 3. Some class $C_i$ is filled on Breaker's $t$^th^ move. We have $p-p'\geq 1$ and $\abs{X}\leq t+1$, as Maker always copied Breaker's colour.
Thus [\[eq:induction-col\]](#eq:induction-col){reference-type="eqref" reference="eq:induction-col"} holds, and we are done unless $p-p'=1$, $\abs{X}=t+1$, no class may be marked for blanks, and either $G'\simeq T(2r,r)$ or $n'\leq 5$. As $\abs{X}=t+1$, we know that no colour appears in more than one class, so either some class may be marked for blanks, or all plays were in class $C_1$, and thus $\abs{C_1}$ is even, so $n$ is also even. By assumption, $\abs{C_1}\geq 3$, so $\abs{C_1}\geq 4$. Maker played in a class of minimum even size by assumption, so either $n'\geq 6$ and we are done, or $G'\simeq E_4$, the empty graph on four vertices. But in this case, equation [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"} may be checked to hold, so we are again done. 4. Some class $C_i$ is filled on Maker's $t$^th^ move, and Breaker then plays in a different class, w.l.o.g. $C_2$. We know that vertices in at least two classes have been played, so we must have either $p-p'=2$, or some class may be marked for blanks, and so we are done. ### Breaker plays a blank There are only two sub-cases here. If Breaker plays a blank in $C_1$ and $\abs{C_1} = 2$, then set $G'=G-C_1$, so we have $p-p'=t=\abs{X}=1$, satisfying [\[eq:induction-col-safe\]](#eq:induction-col-safe){reference-type="eqref" reference="eq:induction-col-safe"} as required. If Breaker plays a blank in any other case, then we set $G'=G-\set{u,v}$ and may mark $C_1$ for blanks, again satisfying [\[eq:induction-col\]](#eq:induction-col){reference-type="eqref" reference="eq:induction-col"}. ### Breaker plays colour 1 in class $C_2$ Say Breaker plays at vertex $v$. We may set $G'=G-\set{u,v}$, so $\abs{X}=1$ and we satisfy [\[eq:induction-col\]](#eq:induction-col){reference-type="eqref" reference="eq:induction-col"} unless $G'\simeq T(2r,r)$.
But if $G'\simeq T(2r,r)$ then $\abs{C_1}=\abs{C_2}=3$, and by assumption Maker played at a vertex $u$ of maximal degree in $C_1$, which was still missing an edge to $C_2$. Thus all other vertices in $C_1$ must be missing some edge when compared to the complete multipartite graph. Assuming $G'\simeq T(2r,r)$, we see that all three vertices in $C_1$ had no edge to $v$. Say $C_1=\set{u,x,y}$. Then Maker's strategy is to play colour 1 again, at vertex $x$. Say that Breaker replies at $z\in C_i$, and set $G''=G-\set{u,v,x,z}$, noting that now $t=2$. Let $p''$ be the number of the sets $C_1,\dotsc,C_p$ with nonempty intersection with $G''$. If Breaker gives vertex $z$ colour 1, we have $\abs{X}=1$ when passing to $G''$. Otherwise, Breaker gives $z$ colour 2. If $z$ was the last uncoloured vertex in $C_i$, then we have $\abs{X}=2$ and $p-p''\geq 1$. If $z$ is not the last uncoloured vertex in $C_i$, then we may note that colour 2 has been played only at $z$ and so we may mark the class $C_i$ for blanks in $G''$. Thus either [\[eq:induction-col-safe\]](#eq:induction-col-safe){reference-type="eqref" reference="eq:induction-col-safe"} holds or some class is marked for blanks, so we are done, regardless of whether $G''\simeq T(2(r-1),r-1)$ or not. ### Breaker plays in class $C_1$ and $\abs{C_1}=2$ Set $G'=G-C_1$. Then we have removed at most 2 colours and exactly one class, as required by [\[eq:induction-col\]](#eq:induction-col){reference-type="eqref" reference="eq:induction-col"}. Note we cannot have $G'\simeq T(2r,r)$, as then $G$ would be a subgraph of $T(2(r+1),r+1)$ on $2(r+1)$ vertices, so part [\[case:matsumoto-interesting\]](#case:matsumoto-interesting){reference-type="ref" reference="case:matsumoto-interesting"} of Lemma [\[lem:base-case\]](#lem:base-case){reference-type="ref" reference="lem:base-case"} would apply to $G$ and we would be able to conclude our desired result directly. ### Breaker plays colour 1 in class $C_1$ and $\abs{C_1}\geq 3$ Set $G'=G-\set{u,v}$, removing two vertices and one colour, and mark $C_1$ for blanks.
Then $t=\abs{X}=1$ as required. We have in every case verified that Maker's strategy as detailed above leads to Maker winning the vertex colouring game with blanks, under the inductive assumption that [\[eq:comparison-target\]](#eq:comparison-target){reference-type="eqref" reference="eq:comparison-target"} holds for smaller graphs. Therefore our proof is complete and Theorem [\[thm:comparison-better\]](#thm:comparison-better){reference-type="ref" reference="thm:comparison-better"} holds. # The marking game {#sec:marking-game} Matsumoto [@matsumoto2019difference] also asked whether an inequality similar to that of our main theorem holds for the so-called marking game, or graph colouring game, which is defined as follows. Some number $k$ is fixed. Maker and Breaker alternately *mark* vertices of a graph $G=(V,E)$ by assigning them to a set $M$. A vertex may only be marked if it has at most $k-1$ marked neighbours. Maker wins if eventually $M=V$, whereas Breaker wins if some vertex becomes impossible to mark. The *marking number* $m(G)$, or *game colouring number* $\mathop{\mathrm{col}}_g(G)$, of a graph $G$ is the minimal $k$ for which Maker can win the marking game on $G$. In particular, Matsumoto asked whether $m(G)-\chi(G)\leq\floor{n/2}-1$. We answer this question in the negative with the following result. For any constant $\varepsilon> 0$, there is some integer $n$ and a graph $G$ on $n$ vertices for which $m(G)-\chi(G)>(1-\varepsilon)n$. *Proof.* Let $r$ be an integer large enough that $2/r<\varepsilon$. Then let $G=T(r^2,r)$ be the complete $r$-partite graph on $n=r^2$ vertices. Firstly, we know that $\chi(G)=r$. Next, consider the last vertex to be marked. It has $r(r-1)$ neighbours, all of which are already marked, and so $m(G)\geq r(r-1)$. Thus $m(G)-\chi(G)\geq r(r-2)=n(1-2/r)>(1-\varepsilon)n$, as required. ◻ We may also note that the Turán graphs maximise the value $m(G)$ for fixed order and chromatic number, so the above result is in this sense sharp.
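The arithmetic in the proof above is elementary, but easy to sanity-check. The following Python sketch (illustrative only; the function name is our own) packages the lower bound and verifies the chain $m(G)-\chi(G)\geq r(r-2)=n(1-2/r)>(1-\varepsilon)n$ for a sample value of $r$.

```python
def marking_game_gap_lower_bound(r):
    """For G = T(r^2, r), the complete r-partite graph on n = r^2
    vertices: chi(G) = r, and the last vertex to be marked has
    r(r-1) already-marked neighbours, giving m(G) >= r(r-1)."""
    n = r * r
    chi = r                  # one colour per class of size r
    m_lower = r * (r - 1)    # neighbours of the last marked vertex
    return m_lower - chi, n

# With 2/r < eps, the gap exceeds (1 - eps) * n, as in the proof.
eps = 0.1
r = 21  # any integer r with 2/r < eps
gap, n = marking_game_gap_lower_bound(r)
assert gap == r * (r - 2)    # m(G) - chi(G) >= r(r - 2) = n(1 - 2/r)
assert gap > (1 - eps) * n
```

The same check passes for every $r>2/\varepsilon$, which is all the proof requires.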
# Conclusion and future work {#sec:future} In this paper, we have resolved the problem of Matsumoto [@matsumoto2019difference] to determine all graphs $G$ for which $\chi_\text{g}(G)-\chi(G)=\floor{n/2}-1$. As noted in [@matsumoto2019difference], there are infinitely many graphs $G$ for which $\chi_\text{g}(G)-\chi(G)=\floor{n/2}-2$; any complete bipartite graph $K_{r,r}$ minus a perfect matching suffices, as does the Turán graph $T(2r,r+1)$. Because of this, we see that there is no stability around the equality cases of . There is however hope of some more general classification of near-equality cases, and to this end we pose the following questions. Are there any infinite families of graphs for which $\chi_\text{g}(G)-\chi(G)=\floor{n/2}-2$, besides the graphs $K_{r,r}$ minus a perfect matching and the Turán graphs $T(2r,r+1)$? We also ask the following more general question. [\[question:minus-O-1\]]{#question:minus-O-1 label="question:minus-O-1"} For which infinite families of graphs do we have $\chi_\text{g}(G)-\chi(G)=\floor{n/2}-O(1)$? One may try to answer by strengthening ; the sharp version of this statement should include some $s$-dependence on the right-hand side. We also in particular note that we have no families of graphs of odd order approaching these bounds. We therefore also pose the following question. What is the correct upper bound on $\chi_\text{g}(G)-\chi(G)$ when we consider only graphs of (large) odd order? In a similar vein, one could also consider the situation wherein Breaker starts (rather than Maker), and ask for bounds similar to the above for graphs of odd and even order. Finally, we ask how the values of $\chi_\text{g}(G)$ and $\chi_\text{gb}(G)$ can differ, which relates to a question of Zhu [@zhu1999game] from 1999. Namely, if Maker wins the vertex colouring game with $k$ colours, must she also win with $k+1$? (See [@hollom2023note] for some recent work of the author on this problem.) 
Such a question can be resolved for the vertex colouring game with blanks by a simple imagination strategy, but such a proof does not work for the usual vertex colouring game. Note that there are in fact graphs where being able to play a blank helps Breaker, for example if $G$ is the six-vertex graph formed by a four-cycle plus a pendant edge and an isolated vertex (for which $\chi_\text{gb}(G)=3$ and $\chi_\text{g}(G)=2$). However, it is unclear to the author what the answer to the following question might be. Is there some function $f\colon\mathbb{N}\to\mathbb{N}$ for which $f(k)> k$ and $\chi_\text{gb}(G)\leq f(\chi_\text{g}(G))$ for all graphs $G$? It seems that methods more detailed than those presented in this paper would be required to make progress on these problems. Nevertheless, the author is hopeful that these questions are within reach of methods not too far beyond those shown here. # Acknowledgements {#sec:acknowledgement} The author is supported by the Trinity Internal Graduate Studentship of Trinity College, Cambridge. The author would like to thank their supervisor, Professor Béla Bollobás, for his thorough reading of the manuscript and many valuable comments. # Proof of Lemma [\[lem:base-case\]](#lem:base-case){reference-type="ref" reference="lem:base-case"} {#sec:appendix} We now prove parts [\[case:matsumoto-6\]](#case:matsumoto-6){reference-type="ref" reference="case:matsumoto-6"}, [\[case:matsumoto-7\]](#case:matsumoto-7){reference-type="ref" reference="case:matsumoto-7"}, [\[case:matsumoto-2\]](#case:matsumoto-2){reference-type="ref" reference="case:matsumoto-2"}, and [\[case:matsumoto-3\]](#case:matsumoto-3){reference-type="ref" reference="case:matsumoto-3"}. As these cases concern only graphs of order seven or less, they can be checked by exhaustive case analysis, but we provide proofs here for completeness.
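The exhaustive checking alluded to above can in principle be automated. The following Python sketch (our own illustration, not used in the proofs) computes $\chi_\text{gb}$ for tiny graphs by exploring the full game tree of the vertex colouring game with blanks, under our reading of the definition: only Breaker may play blanks, no classes are marked for blanks, the game ends as soon as no vertex can legally be given a colour, and Maker wins exactly when every vertex has by then been played.

```python
from functools import lru_cache

BLANK = -1

def chi_gb(n, edges):
    """Minimum number of colours with which Maker wins the vertex
    colouring game with blanks on the graph ([n], edges), found by
    minimax search.  Feasible only for very small graphs."""
    adj = [set() for _ in range(n)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def maker_wins(k):
        def colour_moves(state):
            # every (vertex, colour) pair that is currently legal
            for v in range(n):
                if state[v] is None:
                    seen = {state[u] for u in adj[v]}
                    for c in range(k):
                        if c not in seen:
                            yield v, c

        @lru_cache(maxsize=None)
        def win(state, makers_turn):
            moves = list(colour_moves(state))
            if not moves:
                # no vertex can be given a colour: the game ends, and
                # Maker wins iff every vertex has been played
                return all(s is not None for s in state)
            if makers_turn:
                return any(win(state[:v] + (c,) + state[v + 1:], False)
                           for v, c in moves)
            # Breaker may also play a blank at any unplayed vertex
            moves += [(v, BLANK) for v in range(n) if state[v] is None]
            return all(win(state[:v] + (c,) + state[v + 1:], True)
                       for v, c in moves)

        return win((None,) * n, True)

    k = 1
    while not maker_wins(k):
        k += 1
    return k

# Order-3 sanity checks: the empty graph, the path P_3, and K_3.
assert chi_gb(3, []) == 1
assert chi_gb(3, [(0, 1), (1, 2)]) == 2
assert chi_gb(3, [(0, 1), (1, 2), (0, 2)]) == 3
```

On $P_3$, for instance, one colour does not suffice: after Maker's first move, Breaker can colour a vertex at distance two from it, leaving the vertex between them unplayable.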
Throughout our proofs, we partition the vertices of $G$ into independent sets, and consider the sequence of sizes of these sets, which we call classes. We will label these classes as $C_1,C_2,\dotsc,C_p$ with $\abs{C_1}\geq\abs{C_2}\geq\dots\geq\abs{C_p}$, and define the *class-size sequence* $\mathcal{C}(G)=(\abs{C_1},\abs{C_2},\dotsc,\abs{C_p})$. Throughout these proofs, we will often state that Maker's strategy is to *copy Breaker*. By this we mean that Maker should play in the same set $C_i$ as Breaker, and use the same colour. In the case that Breaker played a blank, Maker instead repeats any colour already used in $C_i$ if possible, or plays a new, previously-unused colour if not. *Proof of part [\[case:matsumoto-6\]](#case:matsumoto-6){reference-type="ref" reference="case:matsumoto-6"}.* Graphs with class-size sequence $(2,2,2)$ have already been dealt with in part [\[case:matsumoto-interesting\]](#case:matsumoto-interesting){reference-type="ref" reference="case:matsumoto-interesting"}, so do not need to be considered here. We consider all possible class-size sequences in turn, and show that $\chi_\text{gb}(G)\leq p+1$. Assume for contradiction that this is not the case. - $\mathcal{C}(G)=(5,1)$ or $\mathcal{C}(G)=(4,2)$. There are at most 2 vertices of degree at least 3, so Lemma [\[lem:greedy\]](#lem:greedy){reference-type="ref" reference="lem:greedy"} gives us $\chi_\text{gb}(G)\leq 3=\chi(G)+1\leq p+1$. - $\mathcal{C}(G)=(3,3)$. By Lemma [\[lem:greedy\]](#lem:greedy){reference-type="ref" reference="lem:greedy"}, some vertex $x$ must have degree 3. Breaker only wins if one class contains all three colours. Say $x\in C_1$, and Maker plays colour 1 at $x$. Then however Breaker plays, Maker may copy Breaker. If this was in $C_1$, then we are done. If it was in $C_2$, then however Breaker next plays, Maker can play colour 1 again in $C_1$, so neither class can have all colours. - $\mathcal{C}=(4,1,1)$. Lemma [\[lem:greedy\]](#lem:greedy){reference-type="ref" reference="lem:greedy"} gives that we would need three vertices of degree at least 4, which is impossible here. - $\mathcal{C}=(3,2,1)$. By Lemma [\[lem:greedy\]](#lem:greedy){reference-type="ref" reference="lem:greedy"}, all vertices in classes $C_2$ and $C_3$ must have degree at least 4. Thus some $x\in C_1$ has degree 3. Maker plays colour 1 at $x$, and however Breaker replies, Maker may play colour 1 again in $C_1$.
Then Maker may use her third move to colour the single vertex in $C_3$ if it is not already coloured, and win. - $\mathcal{C}=(3,1,1,1)$. implies that $G$ must be complete 4-partite. But then Maker can play colour 1 in $C_1$ for both of her first two moves. - $\mathcal{C}=(2,2,1,1)$ and $\mathcal{C}=(2,1,1,1,1)$ and $\mathcal{C}=(1,1,1,1,1,1)$. immediately gives a contradiction. This completes all cases, and hence the proof. ◻ *Proof of part [\[case:matsumoto-7\]](#case:matsumoto-7){reference-type="ref" reference="case:matsumoto-7"}.* Similarly to the previous case, we need to show that $\chi_\text{gb}(G)\leq p+1$, and so assume for contradiction that this is not the case. - $\mathcal{C}(G)\in\set{(6,1),(5,2),(5,1,1),(4,1,1,1),(2,2,1,1,1),(2,1,1,1,1,1),(1,1,1,1,1,1,1)}$. Done by . - $\mathcal{C}=(4,3)$. If there is a vertex $x\in C_1$ of degree 3, then Maker starts by playing colour 1 at $x$. Then Maker can keep playing colour 1 in $C_1$ until it is full. But then at most three different colours have been played in $C_1$, so all vertices in $C_2$ remain playable. If no vertex in $C_1$ has degree 3, then we see that some vertex in $C_2$ must also have degree less than 3. Thus applies and we are done. - $\mathcal{C}=(4,2,1)$. By we must be in a complete 3-partite graph. But then Maker plays colour 1 in $C_3$, and then may copy Breaker for her remaining moves, which uses at most four colours. - $\mathcal{C}=(3,3,1)$. By , there must be a vertex $x$ of degree 4 in $C_1\mathbin{\cup}C_2$. Say it is in $C_1$. Then Maker gives $x$ colour 1, gives another vertex in $C_1$ colour 1 on her second turn, and then colours the one vertex in $C_3$ on the third turn if it is not already coloured. Then every other vertex remains colourable. - $\mathcal{C}=(3,2,2)$. If some vertex in $C_1$ has degree 4, then Maker can play colour 1 there, and then copy Breaker's move to fill some class. Maker then again copies Breaker, and the final class remains colourable. 
Otherwise, say $C_1=\set{x_1,y_1,z_1},C_2=\set{x_2,y_2},C_3=\set{x_3,y_3}$, and assume that there is no edge from $x_1$ to $x_2$. Maker plays colour 1 at $x_1$. If Breaker then plays any colour other than 1, Maker can copy Breaker and win. So we assume Breaker plays colour 1 at $x_2$. Then Maker plays colour 2 at $y_2$. Then, however Breaker plays, Maker may play in the same class to fill it, and then the remaining two vertices remain colourable. - $\mathcal{C}=(3,2,1,1)$. implies that the graph must be complete 4-partite. But then Maker may play colour 1 in $C_1$ for her first move, and then copy Breaker's moves when possible, and play in the other class of size 1 if not. - $\mathcal{C}=(3,1,1,1,1)$. implies that the graph must be complete 5-partite. But then Maker may play colour 1 in class $C_1$ for both of her first two moves, and win. This completes all cases, and hence the proof. ◻ *Proof of part [\[case:matsumoto-2\]](#case:matsumoto-2){reference-type="ref" reference="case:matsumoto-2"}.* The only class-size sequences are $(2)$ and $(1,1)$. - $\mathcal{C}=(2)$, so $p=1$. We have $\chi_\text{gb}(G;V(G))=0=p-1$, as required. - $\mathcal{C}=(1,1)$, so $p=2$. Let $V(G)=\set{x,y}$. Then $\chi_\text{gb}(G;\set{x},\set{y})\leq\chi_\text{gb}(G;\set{x})=1=p-1$, as required. This completes all cases, and so also the proof. ◻ *Proof of part [\[case:matsumoto-3\]](#case:matsumoto-3){reference-type="ref" reference="case:matsumoto-3"}.* There are three possible class-size sequences for us to consider: $(3)$, $(2,1)$, and $(1,1,1)$. - $\mathcal{C}=(3)$, so $p=1$. We have $\chi_\text{gb}(G;V(G))=0=\chi(G)-1$, as required. - $\mathcal{C}=(2,1)$, so $p=2$. If $C_2$ is marked for blanks, then Maker plays a blank in it. Then colour 1 can be played at both remaining vertices. If the set of size 2 is marked for blanks, then Maker plays colour 1 in the set of size 1. Then both players are forced to play blanks in the set of size 2, so Maker wins. - $\mathcal{C}=(1,1,1)$, so $p=3$. 
We need to show that $\chi_\text{gb}(G;D_1,\dotsc,D_s)\leq 2$ when $s\geq 1$. But if two colours are available, then Maker can play a blank on her first move, after which she is guaranteed to win. This completes all cases, and so also the proof. ◻
{ "id": "2309.01583", "title": "On graphs with maximum difference between game chromatic number and\n chromatic number", "authors": "Lawrence Hollom", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Optimal Transport is a useful metric to compare probability distributions and to compute a pairing given a ground cost. Its entropic regularization variant (eOT) is crucial to obtain fast algorithms and to reflect fuzzy/noisy matchings. This work focuses on Inverse Optimal Transport (iOT), the problem of inferring the ground cost from samples drawn from a coupling that solves an eOT problem. It is a relevant problem that can be used to infer unobserved/missing links, and to obtain meaningful information about the structure of the ground cost yielding the pairing. On one side, iOT benefits from convexity, but on the other side, being ill-posed, it requires regularization to handle the sampling noise. This work presents an in-depth theoretical study of $\ell_1$ regularization, which can model for instance Euclidean costs with sparse interactions between features. Specifically, we derive a sufficient condition for the robust recovery of the sparsity of the ground cost that can be seen as a far-reaching generalization of the Lasso's celebrated "Irrepresentability Condition". To provide additional insight into this condition, we work out in detail the Gaussian case. We show that as the entropic penalty varies, the iOT problem interpolates between a graphical Lasso and a classical Lasso, thereby establishing a connection between iOT and graph estimation, an important problem in ML. author: - | Francisco Andrade\ CNRS and ENS, PSL Université\ `andrade.francisco@ens.fr` - | Gabriel Peyré\ CNRS and ENS, PSL Université\ `gabriel.peyre@ens.fr` - | Clarice Poon\ University of Warwick\ `clarice.poon@warwick.ac.uk` bibliography: - references.bib title: Sparsistency for Inverse Optimal Transport --- # Introduction Optimal transport has emerged as a key theoretical and numerical ingredient in machine learning for performing learning over probability distributions. 
It enables the comparison of probability distributions in a "geometrically faithful" manner by lifting a ground cost (or "metric" in a loose sense) between pairs of points to a distance between probability distributions, metrizing the convergence in law. However, the success of this OT approach to ML is inherently tied to the hypothesis that the ground cost is adapted to the problem under study. This necessitates the exploration of ground metric learning. However, it is exceptionally challenging due to its a priori highly non-convex nature when framed as an optimization problem, thereby inheriting complications in its mathematical analysis. As we illustrate in this theoretical article, these problems become tractable -- numerically and theoretically -- if one assumes access to samples from the OT coupling (i.e., having access to some partial matching driven by the ground cost). Admittedly, this is a restrictive setup, but it arises in practice (refer to subsequent sections for illustrative applications) and can also be construed as a step in a more sophisticated learning pipeline. The purpose of this paper is to propose some theoretical understanding of the possibility to stably learn a ground cost from partial matching observations. ## Previous Works #### Entropic Optimal Transport. OT has been instrumental in defining and studying various procedures at the core of many ML pipelines, such as bag-of-features matching [@rubner-2000], distances in NLP [@kusner2015word], generative modeling [@arjovsky2017wasserstein], flow evolution for sampling [@de2021diffusion], and even single-cell trajectory inference [@schiebinger2019optimal]. We refer to the monographs [@santambrogio2015optimal] for detailed accounts on the theory of OT, and [@peyre2019computational] for its computational aspects. Of primary importance to our work, entropic regularization of OT is the workhorse of many ML applications. 
It enables a fast and highly parallelizable estimation of the OT coupling using the Sinkhorn algorithm [@Sinkhorn64]. More importantly, it defines a smooth distance that incorporates the understanding that matching procedures should be modeled as a noisy process (i.e., should not be assumed to be 1:1). These advantages were first introduced in ML by the seminal paper of Cuturi [@CuturiSinkhorn], and this approach finds its roots in Schrödinger's work in statistical physics [@leonard2012schrodinger]. The role of noise in matching (with applications in economics) and its relation to entropic OT were advanced in a series of papers by Galichon and collaborators [@galichon2010matching; @dupuy2014personality; @galichon2022cupid]; see the book [@galichon2018optimal]. These works are key inspirations for the present paper, which aims at providing more theoretical understanding in the case of inverse OT (as detailed next). #### Metric Learning. The estimation of some metrics from pairwise interactions (either positive or negative) falls into the classical field of metric learning in ML, and we refer to the monograph [@bellet2013survey] for more details. In contrast to the inverse OT (iOT) problem considered in this paper, classical metric learning is more straightforward, as no global matching between sets of points is involved. This allows the metric to be directly optimized, while the iOT problem necessitates some form of bilevel optimization. Similarly to our approach, since the state space is typically continuous, it is necessary to restrict the class of distances to render the problem tractable. The common option, which we also adopt in our paper to exemplify our findings, is to consider the class of Mahalanobis distances. These distances generalize the Euclidean distance and are equivalent to computing a vectorial embedding of the data points. See, for instance, [@xing2002distance; @weinberger2006distance; @davis2008structured]. #### OT Ground Metric Learning. 
The problem of estimating the ground cost driving OT in a supervised manner was first addressed by [@cuturi2014ground]. Unlike methods that have access to pairs of samples, the ground metric learning problem requires pairs of probability distributions and then evolves into a classical metric learning problem, but within the OT space. The class of ground metrics can be constrained, for example, by utilizing Mahalanobis [@wang2012supervised; @xu2018multi; @kerdoncuff:ujm-02611800] or geodesic distances [@heitz2021ground], to devise more efficient learning schemes. The study [@zen2014simultaneous] conducts ground metric learning and matrix factorization simultaneously, finding applications in NLP [@huang2016supervised]. It is noteworthy that ground metric learning can also be linked to generative models through adversarial training [@genevay2018learning] and to robust learning [@paty2020regularized] by maximizing the cost to render the OT distance as discriminative as possible. #### Inverse Optimal Transport. The inverse optimal transport problem (iOT) can be viewed as a specific instance of ground metric learning, where one aims to infer the ground cost from partial observations of the (typically entropically regularized) optimal transport coupling. This problem was first formulated and examined by Dupuy and Galichon [@dupuy2014personality] over a discrete space (also see [@galichon2022cupid] for a more detailed analysis), making the fundamental remark that the maximum likelihood estimator amounts to solving a convex problem. The mathematical properties of the iOT problem for discrete space (i.e., direct computation of the cost between all pairs of points) are explored in depth in [@chiu2022discrete], studying uniqueness (up to trivial ambiguities) and stability to pointwise noise. Note that our theoretical study differs fundamentally as we focus on continuous state spaces. 
This "continuous" setup assumes access only to a set of couples, corresponding to matches (or links) presumed to be drawn from an OT coupling. In this scenario, the iOT is typically an ill-posed problem, and [@dupuy2016estimating; @carlier2023sista] propose regularizing the maximum likelihood estimator with either a low-rank (using a nuclear norm penalty) or a sparse prior (using an $\ell^1$ Lasso-type penalty). In our work, we concretely focus on the sparse case, but our theoretical treatment of the iOT could be extended to general structured convex regularization, along the lines of [@vaiter2015model]. While not the focus of our paper, it is noteworthy that these works also propose efficient large-scale, non-smooth proximal solvers to optimize the penalized maximum likelihood functional, and we refer to [@ma2020learning] for an efficient solver without inner loop calls to Sinkhorn's algorithm. This approach was further refined in [@stuart2020inverse], deriving it from a Bayesian interpretation, enabling the use of MCMC methods to sample the posterior instead of optimizing a pointwise estimate (as we consider here). They also propose parameterizing cost functions as geodesic distances on graphs (while we consider only linear models to maintain convexity). An important application of iOT to ML, explored by [@li2019learning], is to perform link prediction by solving new OT problems once the cost has been estimated from the observed couplings. Another category of ML application of iOT is representation learning (learning embeddings of data into, e.g., an Euclidean space) from pairwise interactions, as demonstrated by [@shi2023understanding], which recasts contrastive learning as a specific instance of iOT. #### Inverse problems and model selection. 
The iOT problem is formally a bilevel problem, as the observation model necessitates solving an OT problem as an inner-level program [@colson2005bilevel] -- we refer to [@eisenberger2022unified] for a recent numerical treatment of bilevel programming with entropic OT. The iOT can thus be conceptualized as an "inverse optimization" problem [@zhang1996calculating; @ahuja2001inverse], but with a particularly favorable structure, allowing it to be recast as a convex optimization. This provides the foundation for a rigorous mathematical analysis of performance, as we propose in this paper. The essence of our contributions is a theoretical examination of the recoverability of the OT cost from noisy observations, and particularly, the robustness to noise of the sparse support of the cost (for instance, viewed as a symmetric matrix for Mahalanobis norms). There exists a rich tradition of similar studies in the fields of inverse problem regularization and model selection in statistics. The most prominent examples are the sparsistency theory of the Lasso [@tibshirani1996regression] (least square regularized by $\ell^1$), which culminated in the theory of compressed sensing [@candes2006robust]. These theoretical results are predicated on a so-called "irrepresentability condition" [@zhao2006model], which ensures the stability of the support. While our analysis is grounded in similar concepts (in particular, we identify the corresponding irrepresentability condition for the iOT), the iOT inverse problem is fundamentally distinct due to the differing observation model (it corresponds to a sampling process rather than the observation of a vector) and the estimation process necessitates solving a linear program of potentially infinite dimension (in the limit of a large number of samples). This mandates a novel proof strategy, which forms the core of our mathematical analysis. 
## Contributions {#sec:contrib} This paper proposes the first mathematical analysis of the performance of regularized iOT estimation, focusing on the special case of sparse $\ell^1$ regularization (the $\ell^1$-iOT method). We begin by deriving the customary "irrepresentability condition" of the iOT problem, rigorously proving that it is well-defined. This condition interweaves the properties of the Hessian of the maximum likelihood functional with the sparse support of the sought-after cost. The main contribution of this paper is Theorem [Theorem 6](#thm:sparsistency){reference-type="ref" reference="thm:sparsistency"}, which leverages this abstract irrepresentability condition to ensure sparsistency of the $\ell^1$-iOT method. This relates to the robust estimation of the cost and its support in some linear model, assuming the number $n$ of samples is large enough. Specifically, we demonstrate a convergence rate of $n^{-1/2}$ in the number of samples. Our subsequent contributions are centered on the case of matching between samples of Gaussian distributions. Herein, we illustrate in Lemma [Lemma 8](#lem:hessian_formula){reference-type="ref" reference="lem:hessian_formula"} how to compute the irrepresentability condition in closed form. This facilitates the examination of how the parameters of the problem, particularly regularization strength and the covariance of the distributions, influence the success and stability of iOT. We further explore the limiting cases of small and large entropic regularization, revealing in Proposition [Proposition 9](#prop:gaussian_obj_lasso){reference-type="ref" reference="prop:gaussian_obj_lasso"} and Proposition [Proposition 10](#prop:gaussian_obj_graph_lasso){reference-type="ref" reference="prop:gaussian_obj_graph_lasso"} that iOT interpolates between the graphical lasso (to estimate the graph structure of the precision matrix) and a classical lasso. This sheds light on the connection between iOT and graph estimation procedures. 
Simple synthetic numerical explorations in Section [5.2](#sec:numerics){reference-type="ref" reference="sec:numerics"} further provide intuition about how $\varepsilon$ and the geometry of the graph associated with a sparse cost impact sparsistency. As a minor numerical contribution, we present in Appendix [9](#sec:solver){reference-type="ref" reference="sec:solver"} a large-scale $\ell^1$-iOT solver, implemented in JAX and distributed as open-source software. # Inverse optimal transport #### The forward problem Given probability distributions $\alpha\in\mathcal{P}(\mathcal{X})$, $\beta\in\mathcal{P}(\mathcal{Y})$ and cost function $c:\mathcal{X}\times\mathcal{Y}\to \mathbb{R}$, the entropic optimal transport problem seeks to compute a coupling density $$\label{EntropicOT} \mathrm{Sink}(c,\varepsilon)\triangleq \mathop{\mathrm{arg\,max}}_{\pi\in\mathcal{U}(\alpha,\beta)} \langle c,\,\pi\rangle - \frac{\varepsilon}{2} \mathrm{KL}(\pi|\alpha\otimes \beta) \quad\text{where}\quad \langle c,\,\pi\rangle \triangleq\int c(x,y) d \pi(x,y),$$ where $\mathrm{KL}(\pi|\xi)\triangleq\int \log(d\pi/d\xi)d\pi - \int d\pi$ and $\mathcal{U}(\alpha,\beta)$ is the space of all probability measures $\pi\in\mathcal{P}(\mathcal{X}\times\mathcal{Y})$ with marginals $\alpha,\beta$. #### The inverse problem The inverse optimal transport problem seeks to recover the cost function $c$ given an approximation $\hat \pi_n$ of the probability coupling $\hat \pi \triangleq \mathrm{Sink}(c,\varepsilon)$. A typical setting is an empirical probability coupling $\hat \pi_n = \frac{1}{n}\sum_{i=1}^n \delta_{(x_i,y_i)}$ where $(x_i,y_i)_{i=1}^n \overset{iid}{\sim}\hat \pi$. See Section [2.2](#sec:finite_sample){reference-type="ref" reference="sec:finite_sample"}. #### The Loss function The iOT problem has been proposed and studied in a series of papers, see Section [1.2](#sec:contrib){reference-type="ref" reference="sec:contrib"}. 
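For intuition, on discrete marginals the forward map $\mathrm{Sink}(c,\varepsilon)$ can be approximated by standard Sinkhorn scaling against the product measure. Below is a minimal numpy sketch under the $2/\varepsilon$ convention of the $\mathrm{KL}$ term above; this is only an illustration, not the paper's solver (which is described in its appendix).

```python
import numpy as np

def sink(C, a, b, eps, n_iter=1000):
    """Approximate argmax_P <C, P> - (eps/2) KL(P | a x b) over couplings of (a, b)."""
    K = (a[:, None] * b[None, :]) * np.exp(2.0 * C / eps)  # Gibbs kernel w.r.t. a x b
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)      # rescale rows to match the first marginal
        v = b / (K.T @ u)    # rescale columns to match the second marginal
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))
y = rng.normal(size=(6, 2))
A = rng.normal(size=(2, 2))
C = x @ A @ y.T                # a quadratic cost c_A(x, y) = x^T A y
a = np.full(5, 1 / 5)          # uniform marginals (an assumption of this sketch)
b = np.full(6, 1 / 6)
P = sink(C, a, b, eps=1.0)     # coupling whose marginals are (a, b)
```

The maximization (rather than minimization) of the surplus $\langle c,\pi\rangle$ is reflected in the positive exponent $2C/\varepsilon$ of the Gibbs kernel.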
The approach is typically to consider some linear parameterization of the cost $c_A(x,y)$ by some parameter $A \in \mathbb{R}^s$. The key observation of [@dupuy2016estimating] is that the negative log-likelihood of $\hat \pi$ at parameter value $A$ is given by $$\mathcal{L}(A,\hat \pi) \triangleq-\langle c_A,\,\hat \pi\rangle + W(A) \quad \text{where} \quad W(A)\triangleq\sup_{\pi\in\mathcal{U}(\alpha,\beta)} \langle c_A,\,\pi\rangle - \frac{\varepsilon}{2} \mathrm{KL}(\pi|\alpha\otimes \beta).$$ So, computing the maximum likelihood estimator $A$ for the cost corresponds to minimizing the *convex* 'loss function' $\mathcal{L}(A,\hat \pi)$. We write the parameterization as $$\Phi: A\in \mathbb{R}^s \mapsto c_A = \langle A,\,\mathbf{C}\rangle = \sum_j A_j \mathbf{C}_j, \quad \text{where} \quad \mathbf{C}_j \in \mathcal{C}(\mathcal{X}\times \mathcal{Y}).$$ A relevant example is quadratic cost functions: for $\mathcal{X}\subset \mathbb{R}^{d_1}$, $\mathcal{Y}\subset \mathbb{R}^{d_2}$, given $A\in\mathbb{R}^{d_1\times d_2}$, $c_A(x,y) = x^\top A y$. In this case, $s = d_1d_2$ and for $k=(i,j)\in [d_1]\times [d_2]$, $\mathbf{C}_k(x,y) = x_i y_j$. #### $\ell_1$-iOT To handle the presence of noisy data (typically coming from the sampling process), various regularization approaches have been proposed. In this work, we focus on the use of $\ell_1$-regularization [@carlier2023sista] to recover sparse parametrizations: $$\label{eq:primal} \mathop{\mathrm{arg\,min}}_A \mathcal{F}(A), \quad \text{where} \quad \mathcal{F}(A)\triangleq\lambda\left\| A \right\|_1 + \mathcal{L}(A,\hat \pi). 
\tag{$\mathrm{iOT-}\ell_1(\hat\pi)$}$$ #### Kantorovich formulation By considering the convex dual of $W(A)$, one can show that ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) has the following equivalent Kantorovich formulation [@carlier2023sista]: $$\label{eq:kt_inf} \mathop{\mathrm{arg\,min}}_{A,f,g} \mathcal{K}(A,f,g), \quad \text{where} \quad \mathcal{K}(A,f,g)\triangleq\mathcal{J}(A,f,g) + \lambda\left\| A \right\|_1, \quad\text{and} \tag{$\mathcal{K}_\infty$}$$ $$\mathcal{J}(A,f,g) \triangleq-\int \left( f(x)+g(y) +\Phi A(x,y) \right) d\hat \pi(x,y) + \frac{\varepsilon}{2} \int \exp\left( \frac{2(f(x)+ g(y) + \Phi A(x,y))}{\varepsilon} \right)d\alpha(x)d\beta(y).$$ Based on this formulation, various algorithms have been proposed, including alternating minimization with proximal updates [@carlier2023sista]. Section [9](#sec:solver){reference-type="ref" reference="sec:solver"} details a new large-scale solver that we use for the numerical simulations. ## Invariances and assumptions {#sec:invariances} *Assumption 1*. We first assume that $\mathcal{X}$ and $\mathcal{Y}$ are compact. Note that $\mathcal{J}$ has the translation invariance property that for any constant function $u$, $\mathcal{J}(A,f+u, g-u) = \mathcal{J}(A,f,g)$, so to remove this invariance, throughout, we restrict the optimization of ([\[eq:kt_inf\]](#eq:kt_inf){reference-type="ref" reference="eq:kt_inf"}) to the set $$\mathcal{S}\triangleq \left\{ (A,f,g)\in \mathbb{R}^s\times L^2(\alpha)\times L^2(\beta) \;;\; \int g(y)d\beta(y) = 0 \right\} .$$ Next, we make some assumptions on the cost to remove invariances in the iOT problem. *Assumption 2* (Assumption on the cost). - $\mathbb{E}_{(x,y)\sim\alpha\otimes \beta} [\mathbf{C}(x,y) \mathbf{C}(x,y)^\top ]$ is invertible. - $\left\| \mathbf{C}(x,y) \right\|\leqslant 1$ for $\alpha$-almost every $x$ and $\beta$-almost every $y$. 
- for all $k$, $\int \mathbf{C}_k(x,y)\mathrm{d}\alpha(x) = 0$ for $\beta$-a.e. $y$ and $\int \mathbf{C}_k(x,y)\mathrm{d}\beta(y) = 0$ for $\alpha$-a.e. $x$. Under these assumptions, it can be shown that iOT has a unique solution (see remark after Proposition [Proposition 2](#InverseOTLossFuncPropertiesProp){reference-type="ref" reference="InverseOTLossFuncPropertiesProp"}). Assumption [Assumption 2](#ass:main){reference-type="ref" reference="ass:main"} (i) is to ensure that $c_A = \Phi A$ is uniquely determined by $A$. Assumption [Assumption 2](#ass:main){reference-type="ref" reference="ass:main"} (ii) is without loss of generality, since we assume that $\alpha,\beta$ are compactly supported, so this holds up to a rescaling of the space. Assumption [Assumption 2](#ass:main){reference-type="ref" reference="ass:main"} (iii) is to handle the invariances pointed out in [@carlier2023sista] and [@ma2020learning]: $$\mathcal{J}(A,f,g) = \mathcal{J}(A',f',g') \iff c_A + (f\oplus g) = c_{A'} +(f'\oplus g').$$ As observed in [@carlier2023sista], any cost can be adjusted to fit this assumption: one can define $$\tilde \mathbf{C}_k(x,y) = \mathbf{C}_k(x,y) - u_k(x) - v_k(y)$$ where $u_k(x) = \int \mathbf{C}_k(x,y)d\beta(y) \quad \text{and} \quad v_k(y)=\int \mathbf{C}_k(x,y) d\alpha(x) - \int \mathbf{C}_k(x,y)d\alpha(x)d\beta(y).$ Letting $\tilde \Phi A = \sum_k A_k \tilde \mathbf{C}_k$, we have $\left( \left( f- \sum_k A_k u_k \right)\oplus \left( g - \sum_k A_k v_k \right) \right) + \tilde \Phi A = (f\oplus g )+ \Phi A.$ So optimization with the parametrization $\Phi$ is equivalent to optimization with $\tilde \Phi$. NB: For the quadratic cost $\mathbf{C}_k(x,y) = x_i y_j$ for $k=(i,j)$, condition (iii) corresponds to *recentering* the data points, and taking $x \mapsto x - \int x d\alpha(x)$ and $y \mapsto y-\int y d\beta(y)$. Condition (ii) holds if $\left\| x \right\| \vee \left\| y \right\| \leqslant 1$ for $\alpha$-a.e. $x$ and $\beta$-a.e. $y$. 
Condition (i) corresponds to invertibility of $\mathbb{E}_\alpha[x x^\top]\otimes \mathbb{E}_\beta[y y^\top]$. ## The finite sample problem {#sec:finite_sample} In practice, we are faced with the following finite sample problem: Given $\hat \pi = \mathrm{Sink}(c_{\widehat A}, \varepsilon)$, recover ${\widehat A}$ from the empirical measure $\hat \pi_n = \frac1n\sum_{i=1}^n \delta_{(x_i,y_i)}$ with $(x_i,y_i)\overset{iid}{\sim} \hat \pi$. We will consider solutions to $\mathrm{iOT-}\ell_1(\hat \pi_n)$ and derive conditions under which the support of ${\widehat A}$ is identifiable. The problem $\mathrm{iOT-}\ell_1(\hat \pi_n)$ can be formulated entirely in finite dimensions as follows. Let $$H(P|Q) \triangleq\sum_{i,j} P_{i,j} ( \log(P_{i,j}/Q_{i,j}) - 1).$$ Note that $\hat \pi_n$ has marginals $\hat a_n =\frac1n \sum_{i=1}^n \delta_{x_i}$ and $\hat b_n =\frac1n \sum_{i=1}^n \delta_{y_i}$. We can interpret $\hat \pi_n$ as the matrix $\hat P_n = \frac{1}{n}\mathrm{Id}_{n\times n}$ and the "noisy" primal problem can be equivalently written as $$\label{eq:primal_n} \min_{A\in\mathbb{R}^s} \sup_{P\in\mathbb{R}^{n\times n}_+} \lambda\left\| A \right\|_1 + \langle \Phi_n A,\,P-\hat P_n \rangle -\frac{\varepsilon}{2} H(P)\quad s.t.\quad P\mathds{1}= \frac1n \mathds{1} \quad \text{and} \quad P^\top \mathds{1}= \frac1n \mathds{1}, \tag{$\mathcal{P}_n$}$$ where we write $H(P)\triangleq H( P |\frac{1}{n^2}\mathds{1}\otimes \mathds{1})$ and $\Phi_n A = \sum_k A_k C_k, \quad \text{where} \quad C_k =( \mathbf{C}_k(x_i,y_j))_{i,j\in [n]} \in \mathbb{R}^{n\times n}.$ Note that the finite dimensional problem has the same invariances as ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}), so we will take $C_k$ to be centred so that for all $j$, $\sum_i (C_k)_{i,j} = 0$ and for all $i$, $\sum_j (C_k)_{i,j} = 0$. 
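With uniform empirical marginals, the centring of $C_k$ just described reduces to the classical double-centring of a matrix (subtracting row and column means and adding back the grand mean, which matches the corrections $u_k$, $v_k$ above). A small numpy sketch, with a helper name of our own choosing:

```python
import numpy as np

def center_cost(C):
    """Double-centre a matrix so that every row sum and every column sum vanishes."""
    return C - C.mean(axis=0, keepdims=True) - C.mean(axis=1, keepdims=True) + C.mean()

rng = np.random.default_rng(1)
Ck = center_cost(rng.normal(size=(4, 7)))   # a centred cost matrix C_k
```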
The finite sample Kantorovich formulation is $$\label{eq:kt_n} \inf_{A,F,G \in \mathcal{S}_n} \mathcal{K}_n(A,F,G) \quad \text{where} \quad \mathcal{K}_n(A,F,G) \triangleq\mathcal{J}_n(A,F,G)+ \lambda \left\| A \right\|_1 \quad\text{and} \tag{$\mathcal{K}_n$}$$ $$\mathcal{J}_n(A,F,G) \triangleq- \sum_{i,j} \left( F_i + G_j + (\Phi_n A)_{i,j} \right){ \left( \hat P_n \right)_{i,j}} + \frac{\varepsilon}{2 n^2} \sum_{i,j} \exp\left( \frac{2}{\varepsilon}(F_i+ G_j + (\Phi_n A)_{i,j}) \right),$$ and we restrict the optimization of ([\[eq:kt_n\]](#eq:kt_n){reference-type="ref" reference="eq:kt_n"}) over $\mathcal{S}_n \triangleq \left\{ (A,F,G)\in\mathbb{R}^s\times \mathbb{R}^n\times \mathbb{R}^n \;;\; \sum_j G_j = 0 \right\} .$ # The certificate for sparsistency {#sec:duality} ## Certificate and non-degeneracy In this section we present a sufficient condition for support recovery, that we term *non-degeneracy of the certificate*. Under non-degeneracy of the certificate we obtain support recovery as stated in Theorem [Theorem 3](#certificateTheorem){reference-type="ref" reference="certificateTheorem"}, a known result whose proof can be found *e.g.* in [@lee2015model]. This condition can be seen as a generalization of the celebrated *Lasso's Irrepresentability condition* (see *e.g.* [@hastie2015statistical]) -- Lasso corresponds to having a quadratic loss instead of $\mathcal{L}(A,\hat \pi)$, in which case $\nabla^2_A W(A)$ in the definition below reduces to a fixed matrix. In what follows, we denote $u_I \triangleq(u_i)_{i \in I}$ and $U_{I,J} \triangleq(U_{i,j})_{i \in I, j \in J}$ the restriction operators. **Definition 1**. 
*The *certificate* with respect to $A$ and support $I=\{i : A_i \neq 0\}$ is $$\begin{aligned} \label{FuchsCert}\tag{C} z^\ast_A \triangleq \nabla^2W(A)_{(:,I)}(\nabla^2W(A)_{(I,I)})^{-1} \mathop{\mathrm{sign}}(A)_I.\end{aligned}$$ We say that it is non-degenerate if $\| (z^\ast_{A})_{I^c}\|_\infty<1$.* The next proposition, whose proof can be found in Appendix [6.1](#ProofOfHessianFormSect){reference-type="ref" reference="ProofOfHessianFormSect"}, shows that the function $W(A)$ is twice differentiable, thus ensuring that [\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"} is well defined. **Proposition 2**. *$A \mapsto W(A)$ is twice differentiable, strictly convex, with gradient and Hessian $$\nabla_A W(A) =\Phi^\ast \pi_A, \quad \nabla_A^2 W(A) =\Phi^\ast \frac{\partial \pi_A}{\partial A}(x,y)$$ where $\pi_A$ is the unique solution to ([\[EntropicOT\]](#EntropicOT){reference-type="ref" reference="EntropicOT"}) with cost $c_A=\Phi A$.* The next theorem, a well-known result (see *e.g.* [@lee2015model] for a proof), shows that support recovery can be characterized via $z^*_{{\widehat A}}$. **Theorem 3**. *If $z^\ast_{{\widehat A}}$ is non-degenerate, then for all $\lambda$ sufficiently small, the solution $A^\lambda$ to ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) is sparsistent with $\left\| A^\lambda - {\widehat A} \right\| = \mathcal{O}(\lambda)$ and $\mathop{\mathrm{Supp}}\left( A^\lambda \right)=\mathop{\mathrm{Supp}}({\widehat A})$.* *Remark 4*. We remark that strict convexity implies that any solution of ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) must be unique. It is then an immediate consequence of Gamma convergence that solutions $A^\lambda$ to ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) converge to $\hat A$ as $\lambda \to 0$. 
The above result shows that the support (and hence signs) are also preserved when $\lambda$ is sufficiently small. ## Intuition behind the certificate ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}) {#sec:intuition-cert} In this section, we provide some insight into the link between $z^\ast_{{\widehat A}}$ and the optimization problem ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}). Observe that the first order optimality condition to ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) implies that $A^\lambda$ is a solution if and only if $$z^\lambda \triangleq- \frac{1}{\lambda}\nabla \mathcal{L}(A^\lambda) \in \partial \left\| A^\lambda \right\|_1,$$ where the subdifferential for the $\ell_1$ norm has the explicit form $\partial \left\| A \right\|_1 = \left\{ z \;;\; \left\| z \right\|_\infty \leqslant 1, \; \forall i \in\mathop{\mathrm{Supp}}(A), z_i = \mathop{\mathrm{sign}}(A_i) \right\}$. It therefore follows that $z^\lambda$ can be seen as a *certificate* for the support of $A^\lambda$ since $\mathop{\mathrm{Supp}}(A^\lambda)\subseteq \left\{ i \;;\; \left\lvert z^\lambda_i\right\rvert=1 \right\}$. To study the support behaviour of $A^\lambda$ for small $\lambda$, it is natural to consider the limit of $z^\lambda$ as $\lambda \to 0$. In fact, its limit is precisely the subdifferential element with minimal norm in the metric induced by the inverse Hessian: **Proposition 5**. *Let $z^\lambda \triangleq- \frac{1}{\lambda}\nabla \mathcal{L}(A^\lambda)$ where $A^\lambda$ solves ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}). 
Then, $$\label{MnCertificate}\tag{MNC} \lim_{\lambda\to 0} z^\lambda= z^{\min}_{{\widehat A}}\triangleq \mathop{\mathrm{arg\,min}}_{z} \left\{ \langle z,\,\left( \nabla^2 W({\widehat A}) \right)^{-1} z\rangle_F \;;\; z\in \partial\|{\widehat A}\|_1 \right\}.$$ Moreover, if $z^*_{{\widehat A}}$ is non-degenerate, then $z^{\min}_{{\widehat A}} = z^*_{{\widehat A}}$.*

The certificate ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}) can therefore be seen as the *limit* optimality condition and determines whether the support of ${\widehat A}$ can be stably recovered or not. One useful aspect of ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}) is that it can be readily computed in some cases to analyse the sparsistency properties of ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}). In particular, in Section [5.1](#sec:limit_cases){reference-type="ref" reference="sec:limit_cases"}, we provide explicit expressions for $z^\ast_{{\widehat A}}$ in the Gaussian setting.

# Sample complexity bounds

Our main contribution is the following theorem which shows that ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}) also provides a certification for sparsistency under sampling noise.

**Theorem 6**. *Let $\hat \pi = \mathrm{Sink}(c_{\widehat A},\varepsilon)$. Suppose that the certificate $z_{{\widehat A}}^*$ is non-degenerate.
Then, for all regularization parameters $\lambda$ and numbers of samples $n$ satisfying $$\lambda \lesssim 1 \quad \text{and} \quad \max\left( {\exp(C/\lambda)m}\lambda^{-1},\sqrt{\log(2s)} \right) \lesssim \sqrt{n},$$ where $C>0$ is a constant that depends on $\hat \pi$, with probability at least $1- \exp(-m^2)$, the minimizer $A_n$ to ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}) is sparsistent with ${\widehat A}$ and satisfies $\left\| A_n - {\widehat A} \right\| \lesssim \lambda + \sqrt{\exp(C/\lambda)} m n^{-\tfrac12}$.*

#### Main idea behind Theorem [Theorem 6](#thm:sparsistency){reference-type="ref" reference="thm:sparsistency"} {#main-idea-behind-theorem-thmsparsistency}

We know from Theorem [Theorem 3](#certificateTheorem){reference-type="ref" reference="certificateTheorem"} that there is some $\lambda_0>0$ such that for all $\lambda \leqslant\lambda_0$, the solution to ([\[eq:kt_inf\]](#eq:kt_inf){reference-type="ref" reference="eq:kt_inf"}) has the same support as ${\widehat A}$. To show that the finite sample problem also recovers the support of ${\widehat A}$ when $n$ is sufficiently large, we **fix** $\lambda\in (0,\lambda_0]$ and consider the setting where the observations are iid samples from the coupling measure $\hat \pi$. We will derive convergence bounds for the primal and dual solutions as the number of samples $n$ increases. Let $(A_\infty,f_\infty,g_\infty)$ minimise ([\[eq:kt_inf\]](#eq:kt_inf){reference-type="ref" reference="eq:kt_inf"}). Denote $$P_\infty =\frac{1}{n^2}( p_\infty(x_i,y_j))_{i,j\in [n]}, \quad \text{where} \quad p_\infty(x,y) = \exp\left( \frac{2}{\varepsilon}\left( \Phi A_\infty(x,y) + f_\infty(x) + g_\infty(y) \right) \right).$$ We denote $F_\infty = (f_\infty(x_i))_{i\in[n]}$ and $G_\infty = (g_\infty(y_j))_{j\in [n]}$.
Let $(A_n, F_n, G_n)$ minimise ([\[eq:kt_n\]](#eq:kt_n){reference-type="ref" reference="eq:kt_n"}), so the primal solution to ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}) is $P_n \triangleq\frac{1}{n^2} \exp\left( \frac{2}{\varepsilon}\left( \Phi_n A_n + F_{n} \oplus G_n \right) \right)$. The *certificates* are $$z_\infty = \frac{1}{\lambda}\Phi^*\left( p_\infty \alpha\otimes \beta - \hat \pi \right) \quad \text{and} \quad z_n = \frac{1}{\lambda}\Phi_n^*\left( P_n - \hat P_n \right).$$ By exploiting strong convexity properties of $\mathcal{J}_n$, one can show the following sample complexity bound on the convergence of $z_n$ to $z_\infty$ (the proof can be found in the appendix):

**Proposition 7**. *Let $n \gtrsim \max(m^2 \lambda^{-2},\log(2s))$ for some $m>0$. For some constant $C>0$, with probability at least $1-\exp(-m^2)$, $$\left\| z_\infty - z_n \right\|_\infty \lesssim {\exp(C/\lambda)m}{\lambda^{-1} n^{-\frac12}} \quad \text{and}$$ $$\left\| A_n - A_\infty \right\|^2 + \frac{1}{n} \sum_{i}(F_n - F_\infty)_i^2 + \frac{1}{n} \sum_{j}(G_n - G_\infty)_j^2 \lesssim \varepsilon^2 \exp(C/\lambda)m^2 n^{-1}.$$*

From Proposition [Proposition 5](#MinimalNormCertProposi){reference-type="ref" reference="MinimalNormCertProposi"}, for all $\lambda\leqslant\lambda_0$ sufficiently small, $z_\infty$ is non-degenerate. Moreover, the convergence result in Proposition [Proposition 7](#prop:sample_complexity){reference-type="ref" reference="prop:sample_complexity"} above implies that for $n$ sufficiently large, $z_n$ is also non-degenerate. Hence, since the set $\left\{ i \;;\; (z_n)_i=\pm 1 \right\}$ determines the support of $A_n$, we have sparsistency.

# Gaussian distributions

To get further insight about the sparsistency property of iOT, we consider the special case where the source and target distributions are Gaussians, and the cost parametrization $c_A(x,y) = x^\top A y$.
To this end, we first derive closed form expressions for the Hessian $\partial_A^2 \mathcal{L}(A) = \nabla_A^2 W(A)$. Given $\alpha= \mathcal{N}(m_\alpha,\Sigma_\alpha)$ and $\beta = \mathcal{N}(m_\beta,\Sigma_\beta)$, it is known (see [@bojilov2016matching]) that the coupling density is also a Gaussian of the form $\pi = \mathcal{N}\left( \binom{m_\alpha}{m_\beta}, \begin{pmatrix} \Sigma_\alpha & \Sigma \\ \Sigma^\top & \Sigma_\beta \end{pmatrix} \right)$ for some $\Sigma\in\mathbb{R}^{d_1\times d_2}$. In this case, $W$ can be written as an optimization problem over the cross-covariance $\Sigma$ [@bojilov2016matching]: $$\label{eq:W_gaussian} W(A) = \sup_{\Sigma\in\mathbb{R}^{d_1\times d_2}} \langle A,\,\Sigma\rangle + \frac{\varepsilon}{2} \log\det\left( \Sigma_\beta - \Sigma^\top \Sigma_\alpha^{-1} \Sigma \right),$$ with the optimal solution being precisely the cross-covariance of the optimal coupling $\pi$. In [@bojilov2016matching], the authors provide an explicit formula for the maximizer $\Sigma$ of ([\[eq:W_gaussian\]](#eq:W_gaussian){reference-type="ref" reference="eq:W_gaussian"}), and, consequently, for $\nabla W(A)$: $$\label{eq:Sigma_galichon} \Sigma = \Sigma_\alpha A\Delta \left( \Delta A^\top \Sigma_\alpha A \Delta \right)^{-\frac12} \Delta -\tfrac{1}{2} \varepsilon A^{\dagger,\top} \quad \text{where} \quad \Delta \triangleq\left( \Sigma_\beta +\frac{ \varepsilon^2}{4} A^\dagger\Sigma_\alpha^{-1} A^{\dagger,\top} \right)^{\frac12}.$$ By differentiating the first-order condition for $W$, that is $A^\dagger = \varepsilon^{-1} (\Sigma_\beta - \Sigma^\top\Sigma_\alpha^{-1} \Sigma) \Sigma^\dagger\Sigma_\alpha,$ Galichon derives the expression for the Hessian in terms of $\Sigma$ in ([\[eq:Sigma_galichon\]](#eq:Sigma_galichon){reference-type="ref" reference="eq:Sigma_galichon"}): $$\label{eq:d_Sigma} \nabla^2 W(A)= \partial_A \Sigma= \varepsilon\left( \Sigma_\alpha\Sigma^{-1,\top}\otimes \Sigma_\beta\Sigma^{-1} + \mathbb{T} \right)^{-1} \left( A^{-1,\top}\otimes A^{-1} \right),$$ where $\mathbb{T}$ is such that $\mathbb{T}\mathrm{vec}( A) = \mathrm{vec}(A^\top)$. This formula does not hold when $A$ is rectangular or rank-deficient, since $A^{\dagger}$ is not differentiable. In the following, we derive, via the implicit function theorem, a general formula for $\partial_A \Sigma$ that agrees with that of Galichon in the square invertible case.

**Lemma 8**. *Denoting $\Sigma$ as in ([\[eq:Sigma_galichon\]](#eq:Sigma_galichon){reference-type="ref" reference="eq:Sigma_galichon"}), one has $$\begin{aligned} \nabla^2 W(A) &=\varepsilon\left( \varepsilon^2 (\Sigma_\beta - \Sigma^\top \Sigma_\alpha^{-1}\Sigma)^{-1} \otimes (\Sigma_\alpha- \Sigma\Sigma_\beta^{-1} \Sigma^\top)^{-1} +(A^\top\otimes A) \mathbb{T} \right)^{-1}. \label{eq:hessian_formula_a}\end{aligned}$$*

The formula given in Lemma [Lemma 8](#lem:hessian_formula){reference-type="ref" reference="lem:hessian_formula"} provides an explicit expression for the certificate ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}).

## Limit cases for large and small $\varepsilon$ {#sec:limit_cases}

This section explores the behaviour of the certificate in the large/small $\varepsilon$ limits: Proposition [Proposition 9](#prop:gaussian_obj_lasso){reference-type="ref" reference="prop:gaussian_obj_lasso"} reveals that the large $\varepsilon$ limit coincides with the classical Lasso while Proposition [Proposition 10](#prop:gaussian_obj_graph_lasso){reference-type="ref" reference="prop:gaussian_obj_graph_lasso"} reveals that the small $\varepsilon$ limit (for symmetric $A\succ 0$ and $\Sigma_\alpha = \Sigma_\beta = \mathrm{Id}$) coincides with the Graphical Lasso. In the following results, we denote the functional in ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) with parameters $\lambda$ and $\varepsilon$ by $\mathcal{F}(A)_{\varepsilon,\lambda}$.

**Proposition 9** ($\varepsilon\to \infty$).
*Let ${\widehat A}$ be invertible and let $\hat \pi = \mathrm{Sink}(c_{\widehat A},\varepsilon)$ be the observed coupling between $\alpha= \mathcal{N}(m_\alpha, \Sigma_\alpha)$ and $\beta = \mathcal{N}(m_\beta, \Sigma_\beta)$. Then, $$\label{eq:limit_eps_infty} \lim_{\varepsilon\to \infty} z_\varepsilon= (\Sigma_\beta\otimes \Sigma_\alpha)_{(:,I)} \big((\Sigma_\beta\otimes \Sigma_\alpha)_{(I,I)}\big)^{-1} \mathop{\mathrm{sign}}({\widehat A})_I.$$*

*Moreover, for $\lambda_0>0$, given any sequence $(\varepsilon_j)_j$ and $A_j\in \mathop{\mathrm{arg\,min}}_A \mathcal{F}(A)_{\varepsilon_j, \lambda_0 /\varepsilon_j}$ with $\lim_{j\to\infty}\varepsilon_j = \infty$, any cluster point of $(A_j)_j$ is in $$\label{LassoProblem} \underset{A\in\mathbb{R}^{d\times d}}{\mathop{\mathrm{arg\,min}}}\; \lambda_0\left\| A \right\|_1 +\frac12 \|{(\Sigma_{\beta}^{1/2}\otimes \Sigma_{\alpha}^{1/2})(A - {\widehat A})}\|_F^2.$$*

**Proposition 10** ($\varepsilon\to 0$). *Let ${\widehat A}$ be symmetric positive-definite and let $\hat \pi = \mathrm{Sink}(c_{\widehat A},\varepsilon)$ be the observed coupling between $\alpha= \mathcal{N}(m_\alpha, \mathrm{Id})$ and $\beta = \mathcal{N}(m_\beta, \mathrm{Id})$. Then, $$\label{eq:Limit_eps_0} \lim_{\varepsilon\to 0} z_\varepsilon= ({\widehat A}^{-1}\otimes {\widehat A}^{-1})_{(:,I)}\big(({\widehat A}^{-1}\otimes {\widehat A}^{-1})_{(I,I)}\big)^{-1} \mathop{\mathrm{sign}}({\widehat A})_I.$$ Let $\lambda_0>0$.
Then, optimizing over symmetric positive semi-definite matrices, given any sequence $(\varepsilon_j)_j$ and $A_j\in \mathop{\mathrm{arg\,min}}_{A\succeq 0} \mathcal{F}(A)_{\varepsilon_j, \lambda_0 \varepsilon_j}$ with $\lim_{j\to\infty}\varepsilon_j = 0$, any cluster point of $(A_j)_j$ is in $$\label{GlassoProblem} \underset{A\succeq 0}{\mathop{\mathrm{arg\,min}}}\; \lambda_0 \left\| A \right\|_1 -\frac12 \log\det(A) + \frac{1}{2} \langle A,\,{\widehat A}^{-1}\rangle.$$*

Observe that the $\varepsilon\to \infty$ and $\varepsilon\to 0$ limits given in ([\[eq:limit_eps_infty\]](#eq:limit_eps_infty){reference-type="ref" reference="eq:limit_eps_infty"}) and ([\[eq:Limit_eps_0\]](#eq:Limit_eps_0){reference-type="ref" reference="eq:Limit_eps_0"}) correspond respectively to the certificates associated to a Lasso optimization problem, *i.e.,* problem ([\[LassoProblem\]](#LassoProblem){reference-type="ref" reference="LassoProblem"}), and to a graphical Lasso optimization problem, *i.e.,* problem ([\[GlassoProblem\]](#GlassoProblem){reference-type="ref" reference="GlassoProblem"}). This can be easily derived by plugging in the Hessian of the unregularized loss functions of ([\[LassoProblem\]](#LassoProblem){reference-type="ref" reference="LassoProblem"}) and ([\[GlassoProblem\]](#GlassoProblem){reference-type="ref" reference="GlassoProblem"}) into ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}); alternatively, see [@hastie2015statistical] for the Lasso certificate and [@ravikumar2011high] for graphical Lasso.

## Numerical illustrations {#sec:numerics}

In order to gain some insight into the impact of $\varepsilon$ and the covariance structure on the efficiency of iOT, we present numerical computations of certificates here. We fix the covariances of the input measures as $\Sigma_\alpha=\Sigma_\beta=\mathrm{Id}_n$; similar results are obtained with different covariances as long as they are not rank-deficient.
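Before turning to the graph experiments, note that the limit certificate ([\[eq:limit_eps_infty\]](#eq:limit_eps_infty){reference-type="ref" reference="eq:limit_eps_infty"}) is straightforward to evaluate numerically. A minimal numpy sketch (toy identity covariances; column-major vectorization is assumed for the Kronecker indexing, which is a convention choice not fixed by the text):

```python
import numpy as np

def limit_certificate(Sig_a, Sig_b, A_hat, tol=1e-12):
    """Large-epsilon certificate: with H = Sig_b ⊗ Sig_a (Hessian of the
    limiting Lasso loss), z = H[:, I] (H[I, I])^{-1} sign(vec(A_hat))_I."""
    H = np.kron(Sig_b, Sig_a)
    a = A_hat.reshape(-1, order="F")   # column-major vec(A_hat) (assumed)
    I = np.flatnonzero(np.abs(a) > tol)
    return H[:, I] @ np.linalg.solve(H[np.ix_(I, I)], np.sign(a[I]))

# With identity covariances H = Id, so the certificate is sign(vec(A_hat))
# on the support and 0 off it: non-degenerate for any sparsity pattern.
A_hat = np.array([[2.0, 0.0],
                  [0.0, -1.0]])
z = limit_certificate(np.eye(2), np.eye(2), A_hat)
```

Replacing `np.kron(Sig_b, Sig_a)` by the Hessian of Lemma 8 gives the finite-$\varepsilon$ certificates plotted below.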
The sought-after cost matrix $A = \delta \mathrm{Id}_n + \mathop{\mathrm{diag}}(G 1_n) - G \in \mathbb{R}^{n \times n}$ is a shifted Laplacian matrix of some graph adjacency matrix $G$, for a graph of size $n=80$ (similar conclusions hold for larger graphs). We set the shift $\delta$ to be $10\%$ of the largest eigenvalue of the Laplacian, ensuring that $A$ is symmetric and positive definite. This setup corresponds to graphs defining positive interactions at vertices and negative interactions along edges. For small $\varepsilon$, adopting the graphical lasso interpretation (as exposed in Section [5.1](#sec:limit_cases){reference-type="ref" reference="sec:limit_cases"}) and interpreting $A$ as a precision matrix, this setup corresponds (for instance, for a planar graph) to imposing a spatially smoothly varying covariance $A^{-1}$. Figure [6](#fig:certifs){reference-type="ref" reference="fig:certifs"} illustrates how the value of the certificates $z_{i,j}$ evolves depending on the indexes $(i,j)$ for three types of graphs (circular, planar, and Erdős--Rényi with a probability of edges equal to 0.1), for several values of $\varepsilon$. By construction, $z_{i,i}=1$ and $z_{i,j}=-1$ for $(i,j)$ connected by the graph. For $z$ to be non-degenerate, it is required that $|z_{i,j}|<1$ off the support, that is, as $i$ moves away from $j$ on the graph. For the circular and planar graphs, the horizontal axis represents the geodesic distance $d_{\text{geod}}(i,j)$, demonstrating how the certificates become well-behaved as the distance increases. The planar graph displays envelope curves showing the range of values of $z$ for a fixed value of $d_{\text{geod}}(i,j)$, while this is a single-valued curve for the circular graph due to periodicity.
For the Erdős--Rényi graph, to better account for the randomness of the certificates along the graph edges, we display the histogram of the distribution of $|z_{i,j}|$ for $d_{\text{geod}}(i,j)=2$ (which represents the most critical set of edges, as they are the most likely to be large). All these examples show the same behavior, namely, that increasing $\varepsilon$ improves the behavior of the certificates (which is in line with the stability analysis of Section [5.1](#sec:limit_cases){reference-type="ref" reference="sec:limit_cases"}), and that pairs of vertices $(i,j)$ connected by a small distance $d_{\text{geod}}(i,j)$ are the most likely to be degenerate. This suggests that they will be inaccurately estimated by iOT for small $\varepsilon$.

![Display of the certificate values $z_{i,j}$ for three types of graphs, for varying $\varepsilon$. Left, middle: plotted as a function of the geodesic distance $d_{\text{geod}}(i,j)$ on the $x$-axis. Right: histogram of $z_{i,j}$ for $(i,j)$ at distance $d_{\text{geod}}(i,j)=2$.[\[fig:certifs\]]{#fig:certifs label="fig:certifs"} ](figures-certifs/circular-graph.pdf){#fig:certifs width=".3\\linewidth"}
![](figures-certifs/planar-graph.pdf){width=".3\\linewidth"}
![](figures-certifs/erdos-graph.pdf){width=".3\\linewidth"}
![](figures-certifs/circular-certifs.pdf){width=".33\\linewidth"}
![](figures-certifs/planar-certifs.pdf){width=".33\\linewidth"}
![](figures-certifs/erdos-certifs.pdf){width=".33\\linewidth"}

Circular Planar Erdős--Rényi

Figure [9](#fig:perfs){reference-type="ref" reference="fig:perfs"} displays the recovery performances of $\ell^1$--iOT for the
circular graph shown on the left of Figure [6](#fig:certifs){reference-type="ref" reference="fig:certifs"} (similar results are obtained for the other types of graph topologies). These numerical simulations are obtained using the large-scale iOT solver, which we detail in Appendix [9](#sec:solver){reference-type="ref" reference="sec:solver"}. The performance is represented using the number of inaccurately estimated coordinates in the estimated cost matrix $A$, so a score of 0 means a perfect estimation of the support (sparsistency is achieved). For $\varepsilon= 0.1$, sparsistency cannot be achieved, aligning with the fact that the certificate $z$ is degenerate as depicted in the previous figure. In sharp contrast, for larger $\varepsilon$, sparsistency can be attained as soon as the number of samples $N$ is sufficiently large, which also aligns with our theory of sparsistency of $\ell^1$--iOT.

![Recovery performance (number of wrongly estimated positions) of $\ell^1$--iOT as a function of $\lambda$ for three different values of $\varepsilon$. [\[fig:perfs\]]{#fig:perfs label="fig:perfs"} ](figures-perfs/circular-perfs-eps3.pdf){#fig:perfs width=".33\\linewidth"}
![](figures-perfs/circular-perfs-eps2.pdf){width=".33\\linewidth"}
![](figures-perfs/circular-perfs-eps1.pdf){width=".33\\linewidth"}

$\varepsilon=0.1$ $\varepsilon=1$ $\varepsilon=10$

Figure [12](#fig:nonsym){reference-type="ref" reference="fig:nonsym"} displays the certificate values in the case of a non-symmetric planar graph. The graph is obtained by deleting all edges $(i,j)$ with $i<j$ from a planar graph. We plot the certificate values as a function of the geodesic distance $d_{\text{geod}}(i,j)$ of the symmetrized graph.
The middle plot shows the certificate values on $i\geqslant j$ (where the actual edges are constrained to lie; nondegeneracy requires values smaller than 1 in absolute value for $d_{\text{geod}}(i,j)\geqslant 2$). The right plot shows the certificate values on $i\leqslant j$, where there are no edges and, for nondegeneracy, one expects values smaller than 1 in absolute value for $d_{\text{geod}}(i,j)\geqslant 1$. Observe that here, the certificate is degenerate for small values of $d_{\text{geod}}(i,j)$ when $\varepsilon=0$, meaning that the problem is unstable at the "ghost" symmetric edges. As $\varepsilon\to \infty$, the certificate becomes non-degenerate.

![ Display of certificate values for a non-symmetric planar graph, for varying $\varepsilon$ with edges only for $i > j$. Middle/Right: plots of the certificate values as a function of the *geodesic distance $d_{\text{geod}}(i,j)$ of the symmetrized graph*. The middle plot shows the values when restricted to $i\geqslant j$. The right plot shows the values restricted to $i\leqslant j$ (where there are no edges). [\[fig:nonsym\]]{#fig:nonsym label="fig:nonsym"} ](figures-certifs/non-symm/planar-graph.pdf){#fig:nonsym width=".33\\linewidth"}
![](figures-certifs/non-symm/planar-nonsym-certifs-true.pdf){width=".33\\linewidth"}
![](figures-certifs/non-symm/planar-nonsym-certifs-false.pdf){width=".33\\linewidth"}

# Conclusion {#conclusion .unnumbered}

In this paper, we have proposed the first theoretical
analysis of the recovery performance of $\ell^1$-iOT. Much of this analysis can be extended to more general convex regularizers, such as the nuclear norm to promote low-rank Euclidean costs. Our analysis and numerical exploration support the conclusion that iOT becomes ill-posed and fails to maintain sparsity for overly small $\varepsilon$. When approached from the perspective of graph estimation, unstable indices occur at smaller geodesic distances, highlighting the geometric regularity of iOT along the graph geometry.

# Acknowledgements {#acknowledgements .unnumbered}

The work of G. Peyré was supported by the European Research Council (ERC project NORIA) and the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute).

# Proofs for Section [3](#sec:duality){reference-type="ref" reference="sec:duality"} {#proofs-for-section-secduality}

## Proof of Proposition [Proposition 2](#InverseOTLossFuncPropertiesProp){reference-type="ref" reference="InverseOTLossFuncPropertiesProp"} (differentiability of $W$) {#ProofOfHessianFormSect}

For the strict convexity of $W(A)$, see Lemma 3 in [@dupuy2014personality]. The gradient formula can also be found in [@dupuy2014personality] and follows from the envelope theorem because the optimization problem defining $W(A)$ has a unique solution. We will only give a proof of the Hessian formula. The formula for the Hessian follows from the formula for the gradient provided we show that the density $\pi_A$ is continuously differentiable with respect to $A$. The fact that we can swap the order of the operator $\Phi^\ast$ and partial differentiation follows from the conditions for differentiability under the integral sign, which hold since the measures are compactly supported. Without loss of generality, let $\varepsilon= 1$.
Then, the optimizer in $W(A)$ is of the form [@santambrogio2015optimal]: $$\begin{aligned} \pi_A(x,y)=\exp \Big(u_A(x)+v_A(y)+c_A(x,y)\Big),\end{aligned}$$ where $u_A(\cdot)$ and $v_A(\cdot)$ satisfy $$\begin{aligned} u_A(x)&=-\log \int_{\mathcal{Y}} \exp\Big(v_A(y)+c_A(x,y)\Big)\, d\beta(y),\quad \alpha\text{-a.s.}\\ v_A(y)&=-\log \int_{\mathcal{X}}\exp\Big(u_A(x)+c_A(x,y)\Big)\, d\alpha(x),\quad \beta\text{-a.s.}\end{aligned}$$ It is known (see *e.g.* [@nutz2021introduction]) that the functions $u_A(\cdot)$ and $v_A(\cdot)$ inherit the modulus of continuity of the cost (in this case the map $(x,y)\mapsto c_A(x,y)$) and, hence, since we are assuming the measures to be compactly supported, it follows that $u_A \in L^2(\alpha)$ and $v_A \in L^2(\beta)$. Moreover, if $(u_A,v_A)$ solve these two equations then, for any constant $c$, the pair $(u_A+c,v_A-c)$ is also a solution and, hence, to eliminate this ambiguity we consider solutions in $\mathcal{H}=L^2(\alpha)\times L_0^2(\beta)$, where $L_0^2(\beta)= \left\{ g \in L^2(\beta) \;;\; \int g\, d\beta(y)=0 \right\}$. To show that $W$ is twice differentiable, since $\nabla W(A) = \Phi^* \pi_A$, it is sufficient to show that $A \mapsto u_A$ and $A \mapsto v_A$ are differentiable. To this end, we will apply the Implicit Function Theorem (in Banach spaces) to the map $$\begin{aligned} F:\mathcal{H}\times \mathbb{R}^{s} &\to L^2(\alpha)\times L^2(\beta)\\ \begin{bmatrix} u\\ v\\ A \end{bmatrix}&\mapsto \begin{bmatrix}u+\log \int_{\mathcal{Y}} \exp\Big(v(y)+c_A(x,y)\Big)\, d\beta(y)\\ v+\log \int_{\mathcal{X}}\exp\Big(u(x)+c_A(x,y)\Big)\, d\alpha(x) \end{bmatrix},\end{aligned}$$ since we have $F(u_A,v_A,A)=0$.
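On discrete marginals, this fixed-point system is precisely the Sinkhorn iteration, and alternating the two equations computes $(u_A,v_A)$ and hence $\pi_A$. A minimal log-domain numpy sketch ($\varepsilon=1$, uniform marginals, and a random cost standing in for $c_A$; an illustration of the system, not the solver used in the experiments):

```python
import numpy as np

def logsumexp(M, axis):
    # Numerically stable log(sum(exp(M))) along the given axis.
    mx = M.max(axis=axis, keepdims=True)
    return (mx + np.log(np.exp(M - mx).sum(axis=axis, keepdims=True))).squeeze(axis=axis)

def schrodinger_potentials(C, log_a, log_b, n_iter=300):
    """Alternate the two fixed-point equations (epsilon = 1) on discrete
    marginals a = exp(log_a), b = exp(log_b): this is the Sinkhorn iteration."""
    u, v = np.zeros_like(log_a), np.zeros_like(log_b)
    for _ in range(n_iter):
        u = -logsumexp(v[None, :] + C + log_b[None, :], axis=1)
        v = -logsumexp(u[:, None] + C + log_a[:, None], axis=0)
    # discrete coupling pi(x_i, y_j) = exp(u_i + v_j + C_ij) a_i b_j
    P = np.exp(u[:, None] + v[None, :] + C + log_a[:, None] + log_b[None, :])
    return u, v, P

rng = np.random.default_rng(0)
n = 5
C = rng.standard_normal((n, n))          # stand-in for the cost c_A = Phi A
log_a = log_b = np.full(n, -np.log(n))   # uniform marginals
u, v, P = schrodinger_potentials(C, log_a, log_b)
# After convergence, P has (up to numerical tolerance) marginals a and b.
```

The bounded cost guarantees linear convergence here, which mirrors the lower bound $\pi_A \geqslant C > 0$ used later in the proof.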
The partial derivative of $F$ at $(u_A,v_A,A)$, denoted by $\partial_{u,v}F(u_A,v_A,A)$, is the linear map defined by $$\begin{aligned} \label{JacobianOfF} \Big(\partial_{u,v} F(u_A,v_A,A)\Big)(f,g)=\Big(f+\int_{\mathcal{Y}} p_A(\cdot,y)g(y)\, d\beta(y), g+\int_\mathcal{X} q_A(x,\cdot)f(x)\, d\alpha(x)\Big),\end{aligned}$$ where $$\begin{aligned} p_A(x,y)=\frac{\exp\Big(v_A(y)+c_A(x,y)\Big)}{\int_\mathcal{Y}\exp\Big(v_A(y)+c_A(x,y)\Big)\, d\beta(y)}\text{ and } q_A(x,y)=\frac{\exp\Big(u_A(x)+c_A(x,y)\Big)}{\int_\mathcal{X}\exp\Big(u_A(x)+c_A(x,y)\Big)\, d\alpha(x)}.\end{aligned}$$ Note that, since $F(u_A,v_A,A)=0$, $p_A(x,y)=q_A(x,y)=\pi_A(x,y)$. Moreover, since $\pi_A$ has marginals $\alpha$ and $\beta$, it follows that $$\begin{aligned}\label{InnerProductPartialF} &\Big\langle (f,g),\Big(\partial_{u,v} F(u_A,v_A,A)\Big)(f,g)\Big\rangle\\ =&\int_\mathcal{X}f(x)^2\,d\alpha(x)+\int_{\mathcal{X}\times \mathcal{Y}} 2f(x)g(y)\pi_A(x,y)\, d(\alpha\otimes \beta)(x,y) +\int_\mathcal{Y}g(y)^2\, d\beta(y)\\ =&\int_{\mathcal{X}\times \mathcal{Y}} \Big(f(x)+g(y)\Big)^2\pi_A(x,y)\, d(\alpha\otimes \beta)(x,y).\end{aligned}$$ This shows that $\partial_{u,v} F(u_A,v_A,A)$ is injective: the last line of [\[InnerProductPartialF\]](#InnerProductPartialF){reference-type="ref" reference="InnerProductPartialF"} is zero if and only if $f\oplus g\equiv 0$, and since $g \in L^2_0(\beta)$ it follows that $g=0$ and $f=0$. Moreover, $\partial_{u,v} F(u_A,v_A,A)$ is a compact perturbation of the identity (the two integral operators have bounded kernels over compactly supported measures), so injectivity implies invertibility.
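The identity above is easy to sanity-check numerically. The sketch below (uniform discrete marginals and a random cost, all toy choices of ours) builds a density $\pi$ with unit marginals via Sinkhorn and verifies that the discrete analogue of $\langle (f,g),\partial_{u,v}F(f,g)\rangle$ equals $\sum_{i,j}(f_i+g_j)^2\,\pi_{ij}\,\alpha_i\beta_j$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_y = 5, 7
alpha = np.full(n_x, 1.0 / n_x)   # toy discrete marginals (our own choice)
beta = np.full(n_y, 1.0 / n_y)
C = rng.normal(size=(n_x, n_y))   # toy cost, standing in for c_A

# Sinkhorn (ε = 1) to obtain a density pi with unit marginals w.r.t. α ⊗ β.
u, v = np.zeros(n_x), np.zeros(n_y)
for _ in range(500):
    u = -np.log(np.exp(C + v[None, :]) @ beta)
    v = -np.log(alpha @ np.exp(C + u[:, None]))
pi = np.exp(u[:, None] + v[None, :] + C)

# Discrete analogue of (f, g) -> ∂_{u,v}F(u_A, v_A, A)(f, g).
f = rng.normal(size=n_x)
g = rng.normal(size=n_y)
g -= beta @ g                      # enforce g ∈ L²₀(β)
Ff = f + pi @ (g * beta)           # f + ∫ π(·, y) g(y) dβ(y)
Fg = g + (f * alpha) @ pi          # g + ∫ π(x, ·) f(x) dα(x)

# ⟨(f, g), ∂F(f, g)⟩ should equal ∫ (f ⊕ g)² π d(α ⊗ β).
lhs = f @ (Ff * alpha) + g @ (Fg * beta)
rhs = ((f[:, None] + g[None, :]) ** 2 * pi * np.outer(alpha, beta)).sum()
assert np.isclose(lhs, rhs)
```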
To conclude the proof we need to show that $\left( \partial_{u,v} F(u_A,v_A,A) \right)^{-1}$ is a bounded operator (see *e.g.* [@deimling2010nonlinear] for the statement of the IFT in Banach spaces), and for this it is enough to show that, for some constant $C$, $$\Big\|\Big(\partial_{u,v} F(u_A,v_A,A)\Big)(f,g)\Big\|\geqslant C\|(f,g)\|.$$ This follows from [\[InnerProductPartialF\]](#InnerProductPartialF){reference-type="ref" reference="InnerProductPartialF"} and the fact that there exists a constant $C$ such that $\pi_A(x,y)\geqslant C$ for all $x$ and $y$ (see *e.g.* [@nutz2021introduction] for the existence of $C$). In fact, from $g\in L^2_0(\beta)$, we obtain $\int (f(x)+g(y))^2\, d(\alpha\otimes \beta)(x,y)=\|(f,g)\|^2$ and, hence, [\[InnerProductPartialF\]](#InnerProductPartialF){reference-type="ref" reference="InnerProductPartialF"} implies that $$\begin{aligned} \Big\langle (f,g),\Big(\partial_{u,v} F(u_A,v_A,A)\Big)(f,g)\Big\rangle\geqslant C\|(f,g)\|^2\end{aligned}$$ and from the Cauchy--Schwarz inequality applied to the left-hand side we obtain $$\begin{aligned} \Big\|\Big(\partial_{u,v} F(u_A,v_A,A)\Big)(f,g)\Big\|\geqslant C\|(f,g)\|,\end{aligned}$$ thus concluding the proof. ## Connection of the certificate ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}) with optimization problem ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) As mentioned in Section [3.2](#sec:intuition-cert){reference-type="ref" reference="sec:intuition-cert"}, the vector $z^\lambda$ provides insight into support recovery since $\mathop{\mathrm{Supp}}(A^\lambda) \subseteq \left\{ i \;;\; z^\lambda_i = \pm 1 \right\}$.
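To make the role of $z^\lambda$ concrete, here is a small numerical sketch with a toy quadratic $W$ standing in for the entropic functional (our own choice; it shares only smoothness and strict convexity with the $W$ of the paper). We solve the $\ell^1$-regularized problem by proximal gradient descent and check that $z^\lambda = -\frac{1}{\lambda}\nabla_A \mathcal{L}(A^\lambda,\hat\pi)$ is a subgradient of the $\ell^1$ norm, so that the support inclusion above holds.

```python
import numpy as np

rng = np.random.default_rng(2)
s = 6
M = rng.normal(size=(s, s))
Q = M @ M.T + np.eye(s)           # Hessian of the toy W(A) = ½ AᵀQA
Sigma = Q @ np.array([1.5, -2.0, 0.0, 0.0, 0.0, 0.0])  # sparse unregularized optimum
lam = 0.1

# ISTA (proximal gradient) for  min_A  W(A) - <Sigma, A> + lam * ||A||_1.
A = np.zeros(s)
step = 1.0 / np.linalg.norm(Q, 2)
for _ in range(20000):
    A_half = A - step * (Q @ A - Sigma)
    A = np.sign(A_half) * np.maximum(np.abs(A_half) - step * lam, 0.0)

# Dual certificate from the KKT conditions: z = -(1/lam) * gradient of the fit.
z = -(Q @ A - Sigma) / lam
support = np.flatnonzero(A)
assert np.all(np.abs(z) <= 1 + 1e-6)                 # z lies in the ℓ∞ ball
assert np.allclose(z[support], np.sign(A[support]))  # z = ±1 on Supp(A)
```

The two final assertions are exactly the membership $z^\lambda \in \partial\|A^\lambda\|_1$, which is what makes $z^\lambda$ informative about the support.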
In this section, we provide the proof of Proposition [Proposition 5](#MinimalNormCertProposi){reference-type="ref" reference="MinimalNormCertProposi"}, which shows that, as $\lambda$ converges to $0$, $z^\lambda$ converges to the solution of a quadratic optimization problem ([\[MnCertificate\]](#MnCertificate){reference-type="ref" reference="MnCertificate"}) that we term the *minimal norm certificate*. The connection with ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}) can now be established as follows: if the minimal norm certificate is non-degenerate, then the inequality constraints in ([\[MnCertificate\]](#MnCertificate){reference-type="ref" reference="MnCertificate"}) (which correspond to the complement of the support of ${\widehat A}$) are inactive and can be dropped; in this case ([\[MnCertificate\]](#MnCertificate){reference-type="ref" reference="MnCertificate"}) reduces to a quadratic optimization problem with only equality constraints, whose solution is precisely ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}), which is therefore non-degenerate as well. The converse is clear. So, under non-degeneracy, ([\[FuchsCert\]](#FuchsCert){reference-type="ref" reference="FuchsCert"}) can be seen as the *limit* optimality vector and determines the support of $A^\lambda$ when $\lambda$ is small. To prove Proposition [Proposition 5](#MinimalNormCertProposi){reference-type="ref" reference="MinimalNormCertProposi"}, we first show that the vector $z^\lambda$ coincides with the solution to a dual problem of ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}). **Proposition 11**. *Let $W^*$ be the convex conjugate of $W$.
Problem ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) admits a dual given by $$\label{dualOfiOT} \mathop{\mathrm{arg\,min}}_{z} W^\ast(\hat \Sigma_{xy}-\lambda z) \quad\text{ subject to }\quad \|z\|_\infty \leqslant 1,$$ where $\hat \Sigma_{xy}=\Phi^\ast \hat \pi$. Moreover, any pair of primal-dual solutions $(A^\lambda,z^\lambda)$ satisfies $$\label{KKTConditions1} z^\lambda= -\tfrac{1}{\lambda} \nabla_A \mathcal{L}(A^\lambda,\hat\pi) \quad\text{and}\quad z^\lambda \in \partial \|{A^\lambda}\|_1.$$* *Proof.* Observe that we can write $\mathcal{L}(A,\hat \pi)$ as $$\begin{aligned} \mathcal{L}(A,\hat \pi)&= W(A)-\int_{\mathcal{X}\times \mathcal{Y}} (\Phi A)(x,y)\, d\hat \pi=W(A)-\langle \Phi^\ast \hat \pi,A\rangle_F=W(A)-\langle A,\hat \Sigma_{xy}\rangle.\end{aligned}$$ The *Fenchel Duality Theorem* (see *e.g.* [@borwein2006convex]) yields a dual of ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}) given by $$\begin{aligned} \mathop{\mathrm{arg\,min}}_w W^\ast(\hat \Sigma_{xy}-w)+(\lambda \|\cdot\|_1)^\ast(w).\end{aligned}$$ To conclude the proof, note that the Fenchel conjugate of $\lambda\|\cdot\|_1$ is the indicator of the set $\lbrace v: \|v\|_\infty\leqslant\lambda\rbrace$ and make the change of variable $z \triangleq w/\lambda$ to obtain [\[dualOfiOT\]](#dualOfiOT){reference-type="ref" reference="dualOfiOT"}. The relationship between any primal-dual pair in Fenchel Duality can also be found in [@borwein2006convex].
◻ *Proof of Proposition [Proposition 5](#MinimalNormCertProposi){reference-type="ref" reference="MinimalNormCertProposi"}.* We begin by noting that $W^\ast$ is of class $C^2$ in a neighborhood of $\hat \Sigma_{xy}$ and that $$\begin{aligned} \label{SecondDerivativeAtCovariance} \nabla^2 W^\ast(\hat \Sigma_{xy})=\Big(\nabla^2 W ({\widehat A})\Big)^{-1}.\end{aligned}$$ To see this, note that Proposition [Proposition 2](#InverseOTLossFuncPropertiesProp){reference-type="ref" reference="InverseOTLossFuncPropertiesProp"} together with the assumption on $\hat \pi$ implies that $$\begin{aligned} \hat \Sigma_{xy}=\nabla W({\widehat A}).\end{aligned}$$ Moreover, since $W(A)$ is twice continuously differentiable and strictly convex (see Proposition [Proposition 2](#InverseOTLossFuncPropertiesProp){reference-type="ref" reference="InverseOTLossFuncPropertiesProp"}), it follows (see *e.g.* [@hiriart1993conjugacy]) that $W^\ast(\cdot)$ is $C^2$ and strictly convex in a neighborhood of $\hat \Sigma_{xy}$ and that ([\[SecondDerivativeAtCovariance\]](#SecondDerivativeAtCovariance){reference-type="ref" reference="SecondDerivativeAtCovariance"}) holds. Now observe that, since $\nabla W^\ast(\hat \Sigma_{xy})=\nabla W^\ast\big(\nabla W ({\widehat A})\big)={\widehat A}$, we can rewrite $\partial\left\| {\widehat A} \right\|_1$ as $$\partial\left\| {\widehat A} \right\|_1=\mathop{\mathrm{arg\,min}}_{z} \big\langle -z,\nabla W^\ast(\hat \Sigma_{xy})\big\rangle \quad\text{subject to}\quad \|z\|_\infty\leqslant 1.$$ Observe that, since the vectors $z^{\lambda}$ are uniformly bounded due to the constraint set in [\[dualOfiOT\]](#dualOfiOT){reference-type="ref" reference="dualOfiOT"}, there is a convergent subsequence converging to some $z^\ast$. We later deduce that all limit points are the same and hence the full sequence $z^\lambda$ converges to $z^\ast$. Let $\lambda_n$ be such that $\lim_{\lambda_n\to 0} z^{\lambda_n}=z^\ast$, and let $z^0$ be any element in $\partial\left\| {\widehat A} \right\|_1$.
We have that $$\begin{aligned} \big\langle -z^0,\nabla W^\ast(\hat \Sigma_{xy})\big\rangle&\leqslant\big\langle -z^{\lambda_n},\nabla W^\ast(\hat \Sigma_{xy})\big\rangle\leqslant\frac{1}{\lambda_n}\Big(W^\ast\big(\hat \Sigma_{xy}-\lambda_n z^{\lambda_n}\big)-W^\ast(\hat \Sigma_{xy})\Big)\\ &\leqslant\frac{1}{\lambda_n}\Big(W^\ast\big(\hat \Sigma_{xy}-\lambda_n z^0\big)-W^\ast(\hat \Sigma_{xy})\Big),\end{aligned}$$ where the first inequality is the optimality of $z^0$, the second inequality is the gradient inequality for convex functions and the last inequality follows from the optimality of $z^{\lambda_n}$. Taking the limit as $\lambda_n\to 0$ we obtain that $$\begin{aligned} \big\langle -z^0,\nabla W^\ast(\hat \Sigma_{xy})\big\rangle=\big\langle -z^\ast,\nabla W^\ast(\hat \Sigma_{xy})\big\rangle,\end{aligned}$$ showing that $z^\ast \in \partial\left\| {\widehat A} \right\|_1$. We now finish the proof by showing that $$\begin{aligned} \label{MinimalNormCondition} \Big\langle z^\ast,\nabla^2 W^\ast(\hat \Sigma_{xy})z^\ast\Big\rangle\leqslant\Big\langle z^0,\nabla^2 W^\ast(\hat \Sigma_{xy})z^0\Big\rangle.\end{aligned}$$ Since $W^\ast(\cdot)$ is $C^2$ in a neighborhood of $\hat \Sigma_{xy}$, Taylor's theorem ensures that there exists a remainder function $R(x)$ with $\lim_{x\to \hat \Sigma_{xy}} R(x)=0$ such that $$\begin{aligned} &\langle -z^{\lambda_n},\nabla W^\ast(\hat \Sigma_{xy})\rangle+\frac{\lambda_n}{2}\big\langle z^{\lambda_n},\nabla^2W^\ast(\hat \Sigma_{xy})z^{\lambda_n}\big\rangle+R(\hat \Sigma_{xy}-\lambda_n z^{\lambda_n})\lambda_n\\ =& \frac{1}{\lambda_n}\Big(W^\ast\big(\hat \Sigma_{xy}-\lambda_n z^{\lambda_n}\big)-W^\ast\big(\hat \Sigma_{xy}\big)\Big)\leqslant\frac{1}{\lambda_n}\Big(W^\ast\big(\hat \Sigma_{xy}-\lambda_n z^0\big)-W^\ast\big(\hat \Sigma_{xy}\big)\Big)\\ =& \langle -z^0,\nabla W^\ast(\hat \Sigma_{xy})\rangle+\frac{\lambda_n}{2}\big\langle z^0,\nabla^2W^\ast(\hat \Sigma_{xy})z^0\big\rangle+R(\hat \Sigma_{xy}-\lambda_n z^0)\lambda_n\\ \leqslant& \langle -z^{\lambda_n},\nabla W^\ast(\hat \Sigma_{xy})\rangle+\frac{\lambda_n}{2}\big\langle z^0,\nabla^2W^\ast(\hat \Sigma_{xy})z^0\big\rangle+R(\hat \Sigma_{xy}-\lambda_n z^0)\lambda_n,\end{aligned}$$ where we used the optimality of $z^0$ and of $z^{\lambda_n}$. Cancelling the common first-order term and dividing by $\lambda_n$, we conclude that $$\begin{aligned} \frac{1}{2}\big\langle z^{\lambda_n},\nabla^2W^\ast(\hat \Sigma_{xy})z^{\lambda_n}\big\rangle+R(\hat \Sigma_{xy}-\lambda_n z^{\lambda_n})\leqslant\frac{1}{2}\big\langle z^0,\nabla^2W^\ast(\hat \Sigma_{xy})z^0\big\rangle+R(\hat \Sigma_{xy}-\lambda_n z^0).\end{aligned}$$ Taking the limit establishes [\[MinimalNormCondition\]](#MinimalNormCondition){reference-type="ref" reference="MinimalNormCondition"}. Since $z^0$ was an arbitrary element in $\partial\left\| {\widehat A} \right\|_1$, we obtain that the limit of $z^{\lambda_n}$ is $$\begin{aligned} z^\ast=\mathop{\mathrm{arg\,min}}_{z} \Big\langle z,\Big(\nabla^2 W({\widehat A})\Big)^{-1}z\Big\rangle \quad \text{subject to}\quad z \in \partial\left\| {\widehat A} \right\|_1,\end{aligned}$$ where we used [\[SecondDerivativeAtCovariance\]](#SecondDerivativeAtCovariance){reference-type="ref" reference="SecondDerivativeAtCovariance"}. Finally, $z^\ast$ was an arbitrary limit point of $z^\lambda$, and since $\nabla^2 W({\widehat A})$ is positive definite the above quadratic problem has a unique solution; hence all limit points coincide, which is enough to conclude the result. ◻ # Proof of Proposition [Proposition 7](#prop:sample_complexity){reference-type="ref" reference="prop:sample_complexity"} {#proof-of-proposition-propsample_complexity} The proof of this statement relies on strong convexity of $\mathcal{J}_n$. Similar results have been proven in the context of entropic optimal transport (*e.g.* [@genevay2019sample]). Our proof is similar to the approach taken in [@rigollet2022sample]. Let $(A_\infty,f_\infty,g_\infty)$ minimize ([\[eq:kt_inf\]](#eq:kt_inf){reference-type="ref" reference="eq:kt_inf"}).
Note that $p_\infty \alpha\otimes \beta$ with $$p_\infty(x,y) = \exp\left( \frac{2}{\varepsilon}\left( \Phi A_\infty(x,y) + f_\infty(x) + g_\infty(y) \right) \right)$$ minimizes ([\[eq:primal\]](#eq:primal){reference-type="ref" reference="eq:primal"}). Let $$P_\infty =\frac{1}{n^2}( p_\infty(x_i,y_j))_{i,j}, \quad F_\infty = (f_\infty(x_i))_i \quad \text{and} \quad G_\infty = (g_\infty(y_j))_j.$$ Note that by optimality of $(A_\infty,f_\infty,g_\infty)$ and $\pi_\infty = p_\infty \alpha\otimes\beta$, $$\lambda \left\| A_\infty \right\|_1+ \sup_{\pi\in\mathcal{U}(\alpha,\beta)} \langle \Phi A_\infty,\,\pi - \hat \pi\rangle - \frac{\varepsilon}{2}\mathrm{KL}(\pi|\alpha\otimes\beta) \leqslant\frac{\varepsilon}{2},$$ where the RHS is obtained by plugging $(A,f,g) = (0,0,0)$ into ([\[eq:kt_inf\]](#eq:kt_inf){reference-type="ref" reference="eq:kt_inf"}). Plugging $\hat\pi$ into the supremum on the LHS, we have $$\lambda \left\| A_\infty \right\|_1 \leqslant\frac{\varepsilon}{2}+\frac{\varepsilon}{2}\mathrm{KL}(\hat\pi|\alpha\otimes\beta) \leqslant C \varepsilon,$$ where the constant $C$ depends on $\hat\pi$. So, $\left\| A_\infty \right\|_1 = \mathcal{O}(\varepsilon/\lambda)$ and, due to the uniform bounds on $f_\infty,g_\infty$ from Lemma [Lemma 13](#lem:boundedness){reference-type="ref" reference="lem:boundedness"}, $p_\infty$ is uniformly bounded away from 0 by $\exp(-C/\lambda)$ for some constant $C$ that depends on $\hat\pi$. Let $P_n$ minimise ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}); it is of the form $$P_n = \frac{1}{n^2} \exp\left( \frac{2}{\varepsilon}\left( \Phi_n A_n + F_{n} \oplus G_n \right) \right)$$ for some vectors $A_n, F_n, G_n$.
The 'certificates' are $z_\infty \triangleq\Phi^* \varphi_\infty$ and $z_n \triangleq\Phi_n^* \varphi_n$ where $$\varphi_\infty = \frac{1}{\lambda}\left( \hat \pi - p_\infty \alpha\otimes \beta \right) \quad \text{and} \quad \varphi_n = \frac{1}{\lambda}\left( \hat P_n - P_n \right).$$ Note that $z_\infty = -\frac{1}{\lambda} \nabla_A \mathcal{L}(A_\infty, \hat \pi)$. The goal is to bound $\left\| \Phi^* \varphi_\infty - \Phi_n^* \varphi_n \right\|_\infty$ so that nondegeneracy of $z_\infty$ implies nondegeneracy of $z_n$. Note that $\Phi_n^* \hat P_n = \Phi^* \hat \pi_n$. So, by the triangle inequality, $$\begin{aligned} \left\| \Phi^* \varphi_\infty - \Phi_n^* \varphi_n \right\|_\infty& \leqslant\frac{1}{\lambda} \left\| \Phi^* \left( p_\infty \alpha\otimes \beta \right) - \Phi_n^* P_n \right\|_\infty +\frac{1}{\lambda} \left\| \Phi^* \left( \hat \pi_n - \hat \pi \right) \right\|_\infty\\ &\leqslant\frac{1}{\lambda} \left\| \Phi_n^* P_\infty - \Phi_n^* P_n \right\|_\infty + \frac{1}{\lambda} \left\| \Phi^* \left( p_\infty \alpha\otimes \beta \right) - \Phi_n^* P_\infty \right\|_\infty +\frac{1}{\lambda} \left\| \Phi^* \left( \hat \pi_n - \hat \pi \right) \right\|_\infty.\end{aligned}$$ The last two terms on the RHS can be controlled using Proposition [Proposition 21](#prop:concentration_cost){reference-type="ref" reference="prop:concentration_cost"}, and are bounded by $\mathcal{O}(m n^{-1/2})$ with probability at least $1-\mathcal{O}(\exp(-m^2))$.
For the first term on the RHS, letting $Z = P_\infty - P_n$, $$\begin{aligned} \left\| \Phi_n^* Z \right\|_\infty &= \left\| \sum_{i,j=1}^n \mathbf{C}(x_i,y_j) Z_{i,j} \right\|_\infty\\ &\leqslant\left\| \mathbf{C} \right\|_\infty \sqrt{ \frac{1}{n^2} \sum_{i,j} \left( \exp\left( \frac{2}{\varepsilon}(\Phi_n A_n + F_n \oplus G_n) \right)- \exp\left( \frac{2}{\varepsilon}(\Phi_n A_\infty + F_\infty \oplus G_\infty) \right) \right)_{i,j}^2}.\end{aligned}$$ Let $L \triangleq 2( \left\| \Phi_n A_n + F_n\oplus G_n \right\|_\infty\vee\left\| \Phi_n A_\infty + F_\infty \oplus G_\infty \right\|_\infty)$. By Lipschitz continuity of the exponential, $$\begin{aligned} &\frac{1}{n^2} \sum_{i,j} \left( \exp\left( \frac{2}{\varepsilon}(\Phi_n A_n + F_n \oplus G_n) \right)- \exp\left( \frac{2}{\varepsilon}(\Phi_n A_\infty + F_\infty \oplus G_\infty) \right) \right)_{i,j}^2\\ &\leqslant\frac{4\exp(L/\varepsilon)}{\varepsilon^2 n^2} \sum_{i,j} \left( (\Phi_n A_n + F_n \oplus G_n) - (\Phi_n A_\infty + F_\infty \oplus G_\infty) \right)_{i,j}^2\\ &\leqslant\frac{12\exp(L/\varepsilon) }{\varepsilon^2 } \left( \frac{1}{n^2}\sum_{i,j} (\Phi_n A_n - \Phi_n A_\infty)_{i,j}^2 + \frac{1}{n} \sum_{i}(F_n - F_\infty)_i^2 + \frac{1}{n} \sum_{j}(G_n - G_\infty)_j^2 \right).\label{eq:error_2}\end{aligned}$$ By strong convexity properties of $\mathcal{J}_n$ and hence $\mathcal{K}_n$, it can be shown (see Prop [Proposition 15](#prop:strongconvexity){reference-type="ref" reference="prop:strongconvexity"}) that ([\[eq:error_2\]](#eq:error_2){reference-type="ref" reference="eq:error_2"}) is upper bounded up to a constant by $$\begin{aligned} & \varepsilon^{-1} \exp(L/\varepsilon) \left( \mathcal{K}_n(A_\infty,F_\infty,G_\infty) - \mathcal{K}_n(A_n,F_n,G_n) \right)\\ \leqslant& \frac{ \exp(L/\varepsilon)}{4} \Bigg( \left\| n^{-2}(\Phi^*_n\Phi_n)^{-1} \right\|\left\| (\partial_A \mathcal{J}_n(A_\infty,F_\infty,G_\infty) +\lambda \xi_\infty) \right\|^2 \\ &\qquad\qquad +n\left\| \partial_F
\mathcal{J}_n(A_\infty,F_\infty,G_\infty) \right\|^2 + n\left\| \partial_G \mathcal{J}_n(A_\infty,F_\infty,G_\infty) \right\|^2\Bigg)\end{aligned}$$ where $\xi_\infty = \frac{1}{\lambda}(\Phi^* \hat \pi- \Phi^* (p_\infty \alpha\otimes \beta))\in \partial\left\| A_\infty \right\|_1$. By Lemma [Lemma 12](#lem:bound1){reference-type="ref" reference="lem:bound1"} and Lemma [Lemma 13](#lem:boundedness){reference-type="ref" reference="lem:boundedness"}, $L=\mathcal{O}(\varepsilon/\lambda)$ with probability at least $1-\mathcal{O}(\exp(-m^2))$ if $\lambda\gtrsim mn^{-\frac12}$. Finally, $$\begin{aligned} &\left\| \partial_A \mathcal{J}_n(A_\infty,F_\infty,G_\infty) +\lambda \xi_\infty \right\|^2 = \left\| \Phi_n^* P_\infty - \Phi_n^* \hat P_n+ \lambda\xi_\infty \right\|^2\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \leqslant 2\left( \left\| \Phi_n^* \hat P_n - \Phi^* \hat \pi \right\|^2 + \left\| \Phi_n^* P_\infty - \Phi^* (p_\infty \alpha\otimes \beta) \right\|^2 \right),\\ &n\left\| \partial_F \mathcal{J}_n(A_\infty,F_\infty,G_\infty) \right\|^2_2 = \frac{1}{n} \sum_{i=1}^n \left( 1- \frac{1}{n} \sum_{j=1}^n p_\infty(x_i,y_j) \right)^2, \\ &n\left\| \partial_G \mathcal{J}_n(A_\infty,F_\infty,G_\infty) \right\|^2_2 = \frac{1}{n} \sum_{j=1}^n \left( 1- \frac{1}{n} \sum_{i=1}^n p_\infty(x_i,y_j) \right)^2 .\end{aligned}$$ We show in Propositions [Proposition 17](#prop:concentration_marginal){reference-type="ref" reference="prop:concentration_marginal"}, [Proposition 20](#prop:concentration_cost1){reference-type="ref" reference="prop:concentration_cost1"} and [Proposition 21](#prop:concentration_cost){reference-type="ref" reference="prop:concentration_cost"} that these are bounded by $\mathcal{O}(m^2 n^{-1})$ with probability at least $1-\mathcal{O}(\exp(-m^2))$ and from Proposition [Proposition 24](#prop:concentration_injective){reference-type="ref" reference="prop:concentration_injective"}, assuming $n\gtrsim \log(2s)$, $\left\| (n^{-2}\Phi_n^*\Phi_n)^{-1}
\right\|\lesssim 1$ with probability at least $1-\mathcal{O}(\exp(- n))$. So, $\lambda \left\| \Phi^* \varphi_\infty - \Phi_n^* \varphi_n \right\|_\infty \lesssim \exp(C/\lambda)\frac{m}{\sqrt{n}}$ with probability at least $1-\mathcal{O}(\exp(-m^2))$. The second statement follows by combining our bound for ([\[eq:error_2\]](#eq:error_2){reference-type="ref" reference="eq:error_2"}) with the fact that $\left\| (n^{-2} \Phi_n^*\Phi_n)^{-1} \right\|\lesssim 1$. In the following subsections, we complete the proof by establishing the required strong convexity properties of $\mathcal{J}_n$ and bounding $\nabla \mathcal{J}_n$ at $(A_\infty,F_\infty,G_\infty)$ using concentration inequalities. The proofs of some of these results follow [@rigollet2022sample], which derives sample complexity bounds for eOT, almost verbatim; we include them because of the differences in our setup. ## Strong convexity properties of $\mathcal{J}_n$ {#sec:strong_conv} In this section, we present some of the key properties of $\mathcal{J}_n$. Recall that, since we assume that $\alpha,\beta$ are compactly supported, up to a rescaling of the space we may assume without loss of generality that for all $k$, $\left\lvert\mathbf{C}_k(x,y)\right\rvert\leqslant 1$. **Lemma 12**. *Let $A,F,G$ minimize $\mathcal{K}_n$. Let $\lambda\gtrsim mn^{-\frac12}$ for some $m>0$. Then, with probability at least $1-\mathcal{O}(\exp(-m^2))$, $$\left\| A \right\|_1 \lesssim (C+1)\varepsilon/\lambda$$ where the constant $C>0$ depends on $d\hat\pi/d(\alpha\otimes \beta)$.* *Proof.* Recall $\hat P_n = \frac1n \mathrm{Id}$. By optimality of $A,F,G$, $$\label{eq:A_bound} \lambda \left\| A \right\|_1 -\langle \hat P_n,\,\Phi_n A\rangle + \langle P,\,\Phi_n A\rangle - \frac{\varepsilon}{2}H(P) \leqslant\frac{\varepsilon}{2}$$ for all $P$ satisfying the marginal constraints $P\mathds{1}= \frac{1}{n}\mathds{1}$ and $P^\top \mathds{1}= \frac1n \mathds{1}$.
By plugging in $P=\hat P_n$, we have $H(P) = \frac{1}{n}\sum_{i=1}^n \log(n) -1 = \log(n)-1$. So, $$\lambda \left\| A \right\|_1 \leqslant\varepsilon\log(n).$$ To obtain a bound that does not grow with $n$, the idea is to take $Q_n = \frac{1}{n^2} (\hat p(x_i,y_j))_{i,j\in[n]}$ where $\hat p = d\hat \pi/d(\alpha\otimes \beta)$. Note that $Q_n$ only approximately satisfies the marginal constraints $Q_n\mathds{1}\approx \frac1n\mathds{1}$ and $Q_n^\top \mathds{1}\approx \frac1n \mathds{1}$ (this approximation can be made precise using Proposition [Proposition 17](#prop:concentration_marginal){reference-type="ref" reference="prop:concentration_marginal"}). So, we insert into the above inequality $P = \tilde Q_n$ with $\tilde Q_n$ being the projection of $Q_n$ onto the constraint set $\mathcal{U}(\frac1n \mathds{1}_n, \frac1n \mathds{1}_n)$. By [@altschuler2017near Lemma 7], the projection $\tilde Q_n$ satisfies $$\left\| \tilde Q_n - Q_n \right\|_1 \leqslant 2 \left\| Q_n \mathds{1}- \frac 1n \mathds{1} \right\|_1 + 2\left\| Q_n^\top \mathds{1}- \frac 1n \mathds{1} \right\|_1.$$ By Proposition [Proposition 17](#prop:concentration_marginal){reference-type="ref" reference="prop:concentration_marginal"}, with probability at least $1-\mathcal{O}(\exp(-m^2))$, $$\left\| Q_n\mathds{1}- \frac 1n \mathds{1} \right\|_1= \frac{1}{n} \sum_{i=1}^n \left\lvert\frac{1}n\sum_{j=1}^n \hat p(x_i,y_j) -1 \right\rvert \leqslant\sqrt{\frac{1}{n}\sum_{i=1}^n \left\lvert\frac{1}n\sum_{j=1}^n \hat p(x_i,y_j) -1 \right\rvert^2} = \mathcal{O}\left( \frac{m}{\sqrt{n}} \right).$$ Similarly, $\left\| Q_n^\top\mathds{1}- \frac 1n \mathds{1} \right\|_1=\mathcal{O}(m n^{-\frac12})$ with the same probability. Moreover, since $\hat p$ is uniformly bounded from below, one can check from the projection procedure of [@altschuler2017near] that $\tilde Q_n$ is also uniformly bounded from above and away from zero with constants that depend only on $\hat p$.
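The projection step used for $\tilde Q_n$ can be sketched concretely. Below is a minimal numpy rendition of the rounding scheme of [@altschuler2017near]: shrink overweight rows, then overweight columns, then repair the remaining mass deficit with a rank-one correction. The function name and the toy input $Q$ are our own; this is an illustration of the Lemma 7 guarantee, not the authors' reference implementation.

```python
import numpy as np

def round_to_marginals(P, a, b):
    """Round a nonnegative matrix P onto the transport polytope U(a, b),
    in the spirit of [altschuler2017near, Lemma 7]."""
    r = np.minimum(1.0, a / P.sum(axis=1))   # shrink overweight rows
    P = P * r[:, None]
    c = np.minimum(1.0, b / P.sum(axis=0))   # shrink overweight columns
    P = P * c[None, :]
    err_a = a - P.sum(axis=1)                # nonnegative row deficits
    err_b = b - P.sum(axis=0)                # nonnegative column deficits
    if err_a.sum() > 0:
        P = P + np.outer(err_a, err_b) / err_a.sum()  # rank-one repair
    return P

rng = np.random.default_rng(3)
n = 50
a = b = np.full(n, 1.0 / n)
# Toy Q_n: approximately feasible, with density bounded above and below.
Q = rng.uniform(0.5, 1.5, size=(n, n)) / n**2
Qt = round_to_marginals(Q, a, b)

assert np.allclose(Qt.sum(axis=1), a) and np.allclose(Qt.sum(axis=0), b)
# ℓ1 perturbation bound of Lemma 7:
assert np.abs(Qt - Q).sum() <= 2 * (np.abs(Q.sum(axis=1) - a).sum()
                                    + np.abs(Q.sum(axis=0) - b).sum()) + 1e-12
```

Since the input density is bounded away from zero and the shrink factors are at most one, the output stays bounded above and below, which is the property used for $\tilde Q_n$ in the proof.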
So, $H(\tilde Q_n)$ is bounded by a constant $C$ that depends on the lower bound of $d\hat \pi/d(\alpha\otimes \beta)$. Plugging in $\tilde Q_n$ into ([\[eq:A_bound\]](#eq:A_bound){reference-type="ref" reference="eq:A_bound"}), we have $$\lambda \left\| A \right\|_1 - \langle \hat P_n - \tilde Q_n,\,\Phi_n A\rangle \lesssim C\varepsilon+\varepsilon.$$ Note that $$\begin{aligned} &\left\lvert\langle \tilde Q_n - Q_n,\,\Phi_n A\rangle\right\rvert = \left\lvert\sum_{k=1}^s \sum_{i,j=1}^n A_k (C_k)_{i,j} (\tilde Q_n- Q_n)_{i,j}\right\rvert\\ &\leqslant\max_k\left\| C_k \right\|_\infty \left\| A \right\|_1 \left\| Q_n - \tilde Q_n \right\|_1 .\end{aligned}$$ Moreover, $$\begin{aligned} \langle \hat P_n - Q_n,\,\Phi_n A\rangle & \leqslant\left\| A \right\|_1\left\| \Phi_n^* (\hat P_n - Q_n) \right\|. \end{aligned}$$ By Propositions [Proposition 20](#prop:concentration_cost1){reference-type="ref" reference="prop:concentration_cost1"} and [Proposition 21](#prop:concentration_cost){reference-type="ref" reference="prop:concentration_cost"}, with probability at least $1-\mathcal{O}(\exp(-m^2))$, $$\left\| \Phi_n^* \hat P_n - \Phi^*\hat \pi \right\| = \mathcal{O}(m n^{-\frac12}) \quad \text{and} \quad \left\| \Phi_n^* Q_n - \Phi^*\hat \pi \right\| = \mathcal{O}(m n^{-\frac12}).$$ So, we have $$\left( \lambda - \left\| \Phi_n^* (\hat P_n - Q_n) \right\| -\left\| \tilde Q_n - Q_n \right\|_1 \right) \left\| A \right\|_1 \lesssim (C+1)\varepsilon,$$ and hence, with probability at least $1-\mathcal{O}(\exp(-m^2))$, $$\left\| A \right\|_1 \lesssim (C+1)\varepsilon/\lambda$$ if $\lambda\gtrsim mn^{-\frac12}$. ◻ **Lemma 13**. *Let $(A,F,G)$ minimize ([\[eq:kt_n\]](#eq:kt_n){reference-type="ref" reference="eq:kt_n"}). Let $C_A = \Phi_n A$.
Then, $$F_i \in [-3\left\| C_A \right\|_\infty, \left\| C_A \right\|_\infty] \quad \text{and} \quad G_j \in [-2\left\| C_A \right\|_\infty,2\left\| C_A \right\|_\infty].$$ Moreover, $\exp(8\left\| C_A \right\|_\infty/\varepsilon ) \geqslant\exp\left( \frac{2}{\varepsilon} \left( F_i + G_j + (\Phi_n A )_{i,j} \right) \right)\geqslant\exp(-12\left\| C_A \right\|_\infty/\varepsilon)$. Note that $\left\| C_A \right\|_\infty \leqslant\left\| A \right\|_1$.* *Let $(A,f,g)$ minimize ([\[eq:kt_inf\]](#eq:kt_inf){reference-type="ref" reference="eq:kt_inf"}) and let $c_A = \Phi A$. Then, for all $x,y$, $$f(x) \in [-3\left\| c_A \right\|_\infty, \left\| c_A \right\|_\infty] \quad \text{and} \quad g(y) \in [-2\left\| c_A \right\|_\infty,2\left\| c_A \right\|_\infty].$$ Moreover, $\exp(8\left\| c_A \right\|_\infty/\varepsilon ) \geqslant\exp\left( \frac{2}{\varepsilon} \left( f(x) + g(y) + \Phi A(x,y) \right) \right)\geqslant\exp(-12\left\| c_A \right\|_\infty/\varepsilon)$.* *Proof.* This proof is nearly identical to [@rigollet2022sample Prop. 10]: Let $A,F,G$ minimize ([\[eq:kt_n\]](#eq:kt_n){reference-type="ref" reference="eq:kt_n"}). By the marginal constraints for $P_n \triangleq\frac{1}{n^2}\left( \exp\left( \frac{2}{\varepsilon} (F\oplus G + \Phi_n A) \right) \right)_{i,j}$ given in ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}), $$\label{eq:upper_bd_f} \begin{split} 1 &= \frac{1}{n} \sum_j \exp\left( \frac{2}{\varepsilon} (\Phi A(x_i,y_j) + F_{i} + G_{j}) \right)\\ & \geqslant\exp\left( \frac{2}{\varepsilon} (-\left\| C_A \right\|_\infty+ F_{i}) \right) \sum_j \frac{1}{n} \exp\left( 2G_{j}/\varepsilon \right) \geqslant\exp\left( \frac{2}{\varepsilon} (-\left\| C_A \right\|_\infty+ F_{i}) \right), \end{split}$$ where the second inequality uses Jensen's inequality and the normalization $\sum_j G_{j} =0$. So, $F_{i}\leqslant\left\| C_A \right\|_\infty$ for all $i\in [n]$.
Using the other marginal constraint for $P_n$ along with this bound on $F_{i}$ implies that $$1 = \frac{1}{n} \sum_i \exp\left( \frac{2}{\varepsilon}( \Phi A(x_i,y_j) + F_{i} + G_{j}) \right) \leqslant\exp\left( \frac{2}{\varepsilon}\left( 2\left\| C_A \right\|_\infty+ G_{j} \right) \right).$$ So, $G_{j} \geqslant-2\left\| C_A \right\|_\infty$. To prove the reverse bounds, we now show that $\sum_i F_i$ is lower bounded: Note that since $H(P)\geqslant-1$, by duality between ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}) and ([\[eq:kt_n\]](#eq:kt_n){reference-type="ref" reference="eq:kt_n"}), $$\left\| C_A \right\|_\infty + \frac{\varepsilon}{2} \geqslant\langle \Phi_n A,\,P\rangle -\frac{\varepsilon}{2}H(P) = -\frac{1}{n}\sum_{i}F_i -\frac1n \sum_j G_j + \frac{\varepsilon}{2n^2}\sum_{i,j}\exp\left( \frac{2}{\varepsilon}(F_i+G_j +( \Phi_n A)_{ij}) \right)$$ By assumption, $\sum_j G_j = 0$ and $\sum_{i,j}\exp\left( \frac{2}{\varepsilon}(F_i+G_j +( \Phi_n A)_{ij}) \right) = n^2$. So, $$\frac{1}{n}\sum_i F_i \geqslant- \left\| C_A \right\|_\infty.$$ By repeating the argument in ([\[eq:upper_bd_f\]](#eq:upper_bd_f){reference-type="ref" reference="eq:upper_bd_f"}), we see that $$1 \geqslant\exp( 2/\varepsilon(-\left\| C_A \right\|_\infty + G_j) ) \exp\left( \frac{2}{n\varepsilon} \sum_i F_i \right) \geqslant\exp(2/\varepsilon(-2\left\| C_A \right\|_\infty + G_j)).$$ So, $G_j \leqslant 2\left\| C_A \right\|_\infty$ and $F_i \geqslant-3\left\| C_A \right\|_\infty$. The proof for ([\[eq:kt_inf\]](#eq:kt_inf){reference-type="ref" reference="eq:kt_inf"}) is nearly identical and hence omitted. ◻ Similarly to [@rigollet2022sample Lemma 11], we derive the following strong convexity bound for $\mathcal{J}_n$: **Lemma 14**.
*The functional $\mathcal{J}_n$ is strongly convex with $$\label{eq:strong_convexity_J} \begin{split} \mathcal{J}_n(A',F',G') \geqslant\mathcal{J}_n(A,F,G) &+ \langle \nabla \mathcal{J}_n(A,F,G),\,(A',F',G')-(A,F,G)\rangle \\ &+ \frac{\exp(-L/\varepsilon)}{\varepsilon} \left( \frac{1}{n^2} \left\| \Phi_n (A-A') \right\|^2_2 + \frac{1}{n}\left\| F-F' \right\|^2_2 + \frac{1}{n}\left\| G-G' \right\|_2^2 \right), \end{split}$$ for some $L = \mathcal{O}(\left\| A \right\|_1\vee \left\| A' \right\|_1)$.* *Proof.* To establish the strong convexity inequality, let $$h(t) \triangleq\mathcal{J}_n((1-t) A + t A', (1-t)F + t F' ,(1-t)G + tG').$$ It suffices to find $\delta>0$ such that for all $t\in [0,1]$, $$\label{eq:toshow_strong_conv} h''(t)\geqslant\delta \left( \frac{1}{n^2} \left\| \Phi_n (A-A') \right\|^2_2 + \frac{1}{n}\left\| F-F' \right\|^2_2 + \frac{1}{n}\left\| G-G' \right\|_2^2 \right).$$ Let $Z_t \triangleq((1-t)F+ tF')\oplus ((1-t)G+tG') + ((1-t)\Phi_n A +t \Phi_n A')$. Note that $$\begin{aligned} h''(t) = \frac{2}{ \varepsilon n^2} \sum_{i,j} \exp\left( \frac{2}{\varepsilon}(Z_t)_{i,j} \right) (F'_i - F_i +G'_j - G_j + (\Phi_n A')_{i,j} - (\Phi_n A)_{i,j})^2. \end{aligned}$$ Since $\left\| C_A \right\|_\infty \vee \left\| C_{A'} \right\|_\infty \leqslant L$, by Lemma [Lemma 13](#lem:boundedness){reference-type="ref" reference="lem:boundedness"}, $\left\| Z_t \right\|_\infty \lesssim L$. So, $$h''(t) \geqslant\frac{2}{\varepsilon n^2} \exp(-L/\varepsilon)\sum_{i,j}(F'_i - F_i +G'_j - G_j + (\Phi_n A')_{i,j} - (\Phi_n A)_{i,j})^2.$$ By expanding out the brackets and using $\sum_j G_j = 0$ and the fact that the $C_k$ are centred (i.e. $\sum_i (C_k)_{i,j} = 0$ and $\sum_j (C_k)_{i,j} = 0$), ([\[eq:toshow_strong_conv\]](#eq:toshow_strong_conv){reference-type="ref" reference="eq:toshow_strong_conv"}) holds with $\delta = \frac{2}{\varepsilon}\exp(-L/\varepsilon)$. ◻ Based on this strong convexity result, we have the following bound.
**Proposition 15**. *Let $(A_n, F_n,G_n)$ minimise $\mathcal{K}_n$. Then, for all $(A,F,G)\in\mathcal{S}$ such that $n^{-2}\exp\left(\frac{2}{\varepsilon}(F\oplus G + \Phi_n A)\right)$ satisfies the marginal constraints of ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}), $$\mathcal{K}_n(A,F,G) - \mathcal{K}_n(A_n,F_n,G_n) \geqslant\frac{\exp(-L/\varepsilon)}{\varepsilon} \left( \frac{1}{n^2} \left\| \Phi_n (A-A_n) \right\|^2_2 + \frac{1}{n}\left\| F-F_n \right\|^2_2 + \frac{1}{n}\left\| G-G_n \right\|_2^2 \right)$$ and $$\begin{aligned} \mathcal{K}_n(A,F,G) -\mathcal{K}_n(A_n,F_n,G_n) \leqslant&\frac{ \varepsilon\exp(L/\varepsilon)}{4} \Bigg( \left\| n^{-2}(\Phi^*_n\Phi_n)^{-1} \right\|\left\| (\partial_A \mathcal{J}_n(A,F,G) +\lambda \xi) \right\|^2 \\ &+n\left\| \partial_F \mathcal{J}_n(A,F,G) \right\|^2_2 + n\left\| \partial_G \mathcal{J}_n(A,F,G) \right\|^2_2\Bigg),\end{aligned}$$ where $L = \mathcal{O}(\left\| A \right\|_1\vee \left\| A_n \right\|_1)$.* *Proof.* By strong convexity of $\mathcal{J}_n$, we can show that for any $(A,F,G), (A',F',G')\in\mathcal{S}$ and any $\xi\in \partial\left\| A \right\|_1$, $$\label{eq:strong_convexity} \begin{split} \mathcal{K}_n(A',F',G') \geqslant\mathcal{K}_n(A,F,G) &+ \langle \nabla \mathcal{J}_n(A,F,G),\,(A',F',G')-(A,F,G)\rangle +\lambda \langle \xi,\,A'-A\rangle \\ &+ \frac{\delta}{2} \left( \frac{1}{n^2} \left\| \Phi_n (A-A') \right\|^2_2 + \frac{1}{n}\left\| F-F' \right\|^2_2 + \frac{1}{n}\left\| G-G' \right\|_2^2 \right), \end{split}$$ where $\delta = \frac{2\exp(-L/\varepsilon)}{\varepsilon}$ with $L= \mathcal{O}(\left\| A \right\|_1\vee \left\| A' \right\|_1)$. The first statement follows by letting $(A,F,G) = (A_n,F_n,G_n)$ in the above inequality and using the first-order optimality conditions at $(A_n,F_n,G_n)$.
To prove the second statement, let $$\begin{aligned} M\triangleq&\langle \nabla \mathcal{J}_n(A,F,G),\,(A',F',G')-(A,F,G)\rangle +\lambda \langle \xi,\,A'-A\rangle \\ &+ \frac{\delta}{2} \left( \frac{1}{n^2} \left\| \Phi_n (A-A') \right\|^2_2 + \frac{1}{n}\left\| F-F' \right\|^2_2 + \frac{1}{n}\left\| G-G' \right\|_2^2 \right).\end{aligned}$$ Minimising over $(A',F',G')$ gives $$M\geqslant-\frac{1}{2\delta}\left( n^2\left\| \partial_A \mathcal{J}_n(A,F,G) +\lambda \xi \right\|^2_{(\Phi_n^* \Phi_n)^{-1} } +n\left\| \partial_F \mathcal{J}_n(A,F,G) \right\|^2_2 + n\left\| \partial_G \mathcal{J}_n(A,F,G) \right\|^2_2 \right),$$ so $$-M \leqslant\frac{1}{2\delta}\left( \left\| (n^{-2}\Phi^*_n\Phi_n)^{-1} \right\|\left\| \partial_A \mathcal{J}_n(A,F,G) +\lambda \xi \right\|^2 +n\left\| \partial_F \mathcal{J}_n(A,F,G) \right\|^2_2 + n\left\| \partial_G \mathcal{J}_n(A,F,G) \right\|^2_2 \right).$$ Finally, taking $(A',F',G') = (A_n,F_n,G_n)$ in ([\[eq:strong_convexity\]](#eq:strong_convexity){reference-type="ref" reference="eq:strong_convexity"}) gives $\mathcal{K}_n(A,F,G)- \mathcal{K}_n(A_n,F_n,G_n) \leqslant-M$, which together with the bound on $-M$ above proves the second statement. ◻ ## Concentration bounds **Lemma 16** (McDiarmid's inequality). *Let $f:\mathcal{X}^n \to \mathbb{R}$ satisfy the bounded differences property: for every $i\in[n]$ and all $x_1,\ldots,x_n\in\mathcal{X}$, $$\sup_{x_i'\in\mathcal{X}} \left\lvert f(x_1,\ldots,x_{i-1},x_i, x_{i+1},\ldots, x_n) - f(x_1,\ldots,x_{i-1},x_i', x_{i+1},\ldots, x_n)\right\rvert \leqslant c.$$ Then, given independent random variables $X_1,\ldots, X_n$ with $X_i\in\mathcal{X}$, for any $t>0$, $$\mathbb{P}(\left\lvert f(X_1,\ldots, X_n) - \mathbb{E}[f(X_1,\ldots, X_n) ]\right\rvert \geqslant t ) \leqslant 2\exp(- 2 t^2/(nc^2)).$$* Given random vectors $X$ and $Y$, denote $\mathrm{Cov}(X,Y) = \mathbb{E}\langle Y-\mathbb{E}[Y],\,X-\mathbb{E}[X]\rangle$ and $\mathrm{Var}(X) = \mathrm{Cov}(X,X)$. **Proposition 17**. *Let $\pi$ have marginals $\alpha$ and $\beta$ and let $p = \frac{d\pi}{d(\alpha\otimes \beta)}$. Let $(x_i,y_i)\sim \hat \pi$ where $\hat \pi$ has marginals $\alpha,\beta$.
Assume that $\left\| p \right\|_\infty \leqslant b$. Then, $$\begin{aligned} \mathbb{E}\left[ \frac{1}{n} \sum_{j=1}^n \left( 1- \frac{1}{n} \sum_{i=1}^n p(x_i,y_j) \right)^2 \right] \leqslant\frac{(b+1)^2}{n} \end{aligned}$$ and $$\frac{1}{n} \sum_{j=1}^n \left( 1- \frac{1}{n} \sum_{i=1}^n p(x_i,y_j) \right)^2 \leqslant\frac{ (t+b+1)^2}{n}$$ with probability at least $1-\exp(-t^2/(4b^2))$.* *Remark 18*. The bounds also translate to an $\ell_1$ bound on the marginal errors, since by Cauchy--Schwarz, $$\begin{aligned} \frac{1}{n} \sum_{j=1}^n \left\lvert 1- \frac{1}{n} \sum_{i=1}^n p(x_i,y_j)\right\rvert& \leqslant\sqrt{ \sum_{j=1}^n \frac{1}{n}\left( 1- \frac{1}{n} \sum_{i=1}^n p(x_i,y_j) \right)^2}.\end{aligned}$$ Moreover, by Jensen's inequality, $\mathbb{E}\frac{1}{n} \sum_{j=1}^n \left\lvert 1- \frac{1}{n} \sum_{i=1}^n p(x_i,y_j)\right\rvert \leqslant \sqrt{ \mathbb{E}\sum_{j=1}^n \frac{1}{n}\left( 1- \frac{1}{n} \sum_{i=1}^n p(x_i,y_j) \right)^2}.$ *Proof.* $$\begin{aligned} &\mathbb{E}\left[ \frac{1}{n} \sum_{j=1}^n \left( 1- \frac{1}{n} \sum_{i=1}^n p(x_i,y_j) \right)^2 \right]= \frac{1}{n^3} \sum_{i,j,k=1}^n \mathbb{E}\left( (1-p(x_i,y_j))(1-p(x_k,y_j)) \right).\end{aligned}$$ For each $j\in [n]$, we have the following cases for $u_j\triangleq\mathbb{E}\left( (1-p(x_i,y_j))(1-p(x_k,y_j)) \right)$: 1. $i=k=j$, then $u_j= \mathbb{E}_{(x,y)\sim \hat \pi}(p(x,y)-1)^2$. There is 1 such term. 2. $i=j$ and $k\neq j$, then $u_j = \mathbb{E}_{(x,y)\sim \hat \pi, z\sim \alpha}\left( (1-p(x,y))(1-p(z,y)) \right) = 0$. There are $n-1$ such terms. 3. $i\neq j$ and $k= j$, then $u_j = \mathbb{E}_{(z,y)\sim\hat \pi, x\sim \alpha}\left( (1-p(x,y))( 1-p(z,y)) \right) = 0$. There are $n-1$ such terms. 4. $i=k$ and $i\neq j$, then $u_j = \mathbb{E}_{x\sim \alpha, y\sim \beta}(1-p(x,y))^2$ and there are $n-1$ such terms. 5. $i,j,k$ all distinct. Then, $u_j = 0$ and there are $(n-1)(n-2)$ such terms.
Therefore, $$\frac{1}{n^3} \sum_{i,j,k=1}^n \mathbb{E}\left( (1-p(x_i,y_j))(1-p(x_k,y_j)) \right) = \frac{1}{n^2} \mathbb{E}_{(x,y)\sim \hat \pi}(p(x,y)-1)^2 + \frac{n-1}{n^2} \mathbb{E}_{x\sim \alpha, y\sim \beta}(1-p(x,y))^2.$$ Using $\left\lvert 1-p(x,y)\right\rvert\leqslant b+1$ gives the first inequality. Note also that letting $V \triangleq\frac{1}{n} \sum_{i=1}^n \left( (1-p(x_i,y_j) ) \right)_j$, we have $\left\| V \right\|^2 = \sum_{j=1}^n\left( \frac{1}{n} \sum_{i=1}^n (1-p(x_i,y_j) ) \right)^2$. We will apply Lemma [Lemma 16](#lem:mcdiarmid){reference-type="ref" reference="lem:mcdiarmid"} to $f(z_1,\ldots, z_n) = \left\| V \right\|$, where $z_i\triangleq(x_i,y_i)$. Let $V' = \frac{1}{n} \sum_{i=1}^n \left( (1-p(x'_i, y'_j) ) \right)_j$, where $x_i' = x_i$ and $y_i' = y_i$ for $i\geqslant 2$. Then, for all vectors $u$ of norm 1, $$\begin{aligned} \langle u,\,V'-V\rangle& = \sum_{j=1}^n u_j \frac{1}{n} \sum_{i=1}^n (p(x_i,y_j)-p(x_i',y_j') )\\ & = u_1 \frac{1}{n} \sum_{i=1}^n (p(x_i,y_1)-p(x_i',y_1') ) + \sum_{j=2}^n u_j \frac{1}{n} (p(x_1,y_j)-p(x_1',y_j) ) \\ &\leqslant\frac{2b}{n} \sum_{j=1}^n \left\lvert u_j\right\rvert \leqslant\frac{2b}{\sqrt{n}}.\end{aligned}$$ So, by the reverse triangle inequality, $\left\lvert\left\| V \right\|-\left\| V' \right\|\right\rvert \leqslant\frac{2b}{\sqrt{n}}$. It follows that $$n^{-1/2}\left\| V \right\| \leqslant n^{-1/2} t +n^{-1/2} \mathbb{E}\left\| V \right\| \leqslant n^{-1/2} t + \sqrt{n^{-1}\mathbb{E}\left\| V \right\|^2 } \leqslant n^{-1/2} (t+b+1)$$ with probability at least $1-\exp(-t^2/(4b^2))$, as required.
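The $\mathcal{O}(1/n)$ behaviour of the expected squared marginal error can be observed numerically. Below is a minimal Monte Carlo sketch with a hypothetical density $p(x,y) = 1 + \frac12\cos(2\pi x)\cos(2\pi y)$ — which has uniform marginals on $[0,1]$, so $b = 3/2$ — and the admissible choice $\hat\pi = \alpha\otimes\beta$ (the names below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical density p w.r.t. the product of uniforms on [0,1]:
# p(x,y) = 1 + 0.5*cos(2*pi*x)*cos(2*pi*y) has uniform marginals and ||p||_inf = 1.5
p = lambda x, y: 1 + 0.5 * np.cos(2 * np.pi * x) * np.cos(2 * np.pi * y)
b = 1.5
n, trials = 200, 200
vals = []
for _ in range(trials):
    x, y = rng.random(n), rng.random(n)   # hat{pi} = alpha x beta has the right marginals
    P = p(x[:, None], y[None, :])         # P[i, j] = p(x_i, y_j)
    vals.append(np.mean((1.0 - P.mean(axis=0)) ** 2))
est = np.mean(vals)                       # Monte Carlo estimate of the expectation
assert est <= (b + 1) ** 2 / n            # bound from the proposition
```

With these parameters the estimate is roughly two orders of magnitude below the (loose) bound $(b+1)^2/n$, consistent with the case analysis above.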
◻ ### Bounds for the cost parametrization In the following two propositions (Propositions [Proposition 21](#prop:concentration_cost){reference-type="ref" reference="prop:concentration_cost"} and [Proposition 20](#prop:concentration_cost1){reference-type="ref" reference="prop:concentration_cost1"}), we assume that $\Phi_n$ is defined with $C_k = (\mathbf{C}_k(x_i,y_j))_{i,j}$, and we discuss in the remark afterwards how to account for the fact that our costs $C_k$ in ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}) are centred. Note that the following classical result is a direct consequence of Lemma [Lemma 16](#lem:mcdiarmid){reference-type="ref" reference="lem:mcdiarmid"}. **Lemma 19**. *Suppose $Z_1,\ldots, Z_m$ are independent mean zero random variables taking values in a Hilbert space $\mathcal{H}$. Suppose there is some $C>0$ such that for all $k$, $\left\| Z_k \right\|\leqslant C$. Then, for all $t>0$, with probability at least $1-2\exp(-t)$, $$\left\| \frac{1}{m}\sum_{k=1}^m Z_k \right\|^2 \leqslant\frac{8C^2t}{m}.$$* **Proposition 20**. *Let $C = \max_{x,y}\left\| \mathbf{C}(x,y) \right\|$. Let $(x_i,y_i)_{i=1}^n$ be iid drawn from $\hat \pi$. Then, $$\left\| \Phi_n^* \hat P - \Phi^* \hat\pi \right\| \leqslant\frac{\sqrt{8}C t}{n}$$ with probability at least $1-2\exp(-t^2)$.* *Proof.* Direct consequence of Lemma [Lemma 19](#lem:bernstein){reference-type="ref" reference="lem:bernstein"}. ◻ **Proposition 21**. *Let $(x_i,y_i)_{i=1}^n$ be iid drawn from $\hat \pi$, which has marginals $\alpha,\beta$. Let $P = \frac1{n^2}(p(x_i,y_j))_{i,j=1}^n$ where $\pi$ has marginals $\alpha,\beta$ and $p = \frac{d\pi}{d(\alpha\otimes \beta)}$.
Then, $\mathbb{E}\left\| \Phi_n^* P - \Phi^* (p \alpha \otimes \beta) \right\|^2 = \mathcal{O}(n^{-1})$ and $$\left\| \Phi_n^* P - \Phi^* p \alpha\otimes \beta \right\| \lesssim \frac{1+m}{\sqrt{n}}$$ with probability at least $1-2\exp(-2m^2/ (64 \left\| p \right\|_\infty^2))$.* *Proof.* Let $h(x,y) \triangleq p(x,y) \mathbf{C}(x,y)$, then $$\begin{aligned} &\Phi_n^* P = \frac{1}{n^2}\sum_{j=1}^n\left( h(x_j,y_j) + \sum_{\substack{i=1\\i\neq j}}^n h(x_i,y_j) \right)\end{aligned}$$ and $$\mathbb{E}[\Phi_n^* P - \Phi^* p \alpha\otimes \beta ] = \frac{1}{n} \Phi^*\left( p \hat \pi- p \alpha\otimes\beta \right).$$ It follows that $$\begin{aligned} \mathbb{E}\left\| \Phi_n^* P - \Phi^* p \alpha\otimes \beta \right\|^2 = \frac{1}{n^2}\left\| \Phi^*\left( p \hat \pi- p \alpha\otimes\beta \right) \right\|^2 + \mathrm{Var}(\Phi_n^* P)\end{aligned}$$ and $$\begin{aligned} \mathrm{Var}(\Phi_n^* P) = \frac{1}{n^4} \sum_{i,j,k,\ell=1}^n \mathrm{Cov}(h(x_i,y_j), h(x_k,y_\ell)).\end{aligned}$$ Note that $\mathrm{Cov}(h(x_i,y_j), h(x_k,y_\ell)) = 0$ whenever $i,j,k,\ell$ are distinct, and there are $n(n-1)(n-2)(n-3) = n^4-6n^3+11n^2-6n$ such terms, i.e. there are at most $6n^3-11n^2+6n$ nonzero terms in the sum. It follows that $\mathrm{Var}(\Phi_n^* P) =\mathcal{O}(n^{-1})$ if $\left\| \mathbf{C} \right\|$ and $p$ are uniformly bounded. To conclude, we apply Lemma [Lemma 16](#lem:mcdiarmid){reference-type="ref" reference="lem:mcdiarmid"} with $f(z_1,\ldots, z_n) = \left\| \Phi_n^* P - \Phi^* p \alpha\otimes \beta \right\|$ and $z_i = (x_i,y_i)$. Let $X(z_1,\ldots, z_n) = \Phi_n^* P - \Phi^* p \alpha\otimes \beta$.
Note that for an arbitrary vector $u$, $$\begin{aligned} &\langle u,\,X(z_1,z_2,\ldots, z_n) - X(z_1',z_2,\ldots, z_n)\rangle = \frac{1}{n^2}\sum_{i,j}\langle u,\, h(x_i,y_j) - h(x_i',y_j')\rangle\\ &= \frac{1}{n^2}\sum_{i=2}^n \langle u,\, h(x_i,y_1) - h(x_i,y_1')\rangle + \frac{1}{n^2}\sum_{j=1}^n \langle u,\, h(x_1, y_j) - h(x_1',y_j')\rangle \\ &\leqslant 8n^{-1} \left\| u \right\| \sup_{x,y} \left\| \mathbf{C}(x,y) \right\| \left\| p \right\|_\infty.\end{aligned}$$ So, $f(z_1,z_2,\ldots, z_n) - f(z_1',z_2,\ldots, z_n) \leqslant 8n^{-1} \left\| p \right\|_\infty$ using $\sup_{x,y}\left\| \mathbf{C}(x,y) \right\| \leqslant 1$, and $$\left\| \Phi_n^* P - \Phi^* p \alpha\otimes \beta \right\| \lesssim \frac{1+m}{\sqrt{n}}$$ with probability at least $1-2\exp(-2m^2/ (64 \left\| p \right\|_\infty^2))$. ◻ *Remark 22* (Adjusting for the centred cost parametrization). In the above two propositions, we compare $(\Phi_n^* P)_k = \sum_{i,j} \mathbf{C}_k(x_i,y_j) P_{i,j}$ with $(\Phi^* \pi)_k = \int \mathbf{C}_k(x,y) d\pi(x,y)$ for $P$ being a discretized version of $\pi$. The delicate issue is that in ([\[eq:primal_n\]](#eq:primal_n){reference-type="ref" reference="eq:primal_n"}), $C_k$ is a centred version of $\hat C_k \triangleq(\mathbf{C}_k(x_i,y_j))_{i,j}$.
In particular, $$(C_k)_{i,j} = (\hat C_k)_{i,j} - \frac{1}{n}\sum_{\ell=1}^n (\hat C_k)_{\ell,j} - \frac{1}{n}\sum_{m=1}^n (\hat C_k)_{i,m} + \frac{1}{n^2}\sum_{\ell=1}^n\sum_{m=1}^n (\hat C_k)_{\ell,m}.$$ Note that if $P\mathds{1}=\frac{1}{n} \mathds{1}$ and $P^\top \mathds{1}=\frac1n \mathds{1}$, then $$\sum_{i,j}(C_k)_{i,j}P_{i,j} = \sum_{i,j}(\hat C_k)_{i,j}P_{i,j} - \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n (\hat C_k)_{i,j}.$$ This last term on the RHS is negligible because $\Phi^* (\alpha\otimes \beta) = 0$ by the assumption that $\mathbf{C}_k$ is centred: note that $$\frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n (\hat C_k)_{i,j} = \left( \hat \Phi_n^* \left( \frac{1}{n^2}\mathds{1}_{n\times n} \right) \right)_k.$$ Applying the above proposition, $$\left\| \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n (\hat C_k)_{i,j} - \Phi^* (\alpha\otimes \beta) \right\| \lesssim \frac{m+\log(n)}{\sqrt{n}}$$ with probability at least $1-\exp(-m)$. So, up to constants, the above two propositions can be applied even for our centred cost $C$. ### Invertibility bound Recall our assumption that $M\triangleq\mathbb{E}_{(x,y)\sim\alpha\otimes \beta} [\mathbf{C}(x,y) \mathbf{C}(x,y)^\top ]$ is invertible. In Proposition [Proposition 24](#prop:concentration_injective){reference-type="ref" reference="prop:concentration_injective"}, we bound the deviation of $\frac{1}{n^2}\Phi_n^*\Phi_n$ from $M$ in spectral norm, and hence establish that it is invertible with high probability. We will make use of the following matrix Bernstein inequality. **Theorem 23** (Matrix Bernstein). *[@tropp2015introduction] Let $Z_1,\ldots, Z_n \in \mathbb{R}^{d\times d}$ be independent symmetric mean-zero random matrices such that $\left\| Z_i \right\| \leqslant L$ for all $i\in [n]$. Then, $$\mathbb{E}\left\| \sum_i Z_i \right\| \leqslant\sqrt{2\sigma^2 \log(2d)} + \frac{1}{3}L \log(2d)$$ where $\sigma^2 = \left\| \sum_{i=1}^n \mathbb{E}[Z_i^2] \right\|$.* **Proposition 24**. *Assume that $\log(2s)+1\leqslant n$.
Then, $$\left\| \frac{1}{n^2}\Phi_n^*\Phi_n - M \right\| \lesssim \sqrt{\frac{\log(2s)}{n-1} } + \frac{m}{\sqrt{n}}$$ with probability at least $1-\mathcal{O}(\exp(-m^2))$.* *Proof.* Recall that $\Phi_n A = \sum_k A_k C_k$, where $C_k$ is the centred version of the matrix $\hat C_k = (\mathbf{C}_k(x_i,y_j))_{i,j}$. That is, $$\label{eq:Ck_centre} (C_k)_{i,j} = (\hat C_k)_{i,j} - \frac{1}{n} \sum_{\ell} (\hat C_k)_{\ell,j} - \frac{1}{n} \sum_{m} (\hat C_k)_{i,m} + \frac{1}{n^2} \sum_{\ell} \sum_m (\hat C_k)_{\ell,m}.$$ For simplicity, we first do the proof for $(\Phi_n A)_{i,j} = \sum_k A_k \mathbf{C}_k(x_i,y_j)$, and explain at the end how to modify the proof to account for $\Phi_n$ using the centred $C_k$. First, $$\begin{aligned} \frac{1}{n^2} \Phi_n^* \Phi_n -M & = \frac{1}{n} \sum_{i=1}^n \left( \frac{1}{n} \sum_{j=1}^n \mathbf{C}(x_i,y_j) \mathbf{C}(x_i,y_j)^\top - \int \mathbf{C}(x_i,y) \mathbf{C}(x_i,y)^\top d\beta(y) \right) \label{eq:mtx1}\\ &+ \frac{1}{n} \sum_{i=1}^n \int \mathbf{C}(x_i,y) \mathbf{C}(x_i,y)^\top d\beta(y) - \int \mathbf{C}(x,y)\mathbf{C}(x,y)^\top d\alpha(x)d\beta(y). \label{eq:mtx2}\end{aligned}$$ To bound the two terms in ([\[eq:mtx2\]](#eq:mtx2){reference-type="ref" reference="eq:mtx2"}), let $Z_i\triangleq\int \mathbf{C}(x_i,y) \mathbf{C}(x_i,y)^\top d\beta(y) - \int \mathbf{C}(x,y)\mathbf{C}(x,y)^\top d\alpha(x)d\beta(y)$ and observe that these are iid matrices with zero mean. By the matrix Bernstein inequality with the bounds $\left\| Z_i \right\| \leqslant 2$ and $\left\| Z_i^2 \right\| \leqslant 4$, $$\mathbb{E}\left\| \frac{1}{n} \sum_{i=1}^n Z_i \right\| \leqslant\frac{\sqrt{8\log(2s)}}{\sqrt{n}} + \frac{2\log(2s)}{3n} \leqslant 4 \sqrt{\frac{\log(2s)}{n}},$$ assuming that $\log(2s)\leqslant n$.
For the two terms in ([\[eq:mtx1\]](#eq:mtx1){reference-type="ref" reference="eq:mtx1"}), $$\begin{aligned} &\mathbb{E}\left\| \frac{1}{n} \sum_{i=1}^n \left( \frac{1}{n} \sum_{j=1}^n \mathbf{C}(x_i,y_j) \mathbf{C}(x_i,y_j)^\top - \int \mathbf{C}(x_i,y) \mathbf{C}(x_i,y)^\top d\beta(y) \right) \right\|\\ & \leqslant\mathbb{E}\left\| \frac{1}{n^2} \sum_{i=1}^n \mathbf{C}(x_i,y_i) \mathbf{C}(x_i,y_i)^\top \right\| + \mathbb{E}\left\| \frac{1}{n} \sum_{i=1}^n \left( \frac{1}{n} \sum_{\substack{j=1\\j\neq i}}^n \mathbf{C}(x_i,y_j) \mathbf{C}(x_i,y_j)^\top - \int \mathbf{C}(x_i,y) \mathbf{C}(x_i,y)^\top d\beta(y) \right) \right\|\\ &\leqslant\frac{2}{n} +\frac{n-1}{n^2} \sum_{i=1}^n \mathbb{E}\left\| \frac{1}{n-1} \sum_{\substack{j=1\\j\neq i}}^n \left( \mathbf{C}(x_i,y_j) \mathbf{C}(x_i,y_j)^\top - \int \mathbf{C}(x_i,y) \mathbf{C}(x_i,y)^\top d\beta(y) \right) \right\|. \end{aligned}$$ For each $i=1,\ldots, n$, let $Y_j = \mathbf{C}(x_i,y_j) \mathbf{C}(x_i,y_j)^\top - \int \mathbf{C}(x_i,y) \mathbf{C}(x_i,y)^\top d\beta(y)$ and observe that conditional on $x_i$, $\left\{ Y_j \right\} _{j\neq i}$ are iid matrices with zero mean. The matrix Bernstein inequality applied to $\frac{1}{n-1}\sum_{j\in [n]\setminus \left\{ i \right\} } Y_j$ implies that $$\mathbb{E}\left\| \frac{1}{n-1}\sum_{j\in [n]\setminus \left\{ i \right\} } Y_j \right\| \leqslant\frac{\sqrt{8\log(2s)}}{\sqrt{n-1}} + \frac{2\log(2s)}{3n-3} \leqslant 4 \sqrt{\frac{\log(2s)}{n-1}},$$ assuming that $\log(2s)\leqslant n-1$. Finally, we apply Lemma [Lemma 16](#lem:mcdiarmid){reference-type="ref" reference="lem:mcdiarmid"} to $f(z_1,\ldots, z_n) = \left\| \frac{1}{n^2} \Phi_n^*\Phi_n - M \right\|$. Let $\Phi_n'$ be defined as $\Phi_n$ but with samples $(x_i',y_i')$, i.e. $(\Phi_n' u)_{i,j} = \sum_k u_k \mathbf{C}_k(x_i',y_j')$, with $x_i'=x_i$ and $y_i' = y_i$ for $i\geqslant 2$.
For each vector $u$ of unit norm, $$\begin{aligned} \frac{1}{n^2}&\langle ( \Phi_n^*\Phi_n - (\Phi_n')^*\Phi_n' )u,\,u\rangle =\frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \left( \left\lvert\mathbf{C}(x_i,y_j)^\top u\right\rvert^2 - \left\lvert\mathbf{C}(x_i',y_j')^\top u\right\rvert^2 \right)\\ &= \frac{1}{n^2}\sum_{i=1}^n \left( \left\lvert\mathbf{C}(x_i,y_1)^\top u\right\rvert^2 - \left\lvert\mathbf{C}(x_i',y_1')^\top u\right\rvert^2 \right) +\frac{1}{n^2} \sum_{j=2}^n \left( \left\lvert\mathbf{C}(x_1,y_j)^\top u\right\rvert^2 - \left\lvert\mathbf{C}(x_1',y_j')^\top u\right\rvert^2 \right) \leqslant 4n^{-1}.\end{aligned}$$ So, $$\left\| \frac{1}{n^2}\Phi_n^*\Phi_n - M \right\| \lesssim \sqrt{\frac{\log(2s)}{n-1} } + \frac{m}{\sqrt{n}}$$ with probability at least $1-\exp(-m^2/8)$. To conclude this proof, we now discuss how to modify the above argument in the case of the centred cost $C_k$, that is, $\Phi_n A = \sum_k A_k C_k$ where $C_k$ is as defined in ([\[eq:Ck_centre\]](#eq:Ck_centre){reference-type="ref" reference="eq:Ck_centre"}).
Note that in this case, $$\begin{aligned} \frac{1}{n^2} (\Phi_n^* \Phi_n)_{k,\ell} &= \frac{1}{n^2}\sum_{i,j=1}^n \mathbf{C}_k(x_i,y_j)\mathbf{C}_\ell(x_i,y_j) + \frac{1}{n^4} \left( \sum_{p,q=1}^n \mathbf{C}_\ell(x_p,y_q) \right)\left( \sum_{i,j=1}^n \mathbf{C}_k(x_i,y_j) \right) \label{eq:fix1}\\ & - \frac{1}{n^3} \sum_{j=1}^n \left( \sum_{p=1}^n \mathbf{C}_\ell(x_p,y_j) \right) \left( \sum_{i=1}^n \mathbf{C}_k(x_i,y_j) \right) - \frac{1}{n^3} \sum_{i=1}^n \left( \sum_{q=1}^n \mathbf{C}_\ell(x_i,y_q) \right) \left( \sum_{j=1}^n \mathbf{C}_k(x_i,y_j) \right). \label{eq:fix2}\end{aligned}$$ We already know from the previous arguments that $$\mathbb{E}\left\| \frac{1}{n^2}\sum_{i,j=1}^n \mathbf{C}(x_i,y_j)\mathbf{C}(x_i,y_j)^\top - M \right\| = \mathcal{O}(\sqrt{\log(s)/n}).$$ For the last term in ([\[eq:fix1\]](#eq:fix1){reference-type="ref" reference="eq:fix1"}), let $\Lambda = \left\{ (i,j,p,q) \;;\; i\in [n], j\in[n]\setminus \left\{ i \right\} , \; p\in[n]\setminus \left\{ i,j \right\} , q\in[n]\setminus \left\{ i,j,p \right\} \right\}$. Note that $\Lambda$ has $n(n-1)(n-2)(n-3)$ elements, so its complement $\Lambda^c \triangleq([n]\times[n]\times[n]\times[n])\setminus \Lambda$ has $\mathcal{O}(n^3)$ elements. Therefore, we can write $$\begin{aligned} \mathbb{E}&\left\| \frac{1}{n^4} \left( \sum_{p,q=1}^n \mathbf{C}(x_p,y_q) \right)\left( \sum_{i,j=1}^n \mathbf{C}(x_i,y_j)^\top \right) \right\|\\ &\leqslant\frac{1}{n^4} \mathbb{E}\left\| \sum_{(i,j,p,q)\in\Lambda} \mathbf{C}(x_p,y_q)\mathbf{C}(x_i,y_j)^\top \right\| + \mathbb{E}\left\| \frac{1}{n^4} \sum_{(i,j,p,q)\in\Lambda^c} \mathbf{C}(x_p,y_q)\mathbf{C}(x_i,y_j)^\top \right\|\\ &\leqslant\frac{1}{n^4} \mathbb{E}\left\| \sum_{(i,j,p,q)\in\Lambda} \mathbf{C}(x_p,y_q)\mathbf{C}(x_i,y_j)^\top \right\| + \mathcal{O}(n^{-1}),\end{aligned}$$ where the second inequality is because there are $\mathcal{O}(n^3)$ terms in $\Lambda^c$ and we used the bound $\left\| \mathbf{C}(x,y) \right\| \leqslant 1$.
Moreover, $$\begin{aligned} \frac{1}{n^4} \mathbb{E}\left\| \sum_{(i,j,p,q)\in\Lambda} \mathbf{C}(x_p,y_q)\mathbf{C}(x_i,y_j)^\top \right\| \leqslant\frac{n-3}{n^4} \sum_{i} \sum_{j\not\in \left\{ i \right\} } \sum_{p\not\in \left\{ i,j \right\} } \mathbb{E}\left\| \frac{1}{n-3}\sum_{q\not\in \left\{ i,j,p \right\} } \mathbf{C}(x_p,y_q)\mathbf{C}(x_i,y_j)^\top \right\|.\end{aligned}$$ Note that conditional on $x_i, y_j, x_p$, $$\mathbb{E}\left[ \frac{1}{n-3} \sum_{q\not\in \left\{ i,j,p \right\} } \mathbf{C}(x_p,y_q)\mathbf{C}(x_i,y_j)^\top \right] = \int \mathbf{C}(x_p,y)d\beta(y)\,\mathbf{C}(x_i,y_j)^\top = 0$$ by the assumption that $\int \mathbf{C}(x,y)d\beta(y) = 0$. So, we can apply matrix Bernstein as before to show that $$\frac{1}{n^4} \mathbb{E}\left\| \sum_{(i,j,p,q)\in\Lambda} \mathbf{C}(x_p,y_q)\mathbf{C}(x_i,y_j)^\top \right\| \lesssim \sqrt{\log(s)n^{-1}}.$$ A similar argument can be applied to handle the two terms in ([\[eq:fix2\]](#eq:fix2){reference-type="ref" reference="eq:fix2"}), and so, $$\mathbb{E}\left\| \frac{1}{n^2}\Phi_n^* \Phi_n - M \right\| \lesssim \sqrt{\log(s)/n}.$$ The high probability bound can now be derived using Lemma [Lemma 16](#lem:mcdiarmid){reference-type="ref" reference="lem:mcdiarmid"} as before.
◻ # Proofs for the Gaussian setting {#app:gaussian} #### Simplified problem To ease the computations, we will compute the Hessian of the following function (corresponding to the special case where $\Sigma_\alpha=\mathrm{Id}$ and $\Sigma_\beta = \mathrm{Id}$): $$\label{eq:tilde_W} \tilde W(A) \triangleq\sup_\Sigma \langle A,\,\Sigma\rangle +\frac \varepsilon 2 \log \det(\mathrm{Id}- \Sigma^\top \Sigma).$$ To retrieve the Hessian of the original function note that, since $\log\det(\Sigma_\beta - \Sigma^\top \Sigma_\alpha^{-1} \Sigma) = \log\det(\Sigma_\beta) + \log \det(\mathrm{Id}- \Sigma_\beta^{-\frac12}\Sigma^\top\Sigma_\alpha^{-1} \Sigma \Sigma_\beta^{-\frac12})$, a change of variables $\tilde \Sigma \triangleq\Sigma_\alpha^{-\frac12} \Sigma \Sigma_\beta^{-\frac12}$ shows that $W(A) = \tilde W(\Sigma_\alpha^{\frac12} A \Sigma_\beta^{\frac12})$ and, hence, $$\label{ChangeOfVariableHessian} \nabla^2 W(A) = (\Sigma_\beta^{\frac12}\otimes \Sigma_\alpha^{\frac12})\nabla^2 \tilde W (\Sigma_\alpha^{\frac12} A \Sigma_\beta^{\frac12}) (\Sigma_\beta^{\frac12}\otimes \Sigma_\alpha^{\frac12}).$$ By the envelope theorem, $\nabla \tilde W(A) = \Sigma$, where $\Sigma$ is the maximizer of ([\[eq:tilde_W\]](#eq:tilde_W){reference-type="ref" reference="eq:tilde_W"}), thus reducing the computation of $\nabla^2 \tilde W(A)$ to differentiating the optimality condition of $\Sigma$, *i.e.,* $$\label{SigmaOptimalityGaussian} \varepsilon^{-1} A = \Sigma(\mathrm{Id}- \Sigma^\top \Sigma)^{-1}.$$ Recall also that such a $\Sigma$ has an explicit formula given in ([\[eq:Sigma_galichon\]](#eq:Sigma_galichon){reference-type="ref" reference="eq:Sigma_galichon"}). **Lemma 25**.
*$$\begin{aligned} \nabla^2 \tilde W(A) &=\varepsilon\left( \varepsilon^2 (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \otimes (\mathrm{Id}- \Sigma \Sigma^\top)^{-1} +(A^\top\otimes A) \mathbb{T} \right)^{-1}, \label{eq:hessian_formula_simp}\end{aligned}$$ where $\Sigma$ is as in ([\[SigmaOptimalityGaussian\]](#SigmaOptimalityGaussian){reference-type="ref" reference="SigmaOptimalityGaussian"}) and $\mathbb T$ is the linear map defined by $\mathbb T\mbox{vec}(X)=\mbox{vec}(X^\top)$.* *Proof of Lemma [Lemma 25](#lem:hessian_formula_simp){reference-type="ref" reference="lem:hessian_formula_simp"}.* Define $G(\Sigma,A) \triangleq\Sigma(\mathrm{Id}- \Sigma^\top \Sigma)^{-1} - \varepsilon^{-1} A$. Note that a maximizer $\Sigma$ of ([\[eq:tilde_W\]](#eq:tilde_W){reference-type="ref" reference="eq:tilde_W"}) satisfies $G(\Sigma,A)=0$. Moreover, $\partial_\Sigma G$ is invertible at such $(\Sigma,A)$ because it equals $-\frac12$ times the Hessian of $\log\det(\mathrm{Id}- \Sigma^\top\Sigma)$, a strictly concave function. By the implicit function theorem, there exists a function $f : A \mapsto \Sigma$ such that $G(f(A),A) = 0$ and $$\nabla f = -(\partial_\Sigma G)^{-1} \partial_A G = \varepsilon^{-1} (\partial_\Sigma G)^{-1}.$$ It remains to compute $\partial_\Sigma G$ at $(f(A),A)$: At an arbitrary point $(\Sigma,A)$ we have $$\begin{aligned} \partial_\Sigma G& = (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \otimes \mathrm{Id} + (\mathrm{Id}\otimes \Sigma) \partial_\Sigma (\mathrm{Id}- \Sigma^\top \Sigma)^{-1}\\ &= (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \otimes \mathrm{Id} + (\mathrm{Id}\otimes \Sigma) \left( (\mathrm{Id}- \Sigma^\top \Sigma)^{-1}\otimes (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \right) \left( (\Sigma^\top \otimes \mathrm{Id})\mathbb{T}+ (\mathrm{Id}\otimes \Sigma^\top) \right)\end{aligned}$$ and at $(f(A),A)$, *i.e.,* with $\Sigma(\mathrm{Id}- \Sigma^\top \Sigma)^{-1}= \varepsilon^{-1} A$, we can further simplify to $$\partial_\Sigma G = (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \otimes (
\mathrm{Id}+ \Sigma (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \Sigma^\top) + \varepsilon^{-2} ( A^\top \otimes A )\mathbb{T}.$$ By the Woodbury matrix formula, $$\mathrm{Id}+ \Sigma (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \Sigma^\top = (\mathrm{Id}- \Sigma\Sigma^\top)^{-1} =\mathrm{Id}+ \varepsilon^{-1} A\Sigma^\top.$$ So, $$\partial_\Sigma G = (\mathrm{Id}- \Sigma^\top \Sigma)^{-1} \otimes (\mathrm{Id}- \Sigma \Sigma^\top)^{-1} +\varepsilon^{-2} (A^\top\otimes A) \mathbb{T},$$ thus concluding the proof. ◻ We remark that, from the connection between $W(A)$ and $\tilde W(A)$, *i.e.,* [\[ChangeOfVariableHessian\]](#ChangeOfVariableHessian){reference-type="ref" reference="ChangeOfVariableHessian"}, we obtain Lemma [Lemma 8](#lem:hessian_formula){reference-type="ref" reference="lem:hessian_formula"} as a corollary. ## Limit cases #### SVD representation of the covariance To derive the limiting expressions, we make an observation on the singular value decomposition of $\Sigma$: Let the singular value decomposition of $A$ be $A = U DV^\top$, where $D$ is the diagonal matrix with positive entries $d_i$. Note that $\Delta = (\mathrm{Id}+ \frac{\varepsilon^2}{4} (A^\top A)^\dagger)^{\frac12} = V(\mathrm{Id}+ \frac{\varepsilon^2}{4} D^{-2} )^{\frac12} V^\top$.
Moreover, $\Delta$ and $A^\top A$ commute, so $$\begin{aligned} \label{SVDofSIgma} \Sigma &= A\Delta( (\Delta^2 A^\top A)^\dagger )^{\frac12} \Delta -\frac{\varepsilon}{2}A^{T,\dagger} \\ &= U \left( \left( \mathrm{Id}+ \frac{\varepsilon^2}{4}D^{\dagger, 2} \right)^{\frac 12} - \frac{\varepsilon}{2} D^{\dagger} \right) V^\top = U \tilde D V^\top,\end{aligned}$$ where $\tilde D$ is the diagonal matrix with diagonal entries $$\label{eq:tilde_d_i} \tilde d_i = \sqrt{1+\frac{\varepsilon^2}{4d_i^2}}- \frac{\varepsilon}{2d_i}.$$ ### Link with Lasso: Limit as $\varepsilon\to\infty$ Note that $$\begin{aligned} \tilde d_i &= \frac{\varepsilon}{2 d_i} \sqrt{1+\frac{4d_i^2}{\varepsilon^2}} - \frac{\varepsilon}{2d_i} = \frac{d_i}{\varepsilon} - \frac{d_i^3}{\varepsilon^3} + \mathcal{O}(\varepsilon^{-5}) \to 0,\qquad \varepsilon\to\infty.\end{aligned}$$ It follows that $\lim_{\varepsilon\to \infty}\Sigma = 0$ and hence, $\varepsilon\nabla^2 \tilde W(A) \to \mathrm{Id}$. So, the certificate converges to $$(\Sigma_\beta\otimes \Sigma_\alpha)_{(:,I)} \big((\Sigma_\beta\otimes \Sigma_\alpha)_{(I,I)}\big)^{-1} \mathop{\mathrm{sign}}({\widehat A})_I.$$ *Proof of Proposition [Proposition 9](#prop:gaussian_obj_lasso){reference-type="ref" reference="prop:gaussian_obj_lasso"}.* The iOT problem approaches a Lasso problem as $\varepsilon\to \infty$.
Recall that in the Gaussian setting, the iOT problem is of the form $$\mathop{\mathrm{arg\,min}}_A \mathcal{F}(A), \quad \text{where} \quad \mathcal{F}(A) \triangleq\lambda \left\| A \right\|_1 + \langle A,\,\Sigma_{\varepsilon,A} - \Sigma_{\varepsilon,{\widehat A}}\rangle + \frac{\varepsilon}{2}\log\det\left( \Sigma_\beta - \Sigma_{\varepsilon,A}^\top \Sigma_\alpha^{-1} \Sigma_{\varepsilon,A} \right)$$ and $\Sigma_{\varepsilon,A}$ satisfies $\Sigma_\beta - \Sigma_{\varepsilon,A}^\top\Sigma_\alpha^{-1} \Sigma_{\varepsilon,A} = \varepsilon\Sigma_\alpha^{-1} \Sigma_{\varepsilon,A} A^{-1}.$ So, we can write $$\mathcal{F}(A) = \lambda \left\| A \right\|_1 + \langle A,\,\Sigma_{\varepsilon,A} - \Sigma_{\varepsilon,{\widehat A}}\rangle + \frac{\varepsilon}{2}\log\det\left( \varepsilon\Sigma_\alpha^{-1} \Sigma_{\varepsilon,A} A^{-1} \right).$$ Let $$X\triangleq\Sigma_\alpha^{\frac12} A \Sigma_\beta^{\frac12} \quad \text{and} \quad \tilde \Sigma_{\varepsilon,A}= \Sigma_\alpha^{-\frac12} \Sigma_{\varepsilon,A}\Sigma_\beta^{-\frac12}.$$ From ([\[eq:tilde_d\_i\]](#eq:tilde_d_i){reference-type="ref" reference="eq:tilde_d_i"}), if $X$ has singular value decomposition $X = U\mathop{\mathrm{diag}}(d_i)V^\top$, then $$\tilde \Sigma_{\varepsilon,A}= U \tilde D V^\top, \quad \text{where} \quad \tilde D = \mathop{\mathrm{diag}}(\tilde d_i)$$ and $$\begin{aligned} \tilde d_i = \frac{d_i}{\varepsilon} - \frac{d_i^3}{\varepsilon^3} + \mathcal{O}(\varepsilon^{-5}).\end{aligned}$$ So, up to the additive constant $\log\det(\Sigma_\beta)$, $$\begin{aligned} \log\det(\varepsilon\Sigma_\alpha^{-1} \Sigma_{\varepsilon,A}A^{-1}) &= \log\det\left( \varepsilon\Sigma_\alpha^{-\frac 12} \tilde \Sigma_{\varepsilon,A}X^{-1} \Sigma_\alpha^{\frac 12} \right)\\ &=\log\det\left( U \mathop{\mathrm{diag}}\left( 1 - d_i^2/\varepsilon^2 + \mathcal{O}(\varepsilon^{-4}) \right) U^\top \right)\\ &= \log\det\left( \mathop{\mathrm{diag}}\left( 1 - d_i^2/\varepsilon^2 + \mathcal{O}(\varepsilon^{-4}) \right) \right) = -\frac{1}{\varepsilon^2}\left\| X \right\|_F^2 +
\mathcal{O}(\varepsilon^{-4}).\end{aligned}$$ Also, $$\varepsilon\langle \Sigma_{\varepsilon,A}- \Sigma_{\varepsilon,{\widehat A}},\,A\rangle = \varepsilon\langle \tilde \Sigma_{\varepsilon,A}- \tilde \Sigma_{\varepsilon,{\widehat A}},\,X\rangle = \langle X - X_0 ,\,X\rangle + \mathcal{O}(\varepsilon^{-2}),$$ where $X_0 \triangleq\Sigma_\alpha^{\frac12} {\widehat A}\Sigma_\beta^{\frac12}$. So, assuming that $\lambda = \lambda_0/\varepsilon$, up to an additive constant independent of $A$, $$\begin{aligned} \varepsilon\mathcal{F}(A)&= \lambda_0\left\| A \right\|_1 + \langle X-X_0 ,\,X\rangle- \frac12\left\| X \right\|_F^2 + \mathcal{O}(\varepsilon^{-2}) \\ &= \lambda_0\left\| A \right\|_1 + \frac12\left\| (\Sigma_{\beta}^{\frac 12}\otimes \Sigma_{\alpha}^{\frac 12})(A - {\widehat A}) \right\|_F^2 - \frac12 \left\| (\Sigma_{\beta}^{\frac 12}\otimes \Sigma_{\alpha}^{\frac 12}) {\widehat A} \right\|^2 + \mathcal{O}(\varepsilon^{-2}).\end{aligned}$$ The final statement on convergence of minimizers follows by $\Gamma$-convergence. ◻ ### Link with Graphical Lasso In the special case where the covariances are the identity ($\Sigma_\alpha=\mathrm{Id}$ and $\Sigma_\beta=\mathrm{Id}$) and $A$ is symmetric positive definite, $\Sigma$ is also positive definite, Galichon's formula ([\[eq:Sigma_galichon\]](#eq:Sigma_galichon){reference-type="ref" reference="eq:Sigma_galichon"}) holds (since $A$ is invertible), and the Hessian reduces to $$\begin{aligned} \label{HessianSpecialCaseIdentityCovarice} &\left( \frac{1}{\varepsilon} \nabla^2 \tilde W(A) \right)^{-1} = (A\otimes A) \left( \Sigma^{-1}\otimes \Sigma^{-1} + \mathbb{T} \right).\end{aligned}$$ Moreover, if $A$ admits an eigenvalue decomposition $A=UD U^\top$, then $\Sigma$ admits an eigenvalue decomposition $\Sigma=U \tilde D U^\top$ with entries of $\tilde D$ given by ([\[eq:tilde_d\_i\]](#eq:tilde_d_i){reference-type="ref" reference="eq:tilde_d_i"}).
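The eigenvalue formula for $\Sigma$ can be checked numerically against the stationarity condition $\Sigma(\mathrm{Id}-\Sigma^\top\Sigma)^{-1} = \varepsilon^{-1}A$ used in the proof of Lemma 25. A minimal sketch with a hypothetical random symmetric positive definite $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 4, 0.3
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                    # hypothetical symmetric positive definite A
d, U = np.linalg.eigh(A)
dt = np.sqrt(1 + eps**2 / (4 * d**2)) - eps / (2 * d)   # eigenvalues tilde{d}_i of Sigma
Sigma = (U * dt) @ U.T                         # Sigma = U diag(tilde{d}_i) U^T
# stationarity condition of the simplified problem: eps * Sigma (Id - Sigma^T Sigma)^{-1} = A
lhs = eps * Sigma @ np.linalg.inv(np.eye(n) - Sigma.T @ Sigma)
assert np.allclose(lhs, A)
```

The check reduces, per eigenvalue, to the identity $1-\tilde d_i^2 = \frac{\varepsilon}{d_i}\tilde d_i$, so the recovery of $A$ is exact up to floating-point error.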
Note that it follows that $\lim_{\varepsilon\to 0} \Sigma=\mathrm{Id}$ and, hence, $\lim_{\varepsilon\to 0} \left( \Sigma^{-1}\otimes \Sigma^{-1} + \mathbb{T} \right) = \mathrm{Id}+ \mathbb{T}$, a singular matrix whose kernel is the set of antisymmetric matrices. So, the limit does not necessarily exist as $\varepsilon\to 0$. However, in the special case where $A$ is *symmetric positive definite*, one can show that the certificates remain well defined as $\varepsilon\to 0$: Let $S_+$ be the set of matrices $\psi$ such that $\psi$ is symmetric and $S_-$ be the set of matrices $\psi$ such that $\psi$ is anti-symmetric. Then, $(\nabla^2 \tilde W(A) )^{-1}( S_+) \subset S_+$ and $(\nabla^2 \tilde W(A) )^{-1}( S_-) \subset S_-$. Moreover, $$\begin{aligned} \left( \frac{1}{\varepsilon} \nabla^2 \tilde W(A) \right)^{-1} \restriction_{S_+} &= (A\otimes A)\left( \Sigma^{-1}\otimes \Sigma^{-1} + \mathrm{Id} \right), \\ \left( \frac{1}{\varepsilon} \nabla^2 \tilde W(A) \right)^{-1} \restriction_{S_-} &= (A\otimes A)\left( \Sigma^{-1}\otimes \Sigma^{-1} - \mathrm{Id} \right).\end{aligned}$$ Since the symmetry of $A$ implies the symmetry of $\mbox{sign}(A)$, we can replace the Hessian given in ([\[HessianSpecialCaseIdentityCovarice\]](#HessianSpecialCaseIdentityCovarice){reference-type="ref" reference="HessianSpecialCaseIdentityCovarice"}) by $(A\otimes A)( \Sigma^{-1}\otimes \Sigma^{-1} + \mathrm{Id})$ and, hence, since $\lim_{\varepsilon\to 0} \Sigma=\mathrm{Id}$, the limit as $\varepsilon\to 0$ is $$\label{eq:limit_eps_0} \lim_{\varepsilon\to 0} z_\varepsilon= (A^{-1}\otimes A^{-1})_{(:,I)}\big((A^{-1}\otimes A^{-1})_{(I,I)}\big)^{-1} \mathop{\mathrm{sign}}(A)_I.$$ This coincides precisely with the certificate of the *graphical lasso*: $$\mathop{\mathrm{arg\,min}}_{\Theta\succeq 0} \langle S,\,\Theta\rangle - \log\det(\Theta) + \lambda \left\| \Theta \right\|_1.$$ *Proof of Proposition [Proposition 10](#prop:gaussian_obj_graph_lasso){reference-type="ref"
reference="prop:gaussian_obj_graph_lasso"}.* The iOT problem with identity covariances restricted to the set of positive semi-definite matrices has the form $$\label{eq:iOT_identityCovariances} \mathop{\mathrm{arg\,min}}_{A\succeq 0} \mathcal{F}_{\varepsilon,\lambda}(A), \quad \text{where} \quad \mathcal{F}_{\varepsilon,\lambda}(A) \triangleq\lambda \left\| A \right\|_1 + \langle A,\,\Sigma_{\varepsilon,A} - \hat \Sigma\rangle + \frac{\varepsilon}{2}\log\det(\mathrm{Id}- \Sigma_{\varepsilon,A}^\top\Sigma_{\varepsilon,A}),$$ where $I- \Sigma_{\varepsilon,A}^\top\Sigma_{\varepsilon,A} = \varepsilon\Sigma_{\varepsilon,A} A^{-1}.$ From the singular value decomposition of $\Sigma_{\varepsilon,A}$, *i.e.,* ([\[SVDofSIgma\]](#SVDofSIgma){reference-type="ref" reference="SVDofSIgma"}), we see that if $A$ is symmetric positive definite, then so is $\Sigma_{\varepsilon,A}$. Plugging the optimality condition, *i.e.,* $I- \Sigma_{\varepsilon,A}^\top\Sigma_{\varepsilon,A} = \varepsilon\Sigma_{\varepsilon,A} A^{-1}$, into ([\[eq:iOT_identityCovariances\]](#eq:iOT_identityCovariances){reference-type="ref" reference="eq:iOT_identityCovariances"}), we obtain $$\begin{aligned} &\mathop{\mathrm{arg\,min}}_{A\succeq 0} \lambda \left\| A \right\|_1 + \langle A,\,\Sigma_{\varepsilon,A} - \hat \Sigma\rangle + \frac{\varepsilon}{2}\log\det(\varepsilon A^{-1} \Sigma_{\varepsilon,A}) \\ =& \mathop{\mathrm{arg\,min}}_{A\succeq 0} \lambda \left\| A \right\|_1 -\frac{\varepsilon}{2}\log\det( A/\varepsilon)+ \varepsilon\langle A,\,(\Sigma_{\varepsilon,A} - \hat \Sigma)/\varepsilon\rangle + \frac{\varepsilon}{2} \log\det( \Sigma_{\varepsilon,A}). \end{aligned}$$ Let $\lambda = \varepsilon\lambda_0$ for some $\lambda_0>0$.
Then, removing the constant $\frac{\varepsilon}{2}\log\det(\varepsilon\mathrm{Id})$ term and factoring out $\varepsilon$, the problem is equivalent to $$\mathop{\mathrm{arg\,min}}_{A\succeq 0} \lambda_0 \left\| A \right\|_1 -\frac{1}{2}\log\det( A)+ \langle A,\, \varepsilon^{-1}(\Sigma_{\varepsilon,A} - \hat \Sigma)\rangle + \frac{1}{2} \log\det( \Sigma_{\varepsilon,A}).$$ Assume that $\hat \Sigma = \Sigma_{\varepsilon, {\widehat A}}$. From the expression for the singular values of $\Sigma_{\varepsilon,A}$ in ([\[eq:tilde_d\_i\]](#eq:tilde_d_i){reference-type="ref" reference="eq:tilde_d_i"}), note that $$\Sigma_{\varepsilon,A} = \mathrm{Id}- \frac{\varepsilon}{2}A^{-1} +\mathcal{O}(\varepsilon^2).$$ So, $\lim_{\varepsilon\to 0}(\Sigma_{\varepsilon, A} - \Sigma_{\varepsilon, {\widehat A}})/\varepsilon= -\frac12\left( A^{-1} - {\widehat A}^{-1} \right)$. The objective converges pointwise to $$\lambda_0 \left\| A \right\|_1 -\frac{1}{2}\log\det( A)+ \frac12 \langle A,\,{\widehat A}^{-1} - A^{-1}\rangle ,$$ and the statement is then a direct consequence of $\Gamma$-convergence. ◻ # Large scale $\ell^1$-iOT solver {#sec:solver} Recall that the iOT optimization problem, recast over the dual potentials for empirical measures, reads $$\inf_{A,F,G } \frac{1}{n} \sum_{i} F_i + G_i + (\Phi_n A)_{i,i} + \frac{\varepsilon}{2 n^2} \sum_{i,j} \exp\left( \frac{2}{\varepsilon}(F_i+ G_j + (\Phi_n A)_{i,j}) \right) + \lambda \left\| A \right\|_1.$$ In order to obtain a better-conditioned optimization problem, in line with [@cuturi2016smoothed], we instead consider the semi-dual problem, which is derived by leveraging the closed-form expression for the optimal $G$, given $F$.
$$\inf_{A,F } \frac{1}{n} \sum_{i} F_i + (\Phi_n A)_{i,i} + \frac{\varepsilon}{n} \sum_i \log \frac{1}{n} \sum_j \exp\left( \frac{2}{\varepsilon}(F_i + (\Phi_n A)_{i,j}) \right) + \lambda \left\| A \right\|_1.$$ Following [@poon2021smooth], which proposes a state-of-the-art Lasso solver, the last step is to use the following Hadamard product over-parameterization of the $\ell^1$ norm $$\left\| A \right\|_1 = \min_{U \odot V = A} \frac{\left\| U \right\|^2_2}{2} + \frac{\left\| V \right\|^2_2}{2},$$ where the Hadamard product is $U \odot V \triangleq(U_{i} V_i)_i$, to obtain the final optimization problem $$\inf_{F,U,V } \frac{1}{n} \sum_{i} F_i + (\Phi_n (U \odot V))_{i,i} + \frac{\varepsilon}{n} \sum_i \log \frac{1}{n} \sum_j \exp\left( \frac{2}{\varepsilon}(F_i + (\Phi_n (U \odot V))_{i,j}) \right) + \frac{\lambda}{2} \left\| U \right\|^2_2 + \frac{\lambda}{2} \left\| V \right\|^2_2.$$ This is a smooth optimization problem, for which we employ a quasi-Newton solver (L-BFGS). Although it is non-convex, as demonstrated in [@poon2021smooth], the non-convexity is benign, ensuring the solver always converges to a global minimizer, $(F^\star,U^\star,V^\star)$, of the functional. From this, one can reconstruct the cost parameter, $A^\star \triangleq U^\star \odot V^\star$.
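As a sanity check on the Hadamard over-parameterization, the following sketch (the helper names are ours, not from the paper) verifies that the balanced factorization $U_i = \mathrm{sign}(A_i)\sqrt{|A_i|}$, $V_i = \sqrt{|A_i|}$ attains $\|A\|_1$, and that no other feasible factorization $U\odot V = A$ does better (by the AM-GM inequality):

```python
import math
import random

def penalty(u, v):
    """The smooth surrogate (||u||^2 + ||v||^2) / 2."""
    return 0.5 * (sum(x * x for x in u) + sum(x * x for x in v))

def balanced_factorization(a):
    """u_i = sign(a_i)*sqrt(|a_i|), v_i = sqrt(|a_i|): u ⊙ v = a, attaining ||a||_1."""
    u = [math.copysign(math.sqrt(abs(x)), x) for x in a]
    v = [math.sqrt(abs(x)) for x in a]
    return u, v

a = [1.5, -2.0, 0.0, 3.25]
l1 = sum(abs(x) for x in a)
u, v = balanced_factorization(a)
assert all(abs(ui * vi - ai) < 1e-12 for ui, vi, ai in zip(u, v, a))
assert abs(penalty(u, v) - l1) < 1e-12
# Any other factorization u ⊙ v = a pays at least ||a||_1 (AM-GM per entry):
for _ in range(1000):
    s = [random.uniform(0.2, 5.0) for _ in a]
    u2 = [si * (1 if x >= 0 else -1) for si, x in zip(s, a)]
    v2 = [x / ui for x, ui in zip(a, u2)]
    assert penalty(u2, v2) >= l1 - 1e-9
```

This is why replacing $\lambda\|A\|_1$ by the two quadratic penalties leaves the global minimum unchanged while making the objective smooth.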
--- abstract: | Ramanujan, in his lost notebook, gave an interesting identity, which generates infinite families of solutions to Euler's Diophantine equation $A^3 + B^3 = C^3 + D^3$. In this paper, we produce a few infinite families of solutions to the aforementioned Diophantine equation as well as for the Diophantine equation $A^4 + B^4 + C^4 + D^4 + E^4 = F^4$ in the spirit of Ramanujan. address: - Archit Agarwal, Department of Mathematics, Indian Institute of Technology Indore, Simrol, Indore 453552, Madhya Pradesh, India. - Meghali Garg, Department of Mathematics, Indian Institute of Technology Indore, Simrol, Indore 453552, Madhya Pradesh, India. author: - Archit Agarwal - Meghali Garg title: Infinite families of solutions for $A^3 + B^3 = C^3 + D^3$ and $A^4 + B^4 + C^4 + D^4 + E^4 = F^4$ in the spirit of Ramanujan --- [^1] # Introduction Littlewood once remarked that *"Every natural number was one of Ramanujan's personal friends".* $1729$ is one such special number which was dear to Ramanujan. This number is popularly known as the Ramanujan taxicab number or the Hardy-Ramanujan number due to the following incident. The story begins in 1918, when Ramanujan was admitted to Matlock Sanatorium in Derbyshire. Hardy went to see him and communicated the following story about his visit: > *I remember once going to him when he was lying ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavourable omen. No, he replied, it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.* One can easily verify that $1729$ is the smallest natural number that can be represented as a sum of two cubes in two different ways, i.e., $1729=1^3+12^3=9^3+10^3$. Once we know this special property of $1729$, it is natural to ask whether there are more numbers satisfying this property.
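Ramanujan's claim is easy to confirm by machine; the following brute-force search (our own illustration) lists every number below a given limit that is a sum of two positive cubes in at least two ways:

```python
from collections import defaultdict

def taxicab_numbers(limit):
    """All n <= limit expressible as a^3 + b^3 with 1 <= a <= b in >= 2 ways."""
    ways = defaultdict(list)
    a = 1
    while 2 * a ** 3 <= limit:
        b = a
        while a ** 3 + b ** 3 <= limit:
            ways[a ** 3 + b ** 3].append((a, b))
            b += 1
        a += 1
    return {n: pairs for n, pairs in ways.items() if len(pairs) >= 2}

hits = taxicab_numbers(20000)
assert min(hits) == 1729
assert sorted(hits[1729]) == [(1, 12), (9, 10)]
```

The search also turns up the next such number, $4104 = 2^3 + 16^3 = 9^3 + 15^3$, within the same limit.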
This is analogous to solving the Diophantine equation $$\begin{aligned} \label{Euler's Diophantine equation} A^3+B^3=C^3+D^3,\end{aligned}$$ which is well-known as Euler's Diophantine equation. Ramanujan, in his lost notebook [@lost; @notebook p. 341], mentioned the following amazing identity, which gives infinitely many solutions to [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"}. **Theorem 1**. *If $$\begin{aligned} \label{ramanujan identity} \begin{split} \frac{1+53x+9x^2}{1-82x-82x^2+x^3}=\sum_{n=0}^\infty a_nx^n=\sum_{n=1}^\infty \alpha_{n-1}x^{-n},\\ \frac{2-26x-12x^2}{1-82x-82x^2+x^3}=\sum_{n=0}^\infty b_nx^n=\sum_{n=1}^\infty \beta_{n-1}x^{-n},\\ \frac{2+8x-10x^2}{1-82x-82x^2+x^3}=\sum_{n=0}^\infty c_nx^n=\sum_{n=1}^\infty \gamma_{n-1}x^{-n}, \end{split}\end{aligned}$$ then[^2] $$\begin{aligned} \label{ramanujan final result} a_n^3+b_n^3=c_n^3+(-1)^{n} \hspace{5mm} \textrm{and} \hspace{5mm} \alpha_n^3+\beta_n^3=\gamma_n^3+(-1)^n.\end{aligned}$$* A few solutions obtained by Ramanujan corresponding to $a_n^3+b_n^3=c_n^3+(-1)^{n}$, for $n=1,2,4$, are $$\begin{aligned} 135^3 + 138^3 &= 172^3 - 1, \\ 11161^3 + 11468^3 &= 14258^3 + 1, \\ 76869289^3 + 78978818^3 &= 98196140^3 + 1, \end{aligned}$$ and from $\alpha_n^3+\beta_n^3=\gamma_n^3+(-1)^n$, for $n=0,1,2$, he derived $$\begin{aligned} 9^3 + (-12)^3 &= (-10)^3 +1, \\ 791^3 + (-1010)^3 &= (-812)^3 -1, \\ 65601^3 + (-83802)^3 &= (-67402)^3 +1.\end{aligned}$$ Like many of Ramanujan's results, one wonders how he managed to achieve such an identity. Hirschhorn examined Ramanujan's claim over a period of time and put forward an idea of how Ramanujan might have proceeded in [@hir; @2]-[@hir; @4].
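Since the three generating functions in Theorem 1 share the denominator $1-82x-82x^2+x^3$, their coefficients obey the recurrence $u_n = 82u_{n-1}+82u_{n-2}-u_{n-3}$ for $n\geq 3$, with the first terms fixed by the numerators. A short script (the function name is ours) reproduces the sequences and confirms the first equality of Theorem 1:

```python
def series(numer, n_terms, denom=(1, -82, -82, 1)):
    """First n_terms coefficients of numer(x)/denom(x) expanded at x = 0.
    denom = (d0, d1, d2, d3) encodes d0 + d1*x + d2*x^2 + d3*x^3."""
    d0, d1, d2, d3 = denom
    out = []
    for n in range(n_terms):
        c = numer[n] if n < len(numer) else 0
        if n >= 1:
            c -= d1 * out[n - 1]
        if n >= 2:
            c -= d2 * out[n - 2]
        if n >= 3:
            c -= d3 * out[n - 3]
        out.append(c // d0)
    return out

N = 8
a = series([1, 53, 9], N)
b = series([2, -26, -12], N)
c = series([2, 8, -10], N)
assert a[:3] == [1, 135, 11161] and b[:3] == [2, 138, 11468]
assert all(a[n] ** 3 + b[n] ** 3 == c[n] ** 3 + (-1) ** n for n in range(N))
```

The same routine with a different `denom` argument verifies the analogous identities stated later in the paper.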
Hirschhorn, along with Han [@hir; @1], gave an alternative proof of Theorem [Theorem 1](#ramanujan main result){reference-type="ref" reference="ramanujan main result"}. McLaughlin [@JMc] found an identity similar to [\[ramanujan identity\]](#ramanujan identity){reference-type="eqref" reference="ramanujan identity"} involving eleven sequences. Over the years, this magical number $1729$ has gained the attention of many mathematicians. While trying to find more integers satisfying Euler's Diophantine equation, Silverman [@silverman] tackled the problem using elliptic curves. Chen [@chen] discussed an algorithm to find various identities similar to [\[ramanujan identity\]](#ramanujan identity){reference-type="eqref" reference="ramanujan identity"}. A few years back, Niţică observed a very interesting property of $1729$: if we add the digits of $1729$, we get $19$; multiplying $19$ by $91$, which is obtained by reversing the digits of $19$, we again get $1729$. The only integers satisfying this property are $\{1, 81, 1458, 1729\}$. In [@VN], Niţică obtained a generalization of the above property. An elegant generalization of the taxicab number was discussed by Dinitz, Games and Roth [@DGR] in 2019, who studied the smallest number that can be represented as a sum of $n$ positive $m^{th}$ powers in at least $t$ ways. In this paper, we use Hirschhorn's idea to obtain infinite families of solutions to [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"}. Hirschhorn believes that Ramanujan might have used his previously obtained parametric solution of [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"} to get [\[ramanujan identity\]](#ramanujan identity){reference-type="eqref" reference="ramanujan identity"}. One of Ramanujan's families of solutions [@RN4 p. 56], [@ramanujantifr p.
266], [@ramanujanquestion] to [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"} was given by $$\begin{aligned} \label{parametric_solution} (3a^2+5ab-5b^2)^3+(4a^2-4ab+6b^2)^3+(5a^2-5ab-3b^2)^3=(6a^2-4ab+4b^2)^3.\end{aligned}$$ At the suggestion of Craig, Hirschhorn changed the variables as $a=A+B$ and $b=A-2B$ in $\eqref{parametric_solution}$ and obtained the following identity: $$\begin{aligned} \label{after_variable_change} (A^2+7AB-9B^2)^3+(2A^2-4AB+12B^2)^3=(2A^2+10B^2)^3+(A^2-9AB-B^2)^3.\end{aligned}$$ We use [\[after_variable_change\]](#after_variable_change){reference-type="eqref" reference="after_variable_change"} to get various identities similar to [\[ramanujan identity\]](#ramanujan identity){reference-type="eqref" reference="ramanujan identity"}, which lead to several families of solutions to [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"}. In the next section, we mention our main results. # Main Results At the beginning of Ramanujan's second notebook [@ramanujantifr p. 3], he mentioned the following parametric solution to [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"}. **Theorem 2**. *If $p^3+q^3+r^3=s^3$ and $m= (s+q)\sqrt{\frac{s-q}{r+p}},~n= (r-p)\sqrt{\frac{r+p}{s-q}}$, then for any arbitrary $a,b$, we have $$\begin{aligned} (pa^2+mab-rb^2)^3+(qa^2-nab+sb^2)^3+(ra^2-mab-pb^2)^3=(sa^2-nab+qb^2)^3.\end{aligned}$$* Berndt [@RN4 p. 54] gave a proof of the above theorem. In this paper, we present an elementary proof, which also motivated us to obtain the identity below. **Theorem 3**.
*If $p^3+q^3+r^3+s^3+t^3=u^3$ and $g= (q-t)\sqrt{\frac{q+t}{p+s}},~h= (p-s)\sqrt{\frac{p+s}{r-u}},~k= (p-s)\sqrt{\frac{p+s}{q+t}},~l= (q-t)\sqrt{\frac{q+t}{r-u}},~m= (r+u)\sqrt{\frac{r-u}{p+s}},~n= (r+u)\sqrt{\frac{r-u}{q+t}}$, then for any arbitrary $a,b,c$, we have $$\begin{aligned} &(sa^2+pb^2+pc^2-gab-mac)^3 + (qa^2+tb^2+qc^2-kab-nbc)^3 + (ra^2+rb^2-uc^2-hac-lbc)^3 \nonumber \\ & +(pa^2+sb^2+sc^2+mac+gab)^3 + (ta^2+qb^2+tc^2+kab+nbc)^3 = (ua^2+ub^2-rc^2-hac-lbc)^3.\end{aligned}$$* The next identities are in the spirit of Theorem [Theorem 1](#ramanujan main result){reference-type="ref" reference="ramanujan main result"}. **Theorem 4**. *Suppose we have the following power series: $$\begin{aligned} \begin{split} \frac{1-3x+9x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty a_nx^n,\\ \frac{2+6x-12x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty b_nx^n,\\ \frac{2+8x-10x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty c_nx^n,\\ \frac{1-11x+x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty d_nx^n, \end{split}\end{aligned}$$ then, for all $n \geq 0$, we get $$\begin{aligned} a_n^3+b_n^3=c_n^3+d_n^3.\end{aligned}$$* A few solutions to Euler's Diophantine equation obtained from the above theorem for $n=1,2,3,4$ are: $$\begin{aligned} (-1)^3+10^3&=12^3+(-9)^3,\\ 9^3+12^3&=18^3+(-15)^3,\\ 15^3+42^3&=58^3+(-49)^3,\\ 49^3+98^3&=140^3+(-119)^3, \\ \vdots \\ 14961^3+32562^3&=46092^3+(-39159)^3.\end{aligned}$$ **Remark 1**. *From the above examples, one can observe that $d_n=-a_{n+1}$ for all $n \geq 0$. We can easily verify this observation with the help of the generating functions for $d_n$ and $a_n$.* **Theorem 5**.
*Let $a_n,~b_n,~c_n$ be sequences of integers and $\alpha_n,~\beta_n,~\gamma_n$ be sequences of rationals with the following generating functions: $$\begin{aligned} \label{generating functions 9^n} \begin{split} \frac{2-8x-90x^2}{1-58x-522x^2+729x^3}=\sum_{n=0}^\infty a_nx^n=\sum_{n=1}^\infty \alpha_{n-1} x^{-n},\\ \frac{1+53x+9x^2}{1-58x-522x^2+729x^3}=\sum_{n=0}^\infty b_nx^n=\sum_{n=1}^\infty \beta_{n-1} x^{-n},\\ \frac{2+22x-108x^2}{1-58x-522x^2+729x^3}=\sum_{n=0}^\infty c_nx^n=\sum_{n=1}^\infty \gamma_{n-1} x^{-n}, \end{split}\end{aligned}$$ then one has $$\begin{aligned} \label{abc} a_n^3+b_n^3=c_n^3+((-9)^n)^3 \hspace{5mm} \textrm{and} \hspace{5mm} \alpha_n^3+\beta_n^3=\gamma_n^3- \delta_n^3,\end{aligned}$$ where $\delta_n=(-1/9)^n$.* **Remark 1**. *We can get integral solutions from the second equality of [\[abc\]](#abc){reference-type="eqref" reference="abc"} on multiplying $\alpha_{n},~\beta_{n},~\gamma_{n}$ by $9^{2(n+1)}$. This will be clearer from the following example. For $n=1$, we have $$\begin{aligned} \left(\frac{-10}{81}\right)^3 + \left(\frac{1}{81}\right)^3 &= \left(\frac{-4}{27}\right)^3 - \left(\frac{-1}{9}\right)^3.\end{aligned}$$ On clearing the denominators, we get $$(-10)^3+1^3=(-12)^3-(-9)^3.$$ Here we see that each of $\alpha_0,~\beta_0$ and $\gamma_0$ has been multiplied by $9^2$.* A few more solutions obtained from the above theorem satisfying $a_n^3+b_n^3=c_n^3+((-9)^n)^3$, for $n=1,2,3$, are $$\begin{aligned} 108^3 + 111^3 &= 138^3 + (-9)^3, \\ 7218^3 + 6969^3 &= 8940^3 + 81^3, \\ 473562^3 + 461415^3 &= 589098^3 + (-729)^3.\end{aligned}$$ Some more examples satisfying $\alpha_n^3+\beta_n^3=\gamma_n^3- \delta_n^3$ for $n=2,3$, after clearing the denominators, are $$\begin{aligned} (-652)^3+535^3&=(-498)^3-81^3, \\ (-41578)^3+32281^3 &=(-33690)^3-(-729)^3.\end{aligned}$$ **Theorem 6**.
*Suppose we have the following power series expansions: $$\begin{aligned} \begin{split} \frac{2+22x+60x^2}{1+2x-12x^2-216x^3}=\sum_{n=0}^\infty a_nx^n=\sum_{n=1}^\infty \alpha_{n-1} x^{-n},\\ \frac{1-13x-6x^2}{1+2x-12x^2-216x^3}=\sum_{n=0}^\infty b_nx^n=\sum_{n=1}^\infty \beta_{n-1} x^{-n},\\ \frac{1+11x-54x^2}{1+2x-12x^2-216x^3}=\sum_{n=0}^\infty c_nx^n=\sum_{n=1}^\infty \gamma_{n-1} x^{-n}, \end{split}\end{aligned}$$ then we have $$\begin{aligned} \label{2.6n} a_n^3+b_n^3=c_n^3+(2\times 6^n)^3 \hspace{5mm} \textrm{and} \hspace{5mm} \alpha_n^3+\beta_n^3=\gamma_n^3- \delta_n^3,\end{aligned}$$ where $\delta_n=2/6^n$.* **Remark 1**. *Here again on multiplying by $6^{2(n+1)}$, rational solutions corresponding to $\alpha_{n},~\beta_{n},~\gamma_{n}$ can be converted to integral solutions of [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"}.* A few more solutions to [\[Euler\'s Diophantine equation\]](#Euler's Diophantine equation){reference-type="eqref" reference="Euler's Diophantine equation"} extracted from Theorem [Theorem 6](#gen 6^3n){reference-type="ref" reference="gen 6^3n"} are $$\begin{aligned} 18^3 + (-15)^3 &= 9^3 + 12^3, \\ 48^3 + 36^3 &= (-60)^3 + 72^3, \\ 552^3 + (-36)^3 &= 444^3 + 216^3, \\ 3360^3 + (-2736)^3 &= 336^3 + 1296^3.\end{aligned}$$ The above solutions satisfy the first equality of [\[2.6n\]](#2.6n){reference-type="eqref" reference="2.6n"} and in accordance with the second equality, we clear the denominators to get $$\begin{aligned} (-112)^3+76^3 &=(-84)^3-72^3, \\ (-328)^3+(-356)^3&=60^3-432^3.\end{aligned}$$ **Theorem 7**. 
*Consider six different sequences of integers $\{a_n\}, \{b_n\}, \{c_n\}, \{d_n\}, \{e_n\},$ and $\{f_n\}$ with the following generating functions: $$\begin{aligned} \label{fibonacci a_n...f_n} \begin{split} \frac{8+8x+24x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty a_nx^n,\\ \frac{6-68x+18x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty b_nx^n,\\ \frac{14-60x+42x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty c_nx^n,\\ \frac{9+18x-27x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty d_nx^n,\\ \frac{4+8x-12x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty e_nx^n,\\ \frac{15+30x-45x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty f_nx^n. \end{split}\end{aligned}$$ Then, we have $$a_n^4+b_n^4+c_n^4+d_n^4+e_n^4 = f_n^4.$$* A few numerical examples drawn from the above theorem for $n=0,1,2$ are $$\begin{aligned} 8^4 + 6^4 + 14^4 + 9^4 + 4^4 &= 15^4,\\ 24^4 + (-56)^4 + (-32)^4 + 36^4 + 16^4 &= 60^4,\\ 88^4 + (-82)^4 + 6^4 + 63^4 + 28^4 &= 105^4.\end{aligned}$$ **Theorem 8**. *Consider the following power series $$\begin{aligned} \begin{split} \frac{6+184x+54x^2}{1-28x-84x^2+27x^3}=\sum_{n=0}^\infty a_nx^n,\\ \frac{14-64x+126x^2}{1-28x-84x^2+27x^3}=\sum_{n=0}^\infty b_nx^n,\\ \frac{9-81x^2}{1-28x-84x^2+27x^3}=\sum_{n=0}^\infty c_nx^n,\\ \frac{4-36x^2}{1-28x-84x^2+27x^3}=\sum_{n=0}^\infty d_nx^n,\\ \frac{15-135x^2}{1-28x-84x^2+27x^3}=\sum_{n=0}^\infty e_nx^n, \end{split}\end{aligned}$$ then the coefficients of the above power series satisfy $$\begin{aligned} a_n^4+b_n^4+c_n^4+d_n^4 = e_n^4 - (8 \times (-3)^n)^4.\end{aligned}$$* A few solutions obtained from the above theorem corresponding to $n = 1,2,3$ are $$\begin{aligned} 352^4 + 328^4 + 252^4 + 112^4 &= 420^4 - (-24)^4,\\ 10414^4 + 10486^4 + 7731^4 + 3436^4 &= 12885^4 - 72^4,\\ 320998^4 + 320782^4 + 237393^4 + 105508^4 &= 395655^4 - (-216)^4.\end{aligned}$$ **Theorem 9**.
*Let $\{a_n\}, \{b_n\}, \{c_n\}, \{d_n\}, \{e_n\},$ and $\{f_n\}$ be sequences of integers with the following power series: $$\begin{aligned} \begin{split} \frac{4-16x+12x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty a_nx^n,\\ \frac{3+6x-9x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty b_nx^n,\\ \frac{2-20x+6x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty c_nx^n,\\ \frac{4+8x-12x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty d_nx^n,\\ \frac{2+4x+6x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty e_nx^n,\\ \frac{5+10x-15x^2}{1-2x-2x^2+x^3}=\sum_{n=0}^\infty f_nx^n. \end{split}\end{aligned}$$ Then, $$a_n^4+b_n^4+c_n^4+d_n^4+e_n^4 = f_n^4.$$* A few solutions obtained from the above theorem for $n=1,2,3$ are $$\begin{aligned} (-8)^4 + 12^4 + (-16)^4 + 16^4 + 8^4 &= 20^4,\\ 4^4 + 21^4 + (-22)^4 + 28^4 + 26^4 &= 35^4,\\ (-12)^4 + 63^4 + (-78)^4 + 84^4 + 66^4 &= 105^4.\end{aligned}$$ **Theorem 10**. *Consider the following power series $$\begin{aligned} \begin{split} \frac{4-24x+36x^2}{1-39x-117x^2+27x^3}=\sum_{n=0}^\infty a_nx^n,\\ \frac{3-27x^2}{1-39x-117x^2+27x^3}=\sum_{n=0}^\infty b_nx^n,\\ \frac{4-36x^2}{1-39x-117x^2+27x^3}=\sum_{n=0}^\infty c_nx^n,\\ \frac{2+60x+18x^2}{1-39x-117x^2+27x^3}=\sum_{n=0}^\infty d_nx^n,\\ \frac{5-45x^2}{1-39x-117x^2+27x^3}=\sum_{n=0}^\infty e_nx^n, \end{split}\end{aligned}$$ then we have $$\begin{aligned} a_n^4+b_n^4+c_n^4+d_n^4 = e_n^4 - (2 \times (-3)^n)^4.\end{aligned}$$* A few examples obtained from the above theorem are $$\begin{aligned} \begin{split} 132^4 + 117^4 + 156^4 + 138^4 &= 195^4 - (-6)^4,\\ 5652^4 + 4887^4 + 6516^4 + 5634^4 &= 8145^4 - 18^4,\\ 235764^4 + 204201^4 + 272268^4 + 235818^4 &= 340335^4 - (-54)^4. \end{split}\end{aligned}$$ From this numerical evidence, one can see that the families of solutions given by each theorem are different. In the next section, we present the proofs of all the results. # Proof of Main Results Let $(p,q,r,s)$ be a known solution to $A^3+B^3+C^3=D^3$.
Then consider a straight line passing through $(p,q,r,s)$ as $$\begin{aligned} \frac{A-p}{a}=\frac{B-q}{b}=\frac{C-r}{-a}=\frac{D-s}{b}=\theta.\end{aligned}$$ This gives $$\label{A,B,C,D} \begin{aligned} A&=a\theta+p,\\ B&=b\theta+q,\\ C&=-a\theta+r,\\ D&=b\theta+s. \end{aligned}$$ Substituting [\[A,B,C,D\]](#A,B,C,D){reference-type="eqref" reference="A,B,C,D"} in $A^3+B^3+C^3=D^3$, we get $$\begin{aligned} \label{value of theta} \theta=-\frac{a(p^2-r^2)+b(q^2-s^2)}{a^2(p+r)+b^2(q-s)}.\end{aligned}$$ Plugging the value [\[value of theta\]](#value of theta){reference-type="eqref" reference="value of theta"} of $\theta$ in [\[A,B,C,D\]](#A,B,C,D){reference-type="eqref" reference="A,B,C,D"}, we get $$\begin{aligned} A&=a^2r(p+r)-ab(q^2-s^2)-b^2p(s-q),\\ B&=a^2q(p+r)-ab(p^2-r^2)+b^2s(s-q),\\ C&=a^2p(p+r)+ab(q^2-s^2)-b^2r(s-q),\\ D&=a^2s(p+r)-ab(p^2-r^2)+b^2q(s-q).\end{aligned}$$ This is the most general solution. Now, we make a change of variable $$\begin{aligned} a\rightarrow \frac{a}{\sqrt{p+r}},~~~b\rightarrow\frac{b}{\sqrt{s-q}}.\end{aligned}$$ Without loss of generality, we may assume $s>q$. This gives us $$\begin{aligned} A&=a^2r+ab(s+q)\sqrt{\frac{s-q}{p+r}}-b^2p,\\ B&=a^2q-ab(p-r)\sqrt{\frac{p+r}{s-q}}+b^2s,\\ C&=a^2p-ab(s+q)\sqrt{\frac{s-q}{p+r}}-b^2r,\\ D&=a^2s-ab(p-r)\sqrt{\frac{p+r}{s-q}}+b^2q.\end{aligned}$$ Interchanging $r$ and $p$ gives Ramanujan's solution. ◻ Let us consider a straight line passing through a point $(p,q,r,s,t,u)$, which is a known solution to $A^3+B^3+C^3+D^3+E^3 = F^3$, $$\begin{aligned} \frac{A-p}{a}=\frac{B-q}{b}=\frac{C-r}{c}=\frac{D-s}{-a}=\frac{E-t}{-b}=\frac{F-u}{c}=\theta.\end{aligned}$$ This gives $$\begin{aligned} \label{all x_i's} \begin{split} A&=a\theta + p, \\ B&=b\theta + q, \\ C&=c\theta + r, \\ D&=-a\theta + s, \\ E&=-b\theta + t, \\ F&=c\theta + u.
\end{split}\end{aligned}$$ Substitute [\[all x_i\'s\]](#all x_i's){reference-type="eqref" reference="all x_i's"} in $A^3+B^3+C^3+D^3+E^3 = F^3$ to get $$\begin{aligned} \theta = \frac{-[a(p^2-s^2)+b(q^2-t^2)+c(r^2-u^2)]}{a^2(p+s)+b^2(q+t)+c^2(r-u)}.\end{aligned}$$ Utilizing the value of $\theta$ in [\[all x_i\'s\]](#all x_i's){reference-type="eqref" reference="all x_i's"}, we get $$\begin{aligned} A&=a^2s(p+s)-ab(q^2-t^2)-ac(r^2-u^2)+b^2p(q+t)+c^2p(r-u),\\ B&=-ab(p^2-s^2)+b^2t(q+t)-bc(r^2-u^2)+a^2q(p+s)+c^2q(r-u),\\ C&=-ac(p^2-s^2)-bc(q^2-t^2)-c^2u(r-u)+a^2r(p+s)+b^2r(q+t),\\ D&=a^2p(p+s)+ab(q^2-t^2)+ac(r^2-u^2)+b^2s(q+t)+c^2s(r-u),\\ E&=ab(p^2-s^2)+b^2q(q+t)+bc(r^2-u^2)+a^2t(p+s)+c^2t(r-u),\\ F&=-ac(p^2-s^2)-bc(q^2-t^2)-c^2r(r-u)+a^2u(p+s)+b^2u(q+t).\end{aligned}$$ Now we make a change of variable $$\begin{aligned} a\rightarrow \frac{a}{\sqrt{p+s}}, \hspace{3mm} b\rightarrow \frac{b}{\sqrt{q+t}}, \hspace{3mm} c\rightarrow \frac{c}{\sqrt{r-u}}.\end{aligned}$$ Without loss of generality, we may assume $r>u$. This gives us $$\begin{aligned} A&=a^2s+b^2p+c^2p-ab(q-t)\sqrt{\frac{q+t}{p+s}}-ac(r+u)\sqrt{\frac{r-u}{p+s}},\\ B&=a^2q+b^2t+c^2q-ab(p-s)\sqrt{\frac{p+s}{q+t}}-bc(r+u)\sqrt{\frac{r-u}{q+t}},\\ C&=a^2r+b^2r-c^2u-ac(p-s)\sqrt{\frac{p+s}{r-u}}-bc(q-t)\sqrt{\frac{q+t}{r-u}},\\ D&=a^2p+b^2s+c^2s+ac(r+u)\sqrt{\frac{r-u}{p+s}}+ab(q-t)\sqrt{\frac{q+t}{p+s}},\\ E&=a^2t+b^2q+c^2t+ab(p-s)\sqrt{\frac{p+s}{q+t}}+bc(r+u)\sqrt{\frac{r-u}{q+t}},\\ F&=a^2u+b^2u-c^2r-ac(p-s)\sqrt{\frac{p+s}{r-u}}-bc(q-t)\sqrt{\frac{q+t}{r-u}}.\end{aligned}$$ This completes the proof.
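The general solution derived above for $A^3+B^3+C^3=D^3$, before the change of variables, is polynomial in $a$ and $b$, so it can be checked directly. The following sketch (the function name is ours) verifies it on a grid of integer points for the known solution $(p,q,r,s)=(3,4,5,6)$; since each of $A,B,C,D$ is quadratic in $(a,b)$, both sides are polynomials of degree $6$, and agreement on an $11\times 11$ grid establishes the identity for this choice of $(p,q,r,s)$:

```python
from itertools import product

def family(p, q, r, s, a, b):
    """General solution on the line through a known solution (p,q,r,s)
    of A^3 + B^3 + C^3 = D^3, as derived in the proof of Theorem 2."""
    A = a * a * r * (p + r) - a * b * (q * q - s * s) - b * b * p * (s - q)
    B = a * a * q * (p + r) - a * b * (p * p - r * r) + b * b * s * (s - q)
    C = a * a * p * (p + r) + a * b * (q * q - s * s) - b * b * r * (s - q)
    D = a * a * s * (p + r) - a * b * (p * p - r * r) + b * b * q * (s - q)
    return A, B, C, D

p, q, r, s = 3, 4, 5, 6          # 3^3 + 4^3 + 5^3 = 6^3
for a, b in product(range(-5, 6), repeat=2):
    A, B, C, D = family(p, q, r, s, a, b)
    assert A ** 3 + B ** 3 + C ** 3 == D ** 3
```

For instance, $(a,b)=(1,1)$ yields $54^3+60^3+(-6)^3=72^3$.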
◻ Let our sequence $\omega_n$ be the Fibonacci sequence defined as $$\begin{aligned} \label{Fibonnaci_sequence} \omega_{n+2}=\omega_{n+1}+\omega_n\hspace{3mm}\textrm{with}\hspace{3mm} \omega_0=0\hspace{3mm} \textrm{and} \hspace{3mm} \omega_1=1.\end{aligned}$$ Then, to combine $\omega_n$ with [\[after_variable_change\]](#after_variable_change){reference-type="eqref" reference="after_variable_change"}, we substitute $$\begin{aligned} A=\omega_{n+1}\hspace{3mm} \textrm{and} \hspace{3mm}B=\omega_n,\end{aligned}$$ with $$\begin{aligned} \label{a_n, b_n, c_n, d_n} \begin{split} &a_n=A^2+7AB-9B^2=\omega_{n+1}^2+7\omega_n\omega_{n+1}-9\omega_n^2,\\ &b_n=2A^2-4AB+12B^2=2\omega_{n+1}^2-4\omega_n\omega_{n+1}+12\omega_n^2,\\ &c_n=2A^2+10B^2=2\omega_{n+1}^2+10\omega_n^2,\\ &d_n= A^2-9AB-B^2=\omega_{n+1}^2-9\omega_n\omega_{n+1}-\omega_n^2, \end{split}\end{aligned}$$ satisfying $$\begin{aligned} a_n^3+b_n^3=c_n^3+d_n^3.\end{aligned}$$ Now we solve the recurrence relation of $\omega_n$ to see $$\begin{aligned} &\omega_n=\frac{1}{\sqrt{5}}\bigg(\bigg(\frac{1+\sqrt{5}}{2}\bigg)^n-\bigg(\frac{1-\sqrt{5}}{2}\bigg)^n\bigg),\\ &\omega_n^2=\frac{1}{5}\bigg(\bigg(\frac{3+\sqrt{5}}{2}\bigg)^n+\bigg(\frac{3-\sqrt{5}}{2}\bigg)^n-2(-1)^n\bigg),\\&\omega_n\omega_{n+1}=\frac{1}{5}\bigg(\bigg(\frac{1+\sqrt{5}}{2}\bigg)^{2n+1}+\bigg(\frac{1-\sqrt{5}}{2}\bigg)^{2n+1}-(-1)^n\bigg).\end{aligned}$$ This leads us to obtain the following generating functions: $$\begin{aligned} \label{generating function of omega_n} \begin{split} &\sum_{n=0}^\infty \omega_n^2x^n=\frac{x-x^2}{1-2x-2x^2+x^3},\\ &\sum_{n=0}^\infty \omega_n\omega_{n+1}x^n=\frac{x}{1-2x-2x^2+x^3}.
\end{split}\end{aligned}$$ We now make use of [\[generating function of omega_n\]](#generating function of omega_n){reference-type="eqref" reference="generating function of omega_n"} in [\[a_n, b_n, c_n, d_n\]](#a_n, b_n, c_n, d_n){reference-type="eqref" reference="a_n, b_n, c_n, d_n"} to get the generating functions for $a_n, b_n, c_n$ and $d_n.$ This completes the proof. ◻ We define a sequence $\{\omega_n\}$ satisfying the following recurrence relation: $$\begin{aligned} \omega_{n+2}=9\omega_n-7\omega_{n+1},\hspace{3mm} \textrm{with}\hspace{3mm} \omega_0=0,\hspace{1mm} \omega_1=1.\end{aligned}$$ Then one can verify that $\omega_n$ satisfies $$\begin{aligned} \omega_{n+1}^2-\omega_n\omega_{n+2} = (-9)^n.\end{aligned}$$ Now, to synchronize the sequence $\omega_n$ with [\[after_variable_change\]](#after_variable_change){reference-type="eqref" reference="after_variable_change"}, we substitute $$\begin{aligned} A=\omega_{n+1},~~B=\omega_n.\end{aligned}$$ This leads us to obtain $$\begin{aligned} A^2+7AB-9B^2 &= \omega_{n+1}^2 + 7\omega_n\omega_{n+1} - 9\omega_n^2 \nonumber\\ &= \omega_{n+1}^2 - \omega_n(9\omega_n - 7\omega_{n+1})\nonumber \\&= \omega_{n+1}^2 - \omega_n \omega_{n+2} = (-9)^n.\end{aligned}$$ We write the remaining terms of [\[after_variable_change\]](#after_variable_change){reference-type="eqref" reference="after_variable_change"} as $$\begin{aligned} \label{a_n, b_n, c_n for 9^n} \begin{split} &a_n=2A^2+10B^2=2\omega_{n+1}^2+10\omega_n^2,\\ &b_n=A^2-9AB-B^2=\omega_{n+1}^2-9\omega_n\omega_{n+1}-\omega_n^2, \\ &c_n=2A^2-4AB+12B^2=2\omega_{n+1}^2-4\omega_n\omega_{n+1}+12\omega_n^2, \end{split}\end{aligned}$$ and they satisfy $$\begin{aligned} a_n^3+b_n^3=c_n^3+((-9)^n)^3.\end{aligned}$$ Solving the recurrence relation for $\omega_n$, we have $$\begin{aligned} &\omega_n=\frac{1}{\sqrt{85}}\bigg(\frac{1}{2^n}(\sqrt{85}-7)^n-\frac{1}{2^n}(-\sqrt{85}-7)^n\bigg),\\
&\omega_n^2=\frac{1}{85}\bigg(\frac{1}{2^n}(7\sqrt{85}+67)^n+\frac{1}{2^n}(-7\sqrt{85}+67)^n-2(-9)^n\bigg),\\ &\omega_n\omega_{n+1}=\frac{1}{85}\bigg(\frac{1}{2^{2n+1}}(\sqrt{85}-7)^{2n+1}+\frac{1}{2^{2n+1}}(-\sqrt{85}-7)^{2n+1}+7(-9)^n\bigg).\end{aligned}$$ This leads us to obtain the following generating functions: $$\begin{aligned} \label{generating function for w_n3} \begin{split} &\sum_{n=0}^\infty \omega_n^2x^n=\frac{x-9x^2}{1-58x-522x^2+729x^3},\\ &\sum_{n=0}^\infty \omega_n\omega_{n+1}x^n=\frac{-7x}{1-58x-522x^2+729x^3}. \end{split}\end{aligned}$$ Utilizing [\[generating function for w_n3\]](#generating function for w_n3){reference-type="eqref" reference="generating function for w_n3"} in [\[a_n, b_n, c_n for 9\^n\]](#a_n, b_n, c_n for 9^n){reference-type="eqref" reference="a_n, b_n, c_n for 9^n"} completes the proof of the first equality of [\[abc\]](#abc){reference-type="eqref" reference="abc"}. Now expanding the left-hand sides of [\[generating functions 9\^n\]](#generating functions 9^n){reference-type="eqref" reference="generating functions 9^n"} around $x=\infty$ and collecting the coefficient of $x^{-n-1}$, we get $$\begin{aligned} \alpha_n=\frac{-162}{85\sqrt{85}}\big[(71\eta-11)\eta^n - (71\xi-11)\xi^n\big]+\frac{16}{765}\left(\frac{-1}{9}\right)^n, \\ \beta_n=\frac{648}{595\sqrt{85}}\big[(59\eta+16)\eta^n - (59\xi+16)\xi^n\big]-\frac{43}{765}\left(\frac{-1}{9}\right)^n, \\ \gamma_n=\frac{-486}{595\sqrt{85}}\big[(146\eta-31)\eta^n - (146\xi-31)\xi^n\big]-\frac{16}{765}\left(\frac{-1}{9}\right)^n,\end{aligned}$$ where $\eta$ and $\xi$ are the roots of the polynomial equation $81x^2-67x+1=0$. Replacing $n$ by $n-1$, we get the required result. ◻ The proof of this theorem goes along the same lines as in Theorem [Theorem 5](#gen -9^3n){reference-type="ref" reference="gen -9^3n"}.
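Each of these proofs rests on the same invariant: for $\omega_{n+2}=p\,\omega_{n+1}+q\,\omega_n$ with $\omega_0=0$ and $\omega_1=1$, one has $\omega_{n+1}^2-\omega_n\omega_{n+2}=(-q)^n$, since $\omega_{n+1}^2-\omega_n\omega_{n+2}=-q\left(\omega_n^2-\omega_{n-1}\omega_{n+1}\right)$. A quick numerical check of the recurrences used in this section (our own sketch):

```python
def invariant_holds(p, q, n_max=20):
    """Check omega_{n+1}^2 - omega_n*omega_{n+2} == (-q)^n for the sequence
    omega_{n+2} = p*omega_{n+1} + q*omega_n with omega_0 = 0, omega_1 = 1."""
    w = [0, 1]
    for _ in range(n_max + 1):
        w.append(p * w[-1] + q * w[-2])
    return all(w[n + 1] ** 2 - w[n] * w[n + 2] == (-q) ** n
               for n in range(n_max))

# the recurrences used for Theorems 5, 6, 8 and 10, respectively:
assert invariant_holds(-7, 9)    # omega_{n+2} = 9*omega_n - 7*omega_{n+1}: (-9)^n
assert invariant_holds(2, -6)    # omega_{n+2} = -6*omega_n + 2*omega_{n+1}: 6^n
assert invariant_holds(-5, 3)    # omega_{n+2} = 3*omega_n - 5*omega_{n+1}: (-3)^n
assert invariant_holds(6, 3)     # omega_{n+2} = 6*omega_{n+1} + 3*omega_n: (-3)^n
```

This is exactly the quantity that collapses one of the four forms in the cubic (or quartic) identity to a pure power.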
Here the terms of the sequence $\{\omega_n\}$ can be derived from the following recurrence relation: $$\begin{aligned} \omega_{n+2}=-6\omega_n+2\omega_{n+1},\hspace{3mm} \omega_0=0,\hspace{1mm} \omega_1=1,\end{aligned}$$ and the sequence satisfies $$\begin{aligned} 2\omega_{n+1}^2-2\omega_n\omega_{n+2}=2\cdot6^n.\end{aligned}$$ Here we utilize [\[after_variable_change\]](#after_variable_change){reference-type="eqref" reference="after_variable_change"} by substituting $A=\omega_{n+1},~~B=\omega_n,$ where $$\begin{aligned} \label{a_n, b_n, c_n for 2.6^n} \begin{split} &a_n=2A^2+10B^2=2\omega_{n+1}^2+10\omega_n^2,\\ &b_n=A^2-9AB-B^2=\omega_{n+1}^2-9\omega_n\omega_{n+1}-\omega_n^2,\\ &c_n=A^2+7AB-9B^2=\omega_{n+1}^2+7\omega_n\omega_{n+1}-9\omega_n^2, \end{split}\end{aligned}$$ and $$\begin{aligned} 2A^2-4AB+12B^2=2(6)^n.\end{aligned}$$ Then we have $$\begin{aligned} a_n^3+b_n^3=c_n^3+2^3(6)^{3n}.\end{aligned}$$ We solve the recurrence relation of $\omega_n$ to get $$\begin{aligned} &\omega_n=\frac{i}{2 \sqrt{5}}\bigg((1-i\sqrt{5})^n-(1+i\sqrt{5})^n\bigg),\\ &\omega_n^2=\frac{-1}{20}\bigg((-4-2i\sqrt{5})^n+(-4+2i\sqrt{5})^n -2(6)^n\bigg),\\ &\omega_n\omega_{n+1}=\frac{-1}{20}\bigg((1-i\sqrt{5})^{2n+1}+(1+i\sqrt{5})^{2n+1}-2(6)^n\bigg).\end{aligned}$$ Then the generating functions for $\omega_n^2$ and $\omega_n \omega_{n+1}$ are $$\begin{aligned} &\sum_{n=0}^\infty \omega_n^2x^n=\frac{x+6x^2}{1+2x-12x^2-216x^3},\\ &\sum_{n=0}^\infty \omega_n\omega_{n+1}x^n=\frac{2x}{1+2x-12x^2-216x^3}.\end{aligned}$$ Utilizing these generating functions in [\[a_n, b_n, c_n for 2.6\^n\]](#a_n, b_n, c_n for 2.6^n){reference-type="eqref" reference="a_n, b_n, c_n for 2.6^n"}, we get the required result. ◻ The coefficients of the power series obtained in [\[fibonacci a_n\...f_n\]](#fibonacci a_n...f_n){reference-type="eqref" reference="fibonacci a_n...f_n"} give solutions to the Diophantine equation $A^4 + B^4 + C^4 + D^4 + E^4 = F^4$.
To prove this theorem, we make use of the following parametric solution given by Ramanujan [@ramanujantifr p. 386]. If $s$ and $t$ are arbitrary, then $$\begin{aligned} \label{power_4_s_t} \begin{split} (8s^2 + 40st &-24t^2)^4 + (6s^2 - 44st -18t^2)^4 + (14s^2 -4st -42t^2)^4 \\ &+ (9s^2 + 27t^2)^4 + (4s^2 + 12t^2)^4 = (15s^2 + 45t^2)^4. \end{split}\end{aligned}$$ The proof goes in the same vein as the previous theorems. Here we take our sequence $\{\omega_n \}$ to be the Fibonacci sequence defined as in [\[Fibonnaci_sequence\]](#Fibonnaci_sequence){reference-type="eqref" reference="Fibonnaci_sequence"}. Now to synchronize this with [\[power_4\_s_t\]](#power_4_s_t){reference-type="eqref" reference="power_4_s_t"}, we substitute $$\begin{aligned} s=\omega_{n+1}\hspace{3mm} \textrm{and} \hspace{3mm}t=\omega_n.\end{aligned}$$ Then we represent $a_n, b_n, c_n, d_n, e_n$ and $f_n$ as $$\begin{aligned} \label{a_n,b_n...f_n,fibonacci} \begin{split} &a_n= 8s^2 + 40st -24t^2 = 8\omega_{n+1}^2+40\omega_n\omega_{n+1}-24\omega_n^2, \\ &b_n= 6s^2 - 44st -18t^2 = 6\omega_{n+1}^2-44\omega_n\omega_{n+1}-18\omega_n^2, \\ &c_n= 14s^2 -4st -42t^2 = 14\omega_{n+1}^2-4\omega_n\omega_{n+1}-42\omega_n^2, \\ &d_n= 9s^2 + 27t^2= 9\omega_{n+1}^2+27\omega_n^2, \\ &e_n= 4s^2 + 12t^2= 4\omega_{n+1}^2+12\omega_n^2, \\ &f_n= 15s^2 + 45t^2 = 15\omega_{n+1}^2+45\omega_n^2, \end{split}\end{aligned}$$ satisfying $$\begin{aligned} a_n^4 + b_n^4 + c_n^4 + d_n^4 + e_n^4 = f_n^4. \end{aligned}$$ The generating functions for $\omega_n^2$ and $\omega_n \omega_{n+1}$ are given in [\[generating function of omega_n\]](#generating function of omega_n){reference-type="eqref" reference="generating function of omega_n"}.
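Ramanujan's quartic parametric solution above is a polynomial identity of degree at most $8$ in each of $s$ and $t$, so agreement at a $13\times 13$ grid of integer points actually establishes it (a nonzero polynomial of degree at most $8$ in each variable cannot vanish on such a grid). A sketch of this check, with our own helper name:

```python
def quintic_terms(s, t):
    """The six quadratic forms in Ramanujan's quartic identity at (s, t)."""
    return (8 * s * s + 40 * s * t - 24 * t * t,
            6 * s * s - 44 * s * t - 18 * t * t,
            14 * s * s - 4 * s * t - 42 * t * t,
            9 * s * s + 27 * t * t,
            4 * s * s + 12 * t * t,
            15 * s * s + 45 * t * t)

for s in range(-6, 7):
    for t in range(-6, 7):
        a, b, c, d, e, f = quintic_terms(s, t)
        assert a ** 4 + b ** 4 + c ** 4 + d ** 4 + e ** 4 == f ** 4
```

For example, $(s,t)=(1,0)$ recovers $8^4+6^4+14^4+9^4+4^4=15^4$, the $n=0$ case of Theorem 7.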
Utilizing [\[generating function of omega_n\]](#generating function of omega_n){reference-type="eqref" reference="generating function of omega_n"} in [\[a_n,b_n\...f_n,fibonacci\]](#a_n,b_n...f_n,fibonacci){reference-type="eqref" reference="a_n,b_n...f_n,fibonacci"}, one can complete the proof. 0◻ Let us consider a sequence $\{\omega_n\}$ defined as $$\begin{aligned} \label{w_n 2.8} \omega_{n+2}=3\omega_n-5\omega_{n+1},\hspace{3mm} \omega_0=0,\hspace{1mm} \omega_1=1.\end{aligned}$$ For this sequence, the quantity $$g_n = 8\omega_{n+1}^2 - 8\omega_n \omega_{n+2}$$ satisfies the relation $$g_n = (-3)^n g_0 = 8(-3)^n.$$ Now, in accordance with [\[power_4\_s_t\]](#power_4_s_t){reference-type="eqref" reference="power_4_s_t"}, we substitute $$\begin{aligned} s=\omega_{n+1} \hspace{5mm} \textrm{and} \hspace{5mm} t=\omega_n,\end{aligned}$$ and we write $$\begin{aligned} \label{a_n,b_n...f_n,8.3^n} \begin{split} &a_n= 6s^2 - 44st -18t^2 = 6\omega_{n+1}^2-44\omega_n\omega_{n+1}-18\omega_n^2, \\ &b_n= 14s^2 -4st -42t^2 = 14\omega_{n+1}^2-4\omega_n\omega_{n+1}-42\omega_n^2, \\ &c_n= 9s^2 + 27t^2= 9\omega_{n+1}^2+27\omega_n^2, \\ &d_n= 4s^2 + 12t^2= 4\omega_{n+1}^2+12\omega_n^2, \\ &e_n= 15s^2 + 45t^2 = 15\omega_{n+1}^2+45\omega_n^2, \end{split}\end{aligned}$$ satisfying $$\begin{aligned} a_n^4 + b_n^4 + c_n^4 + d_n^4 = e_n^4 - (8\times (-3)^n)^4.\end{aligned}$$ On solving [\[w_n 2.8\]](#w_n 2.8){reference-type="eqref" reference="w_n 2.8"}, we get the generating functions for $\omega_n^2$ and $\omega_n\omega_{n+1}$ to be $$\begin{aligned} &\sum_{n=0}^\infty \omega_n^2x^n=\frac{x-3x^2}{1-28x-84x^2+27x^3},\\ &\sum_{n=0}^\infty \omega_n\omega_{n+1}x^n=\frac{-5x}{1-28x-84x^2+27x^3}.\end{aligned}$$ Making use of these generating functions in [\[a_n,b_n\...f_n,8.3\^n\]](#a_n,b_n...f_n,8.3^n){reference-type="eqref" reference="a_n,b_n...f_n,8.3^n"}, we get the desired result.
0◻ The proof of this theorem goes along the lines of Theorem [Theorem 7](#Fibonacci_power_4_s_t){reference-type="ref" reference="Fibonacci_power_4_s_t"}. The key difference here is that we use a different parametric solution of the Diophantine equation $A^4 +B^4+C^4+D^4+E^4=F^4$ given by Ramanujan [@ramanujantifr p. 386]. Namely, for arbitrary $m$ and $n$, we have $$\begin{aligned} \label{power_4_m_n} \begin{split} (4m^2-&12n^2)^4 + (3m^2 + 9n^2)^4 + (2m^2 - 12mn -6n^2)^4 \\& + (4m^2+12n^2)^4 + (2m^2 + 12mn -6n^2)^4 = (5m^2 + 15n^2)^4. \end{split}\end{aligned}$$ Here again, we consider our sequence to be the Fibonacci sequence. As the proof goes along the lines of Theorem [Theorem 4](#fibonacci){reference-type="ref" reference="fibonacci"}, we leave it to the reader. 0◻ Here we consider our sequence $\{\omega_n\}$ to be $$\begin{aligned} \omega_{n+2}=6\omega_{n+1}+3\omega_{n},\hspace{3mm} \omega_0=0,\hspace{1mm} \omega_1=1.\end{aligned}$$ Now let $$g_n=2\omega_{n+1}^2-2\omega_n \omega_{n+2};$$ then it satisfies $$g_n=(-3)^ng_0=2\times(-3)^n.$$ The rest of the proof goes along the same lines as in Theorem [Theorem 8](#3^n.8_power_4){reference-type="ref" reference="3^n.8_power_4"}, so we omit it. 0◻ **Acknowledgement:** We sincerely thank the anonymous referee for carefully reading our manuscript and for giving valuable suggestions. The authors would like to thank Dr. Ajai Choudhry, Dr. Pramod Eyyunni and Dr. Bibekananda Maji for their valuable suggestions throughout this project. The first author's research is supported by the CSIR-UGC Fellowship, Govt. of India, whereas the second author's research is supported by the PMRF Fellowship, Govt. of India, grant number 2101705. We sincerely thank our respective agencies for their generous support. We would also like to thank Bhaskaracharya Mathematics Laboratory and Brahmagupta Mathematics Library of the Department of Mathematics, IIT Indore, supported by DST FIST Project (File No.: SR/FST/MS-I/2018/26). G. E.
Andrews and B. C. Berndt, *Ramanujan's Lost Notebook, Part IV*, Springer, New York, 2013. B. C. Berndt, *Ramanujan's Notebooks, Part IV*, Springer, New York, 1994. K. W. Chen, *Extension of an amazing identity of Ramanujan*, Fibonacci Quart. **50** (2012), 227--230. J. H. Dinitz, R. Games and R. Roth, *Seeds for generalized taxicab numbers*, J. Integer Seq. **22** (2019), Article 19.3.3. J. H. Han and M. D. Hirschhorn, *Another look at an amazing identity of Ramanujan*, Math. Magazine **79** (2006), 302--304. M. D. Hirschhorn, *An amazing identity of Ramanujan*, Math. Magazine **68** (1995), 199--201. M. D. Hirschhorn, *A proof in the spirit of Zeilberger of an amazing identity of Ramanujan*, Math. Magazine **69** (1996), 267--269. M. D. Hirschhorn, *Ramanujan and Fermat's last theorem*, Austral. Math. Soc. Gaz. **31** (2004), 256--257. J. McLaughlin, *An identity motivated by an amazing identity of Ramanujan*, Fibonacci Quart. **48** (2010), 34--38. V. Niţică, *About some relatives of the taxicab number*, J. Integer Seq. **21** (2018), Article 18.9.4. S. Ramanujan, *Notebooks of Srinivasa Ramanujan, Vol. II*, Tata Institute of Fundamental Research, Mumbai, 2012. S. Ramanujan, *Question 441*, J. Indian Math. Soc. **5** (1913), 39; solution in **6** (1914), 226--227. S. Ramanujan, *The Lost Notebook and Other Unpublished Papers*, Narosa, New Delhi, 1988. J. H. Silverman, *Taxicabs and sums of two cubes*, Amer. Math. Monthly **100** (1993), 331--340. [^1]: 2010 *Mathematics Subject Classification.* Primary 11D25.\ *Keywords and phrases.* Euler's Diophantine equation, Ramanujan taxicab number. [^2]: *In [@LN4 p. 203, Equation (8.5.13)], the authors mentioned that the second equality of [\[ramanujan final result\]](#ramanujan final result){reference-type="eqref" reference="ramanujan final result"} is incorrect. However, we emphasize that the second equality is in fact correctly stated by Ramanujan.*
**CONTINUOUS DEUTSCH UNCERTAINTY PRINCIPLE AND CONTINUOUS KRAUS CONJECTURE**\
**K. MAHESH KRISHNA**\
Post Doctoral Fellow\
Statistics and Mathematics Unit\
Indian Statistical Institute, Bangalore Centre\
Karnataka 560 059, India\
Email: kmaheshak\@gmail.com\
**Abstract**: Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $\{\tau_\alpha\}_{\alpha\in \Omega}$, $\{\omega_\beta\}_{\beta \in \Delta}$ be 1-bounded continuous Parseval frames for a Hilbert space $\mathcal{H}$.
Then we show that $$\begin{aligned} \label{UE} \log (\mu(\Omega)\nu(\Delta))\geq S_\tau(h)+S_\omega (h)\geq -2 \log \left(\frac{1+\displaystyle \sup_{\alpha \in \Omega, \beta \in \Delta}|\langle\tau_\alpha , \omega_\beta\rangle|}{2}\right) , \quad \forall h \in \mathcal{H}_\tau \cap \mathcal{H}_\omega,\end{aligned}$$ where $$\begin{aligned} &\mathcal{H}_\tau := \{h_1 \in \mathcal{H}: \langle h_1 , \tau_\alpha \rangle \neq 0, \alpha \in \Omega\}, \quad \mathcal{H}_\omega := \{h_2 \in \mathcal{H}: \langle h_2, \omega_\beta \rangle \neq 0, \beta \in \Delta\},\\ &S_\tau(h):= -\displaystyle\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\log \left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\,d\mu(\alpha), \quad \forall h \in \mathcal{H}_\tau, \\ & S_\omega (h):= -\displaystyle\int\limits_{\Delta}\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\log \left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\,d\nu(\beta), \quad \forall h \in \mathcal{H}_\omega.\end{aligned}$$ We call Inequality ([\[UE\]](#UE){reference-type="ref" reference="UE"}) the **Continuous Deutsch Uncertainty Principle**. Inequality ([\[UE\]](#UE){reference-type="ref" reference="UE"}) improves the uncertainty principle obtained by Deutsch *\[Phys. Rev. Lett., 1983\]*. We formulate a Kraus conjecture for 1-bounded continuous Parseval frames. We also derive continuous Deutsch uncertainty principles for Banach spaces. **Keywords**: Uncertainty Principle, Hilbert space, Frame, Banach space, Deutsch uncertainty, Kraus Conjecture. **Mathematics Subject Classification (2020)**: 42C15.

# Introduction

Let $\mathcal{H}$ be a finite dimensional Hilbert space.
Given an orthonormal basis $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$, the **(finite) Shannon entropy** at a point $h \in \mathcal{H}_\tau$ is defined as $$\begin{aligned} S_\tau (h)\coloneqq - \sum_{j=1}^{n} \left|\left \langle \frac{h}{\|h\|}, \tau_j\right\rangle \right|^2\log \left|\left \langle \frac{h}{\|h\|}, \tau_j\right\rangle \right|^2\geq 0, \end{aligned}$$ where $\mathcal{H}_\tau \coloneqq \{h \in \mathcal{H}: \langle h , \tau_j \rangle\neq 0, 1\leq j \leq n \}$. In 1983, Deutsch derived the following uncertainty principle for Shannon entropy [@DEUTSCH]. **Theorem 1**. *(**Deutsch Uncertainty Principle**) [@DEUTSCH] [\[DU\]]{#DU label="DU"} Let $\{\tau_j\}_{j=1}^n$, $\{\omega_j\}_{j=1}^n$ be two orthonormal bases for a finite dimensional Hilbert space $\mathcal{H}$. Then $$\begin{aligned} \label{DUP} 2 \log n \geq S_\tau (h)+S_\omega (h)\geq -2 \log \left(\frac{1+\displaystyle \max_{1\leq j, k \leq n}|\langle\tau_j , \omega_k\rangle|}{2}\right) \geq 0, \quad \forall h \in \mathcal{H}_\tau \cap \mathcal{H}_\omega. \end{aligned}$$* In 1988, following a conjecture made by Kraus [@KRAUS] in 1987, Maassen and Uffink improved Inequality ([\[DUP\]](#DUP){reference-type="ref" reference="DUP"}) [@MAASSENUFFINK]. **Theorem 2**. *(**Kraus Conjecture/Maassen-Uffink Uncertainty Principle**) [@KRAUS; @MAASSENUFFINK] [\[KMU\]]{#KMU label="KMU"} Let $\{\tau_j\}_{j=1}^n$, $\{\omega_j\}_{j=1}^n$ be two orthonormal bases for a finite dimensional Hilbert space $\mathcal{H}$. Then $$\begin{aligned} 2 \log n \geq S_\tau (h)+S_\omega (h)\geq -2 \log \left(\displaystyle\max_{1\leq j, k \leq n}|\langle\tau_j , \omega_k\rangle|\right)\geq 0, \quad \forall h \in \mathcal{H}_\tau \cap \mathcal{H}_\omega. \end{aligned}$$* In 2013, Ricaud and Torrésani [@RICAUDTORRESANI] showed that Theorem [\[KMU\]](#KMU){reference-type="ref" reference="KMU"} holds for Parseval frames. **Theorem 3**.
*(**Maassen-Uffink-Ricaud-Torrésani Uncertainty Principle**)[\[MU\]]{#MU label="MU"} [@RICAUDTORRESANI] Let $\{\tau_j\}_{j=1}^n$, $\{\omega_j\}_{j=1}^n$ be two Parseval frames for a finite dimensional Hilbert space $\mathcal{H}$. Then $$\begin{aligned} 2 \log n \geq S_\tau (h)+S_\omega (h)\geq -2 \log \left(\displaystyle\max_{1\leq j, k \leq n}|\langle\tau_j , \omega_k\rangle|\right)\geq 0, \quad \forall h \in \mathcal{H}_\tau \cap \mathcal{H}_\omega. \end{aligned}$$* Recently, Banach space versions of the Deutsch uncertainty principle have been derived in [@KRISHNA]. To formulate them, we need some notions. Given a Parseval p-frame $\{f_j\}_{j=1}^n$ for a Banach space $\mathcal{X}$, we define the **(finite) p-Shannon entropy** at a point $x \in \mathcal{X}_f$ as $$\begin{aligned} S_f(x)\coloneqq -\sum_{j=1}^{n}\left|f_j\left(\frac{x}{\|x\|}\right)\right|^p\log \left|f_j\left(\frac{x}{\|x\|}\right)\right|^p\geq 0, \end{aligned}$$ where $\mathcal{X}_f\coloneqq \{x \in \mathcal{X}:f_j(x)\neq 0, 1\leq j \leq n\}$. Dually, given a Parseval p-frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{X}^*$, we define the **(finite) p-Shannon entropy** at a point $f \in \mathcal{X}^* _\tau$ as $$\begin{aligned} S_\tau(f)\coloneqq -\sum_{j=1}^{n}\left|\frac{f(\tau_j)}{\|f\|}\right|^p\log \left|\frac{f(\tau_j)}{\|f\|}\right|^p\geq 0,\end{aligned}$$ where $\mathcal{X}^*_\tau\coloneqq \{f \in \mathcal{X}^*:f(\tau_j)\neq 0, 1\leq j \leq n\}$. **Theorem 4**. *[@KRISHNA] (**Functional Deutsch Uncertainty Principle**) Let $\{f_j\}_{j=1}^n$ and $\{g_k\}_{k=1}^m$ be Parseval p-frames for a finite dimensional Banach space $\mathcal{X}$.
Then $$\begin{aligned} \frac{1}{(nm)^\frac{1}{p}} \leq \displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\max_{1\leq j\leq n, 1\leq k\leq m}|f_j(y)g_k(y)|\right) \end{aligned}$$ and $$\begin{aligned} \log (nm)\geq S_f (x)+S_g (x)\geq -p \log \left(\displaystyle\sup_{y \in \mathcal{X}_f\cap \mathcal{X}_g, \|y\|=1}\left(\max_{1\leq j\leq n, 1\leq k\leq m}|f_j(y)g_k(y)|\right)\right)> 0, \quad \forall x \in \mathcal{X}_f \cap \mathcal{X}_g. \end{aligned}$$* **Theorem 5**. *(**Functional Deutsch Uncertainty Principle**) Let $\{\tau_j\}_{j=1}^n$ and $\{\omega_k\}_{k=1}^m$ be two Parseval p-frames for the dual $\mathcal{X}^*$ of a finite dimensional Banach space $\mathcal{X}$. Then $$\begin{aligned} \frac{1}{(nm)^\frac{1}{p}} \leq \displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\max_{1\leq j\leq n, 1\leq k\leq m}|g(\tau_j)g(\omega_k)|\right) \end{aligned}$$ and $$\begin{aligned} \log (nm) \geq S_\tau (f)+S_\omega (f)\geq -p \log \left(\displaystyle\sup_{g \in \mathcal{X}^*_\tau \cap \mathcal{X}^*_\omega, \|g\|=1}\left(\max_{1\leq j\leq n, 1\leq k\leq m}|g(\tau_j)g(\omega_k)|\right)\right)> 0, \quad \forall f \in \mathcal{X}^*_\tau \cap \mathcal{X}^*_\omega. \end{aligned}$$* In this paper, we derive continuous versions of Theorem [\[DU\]](#DU){reference-type="ref" reference="DU"}, Theorem [Theorem 4](#M1){reference-type="ref" reference="M1"} and Theorem [Theorem 5](#M2){reference-type="ref" reference="M2"}. We also formulate a conjecture based on Theorem [\[KMU\]](#KMU){reference-type="ref" reference="KMU"}. We note that functional continuous uncertainty principles are derived in [@KRISHNA2].

# Continuous Deutsch Uncertainty Principle and Continuous Kraus Conjecture

Throughout the paper, $\mathbb{K}$ denotes $\mathbb{C}$ or $\mathbb{R}$ and $\mathcal{H}$ (resp. $\mathcal{X}$) denotes a Hilbert space (resp. a Banach space), not necessarily finite dimensional, over $\mathbb{K}$. We use $(\Omega, \mu)$ to denote a measure space.
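As a concrete illustration of the finite-dimensional inequalities recalled above, Theorem 1 can be verified numerically; the following sketch (the bases and the test vector are our choices, purely illustrative) uses the standard basis of $\mathbb{R}^2$ and its rotation by $45$ degrees:

```python
# Numerical illustration (ours) of the Deutsch uncertainty principle:
# 2 log n >= S_tau(h) + S_omega(h) >= -2 log((1 + max |<tau_j, omega_k>|)/2)
# for two orthonormal bases of R^2.
import math

def shannon_entropy(h, basis):
    """Shannon entropy of the squared coefficients |<h/||h||, tau_j>|^2."""
    norm = math.sqrt(sum(x * x for x in h))
    s = 0.0
    for tau in basis:
        p = (sum(hi * ti for hi, ti in zip(h, tau)) / norm) ** 2
        s -= p * math.log(p)
    return s

std = [(1.0, 0.0), (0.0, 1.0)]   # standard basis of R^2
c = 1.0 / math.sqrt(2.0)
rot = [(c, c), (-c, c)]          # standard basis rotated by 45 degrees

h = (2.0, 1.0)  # every coefficient of h in both bases is nonzero
total = shannon_entropy(h, std) + shannon_entropy(h, rot)
overlap = max(abs(sum(t * w for t, w in zip(tau, om)))
              for tau in std for om in rot)
lower = -2 * math.log((1 + overlap) / 2)
assert 0 <= lower <= total <= 2 * math.log(2) + 1e-12
```

Here $\max_{j,k}|\langle\tau_j , \omega_k\rangle| = 1/\sqrt{2}$, so the Deutsch lower bound is about $0.317$, while $S_\tau(h)+S_\omega(h)\approx 0.825$ sits comfortably below the upper bound $2\log 2\approx 1.386$.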
Continuous frames were introduced independently by Ali, Antoine and Gazeau [@ALIANTOINEGAZEAU] and Kaiser [@KAISER]. **Definition 6**. *[@ALIANTOINEGAZEAU; @KAISER] Let $(\Omega, \mu)$ be a measure space. A collection $\{\tau_\alpha\}_{\alpha\in \Omega}$ in a Hilbert space $\mathcal{H}$ is said to be a **continuous Parseval frame** for $\mathcal{H}$ if the following conditions hold.* 1. *For each $h \in \mathcal{H}$, the map $\Omega \ni \alpha \mapsto \langle h, \tau_\alpha \rangle \in \mathbb{K}$ is measurable.* 2. *$$\begin{aligned} \|h\|^2=\int\limits_{\Omega}|\langle h, \tau_\alpha \rangle|^2\,d\mu(\alpha), \quad \forall h \in \mathcal{H}. \end{aligned}$$* We consider the following subclass of continuous Parseval frames. **Definition 7**. *A continuous Parseval frame $\{\tau_\alpha\}_{\alpha\in \Omega}$ for $\mathcal{H}$ is said to be **1-bounded** if $$\begin{aligned} \|\tau_\alpha\|\leq 1, \quad \forall \alpha \in \Omega.\end{aligned}$$* Note that if $\{\tau_j\}_{j=1}^n$ is a Parseval frame for a Hilbert space $\mathcal{H}$, then $\|\tau_j\|\leq 1$, $\forall 1\leq j \leq n$ (see Remark 3.12 in [@HANKORNELSONLARSONWEBER]). We are unable to derive this for continuous frames. Given a 1-bounded continuous Parseval frame $\{\tau_\alpha\}_{\alpha\in \Omega}$ for $\mathcal{H}$, we define the **continuous Shannon entropy** at a point $h \in \mathcal{H}_\tau$ as $$\begin{aligned} S_\tau(h)\coloneqq -\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\log \left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\,d\mu(\alpha)\geq 0,\end{aligned}$$ where $\mathcal{H}_\tau \coloneqq \{h \in \mathcal{H}: \langle h , \tau_\alpha \rangle \neq 0, \alpha \in \Omega\}$. The following is the first fundamental result of this paper. **Theorem 8**.
*(**Continuous Deutsch Uncertainty Principle**)[\[CDUP\]]{#CDUP label="CDUP"} Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $\{\tau_\alpha\}_{\alpha\in \Omega}$, $\{\omega_\beta\}_{\beta \in \Delta}$ be 1-bounded continuous Parseval frames for a Hilbert space $\mathcal{H}$. Then $$\begin{aligned} \label{CDSP} \log (\mu(\Omega)\nu(\Delta))\geq S_\tau(h)+S_\omega (h)\geq -2 \log \left(\frac{1+\displaystyle \sup_{\alpha \in \Omega, \beta \in \Delta}|\langle\tau_\alpha , \omega_\beta\rangle|}{2}\right) \geq 0, \quad \forall h \in \mathcal{H}_\tau \cap \mathcal{H}_\omega. \end{aligned}$$* *Proof.* Since $1=\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\,d\mu(\alpha)$ for all $h \in \mathcal{H} \setminus \{0\}$, $1=\int\limits_{\Delta}\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\,d\nu(\beta)$ for all $h \in \mathcal{H} \setminus \{0\}$ and $\log$ is concave, using Jensen's inequality (cf. [@GARLING]) we get $$\begin{aligned} S_\tau(h)+S_\omega (h)&=\int \limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\log \left(\frac{1}{\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2}\right)\,d\mu(\alpha)+\int \limits_{\Delta}\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\log \left(\frac{1}{\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2}\right)\,d\nu(\beta)\\ &\leq \log \left(\int \limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\frac{1}{\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2}\,d\mu(\alpha)\right)+\log \left(\int \limits_{\Delta}\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\frac{1}{\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2}\,d\nu(\beta)\right)\\ &=\log (\mu(\Omega))+\log\nu (\Delta))=\log (\mu(\Omega)\nu (\Delta)), \quad \forall h \in \mathcal{H}_\tau \cap 
\mathcal{H}_\omega. \end{aligned}$$ Let $h \in \mathcal{H}_\tau \cap \mathcal{H}_\omega$. Then, using the Buzano inequality [@BUZANO; @FFUJIIKUBO], we get $$\begin{aligned} S_\tau(h)+S_\omega (h)&=-\int\limits_{\Delta}\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\left[\log \left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2+\log \left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\right]\,d\mu(\alpha)\,d\nu(\beta)\\ &=-\int\limits_{\Delta}\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\log \left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle\right|^2\,d\mu(\alpha)\,d\nu(\beta)\\ &=-2\int\limits_{\Delta}\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\log \left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle\right|\,d\mu(\alpha)\,d\nu(\beta)\\ &\geq -2\int\limits_{\Delta}\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\log \left(\left\|\frac{h}{\|h\|}\right\|^2\frac{\|\tau_\alpha\|\|\omega_\beta\|+|\langle\tau_\alpha, \omega_\beta \rangle|}{2}\right) \,d\mu(\alpha)\,d\nu(\beta)\\ & \geq-2\int\limits_{\Delta}\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\log \left(\frac{1+|\langle\tau_\alpha, \omega_\beta \rangle|}{2}\right) \,d\mu(\alpha)\,d\nu(\beta)\\ &\geq -2\int\limits_{\Delta}\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|},
\tau_\alpha\right\rangle \right|^2\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\log \left(\frac{1+\displaystyle \sup_{\alpha \in \Omega, \beta \in \Delta}|\langle\tau_\alpha, \omega_\beta \rangle|}{2}\right) \,d\mu(\alpha)\,d\nu(\beta)\\ &=-2\log \left(\frac{1+\displaystyle \sup_{\alpha \in \Omega, \beta \in \Delta}|\langle\tau_\alpha, \omega_\beta \rangle|}{2}\right)\int\limits_{\Delta}\int\limits_{\Omega}\left|\left \langle \frac{h}{\|h\|}, \tau_\alpha\right\rangle \right|^2\left|\left \langle \frac{h}{\|h\|}, \omega_\beta\right\rangle \right|^2\,d\mu(\alpha)\,d\nu(\beta)\\ &=-2\log \left(\frac{1+\displaystyle \sup_{\alpha \in \Omega, \beta \in \Delta}|\langle\tau_\alpha, \omega_\beta \rangle|}{2}\right),\end{aligned}$$ where the last step uses that the double integral equals $1$ by the Parseval property. ◻ Theorem [\[CDUP\]](#CDUP){reference-type="ref" reference="CDUP"} prompts the following question. **Question 9**. *Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $\mathcal{H}$ be a Hilbert space. For which pairs of 1-bounded continuous Parseval frames $\{\tau_\alpha\}_{\alpha\in \Omega}$ and $\{\omega_\beta\}_{\beta\in \Delta}$ for $\mathcal{H}$ do we have equality in Inequality ([\[CDSP\]](#CDSP){reference-type="ref" reference="CDSP"})?* Based on Theorem [\[MU\]](#MU){reference-type="ref" reference="MU"} and Theorem [\[CDUP\]](#CDUP){reference-type="ref" reference="CDUP"}, we formulate the following conjecture. **Conjecture 10**. ***(Continuous Kraus Conjecture)** **Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $\{\tau_\alpha\}_{\alpha\in \Omega}$, $\{\omega_\beta\}_{\beta \in \Delta}$ be 1-bounded continuous Parseval frames for a Hilbert space $\mathcal{H}$. Then $$\begin{aligned} S_\tau(h)+S_\omega (h)\geq -2 \log \left(\displaystyle \sup_{\alpha \in \Omega, \beta \in \Delta}|\langle\tau_\alpha , \omega_\beta\rangle|\right) \geq 0, \quad \forall h \in \mathcal{H}_\tau \cap \mathcal{H}_\omega. \end{aligned}$$*** Next, we derive continuous Deutsch uncertainty principles for Banach spaces. We need a definition.
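Theorem [\[CDUP\]](#CDUP){reference-type="ref" reference="CDUP"} can be sanity-checked on a discrete measure space: weighted atoms turn a finite family of unit vectors into a 1-bounded continuous Parseval frame. The following sketch (our construction, purely illustrative) uses equally spaced unit vectors in $\mathbb{R}^2$ with atoms of mass $2/m$, so that $\mu(\Omega)=\nu(\Delta)=2$:

```python
# Discrete sanity check (ours) of the continuous Deutsch uncertainty principle:
# unit vectors at angles pi*j/m with atom mass 2/m give a 1-bounded continuous
# Parseval frame for R^2, and the chain
# log(mu*nu) >= S_tau(h) + S_omega(h) >= -2 log((1 + sup|<tau,omega>|)/2) >= 0
# can be verified directly.
import math

def frame(m, offset):
    """(weight, unit vector) pairs over the measure space {0,...,m-1}."""
    return [(2.0 / m, (math.cos(math.pi * j / m + offset),
                       math.sin(math.pi * j / m + offset)))
            for j in range(m)]

def is_parseval(fr, h, tol=1e-9):
    total = sum(w * (h[0] * u[0] + h[1] * u[1]) ** 2 for w, u in fr)
    return abs(total - (h[0] ** 2 + h[1] ** 2)) < tol

def cont_entropy(fr, h):
    """Continuous Shannon entropy S(h) with respect to the weighted frame."""
    nrm = math.hypot(h[0], h[1])
    s = 0.0
    for w, u in fr:
        p = ((h[0] * u[0] + h[1] * u[1]) / nrm) ** 2
        s -= w * p * math.log(p)
    return s

tau, om = frame(3, 0.0), frame(4, math.pi / 8)
h = (1.0, 0.5)  # h lies in H_tau and H_omega for these frames
assert is_parseval(tau, h) and is_parseval(om, h)
total = cont_entropy(tau, h) + cont_entropy(om, h)
sup = max(abs(u[0] * v[0] + u[1] * v[1]) for _, u in tau for _, v in om)
lower = -2 * math.log((1 + sup) / 2)
assert 0 <= lower <= total <= math.log(2.0 * 2.0)  # mu(Omega) = nu(Delta) = 2
```

The offsets are chosen so that no vector of one frame is parallel to a vector of the other; otherwise the lower bound degenerates to $0$.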
**Definition 11**. *[@FAROUGHIOSGOOEI][\[A\]]{#A label="A"} Let $(\Omega, \mu)$ be a measure space and $\mathcal{X}$ be a Banach space over $\mathbb{K}$. A collection $\{f_\alpha\}_{\alpha\in \Omega}$ in $\mathcal{X}^*$ is said to be a **continuous Parseval p-frame** ($1\leq p <\infty$) for $\mathcal{X}$ if the following conditions hold.* 1. *For each $x \in \mathcal{X}$, the map $\Omega \ni \alpha \mapsto f_\alpha (x)\in \mathbb{K}$ is measurable.* 2. *$$\begin{aligned} \|x\|^p=\displaystyle\int\limits_{\Omega}|f_\alpha (x)|^p\,d\mu(\alpha), \quad \forall x \in \mathcal{X}. \end{aligned}$$* Similar to Definition [Definition 7](#1B){reference-type="ref" reference="1B"}, we set the following. **Definition 12**. *A continuous Parseval p-frame $\{f_\alpha\}_{\alpha\in \Omega}$ for $\mathcal{X}$ is said to be **1-bounded** if $$\begin{aligned} \|f_\alpha\|\leq 1, \quad \forall \alpha \in \Omega. \end{aligned}$$* Given a 1-bounded continuous Parseval p-frame $\{f_\alpha\}_{\alpha\in \Omega}$ for $\mathcal{X}$, we define the **continuous p-Shannon entropy** at a point $x \in \mathcal{X}_f$ as $$\begin{aligned} S_f(x)\coloneqq -\int\limits_{\Omega}\left|f_\alpha \left(\frac{x}{\|x\|}\right) \right|^p\log\left|f_\alpha \left(\frac{x}{\|x\|}\right) \right|^p\,d\mu(\alpha)\geq 0,\end{aligned}$$ where $\mathcal{X}_f\coloneqq \{x \in \mathcal{X}:f_\alpha(x)\neq 0, \alpha \in \Omega\}$. **Theorem 13**. *(**Functional Continuous Deutsch Uncertainty Principle**) Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $\{f_\alpha\}_{\alpha\in \Omega}$, $\{g_\beta\}_{\beta \in \Delta}$ be 1-bounded continuous Parseval p-frames for a Banach space $\mathcal{X}$.
Then $$\begin{aligned} \frac{1}{(\mu(\Omega)\nu(\Delta))^\frac{1}{p}} \leq \displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right) \end{aligned}$$ and $$\begin{aligned} \label{FD} S_f (x)+S_g (x)\geq -p \log \left(\displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right)\right)\geq 0, \quad \forall x \in \mathcal{X}_f \cap \mathcal{X}_g. \end{aligned}$$* *Proof.* Let $z \in \mathcal{X}$ be such that $\|z\|=1$. Then $$\begin{aligned} 1&=\left(\int\limits_{\Omega}|f_\alpha(z)|^p\,d\mu(\alpha)\right)\left(\int\limits_{\Delta}|g_\beta(z)|^p\,d\nu(\beta)\right)=\int\limits_{\Omega}\int\limits_{\Delta} |f_\alpha(z)g_\beta(z)|^p\,d\nu(\beta)\,d\mu(\alpha)\\ &\leq \int\limits_{\Omega}\int\limits_{\Delta}\left(\displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right)\right)^p\,d\nu(\beta)\,d\mu(\alpha)\\ &=\left(\displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right)\right)^p\mu(\Omega)\nu(\Delta)\end{aligned}$$ which gives $$\begin{aligned} \frac{1}{\mu(\Omega)\nu(\Delta)}\leq \left(\displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right)\right)^p.\end{aligned}$$ Let $x \in \mathcal{X}_f \cap \mathcal{X}_g$. 
Then $$\begin{aligned} S_f (x)+S_g (x)&= -\int\limits_{\Omega}\int\limits_{\Delta}\left|f_\alpha\left(\frac{x}{\|x\|}\right)\right|^p\left|g_\beta\left(\frac{x}{\|x\|}\right)\right|^p\left[\log \left|f_\alpha\left(\frac{x}{\|x\|}\right)\right|^p+\log \left|g_\beta\left(\frac{x}{\|x\|}\right)\right|^p\right]\,d\nu(\beta)\,d\mu(\alpha)\\ &=-\int\limits_{\Omega}\int\limits_{\Delta}\left|f_\alpha\left(\frac{x}{\|x\|}\right)\right|^p\left|g_\beta\left(\frac{x}{\|x\|}\right)\right|^p\log \left|f_\alpha\left(\frac{x}{\|x\|}\right)g_\beta\left(\frac{x}{\|x\|}\right)\right|^p\,d\nu(\beta)\,d\mu(\alpha)\\ &=-p\int\limits_{\Omega}\int\limits_{\Delta}\left|f_\alpha\left(\frac{x}{\|x\|}\right)\right|^p\left|g_\beta\left(\frac{x}{\|x\|}\right)\right|^p\log \left|f_\alpha\left(\frac{x}{\|x\|}\right)g_\beta\left(\frac{x}{\|x\|}\right)\right|\,d\nu(\beta)\,d\mu(\alpha)\\ &\geq -p\int\limits_{\Omega}\int\limits_{\Delta}\left|f_\alpha\left(\frac{x}{\|x\|}\right)\right|^p\left|g_\beta\left(\frac{x}{\|x\|}\right)\right|^p\log\left(\displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right)\right)\,d\nu(\beta)\,d\mu(\alpha)\\ &=-p\log\left(\displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right)\right)\int\limits_{\Omega}\int\limits_{\Delta}\left|f_\alpha\left(\frac{x}{\|x\|}\right)\right|^p\left|g_\beta\left(\frac{x}{\|x\|}\right)\right|^p\,d\nu(\beta)\,d\mu(\alpha)\\ &=-p\log\left(\displaystyle\sup_{y \in \mathcal{X}, \|y\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(y)g_\beta(y)|\right)\right).\end{aligned}$$ ◻ **Corollary 14**. 
*The lower bound in Theorem [\[CDUP\]](#CDUP){reference-type="ref" reference="CDUP"} follows from Theorem [Theorem 13](#FDS){reference-type="ref" reference="FDS"}.* *Proof.* Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $\{\tau_\alpha\}_{\alpha\in \Omega}$, $\{\omega_\beta\}_{\beta \in \Delta}$ be 1-bounded continuous Parseval frames for a Hilbert space $\mathcal{H}$. Define $$\begin{aligned} &f_\alpha:\mathcal{H} \ni h \mapsto \langle h, \tau_\alpha \rangle \in \mathbb{K}, \quad \forall \alpha \in \Omega, \\ & g_\beta:\mathcal{H} \ni h \mapsto \langle h, \omega_\beta \rangle \in \mathbb{K}, \quad \forall \beta \in \Delta. \end{aligned}$$ Now, by using the Buzano inequality [@BUZANO; @FFUJIIKUBO], we get $$\begin{aligned} \displaystyle\sup_{h \in \mathcal{H}, \|h\|=1}\left(\sup_{\alpha \in \Omega,\beta \in \Delta }|f_\alpha(h)g_\beta(h)|\right)&=\displaystyle\sup_{h \in \mathcal{H}, \|h\|=1}\left(\sup_{\alpha \in \Omega,\beta \in \Delta }|\langle h, \tau_\alpha \rangle||\langle h, \omega_\beta \rangle|\right)\\ &\leq \displaystyle\sup_{h \in \mathcal{H}, \|h\|=1}\left(\sup_{\alpha \in \Omega,\beta \in \Delta }\left(\|h\|^2\frac{\|\tau_\alpha\|\|\omega_\beta\|+|\langle \tau_\alpha, \omega_\beta \rangle |}{2}\right)\right)\\ &\leq\frac{1+\displaystyle\sup_{\alpha \in \Omega,\beta \in \Delta }|\langle \tau_\alpha, \omega_\beta \rangle |}{2}.\end{aligned}$$ Hence Theorem [Theorem 13](#FDS){reference-type="ref" reference="FDS"} with $p=2$ yields the lower bound in Inequality ([\[CDSP\]](#CDSP){reference-type="ref" reference="CDSP"}). ◻ Theorem [Theorem 13](#FDS){reference-type="ref" reference="FDS"} raises the following question. **Question 15**. *Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces, $\mathcal{X}$ be a Banach space and $p>1$. For which pairs of continuous Parseval p-frames $\{f_\alpha\}_{\alpha\in \Omega}$ and $\{g_\beta\}_{\beta\in \Delta}$ for $\mathcal{X}$ do we have equality in Inequality ([\[FD\]](#FD){reference-type="ref" reference="FD"})?* Next, we derive a dual inequality of ([\[FD\]](#FD){reference-type="ref" reference="FD"}). For this, we need the dual of Definition [\[A\]](#A){reference-type="ref" reference="A"}. **Definition 16**.
*Let $(\Omega, \mu)$ be a measure space and $\mathcal{X}$ be a Banach space over $\mathbb{K}$. A collection $\{\tau_\alpha\}_{\alpha\in \Omega}$ in $\mathcal{X}$ is said to be a **continuous Parseval p-frame** ($1\leq p <\infty$) for $\mathcal{X}^*$ if the following conditions hold.* 1. *For each $f \in \mathcal{X}^*$, the map $\Omega \ni \alpha \mapsto f(\tau_\alpha)\in \mathbb{K}$ is measurable.* 2. *$$\begin{aligned} \|f\|^p=\int_{\Omega}|f(\tau_\alpha)|^p\,d\mu(\alpha), \quad \forall f \in \mathcal{X}^*. \end{aligned}$$* *If $\|\tau_\alpha\|\leq 1$, $\forall \alpha \in \Omega$, then we say that the frame $\{\tau_\alpha\}_{\alpha\in \Omega}$ is **1-bounded**.* Given a 1-bounded continuous Parseval p-frame $\{\tau_\alpha\}_{\alpha\in \Omega}$ for $\mathcal{X}^*$, we define the **continuous p-Shannon entropy** at a point $f \in \mathcal{X}_\tau^*$ as $$\begin{aligned} S_\tau(f)\coloneqq -\int\limits_{\Omega}\left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p\log \left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p\,d\mu(\alpha)\geq 0, \end{aligned}$$ where $\mathcal{X}_\tau^*\coloneqq \{f \in \mathcal{X}^*: f(\tau_\alpha)\neq 0, \alpha \in \Omega\}$. We now have the following dual to Theorem [Theorem 13](#FDS){reference-type="ref" reference="FDS"}. **Theorem 17**. *(**Continuous Deutsch Uncertainty Principle for Banach spaces**) Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $\{\tau_\alpha\}_{\alpha\in \Omega}$, $\{\omega_\beta\}_{\beta \in \Delta}$ be 1-bounded continuous Parseval p-frames for the dual $\mathcal{X}^*$ of a Banach space $\mathcal{X}$.
Then $$\begin{aligned} \frac{1}{(\mu(\Omega)\nu(\Delta))^\frac{1}{p}} \leq \displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|g(\tau_\alpha)g(\omega_\beta)|\right) \end{aligned}$$ and $$\begin{aligned} \label{DFDS} S_\tau (f)+S_\omega (f)\geq -p \log \left(\displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|g(\tau_\alpha)g(\omega_\beta)|\right)\right)\geq 0, \quad \forall f \in \mathcal{X}_\tau^*\cap \mathcal{X}_\omega^*. \end{aligned}$$* *Proof.* Let $h \in \mathcal{X}^*$ be such that $\|h\|=1$. Then $$\begin{aligned} 1&=\left(\int\limits_{\Omega}|h(\tau_\alpha)|^p\,d\mu(\alpha)\right)\left(\int\limits_{\Delta}|h(\omega_\beta)|^p\,d\nu(\beta)\right)=\int\limits_{\Omega}\int\limits_{\Delta} |h(\tau_\alpha)h(\omega_\beta)|^p\,d\nu(\beta)\,d\mu(\alpha)\\ &\leq \int\limits_{\Omega}\int\limits_{\Delta}\left(\displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|g(\tau_\alpha)g(\omega_\beta)|\right)\right)^p\,d\nu(\beta)\,d\mu(\alpha)\\ &=\left(\displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta }|g(\tau_\alpha)g(\omega_\beta)|\right)\right)^p\mu(\Omega)\nu(\Delta)\end{aligned}$$ which gives $$\begin{aligned} \frac{1}{\mu(\Omega)\nu(\Delta)}\leq \left(\displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|g(\tau_\alpha)g(\omega_\beta)|\right)\right)^p.\end{aligned}$$ Let $f \in \mathcal{X}_\tau^*\cap \mathcal{X}_\omega^*$.
Then $$\begin{aligned} S_\tau (f)+S_\omega (f)&=-\int\limits_{\Omega}\int\limits_{\Delta}\left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p\left|\frac{f(\omega_\beta)}{\|f\|}\right|^p\left[\log \left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p+\log \left|\frac{f(\omega_\beta)}{\|f\|}\right|^p\right]\,d\nu(\beta)\,d\mu(\alpha)\\ &=-\int\limits_{\Omega}\int\limits_{\Delta}\left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p\left|\frac{f(\omega_\beta)}{\|f\|}\right|^p\log \left|\frac{f(\tau_\alpha)}{\|f\|}\frac{f(\omega_\beta)}{\|f\|}\right|^p\,d\nu(\beta)\,d\mu(\alpha)\\ &=-p\int\limits_{\Omega}\int\limits_{\Delta}\left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p\left|\frac{f(\omega_\beta)}{\|f\|}\right|^p\log \left|\frac{f(\tau_\alpha)}{\|f\|}\frac{f(\omega_\beta)}{\|f\|}\right|\,d\nu(\beta)\,d\mu(\alpha)\\ &\geq -p \int\limits_{\Omega}\int\limits_{\Delta}\left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p\left|\frac{f(\omega_\beta)}{\|f\|}\right|^p\log \left(\displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|g(\tau_\alpha)g(\omega_\beta)|\right)\right)\,d\nu(\beta)\,d\mu(\alpha)\\ &=-p\log \left(\displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|g(\tau_\alpha)g(\omega_\beta)|\right)\right)\int\limits_{\Omega}\int\limits_{\Delta}\left|\frac{f(\tau_\alpha)}{\|f\|}\right|^p\left|\frac{f(\omega_\beta)}{\|f\|}\right|^p\,d\nu(\beta)\,d\mu(\alpha)\\ &=-p\log \left(\displaystyle\sup_{g \in \mathcal{X}^*, \|g\|=1}\left(\sup_{\alpha \in \Omega, \beta \in \Delta}|g(\tau_\alpha)g(\omega_\beta)|\right)\right).\end{aligned}$$ ◻ Theorem [Theorem 17](#DUALFDS){reference-type="ref" reference="DUALFDS"} again gives the following question. **Question 18**. *Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces, $\mathcal{X}$ be a Banach space and $p>1$. 
For which pairs of continuous Parseval p-frames $\{\tau_\alpha\}_{\alpha\in \Omega}$ and $\{\omega_\beta\}_{\beta\in \Delta}$ for $\mathcal{X}^*$ do we have equality in Inequality ([\[DFDS\]](#DFDS){reference-type="ref" reference="DFDS"})?*
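To make Theorem 17 concrete in the simplest discrete setting, take $\Omega=\Delta=\{1,2\}$ with counting measure, $p=2$ and $\mathcal{X}=\mathbb{R}^2$, so that 1-bounded continuous Parseval 2-frames for $\mathcal{X}^*$ are exactly orthonormal bases. The following sketch (not part of the paper; the chosen frames and the test functional $f$ are illustrative) checks the entropic inequality numerically, approximating the double supremum by a grid search over the unit circle:

```python
# Numerical sanity check of Theorem 17 for counting measure on a two-point set,
# p = 2, X = R^2.  The frames and the functional f below are illustrative.
import numpy as np

p = 2.0
tau   = np.array([[1.0, 0.0], [0.0, 1.0]])                  # standard basis
omega = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2.0)  # basis rotated by pi/4

def entropy(f, frame):
    """Continuous p-Shannon entropy S_frame(f); counting measure turns the integral into a sum."""
    c = np.abs(frame @ f / np.linalg.norm(f)) ** p
    return -np.sum(c * np.log(c))

# sup_{||g||=1} max_{alpha,beta} |g(tau_a) g(omega_b)|, approximated on a grid
ts = np.linspace(0.0, 2.0 * np.pi, 200001)
gs = np.stack([np.cos(ts), np.sin(ts)], axis=1)
c_sup = np.max(np.abs(gs @ tau.T)[:, :, None] * np.abs(gs @ omega.T)[:, None, :])
assert abs(c_sup - (1.0 + 1.0 / np.sqrt(2.0)) / 2.0) < 1e-6

f = np.array([2.0, 1.0])      # f(tau_a) != 0 and f(omega_b) != 0, so f lies in X_tau* and X_omega*
lhs = entropy(f, tau) + entropy(f, omega)
rhs = -p * np.log((1.0 + 1.0 / np.sqrt(2.0)) / 2.0)   # -p log(sup ...) > 0
assert lhs >= rhs > 0
print(lhs, rhs)
```

For two orthonormal bases of $\mathbb{R}^2$ at angle $\varphi$ the supremum can be computed in closed form as $(1+|\cos\varphi|)/2$, which is what the grid search is compared against above.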
{ "id": "2310.01450", "title": "Continuous Deutsch Uncertainty Principle and Continuous Kraus Conjecture", "authors": "K. Mahesh Krishna", "categories": "math.FA math-ph math.MP math.OA math.QA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Let $S=K[x_1,\ldots,x_n]$ be the polynomial ring over a field $K$, and let $A$ be a finitely generated standard graded $S$-algebra. We show that if the defining ideal of $A$ has a quadratic initial ideal, then all the graded components of $A$ are componentwise linear. Applying this result to the Rees ring $\mathcal{R}(I)$ of a graded ideal $I$ gives a criterion on $I$ to have componentwise linear powers. Moreover, for any given graph $G$, a construction on $G$ is presented which produces graphs whose cover ideals $I_G$ have componentwise linear powers. This in particular implies that for any Cohen-Macaulay Cameron-Walker graph $G$ all powers of $I_G$ have linear resolutions. Moreover, forming a cone on special graphs like unmixed chordal graphs, path graphs and Cohen-Macaulay bipartite graphs produces cover ideals with componentwise linear powers. address: - Takayuki Hibi, Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Suita, Osaka 565-0871, Japan - Somayeh Moradi, Department of Mathematics, Faculty of Science, Ilam University, P.O.Box 69315-516, Ilam, Iran author: - Takayuki Hibi and Somayeh Moradi title: Ideals with componentwise linear powers --- [^1] # Introduction {#introduction .unnumbered} Componentwise linear ideals were first introduced by Herzog and the first author of this paper in [@HH1]. Since their introduction they have emerged as an intriguing class of ideals deserving special attention, due to some of their interesting characterizations in combinatorics and commutative algebra, see [@AHH1; @HV; @HRW; @R1; @Y]. One research theme in this context is to find ideals whose all powers are componentwise linear or have linear resolutions. Let $S=K[x_1,\ldots,x_n]$ be the polynomial ring over a field $K$. 
A graded ideal $I\subset S$ is called *componentwise linear* if for each integer $j$ the ideal generated by all homogeneous elements of degree $j$ in $I$ has a linear resolution. In this paper we mainly consider the question: what hypotheses on $I$ ensure that all powers of $I$ are componentwise linear? In particular when $I$ is the cover ideal of a graph $G$, what graph constructions lead to cover ideals with componentwise linear powers? To investigate this question it is natural to consider the Rees algebra $\mathcal{R}(I)$. For a graded ideal $I\subset S$ let $J\subset T=S[y_1,\ldots,y_m]$ be the defining ideal of $\mathcal{R}(I)$. The ideal $J$ is said to satisfy the $x$-condition with respect to a monomial order $<$ on $T$ if any minimal monomial generator of $\ini_<(J)$ is of the form $vw$ with $v\in S$ of degree at most one and $w\in K[y_1,\ldots,y_m]$. This property was first defined in [@HHZ]. When $J$ satisfies the $x$-condition, it is proved in [@HHZ Corollary 1.2] that if $I$ is equigenerated, then each power of $I$ has a linear resolution. It is natural to ask, when $I$ is not equigenerated, how the $x$-condition affects the powers of $I$. This is considered in [@HHM], where it is shown that if $J$ satisfies the $x$-condition with respect to some special monomial order, then for all $k$ the ideal $I^k$ has linear quotients with respect to some set of generators, which may not be minimal. But when this generating set is minimal, the componentwise linearity of $I^k$ follows, see [@HH Theorem 8.2.15]. When $I$ is a monomial ideal and $\ini_<(J)$ is generated by quadratic monomials, it is shown in [@HHM Theorem 3.6] that this set of generators is minimal and hence all powers of $I$ are componentwise linear. In the first section of this paper we extend this result to any graded ideal $I$, see Theorem [Theorem 1](#shameo){reference-type="ref" reference="shameo"} and Corollary [Corollary 2](#extend){reference-type="ref" reference="extend"}.
The question of having linear or componentwise linear powers has attracted special attention for the ideals arising from graphs and it has been studied in several papers. Some families of such ideals whose powers inherit componentwise linear or linear property are cover ideals of Cohen-Macaulay bipartite graphs [@HHbi], unmixed chordal graphs [@HHO Theorem 2.7], chordal graphs that are $(C_4, 2K_2)$-free [@N Theorem 3.7], trees [@KK Corollary 3.5], path graphs, biclique graphs and Cameron-Walker graphs whose bipartite graph is a complete bipartite graph [@HHM Corollary 4.7] and edge ideals with linear resolutions [@HHZ Theorem 3.2]. In Section 2 of this paper we consider a construction on a graph $G$ denoted by $G(H_1, \ldots, H_n)$ which attaches to each vertex $x_i$ of $G$ a graph $H_i$. As the main result we show in Theorem [Theorem 4](#construction){reference-type="ref" reference="construction"} that if for each $i$, the defining ideal $J_{H_i}$ of $\mathcal{R}(I_{H_i})$ has a quadratic Gröbner basis, then each power of the vertex cover ideal $I_{G(H_1, \ldots, H_n)}$ is componentwise linear. To this aim we first show in Theorem [Theorem 3](#graphjoin){reference-type="ref" reference="graphjoin"} that if each $J_{H_i}$ satisfies the $x$-condition, then $J_{G(H_1, \ldots, H_n)}$ satisfies the $x$-condition with respect to some monomial order. Cohen-Macaulay Cameron-Walker graphs and cone graphs are examples of such constructions. # Algebras with componentwise linear graded components Let $K$ be a field and let $A=\bigoplus_{i,j}A_{(i,j)}$ be a bigraded $K$-algebra with $A_{(0,0)}=K$. Set $A_j=\bigoplus_iA_{(i,j)}$. We assume that $A_0$ is the polynomial ring $S= K[x_1,\ldots,x_n]$ with the standard grading. Then $A=\bigoplus_jA_j$ is a graded $S$-algebra, and each $A_j$ is a graded $S$-module with grading $(A_j)_i=A_{(i,j)}$ for all $i$. We assume in addition that $A$ is a finitely generated standard graded $S$-algebra and $(A_1)_i=0$ for $i<0$.
We fix a system of homogeneous generators $f_1,\ldots,f_m$ of $A_1$ with $\deg f_i=d_i$ for $i=1,\ldots,m$. Let $T=K[x_1,\ldots,x_n,y_1,\ldots,y_m]$ be the bigraded polynomial ring over $K$ with the grading induced by $\deg x_i=(1,0)$ for $i=1,\ldots,n$ and $\deg y_j=(d_j,1)$ for $j=1,\ldots,m$. Define the $K$-algebra homomorphism $\varphi\colon T\rightarrow A$ with $\varphi(x_i)=x_i$ for $i=1,\ldots,n$ and $\varphi(y_j)=f_j$ for $j=1,\ldots, m$. Then $\varphi$ is a surjective $K$-algebra homomorphism of bigraded $K$-algebras, and hence $J=\Ker(\varphi)$ is a bigraded ideal in $T$. The defining ideal $J$ of $A$ is said to satisfy the *$x$-condition*, with respect to a monomial order $<$ on $T$ if all $u\in \mathcal{G}(\ini_<(J))$ are of the form $vw$ with $v\in S$ of degree $\leq 1$ and $w\in K[y_1,\ldots,y_m]$. Here $\mathcal{G}(I)$ denotes the minimal set of monomial generators of a monomial ideal $I$. Given a monomial order $<'$ on the polynomial ring $K[y_1,\ldots,y_m]$ and a monomial order $<_x$ on $K[x_1,\ldots,x_n]$, let $<$ be a monomial order on $T$ such that $$\begin{aligned} \label{monomialorder} \prod_{i=1}^n x_i^{a_i}\prod_{i=1}^m y_i^{b_i} &<& \prod_{i=1}^n x_i^{a'_i} \prod_{i=1}^m y_i^{b'_i},\end{aligned}$$ if $$\begin{aligned} \prod_{i=1}^m y_i^{b_i} <'\prod_{i=1}^m y_i^{b'_i}\quad \text{or}\quad \prod_{i=1}^m y_i^{b_i}&=&\prod_{i=1}^m y_i^{b'_i}\quad \text{and} \quad \prod_{i=1}^n x_i^{a_i} <_x \prod_{i=1}^n x_i^{a'_i}.\end{aligned}$$ We call $<$ the order induced by the orders $<'$ and $<_x$. A graded $S$-module $M$ has *linear quotients*, if there exists a system of homogeneous generators $f_1,\ldots,f_m$ of $M$ with the property that each of the colon ideals $(f_1,\ldots,f_{j-1}):f_j$ is generated by linear forms. With the above notation and terminology we have **Theorem 1**. *Let $J$ be the defining ideal of the $K$-algebra $A$. 
If $\ini_{<}(J)$ is generated by quadratic monomials, then for all $k\geq 1$, the $S$-module $A_k$ has linear quotients with respect to a minimal generating set and hence it is componentwise linear.* *Proof.* Let $k\geq 1$ be an integer. For any element $h=f_{i_1}\cdots f_{i_k}\in A_k$, we set $h^{*}=y_{i_1}\cdots y_{i_k}$. Let $\{h_1^{*},\ldots,h_{d}^*\}$ be the set of all standard monomials of bidegree $(*,k)$ in $T$ which are of the form $h^*$. Then by [@HHM Lemma 3.2], $h_1,\ldots,h_d$ is a system of generators of $A_k$. We show that this is indeed a minimal system of generators of $A_k$. Since $f_1,\ldots,f_m$ is a minimal generating set of $A_1$, there is nothing to prove for $k=1$. Now, let $k>1$. By contradiction suppose that there exists an integer $j$ such that $h_j=\sum_{\ell\neq j}p_{\ell}h_{\ell}$ for some homogeneous polynomials $p_{\ell}\in S$. Then $q=h_j^*-\sum_{\ell\neq j}p_{\ell}h_{\ell}^*\in J$. Without loss of generality assume that $q$ has the least initial term among all the expressions of the form $h_j^*-\sum_{\ell\neq j}p'_{\ell}h_{\ell}^*\in J$. Since $h_j^*$ is a standard monomial, we have $\ini_<(q)=\ini_<(p_i)h_i^*\in\ini_{<}(J)$ for some $i\neq j$ with $p_i\neq 1$. By assumption there exists an element $g$ in the Gröbner basis of $J$ such that $\ini_<(g)$ has degree two and $\ini_<(g)$ divides $\ini_<(p_i)h_i^*$. If $\ini_<(g)=y_ry_t$ for some $r$ and $t$, then $y_ry_t$ divides $h_i^*$, which implies that $h_i^*\in \ini_<(J)$, a contradiction. So $\ini_<(g)=x_ry_t$ for some $r$ and $t$ and then $x_r$ divides $\ini_<(p_i)$ and $y_t$ divides $h_i^*$. Let $g=x_ry_t-\sum_{\ell=1}^s c_{\ell}u_{\ell}y_{j_{\ell}}$ for some $c_{\ell}\in K$ and monomials $u_{\ell}\in S$. Then $$\label{-1} x_rf_t=\sum_{\ell=1}^s c_{\ell}u_{\ell}f_{j_{\ell}}.$$ We have $u_{\ell}\neq 1$ for all $\ell$. 
Indeed, if $u_{\lambda}=1$ for some $\lambda$, then $$\label{0} f_{j_{\lambda}}=c_{\lambda}^{-1}x_rf_t-\sum_{\ell\neq\lambda} c_{\lambda}^{-1}c_{\ell}u_{\ell}f_{j_{\ell}}.$$ This is a contradiction, since $f_1,\ldots,f_m$ is a minimal generating set of $A_1$. Thus $u_{\ell}\neq 1$ for all $\ell$. We have $$\begin{aligned} \ini_<(p_i)h_i=(\ini_<(p_i)/x_r)x_rf_t(h_i/f_t)= \\ (\ini_<(p_i)/x_r)(\sum_{\ell=1}^s c_{\ell}u_{\ell}f_{j_{\ell}}) (h_i/f_t). \end{aligned}$$ Hence $\ini_<(p_i)h_i=\sum_{\ell=1}^s c_{\ell}w_{\ell}h'_{\ell}$, for the monomials $w_{\ell}=(\ini_<(p_i)/x_r)u_{\ell}\neq 1$ and elements $h'_{\ell}=f_{j_{\ell}}(h_i/f_t)\in A_k$. Let $p'_i=p_i-\ini_<(p_i)$. Then $$\label{1} h_j=(\ini_<(p_i)+p'_i)h_i+\sum_{\ell\neq i,j}p_{\ell}h_{\ell}=\sum_{\ell=1}^s c_{\ell}w_{\ell}h'_{\ell}+p'_ih_i+\sum_{\ell\neq i,j}p_{\ell}h_{\ell}.$$ We show that $h'_{\ell}\neq h_j$ for all $1\leq\ell\leq s$. Indeed, by the equality ([\[-1\]](#-1){reference-type="ref" reference="-1"}) and that $u_{\ell}\neq 1$, we have $\deg(f_t)\geq \deg(f_{j_{\ell}})$. Hence $\deg(h'_{\ell})\leq \deg(h_i)$, while $\deg(h_j)=\deg(p_i)+\deg(h_i)>\deg(h_i)$. Hence $\deg(h_j)>\deg(h'_{\ell})$, which implies that $h'_{\ell}\neq h_j$. Also $y_{j_{\ell}}<'y_t$ implies that $(h'_{\ell})^*<'h_i^*$. By ([\[1\]](#1){reference-type="ref" reference="1"}), we have $$\label{2} q'=h_j^*-\sum_{\ell=1}^s c_{\ell}w_{\ell}(h'_{\ell})^*-p'_ih_i^*-\sum_{\ell\neq i, j}p_{\ell}h_{\ell}^*\in J.$$ Since $(h'_{\ell})^*\neq h_j^*$ for all $\ell$ and $\ini_<(q')<\ini_<(q)$, by the assumption on $q$ we conclude that there exists at least one integer $1\leq\ell\leq s$ such that the monomial $(h'_{\ell})^*$ is not standard.
By [@HHM Lemma 2.2], for any such $\ell$ there exist standard monomials $h_{t_{\ell,1}}^*,\ldots,h_{t_{\ell,b}}^*$ with $h_{t_{\ell,1}}^*<\cdots<h_{t_{\ell,b}}^*<(h'_{\ell})^*$ and homogeneous polynomials $v_{\ell,\lambda}$ such that $$\label{3} (h'_{\ell})^*-\sum_{\lambda=1}^b v_{\ell,\lambda}h_{t_{\ell,\lambda}}^*\in J.$$ Again notice that $j\notin\{t_{\ell,1},\ldots,t_{\ell,b}\}$, since $\deg(h_j)>\deg(h'_\ell)\geq\deg(h_{t_{\ell,\lambda}})$ for any $1\leq \lambda\leq b$. Set $I_1=\{\ell: 1\leq \ell\leq s, \ (h'_{\ell})^*\ \textrm{is not standard}\}$ and $I_2=[s]\setminus I_1$. By ([\[2\]](#2){reference-type="ref" reference="2"}) and ([\[3\]](#3){reference-type="ref" reference="3"}) we obtain an expression $$q''=h_j^*-\sum_{\ell\in I_1}\sum_{\lambda=1}^b c_{\ell}w_{\ell}v_{\ell,\lambda}\ h_{t_{\ell,\lambda}}^*-\sum_{\ell\in I_2}c_{\ell}w_{\ell}(h'_{\ell})^*-p'_ih_i^*-\sum_{\ell\neq i, j}p_{\ell}h_{\ell}^*\in J$$ in terms of standard monomials. Since $\ini_<(q'')<\ini_<(q)$, we get a contradiction. Thus $h_1,\ldots,h_d$ is a minimal system of generators of $A_k$. Now using [@HHM Theorem 2.3], we conclude that for each $k$, $A_k$ has linear quotients with respect to its minimal set of generators. Thus it follows from [@HH Theorem 8.2.15] that $A_k$ is componentwise linear. ◻ Applying Theorem [Theorem 1](#shameo){reference-type="ref" reference="shameo"} to the Rees algebra of a graded ideal we obtain the following corollary which generalizes [@HH Corollary 10.1.8] and [@HHM Theorem 3.6]. **Corollary 2**. *Let $I\subset S$ be a graded ideal and let $J$ be the defining ideal of the Rees algebra ${\mathcal R}(I)$.
If $\ini_{<}(J)$ is generated by quadratic monomials with respect to the monomial order defined in *([\[monomialorder\]](#monomialorder){reference-type="ref" reference="monomialorder"})*, then for any $k\geq 1$, $I^k$ has linear quotients with respect to its minimal monomial generating set and hence it is componentwise linear.* # cover ideals of graphs with componentwise linear powers Let $G$ be a finite simple graph on the vertex set $V(G) = \{x_1, \ldots, x_n \}$ and let $E(G)$ be the set of edges of $G$. A subset $C\subseteq V(G)$ is called a *vertex cover* of $G$ if it intersects every edge of $G$. Moreover, $C$ is called a *minimal vertex cover* of $G$ if it is a vertex cover and no proper subset of $C$ is a vertex cover of $G$. Let as before $S = K[x_1, \ldots, x_n]$ denote the polynomial ring in $n$ variables over a field $K$. We associate each subset $C \subset V(G)$ with the monomial $u_C = \prod_{x_i \in C} x_i$ of $S$. Let $C_1, \ldots, C_q$ denote the minimal vertex covers of $G$. The *cover ideal* of $G$ is defined as $I_G = (u_{C_1}, \ldots, u_{C_q})$. The Rees algebra of $I_G$ is the toric ring $${\mathcal R}(I_G) = K[x_1, \ldots, x_n, u_{C_1}t, \ldots, u_{C_q}t] \subset S[t].$$ Let $T = S[y_1, \ldots, y_q]$ denote the polynomial ring and define the surjective map $\pi : T \rightarrow{\mathcal R}(I_G)$ by setting $\pi(x_i) = x_i$ and $\pi(y_j) = u_{C_j}t$. The toric ideal $J_G \subset T$ of ${\mathcal R}(I_G)$ is the kernel of $\pi$. Let $<_{\rm lex}$ denote the pure lexicographic order on $S$ induced by the ordering $x_1 > \cdots > x_n$ and suppose that $u_{C_q} <_{\rm lex} \cdots <_{\rm lex} u_{C_1}$. Let $<'_{\rm lex}$ denote the pure lexicographic order on $K[y_1, \ldots, y_q]$ induced by the ordering $y_1 > \cdots > y_q$. We let $<^\sharp$ be the monomial order on $T$ which is induced by the orders $<'_{\rm lex}$ and $<_{\rm lex}$.
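As a small sanity check of this setup (a sketch, not taken from the paper; variable names are illustrative), one can carry out the whole pipeline for the path graph $P_3$ with edges $\{x_1,x_2\}$ and $\{x_2,x_3\}$: compute the minimal vertex covers by brute force, present the toric ideal $J_G$ by eliminating $t$ with SymPy, and observe that its initial ideal with respect to an order of type $<^\sharp$ is generated by $x_2y_1$, so the $x$-condition holds:

```python
# Sketch (not from the paper): the cover ideal of the path graph P_3 and the
# x-condition for its Rees/toric ideal, computed with SymPy.
from itertools import combinations
from sympy import symbols, groebner

# minimal vertex covers of P_3 (vertices 1-2-3) by brute force
vertices, edges = [1, 2, 3], [(1, 2), (2, 3)]

def is_cover(C):
    return all(a in C or b in C for a, b in edges)

covers = [set(C) for r in range(1, 4)
          for C in combinations(vertices, r) if is_cover(set(C))]
minimal = [C for C in covers if not any(D < C for D in covers)]
assert sorted(map(sorted, minimal)) == [[1, 3], [2]]  # I_G = (x1*x3, x2)

# toric ideal of R(I_G): eliminate t from y1 - x1*x3*t, y2 - x2*t, where
# y1 corresponds to the lex-larger cover monomial x1*x3 (u_{C_2} <_lex u_{C_1})
t, y1, y2, x1, x2, x3 = symbols('t y1 y2 x1 x2 x3')
G = groebner([y1 - x1*x3*t, y2 - x2*t],
             t, y1, y2, x1, x2, x3, order='lex')  # t first: elimination order
J_G = [g for g in G.exprs if not g.has(t)]        # generators of the toric ideal
assert len(J_G) == 1                              # J_G is principal here
g = J_G[0]
assert (g - (x2*y1 - x1*x3*y2)).expand() == 0     # J_G = (x2*y1 - x1*x3*y2)
# its initial monomial x2*y1 = (degree-one x-part) * (monomial in y's):
# the x-condition holds, and the Groebner basis is quadratic
```

A lex order on $(t,y_1,y_2,x_1,x_2,x_3)$ both eliminates $t$ and, on $t$-free monomials, compares the $y$-part first, mimicking $<^\sharp$; this is why the initial monomial of the single relation is $x_2y_1$ rather than $x_1x_3y_2$.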
Given finite simple graphs $H_1, \ldots, H_n$ with $V(H_i) = \{z^{(i)}_1, \ldots, z^{(i)}_{r_i} \}$, we construct the graph $G(H_1, \ldots, H_n)$ on $V(G) \cup V(H_1) \cup \cdots \cup V(H_n)$ whose set of edges is $$E(G) \cup E(H_1) \cup \cdots \cup E(H_n) \cup \left(\,\bigcup_{\substack{1 \leq i \leq n \\ 1 \leq j \leq r_i}}\{x_i, z^{(i)}_j \}\right).$$ **Theorem 3**. *Suppose that each $J_{H_i}$ satisfies the $x$-condition with respect to $z^{(i)}_1 > \cdots > z^{(i)}_{r_i}$. Then $J_{G(H_1, \ldots, H_n)}$ satisfies the $x$-condition with respect to $$z^{(1)}_1 > z^{(1)}_2 > \cdots > z^{(1)}_{r_1} > z^{(2)}_{1} > \cdots > z^{(2)}_{r_2} > \cdots > z^{(n)}_{r_n} > x_1 > \cdots > x_n.$$* *Proof.* Let $S = K[x_1, \ldots, x_n, z^{(1)}_1, \ldots, z^{(n)}_{r_n} ]$ denote the polynomial ring in $n + r_1 + \cdots + r_n$ variables over a field $K$. Let $C^{(i)}_1, \ldots, C^{(i)}_{s_i}$ denote the minimal vertex covers of $H_i$. Given a vertex cover $C$ of $G$, we introduce a subset $C' = C \cup C^{(1)} \cup \cdots \cup C^{(n)}$ of $V(G) \cup V(H_1) \cup \cdots \cup V(H_n)$ for which $$C^{(i)} = \left\{ \begin{array}{lll} V(H_i) & \text{if} & x_i \not\in C, \\ C^{(i)}_j & \text{if} & x_i \in C, \end{array} \right.$$ where $1 \leq j \leq s_i$ is arbitrary. It follows that $C'$ is a minimal vertex cover of $G(H_1, \ldots, H_n)$ and that every minimal vertex cover of $G(H_1, \ldots, H_n)$ is of the form $C'$. Let $<_{\rm lex}$ denote the pure lexicographic order on $S$ with respect to the above ordering $z^{(1)}_1 > \cdots > x_n$. Let $C_1, \ldots, C_q$ denote the minimal vertex covers of $G(H_1, \ldots, H_n)$ and suppose that $u_{C_q} <_{\rm lex} \cdots <_{\rm lex} u_{C_1}$. Let $T_i = K[z^{(i)}_1, \ldots, z^{(i)}_{r_i}, y^{(i)}_1, \ldots, y^{(i)}_{s_i}]$ denote the polynomial ring in $r_i + s_i$ variables over $K$ and $J_{H_i} \subset T_i$ the toric ideal of ${\mathcal R}(I_{H_i})$, which is the kernel of $\pi_i : T_i \rightarrow{\mathcal R}(I_{H_i})$.
Let $w = z^{(i)}_j w''$ be a monomial belonging to the minimal system of monomial generators of ${\rm in}_{<^\sharp}(J_{H_i})$, where $w''$ is a monomial in $y^{(i)}_1, \ldots, y^{(i)}_{s_i}$. Let $\pi_i(w) = z^{(i)}_j u_{C^{(i)}_{\xi_1}}t \cdots u_{C^{(i)}_{\xi_a}}t$. Let $C_{\zeta_{i'}}$ be a minimal vertex cover of $G(H_1, \ldots, H_n)$ with $C_{\zeta_{i'}} \cap V(H_i) = C^{(i)}_{\xi_{i'}}$ for each $1 \leq i' \leq a$. It follows that $z^{(i)}_j y_{\zeta_1} \cdots y_{\zeta_a}$ belongs to ${\rm in}_{<^\sharp}(J_{G(H_1, \ldots, H_n)}) \subset T = S[y_1, \ldots, y_q]$. Let $C$ be a minimal vertex cover of $G(H_1, \ldots, H_n)$ with $x_i \not\in C$ and $$C' = ((C \cup \{x_i\}) \setminus V(H_i) ) \cup C^{(i)}_j,$$ where $1 \leq j \leq s_i$ is arbitrary. Then $C'$ is a minimal vertex cover of $G(H_1, \ldots, H_n)$ with $u_{C'} <_{\rm lex} u_{C}$. Let $C = C_e$ and $C' = C_f$. Then $e < f$ and $$x_i y_e - \prod_{z^{(i)}_{j'} \not\in C^{(i)}_j} z^{(i)}_{j'} y_f \in J_{G(H_1, \ldots, H_n)}$$ whose initial monomial is $x_i y_e$. Let ${\mathcal A}$ denote the set of monomials of the form either $z^{(i)}_j y_{\zeta_1} \cdots y_{\zeta_a}$ or $x_i y_e$ constructed above. Let ${\mathcal B}$ denote the set of monomials in $y_1, \ldots, y_q$ belonging to the minimal system of monomial generators of ${\rm in}_{<^\sharp}(J_{G(H_1, \ldots, H_n)})$. Let $({\mathcal A}, {\mathcal B})$ denote the monomial ideal of $T$ generated by ${\mathcal A}\cup {\mathcal B}$. One claims that ${\rm in}_{<^\sharp}(J_{G(H_1, \ldots, H_n)}) = ({\mathcal A}, {\mathcal B})$. One must prove that, for monomials $u$ and $v$ of $T$ not belonging to $({\mathcal A}, {\mathcal B})$ with $u \neq v$, one has $\pi(u) \neq \pi(v)$. Let $u = u_x u_z u_y$ and $v = v_x v_z v_y$ with $u \neq v$, where $u_x, v_x$ are monomials in $x_1, \ldots, x_n$, where $u_z, v_z$ are monomials in $z^{(i)}_j, 1 \leq i \leq n, 1 \leq j \leq r_i$, and where $u_y, v_y$ are monomials in $y_1, \ldots, y_q$.
Suppose that $u$ and $v$ are relatively prime and that $\pi(u) = \pi(v)$. Let, say, $u_x \neq 1$ and $x_i$ divide $u_x$. Since $x_i$ does not divide $v_x$ and since $\deg u_y = \deg v_y$, it follows that there is $y_a$ which divides $u_y$ for which $x_i \not\in C_a$. Hence $x_i y_a \in {\mathcal A}$, a contradiction. Thus $u_x = v_x = 1$. Let $\pi(u) = u_z \cdot u_{C_{a_1}}t \cdots u_{C_{a_p}}t$ and $\pi(v) = v_z \cdot u_{C_{a'_1}}t \cdots u_{C_{a'_p}}t$. Let, say, $u_z \neq 1$ and $z^{(i)}_j$ divide $u_z$. In each of $\pi(u)$ and $\pi(v)$, replace each of $x_{1}, \ldots, x_n$ with $1$ and replace $z^{(i')}_j$ with $1$ for each $i' \neq i$ and for each $1 \leq j \leq r_{i'}$. Then $\pi(u)$ comes to $$u'_z \cdot u_{C^{(i)}_{c_1}}t \cdots u_{C^{(i)}_{c_{p'}}}t\,\left(\prod_{j=1}^{r_i}z_j^{(i)}\right)^{p-p'}$$ and $\pi(v)$ comes to $$v'_z \cdot u_{C^{(i)}_{c'_1}}t \cdots u_{C^{(i)}_{c'_{p'}}}t\,\left(\prod_{j=1}^{r_i}z_j^{(i)}\right)^{p-p'},$$ where each of $u'_z$ and $v'_z$ is a monomial in $z_{1}^{(i)}, \ldots, z_{r_i}^{(i)}$. One has $p' > 0$ and $$u'_z \cdot y^{(i)}_{c_1} \cdots y^{(i)}_{c_{p'}} - v'_z \cdot y^{(i)}_{c'_1} \cdots y^{(i)}_{c'_{p'}} \in J_{H_i}.$$ Since $z^{(i)}_j$ divides $u'_z$, one has $u'_z \cdot y^{(i)}_{c_1} \cdots y^{(i)}_{c_{p'}} - v'_z \cdot y^{(i)}_{c'_1} \cdots y^{(i)}_{c'_{p'}} \neq 0$ and its initial monomial belongs to ${\rm in}_{<^\sharp}(J_{H_i})$. Since $J_{H_i}$ satisfies the $x$-condition, it follows that either $u \in ({\mathcal A}, {\mathcal B})$ or $v \in ({\mathcal A}, {\mathcal B})$, a contradiction. Thus $u_z = v_z = 1$. Since $u = u_y, v = v_y, u - v \neq 0$ and $\pi(u) = \pi(v)$, one has either $u \in ({\mathcal B})$ or $v \in ({\mathcal B})$, a contradiction. ◻ **Theorem 4**. *Suppose that each $J_{H_i}$ has a quadratic Gröbner basis with respect to some monomial order.
Then each power of the vertex cover ideal $I_{G(H_1, \ldots, H_n)}$ possesses an order of linear quotients on its minimal monomial generating set, and hence it is componentwise linear.* *Proof.* We keep the notation used in the proof of Theorem [Theorem 3](#graphjoin){reference-type="ref" reference="graphjoin"}. Suppose that $J_{H_i}$ has a quadratic Gröbner basis with respect to $z^{(i)}_1 > \cdots > z^{(i)}_{r_i}$ for each $i$. By Theorem [Theorem 3](#graphjoin){reference-type="ref" reference="graphjoin"}, the ideal $J=J_{G(H_1, \ldots, H_n)}$ satisfies the $x$-condition with respect to the order $<$ induced by the orders $z^{(i)}_1 > \cdots > z^{(i)}_{r_i}$. Hence by the proof of [@HHM Theorem 3.3], for any positive integer $k$, the ideal $(I_{G(H_1, \ldots, H_n)})^k$ has a system of generators $h_1,\ldots,h_s$, each of them of the form $h_i=u_{C_{i_1}}\cdots u_{C_{i_k}}$, which possess an order of linear quotients $h_1<\cdots<h_s$ and such that $h_i^*=y_{i_1}\cdots y_{i_k}$ is a standard monomial of $T$ with respect to $<^\sharp$ and $J$. We prove that $\{h_1,\ldots,h_s\}$ is the minimal generating set of monomials of $(I_{G(H_1, \ldots, H_n)})^k$. Suppose that $h_j=wh_i$ for some integers $i$ and $j$ and a monomial $w\in S$. We should show that $w=1$ and $i=j$. Let $h_i=u_{C_{i_1}}\cdots u_{C_{i_k}}$ and $h_j=u_{C_{j_1}}\cdots u_{C_{j_k}}$. If $w=1$ and $i\neq j$, then $h_j^*-h_i^*\in J$. So either $h_i^*$ or $h_j^*$ belongs to $\ini_{<^\sharp}(J)$, a contradiction. Hence $i=j$ and we are done. Now, assume that $w\neq 1$. Let $w=w_xw_z$, where $w_x$ is a monomial in $x_1, \ldots, x_n$ and $w_z$ is a monomial in $z^{(i)}_j, 1 \leq i \leq n, 1 \leq j \leq r_i$. Assume that $w_x\neq 1$ and $x_t|w$ for some integer $t$. So we have $x_t\notin C_{i_{a}}$ for some $1\leq a\leq k$. 
The set $$C' = ((C_{i_{a}} \cup \{x_t\}) \setminus V(H_t) ) \cup C^{(t)}_\ell,$$ where $1 \leq \ell \leq s_t$ is arbitrary, is a minimal vertex cover of $G(H_1, \ldots, H_n)$ with $u_{C'} <_{\rm lex} u_{C_{i_a}}$. Let $C' = C_b$. Then $i_a< b$ and $$x_t y_{i_{a}} - \prod_{z^{(t)}_{j'} \not\in C^{(t)}_\ell} z^{(t)}_{j'} y_b \in J_{G(H_1, \ldots, H_n)}$$ whose initial monomial is $x_t y_{i_{a}}$. Since $x_t| w$ and $u_{C_{i_a}}|h_i$, we may write $wh_i=w'h'$, where $w'=(w/x_t)\prod_{z^{(t)}_{j'} \not\in C^{(t)}_\ell} z^{(t)}_{j'}$ and $h'=(h_i/u_{C_{i_a}})u_{C_b}$. Therefore $h_j=w'h'$ with $\deg(w'_x)<\deg(w_x)$. Repeating this procedure we obtain $h_j=uf$, where $u$ is a monomial with $u_x=1$ and $u_z\neq 1$ and $f=u_{C_{\ell_1}}\cdots u_{C_{\ell_k}}$ for some $\ell_1,\ldots,\ell_k$. Let $w''(h'')^*$ be the unique standard monomial in $T$ with respect to $<^\sharp$ and $J$ such that $\pi(w''(h'')^*)=f$, where $w''$ is a monomial in $S$. Then $h_j=uf=uw''h''$. Let $h''=u_{C_{s_1}}\cdots u_{C_{s_k}}$. If $x_t|w''$ for some integer $t$, then we have $x_t\notin C_{s_{a}}$ for some $a$ and then $x_t y_{s_{a}}\in \ini_{<^\sharp}(J)$. Since $x_ty_{s_{a}}$ divides $w''(h'')^*$, this implies that $w''(h'')^*\in\ini_{<^\sharp}(J)$, a contradiction. Hence $w''_x=1$. Note that $(h'')^*$ is a standard monomial as well. So the equality $h_j=uf=u_zw''_zh''$ shows that we may reduce to the case that $w_x=1$. Hence $w=w_z$ with $w_z\neq 1$ and $h_j=w_zh_i$. Let, say, $z^{(i)}_j$ divide $w_z$. In each of $h_j$, $h_i$ and $w$, replace each of $x_{1}, \ldots, x_n$ with $1$ and replace $z^{(i')}_j$ with $1$ for each $i' \neq i$ and for each $1 \leq j \leq r_{i'}$. Then $h_j$ comes to $$u_{C^{(i)}_{c_1}}t \cdots u_{C^{(i)}_{c_{p'}}}t\,\left(\prod_{j=1}^{r_i}z_j^{(i)}\right)^{p-p'}$$ and $h_i$ comes to $$u_{C^{(i)}_{c'_1}}t \cdots u_{C^{(i)}_{c'_{p'}}}t\,\left(\prod_{j=1}^{r_i}z_j^{(i)}\right)^{p-p'},$$ and $w$ comes to $v$, where $v$ is a monomial in $z_{1}^{(i)}, \ldots, z_{r_i}^{(i)}$.
One has $p' > 0$ and $$\label{quad} y^{(i)}_{c_1} \cdots y^{(i)}_{c_{p'}} - v \cdot y^{(i)}_{c'_1} \cdots y^{(i)}_{c'_{p'}} \in J_{H_i}.$$ Moreover, since $h_i^*$ and $h_j^*$ are standard monomials in $T$, $y^{(i)}_{c_1} \cdots y^{(i)}_{c_{p'}}$ and $y^{(i)}_{c'_1} \cdots y^{(i)}_{c'_{p'}}$ are standard monomials in $T_i$, i.e., they do not belong to $\ini_<(J_{H_i})$. Since $J_{H_i}$ has a quadratic Gröbner basis with respect to $z^{(i)}_1 > \cdots > z^{(i)}_{r_i}$, as was shown in the proof of [@HHM Theorem 3.6], the monomials $u_{C^{(i)}_{c_1}} \cdots u_{C^{(i)}_{c_{p'}}}$ and $u_{C^{(i)}_{c'_1}}\cdots u_{C^{(i)}_{c'_{p'}}}$ belong to the minimal set of monomial generators of $(I_{H_i})^{p'}$. This contradicts equation ([\[quad\]](#quad){reference-type="ref" reference="quad"}). ◻ The following corollaries are derived from Theorem [Theorem 4](#construction){reference-type="ref" reference="construction"}. **Corollary 5**. *Let $G'=G(H_1,\ldots,H_n)$, where $G$ is any graph on $n$ vertices and each $H_i$ belongs to one of the following families of graphs:* - *Unmixed chordal graphs.* - *Cohen-Macaulay bipartite graphs.* - *Path graphs.* - *Cameron-Walker graphs whose bipartite graphs are complete bipartite graphs.* *Then any power of the vertex cover ideal $I_{G'}$ possesses an order of linear quotients on its minimal monomial generating set and hence is componentwise linear.* *Proof.* For any graph $H_i$ belonging to one of the families (a) to (d), the defining ideal of ${\mathcal R}(I_{H_i})$ has a quadratic Gröbner basis, see [@HHO Theorem 2.7], [@HHbi] and [@HHM Corollary 4.7]. Hence the desired result follows from Theorem [Theorem 4](#construction){reference-type="ref" reference="construction"}. ◻ Let $G$ be a graph with $V(G)=\{x_1,\ldots,x_n\}$. A *cone* on $G$ is a graph $G'$ with $V(G')=V(G)\cup\{y\}$ and $E(G')=E(G)\cup\{\{x_i,y\}:\ 1\leq i\leq n\}$, where $y$ is a new vertex.
The vertex $y$ is called a *universal vertex* of $G'$ and $G'$ is called a *cone graph*. A *friendship graph* $F_n$ is a planar graph with $2n + 1$ vertices and $3n$ edges. It can be constructed by joining $n$ copies of the cycle graph $C_3$ with a common vertex. A *fan graph* $F_{1,n}$ is a cone on a path graph with $n$ vertices. **Corollary 6**. *Let $G$ be one of the following graphs* 1. *A Cameron-Walker graph with the bipartite partition $(X,Y)$ for which there exists one leaf attached to each vertex in $X$ and at least one pendant triangle attached to each vertex of $Y$.* 2. *A cone graph with a universal vertex $x$ such that $G\setminus \{x\}$ is one of the graphs (a) to (d) in Corollary [Corollary 5](#cor1){reference-type="ref" reference="cor1"}. Examples of such cone graphs are the fan graphs $F_{1,n}$, friendship graphs $F_n$ and star graphs.* *Then all powers of the vertex cover ideal $I_{G}$ have linear quotients and hence are componentwise linear.* The following result is an immediate consequence of Corollary [Corollary 6](#cor2){reference-type="ref" reference="cor2"} and the characterization of Cohen-Macaulay Cameron-Walker graphs given in [@HHKO Theorem 1.3]. **Corollary 7**. *Let $G$ be a Cohen-Macaulay Cameron-Walker graph. Then all powers of the vertex cover ideal of $G$ have linear resolutions.* A. Aramova, J. Herzog, and T. Hibi, Ideals with stable Betti numbers, Adv. Math. **152** (2000), 72--77. N. Erey, Powers of ideals associated to $(C_4, 2K_2)$-free graphs, J. Pure Appl. Algebra 223 (2019), no. 7, 3071-3080. H. T. Hà and A. Van Tuyl, Powers of componentwise linear ideals: the Herzog-Hibi-Ohsugi conjecture and related problems, Research in the Mathematical Sciences 9 (2022), 22. J. Herzog and T. Hibi, Componentwise linear ideals, Nagoya Math. J. 153 (1999), 141-153. J. Herzog and T. Hibi, Distributive lattices, bipartite graphs and Alexander duality, J. Algebraic Combin. 22 (2005), 289-302. J. Herzog and T.
Hibi, Monomial ideals, Springer London, 2011. J. Herzog, T. Hibi and S. Moradi, Componentwise linear powers and the $x$-condition, Math. Scand. 128 (2022), 401-433. J. Herzog, T. Hibi and H. Ohsugi, Powers of componentwise linear ideals, Combinatorial aspects of commutative algebra and algebraic geometry, 49-60, Abel Symp., 6, Springer, Berlin, 2011. J. Herzog, T. Hibi and X. Zheng, Monomial ideals whose powers have a linear resolution, Math. Scand. (2004), 23-32. J. Herzog, V. Reiner and V. Welker, Componentwise linear ideals and Golod rings, Michigan Math. J. **46** (1999), 211--223. T. Hibi, A. Higashitani, K. Kimura and A. B. O'Keefe, Algebraic study on Cameron-Walker graphs, J. Algebra 422 (2015), 257-269. A. Kumar and R. Kumar, On the powers of vertex cover ideals, J. Pure. Appl. Algebra 226 (2022), no. 1, 106808. T. Römer, On minimal graded free resolutions, Dissertation, Universität Duisburg-Essen, 2001. K. Yanagawa, Alexander duality for Stanley-Reisner rings and squarefree ${\mathbb N}^n$-graded modules, J. Algebra **225** (2000), 630--645. [^1]: Takayuki Hibi is partially supported by JSPS KAKENHI 19H00637. Somayeh Moradi is supported by the Alexander von Humboldt Foundation.
{ "id": "2309.01677", "title": "Ideals with componentwise linear powers", "authors": "Takayuki Hibi and Somayeh Moradi", "categories": "math.AC math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Lie's third theorem does not hold for Lie groupoids and Lie algebroids. In this article, we show that Lie's third theorem is valid within a specific class of diffeological groupoids that we call 'singular Lie groupoids.' To achieve this, we introduce a subcategory of diffeological spaces which we call 'quasi-etale.' Singular Lie groupoids are precisely the groupoid objects within this category, where the unit space is a manifold. Our approach involves the construction of a functor that maps singular Lie groupoids to Lie algebroids, extending the classical functor from Lie groupoids to Lie algebroids. We prove that the Ševera-Weinstein groupoid of an algebroid is an example of a singular Lie groupoid, thereby establishing Lie's third theorem in this context. address: Washington University in St Louis, Department of Mathematics author: - Joel Villatoro bibliography: - references.bib date: Sep 2023 title: On the integrability of Lie algebroids by diffeological spaces --- # Introduction Lie theory provides us with a differentiation procedure that takes global symmetries of geometric objects (Lie groupoids) as input and outputs infinitesimal symmetry algebras (Lie algebroids). This differentiation procedure takes the form of a functor: $$\mathbf{Lie}\colon \mathbf{LieGrpd}\to \mathbf{LieAlg}$$ Given infinitesimal data, such as a morphism or object in $\mathbf{LieAlg}$, the "integration problem" refers to constructing corresponding global data in $\mathbf{LieGrpd}$. In the classical case of finite dimensional Lie groups and Lie algebras, Lie's second and third theorems are concerned with the existence of integrations of morphisms and objects, respectively. If one replaces the classical condition of "simply connected" with "source simply-connected", Lie's second theorem holds in the more general setting of Lie groupoids. On the other hand, Lie's third theorem is known to be false.
In other words, there exist Lie algebroids which are not integrated by any Lie groupoid. The first example of a non-integrable algebroid is due to Rui Almeida and Pierre Molino [@almeida_suites_1985]. To any algebroid $A$ it is possible to functorially associate a kind of fundamental groupoid $\Pi_1(A)$ which is commonly called the "Weinstein groupoid" or occasionally the "Ševera-Weinstein" groupoid. It turns out that a Lie algebroid admits an integration if and only if $\Pi_1(A)$ is a smooth manifold. Furthermore, if $\Pi_1(A)$ is smooth, it is the "universal" source simply-connected integration. The earliest version of this fundamental groupoid construction is by Cattaneo and Felder [@cattaneo_poisson_2001], where they describe how to construct a symplectic groupoid integrating a Poisson manifold via Hamiltonian reduction on the space of cotangent paths. In 2000, a few months after the article of Cattaneo and Felder appeared on the arXiv, Pavol Ševera gave a talk where he proposed a version of this construction in which the Hamiltonian action is reinterpreted in terms of homotopies in the category of algebroids [@severa_title_2005]. An advantage of this perspective is that it makes clear that Cattaneo and Felder's approach can be easily generalized to work for any Lie algebroid. Marius Crainic and Rui Loja Fernandes [@crainic_integrability_2003] were able to study the geometry of the space of algebroid paths in detail and consequently find a precise criterion for the smoothness of the Ševera-Weinstein groupoid. Crainic and Fernandes credit Alan Weinstein for suggesting a path-based approach to them, and so they introduced the term "Weinstein groupoid". Even when the Ševera-Weinstein groupoid is not smooth, it is clear that it is far from being an arbitrary topological groupoid.
For example, in the work of Hsian-Hua Tseng and Chenchang Zhu [@tseng_integrating_2006] it is observed that it is the topological coarse moduli space of a groupoid object in étale geometric stacks over manifolds. In other words, it is the orbit space of an étale Lie groupoid equipped with a "stacky" product. ## Main question Our aim in this article will be to provide a foundation for a version of Lie theory that includes the kinds of groupoids associated to non-integrable algebroids. We intend to do this in a way that will permit us to extend notions of Morita equivalence, symplectic groupoids and differentiation to this larger context while preserving Lie's second theorem. Therefore, we aim to answer the following question: **Question 1**. Does there exist a category $\mathbf{C}$ with the following properties: - The category of smooth manifolds is a full subcategory of $\mathbf{C}$. - If $\mathbf{SingLieGrpd}$ is the category of groupoid objects $\mathcal{G}\rightrightarrows M$ in $\mathbf{C}$ where $M$ is a smooth manifold, there exists a functor: $$\widehat{\mathbf{Lie}}\colon \mathbf{SingLieGrpd}\to \mathbf{LieAlg}$$ - $\mathbf{LieGrpd}$ is a full subcategory of $\mathbf{SingLieGrpd}$ and we have that: $$\widehat{\mathbf{Lie}}|_{\mathbf{LieGrpd}} = \mathbf{Lie}$$ - There is a notion of "simply connected object" in $\mathbf{C}$ which corresponds to being simply connected on the subcategory of manifolds. And, using this notion of simply connectedness, Lie's second and third theorems hold for $\widehat{\mathbf{Lie}}$. Answering this question is not so straightforward. If we take $\mathbf{C}$ to be the (2,1)-category of étale geometric stacks we get all but the last bullet point [@tseng_integrating_2005][@zhu_lie_2008]. Another natural choice could be to take $\mathbf{C}$ to be the category of diffeological spaces. However, this category is so large that there appears to be little hope of defining a Lie functor in this context.
Another complication with answering this question is that the notion of groupoid object referenced in the statement of the question is somewhat subtle. For example, a Lie groupoid is not just a groupoid object in manifolds. It is a groupoid object in manifolds where the source (or equivalently target) map is a submersion. Therefore, when proposing such a category $\mathbf{C}$ we also need to propose a notion of "submersion" to go along with it. The problem, as stated above, is a sort of "Goldilocks problem." If we choose the category $\mathbf{C}$ to be too large, we have little hope of defining the Lie functor. If we choose $\mathbf{C}$ to be too small, it may be the case that not every Lie algebroid can be integrated by a groupoid object in $\mathbf{C}$. ## Our solution In this article, we will give a partial answer to this question. We introduce the category of "quasi-étale diffeological spaces" ($\mathbf{QUED}$ for short) and submit that it is a solution to the problem stated above. The category $\mathbf{QUED}$ is, in many ways, very natural and generalizes existing notions such as orbifolds, quasifolds and similar types of diffeological structures. We will show that the category $\mathbf{QUED}$ indeed satisfies the first three bullet points of our question above. In order to do this we will also need to explain what a "submersion" is in this context, and we will construct the Lie functor $\widehat{\mathbf{Lie}}\colon \mathbf{SingLieGrpd}\to \mathbf{LieAlg}$. The main results of our paper can be summarized in the following two theorems: **Theorem 1**. *Let $\mathbf{SingLieGrpd}$ be the category of $\mathbf{QUED}$-groupoids where the space of objects is a smooth manifold. There exists a functor: $$\widehat{\mathbf{Lie}}\colon \mathbf{SingLieGrpd}\to \mathbf{LieAlg}$$ with the property that $\widehat{\mathbf{Lie}}|_{\mathbf{LieGrpd}} = \mathbf{Lie}$.* **Theorem 2**.
*Given a Lie algebroid $A \to M$, let $\Pi_1(A) \rightrightarrows M$ be the Ševera-Weinstein groupoid of $A$ and consider it as a diffeological groupoid. Then $\Pi_1(A)$ is an element of $\mathbf{SingLieGrpd}$ and $\widehat{\mathbf{Lie}}(\Pi_1(A))$ is canonically isomorphic to $A$.* What does not appear in this article is a proof of Lie's second theorem and the notion of "simply connected" for $\mathbf{QUED}$. There is a diffeological version of simply-connectedness due to Patrick Iglesias-Zemmour [@iglesias-zemmour_fibrations_1985]. Indeed, it is not too difficult to show that the source fibers of the Ševera-Weinstein groupoid are diffeologically simply-connected. However, a full proof of Lie's second theorem will require additional technical development which we intend to address in a later article. ## Methods and outline The main technical tools used in this article are the theory of diffeological spaces and structures as well as the theory of local groupoids. The bulk of the article is dedicated to developing the technology needed to show that the Lie functor is well-defined. However, the basic idea behind our definition of the Lie functor for $\mathbf{SingLieGrpd}$ is not fundamentally complex: If $\mathcal{G}\rightrightarrows M$ is an element of $\mathbf{SingLieGrpd}$ and $\pi \colon \widetilde{\mathcal{G}}\to \mathcal{G}$ is a "local covering" from a *local Lie* groupoid $\widetilde{\mathcal{G}}$ to $\mathcal{G}$, then we define the Lie algebroid of $\mathcal{G}$ to be the Lie algebroid of $\widetilde{\mathcal{G}}$. In order for this definition to make sense, we will have to prove that every element of $\mathbf{SingLieGrpd}$ is locally the quotient of a local Lie groupoid, and this is the main technical difficulty. In Section [2](#section:diffeology){reference-type="ref" reference="section:diffeology"}, we will review the basics of diffeology and diffeological spaces.
We primarily do this to establish some notational conventions and to keep this article mostly self-contained. All of the material in this section is fairly standard in the field of diffeology and can be found in, for instance, the book of Patrick Iglesias-Zemmour [@iglesias-zemmour_diffeology_2013]. In Section [3](#section:quasi-étale.diffeological.spaces){reference-type="ref" reference="section:quasi-étale.diffeological.spaces"}, we will introduce the notion of a quasi-étale diffeological space. Briefly, a quasi-étale diffeological space is a diffeological space which is locally the quotient of a smooth manifold by a "nice" equivalence relation. Nice in this context means that the equivalence classes are totally disconnected (e.g. think $\mathbb{Q}\subseteq\mathbb{R}$) and the equivalence relation is "rigid" in the sense that any endomorphism of the equivalence relation must be an isomorphism. After defining quasi-étale spaces we will explore a few properties of quasi-étale maps and show how it is possible to study maps between quasi-étale spaces in terms of maps between smooth manifolds by "representing" them on local charts. The remaining part of the section is dedicated to defining what a submersion in $\mathbf{QUED}$ is and proving some key technical properties that we will need later. In Section [4](#section:quasi.étale.groupoids){reference-type="ref" reference="section:quasi.étale.groupoids"} we will define what a groupoid object in $\mathbf{QUED}$ is, as well as state the definition of a *local* groupoid, which is key to our differentiation procedure. Section [5](#section:differentiation){reference-type="ref" reference="section:differentiation"} defines the Lie functor and states the main technical theorems that we need in order to prove that it is well-defined. It concludes with a classification of source-connected singular Lie groupoids with integrable algebroids.
Section [6](#section:proof.of.main.theorem){reference-type="ref" reference="section:proof.of.main.theorem"} is dedicated to proving the main results that we rely on in Section [5](#section:differentiation){reference-type="ref" reference="section:differentiation"}. It is the most technical part of the article. The main tools used here are a combination of diffeology and the theory of local Lie groupoids. The main point of this section is to show that every element of $\mathbf{SingLieGrpd}$ is locally the quotient of a local Lie groupoid in a "unique" way. With these proofs we finish the proof of Theorem [Theorem 1](#theorem:main.lie.functor){reference-type="ref" reference="theorem:main.lie.functor"}. Finally, in Section [7](#section:Weinstein.groupoid){reference-type="ref" reference="section:Weinstein.groupoid"} we conclude by proving that the classical Ševera-Weinstein groupoid is an element of $\mathbf{SingLieGrpd}$ and we explain how to apply the Lie functor to it. With this calculation we complete the proof of Theorem [Theorem 2](#theorem:main.wein.groupoid){reference-type="ref" reference="theorem:main.wein.groupoid"}. ## Related work The most closely related work to the subject of this article is perhaps that of Chenchang Zhu et al. ([@tseng_integrating_2006][@zhu_lie_2008][@bursztyn_principal_2020]). Tseng and Zhu [@tseng_integrating_2006] describe the geometric stack whose orbit space is the Ševera-Weinstein groupoid. They also describe a differentiation procedure for certain groupoid objects in geometric stacks. However, there are two main disadvantages to this approach: One is that it necessitates going from the category of Lie groupoids to a (2,1)-category of stacky groupoids. The inherently higher categorical nature comes with a variety of coherence conditions and other phenomena that can make working with such objects a bit cumbersome. The second problem is with Lie's second theorem.
Although stacky groupoids can be used to repair the loss of Lie's third theorem, this comes at the cost of further complicating Lie's second theorem. Unlike for ordinary Lie groupoids, for stacky groupoids it is not the case that source simply connected groupoids operate as "universal" integrations. The version of Lie's second theorem that is true for stacky groupoids can be seen in [@zhu_lie_2008]. The stacky-groupoid version replaces the "source simply connected" condition with the much stronger condition of "source 2-connected". Indeed, the simply-connected version of Lie's second theorem for stacky groupoids appears to be false. Several other authors have also investigated the relationship between diffeology and Lie theory. For example, Gilbert Hector and Enrique Macías-Virgós [@hector_diffeological_2002] wrote an article discussing diffeological groups with a particular emphasis on diffeomorphism groups. They use a diffeological version of the tangent functor to define a kind of Lie algebra that one can associate to a diffeological group. However, it is not clear from their work whether the resulting structure is indeed a Lie algebra in the classical sense. They do, however, show that this procedure recovers the expected Lie algebra in the case of diffeomorphism groups. Also on the topic of diffeological groups, there is an article by Jean-Marie Souriau [@Souriau_groupes_2006] where he generalizes a variety of properties of Lie groups to the diffeological setting. Some of them relate to the integration and differentiation problem. For example, Souriau observes that under some separability assumptions, it is possible to construct something akin to the exponential map. There is work by Marco Zambon and Iakovos Androulidakis [@androulidakis_integration_2020] on the topic of integrating singular subalgebroids by diffeological groupoids.
In their work they develop a differentiation procedure for diffeological groupoids that arise as a kind of "singular subgroupoid" of an ambient Lie groupoid. Their integration and differentiation procedure is defined relative to an ambient (integrable) structure algebroid and they do not treat the case of non-integrable algebroids. There is also work by Christian Blohmann [@blohmann_elastic_2023] towards developing a kind of Lie theory/Cartan calculus for elastic diffeological spaces. The motivation behind this is to study the infinite dimensional symmetries that occur in field theory. However, we remark that the Ševera-Weinstein groupoid of a non-integrable algebroid does not appear to be elastic in general, so it is not clear if such a theory would be suitable for the study of non-integrable algebroids. # Acknowledgements {#acknowledgements .unnumbered} The author would like to thank Marco Zambon for the numerous discussions on this topic over the years, as well as his suggestions for this manuscript. The author has also greatly benefited from discussions with Christian Blohmann during his stay at the Max Planck Institute. We also acknowledge Cattaneo, Felder, Rui Loja Fernandes, Eckhard Meinrenken, David Miyamoto, and Jordan Watts for their corrections and/or comments on an earlier draft of this article. This article is based on work that was supported by the following sources: Fonds Wetenschappelijk Onderzoek (FWO Project G083118N); the Max Planck Institute for Mathematics in Bonn; the National Science Foundation (Award Number 2137999). # Notation {#notation .unnumbered} The category of sets will be denoted $\mathbf{Set}$ and the category of finite-dimensional, Hausdorff, second countable, smooth manifolds will be written $\mathbf{Man}$. # Diffeology {#section:diffeology} Diffeological spaces are a generalization of the notion of a smooth manifold. They were independently introduced by Souriau [@souriau_groupes_1984] and Chen [@chen_iterated_1977].
Fundamentally, the idea is to endow a space with a 'manifold-like' structure by specifying which maps into the space are smooth. The standard textbook for the theory is by Iglesias-Zemmour [@iglesias-zemmour_diffeology_2013], who is also responsible for fleshing out a considerable amount of the standard diffeological techniques. ## Diffeological structures The core observation behind diffeology is that the smooth structure on a manifold $M$ is completely determined by the set of smooth maps into $M$ whose domains are Euclidean sets. The definition of diffeology is obtained by axiomatizing some of the basic properties of this distinguished collection of maps. **Definition 3**. An *$n$-dimensional Euclidean set* is an open subset $U \subseteq\mathbb{R}^n$. Let $\mathbf{Eucl}$ denote the category whose objects are Euclidean sets and whose morphisms are smooth functions between Euclidean sets. **Definition 4**. A *diffeological structure* on a set $X$ is a function $\mathcal{D}_X$ which assigns to each object $U \in \mathbf{Eucl}$ a distinguished collection $\mathcal{D}_X(U) \subseteq\mathbf{Set}(U,X)$. An element of $\mathcal{D}_X(U)$ for some $U$ is called a *plot*. Plots are required to satisfy the following axioms: 1. If $\phi \colon U_\phi \to X$ is constant then $\phi$ is a plot. 2. If $\phi \colon U_\phi \to X$ is a plot and $\psi \colon V \to U_\phi$ is a morphism in $\mathbf{Eucl}$ then $\phi \circ \psi$ is a plot. 3. If $\phi \colon U_\phi \to X$ is a function and $\{ U_i \}_{i \in I}$ is an open cover of $U_\phi$ such that $\phi|_{U_i}$ is a plot for all $i \in I$ then $\phi$ is a plot. A *diffeological space* is a pair $(X, \mathcal{D}_X)$ where $X$ is a set and $\mathcal{D}_X$ is a diffeological structure on $X$. In a mild abuse of notation we will typically refer to diffeological spaces by only their underlying set. However, it is important to keep in mind that a given set may admit many diffeological structures.
Finally, given a plot $\phi$ on a diffeological space $X$ we will typically denote the domain of $\phi$ using a subscript $U_\phi$. Diffeological spaces are quite general and include objects which range from the ordinary to the quite pathological. Let us go over a few basic examples. **Example 5**. If $M$ is a smooth manifold and $U$ is a Euclidean set, then we declare $\phi \colon U_\phi \to M$ to be a plot if and only if $\phi$ is smooth as a map of manifolds. **Example 6**. Suppose $X$ is a topological space. We can make $X$ into a diffeological space by declaring any continuous function $\phi \colon U \to X$ to be a plot. **Example 7**. Suppose $X$ is a set. The *discrete diffeology* on $X$ is the unique diffeology on $X$ for which every plot is locally constant. **Example 8**. Suppose $X$ is a set. The *coarse diffeology* on $X$ is the diffeology on $X$ which declares every set theoretic function $\phi \colon U \to X$ from a Euclidean set to be a plot. **Example 9**. Suppose $X$ is a diffeological space and $\iota \colon Y \to X$ is the inclusion of an arbitrary subset. The *subset diffeology* on $Y$ is the diffeology on $Y$ which says that $\phi \colon U \to Y$ is a plot if and only if $\iota \circ \phi$ is a plot. One instance of the subset diffeology will be of particular relevance to this article. **Definition 10**. Suppose $X$ is a diffeological space and $Y \subseteq X$. We say that $Y$ is *totally disconnected* if the subset diffeology on $Y$ is the discrete diffeology. In other words, a map $\phi \colon U_\phi \to Y$ is a plot if and only if it is locally constant. **Example 11**. The set of rational numbers $\mathbb{Q}\subseteq\mathbb{R}$ is a totally disconnected subset of $\mathbb{R}$. Diffeological spaces come with a natural topological structure. However, the relationship between topological structures and diffeological structures is much weaker than the usual one between smooth structures and topology. **Definition 12**.
Suppose $X$ is a diffeological space. A subset $V \subseteq X$ is said to be *open* if for all plots $\phi \colon U_\phi \to X$ we have that the inverse image $\phi^{-1}(V) \subseteq U_\phi$ is open. This topology on $X$ is called the *D-topology*. From now on, whenever we refer to something being "local" or "open" in a diffeological space, we mean relative to the D-topology. ## Smooth maps {#subsection:smooth.maps} Morphisms of diffeological spaces are defined in a rather straightforward way. Basically, a function is smooth if it pushes forward plots to plots. More formally: **Definition 13**. Let $X$ and $Y$ be diffeological spaces. A function $f \colon X \to Y$ is a *smooth map* if for all plots $\phi$ on $X$ we have that $f \circ \phi$ is a plot on $Y$. We say that $f$ is a *diffeomorphism* if it is a bijection and the inverse function $f^{-1}$ is smooth. The category of diffeological spaces with smooth maps will be denoted $\mathbf{Diffgl}$. This notion of a smooth map extends the usual one for smooth maps between manifolds. Let us consider some basic examples. **Example 14**. Suppose $M$ and $N$ are smooth manifolds. A function $f \colon M \to N$ is smooth as a map of diffeological spaces if and only if it is smooth as a map of manifolds. This means that the category of smooth manifolds embeds fully faithfully into the category of diffeological spaces. **Example 15**. Suppose $X$ is a diffeological space and $Y$ is a subset of $X$ equipped with the subset diffeology. The inclusion map $\iota \colon Y \to X$ is smooth. There are a few different notions of "quotient" in the context of diffeology. The most fundamental one is called a subduction. One can think of a subduction as the diffeological version of a topological quotient. **Definition 16**. Suppose $X$ and $Y$ are diffeological spaces.
A smooth function $f \colon X \to Y$ is called a *subduction* if for all plots $\phi \colon U_\phi \to Y$ and points $u \in U_\phi$ there exists an open neighborhood $V \subseteq U_\phi$ of $u$ together with a plot $\widetilde\phi \colon V \to X$ such that $\phi |_{V} = f \circ \widetilde\phi$. $$\begin{tikzcd} & X \arrow[d, "f"] \\ (U_\phi, u) \arrow[r, "\phi"] & Y \end{tikzcd} \quad \Rightarrow \quad \begin{tikzcd} \exists (V, u) \arrow[d, "\iota", hook, dashed] \arrow[r, " \exists\widetilde\phi", dashed] & X \arrow[d, "f"] \\ (U_\phi, u) \arrow[r, "\phi"] & Y \end{tikzcd}$$ Let us state a few examples. **Example 17**. If $f \colon M \to N$ is a smooth map of manifolds, then $f$ is a subduction if and only if for all $p \in N$ there exists an element $q \in f^{-1}(p)$ such that the differential $T_q f \colon T_q M \to T_p N$ is a surjection. **Example 18**. Suppose $X$ is a diffeological space and let $\{ \phi_i \colon U_i \to X \}_{i\in I}$ be the set of all plots on $X$. Consider the function: $$\bigsqcup_{i\in I} \phi_i \colon \bigsqcup_{i\in I} U_i \to X$$ This function is a subduction since every plot on $X$ factors through $\bigsqcup\limits_{i\in I} \phi_i$ in a canonical way. In some cases, one requires maps with a greater degree of regularity than a subduction. For example, one might wish to generalize the notion of a submersion of manifolds to the diffeological setting. We saw above that a subduction between manifolds is not quite the same thing as a submersion. This leads us to the notion of a local subduction. **Definition 19**. Suppose $X$ and $Y$ are diffeological spaces. We say that a smooth function $f \colon X \to Y$ is a *local subduction* if for all plots $\phi \colon U_\phi \to Y$ and points $x \in X$, $u \in U_\phi$ such that $\phi(u) = f(x)$, there exists an open neighborhood $V \subseteq U_\phi$ of $u$ and a plot $\widetilde\phi \colon V \to X$ such that $\widetilde\phi(u) = x$ and $\phi|_V = f \circ \widetilde\phi$.
$$\begin{tikzcd} & (X,x) \arrow[d, "f"] \\ (U_\phi, u) \arrow[r, "\phi"] & (Y, \phi(u) ) \end{tikzcd} \quad \Rightarrow \quad \begin{tikzcd} \exists (V, u) \arrow[d, "\iota", hook, dashed] \arrow[r, " \exists\widetilde\phi", dashed] & (X,x) \arrow[d, "f"] \\ (U_\phi, u) \arrow[r, "\phi"] & (Y, \phi(u) ) \end{tikzcd}$$ The definitions of local subduction and subduction look similar. The main distinction is that, for subductions, the lift of $\phi$ is not required to factor through any *specific* point in $X$. On the other hand, for a local subduction, one must be able to find a lift through every point in $X$ that is in the preimage of $\phi(u)$. This has two main consequences: One is that being a local subduction is a property that is local in the *domain* of $f$. The other consequence is that, unlike subductions, local subductions may not necessarily be surjective. **Example 20**. If $f \colon M \to N$ is a morphism of smooth manifolds then $f$ is a local subduction if and only if $f$ is a (not necessarily surjective) submersion. **Example 21**. Consider $f \colon \mathbb{R}\sqcup \mathbb{R}\to \mathbb{R}$ where $f(x) = 0$ on the first connected component and $f(x) = x$ on the second connected component. Then $f$ is a subduction but $f$ is not a local subduction. Our next example/lemma is one of particular interest to us. Arguments similar to the one below will be used frequently in this article. **Lemma 22**. *Suppose $\mathcal{G}\rightrightarrows M$ is a Lie groupoid and let $X = M / \mathcal{G}$ be the set of orbits. Then the quotient map $\pi \colon M \to X$ is a local subduction.* *Proof.* Suppose $\phi \colon U_\phi \to X$ is a plot and let $u \in U_\phi$ and $p \in M$ be fixed such that $\phi(u) = \pi(p)$. Since $\pi \colon M \to X$ is a subduction, we know there exists an open neighborhood $V \subseteq U_\phi$ of $u$ and a smooth function $\widetilde\phi \colon V \to M$ with the property that $\pi \circ \widetilde\phi = \phi|_V$. Let $q := \widetilde\phi(u) \in M$.
Since $p$ and $q$ lie in the same $\pi$ fiber, there exists a groupoid element $g \in \mathcal{G}$ with the property that $s(g) = q$ and $t(g) = p$. Let $\sigma \colon \mathcal{O}\to \mathcal{G}$ be a local section of the source map defined on a neighborhood $\mathcal{O}\subseteq M$ of $q$ such that $\sigma(q) = g$. Now let: $$\overline{\phi}(v) := (t \circ \sigma \circ \widetilde\phi)(v)$$ where $t$ is the target map for the groupoid. If necessary, we shrink the domain $V$ of $\overline{\phi}$ to a smaller open neighborhood of $u$ to ensure that $\overline{\phi}$ is well-defined. Then a direct computation shows that $\pi \circ \overline{\phi} = \phi$ and also $\overline{\phi}(u) = p$. We conclude that $\pi$ is a local subduction. ◻ ## Standard constructions {#subsection:standard.constructions} The flexibility of diffeological spaces means that we have many tools for constructing new diffeological spaces out of old ones. Let us briefly review a few of the standard diffeological constructions that we will require. **Definition 23**. Suppose $X$ is a set and $\{ \mathbf{D}_i \}_{i \in I}$ is an arbitrary collection of diffeological structures on $X$. The *intersection diffeology* relative to this collection is the diffeology on $X$ which declares a function $\phi \colon U \to X$ to be a plot if and only if $\phi$ is a plot in $\mathbf{D}_i$ for all $i \in I$. Suppose $X$ is a set and we have an arbitrary collection $C = \{ \phi \colon U_\phi \to X \}$ of set theoretic functions where the domains of the elements of $C$ are Euclidean sets. The *diffeology generated by $C$* is the diffeology obtained by taking the intersection of all diffeologies for which every element of $C$ is a plot. Since the constant plots are contained in every diffeology, the intersection of an arbitrary collection of diffeologies is never empty, and it is straightforward to check that it is again a diffeology; in particular, the intersection diffeology and the generated diffeology always exist.
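The intersection construction in Definition 23 can be illustrated with a small runnable toy. The following Python sketch is our own illustration and not part of the paper: it discretizes the situation by representing a plot from a connected Euclidean set as a tuple of sampled values in the two-element set $\{0,1\}$, and a diffeology as a predicate on such sampled plots; the names `coarse`, `discrete`, and `intersection` are ours.

```python
# Toy, discretized stand-in for Definition 23 (an illustration only,
# not the paper's formalism).  A "plot" is a tuple of sampled values of
# a map from a connected Euclidean set into X = {0, 1}; a "diffeology"
# is a predicate deciding which sampled plots are admitted.

def coarse(plot):
    """Coarse diffeology: every set-theoretic map is a plot."""
    return True

def discrete(plot):
    """Discrete diffeology: only locally constant plots; on a
    connected domain this means constant."""
    return len(set(plot)) <= 1

def intersection(diffeologies):
    """Intersection diffeology: a map is a plot iff it is a plot in
    every member of the collection."""
    return lambda plot: all(D(plot) for D in diffeologies)

D = intersection([coarse, discrete])

constant_plot = (0, 0, 0, 0)   # a sampled constant map
jumping_plot  = (0, 0, 1, 1)   # a sampled non-constant map

assert D(constant_plot)        # constant plots lie in every diffeology
assert coarse(jumping_plot)    # a plot for the coarse diffeology...
assert not D(jumping_plot)     # ...but not for the intersection
```

As the assertions show, the constant plots survive any intersection, which is the discretized shadow of the remark that the intersection of an arbitrary collection of diffeologies always contains the constant plots and so is never empty.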
Many diffeologies can be constructed by taking intersections. Among these, the quotient diffeology is of particular importance. **Definition 24**. Suppose $X$ is a diffeological space and $\sim$ is an equivalence relation on $X$. Let $X/ \sim$ be the set of equivalence classes and let $\pi \colon X \to X/\sim$ be the canonical surjective function. The *quotient diffeology* on $X/\sim$ is the intersection of all diffeologies on $X/\sim$ which make $\pi$ a subduction. Surjective subductions and quotient diffeologies are in one-to-one correspondence. By this we mean that a surjective function $f \colon X \to Y$ of diffeological spaces is a subduction if and only if the diffeology on $Y$ is the quotient diffeology relative to the equivalence relation given by the fibers of $f$. The category of diffeological spaces is fairly well-behaved. In particular, it admits finite products and exponential objects. **Definition 25**. Given two diffeological spaces $X$ and $Y$, the *product diffeology* on $X \times Y$ is constructed as follows: We say that $\phi \colon U_\phi \to X \times Y$ is a plot if and only if $\mathop{\mathrm{pr}}_1 \circ \phi$ and $\mathop{\mathrm{pr}}_2 \circ \phi$ are plots. **Definition 26**. Given two diffeological spaces $X$ and $Y$, the *functional diffeology* on $C^\infty(X,Y)$ is constructed as follows: We say that $\phi \colon U_\phi \to C^\infty(X,Y)$ is a plot if and only if the following map is smooth: $$\overline{\phi} \colon U_\phi \times X \to Y \qquad (u,x) \mapsto \phi(u)(x)$$ Note that we take the product diffeology on $U_\phi \times X$. # Quasi-étale diffeological spaces {#section:quasi-étale.diffeological.spaces} We will now introduce a new class of diffeological space that we will call "quasi-étale diffeological spaces" (QUED for short). This subcategory of diffeological spaces, along with its associated groupoids, will be the main focus of the remainder of the article.
In short, a QUED is a diffeological space that can be expressed as the (local) quotient of a smooth (Hausdorff) manifold by a well-behaved equivalence relation. The term "quasi-étale" is used because this equivalence relation causes the quotient map to behave in a manner that is reminiscent of an étale map of manifolds. However, unlike a typical étale map, it is not a local diffeomorphism. ## Quasi-étale maps and spaces **Definition 27** (Quasi-étale). Suppose $X$ and $Y$ are diffeological spaces. A map $\pi \colon X \to Y$ is said to be *quasi-étale* if it satisfies the following properties: 1. (QE1) $\pi$ is a local subduction, 2. (QE2) the fibers of $\pi$ are totally disconnected, 3. (QE3) if $\mathcal{O}\subseteq X$ is an open subset and we are given a smooth map $f \colon \mathcal{O}\to X$ with the property that $\pi \circ f = \pi$ then $f$ is a local diffeomorphism. Given $x \in X$, a *quasi-étale chart around $x$* consists of a quasi-étale map $\pi \colon M \to X$ where $M$ is a smooth manifold and such that $x$ is in the image of $\pi$. We say that $X$ is a *quasi-étale diffeological space (QUED)* if for all $x \in X$ there exists a quasi-étale chart around $x$. We write $\mathbf{QUED}$ to denote the full subcategory of $\mathbf{Diffgl}$ that consists of quasi-étale diffeological spaces. Since local subductions are open, every quasi-étale diffeological space is locally the quotient of a smooth manifold modulo an equivalence relation. In practice, it can be difficult to determine whether an equivalence relation on a manifold gives rise to a quasi-étale chart, and typically the most difficult step to prove is (QE3). However, there are a lot of interesting examples of quasi-étale diffeological spaces. In particular, several kinds of diffeological spaces that appear in the literature happen to be quasi-étale. **Example 28**. Suppose $N$ and $M$ are smooth manifolds. A quasi-étale map $\pi \colon M \to N$ is the same thing as an étale map.
Therefore, a classical atlas on a manifold is also a quasi-étale atlas. **Example 29**. A quasifold (introduced by Prato [@prato_sur_1999]) is a diffeological space that is locally the quotient of Euclidean space modulo a countable group of affine transformations. If $\Gamma$ is a countable group of affine transformations of $\mathbb{R}^n$ then the quotient map $\mathbb{R}^n \to \mathbb{R}^n/\Gamma$ is quasi-étale. It is a local subduction because the quotient map for a smooth group action is always a local subduction. The fibers are totally disconnected since $\Gamma$ is countable. The fact that condition (QE3) is satisfied in this context is actually rather non-trivial, but it has already been proved by Karshon and Miyamoto [@karshon_quasifold_2022] (Corollary 2.15). **Example 30**. Since orbifolds are a special case of quasifolds, they are also quasi-étale diffeological spaces. **Example 31**. In the literature, the closest definition to the one we give above is probably the "diffeological étale manifolds" of Ahmadi [@ahmadi_submersions_2023]. In that article, Ahmadi defines a class of maps which he calls "diffeological étale maps." We will not state the definition of such maps here but will simply utilize some of the properties proved by Ahmadi to show the relationship with our notion: Ahmadi shows that if $\pi \colon X \to Y$ is a diffeological étale map then it is a local subduction and it has the property that for all $x \in X$: $$T^{int}_x \pi \colon T^{int}_x X \to T^{int}_{\pi(x)} Y$$ is a bijection ([@ahmadi_submersions_2023], Corollary 5.6(i)). Here $T^{int}$ denotes the "internal tangent space" (see [@hector_geometrie_1995] and [@christensen_tangent_2016]). If $X$ is a smooth manifold and we have an $f$ as in (QE3) such that $\pi \circ f = \pi$, we can apply the tangent functor to this equation to easily conclude that $f$ must be a local diffeomorphism. Our last example is of particular importance.
It says that quotients of Lie groups by totally disconnected subgroups are quasi-étale diffeological spaces. The proof of this lemma is essentially a simplified form of the main strategy we will use to show that the Ševera-Weinstein groupoid of a Lie algebroid is a quasi-étale diffeological space. **Lemma 32**. *Suppose $G$ is a Lie group and $K$ is a totally disconnected normal subgroup of $G$. Let $X = G/K$ be the group of $K$-cosets equipped with the quotient diffeology. The quotient map $\pi \colon G \to X$ is quasi-étale and so $X$ is a quasi-étale diffeological space.* *Proof.* We need to show each of the three properties. (QE1) Consider the action groupoid $K \ltimes G \rightrightarrows G$. Clearly $\pi$ is the quotient map for the orbit space of this Lie groupoid. By Lemma [Lemma 22](#lemma:lie.groupoid.quotient.is.subduction){reference-type="ref" reference="lemma:lie.groupoid.quotient.is.subduction"} we conclude $\pi$ is a local subduction. (QE2) The fibers of $\pi$ are the $K$-cosets of the form $gK$ for $g \in G$. Since left translation by $g$ is a diffeomorphism and $K$ is totally disconnected, it follows that $g K$ must be totally disconnected. (QE3) Suppose $\mathcal{O}\subseteq G$ is a connected open set and $f \colon \mathcal{O}\to G$ is a smooth function such that $\pi \circ f = \pi$ (since being a local diffeomorphism is a local property, it suffices to treat connected domains). Consider the function: $$\overline{f} (g) := g^{-1}\cdot f(g)$$ Since $\pi$ is a group homomorphism and $\pi \circ f = \pi$, the function $\pi \circ \overline{f} \colon \mathcal{O}\to X$ is constant and therefore the image of $\overline f$ is contained in a single fiber of $\pi$. Since $\mathcal{O}$ is connected and the fibers of $\pi$ are totally disconnected, we conclude that there exists $g_0 \in G$ such that $\overline{f}(g) = g_0$ for all $g \in \mathcal{O}$. This implies that $$f(g) = g \cdot g_0$$ for all $g \in \mathcal{O}$. Since $f$ is just right translation by $g_0$, we conclude that $f$ is a local diffeomorphism.
◻ ## Properties of quasi-étale maps Quasi-étale diffeological spaces have a variety of favorable properties that make it possible to transport many differential geometry techniques to the category $\mathbf{QUED}$. The way one does this is by "representing" maps between quasi-étale diffeological spaces using maps between manifolds. This is analogous to how smooth maps between manifolds can be represented in terms of charts. **Definition 33**. Suppose $f \colon X \to Y$ is a smooth map in $\mathbf{QUED}$. A *local representation* of $f$ consists of a pair of quasi-étale charts $\pi_X \colon M \to X$ and $\pi_Y\colon N \to Y$ together with a smooth function $\widetilde f \colon M \to N$ such that the following diagram commutes: $$\label{diagram:local.representation} \begin{tikzcd} M \arrow[d, "\pi_X"] \arrow[r, "\widetilde f"] & N \arrow[d, "\pi_Y"] \\ X \arrow[r, "f"] & Y \end{tikzcd}$$ We say that a local representation of $f$ is *around $x \in X$* if $x$ is in the image of $\pi_X$. Our next lemma states that representations of smooth maps in $\mathbf{QUED}$ always exist. **Lemma 34**. *Suppose $f \colon X \to Y$ is a smooth map in $\mathbf{QUED}$. Then for all $x \in X$ there exists a local representation of $f$ around $x \in X$.* *Proof.* Since $Y$ is quasi-étale, there must exist a quasi-étale chart $\pi_Y \colon N \to Y$ around $f(x) \in Y$. Now let $\pi_X \colon M' \to X$ be a quasi-étale chart around $x \in X$ and choose $p \in M'$ with $\pi_X(p) = x$. Since $\pi_Y$ is a local subduction, there must exist a local lift $\widetilde f \colon M \to N$ of $f \circ \pi_X \colon M' \to Y$ defined on some open neighborhood $M \subseteq M'$ of $p$. The triple $\widetilde f$, $\pi_X|_{M}$ and $\pi_Y$ is the desired local representation around $x$. ◻ Our next lemma observes that any two points in the same fiber of a quasi-étale chart can be related by a diffeomorphism. **Lemma 35**. *Suppose $X$ is a quasi-étale diffeological space and $\pi \colon M \to X$ is a quasi-étale chart.
If $p, q \in M$ are such that $\pi(p) = \pi(q)$ then there exists a local diffeomorphism $f \colon \mathcal{O}\to M$ defined on a neighborhood $\mathcal{O}\subseteq M$ of $p$ such that $f(p) = q$ and $\pi \circ f = \pi$.* *Proof.* Since $\pi \colon M \to X$ is a local subduction and $M$ is a manifold, we know that there must exist an open neighborhood $\mathcal{O}\subseteq M$ of $p$ and a smooth function $f \colon \mathcal{O}\to M$ with the property that $\pi \circ f = \pi$ and $f(p) = q$. Since $\pi$ is assumed to be quasi-étale, (QE3) implies that such an $f$ must be a local diffeomorphism. ◻ Our next lemma relates properties of maps between quasi-étale diffeological spaces and properties of their representations. **Proposition 36**. *Suppose $f \colon X \to Y$ is a smooth map in $\mathbf{QUED}$ and we have a local representation of $f$ as in the Diagram [\[diagram:local.representation\]](#diagram:local.representation){reference-type="ref" reference="diagram:local.representation"}.* (a) *If $f$ is a local subduction, then $\widetilde f$ is a submersion.* (b) *If $f$ is a local diffeomorphism, then $\widetilde f$ is a local diffeomorphism.* (c) *If $f$ is constant, then $\widetilde f$ is locally constant.* *Proof.* Throughout, let $p \in M$ be arbitrary and $q := \widetilde f(p) \in N$, $x := \pi_X(p) \in X$ and $y := \pi_Y(q) \in Y$. \(a\) Since $N$ is a smooth manifold and $f \circ \pi_X$ is a local subduction (being a composition of local subductions), there must exist a smooth function $g \colon \mathcal{O}\to M$ defined on an open neighborhood $\mathcal{O}$ of $q$ such that $g(q) = p$ and $$f \circ \pi_X \circ g = \pi_Y$$ This implies that: $$\pi_Y \circ \widetilde f \circ g = \pi_Y$$ Since $\pi_Y$ is quasi-étale, by (QE3) it follows that $\widetilde f \circ g$ is a local diffeomorphism. Therefore, $\widetilde f$ is a submersion and the dimension of $M$ is greater than or equal to the dimension of $N$. \(b\) Now suppose $f$ is a local diffeomorphism. Since the claim is local, we may assume that $f$ is a diffeomorphism. Then it is, in particular, a local subduction.
Let $g$ be as in the discussion for part (a). We can repeat the argument from part (a), replacing $f$ with $f^{-1}$ and $\widetilde f$ with $g$, and from that we conclude that $g$ is a submersion. Since $\widetilde f \circ g$ is a local diffeomorphism and both $\widetilde f$ and $g$ are submersions, we conclude that $\widetilde f$ is a local diffeomorphism. \(c\) Since $f$ is a constant function, it follows that the image of $\widetilde f$ is contained in $\pi_Y^{-1}(y)$. By assumption, we know that $\pi_Y^{-1}(y)$ is totally disconnected so $\widetilde f$ is locally constant. ◻ We conclude this subsection with a lemma that simplifies the process of proving a map is a quasi-étale chart when we already know the space is quasi-étale. **Lemma 37**. *Suppose $X$ is a quasi-étale diffeological space and suppose $\pi \colon M \to X$ is a local subduction and the fibers of $\pi$ are totally disconnected. Then $\pi$ is a quasi-étale chart.* *Proof.* Let $p_0 \in M$ be an arbitrary point and let $x_0 := \pi(p_0)$. We will show that $\pi$ is quasi-étale in a neighborhood of $p_0$. Take $\pi' \colon N \to X$ to be a quasi-étale chart around $x_0$ and let $q_0 \in N$ be such that $\pi'(q_0) = x_0$. Since both $\pi$ and $\pi'$ are local subductions, we know that by possibly shrinking $M$ and $N$ to smaller open neighborhoods of $p_0$ and $q_0$, respectively, we can construct a pair of smooth functions $f \colon M \to N$ and $g \colon N \to M$ such that $f(p_0) = q_0$, $g(q_0) = p_0$, $\pi' \circ f = \pi$, and $\pi \circ g = \pi'$. Then it follows that $f \circ g \colon N \to N$ is a smooth function such that $\pi' \circ (f \circ g) = \pi'$. Since $\pi'$ is quasi-étale, it follows that $f \circ g$ is a local diffeomorphism. Therefore, it follows that $f$ is a submersion. However, since $\pi' \circ f = \pi$, the fibers of $f$ are contained in fibers of $\pi$, hence totally disconnected, and so $f$ is a local diffeomorphism.
Since $f \colon M \to N$ preserves the projections to $X$, and $\pi' \colon N \to X$ is quasi-étale, it follows that $\pi$ must also be quasi-étale. ◻ ## Fiber products It is a notable property of diffeological spaces that the fiber product of two diffeological spaces always exists. However, fiber products of quasi-étale diffeological spaces may not be quasi-étale. This can happen even when the maps involved are quasi-étale. Consider the following simple example: **Example 38**. Consider the $\mathbb{Z}_2$ action on $\mathbb{R}$ by the reflection $x \mapsto - x$. Let $X := \mathbb{R}/ \mathbb{Z}_2$. Then $X$ is quasi-étale (it is an orbifold). However, consider the fiber product: $$\mathbb{R}\times_X \mathbb{R}= \{ (x,y) \in \mathbb{R}^2 \ : \ x = y \mbox{ or } x = -y \}$$ The diffeological space $\mathbb{R}\times_X \mathbb{R}$ is not a quasi-étale diffeological space. To see why, note that the domain of a quasi-étale chart around $(0,0)$ would have to be one-dimensional. It is a standard exercise to show that there cannot exist a local subduction $\pi \colon \mathbb{R}\to \mathbb{R}\times_X \mathbb{R}$ with $(0,0)$ in the image of $\pi$. What this tells us is that "local subduction" is not a strong enough condition to function as our notion of "submersion" between quasi-étale diffeological spaces. In order to clarify what is going on here, we will need to use the notion of a "cartesian" morphism in a category. **Definition 39**. Suppose $\mathbf{C}$ is an arbitrary category. We say that a morphism $p \colon X \to Z$ in $\mathbf{C}$ is *cartesian* if for all other morphisms $f \colon Y \to Z$ one of the following holds: (1) The fiber product $X \!\tensor*[^{}_{p}]{\times}{^{}_{f}} Y$ exists. I.e.
we have a pullback square: $$\begin{tikzcd} X \!\tensor*[^{}_{p}]{\times}{^{}_{f}} Y \arrow[r, "\mathop{\mathrm{pr}}_1"] \arrow[d, "\mathop{\mathrm{pr}}_2"] & X \arrow[d, "p"] \\ Y \arrow[r, "f"] & Z \end{tikzcd}$$ (2) The collection of commutative squares of the following form is empty: $$\begin{tikzcd} W \arrow[r] \arrow[d] & X \arrow[d, "p"] \\ Y \arrow[r, "f"] & Z \end{tikzcd}$$ **Example 40**. Suppose $\mathbf{C}$ is the category of sets $\mathbf{Set}$. Then every function in $\mathbf{Set}$ is cartesian. In this category, case (2) in the above definition never occurs since the empty set $\emptyset$ is an initial object in $\mathbf{Set}$. **Example 41**. Consider the category of smooth manifolds $\mathbf{Man}$. Any submersion $p \colon M \to N$ in $\mathbf{Man}$ is cartesian. Observe that if we permit the empty set $\emptyset$ to be an initial object in $\mathbf{Man}$, condition (2) above never occurs. However, we will proceed with the convention that the empty set is not a manifold. **Example 42**. In the category of diffeological spaces $\mathbf{Diffgl}$ every morphism is cartesian. For a general morphism in $\mathbf{QUED}$ there does not appear to be a simple geometric criterion for determining when a morphism is cartesian. However, later in this section we will state such a criterion for the special case where the co-domain is a manifold. We use the notion of a cartesian morphism to define a class of maps in $\mathbf{QUED}$ that is well-behaved for fiber products and extends the classical notion of a submersion. **Definition 43**. Suppose $p \colon X \to Z$ is a morphism in $\mathbf{QUED}$. We say that $p$ is a *$\mathbf{QUED}$-submersion* if $p$ is a local subduction and $p$ is cartesian in $\mathbf{QUED}$. In our theory of quasi-étale diffeological spaces, $\mathbf{QUED}$-submersions will play the role of submersions in the theory of manifolds.
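For context, here is a brief sketch (using the standard transversality argument, which is not spelled out in the text) of why a submersion $p \colon M \to N$ as in Example 41 is cartesian in $\mathbf{Man}$. Given any smooth map $f \colon Y \to N$, the fiber product can be realized as a preimage: $$M \times_N Y = (p \times f)^{-1}(\Delta_N) \qquad \Delta_N := \{ (z,z) \ : \ z \in N \} \subseteq N \times N$$ Since $p$ is a submersion, the map $p \times f \colon M \times Y \to N \times N$ is transverse to the diagonal $\Delta_N$, so the preimage (when non-empty) is an embedded submanifold of $M \times Y$ of dimension $\dim M + \dim Y - \dim N$, and one checks directly that it satisfies the universal property of the pullback in $\mathbf{Man}$.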
Note that from Example [Example 38](#example:non.existant.fiber.product){reference-type="ref" reference="example:non.existant.fiber.product"} we know that quasi-étale maps are *not* a special case of $\mathbf{QUED}$-submersions. Our next lemma says that $\mathbf{QUED}$-submersions are stable under taking base changes. It is a consequence of the more general fact that the properties of being a local subduction and being cartesian are both stable under base changes. We will include a proof, regardless, for completeness. **Lemma 44**. *Suppose $p \colon X \to Z$ is a $\mathbf{QUED}$-submersion and $f \colon Y \to Z$ is any other morphism. The base change of $p$ along $f$: $$\mathop{\mathrm{pr}}_2 \colon X \times_Z Y \to Y$$ is a $\mathbf{QUED}$-submersion.* *Proof.* Suppose $p \colon X \to Z$ is a $\mathbf{QUED}$-submersion and $f \colon Y \to Z$ is a morphism in $\mathbf{QUED}$. Let us illustrate the situation with a diagram: $$\begin{tikzcd} X \times_Z Y \arrow[r, "\mathop{\mathrm{pr}}_1"] \arrow[d, "\mathop{\mathrm{pr}}_2"] & X \arrow[d, "p"] \\ Y \arrow[r, "f"] & Z \end{tikzcd}$$ To see that $\mathop{\mathrm{pr}}_2$ is cartesian, let $g \colon W \to Y$ be an arbitrary morphism in $\mathbf{QUED}$. We need to show that: $$( X \times_Z Y ) \times_Y W$$ is a quasi-étale diffeological space. Observe that $( X \times_Z Y ) \times_Y W$ is canonically diffeomorphic to $X \times_Z W$ (where $W$ maps to $Z$ via $f \circ g$) as a diffeological space. Since $p$ is cartesian we conclude that $X \times_Z W$ is a quasi-étale diffeological space so $\mathop{\mathrm{pr}}_2 \colon X \times_Z Y \to Y$ is cartesian. Now we must show that $\mathop{\mathrm{pr}}_2 \colon X \times_Z Y \to Y$ is a local subduction. Suppose $\phi \colon U_\phi \to Y$ is a plot and let $u \in U_\phi$ and $(x,y) \in X \times_Z Y$ be such that $\phi(u) = y$. Since $p \colon X \to Z$ is a local subduction, we know there exists an open neighborhood $V \subseteq U_\phi$ of $u$ and a lift $\psi \colon V \to X$ with the property that $\psi(u) = x$ and $p \circ \psi = f \circ \phi|_V$.
Then $$\widetilde\phi := \psi \times \phi|_V \colon V \to X \times_Z Y$$ is well-defined and has the properties that $\mathop{\mathrm{pr}}_2 \circ \widetilde\phi = \phi|_V$ and $\widetilde\phi(u) = (x,y)$. ◻ It is not that easy to construct "obvious" examples of $\mathbf{QUED}$-submersions. It is essential to our theory that $\mathbf{QUED}$-submersions between manifolds be the same thing as ordinary submersions. However, the proof of this fact is not as obvious as one might expect. **Example 45**. Suppose $p \colon M \to N$ is a smooth map of manifolds. Then $p$ is a $\mathbf{QUED}$-submersion if and only if $p$ is a submersion. One of the directions in the above statement is clear from the fact that local subductions between manifolds are automatically submersions. The other direction follows from Theorem [Theorem 47](#theorem:qeds.submersion.criteria){reference-type="ref" reference="theorem:qeds.submersion.criteria"} which we will state later in this subsection. ## Fiber products over manifolds In this last subsection we will give some statements about the behavior of fiber products along $\mathbf{QUED}$-submersions in the special case where the fiber product is taken over a smooth manifold. **Proposition 46**. *Suppose $p \colon X \to B$ is a local subduction where $B$ is a smooth manifold. Suppose we have another morphism $f \colon Y \to B$ in $\mathbf{QUED}$ for which $X \times_B Y$ is also a quasi-étale diffeological space.* *Then the following holds for any point $(x_0,y_0) \in X \times_B Y$.
If $\pi_X \colon M \to X$ and $\pi_Y \colon N \to Y$ are quasi-étale charts around $x_0$ and $y_0$ respectively, then the map: $$\pi_{XY} \colon M \times_B N \to X \times_B Y \qquad \pi_{XY}(m,n) := (\pi_X(m), \pi_Y(n))$$ is a quasi-étale chart around $(x_0, y_0)$.* *Proof.* We have a commutative diagram: $$\begin{tikzcd} M \times_B N \arrow[dd, "\mathop{\mathrm{pr}}_2"] \arrow[rr, "\mathop{\mathrm{pr}}_1"] \arrow[dr, "\pi_{XY}"] & & M \arrow[d, "\pi_X"] \\ & X \times_B Y \arrow[r, "\mathop{\mathrm{pr}}_1"] \arrow[d, "\mathop{\mathrm{pr}}_2"] & X \arrow[d, "p"] \\ N \arrow[r,"\pi_Y"] & Y \arrow[r, "f"] & B \end{tikzcd}$$ Note that $M \times_{B} N$ is a manifold since $p \circ \pi_X$ is a local subduction (and hence a submersion). We must show that $\pi_{XY}$ is quasi-étale. By Lemma [Lemma 37](#lemma:finding.quasi.étale.charts){reference-type="ref" reference="lemma:finding.quasi.étale.charts"}, it suffices to show that $\pi_{XY}$ satisfies (QE1) and (QE2) from Definition [Definition 27](#defn:quasi-étale){reference-type="ref" reference="defn:quasi-étale"}. (QE1) We need to show that $\pi_{XY}$ is a local subduction. Suppose $\phi \colon U_\phi \to X \times_B Y$ is a plot. Let $u \in U_\phi$ and $(m,n) \in M \times_B N$ be such that: $$\phi(u) = \pi_{XY}(m,n)$$ We may split $\phi$ into a product $\phi = \phi_1 \times \phi_2$ where $\phi_1$ is a plot on $X$ and $\phi_2$ is a plot on $Y$. Since $\pi_X$ and $\pi_Y$ are local subductions, it follows that there exist an open neighborhood $V \subseteq U_\phi$ of $u$ and plots: $$\widetilde\phi_1 \colon V \to M \qquad \widetilde\phi_2 \colon V \to N$$ on $M$ and $N$ respectively such that: $$\pi_X \circ \widetilde\phi_1 = \phi_1|_V \qquad \pi_Y \circ \widetilde\phi_2 = \phi_2|_V$$ and $$\widetilde\phi_1(u) = m \qquad \widetilde\phi_2(u) = n$$ This pair of plots defines a plot $\widetilde\phi := \widetilde\phi_1 \times \widetilde\phi_2$ on $M \times_B N$.
Furthermore $\pi_{XY} \circ \widetilde\phi = \phi|_V$ and $\widetilde\phi(u) = ( m,n)$. This shows $\pi_{XY}$ is a local subduction. (QE2) Next, we show that $\pi_{XY}$ has totally disconnected fibers. Suppose $$\phi \colon U_\phi \to M \times_B N$$ is a plot such that the image of $\phi$ is contained in $\pi_{XY}^{-1}(x,y)$ for some $(x,y) \in X \times_B Y$. Then $\mathop{\mathrm{pr}}_1 \circ \phi$ is a plot with image contained in $\pi_X^{-1}(x)$ and $\mathop{\mathrm{pr}}_2 \circ \phi$ is a plot with image contained in $\pi_Y^{-1}(y)$. Since $\pi_X$ and $\pi_Y$ have totally disconnected fibers, it follows that both components of $\phi$, and hence $\phi$ itself, are locally constant. Since $\phi$ was arbitrary, we conclude that $\pi^{-1}_{XY}(x,y)$ is totally disconnected. ◻ The next theorem is our main result of this section. It says essentially that a smoothly parameterized family of quasi-étale diffeological spaces has a quasi-étale total space. **Theorem 47**. *Suppose $p \colon X \to B$ is a morphism in $\mathbf{QUED}$ with $B$ a smooth manifold and $p$ a local subduction. Then $p$ is a $\mathbf{QUED}$-submersion if and only if the fibers of $p$ are quasi-étale.* This somewhat innocent-looking theorem has a surprisingly involved proof. Before getting to the proof of this theorem, we will first prove a lemma that simplifies the process of determining whether or not a map is quasi-étale. Note that the only difference is a subtle change in the last axiom in the definition of quasi-étale. **Lemma 48**. *Suppose $\pi \colon M \to X$ is a smooth map of diffeological spaces with $M$ a smooth manifold.
Then $\pi$ is quasi-étale if and only if the following conditions hold:* (1) *$\pi$ is a local subduction,* (2) *the fibers of $\pi$ are totally disconnected,* (3) *for all $p \in M$ and smooth functions: $$f \colon \mathcal{O}\to M$$ such that $\mathcal{O}$ is an open neighborhood of $p$, $f(p)= p$ and $\pi \circ f = \pi$ we have that $f$ is a diffeomorphism in a neighborhood of $p$.* *Proof.* Note that the only difference between the conditions above and the definition of quasi-étale is the additional stipulation that $f$ has a fixed point. Therefore, the $\Rightarrow$ case is clear. Now, suppose we have a map $\pi \colon M \to X$ which satisfies properties (1-3). We must show that $\pi$ satisfies (QE3). Consider a smooth map $f \colon \mathcal{O}\to M$ where $\mathcal{O}\subseteq M$ is an open subset and $\pi \circ f = \pi$. To show that $f$ is a local diffeomorphism, it suffices to show that it is a local diffeomorphism in a neighborhood of an arbitrary $p \in \mathcal{O}$, so let $p \in \mathcal{O}$ be fixed. Since $\pi$ is a local subduction and $M$ is a smooth manifold, there exists an open neighborhood $\mathcal{U}\subseteq M$ of $f(p)$ and a smooth function $\psi \colon \mathcal{U}\to M$ such that $\psi(f(p)) = p$ and $\pi \circ \psi = \pi$. We can assume without loss of generality (shrinking $\mathcal{O}$ if necessary) that $f(\mathcal{O}) \subseteq \mathcal{U}$ and therefore the function $\psi \circ f$ is well-defined. This function also satisfies $(\psi \circ f)(p) = p$ and $\pi \circ (\psi \circ f) = \pi$. By property (3) above we conclude that $\psi \circ f$ is a local diffeomorphism. Therefore, $f$ must be an immersion near $p$. By a dimension count, we conclude that $f$ is a local diffeomorphism. ◻ Our next lemma tells us that if we have a smoothly parameterized family of quasi-étale diffeological spaces with a quasi-étale total space, then one can find a quasi-étale chart on a fiber by restricting a quasi-étale chart on the total space. **Lemma 49**.
*Suppose $p \colon X \to B$ is a local subduction with $X$ quasi-étale and $B$ a smooth manifold and assume that for all $b \in B$ $$X_b := p^{-1}(b)$$ is quasi-étale.* *Then if $\pi \colon M \to X$ is a quasi-étale chart, it follows that: $$\pi|_{(p \circ \pi)^{-1}(b)} \colon (p \circ \pi)^{-1}(b) \to X_b$$ is a quasi-étale chart.* *Proof.* This is an immediate consequence of Proposition [Proposition 46](#prop:fiber.product.chart){reference-type="ref" reference="prop:fiber.product.chart"} where we take $f \colon Y \to B$ to be the inclusion of a point $\{ b \} \hookrightarrow B$. ◻ With these observations, we can now prove the theorem. *Proof of Theorem [Theorem 47](#theorem:qeds.submersion.criteria){reference-type="ref" reference="theorem:qeds.submersion.criteria"}.* Suppose $B$ is a smooth manifold and let $p \colon X \to B$ be a local subduction and a morphism in $\mathbf{QUED}$. We must show that $p$ is a $\mathbf{QUED}$-submersion if and only if the fibers of $p$ are quasi-étale diffeological spaces. ($\Rightarrow$) Suppose that $p$ is a $\mathbf{QUED}$-submersion. Then $p$ must be cartesian and it follows that for all $b \in B$ we have that the fiber product: $$\{ b \} \times_{B} X \cong p^{-1}(b)$$ is a quasi-étale diffeological space. ($\Leftarrow$) Suppose that $p \colon X \to B$ is a local subduction in $\mathbf{QUED}$ where $B$ is a smooth manifold. Assume that for all $b \in B$ we have that $p^{-1}(b)$ is a quasi-étale diffeological space. We must show that $p$ is cartesian. Let $f \colon Y \to B$ be an arbitrary smooth map in $\mathbf{QUED}$. Our task is to show that $X \times_B Y$ is quasi-étale. Suppose $\pi_X \colon M \to X$ and $\pi_Y \colon N \to Y$ are quasi-étale charts. Let $\pi_{XY} \colon M \times_B N \to X \times_B Y$ be the associated smooth map. We claim that $\pi_{XY}$ is quasi-étale.
First let us establish some notation and make a few observations: - Let $$\widetilde p := p \circ \pi_X \colon M \to B \qquad \widetilde f := f \circ \pi_Y \colon N \to B$$ We remark that $\widetilde p$ is a submersion between smooth manifolds. - Given $b \in B$, let $X_b := p^{-1}(b)$ and $M_b := \widetilde p^{-1}(b)$. Note that, by Lemma [Lemma 49](#lemma:restriction.to.fiber){reference-type="ref" reference="lemma:restriction.to.fiber"}, we know that $$\pi_X|_{M_b} \colon M_b \to X_b$$ is a quasi-étale chart. - Let $q := \mathop{\mathrm{pr}}_2 \colon M \times_B N \to N$. Note that since $q$ is the base change of $\widetilde p$ along $\widetilde f$ it follows that $q$ is a submersion. We can illustrate the situation with the following diagram: $$\begin{tikzcd} M \times_B N \arrow[dd, "q"] \arrow[rr, "\mathop{\mathrm{pr}}_1"] \arrow[dr, "\pi_{XY}"] & & M \arrow[d, "\pi_X", swap] \arrow[dd, "\widetilde p", bend left] \\ & X \times_B Y \arrow[r, "\mathop{\mathrm{pr}}_1"] \arrow[d, "\mathop{\mathrm{pr}}_2"] & X \arrow[d, "p", swap] \\ N \arrow[rr, "\widetilde f", bend right]\arrow[r,"\pi_Y"] & Y \arrow[r, "f"] & B \end{tikzcd}$$ (QE1) We must show $\pi_{XY}$ is a local subduction. Suppose $\phi \colon U_\phi \to X \times_B Y$ is a plot and we have a point $u \in U_\phi$ and $(m,n) \in M \times_B N$ such that $\pi_{XY}(m,n) = \phi(u)$.
We know that there exist plots $\phi_X \colon U_\phi \to X$ and $\phi_Y \colon U_\phi \to Y$ such that: $$\phi(u) = (\phi_X(u), \phi_Y(u)) \quad \mbox{ and } \quad p \circ \phi_X = f \circ \phi_Y$$ Since $\pi_X$ and $\pi_Y$ are local subductions, there must exist an open neighborhood $V$ of $u \in U_\phi$ together with lifts $\widetilde\phi_X$ and $\widetilde\phi_Y$ such that: $$\pi_X \circ \widetilde\phi_X = \phi_X|_V \qquad \pi_Y \circ \widetilde\phi_Y = \phi_Y|_V$$ and: $$\widetilde\phi_X(u) = m \qquad \widetilde\phi_Y(u) = n$$ Therefore, we can define a map: $$\widetilde\phi \colon V \to M \times N \qquad \widetilde\phi(u) = (\widetilde\phi_X(u), \widetilde\phi_Y(u))$$ We claim that the image of $\widetilde\phi$ is contained in $M \times_B N$. This follows from a direct calculation: $$\begin{aligned} \widetilde p \circ \widetilde\phi_X &= p \circ \pi_X \circ \widetilde\phi_X \\ &= p \circ \phi_X|_V \\ &= f \circ \phi_Y|_V \\ &= f \circ \pi_Y \circ \widetilde\phi_Y \\ &= \widetilde f \circ \widetilde\phi_Y \end{aligned}$$ Furthermore, it follows immediately from the definition of $\pi_{XY}$ that $\pi_{XY} \circ \widetilde\phi = \phi|_V$ and $\widetilde\phi(u) = (m,n)$. This shows that $\pi_{XY}$ is a local subduction. (QE2) If $\phi \colon U_\phi \to M \times_B N$ is a smooth map with image contained in a fiber of $\pi_{XY}$, then it follows that $\mathop{\mathrm{pr}}_1 \circ \pi_{XY} \circ \phi$ is locally constant. Commutativity of the top square implies that $\pi_X \circ \mathop{\mathrm{pr}}_1 \circ \phi$ is locally constant. Since $\pi_X$ is quasi-étale it follows that $\mathop{\mathrm{pr}}_1 \circ \phi$ is locally constant. A symmetrical argument shows that $\mathop{\mathrm{pr}}_2 \circ \phi$ is locally constant. Since each component of $\phi$ is locally constant it follows that $\phi$ is locally constant. (QE3) We utilize the simplification from Lemma [Lemma 48](#lemma:quasi.étale.alternate){reference-type="ref" reference="lemma:quasi.étale.alternate"}.
Let us fix $(m_0,n_0) \in M \times_B N$ and suppose $g \colon \mathcal{O}\to M \times_B N$ is a smooth function such that $\pi_{XY} \circ g = \pi_{XY}$ and $g(m_0,n_0) = (m_0,n_0)$. We are finished if we can show $g$ is a local diffeomorphism in a neighborhood of $(m_0, n_0)$. Our strategy to show $g$ is a local diffeomorphism involves several steps, summarized by the following sequence of claims: (1) In a neighborhood of $(m_0,n_0)$, $g$ maps fibers of $q$ to fibers of $q$. (2) Since $\pi_Y$ is quasi-étale, the horizontal (relative to $q$) component of $g$ is a local diffeomorphism. (3) Since $\pi_X|_{M_b}$ is quasi-étale for all $b \in B$, it follows that the vertical component of $g$ is a local diffeomorphism. (4) Since both the horizontal and vertical components of $g$ are local diffeomorphisms, it follows that $g$ is a local diffeomorphism. \(1\) Since we are only concerned with the behavior of $g$ in an open neighborhood of $(m_0,n_0)$ and $(m_0,n_0)$ is a fixed point of $g$ we can freely restrict $g$ to arbitrarily small neighborhoods of $(m_0,n_0)$. Therefore, we can assume without loss of generality that there exist two open subsets $\mathcal{U}\subseteq M$ and $\mathcal{V}\subseteq N$ such that $\mathcal{O}= \mathcal{U}\times_B \mathcal{V}$. Since $\widetilde p$ is a submersion, we can choose $\mathcal{U}$ in such a way that for all $b \in B$ we have that: $$M_b \cap \mathcal{U}$$ is connected. Note that if $n \in N$ is a point such that $\widetilde f(n) = b$, it follows that: $$q^{-1}(n) = M_{b} \times \{ n \}$$ Consequently, it follows that for all $n \in N$ the sets: $$q^{-1}(n) \cap \mathcal{O}$$ are connected (and possibly empty). We claim that $g$ must preserve the fibers of $q$ on such a domain.
To see why, observe that by following the commutative diagram above and the fact that $\pi_{XY} \circ g = \pi_{XY}$ we have that: $$\pi_Y \circ q \circ g = \mathop{\mathrm{pr}}_2 \circ \pi_{XY} \circ g = \mathop{\mathrm{pr}}_2 \circ \pi_{XY} = \pi_Y \circ q$$ Since, for all $n \in N$, the subsets $q^{-1}(n) \cap \mathcal{O}$ are connected and the fibers of $\pi_Y$ are totally disconnected, it follows that the image: $$q \circ g ( q^{-1}(n) \cap \mathcal{O})$$ is at most a point. This implies that the images of the fibers of $q$ under $g$ are contained in fibers of $q$. Since $q$ is a submersion, it follows that there must exist a smooth function $h$ which makes the following diagram commute: $$\begin{tikzcd} \mathcal{O}\arrow[d, "q|_{\mathcal{O}}"] \arrow[r, "g"]& M \times_B N \arrow[d, "q"] \\ \mathcal{V}\arrow[r, "h"] & N \end{tikzcd}$$ \(2\) Note that since $\pi_{XY} \circ g = \pi_{XY}$ it follows that $\pi_Y \circ h = \pi_Y$. Since $\pi_Y$ is quasi-étale it follows that $h$ is a local diffeomorphism. \(3\) For each $n \in \mathcal{V}$, let $b := \widetilde f(n)$. Note that: $$q^{-1}(n) \cap \mathcal{O}= (M_b \cap \mathcal{U}) \times \{ n \}$$ Furthermore, we remark that $\widetilde f \circ h = \widetilde f$ so it follows that: $$g((M_b \cap \mathcal{U}) \times \{ n \} ) \subseteq M_b \times \{ h(n) \}$$ Therefore, let: $$v_n \colon M_b \cap \mathcal{U}\to M_b$$ be the map induced by $g$ at the level of fibers. In other words, $v_n$ is the vertical component of $g$ at the $q$-fiber of $n$. Observe that since $\pi_{XY} \circ g = \pi_{XY}$ it follows that $\pi_X \circ v_n = \pi_X$. From a previous lemma, we saw that $\pi_X |_{M_b}$ is quasi-étale for all $b \in B$ whenever $M_b$ is non-empty. From the quasi-étale property, it follows that $v_n$ is a local diffeomorphism. \(4\) In the notation above, $g(m,n) = (v_n(m), h(n))$ on $\mathcal{O}$. Since $h$ is a local diffeomorphism and each $v_n$ is a local diffeomorphism, the differential of $g$ is block-triangular with invertible diagonal blocks, and it follows that $g$ is a local diffeomorphism.
◻ # Quasi-étale groupoids {#section:quasi.étale.groupoids} We will now look at groupoid objects in the category of quasi-étale diffeological spaces. We will also need to define the notion of a local groupoid in this context. It turns out that local groupoids will play a key role in constructing the differentiation functor. ## Groupoid objects in $\mathbf{QUED}$ **Definition 50**. A *$\mathbf{QUED}$-groupoid* consists of two quasi-étale diffeological spaces $\mathcal{G}$ and $X$ called the *arrows* and *objects* respectively together with a collection of smooth maps: - Two $\mathbf{QUED}$-submersions called the *source* and *target*: $$s\colon \mathcal{G}\to X \qquad t\colon \mathcal{G}\to X$$ - A smooth map called the *unit* $$u\colon X \to \mathcal{G}\qquad x \mapsto 1_x$$ - A smooth map called *multiplication* $$m\colon \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\to \mathcal{G}\qquad (g,h) \mapsto gh$$ - A smooth map called *inverse* $$i\colon \mathcal{G}\to \mathcal{G}$$ These morphisms are required to satisfy the following properties: 1. Compatibility of source and target with unit: $$\forall x \in X \qquad s(u(x)) = t(u(x)) = x$$ 2. Compatibility of source and target with multiplication: $$\forall (g,h) \in \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\qquad s(m(g,h)) = s(h) \quad t(m(g,h)) = t(g)$$ 3. Compatibility of source and target with inverse: $$\forall g \in \mathcal{G}\qquad s(i(g)) = t(g) \quad t(i(g)) = s(g)$$ 4. Left and right unit laws: $$\forall g \in \mathcal{G}\qquad m( u(t(g)), g) = g = m(g, u(s(g)))$$ 5. Left and right inverse laws: $$\forall g \in \mathcal{G}\qquad m(g, i(g)) = u(t(g)) \quad m(i(g),g) = u(s(g))$$ 6. Associativity law: $$\forall (g,h,k) \in \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\qquad m(g,m(h,k)) = m(m(g,h),k)$$ We say that such a groupoid $\mathcal{G}\rightrightarrows X$ is a *singular Lie groupoid* if $X$ is a smooth manifold.
We say that $\mathcal{G}\rightrightarrows X$ is a *Lie groupoid* if both $\mathcal{G}$ and $X$ are manifolds. It should be noted that the condition that the source and target maps be $\mathbf{QUED}$-submersions is crucial. For example, this property ensures that the spaces of composable arrows are also objects in $\mathbf{QUED}$. Singular Lie groupoids are precisely the class of diffeological groupoids that we intend to differentiate to Lie algebroids. **Definition 51**. Suppose $\mathcal{G}\rightrightarrows X$ and $\mathcal{H}\rightrightarrows Y$ are $\mathbf{QUED}$-groupoids. A *groupoid homomorphism* $F \colon \mathcal{G}\to \mathcal{H}$ covering $f \colon X \to Y$ is a pair of smooth maps with the properties: 1. Compatibility with source and target: $$\forall g \in \mathcal{G}\qquad s(F(g)) = f(s(g)) \quad t(F(g)) = f(t(g))$$ 2. Compatibility with multiplication: $$\forall (g,h) \in \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\qquad F(m(g,h)) = m(F(g),F(h))$$ **Example 52**. Since a $\mathbf{QUED}$-submersion between smooth manifolds is an ordinary submersion, this definition of a Lie groupoid corresponds exactly to the classical definition. **Example 53**. Suppose $G$ is a Lie group and $K$ is a totally disconnected normal subgroup of $G$. According to Lemma [Lemma 32](#lemma:homogeneous.space){reference-type="ref" reference="lemma:homogeneous.space"} we know $G/K$ is a quasi-étale diffeological space and therefore $G/K$ is a singular Lie groupoid. **Example 54**. Consider the tangent bundle $T \mathbb{R}$ and think of it as a bundle of abelian groups over $\mathbb{R}$. Let us define an action of $\mathbb{Z}$ on $T \mathbb{R}$ by the following rule: $$z \in \mathbb{Z}, \ (v,t) \in T\mathbb{R}\qquad z \cdot (v,t) := (v + tz, t)$$ where the second coordinate is the base point and the first coordinate is the vector component. 
The orbit space of this action $T \mathbb{R}/\mathbb{Z}$ is quasi-étale so it is canonically a singular Lie groupoid over $\mathbb{R}$. This example illustrates how quasi-étale groupoids are able to handle the "transverse obstruction" to integrability. ## Local groupoids Local groupoids are a weaker version of groupoids, in which the product operation is only partially defined. Essentially, we do not require that multiplication be defined for all "composable" pairs. The "local" part of the definition of a local groupoid refers to the fact that multiplication should be defined in an open neighborhood of the units. **Definition 55**. A *local $\mathbf{QUED}$-groupoid* consists of a pair of quasi-étale spaces $\mathcal{G}$ and $X$ called the *arrows* and *objects* together with: - two $\mathbf{QUED}$-submersions called the *source* and *target*: $$s\colon \mathcal{G}\to X \qquad t\colon \mathcal{G}\to X$$ - a smooth map called the *unit* $$u\colon X \to \mathcal{G}\qquad x \mapsto 1_x$$ - an open neighborhood $\mathcal{M}\subseteq\mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}$ of $u(X) \times u(X)$ and a smooth map called *multiplication* $$m\colon \mathcal{M}\to \mathcal{G}\qquad (g,h) \mapsto gh$$ - an open neighborhood $\mathcal{I}\subseteq\mathcal{G}$ of $u(X)$ and a smooth map called *inverse* $$i\colon \mathcal{I}\to \mathcal{G}\qquad g \mapsto g^{-1}$$ We further require that the following properties hold: 1. Compatibility of source and target with unit: $$\forall x \in X, \qquad s(u(x)) = t(u(x)) = x$$ 2. Compatibility of source and target with multiplication: $$\forall (g,h) \in \mathcal{M}, \qquad s(m(g,h)) = s(h) \quad t(m(g,h)) = t(g)$$ 3. Compatibility of source and target with inverse: $$\forall g \in \mathcal{I}, \qquad s(i(g)) = t(g)$$ 4. Left and right unit laws: $$\forall g \in \mathcal{G}\qquad m( u(t(g)), g) = g = m(g, u(s(g)))$$ whenever the above expression is well-defined 5.
Left and right inverse laws: $$\forall g \in \mathcal{G}, \qquad m(g, i(g)) = u(t(g)) \quad m(i(g),g) = u(s(g))$$ whenever the above expression is well-defined 6. Associativity law: $$\forall (g,h,k) \in \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\qquad m(g,m(h,k)) = m(m(g,h),k)$$ whenever the above expression is well-defined. A local $\mathbf{QUED}$-groupoid $\mathcal{G}\rightrightarrows X$ is said to be a *singular local Lie groupoid* if $X$ is a smooth manifold. It is a *local Lie groupoid* if both $\mathcal{G}$ and $X$ are manifolds. Apart from the fact that multiplication is defined only on an open subset, local groupoids also differ, for our purposes, from groupoids in how their morphisms are defined. Let us establish some notation. Given a smooth function of diffeological spaces $f \colon X \to Y$ and a pair of subsets $S \subseteq X$ and $T \subseteq Y$ such that $f(S) \subseteq T$ we write: $$[f]_S \colon [X]_S \to [Y]_T$$ to denote the class of $f$ as a germ of a map from $X$ to $Y$ defined in an open neighborhood of $S$. The expressions $[X]_S$ and $[Y]_T$ denote germs of diffeological spaces. **Definition 56**. Suppose $\mathcal{G}\rightrightarrows M$ and $\mathcal{H}\rightrightarrows N$ are local $\mathbf{QUED}$-groupoids. A *morphism* $\mathcal{G}\to \mathcal{H}$ covering $f \colon M \to N$ consists of a *germ* of a smooth function: $$[\mathcal{F}]_{u(M)} \colon [\mathcal{G}]_{u(M)} \to [\mathcal{H}]_{u(N)}$$ such that there exists a representative of the germ $\mathcal{F}$ which satisfies the following properties: 1. Compatibility with source and target: $$\forall g \in \mathcal{G}\qquad s(\mathcal{F}(g)) = f(s(g)) \quad t(\mathcal{F}(g)) = f(t(g))$$ whenever the above expressions are well-defined. 2.
Compatibility with multiplication: $$\forall (g,h) \in \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{t}} \mathcal{G}\qquad \mathcal{F}(m(g,h)) = m(\mathcal{F}(g),\mathcal{F}(h))$$ whenever the above expression is well-defined. It is built into our notation that $\mathcal{F}(u(M)) \subseteq u(N)$. In other words, a morphism of local $\mathbf{QUED}$-groupoids always maps units to units by definition. A $\mathbf{QUED}$-groupoid can also be thought of as a local $\mathbf{QUED}$-groupoid. However, we remark that the functor from $\mathbf{QUED}$-groupoids to local $\mathbf{QUED}$-groupoids is neither full nor faithful. In other words, $\mathbf{QUED}$-groupoids do not form a subcategory of their local counterparts. In order to reduce the potential for confusion, we will always use a symbol such as $\mathcal{F}$ to denote an actual smooth map compatible with the structure maps and $[\mathcal{F}]$ to denote the germ of such a map around the units. This is a mild abuse of notation since we should formally write $[\mathcal{F}]_{u(M)}$, but we will omit the repetitive subscripts to reduce notational clutter. Local Lie groupoids are particularly relevant to the integration problem due to the following theorem. **Theorem 57** ([@crainic_integrability_2003],[@cabrera_local_2020]). *The Lie functor for local groupoids defines an equivalence of categories: $$\mathbf{Lie}\colon \{\mbox{Local Lie groupoids}\} \to \{\mbox{Lie algebroids}\}$$* The first proof of local integrability of Lie algebroids appears in Crainic and Fernandes [@crainic_integrability_2003]. However, the fact that this relationship is actually an equivalence of categories was, to some extent, folklore. Later, a complete proof of this equivalence appeared in [@cabrera_local_2020], and this is the earliest such proof that we are aware of. This theorem means that we will not need to deal with Lie algebroids directly to define differentiation.
Instead, our strategy will be to construct a functor from singular Lie groupoids to local Lie groupoids. The only other feature of Lie algebroids that we must keep in mind is that Lie algebroids form a *sheaf*. In other words, Lie algebroids can be constructed by gluing together Lie algebroids defined over an open cover using coherent gluing data on the intersections. Since the Lie functor is an equivalence, local Lie groupoids also form a sheaf. # Differentiation {#section:differentiation} The key observation that we need for our differentiation procedure is that a quasi-étale chart (around an identity element) on a local singular Lie groupoid inherits a unique local groupoid structure compatible with the chart. In this sense, local Lie groupoids can be used to "desingularize" singular Lie groupoids. This section will be organized as follows. In the first subsection we will state our basic theorems about quasi-étale charts on local singular Lie groupoids. In the second subsection we will show how these theorems can be used to construct a differentiation functor. In the last subsection we will use some of this theory to classify singular Lie groupoids with integrable algebroids. ## Representing singular Lie groupoids Our method of differentiating a singular Lie groupoid involves locally "representing" the groupoid in question with a local Lie groupoid. **Definition 58**. Suppose $\mathcal{G}\rightrightarrows M$ is a local singular Lie groupoid.
A *local Lie groupoid chart* of $\mathcal{G}\rightrightarrows M$ consists of an open subset $\widetilde M \subseteq M$ together with a quasi-étale chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ such that $[\pi]$ constitutes a local groupoid morphism covering the inclusion: $$\begin{tikzcd}[column sep = large] \widetilde\mathcal{G}\arrow[r, "{[\pi]}" ]\arrow[d, shift left]\arrow[d, shift right]& \mathcal{G}\arrow[d, shift left]\arrow[d, shift right]\\ \widetilde M \arrow[r, hook] & M \end{tikzcd}$$ We say a local groupoid chart is *around $x \in M$* if $x \in \widetilde M$. We say it is *wide* if $\widetilde M = M$. The following two theorems are the main technical results that permit us to differentiate (local) singular Lie groupoids. The first is an existence and uniqueness result for local groupoid charts. The second theorem says that one can always represent a homomorphism of local singular Lie groupoids using local Lie groupoid charts. **Theorem 59**. *Suppose $\mathcal{G}\rightrightarrows M$ is a local singular Lie groupoid. There exists a wide local Lie groupoid chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$. Furthermore, if $\pi' \colon \widetilde\mathcal{G}' \to \mathcal{G}$ is another wide local Lie groupoid chart, there exists a unique isomorphism of local Lie groupoids $[\mathcal{F}] \colon \widetilde\mathcal{G}\to \widetilde\mathcal{G}'$ which makes the following diagram commute: $$\begin{tikzcd} \widetilde\mathcal{G}\arrow[dr, "{[\pi]}", swap] \arrow[rr, "{[\mathcal{F}]}"] & & {\widetilde\mathcal{G}' }\arrow[dl, "{[\pi']}"] \\ & {[\mathcal{G}] }& \end{tikzcd}$$* **Theorem 60**. *Let $\mathcal{G}\rightrightarrows M$ and $\mathcal{H}\rightrightarrows N$ be local singular Lie groupoids. Suppose $[\mathcal{F}] \colon \mathcal{G}\to \mathcal{H}$ is a homomorphism and $\pi_\mathcal{G}\colon \widetilde\mathcal{G}\to \mathcal{G}$ and $\pi_\mathcal{H}\colon \widetilde\mathcal{H}\to \mathcal{H}$ are wide local groupoid charts.
Then there exists a unique local groupoid morphism $[\widetilde\mathcal{F}] \colon \widetilde\mathcal{G}\to \widetilde\mathcal{H}$ which makes the following diagram commute: $$\begin{tikzcd} \widetilde\mathcal{G}\arrow[r, "{[\widetilde\mathcal{F}]}"] \arrow[d, "{[\pi_{\mathcal{G}}]}", swap] & \widetilde\mathcal{H}\arrow[d, "{[\pi_{\mathcal{H}}]}"] \\ \mathcal{G}\arrow[r, "{[\mathcal{F}]}"] & \mathcal{H} \end{tikzcd}$$* The idea of the proof is that one starts with an arbitrary quasi-étale chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ and then constructs a local groupoid structure on $\widetilde\mathcal{G}$ by finding representations of all of the groupoid structure maps. Once these maps are constructed, one then needs to show that these representatives satisfy the groupoid axioms in a neighborhood of the units. The full proofs of Theorem [Theorem 59](#theorem:existence.and.uniqueness.of.LGC){reference-type="ref" reference="theorem:existence.and.uniqueness.of.LGC"} and Theorem [Theorem 60](#theorem:existence.and.uniqueness.of.lifts){reference-type="ref" reference="theorem:existence.and.uniqueness.of.lifts"} require their own section as well as some further technical development. Therefore, for the sake of exposition, we have moved these proofs to Section [6](#section:proof.of.main.theorem){reference-type="ref" reference="section:proof.of.main.theorem"}. For now, let us explore some of the consequences of these theorems. One interesting consequence is a classification of all connected singular Lie groups. **Theorem 61**. *Suppose $G$ is a connected singular Lie group. In other words, $G$ is a singular Lie groupoid over a point.
Then as a diffeological group, $G$ is isomorphic to a quotient of a Lie group modulo a totally disconnected normal subgroup.* *Proof.* By Theorem [Theorem 59](#theorem:existence.and.uniqueness.of.LGC){reference-type="ref" reference="theorem:existence.and.uniqueness.of.LGC"} we know that we can come up with a local Lie group $\widetilde G^\circ$ together with a quasi-étale chart $\pi \colon \widetilde G^\circ \to G$ which is a homomorphism of local groups. By possibly shrinking $\widetilde G^\circ$ to a small enough open neighborhood of the identity, we can assume without loss of generality that $\widetilde G^\circ$ is an open subset of $\widetilde G$, where $\widetilde G$ is a simply connected Lie group. Since $\widetilde G^\circ$ generates $\widetilde G$, it is possible[^1] to extend $\pi$ uniquely to a homomorphism defined on all of $\widetilde G$, and so we have a continuous group homomorphism $\pi \colon \widetilde G \to G$. Since $\pi$ is quasi-étale in a neighborhood of the identity, it follows by a simple translation argument that it must be quasi-étale everywhere. Since the fibers of a quasi-étale map must be totally disconnected, it follows that the kernel of $\pi$ is a totally disconnected normal subgroup of $\widetilde G$. Finally, since $\pi$ is open, the image of $\pi$ is an open subgroup. Since we have assumed that $G$ is connected, it follows that $\pi$ is surjective. ◻ ## Construction of the Lie functor We will now prove the main theorem about differentiating singular Lie groupoids. Let us establish some notation for the relevant categories: $$\mathbf{Alg}:= \{ \text{Category of Lie algebroids} \}$$ $$\mathbf{LocLieGrpd}:= \{ \text{Category of local Lie groupoids} \}$$ $$\mathbf{SingLocLieGrpd}:= \{ \text{Category of singular local Lie groupoids} \}$$ Let us first restate the main theorem from the introduction: ** 1**. *Let $\mathbf{SingLieGrpd}$ be the category of $\mathbf{QUED}$-groupoids where the space of objects is a smooth manifold.
There exists a functor: $$\widehat\mathbf{Lie}\colon \mathbf{SingLieGrpd}\to \mathbf{Alg}$$ with the property that $\widehat\mathbf{Lie}|_{\mathbf{LieGrpd}} = \mathbf{Lie}$.* In fact, we will prove a stronger result than the one stated above. The full version is stated for local singular Lie groupoids and includes a claim that says the functor is essentially "unique". **Theorem 62**. *There exists a functor $$\widehat\mathbf{Lie}\colon \mathbf{SingLocLieGrpd}\to \mathbf{Alg}$$ with the following two properties:* (a) *For all wide local Lie groupoid charts $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ we have that $\widehat\mathbf{Lie}(\pi)$ is an isomorphism.* (b) *$\widehat\mathbf{Lie}= \mathbf{Lie}$ when restricted to the subcategory of local Lie groupoids.* *Furthermore, such a functor is unique up to a natural isomorphism.* *Proof.* Recall that $\mathbf{Lie}\colon \mathbf{LocLieGrpd}\to \mathbf{Alg}$ is an equivalence of categories. Therefore, we shall instead prove a closely related fact from which the above theorem will immediately follow: We claim that there exists a unique functor $$\mathbf{F}\colon \mathbf{SingLocLieGrpd}\to \mathbf{LocLieGrpd}$$ with the following two properties: (1) For all wide local Lie groupoid charts $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ we have that $\mathbf{F}[\pi]$ is an isomorphism. (2) $\mathbf{F}|_{\mathbf{LocLieGrpd}} = \mathop{\mathrm{Id}}$. First let us construct such an $\mathbf{F}$. For each local singular Lie groupoid $\mathcal{G}$ choose a wide local groupoid chart $\pi_\mathcal{G}\colon \widetilde\mathcal{G}\to \mathcal{G}$. We make this choice in such a way that when $\mathcal{G}$ is a local Lie groupoid we take $\widetilde\mathcal{G}= \mathcal{G}$. Therefore, for each such $\mathcal{G}$ we define $\mathbf{F}(\mathcal{G}) := \widetilde\mathcal{G}$.
If $[\mathcal{F}] \colon \mathcal{G}\to \mathcal{H}$ is a morphism of singular local Lie groupoids, let $\mathbf{F}[\mathcal{F}] \colon \widetilde\mathcal{G}\to \widetilde\mathcal{H}$ be the unique morphism which makes the following diagram commute: $$\begin{tikzcd} \mathbf{F}(\mathcal{G}) \arrow[d, "{[\pi_\mathcal{G}]}"] \arrow[r, "{\mathbf{F}[\mathcal{F}]}"] & \mathbf{F}(\mathcal{H}) \arrow[d, "{[\pi_\mathcal{H}]}"] \\ \mathcal{G}\arrow[r, "{[\mathcal{F}]}"] & \mathcal{H} \end{tikzcd}$$ Such a morphism is guaranteed to exist, and to be unique, by Theorem [Theorem 60](#theorem:existence.and.uniqueness.of.lifts){reference-type="ref" reference="theorem:existence.and.uniqueness.of.lifts"}. We must show that $\mathbf{F}$ is a functor. Suppose $[\mathcal{F}_1] \colon \mathcal{G}\to \mathcal{H}$ and $[\mathcal{F}_2] \colon \mathcal{H}\to \mathcal{K}$ are a pair of morphisms of singular local Lie groupoids. Then we get a commutative diagram in $\mathbf{SingLocLieGrpd}$: $$\begin{tikzcd}[column sep = large] \mathbf{F}(\mathcal{G}) \arrow[d, "{[\pi_\mathcal{G}]}"] \arrow[r, "{\mathbf{F}[\mathcal{F}_1]}"] & \mathbf{F}(\mathcal{H}) \arrow[d, "{[\pi_\mathcal{H}]}"] \arrow[r, "{\mathbf{F}[ \mathcal{F}_2]}"] & \mathbf{F}(\mathcal{K}) \arrow[d, "{[\pi_\mathcal{K}]}"] \\ \mathcal{G}\arrow[r, "{[\mathcal{F}_1]}"] & \mathcal{H}\arrow[r, "{[\mathcal{F}_2]}"] & \mathcal{K} \end{tikzcd}$$ From the definition of $\mathbf{F}([\mathcal{F}_2] \circ [\mathcal{F}_1])$ we conclude from the uniqueness part of Theorem [Theorem 60](#theorem:existence.and.uniqueness.of.lifts){reference-type="ref" reference="theorem:existence.and.uniqueness.of.lifts"} that: $$\mathbf{F}([\mathcal{F}_2] \circ [\mathcal{F}_1]) = \mathbf{F}[\mathcal{F}_2] \circ \mathbf{F}[\mathcal{F}_1]$$ Now we will show that $\mathbf{F}$ satisfies (1) and (2) above. \(1\) Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a wide local groupoid chart of a singular Lie groupoid.
From the definition of $\mathbf{F}$ we have a commuting diagram: $$\begin{tikzcd} \mathbf{F}(\mathcal{G}) = \widetilde\mathcal{G}\arrow[d, "{[\mathop{\mathrm{Id}}_{\widetilde\mathcal{G}}]}"] \arrow[r, "{\mathbf{F}[\pi]}"] & \mathbf{F}(\mathcal{G}) \arrow[d, "{[\pi_\mathcal{G}]}"] \\ \widetilde\mathcal{G}\arrow[r, "{[\pi]}"] & \mathcal{G} \end{tikzcd}$$ Since $\pi$ is a quasi-étale chart, it follows from Proposition [Proposition 36](#proposition:local presentations.of.morphisms){reference-type="ref" reference="proposition:local presentations.of.morphisms"} that $\mathbf{F}[\pi]$ must be a submersion in a neighborhood of the identity elements. By a dimension count, we conclude that $\mathbf{F}[\pi]$ is a diffeomorphism in a neighborhood of the identity elements and so it is an isomorphism of local Lie groupoids. Property (2) is immediate from the fact that $\mathbf{F}(\mathcal{G}) = \mathcal{G}$ by definition for $\mathcal{G}\in \mathbf{LocLieGrpd}$. Now suppose $\mathbf{F}'$ is another functor satisfying properties (1) and (2) above. Given $\mathcal{G}\in \mathbf{SingLocLieGrpd}$ let: $$\eta(\mathcal{G}) := \mathbf{F}' [\pi_\mathcal{G}] \colon \mathbf{F}(\mathcal{G}) \to \mathbf{F}'(\mathcal{G})$$ The domain and codomain are as above since $\mathbf{F}'$ satisfies property (2). Furthermore, for each $\mathcal{G}$ we have that $\eta(\mathcal{G})$ is an isomorphism since $\mathbf{F}'$ satisfies property (1). We only need to check that $\eta$ defines a natural transformation. Suppose we have $[\mathcal{F}] \colon \mathcal{G}\to \mathcal{H}$.
From the definition of $\mathbf{F}[\mathcal{F}]$ we have that the following square commutes: $$\begin{tikzcd} \mathbf{F}(\mathcal{G}) \arrow[r, "{\mathbf{F}[\mathcal{F}]}"] \arrow[d, "{[\pi_\mathcal{G}]}"] & \mathbf{F}(\mathcal{H}) \arrow[d, "{[\pi_\mathcal{H}]}"] \\ \mathcal{G}\arrow[r, "{[\mathcal{F}]}"] & \mathcal{H} \end{tikzcd}$$ If we apply the functor $\mathbf{F}'$ to this square we get a new commuting square: $$\begin{tikzcd} \mathbf{F}(\mathcal{G}) \arrow[r, "{\mathbf{F}[\mathcal{F}]}"] \arrow[d, "\eta(\mathcal{G})"] & \mathbf{F}(\mathcal{H}) \arrow[d, "\eta(\mathcal{H})"] \\ \mathbf{F}' (\mathcal{G}) \arrow[r, "{\mathbf{F}'[\mathcal{F}]}"] & \mathbf{F}'(\mathcal{H}) \end{tikzcd}$$ This proves that $\eta$ is a natural transformation. ◻ By viewing singular Lie groupoids as singular local Lie groupoids, it follows that Theorem [Theorem 1](#theorem:main.lie.functor){reference-type="ref" reference="theorem:main.lie.functor"} is a direct corollary of Theorem [Theorem 62](#theorem:main.lie.functor.fancy){reference-type="ref" reference="theorem:main.lie.functor.fancy"}. Although the definition of the Lie functor above is somewhat abstract, computing it is not too difficult for many kinds of singular Lie groupoids. Let us fix a choice of functor $\widehat\mathbf{Lie}$ satisfying Theorem [Theorem 62](#theorem:main.lie.functor.fancy){reference-type="ref" reference="theorem:main.lie.functor.fancy"}. Suppose $\mathcal{G}$ is a singular Lie groupoid and let $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ be a wide local groupoid chart. Notice that if we apply the functor $\widehat\mathbf{Lie}$ to such a local groupoid chart we get an isomorphism: $$\widehat\mathbf{Lie}[\pi] \colon \mathbf{Lie}(\widetilde\mathcal{G}) \to \widehat\mathbf{Lie}(\mathcal{G})$$ In other words, $\widehat\mathbf{Lie}(\mathcal{G})$ is canonically isomorphic to the classical Lie algebroid of the domain $\widetilde\mathcal{G}$ of any wide local groupoid chart.
Therefore, we can compute $\widehat\mathbf{Lie}$ by simply constructing a local groupoid chart and then applying the classical Lie functor. **Example 63**. Suppose $G$ is a Lie group and suppose $N \subseteq G$ is a totally disconnected normal subgroup. We observed earlier that $G / N$ is a singular Lie groupoid (over a point) and the projection map $\pi \colon G \to G/N$ is a local groupoid chart. Therefore, $\widehat\mathbf{Lie}(G/N) \cong \mathbf{Lie}(G)$. This leads to some interesting calculations. For example, the group $\mathbb{R}/\mathbb{Q}$ has rather degenerate topology but $\widehat\mathbf{Lie}(\mathbb{R}/ \mathbb{Q}) \cong \mathbb{R}$ so it has a perfectly acceptable Lie algebra. ## Example - singular Lie groupoids with integrable algebroids This subsection is not necessary for proving our main theorems. However, it does tell us what a singular Lie groupoid with an integrable algebroid must look like. It generalizes the example of the singular Lie group. **Lemma 64**. *Suppose $\widetilde\mathcal{G}\rightrightarrows M$ is a Lie groupoid and $\mathcal{N}\subseteq\widetilde\mathcal{G}$ is a wide normal subgroupoid. We think of $\mathcal{N}$ as a diffeological space via the subspace diffeology. Suppose further that $\mathcal{N}$ has the following properties:* (a) *$\mathcal{N}$ includes only isotropy arrows. In other words, $s|_\mathcal{N}= t|_\mathcal{N}$.* (b) *The smooth map $s|_\mathcal{N}\colon \mathcal{N}\to M$ is a local subduction.* (c) *For each $x \in M$ the fiber $\mathcal{N}_x := s^{-1}(x) \cap \mathcal{N}$ is totally disconnected.* *Then $\mathcal{G}:= \widetilde\mathcal{G}/\mathcal{N}$ with the quotient diffeology is a singular Lie groupoid.* *Proof.* First we show that the projection map $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is quasi-étale. The fibers of $\pi$ are clearly totally disconnected due to property (c) above. From the definition of the quotient diffeology, we know $\pi$ is a subduction but we must show it is a local subduction.
Suppose $\phi \colon U_\phi \to \mathcal{G}$ is a plot and $u_0 \in U_\phi$ and $g_0 \in \widetilde\mathcal{G}$ are such that $\phi(u_0) = \pi(g_0)$. Since $\pi$ is a subduction, we know that there must exist a smooth function $\psi \colon V \to \widetilde\mathcal{G}$ such that $V \subseteq U_\phi$ is an open neighborhood of $u_0$ and $\pi \circ \psi = \phi$. Let $n \in \mathcal{N}$ be the unique element such that $\psi(u_0) \cdot n = g_0$. Now let $\sigma \colon \mathcal{O}\to \mathcal{N}$ be a smooth section of $s |_{\mathcal{N}}$ such that $\mathcal{O}$ is an open neighborhood of $s(n)$ and $\sigma(s(n)) = n$. Such a section exists since $s|_\mathcal{N}$ is a local subduction. The function: $$\widetilde\phi(u) := \psi(u)\, \sigma(s(\psi(u)))$$ will be well defined in an open neighborhood of $u_0$ in $U_\phi$. Furthermore, $\pi \circ \widetilde\phi = \phi$ and $\widetilde\phi(u_0) = g_0$. Now we need to show that $\pi$ is quasi-étale. We will use the simplified criteria from Lemma [Lemma 48](#lemma:quasi.étale.alternate){reference-type="ref" reference="lemma:quasi.étale.alternate"}. Suppose $f \colon \mathcal{O}\to \widetilde\mathcal{G}$ is a smooth function defined on an open $\mathcal{O}\subseteq\widetilde\mathcal{G}$ such that $\pi \circ f = \pi$ and $f(g_0) = g_0$ for some point $g_0 \in \widetilde\mathcal{G}$. We need to show that $f$ is a diffeomorphism in a neighborhood of $g_0$. Let us set some notation; to avoid confusion we will write $\widetilde s$ and $\widetilde t$ to denote the source and target maps for $\widetilde\mathcal{G}$ and $s$ and $t$ to denote the source and target maps for $\mathcal{G}$. Assume without loss of generality that for all $x \in M$ we have that $\widetilde t^{-1}(x) \cap \mathcal{O}$ is connected.
Consider the function: $$\alpha \colon \mathcal{O}\to \mathcal{N}\qquad \alpha(g) := f(g) \cdot g^{-1}$$ Since the fibers of $\mathcal{N}\to M$ are totally disconnected, it follows that, for all $x \in M$, the restriction of $\alpha$ to $\widetilde t^{-1}(x) \cap \mathcal{O}$ is constant. In other words, $\alpha$ is constant on target fibers. Therefore, there exists a unique function $\sigma \colon \widetilde t(\mathcal{O}) \to \mathcal{N}$ which is a section of the source map and has the property that: $$\forall g \in \mathcal{O}\qquad \alpha(g) = \sigma(\widetilde t(g))$$ We can rewrite this to get that: $$\forall g \in \mathcal{O}\qquad f(g) \cdot g^{-1}= \sigma( \widetilde t(g))$$ In other words: $$\forall g \in \mathcal{O}\qquad f(g) = \sigma(\widetilde t(g)) \cdot g$$ From this equation it follows that: $$\forall g \in f(\mathcal{O}) \qquad f^{-1}(g) = \sigma(\widetilde t(g))^{-1}\cdot g$$ Since $f$ has a smooth inverse, $f$ is a diffeomorphism onto its image. This shows that $\pi$ is a quasi-étale chart. To finish the proof, we need to show that the source and target maps of $\mathcal{G}$ are $\mathbf{QUED}$-submersions; by symmetry, it suffices to treat the target map $t \colon \mathcal{G}\to M$. By Theorem [Theorem 47](#theorem:qeds.submersion.criteria){reference-type="ref" reference="theorem:qeds.submersion.criteria"}, it suffices to show that the fibers of $t$ are quasi-étale. Let us fix $x \in M$ and consider the projection: $$\pi_x \colon \widetilde t^{-1}(x) \to t^{-1}(x)$$ We claim that $\pi_x$ is a quasi-étale chart. A standard argument shows that $\pi_x$ is a local subduction. Of course, the fibers of $\pi_x$ are totally disconnected. To show $\pi_x$ is quasi-étale, we will again use the simplified criteria from Lemma [Lemma 48](#lemma:quasi.étale.alternate){reference-type="ref" reference="lemma:quasi.étale.alternate"}.
Suppose $f \colon \mathcal{O}\to \widetilde t^{-1}(x)$ is a smooth function defined on an open $\mathcal{O}\subseteq\widetilde t^{-1}(x)$ such that $\pi_x \circ f = \pi_x$ and $f(g_0) = g_0$ for some point $g_0 \in \widetilde t^{-1}(x)$. We need to show that $f$ is a diffeomorphism in a neighborhood of $g_0$. The argument that this $f$ is a local diffeomorphism is essentially identical to the one from earlier in this proof. We divide $f$ by the identity map and observe that $f$ must locally be left translation by an element of $\mathcal{N}$. ◻ The converse to the above lemma is also true. Every source connected singular Lie groupoid with an integrable Lie algebroid is the quotient of a Lie groupoid by a totally disconnected wide normal subgroupoid. **Lemma 65**. *Suppose $\mathcal{G}\rightrightarrows M$ is a source connected singular Lie groupoid and $\widehat\mathbf{Lie}(\mathcal{G})$ is an integrable Lie algebroid. Then there exists a Lie groupoid $\widetilde\mathcal{G}$ with a wide normal subgroupoid $\mathcal{N}$ satisfying properties (a), (b) and (c) from the previous lemma such that $\mathcal{G}\cong \widetilde\mathcal{G}/ \mathcal{N}$.* *Proof.* Let $\pi^\circ \colon \widetilde\mathcal{G}^\circ \to \mathcal{G}$ be a wide local groupoid chart. By a theorem of Fernandes and Michiels [@fernandes_associativity_2020], it is possible to choose $\widetilde\mathcal{G}^\circ$ in such a way that $\widetilde\mathcal{G}^\circ$ is an open subset of a source simply connected Lie groupoid $\widetilde\mathcal{G}$. Using the associative completion functor of Fernandes and Michiels, one can extend the map $\pi^\circ$ to a local groupoid chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ defined on all of $\widetilde\mathcal{G}$. Since $\pi$ is open and $\mathcal{G}$ is source connected, it follows that $\pi$ is surjective. Let $\mathcal{N}= \ker \pi$. We only need to show that $\mathcal{N}$ satisfies properties (a), (b) and (c).
Property (a) is immediate since $\pi$ covers the identity map at the level of objects. For property (b), suppose we have a plot $\phi \colon U_\phi \to M$ and $u_0 \in U_\phi$ together with $n \in \mathcal{N}$ such that $s(n) = \phi(u_0)$. Since the unit embedding $u\colon M \to \mathcal{G}$ is smooth, the map $u \circ \phi \colon U_\phi \to \mathcal{G}$ is a plot. Since $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a local subduction, there must exist an open neighborhood $V \subseteq U_\phi$ of $u_0$ and a lift $\widetilde\phi \colon V \to \widetilde\mathcal{G}$ such that $\pi \circ \widetilde\phi = u \circ \phi|_{V}$ and $\widetilde\phi(u_0) = n$. Since $\pi \circ \widetilde\phi = u \circ \phi|_V$ it follows that $s \circ \widetilde\phi = \phi|_V$ and the image of $\widetilde\phi$ is contained in $\mathcal{N}$. ◻ # Proof of Theorem [Theorem 59](#theorem:existence.and.uniqueness.of.LGC){reference-type="ref" reference="theorem:existence.and.uniqueness.of.LGC"} and Theorem [Theorem 60](#theorem:existence.and.uniqueness.of.lifts){reference-type="ref" reference="theorem:existence.and.uniqueness.of.lifts"} {#section:proof.of.main.theorem} Suppose $\mathcal{G}\rightrightarrows M$ is a local singular Lie groupoid. Following our usual convention, let us write $s, t, u, m$ and $i$ to denote the source, target, unit, multiplication and inverse groupoid structure maps for $\mathcal{G}\rightrightarrows M$, respectively. We will also use: $$\delta \colon \mathcal{D}\to \mathcal{G}\qquad (g,h) \mapsto m(g,i(h))$$ to denote the division map. Note that the domain of division: $$\mathcal{D}:= \{ (g,h) \in \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{s}} \mathcal{G}\ : \ m(g,i(h)) \text{ is well-defined} \}$$ is an open neighborhood of the image of $u\times u \colon M \to \mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{s}} \mathcal{G}$. It will be desirable for us to be able to compare elements of $\mathcal{G}$ by dividing them.
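Before doing so, it may help to record explicitly how division determines the remaining structure maps. The following identities are not spelled out in the text, but each follows directly from the groupoid axioms (given $s$, $t$ and $u$): $$i(g) = \delta\big( u(s(g)),\, g \big), \qquad m(g,h) = \delta\big( g,\, i(h) \big) = \delta\big( g,\, \delta( u(s(h)),\, h) \big)$$ Indeed, $\delta(u(s(g)), g) = m(u(s(g)), i(g)) = m(u(t(i(g))), i(g)) = i(g)$ by the left unit law, and $\delta(g, i(h)) = m(g, i(i(h))) = m(g,h)$ since $i(i(h)) = h$. In the local setting the same identities hold wherever both sides are defined, which is why a local representation of the division map will suffice to reconstruct the local groupoid structure near the units.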
Our next lemma states that any local singular Lie groupoid is isomorphic to one where this is the case. **Lemma 66**. *Suppose $\mathcal{G}' \rightrightarrows M$ is a local singular Lie groupoid.* *Then $\mathcal{G}'$ is isomorphic (as a local groupoid) to a local singular Lie groupoid $\mathcal{G}\rightrightarrows M$ with the property: $$\forall (g,h) \in \mathcal{D}\qquad \delta(g,h) = u\circ t(g) \quad \Leftrightarrow \quad g = h$$* *Proof.* Consider the following calculation: $$\begin{aligned} (gh^{-1})h &= (u\circ t(g))h \\ g (h^{-1}h) &= h \\ g (u\circ s(h)) &= h \\ g &= h\end{aligned}$$ Let $\mathcal{G}\subseteq\mathcal{G}'$ be an open neighborhood of the units with the property that for all $g, h \in \mathcal{G}$ we have that each step of the above calculation is well-defined. Since being well-defined is an open condition (it is just about being in the inverse image of open sets under some continuous functions), it follows that $\mathcal{G}$ is an open set. $\mathcal{G}$ will be an open neighborhood of the units since the above calculation is always well-defined for units. Now if we have that $\delta(g, h) = u\circ t(g)$ for $g,h \in \mathcal{G}$, it follows from the above calculation that $g = h$. ◻ **Remark 67**. The above proof can be generalized into the following principle: For any local groupoid and a finite number of equations that are consequences of the groupoid axioms, there exists an open local subgroupoid where the desired equations hold. Crucially, this only holds for a *finite* number of equations as an infinite intersection of open sets may not be open. ## Lifting the division map We will begin by showing that, given a quasi-étale chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$, we can lift the division operation to $\widetilde\mathcal{G}$ in a way that has favorable properties.
Given a quasi-étale chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ we will use the conventions that: $\widetilde s := s \circ \pi$ and $\widetilde t := t \circ \pi$. Since $s$, $t$ and $\pi$ are local subductions, $\widetilde s$ and $\widetilde t$ will be submersions. Rather than choosing local representations of each of the groupoid structure maps, we will begin by choosing a local presentation of division. This is due to the fact that all of the remaining structure maps can be recovered from division. Indeed, (local) groupoids can be studied entirely in terms of their division map and source map (see the appendix of Crainic, Nuno Mestre, and Struchiner [@crainic_deformations_2020]). Our first lemma says that it is possible to find a lift of the division map: **Lemma 68**. *Let $\mathcal{G}$ be a singular Lie groupoid and let us fix a point $x_0 \in M$. Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a quasi-étale chart and we have a point $e \in \widetilde\mathcal{G}$ such that $\pi(e) = u(x_0)$.* *There exists an open neighborhood $\widetilde\mathcal{D}\subseteq\widetilde\mathcal{G}\!\tensor*[^{}_{\widetilde s}]{\times}{^{}_{\widetilde s}} \widetilde\mathcal{G}$ of $(e,e)$ together with a submersion $\widetilde\mathbf{\delta}\colon \widetilde\mathcal{D}\to \widetilde\mathcal{G}$ such that the following diagram commutes: $$\label{diagram:delta.is.lift} \begin{tikzcd}[column sep=huge] \widetilde\mathcal{D}\arrow[d, "(\pi \circ \mathop{\mathrm{pr}}_1) \times (\pi \circ \mathop{\mathrm{pr}}_2)", swap] \arrow[r, "\widetilde\mathbf{\delta}"] & \widetilde\mathcal{G}\arrow[d, "\pi"] \\ \mathcal{D}\arrow[r, "\mathbf{\delta}"] & \mathcal{G} \end{tikzcd}$$* *Proof.* Note that since $s$ is a $\mathbf{QUED}$-submersion, it follows that $\mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{s}} \mathcal{G}$ is quasi-étale and the map: $$\pi \circ \mathop{\mathrm{pr}}_1 \times \pi \circ \mathop{\mathrm{pr}}_2 \colon \widetilde\mathcal{G}\!\tensor*[^{}_{\widetilde s}]{\times}{^{}_{\widetilde s}} \widetilde\mathcal{G}\to
\mathcal{G}\!\tensor*[^{}_{s}]{\times}{^{}_{s}} \mathcal{G}$$ is a quasi-étale chart. Since the domain of division, $\mathcal{D}$, is an open subset of a quasi-étale space, it follows that $\mathcal{D}$ is quasi-étale as well. Since $\pi$ is a local subduction, it is possible to choose a local representation of the division map $\widetilde\delta \colon \widetilde\mathcal{D}\to \widetilde\mathcal{G}$ where $\widetilde\mathcal{D}\subseteq\widetilde\mathcal{G}\!\tensor*[^{}_{\widetilde s}]{\times}{^{}_{\widetilde s}} \widetilde\mathcal{G}$ is an open neighborhood of $(e,e)$ and which makes Diagram [\[diagram:delta.is.lift\]](#diagram:delta.is.lift){reference-type="ref" reference="diagram:delta.is.lift"} commute. Furthermore, we can apply Lemma [Lemma 35](#lemma:fiber.hopping){reference-type="ref" reference="lemma:fiber.hopping"} to choose $\widetilde\delta$ in such a way that $\widetilde\delta(e,e) = e$. Since $\mathbf{\delta}$ is a local subduction, Proposition [Proposition 36](#proposition:local presentations.of.morphisms){reference-type="ref" reference="proposition:local presentations.of.morphisms"} tells us that $\widetilde\mathbf{\delta}$ will be a submersion. ◻ The core of our proof of the existence of local groupoid charts will be the fact that a map $\widetilde\delta$ as above will always be (in some open neighborhood) the division map for a local groupoid. Our next lemma tells us that a choice of representation of the division map also induces a lift of the unit embedding. **Lemma 69**. *Let $\mathcal{G}$ be a singular Lie groupoid and let us fix a point $x_0 \in M$. Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a quasi-étale chart and we have a point $e \in \widetilde\mathcal{G}$ such that $\pi(e) = u(x_0)$.
Let $\widetilde\delta$ be a representation of division as in Lemma [Lemma 68](#lemma:delta.exists){reference-type="ref" reference="lemma:delta.exists"}.* *There exists a smooth function $$\widetilde u\colon \widetilde M \to \widetilde\mathcal{G}$$ where $\widetilde M$ is an open neighborhood of $x_0$ in $M$ and such that:* - *$\pi \circ \widetilde u= u$* - *$\widetilde u(x_0) = e$* - *$\forall x \in \widetilde M$ we have that $\widetilde\mathbf{\delta}(\widetilde u(x), \widetilde u(x)) = \widetilde u(x)$.* *Proof.* First let $\mathcal{U}\subseteq\widetilde\mathcal{G}$ be an open neighborhood of $e$ such that for all $g \in \mathcal{U}$, we have that $(g,g) \in \widetilde\mathcal{D}$. In other words, $(g,g)$ is in the domain of $\widetilde\delta$. Write $\Delta \colon \mathcal{U}\to \widetilde\mathcal{D}$ to denote the diagonal embedding. Now, note that for all $g \in \mathcal{U}$, compatibility of $\mathbf{\delta}$ with $\pi$ implies that: $$\widetilde t( g) = (\widetilde s\circ \widetilde\mathbf{\delta}\circ \Delta)(g)$$ Since $\widetilde t$ is a submersion, this implies that $\widetilde\mathbf{\delta}\circ \Delta$ has rank at least equal to the dimension of the object manifold $M$. On the other hand, if $g(t)$ is a curve in $\mathcal{U}$ tangent to a fiber of $\widetilde t|_{\mathcal{U}}$, then $(\pi \circ \widetilde\mathbf{\delta}\circ \Delta)(g(t)) = u(\widetilde t(g(t)))$ is a constant path. Since $\pi$ is quasi-étale, its fibers are totally disconnected, so $(\widetilde\mathbf{\delta}\circ \Delta)(g(t))$ is a constant path; that is, $g(t)$ stays in a single fiber of $\widetilde\mathbf{\delta}\circ \Delta$. This implies that the kernel distribution of $\widetilde t|_{\mathcal{U}}$ is contained in the kernel distribution of $\widetilde\mathbf{\delta}\circ \Delta$. In other words: $$\ker T \widetilde t\subseteq\ker T (\widetilde\mathbf{\delta}\circ \Delta)$$ Since the rank of $\widetilde\mathbf{\delta}\circ \Delta$ is at least equal to the rank of $\widetilde t|_{\mathcal{U}}$, a dimension count tells us that the kernel distributions are actually equal.
Now let us shrink $\mathcal{U}$ to a smaller neighborhood of $e \in \widetilde\mathcal{G}$ with the property that the fibers of $\widetilde t|_{\mathcal{U}} \colon \mathcal{U}\to M$ and $\widetilde\delta \circ \Delta \colon \mathcal{U}\to \widetilde\mathcal{G}$ coincide. This means that for all $g, h \in \mathcal{U}$, we have that: $$\label{eqn:fix.units} \widetilde t(g) = \widetilde t(h) \quad \Leftrightarrow \quad \widetilde\mathbf{\delta}(g,g) = \widetilde\mathbf{\delta}(h,h)$$ From all of these facts, we conclude that $\widetilde t$ is a diffeomorphism when restricted to the image of $\widetilde\mathbf{\delta}\circ \Delta$. In other words, the image of $\widetilde\mathbf{\delta}\circ \Delta$ must be the image of a section $\widetilde u\colon \widetilde M \to \widetilde\mathcal{G}$ of $\widetilde t$. Compatibility of $\pi$ with $\widetilde\mathbf{\delta}$ implies that $\pi \circ \widetilde u= u$. Furthermore, since $\widetilde\mathbf{\delta}( e , e ) = e$ we know that $\widetilde u(x_0) = e$. Finally, for any $x \in \widetilde M$, we have that: $$\widetilde t(\widetilde u(x) ) = \widetilde t( \widetilde\mathbf{\delta}( \widetilde u(x), \widetilde u(x)))$$ By ([\[eqn:fix.units\]](#eqn:fix.units){reference-type="ref" reference="eqn:fix.units"}) we conclude that $\widetilde\mathbf{\delta}(\widetilde u(x), \widetilde u(x)) = \widetilde u(x)$. ◻ The final lemma for this subsection tells us that a lift of the division map (as in the above lemmas) can be used as an equality test: **Lemma 70**. *Let $\mathcal{G}$ be a local singular Lie groupoid and let us fix a point $x_0 \in M$. Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a quasi-étale chart and we have a point $e \in \widetilde\mathcal{G}$ such that $\pi(e) = u(x_0)$.
Let $\widetilde\delta \colon \widetilde\mathcal{D}\to \widetilde\mathcal{G}$ be a representation of division as in Lemma [Lemma 68](#lemma:delta.exists){reference-type="ref" reference="lemma:delta.exists"} and $\widetilde u\colon \widetilde M \to \widetilde\mathcal{G}$ be a lift of the units as in Lemma [Lemma 69](#lemma:fix.units){reference-type="ref" reference="lemma:fix.units"}.* *There exists an open neighborhood $\mathcal{O}\subseteq\widetilde\mathcal{G}$ of $e$ with the following properties:* - *$\mathcal{O}\times_{\widetilde s, \widetilde s} \mathcal{O}\subseteq\widetilde\mathcal{D}$* - *For all $(g, h) \in \mathcal{O}\times_{\widetilde s, \widetilde s} \mathcal{O}$ we have that $g = h$ if and only if $\widetilde\delta (g,h) = \widetilde u({\widetilde t(g)})$.* *Proof.* First observe that $\widetilde u(\widetilde M)$ is a cross section of a submersion and is therefore an embedded submanifold. Since $\widetilde\mathbf{\delta}$ is a submersion, we know that the inverse image $\widetilde\mathbf{\delta}^{-1} (\widetilde u(\widetilde M))$ is an embedded submanifold of dimension equal to the dimension of $\widetilde\mathcal{G}$. Now let $\mathcal{U}\subseteq\widetilde\mathcal{G}$ and $\Delta \colon \mathcal{U}\to \widetilde\mathcal{D}$ be as in the proof of Lemma [Lemma 69](#lemma:fix.units){reference-type="ref" reference="lemma:fix.units"}. From the construction of $\widetilde u$ we know that: $$\Delta (\mathcal{U}) \subseteq\widetilde\mathbf{\delta}^{-1}(\widetilde u(\widetilde M) )$$ By a dimension count, $\Delta (\mathcal{U})$ is actually an open subset of $\widetilde\mathbf{\delta}^{-1}(\widetilde u (\widetilde M ) )$ containing $(e,e)$. Now let $\mathcal{O}\subseteq\widetilde\mathcal{G}$ be an open neighborhood of $e$ such that: $$(\mathcal{O}\times_{\widetilde s,\widetilde s} \mathcal{O}) \cap \widetilde\mathbf{\delta}^{-1}(\widetilde u( \widetilde M )) \subseteq\Delta(\mathcal{U})$$ We claim that this is the desired open subset.
Suppose $g, h \in \mathcal{O}$ have the same source and $\widetilde\mathbf{\delta}(g,h) = \widetilde u({\widetilde t(g)})$. Then: $$(g,h) \in (\mathcal{O}\times_{\widetilde s,\widetilde s} \mathcal{O}) \cap \widetilde\mathbf{\delta}^{-1}(\widetilde u(\widetilde M))$$ Therefore $(g,h) \in \Delta(\mathcal{U})$ and $g = h$. Conversely, if $g = h$ then $(g,g) \in \Delta(\mathcal{U})$, so by the construction of $\widetilde u$ in Lemma [Lemma 69](#lemma:fix.units){reference-type="ref" reference="lemma:fix.units"} the element $\widetilde\mathbf{\delta}(g,g)$ lies in the image of $\widetilde u$; since $\widetilde t(\widetilde\mathbf{\delta}(g,g)) = \widetilde t(g)$, this forces $\widetilde\mathbf{\delta}(g,g) = \widetilde u(\widetilde t(g))$. ◻ ## Division structures and comparing maps It will be useful to formalize the properties from the previous three lemmas into a definition. **Definition 71**. Let $\mathcal{G}\rightrightarrows M$ be a local singular Lie groupoid. Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a quasi-étale chart. A *division structure* on $\pi$ consists of the following: - A smooth function: $$\widetilde u\colon \widetilde M \to \widetilde\mathcal{G}$$ where $\widetilde M = \widetilde s(\widetilde\mathcal{G}) \subseteq M$. - A smooth function: $$\widetilde\delta \colon \widetilde\mathcal{D}\to \widetilde\mathcal{G}$$ where $\widetilde\mathcal{D}\subseteq\widetilde\mathcal{G}\!\tensor*[^{}_{\widetilde s}]{\times}{^{}_{\widetilde s}} \widetilde\mathcal{G}$ is an open neighborhood of the image of $\widetilde u\times \widetilde u$. We require these two functions to satisfy the following properties: (a) $\widetilde\delta$ is a representation of division. In other words, Diagram [\[diagram:delta.is.lift\]](#diagram:delta.is.lift){reference-type="ref" reference="diagram:delta.is.lift"} commutes: $\pi \circ \widetilde\delta = \mathbf{\delta}\circ ((\pi \circ \mathop{\mathrm{pr}}_1) \times (\pi \circ \mathop{\mathrm{pr}}_2))$. (b) $\widetilde u$ is a lift of the units. In other words, $\pi \circ \widetilde u= u|_{\widetilde M}$.
(c) For all $(g,h) \in \widetilde\mathcal{D}$ we have that: $$\widetilde\delta(g,h) = \widetilde u \circ \widetilde t(g) \quad \Leftrightarrow \quad g = h$$ The combined effect of Lemma [Lemma 68](#lemma:delta.exists){reference-type="ref" reference="lemma:delta.exists"}, Lemma [Lemma 69](#lemma:fix.units){reference-type="ref" reference="lemma:fix.units"} and Lemma [Lemma 70](#lemma:division.lemma){reference-type="ref" reference="lemma:division.lemma"} is that around any point $x_0 \in M$ we can find a quasi-étale chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ around $u(x_0)$ equipped with a division structure. The power of division structures is that they enable us to develop very useful tests for equality of certain maps. First let us establish some notation. Given a natural number $n$ and $U \subseteq\widetilde\mathcal{G}$ open, we will write: $$U^{(n)} := \overbrace{U \times_{\widetilde s,\widetilde t} U \times_{\widetilde s,\widetilde t} \cdots \times_{\widetilde s,\widetilde t} U}^{n\text{-times}}$$ In other words, $U^{(n)}$ is the set of $n$-tuples of "composable" elements of $U$. Our next lemma provides us with a way of determining when exactly a function on the composable arrows takes values only in units. **Lemma 72**. *Suppose $\mathcal{G}\rightrightarrows M$ is a local singular Lie groupoid and $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a quasi-étale chart equipped with a division structure.* *Suppose $\mathcal{U}\subseteq\widetilde\mathcal{G}$ is an open neighborhood of the image of $\widetilde u$.
Given a natural number $n$ suppose that we have a smooth function: $$F \colon \mathcal{U}^{(n)} \to \widetilde\mathcal{G}$$ with the following properties:* - *The image of $\pi \circ F$ contains only unit arrows in $\mathcal{G}$.* - *$\widetilde t \circ F = \widetilde t \circ \mathop{\mathrm{pr}}_1$* - *For all $x \in \widetilde u^{-1}(\mathcal{U})$, we have that $F(\widetilde u(x), \ldots , \widetilde u(x)) = \widetilde u(x)$* *Then there exists an open neighborhood $\mathcal{O}\subseteq\mathcal{U}$ of the image of $\widetilde u$ with the property that for all $(g_1, \ldots , g_n ) \in \mathcal{O}^{(n)}$ we have that $$F(g_1, \ldots , g_n) = (\,\widetilde u \circ \widetilde t \,)(g_1)$$ In particular, after restricting to a small enough open neighborhood of the image of $\widetilde u$, the function $F$ takes values only in the image of $\widetilde u$.* *Proof.* Notice that since $\widetilde t \circ \mathop{\mathrm{pr}}_1$ is a submersion and $$\widetilde t\circ F = \widetilde t\circ \mathop{\mathrm{pr}}_1$$ this implies that $F$ has rank at least equal to the rank of $\widetilde t\circ \mathop{\mathrm{pr}}_1$, which is equal to the dimension of $M$. Now we claim that the kernel distribution of $TF$ contains the kernel distribution of $T (\widetilde t \circ \mathop{\mathrm{pr}}_1)$. By a dimension count, this will imply their kernel distributions are equal. To see this, suppose $\gamma$ is a path tangent to the kernel distribution of $T(\widetilde t \circ \mathop{\mathrm{pr}}_1)$. We know that $\widetilde t \circ F \circ \gamma$ is constant. Furthermore, since $\pi \circ F$ takes values only in unit elements, and a unit arrow is determined by its target, we can conclude that $\pi \circ F \circ \gamma$ is constant, so $F \circ \gamma$ is a path in the $\pi$-fiber of a unit element in $\mathcal{G}$. However, the fibers of $\pi$ are totally disconnected, and so this implies that $F \circ \gamma$ is a constant path.
Therefore, $TF$ and $T(\widetilde t \circ \mathop{\mathrm{pr}}_1)$ have identical kernel distributions. Now, notice that the map: $$\widetilde u^{(n)} \colon \widetilde M \to \widetilde\mathcal{G}^{(n)} \qquad x \mapsto ( \widetilde u(x) , \widetilde u(x) , \ldots , \widetilde u(x) )$$ is a section of $\widetilde t \circ \mathop{\mathrm{pr}}_1$. By the local normal form theorem for submersions around a section, it is possible to find an open neighborhood $\mathcal{W}$ of the image of $\widetilde u^{(n)}$ in the domain of $F$ with the property that the fibers of $\widetilde t \circ \mathop{\mathrm{pr}}_1$ and $F$ are connected and coincide. Now we claim that $$F|_{\mathcal{W}} = \widetilde u \circ \widetilde t \circ \mathop{\mathrm{pr}}_1 |_{\mathcal{W}}$$ To see why, first notice that the fibers of these two functions coincide. Furthermore $$F(\widetilde u(x), \ldots , \widetilde u(x)) = \widetilde u(x) = \widetilde u \circ \widetilde t \circ \mathop{\mathrm{pr}}_1(\widetilde u(x), \ldots , \widetilde u(x))$$ Since the fibers of $F|_{\mathcal{W}}$ and $\widetilde u \circ \widetilde t \circ \mathop{\mathrm{pr}}_1 |_{\mathcal{W}}$ are equal and they agree on one element of each fiber, the two functions are equal. Therefore, the proof is completed by choosing an open neighborhood $\mathcal{O}\subseteq\mathcal{U}$ of the image of $\widetilde u$ with the property that $\mathcal{O}^{(n)} \subseteq\mathcal{W}$. ◻ Our next lemma is just an upgrade of the previous lemma. It provides us with a very useful equality test. **Lemma 73**. *Suppose $\mathcal{G}\rightrightarrows M$ is a local singular Lie groupoid and $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a quasi-étale chart equipped with a division structure.* *Suppose $\mathcal{U}\subseteq\widetilde\mathcal{G}$ is an open neighborhood of the image of $\widetilde u$.
Given a natural number $n$, suppose that we have a pair of smooth functions: $$\alpha \colon \mathcal{U}^{(n)} \to \widetilde\mathcal{G}\qquad \beta \colon \mathcal{U}^{(n)} \to \widetilde\mathcal{G}$$ with the following properties:* - *$\pi \circ \alpha = \pi \circ \beta$* - *$\widetilde t \circ \alpha = \widetilde t \circ \mathop{\mathrm{pr}}_1$* - *For all $x \in \widetilde u^{-1}(\mathcal{U})$, we have that $$\alpha(\widetilde u(x), \ldots , \widetilde u(x)) = \beta(\widetilde u(x), \ldots , \widetilde u(x)) = \widetilde u(x)$$* *Then there exists an open neighborhood $\mathcal{O}\subseteq\mathcal{U}$ of the image of $\widetilde u$ such that $\alpha|_{\mathcal{O}^{(n)}} = \beta|_{\mathcal{O}^{(n)}}$* *Proof.* Let: $$F \colon \mathcal{U}^{(n)} \to \widetilde\mathcal{G}\qquad F(g_1,\ldots , g_n) := \widetilde\delta(\alpha(g_1, \ldots , g_n), \beta(g_1,\ldots , g_n))$$ We claim that $F$ satisfies the hypotheses of Lemma [Lemma 72](#lemma:failure.function){reference-type="ref" reference="lemma:failure.function"}. For the first bullet point, note that for $\overline{g} \in \mathcal{U}^{(n)}$: $$\pi \circ F (\overline{g}) = \delta(\pi \circ \alpha(\overline{g}), \pi \circ \beta ({\overline{g}}))$$ Since we have assumed that $\pi \circ \alpha = \pi \circ \beta$, it follows that dividing them in $\mathcal{G}$ results in a unit element. Therefore $\pi \circ F$ takes values only in unit elements. For the second bullet point, note that since $\widetilde t \circ \widetilde\delta = \widetilde t \circ \mathop{\mathrm{pr}}_1$ it follows that $\widetilde t \circ F = \widetilde t \circ \alpha$. By assumption, $\widetilde t \circ \alpha = \widetilde t \circ \mathop{\mathrm{pr}}_1$ so the bullet point holds.
Finally, given $x \in \widetilde M$, we have that: $$\begin{aligned} F(\widetilde u(x), \ldots , \widetilde u(x)) &= \widetilde\delta( \alpha(\widetilde u(x), \ldots , \widetilde u(x)), \beta(\widetilde u(x), \ldots , \widetilde u(x))) \\ &= \widetilde\delta( \widetilde u(x), \widetilde u (x)) \\ &= \widetilde u(x) \end{aligned}$$ Since $F$ satisfies the hypotheses of Lemma [Lemma 72](#lemma:failure.function){reference-type="ref" reference="lemma:failure.function"}, it follows that there exists an open neighborhood $\mathcal{O}\subseteq\mathcal{U}$ of the image of $\widetilde u$ with the property that: $$\forall (g_1, \ldots , g_n ) \in \mathcal{O}^{(n)} \qquad \widetilde\delta(\alpha(g_1, \ldots , g_n), \beta(g_1, \ldots , g_n)) = \widetilde u( \widetilde t(g_1))$$ By Lemma [Lemma 70](#lemma:division.lemma){reference-type="ref" reference="lemma:division.lemma"}, we conclude that: $$\forall (g_1, \ldots , g_n ) \in \mathcal{O}^{(n)} \qquad \alpha(g_1, \ldots , g_n) = \beta(g_1, \ldots , g_n)$$ ◻ ## Local Existence of charts **Proposition 74** (Existence of local groupoid charts). *Suppose $\mathcal{G}\rightrightarrows M$ is a singular Lie groupoid and let $x_0 \in M$ be fixed. There exists a local groupoid chart $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ around $x_0$.* *Proof.* Let $\pi \colon \widetilde\mathcal{G}' \to \mathcal{G}$ be a quasi-étale chart equipped with a division structure around $x_0$. We saw earlier that such charts exist. Now let: $$\widetilde i\colon \mathcal{I}\to \widetilde\mathcal{G}' \qquad g \mapsto \widetilde\mathbf{\delta}(\widetilde u(\widetilde s(g)), g)$$ $$\widetilde m\colon \mathcal{M}\to \widetilde\mathcal{G}' \qquad (g,h) \mapsto \widetilde\mathbf{\delta}(g , \widetilde i(h))$$ where $\mathcal{I}\subseteq\widetilde\mathcal{G}'$ and $\mathcal{M}\subseteq\widetilde\mathcal{G}' \times_{\widetilde s, \widetilde t} \widetilde\mathcal{G}'$ are the maximal open sets which make these maps well-defined.
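These formulas are modelled on identities valid in any honest groupoid, where the division map is $\delta(g,h) = gh^{-1}$:

```latex
% Recovering inversion and multiplication from division:
\delta(u(s(g)), g) = u(s(g))\, g^{-1} = g^{-1} = i(g), \qquad
\delta(g, i(h)) = g\,(h^{-1})^{-1} = g h = m(g,h)
```

(the second identity makes sense when $s(g) = t(h)$), which is why one expects $\widetilde i$ and $\widetilde m$ to define inversion and multiplication upstairs.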
We claim that there exists an open neighborhood $\widetilde\mathcal{G}$ of the image of $\widetilde u$ which makes the above maps a local groupoid structure on $\widetilde\mathcal{G}$ such that $\pi|_{\widetilde\mathcal{G}}$ is a local groupoid chart. We will begin by proving that $\pi$ is compatible with these structure maps. (1) (Compatibility with source and target) By definition $\widetilde s= s\circ \pi$ and $\widetilde t= t\circ \pi$ so this condition is automatic. (2) (Compatibility with multiplication) Consider the following computation where we apply compatibility of $\widetilde\delta$ with $\pi$ multiple times: $$\begin{aligned} \pi \circ \widetilde m(g,h) &= \pi \circ \widetilde\delta(g , \widetilde\delta( \widetilde u\circ \widetilde s(h), h) ) \\ &= \delta ( \pi(g), \pi \circ \widetilde\delta( \widetilde u\circ \widetilde s(h), h) ) \\ &= \delta(\pi(g), \delta( \pi \circ \widetilde u\circ \widetilde s(h), \pi(h))) \\ &= \delta(\pi(g), \delta( u\circ s(\pi(h)), \pi(h))) \end{aligned}$$ Since $\mathcal{G}$ is a local groupoid, there exists an open neighborhood of the units where $$\delta(\pi(g), \delta( u\circ s(\pi(h)), \pi(h))) = m(\pi(g),\pi(h))$$ A similar calculation to what we have done above tells us that we can find an open neighborhood of the units $\widetilde\mathcal{G}$ where: $$\pi ( \widetilde i(g)) = i( \pi(g))$$ We can therefore assume without loss of generality that the whole ambient space $\widetilde\mathcal{G}'$ has the property that $\widetilde m$ and $\widetilde i$ are compatible with $\pi$. Now we will show that the axioms (LG1-6) of a local groupoid must each hold in an open neighborhood of the image of $\widetilde u$. 1. (Compatibility of source and target with unit) We will do the computation for the source as the proof for the target is symmetrical: $$\widetilde s\circ \widetilde u= s \circ \pi \circ \widetilde u= s\circ u= \mathop{\mathrm{id}}_{\widetilde M}$$ 2.
(Compatibility of source and target with multiplication) We will do the computation for the source as the proof for the target is symmetrical: $$\widetilde s\circ \widetilde m(g,h) = s\circ \pi \circ \widetilde m(g,h) = s\circ m(\pi(g),\pi(h)) = s\circ \pi(h) = \widetilde s(h)$$ 3. (Compatibility of source and target with inverse) $$\widetilde s \circ \widetilde i(g) = s\circ \pi \circ \widetilde i(g) = s\circ i\circ \pi(g) = t \circ \pi(g) = \widetilde t(g)$$ 4. (Left and right unit laws) We will show the left unit law. First, observe that one can choose an open neighborhood $\mathcal{U}\subseteq\widetilde\mathcal{G}$ of the image of $\widetilde u$ such that $(\widetilde u(\widetilde t(g)) , g) \in \mathcal{M}$ for all $g \in \mathcal{U}$. Given such a $\mathcal{U}$, consider the two functions: $$\alpha \colon \mathcal{U}\to \widetilde\mathcal{G}\qquad g \mapsto \widetilde m( \widetilde u(\widetilde t(g)),g)$$ $$\beta \colon \mathcal{U}\to \widetilde\mathcal{G}\qquad g \mapsto g$$ Since $\pi$ preserves our structure maps, it follows that $\pi \circ \alpha = \pi \circ \beta$. Furthermore, $\widetilde t \circ \alpha = \widetilde t$. Lastly, if $x \in \widetilde M$ we have that: $$\alpha( \widetilde u(x)) = \widetilde m( \widetilde u(x) , \widetilde u(x)) = \widetilde u(x)$$ Therefore, the pair $\alpha$ and $\beta$ satisfy the hypotheses of Lemma [Lemma 73](#lemma:equality.test){reference-type="ref" reference="lemma:equality.test"} (in the case where $n = 1$), which completes the verification of this axiom. 5. (Left and right inverse laws) We will show the proof for the right inverse law; the left inverse case is symmetrical. We argue as in the proof of the left unit law.
Consider a pair of functions: $$\alpha \colon \mathcal{U}\to \widetilde\mathcal{G}\qquad g \mapsto \widetilde m(g, \widetilde i(g) )$$ $$\beta \colon \mathcal{U}\to \widetilde\mathcal{G}\qquad g \mapsto \widetilde u(\widetilde t(g))$$ where $\mathcal{U}$ is an open neighborhood of the image of $\widetilde u$ that makes $\alpha$ and $\beta$ well-defined. Observe that since $\pi$ is compatible with our structure maps and $\mathcal{G}$ satisfies the inverse axiom, it follows that $\pi \circ \alpha = \pi \circ \beta$. Furthermore, $\widetilde t \circ \alpha = \widetilde t$. Finally, if $x \in \widetilde M$ we have that: $$\alpha(\widetilde u(x)) = \widetilde m( \widetilde u(x) , \widetilde i(\widetilde u(x))) = \widetilde m(\widetilde u(x), \widetilde u(x)) = \widetilde u(x) = \beta( \widetilde u(x))$$ Therefore, $\alpha$ and $\beta$ satisfy the hypotheses of Lemma [Lemma 73](#lemma:equality.test){reference-type="ref" reference="lemma:equality.test"} and it follows that $\alpha = \beta$ in an open neighborhood of the image of $\widetilde u$. 6. (Associativity law) As with the previous two axioms, we consider a pair of functions: $$\alpha \colon \mathcal{U}^{(3)} \to \widetilde\mathcal{G}\qquad (g,h,k) \mapsto \widetilde m(g, \widetilde m(h,k))$$ $$\beta \colon \mathcal{U}^{(3)} \to \widetilde\mathcal{G}\qquad (g,h,k) \mapsto \widetilde m( \widetilde m(g,h),k)$$ where $\mathcal{U}$ is an open neighborhood of the image of $\widetilde u$ and is chosen in such a way that $\alpha$ and $\beta$ are well-defined. Since $\pi$ preserves these structure maps, and $\mathcal{G}$ satisfies associativity, it follows that $\pi \circ \alpha = \pi \circ \beta$. Furthermore, $\widetilde t \circ \alpha = \widetilde t \circ \mathop{\mathrm{pr}}_1$.
Finally, given $x \in \widetilde M$ we have that: $$\alpha( \widetilde u (x),\widetilde u (x),\widetilde u (x) ) = \widetilde m(\widetilde u (x), \widetilde m(\widetilde u (x),\widetilde u (x))) = \widetilde m(\widetilde u(x), \widetilde u(x)) = \widetilde u(x)$$ A similar calculation holds for $\beta$. Therefore, it follows that $\alpha$ and $\beta$ satisfy the hypotheses of Lemma [Lemma 73](#lemma:equality.test){reference-type="ref" reference="lemma:equality.test"}, and therefore $\alpha = \beta$ in some open neighborhood of the image of $\widetilde u$.  ◻ At this point, we have proved a local (non-wide) form of the existence part of Theorem [Theorem 59](#theorem:existence.and.uniqueness.of.LGC){reference-type="ref" reference="theorem:existence.and.uniqueness.of.LGC"}. That is, every singular Lie groupoid admits a local groupoid chart around any given object. ## Uniqueness of charts Before we explain why "wide" local groupoid charts exist, we first need to prove a local form of the uniqueness portion of Theorem [Theorem 59](#theorem:existence.and.uniqueness.of.LGC){reference-type="ref" reference="theorem:existence.and.uniqueness.of.LGC"}. First, we will observe that the local groupoid structure on a local groupoid chart is uniquely determined (in a neighborhood of the units) by its unit embedding. **Lemma 75**. *Let $\mathcal{G}\rightrightarrows M$ be a singular Lie groupoid. Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a quasi-étale chart and we have two different local groupoid structures on $\widetilde\mathcal{G}$ which both make $\pi$ into a local groupoid chart and which have the same set of units $\widetilde M \subseteq M$.* *Let $$\widetilde u\colon \widetilde M \to \widetilde\mathcal{G}\qquad \widetilde u'\colon \widetilde M \to \widetilde\mathcal{G}$$ denote the two different unit embeddings.
Then for all $x_0 \in \widetilde M$ the local groupoid structures on $\widetilde\mathcal{G}$ are equal in a neighborhood of $\widetilde u(x_0)$ if and only if $\widetilde u$ and $\widetilde u'$ are equal in a neighborhood of $x_0$.* *Proof.* Let $\widetilde m$ and $\widetilde m'$ denote the two different multiplication maps and let $x_0 \in \widetilde M$ be fixed. Note that one direction is clear: if the two local groupoid structures are equal then they must have the same unit embedding. Therefore, we only need to show that if the unit embeddings are equal then $\widetilde m$ and $\widetilde m'$ are equal in an open neighborhood of $e := \widetilde u(x_0) = \widetilde u'(x_0)$. By assumption, $\pi \circ \widetilde m = \pi \circ \widetilde m '$. Furthermore, $\widetilde t \circ \widetilde m = \widetilde t \circ \mathop{\mathrm{pr}}_1$. Finally, observe that for all $x \in \widetilde M$, we have that: $$\widetilde m(\widetilde u(x), \widetilde u(x)) = \widetilde u(x) = \widetilde m' (\widetilde u(x), \widetilde u(x))$$ Therefore, the functions $\alpha = \widetilde m$ and $\beta = \widetilde m'$ satisfy the hypotheses of Lemma [Lemma 73](#lemma:equality.test){reference-type="ref" reference="lemma:equality.test"} so they must be equal in a neighborhood of the image of $\widetilde u$. ◻ Our next lemma tells us that we can always construct an isomorphism between any two local groupoid charts. **Lemma 76**. *Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ and $\pi' \colon \widetilde\mathcal{G}' \to \mathcal{G}$ are local groupoid charts around $x_0 \in M$. Let $\widetilde u$ and $\widetilde u'$ be the respective unit embeddings. Then there exists an open neighborhood $\mathcal{U}\subseteq M$ of $x_0$ and an isomorphism of local groupoids $[F] \colon \widetilde\mathcal{G}|_{\mathcal{U}} \to \widetilde\mathcal{G}'|_{\mathcal{U}}$ such that $[\pi] = [\pi'] \circ [F]$.* *Proof.* Without loss of generality, we can assume that the two local groupoid charts have the same set of units $\widetilde M$.
Lemma [Lemma 75](#lemma:unique.lift){reference-type="ref" reference="lemma:unique.lift"} tells us that if two (local) groupoid structures on $\widetilde\mathcal{G}'$ are compatible with $\pi'$ and have the same units near $\widetilde u'(x_0)$, then they are equal in a neighborhood of $\widetilde u'(x_0)$. Therefore, it suffices to construct a diffeomorphism $F \colon \widetilde\mathcal{G}\to \widetilde\mathcal{G}'$, defined in a neighborhood of $\widetilde u(x_0)$, such that $\pi' \circ F = \pi$ and $F \circ \widetilde u= \widetilde u'$ near $\widetilde u( x_0)$. Such a diffeomorphism will automatically be compatible with the multiplication operations in some neighborhood of $\widetilde u(x_0)$. Since $\pi$ and $\pi'$ are quasi-étale, we can assume without loss of generality that we have a diffeomorphism $\widehat F \colon \widetilde\mathcal{G}\to \widetilde\mathcal{G}'$ such that $\pi' \circ \widehat F = \pi$ and $\widehat F (\widetilde u(x)) = \widetilde u'(x)$. Let $\widetilde\delta'$ be the division map for $\widetilde\mathcal{G}'$. Now consider the function: $$F(g) := \widetilde\mathbf{\delta}'( \widehat F(g), \widehat F ( \widetilde u(\widetilde s(g)) ) )$$ where $\widetilde s$ is the source map for $\widetilde\mathcal{G}$. Clearly $F$ will be well-defined in a neighborhood of $\widetilde u(x_0)$. A direct calculation shows that $F \circ \widetilde u= \widetilde u'$ and $\pi' \circ F = \pi$. ◻ We will finish this section with a lemma showing that there exists exactly one isomorphism between any two given local groupoid charts. **Lemma 77**. *Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ and $\pi' \colon \widetilde\mathcal{G}' \to \mathcal{G}$ are local groupoid charts around $x_0 \in M$.
If $\mathcal{U}\subseteq M$ is an open neighborhood of $x_0$ and $[F] \colon\widetilde\mathcal{G}|_{\mathcal{U}} \to \widetilde\mathcal{G}' |_{\mathcal{U}}$ and $[G] \colon\widetilde\mathcal{G}|_{\mathcal{U}} \to \widetilde\mathcal{G}' |_{\mathcal{U}}$ satisfy $$[\pi] = [\pi'] \circ [F] \qquad [\pi] = [\pi'] \circ [G]$$ Then $[F] = [G]$.* *Proof.* We showed in the previous lemma that such an isomorphism exists. Therefore, it remains to show that it is unique. We can assume without loss of generality that $\mathcal{U}= M$. We must show that $[F] \circ [G]^{-1}\colon \widetilde\mathcal{G}'|_{\mathcal{U}} \to \widetilde\mathcal{G}'|_{\mathcal{U}}$ is equal to the identity germ. Note that $[F]\circ [G]^{-1}$ satisfies $[\pi'] = [\pi'] \circ( [F] \circ [G]^{-1})$ so we can reduce to the case where $\widetilde\mathcal{G}' = \widetilde\mathcal{G}$. Therefore, we are in a situation where we have a local groupoid map $[F] \colon \widetilde\mathcal{G}\to \widetilde\mathcal{G}$ which is an isomorphism in a neighborhood of the units and is such that $\pi \circ F = \pi$. We must show that $F$ equals the identity. Note that we have that $\pi \circ F = \pi \circ \mathop{\mathrm{Id}}_{\widetilde\mathcal{G}}$, and also $\widetilde t \circ F = \widetilde t$. Furthermore, if $x \in \widetilde M$ we have that $F(\widetilde u(x)) = \widetilde u(x)$. Therefore, $F$ and $\mathop{\mathrm{Id}}_{\widetilde\mathcal{G}}$ satisfy the hypotheses of Lemma [Lemma 73](#lemma:equality.test){reference-type="ref" reference="lemma:equality.test"} and therefore $F = \mathop{\mathrm{Id}}$ in a neighborhood of the image of $\widetilde u$. ◻ ## Proof of Theorem [Theorem 59](#theorem:existence.and.uniqueness.of.LGC){reference-type="ref" reference="theorem:existence.and.uniqueness.of.LGC"} {#proof-of-theorem-theoremexistence.and.uniqueness.of.lgc} *Proof.* Suppose $\mathcal{G}\rightrightarrows M$ is a singular Lie groupoid. First, we show uniqueness.
Suppose $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ and $\pi' \colon \widetilde\mathcal{G}' \to \mathcal{G}$ are wide local groupoid charts. By Lemma [Lemma 76](#lemma:isomorphism.exists){reference-type="ref" reference="lemma:isomorphism.exists"}, around each point $p \in M$ there exists an open neighborhood $\mathcal{U}_p \subseteq M$ of $p$ and an isomorphism $[\mathcal{F}_p] \colon \widetilde\mathcal{G}|_{\mathcal{U}_p} \to \widetilde\mathcal{G}'|_{\mathcal{U}_p}$ with the property that $[\pi'] \circ [\mathcal{F}_p] = [\pi]$. By Lemma [Lemma 77](#lemma:isomorphism.is.unique){reference-type="ref" reference="lemma:isomorphism.is.unique"}, for any pair $p, q \in M$ such that $\mathcal{U}_p \cap \mathcal{U}_q$ is non-empty, it must be the case that $$[\mathcal{F}_p]|_{\widetilde\mathcal{G}|_{\mathcal{U}_p \cap \mathcal{U}_q}} = [\mathcal{F}_q]|_{\widetilde\mathcal{G}|_{\mathcal{U}_p \cap \mathcal{U}_q}}$$ Recall that the category of local groupoids is equivalent to the category of Lie algebroids. Since morphisms of Lie algebroids form a sheaf (over the base manifold), it follows that morphisms of local groupoids also form a sheaf. Therefore, there must exist a unique local groupoid morphism $[\mathcal{F}] \colon \widetilde\mathcal{G}\to \widetilde\mathcal{G}'$ with the property that $[\pi'] \circ [\mathcal{F}] = [\pi]$. Now we consider existence. By Proposition [Proposition 74](#prop:existence.of.local.local.groupoid.charts){reference-type="ref" reference="prop:existence.of.local.local.groupoid.charts"}, we know that for all $p \in M$ there must exist an open neighborhood $\widetilde M_p \subseteq M$ of $p$ together with a local groupoid $\widetilde\mathcal{G}_p \rightrightarrows\widetilde M_p$ and a local groupoid chart $\pi_p \colon \widetilde\mathcal{G}_p \to \mathcal{G}$.
Furthermore, we remark that given any two $p, q \in M$ such that $\widetilde M_p \cap \widetilde M_q \neq \emptyset$ (we write $\mathcal{U}_p := \widetilde M_p$ from here on), by the uniqueness part of the proof there must exist a unique isomorphism of local groupoid charts: $$[\mathcal{F}_{pq}] \colon \widetilde\mathcal{G}_p |_{\mathcal{U}_p \cap \mathcal{U}_q} \to \widetilde\mathcal{G}_q|_{\mathcal{U}_p \cap \mathcal{U}_q}$$ that is compatible with the projections to $\mathcal{G}$. Furthermore, on any triple intersection, the uniqueness of this isomorphism implies that a cocycle condition holds. In other words, for all $p,q,r \in M$ such that $\mathcal{U}_p \cap \mathcal{U}_q \cap \mathcal{U}_r \neq \emptyset$ we have: $$[\mathcal{F}_{qr}] \circ [\mathcal{F}_{pq}] = [\mathcal{F}_{pr}]$$ Since local groupoids form a sheaf, we conclude that there must exist a local groupoid $\widetilde\mathcal{G}$ equipped with isomorphisms: $$[\phi_p] \colon \widetilde\mathcal{G}|_{\mathcal{U}_p} \to \widetilde\mathcal{G}_p$$ which are compatible with the transition maps in the standard way. Furthermore, we remark that for all $p \in M$ we have a local groupoid chart: $$[\pi_p] \circ [\phi_p] \colon \widetilde\mathcal{G}|_{\mathcal{U}_p} \to \mathcal{G}$$ Compatibility of the collection $\{[\phi_p ] \}_{p \in M}$ with the transition maps implies that the local groupoid charts $\{ [\pi_p] \circ [\phi_p] \}$ are compatible on double intersections. Again, since morphisms of local groupoids form a sheaf, there must exist a morphism of local groupoids $[\pi] \colon \widetilde\mathcal{G}\to \mathcal{G}$ restricting to $[\pi_p] \circ [\phi_p]$ on each $\mathcal{U}_p$. Since a local groupoid is canonically isomorphic to any neighborhood of its units, we can assume that $\pi$ is globally defined. Furthermore, such a $\pi$ will be locally quasi-étale since each of the $[\pi_p]$ is quasi-étale. Since the property of being quasi-étale is local, we conclude that $\pi$ is quasi-étale and so $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a local groupoid chart.
◻ ## Proving Theorem [Theorem 60](#theorem:existence.and.uniqueness.of.lifts){reference-type="ref" reference="theorem:existence.and.uniqueness.of.lifts"} {#proving-theorem-theoremexistence.and.uniqueness.of.lifts} Before we prove Theorem [Theorem 60](#theorem:existence.and.uniqueness.of.lifts){reference-type="ref" reference="theorem:existence.and.uniqueness.of.lifts"}, let us state a version of the equality test that has been upgraded to account for maps between groupoids. **Lemma 78**. *Suppose $\mathcal{G}\rightrightarrows M$ and $\mathcal{H}\rightrightarrows N$ are local singular Lie groupoids and $\pi_\mathcal{G}\colon \widetilde\mathcal{G}\to \mathcal{G}$ and $\pi_\mathcal{H}\colon \widetilde\mathcal{H}\to \mathcal{H}$ are local groupoid charts.* *Suppose $\mathcal{U}\subseteq\widetilde\mathcal{G}$ is an open neighborhood of the image of $\widetilde u$. Given a natural number $n$, suppose that we have a pair of smooth functions: $$\alpha \colon \mathcal{U}^{(n)} \to \widetilde\mathcal{H}\qquad \beta \colon \mathcal{U}^{(n)} \to \widetilde\mathcal{H}$$ together with a smooth function: $$f \colon M \to N$$ with the following properties:* - *$\pi_\mathcal{H}\circ \alpha = \pi_\mathcal{H}\circ \beta$* - *$\widetilde t \circ \alpha = f \circ \widetilde t \circ \mathop{\mathrm{pr}}_1$* - *For all $x \in \widetilde u^{-1}(\mathcal{U})$, we have that $$\alpha(\widetilde u(x), \ldots , \widetilde u(x)) = \beta(\widetilde u(x), \ldots , \widetilde u(x)) = \widetilde u(f(x))$$* *Then there exists an open neighborhood $\mathcal{O}\subseteq\mathcal{U}$ of the image of $\widetilde u$ such that $\alpha|_{\mathcal{O}^{(n)}} = \beta|_{\mathcal{O}^{(n)}}$.* The proof of this lemma is almost identical to the one for Lemma [Lemma 73](#lemma:equality.test){reference-type="ref" reference="lemma:equality.test"}.
In principle it involves proving a similarly reformulated version of Lemma [Lemma 72](#lemma:failure.function){reference-type="ref" reference="lemma:failure.function"} as well. For the sake of avoiding repetition we will not write out the proof here, except to remark that the only difference is the addition of the function $f$ in a few of the equations. One should also observe that the local groupoid structure on $\widetilde\mathcal{H}$ will be a division structure, so a version of Lemma [Lemma 70](#lemma:division.lemma){reference-type="ref" reference="lemma:division.lemma"} applies. We can now proceed with proving Theorem [Theorem 60](#theorem:existence.and.uniqueness.of.lifts){reference-type="ref" reference="theorem:existence.and.uniqueness.of.lifts"}. *Proof.* Suppose $\mathcal{G}\rightrightarrows M$ and $\mathcal{H}\rightrightarrows N$ are local singular Lie groupoids. Suppose $[\mathcal{F}] \colon \mathcal{G}\to \mathcal{H}$ is a local groupoid homomorphism covering $f \colon M \to N$ and $\pi_\mathcal{G}\colon \widetilde\mathcal{G}\to \mathcal{G}$ and $\pi_\mathcal{H}\colon \widetilde\mathcal{H}\to \mathcal{H}$ are wide local groupoid charts. We first show the uniqueness part. Suppose $[ \widetilde\mathcal{F}] \colon \widetilde\mathcal{G}\to \widetilde\mathcal{H}$ and $[\widetilde\mathcal{F}' ] \colon \widetilde\mathcal{G}\to \widetilde\mathcal{H}$ are local groupoid homomorphisms with the properties: $$[\pi_\mathcal{H}] \circ [\widetilde\mathcal{F}] = [\mathcal{F}] \circ [\pi_\mathcal{G}] \qquad [\pi_\mathcal{H}] \circ [\widetilde\mathcal{F}'] = [\mathcal{F}] \circ [\pi_\mathcal{G}]$$ We can assume that $\widetilde\mathcal{F}$ and $\widetilde\mathcal{F}'$ are globally defined on $\widetilde\mathcal{G}$.
We invoke Lemma [Lemma 78](#lemma:relative.equality.test){reference-type="ref" reference="lemma:relative.equality.test"}, where we take $\mathcal{U}$ to be a common domain of definition for $\widetilde\mathcal{F}$ and $\widetilde\mathcal{F}'$, $n = 1$, $\alpha = \widetilde\mathcal{F}$ and $\beta = \widetilde\mathcal{F}'$. It is straightforward to check that the hypotheses are satisfied, and so we conclude there must exist an open neighborhood $\mathcal{O}\subseteq\mathcal{U}$ of the units where $\widetilde\mathcal{F}|_\mathcal{O}= \widetilde\mathcal{F}'|_{\mathcal{O}}$. Since local groupoid morphisms are germs in a neighborhood of the units we conclude $[\widetilde\mathcal{F}] = [\widetilde\mathcal{F}']$. Now we show existence. Note that since local groupoid morphisms can be defined locally in $M$ (i.e. they are a sheaf), due to the uniqueness property we have just proved, it suffices to prove that there exists an $\widetilde\mathcal{F}$ with the desired property that is defined in a neighborhood of an arbitrary point of $M$. Let $x_0 \in M$ be a fixed, arbitrary point. Since $\pi_\mathcal{H}\colon \widetilde\mathcal{H}\to \mathcal{H}$ is a local subduction, there must exist a smooth function $\overline \mathcal{F}\colon \mathcal{U}\to \widetilde\mathcal{H}$ defined on an open neighborhood $\mathcal{U}\subseteq\widetilde\mathcal{G}$ of $\widetilde u(x_0)$ with the property that $$\pi_\mathcal{H}\circ \overline{\mathcal{F}}= \mathcal{F}\circ \pi_{\mathcal{G}}$$ and $$\overline{\mathcal{F}}(\widetilde u(x_0)) = \widetilde u(f(x_0))$$ Now let: $$\widetilde\mathcal{F}(g) := \overline{\mathcal{F}}(g) \cdot \overline{\mathcal{F}}(\widetilde u \circ \widetilde s(g)) ^{-1}$$ This may not be defined for all $g$, but note that $\widetilde\mathcal{F}(\widetilde u(x_0)) = \widetilde u(f(x_0))$. Therefore, there exists an open neighborhood of $\widetilde u(x_0)$ where $\widetilde\mathcal{F}$ is well-defined.
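The value $\widetilde\mathcal{F}(\widetilde u(x_0)) = \widetilde u(f(x_0))$ asserted above can be checked directly. The following short computation is our own unpacking of the definitions, using $\widetilde s(\widetilde u(x_0)) = x_0$, the identity $g \cdot g^{-1} = \widetilde u(\widetilde t(g))$, and the normalisation $\overline{\mathcal{F}}(\widetilde u(x_0)) = \widetilde u(f(x_0))$:

```latex
\begin{aligned}
\widetilde{\mathcal{F}}(\widetilde u(x_0))
  &= \overline{\mathcal{F}}(\widetilde u(x_0)) \cdot
     \overline{\mathcal{F}}\bigl(\widetilde u \circ \widetilde s (\widetilde u(x_0))\bigr)^{-1}
   = \overline{\mathcal{F}}(\widetilde u(x_0)) \cdot
     \overline{\mathcal{F}}(\widetilde u(x_0))^{-1} \\
  &= \widetilde u\Bigl(\widetilde t\bigl(\overline{\mathcal{F}}(\widetilde u(x_0))\bigr)\Bigr)
   = \widetilde u\Bigl(\widetilde t\bigl(\widetilde u(f(x_0))\bigr)\Bigr)
   = \widetilde u(f(x_0)).
\end{aligned}
```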
Since we are only trying to show existence in a neighborhood of $x_0$ we can assume without loss of generality that $\widetilde\mathcal{F}$ is defined on $\mathcal{U}$. Now observe that $\widetilde\mathcal{F}$ maps units to units. That is, for all $x \in M$ such that $\widetilde u(x) \in \mathcal{U}$: $$\widetilde\mathcal{F}( \widetilde u(x)) = \overline{\mathcal{F}}(\widetilde u(x)) \cdot \overline{\mathcal{F}}(\widetilde u(x))^{-1}= \widetilde u( \widetilde t\circ \overline{\mathcal{F}} (\widetilde u(x)))$$ We claim that $\widetilde\mathcal{F}$ defines a local groupoid homomorphism. To see this, let $\alpha$ and $\beta$ be defined as follows. For $(g,h) \in \mathcal{U}^{(2)}$: $$\alpha (g,h) = \widetilde\mathcal{F}( gh) \qquad \beta(g,h) = \widetilde\mathcal{F}(g) \cdot \widetilde\mathcal{F}(h)$$ Again, we may need to shrink $\mathcal{U}$ to a smaller open neighborhood so that $\alpha$ and $\beta$ are well-defined, but this is no issue. Now we can invoke Lemma [Lemma 78](#lemma:relative.equality.test){reference-type="ref" reference="lemma:relative.equality.test"}. A straightforward check shows that $\alpha$ and $\beta$ satisfy the conditions of the equality test, so we know there must exist an open neighborhood $\mathcal{O}$ of $\widetilde u(x_0)$ with the property that $\alpha = \beta$ on this open set. Therefore, $\mathcal{O}$ is an open set where $\widetilde\mathcal{F}$ is compatible with multiplication. Finally, we need to show that $\widetilde\mathcal{F}$ satisfies: $$[\pi_\mathcal{H}] \circ [\widetilde\mathcal{F}] = [\mathcal{F}] \circ [\pi_\mathcal{G}]$$ However, if we compute for $g \in \mathcal{O}$: $$\pi_\mathcal{H}\circ \widetilde\mathcal{F}(g) = \pi_\mathcal{H}\left( \overline{\mathcal{F}}(g) \cdot \overline{\mathcal{F}}(\widetilde u\circ \widetilde s(g))^{-1}\right) = \pi_\mathcal{H}(\overline{\mathcal{F}}(g))$$ where the last equality holds since $\pi_{\mathcal{H}}$ is a homomorphism and $\pi_\mathcal{H}\bigl(\overline{\mathcal{F}}(\widetilde u\circ \widetilde s(g))\bigr) = \mathcal{F}\bigl(\pi_\mathcal{G}(\widetilde u\circ \widetilde s(g))\bigr)$ is a unit.
However, by definition of how we constructed $\overline{\mathcal{F}}$ we know that $\pi_\mathcal{H}\circ \overline{\mathcal{F}} = \mathcal{F}\circ \pi_\mathcal{G}$. Therefore: $$[\pi_\mathcal{H}] \circ [\widetilde\mathcal{F}] = [\mathcal{F}] \circ [\pi_\mathcal{G}]$$ This establishes local existence, and so the proof is finished. ◻ # The classical Ševera-Weinstein groupoid {#section:Weinstein.groupoid} We have now constructed the differentiation functor. Our final task is to prove that the classical Ševera-Weinstein groupoid is, indeed, a singular Lie groupoid. ## The fundamental groupoid construction {#section:weinstein.groupoid.construction} The Ševera-Weinstein groupoid is constructed via an analogy to the fundamental groupoid. Essentially, what one does is reproduce the construction of the fundamental groupoid but within the category of Lie algebroids. We will mostly follow the notation and approach to the construction that appears in the works of Crainic and Fernandes [@crainic_integrability_2003; @crainic_lectures_2011], as we will need to use several of their results about the geometry of the path space. **Definition 79**. Suppose $A \to M$ and $B \to N$ are Lie algebroids and $F_0 , F_1 \colon A \to B$ are Lie algebroid homomorphisms covering $f_0, f_1 \colon M \to N$. We say that $F_0$ and $F_1$ are *algebroid homotopic* if there exists a homomorphism of Lie algebroids $H \colon A \times T [0,1] \to B$ covering $h \colon M \times [0,1] \to N$ with the following properties: (a) $H|_{A \times 0_{\{ i \}}} = F_i$ (b) $H|_{ 0_{\partial M} \times T[0,1] } = 0$ By concatenating homotopies, one sees that algebroid homotopy defines an equivalence relation on the set of all algebroid morphisms $A \to B$. In particular, in the case of the tangent algebroid, the algebroid homotopy relation coincides with the standard notion of homotopy of smooth maps. **Definition 80**. Suppose $A \to M$ is a Lie algebroid.
An $A$-path is an algebroid morphism $a \colon T [0,1] \to A$. The set of all $A$-paths will be denoted $\mathcal{P}(A)$. The *fundamental groupoid* or *Ševera-Weinstein groupoid* of $A$ is denoted $\Pi_1(A)$ and is defined to be the set of all $A$-paths modulo algebroid homotopy. In their work, Crainic and Fernandes make use of the theory of Banach manifolds to study the structure of $\mathcal{P}(A)$. In order to do this, we must relax the smoothness assumptions on the elements of $\mathcal{P}(A)$. From now on, we assume that any $a \in \mathcal{P}(A)$ is a bundle map of class $C^1$ and has base path of class $C^2$. Under these assumptions, Crainic and Fernandes prove the following theorem: **Theorem 81**. *[@crainic_integrability_2003] The algebroid homotopy equivalence classes partition $\mathcal{P}(A)$ into the leaves of a foliation $\mathcal{F}$ of finite codimension.* In addition to this, Crainic and Fernandes also show how this construction can be used to obtain a local Lie groupoid. **Theorem 82**. *[@crainic_integrability_2003] For any Lie algebroid $A$, there exists an open neighborhood $A^\circ$ of the zero section together with an embedding $\exp \colon A^\circ \to \mathcal{P}(A)$ with the following properties:* - *$\exp \colon A^\circ \to \mathcal{P}(A)$ is transverse to the foliation $\mathcal{F}$.* - *There is a local Lie groupoid structure on $A^\circ$ which makes the associated map $\pi \colon A^\circ \to \Pi_1(A)$ into a local groupoid homomorphism.* - *The Lie algebroid of $A^\circ$ as a local groupoid is canonically isomorphic to $A$.* An important consequence of the first bullet point above is that $\pi \colon A^\circ \to \Pi_1(A)$ is a local subduction. This is essentially implicit in the literature; however, it is a little difficult to find a clear statement. This is due to the fact that it relies on properties of foliations on Banach manifolds.
For example, in [@tseng_integrating_2006], it is claimed that the restriction of the fundamental groupoid of the foliation $\mathcal{F}$ to an arbitrary complete transversal results in an étale Lie groupoid. Therefore, by Lemma [Lemma 22](#lemma:lie.groupoid.quotient.is.subduction){reference-type="ref" reference="lemma:lie.groupoid.quotient.is.subduction"}, the projection to the quotient space should be a local subduction. Unfortunately, Tseng and Zhu do not go into much detail beyond stating this fact. For completeness, we will include a proof of this fact in the following lemma. **Lemma 83**. *Let $\pi \colon A^\circ \to \Pi_1(A)$ be the map defined above. Then $\pi$ is a local subduction.* *Proof.* First, we remark that since $\mathcal{F}$ is a foliation on a Banach manifold, around any point $a \in \mathcal{P}(A)$ there exists a foliated chart $\Phi \colon \mathbb{R}^k \times V \to \mathcal{P}(A)$. By foliated chart, we mean that $\Phi$ is an open diffeomorphism onto its image, $k$ is the codimension of $\mathcal{F}$, $V$ is an open subset of a Banach vector space, and the pullback of the foliation $\mathcal{F}$ under $\Phi$ is the foliation by the slices $\{ c \} \times V$. The existence of such foliated charts for foliations on Banach manifolds seems a little difficult to find in the literature stated plainly in this way. However, it can be inferred from the proof of Theorem 1.1 in Chapter 7 of Lang [@lang_fundamentals_1999]. Now, to see why $\pi$ is a local subduction, suppose $\phi \colon U_\phi \to \Pi_1(A)$ is a plot and let $v \in A^\circ$ and $u \in U_\phi$ be such that $\pi(v) = \phi(u)$. Let $p \colon \mathcal{P}(A) \to \Pi_1(A)$ be the canonical projection so that $\pi = p \circ \exp$. Since $\Pi_1(A)$ is equipped with the quotient diffeology, we know there exists a lift $\widetilde\phi \colon W \to \mathcal{P}(A)$ of $\phi$ defined on some open neighborhood $W \subseteq U_\phi$ of $u$.
Now suppose we have a submanifold $\mathcal{T}\subseteq\mathcal{P}(A)$ which is transverse to $\mathcal{F}$, and is such that the image of $\widetilde\phi$ is contained in $\mathcal{T}$. By using foliated charts centered around $\widetilde\phi(u)$ and around $\exp(v)$, there exist locally defined functions $q_1 \colon \mathcal{O}_1 \to \mathcal{T}$ and $q_2 \colon \mathcal{O}_2 \to \exp(A^\circ)$, defined on open neighborhoods $\mathcal{O}_1$ of $\widetilde\phi(u)$ and $\mathcal{O}_2$ of $\exp(v)$, such that $p \circ q_1 = p$ and $p \circ q_2 = p$. Now, we also know that $\widetilde\phi(u)$ is connected to $\exp(v)$ by a path $h(t)$ in $\mathcal{P}(A)$ which is tangent to $\mathcal{F}$. Since $\mathcal{T}$ is a transverse submanifold at $\widetilde\phi(u)$ and $\exp(A^\circ)$ is a transverse submanifold at $\exp(v)$, we can use the classical construction[^2] for the holonomy of a foliation to obtain a germ of a diffeomorphism $$f \colon \mathcal{T}\to \exp(A^\circ)$$ such that $f(\widetilde\phi(u)) = \exp(v)$ and $p \circ f = p$. Therefore, by choosing a suitably small open neighborhood $W$ of $u$, the function $$\exp^{-1}\circ \, q_2 \circ f \circ q_1 \circ \widetilde\phi$$ is a lift of $\phi$ through $\pi$ which sends $u$ to $v$. This shows that $\pi$ is a local subduction. ◻ In the article of Crainic and Fernandes, the main goal was to obtain a criterion for integrability. Towards this end, they became interested in understanding the fibers of this exponential map and, in particular, they computed the inverse images of the unit elements. **Theorem 84** (Crainic and Fernandes). *Let $A$ be a Lie algebroid and let $\exp \colon A^\circ \to \Pi_1(A)$ be as in the previous theorem.* *Given $x \in M$ the fiber $\exp^{-1}( u(x))$ is countable.* Indeed the actual theorem proved in the article of Crainic and Fernandes is much stronger: The fiber $\exp^{-1}(u(x))$ is actually an additive subgroup of $A_x$ called the *monodromy group* at $x$, and it arises from a group homomorphism from the second homotopy group of the orbit.
The countability of the monodromy group is then an immediate consequence of the countability of the homotopy groups of a smooth manifold. ## The Ševera-Weinstein groupoid is a singular Lie groupoid In this subsection we will explain the proof of Theorem [Theorem 2](#theorem:main.wein.groupoid){reference-type="ref" reference="theorem:main.wein.groupoid"}. Let us restate this theorem now. ** 2**. *Given a Lie algebroid, $A \to M$, let $\Pi_1(A) \rightrightarrows M$ be the Ševera-Weinstein groupoid of $A$ and consider it as a diffeological groupoid. Then $\Pi_1(A)$ is an element of $\mathbf{SingLieGrpd}$ and $\widehat\mathbf{Lie}(\Pi_1(A))$ is canonically isomorphic to $A$.* Our first lemma is used to argue that we only have to check that $\Pi_1(A)$ is a singular Lie groupoid in a neighborhood of the units. **Lemma 85**. *Suppose $\mathcal{G}\rightrightarrows M$ is a diffeological groupoid over a smooth manifold $M$. Assume that the source map $s \colon \mathcal{G}\to M$ is a local subduction. Now suppose there exists an open neighborhood of the units $\mathcal{U}\subseteq\mathcal{G}$ such that $\mathcal{U}$ is a singular local Lie groupoid. Then $\mathcal{G}$ is a singular Lie groupoid.* *Proof.* The proof of this fact is a translation argument. We need to show that $\mathcal{G}$ is a $\mathbf{QUED}$-groupoid. This means that we must show that $\mathcal{G}$ is a quasi-étale diffeological space and the source map $s \colon \mathcal{G}\to M$ is a $\mathbf{QUED}$-submersion. For the first part, let $g_0 \in \mathcal{G}$ be arbitrary and let $x_0 := s(g_0) \in M$. Since $\mathcal{U}$ is quasi-étale, there must exist a quasi-étale chart $\pi \colon N \to \mathcal{G}$ such that $u(x_0)$ is in the image of $\pi$. Let $n_0 \in N$ be a point such that $\pi(n_0) = u(x_0)$. Now let $\sigma \colon \mathcal{O}\to \mathcal{G}$ be a local section of the source map such that $\sigma(x_0) = g_0$.
Then we can define: $$\pi'(n) := \sigma( t \circ \pi(n)) \cdot \pi(n)$$ Clearly $\pi'$ will be well-defined in an open neighborhood of $n_0$. Furthermore, $\pi'(n_0) = g_0$. Since $\pi'$ is locally $\pi$ composed with a diffeomorphism, it follows that $\pi'$ is quasi-étale. Therefore, $\pi'$ is a quasi-étale chart around $g_0$. To show that $s \colon \mathcal{G}\to M$ is a $\mathbf{QUED}$-submersion, we will utilize Theorem [Theorem 47](#theorem:qeds.submersion.criteria){reference-type="ref" reference="theorem:qeds.submersion.criteria"}. We only need to show that the fibers of $s$ are quasi-étale diffeological spaces. However, we already know that $\mathcal{U}$ is a singular local Lie groupoid. Therefore, for each $x \in M$ we know $s^{-1}(x) \cap \mathcal{U}$ is a quasi-étale diffeological space. To show that $s^{-1}(x)$ is a quasi-étale diffeological space we once again invoke a simple translation argument. Any point in $s^{-1}(x)$ is just a left translation of the identity. Therefore, a quasi-étale chart around the identity element of $s^{-1}(x)$ can be translated to be a quasi-étale chart around any element of $s^{-1}(x)$. ◻ In order to show that $\Pi_1(A)$ is a singular Lie groupoid we will have to show that the source map is a $\mathbf{QUED}$-submersion. Part of that is the claim that the source map is a local subduction. In a way, this is also a bit implicit in the literature but we include a proof here. **Lemma 86**. *Suppose $p \colon A \to M$ is a Lie algebroid. Let $s \colon \Pi_1(A) \to M$ be the source map for the fundamental groupoid. Then $s$ is a local subduction.* *Proof.* First, we argue that the map: $$\widehat s \colon \mathcal{P}(A) \to M \qquad a \mapsto p(a(0))$$ is a submersion of Banach manifolds. 
To see this, we rely on the original paper of Crainic and Fernandes [@crainic_integrability_2003], where they compute $T_a \mathcal{P}(A)$ to be: $$\{ (u,X) \in C^\infty([0,1], A \times TM) \ : \ \overline{\nabla}_a X = \rho(u) \}$$ where $\nabla$ is a connection on $A$, $\rho$ is the anchor map and: $$\overline{\nabla}_a X := \rho( \nabla_X a) + [\rho(a),X ]$$ The right hand side of this expression is computed by using any set of sections that extend $X$ and $a$ to time dependent sections of $TM$ and $A$, respectively. Actually, $\overline{\nabla}$ is an example of what is called an $A$-connection. More specifically, $\overline{\nabla}$ is an $A$-connection on the vector bundle $TM$. The expression $\overline{\nabla}_a X$ should be interpreted as the directional derivative of $X$ along $a$. We refer the reader to [@crainic_lectures_2011] for more details on the geometry of $A$-connections. The key fact is that, for $A$-connections, parallel transport along an $A$-path is well-defined and the equation $\overline{\nabla}_a X = 0$ is equivalent to stating that $X$ is parallel along $a$. Now, the $u$ part represents the vertical (fiber-wise) component of the tangent vector while $X$ represents the "horizontal" component of the tangent vector. In particular, if we consider the differential of the projection we get: $$T_a p : T_a \mathcal{P}(A) \to T_{p \circ a} C^2([0,1],M) \qquad (u,X) \mapsto X$$ Therefore we have: $$T_a \widehat s \colon T_a \mathcal{P}(A) \to T_{p \circ a(0)} M \qquad (u,X) \mapsto X(0)$$ Given any $v \in T_{p \circ a(0)} M$, let $X$ be the unique solution to the initial value problem: $$\overline{\nabla}_a X = 0 \qquad X(0) = v$$ Then it follows that $(0,X) \in T_a \mathcal{P}(A)$ is a tangent vector with the property that $T_a \widehat s ( 0,X) = v$. This shows that $\widehat s$ is a submersion of Banach manifolds.
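The membership claim used in the final step can be written out against the description of $T_a\mathcal{P}(A)$ above. Writing $X$ for the solution of the initial value problem (the parallel section with $X(0) = v$), the check is:

```latex
% X parallel along a means \overline{\nabla}_a X = 0, and \rho(0) = 0, so
% the pair (0, X) satisfies the defining equation of T_a P(A):
\overline{\nabla}_a X \;=\; 0 \;=\; \rho(0)
\quad\Longrightarrow\quad
(0, X) \in T_a \mathcal{P}(A),
\qquad
T_a \widehat{s}\,(0, X) \;=\; X(0) \;=\; v .
```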
Now, since the codomain of $\widehat s$ is finite dimensional, the kernel of $T \widehat s$ must admit a closed complement at each point. Therefore, the inverse function theorem for Banach manifolds applies and $\widehat s$ admits local sections through every point in its domain. This immediately implies that $\widehat s$ is a local subduction and hence, passing to the quotient, that $s \colon \Pi_1(A) \to M$ is also a local subduction. ◻ In a neighborhood of the units, the fundamental groupoid of an algebroid is just a quotient of $A^\circ$. Therefore, it will be useful to know, precisely, which kinds of quotients of local groupoids are singular Lie groupoids. The next theorem answers this question. **Theorem 87**. *Suppose $\mathcal{G}\rightrightarrows M$ is a diffeological groupoid where the source map is a local subduction. Suppose further that $\widetilde\mathcal{G}$ is a local Lie groupoid over the same manifold and we have $\pi \colon \widetilde\mathcal{G}\to \mathcal{G}$, a local subduction, such that $[\pi] \colon \widetilde\mathcal{G}\to \mathcal{G}$ is a homomorphism of local groupoids covering the identity.* *If for all unit elements $u \in \mathcal{G}$ we have that the fiber $\pi^{-1}(u) \subseteq\widetilde\mathcal{G}$ is totally disconnected, then $\mathcal{G}\rightrightarrows M$ is a $\mathbf{QUED}$-groupoid and $\pi$ is a wide local groupoid chart.* *Proof.* By Lemma [Lemma 85](#lemma:locally.Weinstein.is.weinstein){reference-type="ref" reference="lemma:locally.Weinstein.is.weinstein"} it suffices to show that the image of an open neighborhood of the units in $\widetilde\mathcal{G}$ under $\pi$ is a local singular Lie groupoid. To set our notation, we will use $\widetilde s, \widetilde t , \widetilde u, \widetilde m$ and $\widetilde i$ to denote the local groupoid structure on $\widetilde\mathcal{G}$ and $s,t, u, m, i$ the local groupoid structure on $\mathcal{G}$. We begin by arguing that $\pi$ is a quasi-étale chart (in some neighborhood of the units).
We already know that $\pi$ is a local subduction. Therefore, we need to show that $\pi$ has totally disconnected fibers (in a neighborhood of the units) and satisfies the rigid endomorphism property. We know that $\pi$ has totally disconnected fibers over the units of $\mathcal{G}$. We claim that the fibers of $\pi$ are totally disconnected in a neighborhood of the units of $\widetilde\mathcal{G}$. To see why, consider the following calculation: $$(b \cdot a^{-1}) \cdot a = b \cdot (a^{-1}\cdot a) = b \cdot \widetilde u(\widetilde s(a)) = b$$ Let $\mathcal{U}\subseteq\widetilde\mathcal{G}$ be an open neighborhood of the units with the property that for all $a,b \in \mathcal{U}$ with $\widetilde s(a) = \widetilde s(b)$ we have that the above calculation is well-defined in $\widetilde\mathcal{G}$. Now let $\widetilde g_0 \in \mathcal{U}$ be given and set $g_0 = \pi(\widetilde g_0)$. Consider the function: $$d \colon \pi^{-1}(g_0) \cap \mathcal{U}\to \widetilde\mathcal{G}\qquad x \mapsto x \cdot {\widetilde g_0}^{-1}$$ This function is well-defined since we are in $\mathcal{U}$. Furthermore, this function is injective due to the fact that if we have $x \cdot {\widetilde g_0} ^{-1}= y \cdot {\widetilde g_0} ^{-1}$ then: $$x = (x \cdot {\widetilde g_0} ^{-1}) \cdot {\widetilde g_0} = (y \cdot {\widetilde g_0}^{-1}) \cdot {\widetilde g_0} = y$$ Furthermore, $d$ takes values in the kernel of $\pi$. Specifically, it takes values in $\pi^{-1}(u(t(g_0)))$, which is totally disconnected by assumption. Since $d$ is a smooth injection into a totally disconnected set, it follows that the domain of $d$ is totally disconnected. This shows the fibers of $\pi$ are totally disconnected when $\pi$ is restricted to a small enough neighborhood of the units. Therefore, we can assume without loss of generality that, from now on, the fibres of $\pi$ are totally disconnected. Next, we must show that $\pi$ satisfies the rigid endomorphism property.
Let us take $\mathcal{U}\subseteq\widetilde\mathcal{G}$ to be an open subset of $\widetilde\mathcal{G}$ with the property that for all $x , y \in \mathcal{U}$ such that $\widetilde s(x) = \widetilde s(y)$ we have that $x \cdot y^{-1}$ is well-defined. We use the simplified criterion from Lemma [Lemma 37](#lemma:finding.quasi.étale.charts){reference-type="ref" reference="lemma:finding.quasi.étale.charts"} to show the rigid endomorphism property. Let $g_0 \in \mathcal{U}$ be arbitrary and $f \colon \mathcal{O}\to \widetilde\mathcal{G}$ be a smooth function with the property that $\mathcal{O}$ is an open neighborhood of $g_0$, $f( g_0 ) = g_0$, and $\pi \circ f = \pi$. We only need to show that $f$ is a diffeomorphism in a neighborhood of $g_0$. Let $\alpha$ be the following function: $$\alpha \colon \mathcal{O}\to \widetilde\mathcal{G}\qquad \alpha(g) := f(g)\cdot g^{-1}$$ Note that $\alpha$ will be well-defined by our assumptions on $\mathcal{U}$. If we apply $\pi$ to $\alpha$ we observe that $\pi \circ \alpha$ takes values only in units. Now let us choose $\mathcal{O}$ to be an open neighborhood of $g_0$ with the property that $\mathcal{O}\cap \widetilde t^{-1}(x)$ is connected for all $x \in M$. Since the fibers of $\pi$ are totally disconnected, it follows that $\alpha$ must be constant on the $\widetilde t$-fibers. In particular, there must exist a local section $\sigma \colon \widetilde t (\mathcal{O}) \to \widetilde\mathcal{G}$ of the source map with the property that: $$\forall g \in \mathcal{O}\qquad \alpha(g) = \sigma(\widetilde t(g))$$ From this we can conclude that: $$\forall g \in \mathcal{O}\qquad f(g) \cdot g^{-1}= \sigma(\widetilde t(g))$$ which means that: $$\forall g \in \mathcal{O}\qquad f(g) = \sigma(\widetilde t(g)) \cdot g$$ From this it immediately follows that $f$ is a diffeomorphism, since it is given by translation along a section.
In fact, it has an explicit inverse: $$f^{-1}(g) = \sigma(\widetilde t(g))^{-1}\cdot g$$ To conclude showing that the image of $\pi$ is a local singular Lie groupoid, the only thing left to show is that the source map $s \colon \mathcal{G}\to M$ is a $\mathbf{QUED}$-submersion. We will use the criterion from Theorem [Theorem 47](#theorem:qeds.submersion.criteria){reference-type="ref" reference="theorem:qeds.submersion.criteria"}, which means that we need to show that for all $x \in M$ we have that $s^{-1}(x)$ is a quasi-étale diffeological space. Therefore, we just need to show that $\pi|_{\widetilde s^{-1}(x)}$ is a quasi-étale chart. Let $X := \pi(\widetilde s^{-1}(x))$. We already know that $\pi|_{\widetilde s^{-1}(x)}$ is a local subduction onto $X$ and that its fibers are totally disconnected. It only remains to show that it satisfies the rigid endomorphism property. We use the simplified criterion from Lemma [Lemma 37](#lemma:finding.quasi.étale.charts){reference-type="ref" reference="lemma:finding.quasi.étale.charts"}. Suppose $\mathcal{O}\subseteq\widetilde s^{-1}(x)$ is open and $f \colon \mathcal{O}\to \widetilde s^{-1}(x)$ is a smooth function such that $\pi \circ f = \pi$ and $f(\widetilde g_0) = \widetilde g_0$ for some point $\widetilde g_0 \in \mathcal{O}$. We need to show that $f$ is a diffeomorphism in a neighborhood of $\widetilde g_0$. The argument is very similar to the one earlier in the proof. Let: $$\alpha \colon \mathcal{O}\to \widetilde s^{-1}(x) \qquad \alpha(g) := f(g) \cdot g^{-1}$$ Applying $\pi$ to $\alpha$ we get that: $$\pi \circ \alpha (g) = u\circ t(\pi(g))$$ If we choose $\mathcal{O}$ such that the intersection with the $\widetilde t$-fibers is connected, then we can conclude that $\alpha$ is constant when restricted to $\widetilde t$-fibers.
Therefore, there must exist a smooth section of the source map $\sigma \colon \widetilde t(\mathcal{O}) \to \widetilde\mathcal{G}$ with the property that: $$\alpha(g) = \sigma( \widetilde t(g))$$ Therefore, we can conclude that: $$f(g) \cdot g^{-1}= \sigma(\widetilde t(g))$$ and hence: $$f(g) = \sigma(\widetilde t(g)) \cdot g$$ which, by the same reasoning as before, shows that $f$ is a diffeomorphism. ◻ We can now finish the proof of the main theorem for this section. *Proof of Theorem [Theorem 2](#theorem:main.wein.groupoid){reference-type="ref" reference="theorem:main.wein.groupoid"}.* Let $\pi \colon A^\circ \to \Pi_1(A)$ be the projection from Section [7.1](#section:weinstein.groupoid.construction){reference-type="ref" reference="section:weinstein.groupoid.construction"}. Recall that $A^\circ$ is an open neighborhood of the zero section in $A$ and that $A^\circ$ is equipped with a unique local groupoid structure that makes $[\pi] \colon A^\circ \to \Pi_1(A)$ a local groupoid homomorphism and the zero section of $A^\circ$ the unit embedding. Note that by Lemma [Lemma 86](#lemma:source.is.subduction){reference-type="ref" reference="lemma:source.is.subduction"} we know the source map of $\Pi_1(A)$ is a local subduction. Furthermore, we know that $\pi$ is a local subduction by Lemma [Lemma 83](#lemma:exponential.is.subduction){reference-type="ref" reference="lemma:exponential.is.subduction"}. By Theorem [Theorem 87](#theorem:classify.qeds.groupoids){reference-type="ref" reference="theorem:classify.qeds.groupoids"} we conclude that $\Pi_1(A)$ is a singular Lie groupoid. Furthermore, the map $\pi \colon A^\circ \to \Pi_1(A)$ is a local groupoid chart. Therefore, the Lie algebroid of $\Pi_1(A)$ is canonically isomorphic to the Lie algebroid of the local groupoid structure on $A^\circ$, which is just $A$. ◻ [^1]: One way to prove this is by considering the subgroup of $\widetilde G \times H$ generated by the graph of $\pi$.
This can also be seen by using the theory of associative completions developed by Malcev [@malcev_sur_1941]. See also Fernandes and Michiels [@fernandes_associativity_2020] for a more modern version. [^2]: Note that the traditional form of this construction proceeds by linking a series of foliated charts.
{ "id": "2309.07258", "title": "On the integrability of Lie algebroids by diffeological spaces", "authors": "Joel Villatoro", "categories": "math.DG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We find all solutions to the constant Yang--Baxter equation $R_{12}R_{13}R_{23}=R_{23}R_{13}R_{12}$ in three dimensions, subject to an additive charge-conservation ansatz. This ansatz is a generalisation of (strict) charge-conservation, for which a complete classification in all dimensions was recently obtained. Additive charge-conservation introduces additional sector-coupling parameters -- in 3 dimensions there are $4$ such parameters. In the generic dimension 3 case, in which all of the $4$ parameters are nonzero, we find there is a single 3-parameter family of solutions. We give a complete analysis of this solution, giving the structure of the centraliser (symmetry) algebra in all orders. We also solve the remaining cases with three, two, or one nonzero sector-coupling parameter(s). author: - "J.Hietarinta[^1], P.Martin[^2], E.C.Rowell[^3]" bibliography: - local.bib - bib/Loop_Hecke.bib title: | Solutions to the constant Yang-Baxter equation:\ additive charge conservation in three dimensions --- # Introduction The Yang-Baxter equation (YBE) reads (in shorthand form) $$\label{eq:YBs2} R_{12}R_{13}R_{23}=R_{23}R_{13}R_{12} .$$ It is a fundamental equation for many applications --- see for example [@Baxter72; @Baxter73; @STF1979; @Baxter1982], [@Drinfeld1985; @Jimbo1985; @Jones1990; @Turaev1988] and references therein. To make [\[eq:YBs2\]](#eq:YBs2){reference-type="eqref" reference="eq:YBs2"} explicit, one first fixes a dimension $N$ for a vector space $V={\mathbf{C}}^N$. We can also pick bases for $V$ and $V \otimes V$. Then we have an underlying matrix $R$ acting on $V \otimes V$; each matrix $R_{ij}$ acts on $V \otimes V \otimes V$, and since there are three different indices in the equation, we have three spaces, and each $R_{ij}$ will act on two of them as indicated by its subscript and on the third as the identity.
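Given a candidate $R$ in dimension $N$, this relation can be checked mechanically. A minimal numerical sketch (assuming numpy; the helper name is ours), using the swap operator on $V\otimes V$ -- a classical solution -- as a test case:

```python
import numpy as np

def yang_baxter_anomaly(R, N):
    """R12 R13 R23 - R23 R13 R12 on V (x) V (x) V, for an N^2 x N^2 matrix R."""
    I = np.eye(N)
    P = np.zeros((N * N, N * N))          # swap: P|kl> = |lk>
    for i in range(N):
        for j in range(N):
            P[i * N + j, j * N + i] = 1.0
    R12 = np.kron(R, I)                   # R on factors 1, 2
    R23 = np.kron(I, R)                   # R on factors 2, 3
    P23 = np.kron(I, P)
    R13 = P23 @ R12 @ P23                 # conjugate so R acts on factors 1, 3
    return R12 @ R13 @ R23 - R23 @ R13 @ R12

# The swap operator itself satisfies the YBE in any dimension.
N = 3
P = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        P[i * N + j, j * N + i] = 1.0
print(np.allclose(yang_baxter_anomaly(P, N), 0))   # True
```

Here `np.kron` realises the tensor product in the lexicographical basis, so $R_{13}$ is obtained by conjugating $R_{12}$ with the swap of the last two factors.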
Thus in explicit form the Yang-Baxter equation reads $$\label{eq:YB} \sum_{\alpha_1,\alpha_2,\alpha_3}{{\cal R}}^{i_1i_2}_{\alpha_1\alpha_2} {\cal R}^{\alpha_1i_3}_{j_1\alpha_3} {\cal R}^{\alpha_2\alpha_3}_{j_2j_3} = \sum_{\beta_1,\beta_2,\beta_3}{\cal R}^{i_2i_3}_{\beta_2\beta_3} {\cal R}^{i_1\beta_3}_{\beta_1j_3} {\cal R}^{\beta_1\beta_2}_{j_1j_2},$$ where in dimension $N$ the summations range over $0,1,\dots,N-1$ (starting from 0 in our convention) and ${\cal R}^{i_1i_2}_{\alpha_1\alpha_2}$ is the appropriate matrix entry of $R$, see §[2.1](#ss:presenting){reference-type="ref" reference="ss:presenting"}. With various applications in mind, we impose $$\det(R) \neq 0.$$ For some applications the $R$ matrices depend on spectral parameters that can be different for each $R_{ij}$ [@STF1979; @Baxter1982], but in this paper we will consider the constant YBE. Observe that $R$ will have $N^2\times N^2$ entries and there will be, in principle, $N^3\times N^3$ equations. It is clear that such an overdetermined set of nonlinear equations is difficult to solve, even in this constant form. Indeed, while many individual solutions are known, a complete solution is known only for dimension two [@Hietarinta1992] and for higher dimensions knowledge is far from complete. The three-dimensional upper-triangular case was solved in [@Hietarinta_1993], but for further progress it is important to make a meaningful ansatz. Recently Martin and Rowell proposed [@MR] charge-conservation of the form $${\cal R}_{ij}^{kl}=0,\text{ if } \{i,j\}\neq\{k,l\} \text{ as a set,}$$ as an effective constraint and with it they were able to find all solutions for all dimensions. The above constraint may be called "strict charge conservation" (SCC). In this paper we will explore the results obtained by relaxing the SCC rule to "additive charge conservation" (ACC) defined by $$\label{eq:ACC} {\cal R}_{ij}^{kl}=0,\text{ if } i+j\neq k+l.$$ Observe that ACC differs from SCC first in dimension 3.
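The difference between the two ansätze can be made concrete by counting the admissible matrix entries in dimension 3: SCC permits fifteen nonzero entries, and ACC adds exactly the four 'mixing' entries (those with $i+j=k+l$ but $\{i,j\}\neq\{k,l\}$). A short count in plain Python:

```python
from itertools import product

N = 3
pairs = list(product(range(N), repeat=2))

# SCC: R_{ij}^{kl} may be nonzero only if {i,j} = {k,l} as sets.
scc = sum(1 for (i, j) in pairs for (k, l) in pairs if {i, j} == {k, l})
# ACC: R_{ij}^{kl} may be nonzero only if i + j = k + l.
acc = sum(1 for (i, j) in pairs for (k, l) in pairs if i + j == k + l)

print(scc, acc, acc - scc)   # 15 19 4
```

Since $\{i,j\}=\{k,l\}$ implies $i+j=k+l$, every SCC entry is an ACC entry, and the difference of the counts is precisely the four sector-coupling parameters of the abstract.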
In practice this change increases the complexity of the underlying computational problem by introducing four further 'mixing' parameters (SCC itself having fifteen parameters in dimension 3). The paper is organized as follows. In Section [2](#ss:YB){reference-type="ref" reference="ss:YB"} we discuss notational matters and symmetries of the problem. In Section [3](#S:sol){reference-type="ref" reference="S:sol"} we present the solutions. The set of solutions is organized according to the non-vanishing conditions on the four mixing parameters (together with their symmetries). Thus the first family of solutions is the generic case, with all parameters non-zero -- it is solved in detail in §[\[ss:Jxxxx\]](#ss:Jxxxx){reference-type="ref" reference="ss:Jxxxx"}. The various possibilities are then addressed in turn, the last case being the set of solutions where all but one mixing parameter vanishes -- §[\[ss:case6\]](#ss:case6){reference-type="ref" reference="ss:case6"}. It turns out that several of the solutions have the 'Hecke' property (precisely two distinct eigenvalues). In §[4.1](#ss:analys){reference-type="ref" reference="ss:analys"} we use this to analyse the representations, giving a complete analysis for the generic case. One natural realisation of the constant Yang--Baxter problem is as a problem in categorical representation theory, and this is the perspective largely taken in [@MR] (see also [@MRTorzewska], for example). However here we will keep to a simple analytical setting. Direct transliteration of results between the settings is a routine exercise. **Acknowledgements**. We thank Frank Nijhoff for various important contributions, including initiating our collaboration. PM thanks EPSRC for funding under grant EP/W007509/1; and Paula Martin for useful conversations. ECR was partially funded by US NSF grants DMS-2000331 and DMS-2205962.
# The setup {#ss:YB} For the braid group point of view, we first define $$\label{eq:brex} \check{R}=P\, R,\,\text{ where }\, P_{ij}^{kl}=\delta_{i}^{l}\delta_{j}^{k}, \quad\text{ i.e. }\, \check{R}_{ij}^{kl}={\cal R}_{ji}^{kl}.$$ and furthermore $$(P R)_{12} \; = \; \check{R}_1 \; := \; \check{R}\otimes 1 \quad \mbox{ and } \quad (P R)_{23} =\check{R}_2 \; := \; 1\otimes \check{R}$$ acting on $V\otimes V\otimes V$. Then the YBE in [\[eq:YBs2\]](#eq:YBs2){reference-type="eqref" reference="eq:YBs2"} becomes $$\label{eq:braidr} \check{R}_1\check{R}_2\check{R}_1=\check{R}_2\check{R}_1\check{R}_2,$$ i.e., the braid group version of the YBE. ## Presenting matrices {#ss:presenting} Set $V={\mathbf{C}}^3$ with basis labeled by $\{0,1,2\}$. We will order this basis as the symbols suggest. Using the standard ket notation, i.e. $i\otimes j=:| ij \rangle$, we may order the basis of $V\otimes V$ for example using lexicographical order $$| 00 \rangle, | 01 \rangle, | 02 \rangle, | 10 \rangle, | 11 \rangle, | 12 \rangle, | 20 \rangle, | 21 \rangle, | 22 \rangle$$ or reverse lexicographical order (rlex) $$| 00 \rangle, | 10 \rangle, | 20 \rangle, | 01 \rangle, | 11 \rangle, | 21 \rangle,| 02 \rangle, | 12 \rangle, | 22 \rangle$$ Still another possibility is to use a 'graded' reverse lexicographical ordering (grlex) $$| 00 \rangle, | 10 \rangle, | 01 \rangle, | 20 \rangle, | 11 \rangle, | 02 \rangle, | 21 \rangle, | 12 \rangle, | 22 \rangle.$$ -- the name is borrowed from monomial orderings, in which setting the symbols *are* numbers, rather than being arbitrarily associated to numbers as in our case. The matrix entries are defined as: $${\cal R}_{i,j}^{k,l}:= \langle ij |R| kl \rangle$$ In the present case with ACC [\[eq:ACC\]](#eq:ACC){reference-type="eqref" reference="eq:ACC"} and the rlex ordering we get the matrix $$\label{eq:matf} {R_{{rlex}} = \begin{pmatrix}{\cal R}_{0,0}^{0,0} &. & . & . & . & . & . & . & .\\ . & {\cal R}_{1,0}^{1,0} & . 
& {\cal R}_{1,0}^{0,1} & . & . & . & . & .\\ . & . & {\cal R}_{2,0}^{2,0} & . & {\cal R}_{2,0}^{1,1} & . & {\cal R}_{2,0}^{0,2 } & . & . \\ . & {\cal R}_{0,1}^{1,0} & . & {\cal R}_{0,1}^{0,1} & . & . & . & . & . \\ . & . & {\cal R}_{1,1}^{2,0} & . & {\cal R}_{1,1}^{1,1} & . & {\cal R}_{1,1}^{0,2} & . & . \\ . & . & . & . & . & {\cal R}_{2,1}^{2,1} & . & {\cal R}_{2,1}^{1,2} & .\\ . & . & {\cal R}_{0,2}^{2,0} & . & {\cal R}_{0,2}^{1,1} & . & {\cal R}_{0,2}^{0,2} & . & . \\ . & . & . & . & . & {\cal R}_{1,2}^{2,1} & . & {\cal R}_{1,2}^{1,2} & .\\ . &. & . & . & . & . & . & . & {\cal R}_{2,2}^{2,2} \end{pmatrix}}$$ Indeed the 'shape' -- the non-vanishing pattern -- is the same for $R$, $R_{rlex}$ and $\check{R}$. The grlex matrix is obtained from this with $$R_{grlex}=P_G R_{rlex} P_G,$$ where $$P_G:={\small \begin{pmatrix} 1&.&.&.&.&.&.&.&.\\ .&1&.&.&.&.&.&.&.\\ .&.&.&1&.&.&.&.&.\\ .&.&1&.&.&.&.&.&.\\ .&.&.&.&1&.&.&.&.\\ .&.&.&.&.&.&1&.&.\\ .&.&.&.&.&1&.&.&.\\ .&.&.&.&.&.&.&1&.\\ .&.&.&.&.&.&.&.&1 \end{pmatrix}}$$ Then an ACC matrix takes the block form exemplified by $$\label{eq:matg} {R_{grlex}= \begin{pmatrix}{\cal R}_{0,0}^{0,0} &. & . & . & . & . & . & . & .\\ . & {\cal R}_{1,0}^{1,0} & {\cal R}_{1,0}^{0,1} & . & . & . & . & . & .\\ . & {\cal R}_{0,1}^{1,0} & {\cal R}_{0,1}^{0,1} & . & . & . & . & . & . \\ . & . & . & {\cal R}_{2,0}^{2,0} & {\cal R}_{2,0}^{1,1} &{\cal R}_{2,0}^{0,2 } & . & . & . \\ . & . & . & {\cal R}_{1,1}^{2,0} & {\cal R}_{1,1}^{1,1} & {\cal R}_{1,1}^{0,2} & . & . & . \\ . & . & . & {\cal R}_{0,2}^{2,0} & {\cal R}_{0,2}^{1,1} & {\cal R}_{0,2}^{0,2} & . & . & . \\ . & . & . & . & . & . & {\cal R}_{2,1}^{2,1} & {\cal R}_{2,1}^{1,2} & .\\ . & . & . & . & . & . & {\cal R}_{1,2}^{2,1} & {\cal R}_{1,2}^{1,2} & .\\ . &. & . & . & . & . & . & . 
& {\cal R}_{2,2}^{2,2} \end{pmatrix}}$$ In order to save space we will in the following just give the blocks as $$\label{eq:Rblock} R_{grlex}=\begin{bmatrix} {\cal R}_{0,0}^{0,0} \end{bmatrix} \begin{bmatrix} {\cal R}_{1,0}^{1,0} & {\cal R}_{1,0}^{0,1}\\ {\cal R}_{0,1}^{1,0} & {\cal R}_{0,1}^{0,1}\end{bmatrix} \begin{bmatrix} {\cal R}_{2,0}^{2,0} & {\cal R}_{2,0}^{1,1} &{\cal R}_{2,0}^{0,2 } \\ {\cal R}_{1,1}^{2,0} & {\cal R}_{1,1}^{1,1} & {\cal R}_{1,1}^{0,2} \\ {\cal R}_{0,2}^{2,0} & {\cal R}_{0,2}^{1,1} & {\cal R}_{0,2}^{0,2}\end{bmatrix} \begin{bmatrix} {\cal R}_{2,1}^{2,1} & {\cal R}_{2,1}^{1,2} \\ {\cal R}_{1,2}^{2,1} & {\cal R}_{1,2}^{1,2} \end{bmatrix} \begin{bmatrix} {\cal R}_{2,2}^{2,2} \end{bmatrix}.$$ Recall that $\check{R}$ is obtained from $R$ by exchanging lower indices, which corresponds to up-down reflection within the block. In order to match with [@MR] (using shifted basis labels $\{0,1,2\} \leadsto$ $\{1,2,3\}$), highlight the new parameters, and save from writing many double indices we introduce shorthand notation for $\check{R}$: $$\label{Rcheck} \check{R}=PR = \begin{pmatrix} a_{1}&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\cr \cdot&{a}_{12}&\cdot&{b}_{12}&\cdot&\cdot&\cdot&\cdot&\cdot\cr \cdot&\cdot&{a}_{13}&\cdot&{x_1}&\cdot&{b}_{13}&\cdot&\cdot\cr \cdot&{c}_{12}&\cdot&{d}_{12}&\cdot&\cdot&\cdot&\cdot&\cdot\cr \cdot&\cdot&x_{2}&\cdot&a_{2}&\cdot&x_{3}&\cdot&\cdot\cr \cdot&\cdot&\cdot&\cdot&\cdot&{a}_{23}&\cdot&{b}_{23}&\cdot\cr \cdot&\cdot&{c}_{13}&\cdot&{x_4}&\cdot&{d}_{13}&\cdot&\cdot\cr \cdot&\cdot&\cdot&\cdot&\cdot&{c}_{23}&\cdot&{d}_{23}&\cdot\cr \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&a_{3}\cr \end{pmatrix}$$ Then the block form is $$\label{eq:blocsh} \begin{bmatrix} a_{1}\end{bmatrix} \begin{bmatrix} a_{12} & b_{12} \\ c_{12} & d_{12} \end{bmatrix} \begin{bmatrix} a_{13} & x_1 & b_{13} \\ x_{2} & a_{2} & x_3\\ c_{13} & x_4 & d_{13} \end{bmatrix} \begin{bmatrix} a_{23} & b_{23} \\ c_{23} & d_{23} \end{bmatrix} 
\begin{bmatrix} a_{3} \end{bmatrix}$$ ## Symmetries {#ss:syms} Naturally it is useful to consider additive charge-conserving solutions to [\[eq:braidr\]](#eq:braidr){reference-type="eqref" reference="eq:braidr"} up to transformations that preserve [\[eq:braidr\]](#eq:braidr){reference-type="eqref" reference="eq:braidr"} and the additive charge-conserving condition. 1. **Scaling** symmetry: Equation [\[eq:braidr\]](#eq:braidr){reference-type="eqref" reference="eq:braidr"} and the additive charge-conserving condition are invariant under rescaling $\check{R}$ by a non-zero complex number. 2. **Transpose** symmetry: Additive charge-conservation is preserved under transposition: $\check{R}\mapsto\check{R}^T$; and of course [\[eq:braidr\]](#eq:braidr){reference-type="eqref" reference="eq:braidr"} is satisfied by $\check{R}^T$ if it is satisfied by $\check{R}$ quite generally. Indeed, notice that from the form of [\[eq:YB\]](#eq:YB){reference-type="eqref" reference="eq:YB"} it is easy to see that if ${\cal \check{R}}_{i,j}^{k,l}$ solves the equation, then so does ${\cal \check{R}}_{k,l}^{i,j}$. The effect on the variables in [\[Rcheck\]](#Rcheck){reference-type="eqref" reference="Rcheck"} is $b_{ij}\leftrightarrow c_{ij}$ and $x_1\leftrightarrow x_2$ and $x_3\leftrightarrow x_4$. 3. **Left-Right** (LR) symmetry: If the ordering of the basis is changed from lex to rlex, the resulting matrix will also be a solution. This can be seen in matrix entries because if ${\cal \check{R}}_{i,j}^{k,l}$ solves equation [\[eq:braidr\]](#eq:braidr){reference-type="eqref" reference="eq:braidr"}, then so does ${\cal \check{R}}_{j,i}^{l,k}$. This corresponds to reflecting each of the blocks in [\[eq:blocsh\]](#eq:blocsh){reference-type="eqref" reference="eq:blocsh"} across both the diagonal and the skew-diagonal, i.e. $a_{ij}\leftrightarrow d_{ij}$, $b_{ij}\leftrightarrow c_{ij}$ as well as $x_1\leftrightarrow x_4$ and $x_2\leftrightarrow x_3$. 4. 
**02-** or $| 0 \rangle\leftrightarrow| 2 \rangle$ symmetry: while [\[eq:braidr\]](#eq:braidr){reference-type="eqref" reference="eq:braidr"} is clearly invariant under local basis changes, the additive charge-conserving condition is not. However, the local basis change (permutation) $| j \rangle\leftrightarrow | 2-j \rangle$ with indices $\{0,1,2\}$ taken modulo $3$ does preserve the form of an additive charge-conserving matrix: the span of the $| ij \rangle$ with $i+j=2$ is preserved, while the $| ij \rangle$ with $i+j=1$ and $i+j=3$ are interchanged as are the vectors $| 00 \rangle$ and $| 22 \rangle$. The effect on the block form [\[eq:blocsh\]](#eq:blocsh){reference-type="eqref" reference="eq:blocsh"} is to interchange the pairs of $1\times 1$ and $2\times 2$ blocks followed by a reflection across both the diagonal and skew-diagonal of each block. Of course these symmetries can be composed with one another and, discounting the rescaling, one finds that the group of such symmetries is the dihedral group of order $8$. This can be seen by tracking the orbit of the $2\times 2$ matrix $\begin{bmatrix} a_{12} & b_{12} \\ c_{12} & d_{12} \end{bmatrix}$, since there are no symmetries that fix it. Indeed, we see that there are $4$ forms it can take, generated by the reflections across the diagonal and, independently across the skew-diagonal, and two positions in [\[eq:blocsh\]](#eq:blocsh){reference-type="eqref" reference="eq:blocsh"} it can occupy. # The solutions {#S:sol} For constant Yang--Baxter solutions, a necessary and sufficient set of constraint equations on the indeterminate matrix entries arise as follows. Firstly compute, say, $$\label{eq:AR} A_R:= \check{R}_1 \check{R}_2 \check{R}_1 - \check{R}_2 \check{R}_1 \check{R}_2$$ Then set $A_R=0$. The SCC case in which all $x_i$ vanish was solved in [@MR]. 
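The anomaly [\[eq:AR\]](#eq:AR){reference-type="eqref" reference="eq:AR"} is mechanical to evaluate numerically, and doing so for a generic ACC-shaped $\check{R}$ shows that the resulting constraints are nontrivial. A sketch (assuming numpy; the helper name is ours):

```python
import numpy as np

def braid_anomaly(Rc, N=3):
    """A_R = Rc1 Rc2 Rc1 - Rc2 Rc1 Rc2 with Rc1 = Rc (x) 1, Rc2 = 1 (x) Rc."""
    I = np.eye(N)
    Rc1 = np.kron(Rc, I)   # Rc acting on factors 1, 2
    Rc2 = np.kron(I, Rc)   # Rc acting on factors 2, 3
    return Rc1 @ Rc2 @ Rc1 - Rc2 @ Rc1 @ Rc2

# Fill the ACC pattern (lex ordering of the basis) with arbitrary
# nonzero entries; such a generic matrix does NOT solve the braid relation.
rng = np.random.default_rng(0)
Rc = np.zeros((9, 9))
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                if i + j == k + l:
                    Rc[3 * i + j, 3 * k + l] = rng.uniform(1, 2)

print(np.linalg.norm(braid_anomaly(Rc)))   # generically nonzero
```

Setting this anomaly to zero entry by entry yields the cubic system solved in the following sections.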
Now that in ACC some $x_i$ can be nonzero there will be mixing, but since mixing takes place only between states $|ij\rangle$ with the same $i+j$ this is a computationally relatively modest generalisation. However the full symmetry of indices that exists for SCC is now broken. This relaxation of the ansatz obviously increases the complexity of the system of cubic equations, but they can still be solved, as given below. We organise the solutions according to which of the $x_i$ vanish. In principle there are $2^4-1=15$ cases (excluding the SCC case), but we can use the above symmetries in order to omit some $x$ configurations. This leads to the following classification into 6 cases: 1. All $x_i$ are nonzero. See §[3.1](#ss:Jxxxx1){reference-type="ref" reference="ss:Jxxxx1"} and §[\[ss:Jxxxx\]](#ss:Jxxxx){reference-type="ref" reference="ss:Jxxxx"}. 2. Precisely one $x$ vanishes; by symmetry it can be assumed to be $x_4$. See §[\[ss:nosoln\]](#ss:nosoln){reference-type="ref" reference="ss:nosoln"}. 3. $x_3x_4\neq0$ and $x_1=x_2=0$, related by the LR symmetry to $x_1x_2\neq0$ and $x_3=x_4=0$. See §[\[ss:case3\]](#ss:case3){reference-type="ref" reference="ss:case3"}. 4. $x_1x_3\neq0$ and $x_2=x_4=0$, related to $x_2x_4\neq0$ by transposition. See §[\[ss:nosoln\]](#ss:nosoln){reference-type="ref" reference="ss:nosoln"}. 5. $x_1x_4\neq0$, $x_2=x_3=0$, related to $x_2x_3\neq0$ by transposition, §[\[ss:case5\]](#ss:case5){reference-type="ref" reference="ss:case5"} 6. Only one $x$ is nonzero; by symmetry it can be assumed to be $x_4$, §[\[ss:case6\]](#ss:case6){reference-type="ref" reference="ss:case6"}. As noted, solution of constant Yang--Baxter is equivalent to solving $A_R=0$. We write out $A_R$ explicitly in Appendix [5.1](#ss:cubics){reference-type="ref" reference="ss:cubics"}. We solve for the various cases as above in the following sections [3.1](#ss:Jxxxx1){reference-type="ref" reference="ss:Jxxxx1"}--[\[ss:case6\]](#ss:case6){reference-type="ref" reference="ss:case6"}. 
In the first of these we treat Case 1 relatively gently. After that we will proceed more rapidly through all cases. ## The ${x}_1 {x}_2 {x}_3 {x}_4\neq 0$ solutions {#ss:Jxxxx1} Recall the ACC ansatz for $\check{R}$, which is as in ([\[Rcheck\]](#Rcheck){reference-type="ref" reference="Rcheck"}). Consider now the refinement of this ansatz indicated by the block structure $$\left[\begin {array}{c} 1\end {array} \right] \left[ \begin {array}{cc} 1&\cdot\\ \noalign{\medskip}\cdot&1\end {array} \right] \left[ \begin {array}{ccc} a&x_{{1}}&b\\ \noalign{\medskip}{\frac {x_ {{3}} \left( a-1 \right) }{b}}&{\frac {x_{{1}}x_{{3}}+b}{b}}&x_{{3}} \\ \noalign{\medskip}{\frac {{x_{{1}}}^{2}{x_{{3}}}^{2}}{{b}^{3}}}&-{ \frac {x_{{1}} \left( ab+x_{{1}}x_{{3}} \right) }{a{b}^{2}}}&-{\frac { x_{{1}}x_{{3}}}{ab}}\end {array} \right] \left[ \begin {array}{cc} 1 &\cdot\\ \noalign{\medskip}\cdot&1\end {array} \right] \left[ \begin {array} {c} 1\end {array} \right]$$ that is $$\label{eq:JaR1} \check{R}_{j} \; = \left[ \begin{array}{ccccccccc} 1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & a & \cdot & {{x}_1} & \cdot & b & \cdot & \cdot \\ \cdot & \cdot & \cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \frac{{{x}_3} \left(a -1\right)}{b} & \cdot & \frac{{{x}_1} {{x}_3} +b}{b} & \cdot & {{x}_3} & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot & \cdot & \cdot \\ \cdot & \cdot & \frac{{{x}_3}^{2} {{x}_1}^{2}}{b^{3}} & \cdot & -\frac{{{x}_1} \left(a b +{{x}_1} {{x}_3} \right)}{a \,b^{2}} & \cdot & -\frac{{{x}_3} {{x}_1}}{a b} & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 \end{array}\right]$$ Here the parameters ${a}_{13}, \; {b}_{13}, \; {x}_1, \; {x}_3$ are indeterminate (we write ${a}= {a}_{13}, \; {b}= {b}_{13}$) but the remaining 
parameters are replaced with functions of these four as shown. **Proposition 1**. *(I) Consider the ansatz for $\check{R}$ in ([\[Rcheck\]](#Rcheck){reference-type="ref" reference="Rcheck"}). If we leave parameters ${a}_{13}, \; {b}_{13}, \; {x}_1, \; {x}_3$ indeterminate (here we write ${a}= {a}_{13}, \; {b}= {b}_{13}$) but replace the remaining parameters with functions of these four as shown in ([\[eq:JaR1\]](#eq:JaR1){reference-type="ref" reference="eq:JaR1"}) then the braid anomaly $A_R$ has an overall factor $${x}_3^{2} {x}_1^{2} a + a^{2} b^{2} + {x}_1 b {x}_3 a - a \,b^{2} - {x}_1 b {x}_3 \;\; = \;\; b^2 a (a-1) + b {x}_3 {x}_1 (a-1) + a {x}_3^2 {x}_1^2$$ That is, we have a family of solutions obeying $\check{R}_1 \check{R}_2 \check{R}_1 = \check{R}_2 \check{R}_1 \check{R}_2$ with free non-zero parameters (say) $a, {x}_1, {x}_3$, and parameter $b$ determined by $$\label{eq:boxx} \frac{b}{{x}_1 {x}_3} = \frac{-\frac{1}{a} \pm \sqrt{\frac{1}{a^2}-\frac{4}{a-1}}}{2} \hspace{1cm}\mbox{ or } \hspace{1cm} \frac{{x}_1 {x}_3}{b} = \frac{-(a-1)\pm\sqrt{(a-1)^2 -4a^2 (a-1)}}{2a}$$ and the remaining entries determined as in ([\[eq:JaR1\]](#eq:JaR1){reference-type="ref" reference="eq:JaR1"}) above.* *(II) If ${x}_1 {x}_2 {x}_3 {x}_4 \neq 0$ then the above (with $a\neq 1$) gives the complete set of solutions up to overall rescaling.* \(I\) is simply a brutal but straightforward calculation, plugging in to $A_R$ as given for example in Appendix [5.1](#ss:cubics){reference-type="ref" reference="ss:cubics"}. $\;$ For (II) we proceed as follows. 
The matrix $A_R$ is rather large to write out (again see Appendix [5.1](#ss:cubics){reference-type="ref" reference="ss:cubics"}), but a subset of its entries is $$\label{eq:SR0} {\mathsf{S}}_{R}\;\; =\;\; \{ \; -{a}_{12} {b}_{12} {c}_{12} -{a}_1 {a}_{12} ({a}_{12} - {a}_{1}), \;\;\;\;\;\;\; \hspace{.1cm} - {b}_{23} {c}_{23} {d}_{23} -{a}_3 {d}_{23} ({d}_{23} - {a}_{3}), \;\;\;\;\; \hspace{1cm}$$ $$\label{eq:SR00} \hspace{1.2cm} -{b}_{23} {x}_1 {x}_3, \;\;\;\;\;\;\;\hspace{.1cm} -{b}_{12} {x}_1 {x}_3, \;\;\;\;\;\; ({a}_{12}-{d}_{12}) {x}_1 {x}_3, \;\;\;\;\;\; ((-{d}_{12} +{a}_1 ) {b}_{13} -{b}_{12}^2 ) {x}_1, \;\;\;\;\;$$ $${a}_{13} ({b}_{12} {c}_{12} -{b}_{23} {c}_{23}) +{a}_{12} {a}_{23} ({a}_{12} - {a}_{23}), \hspace{1cm} ({d}_{12} -{d}_{23} ) {x}_1 {x}_2, \hspace{1cm} ({d}_{23} -{a}_{23} ) {x}_1 {x}_3,$$ $$\label{eq:SR} \hspace{-.1cm} -{a}_{12} {x}_1 {x}_3 -{a}_{13} {b}_{13} {d}_{13}, \;\;\;\;\;\hspace{1cm} -{a}_{12} {x}_2 {x}_4 -{a}_{13} {c}_{13} {d}_{13}, \hspace{.9cm} {a}_{12} {c}_{12} {d}_{12}, \;\;\;\;\;\hspace{1cm} {a}_{23} {c}_{23} {d}_{23}, \;\;\;\;\;\;$$ $$\label{eq:SR4} -{a}_{13} {b}_{13} {x}_4 +({a}_1 {a}_{12} -{a}_{12} {a}_2 -{a}_1 {a}_{13} ) {x}_1,$$ $$\label{eq:SR5} ( {a}_1 {d}_{13} +{a}_2 {d}_{12} -{a}_1 {d}_{12} ) {x}_3 +{b}_{13} {d}_{13} {x}_2, \hspace{1.2cm} ({a}_{13} {d}_{13} +{a}_2 {a}_{23} -{a}_{13} {a}_{23} ) {x}_3 + ( {a}_3 {b}_{13} -{b}_{23}^2 ) {x}_2, \;\;\;\;\;\;$$ $$\label{eq:SR7} \hspace{.03cm} {a}_{13} {c}_{13} {x}_3 +({a}_{13} {a}_3 -{a}_{23} {a}_3 +{a}_2 {a}_{23} ) {x}_2, \hspace{1.1cm} {a}_{12} {c}_{13} {x}_3 +(-{a}_2 {d}_{23} +{a}_{13} {d}_{23} -{b}_{23} {c}_{12} +{a}_2^2 ) {x}_2,$$ $$\label{eq:SR77} ( {a}_1 {c}_{13} -{c}_{12}^2 ) {x}_3 + ( {a}_{13} {d}_{13} -{d}_{12} {d}_{13} +{a}_2 {d}_{12} ) {x}_2,$$ $$\label{eq:SR777} -{a}_2 {x}_1 {x}_2 +{a}_{13} {d}_{12}^2 -{a}_{13}^2 {d}_{12} -{a}_{23} {b}_{13} {c}_{13} +{a}_{23} {b}_{12} {c}_{12},$$ $$\label{eq:SR8} {a}_1 {x}_3 {x}_4 +{d}_{13} {x}_1 {x}_2 -{a}_2 {d}_{12}^2 -{b}_{12} 
{c}_{12} {d}_{12} +{a}_2^2 {d}_{12}, \;\;\;\;\hspace{.42cm}$$ $$\label{eq:SR9} {a}_{13} {x}_3 {x}_4 +{a}_3 {x}_1 {x}_2 -{a}_{23} {b}_{23} {c}_{23} -{a}_2 {a}_{23}^2 +{a}_2^2 {a}_{23}, \;\;\;\; ... \} \;$$ Imposing $A_R=0$ and ${x}_1 {x}_3 \neq 0$ we thus get ${b}_{12} = {b}_{23}=0$ and ${a}_{12} = {d}_{12}$ from ([\[eq:SR00\]](#eq:SR00){reference-type="ref" reference="eq:SR00"}). Note that $\check{R}$ is invertible, so ${a}_1 , {a}_3 \neq 0$ and $$\label{eq:Rinv} {a}_{12} {d}_{12}-{b}_{12} {c}_{12} \neq 0 , \hspace{1cm} {a}_{23} {d}_{23}-{b}_{23} {c}_{23} \neq 0.$$ Thus ${a}_{12} , {d}_{12},{a}_{23}, {d}_{23} \neq 0$. Hence ${d}_{12}={a}_{23} ={a}_{12} = {a}_1 = {d}_{23} = {a}_3$ and ${c}_{12} = {c}_{23}=0$ from ([\[eq:SR\]](#eq:SR){reference-type="ref" reference="eq:SR"}). Note that if $\check{R}$ is a solution then so is any non-zero scalar multiple, so we first scale $\check{R}$ by an overall factor, so that ${a}_1 =1$. This confirms the form for $\check{R}$ above outside the $3\times 3$ block. Observe now from ([\[eq:SR\]](#eq:SR){reference-type="ref" reference="eq:SR"}) that ${a}_{13} {b}_{13} {d}_{13} =-{x}_1 {x}_3 \neq 0$, and that we may either replace ${a}_{13} = -\frac{{x}_1 {x}_3}{{b}_{13} {d}_{13}}$, or ${d}_{13} = -\frac{{x}_1 {x}_3}{{a}_{13} {b}_{13}}$. The latter gives the form of ${d}_{13}$ in the Proposition. Before proceeding we will need to show that ${d}=1$ cannot occur here. (Recall ${a}={a}_{13}$, and write also ${d}={d}_{13}$.) Comparing ([\[eq:SR8\]](#eq:SR8){reference-type="ref" reference="eq:SR8"}) and ([\[eq:SR9\]](#eq:SR9){reference-type="ref" reference="eq:SR9"}) we find $$({a}-1) {x}_3 {x}_4 = ({d}-1) {x}_1 {x}_2 .$$ So ${a}=1$ if and only if ${d}=1$. Thus if ${a}=1$ then ${b}_{13} =-{x}_1 {x}_3$ and ${c}_{13} = -{x}_2 {x}_4$ (consider ([\[eq:SR\]](#eq:SR){reference-type="ref" reference="eq:SR"}i/ii)). 
Evaluating ([\[eq:SR7\]](#eq:SR7){reference-type="ref" reference="eq:SR7"}ii)-([\[eq:SR77\]](#eq:SR77){reference-type="ref" reference="eq:SR77"}) here we find: $${a}_2^2 -{a}_2 +a -{a}_2 +{d}-a {d}\;=\;\; ({a}_2 -1)^2 -(a-1)({d}-1) \; = 0$$ So if ${a}=1$ then ${a}_2=1$. Further, if ${a}=1$ then ([\[eq:SR777\]](#eq:SR777){reference-type="ref" reference="eq:SR777"}) becomes $-{x}_1 {x}_2 -{b}_{13} {c}_{13} = -{x}_1 {x}_2 - {x}_1 {x}_2 {x}_3 {x}_4 =0$ so ${x}_3 {x}_4 = -1$. But if ${a}=1$ then ([\[eq:SR4\]](#eq:SR4){reference-type="ref" reference="eq:SR4"}) gives ${x}_4 = (-{x}_1)/(-{x}_1 {x}_3) = 1/{x}_3$ -- a contradiction. We conclude here that $({a}-1)\neq 0$; and hence $({d}-1) \neq 0$. From ([\[eq:SR5\]](#eq:SR5){reference-type="ref" reference="eq:SR5"}) we have two formulae for ${b}_{13} {x}_2$. Equating we have $$\frac{( {a}_1 {d}_{13} +{a}_2 {d}_{12} -{a}_1 {d}_{12} )} {{d}_{13} } \hspace{.2cm} = ({a}_{13} {d}_{13} +{a}_2 {a}_{23} -{a}_{13} {a}_{23} )$$ that is $$\frac{( {d}_{13} +{a}_2 - 1 )} {{d}_{13} } \hspace{.2cm} = ({a}_{13} {d}_{13} +{a}_2 -{a}_{13} )$$ giving $$\frac{( {a}_2 - 1 )(1- {d}_{13})} {{d}_{13} } \hspace{.2cm} = -{a}_{13} (1-{d}_{13} )$$ Since ${d}_{13} -1 \neq 0$ we have $${a}_2 - 1 = -{a}_{13} {d}_{13} = \frac{ {x}_1 {x}_3 }{ {b}_{13} }$$ giving ${a}_2$ as in the Proposition. Plugging back in $$({d}_{13} +{a}_2 -1) {x}_3 =-{b}_{13} {d}_{13} {x}_2$$ $$\frac{({a}_{13} -1) {x}_3}{{b}_{13}} = {x}_2$$ as in the Proposition. From ([\[eq:SR4\]](#eq:SR4){reference-type="ref" reference="eq:SR4"}) we have $${x}_4 \; = \; {x}_1 \frac{1-{a}_2 -{a}_{13}}{{a}_{13} {b}_{13}} \; = \; {x}_1 \frac{-{x}_1 {x}_3 -{a}_{13} {b}_{13}}{{a}_{13} {b}_{13}^2}$$ as in the Proposition. 
Finally from ([\[eq:SR7\]](#eq:SR7){reference-type="ref" reference="eq:SR7"}) we now have $${c}_{13} = -\frac{ ( {a}_{13} +{a}_2 -1 ) {x}_2}{{a}_{13} {x}_3} = -\frac{({a}_{13} {b}_{13} +{x}_1 {x}_3 )({a}_{13}-1)}{{a}_{13} {b}_{13}^2}$$ $${c}_{13} {x}_3 = - ( {a}_{13} +{a}_2 ({a}_2 -1) ) {x}_2 = - ( {a}+ \frac{( {x}_1 {x}_3 +b )}{b} \frac{{x}_1 {x}_3}{ b} ) \frac{ ({a}_{13}-1) {x}_3 }{b}$$ $$= \frac{ {-} {x}_3 ( (a b^2 + {x}_1 {x}_3 ({x}_1 {x}_3 +{b}_{13}))({a}_{13}-1))}{{b}_{13}^3}$$ Equating the two formulae for ${c}_{13}$, and noting that ${a}_{13}-1\neq 0$, we have $${a}_{13} ({x}_1 {x}_3 )^2 +({a}_{13}-1) {b}_{13} {x}_1 {x}_3 +({a}_{13}-1) {a}_{13} {b}_{13}^2 =0$$ Plugging back in to ([\[eq:SR7\]](#eq:SR7){reference-type="ref" reference="eq:SR7"}) we obtain ${c}_{13}$ as in the Proposition, so we are done. $\square$ ## Case 1: $x_1x_2x_3x_4\neq0$ revisited [\[ss:Jxxxx\]]{#ss:Jxxxx label="ss:Jxxxx"} In this section we solve the case in which $x_1x_2x_3x_4\neq0$ again, but leaning directly on the Appendix (as we shall below for the remaining cases). Since all $x_i$ are nonzero, we conclude from equations [\[A5\]](#A5){reference-type="eqref" reference="A5"}-[\[A8\]](#A8){reference-type="eqref" reference="A8"} that $b_{12}=c_{12}=b_{23}=c_{23}=0$, and from [\[A11\]](#A11){reference-type="eqref" reference="A11"}-[\[A13\]](#A13){reference-type="eqref" reference="A13"} $a_{12}=d_{12}=a_{23}=d_{23}$. Then since $a_{12}\neq0$ from [\[A29\]](#A29){reference-type="eqref" reference="A29"} we get $a_{12}=1$ and since $a_3\neq0$ from [\[A30\]](#A30){reference-type="eqref" reference="A30"} $a_3=1$. 
Now from [\[A18\]](#A18){reference-type="eqref" reference="A18"} we find $a_{13}b_{13}d_{13}\neq0$ and we can solve $$d_{13}=-\frac{x_1x_3}{a_{13}b_{13}},$$ and from [\[A23\]](#A23){reference-type="eqref" reference="A23"} $$c_{13}=\frac{b_{13}x_2x_4}{x_1x_3}.$$ Then from [\[A70\]](#A70){reference-type="eqref" reference="A70"} we find $$x_4=\frac{x_1(1-a_2-a_{13})}{a_{13}b_{13}}.$$ Now it turns out that some equations factorize, for example [\[A94\]](#A94){reference-type="eqref" reference="A94"} can be written as $$x_1(a_{13}-1)[(a_2-1)b_{13}-x_1x_3]=0.$$ If we were to choose $a_{13}=1$ we reach a contradiction: from [\[A58\]](#A58){reference-type="eqref" reference="A58"} we get $a_2=1$ and then [\[A50\]](#A50){reference-type="eqref" reference="A50"} and [\[A66\]](#A66){reference-type="eqref" reference="A66"} are contradictory since $x_i\neq0$. Thus we can solve $$a_2=\frac{b_{13}+x_1x_3}{b_{13}}.$$ and then from [\[A50\]](#A50){reference-type="eqref" reference="A50"} $$x_2=\frac{x_3(a_{13}-1)}{b_{13}}.$$ After this all remaining nonzero equations simplify to $$b_{13}^2a_{13}(a_{13}-1)+b_{13} x_1x_3(a_{13}-1)+a_{13}x_1^2x_3^2=0.$$ This biquadratic equation can be resolved using Weierstrass elliptic function $\wp$: $$a_{13}=-\wp+\frac5{12},\quad b_{13}=6x_1x_3\frac{12\wp+7+12\wp'}{(12\wp-5)(12\wp+7)},$$ where $$(\wp')^2=4\wp^3- \frac1{12}\wp+\frac{ 7\cdot 23}{2^3\ 3^3}.$$ With simplifying notation, $a_{13}=a,b_{13}=x_1x_3\beta$ the solution in block form reads [\[sol:case1\]]{#sol:case1 label="sol:case1"} $$\begin{bmatrix}1&.\cr .&1\end{bmatrix} \begin{bmatrix}a&x_{1}&\beta x_{1} x_{3}\cr \frac{a -1}{ \beta x_{1}}& \frac{\beta+1}{ \beta}&x_{3}\cr \frac{1}{ \beta^{3} x_{1} x_{3}}& \frac{- (a \beta+1 ) }{ a \beta^{2} x_{3}}& \frac{-1}{ a \beta}\end{bmatrix} \begin{bmatrix}1&.\cr .&1\end{bmatrix} [1]$$ with constraint $$\beta^2 a (a - 1) + \beta(a - 1) + a=0.$$ ## Cases 2 and 4: $x_1x_3\neq0$ and $x_4x_2=0$ [\[ss:nosoln\]]{#ss:nosoln label="ss:nosoln"} From 
[\[A7\]](#A7){reference-type="eqref" reference="A7"}-[\[A8\]](#A8){reference-type="eqref" reference="A8"} we get $b_{12}=b_{23}=0$ and then since the matrix is non-singular we must have $a_{12}d_{12}a_{23}d_{23}\neq 0$. Then from [\[A1\]](#A1){reference-type="eqref" reference="A1"},[\[A2\]](#A2){reference-type="eqref" reference="A2"} we get $c_{12}=c_{23}=0$, from [\[A29\]](#A29){reference-type="eqref" reference="A29"},[\[A31\]](#A31){reference-type="eqref" reference="A31"} $a_{12}=d_{12}=1$ (recall that we have scaled $a_1=1$) and from [\[A84\]](#A84){reference-type="eqref" reference="A84"},[\[A85\]](#A85){reference-type="eqref" reference="A85"} $a_{23}=d_{23}=1$. Next from [\[A30\]](#A30){reference-type="eqref" reference="A30"} $a_3=1$. Since $x_1x_3\neq0$ we have from [\[A18\]](#A18){reference-type="eqref" reference="A18"} that $a_{13}b_{13}d_{13}\neq 0$ and then from [\[A24\]](#A24){reference-type="eqref" reference="A24"} we find $c_{13}=0$. To continue we consider first the case $x_2=0$, $x_4$ free. Then from [\[A52\]](#A52){reference-type="eqref" reference="A52"} we get $a_{13}=1$ and from [\[A76\]](#A76){reference-type="eqref" reference="A76"} $d_{13}=1-a_2$. Then since $d_{13}\neq0$ we cannot have $a_2=1$ but [\[A106\]](#A106){reference-type="eqref" reference="A106"} is $x_3(a_2-1)^2=0$, a contradiction. Next assume $x_2\neq0,\,x_4=0$. Then from [\[A66\]](#A66){reference-type="eqref" reference="A66"} we get $d_{13}=1$ and from [\[A70\]](#A70){reference-type="eqref" reference="A70"} $a_{13}=1-a_2$ but [\[A98\]](#A98){reference-type="eqref" reference="A98"} yields $a_2=1$ which is in contradiction with $a_{13}\neq0$. Thus there are no solutions in this case. 
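In contrast with the empty cases just treated, the generic-case family ([\[sol:case1\]](#sol:case1){reference-type="ref" reference="sol:case1"}) can be spot-checked numerically: fix $a$, $x_1$, $x_3$, solve the quadratic constraint for $\beta$, assemble $\check{R}$ in the lex basis, and evaluate the braid anomaly. A sketch (assuming numpy; helper names are ours):

```python
import numpy as np

def braid_anomaly(Rc, N=3):
    """Rc1 Rc2 Rc1 - Rc2 Rc1 Rc2 with Rc1 = Rc (x) 1, Rc2 = 1 (x) Rc."""
    I = np.eye(N)
    Rc1 = np.kron(Rc, I)
    Rc2 = np.kron(I, Rc)
    return Rc1 @ Rc2 @ Rc1 - Rc2 @ Rc1 @ Rc2

a, x1, x3 = 2.0, 1.0, 1.0
# beta solves beta^2 a(a-1) + beta(a-1) + a = 0 (complex for this choice of a).
beta = np.roots([a * (a - 1), a - 1, a])[0]
b = beta * x1 * x3                     # b_{13} in the notation of the text

Rc = np.eye(9, dtype=complex)          # lex basis |00>, |01>, ..., |22>
charge2 = [2, 4, 6]                    # positions of |02>, |11>, |20>
Rc[np.ix_(charge2, charge2)] = np.array([
    [a,                       x1,                                   b],
    [(a - 1) / (beta * x1),   (beta + 1) / beta,                    x3],
    [1 / (beta**3 * x1 * x3), -(a * beta + 1) / (a * beta**2 * x3), -1 / (a * beta)],
])

print(np.linalg.norm(braid_anomaly(Rc)))   # ~ 0 up to roundoff
```

The anomaly vanishes to machine precision, as the constraint guarantees.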
## Case 3: $x_1=x_2=0$, $x_3x_4\neq0$ [\[ss:case3\]]{#ss:case3 label="ss:case3"} From [\[A16\]](#A16){reference-type="eqref" reference="A16"} we get $a_{23}=a_{12}$ and from [\[A94\]](#A94){reference-type="eqref" reference="A94"} and [\[A104\]](#A104){reference-type="eqref" reference="A104"} $b_{13}=b_{12}^2$ and $c_{13}=c_{12}^2$. On the basis of [\[A42\]](#A42){reference-type="eqref" reference="A42"} and [\[A46\]](#A46){reference-type="eqref" reference="A46"} we can divide the problem into two branches: Case 3.1: $a_{12}=0$, $b_{12}c_{12}\neq 0$, and Case 3.2: $a_{12}d_{12}\neq0$, $b_{12}=c_{12}= 0$. #### Case 3.1: $a_{12}=0$, $b_{12}c_{12}\neq 0$. From [\[A46\]](#A46){reference-type="eqref" reference="A46"} we get $a_{13}=0$ and then from [\[A41\]](#A41){reference-type="eqref" reference="A41"} $d_{12}=1-b_{12}c_{12}$ and from [\[A76\]](#A76){reference-type="eqref" reference="A76"} $d_{13}=(1-a_2)(1-b_{12}c_{12})$. Then from [\[A100\]](#A100){reference-type="eqref" reference="A100"} and [\[A101\]](#A101){reference-type="eqref" reference="A101"}, $b_{23}=a_2^2/c_{12}$, $c_{23}=a_2^2/b_{12}$ and from [\[A95\]](#A95){reference-type="eqref" reference="A95"} $a_3=a_2^4/(b_{12}c_{12})^2$. Since $a_3\neq0$ we have $a_2\neq0$ and can solve $d_{23}$ from [\[A43\]](#A43){reference-type="eqref" reference="A43"}: $d_{23}=(a_2^4-(b_{12}c_{12})^3)/(b_{12}c_{12})^2$. Now [\[A82\]](#A82){reference-type="eqref" reference="A82"} factorizes as $(a_2^2-b_{12}c_{12})(a_2^2+a_2b_{12}c_{12}+(b_{12}c_{12})^2)=0$. 
Case 3.1.1: If we choose the first factor and set $b_{12}=a_2^2/c_{12}$ the remaining equations simplify to $x_3=a_2(a_2^2-1)^2/x_4$, yielding the first solution ($a_2\to a,\,c_{12}\to c$): $$\label{eq:III.1.1} [1] \begin{bmatrix}.& \frac{a^{2}}{ c}\\ c& 1-a^2 \end{bmatrix} \begin{bmatrix}.&.& \frac{a^{4}}{ c^{2}} \\ .&a& \frac{a(a^2-1)^2}{x_4} \\ c^{2}&x_{4}& (a+1)(a-1)^{2}\end{bmatrix} \begin{bmatrix}.& \frac{a^{2}}{ c}\\ c& 1-a^2 \end{bmatrix} [1]$$ Case 3.1.2: For the second solution we solve [\[A82\]](#A82){reference-type="eqref" reference="A82"} by $b_{12}=a\omega/c_{12}$, where $\omega$ is a cube root of unity with $\omega\neq1$. Then the remaining equations are solved by $x_3=a_2(a_2-1)(1-\omega a_2)/x_4$ and we have $$\label{eq:III.1.2} [1] \begin{bmatrix}.& \frac{\omega a}{ c}\\ c& 1-\omega a \end{bmatrix} \begin{bmatrix}.&.& \frac{ \omega^2 a^{2} }{ c^{2}}\\ .&a& \frac{(\omega^2-a ) (a -1 ) a}{ x_{4}}\\ c^{2}&x_{4}& (1-\omega a)(1-a) \end{bmatrix} \begin{bmatrix}.& \frac{a^{2}}{ c}\\ \omega^2 a c & \omega a (a-1)\end{bmatrix} [ \omega a^{2} ]$$ The eigenvalues are $\{1,-\omega a,\omega a^2\}$, each with multiplicity $3$. For both solutions $a\neq 0,1$, and for the first $a\neq -1$. Note that for the second case, if $a=-1$ there are only two eigenvalues: $1$ and $\omega$. #### Case 3.2: $a_{12}d_{12}\neq0$, $b_{12}=c_{12}= 0$. From equations [\[A9\]](#A9){reference-type="eqref" reference="A9"}, [\[A29\]](#A29){reference-type="eqref" reference="A29"} we get $a_{12}=d_{12}=1$ and from [\[A44\]](#A44){reference-type="eqref" reference="A44"}, [\[A47\]](#A47){reference-type="eqref" reference="A47"} $b_{23}=c_{23}=0$. Due to non-singularity we must then have $a_{23}d_{23}\neq0$, and then from [\[A10\]](#A10){reference-type="eqref" reference="A10"}, [\[A16\]](#A16){reference-type="eqref" reference="A16"} we get $a_{23}=d_{23}=1$. Since $a_3\neq0$ [\[A30\]](#A30){reference-type="eqref" reference="A30"} yields $a_3=1$.
Now from [\[A69\]](#A69){reference-type="eqref" reference="A69"} and [\[A100\]](#A100){reference-type="eqref" reference="A100"} we get $a_2=1$, $d_{13}=0$, after which we get a contradiction in [\[A50\]](#A50){reference-type="eqref" reference="A50"}. ## Case 5: $x_2x_3\neq0$, $x_1=x_4=0$ [\[ss:case5\]]{#ss:case5 label="ss:case5"} This case contains many solutions and therefore it is necessary to do some basic classification first. We do this on the basis of the $2\times2$ blocks. For the first $2\times2$ block (the "12" block), consider equations [\[A1\]](#A1){reference-type="eqref" reference="A1"}, [\[A2\]](#A2){reference-type="eqref" reference="A2"}, [\[A9\]](#A9){reference-type="eqref" reference="A9"}, [\[A29\]](#A29){reference-type="eqref" reference="A29"} and [\[A31\]](#A31){reference-type="eqref" reference="A31"}. The solutions to these equations can be divided into the following four types: - ($\alpha$) $a_{12}d_{12}\neq0$. Then one finds $b_{12}=c_{12}=0$ and $a_{12}=d_{12}=a_1$. - ($\beta$) $a_{12}\neq0$, $d_{12}=0$ and $b_{12}c_{12}\neq0$; then $a_{12}=a_1-b_{12}c_{12}/a_1$. - ($\gamma$) $d_{12}\neq0$, $a_{12}=0$ and $b_{12}c_{12}\neq0$; then $d_{12}=a_1-b_{12}c_{12}/a_1$. - ($\delta$) $a_{12}=d_{12}=0$. The results for the other $2\times2$ block (the "23" block) are obtained by index changes, including $a_1 \to a_3$; we denote the corresponding types by $\alpha'$, $\beta'$, $\gamma'$ and $\delta'$. In principle there would be $4\times4=16$ cases, but we can omit several using the known symmetries. First of all, for the "12" block we can omit $\gamma$ because it is related to $\beta$ by LR symmetry. The list of cases is as follows: 1. $(\alpha,\alpha')$: $[a_1]\begin{bmatrix}a_1&.\\.&a_1\end{bmatrix}[3\times3] \begin{bmatrix}a_3&.\\.&a_3\end{bmatrix}[a_3]$. 2. $(\alpha,\beta')$: $[a_1]\begin{bmatrix}a_1&.\\.&a_1\end{bmatrix}[3\times3] \begin{bmatrix}a_3-b_{23}c_{23}/a_3&b_{23}\\c_{23}&.\end{bmatrix}[a_3]$. 3. $(\alpha,\delta')$: $[a_1]\begin{bmatrix}a_1&.\\.&a_1\end{bmatrix}[3\times3] \begin{bmatrix}.&b_{23}\\c_{23}&.\end{bmatrix}[a_3]$. 4. 
$(\beta,\beta')$: $[a_1] \begin{bmatrix}a_1-b_{12}c_{12}/a_1&b_{12}\\c_{12}&.\end{bmatrix} [3\times3] \begin{bmatrix}a_3-b_{23}c_{23}/a_3&b_{23}\\c_{23}&.\end{bmatrix}[a_3]$. 5. $(\beta,\gamma')$: $[a_1] \begin{bmatrix}a_1-b_{12}c_{12}/a_1&b_{12}\\c_{12}&.\end{bmatrix} [3\times3] \begin{bmatrix}.&b_{23}\\c_{23}&a_3-b_{23}c_{23}/a_3\end{bmatrix}[a_3]$. 6. $(\beta,\delta')$: $[a_1] \begin{bmatrix}a_1-b_{12}c_{12}/a_1&b_{12}\\c_{12}&.\end{bmatrix} [3\times3] \begin{bmatrix}.&b_{23}\\c_{23}&.\end{bmatrix}[a_3]$. 7. $(\delta,\delta')$: $[a_1] \begin{bmatrix}.&b_{12}\\c_{12}&.\end{bmatrix} [3\times3] \begin{bmatrix}.&b_{23}\\c_{23}&.\end{bmatrix}[a_3]$. Here we have omitted $(\alpha,\gamma')$, $(\beta,\alpha')$, $(\delta,\alpha')$, $(\delta,\beta')$ and $(\delta,\gamma')$, because they are related to entries in the above list of seven by some symmetry. Specifically, notice that the vanishing of $x_1$ and $x_4$ and the non-vanishing of $x_2x_3$ are preserved under LR-symmetry and the $| 0 \rangle\leftrightarrow| 2 \rangle$ symmetry, but *not* the transpose symmetry. Moreover the composition of the LR and 02-symmetries has the effect of simply interchanging the pairs of $2\times 2$ and $1\times 1$ blocks. #### Case 5.1 $(\alpha,\alpha')$ We scale to $a_1=1$ and from [\[A30\]](#A30){reference-type="eqref" reference="A30"} get $a_3=1$. According to [\[A86\]](#A86){reference-type="eqref" reference="A86"} $a_2^2-a_2=0$ and then from [\[A106\]](#A106){reference-type="eqref" reference="A106"} and [\[A108\]](#A108){reference-type="eqref" reference="A108"} we get $d_{13}=-b_{13}x_2/x_3$ and $c_{13}=-a_{13}x_2/x_3$, but then the $3\times3$ block matrix becomes singular (its determinant $a_2(a_{13}d_{13}-b_{13}c_{13})$ vanishes identically). Therefore there are no solutions for this subcase. #### Case 5.2 $(\alpha,\beta')$ From [\[A39\]](#A39){reference-type="eqref" reference="A39"} and [\[A43\]](#A43){reference-type="eqref" reference="A43"} we get $c_{13}=c_{23}^2/a_3$ and $b_{13}=b_{23}a_3/c_{23}$. 
Next from [\[A58\]](#A58){reference-type="eqref" reference="A58"} $d_{13}=0$, from [\[A76\]](#A76){reference-type="eqref" reference="A76"} $a_2=1$ and from [\[A54\]](#A54){reference-type="eqref" reference="A54"} $a_{23}=a_{13}$. After setting $a_3=-x_3c_{23}^2/x_2$ from [\[A104\]](#A104){reference-type="eqref" reference="A104"}, the GCD of the remaining equations is $(x_3c_{23}-x_2b_{23})(x_2+x_3c_{23}^2)^2$ and we get two solutions (with $c_{23}\to c$, $b_{23}\to b$): 5.2.1: $x_2=-x_3c_{23}^2$ $$\label{eq:V.2.1} [1] \begin{bmatrix}1&.\\ .&1\end{bmatrix} \begin{bmatrix}1-b c&.& \frac{b}{ c}\\ -x_{3} c^{2}&1&x_{3}\\ c^{2}&.&.\end{bmatrix} \begin{bmatrix}1-b c &b\\ c&.\end{bmatrix} [1]$$ The eigenvalues are $-bc$ with multiplicity $2$ and $1$ with multiplicity $7$. 5.2.2: $x_2=x_3c_{23}/b_{23}$ $$\label{5.2.2} [1] \begin{bmatrix}1&.\\ .&1\end{bmatrix} \begin{bmatrix}1-b c &.& -b^{2}\\ \frac{x_{3} c}{ b}&1&x_{3}\\ \frac{-c}{ b}&.&.\end{bmatrix} \begin{bmatrix}1-b c &b\\ c&.\end{bmatrix} [-b c]$$ The eigenvalues are $-bc$ with multiplicity $3$ and $1$ with multiplicity $6$. #### Case 5.3 $(\alpha,\delta')$ From [\[A54\]](#A54){reference-type="eqref" reference="A54"} and [\[A62\]](#A62){reference-type="eqref" reference="A62"} we get $a_{13}=d_{13}=0$ and from [\[A72\]](#A72){reference-type="eqref" reference="A72"} $a_2=1$. Next [\[A40\]](#A40){reference-type="eqref" reference="A40"} and [\[A43\]](#A43){reference-type="eqref" reference="A43"} yield $c_{13}=b_{13}=a_3$, and [\[A102\]](#A102){reference-type="eqref" reference="A102"} gives $x_3=-x_2a_3$. The remaining equations are satisfied with $a_3=\epsilon_1$ and $c_{23}=\epsilon_2$, where $\epsilon_j^2=1$. 
The result is $$\begin{bmatrix}1&.\\ .&1\end{bmatrix} \begin{bmatrix}.&.& \epsilon_{1}\\ x_{2}&1& -x_{2} \epsilon_{1}\\ \epsilon_{1}&.&.\end{bmatrix} \begin{bmatrix}.& \epsilon_{2}\\ \epsilon_{2}&.\end{bmatrix} [\epsilon_{1}]$$ The eigenvalues are $1$ and $-1$ with multiplicity $7$ and $2$ if $\epsilon_1=1$ and $6$ and $3$ otherwise. However, when $\epsilon_1=-1$ this is a special case of [\[5.2.2\]](#5.2.2){reference-type="eqref" reference="5.2.2"} by setting $b=c=-1$. For $\epsilon_1=1$ we may take $b=c=1$ in [\[eq:V.2.1\]](#eq:V.2.1){reference-type="eqref" reference="eq:V.2.1"}. Thus this case may be discarded *a posteriori* as a subcase. #### Case 5.4 $(\beta,\beta')$ From [\[A37\]](#A37){reference-type="eqref" reference="A37"} and [\[A39\]](#A39){reference-type="eqref" reference="A39"} we get $c_{13}=c_{12}^2$ and $a_3=c_{23}^2/c_{12}^2$. Next since $a_{12}=1-b_{12}c_{12}\neq0$ we get $d_{13}=0$ from [\[A61\]](#A61){reference-type="eqref" reference="A61"}. From [\[A41\]](#A41){reference-type="eqref" reference="A41"} $b_{13}=b_{12}/c_{12}$ and from [\[A78\]](#A78){reference-type="eqref" reference="A78"} $a_{13}=1-b_{12}c_{12}$. For nonsingularity we must have $a_2\neq0$ and then from [\[A82\]](#A82){reference-type="eqref" reference="A82"} we get $b_{23}=b_{12}$ and from [\[A43\]](#A43){reference-type="eqref" reference="A43"} $c_{23}=c_{12}$. Now from [\[A74\]](#A74){reference-type="eqref" reference="A74"} we find $a_2=-x_3 c_{12}^2/x_2$ and after that the remaining equations factorize and we have two solutions: 5.4.a $x_2=-c_{12}^2x_3$ $$\label{5.4.a} [1] \begin{bmatrix}1-bc&b\\ c&.\end{bmatrix} \begin{bmatrix}1-bc&.& \frac{b}{ c}\\ -x_{3} c^{2}&1&x_{3}\\ c^{2}&.&.\end{bmatrix} \begin{bmatrix}1-bc&b\\ c&.\end{bmatrix} [1]$$ Eigenvalues are $1$ with multiplicity $6$ and $-bc$ with multiplicity $3$. 
5.4.b $x_2=c_{12}x_3/b_{12}$ $$\label{5.4.b} [1] \begin{bmatrix}1-bc&b\\ c&.\end{bmatrix} \begin{bmatrix}1-bc&.& \frac{b}{ c}\\ \frac{x_{3} c}{ b}& -b c&x_{3}\\ c^{2}&.&.\end{bmatrix} \begin{bmatrix}1-bc&b\\ c&.\end{bmatrix} [1]$$ Eigenvalues are $1$ with multiplicity $5$ and $-bc$ with multiplicity $4$. #### Case 5.5 $(\beta,\gamma')$ Since the matrix is non-singular we must have $a_2\neq0$. From [\[A38\]](#A38){reference-type="eqref" reference="A38"} and [\[A41\]](#A41){reference-type="eqref" reference="A41"} we get $c_{13}=c_{12}^2$ and $b_{13}=b_{12}/c_{12}$, and from [\[A39\]](#A39){reference-type="eqref" reference="A39"} and [\[A43\]](#A43){reference-type="eqref" reference="A43"} $b_{12}=b_{23}^2c_{12}/a_3$, $c_{23}=b_{23}c_{12}^2/a_3$. Then we get from several equations the condition $a_{13}d_{13}=0$. If both $a_{13}=d_{13}=0$, we would get from [\[A78\]](#A78){reference-type="eqref" reference="A78"} $a_{12}=0$, which would lead to case $\delta'$. Therefore we have two branches: 5.5.1 Assume $a_{13}=0,\,d_{13}\neq0$. From [\[A76\]](#A76){reference-type="eqref" reference="A76"} we get $x_3=-x_2b_{23}^2/a_3$ and then, since $a_{12}\neq0$, equation [\[A102\]](#A102){reference-type="eqref" reference="A102"} yields $a_2=1$. From [\[A66\]](#A66){reference-type="eqref" reference="A66"} we get $d_{13}=1-b_{23}^2c_{12}^2/a_3$. If we use [\[A81\]](#A81){reference-type="eqref" reference="A81"} to eliminate second and higher powers of $a_3$ from equation [\[A82\]](#A82){reference-type="eqref" reference="A82"}, it factorizes as $(a_3-1)(1+b_{23}c_{12})=0$, and we get two branches: 5.5.1.1 If we choose $a_3=1$, all other equations are satisfied with $b_{23}=\omega^2/c_{12}$, where $\omega^3=1$; we must have $\omega\neq1$ to stay in the $(\beta,\gamma')$ case. 
$$\label{5.5.1.1} [1] \begin{bmatrix}1-\omega& \frac{\omega }{ c}\\ c&.\end{bmatrix} \begin{bmatrix}.&.& \frac{\omega }{ c^{2}} \\ x_{2}&1& \frac{-x_{2} \omega }{ c^{2}}\\ c^{2}&.& 1-\omega \end{bmatrix} \begin{bmatrix}.& \frac{\omega ^{2}}{ c}\\ c \omega ^{2}&1 -\omega\end{bmatrix} [1]$$ The eigenvalues are $1$ with multiplicity $6$ and $\omega$ with multiplicity $3$. 5.5.1.2 Now we choose $b_{23}=-1/c_{12}$, and then the remaining equations are satisfied with $a_3=\varsigma=\pm i$. $$\label{eq:V.5.1.2} [1] \begin{bmatrix}\varsigma +1& \frac{-\varsigma }{ c}\\ c&.\end{bmatrix} \begin{bmatrix}.&.& \frac{-\varsigma }{ c^{2}}\\ x_{2}&1& \frac{x_{2} \varsigma }{ c^{2}}\\ c^{2}&.&\varsigma +1\end{bmatrix} \begin{bmatrix}.& \frac{-1}{ c}\\ \varsigma c&\varsigma +1\end{bmatrix} [\varsigma ]$$ The eigenvalues are $1$ with multiplicity $5$ and $\varsigma$ with multiplicity $4$. 5.5.2 The case $d_{13}=0,\,a_{13}\neq0$ is obtained by $02$-symmetry from 5.5.1. Indeed, we see that the form $(\beta,\gamma^\prime)$ is invariant under the $| 0 \rangle\leftrightarrow| 2 \rangle$ symmetry, with the $3\times 3$ block having the following pairs interchanged: $(a_{13},d_{13})$, $(b_{13},c_{13})$, $(x_2,x_3)$ and $(x_1,x_4)=(0,0)$. Thus any solution obtained for $d_{13}=0$ and $a_{13}\neq 0$ may be transformed into a solution with $d_{13}\neq 0$ and $a_{13}=0$. #### Case 5.6 $(\beta,\delta')$ Since in this case $a_{12}\neq0$, we have from [\[A54\]](#A54){reference-type="eqref" reference="A54"} and [\[A63\]](#A63){reference-type="eqref" reference="A63"} that $a_{13}=d_{13}=0$, but then [\[A78\]](#A78){reference-type="eqref" reference="A78"} implies $a_{12}=0$, a contradiction. #### Case 5.7 $(\delta,\delta')$. From $\det\neq0$ we get $a_2\neq 0$, and then [\[A81\]](#A81){reference-type="eqref" reference="A81"} and [\[A82\]](#A82){reference-type="eqref" reference="A82"} imply $c_{23}=c_{12}$ and $b_{23}=b_{12}$. 
Next from [\[A38\]](#A38){reference-type="eqref" reference="A38"} and [\[A42\]](#A42){reference-type="eqref" reference="A42"} we get $c_{13}=c_{12}^2$ and $b_{13}=b_{12}^2$. Equation [\[A39\]](#A39){reference-type="eqref" reference="A39"} then gives $a_3=1$ and [\[A40\]](#A40){reference-type="eqref" reference="A40"} implies $c_{12}=1/b_{12}$. After this [\[A52\]](#A52){reference-type="eqref" reference="A52"} and [\[A66\]](#A66){reference-type="eqref" reference="A66"} yield $a_{13}=d_{13}=0$. The remaining equations are satisfied with $a_2=\epsilon,\,\epsilon=\pm 1$. $$\label{Case5.7} [1] \begin{bmatrix}.&b\\ \frac{1}{ b}&.\end{bmatrix} \begin{bmatrix}.&.&b^{2}\\ x_{2}&\epsilon&x_{3}\\ \frac{1}{ b^{2}}&.&.\end{bmatrix} \begin{bmatrix}.&b\\ \frac{1}{ b}&.\end{bmatrix} [1]$$ The eigenvalues are $1$ and $-1$ with multiplicities $5$ and $4$ if $\epsilon=-1$ and multiplicities $6$ and $3$ otherwise. ## Case 6: $x_1=x_2=x_3=0$, $x_4\neq0$ [\[ss:case6\]]{#ss:case6 label="ss:case6"} From the outset it is best to divide this into two cases on whether or not $b_{12}$ vanishes. Case 6.1: $b_{12}=0$ therefore $a_{12}d_{12}\neq0$. Then from [\[A1\]](#A1){reference-type="eqref" reference="A1"} $c_{12}=0$ and from [\[A29\]](#A29){reference-type="eqref" reference="A29"} and [\[A31\]](#A31){reference-type="eqref" reference="A31"} $a_{12}=d_{12}=1$. From [\[A94\]](#A94){reference-type="eqref" reference="A94"} we get $b_{13}=0$, and hence $a_{13}a_2d_{13}\neq0$ and then from [\[A90\]](#A90){reference-type="eqref" reference="A90"}, [\[A86\]](#A86){reference-type="eqref" reference="A86"} and [\[A66\]](#A66){reference-type="eqref" reference="A66"} $a_{13}=a_2=d_{13}=1$, which leads to a contradiction with [\[A68\]](#A68){reference-type="eqref" reference="A68"}. Case 6.2: Now that $b_{12}\neq0$ we get from [\[A68\]](#A68){reference-type="eqref" reference="A68"} and [\[A72\]](#A72){reference-type="eqref" reference="A72"} $a_{13}=d_{13}=0$. 
From [\[A72\]](#A72){reference-type="eqref" reference="A72"} $a_{13}=a_{12}$ and from [\[A94\]](#A94){reference-type="eqref" reference="A94"} $b_{13}=b_{12}^2$ and then from [\[A98\]](#A98){reference-type="eqref" reference="A98"} and [\[A99\]](#A99){reference-type="eqref" reference="A99"} $a_{12}=a_{23}=0$ and therefore $c_{12}b_{23}c_{23}\neq0$. Next from [\[A46\]](#A46){reference-type="eqref" reference="A46"} $c_{13}=c_{12}^2$ and from [\[A100\]](#A100){reference-type="eqref" reference="A100"} and [\[A101\]](#A101){reference-type="eqref" reference="A101"} $b_{23}=a_2^2/c_{12}$, $b_{12}=a_2^2/c_{23}$ and from [\[A95\]](#A95){reference-type="eqref" reference="A95"} $a_3=c_{23}^2/c_{12}^2$. At this point we divide the problem into two branches on whether or not $d_{12}$ vanishes. Case 6.2.1: $d_{12}=0$. Then from [\[A68\]](#A68){reference-type="eqref" reference="A68"} $d_{13}=0$ and from [\[A95\]](#A95){reference-type="eqref" reference="A95"} $c_{23}=a_2^2c_{12}$ and from [\[A83\]](#A83){reference-type="eqref" reference="A83"} $d_{23}=a_2(1-a_2^2)$. After this the remaining equations imply that we must have either $a_2=\epsilon,$ $\epsilon=\pm 1$ or $a_2=\omega$ with $\omega^3=1$, $\omega\neq 1$. After changing $c_{12}\to c$ one solution is $$\label{eq:VI.2.1a} [1] \begin{bmatrix}.& \frac{1}{ c}\\ c&.\end{bmatrix} \begin{bmatrix}.&.& \frac{1}{ c^{2}}\\ .&\epsilon&.\\ c^{2}&x_{4}&.\end{bmatrix} \begin{bmatrix}.& \frac{1}{ c}\\ c& . \end{bmatrix} [1]$$ The eigenvalues are $1$ and $-1$, with multiplicities $5$ and $4$ if $\epsilon=-1$, otherwise multiplicities $6$ and $3$. Note that this solution may be obtained from [\[Case5.7\]](#Case5.7){reference-type="eqref" reference="Case5.7"} by setting $x_2=0$ and taking the transpose, but this violates the Case 5 assumption that $x_2\neq 0$. 
The second solution is $$\label{eq:VI.2.1} [1] \begin{bmatrix}.& \frac{1}{ c}\\ c&.\end{bmatrix} \begin{bmatrix}.&.& \frac{1}{ c^{2}}\\ .&\omega&.\\ c^{2}&x_{4}&.\end{bmatrix} \begin{bmatrix}.& \frac{\omega^{2}}{ c}\\ \omega^{2} c&\omega-1 \end{bmatrix} [ \omega]$$ The eigenvalues are $1,\omega$ and $-1$, each with multiplicity $3$. Case 6.2.2: $d_{12}\neq0$. From [\[A64\]](#A64){reference-type="eqref" reference="A64"} and [\[A68\]](#A68){reference-type="eqref" reference="A68"} we get $d_{13}=d_{23}=d_{12}(1-a_2)$ and then from [\[A63\]](#A63){reference-type="eqref" reference="A63"} $a_2=1$. The remaining equations are solved by $d_{12}=(c_{23}-c_{12})/c_{23}$ and $c_{23}=c_{12}\omega$ with $\omega^3=1$, $\omega\neq 1$, yielding: $$\label{sol:case14} [1] \begin{bmatrix}.& \frac{1}{ c \omega}\\ c& \omega+2\end{bmatrix} \begin{bmatrix}.&.& \frac{1}{ c^{2} \omega^{2}}\\ .&1&.\\ c^{2}&x_{4}&.\end{bmatrix} \begin{bmatrix}.& \frac{1}{ c}\\ \omega c&.\end{bmatrix} [\omega^{2}]$$ The eigenvalues are $1,\omega^{2}$ and $-\omega^{2}$, each with multiplicity $3$.

| soln. name | non-zero $x_i$s | block form | parameters cont./discrete | eigenvalues, degeneracies |
|:-----|:----:|:----:|:----:|:-----|
| 1 | ![image](xfig/case1.pdf){width=".2in"} | [\[sol:case1\]](#sol:case1){reference-type="eqref" reference="sol:case1"} | 3 / 0 | $1^{\times 8},\; x^{\times 1}$ |
| 2 | ![image](xfig/case2.pdf){width=".2in"} | -- | -- | |
| 3.1.1 | ![image](xfig/case3.pdf){width=".2in"} | [\[eq:III.1.1\]](#eq:III.1.1){reference-type="eqref" reference="eq:III.1.1"} | 3 / 0 | $1^{\times 5},\; (-x^2)^{\times 3},\; (x^3)^{\times 1}$ |
| 3.1.2 | ![image](xfig/case3.pdf){width=".2in"} | [\[eq:III.1.2\]](#eq:III.1.2){reference-type="eqref" reference="eq:III.1.2"} | 3 / 1 | $1^{\times 3},\; (-\omega x)^{\times 3},\; (\omega x^2)^{\times 3}$ |
| 4 | ![image](xfig/case4.pdf){width=".2in"} | -- | -- | |
| 5.2.1 | ![image](xfig/case5.pdf){width=".2in"} | [\[eq:V.2.1\]](#eq:V.2.1){reference-type="eqref" reference="eq:V.2.1"} | 3 / 0 | $1^{\times 7},\; x^{\times 2}$ |
| 5.2.2 | ![image](xfig/case5.pdf){width=".2in"} | [\[5.2.2\]](#5.2.2){reference-type="eqref" reference="5.2.2"} | 3 / 0 | $1^{\times 6},\; x^{\times 3}$ |
| 5.4.a | ![image](xfig/case5.pdf){width=".2in"} | [\[5.4.a\]](#5.4.a){reference-type="eqref" reference="5.4.a"} | 3 / 0 | $1^{\times 6},\; x^{\times 3}$ |
| 5.4.b | ![image](xfig/case5.pdf){width=".2in"} | [\[5.4.b\]](#5.4.b){reference-type="eqref" reference="5.4.b"} | 3 / 0 | $1^{\times 5},\; x^{\times 4}$ |
| 5.5.1.1 | ![image](xfig/case5.pdf){width=".2in"} | [\[5.5.1.1\]](#5.5.1.1){reference-type="eqref" reference="5.5.1.1"} | 2 / 1 | $1^{\times 6},\; \omega^{\times 3}$ |
| 5.5.1.2 | ![image](xfig/case5.pdf){width=".2in"} | [\[eq:V.5.1.2\]](#eq:V.5.1.2){reference-type="eqref" reference="eq:V.5.1.2"} | 2 / 1 | $1^{\times 5},\; \varsigma^{\times 4}$ |
| 5.7 | ![image](xfig/case5.pdf){width=".2in"} | [\[Case5.7\]](#Case5.7){reference-type="eqref" reference="Case5.7"} | 3 / 1 | $1^{\times 5},\; (-1)^{\times 4}$ / $1^{\times 6},\; (-1)^{\times 3}$ |
| 6.2.1 | ![image](xfig/case6.pdf){width=".2in"} | [\[eq:VI.2.1a\]](#eq:VI.2.1a){reference-type="eqref" reference="eq:VI.2.1a"} | 2 / 1 | $1^{\times 5},\; (-1)^{\times 4}$ / $1^{\times 6},\; (-1)^{\times 3}$ |
| 6.2.1' | ![image](xfig/case6.pdf){width=".2in"} | [\[eq:VI.2.1\]](#eq:VI.2.1){reference-type="eqref" reference="eq:VI.2.1"} | 2 / 1 | $1^{\times 3},\; \omega^{\times 3},\; (-1)^{\times 3}$ |
| 6.2.2 | ![image](xfig/case6.pdf){width=".2in"} | [\[sol:case14\]](#sol:case14){reference-type="eqref" reference="sol:case14"} | 2 / 1 | $1^{\times 3},\; \omega^{\times 3},\; (-\omega)^{\times 3}$ |

: Table of all ACC solutions to the Yang-Baxter equation ([\[eq:braidr\]](#eq:braidr){reference-type="ref" reference="eq:braidr"}) in rank 3. Here $x$ denotes a non-zero variable, possibly with further constraints described in the text; $\omega$ is a primitive 3rd root of unity; and $\varsigma$ is a primitive 4th root of unity. Eigenvalue degeneracies are indicated as exponents, $\lambda^{\times m}$. In the continuous/discrete parameter column, an entry 3 / 1 means a 3-free-parameter family, not counting overall scaling, with 1 discrete parameter (which always takes exactly 2 values). (Hyphens and omitted 'names' correspond to choices leading to no solution.) [\[tab:1\]]{#tab:1 label="tab:1"}

Altogether we have established the following: **Theorem 2**. *For the Yang-Baxter equation ([\[eq:braidr\]](#eq:braidr){reference-type="ref" reference="eq:braidr"}) in three dimensions, the complete list of solutions satisfying ACC but not SCC (see [@MR] for SCC) is given, up to noted symmetries (see Sec.[2.2](#ss:syms){reference-type="ref" reference="ss:syms"}), in the formulae [\[sol:case1\]](#sol:case1){reference-type="eqref" reference="sol:case1"}-[\[sol:case14\]](#sol:case14){reference-type="eqref" reference="sol:case14"}, and collected in Table [1](#tab:1){reference-type="ref" reference="tab:1"}. $\square$* # Analysis of the generic representations ([\[eq:JaR1\]](#eq:JaR1){reference-type="ref" reference="eq:JaR1"}/[\[eq:boxx\]](#eq:boxx){reference-type="ref" reference="eq:boxx"}). {#ss:analysis} Constant Yang--Baxter solutions can be of considerable intrinsic interest. But they are also often interesting because of their symmetry algebras. 
In the XXZ case (one of the strict charge-conserving cases) for example, the symmetry algebra is the quantum group $U_q sl_2$. This holds true in all ranks - as we go up in rank we simply see more of the symmetry algebra - i.e. the representation has a smaller kernel. It is not immediate that such a strong outcome would hold in general. But it is interesting to investigate. In §[4.1](#ss:analys){reference-type="ref" reference="ss:analys"} we analyse our new solutions. (In §[6](#ss:repthy1){reference-type="ref" reference="ss:repthy1"} we recall some classical facts about the classical cases for comparison.) ## Analysis of the generic solution: spectrum of $\check{R}$ {#ss:analys} Now we consider the solution in ([\[eq:JaR1\]](#eq:JaR1){reference-type="ref" reference="eq:JaR1"}/[\[eq:boxx\]](#eq:boxx){reference-type="ref" reference="eq:boxx"}). Observe that the trace of the 3x3 block is $$a + \frac{{x}_1 {x}_3 +b}{b} -\frac{{x}_1 {x}_3}{ab} = \frac{a^2 b + a{x}_1 {x}_3 + ab -{x}_1 {x}_3}{ab}$$ Consider $\check{R}_j -1$, so that all but the 3x3 block is zero. 
Restricting to the 3x3 block of $\check{R}$, call it $\check{r}$, we have $$\label{eq:JaR12} \check{r}-1_3 \; = \; \left[ \begin{array}{ccccccccc} a & {x}_1 & b \\ \frac{\mathit{{x}_3} \left(a -1\right)}{b} & \frac{\mathit{{x}_1} \mathit{{x}_3} +b}{b} & \mathit{{x}_3} \\ \frac{\mathit{{x}_3}^{2} \mathit{{x}_1}^{2}}{b^{3}} & -\frac{\mathit{{x}_1} \left(a b +\mathit{{x}_1} \mathit{{x}_3} \right)}{a \,b^{2}} & -\frac{\mathit{{x}_3} \mathit{{x}_1}}{a b} \end{array}\right] -1_3 \; = \left[ \begin{array}{ccccccccc} a-1 & {x}_1 & b \\ \frac{\mathit{{x}_3} \left(a -1\right)}{b} & \frac{\mathit{{x}_1} \mathit{{x}_3} }{b} & \mathit{{x}_3} \\ \frac{\mathit{{x}_3}^{2} \mathit{{x}_1}^{2}}{b^{3}} & -\frac{\mathit{{x}_1} \left(a b +\mathit{{x}_1} \mathit{{x}_3} \right)}{a \,b^{2}} & -\frac{b(\mathit{{x}_3} \mathit{{x}_1} +ab)}{a b^2} \end{array}\right]$$ Note that $\frac{ {x}_3^2 {x}_1^2 }{b^3} = \frac{-(a-1)(ab+{x}_1 {x}_3 )}{ab^2}$ so this is clearly rank 1. Thus only one eigenvalue of $\check{R}-1$ is not 0, and so only one eigenvalue of $\check{R}$ is not 1. We have $$Trace(\check{R}-1) = a-1 +\frac{{x}_1 {x}_3}{b} -\frac{b({x}_1 {x}_3 +ab)}{ab^2} = \frac{a^2 b^2 +ab{x}_1 {x}_3 -b({x}_1 {x}_3 +ab)}{ab^2} -1$$ The other eigenvalue of $\check{R}$ is $$\lambda_2 = \frac{a(a-1) b^2 + (a-1)b {x}_1 {x}_3 }{ab^2} = - ({x}_1 {x}_3 /b)^2 \; = -\left( \frac{-(a-1)\pm\sqrt{(a-1)^2-4a^2(a-1)}}{2a} \right)^2$$ --- note from ([\[eq:boxx\]](#eq:boxx){reference-type="ref" reference="eq:boxx"}) that this depends only on $a$. In particular each of our braid representations (varying the parameters appropriately) is a Hecke representation. We see that the eigenvalue $\lambda_2$ can be varied over an open interval (in each branch it is a continuous function of $a$, small for $a$ close to 1; and large for large negative $a$). So (by Hecke representation theory, specifically that the Hecke algebras are generically semisimple, and abstract considerations [@ClineParshallScott; @Martin91]\...) 
the representation is generically semisimple. Returning to ([\[eq:JaR12\]](#eq:JaR12){reference-type="ref" reference="eq:JaR12"}) we have $$\check{r}-1_3 \; =\;\; \left[ 1,\frac{{x}_3}{b},\frac{-ab-{x}_1 {x}_3}{ab^2} \right]^t \; [a-1,{x}_1,b] =\;\; \frac{1}{ab^2} \left[ \begin{array}{c} ab^2 \\ {ab{x}_3} \\ {-ab-{x}_1 {x}_3} \end{array} \right] \; [a-1,{x}_1,b]$$ and $$\check{R}-1_9 \; =\;\; \left[0,0, 1,0,\frac{{x}_3}{b},0,\frac{-ab-{x}_1 {x}_3}{ab^2},0,0 \right]^t \; [0,0,a-1,0,{x}_1,0,b,0,0]$$ Armed with this, we have a Temperley--Lieb category representation (i.e. an embedded TQFT - we assume familiarity with the standard $U_q sl_2$ version which can be used for comparison). In this form the duality is going to be skewed (not a simple conjugation) but should be workable. In particular the loop parameter is $$[0,0,a-1,0,{x}_1,0,b,0,0] \left[ \begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \\ {{x}_3 /b} \\ 0 \\ {(-ab-{x}_1 {x}_3)}/ab^2 \\ 0 \\ 0 \end{array} \right] =\;\; (a-1) + \frac{{x}_1 {x}_3}{b} -\frac{b(ab+{x}_1 {x}_3)}{ab^2}$$$$\hspace{2in} = \frac{ a^2 b^2 -2ab^2 +(a-1)b {x}_1 {x}_3}{ab^2} = \frac{a-1}{a} (a + \frac{{x}_1 {x}_3}{b}) -1 = \lambda_2 -1$$ which, note, depends only on $a$. ## Irreducible representation content of the generic solution $\rho_n$ The following analysis gives us an invariant, and thus a way to classify solutions $\check{R}$ (or equivalently $R$). By construction, any $\check{R}$ gives a representation of the braid group $B_n$ for each $n$. Thus in principle we can classify $R$-matrices according to the representation structure (the irrep content and so on) for each (and all) $n$. In general such an approach is very hard (due to the limits on knowledge of the braid groups $B_n$ and their representation theory). Certain properties can, however, make the problem more tractable. In our case call the representation $\rho_n$ (or just $\rho$ if no ambiguity arises, or to denote the monoidal functor from the braid category, as in [@MR]). 
Depending on the field we are working over, this might mean the rep with indeterminate parameters, or a generic point in parameter space (i.e. the rep variety or a point on that variety). Since this $\check{R}$ has two eigenvalues (see §[4.1](#ss:analys){reference-type="ref" reference="ss:analys"}) we have a Hecke representation --- a representation of the algebra $H_n = H_n(q)$, a quotient of the group algebra of $B_n$ for each $n$, for some $q$. (With the same understanding about parameters.) Since eigenvalue $\lambda_1 =1$ this $H_n(q)$ is essentially in the 'Lusztig' convention --- we can write $t_i$ for the braid generators in $H_n$, so $$R_i = \rho(t_i) \; = \;\; \rho_n (t_i) \; ;$$ then the quotient relation is $$\label{eq:-1q} (t_i -1)(t_i +q)=0$$ for some $q=-\lambda_2$, as in [@PaulsNotes]. Here it is convenient to define $$U_i = \frac{t_i -1 }{ \alpha}$$ so $\alpha U_i (\alpha U_i +1+q)=0$, i.e. $\alpha U_i^2 = -(1+q) U_i$. In a convention/parameterisation as in ([\[eq:-1q\]](#eq:-1q){reference-type="ref" reference="eq:-1q"}) the operator $$e' = 1-t_1 -t_2 +t_1 t_2 +t_2 t_1 -t_1 t_2 t_1$$ is an unnormalised idempotent, and hence $$\rho_n(e') = 1-R_1 -R_2 +R_1 R_2 +R_2 R_1 -R_1 R_2 R_1$$ is an unnormalised (possibly zero) idempotent, whenever $\check{R}$ gives such a Hecke representation. In our case in fact $e'$ is zero (by direct computation): $$\label{eq:e00} \rho_n(e' ) = 0$$ Note that $$\alpha^3 U_1 U_2 U_1 = (t_1 -1)(t_2 -1)(t_1 -1) = t_1 t_2 t_1 -t_1 t_2 -t_2 t_1 - t_1 t_1 +2t_1 +t_2 -1$$ so in our case $$\rho_n( \alpha^3 U_1 U_2 U_1 ) = (R_1 -1)(R_2 -1)(R_1 -1) = R_1 R_2 R_1 -R_1 R_2 -R_2 R_1 - R_1 R_1 +2R_1 +R_2 -1$$ so $$\rho_n( \alpha^3 U_1 U_2 U_1 ) = -R_1^2 +R_1 = -R_1(R_1 -1) = \rho_n( q \alpha U_1 ) \hspace{1cm}\mbox{so}\hspace{1cm} \rho_n( U_1 U_2 U_1 ) = \rho_n( \frac{q}{\alpha^2} U_1 )$$ so if we put $\alpha= \pm \sqrt{q}$ then we have the relations of the usual generators for Temperley--Lieb [@TemperleyLieb71]. 
We assume familiarity with the generic irreducible representations of $H_n$, which we write, up to isomorphism, as $L_\lambda$ with $\lambda\vdash n$ an integer partition of $n$. The idempotent $e'$ induces the irrep $L_{1^3}$. The unnormalised idempotent inducing the irrep $L_3$ is $$\label{eq:en} e'_3 \; =\; 1+\frac{1}{q} (R_1 +R_2) +\frac{1}{q^2} (R_1 R_2 + R_2 R_1) +\frac{1}{q^3} R_1 R_2 R_1$$ This gives $$L_3(e'_3) = \frac{1}{q^3} (1+q)(1+q+q^2)$$ which gives the normalisation factor, so $$e_3 = \frac{q^3}{(1+q)(1+q+q^2)} e'_3$$ The generalisation to irrep $L_n$ in rank $n$ will hopefully be clear (in fact we won't really need it except for checking). We can write $\chi_\lambda$ for the irreducible character associated to irrep $L_\lambda$. That is, $$\chi_{\lambda}(t_i) = Trace(L_{\lambda}(t_i)) .$$ We can evaluate these characters in various ways, but a simple device is the restriction rule for the inclusion $H_{n-1} \otimes 1_1 \hookrightarrow H_n$; together with the easy cases: $$\label{eq:chin1n} \chi_{n}(t_i)=1, \hspace{1in} \chi_{1^n}(t_i)=\lambda_2$$ For example $$\chi_{2,1}(t_i) = \chi_{2}(t_i)+\chi_{1^2}(t_i) =1+\lambda_2$$ and so on. Observe that the eigenvalues of $R_i$, specifically $R_1 = \check{R}\otimes 1_3$, are three copies each of the eigenvalues of $\check{R}$. Hence there are 24 eigenvalues $\lambda_1 =1$ and 3 copies of the other eigenvalue, call it $\lambda_2$: $$\chi_{\rho}(t_i) =3(8+\lambda_2)= 24+3\lambda_2$$ The 1d irrep $L_3$, when present, contributes 1 eigenvalue $\lambda_1 =1$. The 2d irrep $L_{2,1}$ contributes 1 eigenvalue $\lambda_1 =1$ and 1 of the other eigenvalue $\lambda_2$. The 1d irrep $L_{1^3}$ contributes just 1 of the other eigenvalue $\lambda_2$. Since $e'=0$ the multiplicity of this irrep in $\rho$ is 0. Therefore all the 3 eigenvalues $\lambda_2$ come from $L_{2,1}$ summands. 
The identity ([\[eq:e00\]](#eq:e00){reference-type="ref" reference="eq:e00"}) therefore tells us that the irreducible content of our representation of $H_3$ (the Hecke quotient of $B_3$) is $$\rho \; = \;\; 21 \; L_3 \; + \;\; 3\; L_{2,1}$$ (the sum is generically but not necessarily always direct). In particular we have re-verified: **Proposition 3**. *Representation $\rho$ is a representation of Temperley--Lieb.* Note that it follows from the tensor construction that this TL property holds (i.e. the image of $e'$ continues to vanish) for all $n$.\ Next we address the question of faithfulness of $\rho_n$ as a TL representation, and determine the centraliser, for all $n$. Write $m_\lambda$ for the multiplicity of the generic irrep $L_\lambda$ in our rep $\rho$ (the generic character is well-defined in all specialisations, but the corresponding rep is not irreducible in all specialisations): $$\label{eq:mmultip} \chi_{\rho_n} = \sum_{\lambda \vdash n} m_\lambda \chi_\lambda$$ Note that integer partitions can be considered as vectors ('weights' in Lie theory) and hence added. For example if $\mu=(\mu_1, \mu_2, \mu_3, ...,\mu_l)$ then $$\mu+11 = \mu+(1,1) = \mu+(1,1,0,...,0) = (\mu_1 +1, \mu_2 +1, \mu_3, ..., \mu_l ) .$$ **Stability Lemma**. The multiplicity $m_\mu$ at level $n-2$ is the same as $m_{\mu+11}$ at level $n$. *Outline Proof*. The method of 'virtual Lie theory' works here (see e.g. [@Martin91; @MartinRyomHansen]). Let us define $$U_i = \check{R}_i -1$$ our rank-1 operator. Thus $U_i$ is itself an unnormalised idempotent - indeed it is, up to scalar, the image of the cup-cap operator in the TL diagram algebra. Write $T_n$ for TL on $n$ strands. Recall that $U_1 T_n$ is a left $T_{n-2}$ right $T_n$ bimodule. Recall the algebra isomorphism $U_1 T_n U_1 \cong T_{n-2}$; and recall that $T_n / T_n U_1 T_n \cong k$ where $k$ is the ground field (for us it is ${\mathbf{C}}$). 
It follows that the category $T_{n-2}-mod$ embeds in $T_n-mod$, with embedding functor given by: $$\label{eq:TUM} M \mapsto T_n U_1 \otimes_{T_{n-2}} M$$ The irrep $L_\mu = L_{\mu_1, \mu_2}$ is taken to $L_{\mu+11} = L_{\mu_1 +1, \mu_2 +1}$. Here $L_n$ is the module not hit by the embedding --- this is the module corresponding to $T_n / T_n U_1 T_n \cong k$, so the one that is annihilated by the localisation $M \mapsto U_1 M$. $\square$ The Theorem below is a corollary of this Lemma. It might also be of interest to show how to compute the further multiplicities $m_\mu$ by direct calculation. For $n=4$ we have $$\chi_{\rho_4}(t_i) = 3\chi_{\rho_3}(t_i) = 72+9\lambda_2$$ A direct calculation gives $$\chi_{\rho_4}(e_4) = 55$$ so $m_4=55$, and we have $$\chi_{\rho_4}(t_1) = 55 + m_{3,1} (2+\lambda_2) +m_{2,2} (1+\lambda_2)$$ We have $2m_{31} + m_{22} =72-55=17$ and $m_{31} + m_{22} =9$, giving $m_{3,1} = 8$ and $m_{2,2}=1$. Observe that this is in agreement with the Stability Lemma. For $n=5$ we have $$\chi_{\rho_5}(t_i) = 3\chi_{\rho_4}(t_i) = 216+27\lambda_2$$ A direct calculation gives $$\chi_{\rho_5}(e_5) = 144$$ We have $$\chi_{\rho_5}(t_1) = 144 + m_{4,1} (3+\lambda_2) +m_{3,2} (3+2\lambda_2)$$ We have $3m_{41} + 3m_{32} =216-144=72$ and $m_{41} + 2m_{32} =27$, giving $m_{4,1} = 21$ and $m_{3,2}=3$. We observe a pattern of repeated multiplicities, in agreement with the Stability Lemma: $$\begin{array}{c|cccccccc} m_\lambda & 1 & 3 & 8 & 21 & 55 & 144 \\ \hline \lambda & 11& & 2 \\ & &21 & &3 \\ & 22& &31 & & 4 \\ & &32 & &41 & & 5 \\ & 33& & \end{array}$$ Beside the Stability Lemma or a direct calculation, the last entry above may be guessed based on Perron--Frobenius applied to the Hamiltonian $H = \sum_i \check{R}_i$ --- if some power of $H$ is positive then there is a unique largest magnitude eigenvalue, and hence the corresponding multiplicity is 1. 
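The counting arguments for $n=4$ and $n=5$ amount to matching the constant term and the coefficient of $\lambda_2$. A short sketch reproducing them, together with a check that the top multiplicities satisfy the recurrence $a_k = 3a_{k-1}-a_{k-2}$ of the bisected Fibonacci numbers (A001906):

```python
# Re-derive m_{3,1}, m_{2,2}, m_{4,1}, m_{3,2} from the counting equations
# of the text by equating coefficients of lambda_2.
import sympy as sp

lam2, m31, m22, m41, m32 = sp.symbols('lambda_2 m31 m22 m41 m32')

def solve_mults(expr, unknowns):
    """Set every coefficient of powers of lambda_2 to zero and solve."""
    coeffs = sp.Poly(sp.expand(expr), lam2).all_coeffs()
    return sp.solve(coeffs, unknowns, dict=True)[0]

# n = 4: 72 + 9 lam2 = 55 + m31 (2 + lam2) + m22 (1 + lam2)
res4 = solve_mults(72 + 9*lam2 - (55 + m31*(2 + lam2) + m22*(1 + lam2)), [m31, m22])
assert res4 == {m31: 8, m22: 1}

# n = 5: 216 + 27 lam2 = 144 + m41 (3 + lam2) + m32 (3 + 2 lam2)
res5 = solve_mults(216 + 27*lam2 - (144 + m41*(3 + lam2) + m32*(3 + 2*lam2)), [m41, m32])
assert res5 == {m41: 21, m32: 3}

# the multiplicities 1, 3, 8, 21, 55, 144 obey a(k) = 3 a(k-1) - a(k-2)
a = [1, 3]
while len(a) < 6:
    a.append(3*a[-1] - a[-2])
assert a == [1, 3, 8, 21, 55, 144]
```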
We know from the XXZ chain, which has the same eigenvalues but different multiplicities, that $\lambda = mm$ gives the largest eigenvalue when $n=2m$. **Theorem 4**. *The multiplicity $m_n$ in ([\[eq:mmultip\]](#eq:mmultip){reference-type="ref" reference="eq:mmultip"}) is given by A001906 from Sloane/OEIS [@Sloane], with all other multiplicities $m_\mu$ determined by the Stability Lemma. $\square$* # The equations ## The cubic constraints {#ss:cubics} Here we write out the system of cubics corresponding to entries in $A_R$ as in ([\[eq:AR\]](#eq:AR){reference-type="ref" reference="eq:AR"}), hence the cubics that must vanish, in the ACC ansatz. In fact the first few cubics in $A_R$ are unchanged (ordering 000 001 002 010 011 012 020 021 022 100 101 102 \... 222) from the strict cc ansatz. Row 000 has vanishing anomaly. Row 001 gives: $$\langle 001 | A_R| 001 \rangle =\; -{a}_{12} {b}_{12} {c}_{12} -{a}_1 {a}_{12}^2 +{a}_1^2 {a}_{12}, \hspace{1.6cm} \langle 001 |A_R| 010 \rangle= -{a}_{12} {b}_{12} {d}_{12}$$ with all other entries vanishing. The first departure from SCC is in the 002 row, which is: $$\langle 002 | A_R\;\; = \;\; \; [0,0, \; -{a}_{12} {x}_1 {x}_2 -{a}_{13} {b}_{13} {c}_{13} -{a}_1 {a}_{13}^2 +{a}_1^2 {a}_{13}, \hspace{5.5cm}$$$$0,\;\;\;\;\; -{a}_{13} {b}_{13} {x}_4 +({a}_1 {a}_{12} -{a}_{12} {a}_2 -{a}_1 {a}_{13} ) {x}_1 ,$$$$0, \; -{a}_{12} {x}_1 {x}_3 -{a}_{13} {b}_{13} {d}_{13}, \hspace{1cm} 0,0,0,$$$$-{b}_{13} {c}_{12} {x}_1 -{a}_{12} {b}_{12} {x}_1 +{a}_1 {b}_{12} {x}_1, \; 0,$$ $$\label{eq:R002} \hspace{1.3cm} -{b}_{13} {d}_{12} {x}_1 +{a}_1 {b}_{13} {x}_1 -{b}_{12}^2 {x}_1, \hspace{1cm} 0,0,0,0,0,0,0,0,0,0,0,0,0,0]$$ The complete list of constraints, organised by number of terms, is as follows. ## List of equations We give the complete list of equations that are distinct up to an overall sign, organised by the number of terms (in computations we use the scale freedom to assume $a_1=1$). 
One term equations (8): $$\begin{aligned} &&a_{12}\,c_{12}\,d_{12}=0,\label{A1}\\&& a_{12}\,b_{12}\,d_{12}=0,\label{A2}\\&& a_{23}\,c_{23}\,d_{23}=0,\label{A3}\\&& a_{23}\,b_{23}\,d_{23}=0,\label{A4}\\&& \nonumber\\&& x_2\,x_4\,c_{12}=0,\label{A5}\\&& x_2\,x_4\,c_{23}=0,\label{A6}\\&& x_1\,x_3\,b_{12}=0,\label{A7}\\&& x_1\,x_3\,b_{23}=0\label{A8},\end{aligned}$$ Two term equations (20): $$\begin{aligned} &&a_{12}\,d_{12}\,(a_{12} - d_{12})=0,\label{A9}\\&& a_{23}\,d_{23}\,(a_{23} - d_{23})=0,\label{A10}\\&& \nonumber\\&& x_1\,x_2\,(d_{12} - d_{23})=0,\label{A11}\\&& x_1\,x_3\,(a_{12} - d_{12})=0,\label{A12}\\&& x_1\,x_3\,(a_{23} - d_{23})=0,\label{A13}\\&& x_2\,x_4\,(a_{12} - d_{12})=0,\label{A14}\\&& x_2\,x_4\,(a_{23} - d_{23})=0,\label{A15}\\&& x_3\,x_4\,(a_{12} - a_{23})=0,\label{A16}\\&& \nonumber\\&& x_1\,x_3\,c_{12} - a_{12}\,b_{12}\,d_{12}=0,\label{A17}\\&& x_1\,x_3\,d_{23} + a_{13}\,b_{13}\,d_{13}=0,\label{A18}\\&& x_1\,x_3\,a_{12} + a_{13}\,b_{13}\,d_{13}=0,\label{A19}\\&& x_1\,x_3\,c_{23} - a_{23}\,b_{23}\,d_{23}=0,\label{A20}\\&& x_1\,x_3\,d_{12} + a_{13}\,b_{13}\,d_{13}=0,\label{A21}\\&& x_1\,x_3\,a_{23} + a_{13}\,b_{13}\,d_{13}=0,\label{A22}\\&& x_2\,x_4\,d_{12} + a_{13}\,c_{13}\,d_{13}=0,\label{A23}\\&& x_2\,x_4\,a_{12} + a_{13}\,c_{13}\,d_{13}=0,\label{A24}\\&& x_2\,x_4\,b_{12} - a_{12}\,c_{12}\,d_{12}=0,\label{A25}\\&& x_2\,x_4\,d_{23} + a_{13}\,c_{13}\,d_{13}=0,\label{A26}\\&& x_2\,x_4\,a_{23} + a_{13}\,c_{13}\,d_{13}=0,\label{A27}\\&& x_2\,x_4\,b_{23} - a_{23}\,c_{23}\,d_{23}=0,\label{A28}\end{aligned}$$ Three term equations (20): $$\begin{aligned} && a_{12}\,(a_1^2 - a_1\,a_{12} - c_{12}\,b_{12})=0,\label{A29}\\&& a_{23}\,(c_{23}\,b_{23} - a_3^2 + a_3\,a_{23})=0,\label{A30}\\&& d_{12}\,(a_1^2 - a_1\,d_{12} - c_{12}\,b_{12})=0,\label{A31}\\&& d_{23}\,(c_{23}\,b_{23} - a_3^2 + a_3\,d_{23})=0,\label{A32}\\&& \nonumber\\&& x_1\,(a_1\,b_{12} - c_{12}\,b_{13} - a_{12}\,b_{12})=0,\label{A33}\\&& x_1\,(a_1\,b_{13} - d_{12}\,b_{13} - 
b_{12}^2)=0,\label{A34}\\&& x_1\,(c_{23}\,b_{13} - a_3\,b_{23} + a_{23}\,b_{23})=0,\label{A35}\\&& x_1\,(a_3\,b_{13} - d_{23}\,b_{13} - b_{23}^2)=0,\label{A36}\\&& \nonumber\\&& x_2\,(a_1\,c_{12} - c_{12}\,a_{12} - c_{13}\,b_{12})=0,\label{A37}\\&& x_2\,(a_1\,c_{13} - c_{12}^2 - c_{13}\,d_{12})=0,\label{A38}\\&& x_2\,(c_{13}\,a_3 - c_{13}\,d_{23} - c_{23}^2)=0,\label{A39}\\&& x_2\,(c_{13}\,b_{23} - c_{23}\,a_3 + c_{23}\,a_{23})=0,\label{A40}\\&& \nonumber\\&& x_3\,(a_1\,b_{12} - c_{12}\,b_{13} - d_{12}\,b_{12})=0,\label{A41}\\&& x_3\,(a_1\,b_{13} - a_{12}\,b_{13} - b_{12}^2)=0,\label{A42}\\&& x_3\,(c_{23}\,b_{13} - a_3\,b_{23} + d_{23}\,b_{23})=0,\label{A43}\\&& x_3\,(a_3\,b_{13} - a_{23}\,b_{13} - b_{23}^2)=0,\label{A44}\\&& \nonumber\\&& x_4\,(a_1\,c_{12} - c_{12}\,d_{12} - c_{13}\,b_{12})=0,\label{A45}\\&& x_4\,(a_1\,c_{13} - c_{12}^2 - c_{13}\,a_{12})=0,\label{A46}\\&& x_4\,(c_{13}\,a_3 - c_{13}\,a_{23} - c_{23}^2)=0,\label{A47}\\&& x_4\,(c_{13}\,b_{23} - c_{23}\,a_3 + c_{23}\,d_{23})=0,\label{A48}\end{aligned}$$ Four term equations (37): $$\begin{aligned} && x_3\,x_4\,(a_{12} - a_{23}) + x_1\,x_2\,( - d_{12} + d_{23})=0,\label{A49}\\&& x_3\,x_4\,a_{23} - x_2\,x_1\,d_{23} + d_{13}\,a_{13}\,(d_{13} - a_{13})=0,\label{A50}\\&& x_3\,x_4\,a_{12} - x_2\,x_1\,d_{12} + d_{13}\,a_{13}\,(d_{13} - a_{13})=0,\label{A51}\\&& \nonumber\\&& x_1\,x_2\,a_{12} + a_{13}\,( - a_1^2 + a_1\,a_{13} + c_{13}\,b_{13})=0,\label{A52}\\&& x_1\,x_2\,a_{23} + a_{13}\,(c_{13}\,b_{13} - a_3^2 + a_3\,a_{13})=0,\label{A53}\\&& x_1\,x_2\,b_{12} + b_{23}\,( - d_{23}\,a_{13} + a_{12}\,a_{13} - a_{12}\,a_{23})=0,\label{A54}\\&& x_1\,x_2\,b_{23} + b_{12}\,( - d_{12}\,a_{13} - a_{12}\,a_{23} + a_{13}\,a_{23})=0,\label{A55}\\&& x_1\,x_2\,c_{12} + c_{23}\,( - d_{23}\,a_{13} + a_{12}\,a_{13} - a_{12}\,a_{23})=0,\label{A56}\\&& x_1\,x_2\,c_{23} + c_{12}\,( - d_{12}\,a_{13} - a_{12}\,a_{23} + a_{13}\,a_{23})=0,\label{A57}\\&& \nonumber\\&& x_1\,x_3\,a_2 + b_{13}\,(d_{13}\,a_{12} - d_{23}\,a_{12} + 
d_{23}\,a_{13})=0,\label{A58}\\&& x_1\,x_3\,a_2 + b_{13}\,(d_{12}\,a_{13} - d_{12}\,a_{23} + d_{13}\,a_{23})=0,\label{A59}\\&& \nonumber\\&& x_2\,x_4\,a_2 + c_{13}\,(d_{12}\,a_{13} - d_{12}\,a_{23} + d_{13}\,a_{23})=0,\label{A60}\\&& x_2\,x_4\,a_2 + c_{13}\,(d_{13}\,a_{12} - d_{23}\,a_{12} + d_{23}\,a_{13})=0,\label{A61}\\&& \nonumber\\&& x_3\,x_4\,b_{12} + b_{23}\,(d_{12}\,d_{13} - d_{12}\,d_{23} - d_{13}\,a_{23})=0,\label{A62}\\&& x_3\,x_4\,b_{23} + b_{12}\,( - d_{12}\,d_{23} + d_{13}\,d_{23} - d_{13}\,a_{12})=0,\label{A63}\\&& x_3\,x_4\,c_{12} + c_{23}\,(d_{12}\,d_{13} - d_{12}\,d_{23} - d_{13}\,a_{23})=0,\label{A64}\\&& x_3\,x_4\,c_{23} + c_{12}\,( - d_{12}\,d_{23} + d_{13}\,d_{23} - d_{13}\,a_{12})=0,\label{A65}\\&& x_3\,x_4\,d_{12} + d_{13}\,( - a_1^2 + a_1\,d_{13} + c_{13}\,b_{13})=0,\label{A66}\\&& x_3\,x_4\,d_{23} + d_{13}\,(c_{13}\,b_{13} - a_3^2 + a_3\,d_{13})=0,\label{A67}\\&& \nonumber\\&& x_4\,(a_1\,d_{12} - a_1\,d_{13} - a_2\,d_{12}) - x_1\,c_{13}\,d_{13}=0,\label{A68}\\&& x_4\,(a_2\,d_{23} + a_3\,d_{13} - a_3\,d_{23}) + x_1\,c_{13}\,d_{13}=0,\label{A69}\\&& x_4\,a_{13}\,b_{13} + x_1\,(a_2\,a_{23} + a_3\,a_{13} - a_3\,a_{23})=0,\label{A70}\\&& x_4\,a_{13}\,b_{13} + x_1\,( - a_1\,a_{12} + a_1\,a_{13} + a_2\,a_{12})=0,\label{A71}\\&& x_4\,b_{12}\,(a_{12} - a_{13}) + x_1\,c_{12}\,( - d_{12} + d_{13})=0,\label{A72}\\&& x_4\,b_{23}\,(a_{13} - a_{23}) + x_1\,c_{23}\,( - d_{13} + d_{23})=0,\label{A73}\\&& \nonumber\\&& x_2\,(a_1\,a_{12} - a_1\,a_{13} - a_2\,a_{12}) - x_3\,c_{13}\,a_{13}=0,\label{A74}\\&& x_2\,(a_2\,a_{23} + a_3\,a_{13} - a_3\,a_{23}) + x_3\,c_{13}\,a_{13}=0,\label{A75}\\&& x_2\,d_{13}\,b_{13} + x_3\,( - a_1\,d_{12} + a_1\,d_{13} + a_2\,d_{12})=0,\label{A76}\\&& x_2\,d_{13}\,b_{13} + x_3\,(a_2\,d_{23} + a_3\,d_{13} - a_3\,d_{23})=0,\label{A77}\\&& x_2\,b_{12}\,(d_{12} - d_{13}) + x_3\,c_{12}\,( - a_{12} + a_{13})=0,\label{A78}\\&& x_2\,b_{23}\,(d_{13} - d_{23}) + x_3\,c_{23}\,( - a_{13} + a_{23})=0,\label{A79}\\&& \nonumber\\&& 
x_1\,(a_2\,b_{12} - a_2\,b_{23} + a_{12}\,b_{23} - a_{23}\,b_{12})=0,\label{A80}\\&& x_2\,(c_{12}\,a_2 - c_{12}\,a_{23} - a_2\,c_{23} + c_{23}\,a_{12})=0,\label{A81}\\&& x_3\,(a_2\,b_{12} - a_2\,b_{23} + d_{12}\,b_{23} - d_{23}\,b_{12})=0,\label{A82}\\&& x_4\,(c_{12}\,a_2 - c_{12}\,d_{23} - a_2\,c_{23} + c_{23}\,d_{12})=0,\label{A83}\\&& \nonumber\\&& c_{12}\,d_{13}\,b_{12} - c_{23}\,d_{13}\,b_{23} + d_{12}^2\,d_{23} - d_{12}\,d_{23}^2=0,\label{A84}\\&& c_{12}\,a_{13}\,b_{12} - c_{23}\,a_{13}\,b_{23} + a_{12}^2\,a_{23} - a_{12}\,a_{23}^2=0,\label{A85}\end{aligned}$$ Five term equations (24): $$\begin{aligned} &&x_1\,x_2\,a_1 + x_3\,x_4\,a_{13} + a_{12}\,( - a_{12}\,a_2 - b_{12}\,c_{12} + a_2^2)=0,\label{A86}\\&& x_1\,x_2\,a_3 + x_3\,x_4\,a_{13} + a_{23}\,( - a_{23}\,a_2 - b_{23}\,c_{23} + a_2^2)=0,\label{A87}\\&& x_1\,x_2\,d_{13} + x_3\,x_4\,a_3 + d_{23}\,( - b_{23}\,c_{23} + a_2^2 - a_2\,d_{23})=0,\label{A88}\\&& x_1\,x_2\,d_{13} + x_3\,x_4\,a_1 + d_{12}\,( - b_{12}\,c_{12} + a_2^2 - a_2\,d_{12})=0,\label{A89}\\&& \nonumber\\&& x_1\,x_2\,a_2 - c_{12}\,a_{23}\,b_{12} + c_{13}\,a_{23}\,b_{13} - d_{12}^2\,a_{13} + d_{12}\,a_{13}^2=0,\label{A90}\\&& x_1\,x_2\,a_2 + c_{13}\,a_{12}\,b_{13} - c_{23}\,a_{12}\,b_{23} - d_{23}^2\,a_{13} + d_{23}\,a_{13}^2=0,\label{A91}\\&& \nonumber\\&& x_3\,x_4\,a_2 - c_{12}\,d_{23}\,b_{12} + c_{13}\,d_{23}\,b_{13} + d_{13}^2\,a_{12} - d_{13}\,a_{12}^2=0,\label{A92}\\&& x_3\,x_4\,a_2 + c_{13}\,d_{12}\,b_{13} - c_{23}\,d_{12}\,b_{23} + d_{13}^2\,a_{23} - d_{13}\,a_{23}^2=0,\label{A93}\\&& \nonumber\\&& x_1\,(a_{13}\,d_{13} + a_2\,d_{12} - d_{12}\,d_{13}) + x_4\,( - b_{12}^2 + b_{13}\,a_1)=0,\label{A94}\\&& x_1\,(a_{13}\,d_{13} + a_2\,d_{23} - d_{13}\,d_{23}) + x_4\,(b_{13}\,a_3 - b_{23}^2)=0,\label{A95}\\&& x_1\,(a_1\,c_{13} - c_{12}^2) + x_4\,( - a_{12}\,a_{13} + a_{12}\,a_2 + a_{13}\,d_{13})=0,\label{A96}\\&& x_1\,(c_{13}\,a_3 - c_{23}^2) + x_4\,( - a_{13}\,a_{23} + a_{13}\,d_{13} + a_{23}\,a_2)=0,\label{A97}\\&& x_1\,(a_{13}\,d_{12} - 
b_{23}\,c_{12} + a_2^2 - a_2\,d_{12}) + x_4\,a_{23}\,b_{13}=0,\label{A98}\\&& x_1\,(a_{13}\,d_{23} - b_{12}\,c_{23} + a_2^2 - a_2\,d_{23}) + x_4\,a_{12}\,b_{13}=0,\label{A99}\\&& x_1\,c_{13}\,d_{12} + x_4\,( - a_{23}\,a_2 + a_{23}\,d_{13} - b_{23}\,c_{12} + a_2^2)=0,\label{A100}\\&& x_1\,c_{13}\,d_{23} + x_4\,( - a_{12}\,a_2 + a_{12}\,d_{13} - b_{12}\,c_{23} + a_2^2)=0,\label{A101}\\&& \nonumber\\&& x_2\,(a_1\,b_{13} - b_{12}^2) + x_3\,(a_2\,a_{12} + d_{13}\,a_{13} - a_{12}\,a_{13})=0,\label{A102}\\&& x_2\,(a_3\,b_{13} - b_{23}^2) + x_3\,(a_2\,a_{23} + d_{13}\,a_{13} - a_{13}\,a_{23})=0,\label{A103}\\&& x_2\,(a_2\,d_{12} - d_{12}\,d_{13} + d_{13}\,a_{13}) + x_3\,(a_1\,c_{13} - c_{12}^2)=0,\label{A104}\\&& x_2\,(a_2\,d_{23} - d_{13}\,d_{23} + d_{13}\,a_{13}) + x_3\,(c_{13}\,a_3 - c_{23}^2)=0,\label{A105}\\&& x_2\,d_{12}\,b_{13} + x_3\,(a_2^2 - a_2\,a_{23} - c_{23}\,b_{12} + d_{13}\,a_{23})=0,\label{A106}\\&& x_2\,d_{23}\,b_{13} + x_3\,( - c_{12}\,b_{23} + a_2^2 - a_2\,a_{12} + d_{13}\,a_{12})=0,\label{A107}\\&& x_2\,(c_{12}\,b_{23} - a_2^2 + a_2\,d_{23} - d_{23}\,a_{13}) - x_3\,c_{13}\,a_{12}=0,\label{A108}\\&& x_2\,(a_2^2 - a_2\,d_{12} - c_{23}\,b_{12} + d_{12}\,a_{13}) + x_3\,c_{13}\,a_{23}=0,\label{A109} \end{aligned}$$ # Aside on further analysing solutions {#ss:repthy1} A step even further than the all-ranks representation theory analysis in Sec.[4](#ss:analysis){reference-type="ref" reference="ss:analysis"} above would be to give an *intrinsic* characterisation of the centraliser algebra. We do not do this, but we can briefly set the scene. For example, $\check{R}=P$ as in ([\[eq:brex\]](#eq:brex){reference-type="ref" reference="eq:brex"}) is itself a solution --- this specific case, and also the corresponding $P$ for each $N$. This solution is relatively simple, and completely understood in all cases, but still highly non-trivial. Of course it factors through the symmetric group. 
(It is the Schur--Weyl dual to the natural general linear group action on tensor space.) Its kernel as a symmetric group representation depends on $N$ as well as $n$. Assuming we work over the complex field, the kernel is generated exactly by the rank $N+1$ antisymmetriser. Thus in particular for $N=2$ we have a faithful representation of 'classical' Temperley--Lieb. For $N=3$, on the other hand, the rank-3 antisymmetriser does not vanish (so the representation is faithful on the corresponding algebras --- see e.g. [@Brzezinski_1995]). More explicitly we have the charge-conserving decomposition $$\rho = (\rho_{111} \oplus \rho_{222} \oplus \rho_{333} ) \oplus (\rho_{112} \oplus \rho_{122} \oplus \rho_{113} \oplus \rho_{133} \oplus \rho_{223} \oplus \rho_{233} ) \oplus ( \rho_{123})$$ $$\label{eq:1081} \cong 3\rho_{111} \oplus 6\rho_{112} \oplus \rho_{123} \cong 10 L_3 \oplus 8 L_{21} \oplus L_{1^3}$$ where the bracketed sums are of isomorphic reps, and $\rho_{111}$ is trivial; $\rho_{112} =L_3 \oplus L_{21}$; $\rho_{123} = L_3 \oplus 2L_{21} \oplus L_{1^3}$ (i.e. the regular rep). Observe that the multiplicities 10, 8, 1 are the dimensions of the corresponding $GL_3$ irreps (recall these may be indexed by integer partitions of at most 2 rows, or equivalently of at most 3 rows where we delete all length-3 columns) as dictated by the duality. Note that this structure will be preserved by any generic deformation. We can characterise this in the classical way, starting with the spectrum of $\check{R}$ itself: $$\begin{aligned} \square \otimes \square \; & = & \; \square\!\square \oplus \ytableausetup{smalltableaux} \begin{ytableau} { } \\ {} \end{ytableau} \\ \label{eq:63} 3 \times 3 \;\; & = & \;\;\; 6 \;\;+\; \overline{3} \end{aligned}$$ $$\begin{aligned} \square \otimes \square\otimes\square \; & = & \; \left( \square\!\square \oplus \ytableausetup{smalltableaux} \begin{ytableau} { } \\ {} \end{ytableau} \right)\otimes\square \;\; = \;\; \begin{ytableau} {}&{}&{} \end{ytableau}\oplus 2. 
\begin{ytableau} {}&{}\\{} \end{ytableau} \oplus \emptyset \\ \label{eq:863} 3 \times 3\times 3 \;\; & = & \;\;\; (6 \;\;+\; \overline{3})\times 3 \;\;\;\;\;\;\; = \;\; 10\;\; +\;\; 2.8 \;\;+ \;\;1\end{aligned}$$ cf. ([\[eq:1081\]](#eq:1081){reference-type="ref" reference="eq:1081"}). Recall that this continues $$\square \otimes \square\otimes\square\otimes\square \; = \begin{ytableau} {}&{}&{}&{} \end{ytableau} \oplus 3.\begin{ytableau} {}&{}&{}\\{} \end{ytableau} \oplus 2.\begin{ytableau} {}&{}\\{}&{} \end{ytableau} \oplus 3.\begin{ytableau} {} \end{ytableau} \;$$ $$3\times 3\times 3\times 3 \;\;= \;\; 15 \;\; +\;\; 3.15 \;\;+\;\; 2.\overline{6} \;\; +\;\; 3.3$$ (Side note for future reference: Here in each third rank up the reps from three ranks down reappear (along with some more). This 'three' is one sign that we are dealing with $gl_3$ or $sl_3$ in this case.) Observe that the solution for $\check{R}$ in ([\[eq:JaR1\]](#eq:JaR1){reference-type="ref" reference="eq:JaR1"}) (in §[\[ss:Jxxxx\]](#ss:Jxxxx){reference-type="ref" reference="ss:Jxxxx"}) certainly does not have the multiplicities in ([\[eq:63\]](#eq:63){reference-type="ref" reference="eq:63"}). Indeed it agrees formally initially with $$\begin{aligned} \square \otimes \ytableausetup{smalltableaux} \begin{ytableau} { } \\ {} \end{ytableau} &=& \begin{ytableau} {}&{}\\{} \end{ytableau}\oplus \emptyset \\ 3\times \overline{3} \; &= &\;\;\; 8 \; + 1\end{aligned}$$ (see e.g. [@MartinRittenberg92]) --- formally, in the sense that the symmetry needed for the symmetric group(/Hecke/braid) action is broken here. In this formal picture it is not clear how the labels would correspond with the Hecke algebra/symmetric group labels --- we are in rank-2 (but at least there are two summands). And it is not clear how to continue. 
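The classical decompositions quoted above can be reproduced mechanically with the Pieri rule (tensoring with the vector rep adds one box, keeping at most 3 rows) and the Weyl dimension formula for $gl_3$. The helper functions below are our own illustrative sketch:

```python
# Pieri-rule decomposition of tensor powers of the gl_3 vector rep,
# with dimensions from the Weyl dimension formula.  Illustrative sketch.
from collections import Counter

def gl3_dim(par):
    """Weyl dimension formula for a gl_3 highest weight p1 >= p2 >= p3."""
    p = list(par) + [0]*(3 - len(par))
    return ((p[0]-p[1]+1)*(p[1]-p[2]+1)*(p[0]-p[2]+2)) // 2

def add_box(par):
    """All partitions with at most 3 rows obtained by adding one box."""
    p = list(par) + [0]*(3 - len(par))
    out = []
    for i in range(3):
        if i == 0 or p[i] < p[i-1]:
            p2 = p[:]; p2[i] += 1
            out.append(tuple(x for x in p2 if x > 0))
    return out

def tensor_power(n):
    mult = Counter({(1,): 1})
    for _ in range(n - 1):
        new = Counter()
        for par, m in mult.items():
            for nu in add_box(par):
                new[nu] += m
        mult = new
    return mult

m3 = tensor_power(3)
assert m3 == Counter({(3,): 1, (2, 1): 2, (1, 1, 1): 1})       # 10 + 2.8 + 1
assert sum(m * gl3_dim(p) for p, m in m3.items()) == 27

m4 = tensor_power(4)
assert m4[(4,)] == 1 and m4[(3, 1)] == 3 and m4[(2, 2)] == 2 and m4[(2, 1, 1)] == 3
assert sum(m * gl3_dim(p) for p, m in m4.items()) == 81        # 15 + 3.15 + 2.6 + 3.3
```

The rank-3 multiplicities $1,2,1$ with dimensions $10,8,1$ and the rank-4 pattern $15 + 3\cdot 15 + 2\cdot 6 + 3\cdot 3 = 81$ match the displays above.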
We have $$\begin{aligned} \begin{ytableau} {} \end{ytableau} \otimes \begin{ytableau} {}&{}\\{} \end{ytableau}&=& \begin{ytableau} {}&{}&{}\\{} \end{ytableau} \oplus \begin{ytableau} {}&{}\\{}&{} \end{ytableau} \oplus \begin{ytableau} {} \end{ytableau} \\ 3 \times \; 8 \;\;\;&=&\;\;\; 15\;\; +\;\;\; \overline{6} \;\; + 3 \end{aligned}$$ for example (so at least the centraliser algebra of $\begin{ytableau} {} \end{ytableau}\otimes\begin{ytableau} {}\\{} \end{ytableau}\otimes\begin{ytableau} {} \end{ytableau}$ is --- miraculously --- isomorphic to the Hecke quotient of $B_3$). But this is nowhere close to what we have. This suggests that it is at least time to pass to the Lie supergroups again, such as $GL(2|1)$ (cf. e.g. [@Arnal94; @Rittenberg; @MartinRittenberg92]). (Alternatively it could be that the construction is not dual to a quantum group action.) [^1]: hietarin\@utu.fi, University of Turku, Finland [^2]: p.p.martin\@leeds.ac.uk, University of Leeds, UK [^3]: rowell\@tamu.edu, Texas A&M University, USA
--- abstract: | We prove that every 2-sphere graph different from a prism can be vertex 4-colored in such a way that all Kempe chains are forests. This implies the following \"three tree theorem\": the arboricity of a discrete 2-sphere is 3. Moreover, the three trees can be chosen so that each hits every triangle. A consequence is a result of an exercise in the book of Bondy and Murty based on work of A. Frank, A. Gyarfas and C. Nash-Williams: the arboricity of a planar graph is less than or equal to 3. address: | Department of Mathematics\ Harvard University\ Cambridge, MA, 02138 author: - Oliver Knill date: September 4, 2023 title: The Three Tree Theorem --- # Preliminaries ## A finite simple graph $G=(V,E)$ is called a **2-manifold**, if every **unit sphere** $S(v)$, the sub-graph of $G$ induced by $\{ w \in V \; | \; (v,w) \in E \}$ is a **1-sphere**, a cyclic graph with $4$ or more elements. Because every edge in a $2$-manifold bounds two triangles and every triangle is surrounded by three edges, the set $T$ of triangles satisfies the **Dehn-Sommerville relation** $3|T| = 2|E|$. Since $G$ is $K_4$-free, the **Euler characteristic** of $G$ is $\chi(G) = |V|-|E|+|T|$, where $T$ is the set of **triangles** $K_3$, $E$ the set of **edges** $K_2$ and $V$ the set of **vertices** $K_1$ in $G$. If a $2$-manifold $G$ is connected and $\chi(G)=2$, it is called a **$2$-sphere**. The class of $2$-spheres together with $K_4$ is known to agree with the class of maximally planar, $4$-connected graphs. ## $G$ is **contractible** if there is $v \in V$ such that both the unit sphere graph $S(v)$ and the graph $G \setminus v$ induced by $V \setminus \{v\}$ are contractible. This inductive definition starts by assuming that $1=K_1$ is contractible. The **zero graph** $0$, the empty graph, is declared to be the **$(-1)$-sphere**. For $d \geq 0$, a **d-sphere** $G$ is a $d$-manifold for which $G \setminus v$ is contractible for some vertex $v$. 
A **$d$-manifold** is a graph for which every unit sphere is a $(d-1)$-sphere. If $G$ is a $d$-sphere, the two contractible sets, $G \setminus v$ and the unit ball $B(v)$ together cover $G$, so that its **category** is $2$ as in the continuum. ## Using induction one can show that a contractible graph satisfies $\chi(G)=1$ and a sphere satisfies the **Euler gem formula** $\chi(G)=1+(-1)^d$ which for $d=2$ is $2$. The Euler-Poincaré formula $\sum_{k=0}^d (-1)^k f_k=\sum_{k=0}^d (-1)^k b_k$ holds for a general **finite abstract simplicial complex**, where $f_k$ is the number of $K_{k+1}$ sub-graphs of $G$ and $b_k$ is a **Betti number**, the nullity of the $k$-th Hodge matrix block. In the connected $2$-dimensional case, the Euler-Poincaré formula is $|V|-|E|+|T|=1-b_1+b_2$, implying $\chi(G) \leq 2$. Indeed, the identity $\chi(G)=2$ forces $b_1=0$ and $b_2=1$ so that $G$ is orientable of genus $0$, justifying the characterization of 2-spheres using the Euler characteristic functional alone, among connected 2-manifolds. ## **Examples:** 1) The **octahedron** and **icosahedron** are $2$-spheres. 2) The suspension of a 1-sphere $C_n$ is a **prism graph**. It is a 2-sphere for $n \geq 4$. For $n=4$, it is the octahedron. Prisms are the only 2-dimensional non-primes in the **Zykov monoid** [@Zykov] of all spheres. No other 2-sphere can be written as $A \oplus B$ with non-zero $A,B$. 3) The **tetrahedron** $K_4$ is not a 2-manifold because $S(v)=K_3$. It is, however, **4-connected** and maximally planar. One could look at the 2-dimensional skeleton complex of $K_4$ which is a 2-dimensional simplicial complex qualifying as a sphere. Within graph theory, where the simplicial complexes are the Whitney complexes, it is not a sphere. 4) The **triakis-icosahedron** is a triangulation of the Euclidean 2-sphere $\{ |x|=1, x \in \mathbb{R}^3 \}$ but as it has $K_4$ sub-graphs, it is not a 2-manifold. 
Also here, one could look at the 2-skeleton complex and get as a geometric realization a triangulation of a regular sphere. We do not allow degree 3 vertices as this produces tetrahedra which are 3-dimensional. 5) The **cube** and **dodecahedron** both have dimension $1$ and are not 2-manifolds. Also here, one would have to transcend the framework to include it as a 2-sphere. One would look at CW complexes, meaning a discrete cell complex, where the quadrangles or pentagons are included as cells. There has been a lot of confusion about defining polyhedra and polytopes [@lakatos; @Richeson]. 6) The **Barycentric refinement** $G_1=(V_1,E_1)$ of a 2-sphere $G$ is a 2-sphere. It has as vertex set $V_1$, the set of complete sub-graphs of $G$ and the edge set $E_1=\{ (x,y), x \subset y, \; {\rm or} \; y \subset x\}$. We are currently under the impression that the arboricity in the barycentric limit could give an upper bound for the arboricity of all d-spheres. 7) An **edge refinement** $G_e=(V_e,E_e)$ of a 2-sphere $G$ with edge $e=(a,b)$ is a 2-sphere. One has $V_e = V \cup \{v\}$ for a new vertex $v$ and $E_e=(E \setminus \{e\}) \cup \{(v,a),(v,b),(v,c),(v,d)\}$ with $\{c,d\}=S(a) \cap S(b)$. 8) The reverse of 7), **edge collapse**, can be applied to a vertex $v$ of degree 4 for which all $w \in S(v)$ satisfy ${\rm deg}_G(w) \geq 6$. The result is then again a 2-sphere. 9) A **vertex refinement** $G_{a,b}$ picks a vertex $v$ and two non-adjacent points $\{a,b\}$ in $S(v)$ and takes $V_{a,b}=V \cup \{w\}$ and $E_{a,b} = E \cup \{ (v,w),(a,w),(b,w) \}$. 10) The reverse of 9), the **kite collapse**, can be applied to an embedded **kite** $K=K_2 +2$ in $G$ if both vertices of the disconnected pair $\{a,b\}$ in $K$ have degree $5$ or larger. ## The **arboricity** of a graph $G$ is defined as the minimal number of forests that partition the graph [@BM]. A **forest** is a triangle-free graph for which every connected component is contractible. 
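The definitions above are easy to test mechanically. A minimal sketch checking the 2-sphere conditions on the octahedron (the encoding of the graph is our own):

```python
# Verify on the octahedron: every unit sphere is a cycle with >= 4
# vertices, the Dehn-Sommerville relation 3|T| = 2|E| holds, and
# chi = |V| - |E| + |T| = 2.  Illustrative check, plain stdlib.
from itertools import combinations

V = range(6)
antipodes = {0: 5, 1: 4, 2: 3, 3: 2, 4: 1, 5: 0}   # the 3 non-edges
E = {frozenset(e) for e in combinations(V, 2) if antipodes[min(e)] != max(e)}

def unit_sphere(v):
    return {w for w in V if frozenset((v, w)) in E}

def is_cycle(S):
    """The induced graph on S is a single cycle: all degrees 2, connected."""
    deg = {v: sum(frozenset((v, w)) in E for w in S if w != v) for v in S}
    if any(d != 2 for d in deg.values()):
        return False
    seen, stack = set(), [next(iter(S))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack += [w for w in S if w != v and frozenset((v, w)) in E]
    return seen == set(S)

T = {t for t in combinations(V, 3)
     if all(frozenset(p) in E for p in combinations(t, 2))}

assert all(is_cycle(unit_sphere(v)) and len(unit_sphere(v)) >= 4 for v in V)
assert 3*len(T) == 2*len(E)             # Dehn-Sommerville
assert len(V) - len(E) + len(T) == 2    # Euler characteristic of a 2-sphere
```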
The empty graph $0$ is not a tree and has arboricity $0$, the graph $1=K_1$ as well as any 0-dimensional manifold (a graph without edges) has arboricity $1$ because it is a forest. For a **connected** graph $G$, the arboricity of $G$ agrees with the minimal number of sub-trees which cover the graph. Proof: if $G$ is connected, a finite set of forests $F_1,\dots,F_k$ partitioning $G$ can be completed to a set of covering trees $T_1,\dots,T_k$ of $G$. Conversely, every collection of covering trees $T_1,\dots,T_k$ can be morphed into a collection of disjoint partitioning forests $F_1,\dots,F_k$, where $F_i$ are obtained from $T_i$ by deleting edges that are already covered by other trees. We get back to this in an appendix. ## The **Nash-Williams theorem** [@NashWilliams; @CMWZZ; @HararyGraphTheory] identifies the arboricity of a positive dimensional graph as the least integer $k$ such that $|E_H| \leq k (|V_H|-1)$ for all sub-graphs $H=(V_H,E_H)$ and the understanding is that the vertex set $V_H$ **generates** the graph $H$, meaning that if $v,w$ are two nodes in $H$ and the connection $(v,w)$ is in $E$ then also $(v,w)$ is in $H$. The understanding is also that for $H=K_1$, where any $k$ would satisfy the Nash-Williams condition, the arboricity is $1$ and that for $H=0$, the empty graph, the arboricity is $0$. In particular, $|E|/(|V|-1)$ is always a lower bound for the arboricity of a connected graph $G=(V,E)$. Since any 4-connected planar graph different from $K_4$ is a sub-graph of a 2-sphere, the arboricity of a 4-connected planar graph is bounded by the maximal arboricity that is possible for 2-spheres. As we will show here, 2-spheres have arboricity $3$ so that all planar graphs will have arboricity $\leq 3$, a result which appears as Exercise 21.4.6 in [@BM] based on work of A. Frank, A. Gyarfas and C. Nash-Williams. The text [@Ruohonen] sees it as a consequence of the **Edmonds Covering Theorem** in matroid theory. 
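The Nash-Williams quantity can be computed by brute force on small graphs. A sketch for the octahedron (our own helper; the whole graph already gives $\lceil |E|/(|V|-1) \rceil = \lceil 12/5 \rceil = 3$):

```python
# Brute-force the Nash-Williams bound max_H ceil(|E_H| / (|V_H| - 1))
# over all induced sub-graphs H of the octahedron.  Illustrative sketch.
from itertools import combinations
from math import ceil

V = list(range(6))
non_edges = {frozenset(p) for p in [(0, 5), (1, 4), (2, 3)]}
E = {frozenset(e) for e in combinations(V, 2) if frozenset(e) not in non_edges}

def nash_williams_bound(V, E):
    best = 1
    for k in range(2, len(V) + 1):
        for W in combinations(V, k):
            EH = [e for e in E if e <= set(W)]   # edges induced by W
            best = max(best, ceil(len(EH) / (k - 1)))
    return best

assert nash_williams_bound(V, E) == 3
```

By the Nash-Williams theorem this value is the arboricity, so the octahedron has arboricity $3$, in agreement with the examples below.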
## **Examples**: 1) The arboricity of a cyclic graph $C_n=(V,E)=( \{0,1, \dots, n-1\}, \{ (k \; {\rm mod} \; n, k-1) \; | \; 1 \leq k \leq n\})$ with $n \geq 4$ is $2$. It is larger or equal to $|E|/(|V|-1)=n/(n-1)>1$ and two linear trees like $T_1=(\{0,1\}, \{ (0,1) \} )$ and $T_2=C_n \setminus K_2 = (V,E \setminus \{ (0,1) \})$ cover $C_n$. That the arboricity of $C_n$ is $2$ also follows from the fact that $C_n$ itself is not a tree. 2) The arboricity of a **figure 8 graph** is $2$ too. It can be partitioned into 2 forests $F_1,F_2$, where $F_1$ is a star graph with 5 vertices and $F_2$ is a forest consisting of 2 trees. From the two forests, one can get a tree cover by taking $T_1=F_1$ and by taking as $T_2$ the two trees of $F_2$ with a path connecting them. This illustrates the above switch from a **forest partition** to a **tree cover** which works in the connected graph case. ## 3\) The smallest **$2$-torus** that is a manifold is obtained by triangulating a $5 \times 5$ grid and identifying the left-right and bottom-top boundaries to get 16 vertices and $2 \cdot 16=32$ triangles. It has the f-vector $(|V|,|E|,|T|)=(16,48,32)$ so that $|E|/(|V|-1)=48/15>3$ and an explicit cover with $4$ forests shows the arboricity is $4$. This is larger than the number $3$ obtained in the Barycentric limit. It is still not excluded that the arboricity of a $d$-manifold is always smaller than or equal to the smallest integer larger than the Barycentric limit number $c_d$ to be defined later and which is $c_2=3$ in two dimensions. A conjecture of Albertson and Stromquist states that all 2-manifolds have chromatic number $\leq 5$ [@AlbertsonStromquist]. 4) An **octahedron** with $|V|=6,|E|=12,|T|=8$ has arboricity $3$, because $|E|/(|V|-1)=12/5 >2$ and because three spanning trees can cover it: start with two star graphs and a circular graph partitioning the edge set, then switch one of the equator colors. 5) There is a **projective plane** of chromatic number $5$. Its $f$-vector is $f=(15, 42, 28)$. 
The arboricity is 3. # The theorem ## The proof of the following theorem makes heavy use of the **4 color theorem** [@StromquistPlanar; @AppelHaken1]. **Theorem 1** (Three Tree Theorem). *The arboricity of any 2-sphere is $3$.* ## The arboricity of a sub-graph $H$ of $G$ is smaller than or equal to the arboricity of $G$ and any 4-connected planar graph is contained in a maximally planar 4-connected graph and so in a 2-sphere. For non-4-connected graphs, cover the 4-connected components with forests and glue the forests together. The following theorem is mentioned in [@BM] as Exercise 21.4.6 based on results of A. Frank and A. Gyarfas and using the Nash-Williams theorem. **Corollary 1** (Arboricity of planar graphs). *The arboricity of any planar graph is $\leq 3$.* ## We prove the stronger result, showing that any 2-sphere different from a prism can be neatly covered with forests. A **neat forest cover** is a cover by 3 forests such that every triangle hits each of the three forests in exactly one point. If $G$ is a $2$-sphere, define the **upper line graph** as the graph which has the edges of $G$ as vertices and where two are connected if they are in a common triangle. This is a 4-regular graph so that the chromatic number is less than or equal to $4$. We will see that the chromatic number of the upper line graph is $\leq 3$, the coloring given by the Kempe chains. Just to compare, the **lower line graph** or simply **line graph** connects two edges, if they intersect. ## The proof of the theorem follows from two propositions. The first gives a lower bound: **Proposition 1**. *The arboricity of any $2$-manifold is larger than $2$.* *Proof.* The **Euler-Poincaré formula** tells us that $\chi(G) = b_0-b_1+b_2 = 1+b_2 - b_1$ so that $\chi(G)=2-b_1$ or $\chi(G)=1-b_1$, depending on the orientability of $G$. Use the Euler gem formula $|V|-|E|+|T| \leq 2$ [@Richeson] as well as the **Dehn-Sommerville relation** $3|T|=2|E|$ to get $|V|-|E|/3 \leq 2$ or $|E| \geq 3|V|-6$. 
If there were two spanning trees covering all the edges, then the **Nash-Williams formula** gives the bound $|E| \leq 2|V|-2$. This is incompatible with $|E| \geq 3|V|-6$ for $|V|>4$, since $(3|V|-6)-(2|V|-2) = |V|-4 > 0$. ◻

## The second proposition gives an upper bound: **Proposition 2** (3 trees suffice). *A $2$-sphere $G$ can be covered with $3$ trees.*

## This is exercise 21.4.6 in [@BM] based on work of A. Frank, A. Gyarfas and C. Nash-Williams. We will prove something slightly stronger in that we can also assure that each of the three trees intersects any of the triangles, provided that the graph $G$ is not a prism.

## We were not aware of previous work on the arboricity of planar graphs before the google AI agent \"Bard\" told us about it in a personal communication. Since the agent had hallucinated during that chat with other claims, like that the arboricity of a cyclic graph $C_n$ is $1$, we did not take it seriously, but it prompted us to hit the literature again and led us to find the exercise in [@BM]. We are not aware of a publication proving that planar graphs have arboricity $\leq 3$. We will come back to this.

## We will establish the proposition standing on the shoulders of the **4-color theorem**. There is a **discrete contraction map** to shrink colored spheres. It uses **edge collapses** and **kite collapses**. It can be seen as a \"discrete heat flow\" because edge collapses remove curvature $1/3$ vertices and kite collapses reduce and merge vertex degrees, which tends to smooth out curvature: it in general flattens places with large positive curvature and raises places with very negative curvature.

## We actually will prove a stronger theorem. A **prism graph** is a 2-sphere that is the suspension of a cycle graph. (In general, the join of a $k$-sphere and an $l$-sphere is a $(k+l+1)$-sphere. The join with a $0$-sphere is a suspension. We add some remarks about the sphere monoid in an appendix.)
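The join operation just mentioned can be made concrete. A minimal sketch, in which a graph is a pair (vertex list, set of frozenset edges) and the helper names `join` and `zero_sphere` are our own:

```python
def join(g, h):
    """Zykov join: disjoint union plus all cross connections."""
    (vg, eg), (vh, eh) = g, h
    cross = {frozenset({a, b}) for a in vg for b in vh}
    return (vg + vh, eg | eh | cross)

def zero_sphere(a, b):
    # a 0-sphere: two isolated points
    return ([a, b], set())

# The octahedron is the triple join of three 0-spheres: joining the first two
# 0-spheres gives C_4, suspending it with a third 0-sphere gives the octahedron.
OCTA = join(join(zero_sphere(0, 5), zero_sphere(1, 4)), zero_sphere(2, 3))

# A prism: the suspension of a cycle graph, here of C_5.
C5 = (list(range(10, 15)),
      {frozenset({10 + i, 10 + (i + 1) % 5}) for i in range(5)})
PRISM5 = join(C5, zero_sphere(20, 21))
```

The f-vector data come out as expected: the octahedron has $6$ vertices and $12$ edges and is $4$-regular, the suspension of $C_5$ has $7$ vertices and $5+10=15$ edges.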
## A **neat 3-forest cover** of a 2-sphere is a partition of $G$ into 3 forests $A,B,C$ such that every triangle $t \in T$ intersects each of these graphs. The existence of a neat 3-forest cover is equivalent to a **neat 4-coloring**, a $4$-coloring for which there are no Kempe loops. We will show: **Theorem 2** (Existence of neat forests). *Every 2-sphere $G$ different from a prism admits a neat 3 forest partition.*

## We can now upgrade the 4 color theorem. Of course, to prove this, we will take the 4-color theorem for granted. **Corollary 2** (Neat 4 color theorem). *Every 2-sphere $G$ different from a prism admits a neat 4 coloring.* Here is some indication that the "stronger 3-forests-suffice\" result is hard: **Corollary 3**. *The neat 4 color theorem is equivalent to the neat 3 forest theorem.* As pointed out before, it is possible to prove the existence of 3 forests (not necessarily neat) without the 4-color theorem (dealing with not necessarily neat colorings). But so far, we could only see this as exercise 21.4.6 in [@BM] which builds on various research on directed graphs.

# Proof that 3 trees suffice

## In this section, we give the proof of the proposition, provided we have a geometric contraction map $\phi$ on a class $\mathcal{X}$ of colored $2$-spheres with Kempe loops. More details of the evolution will be given in the next section. The main strategy is a **descent-ascent argument**: reduce the area of a given colored 2-sphere $G \in \mathcal{X}$ until it is small enough that we can recolor it with a neat coloring. The neat coloring can then be lifted by ascent back to the original graph. The smallest sphere that is reached, the octahedron, can not be recolored neatly, but before reaching it, we can recolor.

## If we can recolor some $\phi^n(G)$ neatly, then this coloring can be lifted back to $G$ so that also $G$ admits a neat coloring.
The **strategy** is to prove that every colored 2-sphere with a Kempe loop that is not a prism can be compressed. This compression map $\phi$ reduces the area of the sphere and is possible if we have a Kempe loop present and are not in a prism. There is then an explicit graph $\phi^{-1}(P_n)$ that is reached just before entering the class of prisms $P_n$. These **pre-prisms** are already large enough to be colored neatly. It follows that also $G$ can be recolored neatly. We have built the ladder with a non-neat color function (assuring the existence of Kempe loops and so allowing to define the map). But we climb the ladder back with the neat color function.

## Any $2$-sphere $G$ is a **planar graph**. By the **4-color theorem**, there is a **vertex 4-coloring** $f:V \to \{1,2,3,4\}$ which we simply call a **4-coloring**. A **colored sphere** $(G,f)$ is a $2$-sphere $G=(V,E)$ equipped with a 4-coloring $f$. We also equip it with a fixed orientation. If $(a,b)$ is an edge so that $f(a)=1,f(b)=2$, call it an edge of type $(12)$. The vertex coloring $f$ produces an **edge coloring** $F: E \to \{A,B,C\}$ using the rules - color the edges of type $(12)$ and $(34)$ with $C$ - color the edges of type $(13)$ and $(24)$ with $B$ - color the edges of type $(23)$ and $(14)$ with $A$ Sub-graphs of $G$ with edge colors are simply called by their color name. Each of the sub-graphs $A,B,C$ is now a graph which is vertex colored by 2 colors. They are called **Kempe chains** [@Kempe1879; @Story1879; @Heawood90]. The graphs $A,B,C$ are $1$-dimensional, meaning that they are **triangle free**.

## The sub-graphs $A,B,C$ are not forests in general. Closed curves in $A$ or $B$ or $C$ are called **Kempe loops**. If there are no **Kempe loops**, then all of $A,B,C$ are forests and we do not have to work any further, as we then already have a neat 3-cover. We will therefore define the map $\phi$ only on colored spheres which have Kempe loops. Actually, the presence of a Kempe loop is important.
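The passage from a vertex 4-coloring $f$ to the edge coloring $F$ and the test whether all three Kempe sub-graphs are forests can be sketched as follows (helper names like `kempe_chains` and `is_neat` are our own):

```python
# Edge classes from a vertex 4-coloring, following the pairing rules above:
# types (12),(34) -> C ; (13),(24) -> B ; (23),(14) -> A.
EDGE_CLASS = {frozenset({1, 2}): 'C', frozenset({3, 4}): 'C',
              frozenset({1, 3}): 'B', frozenset({2, 4}): 'B',
              frozenset({2, 3}): 'A', frozenset({1, 4}): 'A'}

def kempe_chains(edges, f):
    """Partition the edge set into the three Kempe sub-graphs A, B, C."""
    chains = {'A': [], 'B': [], 'C': []}
    for u, v in edges:
        chains[EDGE_CLASS[frozenset({f[u], f[v]})]].append((u, v))
    return chains

def is_forest(edge_list):
    """Union-find acyclicity test: an edge landing inside an already
    existing component closes a Kempe loop."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_list:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def is_neat(edges, f):
    """A 4-coloring is neat iff none of A, B, C contains a Kempe loop."""
    return all(is_forest(c) for c in kempe_chains(edges, f).values())

# K_4 colored with 4 different colors: every Kempe class is a perfect matching.
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# Octahedron (opposite pairs (0,5),(1,4),(2,3)) with the 3-coloring that colors
# opposite vertices equally: the Kempe sub-graphs then contain loops.
OPP = {0: 5, 5: 0, 1: 4, 4: 1, 2: 3, 3: 2}
OCT = [(u, v) for u in range(6) for v in range(u + 1, 6) if OPP[u] != v]
```

On $K_4$ the coloring is neat; on the 3-colored octahedron, the class $C$ alone already contains the 4-cycle through the colors $1$ and $2$.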
In general, we can not shrink a colored sphere in such a way that we have compatibility with the coloring.

## Look at the set $\mathcal{X}$ of colored oriented 2-spheres $(G,f)$ for which $f$ has at least one Kempe loop, meaning that there are closed loops of the same edge color. This set is not empty, as it contains any colored sphere $(G,f)$ for which $f$ is a 3-color function. (These graphs were characterized by Heawood as the class of Eulerian graphs, graphs for which every vertex degree is even.) The set also contains prisms. Two different Kempe loops can intersect, either with different color or the same color: a 1-2 Kempe chain can for example intersect the 3-4 Kempe chain. A 3-colorable graph is an example where every unit sphere $S(v)$ is a Kempe loop. **Proposition 3** (Existence of a geometric evolution). *There exists a map $\phi: \mathcal{X} \to \mathcal{X}$ that reduces the area of $G$, as long as $G$ is not an octahedron. The octahedron is considered a fixed point of $\phi$. If $\phi^n(G)$ is a prism, then $\phi^{n-1}(G)$ admits a neat coloring.*

## This proposition is proven in the next section and establishes that we have a neat coloring on $G$ and so a neat forest cover, establishing in particular that "three trees are enough\". The reason is that the evolution will reach a prism and one step before it is a colored 2-sphere $(G,f) \in \mathcal{X}$ that allows for a 4-coloring without Kempe loops. In order to see that, we have to be able to define the map in each case where we have a Kempe loop and $G$ is not an octahedron. Then we have to see that we can reduce prism graphs and that so the only possible $G$ for which $\phi^2(G)$ is the octahedron can be recolored to have no Kempe loops.

# The map

## We assume that $G$ is a 2-sphere that is not a prism. The map $\phi$ first removes a degree-$4$ vertex by an edge collapse, provided there are degree 4 vertices.
This lowers the number of **peak positive curvature points** and is done by collapsing a Heawood wheel. If no degree 4 vertices exist any more, then $\phi$ collapses a Heawood kite. A **Heawood kite** $(H,f)$ is a colored kite graph $H$, where the coloring is such that the two unconnected vertices have the same color. The kite collapse will increase curvature at some vertices and decrease curvature at others. The main difficulty will be to show that except in the prism case, as long as a Kempe loop is present, there always exists a Heawood kite. Not all colored spheres have Heawood kites. But we will see that all colored spheres with a Kempe loop must have a Heawood kite. The collapse of a Heawood kite might introduce again degree 4 vertices, but we remain in the class of 2-spheres. Each step $\phi$ reduces the area of $G$ by at least $2$.

## A **Heawood leaf** is a vertex in a Kempe chain that has vertex degree $1$ within that chain. A **Kempe seed** is a vertex in a Kempe chain that is isolated, not connected to any other vertex of the chain. A Kempe tree either has a leaf or a seed. A $1$-dimensional graph that has no leaf and no seed must consist of a disjoint union of loops, possibly joined by connections. It is a $1$-dimensional network with vertex degrees all larger than $1$.

## A **Heawood wheel** is a unit ball $B(v)$ of a degree-4 vertex $v$. Degree 4 vertices are the points in $G$ with maximal curvature $1-4/6=1/3$. A Heawood wheel has 5 vertices and is also called a **wheel graph** $W_5$. Because we can only use 3 colors in the unit sphere $S(v)$ of such a graph, there are by the pigeon-hole principle two vertices in $S(v)$ which have the same color. (It is also possible that the unit sphere $S(v)$ has only 2 colors, in which case $S(v)$ is a Kempe chain.) These vertices $a,b$ with $f(a)=f(b)$ have to be opposite to each other.

## We have a colored 2-sphere $(G,f)$, where $G=(V,E)$.
Define the subset $V_4 \subset V$ of vertices in $G$ which have degree $4$. The sub-graph of $G$ generated by $V_4$ is called $G_4$. A **linear forest** is a forest in which every tree is a linear graph or a seed $K_1$, a graph without edges. Here is a major reason why prism graphs are special: **Lemma 1**. *If $G$ is a 2-sphere different from a prism, then $G_4$ is a linear forest.* *Proof.* If $G_4$ contains a cyclic graph $C_n$, then $G$ must be a prism graph. If $G_4$ contained a triangle, then it must be the octahedron (again a prism graph). If $G_4$ contained a branch point (a star graph with 3 or more points), then again, it would have to be the octahedron. ◻

## A connected component $Q$ in $G_4$ is the center line of a **Kempe diamond**, the suspension $Q+\{a,b\}$. There are now two possibilities. Either $f(a)=f(b)$ or $f(a) \neq f(b)$. In the case $f(a)=f(b)$, we just remove the diamond by identifying $a,b$. All the triangles in that diamond will disappear. We can still get a degree 4 vertex like that, but we have reduced the number of vertices. Now go back to the beginning of this stage. After finitely many steps, we will either have only degree 4 vertices, meaning an octahedron, or have reduced the number of degree 4 vertices by $1$. Repeat this step until no degree 4 vertices exist.

## In the case when $f(a) \neq f(b)$, we necessarily have that the center line $Q$ is a Kempe chain with only two colors. If we are not in the prism case, we can collapse wheels one by one until the center line has length $0$ or $1$. After doing so, no degree 4 vertices are left in distance 1 of the remaining center edge or point.

## All these reductions were compatible with the coloring. It can happen that a Kempe loop is removed, for example if $G$ contains a Kempe loop of length 4. Collapsing that Kempe wheel removes that loop. The coloring $f$ of $G$ can be adapted to a coloring $\phi(f)$ of $\phi(G)$.
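Finding Heawood kites, pairs of triangles glued along an edge whose two opposite vertices carry the same color, is mechanical. A sketch with our own helper names, relying on the fact that in a 2-sphere every edge lies in exactly two triangles:

```python
from collections import defaultdict

def heawood_kites(triangles, f):
    """Return the kites (a, x, y, b): two triangles (a,x,y), (b,x,y) sharing
    the edge (x,y), whose opposite vertices a, b are equally colored."""
    opposite = defaultdict(list)
    for t in triangles:
        for i in range(3):
            edge = frozenset(t[:i] + t[i + 1:])
            opposite[edge].append(t[i])
    kites = []
    for edge, (a, b) in opposite.items():
        if f[a] == f[b]:
            kites.append((a, *sorted(edge), b))
    return kites

# Octahedron faces: one vertex from each opposite pair {0,5}, {1,4}, {2,3}.
OCT_TRIANGLES = [(i, j, k) for i in (0, 5) for j in (1, 4) for k in (2, 3)]
# 3-coloring by opposite pairs: every edge is the waist of a Heawood kite.
F3 = {0: 1, 5: 1, 1: 2, 4: 2, 2: 3, 3: 3}
```

With the 3-coloring, all $12$ edges produce kites; with an injective coloring, the two opposite vertices of every edge are the antipodes and no kite exists.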
We now establish that under the condition of a Kempe loop and if $G$ is different from a prism, there must be a Heawood kite. **Lemma 2**. *Given a colored sphere $(G,f)$ with a Kempe loop, where $G$ is different from a prism. Then there is at least one Heawood kite.* *Proof.* A Kempe graph $C$ intersected with the 2-ball $U$ in the interior of a minimal loop must either be empty or have a **Heawood leaf**. ◻ **Lemma 3**. *The existence of a Heawood leaf in the interior of $U$ forces the existence of a Heawood kite.* *Proof.* We can assume without loss of generality that we deal with a minimal $1-2$ loop $L$ and that we have a leaf ending with a vertex $v$ of color $1$ (the case where it ends with $2$ being similar). The unit sphere of $v$ in $C$ contains only one vertex. Call it $w$. Look at the unit sphere $S(v)$ in $G$. It contains $w$ with color $2$ and all other vertices have color $3,4$. So, there are two vertices of the same color. This produces a Heawood kite. ◻

## Under the assumption that no degree 4 vertices exist, we can now collapse a Heawood kite. A collapse reduces the vertex degrees of two of the vertices by 1. The deformation preserves the presence of a Kempe loop. The conclusion is that unless we are in the octahedron case, there is a smaller colored sphere $(H,g)$. **Lemma 4** (3 neat forests produce a 4 colored gem). *Let $G$ be a 2-sphere. If $G$ has a neat 3 forest cover, then it has a neat 4-coloring.* *Proof.* Assume $A,B,C$ are the forests giving the colors attached to the edges of $G$. Think of a $4$-coloring as a gauge field $V \to \mathbb{Z}_2 \times \mathbb{Z}_2$, a set with 4 elements identified with the 4 colors. We attach now a **connection**, by assigning to every edge an element of the Klein four group given by the presentation $\langle A,B,C \; | \; A^2=B^2=C^2=ABC=1 \rangle$. This is an Abelian group with 4 elements. Now start coloring the vertices by picking a vertex $v$ and attaching to it the color $0$, the identity element. Now use the connection to color all of the neighbors.
This produces a coloring. The only problem we could encounter is a closed loop with non-trivial holonomy. Since the cover is neat, every triangle carries all three labels $A,B,C$ and has trivial holonomy $ABC=1$; as the 2-sphere is simply connected, every closed loop then has trivial holonomy. ◻

## This article has shown that if we can 4-color a 2-sphere $G$ neatly, we can constructively cover $G$ with 3 forests. In applications, we want to do this fast. To speed things up, it helps to assume that the coloring has the property that every ball $B_2(x)$ of radius $2$ contains all 4 colors. If not, change some of the vertices in $S(x)$ to the fourth color. This preconditioning already breaks many closed Kempe chains. An extreme case with many Kempe chains is if $G$ is Eulerian. By a theorem of Heawood, $G$ can then be colored with $3$ colors: just start with a point, color the unit sphere, and then follow the triangles to color the entire graph. In the case when the range of $f$ has $3$ points, every unit sphere is a Kempe chain. **Corollary 4**. *The construction of 3 neat forests is at least as hard as constructing a neat 4-coloring.* *Proof.* Tree data $(G,F)$ with a neat 3-tree coloring $F$ produce immediately a $4$ vertex coloring $f$. Since the 4-coloring is hard, also the neat 3-tree covering is hard in the sense of complexity. ◻

## Constructing a general 3-tree covering is computationally simpler. We are however not aware of a procedure that upgrades any "forest 3 cover\" to a "neat forest 3-cover\". What we have shown here is that we can upgrade a 4-coloring to a neat 4-coloring.

# Footnotes

## The arboricity of a $0$-dimensional graph by definition is $1$. Seeds are considered trees too and a graph without edges is a collection of seeds, still a forest. This assumption matches the **forest theorem**, telling that the number of rooted spanning forests in a graph is ${\rm det}(L+1)$ and the number of rooted spanning trees is the pseudo-determinant ${\rm Det}(L)$, where $L$ is the Kirchhoff matrix of $G$; for a zero dimensional graph, $L$ is the $0$ matrix and ${\rm det}(L+1)=1$. See [@cauchybinet].
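The forest theorem can be illustrated on the triangle $K_3$. A minimal sketch with a naive Laplace-expansion determinant (fine for tiny integer matrices):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

# Kirchhoff matrix of K_3: degree 2 on the diagonal, -1 off the diagonal.
L = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
LI = [[L[i][j] + (i == j) for j in range(3)] for i in range(3)]

rooted_forests = det(LI)                      # det(L+1): rooted spanning forests
spanning_trees = det([r[1:] for r in L[1:]])  # matrix-tree theorem: one cofactor
rooted_trees = 3 * spanning_trees             # Det(L) = |V| * (#spanning trees)
```

$K_3$ has $3$ spanning trees, hence $9$ rooted spanning trees, and a direct count of its rooted spanning forests (the empty forest, $3$ single edges with $2$ rootings each, $3$ paths with $3$ rootings each) gives $1+6+9=16=\det(L+1)$.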
The arboricity for $H=K_1$ should be declared to be $1$ too, even though $|E|/(|V|-1)$ technically does not make sense. In the disconnected case, the forest arboricity (the usual book definition) is not the same as the tree arboricity (which we did not use as a definition). In the connected case, these are the same notions, as indicated in the first section and in an appendix. The arboricity of a disjoint union of graphs is the maximum of the arboricities of the individual connected components.

## Covering problems with trees can also focus on packing with the same \"tile\". Most famous is the **Ringel conjecture** from 1963 which told that any tree with $n$ edges packs $2n+1$ times into $K_{2n+1}$. A tree of length $1$ for example fits $3$ times into $K_3$, a tree of length $2$ fits $5$ times into $K_5$. The arboricity of $K_5$ is $3$. There are 3 trees that do the covering. This has been proven for large $n$ [@RingelConjecture].

## In chapter 5 of [@Diestel] there is an exercise linking **chromatic number** with **arboricity**. Here it is: *Find a function f such that every graph of arboricity at least f(k) has colouring number at least k, and a function g such that every graph of colouring number at least g(k) has arboricity at least k, for all $k \in \mathbb{N}$.* We show here that for 2-spheres, there is an explicit relation. In general, a coloring with $n$ colors splits the edges into $n(n-1)/2$ bipartite classes, which however need not be forests; the chromatic number does not bound the arboricity, as complete bipartite graphs have chromatic number $2$ but arbitrarily large arboricity. In the other direction, if we have $k$ forests, then every sub-graph has a vertex of degree at most $2k-1$, so that a greedy coloring gives the upper bound $2k$ for the chromatic number.

## The classification of discrete 2-manifolds agrees with the classification of 2-dimensional differentiable manifolds. In a combinatorial setting we want to avoid the continuum. The argument that links the discrete with the continuum is that a geometric realization of the simplicial complex of $G$ is a 2-manifold.
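One way to see the upper bound $2k$ for the chromatic number of a graph covered by $k$ forests is degeneracy: repeatedly remove a minimum-degree vertex, then greedy-color in reverse removal order. A minimal sketch with our own helper name `degeneracy_coloring`:

```python
def degeneracy_coloring(adj):
    """Remove a minimum-degree vertex repeatedly, then greedy-color the
    vertices in reverse removal order."""
    remaining = {v: set(ws) for v, ws in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        order.append(v)
        for w in remaining[v]:
            remaining[w].discard(v)
        del remaining[v]
    colors = {}
    for v in reversed(order):
        used = {colors[w] for w in adj[v] if w in colors}
        colors[v] = min(c for c in range(len(adj) + 1) if c not in used)
    return colors

# Octahedron (= K_{2,2,2}): arboricity 3, so at most 2*3 = 6 colors are needed;
# its actual chromatic number is 3.
OPP = {0: 5, 5: 0, 1: 4, 4: 1, 2: 3, 3: 2}
ADJ = {v: {w for w in range(6) if w not in (v, OPP[v])} for v in range(6)}
COL = degeneracy_coloring(ADJ)
```

The coloring is proper and stays within the $2k$ bound; for graphs of degeneracy $d$, this procedure always uses at most $d+1$ colors.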
The geometric realization of every ball $B(v)$ is a wheel graph which has a two-dimensional ball as geometric realization. The structure and incidence of the unit balls provides an explicit **atlas** for a piecewise linear 2-manifold which then can be smoothed out to give a smooth 2-manifold. Finite discrete geometry is close to PL (piecewise linear) geometry. Topological manifolds are a much larger and much different category: a double suspension of a homology 3-sphere for example produces there a 5-sphere. This is not possible in the PL category or the discrete.

## The question of characterizing spheres in the continuum had been pursued by Poincaré already. The thread was taken up in a finitist setting by Hermann Weyl during the **"Grundlagen crisis\"** in mathematics. A combinatorial description based on cardinalities of simplices does not work because there are complexes with the same f-vector, where one is a sphere and the other is not. The existence of homology spheres, already constructed by Poincaré, thwarted any attempts of a cohomological characterization of spheres. Weyl's problem was solved using homotopy or **contractibility**. A fancy version is the solution of the Poincaré conjecture that characterizes $d$-spheres using homotopy for $d \geq 2$.

## Various characterizations of spheres have been proposed in discrete, combinatorial settings. The definition used here was spearheaded in digital topology by Evako [@Evako1994; @I94a]. Forman's discrete Morse theory [@FormanVectorfields; @Forman2002; @Forman1999] characterized spheres using Reeb's notions [@dehnsommervillegaussbonnet], seeing them as manifolds admitting a Morse function with exactly 2 critical points. The Lusternik-Schnirelmann picture [@josellisknill] is to see spheres as manifolds of characteristic 2, meaning that they can be covered by two contractible balls. In two dimensions, Kuratowski's theorem identifies 2-spheres as 2-manifolds that are planar.
More precisely, Whitney would have characterized a 2-sphere as a maximally planar 4-connected graph different from $K_4$. $2$-spheres are the class of graphs among planar graphs which are hardest to color.

## Spheres can also be characterized as manifolds which have trivial homotopy type and for 3-spheres as 3-manifolds which are simply connected. Poincaré's conjecture is now Perelman's theorem. The deformation $\phi$ is a Mickey Mouse **Ricci flow** and is certainly motivated as such, because we get rid of large curvature parts by removing cross wheels and reduce negative curvature parts by collapsing kites. In higher dimensions, a continuum deformation argument should be able to prove a discrete Perelman theorem. But deformations in higher dimensions most likely lead to subtle bubble-off difficulties as in the continuum.

## To see a hexahedron (=cube) or dodecahedron as a 2-dimensional sphere with Euler characteristic $2$ would require us to go beyond simplicial complexes and use the larger category of **cell complexes**, a category where 2-dimensional cells can be attached to already present 1-dimensional cyclic graphs. This requires care. If we declared for example every 4-loop in a hexahedron graph to be a "cell\", we would have much too many faces. Historically, the continuum was a guide, seeing polyhedra as triangulations of classical spheres. The confusion was already evident at the time of Euler. See [@lakatos; @Richeson]. **Kepler polyhedra** for example are regular polyhedra of Euler characteristic different from 2. Another larger category in which one can work is the category of **$\Delta$-sets**, a pre-sheaf over a simplex category generalizing simplicial complexes. This is a **finite topos** and would be the ultimate finite category to work in. But graphs are much more intuitive and accessible.
## The history of the 4-color theorem is discussed in textbooks on graph coloring like [@SaatyKainen; @Barnette; @Ore; @ChartrandZhang2; @RobinWilson; @Fisk1980]. Kempe's proof [@Kempe1879] from 1879 had been refuted in 1890 [@Heawood90] but some of his ideas survived. Already in the same issue of the American Journal of Mathematics, William E. Story pointed out that Kempe's proof needs to be made rigorous [@Story1879]. Kempe's descent idea for the proof allows to see quickly that one can color any 2-sphere with 5 colors. More geometric approaches have been spearheaded by Fisk [@Fisk1977b; @knillgraphcoloring; @knillgraphcoloring2; @knillgraphcoloring3]. Every d-manifold can be colored by $2d+2$ colors [@manifoldcoloring]. Again, we have to stress that this is a completely different set-up than the Heawood story: Ringel and Youngs finished showing that on a surface of genus $g=\lceil (n-3)(n-4)/12 \rceil$ there is a complete graph $K_n$ embedded. For $n=7$, this gives the famous torus case $g=1$ of Heawood. But this embedding of a $6$-dimensional graph in a $2$-dimensional surface is not what we call a discrete manifold [@Heawood49; @Ringel1974], as the unit sphere of a vertex in the graph $K_7$ is the $5$-dimensional $K_6$.

## For any coloring $f:V \to \mathbb{R}$, the Poincaré-Hopf index $i(v) = 1-\chi(S^-(v))$ for the sub-graph $S^-(v)$ generated by $\{ w \in S(v), f(w)<f(v)\}$ is an integer-valued function such that $\sum_{v \in V} i(v) = \chi(G)$, an identity that holds for all finite simple graphs. See [@Glass1973; @poincarehopf; @parametrizedpoincarehopf; @MorePoincareHopf]. \[The maximum $v$ of $f$ has $S^-_f(v)=S(v)$. The valuation formula $\chi(A \cup B) = \chi(A)+\chi(B)-\chi(A \cap B)$ produces $\chi(G) = \chi(G-v) + \chi(B(v)) -\chi(S(v)) = \chi(G-v)+ 1-\chi(S^-(v)) = \chi(G-v) + i(v)$, allowing induction.\] A vertex $v \in V$ is a **critical point** of $f$ if $S^-(v)$ is not contractible.
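The identity $\sum_{v} i(v) = \chi(G)$ can be checked by machine on small graphs. A sketch that computes the Euler characteristic by enumerating cliques (helper names are our own):

```python
def euler_char(vertices, adj):
    """chi = sum_k (-1)^k f_k of the clique complex, by enumerating cliques."""
    chi = 0
    def extend(clique, candidates):
        nonlocal chi
        for i, v in enumerate(candidates):
            chi += (-1) ** len(clique)   # the clique + [v] has len(clique)+1 vertices
            extend(clique + [v],
                   [w for w in candidates[i + 1:] if w in adj[v]])
    extend([], sorted(vertices))
    return chi

def poincare_hopf_indices(adj, f):
    """i(v) = 1 - chi(S^-(v)), with S^-(v) generated by the neighbors w
    of v with f(w) < f(v)."""
    return {v: 1 - euler_char([w for w in adj[v] if f[w] < f[v]], adj)
            for v in adj}

# Octahedron with opposite pairs (0,5), (1,4), (2,3).
OPP = {0: 5, 5: 0, 1: 4, 4: 1, 2: 3, 3: 2}
ADJ = {v: {w for w in range(6) if w not in (v, OPP[v])} for v in range(6)}
```

For every injective function $f$ on the octahedron, the indices sum to $\chi = 6-12+8 = 2$.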
The Reeb characterization describes spheres as the manifolds for which there is a coloring with exactly two critical points.

## The expectation of the Poincaré-Hopf index $i_f(v)$ over natural probability spaces, like the uniform counting measure on colorings, is the **Levitt curvature** $K(v) = 1+\sum_{k=0}^{\infty} (-1)^{k+1} \frac{f_k(S(v))}{k+2}$ [@Levitt1992; @cherngaussbonnet; @valuation]. For $K_4$-free graphs, it leads to $1-f_0(S(v))/2+f_1(S(v))/3$ which for 2-manifolds simplifies with $f_1(S(v))=f_0(S(v))$ to $1-f_0(S(v))/6$, a curvature which goes back to Victor Eberhard [@Eberhard1891] and was used in graph coloring arguments already at the time of Birkhoff and heavily in the 4 color theorem program [@Heesch], as Gauss-Bonnet $\chi(G)=2$ implies that there must be some vertices of degree $4$ or $5$, points of **positive curvature**.

## The integral geometric approach to Gauss-Bonnet $\chi(G) = \sum_{v \in V} K(v)$ leads in the continuum for even-dimensional manifolds $M$ to the **Gauss-Bonnet-Chern** result. See [@indexexpectation; @colorcurvature; @ConstantExpectationCurvature; @DiscreteHopf2]. This can be seen by **Nash embedding** $M$ into an ambient Euclidean space $E$, using that for Lebesgue almost all unit vectors $u \in E$, the function $f_u(x) = x \cdot u$ is a Morse function. The Euler characteristic $\chi(M) = \sum_{v \in M} i(v)$ is a finite sum. The expectation $K(v) = {\rm E}[i(v)]$ is taken over the probability space $\Omega=S(0)= \{ u \in E, |u|=1\}$ with a rotationally invariant probability measure $\mu$ on $S(0)$ (parametrizing the Morse functions). By a result of Hermann Weyl, this must be the Gauss-Bonnet-Chern curvature.

## Trees are useful data structures within networks. There are many reasons: a network equipped with a tree, for example, allows fast search, as the absence of loops avoids running in circles.
The data are located in the leaves of the tree, nodes with only one neighbor. Decision trees in a network of decisions are another example, providing causal strategies to reach goals located at leaf positions in the tree. The Nash-Williams result shows that the arboricity measures a **network density**. Indeed, $|E|/(|V|-1)$ is close to $|E|/|V|$ which is half the **average vertex degree** of the network by the **Euler handshake formula**. The arboricity is essentially the maximal density of subgraphs of $G$.

## If we know a finite set of rooted trees covering the graph, we know the entire network. The reason is simple: given two nodes $v,w$ in the network, then $v,w$ are connected if and only if in one of the rooted trees, the distance between $v$ and $w$ is $1$. The **arboricity** of a graph is the minimal number of trees which are needed to cover all connections. Within a network it can serve as some sort of dimension. There are lots of interesting questions, like how many trees there can be for a neat coloring of a 2-sphere.

## We have already called a graph $G=(V,E)$ **contractible** if there is a vertex $v \in V$ such that the unit sphere $S(v)$ as well as the graph generated by $V \setminus \{v\}$ are both contractible. The **Lusternik-Schnirelmann category** ${\rm cat}(G)$ is the minimal number of contractible sub-graphs covering $G$. One has ${\rm cat}(G) \leq {\rm arb}(G)$ in the positive dimensional connected graph case because every tree is contractible. We will see that for a 2-sphere ${\rm cat}(G)=2$, while always ${\rm arb}(G)=3$. With the arboricity of a $0$-dimensional graph defined to be 1, the inequality ${\rm cat}(G) \leq {\rm arb}(G)$ only works for connected graphs.

## Whitney once characterized 2-spheres using duality. There is a theorem of Karl von Staudt [@Staudt1847] telling that $G$ and $\hat{G}$ have the same number of spanning trees, because every spanning tree for $G$ produces a spanning tree in $\hat{G}$.
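For small graphs, the arboricity just defined can be computed by brute force over all edge labelings. A sketch with our own helper names (exponential in $|E|$, only for toy examples):

```python
from itertools import product

def is_forest(vertices, edge_list):
    """Union-find acyclicity test."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_list:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def arboricity(vertices, edges):
    """Smallest k for which the edges can be partitioned into k forests,
    found by trying all k-labelings of the edge list."""
    for k in range(1, len(edges) + 1):
        for labels in product(range(k), repeat=len(edges)):
            parts = [[e for e, l in zip(edges, labels) if l == i]
                     for i in range(k)]
            if all(is_forest(vertices, p) for p in parts):
                return k
    return 0

C4 = (range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])
K4 = (range(4), [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
# figure 8: two triangles glued at the vertex 0
FIG8 = (range(5), [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])
```

This reproduces the arboricity $2$ of cyclic graphs and of the figure 8 graph from the examples section, and the value $\lceil 4/2 \rceil = 2$ for $K_4$.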
The dual graph however is only 1-dimensional and has smaller arboricity than $G$ if $G$ is a 2-sphere. See [@KnillTorsion].

## Prisms pointed towards the **arithmetic of graphs**. The Zykov monoid can be group completed and together with the large product (introduced by Sabidussi [@Sabidussi]) gives the **Sabidussi ring**. It is isomorphic to the **Shannon ring** in which the disjoint union is group completed and where the Shannon multiplication (strong multiplication) is used [@Shannon1956]. In the Shannon ring, the connected components are the **additive co-product primes**. In the Sabidussi ring, the graphs which are not the join of two non-zero graphs are the **additive Zykov primes**. Spheres form a sub-monoid of the Zykov monoid and most spheres are prime. In dimension $2$ there are very few non-primes: they are the prisms. [@ArithmeticGraphs; @RemarksArithmeticGraphs; @numbersandgraphs].

## For Dehn-Sommerville relations for any $d$-sphere, see [@Sommerville1929]. See also [@DehnSommerville] for a Gauss-Bonnet approach or [@dehnsommervillegaussbonnet]. The Dehn-Sommerville quantities are invariant under Barycentric refinement, while at the same time they would have to explode under refinement; the only way both can happen is that they are $0$. They can be obtained from the eigenvectors of the Barycentric refinement operator. See [@KnillBarycentric; @KnillBarycentric2]. See [@TreeForest] for the tree forest ratio. The **Nash-Williams density** $f_1/(f_0-1)$ measures some sort of density of the network, similarly as the average simplex cardinality [@AverageSimplexCardinality] or other functionals [@KnillWolframDemo1; @KnillFunctional]. This could lead to interesting variational problems, like which $d$-manifolds produce the largest Nash-Williams density. We hope to work on this a bit more in the future.

## Does cohomology determine the arboricity for manifolds? In other words, if two graphs have the same Betti vector $(b_0,b_1, \dots)$ and the same dimension, does this imply that the arboricity is the same?
This is not true: the arboricity of a sufficiently large $d$-sphere can be estimated by the Perron-Frobenius eigenvector of the Barycentric refinement operator and must be at least a constant $c_d$ that grows exponentially. But the arboricity of a suspension of $G$ is less or equal than the arboricity of $G$ plus $2$. In higher dimensions, the **arboricity spectrum** gets wider and wider within the same class of graphs. There are $d$-spheres that can be colored with $2d$ trees and there are spheres with arboricity growing like $c_d = 1.70948^d$. We hope to explore this a bit more in the future. Especially interesting is the question whether the constants $c_d$ define the upper bound. We currently believe that this is the case.

## We believe that the arboricity of any 2-manifold is maximally 4. We have no proof of that but it would follow if the arboricity of a $d$-manifold were the smallest integer larger than the Barycentric arboricity constant $c_d$, which is in the case $d=2$ equal to $3$. There are examples like tori or Klein bottles with arboricity 4. In particular, we believe that the arboricity of any 3-manifold is smaller or equal than the smallest integer larger than $c_3=13/2=6.5$, which is $7$. We know using Barycentric refinement that there are 3-spheres with arboricity $7$. The suspension of the octahedron, the smallest 3-sphere, has Nash-Williams ratio $24/7$ and so arboricity $4$. We see that there are 3-spheres of arboricity $4,5,6,7$. This is completely different from the 2-dimensional case considered here, where the three tree theorem assures that the arboricity of a 2-sphere is locked to be $3$. For 5-spheres, the range of possible arboricity is $5,6, \dots, 13$. The lower bound grows linearly with the dimension, while the upper bound grows exponentially with the dimension.

## If $c_d$ is an integer, then the arboricity can be larger by $1$.
For $d=2$, where $c_d=3$, this happens for 2-tori, as in dimension $2$ the Barycentric refinement limit is $3$ and the arboricity can be $4$. The first constants $\left\{ c_1,c_2,c_3,\dots, c_{10} \right\}$ are $$\left\{ 1,3,\frac{13}{2},\frac{25}{2},\frac{751}{33},\frac{1813}{45},\frac{3087965}{43956}, \frac{6635137}{54628},\frac{488084587476}{2335501595},\frac{200710991888}{559734735} \right\} \; .$$ This suggests that the correct maximal arboricity numbers for $d$-spheres in the case $d=0,1,\dots, 10$ are $$\{1, 2, 3, 7, 13, 23, 41, 71, 122, 209, 359 \} \; .$$ We do not know whether there are integer $c_d$ besides $c_1,c_2$. There are lots of open questions. Are there 3-spheres of arboricity 8, for example? We believe this is not possible.

# Appendix: Arboricity {#appendix-arboricity .unnumbered}

## The current public material about the arboricity of graphs is rather sparse. The functional is mentioned in [@HararyGraphTheory; @BM]. Most graph theory textbooks do not cover it. Both in [@HararyGraphTheory] and [@BM], it is defined as the minimal number of forests partitioning the graph. This has been mentioned in the introduction. Let us elaborate about it a bit more.

## A **tree** is a triangle free graph which is contractible. The smallest tree is $K_1$, also called a **seed**. Note that the zero graph is not a tree. Every tree has Euler characteristic $\chi(G) =|V|-|E|=1$. A **forest** is a triangle free graph for which every connected component is a tree. A forest has Euler characteristic $\chi(G) = b_0$, which is the number of connected components of the graph. A forest can be contracted to a $0$-dimensional graph. A **spanning tree** in a connected graph $G$ is a subgraph that is a tree and which has the same vertex set as $G$. A **spanning forest** is a forest which has the same vertex set as $G$. **Lemma 5**.
*If $G$ is connected, the following numbers are the same:\ a) The arboricity $T_1(G)$, the minimal number of forests partitioning $G$.\ b) The minimal number $T_2(G)$ of trees covering $G$.\ c) The minimal number $T_3(G)$ of spanning trees covering $G$.* *Proof.* This follows from the fact that every spanning tree is a tree, every tree is a forest, and every forest in a connected graph can be completed to a spanning tree. ◻ ## If $G$ is not connected, the numbers $T_1(G),T_2(G)$ need not be the same. For a forest $G$ consisting of two trees, for example, we have $T_1(G)=1$ while $T_2(G)=2$. A related interesting concept is the **linear arboricity** $T_4(G) \geq T_1(G)$, which asks for the minimal number of linear forests (disjoint unions of paths) partitioning the graph. A lower bound is $\lceil \Delta/2 \rceil$ (where $\lceil x \rceil$ is the smallest integer larger than or equal to $x$), where $\Delta(G)$ is the maximal vertex degree. The **linear arboricity conjecture** asks whether $\lceil (\Delta+1)/2 \rceil$ is an upper bound for the linear arboricity. If a regular graph $G$ of even vertex degree $\Delta$ gets a vertex removed, one gets a graph $H$ whose linear arboricity is $\Delta/2$ if and only if the graph $G$ has a Hamiltonian decomposition. ## A graph is called **contractible** if there exists $v \in V(G)$ such that $S(v)$ and $G \setminus v$ are both contractible; the induction is primed by declaring the one-point graph $K_1$ contractible. Contractible graphs are connected. Trees are contractible; forests that are not trees are not contractible. The **Lusternik-Schnirelmann category** (or simply category) of a graph is the minimal number of contractible subgraphs that cover the graph. Since trees are contractible, the category in general is less than or equal to the number of trees covering $G$. For a 2-sphere $G$, the category of $G$ is $2$, while the arboricity of $G$ is $3$. ## Let $G$ be a graph with $f$-vector $f=(f_0,f_1, \dots, f_d)$, where $f_k$ is the number of $k$-dimensional simplices (= cliques = faces) $K_{k+1}$ in $G$.
The graph $G_1$ in which the complete subgraphs of $G$ are the vertices and where two are connected if one is contained in the other is called the **Barycentric refinement** of $G$. If $G$ is contractible, then $G_1$ is contractible. If $G$ is a $d$-sphere, then $G_1$ is a $d$-sphere. The graph $G_n=(G_{n-1})_1$ is the $n$'th Barycentric refinement. ## The **Barycentric refinement matrix** $A$ is the $(d+1) \times (d+1)$ matrix with entries $$A_{ij} = i! \, S(j,i) \; ,$$ where $S(j,i)$ are the **Stirling numbers** of the second kind. The vector $Af$ is the $f$-vector of $G_1$. Define $$c_d = \lim_{n \to \infty} E(G_n)/(V(G_n)-1) = \lim_{n \to \infty} f_1(G_n)/f_0(G_n) \; .$$ Since the normalized $f$-vectors $f(G_n)/||f(G_n)||_2$ (where $|| \cdot ||_2$ is the Euclidean norm) converge to the **Perron-Frobenius vector**, we can compute the limits $c_d$. They grow like $1.70948^d$. From the Nash-Williams formula, we see that the arboricity of $d$-spheres grows exponentially in $d$. **Proposition 4**. *a) If $H$ is a subgraph of $G$, then ${\rm arb}(H) \leq {\rm arb}(G)$.\ b) The arboricity of the disjoint sum of two graphs $H,G$ is ${\rm max}({\rm arb}(H),{\rm arb}(G))$.\ c) There are graphs with a $K_{n+1}$ subgraph with arboricity $\geq c_n$.\ d) The arboricity of $K_n$ is $\lceil n/2 \rceil$, so that any graph with $n$ vertices has arboricity $\leq \lceil n/2 \rceil$.\ e) For a connected graph, the Lusternik-Schnirelmann category of $G$ is a lower bound for the arboricity.* *Proof.* a) This follows directly from Nash-Williams. The inequality can also be derived from the definition and is used in proofs of [@HararyGraphTheory].\ b) This follows from the definition and from the fact that forests can be disconnected.\ c) This follows from the Barycentric refinement picture and Nash-Williams.\ d) For a complete graph $K_{2n}$ we can get the forests explicitly. See page 92 in [@HararyGraphTheory].\ e) This follows because for connected graphs we can use the tree cover picture and because trees are contractible.
◻ # Appendix: The sphere monoid {#appendix-the-sphere-monoid .unnumbered} ## The **Zykov join** $G \oplus H$ of two graphs $G,H$ has $V(G) \cup V(H)$ as vertex set and $E(G) \cup E(H) \cup \{ (a,b) : a \in V(G), b \in V(H) \}$ as edge set. It is dual to the disjoint union of graphs: $G \oplus H = \overline{\overline{G} +\overline{H}}$, where $\overline{G}$ is the graph complement of $G$. The Zykov join $G \oplus S_0$ of $G$ with a $0$-sphere is called the **suspension** of $G$. The suspension of a $d$-sphere is a $(d+1)$-sphere. ## The union of all $k$-spheres, where $k \geq -1$, forms a sub-monoid of the Zykov monoid of all graphs. The **Dehn-Sommerville monoid** is a monoid strictly containing the sphere monoid, in which the unit spheres are only required to have the Euler characteristic of the corresponding spheres and to be Dehn-Sommerville manifolds of one dimension lower. Examples are disjoint unions of odd-dimensional spheres, or unions of two even-dimensional spheres glued along north and south poles, so that the unit spheres are either spheres or unions of spheres. **Lemma 6**. *If $G$ is a $p$-sphere and $H$ is a $q$-sphere, then $G \oplus H$ is a $(p+q+1)$-sphere. The empty graph $0$ is the **zero element**.* *Proof.* Use induction and note that for $v \in V(G)$, the unit sphere is $S_{G \oplus H}(v) = S_G(v) \oplus H$, which by induction is a $(p+q)$-sphere. Similarly, the unit sphere of every $v \in V(H)$ is a $(p+q)$-sphere. ◻ ## A graph is a **Zykov prime** if it cannot be written as $G=H \oplus K$, where $H,K$ are both non-zero graphs. A small observation about **small non-prime spheres**: **Lemma 7**.
*a) The only non-prime 1-sphere is $C_4$.\ b) The only non-prime 2-spheres are the prism graphs $C_n \oplus S_0$.\ c) For every $n \geq 6$ there exists exactly one non-prime $2$-sphere with $n$ vertices.\ d) For every $n \geq 8$, the number of non-prime $3$-spheres with $n$ vertices is $2$ plus the number of 2-spheres with $(n-2)$ vertices.* ## The fact that the only non-prime 2-spheres are prisms makes them special. If we make an edge refinement of a prism along an edge in the equator, we get a larger prism. If we make an edge refinement along an edge hitting one of the poles, we get an **almost prism**, a graph which allows a 4-coloring without Kempe loops. M.O. Albertson and W.R. Stromquist. Locally planar toroidal graphs are $5$-colorable. , 84(3):449--457, 1982. K. Appel and W. Haken. A proof of the four color theorem. , 16:179--180, 1976. D. Barnette. , volume 9 of *Dolciani Mathematical Expositions*. AMS, 1983. H-G. Bigalke. . Birkhäuser, 1988. J. Bondy and U. Murty. , volume 244 of *Graduate Texts in Mathematics*. Springer, New York, 2008. G. Chartrand and P. Zhang. . CRC Press, 2009. B. Chen, M. Matsumoto, J. Wang, Z. Zhang, and J. Zhang. A short proof of Nash-Williams' theorem for the arboricity of a graph. , 10:27--28, 1994. R. Diestel. , volume 173 of *Graduate Texts in Mathematics*. Springer, 5th edition, 2016. V. Eberhard. . Teubner Verlag, 1891. A.V. Evako. Dimension on discrete spaces. , 33(7):1553--1568, 1994. S. Fisk. Variations on coloring, surfaces and higher-dimensional manifolds. , pages 226--266, 1977. S. Fisk. Cobordism and functoriality of colorings. , 37(3):177--211, 1980. R. Forman. Combinatorial vector fields and dynamical systems. , 228(4):629--681, 1998. R. Forman. Combinatorial differential topology and geometry. , 38, 1999. R. Forman. A user's guide to discrete Morse theory. , 48, 2002. L. Glass. A combinatorial analog of the Poincare index theorem. , 15:264--268, 1973. F. Harary. . Addison-Wesley Publishing Company, 1969. P. J. Heawood.
Map colour theorem. , 24:332--338, 1890. Kempe refutal, Cabot Science PER 6669 Library Info. P. J. Heawood. Map-colour theorem. , 51:161--175, 1949. A.V. Ivashchenko. Graphs of spheres and tori. , 128(1-3):247--255, 1994. F. Josellis and O. Knill. A Lusternik-Schnirelmann theorem for graphs.\ http://arxiv.org/abs/1211.0750, 2012. A.B. Kempe. On the geographical problem of the four colours. , 2, 1879. O. Knill. The average simplex cardinality of a finite abstract simplicial complex. https://arxiv.org/abs/1905.02118, 2019. O. Knill. A graph theoretical Gauss-Bonnet-Chern theorem. http://arxiv.org/abs/1111.5395, 2011. O. Knill. Dimension and Euler characteristics of graphs. \ demonstrations.wolfram.com/DimensionAndEulerCharacteristicsOfGraphs, 2012. O. Knill. A graph theoretical Poincaré-Hopf theorem. http://arxiv.org/abs/1201.1162, 2012. O. Knill. On index expectation and curvature for networks. http://arxiv.org/abs/1202.4514, 2012. O. Knill. Cauchy-Binet for pseudo-determinants. , 459:522--547, 2014. O. Knill. Characteristic length and clustering. , 2014. O. Knill. Coloring graphs using topology. , 2014. O. Knill. Curvature from graph colorings. , 2014. O. Knill. The graph spectrum of barycentric refinements. , 2015. O. Knill. Graphs with Eulerian unit spheres. , 2015. O. Knill. Universality for Barycentric subdivision. , 2015. O. Knill. Gauss-Bonnet for multi-linear valuations. http://arxiv.org/abs/1601.04533, 2016. O. Knill. On a Dehn-Sommerville functional for simplicial complexes. https://arxiv.org/abs/1705.10439, 2017. O. Knill. On the arithmetic of graphs. https://arxiv.org/abs/1706.05767, 2017. O. Knill. Eulerian edge refinements, geodesics, billiards and sphere coloring. , 2018. O. Knill. Constant index expectation curvature for graphs or Riemannian manifolds. https://arxiv.org/abs/1912.11315, 2019. O. Knill. Dehn-Sommerville from Gauss-Bonnet. https://arxiv.org/abs/1905.04831, 2019. O. Knill. More on numbers and graphs.
https://arxiv.org/abs/1905.13387, 2019. O. Knill. . https://arxiv.org/abs/1912.00577, 2019. O. Knill. A parametrized Poincare-Hopf theorem and clique cardinalities of graphs. https://arxiv.org/abs/1906.06611, 2019. O. Knill. On index expectation curvature for manifolds. https://arxiv.org/abs/2001.06925, 2020. O. Knill. . https://arxiv.org/abs/2106.14374, 2021. O. Knill. Remarks about the arithmetic of graphs. https://arxiv.org/abs/2106.10093, 2021. O. Knill. Analytic torsion for graphs. https://arxiv.org/abs/2201.09412, 2022. O. Knill. The Tree-Forest Ratio. https://arxiv.org/abs/2205.10999, 2022. I. Lakatos. . Cambridge University Press, 1976. N. Levitt. The Euler characteristic is the unique locally determined numerical homotopy invariant of finite complexes. , 7:59--67, 1992. R. Montgomery, A. Pokrovskiy, and B. Sudakov. A proof of Ringel's conjecture. , 31:663--720, 2021. C. Nash-Williams. Edge-disjoint spanning trees of finite graphs. , 36:445--450, 1961. O. Ore. . Academic Press, 1967. D.S. Richeson. . Princeton University Press, Princeton, NJ, 2008. The polyhedron formula and the birth of topology. G. Ringel. . Grundlehren der mathematischen Wissenschaften. Springer Verlag, 1974. K. Ruohonen. . Translation of the TUT course Graafiteoria. T.L. Saaty and P.C. Kainen. . Dover Publications, 1986. G. Sabidussi. Graph multiplication. , 72:446--457, 1959/1960. C. Shannon. The zero error capacity of a noisy channel. , 2:8--19, 1956. D.M.Y. Sommerville. . Methuen and Co. Ltd, 1929. K.G.C. Von Staudt. . Nuernberg, 1847. page 21. W.E. Story. Note on the preceding paper: \[on the geographical problem of the four colours\]. , 2, 1879. W. Stromquist. The four color theorem for locally planar graphs. In Bondy and Murty, editors, *Graph theory and related topics*. Academic Press, 1979. R. Wilson. . Princeton Science Library. Princeton University Press, 2002. A.A. Zykov. On some properties of linear complexes. (russian). , 24(66):163--188, 1949.
arxiv_math
{ "id": "2309.01869", "title": "The Three Tree Theorem", "authors": "Oliver Knill", "categories": "math.CO cs.DM", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The determinant of a skew-symmetric matrix has a canonical square root given by the Pfaffian. Similarly, the resultant of two reciprocal polynomials of even degree has a canonical square root given by their *reciprocant*. Computing the reciprocant of two cyclotomic polynomials yields a short and elegant proof of the Law of Quadratic Reciprocity. author: - Matthew H. Baker bibliography: - QR.bib title: Reciprocity via Reciprocants --- [^1] # Introduction Let $p$ be a prime number and let $a$ be an integer not divisible by $p$. The *Legendre symbol* $\left( \frac{a}{p} \right)$ is defined by $\left( \frac{a}{p} \right) = 1$ if $a$ is a square modulo $p$ and $\left( \frac{a}{p} \right) = -1$ otherwise. According to *Euler's criterion*, $a^{(p-1)/2} \equiv 1 \pmod{p}$ if $\left( \frac{a}{p} \right) = 1$ and $a^{(p-1)/2} \equiv -1 \pmod{p}$ if $\left( \frac{a}{p} \right) = -1$. The Law of Quadratic Reciprocity, first proved by Gauss, asserts that there is an unexpected relationship between $\left( \frac{p}{q} \right)$ and $\left( \frac{q}{p} \right)$ when $p,q$ are distinct odd primes, and a supplement to the law asserts that $\left( \frac{2}{p} \right)$ depends only on $p$ modulo 8. **Theorem 1** (Law of Quadratic Reciprocity). 1. *If $p$ and $q$ are distinct odd primes then $\left( \frac{p}{q} \right) \left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}$.* 2. *If $p$ is an odd prime then $\left( \frac{2}{p} \right) = (-1)^{\frac{p^2 - 1}{8}}$.* There are currently more than 300 known proofs of the Law of Quadratic Reciprocity [@QRdatabase]. In this paper we will present an elegant proof that deserves to be better known. 
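Euler's criterion is also an effective algorithm: $a^{(p-1)/2} \bmod p$ can be computed by fast modular exponentiation. The following Python sketch (the helper `legendre` is ours, not part of the paper) spot-checks both parts of Theorem 1 on small primes:

```python
# Euler's criterion as an algorithm: a^((p-1)/2) mod p equals 1 or p-1,
# matching (a/p) = +1 or -1.  (Hypothetical helper, not from the paper.)
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)  # fast modular exponentiation
    return -1 if r == p - 1 else r

# Part (a) for p = 3, q = 7: (3/7)(7/3) = (-1)^{1*3} = -1.
assert legendre(3, 7) * legendre(7, 3) == -1
# Part (b): (2/p) = (-1)^((p^2-1)/8) for small odd primes.
assert all(legendre(2, p) == (-1) ** ((p * p - 1) // 8) for p in [3, 5, 7, 11, 13, 17])
```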
The basic approach, via the identity $$\label{eq:introRES3} {\rm Res}(g,f) = (-1)^{\deg(f) \cdot \deg(g)} {\rm Res}(f,g)$$ for resultants, appears to have been independently discovered on at least two occasions [@Merindol; @Hambleton-Scharaschkin], see Section [5](#sec:RelatedWork){reference-type="ref" reference="sec:RelatedWork"} below for a discussion of related work. Our exposition is somewhat novel, in that a central role is played by an expression that we dub the *reciprocant*.[^2] The resultant of two reciprocal[^3] polynomials $f$ and $g$ of even degree is always a square, and the reciprocant of $f$ and $g$ furnishes a canonical square root. If $p$ and $q$ are distinct primes, the resultant of the cyclotomic polynomials $\Phi_p(x)$ and $\Phi_q(x)$ is always equal to 1, but their reciprocant $\mathop{\mathrm{{\rm Rec}}}(\Phi_p(x),\Phi_q(x))$ turns out to be the Legendre symbol $\left( \frac{q}{p} \right)$. By symmetry, we have $\mathop{\mathrm{{\rm Rec}}}(\Phi_q(x),\Phi_p(x)) = \left( \frac{p}{q} \right)$, and part (a) of the Law of Quadratic Reciprocity is then a consequence of [\[eq:introRES3\]](#eq:introRES3){reference-type="eqref" reference="eq:introRES3"}. We also provide a proof via reciprocants of the supplementary law for $\left( \frac{2}{p} \right)$. Our proof of the resultant identity $\mathop{\mathrm{{\rm Res}}}(\Phi_p(x),\Phi_q(x))=1$ is original, to the best of our knowledge. It is in some ways more elementary than the other proofs we have seen of this formula. Throughout the article, we strive to keep the exposition as elementary as possible, with the goal of making the paper understandable by a reader who has taken basic undergraduate courses in number theory, abstract algebra, and linear algebra. In order to make the paper as self-contained as possible, we provide two appendices, one on resultants and one on the trace polynomial (which is used to define the reciprocant). 
# Resultants and Reciprocants ## Resultants The resultant of two monic[^4] polynomials $f,g \in R[x]$ over an integral domain $R$ satisfies numerous useful identities, including the following (see Appendix A for details): - (RES1) If $f(x) = (x - \alpha_1)\cdots (x - \alpha_m)$ with all $\alpha_i$ in $R$, then ${\rm Res}(f,g) = \prod_{i} g(\alpha_i)$. - (RES2) ${\rm Res}(g,f) = (-1)^{\deg(f) \cdot \deg(g)} {\rm Res}(f,g)$. - (RES3) Suppose $\phi : R \to R'$ is a ring homomorphism. Then[^5] $$\phi(\mathop{\mathrm{{\rm Res}}}(f,g)) = \mathop{\mathrm{{\rm Res}}}(\phi(f),\phi(g)).$$ - (RES4) If $g(x) = f(x) \cdot q(x) + r(x)$ with $f,g,r \in R[x]$ monic and $q \in R[x]$ arbitrary, then $${\rm Res}(f,g) = {\rm Res}(f,r).$$ ## Reciprocal polynomials and their traces {#sec:recip} A polynomial $g(x) = a_0 + a_1 x + \cdots + a_{n} x^{n} \in R[x]$ with coefficients in a ring[^6] $R$ is called *reciprocal* if $a_n \neq 0$ and $a_k = a_{n-k}$ for all $k=0,1,\ldots,n$. Equivalently, $g$ is reciprocal if and only if $g(x) = x^{n}g(\frac{1}{x})$. If $g \in R[x]$ is reciprocal of even degree $2m$, there is a unique polynomial $g^\#(x) \in R[x]$ of degree $m$ such that $$\label{eq:Chebyshev transform} g(x) = x^m g^\#(x + \frac{1}{x}).$$ We call $g^\#(x)$ the *trace polynomial* of $g$ (see Appendix B for details). Note that if $g(x)$ is monic, then $g^\#(x)$ is monic as well. The following lemma will be proved in Appendix B: **Lemma 2**. *If $g(x) = \prod_{i=1}^m (x - \alpha_i)(x - \alpha_i^{-1})$ for some units $\alpha_1,\ldots,\alpha_m \in R^\times$, then $g$ is reciprocal and $$\label{eq:product formula for gsharp} g^\#(x) = \prod_{i=1}^m \left(x - (\alpha_i + \alpha_i^{-1}) \right).$$* *Remark 3*.
Conversely, it follows from [\[eq:Chebyshev transform\]](#eq:Chebyshev transform){reference-type="eqref" reference="eq:Chebyshev transform"} that if $K$ is a field, $g \in K[x]$ is reciprocal of even degree $2m$, and $L$ is a splitting field for $g$ over $K$, there exist $\alpha_1,\ldots,\alpha_m \in L^\times$ such that $g(x) = \prod_{i=1}^m (x - \alpha_i)(x - \alpha_i^{-1})$. ## Reciprocants Over an integral domain, the reciprocant is a canonical square root of the resultant of two reciprocal polynomials. More precisely: **Proposition 4**. *If $R$ is an integral domain and $f,g \in R[x]$ are monic reciprocal polynomials of even degree, then $$\mathop{\mathrm{{\rm Res}}}(f,g) = \mathop{\mathrm{{\rm Rec}}}(f,g)^2,$$ where $\mathop{\mathrm{{\rm Rec}}}(f,g) := \mathop{\mathrm{{\rm Res}}}(f^\#,g^\#) \in R$ is the *reciprocant* of $f$ and $g$.* *Proof.* Let $K$ be the fraction field of $R$ and let $L$ be a splitting field for $f$ over $K$. By Remark [Remark 3](#rem:split){reference-type="ref" reference="rem:split"}, we can write $f(x) = \prod_{i=1}^m (x - \alpha_i)(x - \alpha_i^{-1})$ with $\alpha_i \in L$ for all $i$. In what follows, we will apply (RES3) to the natural injective map $\phi : R \to L$. Let $a_i = \alpha_i + \alpha_i^{-1}$ for $i=1,\ldots,m$. We have: $$\begin{aligned} \mathop{\mathrm{{\rm Res}}}(f^\#,g^\#)^2 &= \prod_i g^\#(a_i) \cdot \prod_i g^\#(a_i) \qquad\text{(by (RES1), (RES3), and \eqref{eq:product formula for gsharp})} \notag \\ &= \prod_i \alpha_i^{-m} g(\alpha_i) \cdot \prod_i \alpha_i^m g(\alpha_i^{-1}) \qquad\text{(by \eqref{eq:Chebyshev transform})} \notag \\ &= \prod_i g(\alpha_i) \cdot \prod_i g(\alpha_i^{-1}) \notag \\ &= \mathop{\mathrm{{\rm Res}}}(f,g) \qquad\text{(by (RES1)).} \notag \end{aligned}$$ ◻ We will use the following in our proof of Quadratic Reciprocity: **Proposition 5**.
*If $g_1,g_2,h \in {\mathbb Z}[x]$ are monic and reciprocal polynomials of even degree and $n$ is a positive integer such that $g_1 \equiv g_2 \pmod{n}$, then $$\mathop{\mathrm{{\rm Rec}}}(g_1,h) \equiv \mathop{\mathrm{{\rm Rec}}}(g_2,h) \pmod{n}$$ and $$\mathop{\mathrm{{\rm Rec}}}(h,g_1) \equiv \mathop{\mathrm{{\rm Rec}}}(h,g_2) \pmod{n}.$$* A proof, based on property (RES3) of resultants, is given in Appendix B. # Proof of the Law of Quadratic Reciprocity For $n \geq 1$, define $$\label{eq:gn} g_n(x) = \frac{x^n - 1}{x-1} = x^{n-1} + x^{n-2} + \cdots + x + 1 \in {\mathbb Z}[x].$$ If $p$ is prime, then since $x^p - 1 \equiv (x-1)^p \pmod{p}$ we have $$\label{eq:gpmodp} g_p(x) \equiv (x-1)^{p-1} \pmod{p}.$$ **Proposition 6**. *If $m,n$ are relatively prime positive integers, then ${\rm Res}(g_m,g_n) = 1$.* *Proof.* If $m=n=1$ then $\mathop{\mathrm{{\rm Res}}}(g_m,g_n) = \mathop{\mathrm{{\rm Res}}}(1,1) = 1$. We may therefore suppose without loss of generality that $n > m$. Note that since ${\rm gcd}(m,n)=1$, at least one of $m$ and $n$ is odd. By the division algorithm, we can write $n = mq + r$ with $q,r$ integers such that $q \geq 0$ and $0 \leq r < m$. Since at least one of $m$ and $n$ is odd, the same is true for $m$ and $r$. Working in the quotient ring ${\mathbb Z}[x]/(x^m - 1)$, we have $$\begin{aligned} x^n - 1 &\equiv (x^m)^q \cdot x^r - 1 \notag\\ &\equiv 1^q \cdot x^r - 1 \notag\\ &\equiv x^r - 1 \pmod{x^m - 1}. 
\notag\\\end{aligned}$$ In other words, there is a polynomial $h(x) \in {\mathbb Z}[x]$ such that $$x^n - 1 = (x^m - 1)h(x) + x^r - 1.$$ Dividing both sides by $x-1$ gives $$g_n(x) = g_m(x) h(x) + g_r(x).$$ By (RES4) and (RES2), we have $$\label{eq:euclidean} \mathop{\mathrm{{\rm Res}}}(g_m,g_n) = \mathop{\mathrm{{\rm Res}}}(g_m,g_r) = \mathop{\mathrm{{\rm Res}}}(g_r,g_m).$$ Since ${\rm gcd}(m,n)=1$, it follows from [\[eq:euclidean\]](#eq:euclidean){reference-type="eqref" reference="eq:euclidean"} and the Euclidean algorithm that there is an integer $k\geq 1$ such that $$\mathop{\mathrm{{\rm Res}}}(g_m,g_n) = \mathop{\mathrm{{\rm Res}}}(g_k,g_1) = \mathop{\mathrm{{\rm Res}}}(g_k,1) = 1.$$ ◻ *Remark 7*. Conversely, if ${\rm gcd}(m,n) = d > 1$ then (RES1) and (RES3) (applied to the natural injection $\phi : {\mathbb Z}\hookrightarrow{\mathbb C}$) imply that $\mathop{\mathrm{{\rm Res}}}(g_m,g_n)=0$, since a primitive $d^{\rm th}$ root of unity in ${\mathbb C}$ is a common root of $g_m$ and $g_n$. *Remark 8*. Here is an alternate proof of Proposition [Proposition 6](#prop:Res g){reference-type="ref" reference="prop:Res g"} which is arguably more conceptual, but somewhat less elementary. First, observe that if $K$ is any field and $\alpha \in K$ satisfies both $\alpha^m = 1$ and $\alpha^n = 1$, with ${\rm gcd}(m,n)=1$, then necessarily $\alpha = 1$. Let $p$ be a prime number, let ${\mathbf F}_p$ be the finite field of order $p$, and let $\phi : {\mathbb Z}\to {\mathbf F}_p$ be the natural homomorphism. Applying (RES3) to $\phi$ implies, together with (RES1) and the above observation with $K={\mathbf F}_p$, that $\mathop{\mathrm{{\rm Res}}}(g_m,g_n) \not\equiv 0 \pmod{p}$. Since this holds for all prime numbers $p$, we must have $\mathop{\mathrm{{\rm Res}}}(g_m,g_n) = \pm 1$. By Proposition [Proposition 4](#prop:res square){reference-type="ref" reference="prop:res square"}, we must in fact have $\mathop{\mathrm{{\rm Res}}}(g_m,g_n) = 1$. 
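Proposition 6 and Remark 7 are easy to confirm by direct computation; a small sketch, assuming the `sympy` library:

```python
# Res(g_m, g_n) = 1 when gcd(m, n) = 1, and 0 when m and n share a factor,
# for g_n = 1 + x + ... + x^{n-1}.  (Assumes sympy.)
from math import gcd

from sympy import symbols, resultant

x = symbols('x')

def g(n):
    return sum(x**k for k in range(n))

for m in range(2, 8):
    for n in range(2, 8):
        r = resultant(g(m), g(n), x)
        assert r == (1 if gcd(m, n) == 1 else 0)
```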
Assume from now on that $n$ is odd. Since $g_n$ is a reciprocal polynomial of even degree, it follows from [\[eq:Chebyshev transform\]](#eq:Chebyshev transform){reference-type="eqref" reference="eq:Chebyshev transform"} that $$\label{eq:gnsharp2} g_n^\#(2) = g_n(1) = n.$$ Furthermore, for any ring $R$, if $g(x)=(x-1)^{2m} \in R[x]$ then, by Lemma [Lemma 2](#lem:computing the reciprocant){reference-type="ref" reference="lem:computing the reciprocant"}, $$\label{eq:special sharp} g^\#(x) = (x-2)^m.$$ *Remark 9*. By Remark [Remark 14](#rem:gnrecursion){reference-type="ref" reference="rem:gnrecursion"}, we have $g_1^\#(x) = 1$ and $g_3^\#(x) = x+1$, and $$\label{eq:gnrecursion} g_n^\#(x) = x g_{n-2}^\#(x) - g_{n-4}^\#(x)$$ for all odd integers $n \geq 5$. This implies that the polynomials $g_n^\#$ are related to the classical *Lucas polynomials* $L_n(x)$, defined for $n \geq 0$ by $L_0(x) = 2, L_1(x) = x$, and $L_n(x) = xL_{n-1}(x) + L_{n-2}(x)$, as follows. For $n\geq 1$ odd, define $H_n(x)$ by $L_n(x) = xH_n(x^2)$. Then $g_n^\#(x) = H_n(x-2)$. *Proof of the Law of Quadratic Reciprocity.* Let $p,q$ be distinct odd primes. Since $\mathop{\mathrm{{\rm Res}}}(g_p,g_q) = 1$ by Proposition [Proposition 6](#prop:Res g){reference-type="ref" reference="prop:Res g"}, it follows from Proposition [Proposition 4](#prop:res square){reference-type="ref" reference="prop:res square"} that $\mathop{\mathrm{{\rm Rec}}}(g_p,g_q) \in \{ \pm 1 \}$. 
We compute the following congruences modulo $p$: $$\begin{aligned} \mathop{\mathrm{{\rm Rec}}}(g_p,g_q) &\equiv \mathop{\mathrm{{\rm Rec}}}((x-1)^{p-1},g_q) \qquad\text{(by \eqref{eq:gpmodp} and Proposition~\ref{prop:rec mod p})} \notag\\ &\equiv \mathop{\mathrm{{\rm Res}}}((x-2)^{\frac{p-1}{2}},g^\#_q) \qquad\text{(by \eqref{eq:special sharp} and the definition of the reciprocant)} \notag\\ &= g_q^\#(2)^{\frac{p-1}{2}} \qquad\text{(by (RES1))} \notag\\ &= q^{\frac{p-1}{2}} \qquad\text{(by \eqref{eq:gnsharp2})} \notag\\ &\equiv \left( \frac{q}{p} \right) \pmod{p} \qquad\text{(by Euler's criterion).} \notag\end{aligned}$$ Since $\mathop{\mathrm{{\rm Rec}}}(g_p,g_q)$ and $\left( \frac{q}{p} \right)$ both belong to $\{ \pm 1 \}$, it follows that $$\label{eq:legendre1} \mathop{\mathrm{{\rm Rec}}}(g_p,g_q) = \left( \frac{q}{p} \right).$$ By symmetry, we also have $$\label{eq:legendre2} \mathop{\mathrm{{\rm Rec}}}(g_q,g_p) = \left( \frac{p}{q} \right).$$ The Law of Quadratic Reciprocity now follows from [\[eq:legendre1\]](#eq:legendre1){reference-type="eqref" reference="eq:legendre1"}, [\[eq:legendre2\]](#eq:legendre2){reference-type="eqref" reference="eq:legendre2"}, and (RES2). ◻ # The supplementary law We can use a similar argument to prove the supplementary law characterizing $\left( \frac{2}{p} \right)$ when $p$ is an odd prime. Actually, it turns out to be more straightforward to establish a formula for $\left( \frac{-2}{p} \right)$. By Euler's criterion, we have $$\left( \frac{-2}{p} \right) = \left( \frac{-1}{p} \right) \cdot \left( \frac{2}{p} \right) = (-1)^{\frac{p-1}{2}} \left( \frac{2}{p} \right),$$ so the supplementary law is equivalent to: **Theorem 10**. *If $p$ is an odd prime then $\left( \frac{-2}{p} \right) = 1$ if $p \equiv 1$ or $3 \pmod{8}$ and $\left( \frac{-2}{p} \right) = -1$ if $p \equiv 5$ or $7 \pmod{8}$.* Instead of the polynomial $g_2(x) = x+1$, we will use the cyclotomic polynomial $\Phi_4(x) = x^2 + 1$. 
Note that $\Phi_4$ is a reciprocal polynomial of even degree, with $\Phi_4^\#(x) = x$. **Proposition 11**. *If $n$ is an odd positive integer, then ${\rm Rec}(\Phi_4,g_n)$ is equal to 1 if $n$ is 1 or 3 (mod 8) and $-1$ if $n$ is 5 or 7 (mod 8).* *Proof.* We have $$\mathop{\mathrm{{\rm Rec}}}(\Phi_4,g_n) = \mathop{\mathrm{{\rm Res}}}(x, g_n^\#) = g_n^\#(0),$$ so it suffices to evaluate $g_n^\#(0)$. This is a straightforward but tedious calculation given [\[eq:Chebyshev transform\]](#eq:Chebyshev transform){reference-type="eqref" reference="eq:Chebyshev transform"}, which implies that $$g_n^\#(0) = g_n(i) i^{-\frac{n-1}{2}} = \frac{i^n - 1}{i-1} i^{-\frac{n-1}{2}},$$ where $i^2 = -1 \in {\mathbb C}$. Alternatively, recall from Remark [Remark 9](#rmk:recursion for g){reference-type="ref" reference="rmk:recursion for g"} that for $n \geq 5$ odd we have $$g_n^\#(x) = x g_{n-2}^\#(x) - g_{n-4}^\#(x).$$ From this, a simple inductive argument shows that $g_n^\#(0) = 1$ if $n$ is 1 or 3 (mod 8) and $-1$ if $n$ is 5 or 7 (mod 8). ◻ *Proof of Theorem [Theorem 10](#thm:QRsupplement){reference-type="ref" reference="thm:QRsupplement"}.* Let $p$ be an odd prime. 
We compute: $$\begin{aligned} \mathop{\mathrm{{\rm Rec}}}(\Phi_4,g_p) &\equiv \mathop{\mathrm{{\rm Rec}}}(\Phi_4,(x-1)^{p-1}) \qquad\text{(by \eqref{eq:gpmodp} and Proposition~\ref{prop:rec mod p})} \notag\\ &\equiv \mathop{\mathrm{{\rm Res}}}(x,(x-2)^{\frac{p-1}{2}}) \qquad\text{(by \eqref{eq:special sharp} and the definition of the reciprocant)} \notag\\ &= (-2)^{\frac{p-1}{2}} \qquad\text{(by (RES1))} \notag\\ &\equiv \left( \frac{-2}{p} \right) \pmod{p} \qquad\text{(by Euler's criterion).} \notag\end{aligned}$$ Since $\mathop{\mathrm{{\rm Rec}}}(\Phi_4,g_p)$ and $\left( \frac{-2}{p} \right)$ both belong to $\{ \pm 1 \}$, we have $\mathop{\mathrm{{\rm Rec}}}(\Phi_4,g_p) = \left( \frac{-2}{p} \right)$, which implies the desired result via Proposition [Proposition 11](#prop:Res 2){reference-type="ref" reference="prop:Res 2"}. ◻ # Related work {#sec:RelatedWork} The proof of Quadratic Reciprocity given here is closely related to several existing arguments. The earliest reference we're aware of for a proof of quadratic reciprocity based on resultants of cyclotomic polynomials is J.-Y. Mérindol's paper [@Merindol], which was published in an obscure French higher education journal called L'Ouvert. A similar proof appears to have been independently discovered by Hambleton and Scharaschkin in [@Hambleton-Scharaschkin]. We learned of the basic argument behind these papers from Antoine Chambert-Loir's blog post [@Chambert-Loir]. The main new ingredient in the present paper is a systematic use of Proposition [Proposition 4](#prop:res square){reference-type="ref" reference="prop:res square"} and the quantity we've dubbed the reciprocant. As far as we know, our arguments proving the supplemental law (Theorem [Theorem 10](#thm:QRsupplement){reference-type="ref" reference="thm:QRsupplement"}) are also new. Our treatment of resultants was inspired by a paper of Barnett [@Barnett]. 
Our proof of Proposition [Proposition 6](#prop:Res g){reference-type="ref" reference="prop:Res g"} makes use of the Euclidean algorithm and property (RES4) of resultants; this approach is also used, for example, in [@FHR]. Although we have not seen Proposition [Proposition 4](#prop:res square){reference-type="ref" reference="prop:res square"} explicitly stated in a published paper, it is mentioned without proof in a Math Overflow post by Denis Serre [@Serre]. The main ingredients in the proof of Proposition [Proposition 4](#prop:res square){reference-type="ref" reference="prop:res square"} are also contained in the proof of [@Loper-Werner Theorem 3.4]. The first published work we're aware of that computes the resultant of two cyclotomic polynomials is F. E. Diederichsen's paper [@Diederichsen]. Diederichsen's results were extended, and his proofs simplified, in Apostol's paper [@Apostol]. Some other papers computing resultants of Fibonacci--Lucas type polynomials include [@Merindol; @Hambleton-Scharaschkin; @Loper-Werner; @FHR]. As noted in [@Hambleton-Scharaschkin Section 3], the key step underlying our proof of Quadratic Reciprocity, which is identifying the Legendre symbol with a resultant, is closely related to one of Eisenstein's classical proofs [@LemmermeyerBook Chapter 8.1]. There are also close connections to the more recent proof of Swan [@Swan]. The proof given in this paper is also closely related to the proof in the author's blog post [@BakerBlog]. A resultant-based approach to quadratic reciprocity in the function field case is given in [@Clark-Pollack]. See Section 3.4 of *loc. cit.* for remarks about other proofs of the Law of Quadratic Reciprocity which ultimately boil down (either explicitly or in disguise) to property (RES2) of resultants. # Resultants {#AppendixA} Let $R$ be a ring and let $f,g \in R[x]$ be monic polynomials. 
Inspired by an observation of Barnett [@Barnett], we define the *resultant* of $f$ and $g$ to be $$\mathop{\mathrm{{\rm Res}}}(f,g) := {\rm det} \left( g(C_f) \right) \in R,$$ where $C_f$ is the *companion matrix* of $f(x) = a_0 + a_1 x + \cdots + a_{m-1}x^{m-1} + x^m$: $$C_f := \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & 0 & \cdots & 0 & -a_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -a_{m-1} \\ \end{pmatrix}$$ We assume for the rest of this section that $R$ is an integral domain with fraction field $K$. We will use the following two well-known facts from linear algebra: - (LA1) The characteristic polynomial of $C_f$ over $K$ is $f$ (cf. [@Liesen-Mehrmann Lemma 8.4]). - (LA2) If an $m \times m$ matrix $A$ over $K$ has characteristic polynomial $f$, and if $f$ factors over some extension field $L$ of $K$ as $f(x) = (x-\lambda_1)\cdots (x-\lambda_m)$, then the characteristic polynomial of $g(A)$ is $(x - g(\lambda_1))\cdots (x - g(\lambda_m))$. (**Proof:** By [@Liesen-Mehrmann Theorem 14.17], $A$ is similar to an upper triangular matrix $B$ with $\lambda_1,\ldots,\lambda_m$ on the diagonal. The diagonal entries of $g(B)$ are $g(\lambda_1),\ldots,g(\lambda_m)$, and by [@Liesen-Mehrmann Theorem 8.12] the characteristic polynomial of $g(A)$ is equal to that of $g(B)$.) Assume $f$ splits into linear factors over $R$ as $f(x) = (x-\alpha_1)\cdots (x-\alpha_m)$. Then by (LA1) and (LA2), the characteristic polynomial of $g(C_f)$ over $K$ is $$(x - g(\alpha_1))\cdots (x - g(\alpha_m)).$$ It follows that the determinant of $g(C_f)$ is $\prod_{i=1}^m g(\alpha_i)$, which proves (RES1). 
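The determinant definition of the resultant is directly computable. Here is a sketch, assuming the `sympy` library (the helpers `companion` and `res` are ours), which builds $C_f$, evaluates $g(C_f)$, and compares $\det(g(C_f))$ with sympy's built-in resultant:

```python
# Res(f, g) = det(g(C_f)) for the companion matrix C_f of a monic f.
# (Assumes sympy; the helper names are ours.)
from sympy import Poly, resultant, symbols, zeros

x = symbols('x')

def companion(f):
    c = Poly(f, x).all_coeffs()[::-1]  # a_0, ..., a_{m-1}, 1
    m = len(c) - 1
    C = zeros(m)
    for i in range(1, m):
        C[i, i - 1] = 1                # subdiagonal of ones
    for i in range(m):
        C[i, m - 1] = -c[i]            # last column: -a_0, ..., -a_{m-1}
    return C

def res(f, g):
    C = companion(f)
    gC = zeros(C.rows)
    for k, coef in enumerate(Poly(g, x).all_coeffs()[::-1]):
        gC += coef * C**k              # evaluate g at the matrix C_f
    return gC.det()

f = x**2 - 3*x + 2                     # roots 1 and 2
g = x**2 + 1
assert res(f, g) == 10                 # g(1)*g(2) = 2*5
assert res(f, g) == resultant(f, g, x)
```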
Moreover, if $g(x) = (x-\beta_1)\cdots (x-\beta_n)$ with all $\beta_j \in R$ then $$\label{eq:symmetric product for res} \mathop{\mathrm{{\rm Res}}}(f,g) = \prod_{i,j} (\alpha_i - \beta_j) \in R.$$ If we view the coefficients of $f$ and $g$ as indeterminates, the expression ${\rm det} \left( g(C_f) \right)$ is a polynomial with integer coefficients in these $m+n$ variables. In other words, there is a multivariate polynomial $S_{m,n} \in {\mathbb Z}[x_0,x_1,\ldots,x_{m-1},y_0,y_1,\ldots,y_{n-1}]$ such that for every ring $R$ and every pair of monic polynomials $f(x) = a_0 + a_1 x + \cdots + a_{m-1} x^{m-1} + x^m$ and $g(x) = b_0 + b_1 x + \cdots + b_{n-1} x^{n-1} + x^n$ in $R[x]$, $${\rm Res}(f,g) = S_{m,n}(a_0,a_1,\ldots,a_{m-1},b_0,b_1,\ldots,b_{n-1}).$$ The 'functoriality' relation (RES3) follows easily from this observation. By (RES3) and the fact that $R$ is an integral domain, we may replace $R$ by a splitting field $L$ for $fg$ over the fraction field $K$ of $R$. The identity (RES2) then follows immediately from [\[eq:symmetric product for res\]](#eq:symmetric product for res){reference-type="eqref" reference="eq:symmetric product for res"}. In the same way, we can reduce the proof of (RES4) to the case where $f(x) = (x-\alpha_1)\cdots (x-\alpha_m)$ with all $\alpha_i \in R$. Using (RES1), we compute: $$\begin{aligned} {\rm Res}(f(x),r(x)) &= \prod_{i=1}^m r(\alpha_i) \\ &= \prod_{i=1}^m \left( g(\alpha_i) - f(\alpha_i)q(\alpha_i) \right) \\ &= \prod_{i=1}^m g(\alpha_i) \\ &= {\rm Res}(f(x),g(x)),\end{aligned}$$ which proves (RES4). # The trace polynomial Our primary goal in this Appendix is to prove: **Proposition 12**. *Suppose $R$ is a ring and $g \in R[x]$ is a reciprocal polynomial of even degree $2m$. Then there is a unique polynomial $h(x) \in R[x]$ of degree $m$ such that $g(x) = x^m h(x + \frac{1}{x})$.* The following proof was suggested by Darij Grinberg. *Proof.* We first prove the existence of $h(x)$.
This will be done by induction on $m$. The base case $m = 0$ is clear. For the induction step, let $g(x) = a_0 + a_1 x + \cdots + a_{2m} x^{2m}$ be a reciprocal polynomial of degree $2m$; in particular, $a_{2m} = a_0$. The difference $g(x) - a_0 (1 + x^2)^m$ is then a reciprocal polynomial whose constant term and $x^{2m}$-coefficient both vanish, so $\tilde{g}(x) := \left( g(x) - a_0 (1 + x^2)^m \right) / x$ is a reciprocal polynomial of degree $2(m-1)$. By the inductive hypothesis, $\tilde{g}(x) = x^{m-1} \tilde{h}(x + 1/x)$ for some polynomial $\tilde{h}(x)$ of degree $m-1$. Setting $h(x) = a_0 x^m + \tilde{h}(x)$ yields $g(x) = x^m h(x + 1/x)$, as desired. This establishes the existence of $h$. The uniqueness of $h$ follows by reversing the existence argument. More formally, we again proceed by induction on $m$. The base case $m=0$ is obvious. For the induction step, note that the equation $g(x) = x^m h(x + 1/x)$ implies that the $x^m$-coefficient of $h(x)$ must be $a_0$. Let $\tilde{h}(x) = h(x) - a_0 x^m$, which has degree $m-1$, and let $\tilde{g}(x) = \left( g(x) - a_0 (1 + x^2)^m \right) / x$, which is reciprocal of degree $2(m-1)$. Then $\tilde{g}(x) = x^{m-1} \tilde{h}(x + 1/x)$, and by the inductive hypothesis $\tilde{h}(x)$ is uniquely determined by $\tilde{g}(x)$. It follows that $h$ is uniquely determined by $g$. ◻ Following the terminology of [@Gross-McMullen §2.1], we define the *trace polynomial* $g^\#$ of $g$ to be the polynomial $h$ appearing in Proposition [Proposition 12](#prop:hpoly2){reference-type="ref" reference="prop:hpoly2"}. *Remark 13*. The following alternative proof of the existence portion of Proposition [Proposition 12](#prop:hpoly2){reference-type="ref" reference="prop:hpoly2"} was suggested by Franz Lemmermeyer, and provides an explicit recursion which will be useful in the next remark. Write $g(x) = a_0 + a_1 x + \cdots + a_{2m} x^{2m}$ with $a_i = a_{2m-i}$ for all $0 \leq i \leq m$ and $h(x) = b_0 + b_1 x + \cdots + b_m x^m$. We wish to prove that we can uniquely solve for the coefficients of $h$ in terms of the coefficients of $g$.
In the Laurent polynomial ring $R[x, \frac{1}{x}]$, we have the identity $$x^{-m}g(x) = a_0(x^m + x^{-m}) + a_1(x^{m-1} + x^{-(m-1)}) + \cdots + a_{m-1} (x + x^{-1}) + a_m,$$ so it suffices to prove the result for the special Laurent polynomials $f_n(x) := x^n + x^{-n}$ for all $n \geq 0$. In other words, we want to prove that for each $n \geq 0$, there is a polynomial $h_n(x) \in R[x]$ of degree $n$ such that $f_n(x) = h_n(x + x^{-1})$. We prove existence of the polynomials $h_n(x)$ by induction on $n$. The result is trivial for $n=0,1$, so we may assume that $n\geq 2$ and that the result is true for polynomials of degree at most $n-1$. A simple calculation gives $$f_n(x) = (x + x^{-1}) f_{n-1}(x) - f_{n-2}(x).$$ Therefore, if we set $h_0(x) = 2$, $h_1(x) = x$, and $$\label{eq:hnrecursion} h_n(x) = xh_{n-1}(x) - h_{n-2}(x),$$ we will have the desired identity $f_n(x) = h_n(x + x^{-1})$. *Remark 14*. For $n\geq 0$, define $g_{2n+1}(x) = \sum_{k=0}^{2n} x^k$ as in [\[eq:gn\]](#eq:gn){reference-type="eqref" reference="eq:gn"}. Then with $f_k(x)$ and $h_k(x)$ as in Remark [Remark 13](#rem:Lemmermeyer){reference-type="ref" reference="rem:Lemmermeyer"}, for $n\geq 1$ we have $x^{-n} g_{2n+1} = 1 + \sum_{k=1}^n f_k(x)$, and thus $g^\#_{2n+1}(x) = 1 + \sum_{k=1}^n h_k(x)$. Since $g_1(x) = 1$ and $g_3(x)=1+x+x^2$, we have $g_1^\#(x) = 1$ and $g_3^\#(x) = x+1$.
Moreover, since $h_k(x) = x h_{k-1}(x) - h_{k-2}(x)$ for $k \geq 2$ by [\[eq:hnrecursion\]](#eq:hnrecursion){reference-type="eqref" reference="eq:hnrecursion"}, we have for $n \geq 2$ that $$x g^\#_{2n-1}(x) - g^\#_{2n-3}(x) = x + \sum_{k=1}^{n-1} \left( xh_k(x) - h_{k-1}(x) \right) - 1 + h_0(x) = 1 + x + \sum_{k=2}^n h_k(x) = g^\#_{2n+1}(x).$$ In other words, for all odd integers $n \geq 5$ we have $$g_n^\#(x) = x g_{n-2}^\#(x) - g_{n-4}^\#(x).$$ *Proof of Lemma [Lemma 2](#lem:computing the reciprocant){reference-type="ref" reference="lem:computing the reciprocant"}.* To see that $g(x) = \prod_{i=1}^m (x - \alpha_i)(x - \alpha_i^{-1})$ is reciprocal, we compute: $$\begin{aligned} x^{2m}g(\frac{1}{x}) & = x^{2m} \prod (\frac{1}{x} - \alpha_i)(\frac{1}{x} - \frac{1}{\alpha_i}) \\ &= \prod (1 - \alpha_i x)(1 - \frac{1}{\alpha_i} x) \\ &= (-1)^m \prod \alpha_i (x - \frac{1}{\alpha_i}) \cdot (-1)^m \prod \frac{1}{\alpha_i} (x - \alpha_i) \\ &= \prod (x - \frac{1}{\alpha_i})(x - \alpha_i) \\ &= g(x).\end{aligned}$$ To prove [\[eq:product formula for gsharp\]](#eq:product formula for gsharp){reference-type="eqref" reference="eq:product formula for gsharp"}, the case $m=1$ can be handled by a simple computation: setting $\alpha=\alpha_1$ and $a = \alpha + \alpha^{-1}$, we have $g(x) = (x - \alpha)(x - \alpha^{-1}) = x^2 - ax + 1 = x(x + \frac{1}{x} - a)$, and thus $g^\#(x) = x - a$. The general case follows immediately from the special case $m=1$: if $a_j = \alpha_j + \alpha_j^{-1}$ then $g^\#(x) = \prod_{j=1}^m (x - a_j)$. ◻ As mentioned in the text, we define the *reciprocant* $\mathop{\mathrm{{\rm Rec}}}(f,g)$ of two reciprocal polynomials of even degree to be $\mathop{\mathrm{{\rm Res}}}(f^\#,g^\#)$.
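The recursion of Remark 13 translates directly into code. The following Python sketch (again our own illustration; names such as `trace_poly` are not from the paper) computes the trace polynomial $g^\#$ of a reciprocal polynomial of even degree $2m$ by expanding $x^{-m}g(x)$ in the Laurent polynomials $f_k$ and substituting $h_k$:

```python
from fractions import Fraction

def h_polys(m):
    # h_0 = 2, h_1 = x, h_n = x*h_{n-1} - h_{n-2}; coefficient lists, low degree first.
    hs = [[2], [0, 1]]
    for n in range(2, m + 1):
        shifted = [0] + hs[-1]                      # multiply h_{n-1} by x
        prev = hs[-2] + [0] * (len(shifted) - len(hs[-2]))
        hs.append([a - b for a, b in zip(shifted, prev)])
    return hs[: m + 1]

def trace_poly(g):
    # g = [a_0, ..., a_{2m}] with a_i = a_{2m-i}; returns g^# of degree m,
    # using x^{-m} g(x) = a_m + sum_{k=1}^m a_{m-k} f_k(x) and f_k = h_k(x + 1/x).
    assert len(g) % 2 == 1 and g == g[::-1], "g must be reciprocal of even degree"
    m = (len(g) - 1) // 2
    hs = h_polys(m)
    out = [0] * (m + 1)
    out[0] = g[m]                                   # the constant term a_m
    for k in range(1, m + 1):
        for j, c in enumerate(hs[k]):
            out[j] += g[m - k] * c
    return out

def peval(p, x):
    # Evaluate a coefficient list at x.
    return sum(c * x**k for k, c in enumerate(p))
```

For the polynomials $g_{2n+1}$ of Remark 14 this recovers $g_3^\#(x) = x + 1$ and $g_5^\#(x) = x^2 + x - 1$, and one can check the defining identity $g(x) = x^m g^\#(x + \tfrac{1}{x})$ at sample rational points; the reciprocant $\mathop{\mathrm{Rec}}(f,g)$ is then obtained by feeding the two trace polynomials to any resultant routine.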
*Proof of Proposition [Proposition 5](#prop:rec mod p){reference-type="ref" reference="prop:rec mod p"}.* It suffices, by (RES2), to prove the following statement: if $g_1,g_2 ,h \in {\mathbb Z}[x]$ are reciprocal of even degree and $n$ is a positive integer such that $g_1 \equiv g_2 \pmod{n}$, then $\mathop{\mathrm{{\rm Rec}}}(g_1,h) \equiv \mathop{\mathrm{{\rm Rec}}}(g_2,h) \pmod{n}$. By Proposition [Proposition 12](#prop:hpoly2){reference-type="ref" reference="prop:hpoly2"}, we have $g_1^\#(x) \equiv g_2^\#(x) \pmod{n}$. Applying (RES3) to the natural ring homomorphism $\phi : {\mathbb Z}\to {\mathbb Z}/n{\mathbb Z}$ shows that $\mathop{\mathrm{{\rm Res}}}(g^\#_1,h^\#) \equiv \mathop{\mathrm{{\rm Res}}}(g^\#_2,h^\#) \pmod{n}$ as desired. ◻ [^1]: We thank Antoine Chambert-Loir for pointing us to Mérindol's paper [@Merindol]. Thanks also to Darij Grinberg, Franz Lemmermeyer, and Evan O'Dorney for helpful feedback on an earlier version of this paper. The author was supported by NSF grant DMS-2154224 and a Simons Fellowship in Mathematics. [^2]: This is not a standard term; we chose the name both because it involves reciprocal polynomials and because of its relation to quadratic reciprocity. [^3]: Reciprocal means that the coefficients read the same backwards and forwards; see Section [2.2](#sec:recip){reference-type="ref" reference="sec:recip"} for more details. [^4]: We restrict ourselves to resultants of *monic* polynomials over an integral domain here, as (a) it's the only case we need and (b) the identities (RES1)-(RES4) look cleaner in the monic case. [^5]: Here $\phi(f) \in R'[x]$ denotes the image of $f \in R[x]$ under the homomorphism $R[x] \to R'[x]$ induced by $\phi$, and similarly for $\phi(g)$. [^6]: All rings in this paper will be nonzero commutative rings with identity.
--- abstract: | We describe the compact objects in the $\infty$-category of $\mathcal C$-valued sheaves $\operatorname{Shv}(X,\mathcal C)$ on a hypercomplete locally compact Hausdorff space $X$, for $\mathcal C$ a compactly generated stable $\infty$-category. When $X$ is a non-compact connected manifold and $\mathcal C$ is the unbounded derived category of a ring, our result recovers a result of Neeman. Furthermore, for $X$ as above and $\mathcal C$ a nontrivial compactly generated stable $\infty$-category, we show that $\operatorname{Shv}(X,\mathcal C)$ is compactly generated if and only if $X$ is totally disconnected. author: - Oscar Harr bibliography: - sources.bib title: Compact sheaves on a locally compact space --- The aim of this note is to clarify and expand on a point made by Neeman [@neeman2001]. Let $M$ be a non-compact connected manifold, and let $\operatorname{Shv}(M,\mathcal D(\mathbb Z))$ denote the unbounded derived category of sheaves of abelian groups on $M$. Neeman shows that, up to equivalence, the only compact object in $\operatorname{Shv}(M,\mathcal D(\mathbb Z))$ is the zero sheaf. This implies that $\operatorname{Shv}(M,\mathcal D(\mathbb Z))$ is very far from compactly generated. Nevertheless, it follows from Lurie's covariant Verdier duality theorem [@HA Thm 5.5.5.1] that $\operatorname{Shv}(M,\mathcal D(\mathbb Z))$ satisfies a related smallness condition: it is *dualizable* in the symmetric monoidal $\infty$-category $\mathcal P\mathrm r_{\mathrm{stab}}^{\otimes}$ of stable presentable $\infty$-categories, which holds more generally if $M$ is replaced with any locally compact Hausdorff space $X$. Although every compactly generated presentable stable $\infty$-category is dualizable [@SAG Prop D.7.2.3], Neeman's example thus shows that the converse is false. 
The existence of this large and interesting class of stable presentable $\infty$-categories that are dualizable but not compactly generated forms part of the motivation behind Efimov's continuous extensions of localizing invariants [@efimov-ICM], see also [@hoyois]. This note is concerned with the following two questions about the $\infty$-category of $\mathcal C$-valued sheaves on a general locally compact Hausdorff space $X$, where $\mathcal C$ is some compactly generated stable $\infty$-category (e.g. the unbounded derived $\infty$-category of a ring or the $\infty$-category of spectra): 1. How rare is it for $\operatorname{Shv}(X,\mathcal C)$ to be compactly generated? 2. How far is $\operatorname{Shv}(X,\mathcal C)$ from being compactly generated in general? With a relatively mild completeness assumption on $X$ (see Section [1](#sec:hypercompleteness){reference-type="ref" reference="sec:hypercompleteness"}), we answer question (2) by showing that a $\mathcal C$-valued sheaf $\mathscr F$ on $X$ is compact as an object of $\operatorname{Shv}(X,\mathcal C)$ if and only if it has compact support, compact stalks, and is locally constant (Theorem [Theorem 5](#thm:cpt-sheaves){reference-type="ref" reference="thm:cpt-sheaves"}). Thus if $X$ is for instance a CW complex, the subcategory of compact objects $\operatorname{Shv}(X,\mathcal C)^\omega$ remembers only the *homotopy type* of the compact path components of $X$, and it is therefore impossible to reconstruct the entire sheaf category $\operatorname{Shv}(X,\mathcal C)$, or equivalently the homeomorphism type of $X$, from this information. In his 2022 ICM talk, Efimov mentions that the $\infty$-category of $\mathcal D(R)$-valued sheaves on a locally compact Hausdorff space $X$ 'is almost never compactly generated (unless $X$ is totally disconnected, like a Cantor set)' [@efimov-ICM slide 13]. 
As a corollary to our description of the compact objects of $\operatorname{Shv}(X,\mathcal C)$, we verify--modulo the same completeness assumption mentioned above--that indeed the *only* locally compact Hausdorff spaces $X$ with $\operatorname{Shv}(X,\mathcal C)$ compactly generated, for some nontrivial $\mathcal C$, are the totally disconnected ones (Proposition [Proposition 8](#prop:when-cpt){reference-type="ref" reference="prop:when-cpt"}), thereby answering question (1). ## Notation and conventions {#notation-and-conventions .unnumbered} Throughout this note, we use the theory of higher categories and higher algebra, an extensive textbook account of which can be found in the work of Lurie [@HTT; @HA; @SAG]. We will also make frequent use of the six-functor formalism for derived sheaves on locally compact Hausdorff spaces, as described classically by [@verdier1965dualite] and with general coefficients by [@volpe23]. For convenience, we assume the existence of an uncountable Grothendieck universe $\mathcal U$ of *small* sets and further Grothendieck universes $\mathcal U'$ and $\mathcal U''$ of *large* and *very large* sets respectively, such that $\mathcal U\in\mathcal U'\in\mathcal U''$. 'Topological space' always implicitly refers to a small topological space, and similarly with 'spectrum'. On the other hand, an '$\infty$-category' is an '$(\infty ,1)$-category' is a quasicategory, which unless otherwise stated is large. We let $\widehat{\mathcal C\mathrm{at}}_\infty$ denote the very large $\infty$-category of (large) $\infty$-categories. Because we are dealing with sheaves on topological spaces, we deem it best to make a clear distinction between actual topological spaces on the one hand, and on the other hand the objects of the $\infty$-category $\mathcal S$ of 'spaces' in the sense of Lurie. Following the convention suggested in [@cesnavicius-scholze], we will refer to the latter as *anima*. 
Given an $\infty$-category $\mathcal C$, we let $\mathcal C^{\omega}\subseteq\mathcal C$ denote the subcategory spanned by the compact objects. Recall that an object $C\in\mathcal C$ is said to be *compact* if the functor of large anima $D\mapsto\operatorname{Map}_{\mathcal C}(C,D)$ preserves small filtered colimits. ## Acknowledgements {#acknowledgements .unnumbered} I was partially supported by the Danish National Research Foundation through the Copenhagen Centre for Geometry and Topology (DNRF151). I am grateful to Marius Verner Bach Nielsen for comments on the draft, and to Jesper Grodal and Maxime Ramzi for valuable discussions about the arguments appearing in this note, and to the latter for pointing out that the original version of Lemma [Lemma 9](#lem:small-compare){reference-type="ref" reference="lem:small-compare"} was false in the generality in which I had stated it. # $\mathcal C$-hypercomplete spaces {#sec:hypercompleteness} Given an $\infty$-category $\mathcal C$ and a topological space $X$, we let $\operatorname{Shv}(X,\mathcal C)$ denote the $\infty$-category of $\mathcal C$-valued sheaves on $X$ in the sense of Lurie [@HTT]. That is, $\operatorname{Shv}(X,\mathcal C)$ is the full subcategory of the presheaf $\infty$-category $\operatorname{Fun}(\operatorname{Open}(X)^{\text{op}},\mathcal C)$ consisting of presheaves $\mathscr F$ satisfying the *sheaf condition*: for any open set $U\subseteq X$ and any open cover $\lbrace U_i\to U\rbrace_{i\in I}$, the canonical map $$\mathscr F(U)\to\mathop{\mathrm{\operatorname{lim}}}_V\mathscr F(V)$$ is an equivalence, where $V$ ranges over open sets $V\subseteq U_i\subseteq X$, $i\in I$, considered as a poset under inclusion. When $\mathcal C = \mathcal S$ is the $\infty$-category of anima, we will abbreviate $\operatorname{Shv}(X) = \operatorname{Shv}(X,\mathcal S)$. **Remark 1**.
When $\mathcal C = \mathcal D(R)$ is the unbounded derived $\infty$-category of a ring, the $\infty$-category $\operatorname{Shv}(X,\mathcal D(R))$ is related to, but generally not the same as, the derived $\infty$-category $\mathcal D(\operatorname{Shv}(X,R))$ of the ordinary category of sheaves of $R$-modules on $X$, which is the object studied (via its homotopy category) by Neeman [@neeman2001]. However, they do coincide under the completeness assumption that we will impose on $X$ below, see [@scholze-6FF Prop 7.1]. Since this completeness assumption is verified when $X$ is a topological manifold, our results include those of Neeman. We are interested in topological spaces satisfying the following condition: **Definition 2**. A topological space $X$ is *$\mathcal C$-hypercomplete* if the stalk functors $x^*\colon\operatorname{Shv}(X,\mathcal C)\to\mathcal C$ are jointly conservative for $x$ ranging over the points of $X$. The reason for our choice of terminology is that $X$ is $\mathcal S$-hypercomplete if and only if the $0$-localic $\infty$-topos $\operatorname{Shv}(X)$ has enough points, which is equivalent to $\operatorname{Shv}(X)$ being hypercomplete as an $\infty$-topos by Claim (6) in [@HTT § 6.5.4]. (This is *not* true for arbitrary $\infty$-topoi, i.e. there are hypercomplete $\infty$-topoi that do not have enough points.) This subtlety, whereby a morphism of sheaves may fail to be an equivalence even though it is so on all stalks, does not occur for non-derived sheaves: the $1$-topos $\operatorname{Shv}(X,\mathcal S_{\leq 0})$ of sheaves of sets on a topological space $X$ *always* has enough points. We refer to [@HTT § 6.5.4] for a discussion of why it is often preferable to work with non-hypercomplete sheaves, rather than, say, imposing hypercompleteness by replacing $\operatorname{Shv}(X)$ with its hypercompletion $\operatorname{Shv}(X)^\wedge$. 
Several classes of topological spaces are known to be $\mathcal S$-hypercomplete, and hence also $\mathcal C$-hypercomplete for any compactly generated $\infty$-category $\mathcal C$.[^1] Although only the first two are relevant for this note, here is a list of some classes of topological spaces that have this property: - paracompact spaces that are locally of covering dimension $\leq n$ for some fixed $n$ [@HTT Cor 7.2.1.12], - arbitrary CW complexes [@hoyois-cw], - finite-dimensional Heyting spaces [@HTT Rem 7.2.4.18], and - Alexandroff spaces, since the $\infty$-topos of sheaves associated to an Alexandroff space is equivalent to a presheaf $\infty$-topos. # When is a sheaf compact? {#sec:when-compact} Let $\mathcal C$ be a compactly generated stable $\infty$-category, e.g. the unbounded derived category $\mathcal D(R)$ of a ring $R$ or the $\infty$-category of spectra $\mathrm{Sp}$. Given a sheaf $\mathscr F\in\operatorname{Shv}(X,\mathcal C)$, we define the *support* of $\mathscr F$ by $$\operatorname{supp}\mathscr F = \lbrace x\in X\mid \mathscr F_x\not\simeq 0 \rbrace\subseteq X.$$ As in [@neeman2001], our study of the compact objects of $\operatorname{Shv}(X,\mathcal C)$ proceeds from an analysis of their supports. By slightly adapting the proof of [@neeman2001 Lem 1.4], we get the following description of the support of a compact sheaf: **Lemma 3**. *Let $X$ be a $\mathcal C$-hypercomplete locally compact Hausdorff space and let $\mathscr F\in\operatorname{Shv}(X,\mathcal C)^\omega$. Then the support $\operatorname{supp}\mathscr F$ is compact.* *Proof.* We first show that $\operatorname{supp}\mathscr F$ is contained in a compact subset of $X$. 
Consider the canonical map $$\label{eq:precpt-filter} \mathop{\mathrm{\operatorname{colim}}}_U (j_U)_!j_U^*\mathscr F \to\mathscr F,$$ where the colimit ranges over the poset of precompact open sets ordered by the rule $U\leq V$ if $\overline U\subseteq V$, and for each such $U$ we have denoted by $j_U\colon U\hookrightarrow X$ the inclusion. Since precompact open sets form a basis for the topology on $X$, the map [\[eq:precpt-filter\]](#eq:precpt-filter){reference-type="eqref" reference="eq:precpt-filter"} is an equivalence of sheaves. Let $\phi\colon\mathscr F\xrightarrow{\sim} \mathop{\mathrm{\operatorname{colim}}}_U (j_U)_!(j_U)^*\mathscr F$ be some choice of inverse. Any finite union of precompact open sets is again precompact open, so the poset of precompact open sets is filtered. Hence compactness of $\mathscr F$ implies that $\phi$ factors through $(j_U)_!j_U^*\mathscr F$ for some precompact open $U$, and it follows that $\operatorname{supp}\mathscr F$ is contained in a compact subset $\overline U\subseteq X$, as claimed. By the above, it remains only to be seen that $\operatorname{supp}\mathscr F$ is closed, or equivalently that its complement $X\setminus\operatorname{supp}\mathscr F$ is open. Suppose $x\in X\setminus{\operatorname{supp}\mathscr F}$. Then we have a recollement fiber sequence $$j_!j^*\mathscr F\to\mathscr F\to i_*i^*\mathscr F,$$ where $j\colon X\setminus\lbrace x\rbrace\hookrightarrow X$ and $i\colon\lbrace x\rbrace\hookrightarrow X$ are the inclusions, and since $x\not\in\operatorname{supp}\mathscr F$ we have $j_!j^*\mathscr F\simeq\mathscr F$. Since $j_!$ is a fully faithful left adjoint, it reflects compact objects, and we conclude that $j^*\mathscr F$ is again compact. But then $j^*\mathscr F$ is supported on a compact subset of $X\setminus\lbrace x\rbrace$ by the above, which must be closed as a subset of $X$, and hence $x$ lies in the interior of $X\setminus\operatorname{supp}\mathscr F$ as desired. ◻ **Lemma 4**. 
*If $f\colon X\to Y$ is a proper map of locally compact Hausdorff spaces, then the pullback functor $f^*$ preserves compact objects. In particular, if $X$ is a compact Hausdorff space and $E\in\mathcal C^\omega$, then $E_X\in\operatorname{Shv}(X,\mathcal C)^\omega$, where $E_X$ denotes the constant sheaf at $E$.* *Proof.* Since $f$ is proper, the pullback $f^*$ is left adjoint to $f_*\simeq f_!$, which is itself left adjoint to $f^!$. Hence $f_*$ preserves colimits, and it follows that its left adjoint $f^*$ preserves compact objects. The statement about constant sheaves follows by taking $f$ to be the projection from $X$ to a point. ◻ Our main result is the following description of the compact objects in $\operatorname{Shv}(X,\mathcal C)$: **Theorem 5**. *Let $X$ be a $\mathcal C$-hypercomplete locally compact Hausdorff space. A sheaf $\mathscr F\in\operatorname{Shv}(X,\mathcal C)$ is compact if and only if* 1. *$\operatorname{supp}\mathscr F$ is compact;* 2. *$\mathscr F$ is locally constant; and* 3. *$\mathscr F_x\in\mathcal C^\omega$ for each $x\in X$.* In particular, note that conditions (i) and (ii) together imply that if $\mathscr F$ is compact, then the support of $\mathscr F$ must be a compact open subset of $X$. On the other hand, (iii) guarantees that if $\mathscr F$ is constant on $U\subseteq X$, say with value $E$, then $E\in\mathcal C^\omega$. *Proof.* 'Necessity.' Suppose we are given $\mathscr F\in\operatorname{Shv}(X,\mathcal C)^\omega$. Then $\mathscr F$ must satisfy (i) by Lemma [Lemma 3](#lem:cpt-supp){reference-type="ref" reference="lem:cpt-supp"} and (iii) by Lemma [Lemma 4](#lem:prop-pullback){reference-type="ref" reference="lem:prop-pullback"}, since the stalk $\mathscr F_x$ at $x\in X$ is the same as the pullback $i_x^*\mathscr F$ along the inclusion $i_x\colon\lbrace x\rbrace\hookrightarrow X$, which is always a proper map. It remains only to be seen that $\mathscr F$ is locally constant.
Fix a point $x\in X$, and let $i_x$ again denote the inclusion of this point into $X$. Let $E = i_x^*\mathscr F$ denote the stalk of $\mathscr F$ at $x$. By [@HTT Cor 7.1.5.6], there is an equivalence $\mathop{\mathrm{\operatorname{colim}}}_U\mathscr F(U)\simeq E$, where $U$ ranges over the poset of open neighborhoods of $x$. As $E$ is compact, this implies that $\mathscr F(U)\to E$ has a section for some $U$. Pick a precompact open neighborhood $W\ni x$ with $\overline W\subseteq U$, and let $i\colon\overline W\hookrightarrow X$ denote the inclusion. As the canonical map $\mathscr F(U)\to E$ factors through the restriction $\mathscr F(U)\to(i^*\mathscr F)(\overline W)\to\mathscr F(W)$, the map $(i^*\mathscr F)(\overline W)\to E$ also admits a section $E\to (i^*\mathscr F)(\overline W)$. Viewing the latter as a morphism from the constant presheaf with value $E$ to $i^*\mathscr F$, we get an induced map $\sigma\colon E_{\overline W}\to i^*\mathscr F$ of sheaves over $\overline W$ which by construction induces an equivalence of stalks at $x$. Here both $E_{\overline W}$ and $i^*\mathscr F$ are compact, so the cofiber $\mathscr Q = \operatorname{cofib}(\sigma )$ is again compact. But then $\operatorname{supp}\mathscr Q$ is compact, so $W'=W\setminus\operatorname{supp}\mathscr Q$ is open and $\mathscr Q_x\simeq 0$ so $x\in W'$. Furthermore, $\sigma$ restricts to an equivalence of sheaves on $W'$ by construction, so $\mathscr F\vert_{W'}$ is equivalent to the constant sheaf on $W'$ with value $E$, as desired. 'Sufficiency.' Let $i\colon\operatorname{supp}\mathscr F\hookrightarrow X$ denote the inclusion. Since $i$ is both proper and an open immersion, both $i_*\simeq i_!$ and $i^*\simeq i^!$ preserve and reflect compact objects. We may therefore assume that $X$ is compact, after possibly replacing it with $\operatorname{supp}\mathscr F$. 
Pick a finite collection of closed subsets $Z_i\subseteq X$, $i=1,\dots ,n$, such that $\mathscr F$ is constant in a neighborhood of each $Z_i$ and such that $X$ is covered by the interiors $Z_i^{\mathrm o}$. Descent (Corollary [Corollary 16](#cor:descent-lem){reference-type="ref" reference="cor:descent-lem"}) implies that the canonical functor $$\begin{aligned} &\operatorname{Shv}(X,\mathcal C) \\ &\qquad\to \displaystyle{ \mathop{\mathrm{\operatorname{lim}}}_{\Delta_{\leq n-1}}} \left( \begin{tikzcd}[ampersand replacement=\&,column sep=small] \operatorname{Shv}(\bigcap_1^nZ_i,\mathcal C) \arrow[r,shift left=10pt] \arrow[r,phantom,"\raisebox{1.5ex}{\vdots}"] \arrow[r,shift right=10pt] \&\cdots \arrow[r,shift left=10pt] \arrow[r,leftarrow,shorten <= 2pt,shorten >= 2pt,shift left=5pt] \arrow[r] \arrow[r,leftarrow,shorten <= 2pt,shorten >= 2pt,shift right=5pt] \arrow[r,shift right=10pt] \&\prod_{i,j}\operatorname{Shv}(Z_i\cap Z_j,\mathcal C) \arrow[r,shift left=5pt] \arrow[r,leftarrow,shorten <= 2pt,shorten >= 2pt] \arrow[r,shift right=5pt] \&\prod_i\operatorname{Shv}(Z_i,\mathcal C) \end{tikzcd} \right) \end{aligned}$$ is an equivalence. Write $I = \lbrace 1,\dots ,n\rbrace$ for short and put $Z_J = \bigcap_{j\in J}Z_j$ for each $J\subseteq I$. The canonical projection from $\operatorname{Shv}(X,\mathcal C)$ to $\operatorname{Shv}(Z_J,\mathcal C)$ is the restriction map. By construction, we have that for each $J\subseteq I$, the restriction $\mathscr F\vert_{Z_J}$ is constant with value a compact object, and hence compact as an object of $\operatorname{Shv}(Z_J,\mathcal C)$ by the preceding lemma. 
According to [@HTT Lem 6.3.3.6], the identity functor $\text{id}\colon\operatorname{Shv}(X,\mathcal C)\to\operatorname{Shv}(X,\mathcal C)$ is the colimit of a diagram $\Delta_{\leq n-1}\to\operatorname{Fun}(\operatorname{Shv}(X,\mathcal C),\operatorname{Shv}(X,\mathcal C))$ which sends the object $\lbrack k\rbrack\in\Delta_{\leq n-1}$ to the composition $$\begin{tikzcd} \operatorname{Shv}(X,\mathcal C)\arrow[r]\arrow[rr,bend left=20,"i_k^*",shorten <= 8pt ] &\displaystyle{\prod_{\substack{J\subseteq I, \\ |J|=k}}}\operatorname{Shv}(Z_J,\mathcal C) \arrow[r,phantom,"\simeq"] &\operatorname{Shv}\bigg(\displaystyle{\coprod_{\substack{J\subseteq I, \\ |J|=k}}} Z_J,\mathcal C\bigg)\arrow[r,"(i_k)_*"] & \operatorname{Shv}(X,\mathcal C), \end{tikzcd}$$ and so for any filtered system $\lbrace\mathscr G_\alpha\rbrace_{\alpha\in A}$, we find $$\begin{aligned} \operatorname{Map}(\mathscr F,\mathop{\mathrm{\operatorname{colim}}}_A\mathscr G_\alpha ) &\simeq\mathop{\mathrm{\operatorname{lim}}}_{\lbrack k\rbrack\in\Delta_{\leq n-1}}\operatorname{Map}(\mathscr F,(i_k)_*i_k^*\mathop{\mathrm{\operatorname{colim}}}\mathscr G_\alpha ) \\ &\simeq \mathop{\mathrm{\operatorname{lim}}}_{\lbrack k\rbrack\in\Delta_{\leq n-1}}\operatorname{Map}(i_k^*\mathscr F,\mathop{\mathrm{\operatorname{colim}}}_A i_k^*\mathscr G_\alpha ) \\ &\simeq\mathop{\mathrm{\operatorname{lim}}}_{\lbrack k\rbrack\in\Delta_{\leq n-1}}\mathop{\mathrm{\operatorname{colim}}}_A\operatorname{Map}(i_k^*\mathscr F, i_k^*\mathscr G_\alpha ) \\ &\simeq\mathop{\mathrm{\operatorname{colim}}}_A\mathop{\mathrm{\operatorname{lim}}}_{\lbrack k\rbrack\in\Delta_{\leq n-1}}\operatorname{Map}(i_k^*\mathscr F, i_k^*\mathscr G_\alpha ) \\ &\simeq\mathop{\mathrm{\operatorname{colim}}}_A\operatorname{Map}(\mathscr F,\mathscr G_\alpha ), \end{aligned}$$ where the third equivalence uses that the restriction $i_k^*\mathscr F$ is compact[^2] and the second-last equivalence uses that filtered colimits commute with finite limits in $\mathcal S$.
◻ As a corollary, we recover Neeman's result: **Corollary 6** (Neeman). *Let $M$ be a non-compact connected manifold. Then $\mathscr F\in\operatorname{Shv}(M,\mathcal C)^\omega$ if and only if $\mathscr F\simeq 0$.* In fact our result shows that the conclusion of Neeman's result holds more generally if $M$ is replaced by a $\mathcal C$-hypercomplete locally compact Hausdorff space $X$ whose quasicomponents are all non-compact. As a further corollary to our theorem, we can also describe the dualizable objects of, say, the $\infty$-category of $\text{Mod}_R$-valued sheaves on a $\text{Mod}_R$-hypercomplete locally compact Hausdorff space $X$, where $R$ is some $\mathbb E_\infty$-ring. As with the compact objects, the dualizable objects turn out to be very sparse: **Corollary 7**. *Let $\mathrm{Mod}_R^\otimes$ be the symmetric monoidal $\infty$-category of modules over an $\mathbb E_\infty$-ring $R$, and let $X$ be a $\mathrm{Mod}_R$-hypercomplete locally compact Hausdorff space. With respect to the induced symmetric monoidal structure on $\operatorname{Shv}(X,\mathrm{Mod}_R)$, a sheaf $\mathscr F\in\operatorname{Shv}(X,\mathrm{Mod}_R)$ is dualizable if and only if* 1. *$\mathscr F$ is locally constant, and* 2. *$\mathscr F_x$ is a perfect $R$-module[^3] for each $x\in X$.* *Proof.* 'Sufficiency.' Since $\operatorname{Shv}(X,\mathrm{Mod}_R)^\otimes$ is closed symmetric monoidal, it suffices to show that for each sheaf $\mathscr F$ satisfying the two conditions, the canonical map $$\label{eq:dual-test} \mathscr{H}om (\mathscr F,R_X)\otimes\mathscr F\to\mathscr{H}om (\mathscr F,\mathscr F)$$ is an equivalence, where $R_X$ is the constant sheaf at $R$ and $\mathscr{H}om ({-},{-})$ denotes the internal mapping object in $\operatorname{Shv}(X,\mathrm{Mod}_R)$.
For sufficiently small open subsets $U\subseteq X$, the restriction $\mathscr F\vert_U$ is equivalent to the constant sheaf $F_U = \pi^*F$ at a perfect $R$-module $F$, and since pullback--being symmetric monoidal--preserves dualizable sheaves, we find that [\[eq:dual-test\]](#eq:dual-test){reference-type="eqref" reference="eq:dual-test"} restricts to an equivalence on $U$. Since [\[eq:dual-test\]](#eq:dual-test){reference-type="eqref" reference="eq:dual-test"} is a morphism of sheaves which is locally an equivalence, it must be an equivalence, proving the claim. 'Necessity.' Assume that $\mathscr F$ is dualizable, and let $x\in X$ be some point. The condition on the stalks is immediate, since pullback preserves dualizable sheaves. We must show that $\mathscr F$ is locally constant in a neighborhood of $x$. Pick a precompact open neighborhood $U\ni x$. Then $\mathscr F\vert_{\overline U}$ is again dualizable, and since the monoidal unit $R_{\overline U} = \pi^*R\in\operatorname{Shv}(\overline U,\mathrm{Mod}_R)$ is compact, it follows that $\mathscr F\vert_{\overline U}$ is compact as an object of $\operatorname{Shv}(\overline U,\mathrm{Mod}_R)$. But then the previous theorem implies that it must be locally constant on $\overline U$, and hence also on the subset $U$ as desired. ◻ # When is $\operatorname{Shv}(X,\mathcal C)$ compactly generated? In this section, we prove the following characterization of those locally compact Hausdorff spaces $X$ that have $\operatorname{Shv}(X,\mathcal C)$ compactly generated: **Proposition 8**. *Let $\mathcal C$ be a non-trivial compactly generated stable $\infty$-category, and let $X$ be a $\mathcal C$-hypercomplete locally compact Hausdorff space. Then $\operatorname{Shv}(X,\mathcal C)$ is compactly generated if and only if $X$ is totally disconnected.* ## Proof of the proposition The proof will use the following observation: **Lemma 9**. 
*Let $\mathcal C$ be a compactly generated stable $\infty$-category, and let $\lbrace C_i\rbrace_{i\in I}$ and $\lbrace D_i\rbrace_{i\in I}$ be filtered systems in $\mathcal C$ indexed over the same poset $I$.* 1. *Suppose that for each $i\in I$, there is some $j\geq i$ so that the transition map $C_i\to C_j$ factors through the zero object $*$. Then $\mathop{\mathrm{\operatorname{colim}}}_I C_i\simeq *$. If each $C_i$ is compact, then the converse holds.* 2. *Suppose that for each comparable pair $i\leq j$ in $I$ there are horizontal equivalences making $$\begin{tikzcd} C_i\arrow[r,dashed,"\simeq"] \arrow[d] & D_i\arrow[d] \\ C_j\arrow[r,dashed,"\simeq"] & D_j \end{tikzcd}$$ commute, where the vertical maps are the transition maps. If each $C_i$ is compact, then $\mathop{\mathrm{\operatorname{colim}}}_IC_i\simeq *$ if and only if $\mathop{\mathrm{\operatorname{colim}}}_ID_i\simeq *$.* *Proof.* Note that (2) follows from (1), since the existence of such commutative squares implies that $\lbrace C_i\rbrace_I$ has the vanishing property for transition maps described in (1) if and only if $\lbrace D_i\rbrace_I$ has that property. For the first claim in (1), it suffices to show that $\operatorname{Map}_{\mathcal C}(D,\mathop{\mathrm{\operatorname{colim}}}_{i\in I}C_i)$ is contractible for each compact object $D\in\mathcal C^\omega$. For this, first observe that $$\pi_0\operatorname{Map}_{\mathcal C}(D,\mathop{\mathrm{\operatorname{colim}}}_{i\in I}C_i)\cong\mathop{\mathrm{\operatorname{colim}}}_{i\in I}\pi_0\operatorname{Map}(D,C_i)\cong *,$$ since our assumption guarantees that any homotopy class $D\to C_i$ is identified with $D\to *\to C_j$ after postcomposing with the transition map $C_i\to C_j$ for sufficiently large $j\geq i$.
Applying the same argument for the compact object $\Sigma^nD$, $n\geq 1$, we find that $$\pi_n\operatorname{Map}_{\mathcal C}(D,\mathop{\mathrm{\operatorname{colim}}}_{i\in I}C_i)\cong\pi_0\operatorname{Map}_{\mathcal C}(\Sigma^nD,\mathop{\mathrm{\operatorname{colim}}}_{i\in I}C_i)$$ vanishes also. Assume now that each $C_i$ is compact and that $\mathop{\mathrm{\operatorname{colim}}}_I C_i\simeq *$. Then $$\mathop{\mathrm{\operatorname{colim}}}_{j\in I}\operatorname{Map}_{\mathcal C}(C_i,C_j) \simeq \operatorname{Map}_{\mathcal C}(C_i,\mathop{\mathrm{\operatorname{colim}}}_{j\in I} C_j),$$ and since $\pi_0$ commutes with filtered colimits of anima, it follows that for sufficiently large $j\geq i$ the transition map $C_i\to C_j$ is homotopic to $C_i\to *\to C_j$. ◻ *Proof of Proposition [Proposition 8](#prop:when-cpt){reference-type="ref" reference="prop:when-cpt"}.* 'Sufficiency.' The $\infty$-category of sheaves of anima $\operatorname{Shv}(X)$ is compactly generated by [@HTT Prop 6.5.4.4], and hence so is $\operatorname{Shv}(X,\mathcal C)\simeq\operatorname{Shv}(X)\otimes\mathcal C$ according to [@HA Lem 5.3.2.11]. 'Necessity.' Let $x\in X$. We must show that if $y\in X$ lies in the same connected component as $x$, then $y=x$. For this, pick an object $C\not\simeq 0$ in $\mathcal C$ and let $x_*C$ denote the skyscraper sheaf at $x$ with value $C$. By assumption there is a filtered system $\lbrace\mathscr F_i\rbrace_{i\in I}$ of compact sheaves with $\mathop{\mathrm{\operatorname{colim}}}_I\mathscr F_i\simeq x_*C$. For each $i$, the fact that $\mathscr F_i$ is locally constant and that $x$ and $y$ lie in the same connected component means there is a non-canonical equivalence of stalks $x^*\mathscr F_i\simeq y^*\mathscr F_i$. One should not expect to find a system of such non-canonical equivalences assembling into a natural transformation, essentially because the neighborhoods on which the $\mathscr F_i$ are constant could get smaller and smaller as $i$ increases.
Nevertheless, given a comparable pair $i\leq j$ in $I$, one can pick equivalences making the diagram $$\label{eq:small-transf} \begin{tikzcd} x^*\mathscr F_i\arrow[r,dashed,"\simeq"] \arrow[d] & y^*\mathscr F_i \arrow[d]\\ x^*\mathscr F_j\arrow[r,dashed,"\simeq"] & y^*\mathscr F_j \end{tikzcd}$$ commute, where the vertical maps are the transition maps. To see this, simply note that the set of $z\in X$ for which there is a commutative diagram $$\begin{tikzcd} x^*\mathscr F_i\arrow[r,dashed,"\simeq"] \arrow[d] & z^*\mathscr F_i \arrow[d]\\ x^*\mathscr F_j\arrow[r,dashed,"\simeq"] & z^*\mathscr F_j \end{tikzcd}$$ is a clopen subset of $X$, since any point admits a neighborhood on which both $\mathscr F_i$ and $\mathscr F_j$ are constant. Since all of the $\mathscr F_i$ have compact stalks by Theorem [Theorem 5](#thm:cpt-sheaves){reference-type="ref" reference="thm:cpt-sheaves"}, it follows from Lemma [Lemma 9](#lem-small-compare){reference-type="ref" reference="lem-small-compare"} that the stalk $(x_*C)_y\simeq\mathop{\mathrm{\operatorname{colim}}}_Iy^*\mathscr F_i$ is nonzero. But $X$ is Hausdorff, so this implies that $y=x$ as desired. ◻ **Remark 10**. Lemma [Lemma 9](#lem-small-compare){reference-type="ref" reference="lem-small-compare"} is also true if $\mathcal C$ is any ordinary category, e.g. the category of abelian groups $\mathrm{Ab}$. It is illuminating to consider why the lemma holds in this concrete setting. Given a filtered system of abelian groups $\lbrace A_i\rbrace_{i\in I}$, the associated colimit can be described as the quotient of $\bigoplus_I A_i$ by the subgroup generated by the differences $\iota_i(a)-\iota_j(\varphi_{ij}(a))$ for $a\in A_i$ and $i\leq j$, where $\iota_i\colon A_i\to\bigoplus_IA_i$ is the canonical inclusion and $\varphi_{ij}\colon A_i\to A_j$ the transition map. In particular, the image of $a\in A_i$ vanishes in the colimit if and only if $\varphi_{ij}(a)=0$ for some $j\geq i$. Clearly $\mathop{\mathrm{\operatorname{colim}}}_IA_i\cong 0$ is implied by the assumption that for every $i\in I$ there is $j\geq i$ with $\varphi_{ij}\colon A_i\to A_j$ being zero.
For the partial converse, assume now that each $A_i$ is a compact object of $\mathrm{Ab}$, i.e. a finitely generated abelian group, and that $\mathop{\mathrm{\operatorname{colim}}}_IA_i\cong 0$. Let $i\in I$ and pick a generating set $a_1,\dots ,a_n$ for $A_i$. Since $\mathop{\mathrm{\operatorname{colim}}}_IA_i\cong 0$, there are $j_1,\dots ,j_n\geq i$ with $\varphi_{ij_s}(a_s) = 0$ for each $s$. Using that $I$ is filtered, pick $j\in I$ so that $j\geq j_s$ for each $s$. Then $\varphi_{ij}(a_s) = \varphi_{j_sj}\varphi_{ij_s}(a_s) = 0$ for each $s$, and hence $\varphi_{ij} = 0$. ## Hausdorff schemes Unlike in point-set topology, compactly generated categories of sheaves are abundant in algebraic geometry. Using results of Hochster [@hochster1969prime], Proposition [Proposition 8](#prop:when-cpt){reference-type="ref" reference="prop:when-cpt"} can be interpreted as saying that $\operatorname{Shv}(X,\mathcal C)$ is only compactly generated when $X$ happens to come from algebraic geometry: **Proposition 11**. *Let $\mathcal C$ be a nontrivial compactly generated stable $\infty$-category, and let $X$ be a $\mathcal C$-hypercomplete locally compact Hausdorff space. Then $\operatorname{Shv}(X,\mathcal C)$ is compactly generated if and only if $X$ is the underlying space of a scheme.* Indeed, a locally compact Hausdorff space is totally disconnected if and only if it admits a basis of compact open sets if and only if it is the underlying space of a scheme. The second equivalence is the Hausdorff case of [@hochster1969prime Thm 9]. For the first equivalence, note in one direction that if $X$ admits a basis of compact open sets, then every $x\in X$ has $\lbrace x\rbrace = \bigcap_{U\ni x} U$, with $U$ ranging over compact open neighborhoods of $x$. Since each compact open neighborhood is clopen, we thus have that $\lbrace x\rbrace$ is a quasi-component in $X$, and hence that $X$ is totally disconnected.
For the other direction, we must show that for every $x\in X$ and every open neighborhood $V\ni x$, there is a compact open $W$ with $x\in W\subseteq V$. Since $X$ is locally compact, we may assume that $V$ is precompact. By assumption $\lbrace x\rbrace = \bigcap_{U\ni x}U$, with $U$ ranging over clopen neighborhoods of $x$. Since each of these $U$ is in particular closed, we have that each $U\cap\partial\overline V$ is compact. By the finite intersection property, it therefore follows from $\bigcap_{U\ni x} U\cap \partial\overline{V} = \varnothing$ that for small enough clopen $U\ni x$, $U\cap\partial\overline{V} = \varnothing$. Hence $U\cap\overline V = U\cap V$ is a compact open neighborhood of $x$ contained in $V$, as desired. ## When is $\operatorname{Shv}(X)$ compactly generated? Proposition [Proposition 8](#prop:when-cpt){reference-type="ref" reference="prop:when-cpt"} says that the $\infty$-category of sheaves on $X$ with coefficients in a stable $\infty$-category is rarely compactly generated when $X$ is a locally compact Hausdorff space. If we had asked the same question 'without coefficients,' this would have been an easier observation: **Proposition 12**. *Let $X$ be a quasi-separated[^4] topological space. The $\infty$-topos $\operatorname{Shv}(X)$ of sheaves of anima on $X$ is compactly generated if and only if the sobrification of $X$ is the underlying space of a scheme.* *Proof.* One direction is [@HTT Thm 7.2.3.6]. For the other direction, assume that $\operatorname{Shv}(X)$ is compactly generated. Then so is the frame $\mathcal U\simeq\tau_{\leq -1}\operatorname{Shv}(X)$ of open subsets of $X$ by [@HTT Cor 5.5.7.4]. But this means that $X$ admits a basis of compact open sets, and hence the sobrification of $X$ is the underlying space of a scheme according to [@hochster1969prime Thm 9]. 
◻ Applying $\pi_0$ to [\[eq:small-transf\]](#eq:small-transf){reference-type="eqref" reference="eq:small-transf"} and noting that each $\pi_0x^*\mathscr F_i$ and $\pi_0 y^*\mathscr F_i$ is a finitely generated abelian group by Serre's finiteness theorem,[^5] we conclude using Lemma [Lemma 9](#lem-small-compare){reference-type="ref" reference="lem-small-compare"} that since $\mathop{\mathrm{\operatorname{colim}}}_I\pi_0x^*\mathscr F_i\cong \pi_0x^*\mathop{\mathrm{\operatorname{colim}}}_I\mathscr F_i\cong \pi_0x^*x_*\mathbb S\cong\mathbb Z\not\cong 0$, we must also have $\pi_0y^*x_*\mathbb S\not\cong 0$. But $X$ is Hausdorff, so this implies $y=x$ as desired. **Remark 13**. The same method of proof shows that, with the same assumptions on $X$, the $\infty$-category $\operatorname{Shv}(X,\mathcal D(R))\simeq\mathcal D(\operatorname{Shv}(X,R))$ is compactly generated if and only if $X$ is totally disconnected. # Descent for maps with local sections In this short appendix, we prove a descent lemma, used in the proof of Theorem [Theorem 5](#thm:cpt-sheaves){reference-type="ref" reference="thm:cpt-sheaves"}, that is an immediate generalization of [@techniques Cor 4.1.6].
Let $\mathcal C$ be a compactly generated $\infty$-category and let $f\colon X\to Y$ be a continuous map of topological spaces. Recall that the *Čech nerve* of $f$ is the augmented simplicial topological space $X_\bullet$ with $X_{-1} = Y$ and $p$-simplices $$X_p = \underbrace{X\times_Y \dots\times_Y X}_{p+1\text{ times}}$$ for $p\geq 0$, with face maps given by projections and degeneracy maps given in the obvious way. More formally, if $\Delta_+$ is the category of finite (possibly empty) ordinals and $\mathcal T\mathrm{op}$ is the category of topological spaces, then $X_\bullet\colon \Delta^{\text{op}}_+\to\mathcal T\mathrm{op}$ is defined by right Kan extending $(f\colon X\to Y)\colon \Delta_{+,\leq 0}^{\text{op}}\to\mathcal T\mathrm{op}$ along the inclusion functor $\Delta_{+,\leq 0}^{\text{op}}\subset \Delta^{\text{op}}_+$. Letting $\operatorname{Shv}^*({-},\mathcal C)$ denote the contravariant functor from $\mathcal T\mathrm{op}$ to $\widehat{\mathcal C\mathrm{at}}_\infty$ given informally by $X\mapsto \operatorname{Shv}(X,\mathcal C)$ on objects and $f\mapsto f^*$ on morphisms, we then have the following useful definition: **Definition 14**. The map $f$ is of *$\mathcal C$-descent type* if the canonical functor $$\operatorname{Shv}(X,\mathcal C)\to\mathop{\mathrm{\operatorname{lim}}}_{\Delta}\operatorname{Shv}^*(X_\bullet ,\mathcal C)$$ is an equivalence. Let us say that $f$ *admits local sections* if for every $y\in Y$, there is an open set $U\ni y$ such that the restriction $f\colon f^{-1}(U)\to U$ admits a section. **Proposition 15**. *If $f$ admits local sections, then $f$ is of $\mathcal C$-descent type.* *Proof.* By ordinary Čech descent, we may assume that $f$ admits a section globally on $Y$, after possibly passing to an open cover of $Y$ on which this is true. Let $\varepsilon\colon Y\to X$ be a choice of such a section.
The section $\varepsilon$ allows us to endow the Čech nerve $X_\bullet$ with the structure of a split augmented simplicial space, by defining the extra degeneracies $h_i\colon X_p\to X_{p+1}$ by $$h_i(x_0,\dots ,x_p) = (x_0,\dots ,x_{i-1},\varepsilon (y),x_i,\dots ,x_p)$$ where $y = f(x_0) = \dots = f(x_p)$. It then follows that the split coaugmented cosimplicial $\infty$-category $\operatorname{Shv}^*(X_\bullet ,\mathcal C)$ is a limit diagram by [@HTT Lem 6.1.3.16]. ◻ **Corollary 16**. *Let $\lbrace A_i\rbrace_{i\in I}$ be a collection of subsets of $X$ such that $X = \bigcup_I A_i^{\mathrm o}$, where $A_i^{\mathrm o}$ is the interior of $A_i$. Then the canonical map $\coprod_I A_i\to X$ is of $\mathcal C$-descent type.* *Proof.* The canonical map $\coprod_IA_i\to X$ admits a section on $A_j^{\mathrm o}$ given by $A_j^{\mathrm o}\hookrightarrow A_j\to\coprod_IA_i$, where the second map is the canonical injection. ◻ [^1]: Indeed, for any such $\mathcal C$ there is a conservative functor $$\operatorname{Shv}(X,\mathcal C)\to \prod_{C\in\mathcal C^\omega}\operatorname{Shv}(X)$$ given informally by mapping $\mathscr F$ to $(\mathscr F_C)_{C\in\mathcal C^\omega}$, where $\mathscr F_C = \operatorname{Map}_{\mathcal C}(C,{-})\circ\mathscr F$, which is a sheaf since $\operatorname{Map}_{\mathcal C}(C,{-})$ preserves limits. Also, since $C$ is compact, we have a canonical equivalence $(\mathscr F_C)_x\simeq\operatorname{Map}_{\mathcal C}(C,\mathscr F_x)$ natural in $\mathscr F$ for each $x\in X$, and it follows that if $X$ is $\mathcal S$-hypercomplete then it is also $\mathcal C$-hypercomplete. [^2]: Indeed, we have already observed that $\mathscr F\vert_{Z_J}$ is compact for each $J$, and hence the associated object $i_k^*\mathscr F$ in the product $\prod_J\operatorname{Shv}(Z_J,\mathcal C)$ is also compact according to [@HTT Lem 5.3.4.10]. [^3]: *i.e.
a compact object of $\mathrm{Mod}_R$* [^4]: *Recall that a topological space $X$ is said to be *quasi-separated* if for any pair of compact open subsets $U,V\subseteq X$, the intersection $U\cap V$ is again compact. Note that all Hausdorff spaces are quasi-separated.* [^5]: Since $\mathscr F_i$ is compact, it follows from Theorem [Theorem 5](#thm:cpt-sheaves){reference-type="ref" reference="thm:cpt-sheaves"} that each stalk $z^*\mathscr F_i$ is a finite spectrum, and Serre's finiteness theorem says that the homotopy groups of a finite spectrum are finitely generated.
--- abstract: | A $2k$-move is a local deformation adding or removing $2k$ half-twists. We show that if two virtual knots are related by a finite sequence of $2k$-moves, then their $n$-writhes are congruent modulo $k$ for any nonzero integer $n$, and their odd writhes are congruent modulo $2k$. Moreover, we give a necessary and sufficient condition for two virtual knots to have the same congruence class of odd writhes modulo $2k$. address: Department of Mathematics, Kobe University, 1-1 Rokkodai-cho, Nada-ku, Kobe 657-8501, Japan author: - Kodai Wada title: Writhes and $2k$-moves for virtual knots --- [^1] # Introduction {#sec-intro} Let $k$ be a positive integer. A *$2k$-move* on a knot diagram is a local deformation adding or removing $2k$ half-twists as shown in Figure [\[2k-move\]](#2k-move){reference-type="ref" reference="2k-move"}. A $2$-move is equivalent to a crossing change; that is, a $2$-move is realized by a crossing change, and vice versa. In this sense, a $2k$-move can be considered as a generalization of a crossing change. The $2k$-moves form an important family of local moves in classical knot theory. In fact, they have been well studied by means of many invariants of classical knots and links; for example, Alexander polynomials [@Kin], Jones, HOMFLYPT and Kauffman polynomials [@Prz], Burnside groups [@DP02; @DP04] and Milnor invariants [@MWY]. [Figure: a $2k$-move, adding or removing $2k$ half-twists] Recently, Jeong, Choi and Kim [@JCK] extended the classical study of $2k$-moves to the setting of virtual knots, which are a generalization of classical knots discovered by Kauffman [@Kau99]. Roughly speaking, a *virtual knot* is an equivalence class of generalized knot diagrams called *virtual knot diagrams* under seven types of local deformations. We say that two virtual knots $K$ and $K'$ are related by a $2k$-move if a diagram of $K'$ is a result of a $2k$-move on a diagram of $K$.
For a virtual knot $K$, Kauffman [@Kau04] defined an integer-valued invariant $J(K)$ called the *odd writhe*. Satoh and Taniguchi [@ST] generalized it to a sequence of integer-valued invariants $J_{n}(K)$ of $K$ called the *$n$-writhes*. This sequence $\{J_{n}(K)\}_{n\ne0}$ gives rise to a polynomial invariant $P_{K}(t)$ of $K$ known as the *affine index polynomial* [@Kau13], which is essentially equivalent to the *writhe polynomial* [@CG]. Refer to [@CFGMX] for a good survey of virtual knot invariants derived from the chord index, including $J(K)$, $J_{n}(K)$ and $P_{K}(t)$. In [@JCK Theorem 2.3], Jeong, Choi and Kim established a necessary condition for two virtual knots to be related by a finite sequence of $2k$-moves using their affine index polynomials. Examining the proof of [@JCK Theorem 2.3], we can find another necessary condition for such a pair of virtual knots in terms of the $n$-writhes and odd writhes. The first aim of this paper is to prove the following. **Theorem 1**. *If two virtual knots $K$ and $K'$ are related by a finite sequence of $2k$-moves, then the following hold:* 1. *$J_{n}(K)\equiv J_{n}(K')\pmod{k}$ for any nonzero integer $n$.* 2. *$J(K)\equiv J(K')\pmod{2k}$.* Although assertion (i) of this theorem follows immediately from [@JCK Theorem 2.3], the author believes it is worth stating as a separate result. A *$\Xi$-move* on a virtual knot diagram is a local deformation exchanging the positions of $c_{1}$ and $c_{3}$ of three consecutive real crossings $c_{1}$, $c_{2}$ and $c_{3}$ as shown in Figure [\[Xi-move\]](#Xi-move){reference-type="ref" reference="Xi-move"}, where we omit the over/under information of each crossing $c_{i}$ $(i=1,2,3)$. The $\Xi$-move arises naturally as a diagrammatic characterization of virtual knots having the same odd writhe. In fact, Satoh and Taniguchi [@ST] proved the following.
[Figure: the $\Xi$-move, exchanging the positions of the crossings $c_{1}$ and $c_{3}$] **Theorem 2** ([@ST Theorem 1.7]). *For two virtual knots $K$ and $K'$, the following are equivalent:* 1. *$J(K)=J(K')$.* 2. *$K$ and $K'$ are related by a finite sequence of $\Xi$-moves.* Inspired by this theorem, we use $\Xi$-moves together with $2k$-moves to characterize virtual knots having the same congruence class of odd writhes modulo $2k$. The second aim of this paper is to prove the following. **Theorem 3**. *For two virtual knots $K$ and $K'$, the following are equivalent:* 1. *$J(K)\equiv J(K')\pmod{2k}$.* 2. *$K$ and $K'$ are related by a finite sequence of $2k$-moves and $\Xi$-moves.* Although the crossing change is an unknotting operation for classical knots, it was shown by Carter, Kamada and Saito in [@CKS Proposition 25] that not every virtual knot can be unknotted by crossing changes. This fact justifies the notion of flat virtual knots. A *flat virtual knot* [@Kau99] is an equivalence class of virtual knots up to crossing changes. Equivalently, a flat virtual knot is represented by a virtual knot diagram with all the real crossings replaced by flat crossings, where a *flat crossing* is a transverse double point with no over/under information. In [@Che Lemma 2.2], Cheng showed that the odd writhe for any virtual knot takes values in even integers. Therefore any virtual knot $K$ and the trivial one $O$ satisfy $J(K)\equiv J(O)\equiv0\pmod{2}$. By Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"} for $k=1$, the two knots $K$ and $O$ are related by a finite sequence of crossing changes and $\Xi$-moves. In other words, we have the following. **Corollary 4**. *Any flat virtual knot can be deformed into the trivial knot by a finite sequence of flat $\Xi$-moves; that is, the flat $\Xi$-move is an unknotting operation for flat virtual knots.
Here, a flat $\Xi$-move is a $\Xi$-move with all the real crossings replaced by flat ones. ◻* The rest of this paper is organized as follows. In Section [2](#sec-thm1){reference-type="ref" reference="sec-thm1"}, we review the definitions of a virtual knot, a Gauss diagram, the $n$-writhe and the odd writhe, and prove Theorem [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"}. Section [3](#sec-thm2){reference-type="ref" reference="sec-thm2"} is devoted to the proof of Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"}. Our main tool is the notion of shell-pairs, which are certain pairs of chords of a Gauss diagram introduced in [@MSW]. In the last section, for two virtual knots $K$ and $K'$ that are related by a finite number of $2k$-moves, we study their $2k$-move distance $\mathrm{d}_{2k}(K,K')$ defined as the minimal number of such $2k$-moves. We show that for any virtual knot $K$ and any positive integer $a$, there is a virtual knot $K'$ such that $\mathrm{d}_{2k}(K,K')=a$ (Proposition [Proposition 12](#prop-distance){reference-type="ref" reference="prop-distance"}). # Proof of Theorem [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"} {#sec-thm1} We begin this section by recalling the definitions of virtual knots and Gauss diagrams from [@GPV; @Kau99]. A *virtual knot diagram* is the image of an immersion of an oriented circle into the plane whose singularities are only transverse double points. Such double points consist of *positive*, *negative* and *virtual crossings* as shown in Figure [\[xing\]](#xing){reference-type="ref" reference="xing"}. A positive/negative crossing is also called a *real crossing*. [Figure: a positive, a negative and a virtual crossing] Two virtual knot diagrams are said to be *equivalent* if they are related by a finite sequence of *generalized Reidemeister moves* I--VII as shown in Figure [\[gReid-move\]](#gReid-move){reference-type="ref" reference="gReid-move"}.
A *virtual knot* is the equivalence class of a virtual knot diagram. In particular, a classical knot can be considered as a virtual knot diagram with no virtual crossings, called a *classical knot diagram*, up to the moves I, II and III. In [@GPV Theorem 1.B], Goussarov, Polyak and Viro proved that two equivalent classical knot diagrams are related by a finite sequence of moves I, II, and III; in other words, the set of virtual knots contains that of classical knots. In this sense, virtual knots are a generalization of classical knots. [Figure: the generalized Reidemeister moves I--VII] A *Gauss diagram* is an oriented circle equipped with a finite number of signed and oriented chords whose endpoints lie disjointly on the circle. In figures the underlying circle and chords of a Gauss diagram will be drawn with thick and thin lines, respectively. Gauss diagrams provide an alternative way of representing virtual knots. For a virtual knot diagram $D$ with $n$ real crossings (and some or no virtual crossings), the *Gauss diagram $G_{D}$ associated with $D$* is constructed as follows. It consists of a circle and $n$ chords connecting the preimages of each real crossing of $D$. Each chord of $G_{D}$ has the sign of the corresponding real crossing of $D$, and it is oriented from the overcrossing to the undercrossing. For a virtual knot $K$, a *Gauss diagram of $K$* is defined to be a Gauss diagram associated with a virtual knot diagram of $K$. A motivation for introducing virtual knot theory comes from the realization of Gauss diagrams. In fact, the construction above defines a surjective map from virtual knot diagrams onto Gauss diagrams, although not every Gauss diagram can be realized by a classical knot diagram.
Moreover, this map induces a bijection between the set of virtual knots and that of Gauss diagrams modulo Reidemeister moves I, II and III defined at the Gauss diagram level as shown in Figure [\[Reid-Gauss\]](#Reid-Gauss){reference-type="ref" reference="Reid-Gauss"} [@GPV Theorem 1.A]. Refer also to [@Kau99 Section 3.2]. Therefore a virtual knot can be regarded as the equivalence class of a Gauss diagram. [Figure: Reidemeister moves I, II and III for Gauss diagrams] We will use two local deformations on Gauss diagrams as shown in Figure [\[2kXi-Gauss\]](#2kXi-Gauss){reference-type="ref" reference="2kXi-Gauss"} as well as the Reidemeister moves I, II and III. These deformations are the counterparts of a $2k$-move and a $\Xi$-move for Gauss diagrams. More precisely, a *$2k$-move on a Gauss diagram* adds or removes $2k$ chords with the same sign $\varepsilon$ whose initial and terminal endpoints appear alternately. Let $P_{1}$, $P_{2}$ and $P_{3}$ be three consecutive endpoints of chords of a Gauss diagram. A *$\Xi$-move* exchanges the positions of $P_{1}$ and $P_{3}$, preserving the signs $\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}$ and orientations of the chords.
In the right of the figure, a pair of dots $\bullet$ marks the two endpoints $P_{1}$ and $P_{3}$ exchanged by a $\Xi$-move. [Figure: a $2k$-move and a $\Xi$-move on Gauss diagrams] Using Gauss diagrams, we now define the $n$-writhe and the odd writhe of a virtual knot $K$. For a Gauss diagram $G$ of $K$, let $\gamma$ be a chord of $G$. If $\gamma$ has a sign $\varepsilon$, then we assign $\varepsilon$ and $-\varepsilon$ to the terminal and initial endpoints of $\gamma$, respectively. The endpoints of $\gamma$ divide the underlying circle of $G$ into two arcs. Let $\alpha$ be one of the two oriented arcs that starts at the initial endpoint of $\gamma$. See Figure [\[arc\]](#arc){reference-type="ref" reference="arc"}. The *index* of $\gamma$, $\mathrm{ind}(\gamma)$, is the sum of signs of endpoints of chords on $\alpha$. For an integer $n$, we denote by $J_{n}(G)$ the sum of signs of chords with index $n$. In [@ST Lemma 2.3], Satoh and Taniguchi proved that $J_{n}(G)$ is an invariant of the virtual knot $K$ for any $n\ne0$; that is, it is independent of the choice of $G$. This invariant is called the *$n$-writhe* of $K$ and denoted by $J_{n}(K)$. The *odd writhe $J(K)$* of $K$, defined by Kauffman [@Kau04], is given by $$J(K)=\sum_{n\in\mathbb{Z}}J_{2n-1}(K).$$ Refer to [@CFGMX; @Kau04; @ST] for more details. [Figure: the arc $\alpha$ associated with a chord $\gamma$] In [@JCK], Jeong, Choi and Kim prepared Lemma 2.2 to show Theorem 2.3 giving the necessary condition mentioned in Section [1](#sec-intro){reference-type="ref" reference="sec-intro"}.
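The index and the writhes defined above are straightforward to compute mechanically. The following Python sketch is not part of the paper: the encoding of a Gauss diagram as a cyclic list of labelled endpoints and all function names are our own illustrative choices. It computes $\mathrm{ind}(\gamma)$, the $n$-writhes $J_{n}$ and the odd writhe $J$; the sample diagram is a Gauss diagram of Kauffman's virtual trefoil.

```python
# A Gauss diagram is encoded as the cyclic list of its chord endpoints in the
# order they appear along the circle.  Each endpoint is (label, role), where
# role "i" marks the initial (overcrossing) end of the chord and "t" the
# terminal (undercrossing) end.  `signs` maps each chord label to +1 or -1.

def endpoint_sign(endpoint, signs):
    """Sign assigned to an endpoint: +e at the terminal end, -e at the initial end."""
    label, role = endpoint
    return signs[label] if role == "t" else -signs[label]

def index_of(diagram, signs, label):
    """ind(gamma): sum of endpoint signs on the arc from gamma's initial
    endpoint to its terminal endpoint, following the circle's orientation."""
    n = len(diagram)
    k = (diagram.index((label, "i")) + 1) % n
    total = 0
    while diagram[k] != (label, "t"):
        total += endpoint_sign(diagram[k], signs)
        k = (k + 1) % n
    return total

def n_writhe(diagram, signs, n):
    """J_n: sum of signs of chords whose index equals n."""
    return sum(s for lbl, s in signs.items() if index_of(diagram, signs, lbl) == n)

def odd_writhe(diagram, signs):
    """J(K): sum of signs of chords with odd index."""
    return sum(s for lbl, s in signs.items()
               if index_of(diagram, signs, lbl) % 2 == 1)

# A Gauss diagram of the virtual trefoil: two positive chords a and b whose
# initial and terminal endpoints alternate along the circle.
trefoil = [("a", "i"), ("b", "i"), ("a", "t"), ("b", "t")]
trefoil_signs = {"a": 1, "b": 1}
```

For this diagram $\mathrm{ind}(a)=-1$ and $\mathrm{ind}(b)=1$, so $J_{-1}=J_{1}=1$ and the odd writhe is $2$, in agreement with the known value of $J$ for the virtual trefoil.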
We rephrase and reprove this lemma from the Gauss diagram point of view for the proof of Theorem [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"}. **Lemma 5** (cf. [@JCK Lemma 2.2]). *If two Gauss diagrams $G$ and $G'$ are related by a single $2k$-move, then there is a unique integer $n$ such that $$J_{n}(G)-J_{n}(G')=\varepsilon k,\ J_{-n}(G)-J_{-n}(G')=\varepsilon k \text{ and } J_{m}(G)=J_{m}(G')$$ for some $\varepsilon=\pm1$ and any integer $m\neq \pm n$.* *Proof.* We may assume that $G'$ is obtained from $G$ by removing $2k$ chords $\gamma_{1}$, $\gamma_{2},\dots,\gamma_{2k}$ with sign $\varepsilon$ involved in a $2k$-move as shown in Figure [\[2k-Gauss\]](#2k-Gauss){reference-type="ref" reference="2k-Gauss"}, where we depict the signs of the initial and terminal endpoints of each $\gamma_{i}$ $(i=1,2,\ldots,2k)$, instead of omitting the sign of $\gamma_{i}$ itself. [Figure: the Gauss diagrams $G$ and $G'$ related by a $2k$-move removing the chords $\gamma_{1},\ldots,\gamma_{2k}$] For any chord $\gamma$ of $G$ other than the $2k$ chords $\gamma_{1},\ldots,\gamma_{2k}$, let $\gamma'$ be the corresponding chord of $G'$. Then $\gamma$ and $\gamma'$ have the same index; that is, $\mathrm{ind}(\gamma)=\mathrm{ind}(\gamma')$. In fact, the sum of signs of $2k$ consecutive endpoints of $\gamma_{i}$'s on an arc equals zero. Let $n\in\mathbb{Z}$ (possibly zero) be the index of $\gamma_{1}$. Then we have $\mathrm{ind}(\gamma_{2i-1})=n$ and $\mathrm{ind}(\gamma_{2i})=-n$ for $i=1,2,\dots,k$. Moreover, the sum of signs of the $k$ chords $\gamma_{1},\gamma_{3},\ldots,\gamma_{2k-1}$ is equal to $\varepsilon k$, and the sum of signs of the $k$ chords $\gamma_{2},\gamma_{4},\ldots,\gamma_{2k}$ is also $\varepsilon k$. Therefore we obtain the conclusion.
◻ *Proof of Theorem [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"}.* Assume that $K$ and $K'$ are related by a single $2k$-move, and let $G$ and $G'$ be Gauss diagrams of $K$ and $K'$, respectively. Since $G$ and $G'$ are related by a combination of a single $2k$-move and Reidemeister moves, it follows from Lemma [Lemma 5](#lem-writhe){reference-type="ref" reference="lem-writhe"} that $$J_{n}(K)-J_{n}(K')=J_{n}(G)-J_{n}(G')\in\{0,\pm k\}$$ for any nonzero integer $n$. Therefore the assertion (i) holds. If there is an odd integer $n$ such that $J_{n}(K)-J_{n}(K')=\varepsilon k$ holds for some $\varepsilon=\pm1$, then we have $$J_{-n}(K)-J_{-n}(K')=\varepsilon k \text{ and } J_{m}(K)=J_{m}(K')$$ for any nonzero integer $m\neq \pm n$. Otherwise, we have $J_{m}(K)=J_{m}(K')$ for any odd integer $m$. In both cases, it follows that $J(K)-J(K')\in\{0,\pm2k\}$. This completes the proof for the assertion (ii). ◻ Theorem [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"} together with Theorem [Theorem 2](#thm-ST){reference-type="ref" reference="thm-ST"} implies an interesting consequence, which states that the set of $2k$-move equivalence classes of a virtual knot $K$ for all $k\geq1$ determines the $\Xi$-move equivalence class of $K$. **Proposition 6**. *If two virtual knots $K$ and $K'$ are related by a finite sequence of $2k$-moves for all $k\geq1$, then $K$ and $K'$ are related by a finite sequence of $\Xi$-moves.* *Proof.* Suppose that $J(K)\ne J(K')$. Then there is a positive integer $k$ such that $J(K)\not\equiv J(K')\pmod{2k}$. By Theorem [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"}(ii), this contradicts the assumption that $K$ and $K'$ are related by a finite sequence of $2k$-moves for all $k\geq1$. Hence $J(K)=J(K')$, and Theorem [Theorem 2](#thm-ST){reference-type="ref" reference="thm-ST"} gives the conclusion.
◻ # Proof of Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"} {#sec-thm2} This section is devoted to the proof of Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"}. Our main tool is the notion of shell-pairs, which are certain pairs of chords of a Gauss diagram developed in [@MSW] for classifying $2$-component virtual links up to $\Xi$-moves. Let $P_{1}$, $P_{2}$ and $P_{3}$ be three consecutive endpoints of chords of a Gauss diagram $G$. We say that a chord of $G$ is a *shell* if it connects $P_{1}$ and $P_{3}$. See the left of Figure [\[shell\]](#shell){reference-type="ref" reference="shell"}. Note that the orientation of a shell can be reversed by a $\Xi$-move exchanging the positions of $P_{1}$ and $P_{3}$. A *positive shell-pair* (or *negative shell-pair*) consists of a pair of positive shells (or negative shells) whose four endpoints are consecutive. See the right of the figure, where we omit the orientations of shells.

[Figure: a shell with sign $\varepsilon$ connecting $P_{1}$ and $P_{3}$ (left), and a positive and a negative shell-pair (right).]

We prepare three results (Lemmas [Lemma 7](#lem-shellpair){reference-type="ref" reference="lem-shellpair"}, [Lemma 8](#lem-censecutive){reference-type="ref" reference="lem-censecutive"} and Proposition [Proposition 9](#prop-normalform){reference-type="ref" reference="prop-normalform"}) to give the proof of Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"}. The first and second results will be used to prove the third one. The following lemma was shown in [@MSW; @ST]. **Lemma 7** ([@MSW Section 4], [@ST Fig. 13]). *Let $G$, $G'$ and $G''$ be Gauss diagrams.* 1.
*If $G'$ is obtained from $G$ by a local deformation exchanging the positions of a shell-pair and an endpoint of a chord in $G$, preserving the orientations of the chords, as shown in the top of Figure [\[shellpair-move\]](#shellpair-move){reference-type="ref" reference="shellpair-move"}, then $G$ and $G'$ are related by a finite sequence of $\Xi$-moves and Reidemeister moves.* 2. *If $G''$ is obtained from $G$ by a local deformation adding or removing two consecutive shell-pairs with opposite signs as shown in the bottom of Figure [\[shellpair-move\]](#shellpair-move){reference-type="ref" reference="shellpair-move"}, then $G$ and $G''$ are related by a finite sequence of $\Xi$-moves and Reidemeister moves.*

[Figure: the two local deformations of Lemma 7 — sliding a shell-pair with sign $\varepsilon$ past an endpoint of a chord (top), and adding or removing two consecutive shell-pairs with signs $\varepsilon$ and $-\varepsilon$ (bottom).]

**Lemma 8**. *Let $G$ and $G'$ be Gauss diagrams and $k$ a positive integer. If $G'$ is obtained from $G$ by a local deformation adding or removing $k$ consecutive shell-pairs with the same sign $\varepsilon$ as shown in Figure [\[censec-shellpairs\]](#censec-shellpairs){reference-type="ref" reference="censec-shellpairs"}, then $G$ and $G'$ are related by a finite sequence of $2k$-moves, $\Xi$-moves and Reidemeister moves.*

[Figure: a local deformation adding or removing $k$ consecutive shell-pairs with sign $\varepsilon$.]

*Proof.* We only prove the result for $k=2$; the other cases are shown similarly. Assume that $G'$ is obtained from $G$ by adding two consecutive shell-pairs with sign $\varepsilon$.
The proof follows from Figure [\[pf-lem-censecutive\]](#pf-lem-censecutive){reference-type="ref" reference="pf-lem-censecutive"}, which gives a sequence of Gauss diagrams $$G=G_{0}, G_{1},\ldots,G_{6}=G'$$ such that for each $i=1,2,\ldots,6$, $G_{i}$ is obtained from $G_{i-1}$ by a combination of $2k$-moves, $\Xi$-moves and Reidemeister moves. More precisely, we obtain $G_{1}$ from $G_{0}=G$ by a Reidemeister move I adding a positive chord, $G_{2}$ from $G_{1}$ by a $4$-move adding four chords with sign $\varepsilon$, and $G_{3}$ from $G_{2}$ by a $\Xi$-move exchanging the positions of the two endpoints with dots $\bullet$. By Lemma [Lemma 7](#lem-shellpair){reference-type="ref" reference="lem-shellpair"}(i), we can move the resulting shell-pair (with preserving the orientations of the chords) to get $G_{4}$ from $G_{3}$. After deforming $G_{4}$ into $G_{5}$ by a $\Xi$-move, we finally obtain $G_{6}=G'$ by two $\Xi$-moves reversing the orientations of shells and a Reidemeister move I removing a positive chord. 
◻

[Figure: the sequence of Gauss diagrams $G=G_{0},G_{1},\ldots,G_{6}=G'$ in the proof of Lemma 8, obtained by a Reidemeister move I, a $4$-move, $\Xi$-moves and Lemma 7(i).]

For an integer $a$, let $G(a)$ be the Gauss diagram in Figure [\[normalform\]](#normalform){reference-type="ref" reference="normalform"}; that is, it consists of $|a|$ shell-pairs with sign $\varepsilon$, where $\varepsilon=1$ for $a>0$ and $\varepsilon=-1$ for $a<0$. In particular, $G(0)$ is the Gauss diagram with no chords. We denote by $K(a)$ the virtual knot represented by $G(a)$. We remark that $J(K(a))=2a$.

[Figure: the Gauss diagram $G(a)$, consisting of $|a|$ consecutive shell-pairs with sign $\varepsilon$.]

We give a normal form of an equivalence class of virtual knots under $2k$-moves and $\Xi$-moves as follows. **Proposition 9**. *Any virtual knot $K$ is related to $K(a)$ for some $a\in\mathbb{Z}$ with $0\le a<k$ by a finite sequence of $2k$-moves and $\Xi$-moves.* *Proof.* By [@ST Proposition 7.2], any Gauss diagram $G$ of $K$ can be deformed into $G(a)$ for some $a\in\mathbb{Z}$ by a finite sequence of $\Xi$-moves and Reidemeister moves. If $a$ satisfies $0\le a<k$, then we have the conclusion. For $k\le a$, there is a unique positive integer $p$ such that $0\le a-pk<k$.
Lemma [Lemma 8](#lem-censecutive){reference-type="ref" reference="lem-censecutive"} allows us to add $pk$ consecutive negative shell-pairs to $G(a)$. From the obtained Gauss diagram, we can remove $pk$ pairs of shell-pairs with opposite signs by Lemma [Lemma 7](#lem-shellpair){reference-type="ref" reference="lem-shellpair"}(ii) in order to obtain $G(a-pk)$. Therefore $G$ is related to $G(a-pk)$ by a finite sequence of $2k$-moves, $\Xi$-moves and Reidemeister moves. In the case $a<0$, let $q$ be the positive integer such that $0\le a+qk<k$. Using Lemmas [Lemma 7](#lem-shellpair){reference-type="ref" reference="lem-shellpair"}(ii) and [Lemma 8](#lem-censecutive){reference-type="ref" reference="lem-censecutive"}, we add $qk$ consecutive positive shell-pairs to $G(a)$, and then remove $qk$ pairs of shell-pairs with opposite signs. Finally $G$ is related to $G(a+qk)$ by a finite sequence of $2k$-moves, $\Xi$-moves and Reidemeister moves. ◻ Now we are ready to prove Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"}. *Proof of Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"}.* (i)$\Rightarrow$(ii): By Proposition [Proposition 9](#prop-normalform){reference-type="ref" reference="prop-normalform"}, $K$ and $K'$ are related to $K(a)$ and $K(a')$ for some $a,a'\in\mathbb{Z}$ with $0\le a,a'<k$, respectively, by a finite sequence of $2k$-moves and $\Xi$-moves. Then it follows from Theorems [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"}(ii) and [Theorem 2](#thm-ST){reference-type="ref" reference="thm-ST"} that $$J(K)\equiv J(K(a))=2a \pmod{2k}$$ and $$J(K')\equiv J(K(a'))=2a' \pmod{2k}.$$ By assumption, we have $2a\equiv 2a' \pmod{2k}$. Since the nonnegative integers $a$ and $a'$ are less than $k$, we have $a=a'$. Therefore $K(a)$ and $K(a')$ coincide. 
(ii)$\Rightarrow$(i): This follows from Theorems [Theorem 1](#thm-writhe){reference-type="ref" reference="thm-writhe"}(ii) and [Theorem 2](#thm-ST){reference-type="ref" reference="thm-ST"}. ◻ The following result is an immediate consequence of the proof of Theorem [Theorem 3](#thm-2kXi){reference-type="ref" reference="thm-2kXi"}. **Corollary 10**. *A complete representative system of the equivalence classes of virtual knots under $2k$-moves and $\Xi$-moves is given by the set $$\{K(a)\mid a\in\mathbb{Z},\ 0\le a<k\}.$$ In particular, the number of equivalence classes is $k$. ◻* # $2k$-move distance {#sec-distance} This section studies the $2k$-move distance for virtual knots. For two virtual knots $K$ and $K'$ that are related by a finite sequence of $2k$-moves, we denote by $\mathrm{d}_{2k}(K,K')$ the minimal number of $2k$-moves needed to deform a diagram of $K$ into that of $K'$. In particular, we set $\mathrm{u}_{2k}(K)=\mathrm{d}_{2k}(K,O)$, where $O$ is the trivial knot. We will show that for any virtual knot $K$ and any positive integer $a$, there is a virtual knot $K'$ such that $\mathrm{d}_{2k}(K,K')=a$ (Proposition [Proposition 12](#prop-distance){reference-type="ref" reference="prop-distance"}). In [@JCK Theorem 2.3], Jeong, Choi and Kim gave a lower bound for $\mathrm{d}_{2k}(K,K')$ using the affine index polynomials of $K$ and $K'$, which can be rephrased in terms of the $n$-writhes as follows. **Theorem 11** (cf. [@JCK Theorem 2.3]). *Let $K$ and $K'$ be virtual knots that are related by a finite sequence of $2k$-moves. Then we have $$\mathrm{d}_{2k}(K,K')\geq\frac{1}{k}\sum_{n>0}|J_{n}(K)-J_{n}(K')|=\frac{1}{k}\sum_{n<0}|J_{n}(K)-J_{n}(K')|.$$ In particular, when $K'=O$ is the trivial knot, we have $$\mathrm{u}_{2k}(K)\geq\frac{1}{k}\sum_{n>0}|J_{n}(K)|=\frac{1}{k}\sum_{n<0}|J_{n}(K)|.$$* We remark that this theorem is a direct consequence of Lemma [Lemma 5](#lem-writhe){reference-type="ref" reference="lem-writhe"}.
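In practice, the bound of Theorem 11 is immediate to evaluate once the $n$-writhes are known. The following sketch is ours and purely illustrative (the function name is hypothetical); it assumes the $n$-writhes are supplied as dictionaries mapping $n\mapsto J_{n}$, with absent keys treated as $0$:

```python
def lower_bound_2k_distance(k, J, J_prime):
    """Theorem 11: d_{2k}(K, K') >= (1/k) * sum_{n>0} |J_n(K) - J_n(K')|.

    J and J_prime map a nonzero integer n to the n-writhes of K and K'.
    Keys absent from a dictionary are treated as 0."""
    ns = set(J) | set(J_prime)
    return sum(abs(J.get(n, 0) - J_prime.get(n, 0)) for n in ns if n > 0) / k
```

For example, with $k=3$ and writhes shifted by $ak=6$ in degrees $\pm1$ (as in the proof of Proposition 12 below with $a=2$), the bound evaluates to $2$.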
In [@JCK Example 2.4], Jeong, Choi and Kim demonstrated that the lower bound for $\mathrm{u}_{2k}(K)$ in the theorem is sharp for some virtual knots $K$. However, they did not make it clear whether, for a pair of nontrivial virtual knots $K$ and $K'$, the lower bound for $\mathrm{d}_{2k}(K,K')$ is sharp. We answer this by proving the following. **Proposition 12**. *Let $a$ be a positive integer. For any virtual knot $K$, there is a virtual knot $K'$ such that $\mathrm{d}_{2k}(K,K')=a$.* *Proof.* Consider a long virtual knot diagram $T$ whose closure represents the virtual knot $K$. Let $K'$ be the virtual knot represented by the diagram $D$ in the left of Figure [\[pf-prop-distance\]](#pf-prop-distance){reference-type="ref" reference="pf-prop-distance"}. The Gauss diagram $G_{D}$ associated with $D$ is given in the right of the figure, where the boxed part depicts the Gauss diagram $G_{T}$ corresponding to $T$.

[Figure: the diagram $D$, obtained from $T$ by inserting $2ak$ half-twists (left), and its Gauss diagram $G_{D}$ with $2ak$ vertical chords outside the boxed part $G_{T}$ (right).]

Removing the $2ak$ half-twists from $D$ by applying $2k$-moves $a$ times, we can deform $D$ into a diagram of $K$. Therefore we have $\mathrm{d}_{2k}(K,K')\leq a$. The $2ak$ vertical chords in $G_{D}$ consist of $ak$ positive chords with index $1$ and $ak$ positive chords with index $-1$, and the remaining chord of $G_{D}$ excluding the chords in $G_{T}$ has index $0$. Hence it follows from [@ST Lemma 4.3] that $$J_{n}(K')= \begin{cases} J_{1}(K)+ak & (n=1), \\ J_{-1}(K)+ak & (n=-1), \\ J_{n}(K) & (n\ne 0,\pm1). \end{cases}$$ By Theorem [Theorem 11](#thm-lowerbound){reference-type="ref" reference="thm-lowerbound"}, we have $$\mathrm{d}_{2k}(K,K')\ge \frac{1}{k}|J_{1}(K)-J_{1}(K')| =\frac{1}{k}|-ak|=a,$$ which shows $\mathrm{d}_{2k}(K,K')=a$. ◻ We remark that for any positive integer $a$, there is a family of infinitely many virtual knots $\{K_{s}\}_{s=1}^{\infty}$ such that $\mathrm{u}_{2k}(K_{s})=a$ $(s\geq1)$.
In fact, let $K_{s}$ be the virtual knot represented by the diagram in [@NNSW Figure 2.6] with $m=ak$. Then by Theorem [Theorem 11](#thm-lowerbound){reference-type="ref" reference="thm-lowerbound"}, it can be seen that these virtual knots $K_{s}$ form such a family.

- J. S. Carter, S. Kamada and M. Saito, *Stable equivalence of knots on surfaces and virtual knot cobordisms*, J. Knot Theory Ramifications **11** (2002), no. 3, 311--322.
- Z. Cheng, *A polynomial invariant of virtual knots*, Proc. Amer. Math. Soc. **142** (2014), no. 2, 713--725.
- Z. Cheng, D. A. Fedoseev, H. Gao, V. O. Manturov and M. Xu, *From chord parity to chord index*, J. Knot Theory Ramifications **29** (2020), no. 13, 2043004, 26 pp.
- Z. Cheng and H. Gao, *A polynomial invariant of virtual links*, J. Knot Theory Ramifications **22** (2013), no. 12, 1341002, 33 pp.
- M. K. Dabkowski and J. H. Przytycki, *Burnside obstructions to the Montesinos-Nakanishi $3$-move conjecture*, Geom. Topol. **6** (2002), 355--360.
- M. K. Dabkowski and J. H. Przytycki, *Unexpected connections between Burnside groups and knot theory*, Proc. Natl. Acad. Sci. USA **101** (2004), no. 50, 17357--17360.
- M. Goussarov, M. Polyak and O. Viro, *Finite-type invariants of classical and virtual knots*, Topology **39** (2000), no. 5, 1045--1068.
- M.-J. Jeong, Y. Choi and D. Kim, *Twist moves and the affine index polynomials of virtual knots*, J. Knot Theory Ramifications **31** (2022), no. 7, 2250042, 13 pp.
- L. H. Kauffman, *Virtual knot theory*, European J. Combin. **20** (1999), no. 7, 663--690.
- L. H. Kauffman, *A self-linking invariant of virtual knots*, Fund. Math. **184** (2004), 135--158.
- L. H. Kauffman, *An affine index polynomial invariant of virtual knots*, J. Knot Theory Ramifications **22** (2013), no. 4, 1340007, 30 pp.
- S. Kinoshita, *On the distribution of Alexander polynomials of alternating knots and links*, Proc. Amer. Math. Soc. **79** (1980), no. 4, 644--648.
- J.-B. Meilhan, S. Satoh and K. Wada, *Classification of $2$-component virtual links up to $\Xi$-moves*, arXiv:2002.08305.
- H. A. Miyazawa, K. Wada and A. Yasuhara, *Classification of string links up to $2n$-moves and link-homotopy*, Ann. Inst. Fourier (Grenoble) **71** (2021), no. 3, 889--911.
- T. Nakamura, Y. Nakanishi, S. Satoh and K. Wada, *Virtualized $\Delta$-moves for virtual knots and links*, submitted.
- J. H. Przytycki, *$t_{k}$ moves on links*, Contemp. Math. **78**, American Mathematical Society, Providence, RI, 1988, 615--656.
- S. Satoh and K. Taniguchi, *The writhes of a virtual knot*, Fund. Math. **225** (2014), no. 1, 327--342.

[^1]: This work was supported by JSPS KAKENHI Grant Numbers JP21K20327 and JP23K12973.
--- abstract: | Paley graphs are Cayley graphs which are circulant and strongly regular. The Paley-type graph of order a product of two distinct Pythagorean primes was introduced by Angsuman Das. In this paper, we extend the study of Paley-type graphs to order a product of $n$ distinct Pythagorean primes $p_1<p_2<p_3<\dots<p_n$. We determine the adjacency spectrum and the exact number of automorphisms of these graphs. author: - | A. Sivaranjani\ `sivaranjani.a2021@vitstudent.ac.in`\ \ - | S. Radha\ `radha.s@vit.ac.in`\ \ bibliography: - main.bib date: August 2023 title: Spectra and the exact number of Automorphisms of Paley-type graphs of order a product of 'n' distinct primes --- $\begin{aligned} \textbf{Keywords:} \textit{{Paley-type graph, Adjacency spectrum, Automorphisms, Pancyclicity}} \end{aligned}$\ \ **Mathematics Subject Classification:** 05C25, 05C50, 20D45 # Introduction Matrix theory and linear algebra were initially utilised to analyse networks through their adjacency matrices. Algebraic approaches are particularly useful when dealing with regular and symmetric graphs. The study of the relationship between the combinatorial features of a graph and the eigenvalues of matrices associated with it is known as spectral graph theory.

Cayley graphs, named after the mathematician Arthur Cayley, connect the concepts of groups and graphs. When a group $G$ is represented as a Cayley graph, properties such as its size and the number of generators become considerably easier to investigate. Cayley graphs are particularly important in combinatorics due to their ability to express the algebraic structure of groups. They are built to visualise cyclic groups $Z_n$, dihedral groups $D_n$, symmetric groups $S_n$, alternating groups $A_n$, direct and semidirect products, and other finite group representations, and they help in understanding the algebraic structure of groups in relation to representation theory.
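To make the spectral point of view concrete, note that the eigenvalues of a circulant Cayley graph on $\mathbb{Z}_q$ can be computed directly from its connection set. The sketch below is ours and purely illustrative (plain Python, standard library only); for the Paley graph on $\mathbb{Z}_{13}$ it recovers the well-known spectrum $\frac{q-1}{2}$ and $\frac{-1\pm\sqrt{q}}{2}$:

```python
import cmath

def cayley_spectrum(q, S):
    """Eigenvalues of the circulant Cayley graph on Z_q with connection set S:
    lambda_j = sum_{s in S} exp(2*pi*i*j*s/q), for j = 0, ..., q-1."""
    w = 2j * cmath.pi / q
    # S is symmetric (s in S iff -s in S), so each eigenvalue is real.
    return [sum(cmath.exp(w * j * s) for s in S).real for j in range(q)]

# Paley graph on Z_13: the connection set is the set of nonzero quadratic residues.
q = 13
S = sorted({x * x % q for x in range(1, q)})   # {1, 3, 4, 9, 10, 12}
spec = sorted(round(t, 6) for t in cayley_spectrum(q, S))
```

For $q=13$ the computation returns $6=\frac{q-1}{2}$ once and $\frac{-1\pm\sqrt{13}}{2}$ six times each.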
We refer to [@2] for more details on the characteristics of Cayley graphs and their many applications across a variety of fields.

Numerous features of the Kronecker product of graphs have also been examined (under different names, such as direct product, cardinal product, conjunction, tensor product, etc.): structural results \[[@6], [@4], [@7], [@8], [@9]\], Hamiltonian properties of directed [@11] and undirected graphs \[[@10], [@12]\], and, above all, their hyperbolicity (see [@5]). According to open problems in the field, a more in-depth structural understanding of this product would be beneficial.

A Paley graph is a Cayley graph in which $V=\mathbb F_q$ is a finite field of order $q=p^{n}$ with $q\equiv 1 \pmod{4}$ and the generating set $S$ is the set of quadratic residues modulo $q$. Paley graphs are strongly regular, self-complementary, vertex-transitive, arc-transitive and edge-transitive. They find applications in coding theory, in network design (to model and analyze network topologies), and in quantum computing (error detection and correction).

In this paper, we extend the study of Paley-type graphs introduced by Angsuman Das [@1] to order a product of $n$ distinct Pythagorean primes and study some basic graph-theoretic properties using known results. We also determine the adjacency spectrum and the exact number of automorphisms of the Paley-type graph, and we prove that the Paley-type graph $\Gamma_N$ is pancyclic for $p_1 > 5$. # Definitions and Preliminaries For the convenience of the reader, we provide some fundamental definitions, notation and some results, in the form of lemmas and theorems stated without proof. ## Basic Definitions **Definition 1**.
*Let $G$ be a finite group and let $S\subset G$ be such that* - *$e\notin S$ ($e$ is the identity of $G$);* - *if $a\in S$ then $a^{-1}\in S$; and* - *$S$ generates $G$.* *The Cayley graph $\Gamma$ is defined as $\Gamma=(V,E)$ where $V=G$ and\ $E=\{ab \mid a,b \in V$ and $a*b^{-1} \in S\}$.\ * **Definition 2**. ***(Neighborhood of a node $v$ in $\Gamma$)** Let $\Gamma$ be a graph. The neighbourhood of the node $v \in \Gamma$, written as $N(v)$, refers to the collection of nodes adjacent to the node $v$. The closed neighbourhood of $v$ is defined as $N[v]=N(v)\cup \{v\}$.\ * **Definition 3**. ***(Spectrum of Paley graph) [@13]** The eigenvalues of a given undirected Paley graph $\Gamma_q$ on $q$ vertices, where $q \equiv 1\pmod 4$, are $\frac{1}{2}(q-1)$ with multiplicity $1$ and $\frac{1}{2}(-1\pm\sqrt{q})$, each with multiplicity ${\frac{1}{2}(q-1)}$.\ * **Definition 4**. ***(Spectrum of the Kronecker product of adjacency matrices of two graphs) [@14]** Suppose $\Gamma_1$ and $\Gamma_2$ are two graphs of orders $m$ and $n$ respectively. Let **A** be the adjacency matrix of $\Gamma_1$ with eigenvalues $\lambda_1,\lambda_2,...,\lambda_m$ and **B** be the adjacency matrix of $\Gamma_2$ with eigenvalues $\mu_1,\mu_2,...,\mu_n$. The eigenvalues of the Kronecker product of **A** and **B**, i.e., $\textbf{A}\otimes\textbf{B}$, are $\lambda_i\mu_j$ where $i=1,2,...,m$ and $j=1,2,...,n.$\ * **Definition 5**. ***[@3]** A graph $\Gamma$ is said to be **prime** with respect to the Kronecker product if it has more than one node and $\Gamma\cong \Gamma_1\times \Gamma_2$ implies that either $\Gamma_1$ or $\Gamma_2$ equals $K_1^{s}$, where $K_1^{s}$ denotes an isolated node with a loop. The expression $\Gamma\cong \Gamma_1\times \Gamma_2 \times...\times \Gamma_k$ in which each $\Gamma_i$ is prime is called the **prime factorisation** of $\Gamma$.\ * **Definition 6**.
***[@16]** A graph $\Gamma$ is called **R-thin** if no two distinct nodes $a,b \in \Gamma$ have the same open neighborhood; i.e., $N_\Gamma(a)=N_\Gamma(b)$ implies $a=b$.\ * ## Preliminaries Here, we set up some standard notation from number theory: - ${\mathbb{Z}_N}$ denotes the set which contains all integers modulo $N$ - ${\mathbb{Z}^*_N}$ denotes the set which contains all units in ${\mathbb{Z}_N}$ - ${\mathcal{QR}}_N$ denotes the set of all quadratic residues which are also units in ${\mathbb{Z}_N}$ - ${\mathcal{QNR}}_N$ denotes the set of all quadratic non-residues which are also units in ${\mathbb{Z}_N}$ - ${\mathcal{J}}^{+1}_N$ denotes the set of all values from ${\mathbb{Z}^*_N}$ with Jacobi symbol value $+1$ - ${\mathcal{J}}^{-1}_N$ denotes the set of all values from ${\mathbb{Z}^*_N}$ with Jacobi symbol value $-1$ Although this work is an expansion of [@1], the same graph characteristics apply for $n$ distinct primes. The following lemmas can be proved with the help of results from elementary number theory. **Lemma 1**. *If $p_1,p_2,...,p_n$ are $n$ distinct primes with $p_1\equiv p_2\equiv...\equiv p_n \equiv 1 \pmod 4$, then $-1$ is a quadratic residue in $\mathbb{Z}_N$.* **Theorem 2**. *$\Gamma_N$ is isomorphic to the Kronecker product of the Paley graphs $\Gamma_{p_1},\Gamma_{p_2},...,\Gamma_{p_n}$, where $p_i\equiv 1\pmod 4$ for $i=1,2,...,n$; i.e., $\Gamma_N \cong \Gamma_{p_1} \times \Gamma_{p_2}\times...\times \Gamma_{p_n}$.* **Lemma 3**.
*[\[diff\]]{#diff label="diff"} **[@1]** Let $p$ be a Pythagorean prime and $z\in \mathbb{Z}_p$. Then the number of ways in which $z$ can be represented as a difference of two quadratic residues in ${\mathbb{Z}^*_p}$ is* 1. *$\frac{p-1}{2}$ if $z\equiv 0 \pmod p$* 2. *$\frac{p-5}{4}$ if $z \in \mathcal{QR}_p$* 3. *$\frac{p-1}{4}$ if $z \in \mathcal{QNR}_p$* # Paley-type graph and its symmetrical properties We first define the Paley-type graph on the product of $n$ distinct primes and list some of its basic properties through lemmas and theorems. The proofs of the lemmas and theorems can be obtained using the same procedure given in [@1]. **Definition 7**. ***(Paley-type graph modulo $N$)** For $N=p_1p_2...p_n$, $p_{1}<p_2<...<p_n$, the Paley-type graph modulo $N$ is defined as $\Gamma_N=(V,E)$ where $V=\mathbb{Z}_N$ and $E=\{(a,b)\hspace{0.1cm}|\hspace{0.1cm} a-b\in \mathcal{QR}_N\}$.* **Remark 1**: The graph $\Gamma_N$ is a Cayley graph $(G,S)$ where the group $G=(\mathbb{Z}_N,+)$ and the generating set $S=\mathcal{QR}_N$.\ \ **Remark 2**: Throughout this paper we assume $p_i<p_{i+1}$, ${i=1,2,...,n}$. **Lemma 4**. *If $N=p_1p_2...p_n$, then the following hold:* - *$\mathcal{QR}_N$ is a subgroup of ${\mathcal{J}}^{+1}_N$ and ${\mathcal{J}}^{+1}_N$ is a subgroup of ${\mathbb{Z}^*_N}$* - *$|{\mathbb{Z}^*_N}|=\phi(N)=(p_{1}-1)(p_{2}-1)...(p_{n}-1)$* - *$|\mathcal{QR}_N|=\frac{\phi(N)}{2^n}$* - *$|\mathcal{J}^{+1}_N|= |\mathcal{J}^{-1}_N|=\frac{\phi(N)}{2}$* - *$x\in \mathcal{QR}_N \iff x\in \mathcal{QR}_{p_{1}}\cap \mathcal{QR}_{p_{2}}\cap...\cap \mathcal{QR}_{p_{n}}$* **Theorem 5**. *The graph $\Gamma_N$ is Hamiltonian and hence connected.* **Theorem 6**. *The graph $\Gamma_N$ is regular with degree $\frac{\phi(N)}{2^n}$ and hence Eulerian.* **Theorem 7**. *The graph $\Gamma_N$ is both vertex- and edge-transitive.* **Theorem 8**.
*Vertex and edge connectivity of the graph $\Gamma_N$ is given by $\frac{\phi(N)}{2^n}$.* **Note:** $\Gamma_N$ is not self-complementary and not strongly regular. **Lemma 9**. *Let $N=p_1p_2\dots p_n$ where $p_1,p_2,\dots,p_n$ are 'n' distinct primes. Then* - *If $z\in{\mathcal{QR}_{N}}$ then $z$ can be represented as a difference of two quadratic residues i.e., $z={u^2}-{v^2}; u,v\in{{\mathbb{Z}}^*_N}$ in $$\frac{1}{4^n}\prod^{n}_{j=1} {(p_{j}-5)}$$ number of ways\ * **Proof.* If $z \in \mathcal{QR}_N$ then $z \in \mathcal{QR}_{{p}_{j}}$. Hence, the proof follows by the Chinese remainder theorem and by (2) of Lemma [\[diff\]](#diff){reference-type="eqref" reference="diff"}.\  ◻* - *If $z \in {\mathcal{J}}^{+1}_N$ $\backslash$ $\mathcal{QR_{N}}$ and $z\in {{{\mathcal{QNR}_p}_j}_m},j_m \in \{1,2,...,n\}$; $r\leq n$; r is even, then the number of ways in which $z$ can be represented as difference of two quadratic residues given under two cases:\ Case (i): if $r=n$, then $$\frac{1}{4^n}\prod^{n}_{j_m=1} {(p_{j_m}-1)}$$\ Case(ii): if $r<n$, then $$\frac{1}{4^n} \left[\prod^{r}_{m=1} ({p_{j_m}}-1) \prod^{n}_{k=r+1} {{{(p_j}_k}-5)}\right]$$\ * **Proof.* Case (i): If $z \in {\mathcal{J}}^{+1}_N$ $\backslash$ $\mathcal{QR_{N}}$ and $z\in {{\mathcal{QNR}_p}_j}_m$ ; $r=n$ then by Chinese remainder theorem and by (3) of Lemma [\[diff\]](#diff){reference-type="eqref" reference="diff"}, the proof follows.\ \ \ Case(ii): If $z \in {\mathcal{J}}^{+1}_N$ $\backslash$ $\mathcal{QR_{N}}$ and $z\in {{\mathcal{QNR}_p}_j}_m$; $r<n$ and even, then by the Chinese remainder theorem and by (3) and (2) of Lemma[\[diff\]](#diff){reference-type="eqref" reference="diff"} the proof follows.\  ◻* - *If $z\in {\mathcal{J}}^{-1}_N$ and $z\notin {{\mathcal{QR}_p}_j}_k, j_k \in \{1,2,...,n\}$; $r\leq n;$ r is odd, then the number of ways in which $z$ can be represented as difference of two quadratic residues is given under two cases:\ Case (i): if $r=n$, then $$\frac{1}{4^n}\prod^{n}_{j_k=1} 
{(p_{j_k}-1)}$$\ Case (ii): if $r<n$, then $$\frac{1}{4^n} \left[ \prod^{r}_{k=1} {{(p_j}_k}-5) \prod^{n}_{m=r+1} {{{(p_j}_m}-1)}\right]$$\ * **Proof.* Case (i): If $z\in {\mathcal{J}}^{-1}_N$ and $z\notin {{\mathcal{QR}_p}_j}_k$ ,$r=n$ then by Chinese remainder theorem and by (3) of Lemma [\[diff\]](#diff){reference-type="eqref" reference="diff"}, we complete the proof.\ Case(ii): If $z\in {\mathcal{J}}^{-1}_N$ and $z \notin {{\mathcal{QR}_p}_j}_k$ ; $r<n$ and odd then by applying (2) of Lemma [\[diff\]](#diff){reference-type="eqref" reference="diff"} for ${{p_j}_k}$ and (3) of lemma [\[diff\]](#diff){reference-type="eqref" reference="diff"} for ${{p_j}_m}$ and by using Chinese remainder theorem we get the proof. ◻* - *If $z(\neq 0) \in {\mathbb{Z}_N} \backslash {\mathbb{Z}}^*_N$ i.e., z is a non-unit non-zero in ${\mathbb{Z}_N}$ then\ Case (i): For any $p_1,p_2,\dots,p_{n}$ if $z \equiv 0 \mod \prod^{n-1}_{m=1} {{p_j}_m}$ and $z \in {{\mathcal{QR}_p}_j}_k; k\in \{1,2,...n\};\hspace{0.2cm} j_m\neq j_k$ then 'z' can be represented as a difference of two quadratic residues is $${\left[ \prod^{n-1}_{m=1} \frac{({{p_j}_m}-1)} {2^{n-1}}\right]} {\left[\frac{({{p_j}_k}-5)}{4}\right]}$$ number of ways\ \ Case(ii): If $z \equiv 0 \mod \prod^{r}_{m=1} {{p_j}_m}$ and $z \in {{\mathcal{QNR}_p}_j}_k;j_m\neq j_k$ where $r<n;\hspace{0.2cm} j_m,j_k \in \{1,2,...,n\}$ then $'z'$ can be represented as a difference of two quadratic residues is $${\left[ \prod^{n}_{j=1} \frac{(p_{j}-1)}{2^{r} .\hspace{0.1cm} 4^{n-r}} \right]}$$ number of ways* **Proof.* Case(i): As $z \equiv 0 \mod \prod^{n-1}_{m=1} {{p_j}_m}$ and $z \in {{\mathcal{QR}_p}_j}_k$ then by applying Chinese remainder theorem and by (1),(2) of Lemma [\[diff\]](#diff){reference-type="eqref" reference="diff"} we complete the proof.\ \ Case (ii): As $z \equiv 0 \mod \prod^{r}_{m=1} {{p_j}_m}$ and $z \in {{\mathcal{QNR}_p}_j}_k$ then by applying (1),(3) of lemma [\[diff\]](#diff){reference-type="eqref" reference="diff"} and 
by the Chinese remainder theorem we complete the proof. ◻* # Adjacency spectra of Paley-Type Graphs In this section, using the properties and results of the Kronecker product, we obtain the adjacency spectrum of $\Gamma_N$. **Theorem 10**. *Let $\Gamma_N$ be a Paley-type graph where $N=p_1p_2...p_n$. Then there are $3^n$ distinct eigenvalues in the adjacency spectrum.* *Proof.* Let us first consider a Paley-type graph on two distinct primes $p_1,p_2$, say $\Gamma_{p_{1}p_{2}}$. Then the adjacency spectrum of $\Gamma_{p_{1}p_{2}}$ can be obtained using Definition [\[spec\]](#spec){reference-type="eqref" reference="spec"} and is given in the following table.

| Eigenvalues | Number of distinct eigenvalues | Multiplicity of each |
|---|---|---|
| $\frac{(p_{1}-1)(p_{2}-1)}{2^2}$ | $2^0$ | $1$ |
| $\frac{(\pm \sqrt{p_{1}}-1)(p_{2}-1)}{2^2}$ | $2^1$ | $\frac{(p_1-1)}{2}$ |
| $\frac{(\pm \sqrt{p_{2}}-1)(p_{1}-1)}{2^2}$ | $2^1$ | $\frac{(p_2-1)}{2}$ |
| $\frac{(\pm \sqrt{p_1}-1)(\pm \sqrt{p_2}-1)}{2^2}$ | $2^2$ | $\frac{(p_1-1)(p_2-1)}{2^2}$ |

Therefore the number of distinct eigenvalues of $\Gamma_{p_{1}p_{2}}$ is $$\begin{aligned} \binom{2}{0} 2^0+\binom{2}{1} 2^1+\binom{2}{2} 2^2 =(1+2)^2 =3^2. \end{aligned}$$ Extending the above to $N=p_1p_2...p_n$, we summarise the spectrum of a Paley-type graph $\Gamma_N$ below.

| Eigenvalues | Number of distinct eigenvalues | Multiplicity of each |
|---|---|---|
| $\frac{(p_1-1)(p_2-1)\cdots(p_n-1)}{2^n}$ | $2^0$ | $1$ |
| $\frac{(\pm \sqrt{p_1}-1)(p_2-1)\cdots(p_n-1)}{2^n}$ | $2^1$ | $\frac{(p_1-1)}{2}$ |
| $\frac{(p_1-1)(\pm \sqrt{p_2}-1)\cdots(p_n-1)}{2^n}$ | $2^1$ | $\frac{(p_2-1)}{2}$ |
| $\cdots$ | $\cdots$ | $\cdots$ |
| $\frac{(p_1-1)(p_2-1)\cdots(\pm \sqrt{p_n}-1)}{2^n}$ | $2^1$ | $\frac{(p_n-1)}{2}$ |
| $\frac{(\pm \sqrt{p_1}-1)(\pm \sqrt{p_2}-1)(p_3-1)\cdots(p_n-1)}{2^n}$ | $2^2$ | $\frac{(p_1-1)}{2} \frac{(p_2-1)}{2}$ |
| $\frac{(p_1-1)(\pm \sqrt{p_2}-1)(\pm \sqrt{p_3}-1)\cdots(p_n-1)}{2^n}$ | $2^2$ | $\frac{(p_2-1)}{2} \frac{(p_3-1)}{2}$ |
| $\cdots$ | $\cdots$ | $\cdots$ |
| $\frac{(p_1-1)(p_2-1)\cdots(\pm \sqrt{p_{n-1}}-1)(\pm \sqrt{p_n}-1)}{2^n}$ | $2^2$ | $\frac{(p_{n-1}-1)}{2} \frac{(p_n-1)}{2}$ |
| $\cdots$ | $\cdots$ | $\cdots$ |
| $\frac{(\pm \sqrt{p_1}-1)(\pm \sqrt{p_2}-1)\cdots(\pm \sqrt{p_{n-1}}-1)(\pm \sqrt{p_n}-1)}{2^n}$ | $2^n$ | $\frac{(p_{1}-1)(p_{2}-1)\cdots(p_{n}-1)}{2^n}$ |

Hence the number of distinct eigenvalues is $$\begin{aligned} \binom{n}{0} 2^0+\binom{n}{1} 2^1+\binom{n}{2} 2^2+...+\binom{n}{n} 2^n =(1+2)^n =3^n. \end{aligned}$$ ◻ **Corollary 1**. *The least eigenvalue of the Paley-type graph $\Gamma_N$ is\ $\frac{(-\sqrt{p_1}-1)(p_2-1)\cdots(p_n-1)}{2^n}$.* # Order of the Automorphism group of Paley-type Graph **Lemma 11**. *The Paley-type graph $\Gamma_N$ on $N=p_1 p_2...p_n$ vertices is R-thin.* *Proof.* To prove this, let us assume $N_{\Gamma_N}(a)=N_{\Gamma_N}(b)$ and prove $a=b$.\ Now $$N_{\Gamma_N}(a)=\{q+a\mid x^2\equiv q\mod N, x\in \mathbb{Z}_N, q\in \mathcal{QR}_N\}$$ $$N_{\Gamma_N}(b)=\{q+b\mid x^2\equiv q\mod N, x\in \mathbb{Z}_N, q\in \mathcal{QR}_N\}$$ Then $$q+a=q+b$$ implies $a=b$. Thus the graph $\Gamma_N$ is R-thin. ◻ **Lemma 12**. *The Paley-type graph $\Gamma_N$ has the prime factorisation $\Gamma_N \cong \Gamma_{p_1}\times\Gamma_{p_2}\times...\times\Gamma_{p_n}$.* *Proof.* By Definition 5, each $\Gamma_{p_i}$ can be written as ${K_1}^s \times \Gamma_{p_i} \cong \Gamma_{p_i}$, so each $\Gamma_{p_i}$ is prime. Hence $\Gamma_{p_1}\times\Gamma_{p_2}\times...\times\Gamma_{p_n}$ is a prime factorisation of $\Gamma_N$. ◻ **Lemma 13**.
*[@3] Let $\Gamma$ be a connected, non-bipartite and R-thin graph with prime factorisation $$\Gamma=\Gamma_1\times \Gamma_2\times...\times \Gamma_k,$$ and let $\phi$ be an automorphism of $\Gamma$. Then there exists a permutation $\tau$ of $\{1,2,...,k\}$ together with isomorphisms $\phi_i:\Gamma_{\tau(i)} \rightarrow \Gamma_i$ such that $$\phi(x_1,x_2,...,x_k)=(\phi_1(x_{\tau(1)}),\phi_2(x_{\tau(2)}),...,\phi_k(x_{\tau(k)})).$$ Thus, the automorphism group of $\Gamma$ is generated by the automorphisms of the prime factors and transpositions of isomorphic factors. Hence, Aut($\Gamma$) is isomorphic to the automorphism group of the disjoint union of the prime factors of $\Gamma$.\ * We use the above lemmas and propositions to prove the following theorem.\ **Theorem 14**. *Let $Aut(\Gamma_N)$ denote the automorphism group of $\Gamma_N$. Then the number of automorphisms is $|Aut(\Gamma_N)|=\frac{N\phi(N)}{2^n}$.* *Proof.* The Paley-type graph $\Gamma_N$ is connected and non-bipartite, which is clear from the adjacency spectrum of $\Gamma_N$ (its least eigenvalue is not the negative of its degree). The graph $\Gamma_N$ is R-thin by Lemma [Lemma 11](#rthinlemma){reference-type="ref" reference="rthinlemma"}. Also each $\Gamma_{p_i}$ is prime and $\Gamma_N$ has a prime factorisation $\Gamma_{p_1}\times\Gamma_{p_2}\times...\times\Gamma_{p_n}$ by Lemma [Lemma 12](#prime){reference-type="ref" reference="prime"}.
Hence the graph $\Gamma_N$ is a connected, non-bipartite, R-thin graph with prime factorisation $\Gamma_{p_1}\times\Gamma_{p_2}\times\cdots\times\Gamma_{p_n}$.\
Then, by Lemma [Lemma 13](#aut){reference-type="ref" reference="aut"}, $$\label{equ1} \begin{aligned} |Aut({\Gamma_N})|= |Aut( {\Gamma_{p_1}})| \times |Aut ({\Gamma_{p_2}})| \times \hdots \times |Aut({\Gamma_{p_n}})|. \end{aligned}$$ We know that the Paley graph of order $q=p^e$ has $|Aut(\Gamma_q)|=\frac{q(q-1)e}{2}$. For the prime factors of a Paley-type graph we have $q=p_i$ (so $e=1$), and the order is $|Aut(\Gamma_{p_i})|=\frac{p_i({p_i}-1)}{2}$.\
By [\[equ1\]](#equ1){reference-type="eqref" reference="equ1"}, $$\begin{aligned} |Aut(\Gamma_N)|& =\frac{p_1(p_1-1)}{2}\times\frac{p_2(p_2-1)}{2}\times\hdots\times\frac{p_n(p_n-1)}{2}\\ & =\frac{p_1\times p_2 \times \hdots \times p_n \times (p_1-1)\times(p_2-1)\times \hdots \times (p_n-1)}{2^n} \\& =\frac{N\phi(N)}{2^n}. \end{aligned}$$ ◻

# Pancyclicity of Paley-type graphs

**Lemma 15**. *The Kronecker product of two pancyclic graphs is pancyclic.*

*Proof.* Let $G_1$ and $G_2$ be two pancyclic graphs of orders $n_1$ and $n_2$, respectively. Since $G_1$ and $G_2$ are pancyclic, they have cycles of length $k$ for $3\leq k \leq n_1$ and $3\leq k \leq n_2$, respectively. Consider $k$-cycles $x_1 x_2\ldots x_k x_1 \in G_1$ and $y_1 y_2\ldots y_k y_1 \in G_2$. By the definition of the Kronecker product, $(x_1,y_1) (x_2,y_2)\ldots(x_k,y_k)(x_1,y_1)$ is a $k$-cycle in $G_1 \times G_2$. This is true for all $k$. Hence $G_1 \times G_2$ is pancyclic. ◻

**Theorem 16**. *[@17] The Paley-type graph $\Gamma_N$ on $N=p_1p_2\cdots p_n$, $p_1 > 5$ is pancyclic.*

*Proof.* $\Gamma_N$ is the Kronecker product of the Paley graphs $\Gamma_{p_1}\times\Gamma_{p_2}\times\cdots\times\Gamma_{p_n}$ by Theorem [\[iso\]](#iso){reference-type="eqref" reference="iso"}. By the above Lemma [\[pan\]](#pan){reference-type="eqref" reference="pan"}, $\Gamma_N$ is pancyclic.
◻

# Statements and Declarations {#statements-and-declarations .unnumbered}

The authors declare that no funds, grants or other support were received during the preparation of this manuscript.

# Conclusion {#conclusion .unnumbered}

In this article, we have extended the study of Paley-type graphs and determined their adjacency spectrum, the exact number of their automorphisms and their pancyclicity. The metric, local metric and partition metric dimensions of Paley-type graphs remain open.
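The structural fact underlying the spectrum and automorphism counts, namely that $\Gamma_N$ is the Kronecker product of the Paley graphs $\Gamma_{p_i}$, can be confirmed directly for small $N$. The following sketch is not part of the paper; it assumes the Paley-type graph $\Gamma_N$ joins $a$ and $b$ exactly when $a-b$ is the square of a unit modulo $N$ (as in the neighbourhood description of Lemma 11):

```python
from math import gcd

def square_set(n):
    # nonzero quadratic residues arising from units modulo n
    return {x * x % n for x in range(1, n) if gcd(x, n) == 1}

p1, p2 = 5, 13            # both congruent to 1 mod 4
N = p1 * p2
QR_N, QR_1, QR_2 = square_set(N), square_set(p1), square_set(p2)

adj_N = lambda a, b: (a - b) % N in QR_N
adj_1 = lambda a, b: (a - b) % p1 in QR_1
adj_2 = lambda a, b: (a - b) % p2 in QR_2

# Gamma_65 coincides with the Kronecker product Gamma_5 x Gamma_13 (via CRT):
for a in range(N):
    for b in range(N):
        assert adj_N(a, b) == (adj_1(a % p1, b % p1) and adj_2(a % p2, b % p2))

# each vertex has degree phi(N)/2^n = (p1 - 1)(p2 - 1)/4 = 12
assert sum(adj_N(0, b) for b in range(1, N)) == (p1 - 1) * (p2 - 1) // 4
print("Kronecker product structure verified for N =", N)
```

The loop checks all ordered vertex pairs, so it simultaneously confirms that the adjacency relation, and hence the whole spectrum discussion, factors through the two Paley components.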
--- abstract: | We study subsets of $\mathbb{F}_p^n$ that do not contain progressions of length $k$. We denote by $r_k(\mathbb{F}_p^n)$ the cardinality of such subsets containing a maximal number of elements. In this paper we focus on the case $k=p$ and therefore sets containing no full line. A trivial lower bound $r_p(\mathbb{F}_p^n)\geq(p-1)^n$ is achieved by a hypercube of side length $p-1$ and it is known that equality holds for $n\in\{1,2\}$. We will however show $r_p(\mathbb{F}_p^3)\geq (p-1)^3+p-2\sqrt{p}$, which is the first improvement in the three-dimensional case that is increasing in $p$. We will also give the upper bound $r_p(\mathbb{F}_p^{3})\leq p^3-2p^2-(\sqrt{2}-1)p+2$ as well as generalizations for higher dimensions. Finally, we present some bounds for individual $p$ and $n$, in particular $r_5(\mathbb{F}_5^{3})\geq 70$ and $r_7(\mathbb{F}_7^{3})\geq 225$, which can be used to give the asymptotic lower bound $4.121^n$ for $r_5(\mathbb{F}_5^{n})$ and $6.082^n$ for $r_7(\mathbb{F}_7^{n})$. address: - Christian Elsholtz and Jakob Führer, Institute of Analysis and Number Theory, Graz University of Technology, Kopernikusgasse 24/II, 8010 Graz, Austria. - Erik Füredi, ELTE Eötvös Loránd University Faculty of Science, 1117 Budapest, Pázmány Péter sétány 1/A, Hungary. - Benedek Kovács, ELTE Linear Hypergraphs Research Group, Eötvös Loránd University, 1117 Budapest, Pázmány Péter sétány 1/A, Hungary. - | Péter Pál Pach, Department of Computer Science and Information Theory,\ Budapest University of Technology and Economics,\ Műegyetem rkp. 3., H-1111 Budapest, Hungary;\ MTA-BME Lendület Arithmetic Combinatorics Research Group, ELKH,\ Műegyetem rkp. 3., H-1111 Budapest, Hungary;\ Extremal Combinatorics and Probability Group (ECOPRO), Institute for Basic Science (IBS), South Korea. - Dániel Gábor Simon, Alfréd Rényi Institute of Mathematics, Reáltanoda street 13-15, H-1053 Budapest, Hungary.
- Nóra Velich, University of Cambridge, Lucy Cavendish College, Lady Margaret Road, Cambridge CB3 0BU, UK. author: - Christian Elsholtz - Jakob Führer - Erik Füredi - Benedek Kovács - Péter Pál Pach - Dániel Gábor Simon - Nóra Velich bibliography: - references-biblatex.bib date: - - title: Maximal line-free sets in $\mathbb{F}_p^n$ --- # Introduction In the intersection of finite geometry and extremal combinatorics, numerous problems of finding maximal subsets of affine or projective spaces avoiding certain configurations have been studied. One natural question asks for bounds on the cardinality of subsets of the $n$-dimensional affine space over a finite field $\mathbb{F}_q$ that do not contain a full line. We denote by $r_k(\mathbb{F}_p^n)$ the cardinality of a subset $S\subseteq \mathbb{F}_p^n$ containing a maximal number of elements such that $S$ contains no $k$ points in arithmetic progression. Note that in the case when $k=p$ is a prime, $k$-progressions in $\mathbb{F}_p^n$ correspond to lines in the $n$-dimensional affine space and we are therefore interested in bounds on $r_p(\mathbb{F}_p^n)$. When $p=3$ the problem coincides with the cap set problem, a well-studied area where one can use the fact that $x,y,z$ form a line exactly when they fulfil a non-trivial linear equation $ax+by+cz=0$ where $a+b+c=0$. Ellenberg and Gijswijt [@EG] gave the first exponential improvement to the trivial upper bound of $3^n$ with $r_3(\mathbb{F}_3^{n})<2.756^n$ for large enough $n$, which was further improved by Jiang [@Jiang] by a factor of $\sqrt{n}$. The best lower bound was given by Tyrrell [@Tyrrell] with $r_3(\mathbb{F}_3^{n})>2.218^n$ for large enough $n$. The exact values of $r_3(\mathbb{F}_3^{n})$ are known up to $n=6$, where $r_3(\mathbb{F}_3^{6})=112$ was proven by Potechin [@Potechin]. For the general case surprisingly few results on $r_p({\mathbb F}_p^n)$ are known.
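For very small parameters, $r_k(\mathbb{F}_p^n)$ can be computed by exhaustive search directly from the definition. A minimal brute-force sketch (illustrative only; these are not the methods of this paper) recovering $r_3(\mathbb{F}_3^2)=4$:

```python
from itertools import combinations, product

p, n = 3, 2
points = list(product(range(p), repeat=n))

def progression_free(S):
    # for k = p, a p-term progression is exactly a full affine line
    S = set(S)
    for a in S:
        for d in product(range(p), repeat=n):
            if any(d):
                line = {tuple((a[i] + t * d[i]) % p for i in range(n)) for t in range(p)}
                if line <= S:
                    return False
    return True

# largest k admitting a progression-free k-subset of F_3^2
r = next(k for k in range(p ** n, 0, -1)
         if any(progression_free(S) for S in combinations(points, k)))
assert r == (p - 1) ** n  # the hypercube [0, p-2]^n is optimal here
print("r_3(F_3^2) =", r)  # -> 4
```

The same search is hopeless already for $\mathbb{F}_5^3$, which is why the paper resorts to constructions and counting arguments instead.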
There is the trivial lower bound $r_p(\mathbb{F}_p^n)\geq(p-1)^n$ achieved by a hypercube of side length $p-1$. Jamison [@Jamison] and Brouwer and Schrijver [@BS] independently proved that this is sharp for $n=2$. For $n=3$ the only improvement to this construction was by a single point described in the post of Zare in a MathOverflow thread [@Mathoverflow]. We will prove the following lower bounds: **Theorem 1**. *Let $p\geq 5$ be a prime. Then $$r_p(\mathbb{F}_p^3) \geq (p-1)^3+p-2\sqrt{p}=p^3-3p^2+4p-2\sqrt{p}-1.$$* This can be improved in some special cases. **Theorem 2**. *Let $p$ be a prime with $p\equiv 7$ [(mod $24$)]{.nodecor}. Then $$r_p(\mathbb{F}_p^3)\geq(p-1)^3+(p-1)=p^3-3p^2+4p-2.$$* *Moreover, $r_7(\mathbb{F}_7^3)\geq 225$.* The simple upper bound $r_p(\mathbb{F}_p^n)\leq p^n-\frac{p^n-1}{p-1}$ was given by Aleksanyan and Papikian [@AP] and is achieved by removing at least one point from each line going through a fixed point. In particular $r_p(\mathbb{F}_p^3)\leq p^3-p^2-p-1$. This was improved by Bishnoi et al. [@BDGP] to $p^n-2p^{n-1}+1$ and $p^3-2p^{2}+1$, respectively. We will give the following new bounds: **Theorem 3**. *Let $p\geq 3$ be a prime, $k\in \{3,4,\ldots,p\}$ and $n\in \mathbb{N}$. Then* *$${r_k(\mathbb{F}_p^{n+1})}\leq \frac{2(p^ {n+1}-1)r_k(\mathbb{F}_p^n)+p^n-\sqrt{4(p^ {n+1}-1)r_k(\mathbb{F}_p^n)(p^n-r_k(\mathbb{F}_p^n))+p^{2n}}}{2p^n},$$* where the three-dimensional case gives the following corollary. **Corollary 4**. *Let $p\geq 3$ be a prime. Then $$r_p(\mathbb{F}_p^3) \leq\frac{2p^5-4p^4+2p^3-p^2+4p-2-\sqrt{8p^6-20p^5+17p^4-12p^3+20p^2-16p+4}}{2p^2},$$ in particular, $$r_p(\mathbb{F}_p^3)\leq p^3-2p^2-(\sqrt{2}-1)p+2.$$* For other dimensions, there is the lower bound $r_p(\mathbb{F}_p^{2p})\geq p(p-1)^{2p-1}$ due to Frankl et al. [@FGR], using large sunflower-free sets.
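The numerical content of Corollary 4 is easy to check. The following sketch (ours, not from the paper) evaluates the exact root from the corollary for a few small primes and compares it with the simplified bound and with the earlier bound $p^3-2p^2+1$ of Bishnoi et al.:

```python
from math import sqrt, floor

def exact_bound(p):
    # the root from Corollary 4, using r_p(F_p^2) = (p-1)^2
    num = 2 * p**5 - 4 * p**4 + 2 * p**3 - p**2 + 4 * p - 2
    disc = 8 * p**6 - 20 * p**5 + 17 * p**4 - 12 * p**3 + 20 * p**2 - 16 * p + 4
    return (num - sqrt(disc)) / (2 * p**2)

for p in (3, 5, 7, 11, 13):
    simplified = p**3 - 2 * p**2 - (sqrt(2) - 1) * p + 2
    assert exact_bound(p) <= simplified        # Corollary 4's simplification
    assert simplified < p**3 - 2 * p**2 + 1    # strictly better than the earlier bound

# for p = 5 the exact root already gives r_5(F_5^3) <= 74,
# the starting point sharpened in Theorem 5
assert floor(exact_bound(5)) == 74
```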
We found a $70$-point $5$-progression-free set in $\mathbb{F}_5^3$ via a branch and cut approach (see Figure [\[fig70\]](#fig70){reference-type="ref" reference="fig70"}) and we will show the following upper bounds for small primes. **Theorem 5**. *$r_5(\mathbb{F}_5^3)<74.$* **Theorem 6**. *$r_7(\mathbb{F}_7^3)<243.$* One can use the tensor product $S_1\times S_2$ of two line-free sets $S_1\subseteq \mathbb{F}_p^{n_1}$, $S_2\subseteq \mathbb{F}_p^{n_2}$ to get a line-free set in the higher dimension $n_1+n_2$. This construction also provides the lower bound $|S_1|^{1/{n_1}}$ for $\alpha_p:=\lim\limits_{n \rightarrow \infty}(r_p(\mathbb{F}_p^n))^{1/n}$ and therefore the asymptotic lower bound $(|S_1|^{1/{n_1}}-o(1))^n$ for $r_p(\mathbb{F}_p^n)$ (see e.g. [@SET], [@Pach22]). The strongest known lower bound for general $p$ is $\alpha_p\geq p^{1/{2p}}(p-1)^{(2p-1)/{2p}}$ using the results of Frankl et al. [@FGR], however for small primes the new three-dimensional lower bounds $r_5(\mathbb{F}_5^3)\geq 70$ and $r_7(\mathbb{F}_7^3)\geq 225$ give better lower bounds, namely, $\alpha_5\geq 4.121$ and $\alpha_7\geq 6.082$. We will also show the following explicit lower bound for arbitrary dimension. **Theorem 7**. *Let $p\geq 3$ be a prime. Then $r_p(\mathbb{F}_p^n)\geq(p-1)^n+\frac{n-2}{2}(p-1)(p-2)^{n-3}.$* # Related results - Davis and Maclagan [@SET] studied the card game SET, where the cards can be described as points in $\mathbb{F}_3^4$ and one is interested whether the displayed cards form a cap set. The results of Tyrrell [@Tyrrell] build on the construction of Edel [@Edel], who gave the previously best lower bound for cap sets. Elsholtz and Lipnik [@ElsholtzLipnik] and Elsholtz and Pach [@ElsholtzPach] studied cap sets in other spaces than $\mathbb{F}_3$. - Croot et al. [@CrootLevPach] gave an upper bound for $3$-progression-free sets in $\mathbb{Z}_4^n$ that is exponentially smaller than $4^n$. Their methods also led to the results of Ellenberg and Gijswijt [@EG].
Petrov and Pohoata [@PetrovPohoata] gave an improved upper bound for $3$-progression-free sets in $\mathbb{Z}_8^n$, Pach and Palincza [@PachPalincza] gave both upper and lower bounds for $6$-progression-free sets in $\mathbb{Z}_6^n$. Elsholtz et al. [@EKL] studied the general case of $k$-progression-free sets in $\mathbb{Z}_m^n$. An overview of known bounds is given by Pach [@Pach22]. - Moser [@MoserProblem] asked for the maximal size of a subset of $\{1,2,...,k\}^n$ without a geometric line. Similarly, Hales and Jewett asked for a subset without a combinatorial line. The result of Furstenberg and Katznelson [@FK] also known as the density Hales--Jewett theorem implies that in both cases these sets have to be asymptotically smaller than $k^n$ as $n$ tends to infinity. Polymath [@PolymathMoser] gave some explicit bounds for special cases. - Sets that intersect every affine subspace of codimension $s$ are called $s$-blocking sets. The complement of a line-free set in a finite $n$-dimensional affine space is therefore also called an $(n-1)$-blocking set. It is known that the union of any $n$ independent lines intersecting in a single point forms a $1$-blocking set in $\mathbb{F}_p^n$, which is optimal (see e.g. [@AlonTFHA], [@BS], [@Jamison]). However, for $(n-1)$-blocking sets, the union of $n$ independent hyperplanes, which seems to be the obvious algebraic construction, is not optimal, as will be shown in this paper. Bishnoi et al. [@BDGP] gave several upper bounds for the size of $s$-blocking sets.
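The asymptotic consequences quoted in the introduction can be reproduced numerically. A small sketch (using only the constants stated in the text) comparing the new three-dimensional bounds with the bound of Frankl et al.:

```python
# alpha_p >= p^(1/2p) * (p-1)^((2p-1)/(2p)) from Frankl et al.,
# versus |S|^(1/3) from the new three-dimensional sets
def frankl_bound(p):
    return p ** (1 / (2 * p)) * (p - 1) ** ((2 * p - 1) / (2 * p))

alpha5, alpha7 = 70 ** (1 / 3), 225 ** (1 / 3)
assert alpha5 > frankl_bound(5) and alpha5 > 4.121
assert alpha7 > frankl_bound(7) and alpha7 > 6.082
print(round(alpha5, 3), round(alpha7, 3))  # the constants 4.121 and 6.082 from the text
```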
Given a subset $S\subseteq \mathbb{F}_p^3$, we call the image of $S\cap (\{j\}\times\mathbb{F}_p^2)$ under the projection $\phi\colon\mathbb{F}_p^3\longrightarrow\mathbb{F}_p^2$, $(a,b,c)\mapsto\left(b,c\right)$ the $j$-layer of $S$. # Proofs of the upper bounds *Proof of Theorem [Theorem 3](#ubmain){reference-type="ref" reference="ubmain"}.* Let $A\subseteq \mathbb{F}_p^{n+1}$ be $k$-progression-free. We count the number of point pairs lying on $n$-dimensional hyperplanes, $$s = \left| \{ (\{a,b\},S)\ |\ a,b\in A,\ a\ne b,\ a,b\in S,\ \text{$S$ is an $n$-dimensional hyperplane}\}\right| .$$ On every hyperplane, the number of points is at most $r_k(\mathbb{F}_p^n)$. Firstly, we assume $r_k(\mathbb{F}_p^{n+1})\geq(p-1)r_k(\mathbb{F}_p^n)$; then the sum of the numbers of point pairs for $p$ parallel hyperplanes is maximal if there are $p-1$ hyperplanes with $r_k(\mathbb{F}_p^n)$ points and one with $r_k(\mathbb{F}_p^{n+1})-(p-1)r_k(\mathbb{F}_p^n)$. There are $\frac{p^{n+1}-1}{p-1}$ disjoint sets of parallel hyperplanes, so $$s\leq\Big(\frac{p^{n+1}-1}{p-1}\Big)\Big((p-1)\binom{r_k(\mathbb{F}_p^n)}{2}+ \binom{r_k(\mathbb{F}_p^{n+1})-(p-1)r_k(\mathbb{F}_p^{n})}{2}\Big).$$ Note here that this inequality still holds if $r_k(\mathbb{F}_p^{n+1})<(p-1)r_k(\mathbb{F}_p^n)$, as in this case the number of point pairs is clearly less than $$(p-1)\binom{r_k(\mathbb{F}_p^n)}{2}$$ and $$\binom{r_k(\mathbb{F}_p^{n+1})-(p-1)r_k(\mathbb{F}_p^{n})}{2}\geq 0.$$ On the other hand, every point pair defines a line that is included in exactly $\frac{p^n-1}{p-1}$ $n$-dimensional hyperplanes, so $$s=\frac{p^n-1}{p-1}\binom{r_k(\mathbb{F}_p^{n+1})}{2}.$$ We get the quadratic inequality $$p^n\big(r_k(\mathbb{F}_p^{n+1})\big)^2-\big(p^n+2(p^{n+1}-1)r_k(\mathbb{F}_p^{n})\big)r_k(\mathbb{F}_p^{n+1})+(p^{n+2}-p)\big(r_k(\mathbb{F}_p^{n})\big)^2\geq 0$$ with roots $$\frac{2(p^ {n+1}-1)r_k(\mathbb{F}_p^n)+p^n\pm\sqrt{4(p^ {n+1}-1)r_k(\mathbb{F}_p^n)(p^n-r_k(\mathbb{F}_p^n))+p^{2n}}}{2p^n}.$$ As
$${r_k(\mathbb{F}_p^{n+1})}\leq p(r_k(\mathbb{F}_p^n))$$ but $$\begin{split} & \frac{2(p^{n+1}-1)r_k(\mathbb{F}_p^n)+p^n+\sqrt{4(p^ {n+1}-1)r_k(\mathbb{F}_p^n)(p^n-r_k(\mathbb{F}_p^n))+p^{2n}}}{2p^n} \\ > \; & p (r_k(\mathbb{F}_p^n))+\frac{1}{2}-\frac{r_k(\mathbb{F}_p^n)}{p^n}+\frac{\sqrt{p^{2n}}}{2p^n}\geq p (r_k(\mathbb{F}_p^n))+\frac{1}{2}-1+\frac{1}{2}= p (r_k(\mathbb{F}_p^n)), \end{split}$$ the theorem follows. ◻ *Proof of Corollary [Corollary 4](#cor-upp){reference-type="ref" reference="cor-upp"}.* The first statement follows immediately from Theorem [Theorem 3](#ubmain){reference-type="ref" reference="ubmain"} using $r_p(\mathbb{F}_p^2)=(p-1)^2$. For the second statement we use that $8p^6-20p^5+17p^4-12p^3+20p^2-16p+4$ can be bounded by $(2\sqrt{2}p^3-5/\sqrt{2}p^2)^2$ from below for $p\geq 3$ and we get $$\begin{split} r_p(\mathbb{F}_p^3)\leq & \; p^3-2p^2+p-\frac{1}{2}+\frac{2}{p}-\frac{1}{p^2}-\sqrt{2}p+\frac{5}{2 \sqrt{2}} \\ \leq & \; p^3-2p^2-(\sqrt{2}-1)p-\frac{1}{2}+\frac{2}{3}+\frac{5}{2 \sqrt{2}} \\ \leq & \; p^3-2p^2-(\sqrt{2}-1)p+2. \end{split}$$ ◻ *Proof of Theorem [Theorem 5](#thm-r5){reference-type="ref" reference="thm-r5"}.* Assume that $S\subseteq \mathbb{F}_5^3$ is a $5$-progression-free set of size $74$. We will compute a weighted sum over all lines containing $4$ points to reach a contradiction. Let us call a line containing exactly $r$ points an $r$-line. Let $\ell$ be a $4$-line in $S$ and let $H_1,H_2,...,H_6$ be the planes containing $\ell$. Then $\sum_{i=1}^6|H_i\cap S|=(74-4)+6\cdot 4=94$. Note that $r_5(\mathbb{F}_5^2)=16$ and $r_4(\mathbb{F}_5^2)=11$, which can be easily checked by computer search. Therefore, $|H_i\cap S|\geq 94-5\cdot 16=14$ for all $i$ and there is no plane in $\mathbb{F}_5^3$ containing $12$ or $13$ points: such a plane would contain a $4$-line, since $r_4(\mathbb{F}_5^2)=11$, and would therefore itself have to contain at least $14$ points.
Hence, there are five different distributions for the number of points in five parallel planes: - $\{10,16,16,16,16\}$ - $\{11,15,16,16,16\}$ - $\{14,14,14,16,16\}$ - $\{14,14,15,15,16\}$ - $\{14,15,15,15,15\}.$ Denote by $a,b,c,d,e$ the number of classes of parallel planes having these distributions. Note that $$\label{eq-sum} a+b+c+d+e=31.$$ If we compare the number of pairs of points in each plane with the total number of pairs we get $(\binom{10}{2}+4\binom{16}{2})a+(\binom{11}{2}+\binom{15}{2}+3\binom{16}{2})b+(3\binom{14}{2}+2\binom{16}{2})c+(2\binom{14}{2}+2\binom{15}{2}+\binom{16}{2})d+(\binom{14}{2}+4\binom{15}{2})e=6\binom{74}{2}$ $$\label{eq-pairs} \Leftrightarrow 525a+520b+513c+512d+511e=16206,$$ since each pair lies in exactly six planes. Now denote by $A$, $B$, and $C$ the number of pairs $(\ell,H)$ where $H$ is a hyperplane containing $16$, $15$ and $14$ points, respectively and $\ell\subseteq H$ is a $4$-line. Again let $\ell$ be a $4$-line and let $H_1,H_2,...,H_6$ be the planes containing $\ell$. Then $$\{\left| H_i\cap S\right| |\;i\in[1,6]\}\in\{\{14,16,16,16,16,16\},\{15,15,16,16,16,16\}\}$$ as multisets and therefore $$\label{eq-main} A-2B-5C=0$$ To bound the size of $A$, $B$ and $C$ we need the following claims. **Claim 1**. *Every plane containing $16$ points contains at least twelve $4$-lines.* Consider a plane $H$ containing $16$ points and let $x_i$ be the number of $i$-lines in $H$ for $i\in\{1,2,3,4\}$. By double counting the points in $H$ we get $x_1+2x_2+3x_3+4x_4=6\cdot16=96$ and by double counting the pairs of points in $H$ we get $x_2+3x_3+6x_4=\binom{16}{2}=120$. By taking the difference of the two equations we get $-x_1-x_2+2x_4=24$, implying that $2x_4\geq 24$. **Claim 2**. 
*For $m\in \{14,15\}$, every plane containing $m$ points contains at most $m$ $4$-lines.* As $5\cdot3+1=16> m$, every point of the plane can be contained in at most four $4$-lines and therefore the number of $4$-lines in the plane is bounded from above by $\frac{4m}{4}=m.$ Finally, combining [\[eq-sum\]](#eq-sum){reference-type="eqref" reference="eq-sum"}, [\[eq-pairs\]](#eq-pairs){reference-type="eqref" reference="eq-pairs"} and [\[eq-main\]](#eq-main){reference-type="eqref" reference="eq-main"} we obtain the following system of linear equations and inequalities. $$\begin{split} &a+b+c+d+e=31 \\ &525a+520b+513c+512d+511e=16206 \\ &A-2B-5C=0 \\ &A\geq 48a+36b+24c+12d \\ &B\leq 15b+30d+60e \\ &C\leq 42c+28d+14e \\ &a,b,c,d,e,A,B,C\geq 0, \end{split}$$ which does not have any integral solution, a contradiction to $|S|=74$. ◻ *Proof of Theorem [Theorem 6](#thm-r7){reference-type="ref" reference="thm-r7"}.* Assume that $S\subseteq \mathbb{F}_7^3$ is a $7$-progression-free set of size $243$. Note that we have the following bounds. **Claim 3**. *Every plane containing $36$ points contains at least $18$ $6$-lines and every plane containing $35$, $34$ or $33$ points contains at most $33$, $30$ or $28$ $6$-lines, respectively. Moreover, $r_7(\mathbb{F}_7^2)=36$ and $r_6(\mathbb{F}_7^2)=29$.*
If $m\in\{33,34,35\}$, then by taking three times [\[eq-sublin\]](#eq-sublin){reference-type="eqref" reference="eq-sublin"} minus two times [\[eq-lin\]](#eq-lin){reference-type="eqref" reference="eq-lin"} plus [\[eq-qua\]](#eq-qua){reference-type="eqref" reference="eq-qua"} we get $3x_0+x_1+x_4+3x_5+6x_6=168-16m+\binom{m}{2}$ and therefore $6x_6\leq168-16m+\binom{m}{2}$ which gives the desired bounds. The last two claims can be easily checked by computer search. If we now proceed analogously to the proof of Theorem [Theorem 5](#thm-r5){reference-type="ref" reference="thm-r5"} we again arrive at a contradiction. ◻ # Proofs of the lower bounds *Proof of Theorem [Theorem 7](#LBhighn){reference-type="ref" reference="LBhighn"}.* We consider three different types of $2$-dimensional layers: - $A:=[0,p-2]^{2}$, - $B:=[0,p-1]^2\setminus \{(i,i)\ |\ i \in [0,p-1]\}\setminus \big(\{p-1\}\times [0,\frac{p-3}{2}]\big)\setminus \big([0,\frac{p-3}{2}]\times \{p-1\}\big)$, - $C:= \{(i,i)\ |\ i \in [0,\frac{p-3}{2}]\}$, and three disjoint subsets of $\mathbb{F}_p^{n-2}$: - $\mathcal{A}:=[0,p-3]^{n-2}$, - $\mathcal{B}:=[0,p-2]^{n-2}\setminus [0,p-3]^{n-2}$, - $\mathcal{C}:=\bigcup_{j\in[1,n-2]}\{x\in\mathbb{F}_p^{n-2} \; | \; (x_j=p-1) \land (x_i\in[0,p-3] \; \forall i\neq j)\}.$ We show that $S:=(\mathcal{A}\times A)\cup (\mathcal{B}\times B)\cup (\mathcal{C}\times C)$ is $p$-progression-free. First consider the case $n=3$. Let $L:=\{(a_1,a_2,a_3)+(b_1,b_2,b_3)i \ |\ i\in [0,p-1]\}$ be a $p$-progression in $\mathbb{F}_p^3$ with $a_1,a_2,a_3,b_1,b_2,b_3\in\mathbb{F}_p$. - Case 1: $b_1=0$ and $a_1\neq p-2:$ $[0,p-2]^2$ is $p$-progression-free and $|\{(i,i)\ |\ i \in [0,\frac{p-3}{2}]\}|<p$, therefore $L$ is not contained in $S$. - Case 2: $b_1=0$ and $a_1= p-2:$ $L':=\{(a_2,a_3)+(b_2,b_3)i\ |\ i\in [0,p-1]\}$ and $\{(i,i)\ |\ i \in [0,p-1]\}$ are both lines in $\mathbb{F}_p^2$. If they are not parallel or they are equal, they do intersect, and $L$ is not contained in $S$. 
Otherwise we can rewrite $L'=\{(i,c+i)\ |\ i\in [0,p-1]\}$ with $c\in [1,p-1].$ If $c\in [1,\frac{p-1}{2}]$ then $c+(p-1)\in[0,\frac{p-3}{2}]$ (choose $i=p-1$) and $(p-2,p-1,c+(p-1))\in L\setminus S.$ Similarly, if $c\in [\frac{p+1}{2},p-1]$, then $p-1-c\in[0,\frac{p-3}{2}]$ (choose $i=p-1-c$) and $(p-2,p-1-c,p-1)\in L\setminus S.$ Therefore, $L$ is not contained in $S$. - Case 3: $b_1\neq 0$: Without loss of generality, let $b_1=1$ and $a_1=p-2$. If $b_2=b_3=0$ then $L$ is not contained in $S$ because the $(p-2)$-layer and $(p-1)$-layer of $S$ have no common point. Otherwise, without loss of generality, let $b_2\neq 0$ and therefore $\{a_2+b_2i\ |\ i\in [0,p-1]\}=[0,p-1].$ Assume that $L\subseteq S$. Then $a_2=p-1$ and $a_3\in [\frac{p-1}{2},p-2]$ because the $(p-2)$-layer is the only layer containing points with the coordinate $p-1$. Since the $(p-1)$-layer does not have coordinates in $[\frac{p-1}{2},p-2]$, also $b_3\neq 0$ and consequently $\{a_3+b_3i\ |\ i\in [0,p-1]\}=[0,p-1].$ As before, it follows that $a_3=p-1$, contradicting that $L\subseteq S$. Thus, $L$ is not contained in $S$ and $S$ is $p$-progression-free. Now consider $n>3$. We have already seen that every layer is $p$-progression-free, so we only consider progressions $L:=\{a+bi \ |\ i\in [0,p-1]\}$ visiting $p$ non-empty layers. Let $m$ be the number of non-zero entries in the first $n-2$ coordinates of $b$. Since only layers of type $C$ are placed where one of the first $n-2$ coordinates is $p-1$ and all layers where two of the first $n-2$ coordinates are $p-1$ are empty, $m$ is also the number of type $C$ layers visited by $L$ and $m\leq p$. - If $m=1$, $L$ is not contained in $S$, analogously to the $3$-dimensional case. - If $m\geq 2$, the last two coordinates of every point in $L$ are equal, since the projection of $L$ in the last two coordinates is a line containing two points in the main diagonal, or it is a single point in the main diagonal.
Now since only layers of type $B$ are placed where one of the first $n-2$ coordinates is $p-2$, $L$ also visits a layer of type $B$. Therefore $L$ is not contained in $S$ because layers of type $B$ contain no points on the main diagonal. Finally, note that layers of type $A$ and $B$ contain $(p-1)^2$ points and layers of type $C$ contain $(p-1)/2$ points and thus $$|S|=(p-1)^2(p-1)^{n-2}+\frac{p-1}{2}(n-2)(p-2)^{n-3}=(p-1)^n+\frac{n-2}{2}(p-1)(p-2)^{n-3}.$$ ◻ *Proof of Theorem [Theorem 1](#LBhighp){reference-type="ref" reference="LBhighp"}.* Let $k=\lfloor \sqrt{p} \rfloor$, $t=\lfloor p/k \rfloor$, $K:=[0,k-1]$ and $T:=\{jk-1\ |\ j\in [1,t]\}$. Consider the set $$\begin{split} S:=&[0,p-3]\times [0,p-2]^2 \\ \cup &\{p-2\} \times ([0,p-1]^2 \setminus \{(j,j)\ |\ j\in [0,p-1]\}\setminus ((K\cup \{p-1\})\times (T\cup \{p-1\})) )\\ \cup & \{p-1\}\times K\times T. \end{split}$$ We will show that $S\subseteq \mathbb{F}_p^3$ is $p$-progression-free. Let $L:=\{(a_1,a_2,a_3)+(b_1,b_2,b_3)i \ |\ i\in [0,p-1]\}$ be a $p$-progression in $\mathbb{F}_p^3$ with $a_1,a_2,a_3,b_1,b_2,b_3\in\mathbb{F}_p$. - Case 1: $b_1=0$ and $a_1\neq p-2:$ $[0,p-2]^2$ is $p$-progression-free and $|K\times T|=kt<p$, therefore $L$ is not contained in $S$. - Case 2: $b_1=0$ and $a_1= p-2:$ $L':=\{(a_2,a_3)+(b_2,b_3)i\ |\ i\in [0,p-1]\}$ and $\{(i,i)\ |\ i \in [0,p-1]\}$ are both lines in $\mathbb{F}_p^2$. If they are not parallel or they are equal, they do intersect, and $L$ is not contained in $S$. Otherwise we can rewrite $L'=\{(i,c+i)\ |\ i\in [0,p-1]\}$ with $c\in [1,p-1].$ $\{(i,c+i)\ |\ i\in [0,k-1]\}\cap (K\times (T\cup \{p-1\}))\neq \emptyset$ and therefore $L$ is not contained in $S$. - Case 3: $b_1\neq 0$: Without loss of generality, let $b_1=1$ and $a_1=p-2$. If $b_2=b_3=0$ then $L$ is not contained in $S$ because the $(p-2)$-layer and $(p-1)$-layer of $S$ have no common point.
Else, if $b_2\neq 0$ and $b_3 \neq 0$ then $\{a_2+b_2i\ |\ i\in [0,p-1]\}=\{a_3+b_3j\ |\ j\in [0,p-1]\}=[0,p-1].$ Since the $(p-2)$-layer is the only layer with $p-1$ entries but $(p-2,p-1,p-1)\not\in S$, $L$ is not contained in $S$. Finally, if either $b_2=0$ or $b_3=0$ but not both, one of the last two coordinates is constant, and the other one visits every possible value. Now again the $(p-2)$-layer is the only layer with $p-1$ entries but the $(p-1)$-layer has empty rows and columns wherever the $(p-2)$-layer has $p-1$ entries and therefore $L$ is not contained in $S$. Note that since $p$ is a prime and $k\geq 2$, $t\leq \frac{p-1}{k}$ and that from the definition of $k$ it follows that $$\begin{split} &k\in [\sqrt{p}-1,\sqrt{p}+1]\\ \Leftrightarrow \ &k^2-2\sqrt{p}k+p-1\leq 0\\ \Leftrightarrow \ &k+\frac{p-1}{k}\leq 2\sqrt{p}, \end{split}$$ and therefore $k+t\leq 2\sqrt{p}$. Hence, $$\begin{split} |S|=&(p-2)(p-1)^2+(p^2-p-(kt-1)-k-t)+kt \\ =&(p-2)(p-1)^2+p^2-p+1-k-t \\ \geq & (p-2)(p-1)^2+p^2-p+1-2\sqrt{p} \\ =& (p-1)^3 +p-2\sqrt{p}. \end{split}$$ ◻ *Proof of Theorem [Theorem 2](#LB7){reference-type="ref" reference="LB7"}.* Let $p=7+24\ell$ for $\ell\in\mathbb{Z}_{\geq0}$, let $A$ be the set of quadratic residues, that is, $A=\{a^2\ |\ a\in\mathbb{F}_p^*\}$ and $B:=\mathbb{F}_p^*\setminus A$. Note that $|A|=|B|=\frac{p-1}{2}$ and the law of quadratic reciprocity, together with its supplementary laws, yields $$\left(\frac{-1}{p}\right)=(-1)^{\frac{p-1}{2}}=(-1)^{3+12\ell}=-1,$$ $$\left(\frac{2}{p}\right)=(-1)^{\frac{p^2-1}{8}}=(-1)^{6+42\ell+72\ell^2}=1,$$ $$\left(\frac{3}{p}\right)=(-1)^{\frac{p-1}{2}\frac{3-1}{2}}\left(\frac{p}{3}\right)=(-1)^{3+12\ell}\left(\frac{1}{3}\right)=-1,$$ and therefore $2\in A$ and $\{-1,3\}\subseteq B$.
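These residue computations are easy to confirm numerically; a quick sketch (ours, not part of the proof) for the first few primes $p\equiv 7$ (mod $24$):

```python
def squares(p):
    # the set A of quadratic residues modulo p
    return {x * x % p for x in range(1, p)}

for p in (7, 31, 79):            # primes congruent to 7 mod 24
    A = squares(p)
    assert 2 in A                # (2/p) = 1
    assert (p - 1) not in A      # (-1/p) = -1
    assert 3 not in A            # (3/p) = -1
```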
Note here that $A$ is a subgroup of $\mathbb{F}_p^*$ and this means that multiplication by $2$ or $\frac{1}{2}$ leaves elements of $A$ or $B$ in the same set, while multiplication by $-1$, $3$ or $\frac{1}{3}$ changes the set. For instance, $3a\in B$ for all $a\in A$ and $-\frac{3b}{2}=(-1)\cdot 3\cdot \frac{1}{2} \cdot b \in B$ for all $b\in B$. Let $$\begin{split} S:=&[1,p-1]^3\cup \big(\{(a,0,a)|a\in A\}\cup \{(0,a,a)|a\in A\}\big) \\ \setminus &\big(\{(a,a,a)|a\in A\}\cup \{(a/2,a/2,a)|a\in A\}\big) \\ \cup &\big(\{(3b/2,0,b)|b\in B\}\cup \{(0,3b/2,b)|b\in B\}\cup \{(3b,0,b)|b\in B\}\cup \{(0,3b,b)|b\in B\}\big) \\ \setminus &\big(\{(b,b,b)|b\in B\}\cup \{(3b/2,3b/2,b)|b\in B\}\cup\{(b/3,b/3,b)|b\in B\}\big) \\ \setminus &\big( \{(3b,-3b/2,b)|b\in B\}\cup \{(-3b/2,3b,b)|b\in B\}\big) \\ \cup &\big(\{(b,b,0)|b\in B\}\cup \{(2a,-a,0)|a\in A\}\cup \{(-a,2a,0)|a\in A\}\big). \end{split}$$ We will show that $S$ is $p$-progression-free. Note that $S$ is symmetric in the first two coordinates. We will therefore, in this proof, skip one of two symmetric cases whenever possible. Let $L:=\{(c_1,c_2,c_3)+(d_1,d_2,d_3)i \ |\ i\in [0,p-1]\}$ be a $p$-progression in $\mathbb{F}_p^3$ with $c_1,c_2,c_3,d_1,d_2,d_3\in\mathbb{F}_p$ and assume that $L\subseteq S$. First, assume that $d_3=0$. - Case 1: $c_3=0$: Since $S$ contains no points where the third and one of the first two coordinates is $0$, $L$ is not contained in $S$. - Case 2: $c_3\in A$: Let $a:=c_3$. Since $(a, 0, a)$ and $(0,a,a)$ are the only points where the third coordinate is $a$ and one of the first two coordinates is $0$, we can assume $(a, 0, a)\in L$. If $d_1=0$ then $(a,a,a)\in L$, a contradiction. If $d_1\neq 0$ then also $(0,a,a)\in L$ and consequently $(\frac{a}{2},\frac{a}{2},a)=\frac{1}{2}(a, 0, a)+\frac{1}{2}(0,a,a)\in L$, again a contradiction. - Case 3: $c_3\in B$: Let $b:=c_3$. First, assume $d_1\neq0$ and $d_2\neq0$.
Since $(\frac{3b}{2}, 0, b)$, $(0, \frac{3b}{2}, b)$, $(3b, 0, b)$ and $(0,3b,b)$ are the only points where the third coordinate is $b$ and one of the first two coordinates is $0$, we only have to consider the following cases: If $(\frac{3b}{2}, 0, b)\in L$ and $(0, \frac{3b}{2}, b)\in L$, then also $(-\frac{3b}{2},3b , b)=(-1)(\frac{3b}{2}, 0, b)+2(0, \frac{3b}{2}, b)\in L$, if $(\frac{3b}{2}, 0, b)\in L$ and $(0,3b,b)\in L$, then also $(b,b , b)=\frac{2}{3}(\frac{3b}{2}, 0, b)+\frac{1}{3}(0, 3b, b)\in L$ and if $(3b, 0, b)\in L$ and $(0,3b,b)\in L$, then also $(\frac{3b}{2},\frac{3b}{2}, b)=\frac{1}{2}(3b, 0, b)+\frac{1}{2}(0, 3b, b)\in L$. Consequently, we arrived at a contradiction. Now, if $d_1=0$ or $d_2=0$, again $L$ has to contain one of the points $(\frac{3b}{2}, 0, b)$, $(0, \frac{3b}{2}, b)$, $(3b, 0, b)$, $(0,3b,b)$ and therefore $L$ also contains one of the points $(\frac{3b}{2},\frac{3b}{2},b)$, $(3b,\frac{-3b}{2},b)$, $(\frac{-3b}{2},3b,b)$, again a contradiction. Now assume that $d_3\neq0$. If $d_1=d_2=0$, $L$ contains a point with a zero last coordinate. We get that either $(b,b,0)$ and therefore also $(b,b,b)$ is in $L$ for some $b\in B$ or $(2a,-a,0)\in L$ for some $a\in A$ and therefore also $(3b,\frac{-3b}{2},b)\in L$ for the unique $b\in B$ such that $3b=2a$, both a contradiction. In the remaining case $d_3\neq 0$ and at least one of $d_1$ and $d_2$ is non-zero. Since there is no point in $S$ where both the third and one of first two coordinates is zero, $L$ has to include a point with third coordinate being zero and a different point where one of the other two coordinates is zero. We are therefore left with checking the following cases, where $L$ is given by a pair of two points in $S$. For some of these cases it is important to note that $S$ contains no points where one of the first two coordinates is $0$ and the other is in $B$. 
- $(b,b,0)\in L$ and $(0,a,a)\in L$: $$L=\bigg\{\begin{pmatrix} b\\ b\\ 0 \end{pmatrix}+k\begin{pmatrix} -b\\ a-b\\ a \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Since $a\neq b$, setting $k:=\frac{b}{b-a}$, we get $(x,y,z):=(-\frac{ab}{b-a},0,\frac{ab}{b-a})\in L$. Now $x=-z$ and therefore $z\in B$ and $x\in A$, so $-1=\frac{3}{2}$ or $-1=3$, a contradiction. - $(b,b,0)\in L$ and $(0,\frac{3b'}{2},b')\in L$: $$L=\bigg\{\begin{pmatrix} b\\ b\\ 0 \end{pmatrix}+k\begin{pmatrix} -b\\ \frac{3b'}{2}-b\\ b' \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Since $\frac{3b'}{2}\neq b$, setting $k:=\frac{b}{b-\frac{3b'}{2}}$, we get $(x,y,z):=(-\frac{3bb'}{2(b-\frac{3b'}{2})},0,\frac{bb'}{b-\frac{3b'}{2}})\in L$. Now $x=-\frac{3}{2}z$ and therefore $x,z\in A$, so $-\frac{3}{2}=1$, a contradiction. - $(b,b,0)\in L$ and $(0,3b',b')\in L$: $$L=\bigg\{\begin{pmatrix} b\\ b\\ 0 \end{pmatrix}+k\begin{pmatrix} -b\\ 3b'-b\\ b' \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Since $3b'\neq b$, setting $k:=\frac{b}{b-3b'}$, we get $(x,y,z):=(-\frac{3bb'}{b-3b'},0,\frac{bb'}{b-3b'})\in L$. Now $x=-3z$ and therefore $x,z\in A$, so $-3=1$, a contradiction. - $(2a,-a,0)\in L$ and $(0,a',a')\in L$: $$L=\bigg\{\begin{pmatrix} 2a\\ -a\\ 0 \end{pmatrix}+k\begin{pmatrix} -2a\\ a'+a\\ a' \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Since $a'\neq -a$, setting $k:=\frac{a}{a+a'}$, we get $(x,y,z):=(\frac{2aa'}{a+a'},0,\frac{aa'}{a+a'})\in L$. Now $x=2z$ and therefore $x,z\in A$, so $2=1$, a contradiction. - $(2a,-a,0)\in L$ and $(a',0,a')\in L$: $$L=\bigg\{\begin{pmatrix} 2a\\ -a\\ 0 \end{pmatrix}+k\begin{pmatrix} a'-2a\\ a\\ a' \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Assume $a'\neq 2a$, then setting $k:=\frac{2a}{2a-a'}$, we get $(x,y,z):=(0,\frac{aa'}{2a-a'},\frac{2aa'}{2a-a'})\in L$. Now $2y=z$ and therefore $y,z\in A$, so $1=2$, a contradiction. If $a'=2a$, then setting $k:=3$, we get $(x,y,z):=(2a,2a,6a)\in L$, a contradiction since $6a\in B$. 
- $(2a,-a,0)\in L$ and $(\frac{3b}{2},0,b)\in L$: $$L=\bigg\{\begin{pmatrix} 2a\\ -a\\ 0 \end{pmatrix}+k\begin{pmatrix} \frac{3b}{2}-2a\\ a\\ b \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Assume $\frac{3b}{2}\neq 2a$, then setting $k:=\frac{2a}{2a-\frac{3b}{2}}$, we get $(x,y,z):=(0,\frac{3ab}{2(2a-\frac{3b}{2})},\frac{2ab}{2a-\frac{3b}{2}})\in L$. Now $y=\frac{3z}{4}$ and therefore $z\in B$ and $y\in A$, so $\frac{3}{4}=\frac{3}{2}$ or $\frac{3}{4}=3$, a contradiction. If $\frac{3b}{2}= 2a$, then setting $k:=3$, we get $(x,y,z):=(2a,2a,4a)\in L$, a contradiction since $4a\in A$.

- $(2a,-a,0)\in L$ and $(0,\frac{3b}{2},b)\in L$: $$L=\bigg\{\begin{pmatrix} 2a\\ -a\\ 0 \end{pmatrix}+k\begin{pmatrix} -2a\\ \frac{3b}{2}+a\\ b \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Assume $\frac{3b}{2}\neq -3a$, then setting $k:=\frac{3a}{\frac{3b}{2}+3a}$, we get $(x,y,z):=(\frac{3ab}{\frac{3b}{2}+3a},\frac{3ab}{\frac{3b}{2}+3a},\frac{3ab}{\frac{3b}{2}+3a})\in L$. Now $x=y=z$, a contradiction. If $\frac{3b}{2}= -3a$, then setting $k:=-\frac{1}{2}$, we get $(x,y,z):=(3a,0,a)\in L$, so $1=3$, a contradiction.

- $(2a,-a,0)\in L$ and $(3b,0,b)\in L$: $$L=\bigg\{\begin{pmatrix} 2a\\ -a\\ 0 \end{pmatrix}+k\begin{pmatrix} 3b-2a\\ a\\ b \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Since $b\neq a$, setting $k:=\frac{a}{a-b}$, we get $(x,y,z):=(\frac{ab}{a-b},\frac{ab}{a-b},\frac{ab}{a-b})\in L$. Now $x=y=z$, a contradiction.

- $(2a,-a,0)\in L$ and $(0,3b,b)\in L$: $$L=\bigg\{\begin{pmatrix} 2a\\ -a\\ 0 \end{pmatrix}+k\begin{pmatrix} -2a\\ a+3b\\ b \end{pmatrix}\bigg|\ k\in\mathbb{F}_p\bigg\}$$ Since $3b\neq -a$, setting $k:=\frac{a}{a+3b}$, we get $(x,y,z):=(\frac{6ab}{a+3b},0,\frac{ab}{a+3b})\in L$. Now $x=6z$ and therefore $z\in B$ and $x\in A$, so $6=\frac{3}{2}$ or $6=3$, a contradiction.
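The affine combinations used in the case analysis above can be checked mechanically. The following sketch (ours, not from the paper) verifies the three identities over the rationals; since each identity is homogeneous of degree one in $b$, checking $b=1$ suffices, and requiring the coefficients to sum to $1$ is exactly what places the resulting point on the line through the two given points:

```python
from fractions import Fraction as F

def affine_comb(coeffs, pts):
    # affine combination: coefficients summing to 1 keep the result
    # on the line through the given points
    assert sum(coeffs) == 1
    return tuple(sum(c * x for c, x in zip(coeffs, col)) for col in zip(*pts))

b = F(1)  # identities are homogeneous of degree 1 in b, so b = 1 suffices
p1, p2 = (F(3, 2) * b, 0 * b, b), (0 * b, F(3, 2) * b, b)
p3, p4 = (3 * b, 0 * b, b), (0 * b, 3 * b, b)

assert affine_comb([F(-1), F(2)], [p1, p2]) == (-F(3, 2) * b, 3 * b, b)
assert affine_comb([F(2, 3), F(1, 3)], [p1, p4]) == (b, b, b)
assert affine_comb([F(1, 2), F(1, 2)], [p3, p4]) == (F(3, 2) * b, F(3, 2) * b, b)
```

The same helper can be used to re-derive the parametrized points in each bulleted case by substituting the corresponding value of $k$.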
Finally, note that the layer with third coordinate $0$ contains $|B|+2|A|=\frac{3}{2}(p-1)$ points, layers with the third coordinate in $A$ contain $(p-1)^2$ points, and layers with the third coordinate in $B$ contain $(p-1)^2-1$ points, and thus $$|S|=\frac{p-1}{2}(p-1)^2+\frac{p-1}{2}((p-1)^2-1)+\frac{3}{2}(p-1)=(p-1)^3+(p-1).$$ In the special case of $p=7$, the layers with third coordinate in $B$ actually contain $(p-1)^2=36$ points, since $\frac{3}{2}=\frac{1}{3}$ in $\mathbb{F}_7$, thus giving the lower bound of $225$. ◻

## Acknowledgements {#acknowledgements .unnumbered}

C.E. was supported by a joint FWF-ANR project ArithRand (I 4945-N and ANR-20-CE91-0006). J.F. was supported by the Austrian Science Fund (FWF) under the project W1230. D.G.S. was supported by the ERC Advanced Grant "Geoscape". B.K. was supported by the ÚNKP-22-3, New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund. P.P.P. was supported by the Lendület program of the Hungarian Academy of Sciences (MTA) and by the National Research, Development and Innovation Office NKFIH (Grant Nr. K124171).

# Appendix
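As a numerical sanity check of the cardinality computation in the proof above (the function name is ours):

```python
def size_of_S(p):
    # one layer at third coordinate 0, and (p-1)/2 layers each for
    # third coordinates in A and in B, counted as in the proof
    half = (p - 1) // 2
    total = half * (p - 1) ** 2 + half * ((p - 1) ** 2 - 1) + 3 * (p - 1) // 2
    assert total == (p - 1) ** 3 + (p - 1)  # the closed form from the proof
    return total

assert size_of_S(13) == 12 ** 3 + 12

# special case p = 7: all six non-zero layers contain 36 points (3/2 = 1/3 in F_7),
# so |S| = 3 * 36 + 3 * 36 + 9 = 225
assert 3 * 36 + 3 * 36 + 3 * (7 - 1) // 2 == 225
```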
--- abstract: | This article addresses the dynamic multi-skill workforce scheduling and routing problem with time windows and synchronization constraints (DWSRP-TW-SC) inherent in the on-demand home services sector. In this problem, new service requests (tasks) emerge in real-time, necessitating a constant reevaluation of service team task plans. This reevaluation involves maintaining a portion of the plan unaltered, ensuring team-task compatibility, addressing task priorities, and managing synchronization when task demands exceed a team's capabilities. To address the problem, we introduce a real-time optimization framework triggered upon the arrival of new tasks or the elapse of a set time. This framework redesigns the routes of teams with the goal of minimizing the cumulative weighted throughput time for all tasks. For the route redesign phase of this framework, we develop both a mathematical model and an Adaptive Large Neighborhood Search (ALNS) algorithm. We conduct a comprehensive computational study to assess the performance of our proposed ALNS-based reoptimization framework and to examine the impact of reoptimization strategies, frozen period lengths, and varying degrees of dynamism. Our contributions provide practical insights and solutions for effective dynamic workforce management in on-demand home services. address: - Department of Computing, Imperial College London, Exhibition Rd, South Kensington, London SW7 2BX, United Kingdom. odemiray21\@imperial.ac.uk - "Department of Industrial Engineering, TOBB University of Economics and Technology, Söğütözü Caddesi No:43, 06560, Söğütözü, Ankara, Turkey. dtolga\\@etu.edu.tr" - "Department of Industrial Engineering, TOBB University of Economics and Technology, Söğütözü Caddesi No:43, 06560, Söğütözü, Ankara, Turkey. 
e.yucel\\@etu.edu.tr" author: - Onur Demiray - Doruk Tolga - Eda Yücel bibliography: - ms.bib date: September, 2023 title: An Optimization Framework for a Dynamic Multi-Skill Workforce Scheduling and Routing Problem with Time Windows and Synchronization Constraints --- Dynamic vehicle routing, workforce scheduling and routing, reoptimization framework, adaptive large neighborhood search

# Introduction {#section:introduction}

The increasing share of services is common for advanced economies, as households tend to spend more on services as their income and wealth rise. Within the services sector, there is a subsector called the on-demand home services sector, which provides on-premise services primarily to households, such as home health care or professional household services. The on-demand home services sector connects people who need assistance with day-to-day problems to those who are ready to provide instant solutions. According to the Harvard Business Review, these services attract 22.4 million consumers annually, who spend \$57.6 billion on them. Popular on-demand home services include health care, wellness, appliances, home cleaning, and repairs and maintenance, all of which require a multi-skilled workforce [@colby_bell_2017]. The demand for home services is growing at an increasing rate, and with the Covid-19 pandemic, their adoption became essential, both for their intrinsic benefits and for the relief they provide to people. Furthermore, home services play a pivotal role in curbing the transmission of the virus, since they reduce the need for extensive human interaction. On-demand home services can be either static or dynamic.
In the static scenario, all tasks are known beforehand, while in the dynamic scenario, tasks show up dynamically as time progresses, and the routes of the workforce must be continuously adapted to cater to evolving demand. The integration of mobile applications and diverse online platforms has led to the prevalence of flexible, personalized, and responsive dynamic services. In practical terms, addressing dynamic challenges often involves treating them as sequences of static subproblems, as noted by @CordeauLaporte2007. The dynamism can be tackled through either offline (stochastic) or online (real-time) approaches in the solution methodology. In the offline scenario, robust solutions that consider dynamic demand data are produced before the operation. On the other hand, in the online case, an initial solution is created based on the available data, which is then continually optimized as new service requests unfold during operations. This re-optimization from the current solution can be initiated whenever a new task arises. Alternatively, tasks can be buffered and periodically incorporated into the existing workforce routes in consolidated batches. Effectively managing the operations of a dynamic on-demand on-site service system involves a multifaceted decision-making process encompassing task prioritization and identification of requisite skills, workforce dispatching, scheduling, and routing. Task prioritization is required due to the inherent variance in request importance and urgency. In the current landscape, service requests, or tasks, often demand distinct skill sets, rendering a single service worker incapable of addressing every task. This challenge is tackled through two different strategies: synchronization and team formation. Team formation is applicable in static scenarios, particularly when teams are established based on the demand data available at the beginning of the planning phase.
However, in the absence of synchronization, team formation alone falls short, as an emerging request might necessitate skills beyond the capabilities of any single team (or worker) in the available team pool. Therefore, the subsequent problem of task assignment, scheduling, and workforce routing should account for synchronization to achieve a comprehensive solution. The optimization problem addressed within this paper resides in the realm of dynamic workforce scheduling and routing problems with time windows and synchronization constraints (DWSRP-TW-SC), and its essence can be encapsulated as follows. In the static workforce scheduling and routing problem (WSRP), the aim is to allocate a collection of geographically dispersed service tasks with distinct skill prerequisites to a given ensemble of teams or workers, each possessing varying skill proficiencies. This entails identifying an optimal configuration of task-to-team assignments and devising a schedule for the task executions [@Cakirgil2020]. The dynamic version of this problem is considerably more intricate. New service requests arise concurrently with the execution of team task plans (TTPs). Considering the potential urgency of these new tasks, the TTP must be subject to continual re-evaluation and, if necessary, recalibration. In practice, when this recalibration takes place, a segment of the plan within a predefined timeframe - known as the frozen period - remains unaltered. In the course of re-optimizing the TTP, several factors merit attention. These include the tasks situated within the frozen period (referred to as frozen tasks), the compatibility of team-task skills, task priorities, and the necessity for synchronization. All these elements converge to shape the trajectory of re-optimization. To address the described DWSRP-TW-SC, we propose an optimization framework that is triggered whenever a predetermined number of new tasks arrive.
The framework first identifies frozen tasks and determines the first available time and location of each team, and then re-optimizes the subsequent TTP with the objective of minimizing the total weighted throughput time of all tasks. For the route redesign phase of the framework, we develop both a mathematical model and a heuristic algorithm. The remainder of this paper is organized as follows. Section [2](#section:litreview){reference-type="ref" reference="section:litreview"} reviews the literature on related problems and approaches. The problem is formally described and mathematically modeled in Section [3](#section:problem){reference-type="ref" reference="section:problem"}. Section [4](#section:solution){reference-type="ref" reference="section:solution"} presents the proposed re-optimization framework that is based on an Adaptive Large Neighborhood Search (ALNS) heuristic. A comprehensive computational study is provided in Section [5](#section:compStudy){reference-type="ref" reference="section:compStudy"}. Finally, Section [6](#section:conclusion){reference-type="ref" reference="section:conclusion"} concludes the paper and outlines directions for future research.

# Literature review {#section:litreview}

In this section, we examine the pertinent literature across three distinct categories: studies on routing problems involving dynamic demand, solution methodologies for dynamic routing problems, and studies on measuring the level of dynamism.

## Routing problems with dynamic demand {#litreview_dynamicdemand}

The focus of this paper is the DWSRP-TW-SC, which belongs to the class of dynamic demand vehicle routing problems (DVRP). As per @Pillac2013, routing problems with dynamic demand can be broadly classified into two primary categories: online and stochastic. In both cases, the demand for resources fluctuates over time.
In the stochastic category, demand data is assumed to follow a probabilistic distribution, while in the online category, new demand occurrences are unpredictable. Our research falls into the dynamic and online category. Consequently, we will now provide an overview of the existing studies that belong to this specific classification. The first study on DVRP that considered dynamic demand was @WilsonColvin1977. They developed a heuristic based on an insertion method for a single-vehicle Dial-a-Ride problem, which concerns transportation services for disabled or elderly citizens. Since then, dynamic routing problems have gained traction in various fields, including food and human transportation. In recent years, there has been a notable surge in studies on dynamic routing problems, as noted by @Pillac2013. Notable reviews of DVRP and dynamic pickup and delivery studies have been provided by @Larsen2001, @Pillac2013, and @BERBEGLIA20108. Many of these studies have focused on dynamic demand in Pickup and Delivery Problems (PDP) [@BERBEGLIA20108; @Savelsbergh97; @Gendreau2006], city logistics [@Regnier-Coudert2016; @Bieding2009; @Campbell2005], or human transportation issues like Dial-a-Ride Problems (DARP) [@CordeauLaporte2007_darp]. However, it is important to note that DARP studies that consider dynamic demand [e.g., @Horn2002; @Attanasio2004; @COSLOVICH20061605; @LOIS2017377] differ from our focus on DWSRP-TW-SC due to their homogeneous resources, such as vehicles or workforce. In some dynamic pickup and delivery studies [e.g., @Ferrucci2014], heterogeneous vehicles are considered, but the demand structure, featuring origin and destination nodes, differs significantly from that of the DWSRP-TW-SC. Moving beyond these areas, as outlined in @Pillac2013, another application domain grappling with dynamic demand in vehicle routing problems is on-premise or home services.
@bostel2008multiperiod explored dynamic technician routing and scheduling, considering a homogeneous workforce over a multi-period planning horizon. Their approach involved a memetic algorithm that schedules known tasks initially for each day and then seamlessly integrates new tasks into the existing plan. Additionally, they devised a column generation algorithm, but its applicability is limited to smaller-scale problems. Subsequently, @Borenstein2008 addressed scheduling challenges with a multi-skilled workforce at British Telecom. They employed heuristic methods to assign technicians to task clusters and developed a rule-based system for task assignment when new tasks emerged. While their study tackled clustering and assignment aspects, it did not delve into technician routing or task prioritization. More recently, @Pillac2018 studied the multi-skill technician scheduling and routing problem with dynamic demand, spare parts, and tool requirements. Their study treated tools as renewable resources and spare parts as non-renewable, consumed upon task completion. Technicians could replenish tools and spare parts at a central depot, offering them the flexibility to serve more requests. They introduced parallel processing, employing a parallel Adaptive Large Neighborhood Search (pALNS) algorithm to compute an initial solution and reoptimize it upon new request arrivals. What distinguishes @Pillac2018 from our study is the consideration of spare part requirements and the capacity limitations of technicians regarding carrying spare parts. In our research, we assume sufficient resources within the vehicle and also allow synchronization, which involves tasks being serviced by multiple teams or technicians due to skill requirements.

## Solution approaches for dynamic routing problems {#litreview_solnapproaches}

Addressing routing problems in dynamic settings introduces an added layer of complexity, where response time emerges as a critical concern.
Many of the solution methods typically employed for static problems prove to be excessively computationally intensive when applied to dynamic scenarios [@Gendreau2006]. Dynamic and deterministic problems are commonly tackled using two primary approaches: periodic reoptimization and continuous reoptimization [@Pillac2013]. Periodic reoptimization strategies involve generating vehicle routes at the outset of the planning horizon, often corresponding to a working day. As new data emerges throughout the day, or at predefined intervals, optimization techniques are deployed to update the routes based on this fresh information. For instance, @Chang2003 explored real-time vehicle routing problems with time windows and simultaneous delivery/pickup demands, proposing a periodic reoptimization framework employing a Tabu Search (TS) algorithm. @Montemanni2005 introduced an Ant Colony System algorithm for a real-world DVRP problem situated in the road network of Lugano city, based on data from a local fuel distribution company. @Barcelo2007 developed a decision support system employing an iterative algorithm for dynamic routing of city logistic services. @Attanasio2004 presented a parallel tabu-search heuristic within a periodic reoptimization framework to solve the dynamic multi-vehicle DARP. @Pillac2018 also offered a periodic reoptimization framework, using a parallel ALNS, triggered each time a new request emerged in the context of dynamic technician routing and scheduling. In contrast, continuous reoptimization approaches run continuously throughout the day, relying on adaptive memory [@Taillard2001] to store alternative solutions for adapting to new data without complete reoptimization. @Gendreau1999 developed a parallel TS with adaptive memory for a dynamic VRP with time windows. 
This approach adapted the static TS to dynamic scenarios through parallel execution, where threads running in the background are interrupted upon data updates (e.g., new requests or completed customer services). Subsequently, the solutions in the adaptive memory are updated, and the parallel search process restarts with this updated information. This framework was extended to dynamic VRP by @Ichoua2000 and @Ichoua2003. @Bent2004 explored a dynamic VRP with time windows and stochastic customer information, introducing a multiple plan approach (MPA) inspired by @Gendreau1999's framework. The MPA maintains a pool of routing plans consistent with current decisions and removes incompatible ones from the pool. @Benyahia1998 proposed a Genetic Algorithm (GA) modeling the vehicle dispatcher's decision-making process for dynamic PDP. GA techniques are employed to find a utility function that approximates the dispatcher's choices. @Hemert2004, @HAGHANI2005, and @Cheung2008 extended the continuous reoptimization framework using GAs for dynamic PDPs. These GAs, designed for dynamic contexts, closely resemble those used for static problems but operate continuously throughout the planning horizon, adapting solutions as input data changes [@Pillac2013]. In this study, we propose a periodic reoptimization approach based on an ALNS. Our optimization framework is triggered when a predetermined number of new tasks, denoted as $\beta^{\text{task}}$, arrive, or when a set amount of time, referred to as $\beta^{\text{time}}$, elapses. We conduct a computational analysis to assess various values of $\beta^{\text{task}}$ and $\beta^{\text{time}}$ (see Section [5.4](#section:compStudy_reoptimizationPeriodComparison){reference-type="ref" reference="section:compStudy_reoptimizationPeriodComparison"}). 
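In pseudocode terms, this dual trigger reduces to a one-line condition; a minimal sketch follows (identifier names are illustrative, not taken from our implementation):

```python
import time

def should_reoptimize(n_new_tasks, last_reopt_time, beta_task, beta_time, now=None):
    """Trigger reoptimization when beta_task new tasks have accumulated,
    or when beta_time units have elapsed since the last reoptimization."""
    now = time.monotonic() if now is None else now
    return n_new_tasks >= beta_task or (now - last_reopt_time) >= beta_time

# example: only 3 tasks buffered (beta_task = 5), but 40 time units
# have elapsed since the last reoptimization (beta_time = 30)
assert should_reoptimize(3, last_reopt_time=0.0, beta_task=5, beta_time=30, now=40.0)
```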
To account for teams engaged in task execution or en route to task locations at the time of reoptimization, our proposed framework freezes team-task assignments within a designated frozen period, denoted as $f$, starting from the reoptimization moment. We also conduct experiments to evaluate the optimal length of the frozen period (see Section [5.5](#section:compStudy_frozenPeriodComparison){reference-type="ref" reference="section:compStudy_frozenPeriodComparison"}). ## Measuring dynamism {#litreview_measuringDynamism} Problems with dynamic data, even within the same problem class, can exhibit varying degrees of dynamism. According to @Ichoua2007, the dynamism of a problem instance can be characterized using two key dimensions: the frequency of changes and the urgency of customer requests. The first dimension relates to how quickly new information emerges, while the second pertains to the time interval between the arrival of a new request and its expected service time or deadline. Based on these considerations, three distinct metrics have been proposed to measure the dynamism of a problem or instance: $(i).$ Degree of Dynamism ($\delta$), as introduced by @Lund1996, is defined as the ratio of new requests ($n_{\text{d}}$) to the total number of requests ($n_{\text{tot}}$), i.e., $\delta = n_{\text{d}}/n_{\text{tot}}$. $(ii).$ Effective Degree of Dynamism ($\delta^e$), as proposed by @Larsen2001, takes into account the arrival times of requests ($a_i$) and the length of the planning horizon ($\tau^{\max}$). For requests known in advance, the arrival time is considered as 0. $\delta^e$ is calculated as follows: $\delta^e = \frac{1}{n_{\text{tot}}}\sum_{i\in N} \frac{a_i}{\tau^{\max}}$, where $N$ represents the set of requests. 
$(iii).$ Effective Degree of Dynamism with Time Windows ($\delta^e_{\text{tw}}$), also proposed by @Larsen2001, takes into consideration both the arrival time and the latest start time of requests ($a_i$ and $l_i$, respectively), along with the length of the planning horizon ($\tau^{\max}$). It is calculated as: $\delta^e_{\text{tw}} = \frac{1}{n_{\text{tot}}}\sum_{i\in N} \left(1 - \frac{(l_i - a_i)}{\tau^{\max}}\right)$. @Larsen2001 further classified dynamic problems or instances into three categories based on the effective degree of dynamism ($\delta^e$): weakly dynamic systems with $\delta^e \leq 0.3$, moderately dynamic systems with $0.3 < \delta^e \leq 0.8$, and strongly dynamic systems with $\delta^e > 0.8$. In our study, we investigate the impact of the degree of dynamism on the quality of solutions provided by our proposed reoptimization framework through computational analysis (see Section [5.3](#section:compStudy_largeSizeDWSRPinstances){reference-type="ref" reference="section:compStudy_largeSizeDWSRPinstances"}).

# Problem description and mathematical model {#section:problem}

DWSRP-TW-SC entails the allocation of crews equipped with certain skills to a geographically dispersed set of tasks that are randomly revealed during the day. The problem involves a predetermined and unalterable set of crews denoted by $\mathcal{K}$, corresponding to the technician compositions; hence, the skill qualifications of crews are given at the beginning of the day and remain constant throughout the course of the planning horizon. On the other hand, the tasks are dynamic, and new tasks with varying skill requirements and priorities emerge as the operation of the crews progresses in the field. The sequence of tasks to be completed by each crew is defined by the so-called Team Task Plan (TTP), which must be updated whenever a predetermined number of new tasks appear.
This process of updating the TTP is referred to as a *re-optimization problem (period/phase)*, which is regarded as a separate optimization problem. The $t^{\textit{th}}$ re-optimization period begins at time $\tau^{(t)}$. At this time point, the system's tasks can be categorized into two sets: (i) $\mathcal{N}_{prev}^{(t)}$, corresponding to the tasks that have not been addressed yet and were available at the previous re-optimization period, and (ii) $\mathcal{N}_{new}^{(t)}$, corresponding to the tasks that have accumulated in the system between the previous and current re-optimization periods. In other words, $\mathcal{N}_{new}^{(t)}$ is made up of tasks whose arrival times fall within the time interval $(\tau^{(t-1)}, \tau^{(t)}]$. At this point, it is worth noting that the initial static optimization problem at the start of the day can be considered as the previous re-optimization period for the first re-optimization period. Hence, without loss of generality, we refer to this initial static problem as the $0^{\textit{th}}$ re-optimization problem that occurs at time $\tau^{(0)}=0$. The team assignment of the tasks in $\mathcal{N}_{prev}^{(t)}$ can be changed, except for a subset of tasks in $\mathcal{N}_{prev}^{(t)}$ that are planned to be started by a crew within a specified time period called the *frozen period*. A frozen period of an appropriate duration prevents confusion when switching from one TTP to another. Let $f^{(t)}$ denote the duration of the frozen period for the $t^{\textit{th}}$ re-optimization period. Then, $[\tau^{(t)}, \tau^{(t)} + f^{(t)}]$ specifies the frozen period of the $t^{\textit{th}}$ re-optimization period. Tasks whose start times fall into this period are called *frozen tasks* and are referred to as $\mathcal{N}_{frozen}^{(t)}$. During the $t^{\textit{th}}$ re-optimization period, it is possible for the completion time of the final frozen task to differ among crews.
As a result, the starting time ($\eta_k^{(t)}$ for crew $k \in \mathcal{K}$) of each crew for this period may vary. The starting location ($\phi_k^{(t)}$) of crew $k \in \mathcal{K}$ is determined by the location of its final frozen task. In other words, the last task completed by a crew during the frozen period serves as the basis for determining its starting location for the corresponding re-optimization period. The problem to be solved for the $t^{\textit{th}}$ re-optimization period is defined on a directed graph $G^{(t)}= (\mathcal{V}^{(t)}, \mathcal{A}^{(t)})$, where $\mathcal{V}^{(t)} = \mathcal{N}^{(t)} \cup \Phi \cup \{0\}$ denotes the set of vertices (nodes), $\mathcal{N}^{(t)}$ corresponds to the whole set of tasks that should be considered at the time of re-optimization, i.e., $\mathcal{N}^{(t)} = (\mathcal{N}_{prev}^{(t)}\setminus \mathcal{N}^{(t)}_{frozen}) \cup \mathcal{N}_{new}^{(t)}$, $\Phi$ corresponds to the starting locations of crews, i.e., $\Phi = \bigcup_{k\in \mathcal{K}} \phi_k$, node $0$ represents the central depot to which each team must return at the end of the work day, and $\mathcal{A}^{(t)}=\{(i,j):i \in \mathcal{V}^{(t)}, j \in \mathcal{V}^{(t)} \setminus \Phi, i \neq j\}$ denotes the set of arcs between the nodes. For each arc $(i,j) \in \mathcal{A}^{(t)}$, $c_{ij}$ denotes the traveling time from node $i\in \mathcal{V}^{(t)}$ to $j\in \mathcal{V}^{(t)}\setminus\Phi$. Each task $i \in \mathcal{N}^{(t)}$ is associated with an arrival time $a_i$, processing time $p_i$, priority $w_{i}$, and time window $[e_i, l_i]$. The set of skills is denoted by $\mathcal{Q}$. The $0-1$ parameter $u_{iq}$ indicates whether task $i \in \mathcal{N}^{(t)}$ requires skill $q \in \mathcal{Q}$ and $v_{kq}$ indicates whether crew $k \in \mathcal{K}$ has skill $q \in \mathcal{Q}$. There may be some tasks that require a skill set that cannot be satisfied by a single team.
Depending on their skill requirements, two or more crews are assigned to these tasks, and these crews have to be synchronized, meaning that they need to start at the same time. The planning horizon, denoted by $[\tau^{(0)}=0, \tau^{\max}]$, spans from the beginning of the working day until its conclusion at $\tau^{\max}$. If a task cannot be completed within this timeframe, it is outsourced and assumed to be accomplished by the end of the day, i.e., at $\tau^{\max}$. Our objective is to minimize the total weighted throughput time of the tasks, where the throughput time of a task is defined as the duration between its arrival and completion. In the remainder of this section, we will initially present an illustrative example to demonstrate how dynamic tasks are executed by re-optimization problems during the day, followed by the introduction of a mixed-integer programming formulation designed to solve any given re-optimization problem.

## An illustrative example {#illustrativeExample}

Within this section, we present an illustrative example involving two crews, eight tasks, and three skills, with the primary objective of showcasing the execution of dynamically revealed tasks through different re-optimization phases. By examining this scenario, we aim to provide a comprehensive understanding of how our dynamic framework effectively handles the allocation and sequencing of tasks that unfold over time. Additionally, we aim to demonstrate how synchronization due to the assignment of multiple teams to a task might cause idle times for crews. The hypothetical example is visually represented in Figure [1](#fig:illustrative_example){reference-type="ref" reference="fig:illustrative_example"}, showing the routes taken by the crews. In this example, crew 1 executes tasks in the sequence of $<1,2,4,5,7>$, while crew 2 follows the route of $<3,4,6,8>$. The skills are represented by colored double line circles positioned above the crew and task symbols.
Specifically, there are three distinct skills denoted by different colors: red, purple, and green. Figure [1](#fig:illustrative_example){reference-type="ref" reference="fig:illustrative_example"} provides insight into the skill composition of the crews. Crew 1 possesses skills associated with the red and purple circles, enabling them to complete tasks 1, 2, 5, and 7 autonomously. Conversely, crew 2 possesses skills indicated by the purple and green circles, equipping them to undertake tasks 3, 6, and 8 independently. However, task 4 necessitates collaboration between both crews since it requires the combined skill set. Consequently, the crew that reaches the location of task 4 earlier must await the arrival of the other crew, highlighting the concept of synchronization. In Figure [1](#fig:illustrative_example){reference-type="ref" reference="fig:illustrative_example"}, the numbers displayed over the arcs indicate the corresponding travel times required to traverse those arcs. Additionally, the blue-colored numbers beneath the nodes represent the process time of each respective task. Consequently, the Gantt chart in Figure [2](#fig:illustrative_example_gantt){reference-type="ref" reference="fig:illustrative_example_gantt"} visualizes the start and finish times of each task. For tasks completed by a single team, the start time can be easily determined by adding the completion time of the previous task to the travel time required to reach the current task when all tasks are already revealed. However, in cases where synchronization is required, such as in Task 4, the team that arrives earlier experiences idle time. For instance, in order to complete Task 4, crew 2 arrives at time 25 while crew 1 arrives at 40. Consequently, crew 2 has an idle time of 15 time units while waiting for crew 1 to arrive.
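The synchronization rule illustrated by task 4 — a synchronized task starts at the latest crew arrival, and every earlier crew idles until then — can be sketched as follows (illustrative code, not part of the formal model):

```python
def synchronized_start(arrivals):
    """All assigned crews must start a synchronized task together, so the
    start time is the latest arrival; crews arriving earlier are idle."""
    start = max(arrivals.values())
    idle = {crew: start - t for crew, t in arrivals.items()}
    return start, idle

# task 4 in the illustrative example: crew 1 arrives at 40, crew 2 at 25
start, idle = synchronized_start({"crew 1": 40, "crew 2": 25})
assert start == 40 and idle == {"crew 1": 0, "crew 2": 15}
```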
![An illustrative solution for an instance having two crews, eight tasks, and three skills](dwsrp_draw_network.pdf){#fig:illustrative_example width="0.7\\linewidth"}

![Gantt chart of the crews for the solution of the illustrative example represented in Figure [1](#fig:illustrative_example){reference-type="ref" reference="fig:illustrative_example"}](dwsrp_grantt_chart.pdf){#fig:illustrative_example_gantt width="0.6\\linewidth"}

For illustrative purposes, let us assume that the given routes and schedules shown in Figure [1](#fig:illustrative_example){reference-type="ref" reference="fig:illustrative_example"} pertain to the static problem. While we here demonstrate the transition to the first re-optimization, the same approach applies to subsequent transitions between consecutive re-optimization periods. At the initial time $0$, the set $\mathcal{N}^{(0)}$ comprises tasks $\{1,2,3,4,5,6,7,8\}$. Assuming $\tau^{(1)}=45$ and $f^{(1)}=10$, the frozen period spans the interval $[45,55]$. Prior to reaching $\tau^{(1)}=45$, tasks 1, 2, 3, and 4 have already been completed, resulting in $\mathcal{N}_{prev}^{(1)}$ being defined as $\{5,6,7,8\}$. During the frozen period, crew 1 initiates task 5 and completes it at time 60. Consequently, crew 1 commences the first re-optimization period from the location of task 5 at time 60, i.e., $\phi_1^{(1)}=5$ and $\eta_1^{(1)}=60$. Employing the same logic, we have $\phi_2^{(1)}=6$ and $\eta_2^{(1)}=55$. Therefore, $\mathcal{N}_{frozen}^{(1)}$ is defined as $\{5,6\}$. Suppose that only two tasks, task 9 and task 10, appear within the time interval $(0,45]$. Consequently, $\mathcal{N}^{(1)}$ is defined as $\{7,8,9,10\}$. These four tasks must be addressed during the first re-optimization phase.

## Mixed integer programming formulation {#mip}

In this section, we present the mixed-integer programming formulation for the $t^{\textit{th}}$ re-optimization problem of DWSRP-TW-SC for any $t$.
We start with introducing the decision variables: $$\begin{aligned} X_{ij}^k &: \begin{cases} 1, & \text{if arc }(i,j)\in\mathcal{A}^{(t)} \text{ is traversed by crew }k\in\mathcal{K}\\ 0, & \text{otherwise} \end{cases} \\ Y_i^k &: \begin{cases} 1, & \text{if node }i \in \mathcal{V}^{(t)}\setminus\{0\} \text{ is assigned to crew }k\in\mathcal{K}\\ 0, & \text{otherwise} \end{cases} \\ O_i &: \begin{cases} 1, & \text{if task }i \in \mathcal{N}^{(t)} \text{ is accomplished via outsourcing}\\ 0, & \text{otherwise} \end{cases}\\ D_{i}^k &: \text{departure time of crew }k \in \mathcal{K} \text{ from node }i\in\mathcal{V}^{(t)}\\ C_i &: \text{completion time of task } i \in \mathcal{N}^{(t)} \end{aligned}$$ Then, the $t^{\textit{th}}$ re-optimization problem at any time in between $\tau^{(0)}$ and $\tau^{\max}$ has a planning horizon of $[\tau^{(t)}, \tau^{\max}]$ and can be formulated as follows: $$\begin{aligned} \text{minimize} & \quad \displaystyle \text{TWTT} = \sum_{i \in \mathcal{N}^{(t)}} w_i (C_i - a_i) \label{mip_objective}\\ \mathrm{s.t.} & \displaystyle\quad O_i + \sum_{k \in \mathcal{K}}Y_{i}^k \geq 1 & \forall i \in \mathcal{N}^{(t)} \label{c1:job_must_be_completed}\\ & \displaystyle \quad \sum_{k \in \mathcal{K}}Y_{i}^k \leq |\mathcal{K}| (1-O_i) & \forall i \in \mathcal{N}^{(t)} \label{c2:if_outsourced_no_team} \\ & \displaystyle \quad Y_{\phi_k^{(t)}}^k = 1 & \forall k \in \mathcal{K} \label{c3:start_location} \\ & \displaystyle \quad D_{\phi_k^{(t)}}^k = \eta_k^{(t)} & \forall k \in \mathcal{K} \label{c4:start_time}\\ & \displaystyle \quad \sum_{i \in \mathcal{N}^{(t)} \cup \{\phi_k\}}X_{i0}^k = 1 & \forall k \in \mathcal{K} \label{c5:return_depot} \\ & \displaystyle \quad \sum_{\substack{j \in \mathcal{V}^{(t)}:\\(i,j) \in \mathcal{A}^{(t)}}} X_{ij}^k = Y_i^k & \forall i \in \mathcal{N}^{(t)} \cup \{\phi_k\}, k \in \mathcal{K} \ \label{c6:X_Y_match} \\ & \displaystyle \quad \sum_{\substack{j \in \mathcal{N}^{(t)} \cup \{\phi_k\}:\\(j,i) \in 
\mathcal{A}^{(t)}}}X_{ji}^k - \sum_{\substack{j \in \mathcal{N}^{(t)} \cup \{0\}:\\(i,j) \in \mathcal{A}^{(t)}}}X_{ij}^k = 0 & \forall i \in \mathcal{N}^{(t)}, k \in \mathcal{K} \label{c7:flow_balance} \\ & \displaystyle \quad \sum_{k \in \mathcal{K}} v_{kq}Y_{i}^k \geq u_{iq}- O_i & \forall i \in \mathcal{N}^{(t)}, q \in \mathcal{Q} \label{c8:skill_competence}\\ & \displaystyle \quad D_i^k + c_{ij} + p_j \leq D_j^k + \tau^{\max}(1-X_{ij}^k) &\forall k \in \mathcal{K}, i \in \mathcal{V}^{(t)}\setminus\{0\}, j \in \mathcal{V}^{(t)}\setminus\{\phi_k\}: i \neq j \label{c9:departure_time_of_crews} \\ & \displaystyle \quad D_i^k + \tau^{\max}(1-Y_i^k) \geq C_i - \tau^{\max}O_i & \forall i \in \mathcal{N}^{(t)}, k \in \mathcal{K} \label{c10:task_completion_time_1}\\ & \displaystyle \quad D_i^k \leq C_i & \forall i \in \mathcal{N}^{(t)}, k \in \mathcal{K} \label{c11:task_completion_time_2}\\ & \displaystyle \quad C_i \leq \tau^{\max}O_i + (l_i+p_i)(1-O_i) \quad & \forall i \in \mathcal{N}^{(t)} \label{c12:time_window_1}\\ & \displaystyle \quad C_i \geq \tau^{\max}O_i + (e_i+p_i)(1-O_i) \quad & \forall i \in \mathcal{N}^{(t)} \label{c13:time_window_2} \end{aligned}$$ $$\begin{aligned} & \displaystyle \quad X_{ij}^k \in \{0,1\} & \forall (i,j) \in \mathcal{A}^{(t)}, k \in \mathcal{K} \label{c14:domain_X} \\ & \displaystyle \quad Y_{i}^k \in \{0,1\} & \forall i \in \mathcal{V}^{(t)} \setminus\{0\}, k \in \mathcal{K} \label{c15:domain_Y} \\ & \displaystyle \quad O_i \in \{0,1\} & \forall i \in \mathcal{N}^{(t)}\label{c16:domain_O} \\ & \displaystyle \quad D_{i}^k \geq 0 & \forall i \in \mathcal{V}^{(t)} \setminus\{0\}, k \in \mathcal{K} \label{c17:domain_D} \\ & \displaystyle \quad C_i \geq 0 & \forall i \in \mathcal{N}^{(t)}\label{c18:domain_C} \end{aligned}$$ The objective function ([\[mip_objective\]](#mip_objective){reference-type="ref" reference="mip_objective"}) minimizes the weighted sum of task throughput times for all tasks.
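The objective value of a fixed schedule can be evaluated directly from the task weights, completion times, and arrival times; the following sketch uses illustrative data only.

```python
def twtt(w, C, a):
    """Total weighted task throughput time: sum of w_i * (C_i - a_i)."""
    return sum(w[i] * (C[i] - a[i]) for i in w)

# Illustrative data: weights, completion times, and arrival times of two tasks.
w = {1: 2, 2: 1}
C = {1: 50, 2: 70}
a = {1: 10, 2: 30}
print(twtt(w, C, a))  # 2*(50-10) + 1*(70-30) = 120
```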
Note that removing the constant term $\sum_{i \in \mathcal{N}^{(t)}} w_i a_i$ from the objective turns it into the minimization of the weighted sum of completion times of all tasks; these two descriptions of the objective are therefore used interchangeably throughout this article. Constraints ([\[c1:job_must_be_completed\]](#c1:job_must_be_completed){reference-type="ref" reference="c1:job_must_be_completed"}) ensure that each task in the system is either outsourced or assigned to at least one crew. Depending on the capabilities of the crews, multiple crews may be assigned to a task. However, if a task is outsourced, no crew can be assigned to it, as stated in constraints ([\[c2:if_outsourced_no_team\]](#c2:if_outsourced_no_team){reference-type="ref" reference="c2:if_outsourced_no_team"}). Each crew begins its tour from its last completed task, as determined by constraints ([\[c3:start_location\]](#c3:start_location){reference-type="ref" reference="c3:start_location"}) and ([\[c4:start_time\]](#c4:start_time){reference-type="ref" reference="c4:start_time"}), and must complete its tour at the central depot, as indicated by constraint ([\[c5:return_depot\]](#c5:return_depot){reference-type="ref" reference="c5:return_depot"}). Constraints ([\[c6:X_Y\_match\]](#c6:X_Y_match){reference-type="ref" reference="c6:X_Y_match"}) imply that if a node is assigned to a crew, the task must be part of that crew's route. Constraints ([\[c7:flow_balance\]](#c7:flow_balance){reference-type="ref" reference="c7:flow_balance"}) enforce flow conservation, so that a crew both arrives at and departs from the node of each assigned task. Constraints ([\[c8:skill_competence\]](#c8:skill_competence){reference-type="ref" reference="c8:skill_competence"}) guarantee that the skill requirements of in-house tasks are satisfied by the assigned team(s).
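The skill-coverage condition of constraints ([\[c8:skill_competence\]](#c8:skill_competence){reference-type="ref" reference="c8:skill_competence"}) can be checked per task as in the following sketch; the helper name and the skill vectors are illustrative.

```python
def skills_covered(u_i, v, assigned, outsourced):
    """Check constraint (c8) for one task: for every required skill q
    (u_iq = 1), some assigned crew k must have v_kq = 1, unless the
    task is outsourced (O_i = 1), in which case the constraint is slack."""
    if outsourced:
        return True
    return all(any(v[k][q] for k in assigned)
               for q, req in enumerate(u_i) if req)

# Task requiring skills 1, 2, and 5 (0-indexed positions 0, 1, 4).
u_i = [1, 1, 0, 0, 1]
v = {1: [1, 1, 1, 0, 1], 2: [1, 0, 0, 0, 0], 3: [0, 1, 0, 0, 1]}
print(skills_covered(u_i, v, {2, 3}, False))  # True: crews 2 and 3 jointly cover
print(skills_covered(u_i, v, {2}, False))     # False: crew 2 lacks two skills
```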
Constraints ([\[c9:departure_time_of_crews\]](#c9:departure_time_of_crews){reference-type="ref" reference="c9:departure_time_of_crews"}) calculate the departure time of the crews from the nodes. Constraints ([\[c10:task_completion_time_1\]](#c10:task_completion_time_1){reference-type="ref" reference="c10:task_completion_time_1"}) and ([\[c11:task_completion_time_2\]](#c11:task_completion_time_2){reference-type="ref" reference="c11:task_completion_time_2"}) determine the completion time of in-house tasks. Constraints ([\[c12:time_window_1\]](#c12:time_window_1){reference-type="ref" reference="c12:time_window_1"}) and ([\[c13:time_window_2\]](#c13:time_window_2){reference-type="ref" reference="c13:time_window_2"}) set the completion time of outsourced tasks to the end of the day and enforce time window constraints for non-outsourced tasks. Finally, Constraints ([\[c14:domain_X\]](#c14:domain_X){reference-type="ref" reference="c14:domain_X"}) - ([\[c18:domain_C\]](#c18:domain_C){reference-type="ref" reference="c18:domain_C"}) specify the domains of the decision variables. # Solution approach {#section:solution} Considering the inherent nature of the problem, it is imperative that the $t^{th}$ re-optimization phase, which commences at time $\tau^{(t)}$, is concluded by time $\tau^{(t)} + f^{(t)}$. While the mathematical model can produce optimal solutions for small- to medium-sized problem instances within this time window, obtaining the optimal solution through the model becomes increasingly time-intensive for larger instances. As a result, we propose an ALNS heuristic to generate high-quality solutions within the designated time interval. The proposed ALNS algorithm initiates with an initial solution constructed by a constructive heuristic, the details of which are presented in Section [4.1](#subsec:constructive){reference-type="ref" reference="subsec:constructive"}. 
The subsequent Section [4.2](#subsec:alns){reference-type="ref" reference="subsec:alns"} provides a detailed description of the ALNS algorithm. ## Constructive heuristic {#subsec:constructive} The algorithm starts by sorting the tasks in $\mathcal{N}^{(t)}$ in non-increasing order of their priorities. Considering that a single team may not be able to fulfill a task's requirements, there can exist multiple team combinations capable of satisfying the skill requirements of the task. Subsequently, starting from the first task in the ordered list, the algorithm identifies the *irreducible compatible team combinations* for each task, taking into account its skill requirements. These combinations represent the alternative minimal sets of teams capable of fulfilling the skill requirements of the task. The collection of all such combinations for task $i$ forms the set $\mathcal{A}_i$. For instance, consider a scenario where there are five distinct skills, and the task being considered, task $i$, necessitates the first, second, and fifth skills, represented as $u_i = [1, 1, 0, 0, 1]$. Assuming the existence of three teams with corresponding skill qualifications, denoted as $v_1 = [1, 1, 1, 0, 1]$, $v_2 = [1, 0, 0, 0, 0]$, and $v_3 = [0, 1, 0, 0, 1]$, the set of irreducible compatible team combinations $\mathcal{A}_i$ is derived as $\{ \{1\}, \{2, 3\}\}$. If no feasible team combination is identified for a given task $i$, denoted as $\mathcal{A}_i = \emptyset$, the task is outsourced. Next, for each available alternative team combination, the optimal insertion point is determined for each team within the combination, taking into consideration the respective time windows and the resulting total insertion cost. The task is then inserted at the best insertion point within each team of the combination with the lowest total insertion cost.
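The enumeration of irreducible compatible team combinations can be sketched as follows, using the task and crew data of the example above; the function name and representation are ours.

```python
from itertools import combinations

def irreducible_combinations(u_i, v):
    """Enumerate the minimal (irreducible) sets of crews that jointly
    cover all skills required by a task with requirement vector u_i."""
    required = {q for q, r in enumerate(u_i) if r}
    def covers(crews):
        owned = set().union(*({q for q, s in enumerate(v[k]) if s}
                              for k in crews))
        return required <= owned
    feasible = [set(c) for r in range(1, len(v) + 1)
                for c in combinations(v, r) if covers(c)]
    # Keep only combinations that have no feasible proper subset.
    return [c for c in feasible if not any(f < c for f in feasible)]

u_i = [1, 1, 0, 0, 1]
v = {1: [1, 1, 1, 0, 1], 2: [1, 0, 0, 0, 0], 3: [0, 1, 0, 0, 1]}
print(irreducible_combinations(u_i, v))  # [{1}, {2, 3}]
```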
This iterative process continues until all tasks have been assigned to teams or outsourced. For a comprehensive understanding of the constructive heuristic's implementation, refer to Algorithm [\[Algorithm:CH\]](#Algorithm:CH){reference-type="ref" reference="Algorithm:CH"}, which provides the pseudocode outlining the step-by-step procedure. $\mathcal{N}^{outsourced} \leftarrow \emptyset$ $\mathcal{N}^{sorted} \leftarrow SortTasks(\mathcal{N}^{(t)})$ $\mathcal{A}_i \leftarrow FindIrreducibleCompatibleCombinations(i,\mathcal{K})$ $\mathcal{N}^{outsourced} \leftarrow \mathcal{N}^{outsourced} \cup$ $\{i\}$ ${\mathcal{A}_i}^* \leftarrow FindBestCombination(\mathcal{A}_i)$ Insert task $i$ at the best insertion point of the route of crew $k$ $x^0$: initial solution [\[Algorithm:CH\]]{#Algorithm:CH label="Algorithm:CH"} ## Adaptive large neighborhood search algorithm {#subsec:alns} ALNS was originally introduced by Ropke and Pisinger [@RopkePisinger2006a; @RopkePisinger2006b]. This metaheuristic operates by iteratively selecting a destroy and a repair heuristic from a designated set, based on their past performance. By employing this strategy, the ALNS generates candidate solutions and determines their acceptance or rejection through a probabilistic criterion. Renowned for its ability to balance exploration and exploitation, the ALNS has demonstrated its effectiveness in various problem domains, particularly in the realm of routing problems. Thus, we employ an ALNS algorithm to improve the initial solution derived from the constructive heuristic, leveraging its strengths to further refine the solution quality. The pseudocode for our Adaptive Large Neighborhood Search algorithm is presented in Algorithm [\[Algorithm:ALNS\]](#Algorithm:ALNS){reference-type="ref" reference="Algorithm:ALNS"}. The algorithm begins by constructing an initial solution $x^0$ using the constructive heuristic.
It then initializes the current and best-found solutions, while assigning equal weights to the destroy and repair heuristics. In each iteration, the algorithm selects a destroy heuristic $h^{\text{destroy}}$ and a repair heuristic $h^{\text{repair}}$ based on their performance in previous iterations, taking their weights into account. Given the current solution $x^{\text{current}}$, the selected destroy heuristic removes a certain number of tasks from the assigned teams, and the repair heuristic reinserts the removed tasks into the routes, if possible. If the resulting solution $x^{\text{candidate}}$ satisfies the stochastic acceptance criteria, it replaces the current solution $x^{\text{current}}$. Furthermore, if $x^{\text{candidate}}$ is superior to the best solution found so far, denoted as $x^{\text{best}}$, $x^{\text{candidate}}$ replaces $x^{\text{best}}$ as the new best solution. The algorithm then updates the weights of the destroy and repair heuristics based on the performance of $h^{\text{destroy}}$ and $h^{\text{repair}}$, and proceeds to the next iteration. The weights of the destroy and repair heuristics influence their selection probability in subsequent iterations. The algorithm continues until the stopping criterion is met. 
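As a runnable complement to Algorithm [\[Algorithm:ALNS\]](#Algorithm:ALNS){reference-type="ref" reference="Algorithm:ALNS"}, the following minimal sketch wires the same loop together. All names are ours, the operators in the usage example are toy placeholders rather than the heuristics of this section, and the reward values echo the $\sigma$ parameters only illustratively.

```python
import math
import random

def alns(x0, objective, destroyers, repairers,
         iters=200, T0=100.0, alpha=0.95, seed=0):
    """Minimal ALNS skeleton: weight-proportional operator selection,
    simulated-annealing acceptance, and additive weight updates."""
    rng = random.Random(seed)
    x_cur, x_best = x0, x0
    w_d = [1.0] * len(destroyers)
    w_r = [1.0] * len(repairers)
    T = T0
    for _ in range(iters):
        d = rng.choices(range(len(destroyers)), weights=w_d)[0]
        r = rng.choices(range(len(repairers)), weights=w_r)[0]
        x_cand = repairers[r](destroyers[d](x_cur, rng), rng)
        delta = objective(x_cand) - objective(x_cur)
        accepted = delta < 0 or rng.random() < math.exp(-delta / T)
        if accepted:
            x_cur = x_cand
            if delta >= 0:
                T *= alpha  # cool after accepting a non-improving solution
        if objective(x_cand) < objective(x_best):
            x_best = x_cand
            reward = 0.08      # new best solution (cf. sigma_1)
        elif accepted and delta < 0:
            reward = 0.05      # improved the current solution (cf. sigma_2)
        elif accepted:
            reward = 0.01      # accepted without improvement (cf. sigma_3)
        else:
            reward = -0.03     # rejected candidate (cf. sigma_4)
        w_d[d] = max(w_d[d] + reward, 0.01)  # keep weights positive
        w_r[r] = max(w_r[r] + reward, 0.01)
    return x_best

# Toy usage: minimize the sum of absolute values of a vector.
def destroy_random(x, rng):
    i = rng.randrange(len(x))
    return x[:i] + (rng.randint(-10, 10),) + x[i + 1:]

def repair_identity(x, rng):
    return x

x_best = alns((5, -3, 7), lambda x: sum(abs(v) for v in x),
              [destroy_random], [repair_identity])
```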
$x^0$: initial solution $x^{\text{best}}\leftarrow x^0$ $x^{\text{current}}\leftarrow x^0$ $\mathbf{w^{\text{destroy}}} \leftarrow \text{InitializeDestroyWeights()}$ $\mathbf{w^{\text{repair}}} \leftarrow \text{InitializeRepairWeights()}$ *iter* $\leftarrow 1$ $h^{\text{destroy}} \leftarrow \text{ChooseDestroyHeuristic}(\mathbf{w^{\text{destroy}}})$ $h^{\text{repair}} \leftarrow \text{ChooseRepairHeuristic}(\mathbf{w^{\text{repair}}})$ $x^{\text{candidate}} \leftarrow h^{\text{repair}}(h^{\text{destroy}}(x^{\text{current}}))$ $x^{\text{current}} \leftarrow x^{\text{candidate}}$ $x^{\text{best}} \leftarrow x^{\text{candidate}}$ $\mathbf{w^{\text{destroy}}} \leftarrow \text{UpdateDestroyWeights()}$ $\mathbf{w^{\text{repair}}} \leftarrow \text{UpdateRepairWeights()}$ $iter \leftarrow iter+1$ $x^{\text{best}}$: best found solution [\[Algorithm:ALNS\]]{#Algorithm:ALNS label="Algorithm:ALNS"} The remainder of this section is dedicated to describing the components of the ALNS algorithm, including the destroy and repair heuristics, the acceptance criteria, the heuristic weight adjustment and selection processes. ## Destroy heuristics Our ALNS incorporates five destroy heuristics, denoted as the set $\mathcal{D}$. These heuristics include problem-specific variations of the random, worst, and related removal heuristics proposed by @RopkePisinger2006b, @PisingerRopke2007, @shaw1997, and @shaw1998. The destroy heuristics employed in our algorithm are random task removal, worst task removal, random team removal, worst team removal, and Shaw removal. Prior to the application of a destroy heuristic, we determine the *degree of destruction*, denoted as $\gamma$. This parameter specifies the percentage of tasks to be removed from the current solution $x^{\text{current}}$. At each iteration, $\gamma$ is assigned a random value between $\gamma^{min}$ and $\gamma^{max}$. 
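Sampling the degree of destruction and converting it into a number of tasks to remove can be sketched as follows; rounding up is our assumption, since the text only fixes the percentage range.

```python
import math
import random

def removal_count(n_tasks, gamma_min, gamma_max, rng):
    """Draw the degree of destruction gamma uniformly from
    [gamma_min, gamma_max] and convert it to a task count."""
    gamma = rng.uniform(gamma_min, gamma_max)
    return max(1, math.ceil(gamma * n_tasks))

# With the ranges of Table 1 (gamma in [0.5, 1.0]) and 20 tasks,
# between 10 and 20 tasks are removed.
rng = random.Random(42)
n = removal_count(20, 0.5, 1.0, rng)
```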
### Random task removal (RTaR): {#subsubsec:rtr} The algorithm employs a randomized selection mechanism to choose a task, which is subsequently removed from the routes of all teams to which it is assigned. This iterative process continues until $\gamma$ percentage of the total tasks has been successfully eliminated from the current solution. ### Worst task removal (WTaR): The algorithm computes a score for each task based on its impact on the objective function. Through this scoring process, the task with the highest score is identified as the worst task, as we have a minimization problem. Subsequently, the algorithm removes this worst task from the routes of all teams to which it is assigned. The procedure continues in an iterative manner until $\gamma$ percentage of the total tasks has been successfully eliminated from the current solution. ### Random team removal (RTeR): {#subsubsec:rter} The algorithm randomly chooses a team and proceeds to remove all tasks from its route until $\gamma$ percentage of the total tasks has been eliminated. It is important to note that if a task that needs to be removed is assigned to multiple teams, it will be removed from all teams to which it is assigned. ### Worst team removal (WTeR): The algorithm evaluates the impact of each team on the objective function and assigns a score accordingly. Through this scoring process, the team with the highest score is identified as the worst team. The algorithm then proceeds to remove the tasks associated with the identified worst team from the routes of all teams to which they are assigned. This iterative process continues until $\gamma$ percentage of the total tasks has been successfully eliminated. ### Shaw removal (SR): According to the Shaw Removal heuristic, it is more promising to remove tasks that are related rather than unrelated, as it is more likely to lead to an improved solution. 
The relatedness between two tasks, $R(i, j)$, is calculated as the $L_2$ norm distance between the skill vectors of the tasks, as described by Equation [\[Rij\]](#Rij){reference-type="ref" reference="Rij"}. $$\begin{aligned} R(i, j) = \sqrt{\sum_{q\in \mathcal{Q}} (u_{iq} - u_{jq})^2} \label{Rij}\end{aligned}$$ As the value of $R(i, j)$ approaches 0, the tasks $i \in \mathcal{N}^{(t)}$ and $j \in \mathcal{N}^{(t)} \setminus \{i\}$ are considered more closely related. The Shaw Removal (SR) heuristic initiates by randomly selecting a task $i$ from the set of tasks $\mathcal{N}^{(t)}$. It then removes task $i$, identifies the most related task $j^*$, where $j^* = \arg\min_{j \in \mathcal{N}^{(t)} \setminus \{i\}} R(i,j)$, and removes $j^*$ in turn. This iterative process continues until $\gamma$ percentage of the total tasks has been eliminated. ## Repair heuristics {#sec:insertion} The proposed ALNS algorithm incorporates three distinct repair heuristics, represented by the set $\mathcal{R}$: random insertion, greedy insertion, and regret insertion. Let $\mathcal{N}^{(t)}_{or}$ denote the union of the outsourced tasks and the tasks removed by the selected removal heuristic. Prior to executing a repair heuristic, the algorithm determines the collection of all irreducible compatible team combinations, denoted as $\mathcal{A}_i$, for each task $i$ belonging to the set $\mathcal{N}^{(t)}_{or}$. ### Random task insertion (RaI) The algorithm begins by sorting the tasks in $\mathcal{N}^{(t)}_{or}$ in non-increasing order of their priority. Starting from the first task, denoted as $i^*$, in the sorted list, a random team combination is selected from the set $\mathcal{A}_{i^{*}}$. Subsequently, the task is inserted into the optimal positions within the teams comprising the selected random team combination. In the event that $\mathcal{A}_{i^{*}}$ is empty, the task is outsourced.
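The RaI step can be sketched as follows; the data structures and names are ours, and the insertion of a task at the best points within the chosen teams is abstracted away.

```python
import random

def random_insertion(removed, priorities, combos, rng):
    """Sketch of RaI: process tasks in non-increasing priority order,
    pick a random compatible team combination for each, and outsource
    a task when no combination exists (A_i empty)."""
    plan, outsourced = {}, []
    for i in sorted(removed, key=lambda i: priorities[i], reverse=True):
        if combos.get(i):
            plan[i] = rng.choice(combos[i])  # teams then receive task i
        else:                                # at their best insertion points
            outsourced.append(i)
    return plan, outsourced

# Illustrative data: task 9 has two compatible combinations, task 4 none.
rng = random.Random(1)
plan, out = random_insertion({4, 9}, {4: 3, 9: 5},
                             {4: [], 9: [{1}, {2, 3}]}, rng)
```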
### Greedy task insertion (GI) Similar to random insertion, the algorithm begins by sorting the tasks in $\mathcal{N}^{(t)}_{or}$ in non-increasing order of their priority. Subsequently, starting from the first task, denoted as $i^*$, in the sorted list, the algorithm selects from the set $\mathcal{A}_{i^{*}}$ the team combination that yields the lowest insertion cost. The task is then inserted into the optimal positions within the teams belonging to the selected combination. In the event that $\mathcal{A}_{i^{*}}$ is empty, the task is outsourced. ### $n$-regret insertion (ReI-$n$) Regret heuristics are greedy heuristics enhanced with a look-ahead feature [@PotvinRousseau1993]. Based on @PisingerRopke2007, the regret value for each task $i\in \mathcal{N}^{(t)}_{or}$ is calculated as follows. Let $\Delta \text{TWTT}_k(i)$ denote the increase in the objective function value when task $i$ is inserted at its $k^{\textrm{th}}$-best team combination. In the basic version where $n=2$, referred to as *ReI-2*, the task inserted next is the one for which the cost difference between inserting it into its best team combination and its second-best team combination is largest. In each iteration, depending on the regret heuristic used, i.e., the value of $n$, the regret heuristic chooses the next task $i^*$ to be inserted according to Equation [\[Regreti\*\]](#Regreti*){reference-type="ref" reference="Regreti*"}: $$\begin{aligned} i^* = \arg\max_{i\in \mathcal{N}^{(t)}_{or}} \Bigg\{ \sum_{k=2}^{\text{min}\{n,|\mathcal{A}_i|\}}\big(\Delta \text{TWTT}_k(i) - \Delta \text{TWTT}_1(i)\big) \Bigg\} \label{Regreti*}\end{aligned}$$ ## Acceptance criteria The acceptance criterion of simulated annealing is employed in the proposed ALNS.
It implies that an improving solution $x^{\text{candidate}}$, where $\text{TWTT}(x^{\text{candidate}})<\text{TWTT}(x^{\text{current}})$, is accepted with probability 1. If $\text{TWTT}(x^{\text{candidate}}) \geq \text{TWTT}(x^{\text{current}})$, then $x^{\text{candidate}}$ is accepted with probability $e^{\frac{-\Delta \text{TWTT}}{T_{iter}}}$, where $\Delta \text{TWTT} = \text{TWTT}(x^{\text{candidate}}) - \text{TWTT}(x^{\text{current}})$ and $T_{iter}$ denotes the temperature of the current iteration. The algorithm starts with an initial temperature $T_{0}$. After accepting a non-improving solution, the temperature is updated by multiplying it by a factor $\alpha$, i.e., $T_{iter} = \alpha T_{iter}$. To reduce the likelihood of accepting non-improving solutions as the algorithm progresses, the temperature must gradually decrease; consequently, the parameter $\alpha$ must lie in the range $(0,1)$. ## Initialization and update of weights The initial weights of the heuristics in $\mathcal{D}$ and $\mathcal{R}$ are set to $1/|\mathcal{D}|$ and $1/|\mathcal{R}|$, respectively. During each iteration, the weights of the chosen destroy and repair heuristics, $h^{\text{destroy}}$ and $h^{\text{repair}}$, are updated based on the quality of the resulting solution $x^{\text{candidate}}$. If $x^{\text{candidate}}$ outperforms the best-found solution, replacing $x^{\text{best}}$, the weights of $h^{\text{destroy}}$ and $h^{\text{repair}}$ are increased by $\sigma_1$. If $x^{\text{candidate}}$ improves upon the current solution, replacing $x^{\text{current}}$, the weights are increased by $\sigma_2$. However, if $x^{\text{candidate}}$ is accepted based on the acceptance criterion without improving the current solution, the weights are increased by $\sigma_3$.
In cases where $x^{\text{candidate}}$ does not replace $x^{\text{current}}$, the weights of $h^{\text{destroy}}$ and $h^{\text{repair}}$ are updated by $\sigma_4$. The values of $\sigma_1, \sigma_2, \sigma_3$, and $\sigma_4$ are hyperparameters that need to be tuned through computational experiments; however, they must adhere to the following relation: $$\sigma_1 > \sigma_2 > \sigma_3 > 0 > \sigma_4$$ At the end of each iteration, the weights of all repair and destroy heuristics are normalized. To choose a destroy and a repair heuristic from the sets $\mathcal{D}$ and $\mathcal{R}$, respectively, we employ the *roulette wheel selection* approach, which allocates each alternative an angle of the wheel proportional to its weight. ## Stopping condition The maximum number of iterations, $\nu_1$, and the maximum number of non-improving iterations, $\nu_2$, are used together as the stopping condition. # Computational experiments {#section:compStudy} In this section, we evaluate the effectiveness of the proposed reoptimization framework through computational experiments on randomly generated DWSRP instances. The process for creating these instances is detailed in Section [5.1](#subsec:data){reference-type="ref" reference="subsec:data"}. Our evaluation begins with a comparison of the proposed ALNS algorithm, the state-of-the-art solver CPLEX, and a constructive heuristic designed to mimic the decision-making approach of a prudent decision-maker. This comparison is carried out using problem instances representing single periods, as described in Section [5.2](#section:compStudy_HeuristicMMComparison){reference-type="ref" reference="section:compStudy_HeuristicMMComparison"}.
Subsequently, in Section [5.3](#section:compStudy_largeSizeDWSRPinstances){reference-type="ref" reference="section:compStudy_largeSizeDWSRPinstances"}, we extend our examination to larger instances that span multiple reoptimization periods. That section also explores the impact of factors such as the degree of dynamism, the effective degree of dynamism, and the number of tasks requiring synchronization on TWTT. Additionally, we analyze the influence of different reoptimization strategies and of varying frozen period durations on solution quality in Sections [5.4](#section:compStudy_reoptimizationPeriodComparison){reference-type="ref" reference="section:compStudy_reoptimizationPeriodComparison"} and [5.5](#section:compStudy_frozenPeriodComparison){reference-type="ref" reference="section:compStudy_frozenPeriodComparison"}, respectively. We employed CPLEX 20.1 to solve the proposed mathematical model, with a time limit of 15 minutes per run. The proposed ALNS-based reoptimization framework was implemented in Java. The computational experiments were conducted on a computer with an Intel® Core™ i7-6500U 2.5 GHz processor and 12 GB of RAM, running Windows 10. The parameters of the ALNS algorithm were tuned in preliminary experiments and set to the values listed in Table [1](#tab:ALNSparams){reference-type="ref" reference="tab:ALNSparams"}.

  Parameter                                                               Value
  ----------------------------------------------------------------------- -------
  initial temperature value, $T_0$                                        1000
  temperature update coefficient, $\alpha$                                0.95
  max. no. of iterations, $\nu_1$                                         250
  max. no. of non-improving iterations, $\nu_2$                           50
  min. degree of destruction, $\gamma^{min}$                              0.50
  max. degree of destruction, $\gamma^{max}$                              1.00
  weight increase for the best-found improving solution, $\sigma_1$       0.08
  weight increase for the current improving solution, $\sigma_2$          0.05
  weight increase for the non-improving accepted solution, $\sigma_3$     0.01
  weight increase for the non-improving unaccepted solution, $\sigma_4$   -0.03

  : Parameter values of the proposed ALNS

## Data {#subsec:data}

In our study, a DWSRP instance represents the problem to be optimized during a reoptimization. To distinguish the instance that spans an entire working day, we refer to it as a DWSRP super-instance. A DWSRP super-instance, designated as $I(N,K,\delta)$, comprises a total of $N$ tasks scheduled throughout the working day, involves $K$ teams, and has a degree of dynamism quantified by $\delta$. We aim to create instances with diverse degrees of dynamism, namely $\delta = 0.2, 0.4, 0.6, 0.8$, while varying the total task counts and team numbers as $(N, K) = (30, 2), (40, 3), (50, 4), (60, 5), (75, 5)$. An instance $I(N,K,\delta)$ is generated as follows. Considering a standard working day spanning 9 hours, equivalent to 540 minutes, we set $\tau^{\max}=540$. To spread new task arrivals gradually across the work period, we divide the day, denoted as $(0, \tau^{\max}]$, into $d$ time intervals. At the start of the day, $\lfloor N/d\rfloor$ tasks are assumed to be predetermined, which establishes the degree of dynamism of the instance as $\delta = \big(N-\lfloor N/d\rfloor\big)/N$. The arrival time $a_i$ and the earliest start time $e_i$ of these static tasks are initialized to 0. For each interval $t=1,..., d-1$, corresponding to the time interval $\big(\frac{\tau^{\max}}{d}(t-1), \frac{\tau^{\max}}{d}t\big]$, $\lfloor N/d\rfloor$ tasks are generated.
The remaining $\big(N-\lfloor N/d\rfloor\, d\big)$ tasks are allocated to the final time interval, i.e., time interval $t=d$, corresponding to $\big(\frac{\tau^{\max}}{d}(d-1), \tau^{\max}\big]$, so that the static tasks and the interval counts sum to $N$. For a task $i$ generated for a time interval $t$, its arrival time $a_i$ is drawn uniformly at random from the corresponding interval, and the earliest start time $e_i$ is set equal to $a_i$. The latest start time $l_i$ of task $i$ is calculated as the minimum of $a_i + \textit{Uniform}(10, 50)$ and $\tau^{\max}$. Furthermore, the processing time $p_i$ and the priority $w_i$ are drawn from the uniform distributions $\textit{Uniform}(5,25)$ and $\textit{Uniform}(1,5)$, respectively. Additionally, each task is assigned a random location within a rectangular metric space measuring 25 km by 25 km. We assume a traveling speed of 30 km per hour for each team and use rectilinear distances to generate the distance matrix. We consider a skill set comprising five distinct skills, i.e., $|\mathcal{Q}|=5$. To determine the skill requirements for each task-skill pair $(i,q)$, we draw a random number between 0 and 1 and set $u_{iq}$ to 1 if the number is at most 0.5. If a task $i$ initially requires no skill at all, i.e., $u_{iq}=0$ for all $q\in \mathcal{Q}$, we randomly select a skill to be associated with the task. A similar process is applied to determine the skill capabilities of the teams, resulting in the $[v_{kq}]$ matrix. Note that, depending on the skill requirements of the tasks, which are not all known in advance, and the skill capabilities of the teams, some tasks within each super-instance require synchronization. The number of tasks necessitating synchronization is denoted by $N^{\text{sync}}$.
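The interval-based arrival-time generation described above can be sketched as follows; the helper name is ours, and the remainder count in the final interval ensures that the static tasks and the interval totals sum to $N$.

```python
import random

def generate_arrivals(N, d, tau_max, seed=0):
    """floor(N/d) tasks are static (a_i = 0); intervals t = 1..d-1 each
    receive floor(N/d) dynamic tasks with arrival times drawn uniformly
    from the interval, and the final interval absorbs the remainder."""
    rng = random.Random(seed)
    static = N // d
    arrivals = [0.0] * static
    counts = [static] * (d - 1) + [N - static * d]  # one entry per interval
    for t, count in enumerate(counts, start=1):
        lo, hi = tau_max * (t - 1) / d, tau_max * t / d
        arrivals += [rng.uniform(lo, hi) for _ in range(count)]
    return arrivals

# N = 30, d = 5: 6 static tasks, degree of dynamism (30 - 6) / 30 = 0.8.
arr = generate_arrivals(30, 5, 540)
```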
In this manner, we have generated twenty DWSRP super-instances with varying levels of dynamism. Table [2](#tab:InstanceInfo){reference-type="ref" reference="tab:InstanceInfo"} reports the effective degree of dynamism, $\delta^e$, the effective degree of dynamism with time windows, $\delta^e_{tw}$, and the number of tasks requiring synchronization, $N^{\text{sync}}$, for each super-instance.

  ----------------------------------- ------------ ----------------- ------------------- ----------------------------------- ------------ ----------------- -------------------
  **Super-instance $I(N,K,\delta)$**  $\delta^e$   $\delta^e_{tw}$   $N^{\text{sync}}$   **Super-instance $I(N,K,\delta)$**  $\delta^e$   $\delta^e_{tw}$   $N^{\text{sync}}$
  $I(30,2,0.2)$                       0.47         0.76              12                  $I(50,4,0.6)$                       0.49         0.72              13
  $I(30,2,0.4)$                       0.77         0.72              17                  $I(50,4,0.8)$                       0.53         0.70              19
  $I(30,2,0.6)$                       0.36         0.69              13                  $I(60,5,0.2)$                       0.67         0.72              21
  $I(30,2,0.8)$                       0.47         0.70              9                   $I(60,5,0.4)$                       0.34         0.68              11
  $I(40,3,0.2)$                       0.23         0.69              17                  $I(60,5,0.6)$                       0.91         0.73              18
  $I(40,3,0.4)$                       0.25         0.72              13                  $I(60,5,0.8)$                       0.44         0.70              12
  $I(40,3,0.6)$                       0.47         0.69              26                  $I(75,5,0.2)$                       0.61         0.74              27
  $I(40,3,0.8)$                       0.62         0.71              8                   $I(75,5,0.4)$                       0.45         0.70              19
  $I(50,4,0.2)$                       0.82         0.70              24                  $I(75,5,0.6)$                       0.39         0.71              31
  $I(50,4,0.4)$                       0.12         0.70              7                   $I(75,5,0.8)$                       0.18         0.72              12
  ----------------------------------- ------------ ----------------- ------------------- ----------------------------------- ------------ ----------------- -------------------

  : Information about the generated DWSRP super-instances

Furthermore, to assess the performance of the proposed ALNS algorithm against both the Mathematical Model (MM) and the Constructive Heuristic (CH), we generated a set of thirty small- to medium-size DWSRP instances. These instances cover a variety of task counts ($4 \leq |\mathcal{N}| \leq 20$), team quantities ($2 \leq |\mathcal{K}| \leq 5$), and skill numbers ($3 \leq |\mathcal{Q}| \leq 5$).
The remaining parameters for these instances were generated following the aforementioned methodology. This collection of instances is instrumental in the analysis presented in Section [5.2](#section:compStudy_HeuristicMMComparison){reference-type="ref" reference="section:compStudy_HeuristicMMComparison"}. The complete dataset is available at @mendeley_data.

## Analysis on the performance of the proposed ALNS compared to the mathematical model and the constructive heuristic {#section:compStudy_HeuristicMMComparison}

To comprehensively investigate the effectiveness of our proposed ALNS approach, we conducted a comparative analysis involving the ALNS, the Mathematical Model (abbreviated as MM), and the Constructive Heuristic (abbreviated as CH). This evaluation involved multiple reoptimization problems across the range of small- to medium-sized instances. The outcomes of these comparative experiments are reported in Table [3](#tab:compStudy_HeuristicMMComparison){reference-type="ref" reference="tab:compStudy_HeuristicMMComparison"}. For each DWSRP instance, the table lists essential parameters such as the task count ($|\mathcal{N}|$), team count ($|\mathcal{K}|$), and skill count ($|\mathcal{Q}|$), together with the objective function value, denoted as TWTT, as computed by the MM, CH, and ALNS techniques. Furthermore, the table reports the percentage optimality gap of the best solution found by the MIP within the 15-minute time limit (under column *MIP Gap (%)*), the percentage improvement in the objective value provided by the ALNS compared to the MIP (under column *ALNS Diff (%)*), the percentage improvement in the objective value provided by the ALNS compared to the CH (under column *ALNS Imp (%)*), and the respective runtimes of the MM, CH, and ALNS.
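The two improvement columns can be reproduced directly from the raw objective values. The formula below is inferred from the reported numbers (checked against instance 27 of the table), not taken from the authors' code:

```python
def pct_improvement(baseline, alns):
    """Percentage by which the ALNS objective improves on a baseline
    objective (used for both the ALNS Diff (%) and ALNS Imp (%) columns)."""
    return 100.0 * (baseline - alns) / baseline

# Instance 27 of Table 3: MM = 5890, CH = 5721, ALNS = 3783
diff = pct_improvement(5890, 3783)  # vs. the mathematical model
imp = pct_improvement(5721, 3783)   # vs. the constructive heuristic
print(round(diff, 2), round(imp, 2))  # 35.77 33.88, matching the table
```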
| **DWSRP-TW-SC** | $N$ | $K$ | $Q$ | TWTT (MM) | TWTT (CH) | TWTT (ALNS) | MIP Gap (%) | ALNS Diff (%) | ALNS Imp (%) | CPU MM (s) | CPU CH (s) | CPU ALNS (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4 | 2 | 3 | 769 | 769 | 769 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | $<1$ |
| 2 | 4 | 3 | 3 | 747 | 747 | 747 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | $<1$ |
| 3 | 4 | 4 | 4 | 608 | 608 | 608 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | 1 |
| 4 | 5 | 2 | 3 | 1335 | 1335 | 1335 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | 1 |
| 5 | 5 | 3 | 3 | 1423 | 1467 | 1423 | 0.00 | 0.00 | 3.00 | $<1$ | $<1$ | 2 |
| 6 | 5 | 4 | 4 | 1141 | 1165 | 1141 | 0.00 | 0.00 | 2.06 | $<1$ | $<1$ | 1 |
| 7 | 5 | 5 | 5 | 909 | 1009 | 909 | 0.00 | 0.00 | 9.91 | $<1$ | $<1$ | 4 |
| 8 | 6 | 2 | 3 | 1697 | 1716 | 1697 | 0.00 | 0.00 | 1.11 | $<1$ | $<1$ | 1 |
| 9 | 6 | 3 | 4 | 1702 | 2906 | 1702 | 0.00 | 0.00 | 41.43 | $<1$ | $<1$ | 3 |
| 10 | 6 | 5 | 3 | 1222 | 1276 | 1222 | 0.00 | 0.00 | 4.23 | 5 | $<1$ | 3 |
| 11 | 7 | 2 | 3 | 4601 | 4601 | 4601 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | $<1$ |
| 12 | 7 | 3 | 3 | 5232 | 5232 | 5232 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | $<1$ |
| 13 | 7 | 4 | 4 | 1669 | 2923 | 1669 | 0.00 | 0.00 | 42.90 | $<1$ | $<1$ | 4 |
| 14 | 7 | 5 | 5 | 1556 | 2290 | 1556 | 0.00 | 0.00 | 32.05 | 4 | $<1$ | 6 |
| 15 | 8 | 2 | 5 | 4685 | 4685 | 4685 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | 4 |
| 16 | 8 | 3 | 3 | 1892 | 2070 | 1892 | 0.00 | 0.00 | 8.60 | 16 | $<1$ | 3 |
| 17 | 8 | 4 | 4 | 1396 | 1657 | 1396 | 0.00 | 0.00 | 15.75 | 109 | $<1$ | 5 |
| 18 | 10 | 2 | 3 | 3685 | 3817 | 3685 | 0.00 | 0.00 | 3.46 | 207 | $<1$ | 5 |
| 19 | 10 | 3 | 3 | 5271 | 5601 | 5271 | 0.00 | 0.00 | 5.89 | 120 | $<1$ | 10 |
| 20 | 10 | 4 | 4 | 2155 | 2946 | 2120 | 53.51 | 1.62 | 28.04 | 900 | 2 | 6 |
| 21 | 10 | 5 | 5 | 17621 | 17621 | 17621 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | $<1$ |
| 22 | 12 | 2 | 3 | 10779 | 10782 | 10779 | 0.00 | 0.00 | 0.03 | $<1$ | $<1$ | 2 |
| 23 | 12 | 3 | 4 | 4473 | 4647 | 4400 | 71.03 | 1.63 | 5.32 | 900 | $<1$ | 9 |
| 24 | 12 | 5 | 5 | 3382 | 3473 | 3105 | 69.50 | 8.19 | 10.60 | 900 | 8 | 20 |
| 25 | 15 | 2 | 3 | 9275 | 9985 | 9275 | 0.00 | 0.00 | 7.11 | 72 | $<1$ | 5 |
| 26 | 15 | 3 | 4 | 9314 | 9874 | 8561 | 89.75 | 8.08 | 13.30 | 900 | $<1$ | 27 |
| 27 | 15 | 5 | 5 | 5890 | 5721 | 3783 | 84.77 | 35.77 | 33.88 | 900 | 10 | 47 |
| 28 | 20 | 2 | 3 | 21376 | 21376 | 21376 | 0.00 | 0.00 | 0.00 | $<1$ | $<1$ | 3 |
| 29 | 20 | 3 | 4 | 5991 | 5925 | 5925 | 82.22 | 1.10 | 0.00 | 900 | 5 | 29 |
| 30 | 20 | 5 | 5 | 6797 | 5212 | 4908 | 87.05 | 27.79 | 5.83 | 900 | 11 | 85 |
| **Min.** | | | | | | | 0.00 | 0.00 | 0.00 | 1 | | 1 |
| **Max.** | | | | | | | 89.75 | 35.77 | 42.90 | 900 | | 85 |
| **Avg.** | | | | | | | 17.93 | 2.81 | 9.15 | 228.3 | | 9.7 |
: Comparing ALNS, MM, and CH results for small- to medium-size DWSRP instances

As presented in Table [3](#tab:compStudy_HeuristicMMComparison){reference-type="ref" reference="tab:compStudy_HeuristicMMComparison"}, among the 30 problem instances, the MM was able to attain optimal solutions (as indicated by zero values in the MIP Gap ($\%$) column) for 23 instances within the prescribed 15-minute runtime limit. Remarkably, for these instances, our proposed ALNS also achieved the optimal solution, often within the same or significantly less time. For the remaining 7 instances, where the MM could not reach optimality within the limited runtime, a comparison of the MM and ALNS objective function values (found under the ALNS Diff ($\%$) column) reveals that the ALNS consistently outperforms the MM. Notably, the ALNS demonstrated an average improvement of $12.03\%$ over the MM. A closer look at the average runtime reinforces the superiority of the ALNS, as it proved to be approximately 23.5 times faster than the MM. Taking both the quality of the objective function results and the efficiency of the ALNS algorithm into account, it is evident that the ALNS significantly outshines the MM. Turning to the values in the ALNS Imp ($\%$) column, we further assess the extent to which the proposed ALNS enhances solutions obtained from the CH. This evaluation underscores the practical value of the ALNS, given the preference for simple greedy algorithms in many industries. Upon analyzing the results, a consistent trend emerges: the ALNS consistently improves the objective function value, reducing TWTT values by an average of $9.15\%$. Notably, some instances saw substantial improvements, reaching as high as $42.90\%$. It is worth highlighting that the ALNS achieves these enhancements in an average time of just 9.7 seconds.
## Analysis on the results of the proposed ALNS on the large-size DWSRP super-instances {#section:compStudy_largeSizeDWSRPinstances}

Next, we apply both the CH and the ALNS independently within the proposed reoptimization framework for each large-size DWSRP super-instance. Following an initial analysis, we set $\beta^{\text{task}}$ and $\beta^{\text{time}}$, which control the frequency of the reoptimizations, to 5 tasks and 60 time units, respectively, while the frozen period length is set to $f=30$ time units. For each approach employed on a DWSRP super-instance, we aggregate the results of the corresponding DWSRP instances across each reoptimization cycle. Subsequently, we compute the total waiting time of all tasks within the given super-instance, presenting this value under the relevant $\text{TWTT}$ column in Table [4](#tab:compStudy_DWSPsuperinstances){reference-type="ref" reference="tab:compStudy_DWSPsuperinstances"}. The number of reoptimizations performed during the workday remains consistent for both the CH and ALNS approaches across each DWSRP super-instance, and this count is documented in the *No. of reopt.* column. Additionally, for each DWSRP super-instance, we present the improvement percentage attributed to the use of the ALNS compared to the CH within the reoptimization framework. Furthermore, we include the total runtimes of the CH and the ALNS, taking into account all reoptimizations conducted for the corresponding instances.

| **DWSRP-TW-SC** | TWTT (CH) | TWTT (ALNS) | ALNS Imp (%) | No. of reopt. | CPU CH (s) | CPU ALNS (s) |
|---|---|---|---|---|---|---|
| $I(30,2,0.2)$ | 18290 | 14445 | 21.02 | 8 | 0.57 | 42.12 |
| $I(30,2,0.4)$ | 15843 | 15135 | 4.47 | 13 | 0.31 | 24.62 |
| $I(30,2,0.6)$ | 9496 | 7837 | 17.47 | 19 | 0.68 | 31.49 |
| $I(30,2,0.8)$ | 7860 | 7558 | 3.84 | 25 | 0.29 | 10.66 |
| $I(40,3,0.2)$ | 19857 | 16504 | 16.89 | 9 | 7.53 | 308.58 |
| $I(40,3,0.4)$ | 13387 | 13110 | 2.07 | 17 | 7.81 | 511.49 |
| $I(40,3,0.6)$ | 14015 | 13139 | 6.25 | 24 | 3.44 | 276.91 |
| $I(40,3,0.8)$ | 7378 | 6792 | 7.94 | 32 | 0.94 | 15.76 |
| $I(50,4,0.2)$ | 26199 | 23595 | 9.94 | 11 | 161.71 | 453.01 |
| $I(50,4,0.4)$ | 14658 | 11818 | 19.38 | 21 | 41.99 | 619.32 |
| $I(50,4,0.6)$ | 17466 | 11901 | 31.86 | 31 | 34.86 | 685.3 |
| $I(50,4,0.8)$ | 10271 | 9752 | 5.05 | 41 | 9.48 | 202.3 |
| $I(60,5,0.2)$ | 26991 | 24638 | 8.72 | 13 | 716.36 | 1386.44 |
| $I(60,5,0.4)$ | 25604 | 20290 | 20.75 | 25 | 217.76 | 3078.96 |
| $I(60,5,0.6)$ | 18151 | 14988 | 17.43 | 37 | 141.94 | 855.13 |
| $I(60,5,0.8)$ | 17672 | 11237 | 36.41 | 49 | 26.76 | 263.43 |
| $I(75,5,0.2)$ | 44429 | 36635 | 17.54 | 16 | 3526.19 | 31996.16 |
| $I(75,5,0.4)$ | 27496 | 23380 | 14.97 | 31 | 678.66 | 1716.18 |
| $I(75,5,0.6)$ | 28462 | 21757 | 23.56 | 45 | 137.53 | 1204.65 |
| $I(75,5,0.8)$ | 26611 | 17598 | 33.87 | 59 | 71.69 | 370.38 |
| **Min.** | 7378 | 6792 | 2.07 | 8 | 0.29 | 10.66 |
| **Max.** | 44429 | 36635 | 36.41 | 59 | 3526.19 | 31996.16 |
| **Avg.** | 19506.8 | 16105.45 | 15.97 | 26.3 | 289.33 | 2202.64 |

: The results of the reoptimization frameworks that employ the CH and the ALNS, separately, on large-size DWSRP super-instances

The data presented in Table [4](#tab:compStudy_DWSPsuperinstances){reference-type="ref" reference="tab:compStudy_DWSPsuperinstances"} reveal a substantial average reduction of nearly $16\%$ in TWTT, with the potential for this improvement to reach almost $37\%$ in specific instances. However, it is evident from these findings that such a substantial enhancement comes at the cost of increased solution times: the ALNS requires approximately $7.60$ times the CPU time of the constructive heuristic.
These results suggest that the viability of adopting the ALNS depends on the nature of the business model; decision-makers must carefully evaluate the trade-off between the computational time consumed by the ALNS and its potential benefits. Nonetheless, the results underscore that if the algorithm's time requirements can be accommodated, decision-makers can gain a significant advantage from implementing the ALNS. Table [4](#tab:compStudy_DWSPsuperinstances){reference-type="ref" reference="tab:compStudy_DWSPsuperinstances"} also offers valuable insights into the influence of several factors associated with the super-instances, namely the level of dynamism, the effective level of dynamism, and the number of tasks requiring synchronization, all of which have the potential to affect TWTT. As depicted in Figure [3](#fig:delta_twtt){reference-type="ref" reference="fig:delta_twtt"}, TWTT tends to decrease as the level of dynamism ($\delta$) increases. This observation aligns with our expectations: when a larger share of the tasks arrives during execution, arrivals are spread over the working day instead of accumulating at the start of the planning horizon, so tasks spend less time waiting in the system. However, as illustrated in Figure [4](#fig:useless){reference-type="ref" reference="fig:useless"}, no direct correlation is observed between $\delta^e$ and TWTT, or between $N^{sync}$ and TWTT. This lack of correlation may arise from neither $\delta^e$ nor $N^{sync}$ individually having a significant impact on TWTT. Since all our datasets exhibit similar $\delta^e_{tw}$ values, and $\delta^e_{tw}$ is essentially an extension of $\delta^e$, we do not analyze its specific impact here.
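The event-driven reoptimization trigger used throughout this section ($\beta^{\text{task}}=5$ new tasks or $\beta^{\text{time}}=60$ time units, frozen period $f=30$) can be sketched as below. Treating the two thresholds as alternative triggers is our reading of the framework (the precise rule is defined earlier in the paper), and the function names are ours:

```python
def should_reoptimize(new_tasks, elapsed, beta_task=5, beta_time=60):
    """Trigger a reoptimization once beta_task new tasks have arrived or
    beta_time time units have elapsed since the last reoptimization."""
    return new_tasks >= beta_task or elapsed >= beta_time

def is_frozen(task_start, now, f=30):
    """A task already scheduled to start before now + f keeps its current
    assignment and is excluded from the reoptimized plan."""
    return task_start < now + f
```

With $f=0$ no task is frozen and every reoptimization may reassign all unstarted tasks.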
![Impact of degree of dynamism ($\delta$) on TWTT](delta.pdf){#fig:delta_twtt width="0.6\\linewidth"} ![**(a)** Impact of effective degree of dynamism ($\delta^e$) on TWTT **(b)** Impact of $N^{sync}$ on TWTT](useless.pdf){#fig:useless width="1.1\\linewidth"} ## Analysis on different reoptimization strategies {#section:compStudy_reoptimizationPeriodComparison} The aim of this section is to investigate the impact of employing various reoptimization strategies on both the objective function value (TWTT) and the total CPU time. These strategies are defined by different values of $\beta^{\text{task}}$. To achieve this objective, we analyze two distinct sets of super-instances: (i) the super-instances listed in Table [2](#tab:InstanceInfo){reference-type="ref" reference="tab:InstanceInfo"}, referred to as *tight instances*, and (ii) super-instances identical to the tight instances, with the exception of task deadlines. This second set, referred to as *loose instances*, uniformly assigns task deadlines at $\tau^{\max}$. The inclusion of these additional super-instances eliminates potential biases in the results that could arise from tasks with particularly stringent deadlines necessitating outsourcing. In this experiment, we initially focus on the tight super-instances. For each individual super-instance within this tight set, we conduct separate simulations of the entire workday, utilizing the options $\beta^{\text{task}}=1$, $\beta^{\text{task}}=3$, and $\beta^{\text{task}}=5$ each with a frozen period length of zero. Our ALNS algorithm is employed for each reoptimization, and we present the resulting TWTT values, CPU times, and the cumulative count of outsourced tasks in Table [5](#tab:compStudy_tight_reoptimization){reference-type="ref" reference="tab:compStudy_tight_reoptimization"}. 
Subsequently, a parallel procedure is executed for the loose super-instances, with the corresponding outcomes documented in Table [6](#tab:compStudy_looose_reoptimization){reference-type="ref" reference="tab:compStudy_looose_reoptimization"}.

| **Tight** | TWTT ($\beta^{\text{task}}{=}1$) | CPU(s) | outso. | TWTT ($\beta^{\text{task}}{=}3$) | CPU(s) | outso. | TWTT ($\beta^{\text{task}}{=}5$) | CPU(s) | outso. |
|---|---|---|---|---|---|---|---|---|---|
| $I(30,2,0.2)$ | 14445 | 42.12 | 14 | 15369 | 21.03 | 15 | 17001 | 18.31 | 14 |
| $I(30,2,0.4)$ | 15135 | 24.62 | 13 | 17398 | 19.11 | 16 | 18087 | 18.66 | 12 |
| $I(30,2,0.6)$ | 7837 | 31.49 | 7 | 8872 | 10.48 | 8 | 12389 | 13.02 | 5 |
| $I(30,2,0.8)$ | 7558 | 10.66 | 8 | 9893 | 3.3 | 8 | 12764 | 5.33 | 4 |
| $I(40,3,0.2)$ | 16504 | 308.58 | 10 | 20972 | 199.3 | 11 | 22589 | 411.14 | 11 |
| $I(40,3,0.4)$ | 13110 | 511.49 | 9 | 15347 | 360.39 | 10 | 18922 | 366.77 | 14 |
| $I(40,3,0.6)$ | 13139 | 276.91 | 8 | 13548 | 157.98 | 8 | 18996 | 172.39 | 6 |
| $I(40,3,0.8)$ | 6792 | 15.76 | 5 | 9710 | 5.86 | 5 | 11710 | 11.09 | 6 |
| $I(50,4,0.2)$ | 23595 | 453.01 | 14 | 27781 | 787.1 | 13 | 27840 | 688.79 | 16 |
| $I(50,4,0.4)$ | 11818 | 619.32 | 9 | 14893 | 129.08 | 8 | 15377 | 220.63 | 13 |
| $I(50,4,0.6)$ | 11901 | 685.3 | 10 | 12705 | 336.97 | 10 | 14856 | 199.87 | 14 |
| $I(50,4,0.8)$ | 9752 | 202.3 | 7 | 10663 | 42.46 | 7 | 13037 | 32.72 | 10 |
| $I(60,5,0.2)$ | 24638 | 1386.44 | 17 | 25404 | 1656.26 | 21 | 27959 | 896.53 | 18 |
| $I(60,5,0.4)$ | 20290 | 3078.96 | 12 | 24120 | 846.34 | 16 | 25418 | 664.03 | 11 |
| $I(60,5,0.6)$ | 14988 | 855.13 | 9 | 15367 | 290.88 | 11 | 16205 | 250.99 | 9 |
| $I(60,5,0.8)$ | 11237 | 263.43 | 9 | 12233 | 131.91 | 7 | 14303 | 124.38 | 5 |
| $I(75,5,0.2)$ | 36635 | 31996.16 | 29 | 39205 | 16444.81 | 33 | 42754 | 13702.6 | 26 |
| $I(75,5,0.4)$ | 23380 | 1716.18 | 19 | 23630 | 1305.72 | 22 | 24988 | 1014.62 | 21 |
| $I(75,5,0.6)$ | 21757 | 1204.65 | 22 | 23347 | 862.76 | 19 | 25579 | 510.55 | 27 |
| $I(75,5,0.8)$ | 17598 | 370.38 | 12 | 18827 | 179.95 | 13 | 24423 | 120.7 | 11 |
| **Min.** | 6792 | 10.66 | 5 | 8872 | 3.3 | 5 | 11710 | 5.33 | 4 |
| **Max.** | 36635 | 31996.16 | 29 | 39205 | 16444.81 | 33 | 42754 | 13702.6 | 27 |
| **Avg.** | 16105.45 | 2202.64 | 12.15 | 17964.2 | 1189.58 | 13.05 | 20259.85 | 972.16 | 12.65 |

: Results on tight super-instances with changing values of $\beta^{\text{task}}$

| **Loose** | TWTT ($\beta^{\text{task}}{=}1$) | CPU(s) | outso. | TWTT ($\beta^{\text{task}}{=}3$) | CPU(s) | outso. | TWTT ($\beta^{\text{task}}{=}5$) | CPU(s) | outso. |
|---|---|---|---|---|---|---|---|---|---|
| $I(30,2,0.2)$ | 12845 | 81.59 | 9 | 13089 | 52.19 | 9 | 13991 | 126.33 | 9 |
| $I(30,2,0.4)$ | 13854 | 44.44 | 10 | 14904 | 16.47 | 11 | 16147 | 21.94 | 8 |
| $I(30,2,0.6)$ | 7837 | 23.41 | 7 | 9901 | 11.14 | 7 | 12094 | 20.79 | 6 |
| $I(30,2,0.8)$ | 7922 | 17.86 | 7 | 8088 | 7.49 | 6 | 12473 | 6.35 | 4 |
| $I(40,3,0.2)$ | 14747 | 643.68 | 2 | 16721 | 459.8 | 1 | 18384 | 590.16 | 0 |
| $I(40,3,0.4)$ | 11170 | 957.53 | 2 | 12357 | 697.99 | 4 | 13406 | 385.3 | 5 |
| $I(40,3,0.6)$ | 13839 | 432.43 | 9 | 12247 | 256.95 | 6 | 19372 | 220.09 | 4 |
| $I(40,3,0.8)$ | 6552 | 14.75 | 4 | 9784 | 10.9 | 4 | 10963 | 11.91 | 5 |
| $I(50,4,0.2)$ | 20308 | 814.25 | 7 | 22113 | 595.68 | 7 | 23161 | 1019.99 | 9 |
| $I(50,4,0.4)$ | 11603 | 717.35 | 9 | 14249 | 328.51 | 8 | 12957 | 183.45 | 11 |
| $I(50,4,0.6)$ | 10676 | 826.65 | 7 | 11378 | 629.85 | 8 | 13050 | 554.01 | 12 |
| $I(50,4,0.8)$ | 8624 | 120.91 | 6 | 10663 | 36.96 | 7 | 11044 | 56.17 | 7 |
| $I(60,5,0.2)$ | 18902 | 2043.36 | 5 | 19730 | 1912.13 | 6 | 23213 | 1809.02 | 5 |
| $I(60,5,0.4)$ | 18228 | 8591.31 | 6 | 18362 | 2211.29 | 6 | 23693 | 2100.76 | 2 |
| $I(60,5,0.6)$ | 10780 | 1005.91 | 7 | 11871 | 445.09 | 8 | 14785 | 423.12 | 9 |
| $I(60,5,0.8)$ | 10318 | 436.92 | 8 | 11423 | 99.27 | 8 | 14615 | 131.27 | 5 |
| $I(75,5,0.2)$ | 25944 | 57847.9 | 6 | 26361 | 35707.8 | 8 | 27275 | 40546.73 | 7 |
| $I(75,5,0.4)$ | 20180 | 4202.78 | 11 | 20342 | 2131.41 | 11 | 22350 | 2435.08 | 12 |
| $I(75,5,0.6)$ | 20513 | 2327.05 | 20 | 21953 | 1114.38 | 16 | 20813 | 985.76 | 22 |
| $I(75,5,0.8)$ | 17051 | 604.61 | 12 | 17445 | 223.78 | 13 | 22037 | 100.35 | 8 |
| **Min.** | 6552 | 14.75 | 2 | 8088 | 7.49 | 1 | 10963 | 6.35 | 0 |
| **Max.** | 25944 | 57847.9 | 20 | 26361 | 35707.8 | 16 | 27275 | 40546.73 | 22 |
| **Avg.** | 14094.65 | 4087.73 | 7.7 | 15149.05 | 2347.45 | 7.7 | 17291.15 | 2586.43 | 7.5 |

: Results on loose super-instances with changing values of $\beta^{\text{task}}$

Table [5](#tab:compStudy_tight_reoptimization){reference-type="ref" reference="tab:compStudy_tight_reoptimization"} and Table [6](#tab:compStudy_looose_reoptimization){reference-type="ref" reference="tab:compStudy_looose_reoptimization"} clearly illustrate that reducing the value of $\beta^{\text{task}}$ leads to improved (lower) TWTT values, regardless of whether the super-instance is classified as tight or loose. This phenomenon arises because triggering a reoptimization after fewer new tasks have entered the system gives the optimization framework greater freedom. Consequently, lower values of $\beta^{\text{task}}$ are more likely to yield enhanced TWTT values. This observation remains consistent across all super-instances listed in Table [5](#tab:compStudy_tight_reoptimization){reference-type="ref" reference="tab:compStudy_tight_reoptimization"}. On average, reducing $\beta^{\text{task}}$ from five to three results in a gain of $11.33\%$, while decreasing it from three to one contributes a further $10.35\%$. Similarly, this observation holds true in Table [6](#tab:compStudy_looose_reoptimization){reference-type="ref" reference="tab:compStudy_looose_reoptimization"} for every super-instance except one. On average across these super-instances, decreasing $\beta^{\text{task}}$ from five to three yields an improvement of $12.38\%$, while reducing it from three to one contributes a further $6.96\%$. However, this enhancement, achieved by requiring fewer new tasks to trigger the subsequent reoptimization phase, is accompanied by an increase in the elapsed CPU time.
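The average gains quoted above can be recomputed directly from the "Avg." rows of Tables 5 and 6 (small rounding differences aside):

```python
def avg_gain(twtt_larger_beta, twtt_smaller_beta):
    """Relative TWTT reduction obtained by moving to a smaller beta_task,
    computed from the average TWTT values of the two settings."""
    return 100.0 * (twtt_larger_beta - twtt_smaller_beta) / twtt_larger_beta

# Tight super-instances (Table 5 averages): 11.33% and 10.35%
print(round(avg_gain(20259.85, 17964.20), 2))  # beta_task 5 -> 3
print(round(avg_gain(17964.20, 16105.45), 2))  # beta_task 3 -> 1
# Loose super-instances (Table 6 averages): ~12.38% and 6.96%
print(round(avg_gain(17291.15, 15149.05), 2))  # beta_task 5 -> 3
print(round(avg_gain(15149.05, 14094.65), 2))  # beta_task 3 -> 1
```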
In the context of the tight super-instances, as depicted in Table [5](#tab:compStudy_tight_reoptimization){reference-type="ref" reference="tab:compStudy_tight_reoptimization"}, diminishing $\beta^{\text{task}}$ from five to three results in an average CPU time increase of $22.36\%$, while reducing it from three to one elevates the average CPU time by $85.16\%$. Meanwhile, for the loose instances, this pattern is not observed when reducing $\beta^{\text{task}}$ from five to three (the average CPU time even decreases slightly); however, a reduction from three to one amplifies the elapsed CPU time by $74.14\%$. This substantial increase in solution time is to be expected, given that a lower value of $\beta^{\text{task}}$ corresponds to a higher frequency of invoking the ALNS algorithm. It is worth highlighting that our central observation, namely that reducing $\beta^{\text{task}}$ improves the objective function value while increasing CPU time, is consistent for both types of super-instances. Furthermore, no evidence suggests a correlation between the choice of $\beta^{\text{task}}$ and the number of outsourced tasks. Nonetheless, as anticipated, we observe a notable reduction in the number of outsourced tasks for the loose super-instances. This outcome can be attributed primarily to the absence of strict deadlines in these cases: the decision-maker has the flexibility to defer outsourcing a task in favor of accommodating it in subsequent reoptimization phases.

## Analysis on the length of the frozen period, $f$ {#section:compStudy_frozenPeriodComparison}

Recall that the purpose of implementing a frozen period of suitable duration is to minimize confusion during the transition from one TTP to another. With this goal in mind, this section is dedicated to understanding the influence of varying frozen period lengths, enabling us to offer practical policy recommendations for industry decision-makers.
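The three frozen-period policies explored in the experiment below are simple functions of the maximum travel time $t^{\max}$ and maximum processing time $p^{\max}$ of a super-instance; a sketch (function name ours):

```python
def frozen_period_options(t_max, p_max):
    """The three policies compared: no frozen period, f = max(t_max, p_max),
    and f = t_max + p_max. With t_max, p_max > 0 these are strictly ordered."""
    assert t_max > 0 and p_max > 0
    options = (0, max(t_max, p_max), t_max + p_max)
    assert options[0] < options[1] < options[2]
    return options
```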
In this context, similar to our approach in Section [5.4](#section:compStudy_reoptimizationPeriodComparison){reference-type="ref" reference="section:compStudy_reoptimizationPeriodComparison"}, we utilize both tight and loose instances to augment the robustness of our analysis. In this experiment, for each individual super-instance within the sets of tight and loose instances, we conduct separate simulations covering an entire workday. We explore three options: setting $f$ to 0 (corresponding to no frozen period), setting $f$ to $\max (t^{\max},p^{\max})$, where $t^{\max}$ and $p^{\max}$ represent the maximum travel time and processing time for the corresponding super-instance, respectively, and setting $f$ to $t^{\max}+p^{\max}$, all with a $\beta^{\text{task}}$ value of 1. It is important to note that $t^{\max}+p^{\max} > \max (t^{\max},p^{\max}) > 0$, as both $t^{\max}$ and $p^{\max}$ are strictly positive. For reoptimization, we employ our ALNS algorithm and record the resulting values for TWTT, CPU times, and the cumulative count of outsourced tasks. These results are presented in Tables [7](#tab:compStudy_frozen_tight){reference-type="ref" reference="tab:compStudy_frozen_tight"} and [8](#tab:compStudy_frozen_loose){reference-type="ref" reference="tab:compStudy_frozen_loose"} for tight and loose super-instances, respectively.

| **Tight** | TWTT ($f{=}0$) | CPU(s) | outso. | TWTT ($f{=}\max(t^{\max},p^{\max})$) | CPU(s) | outso. | TWTT ($f{=}t^{\max}{+}p^{\max}$) | CPU(s) | outso. |
|---|---|---|---|---|---|---|---|---|---|
| $I(30,2,0.2)$ | 14445 | 42.12 | 14 | 15286 | 19.75 | 13 | 16132 | 66.15 | 14 |
| $I(30,2,0.4)$ | 15135 | 24.62 | 13 | 15933 | 31.21 | 14 | 15958 | 26.66 | 14 |
| $I(30,2,0.6)$ | 7837 | 31.49 | 7 | 10779 | 16.83 | 9 | 11319 | 9.34 | 8 |
| $I(30,2,0.8)$ | 7558 | 10.66 | 8 | 10227 | 5.36 | 9 | 11314 | 4.39 | 9 |
| $I(40,3,0.2)$ | 16504 | 308.58 | 10 | 18061 | 319.73 | 11 | 18268 | 456 | 10 |
| $I(40,3,0.4)$ | 13110 | 511.49 | 9 | 14169 | 276.24 | 9 | 15039 | 293.08 | 9 |
| $I(40,3,0.6)$ | 13139 | 276.91 | 8 | 15502 | 76.56 | 9 | 15998 | 55.29 | 12 |
| $I(40,3,0.8)$ | 6792 | 15.76 | 5 | 9261 | 7.61 | 8 | 12767 | 11.28 | 10 |
| $I(50,4,0.2)$ | 23595 | 453.01 | 14 | 24277 | 514.66 | 15 | 25772 | 365.09 | 13 |
| $I(50,4,0.4)$ | 11818 | 619.32 | 9 | 12693 | 177.97 | 9 | 13235 | 230.9 | 9 |
| $I(50,4,0.6)$ | 11901 | 685.3 | 10 | 13959 | 547.54 | 12 | 14620 | 331.44 | 10 |
| $I(50,4,0.8)$ | 9752 | 202.3 | 7 | 15338 | 41.98 | 12 | 15860 | 21.99 | 11 |
| $I(60,5,0.2)$ | 24638 | 1386.44 | 17 | 23415 | 1660.26 | 15 | 22736 | 1028.8 | 16 |
| $I(60,5,0.4)$ | 20290 | 3078.96 | 12 | 21427 | 1345.96 | 15 | 23657 | 1184.37 | 13 |
| $I(60,5,0.6)$ | 14988 | 855.13 | 9 | 16779 | 374.6 | 10 | 18640 | 307.44 | 10 |
| $I(60,5,0.8)$ | 11237 | 263.43 | 9 | 17391 | 136.34 | 13 | 18603 | 57.69 | 12 |
| $I(75,5,0.2)$ | 36635 | 31996.16 | 29 | 38376 | 19140.52 | 29 | 37978 | 14228.26 | 29 |
| $I(75,5,0.4)$ | 23380 | 1716.18 | 19 | 24846 | 1197.53 | 19 | 24887 | 976.14 | 21 |
| $I(75,5,0.6)$ | 21757 | 1204.65 | 22 | 24637 | 624.05 | 23 | 25477 | 587.17 | 25 |
| $I(75,5,0.8)$ | 17598 | 370.38 | 12 | 27507 | 88.07 | 20 | 28748 | 75.67 | 21 |
| **Min.** | 6792 | 10.66 | 5 | 9261 | 5.36 | 8 | 11314 | 4.39 | 8 |
| **Max.** | 36635 | 31996.16 | 29 | 38376 | 19140.52 | 29 | 37978 | 14228.26 | 29 |
| **Avg.** | 16105.45 | 2202.64 | 12.15 | 18493.15 | 1330.14 | 13.7 | 19350.4 | 1015.86 | 13.8 |

: Results on tight super-instances with changing values of $f$

| **Loose** | TWTT ($f{=}0$) | CPU(s) | outso. | TWTT ($f{=}\max(t^{\max},p^{\max})$) | CPU(s) | outso. | TWTT ($f{=}t^{\max}{+}p^{\max}$) | CPU(s) | outso. |
|---|---|---|---|---|---|---|---|---|---|
| $I(30,2,0.2)$ | 12845 | 81.59 | 9 | 12836 | 133.63 | 9 | 12836 | 52.18 | 9 |
| $I(30,2,0.4)$ | 13854 | 44.44 | 10 | 14330 | 37.2 | 10 | 14422 | 33.37 | 10 |
| $I(30,2,0.6)$ | 7837 | 23.41 | 7 | 9727 | 21.39 | 8 | 9893 | 11.94 | 7 |
| $I(30,2,0.8)$ | 7922 | 17.86 | 7 | 10229 | 12.47 | 7 | 10891 | 6.91 | 8 |
| $I(40,3,0.2)$ | 14747 | 643.68 | 2 | 15085 | 794.35 | 2 | 14879 | 570.69 | 2 |
| $I(40,3,0.4)$ | 11170 | 957.53 | 2 | 12068 | 701.81 | 4 | 12122 | 585.81 | 2 |
| $I(40,3,0.6)$ | 13839 | 432.43 | 9 | 14426 | 262.29 | 9 | 15334 | 139.47 | 10 |
| $I(40,3,0.8)$ | 6552 | 14.75 | 4 | 9793 | 11.86 | 6 | 10863 | 8.11 | 8 |
| $I(50,4,0.2)$ | 20308 | 814.25 | 7 | 20132 | 944.27 | 7 | 21107 | 813.64 | 7 |
| $I(50,4,0.4)$ | 11603 | 717.35 | 9 | 12242 | 427.6 | 9 | 12105 | 450.59 | 9 |
| $I(50,4,0.6)$ | 10676 | 826.65 | 7 | 12370 | 500.91 | 8 | 13307 | 507.85 | 7 |
| $I(50,4,0.8)$ | 8624 | 120.91 | 6 | 14260 | 32.62 | 12 | 16929 | 56.3 | 12 |
| $I(60,5,0.2)$ | 18902 | 2043.36 | 5 | 19281 | 2357.89 | 5 | 19971 | 2379.95 | 5 |
| $I(60,5,0.4)$ | 18228 | 8591.31 | 6 | 19530 | 4555.02 | 5 | 22089 | 2292.28 | 7 |
| $I(60,5,0.6)$ | 10780 | 1005.91 | 7 | 13864 | 422.13 | 7 | 15544 | 278.19 | 7 |
| $I(60,5,0.8)$ | 10318 | 436.92 | 8 | 16249 | 141.05 | 10 | 18253 | 97.08 | 12 |
| $I(75,5,0.2)$ | 25944 | 57847.9 | 6 | 24901 | 52782.11 | 5 | 26071 | 55740.71 | 5 |
| $I(75,5,0.4)$ | 20180 | 4202.78 | 11 | 21920 | 3106.4 | 14 | 20903 | 2570.13 | 10 |
| $I(75,5,0.6)$ | 20513 | 2327.05 | 20 | 22202 | 814.07 | 22 | 24499 | 633.46 | 24 |
| $I(75,5,0.8)$ | 17051 | 604.61 | 12 | 23045 | 102.09 | 14 | 25807 | 67.96 | 15 |
| **Min.** | 6552 | 14.75 | 2 | 9727 | 11.86 | 2 | 9893 | 6.91 | 2 |
| **Max.** | 25944 | 57847.9 | 20 | 24901 | 52782.11 | 22 | 26071 | 55740.71 | 24 |
| **Avg.** | 14094.65 | 4087.73 | 7.7 | 15924.5 | 3408.06 | 8.65 | 16891.25 | 3364.83 | 8.8 |

: Results on loose super-instances with changing values of $f$

Tables [7](#tab:compStudy_frozen_tight){reference-type="ref" reference="tab:compStudy_frozen_tight"} and [8](#tab:compStudy_frozen_loose){reference-type="ref" reference="tab:compStudy_frozen_loose"} provide clear evidence that extending the duration of the frozen period has a detrimental effect on TWTT values, regardless of whether the super-instance falls into the tight or loose category.
This outcome aligns with expectations: a longer frozen period reduces our algorithm's flexibility, as it narrows the set of tasks the algorithm can optimize. These extended durations, intended to prevent confusion during transitions between different plans, can be seen as a trade-off between flexibility and stability. Specifically, when we extend the frozen period for the tight super-instances from $0$ to $\max (t^{\max},p^{\max})$, we observe an average increase in TWTT of $14.83\%$. Further increasing it from $\max (t^{\max},p^{\max})$ to $t^{\max}+p^{\max}$ results in an additional average increase of $4.64\%$. Similarly, for the loose super-instances, increasing the frozen period from $0$ to $\max (t^{\max},p^{\max})$ leads to an average TWTT increase of $12.98\%$, while extending it from $\max (t^{\max},p^{\max})$ to $t^{\max}+p^{\max}$ results in an average increase of $6.07\%$. The analysis above holds practical relevance for managers considering investments in technologies aimed at reducing the duration of the frozen period. Consider a scenario where costly technology can diminish the value of $f$ from $t^{\max}+p^{\max}$ to $\max (t^{\max},p^{\max})$. In such a case, the adoption of this technology may not be justified, given that the expected improvement is only around $5\%$. Conversely, technology capable of reducing the frozen period from $\max (t^{\max},p^{\max})$ to nearly 0 warrants serious consideration, as it promises a substantially larger enhancement. While reducing the duration of the frozen period offers the advantage of achieving better TWTT values, it comes at the cost of increased CPU times for the algorithm. This can be rationalized by noting that, with lower values of $f$, each execution of ALNS must handle a larger number of to-be-scheduled tasks. Specifically, for the tight instances, increasing the value of $f$ from 0 to $\max (t^{\max},p^{\max})$ results in a $39.61\%$ reduction in solution time.
When the value of $f$ is further increased from $\max (t^{\max},p^{\max})$ to $t^{\max}+p^{\max}$, the solution time decreases by $23.63\%$. This pattern remains consistent for the loose super-instances as well. In this case, increasing the value of $f$ from 0 to $\max (t^{\max},p^{\max})$ reduces the solution time by $16.63\%$, while increasing it from $\max (t^{\max},p^{\max})$ to $t^{\max}+p^{\max}$ results in a further $1.27\%$ decrease in solution time.

# Conclusion {#section:conclusion}

In the realm of dynamic on-demand on-site service systems, optimizing operations in real-time is pivotal for businesses aiming to enhance their efficiency and productivity. In this article, we have embarked on a comprehensive exploration of various facets of a dynamic multi-skill workforce scheduling and routing problem with time windows and synchronization constraints, delving into reoptimization strategies and frozen period lengths. To address the outlined DWSRP-TW-SC, we developed an optimization framework that is triggered whenever a predetermined number of new tasks arrive. The initial step of this framework entails identifying the frozen tasks and establishing the earliest feasible time and location for the assigned team. Subsequently, the framework engages in a reoptimization process for the ensuing Team Task Plan, aimed at minimizing the cumulative weighted completion time for all tasks. For the route redesign phase of this framework, we proposed two alternative methodologies: a mathematical model and a heuristic algorithm. Our analysis began by assessing the effectiveness of the proposed ALNS compared to traditional solvers and a constructive heuristic. The ALNS algorithm demonstrated remarkable prowess in rapidly delivering high-quality solutions across a spectrum of DWSRP-TW-SC instances. It consistently outperformed the mathematical model and the constructive heuristic in terms of solution quality, often achieving optimal results within significantly shorter time frames.
We further examined the impact of reoptimization strategies, discovering that an increased reoptimization frequency (i.e., a smaller $\beta^{\text{task}}$) can lead to improved objective function values. While this improvement came at the cost of increased computation time, the trade-off was well justified in scenarios where solution quality is of paramount importance. Our investigation into the optimal frozen period length illuminated an intriguing balance between flexibility and stability. Extending the frozen period proved to be detrimental to solution quality, with shorter frozen periods yielding significantly better TWTT values. Decision-makers considering technology investments to reduce frozen periods should carefully assess the potential benefits, as sizeable reductions can yield substantial enhancements while modest ones may not justify the cost. In conclusion, our study provides insights for businesses dealing with dynamic workforce scheduling and routing problems. The proposed ALNS algorithm emerges as a powerful tool for real-time optimization, offering a compelling balance between solution quality and computation time. Furthermore, reoptimization strategies and frozen period lengths should be tailored to suit specific business requirements, with careful consideration of the trade-offs involved. As industries continue to grapple with dynamic scheduling challenges, the principles and findings presented here can serve as a valuable compass guiding them toward more efficient and effective workforce management strategies. The interplay of dynamism, effective dynamism, and synchronization factors adds complexity to dynamic on-demand on-site service systems, highlighting the need for holistic and adaptable solutions.

# Acknowledgements {#acknowledgements .unnumbered}

This research was supported by TUBITAK \[grant number 117M577\]. During the preparation of this work the author(s) used ChatGPT in order to improve the language and readability.
After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
--- abstract: | Topology optimization (TO) has experienced a dramatic development over the last decades, aided by the rise of metamaterials and additive manufacturing (AM) techniques, and it is intended to meet current and future challenges. In this paper we propose an extension for linear orthotropic materials of a three-dimensional TO algorithm which directly operates on the six elastic properties -- three longitudinal and three shear moduli, with the three Poisson ratios held fixed -- of the finite element (FE) discretization of a given analysis domain. By performing a gradient-descent-like optimization on these properties, the standard deviation of a strain-energy measurement is minimized, thus arriving at optimized, strain-homogenized structures with variable longitudinal and shear stiffness along their different material directions. To this end, an orthotropic formulation with two approaches -- direct or strain-based and complementary or stress-based -- has been developed for this optimization problem, with the stress-based approach being more efficient, as previous works on this topic have shown. The key advantages that we propose are: (1) the use of orthotropic instead of isotropic materials, which enables a more versatile optimization process since the design space is increased by six times, and (2) no constraint (such as maximum volume) needs to be imposed, in contrast to other methods widely used in this field such as Solid Isotropic Material with Penalization (SIMP), all of this while setting only one hyper-parameter. Results of four designed load cases show that this orthotropic-TO algorithm outperforms the isotropic case, both against the similar algorithm from which this is an extension and against a SIMP run in commercial FE software, presenting a comparable computational cost. We remark that it works particularly effectively on pure shear or shear-governed problems such as torsion loading. address: | $^a$ E.T.S.
de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040, Madrid, Spain\ $^b$ Department of Materials, University of Oxford, Parks Road, Oxford, OX1 3PJ, UK\ $^c$ Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, MA02139, USA\ $^d$ Department of Mechanical and Aerospace Engineering, Herbert Wertheim College of Engineering, University of Florida, FL32611, USA author: - Ismael Ben-Yelun$^{a,}$[^1], Víctor Riera$^a$, Luis Saucedo-Mora$^{a,b,c}$, Miguel Ángel Sanz$^a$, Francisco Javier Montáns$^{a,d}$ bibliography: - references.bib title: An elastic properties-based topology optimization algorithm for linear orthotropic, functionally graded materials --- Topology optimization ,Orthotropic materials ,Functionally graded ,Mechanical metamaterials # Introduction Novel and more flexible methods for topology optimization (TO) are becoming increasingly popular, leveraged by the advances in additive manufacturing [@brackett2011topology] and the rise of metamaterials [@diaz2010topology; @bertoldi2017flexible]---the latter preceded by the greater computational capacity of modern computers. TO is an area within structural optimization wherein a given domain $\Omega$ subjected to certain boundary conditions $\Gamma_N \cup \Gamma_D$ is sought to present an optimized final shape by minimizing one of its macroscopic features, e.g. mean compliance or total mass. It has historically been addressed with a Finite Element (FE) approach, and therefore similar procedures have been followed herein [@bathe2006finite]. In the search for more versatile techniques, the use of orthotropic materials seems a promising choice due to their characteristics [@jones2018mechanics], e.g. high stiffness and strength-to-weight ratio [@karatacs2018review].
An example of their usage might be seen in the aerospace sector, where more than 50% of the primary structure of the A350 (latest Airbus aircraft programme) is made of Carbon Fiber Reinforced Polymer (CFRP) laminates, i.e. aligned carbon fibers embedded in an epoxy matrix [@mcilhagger2020manufacturing]. In fact, this links composites with TO in the sense that, originally, the practical applications of this subject were mainly carried out in this industry [@suzuki1991homogenization]---a review of several aeronautical applications has recently been given by Zhu et al. [@zhu2016topology]. Regarding the mechanical properties, constructing different material architectures enables a wider range of possibilities. Through the design of orthotropic materials (e.g. composites), one may achieve a purposeful anisotropy in the elastic properties [@pedersen1987sensitivity; @pedersen1989optimal]. A more recent example is the development of metamaterials, wherein macro-scale behaviour is controlled by tuning the design parameters at the micro, unit-cell level [@bertoldi2017flexible], allowing the modification of elastic properties, e.g. the Poisson ratio, within a certain physical range. This paper belongs to the area of structural optimization, which several authors have subdivided into three branches: size optimization, shape optimization and topology optimization [@christensen2008introduction; @tsavdaridis2019application]. While size optimization is clearly identified and separated (e.g. obtaining the optimal thicknesses of a set of parts in a structural component that satisfy given constraints), the other two are devoted, respectively, to optimizing the boundary $\partial \Omega$ (i.e. shape) and the connectivity within the analysis domain $\Omega$ (i.e. topology). Since these last two may overlap to some extent, another classification by methods has been made by Deaton and Grandhi [@deaton2014survey; @yago2021topology], comprising three groups. The first one is homogenization (e.g.
density-based), which began with the pioneering works of Bendsøe, resulting in the commonly used Solid Isotropic Material with Penalization (SIMP) method [@Bendse1989; @Bendsoe1999]. This is the method against which the proposed algorithm is compared in this paper from a conservative point of view. In the second group lie the evolutionary (or hard-kill) methods, such as the Evolutionary Structural Optimization (ESO) proposed by Xie and Steven [@xie1993simple] or its bidirectional extension (BESO), by Huang and Xie [@querin1998evolutionary]. More specifically, Xie and co-workers developed an algorithm wherein FE removal is carried out based on the elements' contribution to the total compliance [@huang2008topology; @huang2009bi]. In that sense, this work implements a similar feature, since elements are removed from the domain according to their energy contribution and its derivatives (with respect to the elastic properties) over the whole domain. Finally, level-set (boundary variation) methods attempt to fix the artifacts that may arise in homogenization methods by defining level-set curves or surfaces that deal better with more complex topologies [@wu2017level]. These previous methods were initially meant for isotropic, linear materials, but other effects such as geometrical or material non-linearities have been included in TO algorithms as well. Regarding material non-linearities, Yuge and Kikuchi addressed an elasto-plastic analysis [@yuge1995optimization], Bendsøe et al. studied softening materials [@bendsoe1996optimization] and Zhang et al. addressed multi-material TO considering material non-linearities [@zhang2018multi], among others. Geometric non-linearities were also considered by Pedersen and Sigmund in their large-displacement TO work [@pedersen2001topology], as well as in the numerical methods developed by Bruns et al. to address non-linear elasticity problems such as snap-through [@bruns2002numerical].
A linear extension of material behavior that may be posed consists of considering linear orthotropic materials---different orthotropic applications for TO might be found in the literature. However, these studies have nearly always been limited to bi-dimensional, plane problems comprising unit cells and their orientation in space. These unit cells are virtually a metamaterial configuration: isotropic material elements with a void at their centre, thus presenting an overall anisotropic behavior. Therefore, those are optimal orientation problems, and the first published works on the subject were carried out by Pedersen [@pedersen1989optimal; @pedersen1990bounds; @pedersen1991thickness], who developed the models within a strain-based framework. Suzuki and Kikuchi [@suzuki1991homogenization], and Díaz and Bendsoe [@diaz1992shape] -- whose works were devoted to the optimal orientation of the so-called unit cells in plane-stress problems applied to TO algorithms as well -- highlighted that the stress-based formulation is more efficient. Cheng, Kikuchi and Ma similarly addressed this topic from the point of view of Optimal Material Distribution (OMD) [@cheng1994improved], also agreeing on the efficiency of the stress-based formulation for these orthotropic TO problems. Regarding the development of TO methods for orthotropic materials in recent years, research has likewise been conducted, yet it is still scarce. Gea and Luo provided closed-form solutions for both strain-based and stress-based orthotropic TO problems modelled through these plane unit cells [@gea2004stress]. Jia et al. continued with this approach [@jia2008topology], presenting a SIMP-like model that simultaneously optimizes the density of the FEs and the orientation of the cells.
On the other hand, Luo and Gea also proposed a model based on plate structures, similar to laminates for composite materials [@luo1998optimal], and Stegmann and Lund developed an optimization algorithm for laminates, thus belonging to the Discrete Material Optimization (DMO) methods [@stegmann2005discrete]. We remark that our study is not limited to a discrete number of plies (e.g. a laminate) but applies to all kinds of orthotropic materials, thus enabling the availability of continuous derivatives -- which are useful for gradient optimization methods -- or the usage of this method in conjunction with AM techniques or metamaterials. More recently, Page et al. [@page2016topology] have proposed a similar method insofar as a volume restriction is imposed, but applied to heat transfer problems. Another application with AM techniques is proposed by Li et al. [@lee2021design], also using these unit cells with orthotropic behaviour. A SIMP scheme is used likewise, with the previously mentioned drawback of having to impose a volume restriction. In this paper, we propose a novel TO algorithm suitable for linear orthotropic materials to come up with three-dimensional optimized structures with minimum local compliance through two different approaches: strain-based, i.e. using the strain energy function, and stress-based, i.e. using the complementary strain (or *stress*) energy function. This optimization algorithm updates at once the elastic properties (the 6 stiffness moduli, longitudinal and shear, keeping the three Poisson ratios fixed) of each FE in which the domain is discretized. These updates are addressed by linearizing a decoupled form of the energy function, using its derivatives with respect to said elastic properties to perform a gradient-descent-like update step. Thus, by operating directly on the elastic properties, the step of relating them to an intermediate variable, e.g. density or mass, is saved.
Our model presents particularities typical of orthotropic materials, for instance the coupling of longitudinal moduli which appears in the terms of the volumetric part of the constitutive matrix in the strain-based approach. Hence the need for introducing the complementary formulation, preferably in an uncoupled fashion. For this purpose, a decoupling of the (complementary) strain energy function similar to the one performed in Amores et al. [@amores2021finite] is followed. This formulation proves to be more efficient, as former research has likewise demonstrated [@suzuki1991homogenization; @diaz1992shape]. Therefore, there are two key features that we have developed: the first one is the implementation of a numerical formulation for orthotropic materials in a TO framework, leading to a more versatile optimization due to the extension of the design space---by six times with respect to the isotropic material case. The second one is that a volume constraint is no longer needed, as it is in other classical methods. Similarly, heuristic sensitivity filters are not required. Additionally, since the final goal is to achieve structures with functionally graded properties, no penalization is applied. All of this requires fixing only one step-update hyperparameter for the optimization. In order to assess the performance of both methods, four examples are studied. These are designed such that the nature of different load cases (tensile-compressive, pure shear, and a combination of both) is represented. Then, the results of the algorithm are compared with their isotropic analogue [@saucedo2023updated; @ben2023topology] and with simulation runs carried out in the commercial FEM-CAE software [OptiStruct]{.smallcaps}, from [Altair]{.smallcaps} [@optistruct]. This paper is organized as follows.
First, in Section [2](#Sec:theoretical_framework){reference-type="ref" reference="Sec:theoretical_framework"} the theoretical framework is established, in which the proposed formulation is developed. Then, in Section [3](#Sec:methodology){reference-type="ref" reference="Sec:methodology"} the methodology and particularities of this method are outlined, considering important restrictions and sketching the most important algorithms. Finally, in Section [4](#Sec:results){reference-type="ref" reference="Sec:results"} the previously mentioned results are displayed and analysed, concluding with a number of final remarks in Section [5](#Sec:concluding_remarks){reference-type="ref" reference="Sec:concluding_remarks"}. # Theoretical framework {#Sec:theoretical_framework} In contrast to other TO methods, the objective function to minimize is the standard deviation $s_{\mathcal{H}^{\bm \varepsilon}}$ of a strain-energy-like variable $\mathcal{H}^{\bm \varepsilon}$, where $\bm \varepsilon$ stands for strains---see Saucedo et al. [@saucedo2023updated] and Ben-Yelun et al. [@ben2023topology] for more details. This is achieved by directly operating on the different stiffnesses of the material, which enables the elimination of intermediate variables (e.g. density, as is done in density-based topology optimization methods such as SIMP [@Bendse1989; @Bendsoe1999]). The novelty with respect to other works is the nature of the considered material: orthotropic. A linear orthotropic material contains 9 elastic properties: Young moduli along the 3 principal material directions $E_1$, $E_2$, $E_3$; 3 shear moduli referred to the same directions $G_{12}$, $G_{13}$, $G_{23}$; and three of the six Poisson coefficients, e.g. $\nu_{12},\nu_{13},\nu_{23}$---the remaining three Poisson coefficients are determined by tensor symmetry, i.e. $\nu_{ij}/E_i = \nu_{ji}/E_j$.
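As a minimal illustration of this symmetry relation, the three dependent Poisson ratios can be recovered from the three independent ones. The property values below are hypothetical, chosen only for the sketch:

```python
# Hypothetical orthotropic properties, for illustration only.
E = {1: 40e9, 2: 10e9, 3: 10e9}                    # Young's moduli [Pa]
nu = {(1, 2): 0.25, (1, 3): 0.25, (2, 3): 0.40}    # independent Poisson ratios

# The symmetry nu_ij / E_i = nu_ji / E_j fixes the remaining three ratios.
nu_dep = {(j, i): nu_ij * E[j] / E[i] for (i, j), nu_ij in nu.items()}
```

For these values, e.g. $\nu_{21} = 0.25 \cdot 10/40 = 0.0625$.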
Thus, in terms of the optimization problem, this implies a wider design space compared to the linear isotropic case, which presents only two independent elastic parameters. Therefore, since the exploration of more possibilities is allowed, optimal designs might outperform the baseline results provided by the isotropic counterpart. ## Energy split into volumetric and deviatoric parts First of all, the volumetric-deviatoric split of the (total) energy is performed so that $$\mathcal{W}^{\square} := \mathcal{W}^{\square,v} + \mathcal{W}^{\square,d},$$ where, for the sake of compactness, $\square = \left\lbrace \bm \varepsilon, \bm\sigma\right\rbrace$ can represent either the direct (strain-based) or the complementary (stress-based) energy, respectively. With that, the first term $\mathcal{W}^{\square, v}$ contains information about the longitudinal stiffnesses and their coupling (i.e. Young moduli and Poisson ratios), whilst the shear stiffnesses are involved in the second term $\mathcal{W}^{\square, d}$. It is important to remark that if an isotropic material optimization with fixed Poisson ratio were chosen, this splitting would be futile since there would be only one variable to optimize: the Young modulus. The field variables of this structural problem are obtained through finite-element modeling (FEM). Thus, the domain is discretized into $n_e$ elements, hence the energy might be expressed as the sum of the contributions of each element: $$\mathcal{W}^{\square} = \sum_{e=1}^{n_e}\mathcal{W}_e^{\square} = \sum_{e=1}^{n_e} \mathcal{W}_e^{\square, v} + \sum_{e=1}^{n_e} \mathcal{W}_e^{\square,d}.$$ Assuming that the material works within its linear elastic regime, both the direct and complementary elastic energies might be expanded through the stiffness and compliance matrices---$\mathbf{D}$ and $\mathbf{D}^{-1}$ in Voigt notation, respectively.
For materials with longitudinal-shear decoupling (as in the case of isotropic and orthotropic materials, among others) these matrices can be separated into the following submatrices, with the block relating extension and shear being null: $$\left[\mathbf{D}\right] = \left[ \begin{BMAT}{c;c}{c;c} \left[\mathbf{D}^v\right] & \left[\bm{0}\right] \\ \left[\bm{0}\right] & \left[\mathbf{D}^d\right] \end{BMAT} \right], \quad \left[\mathbf{D}^{-1}\right] = \left[ \begin{BMAT}{c;c}{c;c} \left[\left(\mathbf{D}^v\right)^{-1}\right] & \left[\bm{0}\right] \\ \left[\bm{0}\right] & \left[(\mathbf{D}^d )^{-1}\right] \end{BMAT} \right],$$ with [@chaves2014mecanica] $$\begin{array}{c} [\mathbf{D}^v]= \dfrac{1}{\chi} \begin{bmatrix} E_{1} \left(1-\nu_{23}^{2} \frac{E_{3}}{E_{2}}\right) & \nu_{12} E_{2}+\nu_{23} \nu_{13} E_{3} & \nu_{13} E_{3}+\nu_{12} \nu_{23} E_{3}\\ & E_{2} \left(1-\nu_{13}^{2} \frac{E_{3}}{E_{1}}\right) & \nu_{23} E_{3}+\nu_{12} \nu_{13} \frac{E_{2} E_{3}}{E_{1}}\\ \rm{(sym)} & & E_{3} \left(1-\nu_{12}^{2} \frac{E_{2}}{E_{1}}\right)\\ \end{bmatrix}, \\[7.5ex] \chi = 1 - \dfrac{\nu_{12}^2E_2}{E_1} - \dfrac{\nu_{23}^2E_3}{E_2} - \dfrac{\nu_{13}^2E_3}{E_1} - 2\nu_{12}\nu_{23}\nu_{13}\dfrac{E_3}{E_1}, \end{array} \label{Eq:2_D_v}$$ $$[(\mathbf{D}^v)^{-1}] = \begin{bmatrix} 1/E_1 & -\nu_{21}/E_2 & -\nu_{31}/E_3 \\ -\nu_{12} / E_1 & 1/E_2 & -\nu_{32}/E_3 \\ -\nu_{13} / E_1 & -\nu_{23} / E_2 & 1 / E_3 \end{bmatrix}, \label{Eq:2_D_d}$$ and $$[\mathbf{D}^d] = \begin{bmatrix} G_{12} & 0 & 0 \\ 0 & G_{13} & 0 \\ 0 & 0 & G_{23} \end{bmatrix}, \quad [(\mathbf{D}^d)^{-1}] = \begin{bmatrix} 1/G_{12} & 0 & 0 \\ 0 & 1/G_{13} & 0 \\ 0 & 0 & 1/G_{23} \end{bmatrix}. \label{Eq:2_inverse_D}$$ In the following subsections, the optimization algorithms developed for the direct and complementary formulations are addressed.
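These inverse relations can be checked numerically. The sketch below (assuming NumPy and hypothetical property values not taken from the paper) builds the volumetric compliance block from its definition, inverts it, and compares the result against the closed form of $\mathbf{D}^v$:

```python
import numpy as np

# Hypothetical orthotropic properties, for illustration only.
E1, E2, E3 = 40e9, 10e9, 8e9
nu12, nu13, nu23 = 0.25, 0.25, 0.40
nu21, nu31, nu32 = nu12 * E2 / E1, nu13 * E3 / E1, nu23 * E3 / E2  # symmetry

# Volumetric compliance block (D^v)^{-1}, written directly from its definition.
S_v = np.array([[1 / E1,     -nu21 / E2, -nu31 / E3],
                [-nu12 / E1,  1 / E2,    -nu32 / E3],
                [-nu13 / E1, -nu23 / E2,  1 / E3]])

# Numerical inverse versus the closed form of D^v.
D_v = np.linalg.inv(S_v)
chi = (1 - nu12**2 * E2 / E1 - nu23**2 * E3 / E2
         - nu13**2 * E3 / E1 - 2 * nu12 * nu23 * nu13 * E3 / E1)
D_v_closed = np.array([
    [E1 * (1 - nu23**2 * E3 / E2),  nu12 * E2 + nu23 * nu13 * E3,        E3 * (nu13 + nu12 * nu23)],
    [nu12 * E2 + nu23 * nu13 * E3,  E2 * (1 - nu13**2 * E3 / E1),        E3 * (nu23 + nu12 * nu13 * E2 / E1)],
    [E3 * (nu13 + nu12 * nu23),     E3 * (nu23 + nu12 * nu13 * E2 / E1), E3 * (1 - nu12**2 * E2 / E1)],
]) / chi

# The shear block is diagonal, so its inverse is elementwise.
G12, G13, G23 = 5e9, 5e9, 3e9
D_d = np.diag([G12, G13, G23])
```

Because the Poisson symmetry is enforced, `S_v` is symmetric and its inverse matches the closed form term by term.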
## Direct (strain-based) approach with separated influence of elastic properties {#Subsec:2-direct} In this first approach, the elastic energy of the $e$-th element $\mathcal{W}_e^{\bm \varepsilon}$ might be expanded as follows by performing numerical integration $$\mathcal{W}_e^{\bm \varepsilon} = \dfrac{1}{2}\sum_{p=1}^{n_p}\underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right)\cdot \mathbf{D}_{e}^v \underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right) J_p w_p + \dfrac{1}{2}\sum_{p=1}^{n_p}\underline{\bm{\gamma}}\left(\mathbf{x}_p\right)\cdot \mathbf{D}_{e}^d \underline{\bm{\gamma}}\left(\mathbf{x}_p\right) J_p w_p, \label{Eq:2_W_e}$$ where $\underline{\bm{\varepsilon}}$ is a vector containing the longitudinal strains along the material directions, i.e. $\underline{\bm{\varepsilon}} = [\varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}]^T$, and $\underline{\bm{\gamma}}$ is the vector containing the shear strains of the material directions, $\underline{\bm{\gamma}} = [\gamma_{12}, \gamma_{13}, \gamma_{23}]^T$. Both $\underline{\bm{\varepsilon}}$ and $\underline{\bm{\gamma}}$ are evaluated at the integration point $p$, as is the Jacobian of the element transformation to normalized coordinates $J_p$, with $w_p$ the corresponding quadrature weights. This is a generalization for elements with $n_p$ integration points. A similar procedure to the isotropic case might be followed. In these materials, the energy contribution of an element $\mathcal{W}_e^{\bm \varepsilon}$ might be expressed as a product of its Young modulus $E_e$ and an *ad-hoc* variable $\mathcal{H}_e^{\bm \varepsilon}$ which contains all the energy information except the Young modulus, i.e. $\mathcal{W}_e^{\bm \varepsilon} = E_e \mathcal{H}_e^{\bm \varepsilon} \Omega_e$ (with $\Omega_e$ the volume of the element $e$), in such a way that $\partial \mathcal{H}_e^{\bm \varepsilon} / \partial E_e = 0$, considering the rest of the variables (e.g. strain/stress distributions) fixed.
The volume $\Omega_e$ is introduced in the last expression to prevent the algorithm from being sensitive to different element sizes. A volume-averaged strain energy density variable $\bar{\Psi}^{\bm \varepsilon}$ represents a more suitable alternative, which is defined as $$\bar{\Psi}_e^{\bm \varepsilon} := \dfrac{1}{\Omega_e}\int_{\Omega_e}\Psi^{\bm \varepsilon}\left(\mathbf{x}\right)\mathrm{d}\Omega_e = \dfrac{\mathcal{W}_e^{\bm \varepsilon}}{\Omega_e}.$$ In this way, the stiffening of larger elements by the mere fact of having a larger volume than others is avoided. Therefore, it is desirable to model orthotropic materials in a similar way. To this end, a split of the elastic energy into six terms is carried out, each of them containing one of the six moduli of linear orthotropic materials. Ideally, these terms depend explicitly and solely on the elastic property that they represent, so that a decoupled form of the energy might be achieved, where in each term a separation of variables with the elastic modulus can be identified. Taking this into account, the energy of each element $e$ might be expressed as $$\mathcal{W}_e^{\bm \varepsilon} = \sum_{i=1}^3\mathcal{W}_{i,e}^{\bm \varepsilon, v} + \sum_{i=1}^3\sum_{j>i}^3\mathcal{W}_{ij,e}^{\bm \varepsilon, d} = \sum_{i=1}^3\mathcal{W}^{\bm \varepsilon, v}_{i,e}(E_{i,e}) + \sum_{i=1}^3\sum_{j>i}^3\mathcal{W}^{\bm \varepsilon, d}_{ij,e} (G_{ij,e}), \label{Eq:2_energy_of_an_element}$$ where $\mathcal{W}_{i,e}^{\bm \varepsilon, v}$ represents the elastic energy contribution which involves stress and strain in the principal direction $i$ (i.e. the longitudinal stiffness and the Poisson effects), and $\mathcal{W}^{\bm \varepsilon, d}_{ij,e}$ is the contribution of the shear part which operates in directions $i$ and $j$.
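The quadrature-based element energy and its volume-averaged density introduced above can be sketched numerically. The sketch below assumes one hypothetical hexahedral element with a $2\times 2\times 2$ Gauss rule; the strain values, Jacobians and stiffness blocks are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical unit-cube hexahedral element, 2x2x2 Gauss rule (n_p = 8).
n_p = 8
J = np.full(n_p, 0.125)                  # Jacobian determinant per point
w = np.ones(n_p)                         # quadrature weights of the 2x2x2 rule
eps = 1e-3 * rng.normal(size=(n_p, 3))   # [eps_1, eps_2, eps_3] per point
gam = 1e-3 * rng.normal(size=(n_p, 3))   # [gam_12, gam_13, gam_23] per point

# Illustrative stiffness blocks in Voigt notation (positive definite).
D_v = np.array([[50., 10., 10.],
                [10., 20.,  8.],
                [10.,  8., 15.]])
D_d = np.diag([5., 5., 3.])

# W_e = 1/2 sum_p eps . D^v eps J_p w_p + 1/2 sum_p gam . D^d gam J_p w_p
W_v = 0.5 * np.einsum('pi,ij,pj,p,p->', eps, D_v, eps, J, w)
W_d = 0.5 * np.einsum('pi,ij,pj,p,p->', gam, D_d, gam, J, w)
W_e = W_v + W_d

# Element volume by the same quadrature, and volume-averaged energy density.
Omega_e = np.sum(J * w)   # = 1.0 for this unit-cube element
Psi_bar = W_e / Omega_e
```

Dividing by `Omega_e` is what removes the size bias discussed above: two elements with identical strain states but different volumes yield the same `Psi_bar`.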
Thus, splitting Equation ([\[Eq:2_W\_e\]](#Eq:2_W_e){reference-type="ref" reference="Eq:2_W_e"}) into three longitudinal and another three shear parts, $$\mathcal{W}_e^{\bm \varepsilon} = \dfrac{1}{2}\sum_{i=1}^3\sum_{p=1}^{n_p} \underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right)\cdot \mathbf{D}_{i,e}^v \underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right) J_p w_p + \dfrac{1}{2}\sum_{i=1}^3 \sum_{j>i}^3 \sum_{p=1}^{n_p}\underline{\bm{\gamma}}\left(\mathbf{x}_p\right)\cdot \mathbf{D}_{ij,e}^d \underline{\bm{\gamma}}\left(\mathbf{x}_p\right) J_p w_p.$$ The disadvantage of this formulation lies in the difficulty of separating $\mathbf{D}_{1,e}^v$, $\mathbf{D}_{2,e}^v$ and $\mathbf{D}_{3,e}^v$, since $E_1$, $E_2$ and $E_3$ appear non-explicitly in each term of $\mathbf{D}_{e}^v$---see Equation ([\[Eq:2_D\_v\]](#Eq:2_D_v){reference-type="ref" reference="Eq:2_D_v"}). Conversely, the $\mathbf{D}_{ij,e}^d$ are straightforward to relate to their shear modulus $G_{ij,e}$---see Equation ([\[Eq:2_inverse_D\]](#Eq:2_inverse_D){reference-type="ref" reference="Eq:2_inverse_D"}). We can now introduce the volume-averaged strain energy densities $\bar{\Psi}_e^{\bm \varepsilon, v}$ and $\bar{\Psi}_e^{\bm \varepsilon, d}$ by analogy, $$\mathcal{W}_e^{\bm \varepsilon} = \sum_{i=1}^3 \bar{\Psi}_{i,e}^{\bm \varepsilon, v} \Omega_e + \sum_{i=1}^3 \sum_{j>i}^3 \bar{\Psi}_{ij,e}^{\bm \varepsilon, d} \Omega_e,$$ where the volume obtained by means of numerical integration is $\Omega_e = \sum_p J_p w_p$.
Proceeding in an analogous way to the isotropic case, by defining the strain-variables $\mathcal{H}_{i,e}^{\bm \varepsilon}$ and $\mathcal{H}_{ij,e}^{\bm \varepsilon}$ associated with the volumetric and deviatoric contributions, we propose to rewrite the previous terms in the following way $$\bar{\Psi}_{i,e}^{\bm \varepsilon, v} = \dfrac{\sum_p \underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right)\cdot \mathbf{D}_{i,e}^v\underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right)J_p w_p}{2\sum_p J_p w_p} =: E_{i,e}\mathcal{H}_{i,e}^{\bm \varepsilon}, \label{eq:separation_variables_direct}$$ and $$\bar{\Psi}_{ij,e}^{\bm \varepsilon, d} = \dfrac{\sum_p \underline{\bm{\gamma}}\left(\mathbf{x}_p\right)\cdot \mathbf{D}_{ij,e}^d\underline{\bm{\gamma}}\left(\mathbf{x}_p\right)J_p w_p}{2\sum_p J_p w_p} =: G_{ij,e}\mathcal{H}_{ij,e}^{\bm \varepsilon}.$$ Note that we *assume* the explicit separation of variables in the volumetric part of the direct case in Equation [\[eq:separation_variables_direct\]](#eq:separation_variables_direct){reference-type="eqref" reference="eq:separation_variables_direct"} into the Young's moduli $E_{i,e}$ and the strain variable $\mathcal{H}^{\bm \varepsilon}_{i,e}$ in order to follow a similar procedure to [@ben2023topology]. This separation is, however, exact in the deviatoric contribution, since $G_{ij,e}$ is an explicit, separable term in $\bar{\Psi}_{ij,e}^{\bm \varepsilon, d}$, i.e. no assumption is required.
Therefore, the *ad-hoc* variables $\mathcal{H}_{i,e}^{\bm \varepsilon}$ and $\mathcal{H}_{ij,e}^{\bm \varepsilon}$ are defined as $$\mathcal{H}_{i,e}^{\bm \varepsilon} := \dfrac{\sum_p\underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right)\cdot \hat{\mathbf{D}}_{i,e}^v \underline{\bm{\varepsilon}}\left(\mathbf{x}_p\right) J_p w_p}{2\sum_p J_p w_p},$$ and $$\mathcal{H}_{ij,e}^{\bm \varepsilon} := \dfrac{\sum_p\underline{\bm{\gamma}}\left(\mathbf{x}_p\right)\cdot \hat{\mathbf{D}}_{ij,e}^d \underline{\bm{\gamma}}\left(\mathbf{x}_p\right) J_p w_p}{2\sum_p J_p w_p}.$$ Regarding these introduced variables, the following dimensionless matrices are proposed for the volumetric part (in order to compute $\mathcal{H}_{i,e}^{\bm \varepsilon}$): $$\hat{\mathbf{D}}_{i,e}^v = \dfrac{\mathbf{D}_e^v}{3E_{i,e}},$$ so the volumetric contribution of the elastic energy is recovered when these reduced matrices are summed over the three material directions. As highlighted before, $\hat{\mathbf{D}}_{i,e}^v$ neither depends solely on $E_i$, nor is this dependence explicit, i.e. $\partial \mathcal{H}_{i,e}^{\bm \varepsilon} / \partial E_{i,e} \neq 0$. Hence the advantage of a complementary formulation in which the Young moduli $E_i$ appear explicitly in the terms of the (compliance) matrix. This complementary formulation will be covered later on. However, this explicit separation is indeed achieved in the deviatoric contribution, since the matrices $\hat{\mathbf{D}}^d_{ij,e}$ might be easily obtained as $$\hat{\mathbf{D}}^d_{ij,e} = \dfrac{\mathbf{D}^d_{ij,e}}{G_{ij,e}},$$ leading to the following reduced matrices $$[\hat{\mathbf{D}}^d_{12,e}] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \; [\hat{\mathbf{D}}^d_{13,e}] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \; [\hat{\mathbf{D}}^d_{23,e}] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
\label{Eq:2_D_ij_e_reduced}$$ Note that these reduced matrices are constant regardless of the element $e$ that they represent, which leads to $\partial \bar{\Psi}^{\bm \varepsilon,d}_{ij,e} / \partial G_{ij,e} = \mathcal{H}_{ij,e}^{\bm \varepsilon}$. Having computed $\bm\mathcal{H}^{\bm \varepsilon}_i$ and $\bm \mathcal{H}^{\bm \varepsilon}_{ij}$, the total energy might be computed by $$\mathcal{W}_e^{\bm \varepsilon} = \displaystyle\sum_{i=1}^3 E_{i,e}\mathcal{H}_{i,e}^{\bm \varepsilon}\Omega_e + \displaystyle\sum_{i=1}^3 \sum_{j>i}^3 G_{ij,e}\mathcal{H}_{ij,e}^{\bm \varepsilon} \Omega_e. \label{Eq:2_W_e_decoupled_and_H}$$ ## Complementary (stress-based) approach with separated influence of elastic properties {#Subsec:2-complementary} In the linear elastic regime, the complementary elastic energy is equal to the direct elastic energy i.e. $\mathcal{W}^{\bm \sigma} = \mathcal{W}^{\bm \varepsilon}$, so the computation is analogous. The complementary strain energy function might be expressed as [@holzapfel2002nonlinear] $$\mathcal{W}^{\bm \sigma} = \int_\Omega \dfrac{1}{2}\bm{\sigma}:\bm{\varepsilon}\left(\bm{\sigma}\right)\mathrm{d}\Omega = \int_\Omega \dfrac{1}{2}\bm{\sigma}:\mathbb{S}:\bm{\sigma}\,\mathrm{d}\Omega = \sum_{e=1}^{n_e}\int_{\Omega_e}\dfrac{1}{2}\bm{\sigma}:\mathbb{S}:\bm{\sigma}\,\mathrm{d}\Omega_e =: \sum_{e=1}^{n_e}\mathcal{W}_e^{\bm \sigma}, \label{Eq:2_energy_complementary_definition}$$ where $\mathbb{S}$ is the fourth-order compliance tensor, defined such that $\bm{\varepsilon} = \mathbb{S}:\bm{\sigma}$. 
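The equality $\mathcal{W}^{\bm \sigma} = \mathcal{W}^{\bm \varepsilon}$ in the linear regime can be verified pointwise with a small numerical check. The stiffness blocks and strain values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Illustrative symmetric positive-definite stiffness blocks (Voigt notation).
D_v = np.array([[50., 10., 10.],
                [10., 20.,  8.],
                [10.,  8., 15.]])
D_d = np.diag([5., 5., 3.])

eps = np.array([1e-3, -2e-4, 5e-4])   # longitudinal strains [eps_1, eps_2, eps_3]
gam = np.array([2e-3, 0.0, -1e-3])    # engineering shear strains [gam_12, gam_13, gam_23]

# Constitutive law: sigma = D^v eps, tau = D^d gam.
sig = D_v @ eps
tau = D_d @ gam

# Direct energy density (strain form) and complementary density (stress form).
W_direct = 0.5 * (eps @ D_v @ eps + gam @ D_d @ gam)
W_compl = 0.5 * (sig @ np.linalg.inv(D_v) @ sig + tau @ np.linalg.inv(D_d) @ tau)
```

Substituting $\bm\sigma = \mathbf{D}\bm\varepsilon$ into $\tfrac12\,\bm\sigma\cdot\mathbf{D}^{-1}\bm\sigma$ gives back $\tfrac12\,\bm\varepsilon\cdot\mathbf{D}\bm\varepsilon$, which is what the check confirms.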
Thus, the complementary strain energy contribution of element $e$, $\mathcal{W}^{\bm \sigma}_e$, could be defined as $$\mathcal{W}_e^{\bm \sigma} = \dfrac{1}{2}\sum_{p=1}^{n_p}\underline{\bm{\sigma}}\left(\mathbf{x}_p\right)\cdot (\mathbf{D}_e^v)^{-1} \underline{\bm{\sigma}}\left(\mathbf{x}_p\right) J_p w_p + \dfrac{1}{2}\sum_{p=1}^{n_p}\underline{\bm{\tau}}\left(\mathbf{x}_p\right)\cdot (\mathbf{D}_e^d)^{-1} \underline{\bm{\tau}}\left(\mathbf{x}_p\right) J_p w_p, \label{Eq:2_W_e_c}$$ where numerical integration with $n_p$ integration points is likewise performed, and the vectors $\underline{\bm{\sigma}} = [\sigma_{1}, \sigma_{2}, \sigma_{3}]^T$ and $\underline{\bm{\tau}} = [\tau_{12}, \tau_{13}, \tau_{23}]^T$ are defined analogously to $\underline{\bm{\varepsilon}}$ and $\underline{\bm{\gamma}}$. Note that $\underline{\bm{\sigma}} = \mathbf{D}^v\underline{\bm{\varepsilon}}$ and $\underline{\bm{\tau}} = \mathbf{D}^d\underline{\bm{\gamma}}$. Similarly to the direct approach, splitting the complementary energy function $\mathcal{W}^{\bm \sigma}$ into volumetric and deviatoric contributions and considering the previously described finite-element discretization, $$\mathcal{W}_e^{\bm \sigma} = \dfrac{1}{2}\sum_{i=1}^3 \sum_{p=1}^{n_p}\underline{\bm{\sigma}}\left(\mathbf{x}_p\right)\cdot (\mathbf{D}_{i,e}^v)^{-1} \underline{\bm{\sigma}}\left(\mathbf{x}_p\right) J_p w_p + \dfrac{1}{2}\sum_{i=1}^3 \sum_{j>i}^3\sum_{p=1}^{n_p} \underline{\bm{\tau}}\left(\mathbf{x}_p\right)\cdot (\mathbf{D}_{ij,e}^d)^{-1} \underline{\bm{\tau}}\left(\mathbf{x}_p\right) J_p w_p. \label{Eq:2_W_e_c_decoupled}$$ In this case, the terms of the compliance matrices depend explicitly on the elastic parameters they are associated with, so a direct identification of $(\mathbf{D}_{i,e}^v)^{-1}$ and $(\mathbf{D}_{ij,e}^d)^{-1}$ from the total matrices can be carried out.
This represents an advantage over the direct case, resulting from a simpler formulation of the volumetric part. Following the analogy with the isotropic case, the complementary formulation for these materials states that the volume-averaged energy density $\bar{\Psi}^{\bm \sigma}_e$ might be expressed as $\bar{\Psi}_e^{\bm \sigma} = \mathcal{H}_e^{\bm \sigma} / E_e$, where an *ad-hoc* stress variable $\mathcal{H}_e^{\bm \sigma}$ is defined. Thus, it is satisfied that $\partial \mathcal{H}^{\bm \sigma}_e / \partial E_e = 0$, assuming that the stress distribution remains fixed. Extrapolating to the orthotropic case, Equation ([\[Eq:2_W\_e_c\_decoupled\]](#Eq:2_W_e_c_decoupled){reference-type="ref" reference="Eq:2_W_e_c_decoupled"}) might be rewritten as $$\mathcal{W}_e^{\bm \sigma} = \sum_{i=1}^3 \bar{\Psi}_{i,e}^{\bm \sigma,v}\Omega_e + \sum_{i=1}^3\sum_{j>i}^3\bar{\Psi}^{\bm \sigma,d}_{ij,e} \Omega_e,$$ which allows us to group terms into the following variables $$\bar{\Psi}_{i,e}^{\bm \sigma,v} = \dfrac{\sum_p\underline{\bm{\sigma}}\left(\mathbf{x}_p\right)\cdot (\hat{\mathbf{D}}_{i,e}^v)^{-1} \underline{\bm{\sigma}}\left(\mathbf{x}_p\right) J_p w_p}{2E_{i,e}\sum_p J_p w_p} =: \dfrac{\mathcal{H}_{i,e}^{\bm \sigma}}{E_{i,e}}$$ and $$\bar{\Psi}_{ij,e}^{\bm \sigma,d} = \dfrac{\sum_p\underline{\bm{\tau}}\left(\mathbf{x}_p\right)\cdot (\hat{\mathbf{D}}_{ij,e}^d)^{-1} \underline{\bm{\tau}}\left(\mathbf{x}_p\right) J_p w_p}{2G_{ij,e}\sum_p J_p w_p} =: \dfrac{\mathcal{H}_{ij,e}^{\bm \sigma}}{G_{ij,e}}.$$ The complementary *ad-hoc* variables are $$\mathcal{H}^{\bm \sigma}_{i,e} := \dfrac{\sum_p\underline{\bm{\sigma}}\left(\mathbf{x}_p\right)\cdot (\hat{\mathbf{D}}_{i,e}^v)^{-1} \underline{\bm{\sigma}}\left(\mathbf{x}_p\right) J_p w_p}{2\sum_p J_p w_p},$$ and $$\mathcal{H}^{\bm \sigma}_{ij,e} := \dfrac{\sum_p\underline{\bm{\tau}}\left(\mathbf{x}_p\right)\cdot (\hat{\mathbf{D}}_{ij,e}^d)^{-1} \underline{\bm{\tau}}\left(\mathbf{x}_p\right) J_p
w_p}{2\sum_p J_p w_p}.$$ On the one hand, the volumetric reduced matrices $(\hat{\mathbf{D}}_{i,e}^v)^{-1}$ are now direct to obtain, namely $$= \begin{bmatrix} 1 & 0 & 0 \\ -\nu_{12} & 0 & 0 \\ -\nu_{13} & 0 & 0 \end{bmatrix}, \; [\hat{\mathbf{D}}^v_{2,k}] = \begin{bmatrix} 0 & -\nu_{21} & 0 \\ 0 & 1 & 0 \\ 0 & -\nu_{23} & 0 \end{bmatrix}, \; [\hat{\mathbf{D}}^v_{3,k}] = \begin{bmatrix} 0 & 0 & -\nu_{31} \\ 0 & 0 & -\nu_{32} \\ 0 & 0 & 1 \end{bmatrix}. \label{Eq:2_inverse_D_i^v}$$ Therefore, the condition that isotropic materials meet it is likewise satisfied in the complementary formulation, that is, $\partial \mathcal{H}^{\bm \sigma}_{i,e} / \partial E_{i,e} = 0$ and $\partial \mathcal{H}^{\bm \sigma}_{ij,e} / \partial G_{ij,e} = 0$. This will imply a slightly simpler formulation that avoids computing derivatives of constitutive matrix terms. On the other hand, for the deviatoric part, it turns out that the reduced matrices are the same as the direct formulation i.e. $$(\hat{\mathbf{D}}_{ij,e}^d)^{-1} = \hat{\mathbf{D}}_{ij,e}^d,$$ so the expressions for these matrices are the same as those stated in Equation ([\[Eq:2_D\_ij_e\_reduced\]](#Eq:2_D_ij_e_reduced){reference-type="ref" reference="Eq:2_D_ij_e_reduced"}). Taking all into account, the total (complementary) energy is thus computed as $$\mathcal{W}_e^{\bm \sigma} = \displaystyle\sum_{i=1}^3 \dfrac{\mathcal{H}^{\bm \sigma}_{i,e}}{E_{i,e}}\Omega_e + \displaystyle\sum_{i=1}^3 \sum_{j>i}^3 \dfrac{\mathcal{H}^{\bm \sigma}_{ij,e}}{G_{ij,e}} \Omega_e. \label{Eq:2_W_e_c_decoupled_and_H}$$ ## Update formula derivation We now perform an extension starting from the isotropic algorithm formulation for direct and complementary energies. From Ben-Yelun et al. 
[@ben2023topology], the minimization of the standard deviation $s_{\mathcal{H}^{\bm \varepsilon}}(\bm \mathcal{H}^{\bm \varepsilon})$ of the isotropic strain-level variable $\bm \mathcal{H}^{\bm \varepsilon}$, subject to the equilibrium equation, is performed by applying a gradient-based scheme with step parameter $\eta_e := \mathcal{H}^{\bm \varepsilon}_e N / k$, where $k$ is a (user-selected) update parameter that controls the smoothness of the step. This yields the update formula $$^{t+1}E_e = {}^{t}E_e\left(1 + \dfrac{^t\mathcal{H}^{\bm \varepsilon}_e - {}^t\overline{\bm \mathcal{H}^{\bm \varepsilon}}}{ s_{\bm \mathcal{H}^{\bm \varepsilon}} k }\right),$$ where $\overline{\bm \mathcal{H}^{\bm \varepsilon}}$ is the average value of this variable over all the elements in the domain. We can extend this isotropic update parameter to the strain-based orthotropic formulation, i.e., $\eta_{i,e}:=\mathcal{H}^{\bm \varepsilon}_{i,e}N/k_i$ and $\eta_{ij,e} := \mathcal{H}^{\bm \varepsilon}_{ij,e}N/k_{ij}$, ultimately arriving at $$\begin{array}{c} {^{t+1}}E_{i,e} = \tensor[^t]{E}{_{i,e}} \left(1 + \tensor[^t]{\alpha}{_{i,e}} \right),\\[1ex] {^{t+1}}G_{ij,e} = \tensor[^t]{G}{_{ij,e}} \left(1 + \tensor[^t]{\alpha}{_{ij,e}} \right), \end{array} \label{Eq:2_update_direct}$$ with $$\begin{array}{c} \alpha_{i,e} = \dfrac{\mathcal{H}^{\bm \varepsilon}_{i,e} - \overline{\bm \mathcal{H}^{\bm \varepsilon}_i}}{s_{\bm \mathcal{H}^{\bm \varepsilon}_i}k_i},\\ [3ex] \alpha_{ij,e} = \dfrac{\mathcal{H}^{\bm \varepsilon}_{ij,e} - \overline{\bm \mathcal{H}^{\bm \varepsilon}_{ij}}}{ s_{\bm \mathcal{H}^{\bm \varepsilon}_{ij}} k_{ij} }. \end{array} \label{Eq:2_alpha_definition}$$ Therefore, update formulae to homogenize all six $\bm \mathcal{H}^{\bm \varepsilon}_i$ and $\bm \mathcal{H}^{\bm \varepsilon}_{ij}$ variables are derived by computing the update parameters from the strain field information.
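Both update rules can be sketched per elastic property in a few lines of numpy; the function names are illustrative, and using the population standard deviation is an assumed convention.

```python
import numpy as np

def direct_update(E, H, k):
    """Strain-based update, Eq. (update_direct): each element's property
    grows (shrinks) when its H-variable lies above (below) the mean.
    E, H: per-element arrays for one property (an E_i or a G_ij)."""
    alpha = (H - H.mean()) / (H.std() * k)
    return E * (1.0 + alpha)

def complementary_update(E, H_sigma, k):
    """Stress-based update, Eq. (update_complementary); the strain-level
    variable is recovered through H^eps = H^sigma / E**2."""
    H_eps = H_sigma / E**2
    alpha = (H_eps - H_eps.mean()) / (H_eps.std() * k)
    return E / (1.0 - alpha)
```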
Note that $k_i$ and $k_{ij}$ are the (user-prescribed) modulating parameters and can be defined independently for every elastic property being optimized. For ease of notation we write $$\bm{k} = \left\{\begin{array}{cccccc} k_1 & k_2 & k_3 & k_{12} & k_{13} & k_{23} \end{array} \right\}.$$ Similarly, according to [@ben2023topology] and making use of the previous extension, the update formulae for the stress-based orthotropic formulation are $$\begin{array}{c} ^{t+1}E_{i,e} = \dfrac{^tE_{i,e}}{1 - \tensor[^t]{\alpha}{_{i,e}}}, \\[3ex] ^{t+1}G_{ij,e} = \dfrac{^tG_{ij,e}}{1 - \tensor[^t]{\alpha}{_{ij,e}}}, \end{array} \label{Eq:2_update_complementary}$$ where the update parameters $\alpha_{i,e}$ and $\alpha_{ij,e}$ are computed by means of Equation [\[Eq:2_alpha_definition\]](#Eq:2_alpha_definition){reference-type="eqref" reference="Eq:2_alpha_definition"}, i.e., as in the strain-based case, since strain homogenization is pursued. The difference lies in the way these variables are computed, making use of the following handy relations $$\mathcal{H}^{\bm \varepsilon}_{i,e} = \dfrac{\mathcal{H}^{\bm \sigma}_{i,e}}{E_{i,e}^2}, \quad \mathcal{H}^{\bm \varepsilon}_{ij,e} = \dfrac{\mathcal{H}^{\bm \sigma}_{ij,e}}{G_{ij,e}^2}.$$

## Energy restrictions

Some constraints have to be taken into account in order to prevent the optimization algorithm from creating a thermodynamically inconsistent material [@jones2018mechanics]. For isotropic materials, these are a positive Young modulus $E>0$ and a bounded Poisson ratio, i.e., $-1 < \nu < 0.5$---these limits ensure that the shear and bulk moduli are positive, respectively.
The extrapolation of these restrictions to the orthotropic case is taken from Jones [@jones2018mechanics]; they are listed below: $$\begin{array}{c} \begin{array}{rl} E_i \geq 0, & i = 1, 2, 3\\[2ex] G_{ij} \geq 0, & i=1,2,3; \; j>i\\[2ex] \left|\nu_{ij} \right| < \sqrt{\dfrac{E_i}{E_j}}, & i=1,2,3; \; j>i \end{array}\\[2ex] \nu_{12} \nu_{23}\nu_{31} < \dfrac{1}{2}\left(1 - \nu_{12}\nu_{21} - \nu_{13}\nu_{31} - \nu_{23}\nu_{32}\right) \end{array} \label{Eq:2_energy_restrictions}$$

## Problem statement

With everything detailed above, the optimization problem can be stated as $$\left\lbrace \begin{array}{rl} \min & \lbrace s_{\bm \mathcal{H}^{\bm \varepsilon}_i}, s_{\bm \mathcal{H}^{\bm \varepsilon}_{ij}}\rbrace, \quad i=1,2,3;\; j>i \\[2ex] \textrm{s.t.} & \bm{K}(E_i, G_{ij}, \nu_{ij})\; \bm u = \bm f \\[2ex] & \begin{array}{ll} 0 \leq E_i \leq E_{\max}, & i = 1,2,3\\[2ex] 0 \leq G_{ij} \leq G_{\max}, & i=1,2,3; \; j>i\\[2ex] \left|\nu_{ij} \right| < \sqrt{\dfrac{E_i}{E_j}}, & i=1,2,3; \; j>i \end{array}\\[2ex] & \nu_{12} \nu_{23}\nu_{31} < \dfrac{1}{2}\left(1 - \nu_{12}\nu_{21} - \nu_{13}\nu_{31} - \nu_{23}\nu_{32}\right) \end{array} \right. \label{Eq:2_optimization_problem}$$ All six standard deviations are minimized at once, by likewise updating all the elastic properties at once. This is achieved by separating every iteration into two steps: first, the equilibrium is solved via FEM using the elastic properties from the previous iteration -- or the initial values, in the case of iteration 0 -- thus obtaining the displacement, stress and strain distributions. Then the elastic properties are updated via Eqs. ([\[Eq:2_update_direct\]](#Eq:2_update_direct){reference-type="ref" reference="Eq:2_update_direct"}) for the direct approach or ([\[Eq:2_update_complementary\]](#Eq:2_update_complementary){reference-type="ref" reference="Eq:2_update_complementary"}) if the complementary approach is used. This update is performed while the rest of the variables are kept fixed.
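A per-element feasibility check of the restrictions appearing in this problem statement can be sketched as follows; the function name and the dictionary layout for the Poisson ratios are illustrative assumptions.

```python
import numpy as np

def orthotropic_feasible(E, G, nu):
    """Check the restrictions of Eq. (energy_restrictions) for one element.
    E = (E1, E2, E3), G = (G12, G13, G23), nu = {(i, j): nu_ij} with
    1-based indices for all i != j."""
    if any(Ei < 0 for Ei in E) or any(Gij < 0 for Gij in G):
        return False
    # |nu_ij| < sqrt(E_i / E_j) for the three upper pairs
    for i, j in [(1, 2), (1, 3), (2, 3)]:
        if abs(nu[(i, j)]) >= np.sqrt(E[i - 1] / E[j - 1]):
            return False
    # combined restriction on the products of Poisson ratios
    lhs = nu[(1, 2)] * nu[(2, 3)] * nu[(3, 1)]
    rhs = 0.5 * (1.0 - nu[(1, 2)] * nu[(2, 1)]
                     - nu[(1, 3)] * nu[(3, 1)]
                     - nu[(2, 3)] * nu[(3, 2)])
    return lhs < rhs
```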
The optimization is run until a convergence criterion is satisfied, setting a tolerance $\epsilon$ to this effect. Namely, the update is applied until the relative change between two successive computations of the elastic energy falls below this tolerance, i.e., $$\dfrac{\left|^{t+1}\mathcal{W}^{\square} - {}^t\mathcal{W}^{\square}\right|}{^t\mathcal{W}^{\square}} \leq \epsilon. \label{Eq:2_convergence}$$

# Methodology {#Sec:methodology}

This section introduces the specifics of the methods used in this paper. The following subsections address the initial conditions fed to the optimization algorithm, as well as the processes within the algorithm itself.

## Boundary Conditions

Once the initial volume has been properly defined and meshed, the program assigns each FE a set of initial values for the six elastic properties considered in this work. The next step is to prescribe BCs at the boundaries $\Gamma_D$ and $\Gamma_N$. Imposing continuous BCs along the cube is impossible due to the nature of FEs; instead, the BCs must be prescribed on the nodes of the mesh. In order to select a set of nodes, several functions have been implemented to select regions of the mesh and determine which nodes are contained within them. Having selected the nodes, it is possible to impose either displacements or forces on them. Displacements are measured in $\SI{}{\milli \meter}$, while forces are measured in $\SI{}{\newton}$. Note that, since tetrahedra are solid FEs without rotational DOFs, rotations cannot be imposed.

## Optimization Algorithm

With the domain and boundaries properly defined, it is now possible to commence the optimization. The stiffness update process is composed of three consecutive sub-processes.
First of all, a base update takes place following Equations [\[Eq:2_update_direct\]](#Eq:2_update_direct){reference-type="eqref" reference="Eq:2_update_direct"} or [\[Eq:2_update_complementary\]](#Eq:2_update_complementary){reference-type="eqref" reference="Eq:2_update_complementary"} for the direct or complementary approaches, respectively. Secondly, elements whose properties have reached certain values are removed from the optimization. Finally, the resulting stiffness distribution is adjusted in order to fit the energy constraints of orthotropic materials.

### Base Update

The base update process is based on Equation ([\[Eq:2_update_direct\]](#Eq:2_update_direct){reference-type="ref" reference="Eq:2_update_direct"}) for the direct approach and Equation ([\[Eq:2_update_complementary\]](#Eq:2_update_complementary){reference-type="ref" reference="Eq:2_update_complementary"}) for the complementary approach. Additionally, we have proposed that the update variables $\tensor[^{t}]{\alpha}{_{i,e}}$ and $\tensor[^{t}]{\alpha}{_{ij,e}}$ take the form stated in Equation ([\[Eq:2_alpha_definition\]](#Eq:2_alpha_definition){reference-type="ref" reference="Eq:2_alpha_definition"}). Recall that the parameters in $\bm{k}$ must be selected manually by the user and play a fundamental role in the results attained, since they determine how aggressive the update process is. Several tests with different values of $\bm{k}$ have been performed in order to find an effective set of values for each scenario. Furthermore, once three of the six Poisson ratios are defined, the remaining three can be obtained in different ways; two such methods have been implemented. The first method consists of fixing the values of $\nu_{12}, \nu_{23}$ and $\nu_{13}$, then assigning the corresponding values to $\nu_{21}, \nu_{32}$ and $\nu_{31}$ through the symmetry relation $\nu_{ij}/E_i = \nu_{ji} / E_j$.
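The first, symmetry-based method amounts to a one-line application of the reciprocal relation; a minimal sketch follows, where the function name and argument layout are assumptions.

```python
def reciprocal_poisson(E, nu_upper):
    """Given E = (E1, E2, E3) and the fixed upper ratios
    nu_upper = {(i, j): nu_ij} with i < j (1-based indices), recover the
    remaining ratios from the symmetry relation nu_ij / E_i = nu_ji / E_j."""
    nu = dict(nu_upper)
    for (i, j), v in nu_upper.items():
        nu[(j, i)] = v * E[j - 1] / E[i - 1]
    return nu
```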
Alternatively, we can impose a relationship such as $\nu_{ij} + \nu_{ik} = \kappa$ and then solve a system of equations to obtain $\nu_{ij}$ and $\nu_{ik}$. Finally, once the FEs have been updated following the previous procedure, elastic properties that have reached values beyond their specified range are clipped back into that range. The entire base update process is described in Algorithm [\[alg:baseupd\]](#alg:baseupd){reference-type="ref" reference="alg:baseupd"}: for each element $e$, each index $i$ and each pair $ij$,

1. compute the update parameters $\alpha_{i,e} \gets \frac{\mathcal{H}^{\bm \varepsilon}_{i,e}-\overline{\bm\mathcal{H}^{\bm \varepsilon}_i}}{s_{\bm \mathcal{H}^{\bm \varepsilon}_i}k_i}$ and $\alpha_{ij,e} \gets \frac{\mathcal{H}^{\bm \varepsilon}_{ij,e}-\overline{\bm\mathcal{H}^{\bm \varepsilon}_{ij}}}{s_{\bm \mathcal{H}^{\bm \varepsilon}_{ij}}k_{ij}}$;
2. if ${\rm approach} = {\rm direct}$, set $E_{i,e} \gets E_{i,e} \left(1 + \alpha_{i,e} \right)$ and $G_{ij,e} \gets G_{ij,e} \left(1 + \alpha_{ij,e} \right)$; if ${\rm approach} = {\rm complementary}$, set $E_{i,e} \gets E_{i,e} \left(\frac{1}{1 - \alpha_{i,e}} \right)$ and $G_{ij,e} \gets G_{ij,e} \left(\frac{1}{1 - \alpha_{ij,e}} \right)$;
3. clip the result to the admissible range: $E_{i,e} \gets E_{\max}$ or $E_{i,e} \gets E_{\min}$ if the updated value exceeds $E_{\max}$ or falls below $E_{\min}$, and likewise $G_{ij,e} \gets G_{\max}$ or $G_{ij,e} \gets G_{\min}$.

### Element Removal

In order to improve the performance of the algorithm and facilitate convergence, certain FEs -- or, more specifically, the elastic properties assigned to them -- are removed from the optimization process and their values fixed once given lower and upper thresholds are surpassed. To this end, elastic properties that have reached values above the maximum allowed or below the minimum are not updated in the following iterations.

### Fulfillment of Energy Constraints

The values obtained after the base update may not be compatible with a real orthotropic material.
Hence, they must be adjusted in order to satisfy the constraints introduced in Equation ([\[Eq:2_energy_restrictions\]](#Eq:2_energy_restrictions){reference-type="ref" reference="Eq:2_energy_restrictions"}). The code checks in every FE whether the energy constraints are satisfied. If they are not, the values of the corresponding elastic properties must be altered. Two different alteration algorithms are used, depending on how the elastic energy associated with an FE compares to the average energy of all FEs. The stiffness of FEs with higher elastic energy is, to a first approximation, more relevant to the behaviour of the structure. Therefore, FEs that surpass a certain energy threshold undergo a process that exclusively increases the values of the Young and shear moduli whenever possible, while the rest may be subject to a process that reduces some of the elastic properties in order to meet the required constraints. Note that the constraints in Equations ([\[Eq:2_energy_restrictions\]](#Eq:2_energy_restrictions){reference-type="ref" reference="Eq:2_energy_restrictions"}.1) and ([\[Eq:2_energy_restrictions\]](#Eq:2_energy_restrictions){reference-type="ref" reference="Eq:2_energy_restrictions"}.2) do not need to be enforced, since the base update prevents the Young and shear moduli from taking values below a certain minimum. Regarding the constraint in Equation ([\[Eq:2_energy_restrictions\]](#Eq:2_energy_restrictions){reference-type="ref" reference="Eq:2_energy_restrictions"}.3), it is always met when the second method of computing the Poisson ratios is selected, as long as $\left\lvert \kappa \right\rvert < 2$. This section has described in detail the procedures used to develop the object of this paper, providing the understanding required to introduce the results and the conclusions inferred from them in the following sections.
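The overall two-step iteration described in this section, together with the relative-energy convergence criterion, can be sketched as a generic driver; every name here is a hypothetical placeholder for the actual solver and update routines.

```python
def optimize(props, solve_fem, update, energy, eps=0.01, max_iter=100):
    """Generic driver: (1) solve equilibrium with the current properties,
    (2) update the elastic properties from the computed fields; stop when
    the relative change of the elastic energy drops below eps."""
    W_prev = None
    for _ in range(max_iter):
        fields = solve_fem(props)      # displacements, strains, stresses
        props = update(props, fields)  # direct or complementary update
        W = energy(props, fields)
        if W_prev is not None and abs(W - W_prev) / W_prev <= eps:
            break
        W_prev = W
    return props
```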
# Numerical examples {#Sec:results}

This section presents the results obtained using the procedures explained in Section [3](#Sec:methodology){reference-type="ref" reference="Sec:methodology"}. To this end, four different load cases are studied. For each of them, figures with the achieved stiffness distribution are presented, as well as a figure indicating which FEs have the greatest elastic energy and the numerical value of the overall compliance of the structure, which is the main factor in evaluating the effectiveness of each method. The results are computed using the direct approach for the isotropic case, and both the direct and complementary approaches for the orthotropic case. It has been deemed unnecessary to show the results for the isotropic case using both approaches, since the results are very similar, especially when large values of $\bm{k}$ are selected---see Ben-Yelun et al. for more details [@ben2023topology]. Figure [1](#fig:num1){reference-type="ref" reference="fig:num1"} shows the four cases studied. All use the same initial cube and properties, but with different boundary conditions. The top row shows the boundary conditions, the middle row the deformation imposed in each case, with different magnifications, and the bottom row the isosurfaces of constant elastic energy density for the orthotropic complementary calculation of each case. ![The boundary conditions of the four cases studied in this section, denoted as tube (a), elbow (b), chair (c) and torsion (d). The first row shows the boundary conditions (a-d), the second row the initial deformed configuration of the cube with different magnifications (e-h), and the third row the isosurface of elastic energy for the optimized sample using the orthotropic calculation with the complementary approach (i-l).
In grey the isosurface of 0.8 $\mu$J/mm$^3$, in green that of 8 $\mu$J/mm$^3$, and in red the 80 $\mu$J/mm$^3$ isosurface.](all_BC_energy.jpg){#fig:num1 width="100%"} These results will be contrasted with each other and against the results of a traditional SIMP optimization from the commercial program [Altair]{.smallcaps} [OptiStruct]{.smallcaps}.

## Hyperparameters

For all the load cases, the generator cube has a side length of $\SI{100}{\milli\meter}$ and there are 12 subdivisions per side, for a total of 8640 FEs. The initial and limit values assigned to the FEs for the isotropic case are: $$\begin{gathered} \tensor[^0]{E}{} = \SI{200}{\giga\pascal}; \quad \nu = 0.33, \\ E_{\max} = \SI{500}{\giga\pascal}; \quad E_{\min} = \SI{50}{\giga\pascal},\end{gathered}$$ and, for the orthotropic case, $$\begin{gathered} \tensor[^0]{E}{_{i}} = \SI{200}{\giga\pascal}; \quad \tensor[^0]{G}{_{ij}} = \SI{75.19}{\giga\pascal}; \quad \nu_{ij} = 0.33, \\ E_{i,\max} = \SI{500}{\giga\pascal}; \quad E_{i,\min} = \SI{50}{\giga\pascal}, \\ G_{ij,\max} = \SI{75.19}{\giga\pascal}; \quad G_{ij,\min} = \SI{7.519}{\giga\pascal}; \quad i,j = 1,2,3; \enspace j > i.\end{gathered}$$ In all cases, the prescribed values of $\nu_{12}$, $\nu_{13}$ and $\nu_{23}$ remain constant during the optimization process (recall that the remaining Poisson ratios are updated along with the elastic properties within the optimization loop). The parameters in $\bm{k}$ are defined individually for each load case. Finally, the tolerance established for the convergence condition in Equation ([\[Eq:2_convergence\]](#Eq:2_convergence){reference-type="ref" reference="Eq:2_convergence"}) takes the value $\epsilon = 0.01$. In the case of [OptiStruct]{.smallcaps}, the generator cube has the same dimensions but is meshed in a different fashion, with elements of average size 5 mm. The SIMP method deals with a material with a maximum Young modulus $E_{\max} = \SI{500}{\giga\pascal}$ using $p=1$.
The penalization parameter is set to 1 since this choice might stiffen some elements using a smaller volume fraction of them, thus achieving a lower compliance in comparison with analyses with $p>1$. Furthermore, using a finer mesh allows the problem to reach a slightly lower compliance. Both of these adjustments benefit the final compliance obtained with the SIMP method, thus providing a more conservative comparison for the methods presented in this paper. Finally, the volume fraction is set to $\hat{V} = V/V_{\max} = 0.5$ as a trade-off solution, since lower volume fractions do not allow the domain to be completely (topologically) connected, and higher fractions tend to over-stiffen the final structure, thus yielding an artificially low-compliance solution. Also, the final shape obtained using this volume fraction is similar to the ones obtained using the other methods.

## Load Case 1: Tube

The first load case, named Tube, is characterized by normal stresses along the $z-$axis without significant stresses in any other direction. For this scenario, all the parameters in $\bm{k}$ are set to 15. The boundary conditions imposed to simulate this load case are displayed in Figure [1](#fig:num1){reference-type="ref" reference="fig:num1"}a. This load case is defined by two circular crowns on the top and bottom faces of the generator cube. The bottom crown (blue) represents the Dirichlet boundary $\Gamma_D$, where all displacements are set to $\SI{0}{\milli\meter}$. The top crown (red) represents part of the Neumann boundary $\Gamma_N$, where a force of $\SI{10000}{\newton}$ in the direction of the $z-$axis has been distributed among all the contained nodes. The remaining surface of the cube also belongs to the boundary $\Gamma_N$, since the prescribed force on all the nodes contained within is equal to $\SI{0}{\newton}$.
### Isotropic Case

The isotropic case resulted in a compliance of $\SI{2.23}{\newton\!\,\milli\meter}$ after 21 iterations and provided the stiffness distributions in Figure [2](#fig:isotropic1){reference-type="ref" reference="fig:isotropic1"}. ![Optimal Young moduli distribution for the direct and complementary isotropic calculations for the tube case.](tubo_iso.jpg){#fig:isotropic1 width="60%"} The obtained results seem very reasonable, since both the Young modulus and the elastic energy are greatest in a tube-like volume between the two circular crowns, as one might have expected in the first place. It must be noted that only FEs with elastic energies greater than a certain value relative to the maximum elastic energy within the generator cube have been represented. The quasi-transparent FEs have surpassed a less strict threshold, while the opaque ones have surpassed a stricter one. This case serves as the benchmark to evaluate the two following results, which correspond to the orthotropic case.

### Orthotropic Case

For this load case, as well as for the successive orthotropic analyses, only three of the six elastic properties are discussed, i.e., the most relevant ones. In the Tube load case, the elastic properties of interest are $E_{x}$, $E_{z}$ and $G_{yz}$.

#### Direct Approach

The orthotropic case using the direct approach resulted in a compliance of $\SI{2.03}{\newton\!\,\milli\meter}$ after 14 iterations and provided the stiffness distributions in Figure [3](#fig:direct1){reference-type="ref" reference="fig:direct1"}. ![Optimal distribution for the tube considering the direct approach in the orthotropic case.](orto_direct_tubo.jpg){#fig:direct1 width="80%"} Similarly to the isotropic case, $E_{z}$ develops into a tube-like structure between the circular crowns.
On the other hand, $E_{x}$ -- analogous to $E_{y}$ -- is greater next to the circular crowns and on the faces perpendicular to the $y-$axis, most likely to minimize the compliance associated with the Poisson effect. Lastly, $G_{yz}$ -- analogous to $G_{xz}$ -- is greatest next to both circular crowns, presenting a significant directionality. The FEs with the most elastic energy are organized in a very similar structure to that of the isotropic case, although more concentrated next to the faces. This rearrangement may explain the better local optimum reached in this orthotropic analysis.

#### Complementary Approach

The orthotropic case using the complementary approach resulted in a compliance of $\SI{0.897}{\newton\!\,\milli\meter}$ after 28 iterations and provided the stiffness distributions in Figure [4](#fig:complementary1){reference-type="ref" reference="fig:complementary1"}. In this case $E_{z}$ does not take the shape of a full tube, losing stiffness next to the top and bottom faces and reaching greater values elsewhere. This can most likely be attributed to the fact that those areas have been stiffened by $G_{yz}$ instead. Additionally, it seems that $E_{x}$ ignores the expected tube-like shape in order to deal with the Poisson effect as efficiently as possible. Once again, the FEs with the most elastic energy are organized in a very similar structure to that of the isotropic case, although wider. ![Optimal distribution for the tube considering the complementary approach in the orthotropic case.](orto_comp_tubo.jpg){#fig:complementary1 width="80%"}

### Comparison

The isotropic and orthotropic direct cases seem to reach a similar solution, judging by the similar stiffness distribution of $E_{z}$, with the compliance attained being just slightly smaller for the orthotropic case. However, the orthotropic complementary case reaches a much lower compliance with a less expected *a priori* distribution.
Lacking experimental results performed on real materials, it is impossible to determine whether such a result would be reproducible on an actual structure, but it holds significant value as a result regardless. The evolution of the compliance with the number of iterations performed for the previous methods, as well as [OptiStruct]{.smallcaps}, can be found in Figure [5](#fig:evolution1){reference-type="ref" reference="fig:evolution1"}. These results show that every approach used outperforms the traditional optimization using a SIMP method with the (hyper-)parameters highlighted before. ![Evolution of the compliance over the number of iterations for Tube.](compliance_tube.pdf){#fig:evolution1 width="70%"} Figure [6](#fig:enercomptube){reference-type="ref" reference="fig:enercomptube"} shows the comparison of the final elastic energy density distribution considering the four configurations used in the calculation (i.e. isotropic direct, isotropic complementary, orthotropic direct, orthotropic complementary). Both isotropic approaches reach practically the same result, but the orthotropic cases release the elastic energy density from the middle region of the tube. This means that the orthotropic methods are more efficient, even more so the complementary approach. ![Elastic energy density distribution in the optimal configuration of the four calculation cases; orthotropic complementary (a), orthotropic direct (b), isotropic complementary (c) and isotropic direct (d).](E_elastica_todos_tubo.jpg){#fig:enercomptube width="100%"}

## Load Case 2: Elbow

The second load case, named Elbow, is characterized by normal and shear stresses along the $x$ and $z-$axes. For this scenario, all the parameters in $\bm{k}$ are set to 20. The representation of this load case is depicted in Figure [1](#fig:num1){reference-type="ref" reference="fig:num1"}b.
This load case is defined by two circles, one on the face perpendicular to the $x-$axis and one on the top face: the former (blue) represents the Dirichlet boundary $\Gamma_D$, where all displacements are set to $\SI{0}{\milli\meter}$; the latter (red) represents part of the Neumann boundary $\Gamma_N$, where a force of $\SI{10000}{\newton}$ in the direction of the $z-$axis has been distributed among all the contained nodes. Analogously to the first load case, the remaining surface of the cube also belongs to the boundary $\Gamma_N$, since the prescribed force on all the nodes contained within is equal to $\SI{0}{\newton}$.

### Isotropic Case

The isotropic case resulted in a compliance of $\SI{23.2}{\newton\!\,\milli\meter}$ after 15 iterations and provided the stiffness distributions in Figure [7](#fig:isotropic2){reference-type="ref" reference="fig:isotropic2"}. ![Optimal Young moduli distribution for the direct and complementary isotropic calculations for the elbow case.](codo_iso.jpg){#fig:isotropic2 width="60%"} The Young modulus distribution seems to reproduce a structure similar to an elbow joint, joining both boundary conditions. The elastic energy distribution adopts a similar shape, where the FEs with the greatest energy are grouped next to the Dirichlet boundary. This case is compared with the two following (orthotropic) analyses.

### Orthotropic Case

The relevant elastic properties, which are therefore highlighted, are $E_{x}$, $E_{z}$ and $G_{xz}$, since these are the directions involved in this load case.

#### Direct Approach

The orthotropic case using the direct approach resulted in a compliance of $\SI{22.4}{\newton\!\,\milli\meter}$ after 12 iterations and provided the stiffness distributions in Figure [8](#fig:direct2){reference-type="ref" reference="fig:direct2"}.
![Optimal distribution for the elbow considering the direct approach in the orthotropic case.](orto_direct_codo.jpg){#fig:direct2 width="80%"} The Young modulus along the $x-$axis, $E_{x}$, develops into a structure similar to an elbow joint -- i.e. similarly to the isotropic case -- although more prominent near the Dirichlet boundary. On the other hand, $E_{z}$ seems to reproduce only the part of the elbow closer to the Neumann boundary, while $G_{xz}$ shows the same behaviour next to the Dirichlet boundary. The FEs with the most elastic energy are organized in a near identical structure to that of the isotropic case.

#### Complementary Approach

The orthotropic case using the complementary approach resulted in a compliance of $\SI{22.8}{\newton\!\,\milli\meter}$ after 14 iterations and provided the stiffness distributions in Figure [9](#fig:complementary2){reference-type="ref" reference="fig:complementary2"}. ![Optimal distribution for the elbow considering the complementary approach in the orthotropic case.](orto_comp_codo.jpg){#fig:complementary2 width="80%"} In this case, the stiffness and energy distributions reached are almost identical to those of the direct approach. The main difference is that $E_{x}$ is now only significant next to the Dirichlet boundary, more precisely its top and bottom parts, indicating that it is most likely trying to stiffen the areas subject to a large bending moment.

### Comparison

All cases seem to reach similar solutions, judging by the stiffness and energy distributions and the very similar final compliances. The main point of interest of this load case is how the orthotropic case is capable of discerning which elastic property is the adequate one to stiffen a certain area and thus minimize the compliance of the structure.
The evolution of the compliance with the number of iterations performed for the previous methods, as well as [OptiStruct]{.smallcaps}, can be found in Figure [10](#fig:evolution2){reference-type="ref" reference="fig:evolution2"}. These results show that every approach used matches the performance of the traditional optimization using a SIMP method. ![Evolution of the compliance over the number of iterations for Elbow.](compliance_elbow.pdf){#fig:evolution2 width="70%"} Figure [11](#fig:enercompelbow){reference-type="ref" reference="fig:enercompelbow"} shows that the elastic energy density distribution in the sample is practically the same in all the calculation cases, which agrees with the behaviour observed in Figure [10](#fig:evolution2){reference-type="ref" reference="fig:evolution2"}. ![Elastic energy density distribution in the optimal configuration of the four calculation cases; orthotropic complementary (a), orthotropic direct (b), isotropic complementary (c) and isotropic direct (d).](E_elastica_todos_codo.jpg){#fig:enercompelbow width="100%"} As may be observed in Figure [10](#fig:evolution2){reference-type="ref" reference="fig:evolution2"}, our model has succeeded in matching the SIMP method, yet no improvement is observed. One of the reasons that may explain this result is the coexistence of loads of two different natures, i.e., FEs subjected to both axial and shear loads. Therefore, despite the fact that the orthotropic case enables the optimization of both longitudinal and shear stiffness separately, it is difficult to reach a trade-off solution, since their mutual effects might not be decoupled and the algorithm struggles to take large steps along the optimization---note the smooth changes in compliance in the three curves.

## Load Case 3: Chair

The third load case, named Chair, is characterized by normal stresses along the $z-$axis and the presence of significant shear stresses.
For this scenario, all the parameters in $\bm{k}$ are set to 30. As seen in Figure [1](#fig:num1){reference-type="ref" reference="fig:num1"}c, this load case is defined by four small circles on the bottom face and a bigger circle on the top face. The small circles (blue) represent the Dirichlet boundary $\Gamma_D$, where all displacements are set to $\SI{0}{\milli\meter}$. The bigger circle (red) represents part of the Neumann boundary $\Gamma_N$, where a force of $\SI{10000}{\newton}$ in the direction of the $z-$axis has been distributed among all the contained nodes.

### Isotropic Case

The isotropic case resulted in a compliance of $\SI{4.40}{\newton\!\,\milli\meter}$ after 20 iterations and provided the stiffness and elastic energy distributions in Figure [12](#fig:isotropic3){reference-type="ref" reference="fig:isotropic3"}. ![Optimal Young moduli distribution for the direct and complementary isotropic calculations for the chair case.](silla_iso.jpg){#fig:isotropic3 width="60%"} The Young modulus distribution seems to reproduce a structure reminiscent of a stool, being stiffest near the surfaces forming the Dirichlet boundary. The elastic energy distribution is nearly identical, both in shape and magnitude.

### Orthotropic Case

In this case, the elastic properties of interest are $E_{z}$, $G_{xy}$ and $G_{yz}$; conclusions on the latter are valid for $G_{xz}$ as well.

#### Direct Approach

The orthotropic case using the direct approach resulted in a compliance of $\SI{3.75}{\newton\!\,\milli\meter}$ after 16 iterations and provided the stiffness and elastic energy distributions in Figure [13](#fig:direct3){reference-type="ref" reference="fig:direct3"}.
![Optimal distribution for the chair considering the direct approach in the orthotropic case.](orto_direct_silla.jpg){#fig:direct3 width="80%"} The result in $E_{z}$ is analogous to the isotropic computation, since it seems to reproduce a structure reminiscent of a stool, although in this case the stiffness is distributed more evenly throughout the structure. However, the 'legs' of the stool are still the stiffest parts, since both $G_{xy}$ and $G_{yz}$ are most prominent there. Regarding the elastic energy distribution along the FEs, those with higher values are organized in a nearly identical structure to that of the isotropic case.

#### Complementary Approach

The orthotropic case using the complementary approach resulted in a compliance of $\SI{1.87}{\newton\!\,\milli\meter}$ after 30 iterations and provided the stiffness and elastic energy distributions in Figure [14](#fig:complementary3){reference-type="ref" reference="fig:complementary3"}. In this case the structure developed by $E_{z}$ is still reminiscent of a stool, but it is significantly less stiff than in the previous two cases. Similarly to the direct approach, both $G_{xy}$ and $G_{yz}$ reach their highest values near the Dirichlet boundary, but in this case the high values of $G_{yz}$ extend all the way up to the Neumann boundary following the contour of the stool. Additionally, it seems some areas have been over-stiffened, especially by both shear moduli. Nevertheless, the FEs with the most elastic energy are still organized in a very similar fashion to the previous two cases. ![Optimal distribution for the chair considering the complementary approach in the orthotropic case.](orto_comp_silla.jpg){#fig:complementary3 width="80%"}

### Comparison

For this load case both the isotropic case and the orthotropic case reach qualitatively similar results for the stiffness distribution, although the latter manages to attain a significantly lower compliance.
However, just like in the first load case, the orthotropic complementary case succeeds in reaching a much lower compliance with a less intuitive distribution. As has already been stated, this is an interesting result, deserving of further analysis in experimental testing. The evolution of the compliance with the number of iterations performed for the previous methods, as well as [OptiStruct]{.smallcaps}, can be found in Figure [15](#fig:evolution3){reference-type="ref" reference="fig:evolution3"}. ![Evolution of the compliance over the number of iterations for Chair.](compliance_chair.pdf){#fig:evolution3 width="70%"} Figure [15](#fig:evolution3){reference-type="ref" reference="fig:evolution3"} shows a clear improvement given by the orthotropic approach, even more so with the complementary case. This is also clear in Figure [16](#fig:enercompchair){reference-type="ref" reference="fig:enercompchair"}, where the isotropic cases have a high density of elastic energy throughout the domain, whereas the orthotropic cases are capable of releasing this elastic energy density in the top part of the sample, with a large difference between the orthotropic complementary approach and the others. ![Elastic energy density distribution in the optimal configuration of the four calculation cases; orthotropic complementary (a), orthotropic direct (b), isotropic complementary (c) and isotropic direct (d).](E_elastica_todos_silla.jpg){#fig:enercompchair width="100%"} The results displayed in Figure [15](#fig:evolution3){reference-type="ref" reference="fig:evolution3"} show significant disparity between the compliance obtained by the different methods, where the orthotropic case using a complementary approach manages to outperform the rest (including SIMP) by a significant margin.

## Load Case 4: Torsion

The fourth load case, named Torsion, is characterized by primarily shear stresses without any significant normal stresses in any direction.
For this scenario, all the parameters in $\bm{k}$ are set to 20. Boundary conditions for this computation are represented in Figure [1](#fig:num1){reference-type="ref" reference="fig:num1"}d. This load case is defined by two circular crowns on the top and bottom faces of the generator cube. The bottom crown (blue) represents the Dirichlet boundary $\Gamma_D$, where all displacements are set to $\SI{0}{\milli\meter}$. The top crown (red) represents part of the Neumann boundary $\Gamma_N$, where a force of $\SI{10000}{\newton}$ has been distributed among all the contained nodes. The direction of the force vector $\mathbf{f}_{\bm k}$ at each node is perpendicular to the vector that goes from the central point of the top face to that node, $\mathbf{r}_{\bm{O,k}}$, and the sense is such that the resulting torque, i.e. the cross product of the aforementioned vector with the force vector, is positive along the $z-$axis: $\left(\mathbf{r}_{\bm{O,k}} \times \mathbf{f}_{\bm k} \right) \cdot \mathbf{u}_{\bm z} > 0$.

### Isotropic Case

The isotropic case resulted in a compliance of $\SI{3.81}{\newton\!\,\milli\meter}$ after 23 iterations and provided the stiffness and elastic energy distributions in Figure [17](#fig:isotropic4){reference-type="ref" reference="fig:isotropic4"}. ![Optimal Young moduli distribution for the direct and complementary isotropic calculations for the torsion case.](torsion_iso.jpg){#fig:isotropic4 width="60%"} The Young modulus acquires the greatest values near the two boundary conditions and in the outermost parts of the cube. Additionally, a clear directionality can be observed when comparing the faces perpendicular to the $x-$axis and those perpendicular to the $y-$axis. This phenomenon occurs because this is a pure shear stress case, so the principal directions for stresses and strains appear at $\ang{45}$, hence the directionality in the stiffness distribution. Contrary to that, the innermost part has reached the minimum possible stiffness values.
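The construction of the torsional nodal forces described above (each force perpendicular to $\mathbf{r}_{\bm{O,k}}$, with positive torque about the $z-$axis) can be sketched in a few lines. The node coordinates and the helper name below are illustrative, not taken from the actual implementation:

```python
import numpy as np

def torsion_nodal_forces(nodes, center, total_force):
    """Distribute `total_force` among crown nodes, each nodal force being
    perpendicular to r_Ok (vector from the face center to the node) and
    oriented so that (r_Ok x f_k) . e_z > 0, i.e. a positive z torque."""
    e_z = np.array([0.0, 0.0, 1.0])
    r = nodes - center                       # r_Ok for every node
    tangents = np.cross(e_z, r)              # perpendicular to r_Ok, positive torque
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    return (total_force / len(nodes)) * tangents

# four sample nodes on a unit-radius crown around the top-face center
theta = np.linspace(0.0, 2.0 * np.pi, 4, endpoint=False)
nodes = np.stack([np.cos(theta), np.sin(theta), np.ones_like(theta)], axis=1)
center = np.array([0.0, 0.0, 1.0])
forces = torsion_nodal_forces(nodes, center, 10000.0)

# every nodal torque is positive along z, as required
torques = np.cross(nodes - center, forces)
assert np.all(torques[:, 2] > 0)
```

Each node then carries a force of magnitude $10000/N$ N, with $N$ the number of crown nodes, so the resultant is a pure torque about the $z-$axis.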
The elastic energy distribution adopts a similar shape, where the FEs with the greatest energy are grouped next to both of the boundaries.

### Orthotropic Case

In this case, the elastic properties of interest are $E_{x}$ -- conclusions on which apply to $E_{y}$ as well -- $G_{xz}$ and $G_{yz}$. The distributions $E_{z}$ and $G_{xy}$ are of lesser significance due to their low contribution to the elastic energy of the structure.

#### Direct Approach

The orthotropic case using the direct approach resulted in a compliance of $\SI{3.11}{\newton\!\,\milli\meter}$ after 23 iterations and provided the stiffness and elastic energy distributions in Figure [18](#fig:direct4){reference-type="ref" reference="fig:direct4"}. ![Optimal distribution for the torsion considering the direct approach in the orthotropic case.](orto_direct_torsion.jpg){#fig:direct4 width="80%"} In this case it seems that the shear moduli $G_{xz}$ and $G_{yz}$ replicate the same structure as in the isotropic case, splitting the total area between the two, each modulus being responsible for stiffening two opposite faces of the cube. However, it is also apparent that the Young modulus $E_{x}$ has severely over-stiffened the cube while developing into a strange structure. The FEs with the most elastic energy are organized in a very similar structure to that of the isotropic case, although more concentrated next to the top and bottom faces.

#### Complementary Approach

The orthotropic case using the complementary approach resulted in a compliance of $\SI{2.87}{\newton\!\,\milli\meter}$ after 23 iterations and provided the stiffness and elastic energy distributions in Figure [19](#fig:complementary4){reference-type="ref" reference="fig:complementary4"}. For this case a very similar stiffness distribution to that of the direct approach has been reached by $G_{xz}$ and $G_{yz}$. Furthermore, $E_{x}$ does not present the same strange behaviour and over-stiffening as in the previous case.
It is instead localised near the top and bottom faces with a very clear directionality, which leads to a better optimal solution. Once again, the FEs with the most elastic energy are organized in a very similar structure to that of the isotropic case, although slightly more concentrated next to the top and bottom faces. ![Optimal distribution for the torsion considering the complementary approach in the orthotropic case.](orto_comp_torsion.jpg){#fig:complementary4 width="80%"}

### Comparison

This load case clearly demonstrates the advantages of orthotropic materials, since the stiffness distribution can be clearly divided into two different regions, depending on which shear modulus is required in each area, and the compliance reached is significantly and consistently lower. Furthermore, it suggests that the complementary approach may be more suitable for avoiding over-stiffening of the less relevant elastic properties. The evolution of the compliance with the number of iterations performed for the previous methods, as well as [OptiStruct]{.smallcaps}, can be found in Figure [20](#fig:evolution4){reference-type="ref" reference="fig:evolution4"}. As one can observe, all the methods presented in this paper, and especially the orthotropic case with a complementary approach, show a clear advantage when compared to SIMP. ![Evolution of the compliance over the number of iterations for Torsion.](compliance_torsion.pdf){#fig:evolution4 width="70%"} Figure [21](#fig:enertorsion){reference-type="ref" reference="fig:enertorsion"} shows a lower elastic energy density in the orthotropic cases, which are very similar to each other. Likewise, the elastic energy density distribution is similar between the isotropic cases. This agrees with the overall behaviour shown in Figure [20](#fig:evolution4){reference-type="ref" reference="fig:evolution4"}.
![Elastic energy density distribution in the optimal configuration of the four calculation cases; orthotropic complementary (a), orthotropic direct (b), isotropic complementary (c) and isotropic direct (d).](E_elastica_todos_torsion.jpg){#fig:enertorsion width="100%"}

## Comparison of all load cases

Table [\[tab:compliance\]](#tab:compliance){reference-type="ref" reference="tab:compliance"} displays the overall results for the compliance obtained in all cases.

| **Compliance \[N mm\]** | Tube | Elbow | Chair | Torsion |
|:--|:-:|:-:|:-:|:-:|
| [OptiStruct]{.smallcaps} | $2.43$ | $23.7$ | $3.53$ | $6.29$ |
| Isotropic | $2.23$ | $23.2$ | $4.40$ | $3.81$ |
| Orthotropic Direct | $2.03$ | $\mathbf{22.4}$ | $3.75$ | $3.11$ |
| Orthotropic Complementary | $\mathbf{0.897}$ | $22.8$ | $\mathbf{1.87}$ | $\mathbf{2.87}$ |

In nearly all cases, both orthotropic results seem to be superior to those attained with the isotropic case or using [OptiStruct]{.smallcaps}. The only instance where this statement does not hold true is for the third load case (Chair) when using the direct approach for the orthotropic case. Regardless of that, the orthotropic case, either with a direct or a complementary approach, manages to acquire the lowest compliance in all load cases. Using these criteria, and taking into account that the values obtained have been set up to be on the conservative side, it seems clear that using stiffness-based methods may prove to be advantageous compared to a traditional SIMP in the scenarios herein presented.

# Concluding remarks {#Sec:concluding_remarks}

We have successfully developed a formulation for linear orthotropic materials within a topology optimization framework through the development of an algorithm that homogenizes the strain level by performing an optimization on the elastic parameters of each FE, rather than using density-based methods which operate on an intermediate variable such as the volume of said FE.
Only one parameter (or set of parameters with the same value) $\bm{k}$ has to be fixed, and the energy constraints are ensured not to be violated. This paper represents an extension of a previous stiffness-based work valid for isotropic materials [@saucedo2023updated; @ben2023topology]. The proposed generalization to orthotropic materials poses several advantages, most notably a higher versatility than its isotropic counterpart, given that the design space is six times larger for the former: orthotropic materials present nine elastic properties, although in this work only six are considered (the longitudinal and shear moduli), with the three Poisson ratios fixed. Therefore, all of this leads to a procedure which is able to outperform the isotropic formulation. It also improves the results provided by an unpenalised SIMP implemented in the commercial software [OptiStruct]{.smallcaps}, with a similar usage of material. Note that the pursued objectives are different: whereas SIMP minimizes the compliance under a volume constraint, our method aims at minimizing the standard deviation of the strain variables $\bm{\mathcal{H}}^{\bm{\varepsilon}}$. However, it is worth noting the similarities between the final shapes. As has been highlighted, two approaches have been proposed to this end: the so-called direct approach, whose formulation is based on strains, and the complementary approach, developed within a stress-based framework. In both methods, a volumetric-deviatoric energy split is first performed; then those terms are likewise split such that the contributions to energy and the effects of each of the six elastic properties are decoupled, thus enabling the update of all properties at the same time in every iteration.
To this effect, 6 *ad-hoc* update parameters $\alpha$ are defined through the strain elastic energy density variables of these decoupled terms, in a way that convergence to a local minimum of compliance is ensured, given by the application of a gradient-based scheme to the standard deviation of such variables. We have shown that the complementary formulation outperforms the direct one -- as was originally highlighted by Suzuki and Kikuchi [@suzuki1991homogenization], and Díaz and Bendsøe [@diaz1992shape] -- since the complementary energy is more explicit in the elastic parameters, thus yielding better results for the load cases herein presented. Regarding the improvements in results provided by this method, it has been observed for all the load cases that the orthotropic formulation achieves lower-compliance structures than its analogous isotropic approach, with quite similar final shapes in terms of elastic energy. This is due to the previously discussed versatility in the design space. This higher flexibility, a result of a wider design space, enables new ways of distributing the energy contribution across the domain -- as Figures [6](#fig:enercomptube){reference-type="ref" reference="fig:enercomptube"}, [11](#fig:enercompelbow){reference-type="ref" reference="fig:enercompelbow"}, [16](#fig:enercompchair){reference-type="ref" reference="fig:enercompchair"} and [21](#fig:enertorsion){reference-type="ref" reference="fig:enertorsion"} show -- hence the improvement over the isotropic case. Moreover, by improving on the isotropic results, we automatically outperform SIMP as well, since the isotropic formulation at least equalled this method with the same material usage [@ben2023topology]. Especially noteworthy is the fourth load case (torsion), in which a 54% improvement in compliance is achieved in comparable computation time using the complementary orthotropic optimization with respect to the SIMP method.
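As a quick sanity check, the quoted reduction for the torsion case can be reproduced from the compliance values reported in Table [\[tab:compliance\]](#tab:compliance){reference-type="ref" reference="tab:compliance"}; this minimal sketch uses only the tabulated numbers:

```python
# compliance [N mm] per method: (Tube, Elbow, Chair, Torsion)
compliance = {
    "OptiStruct":                (2.43, 23.7, 3.53, 6.29),
    "Isotropic":                 (2.23, 23.2, 4.40, 3.81),
    "Orthotropic Direct":        (2.03, 22.4, 3.75, 3.11),
    "Orthotropic Complementary": (0.897, 22.8, 1.87, 2.87),
}

simp = compliance["OptiStruct"][3]              # SIMP result for Torsion
ortho = compliance["Orthotropic Complementary"][3]
improvement = 100.0 * (1.0 - ortho / simp)      # relative reduction in %
print(round(improvement))                       # → 54
```

The same computation applied to the other load cases gives the respective improvements of the best-performing method over the unpenalised SIMP baseline.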
We recall again that the pursued objectives are different. New horizons with more structurally meaningful objective functions that do not rely on constraint impositions can be exploited through proper structural optimization frameworks. This will in turn bring the ability to face the simulation, design, or optimization challenges that new families of materials require, such as functionally graded materials or mechanical metamaterials. Our method aims at this objective.

# Acknowledgements

![image](european-flag.pdf){width="\\linewidth"} This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101007815.

[^1]: Corresponding author\ *Email address*: `i.binsenser@upm.es` (I. Ben-Yelun).
--- abstract: | We prove the local well-posedness of the incompressible current-vortex sheet problems in standard Sobolev spaces under the surface tension or the Syrovatskij condition, which shows that both capillary forces and large tangential magnetic fields can stabilize the motion of current-vortex sheets. Furthermore, under the Syrovatskij condition, the vanishing surface tension limit is established for the motion of current-vortex sheets. These results hold without assuming that the interface separating the two plasmas is a graph. address: - The Institute of Mathematical Sciences, the Chinese University of Hong Kong, Hong Kong. - The Institute of Mathematical Sciences, the Chinese University of Hong Kong, Hong Kong. author: - Sicheng Liu - Zhouping Xin bibliography: - ref.bib title: Local Well-posedness of the Incompressible Current-Vortex Sheet Problems --- [^1] # Introduction ## Formulations of the problems We consider the free interface problems for the ideal incompressible magnetohydrodynamics (MHD) equations, which describe the motions of two plasmas separated by a free interface (current-vortex sheet problems). If we denote by ${\Omega}_t^\pm \subset \mathbb{R}^3$ the fluid domains at time $t$ occupied by the two kinds of plasmas respectively, the ideal incompressible MHD system can be written as $$(\mathrm{MHD})\quad \begin{cases*} \rho_\pm \qty(\partial_t {\vb{v}}_\pm + ({\vb{v}}_\pm \vdot \nabla) {\vb{v}}_\pm) + \nabla p^\pm = ({\vb{h}}_\pm \vdot \nabla) {\vb{h}}_\pm &in $\Omega_t^\pm$, [\[eqn Dtv\]]{#eqn Dtv label="eqn Dtv"}\\ \partial_t {\vb{h}}_\pm + ({\vb{v}}_\pm \vdot \nabla) {\vb{h}}_\pm = ({\vb{h}}_\pm \vdot \nabla) {\vb{v}}_\pm &in $\Omega_t^\pm$, [\[eqn Dth\]]{#eqn Dth label="eqn Dth"}\\ \div {\vb{v}}_\pm = 0 = \div {\vb{h}}_\pm &in $\Omega_t^\pm$; [\[divfree\]]{#divfree label="divfree"} \end{cases*}$$ here $\rho_\pm, {\vb{v}}_\pm, {\vb{h}}_\pm, p^\pm$ are the densities, velocities, magnetic fields and effective pressures of the two plasmas respectively (c.f. [@Landau-Lifshitz-Vol8] or [@book_Davidson]).
The boundary conditions are: $$(\mathrm{BC})\quad \begin{cases*} {\vb{v}}_+ \vdot {\vb{N}}_+ = {\vb{v}}_- \vdot {\vb{N}}_+ =: \theta &on ${\Gamma_t}$, [\[eqn bdry v\]]{#eqn bdry v label="eqn bdry v"}\\ \llbracket p \rrbracket \coloneqq p^+ - p^- = \alpha^2 \kappa_+ &on ${\Gamma_t}$, [\[jump p\]]{#jump p label="jump p"}\\ {\vb{h}}_+ \vdot {\vb{N}}_+ = {\vb{h}}_- \vdot {\vb{N}}_+ = 0 &on ${\Gamma_t}$, [\[bdry mag\]]{#bdry mag label="bdry mag"}\\ {\vb{v}}_- \vdot \widetilde{{\vb{N}}} = 0 &on $\partial\Omega$, [\[bc out v\]]{#bc out v label="bc out v"}\\ {\vb{h}}_- \vdot \widetilde{{\vb{N}}} = 0 &on $\partial{\Omega}$, [\[bc out h\]]{#bc out h label="bc out h"} \end{cases*}$$ where $\kappa_+$ is the mean curvature of ${\Gamma_t}$ with respect to ${\vb{N}}_+$, and $0 \le \alpha \le 1$ is a non-negative constant representing the surface tension coefficient. Here $\Omega \subset \mathbb{R}^3$ is a bounded domain with a fixed boundary $\partial{\Omega}$, and ${\Omega}= {\Omega}_t^+ \cup {\Gamma_t}\cup {\Omega}_t^-$, ${\Gamma_t}= \partial{\Omega}_t^+$ is the moving interface with normal speed $\theta$, and $\partial{\Omega}_t^- = \partial{\Omega}\cup {\Gamma_t}$. Denote by ${\vb{N}}_+$ the outward unit normal of $\partial{\Omega}_t^+ = {\Gamma_t}$, and $\widetilde{{\vb{N}}}$ the outward unit normal of $\partial{\Omega}$. Assume further that ${\Gamma_t}\subset {\Omega}$, ${\Gamma_t}\cap \partial{\Omega}= \varnothing$, and ${\Gamma_t}$ separates ${\Omega}$ into two disjoint simply-connected domains ${\Omega}_t^\pm$. The equations [\[eqn Dtv\]](#eqn Dtv){reference-type="eqref" reference="eqn Dtv"} are the Euler equations in hydrodynamics, for which the Lorentz forces serve as the exterior body forces acting on the plasmas. Note that the displacement currents are neglected, due to the fact that the scale of the plasma velocities is much less than the speed of light. The equations [\[eqn Dth\]](#eqn Dth){reference-type="eqref" reference="eqn Dth"} are the combination of Faraday's Law and Ohm's Law, and [\[divfree\]](#divfree){reference-type="eqref" reference="divfree"} are the incompressibility of the plasmas and Gauss's Law for magnetism.
The boundary condition [\[eqn bdry v\]](#eqn bdry v){reference-type="eqref" reference="eqn bdry v"} is also known as the kinematic boundary condition, which means that the free interface evolves with the two plasmas. [\[jump p\]](#jump p){reference-type="eqref" reference="jump p"} is derived from the balance of momentum between the two sides of the interface, and [\[bdry mag\]](#bdry mag){reference-type="eqref" reference="bdry mag"} follows from Gauss's Law for magnetism and the physical properties of the materials. [\[bc out v\]](#bc out v){reference-type="eqref" reference="bc out v"} means that the outer plasma cannot penetrate the solid boundary, and [\[bc out h\]](#bc out h){reference-type="eqref" reference="bc out h"} follows from the assumption that the solid boundary is a perfect conductor.

## Physical background

The motion of electrically conductive fluids (e.g., plasmas, liquid metals, salt water, and electrolytes) under the influence of magnetic fields is governed by the MHD systems. The corresponding mathematical theories have numerous significant applications (e.g., drug targeting, earthquakes, sensors, and astrophysics). One of the fundamental differences between MHD and hydrodynamics is that the magnetic fields can induce currents in a moving conductive fluid, and these currents in turn polarize the fluid and change the magnetic and velocity fields in a reciprocal manner. The set of equations is a combination of those in fluid dynamics and electrodynamics, and these equations must be solved concurrently (c.f. [@Landau-Lifshitz-Vol8; @book_Davidson]). Mathematically speaking, the effect of the magnetic field is governed by the Maxwell equations and acts as a Lorentz force on the Euler system for the plasma, which can induce many nontrivial interactions and lead to rich phenomena. The current-vortex sheet problems describe the plasma motion in a domain whose boundary evolves with the plasma itself.
Such issues are significant not only because they describe numerous physical phenomena and thus have significant applications in science and technology, but also because such studies give rise to profound and challenging theoretical interdisciplinary problems involving partial differential equations, differential geometry, analysis, mathematical physics, and dynamical systems.

## Previous works

In the absence of magnetic fields, the equations reduce to the incompressible Euler system. The free boundary problems in hydrodynamics have garnered considerable interest from the mathematical community. Although water waves are ubiquitous in reality and exhibit a vast diversity of phenomena, the corresponding mathematical theories are still in their infancy, because the full equations describing the motion of the waves are famously difficult to handle due to the free boundary and intrinsic nonlinearity. We refer to the works by Wu [@Wu1997; @Wu1999] and Alazard-Burq-Zuily [@Alazard-Burq-Zuily2014] for the local well-posedness of irrotational water wave problems. When the vorticity of the fluid flow is non-zero, one can refer to Christodoulou-Lindblad [@Christodoulou-Lindblad2000], Lindblad [@Lindblad2003; @Lindblad2005], Coutand-Shkoller [@Coutand-Shkoller2007], Cheng-Coutand-Shkoller [@Cheng-Coutand-Shkoller2008], Zhang-Zhang [@Zhang-Zhang2008], and Shatah-Zeng [@Shatah-Zeng2008-Geo; @Shatah-Zeng2008-vortex; @Shatah-Zeng2011] for the local well-posedness of the water wave and vortex sheet problems. In contrast to the long history of the study on the water wave problems, the free-interface problems for ideal MHD equations have been studied only in recent decades. Owing to the strong coupling of the magnetic and velocity fields, it is necessary to deal with multiple hyperbolic systems simultaneously, making it difficult to establish the nonlinear well-posedness theories. In particular, how magnetic fields affect the dynamics of a plasma is an important issue.
As most fluids are electrically conductive and magnetic fields are ubiquitous, the MHD model is certainly an important physical one, with similar significance as the Euler or Navier-Stokes ones. When the effect of magnetic fields is not negligible, it is significant to study the dynamics of conducting fluids. Here are some representative works on the free interface problems for the ideal incompressible MHD. A current-vortex sheet is a hypersurface evolving with the conductive fluids, along which the magnetic and velocity fields possess tangential jumps. This sort of problem describes the motion of two conducting fluids with a free interface separating them. Around the middle of the twentieth century, Syrovatskij [@Syrovatskij] and Axford [@axford1962note] discovered the stability requirements for planar incompressible current-vortex sheets and demonstrated that magnetic fields have a stabilizing influence on the plasma dynamics. The Syrovatskij stability conditions are (see Landau-Lifshitz [@Landau-Lifshitz-Vol8 §71]): $$\label{Syr 1"} \rho_+ \abs{{\vb{h}}_+}^2 + \rho_- \abs{{\vb{h}}_-}^2 > \frac{\rho_+ \rho_-}{\rho_+ + \rho_-}\abs{\llbracket{\vb{v}}\rrbracket}^2,$$ $$\label{Syr 2"} \qty(\rho_+ + \rho_-)\abs{{\vb{h}}_+ \cp {\vb{h}}_-}^2 \ge \rho_+ \abs{{\vb{h}}_+ \cp \llbracket{\vb{v}}\rrbracket}^2 + \rho_-\abs{{\vb{h}}_-\cp\llbracket{\vb{v}}\rrbracket}^2,$$ where $\llbracket{\vb{v}}\rrbracket \coloneqq {\vb{v}}_+ - {\vb{v}}_-$ is the velocity jump.
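As a concrete illustration, the two conditions are straightforward to evaluate for given tangential vectors; the sample values below are purely illustrative and show how large non-parallel magnetic fields accommodate a moderate velocity jump:

```python
import numpy as np

def syrovatskij(rho_p, rho_m, h_p, h_m, v_jump):
    """Evaluate the two Syrovatskij stability conditions for tangential
    3-vectors h_p, h_m and velocity jump v_jump; returns two booleans."""
    sq = lambda w: float(np.dot(w, w))
    cond1 = rho_p * sq(h_p) + rho_m * sq(h_m) > \
        rho_p * rho_m / (rho_p + rho_m) * sq(v_jump)
    cond2 = (rho_p + rho_m) * sq(np.cross(h_p, h_m)) >= \
        rho_p * sq(np.cross(h_p, v_jump)) + rho_m * sq(np.cross(h_m, v_jump))
    return cond1, cond2

# large non-parallel tangential magnetic fields stabilize a moderate jump
print(syrovatskij(1.0, 1.0,
                  np.array([3.0, 0.0, 0.0]),
                  np.array([0.0, 3.0, 0.0]),
                  np.array([1.0, 1.0, 0.0])))   # → (True, True)
```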
If the current-vortex sheet is assumed to be the graph of a function, there are some studies on the dynamics: Trakhinin [@Trakinin2005] proved the a priori estimate for the linearized equations under a strong stability condition: $$\label{strong stabilibty"} \abs{{\vb{h}}_+ \cp {\vb{h}}_-} > \max\qty{\abs{{\vb{h}}_+ \cp \llbracket{\vb{v}}\rrbracket}, \abs{{\vb{h}}_-\cp\llbracket{\vb{v}}\rrbracket}}.$$ Coulombel-Morando-Secchi-Trebeschi [@Coulombel-Morando-Secchi-Trebischi2012] showed the a priori estimate without loss of derivatives for the non-linear problem under [\[strong stabilibty\"\]](#strong stabilibty"){reference-type="eqref" reference="strong stabilibty\""}. If the original Syrovatskij condition [\[Syr 2\"\]](#Syr 2"){reference-type="eqref" reference="Syr 2\""} were replaced by the following strict one: $$\label{Syro 3"} \qty(\rho_+ + \rho_-)\abs{{\vb{h}}_+ \cp {\vb{h}}_-}^2 > \rho_+ \abs{{\vb{h}}_+ \cp \llbracket{\vb{v}}\rrbracket}^2 + \rho_-\abs{{\vb{h}}_-\cp\llbracket{\vb{v}}\rrbracket}^2,$$ from which [\[Syr 1\"\]](#Syr 1"){reference-type="eqref" reference="Syr 1\""} follows, Morando-Trakhinin-Trebeschi [@Morando-Trakhinin-Trebeschi2008] derived the a priori estimates for the linearized system with loss of derivatives. The nonlinear local well-posedness result under [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} was first proven by Sun-Wang-Zhang [@Sun-Wang-Zhang2018]. The above works demonstrate that the strict Syrovatskij condition [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} indeed has a nonlinear stabilizing effect on the free interface (at least for a graph surface in a short time period), in contrast to the Kelvin-Helmholtz instability for pure-fluid vortex-sheet issues due to the lack of surface tension (c.f. [@Ebin1988] and [@Majda-Bertozzi2002 Chapters 9 & 11] for more detailed discussions). 
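The implication mentioned above, namely that the strict condition implies [\[Syr 1\"\]](#Syr 1"){reference-type="eqref" reference="Syr 1\""}, can be verified by a short computation. The following is a sketch, assuming that ${\vb{h}}_+$ and ${\vb{h}}_-$ are linearly independent in the tangent plane (which the strict inequality forces pointwise):

```latex
% Since the normal components agree, the jump [[v]] is tangential, so one may write
\llbracket{\vb{v}}\rrbracket = a\, {\vb{h}}_+ + b\, {\vb{h}}_-, \qquad
{\vb{h}}_+ \cp \llbracket{\vb{v}}\rrbracket = b\, ({\vb{h}}_+ \cp {\vb{h}}_-), \qquad
{\vb{h}}_- \cp \llbracket{\vb{v}}\rrbracket = -a\, ({\vb{h}}_+ \cp {\vb{h}}_-).
% Dividing the strict condition by |h_+ x h_-|^2 > 0 yields
\rho_+ b^2 + \rho_- a^2 < \rho_+ + \rho_-.
% By the triangle and Cauchy--Schwarz inequalities,
\abs{\llbracket{\vb{v}}\rrbracket}^2
  \le \qty(\abs{a}\abs{{\vb{h}}_+} + \abs{b}\abs{{\vb{h}}_-})^2
  \le \qty(\rho_- a^2 + \rho_+ b^2)
      \qty(\frac{\abs{{\vb{h}}_+}^2}{\rho_-} + \frac{\abs{{\vb{h}}_-}^2}{\rho_+}),
% hence
\frac{\rho_+ \rho_-}{\rho_+ + \rho_-} \abs{\llbracket{\vb{v}}\rrbracket}^2
  \le \frac{\rho_- a^2 + \rho_+ b^2}{\rho_+ + \rho_-}
      \qty(\rho_+ \abs{{\vb{h}}_+}^2 + \rho_- \abs{{\vb{h}}_-}^2)
  < \rho_+ \abs{{\vb{h}}_+}^2 + \rho_- \abs{{\vb{h}}_-}^2.
```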
Recently, the methods in [@Sun-Wang-Zhang2018] were also applied to the case with surface tension; see [@Li-Li2022]. For the plasma-vacuum interface problems, if the magnetic field is parallel to the free boundary and the one in the vacuum is vanishing, we refer to Hao-Luo [@Hao-Luo2014; @Hao-Luo2021], Gu-Wang [@Gu-Wang2019], and Gu-Luo-Zhang [@Gu-Luo-Zhang2021; @Gu-Luo-Zhang22] for the local well-posedness. If the magnetic field in the vacuum is nontrivial, one can see Morando-Trakhinin-Trebeschi [@Morando-Trakhinin-Trebeschi2014] and Sun-Wang-Zhang [@Sun-Wang-Zhang2019] for the local well-posedness under a stability condition. Hao-Luo [@Hao-Luo2020] also showed the ill-posedness for the plasma-vacuum problems without the Rayleigh-Taylor sign condition, as indicated by Ebin [@Ebin1987] for the pure fluid-vacuum case. Concerning the global well-posedness for free-boundary incompressible inviscid MHD equations, Wang-Xin [@Wang-Xin2021] established it for both the plasma-vacuum and the plasma-plasma problems. Although these advances are significant, all of the aforementioned nonlinear local well-posedness results for MHD problems were founded on a crucial premise that the free interface is a graph. However, in reality, the moving surface cannot be represented simply by a graph in many significant cases. Removing these limitations seems quite challenging, even for the pure fluid problems (c.f. [@Coutand-Shkoller2007; @Shatah-Zeng2011]). Using the partition of unity to characterize a general interface appears feasible, but the analysis of the transition maps is rather involved due to the intense interactions between the plasmas in different local charts. In view of the strong coupling of the magnetic and velocity fields (one direct consequence of which is that the vorticity transport formula will change), MHD problems must be analyzed with greater care than the pure fluid ones.
For example, one of the difficulties is that the estimates of the velocity and magnetic fields must be derived simultaneously, which is much more complex than in the case of pure fluids. More significantly, strategies for the local dynamics of a general current-vortex sheet will be indispensable to the study of long-time dynamics, particularly the finite-time formation of splash singularities from a generic perturbation of a current-vortex sheet (even of a graph type). This paper aims to establish the nonlinear local well-posedness for the current-vortex sheet problems in standard Sobolev spaces, without the graph assumption on the free interface. Namely, we show that, for more general physical models, both capillary forces and large tangential magnetic fields (the Syrovatskij condition) can stabilize the motion of current-vortex sheets. In particular, our results can be applied to study the dynamics of free interfaces with turning-over points, and may be useful for constructing splash singularities.

# Main Results

For convenience, we shall use the notation $f \coloneqq f_+ \mathbbm{1}_{{\Omega}_t^+} + f_- \mathbbm{1}_{{\Omega}_t^-}$ to represent functions $f_\pm : {\Omega}_t^\pm \to \mathbb{R}$.

## The stabilization effect of the surface tension

If there is surface tension on the free interface, the following local well-posedness result holds:

**Theorem 1** ($\alpha = 1$ case). *Suppose that ${\Omega}\subset \mathbb{R}^3$ is a bounded domain with a $C^1 \cap H^{{\frac{3}{2}k}+ 1}$ boundary, and $k \ge 2$ is an integer.
Given the initial hypersurface $\Gamma_0 \in H^{{\frac{3}{2}k}+1}$ and two solenoidal vector fields ${\vb{v}}_0, {\vb{h}}_0 \in H^{{\frac{3}{2}k}}({\Omega}\setminus\Gamma_0)$, if $\Gamma_0$ separates ${\Omega}$ into two disjoint simply-connected parts, then there exists a constant $T > 0$ so that the current-vortex sheet problem (MHD)-(BC) has a solution in the space: $${\Gamma_t}\in C^0\qty([0, T]; H^{{\frac{3}{2}k}+1}) \qand {\vb{v}}, {\vb{h}}\in C^0\qty([0, T]; H^{{\frac{3}{2}k}}({\Omega\setminus{\Gamma_t}})).$$ Furthermore, if $k \ge 3$, the solution is unique and it depends on the initial data continuously, i.e. the problem (MHD)-(BC) is locally well-posed.*

## The stabilization effect of the Syrovatskij condition

In the absence of surface tension, we show that the Syrovatskij condition [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} can stabilize the motion of the current-vortex sheet, at least for a short time period, without the graph assumption on the interface. Due to the hairy ball theorem, [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} cannot hold on a hypersurface homeomorphic to a sphere. Therefore, we assume that ${\Omega}= \mathbb{T}^2 \times (-1, 1)$, and $\Gamma_t$ is a $C^1 \cap H^2$ hypersurface diffeomorphic to $\mathbb{T}^2$ (e.g. a surface with shape \"$\mho$\" or \"$Z$\", or a portion of sea waves), which separates $\Omega$ into the upper and the lower parts.
Accordingly, the boundary conditions [\[eqn bdry v\]](#eqn bdry v){reference-type="eqref" reference="eqn bdry v"}-[\[bc out h\]](#bc out h){reference-type="eqref" reference="bc out h"} are modified to $$\label{key} (\mathrm{BC'})\quad \begin{cases*} {\vb{v}}_+ \cdot {\vb{N}}_+ = {\vb{v}}_- \cdot {\vb{N}}_+ =: \theta &on $ {\Gamma_t}$,\\ \llbracket p \rrbracket \coloneqq p^+ - p^- = \alpha^2\kappa_+ &on $ {\Gamma_t}$, \\ {\vb{h}}_+ \cdot {\vb{N}}_+ = {\vb{h}}_- \cdot {\vb{N}}_+ = 0 &on $ {\Gamma_t}$,\\ {\vb{v}}_\pm \cdot \widetilde{{\vb{N}}}_\pm = 0 &on $ \mathbb{T}^2 \times\{\pm1\} $,\\ {\vb{h}}_\pm \cdot \widetilde{{\vb{N}}}_\pm = 0 &on $ \mathbb{T}^2 \times\{\pm1\} $, \end{cases*}$$ where $\widetilde{{\vb{N}}}_\pm \equiv \pm \vb{e}_3$ are the outward unit normals of $\mathbb{T}^2 \times \{\pm1\}$. By Lemma [Lemma 24](#lem syro equiv 1){reference-type="ref" reference="lem syro equiv 1"}, the strict Syrovatskij condition [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} implies $$\begin{split} 0 < \Upsilon\qty({\vb{h}}_{\pm}, \llbracket{\vb{v}}\rrbracket) &\coloneqq \inf_{\substack{\vb{a} \in \mathrm{T}\Gamma_t; \\ \abs{\vb{a}}=1}} \inf_{z \in \Gamma_t} \frac{\rho_+}{\rho_+ + \rho_-}\abs{\vb{a} \vdot {\vb{h}}_+(z)}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{\vb{a} \vdot {\vb{h}}_-(z)}^2 \\ &\hspace{6em} - \frac{\rho_+ \rho_-}{(\rho_+ + \rho_-)^2}\abs{\vb{a} \vdot \llbracket{\vb{v}}\rrbracket(z)}^2. \end{split}$$ We prove the following two theorems under the Syrovatskij condition [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""}: **Theorem 2** ($\alpha = 0$ case). *Let $k \ge 3$ be an integer and ${\Omega}\coloneqq \mathbb{T}^2 \times (-1, 1)$. Suppose that $\Gamma_0$ is an $H^{{\frac{3}{2}k}+\frac{1}{2}}$ hypersurface diffeomorphic to $\mathbb{T}^2$ separating ${\Omega}$ into two parts (the upper one and the lower one).
Assume that ${\vb{v}}_0, {\vb{h}}_0 \in H^{{\frac{3}{2}k}}({\Omega}\setminus\Gamma_0)$ are two $H^{{\frac{3}{2}k}}$ solenoidal vector fields satisfying $$\begin{split} \Upsilon\qty({\vb{h}}_{0\pm}, \llbracket{\vb{v}}_0\rrbracket) \ge 2\mathfrak{s}_0 > 0. \end{split}$$ If $\Gamma_0$ does not touch the top or the bottom boundary, namely, for some positive constant $c_0$, $$\label{key} \mathop{\mathrm{\mathrm{dist}}}\qty(\Gamma_0, \mathbb{T}^2 \times \{\pm1\}) \ge 2c_0 > 0;$$ then there exists a constant $T > 0$, so that the current-vortex sheet problem (MHD)-(BC') admits a unique solution in the space $$\label{key} {\Gamma_t}\in C^0\qty([0, T]; H^{{\frac{3}{2}k}+\frac{1}{2}}) \qand {\vb{v}}, {\vb{h}}\in C^0\qty([0, T]; H^{{\frac{3}{2}k}}({\Omega\setminus{\Gamma_t}})).$$ Furthermore, for $0 \le t \le T$, the solution $({\Gamma_t}, {\vb{v}}, {\vb{h}})$ satisfies $$\label{key} \Upsilon\qty({\vb{h}}_\pm, \llbracket{\vb{v}}\rrbracket) \ge \mathfrak{s}_0 \qand \mathop{\mathrm{\mathrm{dist}}}\qty({\Gamma_t}, \mathbb{T}^2 \times \{\pm1\}) \ge c_0,$$ and it depends on the initial data continuously. That is, the problem (MHD)-(BC') is locally well-posed.* Furthermore, the following result on the vanishing surface tension limit holds: **Theorem 3** ($\alpha \to 0$ limit). *Assume that $0 \le \alpha \le 1, k \ge 3$ and ${\Omega}= \mathbb{T}^2 \times (-1, 1)$. Fix $\Gamma_0 \in H^{{\frac{3}{2}k}+1}$ diffeomorphic to $\mathbb{T}^2$ with $\mathop{\mathrm{\mathrm{dist}}}(\Gamma_0, \mathbb{T}^2\times\{\pm1\}) \ge 2 c_0 > 0$, and solenoidal vector fields ${\vb{v}}_{0\pm}, {\vb{h}}_{0\pm} \in H^{{\frac{3}{2}k}}({\Omega}_0^\pm)$ satisfying $\Upsilon\qty({\vb{h}}_{0\pm}, \llbracket{\vb{v}}_0\rrbracket) \ge 2\mathfrak{s}_0 > 0$. Then, there is a constant $T > 0$, independent of $\alpha$, so that the problem, (MHD)-(BC'), is well-posed for $t \in [0, T]$. 
Furthermore, as $\alpha \to 0$, by passing to a subsequence, the solution to (MHD)-(BC') with surface tension converges weakly to a solution to (MHD)-(BC') with $\alpha = 0$ in the space ${\Gamma_t}\in H^{{\frac{3}{2}k}+\frac{1}{2}}$ and ${\vb{v}}_\pm, {\vb{h}}_\pm \in H^{{\frac{3}{2}k}}({\Omega}_t^\pm)$.* ## Main ideas Inspired by the work of Shatah-Zeng [@Shatah-Zeng2011], we choose a geometric approach to analyze the problems. First of all, a reference hypersurface ${\Gamma_\ast}$ diffeomorphic to the initial one is taken, and one may choose a transversal vector field ${\vb*{\nu}}$ on the reference hypersurface which has the same regularity and is close to the unit normal in the $C^1$-topology. Therefore, any hypersurface near the reference one can be expressed uniquely by the height function defined on ${\Gamma_\ast}$ via: $$\Phi_\Gamma (p) = p + \gamma_\Gamma (p) {\vb*{\nu}}(p) \qq{for} p \in {\Gamma_\ast}.$$ Every hypersurface $\Gamma$ in a small $C^1$-neighborhood of ${\Gamma_\ast}$ is associated to a unique function $\gamma_\Gamma$, and the mean curvature $\kappa$ of $\Gamma$ can be expressed by $\gamma_\Gamma$. Conversely, by taking an auxiliary constant $a > 0$ determined by ${\Gamma_\ast}$ and ${\vb*{\nu}}$, the height function $\gamma_\Gamma$ can be determined uniquely by the function $\kappa \circ \Phi_\Gamma + a^2 \gamma_\Gamma : {\Gamma_\ast}\to \mathbb{R}$, whose leading order term is the mean curvature $\kappa$ of $\Gamma$. Hence, the analysis of the evolution equation for the mean curvature $\kappa$ can determine the evolution of the free hypersurface, which is the crucial step in solving such free interface problems. On the other hand, any vector field defined in a simply-connected domain is uniquely determined by its divergence, curl, and normal projection on the boundary (c.f. [@Cheng-Shkoller2017]).
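The div-curl reduction just mentioned can be illustrated numerically. The following sketch works on the flat torus $\mathbb{T}^2$ (so the boundary datum of the text is absent and only the mean must be fixed); the stream function, grid size, and spectral discretization are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy div-curl recovery on the flat torus T^2: a mean-free, divergence-free
# field is recovered from its scalar curl alone via the Fourier-side
# Biot-Savart law psi_hat = w_hat / |k|^2.  (The paper's domains have
# boundaries, where the normal trace enters as additional data.)
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing="ij")

def ddx(f, K):  # spectral derivative in the direction with frequencies K
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

# Divergence-free field from a stream function: v = (d_y psi, -d_x psi).
psi = np.sin(X) * np.cos(2 * Y)
v1, v2 = ddx(psi, KY), -ddx(psi, KX)
w = ddx(v2, KX) - ddx(v1, KY)      # scalar vorticity w = curl v = -Lap psi

# Reconstruction: solve -Lap psi = w, then differentiate the stream function.
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                     # avoid 0/0; the zero mode is fixed below
psi_hat = np.fft.fft2(w) / K2
psi_hat[0, 0] = 0.0                # normalize the (lost) mean of psi
psi_rec = np.real(np.fft.ifft2(psi_hat))
u1, u2 = ddx(psi_rec, KY), -ddx(psi_rec, KX)

err = max(np.abs(u1 - v1).max(), np.abs(u2 - v2).max())
print(f"reconstruction error: {err:.2e}")
```

In the bounded domains of the paper the same principle holds, but the harmonic (curl-free, divergence-free) part is pinned down by the normal trace instead of the mean.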
The evolution equation for the free interface yields the normal part of the velocity on the boundary, and the normal projection of the magnetic field is zero (for the current-vortex sheet problems considered here); thus, the incompressibility of the plasma ($\div {\vb{v}}= 0$) and Gauss's law for magnetism ($\div {\vb{h}}= 0$) imply that one only needs to determine the current (${\vb{j}}\coloneqq \curl {\vb{h}}$) and vorticity (${\vb*{\omega}}\coloneqq \curl {\vb{v}}$). The evolution equations for ${\vb*{\omega}}$ and ${\vb{j}}$ are given respectively as $$\begin{cases*} \partial_t {\vb*{\omega}}+ ({\vb{v}}\vdot \grad){\vb*{\omega}}- ({\vb{h}}\vdot \grad){\vb{j}}= ({\vb*{\omega}}\vdot \grad){\vb{v}}- ({\vb{j}}\vdot \grad) {\vb{h}}, \\ \partial_t{\vb{j}}+ ({\vb{v}}\vdot \grad){\vb{j}}- ({\vb{h}}\vdot \grad) {\vb*{\omega}}= ({\vb{j}}\vdot \grad){\vb{v}}- ({\vb*{\omega}}\vdot \grad) {\vb{h}}- 2\tr(\grad {\vb{v}}\cp \grad {\vb{h}}). \end{cases*}$$ Since ${\vb{h}}$ is parallel to the free hypersurface, as indicated in [@Sun-Wang-Zhang2018], it follows that ${\vb{v}}\pm {\vb{h}}$ are also evolution velocities of the interface and the fluid domain. Thus, one may use characteristic methods to transform the above equations into a system of ordinary differential equations, whose well-posedness is standard (see [@Caflisch-Klapper_Steele97]). Once the evolution of the free interface, the current, and the vorticity is known, the original problem can be resolved by solving the div-curl systems. Such an approach can not only resolve the free interface problems for general interfaces (in particular, with no graph assumptions) but also help to clarify the various stability conditions.
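The principal part of the system above decouples: in terms of $u_\pm \coloneqq {\vb*{\omega}}\pm {\vb{j}}$, the left-hand sides become transport operators along ${\vb{v}}\mp {\vb{h}}$, which is the structure behind the characteristic method. The following is a minimal one-dimensional, constant-coefficient caricature of that mechanism (our own toy model, not the paper's system):

```python
import numpy as np

# 1D constant-coefficient caricature of the omega/j system: with
# u_pm := omega +/- j, the principal part decouples into transport along
# the speeds v -/+ h, so each u_pm is constant along its characteristics
# dX/dt = v -/+ h (the lower-order right-hand sides are dropped here).
v, h, t = 1.0, 0.4, 0.7
x = np.linspace(0, 2 * np.pi, 400, endpoint=False)

u_plus0 = lambda y: np.sin(y) + np.cos(y)    # omega + j at time 0
u_minus0 = lambda y: np.sin(y) - np.cos(y)   # omega - j at time 0

# Trace characteristics backwards: u_pm(t, x) = u_pm(0, x - (v -/+ h) t).
u_plus = u_plus0(x - (v - h) * t)
u_minus = u_minus0(x - (v + h) * t)
omega = 0.5 * (u_plus + u_minus)
j = 0.5 * (u_plus - u_minus)

# Cross-check against the exact solution of the linear system
#   d_t omega + v d_x omega - h d_x j = 0,  d_t j + v d_x j - h d_x omega = 0.
omega_exact = 0.5 * (np.sin(x - (v - h) * t) + np.sin(x - (v + h) * t)
                     + np.cos(x - (v - h) * t) - np.cos(x - (v + h) * t))
print(np.abs(omega - omega_exact).max())
```

In the actual problem the transport speeds ${\vb{v}}\mp{\vb{h}}$ vary in space-time and the right-hand sides couple back in, which is why the resulting system of ODEs along characteristics is solved by fixed-point arguments rather than explicitly.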
Indeed, it will be seen in the evolution equations for the mean curvatures that the surface tension corresponds to a third order positive differential operator on the free interface, which serves as a stabilizer for the surface motion; the strict Syrovatskij condition [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} corresponds to a second order positive differential operator. Thus, as far as the stabilization effect is concerned, the surface tension is stronger than the strict Syrovatskij condition. Moreover, due to the existence of the unstable term in the evolution equations for the mean curvature (which is a second order differential operator resulting in the Kelvin-Helmholtz instability for the vortex sheet problems), small tangential magnetic fields (the Syrovatskij condition can be understood as a largeness assumption) cannot stabilize the current-vortex sheet in the absence of surface tension. ## Structure of the paper In Section [3](#sec prelimi){reference-type="ref" reference="sec prelimi"}, we introduce some geometric relations and some analytical tools. Sections [4](#sec reform){reference-type="ref" reference="sec reform"} - [6](#sec nonlinear){reference-type="ref" reference="sec nonlinear"} are devoted to the proof of Theorem [Theorem 1](#thm s.t.){reference-type="ref" reference="thm s.t."}. More precisely, in Section [4](#sec reform){reference-type="ref" reference="sec reform"}, we rewrite the current-vortex sheet problems in a geometric manner, and derive the corresponding evolution equations. In Section [5](#sec linear){reference-type="ref" reference="sec linear"}, we study the uniform linear estimates for the linearized systems; and in Section [6](#sec nonlinear){reference-type="ref" reference="sec nonlinear"} we consider the nonlinear problems and show the local well-posedness of the original current-vortex sheet problems.
Section [7](#sec syro){reference-type="ref" reference="sec syro"} is devoted to the proofs of Theorems [Theorem 2](#thm alpha=0 case){reference-type="ref" reference="thm alpha=0 case"} and [Theorem 3](#thm vanishing surf limt){reference-type="ref" reference="thm vanishing surf limt"}. In the Appendices, we prove two technical lemmas. # Preliminaries {#sec prelimi} ## Geometry of hypersurfaces For a family of hypersurfaces ${\Gamma_t}\subset \mathbb{R}^d$ evolving with the velocity ${\vb{v}}: {\Gamma_t}\to \mathbb{R}^d$, consider a local chart ${\vb{F}}: U \to \Gamma_0 \subset \mathbb{R}^d$ of the initial hypersurface. Let $\Xi_t$ be the flow map induced by ${\vb{v}}$; then one can take a coordinate map of ${\Gamma_t}$ as ${\vb{F}}(t) \coloneqq \Xi_t \circ {\vb{F}}: U \to {\Gamma_t}$. Standard geometric arguments (c.f. [@Ecker2004]) give that the coordinate tangent vectors $\partial_i {\vb{F}}(t, z) : U \to \mathbb{R}^{d}$ $(1 \le i \le d-1)$ form a natural basis of the tangent space $\ensuremath{\mathrm{T}}_p{\Gamma_t}$ at $p = {\vb{F}}(t, z) \in {\Gamma_t}$ for each $z \in U$. The submanifold metric of ${\Gamma_t}\subset \mathbb{R}^d$ is given by $$\label{key} g_{ij} = {\vb{F}}_{,i} \vdot {\vb{F}}_{,j}$$ for $1 \le i, j \le d-1$, where $f_{,i}$ represents $\partial_i f$ for any function $f : U \to \mathbb{R}$. The inverse metric is defined to be $$\label{key} (g^{ij}) \coloneqq (g_{ij})^{-1},$$ and the area element of ${\Gamma_t}$ is $$\label{key} \sqrt{g} = \sqrt{\det(g_{ij})}.$$ Furthermore, there is a natural Riemannian connection on ${\Gamma_t}$, whose Christoffel symbols are given by $$\Gamma_{ij}^k \coloneqq g^{kl} {\vb{F}}_{,ij} \vdot {\vb{F}}_{,l} = \frac{1}{2}g^{kl}\qty(g_{jl,i} + g_{il,j} - g_{ij,l}),$$ where the summation convention for tensors has been used. For a tangent vector $\vb{X} \in \ensuremath{\mathrm{T}}{\Gamma_t}$, one can write $$\vb{X} = X^i {\vb{F}}_{,i} = g^{ij}X_j{\vb{F}}_{,i},$$ where $X_j \coloneqq \vb{X} \vdot {\vb{F}}_{,j}$.
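The chart-based formulas above can be checked symbolically. In the sketch below (our own example: a torus of radii $R > r$, which is not a surface from the paper), the metric and the Christoffel symbols are computed exactly as $g_{ij} = {\vb{F}}_{,i} \vdot {\vb{F}}_{,j}$ and $\Gamma_{ij}^k = g^{kl}{\vb{F}}_{,ij}\vdot{\vb{F}}_{,l}$:

```python
import sympy as sp

# Metric and Christoffel symbols of a torus of radii R > r, computed from a
# chart F(theta, phi) exactly as in the text: g_ij = F_,i . F_,j and
# Gamma^k_ij = g^{kl} F_,ij . F_,l.
theta, phi = sp.symbols("theta phi", real=True)
R, r = sp.symbols("R r", positive=True)
F = sp.Matrix([(R + r * sp.cos(phi)) * sp.cos(theta),
               (R + r * sp.cos(phi)) * sp.sin(theta),
               r * sp.sin(phi)])
u = [theta, phi]
Fu = [F.diff(w) for w in u]                      # coordinate tangent vectors

g = sp.Matrix(2, 2, lambda i, j: sp.simplify(Fu[i].dot(Fu[j])))
ginv = g.inv()
print(g)  # diagonal: (R + r cos phi)^2 and r^2

Gamma = [[[sp.simplify(sum(ginv[k, l] * F.diff(u[i], u[j]).dot(Fu[l])
                           for l in range(2)))
           for j in range(2)] for i in range(2)] for k in range(2)]
# e.g. Gamma^phi_{theta theta} = (R + r cos phi) sin(phi) / r
print(sp.simplify(Gamma[1][0][0]))
```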
The covariant derivative of $\vb{X}$ is defined to be $$\label{key} \qty(\mathop{}\!\mathbin{\mathrm{D}}^{\Gamma_t}_i X)^j \coloneqq X^j_{,i} + \Gamma_{ik}^j X^k.$$ Hence, the divergence of $\vb{X}$ on ${\Gamma_t}$ is defined to be $$\label{def bdry div} \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\vb{X} \coloneqq g^{ij} \vb{X}_{,i} \vdot \vb{F}_{,j}.$$ One can also extend ([\[def bdry div\]](#def bdry div){reference-type="ref" reference="def bdry div"}) to all vector fields defined on ${\Gamma_t}$, not necessarily tangential. For a function $h : {\Gamma_t}\to \mathbb{R}$, the tangential gradient is defined by $$\label{key} \grad_{{\Gamma_t}} h = g^{ij} h_{,i} {\vb{F}}_{,j},$$ and the Laplace-Beltrami operator on ${\Gamma_t}$ is given by $$\label{key} \mathop{}\!\mathbin{\triangle}_{\Gamma_t}h = \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\grad_{\Gamma_t}h = g^{ij}\qty(h_{,ij} - \Gamma_{ij}^k h_{,k}) = \frac{1}{\sqrt{g}}\partial_i \qty(\sqrt{g} g^{ij} h_{,j}).$$ For a general vector field $\vb{Y} : {\Gamma_t}\to \mathbb{R}^d$, the notation $(\mathop{}\!\mathbin{\mathrm{D}}\vb{Y})^\top$ represents a (0,2)-tensor on ${\Gamma_t}$ (here $\mathop{}\!\mathbin{\mathrm{D}}$ is the covariant derivative on $\mathbb{R}^d$, and $\top$ is the tangential projection): $$\label{key} \qty[(\mathop{}\!\mathbin{\mathrm{D}}\vb{Y})^\top]_{ij} \coloneqq \vb{Y}_{,i} \vdot {\vb{F}}_{,j},$$ so $$\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\vb{Y} = \tr[(\mathop{}\!\mathbin{\mathrm{D}}\vb{Y})^\top].$$ Denote by ${\vb{N}}: {\Gamma_t}\to \mathbb{S}^{d-1}$ the unit normal vector field of ${\Gamma_t}$, i.e. $$\label{key} {\vb{N}}\vdot {\vb{F}}_{,i} = 0$$ for $1 \le i \le d-1$. Since ${\vb{N}}\vdot {\vb{N}}\equiv 1$, it is clear that $$\label{key} {\vb{N}}_{,i} \vdot {\vb{N}}= 0,$$ namely, ${\vb{N}}_{,i} \in \ensuremath{\mathrm{T}}{\Gamma_t}$ for all $1 \le i \le d-1$.
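As a sanity check of the coordinate formula $\mathop{}\!\mathbin{\triangle}_{\Gamma}h = \frac{1}{\sqrt{g}}\partial_i(\sqrt{g}\,g^{ij}h_{,j})$, the following sketch (our own example: the unit sphere with the standard polar chart, not a computation from the paper) recovers the first spherical-harmonic eigenvalue:

```python
import sympy as sp

# Check the coordinate formula for the Laplace-Beltrami operator,
#   Lap_Gamma h = g^{-1/2} d_i ( g^{1/2} g^{ij} d_j h ),
# on the unit sphere: for the restriction of z, h = cos(phi), one should
# recover the first spherical-harmonic eigenvalue, Lap h = -2 h.
theta, phi = sp.symbols("theta phi", real=True)
F = sp.Matrix([sp.sin(phi) * sp.cos(theta),
               sp.sin(phi) * sp.sin(theta),
               sp.cos(phi)])
u = [theta, phi]
Fu = [F.diff(w) for w in u]
g = sp.Matrix(2, 2, lambda i, j: sp.trigsimp(Fu[i].dot(Fu[j])))
ginv = g.inv()
sqrtg = sp.sin(phi)  # = sqrt(det g) on the chart 0 < phi < pi

h = sp.cos(phi)
lap = sp.simplify(sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(h, u[j]), u[i])
                      for i in range(2) for j in range(2)) / sqrtg)
print(lap)  # -2*cos(phi)
```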
The second fundamental form ${\vb{I\!I}}$ of ${\Gamma_t}$ is a (0, 2)-tensor defined by $$\label{def II} \ensuremath{I\!\!I}_{ij} \coloneqq {\vb{N}}_{,i} \vdot {\vb{F}}_{,j} = - {\vb{N}}\vdot {\vb{F}}_{,ij}.$$ The mean curvature is defined to be the trace of ${\vb{I\!I}}$, i.e. $$\label{eqn kp div N} \kappa \coloneqq \tr({\vb{I\!I}}) = \ensuremath{I\!\!I}_{ij}g^{ij} = g^{ij} {\vb{N}}_{,i} \vdot {\vb{F}}_{,j} = \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb{N}}.$$ Here we mention several useful identities, whose calculations can be found in [@Ecker2004 Appendix A]: $$\label{key} \mathop{}\!\mathbin{\triangle}_{\Gamma_t}\ensuremath{I\!\!I}_{ij} = \kappa_{;ij} + \kappa \ensuremath{I\!\!I}_{i}^k \ensuremath{I\!\!I}_{kj} - \abs{{\vb{I\!I}}}^2 \ensuremath{I\!\!I}_{ij},$$ namely, $$\label{simons' identity} \mathop{}\!\mathbin{\triangle}_\Gamma {\vb{I\!I}}= \qty(\mathop{}\!\mathbin{\mathrm{D}}_\Gamma)^2 \kappa + (\kappa {\vb{I\!I}}- \abs{{\vb{I\!I}}}^2 \vb{I}) \vdot {\vb{I\!I}},$$ which is called Simons' identity. Here the dot product of tensors is defined to be $$\label{key} \vb{C} \coloneqq \vb{A} \cdot \vb{B} \qc C_{ij} \equiv A_i^k B_{kj} = A_{il}B_{kj}g^{lk}.$$ Furthermore, it follows from the Codazzi equation that $$\label{lap vn} \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}= - \abs{{\vb{I\!I}}}^2 {\vb{N}}+ \grad_{\Gamma_t}{\kappa}.$$ *Remark 1*. Derivatives of functions and vector fields on hypersurfaces can also be defined in terms of the projections from $\mathbb{R}^d$ onto the tangent space of ${\Gamma_t}$. 
In particular, for a function $f$ and a vector field $\vb{X}$ defined in a neighborhood of ${\Gamma_t}\subset \mathbb{R}^d$, the tangential gradient of $f$ is $$\label{key} \grad_{\Gamma_t}f = \grad f - ({\vb{N}}\vdot \grad f) {\vb{N}},$$ where $\grad f$ is the gradient in $\mathbb{R}^d$; and the tangential divergence of $\vb{X}$ is given by $$\label{eqn div Gt div Rd} \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\vb{X} = \mathop{\mathrm{\mathrm{div}}}_{\mathbb{R}^d} \vb{X} - {\vb{N}}\vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}} \vb{X},$$ where $\mathop{}\!\mathbin{\mathrm{D}}$ is the covariant derivative in $\mathbb{R}^d$. The above definitions are identical to the intrinsic ones given earlier. The Laplace-Beltrami operator can equivalently be calculated by: $$\label{lap gt f alt} \begin{split} \mathop{}\!\mathbin{\triangle}_{\Gamma_t}f &= \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\grad_{\Gamma_t}f = \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\grad f - \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\qty[({\vb{N}}\vdot\grad f){\vb{N}}] \\ &= \mathop{}\!\mathbin{\triangle}_{\mathbb{R}^d} f - \mathop{}\!\mathbin{\mathrm{D}}^2 f({\vb{N}}, {\vb{N}}) - \kappa {\vb{N}}\vdot \grad f, \end{split}$$ for any $C^2$ function $f$ defined in a neighborhood of ${\Gamma_t}\subset \mathbb{R}^d$. Next, we shall derive the evolution equations. For the evolution of ${\vb{N}}$, it holds that $$\label{key} 0 \equiv \partial_t \qty({\vb{N}}\vdot {\vb{F}}_{,i}) = \partial_t{\vb{N}}\vdot {\vb{F}}_{,i} + {\vb{N}}\vdot {\vb{v}}_{,i},$$ which, together with the fact that ${\vb{N}}$ has unit length, implies that $$\label{eqn pdt n} \partial_t {\vb{N}}= - g^{ij} \qty({\vb{N}}\vdot {\vb{v}}_{,j}) {\vb{F}}_{,i},$$ in other words, $$\label{eqn dt n} {\mathbb{D}_t}{\vb{N}}= - \qty[(\grad{\vb{v}})^* \vdot{\vb{N}}]^\top,$$ where ${\mathbb{D}_t}$ is the material derivative along the trajectory of ${\vb{v}}$.
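The claim that the projection-based and intrinsic definitions coincide can be verified symbolically. The sketch below (our own example, again on the unit sphere) compares the two expressions for the tangential gradient, and also confirms $\kappa = \mathop{\mathrm{div}}_{\Gamma}{\vb{N}} = 2$ for the outward normal:

```python
import sympy as sp

# Verify on the unit sphere (N = F, outward) that the projection formula
#   grad_Gamma f = grad f - (N . grad f) N
# agrees with the intrinsic expression g^{ij} f_,i F_,j, and that
# kappa = g^{ij} N_,i . F_,j equals 2.
x, y, z, theta, phi = sp.symbols("x y z theta phi", real=True)
F = sp.Matrix([sp.sin(phi) * sp.cos(theta),
               sp.sin(phi) * sp.sin(theta),
               sp.cos(phi)])
u = [theta, phi]
Fu = [F.diff(w) for w in u]
g = sp.Matrix(2, 2, lambda i, j: sp.trigsimp(Fu[i].dot(Fu[j])))
ginv = g.inv()
N = F  # unit normal of the unit sphere

f = x * z  # an ambient function, restricted to the sphere below
grad_f = sp.Matrix([sp.diff(f, w) for w in (x, y, z)])
on_sphere = {x: F[0], y: F[1], z: F[2]}
grad_ext = grad_f.subs(on_sphere)
proj = sp.simplify(grad_ext - N.dot(grad_ext) * N)   # extrinsic projection

f_chart = f.subs(on_sphere)  # f along the chart
intrinsic = sp.simplify(sum((ginv[i, j] * sp.diff(f_chart, u[i]) * Fu[j]
                             for i in range(2) for j in range(2)),
                            sp.zeros(3, 1)))
print(sp.simplify(proj - intrinsic).T)  # the zero vector

kappa = sp.simplify(sum(ginv[i, j] * N.diff(u[i]).dot(Fu[j])
                        for i in range(2) for j in range(2)))
print(kappa)  # 2
```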
For the metric tensor, observe that $$\label{dt g ij} \partial_t g_{ij} = {\vb{v}}_{,i} \vdot {\vb{F}}_{,j} + {\vb{v}}_{,j} \vdot {\vb{F}}_{,i} =: 2A_{ij}.$$ One can check that $\vb{A}$ is a tensor on ${\Gamma_t}$. In fact, $$\label{key} \vb{A} = (\ensuremath{\mathrm{Def}} \, {\vb{v}}^\top) + v^\perp{\vb{I\!I}},$$ where \"$\ensuremath{\mathrm{Def}}$\" represents the deformation tensor on ${\Gamma_t}$. In particular, the material derivative of the area element is: $$\begin{split} \dv{t}(\sqrt{\det(g_{ij})}) = \frac{1}{2} \sqrt{\det(g_{ij})} g^{kl} \partial_t (g_{kl}) = (\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb{v}}) \sqrt{\det(g_{ij})}, \end{split}$$ i.e., $$\label{eqn Dt dS} {\mathbb{D}_t}\dd{S_t} = \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb{v}}\dd{S_t}.$$ The evolution equation for the second fundamental form is $$\label{eqn dt II} \partial_t \ensuremath{I\!\!I}_{ij} = - {\vb{N}}\vdot ({\vb{v}}_{,ij} - \Gamma_{ij}^k {\vb{v}}_{,k}),$$ in particular, $$\label{eqn dt II tan} ({\mathbb{D}_t}{\vb{I\!I}})^\top = - \mathop{}\!\mathbin{\mathrm{D}}^\top \qty[(\mathop{}\!\mathbin{\mathrm{D}}{\vb{v}})^*{\vb{N}}]^\top - {\vb{I\!I}}\vdot \qty(\mathop{}\!\mathbin{\mathrm{D}}{\vb{v}})^\top.$$ The evolution of $\kappa$ is given by $$\label{eqn dt kappa} \begin{split} {\mathbb{D}_t}\kappa \coloneqq\, &\partial_t(\kappa \circ{\vb{F}}) = \partial_t\qty(\ensuremath{I\!\!I}_{ij}g^{ij}) \\ =\, &{\vb{N}}\vdot\qty[g^{ij}\qty(-{\vb{v}}_{,ij}+\Gamma_{ij}^k{\vb{v}}_{,k})] - 2\ensuremath{I\!\!I}_{ij}{A}^{ij} \\ =\, &-{\vb{N}}\vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{v}}- 2 \ip{{\vb{I\!I}}}{\vb{A}}, \end{split}$$ where $\ip{\cdot}{\cdot}$ is the standard inner product of tensors defined by $$\ip{\vb{A}}{\vb{B}} \coloneqq A_{ij} B^{ij} = A_{ij} B_{kl} g^{ik} g^{jl}.$$ The second order evolution equation is: $$\begin{split} {\mathbb{D}_t}^2\kappa =\, &-{\vb{N}}\vdot\mathop{}\!\mathbin{\triangle}_{\Gamma_t}({\mathbb{D}_t}{\vb{v}})+ 
2{\vb{N}}\vdot(\grad{\vb{v}})\vdot(\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{v}})^\top + 4 \ip{\vb{A}}{{\vb{N}}\vdot\qty(\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2{\vb{v}}} -\kappa \abs{\qty[(\grad{\vb{v}})^*\vdot{\vb{N}}]^\top}^2 \\ &+ 4 \ip{{\vb{I\!I}}\vdot\vb{A} + \vb{A}\vdot{\vb{I\!I}}}{\vb{A}} -2\ip{{\vb{I\!I}}}{\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{v}}\vdot \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{v}}} - 2\ip{{\vb{I\!I}}}{[\mathop{}\!\mathbin{\mathrm{D}}({\mathbb{D}_t}{\vb{v}})]^\top}. \end{split}$$ Here we explain some of the terms appearing in the expressions above. If one assumes that $${\vb{N}}\equiv N^\alpha \vb{e}_{(\alpha)} \qand {\vb{v}}\equiv v^\alpha \vb{e}_{(\alpha)},$$ where $\vb{e}_{(\alpha)}$ $(1 \le \alpha \le d)$ is an orthonormal basis of $\mathbb{R}^d$, then $${\vb{N}}\vdot (\grad {\vb{v}}) \vdot (\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{v}})^\top = \sum_{\alpha} N^\alpha \grad v^\alpha \vdot (\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{v}})^\top \qc {\vb{N}}\vdot (\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2 {\vb{v}}= \sum_{\alpha} N^\alpha (\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2 v^\alpha,$$ and $$\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{v}}\vdot \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{v}}= \sum_{\alpha} \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}v^\alpha \otimes \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}v^\alpha.$$ By using the identity [\[lap vn\]](#lap vn){reference-type="eqref" reference="lap vn"}, one can derive an alternative formula: $$\label{eqn Dt2 kappa alt} \begin{split} {\mathbb{D}_t}^2 \kappa =\, &-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}({\vb{N}}\vdot{\mathbb{D}_t}{\vb{v}}) - \abs{{\vb{I\!I}}}^2 ({\vb{N}}\vdot{\mathbb{D}_t}{\vb{v}}) + \grad_{\Gamma_t}\kappa \vdot{\mathbb{D}_t}{\vb{v}}+ 2{\vb{N}}\vdot(\grad{\vb{v}})\vdot(\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{v}})^\top \\ &+ 4 \ip{\vb{A}}{{\vb{N}}\vdot\qty(\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2{\vb{v}}} -\kappa
\abs{\qty[(\grad{\vb{v}})^*\vdot{\vb{N}}]^\top}^2 -2\ip{{\vb{I\!I}}}{\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{v}}\vdot \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{v}}} + 4 \ip{{\vb{I\!I}}\vdot\vb{A} + \vb{A}\vdot{\vb{I\!I}}}{\vb{A}}. \end{split}$$ ## Reference hypersurface Let $k$ be an integer with $k \ge 2$, and ${\Gamma_\ast}\subset {\Omega}$ a compact reference hypersurface without boundary separating ${\Omega}$ into two disjoint simply-connected domains ${\Omega}_*^\pm$. Assume that ${\Gamma_\ast}$ is of Sobolev class $H^{{\frac{3}{2}k}+ 1}$. Denote by ${\vb{N}}_{*+}$ the outward unit normal of $\partial{\Omega}_*^+ = {\Gamma_\ast}$ and ${\vb{N}}_{*-} \coloneqq - {\vb{N}}_{*+}$ the outward unit normal of ${\Gamma_\ast}\subset \partial{\Omega}_*^-$. Let ${\vb{I\!I}}_{*\pm}$ be the second fundamental form of ${\Gamma_\ast}$ with respect to ${\vb{N}}_{*\pm}$, and $\kappa_{*\pm}$ the corresponding mean curvature. As in [@Shatah-Zeng2011], we shall consider the evolution of hypersurfaces in a tubular neighborhood of ${\Gamma_\ast}$. Although it is natural to take normal bundle coordinates of ${\Gamma_\ast}$ in classical geometric arguments, it would be better not to do so. Indeed, if ${\Gamma_\ast}$ is of finite regularity, ${\vb{N}}_*$ has one fewer derivative than ${\Gamma_\ast}$; hence one needs to take another transversal vector field to obtain the Fermi coordinates of the same regularity as that of ${\Gamma_\ast}$. For example, one can take a unit vector field $\vb*{\nu} \in H^{{\frac{3}{2}k}+1} ({\Gamma_\ast}; \mathbb{R}^{3})$ for which ${\vb*{\nu}}\cdot {\vb{N}}_{*+} \ge 9/10$ by mollifying ${\vb{N}}_*$.
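The mollification step can be mimicked numerically. In the toy sketch below (a rough unit field on $\mathbb{S}^1$; the perturbation, grid, and filter width are all our own illustrative choices, not constants from the paper), smoothing and renormalizing produce a transversal field with ${\vb*{\nu}}\cdot{\vb{N}}\ge 9/10$:

```python
import numpy as np

# Toy version of "mollify N_* to get a smoother transversal field nu":
# a rough unit field on S^1 is smoothed componentwise by a periodic
# Gaussian Fourier multiplier and renormalized; the smoothed field has a
# rapidly decaying Fourier tail while staying C^0-close, so nu . N >= 9/10.
n = 512
t = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Unit normal of a slightly wiggly closed curve (rough high-frequency part).
ang = t + 0.05 * np.sin(25 * t)
N = np.stack([np.cos(ang), np.sin(ang)])

# Periodic Gaussian mollifier via the Fourier multiplier exp(-eps^2 k^2 / 2).
k = np.fft.fftfreq(n, d=1.0 / n)
mult = np.exp(-0.5 * (0.05 * k) ** 2)
nu = np.real(np.fft.ifft(mult * np.fft.fft(N, axis=1), axis=1))
nu /= np.linalg.norm(nu, axis=0)  # renormalize to unit length

print("min nu . N =", (nu * N).sum(axis=0).min())  # stays >= 9/10
```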
It follows from the implicit function theorem that there exists a constant $\delta_0 > 0$ depending on ${\Gamma_\ast}$ and ${\vb*{\nu}}$ so that $$\label{key} \begin{split} \varphi: \, &{\Gamma_\ast}\times (-\delta_0, \delta_0) \to \mathbb{R}^3 \\ &(p, \gamma) \mapsto p + \gamma {\vb*{\nu}} \end{split}$$ is an $H^{{\frac{3}{2}k}+ 1}$ diffeomorphism onto a neighborhood of ${\Gamma_\ast}$. Therefore, each hypersurface $\Gamma$ close to ${\Gamma_\ast}$ in the $C^1$ topology is associated to a unique height function $\gamma_\Gamma : {\Gamma_\ast}\to \mathbb{R}$ so that $$\label{key} \Phi_\Gamma (p) \coloneqq p + \gamma_\Gamma(p){\vb*{\nu}}(p)$$ is a diffeomorphism from ${\Gamma_\ast}$ to $\Gamma$. Thus, one can use the function $\gamma_\Gamma$ to represent the hypersurface $\Gamma$. **Definition 4**. For $\delta>0$ and $\frac{1+3}{2} < s \le 1+{\frac{3}{2}k}$, define $\Lambda({\Gamma_\ast}, s, \delta)$ to be the collection of all hypersurfaces $\Gamma$ close to ${\Gamma_\ast}$, whose associated height functions $\gamma_\Gamma$ satisfy $\abs{\gamma_\Gamma}_{H^s({\Gamma_\ast})} < \delta$. As $s > \frac{3-1}{2} + 1$ implies $H^s({\Gamma_\ast}) \hookrightarrow C^1({\Gamma_\ast})$, $\delta \ll 1$ yields that each $\Gamma \in \Lambda({\Gamma_\ast}, s, \delta)$ also separates ${\Omega}$ into two disjoint simply-connected domains. ## Recovering a hypersurface from its mean curvature Here, we characterize the moving hypersurface by its mean curvature $\kappa_+ \coloneqq \tr {\vb{I\!I}}_+$. Recall that the second fundamental form is defined by $$\label{second fundamental form} {\vb{I\!I}}_+ (\vb*{\tau}) \coloneqq \mathop{}\!\mathbin{\mathrm{D}}_{\vb*{\tau}} {\vb{N}}_+ \qfor \vb*{\tau} \in \ensuremath{\mathrm{T}}\Gamma.$$ For an $H^s$ hypersurface $\Gamma \in \Lambda({\Gamma_\ast}, s, \delta_0)$ with $s> 2$, the unit normal ${\vb{N}}_+$ has the same regularity as $\grad \gamma_\Gamma$.
Then the mapping from $\gamma_\Gamma \in \H{s}$ to the mean curvature $\kappa_+ \circ \Phi_\Gamma \in \H{s-2}$ is smooth. In order to establish a bijection between them, one may consider a modification $$\label{def ka} {\mathfrak{K}}[{\gamma_\Gamma}](p) \equiv {\varkappa_a}(p) \coloneqq \kappa_+ \circ \Phi_\Gamma (p) + a^2 {\gamma_\Gamma}(p) \qfor p \in {\Gamma_\ast},$$ where $a$ is a parameter depending only on ${\Gamma_\ast}$ and ${\vb*{\nu}}$ (c.f. [@Shatah-Zeng2011]). For a small constant $\delta_0 > 0$, define $$\label{def lambda*} \Lambda_* \coloneqq \Lambda \qty({\Gamma_\ast}, {\frac{3}{2}k}- \frac{1}{2}, \delta_0).$$ Then, the following lemma holds: **Lemma 5**. *For $\Gamma \in \Lambda_*$ with $\kappa_+ \in H^s(\Gamma)$, ${\frac{3}{2}k}- \frac{5}{2} \le s \le {\frac{3}{2}k}- 1$, the following estimate holds: $$\label{key} \abs{{\vb{N}}_+}_{H^{s+1}(\Gamma)} + \abs{{\vb{I\!I}}_+}_{H^s(\Gamma)} \le C_* \qty(1 + \abs{\kappa_+}_{H^s(\Gamma)}),$$ for some constant $C_*$ depending only on $\Lambda_*$.* The proof of the lemma follows from the bootstrap arguments and Simons' identity [\[simons\' identity\]](#simons' identity){reference-type="eqref" reference="simons' identity"}. For the details, one can refer to [@Shatah-Zeng2008-Geo p. 719]. If $\Lambda_*$ is regarded as an open subset of $\H{{\frac{3}{2}k}-\frac{1}{2}}$, then ${\mathfrak{K}}$ is a $C^3$-morphism from $\Lambda_* \subset \H{{\frac{3}{2}k}- \frac{1}{2}}$ to $\H{{\frac{3}{2}k}-\frac{5}{2}}$. Furthermore, by taking $a \gg 1$, one may deduce from the positivity of $\eval{(\fdv*{{{\mathfrak{K}}}}{\gamma_\Gamma})}_{{\Gamma_\ast}}^{}$ that ${\mathfrak{K}}$ is actually a $C^3$ diffeomorphism, and the following proposition holds (c.f. [@Shatah-Zeng2011 Lemma 2.2]): **Proposition 6**. 
*There are positive constants $C_*, \delta_0, \delta_1, a_0$ depending only on ${\Gamma_\ast}$ and ${\vb*{\nu}}$ such that for $a \ge a_0$, ${\mathfrak{K}}$ is a $C^3$ diffeomorphism from $\Lambda_* \subset \H{{\frac{3}{2}k}-\frac{1}{2}}$ to $\H{{\frac{3}{2}k}-\frac{5}{2}}$. Denote by $$B_{\delta_1} \coloneqq \Set*{{\varkappa_a}\abs{{\varkappa_a}- \kappa_{*+}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} < \delta_1},$$ where $\kappa_{*+}$ is the mean curvature of ${\Gamma_\ast}$ with respect to ${\vb{N}}_{*+}$, then $$\abs{{\mathfrak{K}}^{-1}}_{C^3\qty(B_{\delta_1}; \H{{\frac{3}{2}k}-\frac{1}{2}})} \le C_*.$$ Furthermore, if ${\varkappa_a}\in B_{\delta_1} \cap \H{s-2}$ with ${\frac{3}{2}k}-\frac{1}{2} \le s \le {\frac{3}{2}k}+1$, then ${\gamma_\Gamma}, \Phi_\Gamma \in \H{s}$, and for $\max\qty{s'-2, -s} \le s'' \le s' \le s$, it holds that $$\label{var est K^-1} \abs{\var{{\mathfrak{K}}^{-1}}({\varkappa_a})}_{{\mathscr{L}}\qty(\H{s''}; \H{s'})} \le C_* a^{s'-s''-2} \qty(1+ \abs{{\varkappa_a}}_{\H{s-2}}),$$ where $\var{{\mathfrak{K}}^{-1}}$ is the functional variation of ${\mathfrak{K}}^{-1}$.* ## Harmonic coordinates and Dirichlet-Neumann operators {#sec harmonic coord} Given a hypersurface $\Gamma \in \Lambda({\Gamma_\ast}, s, \delta)$, define a map $\mathcal{X}_\Gamma^\pm : {\Omega}_*^\pm \to {\Omega}_\Gamma^\pm$ by $$\label{key} \begin{cases*} \mathop{}\!\mathbin{\triangle}_y \mathcal{X}_\Gamma^\pm = 0 &for $ y \in {\Omega}_*^\pm $, \\ \mathcal{X}_\Gamma^\pm (z) = \Phi_\Gamma(z) &for $ z \in \Gamma_* $, \\ \mathcal{X}_\Gamma^- (z') = z' &for $ z' \in \partial\Omega $. \end{cases*}$$ Then, it is clear that $$\label{key} \norm{\nabla \mathcal{X}_\Gamma^\pm - \mathrm{Id}|_{{\Omega}_*^\pm}}_{H^{s-\frac{1}{2}}({\Omega}_*^\pm)} \le C \abs{\gamma_\Gamma}_{H^s({\Gamma_\ast})} < C\delta,$$ where $C > 0$ is uniform in $\Gamma \in \Lambda({\Gamma_\ast}, s, \delta)$. 
Thus there is a constant $\delta_0 >0$ determined by ${\Gamma_\ast}$ and ${\vb*{\nu}}$, for which $\mathcal{X}_\Gamma^\pm$ are diffeomorphisms from ${\Omega}_*^\pm$ to ${\Omega}^\pm_\Gamma$ respectively, whenever $\delta \le \delta_0$. With the notations in [\[def lambda\*\]](#def lambda*){reference-type="eqref" reference="def lambda*"}, we list some basic inequalities, whose proofs are standard (c.f. [@Shatah-Zeng2008-Geo; @Bahouri-Chemin-Danchin2011]). **Lemma 7**. *Suppose that $\Gamma \in \Lambda_*$. Then there are constants $C_1, C_2 > 0$, depending on $\Lambda_*$, so that* 1. *If $u_\pm \in H^\sigma (\Omega_\Gamma^\pm)$ for $\sigma \in \qty[-{\frac{3}{2}k}, {\frac{3}{2}k}]$, then $$\dfrac{1}{C_1} \norm{u_\pm}_{H^\sigma({\Omega}_\Gamma^\pm)} \le \norm{u_\pm \circ \mathcal{X}_\Gamma^\pm}_{H^\sigma({\Omega}_*^\pm)} \le C_1 \norm{u_\pm}_{H^\sigma({\Omega}_\Gamma^\pm)}.$$* 2. *If $f \in H^s (\Gamma)$ for $s \in \qty[\frac{1}{2}-{\frac{3}{2}k}, {\frac{3}{2}k}-\frac{1}{2}]$, then $$\dfrac{1}{C_2} \abs{f}_{H^{s}(\Gamma)} \le \abs{f\circ\Phi_\Gamma}_{\H{s}} \le C_2 \abs{f}_{H^{s}(\Gamma)}.$$* **Lemma 8**. *Assume that $\Gamma \in \Lambda_*$. Then there are constants $C_1, C_2 > 0$ determined by $\Lambda_*$ such that* 1. *For $u_\pm \in H^{\sigma_1}({\Omega}_\Gamma^\pm)$, $w_\pm \in H^{\sigma_2}({\Omega}_\Gamma^\pm)$ and $\sigma_1 \le \sigma_2$, $$\begin{split} \norm{u_\pm \cdot w_\pm}_{H^{\sigma_1 + \sigma_2 - \frac{3}{2}}({\Omega}_\Gamma^\pm)} \le C_1 \norm{u_\pm}_{H^{\sigma_1}({\Omega}_\Gamma^\pm)} \norm{w_\pm}_{H^{\sigma_2}({\Omega}_\Gamma^\pm)} \qif \sigma_2 < \dfrac{3}{2} \qc 0 < \sigma_1 + \sigma_2 \le {\frac{3}{2}k}. \end{split}$$ $$\begin{split} \norm{u_\pm \cdot w_\pm}_{H^{\sigma_1}({\Omega}_\Gamma^\pm)} \le C_1 \norm{u_\pm}_{H^{\sigma_1}({\Omega}_\Gamma^\pm)} \norm{w_\pm}_{H^{\sigma_2}({\Omega}_\Gamma^\pm)} \qif \frac{3}{2} < \sigma_2 \le {\frac{3}{2}k}\qc \sigma_1 + \sigma_2 > 0. \end{split}$$* 2.
*For $f \in H^{s_1}(\Gamma)$, $g \in H^{s_2}(\Gamma)$ and $s_1 \le s_2$, $$\label{key} \abs{fg}_{H^{s_1+s_2-\frac{2}{2}}(\Gamma)} \le C_2 \abs{f}_{H^{s_1}(\Gamma)} \abs{g}_{H^{s_2}(\Gamma)} \qif s_2 < 1 \qc 0 \le s_1 + s_2 \le {\frac{3}{2}k}-\frac{1}{2}.$$ $$\label{key} \abs{fg}_{H^{s_1}(\Gamma)} \le C_2 \abs{f}_{H^{s_1}(\Gamma)} \abs{g}_{H^{s_2}(\Gamma)}\qif 1 < s_2 \le {\frac{3}{2}k}-\frac{1}{2} \qc s_1 + s_2 > 0.$$* For any smooth function $f$ defined on $\Gamma \in \Lambda_*$, denote by ${\mathcal{H}}_\pm f$ the harmonic extensions to ${\Omega}_\Gamma^\pm$, namely $$\label{harmonic ext +} \begin{cases*} \mathop{}\!\mathbin{\triangle}{\mathcal{H}}_+ f = 0 \qfor x \in {\Omega}_\Gamma^+, \\ {\mathcal{H}}_+ f = f \qfor x \in \Gamma, \end{cases*}$$ and $$\label{harmonic ext 2} \begin{cases*} \mathop{}\!\mathbin{\triangle}{\mathcal{H}}_- f = 0 \qfor x \in {\Omega}_\Gamma^-, \\ {\mathcal{H}}_- f = f \qfor x \in \Gamma, \\ \mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}} {\mathcal{H}}_- f = 0 \qfor x \in \partial{\Omega}. \end{cases*}$$ The Dirichlet-Neumann operators are defined to be $$\label{def DN op} {\mathcal{N}}_\pm f \coloneqq {\vb{N}}_\pm \vdot (\grad {\mathcal{H}}_\pm f)|_{\Gamma}^{}.$$ Assume that $\Gamma \in \Lambda_* \subset H^{{\frac{3}{2}k}-\frac{1}{2}}$ and $\frac{3}{2}-{\frac{3}{2}k}\le s \le {\frac{3}{2}k}-\frac{1}{2}$. The Dirichlet-Neumann operators ${\mathcal{N}}_\pm : H^{s}(\Gamma) \to H^{s-1}(\Gamma)$ satisfy the following properties (c.f. [@Shatah-Zeng2008-Geo pp. 738-741]): 1. ${\mathcal{N}}_\pm$ are self-adjoint on $L^2(\Gamma)$ with compact resolvents; 2. $\ker({\mathcal{N}}_\pm) = \qty{\mathrm{const.}}$; 3. There is a constant $C_* > 0$ uniform in $\Gamma \in \Lambda_*$ so that $$C_*\abs{f}_{H^{s}(\Gamma)} \ge \abs{{\mathcal{N}}_\pm(f)}_{H^{s-1}(\Gamma)} \ge \frac{1}{C_*}\abs{f}_{H^{s}(\Gamma)},$$ for any $f$ satisfying $\int_\Gamma f \dd{S} = 0$; 4. 
For $\frac{1}{2}-{\frac{3}{2}k}\le s_1 \le {\frac{3}{2}k}-\frac{1}{2}$, there is a constant $C_*$ determined by $\Lambda_*$ so that $$\dfrac{1}{C_*} \qty(\mathrm{I}-\mathop{}\!\mathbin{\triangle}_\Gamma)^{\frac{s_1}{2}} \le \qty(\mathrm{I}+{\mathcal{N}}_\pm)^{s_1} \le C_*\qty(\mathrm{I}-\mathop{}\!\mathbin{\triangle}_\Gamma)^{\frac{s_1}{2}},$$ i.e., the norms on $H^{s_1}(\Gamma)$ defined by interpolating $\qty(\mathrm{I}-\mathop{}\!\mathbin{\triangle}_\Gamma)^\frac{1}{2}$ and $\qty(\mathrm{I}+{\mathcal{N}}_\pm)$ are equivalent; 5. For $\frac{1}{2}-{\frac{3}{2}k}\le s_2 \le {\frac{3}{2}k}-\frac{3}{2}$, the operator $$({\mathcal{N}}_\pm)^{-1} : H^{s_2}_{0}(\Gamma) \to H^{s_2+1}_{0}(\Gamma),$$ where $$H^{s_2}_{0}(\Gamma) \coloneqq \Set*{f\in H^{s_2}(\Gamma) : \int_\Gamma f \dd{S} = 0 },$$ is well-defined and bounded uniformly in $\Gamma \in \Lambda_*$. *Remark 2*. With a slight abuse of notation, we shall write $({\mathcal{N}}_\pm)^{-1}$ for $({\mathcal{N}}_\pm)^{-1} \circ\mathcal{P}$, where $$\mathcal{P}f \coloneqq f - \fint_\Gamma f \dd{S} \equiv f - \ev{f}$$ is the projection onto mean-free functions on $\Gamma$. The following notations will be used later: $$\label{def operator n} \bar{{\mathcal{N}}}\coloneqq \dfrac{1}{\rho_+} {\mathcal{N}}_+ + \dfrac{1}{\rho_-} {\mathcal{N}}_-,$$ $$\label{def operator n tilde} \widetilde{{\mathcal{N}}} \coloneqq \qty(\dfrac{1}{\rho_+} {\mathcal{N}}_+) \bar{{\mathcal{N}}}^{-1} \qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-) = \qty[\qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)^{-1} + \qty(\dfrac{1}{\rho_-} {\mathcal{N}}_-)^{-1}]^{-1}.$$ At the end of this subsection, we state an important lemma (c.f. [@Shatah-Zeng2008-vortex p. 863]): **Lemma 9**. *Suppose that $\Gamma \in \Lambda_*$ with $\kappa \in H^{{\frac{3}{2}k}-\frac{3}{2}}(\Gamma)$.
Then for $\frac{1}{2} - {\frac{3}{2}k}\le s \le {\frac{3}{2}k}-\frac{1}{2}$, one has $$\label{key} \abs{\qty(-\mathop{}\!\mathbin{\triangle}_\Gamma)^{\frac{1}{2}} - {\mathcal{N}}_\pm}_{{\mathscr{L}}\qty(H^s(\Gamma))} \le C_* \qty(1+\abs{\kappa}_{H^{{\frac{3}{2}k}-\frac{3}{2}}(\Gamma)}),$$ where the constant $C_*>0$ is uniform in $\Gamma \in \Lambda_*$.* ## Commutator estimates For vector fields (not necessarily solenoidal) ${\vb{v}}_\pm(t) : {\Omega}_t^\pm \to \mathbb{R}^3$, denote by $$\label{key} {\mathbb{D}_t}_\pm \coloneqq \partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_\pm}.$$ Then one has the following lemma: **Lemma 10**. *Suppose that ${\Gamma_t}\in \Lambda_*$, and ${\vb{v}}_\pm \in H^{{\frac{3}{2}k}}({\Omega}_t^\pm)$ are the evolution velocities of ${\Omega}_t^\pm$, so ${\vb{v}}_\pm$ are both evolution velocities of ${\Gamma_t}$. Let $f (t, x) : {\Gamma_t}\to \mathbb{R}$ and $h (t, x) : {\Omega\setminus{\Gamma_t}}\to \mathbb{R}$ be two functions. Then the following commutator estimates hold:* I. *For $1 \le s \le {\frac{3}{2}k}$, $\norm{\comm{{\mathbb{D}_t}_\pm}{{\mathcal{H}}_\pm}f}_{H^s({\Omega}_t^\pm)} \lesssim_{\Lambda_*} \abs{f}_{H^{s-\frac{1}{2}}({\Gamma_t})}\cdot \norm{{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)}$;* II. *For $1 \le s \le {\frac{3}{2}k}$, $\norm{\comm{{\mathbb{D}_t}_\pm}{\mathop{}\!\mathbin{\triangle}_\pm^{-1}}h_\pm}_{H^s({\Omega}_t^\pm)} \lesssim_{\Lambda_*} \norm{h_\pm}_{H^{s-2}({\Omega}_t^\pm)} \cdot \norm{{\vb{v}}_\pm}_{H^{\frac{3}{2}k}({\Omega}_t^\pm)}$;* III. *For $-\frac{1}{2} \le s \le {\frac{3}{2}k}- \frac{3}{2}$, $\abs{\comm{{\mathbb{D}_t}_\pm}{{\mathcal{N}}_\pm}f}_{H^s({\Gamma_t})} \lesssim_{\Lambda_*} \abs{f}_{H^{s+1}({\Gamma_t})} \cdot \abs{{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}-\frac{1}{2}}({\Gamma_t})}$;* IV. 
*For $\frac{1}{2} \le s \le {\frac{3}{2}k}- \frac{1}{2}$, $\abs{\comm{{\mathbb{D}_t}_\pm}{{\mathcal{N}}_\pm^{-1}}f}_{H^s({\Gamma_t})} \lesssim_{\Lambda_*} \abs{f}_{H^{s-1}({\Gamma_t})} \cdot \abs{{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}-\frac{1}{2}}({\Gamma_t})}$;* V. *For $-2 \le s \le {\frac{3}{2}k}- \frac{5}{2}$, $\abs{\comm{{\mathbb{D}_t}_\pm}{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}}f}_{H^s({\Gamma_t})} \lesssim_{\Lambda_*} \abs{f}_{H^{s+2}({\Gamma_t})} \cdot \abs {{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}-\frac{1}{2}}({\Gamma_t})}$;* VI. *For $0 \le s \le {\frac{3}{2}k}-1$, $\norm{\comm{{\mathbb{D}_t}_\pm}{\mathop{}\!\mathbin{\mathrm{D}}}h_\pm}_{H^s({\Omega}_t^\pm)} \lesssim_{\Lambda_*}\norm{h_\pm}_{H^{s+1}({\Omega}_t^\pm)} \cdot \norm{{\vb{v}}_\pm}_{H^{\frac{3}{2}k}({\Omega}_t^\pm)}$.* The proof of Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"} follows from the identities introduced in [@Shatah-Zeng2008-Geo pp. 709-710] and standard product estimates. ## Div-curl systems {#sec div-cul system} In this subsection, we list some basic results on div-curl systems (c.f. [@Cheng-Shkoller2017] for details): **Theorem 11**. *Suppose that $U$ is a bounded domain in $\mathbb{R}^3$ for which $\partial U \in H^{{\frac{3}{2}k}-\frac{1}{2}}$. 
Given $\vb{f}, g \in H^{l-1}(U)$ with $\div \vb{f} = 0$ and $h \in H^{l-\frac{1}{2}}(\partial U)$, consider the system:* *[\[div-curl eqn\]]{#div-curl eqn label="div-curl eqn"} $$\begin{cases*} \curl {\vb{u}}= {\vb{f}}&in $U$, \\ \div {\vb{u}}= g &in $U$, \\ {\vb{u}}\vdot {\vb{N}}= h &on $\partial U$. \end{cases*}$$* *If on each connected component $\Gamma$ of $\partial U$, one has $$\label{compatibility div-curl} \int_\Gamma {\vb{f}}\vdot {\vb{N}}\dd{S} = 0,$$ and the following compatibility condition holds: $$\label{key} \int_{\partial U} h \dd{S} = \int_U g \dd{x},$$ then for $1 \le l \le {\frac{3}{2}k}-1$, there is a solution ${\vb{u}}\in H^l(U)$ such that $$\label{key} \norm{{\vb{u}}}_{H^l(U)} \le C\qty(\abs{\partial U}_{H^{{\frac{3}{2}k}-\frac{1}{2}}}) \cdot \qty(\norm{{\vb{f}}}_{H^{l-1}(U)} + \norm{g}_{H^{l-1}(U)} + \abs{h}_{H^{l-\frac{1}{2}}(\partial U)}).$$ The solution is unique whenever $U$ is simply-connected.* *Remark 3*. If $\vb{f} = \curl {\vb{u}}$ for some vector field ${\vb{u}}$, then ([\[compatibility div-curl\]](#compatibility div-curl){reference-type="ref" reference="compatibility div-curl"}) holds naturally (see [@Cheng-Shkoller2017 Remark 1.2]). # Reformulation of the Problem {#sec reform} ## Velocity fields on the interface Since the interface ${\Gamma_t}$ separates two plasmas, and ${\vb{v}}_\pm$ have the same normal components on ${\Gamma_t}$, it is natural to consider the evolution of $\kappa_+$ with respect to some weighted velocity $$\label{key} {\vb{u}}_\lambda \coloneqq \lambda {\vb{v}}_+ + (1-\lambda) {\vb{v}}_-$$ for some $0 \le \lambda \le 1$. 
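Every such weighted field is an evolution velocity of ${\Gamma_t}$: since the jump ${\vb{v}}_+ - {\vb{v}}_-$ is tangential to ${\Gamma_t}$, all the ${\vb{u}}_\lambda$ share the normal trace $\theta = {\vb{v}}_\pm \vdot {\vb{N}}_+$, by the one-line check

```latex
{\vb{u}}_\lambda \vdot {\vb{N}}_+
  = \lambda\, {\vb{v}}_+ \vdot {\vb{N}}_+ + (1-\lambda)\, {\vb{v}}_- \vdot {\vb{N}}_+
  = \lambda\,\theta + (1-\lambda)\,\theta
  = \theta \qfor 0 \le \lambda \le 1.
```

Thus the choice of $\lambda$ only affects the tangential transport along the interface, not the motion of ${\Gamma_t}$ itself.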
Denote by $$\label{key} {\mathbb{D}_{t_\lambda}}\coloneqq \partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_\lambda},$$ then $$\label{key} {\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda = \qty(\partial_t + \lambda \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+} + (1-\lambda)\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-}) \qty(\lambda {\vb{v}}_+ + (1-\lambda) {\vb{v}}_-).$$ In view of ([\[eqn Dtv\]](#eqn Dtv){reference-type="ref" reference="eqn Dtv"}), it holds that $$\begin{split} {\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda &= \lambda^2 \qty(-\dfrac{1}{\rho_+} \grad p^+ + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+} {\vb{h}}_+) + (1-\lambda)^2 \qty(-\dfrac{1}{\rho_-}\grad p^- + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-} {\vb{h}}_-) \\ &\quad\, + \lambda(1-\lambda) ({\mathbb{D}_t}_+ {\vb{v}}_- + {\mathbb{D}_t}_- {\vb{v}}_+). \end{split}$$ Since $$\begin{split} {\mathbb{D}_t}_+ {\vb{v}}_- = \partial_t {\vb{v}}_- + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+} {\vb{v}}_- = \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+ - {\vb{v}}_-} {\vb{v}}_- - \dfrac{1}{\rho_-} \grad p^- + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-} {\vb{h}}_-, \end{split}$$ and $${\mathbb{D}_t}_- {\vb{v}}_{+} = \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_- - {\vb{v}}_+} {\vb{v}}_+ - \dfrac{1}{\rho_+} \grad p^+ + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+} {\vb{h}}_+,$$ one may write that $${\mathbb{D}_t}_+ {\vb{v}}_- + {\mathbb{D}_t}_- {\vb{v}}_+ = - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}} {\vb{w}}- \qty(\dfrac{1}{\rho_+} \grad p^+ + \dfrac{1}{\rho_-} \grad p^- ) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+} {\vb{h}}_+ + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-} {\vb{h}}_-,$$ where ${\vb{w}}\in \mathrm{T} {\Gamma_t}$ is defined to be $$\label{key} {\vb{w}}\equiv \llbracket {\vb{v}}\rrbracket \coloneqq {\vb{v}}_+ - {\vb{v}}_-.$$ Therefore, $$\label{eqn dtl ulambda} \begin{split} {\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda = - \dfrac{\lambda}{\rho_+} \grad p^+ - \dfrac{1-\lambda}{\rho_-}\grad p^- + \lambda 
\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+ + (1-\lambda) \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-} {\vb{h}}_- - \lambda(1-\lambda) \mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}{\vb{w}}. \end{split}$$ Next, we introduce a useful decomposition of the pressure: $$\label{decomp p} \dfrac{1}{\rho_\pm} p^\pm = p^\pm_{{\vb{v}},{\vb{v}}} - p^\pm_{{\vb{h}}, {\vb{h}}} + \alpha^2 p^\pm_\kappa + p^\pm_b.$$ Here $p_{\vb{a},\vb{b}}^\pm$ are the solutions to the following elliptic problems respectively: $$\label{def p_ab^+} \begin{cases*} \mathop{}\!\mathbin{\triangle}p^+_{\vb{a}, \vb{b}} = - \tr(\mathop{}\!\mathbin{\mathrm{D}}\vb{a}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}\vb{b}_+) \qfor x \in\Omega_t^+,\\ p^+_{\vb{a}, \vb{b}} = 0 \qfor x \in {\Gamma_t}; \end{cases*}$$ $$\label{def p_ab^-} \begin{cases*} \mathop{}\!\mathbin{\triangle}p_{\vb{a}, \vb{b}}^- = - \tr(\mathop{}\!\mathbin{\mathrm{D}}\vb{a}_- \vdot \mathop{}\!\mathbin{\mathrm{D}}\vb{b}_-) \qfor x \in {\Omega}_t^-, \\ p_{\vb{a}, \vb{b}}^- = 0 \qfor x \in {\Gamma_t}, \\ \mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}} p_{\vb{a}, \vb{b}}^- = \widetilde{{\vb{I\!I}}}(\vb{a}_-, \vb{b}_-) \qfor x \in \partial{\Omega}, \end{cases*}$$ where $\vb{a} = \vb{a}_+ \mathbbm{1}_{{\Omega}_t^+} + \vb{a}_- \mathbbm{1}_{{\Omega}_t^-}$ and $\vb{b} = \vb{b}_+ \mathbbm{1}_{{\Omega}_t^+} + \vb{b}_-\mathbbm{1}_{{\Omega}_t^-}$ are solenoidal vector fields satisfying $\vb{a}_- \vdot \widetilde{{\vb{N}}} = 0 = \vb{b}_- \vdot \widetilde{{\vb{N}}}$ on $\partial\Omega$. $p_\kappa$ and $p_b$ are given respectively by (c.f. [@Shatah-Zeng2011]): $$\label{def p_kappa} p_\kappa^\pm \coloneqq \dfrac{1}{\rho_+ \rho_-} {\mathcal{H}}_\pm \bar{{\mathcal{N}}}^{-1} {\mathcal{N}}_\mp \kappa_\pm, \qand p_b^\pm \coloneqq \dfrac{1}{\rho_\pm}{\mathcal{H}}_\pm \mathfrak{p},$$ where $\mathfrak{p}$ is a function defined on $\Gamma_t$ whose expression will be determined later. 
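The operator $\bar{{\mathcal{N}}}^{-1}$ in the definition of $p_\kappa^\pm$ is exactly what calibrates the jump of the curvature part. As a sketch of the computation (formally, modulo the mean-free projection built into $\bar{{\mathcal{N}}}^{-1}$): the harmonic extensions ${\mathcal{H}}_\pm$ restrict to the identity on ${\Gamma_t}$, and $\kappa_- = -\kappa_+$ by the convention ${\vb{N}}_+ + {\vb{N}}_- = \vb{0}$, so

```latex
\rho_+ p_\kappa^+ - \rho_- p_\kappa^-
  = \bar{{\mathcal{N}}}^{-1}\qty(\dfrac{1}{\rho_-}{\mathcal{N}}_- \kappa_+)
    - \bar{{\mathcal{N}}}^{-1}\qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+ \kappa_-)
  = \bar{{\mathcal{N}}}^{-1}\qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-
    + \dfrac{1}{\rho_+}{\mathcal{N}}_+)\kappa_+
  = \bar{{\mathcal{N}}}^{-1}\bar{{\mathcal{N}}}\,\kappa_+
  = \kappa_+ \qfor x \in {\Gamma_t}.
```

The same restriction property shows that $\rho_+ p_b^+$ and $\rho_- p_b^-$ both equal $\mathfrak{p}$ on ${\Gamma_t}$.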
With this decomposition, it is routine to check that $p_{{\vb{v}},{\vb{v}}}^\pm = p_{{\vb{h}}, {\vb{h}}}^\pm = 0$ , $\rho_+ p_b^+ = \rho_- p_b^-$, and $\rho_+ p_\kappa^+ - \rho_- p_\kappa^- = \kappa_+$ hold simultaneously on ${\Gamma_t}$. Namely, ([\[jump p\]](#jump p){reference-type="ref" reference="jump p"}) is satisfied automatically. Next, we will derive the explicit formula of $\mathfrak{p}$ by using ([\[eqn Dtv\]](#eqn Dtv){reference-type="ref" reference="eqn Dtv"}), ([\[eqn bdry v\]](#eqn bdry v){reference-type="ref" reference="eqn bdry v"}) and ([\[bdry mag\]](#bdry mag){reference-type="ref" reference="bdry mag"}). Indeed, multiplying ([\[eqn Dtv\]](#eqn Dtv){reference-type="ref" reference="eqn Dtv"}) by ${\vb{N}}_+$, one has $${\vb{N}}_+ \vdot {\mathbb{D}_t}_+ {\vb{v}}_+ + \dfrac{1}{\rho_+} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} p^+ = {\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+} {\vb{h}}_+,$$ which implies that $$\begin{split} \partial_t \theta + \dfrac{1}{\rho_+} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} p^+ ={\vb{v}}_+ \vdot {\mathbb{D}_t}_+ {\vb{N}}_+ - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+}\theta + {\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+} {\vb{h}}_+. 
\end{split}$$ It follows from ([\[bdry mag\]](#bdry mag){reference-type="ref" reference="bdry mag"}), [\[def II\]](#def II){reference-type="eqref" reference="def II"} and [\[eqn dt n\]](#eqn dt n){reference-type="eqref" reference="eqn dt n"} that $$\begin{split} \partial_t \theta + \dfrac{1}{\rho_+} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} p^+ &= - {\vb{N}}_+ \vdot (\mathop{}\!\mathbin{\mathrm{D}}{\vb{v}}_+) \vdot {\vb{v}}_+^\top - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+}\theta + {\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+} {\vb{h}}_+ \\ &= - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+}\theta -\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top} \theta + {\vb{I\!I}}_+ \qty({\vb{v}}_+^\top, {\vb{v}}_+^\top) - {\vb{I\!I}}_+ \qty({\vb{h}}_+, {\vb{h}}_+). \end{split}$$ Similarly, $$\begin{split} -\partial_t \theta + \dfrac{1}{\rho_-} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-}p^- =\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-}\theta + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-^\top} \theta + {\vb{I\!I}}_- \qty({\vb{v}}_-^\top, {\vb{v}}_-^\top) - {\vb{I\!I}}_- \qty({\vb{h}}_-, {\vb{h}}_-). \end{split}$$ Due to the conventions that ${\vb{N}}_+ + {\vb{N}}_- = \vb{0}$, ${\vb{I\!I}}_+ + {\vb{I\!I}}_- = \vb{0}$, and the relation that $({\vb{v}}_+ - {\vb{v}}_-) \in \mathrm{T}{\Gamma_t}$, summing the above two equations yields $$\label{key} \begin{split} \dfrac{1}{\rho_+} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} p^+ + \dfrac{1}{\rho_-} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-} p^- = &- \qty[2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top}\theta - {\vb{I\!I}}_+\qty({\vb{v}}_+^\top, {\vb{v}}_+^\top) + {\vb{I\!I}}_+\qty({\vb{h}}_+, {\vb{h}}_+)] \\ &+ \qty[2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-^\top}\theta - {\vb{I\!I}}_+\qty({\vb{v}}_-^\top, {\vb{v}}_-^\top) + {\vb{I\!I}}_+\qty({\vb{h}}_-, {\vb{h}}_-)]. 
\end{split}$$ According to the decomposition ([\[decomp p\]](#decomp p){reference-type="ref" reference="decomp p"}) of $p^\pm$ and the relation that $$\label{key} \begin{split} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} p_\kappa^+ + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-} p_\kappa^- = \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\bar{{\mathcal{N}}}^{-1}\qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-)\kappa_+ +\qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-) \bar{{\mathcal{N}}}^{-1}\qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\kappa_- = 0, \end{split}$$ it holds that $$\label{key} \begin{split} \dfrac{1}{\rho_+} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} p^+ + \dfrac{1}{\rho_-} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-} p^- = \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} \qty(p_{{\vb{v}},{\vb{v}}}^+ - p_{{\vb{h}},{\vb{h}}}^+ - p_{{\vb{v}},{\vb{v}}}^- + p_{{\vb{h}},{\vb{h}}}^-) + \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+ + \dfrac{1}{\rho_-} {\mathcal{N}}_-) \mathfrak{p}. \end{split}$$ Therefore, one gets the formula $$\label{def g pm} \begin{split} \bar{{\mathcal{N}}}\mathfrak{p} = &- \qty[2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top}\theta - {\vb{I\!I}}_+\qty({\vb{v}}_+^\top, {\vb{v}}_+^\top) + {\vb{I\!I}}_+\qty({\vb{h}}_+, {\vb{h}}_+) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}}, {\vb{v}}}^+ - p_{{\vb{h}}, {\vb{h}}}^+)] \\ &+ \qty[2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-^\top}\theta - {\vb{I\!I}}_+\qty({\vb{v}}_-^\top, {\vb{v}}_-^\top) + {\vb{I\!I}}_+\qty({\vb{h}}_-, {\vb{h}}_-) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}},{\vb{v}}}^- - p_{{\vb{h}}, {\vb{h}}}^-)] \\ =: &-g^+ + g^-, \end{split}$$ namely, $$\label{def frak p} \mathfrak{p} = \bar{{\mathcal{N}}}^{-1}(-g^+ + g^-).$$ In conclusion, if $\qty({\Gamma_t}, {\vb{v}}, {\vb{h}})$ is a solution to (MHD)-(BC) with ${\Gamma_t}\in \Lambda_*$, ${\Gamma_t}\in H^{{\frac{3}{2}k}+1}$ and ${\vb{v}}, {\vb{h}}\in H^{{\frac{3}{2}k}}({\Omega\setminus{\Gamma_t}})$, then the following estimate holds: 
$$\label{est Dtl ulam} \abs{{\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda}_{H^{{\frac{3}{2}k}-2}({\Gamma_t})} \le Q_\lambda\qty(\alpha\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}, \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega\setminus{\Gamma_t}})}, \norm{{\vb{h}}}_{H^{{\frac{3}{2}k}}({\Omega\setminus{\Gamma_t}})}),$$ where $Q_\lambda$ is a generic polynomial depending only on $\Lambda_*$ and $\lambda$. (Indeed, for $\frac{1}{2} < s \le {\frac{3}{2}k}- 2$, one has $\abs{\grad p_\kappa}_{H^{s}({\Gamma_t})} \lesssim_{\Lambda_*} \abs{\kappa}_{H^{s+1}({\Gamma_t})}$, so the best estimate on $({\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda)|_{\Gamma_t}$ is its $H^{{\frac{3}{2}k}-2}$ norm.) *Remark 4*. The following formula holds as long as ${\vb{v}}_\pm$ are the evolution velocities of ${\Omega}_t^\pm$ (which is, in particular, independent of the MHD problems): $$\label{int g^+ - g^-} \int_{\Gamma_t}g^+ - g^- \dd{S} = -\int_{{\Omega\setminus{\Gamma_t}}} \qty(\div {\vb{v}})^2 \dd{x}.$$ So [\[def g pm\]](#def g pm){reference-type="eqref" reference="def g pm"} makes sense whenever ${\vb{v}}_\pm$ are both solenoidal. 
Indeed, $$\begin{split} &\hspace{-1.5em}\int_{\Gamma_t}g^+ - g^- \dd{S_t} \\ =\, &\int_{\Gamma_t}\qty({\mathbb{D}_t}_+ + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top}) ({\vb{v}}_+ \vdot {\vb{N}}_+) - {\vb{I\!I}}_+\qty({\vb{v}}_+^\top, {\vb{v}}_+^\top) + {\vb{I\!I}}_+\qty({\vb{h}}_+, {\vb{h}}_+) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}}, {\vb{v}}}^+ - p_{{\vb{h}}, {\vb{h}}}^+) \dd{S_t} \\ &+\int_{\Gamma_t}\qty({\mathbb{D}_t}_- + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-^\top})({\vb{v}}_- \vdot {\vb{N}}_-) - {\vb{I\!I}}_-\qty({\vb{v}}_-^\top, {\vb{v}}_-^\top) + {\vb{I\!I}}_-\qty({\vb{h}}_-, {\vb{h}}_-) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-}\qty(p_{{\vb{v}},{\vb{v}}}^- - p_{{\vb{h}}, {\vb{h}}}^-) \dd{S_t} \\ =\, &\int_{{\Omega}_t^+} {\mathbb{D}_t}_+ \qty(\div{\vb{v}}_+) \dd{x} +\int_{{\Omega}_t^-} {\mathbb{D}_t}_- \qty(\div{\vb{v}}_-) \dd{x} \\ =\, &\dv{t}(\int_{{\Omega}_t^+} \qty(\div{\vb{v}}_+) \dd{x}) - \int_{{\Omega}_t^+} \qty(\div {\vb{v}}_+)^2 \dd{x} +\dv{t}(\int_{{\Omega}_t^-}\qty(\div {\vb{v}}_-)\dd{x}) - \int_{{\Omega}_t^-} \qty(\div{\vb{v}}_-)^2 \dd{x} \\ =\, &-\int_{{\Omega\setminus{\Gamma_t}}} \qty(\div{\vb{v}})^2 \dd{x}. \end{split}$$ ## Transformation of the velocity {#sec var v_*} As stated in the preliminary, a vector field defined in a bounded simply-connected domain is determined by its divergence, curl and appropriate boundary conditions. Since both the velocity and magnetic fields are solenoidal, they are uniquely determined by the vorticities, currents and boundary conditions. 
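The uniqueness behind this determination can be sketched directly. If ${\vb{u}}$ is the difference of two solutions on a bounded simply-connected domain $U$, so that $\div {\vb{u}}= 0$ and $\curl {\vb{u}}= \vb{0}$ in $U$ with ${\vb{u}}\vdot {\vb{N}}= 0$ on $\partial U$, then ${\vb{u}}= \grad \varphi$ for a single-valued potential $\varphi$, and integrating by parts gives

```latex
\int_U \abs{{\vb{u}}}^2 \dd{x}
  = \int_U \grad\varphi \vdot \grad\varphi \dd{x}
  = \int_{\partial U} \varphi \, \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}}\varphi \dd{S}
    - \int_U \varphi \mathop{}\!\mathbin{\triangle}\varphi \dd{x}
  = 0,
```

since $\mathop{}\!\mathbin{\triangle}\varphi = \div {\vb{u}}= 0$ and $\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}}\varphi = {\vb{u}}\vdot {\vb{N}}= 0$. Hence ${\vb{u}}\equiv \vb{0}$, so the vorticities together with the boundary data determine the velocity.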
Therefore, denote by $$\label{key} {\vb*{\omega}}_{*\pm} \coloneqq \qty(\curl {\vb{v}}_\pm) \circ \mathcal{X}_{\Gamma_t}^\pm,$$ then the velocity field ${\vb{v}}$ can be uniquely determined by ${\varkappa_a}$, $\theta$ and ${\vb*{\omega}}_*$ via solving the following div-curl problems: $$\label{sys div-curl v} \begin{cases*} \div {\vb{v}}_\pm = 0 &in ${\Omega}_t^\pm$, \\ \curl {\vb{v}}_\pm = {\vb*{\omega}}_{*\pm} \circ (\mathcal{X}_{\Gamma_t}^\pm)^{-1} &in ${\Omega}_t^\pm$, \\ {\vb{v}}_\pm \vdot {\vb{N}}_+ = \theta &on ${\Gamma_t}$, \\ {\vb{v}}_- \vdot \widetilde{{\vb{N}}} = 0 &on $\partial{\Omega}$. \end{cases*}$$ Next, for a function $f : {\Gamma_t}\to \mathbb{R}$, it is natural to pull back ${\mathbb{D}_{t_\lambda}}f$ to ${\Gamma_\ast}$ via $\Phi_{\Gamma_t}$, namely, one needs to look for a vector field ${\vb{u}}_{\lambda *} : {\Gamma_\ast}\to \mathbb{R}^3$ so that $$\label{pull back Dt to Gs} {\mathbb{D}_{t_\lambda}}_* \qty(f \circ \Phi_{\Gamma_t}) = \qty({\mathbb{D}_{t_\lambda}}f) \circ \Phi_{\Gamma_t},$$ where $$\label{key} {\mathbb{D}_{t_\lambda}}_* \coloneqq \partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_{\lambda *}}.$$ It is necessary that $$\qty(\mathop{}\!\mathbin{\mathrm{D}}f)\circ \Phi_{\Gamma_t}\vdot \qty(\partial_t \Phi_{\Gamma_t}+ \mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}\vdot {\vb{u}}_{\lambda *}) = \qty(\mathop{}\!\mathbin{\mathrm{D}}f)\circ \Phi_{\Gamma_t}\vdot {\vb{u}}_\lambda \circ \Phi_{\Gamma_t}.$$ On the other hand, it suffices to define $$\label{def u lambda *} \begin{split} {\vb{u}}_{\lambda *} \coloneqq \qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\qty({\vb{u}}_\lambda \circ\Phi_{\Gamma_t}- \partial_t\Phi_{\Gamma_t}) = \qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\qty[{\vb{u}}_\lambda \circ\Phi_{\Gamma_t}- (\partial_t \gamma_{\Gamma_t}){\vb*{\nu}}]. 
\end{split}$$ Since $$\label{key} \theta = \qty(\partial_t \gamma_{\Gamma_t}{\vb*{\nu}})\circ\qty(\Phi_{\Gamma_t})^{-1} \vdot {\vb{N}}_+,$$ one has $$\label{key} \qty[{\vb{u}}_\lambda - \qty(\partial_t \gamma_{\Gamma_t}{\vb*{\nu}})\circ\qty(\Phi_{\Gamma_t})^{-1} ] \vdot {\vb{N}}_+ = 0,$$ i.e., $\qty[{\vb{u}}_\lambda - \qty(\partial_t \gamma_{\Gamma_t}{\vb*{\nu}})\circ\qty(\Phi_{\Gamma_t})^{-1} ] \in \ensuremath{\mathrm{T}}{\Gamma_t}$ and ${\vb{u}}_{\lambda*} \in \ensuremath{\mathrm{T}}{\Gamma_\ast}$. ### Variational estimates {#variational-estimates .unnumbered} In order to compute the variation of ${\vb{u}}_{\lambda*}$, one can assume that ${\varkappa_a}$ and ${\vb*{\omega}}_*$ depend on a parameter $\beta$. By ([\[def u lambda \*\]](#def u lambda *){reference-type="ref" reference="def u lambda *"}), it suffices to compute $\partial_\beta {\vb{v}}_{\pm*}$. Applying $\pdv*{\beta}$ to the identity $$\qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}) \vdot {\vb{v}}_{\pm*} = {\vb{v}}_\pm \circ \Phi_{\Gamma_t}- (\partial_t\gamma_{\Gamma_t}){\vb*{\nu}},$$ one has $$\mathop{}\!\mathbin{\mathrm{D}}\qty(\partial_\beta {\gamma_{\Gamma_t}}{\vb*{\nu}}) \vdot {\vb{v}}_{\pm*} + (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}) \vdot \partial_\beta{\vb{v}}_{\pm*} = \partial_\beta\qty({\vb{v}}_\pm \circ \Phi_{\Gamma_t}) - \qty(\partial^2_{t\beta}{\gamma_{\Gamma_t}}){\vb*{\nu}},$$ for which $$\partial_\beta ({\vb{v}}_\pm \circ \Phi_{\Gamma_t}) = \qty(\partial_\beta {\vb{v}}_\pm + \mathop{}\!\mathbin{\mathrm{D}}_{(\partial_\beta {\gamma_{\Gamma_t}}{\vb*{\nu}})\circ (\Phi_{\Gamma_t})^{-1}} {\vb{v}}_\pm) \circ \Phi_{\Gamma_t}.$$ Denote by $\vb*{\mu} \coloneqq (\partial_\beta {\gamma_{\Gamma_t}}{\vb*{\nu}})\circ (\Phi_{\Gamma_t})^{-1}$ and use the notation $$\label{def Dbt} {\mathbb{D}_{\beta}}\coloneqq \partial_\beta + \mathop{}\!\mathbin{\mathrm{D}}_{\vb*{\mu}}.$$ Then $$\label{eqn pd beta v+*} \partial_\beta {\vb{v}}_{\pm*} = (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} 
\vdot \qty[\qty({\mathbb{D}_{\beta}}{\vb{v}}_{\pm})\circ\Phi_{\Gamma_t}- \qty(\partial^2_{t\beta}{\gamma_{\Gamma_t}}){\vb*{\nu}}-\mathop{}\!\mathbin{\mathrm{D}}(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}})\vdot{\vb{v}}_{\pm*} ].$$ In particular, $\qty[(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}) \vdot \partial_\beta{\vb{v}}_{\pm*}] \circ \Phi_{\Gamma_t}^{-1} \vdot {\vb{N}}_+ \equiv 0$, so $\partial_\beta {\vb{v}}_{\pm*} \in \ensuremath{\mathrm{T}}{\Gamma_\ast}$. Applying ${\mathbb{D}_{\beta}}$ to [\[sys div-curl v\]](#sys div-curl v){reference-type="eqref" reference="sys div-curl v"} and utilizing the commutator estimates together with the div-curl estimates, one can derive that for $\frac{1}{2} \le \sigma \le {\frac{3}{2}k}-\frac{3}{2}$, $$\label{est pd beta v+*} \begin{split} \abs{\partial_\beta {\vb{v}}_{\pm*}}_{\H{\sigma}} \lesssim_{\Lambda_*} &\abs{\partial_\beta {\gamma_{\Gamma_t}}}_{\H{\sigma + 1}} \qty(\norm{{\vb*{\omega}}_{*\pm}}_{H^{{\frac{3}{2}k}-1}({\Omega}_*^\pm)} + \abs{\partial_t {\gamma_{\Gamma_t}}}_{\H{{\frac{3}{2}k}-\frac{1}{2}}}) \\ &+ \norm{\partial_\beta{\vb*{\omega}}_{*\pm}}_{H^{\sigma-\frac{1}{2}}({\Omega}_*^\pm)} + \abs{\partial^2_{t\beta}{\gamma_{\Gamma_t}}}_{\H{\sigma}}. 
\end{split}$$ Recalling that ${\gamma_{\Gamma_t}}= {\mathfrak{K}}^{-1}({\varkappa_a})$, one can obtain that $$\label{eqn pd gt} \partial_\beta {\gamma_{\Gamma_t}}= \var{{\mathfrak{K}}^{-1}}({\varkappa_a}) [\partial_\beta{\varkappa_a}],$$ and $$\label{eqn pd2 gt} \partial^2_{t\beta}{\gamma_{\Gamma_t}}= \var{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial^2_{t\beta}{\varkappa_a}] + \var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_t{\varkappa_a}, \partial_\beta{\varkappa_a}].$$ In conclusion, the linear relations imply the existence of six linear operators ${\mathbf{B}}_\pm({\varkappa_a})$, ${\mathbf{F}}_\pm({\varkappa_a})$ and ${\mathbf{G}}_\pm({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_{*\pm})$ whose ranges are all in $\ensuremath{\mathrm{T}}{\Gamma_\ast}$, so that $$\label{eqn pd beta v} \partial_\beta {\vb{v}}_{\pm*} = {\mathbf{B}}_\pm({\varkappa_a})\partial^2_{t\beta}{\varkappa_a}+ {\mathbf{F}}_\pm({\varkappa_a})\partial_\beta{\vb*{\omega}}_{*\pm} + {\mathbf{G}}_\pm({\varkappa_a}, \partial_t {\varkappa_a}, {\vb*{\omega}}_{*\pm})\partial_\beta{\varkappa_a}.$$ Moreover, the following lemma holds, whose proof will be given in the Appendix: **Lemma 12**. *Suppose that $a \ge a_0$ and ${\varkappa_a}\in B_{\delta_1} \subset \H{{\frac{3}{2}k}-\frac{5}{2}}$, where $a_0$ and $B_{\delta_1}$ are given in Proposition [Proposition 6](#prop K){reference-type="ref" reference="prop K"}. 
If $s'-2 \le s'' \le s' \le {\frac{3}{2}k}-\frac{3}{2}$, $s' \ge \frac{1}{2}$, and $\frac{1}{2} \le s \le {\frac{3}{2}k}-\frac{3}{2}$, then the following estimates hold: $$\label{est B} \abs{{\mathbf{B}}_\pm({\varkappa_a})}_{{\mathscr{L}}\qty(\H{s''}; H^{s'}({\Gamma_\ast}; \ensuremath{\mathrm{T}}{\Gamma_\ast})) } \le C_* a^{s'-s''-2},$$ $$\label{est var B} \abs{\var{\mathbf{B}}_\pm({\varkappa_a})}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}; {\mathscr{L}}\qty(\H{s-2}; \H{s})]} \le C_*,$$ $$\label{est F} \abs{{\mathbf{F}}_\pm({\varkappa_a})}_{{\mathscr{L}}\qty(H^{s-\frac{1}{2}}({\Omega}_*^\pm); \H{s})} \le C_*,$$ and $$\label{est var F} \abs{\var{{\mathbf{F}}_\pm}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}; {\mathscr{L}}\qty(H^{s-\frac{1}{2}}({\Omega}_*^\pm); \H{s})]} \le C_*,$$ where $C_*$ is a constant depending only on $\Lambda_{*}$.* *Moreover, if ${\varkappa_a}\in B_{\delta_1} \cap \H{{\frac{3}{2}k}-\frac{3}{2}}$, $\partial_t {\varkappa_a}\in \H{{\frac{3}{2}k}-\frac{5}{2}}$, and ${\vb*{\omega}}_* \in H^{{\frac{3}{2}k}-1}({\Omega}\setminus{\Gamma_\ast})$, then for $\sigma'-2 \le \sigma'' \le \sigma' \le {\frac{3}{2}k}+ \frac{1}{2}$, $\sigma' \ge \frac{1}{2}$, and $s$ given above, it holds that $$\label{est B version2} \abs{{\mathbf{B}}_\pm({\varkappa_a})}_{{\mathscr{L}}\qty[\H{\sigma''}; \H{\sigma'}]} \le a^{\sigma'-\sigma''-2} Q\qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}),$$ $$\label{est G} \begin{split} \abs{{\mathbf{G}}_\pm({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_{*\pm})}_{{\mathscr{L}}\qty[\H{s-1}; \H{s}]} \le Q\qty(\abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_{*\pm}}_{H^{{\frac{3}{2}k}-1}({\Omega}_*^\pm)}), \end{split}$$ and for $\frac{1}{2} \le \sigma \le {\frac{3}{2}k}-\frac{5}{2}$, $$\label{est var G} \begin{split} &\hspace{-2em}\abs{\var{{\mathbf{G}}_\pm}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_{*\pm})}_{{\mathscr{L}}\qty[\H{\sigma-1}\times 
\H{\sigma-2}\times H^{\sigma-\frac{1}{2}}({\Omega}_*^\pm); {\mathscr{L}}\qty(\H{\sigma-1}; \H{\sigma}) ]} \\ \le\, &Q\qty(\abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_{*\pm}}_{H^{{\frac{3}{2}k}-1}({\Omega}_*^\pm)}), \end{split}$$ where $Q$ is a generic polynomial depending only on $\Lambda_*$.* ## Evolution of the mean curvature {#sectoin evolution kappa} Suppose that $({\Gamma_t}, {\vb{v}}, {\vb{h}})$ is a solution to the interface problem (MHD)-(BC) for $0 \le t \le T$ with ${\Gamma_t}\in \Lambda_*$, ${\Gamma_t}\in H^{{\frac{3}{2}k}+ 1}$ and ${\vb{v}}(t), {\vb{h}}(t) \in H^{{\frac{3}{2}k}}({\Omega}\setminus {\Gamma_t})$. The hypersurface ${\Gamma_t}$ is uniquely determined by the function ${\varkappa_a}(t) : {\Gamma_\ast}\to \mathbb{R}$, whose leading order term is $\kappa_+$. Then, it is natural to consider the evolution equation for $\kappa_+$ under a weighted velocity ${\vb{u}}_\lambda$. Plugging ${\vb{u}}_\lambda$ into [\[eqn Dt2 kappa alt\]](#eqn Dt2 kappa alt){reference-type="eqref" reference="eqn Dt2 kappa alt"} yields $$\label{eqn dt2 k+} \begin{split} {\mathbb{D}_{t_\lambda}}^{\!2} \kappa_+ = &-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda \vdot {\vb{N}}_+) + {\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda \vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+ + 2{\vb{N}}_+ \vdot (\grad{\vb{u}}_\lambda) \vdot (\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{u}}_\lambda)^\top \\ &+4 \ip{\vb{A}_\lambda}{{\vb{N}}_+ \vdot (\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2{\vb{u}}_\lambda} - \kappa_+ \abs{(\grad{\vb{u}}_\lambda)^*\vdot{\vb{N}}_+}^2 \\ &- 2\ip{{\vb{I\!I}}_+}{\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{u}}_\lambda \vdot \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{u}}_\lambda} +4 \ip{{\vb{I\!I}}_+\vdot\vb{A}_\lambda + \vb{A}_\lambda\vdot{\vb{I\!I}}_+}{\vb{A}_\lambda}, \end{split}$$ where $$\vb{A}_\lambda \coloneqq 
\dfrac{1}{2}\qty{(\mathop{}\!\mathbin{\mathrm{D}}{\vb{u}}_\lambda)^\top + \qty[(\mathop{}\!\mathbin{\mathrm{D}}{\vb{u}}_\lambda)^\top]^*}.$$ Denoting by $Q_\lambda$ a generic polynomial determined only by $\Lambda_*$ and $\lambda$, one gets from [\[est Dtl ulam\]](#est Dtl ulam){reference-type="eqref" reference="est Dtl ulam"}, the product estimates, and Lemma [Lemma 5](#est ii){reference-type="ref" reference="est ii"} that $$\label{key} \begin{split} &\abs{{\mathbb{D}_{t_\lambda}}^{\!2}\kappa_+ + \mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda \vdot {\vb{N}}_+)}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} \\ &\quad \le \begin{cases*} Q_\lambda \qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \norm{({\vb{v}}, {\vb{h}})}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})}) \qif k = 2, \\ Q_\lambda \qty(\alpha\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}, \norm{({\vb{v}}, {\vb{h}})}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})}) \qif k \ge 3. 
\end{cases*} \end{split}$$ Since ${\vb{w}}\equiv \llbracket {\vb{v}}\rrbracket \coloneqq {\vb{v}}_+ - {\vb{v}}_- \in \ensuremath{\mathrm{T}}{\Gamma_t}$, it is routine to calculate $$\label{key} \begin{split} {\vb{N}}_+ \vdot {\mathbb{D}_{t_\lambda}}{\vb{u}}_\lambda =\, & \dfrac{1}{\rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-}p^- - \lambda \qty(\dfrac{1}{\rho_+}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}p^+ + \dfrac{1}{\rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-}p^-) \\ &+ {\vb{N}}_+ \vdot \qty[\lambda \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+ + (1-\lambda) \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-} {\vb{h}}_- - \lambda(1-\lambda) \mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}{\vb{w}}] \\ = &-\alpha^2\widetilde{{\mathcal{N}}}\kappa_+ + \qty(\dfrac{\lambda}{\rho_+}{\mathcal{N}}_+ - \dfrac{1-\lambda}{\rho_-}{\mathcal{N}}_-)\bar{{\mathcal{N}}}^{-1}\qty(g^+-g^-) \\ &-\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty[\lambda\qty(p_{{\vb{v}},{\vb{v}}}^+ - p_{{\vb{h}},{\vb{h}}}^+) + (1-\lambda)\qty(p_{{\vb{v}},{\vb{v}}}^- - p_{{\vb{h}},{\vb{h}}}^-) ] \\ &-\lambda {\vb{I\!I}}_+ \qty({\vb{h}}_+, {\vb{h}}_+) - (1-\lambda){\vb{I\!I}}_+({\vb{h}}_-, {\vb{h}}_-) + \lambda(1-\lambda){\vb{I\!I}}_+({\vb{w}}, {\vb{w}}). \end{split}$$ In order to control the $H^{{\frac{3}{2}k}-\frac{1}{2}}$ norm of $\qty(\dfrac{\lambda}{\rho_+}{\mathcal{N}}_+ - \dfrac{1-\lambda}{\rho_-}{\mathcal{N}}_-)\bar{{\mathcal{N}}}^{-1}\qty(g^+-g^-)$, it suffices to have $$\label{est lam n+ - n-} \abs{\qty(\dfrac{\lambda}{\rho_+}{\mathcal{N}}_+ - \dfrac{1-\lambda}{\rho_-}{\mathcal{N}}_-)}_{{\mathscr{L}}\qty(H^s({\Gamma_t}); H^s({\Gamma_t}))} \le Q \qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}})$$ for $\frac{1}{2} - {\frac{3}{2}k}\le s \le {\frac{3}{2}k}- \frac{1}{2}$ and some generic polynomial $Q$ determined by $\Lambda_*$. 
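The role of the coefficients can be seen from the principal part of the Dirichlet-Neumann operators: Lemma [Lemma 9](#lem lap-n){reference-type="ref" reference="lem lap-n"} allows one to split

```latex
\dfrac{\lambda}{\rho_+}{\mathcal{N}}_+ - \dfrac{1-\lambda}{\rho_-}{\mathcal{N}}_-
  = \qty(\dfrac{\lambda}{\rho_+} - \dfrac{1-\lambda}{\rho_-})
      \qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})^{\frac{1}{2}}
    - \dfrac{\lambda}{\rho_+}
      \qty[\qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})^{\frac{1}{2}} - {\mathcal{N}}_+]
    + \dfrac{1-\lambda}{\rho_-}
      \qty[\qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})^{\frac{1}{2}} - {\mathcal{N}}_-],
```

where the two bracketed differences are bounded on $H^s({\Gamma_t})$, whereas $\qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})^{\frac{1}{2}}$ costs one derivative. The operator is therefore of order zero exactly when the leading coefficients cancel.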
Thanks to Lemma [Lemma 9](#lem lap-n){reference-type="ref" reference="lem lap-n"}, [\[est lam n+ - n-\]](#est lam n+ - n-){reference-type="eqref" reference="est lam n+ - n-"} holds as long as $\lambda$ satisfies $$\dfrac{\lambda}{\rho_+} = \dfrac{1-\lambda}{\rho_-} \Longleftrightarrow \lambda = \dfrac{\rho_+}{\rho_+ + \rho_-}.$$ Denote by $$\label{def vbu} {\vb{u}}\coloneqq \dfrac{\rho_+}{\rho_+ + \rho_-} {\vb{v}}_+ + \dfrac{\rho_-}{\rho_+ + \rho_-} {\vb{v}}_-, \qand {\mathbb{D}_{\bar{t}}}\coloneqq \partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}.$$ Then ${\vb{u}}\vdot {\vb{N}}_+ = \theta$ and $$\begin{split} {\vb{N}}_+ \vdot {\mathbb{D}_{\bar{t}}}{\vb{u}}=\, &- \alpha^2\widetilde{{\mathcal{N}}}\kappa_+ + \dfrac{\rho_+ \rho_-}{\qty(\rho_+ + \rho_-)^{2}} {\vb{I\!I}}_{+}({\vb{w}}, {\vb{w}}) \\ &- \dfrac{\rho_+}{\rho_+ + \rho_-}{\vb{I\!I}}_+ \qty({\vb{h}}_+, {\vb{h}}_+) - \dfrac{\rho_-}{\rho_+ + \rho_-} {\vb{I\!I}}_+\qty({\vb{h}}_-, {\vb{h}}_-) + \mathfrak{r}_0, \end{split}$$ where $$\label{def frak[r_0]} \begin{split} \mathfrak{r}_0 \coloneqq \dfrac{1}{\rho_+ + \rho_-}\qty({\mathcal{N}}_+ - {\mathcal{N}}_-)\bar{{\mathcal{N}}}^{-1}\qty(g^+ - g^-) - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty[\dfrac{\rho_+}{\rho_+ + \rho_-}\qty(p_{{\vb{v}},{\vb{v}}}^+ - p_{{\vb{h}},{\vb{h}}}^+) + \dfrac{\rho_-}{\rho_+ + \rho_-}\qty(p_{{\vb{v}},{\vb{v}}}^- - p_{{\vb{h}},{\vb{h}}}^-)] \end{split}$$ satisfies $$\abs{\mathfrak{r}_0}_{H^{{\frac{3}{2}k}-\frac{1}{2}}({\Gamma_t})} \le Q \qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}, \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega\setminus{\Gamma_t}})}, \norm{{\vb{h}}}_{H^{{\frac{3}{2}k}} ({\Omega\setminus{\Gamma_t}})}).$$ Define the following two operators: $$\label{def op a} {\mathcal{A}}(\Gamma) \coloneqq - \mathop{}\!\mathbin{\triangle}_{\Gamma} \widetilde{{\mathcal{N}}}$$ and $$\label{def op r} {\mathcal{R}}\qty(\Gamma, \vb{J}) \coloneqq \qty(\mathop{}\!\mathbin{\mathrm{D}}_\Gamma)_{\vb{J}} 
\qty(\mathop{}\!\mathbin{\mathrm{D}}_\Gamma)_{\vb{J}}$$ for a vector field $\vb{J} \in \ensuremath{\mathrm{T}}\Gamma$. It follows from Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"} that $$\label{comm lap ii R} \abs{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty[{\vb{I\!I}}_+(\vb{J}, \vb{J})] - {\mathcal{R}}({\Gamma_t}, \vb{J})\kappa_+}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} \le Q\qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}) \abs{\vb{J}}^2_{H^{{\frac{3}{2}k}-\frac{1}{2}}({\Gamma_t})},$$ where ${\vb{J}}\in \ensuremath{\mathrm{T}}{\Gamma_t}$ is an $H^{{\frac{3}{2}k}-\frac{1}{2}}$ tangential vector field and $Q$ is a generic polynomial determined by $\Lambda_*$. Thus, by using the following notations: $$\label{eqn mathfrak[R]_0} \begin{split} \mathfrak{R}_0 \coloneqq\, &\va{\mathfrak{a}} \vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+ + 2 {\vb{N}}_+ \vdot (\grad{\vb{u}}) \vdot (\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{u}})^\top + 4\ip{\vb{A}}{{\vb{N}}_+ \vdot (\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2{\vb{u}}}\\ &-\kappa_+ \abs{(\grad{\vb{u}})^*\vdot{\vb{N}}_+}^2 - 2\ip{{\vb{I\!I}}_+}{\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{u}}\vdot \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{u}}} + 4\ip{{\vb{I\!I}}_+ \vdot \vb{A} + \vb{A} \vdot {\vb{I\!I}}_+}{\vb{A}} \\ &+ \dfrac{\rho_+\rho_-}{(\rho_+ + \rho_-)^2}\qty{{\mathcal{R}}({\Gamma_t}, {\vb{w}})\kappa_+ - \mathop{}\!\mathbin{\triangle}_{{\Gamma_t}}[{\vb{I\!I}}_+({\vb{w}}, {\vb{w}})]} \\ &+ \dfrac{\rho_+}{\rho_+ + \rho_-}\qty{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty[{\vb{I\!I}}_+({\vb{h}}_+, {\vb{h}}_+)] - {\mathcal{R}}({\Gamma_t}, {\vb{h}}_+)\kappa_+} \\ &+ \dfrac{\rho_-}{\rho_+ + \rho_-}\qty{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty[{\vb{I\!I}}_+({\vb{h}}_-, {\vb{h}}_-)] - {\mathcal{R}}({\Gamma_t}, {\vb{h}}_-)\kappa_+} \\ &- \mathop{}\!\mathbin{\triangle}_{\Gamma_t}\mathfrak{r}_0, \end{split}$$ with $$\label{eqn va a} 
\begin{split} \va{\mathfrak{a}} \coloneqq \dfrac{1}{\rho_+ + \rho_-} \qty(\grad p^+ + \grad p^-) + \dfrac{\rho_+ \rho_-}{\qty(\rho_+ + \rho_-)^2}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}}{\vb{w}}- \dfrac{\rho_+}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+ - \dfrac{\rho_-}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}{\vb{h}}_-, \end{split}$$ and $p$ given by ([\[decomp p\]](#decomp p){reference-type="ref" reference="decomp p"}), ([\[def g pm\]](#def g pm){reference-type="ref" reference="def g pm"}) and [\[def frak p\]](#def frak p){reference-type="eqref" reference="def frak p"}, one can obtain the following lemma: **Lemma 13**. *There exists a generic polynomial $Q$ depending only on $\Lambda_*$, so that for any solution to (MHD)-(BC) for $t \in [0, T]$, $k \ge 2$, ${\Gamma_t}\in \Lambda_* \cap H^{{\frac{3}{2}k}+ 1}$ and ${\vb{v}}, {\vb{h}}\in H^{{\frac{3}{2}k}}({\Omega}\setminus {\Gamma_t})$, the mean curvature $\kappa_+$ satisfies the equation $$\label{eqn Dt2 kappa+} \begin{split} {\mathbb{D}_{\bar{t}}}^2 \kappa_+ &+ \alpha^2 {\mathcal{A}}({\Gamma_t})\kappa_+ + \dfrac{\rho_+\rho_-}{\qty(\rho_+ + \rho_-)^2}{\mathcal{R}}({\Gamma_t}, {\vb{w}})\kappa_+ \\ &-\dfrac{\rho_+}{\rho_+ + \rho_-}{\mathcal{R}}({\Gamma_t}, {\vb{h}}_+)\kappa_+ - \dfrac{\rho_-}{\rho_+ + \rho_-}{\mathcal{R}}({\Gamma_t}, {\vb{h}}_-)\kappa_+ = \mathfrak{R}_0, \end{split}$$ where ${\mathbb{D}_{\bar{t}}}$ is defined in [\[def vbu\]](#def vbu){reference-type="eqref" reference="def vbu"}, and $\mathfrak{R}_0 : {\Gamma_t}\to \mathbb{R}$ satisfies $$\label{est frak R_0} \abs{\mathfrak{R}_0}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} \le Q \qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})}, \norm{{\vb{h}}}_{H^{{\frac{3}{2}k}} ({\Omega}\setminus{\Gamma_t})}).$$ Furthermore, if $k \ge 3$, the estimate of $\mathfrak{R}_0$ can be refined to $$\label{est frak R_0'} 
\abs{\mathfrak{R}_0}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} \le Q \qty(\alpha\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}} \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})}, \norm{{\vb{h}}}_{H^{{\frac{3}{2}k}} ({\Omega}\setminus{\Gamma_t})}). \tag{\ref{est frak R_0}'}$$* ## Evolution of ${\varkappa_a}$ {#sec evo ka} In order to compute the evolution equation for ${\varkappa_a}$, it is convenient to pull back ${\mathcal{A}}$ and ${\mathcal{R}}$ to ${\Gamma_\ast}$. More precisely, for $\Gamma \in \Lambda_*$, $\vb{J}_* \in \ensuremath{\mathrm{T}}{\Gamma_\ast}$, and $f : {\Gamma_\ast}\to \mathbb{R}$, define: $$\label{def opA} {\mathscr{A}}({\varkappa_a})f \coloneqq \qty[{\mathcal{A}}(\Gamma)(f \circ \Phi_\Gamma^{-1})]\circ\Phi_\Gamma,$$ and $$\label{def opR} {\mathscr{R}}({\varkappa_a}, \vb{J}_*)f \coloneqq \qty[{\mathcal{R}}(\Gamma, \vb{J})(f\circ\Phi_\Gamma^{-1})]\circ\Phi_\Gamma,$$ where $\vb{J} \coloneqq \ensuremath{\mathrm{T}}\Phi_\Gamma (\vb{J}_*) = \qty[(\mathop{}\!\mathbin{\mathrm{D}}\Phi_\Gamma) \vdot \vb{J}_*] \circ (\Phi_\Gamma)^{-1} \in \ensuremath{\mathrm{T}}\Gamma$. Furthermore, the following lemma holds (c.f. [@Shatah-Zeng2011]): **Lemma 14**. 
*There are positive constants $C_*, \delta_1$ depending only on $\Lambda_*$, so that for ${\varkappa_a}\in B_{\delta_1} \subset \H{{\frac{3}{2}k}-\frac{5}{2}}$, $\vb{J}_* \in \H{{\frac{3}{2}k}-\frac{1}{2}}$, $2 \le s \le {\frac{3}{2}k}- \frac{1}{2}$, $1 \le \sigma \le {\frac{3}{2}k}-\frac{1}{2}$ and $2 \le s_1 \le {\frac{3}{2}k}- \frac{1}{2}$, the following estimates hold: $$\label{est opA} \abs{{\mathscr{A}}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{s}; \H{s-3}]} \le C_*,$$ $$\label{est opR} \abs{{\mathscr{R}}({\varkappa_a}, {\vb{J}}_*)}_{{\mathscr{L}}\qty[\H{\sigma}; \H{\sigma-2}]} \le C_* \abs{{\vb{J}}_*}_{\H{{\frac{3}{2}k}-\frac{1}{2}}}^2,$$ and $$\label{est var opA} \abs{\var{{\mathscr{A}}}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}; {\mathscr{L}}\qty(\H{s_1}; \H{s_1-3})]} \le C_*.$$ Furthermore, if $k \ge 3$, it holds for $2 \le s_2 \le {\frac{3}{2}k}-1$ that $$\label{est var opR} \begin{split} \abs{\var{{\mathscr{R}}}({\varkappa_a}, {\vb{J}}_*)}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}\times\H{{\frac{3}{2}k}-2}; {\mathscr{L}}\qty(\H{s_2}; \H{s_2-2})]} \le C_*\qty(1 +\abs{{\vb{J}}_*}^2_{\H{{\frac{3}{2}k}-\frac{1}{2}}}). \end{split}$$* *Proof.* [\[est opA\]](#est opA){reference-type="eqref" reference="est opA"} and [\[est opR\]](#est opR){reference-type="eqref" reference="est opR"} follow from the definitions. As for the variational estimates, suppose that $\{{\Gamma_t}\}_{t} \subset \Lambda_*$ is a family of hypersurfaces parameterized by $t$. 
Then by considering the velocity of the moving hypersurface: $$\label{key} {\vb{v}}\coloneqq \qty(\partial_t{\gamma_{\Gamma_t}}\nu) \circ \mathcal{X}_{\Gamma_t}^{-1}$$ and the material derivative $$\label{key} {\mathbb{D}_t}\coloneqq \partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}},$$ one gets $$\label{key} \begin{split} \pdv{t} {\mathscr{A}}({\varkappa_a})f = {\mathbb{D}_t}\qty[-\mathop{}\!\mathbin{\triangle}_{{\Gamma_t}}\widetilde{{\mathcal{N}}}(f\circ\Phi_{\Gamma_t}^{-1})]\circ\Phi_{\Gamma_t}= -\qty{\comm{{\mathbb{D}_t}}{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}}(f\circ\Phi_{\Gamma_t}^{-1})}\circ\Phi_{\Gamma_t}. \end{split}$$ Thus [\[est var opA\]](#est var opA){reference-type="eqref" reference="est var opA"} follows from Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"}. As for [\[est var opR\]](#est var opR){reference-type="eqref" reference="est var opR"}, one may observe that $$\label{key} \pdv{t}{\mathscr{R}}({\varkappa_a},{\vb{J}}_*)f = {\mathbb{D}_t}\qty[\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{J}}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{J}}(f\circ\Phi_{\Gamma_t}^{-1})]\circ\Phi_{\Gamma_t}$$ and for $\phi : {\Gamma_t}\to \mathbb{R}$, $$\comm{{\mathbb{D}_t}}{\mathop{}\!\mathbin{\mathrm{D}}_{\vb{J}}}\phi = \mathop{}\!\mathbin{\mathrm{D}}_{({\mathbb{D}_t}{\vb{J}}- \mathop{}\!\mathbin{\mathrm{D}}_{\vb{J}}{\vb{v}})}\phi.$$ Hence for $0 \le s' \le {\frac{3}{2}k}-2$ (here $k \ge 3$), it holds that $$\begin{split} \abs{\comm{{\mathbb{D}_t}}{\mathop{}\!\mathbin{\mathrm{D}}_{\vb{J}}}\phi}_{H^{s'}({\Gamma_t})} &\lesssim_{\Lambda_*}\abs{\mathop{}\!\mathbin{\mathrm{D}}\phi \cdot ({\mathbb{D}_t}{\vb{J}}- \mathop{}\!\mathbin{\mathrm{D}}_{\vb{J}}{\vb{v}})}_{H^{s'}({\Gamma_t})} \\ &\lesssim_{\Lambda_*}\abs{\phi}_{H^{s'+1}({\Gamma_t})} \cdot \abs{{\mathbb{D}_t}{\vb{J}}- \mathop{}\!\mathbin{\mathrm{D}}_{\vb{J}}{\vb{v}}}_{H^{{\frac{3}{2}k}-2}({\Gamma_t})}, \end{split}$$ which, together with Lemma [Lemma 10](#Dt 
comm est lemma){reference-type="ref" reference="Dt comm est lemma"}, implies ([\[est var opR\]](#est var opR){reference-type="ref" reference="est var opR"}). ◻ Next, we shall derive the evolution equation for ${\varkappa_a}$. First, define a vector field ${\vb{W}}: {\Gamma_t}\to \mathbb{R}^3$ by $$\label{def vW} \begin{split} {\vb{W}}&\coloneqq {\mathbb{D}_{\bar{t}}}{\vb{u}}+ \dfrac{1}{\rho_+ + \rho_-} \qty(\grad q^+ + \grad q^-) + \dfrac{\rho_+ \rho_-}{\qty(\rho_+ + \rho_-)^2}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}}{\vb{w}}- \dfrac{\rho_+}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+ - \dfrac{\rho_-}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}{\vb{h}}_- \\ &\equiv {\mathbb{D}_{\bar{t}}}{\vb{u}}+ \va{\mathfrak{b}}, \end{split}$$ for which $$\begin{gathered} \label{key} q^\pm \coloneqq \rho_\pm \qty( p^\pm_{{\vb{v}}, {\vb{v}}} + p^\pm_{{\vb{h}}, {\vb{h}}}) \pm \alpha^2 \dfrac{1}{\rho_\mp}{\mathcal{H}}_\pm\bar{{\mathcal{N}}}^{-1}{\mathcal{N}}_\mp\kappa_+ + {\mathcal{H}}_\pm \bar{{\mathcal{N}}}^{-1}\mathfrak{q}, \\ \mathfrak{q} \coloneqq -g^+ + g^- - \frac{1}{\abs{{\Gamma_t}}}\int_{{\Omega\setminus{\Gamma_t}}} \qty(\div {\vb{v}})^2 \dd{x}, \\ g^{\pm} \coloneqq 2 \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_\pm^\top}\theta - {\vb{I\!I}}_+\qty({\vb{v}}_\pm^\top, {\vb{v}}_\pm^\top) + {\vb{I\!I}}_+\qty({\vb{h}}_\pm^\top, {\vb{h}}_\pm^\top) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}}, {\vb{v}}}^\pm - p_{{\vb{h}}, {\vb{h}}}^\pm). \end{gathered}$$ Thus, ${\vb{W}}\equiv 0$ if $\qty({\Gamma_t}, {\vb{v}}, {\vb{h}})$ is a solution to (MHD)-(BC). 
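The pressures $q^\pm$ above are built from the harmonic extension ${\mathcal{H}}_\pm$ and the operators ${\mathcal{N}}_\pm$, $\bar{{\mathcal{N}}}^{-1}$. As a sanity check of this calculus in the simplest setting — assuming, as in the Shatah--Zeng framework, that ${\mathcal{N}}_\pm$ denote the Dirichlet--Neumann operators of the two fluid regions — one can verify symbolically that, on a flat strip of depth $d$ with zero Neumann data at the bottom, the harmonic extension of $\cos(nx)$ has normal derivative $n\tanh(nd)\cos(nx)$ at the interface, i.e. the Dirichlet--Neumann operator acts as the Fourier multiplier $|\xi|\tanh(|\xi|d)$ (a model computation only, not the curved geometry of $\Omega\setminus\Gamma_t$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
n, d = sp.symbols('n d', positive=True)

# Harmonic extension of cos(n x) into the strip -d < y < 0,
# with zero Neumann data on the bottom y = -d.
u = sp.cos(n*x) * sp.cosh(n*(y + d)) / sp.cosh(n*d)

harmonic = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2))  # Laplacian of u
bottom = sp.simplify(sp.diff(u, y).subs(y, -d))              # Neumann data at y = -d
dn_map = sp.diff(u, y).subs(y, 0)                            # N acting on cos(n x)
residual = sp.simplify(dn_map - n*sp.tanh(n*d)*sp.cos(n*x))
```

All three of `harmonic`, `bottom` and `residual` vanish identically, confirming the multiplier formula in this flat model.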
Substituting $$\begin{split} {\mathbb{D}_{\bar{t}}}{\vb{u}}= {\vb{W}}- \dfrac{1}{\rho_+ + \rho_-} \qty(\grad q^+ + \grad q^-) - \dfrac{\rho_+ \rho_-}{\qty(\rho_+ + \rho_-)^2}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}}{\vb{w}}+ \dfrac{\rho_+}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+ + \dfrac{\rho_-}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}{\vb{h}}_- \end{split}$$ into ([\[eqn dt2 k+\]](#eqn dt2 k+){reference-type="ref" reference="eqn dt2 k+"}) with $\lambda = \rho_+/(\rho_+ + \rho_-)$, and pulling it back to ${\Gamma_\ast}$ via ([\[pull back Dt to Gs\]](#pull back Dt to Gs){reference-type="ref" reference="pull back Dt to Gs"}), one has $$\label{eqn dt2 ka} \begin{split} &{\mathbb{D}_{t\ast}}^{\!2}(\kappa_+ \circ \Phi_{\Gamma_t}) + \alpha^2{\mathscr{A}}({\varkappa_a})(\kappa_+\circ\Phi_{\Gamma_t}) + \dfrac{\rho_+\rho_-}{\qty(\rho_+ + \rho_-)^2}{\mathscr{R}}({\varkappa_a}, {\vb{w}}_*)(\kappa_+ \circ \Phi_{\Gamma_t})\\ &-\dfrac{\rho_+}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{+*})(\kappa_+\circ\Phi_{\Gamma_t}) - \dfrac{\rho_-}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{-*})(\kappa_+ \circ \Phi_{\Gamma_t}) \\ &\quad=\qty{\mathfrak{R}_1 - \mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\vb{W}}\vdot {\vb{N}}_+) + {\vb{W}}\vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+} \circ\Phi_{\Gamma_t}, \end{split}$$ where $${\vb{u}}_* = \dfrac{\rho_+}{\rho_+ + \rho_-}{\vb{v}}_{+*} + \dfrac{\rho_-}{\rho_+ + \rho_-}{\vb{v}}_{-*} \qc {\mathbb{D}_{t\ast}}\coloneqq \partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*},$$ $${\vb{h}}_{\pm*} \coloneqq (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\vdot({\vb{h}}_\pm \circ \Phi_{\Gamma_t}) \qc {\vb{w}}_* \coloneqq (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\vdot({\vb{w}}\circ\Phi_{\Gamma_t}) = {\vb{v}}_{+*} - {\vb{v}}_{-*},$$ and $$\label{def frak[R]_1} \begin{split} \mathfrak{R}_1 \coloneqq\, &-\va{\mathfrak{b}} \vdot
\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+ + 2 {\vb{N}}_+ \vdot (\grad{\vb{u}}) \vdot (\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{u}})^\top + 4\ip{\vb{A}}{{\vb{N}}_+ \vdot (\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2{\vb{u}}}\\ &-\kappa_+ \abs{(\grad{\vb{u}})^*\vdot{\vb{N}}_+}^2 - 2\ip{{\vb{I\!I}}_+}{\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{u}}\vdot \mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t}{\vb{u}}} + 4\ip{{\vb{I\!I}}_+ \vdot \vb{A} + \vb{A} \vdot {\vb{I\!I}}_+}{\vb{A}} \\ &+ \dfrac{\rho_+\rho_-}{(\rho_+ + \rho_-)^2}\qty{{\mathcal{R}}({\Gamma_t}, {\vb{w}})\kappa_+ - \mathop{}\!\mathbin{\triangle}_{{\Gamma_t}}[{\vb{I\!I}}_+({\vb{w}}, {\vb{w}})]} \\ &+ \dfrac{\rho_+}{\rho_+ + \rho_-}\qty{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty[{\vb{I\!I}}_+({\vb{h}}_+, {\vb{h}}_+)] - {\mathcal{R}}({\Gamma_t}, {\vb{h}}_+)\kappa_+} \\ &+ \dfrac{\rho_-}{\rho_+ + \rho_-}\qty{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty[{\vb{I\!I}}_+({\vb{h}}_-, {\vb{h}}_-)] - {\mathcal{R}}({\Gamma_t}, {\vb{h}}_-)\kappa_+} \\ &- \mathop{}\!\mathbin{\triangle}_{\Gamma_t}\mathfrak{r}_0, \end{split}$$ with $\mathfrak{r}_0$ given by ([\[def frak\[r_0\]\]](#def frak[r_0]){reference-type="ref" reference="def frak[r_0]"}). Due to the relation $${\mathbb{D}_{t\ast}}^{\!2} = \partial^2_{tt} + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*} + 2 \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{\partial_t {\vb{u}}_*}$$ and ([\[eqn pd beta v\]](#eqn pd beta v){reference-type="ref" reference="eqn pd beta v"}), the term $\partial_t {\vb{u}}_*$ involves $\partial^2_{tt} {\varkappa_a}$, so ([\[eqn dt2 ka\]](#eqn dt2 ka){reference-type="ref" reference="eqn dt2 ka"}) is a nonlinear equation for $\partial^2_{tt}{\varkappa_a}$. 
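The expansion of ${\mathbb{D}_{t\ast}}^{\!2}$ displayed above can be checked on a one-dimensional model in which the covariant derivative $\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}$ is replaced by scalar advection $u\,\partial_x$ (a simplification for illustration only; on ${\Gamma_\ast}$ the same bookkeeping holds term by term):

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')(t, x)
u = sp.Function('u')(t, x)

# Model material derivative D_t = partial_t + u partial_x
Dt = lambda g: sp.diff(g, t) + u*sp.diff(g, x)

lhs = sp.expand(Dt(Dt(f)))
rhs = sp.expand(sp.diff(f, t, 2)                  # partial^2_tt
                + u*sp.diff(u*sp.diff(f, x), x)   # D_u D_u
                + 2*u*sp.diff(f, t, x)            # 2 D_u partial_t
                + sp.diff(u, t)*sp.diff(f, x))    # D_{partial_t u}
difference = sp.simplify(lhs - rhs)
```

The difference vanishes identically, matching the stated relation ${\mathbb{D}_{t\ast}}^{\!2} = \partial^2_{tt} + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*} + 2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{\partial_t{\vb{u}}_*}$ in this scalar model.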
In order to obtain an equation which is linear in $\partial^2_{tt}{\varkappa_a}$, one may derive from ([\[eqn pd beta v\]](#eqn pd beta v){reference-type="ref" reference="eqn pd beta v"}) that $$\label{eqn pdt2 kappa+} \begin{split} &\partial^2_{tt}(\kappa_+ \circ \Phi_{\Gamma_t}) + {\mathscr{C}}_\alpha({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*)(\kappa_+ \circ \Phi_{\Gamma_t}) \\ &+ \grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot \qty[{\mathbf{B}}({\varkappa_a})\partial^2_{tt}{\varkappa_a}+ {\mathbf{F}}({\varkappa_a})\partial_t{\vb*{\omega}}_*+{\mathbf{G}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*)\partial_t{\varkappa_a}] \\ &\quad =\qty{\mathfrak{R}_1 - \mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\vb{W}}\vdot {\vb{N}}_+) + {\vb{W}}\vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+} \circ\Phi_{\Gamma_t}, \end{split}$$ where the following notations have been used: $$\label{key} {\mathbf{B}}(\text{resp. } {\mathbf{F}}\text{ or } {\mathbf{G}}) \coloneqq \dfrac{\rho_+}{\rho_+ + \rho_-}{\mathbf{B}}_+ (\text{resp. } {\mathbf{F}}_+ \text{ or } {\mathbf{G}}_+) + \dfrac{\rho_-}{\rho_+ + \rho_-}{\mathbf{B}}_- (\text{resp. } {\mathbf{F}}_- \text{ or } {\mathbf{G}}_-),$$ and for simplicity, $$\label{def opC} \begin{split} {\mathscr{C}}_\alpha({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*) \coloneqq\, &2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*} + \alpha^2{\mathscr{A}}({\varkappa_a}) + \dfrac{\rho_+\rho_-}{\qty(\rho_+ + \rho_-)^2}{\mathscr{R}}({\varkappa_a}, {\vb{w}}_*) \\ &-\dfrac{\rho_+}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{+*}) - \dfrac{\rho_-}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{-*}). \end{split}$$ Since ${\varkappa_a}= \kappa_+ \circ \Phi_{\Gamma_t}+ a^2 {\gamma_{\Gamma_t}}$, one also needs to calculate $\partial^2_{tt}{\gamma_{\Gamma_t}}$.
Notice that for every evolution velocity ${\vb{v}}: {\Gamma_t}\to \mathbb{R}^3$ of ${\Gamma_t}$, it holds that $$\label{key} \partial_t {\gamma_{\Gamma_t}}{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t}) = ({\vb{v}}\vdot {\vb{N}}_+) \circ \Phi_{\Gamma_t},$$ which, together with ([\[eqn dt n\]](#eqn dt n){reference-type="ref" reference="eqn dt n"}), implies $$\begin{split} (\partial^2_{tt}{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ (\Phi_{\Gamma_t})^{-1} \vdot {\vb{N}}_+ =\, &{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{\qty[(\partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ(\Phi_{\Gamma_t})^{-1} - {\vb{u}}]}\qty[(\partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ(\Phi_{\Gamma_t})^{-1}]\\ &+ {\vb{N}}_+ \vdot \qty[{\mathbb{D}_{\bar{t}}}{\vb{u}}- \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}}{\vb{u}}+ \mathop{}\!\mathbin{\mathrm{D}}_{(\partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ(\Phi_{\Gamma_t})^{-1}}{\vb{u}}], \end{split}$$ that is, $$\label{eqn pd2 tt gt} \begin{split} \partial^2_{tt}{\gamma_{\Gamma_t}}=\, &\dfrac{({\vb{N}}_+ \circ \Phi_{\Gamma_t})}{{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t})} \vdot \qty[\qty({\mathbb{D}_{\bar{t}}}{\vb{u}})\circ\Phi_{\Gamma_t}- \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty({\vb{u}}\circ\Phi_{\Gamma_t}+ \partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}}) ] \\ =\, &\dfrac{({\vb{N}}_+ \circ \Phi_{\Gamma_t})}{{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t})} \vdot \qty[\qty({\vb{W}}- \va{\mathfrak{b}})\circ\Phi_{\Gamma_t}- \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty({\vb{u}}\circ\Phi_{\Gamma_t}+ \partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}}) ]. \end{split}$$ In particular, $\partial^2_{tt}{\gamma_{\Gamma_t}}$ does not involve the term $\partial^2_{tt}{\varkappa_a}$. 
Combining ([\[def ka\]](#def ka){reference-type="ref" reference="def ka"}) and ([\[eqn pdt2 kappa+\]](#eqn pdt2 kappa+){reference-type="ref" reference="eqn pdt2 kappa+"}) yields $$\label{eqn pdt2 ka premitive} \begin{split} &\qty[\ensuremath{\mathrm{I}} + \grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot {\mathbf{B}}({\varkappa_a}) ]\partial^2_{tt}{\varkappa_a}+ {\mathscr{C}}_\alpha({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*) {\varkappa_a}\\ &+ \grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot \qty[{\mathbf{F}}({\varkappa_a})\partial_t{\vb*{\omega}}_* + {\mathbf{G}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*)\partial_t{\varkappa_a}] \\ &+ a^2 \dfrac{{\vb{N}}_+ \circ \Phi_{\Gamma_t}}{{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t})} \vdot \qty[\va{\mathfrak{b}} \circ \Phi_{\Gamma_t}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty({\vb{u}}\circ\Phi_{\Gamma_t}+ \partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}})]-a^2 {\mathscr{C}}_\alpha({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*){\gamma_{\Gamma_t}}\\ &\quad = \mathfrak{R}_1 \circ \Phi_{\Gamma_t}+ \qty[-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\vb{W}}\vdot {\vb{N}}_+) + {\vb{W}}\vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+ + a^2 \dfrac{{\vb{W}}\vdot {\vb{N}}_+}{{\vb{N}}_+ \vdot ({\vb*{\nu}}\circ \Phi_{\Gamma_t}^{-1})}] \circ \Phi_{\Gamma_t}. \end{split}$$ Define a new operator: $$\label{def opB} {\mathscr{B}}({\varkappa_a}) \coloneqq \grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot {\mathbf{B}}({\varkappa_a}).$$ It then follows from Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"} that $$\label{est opB} \abs{{\mathscr{B}}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{s''}; \H{s'}]} \lesssim_{\Lambda_*}a^{s'-s''-2+\epsilon} \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}},$$ for $s'-2 \le s'' \le s' \le {\frac{3}{2}k}- 2$, $s' \ge \frac{1}{2}$, and $0 < \epsilon \le s'' -s' +2$. 
If $k \ge 3$, one may take $\epsilon = 0$, and it holds for $\sigma'-2\le \sigma'' \le \sigma' \le {\frac{3}{2}k}-\frac{5}{2}$, $\sigma' \ge \frac{1}{2}$ that $$\label{est opB'} \abs{{\mathscr{B}}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{\sigma''}; \H{\sigma'}]} \lesssim_{\Lambda_*}a^{\sigma'-\sigma''-2} \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}. \tag{\ref{est opB}'}$$ Letting $s'=s''$, $0 < \epsilon < \frac{1}{2}$ ($\epsilon = 0$ if $k \ge 3$) and $a_0$ large enough compared to $\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}$ (or $a_0$ large compared to $\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}$ if $k \ge 3$), one has $$\label{est opcal b} \abs{{\mathscr{B}}({\varkappa_a})}_{{\mathscr{L}}(\H{s'})} \le \frac{1}{2} < 1,$$ for $\frac{1}{2} \le s' \le {\frac{3}{2}k}-2$ (or $\frac{1}{2} \le s' \le {\frac{3}{2}k}- \frac{5}{2}$ if $k \ge 3$). Namely, $\qty[\mathrm{I}+{\mathscr{B}}({\varkappa_a})]$ is an isomorphism on $\H{s'}$. Set $$\label{key} {\vb{j}}\coloneqq \curl {\vb{h}}\qand {\vb{j}}_* \coloneqq {\vb{j}}\circ \mathcal{X}_{\Gamma_t}.$$ Then ${\vb{h}}$ can be recovered from $({\varkappa_a}, {\vb{j}}_*)$ by solving div-curl problems. 
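The actual div-curl recovery takes place on ${\Omega}\setminus{\Gamma_t}$ with the boundary conditions of the problem; as a minimal illustration of the principle, the following sketch recovers a divergence-free field from its curl on a periodic box via the Fourier-side Biot--Savart formula $\hat{{\vb{h}}} = i\,\vb{k}\times\hat{{\vb{j}}}/|\vb{k}|^2$ (periodic geometry and zero mean are simplifying assumptions, not the setting of the paper):

```python
import numpy as np

n = 16
x = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
k1 = np.fft.fftfreq(n, d=1.0/n)                      # integer wavenumbers
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                                    # dummy; zero mode set below

def curl(F):
    Fh = [np.fft.fftn(c) for c in F]
    return [np.fft.ifftn(1j*(KY*Fh[2] - KZ*Fh[1])).real,
            np.fft.ifftn(1j*(KZ*Fh[0] - KX*Fh[2])).real,
            np.fft.ifftn(1j*(KX*Fh[1] - KY*Fh[0])).real]

def recover(J):
    """Solve curl h = J, div h = 0, zero mean, for divergence-free J."""
    Jh = [np.fft.fftn(c) for c in J]
    Hh = [1j*(KY*Jh[2] - KZ*Jh[1])/K2,
          1j*(KZ*Jh[0] - KX*Jh[2])/K2,
          1j*(KX*Jh[1] - KY*Jh[0])/K2]
    for c in Hh:
        c[0, 0, 0] = 0.0                             # fix the mean to zero
    return [np.fft.ifftn(c).real for c in Hh]

h_true = [np.sin(Y), np.sin(Z), np.sin(X)]           # divergence-free test field
h_rec = recover(curl(h_true))
err = max(np.abs(a - b).max() for a, b in zip(h_true, h_rec))
```

For this single-mode test field the spectral recovery is exact up to floating-point precision.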
Applying the operator $\qty[\mathrm{I}+{\mathscr{B}}({\varkappa_a})]^{-1}$ to ([\[eqn pdt2 ka premitive\]](#eqn pdt2 ka premitive){reference-type="ref" reference="eqn pdt2 ka premitive"}), one obtains the evolution equation for ${\varkappa_a}$ (which, in particular, no longer refers to the MHD system itself): $$\label{eqn pd2 tt ka} \begin{split} &\partial^2_{tt}{\varkappa_a}+ {\mathscr{C}}_\alpha({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*) {\varkappa_a}- {\mathscr{F}}({\varkappa_a})\partial_t{\vb*{\omega}}_* - {\mathscr{G}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*) \\ &\quad = \qty[\mathrm{I} + {\mathscr{B}}({\varkappa_a})]^{-1}\qty{\qty[-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\vb{W}}\vdot {\vb{N}}_+) + {\vb{W}}\vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+ + a^2 \dfrac{{\vb{W}}\vdot {\vb{N}}_+}{{\vb{N}}_+ \vdot ({\vb*{\nu}}\circ \Phi_{\Gamma_t}^{-1})}] \circ \Phi_{\Gamma_t}}. \end{split}$$ The operators ${\mathscr{F}}$ and ${\mathscr{G}}$ defined above satisfy the following lemma, whose proof is given in the Appendix: **Lemma 15**. *Assume that $a \ge a_0$ and ${\varkappa_a}\in B_{\delta_1}$ as in Proposition [Proposition 6](#prop K){reference-type="ref" reference="prop K"}.
For $\frac{1}{2} \le s \le {\frac{3}{2}k}-2$ and $\epsilon > 0$, there are some positive constants $C_*$ and generic polynomials $Q$ determined by $\Lambda_*$, so that $$\label{est opF} \abs{{\mathscr{F}}({\varkappa_a})}_{{\mathscr{L}}\qty[H^{s+\epsilon-\frac{1}{2}}({\Omega\setminus{\Gamma_\ast}}); \H{s}]} \le C_* \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}},$$ and $$\label{est opG} \begin{split} &\hspace{-2em}\abs{{\mathscr{G}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*)}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \\ \le\, &a^2 Q\qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\vb{j}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}). \end{split}$$ Furthermore, if $k \ge 3$, for $\frac{1}{2} \le \sigma \le {\frac{3}{2}k}-\frac{5}{2}$, there hold $$\label{est opF'} \abs{{\mathscr{F}}({\varkappa_a})}_{{\mathscr{L}}\qty[H^{\sigma-\frac{1}{2}}({\Omega\setminus{\Gamma_\ast}}); \H{\sigma}]} \le C_* \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}},$$ $$\label{est opG'} \begin{split} &\hspace{-1em}\abs{{\mathscr{G}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*)}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \\ &\le a^2 Q\qty(\alpha\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}, \abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{({\vb*{\omega}}_*, {\vb{j}}_*)}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}), \end{split}$$ $$\label{est var opF} \abs{\var{\mathscr{F}}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}; {\mathscr{L}}\qty(H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}}); \H{{\frac{3}{2}k}-\frac{7}{2}})]} \le C_*,$$ and $$\label{est var opG} \begin{split} 
&\hspace{-2em}\abs{\var{\mathscr{G}}}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}\times\H{{\frac{3}{2}k}-4}\times H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}}) \times H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}}); \H{{\frac{3}{2}k}-4}]} \\ \le\, &a^2 Q\qty(\abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\vb{j}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}). \end{split}$$* ## Evolution of the current and vorticity {#sec curr-vorticity eqn} We shall use the vorticity, the current and the corresponding boundary conditions to recover the solenoidal vector fields ${\vb{v}}$ and ${\vb{h}}$ by solving the corresponding div-curl systems. Hence, it is necessary to consider the evolution of the vorticity and the current. Set $${\vb*{\omega}}_\pm \coloneqq \curl {\vb{v}}_\pm \qand {\vb{j}}_\pm \coloneqq \curl {\vb{h}}_\pm.$$ Then it follows by taking the curl of equations ([\[eqn Dtv\]](#eqn Dtv){reference-type="ref" reference="eqn Dtv"}) and ([\[eqn Dth\]](#eqn Dth){reference-type="ref" reference="eqn Dth"}) that $$\label{eqn pdt vom} \partial_t {\vb*{\omega}}+ ({\vb{v}}\vdot \grad){\vb*{\omega}}- ({\vb{h}}\vdot \grad){\vb{j}}= ({\vb*{\omega}}\vdot \grad){\vb{v}}- ({\vb{j}}\vdot \grad) {\vb{h}},$$ and $$\label{eqn pdt vj} \partial_t{\vb{j}}+ ({\vb{v}}\vdot \grad){\vb{j}}- ({\vb{h}}\vdot \grad) {\vb*{\omega}}= ({\vb{j}}\vdot \grad){\vb{v}}- ({\vb*{\omega}}\vdot \grad) {\vb{h}}- 2\tr(\grad {\vb{v}}\cp \grad {\vb{h}}),$$ where, in Cartesian coordinates, $$\tr(\grad {\vb{v}}\cp \grad {\vb{h}}) = \sum_{l=1}^3 \grad v^l \cp \grad h^l.$$ # Linear Systems {#sec linear} In this section, we shall study the linearized systems derived from ([\[eqn pd2 tt ka\]](#eqn pd2 tt ka){reference-type="ref" reference="eqn pd2 tt ka"}), ([\[eqn pdt vom\]](#eqn pdt vom){reference-type="ref" reference="eqn pdt vom"}) and ([\[eqn pdt
vj){reference-type="ref" reference="eqn pdt vj"}). More precisely, the uniform estimates will be shown, from which the local well-posedness of the linear systems follows. ## Linearized problem for ${\varkappa_a}$ {#sec linear ka} Suppose that ${\Gamma_\ast}\in H^{{\frac{3}{2}k}+1}$ ($k \ge 2$) is a reference hypersurface, and $\Lambda_*$, defined by ([\[def lambda\*\]](#def lambda*){reference-type="ref" reference="def lambda*"}), satisfies all the properties given in the preliminary. Now, assume that there are a $t$-parameterized family of hypersurfaces ${\Gamma_t}\in \Lambda_*$ and four tangential vector fields ${\vb{v}}_{\pm*}, {\vb{h}}_{\pm*} : {\Gamma_\ast}\to \mathrm{T}{\Gamma_\ast}$ satisfying: $$\label{key} {\varkappa_a}\in C^0\qty{[0, T]; \H{{\frac{3}{2}k}-1}} \cap C^1\qty{[0, T]; B_{\delta_1} \subset \H{{\frac{3}{2}k}-\frac{5}{2}}}, \tag{H1}$$ and $$\label{key} {\vb{v}}_{\pm*}, {\vb{h}}_{\pm*} \in C^0\qty{[0, T]; \H{{\frac{3}{2}k}-\frac{1}{2}}} \cap C^1\qty{[0, T]; \H{{\frac{3}{2}k}-2}}. 
\tag{H2}$$ For the sake of convenience, suppose that there are positive constants $L_0, L_1, L_2$ so that $$\label{key} \sup_{t\in[0, T]} \qty{\abs{{\varkappa_a}(t)}_{\H{{\frac{3}{2}k}-1}}, \abs{\partial_t {\varkappa_a}(t)}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \abs{\qty({\vb{v}}_{\pm*}(t), {\vb{h}}_{\pm*}(t))}_{\H{{\frac{3}{2}k}-\frac{1}{2}}}} \le L_1,$$ $$\label{key} \sup_{t\in [0, T]} \abs{\qty(\partial_t{\vb{v}}_{\pm*}(t), \partial_t{\vb{h}}_{\pm*}(t))}_{\H{{\frac{3}{2}k}-2}} \le L_2,$$ and $$\label{key} \abs{\qty({\vb{v}}_{+*}(0), {\vb{v}}_{-*}(0) )}_{\H{{\frac{3}{2}k}-2}} \le L_0.$$ Using the following notations as in the previous section: $${\vb{w}}_* = {\vb{v}}_{+*} - {\vb{v}}_{-*}, \quad {\vb{u}}_{*} = \dfrac{\rho_+}{\rho_+ + \rho_-}{\vb{v}}_{+*} + \dfrac{\rho_-}{\rho_+ + \rho_-}{\vb{v}}_{-*},$$ we consider the linear initial value problem $$\label{eqn linear 1} \begin{cases} \partial^2_{tt} {\mathfrak{f}}+ {\mathscr{C}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*) {\mathfrak{f}}= {\mathfrak{g}}, \\ {\mathfrak{f}}(0) = {\mathfrak{f}}_0, \quad \partial_t {\mathfrak{f}}(0) = {\mathfrak{f}}_1, \end{cases}$$ where ${\mathfrak{f}}_0, {\mathfrak{f}}_1, {\mathfrak{g}}(t) : {\Gamma_\ast}\to \mathbb{R}$ are three given functions, and ${\mathscr{C}}$ is given by: $$\begin{split} {\mathscr{C}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*) \coloneqq\, &2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*} + {\mathscr{A}}({\varkappa_a}) + \dfrac{\rho_+\rho_-}{\qty(\rho_+ + \rho_-)^2}{\mathscr{R}}({\varkappa_a}, {\vb{w}}_*) \\ &-\dfrac{\rho_+}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{+*}) - \dfrac{\rho_-}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{-*}), \end{split}$$ which is exactly [\[def opC\]](#def opC){reference-type="eqref" reference="def opC"} with $\alpha = 1$. 
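The linear problem above has the structure of a wave equation perturbed by transport terms. A minimal finite-dimensional caricature (a sketch only: it drops the transport terms $2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}$ and the ${\mathscr{R}}$ terms, and replaces ${\mathscr{A}}$ by a symmetric positive definite matrix $A$) is ${\mathfrak{f}}'' + A{\mathfrak{f}} = 0$, whose energy $\frac{1}{2}\qty(|{\mathfrak{f}}'|^2 + {\mathfrak{f}}^\top A{\mathfrak{f}})$ is conserved; a symplectic integrator exhibits this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0*np.eye(5)        # symmetric positive definite stand-in for A
u = rng.standard_normal(5)         # initial data f_0
v = rng.standard_normal(5)         # initial data f_1 = f'(0)

def energy(u, v):
    return 0.5*(v @ v + u @ A @ u)

E0 = energy(u, v)
dt = 1e-3
for _ in range(20000):             # velocity-Verlet for u'' + A u = 0
    v = v - 0.5*dt*(A @ u)
    u = u + dt*v
    v = v - 0.5*dt*(A @ u)

drift = abs(energy(u, v) - E0)/E0  # relative energy drift over t = 20
```

The relative drift stays tiny, mirroring the role the energy $E_l$ plays in the estimates below.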
We shall derive some uniform estimates, from which the uniqueness and continuous dependence on the initial data follow, while the existence can be obtained via Galerkin approximations (or the standard semigroup theory as in [@Shatah-Zeng2011]). In order to derive the energy estimates for ([\[eqn linear 1\]](#eqn linear 1){reference-type="ref" reference="eqn linear 1"}), one first notes that ${\mathscr{A}}$ is the highest-order spatial derivative term (of third order). Then for an integer $l \in [0, k-2]$, if one multiplies ([\[eqn linear 1\]](#eqn linear 1){reference-type="ref" reference="eqn linear 1"}) by $$\det(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}) \cdot \qty{\widetilde{{\mathcal{N}}}\qty[\qty({\mathscr{A}}^l \partial_t {\mathfrak{f}})\circ (\Phi_{\Gamma_t})^{-1}]}\circ\Phi_{\Gamma_t}$$ and integrates over ${\Gamma_\ast}$, the leading-order terms are obtained: $$\label{key} \begin{split} \dfrac{1}{2}\dv{t} \int_{\Gamma_\ast}&\partial_t {\mathfrak{f}}\cdot \qty{\widetilde{{\mathcal{N}}}\qty[\qty({\mathscr{A}}^l \partial_t {\mathfrak{f}})\circ (\Phi_{\Gamma_t})^{-1}]}\circ\Phi_{\Gamma_t}\\ &+ {\mathfrak{f}}\cdot \qty{\widetilde{{\mathcal{N}}}\qty[\qty({\mathscr{A}}^{1+l} {\mathfrak{f}})\circ (\Phi_{\Gamma_t})^{-1}]}\circ\Phi_{\Gamma_t}\cdot \det(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}) \dd{S_*}.
\end{split}$$ The energy is defined as: $$\label{eqn E_l} \begin{split} E_l(t, {\mathfrak{f}}, \partial_t{\mathfrak{f}}) \coloneqq \int_{{\Gamma_t}} &\abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}} \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\partial_t {\mathfrak{f}}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*} {\mathfrak{f}}) \circ \Phi_{\Gamma_t}^{-1}] }^2 \\ &+ \abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l+1}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty({\mathfrak{f}}\circ \Phi_{\Gamma_t}^{-1})}^2 \\ &- \dfrac{\rho_+\rho_-}{\qty(\rho_+ + \rho_-)^2} \abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}_*} {\mathfrak{f}}) \circ\Phi_{\Gamma_t}^{-1}]}^2 \\ &+ \dfrac{\rho_+}{\rho_+ + \rho_-} \abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_{+*}} {\mathfrak{f}}) \circ\Phi_{\Gamma_t}^{-1}] }^2 \\ &+ \dfrac{\rho_-}{\rho_+ + \rho_-} \abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_{-*}} {\mathfrak{f}}) \circ\Phi_{\Gamma_t}^{-1}]}^2 \dd{S_t}. \end{split}$$ **Lemma 16**. 
*For any integer $0 \le l \le k-2$, and $0 \le t \le T$, it holds that $$\label{est E_l} \begin{split} &\hspace{-2em}E_l (t, {\mathfrak{f}}, \partial_t{\mathfrak{f}}) - E_l (0, {\mathfrak{f}}_0, {\mathfrak{f}}_1) \\ \le&Q(L_1, L_2)\int_0^t \qty(\abs{{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + 2}} + \abs{\partial_t{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + \frac{1}{2}}} + \abs{{\mathfrak{g}}(s)}_{\H{\frac{3}{2}l + \frac{1}{2}}}) \times \\ &\hspace{8em} \times \qty(\abs{{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + 2}} + \abs{\partial_t{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + \frac{1}{2}}}) \dd{s}, \end{split}$$ where $Q$ is a generic polynomial determined by $\Lambda_*$.* *Proof.* Denote by $${\mathbb{D}_t}\coloneqq \partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{\vb*{\mu}}, \qq{with} {\vb*{\mu}}\coloneqq (\partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}}) \circ \Phi_{\Gamma_t}^{-1} : {\Gamma_t}\to \mathbb{R}^3 ,$$ and for any function $f : {\Gamma_\ast}\to \mathbb{R}$ and vector field $\vb{a}_*: {\Gamma_\ast}\to \mathrm{T}{\Gamma_\ast}$ $$\bar{f} \coloneqq f \circ \Phi_{\Gamma_t}^{-1} : {\Gamma_t}\to \mathbb{R}, \quad \vb{a} \coloneqq \qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}\cdot \vb{a}_*) \circ \Phi_{\Gamma_t}^{-1} : {\Gamma_t}\to \mathrm{T}{\Gamma_t}.$$ Thus $$(\partial_t{\mathfrak{f}}) \circ \Phi_{\Gamma_t}^{-1} = {\mathbb{D}_t}\bar{{\mathfrak{f}}}, \quad \qty(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{a}_*}{\mathfrak{f}}) \circ\Phi_{\Gamma_t}^{-1} = \mathop{}\!\mathbin{\mathrm{D}}_{\vb{a}} \bar{{\mathfrak{f}}}.$$ For simplicity, we will use the conventions that $$\label{key} \abs{f}_{s} \coloneqq \abs{f}_{H^s({\Gamma_t})},$$ and $$\mathfrak{u} \lesssim_{L_j} \mathfrak{v}$$ if there exists a constant $C = C(\Lambda_*, L_j)$ such that $\mathfrak{u} \le C \mathfrak{v}$. 
In order to commute ${\mathbb{D}_t}$ with the differential operators, one first observes that $$\label{comm est dt wtn} \abs{\int_{\Gamma_t}g \comm{{\mathbb{D}_t}}{\widetilde{{\mathcal{N}}}}f \dd{S_t}} \lesssim_{\Lambda_*}\abs{{\vb*{\mu}}}_{{\frac{3}{2}k}-\frac{1}{2}} \abs{f}_{\frac{1}{2}}\abs{g}_{\frac{1}{2}},$$ and $$\label{comm est Dt lapGt} \abs{\int_{\Gamma_t}g \comm{{\mathbb{D}_t}}{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}}f \dd{S_t}} \lesssim_{\Lambda_*}\abs{{\vb*{\mu}}}_{{\frac{3}{2}k}-\frac{1}{2}}\abs{f}_{1}\abs{g}_{1}.$$ Indeed, it follows from the properties of ${\mathcal{N}}_+$ that $$\begin{split} \int_{\Gamma_t}g \comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f \dd{S_t} &= \int_{\Gamma_t}{\mathcal{N}}_+^{-1}{\mathcal{N}}_+g \comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f \dd{S_t} + \qty(\int_{\Gamma_t}g)\qty(\int_{\Gamma_t}\comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f ) \\ &= \int_{\Gamma_t}{\mathcal{N}}_+^{\frac{1}{2}}g \cdot {\mathcal{N}}_+^{-\frac{1}{2}}\comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f \dd{S_t} + \qty(\int_{\Gamma_t}g)\qty(\int_{\Gamma_t}\comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f ), \end{split}$$ where the last equality follows from the self-adjointness of ${\mathcal{N}}_+$. It follows from the following commutator formula (cf. [@Shatah-Zeng2008-Geo p.
710] for the derivation): $$\label{comm formula Dt n} \begin{split} \comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f = \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} \mathop{}\!\mathbin{\triangle}^{-1}\qty(2\mathop{}\!\mathbin{\mathrm{D}}{\vb*{\mu}}{\textbf{:}}\mathop{}\!\mathbin{\mathrm{D}}^2{\mathcal{H}}_+ f + \grad{\mathcal{H}}_+ f \vdot \mathop{}\!\mathbin{\triangle}{\vb*{\mu}}) - \grad{\mathcal{H}}_+ f \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+} {\vb{v}}- \mathop{}\!\mathbin{\mathrm{D}}_{\grad^\top f}{\vb*{\mu}}\vdot {\vb{N}}_+, \end{split}$$ that $$\begin{split} \int_{\Gamma_t}\comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f \dd{S_t} &= \int_{\Gamma_t}f \comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}1 + \qty({\mathcal{N}}_+ f)(-\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}) \dd{S_t} \\ &= - \int_{\Gamma_t}{\mathcal{N}}_+(f) \cdot \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}\dd{S_t} \\ &= - \int_{\Gamma_t}f \cdot {\mathcal{N}}_+(\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}) \dd{S_t}. \end{split}$$ Thus, the above two relations imply that $$\label{comm est Dt op n+} \abs{\int_{\Gamma_t}g \comm{{\mathbb{D}_t}}{{\mathcal{N}}_+}f \dd{S_t}} \lesssim_{\Lambda_*}\abs{{\vb*{\mu}}}_{{\frac{3}{2}k}-\frac{1}{2}} \abs{f}_{\frac{1}{2}}\abs{g}_{\frac{1}{2}}.$$ As $\widetilde{{\mathcal{N}}}$ is defined via [\[def operator n tilde\]](#def operator n tilde){reference-type="eqref" reference="def operator n tilde"}, the estimate [\[comm est dt wtn\]](#comm est dt wtn){reference-type="eqref" reference="comm est dt wtn"} follows from [\[comm est Dt op n+\]](#comm est Dt op n+){reference-type="eqref" reference="comm est Dt op n+"} and Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"}. The relation [\[comm est Dt lapGt\]](#comm est Dt lapGt){reference-type="eqref" reference="comm est Dt lapGt"} can be derived via the following formula (cf. [@Shatah-Zeng2008-Geo p. 
710] for the derivation): $$\label{comm formula Dt lapGt} \comm{{\mathbb{D}_t}}{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}}f = - 2 (\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2f\vdot (\mathop{}\!\mathbin{\mathrm{D}}{\vb*{\mu}})^\top - \grad^\top f \vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb*{\mu}}+ \kappa_+ \mathop{}\!\mathbin{\mathrm{D}}_{\grad^\top f} {\vb*{\mu}}\vdot {\vb{N}}_+,$$ and integration by parts on ${\Gamma_t}$. Now, it follows from the self-adjointness of $\widetilde{{\mathcal{N}}}$ and $(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})$ that $$\label{eqn lin est decomp} \begin{split} &\int_{\Gamma_t}\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &\quad = \int_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \widetilde{{\mathcal{N}}}^{\frac{1}{2}}(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^{l-1} \widetilde{{\mathcal{N}}}^{\frac{1}{2}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &\quad = \int_{\Gamma_t}\widetilde{{\mathcal{N}}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) 
\qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})^{l}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t}. \end{split}$$ Observing that $$\label{key} {\mathbb{D}_t}\dd{S_t} = \qty(\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}) \dd{S_t},$$ then, if $l = 0$, one can calculate that $$\label{eqn l = 0 case} \begin{split} &\hspace{-2em}\dv{t}\int_{{\Gamma_t}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \widetilde{{\mathcal{N}}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ =\, &\int_{\Gamma_t}\qty({\mathbb{D}_t}^2\bar{{\mathfrak{f}}}+ {\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \widetilde{{\mathcal{N}}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{{\Gamma_t}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \widetilde{{\mathcal{N}}}\qty({\mathbb{D}_t}^2\bar{{\mathfrak{f}}}+ {\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{{\Gamma_t}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \comm{{\mathbb{D}_t}}{\widetilde{{\mathcal{N}}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{\Gamma_t}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \widetilde{{\mathcal{N}}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}) \dd{S_t}. 
\end{split}$$ It follows from the self-adjointness of $\widetilde{{\mathcal{N}}}$ and [\[comm est dt wtn\]](#comm est dt wtn){reference-type="eqref" reference="comm est dt wtn"} that $$\label{key} \begin{split} &\abs{\frac{1}{2}\dv{t}(\int_{\Gamma_t}\abs{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}})}^2 \dd{S_t}) - \qty(\int_{\Gamma_t}\qty({\mathbb{D}_t}^2\bar{{\mathfrak{f}}}+ {\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \widetilde{{\mathcal{N}}} ({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t}) } \\ &\quad \lesssim_{\Lambda_*}\abs{{\vb*{\mu}}}_{{\frac{3}{2}k}-\frac{1}{2}} \cdot \qty(\abs{\partial_t{\mathfrak{f}}}_{\H{\frac{1}{2}}}^2 + \abs{{\mathfrak{f}}}_{\H{\frac{3}{2}}}^2). \end{split}$$ If $l = 1$ (so $k \ge 3$, since $l \le k-2$), direct computations give that $$\label{eqn l = 1 case} \begin{split} &\hspace{-2em}\dv{t} \int_{\Gamma_t}\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ =\, &\int_{\Gamma_t}\widetilde{{\mathcal{N}}}\qty({\mathbb{D}_t}^2\bar{{\mathfrak{f}}}+ {\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{\Gamma_t}\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})\widetilde{{\mathcal{N}}}\qty({\mathbb{D}_t}^2\bar{{\mathfrak{f}}}+ 
{\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{\Gamma_t}\comm{{\mathbb{D}_t}}{\widetilde{{\mathcal{N}}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{\Gamma_t}\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \comm{{\mathbb{D}_t}}{(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})} \widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{\Gamma_t}\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}) \comm{{\mathbb{D}_t}}{\widetilde{{\mathcal{N}}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+ \int_{\Gamma_t}\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}) \dd{S_t}. 
\end{split}$$ Hence, one may derive from the self-adjointness of $\widetilde{{\mathcal{N}}}, \mathop{}\!\mathbin{\triangle}_{\Gamma_t}$ and the estimates [\[comm est dt wtn\]](#comm est dt wtn){reference-type="eqref" reference="comm est dt wtn"}-[\[comm est Dt lapGt\]](#comm est Dt lapGt){reference-type="eqref" reference="comm est Dt lapGt"} that $$\label{key} \begin{split} &\frac{1}{2}\dv{t} \int_{{\Gamma_t}} \abs{\qty(-\widetilde{{\mathcal{N}}}^\frac{1}{2}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^\frac{1}{2}\widetilde{{\mathcal{N}}}^\frac{1}{2}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}})}^2 \dd{S_t} \\ & \quad = \int_{\Gamma_t}\widetilde{{\mathcal{N}}}\qty({\mathbb{D}_t}^2\bar{{\mathfrak{f}}}+ {\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t})\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} + r_0, \end{split}$$ with the estimate: $$\label{key} \abs{r_0} \lesssim_{\Lambda_*}\abs{{\vb*{\mu}}}_{{\frac{3}{2}k}-\frac{1}{2}} \times \qty(\abs{\partial_t{\mathfrak{f}}}_{\H{2}}^2 + \abs{{\mathfrak{f}}}_{\H3}^2).$$ Therefore, by using the following relations: $$\begin{split} &\int_{\Gamma_t}\abs{\qty(-\widetilde{{\mathcal{N}}}^\frac{1}{2}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^\frac{2m+1}{2}\widetilde{{\mathcal{N}}}^\frac{1}{2}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}})}^2 \dd{S_t} \\ &\quad = \int_{\Gamma_t}\qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})^m\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})^{m+1}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ 
\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t}, \end{split}$$ and $$\begin{split} &\int_{\Gamma_t}\abs{\qty(-\widetilde{{\mathcal{N}}}^\frac{1}{2}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^\frac{2m}{2}\widetilde{{\mathcal{N}}}^\frac{1}{2}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}})}^2 \dd{S_t} \\ &\quad = \int_{\Gamma_t}\qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})^m\widetilde{{\mathcal{N}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot \qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})^{m}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t}, \end{split}$$ one can argue as in the cases $l = 0$ and $l = 1$ to obtain the following estimate: $$\label{est I_0} \begin{split} &\hspace{-2em}\abs{\dfrac{1}{2}\dv{t} \int_{{\Gamma_t}} \abs{\qty(-\widetilde{{\mathcal{N}}}^\frac{1}{2}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^\frac{l}{2} \widetilde{{\mathcal{N}}}^{\frac{1}{2}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}})}^2 \dd{S_t} - I } \\ \lesssim_{\Lambda_*}&\abs{{\vb*{\mu}}}_{{\frac{3}{2}k}-\frac{1}{2}} \cdot \qty(\abs{{\mathfrak{f}}}^2_{\H{\frac{3}{2}l + \frac{3}{2}}} + \abs{\partial_t{\mathfrak{f}}}^2_{\H{\frac{3}{2}l + \frac{1}{2}}}), \end{split}$$ where $$\label{key} I \coloneqq \int_{\Gamma_t}\qty(-\widetilde{{\mathcal{N}}}^\frac{1}{2}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^\frac{l}{2} \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty({\mathbb{D}_t}^2\bar{{\mathfrak{f}}}+ {\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot 
\qty(-\widetilde{{\mathcal{N}}}^\frac{1}{2}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^\frac{l}{2} \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}}\bar{{\mathfrak{f}}}) \dd{S_t}.$$ For simplicity, set $$\label{key} {\mathcal{D}}\coloneqq \qty(-\widetilde{{\mathcal{N}}}^\frac{1}{2}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^\frac{1}{2})^\frac{1}{2}.$$ The equation in [\[eqn linear 1\]](#eqn linear 1){reference-type="eqref" reference="eqn linear 1"} is equivalent to $$\label{eqn Dt2 baf} \begin{split} &{\mathbb{D}_t}^2 \bar{{\mathfrak{f}}}+ 2 \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}{\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}+ \qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})\bar{{\mathfrak{f}}}+ \frac{\rho_+ \rho_-}{(\rho_+ + \rho_-)^2}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}}\\ &\quad- \frac{\rho_+}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}\bar{{\mathfrak{f}}}- \frac{\rho_-}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}\bar{{\mathfrak{f}}}= \bar{{\mathfrak{g}}}, \end{split}$$ which implies $$\label{key} \begin{split} I = &\int_{\Gamma_t}{\mathcal{D}}^l \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \bar{{\mathfrak{g}}} \dd{S_t} \\ &+\int_{\Gamma_t}{\mathcal{D}}^l \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty({\mathbb{D}_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}- 2 \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}{\mathbb{D}_t}\bar{{\mathfrak{f}}}- 
\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &-\int_{\Gamma_t}{\mathcal{D}}^l \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}} \bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &-\dfrac{\rho_+ \rho_-}{\qty(\rho_+ + \rho_-)^2}\int_{\Gamma_t}{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}\qty(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+\dfrac{\rho_+}{\rho_+ + \rho_-} \int_{\Gamma_t}{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ &+\dfrac{\rho_-}{\rho_+ + \rho_-}\int_{\Gamma_t}{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ =: &I_1 + I_2 + I_3 + I_4 + I_5 + I_6. 
\end{split}$$ It is clear that $$\label{est I_1} \abs{I_1} \lesssim_{L_1} \abs{{\mathfrak{g}}}_{\H{\frac{3}{2}l + \frac{1}{2}}} \cdot \qty(\abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l + \frac{1}{2}}} + \abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l + \frac{3}{2}}}).$$ Note that for two functions $\phi, \psi : {\Gamma_t}\to \mathbb{R}$, the integration-by-parts formula is $$\label{formula int by parts Gt} \int_{\Gamma_t}- (\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\phi) \cdot \psi \dd{S_t} = \int_{\Gamma_t}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\psi)\cdot\phi + \phi\psi(\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb{u}}) \dd{S_t},$$ since ${\vb{u}}: {\Gamma_t}\to \mathrm{T}{\Gamma_t}$ is a tangential field and $\int_{\Gamma_t}\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}\qty({\vb{u}}\phi \psi) \dd{S_t} \equiv 0$. For $I_2$, observe that $$\label{int DuDuf * Dtf} \begin{split} \int_{\Gamma_t}{\mathcal{D}}^l \widetilde{{\mathcal{N}}}^\frac{1}{2}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l \widetilde{{\mathcal{N}}}^\frac{1}{2}({\mathbb{D}_t}\bar{{\mathfrak{f}}}) \dd{S_t} = \int_{\Gamma_t}\widetilde{{\mathcal{N}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})^l ({\mathbb{D}_t}\bar{{\mathfrak{f}}}) \dd{S_t}. 
\end{split}$$ Thus, commuting $\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}$ with $\widetilde{{\mathcal{N}}}$ and $\mathop{}\!\mathbin{\triangle}_{\Gamma_t}$ via the same arguments as in [\[eqn l = 0 case\]](#eqn l = 0 case){reference-type="eqref" reference="eqn l = 0 case"} and [\[eqn l = 1 case\]](#eqn l = 1 case){reference-type="eqref" reference="eqn l = 1 case"}, it is routine to derive that $$\label{est DuDu f * Dt f} \begin{split} &\hspace{-1em}\abs{ \int_{\Gamma_t}{\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}) + {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}{\mathbb{D}_t}\bar{{\mathfrak{f}}}) \dd{S_t}} \\ &\lesssim_{L_1} \abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l +\frac{3}{2}}} \cdot \abs{\partial_t {\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}. 
\end{split}$$ Similarly, it follows from [\[formula int by parts Gt\]](#formula int by parts Gt){reference-type="eqref" reference="formula int by parts Gt"} that $$\label{key} \begin{split} \abs{\int_{\Gamma_t}- {\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t}} \lesssim_{L_1}\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{3}{2}}}^2, \end{split}$$ and $$\begin{split} \abs{\int_{\Gamma_t}{\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}{\mathbb{D}_t}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}) \dd{S_t} } \lesssim_{L_1} \abs{\partial_t {\mathfrak{f}}}_{\H{\frac{3}{2}l + \frac{1}{2}}}^2. \end{split}$$ Furthermore, observing that $$\label{est I2 last} \abs{\comm{{\mathbb{D}_t}}{\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}}\bar{{\mathfrak{f}}}}_{\frac{3}{2}l+\frac{1}{2}} = \abs{\mathop{}\!\mathbin{\mathrm{D}}_{\qty({\mathbb{D}_t}{\vb{u}}-\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}{\vb*{\mu}})} \bar{{\mathfrak{f}}}}_{\frac{3}{2}l+\frac{1}{2}} \lesssim_{Q}\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l + \frac{7}{4}}},$$ one can deduce from ([\[int DuDuf \* Dtf\]](#int DuDuf * Dtf){reference-type="ref" reference="int DuDuf * Dtf"})-([\[est I2 last\]](#est I2 last){reference-type="ref" reference="est I2 last"}) that $$\label{est I_2} \abs{I_2} \lesssim_{Q}\abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 + \abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+2}}^2.$$ Next, the estimate on $I_3$ can be derived via: $$\label{key} \begin{split} I_3 = &-\int_{\Gamma_t}{\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}\qty(-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})\bar{{\mathfrak{f}}}\cdot 
{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}\qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} \\ = &-\int_{\Gamma_t}\widetilde{{\mathcal{N}}}\bar{{\mathfrak{f}}}\cdot (-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}})^{l+1}({\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t}, \end{split}$$ which, together with the previous arguments, yields $$\label{est I_3} \begin{split} \abs{I_3 + \dfrac{1}{2}\dv{t}\int_{\Gamma_t}\abs{{\mathcal{D}}^{l+1}{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}\bar{{\mathfrak{f}}}}^2 \dd{S_t}} \lesssim_{Q}\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l + 2}}^2. \end{split}$$ As for $I_4$, in view of the relation $$\comm{\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}}{\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}} \bar{{\mathfrak{f}}}= \mathop{}\!\mathbin{\mathrm{D}}_{\comm{{\vb{u}}}{{\vb{w}}}} \bar{{\mathfrak{f}}},$$ it follows from integration by parts that $$\label{key} \begin{split} \abs{\int_{\Gamma_t}{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}})\dd{S_t}} \lesssim_{L_1} \abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{3}{2}}}^2. 
\end{split}$$ The previous arguments can be used to show $$\label{key} \begin{split} &\hspace{-1em}\abs{ \int_{\Gamma_t}{\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}) + {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}{\mathbb{D}_t}\bar{{\mathfrak{f}}}) \dd{S_t}} \\ &\lesssim_{L_1} \abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l +\frac{3}{2}}} \times \abs{\partial_t {\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}, \end{split}$$ and $$\label{I_4 est end} \begin{split} &\hspace{-1em}\abs{\int_{\Gamma_t}{\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}}) \cdot {\mathcal{D}}^l {\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}({\mathbb{D}_t}\bar{{\mathfrak{f}}}) \dd{S_t } + \dfrac{1}{2}\dv{t} \int_{\Gamma_t}\abs{{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}})}^2 \dd{S_t}} \\ &\lesssim_{Q}\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{3}{2}}} \times \qty(\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+2}} + \abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}), \end{split}$$ that is, $$\label{est I_4} \begin{split} &\abs{I_4 - \dfrac{\rho_+\rho_-}{2\qty(\rho_+ + \rho_-)^2} \dv{t}\int_{\Gamma_t}\abs{{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}})}^2 \dd{S_t}} \\ &\quad\lesssim_{Q}\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{3}{2}}} \times \qty(\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+2}} + \abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}). 
\end{split}$$ Since $I_5$ and $I_6$ have the same form as $I_4$, there hold $$\label{est I_5} \begin{split} &\abs{I_5 + \dfrac{\rho_+}{2\qty(\rho_+ + \rho_-)} \dv{t}\int_{\Gamma_t}\abs{{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}\bar{{\mathfrak{f}}})}^2 \dd{S_t} } \\ &\quad\lesssim_{Q}\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{3}{2}}} \times \qty(\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+2}} + \abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}), \end{split}$$ and $$\label{est I_6} \begin{split} &\abs{I_6 + \dfrac{\rho_-}{2\qty(\rho_+ + \rho_-)} \dv{t}\int_{\Gamma_t}\abs{{\mathcal{D}}^l{\widetilde{{\mathcal{N}}}^{\frac{1}{2}}}(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}\bar{{\mathfrak{f}}})}^2 \dd{S_t} } \\ &\quad\lesssim_{Q}\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{3}{2}}} \times \qty(\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+2}} + \abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}). \end{split}$$ In conclusion, the combination of ([\[eqn E_l\]](#eqn E_l){reference-type="ref" reference="eqn E_l"}), ([\[est I_0\]](#est I_0){reference-type="ref" reference="est I_0"}), ([\[est I_1\]](#est I_1){reference-type="ref" reference="est I_1"}), ([\[est I_2\]](#est I_2){reference-type="ref" reference="est I_2"}), ([\[est I_3\]](#est I_3){reference-type="ref" reference="est I_3"}), ([\[est I_4\]](#est I_4){reference-type="ref" reference="est I_4"})-([\[est I_6\]](#est I_6){reference-type="ref" reference="est I_6"}) gives that $$\label{key} \begin{split} \abs{\dv{t} E_l} \lesssim_{Q}&\qty(\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+2}} + \abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}) \times \\ &\quad\times \qty(\abs{{\mathfrak{f}}}_{\H{\frac{3}{2}l+2}} + \abs{\partial_t{\mathfrak{f}}}_{\H{\frac{3}{2}l+\frac{1}{2}}} + \abs{{\mathfrak{g}}}_{\H{\frac{3}{2}l+\frac{1}{2}}}), \end{split}$$ which completes the proof of Lemma [Lemma 16](#lem est E_l){reference-type="ref" reference="lem est E_l"}. 
◻

Based on Lemma [Lemma 16](#lem est E_l){reference-type="ref" reference="lem est E_l"}, the following energy estimate for [\[eqn linear 1\]](#eqn linear 1){reference-type="eqref" reference="eqn linear 1"} holds:

**Proposition 17**. *There is a constant $C_* > 0$ determined by $\Lambda_*$ so that for any integer $0 \le l \le k-2$, and $0 \le t \le T$ ($T \le C$ for some constant $C = C(L_1, L_2)$), it holds that $$\label{est linear eqn ka} \begin{split} &\hspace{-1em}\abs{{\mathfrak{f}}(t)}_{\H{\frac{3}{2}l+2}}^2 + \abs{\partial_t{\mathfrak{f}}(t)}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 \\ \le\, &C_* \exp\qty\big[Q(L_1, L_2)t]\times \\ & \times \qty{ \abs{{\mathfrak{f}}_0}_{\H{\frac{3}{2}l+2}}^2 + \abs{{\mathfrak{f}}_1}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 + (L_0)^{12} \abs{{\mathfrak{f}}_0}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 + \int_0^t \abs{{\mathfrak{g}}(t')}^2_{\H{\frac{3}{2}l+\frac{1}{2}}} \dd{t'} }, \end{split}$$ where $Q$ is a generic polynomial determined by $\Lambda_*$.*

*Proof.* It is clear that for some $C_* > 0$, it holds that $$\label{eqn linear energy est} \begin{split} &\abs{\bar{{\mathfrak{f}}}}_{\frac{3}{2}l+2}^2 + \abs{{\mathbb{D}_t}\bar{{\mathfrak{f}}}}_{\frac{3}{2}l+\frac{1}{2}}^2 \\ &\le C_* E_l(t, {\mathfrak{f}}, \partial_t{\mathfrak{f}}) + C_* \qty(\abs{\int_{\Gamma_t}\bar{{\mathfrak{f}}}\dd{S_t}}^2 + \abs{\int_{\Gamma_t}{\mathbb{D}_t}\bar{{\mathfrak{f}}}\dd{S_t}}^2) +C_* \qty(\abs{\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}}\bar{{\mathfrak{f}}}}_{\frac{3}{2}l + \frac{1}{2}}^2 + \abs{\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}}_{\frac{3}{2}l + \frac{1}{2}}^2) \\ &\le C_* E_l(t, {\mathfrak{f}}, \partial_t{\mathfrak{f}}) + C_* \qty(\abs{\int_{\Gamma_t}\bar{{\mathfrak{f}}}\dd{S_t}}^2 + \abs{\int_{\Gamma_t}{\mathbb{D}_t}\bar{{\mathfrak{f}}}\dd{S_t}}^2) +C_* \qty(\abs{{\vb{v}}_{+}}_{{\frac{3}{2}k}-2}^2 + \abs{{\vb{v}}_{-}}_{{\frac{3}{2}k}-2}^2) \abs{\bar{{\mathfrak{f}}}}_{\frac{3}{2}l + \frac{7}{4}}^2. 
\end{split}$$ For the last term above, it follows from the interpolation inequality that $$\label{key} \begin{split} \qty(\abs{{\vb{v}}_{+}}_{{\frac{3}{2}k}-2}^2 + \abs{{\vb{v}}_{-}}_{{\frac{3}{2}k}-2}^2) \abs{\bar{{\mathfrak{f}}}}_{\frac{3}{2}l + \frac{7}{4}}^2 \le \frac{5}{6C_*} \abs{\bar{{\mathfrak{f}}}}_{\frac{3}{2}l+2}^2 + \dfrac{(C_*)^5}{6}\qty(\abs{{\vb{v}}_+}_{{\frac{3}{2}k}-2}^{12} + \abs{{\vb{v}}_-}_{{\frac{3}{2}k}-2}^{12}) \abs{\bar{{\mathfrak{f}}}}_{\frac{3}{2}l+\frac{1}{2}}^2, \end{split}$$ with $$\label{key} \begin{split} \abs{{\vb{v}}_\pm (t)}_{{\frac{3}{2}k}-2} &\lesssim_{\Lambda_*}\abs{{\vb{v}}_{\pm*}(t)}_{\H{{\frac{3}{2}k}-2}} \\ &\lesssim_{\Lambda_*}\abs{{\vb{v}}_{\pm*}(0)}_{\H{{\frac{3}{2}k}-2}} + L_2 t. \end{split}$$ Now, observe that $$\label{key} \dv{t} \int_{\Gamma_t}\bar{{\mathfrak{f}}}(t) \dd{S_t} = \int_{\Gamma_t}{\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \bar{{\mathfrak{f}}}\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}\dd{S_t}.$$ Thus $$\label{key} \begin{split} \abs{\int_{\Gamma_t}\bar{{\mathfrak{f}}}(t) \dd{S_t} - \int_{\Gamma_0} \bar{{\mathfrak{f}}}(0) \dd{S_0} } \lesssim_{L_1} \int_0^t \abs{\partial_t{\mathfrak{f}}(t')}_{\L2} + \abs{{\mathfrak{f}}(t')}_{\L2} \dd{t'}. 
\end{split}$$ To deal with the integral of ${\mathbb{D}_t}\bar{{\mathfrak{f}}}$, one notes that $$\label{key} \dv{t} \int_{\Gamma_t}{\mathbb{D}_t}\bar{{\mathfrak{f}}}\dd{S_t} = \int_{\Gamma_t}{\mathbb{D}_t}^2 \bar{{\mathfrak{f}}}+ \qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}) \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb*{\mu}}\dd{S_t},$$ and ([\[eqn Dt2 baf\]](#eqn Dt2 baf){reference-type="ref" reference="eqn Dt2 baf"}) implies that $$\label{key} \begin{split} \int_{\Gamma_t}{\mathbb{D}_t}^2\bar{{\mathfrak{f}}}\dd{S_t} = &-\int_{\Gamma_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\qty(2{\mathbb{D}_t}\bar{{\mathfrak{f}}}+ \mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}\bar{{\mathfrak{f}}}) \dd{S_t} +\int_{\Gamma_t}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}\bar{{\mathfrak{f}}}\dd{S_t} \\ &- \dfrac{\rho_+ \rho_-}{\qty(\rho_+ + \rho_-)^2} \int_{\Gamma_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}\bar{{\mathfrak{f}}}\dd{S_t} + \dfrac{\rho_+}{\rho_+ + \rho_-}\int_{\Gamma_t}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+} \bar{{\mathfrak{f}}}\dd{S_t} \\ &+\dfrac{\rho_-}{\rho_+ + \rho_-} \int_{\Gamma_t}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-} \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-} \bar{{\mathfrak{f}}}\dd{S_t} + \int_{\Gamma_t}\bar{\mathfrak{g}} \dd{S_t}. 
\end{split}$$ Due to the fact that $\partial{\Gamma_t}= \varnothing$, one has $$\label{key} \int_{\Gamma_t}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}\bar{{\mathfrak{f}}}\dd{S_t} \equiv 0,$$ and $$\label{key} -\int_{\Gamma_t}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{u}}{\mathbb{D}_t}\bar{{\mathfrak{f}}}\dd{S_t} = \int_{\Gamma_t}\qty({\mathbb{D}_t}\bar{{\mathfrak{f}}}) \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb{u}}\dd{S_t}.$$ Thus $$\label{key} \abs{\dv{t}\int_{\Gamma_t}{\mathbb{D}_t}\bar{{\mathfrak{f}}}\dd{S_t} } \lesssim_{L_1} \abs{{\mathbb{D}_t}\bar{{\mathfrak{f}}}}_0 + \abs{\bar{{\mathfrak{f}}}}_1 + \abs{\bar{{\mathfrak{g}}}}_0,$$ and $$\label{est Dt baf int} \begin{split} &\abs{\int_{\Gamma_t}\qty({\mathbb{D}_t}\bar{{\mathfrak{f}}})(t) \dd{S_t} - \int_{\Gamma_0} \qty({\mathbb{D}_t}\bar{{\mathfrak{f}}})(0)\dd{S_0} } \\ &\quad\lesssim_{L_1} \int_0^t \abs{\partial_t{\mathfrak{f}}(t')}_{\L2} + \abs{{\mathfrak{f}}(t')}_{\H1} + \abs{{\mathfrak{g}}(t')}_{\L2} \dd{t'}. \end{split}$$ Combining ([\[eqn linear energy est\]](#eqn linear energy est){reference-type="ref" reference="eqn linear energy est"})-([\[est Dt baf int\]](#est Dt baf int){reference-type="ref" reference="est Dt baf int"}), ([\[est E_l\]](#est E_l){reference-type="ref" reference="est E_l"}) and the following relation: $$\label{key} \abs{\bar{{\mathfrak{f}}}(t)}_{\frac{3}{2}l+\frac{1}{2}} \lesssim_{\Lambda_*}\abs{{\mathfrak{f}}(0)}_{\H{\frac{3}{2}l+\frac{1}{2}}} + \int_0^t \abs{\partial_t{\mathfrak{f}}(t')}_{\H{\frac{3}{2}l+\frac{1}{2}}} \dd{t'},$$ one can get that $$\label{key} \begin{split} &\hspace{-3em}\abs{\bar{{\mathfrak{f}}}(t)}_{\frac{3}{2}l+2}^2 + \abs{{\mathbb{D}_t}\bar{{\mathfrak{f}}}(t)}_{\frac{3}{2}l+\frac{1}{2}}^2 \\ \lesssim_{\Lambda_*}&\abs{{\mathfrak{f}}_0}_{\H{\frac{3}{2}l+2}}^2 + \abs{{\mathfrak{f}}_1}^2_{\H{\frac{3}{2}l+\frac{1}{2}}} + \qty(L_0)^{12}\abs{{\mathfrak{f}}_0}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 \\ &+ \bar{Q} \cdot \int_0^t 
\abs{{\mathfrak{f}}(t')}_{\H{\frac{3}{2}l+2}}^2 + \abs{\partial_t{\mathfrak{f}}(t')}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 + \abs{{\mathfrak{g}}(t')}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 \dd{t'} \\ &+ \qty(\abs{{\vb{v}}_{\pm*}(0)}_{\H{{\frac{3}{2}k}-2}}^{12} + (L_2)^{12}t^{12})\cdot C(L_1)t \int_0^t \abs{\partial_t{\mathfrak{f}}(t')}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 \dd{t'} \\ &+C(L_1)t\int_0^t \abs{\partial_t{\mathfrak{f}}(t')}_{\L2}^2 + \abs{{\mathfrak{f}}(t')}_{\H1}^2 + \abs{{\mathfrak{g}}(t')}_{\L2}^2 \dd{t'}, \end{split}$$ where $\bar{Q} = \bar{Q}(L_1, L_2)$ is the generic polynomial in the previous lemma. If $T \le Q_* (L_1, L_2)$, it follows that $$\label{key} \begin{split} &\hspace{-2em}\abs{\bar{{\mathfrak{f}}}(t)}_{\frac{3}{2}l+2}^2 + \abs{{\mathbb{D}_t}\bar{{\mathfrak{f}}}(t)}_{\frac{3}{2}l+\frac{1}{2}}^2 \\ \le\, &C_* \qty(\abs{{\mathfrak{f}}_0}_{\H{\frac{3}{2}l+2}}^2 + \abs{{\mathfrak{f}}_1}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 + (L_0)^{12}\abs{{\mathfrak{f}}_0}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2) \\ &+ Q\int_0^t \abs{{\mathfrak{f}}(t')}_{\H{\frac{3}{2}l+2}}^2 + \abs{\partial_t{\mathfrak{f}}(t')}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 + \abs{{\mathfrak{g}}(t')}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 \dd{t'}, \end{split}$$ where $Q = Q(L_1, L_2)$ is a generic polynomial determined by $\Lambda_*$. Hence ([\[est linear eqn ka\]](#est linear eqn ka){reference-type="ref" reference="est linear eqn ka"}) follows from Gronwall's inequality. ◻ Then the local well-posedness of [\[eqn linear 1\]](#eqn linear 1){reference-type="eqref" reference="eqn linear 1"} follows from this energy estimate: **Corollary 18**. 
*For $0 \le l \le k-2$, $T \le C(L_1, L_2)$ and ${\mathfrak{g}}\in C^0 \qty([0, T]; H^{\frac{3}{2}l + \frac{1}{2}}({\Gamma_\ast}))$, the problem [\[eqn linear 1\]](#eqn linear 1){reference-type="eqref" reference="eqn linear 1"} is well-posed in $C^0\qty([0, T]; \H{\frac{3}{2}l+2}) \cap C^1\qty([0, T]; \H{\frac{3}{2}l+\frac{1}{2}})$, and the estimate [\[est linear eqn ka\]](#est linear eqn ka){reference-type="eqref" reference="est linear eqn ka"} holds.* ## Linearized system for the current and vorticity {#sec linear current vortex} Assume again that ${\Gamma_\ast}\in H^{{\frac{3}{2}k}+1}$ $(k\ge2)$ is a reference hypersurface and consider a family of hypersurfaces $\{{\Gamma_t}\}_{0\le t \le T} \subset \Lambda_*$ such that each ${\Gamma_t}$ separates ${\Omega}$ into two disjoint simply-connected domains ${\Omega}_t^\pm$. Suppose that ${\vb{v}}_\pm(t), {\vb{h}}_\pm(t) : {\Omega}_t^\pm \to \mathbb{R}^3$ are given vector fields solving $$\label{key} \begin{cases*} \div{\vb{v}}_\pm = 0 = \div{\vb{h}}_\pm &in $ {\Omega}_t^\pm $, \\ {\vb{h}}_+ \vdot {\vb{N}}_+ = 0 = {\vb{h}}_- \vdot {\vb{N}}_+ &on $ {\Gamma_t}$, \\ {\vb{v}}_+ \vdot {\vb{N}}_+ = {\vb{N}}_+ \vdot \qty(\partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ\Phi_{\Gamma_t}^{-1} = {\vb{v}}_- \vdot {\vb{N}}_+ &on $ {\Gamma_t}$, \\ {\vb{v}}_- \vdot \widetilde{{\vb{N}}} = 0 = {\vb{h}}_- \vdot \widetilde{{\vb{N}}} &on $ \partial{\Omega}$.
\end{cases*}$$ Assume further that there is a constant $\bar{L}_1$ so that $$\label{key} \sup_{t \in [0, T]} \qty(\abs{{\varkappa_a}(t)}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}, \abs{\partial_t{\varkappa_a}(t)}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{\qty({\vb{v}}_\pm(t), {\vb{h}}_\pm(t))}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)}) \le \bar{L}_1.$$ Consider the following linearized current-vorticity system in ${\Omega}\setminus {\Gamma_t}$: $$\label{eqn linear vom} \partial_t \widetilde{{\vb*{\omega}}} + ({\vb{v}}\vdot\grad)\widetilde{{\vb*{\omega}}} - ({\vb{h}}\vdot\grad)\widetilde{{\vb{j}}} = (\widetilde{{\vb*{\omega}}}\vdot\grad){\vb{v}}- (\widetilde{{\vb{j}}}\vdot\grad){\vb{h}},$$ $$\label{eqn linear vj} \partial_t \widetilde{{\vb{j}}} + ({\vb{v}}\vdot\grad)\widetilde{{\vb{j}}} - ({\vb{h}}\vdot\grad)\widetilde{{\vb*{\omega}}} = (\widetilde{{\vb{j}}}\vdot\grad){\vb{v}}- (\widetilde{{\vb*{\omega}}}\vdot\grad){\vb{h}}-2\tr(\grad{\vb{v}}\cp \grad{\vb{h}}).$$ Set $$\label{def vxi veta} \vb*{\xi} \coloneqq \widetilde{{\vb*{\omega}}} - \widetilde{{\vb{j}}}, \quad \vb*{\eta} \coloneqq \widetilde{{\vb*{\omega}}} + \widetilde{{\vb{j}}}.$$ Then $$\label{key} \partial_t{\vb*{\xi}}+ \qty[({\vb{v}}+{\vb{h}})\vdot\grad]\vb*{\xi} = (\vb*{\xi}\vdot\grad)({\vb{v}}+{\vb{h}}) + 2 \tr(\grad{\vb{v}}\cp\grad{\vb{h}}),$$ $$\label{key} \partial_t {\vb*{\eta}}+ \qty[({\vb{v}}-{\vb{h}})\vdot\grad]{\vb*{\eta}}= ({\vb*{\eta}}\vdot\grad)({\vb{v}}-{\vb{h}}) - 2\tr(\grad{\vb{v}}\cp\grad{\vb{h}}).$$ Define the flow map $\mathcal{Y}^\pm$ by $$\label{key} \dv{t}\mathcal{Y}^\pm(t, y) = \qty({\vb{v}}_\pm - {\vb{h}}_\pm) \qty(t, \mathcal{Y}^\pm(t, y)), \quad y \in {\Omega}_0^\pm.$$ As indicated in [@Sun-Wang-Zhang2018], due to the fact that ${\vb{h}}_\pm \vdot {\vb{N}}_+ \equiv 0$, $\mathcal{Y}^\pm(t)$ is a bijection from ${\Omega}_0^\pm$ to ${\Omega}_t^\pm$ for small time $t$. 
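The construction of $\mathcal{Y}^\pm$ is simply the integration of an ODE along the transport field ${\vb{v}}_\pm - {\vb{h}}_\pm$. As a purely illustrative aside (not part of the argument), the following Python sketch integrates such a flow map with a classical RK4 step; a hypothetical rigid-rotation field stands in for ${\vb{v}}_- - {\vb{h}}_-$, so that the exact flow is known and the computation can be checked against it.

```python
import math

def flow_map(w, y0, t, n_steps=1000):
    """Integrate dY/dt = w(s, Y), Y(0) = y0, with classical RK4; return Y(t)."""
    y = list(y0)
    dt = t / n_steps
    s = 0.0
    for _ in range(n_steps):
        k1 = w(s, y)
        k2 = w(s + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
        k3 = w(s + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
        k4 = w(s + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
        y = [yi + dt / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        s += dt
    return y

# Hypothetical stand-in for v_- - h_-: rigid rotation about the x3-axis,
# whose exact flow is rotation of the initial point by the angle t.
rot = lambda s, y: [-y[1], y[0], 0.0]

y = flow_map(rot, [1.0, 0.0, 0.0], math.pi / 2)  # exact flow gives (0, 1, 0)
```

The fourth-order accuracy of RK4 makes the numerical flow agree with the exact rotation to roughly machine precision here; in the actual problem the regularity of ${\vb{v}}_\pm - {\vb{h}}_\pm$ is what guarantees the flow map is well defined.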
Furthermore, the evolution equation for ${\vb*{\eta}}$ can be rewritten as $$\label{key} \dv{t}({\vb*{\eta}}\circ\mathcal{Y}) = \qty[({\vb*{\eta}}\vdot\grad)({\vb{v}}-{\vb{h}})-2\tr(\grad{\vb{v}}\cp\grad{\vb{h}})]\circ\mathcal{Y},$$ or equivalently, $$\label{key} \dv{t}\qty[\qty(\mathop{}\!\mathbin{\mathrm{D}}\mathcal{Y})^{-1}\qty({\vb*{\eta}}\circ\mathcal{Y})] = -2\tr(\grad{\vb{v}}\cp\grad{\vb{h}})\circ\mathcal{Y},$$ which is a linear ODE system. Thus, the local well-posedness follows routinely. Similarly, the evolution equation for ${\vb*{\xi}}$ is also locally well-posed on $[0, T]$, with the life span $T$ depending on $\bar{L}_1$. Furthermore, the following energy estimates hold: **Proposition 19**. *For $0 \le t \le T$, it follows that $$\label{est linear vom vj} \begin{split} &\hspace{-2em}\norm{\widetilde{{\vb*{\omega}}}_\pm(t)}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)}^2 + \norm{\widetilde{{\vb{j}}}_\pm(t)}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)}^2 \\ \le\, &\exp{Q\qty(\bar{L}_1)t} \qty(1+ \norm{\widetilde{{\vb*{\omega}}}_\pm(0)}_{H^{{\frac{3}{2}k}-1}({\Omega}_0^\pm)}^2 + \norm{\widetilde{{\vb{j}}}_\pm(0)}_{H^{{\frac{3}{2}k}-1}({\Omega}_0^\pm)}^2), \end{split}$$ where $Q$ is a generic polynomial depending on $\Lambda_*$.* *Proof.* For $0 \le s \le {\frac{3}{2}k}-1$, observe that $$\label{key} \begin{split} &\hspace{-1em}\dfrac{1}{2}\dv{t}\int_{{\Omega}_t^\pm} \abs{\nabla^s {\vb*{\eta}}_\pm}^2\dd{x} \\ =\,& \int_{{\Omega}_t^\pm} \nabla^s {\vb*{\eta}}_\pm \vdot \nabla^s\partial_t{\vb*{\eta}}_\pm \dd{x} + \dfrac{1}{2}\int_{{\Omega}_t^\pm} \mathop{}\!\mathbin{\mathrm{D}}_{({\vb{v}}_\pm - {\vb{h}}_\pm)} \abs{\nabla^s {\vb*{\eta}}_\pm}^2 \dd{x} \\ =\, &\int_{{\Omega}_t^\pm} \nabla^s {\vb*{\eta}}_\pm \vdot \nabla^s\qty[\mathop{}\!\mathbin{\mathrm{D}}_{({\vb{h}}_\pm-{\vb{v}}_\pm)}{\vb*{\eta}}_\pm + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb*{\eta}}_\pm}({\vb{v}}_\pm-{\vb{h}}_\pm) - 2\tr(\grad{\vb{v}}_\pm \cp \grad {\vb{h}}_\pm) ] \dd{x} \\ &+ 
\dfrac{1}{2}\int_{{\Omega}_t^\pm} \mathop{}\!\mathbin{\mathrm{D}}_{({\vb{v}}_\pm - {\vb{h}}_\pm)} \abs{\nabla^s {\vb*{\eta}}_\pm}^2 \dd{x} \\ =\, &\int_{{\Omega}_t^\pm} \nabla^s {\vb*{\eta}}_\pm \vdot \comm{\nabla^s}{\mathop{}\!\mathbin{\mathrm{D}}_{({\vb{h}}_\pm - {\vb{v}}_\pm)}}{\vb*{\eta}}_\pm \dd{x} +\int_{{\Omega}_t^\pm} \nabla^s {\vb*{\eta}}_\pm \vdot \nabla^s\qty{\mathop{}\!\mathbin{\mathrm{D}}_{{\vb*{\eta}}_\pm}({\vb{v}}_\pm-{\vb{h}}_\pm) - 2\tr(\grad{\vb{v}}_\pm\cp\grad{\vb{h}}_\pm)} \dd{x} \\ \le\, &Q\qty(\bar{L}_1) \qty(1+\norm{{\vb*{\eta}}_\pm}_{H^s({\Omega}_t^\pm)}^2). \end{split}$$ It follows from a similar argument that $$\label{key} \dv{t} \norm{{\vb*{\xi}}_\pm (t)}_{H^s({\Omega}_t^\pm)}^2 \le Q\qty(\bar{L}_1) \qty(1+\norm{{\vb*{\xi}}_\pm (t)}_{H^s({\Omega}_t^\pm)}^2).$$ Therefore, ([\[est linear vom vj\]](#est linear vom vj){reference-type="ref" reference="est linear vom vj"}) holds due to Gronwall's inequality and ([\[def vxi veta\]](#def vxi veta){reference-type="ref" reference="def vxi veta"}). 
◻ To show the compatibility conditions: $$\label{linear compatibility condition} \int_{\partial{\Omega}} \widetilde{{\vb*{\omega}}}_- \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}} \equiv 0 \equiv \int_{\partial{\Omega}} \widetilde{{\vb{j}}}_- \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}},$$ one observes that $$\label{key} \begin{split} &\hspace{-2em}\dv{t}\int_{\partial{\Omega}} {\vb*{\eta}}_- \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}} \\ =\, &\int_{\partial{\Omega}} \qty(\partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{({\vb{v}}_- - {\vb{h}}_-)}) \qty({\vb*{\eta}}_- \vdot \widetilde{{\vb{N}}}) + \qty({\vb*{\eta}}_- \vdot \widetilde{{\vb{N}}}) \mathop{\mathrm{\mathrm{div}}}_{\partial{\Omega}} ({\vb{v}}_- - {\vb{h}}_-) \dd{\widetilde{S}} \\ =\, &\int_{\partial{\Omega}} \widetilde{{\vb{N}}}\vdot\mathop{}\!\mathbin{\mathrm{D}}_{{\vb*{\eta}}_-} ({\vb{v}}_- - {\vb{h}}_-) - 2\widetilde{{\vb{N}}} \vdot \tr(\grad{\vb{v}}_- \cp \grad {\vb{h}}_-) \dd{\widetilde{S}} \\ &+\int_{\partial{\Omega}} - \widetilde{{\vb{N}}} \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb*{\eta}}_-^\top}({\vb{v}}_- - {\vb{h}}_-) + ({\vb*{\eta}}_- \vdot \widetilde{{\vb{N}}}) \mathop{\mathrm{\mathrm{div}}}_{\partial{\Omega}}({\vb{v}}_- - {\vb{h}}_-) \dd{\widetilde{S}} \\ =:\, &I_1 + I_2. 
\end{split}$$ Since $\div \qty(\grad \phi \cp \grad \psi) \equiv 0$ and $\div({\vb{v}}_- - {\vb{h}}_- )\equiv 0$, $$\label{key} \begin{split} I_1 =\, &\int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb*{\eta}}_-^\top}({\vb{v}}_- - {\vb{h}}_-) + ({\vb*{\eta}}_- \vdot \widetilde{{\vb{N}}}) \mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}}({\vb{v}}_- - {\vb{h}}_-) \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}} \\ =\, &\int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb*{\eta}}_-^\top}({\vb{v}}_- - {\vb{h}}_-) - ({\vb*{\eta}}_- \vdot \widetilde{{\vb{N}}}) \mathop{\mathrm{\mathrm{div}}}_{\partial{\Omega}}({\vb{v}}_- - {\vb{h}}_-) \dd{\widetilde{S}} \\ =\, &-I_2, \end{split}$$ where the geometric relation [\[eqn div Gt div Rd\]](#eqn div Gt div Rd){reference-type="eqref" reference="eqn div Gt div Rd"} has been used. Thus, a similar argument yields $$\label{key} \dv{t}\int_{\partial{\Omega}} {\vb*{\xi}}\vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}} = 0,$$ which implies the following lemma: **Lemma 20**. *Suppose that $(\widetilde{{\vb*{\omega}}}(t), \widetilde{{\vb{j}}}(t))$ is the solution to the linear system ([\[eqn linear vom\]](#eqn linear vom){reference-type="ref" reference="eqn linear vom"})-([\[eqn linear vj\]](#eqn linear vj){reference-type="ref" reference="eqn linear vj"}) with initial data $(\widetilde{{\vb*{\omega}}}_0, \widetilde{{\vb{j}}}_0)$.
If $$\label{key} \int_{\partial{\Omega}} \widetilde{{\vb*{\omega}}}_{0-} \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}} = 0 = \int_{\partial{\Omega}} \widetilde{{\vb{j}}}_{0-} \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}},$$ then for all $t$ such that the solution exists, there holds $$\label{key} \int_{\partial{\Omega}} \widetilde{{\vb*{\omega}}}_-(t) \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}} \equiv 0 \equiv \int_{\partial{\Omega}} \widetilde{{\vb{j}}}_-(t) \vdot \widetilde{{\vb{N}}} \dd{\widetilde{S}}.$$* # Nonlinear Problems {#sec nonlinear} ## Initial settings {#sec nonlinear set-up} Take a reference hypersurface ${\Gamma_\ast}\in H^{{\frac{3}{2}k}+1}$ and $\delta_0 > 0$ so that $$\Lambda_* \coloneqq \Lambda \qty({\Gamma_\ast}, {\frac{3}{2}k}-\frac{1}{2}, \delta_0)$$ satisfies all the properties discussed in the preliminaries. We will solve the nonlinear current-vortex sheet problems by an iteration scheme based on solving the linearized problems in the space: $$\label{key} \begin{split} &{\varkappa_a}\in C^0\qty([0, T]; \H{{\frac{3}{2}k}-1}) \cap C^1\qty([0, T]; B_{\delta_1}\subset\H{{\frac{3}{2}k}-\frac{5}{2}}) \cap C^2\qty([0, T]; \H{{\frac{3}{2}k}-4}); \\ &{\vb*{\omega}}_{\pm*}, {\vb{j}}_{\pm*} \in C^0\qty([0, T]; H^{{\frac{3}{2}k}-1}({\Omega}_{\Gamma_\ast}^\pm)) \cap C^1\qty([0, T]; H^{{\frac{3}{2}k}-2}({\Omega}_{\Gamma_\ast}^\pm)). \end{split}$$ In order to construct the iteration map, we define the following function space: **Definition 21**.
For given constants $T, M_0, M_1, M_2, M_3 >0$, define $\mathfrak{X}$ to be the collection of $\qty({\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*)$ satisfying: $$\abs{{\varkappa_a}(0) - \kappa_{*+}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \le \delta_1,$$ $$\abs{\qty(\partial_t {\varkappa_a})(0)}_{\H{{\frac{3}{2}k}-4}}, \norm{{\vb*{\omega}}_*(0)}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\vb{j}}_*(0)}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} \le M_0,$$ $$\sup_{t \in [0, T]} \qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}}, \abs{\partial_t {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\vb{j}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}) \le M_1,$$ $$\sup_{t \in [0, T]} \qty(\norm{\partial_t{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}})}, \norm{\partial_t{\vb{j}}_*}_{H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}})}) \le M_2,$$ $$\sup_{t \in [0, T]} \abs{\partial^2_{tt}{\varkappa_a}}_{\H{{\frac{3}{2}k}-4}} \le a^2 M_3 \ (\text{here $ a $ is the constant in the definition of $ {\varkappa_a}$}).$$ In addition, the compatibility conditions $$\label{eqn compati} \int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot {\vb*{\omega}}_{*-} \dd{\widetilde{S}} = \int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot {\vb{j}}_{*-} \dd{\widetilde{S}} = 0$$ hold for all $t\in [0, T]$. 
For $0 < \epsilon \ll \delta_1$ and $A > 0$, the collection of initial data $$\mathfrak{I}(\epsilon, A) \coloneqq\qty{\qty\big(\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}})}$$ is defined by: $$\begin{gathered} \abs{\qty({\varkappa_a})_{\mathrm{I}}-\kappa_{*+}}_{\H{{\frac{3}{2}k}-1}}<\epsilon; \\ \abs{\qty(\partial_t{\varkappa_a})_{\mathrm{I}}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}},\ \norm{\qty({\vb*{\omega}}_*)_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})},\ \norm{\qty({\vb{j}}_*)_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})} < A, \end{gathered}$$ and $$\int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot {\qty({\vb*{\omega}}_*)_{\mathrm{I}}}_- \dd{\widetilde{S}} = \int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot {\qty({\vb{j}}_*)_{\mathrm{I}}}_{-} \dd{\widetilde{S}} = 0.$$ Thus, $\mathfrak{I}(\epsilon, A) \subset \H{{\frac{3}{2}k}-1}\times\H{{\frac{3}{2}k}-\frac{5}{2}}\times H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}}) \times H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})$. ## Recovery of the fluid region, velocity and magnetic fields {#section recovery} For $\qty\big({\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*) \in \mathfrak{X}$, ${\varkappa_a}(t)$ induces a family of hypersurfaces ${\Gamma_t}\in \Lambda_*$ if $M_1 T$ is not too large. Thus $\Phi_{\Gamma_t}$ and $\mathcal{X}_{\Gamma_t}$ can be defined by ${\varkappa_a}(t)$. For a vector field $\vb{Y} : {\Omega\setminus{\Gamma_t}}\to \mathbb{R}^3$, define $$\qty({\mathbb{P}}\vb{Y})_\pm \coloneqq \vb{Y}_\pm - \grad\phi_{\pm},$$ for which $$\begin{cases*} \mathop{}\!\mathbin{\triangle}\phi_\pm = \div \vb{Y}_\pm &in $ {\Omega}_t^\pm $, \\ \phi_\pm = 0 &on $ {\Gamma_t}$, \\ \mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}} \phi_- = 0 &on $ \partial{\Omega}$. \end{cases*}$$ Namely, ${\mathbb{P}}$ is the Leray projection. 
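As a purely illustrative aside, the gradient-removal mechanism behind ${\mathbb{P}}$ can be made explicit in the simplest geometry: on the $2\pi$-periodic box (a hypothetical setting, without the mixed boundary conditions imposed above), the Leray projection is diagonal in Fourier space, $\widehat{{\mathbb{P}}\vb{Y}}(k) = \widehat{\vb{Y}}(k) - k\,\big(k \vdot \widehat{\vb{Y}}(k)\big)/\abs{k}^2$. The following sketch implements this and checks that a pure gradient field is annihilated.

```python
import numpy as np

def leray_project(u):
    """Leray projection on the 2*pi-periodic box via FFT:
    P u = u - grad(Delta^{-1} div u) is divergence free."""
    n = u.shape[1]
    k = np.fft.fftfreq(n, d=1.0 / n)         # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    K = np.stack([kx, ky, kz])
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid 0/0; the mean mode is untouched
    u_hat = np.fft.fftn(u, axes=(1, 2, 3))
    k_dot_u = (K * u_hat).sum(axis=0)        # k . u_hat, mode by mode
    u_hat -= K * k_dot_u / k2                # remove the gradient (curl-free) part
    return np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))

# A pure gradient field u = grad(-cos x1) = (sin x1, 0, 0)
# should be projected to zero (up to round-off).
n = 16
x = 2.0 * np.pi * np.arange(n) / n
X = np.meshgrid(x, x, x, indexing="ij")[0]
u = np.stack([np.sin(X), np.zeros_like(X), np.zeros_like(X)])
p = leray_project(u)
```

In the setting of this paper the analogous decomposition is realized by the elliptic boundary-value problem for $\phi_\pm$ rather than by a Fourier multiplier.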
Set $$\label{def bar vom vj} \bar{{\vb*{\omega}}} \coloneqq {\mathbb{P}}\qty({\vb*{\omega}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}), \quad \bar{{\vb{j}}} \coloneqq {\mathbb{P}}\qty({\vb{j}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}).$$ Thus, $\div\bar{{\vb*{\omega}}} = \div\bar{{\vb{j}}} = 0$ in ${\Omega\setminus{\Gamma_t}}$ and $$\int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot \bar{{\vb*{\omega}}}_- \dd{S} = 0 = \int_{\partial{\Omega}} \widetilde{{\vb{N}}} \vdot \bar{{\vb{j}}}_- \dd{S},$$ since $\mathcal{X}_{\Gamma_t}|_{\partial{\Omega}} = \mathrm{id}|_{\partial{\Omega}}$. Now, by solving the following div-curl problems: $$\label{div-curl nonlinear v} \begin{cases*} \div {\vb{v}}= 0 &in $ {\Omega\setminus{\Gamma_t}}$, \\ \curl {\vb{v}}= \bar{{\vb*{\omega}}} &in $ {\Omega\setminus{\Gamma_t}}$, \\ {\vb{v}}_\pm \vdot {\vb{N}}_+ = {\vb{N}}_+ \vdot \qty(\partial_t {\gamma_{\Gamma_t}}{\vb*{\nu}})\circ\Phi_{\Gamma_t}^{-1} &on $ {\Gamma_t}$, \\ {\vb{v}}_- \vdot \widetilde{{\vb{N}}} = 0 &on $ \partial{\Omega}$; \end{cases*}$$ and $$\label{div-curl nonlinear h} \begin{cases*} \div {\vb{h}}= 0 &in $ {\Omega\setminus{\Gamma_t}}$, \\ \curl {\vb{h}}= \bar{{\vb{j}}} &in $ {\Omega\setminus{\Gamma_t}}$, \\ {\vb{h}}_\pm \vdot {\vb{N}}_+ = 0 &on $ {\Gamma_t}$, \\ {\vb{h}}_- \vdot \widetilde{{\vb{N}}} = 0 &on $ \partial{\Omega}$, \end{cases*}$$ one can obtain the corresponding velocity and magnetic fields ${\vb{v}}_\pm, {\vb{h}}_\pm : {\Omega}_t^\pm \to \mathbb{R}^3$. Furthermore, the following estimate holds thanks to Theorem [Theorem 11](#thm div-curl){reference-type="ref" reference="thm div-curl"}: $$\label{key} \sup_{t \in [0, T]}\qty(\norm{{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)}, \norm{{\vb{h}}_\pm}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)}) \le Q(M_1),$$ where $Q$ is a generic polynomial determined by $\Lambda_*$. 
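A toy analogue of these div-curl problems, again on a periodic box rather than on ${\Omega}_t^\pm$ (so that Theorem 11 and its boundary conditions are not needed), recovers a two-dimensional velocity from its vorticity through a stream function $\psi$ with $\Delta \psi = \omega$ and ${\vb{v}} = (-\partial_y \psi, \partial_x \psi)$. The test data below are hypothetical and chosen so that the exact answer is known.

```python
import numpy as np

def solve_div_curl_2d(omega):
    """On the 2*pi-periodic box, return the mean-zero field (ux, uy) with
    div u = 0 and curl u = omega, via the stream function psi:
    Delta psi = omega,  u = (-d_y psi, d_x psi)."""
    n = omega.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid 0/0 at the mean mode
    w_hat = np.fft.fft2(omega)
    psi_hat = -w_hat / k2                # Delta^{-1}: divide by -(kx^2 + ky^2)
    psi_hat[0, 0] = 0.0                  # fix the mean of psi
    ux = np.real(np.fft.ifft2(-1j * ky * psi_hat))   # -d_y psi
    uy = np.real(np.fft.ifft2(1j * kx * psi_hat))    #  d_x psi
    return ux, uy

# With omega = -2 sin(x) sin(y), the stream function is psi = sin(x) sin(y),
# so the exact velocity is u = (-sin(x) cos(y), cos(x) sin(y)).
n = 32
x = 2.0 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
omega = -2.0 * np.sin(X) * np.sin(Y)
ux, uy = solve_div_curl_2d(omega)
```

The construction is spectrally exact for band-limited data; in the actual problem the same inversion is performed by the div-curl systems above, with the normal-trace conditions replacing periodicity.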
## Iteration map {#sec itetarion map} For $\qty({{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_*}^{(n)}, {{\vb{j}}_*}^{(n)}) \in \mathfrak{X}$ and $\qty\big{\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}}} \in \mathfrak{I}(\epsilon, A)$, define the $(n+1)$-th step by solving the following initial value problems: $$\label{eqn (n+1)ka} \begin{cases*} \partial^2_{tt}{{\varkappa_a}}^{(n+1)} + {\mathscr{C}}\qty({{\varkappa_a}}^{(n)}, {\partial_t{\varkappa_a}}^{(n)}, {{\vb{v}}_*}^{(n)}, {{\vb{h}}_*}^{(n)}){{\varkappa_a}}^{(n+1)} \\ \qquad = {\mathscr{F}}\qty({{\varkappa_a}}^{(n)})\partial_t {{\vb*{\omega}}_*}^{(n)} + {\mathscr{G}}\qty({{\varkappa_a}}^{(n)}, {\partial_t{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_*}^{(n)}, {{\vb{j}}_*}^{(n)}) \\ {{\varkappa_a}}^{(n+1)}(0) = \qty({\varkappa_a})_{\mathrm{I}}, \quad {\partial_t{\varkappa_a}}^{(n+1)}(0) = \qty(\partial_t{\varkappa_a})_{\mathrm{I}}; \end{cases*}$$ and $$\label{eqn linear (n+1) vom vj} \begin{cases*} \partial_t{{\vb*{\omega}}}^{(n+1)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}}^{(n)}}{{\vb*{\omega}}}^{(n+1)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}}^{(n)}}{{\vb{j}}}^{(n+1)} = \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb*{\omega}}}^{(n+1)}}{{\vb{v}}}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{j}}}^{(n+1)}}{{\vb{h}}}^{(n)}, \\ \partial_t{{\vb{j}}}^{(n+1)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}}^{(n)}}{{\vb{j}}}^{(n+1)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}}^{(n)}}{{\vb*{\omega}}}^{(n+1)} \\ \qquad = \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{j}}}^{(n+1)}}{{\vb{v}}}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb*{\omega}}}^{(n+1)}}{{\vb{h}}}^{(n)} - 2\tr(\grad{{\vb{v}}}^{(n)} \cp \grad {{\vb{h}}}^{(n)}), \\ {{\vb*{\omega}}}^{(n+1)}(0) = {\mathbb{P}}\qty(\qty({\vb*{\omega}}_*)_{\mathrm{I}} \circ \mathcal{X}_{{\Gamma_0}^{(n)}}^{-1}), \quad {{\vb{j}}}^{(n+1)}(0) = {\mathbb{P}}\qty(\qty({\vb{j}}_*)_{\mathrm{I}}\circ 
\mathcal{X}_{{\Gamma_0}^{(n)}}^{-1}), \end{cases*}$$ where $\qty({{\vb{v}}}^{(n)}, {{\vb{h}}}^{(n)})$ is induced by $\qty({{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_*}^{(n)}, {{\vb{j}}_*}^{(n)})$ via solving [\[div-curl nonlinear v\]](#div-curl nonlinear v){reference-type="eqref" reference="div-curl nonlinear v"}-[\[div-curl nonlinear h\]](#div-curl nonlinear h){reference-type="eqref" reference="div-curl nonlinear h"}, the tangential vector fields ${{\vb{v}}_*}^{(n)}$ and ${{\vb{h}}_*}^{(n)}$ on ${\Gamma_\ast}$ are defined by $$\label{key} \begin{split} {{\vb{v}}_{*\pm}}^{(n)} &\coloneqq \qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{{{\Gamma_t}}^{(n)}})^{-1}\qty[{{\vb{v}}_\pm}^{(n)}\circ\Phi_{{{\Gamma_t}}^{(n)}} - \qty(\partial_t\gamma_{{{\Gamma_t}}^{(n)}}){\vb*{\nu}}], \\ {{\vb{h}}_{*\pm}}^{(n)} &\coloneqq \qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{{{\Gamma_t}}^{(n)}})^{-1} \qty({{\vb{h}}_\pm}^{(n)} \circ \Phi_{{{\Gamma_t}}^{(n)}}), \end{split}$$ and the current-vorticity equations are considered in the domain ${\Omega}\setminus {{\Gamma_t}}^{(n)}$. Setting $$\label{key} {{\vb*{\omega}}_*}^{(n+1)}\coloneqq {{\vb*{\omega}}}^{(n+1)}\circ\mathcal{X}_{{{\Gamma_t}}^{(n)}}, \qand {{\vb{j}}_*}^{(n+1)}\coloneqq {{\vb{j}}}^{(n+1)}\circ\mathcal{X}_{{{\Gamma_t}}^{(n)}},$$ we will show that $\qty({{\varkappa_a}}^{(n+1)}, {{\vb*{\omega}}_*}^{(n+1)}, {{\vb{j}}_*}^{(n+1)}) \in \mathfrak{X}$.
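The convergence mechanism underlying such an iteration scheme is the classical Picard/Banach fixed-point principle: a contraction admits a unique fixed point, approached at a geometric rate. A scalar toy model (with $\cos$ as a hypothetical stand-in for the contractive map, since $\abs{\sin x} \le \sin 1 < 1$ on $[0,1]$) illustrates this:

```python
import math

def picard(T, x0, tol=1e-12, max_iter=500):
    """Iterate x_{n+1} = T(x_n); stop when successive iterates agree to tol.
    For a contraction T this converges geometrically (Banach fixed point)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_next = T(x)
        if abs(x_next - x) <= tol:
            return x_next, n
        x = x_next
    raise RuntimeError("iteration did not converge")

# Solve x = cos(x): the unique fixed point is about 0.7390851332.
root, its = picard(math.cos, 0.5)
```

For the map $\mathfrak{T}$ the role of the contraction constant is played by the small factor $T \cdot Q(M_1, M_2, a^2 M_3)$ appearing in the energy estimates.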
Indeed, in view of Lemma [Lemma 15](#lem 3.4){reference-type="ref" reference="lem 3.4"}, there hold $$\label{key} \begin{split} &\hspace{-2em}\abs{{\mathscr{F}}\qty({{\varkappa_a}}^{(n)}){\partial_t{\vb*{\omega}}_*}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \\ \le\, &Q\qty(\abs{{{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-1}})\norm{{\partial_t{\vb*{\omega}}_*}^{(n)}}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} \\ \le\, &Q(M_1) \cdot M_2, \end{split}$$ and $$\label{key} \begin{split} &\hspace{-2em}\abs{{\mathscr{G}}\qty({{\varkappa_a}}^{(n)}, {\partial_t{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_*}^{(n)}, {{\vb{j}}_*}^{(n)})}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \\ \le\, &a^2 Q\qty(\abs{{{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-1}}, \abs{{\partial_t{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{\qty({{\vb*{\omega}}_*}^{(n)}, {{\vb{j}}_*}^{(n)})}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}) \\ \le\, &a^2 Q(M_1). \end{split}$$ Furthermore, by the definition of constants $L_1, L_2$ in  [5.1](#sec linear ka){reference-type="ref" reference="sec linear ka"} and Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"}, one has $$\label{key} \begin{split} L_1 \le\, &\sup_{t \in [0, T]} \qty(\abs{{{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-1}}, \abs{{\partial_t{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}) \\ &\quad +\sup_{t \in [0, T]} \qty(\abs{{{\vb{v}}_{\pm*}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{1}{2}}}, \abs{{{\vb{h}}_{\pm*}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{1}{2}}}) \\ \le\, &Q(M_1), \end{split}$$ and $$\label{key} L_2 = \sup_{t \in [0, T]} \qty(\abs{{\partial_t{\vb{v}}_{\pm*}}^{(n)}}_{\H{{\frac{3}{2}k}-2}}, \abs{{{\vb{h}}_{\pm*}}^{(n)}}_{\H{{\frac{3}{2}k}-2}} ) \le Q\qty(M_1, M_2, a^2 M_3).$$ Thus, by taking $l=k-2$ in ([\[est linear eqn ka\]](#est linear eqn ka){reference-type="ref" reference="est linear eqn ka"}), it follows that $$\label{key} \begin{split} &\sup_{t \in [0, 
T]}\qty(\abs{{{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-1}} + \abs{{\partial_t{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}) \\ &\le C_* \exp\qty{Q\qty(M_1, M_2, a^2M_3)T} \times \qty[C_* + \epsilon + A + (M_0)^{12} + T\cdot \qty(a^2+M_2)Q(M_1)]. \end{split}$$ If $T$ is taken small enough, and $M_1$ is much larger than $M_0$ and $A$, then there holds $$\label{key} \sup_{t \in [0, T]}\qty(\abs{{{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-1}}+ \abs{{\partial_t{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}) \le M_1.$$ Moreover, choosing $M_3$ large enough compared to $M_1$ and $M_2$, one gets from ([\[eqn (n+1)ka\]](#eqn (n+1)ka){reference-type="ref" reference="eqn (n+1)ka"}) that $$\label{key} \sup_{t \in [0, T]} \abs{{\partial^2_{tt}{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-4}} \le a^2M_3.$$ Similarly, by applying Proposition [Proposition 19](#prop linear vom vj){reference-type="ref" reference="prop linear vom vj"} to ([\[eqn linear (n+1) vom vj\]](#eqn linear (n+1) vom vj){reference-type="ref" reference="eqn linear (n+1) vom vj"}), one can deduce that $$\label{key} \begin{split} &\hspace{-2em}\sup_{t \in [0, T]}\qty(\norm{{{\vb*{\omega}}_*}^{(n+1)}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}, \norm{{{\vb{j}}_*}^{(n+1)}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}) \\ \le\, &Q\qty(\abs{{{\varkappa_a}}^{(n)}}_{C^0_t\H{{\frac{3}{2}k}-\frac{5}{2}}}) e^{Q(M_1)T}(1+2A) \\ \le\, &M_1, \end{split}$$ if $T$ is small and $M_1 \gg \abs{\kappa_{*+}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}$. 
Next, by taking $M_2 \gg M_1$, it holds that $$\label{key} \begin{split} &\hspace{-2em}\norm{{\partial_t{\vb*{\omega}}_*}^{(n+1)}(t)}_{H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}})} \\ \le\, &C_* \qty(\norm{{\partial_t{\vb*{\omega}}}^{(n+1)}}_{H^{{\frac{3}{2}k}-2}({\Omega}\setminus{{\Gamma_t}}^{(n)})} + \norm{{{\vb*{\omega}}}^{(n+1)}}_{H^{{\frac{3}{2}k}-1}({\Omega}\setminus{{\Gamma_t}}^{(n)})}\abs{{\partial_t{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} ) \\ \le\, &Q(M_1) \le M_2 \end{split}$$ for all $0 \le t \le T$. Similarly, $$\label{key} \sup_{t \in [0, T]} \qty(\norm{{\partial_t{\vb*{\omega}}_*}^{(n+1)}}_{H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\partial_t{\vb{j}}_*}^{(n+1)}}_{H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}})}) \le M_2.$$ The compatibility condition [\[eqn compati\]](#eqn compati){reference-type="eqref" reference="eqn compati"} follows from Lemma [Lemma 20](#lem compatibility){reference-type="ref" reference="lem compatibility"}. With the following notation: $$\label{key} \begin{split} \mathfrak{T}\qty{\qty[\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}}], \qty[{{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_*}^{(n)}, {{\vb{j}}_*}^{(n)}]} \coloneqq \qty[{{\varkappa_a}}^{(n+1)}, {{\vb*{\omega}}_*}^{(n+1)}, {{\vb{j}}_*}^{(n+1)}], \end{split}$$ one can conclude from the previous arguments that: **Proposition 22**. *Suppose that $k \ge 2$. 
For any $0 < \epsilon \ll \delta_0$ and $A > 0$, there are positive constants $M_0, M_1, M_2, M_3$, so that for small $T > 0$, $$\mathfrak{T}\qty\Big{\qty[\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}}], \qty[{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*]} \in \mathfrak{X},$$ holds for any $\qty[\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}}] \in \mathfrak{I}(\epsilon, A)$ and $\qty[{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*] \in \mathfrak{X}$.* ## Contraction of the iteration map {#sec contra ite} In this subsection, it is always assumed that $k \ge 3$. Suppose that there are two one-parameter families $\qty({{\varkappa_a}}^{(n)}(\beta), {{\vb*{\omega}}_*}^{(n)}(\beta), {{\vb{j}}_*}^{(n)}(\beta)) \subset \mathfrak{X}$ and\ $\qty\Big(\qty({\varkappa_a})_{\mathrm{I}}(\beta), \qty(\partial_t{\varkappa_a})_{\mathrm{I}}(\beta), \qty({\vb*{\omega}}_*)_{\mathrm{I}}(\beta), \qty({\vb{j}}_*)_{\mathrm{I}}(\beta)) \subset \mathfrak{I}(\epsilon, A)$ with parameter $\beta$. Define $$\begin{split} &\hspace{-2em}\qty({{\varkappa_a}}^{(n+1)}(\beta), {{\vb*{\omega}}_*}^{(n+1)}(\beta), {{\vb{j}}_*}^{(n+1)}(\beta)) \\ \coloneqq\, &\mathfrak{T}\qty{\qty\Big(\qty({\varkappa_a})_{\mathrm{I}}(\beta), \qty(\partial_t{\varkappa_a})_{\mathrm{I}}(\beta), \qty({\vb*{\omega}}_*)_{\mathrm{I}}(\beta), \qty({\vb{j}}_*)_{\mathrm{I}}(\beta)), \qty({{\varkappa_a}}^{(n)}(\beta), {{\vb*{\omega}}_*}^{(n)}(\beta), {{\vb{j}}_*}^{(n)}(\beta))}. 
\end{split}$$ Then by applying $\pdv{\beta}$ to ([\[eqn (n+1)ka\]](#eqn (n+1)ka){reference-type="ref" reference="eqn (n+1)ka"}) and ([\[eqn linear (n+1) vom vj\]](#eqn linear (n+1) vom vj){reference-type="ref" reference="eqn linear (n+1) vom vj"}) respectively, one gets $$\label{eqn contra 1} \begin{cases*} \partial^2_{tt} \partial_\beta {{\varkappa_a}}^{(n+1)} + {{\mathscr{C}}}^{(n)}\partial_\beta{{\varkappa_a}}^{(n+1)} = - \qty(\partial_\beta{{\mathscr{C}}}^{(n)}){{\varkappa_a}}^{(n+1)} + \partial_\beta \qty({{\mathscr{F}}}^{(n)}{\partial_t{\vb*{\omega}}_*}^{(n)} + {{\mathscr{G}}}^{(n)}), \\ \partial_\beta{{\varkappa_a}}^{(n+1)}(0) = \partial_\beta\qty({\varkappa_a})_{\mathrm{I}}(\beta), \quad \partial_t\qty(\partial_\beta{{\varkappa_a}}^{(n+1)})(0) = \partial_\beta\qty(\partial_t{\varkappa_a})_{\mathrm{I}}(\beta), \end{cases*}$$ and $$\label{eqn beta n+1} \begin{cases*} \partial_t{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}}^{(n)}}{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}}^{(n)}}{\mathbb{D}_{\beta}}{{\vb{j}}}^{(n+1)} = \mathop{}\!\mathbin{\mathrm{D}}_{{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)}}{{\vb{v}}}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{\mathbb{D}_{\beta}}{{\vb{j}}}^{(n+1)}}{{\vb{h}}}^{(n)} + \va{{\mathfrak{g}}}_1, \\ \partial_t {\mathbb{D}_{\beta}}{{\vb{j}}}^{(n+1)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}}^{(n)}}{\mathbb{D}_{\beta}}{{\vb{j}}}^{(n+1)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}}^{(n)}}{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)} = \mathop{}\!\mathbin{\mathrm{D}}_{{\mathbb{D}_{\beta}}{{\vb{j}}}^{(n+1)}}{{\vb{v}}}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)}}{{\vb{h}}}^{(n)} + \va{{\mathfrak{g}}}_2, \\ {\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)}(0) = {\mathbb{P}}\qty{\qty[\partial_\beta\qty({\vb*{\omega}}_*)_{\mathrm{I}}] \circ \mathcal{X}_{{\Gamma_0}^{(n)}(\beta)}^{-1}}, \quad 
{\mathbb{D}_{\beta}}{{\vb{j}}}^{(n+1)}(0) = {\mathbb{P}}\qty{\qty[\partial_\beta\qty({\vb{j}}_*)_{\mathrm{I}}]\circ\mathcal{X}_{{\Gamma_0}^{(n)}(\beta)}^{-1}}, \end{cases*}$$ where $$\label{key} {\mathbb{D}_{\beta}}\coloneqq \pdv{\beta} + \mathop{}\!\mathbin{\mathrm{D}}_{\vb*{\mu}} \qc \vb*{\mu} \coloneqq {\mathcal{H}}\qty[\qty(\partial_\beta\gamma_{{{\Gamma_t}}^{(n)}}{\vb*{\nu}})\circ\Phi_{{{\Gamma_t}}^{(n)}(\beta)}^{-1}],$$ $$\label{key} \begin{split} \va{{\mathfrak{g}}}_1 \coloneqq\, &\comm{\partial_t}{{\mathbb{D}_{\beta}}}{{\vb*{\omega}}}^{(n+1)} + \comm{\mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}}^{(n)}}}{{\mathbb{D}_{\beta}}}{{\vb*{\omega}}}^{(n+1)} - \comm{\mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}}^{(n)}}}{{\mathbb{D}_{\beta}}}{{\vb{j}}}^{(n+1)} \\ &+{{\vb*{\omega}}}^{(n+1)} \vdot {\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\mathrm{D}}{{\vb{v}}}^{(n)} - {{\vb{j}}}^{(n+1)} \vdot {\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\mathrm{D}}{{\vb{h}}}^{(n)}, \end{split}$$ and $$\label{eqn g2} \begin{split} \va{{\mathfrak{g}}}_2 \coloneqq\, &-2{\mathbb{D}_{\beta}}\tr(\grad{{\vb{v}}}^{(n)} \cp \grad{{\vb{h}}}^{(n)}) + \comm{\partial_t}{{\mathbb{D}_{\beta}}}{{\vb{j}}}^{(n+1)} + \comm{\mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}}^{(n)}}}{{\mathbb{D}_{\beta}}}{{\vb{j}}}^{(n+1)} \\ &-\comm{\mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}}^{(n)}}}{{\mathbb{D}_{\beta}}}{{\vb*{\omega}}}^{(n+1)}+ {{\vb{j}}}^{(n+1)}\vdot {\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\mathrm{D}}{{\vb{v}}}^{(n)} - {{\vb*{\omega}}}^{(n+1)}\vdot{\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\mathrm{D}}{{\vb{h}}}^{(n)}. \end{split}$$ To estimate the Lipschitz constant for the iteration map $\mathfrak{T}$, we consider the following energy functionals: $$\label{key} \begin{split} {\mathfrak{E}}^{(n)}(\beta)\coloneqq &\sup_{t \in [0, T]}\left(\abs{\partial_\beta{{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial_\beta{\partial_t{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-4}} + \right.
\\ &\qquad\qquad \left. \norm{\partial_\beta{{\vb*{\omega}}_*}^{(n)}}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} + \norm{\partial_\beta{{\vb{j}}_*}^{(n)}}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} \right), \end{split}$$ and $$\label{key} \begin{split} \qty(\mathfrak{E})_{\mathrm{I}}(\beta)\coloneqq\, &\left(\abs{\partial_\beta\qty({\varkappa_a})_{\mathrm{I}}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial_\beta\qty(\partial_t{\varkappa_a})_{\mathrm{I}}}_{\H{{\frac{3}{2}k}-4}} + \right. \\ &\qquad \left. \norm{\partial_\beta\qty({\vb*{\omega}}_*)_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} + \norm{\partial_\beta\qty({\vb{j}}_*)_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} \right). \end{split}$$ Thanks to Lemmas [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"}, [Lemma 14](#lem 3.3){reference-type="ref" reference="lem 3.3"}, and [Lemma 15](#lem 3.4){reference-type="ref" reference="lem 3.4"}, it holds that $$\label{key} \begin{split} \abs{\qty(\partial_\beta{{\mathscr{C}}}^{(n)}){{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-4}} \le Q(M_1) {\mathfrak{E}}^{(n)}\abs{{{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-2}} \le Q(M_1) {\mathfrak{E}}^{(n)}, \end{split}$$ $$\label{key} \abs{{\mathscr{F}}\qty({{\varkappa_a}}^{(n)})\partial_\beta\partial_t{{\vb*{\omega}}_*}^{(n)}}_{\H{{\frac{3}{2}k}-4}} \le Q(M_1) \norm{\partial_\beta\partial_t {{\vb*{\omega}}_*}^{(n)}}_{H^{{\frac{3}{2}k}-4}({\Omega\setminus{\Gamma_\ast}})},$$ and $$\label{key} \begin{split} &\hspace{-2em}\abs{\qty(\partial_\beta{{\mathscr{F}}}^{(n)})\partial_t{{\vb*{\omega}}_*}^{(n)} + \partial_\beta{{\mathscr{G}}}^{(n)}}_{\H{{\frac{3}{2}k}-4}} \\ \le\, &C_* \abs{\partial_\beta{{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}\norm{\partial_t{{\vb*{\omega}}_*}^{(n)}}_{H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}})} + a^2 Q(M_1) {\mathfrak{E}}^{(n)} \\ \le\, &\qty(C_*M_2 +
a^2 Q(M_1)) {\mathfrak{E}}^{(n)}. \end{split}$$ Taking $l=k-3$ in ([\[est linear eqn ka\]](#est linear eqn ka){reference-type="ref" reference="est linear eqn ka"}) leads to $$\label{est beta ka(n+1)} \begin{split} &\hspace{-1em}\sup_{t \in [0, T]} \qty(\abs{\partial_\beta{{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial_\beta{\partial_t{\varkappa_a}}^{(n+1)}}_{\H{{\frac{3}{2}k}-4}}) \\ \le\, &C_* \exp{Q\qty(M_1, M_2, a^2M_3)T} \\ & \times\qty(\qty(\mathfrak{E})_{\mathrm{I}} + T \cdot \qty[C_*M_2 + \qty(1+a^2)Q(M_1)]{\mathfrak{E}}^{(n)} +T\cdot Q(M_1)\sup_{t \in [0, T]}\norm{\partial_\beta\partial_t{{\vb*{\omega}}_*}^{(n)}}_{H^{{\frac{3}{2}k}-4}({\Omega\setminus{\Gamma_\ast}})}). \end{split}$$ To estimate ([\[eqn beta n+1\]](#eqn beta n+1){reference-type="ref" reference="eqn beta n+1"}), one can derive that $$\label{key} \begin{split} \norm{\va{{\mathfrak{g}}}_1}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega}\setminus{{\Gamma_t}}^{(n)})} \le Q(M_1) \qty(\abs{\partial_t\partial_\beta{{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-4}} + \norm{\qty({\mathbb{D}_{\beta}}{{\vb{v}}}^{(n)}, {\mathbb{D}_{\beta}}{{\vb{h}}}^{(n)})}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Omega}\setminus{{\Gamma_t}}^{(n)})}), \end{split}$$ which, together with [\[est pd beta v+\*\]](#est pd beta v+*){reference-type="eqref" reference="est pd beta v+*"}, implies $$\label{key} \norm{\va{{\mathfrak{g}}}_1}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega}\setminus{{\Gamma_t}}^{(n)})} \le Q(M_1) {\mathfrak{E}}^{(n)}.$$ Similarly, one can obtain $$\label{key} \norm{\va{{\mathfrak{g}}}_2}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega}\setminus{{\Gamma_t}}^{(n)})} \le Q(M_1) {\mathfrak{E}}^{(n)}.$$ It follows from the same arguments as in Proposition [Proposition 19](#prop linear vom vj){reference-type="ref" reference="prop linear vom vj"} that $$\label{est dbt vom vj (n+1)} \begin{split} &\hspace{-2em}\sup_{t \in [0, 
T]}\qty(\norm{{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)}}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega}\setminus{{\Gamma_t}}^{(n)})} + \norm{{\mathbb{D}_{\beta}}{{\vb{j}}}^{(n+1)}}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega}\setminus{{\Gamma_t}}^{(n)})}) \\ \le\, &e^{Q(M_1)T} \qty{\qty(\mathfrak{E})_{\mathrm{I}} + TQ(M_1){\mathfrak{E}}^{(n)}}, \end{split}$$ and thus $$\label{key} \norm{\partial_t{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)}}_{H^{{\frac{3}{2}k}-4}({\Omega}\setminus{{\Gamma_t}}^{(n)})} \le e^{Q(M_1)T} Q(M_1)\qty{\qty(\mathfrak{E})_{\mathrm{I}} + TQ(M_1){\mathfrak{E}}^{(n)}}.$$ Since $$\label{key} \qty[\partial_t\partial_\beta{{\vb*{\omega}}_*}^{(n+1)}]\circ\mathcal{X}_{{{\Gamma_t}}^{(n)}}^{-1} = \partial_t{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)} + \mathop{}\!\mathbin{\mathrm{D}}_{{\mathcal{H}}\qty[\qty(\partial_t{{\gamma_{\Gamma_t}}}^{(n)}{\vb*{\nu}})\circ\Phi_{{{\Gamma_t}}^{(n)}}^{-1}]}{\mathbb{D}_{\beta}}{{\vb*{\omega}}}^{(n+1)},$$ one has $$\label{est pd beta t vom (n+1)} \norm{\partial_\beta\partial_t{{\vb*{\omega}}_*}^{(n+1)}}_{H^{{\frac{3}{2}k}-4}({\Omega\setminus{\Gamma_\ast}})} \le e^{Q(M_1)T} Q(M_1)\qty{\qty(\mathfrak{E})_{\mathrm{I}} + TQ(M_1){\mathfrak{E}}^{(n)}}.$$ Set $$\label{key} {\mathfrak{F}}^{(n)}\coloneqq\norm{\partial_\beta\partial_t{{\vb*{\omega}}_*}^{(n)}}_{H^{{\frac{3}{2}k}-4}({\Omega\setminus{\Gamma_\ast}})}.$$ Then ([\[est beta ka(n+1)\]](#est beta ka(n+1)){reference-type="ref" reference="est beta ka(n+1)"})-([\[est pd beta t vom (n+1)\]](#est pd beta t vom (n+1)){reference-type="ref" reference="est pd beta t vom (n+1)"}) imply that $$\label{key} \begin{split} {\mathfrak{E}}^{(n+1)} \le\, &C_* \exp\qty{Q\qty(M_1, M_2, a^2M_3)T} \\ &\quad\times \qty{\qty(\mathfrak{E})_{\mathrm{I}} + T \cdot \qty[M_2{\mathfrak{E}}^{(n)} + \qty(1+a^2)Q(M_1){\mathfrak{E}}^{(n)} + Q(M_1){\mathfrak{F}}^{(n)}]} \\ &+ e^{Q(M_1)T} Q(M_1)\qty{\qty(\mathfrak{E})_{\mathrm{I}} + TQ(M_1){\mathfrak{E}}^{(n)}}, \end{split}$$ and $$\label{key} 
{\mathfrak{F}}^{(n+1)} \le e^{Q(M_1)T} Q(M_1)\qty{\qty(\mathfrak{E})_{\mathrm{I}} + TQ(M_1){\mathfrak{E}}^{(n)}}.$$ Thus, if $T$ is taken small compared to $M_1$, $M_2$, $M_3$, and $a$, then $$\label{key} {\mathfrak{E}}^{(n+1)} + {\mathfrak{F}}^{(n+1)} \le \dfrac{1}{2}\qty({\mathfrak{E}}^{(n)}+{\mathfrak{F}}^{(n)}) + Q(M_1)\qty(\mathfrak{E})_{\mathrm{I}}.$$ An immediate consequence is that, for fixed initial data, the iteration map $\mathfrak{T}$ is a contraction on a space containing $\mathfrak{X}$. Since $\mathfrak{T}$ maps $\mathfrak{X}$ into itself, it has a unique fixed point in $\mathfrak{X}$. Namely, one obtains:

**Proposition 23**. *Assume that $k \ge 3$. For any $0 < \epsilon \ll \delta_0$ and $A > 0$, there are positive constants $M_0, M_1, M_2, M_3$ so that if $T$ is small enough, then there is a map $\mathfrak{S} : \mathfrak{I}(\epsilon, A) \to \mathfrak{X}$ such that $$\label{key} \mathfrak{T}\qty{\mathfrak{x}, \mathfrak{S}(\mathfrak{x})} = \mathfrak{S}(\mathfrak{x})$$ for each $\mathfrak{x} = \qty\Big(\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}}) \in \mathfrak{I}(\epsilon, A)$.*

## The original nonlinear MHD problem {#sec back to MHD}

For any given initial data $\Gamma_0 \in H^{{\frac{3}{2}k}+1}$ and ${\vb{v}}_0, {\vb{h}}_0 \in H^{{\frac{3}{2}k}}({\Omega}\setminus\Gamma_0)$, one can construct $$\qty\Big({\varkappa_a}(0), \partial_t{\varkappa_a}(0), {\vb*{\omega}}_*(0), {\vb{j}}_*(0)).$$ Indeed, for a reference hypersurface ${\Gamma_\ast}\in H^{{\frac{3}{2}k}+1}$ close enough to $\Gamma_0$ (or ${\Gamma_\ast}= \Gamma_0$) and a transversal field ${\vb*{\nu}}\in \H{{\frac{3}{2}k}-1}$, ${\varkappa_a}(0)$ can be given by $\Gamma_0$, and $\partial_t{\varkappa_a}(0)$ is determined by $\theta_0 = {\vb{v}}_{0\pm}\vdot{\vb{N}}_+$. 
In addition, let $$\label{key} {\vb*{\omega}}_*(0) \coloneqq \qty(\curl {\vb{v}}_0) \circ \mathcal{X}_{\Gamma_0}, \quad {\vb{j}}_*(0)\coloneqq\qty(\curl {\vb{h}}_0) \circ \mathcal{X}_{\Gamma_0}.$$ Thus, ${\varkappa_a}(0) \in \H{{\frac{3}{2}k}-1}, \partial_t{\varkappa_a}(0) \in \H{{\frac{3}{2}k}-\frac{5}{2}}$, and ${\vb*{\omega}}_*(0), {\vb{j}}_*(0) \in H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})$. Let $\qty{\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}}} \coloneqq \qty{{\varkappa_a}(0), \partial_t{\varkappa_a}(0), {\vb*{\omega}}_*(0), {\vb{j}}_*(0)}$, and take $\qty{{\varkappa_a}(t), {\vb*{\omega}}_*(t), {\vb{j}}_*(t)} \coloneqq \mathfrak{S}(\mathfrak{x}) \in \mathfrak{X}$, the fixed point provided by the map $\mathfrak{S} : \mathfrak{I}(\epsilon, A) \to \mathfrak{X}$ given in Proposition [Proposition 23](#fixed point){reference-type="ref" reference="fixed point"}. Thus, $\qty({\Gamma_t}, {\vb{v}}, {\vb{h}})$ can be obtained as discussed in  [6.2](#section recovery){reference-type="ref" reference="section recovery"}. We now show that the induced triple $\qty({\Gamma_t}, {\vb{v}}, {\vb{h}})$ is a solution to the (MHD)-(BC) problem with initial data $\qty(\Gamma_0, {\vb{v}}_0, {\vb{h}}_0)$. Indeed, it is clear that $\Gamma(0) = \Gamma_0$, ${\vb{v}}(0) = {\vb{v}}_0$, and ${\vb{h}}(0) = {\vb{h}}_0$ by the construction and the uniqueness of solutions to div-curl systems. 
First, we claim that $$\label{key} {\mathbb{P}}\qty({\vb*{\omega}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}) = {\vb*{\omega}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}, \quad {\mathbb{P}}\qty({\vb{j}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}) = {\vb{j}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}.$$ Indeed, taking the divergence of ([\[eqn linear (n+1) vom vj\]](#eqn linear (n+1) vom vj){reference-type="ref" reference="eqn linear (n+1) vom vj"}) and using the fact that $\div {\vb{v}}\equiv 0 \equiv \div{\vb{h}}$ yield $$\label{key} \begin{cases*} \partial_t (\div{\vb*{\omega}}) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}}(\div {\vb*{\omega}}) = \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}} (\div{\vb{j}}), \\ \partial_t (\div{\vb{j}}) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}}(\div{\vb{j}}) = \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}} (\div{\vb*{\omega}}), \end{cases*}$$ where $$\label{key} {\vb*{\omega}}(t) \coloneqq {\vb*{\omega}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}, \quad {\vb{j}}(t)\coloneqq {\vb{j}}_*(t) \circ \mathcal{X}_{\Gamma_t}^{-1}.$$ Since $\div {\vb*{\omega}}(0) = 0 = \div {\vb{j}}(0)$, it follows from the arguments in  [5.2](#sec linear current vortex){reference-type="ref" reference="sec linear current vortex"} that $\div{\vb*{\omega}}\equiv 0 \equiv \div {\vb{j}}$ for all $t$, which proves the claim. 
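The mechanism behind this propagation can also be recorded as a formal energy identity (a sketch, assuming enough regularity to integrate by parts over the moving domains; it uses the transport theorem, $\div{\vb{v}}\equiv 0 \equiv \div{\vb{h}}$, and the tangency of ${\vb{h}}$ to ${\Gamma_t}$ and $\partial{\Omega}$): $$\dv{t} \dfrac{1}{2}\int_{{\Omega}\setminus{\Gamma_t}} \qty(\abs{\div{\vb*{\omega}}}^2 + \abs{\div{\vb{j}}}^2) \dd{x} = \int_{{\Omega}\setminus{\Gamma_t}} \qty(\div{\vb*{\omega}}\, \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}}\div{\vb{j}}+ \div{\vb{j}}\, \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}}\div{\vb*{\omega}}) \dd{x} = \int_{{\Omega}\setminus{\Gamma_t}} \div\qty(\div{\vb*{\omega}}\,\div{\vb{j}}\,{\vb{h}}) \dd{x} = 0,$$ so the $L^2$ norm of $\qty(\div{\vb*{\omega}}, \div{\vb{j}})$ is conserved, and it vanishes for all $t$ once it vanishes at $t = 0$.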
Consequently, $$\label{eqn curl vv vh} \curl{\vb{v}}= {\vb*{\omega}}\qand \curl {\vb{h}}= {\vb{j}}.$$ Next, as in ([\[decomp p\]](#decomp p){reference-type="ref" reference="decomp p"}), define the pressure functions via $$\label{key} p^{\pm} = \rho_\pm \qty(p_{{\vb{v}}, {\vb{v}}}^{\pm} - p_{{\vb{h}}, {\vb{h}}}^{\pm} + p_\kappa^\pm) + {\mathcal{H}}_\pm \bar{{\mathcal{N}}}^{-1}(-g^+ + g^-),$$ with $p^\pm_\kappa$ defined by ([\[def p_kappa\]](#def p_kappa){reference-type="ref" reference="def p_kappa"}) and $$\label{key} g^{\pm} \coloneqq 2 \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_\pm^\top}\theta - {\vb{I\!I}}_+\qty({\vb{v}}_\pm^\top, {\vb{v}}_\pm^\top) + {\vb{I\!I}}_+\qty({\vb{h}}_\pm^\top, {\vb{h}}_\pm^\top) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}}, {\vb{v}}}^\pm - p_{{\vb{h}}, {\vb{h}}}^\pm).$$ Inspired by [@Sun-Wang-Zhang2018], define $$\label{key} {\vb{V}}_\pm \coloneqq \partial_t {\vb{v}}_\pm + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_\pm}{\vb{v}}_\pm + \dfrac{1}{\rho_\pm}\grad p^\pm - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{h}}_\pm,$$ and $$\label{key} {\vb{H}}_\pm \coloneqq \partial_t {\vb{h}}_\pm + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_\pm}{\vb{h}}_\pm - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{v}}_\pm.$$ It suffices to show that ${\vb{V}}\equiv \vb{0} \equiv {\vb{H}}$ for $0 < t < T$. Indeed, since ${\Omega}_t^\pm$ are both assumed to be simply-connected, one only needs to verify that both $\vb{Z} = {\vb{V}}$ and $\vb{Z} = {\vb{H}}$ satisfy the following system: $$\label{key} \begin{cases*} \div \vb{Z}_\pm = 0 &in $ {\Omega}_t^\pm $, \\ \curl \vb{Z}_\pm = \vb{0} &in $ {\Omega}_t^\pm $, \\ \vb{Z}_\pm \vdot {\vb{N}}_+ = 0 &on $ {\Gamma_t}$, \\ \vb{Z}_- \vdot \widetilde{{\vb{N}}} = 0 &on $ \partial{\Omega}$. 
\end{cases*}$$

### Verification of ${\vb{V}}$ {#verification-of-vbv .unnumbered}

Observe that $$\label{key} \div {\vb{v}}\equiv 0 \equiv \div{\vb{h}}.$$ Then it follows from the definitions of $p^\pm_{\vb{a}, \vb{b}}$ that $$\label{key} \div {\vb{V}}\equiv 0.$$ Taking the curl of ${\vb{V}}$ and using ([\[eqn curl vv vh\]](#eqn curl vv vh){reference-type="ref" reference="eqn curl vv vh"}) together with ([\[eqn pdt vom\]](#eqn pdt vom){reference-type="ref" reference="eqn pdt vom"}) leads to $$\label{key} \curl {\vb{V}}= \partial_t {\vb*{\omega}}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}}{\vb*{\omega}}- \mathop{}\!\mathbin{\mathrm{D}}_{{\vb*{\omega}}}{\vb{v}}- \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}}{\vb{j}}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{j}}}{\vb{h}}= \vb{0}.$$ In addition, it follows from ([\[def p_ab\^-\]](#def p_ab^-){reference-type="ref" reference="def p_ab^-"}) that on $\partial{\Omega}$: $$\label{key} {\vb{V}}_- \vdot \widetilde{{\vb{N}}} = - \widetilde{{\vb{I\!I}}}({\vb{v}}_-, {\vb{v}}_-) + \widetilde{{\vb{I\!I}}}({\vb{h}}_-, {\vb{h}}_-) + \mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}}(p^-_{{\vb{v}}, {\vb{v}}} - p^-_{{\vb{h}}, {\vb{h}}}) = 0.$$ Thus, it remains to show that ${\vb{V}}_\pm \vdot {\vb{N}}_+ \equiv 0$. 
Note that for $\theta\coloneqq {\vb{v}}_\pm \vdot {\vb{N}}_+$ and $\ev{g^\pm}\coloneqq\fint_{\Gamma_t}g^\pm \dd{S_t}$, there hold $$\label{key} \begin{split} &\hspace{-1em}{\vb{V}}_+ \vdot {\vb{N}}_{+} \\ =\, &{\vb{N}}_+ \vdot \qty({\mathbb{D}_t}_+ {\vb{v}}_+ - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}}, {\vb{v}}}^+ - p_{{\vb{h}}, {\vb{h}}}^+ + p_\kappa^+) - \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\bar{{\mathcal{N}}}^{-1}(g^+ - g^-) \\ =\, &{\mathbb{D}_t}_+ \theta + {\vb{N}}_+ \vdot (\mathop{}\!\mathbin{\mathrm{D}}{\vb{v}}_+)\vdot {\vb{v}}_+^\top + {\vb{I\!I}}_+({\vb{h}}_+, {\vb{h}}_+) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}}, {\vb{v}}}^+ - p_{{\vb{h}}, {\vb{h}}}^+) \\ &+ \widetilde{{\mathcal{N}}}\kappa_+ - \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\bar{{\mathcal{N}}}^{-1}(g^+ - g^-) \\ =\, &{\mathbb{D}_t}_+ \theta + {\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top}{\vb{v}}_+ + {\vb{I\!I}}_+({\vb{h}}_+, {\vb{h}}_+) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_+}\qty(p_{{\vb{v}}, {\vb{v}}}^+ - p_{{\vb{h}}, {\vb{h}}}^+) \\ &+ \widetilde{{\mathcal{N}}}\kappa_+ + \qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-)\bar{{\mathcal{N}}}^{-1}g^+ + \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\bar{{\mathcal{N}}}^{-1}g^- -g^+ + \ev{g^+} \\ =\, &{\mathbb{D}_t}_+ \theta - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top}\theta + \widetilde{{\mathcal{N}}}\kappa_+ + \qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-)\bar{{\mathcal{N}}}^{-1}g^+ + \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\bar{{\mathcal{N}}}^{-1}g^- + \ev{g^+}, \end{split}$$ and $$\label{key} \begin{split} &\hspace{-2em}{\vb{V}}_- \vdot {\vb{N}}_- \\ =\, &-{\mathbb{D}_t}_- \theta + {\vb{N}}_- \vdot (\mathop{}\!\mathbin{\mathrm{D}}{\vb{v}}_-) \vdot {\vb{v}}_- + {\vb{I\!I}}_-({\vb{h}}_-, {\vb{h}}_-) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-}(p_{{\vb{v}}, {\vb{v}}}^- - p_{{\vb{h}}, {\vb{h}}}^-) \\ &-\widetilde{{\mathcal{N}}}\kappa_+ - 
\qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-) \bar{{\mathcal{N}}}^{-1}(g^+ - g^-) \\ =\, &-{\mathbb{D}_t}_- \theta + {\vb{N}}_- \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-^\top}{\vb{v}}_- + {\vb{I\!I}}_-({\vb{h}}_-, {\vb{h}}_-) + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{N}}_-}(p_{{\vb{v}}, {\vb{v}}}^- - p_{{\vb{h}}, {\vb{h}}}^-) \\ &-\widetilde{{\mathcal{N}}}\kappa_+ - \qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-) \bar{{\mathcal{N}}}^{-1}g^+ - \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\bar{{\mathcal{N}}}^{-1}g^- + g^- - \ev{g^-} \\ =\, &-{\mathbb{D}_t}_- \theta + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-^\top}\theta -\widetilde{{\mathcal{N}}}\kappa_+ - \qty(\dfrac{1}{\rho_-}{\mathcal{N}}_-)\bar{{\mathcal{N}}}^{-1}g^+ - \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+)\bar{{\mathcal{N}}}^{-1}g^- - \ev{g^-}. \end{split}$$ Hence, $$\label{key} \begin{split} {\vb{V}}_+ \vdot {\vb{N}}_+ + {\vb{V}}_- \vdot {\vb{N}}_- &= {\mathbb{D}_t}_+ \theta - {\mathbb{D}_t}_- \theta - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top}\theta + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_-^\top}\theta + \ev{g^+ - g^-} \\ &= \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+ - {\vb{v}}_-}\theta - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_+^\top -{\vb{v}}_-^\top}\theta + \ev{g^+ - g^-} \\ &=0, \end{split}$$ where the last equality follows from ([\[int g\^+ - g\^-\]](#int g^+ - g^-){reference-type="ref" reference="int g^+ - g^-"}) and the relation ${\vb{v}}_+ \vdot {\vb{N}}_+ = {\vb{v}}_- \vdot {\vb{N}}_+$, which ensures that ${\vb{v}}_+ - {\vb{v}}_-$ is tangential to ${\Gamma_t}$, so that the two derivative terms cancel. 
Therefore, since ${\vb{N}}_- = -{\vb{N}}_+$ on ${\Gamma_t}$, one can define $$\label{key} \Theta \coloneqq {\vb{V}}_+ \vdot {\vb{N}}_+ = {\vb{V}}_- \vdot {\vb{N}}_+.$$ For ${\vb{W}}$ defined via ([\[def vW\]](#def vW){reference-type="ref" reference="def vW"}), the relation $\div {\vb{v}}\equiv 0$ implies $q^\pm = p^\pm$, that is, $$\label{eqn vW vV} {\vb{W}}= \dfrac{\rho_+}{\rho_+ + \rho_-}{\vb{V}}_+ + \dfrac{\rho_-}{\rho_+ + \rho_-} {\vb{V}}_- \qq{on} {\Gamma_t}.$$ Thus, $$\label{key} \Theta = {\vb{W}}\vdot {\vb{N}}_+.$$ Because $\qty{{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*} \in \mathfrak{X}$ is a fixed point of $\mathfrak{T}$, by ([\[eqn pd2 tt ka\]](#eqn pd2 tt ka){reference-type="ref" reference="eqn pd2 tt ka"}), it holds that $$\label{last eqn vW} -\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\vb{W}}\vdot {\vb{N}}_+) + {\vb{W}}\vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+ + a^2 \dfrac{{\vb{W}}\vdot {\vb{N}}_+}{{\vb{N}}_+ \vdot ({\vb*{\nu}}\circ \Phi_{\Gamma_t}^{-1})} = 0.$$ In addition, since $\curl {\vb{V}}= \vb{0}, \div {\vb{V}}= 0$, ${\vb{V}}_- \vdot \widetilde{{\vb{N}}} = 0$ and ${\Omega}_t^\pm$ are both simply-connected, there are two mean-zero functions $r^\pm(t, x') : {\Gamma_t}\to \mathbb{R}$ so that $$\label{key} {\vb{V}}_\pm = \dfrac{1}{\rho_\pm} \grad {\mathcal{H}}_\pm r^\pm,$$ which implies that $$\label{eqn Theta r} \begin{split} \Theta &= {\vb{V}}_+ \vdot {\vb{N}}_+ = \qty(\dfrac{1}{\rho_+}{\mathcal{N}}_+) r^+ \\&= -{\vb{V}}_- \vdot {\vb{N}}_- = -\qty( \dfrac{1}{\rho_-}{\mathcal{N}}_- ) r^-. 
\end{split}$$ It follows from ([\[last eqn vW\]](#last eqn vW){reference-type="ref" reference="last eqn vW"}), ([\[eqn vW vV\]](#eqn vW vV){reference-type="ref" reference="eqn vW vV"}), and the identity ([\[lap vn\]](#lap vn){reference-type="ref" reference="lap vn"}) that $$\label{key} -\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\Theta + \qty(\dfrac{a^2}{{\vb{N}}_+ \vdot ({\vb*{\nu}}\circ \Phi_{\Gamma_t}^{-1}) } - \abs{{\vb{I\!I}}_+}^2)\Theta + \dfrac{1}{\rho_+ + \rho_-} \grad^\top \qty(r^+ + r^-) \vdot \grad^\top \kappa_+ = 0.$$ If $a_0$ is taken large enough so that $\dfrac{a^2}{{\vb{N}}_+ \vdot ({\vb*{\nu}}\circ \Phi_{\Gamma_t}^{-1}) } - \abs{{\vb{I\!I}}_+}^2 > a$ holds for all $t \in [0, T]$ whenever $a \ge a_0$ (indeed, this bound holds uniformly for all $\Gamma \in \Lambda_*$), then $$\label{key} \begin{split} \abs{\grad^\top \Theta}_{L^2({\Gamma_t})}^2 + a\abs{\Theta}_{L^2({\Gamma_t})}^2 &\le \frac{1}{\rho_+ + \rho_-} \abs{\Theta}_{L^2({\Gamma_t})} \cdot \abs{\grad^\top \kappa_+ \vdot \grad^\top\qty(r^+ + r^-)}_{L^2({\Gamma_t})} \\ &\le C_* \abs{\Theta}_{L^2({\Gamma_t})}\abs{r^+ + r^-}_{H^{\frac{3}{2}}({\Gamma_t})}\abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} \\ &\le C_{*0} \qty(\abs{\Theta}_{L^2({\Gamma_t})}^2 + \abs{r^+ + r^-}_{H^{\frac{3}{2}}({\Gamma_t})}^2 ). \end{split}$$ It can be deduced directly from [\[eqn Theta r\]](#eqn Theta r){reference-type="eqref" reference="eqn Theta r"} that $$r^\pm = \pm \qty(\dfrac{1}{\rho_\pm}{\mathcal{N}}_\pm)^{-1}\Theta,$$ which implies that $$\label{key} \begin{split} \abs{r^+ + r^-}_{H^{\frac{3}{2}}({\Gamma_t})}^2 &\le C_* \abs{\Theta}_{H^{\frac{1}{2}}({\Gamma_t})}^2 \\ &\le \frac{1}{2C_{*0}}\abs{\grad^\top{\Theta}}_{L^2({\Gamma_t})}^2 + C_* \abs{\Theta}_{L^2({\Gamma_t})}^2. 
\end{split}$$ Thus, for a generic constant $C_*$ determined by $\Lambda_*$, it holds that $$\label{key} \abs{\grad^\top\Theta}_{L^2({\Gamma_t})}^2 + \qty(2a-C_*) \abs{\Theta}_{L^2({\Gamma_t})}^2 \le 0.$$ If $a_0$ is taken large enough (depending only on $\Lambda_*$) that $2a - C_* > 0$ whenever $a \ge a_0$, then $\Theta = 0$, which yields ${\vb{V}}(t) \equiv \vb{0}$.

### Verification of $\vb{H}$ {#verification-of-vbh .unnumbered}

Similar arguments show that $$\label{key} \div {\vb{H}}\equiv 0, \quad \curl {\vb{H}}\equiv \vb{0}.$$ As for the boundary terms, observe that ${\vb{h}}_\pm \vdot {\vb{N}}_+ \equiv 0$, which implies $$\label{key} \begin{split} {\vb{H}}_\pm \vdot {\vb{N}}_+ &={\vb{N}}_+ \vdot{\mathbb{D}_t}_\pm {\vb{h}}_\pm - {\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{v}}_\pm \\ &=-{\vb{h}}_\pm\vdot{\mathbb{D}_t}_\pm {\vb{N}}_+ - {\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{v}}_\pm \\ &={\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{v}}_\pm - {\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{v}}_\pm \\ &=0. \end{split}$$ In addition, since $\widetilde{{\vb{N}}}$ is independent of $t$ and the commutator of two vector fields tangent to $\partial{\Omega}$ remains tangent to $\partial{\Omega}$, one can derive from ${\vb{v}}_- \vdot \widetilde{{\vb{N}}} = 0 = {\vb{h}}_- \vdot \widetilde{{\vb{N}}}$ that $$\label{key} \begin{split} {\vb{H}}_- \vdot \widetilde{{\vb{N}}} = \partial_t {\vb{h}}_- \vdot \widetilde{{\vb{N}}} + \comm{{\vb{v}}_-}{{\vb{h}}_-}\vdot \widetilde{{\vb{N}}} = 0. \end{split}$$ Therefore, ${\vb{H}}(t) \equiv \vb{0}$. The previous arguments ensure that $\qty({\Gamma_t}, {\vb{v}}, {\vb{h}})$ is a solution to the original (MHD) system, with (BC) following from the construction. The uniqueness and the continuous dependence on the initial data for the original problem follow from those of the div-curl formulation. In conclusion, Theorem [Theorem 1](#thm s.t.){reference-type="ref" reference="thm s.t."} holds. 
# Stabilization Effect of the Syrovatskij Condition {#sec syro} ## The strict Syrovatskij condition Since the free interface is a compact 2-D manifold, $\abs*{{\vb{h}}_+ \cp {\vb{h}}_-} > 0$ on $\Gamma$ implies that ${\vb{h}}_\pm$ form a global frame of $\Gamma$. Therefore, for any tangential vector field $\vb{a}$ on $\Gamma$, there is a unique decomposition: $$\label{decomp syro} \vb{a} = a^+ {\vb{h}}_+ + a^- {\vb{h}}_-.$$ Furthermore, the following relation holds: **Lemma 24**. *If [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} holds on a compact hypersurface $\Gamma \subset \mathbb{R}^3$, then for any non-vanishing tangential vector field $\vb{a}$ on $\Gamma$, it holds that $$\label{strong syro equiv} \frac{\rho_+}{\rho_+ + \rho_-}\abs{\vb{a} \vdot {\vb{h}}_+}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{\vb{a} \vdot {\vb{h}}_-}^2 - \frac{\rho_+ \rho_-}{(\rho_+ + \rho_-)^2}\abs{\vb{a} \vdot \llbracket{\vb{v}}\rrbracket}^2 > 0 \qq{on} \Gamma.$$* *Proof.* For simplicity, we shall use the notations: $$\label{key} g_{++}\coloneqq {\vb{h}}_+ \vdot {\vb{h}}_+ \qc g_{--}\coloneqq {\vb{h}}_- \vdot {\vb{h}}_- \qc g_{+-} \equiv g_{-+} \coloneqq {\vb{h}}_- \vdot {\vb{h}}_+,$$ the decomposition [\[decomp syro\]](#decomp syro){reference-type="eqref" reference="decomp syro"}, and $$\label{key} \llbracket{\vb{v}}\rrbracket \equiv w^+ {\vb{h}}_+ + w^- {\vb{h}}_- \qq{on} \Gamma.$$ Thus, [\[Syro 3\"\]](#Syro 3"){reference-type="eqref" reference="Syro 3\""} is equivalent to $$\label{key} \abs{{\vb{h}}_+ \cp {\vb{h}}_-}^2 > \frac{\rho_+}{\rho_+ + \rho_-}\abs{w^-}^2\abs{{\vb{h}}_+ \cp {\vb{h}}_-}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{w^+}^2\abs{{\vb{h}}_+ \cp {\vb{h}}_-}^2,$$ namely, $$\label{key} 1 > \frac{\rho_+}{\rho_+ + \rho_-}\abs{w^-}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{w^+}^2.$$ Hence, direct calculations yield $$\label{key} \begin{split} &\hspace{-1em}\frac{\rho_+}{\rho_+ + \rho_-}\abs{\vb{a} \vdot {\vb{h}}_+}^2 + \frac{\rho_-}{\rho_+ + 
\rho_-}\abs{\vb{a} \vdot {\vb{h}}_-}^2 \\ &= \frac{\rho_+}{\rho_+ + \rho_-}\abs{a^+ g_{++} + a^- g_{+-}}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{a^- g_{--} + a^+ g_{+-}}^2 \\ &> \qty(\frac{\rho_+}{\rho_+ + \rho_-}\abs{w^-}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{w^+}^2) \frac{\rho_+}{\rho_+ + \rho_-}\abs{a^+ g_{++} + a^- g_{+-}}^2 \\ &\qquad + \qty(\frac{\rho_+}{\rho_+ + \rho_-}\abs{w^-}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{w^+}^2) \frac{\rho_-}{\rho_+ + \rho_-}\abs{a^- g_{--} + a^+ g_{+-}}^2 \\ &\ge \frac{\rho_+\rho_-}{(\rho_+ + \rho_-)^2} \abs{a^+ w^+ g_{++} + a^-w^-g_{--} + a^-w^+ g_{+-} + a^+ w^- g_{+-}}^2 \\ &= \frac{\rho_+ \rho_-}{(\rho_+ + \rho_-)^2}\abs{\vb{a} \vdot \llbracket{\vb{v}}\rrbracket}^2, \end{split}$$ which is exactly [\[strong syro equiv\]](#strong syro equiv){reference-type="eqref" reference="strong syro equiv"}. ◻ Since $\Gamma$ is assumed to be compact, the following corollary holds:

**Corollary 25**. *Suppose that [\[strong syro equiv\]](#strong syro equiv){reference-type="eqref" reference="strong syro equiv"} holds on $\Gamma$. 
Then it holds that $$\label{syro inf} \begin{split} \Upsilon\qty({\vb{h}}_{\pm}, \llbracket{\vb{v}}\rrbracket) &\coloneqq \inf_{\substack{\vb{a} \in \mathrm{T}\Gamma; \\ \abs{\vb{a}}=1}} \inf_{z \in \Gamma} \left[ \frac{\rho_+}{\rho_+ + \rho_-}\abs{\vb{a} \vdot {\vb{h}}_+(z)}^2 + \frac{\rho_-}{\rho_+ + \rho_-}\abs{\vb{a} \vdot {\vb{h}}_-(z)}^2 \right. \\ &\hspace{6em} \left. - \frac{\rho_+ \rho_-}{(\rho_+ + \rho_-)^2}\abs{\vb{a} \vdot \llbracket{\vb{v}}\rrbracket(z)}^2 \right] \\ &=: \mathfrak{s}_0>0. \end{split}$$ Equivalently, the following relation holds on $\Gamma$: $$\label{syro stab condi useful} \qty(\frac{\rho_+}{\rho_+ + \rho_-} ({\vb{h}}_+ \otimes {\vb{h}}_+) + \frac{\rho_-}{\rho_+ + \rho_-} ({\vb{h}}_- \otimes {\vb{h}}_-) - \frac{\rho_+ \rho_-}{(\rho_+ + \rho_-)^2}(\llbracket{\vb{v}}\rrbracket \otimes \llbracket{\vb{v}}\rrbracket)) \ge \mathfrak{s}_0 \vb{I}.$$*

## Interfaces, coordinates and div-curl systems {#sec prelimi'}

From now on, ${\Omega}$ is assumed to be $\mathbb{T}^2 \times (-1, 1)$ and ${\Omega}_t^+$ has a solid boundary $\mathbb{T}^2 \times \{+1\}$. Hence, some statements in  [3.4](#sec harmonic coord){reference-type="ref" reference="sec harmonic coord"} and  [3.6](#sec div-cul system){reference-type="ref" reference="sec div-cul system"} need slight changes in order to be compatible with the topology of ${\Omega}_t^\pm$. More precisely, the harmonic coordinate maps introduced in  [3.4](#sec harmonic coord){reference-type="ref" reference="sec harmonic coord"} are now replaced by $$\label{key} \begin{cases*} \mathop{}\!\mathbin{\triangle}_y \mathcal{X}_\Gamma^\pm = 0 &for $ y \in {\Omega}_*^\pm $, \\ \mathcal{X}_\Gamma^\pm (z) = \Phi_\Gamma(z) &for $ z \in \Gamma_* $, \\ \mathcal{X}_\Gamma^\pm (z) = z &for $ z \in \mathbb{T}^2 \times \{\pm1\}$. 
\end{cases*}$$ Similarly, the definitions of harmonic extensions of a function $f$ defined on $\Gamma$ are modified to $$\label{eqn harm ext'} \begin{cases*} \mathop{}\!\mathbin{\triangle}{\mathcal{H}}_\pm f = 0 \qfor x \in {\Omega}_\Gamma^\pm, \\ {\mathcal{H}}_\pm f = f \qfor x \in \Gamma, \\ \mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}_\pm} {\mathcal{H}}_\pm f = 0 \qfor x \in \mathbb{T}^2\times \{\pm1\}. \end{cases*}$$ The Dirichlet-Neumann operators are still defined by [\[def DN op\]](#def DN op){reference-type="eqref" reference="def DN op"}, now with ${\mathcal{H}}_\pm$ given by [\[eqn harm ext\'\]](#eqn harm ext'){reference-type="eqref" reference="eqn harm ext'"}. Therefore, Lemmas [Lemma 7](#lem composition harm coordi){reference-type="ref" reference="lem composition harm coordi"} - [Lemma 9](#lem lap-n){reference-type="ref" reference="lem lap-n"} and the properties of the Dirichlet-Neumann operators introduced in  [3.4](#sec harmonic coord){reference-type="ref" reference="sec harmonic coord"} still hold. As for the div-curl systems, due to the different topology, we introduce the following modification of Theorem [Theorem 11](#thm div-curl){reference-type="ref" reference="thm div-curl"}:

**Theorem 26**. *Assume that $\Gamma$ is an $H^{{\frac{3}{2}k}-\frac{1}{2}}$ ($k \ge 3$) surface diffeomorphic to $\mathbb{T}^2$, with $$\label{surface away condi} \mathop{\mathrm{dist}}(\Gamma, \mathbb{T}^2 \times \{\pm1\}) \ge c_0 > 0$$ for some positive constant $c_0$. 
Suppose that $\vb{f}, g \in H^{l-1}({\Omega}^+)$ and $h \in H^{l-\frac{1}{2}}(\Gamma)$ satisfy the compatibility condition $$\label{key} \int_{{\Omega}^+} g \dd{x} = \int_{\Gamma} h \dd{S},$$ and that $\vb{f}$ satisfies $$\label{key} \div \vb{f} = 0 \text{ in } {\Omega}^+ \qc \int_{\mathbb{T}^2\times\{+1\}} \vb{f} \vdot \widetilde{{\vb{N}}}_+ \dd{S} = 0.$$ Then, for $2 \le l \le {\frac{3}{2}k}-1$, the following system: $$\label{key} \begin{cases*} \curl {\vb{u}}= \vb{f} &in $ {\Omega}^+ $, \\ \div {\vb{u}}= g &in $ {\Omega}^+ $, \\ {\vb{u}}\vdot {\vb{N}}_+ = h &on $ \Gamma $, \\ {\vb{u}}\vdot \widetilde{{\vb{N}}}_+ = 0 \qc \int_{\mathbb{T}^2\times\{+1\}} {\vb{u}}\dd{\widetilde{S}} = \va{\mathfrak{u}} &on $ \mathbb{T}^2 \times \{+1\} $ \end{cases*}$$ admits a unique solution ${\vb{u}}\in H^{l}({\Omega}^+)$ satisfying the estimate: $$\label{key} \norm{{\vb{u}}}_{H^l({\Omega}^+)} \le C\qty(\abs{\Gamma}_{H^{{\frac{3}{2}k}-\frac{1}{2}}}, c_0) \times \qty( \norm{{\vb{f}}}_{H^{l-1}({\Omega}^+)} + \norm{g}_{H^{l-1}({\Omega}^+)} + \abs{h}_{H^{l-\frac{1}{2}}(\Gamma)} + \abs{\va{\mathfrak{u}}}).$$*

One may refer to [@Cheng-Shkoller2017] and [@Sun-Wang-Zhang2018] for a proof of Theorem [Theorem 26](#thm div-curl'){reference-type="ref" reference="thm div-curl'"}.

## Reformulation of the problem {#reformulation-of-the-problem}

We shall consider the free interface problem (MHD)-(BC') under the assumption that $k \ge 3$. Due to the difference between Theorems [Theorem 11](#thm div-curl){reference-type="ref" reference="thm div-curl"} and [Theorem 26](#thm div-curl'){reference-type="ref" reference="thm div-curl'"}, the velocity and magnetic fields depend on one more boundary condition -- their integrals on the bottom or the top solid boundary. Therefore, when considering the variation, one needs to assume further that the integrals of ${\vb{v}}_\pm$ and ${\vb{h}}_\pm$ on $\mathbb{T}^2 \times \{\pm1\}$ also depend on the parameter $\beta$. 
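The role of the extra datum $\va{\mathfrak{u}}$ in Theorem [Theorem 26](#thm div-curl'){reference-type="ref" reference="thm div-curl'"} can be illustrated by a model computation (a sketch; the constants $c_1, c_2$ and the horizontal coordinate fields $\vb{e}_1, \vb{e}_2$ are introduced only for this illustration). On the flat slab, every constant horizontal field $$\vb{u}_{\vb{c}} \coloneqq c_1 \vb{e}_1 + c_2 \vb{e}_2 \qc \vb{c} = (c_1, c_2) \in \mathbb{R}^2,$$ is divergence free and curl free, is tangent to each slice $\mathbb{T}^2 \times \{s\}$, and satisfies $\int_{\mathbb{T}^2 \times \{+1\}} \vb{u}_{\vb{c}} \dd{\widetilde{S}} = \abs{\mathbb{T}^2}\, \vb{c}$. Hence the homogeneous system ($\vb{f} = \vb{0}$, $g = 0$, $h = 0$) admits a two-parameter family of solutions, which reflects the nontrivial first de Rham cohomology of ${\Omega}^+$; prescribing $\va{\mathfrak{u}}$ is exactly what rules out this kernel and restores uniqueness.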
More precisely, set $$\label{key} \va{\mathfrak{v}}_\pm \coloneqq \int_{\mathbb{T}^2 \times \{\pm1\}} {\vb{v}}_\pm \dd{\widetilde{S}} \qand \va{\mathfrak{h}}_\pm \coloneqq \int_{\mathbb{T}^2 \times \{\pm1\}} {\vb{h}}_\pm \dd{\widetilde{S}}.$$ Then, for each fixed $t$, $\va{\mathfrak{v}}_\pm$ and $\va{\mathfrak{h}}_\pm$ are constant tangential vectors on $\mathbb{T}^2 \times\{\pm1\}$. With the same notation as in  [4.2](#sec var v_*){reference-type="ref" reference="sec var v_*"}, assume that ${\varkappa_a}, {\vb*{\omega}}_{*\pm}$ and $\va{\mathfrak{v}}_\pm$ are parameterized by $\beta$. Thus, [\[eqn pd beta v\]](#eqn pd beta v){reference-type="eqref" reference="eqn pd beta v"} can be rewritten as $$\label{eqn pd beta vv '} \partial_\beta {\vb{v}}_{\pm*} = {\mathbf{B}}_\pm({\varkappa_a})\partial^2_{t\beta}{\varkappa_a}+ {\mathbf{F}}_\pm({\varkappa_a})\partial_\beta{\vb*{\omega}}_{*\pm} + {\mathbf{G}}_\pm({\varkappa_a}, \partial_t {\varkappa_a}, {\vb*{\omega}}_{*\pm})\partial_\beta{\varkappa_a}+ \vb{S}_\pm ({\varkappa_a}) \partial_\beta \va{\mathfrak{v}}_{\pm}.$$ It follows from the same arguments that Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"} still holds with a slight modification of the index ranges, namely, $s, s', \sigma, \sigma' \ge \frac{3}{2}$ rather than $\ge \frac{1}{2}$. As for the new terms $\vb{S}_\pm ({\varkappa_a})\partial_\beta \va{\mathfrak{v}}_\pm$, they are the pull-backs to ${\Gamma_\ast}$ of the solutions to the following boundary value problems: $$\label{def opS} \begin{cases*} \div \vb{y}_\pm = 0 \qc \curl \vb{y}_\pm = \vb{0} &in $ {\Omega}^\pm_t $, \\ \vb{y}_\pm \vdot {\vb{N}}_\pm = 0 &on $ {\Gamma_t}$, \\ \vb{y}_\pm \vdot \widetilde{{\vb{N}}}_\pm = 0 &on $ \mathbb{T}^2 \times \{\pm1\} $, \\ \int_{\mathbb{T}^2 \times \{\pm 1\}} \vb{y}_\pm \dd{\widetilde{S}} = \partial_\beta \va{\mathfrak{v}}_\pm &on $ \mathbb{T}^2 \times \{\pm 1\} $. 
\end{cases*}$$ Therefore, since $\partial_\beta \va{\mathfrak{v}}_\pm$ are constant on $\mathbb{T}^2 \times \{\pm1\}$ for each fixed $\beta$, the following estimates hold: $$\label{est S} \abs{\vb{S}_\pm ({\varkappa_a})}_{{\mathscr{L}}\qty(\mathbb{R}^2; \H{s})} \le C_* \qfor \frac{3}{2} \le s \le {\frac{3}{2}k}-\frac{3}{2},$$ and $$\label{est var S} \abs{\var \vb{S}_\pm({\varkappa_a})}_{{\mathscr{L}}\qty(\H{{\frac{3}{2}k}-\frac{5}{2}}; {\mathscr{L}}\qty(\mathbb{R}^2; \H{s'}))} \le C_* \qfor \frac{3}{2} \le s' \le {\frac{3}{2}k}-\frac{3}{2}.$$ Due to the change of ${\Omega}^\pm_t$, we shall also modify the definition of $p^\pm_{\vb{a}, \vb{b}}$ to: $$\label{eqn p_ab '} \begin{cases*} -\mathop{}\!\mathbin{\triangle}p_{\vb{a}, \vb{b}}^\pm = \tr(\mathop{}\!\mathbin{\mathrm{D}}\vb{a}_\pm \vdot \mathop{}\!\mathbin{\mathrm{D}}\vb{b}_\pm) &in $ {\Omega}_t^\pm $, \\ p^\pm_{\vb{a}, \vb{b}} = 0 &on $ {\Gamma_t}$, \\ \mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}_\pm} p^{\pm}_{\vb{a}, \vb{b}} = \widetilde{{\vb{I\!I}}}_\pm(\vb{a}_\pm, \vb{b}_\pm) &on $ \mathbb{T}^2 \times \{\pm1\} $, \end{cases*}$$ for which the solenoidal vector fields $\vb{a}_\pm, \vb{b}_\pm$ satisfy $\vb{a}_\pm \vdot \widetilde{{\vb{N}}}_\pm = 0 = \vb{b}_\pm \vdot \widetilde{{\vb{N}}}_\pm$ on $\mathbb{T}^2 \times \{\pm1\}$. 
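The Neumann data in the modified definition of $p^\pm_{\vb{a}, \vb{b}}$ above is chosen precisely so that the recovered fields stay tangent to the solid walls. A formal check (a sketch, with the convention $\widetilde{{\vb{I\!I}}}_\pm(\vb{a}, \vb{b}) = \vb{b} \vdot \mathop{}\!\mathbin{\mathrm{D}}_{\vb{a}}\widetilde{{\vb{N}}}_\pm$ implicit in the computation of ${\vb{V}}_- \vdot \widetilde{{\vb{N}}}$ in the previous section): differentiating $\vb{b}_\pm \vdot \widetilde{{\vb{N}}}_\pm = 0$ along the tangential field $\vb{a}_\pm$ gives $$\qty(\mathop{}\!\mathbin{\mathrm{D}}_{\vb{a}_\pm}\vb{b}_\pm) \vdot \widetilde{{\vb{N}}}_\pm = -\vb{b}_\pm \vdot \mathop{}\!\mathbin{\mathrm{D}}_{\vb{a}_\pm}\widetilde{{\vb{N}}}_\pm = -\widetilde{{\vb{I\!I}}}_\pm\qty(\vb{a}_\pm, \vb{b}_\pm) \qq{on} \mathbb{T}^2 \times \{\pm1\},$$ so the boundary condition $\mathop{}\!\mathbin{\mathrm{D}}_{\widetilde{{\vb{N}}}_\pm} p^{\pm}_{\vb{a}, \vb{b}} = \widetilde{{\vb{I\!I}}}_\pm(\vb{a}_\pm, \vb{b}_\pm)$ cancels the normal component of $\mathop{}\!\mathbin{\mathrm{D}}_{\vb{a}_\pm}\vb{b}_\pm + \grad p^{\pm}_{\vb{a}, \vb{b}}$ there. (In the present flat geometry one in fact has $\widetilde{{\vb{I\!I}}}_\pm \equiv 0$ on $\mathbb{T}^2 \times \{\pm1\}$; the general form is kept for consistency with the earlier sections.)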
With all the previous modifications on the definitions of the harmonic extensions, Lagrange multiplier pressures, and div-curl systems, it follows from the similar arguments that [\[eqn pd2 tt ka\]](#eqn pd2 tt ka){reference-type="eqref" reference="eqn pd2 tt ka"} can be rewritten as $$\label{key} \begin{split} &\partial^2_{tt}{\varkappa_a}+ {\mathscr{C}}_{\alpha} ({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_{*\pm}, {\vb{h}}_{*\pm} ) {\varkappa_a}- {\mathscr{F}}({\varkappa_a})\partial_t{\vb*{\omega}}_* - {\mathscr{G}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*, \va{\mathfrak{v}}, \va{\mathfrak{h}}) - \mathscr{S}({\varkappa_a})\partial_t \va{\mathfrak{v}} \\ &\quad = \qty[\mathrm{I} + {\mathscr{B}}({\varkappa_a})]^{-1}\qty{\qty[-\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty({\vb{W}}\vdot {\vb{N}}_+) + {\vb{W}}\vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+ + a^2 \dfrac{{\vb{W}}\vdot {\vb{N}}_+}{{\vb{N}}_+ \vdot ({\vb*{\nu}}\circ \Phi_{\Gamma_t}^{-1})}] \circ \Phi_{\Gamma_t}}. \end{split}$$ Since $\va{\mathfrak{v}}_\pm$ and $\va{\mathfrak{h}}_\pm$ are all constants, Lemma [Lemma 15](#lem 3.4){reference-type="ref" reference="lem 3.4"} holds with $k \ge 3, \alpha = 0$ and a slight change of [\[est var opG\]](#est var opG){reference-type="eqref" reference="est var opG"} as: $$\label{est var opG'} \begin{split} &\hspace{-2em}\abs{\var{\mathscr{G}}}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}\times\H{{\frac{3}{2}k}-\frac{7}{2}}\times H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}}) \times H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}}) \times \mathbb{R}^2 \times \mathbb{R}^2; \H{{\frac{3}{2}k}-\frac{7}{2}}]} \\ \le\, &a^2 Q\qty(\abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\vb{j}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}), \end{split}$$ whose proof follows from the same arguments. 
Furthermore, the operator $\mathscr{S}({\varkappa_a})$ satisfies $$\label{est opS} \abs{\mathscr{S}({\varkappa_a})}_{{\mathscr{L}}\qty(\mathbb{R}^2; \H{{\frac{3}{2}k}-\frac{5}{2}})} \le Q\qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}),$$ and $$\label{est var opS} \abs{\var\mathscr{S}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{7}{2}}; {\mathscr{L}}\qty(\mathbb{R}^2; \H{{\frac{3}{2}k}-\frac{7}{2}})]} \le Q\qty(\abs{{\varkappa_a}}_{H^{{\frac{3}{2}k}-\frac{3}{2}}}).$$ Indeed, the leading order term of $\mathscr{S}({\varkappa_a})\partial_t\va{\mathfrak{v}}$ is $\grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot \vb{S}({\varkappa_a})\partial_t \va{\mathfrak{v}}$, so the above estimates follow from the standard commutator and product ones. ## Linear systems {#sec syro linear system} Similar to the arguments in  [5.1](#sec linear ka){reference-type="ref" reference="sec linear ka"}, assume that ${\Gamma_\ast}\in H^{{\frac{3}{2}k}+\frac{1}{2}} (k\ge 3)$ is a reference hypersurface, and $\Lambda_*$ defined by ([\[def lambda\*\]](#def lambda*){reference-type="ref" reference="def lambda*"}) satisfies all the properties discussed in the preliminary. Suppose further that there are a family of hypersurfaces ${\Gamma_t}\in \Lambda_*$ parameterized by $t \in [0, T]$ and four tangential vector fields ${\vb{v}}_{\pm*}, {\vb{h}}_{\pm*} : {\Gamma_\ast}\to \mathrm{T}{\Gamma_\ast}$ satisfying: $$\label{key} {\varkappa_a}\in C^0\qty{[0, T]; \H{{\frac{3}{2}k}-\frac{3}{2}}} \cap C^1\qty{[0, T]; B_{\delta_1} \subset \H{{\frac{3}{2}k}-\frac{5}{2}}}, \tag{H1'}$$ and $$\label{key} {\vb{v}}_{\pm*}, {\vb{h}}_{\pm*} \in C^0\qty{[0, T]; \H{{\frac{3}{2}k}-\frac{1}{2}}} \cap C^1\qty{[0, T]; \H{{\frac{3}{2}k}-\frac{3}{2}}}. 
\tag{H2'}$$ Moreover, assume that there are positive constants $c_0, \mathfrak{s}_0$ so that [\[syro stab condi useful\]](#syro stab condi useful){reference-type="eqref" reference="syro stab condi useful"} and [\[surface away condi\]](#surface away condi){reference-type="eqref" reference="surface away condi"} hold uniformly on $[0, T]$. The positive constants $\widetilde{L_1}$ and $\widetilde{L_2}$ are defined by: $$\label{key} \sup_{t\in[0, T]} \qty{\abs{{\varkappa_a}(t)}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}, \abs{\partial_t {\varkappa_a}(t)}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \abs{\qty({\vb{v}}_{\pm*}(t), {\vb{h}}_{\pm*}(t))}_{\H{{\frac{3}{2}k}-\frac{1}{2}}}} \le \widetilde{L_1},$$ $$\label{key} \sup_{t\in [0, T]} \abs{\qty(\partial_t{\vb{v}}_{\pm*}(t), \partial_t{\vb{h}}_{\pm*}(t))}_{\H{{\frac{3}{2}k}-\frac{3}{2}}} \le \widetilde{L_2}.$$ Consider the following linear initial value problem similar to [\[eqn linear 1\]](#eqn linear 1){reference-type="eqref" reference="eqn linear 1"}: $$\label{eqn linear 1'} \begin{cases} \partial^2_{tt}{\mathfrak{f}}+ {\mathscr{C}}_0({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*){\mathfrak{f}}= {\mathfrak{g}}, \\ {\mathfrak{f}}(0) = {\mathfrak{f}}_0, \quad \partial_t {\mathfrak{f}}(0) = {\mathfrak{f}}_1, \end{cases}$$ where ${\mathfrak{f}}_0, {\mathfrak{f}}_1, {\mathfrak{g}}(t) : {\Gamma_\ast}\to \mathbb{R}$ are three given functions, and ${\mathscr{C}}_0$ is given by: $$\begin{split} {\mathscr{C}}_0({\varkappa_a}, \partial_t{\varkappa_a}, {\vb{v}}_*, {\vb{h}}_*) \coloneqq\, &2\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\partial_t + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*} + \dfrac{\rho_+\rho_-}{\qty(\rho_+ + \rho_-)^2}{\mathscr{R}}({\varkappa_a}, {\vb{w}}_*) \\ &-\dfrac{\rho_+}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{+*}) - \dfrac{\rho_-}{\rho_+ + \rho_-}{\mathscr{R}}({\varkappa_a}, {\vb{h}}_{-*}), \end{split}$$ which is exactly ([\[def opC\]](#def 
opC){reference-type="ref" reference="def opC"}) with $\alpha = 0$. Thus, for $0 \le l \le k-2$, the energy [\[eqn E_l\]](#eqn E_l){reference-type="eqref" reference="eqn E_l"} is replaced by $$\label{eqn E_l'} \begin{split} \widetilde{E_l}(t, {\mathfrak{f}}, \partial_t{\mathfrak{f}}) \coloneqq \int_{{\Gamma_t}} &\abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}} \widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\partial_t {\mathfrak{f}}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*} {\mathfrak{f}}) \circ \Phi_{\Gamma_t}^{-1}] }^2 \\ &- \dfrac{\rho_+\rho_-}{\qty(\rho_+ + \rho_-)^2} \abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}_*} {\mathfrak{f}}) \circ\Phi_{\Gamma_t}^{-1}]}^2 \\ &+ \dfrac{\rho_+}{\rho_+ + \rho_-} \abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_{+*}} {\mathfrak{f}}) \circ\Phi_{\Gamma_t}^{-1}] }^2 \\ &+ \dfrac{\rho_-}{\rho_+ + \rho_-} \abs{\qty(-\widetilde{{\mathcal{N}}}^{\frac{1}{2}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\widetilde{{\mathcal{N}}}^{\frac{1}{2}})^{\frac{l}{2}}\widetilde{{\mathcal{N}}}^{\frac{1}{2}} \qty[\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_{-*}} {\mathfrak{f}}) \circ\Phi_{\Gamma_t}^{-1}]}^2 \dd{S_t}. 
\end{split}$$ It follows from the same arguments as in the proof of Lemma [Lemma 16](#lem est E_l){reference-type="ref" reference="lem est E_l"} that there exists a generic polynomial $Q$ determined by $\Lambda_*$, such that the following estimate holds: $$\label{est E_l'} \begin{split} &\hspace{-2em}\widetilde{E_l}(t, {\mathfrak{f}}, \partial_t{\mathfrak{f}}) - \widetilde{E_l}(0, {\mathfrak{f}}_0, {\mathfrak{f}}_1) \\ \le&Q(\widetilde{L_1}, \widetilde{L_2})\int_0^t \qty(\abs{{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + \frac{3}{2}}} + \abs{\partial_t{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + \frac{1}{2}}} + \abs{{\mathfrak{g}}(s)}_{\H{\frac{3}{2}l + \frac{1}{2}}}) \times \\ &\hspace{8em} \times \qty(\abs{{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + \frac{3}{2}}} + \abs{\partial_t{\mathfrak{f}}(s)}_{\H{\frac{3}{2}l + \frac{1}{2}}}) \dd{s}. \end{split}$$ Thanks to the uniform stability condition [\[syro stab condi useful\]](#syro stab condi useful){reference-type="eqref" reference="syro stab condi useful"}, one can derive an estimate similar to [\[est linear eqn ka\]](#est linear eqn ka){reference-type="eqref" reference="est linear eqn ka"}, as long as $T \le C$ for some constant $C = C(\widetilde{L_1}, \widetilde{L_2}, \mathfrak{s}_0)$: $$\label{est linear eqn ka'} \begin{split} &\hspace{-1em}\abs{{\mathfrak{f}}(t)}_{\H{\frac{3}{2}l+\frac{3}{2}}}^2 + \abs{\partial_t{\mathfrak{f}}(t)}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 \\ \le\, &C_* e^{Q(\widetilde{L_1}, \widetilde{L_2}, \mathfrak{s}_0^{-1})t} \qty( \abs{{\mathfrak{f}}_0}_{\H{\frac{3}{2}l+\frac{3}{2}}}^2 + \abs{{\mathfrak{f}}_1}_{\H{\frac{3}{2}l+\frac{1}{2}}}^2 + \int_0^t \abs{{\mathfrak{g}}(t')}^2_{\H{\frac{3}{2}l+\frac{1}{2}}} \dd{t'}), \end{split}$$ for any integer $0 \le l \le k-2$, $0 \le t \le T$, a generic polynomial $Q$ and a positive constant $C_*$ depending on $\Lambda_*$. Thus, one has **Proposition 27**. 
*For $0 \le l \le k-2$, $T \le C(\widetilde{L_1}, \widetilde{L_2}, \mathfrak{s}_0)$ and ${\mathfrak{g}}\in C^0 \qty([0, T]; H^{\frac{3}{2}l + \frac{1}{2}}({\Gamma_\ast}))$, the linear problem [\[eqn linear 1\'\]](#eqn linear 1'){reference-type="eqref" reference="eqn linear 1'"} is well-posed in $C^0\qty([0, T]; \H{\frac{3}{2}l+\frac{3}{2}}) \cap C^1\qty([0, T]; \H{\frac{3}{2}l+\frac{1}{2}})$, and the energy estimate [\[est linear eqn ka\'\]](#est linear eqn ka'){reference-type="eqref" reference="est linear eqn ka'"} holds.* It is also noted that the arguments in  [5.2](#sec linear current vortex){reference-type="ref" reference="sec linear current vortex"} are still valid for the linear systems for the current and vorticity here. ## Nonlinear problems {#nonlinear-problems} As in  [6](#sec nonlinear){reference-type="ref" reference="sec nonlinear"}, take a reference hypersurface ${\Gamma_\ast}\in H^{{\frac{3}{2}k}+\frac{1}{2}}$ and $\delta_0 > 0$ so that $$\Lambda_* \coloneqq \Lambda \qty({\Gamma_\ast}, {\frac{3}{2}k}-\frac{1}{2}, \delta_0)$$ satisfies all the properties discussed in the preliminary. Furthermore, assume that there is a constant $c_0 > 0$ so that [\[surface away condi\]](#surface away condi){reference-type="eqref" reference="surface away condi"} holds for ${\Gamma_\ast}$. We shall solve the nonlinear problem by iterations on the linearized problems in the spaces: $$\label{key} \begin{split} &{\varkappa_a}\in C^0\qty([0, T]; \H{{\frac{3}{2}k}-\frac{3}{2}}) \cap C^1\qty([0, T]; B_{\delta_1}\subset\H{{\frac{3}{2}k}-\frac{5}{2}}) \cap C^2\qty([0, T]; \H{{\frac{3}{2}k}-\frac{7}{2}}); \\ &{\vb*{\omega}}_{\pm*}, {\vb{j}}_{\pm*} \in C^0\qty([0, T]; H^{{\frac{3}{2}k}-1}({\Omega}_{\Gamma_\ast}^\pm)) \cap C^1\qty([0, T]; H^{{\frac{3}{2}k}-2}({\Omega}_{\Gamma_\ast}^\pm)); \\ &\va{\mathfrak{v}}_\pm, \va{\mathfrak{h}}_\pm \in C^1\qty([0, T]; \mathbb{R}^2). 
\end{split}$$ ### Fluid region, velocity and magnetic fields As discussed in  [6.2](#section recovery){reference-type="ref" reference="section recovery"}, the bulk region, velocity and magnetic fields can be obtained by solving the following div-curl problems: $$\label{div-curl nonlinear v'} \begin{cases*} \div {\vb{v}}_\pm = 0 \qc \curl {\vb{v}}_\pm = \bar{{\vb*{\omega}}}_\pm &in $ {\Omega}^\pm_t $, \\ {\vb{v}}_\pm \vdot {\vb{N}}_+ = {\vb{N}}_+ \vdot (\partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}}) \circ\Phi_{\Gamma_t}^{-1} &on $ {\Gamma_t}$, \\ {\vb{v}}_\pm \vdot \widetilde{{\vb{N}}}_\pm = 0 &on $ \mathbb{T}^2 \times \{\pm1\} $, \\ \int_{\mathbb{T}^2\times\{\pm1\}} {\vb{v}}_\pm \dd{\widetilde{S}} = \va{\mathfrak{v}}_\pm &on $ \mathbb{T}^2 \times \{\pm1\} $; \end{cases*}$$ and $$\label{div-curl nonlinear h'} \begin{cases*} \div {\vb{h}}_\pm = 0 \qc \curl {\vb{h}}_\pm = \bar{{\vb{j}}}_\pm &in $ {\Omega}_t^\pm $, \\ {\vb{h}}_\pm \vdot {\vb{N}}_\pm = 0 &on $ {\Gamma_t}$, \\ {\vb{h}}_\pm \vdot \widetilde{{\vb{N}}}_\pm = 0 &on $ \mathbb{T}^2 \times \{\pm1\} $, \\ \int_{\mathbb{T}^2 \times \{\pm1\}} {\vb{h}}_\pm \dd{\widetilde{S}} = \va{\mathfrak{h}}_\pm &on $ \mathbb{T}^2 \times \{\pm1\} $, \end{cases*}$$ where $\bar{{\vb*{\omega}}}_\pm$ and $\bar{{\vb{j}}}_\pm$ are given by [\[def bar vom vj\]](#def bar vom vj){reference-type="eqref" reference="def bar vom vj"}. ### Iteration mapping {#sec syro ite map} In order to construct the iteration map, we consider the following function space: **Definition 28**. 
For given constants $T, M_0, M_1, M_2, M_3, c_0, \mathfrak{s}_0 >0$, define $\mathfrak{X}$ to be the collection of $\qty({\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*, \va{\mathfrak{v}}_\pm, \va{\mathfrak{h}}_\pm)$ satisfying: $$\abs{{\varkappa_a}(0) - \kappa_{*+}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \le \delta_1,$$ $$\abs{\qty(\partial_t {\varkappa_a})(0)}_{\H{{\frac{3}{2}k}-\frac{7}{2}}}, \norm{{\vb*{\omega}}_*(0)}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\vb{j}}_*(0)}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}, \abs{\va{\mathfrak{v}}_\pm(0)}, \abs{\va{\mathfrak{h}}_\pm(0)} \le M_0,$$ $$\sup_{t \in [0, T]} \qty(\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}, \abs{\partial_t {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{({\vb*{\omega}}_*, {\vb{j}}_*)}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}, \abs{\va{\mathfrak{v}}_\pm}, \abs{\va{\mathfrak{h}}_\pm} ) \le M_1,$$ $$\sup_{t \in [0, T]} \qty(\norm{\partial_t{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}})}, \norm{\partial_t{\vb{j}}_*}_{H^{{\frac{3}{2}k}-2}({\Omega\setminus{\Gamma_\ast}})}, \abs{\partial_t \va{\mathfrak{v}}_\pm}, \abs{\partial_t\va{\mathfrak{h}}_\pm}) \le M_2,$$ $$\sup_{t \in [0, T]} \abs{\partial^2_{tt}{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{7}{2}}} \le a^2 M_3 \ (\text{here $ a $ is the constant in the definition of $ {\varkappa_a}$}).$$ For $\Upsilon({\vb{h}}_\pm, \llbracket{\vb{v}}\rrbracket)$ defined by [\[syro inf\]](#syro inf){reference-type="eqref" reference="syro inf"}, $$\label{key} \Upsilon\qty({\vb{h}}_\pm, \llbracket{\vb{v}}\rrbracket) \ge \mathfrak{s}_0$$ holds uniformly for $0 \le t \le T$. 
In addition, [\[surface away condi\]](#surface away condi){reference-type="eqref" reference="surface away condi"} and the compatibility conditions $$\int_{\mathbb{T}^2\times\{\pm1\}} \widetilde{{\vb{N}}}_\pm \vdot {\vb*{\omega}}_{*\pm} \dd{\widetilde{S}} = \int_{\mathbb{T}^2\times\{\pm1\}} \widetilde{{\vb{N}}}_\pm \vdot {\vb{j}}_{*\pm} \dd{\widetilde{S}} = 0$$ hold for all $t\in [0, T]$. As for the initial data, take $0 < \epsilon \ll \delta_1$ and $A > 0$, and consider: $$\mathfrak{I}(\epsilon, A) \coloneqq\qty{\qty\big(\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_*)_{\mathrm{I}}, \qty({\vb{j}}_*)_{\mathrm{I}}), \qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}, \qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}} },$$ where $$\begin{gathered} \abs{\qty({\varkappa_a})_{\mathrm{I}}-\kappa_{*+}}_{\H{{\frac{3}{2}k}-\frac{3}{2}}}<\epsilon; \\ \abs{\qty(\partial_t{\varkappa_a})_{\mathrm{I}}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}},\ \norm{\qty({\vb*{\omega}}_*)_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})},\ \norm{\qty({\vb{j}}_*)_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})},\ \abs{\qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}},\ \abs{\qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}}} < A, \end{gathered}$$ $$\label{key} \Upsilon({\vb{h}}_\pm, \llbracket {\vb{v}}\rrbracket) \ge 2\mathfrak{s}_0,$$ and $$\mathop{\mathrm{\mathrm{dist}}}\qty(\Gamma_{\mathrm{I}}, \mathbb{T}^2\times\{\pm1\}) \ge 2c_0.$$ In addition, $\qty({\vb*{\omega}}_*)_{\mathrm{I}}$ and $\qty({\vb{j}}_*)_{\mathrm{I}}$ satisfy the following compatibility conditions: $$\int_{\mathbb{T}^2\times\{\pm1\}} \widetilde{{\vb{N}}}_\pm \vdot {\qty({\vb*{\omega}}_{*})_{\mathrm{I}}}_\pm \dd{\widetilde{S}} = \int_{\mathbb{T}^2\times\{\pm1\}} \widetilde{{\vb{N}}}_\pm \vdot {\qty({\vb{j}}_{*})_{\mathrm{I}}}_\pm \dd{\widetilde{S}} = 0.$$ Thus, $\mathfrak{I}(\epsilon, A) \subset \H{{\frac{3}{2}k}-\frac{3}{2}}\times\H{{\frac{3}{2}k}-\frac{5}{2}}\times 
H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}}) \times H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}}) \times \mathbb{R}^4$. Then, as in  [6.3](#sec itetarion map){reference-type="ref" reference="sec itetarion map"}, one can define the iteration map: $$\label{eqn (n+1)ka'} \begin{cases*} \partial^2_{tt}{{\varkappa_a}}^{(n+1)} + {\mathscr{C}}_0\qty({{\varkappa_a}}^{(n)}, {\partial_t{\varkappa_a}}^{(n)}, {{\vb{v}}_{*\pm}}^{(n)}, {{\vb{h}}_{*\pm}}^{(n)}){{\varkappa_a}}^{(n+1)} \\ \qquad = {\mathscr{F}}\qty({{\varkappa_a}}^{(n)})\partial_t {{\vb*{\omega}}_*}^{(n)} + {\mathscr{G}}\qty({{\varkappa_a}}^{(n)}, {\partial_t{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_{*\pm}}^{(n)}, {{\vb{j}}_{*\pm}}^{(n)}, {\va{\mathfrak{v}}_\pm}^{(n)}, {\va{\mathfrak{h}}_\pm}^{(n)}) \\ \qquad\qquad + \mathscr{S}\qty({{\varkappa_a}}^{(n)})\partial_t {\va{\mathfrak{v}}_\pm}^{(n)} \\ {{\varkappa_a}}^{(n+1)}(0) = \qty({\varkappa_a})_{\mathrm{I}}, \quad {\partial_t{\varkappa_a}}^{(n+1)}(0) = \qty(\partial_t{\varkappa_a})_{\mathrm{I}}; \end{cases*}$$ and $$\label{eqn linear (n+1) vom vj'} \begin{cases*} \partial_t{{\vb*{\omega}}_\pm}^{(n+1)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}_\pm}^{(n)}}{{\vb*{\omega}}_\pm}^{(n+1)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}_\pm}^{(n)}}{{\vb{j}}_\pm}^{(n+1)} = \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb*{\omega}}_\pm}^{(n+1)}}{{\vb{v}}_\pm}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{j}}_\pm}^{(n+1)}}{{\vb{h}}_\pm}^{(n)}, \\ \partial_t{{\vb{j}}_\pm}^{(n+1)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}_\pm}^{(n)}}{{\vb{j}}_\pm}^{(n+1)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}_\pm}^{(n)}}{{\vb*{\omega}}_\pm}^{(n+1)} \\ \qquad = \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{j}}_\pm}^{(n+1)}}{{\vb{v}}_\pm}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb*{\omega}}_\pm}^{(n+1)}}{{\vb{h}}_\pm}^{(n)} - 2\tr(\grad{{\vb{v}}_\pm}^{(n)} \cp \grad {{\vb{h}}_\pm}^{(n)}), \\ {{\vb*{\omega}}_\pm}^{(n+1)}(0) = {\mathbb{P}}\qty(\qty({\vb*{\omega}}_{*\pm})_{\mathrm{I}} 
\circ (\mathcal{X}_{{\Gamma_0}^{(n)}}^{\pm})^{-1}), \quad {{\vb{j}}_\pm}^{(n+1)}(0) = {\mathbb{P}}\qty(\qty({\vb{j}}_{*\pm})_{\mathrm{I}}\circ (\mathcal{X}_{{\Gamma_0}^{(n)}}^{\pm})^{-1}), \end{cases*}$$ where $\qty({{\vb{v}}_{\pm}}^{(n)}, {{\vb{h}}_{\pm}}^{(n)})$ is induced by $\qty({{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_{*\pm}}^{(n)}, {{\vb{j}}_{*\pm}}^{(n)}, {\va{\mathfrak{v}}_\pm}^{(n)}, {\va{\mathfrak{h}}_\pm}^{(n)})$ via solving [\[div-curl nonlinear v\'\]](#div-curl nonlinear v'){reference-type="eqref" reference="div-curl nonlinear v'"}-[\[div-curl nonlinear h\'\]](#div-curl nonlinear h'){reference-type="eqref" reference="div-curl nonlinear h'"}, the tangential vector fields ${{\vb{v}}_{*\pm}}^{(n)}$ and ${{\vb{h}}_{*\pm}}^{(n)}$ on ${\Gamma_\ast}$ are defined by $$\label{key} \begin{split} {{\vb{v}}_{*\pm}}^{(n)} &\coloneqq \qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{{{\Gamma_t}}^{(n)}})^{-1}\qty[{{\vb{v}}_\pm}^{(n)}\circ\Phi_{{{\Gamma_t}}^{(n)}} - \qty(\partial_t\gamma_{{{\Gamma_t}}^{(n)}}){\vb*{\nu}}], \\ {{\vb{h}}_{*\pm}}^{(n)} &\coloneqq \qty(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{{{\Gamma_t}}^{(n)}})^{-1} \qty({{\vb{h}}_\pm}^{(n)} \circ \Phi_{{{\Gamma_t}}^{(n)}}), \end{split}$$ and the current-vorticity equations are considered in the domains ${{\Omega}_t^\pm}^{(n)}$. 
Define $$\label{eqn dvt va v} {\va{\mathfrak{v}}_\pm}^{(n+1)}(t) \coloneqq \qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}} + \int_0^t\int_{\mathbb{T}^2\times\{\pm1\}} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}_\pm}^{(n)}}{{\vb{v}}_\pm}^{(n)} - \frac{1}{\rho_\pm}\grad{p^\pm}^{(n)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}_\pm}^{(n)}}{{\vb{h}}_\pm}^{(n)} \dd{\widetilde{S}} \dd{t'},$$ $$\label{eqn dvt va h} {\va{\mathfrak{h}}_\pm}^{(n+1)}(t) \coloneqq \qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}} + \int_0^t \int_{\mathbb{T}^2\times\{\pm1\}} \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}_\pm}^{(n)}}{{\vb{v}}_\pm}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}_\pm}^{(n)}} {{\vb{h}}_\pm}^{(n)} \dd{\widetilde{S}} \dd{t'},$$ and $$\label{key} {{\vb*{\omega}}_*}^{(n+1)}\coloneqq {{\vb*{\omega}}}^{(n+1)}\circ\mathcal{X}_{{{\Gamma_t}}^{(n)}} \qc {{\vb{j}}_*}^{(n+1)}\coloneqq {{\vb{j}}}^{(n+1)}\circ\mathcal{X}_{{{\Gamma_t}}^{(n)}},$$ where ${p^\pm}^{(n)}$ are given by [\[decomp p\]](#decomp p){reference-type="eqref" reference="decomp p"} with $\qty({{\varkappa_a}}^{(n)}, {{\vb{v}}_{\pm}}^{(n)}, {{\vb{h}}_{\pm}}^{(n)})$ plugged in. In order to show that the iteration map is a mapping from $\mathfrak{X}$ to $\mathfrak{X}$, one may first check that $$\label{key} \mathop{\mathrm{\mathrm{dist}}}\qty(\Gamma^{(n+1)}_t, \mathbb{T}^2\times\{\pm 1\}) \ge 2c_0 - CT \abs{\partial_t {{\varkappa_a}}^{(n+1)}}_{C^0_t \H{{\frac{3}{2}k}-\frac{5}{2}}} \ge c_0,$$ and $$\label{key} \abs{\Upsilon({{\vb{h}}}^{(n+1)}_\pm, \llbracket{{\vb{v}}}^{(n+1)}\rrbracket) - \Upsilon\qty({\qty({\vb{h}})_{\mathrm{I}}}_\pm, \llbracket\qty({\vb{v}})_{\mathrm{I}}\rrbracket)} \le T Q(M_1, M_2) \le \mathfrak{s}_0,$$ if $T$ is small compared to $M_1$ and $M_2$. 
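Spelled out, since the initial data in $\mathfrak{I}(\epsilon, A)$ satisfy $\Upsilon\qty({\qty({\vb{h}})_{\mathrm{I}}}_\pm, \llbracket\qty({\vb{v}})_{\mathrm{I}}\rrbracket) \ge 2\mathfrak{s}_0$, the above perturbation bound gives $$\Upsilon\qty({{\vb{h}}}^{(n+1)}_\pm, \llbracket{{\vb{v}}}^{(n+1)}\rrbracket) \ge 2\mathfrak{s}_0 - T Q(M_1, M_2) \ge \mathfrak{s}_0 \qfor 0 \le t \le T,$$ which is the uniform stability requirement in the definition of $\mathfrak{X}$.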
Next, for ${\va{\mathfrak{v}}_\pm}^{(n+1)}$ and ${\va{\mathfrak{h}}_\pm}^{(n+1)}$, observe that $$\label{key} \abs{{\va{\mathfrak{v}}}^{(n+1)}_\pm (t)} + \abs{{\va{\mathfrak{h}}_\pm}^{(n+1)}(t)} \le A + T Q(M_1) \le M_1,$$ and $$\label{key} \abs{\partial_t\va{\mathfrak{v}}_\pm(t)} + \abs{\partial_t\va{\mathfrak{h}}_\pm(t)} \le Q(M_1) \le M_2,$$ provided that $T$ is small and $M_2 \gg M_1$. Then, with the same notation as in  [6.3](#sec itetarion map){reference-type="ref" reference="sec itetarion map"}, one can define $$\label{key} \begin{split} &\mathfrak{T}\qty(\qty[\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_{*\pm})_{\mathrm{I}}, \qty({\vb{j}}_{*\pm})_{\mathrm{I}}, \qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}, \qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}}], \qty[{{\varkappa_a}}^{(n)}, {{\vb*{\omega}}_{*\pm}}^{(n)}, {{\vb{j}}_{*\pm}}^{(n)}, {\va{\mathfrak{v}}_\pm}^{(n)}, {\va{\mathfrak{h}}_\pm}^{(n)}]) \\ &\coloneqq \qty({{\varkappa_a}}^{(n+1)}, {{\vb*{\omega}}_{*\pm}}^{(n+1)}, {{\vb{j}}_{*\pm}}^{(n+1)}, {\va{\mathfrak{v}}_\pm}^{(n+1)}, {\va{\mathfrak{h}}_\pm}^{(n+1)}). \end{split}$$ It follows from the arguments in  [6.3](#sec itetarion map){reference-type="ref" reference="sec itetarion map"} and the linear estimates in  [7.4](#sec syro linear system){reference-type="ref" reference="sec syro linear system"} that the following proposition holds: **Proposition 29**. *Suppose that $k \ge 3$. 
For any $0 < \epsilon \ll \delta_0$ and $A > 0$, there are positive constants $M_0, M_1, M_2, M_3, c_0, \mathfrak{s}_0$, so that for small $T > 0$, $$\mathfrak{T}\qty\Big{\qty[\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_{*\pm})_{\mathrm{I}}, \qty({\vb{j}}_{*\pm})_{\mathrm{I}}, \qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}, \qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}}], \qty[{\varkappa_a}, {\vb*{\omega}}_{*\pm}, {\vb{j}}_{*\pm}, \va{\mathfrak{v}}_\pm, \va{\mathfrak{h}}_\pm]} \in \mathfrak{X},$$ holds for any $\qty(\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_{*\pm})_{\mathrm{I}}, \qty({\vb{j}}_{*\pm})_{\mathrm{I}}, \qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}, \qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}}) \in \mathfrak{I}(\epsilon, A)$ and\ $\qty({\varkappa_a}, {\vb*{\omega}}_{*\pm}, {\vb{j}}_{*\pm}, \va{\mathfrak{v}}_\pm, \va{\mathfrak{h}}_\pm) \in \mathfrak{X}$.* For the contraction of the iteration mapping, as in  [6.4](#sec contra ite){reference-type="ref" reference="sec contra ite"}, assume that\ $\qty({{\varkappa_a}}^{(n)}(\beta), {{\vb*{\omega}}_{*\pm}}^{(n)}(\beta), {{\vb{j}}_{*\pm}}^{(n)}(\beta), {\va{\mathfrak{v}}_\pm}^{(n)}(\beta), {\va{\mathfrak{h}}_\pm}^{(n)}(\beta) ) \subset \mathfrak{X}$ and\ $\qty(\qty({\varkappa_a})_{\mathrm{I}}(\beta), \qty(\partial_t{\varkappa_a})_{\mathrm{I}}(\beta), \qty({\vb*{\omega}}_{*\pm})_{\mathrm{I}}(\beta), \qty({\vb{j}}_{*\pm})_{\mathrm{I}}(\beta), \qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}(\beta), \qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}}(\beta)) \subset \mathfrak{I}(\epsilon, A)$ are two families of data depending on a parameter $\beta$. Define $\qty({{\varkappa_a}}^{(n+1)}(\beta), {{\vb*{\omega}}_{*\pm}}^{(n+1)}(\beta), {{\vb{j}}_{*\pm}}^{(n+1)}(\beta), {\va{\mathfrak{v}}_\pm}^{(n+1)}(\beta), {\va{\mathfrak{h}}_\pm}^{(n+1)}(\beta) )$ to be the output of the iteration map. 
Then, by applying $\pdv*{\beta}$ to [\[eqn (n+1)ka\'\]](#eqn (n+1)ka'){reference-type="eqref" reference="eqn (n+1)ka'"}-[\[eqn dvt va h\]](#eqn dvt va h){reference-type="eqref" reference="eqn dvt va h"}, one has the variational problems [\[eqn beta n+1\]](#eqn beta n+1){reference-type="eqref" reference="eqn beta n+1"}-[\[eqn g2\]](#eqn g2){reference-type="eqref" reference="eqn g2"} as well as: $$\label{key} \begin{cases*} \partial^2_{tt} \partial_\beta {{\varkappa_a}}^{(n+1)} + {{\mathscr{C}}}^{(n)}\partial_\beta{{\varkappa_a}}^{(n+1)} \\ \qquad = - \qty(\partial_\beta{{\mathscr{C}}}^{(n)}){{\varkappa_a}}^{(n+1)} + \partial_\beta \qty({{\mathscr{F}}}^{(n)}{\partial_t{\vb*{\omega}}_*}^{(n)} + {{\mathscr{G}}}^{(n)} + {\mathscr{S}}^{(n)}\partial_t{\va{\mathfrak{v}}}^{(n)}), \\ \partial_\beta{{\varkappa_a}}^{(n+1)}(0) = \partial_\beta\qty({\varkappa_a})_{\mathrm{I}}(\beta), \quad \partial_t\qty(\partial_\beta{{\varkappa_a}}^{(n+1)})(0) = \partial_\beta\qty(\partial_t{\varkappa_a})_{\mathrm{I}}(\beta), \end{cases*}$$ $$\label{eqn var pdt va v} \begin{split} &\partial_\beta {\va{\mathfrak{v}}_\pm}^{(n+1)}(t)\\ &\quad=\partial_\beta\qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}+\int_0^t \int_{\mathbb{T}^2\times\{\pm1\}} {\mathbb{D}_{\beta}}\qty(- \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}_\pm}^{(n)}}{{\vb{v}}_\pm}^{(n)} - \frac{1}{\rho_\pm}\grad{p^\pm}^{(n)} + \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}_\pm}^{(n)}}{{\vb{h}}_\pm}^{(n)}) \dd{\widetilde{S}} \dd{t'}, \end{split}$$ and $$\label{eqn var pdt va h} \partial_\beta{\va{\mathfrak{h}}_\pm}^{(n+1)}(t) = \partial_\beta\qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}} + \int_0^t \int_{\mathbb{T}^2\times\{\pm1\}} {\mathbb{D}_{\beta}}\qty(\mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{h}}_\pm}^{(n)}}{{\vb{v}}_\pm}^{(n)} - \mathop{}\!\mathbin{\mathrm{D}}_{{{\vb{v}}_\pm}^{(n)}} {{\vb{h}}_\pm}^{(n)}) \dd{\widetilde{S}} \dd{t'}.$$ Consider the energy functionals: $$\label{key} \begin{split} {\mathfrak{E}}^{(n)}(\beta)\coloneqq &\sup_{t
\in [0, T]}\left(\abs{\partial_\beta{{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial_\beta{\partial_t{\varkappa_a}}^{(n)}}_{\H{{\frac{3}{2}k}-\frac{7}{2}}} + \right. \\ &\qquad\qquad \left.+ \norm{\partial_\beta{{\vb*{\omega}}_{*\pm}}^{(n)}}_{H^{{\frac{3}{2}k}-2}({\Omega}_*^\pm)} + \norm{\partial_\beta{{\vb{j}}_{*\pm}}^{(n)}}_{H^{{\frac{3}{2}k}-2}({\Omega}_*^\pm)} + \right. \\ &\qquad\qquad\qquad \left. + \norm{\partial_\beta\partial_t{{\vb*{\omega}}_{*\pm}}^{(n)}}_{H^{{\frac{3}{2}k}-4}({\Omega}_*^\pm)} + \abs{\partial_\beta{\va{\mathfrak{v}}_\pm}^{(n)}} + \abs{\partial_\beta{\va{\mathfrak{h}}_\pm}^{(n)}} \right), \end{split}$$ and $$\label{key} \begin{split} \qty(\mathfrak{E})_{\mathrm{I}}(\beta)\coloneqq &\sup_{t \in [0, T]}\left(\abs{\partial_\beta\qty({\varkappa_a})_{\mathrm{I}}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial_\beta\qty(\partial_t{\varkappa_a})_{\mathrm{I}}}_{\H{{\frac{3}{2}k}-\frac{7}{2}}} + \right. \\ &\qquad\qquad \left.+ \norm{\partial_\beta\qty({\vb*{\omega}}_{*\pm})_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-2}({\Omega}_*^\pm)} + \norm{\partial_\beta\qty({\vb{j}}_{*\pm})_{\mathrm{I}}}_{H^{{\frac{3}{2}k}-2}({\Omega}_*^\pm)} + \right. \\ &\qquad\qquad\qquad \left. + \abs{\partial_\beta\qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}} + \abs{\partial_\beta\qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}}} \right). \end{split}$$ It follows from [\[eqn var pdt va v\]](#eqn var pdt va v){reference-type="eqref" reference="eqn var pdt va v"}-[\[eqn var pdt va h\]](#eqn var pdt va h){reference-type="eqref" reference="eqn var pdt va h"}, [\[est var opG\'\]](#est var opG'){reference-type="eqref" reference="est var opG'"}-[\[est var opS\]](#est var opS){reference-type="eqref" reference="est var opS"}, and the arguments in  [6.4](#sec contra ite){reference-type="ref" reference="sec contra ite"} that $$\label{key} {\mathfrak{E}}^{(n+1)} \le \frac{1}{2}{\mathfrak{E}}^{(n)} + Q(M_1)\qty(\mathfrak{E})_{\mathrm{I}},$$ provided that $T$ is sufficiently small. 
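Spelled out, iterating this inequality gives, for every $n \ge 1$, $$\mathfrak{E}^{(n)} \le 2^{-n}\, \mathfrak{E}^{(0)} + Q(M_1)\qty(\mathfrak{E})_{\mathrm{I}} \sum_{j=0}^{n-1} 2^{-j} \le 2^{-n}\, \mathfrak{E}^{(0)} + 2\, Q(M_1)\qty(\mathfrak{E})_{\mathrm{I}},$$ so the variational energies of the iterates are bounded uniformly in $n$.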
That is, the following proposition holds: **Proposition 30**. *Assume that $k \ge 3$. For any $0 < \epsilon \ll \delta_0$ and $A > 0$, there are positive constants $M_0, M_1, M_2, M_3, c_0, \mathfrak{s}_0$, so that if $T$ is small enough, then there is a map $\mathfrak{S} : \mathfrak{I}(\epsilon, A) \to \mathfrak{X}$ such that $$\label{key} \mathfrak{T}\qty{\mathfrak{x}, \mathfrak{S}(\mathfrak{x})} = \mathfrak{S}(\mathfrak{x}),$$ for each $\mathfrak{x} = \qty(\qty({\varkappa_a})_{\mathrm{I}}, \qty(\partial_t{\varkappa_a})_{\mathrm{I}}, \qty({\vb*{\omega}}_{*\pm})_{\mathrm{I}}, \qty({\vb{j}}_{*\pm})_{\mathrm{I}}, \qty(\va{\mathfrak{v}}_\pm)_{\mathrm{I}}, \qty(\va{\mathfrak{h}}_\pm)_{\mathrm{I}}) \in \mathfrak{I}(\epsilon, A)$.* ### The original MHD problem {#sec Back to original syro} For the fixed point given in Proposition [Proposition 30](#prop fixed point'){reference-type="ref" reference="prop fixed point'"}, one can obtain $({\Gamma_t}, {\vb{v}}_\pm, {\vb{h}}_\pm)$ by solving the div-curl problems [\[div-curl nonlinear v\'\]](#div-curl nonlinear v'){reference-type="eqref" reference="div-curl nonlinear v'"}-[\[div-curl nonlinear h\'\]](#div-curl nonlinear h'){reference-type="eqref" reference="div-curl nonlinear h'"}.
Observe that [\[eqn dvt va v\]](#eqn dvt va v){reference-type="eqref" reference="eqn dvt va v"} and [\[eqn dvt va h\]](#eqn dvt va h){reference-type="eqref" reference="eqn dvt va h"} yield $$\label{key} \int_{\mathbb{T}^2\times\{\pm1\}} \partial_t {\vb{v}}_\pm + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_\pm}{\vb{v}}_\pm + \dfrac{1}{\rho_\pm}\grad p^\pm - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{h}}_\pm \dd{\widetilde{S}} = 0$$ and $$\label{key} \int_{\mathbb{T}^2\times\{\pm1\}} \partial_t {\vb{h}}_\pm + \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{v}}_\pm}{\vb{h}}_\pm - \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}{\vb{v}}_\pm \dd{\widetilde{S}} = 0,$$ which, together with the arguments in  [6.5](#sec back to MHD){reference-type="ref" reference="sec back to MHD"}, shows that $({\Gamma_t}, {\vb{v}}_\pm, {\vb{h}}_\pm)$ is the unique solution to the (MHD)-(BC') problem. In particular, Theorem [Theorem 2](#thm alpha=0 case){reference-type="ref" reference="thm alpha=0 case"} follows. ## Vanishing surface tension limit {#sec vani st} In this subsection, it is always assumed that ${\Omega}= \mathbb{T}^2 \times (-1, 1)$, $k \ge 3$, and that the initial data satisfy the assumptions of Theorem [Theorem 2](#thm alpha=0 case){reference-type="ref" reference="thm alpha=0 case"}.
To derive the uniform-in-$\alpha$ estimates, we consider the following four parts of the energies: $$\label{key} \mathcal{E}_0 (t) \coloneqq \frac{1}{2}\int_{{\Omega}_t^+} \rho_+\qty(\abs{{\vb{v}}_+}^2 + \abs{{\vb{h}}_+}^2) \dd{x} + \frac{1}{2} \int_{{\Omega}_t^-} \rho_- \qty(\abs{{\vb{v}}_-}^2 + \abs{{\vb{h}}_-}^2) \dd{x} + \int_{\Gamma_t}\alpha^2 \dd{S_t},$$ $$\label{key} \mathcal{E}_1 (t) \coloneqq \abs{{\mathbb{D}_{\bar{t}}}\kappa_+}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})}^2 + \alpha^2 \abs{\kappa_+}_{H^{{\frac{3}{2}k}-1}({\Gamma_t})}^2 + \abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}^2,$$ $$\label{key} \mathcal{E}_2 (t) \coloneqq \norm{{\vb*{\omega}}_\pm}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)}^2 + \norm{{\vb{j}}_\pm}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)}^2,$$ and $$\label{key} \mathcal{E}_3 (t) \coloneqq \abs{\va{\mathfrak{v}}_\pm}^2 + \abs{\va{\mathfrak{h}}_\pm}^2.$$ It follows from the (MHD)-(BC') system that $$\label{eqn dt calE0} \dv{t}\mathcal{E}_0 (t) \equiv 0.$$ Indeed, $$\label{key} \begin{split} \dv{t} \frac{1}{2} \int_{{\Omega}_t^+} \rho_+ \qty(\abs{{\vb{v}}_+}^2 + \abs{{\vb{h}}_+}^2) \dd{x} &= \int_{{\Omega}_t^+} \rho_+ \qty({\vb{v}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+ + {\vb{h}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{v}}_+) - {\vb{v}}_+ \vdot \grad p^+ \dd{x} \\ &= \int_{{\Omega}_t^+} - {\vb{v}}_+ \vdot \grad p^+ + \rho_+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}({\vb{h}}_+ \vdot {\vb{v}}_+) \dd{x} \\ &= \int_{{\Omega}_t^+} - \div(p^+{\vb{v}}_+) + \rho_+ \div({\vb{h}}_+ [{\vb{h}}_+ \vdot {\vb{v}}_+]) \dd{x} \\ &= \int_{{\Gamma_t}} - p^+ \theta \dd{S_t}.
\end{split}$$ Similarly, one can calculate that $$\label{key} \dv{t} \frac{1}{2} \int_{{\Omega}_t^-} \rho_- \qty(\abs{{\vb{v}}_-}^2 + \abs{{\vb{h}}_-}^2) \dd{x} = \int_{{\Gamma_t}} p^- \theta \dd{S_t}.$$ It follows from [\[eqn kp div N\]](#eqn kp div N){reference-type="eqref" reference="eqn kp div N"} and [\[eqn Dt dS\]](#eqn Dt dS){reference-type="eqref" reference="eqn Dt dS"} that $$\label{key} \begin{split} \dv{t} \int_{\Gamma_t}\dd{S_t} &= \int_{\Gamma_t}\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb{v}}\dd{S_t} \\ &= \int_{\Gamma_t}\mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}(\theta{\vb{N}}_+) + \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}({\vb{v}}-\theta{\vb{N}}_+) \dd{S_t} \\ &= \int_{\Gamma_t}\theta \mathop{\mathrm{\mathrm{div}}}_{\Gamma_t}{\vb{N}}_+ + \grad^\top \theta \vdot {\vb{N}}_+ \dd{S_t} \\ &= \int_{\Gamma_t}\theta \kappa_+ \dd{S_t}. \end{split}$$ Thus, [\[eqn dt calE0\]](#eqn dt calE0){reference-type="eqref" reference="eqn dt calE0"} follows from the above relations and the boundary condition that $p^+ - p^- = \alpha^2 \kappa_+$ on ${\Gamma_t}$. Furthermore, applying the arguments in the proof of Lemma [Lemma 16](#lem est E_l){reference-type="ref" reference="lem est E_l"} to [\[eqn Dt2 kappa+\]](#eqn Dt2 kappa+){reference-type="eqref" reference="eqn Dt2 kappa+"}, [\[est frak R_0\'\]](#est frak R_0'){reference-type="eqref" reference="est frak R_0'"} and [\[syro stab condi useful\]](#syro stab condi useful){reference-type="eqref" reference="syro stab condi useful"} yields $$\label{key} \abs{\dv{t}\mathcal{E}_1 (t)} \le Q \qty(\alpha\abs{\kappa_+}_{H^{{\frac{3}{2}k}-1}({\Gamma_t})}, \abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}, \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})}, \norm{{\vb{h}}}_{H^{{\frac{3}{2}k}} ({\Omega}\setminus{\Gamma_t})})$$ for some generic polynomial $Q$ determined by $\Lambda_*$, $c_0$, and $\mathfrak{s}_0$. 
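For the reader's convenience, the cancellation yielding the conservation of $\mathcal{E}_0$ can be recorded in one line: the two bulk identities and the surface-area identity above combine with the jump condition $p^+ - p^- = \alpha^2\kappa_+$ on ${\Gamma_t}$ as

```latex
\dv{t}\mathcal{E}_0(t)
  = \int_{{\Gamma_t}} -p^+ \theta \dd{S_t}
  + \int_{{\Gamma_t}} p^- \theta \dd{S_t}
  + \alpha^2 \int_{{\Gamma_t}} \theta \kappa_+ \dd{S_t}
  = \int_{{\Gamma_t}} \qty(-(p^+ - p^-) + \alpha^2 \kappa_+) \theta \dd{S_t}
  = 0.
```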
Similarly, [\[eqn pdt vom\]](#eqn pdt vom){reference-type="eqref" reference="eqn pdt vom"}-[\[eqn pdt vj\]](#eqn pdt vj){reference-type="eqref" reference="eqn pdt vj"} and the arguments in  [5.2](#sec linear current vortex){reference-type="ref" reference="sec linear current vortex"} lead to: $$\label{key} \abs{\dv{t}\mathcal{E}_2(t)} \le Q \qty(\abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}, \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})}, \norm{{\vb{h}}}_{H^{{\frac{3}{2}k}} ({\Omega}\setminus{\Gamma_t})}).$$ As for $\mathcal{E}_3$, it follows from [\[eqn dvt va v\]](#eqn dvt va v){reference-type="eqref" reference="eqn dvt va v"} and [\[eqn dvt va h\]](#eqn dvt va h){reference-type="eqref" reference="eqn dvt va h"} that $$\label{key} \abs{\dv{t}\mathcal{E}_3(t)} \le Q \qty(\abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}, \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})}, \norm{{\vb{h}}}_{H^{{\frac{3}{2}k}} ({\Omega}\setminus{\Gamma_t})}).$$ On the other hand, if $T$ is small compared to $\norm{({\vb{v}}_{0\pm}, {\vb{h}}_{0\pm})}_{H^{{\frac{3}{2}k}}({\Omega}_0^\pm)}$, one has $$\label{key} \mathop{\mathrm{\mathrm{dist}}}\qty({\Gamma_t}, \mathbb{T}^2 \times \{\pm 1\}) \ge c_0.$$ Then it follows from the estimates of div-curl systems and Lemma [Lemma 5](#est ii){reference-type="ref" reference="est ii"} that $$\label{key} \begin{split} \norm{{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)} \le &Q\qty(\abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}, c_0) \times \\ &\quad \times \qty(\norm{{\vb*{\omega}}_\pm}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)} + \abs{{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\triangle}_{{\Gamma_t}} {\vb{v}}_\pm}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} + \abs{\va{\mathfrak{v}}_\pm}), \end{split}$$ and $$\label{key} \begin{split} \norm{{\vb{h}}_\pm}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)} \le &Q\qty(\abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}, c_0) \times \\ &\quad 
\times \qty(\norm{{\vb{j}}_\pm}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)} + \abs{{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\triangle}_{{\Gamma_t}} {\vb{h}}_\pm}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} + \abs{\va{\mathfrak{h}}_\pm}). \end{split}$$ In addition, [\[eqn dt kappa\]](#eqn dt kappa){reference-type="eqref" reference="eqn dt kappa"} implies $$\label{key} -{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{v}}_\pm = {\mathbb{D}_t}_\pm \kappa_+ + 2 \ip{{\vb{I\!I}}_+}{(\mathop{}\!\mathbin{\mathrm{D}}{\vb{v}}_\pm)^\top} = {\mathbb{D}_{\bar{t}}}\kappa_+ + \mathop{}\!\mathbin{\mathrm{D}}_{({\vb{v}}_\pm - {\vb{u}})}\kappa_+ + 2 \ip{{\vb{I\!I}}_+}{(\mathop{}\!\mathbin{\mathrm{D}}{\vb{v}}_\pm)^\top}.$$ Thus, one has $$\label{key} \begin{split} &\abs{{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} \\ &\quad \le C_* \qty(\abs{{\mathbb{D}_{\bar{t}}}\kappa_+}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})} + \abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}\abs{({\vb{v}}_+, {\vb{v}}_-)}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}). \end{split}$$ It follows from the interpolation inequalities of Sobolev norms that $$\label{key} \begin{split} &\norm{{\vb{v}}_\pm}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)} \\ &\quad\le Q \qty(\abs{{\mathbb{D}_{\bar{t}}}\kappa_+}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Gamma_t})}, \abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}, \norm{{\vb*{\omega}}_\pm}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)}, \abs{\va{\mathfrak{v}}_\pm}, \norm{{\vb{v}}_\pm}_{L^2({\Omega}_t^\pm)}), \end{split}$$ where $Q$ is a generic polynomial determined by $\Lambda_*$ and $c_0$. 
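The absorption implicit in this interpolation step can be sketched as follows (a standard trace-plus-interpolation argument, with constants depending only on $\Lambda_*$ and $c_0$): for any $\epsilon \in (0,1)$,

```latex
\abs{({\vb{v}}_+, {\vb{v}}_-)}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}
  \lesssim \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}-1}({\Omega}\setminus{\Gamma_t})}
  \le C(\epsilon) \norm{{\vb{v}}}_{L^2({\Omega}\setminus{\Gamma_t})}
  + \epsilon \norm{{\vb{v}}}_{H^{{\frac{3}{2}k}}({\Omega}\setminus{\Gamma_t})},
```

so, after choosing $\epsilon$ small relative to $\abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}$, the top-order term on the right is absorbed into the left-hand side, leaving exactly the quantities recorded in the polynomial $Q$.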
Similarly, $$\label{key} -{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{h}}_\pm = \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_\pm}\kappa_+ + 2\ip{{\vb{I\!I}}_+}{(\mathop{}\!\mathbin{\mathrm{D}}{\vb{h}}_\pm)^\top}$$ implies that $$\label{key} \norm{{\vb{h}}_\pm}_{H^{{\frac{3}{2}k}}({\Omega}_t^\pm)} \le Q \qty(\abs{\kappa_+}_{H^{{\frac{3}{2}k}-\frac{3}{2}}({\Gamma_t})}, \norm{{\vb{j}}_\pm}_{H^{{\frac{3}{2}k}-1}({\Omega}_t^\pm)}, \abs{\va{\mathfrak{h}}_\pm}, \norm{{\vb{h}}_\pm}_{L^2({\Omega}_t^\pm)}).$$ In conclusion, setting $$\label{key} \mathcal{E} (t) \coloneqq \mathcal{E}_0 + \mathcal{E}_1 + \mathcal{E}_2 + \mathcal{E}_3,$$ one can deduce that $$\label{key} \abs{\dv{t}\mathcal{E}(t)} \le Q\qty[\mathcal{E}(t)],$$ where $Q$ is a generic polynomial depending on $\Lambda_*$, $\mathfrak{s}_0$, and $c_0$ (in particular, independent of $\alpha$ and $t$). Hence a Grönwall-type comparison argument yields energy bounds, and a time of existence, that are uniform in $\alpha$. Thus, Theorem [Theorem 3](#thm vanishing surf limt){reference-type="ref" reference="thm vanishing surf limt"} follows from the above energy estimates. # Proof of Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"} {#proof-of-lemma-lem3.1} *Proof.* ([\[est B\]](#est B){reference-type="ref" reference="est B"}), ([\[est F\]](#est F){reference-type="ref" reference="est F"}), and ([\[est B version2\]](#est B version2){reference-type="ref" reference="est B version2"})-([\[est G\]](#est G){reference-type="ref" reference="est G"}) follow from ([\[est pd beta v+\*\]](#est pd beta v+*){reference-type="ref" reference="est pd beta v+*"})-([\[eqn pd2 gt\]](#eqn pd2 gt){reference-type="ref" reference="eqn pd2 gt"}) and Proposition [Proposition 6](#prop K){reference-type="ref" reference="prop K"}. 
As for the variational estimates, let $f, h : {\Gamma_\ast}\to \mathbb{R}$, ${\vb{g}}: {\Omega\setminus{\Gamma_\ast}}\to \mathbb{R}^3$ be given quantities, and ${\vb{w}}_1, {\vb{w}}_2, {\vb{w}}_3 : {\Omega}\setminus{\Gamma_t}\to \mathbb{R}^3$ the solutions to the following div-curl problems respectively: $$\label{eqn w1} \begin{cases*} \div{\vb{w}}_1 = \comm{\div}{\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{f}}}}{\vb{v}}&in $ {\Omega}\setminus{\Gamma_t}$, \\ \curl{\vb{w}}_1 = \comm{\curl}{\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{f}}}}{\vb{v}}+ \comm{\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{f}}}}{{\mathbb{P}}}\qty[{\vb*{\omega}}_* \circ \qty(\mathcal{X}_{\Gamma_t})^{-1}], &in $ {\Omega}\setminus{\Gamma_t}$, \\ {\vb{N}}_+ \vdot {\vb{w}}_{1\pm} = {\vb{N}}_+ \vdot \vb{b}_{\pm}({\varkappa_a}, \partial_t{\varkappa_a}, f) &on $ {\Gamma_t}$, \\ \widetilde{{\vb{N}}} \vdot {\vb{w}}_{1-} = 0 &on $ \partial{\Omega}$, \end{cases*}$$ where $$\label{key} \begin{split} \vb{b}_{\pm} = &\, \mathop{}\!\mathbin{\mathrm{D}}\qty[\qty(\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[f] {\vb*{\nu}}) \circ (\Phi_{\Gamma_t})^{-1}] \vdot \qty[ {\vb{v}}_\pm|_{\Gamma_t}- \qty(\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_t{\varkappa_a}] {\vb*{\nu}}) \circ (\Phi_{\Gamma_t})^{-1}] \\ &+ \qty[\var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})\qty[\partial_t{\varkappa_a}, f] {\vb*{\nu}}] \circ (\Phi_{\Gamma_t})^{-1}, \end{split}$$ $$\label{key} {\vb{f}}_\pm \coloneqq {\mathcal{H}}_\pm \qty[\qty(\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[f]{\vb*{\nu}})\circ(\Phi_{\Gamma_t})^{-1} ];$$ $$\label{key} \begin{cases*} \div{\vb{w}}_2 = 0 &in $ {\Omega\setminus{\Gamma_t}}$, \\ \curl{\vb{w}}_2 = {\mathbb{P}}\qty({\vb{g}}\circ (\mathcal{X}_{\Gamma_t})^{-1}) &in $ {\Omega\setminus{\Gamma_t}}$, \\ {\vb{N}}_+ \vdot {\vb{w}}_{2\pm} = 0 &on $ {\Gamma_t}$, \\ \widetilde{{\vb{N}}} \vdot {\vb{w}}_{2-} = 0 &on $ \partial{\Omega}$; \end{cases*}$$ and $$\label{key} \begin{cases*} \div{\vb{w}}_3 = 0 &in $ {\Omega\setminus{\Gamma_t}}$, \\ 
\curl{\vb{w}}_3 = 0 &in $ {\Omega\setminus{\Gamma_t}}$, \\ {\vb{N}}_+ \vdot {\vb{w}}_{3\pm} = \qty[\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[h]{\vb*{\nu}}] \circ (\Phi_{\Gamma_t})^{-1} \vdot {\vb{N}}_+ &on $ {\Gamma_t}$, \\ \widetilde{{\vb{N}}} \vdot {\vb{w}}_{3-} = 0 &on $ \partial{\Omega}$. \end{cases*}$$ Then $$\label{key} {\mathbb{D}_{\beta}}{\vb{v}}= {\vb{w}}_1 + {\vb{w}}_2 + {\vb{w}}_3,$$ with the substitution $f = \partial_\beta{\varkappa_a}$, ${\vb{g}}= \partial_\beta{\vb*{\omega}}_*$ and $h = \partial^2_{t\beta}{\varkappa_a}$. By the definitions, there hold $$\label{expression B} {\mathbf{B}}_\pm({\varkappa_a})h = (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\qty{{\vb{w}}_{3\pm}|_{\Gamma_t}\circ\Phi_{\Gamma_t}- \var{{\mathfrak{K}}^{-1}}({\varkappa_a})\qty[h]{\vb*{\nu}}},$$ $$\label{expression F} {\mathbf{F}}_\pm({\varkappa_a}) {\vb{g}}= (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}({\vb{w}}_{2\pm}|_{\Gamma_t}\circ\Phi_{\Gamma_t}),$$ and $$\label{expression G} \begin{split} {\mathbf{G}}_\pm({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_{*\pm})f = &\, (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \qty{{\vb{w}}_{1\pm}|_{\Gamma_t}\circ (\Phi_{\Gamma_t}) - \var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_t{\varkappa_a}, f]{\vb*{\nu}}} \\ &- (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\qty{\mathop{}\!\mathbin{\mathrm{D}}\qty(\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[f]{\vb*{\nu}}) \vdot {\vb{v}}_{\pm*}}. 
\end{split}$$ Therefore, if ${\varkappa_a}$ and ${\vb*{\omega}}_*$ depend on a parameter $\beta$, then applying $\pdv*{\beta}$ to ([\[expression B\]](#expression B){reference-type="ref" reference="expression B"}) yields that $$\label{key} \begin{split} &\hspace{-2em}\var{{\mathbf{B}}_\pm}({\varkappa_a})[\partial_\beta {\varkappa_a}] h \\ =\, &-(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \mathop{}\!\mathbin{\mathrm{D}}(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}}) (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\qty{{\vb{w}}_{3\pm}|_{\Gamma_t}\circ\Phi_{\Gamma_t}- \var{{\mathfrak{K}}^{-1}}({\varkappa_a})\qty[h]{\vb*{\nu}}} \\ &+(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1}\qty{({\mathbb{D}_{\beta}}{\vb{w}}_{3\pm})|_{\Gamma_t}\circ\Phi_{\Gamma_t}- \var[2]{\mathfrak{K}}^{-1}({\varkappa_a})[\partial_\beta{\varkappa_a}, h]{\vb*{\nu}}}, \end{split}$$ where $$\label{key} {\mathbb{D}_{\beta}}\coloneqq \partial_\beta + \mathop{}\!\mathbin{\mathrm{D}}_{{\mathcal{H}}(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ(\mathcal{X}_{\Gamma_t})^{-1}}.$$ Thus, one can derive from the commutator and div-curl estimates that for $\frac{1}{2} \le s \le {\frac{3}{2}k}- \frac{3}{2}$, $$\label{key} \abs{\var{{\mathbf{B}}_\pm}({\varkappa_a})[\partial_\beta {\varkappa_a}] h}_{\H{s}} \le C \abs{h}_{\H{s-2}} \cdot \abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}},$$ namely, ([\[est var B\]](#est var B){reference-type="ref" reference="est var B"}) holds. ([\[est var F\]](#est var F){reference-type="ref" reference="est var F"}) can be shown in a similar way. The proof of ([\[est var G\]](#est var G){reference-type="ref" reference="est var G"}) is a little more complicated. 
Indeed, denote by $Q$ a generic polynomial of the form $$Q = Q\qty(\abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}),$$ and note that $$\label{key} \begin{split} &\hspace{-2em}\pdv{\beta}\qty[{\mathbf{G}}_\pm({\varkappa_a}, \partial_t {\varkappa_a}, {\vb*{\omega}}_{*\pm})f] \\ =\, &-(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \mathop{}\!\mathbin{\mathrm{D}}(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}}) \vdot (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \vb{a}_\pm + (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \partial_\beta\vb{a}_\pm, \end{split}$$ where $$\label{key} \vb{a}_\pm \coloneqq {\vb{w}}_{1\pm}|_{\Gamma_t}\circ (\Phi_{\Gamma_t}) - \var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_t{\varkappa_a}, f]{\vb*{\nu}}- \mathop{}\!\mathbin{\mathrm{D}}\qty(\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[f]{\vb*{\nu}}) \vdot {\vb{v}}_{\pm*}.$$ It follows from ([\[eqn w1\]](#eqn w1){reference-type="ref" reference="eqn w1"}), ([\[def u lambda \*\]](#def u lambda *){reference-type="ref" reference="def u lambda *"}), and Proposition [Proposition 6](#prop K){reference-type="ref" reference="prop K"} that $$\label{key} \begin{split} \abs{\vb{a}_\pm}_{\H{\sigma-\frac{1}{2}}} \lesssim_{Q}\, &\norm{{\vb{w}}_{1\pm}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_t}})} + \abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{9}{2}}} \cdot \abs{f}_{\H{\sigma-\frac{5}{2}}} \\ &+ \norm{{\vb{v}}_{\pm*}}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})} \cdot \abs{f}_{\H{\sigma-\frac{3}{2}}} \\ \lesssim_{Q}\, &\abs{f}_{\H{\sigma-\frac{3}{2}}}. 
\end{split}$$ Therefore, $$\label{key} \begin{split} \abs{(\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \mathop{}\!\mathbin{\mathrm{D}}(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}}) \vdot (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \vb{a}_\pm}_{\H{\sigma-\frac{1}{2}}} \lesssim_{Q} \abs{\partial_\beta{\varkappa_a}}_{\H{\sigma-\frac{1}{2}}} \abs{f}_{\H{\sigma-\frac{3}{2}}}. \end{split}$$ Next, by observing that $$\label{key} \begin{split} \partial_\beta\vb{a}_\pm =\, &({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm})|_{\Gamma_t}\circ \Phi_{\Gamma_t}- \var[3]{{\mathfrak{K}}^{-1}}({\varkappa_a})\qty[\partial_\beta{\varkappa_a}, \partial_t{\varkappa_a}, f]{\vb*{\nu}}\\ &-\var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial^2_{t\beta}{\varkappa_a}, f]{\vb*{\nu}}- \mathop{}\!\mathbin{\mathrm{D}}\qty{\var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_\beta{\varkappa_a}, f]{\vb*{\nu}}}\vdot{\vb{v}}_{\pm*} \\ &- \mathop{}\!\mathbin{\mathrm{D}}\qty{\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[f]{\vb*{\nu}}}\vdot\partial_\beta{\vb{v}}_{\pm*}, \end{split}$$ one can deduce from ([\[est pd beta v+\*\]](#est pd beta v+*){reference-type="ref" reference="est pd beta v+*"}) that $$\label{key} \begin{split} &\hspace{-2em}\abs{\partial_\beta \vb{a}_\pm - ({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm})|_{\Gamma_t}\circ \Phi_{\Gamma_t}}_{\H{\sigma-\frac{1}{2}}} \\ \lesssim_{Q}\, &\abs{f}_{\H{\sigma-\frac{3}{2}}} \qty(\abs{\partial_\beta{\varkappa_a}}_{\H{\sigma-\frac{1}{2}}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{\sigma-\frac{3}{2}}} + \norm{\partial_\beta {\vb*{\omega}}_{*\pm}}_{H^{\sigma}({\Omega}_*^\pm)}). 
\end{split}$$ As for the estimate of ${\mathbb{D}_{\beta}}{\vb{w}}_{1\pm}$, first note that $$\label{key} \begin{split} \abs{{\mathbb{D}_{\beta}}{\vb{w}}_{1\pm}}_{H^{\sigma-\frac{1}{2}}({\Gamma_t})} \lesssim_{Q}\, &\norm{{\mathbb{D}_{\beta}}{\vb{w}}_{1\pm}}_{H^{\sigma}({\Omega}_t^\pm)} \\ \lesssim_{Q}\, &\norm{\div({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm})}_{H^{\sigma-1}({\Omega}_t^\pm)} + \norm{\curl({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm})}_{H^{\sigma-1}({\Omega}_t^\pm)} \\ &+ \abs{{\vb{N}}_+ \vdot ({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm})}_{H^{\sigma-\frac{1}{2}}({\Gamma_t})}. \end{split}$$ The commutator estimates yield that $$\label{key} \begin{split} &\hspace{-2em}\norm{\div({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm})}_{H^{\sigma-1}({\Omega}_t^\pm)} + \norm{\curl({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm})}_{H^{\sigma-1}({\Omega}_t^\pm)} \\ \lesssim_{Q}\, &\abs{f}_{\H{\sigma-\frac{3}{2}}} \cdot \qty(\abs{\partial_\beta{\varkappa_a}}_{\H{\sigma-\frac{1}{2}}} + \norm{\partial_\beta{\vb*{\omega}}_{*\pm}}_{H^{\sigma}({\Omega}_*^\pm)}). 
\end{split}$$ For the boundary estimate of ${\mathbb{D}_{\beta}}{\vb{w}}_{1\pm}$, one notes the relations $$\label{key} \begin{split} &\hspace{-2em}{\vb{N}}_+ \vdot ({\mathbb{D}_{\beta}}{\vb{w}}_{1\pm}) - {\vb{N}}_+ \vdot {\mathbb{D}_{\beta}}\vb{b}_\pm \\ =\, &{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}\qty[(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ (\Phi_{\Gamma_t})^{-1}] \vdot ({\vb{w}}_{1\pm}^\top - \vb{b}_\pm^\top), \end{split}$$ and $$\label{key} \begin{split} &\hspace{-2em}\abs{{\vb{N}}_+ \vdot \mathop{}\!\mathbin{\mathrm{D}}\qty[(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}})\circ (\Phi_{\Gamma_t})^{-1}] \vdot ({\vb{w}}_{1\pm}^\top - \vb{b}_\pm^\top)}_{H^{\sigma-\frac{1}{2}}({\Gamma_t})} \\ \lesssim_{Q}\, &\abs{f}_{\H{\sigma-\frac{3}{2}}} \cdot \abs{\partial_\beta{\varkappa_a}}_{\H{\sigma-\frac{1}{2}}}, \end{split}$$ which reduce matters to estimating ${\mathbb{D}_{\beta}}\vb{b}_\pm$. A direct computation shows that $$\label{key} \begin{split} &\hspace{-1em}({\mathbb{D}_{\beta}}\vb{b}_\pm)\circ\Phi_{\Gamma_t}= \partial_\beta(\vb{b}_\pm \circ \Phi_{\Gamma_t}) \\ =\, &\mathop{}\!\mathbin{\mathrm{D}}\qty{\var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_\beta{\varkappa_a}, f]{\vb*{\nu}}} \vdot (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \qty{{\vb{v}}_{\pm}|_{\Gamma_t}\circ\Phi_{\Gamma_t}- \var{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_t{\varkappa_a}]{\vb*{\nu}}} \\ &-\mathop{}\!\mathbin{\mathrm{D}}\qty{\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[f]{\vb*{\nu}}} \vdot (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \mathop{}\!\mathbin{\mathrm{D}}\qty(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}}) \vdot (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \\ &\qquad \vdot \qty{{\vb{v}}_{\pm}|_{\Gamma_t}\circ\Phi_{\Gamma_t}- \var{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_t{\varkappa_a}]{\vb*{\nu}}} \\ &+ \mathop{}\!\mathbin{\mathrm{D}}\qty{\var{{\mathfrak{K}}^{-1}}({\varkappa_a})[f]{\vb*{\nu}}} \vdot (\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t})^{-1} \vdot \\ 
&\qquad\vdot\qty{({\mathbb{D}_{\beta}}{\vb{v}}_\pm) \circ\Phi_{\Gamma_t}- \var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_\beta{\varkappa_a}, \partial_t{\varkappa_a}]{\vb*{\nu}}- \var{{\mathfrak{K}}^{-1}({\varkappa_a})[\partial^2_{t\beta}{\varkappa_a}]{\vb*{\nu}}}} \\ &-\var[2]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial^2_{t\beta}{\varkappa_a}, f]{\vb*{\nu}}- \var[3]{{\mathfrak{K}}^{-1}}({\varkappa_a})[\partial_\beta{\varkappa_a}, \partial_t{\varkappa_a}, f]{\vb*{\nu}}. \end{split}$$ Therefore, it follows from [\[est pd beta v+\*\]](#est pd beta v+*){reference-type="eqref" reference="est pd beta v+*"} and Proposition [Proposition 6](#prop K){reference-type="ref" reference="prop K"} that $$\label{key} \begin{split} \abs{\partial_\beta(\vb{b}_\pm \circ \Phi_{\Gamma_t})}_{\H{\sigma-\frac{1}{2}}} \lesssim_{Q}\abs{f}_{\H{\sigma-\frac{3}{2}}} \cdot \qty(\abs{\partial_\beta{\varkappa_a}}_{\H{\sigma-\frac{1}{2}}}+ \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{\sigma-\frac{3}{2}}}). \end{split}$$ In conclusion, $$\label{key} \begin{split} &\hspace{-2em}\abs{\pdv{\beta}\qty[{\mathbf{G}}_\pm({\varkappa_a}, \partial_t {\varkappa_a}, {\vb*{\omega}}_{*\pm})f]}_{\H{\sigma-\frac{1}{2}}} \\ \lesssim_{Q}\, &\abs{f}_{\H{\sigma-\frac{3}{2}}} \cdot \qty(\abs{\partial_\beta{\varkappa_a}}_{\H{\sigma-\frac{1}{2}}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{\sigma-\frac{3}{2}}} + \norm{\partial_\beta{\vb*{\omega}}_*}_{H^{\sigma}({\Omega\setminus{\Gamma_\ast}})} ), \end{split}$$ which is exactly ([\[est var G\]](#est var G){reference-type="ref" reference="est var G"}). 
◻ # Proof of Lemma [Lemma 15](#lem 3.4){reference-type="ref" reference="lem 3.4"} {#proof-of-lemma-lem-3.4} *Proof.* Note that $$\label{eqn opF} {\mathscr{F}}({\varkappa_a})\vb{g} = \qty[\mathrm{I}+{\mathscr{B}}({\varkappa_a})]^{-1} \qty[\grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot {\mathbf{F}}({\varkappa_a})\vb{g}].$$ It follows from [\[est F\]](#est F){reference-type="eqref" reference="est F"} that $$\label{est op scrF} \begin{split} \abs{{\mathscr{F}}({\varkappa_a})\vb{g}}_{\H{s}} &\lesssim_{\Lambda_*}\abs{\grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot {\mathbf{F}}({\varkappa_a})\vb{g}}_{\H{s}} \\ &\lesssim_{\Lambda_*}\abs{\grad^\top (\kappa_+ \circ \Phi_{\Gamma_t})}_{\H{{\frac{3}{2}k}-2}} \abs{{\mathbf{F}}({\varkappa_a})\vb{g}}_{\H{s+\epsilon}} \\ &\lesssim_{\Lambda_*}\abs{{\varkappa_a}}_{\H{{\frac{3}{2}k}-1}} \norm{\vb{g}}_{H^{s+\epsilon-\frac{1}{2}}({\Omega\setminus{\Gamma_\ast}})}, \end{split}$$ for $\frac{1}{2} \le s \le {\frac{3}{2}k}-2$, which is exactly [\[est opF\]](#est opF){reference-type="eqref" reference="est opF"}. For $k \ge 3$ and $\frac{1}{2} \le \sigma \le {\frac{3}{2}k}-\frac{5}{2}$, similar arguments lead to [\[est opF\'\]](#est opF'){reference-type="eqref" reference="est opF'"}. 
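The middle inequality in [\[est op scrF\]](#est op scrF){reference-type="eqref" reference="est op scrF"} is an instance of the tame product estimate on the interface, recorded here schematically (the small loss $\epsilon$ is what allows the low endpoint $s = \frac{1}{2}$):

```latex
\abs{f g}_{\H{s}} \lesssim_{\Lambda_*}
  \abs{f}_{\H{{\frac{3}{2}k}-2}} \abs{g}_{\H{s+\epsilon}},
\qquad \frac{1}{2} \le s \le {\frac{3}{2}k}-2, \quad 0 < \epsilon \ll 1,
```

applied with $f = \grad^\top(\kappa_+ \circ \Phi_{\Gamma_t})$ and $g = {\mathbf{F}}({\varkappa_a})\vb{g}$.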
To estimate ${\mathscr{G}}$, one observes that $$\label{formula (I+B)^(-1)G} \begin{split} &\hspace{-2em}\qty[\mathrm{I} + {\mathscr{B}}({\varkappa_a})]{\mathscr{G}}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*) \\ =\,&-a^2 \dfrac{{\vb{N}}_+ \circ \Phi_{\Gamma_t}}{{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t})} \vdot \qty[\va{\mathfrak{b}} \circ \Phi_{\Gamma_t}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty({\vb{u}}\circ\Phi_{\Gamma_t}+ \partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}})] \\ &+a^2{\mathscr{C}}_\alpha {\gamma_{\Gamma_t}}+ {\mathscr{B}}{\mathscr{C}}_\alpha{\varkappa_a}- \grad^\top(\kappa_+\circ\Phi_{\Gamma_t}) \vdot \qty[{\mathbf{G}}\partial_t{\varkappa_a}] + \mathfrak{R}_1 \circ\Phi_{\Gamma_t}, \end{split}$$ with $\mathfrak{R_1}$ given by ([\[def frak\[R\]\_1\]](#def frak[R]_1){reference-type="ref" reference="def frak[R]_1"}). Thus, [\[est opG\]](#est opG){reference-type="eqref" reference="est opG"} and [\[est opG\'\]](#est opG'){reference-type="eqref" reference="est opG'"} follow from [\[def vW\]](#def vW){reference-type="eqref" reference="def vW"}, [\[est frak R_0\]](#est frak R_0){reference-type="eqref" reference="est frak R_0"}, [\[est frak R_0\'\]](#est frak R_0'){reference-type="eqref" reference="est frak R_0'"}, [\[est opB\]](#est opB){reference-type="eqref" reference="est opB"}, [\[est opB\'\]](#est opB'){reference-type="eqref" reference="est opB'"}, Lemma [Lemma 14](#lem 3.3){reference-type="ref" reference="lem 3.3"} and Theorem [Theorem 11](#thm div-curl){reference-type="ref" reference="thm div-curl"}. Next, we consider the variational estimates under the assumption that $k \ge 3$. 
Suppose that $\xi({\varkappa_a})$ and $\eta({\varkappa_a})$ are two functionals so that $$\label{relation xi eta} \xi = \qty(\mathrm{I} + {\mathscr{B}})\eta.$$ Then, when computing the variation formula, the following relation holds: $$\label{eqn xi eta I+B} \pdv{\xi}{\beta} = \qty(\mathrm{I}+{\mathscr{B}})\pdv{\eta}{\beta} + \pdv{\beta}({\mathscr{B}})\eta.$$ Therefore, if ${\varkappa_a}$ is parameterized by $\beta$, then $$\label{eqn var xi eta I+B} \var{\eta}({\varkappa_a})\qty[\partial_\beta{\varkappa_a}] = \qty[\mathrm{I}+{\mathscr{B}}({\varkappa_a})]^{-1} \qty{\var{\xi}({\varkappa_a})\qty[\partial_\beta{\varkappa_a}] - \var{{\mathscr{B}}}({\varkappa_a})\qty[\partial_\beta{\varkappa_a}]\eta({\varkappa_a})},$$ where $\var{\mathscr{B}}$ has the following estimate (by Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"}): $$\label{est var opB} \abs{\var{\mathscr{B}}({\varkappa_a})}_{{\mathscr{L}}\qty[\H{{\frac{3}{2}k}-\frac{5}{2}}; {\mathscr{L}}\qty(\H{s-2}; \H{s})]} \le C_*,$$ for $\frac{1}{2} \le s \le {\frac{3}{2}k}-\frac{7}{2}$ and a constant $C_* > 0$ depending on $\Lambda_*$ and $s$. 
Similarly, it follows from [\[eqn opF\]](#eqn opF){reference-type="eqref" reference="eqn opF"}-[\[eqn xi eta I+B\]](#eqn xi eta I+B){reference-type="eqref" reference="eqn xi eta I+B"} and ([\[est var F\]](#est var F){reference-type="ref" reference="est var F"}) that $$\label{key} \begin{split} &\hspace{-2em}\abs{\var\qty{(\mathrm{I} + {\mathscr{B}}({\varkappa_a})){\mathscr{F}}({\varkappa_a})}[\partial_\beta{\varkappa_a}]\vb{g}}_{\H{{\frac{3}{2}k}-\frac{7}{2}}} \\ \lesssim_{\Lambda_*}\, &\abs{\grad^\top\partial_\beta(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot {\mathbf{F}}({\varkappa_a})\vb{g}}_{\H{{\frac{3}{2}k}-\frac{7}{2}}} + \abs{\grad^\top(\kappa_+ \circ \Phi_{\Gamma_t}) \vdot \var{\mathbf{F}}({\varkappa_a})[\partial_\beta{\varkappa_a}]\vb{g}}_{\H{{\frac{3}{2}k}-\frac{7}{2}}} \\ \lesssim_{\Lambda_*}\, &\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \norm{\vb{g}}_{H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}})}, \end{split}$$ which, together with ([\[eqn var xi eta I+B\]](#eqn var xi eta I+B){reference-type="ref" reference="eqn var xi eta I+B"}), ([\[est var opB\]](#est var opB){reference-type="ref" reference="est var opB"}), ([\[eqn opF\]](#eqn opF){reference-type="ref" reference="eqn opF"}) and Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"}, yields ([\[est var opF\]](#est var opF){reference-type="ref" reference="est var opF"}). To derive the variational estimate of ${\mathscr{G}}$, we suppose that ${\varkappa_a}$, ${\vb*{\omega}}_*$, and ${\vb{j}}_*$ depend on a parameter $\beta$. 
With the same notations as in ([\[def Dbt\]](#def Dbt){reference-type="ref" reference="def Dbt"}), the variational estimate of $\mathfrak{R}_1 \circ \Phi_{\Gamma_t}$ can be given by: $$\label{est var frak R_1} \abs{\pdv{\beta}(\mathfrak{R}_1 \circ\Phi_{\Gamma_t})}_{\H{{\frac{3}{2}k}-4}} \lesssim_{\Lambda_*}\abs{{\mathbb{D}_{\beta}}\mathfrak{R}_1}_{H^{{\frac{3}{2}k}-4}({\Gamma_t})}.$$ For simplicity, from now on, we shall use the notations $\abs{\cdot}_s \equiv \abs{\cdot}_{H^s({\Gamma_t})}$, $\norm{\phi_\pm}_s \equiv \norm{\phi_\pm}_{H^s({\Omega}_t^\pm)}$, and $$\begin{split} \va{\mathfrak{b}} \coloneqq \dfrac{1}{\rho_+ + \rho_-} \qty(\grad q^+ + \grad q^-) + \dfrac{\rho_+ \rho_-}{\qty(\rho_+ + \rho_-)^2}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{w}}}{\vb{w}}- \dfrac{\rho_+}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_+}{\vb{h}}_+ - \dfrac{\rho_-}{\rho_+ + \rho_-}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}_-}{\vb{h}}_-. \end{split}$$ First note that $$\label{key} \abs{{\mathbb{D}_{\beta}}\qty(\va{\mathfrak{b}} \vdot \mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+)}_{{\frac{3}{2}k}-4} \lesssim_{\Lambda_*}\abs*{{\mathbb{D}_{\beta}}\va{\mathfrak{b}}}_{{\frac{3}{2}k}-\frac{7}{2}}\abs{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+}_{{\frac{3}{2}k}-\frac{7}{2}} + \abs{{\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+}_{{\frac{3}{2}k}-\frac{7}{2}}\abs*{\va{\mathfrak{b}}}_{{\frac{3}{2}k}-\frac{7}{2}},$$ and one can derive from Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"} and ([\[eqn dt n\]](#eqn dt n){reference-type="ref" reference="eqn dt n"}) that $$\label{key} \abs{{\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{N}}_+}_{{\frac{3}{2}k}-4} \lesssim_{\Lambda_*}\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}.$$ As for the term ${\mathbb{D}_{\beta}}\va{\mathfrak{b}}$, Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"}, 
[\[est pd beta v+\*\]](#est pd beta v+*){reference-type="eqref" reference="est pd beta v+*"} and ([\[def vW\]](#def vW){reference-type="ref" reference="def vW"}) lead to $$\label{est dbt va b} \begin{split} \abs{{\mathbb{D}_{\beta}}\va{\mathfrak{b}}}_{{\frac{3}{2}k}-4} \lesssim_{\Lambda_*}\, &\abs{{\mathbb{D}_{\beta}}\grad q}_{{\frac{3}{2}k}-4} + \abs{{\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{h}}} {\vb{h}}}_{{\frac{3}{2}k}-4} + \abs{{\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\mathrm{D}}_{\vb{w}}{\vb{w}}}_{{\frac{3}{2}k}-4} \\ \lesssim_{Q_*}\, &\abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-3}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{{\frac{3}{2}k}-5}} + \norm{\partial_\beta{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}})} \\ & + \norm{\partial_\beta{\vb{j}}_*}_{H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}})}, \end{split}$$ where $Q_*$ is a generic polynomial, determined by $\Lambda_*$, of the quantities $\abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}$, $\norm{{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}$, and $\norm{{\vb{j}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}$. 
Next, observe that $$\label{key} \begin{split} \abs{{\mathbb{D}_{\beta}}\qty[{\vb{N}}\cdot \mathop{}\!\mathbin{\mathrm{D}}{\vb{u}}\cdot (\mathop{}\!\mathbin{\mathrm{D}}_{\Gamma_t})^2{\vb{u}}]}_{{\frac{3}{2}k}-4} \lesssim_{Q_*}\, &\abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{{\mathbb{D}_{\beta}}{\vb{u}}}_{{\frac{3}{2}k}-2} \\ \lesssim_{Q_*}\, &\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{{\frac{3}{2}k}-4}} +\norm{\partial_\beta{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}, \end{split}$$ and $$\label{key} \abs{{\mathbb{D}_{\beta}}{\vb{I\!I}}_+}_{{\frac{3}{2}k}-4} \lesssim_{Q_*} \abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-4}}.$$ Thus, the variational estimates of the first six terms of ([\[def frak\[R\]\_1\]](#def frak[R]_1){reference-type="ref" reference="def frak[R]_1"}) follow easily. To deal with the terms involving $\mathop{}\!\mathbin{\triangle}_{\Gamma_t}{\vb{I\!I}}- {\mathcal{R}}$, one can deduce from [\[simons\' identity\]](#simons' identity){reference-type="eqref" reference="simons' identity"}, ([\[est pd beta v+\*\]](#est pd beta v+*){reference-type="ref" reference="est pd beta v+*"}) and Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"} that $$\label{key} \begin{split} &\hspace{-3em}\abs{{\mathbb{D}_{\beta}}\qty{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty[{\vb{I\!I}}_+({\vb{w}}, {\vb{w}})] - {\mathcal{R}}({\Gamma_t}, {\vb{w}})\kappa_+}}_{{\frac{3}{2}k}-4} \\ \lesssim_{Q_*}\, &\abs{{\mathbb{D}_{\beta}}{\vb{w}}}_{{\frac{3}{2}k}-2} + \abs{{\mathbb{D}_{\beta}}\kappa_+}_{{\frac{3}{2}k}-3} + \abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} \\ \lesssim_{Q_*}\, &\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{{\frac{3}{2}k}-4}} 
+\norm{\partial_\beta{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}. \end{split}$$ Similar arguments yield that $$\label{key} \begin{split} &\hspace{-3em}\abs{{\mathbb{D}_{\beta}}\qty{\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\qty[{\vb{I\!I}}_+({\vb{h}}_\pm, {\vb{h}}_\pm)] - {\mathcal{R}}({\Gamma_t}, {\vb{h}}_\pm)\kappa_+}}_{{\frac{3}{2}k}-4} \\ \lesssim_{Q_*}\, &\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} +\norm{\partial_\beta{\vb{j}}_*}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}. \end{split}$$ For the last term of ([\[def frak\[R\]\_1\]](#def frak[R]_1){reference-type="ref" reference="def frak[R]_1"}), one can deduce from ([\[def frak\[r_0\]\]](#def frak[r_0]){reference-type="ref" reference="def frak[r_0]"}) and Lemma [Lemma 10](#Dt comm est lemma){reference-type="ref" reference="Dt comm est lemma"} that $$\label{key} \begin{split} &\hspace{-2em}\abs{{\mathbb{D}_{\beta}}\mathop{}\!\mathbin{\triangle}_{\Gamma_t}\mathfrak{r}_0}_{{\frac{3}{2}k}-4} \\ \lesssim_{Q_*}\, &\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{{\mathbb{D}_{\beta}}\mathfrak{r}_0}_{{\frac{3}{2}k}-2} \\ \lesssim_{Q_*}\, &\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{{\mathbb{D}_{\beta}}(g^+ - g^-)}_{{\frac{3}{2}k}-3} + \norm{{\mathbb{D}_{\beta}}\qty( p_{{\vb{v}}, {\vb{v}}} - p_{{\vb{h}}, {\vb{h}}})}_{{\frac{3}{2}k}-\frac{1}{2}}\\ \lesssim_{Q_*}\, &\abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \norm{{\mathbb{D}_{\beta}}{\vb{v}}}_{{\frac{3}{2}k}-\frac{3}{2}} + \norm{{\mathbb{D}_{\beta}}{\vb{h}}}_{{\frac{3}{2}k}-\frac{3}{2}} \\ \lesssim_{Q_*}\, &\abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{{\frac{3}{2}k}-4}} + \norm{\partial_\beta{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} + 
\norm{\partial_\beta{\vb{j}}_*}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}. \end{split}$$ Thus, the variational estimate of the last term of ([\[formula (I+B)\^(-1)G\]](#formula (I+B)^(-1)G){reference-type="ref" reference="formula (I+B)^(-1)G"}) has been obtained. Next, for the variation of the first term on the right hand side of ([\[formula (I+B)\^(-1)G\]](#formula (I+B)^(-1)G){reference-type="ref" reference="formula (I+B)^(-1)G"}), observe that $$\label{pd tt gt exp} \begin{split} &\hspace{-2em}\pdv{\beta}(\dfrac{{\vb{N}}_+ \circ \Phi_{\Gamma_t}}{{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t})} \vdot \qty[\va{\mathfrak{b}} \circ \Phi_{\Gamma_t}+ \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty({\vb{u}}\circ\Phi_{\Gamma_t}+ \partial_t{\gamma_{\Gamma_t}}{\vb*{\nu}})]) \\ =\, &\pdv{\beta}(\dfrac{{\vb{N}}_+ \circ \Phi_{\Gamma_t}}{{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t})}) \vdot \qty\Big{\va{\mathfrak{b}}\circ\Phi_{\Gamma_t}- \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty[\qty({\vb{u}}\circ\Phi_{\Gamma_t}) + \qty(\partial_t{\gamma_{\Gamma_t}}) {\vb*{\nu}}]} \\ & +\dfrac{{\vb{N}}_+ \circ \Phi_{\Gamma_t}}{{\vb*{\nu}}\vdot ({\vb{N}}_+ \circ \Phi_{\Gamma_t})} \vdot \pdv{\beta}\qty\Big(\va{\mathfrak{b}}\circ\Phi_{\Gamma_t}- \mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty[\qty({\vb{u}}\circ\Phi_{\Gamma_t}) + \qty(\partial_t{\gamma_{\Gamma_t}}) {\vb*{\nu}}]). 
\end{split}$$ Due to the relations that $$\label{key} \begin{split} \abs{\partial_\beta ({\vb{N}}_+ \circ \Phi_{\Gamma_t})}_{{\frac{3}{2}k}-\frac{3}{2}} \lesssim_{\Lambda_*}\abs{\mathop{}\!\mathbin{\mathrm{D}}(\partial_\beta{\gamma_{\Gamma_t}}{\vb*{\nu}})}_{\H{{\frac{3}{2}k}-\frac{3}{2}}} \lesssim_{\Lambda_*}\abs{\partial_\beta{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \end{split}$$ and $$\label{key} \begin{split} &\hspace{-1em}\abs{\pdv{\beta}\qty\Big(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty[({\vb{u}}\circ\Phi_{\Gamma_t})+(\partial_t{\gamma_{\Gamma_t}}){\vb*{\nu}}])}_{\H{{\frac{3}{2}k}-4}} \\ &=\abs{\pdv{\beta}\qty\Big(\mathop{}\!\mathbin{\mathrm{D}}_{{\vb{u}}_*}\qty[\mathop{}\!\mathbin{\mathrm{D}}\Phi_{\Gamma_t}\vdot {\vb{u}}_* + 2(\partial_t{\gamma_{\Gamma_t}}){\vb*{\nu}}])}_{\H{{\frac{3}{2}k}-4}} \\ &\lesssim_{Q_*} \abs{\partial_\beta {\vb{u}}_*}_{\H{{\frac{3}{2}k}-3}} + \abs{\partial^2_{t\beta}{\gamma_{\Gamma_t}}}_{\H{{\frac{3}{2}k}-3}} + \abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-4}} \\ &\lesssim_{Q_*} \abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-4}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{{\frac{3}{2}k}-5}} + \norm{\partial_\beta{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}})} +\norm{\partial_\beta{\vb{j}}_*}_{H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}})}, \end{split}$$ the estimate of ([\[pd tt gt exp\]](#pd tt gt exp){reference-type="ref" reference="pd tt gt exp"}) can be deduced via ([\[est dbt va b\]](#est dbt va b){reference-type="ref" reference="est dbt va b"}). Since ${\vb{h}}$ can be recovered from $({\varkappa_a}, {\vb{j}}_*)$ by solving the div-curl problems, the operator $\mathscr{C}_\alpha$ can also be expressed as ${\mathscr{C}}_\alpha({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*)$. 
It follows from ([\[def opC\]](#def opC){reference-type="ref" reference="def opC"}), [\[est pd beta v+\*\]](#est pd beta v+*){reference-type="eqref" reference="est pd beta v+*"}, ([\[eqn pd beta v\]](#eqn pd beta v){reference-type="ref" reference="eqn pd beta v"}), Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"} and Lemma [Lemma 14](#lem 3.3){reference-type="ref" reference="lem 3.3"} that for $2 \le s' \le {\frac{3}{2}k}-1$, $\mathscr{V} \coloneqq \H{{\frac{3}{2}k}-\frac{5}{2}} \times \H{{\frac{3}{2}k}-4} \times H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}}) \times H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})$, it holds that $$\label{key} \begin{split} &\abs{\var{{\mathscr{C}}_\alpha}({\varkappa_a}, \partial_t{\varkappa_a}, {\vb*{\omega}}_*, {\vb{j}}_*)}_{{\mathscr{L}}\qty[ \mathscr{V}; {\mathscr{L}}\qty(\H{s'}; \H{s'-3})]} \\ &\quad \le Q_*\qty(\abs{\partial_t{\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}}, \norm{{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}, \norm{{\vb{j}}_*}_{H^{{\frac{3}{2}k}-1}({\Omega\setminus{\Gamma_\ast}})}). \end{split}$$ In particular, by letting $s' \coloneqq {\frac{3}{2}k}-1$, one can deduce that $$\label{key} \begin{split} &\abs{\pdv{\beta}({\mathscr{C}}_\alpha {\gamma_{\Gamma_t}}+ {\mathscr{B}}{\mathscr{C}}_\alpha{\varkappa_a}) }_{\H{{\frac{3}{2}k}-4}} \\ &\quad\lesssim_{Q_*}\abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-\frac{5}{2}}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{{\frac{3}{2}k}-4}} + \norm{\partial_\beta{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})} \\ &\qquad\quad + \norm{\partial_\beta{\vb{j}}_*}_{H^{{\frac{3}{2}k}-\frac{5}{2}}({\Omega\setminus{\Gamma_\ast}})}.
\end{split}$$ Furthermore, one may derive from Lemma [Lemma 12](#lem3.1){reference-type="ref" reference="lem3.1"} that $$\label{est grad ka G pdt ka} \begin{split} &\abs{\pdv{\beta}\qty\Big{\grad^\top(\kappa_+\circ\Phi_{\Gamma_t}) \vdot \qty[{\mathbf{G}}\partial_t{\varkappa_a}]}}_{\H{{\frac{3}{2}k}-4}} \\ &\quad\lesssim_{Q_*}\abs{\partial_\beta {\varkappa_a}}_{\H{{\frac{3}{2}k}-3}} + \abs{\partial^2_{t\beta}{\varkappa_a}}_{\H{{\frac{3}{2}k}-5}} + \norm{\partial_\beta{\vb*{\omega}}_*}_{H^{{\frac{3}{2}k}-\frac{7}{2}}({\Omega\setminus{\Gamma_\ast}})}. \end{split}$$ In conclusion, ([\[est var opG\]](#est var opG){reference-type="ref" reference="est var opG"}) follows from ([\[relation xi eta\]](#relation xi eta){reference-type="ref" reference="relation xi eta"})-([\[est var opB\]](#est var opB){reference-type="ref" reference="est var opB"}), ([\[formula (I+B)\^(-1)G\]](#formula (I+B)^(-1)G){reference-type="ref" reference="formula (I+B)^(-1)G"}) and [\[est var frak R_1\]](#est var frak R_1){reference-type="eqref" reference="est var frak R_1"}-([\[est grad ka G pdt ka\]](#est grad ka G pdt ka){reference-type="ref" reference="est grad ka G pdt ka"}). ◻ [^1]: This is part of the Ph.D. thesis of the first author written under the guidance of the second author at the Institute of Mathematical Sciences, the Chinese University of Hong Kong. This research is supported in part by Zheng Ge Ru Foundation, Hong Kong RGC Earmarked Research Grants CUHK-14301421, CUHK-14300819, CUHK-14302819, CUHK-14300917, and the key project of NSFC (Grant No. 12131010).
--- abstract: | This paper studies a distributed online convex optimization problem, where agents in an unbalanced network cooperatively minimize the sum of their time-varying local cost functions subject to a coupled inequality constraint. To solve this problem, we propose a distributed dual subgradient tracking algorithm, called DUST, which attempts to optimize a dual objective by means of tracking the primal constraint violations and integrating dual subgradient and push-sum techniques. Different from most existing works, we allow the underlying network to be unbalanced with a column stochastic mixing matrix. We show that DUST achieves sublinear dynamic regret and constraint violations, provided that the accumulated variation of the optimal sequence grows sublinearly. If the standard Slater's condition is additionally imposed, DUST acquires a smaller constraint violation bound than the alternative existing methods applicable to unbalanced networks. Simulations on a plug-in electric vehicle charging problem demonstrate the superior convergence of DUST. author: - "Dandan Wang, Daokuan Zhu, Kin Cheong Sou and Jie Lu [^1]" title: " **Distributed Online Optimization with Coupled Inequality Constraints over Unbalanced Directed Networks** " --- # Introduction Distributed online convex optimization (DOCO) has received considerable interest in recent years, motivated by its broad applications in dynamic networks with uncertainty, such as resource allocation for wireless network [@c1], target tracking [@c2], multi-robot surveillance [@c3], and medical diagnosis[@c4]. In these scenarios, each agent in a network holds a time-varying local cost function and only has access to its real-time local cost function after making a decision based on historical information. Compared with centralized online optimization, DOCO enjoys prominent advantages in privacy protection, alleviation of computation and communication burden, and robustness to channel failures [@c5]. 
A great number of distributed algorithms have been developed for solving DOCO problems [@c2; @c3; @c4; @c6; @c7; @c8; @c9; @c10; @c11; @c12; @c13; @c14; @c15]. Nevertheless, most of them are limited to unconstrained problems or simple set constraints, and do not allow for coupled inequality constraints that arise in many engineering applications. Coupled inequality constraints involve information from all agents, which poses a significant challenge to handling them in a distributed manner. To date, only a few distributed algorithms have been developed to address DOCO problems with coupled inequality constraints, including various variants of the saddle-point algorithm [@c13; @c10; @c11; @c12], a primal-dual dynamic mirror descent algorithm [@c14] that has been extended to bandit settings in [@c15], and a bandit distributed mirror descent push-sum algorithm [@c9]. However, among these works, [@c15; @c12; @c13; @c14] can only be applied to balanced networks with doubly stochastic mixing matrices. Although [@c9; @c10; @c11] consider unbalanced networks, their regret and constraint violation bounds are much worse than those in [@c15; @c12; @c13; @c14]. In this paper, we focus on the DOCO problem with a coupled inequality constraint over an unbalanced network with a column stochastic mixing matrix, and propose a distributed dual subgradient tracking (DUST) algorithm to solve it. To develop DUST, we first attempt to employ the subgradient method to address the dual problem of the constrained DOCO at each time instance. Then, to enable distributed implementation, we introduce auxiliary variables to track the primal constraint violations, which can be viewed as estimated dual subgradients. Finally, we harness the push-sum technique to eliminate the imbalance of networks, leading to the DUST algorithm. Our main contributions are elaborated as follows: 1.
DUST is able to address DOCO with *coupled inequality* constraints over *unbalanced* networks with column stochastic mixing matrices, while the alternative methods in [@c15; @c12; @c13; @c14] require balanced interaction graphs. 2. We adopt *dynamic regret* as the performance measure of DUST, which is a more stringent metric than the static regret used in [@c13; @c11; @c12; @c9]. 3. We show that DUST achieves $\mathcal{O}(\sqrt{T}+V_T)$ dynamic regret and $\mathcal{O}(T^{\frac{3}{4}})$ constraint violation bounds, where $T$ is a finite time horizon and $V_T$ is the accumulated variation of the optimal sequence. Provided that $V_T$ grows sublinearly, DUST is able to achieve sublinear dynamic regret and constraint violations. Moreover, the constraint violation bound is improved to $\mathcal{O}(\sqrt{T})$ if we additionally assume Slater's condition. To the best of our knowledge, there are no existing distributed algorithms achieving comparable dynamic regret and constraint violation bounds for DOCO problems with coupled constraints over unbalanced networks. The remainder of the paper is organized as follows. Section [2](#sec:problemformulation){reference-type="ref" reference="sec:problemformulation"} formulates a DOCO with a coupled inequality constraint over unbalanced graphs with column stochastic mixing matrices. Section [3](#sec: DUSTalgorithm){reference-type="ref" reference="sec: DUSTalgorithm"} develops the proposed DUST algorithm, and Section [4](#sec: regretanalysis){reference-type="ref" reference="sec: regretanalysis"} provides bounds of dynamic regret and constraint violations. Section [5](#sec: numericalexperiments){reference-type="ref" reference="sec: numericalexperiments"} presents the numerical experiments, and Section [6](#sec:conclusion){reference-type="ref" reference="sec:conclusion"} concludes the paper. Throughout the paper, we denote by $\mathbb{R}^n$ and $\mathbb{R}_+^n$ the sets of $n$-dimensional real vectors and nonnegative vectors, respectively.
For any set $X \subseteq {\mathbb{R}^n}$, $\operatorname{relint}(X)$ is its relative interior. $A\otimes B$ represents the Kronecker product of any two matrices $A$ and $B$ with arbitrary size. Let $[a]_+$ represent the component-wise projection of a vector $a \in \mathbb{R}^n$ onto $\mathbb{R}_+^n$. Denote $I_d$ and $\mathbf{1}_p$ ($\mathbf{0}_p$) as the $d$-dimensional identity matrix and the $p$-dimensional all-one (all-zero) column vector, respectively. Let $\|\cdot\|$ be the Euclidean norm. $\langle x, y\rangle$ represents the standard inner product of two vectors $x$ and $y$. The notation $w_{ij,t}$ denotes the $(i,j)$-th component of matrix $W_t$ at time $t$. Let $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$ be the ceiling and floor functions, respectively. For a convex function $f:\mathbb{R}^n\rightarrow\mathbb{R}$, we denote $\partial f(x)$ as a subgradient of $f$ at $x$, i.e., $f(y) \ge f(x)+\langle\partial f(x), y-x\rangle$, $\forall y \in \mathbb{R}^n$. # Problem Formulation {#sec:problemformulation} Consider the network at each time $t\in \{1, \ldots, T\}$ modeled as a directed graph $\mathcal{G}_t=(\mathcal{V}, \mathcal{E}_t)$, where $\mathcal{V}=\{1,..., N\}$ is the set of nodes and $\mathcal{E}_t\subseteq\{(i,j):i,j\in\mathcal{V},\;i\neq j\}$ is the set of directed edges. An edge $(j,i) \in \mathcal{E}_t$ means that node $i$ can receive a message from node $j$. Let $\mathcal{N}_{i,t}^{\text{in}}=\{j | (j,i) \in \mathcal{E}_t\}\cup \{i\}$ and $\mathcal{N}_{i,t}^{\text{out}}=\{j|(i,j) \in \mathcal{E}_t\}\cup\{i\}$ be the sets of in-neighbors and out-neighbors of node $i$, respectively. The mixing matrix $W_t$ associated with $\mathcal{G}_t$ is defined as $w_{ij,t} >0$ if $(j,i) \in \mathcal{E}_t$ or $i=j$, and $w_{ij,t}=0$, otherwise. We assume each node $j \in \mathcal{V}$ only knows the weights related to its out-neighbors, i.e., $w_{ij,t}$, $\forall i \in \mathcal{N}_{j,t}^{\text{out}}$. We impose the following assumption on the interaction graph.
**Assumption 1**. *$\{\mathcal{G}_t\}_{t=1}^{T}$ satisfies:* 1. *There exists a constant $a \in (0,1)$ such that for each $t \ge 1$, $w_{ij,t}>a$ if $w_{ij,t} >0$. [\[graphnondegeneracy\]]{#graphnondegeneracy label="graphnondegeneracy"}* 2. *For each $t \ge 1$, the mixing matrix $W_t$ is column stochastic, i.e., $\sum_{i=1}^{N} w_{ij,t}=1$ for all $j \in \mathcal{V}$.[\[columnstochastic\]]{#columnstochastic label="columnstochastic"}* 3. *There exists an integer $B >0$ such that for any $k \ge 0$, the graph $(\mathcal{V}, \bigcup_{t=kB+1}^{(k+1)B}\mathcal{E}_t)$ is connected. [\[Bconnect\]]{#Bconnect label="Bconnect"}* An example of a mixing matrix that satisfies Assumption [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} is $w_{ij,t}=1/d_{j,t}$ if $i \in \mathcal{N}_{j,t}^{\text{out}}$, and $w_{ij,t}=0$ otherwise, where $d_{j,t}=|\mathcal{N}_{j,t}^{\text{out}}|$ is the out-degree of node $j$ at each time $t$. In this case, each node only needs to know its out-degree at each time $t$. Assumption [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} ensures that there exists a path from any node to every other node within each interval of length $B$. Assumption [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} is less restrictive than its counterparts in [@c12; @c13; @c14; @c15], which require $W_t$ to be doubly stochastic. We consider the distributed online problem with a globally coupled inequality constraint over the directed graph $\mathcal{G}_t$, where each node $i \in \mathcal{V}$ privately holds a time-varying local cost function $f_{i,t}:\mathbb{R}^{d_i}\rightarrow \mathbb{R}$, a constraint function $g_{i}:\mathbb{R}^{d_i}\rightarrow \mathbb{R}^p$, and a constraint set $X_i\subseteq\mathbb{R}^{d_i}$.
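To make the out-degree weighting rule concrete, the following sketch (a hypothetical example; the graph, the function name, and the data are ours, not from the paper) builds such a mixing matrix for a small digraph and checks its column stochasticity.

```python
import numpy as np

def out_degree_mixing_matrix(edges, N):
    # w_{ij} = 1/d_j if i is an out-neighbor of j (including j itself), else 0,
    # where d_j = |N_j^out|; every column then sums to 1 by construction.
    out_neighbors = {j: {j} for j in range(N)}
    for (j, i) in edges:  # edge (j, i): node i receives messages from node j
        out_neighbors[j].add(i)
    W = np.zeros((N, N))
    for j in range(N):
        for i in out_neighbors[j]:
            W[i, j] = 1.0 / len(out_neighbors[j])
    return W

# Directed ring 0 -> 1 -> 2 -> 3 -> 0
W = out_degree_mixing_matrix([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(np.allclose(W.sum(axis=0), 1.0))  # True: W is column stochastic
```

For this ring the matrix happens to be doubly stochastic; on graphs with unequal out-degrees the row sums differ from one, which is exactly the unbalanced setting treated here.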
Let $\mathbf{x}=[(x_1)^T, \ldots, (x_N)^T]^T\in \mathbb{R}^{\sum_{i=1}^{N}d_i}$, and let $X=X_1\times \cdots \times X_N$ be the Cartesian product of all the $X_i$'s. At each time $t$, all nodes cooperate to minimize the sum of local cost functions while satisfying a coupled inequality constraint and set constraints, which can be written as $$\label{eq:primalprob} \begin{array}{cl} \underset{\substack{x_i, \forall i \in \mathcal{V} }}{\operatorname{minimize}} ~\!\!\!& f_t(\mathbf{x}):=\sum_{i=1}^{N} f_{i,t}(x_{i})\\ %\vspace{1mm} {\operatorname{ subject~to }}\!\!\! & \sum_{i=1}^{N}g_{i}(x_{i}) \leq \mathbf{0}_p,\\\vspace{2mm} {} & {x_i \in X_i,\;\forall i \in \mathcal{V},} \end{array}$$ where the feasible set $\mathcal{X}:=\{\mathbf{x}\in X | \sum_{i=1}^{N}g_{i}(x_{i}) \leq \mathbf{0}_p \}$ is assumed to be nonempty. Note that the local cost function $f_{i,t}$ is not revealed to node $i$ until it makes its decision $x_{i,t}$ at time $t$. Since node $i$ cannot access $f_{i,t}$ in advance, it can hardly obtain the exact optimal solution of problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"}. Thus, it is desirable to develop a distributed online algorithm that generates local decisions $x_{i,t}$, $i \in \mathcal{V}$, to track the optimal solution. We make the following assumption on problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"}. **Assumption 2**. *For any $t \ge 1$, problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} satisfies* 1. *For each $i\in\mathcal{V}$, $X_i$ is a compact convex set with diameter $R:= \sup_{x_i,\tilde{x}_i \in X_i} \|x_i-\tilde{x}_i\|$.* 2.
*For each $i\in\mathcal{V}$, $f_{i,t}$ and $g_i$ are convex on $X_{i}$.[\[couplinggisumficonvex\]]{#couplinggisumficonvex label="couplinggisumficonvex"}* It is directly obtained from the compactness of the $X_i$'s and the convexity of $f_{i,t}$ and $g_i$ in Assumption [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"} that there exist constants $F > 0$, $G>0$ such that $$\begin{aligned} \|g_i(x_i)\|&\le F,\;\forall x_i \in X_i, \forall i \in \mathcal{V}, \label{eq:bounofgivalue}\\ \|\partial f_{i,t}(x_{i})\|&\le G,~\|\partial g_i(x_i)\|\le G, \;\forall x_i \in X_i, \forall i \in \mathcal{V}. \label{eq:boundsibradientoffigi} \end{aligned}$$ We adopt *dynamic regret* to measure the algorithm performance over a finite time horizon $T$ [@c2], which is defined as $$\begin{aligned} \label{definitionofdynamicregret} \operatorname{Reg}(T):=\sum_{t=1}^T \sum_{i=1}^N f_{i, t}\left(x_{i, t}\right)-\sum_{t=1}^T \sum_{i=1}^N f_{i, t}\left(x_{i, t}^*\right), \end{aligned}$$ where $x_{i,t}^*$ is the $i$-th component of the optimal solution $\mathbf{x}_{t}^*=[(x_{1,t}^*)^T, \ldots, (x_{N,t}^*)^T]^T:=\underset{\mathbf{x} \in \mathcal{X}}{\arg \min } \sum_{i=1}^N f_{i, t}\left(x_i\right)$ to problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"}.
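As a concrete reading of [\[definitionofdynamicregret\]](#definitionofdynamicregret){reference-type="eqref" reference="definitionofdynamicregret"}, a small helper (with hypothetical interfaces: costs and decisions indexed as `[t][i]`, not taken from the paper) that evaluates the dynamic regret of a trajectory might look like:

```python
def dynamic_regret(costs, xs, xs_star):
    # Reg(T) = sum_{t,i} f_{i,t}(x_{i,t}) - sum_{t,i} f_{i,t}(x*_{i,t}),
    # where costs[t][i] is the function f_{i,t}, xs holds the decisions,
    # and xs_star holds the per-time optimal decisions.
    return sum(costs[t][i](xs[t][i]) - costs[t][i](xs_star[t][i])
               for t in range(len(costs)) for i in range(len(costs[t])))

# Two nodes, one round, f_{i,1}(x) = x^2, per-time optima at 0:
costs = [[lambda x: x * x, lambda x: x * x]]
print(dynamic_regret(costs, [[1.0, 2.0]], [[0.0, 0.0]]))  # 5.0
```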
In contrast to the conventional metric *static regret* that is defined as the difference between the accumulated cost over time and the cost incurred by the best fixed decision when all functions are known in hindsight (i.e., $\sum_{t=1}^T \sum_{i=1}^N f_{i, t}\left(x_{i}^*\right)$, where $\mathbf{x}^*=[(x_{1}^*)^T, \ldots, (x_{N}^*)^T]^T:=\underset{\mathbf{x} \in \mathcal{X}}{\arg \min } \sum_{t=1}^T\sum_{i=1}^N f_{i, t}\left(x_i\right)$), the dynamic regret [\[definitionofdynamicregret\]](#definitionofdynamicregret){reference-type="eqref" reference="definitionofdynamicregret"} allows the best decisions to vary with time and is a more stringent and suitable benchmark to capture the algorithm performance on a time-varying optimization problem [@c2; @c3; @c14]. In addition, we define the cumulative constraint violation to measure whether the coupled inequality constraint is satisfied in the long run, as follows: $$\begin{aligned} \operatorname{Reg}^c(T):=\left\|\left[\sum_{t=1}^T \sum_{i=1}^N g_i\left(x_{i, t}\right)\right]_{+}\right\|. \label{eq:definitionofConstVio} \end{aligned}$$ Our goal is to design a distributed algorithm for solving the online problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} over $\mathcal{G}_t$ with superior dynamic regret and cumulative constraint violation bounds. # Algorithm Development {#sec: DUSTalgorithm} In this section, we propose a distributed dual subgradient tracking method to solve the distributed online problem with a coupled inequality constraint described in Section [2](#sec:problemformulation){reference-type="ref" reference="sec:problemformulation"}.
First of all, let $L_{t}: \mathbb{R}^{\sum_{i=1}^{N}d_i} \times \mathbb{R}_{+}^{p} \rightarrow \mathbb{R}$ be the Lagrangian function associated with problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} at time $t$: $$\begin{aligned} L_{t}(\mathbf{x},\mu)=f_t(\mathbf{x})+\mu^T\sum_{i=1}^{N}g_{i}(x_{i}), \displaybreak[0] \label{eq:lagrangianfunt} \end{aligned}$$ where $\mu \ge \mathbf{0}_p$ is the Lagrange multiplier. We denote the dual function at time $t$ as $D_{t}(\mu):=\min_{\mathbf{x} \in X} \{ L_{t}(\mathbf{x},\mu)\}$. The dual problem of problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} at time $t$ is $\max_{\mu \ge \mathbf{0}_p} D_{t}(\mu)$. If we directly apply the dual subgradient method [@c25] to the online problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"}, we obtain the following updates: for arbitrarily given $\mu_1 \ge \mathbf{0}_p$ and each $t \ge 1$, $$\begin{aligned} \mathbf{x}_{t+1}&=\operatorname{\arg min}\limits_{\mathbf{x} \in X}\{L_{t+1}(\mathbf{x},\mu_t)\}, \displaybreak[0]\label{eq:xt1dualascentupdate}\\ \mu_{t+1}&=[\mu_t+\sum_{i=1}^{N}g_{i}(x_{i,t+1})]_+, \displaybreak[0]\label{eq:mut1dualascentupdate} \end{aligned}$$ where $\mathbf{x}_{t+1}=[(x_{1,t+1})^T, \ldots, (x_{N,t+1})^T]^T \in \mathbb{R}^{\sum_{i=1}^{N}d_i}$ and $\mu_{t+1} \in \mathbb{R}_+^p$ can be viewed as an estimate of the optimal solution to problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} at time $t+1$ and an estimate of the optimal dual solution to the dual problem of problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} at time $t+1$, respectively.
The updates [\[eq:xt1dualascentupdate\]](#eq:xt1dualascentupdate){reference-type="eqref" reference="eq:xt1dualascentupdate"}--[\[eq:mut1dualascentupdate\]](#eq:mut1dualascentupdate){reference-type="eqref" reference="eq:mut1dualascentupdate"} actually optimize the dual problem of problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} at time $t+1$, i.e., $\max_{\mu \ge \mathbf{0}_p} D_{t+1}(\mu)$, by applying the subgradient method to compute the estimate $\mu_{t+1}$ of the optimal dual solution based on the historical information $\mu_t$, where the subgradient of the dual function $D_{t+1}(\mu)$ at $\mu_t$ is $\sum_{i=1}^{N}g_{i}(x_{i,t+1})$ according to the update [\[eq:xt1dualascentupdate\]](#eq:xt1dualascentupdate){reference-type="eqref" reference="eq:xt1dualascentupdate"} and Danskin's theorem [@c25]. However, [\[eq:xt1dualascentupdate\]](#eq:xt1dualascentupdate){reference-type="eqref" reference="eq:xt1dualascentupdate"} and [\[eq:mut1dualascentupdate\]](#eq:mut1dualascentupdate){reference-type="eqref" reference="eq:mut1dualascentupdate"} cause two issues. First, we have no prior knowledge of $f_{t+1}$ when making the decision $\mathbf{x}_{t+1}$. Second, the above updates require the global quantities $\mu_t$ and $\sum_{i=1}^{N}g_{i}(x_{i,t+1})$ at each time $t$, so they cannot be executed in a distributed scenario.
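Before turning to the distributed version, the centralized updates [\[eq:xt1dualascentupdate\]](#eq:xt1dualascentupdate){reference-type="eqref" reference="eq:xt1dualascentupdate"}--[\[eq:mut1dualascentupdate\]](#eq:mut1dualascentupdate){reference-type="eqref" reference="eq:mut1dualascentupdate"} can be illustrated on a *static* toy instance with a closed-form inner minimizer. All data here are our own illustrative choices, and a constant stepsize is added for convergence of the illustration (the dual update in the text uses a unit step): take $f_i(x_i)=x_i^2/2$ and $g_i(x_i)=c_i-x_i$, so that $\arg\min_{\mathbf{x}}L(\mathbf{x},\mu)$ gives $x_i=\mu$.

```python
import numpy as np

# Toy static instance: minimize sum_i x_i^2 / 2 subject to sum_i (c_i - x_i) <= 0.
# The Lagrangian minimizer is x_i(mu) = mu, so the dual subgradient at mu_t is
# sum_i g_i(x_{i,t+1}) = sum_i (c_i - mu_t).
c = np.array([1.0, 2.0, 3.0, 6.0])
N = len(c)
mu = 0.0
step = 1.0 / (2 * N)  # illustrative stepsize (not in the paper's unit-step update)
for _ in range(100):
    x = np.full(N, mu)                                # x_{t+1} = argmin_x L(x, mu_t)
    mu = max(mu + step * float(np.sum(c - x)), 0.0)   # mu_{t+1} = [mu_t + sum_i g_i(x_{i,t+1})]_+
print(round(mu, 6))  # converges to mean(c) = 3.0
```

The fixed point of the dual recursion is $\mu^*=\frac{1}{N}\sum_i c_i$, and the primal iterates $x_i=\mu^*$ indeed solve the toy problem.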
To overcome the two issues, we let $g(\mathbf{x})=[(g_1(x_1))^T, \ldots, (g_N(x_N))^T]^T \in \mathbb{R}^{Np}$, and construct the following algorithm: Given $\mathbf{x}_1 \in X$, $\mathbf{y}_1=g(\mathbf{x}_1)$, $\bm{\mu}_1=\mathbf{0}_{Np}$, for any $t \ge 1$, $$\begin{aligned} \mathbf{x}_{t+1}&=\operatorname{\arg min}\limits_{\mathbf{x} \in X } \left\{ \alpha_t \partial f_{t}(\mathbf{x}_{t})^T\!(\mathbf{x}\!-\!\mathbf{x}_{t})\right.\displaybreak[0]\notag\\ &~~~\left.+\langle (W_t\otimes I_p)\bm{\mu}_{t}, ~g(\mathbf{x})\rangle\!+\eta_t\|\mathbf{x}\!-\!\mathbf{x}_{t}\|^2\right\}, \displaybreak[0] \label{eq:updateofxt11} \\ \mathbf{y}_{t+1}&=(W_t\otimes I_p)\mathbf{y}_{t}+g(\mathbf{x}_{t+1})-g(\mathbf{x}_{t}), \displaybreak[0] \label{eq:updateofyk1}\\ \bm{\mu}_{t+1}&=[(W_t\otimes I_p)\bm{\mu}_{t}+\mathbf{y}_{t+1}]_+, \label{eq:updateofuk1}\end{aligned}$$ where $\mathbf{y}_t=[(y_{1,t})^T, \ldots, (y_{N,t})^T]^T\in \mathbb{R}^{Np}$, $\bm{\mu}_t=[(\mu_{1,t})^T, \ldots, (\mu_{N,t})^T]^T\in \mathbb{R}^{Np}$, and $W_t$ is the mixing matrix at time $t$ described in Section [2](#sec:problemformulation){reference-type="ref" reference="sec:problemformulation"}. Here, the parameter $\alpha_t$ is used to balance the objective optimization and the penalty on constraint violations, and $\eta_t$ is the stepsize. The above updates [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"}--[\[eq:updateofuk1\]](#eq:updateofuk1){reference-type="eqref" reference="eq:updateofuk1"} are capable of handling the issues caused by [\[eq:xt1dualascentupdate\]](#eq:xt1dualascentupdate){reference-type="eqref" reference="eq:xt1dualascentupdate"}--[\[eq:mut1dualascentupdate\]](#eq:mut1dualascentupdate){reference-type="eqref" reference="eq:mut1dualascentupdate"}, and can thus potentially solve the online problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"}.
Specifically, we estimate the unknown $f_{t+1}$ by the first-order approximation of $f_{t}$ at $\mathbf{x}_{t}$, i.e., $f_{t}(\mathbf{x}_{t})+\partial f_{t}(\mathbf{x}_{t})^T\!(\mathbf{x}\!-\!\mathbf{x}_{t})$. The proximal term $\eta_t\|\mathbf{x}-\mathbf{x}_{t}\|^2$ in [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"} guarantees that $\mathbf{x}_{t+1}$ exists and is unique. To understand the distributed implementation of [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"}--[\[eq:updateofuk1\]](#eq:updateofuk1){reference-type="eqref" reference="eq:updateofuk1"}, let $x_{i,t}$, $y_{i,t}$, and $\mu_{i,t}$ be the $i$-th blocks of $\mathbf{x}_t$, $\mathbf{y}_t$, and $\bm{\mu}_t$, respectively, and let each node $i \in \mathcal{V}$ maintain $x_{i,t}$, $y_{i,t}$, and $\mu_{i,t}$ at time $t$. The term $\langle (W_t\otimes I_p)\bm{\mu}_{t}, ~g(\mathbf{x})\rangle$ in [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"} not only enables distributed computation of $\mathbf{x}_{t+1}$, in which each $x_{i,t+1}$ is updated using only local information and information received from in-neighbors, but also approaches $\mu_t^T\sum_{i=1}^{N}g_{i}(x_{i})$ in [\[eq:xt1dualascentupdate\]](#eq:xt1dualascentupdate){reference-type="eqref" reference="eq:xt1dualascentupdate"} if $W_t$ is row stochastic and all the $\mu_{i,t}$'s reach the same value $\mu_t$. Owing to the column stochasticity of $W_t$ and the initial condition, [\[eq:updateofyk1\]](#eq:updateofyk1){reference-type="eqref" reference="eq:updateofyk1"} yields $\sum_{i=1}^{N}y_{i,t}=\sum_{i=1}^{N}g_i(x_{i,t})$, $\forall t \ge 1$, which implies that the local variable $y_{i,t}$ tracks the primal constraint violation $\sum_{i=1}^N g_i(x_{i,t})$ at each time $t$.
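The tracking identity just stated can be verified numerically. The sketch below (with randomly generated stand-ins for the constraint values, a hypothetical setup) iterates the $\mathbf{y}$-update with a column-stochastic $W$ and checks that $\sum_{i=1}^N y_{i,t}=\sum_{i=1}^N g_i(x_{i,t})$ at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, T = 5, 2, 20
W = rng.random((N, N))
W /= W.sum(axis=0, keepdims=True)        # make W column stochastic: 1^T W = 1^T

g = rng.standard_normal((N, p))          # stand-in for the stacked g_i(x_{i,1})
y = g.copy()                             # initialization y_1 = g(x_1)
for _ in range(T):
    g_new = rng.standard_normal((N, p))  # stand-in for g_i(x_{i,t+1})
    y = W @ y + g_new - g                # y_{t+1} = (W ⊗ I_p) y_t + g(x_{t+1}) - g(x_t)
    g = g_new
    assert np.allclose(y.sum(axis=0), g.sum(axis=0))
print("tracking invariant holds")
```

Since $\mathbf{1}^T W = \mathbf{1}^T$, summing the update over the nodes leaves $\sum_i y_{i,t}$ equal to $\sum_i g_i(x_{i,t})$ at all times, which is the conservation property the proof exploits.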
Thus, at time $t+1$, $y_{i,t+1}$ tracks the primal constraint violation $\sum_{i=1}^N g_i(x_{i,t+1})$, which can be regarded as the estimated subgradient of the dual function $D_{t+1}(\mu)$ at $\mu_t$ in [\[eq:mut1dualascentupdate\]](#eq:mut1dualascentupdate){reference-type="eqref" reference="eq:mut1dualascentupdate"}. The variable $\mu_{i,t+1}$ is node $i$'s estimate of the optimal dual solution to the dual problem of problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} at time $t+1$, playing the same role as $\mu_{t+1}$ in [\[eq:mut1dualascentupdate\]](#eq:mut1dualascentupdate){reference-type="eqref" reference="eq:mut1dualascentupdate"}. Accordingly, each node $i \in \mathcal{V}$ computes $\mu_{i,t+1}$ from the weighted $\mu_{j,t}$'s received from all of its in-neighbors $j$, which drives the $\mu_{i,t}$'s, $\forall i \in \mathcal{V}$, toward consensus, and from the estimated dual subgradient of $D_{t+1}(\mu)$ at $\mu_t$, i.e., $\sum_{i=1}^N g_i(x_{i,t+1})$, which is tracked by the local variable $y_{i,t+1}$, in analogy with [\[eq:mut1dualascentupdate\]](#eq:mut1dualascentupdate){reference-type="eqref" reference="eq:mut1dualascentupdate"}. Consequently, [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"}--[\[eq:updateofuk1\]](#eq:updateofuk1){reference-type="eqref" reference="eq:updateofuk1"} lead to a distributed dual subgradient tracking algorithm (DUST). However, [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"}--[\[eq:updateofuk1\]](#eq:updateofuk1){reference-type="eqref" reference="eq:updateofuk1"} do not work over unbalanced networks, since with a merely column stochastic $W_t$, the $\mu_{i,t}$'s cannot reach an identical value as they should.
To cope with unbalanced graphs, we integrate the push-sum technique into [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"}--[\[eq:updateofuk1\]](#eq:updateofuk1){reference-type="eqref" reference="eq:updateofuk1"} to eliminate the imbalance of interaction networks by dynamically constructing row-stochastic matrices. We still refer to the resulting algorithm as DUST, whose distributed implementation is described below. Let each node $i \in \mathcal{V}$ maintain a variable $c_{i, t}\in \mathbb{R}$ in addition to $x_{i,t}$, $y_{i,t}$, and $\mu_{i,t}$. The DUST algorithm is described as follows: Given $x_{i,1} \in X_i$, $y_{i,1}=g_i(x_{i,1})$, $c_{i,1}=1$, $\mu_{i, 1}=\mathbf{0}_p$, $\forall i \in \mathcal{V}$, for any $t \ge 1$, each node $i \in \mathcal{V}$ updates $$\begin{aligned} c_{i, t+1}&=\sum_{j \in \mathcal{N}_{i,t}^{\text{in}}} w_{i j, t} c_{j, t}, ~\label{eq:updateofcouplingcit}\displaybreak[0]\\ \lambda_{i,t+1}&=\frac{\sum_{j \in \mathcal{N}_{i,t}^{\text{in}}} w_{i j, t} \mu_{j, t}}{c_{i,t+1}}, \label{eq:updateofcouplinglambdait1}\displaybreak[0]\\ x_{i,t+1}&=\operatorname{\arg min}\limits_{x_{i} \in X_i } \left\{ \alpha_t\partial f_{i,t}(x_{i,t})^T\!(x_i\!-\!x_{i,t})\right.\notag\displaybreak[0]\\ &~~\left.+\langle \lambda_{i,t+1}, ~g_i(x_i)\rangle\!+\eta_t\|x_i\!-\!x_{i,t}\|^2\right\}, \label{eq:updateofcouplingxt1} \displaybreak[0]\\ y_{i,t+1}&=\sum_{j \in \mathcal{N}_{i,t}^{\text{in}}} w_{i j, t} y_{j, t}+g_i(x_{i,t+1})-g_i(x_{i,t}),\label{eq:updateofcouplingyk}\displaybreak[0]\\ \mu_{i,t+1}&=\big[ \sum_{j \in \mathcal{N}_{i,t}^{\text{in}}} w_{i j, t} \mu_{j, t}+y_{i,t+1}\big]_+, \label{eq:updateofcouplinguk}\displaybreak[0] \end{aligned}$$ where the initialization $\mu_{i, 1}=\mathbf{0}_p$ is simply chosen to satisfy $\mu_{i, 1} \ge\mathbf{0}_p$, and $y_{i,1}=g_i(x_{i,1})$, $\forall i \in \mathcal{V}$, ensures that $y_{i,t+1}$ tracks the estimated dual subgradient at time $t+1$ in
[\[eq:updateofcouplingyk\]](#eq:updateofcouplingyk){reference-type="eqref" reference="eq:updateofcouplingyk"}. The updates [\[eq:updateofcouplingcit\]](#eq:updateofcouplingcit){reference-type="eqref" reference="eq:updateofcouplingcit"}, [\[eq:updateofcouplinglambdait1\]](#eq:updateofcouplinglambdait1){reference-type="eqref" reference="eq:updateofcouplinglambdait1"}, [\[eq:updateofcouplingyk\]](#eq:updateofcouplingyk){reference-type="eqref" reference="eq:updateofcouplingyk"}, and [\[eq:updateofcouplinguk\]](#eq:updateofcouplinguk){reference-type="eqref" reference="eq:updateofcouplinguk"} require node $i$ to collect $w_{i j, t} c_{j, t}$, $w_{i j, t} \mu_{j, t}$, and $w_{i j, t}y_{j, t}$ from every in-neighbor $j \in \mathcal{N}_{i,t}^{\text{in}}$, and [\[eq:updateofcouplingxt1\]](#eq:updateofcouplingxt1){reference-type="eqref" reference="eq:updateofcouplingxt1"}--[\[eq:updateofcouplinguk\]](#eq:updateofcouplinguk){reference-type="eqref" reference="eq:updateofcouplinguk"} are obtained by rewriting [\[eq:updateofxt11\]](#eq:updateofxt11){reference-type="eqref" reference="eq:updateofxt11"}--[\[eq:updateofuk1\]](#eq:updateofuk1){reference-type="eqref" reference="eq:updateofuk1"}. Obviously, the above updates only need communication between neighboring nodes. Algorithm [\[alg:TVAlgorithm\]](#alg:TVAlgorithm){reference-type="ref" reference="alg:TVAlgorithm"} details all these actions taken by the nodes. Before executing Algorithm [\[alg:TVAlgorithm\]](#alg:TVAlgorithm){reference-type="ref" reference="alg:TVAlgorithm"}, all nodes need to determine the values of the parameter $\alpha_t$ and the stepsize $\eta_t$. They can be set as $\alpha_t=\sqrt{t}$ and $\eta_t=t$ according to the theoretical results in Section [4](#sec: regretanalysis){reference-type="ref" reference="sec: regretanalysis"}.
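As a minimal end-to-end illustration, the following sketch runs the updates [\[eq:updateofcouplingcit\]](#eq:updateofcouplingcit){reference-type="eqref" reference="eq:updateofcouplingcit"}--[\[eq:updateofcouplinguk\]](#eq:updateofcouplinguk){reference-type="eqref" reference="eq:updateofcouplinguk"} on a *static* toy instance (the graph, cost data, and box sets are hypothetical choices of ours): $f_{i,t}(x)=(x-b_i)^2/2$, $g_i(x)=c_i-x$, and $X_i=[0,10]$, for which the $x$-update [\[eq:updateofcouplingxt1\]](#eq:updateofcouplingxt1){reference-type="eqref" reference="eq:updateofcouplingxt1"} reduces to projecting $x_{i,t}-(\alpha_t\partial f_{i,t}(x_{i,t})-\lambda_{i,t+1})/(2\eta_t)$ onto $[0,10]$.

```python
import numpy as np

# Fixed unbalanced digraph on 4 nodes (edges 0->1, 0->2, 1->2, 2->0, 2->3, 3->0)
# with out-degree weights: W is column stochastic but NOT row stochastic.
W = np.array([[1/3, 0.0, 1/3, 1/2],
              [1/3, 1/2, 0.0, 0.0],
              [1/3, 1/2, 1/3, 0.0],
              [0.0, 0.0, 1/3, 1/2]])
b = np.array([0.0, 1.0, 2.0, 3.0])   # f_{i,t}(x) = (x - b_i)^2 / 2 (static here)
c = np.array([1.0, 2.0, 3.0, 6.0])   # g_i(x) = c_i - x, coupled: sum_i (c_i - x_i) <= 0
N, T = 4, 2000

x = np.zeros(N)
y = c - x                            # y_{i,1} = g_i(x_{i,1})
mu = np.zeros(N)
cps = np.ones(N)                     # push-sum scalars c_{i,1} = 1
for t in range(1, T + 1):
    alpha, eta = np.sqrt(t), float(t)          # alpha_t = sqrt(t), eta_t = t
    cps = W @ cps                              # c-update
    lam = (W @ mu) / cps                       # lambda-update (push-sum correction)
    grad = x - b                               # subgradient of f_{i,t} at x_{i,t}
    x_new = np.clip(x - (alpha * grad - lam) / (2 * eta), 0.0, 10.0)  # x-update, closed form
    y = W @ y + (c - x_new) - (c - x)          # y-update (constraint tracking)
    mu = np.maximum(W @ mu + y, 0.0)           # mu-update
    x = x_new
print(x, float(np.sum(c - x)))  # decisions and remaining coupled-constraint slack
```

The growing multipliers $\mu_{i,t}$ push the decisions above the unconstrained minimizers $b_i$ until the coupled constraint is (approximately) met, mirroring the dual-ascent interpretation above.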
Different from [@c13; @c14], whose parameters are related to the time horizon $T$, we allow $\alpha_t$ and $\eta_t$ to be time-varying without knowing $T$ in advance, which provides flexibility in deciding when to stop the proposed online algorithm.

**Initialization**: Each node $i \in \mathcal{V}$ sets $x_{i,1} \in X_i$, $c_{i,1}=1$, $\mu_{i, 1}=\mathbf{0}_p$, and $y_{i,1}=g_i(x_{i,1})$.

**At each time $t \ge 1$**:

1. Each node $j \in \mathcal{V}$ sends its local information $w_{i j, t} c_{j, t}$, $w_{i j, t} \mu_{j, t}$, and $w_{i j, t}y_{j, t}$ to every out-neighbor $i \in \mathcal{N}_{j,t}^{\text{out}}$.
2. After receiving the information from its in-neighbors $j \in \mathcal{N}_{i,t}^{\text{in}}$, each node $i \in \mathcal{V}$ updates $c_{i,t+1}$ according to [\[eq:updateofcouplingcit\]](#eq:updateofcouplingcit){reference-type="eqref" reference="eq:updateofcouplingcit"} and then computes $\lambda_{i,t+1}$ according to [\[eq:updateofcouplinglambdait1\]](#eq:updateofcouplinglambdait1){reference-type="eqref" reference="eq:updateofcouplinglambdait1"}.
3. Each node $i \in \mathcal{V}$ updates $x_{i,t+1}$ according to [\[eq:updateofcouplingxt1\]](#eq:updateofcouplingxt1){reference-type="eqref" reference="eq:updateofcouplingxt1"}.
4. Each node $i \in \mathcal{V}$ updates $y_{i,t+1}$ according to [\[eq:updateofcouplingyk\]](#eq:updateofcouplingyk){reference-type="eqref" reference="eq:updateofcouplingyk"}.
5. Each node $i \in \mathcal{V}$ updates $\mu_{i,t+1}$ according to [\[eq:updateofcouplinguk\]](#eq:updateofcouplinguk){reference-type="eqref" reference="eq:updateofcouplinguk"}.

# Dynamic Regret and Constraint Violation Bounds {#sec: regretanalysis}

In this section, we provide the dynamic regret and constraint violation bounds of DUST. We first present the following lemmas.

**Lemma 1**. *Suppose Assumptions [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} and [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"} hold. 
Then, for any $t \ge 1$, $$\begin{aligned} \sum_{i=1}^{N} y_{i,t}&=\sum_{i=1}^N g_i(x_{i,t}),\displaybreak[0] \label{eq:sumyitsumgit}\\ \|y_{i,t}\| &\le B_y, \label{eq:boundednessofyit}\displaybreak[0] \end{aligned}$$ where $B_y=\frac{8N^2F\sqrt{p}}{r}(1+\frac{2}{1-\sigma})+(N+2)F$, $r:= \inf_{t=1,2,...}(\min_{i \in [N]}\{W_t\cdots W_1\mathbf{1}_N\}_i)$, and $\sigma \in (0,1)$, which satisfy $r \geq \frac{1}{N^{N B}},~\sigma \leq\left(1-\frac{1}{N^{N B}}\right)^{\frac{1}{N B}}$.* Lemma [Lemma 1](#lem:sumyit=sumgit){reference-type="ref" reference="lem:sumyit=sumgit"} shows that the local estimator $y_{i,t}$ tracks the sum of the local constraint function values at each time $t$. The proof of Lemma [Lemma 1](#lem:sumyit=sumgit){reference-type="ref" reference="lem:sumyit=sumgit"} is similar to those of Lemma 1 in [@c12] and Lemma 4 in [@c10], and we omit it here. **Lemma 2**. *Suppose Assumptions [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} and [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"} hold. Then, for any $t \ge 1$, $$\begin{aligned} \sum_{i=1}^{N}\|\bar{\mu}_t-\lambda_{i,t+1}\| &\le \frac{8N^2B_y\sqrt{p}}{r}\sum_{k=1}^{t}\sigma^{t-k},\displaybreak[0] \label{eq:sumNbarmutlambda} \end{aligned}$$ where $\bar{\mu}_t=\frac{1}{N}\sum_{i=1}^{N}\mu_{i,t}$ and $r, \sigma$ are given in Lemma [Lemma 1](#lem:sumyit=sumgit){reference-type="ref" reference="lem:sumyit=sumgit"}.* Lemma [Lemma 2](#lem:dualconsesusboud){reference-type="ref" reference="lem:dualconsesusboud"} presents a bound on the consensus error of the dual variables; its proof follows that of Lemma 1 in [@c16]. 
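The consensus error in Lemma 2 vanishes geometrically because the push-sum ratio in the $\lambda$-update cancels the network imbalance. A small numeric sketch (with a 3-node column-stochastic matrix of our own choosing, not from the paper) makes this visible: iterating $\mu \leftarrow W\mu$ alone converges to a Perron-weighted profile, while the ratio $\mu/c$ recovers the true average.

```python
import numpy as np

# Columns of W sum to 1 (column stochastic) but rows do not, so the
# network is unbalanced; this matrix is purely illustrative.
W = np.array([[0.6, 0.1, 0.3],
              [0.3, 0.8, 0.2],
              [0.1, 0.1, 0.5]])
mu = np.array([3.0, 0.0, 0.0])    # initial dual values, average 1.0
c = np.ones(3)                    # push-sum weights, c_{i,1} = 1
for _ in range(100):
    mu, c = W @ mu, W @ c
print(mu / c)                     # every entry close to the average 1.0
```

After the loop the raw iterates `mu` remain unequal across nodes, yet each ratio `mu[i] / c[i]` is essentially the exact average of the initial values, which is what the $\lambda_{i,t+1}$ update exploits.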
The results in Lemmas [Lemma 1](#lem:sumyit=sumgit){reference-type="ref" reference="lem:sumyit=sumgit"}--[Lemma 2](#lem:dualconsesusboud){reference-type="ref" reference="lem:dualconsesusboud"} involve a number of network parameters, such as the number of nodes $N$ and the network connectivity factor $B$, which eventually influence the dynamic regret and constraint violation bounds through the following lemma. **Lemma 3**. *Suppose Assumptions [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} and [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"} hold. Then, for any $t \ge 1$ and arbitrary $\tilde{x}_{i,t} \in X_i$, $i \in \mathcal{V}$, $$\begin{aligned} & \frac{N}{2}\|\bar{\mu}_{t+1}\|^2\!-\!\frac{N}{2}\|\bar{\mu}_{t}\|^2 \displaybreak[0] \notag\\ &\le\!(\frac{N}{2}\!\!+\!\!\frac{N}{r})B_y^2\!+\!\frac{NG^2\alpha_t^2}{4\eta_t}\!+\!(2B_y\!\!+\!\!2F)\!\sum_{i=1}^{N}\|\bar{\mu}_t\!-\!\lambda_{i,t+1}\|\displaybreak[0]\notag\\ &+\sum_{i=1}^N \alpha_t\partial f_{i,t}(x_{i,t})^T\!(\tilde{x}_{i,t}\!-\!x_{i,t})+\sum_{i=1}^N\langle \bar{\mu}_t, ~g_i(\tilde{x}_{i,t})\rangle\displaybreak[0] \notag\\ &+\sum_{i=1}^N \eta_t(\|x_{i,t}\!-\tilde{x}_{i,t}\|^2\!\!-\!\|x_{i,t+1}\!-\!\tilde{x}_{i,t}\|^2).\displaybreak[0] \label{eq:barmut1barmutnormbound} \end{aligned}$$* *Proof.* See Appendix [6.1](#prf:proofofbarmut1barmutbound){reference-type="ref" reference="prf:proofofbarmut1barmutbound"}. ◻ Lemma [Lemma 3](#lem:mubart1normandmubartbound){reference-type="ref" reference="lem:mubart1normandmubartbound"} establishes the relationship between the bound on the dual variables and the first-order information of the local functions, where the former involves the constraint violation and the latter is related to the dynamic regret bound. 
By choosing $\tilde{x}_{i,t}$ appropriately and utilizing the convexity of the local functions as well as Lemmas [Lemma 1](#lem:sumyit=sumgit){reference-type="ref" reference="lem:sumyit=sumgit"}--[Lemma 2](#lem:dualconsesusboud){reference-type="ref" reference="lem:dualconsesusboud"}, we obtain the dynamic regret and constraint violation bounds from Lemma [Lemma 3](#lem:mubart1normandmubartbound){reference-type="ref" reference="lem:mubart1normandmubartbound"}. **Theorem 1**. *Suppose Assumptions [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} and [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"} hold. If we set $$\label{eq:TVdynamicstepsizechosen} \alpha_t=\sqrt{t}, \; \eta_t=t,$$ then for any $T \ge 1$, $$\begin{aligned} \operatorname{Reg}(T)&=\mathcal{O}(\sqrt{T})+\mathcal{O}(V_T), \displaybreak[0] \label{eq:TVdynamicresult} \end{aligned}$$ where $V_T:=\sum_{t=1}^T\sqrt{t}\sum_{i=1}^N\left\|x_{i, t+1}^*-x_{i, t}^*\right\|$ and $x_{i,t}^*$ is the $i$-th component of the optimal solution $\mathbf{x}_{t}^*:=\underset{\mathbf{x} \in \mathcal{X}}{\arg \min } \sum_{i=1}^N f_{i, t}\left(x_i\right)$ to problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"}.* *Proof.* See Appendix [6.2](#prf:TVdynamicregret){reference-type="ref" reference="prf:TVdynamicregret"}. ◻ Theorem [Theorem 1](#thm:TVdynamicregret){reference-type="ref" reference="thm:TVdynamicregret"} shows that the dynamic regret grows sublinearly with $T$ if the accumulated variation of the optimal sequence $V_T$ is sublinear, which requires that the online problem [\[eq:primalprob\]](#eq:primalprob){reference-type="eqref" reference="eq:primalprob"} not change too drastically. Intuitively, the sublinearity guarantees that $\operatorname{Reg}(T)/T$ converges to $0$ as $T$ goes to infinity. 
It should be noted that if $V_T=0$, the result reduces to the static regret, which achieves an $\mathcal{O}(\sqrt{T})$ bound. In addition, Theorem [Theorem 1](#thm:TVdynamicregret){reference-type="ref" reference="thm:TVdynamicregret"} indicates that DUST achieves stronger results than other existing algorithms applicable to coupled inequality constraints. Specifically, compared with [@c9; @c10], which are also applicable to unbalanced networks with column-stochastic matrices, the static regret bound in [@c9] is strictly greater than $\mathcal{O}(\sqrt{T})$, and the dynamic regret bound in [@c10] is $\mathcal{O}(T^{\frac{1}{2}+2\kappa})+\mathcal{O}(V_T)$ with $\kappa \in (0, 1/4)$, which is worse than ours. Moreover, [@c12] assumes the boundedness of $\mu_{i,t}$, while Theorem [Theorem 1](#thm:TVdynamicregret){reference-type="ref" reference="thm:TVdynamicregret"} does not. Though [@c11; @c13; @c14; @c15] can also handle coupled inequality constraints, they only apply to balanced networks with doubly stochastic mixing matrices; in addition, [@c11] only focuses on the static regret, which is weaker than our result. The dynamic regret bounds in [@c14; @c15] depend on the accumulated error of the optimal sequence $\sqrt{T}\sum_{t=1}^T\sum_{i=1}^N\left\|x_{i, t+1}^*-x_{i, t}^*\right\|$, which is larger than $V_T$ in [\[eq:TVdynamicresult\]](#eq:TVdynamicresult){reference-type="eqref" reference="eq:TVdynamicresult"}, leading to a larger bound than [\[eq:TVdynamicresult\]](#eq:TVdynamicresult){reference-type="eqref" reference="eq:TVdynamicresult"}. Next, we present a bound on the constraint violation. **Theorem 2**. *Suppose all the conditions in Theorem [Theorem 1](#thm:TVdynamicregret){reference-type="ref" reference="thm:TVdynamicregret"} hold. Then, for any $T \ge 1$, $$\begin{aligned} \operatorname{Reg}^c(T) =\mathcal{O}(T^{\frac{3}{4}}). 
\displaybreak[0] \label{eq:TVwithoutSlaterconstraintvio} \end{aligned}$$* *Proof.* See Appendix [6.3](#prf:WioutSlaterCV){reference-type="ref" reference="prf:WioutSlaterCV"}. ◻ Theorem [Theorem 2](#thm:WioutSlaterCV){reference-type="ref" reference="thm:WioutSlaterCV"} shows that DUST achieves a sublinear constraint violation bound. This result is superior to those of [@c9; @c10; @c11], whose constraint violation bounds are strictly greater than $\mathcal{O}(T^{\frac{3}{4}})$. Theorem [Theorem 2](#thm:WioutSlaterCV){reference-type="ref" reference="thm:WioutSlaterCV"} holds without assuming Slater's condition, which allows us to handle equality constraints by converting an equality into two inequalities. The following theorem shows that $\operatorname{Reg}^c(T)$ improves to $\mathcal{O}(\sqrt{T})$ if all local constraint functions $g_i$, $\forall i \in \mathcal{V}$, satisfy Slater's condition, which is commonly assumed in [@c10; @c11; @c14]. **Assumption 3**. *There exists a constant $\epsilon >0$ and a point $\hat{x}_i \in\operatorname{relint}(X_i),~\forall i \in \mathcal{V}$ such that $\sum_{i=1}^{N}g_{i}(\hat{x}_{i}) \leq-\epsilon\mathbf{1}_p$.* **Theorem 3**. *Suppose Assumptions [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"}$-$[Assumption 3](#asm: slatercondition){reference-type="ref" reference="asm: slatercondition"} hold and $\alpha_t$, $\eta_t$ are set as in Theorem [Theorem 1](#thm:TVdynamicregret){reference-type="ref" reference="thm:TVdynamicregret"}. Then, for any $T \ge 1$, $$\begin{aligned} \operatorname{Reg}^c(T) =\mathcal{O}(\sqrt{T}). \displaybreak[0] \label{eq:TVwithSlaterconstraintvio} \end{aligned}$$* *Proof.* See Appendix [6.4](#prf:WiSlaterCV){reference-type="ref" reference="prf:WiSlaterCV"}. ◻ **Remark 1**. 
*To the best of our knowledge, DUST is the first distributed algorithm to achieve an $\mathcal{O}(\sqrt{T})$ dynamic regret bound and an $\mathcal{O}(T^{\frac{3}{4}})$ constraint violation bound, let alone an $\mathcal{O}(\sqrt{T})$ constraint violation bound under Slater's condition, for DOCO problems with coupled inequality constraints over unbalanced networks. Unlike [@c20; @c21; @c22], whose constraint violation bounds are affected by the dynamic optimal decisions $x_t^*$, $\forall t \ge 1$, our results are independent of them. Furthermore, from Appendices [6.2](#prf:TVdynamicregret){reference-type="ref" reference="prf:TVdynamicregret"}--[6.4](#prf:WiSlaterCV){reference-type="ref" reference="prf:WiSlaterCV"}, we observe that the bounds of $\operatorname{Reg}(T)$ and $\operatorname{Reg}^c(T)$ in [\[eq:TVdynamicresult\]](#eq:TVdynamicresult){reference-type="eqref" reference="eq:TVdynamicresult"}$-$[\[eq:TVwithSlaterconstraintvio\]](#eq:TVwithSlaterconstraintvio){reference-type="eqref" reference="eq:TVwithSlaterconstraintvio"} are proportional to $\frac{N^6}{r^3(1-\sigma)^3}$ with $r \geq \frac{1}{N^{N B}},~\sigma \leq\left(1-\frac{1}{N^{N B}}\right)^{\frac{1}{N B}}$. Note that $\frac{N^6}{r^3(1-\sigma)^3}$ increases as $N$ and $B$ grow, and $\operatorname{Reg}(T)$ and $\operatorname{Reg}^c(T)$ increase accordingly. This statement is verified via a numerical example in the following section.* # Numerical Example {#sec: numericalexperiments} We apply DUST to solve the plug-in electric vehicles (PEVs) charging problem, whose goal is to find an optimal charging schedule over a time period by minimizing the sum of the time-varying local costs of all PEVs while satisfying certain constraints at each time instance [@c10; @c13]. 
At each time $t$, the PEVs charging problem can be cast as: $$\label{eq:PEVsproblem} \begin{array}{cl} \underset{\substack{x_i \in X_i, \forall i \in \mathcal{V} }}{\operatorname{minimize}} ~\!\!\!& \sum_{i=1}^{N} c_{i,t}(x_{i})\\ {\operatorname{ subject~to }}\!\!\! & \sum_{i=1}^{N}(A_{i}x_{i}-D/N) \leq \mathbf{0}_p, \end{array}$$ where $x_i$ represents the charging rate of PEV $i$, $c_{i,t}(x_{i}):=a_{i,t}/2\|x_i\|^2+b_{i,t}^Tx_i$ is the local cost function of PEV $i$ at time $t$ [@c23], and $X_i$ is the local constraint set involving the maximum charging power and the desired final state of charge of PEV $i$. The coupled constraint $\sum_{i=1}^{N}(A_{i}x_{i}-D/N) \leq \mathbf{0}_p$ guarantees that the aggregate charging power of all PEVs is less than the maximum power that the network can deliver. In our simulation, each $a_{i,t}$ and $b_{i,t}$ is drawn uniformly from $[0.5, 1]$ and $\left(0, 1\right]^{d_i}$, respectively, where $d_i=24$ is the dimension of $x_i$. According to the set-up in [@c24], there are 48 coupled inequalities, i.e., the rate-aggregation matrix $A_i \in \mathbb{R}^{48 \times 24}$, and each local set $X_i$ is determined by 197 inequalities. The values of $A_i$, $D$, and $X_i$ are obtained by referring to [@c24]. To investigate the convergence performance of DUST and the effects of the network connectivity factor $B$ and the number of nodes $N$ on it, we run DUST with different $B$ and different $N$. Fig. [\[differentB\]](#differentB){reference-type="ref" reference="differentB"} and Fig. [\[differentN\]](#differentN){reference-type="ref" reference="differentN"} plot the evolution of $\operatorname{Reg}(T)/T$ and $\operatorname{Reg}^c(T)/T$ with $B=2,10$ when $N$ is fixed at 10 and with $N=10,20$ when $B$ is fixed at 2, respectively. From the two figures, we observe that DUST is able to achieve sublinear convergence in terms of dynamic regret and constraint violations, which validates our theoretical results. 
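The randomized problem data described above can be generated along the following lines. This is our own sketch of the stated distributions; the rate-aggregation matrix `A_i` below is a random stand-in for the data of [@c24], which we do not reproduce.

```python
import numpy as np

# One PEV's time-t data: a_{i,t} uniform on [0.5, 1], b_{i,t} uniform on
# (0, 1]^{d_i} with d_i = 24, and 48 coupled inequality rows.
rng = np.random.default_rng(1)
d_i, p = 24, 48
a_it = rng.uniform(0.5, 1.0)
b_it = rng.uniform(0.0, 1.0, size=d_i)          # open endpoint approximated
A_i = rng.uniform(0.0, 1.0, size=(p, d_i))      # stand-in for the matrix in [c24]

def cost(x):
    """Local cost c_{i,t}(x) = (a_{i,t}/2) * ||x||^2 + b_{i,t}^T x."""
    return 0.5 * a_it * (x @ x) + b_it @ x

x = rng.uniform(0.0, 1.0, size=d_i)
grad = a_it * x + b_it    # gradient of c_{i,t}, fed to the x-update of DUST
```

Since the local cost is a smooth quadratic, its subgradient reduces to the gradient $a_{i,t}x + b_{i,t}$, which is what each node would plug into the $x$-update.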
In addition, it can be seen that the convergence speed becomes slower if $B$ or $N$ increases. This fact is consistent with our analysis in Remark [Remark 1](#rmk:BNeffects){reference-type="ref" reference="rmk:BNeffects"}. We compare DUST with the distributed online primal-dual push-sum algorithm (DOPP) in [@c10], which is also developed based on column-stochastic mixing matrices. For a fair comparison, we set $\kappa=0.2$ for DOPP so that it achieves satisfactory convergence performance. Fig. [\[compare\]](#compare){reference-type="ref" reference="compare"} presents the evolution of $\operatorname{Reg}(T)/T$ and $\operatorname{Reg}^c(T)/T$ of DUST and DOPP with $N=10$, $B=4$. It is evident that DUST achieves smaller dynamic regret and constraint violations than DOPP, which validates the superior performance of DUST. # Conclusion {#sec:conclusion} We have constructed a distributed dual subgradient tracking algorithm (DUST) to solve the DOCO problem with a globally coupled inequality constraint over unbalanced networks. To develop it, we integrate the push-sum technique into the dual subgradient method. The subgradients with respect to the dual variables can be estimated by primal constraint violations, which are tracked by local auxiliary variables, enabling distributed implementation. We show that DUST achieves sublinear dynamic regret and constraint violations if the accumulated variation of the optimal sequence is also sublinear. Our theoretical results are stronger than those of existing distributed algorithms applicable to unbalanced networks, which is verified via numerical experiments. G. Lee, W. Saad and M. Bennis, \"An online optimization framework for distributed fog network formation with minimal latency,\" *IEEE Transactions on Wireless Communications*, vol. 18, no. 4, pp. 2244--2258, 2019. S. Shahrampour and A. Jadbabaie, \"Distributed online optimization in dynamic environments using mirror descent,\" *IEEE Transactions on Automatic Control*, vol. 63, no. 
3, pp. 714--725, 2018. G. Carnevale, A. Camisa and G. Notarstefano, \"Distributed online aggregative optimization for dynamic multi-robot coordination,\" *IEEE Transactions on Automatic Control*, pp. 1--8, 2022. D. Mateos-Núñez and J. Cortés, \"Distributed online convex optimization over jointly connected digraphs,\" *IEEE Transactions on Network Science and Engineering*, vol. 1, no. 1, pp. 23--37, 2014. X. Li, L. Xie, and N. Li, \"A survey of decentralized online learning,\" *arXiv:2205.00473*, 2022. M. Akbari, B. Gharesifard and T. Linder, \"Distributed online convex optimization on time-varying directed graphs,\" *IEEE Transactions on Control of Network Systems*, vol. 4, no. 3, pp. 417--428, 2017. A. Nedić, S. Lee and M. Raginsky, \"Decentralized online optimization with global objectives and local communication,\" in *American Control Conference*, Chicago, IL, USA, 2015, pp. 4497--4503. A. Koppel, F. Y. Jakubiec and A. Ribeiro, \"A saddle point algorithm for networked online convex optimization,\" *IEEE Transactions on Signal Processing*, vol. 63, no. 19, pp. 5149--5164, 2015. C. Wang, S. Xu, D. Yuan, B. Zhang, and Z. Zhang, \"Distributed online convex optimization with a bandit primal-dual mirror descent push-sum algorithm,\" *Neurocomputing*, vol. 497, pp. 204--215, 2022. X. Li, X. Yi and L. Xie, \"Distributed online optimization for multi-agent networks with coupled inequality constraints,\" *IEEE Transactions on Automatic Control*, vol. 66, no. 8, pp. 3575--3591, 2021. K. Tada, N. Hayashi and S. Takai, \"Distributed inequality constrained online optimization for unbalanced digraphs using row stochastic property,\" in *IEEE Conference on Decision and Control*, Cancun, Mexico, 2022, pp. 2283--2288. S. Lee and M. M. Zavlanos, "On the sublinear regret of distributed primal-dual algorithms for online constrained optimization," *arXiv:1705.11128*, 2017. J. Li, C. Gu, Z. Wu and T. 
Huang, \"Online learning algorithm for distributed convex optimization with time-varying coupled constraints and bandit feedback,\" *IEEE Transactions on Cybernetics*, vol. 52, no. 2, pp. 1009--1020, 2022. X. Yi, X. Li, L. Xie and K. H. Johansson, \"Distributed online convex optimization with time-varying coupled inequality constraints,\" *IEEE Transactions on Signal Processing*, vol. 68, pp. 731--746, 2020. X. Yi, X. Li, T. Yang, L. Xie, T. Chai, and K. H. Johansson, \"Distributed bandit online convex optimization with time-varying coupled inequality constraints,\" *IEEE Transactions on Automatic Control*, vol. 66, no. 10, pp. 4620--4635, 2021. A. Nedić and A. Olshevsky, \"Distributed optimization over time-varying directed graphs,\" *IEEE Transactions on Automatic Control*, vol. 60, no. 3, pp. 601--615, 2015. M. J. Neely and H. Yu, "Online convex optimization with time-varying constraints," *arXiv:1702.04783*, 2017. Y. Kim and D. Lee, "Online convex optimization with stochastic constraints: zero constraint violation and bandit feedback," *arXiv:2301.11267*, 2023. X. Yi, X. Li, T. Yang, L. Xie, T. Chai and K. H. Johansson, \"Regret and cumulative constraint violation analysis for online convex optimization with long term constraints,\" in *Proceedings of International Conference on Machine Learning*, 2021, pp. 11998--12008. T. Chen, Q. Ling and G. B. Giannakis, \"An online convex optimization approach to proactive network resource allocation,\" *IEEE Transactions on Signal Processing*, vol. 65, no. 24, pp. 6350--6364, 2017. X. Cao, J. Zhang and H. V. Poor, \"A virtual-queue-based algorithm for constrained online convex optimization with applications to data center resource allocation,\" *IEEE Journal of Selected Topics in Signal Processing*, vol. 12, no. 4, pp. 703--716, 2018. X. Cao and K. J. R. Liu, \"Online convex optimization with time-varying constraints and bandit feedback,\" *IEEE Transactions on Automatic Control*, vol. 64, no. 7, pp. 
2665--2680, 2019. J. Li, C. Li, Y. Xu, Z. Y. Dong, K. P. Wong, and T. Huang, \"Noncooperative game-based distributed charging control for plug-in electric vehicles in distribution networks,\" *IEEE Transactions on Industrial Informatics*, vol. 14, no. 1, pp. 301--310, 2018. R. Vujanic, P. M. Esfahani, P. J. Goulart, S. Mariethoz, and M. Morari, "A decomposition method for large scale MILPs, with performance guarantees and a power system application," *Automatica*, vol. 67, pp. 144--156, 2016. D. P. Bertsekas, *Nonlinear Programming*. Belmont, MA: Athena Scientific, 1999. For ease of exposition, let $\hat{\mu}_{i, t}=\sum_{j \in \mathcal{N}_{i,t}^{\text{in}}} w_{i j, t} \mu_{j, t}$, $\bar{\mu}_t=\frac{1}{N}\sum_{i=1}^{N}\mu_{i,t}$, and $\epsilon_{i,t+1}=[\hat{\mu}_{i, t}+y_{i,t+1}]_+-\hat{\mu}_{i, t}$. With these notations, we rewrite [\[eq:updateofcouplinglambdait1\]](#eq:updateofcouplinglambdait1){reference-type="eqref" reference="eq:updateofcouplinglambdait1"} and [\[eq:updateofcouplinguk\]](#eq:updateofcouplinguk){reference-type="eqref" reference="eq:updateofcouplinguk"} as $\lambda_{i,t+1}=\frac{\hat{\mu}_{i, t}}{c_{i,t+1}}$ and $\mu_{i, t+1}=\hat{\mu}_{i, t}+\epsilon_{i,t+1}$, respectively. ## Proof of Lemma [Lemma 3](#lem:mubart1normandmubartbound){reference-type="ref" reference="lem:mubart1normandmubartbound"} {#prf:proofofbarmut1barmutbound} Before we prove Lemma [Lemma 3](#lem:mubart1normandmubartbound){reference-type="ref" reference="lem:mubart1normandmubartbound"}, the following auxiliary lemma is first presented. **Lemma 4**. *Suppose Assumptions [Assumption 1](#ass: networkassumption){reference-type="ref" reference="ass: networkassumption"} and [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"} hold. 
Then $$\begin{aligned} \|\epsilon_{i,t+1}\|\le B_y,~r \le c_{i,t} \le N,~\forall t\ge 1, \displaybreak[0] \label{eq:boundofepsiloncit} \end{aligned}$$ where $B_y$ and $r$ are given in Lemma [Lemma 1](#lem:sumyit=sumgit){reference-type="ref" reference="lem:sumyit=sumgit"}.* *Proof.* According to the nonexpansiveness of the projection, $\|P_S(z_1)-P_S(z_2)\| \le \|z_1-z_2\|$, $\forall z_1, z_2 \in \mathbb{R}^n$, and the fact that $[\hat{\mu}_{i, t}]_+=\hat{\mu}_{i, t}$ (since $\mu_{j,t}\ge\mathbf{0}_p$), we have $\|\epsilon_{i,t+1}\|=\|[\hat{\mu}_{i, t}+y_{i,t+1}]_+-[\hat{\mu}_{i, t}]_+ \| \le \|\hat{\mu}_{i, t}+y_{i,t+1}-\hat{\mu}_{i, t}\|= \|y_{i,t+1}\| \le B_y$. The proof of the boundedness of $c_{i,t}$ follows Lemma 3 in [@c10]. ◻ Summing $\mu_{i, t+1}=\hat{\mu}_{i, t}+\epsilon_{i,t+1}$ over $i=1,\ldots,N$ and dividing by $N$ yields $$\begin{aligned} \bar{\mu}_{t+1}=\bar{\mu}_t+\frac{1}{N}\sum_{i=1}^{N}\epsilon_{i,t+1}, \label{eq:barmut1barmut} \end{aligned}$$ which gives, for all $\lambda \in \mathbb{R}_+^p$, $$\begin{aligned} \|\bar{\mu}_{t+1}\!-\!\lambda \|^2 \le \|\bar{\mu}_t\!-\!\lambda \|^2\!+\!\frac{2}{N}\sum_{i=1}^{N}\epsilon_{i,t+1}^T(\bar{\mu}_t\!-\!\lambda)\!+\!B_y^2.\label{eq:barmut1withepsilonbarmut} \end{aligned}$$ The last term in [\[eq:barmut1withepsilonbarmut\]](#eq:barmut1withepsilonbarmut){reference-type="eqref" reference="eq:barmut1withepsilonbarmut"} follows from [\[eq:boundofepsiloncit\]](#eq:boundofepsiloncit){reference-type="eqref" reference="eq:boundofepsiloncit"}. Let us now consider the term $\sum_{i=1}^{N}\epsilon_{i,t+1}^T(\bar{\mu}_t-\lambda)$. 
We can derive $$\begin{aligned} &\sum_{i=1}^{N}\!\epsilon_{i,t+1}^T(\bar{\mu}_t\!-\!\lambda)\!=\!\!\sum_{i=1}^{N}\!\epsilon_{i,t+1}^T(\frac{(\hat{\mu}_{i, t}\!-\!c_{i,t+1}\lambda)}{c_{i,t+1}}\!+\!\bar{\mu}_t\!-\!\lambda_{i,t+1})\displaybreak[0]\notag\\ &=\!\sum_{i=1}^{N}\!\big(\epsilon_{i,t+1}\!\!-\!\!y_{i,t+1})^T\!\frac{(\mu_{i, t+1}\!\!-\!\!c_{i,t+1}\lambda)}{c_{i,t+1}}\!+\!\epsilon_{i,t+1}^T(\bar{\mu}_t\!\!-\!\!\lambda_{i,t+1})\displaybreak[0] \notag\\ &+\!(\epsilon_{i,t+1}\!-\!y_{i,t+1})^T\frac{(\hat{\mu}_{i, t}\!-\!\mu_{i, t+1})}{c_{i,t+1}}\big)\!+\!y_{i,t+1}^T(\lambda_{i,t+1}\!-\!\lambda)\! \displaybreak[0]\notag\\ &\le \!\sum_{i=1}^{N}\!y_{i,t+1}^T(\frac{(\mu_{i, t+1}\!\!-\!\!\hat{\mu}_{i, t})}{c_{i,t+1}}\!+\!\lambda_{i,t+1}\!\!-\!\!\lambda)+\!\epsilon_{i,t+1}^T(\bar{\mu}_t\!\!-\!\!\lambda_{i,t+1}) \displaybreak[0]\notag\\ & \le \!\frac{N}{r}\!B_y^2\!+\!\sum_{i=1}^{N}\!y_{i,t+1}^T(\lambda_{i,t+1}\!\!-\!\!\lambda)\!+\!B_y\!\sum_{i=1}^{N}\!\|\bar{\mu}_t\!-\!\lambda_{i,t+1}\|, \displaybreak[0] \label{eq:epsilonmultiplybarmutlambdabound} \end{aligned}$$ where the first equality uses $\lambda_{i,t+1}=\frac{\hat{\mu}_{i, t}}{c_{i,t+1}}$. The first inequality uses: (a) $(\epsilon_{i,t+1}\!-y_{i,t+1})^T(\mu_{i, t+1}\!-\!c_{i,t+1}\lambda) \le 0$ according to the property of projection $(P_S(x)-x)^T(P_S(x)-y) \le 0$, $\forall x \in \mathbb{R}^n, y \in S$, where $\lambda \in \mathbb{R}_+^p$; (b) $\epsilon_{i,t+1}^T(\hat{\mu}_{i, t}\!-\!\mu_{i, t+1})=(\mu_{i, t+1}-\hat{\mu}_{i, t})^T(\hat{\mu}_{i, t}\!-\!\mu_{i, t+1}) \le 0$. The last inequality uses: (a) the Cauchy--Schwarz inequality; (b) [\[eq:boundednessofyit\]](#eq:boundednessofyit){reference-type="eqref" reference="eq:boundednessofyit"} and [\[eq:boundofepsiloncit\]](#eq:boundofepsiloncit){reference-type="eqref" reference="eq:boundofepsiloncit"}. 
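Both projection facts invoked above, nonexpansiveness and the obtuse-angle property $(P_S(x)-x)^T(P_S(x)-y)\le 0$ for $y\in S$, can be sanity-checked numerically for $S=\mathbb{R}_+^p$, where $P_S(\cdot)=[\cdot]_+$. The random trial below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
proj = lambda z: np.maximum(z, 0.0)     # P_S for S = nonnegative orthant

for _ in range(1000):
    z1, z2 = rng.standard_normal(5), rng.standard_normal(5)
    y = rng.uniform(0.0, 1.0, size=5)   # an arbitrary point of S
    # Nonexpansiveness: ||P(z1) - P(z2)|| <= ||z1 - z2||.
    assert np.linalg.norm(proj(z1) - proj(z2)) <= np.linalg.norm(z1 - z2) + 1e-12
    # Obtuse-angle property: (P(z1) - z1)^T (P(z1) - y) <= 0 for y in S.
    assert (proj(z1) - z1) @ (proj(z1) - y) <= 1e-12
```

For the orthant both inequalities can also be verified coordinate-wise, since $[\cdot]_+$ acts independently on each component.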
The term $y_{i,t+1}^T(\lambda_{i,t+1}-\lambda)$ in [\[eq:epsilonmultiplybarmutlambdabound\]](#eq:epsilonmultiplybarmutlambdabound){reference-type="eqref" reference="eq:epsilonmultiplybarmutlambdabound"} can be bounded as $$\begin{aligned} &\sum_{i=1}^{N}\!y_{i,t+1}^T(\lambda_{i,t+1}\!-\!\lambda)=\sum_{i=1}^{N}y_{i,t+1}^T(\lambda_{i,t+1}\!-\!\bar{\mu}_t\!+\!\bar{\mu}_t\!-\!\lambda) \displaybreak[0] \notag\\ &\le B_y\sum_{i=1}^{N}\|\bar{\mu}_t\!-\!\lambda_{i,t+1}\|+\!\sum_{i=1}^N g_i(x_{i,t+1})^T(\bar{\mu}_t\!-\!\lambda)\displaybreak[0] \notag\\ &\!\le\!(\!B_y\!\!+\!\!F)\!\sum_{i=1}^{N}\!\|\bar{\mu}_t\!\!-\!\!\lambda_{i,t+1}\|\!\!+\!\!\sum_{i=1}^N\! g_i(x_{i,t+1})^T\!(\lambda_{i,t+1}\!\!-\!\!\lambda), \label{eq:yitamultiplylambdait1minuslambdabound} \end{aligned}$$ where the last inequality utilizes Lemma [Lemma 1](#lem:sumyit=sumgit){reference-type="ref" reference="lem:sumyit=sumgit"} and [\[eq:bounofgivalue\]](#eq:bounofgivalue){reference-type="eqref" reference="eq:bounofgivalue"}. Let $S_{i,t}(x_i,\lambda_i)=\alpha_t\partial f_{i,t}(x_{i,t})^T\!(x_i\!-\!x_{i,t})+\langle \lambda_{i}, ~g_i(x_i)\rangle\!+\eta_t\|x_i\!-\!x_{i,t}\|^2$. Obviously, we have $\sum_{i=1}^N g_i(x_{i,t+1})^T(\lambda_{i,t+1}\!-\!\lambda)=\sum_{i=1}^NS_{i,t}(x_{i,t+1},\lambda_{i,t+1})-S_{i,t}(x_{i,t+1},\lambda) \le \sum_{i=1}^N\!S_{i,t}(\tilde{x}_{i,t}, \lambda_{i,t+1})\!\!-\!\!S_{i,t}(x_{i,t+1},\lambda)\!\!-\!\!\eta_t\|x_{i,t+1}\!\!-\!\!\tilde{x}_{i,t}\|^2$, $\forall \tilde{x}_{i,t} \in X_i$, which follows from the $2\eta_t$-strong convexity of $S_{i,t}(x_i,\lambda_{i,t+1})$ with respect to the variable $x_i$. 
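For completeness, the strong-convexity step above can be spelled out. Since $S_{i,t}(\cdot,\lambda_{i,t+1})$ is $2\eta_t$-strongly convex and $x_{i,t+1}$ minimizes it over the convex set $X_i$, there exists a subgradient $s\in\partial_x S_{i,t}(x_{i,t+1},\lambda_{i,t+1})$ with $s^T(\tilde{x}_{i,t}-x_{i,t+1})\ge 0$, whence, for any $\tilde{x}_{i,t}\in X_i$,
$$S_{i,t}(\tilde{x}_{i,t},\lambda_{i,t+1}) \ge S_{i,t}(x_{i,t+1},\lambda_{i,t+1})+s^T(\tilde{x}_{i,t}-x_{i,t+1})+\eta_t\|\tilde{x}_{i,t}-x_{i,t+1}\|^2 \ge S_{i,t}(x_{i,t+1},\lambda_{i,t+1})+\eta_t\|\tilde{x}_{i,t}-x_{i,t+1}\|^2,$$
and rearranging gives exactly the inequality used above.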
Combining this inequality with $\sum_{i=1}^N\!S_{i,t}(\tilde{x}_{i,t}, \lambda_{i,t+1})\!-\!S_{i,t}(\tilde{x}_{i,t}, \bar{\mu}_t) \le F\!\sum_{i=1}^{N}\!\|\bar{\mu}_t\!-\!\lambda_{i,t+1}\|$ yields $$\begin{aligned} &\sum_{i=1}^N g_i(x_{i,t+1})^T(\lambda_{i,t+1}\!-\!\lambda)\le F\sum_{i=1}^{N}\!\|\bar{\mu}_t\!-\!\lambda_{i,t+1}\|\displaybreak[0]\notag\\ &+\!\sum_{i=1}^N\!S_{i,t}(\tilde{x}_{i,t}, \bar{\mu}_t)\!-\!\!S_{i,t}(x_{i,t+1},\lambda)\!\!-\!\eta_t\|x_{i,t+1}\!\!-\!\!\tilde{x}_{i,t}\|^2. \displaybreak[0] \label{eq:sumgxit1timeslambdait1bound} \end{aligned}$$ Let $\lambda=\mathbf{0}_p$. Following Lemma 4 in [@c17], we obtain $-\sum_{i=1}^{N}S_{i,t}(x_{i,t+1},\mathbf{0}_p)=-\big(\alpha_t\sum_{i=1}^{N}\partial f_{i,t}(x_{i,t})^T\!(x_{i,t+1}\!-\!x_{i,t})\!+\!\eta_t\sum_{i=1}^{N}\|x_{i,t+1}\!-\!x_{i,t}\|^2\big) \le \frac{NG^2\alpha_t^2}{4\eta_t}$. By combining this bound with [\[eq:barmut1withepsilonbarmut\]](#eq:barmut1withepsilonbarmut){reference-type="eqref" reference="eq:barmut1withepsilonbarmut"}-[\[eq:sumgxit1timeslambdait1bound\]](#eq:sumgxit1timeslambdait1bound){reference-type="eqref" reference="eq:sumgxit1timeslambdait1bound"}, dividing both sides by $\frac{2}{N}$, and substituting the expression of $S_{i,t}(\tilde{x}_{i,t}, \bar{\mu}_t)$, we obtain [\[eq:barmut1barmutnormbound\]](#eq:barmut1barmutnormbound){reference-type="eqref" reference="eq:barmut1barmutnormbound"}. Thus, Lemma [Lemma 3](#lem:mubart1normandmubartbound){reference-type="ref" reference="lem:mubart1normandmubartbound"} holds. ## Proof of Theorem [Theorem 1](#thm:TVdynamicregret){reference-type="ref" reference="thm:TVdynamicregret"} {#prf:TVdynamicregret} For any $t \ge 1$, let $\tilde{x}_{i,t}=x_{i,t}^*$, $\forall i \in \mathcal{V}$; the feasibility of $\mathbf{x}_t^*$ implies $\sum_{i=1}^{N}g_i(x_{i,t}^*) \le \mathbf{0}_p$. With $\bar{\mu}_t \ge \mathbf{0}_p$, $\langle\bar{\mu}_t, \sum_{i=1}^{N}g_i(x_{i,t}^*)\rangle \le 0$. 
By virtue of the convexity of $f_{i,t}$, we have $\sum_{i=1}^{N}\!\alpha_t\partial f_{i,t}(x_{i,t})^T\!(x_{i,t}^*\!-\!x_{i,t})\! \le\! \alpha_t\!\sum_{i=1}^{N}\!\big(f_{i,t}(x_{i,t}^*)\!-\!f_{i,t}(x_{i,t})\big)$. Equipped with these, we divide both sides of [\[eq:barmut1barmutnormbound\]](#eq:barmut1barmutnormbound){reference-type="eqref" reference="eq:barmut1barmutnormbound"} by $\alpha_t$ and then sum it from $t=1$ to $T$ to obtain $$\begin{aligned} &\sum_{t=1}^{T}\sum_{i=1}^{N} f_{i,t}(x_{i,t})-\sum_{t=1}^{T}\sum_{i=1}^{N} f_{i,t}(x_{i,t}^\star) \le\underbrace{(\frac{N}{2}+\frac{N}{r})\sum_{t=1}^{T}\frac{B_y^2}{\alpha_t}}_{S_1} \displaybreak[0] \notag\\ &+\underbrace{\sum_{t=1}^{T}\frac{NG^2\alpha_t}{4\eta_t}}_{S_2}+ \frac{N}{2}\underbrace{\sum_{t=1}^{T}\frac{1}{\alpha_t}(\|\bar{\mu}_{t}\|^2-\|\bar{\mu}_{t+1}\|^2)}_{S_3} \displaybreak[0] \notag\\ &+(2B_y\!+\!2F)\underbrace{\sum_{t=1}^{T}\frac{1}{\alpha_t}\!\sum_{i=1}^{N}\|\bar{\mu}_t\!-\!\lambda_{i,t+1}\|}_{S_4} \displaybreak[0] \notag\\ &+\underbrace{\!\sum_{t=1}^{T}\frac{\eta_t}{\alpha_t}\sum_{i=1}^{N}\left(\|x_{i,t}^\star-x_{i,t}\|^2-\|x_{i,t+1}\!-\!x_{i,t}^\star\|^2\right)}_{S_5}.\label{eq:VtsumtTfitfitstar} \end{aligned}$$ Below, we analyze the upper bounds of $S_i$, $i =1, \ldots, 5$. 
With $\alpha_t=\sqrt{t}$ and $\eta_t=t$, it is easy to obtain $$\begin{aligned} &S_1\le (NB_y^2+\frac{2NB_y^2}{r})\sqrt{T},~S_2\le\frac{NG^2\sqrt{T}}{2}, \displaybreak[0] \label{eq:sumt1divideVt}\\ &S_3\!=\!\|\bar{\mu}_{1}\|^2\!\!+\!\!\sum_{t=2}^{T}\!(\frac{1}{\alpha_t}\!-\!\frac{1}{\alpha_{t-1}})\|\bar{\mu}_{t}\|^2\!\!-\!\frac{1}{\alpha_T}\|\bar{\mu}_{T+1}\|^2 \le 0, \displaybreak[0] \label{eq:sumtVtbarmutbarmut1} \end{aligned}$$ where [\[eq:sumt1divideVt\]](#eq:sumt1divideVt){reference-type="eqref" reference="eq:sumt1divideVt"} follows from $\sum_{t=1}^{T} \frac{1}{\sqrt{t}} \le1+\int_{t=1}^{T}t^{-1/2}dt\le 2\sqrt{T}$ and [\[eq:sumtVtbarmutbarmut1\]](#eq:sumtVtbarmutbarmut1){reference-type="eqref" reference="eq:sumtVtbarmutbarmut1"} holds because $\|\bar{\mu}_{1}\|^2=0$ and $\frac{1}{\alpha_t}-\frac{1}{\alpha_{t-1}} \le 0$. From Lemma [Lemma 2](#lem:dualconsesusboud){reference-type="ref" reference="lem:dualconsesusboud"}, we have $$\begin{aligned} S_4\le\! \frac{8N^2B_y\sqrt{p}}{r}\!\sum_{t=1}^{T}\frac{1}{\alpha_t}\!\sum_{k=1}^{t}\sigma^{t-k}\le\frac{16N^2B_y\sqrt{p}\sqrt{T}}{r(1-\sigma)}, \label{eq:sumTconsensuserror} \end{aligned}$$ where the last inequality in [\[eq:sumTconsensuserror\]](#eq:sumTconsensuserror){reference-type="eqref" reference="eq:sumTconsensuserror"} comes from $\!\sum_{t=1}^{T}\frac{1}{\alpha_t}\!\sum_{k=1}^{t}\sigma^{t-k}\le \sum_{t=0}^{T-1}\sigma^{t}\sum_{k=1}^{T}\frac{1}{\alpha_k}$. Let $\mathbf{x}_{t}=[(x_{1,t})^T, \ldots, (x_{N,t})^T]^T$. 
Similar to the proof of Theorem 2 in [@c19], the term $S_5$ can be bounded as $$\begin{aligned} &S_5\le\|\mathbf{x}_{1}\!-\!\mathbf{x}_1^\star\|^2+\sum_{t=1}^{T}(\sqrt{t+1}-\sqrt{t})\|\mathbf{x}_{t+1}\!-\!\mathbf{x}_{t+1}^\star\|^2\displaybreak[0]\notag\\ &+\sum_{t=1}^{T}\sqrt{t}(\mathbf{x}_{t+1}^\star\!-\!\mathbf{x}_{t}^\star)^T(\mathbf{x}_{t+1}^\star+\mathbf{x}_{t}^\star-2\mathbf{x}_{t+1})\displaybreak[0] \notag \\ &\le NR^2(1\!+\!\sum_{t=1}^{T}(\sqrt{t+1}\!-\!\sqrt{t})) \!+\!2NR\!\!\sum_{t=1}^{T}\!\sqrt{t}\|\mathbf{x}_{t+1}^\star\!-\!\mathbf{x}_{t}^\star\|\displaybreak[0]\notag\\ &\le 2NR^2\sqrt{T}+2NRV_T,\label{eq:etatVtsumtxtxstar} \end{aligned}$$ where $V_T:=\sum_{t=1}^T\sqrt{t}\sum_{i=1}^N\left\|x_{i, t+1}^*-x_{i, t}^*\right\|$ and the last inequality follows from Assumption [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"} and $\sqrt{T+1}\le 2\sqrt{T}$, $\forall T \ge 1$. Combining [\[eq:VtsumtTfitfitstar\]](#eq:VtsumtTfitfitstar){reference-type="eqref" reference="eq:VtsumtTfitfitstar"} with [\[eq:sumt1divideVt\]](#eq:sumt1divideVt){reference-type="eqref" reference="eq:sumt1divideVt"}--[\[eq:etatVtsumtxtxstar\]](#eq:etatVtsumtxtxstar){reference-type="eqref" reference="eq:etatVtsumtxtxstar"} yields Theorem [Theorem 1](#thm:TVdynamicregret){reference-type="ref" reference="thm:TVdynamicregret"}. ## Proof of Theorem [Theorem 2](#thm:WioutSlaterCV){reference-type="ref" reference="thm:WioutSlaterCV"} {#prf:WioutSlaterCV} Choosing $\tilde{x}_{i,t}=\tilde{x}_{i}$, $\forall i \in \mathcal{V}$, such that $\sum_{i=1}^N g_i(\tilde{x}_i)\le \mathbf{0}_p$, we have $\langle\bar{\mu}_t, \sum_{i=1}^{N}g_i(\tilde{x}_{i})\rangle \le 0$. 
Based on this and $\alpha_t=\sqrt{t}$, $\eta_t=t$, summing [\[eq:barmut1barmutnormbound\]](#eq:barmut1barmutnormbound){reference-type="eqref" reference="eq:barmut1barmutnormbound"} from $t=1$ to $T$ yields $$\begin{aligned} &\frac{N}{2}\sum_{t=1}^T(\|\bar{\mu}_{t+1}\|^2-\|\bar{\mu}_{t}\|^2) =\frac{N}{2}\|\bar{\mu}_{T+1}\|^2\le\!(\frac{N}{2}\!\!+\!\!\frac{N}{r})TB_y^2\!\displaybreak[0] \notag\\ &+\sum_{t=1}^T\!\frac{NG^2\alpha_t^2}{4\eta_t}\!+\!(2B_y\!+\!2F)\!\sum_{t=1}^T\sum_{i=1}^{N}\|\bar{\mu}_t\!-\!\lambda_{i,t+1}\|\displaybreak[0]\notag\\ &+\sum_{t=1}^T\sum_{i=1}^N \alpha_t\partial f_{i,t}(x_{i,t})^T\!(\tilde{x}_{i,t}\!-\!x_{i,t})\!\displaybreak[0] \notag\\ &+\sum_{t=1}^T\sum_{i=1}^N \eta_t(\|x_{i,t}\!-\tilde{x}_{i,t}\|^2\!\!-\!\|x_{i,t+1}\!-\!\tilde{x}_{i,t}\|^2)\displaybreak[0] \notag\\ &\le (\frac{N}{2}\!\!+\!\!\frac{N}{r})TB_y^2\!+\frac{NG^2T}{4}+\!(2B_y\!+\!2F)\!\frac{8N^2B_y\sqrt{p}T}{r(1-\sigma)}\displaybreak[0]\notag\\ &+NGRT^{\frac{3}{2}}+2TNR^2,\displaybreak[0] \label{eq:muT1normbound} \end{aligned}$$ where Lemma [Lemma 2](#lem:dualconsesusboud){reference-type="ref" reference="lem:dualconsesusboud"}, Cauchy--Schwarz inequality, Assumption [Assumption 2](#asm:primalcouplingprob){reference-type="ref" reference="asm:primalcouplingprob"}, [\[eq:boundsibradientoffigi\]](#eq:boundsibradientoffigi){reference-type="eqref" reference="eq:boundsibradientoffigi"}, [\[eq:etatVtsumtxtxstar\]](#eq:etatVtsumtxtxstar){reference-type="eqref" reference="eq:etatVtsumtxtxstar"}, and $\sum_{t=1}^{T}\alpha_t =\sum_{t=1}^{T}\sqrt{t}\le T\sqrt{T}= T^{\frac{3}{2}}$ are used to infer the last inequality. In light of [\[eq:updateofcouplinguk\]](#eq:updateofcouplinguk){reference-type="eqref" reference="eq:updateofcouplinguk"}, we have $\mu_{i,t+1} \ge \hat{\mu}_{i, t}+y_{i,t+1}$. 
Summing this inequality over $i=1, \ldots, N$ gives $\bar{\mu}_{t+1} \ge \bar{\mu}_{t}+\frac{1}{N}\sum_{i=1}^N g_i(x_{i,t+1})$, which leads to $\sum_{t=1}^T\sum_{i=1}^N g_{i}(x_{i,t+1})\le N\sum_{t=1}^T(\bar{\mu}_{t+1}-\bar{\mu}_{t})\le N\bar{\mu}_{T+1} \le N\|\bar{\mu}_{T+1}\|$. Invoking the convexity of $g_i$ gives $\sum_{t=1}^T\sum_{i=1}^N g_{i}(x_{i,t})\le \sum_{t=1}^T\sum_{i=1}^N g_{i}(x_{i,t+1})+NGR\mathbf{1}_p\le N\bar{\mu}_{T+1}+NGR\mathbf{1}_p$, which leads to $$\begin{aligned} \operatorname{Reg}^c(T) \le N\|\bar{\mu}_{T+1}\|+NGR\sqrt{p}.\displaybreak[0] \label{eq:sumTsumNgitgit1} \end{aligned}$$ The inequality [\[eq:muT1normbound\]](#eq:muT1normbound){reference-type="eqref" reference="eq:muT1normbound"} implies $\|\bar{\mu}_{T+1}\|=\mathcal{O}(T^{\frac{3}{4}})$. Combining this with [\[eq:sumTsumNgitgit1\]](#eq:sumTsumNgitgit1){reference-type="eqref" reference="eq:sumTsumNgitgit1"} gives Theorem [Theorem 2](#thm:WioutSlaterCV){reference-type="ref" reference="thm:WioutSlaterCV"}. ## Proof of Theorem [Theorem 3](#thm:WiSlaterCV){reference-type="ref" reference="thm:WiSlaterCV"} {#prf:WiSlaterCV} From [\[eq:sumTsumNgitgit1\]](#eq:sumTsumNgitgit1){reference-type="eqref" reference="eq:sumTsumNgitgit1"}, we observe that $\operatorname{Reg}^c(T)$ depends on $\|\bar{\mu}_{T+1}\|$. The following lemma presents a tighter bound on $\|\bar{\mu}_{T+1}\|$ than [\[eq:muT1normbound\]](#eq:muT1normbound){reference-type="eqref" reference="eq:muT1normbound"}, enabling a tighter bound on $\operatorname{Reg}^c(T)$. **Lemma 5**. *Let $\tau=\lceil \sqrt{t} \rceil$, $\delta=B_y+\epsilon$. For any $t \ge 1$, $$\begin{aligned} \|\bar{\mu}_t\|&\le 4\delta\sqrt{t}\!+\! \theta_{t}(\tau)\!+\!\frac{16\sqrt{t}\delta^2}{\epsilon} \log\frac{32\delta^2}{\epsilon^2}\!+\!6B_y. 
\displaybreak[0] \end{aligned}$$ where $\theta_t(\tau)=(1+\frac{2}{r})\frac{B_y^2}{\epsilon}+\frac{G^2}{2\epsilon}\!+\!\frac{(2B_y\!+\!2F)16NB_y\sqrt{p}}{r\epsilon(1-\sigma)}+\frac{4GR\alpha_t}{\epsilon}+\frac{4R^2\eta_t}{\epsilon\tau}+(2 B_y^2+\epsilon)\tau$.* *Proof.* We first bound the difference between $\|\bar{\mu}_{t+1}\|$ and $\|\bar{\mu}_{t}\|$, $\forall t \ge 1$, i.e., $$\begin{aligned} -B_y\le \|\bar{\mu}_{t+1}\|- \|\bar{\mu}_{t}\|\le B_y,\displaybreak[0] \label{eq: barmut1barmutlowuppbound} \end{aligned}$$ where [\[eq:boundofepsiloncit\]](#eq:boundofepsiloncit){reference-type="eqref" reference="eq:boundofepsiloncit"} and [\[eq:barmut1barmut\]](#eq:barmut1barmut){reference-type="eqref" reference="eq:barmut1barmut"} give rise to the right-hand inequality. With regard to the left-hand inequality, it follows from $\|\bar{\mu}_{t}\|-\|\bar{\mu}_{t+1}\|\le \|\bar{\mu}_{t+1}-\bar{\mu}_{t}\|=\|\frac{1}{N}\sum_{i=1}^{N}\epsilon_{i,t+1}\|\le B_y$. Let $\tilde{x}_{i,t}=\hat{x}_{i}$ and $\triangle_s=\frac{1}{2}\|\bar{\mu}_{s+1}\|^2-\frac{1}{2}\|\bar{\mu}_{s}\|^2$. Summing [\[eq:barmut1barmutnormbound\]](#eq:barmut1barmutnormbound){reference-type="eqref" reference="eq:barmut1barmutnormbound"} over $s=t, t+1, \ldots, t+\tau-1$, we have $$\begin{aligned} &\sum_{s=t}^{t+\tau-1}\!\!\triangle_s\le\!(\frac{1}{2}\!+\!\frac{1}{r})B_y^2\tau\!+\!\frac{G^2\tau}{4}\!+\!\eta_{t+\tau-1}R^2-\!\epsilon\!\!\sum_{s=t}^{t+\tau-1}\!\!\|\bar{\mu}_{s}\|\!\displaybreak[0]\notag\\ &+\frac{2B_y\!+\!2F}{N}\!\sum_{s=t}^{t+\tau-1}\!\!\sum_{i=1}^{N}\!\|\bar{\mu}_s\!-\!\lambda_{i,s+1}\|+\!GR\sum_{s=t}^{t+\tau-1}\!\alpha_s\!, \displaybreak[0]\label{eq:trianglesbound} \end{aligned}$$ where $\eta_{t+\tau-1}R^2$ is obtained by referring to [\[eq:etatVtsumtxtxstar\]](#eq:etatVtsumtxtxstar){reference-type="eqref" reference="eq:etatVtsumtxtxstar"}. 
Based on Assumption [Assumption 3](#asm: slatercondition){reference-type="ref" reference="asm: slatercondition"}, the term $-\epsilon\sum_{s=t}^{t+\tau-1}\|\bar{\mu}_s\|$ comes from $\sum_{s=t}^{t+\tau-1}\langle\bar{\mu}_s,\sum_{i=1}^{N}g_i(\hat{x}_{i})\rangle\le \sum_{s=t}^{t+\tau-1}\langle\bar{\mu}_s,-\epsilon\mathbf{1}_p\rangle\le -\epsilon\sum_{s=t}^{t+\tau-1}\|\bar{\mu}_s\|$, since $\bar{\mu}_s \ge \mathbf{0}_p$ implies $\langle\bar{\mu}_s,\mathbf{1}_p\rangle\ge\|\bar{\mu}_s\|$. Since $1 \le \tau \le t+1$ and $\alpha_s=\sqrt{s}$, we obtain $\sum_{s=t}^{t+\tau-1}\alpha_s\le 2\tau \alpha_{t}$ and $\eta_{t+\tau-1}\le 2\eta_t$. By resorting to Lemma [Lemma 2](#lem:dualconsesusboud){reference-type="ref" reference="lem:dualconsesusboud"} and [\[eq: barmut1barmutlowuppbound\]](#eq: barmut1barmutlowuppbound){reference-type="eqref" reference="eq: barmut1barmutlowuppbound"}, we obtain $$\begin{aligned} &\sum_{s=t}^{t+\tau-1}\sum_{i=1}^{N}\|\bar{\mu}_s\!-\!\lambda_{i,s+1}\|\ \le \frac{8N^2B_y\sqrt{p}}{r(1-\sigma)}\tau, \displaybreak[0] \label{eq:sumTaudualconsenus}\\ &\sum_{s=t}^{t+\tau-1}\|\bar{\mu}_{s}\|\!\!\ge\!\!\! \sum_{s=t}^{t+\tau-1}\!(\|\bar{\mu}_{t}\|\!\!-\!\!(s\!-\!t)B_y)\!\ge\! \tau \|\bar{\mu}_{t}\|\!-\!\tau^2B_y, \displaybreak[0] \label{eq:barmuslowbound} \end{aligned}$$ which together with [\[eq:trianglesbound\]](#eq:trianglesbound){reference-type="eqref" reference="eq:trianglesbound"} results in $$\begin{aligned} &\sum_{s=t}^{t+\tau-1}\triangle_s\le(\frac{1}{2}\!+\!\frac{1}{r})B_y^2\tau\!+\!\frac{G^2\tau}{4}\!+\!2R^2\eta_t\!+\!2GR\tau \alpha_t\displaybreak[0]\notag\\ &+\!\frac{(2B_y\!+\!2F)8NB_y\sqrt{p}}{r(1-\sigma)}\tau\!+\!\epsilon\tau^2B_y\!-\!\epsilon\tau \|\bar{\mu}_{t}\|. \displaybreak[0]\notag \label{eq: sumtrianglewithminusbarmut} \end{aligned}$$ This inequality implies $\|\bar{\mu}_{t+\tau}\|^2\!=\!\|\bar{\mu}_{t}\|^2\!+\!2\sum_{s=t}^{t+\tau-1}\!\!\triangle_s \le\|\bar{\mu}_{t}\|^2\!-\!2\epsilon\tau \|\bar{\mu}_{t}\|+\epsilon\tau\theta_t(\tau)$ according to the definition of $\theta_t(\tau)$. 
Thus, if $\|\bar{\mu}_t\|\ge \theta_t(\tau)$, we have $$\label{eq:batmutaddtboundbytau} \|\bar{\mu}_{t+\tau}\|-\|\bar{\mu}_t\| \leq -\frac{\epsilon\tau}{2}, \forall t \ge 1.$$ Next we utilize [\[eq:batmutaddtboundbytau\]](#eq:batmutaddtboundbytau){reference-type="eqref" reference="eq:batmutaddtboundbytau"} to bound $\|\bar{\mu}_t\|$. Consider the case $t \ge 6$. Let $\delta=B_y+\epsilon,~\xi=\frac{\epsilon}{2}$, $\tilde{r}=\frac{\xi}{4\lceil \sqrt{t}\rceil\delta^2}$, and $\rho=1-\frac{\tilde{r}\xi\tau}{2}$, which implies $0<\rho <1$. Denote $w_t=\|\bar{\mu}_t\|-\|\bar{\mu}_{t-\tau}\|$. According to [\[eq: barmut1barmutlowuppbound\]](#eq: barmut1barmutlowuppbound){reference-type="eqref" reference="eq: barmut1barmutlowuppbound"}, $w_t=\sum_{s=t-\tau}^{t-1}(\|\bar{\mu}_{s+1}\|-\|\bar{\mu}_{s}\|) \le \tau B_y \le \tau\delta$. As in Lemma 6 of [@c18], $e^{\tilde{r}\|\bar{\mu}_t\|}=e^{\tilde{r}(w_t+\|\bar{\mu}_{t-\tau}\|)}\le e^{\tilde{r}\|\bar{\mu}_{t-\tau}\|}(1+\tilde{r} w_t+\frac{1}{2} \tilde{r} \tau \xi)$. Note that $t-\tau \ge 1$, $\forall t \ge 6$. If $\|\bar{\mu}_{t-\tau}\|\ge \theta_{t-\tau}(\tau)$, we have $w_t=\|\bar{\mu}_t\|\!-\!\|\bar{\mu}_{t-\tau}\|\le-\frac{\epsilon\tau}{2}= -\xi\tau$ by [\[eq:batmutaddtboundbytau\]](#eq:batmutaddtboundbytau){reference-type="eqref" reference="eq:batmutaddtboundbytau"}, which implies $$\begin{aligned} e^{\tilde{r}\|\bar{\mu}_t\|} \le \rho e^{\tilde{r}\|\bar{\mu}_{t-\tau}\|}+e^{\tilde{r}\tau\delta}e^{\tilde{r}\theta_{t-\tau}(\tau)}.\displaybreak[0] \label{eq:erbarmutbound} \end{aligned}$$ It is easy to verify that [\[eq:erbarmutbound\]](#eq:erbarmutbound){reference-type="eqref" reference="eq:erbarmutbound"} holds if $\|\bar{\mu}_{t-\tau}\|< \theta_{t-\tau}(\tau)$. Moreover, $\forall t \ge 6$, $\lfloor \frac{t}{\tau}\rfloor= k$ for some $k \ge 2$. Consequently, $t-(k-2)\tau \ge 1$. 
Thus, we can apply [\[eq:erbarmutbound\]](#eq:erbarmutbound){reference-type="eqref" reference="eq:erbarmutbound"} for $s=t, t-\tau, \ldots, t-(k-2)\tau$ to obtain $$\begin{aligned} &e^{\tilde{r}\|\bar{\mu}_t\|}\le \rho e^{\tilde{r}\|\bar{\mu}_{t-\tau}\|}+e^{\tilde{r}\tau\delta}e^{\tilde{r}\theta_{t-\tau}(\tau)}\displaybreak[0] \notag\\ & \le \rho^{k-1} e^{\tilde{r} \|\bar{\mu}_{t-(k-1) \tau}\|}+e^{\tilde{r} \delta \tau} \sum_{i=1}^{k-1} \rho^{i-1} e^{\tilde{r} \theta_{t-i \tau}}\displaybreak[0] \notag\\ &\le\rho^{k-1} e^{2\tilde{r}\tau\delta}+e^{\tilde{r} \delta \tau}e^{\tilde{r} \theta_{t}} \sum_{i=1}^{k-1} \rho^{i-1} \le \frac{e^{2\tilde{r}\tau\delta}e^{\tilde{r} \theta_{t}}}{1-\rho}, \displaybreak[0] \label{eq:tmore9erbarmutbound} \end{aligned}$$ where the third inequality results from: (1) $\|\bar{\mu}_{t-(k-1) \tau}\|\le (t-(k-1) \tau)B_y\le 2\tau\delta$ according to [\[eq: barmut1barmutlowuppbound\]](#eq: barmut1barmutlowuppbound){reference-type="eqref" reference="eq: barmut1barmutlowuppbound"} and $t-(k-1) \tau \le 2\tau$; (2) $0 <\theta_{t-i \tau} \le\theta_{t}$ because $\theta_{t}$ increases with $t$. From [\[eq:tmore9erbarmutbound\]](#eq:tmore9erbarmutbound){reference-type="eqref" reference="eq:tmore9erbarmutbound"} and $\tau=\lceil\sqrt{t}\rceil\le 2\sqrt{t}$, we have $$\begin{aligned} \|\bar{\mu}_t\|&\le 2\tau\delta+\theta_{t}(\lceil\sqrt{t}\rceil)+\frac{1}{\tilde{r}} \log\frac{1}{1-\rho}\displaybreak[0] \notag\\ & \le 4\delta\sqrt{t}+\theta_{t}(\lceil\sqrt{t}\rceil)\!+\!\frac{16\sqrt{t}\delta^2}{\epsilon} \log\frac{32\delta^2}{\epsilon^2}\!+\!6B_y. \displaybreak[0]\label{eq:withSCbarmutbound} \end{aligned}$$ Consider the case $t<6$. It is straightforward to obtain $\|\bar{\mu}_t\|\le tB_y\le 6B_y$. Thus, Lemma [Lemma 5](#lem:withSCboundofbarmut){reference-type="ref" reference="lem:withSCboundofbarmut"} holds. 
◻ Since $\theta_{t}(\lceil\sqrt{t}\rceil) =\mathcal{O}(\sqrt{t})$ according to the definition of $\theta_{t}$ in Lemma [Lemma 5](#lem:withSCboundofbarmut){reference-type="ref" reference="lem:withSCboundofbarmut"}, by combining it with [\[eq:withSCbarmutbound\]](#eq:withSCbarmutbound){reference-type="eqref" reference="eq:withSCbarmutbound"}, $\|\bar{\mu}_{t}\|=\mathcal{O}(\sqrt{t})$. Like [\[eq:sumTsumNgitgit1\]](#eq:sumTsumNgitgit1){reference-type="eqref" reference="eq:sumTsumNgitgit1"}, we have $\operatorname{Reg}^c(T)=\mathcal{O}(\sqrt{T})$. Thus, Theorem [Theorem 3](#thm:WiSlaterCV){reference-type="ref" reference="thm:WiSlaterCV"} holds. [^1]: D. Wang is with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China, with the University of Chinese Academy of Sciences, Beijing 100049, China, and with Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China. Email: `wangdd2@shanghaitech.edu.cn`. D. Zhu and J. Lu are with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China. Email: `{zhudk, lujie}@shanghaitech.edu.cn`. K.C. Sou is with the Department of Electrical Engineering, National Sun Yat-sen University, Taiwan. Email: `sou12@mail.nsysu.edu.tw`.
{ "id": "2309.01509", "title": "Distributed Online Optimization with Coupled Inequality Constraints over\n Unbalanced Directed Networks", "authors": "Dandan Wang, Daokuan Zhu, Kin Cheong Sou and Jie Lu", "categories": "math.OC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We give a geometric model for the category of coherent sheaves over the weighted projective line of type $(p,q)$ in terms of an annulus with marked points on its boundary. We establish a bijection between indecomposable sheaves over the weighted projective line and certain homotopy classes of oriented curves in the annulus, and prove that the dimension of the extension group between indecomposable sheaves equals the positive intersection number between the corresponding curves. By using the geometric model, we provide a combinatorial description of the tilting graph of tilting bundles, which is composed of quadrilaterals (or degenerates to a line). Moreover, we obtain that the automorphism group of the coherent sheaf category is isomorphic to the mapping class group of the marked annulus, and show the compatibility of their actions on the tilting graph of coherent sheaves and on the triangulations of the geometric model respectively. A geometric description of the perpendicular category with respect to an exceptional sheaf is presented at the end of the paper. author: - Jianmin Chen, Shiquan Ruan and Hongxia Zhang$^*$ title: Geometric model for weighted projective lines of type $(p,q)$ --- [^1] [^2] # Introduction Geometric models for categories have been studied by various authors in recent years. For instance, a geometric construction of cluster categories of type $A$ was given by Caldero-Chapoton-Schiffler in [@CCS2006], of type $D$ was given by Schiffler in [@S08] and of type $A_\infty$ was given by Holm-Jørgensen in [@HJ2012]. There has also been progress on geometric models of abelian categories. Baur-Marsh in [@BM2012] provided a geometric model for tube categories. In [@W2008], Warkentin established a bijection between string modules over a quiver of affine type $A$ and certain oriented curves in a marked annulus. By these geometric realizations, many algebraic properties (e.g. 
the extension dimensions, Auslander-Reiten triangles, Auslander-Reiten sequences) of these categories can be studied in geometric terms. We refer to [@BM07; @BM08; @BS2021; @BT2019; @DLL19; @FST2008; @HZZ23; @S08; @T15] for related topics. Weighted projective lines and their coherent sheaf categories were introduced by Geigle and Lenzing in [@GL87], in order to give a geometric realization of canonical algebras in the sense of Ringel [@Ringel1984]. The study of weighted projective lines is closely connected with many mathematical branches, such as Lie theory [@Crawley-Boevey2010; @DengRuanXiao2020; @DouJiangXiao2012; @Schiffmann2004], representation theory [@M2004; @CLLR2021; @FG383; @HR1999] and singularity theory [@EbelingPloog2010; @Hubner1989; @Hubner1996; @Lenzing1994; @Lenzing1998; @Lenzing2011], in particular including the aspects of Arnold's strange duality [@EbelingPloog2010; @EbelingTakahashi2013] and homological mirror symmetry [@Ebeling2003; @Scerbak1978]. By Geigle-Lenzing [@GL87], the coherent sheaf category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ of type $(p,q)$ is derived equivalent to the finitely generated module category ${\rm mod}\tilde{A}_{p,q}$ of the canonical algebra $\tilde{A}_{p,q}$ (affine type $A$). Inspired by [@BT2019; @W2008], we hope to give a geometric model for the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ in terms of a marked annulus. Let $A_{p,q}$ be an annulus with $p$ marked points on the inner boundary and $q$ marked points on the outer boundary. We establish a bijection between indecomposable sheaves over ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ and certain homotopy classes of oriented curves in the annulus $A_{p,q}$ (see Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}). 
Under this correspondence, Auslander-Reiten sequences in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ can be realized via elementary moves in $A_{p,q}$ (see Proposition [Proposition 10](#irreducible morphism){reference-type="ref" reference="irreducible morphism"}), and the dimension of the extension space between two indecomposable coherent sheaves equals the positive intersection number of the corresponding curves (see Theorem [Theorem 14](#dimension and positive intersection){reference-type="ref" reference="dimension and positive intersection"}). Moreover, we obtain that the tilting sheaves in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ are in natural bijection with the triangulations of $A_{p,q}$ (see Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}), and the flip of an arc is compatible with the tilting mutation (see Proposition [Proposition 23](#mutation and flip){reference-type="ref" reference="mutation and flip"}). We point out here that the geometric model given in Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"} differs in an intuitive way from the geometric realization of ${\rm mod}\tilde{A}_{p,q}$ in [@BT2019; @W2008]. More precisely, the indecomposable modules in the postprojective (*resp.* preinjective) components of ${\rm mod}\tilde{A}_{p,q}$ correspond to the bridging curves whose orientation is from the outer boundary to the inner boundary (*resp.* from the inner boundary to the outer boundary). However, all line bundles in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ correspond to the bridging curves whose orientations are from the outer boundary to the inner boundary. 
Another interesting point is that, since the dimensions of the Hom-space and Ext-space between two indecomposable sheaves have explicit formulas due to the structure of the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, Theorem [Theorem 14](#dimension and positive intersection){reference-type="ref" reference="dimension and positive intersection"} makes the positive intersection number of the corresponding curves in $A_{p,q}$ easy to calculate. Therefore, it seems that the category of coherent sheaves over $\mathbb{X}(p,q)$ provides a nice categorification model for the annulus $A_{p,q}$. The geometric model has applications to the automorphism group of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ and the tilting graph of coherent sheaves. Denote by $$\mathcal{T}_{A_{p,q}}:=\{{\rm Triangulations\ of}\ A_{p,q}\} {\text{\quad and \quad}} \mathcal{T}_{\mathbb{X}}:=\{{\rm Tilting \ sheaves \ in} \ {\rm coh}\mbox{-}\mathbb{X}(p,q)\}.$$ Then there is a bijection $\phi: \mathcal{T}_{A_{p,q}} \rightarrow\mathcal{T}_{\mathbb{X}}$, see ([\[map\]](#map){reference-type="ref" reference="map"}). There is an isomorphism $\psi$ between the mapping class group $\mathcal{MG} (A_{p,q})$ of the marked annulus $A_{p,q}$ and the automorphism group ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, see [\[automor psi\]](#automor psi){reference-type="eqref" reference="automor psi"}. Any automorphism of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ preserves tilting sheaves. Hence there is a natural group action of ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ on $\mathcal{T}_{\mathbb{X}}$. On the other hand, the mapping class group $\mathcal{MG} (A_{p,q})$ naturally acts on the set of triangulations of $A_{p,q}$. It turns out that these two actions are compatible. 
That is, we have the following commutative diagram, where the commutativity is in the sense of ([\[compatible of groups action\]](#compatible of groups action){reference-type="ref" reference="compatible of groups action"}). **Theorem 1**. *For any $f\in \mathcal{MG} (A_{p,q})$ and any triangulation $\Gamma$ of $A_{p,q}$, we have $$\begin{aligned} \label{compatible of groups action} \phi(f(\Gamma))=\psi(f)(\phi(\Gamma)).\end{aligned}$$* The *tilting graph* $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ has as vertices the isomorphism classes of tilting sheaves in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, while two vertices are connected by an edge if and only if the associated tilting sheaves differ by precisely one indecomposable direct summand. The full subgraph of $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ consisting of tilting bundles will be denoted by $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$. The connectedness of the tilting graph for weighted projective lines has been investigated widely in the literature from the categorical viewpoint; see for example [@BKL2008; @HU2005; @GengSF2020; @FG383]. However, the explicit shape of the tilting graph is still unknown. By using the above geometric model, we provide a combinatorial description of $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$. Let $\Lambda_{(p,q)}$ be a graph with vertices $$\Lambda_{(p,q)}^0=\{(c_1, \cdots, c_p)\in\mathbb{Z}^{p}|c_1\leq \cdots\leq c_p\leq c_1+q\},$$ and there exists an edge between two vertices $(c_1, \cdots, c_p)$ and $(d_1, \cdots, d_p)$ if and only if $$\sum_{i=1}^{p}|c_i-d_i|=1.$$ **Theorem 2**. *The tilting graph $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ coincides with the graph $\Lambda_{(p,q)}$.* Denote by $\eta$ the bijection from $\Lambda_{(p,q)}$ to $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ obtained in Theorem [Theorem 2](#description of tilting graph of vector bundles){reference-type="ref" reference="description of tilting graph of vector bundles"}. 
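The defining conditions of the graph $\Lambda_{(p,q)}$ are purely combinatorial and can be checked directly; the following is a minimal illustrative sketch (function names are ours, not the paper's), representing a vertex as an integer tuple $(c_1,\ldots,c_p)$:

```python
def is_vertex(c, q):
    """Vertex test for Lambda_{(p,q)}^0: c_1 <= ... <= c_p <= c_1 + q."""
    return all(c[i] <= c[i + 1] for i in range(len(c) - 1)) and c[-1] <= c[0] + q

def is_edge(c, d):
    """Two vertices are joined by an edge iff sum_i |c_i - d_i| = 1,
    i.e. the tuples differ by exactly 1 in exactly one coordinate."""
    return sum(abs(ci - di) for ci, di in zip(c, d)) == 1
```

For instance, with $p=2$ and $q=3$, the tuples $(0,1)$ and $(0,2)$ are adjacent vertices, while $(0,4)$ violates $c_p\le c_1+q$.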
Let $H_{p,q}=\langle r_1, r_2|r_1r_2=r_2r_1, r_1^{p}=r_2^{q}\rangle$ and $$\widetilde{H}_{p,q}= \left\{ \begin{array}{ll} H_{p,q}, & p\neq q;\\ H_{p,q}\times\mathbb{Z}_2, & p=q.\\ \end{array} \right.$$ Then $\widetilde{H}_{p,q}$ coincides with the mapping class group $\mathcal{MG} (A_{p,q})$, hence there exists a group isomorphism from $\widetilde{H}_{p,q}$ to ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$, still denoted by $\psi$. We construct an unexpected group action of $\widetilde{H}_{p,q}$ on the graph $\Lambda_{(p,q)}$ (cf. Proposition [Proposition 44](#bijective map of vertex such that commutative){reference-type="ref" reference="bijective map of vertex such that commutative"}), which is compatible with the group action of ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ on $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ in the following sense. **Theorem 3**. *For any $f\in \widetilde{H}_{p,q}$ and any vertex $\nu$ in $\Lambda_{(p,q)}$, we have $$\begin{aligned} \label{compatible of groups action2} \eta(f(\nu))=\psi(f)(\eta(\nu)).\end{aligned}$$ Consequently, we have the following commutative diagram* The paper is organized as follows. In Section 2, we recall some basic facts on weighted projective lines of type $(p,q)$. In Section 3, we show that the marked annulus $A_{p,q}$ gives a geometric model of the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. In Section 4, we establish a bijection between tilting sheaves in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ and triangulations of $A_{p,q}$, and show that the flip of an arc is compatible with the tilting mutation of an indecomposable sheaf. Sections 5 and 6 focus on tilting mutation and tilting graphs, with the aim of proving Theorem [Theorem 2](#description of tilting graph of vector bundles){reference-type="ref" reference="description of tilting graph of vector bundles"}. 
We give a geometric interpretation of the automorphism group of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ in Section 7, and prove Theorem [Theorem 1](#compatible){reference-type="ref" reference="compatible"} and Theorem [Theorem 3](#compatible2){reference-type="ref" reference="compatible2"}. In the final Section 8, we present a geometric description of the perpendicular category of an exceptional sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. # Weighted projective lines of type (p,q) {#wpl} In this section, we recall from [@GL87; @Len2011] some basic facts about the weighted projective lines of type $(p,q)$ for $p,q\in \mathbb{Z}_{\geq1}$. Let $\mathbb{L}(p,q)$ be the abelian group on generators $\vec{x}_1, \vec{x}_2$ with relations $$p\vec{x}_1=q\vec{x}_2:=\vec{c}.$$ Then each $\vec{x}\in\mathbb{L}(p, q)$ can be uniquely written in *normal form* $$\vec{x}=l_1\vec{x}_1+l_2\vec{x}_2+l\vec{c},\;\;{\rm where}\,\, 0\leq l_1\leq p-1,\,0\leq l_2\leq q-1\,\,{\rm and}\;l\in \mathbb{Z}.$$ $\mathbb{L}(p,q)$ is an ordered group whose cone of positive elements is $\mathbb{N}\vec{x}_1+\mathbb{N}\vec{x}_2$. We equip $\mathbb{L}(p,q)$ with the structure of a partially ordered set: $\vec{x}\leq\vec{y}$ if and only if $\vec{y}-\vec{x}\in \mathbb{N}\vec{x}_1+\mathbb{N}\vec{x}_2$. Let $\mathbf{k}$ be an algebraically closed field and ${\boldsymbol\lambda}= (\lambda_1,\lambda_2)$ be a sequence of pairwise distinct closed points on the projective line $\mathbb{P}_{\mathbf k}^1$. A *weighted projective line* $\mathbb{X}(p,q)$ of weight type $(p,q)$ and parameter sequence ${\boldsymbol\lambda}$ is obtained from the projective line $\mathbb{P}_{\mathbf k}^1$ by attaching the weight $p,\; q$ to $\lambda_1, \lambda_2$, respectively. 
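The normal form and the partial order on $\mathbb{L}(p,q)$ introduced above can be computed mechanically from the relation $p\vec{x}_1=q\vec{x}_2=\vec{c}$. The following sketch is illustrative only (not part of the paper); an element $a\vec{x}_1+b\vec{x}_2$ is represented by its coefficient pair $(a,b)$:

```python
def normal_form(a, b, p, q):
    """Normal form of a*x1 + b*x2 in L(p,q): returns (l1, l2, l) with
    a*x1 + b*x2 = l1*x1 + l2*x2 + l*c, 0 <= l1 <= p-1, 0 <= l2 <= q-1,
    using the relation p*x1 = q*x2 = c."""
    l1, m1 = a % p, a // p   # a = l1 + m1*p, so a*x1 = l1*x1 + m1*c
    l2, m2 = b % q, b // q   # b = l2 + m2*q, so b*x2 = l2*x2 + m2*c
    return l1, l2, m1 + m2

def leq(a1, b1, a2, b2, p, q):
    """Partial order: x <= y iff y - x lies in N*x1 + N*x2, which in the
    normal form (l1, l2, l) of y - x amounts to l >= 0."""
    return normal_form(a2 - a1, b2 - b1, p, q)[2] >= 0
```

For example, in $\mathbb{L}(2,3)$ one has $3\vec{x}_1-\vec{x}_2=\vec{x}_1+2\vec{x}_2$ in normal form (using $2\vec{x}_1=3\vec{x}_2$), and $\vec{x}_1$, $\vec{x}_2$ are incomparable.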
The parameter sequence can be normalized as $\lambda_1=\infty,\;\lambda_2=0.$ The *homogeneous coordinate algebra* $S(p,q)$ of the weighted projective line $\mathbb{X}(p,q)$ is given by $\mathbf{k}[x_1, x_2]$, which is $\mathbb{L}(p, q)$-graded by means of $\deg x_i=\vec{x}_i$ for $i=1,\,2$. That is, $S(p,q)=\bigoplus_{\vec{x}\in \mathbb{L}(p,q)} S(p,q)_{\vec{x}}$, where $S(p,q)_{\vec{x}}$ is the homogeneous component of degree $\vec{x}$. In particular, if we write $\vec{x}=l_1\vec{x}_1+l_2\vec{x}_2+l\vec{c}$ in its normal form, then $S(p,q)_{\vec{x}}\neq 0$ if and only if $l\geq 0$. Moreover, $\{x_1^{l_1+pa}x_2^{l_2+qb}\; |\; a+b=l, a, b\geq 0\}$ form a ${\mathbf k}$-basis of $S(p,q)_{\vec{x}}$; see [@GL87 Proposition 1.3]. We recall the definition of the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ of coherent sheaves over $\mathbb{X}(p,q)$ by a convenient description via graded $S(p,q)$-modules. Let ${\rm mod}^{{\mathbb L}(p,q)}\mbox{-}S(p,q)$ be the abelian category of finitely generated ${\mathbb L}(p,q)$-graded $S(p,q)$-modules, and ${\rm mod}_0^{{\mathbb L}(p,q)}\mbox{-}S(p,q)$ be its Serre subcategory formed by finite dimensional modules. Denote by ${\rm qmod}^{{\mathbb L}(p,q)}\mbox{-}S(p,q):={\rm mod}^{{\mathbb L}(p,q)}\mbox{-}S(p,q)/{{\rm mod}_0^{{\mathbb L}(p,q)}\mbox{-}S(p,q)}$ the quotient abelian category. By [@GL87 Theorem 1.8] the sheafification functor yields an equivalence $${\rm qmod}^{{\mathbb L}(p,q)}\mbox{-}S(p,q)\stackrel{\sim}\longrightarrow {\rm coh}\mbox{-}\mathbb{X}(p,q).$$ From now on we will identify these two categories. Notice that in this paper, we will abbreviate ${\rm Hom}_{{\rm coh}\mbox{-}\mathbb{X}(p,q)}(-,-)$ and ${\rm Ext}^{1}_{{\rm coh}\mbox{-}\mathbb{X}(p,q)}(-,-)$ as ${\rm Hom}(-,-)$ and ${\rm Ext}^{1}(-,-)$ respectively. **Proposition 4** ([@GL87; @Len2011]). 
*The category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ is connected, ${\rm Hom}$-finite and $\mathbf{k}$-linear with the following properties:* - *${\rm coh}\mbox{-}\mathbb{X}(p,q)$ satisfies Serre duality in the form ${\rm Ext}^{1}(X,Y)\cong{\rm DHom}(Y, \tau(X)),$ where the $\mathbf{k}$-equivalence $\tau:{\rm coh}\mbox{-}\mathbb{X}(p,q)\longrightarrow {\rm coh}\mbox{-}\mathbb{X}(p,q)$ is the shift $X\mapsto X(\vec{\omega})$ with the dualizing element $\vec{\omega}=-(\vec{x}_1+\vec{x}_2)\in \mathbb{L}(p,q).$* - *${\rm coh}\mbox{-}\mathbb{X}(p,q)={\rm vec}\mbox{-}\mathbb{X}(p,q)\bigvee{\rm coh}_{0}\mbox{-}\mathbb{X}(p,q),$ where ${\rm vec}\mbox{-}\mathbb{X}(p,q)$ denotes the full subcategory of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ consisting of vector bundles over $\mathbb{X}(p,q)$, and ${\rm coh}_{0}\mbox{-}\mathbb{X}(p,q)$ denotes the full subcategory of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ consisting of all objects of finite length, $\bigvee$ means that each indecomposable object of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ is either in ${\rm vec}\mbox{-}\mathbb{X}(p,q)$ or in ${\rm coh}_{0}\mbox{-}\mathbb{X}(p,q)$, and there are no non-zero morphisms from ${\rm coh}_{0}\mbox{-}\mathbb{X}(p,q)$ to ${\rm vec}\mbox{-}\mathbb{X}(p,q).$* - *Each indecomposable bundle over $\mathbb{X}(p,q)$ is a line bundle.* - *Each line bundle $L$ is exceptional, i.e., ${\rm Hom}(L,L)\cong\mathbf{k}$ and ${\rm Ext}^{1}(L,L)=0$.* - *For any $\vec{x}, \vec{y}\in\mathbb{L}(p,q)$, we have $${\rm Hom}({\mathcal O}(\vec{x}), {\mathcal O}(\vec{y}))\cong S(p,q)_{\vec{y}-\vec{x}}.$$ In particular, ${\rm Hom}({\mathcal O}(\vec{x}), {\mathcal O}(\vec{y}))\neq 0$ if and only if $\vec{x}\leq \vec{y}.$* - *Denote by $K_{0}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ the *Grothendieck group* of the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. Then the classes $[{\mathcal O}(\vec{x})] \; (0\leq \vec{x}\leq \vec{c})$ form a $\mathbb{Z}$-basis of $K_{0}({\rm coh}\mbox{-}\mathbb{X}(p, q))$. 
Moreover, the rank of the Grothendieck group $K_{0}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ equals $p+q.$* By [@LR06 Proposition 1.1], the subcategory ${\rm coh}_0\mbox{-}\mathbb{X}(p,q)$ decomposes into a coproduct $\coprod_{\lambda\in \mathbb{P}_{\mathbf k}^1}\mathcal{U}_{\lambda}$ of connected uniserial length categories, whose associated Auslander-Reiten quiver is a stable tube $\mathbb{ZA}_{\infty}/(\tau^r)$ for some $r\in \mathbb{Z}_{\geq 1}$ (see e.g. [@SS2007]). Here, $r$ is called the rank of the stable tube $\mathbb{ZA}_{\infty}/(\tau^r)$, which depends on $\lambda$. Precisely, $r=p,\; q$ for $\lambda=\infty,\;0$, respectively, and $r=1$ for $\lambda \in \mathbf k^{*}=\mathbf k\setminus \{0\}$. The objects lying at the bottom of the stable tubes are exactly the simple objects of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. A stable tube of rank $1$ is called a *homogeneous stable tube*. Each $\lambda \in \mathbf k^{*}$ is associated with a unique simple sheaf $S_{\lambda}$, called *ordinary simple*; while $\lambda=\infty$ (*resp.* $\lambda=0$) is associated with $p$ (*resp.* $q$) simple sheaves $S_{\infty,i}\;(i\in\mathbb{Z}/p\mathbb{Z})$ (*resp.* $S_{0,i}\; (i\in\mathbb{Z}/q\mathbb{Z}$)) called *exceptional simples*. Now we recall some important short exact sequences in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. For each ordinary simple sheaf $S_{\lambda}$, there is an exact sequence $$0\longrightarrow \mathcal{O}\stackrel{u_{\lambda}}{\longrightarrow}\mathcal{O}(\vec{c})\longrightarrow S_{\lambda}\longrightarrow 0,$$ where the homomorphism $u_{\lambda}: \mathcal{O}\longrightarrow \mathcal{O}(\vec{c})$ is given by multiplication with $x_2^{q}-\lambda x_1^{p}$. 
By contrast, if $\lambda\in \{\infty, 0\}$, there are exact sequences $$\begin{aligned} \label{important exact sequence} &0\longrightarrow \mathcal{O}((i-1)\vec{x}_1)\stackrel{u_{\infty}}{\longrightarrow} \mathcal{O}(i\vec{x}_1)\longrightarrow S_{\infty,i}\longrightarrow 0,\;\;i\in\mathbb{Z}/p\mathbb{Z};\\ &0\longrightarrow \mathcal{O}((i-1)\vec{x}_2)\stackrel{u_{0}}{\longrightarrow} \mathcal{O}(i\vec{x}_2)\longrightarrow S_{0,i}\longrightarrow 0,\;\;i\in\mathbb{Z}/q\mathbb{Z};\end{aligned}$$ where $u_{\infty}$ (*resp.* $u_{0}$) is given by multiplication with $x_1$ (*resp.* $x_2$). For each $\vec{x}=l_1\vec{x}_1+l_2\vec{x}_2\in \mathbb{L}(p,q)$, one has $$S_{\lambda}(\vec{x})=S_{\lambda}\;\;{\rm for}\;\, \lambda \in \mathbf k^{*};$$ and $$S_{\infty,i}(\vec{x})=S_{\infty, i+l_{1}},\;\; S_{0,i}(\vec{x})=S_{0, i+l_{2}}.$$ For $\lambda\in \{\infty,0\}$ and each $j \in \mathbb{Z}_{\geq 1}$, let $S_{\lambda,i}^{(j)}$ denote the indecomposable object in $\mathcal{U}_{\lambda}$ of length $j$ with top $S_{\lambda,i}.$ More precisely, the composition series of $S_{\lambda,i}^{(j)}$ has the following form: $$S_{\lambda, i-j+1} \hookrightarrow S_{\lambda, i-j+2}^{(2)} \hookrightarrow \cdots\hookrightarrow S_{\lambda, i-2}^{(j-2)} \hookrightarrow S_{\lambda, i-1}^{(j-1)}\hookrightarrow S_{\lambda, i}^{(j)}$$ with $S_{\lambda, i-k}^{(j-k)}/S_{\lambda, i-k-1}^{(j-k-1)}\cong S_{\lambda, i-k}$ for $0\leq k\leq j-2.$ # Geometric model for the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ {#gmwpl} In this section, we aim to give a geometric model for ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ in terms of an annulus with $p$ marked points on the inner boundary and $q$ marked points on the outer boundary. We will establish a correspondence between indecomposable coherent sheaves and homotopy classes of oriented curves in the marked annulus and then study algebraic properties (e.g. 
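The composition series just described determines the composition factors of $S_{\lambda,i}^{(j)}$ from top to socle, namely $S_{\lambda,i}, S_{\lambda,i-1}, \ldots, S_{\lambda,i-j+1}$, with indices taken modulo the rank $r$ of the tube. A minimal sketch (illustrative only, with each factor recorded by its index):

```python
def composition_factors(i, j, r):
    """Composition factors of S^{(j)}_{lambda,i} in a rank-r stable tube,
    listed from top to socle; each entry k stands for S_{lambda,k}."""
    return [(i - k) % r for k in range(j)]
```

For example, in the rank-$2$ tube at $\lambda=\infty$ for $p=2$, the length-$3$ object $S^{(3)}_{\infty,0}$ has factors $S_{\infty,0}, S_{\infty,1}, S_{\infty,0}$.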
Auslander-Reiten sequences, the dimension of the degree-one extension space, exact sequences) of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ in geometric terms. ## The universal cover of $A_{p,q}$. Let $A_{p,q}$ be an annulus with $p$ marked points on the inner boundary and $q$ marked points on the outer boundary. Without loss of generality, assume $1\leq p\leq q.$ In this subsection, we recall from [@BT2019] the universal cover of $A_{p,q}$. Assume that the marked points are equidistributed on the two boundaries of $A_{p,q}$. We label the marked points on the inner boundary with $0_{\partial},\,(\frac{1}{p})_{\partial},\,\cdots,\, (\frac{p-1}{p})_{\partial}$ in an anti-clockwise direction, and label the marked points on the outer boundary with $0_{\partial^{\prime}},$ $(\frac{q-1}{q})_{\partial^{\prime}},\,\cdots,$ $(\frac{1}{q})_{\partial^{\prime}}$ in a clockwise direction; cf. Figure [\[marked annulus\]](#marked annulus){reference-type="ref" reference="marked annulus"}. Let ${\rm Cyl_{p,q}}$ be a rectangle of height $1$ and width $1$ in $\mathbb{R}^{2}$ with $p$ marked points on the upper boundary $\partial$ and $q$ marked points on the lower boundary $\partial^{\prime}$, whose two vertical sides are identified.
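The identification of the two vertical sides can be sketched numerically: a point $(x,y)$ of the infinite horizontal strip is represented on the cylinder by $(x-\lfloor x\rfloor, y)$. A minimal sketch (the helper name is ours, for illustration only):

```python
from fractions import Fraction
from math import floor

# Sketch of the gluing of the two vertical sides of Cyl_{p,q}: a point
# (x, y) of the horizontal strip is represented by (x - floor(x), y).
# The helper name `glue` is ours, for illustration only.
def glue(x, y):
    return (x - floor(x), y)

# The sides x = 0 and x = 1 are identified, as are all integer translates:
assert glue(Fraction(0), Fraction(1, 2)) == glue(Fraction(1), Fraction(1, 2))
assert glue(Fraction(7, 3), 0) == (Fraction(1, 3), 0)
```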
Denote the points on the upper boundary of ${\rm Cyl_{p,q}}$ by $i_{\partial}$, for $i\in\{0,\frac{1}{p},\cdots,\frac{p-1}{p}\}:$ $$0_{\partial}:=(0, 1),\;\,(\frac{1}{p})_{\partial}:=(\frac{1}{p}, 1),\;\,\cdots,\;\,(\frac{p-1}{p})_{\partial}:=(\frac{p-1}{p}, 1),$$ and denote the points on the lower boundary of ${\rm Cyl_{p,q}}$ by $j_{\partial^{\prime}}$, for $j\in\{0,\frac{1}{q},\cdots,\frac{q-1}{q}\}:$ $$0_{\partial^{\prime}}:=(0, 0),\;\,(\frac{1}{q})_{\partial^{\prime}}:=(\frac{1}{q}, 0),\;\,\cdots,\;\,(\frac{q-1}{q})_{\partial^{\prime}}:=(\frac{q-1}{q}, 0).$$ Then $A_{p,q}$ can be identified with ${\rm Cyl_{p,q}}$ by viewing the upper (*resp.* lower) boundary of ${\rm Cyl_{p,q}}$ as the inner (*resp.* outer) boundary of $A_{p,q}$, where the marked points of ${\rm Cyl_{p,q}}$ are oriented from left to right on the upper boundary $\partial$ and from right to left on the lower boundary $\partial^{\prime}$, as in Figure [\[Annulus via rectangle\]](#Annulus via rectangle){reference-type="ref" reference="Annulus via rectangle"}. It will be most convenient to work in the universal cover $(\mathbb{U}, \pi_{p,q})$ of ${\rm Cyl_{p,q}}$, where $\mathbb{U}=\{(x,y)\in\mathbb{R}^{2}|0\leq y\leq 1\}$ inherits the orientation of ${\rm Cyl_{p,q}}$, and the covering map is $$\pi:=\pi_{p,q}: \mathbb{U}\to {\rm Cyl_{p,q}},\;\;(x,y)\mapsto(x-\lfloor x\rfloor, y).$$ Here, $\lfloor x\rfloor$ denotes the integer part of $x$. Naturally, $\pi$ is also a covering map of $A_{p,q}$. Denote the marked point $(i, 1)$ on the upper boundary $\partial$ of $\mathbb{U}$ by $i_{\partial}$ and the marked point $(j, 0)$ on the lower boundary $\partial^{\prime}$ of $\mathbb{U}$ by $j_{\partial^{\prime}}$, for $i\in \frac{\mathbb{Z}}{p}$ and $j\in \frac{\mathbb{Z}}{q}$, see Figure [\[Universal cover of \$A\_{p,q}\$\]](#Universal cover of $A_{p,q}$){reference-type="ref" reference="Universal cover of $A_{p,q}$"}. ## Arcs in $A_{p,q}$ and $\mathbb{U}$.
In this subsection, we recall some definitions of curves and arcs in $A_{p,q}$ and $\mathbb{U}$. In this paper, we consider curves and arcs up to homotopy. The following definition refers to [@V2018; @BZ2011]. **Definition 5**. A *curve* in $\mathbb{U}$ is a continuous function $c:[0,1]\longrightarrow \mathbb{U}$ such that $c(t)\in \mathbb{U}^0:=\mathbb{U}\setminus\{\partial, \partial^{\prime}\}$ for any $t\in (0,1)$. An *arc* in $\mathbb{U}$ is a curve whose endpoints are marked points of $\mathbb{U}$, such that it does not intersect itself in the interior of $\mathbb{U}$, its interior is disjoint from the boundary of $\mathbb{U}$, and it does not cut out a monogon or a digon. If an arc in $\mathbb{U}$ starts at a marked point $x_{b_1}$ and ends at a marked point $y_{b_2}$ with $x, y\in \frac{\mathbb{Z}}{p}$ or $\frac{\mathbb{Z}}{q}, \,b_{i}\in \{\partial,\,\partial^{\prime}\}$ for $i=1,\,2$, it will be denoted by $[x_{b_1}, y_{b_2}]$. The following definition refers to [@BT2019]. **Definition 6**. Let $\alpha=[x_{b_1}, y_{b_2}]$ be an arc in $\mathbb{U}$. If $b_1\neq b_2,$ then $\alpha$ is called a *bridging arc*. If $b_1=b_2=\partial$ and $y-x\geq \frac{2}{p}$, or $b_1=b_2=\partial^{\prime}$ and $y-x\geq \frac{2}{q}$, then $\alpha$ is called a *peripheral arc*. In order to give a geometric model for line bundles in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, we only need to consider the bridging arcs that are oriented from the boundary $\partial^{\prime}$ to the boundary $\partial$. Such an arc $[x_{\partial^{\prime}}, y_{\partial}]$ is called a *positive bridging arc*. Similarly, one can define curves and arcs in the marked annulus $A_{p,q}$. Note that the image of an arc in $\mathbb{U}$ under $\pi: \mathbb{U}\to A_{p,q}$ may intersect itself in the interior of $A_{p,q}$. We define a *bridging (*resp.* peripheral) curve* in $A_{p,q}$ to be the image $\pi(\alpha)$ of a *bridging (*resp.* peripheral) arc* $\alpha$ in $\mathbb{U}$.
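Definition 6 can be summarized in a small classifier; this is a sketch, where the boundary encoding and the function name are ours, for illustration only:

```python
from fractions import Fraction

# Sketch of Definition 6: classify an arc [x_{b1}, y_{b2}] in U by its
# boundary data.  Boundaries are encoded as "inner" for the boundary ∂
# and "outer" for ∂'; the encoding and helper name are ours.
def arc_type(x, b1, y, b2, p, q):
    if b1 != b2:
        return "bridging"
    threshold = Fraction(2, p) if b1 == "inner" else Fraction(2, q)
    return "peripheral" if y - x >= threshold else "neither"

# A bridging arc from 0_{∂'} to (1/2)_{∂}, and a peripheral arc on the
# inner boundary whose endpoints are two marked points apart (p = 2):
assert arc_type(Fraction(0), "outer", Fraction(1, 2), "inner", 2, 3) == "bridging"
assert arc_type(Fraction(0), "inner", Fraction(1), "inner", 2, 3) == "peripheral"
```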
If $\pi(\alpha)$ is moreover an arc, then it is called a *bridging (*resp.* peripheral) arc* in $A_{p,q}$. In order to give a geometric model for homogeneous torsion sheaves in the subcategory $\coprod_{\lambda\in \mathbf{k}^{*}}\mathcal{U}_{\lambda}$ of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, we introduce the notion of $\mathbf k^{*}$-parameterized $n$-cycles as follows. **Definition 7**. For $n\in \mathbb{Z}_{\geq1}$, an *$n$-cycle* in $A_{p,q}$ is a curve $\pi(c)$, where $c$ is a curve in $\mathbb{U}^{0}$ with $c(1)-c(0)=(n,0).$ In particular, a $1$-cycle is called a *loop*. The *$\mathbf k^{*}$-parameterized $n$-cycles* are the elements of the set $\{(\lambda, L^{n})\,|\,\lambda\in \mathbf k^{*}\}$, where $L$ is a loop in $A_{p,q}$ with the orientation in an anti-clockwise direction. Recall from Proposition [Proposition 4](#the properties of coherent sheaves){reference-type="ref" reference="the properties of coherent sheaves"} that the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ consists of two parts: vector bundles and torsion sheaves. Each indecomposable vector bundle is a line bundle. The torsion sheaves form two stable tubes of rank $p$ and $q$, respectively, together with a family of homogeneous tubes indexed by $\mathbf k^{*}$. Let ${\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ denote the full subcategory of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ consisting of all indecomposable objects. Then the objects of ${\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ are the line bundles ${\mathcal O}(\vec{x})\,(\vec{x}\in \mathbb{L}(p,q))$ and the torsion sheaves $S_{\infty,i}^{(j)}\,(i\in\mathbb{Z}/p\mathbb{Z})$, $S_{0,i}^{(j)}\,(i\in\mathbb{Z}/q\mathbb{Z})$ and $S_{\lambda}^{(j)}\,(\lambda \in \mathbf k^{*})$ for $j \in \mathbb{Z}_{\geq 1}$. Now we can state the main result of this subsection. **Theorem 8**.
*A parametrization of the isoclasses of ${\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ is given as follows:* - *the indecomposable vector bundles are indexed by the homotopy classes of positive bridging curves $\pi([(\frac{j}{q})_{\partial^{\prime}}, (\frac{i}{p})_{\partial}])$ in $A_{p,q}$;* - *the objects in the stable tube of rank $p$ (*resp.* $q$) are indexed by the homotopy classes of peripheral curves $\pi([(\frac{i}{p})_{\partial}, (\frac{j}{p})_{\partial}])$ (*resp.* $\pi([(\frac{i}{q})_{\partial^{\prime}}, (\frac{j}{q})_{\partial^{\prime}}])$) in $A_{p,q}$;* - *the objects in homogeneous tubes are indexed by $\mathbf k^{*}$-parameterized $n$-cycles in $A_{p,q}$.* *Proof.* Recall that $\pi: \mathbb{U}\to A_{p,q}$ is the covering map. Let $\mathcal{C}$ be the set consisting of the following elements with $i, j\in\mathbb{Z}$ and $n\geq 1$: - positive bridging curves: $\pi([(\frac{j}{q})_{\partial^{\prime}}, (\frac{i}{p})_{\partial}])$; - peripheral curves: $\pi([(\frac{i}{p})_{\partial}, (\frac{j}{p})_{\partial}])$ and $\pi([(\frac{i}{q})_{\partial^{\prime}}, (\frac{j}{q})_{\partial^{\prime}}])$, with $j-i\geq 2;$ - $\mathbf k^{*}$-parameterized $n$-cycles: $\{(\lambda, L^{n})\,|\,\lambda\in \mathbf k^{*}\}$. Then the following assignments: $$\begin{aligned} \label{map}\nonumber &\pi([(\frac{j}{q})_{\partial^{\prime}}, \;(\frac{i}{p})_{\partial}])\mapsto{\mathcal O}(i\vec{x}_1-j\vec{x}_2),\quad\;\pi([(\frac{i-j-1}{p})_{\partial},\;(\frac{i}{p})_{\partial}])\mapsto S_{\infty,i}^{(j)}, \nonumber \\ &\\ &\pi([(\frac{-i}{q})_{\partial^{\prime}},\;(\frac{j-i+1}{q})_{\partial^{\prime}}])\mapsto S_{0,i}^{(j)},\quad\; (\lambda, L^{j}) \mapsto S_{\lambda}^{(j)},\nonumber\end{aligned}$$ define a bijective map $\phi:\mathcal{C}\longrightarrow{\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$. This proves Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}. 
◻ ## Elementary moves Based on the bijection given in Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}, we consider the geometric interpretation of Auslander-Reiten sequences in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. First, we introduce the notion of elementary moves of positive bridging curves and peripheral curves in $A_{p,q}$. The following definition refers to [@BT2019; @BZ2011]. **Definition 9**. For any positive bridging curve or peripheral curve $\gamma$ in $A_{p,q}$, denote by $_{s}\gamma$ the *elementary move of $\gamma$ on its starting point*, meaning that the curve $_{s}\gamma$ is obtained from $\gamma$ by moving its starting point to the next marked point on the same boundary, in such a way that $_{s}\gamma$ is rotated in the clockwise direction around the ending point of $\gamma.$ Similarly, denote by $\gamma_{e}$ the *elementary move of $\gamma$ on its ending point*. Iterated elementary moves are denoted by $_{s}\gamma_{e}=$ $_{s}(\gamma_{e})=(_{s}\gamma)_{e},\, _{s^{2}}\gamma=$ $_{s}(_{s}\gamma)$ and $\gamma_{e^{2}}=(\gamma_{e})_{e}$, respectively. For the subsequent discussion, we recall from [@BT2019] the explicit description of elementary moves of positive bridging curves and peripheral curves in $A_{p,q}$.
- If $\gamma$ is a positive bridging curve in $A_{p,q}$, then $\gamma=\pi([(\frac{i}{q})_{\partial^{\prime}},\, (\frac{j}{p})_{\partial}])$ for some $i, j\in \mathbb{Z}$, and $$_{s}\gamma=\pi([(\frac{i-1}{q})_{\partial^{\prime}},\, (\frac{j}{p})_{\partial}]),\quad\; \gamma_{e}=\pi([(\frac{i}{q})_{\partial^{\prime}},\, (\frac{j+1}{p})_{\partial}]).$$ The corresponding picture in $\mathbb{U}$ is: - If $\gamma$ is a peripheral curve in $A_{p,q}$ with endpoints on the inner boundary, then $\gamma=\pi([(\frac{i}{p})_{\partial}, (\frac{j}{p})_{\partial}])$ for some $i, j\in \mathbb{Z},\,j-i\geq 2,$ and $$_{s}\gamma=\pi([(\frac{i+1}{p})_{\partial}, (\frac{j}{p})_{\partial}])\,\,(j-i> 2),\quad\; \gamma_{e}=\pi([(\frac{i}{p})_{\partial}, (\frac{j+1}{p})_{\partial}]).$$ In $\mathbb{U}$, they look as follows: - If $\gamma$ is a peripheral curve in $A_{p,q}$ with endpoints on the outer boundary, then $\gamma=\pi([(\frac{i}{q})_{\partial^{\prime}},\, (\frac{j}{q})_{\partial^{\prime}}])$ for some $i, j\in \mathbb{Z},\,j-i\geq 2$, and $$_{s}\gamma=\pi([(\frac{i-1}{q})_{\partial^{\prime}},\, (\frac{j}{q})_{\partial^{\prime}}]),\quad \;\gamma_{e}=\pi([(\frac{i}{q})_{\partial^{\prime}},\, (\frac{j-1}{q})_{\partial^{\prime}}])\,\,(j-i> 2).$$ In $\mathbb{U},$ they are described as follows: For the sake of simplicity, from now on, we denote by $$D^{\frac{i}{p}}_{\frac{j}{q}}:=[(\frac{j}{q})_{\partial^{\prime}}, (\frac{i}{p})_{\partial}],\;\;D^{\frac{i}{p}, \frac{j}{p}}:=[(\frac{i}{p})_{\partial}, (\frac{j}{p})_{\partial}]\;\;{\rm and}\;\;D_{\frac{i}{q}, \frac{j}{q}}:=[(\frac{i}{q})_{\partial^{\prime}}, (\frac{j}{q})_{\partial^{\prime}}]$$ the positive bridging arcs and peripheral arcs in $\mathbb{U}$, and denote their images under $\pi: \mathbb{U}\to A_{p,q}$ by $[D^{\frac{i}{p}}_{\frac{j}{q}}], [D^{\frac{i}{p}, \frac{j}{p}}]$ and $[D_{\frac{i}{q}, \frac{j}{q}}]$, respectively.
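In this bracket notation the elementary moves act purely combinatorially on the indices, which can be sketched as follows (the tuple encoding of the three kinds of arcs is ours, for illustration only):

```python
# Sketch of the elementary moves in the D-notation just introduced:
# a positive bridging arc D^{a/p}_{b/q} is stored as ("bridge", a, b),
# a peripheral arc D^{a/p, b/p} as ("inner", a, b), and D_{a/q, b/q}
# as ("outer", a, b).  The encoding is ours.
def move_s(arc):
    kind, a, b = arc
    if kind == "bridge":
        return (kind, a, b - 1)   # start (b/q) on the lower boundary moves left
    if kind == "inner":
        return (kind, a + 1, b)   # defined when b - a > 2
    return (kind, a - 1, b)       # outer boundary

def move_e(arc):
    kind, a, b = arc
    if kind == "bridge":
        return (kind, a + 1, b)   # end (a/p) on the upper boundary moves right
    if kind == "inner":
        return (kind, a, b + 1)
    return (kind, a, b - 1)       # defined when b - a > 2

# The two moves commute, so the iterated move on both endpoints is unambiguous:
g = ("bridge", 0, 0)
assert move_s(move_e(g)) == move_e(move_s(g)) == ("bridge", 1, -1)
```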
It follows that for any $k\in\mathbb{Z}$, $$\begin{aligned} \label{periodicity of covering map} [D^{\frac{i}{p}+k}_{\frac{j}{q}+k}]=[D^{\frac{i}{p}}_{\frac{j}{q}}],\;\;[D^{\frac{i}{p}+k,\, \frac{j}{p}+k}]=[D^{\frac{i}{p}, \frac{j}{p}}]\;\;{\rm and}\;\;[D_{\frac{i}{q}+k,\, \frac{j}{q}+k}]=[D_{\frac{i}{q}, \frac{j}{q}}].\end{aligned}$$ **Proposition 10**. *Let $X\in{\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ be a line bundle or a torsion sheaf supported at an exceptional point. Assume $\phi^{-1}(X)=\gamma$. Then the irreducible morphisms in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ starting at $X$ are obtained by elementary moves on endpoints of $\gamma$. Moreover, the Auslander-Reiten sequence starting at $X$ has the form $$0\longrightarrow \phi(\gamma)\longrightarrow \phi(_{s}\gamma)\oplus\phi(\gamma_{e})\longrightarrow\phi(_{s}\gamma_{e})\longrightarrow 0.$$* *Proof.* If $X$ is a line bundle, then $X={\mathcal O}(i\vec{x}_1-j\vec{x}_2)$ for some $i, j\in\mathbb{Z}$, and then $\gamma=[D^{\frac{i}{p}}_{\frac{j}{q}}]$. Notice that the irreducible morphisms in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ starting at ${\mathcal O}(i\vec{x}_1-j\vec{x}_2)$ are given by $${\mathcal O}(i\vec{x}_1-j\vec{x}_2)\longrightarrow {\mathcal O}((i+1)\vec{x}_1-j\vec{x}_2)=\phi(\gamma_{e})$$ and $${\mathcal O}(i\vec{x}_1-j\vec{x}_2)\longrightarrow {\mathcal O}(i\vec{x}_1-(j-1)\vec{x}_2)=\phi(_{s}\gamma),$$ and $$\tau^{-1}({\mathcal O}(i\vec{x}_1-j\vec{x}_2))=({\mathcal O}(i\vec{x}_1-j\vec{x}_2))(\vec{x}_1+\vec{x}_2) ={\mathcal O}((i+1)\vec{x}_1-(j-1)\vec{x}_2)=\phi(_{s}\gamma_{e}).$$ Then the result follows. 
If $X$ is a torsion sheaf supported at $\infty$, then $X=S_{\infty,i}^{(j)}$ for some $i\in\mathbb{Z}/p\mathbb{Z}$ and $j\in \mathbb{Z}_{\geq 1}$, and then $\gamma=[D^{\frac{i-j-1}{p},\frac{i}{p}}].$ If $j=1$, we have $\phi(_{s}\gamma)=0$ (in this case $_{s}\gamma$ is no longer an arc), and there is only one irreducible morphism starting at $S_{\infty,i}^{(1)}$, that is, $S_{\infty,i}^{(1)}\longrightarrow S_{\infty,i+1}^{(2)}=\phi(\gamma_{e})$. If $j\geq 2$, there are two irreducible morphisms starting at $S_{\infty,i}^{(j)}$, given by $S_{\infty,i}^{(j)}\longrightarrow S_{\infty,i+1}^{(j+1)}=\phi(\gamma_{e})$ and $S_{\infty,i}^{(j)}\longrightarrow S_{\infty,i}^{(j-1)}=\phi(_{s}\gamma)$. Moreover, $\tau^{-1}(S_{\infty,i}^{(j)})=S_{\infty,i+1}^{(j)}=\phi(_{s}\gamma_{e}).$ Similarly, we obtain the result if $X$ is a torsion sheaf supported at the exceptional point $0$. We are done. ◻ ****Remark** 11**. For any indecomposable torsion sheaf $X$ supported at an ordinary point $\lambda\in\mathbf k^{*}$, we know that $X=S_{\lambda}^{(j)}$ for some $j\in \mathbb{Z}_{\geq 1}$. By Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}, we have $\phi^{-1}(S_{\lambda}^{(j)})=(\lambda, L^{j}).$ If we set $_s(\lambda, L^{k})=(\lambda, L^{k-1})$, $(\lambda, L^{k})_{e}=(\lambda, L^{k+1})$ and $_{s}(\lambda, L^{k})_{e}=$ $_{s}((\lambda, L^{k})_{e})=(_{s}(\lambda, L^{k}))_{e}=(\lambda, L^{k})$, then Proposition [Proposition 10](#irreducible morphism){reference-type="ref" reference="irreducible morphism"} holds for any $X\in{\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$.
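With these conventions, Proposition 10 applied to $\gamma=(\lambda, L^{j})$ recovers the familiar shape of the Auslander-Reiten sequences in a homogeneous tube; as a sketch of the resulting sequence (with the convention $S_{\lambda}^{(0)}=0$):

```latex
0\longrightarrow S_{\lambda}^{(j)}\longrightarrow
S_{\lambda}^{(j-1)}\oplus S_{\lambda}^{(j+1)}\longrightarrow
S_{\lambda}^{(j)}\longrightarrow 0,
\qquad j\in\mathbb{Z}_{\geq 1}.
```

Here the end term equals the starting term, reflecting $_{s}(\lambda, L^{j})_{e}=(\lambda, L^{j})$.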
## Translation quiver and equivalence between categories In this subsection, we define a translation quiver $(\Gamma_{A_{p,q}}, \tau^{\prime})$ associated to $\mathcal{C}.$ The vertices of the quiver are the oriented curves in $\mathcal{C}.$ There is an arrow $\gamma\rightarrow\gamma^{\prime}$ if and only if $\gamma^{\prime}=$ $_{s}\gamma$ or $\gamma_{e}.$ For the positive bridging curves and peripheral curves in $\mathcal{C},$ let the translation $\tau^{\prime}$ be induced by the map $(\frac{i}{p})_{\partial}\mapsto (\frac{i-1}{p})_{\partial}$ and $(\frac{j}{q})_{\partial^{\prime}}\mapsto (\frac{j+1}{q})_{\partial^{\prime}}$ for $i,\,j \in\mathbb{Z},$ that is, $$\tau^{\prime}([D^{\frac{i}{p}}_{\frac{j}{q}}])=[D^{\frac{i-1}{p}}_{\frac{j+1}{q}}],\,\; \tau^{\prime}([D^{\frac{i}{p}, \frac{j}{p}}])=[D^{\frac{i-1}{p}, \frac{j-1}{p}}],\;\, \tau^{\prime}([D_{\frac{i}{q}, \frac{j}{q}}])=[D_{\frac{i+1}{q}, \frac{j+1}{q}}].$$ For the $\mathbf k^{*}$-parameterized $k$-cycles $(\lambda, L^{k})$ in $\mathcal{C},$ let $\tau^{\prime}((\lambda, L^{k}))=(\lambda, L^{k}).$ Let $\mathcal{C}_{A_{p,q}}$ be the mesh category of the translation quiver $(\Gamma_{A_{p,q}}, \tau^{\prime})$. Let $F:\mathcal{C}_{A_{p,q}}\longrightarrow{\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ be the functor between mesh categories defined as follows. On objects, let $F(\gamma)=\phi(\gamma).$ To define $F$ on morphisms, it suffices to define it on the elementary moves. Define $F(\gamma\rightarrow$ $_{s}\gamma)$ (*resp.* $F(\gamma\rightarrow \gamma_{e})$) to be the irreducible morphism $F(\gamma)\rightarrow F(_{s}\gamma)$ (*resp.* $F(\gamma)\rightarrow F(\gamma_{e})$). Let $(\Gamma_{{\rm coh}\mbox{-}\mathbb{X}(p,q)}, \tau)$ denote the Auslander-Reiten quiver of the category ${\rm coh}\mbox{-}\mathbb{X}(p,q).$ **Theorem 12**. *The functor $F:\mathcal{C}_{A_{p,q}}\longrightarrow{\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ is an equivalence of categories. 
In particular,* - *$F$ induces an isomorphism of translation quivers $(\Gamma_{A_{p,q}},\tau^{\prime})\longrightarrow (\Gamma_{{\rm coh}\mbox{-}\mathbb{X}(p,q)},\tau).$* - *$\tau^{\prime}$ corresponds to the Auslander-Reiten translation $\tau$ in the following sense $$F \circ\tau^{\prime}=\tau\circ F.$$* - *$F$ is an exact functor with respect to the induced abelian structure on $\mathcal{C}_{A_{p,q}}.$* *Proof.* By Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"} and Proposition [Proposition 10](#irreducible morphism){reference-type="ref" reference="irreducible morphism"}, we get the statement $(1)$. For the statement $(2),$ we only consider the case $F \circ\tau^{\prime}([D^{\frac{i}{p}}_{\frac{j}{q}}])=\tau\circ F([D^{\frac{i}{p}}_{\frac{j}{q}}])$, the others being similar. In fact, $$\begin{aligned} F \circ\tau^{\prime}([D^{\frac{i}{p}}_{\frac{j}{q}}])=F([D^{\frac{i-1}{p}}_{\frac{j+1}{q}}])=&\;\mathcal{O}((i-1)\vec{x}_1-(j+1)\vec{x}_2)\\ =&\;\tau(\mathcal{O}(i\vec{x}_1-j\vec{x}_2))\\ =&\;\tau\circ F([D^{\frac{i}{p}}_{\frac{j}{q}}]).\end{aligned}$$ Therefore, the statement $(2)$ holds. Since both categories $\mathcal{C}_{A_{p,q}}$ and ${\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ are the mesh categories of their translation quivers $(\Gamma_{A_{p,q}}, \tau^{\prime})$ and $(\Gamma_{{\rm coh}\mbox{-}\mathbb{X}(p,q)}, \tau)$ respectively, statement $(1)$ implies that $F$ is an equivalence of categories. In particular, this equivalence induces the structure of an abelian category on $\mathcal{C}_{A_{p,q}}$. With respect to this structure, $F$ is an exact functor since every equivalence between abelian categories is exact. ◻ ## Positive intersection In this subsection, we give a geometric interpretation of the dimension of the extension group ${\rm Ext}^i(X,Y)$ for $X,Y\in{\rm ind}({\rm coh}\mbox{-}\mathbb{X}(p,q))$. Since ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ is hereditary, the extension group is zero for $i\geq 2$. Hence we only need to consider the case $i=1$. Note that if $X,Y$ are torsion sheaves supported at distinct points, then ${\rm Ext}^1(X,Y)=0$.
If they are supported at the same ordinary point, then $X=S_{\lambda}^{(j_1)}, Y=S_{\lambda}^{(j_2)}$ for some $\lambda\in\mathbf{k}^*$ and $j_1, j_2\in \mathbb{Z}_{\geq 1}$. In this case, ${\rm dim}_{\mathbf{k}}{\rm Ext}^1(X,Y)=\min\{j_1,j_2\}$. Hence, we only consider the remaining cases in the following. We will show that ${\rm dim}_{\mathbf{k}}{\rm Ext}^1(X,Y)$ can be interpreted as the positive intersection number between the corresponding oriented curves in the marked annulus $A_{p,q}$. **Definition 13** ([@W2008]). A point of intersection of two oriented curves $\alpha$ and $\beta$ in $A_{p,q}$ is called a *positive intersection* of $\alpha$ and $\beta$ if $\beta$ intersects $\alpha$ from the right, i.e., the picture looks as follows: Denote by $$I^{+}(\alpha,\beta)=\mathop{{\rm min}}\limits_{\alpha^{\prime}\sim\alpha,\beta^{\prime}\sim\beta}|\alpha^{\prime}\cap^{+}\beta^{\prime}|$$ the *positive intersection number* of $\alpha$ and $\beta$, where $\alpha^{\prime}\cap^{+}\beta^{\prime}$ is the set of the positive intersections of $\alpha^{\prime}$ and $\beta^{\prime}$, excluding their endpoints, and the sign $a\sim b$ means that the two curves are homotopic. It follows that $I^{+}(\alpha,\beta)=0$ or $1$ if $\alpha, \beta$ are oriented arcs in $\mathbb{U}$. **Theorem 14**. *Let $X, Y$ be two indecomposable objects in ${\rm coh}\mbox{-}\mathbb{X}(p,q).$ Assume that $X, Y$ are not supported at ordinary points. Then $$\label{positive intersection theorem}{\rm dim_{\mathbf{k}}Ext^{1}}(X,Y)=I^{+}(\phi^{-1}(X),\phi^{-1}(Y)).$$* *Proof.* If $X, Y\in \mathcal{U}_{\infty}\coprod \mathcal{U}_{0}$, then the result follows from [@BM2012 Theorem 3.8].
If $X\in{\rm vec}\mbox{-}\mathbb{X}(p,q)$ and $Y\in \mathcal{U}_{\infty}\coprod \mathcal{U}_{0}$, then according to Proposition [Proposition 4](#the properties of coherent sheaves){reference-type="ref" reference="the properties of coherent sheaves"} (2), we have ${\rm Ext}^{1}(X, Y)=0$, and by Definition [Definition 13](#positive intersection){reference-type="ref" reference="positive intersection"}, we also have $I^{+}(\phi^{-1}(X),\phi^{-1}(Y))=0.$ So we only need to consider the following two cases: \(1\) $X, Y\in {\rm vec}\mbox{-}\mathbb{X}(p,q)$. By Proposition [Proposition 4](#the properties of coherent sheaves){reference-type="ref" reference="the properties of coherent sheaves"} (3), we know that each indecomposable bundle over $\mathbb{X}(p,q)$ is a line bundle, thus $X={\mathcal O}(\vec{x})$ and $Y={\mathcal O}(\vec{y})$ for some $\vec{x},\,\vec{y}\in\mathbb{L}(p,q).$ Write $\vec{x}=l_1\vec{x}_1+l_2\vec{x}_2+l\vec{c}$ and $\vec{y}=k_1\vec{x}_1+k_2\vec{x}_2+k\vec{c}$ in normal forms. 
According to Proposition [Proposition 4](#the properties of coherent sheaves){reference-type="ref" reference="the properties of coherent sheaves"} $(1)$ and $(5)$, we have $${\rm Ext}^{1}({\mathcal O}(\vec{x}),{\mathcal O}(\vec{y}))\cong D{\rm Hom}({\mathcal O}(\vec{y}), {\mathcal O}(\vec{x}-\vec{x}_1-\vec{x}_2))\cong S_{\vec{x}-\vec{x}_1-\vec{x}_2-\vec{y}}.$$ Observe that $\vec{x}-\vec{x}_1-\vec{x}_2-\vec{y}=(l_1-k_1-1)\vec{x}_1+(l_2-k_2-1)\vec{x}_2+(l-k)\vec{c},$ and $-p\leq l_1-k_1-1\leq p-2,\,\, -q\leq l_2-k_2-1\leq q-2.$ Setting $h_{1}:=l_1-k_1-1$ and $h_{2}:=l_2-k_2-1$, we get $${\rm dim_{\mathbf{k}}Ext^{1}}(X,Y)= \left\{ \begin{array}{ll} l-k+1& \;{\rm if}\;\,0\leq h_{1}\leq p-2,\, 0\leq h_{2}\leq q-2,\, k\leq l;\\ l-k& \;{\rm if}\;\,-p\leq h_{1}\leq -1,\, 0\leq h_{2}\leq q-2,\, k\leq l;\\ l-k& \;{\rm if}\;\,0\leq h_{1}\leq p-2,\, -q\leq h_{2}\leq -1,\, k\leq l;\\ l-k-1& \;{\rm if}\;-p\leq h_{1}\leq -1,\, -q\leq h_{2}\leq -1,\, k+1\leq l;\\ 0& {\rm otherwise.} \end{array} \right.$$ On the other hand, we consider $I^{+}(\phi^{-1}(X),\phi^{-1}(Y)).$ We have $\phi^{-1}(X)=[D^{\frac{l_1}{p}}_{-\frac{l_2}{q}-l}]$ and $\phi^{-1}(Y)=[D^{\frac{k_1}{p}}_{-\frac{k_2}{q}-k}].$ By similar arguments as in [@BM2012], a point of intersection of $\phi^{-1}(X)$ and $\phi^{-1}(Y)$ is positive if and only if there exists $m\in \mathbb{Z}$ such that $\frac{l_1}{p}+m > \frac{k_1}{p}$ and $-\frac{l_2}{q}-l+m < -\frac{k_2}{q}-k$ (cf. Figure [\[case1\]](#case1){reference-type="ref" reference="case1"}). These two inequalities can be combined into $$\frac{k_1-l_1}{p}< m< \frac{q(l-k)+l_2-k_2}{q}.$$ Therefore, $$I^{+}(\phi^{-1}(X),\phi^{-1}(Y))=|\{m\in\mathbb{Z}\,|\,\frac{k_1-l_1}{p}< m< \frac{q(l-k)+l_2-k_2}{q}\}|.$$ In order to prove that ${\rm dim_{\mathbf{k}}Ext^{1}}(X,Y)=I^{+}(\phi^{-1}(X),\phi^{-1}(Y))$, we only consider the case $0\leq h_{1}\leq p-2,\, 0\leq h_{2}\leq q-2,\, k\leq l$; the others being similar.
In this case, we have $$I^{+}(\phi^{-1}(X),\phi^{-1}(Y))=|\{m\in\mathbb{Z}\,|\,0\leq m\leq l-k\}|=l-k+1={\rm dim_{\mathbf{k}}Ext^{1}}(X,Y).$$ We are done. \(2\) $X\in \mathcal{U}_{\infty}\coprod \mathcal{U}_{0}$ and $Y\in {\rm vec}\mbox{-}\mathbb{X}(p,q)$. We only consider the case $X\in \mathcal{U}_{\infty}$, that is, $X=S_{\infty,i}^{(j)}$ for $i\in\mathbb{Z}/p\mathbb{Z},\,j \in \mathbb{Z}_{\geq 1}$, the case $X\in \mathcal{U}_{0}$ being similar. By Proposition [Proposition 4](#the properties of coherent sheaves){reference-type="ref" reference="the properties of coherent sheaves"} (3), $Y={\mathcal O}(\vec{x})$ for some $\vec{x}\in\mathbb{L}(p,q)$; write $\vec{x}=l_1\vec{x}_1+l_2\vec{x}_2+l\vec{c}$ in normal form. By the exact sequence ([\[important exact sequence\]](#important exact sequence){reference-type="ref" reference="important exact sequence"}), we get ${\rm dim_{\mathbf{k}}Hom}({\mathcal O}(\vec{x}+\vec{x}_1+\vec{x}_2), S_{\infty,l_1+1})=1.$ Notice that $S_{\infty,i}^{(j)}$ has a composition series of the form $$S_{\infty, i-j+1} \hookrightarrow S_{\infty, i-j+2}^{(2)} \hookrightarrow \cdots\hookrightarrow S_{\infty, i-2}^{(j-2)} \hookrightarrow S_{\infty, i-1}^{(j-1)}\hookrightarrow S_{\infty, i}^{(j)}$$ with $S_{\infty, i-k}^{(j-k)}/S_{\infty, i-k-1}^{(j-k-1)}\cong S_{\infty, i-k}$ for $0\leq k\leq j-2.$ Write $j=np+r$ with $0\leq r\leq p-1.$ We get $${\rm dim_{\mathbf{k}}Ext^{1}}(S_{\infty,i}^{(j)},{\mathcal O}(\vec{x}))= \left\{ \begin{array}{ll} n& \;{\rm if}\; p\nmid i-l_1-k\;{\rm for\;each}\;1\leq k\leq r,\\ n+1& {\rm otherwise}.
\end{array} \right.$$ On the other hand, we consider $I^{+}(\phi^{-1}(S_{\infty,i}^{(j)}),\phi^{-1}({\mathcal O}(\vec{x}))).$ We have $$\phi^{-1}(S_{\infty,i}^{(j)})=[D^{\frac{i-j-1}{p},\frac{i}{p}}],\quad\quad\phi^{-1}({\mathcal O}(\vec{x}))=[D^{\frac{l_1}{p}}_{-\frac{l_2}{q}-l}].$$ By similar arguments as in [@BM2012], a point of intersection of $\phi^{-1}(S_{\infty,i}^{(j)})$ and $\phi^{-1}({\mathcal O}(\vec{x}))$ is positive if and only if there exists $m\in\mathbb{Z}$ such that $\frac{i-j-1}{p}< \frac{l_1}{p}+m< \frac{i}{p}$ (cf. Figure [\[case2\]](#case2){reference-type="ref" reference="case2"}). Hence $$I^{+}(\phi^{-1}(S_{\infty,i}^{(j)}),\phi^{-1}({\mathcal O}(\vec{x})))=|\{m\in\mathbb{Z}\,|\,i-j-l_1\leq pm \leq i-l_1-1\}|.$$ Observe that $$\lfloor \frac{(i-l_1-1)-(i-j-l_1)+1}{p}\rfloor =\lfloor \frac{j}{p}\rfloor =\lfloor \frac{np+r}{p}\rfloor =n+\lfloor \frac{r}{p}\rfloor=n.$$ It follows that $$I^{+}(\phi^{-1}(S_{\infty,i}^{(j)}),\phi^{-1}({\mathcal O}(\vec{x})))= \left\{ \begin{array}{ll} n& \;{\rm if}\; p\nmid i-l_1-k\;{\rm for\;each}\;1\leq k\leq r,\\ n+1& {\rm otherwise}. \end{array} \right.$$ Hence ${\rm dim_{\mathbf{k}}Ext^{1}}(X,Y)=I^{+}(\phi^{-1}(X),\phi^{-1}(Y))$. This finishes the proof. ◻ ****Remark** 15**. If one of $X,Y$ is supported at an ordinary point, then [\[positive intersection theorem\]](#positive intersection theorem){reference-type="eqref" reference="positive intersection theorem"} still holds. **Proposition 16**. *Let $\alpha,\beta$ be positive bridging arcs or peripheral arcs in $\mathbb{U}$ with $I^{+}(\alpha,\beta)=1$. Then there is a natural short exact sequence in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ associated to the intersection in geometric terms.
More precisely, consider the following figure of the chosen intersection.* *Then we have a short exact sequence $$\begin{aligned} \label{exact sequence} 0\longrightarrow \phi(\pi(\beta))\longrightarrow \phi(\pi(\gamma_1))\oplus\phi(\pi(\gamma_2))\longrightarrow\phi(\pi(\alpha))\longrightarrow 0.\end{aligned}$$* *Proof.* We discuss case by case according to the types of $\alpha$ and $\beta$. If $\alpha,\,\beta$ are peripheral arcs, then the result follows from [@BBM2014 Section 6] and [@W2008 Remark 4.25]. Hence we only consider the following two cases. \(1\) $\alpha,\, \beta$ are positive bridging arcs. Then $\alpha=D^{\frac{i}{p}}_{\frac{j}{q}}$ and $\beta=D^{\frac{k}{p}}_{\frac{l}{q}}$ for some integers $k< i$ and $j< l.$ Hence $\gamma_1=D^{\frac{k}{p}}_{\frac{j}{q}}$ and $\gamma_2=D^{\frac{i}{p}}_{\frac{l}{q}}$. Note that $$\phi(\pi(\alpha))=\mathcal{O}(i\vec{x}_1-j\vec{x}_2),\; \quad \phi(\pi(\beta))=\mathcal{O}(k\vec{x}_1-l\vec{x}_2).$$ We have the following short exact sequence in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$: $$0\longrightarrow \mathcal{O}(k\vec{x}_1-l\vec{x}_2)\longrightarrow \mathcal{O}(k\vec{x}_1-j\vec{x}_2)\oplus\mathcal{O}(i\vec{x}_1-l\vec{x}_2)\longrightarrow\mathcal{O}(i\vec{x}_1-j\vec{x}_2)\longrightarrow 0.$$ Then ([\[exact sequence\]](#exact sequence){reference-type="ref" reference="exact sequence"}) holds since $$\phi(\pi(\gamma_1))=\mathcal{O}(k\vec{x}_1-j\vec{x}_2),\; \quad \phi(\pi(\gamma_2))=\mathcal{O}(i\vec{x}_1-l\vec{x}_2).$$ \(2\) $\alpha$ is a peripheral arc and $\beta$ is a positive bridging arc. We only consider the case that both endpoints of $\alpha$ lie on the upper boundary of $\mathbb{U}$, the other being similar.
By assumption, $\alpha=D^{\frac{i-j-1}{p},\,\frac{i}{p}}$ and $\beta=D^{\frac{k}{p}}_{\frac{l}{q}}$ for some $i,j, k, l\in \mathbb{Z}$ with $i-j-1< k< i.$ Then we have $$\phi(\pi(\alpha))=S_{\infty,i}^{(j)},\; \phi(\pi(\beta))=\mathcal{O}(k\vec{x}_1-l\vec{x}_2).$$ In ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, there is a short exact sequence: $$0\longrightarrow \mathcal{O}(k\vec{x}_1-l\vec{x}_2)\longrightarrow \mathcal{O}(i\vec{x}_1-l\vec{x}_2)\oplus S_{\infty, k}^{(k+j-i)}\longrightarrow S_{\infty,i}^{(j)}\longrightarrow 0.$$ Therefore, the short exact sequence ([\[exact sequence\]](#exact sequence){reference-type="ref" reference="exact sequence"}) holds. ◻ # The geometric interpretation of tilting sheaves In this section, we investigate the correspondence between tilting sheaves in the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ and triangulations of the marked annulus $A_{p,q}$, and then study the relationship between the flip of an arc in a triangulation and the mutation of an indecomposable direct summand of the corresponding tilting sheaf. ## Tilting sheaves and triangulations First let us recall from [@GL87; @GengSF2020] the definition of tilting sheaves and tilting bundles in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ as follows. **Definition 17**. A sheaf $T$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ is called a *tilting sheaf* if - $T$ is rigid, that is, ${\rm Ext}^{1}(T, T)=0$. - For any object $X\in{\rm coh}\mbox{-}\mathbb{X}(p,q)$, the condition ${\rm Hom}(T,X)=0={\rm Ext}^{1}(T,X)$ implies that $X=0$. Moreover, if a tilting sheaf $T$ is given by a vector bundle, i.e., $T$ has no direct summand of finite length, then $T$ is called a *tilting bundle*. **Proposition 18** ([@HR1999]). *Assume that $\mathcal{H}$ is a hereditary abelian category with a tilting object.
Then $T$ is a tilting object in $\mathcal{H}$ if and only if ${\rm Ext}^{1}(T,T)=0$ and the number of non-isomorphic indecomposable direct summands of $T$ equals the rank of the Grothendieck group $K_{0}(\mathcal {H})$.* Geigle-Lenzing [@GL87] constructed a canonical tilting sheaf for any weighted projective line. As a consequence, the rank of the Grothendieck group $K_{0}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ equals $p+q$, and any tilting sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ has $p+q$ indecomposable direct summands. A rigid sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ with $p+q-1$ pairwise non-isomorphic indecomposable direct summands is called an *almost complete tilting sheaf*. Let $\overline{T}$ be an almost complete tilting sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. If there exists an indecomposable sheaf $E$ such that $\overline{T}\oplus E$ is a tilting sheaf, then $E$ is called a *complement* of $\overline{T}$. Recall from [@ABGP2010] that a *triangulation* $\Gamma$ of $A_{p,q}$ is a maximal collection of arcs that do not intersect in the interior of $A_{p,q}$. According to [@ABGP2010 Proposition 2.1], any triangulation $\Gamma$ of $A_{p,q}$ consists of $p+q$ arcs. In our model, we always fix the orientations of the arcs in $\Gamma$; namely, we take the arcs from the set $\mathcal{C}$ that appeared in Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}. For example, all the bridging arcs are oriented from the outer boundary to the inner boundary. **Theorem 19**. *The tilting sheaves in the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ are in natural bijection with the triangulations of $A_{p,q}$.* *Proof.* Let $\Gamma$ be a triangulation of $A_{p,q}$ and let $\gamma_1, \gamma_2, \cdots, \gamma_{p+q}$ be the arcs in $\Gamma$.
Let $$T=\bigoplus\limits_{i=1}^{p+q}\phi(\gamma_i).$$ According to Theorem [Theorem 14](#dimension and positive intersection){reference-type="ref" reference="dimension and positive intersection"} and the definition of a triangulation, we have $${\rm dim_{\mathbf{k}}Ext^{1}}(\phi(\gamma_i),\, \phi(\gamma_j))=I^{+}(\gamma_i,\, \gamma_j)=0$$ for $1\leq i,\, j\leq p+q$. It follows that ${\rm Ext}^{1}(T,T)=0.$ Combining this with Proposition [Proposition 18](#tilting2){reference-type="ref" reference="tilting2"} and Proposition [Proposition 4](#the properties of coherent sheaves){reference-type="ref" reference="the properties of coherent sheaves"} $(6)$, we get that $T=\bigoplus\limits_{i=1}^{p+q}\phi(\gamma_i)$ is a tilting sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. On the other hand, if $T=\bigoplus\limits_{i=1}^{p+q}T_{i}$ is a tilting sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, then we have ${\rm Ext}^{1}(T,T)=0.$ By Theorem [Theorem 14](#dimension and positive intersection){reference-type="ref" reference="dimension and positive intersection"}, $\{\phi^{-1}(T_1), \phi^{-1}(T_2), \cdots, \phi^{-1}(T_{p+q})\}$ is a collection of arcs that do not intersect in the interior of $A_{p,q}$, which provides a triangulation of $A_{p,q}$ by [@ABGP2010 Proposition 2.1]. This completes the proof of the theorem. ◻ **Remark 20**. The following general result has been stated in [@Hubner1996] for weighted projective lines of arbitrary type. We give a short proof for weight type $(p,q)$ by using the correspondence in Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}. **Corollary 21**. *Let $\overline{T}$ be an almost complete tilting sheaf in the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$.
Then there exist precisely two complements of $\overline{T}$.* *Proof.* By the proof of Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}, we see that $\overline{T}$ can be represented by $p+q-1$ non-intersecting arcs in $\mathcal{C}.$ To get a triangulation of $A_{p,q}$, we need to add exactly one further arc, which must be a diagonal of a quadrilateral. Since there are exactly two diagonals in a quadrilateral, we get two arcs belonging to $\mathcal{C}$ (after attaching suitable orientations), which represent the two complements of $\overline{T}$. ◻ ## Flips and mutations In this subsection, we study the compatibility between the flip of an arc in a triangulation and the mutation of an indecomposable direct summand of the corresponding tilting sheaf. **Definition 22**. ([@GengSF2020]) Let $T=\overline{T}\oplus X$ and $T^{\prime}=\overline{T}\oplus X^{\prime}$ be two tilting sheaves in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, where $X$ and $X^{\prime}$ are two non-isomorphic indecomposable objects in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. Then $T^{\prime}$ is called the *mutation* of $T$ at $X$, denoted by $T^{\prime}=\mu_{X}(T)$. In this paper, we call $X^{\prime}$ the *mutation* of $X$ w.r.t. $T$ and write $\mu_{T}(X)=X^{\prime}.$ Let $\gamma$ be an arc of a triangulation $\Gamma$ of $A_{p,q}$. Then $\gamma$ is one diagonal of the quadrilateral formed by the two triangles of $\Gamma$ that contain $\gamma$. Recall from [@ABGP2010] that the *flip* of $\gamma$ replaces the arc $\gamma$ by the other diagonal $\gamma^{\prime}$ of the same quadrilateral (cf. Figure [\[figure of flip\]](#figure of flip){reference-type="ref" reference="figure of flip"}). Keeping all other arcs unchanged, one obtains a new triangulation $\mu_{\gamma}(\Gamma)$.
For convenience, we call $\gamma^{\prime}$ the *flip* of $\gamma$ w.r.t. $\Gamma$, and write $\gamma^{\prime}=\mu_{\Gamma}(\gamma).$ For a triangulation $\Gamma$ of $A_{p,q}$, by choosing suitable orientations for the bridging arcs, we can always assume that $\mu_{\gamma}(\Gamma)$ is also a triangulation of $A_{p,q}$. By Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}, we know that a triangulation $\Gamma$ corresponds to a tilting sheaf $T$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, and $\phi(\gamma)$ is an indecomposable direct summand of $T$. The following proposition shows that the flip of $\gamma$ and the mutation of $\phi(\gamma)$ are compatible. **Proposition 23**. *Let $\Gamma$ be a triangulation of $A_{p,q}$ and $\gamma$ be an arc in $\Gamma$. Then $$\phi(\mu_{\Gamma}(\gamma))=\mu_{T}(\phi(\gamma)).$$* *Proof.* Let $\gamma,\,\gamma_1,\,\gamma_2,\,\cdots, \gamma_{p+q-1}$ be the arcs of the triangulation $\Gamma$. Then $\mu_{\gamma}(\Gamma)$ is the triangulation of $A_{p,q}$ with arcs $\gamma^{\prime},\,\gamma_1,\,\gamma_2,\,\cdots, \gamma_{p+q-1}$. By Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}, we get two tilting sheaves $$T=\phi(\gamma)\oplus\phi(\gamma_1)\oplus\phi(\gamma_2)\oplus\cdots\oplus\phi(\gamma_{p+q-1})$$ and $$T^{\prime}=\phi(\gamma^{\prime})\oplus\phi(\gamma_1)\oplus\phi(\gamma_2)\oplus\cdots\oplus\phi(\gamma_{p+q-1})$$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. It follows that $\phi(\mu_{\Gamma}(\gamma))=\phi(\gamma^{\prime})=\mu_{T}(\phi(\gamma))$.
◻ # Tilting bundles and bundle-mutations In this section, we use the geometric model to investigate tilting bundles in the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, based on Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"} and Proposition [Proposition 23](#mutation and flip){reference-type="ref" reference="mutation and flip"}. Set $$\mathcal{T}^{\nu}_{\mathbb{X}}:=\{{\rm Tilting \ bundles \ in} \ {\rm coh}\mbox{-}\mathbb{X}(p,q)\}.$$ ## A combinatorial description of $\mathcal{T}^{\nu}_{\mathbb{X}}$ Let $T=\bigoplus\limits_{i=1}^{p+q}T_{i}$ be a tilting bundle in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. By Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}, we can view indecomposable objects in the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ as oriented curves in the marked annulus $A_{p,q}$. Hence each $T_{i}$ corresponds to a positive bridging arc $[D^{a_{i}}_{b_{i}}]$, where $a_{i}\in \frac{\mathbb{Z}}{p}$ and $b_{i}\in \frac{\mathbb{Z}}{q}$ for $1\leq i\leq p+q$. Moreover, by Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}, $T$ corresponds to a triangulation $\Gamma$ with arcs $[D^{a_{i}}_{b_{i}}], 1\leq i\leq p+q.$ Therefore, $\{D^{a_{i}}_{b_{i}}, 1\leq i\leq p+q\}$ is a triangulation of a parallelogram in $\mathbb{U}$. Thanks to [\[periodicity of covering map\]](#periodicity of covering map){reference-type="eqref" reference="periodicity of covering map"}, without loss of generality we may assume that $T$ has the following *normal form*: $$\label{assumption on chains} 0=a_1< a_2\leq \cdots\leq a_{p+q}=1, {\quad} b_1\leq b_2\leq \cdots\leq b_{p+q}.$$ **Convention 1**.
*From now on, for a tilting bundle $T=\bigoplus\limits_{i=1}^{p+q}T_i$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, we always assume that $T=\bigoplus\limits_{i=1}^{p+q}[D^{a_{i}}_{b_{i}}]$ satisfies [\[assumption on chains\]](#assumption on chains){reference-type="eqref" reference="assumption on chains"}, and we denote by $\mu_{i}(T)$ the mutation of $T$ at $T_{i}$, where $i$ is taken modulo $p+q$. In particular, $\mu_{i-1}(T)=\mu_{p+q}(T)$ for $i=1$, and $\mu_{i+1}(T)=\mu_{1}(T)$ for $i=p+q.$* We will give a combinatorial description of the set of tilting bundles $\mathcal{T}^{\nu}_{\mathbb{X}}$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. For this, we consider the following set $$\Lambda_{(p,q)}^0:=\{(c_1, c_2, \cdots, c_p)\in\mathbb{Z}^{p}|c_1\leq c_2\leq \cdots\leq c_p\leq c_1+q\}.$$ **Theorem 24**. *There exists a bijection between $\mathcal{T}^{\nu}_{\mathbb{X}}$ and $\Lambda_{(p,q)}^0$.* *Proof.* For any tilting bundle $T$, by Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}, $T$ corresponds to a triangulation $\Gamma$ of (a parallelogram in) $\mathbb{U}$. Then for any $1\leq i\leq p$, the segment $[(\frac{i-1}{p})_{\partial}, (\frac{i}{p})_{\partial}]$ belongs to a unique triangle in $\Gamma$: Therefore, we obtain a sequence $(c_1, c_2, \cdots, c_p)\in\mathbb{Z}^{p}$. Since $\Gamma$ is a triangulation, we obtain that $\frac{c_1}{q}\leq \frac{c_2}{q}\leq\cdots\leq \frac{c_p}{q}\leq\frac{c_1}{q}+1,$ i.e., $c_1\leq c_2\leq\cdots\leq c_p\leq c_1+q.$ That is, $(c_1, c_2, \cdots, c_p)\in \Lambda_{(p,q)}^0.$ This defines a map $$\label{the map phi} \varphi: \mathcal{T}^{\nu}_{\mathbb{X}}\longrightarrow \Lambda_{(p,q)}^0; \quad T\mapsto (c_1, c_2, \cdots, c_p).$$ By Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}, we know that $\varphi$ is injective. Now we show that it is surjective.
For any $(c_1, c_2, \cdots, c_p)\in \Lambda_{(p,q)}^0$, we can construct $p$ triangles as in Figure [\[p-triangles\]](#p-triangles){reference-type="ref" reference="p-triangles"}. For any marked point $c_{\partial^{\prime}}$ satisfying $\frac{c_i}{q}< c< \frac{c_{i+1}}{q}$, there exists a unique bridging arc connecting to $c$ which does not intersect the above $p$ triangles, namely, the arc $D^{\frac{i}{p}}_{c}$ (see the following picture): Therefore, the $p$ triangles in Figure [\[p-triangles\]](#p-triangles){reference-type="ref" reference="p-triangles"} can be extended to a unique triangulation $\Gamma$ of a parallelogram in $\mathbb{U}$. Then by the bijection between tilting bundles and triangulations, we obtain a tilting bundle $T$, which satisfies $\varphi(T)=(c_1, c_2, \cdots, c_p)$ by construction. Hence $\varphi$ is surjective. We are done. ◻ ## Bundle-mutations Let $T$ and $T^{\prime}$ be two tilting sheaves in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ such that $T^{\prime}$ is a mutation of $T$. If $T, \,T^{\prime}$ are both tilting bundles, then such a mutation is called a *bundle-mutation*, cf. [@GengSF2020]. **Proposition 25**. *Let $T=\bigoplus\limits_{i=1}^{p+q}T_{i}$ be a tilting bundle in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$.* - *$\mu_{i}(T)$ is a tilting bundle if and only if the positions of the arcs in $\mathbb{U}$ corresponding to $T_{i-1}, T_{i}, T_{i+1}$ satisfy exactly one of the following two conditions:* - *$\mu_{i}(T)$ is not a tilting bundle if and only if the positions of the arcs in $\mathbb{U}$ corresponding to $T_{i-1}, T_{i}, T_{i+1}$ satisfy exactly one of the following two conditions:* *Proof.* The necessity is obvious, so we only prove the sufficiency. By Proposition [Proposition 23](#mutation and flip){reference-type="ref" reference="mutation and flip"}, $T_i$ and $\mu_{T}(T_{i})$ correspond to the two diagonals of the same quadrilateral.
If $\mu_i(T)$ is a tilting bundle, then $\mu_{T}(T_i)$ is a line bundle; it follows that $\mu_{T}(T_i)$ corresponds to a positive bridging arc of $A_{p,q}$ by Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}. Thus statement $(1)$ holds. If $\mu_i(T)$ is not a tilting bundle, then $\mu_{T}(T_i)$ corresponds to a peripheral arc of $A_{p,q}$ by Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}. Therefore, statement $(2)$ holds. ◻ **Proposition 26**. *Assume $T=\bigoplus\limits_{i=1}^{p+q}T_{i}$ is a tilting bundle in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, and $\mu_{i}(T)$ is also a tilting bundle for some $1 \leq i\leq p+q$. Then* - *$\mu_{i-1}(T)$ is a tilting bundle if and only if $\mu_{i-1}(\mu_{i}(T))$ is not a tilting bundle;* - *$\mu_{i+1}(T)$ is a tilting bundle if and only if $\mu_{i+1}(\mu_{i}(T))$ is not a tilting bundle.* *Proof.* We only prove statement (1); statement (2) is similar. If $\mu_{i-1}(T)$ is a tilting bundle, then by Proposition [Proposition 25](#position){reference-type="ref" reference="position"}, the positions of the arcs in $\mathbb{U}$ corresponding to $T_{i-2}, T_{i-1}, T_{i}, T_{i+1}$ satisfy exactly one of the following two conditions: It is easy to see that $\mu_{i-1}(\mu_{i}(T))$ is not a tilting bundle in both situations. Conversely, if $\mu_{i-1}(\mu_{i}(T))$ is not a tilting bundle, we claim that $\mu_{i-1}(T)$ is a tilting bundle. Suppose to the contrary that $\mu_{i-1}(T)$ is not a tilting bundle. Then by Proposition [Proposition 25](#position){reference-type="ref" reference="position"}, the positions of the arcs in $\mathbb{U}$ corresponding to $T_{i-2}, T_{i-1}, T_{i}, T_{i+1}$ satisfy exactly one of the following two conditions: It follows that $\mu_{i-1}(\mu_{i}(T))$ is a tilting bundle in both cases, a contradiction. We are done.
◻ ## Bundle-mutation index $n(T)$ For any tilting bundle $T=\bigoplus\limits_{i=1}^{p+q}T_{i}$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, let $$I(T)=\{i|\mu_{i}(T) \;{\rm is\;a\; tilting \; bundle}\},$$ and define the *bundle-mutation index* of $T$ as $n(T)=|I(T)|.$ Under the bijection from ([\[the map phi\]](#the map phi){reference-type="ref" reference="the map phi"}), $$\varphi: \mathcal{T}^{\nu}_{\mathbb{X}}\longrightarrow \Lambda_{(p,q)}^0; \quad T\mapsto (c_1, c_2, \cdots, c_p),$$ we define $$J(T)=\{i|c_i=c_{i+1}\;\, {\rm for\;} 1\leq i\leq p-1\},$$ and set $r(T)=|J(T)|.$ **Proposition 27**. *For any tilting bundle $T$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, we have $n(T)=2(p-r(T))$. In particular, $2 \leq n(T)\leq 2p$.* *Proof.* For any tilting bundle $T$, there exists a unique $(c_1, c_2, \cdots, c_p)\in\Lambda_{(p,q)}^0$ associated to $T$ by Theorem [Theorem 24](#bijection between tilting bundle and combination vertices){reference-type="ref" reference="bijection between tilting bundle and combination vertices"}; moreover, $T$ corresponds to the following triangulation of (a parallelogram in) $\mathbb{U}$: We set $m_{T}(T_i)=1$ if $\mu_{i}(T)$ is a tilting bundle and $m_{T}(T_i)=0$ otherwise. Combining the above figure with Proposition [Proposition 25](#position){reference-type="ref" reference="position"}, we get $$m_{T}([D^{b}_{a}])= \left\{ \begin{array}{lll} 1,&& {\rm if\; } a=\frac{c_i}{q},\, b=\frac{i-1}{p} \;{\rm or}\;b=\frac{i}{p} {\rm \; for\; some\;} i;\\ 0,&& {\rm otherwise} . \end{array} \right.$$ It follows that $n(T)=2(p-r(T))$, and hence $2\leq n(T)\leq 2p.$ ◻ By Proposition [Proposition 27](#the bound of index){reference-type="ref" reference="the bound of index"}, we have the following corollary. **Corollary 28**. *Keep notations as above. Let $T$ be a tilting bundle in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$.
Then* - *$n(T)=2$ if and only if $c_1=c_2=\cdots=c_p.$* - *$n(T)=2p$ if and only if $c_1< c_2<\cdots< c_p.$* **Remark 29**. Let $T$ be a tilting bundle, which corresponds to a triangulation $\Gamma$ of $A_{p,q}$ by Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"}. Let $Q_{\Gamma}$ be the quiver associated to the triangulation $\Gamma$. Then $n(T)=2$ if and only if $\mathbf{k}Q_{\Gamma}$ is a canonical algebra of type $(p,q)$. For any $a\in \frac{\mathbb{Z}}{p},\, b\in \frac{\mathbb{Z}}{q}$, denote by $$T^{a}_{b}=[D^{a}_{b}]\oplus [D^{a+\frac{1}{p}}_{b}]\oplus [D^{a+\frac{1}{p}}_{b+\frac{1}{q}}]\oplus [D^{a+\frac{2}{p}}_{b+\frac{1}{q}}]\oplus \cdots\oplus [D^{a+1}_{b+\frac{p-1}{q}}]\oplus [D^{a+1}_{b+\frac{p}{q}}]\oplus \cdots\oplus [D^{a+1}_{b+\frac{q-1}{q}}].$$ Intuitively, $T^{a}_{b}$ corresponds to the following triangulation of a parallelogram in $\mathbb{U}$ up to shift: We have the following observation. **Proposition 30**. *For any $a\in \frac{\mathbb{Z}}{p},\, b\in \frac{\mathbb{Z}}{q}$, $T^{a}_{b}$ is a tilting bundle with $n(T^{a}_{b})=2p$. Moreover, $$\mu_{2p-1}\cdot\mu_{2p-3}\cdots\mu_{1}(T^{a}_{b})=T^{a}_{b-\frac{1}{q}}\;\;{\rm and}\;\;\mu_{2p}\cdot\mu_{2p-2}\cdots\mu_{2}(T^{a}_{b})=T^{a}_{b+\frac{1}{q}}.$$* *Proof.* The first assertion follows immediately from Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"} and Proposition [Proposition 27](#the bound of index){reference-type="ref" reference="the bound of index"}.
Note that the triangulation corresponding to $\mu_{2p-1}\cdot\mu_{2p-3}\cdots\mu_{1}(T^{a}_{b})$ can be obtained from that of $T^{a}_{b}$ (black arcs and green arcs) by iterated mutations, where each mutation $\mu_{2i-1}$ for $1\leq i\leq p$ replaces one green arc by a red arc as below: More precisely, $$\begin{aligned} \nonumber &\mu_{2p-1}\cdot\mu_{2p-3}\cdots\mu_{1}(T^{a}_{b})\nonumber\\ &=[D^{a+\frac{p+1}{p}}_{b+\frac{q-1}{q}}]\oplus[D^{a+\frac{1}{p}}_{b}]\oplus[D^{a+\frac{2}{p}}_{b}]\oplus\cdots\oplus [D^{a+1}_{b+\frac{p-2}{q}}]\oplus\cdots\oplus[D^{a+1}_{b+\frac{q-1}{q}}]\nonumber\\ &=[D^{a}_{b-\frac{1}{q}}]\oplus[D^{a+\frac{1}{p}}_{b-\frac{1}{q}}]\oplus[D^{a+\frac{1}{p}}_{b}]\oplus[D^{a+\frac{2}{p}}_{b}]\oplus\cdots\oplus [D^{a+1}_{b+\frac{p-2}{q}}]\oplus\cdots\oplus[D^{a+1}_{b+\frac{q-2}{q}}]\nonumber\\ &=T^{a}_{b-\frac{1}{q}}.\nonumber\end{aligned}$$ Similarly, one can obtain the other equation. We are done. ◻ The tilting bundle $T^{a}_{b}$ plays an important role in the next subsection on the connectedness of the tilting graph of the category of vector bundles. ## Connectedness of the tilting graph $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ Recall from [@BKL2008; @GengSF2020] that the *tilting graph* $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ has as vertices the isomorphism classes of tilting sheaves in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, and two vertices are connected by an edge if and only if the associated tilting sheaves differ by precisely one indecomposable direct summand. The *tilting graph* $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ of ${\rm vec}\mbox{-}\mathbb{X}(p,q)$ is the full subgraph of $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ consisting of tilting bundles. The connectedness of the tilting graph for weighted projective lines has been widely investigated in the literature from the categorical perspective; see for example [@BKL2008; @HU2005; @GengSF2020; @FG383].
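The connectedness statement can also be sanity-checked numerically for small weights. The following Python sketch is our own illustration, not part of the proofs: it enumerates the parameter vectors of $\Lambda^0_{(p,q)}$ from Theorem [Theorem 24](#bijection between tilting bundle and combination vertices){reference-type="ref" reference="bijection between tilting bundle and combination vertices"} inside a finite window and runs a breadth-first search along the edge rule used for the graph $\Lambda_{(p,q)}$, namely that two vectors are adjacent when they differ by $\pm 1$ in a single coordinate. All function names are ad hoc.

```python
from collections import deque
from itertools import product

def window_vertices(p, q):
    """Vectors (c_1,...,c_p) of Lambda^0_{(p,q)} in the finite window
    [0, q]^p: weakly increasing with c_p <= c_1 + q."""
    return {c for c in product(range(q + 1), repeat=p)
            if all(c[i] <= c[i + 1] for i in range(p - 1))
            and c[-1] <= c[0] + q}

def neighbours(c, q):
    """Vectors obtained from c by changing one coordinate by +-1
    while staying inside Lambda^0_{(p,q)}."""
    result = []
    for i in range(len(c)):
        for d in (1, -1):
            e = list(c)
            e[i] += d
            e = tuple(e)
            if (all(e[j] <= e[j + 1] for j in range(len(e) - 1))
                    and e[-1] <= e[0] + q):
                result.append(e)
    return result

def reachable(start, verts, q):
    """Breadth-first search restricted to the given vertex set."""
    seen, todo = {start}, deque([start])
    while todo:
        for e in neighbours(todo.popleft(), q):
            if e in verts and e not in seen:
                seen.add(e)
                todo.append(e)
    return seen

for p, q in [(1, 2), (2, 2), (2, 3), (3, 4)]:
    V = window_vertices(p, q)
    assert reachable((0,) * p, V, q) == V  # the window forms one component
```

This only checks a finite window of the infinite vertex set, so it is a consistency check rather than a proof; the actual argument is the mutation-reduction carried out below.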
In this subsection, we investigate the connectedness of $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ by using our geometric model. Let $T$ and $T^{\prime}$ be tilting bundles in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. We say that $T$ *bundle-mutates to* $T'$ if there is a sequence of tilting bundles $T=T^{0},\,T^1,\,\cdots, T^{n-1}$, $T^{n}=T^{\prime}$ such that $T^{i}$ is a mutation of $T^{i-1}$ for any $1\leq i\leq n$. Under the bijection of Theorem [Theorem 8](#corresponding){reference-type="ref" reference="corresponding"}, we say that a triangulation $\Gamma$ *bundle-flips to* $\Gamma^{\prime}$ if the associated tilting bundles $T$ and $T'$ satisfy that $T$ bundle-mutates to $T'$. By Convention [Convention 1](#order){reference-type="ref" reference="order"}, we know that the direct summands of any tilting bundle $T$ can be arranged in a unique order. Hence we can define a map $\iota$ from the set of tilting bundles in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ to $\mathbb{Z}_2^{p+q}$ by setting $\iota(T)_{i}=1$ if $\mu_{i}(T)$ is a tilting bundle, and $\iota(T)_{i}=0$ otherwise. For example, in the case $p=2$ and $q=2$, if $\mu_{1}(T), \mu_2(T)$ are tilting bundles and $\mu_{3}(T), \mu_4(T)$ are not tilting bundles, then we have $\iota(T)=(1,1,0,0)$. **Lemma 31**. *Let $T$ be a tilting bundle in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ and assume $$\iota(T)=(\underbrace{1, \cdots, 1}_{s_1\ \text{times}},\underbrace{0, \cdots, 0}_{t_1\ \text{times}},\underbrace{1, \cdots, 1}_{s_2\ \text{times}},\underbrace{0, \cdots, 0}_{t_2\ \text{times}}).$$ Then $T$ bundle-mutates to some tilting bundle $T^{\prime}$ satisfying $$\iota(T^{\prime})=(\underbrace{1, \cdots, 1}_{m_1\ \text{times}},\underbrace{0, \cdots, 0}_{n_1\ \text{times}},\underbrace{1, \cdots, 1}_{m_2\ \text{times}})$$ and $m_1+m_2\geq s_1+s_2.$* *Proof.* Note that $p+q=s_1+t_1+s_2+t_2$. We divide the proof into two cases according to whether $s_2$ is even or odd.
If $s_2$ is even, then $s_2=2s$ for some $s\in \mathbb{Z}_{\geq 1}$. Set $$\mu_{[a,b]}=\mu_a\cdot\mu_{a-1}\cdots\mu_{b}, \;\;b\leq a.$$ Let $$\nu_{k}=\mu_{[p+q+1-2k, \,\,p+q-t_2+2-2k ]},\;\;1\leq k\leq s.$$ By iterative use of Proposition [Proposition 26](#effect of tilting bundle-mutation){reference-type="ref" reference="effect of tilting bundle-mutation"}, we obtain $$\iota(\nu_{s}\cdot\nu_{s-1}\cdots\nu_1(T))=(\underbrace{1, \cdots, 1}_{s_1\ \text{times}},\underbrace{0, \cdots, 0}_{t_1+t_2\ \text{times}},\underbrace{1, \cdots, 1}_{s_2\ \text{times}}),$$ and $\nu_{s}\cdot\nu_{s-1}\cdots\nu_1(T)$ is a tilting bundle. If $s_2$ is odd, then $s_2=2s+1$ for some $s\in \mathbb{Z}_{\geq 0}$. We have $$\iota(\nu_{s}\cdot\nu_{s-1}\cdots\nu_1(T))=(\underbrace{1, \cdots, 1}_{s_1\ \text{times}},\underbrace{0, \cdots, 0}_{t_1\ \text{times}},1,\underbrace{0, \cdots, 0}_{t_2\ \text{times}},\underbrace{1, \cdots, 1}_{2s\ \text{times}}).$$ If $t_2\geq t_1,$ let $$\delta_{k}=\mu_{[p+q-2s-2k,\,\,p+q-t_2-2s+1-k]},\;\;1\leq k\leq t_1.$$ Then $$\iota(\delta_{t_1}\cdot\delta_{t_1-1}\cdots\delta_{1}\cdot\nu_{s}\cdot\nu_{s-1}\cdots\nu_{1}(T))=(\underbrace{1, \cdots, 1}_{s_1+1\ \text{times}},\underbrace{0, \cdots, 0}_{t_2-t_1\ \text{times}},\underbrace{1, \cdots, 1}_{2(s+t_1) \text{times}}),$$ and $s_{1}+1+2(s+t_1)=s_1+s_2+t_1\geq s_{1}+s_{2}$. If $t_2 < t_1$, let $$\delta_{k}=\mu_{s_1+2k}\cdot\mu_{s_1+2k+1}\cdots\mu_{s_1+t_1+k},\;\;1\leq k\leq t_2.$$ Then $$\iota(\delta_{t_2}\cdot\delta_{t_2-1}\cdots\delta_{1}\cdot\nu_{s}\cdot\nu_{s-1}\cdots\nu_{1}(T))=(\underbrace{1, \cdots, 1}_{s_1+2t_2\ \text{times}},\underbrace{0, \cdots, 0}_{t_1-t_2\ \text{times}},\underbrace{1, \cdots, 1}_{s_2\ \text{times}}),$$ and $s_{1}+2t_{2}+s_2\geq s_{1}+s_{2}$. We are done. ◻ Now we can state the main result of this subsection. **Proposition 32**.
*Any tilting bundle $T$ in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ bundle-mutates to $T^{a}_{b}$ for some $a\in \frac{\mathbb{Z}}{p},\, b\in \frac{\mathbb{Z}}{q}$. Consequently, the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ is connected.* *Proof.* According to ([\[periodicity of covering map\]](#periodicity of covering map){reference-type="ref" reference="periodicity of covering map"}) and Proposition [Proposition 27](#the bound of index){reference-type="ref" reference="the bound of index"}, we can always assume that $\mu_1(T)$ is a tilting bundle. Hence $\iota(T)$ must take one of the following two forms, according to whether $\mu_{p+q}(T)$ is a tilting bundle or not: $$\iota(T)=(\underbrace{1, \cdots, 1}_{s_1\ \text{times}},\underbrace{0, \cdots, 0}_{t_1\ \text{times}},\underbrace{1, \cdots, 1}_{s_2\ \text{times}},\underbrace{0, \cdots, 0}_{t_2\ \text{times}}, \underbrace{1, \cdots, 1}_{s_3\ \text{times}},\cdots\cdots\underbrace{1, \cdots, 1}_{s_m\ \text{times}}),$$ or $$\iota(T)=(\underbrace{1, \cdots, 1}_{s_1\ \text{times}},\underbrace{0, \cdots, 0}_{t_1\ \text{times}},\underbrace{1, \cdots, 1}_{s_2\ \text{times}},\underbrace{0, \cdots, 0}_{t_2\ \text{times}},\underbrace{1, \cdots, 1}_{s_3\ \text{times}},\cdots\cdots\underbrace{1, \cdots, 1}_{s_m\ \text{times}},\underbrace{0, \cdots, 0}_{t_m\ \text{times}}).$$ Using Lemma [Lemma 31](#4 to 3){reference-type="ref" reference="4 to 3"}, we can reduce $\iota(T)$ to one of the following shapes: $$(\underbrace{1, \cdots, 1}_{m_1^{\prime}\ \text{times}},\underbrace{0, \cdots, 0}_{n_1^{\prime}\ \text{times}},\underbrace{1, \cdots, 1}_{m_2^{\prime}+s_3\ \text{times}},\underbrace{0, \cdots, 0}_{t_3\ \text{times}}\cdots\cdots\underbrace{1, \cdots, 1}_{s_m\ \text{times}})$$ or $$(\underbrace{1, \cdots, 1}_{m_1^{\prime}\ \text{times}},\underbrace{0, \cdots, 0}_{n_1^{\prime}\ \text{times}},\underbrace{1, \cdots, 1}_{m_2^{\prime}+s_3\ \text{times}},\underbrace{0, \cdots, 0}_{t_3\ \text{times}}\cdots\cdots\underbrace{1, \cdots, 1}_{s_m\ \text{times}},\underbrace{0, \cdots, 0}_{t_m\ \text{times}})$$ with $m_1^{\prime}+m_2^{\prime}\geq s_1+s_2$, respectively. By iterating the above process, one finally obtains that $T$ bundle-mutates to a tilting bundle $T^{\prime}$ such that $$\iota(T^{\prime})=(\underbrace{1, \cdots, 1}_{m_1\ \text{times}},\underbrace{0, \cdots, 0}_{n_1\ \text{times}},\underbrace{1, \cdots, 1}_{m_2\ \text{times}})$$ and $m_1+m_2\geq s_1+s_2+\cdots+s_m$. Assume $T'=\bigoplus\limits_{i=1}^{p+q}[D^{a_{i}}_{b_{i}}]$ with $0= a_1< a_2\leq \cdots\leq a_{p+q}= 1,\,b_1\leq b_2\leq \cdots\leq b_{p+q}$. According to ([\[periodicity of covering map\]](#periodicity of covering map){reference-type="ref" reference="periodicity of covering map"}), we have $$T'=(\bigoplus\limits_{i=1}^{m_{1}+n_{1}}[D^{a_{i}}_{b_{i}}]) \oplus(\bigoplus\limits_{i=m_{1}+n_{1}+1}^{p+q}[D^{a_{i}}_{b_{i}}]) =(\bigoplus\limits_{i=m_{1}+n_{1}+1}^{p+q}[D^{a_{i}-1}_{b_{i}-1}]) \oplus(\bigoplus\limits_{i=1}^{m_{1}+n_{1}}[D^{a_{i}}_{b_{i}}]),$$ which implies that $$\iota(T^{\prime})=(\underbrace{1, \cdots, 1}_{m_1+m_2\ \text{times}},\underbrace{0, \cdots, 0}_{n_1\ \text{times}}).$$ By the proof of Proposition [Proposition 27](#the bound of index){reference-type="ref" reference="the bound of index"}, we get $T^{\prime}=T^{a}_{b}$ for some $a\in \frac{\mathbb{Z}}{p},\, b\in \frac{\mathbb{Z}}{q}$. Combining this with Proposition [Proposition 30](#translation){reference-type="ref" reference="translation"}, we obtain the connectedness of $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$. This finishes the proof. ◻ # Tilting graph In this section, we give a more explicit description of the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ and the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ of ${\rm vec}\mbox{-}\mathbb{X}(p,q)$, respectively, by using the geometric model.
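Before turning to the local shape, we record a small counting observation (our own illustration, not a statement from the paper). Theorem [Theorem 24](#bijection between tilting bundle and combination vertices){reference-type="ref" reference="bijection between tilting bundle and combination vertices"} parametrizes tilting bundles by $\Lambda^0_{(p,q)}$; the slice of $\Lambda^0_{(p,q)}$ with first coordinate $c_1=0$ is finite, consisting of the weakly increasing vectors $(0,c_2,\cdots,c_p)$ with $c_p\leq q$, and a standard stars-and-bars count gives $\binom{p+q-1}{p-1}$ such vectors. A minimal Python check (function names are ad hoc):

```python
from itertools import combinations_with_replacement
from math import comb

def slice_c1_zero(p, q):
    """Vectors (0, c_2, ..., c_p) in Lambda^0_{(p,q)}: weakly increasing
    with 0 <= c_2 <= ... <= c_p <= q (the bound c_p <= c_1 + q)."""
    return [(0,) + tail
            for tail in combinations_with_replacement(range(q + 1), p - 1)]

for p, q in [(1, 5), (2, 2), (2, 3), (3, 3)]:
    # stars and bars: weakly increasing (p-1)-tuples in {0, ..., q}
    assert len(slice_c1_zero(p, q)) == comb(p + q - 1, p - 1)
```

This finite slice is convenient for experimenting with the vertex set of the graphs studied in this section.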
## Local shape of the tilting graphs In this subsection, we consider the local shape of the tilting graphs $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ and $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ respectively. For the case $p=q=1$, i.e., for the classical projective line $\mathbb{P}_{\mathbf k}^1$, it is well known that any tilting sheaf has the form $T_{n}:={\mathcal O}(n)\oplus {\mathcal O}(n+1)$ for some $n\in\mathbb{Z}$. Moreover, there exists a tilting mutation between $T_{n}$ and $T_{m}$ if and only if $m=n\pm1$. Therefore, the tilting subgraph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ coincides with the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$, which has the form: From now on we focus on the weighted projective line $\mathbb{X}(p,q)$ with $(p,q)\neq (1,1)$. **Proposition 33**. *Assume $(p,q)\neq (1,1)$. Let $T=\bigoplus\limits_{i=1}^{p+q}T_{i}$ be a tilting sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ and let $\Gamma$ be the associated triangulation of $A_{p,q}$. Let $\gamma_i$ be the arc in $\Gamma$ corresponding to $T_{i}$ for $1\leq i\leq p+q$.
For any $1\leq i\neq j\leq p+q,$ one of the following holds:* - *if the two arcs $\gamma_i$ and $\gamma_j$ are in the same triangle, then $\mu_{i}\mu_{j}\mu_{i}(T)=\mu_{i}\mu_{j}(T);$* - *if the two arcs $\gamma_i$ and $\gamma_j$ are not in the same triangle, then $\mu_{i}\mu_{j}(T)=\mu_{j}\mu_{i}(T).$* *Proof.* For any arc $\gamma_i$ in $\Gamma,$ there exists a quadrilateral formed by the two triangles of $\Gamma$ containing $\gamma_i$, drawn as follows: \(1\) If the two arcs $\gamma_i$ and $\gamma_j$ are in the same triangle, without loss of generality we may assume that the positions of $\gamma_i$ and $\gamma_j$ are as drawn below: By applying the flips $\mu_{i}$ and $\mu_{j}$ to the corresponding arcs, and noting that $(p,q)\neq (1,1)$, we have the following commutative diagram: Therefore $\mu_{i}\mu_{j}\mu_{i}(T)=\mu_{i}\mu_{j}(T).$ \(2\) If the two arcs $\gamma_i$ and $\gamma_j$ are not in the same triangle, then the mutations $\mu_{T}(T_i)$ and $\mu_{T}(T_j)$ do not affect each other. Therefore, we have $\mu_{i}\mu_{j}(T)=\mu_{j}\mu_{i}(T).$ ◻ As an immediate consequence of the above proposition, we obtain the following result, cf. [@FST2008 Theorem 3.10]. **Corollary 34**. *Assume $(p,q)\neq (1,1)$. Then the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ is composed of quadrilaterals and pentagons.* By considering the tilting subgraph $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ of tilting bundles, we have the following result. **Proposition 35**.
- *The tilting graph $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ of ${\rm vec}\mbox{-}\mathbb{X}(1,q)$ is a line.* - *For $2\leq p\leq q$, the tilting graph $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ of ${\rm vec}\mbox{-}\mathbb{X}(p,q)$ is composed of quadrilaterals.* *Proof.* (1) If ${\mathbb X}$ has weight type $(1,q)$, then by Theorem [Theorem 19](#triangulation and tilting){reference-type="ref" reference="triangulation and tilting"} any tilting bundle $T$ has the form $T^0_b$ for some $b\in \frac{\mathbb{Z}}{q}$, and its bundle-mutation index is $n(T)=2$ by Proposition [Proposition 27](#the bound of index){reference-type="ref" reference="the bound of index"}. Combining this with Proposition [Proposition 30](#translation){reference-type="ref" reference="translation"}, we obtain that the tilting graph $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ of ${\rm vec}\mbox{-}\mathbb{X}(1,q)$ is a line. \(2\) Assume ${\mathbb X}$ has weight type $(p,q)$ with $2\leq p\leq q$. Let $T=\bigoplus\limits_{i=1}^{p+q}T_{i}$ be a tilting bundle. Assume that $\mu_{i}(T)$ and $\mu_{j}(T)$ are both tilting bundles for some $i,j$, and let $\gamma_i$ and $\gamma_j$ be the arcs corresponding to $T_i$ and $T_j$ respectively. If $\mu_{j}\mu_{i}(T)$ is also a tilting bundle, then $\gamma_i$ and $\gamma_j$ are not in the same triangle by Proposition [Proposition 26](#effect of tilting bundle-mutation){reference-type="ref" reference="effect of tilting bundle-mutation"}. It follows that $\mu_{i}\mu_{j}(T)=\mu_{j}\mu_{i}(T)$ by Proposition [Proposition 33](#pentagon){reference-type="ref" reference="pentagon"} $(2)$. Hence, the tilting graph $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ of the vector bundle category is composed of quadrilaterals.
◻ ## Proof of Theorem [Theorem 2](#description of tilting graph of vector bundles){reference-type="ref" reference="description of tilting graph of vector bundles"} {#proof-of-theorem-description-of-tilting-graph-of-vector-bundles} Recall that $\Lambda_{(p,q)}$ is the graph with vertex set $$\Lambda_{(p,q)}^0=\{(c_1, \cdots, c_p)\in\mathbb{Z}^{p}|c_1\leq \cdots\leq c_p\leq c_1+q\},$$ where there is an edge between two vertices $(c_1, \cdots, c_p)$ and $(d_1, \cdots, d_p)$ if and only if $\sum_{i=1}^{p}|c_i-d_i|=1$, or equivalently, there exists a unique $j$ such that $$d_i= \left\{ \begin{array}{lll} c_i\pm 1&& i=j;\\ c_i&& i\neq j. \end{array} \right.$$ By Theorem [Theorem 24](#bijection between tilting bundle and combination vertices){reference-type="ref" reference="bijection between tilting bundle and combination vertices"}, the vertices of $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ and $\Lambda_{(p,q)}^0$ coincide. Let $T$ be a tilting bundle; recall from ([\[the map phi\]](#the map phi){reference-type="ref" reference="the map phi"}) that we may assume $\varphi(T)=(c_1, \cdots, c_{i-1}, c_i, c_{i+1}, \cdots, c_p).$ We define $$\mu_{i}^{+}(T):=(T\backslash [D^{\frac{i}{p}}_{\frac{c_i}{q}}])\oplus [D^{\frac{i-1}{p}}_{\frac{c_i+1}{q}}]$$ and $$\mu_{i}^{-}(T):=(T\backslash [D^{\frac{i-1}{p}}_{\frac{c_i}{q}}])\oplus [D^{\frac{i}{p}}_{\frac{c_i-1}{q}}].$$ Then $\mu_{i}^{+}(T)$ and $\mu_{i}^{-}(T)$ are both tilting bundles with $$\varphi(\mu_{i}^{\pm}(T))=(c_1, \cdots, c_{i-1}, c_i\pm 1, c_{i+1}, \cdots, c_p)=\varphi(T)\pm\epsilon_{i},$$ where $\epsilon_{i}=(0,\cdots,0,1,0,\cdots,0)$ is the $i$-th canonical row vector in $\mathbb{Z}^{p}.$ Therefore, if there exists an edge in the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$, then there exists a corresponding edge in $\Lambda_{(p,q)}.$ Moreover, according to Proposition [Proposition 27](#the bound of index){reference-type="ref" reference="the bound of index"}, there are $n(T)=2(p-r(T))$-many edges
attached to the tilting bundle $T$ in the graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu}).$ On the other hand, there is an edge between $\alpha=(c_1, c_2, \cdots,c_p)$ and $\beta=(d_1, d_2, \cdots, d_p)$ if and only if there exists a unique $j$ such that $$d_i= \left\{ \begin{array}{lll} c_i\pm 1&& i=j;\\ c_i&& i\neq j, \end{array} \right.$$ if and only if $\beta=\alpha\pm\epsilon_j$. Observe that $\beta=\alpha+\epsilon_i\in \Lambda_{(p,q)}^0$ if and only if $$c_1\leq c_2\leq\cdots\leq c_i< c_{i+1}\leq c_{i+2}\leq \cdots\leq c_p,$$ and $\beta=\alpha-\epsilon_i\in \Lambda_{(p,q)}^0$ if and only if $$c_1\leq c_2\leq \cdots\leq c_{i-1}< c_{i}\leq c_{i+1}\leq\cdots\leq c_{p}.$$ Therefore, $c_{i}=c_{i+1}$ if and only if $\alpha+\epsilon _i\notin \Lambda_{(p,q)}^0$ and $\alpha-\epsilon _{i+1}\notin \Lambda_{(p,q)}^0$. Assume $T=\varphi^{-1}(\alpha)$; then there are $2(p-r(T))=n(T)$-many edges attached to $\alpha$ in $\Lambda_{(p,q)}$. This finishes the proof.

## Typical examples of tilting graphs

**Example 36**. For the case of $p=1$ and $q=2$, recall that $$T^{0}_{\frac{a}{2}}:={\mathcal O}(-a\vec{x}_2)\oplus {\mathcal O}(-a\vec{x}_2+\vec{x}_2)\oplus{\mathcal O}(-a\vec{x}_2+\vec{c}),\,\,a\in\mathbb{Z}$$ is a tilting bundle. We set $$T_{\frac{a}{2}}={\mathcal O}(-a\vec{x}_2)\oplus{\mathcal O}(-a\vec{x}_2+\vec{c})\oplus S_{0,1};$$ then $T_{\frac{a}{2}}$ is a tilting sheaf but not a tilting bundle. By using Proposition [Proposition 33](#pentagon){reference-type="ref" reference="pentagon"}, the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ of ${\rm coh}\mbox{-}\mathbb{X}(1,2)$ can be drawn as below (see also [@W2008] Figure 8.1), where the second line is the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ of ${\rm vec}\mbox{-}\mathbb{X}(1,2)$.

**Example 37**.
For the case of $p=2$ and $q=2,$ by Theorem [Theorem 2](#description of tilting graph of vector bundles){reference-type="ref" reference="description of tilting graph of vector bundles"}, we obtain that the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ of ${\rm vec}\mbox{-}\mathbb{X}(2,2)$ can be described by the graph $\Lambda_{(2,2)}$; it follows that the shape of $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ can be drawn as below: Combining with Proposition [Proposition 33](#pentagon){reference-type="ref" reference="pentagon"}, the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ of ${\rm coh}\mbox{-}\mathbb{X}(2,2)$ can be drawn as below (see also [@W2008] Figure 7.2):

**Example 38**. Consider the case of $p=2$ and $q=3.$ By using Theorem [Theorem 2](#description of tilting graph of vector bundles){reference-type="ref" reference="description of tilting graph of vector bundles"}, the shape of the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}}^{\nu})$ of ${\rm vec}\mbox{-}\mathbb{X}(2,3)$ is drawn as below: The shape of the tilting graph $\mathcal{G}(\mathcal{T}_{\mathbb{X}})$ of ${\rm coh}\mbox{-}\mathbb{X}(2,3)$ is considerably more complicated, so we omit its description here.

# Geometric interpretation of automorphism group of ${\rm coh}\mbox{-}\mathbb{X}(p,q)$

This section is devoted to giving a geometric interpretation for the automorphism group ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ of the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. We show that ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ is isomorphic to the mapping class group of $A_{p,q}.$

## Mapping class group

Let us first recall from [@ASS2012] the mapping class group of the marked annulus $A_{p,q}$.
Write $A_{p,q}=(S, M)$, where $S$ is an annulus and $M$ is the set of all the marked points in $A_{p,q}.$ Recall that two homeomorphisms $f,\, g$ of $S$ are *isotopic* if there is a continuous function $H: S\times [0, 1]\longrightarrow S$ such that $H(x, 0)=f(x)$, $H(x, 1)=g(x)$ and $H(\cdot, t):S\longrightarrow S$ is a homeomorphism for each $t\in [0,1]$. Denote by ${\rm Homeo^{+}}(S, M)$ the group of orientation preserving homeomorphisms from $S$ to $S$ which map $M$ to $M$. Note that we do not require that the points in $M$ are fixed, that the points on the boundary of $S$ are fixed, or that each boundary component is mapped to itself. However, if a boundary component is mapped to another component, then $p=q$. **Definition 39**. A homeomorphism $f: S\longrightarrow S$ is *isotopic to the identity relative to* $M$, if $f$ is isotopic to the identity via an isotopy $H$ that fixes $M$ pointwise, i.e., $H(x, t)=x$ for all $x\in M$ and $t\in [0,1]$. Let ${\rm Homeo_0}(S, M)$ be the set $$\{f\in{\rm Homeo^{+}}(S, M)| f \;{\rm is\; isotopic\; to\; the\; identity\; relative\; to\;} M\}.$$ The *mapping class group* $\mathcal{MG} (S, M)$ of $(S, M)$ is defined to be the quotient $$\mathcal{MG} (S, M)={\rm Homeo^{+}}(S, M)/{\rm Homeo_0}(S, M).$$ Set $H_{p,q}=\langle r_1, r_2|r_1r_2=r_2r_1, r_1^{p}=r_2^{q}\rangle$ and $$\label{mapping class group} \widetilde{H}_{p,q}= \left\{ \begin{array}{ll} H_{p,q}, & p\neq q;\\ H_{p,q}\times\mathbb{Z}_2, & p=q.\\ \end{array} \right.$$ It is well known that the mapping class group $\mathcal{MG} (A_{p,q})$ of the marked annulus $A_{p,q}$ is isomorphic to $\widetilde{H}_{p,q}$; cf. [@ASS2012]. We identify them in the following. **Remark 40**.
In this paper, let $r_1$ (*resp.* $r_2$) be the rotation in anti-clockwise direction (*resp.* in clockwise direction) sending each marked point in $\partial$ (*resp.* $\partial^{\prime}$) to the next marked point in $\partial$ (*resp.* $\partial^{\prime}$), i.e., for any $a, b\in\mathbb{Z},$ $$\begin{aligned} &r_1((\frac{a}{p})_{\partial})=(\frac{a+1}{p})_{\partial}, &&r_1((\frac{b}{q})_{\partial^{\prime}})=(\frac{b}{q})_{\partial^{\prime}};\\ &r_2((\frac{a}{p})_{\partial})=(\frac{a}{p})_{\partial}, &&r_2((\frac{b}{q})_{\partial^{\prime}})=(\frac{b-1}{q})_{\partial^{\prime}}.\end{aligned}$$ For the case of $p=q$, there exists an involution $\sigma$ in $\mathcal{MG} (A_{p,q})$ which maps $\partial$ to $\partial^{\prime}$ and satisfies $\sigma((\frac{a}{p})_{\partial})=(-\frac{a}{p})_{\partial^{\prime}}$ for any $a\in\mathbb{Z}$; in particular, $\sigma(0_\partial)=0_{\partial^{\prime}}$. Let ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ be the automorphism group of the category ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, consisting of isomorphism classes of $\mathbf k$-linear self-equivalences on ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. Then we have **Proposition 41**. *${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))\cong \mathcal{MG} (A_{p,q}).$* *Proof.* By [@LenzingMelter2000 Theorem 3.4], we have $$\label{automorphsim group formula} {\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))\cong \left\{ \begin{array}{ll} \mathbb{L}(p,q), & p\neq q;\\ \mathbb{L}(p,q)\times\mathbb{Z}_2, & p=q.\\ \end{array} \right.$$ Observe that $\mathbb{L}(p,q)$ is generated by $\vec{x}_1, \vec{x}_2$ subject to the relation $p\vec{x}_1=q\vec{x}_2$.
By comparing [\[mapping class group\]](#mapping class group){reference-type="eqref" reference="mapping class group"} and [\[automorphsim group formula\]](#automorphsim group formula){reference-type="eqref" reference="automorphsim group formula"}, we see that the map $\vec{x}_1\mapsto r_1, \,\,\vec{x}_2\mapsto r_2$ gives a group isomorphism from $\mathbb{L}(p,q)$ to $H_{p,q}$, which induces a group isomorphism from ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ to $\mathcal{MG} (A_{p,q}).$ ◻ **Remark 42**. In the case $p=q$, recall from [@LenzingMelter2000] that there exists a geometric automorphism of $\mathbb{X}(p,q)$ given by exchanging the exceptional points $\infty$ and $0$, which induces an isomorphism $$\label{involution on category}\sigma_{1,2}: {\rm coh}\mbox{-}\mathbb{X}(p,q)\longrightarrow{\rm coh}\mbox{-}\mathbb{X}(p,q),$$ exchanging $S_{\infty,i}^{(j)}$ with $S_{0,i}^{(j)}$ for any $0\leq i\leq p-1$ and $j\in\mathbb{Z}_{\geq 1}$, and sending line bundles ${\mathcal O}(l_1\vec{x}_1+l_2\vec{x}_2+l{\vec{c}})$ to ${\mathcal O}(l_1\vec{x}_2+l_2\vec{x}_1+l{\vec{c}})$ for any $0\leq l_i\leq p-1$ with $i=1,2$ and $l\in\mathbb{Z}$. For any $\vec{x}\in\mathbb{L}(p,q)$, denote by $(\vec{x}):=-\otimes {\mathcal O}(\vec{x})$ the auto-equivalence functor in ${\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q))$ given by the grading twist with $\vec{x}$.
Then we have an isomorphism of groups $$\label{automor psi}\psi: \mathcal{MG} (A_{p,q})\longrightarrow{\rm Aut}({\rm coh}\mbox{-}\mathbb{X}(p,q)),$$ which sends $$r_1\mapsto (\vec{x}_1);\quad r_2\mapsto (\vec{x}_2); \quad {\text {and}} \quad \sigma\mapsto \sigma_{1,2} \;\;({\text {when}}\;\; p=q).$$

## Proof of Theorem [Theorem 1](#compatible){reference-type="ref" reference="compatible"} {#proof-of-theorem-compatible}

For any triangulation $\Gamma$ of the marked annulus $A_{p,q}$, we assume $\Gamma$ consists of the following positive bridging arcs and peripheral arcs: $$[D^{\frac{a_{i}}{p}}_{\frac{b_{i}}{q}}] \;(1\leq i\leq n_1),\; [D^{\frac{i_k-j_k-1}{p}, \frac{i_k}{p}}]\;(1\leq k\leq n_2),\; [D_{\frac{-i_k}{q}, \frac{j_k-i_k+1}{q}}]\;(1\leq k\leq n_3).$$ According to the map ([\[map\]](#map){reference-type="ref" reference="map"}), we obtain a tilting sheaf $$\phi(\Gamma)=\oplus_{i=1}^{n_1}\mathcal{O}(a_{i}\vec{x}_1-b_{i}\vec{x}_2)\oplus(\oplus_{k=1}^{n_2}S_{\infty,i_k}^{(j_k)})\oplus(\oplus_{k=1}^{n_3}S_{0,i_k}^{(j_k)}).$$ Recall from [\[automor psi\]](#automor psi){reference-type="eqref" reference="automor psi"} that $\psi$ is a group isomorphism; hence we only need to prove that ([\[compatible of groups action\]](#compatible of groups action){reference-type="ref" reference="compatible of groups action"}) holds for any generator $f\in\mathcal{MG} (A_{p,q}).$ First assume $f=r_1$; then $\psi(f)=(\vec{x}_1)$.
Recall that $$r_1((\frac{a}{p})_{\partial})=(\frac{a+1}{p})_{\partial},\,\, r_1((\frac{b}{q})_{\partial^{\prime}})=(\frac{b}{q})_{\partial^{\prime}},\,\,a, b\in\mathbb{Z}.$$ It follows that the triangulation $f(\Gamma)$ consists of the following arcs: $$[D^{\frac{a_{i}+1}{p}}_{\frac{b_{i}}{q}}] \;(1\leq i\leq n_1),\; [D^{\frac{i_k-j_k}{p}, \frac{i_k+1}{p}}]\;(1\leq k\leq n_2),\; [D_{\frac{-i_k}{q}, \frac{j_k-i_k+1}{q}}]\;(1\leq k\leq n_3).$$ Then $$\begin{aligned} \phi(f(\Gamma))=&\oplus_{i=1}^{n_1}\mathcal{O}((a_{i}+1)\vec{x}_1-b_{i}\vec{x}_2) \oplus(\oplus_{k=1}^{n_2}S_{\infty,i_k+1}^{(j_k)})\oplus(\oplus_{k=1}^{n_3}S_{0,i_k}^{(j_k)})\\ =&\phi(\Gamma)(\vec{x}_1)\\ =&\psi(f)(\phi(\Gamma)).\end{aligned}$$ Hence ([\[compatible of groups action\]](#compatible of groups action){reference-type="ref" reference="compatible of groups action"}) holds in this case. For $f=r_2$, the proof is similar. If $p=q$, then $\mathcal{MG} (A_{p,q})$ has a further generator, the involution $f=\sigma$, such that $\sigma((\frac{a}{p})_{\partial})=(-\frac{a}{p})_{\partial^{\prime}}$. It follows that the triangulation $f(\Gamma)$ consists of the following arcs: $$[D^{-\frac{b_{i}}{p}}_{-\frac{a_{i}}{p}}] \;(1\leq i\leq n_1),\; [D_{-\frac{i_k}{p}, \frac{j_k-i_k+1}{p}}]\,(1\leq k\leq n_2),\, [D^{\frac{i_k-j_k-1}{p}, \frac{i_k}{p}}]\,(1\leq k\leq n_3).$$ Then $$\phi (f(\Gamma))=\oplus_{i=1}^{n_1}\mathcal{O}(a_{i}\vec{x}_2-b_{i}\vec{x}_1)\oplus(\oplus_{k=1}^{n_2}S_{0,i_k}^{(j_k)})\oplus(\oplus_{k=1}^{n_3}S_{\infty,i_k}^{(j_k)})=\psi(f)(\phi(\Gamma)).$$ Hence ([\[compatible of groups action\]](#compatible of groups action){reference-type="ref" reference="compatible of groups action"}) holds. This finishes the proof. Recall from [@BQ2012] that the *exchange graph* $EG(A_{p,q})$ of $A_{p,q}$ has as vertices the triangulations of $A_{p,q}$, with an edge between two vertices $\Gamma$ and $\Gamma^{\prime}$ whenever $\Gamma^{\prime}$ is obtained from $\Gamma$ by the flip of an arc.
By Theorem [Theorem 1](#compatible){reference-type="ref" reference="compatible"}, we have the following corollary. **Corollary 43**. *For any $f\in \mathcal{MG} (A_{p,q})$, we have $\phi(f(EG(A_{p,q})))=\psi(f)(\phi(EG(A_{p,q})))$.*

## An involution on $\Lambda_{(p,p)}$

Recall from ([\[the map phi\]](#the map phi){reference-type="ref" reference="the map phi"}) that there exists a bijective map $$\varphi: \mathcal{T}^{\nu}_{\mathbb{X}}\longrightarrow \Lambda_{(p,p)}^0; \quad T\mapsto (c_1, c_2, \cdots, c_p)$$ where $\Lambda_{(p,p)}^0=\{(c_1, \cdots, c_p)\in\mathbb{Z}^{p}|c_1\leq \cdots\leq c_p\leq c_1+p\}.$ In case $p=q$, by [\[involution on category\]](#involution on category){reference-type="eqref" reference="involution on category"} there exists an involution $\sigma_{1,2}$ on ${\rm coh}\mbox{-}\mathbb{X}(p,p)$, which induces an automorphism on $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$, still denoted by $\sigma_{1,2}$. **Proposition 44**. *There exists an involution $\rho: \Lambda_{(p,p)}\longrightarrow \Lambda_{(p,p)}$ such that the following diagram commutes* *[\[commutative of varphirho\]]{#commutative of varphirho label="commutative of varphirho"}* *Proof.* Let $T$ be a tilting bundle in ${\rm coh}\mbox{-}\mathbb{X}(p,p)$ and recall from ([\[the map phi\]](#the map phi){reference-type="ref" reference="the map phi"}) that we may assume $$\varphi(T)=(c_1, \cdots, c_{i-1}, c_i, c_{i+1}, \cdots, c_p),$$ that is, $T$ corresponds to a triangulation $\Gamma$ of (a parallelogram in) $\mathbb{U}$ as follows: According to Remark [**Remark** 40](#rotation){reference-type="ref" reference="rotation"}, we obtain that $\sigma(\Gamma)$ is a triangulation of the following form: which corresponds to the tilting bundle $\sigma_{1,2}(T)$; see ([\[involution on category\]](#involution on category){reference-type="ref" reference="involution on category"}).
By using ([\[periodicity of covering map\]](#periodicity of covering map){reference-type="ref" reference="periodicity of covering map"}), the above triangulation can be converted to its normal form ([\[assumption on chains\]](#assumption on chains){reference-type="ref" reference="assumption on chains"}) as follows: This induces a well-defined map $$\rho: \Lambda_{(p,p)}^0\longrightarrow \Lambda_{(p,p)}^0, \quad (c_1, c_2, \cdots, c_p)\mapsto (d_1, d_2, \cdots, d_p).$$ By construction we have $\varphi(\sigma_{1,2}(T))=\rho(\varphi(T))$. According to the proof of Theorem [Theorem 2](#description of tilting graph of vector bundles){reference-type="ref" reference="description of tilting graph of vector bundles"}, we see that $\varphi$ is an isomorphism between the graphs $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$ and $\Lambda_{(p,p)}$. This proves the commutativity of the diagram. Moreover, since $\sigma_{1,2}$ is an involution, it follows that $\rho$ is also an involution. ◻ **Example 45**. For the case of $p=q=4,$ let $T$ be a tilting bundle with $\varphi(T)=(0, 0, 1, 4)$; that is, $T$ corresponds to a triangulation $\Gamma$ of (a parallelogram in) $\mathbb{U}$ as follows: Then $\sigma_{1,2}(T)$ corresponds to a triangulation $\sigma(\Gamma)$ of (a parallelogram in) $\mathbb{U}$ as follows: By using ([\[periodicity of covering map\]](#periodicity of covering map){reference-type="ref" reference="periodicity of covering map"}), we can change the above triangulation to its normal form (red arcs): Then $\rho(0, 0, 1, 4)=(1, 1, 1, 2)$, and $\varphi(\sigma_{1,2}(T))=\rho(\varphi(T))$.

## Proof of Theorem [Theorem 3](#compatible2){reference-type="ref" reference="compatible2"} {#proof-of-theorem-compatible2}
Let $T$ be a tilting bundle in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ corresponding to $\nu=(c_1, c_2, \cdots, c_p)$, that is, $\varphi(T)=(c_1, c_2, \cdots, c_p)$; see [\[the map phi\]](#the map phi){reference-type="eqref" reference="the map phi"}. According to the proof of Theorem [Theorem 24](#bijection between tilting bundle and combination vertices){reference-type="ref" reference="bijection between tilting bundle and combination vertices"}, we obtain $$\varphi(T(\vec{x}_1))=(c_p-q, c_1, c_2, \cdots, c_{p-1}),\;\;\;\varphi(T(\vec{x}_2))=(c_1-1, c_2-1, \cdots, c_{p}-1).$$ The following assignments define bijective maps $$\rho_1: \Lambda_{(p,q)}^0\longrightarrow \Lambda_{(p,q)}^0;\quad (c_1, c_2, \cdots, c_{p-1}, c_p)\mapsto (c_p-q, c_1, c_2, \cdots, c_{p-1});$$ and $$\rho_2: \Lambda_{(p,q)}^0\longrightarrow \Lambda_{(p,q)}^0; \quad (c_1, c_2, \cdots, c_{p-1}, c_p)\mapsto (c_1-1, c_2-1, \cdots, c_{p}-1).$$ Observe that the grading shift by $(\vec{x}_i)$ on ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ induces an automorphism on $\mathcal{G}(\mathcal{T}^{\nu}_{\mathbb{X}})$, still denoted by $(\vec{x}_i)$. Similarly to the proof of Proposition [Proposition 44](#bijective map of vertex such that commutative){reference-type="ref" reference="bijective map of vertex such that commutative"}, we have the following commutative diagrams for $i=1,2$: It is easy to check that $\rho_1\rho_2=\rho_2\rho_1$ and $\rho_1^p=\rho_2^q$. Hence, the assignments $r_1\mapsto \rho_1, r_2\mapsto \rho_2$ define a group homomorphism from $\widetilde{H}_{p,q}$ to ${\rm Aut}(\Lambda_{(p,q)})$. Combining with Proposition [Proposition 44](#bijective map of vertex such that commutative){reference-type="ref" reference="bijective map of vertex such that commutative"}, we obtain a group action of $\widetilde{H}_{p,q}$ on $\Lambda_{(p,q)}$.
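For completeness, the two relations $\rho_1\rho_2=\rho_2\rho_1$ and $\rho_1^{p}=\rho_2^{q}$ can be checked by a direct computation on a vertex $\alpha=(c_1,\cdots,c_p)\in\Lambda_{(p,q)}^0$: each application of $\rho_1$ moves the last entry to the front while subtracting $q$ from it, so after $p$ applications every entry has been wrapped around exactly once and decreased by $q$. Explicitly,

```latex
\begin{aligned}
\rho_1\rho_2(\alpha) &= \rho_1(c_1-1,\cdots,c_p-1)
  = (c_p-1-q,\; c_1-1,\cdots,c_{p-1}-1) = \rho_2\rho_1(\alpha),\\
\rho_1^{\,p}(\alpha) &= (c_1-q,\; c_2-q,\cdots,c_p-q) = \rho_2^{\,q}(\alpha).
\end{aligned}
```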
Then ([\[compatible of groups action2\]](#compatible of groups action2){reference-type="ref" reference="compatible of groups action2"}) follows from Theorem [Theorem 2](#description of tilting graph of vector bundles){reference-type="ref" reference="description of tilting graph of vector bundles"}, and hence we finish the proof of Theorem [Theorem 3](#compatible2){reference-type="ref" reference="compatible2"}.

# Geometric interpretation of the perpendicular category

In this section, we focus on the geometric interpretation of the perpendicular category of an exceptional object in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. Recall that a coherent sheaf $E$ is called *exceptional* if ${\rm Hom}(E,E)=\mathbf{k}$ and ${\rm Ext}^{1}(E,E)=0$. For an exceptional sheaf $E\in {\rm coh}\mbox{-}\mathbb{X}(p,q)$, define the *right perpendicular category* $$E^{\bot}:=\{X\in{\rm coh}\mbox{-}\mathbb{X}(p,q) \,|\,{\rm Hom}(E, X)=0={\rm Ext^{1}}(E, X) \};$$ and the *left perpendicular category* $$^{\bot}E:=\{X\in{\rm coh}\mbox{-}\mathbb{X}(p,q)\, |\,{\rm Hom}(X, E)=0={\rm Ext^{1}}(X, E)\}.$$ By Proposition [Proposition 4](#the properties of coherent sheaves){reference-type="ref" reference="the properties of coherent sheaves"} $(1)$, we obtain that $^{\bot}E=(E(-\vec{\omega}))^{\bot}$, where $\vec{\omega}=-(\vec{x}_1+\vec{x}_2)$ is the dualizing element in $\mathbb{L}(p,q).$ **Lemma 46**. *Let $E$ be an indecomposable sheaf in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$. Then $E$ is an exceptional object if and only if $\phi^{-1}(E)$ is an arc in $A_{p,q}$.* *Proof.* By [@M2004 Lemma 3.2.3], $E$ is exceptional in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$ if and only if ${\rm Ext}^1(E,E)=0$. Combining with Theorem [Theorem 14](#dimension and positive intersection){reference-type="ref" reference="dimension and positive intersection"}, we get the result. ◻ We recall some basic facts on cutting the marked annulus $A_{p,q}$ along an arc, cf. [@MP2014]. Let $\alpha$ be an arc in $A_{p,q}$.
Denote by $A_{p,q}/\alpha$ the new marked surface obtained from $A_{p,q}$ by cutting along the arc $\alpha$ and then removing components which are homeomorphic to a triangle. Up to homeomorphism, it does not depend on the choice of representative of $\alpha$. **Theorem 47**. *Let $E$ be an exceptional object in ${\rm coh}\mbox{-}\mathbb{X}(p,q)$, and assume the oriented arc corresponding to $E$ is $\alpha$. Then the marked surface $A_{p,q}/\alpha$ gives a geometric model for the category $E^{\bot}$.* *Proof.* To prove the theorem, we consider the following two cases: \(1\) $E\in {\rm vec}\mbox{-}\mathbb{X}(p,q).$ In this case, $E$ is a line bundle; hence $E^{\bot}\cong {\rm mod}(A_{p+q-1})$ by [@Len2011 Proposition 2.14]. On the other hand, the marked surface $A_{p,q}/\alpha$ is a disk with $p+q+2$ marked points on its boundary. According to [@BS2021], this gives a geometric model for the category ${\rm mod}(A_{p+q-1})\cong E^{\bot}$. \(2\) $E\in {\rm coh}_{0}\mbox{-}\mathbb{X}(p,q).$ In this case, $E=S_{\infty,i}^{(j)}$, where $i\in\mathbb{Z}/p\mathbb{Z}$ and $1\leq j\leq p-1$; or $E=S_{0,i}^{(j)}$, where $i\in\mathbb{Z}/q\mathbb{Z}$ and $1\leq j\leq q-1$. We only consider the second case, the other one being similar. Note that the composition factors of $S_{0,i}^{(j)}$ are given by $S_{0,i},\,S_{0,i-1},\cdots,\,\,S_{0,i-j+1}.$ According to [@GL1991], $(S_{0,i}^{(j)})^{\bot}$ contains two disjoint components: one is $\{S_{0,i},\,S_{0,i-1},\cdots,\,\,S_{0,i-j+1}\}^{\bot}$, which is equivalent to ${\rm coh}\mbox{-}\mathbb{X}(p,q-j)$; the other one is $\langle S_{0,i-1},\,S_{0,i-2},\cdots,\,\,S_{0,i-j+1}\rangle$, which is equivalent to the module category of type $A_{j-1}$.
Therefore, $$(S_{0,i}^{(j)})^{\bot}\cong {\rm coh}\mbox{-}\mathbb{X}(p,q-j)\coprod {\rm mod} (A_{j-1}).$$ On the other hand, the marked surface $A_{p,q}/\alpha$ has two connected components: an annulus with $p$ marked points on the inner boundary and $q-j$ marked points on the outer boundary, and a disk with $j+2$ marked points on its boundary. Combining with [@BS2021], this gives a geometric model for the category $(S_{0,i}^{(j)})^{\bot}$. We are done. ◻ Jianmin Chen and Hongxia Zhang were partially supported by the National Natural Science Foundation of China (Grant Nos. 12371040, 11971398, 12131018 and 12161141001). Shiquan Ruan was partially supported by the Natural Science Foundation of Xiamen (No. 3502Z20227184), the Natural Science Foundation of Fujian Province (No. 2022J01034), the National Natural Science Foundation of China (Nos. 12271448), and the Fundamental Research Funds for Central Universities of China (No. 20720220043). I. Assem, T. Brüstle, G. Charbonneau-Jodoin, and P. G. Plamondon. Gentle algebras arising from surface triangulations. 4 (2010), no. 2, 201--229. I. Assem, R. Schiffler, and V. Shramchenko. Cluster automorphisms. (3) 104 (2012), no. 6, 1271--1302. M. Barot, D. Kussin, and H. Lenzing. The cluster category of a canonical algebra. 362 (2010), no. 8, 4313--4330. K. Baur, A. B. Buan, and R. J. Marsh. Torsion pairs and rigid objects in tubes. 17 (2014), no. 2, 565--591. K. Baur, and R. J. Marsh. A geometric description of the $m$-cluster categories of type $D_n$. (2007), no.4. K. Baur, and R. J. Marsh. A geometric description of $m$-cluster categories. 360 (2008), no.11, 5789--5803. K. Baur, and R. J. Marsh. A geometric model of tube categories. 362 (2012), 178--191. K. Baur, and R. C. Simões. A geometric model for the module category of a gentle algebra. 2021, no. 15, 11357--11392. K. Baur, and H. A. Torkildsen. A geometric interpretation of categories of type $\tilde{A}$ and of morphisms in the infinite radical.
23 (2020), no. 3, 657--692. T. Brüstle, and Y. Qiu. Tagged mapping class groups: Auslander-Reiten translation. 279 (2015), no. 3-4, 1103--1120. T. Brüstle, and J. Zhang. On the cluster category of a marked surface without punctures. 5 (2011), no. 4, 529--566. P. Caldero, F. Chapoton, and R. Schiffler. Quivers with relations arising from clusters ($A_n$ case). 358 (2006), no. 3, 1347--1364. J. Chen, Y. Lin, P. Liu, and S. Ruan. Tilting objects on tubular weighted projective lines: a cluster tilting approach. 64 (2021), no. 4, 691--710. W. Crawley-Boevey. Kac's theorem for weighted projective lines. 12 (2010), no. 6, 1331--1345. B. Deng, S. Ruan, and J. Xiao. Applications of mutations in the derived categories of weighted projective lines to Lie and quantum algebras. 2020, no. 19, 5814--5871. R. Dou, Y. Jiang, and J. Xiao. The Hall algebra approach to Drinfeld's presentation of quantum loop algebras. 231 (2012), no. 5, 2593--2625. B. Duan, L. Lamberti, and J. R. Li. Combinatorial model for $m$-cluster categories in type $E$, arXiv:1911.12042, 2019. W. Ebeling. The Poincaré series of some special quasihomogeneous surface singularities. 39 (2003), no. 2, 393--413. W. Ebeling, and D. Ploog. McKay correspondence for the Poincaré series of Kleinian and Fuchsian singularities. 347 (2010), no. 3, 689--702. W. Ebeling, and A. Takahashi. Mirror symmetry between orbifold curves and cusp singularities with group action. 2013, no. 10, 2240--2270. S. Fomin, M. Shapiro, and D. Thurston. Cluster algebras and triangulated surfaces. Part I: Cluster complexes. 201 (2008), no. 1, 83--146. C. Fu, and S. Geng. On cluster-tilting graphs for hereditary categories. 383 (2021). W. Geigle, and H. Lenzing. A class of weighted projective curves arising in representation theory of finite-dimensional algebras. 265--297, *Lecture Notes in Math.*, 1273, Springer, Berlin, 1987. W. Geigle, and H. Lenzing. Perpendicular categories with applications to representations and sheaves. 144 (1991), no. 
2, 273--343. S. Geng. Mutation of tilting bundles of tubular type. 550 (2020), 186--209. D. Happel, and I. Reiten. Hereditary categories with tilting object. 232 (1999), no. 3, 559--588. D. Happel, and L. Unger. On the set of tilting objects in hereditary categories. P. He, Y. Zhou, and B. Zhu. A geometric model for the module category of a skew-gentle algebra. 304 (2023), no.1, Paper No. 18, 41. T. Holm, and P. Jørgensen. On a cluster category of infinite Dynkin type, and the relation to triangulations of the infinity-gon. 270 (2012), no. 1-2, 277--295. T. Hübner. Classification of Indecomposable Vector Bundles on Weighted Curves. , 1989. T. Hübner. Exzeptionelle Vektorbündel und Reflektionen an Kippgarben über projektiven gewichteten Kurven. , 1996. L. Lamberti. . 41 (2015), no. 4, 1023--1054. H. Lenzing. Wild canonical algebras and rings of automorphic forms. , 191--212, *NATO Adv. Sci. Inst. Ser. C: Math. Phys. Sci.,* 424, *Kluwer Acad. Publ., Dordrecht,* 1994. H. Lenzing. Representations of finite-dimensional algebras and singularity theory. H. Lenzing. Rings of singularities. 37 (2011), no. 2, 235--271. H. Lenzing. Weighted projective lines and applications. Representations of algebras and related topics, 2011. H. Lenzing, and H. Meltzer. The automorphism group of the derived category for a weighted projective line. 28 (2000), no. 4, 1685--1700. H. Lenzing, and I. Reiten. Hereditary Noetherian categories of positive Euler characteristic. 254 (2006), no. 1, 133--171 R. B. Marsh, and Y. Palu. Coloured quivers for rigid objects and partial triangulations: the unpunctured case. (3) 108 (2014), no. 2, 411--440. H. Meltzer. Exceptional vector bundles, tilting sheaves and tilting complexes for weighted projective lines. 171 (2004), no. 808. C. M. Ringel. . *Lecture Notes in Mathematics*, 1099. Springer-Verlag, Berlin, 1984. I. G. Ščerbak. Algebras of automorphic forms with three generators. 12 (1978), no. 2, 93--94. R. Schiffler. 
A geometric model for cluster categories of type $D_n$. 27 (2008), no. 1, 1--21. O. Schiffmann. Noncommutative projective curves and quantum loop algebras. 121 (2004), no. 1, 113--168. D. Simson, and A. Skowroński. Elements of the representation theory of associative algebras. Vol. 2. Tubes and concealed algebras of Euclidean type. 2007. H. A. Torkildsen. A geometric realization of the $m$-cluster category of affine type $A$. 43 (2015), no.6, 2541--2567. H. Vogel. Asymptotic triangulations and Coxeter transformations of the annulus. 60 (2018), no. 1, 63--96. M. Warkentin. Fadenmoduln über $\tilde{A}_{n}$ und Cluster-Kombinatorik (string modules over $\tilde{A}_{n}$ and cluster combinatorics). 2008. [^1]: $^*$ the corresponding author [^2]: 2020 *Mathematics Subject Classification*. 05C10, 05E10, 16D90, 16G70, 57K20.
{ "id": "2310.04695", "title": "Geometric model for weighted projective lines of type $(p,q)$", "authors": "Jianmin Chen, Shiquan Ruan, Hongxia Zhang", "categories": "math.RT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this work, our interest lies in proving the existence of critical values of the following Rayleigh-type quotients $$\mathcal{Q}_{\mathbf{p}}(u) = \frac{\|\nabla u\|_{\mathbf{p}}}{\|u\|_{\mathbf{p}}},\quad\text{and}\quad \mathcal{Q}_{\mathbf{s},\mathbf{p}}(u) = \frac{[u]_{\mathbf{s},\mathbf{p}}}{\|u\|_{\mathbf{p}}},$$ where $\mathbf{p}= (p_1,\dots,p_n)$, $\mathbf{s}=(s_1,\dots,s_n)$ and $$\|\nabla u\|_{\mathbf{p}} = \sum_{i=1}^n \|u_{x_i}\|_{p_i}$$ is an anisotropic Sobolev norm, $[u]_{\mathbf{s},\mathbf{p}}$ is a fractional version of the same anisotropic norm, and $$\|u\|_{\mathbf{p}} =\left(\int_{\mathbb R}\left(\dots \left(\int_{\mathbb R}|u|^{p_1}dx_1\right)^{\frac{p_2}{p_1}}\,dx_2\dots \right)^{p_n/p_{n-1}}dx_n\right)^{1/p_n}$$ is an anisotropic Lebesgue norm. Using the Ljusternik-Schnirelmann theory, we prove the existence of a sequence of critical values and we also find an associated Euler-Lagrange equation for critical points. Additionally, we analyze the connection between the fractional critical values and its local counterparts. address: | Instituto de Cálculo, CONICET\ Departamento de Matemática, FCEN - Universidad de Buenos Aires\ Ciudad Universitaria, 0+$\infty$ building, C1428EGA, Av. Cantilo s/n\ Buenos Aires, Argentina author: - Ignacio Ceresa Dussel and Julián Fernández Bonder bibliography: - References.bib title: Existence of Eigenvalues for Anisotropic and Fractional Anisotropic Problems via Ljusternik-Schnirelmann Theory --- # Introduction Eigenvalue problems are a well-established and widely studied subject that spans across various fields, including analysis and partial differential equations (PDEs). In the context of PDEs, the Laplacian eigenvalue problem involves finding the eigenvalues and corresponding eigenfunctions that satisfy the equation $$\Delta u+\lambda u=0.$$ Solving this problem provides valuable insights into the behavior of functions within a given domain. 
The eigenvalues offer information about the Laplacian's spectrum, while the eigenfunctions reveal spatial patterns associated with different frequencies or modes. Another well-known eigenvalue problem arises with the $p$-Laplacian, a nonlinear generalization of the Laplacian defined by $$\Delta_p u=\mathop{\text{\normalfont div}}(|\nabla u|^{p-2}\nabla u).$$ The eigenvalue problem of the $p$-Laplacian, characterized by the nonlinearity introduced through power exponentiation, is both intriguing and demanding, as it entails solving the equation $$-\Delta_p u=\lambda |u|^{p-2}u.$$ Many authors have extensively explored this problem, as seen in [@BONDERROSSI02; @LEAN; @LINDQVIST13; @BELLONI04]. Moreover, in recent decades, there has been a growing interest in fractional operators due to their applications in various models from the natural sciences [@CORDOBA04; @GONZALVES19; @RIASCO15]. One prominent example of this family of fractional operators is the fractional $p$-Laplacian, defined as $$(-\Delta_p)^s u(x) = \text{p.v.} (1-s)K_{n,p}\int_{\mathbb R^n} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+sp}}\, dy,$$ where $K_{n,p}$ is a constant that depends only on $n$ and $p$. The eigenvalue problem associated with this fractional operator, $$(-\Delta_p)^s u(x)=\lambda |u|^{p-2}u,$$ has also been studied extensively by several authors [@CAFA07; @DINEZZA12; @VALDINOCI13; @FRANZINA14]. The aim of this paper is to introduce an anisotropic feature to these eigenvalue problems. This choice is motivated by the substantial attention dedicated to investigating this phenomenon in signal processing and diffusion studies [@CHERNYSHOV2018; @TSIOTSIOS2013]. For those not familiar with the term, anisotropy can be described as the characteristic of displaying directional dependence, where various attributes or qualities manifest differently in distinct directions.
This stands in opposition to the isotropic nature of the Laplacian, $p$-Laplacian, and fractional $p$-Laplacian, where these properties remain uniform regardless of the direction. On one hand, a strategy to address anisotropy involves emphasizing the integrability of individual partial derivatives of a function $u$ by employing the sum of standard $L^p$ norms, $$\|\nabla u\|_{\mathbf{p}}=\sum_{i=1}^{n}\left\|u_{x_i}\right\|_{p_i},$$ see [@RAKOSNIK79; @RAKOSNIK81; @TROISI69]. Hence, we naturally arrive at the following anisotropic pseudo-Laplace operator $$-\widetilde{\Delta}_\mathbf{p}u := -\sum_{i=1}^{n}\left(|u_{x_i}|^{p_i-2}u_{x_i}\right)_{x_i}.$$ On the other hand, Benedek $\&$ Panzone [@BENEDEK61] present the anisotropic $L^{\mathbf{p}}$ ($\mathbf{p}=(p_1,\dots,p_n)$) space with a special norm to address the anisotropy of a function $u$. The mixed Lebesgue space is constructed by considering different exponents for each coordinate in the norm $$\|u\|_{\mathbf{p}}=\left(\int_{\mathbb R}\left(\dots \left(\int_{\mathbb R}|u|^{p_1}dx_1\right)^{\frac{p_2}{p_1}}\,dx_2\dots \right)^{p_n/p_{n-1}}dx_n\right)^{1/p_n}.$$ This norm accounts for the anisotropy of the function $u$ and allows for a more flexible and nuanced characterization of the integrability and decay properties across different coordinates. By combining these two perspectives we can state the following eigenvalue problem $$-\widetilde{\Delta}_\mathbf{p}u =\lambda \mathcal{F}_{\mathbf{p}}(u),$$ where $\mathcal{F}_{\mathbf{p}}$ is a suitable functional related to $\|u\|_{\mathbf{p}}$; see [\[F_p\]](#F_p){reference-type="eqref" reference="F_p"}. Unfortunately, this problem is hindered by its lack of homogeneity: if $v$ is an eigenfunction associated with $\lambda$, then $tv$ may fail to be an eigenfunction associated with $\lambda$.
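For intuition, the mixed Benedek--Panzone norm can be approximated on a discrete grid by iterating one-dimensional $p_i$-norms, one coordinate at a time. The following sketch (for $n=2$; the grid data and spacings are illustrative and not taken from the paper) also exhibits the homogeneity $\|tu\|_{\mathbf{p}}=|t|\,\|u\|_{\mathbf{p}}$, which is what makes the Rayleigh-type quotients below scale-invariant:

```python
import math

def mixed_norm(u, p, h):
    """Discrete Benedek-Panzone norm ||u||_(p1,p2) for n = 2.

    u: grid samples, u[i2][i1] ~ u(x1, x2); p = (p1, p2); h = (h1, h2) spacings.
    First take the L^{p1} norm in x1 for each fixed x2,
    then the L^{p2} norm of those values in x2.
    """
    p1, p2 = p
    h1, h2 = h
    inner = [(h1 * sum(abs(v) ** p1 for v in row)) ** (1.0 / p1) for row in u]
    return (h2 * sum(w ** p2 for w in inner)) ** (1.0 / p2)

u = [[1.0, 2.0], [3.0, 4.0]]
# With p1 = p2 the mixed norm reduces to the usual discrete L^p norm.
print(mixed_norm(u, (2, 2), (1.0, 1.0)))   # sqrt(30)
# Homogeneity: scaling u by t scales the norm by |t|.
tu = [[3.0 * v for v in row] for row in u]
print(mixed_norm(tu, (2, 2), (1.0, 1.0)))  # 3 * sqrt(30)
```

With distinct exponents, e.g. $\mathbf{p}=(1,2)$, the two coordinates are weighted differently, which is exactly the anisotropy the norm is designed to capture.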
Note that a crucial approach to solving the Laplacian, $p$-Laplacian and fractional $p$-Laplacian eigenvalue problems involves finding the critical points of the Rayleigh quotient associated with each one, namely $$\mathcal{Q}_2(u) = \frac{\|\nabla u\|^2_{2}}{\|u\|^2_{2}},\quad\mathcal{Q}_p(u) = \frac{\|\nabla u\|_p^p}{\|u\|_p^p}\quad\text{and }\mathcal{Q}_{sp}(u) = \frac{[u]_{sp}^p}{\|u\|_p^p}.$$ Therefore, it is natural to explore the following homogeneous Rayleigh quotient $$\label{Qp} \mathcal{Q}_{\mathbf{p}}(u)=\frac{\|\nabla u\|_{\mathbf{p}}}{\|u\|_{\mathbf{p}}}.$$ As we will observe in Section [3](#section3){reference-type="ref" reference="section3"}, the Euler-Lagrange equation associated with $\mathcal{Q}_{\mathbf{p}}(u)$ is the following homogeneous eigenvalue problem, $$\label{eigenvalue} -\mathcal{L}_{\mathbf{p}}u=-\mathop{\text{\normalfont div}}\left(\sum_{i=1}^{n}\left|\frac{u_{x_i}}{\|u_{x_i}\|_{p_i}}\right|^{p_i-2}\frac{u_{x_i}}{\|u_{x_i}\|_{p_i}}\right)=\lambda \mathcal{F}_{\mathbf{p}}(u).$$ In [@CHAKER23], fractional anisotropy is introduced through the integrability parameters $\mathbf{p}=(p_1,\ldots,p_n)$, $1<p_i<\infty$, the fractional parameters $\mathbf{s}=(s_1,\ldots,s_n)$, $0<s_i<1$, and the seminorm $$[u]_{\mathbf{s},\mathbf{p}}=\sum_{i=1}^{n}\left( \int_{\mathbb R^n}\int_{\mathbb R} (1-s_i) \frac{|u(x+he_i)-u(x)|^{p_i}}{|h|^{1+s_ip_i}}\,dh \,dx\right)^{1/p_i}.$$ As in the non-fractional case, combining this perspective with Benedek $\&$ Panzone's norm we arrive at the following eigenvalue problem $$(-\widetilde{\Delta}_{\mathbf{p}})^{\mathbf{s}} u(x)=\lambda \mathcal{F}_{\mathbf{p}}(u),$$ where $(-\widetilde{\Delta}_{\mathbf{p}})^{\mathbf{s}}$ is the *fractional pseudo $p$-Laplacian* operator defined as $$(-\widetilde{\Delta}_{\mathbf{p}})^{\mathbf{s}} u(x) =\text{p.v.}\sum_{i=1}^{n} \int_{\mathbb R}\frac{(1-s_i)}{p_i} \frac{|u(x+he_i)-u(x)|^{p_i-2}(u(x+he_i)-u(x))}{|h|^{1+s_ip_i}}\,dh .$$ Again, this is not a homogeneous
problem; therefore, we study the homogeneous Rayleigh quotient $$\label{Qsp} \mathcal{Q}_{\mathbf{s},\mathbf{p}}(u)=\frac{[u]_{\mathbf{s},\mathbf{p}}}{\|u\|_{\mathbf{p}}}.$$ As we will see in Section [3](#section3){reference-type="ref" reference="section3"}, the Euler-Lagrange equation is $$\label{eigenvalue_s} -\mathcal{L}_{\mathbf{s},\mathbf{p}}u=\lambda \mathcal{F}_{\mathbf{p}}(u),$$ where $\mathcal{L}_{\mathbf{s},\mathbf{p}}$ is the fractional version of $\mathcal{L}_{\mathbf{p}}$. To address the problem of finding critical points of [\[Qp\]](#Qp){reference-type="eqref" reference="Qp"} and [\[Qsp\]](#Qsp){reference-type="eqref" reference="Qsp"}, and hence of solving the eigenvalue problems [\[eigenvalue\]](#eigenvalue){reference-type="eqref" reference="eigenvalue"} and [\[eigenvalue_s\]](#eigenvalue_s){reference-type="eqref" reference="eigenvalue_s"}, the Ljusternik-Schnirelman theory serves as a powerful framework for critical point theory and the existence of critical points of functionals, as we will see in Section [5](#section5){reference-type="ref" reference="section5"}. See [@MOTREANU14]. The rest of the paper is organized as follows: In Section [2](#section2){reference-type="ref" reference="section2"}, we study anisotropic Sobolev spaces and fractional anisotropic Sobolev spaces in more detail, discussing some interesting properties such as the Poincaré inequality and a Rellich-Kondrashov type theorem. Then, in Section [3](#section3){reference-type="ref" reference="section3"}, we derive the Euler-Lagrange equations associated with the corresponding Rayleigh-type quotients. In Section [4](#section4){reference-type="ref" reference="section4"} we study the asymptotic behavior of the sequence of eigenvalues as $\mathbf{s}\to(1,\dots,1)$, and finally in Section [5](#section5){reference-type="ref" reference="section5"} we use the Ljusternik-Schnirelman theory to prove the existence of eigenvalues.
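The point of passing to the quotient $\mathcal{Q}_{\mathbf{p}}$ is that it is invariant under scaling: both $\|\nabla u\|_{\mathbf{p}}=\sum_i\|u_{x_i}\|_{p_i}$ and the mixed norm $\|u\|_{\mathbf{p}}$ are positively $1$-homogeneous, so $\mathcal{Q}_{\mathbf{p}}(tu)=\mathcal{Q}_{\mathbf{p}}(u)$ for every $t\neq 0$. A hypothetical discrete sketch (central differences and Riemann sums on a grid, purely illustrative) verifies this invariance:

```python
import numpy as np

def grad_norm(U, dx, p):
    # Discrete ||grad u||_p = sum_i ||u_{x_i}||_{p_i}: central differences
    # for u_{x_i}, Riemann sums with volume element dx^n for the L^{p_i} norms.
    return sum((np.sum(np.abs(np.gradient(U, dx, axis=i)) ** q) * dx ** U.ndim)
               ** (1.0 / q) for i, q in enumerate(p))

def mixed_norm(U, dx, p):
    # Discrete iterated (Benedek-Panzone) norm.
    V = np.abs(U) ** p[0]
    for i, q in enumerate(p):
        V = V.sum(axis=0) * dx
        if i + 1 < len(p):
            V = V ** (p[i + 1] / q)
    return V ** (1.0 / p[-1])

def rayleigh(U, dx, p):
    return grad_norm(U, dx, p) / mixed_norm(U, dx, p)

x = np.linspace(-1.0, 1.0, 200)
X1, X2 = np.meshgrid(x, x, indexing="ij")
U = np.exp(-3.0 * (X1**2 + X2**2))       # a smooth bump
p = (1.5, 3.0)
dx = x[1] - x[0]

q1 = rayleigh(U, dx, p)
q2 = rayleigh(7.0 * U, dx, p)            # rescaling u leaves the quotient unchanged
```

The scaling factor cancels exactly between numerator and denominator, so `q1` and `q2` coincide up to floating-point rounding.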
# Mixed, anisotropic and fractional spaces {#section2} In this section, our objective is to establish the definition of the mixed Lebesgue space introduced in [@BENEDEK61]. This space will serve as a fundamental building block for our analysis. Furthermore, we will define a suitable anisotropic Sobolev space, $W_0^{1,\mathbf{p}}(\Omega)$, and a fractional anisotropic Sobolev space $W_0^{\mathbf{s},\mathbf{p}}(\Omega)$. ## Mixed space Let $\mathbf{p}=(p_1,p_2,\dots,p_n)$ with $1<p_i<\infty$ for $i=1,\dots,n$ be integrability parameters. Without loss of generality, we can assume that $$\label{condicion} 1<p_1\leq p_2 \leq \dots\leq p_n<\infty.$$ We define the *mixed Lebesgue space* as $$L^{\mathbf{p}}(\mathbb R^n)=\{u\text{ measurable such that } \|u\|_{\mathbf{p}}<\infty\},$$ where $$\|u\|_{\mathbf{p}}=\left(\int_{\mathbb R}\left(\dots \left(\int_{\mathbb R}|u|^{p_1}dx_1\right)^{\frac{p_2}{p_1}}\,dx_2\dots \right)^{p_n/p_{n-1}}dx_n\right)^{1/p_n}.$$ Furthermore, given $\Omega$ an open bounded subset of $\mathbb R^n$, we define $$L^{\mathbf{p}}(\Omega)=\{u\in L^{\mathbf{p}}(\mathbb R^n)\text{ such that }u=0 \text{ in } \mathbb R^n\setminus\Omega\}.$$ Observe that $L^{\mathbf{p}}(\Omega)$ is a closed subspace of $L^{\mathbf{p}}(\mathbb R^n)$. The space $L^{\mathbf{p}}(\Omega)$ turns out to be a reflexive Banach space, and its properties were studied in [@BENEDEK61; @ADAMS88]. *Remark 1*. The norm $\|\cdot\|_{\mathbf{p}}$ can be defined by recurrence as $$\begin{aligned} I_1(u)&=\left(\int_{\mathbb R}|u|^{p_1}\,dx_1\right)^{1/p_1}\\ I_2(u)&=\left(\int_{\mathbb R}I_1(u)^{p_2}\,dx_2\right)^{1/p_2}\\ &\vdots\\ I_j(u)&=\left(\int_{\mathbb R}I_{j-1}(u)^{p_j}\,dx_j\right)^{1/p_j}\\ &\vdots\\ I_n(u)&=\left(\int_{\mathbb R}I_{n-1}(u)^{p_n}\,dx_n\right)^{1/p_n}\\ I(u)&= I_n(u) =\|u\|_{\mathbf{p}}.\end{aligned}$$ *Remark 2*. Observe that, given $u\in L^{\mathbf{p}}(\mathbb R^n)$, $I_j(u)$ is a function of $(x_{j+1},\dots,x_n)$.
Moreover, for almost every $(y_{j+2},\dots,y_n)\in \mathbb R^{n-j-1}$, the function $I_j(u)$ (as a function of $x_{j+1}$) belongs to $L^{p_{j+1}}(\mathbb R)$. Also, observe that if $\{u_k\}_{k\in \mathbb N}\subset L^{\mathbf{p}}(\mathbb R^n)$ is such that $u_k\to u$ in $L^{\mathbf{p}}(\mathbb R^n)$ as $k\to\infty$, then $I_j(u_k)(\cdot, y_{j+2},\dots,y_n)\to I_j(u)(\cdot, y_{j+2},\dots,y_n)$ in $L^{p_{j+1}}(\mathbb R)$ for a.e. $(y_{j+2},\dots,y_n)\in \mathbb R^{n-j-1}$. ## Anisotropic Sobolev spaces Our interest lies in functions whose partial derivatives have different integrability. With this fact in mind, given $\mathbf{p}= (p_1,\dots,p_n)$ with $1<p_i<\infty$, the anisotropic Sobolev space is defined as follows: $$W^{1,\mathbf{p}}(\mathbb R^n):=\left\{u\in L^{\mathbf{p}}(\mathbb R^n) \colon u_{x_i} \in L^{p_i}(\mathbb R^n),\ i=1,\dots,n\right\},$$ equipped with the following norm $$\| u\|_{1,\mathbf{p}} = \|u\|_{\mathbf{p}} + \sum_{i=1}^{n}\left\|u_{x_i}\right\|_{p_i} = \|u\|_{\mathbf{p}} + \|\nabla u\|_{\mathbf{p}}.$$ It is easy to prove that $W^{1,\mathbf{p}}(\mathbb R^n)$ is a separable, reflexive Banach space. Now, given a bounded domain $\Omega\subset \mathbb R^n$, we define $W^{1,\mathbf{p}}_0(\Omega)$ as the closure of $C_c^\infty(\Omega)$ in $W^{1,\mathbf{p}}(\mathbb R^n)$. ## Fractional space Next we present the fractional anisotropic Sobolev space. First, given $i=1,\dots, n$, $s\in(0,1]$ and $p\in (1,\infty)$, for any measurable $u\colon \mathbb R^n\to\mathbb R$ we define the quantity $$[u]_{s,p,i} = \left(\int_{\mathbb R^n}\int_\mathbb R\frac{|u(x+he_i)-u(x)|^p}{|h|^{1+sp}}\, dh dx\right)^{\frac{1}{p}},$$ where $e_i$ is the $i$-th canonical basis vector of $\mathbb R^n$.
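For a feel of the quantity $[u]_{s,p,i}$, consider the one-dimensional case ($n=1$, $i=1$), where it reduces to a Gagliardo-type double integral. The hypothetical sketch below approximates it by a double Riemann sum on a truncated domain (the tails of $\mathbb R$ are discarded, so the value is only a rough approximation) and checks that the seminorm is positively $1$-homogeneous, $[tu]_{s,p}=|t|\,[u]_{s,p}$:

```python
import numpy as np

def gagliardo_1d(u, x, s, p):
    """Double Riemann sum for ([u]_{s,p})^p = int int |u(x+h)-u(x)|^p / |h|^{1+sp},
    with h ranging over the grid offsets; the singular diagonal h = 0 is skipped."""
    dx = x[1] - x[0]
    total = 0.0
    for j in range(len(x)):
        h = x - x[j]                      # offsets reachable on the grid
        num = np.abs(u - u[j]) ** p       # |u(x_j + h) - u(x_j)|^p
        mask = h != 0.0
        total += np.sum(num[mask] / np.abs(h[mask]) ** (1.0 + s * p)) * dx * dx
    return total ** (1.0 / p)

x = np.linspace(-2.0, 2.0, 300)
u = np.maximum(0.0, 1.0 - x**2)           # compactly supported bump
s, p = 0.4, 2.5

a = gagliardo_1d(u, x, s, p)
b = gagliardo_1d(3.0 * u, x, s, p)        # 1-homogeneity: b equals 3 a
```

The homogeneity check is exact at the discrete level, since every term of the sum scales by $3^p$.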
Now, given $\mathbf{p}=(p_1,\dots,p_n)$ and $\mathbf{s}=(s_1,\dots,s_n)$ with $1<p_i<\infty$ and $0<s_i<1$, for $i=1,\dots,n$, we define the anisotropic fractional order Sobolev space as $$W^{\mathbf{s},\mathbf{p}}(\mathbb R^n):=\left\{u\in L^{\mathbf{p}}(\mathbb R^n)\colon [u]_{s_i,p_i,i}<\infty,\ i=1,\dots,n\right\}.$$ This space has a natural norm defined as $$\|u\|_{\mathbf{s},\mathbf{p}} := \|u\|_{\mathbf{p}} + \sum_{i=1}^n [u]_{s_i,p_i,i} = \|u\|_{\mathbf{p}} + [u]_{\mathbf{s},\mathbf{p}}.$$ It is easy to see that $W^{\mathbf{s},\mathbf{p}}(\mathbb R^n)$ is a separable and reflexive Banach space. See [@CERESABONDER23; @CHAKER23]. As before, given $\Omega\subset \mathbb R^n$ a bounded domain, we define $W^{\mathbf{s},\mathbf{p}}_0(\Omega)$ as the closure of $C^\infty_c(\Omega)$ in $W^{\mathbf{s},\mathbf{p}}(\mathbb R^n)$. The following two results are analogs of the classical Poincaré inequality and of the Rellich-Kondrashov theorem in the context of $L^{\mathbf{p}}(\Omega)$ and the anisotropic (fractional) Sobolev spaces. **Proposition 3** (Poincaré). *Given $\Omega$ an open bounded subset of $\mathbb R^n$, there exist constants $C_1(\Omega,\mathbf{p},n)>0$ and $C_2(\Omega,\mathbf{p},\mathbf{s},n)>0$ such that for every $u$ in $W^{1,\mathbf{p}}_0(\Omega)$ the following inequality holds $$\label{Poincare1} \|u\|_{\mathbf{p}} \leq C_1\|\nabla u\|_{\mathbf{p}},$$ and for every $u$ in $W^{\mathbf{s},\mathbf{p}}_0(\Omega)$ the following inequality holds: $$\label{Poincare2} \|u\|_{\mathbf{p}} \leq C_2 [u]_{\mathbf{s},\mathbf{p}}.$$* *Proof.* Let $u$ be a function in $W^{1,\mathbf{p}}_0(\Omega)$.
On one hand, observe that since $p_i\le p_n$ for every $i=1,\dots,n-1$ and $|\Omega|<\infty$, it follows by Hölder's inequality that $L^{p_n}(\Omega)$ is continuously embedded in $L^{\mathbf{p}}(\Omega)$, that is, there exists a positive constant $C>0$ such that $$\|u\|_{\mathbf{p}}\leq C\|u\|_{L^{p_n}} .$$ On the other hand, the Poincaré inequality for functions in $W^{1,p_n}_0(\Omega)$ gives $$\|u\|_{L^{p_n}(\Omega)}\leq C \|u_{x_n}\|_{L^{p_n}(\Omega)}\leq C \sum_{i=1}^{n}\|u_{x_i}\|_{L^{p_i}(\Omega)}.$$ Therefore, by combining these results, we obtain [\[Poincare1\]](#Poincare1){reference-type="eqref" reference="Poincare1"}. For the second inequality, let $u$ be a function in $W^{\mathbf{s},\mathbf{p}}_0(\Omega)$; we can assume that there exists $R>0$ such that $\mathop{\text{\normalfont supp}}{u}\subset Q_R = [-R,R]^n$. Hence, $$\begin{aligned} [u]_{s_1,p_1,1}^{p_1}&=\int_{\mathbb R^n}\int_\mathbb R\frac{|u(x+he_1)-u(x)|^{p_1}}{|h|^{1+s_1p_1}}\,dh\,dx\\ &\geq \int_{Q_R}\int_\mathbb R\frac{|u(x+he_1)-u(x)|^{p_1}}{|h|^{1+s_1p_1}}\,dh\,dx\\ &\geq \int_{Q_R'}\int_{|x_1|\leq R}\int_{|x_1+h|\geq R}\frac{|u(x)|^{p_1}}{|h|^{1+s_1p_1}}\,dh\,dx_1\,dx'\\ &\geq \int_{Q_R'}\int_{|x_1|\leq R}|u(x)|^{p_1}\int_{|h|\geq 2R}\frac{1}{|h|^{1+s_1p_1}}\,dh\,dx_1\,dx'\\ &\geq C\|u\|_{p_1}^{p_1},\end{aligned}$$ where $Q_R' = [-R,R]^{n-1}$ and $dx' = dx_2\cdots dx_n$. Arguing in a similar fashion we conclude that there exists $C_i(\Omega, s_i, p_i)$ such that $$C_i[u]_{s_i,p_i,i}\geq \|u\|_{p_i}.$$ Therefore, taking $K=\max_{i}\{C_i\}$, we have that $$K\sum_{i=1}^n [u]_{s_i,p_i,i}\geq \sum_{i=1}^n \|u\|_{p_i}\geq \|u\|_{p_n}\geq C \|u\|_{\mathbf{p}}.$$ This concludes the proof of [\[Poincare2\]](#Poincare2){reference-type="eqref" reference="Poincare2"}. ◻ We will use the following notation. Given a vector $\mathbf{q}= (q_1,\dots,q_n)$ with $q_i>0$ for $i=1,\dots,n$, we denote by $\bar\mathbf{q}$ the *harmonic mean* of the vector $\mathbf{q}$, i.e.
$$\bar\mathbf{q}:= \left(\frac{1}{n}\sum_{i=1}^n\frac{1}{q_i}\right)^{-1}.$$ Next, given two vectors $\mathbf{q}= (q_1,\dots,q_n)$ and $\mathbf{r}= (r_1,\dots, r_n)$ with $q_i, r_i>0$ for $i=1,\dots,n$, we define the *product* $\mathbf{q}\mathbf{r}$ as $$\mathbf{q}\mathbf{r}= (q_1 r_1,\dots,q_n r_n),$$ the coordinate-by-coordinate multiplication. **Proposition 4** (Rellich-Kondrashov). *Let $\mathbf{p}=(p_1,\dots,p_n)$ with $1<p_i<\infty$ for $i=1,\dots,n$ be such that $$\label{condicion1} \bar\mathbf{p}<n.$$ Define the *critical exponent* $\mathbf{p}^*$ as $$\label{p*} \mathbf{p}^*:=\frac{n\bar \mathbf{p}}{n-\bar \mathbf{p}}.$$ Then $W^{1,\mathbf{p}}_0(\Omega)\subset L^{q}(\Omega)$ for all $1\le q\leq \mathbf{p}^*$. Moreover, $W^{1,\mathbf{p}}_0(\Omega) \subset\subset L^{q}(\Omega)$ if $1\le q<\mathbf{p}^*$. In particular, $W^{1,\mathbf{p}}_0(\Omega) \subset\subset L^{\mathbf{p}}(\Omega)$.* *Now, let $\mathbf{s}= (s_1,\dots,s_n)$ with $0<s_i<1$ for $i=1,\dots,n$, and let $\mathbf{p}$ be as before. Assume that $$\label{condicion2} \overline{\mathbf{s}\mathbf{p}} < n$$ and define the *fractional critical exponent* $$\label{ps*} \mathbf{p}^*_{\mathbf{s}} =\frac{n\frac{\overline{\mathbf{s}\mathbf{p}}}{\bar \mathbf{s}}}{n-\overline{\mathbf{s}\mathbf{p}}}.$$ Moreover, assume that $$\label{condicion3} p_n < \mathbf{p}^*_{\mathbf{s}}.$$ Then $W^{\mathbf{s},\mathbf{p}}_0(\Omega) \subset L^{q}(\Omega)$ for all $1\le q\leq \mathbf{p}^*_{\mathbf{s}}$. Moreover, $W^{\mathbf{s},\mathbf{p}}_0(\Omega) \subset\subset L^{q}(\Omega)$ for $1\le q<\mathbf{p}^*_{\mathbf{s}}$. In particular, $W^{\mathbf{s},\mathbf{p}}_0(\Omega) \subset\subset L^{\mathbf{p}}(\Omega)$.* *Proof.* The proof of $W^{1,\mathbf{p}}_0(\Omega) \subset\subset L^{q}(\Omega)$ for all $1\le q< \mathbf{p}^*$ can be found in [@TROISI69; @ELHAMIDI2007].
To prove that $W^{1,\mathbf{p}}_0(\Omega) \subset\subset L^{\mathbf{p}}(\Omega)$, observe that since $p_n< \mathbf{p}^*$ we have $W^{1,\mathbf{p}}_0(\Omega)\subset\subset L^{p_n}(\Omega)$, and $L^{p_n}(\Omega)\subset L^{\mathbf{p}}(\Omega)$ continuously. The proof of the fractional case follows immediately from [@CHAKER23 Theorem 2.1] and the same idea. ◻ Without loss of generality, we can always assume that [\[condicion\]](#condicion){reference-type="eqref" reference="condicion"} is satisfied. In the rest of the paper, it will always be assumed that conditions [\[condicion1\]](#condicion1){reference-type="eqref" reference="condicion1"}, [\[condicion2\]](#condicion2){reference-type="eqref" reference="condicion2"} and [\[condicion3\]](#condicion3){reference-type="eqref" reference="condicion3"} hold. # The Euler-Lagrange equation {#section3} ## Non-fractional case In this subsection we establish the Euler-Lagrange equation associated to the Rayleigh-type quotient $\mathcal{Q}_{\mathbf{p}}$ defined in [\[Qp\]](#Qp){reference-type="eqref" reference="Qp"}. In fact, following ideas from [@LINDQVIST13] (see also [@Bonder-Salort-Vivas]), we show that the Euler-Lagrange equation turns out to be the following $$\label{P} \begin{cases} -\mathcal{L}_{\mathbf{p}} u=\lambda \mathcal{F}_{\mathbf{p}} (u)\quad &\text{ in }\Omega\\ u=0&\text{ in } \mathbb R^n\setminus \Omega, \end{cases}$$ where $$\label{L_p} -\mathcal{L}_{\mathbf{p}} u := -\mathop{\text{\normalfont div}}\left(\sum_{i=1}^{n}\left|\frac{u_{x_i}}{\| u_{x_i}\|_{p_i}}\right|^{p_i-2}\frac{u_{x_i}}{\| u_{x_i}\|_{p_i}}\right)$$ and $$\label{F_p} \mathcal{F}_{\mathbf{p}}(u)= \prod_{i=1}^{n} I_i(u)^{p_{i+1}-p_i} |u|^{p_1-2}u,$$ where $p_{n+1}=1$. **Definition 5**.
Let $u$ be a function in $W^{1,\mathbf{p}}_0(\Omega)$. Then $u$ is a weak solution of [\[P\]](#P){reference-type="eqref" reference="P"} if and only if $u$ verifies $$\int_{\Omega}\sum_{i=1}^{n}\left|\frac{u_{x_i}}{\|u_{x_i}\|_{p_i}}\right|^{p_i-2}\frac{u_{x_i}}{\|u_{x_i}\|_{p_i}}v_{x_i}\,dx= \lambda\int_{\Omega}\mathcal{F}_{\mathbf{p}}(u) v\,dx,$$ for all $v\in W^{1,\mathbf{p}}_0(\Omega).$ We will need the following lemma regarding the behavior of the functional $\mathcal{F}_{\mathbf{p}}$. **Lemma 6**. *Let $\mathbf{p}=(p_1,\dots,p_n)$ be such that $1<p_i<\infty$ and let $\mathbf{p}'=(p_1',\dots,p_n')$, where $p_i'=\frac{p_i}{p_i-1}$ is the conjugate exponent of $p_i$. Let $\mathcal{F}_{\mathbf{p}}$ be the functional defined in [\[F_p\]](#F_p){reference-type="eqref" reference="F_p"}.* *Then $\mathcal{F}_{\mathbf{p}}\colon L^{\mathbf{p}}(\mathbb R^n)\to L^{\mathbf{p}'}(\mathbb R^n)$ is continuous.* *Proof.* To see that it is well defined, just observe that if $u\in L^{\mathbf{p}}(\mathbb R^n)$, then $$\begin{aligned} \left(\int_\mathbb R|\mathcal{F}_{\mathbf{p}}(u)|^{p_1'}\, dx_1\right)^{1/p_1'} &= \prod_{i=1}^n I_i(u)^{p_{i+1}-p_i}\left(\int_\mathbb R|u|^{(p_1-1)p_1'}\, dx_1\right)^{1/p_1'}\\ &= \prod_{i=1}^n I_i(u)^{p_{i+1}-p_i} I_1(u)^{p_1/p_1'}\\ &= \prod_{i=2}^n I_i(u)^{p_{i+1}-p_i} I_1(u)^{p_2-1}.\end{aligned}$$ Iterating this procedure and using that $p_{n+1}=1$, one easily concludes that $$\|\mathcal{F}_{\mathbf{p}}(u)\|_{\mathbf{p}'} = 1\quad \text{for every } u\neq 0;$$ in particular, $\mathcal{F}_{\mathbf{p}}$ is well defined. In order to see the continuity of $\mathcal{F}_{\mathbf{p}}$, let $\{u_k\}_{k\in\mathbb N}\subset L^{\mathbf{p}}(\mathbb R^n)$ be such that $u_k\to u$ in $L^{\mathbf{p}}(\mathbb R^n)$.
Then, define $$\begin{aligned} & \tilde I_1(k) := \left(\int_\mathbb R|\mathcal{F}_{\mathbf{p}}(u_k)-\mathcal{F}_{\mathbf{p}}(u)|^{p_1'}\, dx_1\right)^{1/p_1'}\\ & \tilde I_{i+1}(k) := \left(\int_\mathbb R\tilde I_i(k)^{p_{i+1}'}\, dx_{i+1}\right)^{1/p_{i+1}'},\qquad i=1,\dots,n-1.\end{aligned}$$ Observe that $\|\mathcal{F}_{\mathbf{p}}(u_k) - \mathcal{F}_{\mathbf{p}}(u)\|_{\mathbf{p}'} = \tilde I_n(k)$, so it is enough to show that, up to a subsequence, $$\label{cotas.Itilde} \begin{split} & \tilde I_i(k) \to 0 \text{ as } k\to\infty\quad \text{a.e. } (x_{i+1},\dots,x_n), \qquad i=1,\dots,n\\ & \text{and}\\ & \text{a.e. } (x_{i+2},\dots, x_n),\ \tilde I_i(k)(x_{i+1}) \le h_i(x_{i+1}), \quad \text{with } h_i\in L^{p_{i+1}'}(\mathbb R). \end{split}$$ In fact, let us prove [\[cotas.Itilde\]](#cotas.Itilde){reference-type="eqref" reference="cotas.Itilde"} for $i=1$; the rest will follow by induction. By Remark [Remark 2](#rem.norma parcial){reference-type="ref" reference="rem.norma parcial"}, it is easy to see that $\mathcal{F}_{\mathbf{p}}(u_k)\to \mathcal{F}_{\mathbf{p}}(u)$ a.e. So in order to see that $\tilde I_1(k)\to 0$ for a.e. $x'=(x_2,\dots,x_n)$, we need to find an integrable majorant for $|\mathcal{F}_{\mathbf{p}}(u_k) - \mathcal{F}_{\mathbf{p}}(u)|^{p_1'}$ for a.e. $x'\in \mathbb R^{n-1}$. Hence, $$\begin{aligned} |\mathcal{F}_{\mathbf{p}}(u_k) - \mathcal{F}_{\mathbf{p}}(u)|^{p_1'}\le C\left(\prod_{i=1}^n I_i(u_k)^{(p_{i+1}-p_i)p_1'} |u_k|^{p_1} + \prod_{i=1}^n I_i(u)^{(p_{i+1}-p_i)p_1'} |u|^{p_1} \right).\end{aligned}$$ Since, by Remark [Remark 2](#rem.norma parcial){reference-type="ref" reference="rem.norma parcial"}, $I_i(u_k)(\cdot, x_{i+2},\dots,x_n)\to I_i(u)(\cdot, x_{i+2},\dots,x_n)$ in $L^{p_{i+1}}(\mathbb R)$ for a.e.
$(x_{i+2},\dots,x_n)$, using [@BREZIS Theorem 4.9], there exists $h_i=h_i(\cdot, x_{i+2},\dots,x_n)\in L^{p_{i+1}}(\mathbb R)$ such that $$I_i(u_k)(x_{i+1}, x_{i+2},\dots,x_n)\le h_i(x_{i+1}, x_{i+2},\dots,x_n).$$ Moreover, since $u_k(\cdot,x')\to u(\cdot,x')$ in $L^{p_1}(\mathbb R)$, we obtain the existence of $h_0(x)$, with $h_0(\cdot,x')\in L^{p_1}(\mathbb R)$, such that $$|u_k(x)|\le h_0(x).$$ Hence $$|\mathcal{F}_{\mathbf{p}}(u_k) - \mathcal{F}_{\mathbf{p}}(u)|^{p_1'}\le C\left(\prod_{i=1}^n h_i^{(p_{i+1}-p_i)p_1'} h_0^{p_1} + \prod_{i=1}^n I_i(u)^{(p_{i+1}-p_i)p_1'} |u|^{p_1} \right) =: \Phi(x_1,x').$$ Since $\Phi(\cdot,x')\in L^1(\mathbb R)$ for a.e. $x'\in\mathbb R^{n-1}$, we obtain that $\tilde I_1(k)\to 0$. The proof of [\[cotas.Itilde\]](#cotas.Itilde){reference-type="eqref" reference="cotas.Itilde"} now follows by induction and the details are left to the reader. ◻ With the definition of weak solution at hand, we can state the main result of this section. **Theorem 7**. *Let $u$ be a function in $W^{1,\mathbf{p}}_0(\Omega).$ Then $u$ is a critical point of [\[Qp\]](#Qp){reference-type="eqref" reference="Qp"} if and only if $u$ is a weak solution of [\[P\]](#P){reference-type="eqref" reference="P"}.* To prove this theorem we will use the following notation $$\label{HI} H(u)=\|\nabla u\|_{\mathbf{p}}\quad \text{and}\quad I(u) = \|u\|_{\mathbf{p}},$$ and we need to establish some lemmas that will facilitate the proof. First, we have to show that the functionals $H$ and $I$ are Fréchet differentiable. **Lemma 8**.
*$I\colon L^{\mathbf{p}}(\Omega)\to \mathbb R$ and $H\colon W^{1,\mathbf{p}}_0(\Omega)\to \mathbb R$ are Gateaux differentiable away from zero and their derivatives are given by $$\label{eq.I'} \frac{d}{dt}I(u+tv)|_{t=0} = \langle I'(u),v \rangle=\int_{\mathbb R^n}\prod_{i=1}^n I_i(u)^{p_{i+1}-p_i} |u|^{p_1-2}uv\,dx,$$ where $p_{n+1}=1$, and $$\label{eq.H'} \frac{d}{dt}H(u+tv)|_{t=0}= \langle H'(u),v\rangle=\sum_{i=1}^n \int_{\mathbb R^n} \left|\frac{u_{x_i}}{\|u_{x_i}\|_{p_i}}\right|^{p_i-2}\frac{u_{x_i}}{\|u_{x_i}\|_{p_i}}v_{x_i}\,dx.$$ That is, $H' = -\mathcal{L}_\mathbf{p}$ and $I' = \mathcal{F}_\mathbf{p}$.* *Proof.* To prove [\[eq.I\'\]](#eq.I'){reference-type="eqref" reference="eq.I'"}, let $u,v \in L^{\mathbf{p}}(\Omega)$ and $t\in\mathbb R$. Then, recalling Remark [Remark 1](#rem.recurrencia){reference-type="ref" reference="rem.recurrencia"}, we compute $$\begin{aligned} \left.\frac{d}{dt}I_1(u+tv)\right|_{t=0} = I_1(u)^{1-p_1}\int_{\mathbb R} |u|^{p_1-2}uv\, dx_1.\end{aligned}$$ Next, $$\begin{aligned} \left.\frac{d}{dt}I_2(u+tv)\right|_{t=0}&= I_2(u)^{1-p_2}\int_{\mathbb R}I_1(u)^{p_2-1}\left.\frac{d}{dt}I_1(u+tv)\right|_{t=0}\,dx_2 \\ &= \int_{\mathbb R^2} I_2(u)^{1-p_2}I_1(u)^{p_2-p_1} |u|^{p_1-2}uv\, dx_1dx_2.\end{aligned}$$ Therefore, by induction, we arrive at $$\left.\frac{d}{dt}I_n(u+tv)\right|_{t=0} = \int_{\mathbb R^n} \prod_{i=1}^n I_i(u)^{p_{i+1}-p_i} |u|^{p_1-2}uv\, dx,$$ where $p_{n+1}=1$, and the proof of [\[eq.I\'\]](#eq.I'){reference-type="eqref" reference="eq.I'"} follows observing that $I_n=I$. The proof of [\[eq.H\'\]](#eq.H'){reference-type="eqref" reference="eq.H'"} is standard and the details are left to the reader. ◻ **Theorem 9**.
*The functionals $I$ and $H$ given in [\[HI\]](#HI){reference-type="eqref" reference="HI"} are Fréchet differentiable.* *Proof.* The proof follows easily from Lemma [Lemma 8](#I.diferenciable){reference-type="ref" reference="I.diferenciable"}, just observing that $I'=\mathcal{F}_\mathbf{p}$ and $H' = -\mathcal{L}_\mathbf{p}$ are continuous. In fact, the continuity of $\mathcal{F}_\mathbf{p}$ is proved in Lemma [Lemma 6](#Fp.bien.def){reference-type="ref" reference="Fp.bien.def"} and the continuity of $\mathcal{L}_\mathbf{p}$ is an easy exercise. ◻ At this point we can give a rigorous proof of Theorem [Theorem 7](#teo euler lagrange){reference-type="ref" reference="teo euler lagrange"}. *Proof of Theorem [Theorem 7](#teo euler lagrange){reference-type="ref" reference="teo euler lagrange"}.* Recall that, since $$\mathcal{Q}_{\mathbf{p}}(u) = \frac{H(u)}{I(u)},$$ using Lemma [Lemma 8](#I.diferenciable){reference-type="ref" reference="I.diferenciable"}, one obtains that, if $u\neq 0$, $$\langle \mathcal{Q}_{\mathbf{p}}'(u), v\rangle = \frac{1}{I(u)} \left(\langle H'(u), v\rangle - \mathcal{Q}_{\mathbf{p}}(u)\langle I'(u), v\rangle\right).$$ Hence, $u\in W^{1,\mathbf{p}}_0(\Omega)$ is a critical point of $\mathcal{Q}_{\mathbf{p}}$ if and only if $$\langle H'(u), v\rangle = \mathcal{Q}_{\mathbf{p}}(u)\langle I'(u), v\rangle.$$ But this is the same as saying that $u$ is a weak solution to [\[P\]](#P){reference-type="eqref" reference="P"} with $\lambda=\mathcal{Q}_{\mathbf{p}}(u)$. ◻ ## The fractional case Now, we will analyze the fractional case. So, we consider the Rayleigh-type quotient $\mathcal{Q}_{\mathbf{s},\mathbf{p}}$ defined in [\[Qsp\]](#Qsp){reference-type="eqref" reference="Qsp"}, and look for the Euler-Lagrange equation associated to it.
The main result of this section is to show that the Euler-Lagrange equation is given by $$\label{Ps} \begin{cases} -\mathcal{L}_{\mathbf{s},\mathbf{p}} u = \lambda \mathcal{F}_{\mathbf{p}}(u)\quad &\text{ in }\Omega\\ u=0&\text{ in } \mathbb R^n\setminus \Omega, \end{cases}$$ where $\mathcal{F}_{\mathbf{p}}$ is given by [\[F_p\]](#F_p){reference-type="eqref" reference="F_p"} and $$\begin{aligned} \mathcal{L}_{\mathbf{s},\mathbf{p}} u(x) &= \text{p.v.} \sum_{i=1}^n\int_\mathbb R\left|\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}}\right|^{p_i-2}\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}} \frac{dh}{|h|^{1+s_i}}\\ &=\lim_{\varepsilon\to 0}\sum_{i=1}^n\int_{|h|>\varepsilon} \left|\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}}\right|^{p_i-2}\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}} \frac{dh}{|h|^{1+s_i}},\end{aligned}$$ with $$D^{s,i}_h u(x) = \frac{u(x+he_i)-u(x)}{|h|^s}.$$ It is shown in [@CERESABONDER23] that the operator $\mathcal{L}_{\mathbf{s},\mathbf{p}}$ is the fractional version of $\mathcal{L}_{\mathbf{p}}$. This operator $\mathcal{L}_{\mathbf{s},\mathbf{p}}$ has to be understood in the weak sense, i.e. given $u, v\in W^{\mathbf{s},\mathbf{p}}_0(\Omega)$, $$\langle -\mathcal{L}_{\mathbf{s},\mathbf{p}} u, v\rangle = \sum_{i=1}^n\int_{\mathbb R^n}\int_\mathbb R\left|\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}}\right|^{p_i-2}\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}} D^{s_i,i}_h v(x) \frac{dh}{|h|}\,dx.$$ Again, we have to give a definition of weak solution. **Definition 10**.
Let $u$ be a function in $W^{\mathbf{s},\mathbf{p}}_0(\Omega)$, then $u$ is a weak solution of [\[Ps\]](#Ps){reference-type="eqref" reference="Ps"} if $u$ verifies $$\begin{aligned} \sum_{i=1}^n\int_{\mathbb R^n}\int_\mathbb R\left|\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}}\right|^{p_i-2}\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}} D^{s_i,i}_h v(x) \frac{dh}{|h|}\,dx = \lambda \int_{\mathbb R^n}\mathcal{F}_{\mathbf{p}}(u)v\,dx,\end{aligned}$$ for all $v\in W^{\mathbf{s},\mathbf{p}}_0(\Omega).$ Again, we introduce the notation $$\label{Hs} H_{\mathbf{s}}(u)=[u]_{\mathbf{s},\mathbf{p}},$$ and, in complete analogy with Lemma [Lemma 8](#I.diferenciable){reference-type="ref" reference="I.diferenciable"}, we have the following lemma, whose proof is left to the reader. **Lemma 11**. *The functional $H_{\mathbf{s}}\colon W^{\mathbf{s},\mathbf{p}}_0(\Omega)\to \mathbb R$ is Gateaux differentiable away from zero and its derivative is given by $$\begin{aligned} %\label{eq.H_s'} \langle H_{\mathbf{s}}'(u), v\rangle = \sum_{i=1}^n\int_{\mathbb R^n}\int_\mathbb R\left|\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}}\right|^{p_i-2}\frac{D^{s_i,i}_h u(x)}{[u]_{s_i,p_i,i}} D^{s_i,i}_h v(x) \frac{dh}{|h|}\,dx.\end{aligned}$$ That is, $H_\mathbf{s}'=-\mathcal{L}_{\mathbf{s},\mathbf{p}}$.* Finally, we can state the Euler-Lagrange theorem for the fractional case; its proof is analogous to the non-fractional case and is therefore omitted. **Theorem 12**. *Let $u$ be a function in $W^{\mathbf{s},\mathbf{p}}_0(\Omega).$ Then $u$ is a critical point of [\[Qsp\]](#Qsp){reference-type="eqref" reference="Qsp"} if and only if $u$ is a weak solution of the Euler-Lagrange equation [\[Ps\]](#Ps){reference-type="eqref" reference="Ps"}.* # General properties of eigenvalues {#section4} After having derived the Euler-Lagrange equation for each case, it becomes evident that these are eigenvalue problems, for which we can explore some properties.
We say that $\lambda \in \mathbb R$ is an eigenvalue of $\mathcal{L}_{\mathbf{p}}$ under Dirichlet boundary conditions in the domain $\Omega$ if problem [\[P\]](#P){reference-type="eqref" reference="P"} admits a nontrivial weak solution $u \in W^{1,\mathbf{p}}_0(\Omega)$. Then $u$ is called an eigenfunction of $\mathcal{L}_{\mathbf{p}}$ corresponding to $\lambda$. We will denote by $\Sigma^{\mathbf{p}}$ the collection of these eigenvalues. Similarly, we say that $\lambda \in \mathbb R$ is an eigenvalue of $\mathcal{L}_{\mathbf{s},\mathbf{p}}$ under Dirichlet boundary conditions in the domain $\Omega$ if problem [\[Ps\]](#Ps){reference-type="eqref" reference="Ps"} admits a nontrivial weak solution $u \in W^{\mathbf{s},\mathbf{p}}_0(\Omega)$. Then $u$ is called an eigenfunction of $\mathcal{L}_{\mathbf{s},\mathbf{p}}$ corresponding to $\lambda$. We will denote by $\Sigma^{\mathbf{s},\mathbf{p}}$ the collection of these eigenvalues. We begin this section by collecting some simple properties of the eigensets $\Sigma^\mathbf{p}$ and $\Sigma^{\mathbf{s},\mathbf{p}}$. **Proposition 13**. *$\Sigma^\mathbf{p}, \Sigma^{\mathbf{s},\mathbf{p}}\subset (0,\infty)$ are closed sets.* *Proof.* As we have done throughout the article, we will only provide the proof for the non-fractional case, leaving the fractional case to the reader. First, let $\lambda\in \Sigma^\mathbf{p}$ and let $u\in W^{1,\mathbf{p}}_0(\Omega)$ be an associated eigenfunction. Taking $u$ itself as a test function in the weak formulation of [\[P\]](#P){reference-type="eqref" reference="P"}, we obtain that $$\|\nabla u\|_{\mathbf{p}} = \lambda \|u\|_{\mathbf{p}}.$$ Therefore, $\lambda>0$. Next, let us see that $\Sigma^\mathbf{p}$ is closed. To this end, let $\{\lambda_k\}$ be a sequence of eigenvalues such that $\lambda_k\to \lambda\in \mathbb R$ as $k\to \infty$, and let $\{u_k\}_{k\in\mathbb N}\subset W^{1,\mathbf{p}}_0(\Omega)$ be a corresponding sequence of $L^{\mathbf{p}}$-normalized eigenfunctions.
Observe that $$\|\nabla u_k\|_{\mathbf{p}}= \lambda_k \|u_k\|_{\mathbf{p}} = \lambda_k,$$ from which it follows that $\{u_k\}_{k\in\mathbb N}$ is a bounded sequence in $W^{1,\mathbf{p}}_0(\Omega)$. Hence, passing to a subsequence, we get that $$u_k\rightharpoonup u \text{ in }W^{1,\mathbf{p}}_0(\Omega)\text{ and } u_k \to u \text{ in } L^{\mathbf{p}}(\Omega).$$ From Lemma [Lemma 6](#Fp.bien.def){reference-type="ref" reference="Fp.bien.def"}, $\mathcal{F}_\mathbf{p}$ is continuous and we get that $$\langle -\mathcal{L}_\mathbf{p}u_k, v\rangle=\lambda_k\langle \mathcal{F}_\mathbf{p}(u_k),v\rangle \to \lambda\langle \mathcal{F}_\mathbf{p}(u),v\rangle.$$ As each $u_k$ is an $L^{\mathbf{p}}$-normalized eigenfunction, $$\langle -\mathcal{L}_\mathbf{p}u_k, u_k\rangle=\lambda_k\to \lambda.$$ Now, we make use of Lemma [Lemma 19](#lemma1){reference-type="ref" reference="lemma1"}, which is proved in the next section, to obtain that $u_k\to u$ in $W^{1,\mathbf{p}}_0(\Omega)$. Hence, we can pass to the limit $k\to\infty$ in the weak formulation $$\langle -\mathcal{L}_\mathbf{p}u_k,v\rangle=\lambda_k\langle \mathcal{F}_\mathbf{p}(u_k),v\rangle$$ and get that $$\langle -\mathcal{L}_\mathbf{p}u,v\rangle=\lambda\langle \mathcal{F}_\mathbf{p}(u),v\rangle$$ for any $v\in W^{1,\mathbf{p}}_0(\Omega)$. That is, $u$ is an eigenfunction associated to $\lambda$ and the proof is complete. ◻ Now we arrive at the main point of this section, namely the asymptotic behavior of the eigenset $\Sigma^{\mathbf{s},\mathbf{p}}$ as the fractional parameters $\mathbf{s}=(s_1,\dots,s_n)$ satisfy $s_i\to 1$, $i=1,\dots,n$. To this end, we will make use of the following result, which is a particular case of [@CERESABONDER23 Theorem 3.3]. **Proposition 14**. *[@CERESABONDER23 Theorem 3.3][\[stability.semilinal\]]{#stability.semilinal label="stability.semilinal"} Let $\{\mathbf{s}_k\}_{k\in\mathbb N}$ be a sequence of fractional parameters with $\mathbf{s}_k\to (1,\dots,1)$ as $k\to\infty$.
Let $\mathbf{p}=(p_1,\dots,p_n)$ be such that $1<p_i<\infty$ for each $i=1,\dots, n$ and for each $k\in \mathbb N$ let $u_k\in W^{\mathbf{s}_k,\mathbf{p}}_0(\Omega)$ be such that $$\label{cota_uk} \sup_{k\in\mathbb N} \|u_k\|_{\mathbf{s}_k,\mathbf{p}} <\infty.$$* *Then, there exist a function $u\in W^{1,\mathbf{p}}_0(\Omega)$ and a subsequence $\{u_{k_j}\}_{j\in\mathbb N}\subset \{u_k\}_{k\in\mathbb N}$ such that $$u_{k_j}\to u \quad \text{in } L^\mathbf{p}(\Omega) \quad \text{and}\quad \|\nabla u\|_{\mathbf{p}}\le \liminf_{k\to\infty} \,[u_k]_{\mathbf{s}_k,\mathbf{p}}.$$* Moreover, we also need to borrow a lemma from [@CERESABONDER23]. **Lemma 15**. *[@CERESABONDER23 Lemma 5.7] Let $\{\mathbf{s}_k\}_{k\in\mathbb N}$ be a sequence of fractional parameters satisfying $\mathbf{s}_k\to(1,\dots,1)$ as $k\to\infty$ and let $v_k\in W^{\mathbf{s}_k,\mathbf{p}}_0(\Omega)$. Assume that $\{v_k\}_{k\in\mathbb N}$ satisfies [\[cota_uk\]](#cota_uk){reference-type="eqref" reference="cota_uk"}, and let $u\in W^{1,\mathbf{p}}_0(\Omega)$ be fixed.* *Without loss of generality, we can assume that there exists $v\in W^{1,\mathbf{p}}_0(\Omega)$ such that $v_k\to v$ in $L^\mathbf{p}(\Omega)$ as $k\to\infty$.* *Then $$\langle \mathcal{L}_{\mathbf{s}_k,\mathbf{p}} u, v_k\rangle \to \langle \mathcal{L}_\mathbf{p}u, v \rangle\quad\text{as } k\to\infty.$$* Hence we can state and prove the following theorem. **Theorem 16**. *Let $\{\mathbf{s}_k\}_{k\in\mathbb N}$ be a sequence of fractional parameters with $\mathbf{s}_k\to (1,\dots,1)$ as $k\to\infty$. Let $\mathbf{p}=(p_1,\dots,p_n)$ be such that $1<p_i<\infty$ for each $i=1,\dots, n$. Let $\lambda_k\in \Sigma^{\mathbf{s}_k,\mathbf{p}}$ be an eigenvalue of [\[Ps\]](#Ps){reference-type="eqref" reference="Ps"} such that $\lambda_k\to \lambda$ as $k\to\infty$. Then $\lambda\in \Sigma^\mathbf{p}$.
Moreover, if $u_k\in W^{\mathbf{s}_k,\mathbf{p}}_0(\Omega)$ is a normalized eigenfunction associated with $\lambda_k$, then any $L^\mathbf{p}(\Omega)$-accumulation point $u$ of the sequence $\{u_k\}_{k\in\mathbb N}$ satisfies $u\in W^{1,\mathbf{p}}_0(\Omega)$ and is an eigenfunction associated with $\lambda$.* *Proof.* Let $\{\mathbf{s}_k\}_{k\in\mathbb N}$ be a sequence of fractional parameters such that $\mathbf{s}_k\to (1,\dots,1)$ as $k\to\infty$ and let $\{\lambda_{\mathbf{s}_k}\}_{k\in\mathbb N}$ be a sequence of eigenvalues that converge to $\lambda$ as $k\to \infty$. For each $\lambda_{\mathbf{s}_k}$ there is an eigenfunction $u_k$, which we can assume to be $L^{\mathbf{p}}$-normalized. Note that since $\|u_k\|_\mathbf{p}= 1$ and $\lambda_{\mathbf{s}_k}$ is convergent, and therefore bounded, the sequence $\{u_k\}_{k\in\mathbb N}$ satisfies [\[cota_uk\]](#cota_uk){reference-type="eqref" reference="cota_uk"}. Therefore we can apply Proposition [\[stability.semilinal\]](#stability.semilinal){reference-type="ref" reference="stability.semilinal"} and obtain a subsequence, which we still denote by $\{u_k\}_{k\in\mathbb N}$, and a function $u\in W^{1,\mathbf{p}}_0(\Omega)$ such that $u_k\to u$ in $L^{\mathbf{p}}(\Omega)$. Now, using Lemma [Lemma 15](#lema.clave){reference-type="ref" reference="lema.clave"}, the proof follows from a classical monotonicity argument. ◻ # Existence of eigenvalues {#section5} In this section, we establish the existence of eigenvalues using the Ljusternik-Schnirelman theory. This is the main part of the article. First, we recall some abstract results from critical point theory that will be essential in our proof of the existence of eigenvalues. Let $X$ be a reflexive Banach space and let $\phi,\psi \in C^1(X,\mathbb R)$. We will assume that $\phi, \psi$ satisfy the assumptions (H1)--(H4) below: 1. $\phi,\psi \in C^1(X,\mathbb R)$ are even maps with $\phi(0)=\psi(0)=0$ and the level set $${\mathcal M}=\{u\in X :\psi(u)=1\}$$ is bounded.
[\[H1\]]{#H1 label="H1"} 2. $\phi'$ is completely continuous. Moreover, for any $u\in X$ it holds that $$\langle \phi'(u),u \rangle=0 \iff \phi(u)=0,$$ where $\langle \cdot ,\cdot \rangle$ denotes the duality brackets for the pair $(X,X^*).$ 3. $\psi'$ is continuous, bounded and, as $k\to\infty$, it holds that $$u_k\rightharpoonup u, \psi'(u_k)\rightharpoonup v \text{ and }\langle \psi'(u_k),u_k\rangle \to \langle v,u\rangle \Rightarrow u_k\to u\text{ in } X.$$ 4. For every $u\in X\setminus \{0\}$ it holds that $$\langle \psi'(u),u\rangle>0,\ \lim_{t\to\infty}\psi(tu)=\infty \text{ and } \inf_{u\in {\mathcal M}}\langle \psi'(u),u\rangle>0.$$ Now, for any $n\in\mathbb N$, we define $${\mathcal K}_n =\{K\subset \mathcal M\colon K \text{ is symmetric, compact, with } \phi|_K>0 \text{ and } \gamma(K)\ge n\},$$ where $\gamma(K)$ is the Krasnoselskii genus of the set $K$. Finally, let $$c_n = \begin{cases} \sup_{K\in{\mathcal K}_n} \min_{u\in K} \phi(u) & \text{if } {\mathcal K}_n\neq\emptyset\\ 0 & \text{if } {\mathcal K}_n=\emptyset. \end{cases}$$ The following general abstract result is proved in [@ZEIDLER] (see also [@MOTREANU14 Theorem 9.27]). **Theorem 17**. *Let $X$ be a reflexive Banach space and $\phi, \psi \in C^1(X,\mathbb R)$. Assume that $\phi, \psi$ satisfy *(H1)--(H4)*. Then* 1. *$c_1 <\infty$ and $c_n\to 0$ as $n\to\infty$.* 2. *If $c=c_n >0$, then we can find an element $u\in \mathcal M$ that is a solution of $$\label{eq.abstracta} \mu \psi'(u)=\phi'(u),\quad (\mu, u)\in \mathbb R\times{\mathcal M},$$ for an eigenvalue $\mu\neq0$ and such that $\phi(u) = c$.* 3. *More generally, if $c = c_n = c_{n+k} > 0$ for some $k \ge 0$, then the set of solutions $u \in{\mathcal M}$ of [\[eq.abstracta\]](#eq.abstracta){reference-type="eqref" reference="eq.abstracta"} such that $\phi(u)=c$ has genus $\ge k + 1$.* 4.
*If $c_n > 0$ for all $n \ge1$, then there is a sequence $\{(\mu_n,u_n)\}_{n\in\mathbb N}$ of solutions of [\[eq.abstracta\]](#eq.abstracta){reference-type="eqref" reference="eq.abstracta"} with $\phi(u_n)=c_n$, $\mu_n\neq0$ for all $n\ge 1$, and $\mu_n\to 0$ as $n\to\infty$.* 5. *If we further require that $$\langle \phi'(u),u\rangle=0 \text{ if and only if } \phi(u)=0 \text{ if and only if } u=0,$$ then $c_n > 0$ for all $n \ge 1$, and there is a sequence $\{(\mu_n,u_n)\}_{n\in\mathbb N}$ of solutions of [\[eq.abstracta\]](#eq.abstracta){reference-type="eqref" reference="eq.abstracta"} such that $\phi(u_n) = c_n$, $\mu_n\neq 0$, $\mu_n\to 0$, and $u_n \rightharpoonup 0$ in $X$ as $n\to\infty$.* Now we will apply Theorem [Theorem 17](#prop-LS){reference-type="ref" reference="prop-LS"} in the case where $X=W^{1,\mathbf{p}}_0(\Omega)$ and $(\phi, \psi)=(I,H)$, and in the case where $X=W^{\mathbf{s},\mathbf{p}}_0(\Omega)$ and $(\phi,\psi) = (I, H_{\mathbf{s}})$, where the operators $I, H$ and $H_{\mathbf{s}}$ were introduced in [\[HI\]](#HI){reference-type="eqref" reference="HI"} and [\[Hs\]](#Hs){reference-type="eqref" reference="Hs"}. For the sake of readability, we establish the properties of $H$ and $H_{\mathbf{s}}$ through a series of lemmas. **Lemma 18**. *Let $H\colon W^{1,\mathbf{p}}_0(\Omega)\to\mathbb R$ be defined in [\[HI\]](#HI){reference-type="eqref" reference="HI"} and $H_{\mathbf{s}}\colon W^{\mathbf{s},\mathbf{p}}_0(\Omega)\to\mathbb R$ be defined in [\[Hs\]](#Hs){reference-type="eqref" reference="Hs"}.* *Then $H'\colon W^{1,\mathbf{p}}_0(\Omega)\to W^{-1,\mathbf{p}'}(\Omega)$ and $H_{\mathbf{s}}'\colon W^{\mathbf{s},\mathbf{p}}_0(\Omega)\to W^{-\mathbf{s},\mathbf{p}'}(\Omega)$ are bounded and monotone.* *Proof.* Let us first demonstrate that $H'$ is bounded.
In fact $$\begin{aligned} |\langle H'(u),v\rangle|& =\left|\sum_{i=1}^{n}\frac{1}{\|u_{x_i}\|_{p_i}^{p_i-1}}\int_{\mathbb R^n} |u_{x_i}|^{p_i-2}u_{x_i}v_{x_i}\,dx\right|\\ &\leq \sum_{i=1}^{n}\frac{1}{\|u_{x_i}\|_{p_i}^{p_i-1}}\int_{\mathbb R^n} |u_{x_i}|^{p_i-1}|v_{x_i}|\,dx\\ &\leq\sum_{i=1}^{n}\frac{\|u_{x_i}\|_{p_i}^{p_i/p_i'}\|v_{x_i}\|_{p_i}}{\|u_{x_i}\|_{p_i}^{p_i-1}}\\ &\leq \sum_{i=1}^{n}\|v_{x_i}\|_{p_i}=\|\nabla v\|_{\mathbf{p}},\end{aligned}$$ where the second inequality follows from Hölder's inequality and the last one uses that $p_i/p_i'=p_i-1$. Therefore, $H'$ is a bounded operator. Moreover, from this bound we can deduce its monotonicity: $$\langle H'(u),v\rangle + \langle H'(v),u\rangle \leq \left|\langle H'(u),v\rangle\right| + \left|\langle H'(v),u\rangle\right|\leq \|\nabla v\|_{\mathbf{p}}+\|\nabla u\|_{\mathbf{p}}.$$ Hence, $$\begin{aligned} \langle H'(u)-H'(v),u-v\rangle&=\langle H'(u),u\rangle+\langle H'(v),v\rangle-\left(\langle H'(u),v\rangle+\langle H'(v),u\rangle\right)\\ &=\|\nabla u\|_{\mathbf{p}}+\|\nabla v\|_{\mathbf{p}}-\left(\langle H'(u),v\rangle+\langle H'(v),u\rangle\right)\geq 0.\end{aligned}$$ The proof for $H'$ is complete. The proof for $H_{\mathbf{s}}'$ is analogous and the details are left to the reader. ◻ **Lemma 19**.
*The operators $H$ and $H_{\mathbf{s}}$ given in [\[HI\]](#HI){reference-type="eqref" reference="HI"} and [\[Hs\]](#Hs){reference-type="eqref" reference="Hs"}, respectively, satisfy hypothesis *(H3)*.* *That is, $H'\colon W^{1,\mathbf{p}}_0(\Omega)\to W^{-1,\mathbf{p}'}(\Omega)$ and $H_{\mathbf{s}}'\colon W^{\mathbf{s},\mathbf{p}}_0(\Omega)\to W^{-\mathbf{s},\mathbf{p}'}(\Omega)$ are continuous and bounded operators and, moreover, as $k\to\infty$, it holds that $$\begin{aligned} \label{H31} & u_k\rightharpoonup u,\ H'(u_k)\rightharpoonup v \text{ and }\langle H'(u_k),u_k\rangle \to \langle v,u\rangle \Rightarrow u_k\to u\text{ in } W^{1,\mathbf{p}}_0(\Omega)\\ \label{H32} & u_k\rightharpoonup u,\ H_{\mathbf{s}}'(u_k)\rightharpoonup v \text{ and }\langle H_{\mathbf{s}}'(u_k),u_k\rangle \to \langle v,u\rangle \Rightarrow u_k\to u\text{ in } W^{\mathbf{s},\mathbf{p}}_0(\Omega).\end{aligned}$$* *Proof.* In view of Lemma [Lemma 18](#H'monotone){reference-type="ref" reference="H'monotone"} it remains to see that $H'$ and $H_{\mathbf{s}}'$ are continuous and that they satisfy [\[H31\]](#H31){reference-type="eqref" reference="H31"} and [\[H32\]](#H32){reference-type="eqref" reference="H32"}, respectively. First, we claim that $H'$ is a continuous operator. In fact, just observe that we can rewrite $H'$ as $$H'(u) = \sum_{i=1}^n \frac{J_{p_i}(u_{x_i})}{\|u_{x_i}\|_{p_i}^{p_i-1}},$$ where $J_p(u) := |u|^{p-2}u$.
Since $J_p\colon L^p(\mathbb R^n)\to L^{p'}(\mathbb R^n)$ is continuous, the claim follows. To verify [\[H31\]](#H31){reference-type="eqref" reference="H31"}, let $\{u_k\}_{k\in\mathbb N}$ be a sequence in $W^{1,\mathbf{p}}_0(\Omega)$ such that $u_k \rightharpoonup u$ in $W^{1,\mathbf{p}}_0(\Omega)$, $H'(u_k)\rightharpoonup v$ in $W^{-1,\mathbf{p}'}(\Omega)$ and $\langle H'(u_k),u_k\rangle\to\langle v,u\rangle$; we need to show that $u_k\to u$ in $W^{1,\mathbf{p}}_0(\Omega).$ Given an arbitrary $w\in W^{1,\mathbf{p}}_0(\Omega)$, by the monotonicity of $H'$ (Lemma [Lemma 18](#H'monotone){reference-type="ref" reference="H'monotone"}) we get that $$0\leq \langle H'(w) - H'(u_k), w - u_k\rangle.$$ Taking the limit as $k\to\infty$, we arrive at $$0\leq \langle H'(w),w - u\rangle- \langle v, w - u\rangle.$$ Now we can take $w=u+tz$ with $t>0$ and we find that $$0\leq \langle H'(u+tz)-v,tz\rangle.$$ Dividing by $t$ and taking the limit $t\to0^+$, we get, by the continuity of $H'$, that for all $z\in W^{1,\mathbf{p}}_0(\Omega)$, $$0\leq \langle H'(u)-v,z\rangle.$$ Since $z$ is arbitrary (replacing $z$ by $-z$ gives the reverse inequality), $H'(u)=v$. Moreover $$\|\nabla u_k\|_{\mathbf{p}} = \langle H'(u_k),u_k\rangle \to \langle v,u\rangle = \langle H'(u),u\rangle = \|\nabla u\|_{\mathbf{p}}.$$ As the space $W^{1,\mathbf{p}}_0(\Omega)$ is uniformly convex, weak convergence together with convergence of the norms implies that $u_k\to u$ strongly, as desired. The proof for $H_{\mathbf{s}}'$ is analogous. ◻ The following theorem is the main result of this section, as it ensures the existence of eigenvalues. **Theorem 20**. *There exists a sequence $\{u_k\}_{k\in \mathbb N}\subset W^{1,\mathbf{p}}_0(\Omega)$ of critical points of $\mathcal{Q}_{\mathbf{p}}$ with associated critical values $\{\lambda_k\}_{k\in \mathbb N}\subset \mathbb R$ such that $\lambda_k\to\infty$ as $k\to\infty$.
Moreover, these critical values have the following variational characterization: $$\label{variational characterization} \lambda_k=\inf_{K\in {\mathcal K}_k}\sup_{u\in K} H(u),$$ where, for any $k\in\mathbb N$, $${\mathcal K}_k=\{K\subset M \text{ compact, symmetric with } H'(u)>0 \text{ on } K \text{ and } \gamma(K)\geq k\},$$ $$M=\{u\in W_0^{1,\mathbf{p}}(\Omega): I(u)=1\},$$ and $\gamma$ is the *Krasnoselskii* genus of $K$.* *In particular, $u_k$ is a weak solution to [\[P\]](#P){reference-type="eqref" reference="P"} with eigenvalue $\lambda_k$.* *Proof.* We must confirm that the functionals $I$ and $H$ satisfy the hypotheses of Theorem [Theorem 17](#prop-LS){reference-type="ref" reference="prop-LS"}. Note that conditions (H1) and (H3) are direct consequences of Lemmas [Lemma 8](#I.diferenciable){reference-type="ref" reference="I.diferenciable"} and [Lemma 19](#lemma1){reference-type="ref" reference="lemma1"}, respectively. Condition (H4) follows directly from the definition of $H'(u)$. In order to show that (H2) holds, just observe that if $u_k\rightharpoonup u \text{ in } W^{1,\mathbf{p}}_0(\Omega)$, by the compactness of the embedding $W^{1,\mathbf{p}}_0(\Omega)\subset\subset L^{\mathbf{p}}(\Omega)$, it follows that $u_k\to u$ in $L^{\mathbf{p}}(\Omega)$ and, using Lemma [Lemma 6](#Fp.bien.def){reference-type="ref" reference="Fp.bien.def"}, we get that $I'(u_k)\to I'(u)$ in $L^{\mathbf{p}'}(\Omega)\subset W^{-1,\mathbf{p}'}(\Omega)$. Finally, observe that $\langle I'(u),u\rangle = I(u) = \|u\|_{\mathbf{p}}$, so each of these quantities vanishes if and only if $u=0$. We then apply the Ljusternik-Schnirelman theory, Theorem [Theorem 17](#prop-LS){reference-type="ref" reference="prop-LS"}, to the functionals $I$ and $H$ on the level set ${\mathcal M} = \{u\in W^{1,\mathbf{p}}_0(\Omega)\colon H(u)=1\}$.
By Theorem [Theorem 17](#prop-LS){reference-type="ref" reference="prop-LS"} there exist a sequence of numbers $\{\mu_k\}_{k\in\mathbb N}\searrow0$ and functions $\{u_k\}_{k\in\mathbb N}\subset W^{1,\mathbf{p}}_0(\Omega)$, normalized so that $H(u_k)=1$, such that $$\label{eq.abstracta2} \mu_k \langle H'(u_k),v\rangle = \langle I'(u_k),v\rangle \quad \forall v \in W^{1,\mathbf{p}}_0(\Omega)$$ and $I(u_k)=c_k$ with $$\label{ck} c_k = \sup_{K\in {\mathcal K}_k} \min_{u\in K} I(u).$$ Using that $\langle H'(u),u\rangle = H(u)$ and $\langle I'(u),u\rangle = I(u)$, one immediately obtains that $c_k=\mu_k$: indeed, taking $v=u_k$ in [\[eq.abstracta2\]](#eq.abstracta2){reference-type="eqref" reference="eq.abstracta2"} gives $\mu_k=\mu_k H(u_k)=I(u_k)=c_k$. So, if we denote $\lambda_k = \mu_k^{-1}$, using [\[eq.abstracta2\]](#eq.abstracta2){reference-type="eqref" reference="eq.abstracta2"}, we have that $u_k$ is a weak solution to [\[P\]](#P){reference-type="eqref" reference="P"} with eigenvalue $\lambda_k$ and from [\[ck\]](#ck){reference-type="eqref" reference="ck"} one also obtains the validity of [\[variational characterization\]](#variational characterization){reference-type="eqref" reference="variational characterization"}. ◻ *Remark 21*. The eigenvalues obtained in Theorem [Theorem 20](#existence.eigenvalues){reference-type="ref" reference="existence.eigenvalues"} are commonly called the Ljusternik-Schnirelman eigenvalues or simply the **LS-eigenvalues** and are denoted by $\Sigma^\mathbf{p}_{LS}$. Similarly, we state a theorem for the fractional counterpart. The proof is a slight variation of that of Theorem [Theorem 20](#existence.eigenvalues){reference-type="ref" reference="existence.eigenvalues"} and is omitted. **Theorem 22**. *There exists a sequence $\{u_k^{\mathbf{s}}\}_{k\in \mathbb N}\subset W^{\mathbf{s},\mathbf{p}}_0(\Omega)$ of critical points of $\mathcal{Q}_{\mathbf{s},\mathbf{p}}$ with critical values $\{\lambda_k^{\mathbf{s}}\}_{k\in \mathbb N}\subset \mathbb R$ such that $\lambda_k^{\mathbf{s}}\to\infty$ as $k\to\infty$.
Moreover, these critical values have the following variational characterization: $$\label{variational characterizations} \lambda_k^{\mathbf{s}}=\inf_{K\in {\mathcal K}_k^{\mathbf{s}}}\sup_{u\in K} H_{\mathbf{s}}(u),$$ where, for any $k\in\mathbb N$, $${\mathcal K}_k^{\mathbf{s}}=\{K\subset M_{\mathbf{s}} \text{ compact, symmetric with } H_{\mathbf{s}}'(u)>0 \text{ on } K \text{ and } \gamma(K)\geq k\},$$ $$M_{\mathbf{s}}=\{u\in W_0^{\mathbf{s},\mathbf{p}}(\Omega): I(u)=1\},$$ and $\gamma$ is the *Krasnoselskii* genus of $K$.* These eigenvalues, related to the fractional problem, will be denoted by $\Sigma_{LS}^{\mathbf{s}, \mathbf{p}}$. # Acknowledgements {#acknowledgements .unnumbered} This work was partially supported by UBACYT Prog. 2018 20020170100445BA, by ANPCyT PICT 2016-1022 and by PIP No. 11220150100032CO. J. Fernández Bonder is a member of CONICET and I. Ceresa Dussel is a doctoral fellow of CONICET.
arxiv_math
{ "id": "2309.14301", "title": "Existence of Eigenvalues for Anisotropic and Fractional Anisotropic\n Problems via Ljusternik-Schnirelmann Theory", "authors": "I. Ceresa Dussel and J. Fernandez Bonder", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | This paper presents a reduced algorithm based on the classical projection method for the solution of $d$-dimensional quasiperiodic problems, particularly Schrödinger eigenvalue problems. Using the properties of the Schrödinger operator in higher-dimensional space via a projection matrix of size $d\times n$, we rigorously prove that the generalized Fourier coefficients of the eigenfunctions decay exponentially along a fixed direction associated with the projection matrix. An efficient reduction strategy of the basis space is then proposed to reduce the degrees of freedom from $O(N^{n})$ to $O(N^{n-d}D^d)$, where $N$ is the number of Fourier grid points in one dimension and the truncation coefficient $D$ is much less than $N$. Correspondingly, the computational complexity of the proposed algorithm for solving the first $k$ eigenpairs using the Krylov subspace method decreases from $O(kN^{2n})$ to $O(kN^{2(n-d)}D^{2d})$. Rigorous error estimates of the proposed reduced projection method are provided, indicating that a small $D$ is sufficient to achieve the same level of accuracy as the classical projection method. We present numerical examples of quasiperiodic Schrödinger eigenvalue problems in one and two dimensions to demonstrate the accuracy and efficiency of our proposed method. **Key words.** Quasiperiodic problems, Schrödinger eigenvalue problems, projection method, spectral method, basis reduction.
**AMS subject classifications.** 65N35, 65N22, 65F05, 35J05 address: School of Mathematical Sciences, MOE-LSC and CMA-Shanghai, Shanghai Jiao Tong University, Shanghai 200240, China author: - Zixuan Gao, Zhenli Xu and Zhiguo Yang title: Reduced projection method for quasiperiodic Schrödinger eigenvalue problems --- # Introduction Quasiperiodic problems emerge naturally in a great many physical systems such as quasicrystals, many-body problems, and low-dimensional materials [@lei2020study; @wang2020localization; @cao2018unconventional; @britnell2013strong; @xu2013graphene; @huang2016localization; @gao2023pythagoras; @lu2019superconductors; @zhang2021quasi], and have found numerous applications in the areas of mechanics, acoustics, electronics, solid-state physics, and physics of matter waves [@bistritzer2011moire; @carr2017twistronics; @gonzalez2019cold; @o2016moire; @hu2020moire]. Efficient and accurate numerical simulations of quasiperiodic problems play a critical role in exploring and utilizing novel material properties. Though quasiperiodic systems are ubiquitous in mathematics and physics, their numerical treatment is not as straightforward as that of periodic systems. Specifically, quasiperiodic structures are space-filling and ordered, yet exhibit neither decay nor translational invariance [@jiang2022numerical]. In recent years, there has been a growing interest in the optical properties of moiré lattices, a prototype of quasicrystals, as evidenced by notable studies in [@fu2020optical; @kartashov2021multifrequency; @kouznetsov2022sieve; @salakhova2021fourier] that have brought about significant breakthroughs in the field of optics. It is exhilarating to note that a localization-to-delocalization transition of eigenstates of moiré lattices in two dimensions was observed for the first time in both numerical simulations and experiments [@wang2020localization], which paves a new way of controlling light at will.
However, the localization of eigenstates as well as the phase transition in the high-dimensional case is not well explored, due to the exceedingly large degrees of freedom and computational cost required. Considerable efforts have been devoted to overcoming this difficulty. One of the widely used approaches is the periodic approximation method, also known as the crystalline approximant method [@davenport1946simultaneous; @goldman1993quasicrystals; @lifshitz1997theoretical], which approximates the quasiperiodic function via a periodic function in a certain supercell. Nevertheless, this method is known to converge slowly, and the simultaneous Diophantine approximation error does not decay uniformly as the size of the supercell gradually increases. In contrast to the periodic approximation method, the projection method (PM), originating from Jiang and Zhang [@jiang2014numerical; @jiang2018numerical], regards a quasiperiodic function as a projection of higher-dimensional periodic functions. This method has the advantage of avoiding the simultaneous Diophantine approximation error and allowing for the convenient use of periodic boundary conditions for domain truncation. Despite its high accuracy, this method requires solving problems in higher dimensions, leading to significant increases in the degrees of freedom, computational cost, and memory consumption. This drawback makes the projection method prohibitive for solving high-dimensional quasiperiodic problems. Our proposed method is inspired by several pioneering works on quasiperiodic problems. Wang *et al.* [@wang2022convergence] characterize the density of states of Schrödinger operators in the weak sense for incommensurate systems and propose numerical methods based on the planewave discretization and reciprocal space sampling.
Zhou *et al.* [@zhou2019plane] propose a $k$-point sampling reduction technique under the planewave framework for the electronic structure-related eigenvalue problems of incommensurate systems. The main purpose of this paper is to propose an efficient reduced projection method (RPM) for solving the following quasiperiodic Schrödinger eigenvalue problem $$\label{eq1} \mathcal{L}[u]:=-\dfrac{1}{2}\Delta u(\bm z)+v(\bm z)u(\bm z)= Eu(\bm z),\quad \bm z\in \mathbb{R}^d,$$ where $\bm z=(z_1,\cdots,z_d)^{\intercal}$ denotes the physical coordinates in $d$ dimensions, $v(\bm z)$ is a quasiperiodic potential function, and $u(\bm z)$ and $E$ are respectively the eigenfunction and eigenvalue of the linear Schrödinger operator $\mathcal{L}$. By a careful analysis of the decay property of the generalized Fourier series of the eigenfunctions of $\mathcal{L}$ in the higher-dimensional space, an efficient dimension reduction strategy for the classical projection method is proposed. Furthermore, we establish rigorous error estimates for the RPM, which validate that the reduction strategy can guarantee the same level of accuracy as the PM, with a significant decrease in the degrees of freedom and computational time. The proposed method provides an efficient numerical tool to explore the phase transition of eigenstates for high-dimensional moiré lattices. The rest of this paper is organized as follows. Section [2](#s2){reference-type="ref" reference="s2"} presents the preliminaries of quasiperiodic functions, the theoretical foundation of the dimension reduction strategy, and the associated error estimates of the proposed reduced projection method. Section [3](#s4){reference-type="ref" reference="s4"} demonstrates the numerical results of the proposed method for solving quasiperiodic Schrödinger eigenvalue problems in one and two dimensions. Several interesting physical observations regarding the localization of eigenstates are also presented.
Section [4](#s5){reference-type="ref" reference="s5"} then concludes the paper with some closing remarks. # Reduced projection method for Schrödinger eigenvalue problems {#s2} We begin this section with some theoretical results for quasiperiodic functions, followed by an introduction to the projection method. The reduced projection method with delicate error analysis and the corresponding algorithm are then presented. ## Preliminaries We start with a $d$-dimensional periodic function $F(\bm{z})\in L^2([0,T]^d)$ with period $T$ in each dimension (dubbed a "$T$-periodic function"), equipped with the standard inner product and norm $$(F,G)=\dfrac{1}{T^d}\int_{[0,T]^d}F\Bar{G}d\bm{z},\quad \| F\|=\sqrt{(F,F)},$$ where $\Bar{G}$ is the complex conjugate of $G\in L^2([0,T]^d)$. The Fourier series of $F(\bm{z})$ is defined by $$\label{dfeq1} F(\bm{z})=\sum_{\bm{k}\in\mathbb{Z}^d}F_{\bm{k}}e^{\mathrm{i}\langle\bm{k},\bm{z}\rangle},\quad F_{\bm{k}}=\dfrac{1}{T^d}\int_{[0,T]^d}F(\bm{z})e^{-\mathrm{i}\langle\bm{k},\bm{z}\rangle}d\bm{z},\;\; \bm{k}\in\mathbb{Z}^d,$$ where $\langle \cdot,\cdot\rangle$ is the dot product between two vectors and $\mathbb{Z}$ is the set of integers. Here $F_{\bm k}$ is the $\bm k$th Fourier coefficient of $F(\bm z)$. One recalls the decay rate of Fourier coefficients with respect to the smoothness of a $T$-periodic function in the following lemma (see [@grafakos2008classical]). **Lemma 1**. *Let $s\in\mathbb{N}^+$ and $F(\bm{z})$ be a $d$-dimensional $T$-periodic function.
Suppose that $\partial^{\bm{\alpha }}F$ exists and is integrable for all $\bm{\alpha}\in \mathbb{N}_0^d$ such that $\| \bm{\alpha}\|_\infty\leq s$, that is, $F(\bm z)\in C_p^s([0,T]^d)$. Then $$\lim_{\|\bm{k}\|_2\rightarrow\infty}(1+\|\bm{k}\|_2^{2s})|F_{\bm{k}}|^2=0.$$ Furthermore, given that $F(\bm{z}) \in C_p^{\infty}([0,T]^d)$, the smooth $T$-periodic function space, its Fourier coefficients satisfy $F_{\bm{k}}=o(\|\bm{k}\|_2^{-s})$ when $\|\bm k\|_2\rightarrow\infty$ for any $s\in \mathbb{N}^+$, i.e., the Fourier coefficients $F_{\bm{k}}$ decay faster than any algebraic order of $\|\bm{k}\|_2$.* Next, we describe the definition of quasiperiodic functions and their relevant properties (see [@bohr2018almost; @levitan1982almost] for comprehensive discussions). **Definition 1**. *A $d$-dimensional function $f(\bm{z})$ is quasiperiodic if there exists a $d\times n$ projection matrix $\bm{P}$ such that $F(\bm{x})=F(\bm{P}^\intercal\bm{z})=f(\bm{z})$ is an $n$-dimensional periodic function, where all columns of matrix $\bm P$ are linearly independent over the rational numbers. Here, $F(\bm x)$ is called the parent function of $f(\bm z)$ with respect to $\bm P$.* Define the mean value of a $d$-dimensional quasiperiodic function $f(\bm{z})$ by $$\mathcal{M}(f)=\lim_{L\rightarrow\infty}\dfrac{1}{L^d}\int_{K}f(\bm{z})d\bm{z},$$ where $K=\{\bm{z}\,| \,0\leq z_i\leq L,i=1,\dots,d\}$. Correspondingly, one can define the inner product and norm of quasiperiodic functions as $$(f,g)_{\rm qp}=\mathcal{M}(f\bar{g}),\quad \|f \|_{\rm qp}=\sqrt{ (f,f)_{\rm qp} }\,,$$ where $f,g$ are required to be quasiperiodic under the same projection matrix.
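To make the definition above concrete, here is a small numerical check (an illustrative example of our own, assuming Python with numpy is available) that $f(z)=\cos z+\cos(\sqrt{2}z)$ is quasiperiodic in the sense of Definition 1, with projection matrix $\bm P=(1,\sqrt{2})$ and parent function $F(x_1,x_2)=\cos x_1+\cos x_2$:

```python
import numpy as np

# Illustrative example (our own choice, not from the text):
# f(z) = cos z + cos(sqrt(2) z) is quasiperiodic with projection matrix
# P = (1, sqrt(2)), since F(x1, x2) = cos x1 + cos x2 is 2*pi-periodic,
# f(z) = F(P^T z), and 1, sqrt(2) are linearly independent over the rationals.
P = np.array([[1.0, np.sqrt(2.0)]])          # d x n = 1 x 2 projection matrix
F = lambda x1, x2: np.cos(x1) + np.cos(x2)   # periodic parent function
f = lambda z: np.cos(z) + np.cos(np.sqrt(2.0) * z)

z = np.linspace(-50.0, 50.0, 1001)
x = P.T @ z[None, :]                         # lift: x = P^T z, shape (2, len(z))
assert np.allclose(F(x[0], x[1]), f(z))      # f(z) = F(P^T z) at all sampled z
```

Because $1$ and $\sqrt{2}$ are rationally independent, the lifted trajectory $\{\bm P^\intercal z\}$ never closes up on the torus, which is precisely why $f$ itself has no period.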
It is direct to verify that $\big\{e^{\mathrm{i} \langle \bm{q},\bm{z} \rangle} \big \}_{\bm{q}\in \mathbb{R}^d}$ forms an orthonormal system, as $$\big(e^{\mathrm{i} \langle \bm{q}_1,\bm{z} \rangle },e^{\mathrm{i}\langle \bm{q}_2,\bm{z} \rangle} \big)_{\rm qp}=\delta_{\bm{q}_1,\bm{q}_2},\quad \bm{q}_1,\bm{q}_2\in \mathbb{R}^d,$$ where $\delta_{\bm{q}_1,\bm{q}_2}$ is the Kronecker delta. In addition, the Fourier transform of the quasiperiodic function is $$\mathcal{F}\{f\}(\bm{q})=\mathcal{M}\big(f(\bm z)e^{-i\langle\bm q,\bm z\rangle}\big),$$ which is also called the Fourier-Bohr transformation [@jiang2022numerical]. Then we have the generalized Fourier series of the quasiperiodic function, given in Lemma [Lemma 2](#thm2){reference-type="ref" reference="thm2"}. We refer the readers to [@bohr2018almost] for a detailed proof. **Lemma 2**. *For a $d$-dimensional quasiperiodic function $f(\bm{z})$ with a projection matrix $\bm P$, it has generalized Fourier series $$f(\bm{z})=\sum_{\bm{q}\in\Gamma}f_{\bm{q}}e^{\mathrm{i}\langle\bm{q},\bm{z}\rangle}, \quad f_{\bm{q}}=\big(f,e^{\mathrm{i}\langle\bm{q},\bm{z}\rangle} \big)_{\rm qp}, \quad \Gamma:=\Big\{\bm q\Big| \bm q=\sum_{i=1}^{n}c_i \bm q_i, \;\; c_i\in \mathbb{Z} ,\;\; \bm q_i={\rm col}(\bm P)_i\Big \},$$ where ${\rm col}(\bm P)_i$ denotes the $i$th column of $\bm P$. In addition, if the series is absolutely convergent, then it is also uniformly convergent.* Similar to periodic functions, the generalized Fourier series of a quasiperiodic function satisfies Parseval's identity $$(f,f)_{\rm qp}=\mathcal{M}(|f|^2)=\sum_{\bm{q}\in\Gamma}|f_{\bm{q}}|^2.$$ By Definition [Definition 1](#df1){reference-type="ref" reference="df1"}, a $d$-dimensional quasiperiodic function $f(\bm{z})$ can be transformed into an $n$-dimensional periodic function $F(\bm{x})$ by the projection matrix $\bm{P}$.
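As a quick numerical sanity check of the Fourier-Bohr transformation (again an illustrative example of our own, not taken from the text), the mean value $\mathcal{M}\big(f(z)e^{-\mathrm{i}qz}\big)$ for $f(z)=\cos z+\cos(\sqrt{2}z)$ is approximately $1/2$ at the frequencies $q=1$ and $q=\sqrt{2}$ of $\Gamma$, and approximately $0$ at a frequency outside $\Gamma$; the averaging window $L$ is finite, so the limit is only approximated, with an $O(1/L)$ error:

```python
import numpy as np

# Approximate the Fourier-Bohr coefficient M(f(z) e^{-iqz}) of a 1-D
# quasiperiodic function by averaging over a long but finite window [0, L].
# The window length L and grid size m are our own illustrative choices.
def fourier_bohr(f, q, L=2000.0, m=400_000):
    dz = L / m
    z = (np.arange(m) + 0.5) * dz            # midpoint rule on [0, L]
    return np.mean(f(z) * np.exp(-1j * q * z))

f = lambda z: np.cos(z) + np.cos(np.sqrt(2.0) * z)
c1 = fourier_bohr(f, 1.0)                    # ~ 1/2, i.e. q = P k with k = (1, 0)
c2 = fourier_bohr(f, np.sqrt(2.0))           # ~ 1/2, i.e. k = (0, 1)
c0 = fourier_bohr(f, 0.7)                    # ~ 0, since 0.7 is not in Gamma
print(abs(c1 - 0.5) < 1e-2, abs(c2 - 0.5) < 1e-2, abs(c0) < 1e-2)
```

The four nonzero coefficients $f_{\pm 1}=f_{\pm\sqrt{2}}=1/2$ also recover $\sum_{\bm q\in\Gamma}|f_{\bm q}|^2=1=\mathcal{M}(|f|^2)$, in line with Parseval's identity above.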
Using the Fourier series of periodic functions, one can obtain $$\label{rfs} F(\bm{x})=\sum_{\bm{k}\in\mathbb{Z}^{n}}F_{\bm{k}}e^{\mathrm{i}\langle\bm{k},\bm{x}\rangle},$$ where $F_{\bm{k}}$ is the Fourier coefficient of the parent function and Eq. [\[rfs\]](#rfs){reference-type="eqref" reference="rfs"} is the Fourier series in the raised dimensional space. In Lemma [Lemma 3](#cosis){reference-type="ref" reference="cosis"} we describe the consistency of the generalized Fourier coefficients and the raised Fourier coefficients of quasiperiodic functions by invoking Birkhoff's ergodic theorem [@pitt1942some; @walters2000introduction; @petersen1989ergodic] (see [@jiang2022numerical] for a detailed proof). **Lemma 3**. *For a $d$-dimensional quasiperiodic function $f(\bm{z})$ with parent function $F(\bm x)$, it holds that $f_{\bm{q}}=F_{\bm{k}}$ when $\bm{q}=\bm{Pk}$.* The consistency between the Fourier coefficients of the parent function and the generalized Fourier coefficients of the quasiperiodic function makes projection a reliable method to solve quasiperiodic problems via the corresponding periodic problems in higher dimensions. Theorem [**Theorem** 1](#foco){reference-type="ref" reference="foco"} presents the decay rate of generalized Fourier coefficients by combining Lemmas [Lemma 1](#thm1){reference-type="ref" reference="thm1"} and [Lemma 3](#cosis){reference-type="ref" reference="cosis"}. ****Theorem** 1**. *Let $f(\bm{z})$ be a $d$-dimensional quasiperiodic function with parent function $F(\bm{x})\in C_p^{s}([0,T]^n)$. Then there exists a positive constant $C$ such that $$|f_{\bm{q}}|\leq C|\bm{q}|^{-s}.$$* ## Reduced projection method The PM, proposed by Jiang and Zhang [@jiang2014numerical], serves as an accurate way to solve quasiperiodic eigenvalue problems. For a $d$-dimensional quasiperiodic problem, the PM transforms it into its corresponding periodic problem in $n$-dimensional space.
Specifically, letting $\bm{x}=\bm{P}^\intercal\bm{z}$, where $\bm{x}=(x_1,x_2,\dots,x_{n})^\intercal$, one can transform Eq. [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} into $$\label{eq2} EU(\bm{x})=-\dfrac{1}{2}\sum_{i=1}^d\sum_{j,l=1}^{n} P_{ij}P_{il} \dfrac{\partial^2 U(\bm{x})}{\partial x_j\partial x_l}+V(\bm{x})U(\bm{x}).$$ Here $U(\bm{x})$ and $V(\bm{x})$ are the parent functions of the eigenfunction $u(\bm z)$ and potential $v(\bm z)$, respectively. It should be noted that the choice of $\bm P$ is not unique. In this paper, we select those projection matrices that make the parent functions $2\pi$-periodic, and this choice is unique. The PM discretizes Eq. [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} by seeking an approximate solution $$\label{pm} U_N(\bm{x})=\sum_{\bm{k}\in\Omega}U_{N,\bm{k}}e^{\mathrm{i}\langle \bm{k},\bm{x}\rangle},\quad \Omega=\big\{\bm{k}\in\mathbb{Z}^{n} \big| \|\bm{k}\|_\infty\leq N \big\}.$$ This leads to a system of equations for each frequency mode $U_{N,\bm{k}}$, $$\label{eq3} EU_{N,\bm{k}}=\dfrac{1}{2}\sum_{i=1}^d\sum_{j,l=1}^{n}k_jk_lP_{ij}P_{il} U_{N,\bm{k}}+\{V(\bm{x})U_N(\bm{x})\}_{\bm{k}},$$ where $\{V(\bm{x})U_N(\bm{x})\}_{\bm{k}}$ is the $\bm k$th Fourier coefficient as defined in Eq. [\[dfeq1\]](#dfeq1){reference-type="eqref" reference="dfeq1"} and $k_j$ is the $j$th component of $\bm k$. Denote by $\bm{\hat{U}}$ the column vector whose components are the Fourier coefficients $U_{N,\bm{k}}$; then the discrete eigenvalue problem Eq. [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} can be rewritten as a matrix eigenvalue problem $\bm{H\hat{U}}=E\bm{\hat{U}}$. In real computations, due to the large size of the dense matrix $\bm H$, it is not practical to store its elements and the eigenvalue problem is to be solved in a matrix-free manner.
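Such a matrix-free application can be sketched as follows (a toy 1-D example of our own devising: the potential $v(z)=\cos z+\cos(\sqrt{2}z)$, the tiny truncation $N$, and all variable names are assumptions for illustration, not the paper's setup). Each application of $\bm H$ multiplies the Fourier coefficients by the diagonal symbol of the lifted Laplacian and applies the parent potential $V$ pointwise on the grid via FFTs; for such a small $N$ the operator can even be densified column by column, so that its Hermiticity and lowest eigenvalue can be inspected directly:

```python
import numpy as np

# Matrix-free application of the lifted Hamiltonian for the illustrative 1-D
# quasiperiodic potential v(z) = cos z + cos(sqrt(2) z): the parent potential
# is V(x1, x2) = cos x1 + cos x2 with projection matrix P = (1, sqrt(2)).
N = 16
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer Fourier frequencies
K1, K2 = np.meshgrid(k, k, indexing="ij")
symbol = 0.5 * (K1 + np.sqrt(2.0) * K2) ** 2     # diagonal kinetic part (1/2)|Pk|^2

x = 2.0 * np.pi * np.arange(N) / N
X1, X2 = np.meshgrid(x, x, indexing="ij")
V = np.cos(X1) + np.cos(X2)                      # parent potential on the grid

def matvec(f):
    # H f: the kinetic symbol acts diagonally on the Fourier coefficients;
    # the potential acts pointwise on the grid, reached via inverse FFT.
    F = f.reshape(N, N)
    return (symbol * F + np.fft.fft2(V * np.fft.ifft2(F))).ravel()

# Densify column by column (feasible only for tiny N) to inspect the operator.
H = np.column_stack([matvec(e) for e in np.eye(N * N, dtype=complex)])
assert np.allclose(H, H.conj().T)                # Hermitian, as expected
E0 = np.linalg.eigvalsh(H)[0]
print(E0 < 0.0)                                  # ground state below mean(V) = 0
```

In practice only the `matvec` routine would be handed to a Krylov eigensolver (e.g. a Lanczos-type method); the dense matrix is never formed or stored.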
That is, for matrix $\bm H$, one defines its matrix-vector product function $$\label{mvp} \bm{Hf}=\bm{\hat{D}f}+\hbox{FFT}\big(V(\bm x)\cdot\hbox{IFFT}(\bm f)\big),$$ where $\hbox{FFT}(\cdot)$ and $\hbox{IFFT}(\cdot)$ denote the $n$-dimensional fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT). Here $\bm{\hat{D}}$ is a diagonal matrix such that for $\bm{\hat{U}}_m=U_{N,\bm k}$, the $m$th diagonal element of $\bm{\hat{D}}$ is $$\hat{D}_{mm}=\dfrac{1}{2}\sum_{i=1}^d\sum_{j,l=1}^{n}k_jk_lP_{ij}P_{il}.$$ Since only the operation $\bm H \bm f$ is invoked during the generation of basis vectors in the Krylov subspace, there is no need to store the dense matrix $\bm H$ itself, thus reducing the storage cost from $O(N^{2n})$ to $O(N^{n})$. Then the eigenvalue problem [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} can be solved via the Krylov subspace iterative method in a matrix-free manner [@van2000krylov; @watkins2007matrix]. Though the PM is a powerful and accurate numerical method for solving quasiperiodic Schrödinger eigenvalue problems, it suffers from the "curse of dimensionality". As the dimension is raised, the degrees of freedom (DOF) required may become extremely large, making it computationally prohibitive and memory-intensive to solve the quasiperiodic eigenvalue problem. For three-dimensional quasiperiodic problems with a projection matrix of size $3\times 6$, the DOF to deal with is $O(N^6)$. This poses significant challenges in solving high-dimensional problems. The reduced projection method is based on the fact that the generalized Fourier coefficients of the eigenfunctions decay exponentially along a fixed direction of $\bm{Pk}$. In fact, given some mild restrictions on the quasiperiodic potential function $v(\bm z)$ in Eq. [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"}, the index set $\Omega$ of the Fourier expansion in Eq.
[\[pm\]](#pm){reference-type="eqref" reference="pm"} can be reduced to $$\label{omega2} \Omega_R=\big\{\bm{k}\in\mathbb{Z}^{n} \big| \|\bm{Pk}\|_2\leq D, \|\bm{k}\|_\infty\leq N \big\}$$ without sacrificing the accuracy of the approximation. Here, the parameter $D< N$ is a prescribed truncation constant. The method based on the reduced index set $\Omega_R$ is the RPM; that is, the RPM seeks an approximate solution of the form $$\label{rpm} U_N(\bm{x})=\sum_{\bm{k}\in\Omega_R}U_{N,\bm{k}}e^{\mathrm{i}\langle \bm{k},\bm{x}\rangle}.$$ Let us denote the generalized Fourier coefficients of $u(\bm z)$ and the Fourier coefficients of its parent function by $u_{\bm q}$ and $U_{\bm k}$, respectively. By Lemma [Lemma 3](#cosis){reference-type="ref" reference="cosis"}, one has $$u_{\bm{q}}=U_{\bm{k}}, \;\; \text{when}\;\; \bm{q}=\bm{Pk}.$$ In Theorem [**Theorem** 2](#thm6){reference-type="ref" reference="thm6"} below, we show the decay rate of the generalized Fourier coefficients $u_{\bm{q}}$ of the eigenfunction $u$ of the quasiperiodic eigenvalue problem [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"}, which justifies the RPM and plays a crucial role in the error estimate. ****Theorem** 2**. *Let $u$ be the eigenfunction corresponding to the eigenvalue $E$ of the $d$-dimensional quasiperiodic Schrödinger eigenvalue problem [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"}, let $u_{\bm q}$ be the $\bm q$th generalized Fourier coefficient of $u$, and let $v$, $V$ be the quasiperiodic potential function and its parent function. Given an integer $\alpha \geq 3$, suppose that $\mathcal{F}\{v\}\in L^\beta(\mathbb{R}^d)$ and ${V\in C_p^{d+\alpha-2}([0,T]^n)}$ with $1/\beta+1/d>1$, and that $|\bm q|^2>4E$. Then there exists a constant $C_{\alpha}$ depending on $\|v\|$, $\|\mathcal{F}\{v\}\|_\beta$ and $d$ such that $$|u_{\bm{q}}|\leq C_\alpha|\bm{q}|^{-\alpha}.$$* *Proof.* Without loss of generality, we set $\|u\|=1$. Inserting the generalized Fourier series of $u$ into Eq.
[\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} leads to $$Eu_{\bm{q}}=\dfrac{1}{2}|\bm{q}|^2u_{\bm{q}}+\{vu\}_{\bm{q}}\,.$$ According to the properties of the generalized Fourier transform, $$\{vu\}_{\bm{q}}=\mathcal{F}\{vu\}(\bm{q})=\mathcal{F}\{v\}(\bm{q})*\mathcal{F}\{u\}(\bm{q}),$$ where $*$ denotes the convolution of two functions. By Hölder's inequality, Parseval's identity and the fact that $\|u\|=1$, one has $$\begin{split} \left|E-|\bm{q}|^2/2\right| |u_{\bm{q}}|&\leq |\mathcal{F}\{v\}*\mathcal{F}\{u\}|\leq \|\mathcal{F}\{v\}\|\cdot\|\mathcal{F}\{u\}\|=\|v\|\cdot\|u\|=\|v\|. \end{split}$$ Since $|\bm q|^2>4E$, one directly verifies that $|E-|\bm{q}|^2/2|>|\bm{q}|^2/4$. Therefore, $$|u_{\bm{q}}|\leq\dfrac{\|v\|}{\left|E-|\bm{q}|^2/2\right|}\leq 4\|v\| \cdot|\bm{q}|^{-2}.$$ By Parseval's identity and $\|u\|=1$, one can obtain that for any $\bm{q}$, $|u_{\bm{q}}|\leq1$. Define $$g_{-2}(\bm{q})=\min\{1,4\|v\| \cdot|\bm{q}|^{-2}\};$$ then one has $|u_{\bm{q}}|\leq g_{-2}(\bm{q})$. Considering the convolution term, one has $$|\mathcal{F}\{v\}(\bm{q})*\mathcal{F}\{u\}(\bm{q})|\leq \int_{\mathbb{R}^d}|\mathcal{F}\{v\}(\bm{q}-\bm{t})||\mathcal{F}\{u\}(\bm{t})|d\bm{t}\leq \int_{\mathbb{R}^d}|\mathcal{F}\{v\}(\bm{q}-\bm{t})|g_{-2}(\bm{t})d\bm{t}.$$ The integral is split into two parts, that is, $$\begin{split} \int_{\mathbb{R}^d}|\mathcal{F}\{v\}(\bm{q}-\bm{t})|g_{-2}(\bm{t})d\bm{t} &=\int_{|\bm{t}|\geq|\bm{q}|/2}|\mathcal{F}\{v\}(\bm{q}-\bm{t})|g_{-2}(\bm{t})d\bm{t}+\int_{|\bm{t}|<|\bm{q}|/2}|\mathcal{F}\{v\}(\bm{q}-\bm{t})|g_{-2}(\bm{t})d\bm{t}\\ &:=I_1+I_2. \end{split}$$ By Hölder's inequality with $1/\beta+1/\gamma=1$, $$I_1\leq 4\|v\|\left(\int_{|\bm{t}|\geq|\bm{q}|/2}|\mathcal{F}\{v\}(\bm{q}-\bm{t})|^{\beta}d\bm{t}\right)^{1/\beta}\left(\int_{|\bm{t}|\geq|\bm{q}|/2}|\bm t|^{-2\gamma}d\bm{t}\right)^{1/\gamma},$$ where one uses $g_{-2}(\bm{t})\leq4\|v\| \cdot|\bm{t}|^{-2}$.
By using polar coordinates in $d$-dimensional space, it can be observed that $$\left(\int_{|\bm{t}|\geq|\bm{q}|/2}|\bm t|^{-2\gamma}d\bm{t}\right)^{1/\gamma}= \left(dV_d\int_{|\bm{t}|\geq|\bm{q}|/2}|\bm t|^{-2\gamma+d-1}d|\bm{t}|\right)^{1/\gamma}=(dV_d)^{1/\gamma}|\bm q/2|^{-2+d/\gamma},$$ where $V_d$ is the volume of the $d$-dimensional unit ball. Since $1/\beta+1/d>1$, one has $-2+d/\gamma<-1$, and hence $$I_1\leq 4\|v\|\cdot(dV_d)^{1/\gamma}\|\mathcal{F}\{v\}\|_{\beta}\cdot|\bm q/2|^{-2+d/\gamma}\leq 2\|v\|\cdot(dV_d)^{1/\gamma}\|\mathcal{F}\{v\}\|_{\beta}\cdot|\bm q|^{-1}.$$ By Theorem [**Theorem** 1](#foco){reference-type="ref" reference="foco"} and the fact that $V\in C_p^{d+\alpha-2}([0,T]^n)$, there exists a constant $C_{-r}$ such that $$|\mathcal{F}\{v\}(\bm{q}-\bm{t})|\leq C_{-r}|\bm q-\bm t|^{-r},$$ where $r\leq d+\alpha-2$. Then, using $g_{-2}(\bm t)\leq 1$ and $|\bm q-\bm t|\geq|\bm q|/2$ on the domain of integration, the integral $I_2$ can be bounded by $$I_2\leq C_{-r}\int_{|\bm{t}|<|\bm{q}|/2}|\bm q-\bm t|^{-r}d\bm{t}\leq 2^{r-d}C_{-r}V_d|\bm q|^{-r+d}.$$ Taking $r=d+1$, we obtain $$\left|E-|\bm{q}|^2/2\right||u_{\bm{q}}|\leq S_{-3}|\bm q|^{-1},$$ where $S_{-3}=2\|v\|(dV_d)^{1/\gamma}\cdot\|\mathcal{F}\{v\}\|_{\beta}+2C_{-d-1}V_d$. Combining this with $|E-|\bm{q}|^2/2|>|\bm{q}|^2/4$, one has $$|u_{\bm{q}}|\leq 4S_{-3}|\bm q|^{-3}.$$ Iterating the same procedure, we obtain that for any $\alpha\geq3$ there exists a constant $S_{-\alpha}$ depending on $\|v\|$, $\|\mathcal{F}\{v\}\|_{\beta}$ and $d$ such that $$|u_{\bm{q}}|\leq4S_{-\alpha}|\bm q|^{-\alpha}.$$ This ends the proof. ◻ We employ a 1D quasiperiodic problem of [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} as an example to validate the theoretical results of Theorem [**Theorem** 2](#thm6){reference-type="ref" reference="thm6"}. Let $v(z)=E_0/\big(1+(\cos(z)+\cos(\sqrt{5}z))^2 \big)$ and the projection matrix $\bm{P}=[\sqrt{5}\ \ 1]$.
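For this example the basis reduction can be counted directly. The sketch below tallies the reduced index set of Eq. [\[omega2\]](#omega2){reference-type="eqref" reference="omega2"} for $\bm{P}=[\sqrt{5}\ \ 1]$ with $N=180$ and $D=30$; the count is only illustrative, since the implementation's mode convention may differ slightly.

```python
import numpy as np

# Count |Omega| vs |Omega_R| for the 1D example: P = [sqrt(5), 1], so
# P k = sqrt(5) k1 + k2 for k = (k1, k2) in Z^2.  Illustrative count only.
N, D = 180, 30
ks = np.arange(-N, N + 1)
Pk = np.sqrt(5.0) * ks[:, None] + ks[None, :]        # all values of P k
n_full = ks.size ** 2                                # |Omega| used by the PM
n_reduced = int(np.count_nonzero(np.abs(Pk) <= D))   # |Omega_R| used by the RPM
reduction = 1.0 - n_reduced / n_full                 # fraction of modes removed
```

Under this convention, well over $80\%$ of the $\|\bm k\|_\infty\leq N$ modes are discarded at $D=30$.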
The PM is employed to solve this problem, and the generalized Fourier coefficients of the 1st and 50th eigenfunctions are depicted in the raised frequency domain. As shown in Figure [1](#fig3-1){reference-type="ref" reference="fig3-1"} (a), whether for the first eigenfunction (the red line) or the 50th eigenfunction (the blue line), the generalized Fourier coefficients decay exponentially. We then adopt the RPM to solve the same problem. In order to quantify the truncation error between the PM and the RPM, we define $$\label{epsd} \hbox{Err}(D)=\sum_{|\boldsymbol{k}|>D,\boldsymbol{k}\in \Omega}|U_{\boldsymbol{k}}|^2.$$ Figure [1](#fig3-1){reference-type="ref" reference="fig3-1"} (b) depicts the exponential decay of $\hbox{Err}(D)$ with respect to $D$. For $D\approx 30$, the method reaches machine precision and the reduction error becomes negligible, while the DOF is reduced by more than $80\%$. ![The generalized Fourier coefficient modulus of eigenfunctions for a 1D quasiperiodic potential. (a) The generalized Fourier coefficient modulus of eigenfunctions as a function of $\boldsymbol{q}$. (b) The error $\hbox{Err}(D)$ as a function of $D$. In both panels, $E_0=1$ and $N=180$.](3-1-1-2.pdf){#fig3-1 width="95%"} The results in Fig. [1](#fig3-1){reference-type="ref" reference="fig3-1"} demonstrate that the RPM can effectively reduce the DOF without sacrificing the accuracy of the approximation. For an efficient implementation of the RPM, one first derives the index set $\Omega_R$ from the prescribed parameters $N$ and $D$, as well as the projection matrix $\bm P$, for the $d$-dimensional quasiperiodic Schrödinger eigenvalue problem. Then, with the Fourier expansion in Eq. [\[rpm\]](#rpm){reference-type="eqref" reference="rpm"} over the reduced basis set, the matrix-vector product function $\bm H\bm f$ is implemented straightforwardly.
When computing $\hbox{FFT}(V(\bm x)\cdot\hbox{IFFT}(\bm f))$, it is advisable to zero-fill the coefficients of $\bm f$ removed by the RPM, thereby enabling the use of the FFT. One then takes a random starting vector $\bm b\in\mathbb{R}^{1\times|\Omega_R|}$ and generates the Krylov subspace $K_M=\hbox{span}\{\bm b,\bm {H}\bm b,\cdots,\bm{H}^{M-1}\bm b\}$. The orthonormal basis $\bm Q_M=(\bm q_1,\bm q_2,\cdots,\bm q_M)$ of $K_M$ can be generated by the implicitly restarted Arnoldi method [@lehoucq2001implicitly]. One can determine the Hessenberg matrix $\bm H_M=\bm Q_M^{\intercal}\bm{H}\bm{Q}_M=\bm Q_M^{\intercal}(\bm{H}\bm q_1,\bm{H}\bm q_2,\cdots,\bm{H}\bm q_M)$ and compute its eigenpairs $\{(E_m,u_m)\}_{m=1}^M$ by the QR algorithm. Detailed procedures of the RPM are summarized in Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}. Compute the potential $V$ in the higher-dimensional space by using $\bm P$ Determine the index sets $\Omega_R$ and $\Omega$ of the basis space by Eqs. [\[omega2\]](#omega2){reference-type="eqref" reference="omega2"} and [\[pm\]](#pm){reference-type="eqref" reference="pm"} and take a random starting vector $\bm b\in\mathbb{R}^{1\times|\Omega_R|}$ Expand $\bm b$ to $\bm b'\in \mathbb{R}^{1\times|\Omega|}$ by zero-filling, compute $\hbox{FFT}(V(\bm x)\cdot\hbox{IFFT}(\bm b'))$ and remove the elements at the zero-filled positions Generate the matrix-vector product $\bm{H}\bm b$ by Eq. [\[mvp\]](#mvp){reference-type="eqref" reference="mvp"} Repeat steps $3-4$ to calculate $\bm {H}^2\bm b,\cdots,\bm{H}^{M-1}\bm b$ and generate the Krylov subspace $K_M$ and its orthonormal basis $\bm Q_M$ Determine the Hessenberg matrix $\bm H_M$ and compute its eigenpairs $\{(E_m,u_m)\}_{m=1}^M$ Compute $U_N(\bm x)$ from $u_m$ and project the eigenfunctions $U_N(\bm x)$ back into the $d$-dimensional space By the RPM, the DOF and the complexity of the eigenvalue solver are significantly reduced.
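The steps above can be condensed into a few lines. The sketch below is a minimal matrix-free illustration, not the paper's Matlab implementation: it uses SciPy's implicitly restarted ARPACK driver `eigsh` on a `LinearOperator`, with toy sizes $N$, $D$ and a toy parent potential of our own choosing ($d=1$, $n=2$).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

N, D = 8, 6                                    # toy truncation parameters
M = 2 * N + 1                                  # modes per parent dimension
P = np.array([[np.sqrt(5.0), 1.0]])            # 1 x 2 projection matrix

k = np.fft.fftfreq(M, 1.0 / M)                 # integer frequencies, FFT order
K1, K2 = np.meshgrid(k, k, indexing="ij")
Pk = P[0, 0] * K1 + P[0, 1] * K2
kinetic = 0.5 * Pk**2                          # diagonal of D-hat: (1/2)||Pk||^2
mask = (np.abs(Pk) <= D).ravel()               # reduced index set Omega_R
idx = np.flatnonzero(mask)

x = 2 * np.pi * np.arange(M) / M               # collocation grid on [0, 2*pi)^2
X1, X2 = np.meshgrid(x, x, indexing="ij")
V = 1.0 / (1.0 + (np.cos(X1) + np.cos(X2)) ** 2)   # toy parent potential

def matvec(f):
    # Zero-fill the reduced coefficients into the full grid (step 3) ...
    F = np.zeros(M * M, dtype=complex)
    F[idx] = f
    F = F.reshape(M, M)
    # ... apply H f = D-hat f + FFT(V . IFFT(f)) as in Eq. (mvp), and
    # remove the entries at the zero-filled positions again.
    HF = kinetic * F + np.fft.fftn(V * np.fft.ifftn(F))
    return HF.ravel()[idx]

H = LinearOperator((idx.size, idx.size), matvec=matvec, dtype=complex)
E, _ = eigsh(H, k=3, which="SA")               # a few smallest eigenpairs
```

Restricting the operator to $\Omega_R$ (the `mask`) while keeping the FFT on the full grid mirrors the zero-filling of step 3, so the dense matrix is never formed.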
Specifically, the DOF for approximating a $d$-dimensional quasiperiodic system with a $d\times n$ projection matrix is reduced from $O(N^{n})$ for the PM to $O(N^{n-d}D^{d})$ for the RPM. Correspondingly, the computational complexity of the proposed algorithm for solving the first $k$ eigenpairs using the Krylov subspace method decreases from $O(kN^{2n})$ to $O(kN^{2(n-d)}D^{2d})$ [@lehoucq1998arpack; @wright2001large]. Due to the fast decay of the generalized Fourier coefficients along $\bm{Pk}$, the RPM can obtain reliable numerical results with much fewer DOF, thereby mitigating the curse of dimensionality, especially for high-dimensional problems. **Remark 1**. *The computational complexity of computing $\hbox{FFT}(V(\bm x)\cdot\hbox{IFFT}(\bm f))$ is $O(N^n\log N)$, which is much less than that of solving the eigenpairs, $O(kN^{2(n-d)}D^{2d})$. Therefore, the FFT and IFFT are not the primary contributors to the complexity of the RPM.* Moreover, some operators encountered in practical applications exhibit eigenvalues that are extremely sensitive to perturbations, which may lead to spurious eigenmodes in numerical approximation [@reichel1992eigenvalues]. The RPM can be combined with classical techniques such as the spectral indicator method [@cakoni2010existence; @cakoni2010inverse] to deal with challenging quasiperiodic eigenvalue problems with high-contrast quasiperiodic potentials and to suppress spurious eigenmodes. The details of the spectral indicator method are given in Appendix [5](#app: A){reference-type="ref" reference="app: A"}.

## Error estimate

In what follows, we give a rigorous error estimate of the RPM for the quasiperiodic eigenvalue problem Eq. [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"}. Let $K: X\rightarrow X$ denote a compact operator. The space $X$ is equipped with the inner product $(\cdot,\cdot)$ and its associated norm $||\cdot||$. $S_N$ is a subspace of $X$, and the approximate operator $K_N$ maps $S_N$ to $S_N$.
We define the two errors $$\epsilon_N=\epsilon_N(E)=\sup_{u\in M(E)}\inf_{\chi\in S_N}||u-\chi||,\ \ \ \ \epsilon_N^*=\epsilon_N^*(E)=\sup_{v\in M^*(E)}\inf_{\chi\in S_N}||v-\chi||,$$ where $M(E)$ is the set of all normalized eigenvectors of $K$ corresponding to the eigenvalue $E$, whose algebraic multiplicity is $m$. Here $K^*$ is the adjoint operator of $K$ and $M^*(E)$ is the set of all normalized eigenvectors of $K^*$ corresponding to the approximating eigenvalues $\mathcal{E}_j,j=1,\dots,m$. For any $s\in\mathbb{N}^*$, let the Sobolev space of order $s$ on $T$ be $H^s(T)=\{F\in L^2(T):||F||_s<\infty\}$, where $T$ is the period of the $n$-dimensional periodic function $F$. As usual, the norm and the semi-norm on this space read $$||F||_s=\left(\sum_{\bm{k}\in\mathbb{Z}^n}(1+|\bm{k}|^{2s})|F_{\bm{k}}|^2\right)^{1/2},\ \ \ \ |F|_s=\left(\sum_{\bm{k}\in\mathbb{Z}^n}|\bm{k}|^{2s}|F_{\bm{k}}|^2\right)^{1/2}.$$ We further introduce the operator $\varphi_{\bm P}$, which maps a quasiperiodic function $f$ to its parent function $F$, i.e. $\varphi_{\bm {P}}f=F$. In fact, the mapping $\varphi_{\bm {P}}$ is an isomorphism [@jiang2022numerical]. In addition, the operators $\mathcal{P}$ and $\mathcal{Q}$ acting on a periodic function $F$ denote the partial sums $$\mathcal{P}F=\sum_{\bm{k}\in\Omega}F_{\bm{k}}e^{\mathrm{i}\langle\bm{k},\bm{x}\rangle},\ \ \ \ \mathcal{Q}F=\sum_{\bm{k}\in\Omega_R}F_{\bm{k}}e^{\mathrm{i}\langle\bm{k},\bm{x}\rangle},$$ where $\Omega$ and $\Omega_R$ are defined in Eqs. [\[pm\]](#pm){reference-type="eqref" reference="pm"} and [\[omega2\]](#omega2){reference-type="eqref" reference="omega2"}. In order to obtain the main result, one needs the following lemmas (see [@babuvska1991eigenvalue] for Lemma [Lemma 4](#corethm){reference-type="ref" reference="corethm"}, and [@shen2011spectral] for Lemma [Lemma 5](#lemma2){reference-type="ref" reference="lemma2"}). **Lemma 4**.
*There exists a constant $C_0$ such that $$\label{eq17} |E-\mathcal{E}_j(N)|\leq C_0(\epsilon_N\epsilon_N^*)^{1/\alpha},\quad j=1,2,\dots,m.$$ Here, $E$ represents the eigenvalue of the operator $K$, $\mathcal{E}_j(N)$ denotes the corresponding eigenvalues of the approximate operator $K_N$, and $\alpha$ is the smallest nonnegative integer such that the null-spaces of $(E-K)^{\alpha}$ and $(E-K)^{\alpha+1}$ are equal.* **Lemma 5**. *For any periodic function $F\in H^s(T),s\in\mathbb{N}^*$, and $0\leq\mu\leq s$, the following estimate for $\mathcal{P}F$ holds: $$||\mathcal{P}F-F||_\mu\leq N^{\mu-s}|F|_{s}.$$ In addition, for any periodic function $F\in H^\nu(T)$ with $\nu>1/2$, there exists a constant $C'$ depending on $\nu$ such that $$||\mathcal{P}F-F||_\infty\leq C'N^{1/2-\nu}|F|_\nu.$$* Theorem [**Theorem** 3](#corecorethm){reference-type="ref" reference="corecorethm"} gives upper bounds, in different norms, on the error of approximating the eigenspace of Eq. [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} by the operator $\mathcal{Q}$. ****Theorem** 3**. *Suppose that $u$ is a solution of the quasiperiodic Schrödinger eigenvalue problem Eq. [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"}. Let $U$ be the parent function of $u$.
If $U\in H^s(T)$ with $s\in\mathbb{N}^*$ and $0\leq \mu\leq s$, there exists a constant $C$ depending on $\bm{P}$ such that $$\label{andythm1} ||\mathcal{Q}(\varphi_{\bm P}u)-\varphi_{\bm P}u||_\mu\leq (N^{\mu-s}+CD^{\mu-s})|\varphi_{\bm P}u|_s.$$ If $U\in H^\nu(T)$ with $\nu>1/2$, there exist constants $C_1$ and $C_2$ depending on $\bm{P}$ and $\nu$ such that $$\label{andythm2} ||\mathcal{Q}(\varphi_{\bm P}u)-\varphi_{\bm P}u||_\infty\leq (C_1N^{1/2-\nu}+C_2D^{1/2-\nu})|\varphi_{\bm P}u|_\nu.$$* *Proof.* Since $\varphi_{\bm P}$ is an isomorphism between the quasiperiodic function space and the higher-dimensional periodic function space [@jiang2022numerical], for a quasiperiodic function $u$ with parent function $U$ one has $\varphi_{\bm P}u=U$, and Lemma [Lemma 5](#lemma2){reference-type="ref" reference="lemma2"} applies to $U$. If $U\in H^s(T)$ with $s\in\mathbb{N}^*$ and $0\leq\mu\leq s$, one has $$\label{lemma3} ||\mathcal{P}U-U||_\mu\leq N^{\mu-s}|U|_{s}.$$ In addition, if $U\in H^\nu(T)$ with $\nu>1/2$, there exists a constant $C'$ depending on $\nu$ satisfying $$\label{lemma6} ||\mathcal{P}U-U||_\infty\leq C'N^{1/2-\nu}|U|_\nu.$$ By the direct decomposition $\mathcal{Q}U-U=(\mathcal{Q}U-\mathcal{P}U)+(\mathcal{P}U-U)$ and the triangle inequality, one has $$\label{andy1} ||\mathcal{Q}(\varphi_{\bm P}u)-\varphi_{\bm P}u||_\mu\leq||\mathcal{Q}U-\mathcal{P}U||_\mu+||\mathcal{P}U-U||_\mu,$$ $$\label{andy2} ||\mathcal{Q}(\varphi_{\bm P}u)-\varphi_{\bm P}u||_\infty\leq||\mathcal{Q}U-\mathcal{P}U||_\infty+||\mathcal{P}U-U||_\infty.$$ The second terms in Eqs. [\[andy1\]](#andy1){reference-type="eqref" reference="andy1"} and [\[andy2\]](#andy2){reference-type="eqref" reference="andy2"} have been bounded in Eq. [\[lemma3\]](#lemma3){reference-type="eqref" reference="lemma3"} and Eq. [\[lemma6\]](#lemma6){reference-type="eqref" reference="lemma6"}, so only the first terms remain to be estimated. Consider first the $\mu$-norm case.
For any $\bm{k}\in\Omega\setminus\Omega_R$, $||\bm{Pk}||_2>D$ holds, so that $$||\mathcal{Q}U-\mathcal{P}U||_\mu^2=\sum_{\bm{k}\in\Omega\setminus\Omega_R}(1+|\bm{k}|^{2\mu})|U_{\bm{k}}|^2\leq D^{-2s+2\mu}\sum_{\bm{k}\in\Omega\setminus\Omega_R}(1+|\bm{k}|^{2\mu})|U_{\bm{k}}|^2|\bm{Pk}|^{2s-2\mu},$$ where $U_{\bm k}$ is the Fourier coefficient of $U$ with frequency $\bm k$. Then one has $$\begin{split} ||\mathcal{Q}U-\mathcal{P}U||_\mu^2 &\leq D^{-2s+2\mu}|\bm{P}|^{2s-2\mu}\sum_{\bm{k}\in\Omega\setminus\Omega_R}(1+|\bm{k}|^{2\mu})|U_{\bm{k}}|^2|\bm{k}|^{2s-2\mu}\\ &\lesssim D^{-2s+2\mu}|\bm{P}|^{2s-2\mu}|U|_s^2, \end{split}$$ where $A\lesssim B$ means that $A\leq cB$ for some constant $c$ independent of $N$ and $D$. Hence, there exists a constant $C_3$ depending on $\bm{P}$ such that $$||\mathcal{Q}(\varphi_{\bm P}u)-\mathcal{P}(\varphi_{\bm P}u)||_\mu\leq C_3D^{\mu-s}|\varphi_{\bm P}u|_s,$$ which directly leads to Eq. [\[andythm1\]](#andythm1){reference-type="eqref" reference="andythm1"} when combined with Eq. [\[andy1\]](#andy1){reference-type="eqref" reference="andy1"}. For the error estimate under the infinity norm, one can use the Cauchy-Schwarz inequality to obtain, with $\bm{q}=\bm{Pk}$, $$\begin{split} ||\mathcal{Q}U-\mathcal{P}U||_\infty&\leq \sum_{\bm{k}\in\Omega\setminus\Omega_R}|U_{\bm{k}}|\leq \left(\sum_{\bm{k}\in\Omega\setminus\Omega_R}|\bm{k}|^{-2\nu}\right)^{1/2}\left(\sum_{\bm{k}\in\Omega\setminus\Omega_R}|\bm{k}|^{2\nu}|U_{\bm{k}}|^2\right)^{1/2}\\ &\leq \left(\sum_{\bm{k}\in\Omega\setminus\Omega_R}|\bm{P}|^{2\nu}|\bm{q}|^{-2\nu}\right)^{1/2}|U|_\nu.
\end{split}$$ For $\nu>1/2$, $$\sum_{\bm{k}\in\Omega\setminus\Omega_R}|\bm{q}|^{-2\nu}\leq \int_D^\infty x^{-2\nu}dx\leq \dfrac{1}{2\nu-1}D^{1-2\nu}.$$ Therefore, $$\begin{split} ||\mathcal{Q}(\varphi_{\bm P}u)-\mathcal{P}(\varphi_{\bm P}u)||_\infty &\leq \left(\int_D^\infty x^{-2\nu}dx\right)^{1/2}|\bm{P}|^{\nu}|\varphi_{\bm P}u|_\nu\leq \sqrt{\dfrac{1}{2\nu-1}}D^{1/2-\nu}|\bm{P}|^{\nu}|\varphi_{\bm P}u|_\nu\\ &:=C_4D^{1/2-\nu}|\varphi_{\bm P}u|_\nu, \end{split}$$ where $C_4$ is a constant depending on $\nu$ and $|\bm{P}|$. This ends the proof. ◻ Combining Lemma [Lemma 4](#corethm){reference-type="ref" reference="corethm"} and Theorem [**Theorem** 3](#corecorethm){reference-type="ref" reference="corecorethm"} yields Corollary [**Corollary** 1](#coroo){reference-type="ref" reference="coroo"}, the error estimate of the RPM with basis set $\Omega_R$. ****Corollary** 1**. *Let $E$ represent the eigenvalue of the Schrödinger operator, and let $\mathcal{E}_j(N,D)$ denote the corresponding eigenvalues computed by the RPM. The error of $E$ is bounded under the $\mu$-norm by $$|E-\mathcal{E}_j(N,D)|\leq C_0[(N^{\mu-s}+CD^{\mu-s})|\varphi_{\bm P}u|_s]^{2/\alpha},$$ and under the infinity norm by $$|E-\mathcal{E}_j(N,D)|\leq C_0[(N^{1/2-\nu}+CD^{1/2-\nu})|\varphi_{\bm P}u|_\nu]^{2/\alpha},$$ where $C$ and $C_0$ are constants.* **Remark 2**. *In Corollary [**Corollary** 1](#coroo){reference-type="ref" reference="coroo"}, it can be observed that the eigenvalue error of the RPM exhibits the same decay order with respect to both $N$ and $D$. This implies that, if the potential function $v$ has sufficiently good regularity, the error of the RPM can achieve exponential decay with respect to both $N$ and $D$.
Consequently, in practical applications, a smaller value of $D$ can still ensure the accuracy of the numerical results.* Corollary [**Corollary** 1](#coroo){reference-type="ref" reference="coroo"} establishes a rigorous theoretical foundation for solving the quasiperiodic Schrödinger eigenvalue problem by the RPM. Although increasing the dimension significantly enlarges the DOF, the distinctive properties of the Fourier coefficients enable the resolution of larger-scale problems with a noticeably reduced number of DOF by the RPM.

# Numerical examples {#s4}

We present numerical results to demonstrate the effectiveness of the RPM. Specifically, we apply the algorithm to quasiperiodic Schrödinger eigenvalue problems in 1D and 2D spaces and assess the quality of the resulting eigenvalues and eigenfunctions, as well as the CPU time. A matrix-free preconditioned Krylov subspace method [@liesen2013krylov; @kelley1995iterative; @saad2003iterative] is employed, which requires only matrix-vector products at each iteration. The implementation of this method is facilitated by the function eigs in Matlab [@stewart2002krylov]. In these examples, we compare the RPM with the PM, which demonstrates the accuracy and efficiency of the RPM. The calculations presented in this section are executed using Matlab code on an Intel Core processor with a clock rate of 2.50 GHz and 32 GB of memory.

## 1D example

We first examine the performance of the RPM in the 1D case. To be specific, we take the potential function in Eq. [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} to be $$v_1(z)=\dfrac{E_0}{\big[\cos\big(2\cos(\theta/2)z\big)+\cos\big(2\sin(\theta/2)z\big)\big]^2+1},$$ with $\theta=\pi/6$. The projection matrix is $\bm{P}=[2\cos(\theta/2),\ \ 2\sin(\theta/2)]$. We take $E_0=1$.
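To make the parent-function relation concrete for this example, reading off $v_1$ suggests the $2\pi$-periodic parent $V(x_1,x_2)=E_0/\big([\cos x_1+\cos x_2]^2+1\big)$ (our reading; the text does not write $V$ out explicitly), which can be checked against $v_1(z)=V(\bm{P}^\intercal z)$ numerically:

```python
import numpy as np

# Check v1(z) = V(P^T z) for the 1D example, theta = pi/6, E0 = 1.
E0, theta = 1.0, np.pi / 6
P = np.array([2 * np.cos(theta / 2), 2 * np.sin(theta / 2)])   # 1 x 2

def v1(z):                     # quasiperiodic potential from the text
    return E0 / ((np.cos(P[0] * z) + np.cos(P[1] * z)) ** 2 + 1)

def V(x1, x2):                 # assumed 2*pi-periodic parent function
    return E0 / ((np.cos(x1) + np.cos(x2)) ** 2 + 1)

z = np.linspace(-5.0, 5.0, 101)   # v1 and V(P^T .) agree on any sample of z
```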
We fix the truncation parameter at $N=180$ and depict in Figure [2](#fig3-1-2){reference-type="ref" reference="fig3-1-2"} the DOF and condition numbers of the RPM against the truncation parameter $D$. It can be observed clearly that both the DOF and the condition number decrease rapidly as $D$ decreases. For comparison, the DOF of the original PM is $N^2=32400$, much larger than that of the RPM for small $D$. Thus, a small $D$ not only leads to a matrix eigenvalue system of much smaller size, but also reduces the number of iterations needed to converge. These observations highlight the potential of using the RPM to solve high-dimensional quasiperiodic eigenvalue problems. ![The DOF (a) and the condition number (b) as a function of $D$ using the RPM in one dimension with $N=180$. Correspondingly, the DOF of the PM is $N^2=32400$.](3-1-2.pdf){#fig3-1-2 width="95%"} ![Maximum error of eigenvalues $\varepsilon$ and the $L^2$-error of the first eigenfunction $\delta$. (a,b): Error as a function of $N$ for different $D$. (c,d): Error as a function of $D$ for different $N$. ](3-3newnew.pdf){#fig3-3-1 width="95%"} To demonstrate the exceptional accuracy and rapid convergence of the RPM approach, we present the error plots in Figure [3](#fig3-3-1){reference-type="ref" reference="fig3-3-1"} for the potential function with $E_0=1$. The "exact\" eigenvalues and eigenfunctions are determined using the numerical results obtained from the PM with $N=300$. The error of the eigenvalues, $\varepsilon$, is measured by the maximum error of the first five eigenvalues. The error of the first normalized eigenfunction is measured in the $L^2$-norm on the interval $[0,1]$ and denoted by $\delta$. Fig. [3](#fig3-3-1){reference-type="ref" reference="fig3-3-1"} illustrates the convergence with increasing $N$ for $D=15, 20$ and $25$, and the convergence with increasing $D$ for $N=20$ and $30$, characterized by both $\varepsilon$ and $\delta$.
Panels (a,b) illustrate that both $\varepsilon$ and $\delta$ exhibit an exponential decrease as $N$ increases, eventually attaining a fixed value. It is notable that the magnitude of this fixed value diminishes as $D$ increases. For relatively small values of $N$ and $D$ (e.g. $N=30$ and $D=20$), $\varepsilon$ is already smaller than $10^{-10}$ and $\delta$ is smaller than $10^{-8}$, demonstrating the high accuracy of the RPM. Panels (c,d) exhibit exponential decays with increasing $D$, in agreement with the error analysis. When $D$ is small ($D\leq15$), the error curves of $N=20$ and $30$ almost overlap, indicating that the error mainly comes from the basis reduction. When $D$ is large, the error curves of the two cases differ significantly, indicating that the error is mainly caused by the PM part. Overall, one can observe that the high accuracy of the results is retained in spite of a significant reduction in the number of basis functions. In Table [\[tb1\]](#tb1){reference-type="ref" reference="tb1"}, we display the DOF of the RPM and the CPU time for the 1D system with $E_0=1$ for $N=50, 100$ and $150$, with $D$ increasing from $10$ to $50$. The DOF increases linearly with $D$. Theoretically, the RPM for 1D systems has complexity $O(D^2)$ for given $N$, and $O(N^2)$ for given $D$. Correspondingly, the complexity of the original PM is $O(N^4)$. Moreover, the condition number of the RPM is much smaller than that of the PM, as shown in Fig. [2](#fig3-1-2){reference-type="ref" reference="fig3-1-2"}. The results of the CPU time validate the complexity analysis. We have shown that a small $D$ can achieve high accuracy. At $N=50$, setting $D=20$ yields an error as small as $10^{-10}$. In this case, the CPU time for the RPM is $1.46$ seconds, $11.6$ times faster than that of the original PM. The reduction for large $N$ is more significant. For $N=150$, the speedup with $D=20$ reaches $317.0$ times.
Correspondingly, for $N=50$, $100$ and $150$, the DOF of the original PM is $2.4$, $4.8$ and $7.2$ times that of the RPM with $D=20$, respectively. These results clearly demonstrate the attractive performance of the RPM.

| $D$ | DOF ($N=50$) | CPU time (s) | DOF ($N=100$) | CPU time (s) | DOF ($N=150$) | CPU time (s) |
|-----|--------------|--------------|---------------|--------------|---------------|--------------|
| 50  | 2373 | 9.382 | 5177 | 35.500 | 7766 | 123.172 |
| 45  | 2236 | 8.389 | 4659 | 21.791 | 6990 | 85.054 |
| 40  | 2048 | 5.690 | 4141 | 17.595 | 6210 | 61.924 |
| 35  | 1811 | 4.433 | 3623 | 11.190 | 5434 | 42.748 |
| 30  | 1552 | 3.110 | 3106 | 9.733 | 4658 | 30.751 |
| 25  | 1295 | 2.266 | 2587 | 5.404 | 3881 | 18.599 |
| 20  | 1034 | 1.460 | 2069 | 3.529 | 3106 | 10.029 |
| 15  | 777 | 1.000 | 1551 | 2.456 | 2328 | 6.542 |
| 10  | 517 | 0.452 | 1035 | 1.050 | 1552 | 2.800 |
| PM  | 2500 | 17.010 | 10000 | 320.604 | 22500 | 3179.439 |

Figure [4](#fig3-3-3){reference-type="ref" reference="fig3-3-3"} depicts the error of the normalized first eigenfunction on the interval $z\in[0, 1]$. We take $D=25$ for $N=20, 40$ and $60$, and calculate the absolute error for $E_0=1, 2, 4$ and $8$, where the "exact\" eigenfunctions are generated using the numerical results of the PM with $N=300$. One can observe that the error converges with increasing $N$. As $E_0$ increases, the error of the RPM increases. This is because $E_0$ describes the optical response strength in the photorefractive crystal [@wang2020localization]. For a large $E_0$, the eigenfunction can become localized, leading to a nearly singular profile. The results in panels (c,d) illustrate that the RPM retains high accuracy with a small $D$ even at $N=60$, demonstrating that the RPM is efficient for simulating challenging problems such as the localization-delocalization transition in photonic moiré lattices. ![Error of the normalized first eigenfunctions obtained by the RPM in the interval $[0, 1]$ for different $E_0$.
In each panel, $D=25$ and three different $N$ are calculated.](9.pdf){#fig3-3-3 width="95%"}

## 2D example

Consider a 2D example with the potential function $$\label{eqp} v_2(z_1,z_2)=\dfrac{E_0}{(\cos z_1\cos z_2+\cos(\sqrt{5}z_1)\cos(\sqrt{5}z_2))^2+1}.$$ The potential Eq. [\[eqp\]](#eqp){reference-type="eqref" reference="eqp"} possesses the same structure as 2D moiré lattices [@wang2020localization; @gao2023pythagoras], making it applicable to simulations of photonic lattices. Correspondingly, the projection matrix is given by $$\bm{P}=\begin{bmatrix} 1 & 0 & \sqrt{5} & 0\\ 0 & 1 & 0 & \sqrt{5} \end{bmatrix}.$$ ![The generalized Fourier coefficients (modulus) of eigenfunctions in the $\bm{q}$ space for $E_0=1$ and $N=30$. Results are presented as base-$10$ logarithms. ](3-2.pdf){#fig3-2 width="95%"} We first calculate the generalized Fourier coefficients of the eigenfunctions of the system. We set $E_0=1$ and $N=30$. In Figure [5](#fig3-2){reference-type="ref" reference="fig3-2"}, we show the modulus of the coefficients of the 1st and 4th eigenfunctions in the $\bm{q}$ space, calculated by the PM. The data are presented as base-$10$ logarithms. One can observe the exponential decay of the generalized Fourier coefficients for both eigenfunctions. In each panel, there is only one peak, at the origin of the $\bm{q}$ space, and far from the origin the contributions of the Fourier modes are insignificant. Table [\[tb3\]](#tb3){reference-type="ref" reference="tb3"} presents the error, measured by $\hbox{Err}(D)$ in Eq. [\[epsd\]](#epsd){reference-type="eqref" reference="epsd"}, of the first and fourth eigenfunctions as a function of the truncation constant $D$ for $N=30$. Again, one can observe rapid decays with respect to $D$ in both cases. These results are similar to the 1D case and demonstrate that the approximation in the reduced space can be of high accuracy for the eigenproblem.
| $D$ | 1st | 4th | $D$ | 1st | 4th |
|-----|-----|-----|-----|-----|-----|
| 5  | 1.54E-05 | 2.19E-05 | 35 | 9.93E-15 | 1.58E-13 |
| 10 | 1.19E-07 | 1.16E-07 | 40 | 6.36E-16 | 7.36E-14 |
| 15 | 2.66E-09 | 2.50E-09 | 45 | 4.76E-17 | 3.40E-14 |
| 20 | 6.40E-11 | 7.30E-11 | 50 | 3.79E-18 | 1.52E-14 |
| 25 | 2.91E-12 | 3.20E-12 | 55 | 2.88E-19 | 2.68E-15 |
| 30 | 1.61E-13 | 3.71E-13 | 60 | 2.23E-20 | 1.73E-16 |

Figure [6](#fig3-2-2){reference-type="ref" reference="fig3-2-2"} presents the DOF and condition number of the RPM as functions of $D$ with the same setup: $E_0=1$ and $N=30$. Again, both the DOF and the condition number increase rapidly with increasing $D$. Although $N=30$ is not large, the DOF of the entire system in the raised 4D space, $N^4=810000$, is very large. From the results, we can see that the use of a small $D$ can significantly reduce the computational complexity, lowering not only the size of the matrix eigenvalue problem but also the number of iterations in the implicitly restarted Arnoldi solver. ![The DOF (a) and the change of the condition number (b) with respect to $D$ of the RPM in 2D. $N=30$ and the maximum DOF is $N^4=810000$.](3-2-2.pdf){#fig3-2-2 width="95%"} We then study the accuracy and convergence of the RPM with $E_0=1$, and the results are presented in Figure [7](#fig4-3-1){reference-type="ref" reference="fig4-3-1"}. In the calculations, the "exact\" eigenvalues and eigenfunctions are obtained from the numerical results of the PM with $N=32$. Here $\varepsilon$ represents the absolute error of the first eigenvalue, and $\delta$ represents the error of the first eigenfunction in the $L^2$ norm on $[0,1]^2$. Panels (a,b) illustrate the convergence of the numerical solution with increasing $N$ for truncation coefficients $D=10,20$ and $30$. Both $\varepsilon$ and $\delta$ decay exponentially with increasing $N$, eventually converging to a fixed value which depends on $D$.
Similar to the 1D case, small values of $D$ yield highly accurate results. For $D=10$ (with a slightly larger $N$), the RPM can achieve an accuracy at the level of $10^{-5}$ in both the eigenvalue and eigenfunction calculations. Panels (c,d) display the convergence with increasing $D$ for $N=20$ and $28$. One can observe the exponential decay with $D$ at the beginning, as expected from the error analysis. For small $D$, the error curves for $N=20$ and $28$ almost overlap, suggesting that the reduction of the basis space dominates the error. For moderate $D$, the two curves in both panels differ significantly, indicating that the error at this point mainly comes from the PM part. Overall, the accuracy with small $D$ (e.g., $D=10$) is good enough to provide accurate solutions. These findings demonstrate that high accuracy can be maintained despite a significant reduction of the basis functions. ![ Errors of the first eigenvalue and eigenfunction. (a,b): Error as a function of $N$ for different $D$. (c,d): Error as a function of $D$ for different $N$.](4-3.pdf){#fig4-3-1 width="95%"} We next study the DOF and CPU time required by the RPM for the 2D system with $E_0=1$ for $N=20, 24$ and $28$, with $D$ increasing from $10$ to $30$; the results are summarized in Table [\[tb2\]](#tb2){reference-type="ref" reference="tb2"}. The DOF decreases quadratically with decreasing $D$. Theoretically, the RPM for 2D systems has complexity $O(D^4)$ for given $N$, and $O(N^4)$ for given $D$, while the complexity of the original PM is $O(N^8)$. The numerical results of Table [\[tb2\]](#tb2){reference-type="ref" reference="tb2"} are in agreement with this theoretical analysis. It can also be found that a small $D$ is able to reach high accuracy. For $N=20$, the use of $D=10$ achieves an error level of $10^{-5}$.
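The quadratic dependence of the DOF on $D$ can be checked by directly counting the reduced index set for the $2\times4$ projection matrix above. The count below uses the $\|\bm{Pk}\|_2\leq D$ truncation of Eq. [\[omega2\]](#omega2){reference-type="eqref" reference="omega2"} with $2N+1$ integer modes per dimension, so the absolute sizes need not coincide with the tabulated ones; the scaling is the point.

```python
import numpy as np

# For P = [[1,0,sqrt(5),0],[0,1,0,sqrt(5)]], the two components of P k
# separate: P k = (k1 + sqrt(5) k3, k2 + sqrt(5) k4).
N = 20
ks = np.arange(-N, N + 1)
s = (ks[:, None] + np.sqrt(5.0) * ks[None, :]).ravel()   # all k1 + sqrt(5) k3
t = s                                                    # same set for k2 + sqrt(5) k4

def count_omega_R(D):
    """# of k in Z^4 with ||P k||_2 <= D and ||k||_inf <= N."""
    return int(np.count_nonzero(s[:, None] ** 2 + t[None, :] ** 2 <= D * D))

dof10, dof20 = count_omega_R(10), count_omega_R(20)      # roughly quadratic in D
```

Doubling $D$ roughly quadruples the count, in line with the $O(N^{n-d}D^{d})$ estimate for $d=2$.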
In this case, the CPU time of the RPM is $148.50$ seconds, which is $15.8$ times faster than that of the PM, and the DOF is $4.9$ times smaller than that of the PM. The reduction for larger $N$ is even more significant: when $N=28$, the speedup with $D=10$ reaches $73.8$ times in CPU time, and the reduction in the DOF is $9.7$ times, comparing the RPM with the PM. This speedup is even larger than in the 1D problems obtained by introducing the reduction technique into the PM.

| $D$ | Size ($N=20$) | CPU time (s) | Size ($N=24$) | CPU time (s) | Size ($N=28$) | CPU time (s) |
|----:|------:|-------:|------:|-------:|------:|-------:|
|  30 | 156816 | 2089.2 | 291600 | 6681.5 | 459684 | 15591  |
|  28 | 152100 | 1927.7 | 273529 | 5760.1 | 421201 | 11562  |
|  26 | 145161 | 1803.2 | 252004 | 4807.6 | 379456 | 9889.5 |
|  24 | 135424 | 1552.9 | 227529 | 4318.9 | 335241 | 7631.6 |
|  22 | 123201 | 1284.0 | 200704 | 3144.2 | 291600 | 6118.3 |
|  20 | 108900 | 1057.3 | 173889 | 2362.0 | 247009 | 4035.2 |
|  18 |  94249 | 786.30 | 145924 | 1895.6 | 202500 | 3363.2 |
|  16 |  78400 | 650.76 | 117649 | 1409.2 | 160801 | 1832.3 |
|  14 |  62001 | 424.71 |  90601 | 821.09 | 123904 | 1341.1 |
|  12 |  46225 | 319.53 |  66564 | 493.80 |  91204 | 763.4  |
|  10 |  32400 | 148.50 |  46656 | 304.35 |  63504 | 446.09 |
|  PM | 160000 | 2343.2 | 331776 | 10305  | 614656 | 32901  |

Finally, we investigate the performance of the RPM for varying $E_0$. Figure [8](#fig4-3-3){reference-type="ref" reference="fig4-3-3"} shows the profiles of the first eigenfunctions in 2D quasiperiodic systems for various $E_0$ and $N$. With the increase of $E_0$, the eigenfunction becomes singular, leading to a localized eigenstate. This phenomenon is reminiscent of the localization-delocalization transition exhibited in experimental studies of 2D photonic moiré lattices [@wang2020localization]. The moiré lattices rely on flat-band structures for wave localization, as opposed to the disordered media used in other approaches based on light diffusion in photonic quasicrystals [@freedman2006wave; @levi2011disorder].
The localization-delocalization transition of the eigenstates in 2D systems provides valuable insight into the exploration of quasicrystal structures. This transition process is displayed in Figure [8](#fig4-3-3){reference-type="ref" reference="fig4-3-3"}. The figure also illustrates that the results for different $N$ are essentially the same for the different values of $E_0$, indicating that the RPM converges fast both for weak and for strong optical response. Moreover, due to the lower DOF of the RPM, more numerical nodes per dimension can be used to reach a higher approximation accuracy.

![The normalized first eigenfunction $|u|$ in 2D under different $E_0$ and $N$. $D=30$ is taken and the results are shown on $[0,10]^2$. (a,b,c): $N=24$, (d,e,f): $N=26$ and (g,h,i): $N=28$. (a,d,g): $E_0=0.25$, (b,e,h): $E_0=1$ and (c,f,i): $E_0=4$.](8.pdf){#fig4-3-3 width="100%"}

# Conclusions {#s5}

We proposed the RPM for accurate and fast calculations of eigenvalue problems of quasiperiodic Schrödinger equations. We showed the exponential decay of the generalized Fourier coefficients of the eigenfunctions of quasiperiodic problems, which justifies the efficiency of the RPM. An error bound for the approximation is provided, which demonstrates the high accuracy from a theoretical point of view. Compared to the original PM, the reduced method requires much less memory and significantly speeds up the calculation, making it possible to treat high-dimensional quasiperiodic eigenvalue problems. Numerical results for both 1D and 2D problems show the efficiency and accuracy of the algorithm, demonstrating attractive features for broader practical applications. The RPM is potentially useful for 3D quasiperiodic problems, which will be reported in our future work.
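To make the reduction idea tangible in a self-contained way, the following sketch assembles a plane-wave Hamiltonian for a 1D quasiperiodic operator $-u''+v(x)u=Eu$ with an *assumed* two-frequency potential $v(x)=E_0(\cos x+\cos(\sqrt{2}x))$, lifted to a 2D index lattice, and compares the full index set (PM) with a truncated one (RPM). The potential, the truncation rule and all names here are illustrative assumptions, not the exact setup of the paper:

```python
import numpy as np

def build_hamiltonian(N, E0, D=None):
    """Plane-wave Hamiltonian for -u'' + v(x) u with the assumed potential
    v(x) = E0*(cos x + cos(sqrt(2) x)), raised to the index set
    {(k1, k2): |k1|, |k2| <= N}; if D is given, only basis functions with
    projected wave number |k1 + sqrt(2)*k2| <= D are kept (the reduction)."""
    ks = [(k1, k2) for k1 in range(-N, N + 1) for k2 in range(-N, N + 1)]
    if D is not None:
        ks = [k for k in ks if abs(k[0] + np.sqrt(2) * k[1]) <= D]
    index = {k: i for i, k in enumerate(ks)}
    H = np.zeros((len(ks), len(ks)))
    for k, i in index.items():
        H[i, i] = (k[0] + np.sqrt(2) * k[1]) ** 2        # kinetic term
        for dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # cosine couplings E0/2
            nb = (k[0] + dk[0], k[1] + dk[1])
            if nb in index:
                H[i, index[nb]] += E0 / 2.0
    return H

H_pm = build_hamiltonian(N=6, E0=1.0)           # full raised index set
H_rpm = build_hamiltonian(N=6, E0=1.0, D=6.0)   # reduced index set
E_pm = np.linalg.eigvalsh(H_pm)[0]
E_rpm = np.linalg.eigvalsh(H_rpm)[0]
```

The reduced matrix is noticeably smaller, yet its smallest eigenvalue stays very close to the full one, since the discarded basis functions carry large kinetic energy; by Cauchy interlacing the truncated value can only lie above the full one.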
# The indicator method {#app: A}

The indicator method [@cakoni2010existence; @cakoni2010inverse] is very useful for removing spurious eigenmodes during the numerical calculation of the RPM. Let $\Theta\subset \mathbb{C}$ be a simply connected region in the complex plane with boundary $\partial \Theta$. For any eigenvalue problem $\bm{Hu}=E \bm{u}$, one can define a spectral projection operator $$\label{integral} \bm{Q}=\dfrac{1}{2\pi \mathrm{i}}\int_{\partial \Theta}(\bm{H}-s\bm{I})^{-1}ds,$$ with $\bm{I}$ being the identity matrix. For a random vector $\bm{f}\neq \bm{0}$, one has $\|\bm{Qf}\|= 0$ if there is no eigenvalue within $\Theta$; if there is at least one eigenvalue in $\Theta$, then $\|\bm{Qf}\|\neq 0$ with probability $1$. Based on these properties, one can define an indicator, $\mathrm{Ind}=\|\bm{Q} (\bm{Qf}/\|\bm{Qf}\|)\|$, to decide whether there is an eigenvalue in a specified region [@huang2016recursive]. To compute $\bm{Qf}$, the line integral along $\partial \Theta$ is approximated by a numerical quadrature rule, $$\label{qf} \bm{Qf}\approx\dfrac{1}{2\pi \mathrm{i}}\sum_{j=1}^{n_0}\omega_j \bm{r}_j,$$ where $\{\omega_j\}$ are the quadrature weights and $\{\bm{r}_j\}$ are the solutions of the linear systems $$\label{gmres} (\bm{H}-s_j \bm{I})\bm{r}_j=\bm{f},\quad j=1,2,\dots,n_0.$$ Here $\{s_j\}$ are the quadrature nodes on $\partial \Theta$. In practice, $\Theta$ can be chosen as a small square, so that the trapezoidal rule with the four vertices of the square as quadrature points already provides sufficient accuracy. The linear systems [\[gmres\]](#gmres){reference-type="eqref" reference="gmres"} are usually solved by the generalized minimal residual method (GMRES) in a matrix-free manner. The region is judged to contain no eigenvalue if the indicator falls below a small threshold. In our system, the matrix $\bm{H}$ is generated by Eq. [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} and the indicator method is applied in the frequency space.
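As a minimal numerical sketch of this procedure (a hypothetical `indicator` helper, with dense direct solves standing in for the matrix-free GMRES described above):

```python
import numpy as np

def indicator(H, center, half_width, rng):
    """Indicator Ind = ||Q (Qf/||Qf||)|| for a square region around `center`;
    the contour integral is approximated by the trapezoidal rule on the four
    vertices of the square, as described in the text."""
    n = H.shape[0]
    f = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # random vector

    def apply_Q(v):
        s_nodes = [center + half_width * w
                   for w in (1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j)]
        total = np.zeros(n, dtype=complex)
        m = len(s_nodes)
        for j, s in enumerate(s_nodes):
            # closed trapezoidal weight (s_{j+1} - s_{j-1}) / 2
            w = (s_nodes[(j + 1) % m] - s_nodes[(j - 1) % m]) / 2.0
            # dense solve for illustration; the paper uses matrix-free GMRES
            total += w * np.linalg.solve(H - s * np.eye(n), v)
        return total / (2j * np.pi)

    qf = apply_Q(f)
    return np.linalg.norm(apply_Q(qf / np.linalg.norm(qf)))
```

For a diagonal test matrix, a small square containing an eigenvalue yields an indicator of order one, while a square containing no eigenvalue yields a value close to zero, so a small threshold separates the two cases.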
The random vector $\bm{f}$ is usually taken as the generalized Fourier coefficient vector of the potential function $v$. One can validate each eigenvalue obtained by the RPM by taking a small region centered at that eigenvalue: the eigenvalue is considered spurious if the indicator of this region is below the threshold.

# Acknowledgement {#acknowledgement .unnumbered}

Z. G. and Z. X. are supported by the National Natural Science Foundation of China (NNSFC) (No. 12071288) and the Science and Technology Commission of Shanghai Municipality (grant Nos. 20JC1414100 and 21JC1403700). Z. Y. is supported by the NNSFC (No. 12101399) and the Shanghai Sailing Program (No. 21YF1421000).

Babuška I, Osborn J. Eigenvalue Problems. , 2:641--787, 1991.
Bistritzer R, MacDonald A. Moiré bands in twisted double-layer graphene. , 108(30):12233--12237, 2011.
Bohr H. . Courier Dover Publications, 2018.
Britnell L, Ribeiro R, Eckmann A, et al. Strong light-matter interactions in heterostructures of atomically thin films. , 340(6138):1311--1314, 2013.
Cakoni F, Colton D, Monk P, et al. The inverse electromagnetic scattering problem for anisotropic media. , 26(7):074004, 2010.
Cakoni F, Gintides D, Haddar H. The existence of an infinite discrete set of transmission eigenvalues. , 42(1):237--255, 2010.
Cao Y, Fatemi V, Fang S, et al. Unconventional superconductivity in magic-angle graphene superlattices. , 556(7699):43--50, 2018.
Carr S, Massatt D, Fang S, et al. Twistronics: Manipulating the electronic properties of two-dimensional layered structures through their twist angle. , 95(7):075420, 2017.
Davenport H, Mahler K. Simultaneous Diophantine approximation. , 13(1):105--111, 1946.
Freedman B, Bartal G, Segev M, et al. Wave and defect dynamics in nonlinear photonic quasicrystals. , 440(7088):1166--1169, 2006.
Fu Q, Wang P, Huang C, et al. Optical soliton formation controlled by angle twisting in photonic moiré lattices. , 14(11):663--668, 2020.
Gao Z, Xu Z, Yang Z, et al. Pythagoras superposition principle for localized eigenstates of two-dimensional moiré lattices. , 108(8):013513, 2023.
Goldman A, Kelton R. Quasicrystals and crystalline approximants. , 65(1):213, 1993.
González-Tudela A, Cirac I. Cold atoms in twisted-bilayer optical potentials. , 100(5):053604, 2019.
Grafakos L. , volume 2. Springer, 2008.
Hu G, Krasnok A, Mazor Y, et al. Moiré hyperbolic metasurfaces. , 20(5):3217--3224, 2020.
Huang C, Ye F, Chen X, et al. Localization-delocalization wavepacket transition in Pythagorean aperiodic potentials. , 6(1):1--8, 2016.
Huang R, Struthers A, Sun J, et al. Recursive integral method for transmission eigenvalues. , 327:830--840, 2016.
Jiang K, Li S, Zhang P. Numerical analysis of computing quasiperiodic systems. .
Jiang K, Zhang P. Numerical methods for quasicrystals. , 256:428--440, 2014.
Jiang K, Zhang P. Numerical mathematics of quasicrystals. In *Proceedings of the International Congress of Mathematicians: Rio de Janeiro 2018*, pages 3591--3609. World Scientific, 2018.
Kartashov Y, Ye F, Konotop V, et al. Multifrequency solitons in commensurate-incommensurate photonic moiré lattices. , 127(16):163902, 2021.
Kelley C. . SIAM, 1995.
Kouznetsov D, Van Dorpe P, Verellen N. Sieve of Eratosthenes for Bose-Einstein condensates in optical moiré lattices. , 105(2):L021304, 2022.
Lehoucq R. Implicitly restarted Arnoldi methods and subspace iteration. , 23(2):551--562, 2001.
Lehoucq R, Sorensen D, Yang C. . SIAM, 1998.
Lei F, Wang C. Study on the properties of solitons in Moiré lattice. , 219:165169, 2020.
Levi L, Rechtsman M, Freedman B, et al. Disorder-enhanced transport in photonic quasicrystals. , 332(6037):1541--1544, 2011.
Levitan B, Zhikov V. . CUP Archive, 1982.
Liesen J, Strakos Z. . Oxford University Press, 2013.
Lifshitz R, Petrich D. Theoretical model for Faraday waves with multiple-frequency forcing. , 79(7):1261, 1997.
Lu X, Stepanov P, Yang W, et al. Superconductors, orbital magnets and correlated states in magic-angle bilayer graphene. , 574(7780):653--657, 2019.
O'Riordan L, White A, Busch T. Moiré superlattice structures in kicked Bose-Einstein condensates. , 93(2):023609, 2016.
Petersen K. , volume 2. Cambridge University Press, 1989.
Pitt H. Some generalizations of the ergodic theorem. In *Mathematical Proceedings of the Cambridge Philosophical Society*, volume 38, pages 325--343. Cambridge University Press, 1942.
Reichel L, Trefethen L. Eigenvalues and pseudo-eigenvalues of Toeplitz matrices. , 162:153--185, 1992.
Saad Y. . SIAM, 2003.
Salakhova N, Fradkin I, Dyakov S, et al. Fourier modal method for moiré lattices. , 104(8):085424, 2021.
Shen J, Tang T, Wang L. , volume 41. Springer Science & Business Media, 2011.
Stewart G. A Krylov-Schur algorithm for large eigenproblems. , 23(3):601--614, 2002.
Van der Vorst H A. Krylov subspace iteration. , 2(1):32--37, 2000.
Walters P. , volume 79. Springer Science & Business Media, 2000.
Wang P, Zheng Y, Chen X, et al. Localization and delocalization of light in photonic Moiré lattices. , 577(7788):42--46, 2020.
Wang T, Chen H, Zhou A, et al. Convergence of the planewave approximations for quantum incommensurate systems. .
Watkins D S. . SIAM, 2007.
Wright T, Trefethen L. Large-scale computation of pseudospectra using ARPACK and eigs. , 23(2):591--605, 2001.
Xu M, Liang T, Shi M, et al. Graphene-like two-dimensional materials. , 113(5):3766--3798, 2013.
Zeng F, Sun J, Xu L. A spectral projection method for transmission eigenvalues. , 59:1613--1622, 2016.
Zhang X, Peng Y, Piao D. Quasi-periodic solutions for the general semilinear Duffing equations with asymmetric nonlinearity and oscillating potential. , 64:931--946, 2021.
Zhou Y, Chen H, Zhou A. Plane wave methods for quantum eigenvalue problems of incommensurate systems. , 384:99--113, 2019.
{ "id": "2309.09238", "title": "Reduced projection method for quasiperiodic Schr\\\"{o}dinger eigenvalue\n problems", "authors": "Zixuan Gao, Zhenli Xu and Zhiguo Yang", "categories": "math.NA cs.NA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We use the framework of multivariate regular variation to analyze the extremal behavior of preferential attachment models. To this end, we follow a directed linear preferential attachment model with a fixed number of outgoing edges per node for a random, heavy-tailed number of steps in time and treat the incoming edge count of all existing nodes as a random vector of random length. By combining martingale properties, moment bounds and a Breiman type theorem we show that the resulting quantity is multivariate regularly varying, both as a vector of fixed length formed by the edge counts of a finite number of oldest nodes, and also as a vector of random length viewed in sequence space. A Pólya urn representation allows us to explicitly describe the extremal dependence between the degrees with the help of Dirichlet distributions. As a by-product of our analysis we establish new results for almost sure convergence of the edge counts in sequence space as the number of nodes goes to infinity. bibliography: - bib.bib title: Multivariate Regular Variation of Preferential Attachment Models --- # Introduction Preferential attachment random graphs, as first introduced in [@barabasi99] and [@bollobas01], have become common models for the evolution of stochastic networks, see, e.g., Chapter 8 in [@hofstad16] for a brief overview on the topic which covers both the motivation and also a rigorous treatment of the asymptotic behavior. The key feature of this class of models is that the current number of edges of a node affects the probability of a new node connecting to it and it is well known that this underlying "rich get richer"-dynamic leads to a heavy-tailed behavior in the resulting limit distribution of edge counts per node, as the number of nodes goes to infinity. 
To make this notion of heavy tails more precise, think of a simple preferential attachment model which consists at time $n=0$ of a single node, labelled by $1$, and a single directed edge from this node to itself, i.e. a loop. Now, at each following point $n=1,2,\ldots$ in time, we add a new node, labelled by $n+1$, to the graph and attach one directed edge from this new node to itself or another already existing node. The node to connect to is chosen at random, with a probability that depends on the number of incoming edges per node. More precisely, moving from time $n$ to $n+1$ we create a new node that connects to the node with label $i \leq n+2$ with the probability $$\label{Eq:prefattach} p_i(n):=\frac{\deg_i^{\mbox{\scriptsize{in}}}(n)+\beta}{\sum_{k=1}^{n+2}(\deg_k^{\mbox{\scriptsize{in}}}(n)+\beta)},$$ where $\deg_i^{\mbox{\scriptsize{in}}}(n), i=1, \ldots, n+2,$ stands for the in-degree of node $i$ at time $n$, i.e. the number of incoming edges, and $\beta > 0$ denotes a so-called offset parameter. In this model, the resulting empirical degree distribution at time $n$, given by its probability mass function $$\label{Eq:empdeg} i \mapsto p_n(i):=\frac{\sum_{k=1}^{n+1}\mathds{1}_{\{\deg_k^{\mbox{\scriptsize{in}}}(n)=i\}}}{n+1}, \,\;\; n \in \mathbb{N}, 1 \leq i \leq n+1,$$ can be shown to converge as $n \to \infty$ to a limiting probability mass function $i \mapsto p(i), i \in \mathbb{N},$ such that there exists a constant $c>0$ for which $$\label{Eq:empdeglim} \lim_{i \to \infty} i^{2+\beta}\,p(i)=c,$$ see [@bollobasetal03], and, e.g., [@krapivskyetal01; @mori07; @wangresnick18; @resnicksamorodnitsky16] for related analyses. Thus, the limiting probability mass function of the in-degrees decays like a power function.
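As a minimal illustration (the helper name and parameters are ours, not from the cited references), the dynamic in [\[Eq:prefattach\]](#Eq:prefattach){reference-type="eqref" reference="Eq:prefattach"} can be simulated as follows:

```python
import random

def simulate_in_degrees(steps, beta=1.0, seed=42):
    """Simulate the simple model: start with node 1 carrying a self-loop;
    in each step add one node with a single outgoing edge, attached to
    node i (the new node included) with probability proportional to
    indeg_i + beta, as in the attachment rule above."""
    rng = random.Random(seed)
    indeg = [1]                  # node 1 starts with in-degree 1 (its loop)
    for _ in range(steps):
        indeg.append(0)          # the new node, before its edge is placed
        weights = [d + beta for d in indeg]
        target = rng.choices(range(len(indeg)), weights=weights, k=1)[0]
        indeg[target] += 1
    return indeg
```

For $\beta=1$ the empirical mass function of `simulate_in_degrees(10**5)` decays roughly like $i^{-3}$, matching the limit above, and the oldest nodes typically collect the largest in-degrees.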
In network science, this property of a random graph is typically referred to as "scale free", and it is often pointed out that a similar power-law behavior is visible in the empirical degree distributions of real life networks; see [@voitalovetal19] for a recent overview on the topic. While the empirical degree distribution in [\[Eq:empdeg\]](#Eq:empdeg){reference-type="eqref" reference="Eq:empdeg"} is an important summary statistic of an observed network, its limit $p(i), i \in \mathbb{N},$ tells us little about the asymptotic structure of the growing underlying random graph. In particular, it does not tell us how many steps in the network are typically necessary in order to see an in-degree which exceeds a certain high threshold, or what the joint behavior of in-degrees looks like in extreme scenarios. The aim of our contribution is to provide such an asymptotic analysis for a common class of preferential attachment models, where we make use of the framework of multivariate regular variation, a common tool in multivariate extreme value theory, and apply it to the vector of in-degree counts. To this end, we study a dynamic which is a generalisation of the two models analyzed in [@pekoz17], i.e. linear directed preferential attachment models in which a new node creates a fixed number $l \in \mathbb{N}$ of outgoing edges, and where we allow for an arbitrary offset parameter $\beta \geq 0$ as in [\[Eq:prefattach\]](#Eq:prefattach){reference-type="eqref" reference="Eq:prefattach"}. We then follow our network for a *random* number of steps, reflecting the fact that all networks observed in real life are snapshots of a dynamic taken at a certain time and never a limit of infinitely many nodes. As tools for our analysis, we extend approaches from [@mori05] and [@pekoz17] to our more flexible class of models and combine them with a generalization of a Breiman type result from [@wang21] in order to derive the limiting quantities for the asymptotic behavior of the network.
We first treat the in-degrees as a random vector of arbitrary, but finite length, formed only by the oldest nodes. In a second step, we analyse all in-degrees jointly as a random vector of random length in sequence space by applying the extended concept of regular variation studied in [@tillier18]. The article is structured as follows: In Section [2](#Sec:Background){reference-type="ref" reference="Sec:Background"} we introduce our model, which allows both for a positive offset parameter and an arbitrary, but fixed number of outgoing edges for each new node. We make a connection to urn models which allows us to apply martingale methods in order to derive an almost sure limit of mixed powers of finitely many in-degrees as the number of nodes goes to infinity (Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}). Next, the concept of regular variation in Banach spaces is introduced in Section [3](#sec:MRVPARG){reference-type="ref" reference="sec:MRVPARG"} along with a Breiman type result (Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"}) that allows us to transfer the regular variation of the observed number of nodes to regular variation of the in-degree vector. We first restrict the analysis to a finite stretch of the in-degree vector, i.e. we only analyze an arbitrary but fixed number of oldest nodes in the network and derive its regular variation (Theorem [\[th:mrvparg\]](#th:mrvparg){reference-type="ref" reference="th:mrvparg"} and Corollary [\[Cor:DegreeRV\]](#Cor:DegreeRV){reference-type="ref" reference="Cor:DegreeRV"}). In order to explicitly describe the extremal dependence structure we deduce the form of the so-called spectral measure.
Again by making use of the urn model representation and its limiting Beta distributions we can show that the spectral measure can be explicitly described with the help of Dirichlet distributions (Theorem [\[L:Spectral_distr\]](#L:Spectral_distr){reference-type="ref" reference="L:Spectral_distr"} and Corollary [\[C:Spectral_Measure\]](#C:Spectral_Measure){reference-type="ref" reference="C:Spectral_Measure"}). Subsequently, in Section [4](#sec:Seq_Spaces){reference-type="ref" reference="sec:Seq_Spaces"}, we extend our analysis to sequence space, which allows us for example to keep track of the extremal behaviour of the maximum degree of our network (Corollary [\[Cor:MaxRV\]](#Cor:MaxRV){reference-type="ref" reference="Cor:MaxRV"}). As an auxiliary result we first derive almost sure convergence, as the number of nodes tends to infinity, of the degree sequence in sequence space (Proposition [\[P:BreiCond\]](#P:BreiCond){reference-type="ref" reference="P:BreiCond"}). Finally, we arrive at a regular variation result in sequence space (Theorem [\[Th:RVsequence\]](#Th:RVsequence){reference-type="ref" reference="Th:RVsequence"}). Longer proofs and auxiliary results have been deferred to the appendix [5](#sec:appendix){reference-type="ref" reference="sec:appendix"}.

# Background on Preferential Attachment Models {#Sec:Background}

## Model Definitions

In this section we introduce our random graph model and give an equivalent formulation of it that will turn out to be useful. Our model is a generalisation of the model considered in [@pekoz17] in that we allow for an arbitrary offset parameter $\beta \geq 0$ in the linear preferential attachment function.

### The Preferential Attachment Random Graph {#Subsec:PAM}

Our object of interest is a graph $G(n),\,n\in\mathbb{N}_0,$ randomly evolving in a discrete time setting and growing by one vertex in each step, from which a fixed number $l \in \mathbb{N}$ of edges originates. The number of vertices at time $n$ will be called $N(n)$.
As usual, a graph is defined as the tuple $G(n)=(V(n),E(n))$ of its set of vertices and its set of edges. In our case we will simply number these vertices in their order of appearance, so $V(n)=\{1,\dots,N(0),N(0)+1,\dots,N(0)+n\}$, where the concrete numbering of the initial $N(0)$ vertices is of no interest. For the edges, we assume them to be directed, and from now on we will only consider the in-degrees $\deg_k^{\mbox{\scriptsize{in}}}(n), 1 \leq k \leq N(n),$ of the vertex $k$ at time $n$ (as the out-degree is not random). If vertex $k$ does not yet exist at time $n$, we set $\deg_k^{\mbox{\scriptsize{in}}}(n)=0$. Finally, we want to introduce weights $D_k(n)$ which will determine the probability of a vertex $k$ to get a new edge attached to it at time $n$. We deal with *linear* preferential attachment models and so let $f:\mathbb{N}_0\to\mathbb{R}_{\geq 0}$ be an affine function with $f(x)=\alpha x+\beta,\, \alpha,\beta\geq 0$ and apply it to the (random) in-degrees: $D_k(n):=f(\deg_k^{\mbox{\scriptsize{in}}}(n))$. As the aforementioned probability shall be proportional to $D_k(n)$, without loss of generality we can restrict ourselves to weight functions $f$ with $\alpha=1$, i.e., set $D_k(n)=\deg_k^{\mbox{\scriptsize{in}}}(n)+\beta, 1 \leq k \leq N(n).$ By choosing $\beta=l$ the weight $D_k(n),\,k>N(0),$ represents the total degree (in-degree $+$ out-degree) of vertex $k$.

Next, we want to describe the random growth of the graph in more detail. We start at time $0$ with a graph that consists of $N(0)$ initial vertices and a set of edges between them, where both $N(0)$ and the set of edges are arbitrary but fixed, i.e. deterministic. Transitioning from time $n$ to $n+1$ a new vertex is added to the graph, establishing a fixed number $l\in\mathbb{N}$ of edges to already existing vertices in two possible ways.
As an example, the corresponding figures illustrate how node 4 is added to an existing graph consisting of nodes 1, 2, 3 and edges (solid arrows) in each of the two models. Dashed-dotted arrows show all possibilities for creating a new edge with the corresponding probabilities. We consider two different models.

**Model 0** (no loops): The first model does not allow for loops, i.e. an edge from a vertex to itself. We sequentially add the $l$ edges, immediately updating the weight of the chosen vertex. For an $i\in\{1,\dots,l\}$ let $K_1,\dots, K_{i-1}$ be the vertices chosen for the first $i-1$ edges. The probability to attach the $i$th edge to vertex $k\in\{1,\dots,N(n)\}$ then is $$\frac{D_k(n)+\sum_{j=1}^{i-1}\mathds{1}_{\{K_j=k\}}}{(i-1)+\sum_{m=1}^{N(n)}D_m(n)},$$ which is the proportion of the weight of $k$ to the total sum of weights at this particular moment. Note that this formula does not consider the vertex $N(n)+1$ we are about to add.

**Model 1** (with loops): The second model allows for loops by a small modification of the above probability formula. To this end, we set the weight of the new vertex $D_{N(n)+1}(n)=\beta$. Then, with the same notation as in Model 0, we allow the vertex $k$ to range over $\{1,\dots, N(n)+1\}$ and use $$\frac{D_k(n)+\sum_{j=1}^{i-1}\mathds{1}_{\{K_j=k\}}}{(i-1)+\sum_{m=1}^{N(n)+1}D_m(n)}$$ as probability for the $i$th edge to attach to it. Thus, the difference between the two models is simply that in Model $1$ the new vertex is added to the graph before its edges are attached to already existing ones, whereas in Model $0$ it is added afterwards.

### An Infinite-Colour Urn Model {#sec:InCoUrn}

In order to analyse the limiting behavior of preferential attachment models, it has proven to be beneficial to exploit a close relationship to generalised Pólya urns that allows one to transfer methods and results from the latter setting to the former, see, e.g., [@collevecchio13; @berger14; @pekoz17; @garavaglia19].
**Urn Model**: The idea is that each colour $k$ in an urn stands for a vertex of the random graph and the number of balls of that colour is the corresponding weight $D_k$. Since the weights can be any positive real number, we will also allow our urn to contain non-natural numbers of balls. So let us consider an urn that at time $0$ contains an arbitrary (natural) number of balls of $s$ different colours (representing the edges in the graph) plus $\beta$ balls of each of those colours (the weight function's offset). Then, in the time step $n\mapsto n+1$, we randomly draw a ball from the urn, where the probability of a certain colour to be drawn, just like in a regular urn, is the ratio of the number of balls of that colour to the total number of balls. Afterwards we return the ball together with an additional one of the same colour. Furthermore, if $n$ is a multiple of $l$, we add $\beta$ more balls of the new colour $s+\frac{n}{l}$, a mechanism we will refer to as immigration. So one can view $l$ time steps in the urn model as one step for the graph. We will denote the number of balls of colour $k$ in the urn at time $n$ by $C_k(n)$. We obtain the following lemma describing the aforementioned connection. [\[GraphUrnDuality\]]{#GraphUrnDuality label="GraphUrnDuality"} Let $j \in \{0,1\}$, $l \in \mathbb{N}$, $\beta> 0$ and in addition $N(0)\in \mathbb{N}$ and $(D_1(0),$ $\dots, D_{N(0)}(0)) \in \mathbb{R}_{\geq 0}^{N(0)}$.
Then, there exists a probability space $(\Omega, \mathcal{A}, P)$ which accommodates two stochastic processes $(C_i(n))_{i,n \in \mathbb{N}}$ and $(D_i(n))_{i,n \in \mathbb{N}}$, such that - $(D_i(n))_{i,n \in \mathbb{N}}$ has the same distribution as the random graph Model $j$ described above (where $D_i(n)$ denotes the weight of vertex $i$ after $n$ new vertices have been added) with parameters $l$ and $\beta$ and starting configuration given by $N(0)$ vertices with weights $D_k(0), k=1, \ldots, N(0)$, - $(C_i(n))_{i,n \in \mathbb{N}}$ has the same distribution as the urn model described above (where $C_i(n)$ denotes the number of balls of colour $i$ after $n$ draws have been performed) with parameters $l$ and $\beta$ and starting configuration given by $s:=N(0)+j$ colours, with $D_k(0)$ balls of colour $k=1, \ldots, N(0)$ and, if $j=1$, $\beta$ balls of colour $N(0)+1$, such that $$(C_1(n\cdot l),\dots,C_r(n\cdot l))=(D_1(n),\dots,D_r(n))$$ for any $n \in \mathbb{N}, r<s+n$. *Proof.* Both the graph and the urn model for the first $s+n-1$ colours / vertices observed after $n$ steps in the respective model (where one step here means adding one ball of a certain colour / one edge to a certain node) form a Markov chain and one easily checks that the transition probabilities for one step coincide. This ensures the existence of the underlying probability space as stated, see [@kifer86 Section 1.1]. ◻ [\[rem:UrnGraph\]]{#rem:UrnGraph label="rem:UrnGraph"} The urn model is a useful way to look at the random graph for our purposes for several reasons. First, it performs each step of adding an edge to the graph separately, i.e., in one draw from the urn, and second, it ignores additional structure in the graph which we are not interested in, such as the information which vertex is hanging on the other side of an edge. 
Another advantage of the urn model is that we no longer need to distinguish between the two variants with and without loops when it comes to the transition probabilities. We simply shift the starting amount of colours in the urn by $j$, which corresponds either to adding the vertex before attaching its edges, or after. Note, however, that in the case where we add the colour beforehand, that colour exists at a time where the corresponding vertex should not yet, which is why we exclude the case $r=s+n$ in Lemma [\[GraphUrnDuality\]](#GraphUrnDuality){reference-type="ref" reference="GraphUrnDuality"}. In later proofs we will often switch ad libitum between the urn and the graph model, as justified by this lemma. A first observation we can make about the asymptotic behaviour of the individual weights/ball counts is that they will all tend to infinity almost surely: [\[L:InftyDegree\]]{#L:InftyDegree label="L:InftyDegree"} Assume the graph / urn model of Lemma [\[GraphUrnDuality\]](#GraphUrnDuality){reference-type="ref" reference="GraphUrnDuality"}. Then, as $n \to \infty$, for arbitrary $k$ the weight $D_k(n)$ of vertex $k$ / ball count $C_k(n)$ of colour $k$ diverges to infinity almost surely. *Proof.* We take the perspective of the urn model. Consider an arbitrary time $n=m\cdot l,\,m\in\mathbb{N},$ at which $C_k(n)>0$, where such an $n$ exists since we assumed $\beta>0$. Let $c$ denote the number of balls of colours other than $k$ at this time. Then the probability that no further ball of colour $k$ will be selected from here on is $$\prod_{i=0}^\infty\Bigl(1-\frac{C_k(n)}{C_k(n)+c+i+\beta\floor{\frac{i}{l}}}\Bigr).$$ The statement follows if we can show that this probability is equal to $0$, or equivalently that the series $$-\sum_{i=0}^\infty\ln\Bigl(1-\frac{C_k(n)}{C_k(n)+c+i+\beta\floor{\frac{i}{l}}}\Bigr)$$ diverges to infinity. Since $\ln(1-x)\sim -x$ as $x\to 0$, said series converges if and only if $$\sum_{i=0}^\infty\frac{C_k(n)}{C_k(n)+c+i+\beta\floor{\frac{i}{l}}}$$ does so.
As this expression can be bounded from below by a constant multiple of the divergent harmonic series, the result follows. ◻

## The Almost Sure Limit and its Moments {#Subsec:ASLimit}

By Lemma [\[L:InftyDegree\]](#L:InftyDegree){reference-type="ref" reference="L:InftyDegree"} all weights $D_k(n)$ diverge to infinity as time progresses, raising the question about the asymptotic behavior of $$D^r(n):=(D_1(n),\dots,D_r(n)),$$ for an arbitrary but fixed length $r \in \mathbb{N},$ after suitable rescaling as $n \to \infty$. The following result shows the almost sure convergence and characterises the limiting vector in terms of its mixed moments. [\[P:martingale_p\_factorial\]]{#P:martingale_p_factorial label="P:martingale_p_factorial"} Assume the graph / urn model of Lemma [\[GraphUrnDuality\]](#GraphUrnDuality){reference-type="ref" reference="GraphUrnDuality"}. Throughout this proposition the times $n$ or $m$ are assumed to be greater than or equal to $(r-s)\vee 0$ (in the random graph setting) or $l$ times this number (in the urn setting). Define the generalised binomial coefficient $$\binom{x}{y}:=\frac{\Gamma(x+1)}{\Gamma(y+1)\Gamma(x-y+1)}\quad \forall x,y \in \mathbb{R}$$ and let $k_1,\dots,k_r\in[0,\infty)$, $k:=\sum_{i=1}^r k_i$. 1. [\[P:martingale_p\_factorial:martingale\]]{#P:martingale_p_factorial:martingale label="P:martingale_p_factorial:martingale"} There exists a normalising sequence $c(n,k)\sim n^{k\cdot l/(l+\beta)}$ such that, with respect to the natural filtration $\mathcal{F}_n:=\sigma(C_k(m),m\leq n, k\leq s+\floor{\frac{n}{l}})$, the process $$\begin{aligned} \Bigl(\frac{1}{c(n,k)}\prod_{i=1}^r \binom{D_i(n)+k_i-1}{k_i},\mathcal{F}_{n\cdot l}\Bigr)_n\label{term:process_p_factorial}\end{aligned}$$ is a positive martingale. 2.
[\[P:martingale_p\_factorial:asConv\]]{#P:martingale_p_factorial:asConv label="P:martingale_p_factorial:asConv"} In particular, the above process is almost surely convergent with limit $$\begin{aligned} \prod_{i=1}^r\frac{\zeta_i^{k_i}}{\Gamma(k_i+1)}\in L^1(\mathbb{P}),\label{term:limit_p_factorial}\end{aligned}$$ where $\zeta_i=\lim_{n\to\infty}n^{-l/(l+\beta)}D_i(n)$. 3. The limit in 2) closes the martingale to the right, i.e. $$\mathop{\mathrm{\mathbb{E}}}\Bigl(\prod_{i=1}^r\frac{\zeta_i^{k_i}}{\Gamma(k_i+1)}\Big|\mathcal{F}_{n\cdot l}\Bigr)=\frac{1}{c(n,k)}\prod_{i=1}^r \binom{D_i(n)+k_i-1}{k_i} \quad \text{for every }n\in\mathbb{N}.$$ Most notably this means (with $c(n,0)=1$) $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}\Bigl(\prod_{i=1}^r\zeta_i^{k_i}\Bigr)=\prod_{i=1}^{r}\Biggl[&\Gamma(k_i+1)\cdot\frac{c((i-s)\vee 0,k_1+\dots+k_{i-1})}{c((i-s)\vee 0,k_1+\dots+k_i)}\nonumber\\ &\cdot\begin{cases}\binom{D_i(0)+k_i-1}{k_i}\quad &\text{if }i\leq N(0)\\ \binom{\beta+k_i-1}{k_i}\quad &\text{if }i> N(0)\end{cases}\Biggr]\label{eq:expec_zetap}\end{aligned}$$ The proof of this proposition is deferred to Section [5.1](#Sec:MartingaleProof){reference-type="ref" reference="Sec:MartingaleProof"}. # Regular Variation of the Finite-Dimensional Weight Vector {#sec:MRVPARG} The starting point for the asymptotic analysis in Section [2.2](#Subsec:ASLimit){reference-type="ref" reference="Subsec:ASLimit"} was to let the time index $n$ go to infinity and derive properties of the almost sure limit of the weight vector after proper, time-dependent standardization. In this section, we will take a different approach by focusing on the random vector $$\label{Eq:D(N):Def} D^r(N):=(D_1(N),\dots,D_r(N)),$$ of the weights of the oldest $r$ nodes evaluated at a *random* time $N$ (where we set $D_k(N)=0$ if node $1 \leq k \leq r$ does not yet exist at time $N$) and how the extremes of this random vector can be described. 
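For readers who wish to experiment with this object, the following sketch generates a realisation of $D^r(N)$ by simulating the urn dynamics described above: in each graph step, $l$ preferential draws each reinforce the drawn colour by one, after which a new colour immigrates with weight $\beta$. The starting weights of the initial colours and the rounded-Pareto choice for $N$ are illustrative assumptions, not part of the model specification.

```python
import random

def weight_vector(n_steps, r, s=1, l=2, beta=1.0, rng=None):
    """Sketch of the urn with immigration: each graph step performs l
    preferential draws (a colour is chosen with probability proportional
    to its current weight and reinforced by 1), after which a new colour
    immigrates with initial weight beta. Returns the weights of the
    first r colours, zero-padded if a colour does not yet exist."""
    rng = rng or random.Random(0)
    w = [l + beta] * s  # assumed starting weights of the s initial colours
    for _ in range(n_steps):
        for _ in range(l):
            u = rng.random() * sum(w)
            acc = 0.0
            for k in range(len(w)):
                acc += w[k]
                if u <= acc:
                    w[k] += 1.0  # reinforce the drawn colour
                    break
            else:  # guard against floating-point round-off
                w[-1] += 1.0
        w.append(beta)  # immigration of a new colour / vertex
    return [w[k] if k < len(w) else 0.0 for k in range(r)]

# Evaluate at a heavy-tailed random time N (rounded Pareto, index alpha):
rng = random.Random(1)
alpha = 1.5
N = int(rng.paretovariate(alpha))
d = weight_vector(N, r=3)
```

Note that the deterministic bound $\|D^r(n)\|_1\leq \|D^r(0)\|_1+n\cdot(l+\beta)$ discussed below holds by construction: each step adds exactly $l+\beta$ units of total weight.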
We can think of $N$ as the random number of steps that our network has already gone through at time of observation. The extremal behavior of $D^r(N)$ will thus depend both on the value of $N$ and on the network dynamics. In order to further analyse the behavior, we first introduce a suitable framework.

## Background on Multivariate Regular Variation {#subsec:BackgroundMRV}

When dealing with extremes of random vectors $X$ we typically deem an outcome extreme if its norm $||X||$ exceeds a high threshold, and we seek a stabilising behavior after suitable normalisation as the threshold goes to infinity. This leads to the definition of multivariate regular variation, where the concept can be formalised in several slightly different ways. While the original definition was based on vague convergence, see [@deHaan79; @resnick87], we will employ the more flexible concept of $\mathbb{M}$-convergence, see [@Hult06; @lindskog14; @kulik20], which will allow us to extend our results to the sequence space in Section [4](#sec:Seq_Spaces){reference-type="ref" reference="sec:Seq_Spaces"}. [\[D:MRV\]]{#D:MRV label="D:MRV"} Let $(B,||\cdot||)$ be a separable Banach space and set $B^*:=B\setminus \{0\}$, write $\mathcal{B}(B^*)$ for the Borel $\sigma$-algebra on $B^\ast$ and let $\mathbb{M}(B^*)$ denote the space of measures on $\mathcal{B}(B^*)$ which are finite on sets *bounded away from $0$*, i.e. sets $A$ such that $\inf_{a \in A}||a||>0$. Write furthermore $\mathcal{C}(B^\ast)$ for the set of all non-negative, bounded and continuous functions $f$ on $B^*$ for which there exists an $\epsilon>0$ such that $f$ vanishes on $B_\epsilon(0):=\{b \in B: \|b\|<\epsilon\}$.
We say a random variable $X$ with values in $B$ is *multivariate regularly varying in $B^\ast$* if there exists a non-trivial measure $\mu \in \mathbb{M}(B^*)$, also called the *limit measure*, such that $$\label{Eq:Def:MRV}\frac{\mathbb{P}(X/t\in \cdot)}{\mathbb{P}(||X||>t)}\overset{t\to\infty}{\to}\mu(\cdot) \text{ in }\mathbb{M}(B^*),$$ where $$\label{Eq:M:Conv} \mu_t\to\mu\text{ in }\mathbb{M}(B^*)\,\,\,\,\Leftrightarrow\,\,\,\,\mu_t(f):=\int f\textup{d}\mu_t\to\int f\textup{d}\mu=:\mu(f) \quad\forall f\in\mathcal{C}(B^\ast).$$ The mode of convergence defined in [\[Eq:M:Conv\]](#Eq:M:Conv){reference-type="eqref" reference="Eq:M:Conv"} is called *$\mathbb{M}$-convergence*. [\[Rem:RVofNorm\]]{#Rem:RVofNorm label="Rem:RVofNorm"} Definition [\[D:MRV\]](#D:MRV){reference-type="ref" reference="D:MRV"} implies that for any multivariate regularly varying $X$ on $B \setminus \{0\}$, the random variable $||X||$ is (univariate) regularly varying on $(0,\infty)$ with some index $\alpha$, meaning that $$\lim_{t \to \infty}\frac{\mathbb{P}(||X||>\lambda t)}{\mathbb{P}(||X||>t)}=\lambda^{-\alpha} \;\;\; \forall\, \lambda>0.$$ For the remainder of this section [3](#sec:MRVPARG){reference-type="ref" reference="sec:MRVPARG"} we will stick to the special case $B =\mathbb{R}^n$. However, in preparation of section [4](#sec:Seq_Spaces){reference-type="ref" reference="sec:Seq_Spaces"}, when our aim is to take the weight sequence as a whole into consideration, we need to use the more general notion of a separable Banach space. This means that the choice of the norm matters in general (how much so, we will see in section [4](#sec:Seq_Spaces){reference-type="ref" reference="sec:Seq_Spaces"}), while in $\mathbb{R}^n$, at least for qualitative considerations, it does not. The limit measure $\mu$ has a certain structure implied by the multiplicative standardisation used in [\[Eq:Def:MRV\]](#Eq:Def:MRV){reference-type="eqref" reference="Eq:Def:MRV"}. 
Let $X$ be a multivariate regularly varying random variable with limit measure $\mu$. Then there exists an $\alpha>0$ such that for all $\lambda>0$ and $A\in\mathcal{B}(B^*)$ we have $$\mu(\lambda A)=\lambda^{-\alpha}\mu(A).$$ We call $\alpha$ the index of regular variation or tail index. To read off $\alpha$ directly from the limit measure, one may find the second of the following alternative characterisations of multivariate regular variation useful. [\[th:chara_MRV\]]{#th:chara_MRV label="th:chara_MRV"} Let $X$ be a random variable in a separable Banach space $B$. Each of the following statements is equivalent to $X$ being multivariate regularly varying with index $\alpha$. 1. [\[th:chara_MRV_1\]]{#th:chara_MRV_1 label="th:chara_MRV_1"} There exist a non-trivial measure $\mu$ and an increasing function $b(t)$ such that $$\begin{aligned} t\mathbb{P}(X/b(t)\in \cdot)\overset{t\to\infty}{\to}\mu\text{ in }\mathbb{M}(B^\ast)\,\,\text{ and }\,\,\frac{b(\lambda t)}{b(t)}\overset{t\to\infty}{\to}\lambda^{\frac{1}{\alpha}}\,\,\forall\lambda>0.\end{aligned}$$ 2. [\[th:chara_MRV_2\]]{#th:chara_MRV_2 label="th:chara_MRV_2"} There exists a probability measure $S$ on the unit sphere $\mathbb{S}_{||\cdot||}:=\{x \in B: ||x||=1\}$, also called the spectral measure, such that $$\frac{\mathbb{P}(X/t\in \cdot)}{\mathbb{P}(||X||>t)}\overset{t\to\infty}{\to}(\nu_\alpha\otimes S)\circ T^{-1}\text{ in }\mathbb{M}(B^\ast)$$ where $\nu_\alpha$ is a measure on $(0,\infty)$ determined by its values on the right-unbounded intervals $\nu_\alpha(t,\infty):=t^{-\alpha}$ and $T:(0,\infty)\times\mathbb{S}_{||\cdot||}\to B$ with $T(R,\Theta):=R\cdot\Theta$ is the polar coordinate transformation. Note that despite the suggestive notation here, limit measures coming from different normalising functions $b(t)$ may differ up to a multiplicative factor.
To obtain the limit measure from [\[Eq:Def:MRV\]](#Eq:Def:MRV){reference-type="eqref" reference="Eq:Def:MRV"} in Definition [\[D:MRV\]](#D:MRV){reference-type="ref" reference="D:MRV"} one may choose the normalising function $b(t):=F^\leftarrow_{||X||}(1-t^{-1})$, where $F^{\leftarrow}_{||X||}(u):=\inf\{x \in \mathbb{R}: \mathbb{P}(||X|| \leq x)\geq u\}$ stands for the generalised inverse of the distribution function of $||X||$.

## Main Result

In this section we begin our analysis of the asymptotic behavior of the weight vector in the framework of multivariate regular variation. To this end, consider first for fixed $n,r\in\mathbb{N}$ the random vector $D^r(n)=(D_1(n),\dots,D_r(n))$, where we set $D_k(n)=0$ if $k>N(0)+n$. One immediately finds a deterministic upper bound for the $\ell_1$-norm of the vector, namely $$||D^r(n)||_1\leq||D^r(0)||_1+n\cdot(l+\beta),$$ which in view of Remark [\[Rem:RVofNorm\]](#Rem:RVofNorm){reference-type="ref" reference="Rem:RVofNorm"} rules out multivariate regular variation of $D^r(n)$. We thus need the number of nodes in the system to be random in order to witness heavy-tailed behavior of the weight vector. Viewing the number of nodes or, equivalently, the number of completed steps in the evolution of a network as random is also reasonable from a modelling perspective: We typically do not observe a network after an a priori known number of steps, in particular if new nodes are added according to some random arrival process instead of regularly over time. The results in Section [2.2](#Subsec:ASLimit){reference-type="ref" reference="Subsec:ASLimit"} imply that the number of nodes is the driving factor behind an extremal behavior of node weights. Our approach is thus to assume a heavy-tailed behavior of $N$, which we choose to be (univariate) regularly varying. Many typical regularly varying distributions are continuous distributions (e.g.
Pareto or Student-$t$), but if we round any regularly varying random variable $Y$ to $\lfloor Y \rfloor$, the result is again regularly varying, leading to a wide variety of possible distributions for $N \in \mathbb{N}_0$. Now, an often observed principle in extreme value theory is derived from Breiman's Lemma and applies in settings in which, roughly speaking, a system consists of a heavy-tailed component and a light-tailed one. Then, the heavy-tailed component will typically drive the extremal behavior of the system and determine its index of regular variation, while the light-tailed one may well have an impact on the extremal dependence, in terms of the form of the spectral measure. It turns out that in our case a similar Breiman result, more precisely an extension of Theorem 3 in [@wang21] adapted to Banach space-valued processes, applies as well. [\[Breiman_generalised\]]{#Breiman_generalised label="Breiman_generalised"} Let $(X_t)_{t\in\mathbb{R}^+}$ be an at least one-sidedly continuous stochastic process with values in a separable Banach space $(B, ||\cdot||)$ and let $T$ be a positive random variable, such that the following conditions are satisfied: 1. $T$ is regularly varying with index $\alpha>0$, i.e. for some scaling function $b(t)$ we have $$t\cdot\mathbb{P}\Bigl(\frac{T}{b(t)}>\lambda\Bigr)\overset{t\to\infty}{\to}\lambda^{-\alpha}=\nu_\alpha((\lambda,\infty)) \quad \forall \lambda>0.$$ 2. $(X_t)$ and $T$ are independent, i.e. $\sigma(T)\perp\!\!\!\perp\sigma(X_t,\,t\geq 0)$. 3. [\[Brei_generalised:conv\]]{#Brei_generalised:conv label="Brei_generalised:conv"} $(X_t)$ converges to some $X_\infty\in B^\ast$ almost surely as $t \to \infty$. 4.
[\[Brei_generalised:moment\]]{#Brei_generalised:moment label="Brei_generalised:moment"} For some $\alpha'>\alpha$ the following moment condition holds $$\sup_{t\geq 0}\mathop{\mathrm{\mathbb{E}}}\big(||X_t||^{\alpha'}\big)<\infty.$$ Then $T\cdot X_T$ is multivariate regularly varying in $B^\ast$ with index $\alpha$ and $$t\mathbb{P}\Bigl(\frac{T\cdot X_T}{b(t)}\in\cdot\Bigr)\overset{t\to\infty}{\to}[\mathbb{P}(X_\infty\in\cdot)\otimes \nu_\alpha]\circ h^{-1}\, \text{ in }\mathbb{M}(B^\ast),\label{eq:Breiman_generalised}$$ where $h:B\times \mathbb{R}_+\to B,\,h(x,y)=xy$. The proof of this theorem is deferred to Section [5.2](#Sec:BreimanProof){reference-type="ref" reference="Sec:BreimanProof"}. Using this we can now state and prove our findings: [\[th:mrvparg\]]{#th:mrvparg label="th:mrvparg"} Consider a preferential attachment random graph with the notation as introduced in the last sections. In particular, denote by $l$ and $\beta$ the number of edges added with each new vertex and the offset of the weight function, respectively. For fixed $n,r\in\mathbb{N}$ let $D^r(n):=(D_1(n),\dots,D_r(n))$ be the corresponding weight vector and let $N$ be a positive integer-valued random variable. If the following assumptions are satisfied 1. [\[th:mrvparg:independence\]]{#th:mrvparg:independence label="th:mrvparg:independence"} $N$ and $(D^r(n))_n$ are independent and 2. [\[th:mrvparg:rv\]]{#th:mrvparg:rv label="th:mrvparg:rv"} $N$ is regularly varying with index $\alpha>0$, then $D^r(N)$ is multivariate regularly varying with index $\alpha\cdot \frac{l+\beta}{l}$. *Proof.* In order to apply Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"} consider the process $(X_t)$ $$X_t:=0,\,t<1\qquad X_t:=\frac{1}{t}\cdot D^r(\bigl\lceil t^{\frac{l+\beta}{l}}\bigr\rceil),\, t\geq 1$$ and the random variable $T$ with $$T:=N^{\frac{l}{l+\beta}}.$$ Then $T\cdot X_T = D^r(N)$.
Therefore, all that remains is to check the conditions from Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"} and determine the index of regular variation. 1. The independence follows immediately from assumption [\[th:mrvparg:independence\]](#th:mrvparg:independence){reference-type="ref" reference="th:mrvparg:independence"}. As for the regular variation, we consider the equivalent formulation of assumption [\[th:mrvparg:rv\]](#th:mrvparg:rv){reference-type="ref" reference="th:mrvparg:rv"} $$\frac{\mathbb{P}(N>x\lambda)}{\mathbb{P}(N>x)}\to\lambda^{-\alpha},\quad x\to\infty$$ which implies $$\frac{\mathbb{P}(T>x\lambda)}{\mathbb{P}(T>x)}=\frac{\mathbb{P}(N>x^{(l+\beta)/l}\lambda^{(l+\beta)/l})}{\mathbb{P}(N>x^{(l+\beta)/l})}\to\lambda^{-\alpha\cdot\frac{l+\beta}{l}},\quad x\to\infty$$ i.e. $T$ is regularly varying with index $\alpha\cdot\frac{l+\beta}{l}$. 2. After the transformation $t':=t^{l/(l+\beta)}$ we apply Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"} to: $$\begin{aligned} X_{t'}=\frac{1}{t^{l/(l+\beta)}}D^r(\ceil{t})=\Bigl(\frac{\ceil{t}}{t}\Bigr)^{\frac{l}{l+\beta}}\frac{1}{\ceil{t}^{l/(l+\beta)}}D^r(\ceil{t}),\quad t\geq 1\end{aligned}$$ and the almost sure convergence follows. 3. Because of the definition of $(X_t)$ we will only consider $t\geq 1$ and we set again $t':=t^{\frac{l}{l+\beta}}$.
For $q\in\mathbb{N}$ we have by Lemma [\[GraphUrnDuality\]](#GraphUrnDuality){reference-type="ref" reference="GraphUrnDuality"} and Lemma [\[L:MomentCond\]](#L:MomentCond){reference-type="ref" reference="L:MomentCond"} that $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}(||X_{t'}||_1^q)&=\frac{1}{t^{ql/(l+\beta)}}\mathop{\mathrm{\mathbb{E}}}([D_1(\ceil{t})+\dots+D_r(\ceil{t})]^q)\\ &=\frac{1}{t^{ql/(l+\beta)}}\mathop{\mathrm{\mathbb{E}}}((C_1(l\cdot\ceil{t})+\dots+C_r(l\cdot\ceil{t}))^q)\\ &\leq C\cdot\frac{\ceil{t}^{ql/(l+\beta)}}{t^{ql/(l+\beta)}}\leq C\cdot 2^{\frac{ql}{l+\beta}}\end{aligned}$$ for some $C>0$, which completes the verification of the conditions of Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"}. ◻ For the actual in-degree vector without the weight function's offset, we obtain the following corollary. [\[Cor:DegreeRV\]]{#Cor:DegreeRV label="Cor:DegreeRV"} Under the same assumptions as in Theorem [\[th:mrvparg\]](#th:mrvparg){reference-type="ref" reference="th:mrvparg"} the vector $D^r(N)-\beta$ is multivariate regularly varying with the same index of regular variation and the same limit measure as $D^r(N)$. *Proof.* This is a consequence of Lemma 3.12 from [@mikosch06]. ◻

## Characterisation of the Spectral Measure {#sec:SpecMeas}

Having verified the multivariate regular variation of the vector $D^r(N)$, the natural next step is to characterise its spectral measure $S$. Over the next few lemmata we shall once more switch into the urn perspective for this purpose. Because of the close relation to Pólya urns the following Beta and Beta-related distributions will come up, which we repeat here for convenience and since there exist different parameterisations in the literature. 1.
A random variable on $[0,1]$ is said to follow a beta distribution $\mathsf{Beta}{(}{a,b}{)}$, with parameters $a,b>0$ if it has a density function $$f(x)=\frac{1}{\text{B}(a,b)}x^{a-1}(1-x)^{b-1}\cdot \mathds{1}_{[0,1]}(x),$$ where $\text{B}(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ is the beta function. 2. A random vector with values in the standard ($r$-1)-simplex in $\mathbb{R}^r_+$, i.e. the set $\Sigma^{r-1}:=\{x\in\mathbb{R}^r_+\,:\,x_1+\dots+x_r=1\}$, is said to follow a Dirichlet distribution $\mathsf{Dir}{(}{a_1,\dots,a_r}{)}$ with parameters $a_1,\dots,a_r>0$ if it has a density function $$\begin{aligned} f(x)=\frac{1}{\text{B}(a_1,\dots,a_r)}\prod_{i=1}^{r}x_i^{a_i-1}\cdot \mathds{1}_{\Sigma^{r-1}}(x),\end{aligned}$$ where $\text{B}(a_1,\dots,a_r):=\frac{\Gamma(a_1)\cdots\Gamma(a_r)}{\Gamma(a_1+\dots+a_r)}$ is the multivariate beta function. 3. A random vector with values in the standard ($r$-1)-simplex in $\mathbb{R}_+^r$ is said to follow a generalised Dirichlet distribution $\mathsf{GDir}{(}{a_1,b_1,\dots,a_{r-1},b_{r-1}}{)}$ with parameters $a_1,b_1,\dots,a_{r-1},b_{r-1}>0$ if it has a density function: $$\begin{aligned} f(x)=\frac{1}{\prod_{i=1}^{r-1}\text{B}(a_i,b_i)}x_r^{b_{r-1}-1}\prod_{i=1}^{r-1}\Bigl(x_i^{a_i-1}\Bigl(\sum_{j=i}^rx_j\Bigr)^{b_{i-1}-(a_i+b_i)}\Bigr)\cdot \mathds{1}_{\Sigma^{r-1}}(x),\end{aligned}$$ where $b_0$ is arbitrary. [\[Rem:ProofSketch_Dir\]]{#Rem:ProofSketch_Dir label="Rem:ProofSketch_Dir"} The generalised Dirichlet distribution boils down to the standard version if its parameters satisfy $b_{i-1}=a_i+b_i$. Its density function then depends only on the parameters $a_1,\dots,a_{r-1}$ and $b_{r-1}$ which can be viewed as the parameters $a_1,\dots,a_r$ of the standard Dirichlet distribution. 
These two distributions are also connected to the beta distribution through a stick breaking experiment, in which a random vector is constructed by breaking independent, beta distributed fractions of a unit length stick for a fixed number of times, and the lengths of the resulting pieces form the components of the random vector. In general, this approach leads to a generalised Dirichlet distribution with parameters equal to those of the beta distributed fractions in the order in which they were broken off (see [@mosimann69]). To obtain a standard Dirichlet vector, the beta distributions must satisfy the parameter constraints mentioned above. We begin with a well-known result for traditional multi-colour Pólya urns, i.e. our urn model but without immigration. The proof can be found in [@blackwell73]. [\[L:Polya_Dir\]]{#L:Polya_Dir label="L:Polya_Dir"} Consider an urn with balls of $r\in\mathbb{N}$ different colours and starting amounts $a^r=(a_1,\dots,a_r)\in\mathbb{R}^r$. Let $X(n)$, $n\in\mathbb{N}$ be the sequence of colours drawn in each time step by following a traditional multi-colour Pólya urn scheme where we add one ball of the drawn colour in each step. Then, for each colour $i$, the proportions of ball counts $$\frac{a_i+\sum_{k=1}^n\mathds{1}_{\{i\}}(X(k))}{\sum_{j=1}^ra_j+n},\quad i=1,\dots,r$$ converge almost surely to some $Y_i$ such that: $$Y^r=(Y_1,\dots,Y_r)\overset{\text{d}}{=}\mathsf{Dir}{(}{a_1,\dots,a_r}{)}$$ **Notation 1**. *In order to describe the spectral measure it is convenient to look at the projection of the limiting vector $$\zeta^r=(\zeta_1, \ldots, \zeta_r)=(\lim_{n\to\infty}n^{-l/(l+\beta)}D_i(n))_{1 \leq i \leq r},$$ on the $\ell_1$-unit sphere along with some derived proportions. 
We are specifically interested in two orderings of this vector: forwards $$\begin{aligned} S^r&:=(\zeta_1,\dots,\zeta_r)/(\zeta_1+\dots+\zeta_r), \; \mbox{with}\\ B_k&:=\frac{S_k}{S_k+\dots+S_r}=\frac{\zeta_k}{\zeta_k+\dots+\zeta_r},\quad 1\leq k \leq r\\ \intertext{and backwards} S^r_{\downarrow}&=(S_r,\dots,S_1),\,\mbox{ with } B_k^{\downarrow}:=\frac{S_k}{S_1+\dots+S_k}=\frac{\zeta_k}{\zeta_1+\dots+\zeta_k},\quad 1\leq k \leq r.\end{aligned}$$* In order to apply Lemma [\[L:Polya_Dir\]](#L:Polya_Dir){reference-type="ref" reference="L:Polya_Dir"} to our problem one needs to observe that once all colours we want to consider are present in the urn (which is at time $n_0^r=l(r-s) \vee 0$), these colours, considered separately from the rest of the urn, will behave just like they would in a traditional multi-colour Pólya urn. We thus obtain the following mixture distribution for $S^r$. [\[L:Spectral_distr\]]{#L:Spectral_distr label="L:Spectral_distr"} Let $r\in\mathbb{N}$, $C^r(n):=(C_1(n),\dots,C_r(n))$, $n \in \mathbb{N},$ be the vector of the first $r$ colours in the urn model after $n$ steps with $s$ initial colours from Lemma [\[GraphUrnDuality\]](#GraphUrnDuality){reference-type="ref" reference="GraphUrnDuality"} and let $S^r$ be as in Notation [Notation 1](#not:Xr){reference-type="ref" reference="not:Xr"} the $\ell_1$-projection of the limiting vector in the corresponding random graph model. 1. [\[L:Spectral_distr_1\]]{#L:Spectral_distr_1 label="L:Spectral_distr_1"} The limiting vector $S^r$ follows a mixture of Dirichlet distributions, i.e. $$S^r\overset{\text{d}}{=}\,\sum_{\mathclap{c\in\mathbb{R}^r}}\,\mathbb{P}(C^r(n_0^r)=c)\cdot\mathsf{Dir}{(}{c_1,\dots,c_r}{)}.$$ 2. 
[\[L:Spectral_distr_2\]]{#L:Spectral_distr_2 label="L:Spectral_distr_2"} For the vector obtained by inverting the order of components in $S^r$ we additionally have $$S^r_{\downarrow}\overset{\text{d}}{=}\mathsf{GDir}{(}{a_r,b_r,\dots,a_2,b_2}{)}$$ where $$(a_k,b_k)=\begin{cases}(C_k(0), \sum_{i=1}^{k-1}C_i(0)) \quad&\text{if } 2 \leq k\leq s\\ (\beta,\sum_{i=1}^{k-1}C_i(0)+(l+\beta)(k-1-s)+l)\quad &\text{if } k>s\end{cases},$$ The proof relies on the following lemma. [\[L:Hilfs_Beta\]]{#L:Hilfs_Beta label="L:Hilfs_Beta"} Let $B_k,\,B_k^\downarrow$, $1\leq k\leq r$ be as in Notation [Notation 1](#not:Xr){reference-type="ref" reference="not:Xr"}. We have 1. [\[L:Hilfs_Beta:1\]]{#L:Hilfs_Beta:1 label="L:Hilfs_Beta:1"} $B_1^\downarrow\equiv 1$\ $\displaystyle B_k^\downarrow\overset{\text{d}}{=}\begin{cases}\mathsf{Beta}{(}{C_k(0),\sum_{i=1}^{k-1}C_i(0)}{)} \quad&\text{if } 1<k\leq s\\ \mathsf{Beta}{(}{\beta,\sum_{i=1}^{k-1}C_i(0)+(l+\beta)(k-1-s)+l}{)}\quad &\text{if } k>s\end{cases}$ 2. [\[L:Hilfs_Beta:2\]]{#L:Hilfs_Beta:2 label="L:Hilfs_Beta:2"} The $B_1^\downarrow,\dots,B_r^\downarrow$ are independent. 3. [\[L:Hilfs_Beta:3\]]{#L:Hilfs_Beta:3 label="L:Hilfs_Beta:3"} Both $(B_1,\dots,B_r)$ and $(B_1^\downarrow,\dots,B_r^\downarrow)$ are independent of $\zeta_1+\dots+\zeta_r$. The proof of this lemma is deferred to section [5.4](#Sec:BetaIndProof){reference-type="ref" reference="Sec:BetaIndProof"}. *Proof of Theorem [\[L:Spectral_distr\]](#L:Spectral_distr){reference-type="ref" reference="L:Spectral_distr"}.* 1. We start by conditioning the urn on the time $n_0^r$, which is how the mixture distributions in the statement arise. Given that $C^r(n_0^r)=(c_1,\dots,c_r)\in\mathbb{R}^r$, let $t_i$ be the random time where for the $i$th time a ball from one of the colours $1$ to $r$ is chosen. 
At these times the vector of ratios of balls of each colour to the total number of balls with colours $1$ to $r$ behaves just like in a traditional multi-colour Pólya urn and therefore by Lemma [\[L:Polya_Dir\]](#L:Polya_Dir){reference-type="ref" reference="L:Polya_Dir"} converges almost surely to a limit which follows a Dirichlet distribution with parameters $c_1,\dots,c_r$. At the same time, however, it converges (conditionally on $C^r(n_0^r)=(c_1,\dots,c_r)$) to $S^r$, and so the statement follows. 2. We write $$S_{\downarrow}^r=\left(B_r^{\downarrow},(1-B_r^{\downarrow})B_{r-1}^{\downarrow}, \ldots, \left(\prod_{i=2}^r (1-B_{i}^{\downarrow})\right)B_{1}^{\downarrow}\right)$$ and since by Lemma [\[L:Hilfs_Beta\]](#L:Hilfs_Beta){reference-type="ref" reference="L:Hilfs_Beta"} the $B_{i}^{\downarrow}$'s are independent and have beta distributions, we can apply Remark [\[Rem:ProofSketch_Dir\]](#Rem:ProofSketch_Dir){reference-type="ref" reference="Rem:ProofSketch_Dir"} to arrive at the generalised Dirichlet distribution with the given parameters.  ◻ We now have all the tools to characterise the spectral measure. [\[C:Spectral_Measure\]]{#C:Spectral_Measure label="C:Spectral_Measure"} The spectral measure of regular variation for the random vectors $(D_1(N),$ $\dots,D_r(N))$ and $(D_r(N),\dots,D_1(N))$, with respect to the norm $||\cdot||_1$, coincides with the distribution of $S^{r}$ or $S^r_\downarrow$, respectively, from Theorem [\[L:Spectral_distr\]](#L:Spectral_distr){reference-type="ref" reference="L:Spectral_distr"}. *Proof.* We restrict ourselves to the forwards case, i.e. focus on $(D_1(N),$ $\dots,D_r(N))=D^r(N)$.
By Equation [\[eq:Breiman_generalised\]](#eq:Breiman_generalised){reference-type="eqref" reference="eq:Breiman_generalised"} for the limit measure $\mu$ provided in Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"} and with $X_\infty$ given by $\zeta^r$ we get $$\mu=(\mathbb{P}(\zeta^r\in\cdot)\otimes \nu_\alpha)\circ h^{-1}$$ with $h:\mathbb{R}^r_+\times \mathbb{R}_+\to\mathbb{R}^r_+,\,h(y,x)=xy$ and $\zeta^r=(\zeta_1,\dots,\zeta_r)$. On the other hand, by Theorem [\[th:chara_MRV\]](#th:chara_MRV){reference-type="ref" reference="th:chara_MRV"}, 2) the corresponding spectral measure $S(A)$, for an $S$-continuity Borel set $A \subset \mathbb{S}_{\| \cdot \|}$ is given by $$\begin{aligned} S(A)&=\lim_{t \to \infty} \frac{\mathbb{P}\left(\frac{D^r(N)}{\|D^r(N)\|_1} \in A, \|D^r(N)\|_1>t\right)}{\mathbb{P}(\|D^r(N)\|_1>t)}\\ &=\frac{\mu(\{x \in \mathbb{R}^r: \| x \|_1 >1, x/\|x\|_1 \in A\})}{\mu(\{x \in \mathbb{R}^r: \| x \|_1 >1\})}.\end{aligned}$$ Now, $$\begin{aligned} & \quad \; \mu(\{x \in \mathbb{R}^r: \| x \|_1 >1, x/\|x\|_1 \in A\}) \\ &=\int_{\mathbb{R}_+^r}\int_{\mathbb{R}_+}\mathds{1}_{\{x\cdot\| y\|_1>1\}}\mathds{1}_{\{\frac{y}{\| y\|_1}\in A\}}\,\nu_\alpha(\textup{d}x)\,\mathbb{P}(\zeta^r\in\textup{d}y)\\ &=\int_{\mathbb{R}_+^r}\| y\|_1^\alpha\mathds{1}_{\{\frac{y}{\| y\|_1}\in A\}}\,\mathbb{P}(\zeta^r\in\textup{d}y)\\ &=\mathop{\mathrm{\mathbb{E}}}(\|\zeta^r\|_1^\alpha)\cdot\mathbb{P}\Bigl(\frac{\zeta^r}{\|\zeta^r\|_1}\in A\Bigr),\end{aligned}$$ where the last equality follows from the independence of $\|\zeta^r\|_1=\zeta_1+\dots+\zeta_r$ and $\frac{\zeta^r}{\|\zeta^r\|_1}=f(B_1,\dots,B_{r})$ (statement [\[L:Hilfs_Beta:3\]](#L:Hilfs_Beta:3){reference-type="ref" reference="L:Hilfs_Beta:3"} in Lemma [\[L:Hilfs_Beta\]](#L:Hilfs_Beta){reference-type="ref" reference="L:Hilfs_Beta"}), with $f$ given componentwise by: $$f^i(B_1,\dots,B_r)=(1-B_1)(1-B_2)\dots (1-B_{i-1})B_i, \quad(B_r:=1).$$ Going back to
determining $S$ we thus get $$S(A)=\frac{\mathop{\mathrm{\mathbb{E}}}(\|\zeta^r\|_1^\alpha)\cdot\mathbb{P}(\frac{\zeta^r}{\|\zeta^r\|_1}\in A)}{\mathop{\mathrm{\mathbb{E}}}(\|\zeta^r\|_1^\alpha)}=\mathbb{P}\Bigl(\frac{(\zeta_1,\dots,\zeta_r)}{\zeta_1+\dots+\zeta_r}\in A\Bigr)=\mathbb{P}(S^r\in A)$$ and the statement follows. ◻ We end this section with an example of an application for the spectral measure. [\[Ex:Appl1\]]{#Ex:Appl1 label="Ex:Appl1"} We can use the limit/spectral measure to approximate conditional probabilities given large exceedances of the weight vector. A simple special case is to condition on $\{\|D^r(N)\|_1> t\}$ for some large $t$. Let $A$ be a set in $\mathcal{B}(B\setminus\{0\})$ bounded away from $0$ and with $\mu(\partial A)=0$. Then we have $$\begin{aligned} \mathbb{P}(D^r(N)\in tA\,|\,\|D^r(N)\|_1>t)&=\frac{\mathbb{P}(D^r(N)/t\in A\cap\{x:\|x\|_1> 1\})}{\mathbb{P}(\|D^r(N)\|_1> t)}\\ &\approx (\nu_\alpha\otimes S)\circ T^{-1}(A\cap\{x:\|x\|_1> 1\}),\end{aligned}$$ which does not depend on $t$ anymore. So as long as we know that our $r$ vertices of interest have received a large number of edges, we can approximate the above probability using the limit/spectral measure. Next let us assume that $A$ only contains information about the proportions of the $D_1(N),\dots,D_r(N)$, i.e. that $A$ is of the form $\{u\cdot\theta\,|\,(u,\theta)\in(1,\infty)\times A^*\}$ with $A^*\in\mathcal{B}(\mathbb{S}_{\|\cdot\|_1})$. Then $$\mathbb{P}(D^r(N)\in tA\,|\,\|D^r(N)\|_1> t)\approx S(A^*)\text{ for large }t.$$ So given that the total weight of our vector $D^r(N)$ is large, the proportions of its components follow roughly a mixture of Dirichlet distributions or a reversed generalised Dirichlet distribution.
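As a numerical sanity check for the last display, suppose (hypothetically) that the mixture over initial configurations collapses to a single component, i.e. that the ball counts at time $n_0^r$ are deterministic, say $c=(2,1,1)$ for $r=3$. The spectral probability $S(A^*)$ can then be estimated by plain Monte Carlo sampling from the corresponding Dirichlet distribution:

```python
import numpy as np

# Hypothetical deterministic ball counts c = C^r(n_0^r) of the first
# r = 3 colours; the mixture then has the single component Dir(c).
c = np.array([2.0, 1.0, 1.0])

rng = np.random.default_rng(7)
samples = rng.dirichlet(c, size=200_000)  # Monte Carlo draws of S^r

# A* = {theta on the simplex : theta_1 > 1/2}: the oldest vertex holds
# more than half of the combined weight of the first r vertices.
S_Astar = (samples[:, 0] > 0.5).mean()
```

Here $A^*$ is the event that the oldest vertex carries more than half of the combined weight of the first $r$ vertices; for $c=(2,1,1)$ the first coordinate of a $\mathsf{Dir}{(}{c}{)}$-distributed vector is $\mathsf{Beta}{(}{2,2}{)}$-distributed, so the estimate should be close to the exact value $1/2$.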
# Generalisation to Sequence Spaces {#sec:Seq_Spaces}

## Motivation and Framework

Example [\[Ex:Appl1\]](#Ex:Appl1){reference-type="ref" reference="Ex:Appl1"} has shown that the results developed in the previous section allow us to approximate probabilities of extreme events, but with the restriction that those extreme events may only depend on a fixed number $r$ of the first nodes in the graph. For several natural applications this approach is not sufficient, as we would for example like to approximate probabilities of events involving the maximum degree of all existing nodes. In this section, we will extend our scope to include the asymptotic behavior of the entire weight process rather than just its first $r$ vertices. We will thus again work in the framework of regular variation as in Section [3.1](#subsec:BackgroundMRV){reference-type="ref" reference="subsec:BackgroundMRV"}, but with a suitably adjusted Banach space $B$ for sequences. The regular variation of random sequences has previously been studied in [@tillier18]. In order to find an appropriate $B$ we start with the space of all finite sequences: $$c_{00}:=\{x=(x_n)_{n\in\mathbb{N}}\in\mathbb{R}^\mathbb{N}|\exists m\in\mathbb{N}\,\forall k\geq m: x_k=0\}.$$ At any time $n$ our weight process as well as the urn process can be represented as elements of this space: $$\begin{aligned} D(n)&:=(D_1(n),\dots,D_{N(0)}(n),D_{N(0)+1}(n),\dots,D_{N(n)}(n),0,\dots),\\ C(n)&:=(C_1(n\cdot l),\dots,C_{s}(n\cdot l),C_{s+1}(n\cdot l),\dots,C_{s+n}(n\cdot l),0,\dots).\end{aligned}$$ A crucial assumption in Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"} is the almost sure convergence of the process and so we set $B$ equal to the completion of $c_{00}$ with respect to a norm $\|\cdot\|$ on $\mathbb{R}^\mathbb{N}$, i.e.
$$B=c_{\|\cdot\|}:=\bigl\{x\in\mathbb{R}^\mathbb{N}\big|\,\|x\|<\infty\text{ and }\exists \, x_n \in c_{00}, n \in \mathbb{N}:\lim_{n\to\infty}\|x-x_n\|= 0\bigr\}.$$ In contrast to the previously studied finite dimensional setting, different norms on sequence spaces are no longer equivalent, which means that in order to study regular variation we have to find a suitable norm $\|\cdot\|$ for our model, where we restrict ourselves to the $\ell_p$-norms. Then, by construction, $B$ is a separable Banach space which allows us to employ the framework from Section [3.1](#subsec:BackgroundMRV){reference-type="ref" reference="subsec:BackgroundMRV"} again. ## The Breiman Conditions in Sequence Space Our goal is to establish a result similar to Theorem [\[th:mrvparg\]](#th:mrvparg){reference-type="ref" reference="th:mrvparg"} again by virtue of Breiman's Theorem. To this end, we need to check that Conditions [\[Brei_generalised:conv\]](#Brei_generalised:conv){reference-type="ref" reference="Brei_generalised:conv"} and [\[Brei_generalised:moment\]](#Brei_generalised:moment){reference-type="ref" reference="Brei_generalised:moment"} of Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"} are satisfied, which is shown in the following Proposition. [\[P:BreiCond\]]{#P:BreiCond label="P:BreiCond"} Assume the graph / urn model of Lemma [\[GraphUrnDuality\]](#GraphUrnDuality){reference-type="ref" reference="GraphUrnDuality"} and let $$\zeta:=(\zeta_i)_{i \in \mathbb{N}}=\left(\lim_{n \to \infty} \frac{D_i(n)}{n^{l/(l+\beta)}}\right)_{i \in \mathbb{N}},$$ cf. Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 2). Furthermore, let $p\in[1,\infty]$. - If $p>\frac{l+\beta}{l}$, then 1. [\[P:BreiCond:zetainc\]]{#P:BreiCond:zetainc label="P:BreiCond:zetainc"} $\zeta\in c_{||\cdot||_p}$ a.s. since both 1. 
[\[P:BreiCond:zeta_p-smble\]]{#P:BreiCond:zeta_p-smble label="P:BreiCond:zeta_p-smble"} $||\zeta||_p<\infty$ a.s. and 2. [\[P:BreiCond:p-conv\]]{#P:BreiCond:p-conv label="P:BreiCond:p-conv"} $\begin{aligned} \frac{D(n)}{n^{l/(l+\beta)}}\to\zeta \text{ a.s. in } c_{||\cdot||_p} \end{aligned}$ as $n\to\infty$ 2. [\[P:BreiCond:moments\]]{#P:BreiCond:moments label="P:BreiCond:moments"} $\begin{aligned}\sup_n\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl\lVert\frac{D(n)}{n^{l/(l+\beta)}}\Bigr\rVert_p^\alpha\Bigr)<\infty\end{aligned}$ for every $\alpha>0$. - If $p<\frac{l+\beta}{l}$, then 1. [\[P:BreiCond:no_p-conv\]]{#P:BreiCond:no_p-conv label="P:BreiCond:no_p-conv"} $\begin{aligned} \frac{D(n)}{n^{l/(l+\beta)}}\not\to\zeta \text{ a.s. in } c_{||\cdot||_p} \end{aligned}$ as $n\to\infty$. The proof of this proposition is deferred to Section [5.5](#Sec:BreiCondProof){reference-type="ref" reference="Sec:BreiCondProof"}. Proposition [\[P:BreiCond\]](#P:BreiCond){reference-type="ref" reference="P:BreiCond"} generalises the convergence results of Theorem 2 in [@pekoz17] from weak to almost sure convergence and allows for general $l$ and $\beta$. ## Regular Variation in Sequence Space [\[Th:RVsequence\]]{#Th:RVsequence label="Th:RVsequence"} Consider a preferential attachment random graph with the notation introduced in the previous sections. Let $D(n):=(D_1(n),$ $D_2(n),\dots)$ be the corresponding weight sequence at time $n\in\mathbb{N}$ and let $N$ be a positive integer-valued random variable. Let $p \in (\frac{l+\beta}{l},\infty]$ and $||\cdot||$ be a norm such that there exists some $C>0$ with $||\cdot||\leq C||\cdot||_p$. If the following conditions are satisfied: 1. $N$ and $(D(n))_n$ are independent and 2. $N$ is regularly varying with index $\alpha>0$ then $D(N)$ is multivariate regularly varying in $c_{||\cdot||}$ with index $\alpha\cdot \frac{l+\beta}{l}$.
*Proof.* Note first that our assumptions guarantee that the convergence results of Proposition [\[P:BreiCond\]](#P:BreiCond){reference-type="ref" reference="P:BreiCond"} hold analogously with the norm $\|\cdot \|_p$ replaced by $\|\cdot \|$. Thus, the assumptions of Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"} are met and the proof is completely analogous to the proof of Theorem [\[th:mrvparg\]](#th:mrvparg){reference-type="ref" reference="th:mrvparg"}. ◻ This result finally allows us to derive (univariate) regular variation of the maximum degree of a preferential attachment model. [\[Cor:MaxRV\]]{#Cor:MaxRV label="Cor:MaxRV"} Under the assumptions of Theorem [\[Th:RVsequence\]](#Th:RVsequence){reference-type="ref" reference="Th:RVsequence"} the maximum degree $\sup_{i \in \mathbb{N}}D_i(N)$ is regularly varying with index $\alpha \cdot \frac{l+\beta}{l}$. *Proof.* This follows from Theorem [\[Th:RVsequence\]](#Th:RVsequence){reference-type="ref" reference="Th:RVsequence"} applied to $\| \cdot \|_\infty$ and Remark [\[Rem:RVofNorm\]](#Rem:RVofNorm){reference-type="ref" reference="Rem:RVofNorm"}. ◻ # Auxiliary Results and Deferred Proofs {#sec:appendix} ## Proof of Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"} and Uniform Integrability {#Sec:MartingaleProof} For the proof of several lemmata and other auxiliary results we make use of a corollary to Stirling's formula, which can be found as 6.1.46 in [@abramowitz84] and reads [\[L:Stirling\]]{#L:Stirling label="L:Stirling"} $$\begin{aligned} \lim_{n\to\infty}n^{b-a}\frac{\Gamma(a+n)}{\Gamma(b+n)}=1\quad\forall a,b\in\mathbb{R}.\end{aligned}$$ This allows us to estimate binomial coefficients by their respective powers and vice versa.
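As a quick sanity check, the limit in Lemma [\[L:Stirling\]](#L:Stirling){reference-type="ref" reference="L:Stirling"} can be verified numerically. The following Python snippet is purely illustrative and not part of the argument; it evaluates $n^{b-a}\Gamma(a+n)/\Gamma(b+n)$ via log-gamma functions to avoid overflow for large $n$:

```python
import math

def stirling_ratio(a: float, b: float, n: int) -> float:
    """n^(b-a) * Gamma(a+n) / Gamma(b+n), computed via lgamma to avoid overflow."""
    return math.exp((b - a) * math.log(n) + math.lgamma(a + n) - math.lgamma(b + n))

# The ratio approaches 1 as n grows, for arbitrary real a, b:
for n in (10, 10_000, 10_000_000):
    print(n, stirling_ratio(0.5, 2.25, n))
```

For $a=0.5$, $b=2.25$ the printed ratios approach $1$ at rate $O(1/n)$, in line with the first-order Stirling correction.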
[\[L:ineq_mart_power_form\]]{#L:ineq_mart_power_form label="L:ineq_mart_power_form"} Under the assumptions of Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, we have $$\prod_{i=1}^r\frac{D_i^{k_i}(n)}{\Gamma(k_i+1)}\sim \prod_{i=1}^r\binom{D_i(n)+k_i-1}{k_i}\quad \text{ a.s.}$$ and there exist constants $C_1,C_2>0$ independent of $n$ such that almost surely $$\begin{aligned} C_1\cdot\prod_{i=1}^r\binom{D_i(n)+k_i-1}{k_i}\leq \prod_{i=1}^rD_i^{k_i}(n) \leq C_2\cdot\prod_{i=1}^r\binom{D_i(n)+k_i-1}{k_i}.\label{eq:mart_power_form}\end{aligned}$$ *Proof.* Recall from Lemma [\[L:InftyDegree\]](#L:InftyDegree){reference-type="ref" reference="L:InftyDegree"} that almost surely $D_i(n)\to\infty$ as $n\to\infty$. Thus, by Lemma [\[L:Stirling\]](#L:Stirling){reference-type="ref" reference="L:Stirling"} we can find for each $i=1, \ldots, r$ constants $0<C^i_1<C_2^i$ (depending on $k_i$) such that $$\begin{aligned} C_1^i \binom{D_i(n)+k_i-1}{k_i} = C_1^i \frac{\Gamma(D_i(n)+k_i)}{\Gamma(k_i+1)\Gamma(D_i(n))} \leq \frac{D_i^{k_i}(n)}{\Gamma(k_i+1)} \leq C_2^i \binom{D_i(n)+k_i-1}{k_i} \end{aligned}$$ holds for all $n \geq (r-s) \vee 0$. Combining those constants leads to [\[eq:mart_power_form\]](#eq:mart_power_form){reference-type="eqref" reference="eq:mart_power_form"}. ◻ We now turn to the proof of Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}. The following proof adapts a martingale approach from [@mori05] (Theorem 2.1). *Proof of Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}.* 1.
We call the total ball count at time $m$ in the corresponding urn model $$S_m:=z+m+\beta\cdot\Bigl\lfloor\frac{m}{l}\Bigr\rfloor\quad \text{with }z:=\sum_{i=1}^{N(0)}C_i(0)+j\cdot \beta.$$ Only one colour obtains a new ball at a time, so $$\begin{aligned} &\mathop{\mathrm{\mathbb{E}}}\Bigl(\prod_{i=1}^r\binom{C_i(m+1)+k_i-1}{k_i}\Big|\mathcal{F}_m\Bigr)\\ &=\sum_{j=1}^r\mathop{\mathrm{\mathbb{E}}}\Bigl(\underset{i\neq j}{\prod_{i=1}^r}\binom{C_i(m)+k_i-1}{k_i}\binom{C_j(m)+k_j}{k_j}\cdot\mathds{1}_{\{C_j(m+1)=C_j(m)+1\}}\Big|\mathcal{F}_m\Bigr)\\ &\;\;\;+\mathop{\mathrm{\mathbb{E}}}\Bigl(\prod_{i=1}^r\binom{C_i(m)+k_i-1}{k_i}\cdot\mathds{1}_{\bigcap_{j=1}^r\{C_j(m+1)=C_j(m)\}}\Big|\mathcal{F}_m\Bigr)\\ \intertext{and using the transition probabilities of the urn model this equals} & \;\;\; \sum_{j=1}^r\underset{i\neq j}{\prod_{i=1}^r}\binom{C_i(m)+k_i-1}{k_i}\cdot \binom{C_j(m)+k_j-1}{k_j}\cdot\frac{C_j(m)+k_j}{C_j(m)}\cdot\frac{C_j(m)}{S_m}\\ &\;\;\; +\prod_{i=1}^r\binom{C_i(m)+k_i-1}{k_i}\cdot \Bigl(1-\sum_{j=1}^r\frac{C_j(m)}{S_m}\Bigr)\\ &=\prod_{i=1}^r\binom{C_i(m)+k_i-1}{k_i}\Bigl(\sum_{j=1}^r\frac{C_j(m)+k_j}{S_m}+1-\sum_{j=1}^r\frac{ C_j(m)}{S_m}\Bigr)\\ &=\prod_{i=1}^r\binom{C_i(m)+k_i-1}{k_i}\frac{S_m+k}{S_m}.\end{aligned}$$ Now consider times $m=n\cdot l$. These are the critical times where a new colour is migrated into the urn (a new vertex is added to the graph). 
In between nothing of interest happens and we can iterate the above calculation $l$ times to obtain: $$\mathop{\mathrm{\mathbb{E}}}\Bigl(\prod_{i=1}^r\binom{C_i((n+1)\cdot l)+k_i-1}{k_i}\Big|\mathcal{F}_{n\cdot l}\Bigr)=\prod_{i=1}^r\binom{C(n\cdot l)+k_i-1}{k_i}\prod_{i=0}^{l-1}\frac{S_{n\cdot l+i}+k}{S_{n\cdot l+i}}.$$ In the next step we see that $$\begin{aligned} &\prod_{i=0}^{l-1}\frac{S_{n\cdot l+i}+k}{S_{n\cdot l+i}}=\prod_{i=0}^{l-1}\frac{z+n\cdot l+i+\beta\cdot n+k}{z+n\cdot l+i+\beta\cdot n}=\prod_{i=0}^{l-1}\frac{\frac{z+k+i}{l+\beta}+n}{\frac{z+i}{l+\beta}+n}\\ &=\biggl(\prod_{i=0}^{l-1}\frac{\Gamma(\frac{z+k+i}{l+\beta}+n+1)}{\Gamma(\frac{z+i}{l+\beta}+n+1)}\biggr)\Big/\biggl(\prod_{i=0}^{l-1}\frac{\Gamma(\frac{z+k+i}{l+\beta}+n)}{\Gamma(\frac{z+i}{l+\beta}+n)}\biggr)=:\frac{c(n+1,k)}{c(n,k)}\end{aligned}$$ and the martingale property follows. Lastly we apply Lemma [\[L:Stirling\]](#L:Stirling){reference-type="ref" reference="L:Stirling"} to each factor of $c(n,k)$ to get $$\label{Eq:c(n,k):equiv}c(n,k)=\prod_{i=0}^{l-1}\frac{\Gamma(\frac{z+k+i}{l+\beta}+n)}{\Gamma(\frac{z+i}{l+\beta}+n)}\sim \prod_{i=0}^{l-1}n^{k/(l+\beta)}=n^{k\cdot l/(l+\beta)}.$$ 2. Doob's martingale convergence theorem (see Chapter XI.14 in [@doob94]) guarantees the existence of an almost sure limit of the martingale given in [\[term:process_p\_factorial\]](#term:process_p_factorial){reference-type="eqref" reference="term:process_p_factorial"} and that this limit is in $L^1(\mathbb{P})$. 
Lemma [\[L:ineq_mart_power_form\]](#L:ineq_mart_power_form){reference-type="ref" reference="L:ineq_mart_power_form"} in combination with [\[Eq:c(n,k):equiv\]](#Eq:c(n,k):equiv){reference-type="eqref" reference="Eq:c(n,k):equiv"} then ensures the almost sure convergence of the sequence $n^{-k_i l/(l+\beta)}D_i(n)^{k_i}$ to a limit $\zeta_i^{k_i} \in L^1(\mathbb{P})$ and a further application of Lemma [\[L:ineq_mart_power_form\]](#L:ineq_mart_power_form){reference-type="ref" reference="L:ineq_mart_power_form"} implies that the expression in [\[term:limit_p\_factorial\]](#term:limit_p_factorial){reference-type="eqref" reference="term:limit_p_factorial"} is indeed the limit of the process [\[term:process_p\_factorial\]](#term:process_p_factorial){reference-type="eqref" reference="term:process_p_factorial"}. 3. Doob's martingale convergence theorem (Chapter XI.14 in [@doob94]) yields that ([\[term:limit_p\_factorial\]](#term:limit_p_factorial){reference-type="ref" reference="term:limit_p_factorial"}) is the right closure of the martingale ([\[term:process_p\_factorial\]](#term:process_p_factorial){reference-type="ref" reference="term:process_p_factorial"}) if the latter is uniformly integrable. To show this uniform integrability, we prove that the process is $L^{1+\epsilon}$-bounded for any $\epsilon>0$. To this end, set $p_i:=k_i(1+\epsilon)$ and $p:=k(1+\epsilon)=p_1+\dots+p_r$.
By Lemma [\[L:ineq_mart_power_form\]](#L:ineq_mart_power_form){reference-type="ref" reference="L:ineq_mart_power_form"} there exist constants $C_1,C_2>0$ independent of $n$ such that $$\begin{aligned} \Bigl[\frac{1}{c(n,k)}\prod_{i=1}^r\binom{D_i(n)+k_i-1}{k_i}\Bigr]^{1+\epsilon}&\leq C_1\cdot\frac{\prod_{i=1}^rD^{p_i}_i(n)}{n^{p\cdot l/(l+\beta)}}\\ &\leq C_2\cdot \frac{1}{c(n,p)}\prod_{i=1}^r\binom{D_i(n)+p_i-1}{p_i}.\end{aligned}$$ Since the right-hand side is a multiple of a martingale, it has bounded expectation and the uniform integrability of ([\[term:process_p\_factorial\]](#term:process_p_factorial){reference-type="ref" reference="term:process_p_factorial"}), and thus the first part of 3), follows. This now allows us to iteratively trace back the expectation of $\prod_{i=1}^r\zeta_i^{k_i}$ to the times at which colour $i, 1 \leq i \leq r,$ was first introduced to the urn, that is $n_0^i:=l(i-s) \vee 0$, and the number of balls was still deterministic, namely either $\beta$ if $i>N(0)$ or $D_i(0)$ if $i\leq N(0)$: $$\begin{aligned} &\mathop{\mathrm{\mathbb{E}}}\Bigl(\prod_{i=1}^r\frac{\zeta_i^{k_i}}{\Gamma(k_i+1)}\Bigr)=\mathop{\mathrm{\mathbb{E}}}\Bigl(\mathop{\mathrm{\mathbb{E}}}\Bigl(\prod_{i=1}^r\frac{\zeta_i^{k_i}}{\Gamma(k_i+1)}\Big|\mathcal{F}_{n_0^r}\Bigr)\Bigr)\\ &=\mathop{\mathrm{\mathbb{E}}}\Bigl(\frac{1}{c(n_0^r/l,k_1+\dots+k_r)}\prod_{i=1}^r\binom{D_i(n_0^r/l)+k_i-1}{k_i}\Bigr)\\ &=\frac{c(n_0^r/l,k_1+\dots+k_{r-1})}{c(n_0^r/l,k_1+\dots+k_r)}\cdot\begin{cases}\binom{D_r(0)+k_r-1}{k_r}\quad &\text{if }r\leq N(0)\\\binom{\beta+k_r-1}{k_r}\quad &\text{if }r> N(0)\end{cases}\\ &\phantom{=}\cdot\mathop{\mathrm{\mathbb{E}}}\Bigl(\mathop{\mathrm{\mathbb{E}}}\Bigl(\frac{1}{c(n_0^r/l,k_1+\dots+k_{r-1})}\prod_{i=1}^{r-1}\binom{D_i(n_0^r/l)+k_i-1}{k_i}\Big|\mathcal{F}_{n_0^{r-1}}\Bigr)\Bigr)\\ &=\prod_{i=2}^{r}\frac{c(n_0^i/l,k_1+\dots+k_{i-1})}{c(n_0^i/l,k_1+\dots+k_i)}\cdot\frac{1}{c(n_0^1/l,k_1)}\prod_{i=1}^r\begin{cases}\binom{D_i(0)+k_i-1}{k_i}\quad 
&\text{if }i\leq N(0)\\\binom{\beta+k_i-1}{k_i}\quad &\text{if }i> N(0).\end{cases}\end{aligned}$$  ◻ We end this section with a uniform integrability property that will turn out to be useful on multiple occasions. [\[L:uniformInt\]]{#L:uniformInt label="L:uniformInt"} Under the assumptions of, and with the notation from, Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, the process $\bigl(n^{-\frac{k\cdot l}{l+\beta}}\prod_{i=1}^rD_i^{k_i}(n)\bigr)_n$ is uniformly integrable. *Proof.* This follows from the uniform integrability of the martingale [\[term:process_p\_factorial\]](#term:process_p_factorial){reference-type="eqref" reference="term:process_p_factorial"} shown in Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"} *1)* and *3)* together with Lemma [\[L:ineq_mart_power_form\]](#L:ineq_mart_power_form){reference-type="ref" reference="L:ineq_mart_power_form"}. ◻ ## Proof of Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"} {#Sec:BreimanProof} We adapt the proof of Theorem 3 in [@wang21] to the case of a general separable Banach space. Our additional assumption on the continuity of $(X_t)$ allows us to resolve a subtlety in the application of Egorov's theorem that was not properly addressed there.
*Proof of Theorem [\[Breiman_generalised\]](#Breiman_generalised){reference-type="ref" reference="Breiman_generalised"}.* By a Portmanteau theorem for $\mathbb{M}$-convergence, see Theorem 2.1 in [@lindskog14], it is sufficient to show $$\label{Eq:ToShowmain}\lim_{t\to\infty}t\mathop{\mathrm{\mathbb{E}}}f\Bigl(\frac{X_T T}{b(t)}\Bigr)=\int_{0}^\infty\mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\,\nu_\alpha(\textup{d}y)<\infty,$$ for all non-negative, bounded and uniformly continuous functions $f$ on $B$ whose support is bounded away from $0$, i.e. there exists an $\epsilon_0>0$ such that $f(x)=0$ for all $x \in B_{\epsilon_0}(0)$. Without loss of generality we will assume that $f$ is bounded by 1. In order to show [\[Eq:ToShowmain\]](#Eq:ToShowmain){reference-type="eqref" reference="Eq:ToShowmain"} we write, for $\eta, M >0$, $$\begin{aligned} &\Bigl|t\mathop{\mathrm{\mathbb{E}}}f\Bigl(\frac{X_TT}{b(t)}\Bigr)-\int_0^\infty\mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\nu_\alpha(\textup{d}y)\Bigr|\\ &\leq\Bigl|t\mathop{\mathrm{\mathbb{E}}}\Bigl(f\Bigl(\frac{X_TT}{b(t)}\Bigr)\mathds{1}_{ \{M \geq T/b(t)\geq\eta\}}\Bigr)-\int_\eta^M\mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\nu_\alpha(\textup{d}y)\Bigr| \\ &+t\mathop{\mathrm{\mathbb{E}}}\Bigl(f\Bigl(\frac{X_TT}{b(t)}\Bigr)\mathds{1}_{\{T/b(t)<\eta\}}\Bigr) +\int_0^\eta\mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\nu_\alpha(\textup{d}y) \\ &+t\mathop{\mathrm{\mathbb{E}}}\Bigl(f\Bigl(\frac{X_TT}{b(t)}\Bigr)\mathds{1}_{\{T/b(t)>M\}}\Bigr) +\int_M^\infty\mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\nu_\alpha(\textup{d}y) \\ &=: I^1(t,\eta,M)+I^2(t,\eta)+I^3(t,M).\end{aligned}$$ 1. We start by showing $I^1(t,\eta,M) \to 0, t \to \infty.$ To this end, first fix some $\epsilon>0$. By Egorov's theorem there exists a measurable set $A_\epsilon$ with $\mathbb{P}(A_\epsilon^c)<\epsilon$ such that $X_t\to X_\infty$ for $t\to\infty$ uniformly on $A_\epsilon$. 
At this point we use the one-sided continuity of $(X_t)$ as a sufficient condition to ensure the measurability of $A_\epsilon$. This allows us to bound $I^1(t,\eta,M)$ separately on $A_\epsilon$ and $A_\epsilon^c$: $$\begin{aligned} I^1(t,\eta,M)&\leq t\mathop{\mathrm{\mathbb{E}}}\Bigl(f\Bigl(\frac{X_TT}{b(t)}\Bigr)\mathds{1}_{A_\epsilon^c \cap \{M \geq T/b(t)\geq\eta\}}\Bigr)+\int_\eta^M\mathop{\mathrm{\mathbb{E}}}\Bigl(f(X_\infty y)\mathds{1}_{A_\epsilon^c}\Bigr) \nu_\alpha(\textup{d}y) \\ & + t\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl|f\Bigl(\frac{X_T T}{b(t)}\Bigr)-f\Bigl(\frac{X_\infty T}{b(t)}\Bigr)\Bigr|\mathds{1}_{A_\epsilon\cap\{M \geq T/b(t)\geq\eta\}}\Bigr)\\ & + \Bigl|t\mathop{\mathrm{\mathbb{E}}}\Bigl(f\Bigl(\frac{X_\infty T}{b(t)}\Bigr)\mathds{1}_{A_\epsilon\cap\{M \geq T/b(t)\geq\eta\}}\Bigr)-\int_\eta ^M \mathop{\mathrm{\mathbb{E}}}\Bigl( f(X_\infty y)\mathds{1}_{A_\epsilon}\Bigr) \nu_\alpha(\textup{d}y)\Bigr|\\ &=: I_a^1(t,\eta,M)+I_b^1(t,\eta,M)+I_c^1(t,\eta,M)+I_d^1(t,\eta,M).\end{aligned}$$ Starting with $I_a^1(t,\eta,M)$ we use that $A_\epsilon^c \in\sigma(X_t,t\geq 0)$ which therefore is independent of $T$: $$\begin{aligned} \limsup_{t \to \infty} I_a^1(t,\eta,M) \leq \limsup_{t \to \infty} t\mathbb{P}\Bigl(A_\epsilon^c \cap\Bigl\{\frac{T}{b(t)}\geq \eta \Bigr\}\Bigr)\leq \epsilon\cdot\eta^{-\alpha}.\end{aligned}$$ Second, $$\begin{aligned} I_b^1(t,\eta,M) \leq \mathbb{P}(A_\epsilon^c)\int_\eta^M \nu_\alpha(\textup{d}y) \leq \epsilon \cdot (\eta^{-\alpha}-M^{-\alpha}).\end{aligned}$$ For $I_c^1(t,\eta,M)$, we note that $b(t)\to\infty$ for $t\to\infty$, yielding an arbitrarily large lower bound for $T$ on the set $\{T/b(t)\geq\eta\}$. 
Then, intersecting with $A_\epsilon$ we obtain that $b(t)^{-1} T ||X_T - X_\infty ||\cdot\mathds{1}_{A_\epsilon\cap\{M \geq T/b(t)\geq\eta\}}$ uniformly tends to $0$. Combining this with the uniform continuity of $f$, we find a $c(t)$ with $c(t)\to 0$ for $t\to\infty$ such that, for $t \to \infty$, $$\begin{aligned} I_c^1(t,\eta,M)&\leq c(t)\cdot t\mathbb{P}\Bigl(A_\epsilon\cap\Bigl\{M \geq \frac{T}{b(t)}\geq \eta \Bigr\}\Bigr)\leq c(t)\cdot t\mathbb{P}\Bigl(\frac{T}{b(t)}\geq\eta \Bigr)\to 0.\end{aligned}$$ Finally, for $I_d^1(t,\eta,M)$ we observe that $$y \mapsto \mathop{\mathrm{\mathbb{E}}}(f(X_\infty y)\mathds{1}_{A_\epsilon}) \mathds{1}_{[\eta,M]}(y)$$ is bounded and has support bounded away from 0. Even though it is not continuous in $y$, a standard approximation argument by continuous functions combined with dominated convergence gives $$\begin{aligned} && t\mathop{\mathrm{\mathbb{E}}}\Bigl(f\Bigl(\frac{X_\infty T}{b(t)}\Bigr)\mathds{1}_{A_\epsilon\cap\{M \geq T/b(t)\geq\eta\}}\Bigr) \\ & = & \int_\eta^M \mathop{\mathrm{\mathbb{E}}}(f(X_\infty y )\mathds{1}_{A_\epsilon})t\mathbb{P}^{T/b(t)}(\textup{d}y) \to \int_\eta ^M \mathop{\mathrm{\mathbb{E}}}(f(X_\infty y)\mathds{1}_{A_\epsilon}) \nu_\alpha(\textup{d}y)\end{aligned}$$ for $t \to \infty$, where we used regular variation of $T$ in combination with Fubini's theorem as $T$ is independent of $X_\infty$. Thus, $I_d^1(t,\eta,M) \to 0$ for $t \to \infty$. From the above and since we can choose $\epsilon>0$ arbitrarily small, we see that $I^1(t,\eta,M) \to 0, t \to \infty$. 2.
We split up $I^2(t,\eta)$ into $$t\mathop{\mathrm{\mathbb{E}}}\Bigl(f\Bigl(\frac{X_TT}{b(t)}\Bigr)\mathds{1}_{\{T/b(t)<\eta\}}\Bigr) +\int_0^\eta\mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\nu_\alpha(\textup{d}y)=:I^2_a(t,\eta)+I^2_b(\eta).$$ Then, using Markov's inequality as well as the independence of $(X_t)_{t\geq 0}$ and $T$, we arrive at the upper bound $$\begin{aligned} &I^2_a(t,\eta) \leq t \mathbb{P}\Bigl(\Big\|\frac{X_TT}{b(t)}\Bigr\|>\epsilon_0,\frac{T}{b(t)}<\eta\Bigr)\\ &=t\mathbb{P}\Bigl(\Big\|\frac{X_TT}{b(t)}\mathds{1}_{\{T/b(t)<\eta\}}\Bigr\|>\epsilon_0 \Bigr)\leq \epsilon_0^{-\alpha'}t\mathop{\mathrm{\mathbb{E}}}\Bigl(\Big\|\frac{X_TT}{b(t)}\mathds{1}_{\{T/b(t)<\eta\}}\Bigr\|^{\alpha'}\Bigr)\\ &= \epsilon_0^{-\alpha'}\int_{(0,\eta)}\mathop{\mathrm{\mathbb{E}}}||X_{b(t)y}||^{\alpha'} y^{\alpha'}t\mathbb{P}(T/b(t)\in\textup{d}y)\\ &\leq \epsilon_0^{-\alpha'} \sup_{t\geq 0}\mathop{\mathrm{\mathbb{E}}}||X_t||^{\alpha'} t\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl(\frac{T}{b(t)}\Bigr)^{\alpha'}\mathds{1}_{\{T/b(t)<\eta\}}\Bigr)\end{aligned}$$ By Karamata's Theorem applied to truncated moments of regularly varying random variables, see Proposition 1.4.6 in [@kulik20], we get $$\begin{aligned} & t\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl(\frac{T}{b(t)}\Bigr)^{\alpha'}\mathds{1}_{\{T/b(t)<\eta\}}\Bigr) \\ &= t\mathbb{P}(T>b(t))\frac{\mathop{\mathrm{\mathbb{E}}}\Bigl(T^{\alpha'}\mathds{1}_{\{T<\eta b(t)\}}\Bigr)}{(b(t))^{\alpha'}\mathbb{P}(T>b(t))} \\ &\to \frac{\alpha}{\alpha'-\alpha}\eta^{\alpha'-\alpha}.\end{aligned}$$ Thus, for $t \to \infty, \eta \to 0$, $I^2_a(t,\eta) \to 0$. 
In order to bound $I^2_b(\eta)$ we first note that $\mathop{\mathrm{\mathbb{E}}}||X_\infty||^{\beta}$ is finite for all $\alpha<\beta<\alpha'$ because $(||X_t||^\beta)_t$ is uniformly integrable (as $\sup_t\mathop{\mathrm{\mathbb{E}}}\bigl[(||X_t||^\beta)^{\frac{\alpha'}{\beta}}\bigr]<\infty$ and $\frac{\alpha'}{\beta}>1$) and the Vitali convergence theorem implies that $||X_t||^\beta$ converges in $L^1$ to $||X_\infty||^\beta$ and so do the expected values. Then, again by Markov's inequality we get $$\begin{aligned} & \int_0^\eta\mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\nu_\alpha(\textup{d}y) \leq \int_0^\eta \mathbb{P}(\|X_\infty\|> \epsilon_0 /y)\nu_\alpha(\textup{d}y) \\ &\leq \int_0^\eta \mathop{\mathrm{\mathbb{E}}}\bigl(\|X_\infty\|^\beta\bigr) \epsilon_0^{-\beta} y^\beta \nu_\alpha(\textup{d}y) \\ &\leq \mathop{\mathrm{\mathbb{E}}}\bigl(\|X_\infty\|^\beta\bigr) \epsilon_0^{-\beta} \frac{\alpha}{\beta-\alpha}\eta^{\beta-\alpha}.\end{aligned}$$ Again, with $\eta \to 0$ we have shown that $I^2_b(\eta) \to 0$ and therefore $I^2(t,\eta) \to 0$ and also that the right hand side in [\[Eq:ToShowmain\]](#Eq:ToShowmain){reference-type="eqref" reference="Eq:ToShowmain"} is finite, since $\int_\eta^\infty \mathop{\mathrm{\mathbb{E}}}f(X_\infty y)\nu_\alpha(\textup{d}y) \leq \eta^{-\alpha}$. 3. Finally, regarding $I^3(t,M)$ we note that $$\begin{aligned} & \limsup_{t \to \infty} I^3(t,M) \leq \limsup_{t \to \infty} t\mathbb{P}(T>M b(t)) + \int_M^\infty \nu_\alpha (dy) \leq 2 M^{-\alpha},\end{aligned}$$ and so $\lim_{M \to \infty} \limsup_{t \to \infty} I^3(t,M)=0$. This finishes the proof of [\[Eq:ToShowmain\]](#Eq:ToShowmain){reference-type="eqref" reference="Eq:ToShowmain"}.  ◻ One could do without the continuity assumption of $(X_t)$ provided the measurability of all sets in the proof is ensured. In particular, this refers to the $A_\epsilon$ from Egorov's theorem. 
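The proof rests on the scalar version of Breiman's lemma: if $X$ is bounded and independent of a regularly varying $T$ with index $\alpha$, then $\mathbb{P}(XT>x)\sim\mathop{\mathrm{\mathbb{E}}}(X^\alpha)\,\mathbb{P}(T>x)$ as $x\to\infty$. The following Python snippet illustrates this special case by Monte Carlo simulation; the distributions used (a Pareto $T$ and a uniform $X$) are arbitrary choices for illustration and not part of the model:

```python
import random

random.seed(2024)

ALPHA = 2.0   # tail index: P(T > y) = y**(-ALPHA) for y >= 1 (Pareto distribution)
N = 500_000   # number of Monte Carlo samples
x = 10.0      # tail threshold at which the asymptotic relation is examined

# For X ~ Uniform(0.5, 1.5): E[X**ALPHA] = (1.5**3 - 0.5**3) / 3 = 13/12
limit_moment = (1.5**3 - 0.5**3) / 3

count = 0
for _ in range(N):
    t = (1.0 - random.random()) ** (-1.0 / ALPHA)  # Pareto(ALPHA) sample via inversion
    x_sample = random.uniform(0.5, 1.5)            # independent of t
    if x_sample * t > x:
        count += 1

# Breiman's lemma: P(X*T > x) / P(T > x) is approximately E[X**ALPHA]
ratio = (count / N) / x ** (-ALPHA)
print(f"empirical ratio: {ratio:.3f}, E[X^alpha]: {limit_moment:.3f}")
```

Since $x/X \geq 1$ holds for every sample at this threshold, the relation is in fact exact here, so the empirical ratio deviates from $\mathop{\mathrm{\mathbb{E}}}(X^\alpha)=13/12$ only by Monte Carlo error.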
## Moment conditions The following statement extends Lemma 4.1 in [@pekoz16] to arbitrary values of $\beta \geq 0$. [\[L:MomentCond\]]{#L:MomentCond label="L:MomentCond"} Assume the urn model of Section [2.1.2](#sec:InCoUrn){reference-type="ref" reference="sec:InCoUrn"}, fix $k \in \mathbb{N}$ and let $$U_k(n):=C_1(n)+\dots+C_k(n)$$ denote the total number of balls of colours $1$ to $k$ at time $n$. Then for every $q\in\mathbb{N}$ there exist $c,C>0$ such that for all $n\in\mathbb{N}$ $$\label{eq:MomentCond} cn^{ql/(l+\beta)}<\mathop{\mathrm{\mathbb{E}}}(U_k(n)^q)<Cn^{ql/(l+\beta)}.$$ *Proof.* Let $k \in \mathbb{N}$ be fixed throughout this proof. Note first that $U_k(n), n \in \mathbb{N}_0,$ is deterministic until there are at least $k+1$ different colours in the urn, i.e. for $n \leq n_0$ with $$n_0:=l(k+1-s) \vee 0$$ and set $$n_p:=n_0+p,\,p\in\mathbb{N}.$$ It is then sufficient to show [\[eq:MomentCond\]](#eq:MomentCond){reference-type="eqref" reference="eq:MomentCond"} for $n=n_p, p \in \mathbb{N},$ as for those finitely many $n \leq n_0$ there exist trivial bounds. Let $h_p=U_{(k+1)\vee s}(n_0)+p+\beta\floor{\frac{p}{l}}$ be the (deterministic) total number of balls at time $n_p$ and define $$M_{p,q}:=\prod_{i=0}^{q-1}(i+U_k(n_p)), p \in \mathbb{N}_0, q \in \mathbb{N}.$$ We start the proof by showing that $$\label{eq:RepDp} \mathop{\mathrm{\mathbb{E}}}M_{p,q}=M_{0,q}\cdot\prod_{i=0}^{p-1}\Bigl(1+\frac{q}{h_i}\Bigr).$$ Given the value $U_k(n_{p-1})$, there are only two possible values for $U_k(n_p)$, namely $U_k(n_{p-1})+1$ and $U_k(n_{p-1})$: either we draw a ball of one of the colours $1$ to $k$ or we do not.
Using this, one gets: $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}(M_{p,q}|U_k(n_{p-1}))&=\frac{U_k(n_{p-1})}{h_{p-1}}\prod_{i=0}^{q-1}(i+U_k(n_{p-1})+1)+\frac{h_{p-1}-U_k(n_{p-1})}{h_{p-1}}M_{p-1,q}\\ &= \frac{U_k(n_{p-1})}{h_{p-1}}\prod_{i=1}^{q}(i+U_k(n_{p-1}))+\frac{h_{p-1}-U_k(n_{p-1})}{h_{p-1}}M_{p-1,q}\\ &=\frac{U_k(n_{p-1})}{h_{p-1}}\frac{M_{p-1,q}(U_k(n_{p-1})+q)}{U_k(n_{p-1})}+\frac{h_{p-1}-U_k(n_{p-1})}{h_{p-1}}M_{p-1,q}\\ &=M_{p-1,q}\Bigl(1+\frac{q}{h_{p-1}}\Bigr).\end{aligned}$$ By iterating this calculation $p$ times, one obtains ([\[eq:RepDp\]](#eq:RepDp){reference-type="ref" reference="eq:RepDp"}).\ Now, by definition of $h_p, p \in \mathbb{N}$, we have $$U_{(k+1)\vee s}(n_0)+p+\beta\Bigl(\frac{p}{l}-1\Bigr)\leq h_p\leq U_{(k+1)\vee s}(n_0)+p+\beta\frac{p}{l}.$$ Setting $x:=\frac{l}{l+\beta}$ and $y:=(U_{(k+1)\vee s}(n_0)-\beta)\frac{l}{l+\beta}$ (which is non-negative as for each colour there are at least $\beta$ balls) for abbreviation this is equivalent to $$\begin{aligned} \frac{y+p}{x} &\leq h _p\leq \frac{y+p+\beta x}{x}.\label{eq:nyx}\end{aligned}$$ For the upper bound in [\[eq:MomentCond\]](#eq:MomentCond){reference-type="eqref" reference="eq:MomentCond"} use ([\[eq:RepDp\]](#eq:RepDp){reference-type="ref" reference="eq:RepDp"}) and ([\[eq:nyx\]](#eq:nyx){reference-type="ref" reference="eq:nyx"}) to get $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}M_{p,q}\leq M_{0,q} \cdot\prod_{i=0}^{p-1}\Bigl(1+\frac{qx}{y+i}\Bigr) &= M_{0,q} \cdot \frac{\Gamma(y)}{\Gamma(qx+y)} \cdot\frac{\Gamma(qx+y+p)}{\Gamma(y+p)}.\end{aligned}$$ Thus, using Lemma [\[L:Stirling\]](#L:Stirling){reference-type="ref" reference="L:Stirling"}, there exist $\tilde{C}, C>0$ such that $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}U_k(n_p)^q&\leq \mathop{\mathrm{\mathbb{E}}}M_{p,q} \leq M_{0,q} \cdot \frac{\Gamma(y)}{\Gamma(qx+y)} \cdot\frac{\Gamma(qx+y+p)}{\Gamma(y+p)} \\ &\leq \tilde{C} (y+p)^{qx} < C n_p^{qx}= C n_p^{ql/(l+\beta)}\end{aligned}$$ for all $p \in 
\mathbb{N}$, proving the upper bound. For the lower bound, use Jensen's inequality and [\[eq:RepDp\]](#eq:RepDp){reference-type="eqref" reference="eq:RepDp"}, [\[eq:nyx\]](#eq:nyx){reference-type="eqref" reference="eq:nyx"}, Lemma [\[L:Stirling\]](#L:Stirling){reference-type="ref" reference="L:Stirling"} and similar reasoning as above to see that there exist $\tilde{c}, c>0$ such that $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}U_k(n_p)^q&=\mathop{\mathrm{\mathbb{E}}}M_{p,1}^q\geq(\mathop{\mathrm{\mathbb{E}}}M_{p,1})^q \\ &\geq \left(M_{0,1}\cdot\prod_{i=0}^{p-1}\Bigl(1+\frac{x}{y+i+\beta x}\Bigr) \right)^q \\ &= M_{0,1}^q \left(\frac{\Gamma(y+\beta x)}{\Gamma(x+y+\beta x)}\right)^q \left(\frac{\Gamma(x+y+\beta x+p)}{\Gamma(y+\beta x+p)}\right)^q \\ &\geq \tilde{c} (y+\beta x +p)^{qx} > c n_p^{qx}=c n_p^{ql/(l+\beta)},\end{aligned}$$ for all $p \in \mathbb{N}$, which finishes the proof. ◻ ## Proof of Lemma [\[L:Hilfs_Beta\]](#L:Hilfs_Beta){reference-type="ref" reference="L:Hilfs_Beta"} {#Sec:BetaIndProof} *Proof of Lemma [\[L:Hilfs_Beta\]](#L:Hilfs_Beta){reference-type="ref" reference="L:Hilfs_Beta"}.* 1. We adapt the proof of Lemma 3.3 from [@mori05]. By definition $B_k^\downarrow=\zeta_{k}/(\zeta_1+\dots+\zeta_{k})$ and so the statement follows immediately for $k=1$. Thus, keep $k>1$ fixed in the following. We disregard all colours in the urn model but $1$ through $k$ and further simplify the model: To find the distribution of $B_k^\downarrow$ it is sufficient to view all balls of colours $1$ to $k-1$ as being "black" and those of colour $k$ as being "white". 
At the time $n_0^k$, when the last relevant colour is added to the urn, the number $w_k$ of white and $b_k$ of black balls is deterministic and given by $$(w_k,b_k)=\begin{cases}(C_k(0),\sum_{i=1}^{k-1}C_i(0)) \quad&\text{if } 1<k\leq s\\ (\beta,\sum_{i=1}^{k-1}C_i(0)+(l+\beta)(k-1-s)+l) \quad &\text{if } k>s\end{cases}.$$ Now at each time either the number of black and white balls does not change or a new ball is added to one of them. Conditionally on the latter, the transition probabilities are the same as in a traditional Pólya urn and therefore Lemma [\[L:Polya_Dir\]](#L:Polya_Dir){reference-type="ref" reference="L:Polya_Dir"} yields that the vector $(B_k^\downarrow,1-B_k^\downarrow)$ has a Dirichlet distribution, whose marginals are known to be beta distributions with the given parameters. 2. We provide an alternative construction for $C^r(n)$ to help us derive the desired independence properties:\ We start with the usual setup of an urn with $s$ different colours and starting amounts $C_0(0),\dots,C_s(0)$. From there on, instead of picking a ball directly, in each time step we flip a sequence of independent Bernoulli coins, determined below, until we observe heads for the first time. Then we add a new ball of one colour, based on which flip resulted in heads, and proceed to the next time step. The individual coins may have different success probabilities, i.e. probabilities of showing heads. More precisely, for the ball to be selected at time $n$, let $m$ be the currently largest numbered colour in the urn. If $m>r$ the first flip in a time step always represents the colours $>r$. If that coin flip shows heads we add a new ball to one of those (which one specifically is irrelevant to us). If it shows tails, however, we start traversing the remaining $r$ colours in backwards order. This means we start by flipping for colour $r$, then either add a new ball to it if the flip showed heads or otherwise continue to $r-1$ and so on. 
If $m\leq r$ we skip the flips for colours $m+1,m+2,\dots,$"$>r$". To also incorporate the migration of new colours into this model, after every $l$th time step we add $\beta$ additional balls to colour $m+1$ or, if $m\geq r$, to the "$>r$"-category. Next, for this process to have the same distribution as $(C^r(n))_{n \in \mathbb{N}_0}$, we need to find suitable success probabilities $p_i(n)$ when flipping for colour $i$ in time step $n\mapsto n+1$. To this end, we write $$U_k(n):=\sum_{i=1}^k C_i(n), \;\;\; k \geq 1,$$ for abbreviation and set $$\begin{aligned} \mbox{if }i\leq r:&&p_i(n)&=\frac{C_i(n)}{C_1(n)+\dots+C_i(n)}=1-\frac{U_{i-1}(n)}{U_i(n)}, \\ \mbox{ and } && p_{>r}(n)&=\frac{C_{r+1}(n)+\dots+C_m(n)}{C_1(n)+\dots+C_m(n)}=1-\frac{U_{r}(n)}{U_m(n)}.\end{aligned}$$ To verify that this yields exactly the same transition probabilities as in the urn model, write (with $\mathcal{F}_n=\sigma(C_i(t):\,t\leq n,\,i\in\mathbb{N})$):\ for colours $i\leq r$: $$\begin{aligned} &\mathbb{P}(C_i(n+1)-C_i(n)=1|\mathcal{F}_{n})=\frac{C_i(n)}{U_m(n)}=\frac{U_{r}(n)}{U_m(n)} \frac{U_{r-1}(n)}{U_r(n)}\dots \frac{U_i(n)}{U_{i+1}(n)}\frac{C_i(n)}{U_i(n)}\\ &\qquad=(1-p_{>r}(n))\cdot (1-p_r(n))\cdot\hdots\cdot (1-p_{i+1}(n))\cdot p_{i}(n)\end{aligned}$$ and for colour $``>r"$: $$\mathbb{P}([U_{m}(n+1)-U_r(n+1)]-[U_m(n)-U_r(n)]=1|\mathcal{F}_{n})=\frac{U_m(n)-U_r(n)}{U_m(n)}=p_{>r}(n).$$ So we conclude that the two processes indeed have the same transition probabilities and proceed to show the independence of the $B_i^\downarrow$'s. To that end, let $(Y_j^i)_{j=1}^\infty$ be the random variables which contain the outcome of the $j$th flip for colour $i$, $i=1,\dots,r$. Define $T_j^i$ to be the step in the above urn model when coin $Y_j^i$ is flipped, noting that $T_j^i,\,j>0$ is random but a.s. finite by Lemma [\[L:InftyDegree\]](#L:InftyDegree){reference-type="ref" reference="L:InftyDegree"}.
The success probability $p_i(n)$ at time $n=T_j^i$ is dependent on prior flips and given by $$\frac{C_i(n)}{C_1(n)+\dots+C_i(n)}=\frac{C_i(T^i_1)+\sum_{k=1}^{j-1}Y_k^i}{U_i(T^i_1)+j-1}$$ for colours $i \leq r$, as $U_i(T^i_1)$ balls of colours 1 to $i$ and $C_i(T^i_1)$ balls of colour $i$ are present when we first flip the coin for this colour and at the $j$-th flip further $j-1$ balls of colours 1 to $i$ have already been added to the urn, with $\sum_{k=1}^{j-1}Y_k^i$ of them to colour $i$. Thereby, the $p_i(n)$ at time $n=T_j^i$ are completely determined by deterministic starting values $U_i(T^i_1),C_i(T^i_1)$ and the previous flips for colour $i$ and the sequences $(Y_j^1)_j, \ldots, (Y_j^r)_j$ are jointly independent.\ With this, the independence of the $B_i^\downarrow$'s follows since for $j \to \infty$, $$\begin{aligned} \label{Eq:IndBis}\frac{C_i(T^i_1)+\sum_{k=1}^{j-1}Y_k^i}{U_i(T^i_1)+j-1}=\frac{C_i(T_j^i)}{C_1(T_j^i)+\dots+C_i(T_j^i)} \to B_i^\downarrow \qquad\quad\text{ for }i=2,\dots,r.\end{aligned}$$ 3. In addition to the sequences introduced in 2), let $(Y_j^{>r})_{j=1}^\infty$ be the random variables which contain the outcome of the $j$th flip for color $``>r"$ and let $T_j^{>r}, j>0$ be the step in the urn model when coin $Y_j^{>r}$ is flipped. The first flip $T_1^{>r}$ takes place at time $n_0^{r+1}$ with $U_{(r+1) \vee s}(n_0^{r+1})$ total balls in the urn, and $U_{(r+1) \vee s}(n_0^{r+1})-U_{r}(n_0^{r+1})$ of them of colour $``>r"$, both numbers being deterministic. After $n_0^{r+1}$, the coin for colour $``>r"$ is flipped in every consecutive step of the urn model. Thus, at the $j$-th flip for this colour, the total number of balls has grown to $U_{(r+1) \vee s}(n_0^{r+1})+j-1+\beta\floor{\frac{j-1}{l}}$ (where the last summand is due to the immigration), and the number of colour $``>r"$ has grown to $U_{(r+1) \vee s}(n_0^{r+1})-U_{r}(n_0^{r+1})+\sum_{k=1}^{j-1}Y_k^{>r}+\beta\floor{\frac{j-1}{l}}$. 
Therefore, the success probability $p_{>r}(n)$ at time $n=T_j^{>r}$ is given by $$\frac{U_{(r+1) \vee s}(n_0^{r+1})-U_{r}(n_0^{r+1})+\sum_{k=1}^{j-1}Y_k^{>r}+\beta\floor{\frac{j-1}{l}}}{U_{(r+1) \vee s}(n_0^{r+1})+j-1+\beta\floor{\frac{j-1}{l}}},$$ and the sequence $(Y_j^{>r})_{j=1}^\infty$ is independent of the sequences introduced in 2). Observe that $$\frac{1}{j^{l/(l+\beta)}}U_r(n_0^{r+1}+j)=\frac{1}{j^{l/(l+\beta)}}(U_r(n_0^{r+1})+\sum_{k=1}^j(1-Y_k^{>r}))\overset{j\to\infty}{\to}\zeta_1+\dots+\zeta_r,$$ and by independence of the coin flip sequences the limit $\zeta_1+\dots+\zeta_r$ is thus independent of $(B_1^\downarrow,\dots,B_r^\downarrow)$, and hence of $(B_1,\dots,B_r)$ as well.  ◻ ## Proof of Proposition [\[P:BreiCond\]](#P:BreiCond){reference-type="ref" reference="P:BreiCond"} {#Sec:BreiCondProof} *Proof of Proposition [\[P:BreiCond\]](#P:BreiCond){reference-type="ref" reference="P:BreiCond"}.* We start by showing 2). 1. Case $\alpha \leq p$: Let $\alpha'>p\geq \alpha$; then by Hölder's inequality $$\sup_n\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl\lVert\frac{D(n)}{n^{l/(l+\beta)}}\Bigr\rVert_p^\alpha\Bigr)\leq \sup_n\Bigl(\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl\lVert\frac{D(n)}{n^{l/(l+\beta)}}\Bigr\rVert_p^{\alpha'}\Bigr)\Bigr)^{\frac{\alpha}{\alpha'}}=\Bigl(\sup_n\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl\lVert\frac{D(n)}{n^{l/(l+\beta)}}\Bigr\rVert_p^{\alpha'}\Bigr)\Bigr)^{\frac{\alpha}{\alpha'}}$$ and this case follows from the bound obtained in the next one.\ Case $\alpha > p$: By the reverse Minkowski inequality (see, e.g., III 2.4 Theorem 9 in [@bullen03]), $p$-quasinorms with $0<p<1$ are concave on $\mathbb{R}_{\geq 0}^d$.
We can thus apply Jensen's inequality to $$\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl\lVert\frac{D(n)}{n^{l/(l+\beta)}}\Bigr\rVert_p^\alpha\Bigr)=\mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl\lVert\Bigl(\frac{D(n)}{n^{l/(l+\beta)}}\Bigr)^\alpha\Bigr\rVert_{\frac{p}{\alpha}}\Bigr)\leq \Bigl(\sum_{i=1}^{N(n)}\Bigl(\mathop{\mathrm{\mathbb{E}}}\frac{D_i^\alpha(n)}{n^{\alpha\cdot l/(l+\beta)}}\Bigr)^{\frac{p}{\alpha}}\Bigr)^{\frac{\alpha}{p}}.$$ An application of Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 1) and Lemma [\[L:ineq_mart_power_form\]](#L:ineq_mart_power_form){reference-type="ref" reference="L:ineq_mart_power_form"} yields that there exists a $C>0$ such that the right hand side is bounded above by $$\begin{aligned} C\cdot\Biggl(\sum_{i=1}^{N(n)}\Biggl(c(n,\alpha)^{-1}\mathop{\mathrm{\mathbb{E}}}{D_i(n)+\alpha-1\choose \alpha}\Biggr)^{\frac{p}{\alpha}}\Biggr)^{\frac{\alpha}{p}}.\end{aligned}$$ Now, each base in the summands on the right is the expectation of a closed martingale and by Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 3) $$\begin{aligned} \label{Eq:ZetaNormSum} \Biggl(\sum_{i=1}^{N(n)}\Biggl(c(n,\alpha)^{-1}\mathop{\mathrm{\mathbb{E}}}{D_i(n)+\alpha-1\choose \alpha}\Biggr)^{\frac{p}{\alpha}}\Biggr)^{\frac{\alpha}{p}}=\Bigl(\sum_{i=1}^{N(n)}(\mathop{\mathrm{\mathbb{E}}}\zeta_i^\alpha)^{\frac{p}{\alpha}}\Bigr)^{\frac{\alpha}{p}}.\end{aligned}$$ By [\[eq:expec_zetap\]](#eq:expec_zetap){reference-type="eqref" reference="eq:expec_zetap"} and Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 1) we have, for any fixed $k \in [0,\infty)$ and all $i>N(0)$ $$\begin{aligned} \label{Eq:ZetaMomentsAsymp} \mathop{\mathrm{\mathbb{E}}}(\zeta_i^k) = C(k) c((i-s),k)^{-1} \sim C(k) (i-s)^{-k\cdot l/(l + \beta)},\end{aligned}$$ 
where $C(k)$ denotes a constant that does not depend on $i$. Thus, the limit as $n \to \infty$ of the right hand side of [\[Eq:ZetaNormSum\]](#Eq:ZetaNormSum){reference-type="eqref" reference="Eq:ZetaNormSum"} is finite if and only if $$\begin{aligned} \frac{\alpha \cdot l}{l+\beta} \cdot \frac{p}{\alpha} > 1 \;\;\; \Leftrightarrow \;\;\; p > \frac{l+ \beta}{l}, \end{aligned}$$ which is guaranteed by our assumption. 2. Assume first that $\frac{l+\beta}{l}<p<\infty$. Use monotone convergence to get $$\begin{aligned} \label{Eq:MonConvEZeta} \mathop{\mathrm{\mathbb{E}}}(\|\zeta\|_p^p)=\mathop{\mathrm{\mathbb{E}}}\sum_{i=1}^\infty\zeta_i^p =\sum_{i=1}^\infty\mathop{\mathrm{\mathbb{E}}}\zeta_i^p.\end{aligned}$$ Again from [\[Eq:ZetaMomentsAsymp\]](#Eq:ZetaMomentsAsymp){reference-type="eqref" reference="Eq:ZetaMomentsAsymp"}, this sum is finite under our assumption that $p>(l+\beta)/l$ and so $\|\zeta\|_p^p<\infty$ a.s. Furthermore, since $\| \zeta\|_\infty \leq \| \zeta\|_p$, the result also follows for $p=\infty$. 3. Next we prove the almost sure convergence in $c_{||\cdot||_p}$. We need to show $$\Bigl\lVert\frac{D(n)}{n^{l/(l+\beta)}}-\zeta\Bigr\rVert_p^p=\sum_{i=1}^\infty\Bigl|\frac{D_i(n)}{n^{l/(l+\beta)}}-\zeta_i\Bigr|^p\to 0,\,n\to\infty\quad\text{a.s.}$$ By Scheffé's theorem, the above convergence follows from componentwise convergence, i.e., $$\begin{aligned} \label{Eq:Pointwiselp} \frac{D_i(n)}{n^{l/(l+\beta)}} \to \zeta_i, \, n \to \infty, \quad\text{a.s.} \end{aligned}$$ for all $i \in \mathbb{N}$ in combination with convergence of the norms, which is $$\begin{aligned} \label{Eq:Normconvlp}\sum_{i=1}^{N(n)}\frac{D_i(n)^p}{n^{pl/(l+\beta)}} \to\sum_{i=1}^\infty\zeta_i^p,\,n\to\infty.
\end{aligned}$$ Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 2) implies [\[Eq:Pointwiselp\]](#Eq:Pointwiselp){reference-type="eqref" reference="Eq:Pointwiselp"}, so we are left to show [\[Eq:Normconvlp\]](#Eq:Normconvlp){reference-type="eqref" reference="Eq:Normconvlp"}. To this end, introduce the random variables $X_i(n):=c(n,p)^{-1}{D_i(n)+p-1\choose p}$, which form, for each $i \in \mathbb{N}$, a martingale, according to Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 1). Their cumulative sums form submartingales, since $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}\Bigl(\sum_{i=1}^{N(n)}X_i(n)\Big|\mathcal{F}_{(n-1)l}\Bigr)=\sum_{i=1}^{N(n)}\mathop{\mathrm{\mathbb{E}}}(X_i(n)|\mathcal{F}_{(n-1)l})\geq\sum_{i=1}^{\mathclap{N(n-1)}}X_i(n-1), \; i \in \mathbb{N}.\end{aligned}$$ By Lemma [\[L:ineq_mart_power_form\]](#L:ineq_mart_power_form){reference-type="ref" reference="L:ineq_mart_power_form"} there exists for each $k \in \mathbb{N}$ a $C$ such that $$\begin{aligned} \mathop{\mathrm{\mathbb{E}}}\Bigl(\Bigl(\sum_{i=1}^{N(n)}X_i(n)\Bigr)^k\Bigr)\leq C\mathop{\mathrm{\mathbb{E}}}\Bigl(\sum_{i=1}^{N(n)}\frac{D_i(n)^p}{n^{pl/(l+\beta)}}\Bigr)^k.\end{aligned}$$ By [\[P:BreiCond:moments\]](#P:BreiCond:moments){reference-type="ref" reference="P:BreiCond:moments"} with $\alpha=pk$, we observe that the right hand side is bounded uniformly in $n$. This means the submartingale is uniformly integrable and therefore convergent almost surely and in $L_1$ (Chapter XI.14 in [@doob94]). 
To derive the limit, we first note that by [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 2) $$\lim_{n \to \infty} \sum_{i=1}^{N(n)}X_i(n) - \sum_{i=1}^m \frac{\zeta_i^p}{\Gamma(p+1)}=\lim_{n \to \infty} \sum_{i=m+1}^{N(n)}X_i(n) \geq 0$$ for all $m \in \mathbb{N}$. Let $m \to \infty$ to conclude that $$\lim_{n \to \infty} \sum_{i=1}^{N(n)}X_i(n) - \sum_{i=1}^\infty \frac{\zeta_i^p}{\Gamma(p+1)} \geq 0.$$ This implies that $$\begin{aligned} &E\Bigl( \Bigl| \lim_{n \to \infty} \sum_{i=1}^{N(n)}X_i(n) - \sum_{i=1}^\infty \frac{\zeta_i^p}{\Gamma(p+1)} \Bigr| \Bigr) \\ &= E\Bigl( \lim_{n \to \infty} \sum_{i=1}^{N(n)}X_i(n) - \sum_{i=1}^\infty \frac{\zeta_i^p}{\Gamma(p+1)} \Bigr) \\ &= \lim_{n \to \infty} \sum_{i=1}^{N(n)}E(X_i(n))- \sum_{i=1}^\infty E\Bigl(\frac{\zeta_i^p}{\Gamma(p+1)} \Bigr) = 0,\end{aligned}$$ where we used the $L_1$-convergence of our submartingale together with [\[Eq:MonConvEZeta\]](#Eq:MonConvEZeta){reference-type="eqref" reference="Eq:MonConvEZeta"} in the penultimate step and [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 2) in the last step.
Thus, $$\begin{aligned} \lim_{n \to \infty} \sum_{i=1}^{N(n)}c(n,p)^{-1}{D_i(n)+p-1\choose p} = \sum_{i=1}^\infty \frac{\zeta_i^p}{\Gamma(p+1)} \;\; \mbox{a.s.} \end{aligned}$$ Returning to the original process, we can use Proposition [\[P:martingale_p\_factorial\]](#P:martingale_p_factorial){reference-type="ref" reference="P:martingale_p_factorial"}, 1) and Lemma [\[L:ineq_mart_power_form\]](#L:ineq_mart_power_form){reference-type="ref" reference="L:ineq_mart_power_form"} to find a $C>0$ such that $$\begin{aligned} \frac{D_i(n)^p}{n^{pl/(l+\beta)}} \leq C \cdot c(n,p)^{-1}{D_i(n)+p-1\choose p} \end{aligned}$$ for all $i=1, \ldots, N(n)$ and thus apply Pratt's lemma ([@Pratt60]) to finally arrive at [\[Eq:Normconvlp\]](#Eq:Normconvlp){reference-type="eqref" reference="Eq:Normconvlp"} and thereby conclude 1) (b). 4. For $p\in (1,\infty)$, the $p$-norm on $\mathbb{R}^d$ among all vectors of equal $\ell_1$-norm is minimized if each component takes the same value. Let $c$ be the cumulated starting weight of the initial $N(0)$ vertices. Then $$\begin{aligned} \Bigl\lVert\frac{D(n)}{n^{l/(l+\beta)}}\Bigr\rVert_p^p=\frac{1}{n^{p\cdot l/(l+\beta)}}\sum_{i=1}^{\mathclap{N(n)}}D_i^p(n)\geq \frac{(N(0)+n)\cdot (\frac{c+n(l+\beta)}{N(0)+n})^p}{n^{p\cdot l/(l+\beta)}}\sim C\cdot n^{1-\frac{p\cdot l}{l+\beta}}\end{aligned}$$ for some $C>0$; since by assumption $p<\frac{l+\beta}{l}$, the right-hand side diverges to $\infty$. Being a.s. unbounded, the sequence $D(n)/n^{l/(l+\beta)}$ thus fails to converge.  ◻ There are no funding bodies to thank in relation to the creation of this article. No competing interests arose during the preparation or publication of this article.
arXiv:2310.02785, "Multivariate Regular Variation of Preferential Attachment Models", Anja Janßen and Max Ziegenbalg (math.PR).
--- abstract: | In the present paper, we establish three special $q$-Abel transformation formulae of $q$-series via the use of Abel's lemma on summation by parts. As direct applications, we set up the corresponding $q$-contiguous relations for three kinds of truncated $q$-series. Several new transformations are consequently established. address: - Department of Mathematics, Soochow University, SuZhou 215006, P.R.China - Department of Mathematics, Soochow University, SuZhou 215006, P.R.China author: - Jianan Xu - Xinrong Ma title: Three new $q$-Abel transformations and their applications --- $q$-Series; Abel's lemma; $q$-Abel transformation; truncated series; $q$-contiguous relation; summation. *AMS subject classification (2020)*: Primary 33D15; Secondary 33D65 # Introduction It is well known that Abel's lemma on summation by parts is one of the most important tools in Analysis and Number Theory, which is analogous to the integration by parts in Calculus. Abel's lemma can often be stated as follows: **Lemma 1** (Abel's lemma on summation by parts: [@knopp p. 313]). *For integer $n\geq 0$ and sequences $\{A_n\}_{n\geq 0}$ and $\{B_n\}_{n\geq 0}$, it always holds $$\begin{aligned} \sum_{k=0}^{n-1} B_k\Delta A_k =A_{n-1}B_{n}-A_{-1}B_0+\sum_{k=0}^{n-1}A_k\nabla B_k.\label{abelparts}\end{aligned}$$ Hereafter, we define the backward and forward difference operators $\Delta$ and $\nabla$ acting on arbitrary sequence $\{X_n\}_{n\geq 0}$, respectively, by $$\begin{aligned} \Delta X_k:=X_k-X_{k-1}\quad and \quad \nabla X_k:=X_k-X_{k+1}.\end{aligned}$$* To make this paper self-contained, we would like to reiterate some background from [@xuxrma]. First of all, it should be mentioned that it is Abel's lemma on summation by parts with which W. C. 
Chu and his collaborators via a series of papers, say [@chen0; @chen; @chu06; @chu2007; @Chu1; @chu2008; @chu2009], have systematically investigated and shown many transformation and summation formulae from the theory of various (general, basic, and theta) hypergeometric series. Among these proofs, perhaps the most noteworthy is that, by Abel's lemma, Chu [@chu06] found an elementary proof of the well-known Bailey ${}_6\psi_6$ summation formula. To the best of our knowledge, Chu et al.'s elementary proofs, to a great extent, depend on two basic algebraic identities. **Lemma 2**. *For any complex parameters $a,b,c,x,y,z,w: xyzw\neq 0$, we define $$\begin{aligned} \theta(x;q):=(x;q)_\infty(q/x;q)_\infty,~~\mathcal{D}(x):=1-x.\label{charafunchi}\end{aligned}$$ Then the following two identities are true.* (i) *(Four-term algebraic identity) $$\begin{aligned} \mathcal{D}\big(cx,\frac{x}{c},bz,\frac{z}{b}\big)-\mathcal{D}\big(bx,\frac{x}{b},cz,\frac{z}{c}\big)=\frac{z}{c}\mathcal{D}\big(bc,\frac{c}{b},xz,\frac{x}{z}\big).\label{kkklll-111} \end{aligned}$$* (ii) *(Weierstrass' theta identity: [@10 Ex. 20.5.6]) $$\begin{aligned} \theta\big(cx,\frac{x}{c},bz,\frac{z}{b};q\big)-\theta\big(bx,\frac{x}{b},cz,\frac{z}{c};q\big)=\frac{z}{c}\theta\big(bc,\frac{c}{b},xz,\frac{x}{z};q\big).\label{trivalweierstrass} \end{aligned}$$* *In the above, the multivariate notation $\mathcal{D}(x_1,x_2,\ldots,x_m):=\mathcal{D}(x_1)\mathcal{D}(x_2) \ldots\mathcal{D}(x_m)$ and $\theta(x_1,x_2,\ldots,x_m):=\theta(x_1)\theta(x_2)\ldots\theta(x_m),$ where $m\geq 1$ is an integer.* It is worth pointing out that [\[kkklll-111\]](#kkklll-111){reference-type="eqref" reference="kkklll-111"} is equivalent to Weierstrass' theta identity [\[trivalweierstrass\]](#trivalweierstrass){reference-type="eqref" reference="trivalweierstrass"}. As for this equivalence, we refer the reader to [@wangjin] for a full proof.
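For a concrete illustration (our own sketch, not from the paper: the helper `D` and the sample values are ours), the four-term algebraic identity of Lemma 2(i) can be verified numerically for arbitrary nonzero $b,c,x,z$, with `D` implementing the multivariate convention $\mathcal{D}(x_1,\ldots,x_m)=\mathcal{D}(x_1)\cdots\mathcal{D}(x_m)$:

```python
# Numeric check of  D(cx, x/c, bz, z/b) - D(bx, x/b, cz, z/c) = (z/c) D(bc, c/b, xz, x/z),
# where D(x) = 1 - x and the multivariate D is the product of the univariate factors.
from functools import reduce

def D(*args):
    """Multivariate D(x_1, ..., x_m) = (1 - x_1) ... (1 - x_m)."""
    return reduce(lambda acc, x: acc * (1 - x), args, 1.0)

b, c, x, z = 0.3, 1.7, -0.8, 2.5   # any nonzero values work; complex values do too
lhs = D(c * x, x / c, b * z, z / b) - D(b * x, x / b, c * z, z / c)
rhs = (z / c) * D(b * c, c / b, x * z, x / z)
assert abs(lhs - rhs) < 1e-9
```

Since both sides are polynomial in $b,c^{\pm1},x,z$, agreement at generic values is strong evidence, though of course no substitute for the algebraic proof cited above.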
In addition, we would like to refer the reader to [@koornwinder] for a historical anecdote about the origins of Weierstrass' theta identity. More or less, the idea of applying Abel's lemma to $q$-series may be traced back to Gasper and Rahman's works [@Gasper89] and [@Gasper90], wherein they set forth many multibasic (quadratic, cubic, and quartic) transformations for $q$-series via the difference method, which is a special case of Abel's lemma. In particular, Gasper believed that their method could not work in a general setting, the reason being that, for an arbitrary sequence $$\begin{aligned} s_n=\frac{(a ; p)_n(b ; q)_n(c ; P)_n(d ; Q)_n}{\big(e ; p^{\prime}\big)_n\big(f ; q^{\prime}\big)_n\big(g; P^{\prime}\big)_n\big(h ; Q^{\prime}\big)_n},\end{aligned}$$ the corresponding difference $\Delta(s_n)=s_n-s_{n-1}$ does not take the factorial form. The reader might consult [@Gasper89 Eq. (2.8)] for further details. Contrary to Gasper's intuition, there do exist general sequences $s_n$ such that $\Delta(s_n)$ can be decomposed into a product of factorial factors, provided that $s_n$ is subject to [\[kkklll-111\]](#kkklll-111){reference-type="eqref" reference="kkklll-111"}. With this fact in hand, we have made in [@xuxrma] a systematic study of Gasper's multibasic transformations and finally established the following rather general transformation, which is now coined *the general $q$-Abel transformation* in order to emphasize the central role played by Abel's lemma. **Lemma 3** (The general $q$-Abel transformation: [@xuxrma Thm.2.1]).
*For any integer $n\geq 0$ and any complex vectors $\bar{a}=(a_1,a_2,a_3,a_4),$ $\bar{p}=(p_1,p_2,p_3,p_4)$, $\bar{x}=(x_1,x_2,x_3,x_4), \bar{q}=(q_1,q_2,q_3,q_4)$, where $a_ix_ip_iq_i\neq 0,1\leq i\leq 4$, there holds $$\begin{aligned} &\sum_{k=0}^{n-1}\Gamma_{k-1}[\bar{a};\bar{p}]\frac{KL^{k-1}-1}{KL^{k-1} }~ \prod_{i=1}^4\frac{(a_i^2;p_i)_{k-1}}{(K/a_i^2;L/p_i)_k }\prod_{i=1}^4 \frac{(x_i^2;q_i)_k} {(K_0/x_i^2;L_0/q_i)_k}\nonumber\\ &= \prod_{i=1}^4\frac{(a_i^2;p_i)_{n-1}} {(K/a_i^2;L/p_i)_{n-1}}\prod_{i=1}^4\frac{(x_i^2;q_i)_n} {(K_0/x_i^2;L_0/q_i)_{n}}- \prod_{i=1}^4\frac {L/p_i-K/a_i^2}{p_i-a_i^2}\label{pppppp}\\ &+\sum_{k=0}^{n-1}\Gamma_{k}[\bar{x};\bar{q}]\frac{1-K_0L_0^{k}}{K_0L_0^{k} } ~ \prod_{i=1}^4\frac{(x_i^2;q_i)_k} {(K_0/x_i^2;L_0/q_i)_{k+1}} \prod_{i=1}^4\frac{(a_i^2;p_i)_k} {(K/a_i^2;L/p_i)_k}\nonumber .\end{aligned}$$ Hereafter, we define $$\begin{aligned} K:=a_1a_2a_3a_4,\quad& L:=(p_1p_2p_3p_4)^{1/2},\label{kkklll-1}\\ K_0:=x_1x_2x_3x_4,\quad&L_0:=(q_1q_2q_3q_4)^{1/2},\label{kkklll-2}\end{aligned}$$ and for integer $k\geq -1$, the function $$\begin{aligned} \Gamma_k[\bar{x};\bar{q}]&:= \big(x_1x_2 (q_1q_2)^{k/2}-x_3x_4 (q_3q_4)^{k/2}\big) \label{chachacha}\\ &\times\big(x_1x_3 (q_1q_3)^{k/2}-x_2x_4 (q_2q_4)^{k/2}\big)\big(x_1x_4 (q_1q_4)^{k/2}-x_2x_3 (q_2q_3)^{k/2}\big).\nonumber\end{aligned}$$* Evidently, the problem of studying applications of [\[pppppp\]](#pppppp){reference-type="eqref" reference="pppppp"} becomes necessary and attractive. In this regard, we refer the reader to our previous paper [@xuxrma] for some relevant preliminary results. 
In the present paper, as a further development of [@xuxrma], we will derive three special $q$-Abel transformations, i.e., Theorems [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}, [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"}, and [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} below, from Lemma [Lemma 3](#type-i-i){reference-type="ref" reference="type-i-i"} with $\bar{a}$ and $\bar{x}$ subject to three different substitutions. In the sequel, we will focus on applications of these special $q$-Abel transformations to the theory of $q$-series. Our paper is organized as follows. In the next section, we will show how to derive Theorems [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}, [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"}, and [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} from Lemma [Lemma 3](#type-i-i){reference-type="ref" reference="type-i-i"}. Section 3 is devoted to some concrete transformations which are deducible from Theorems [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}, [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"}, and [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"}. Among them are some new $q$-contiguous relations and transformations for $q$-series which seem not to have appeared in the literature. We believe that these $q$-contiguous relations reflect the underlying phenomenon of Abel's lemma. In this sense, an open problem is proposed in Section 4. Some explanations of notation are in order. Throughout this paper, we will follow the notation and terminology in the book [@10] by Gasper and Rahman.
The $q$-shifted factorial of a complex variable $a$ with base $q$, $|q|<1$, is given by $$\begin{aligned} (a;q)_{\infty}:=\prod_{k=0}^{\infty}(1-aq^k), ~~(a;q)_{n}:=\frac{(a;q)_{\infty}}{(aq^n;q)_{\infty}}~~(n\in \mathbf{Z}),\end{aligned}$$ where $\mathbf{Z}$ denotes the set of integers. For any complex numbers $a_1,a_2,\ldots,a_m$, the multivariate notation $$\begin{aligned} (a_1,a_2,\ldots,a_m;q)_n:=\prod_{k=1}^{m}(a_k;q)_n\end{aligned}$$ and the basic hypergeometric series with the base $q$ and the variable $z$ is defined to be $$\begin{aligned} _{r}\phi_{r-1}\left[\begin{matrix}a_1,a_2,\ldots,a_r\\b_1,b_2,\ldots,b_{r-1}\end{matrix};q,z\right] :=\sum_{n=0}^{+\infty}\frac{(a_1,a_2,\ldots,a_r;q)_n}{(b_1,b_2,\ldots,b_{r-1};q)_n}\frac{z^{n}}{(q;q)_{n}}.\label{phiseries-1}\end{aligned}$$ # The main theorems and proofs This section is devoted to the main theorems of the present paper and their full proofs. **Theorem 4** (The first special $q$-Abel transformation). *With the same notation as in Lemma [Lemma 3](#type-i-i){reference-type="ref" reference="type-i-i"}.
Then, for any integer $n\geq 0$, we have $$\begin{aligned} \sum_{k=0}^{n-1}\frac{E_{k-1}[\bar{a};\bar{p}]}{(p_3p_4)^{(k-1)/2}} \frac{(a_1^2/p_1;p_1)_{k}(a_2^2/p_2;p_2)_{k}} {(a_1a_2p_3X/L;L/p_3)_{k+1}(a_1a_2p_4/LX;L/p_4)_{k+1} }\nonumber\\ \qquad\qquad\qquad\qquad\times\frac{(x_1^2;q_1)_k(x_2^2;q_2)_k}{(x_1x_2Y;L_0/q_3)_{k}(x_1x_2/Y;L_0/q_4)_{k}}\nonumber\\ =1-\frac{(a_1^2/p_1;p_1)_{n}(a_2^2/p_2;p_2)_{n}(x_1^2;q_1)_n(x_2^2;q_2)_n}{ (a_1a_2p_3X/L;L/p_3)_{n}(a_1a_2p_4/LX;L/p_4)_{n} (x_1x_2Y;L_0/q_3)_n(x_1x_2/Y;L_0/q_4)_n}\label{oooooo-1}\\ -\sum_{k=0}^{n-1}\frac{E_k[\bar{x};\bar{q}]}{(q_3q_4)^{k/2}} \frac{(a_1^2/p_1;p_1)_{k+1}(a_2^2/p_2;p_2)_{k+1}} {(a_1a_2p_3X/L;L/p_3)_{k+1}(a_1a_2p_4/LX;L/p_4)_{k+1} }\nonumber\\ \qquad\qquad\qquad\qquad\times\frac{(x_1^2;q_1)_k(x_2^2;q_2)_k}{ (x_1x_2Y;L_0/q_3)_{k+1}(x_1x_2/Y;L_0/q_4)_{k+1}}, \nonumber\end{aligned}$$ where the notation $X:=a_4/a_3, Y:=x_4/x_3$, and for any integer $k\geq -1$, we define $$\begin{aligned} %E_k&:=\big(a_1(p_1p_3)^{k/2}-a_2 %X(p_2p_4)^{k/2}\big)~\big(a_1(p_1p_4)^{k/2}-a_2 (p_2p_3)^{k/2}/X\big),\label{XXX-123}\\ E_k[\bar{x};\bar{q}]&:=\big(x_1(q_1q_3)^{k/2}-x_2x_4 (q_2q_4)^{k/2}/x_3\big)~\big(x_1(q_1q_4)^{k/2}-x_2x_3 (q_2q_3)^{k/2}/x_4\big).\label{YYY-123}\end{aligned}$$* To show this theorem, we need only to set $$(a_3,a_4)\to (a_3s,a_4s)\quad\mbox{and}\quad(x_3,x_4)\to(x_3t,x_4t)$$ in [\[pppppp\]](#pppppp){reference-type="eqref" reference="pppppp"}. Under such a replacement of parameters, we find that $L$ and $L_0$ remain unchanged but $K\to Ks^2$ and $K_0\to K_0t^2$. 
Consequently, the functions factor as $$\Gamma_{k-1}[\bar{a};\bar{p}]=\Gamma_{k-1}[\bar{a};\bar{p}]_{s}s^2\quad\mbox{and}\quad \Gamma_k[\bar{x};\bar{q}]=\Gamma_k[\bar{x};\bar{q}]_tt^2,$$ where $$\begin{aligned} \Gamma_{k-1}[\bar{a};\bar{p}]_{s}&:= \big(a_1a_2 (p_1p_2)^{(k-1)/2}-a_3a_4 s^2 (p_3p_4)^{(k-1)/2}\big)\mbox{Temp}_{k-1}[\bar{a};\bar{p}]\end{aligned}$$ and $$\begin{aligned} \Gamma_k[\bar{x};\bar{q}]_{t}&:= \big(x_1x_2 (q_1q_2)^{k/2}-x_3x_{4}t^2 (q_3q_4)^{k/2}\big)\mbox{Temp}_k[\bar{x};\bar{q}]\nonumber\end{aligned}$$ by defining $$\begin{aligned} \mbox{Temp}_{k}[\bar{x};\bar{q}]&:=\big(x_1x_{3}(q_1q_3)^{k/2}-x_2 x_{4}(q_2q_4)^{k/2}\big)\big(x_1x_{4}(q_1q_4)^{k/2}-x_2 x_{3}(q_2q_3)^{k/2}\big).\end{aligned}$$ With the help of these results, we may specialize [\[pppppp\]](#pppppp){reference-type="eqref" reference="pppppp"} to $$\begin{aligned} S_1=S_2+S_3,\label{S123}\end{aligned}$$ where $S_i~(i=1,2,3)$ are defined, respectively, by $$\begin{aligned} S_1:=\sum_{k=0}^{n-1}\Gamma_{k-1}[\bar{a};\bar{p}]_{s}{s^2} ~ \frac{Ks^2L^{k-1}-1}{K{s^2}L^{k-1} }\\ \qquad\times \frac{(a_1^2;p_1)_{k-1}(a_2^2;p_2)_{k-1}(a_3^2s^2;p_3)_{k-1} (a_4^2s^2;p_4)_{k-1}}{(Ks^2/a_1^{2};L/p_1)_k(Ks^2/a_2^{2};L/p_2)_k(K/a_3^{2};L/p_3)_k(K/a_{4}^2;L/p_4)_k }\\ \qquad\times \frac{(x_1^2;q_1)_k(x_2^2;q_2)_k(x_3^2t^2;q_3)_k(x_4^2t^2;q_4)_k} {(K_{0}t^2/x_1^{2};L_0/q_1)_{k}(K_{0}t^2/x_2^{2};L_0/q_2)_{k} (K_{0}/x_{3}^2;L_0/q_3)_{k}(K_{0}/x_{4}^{2};L_0/q_4)_{k}};\end{aligned}$$ $$\begin{aligned} S_2 :=- \frac {L/p_1-Ks^2/a_1^{2}}{p_1-a_1^2}\frac {L/p_2-Ks^2/a_2^{2}}{p_2-a_2^2}\frac {L/p_3-K/a_{3}^{2}}{p_3-a_{3}^{2}s^2}\frac {L/p_4-K/a_{4}^{2}}{p_4-a_{4}^{2}s^2}\\ +\frac{(a_1^2;p_1)_{n-1}(a_2^2;p_2)_{n-1}(a_3^2s^2;p_3)_{n-1} (a_4^2s^2;p_4)_{n-1}}{(Ks^2/a_1^{2};L/p_1)_{n-1}(Ks^2/a_2^{2};L/p_2)_{n-1} (K/a_{3}^{2};L/p_3)_{n-1}(K/a_4^2;L/p_4)_{n-1} }\\
\times\frac{(x_1^2;q_1)_n(x_2^2;q_2)_n(x_3^2t^2;q_3)_n(x_4^2t^2;q_4)_n} {(K_{0}t^2/x_1^{2};L_0/q_1)_{n}(K_{0}t^2/x_2^{2};L_0/q_2)_{n} (K_{0}/x_{3}^2;L_0/q_3)_{n}(K_{0}/x_{4}^{2};L_0/q_4)_{n}};\end{aligned}$$ $$\begin{aligned} S_3:=\sum_{k=0}^{n-1}\Gamma_k[\bar{x};\bar{q}]_{t}{t^2}~ \frac{1-K_{0}t^2L_0^{k}} {K_{0}{t^2}L_0^{k} }\\ \times \frac{(a_1^2;p_1)_{k}(a_2^2;p_2)_{k}(a_3^2s^2;p_3)_{k} (a_4^2s^2;p_4)_{k}}{(Ks^2/a_1^{2};L/p_1)_k(Ks^2/a_2^{2};L/p_2)_k (K/a_{3}^{2};L/p_3)_kK/a_{4}^{2};L/p_4)_k }\\ \times \frac{(x_1^2;q_1)_k(x_2^2;q_2)_k(x_3^2t^2;q_3)_k(x_4^2t^2;q_4)_k} {(K_{0}t^2/x_1^{2};L_0/q_1)_{k+1}(K_{0}t^2/x_2^{2};L_0/q_2)_{k+1} (K_{0}/x_{3}^2;L_0/q_3)_{k+1}(K_{0}/x_{4}^{2};L_0/q_4)_{k+1}}\nonumber.\end{aligned}$$ Next, by canceling $s$ and $t$ in the denominators and taking $s\to 0, t\to 0$ simultaneously, we simplify [\[S123\]](#S123){reference-type="eqref" reference="S123"} to $$\begin{aligned} &-\sum_{k=0}^{n-1} \frac{\Gamma_{k-1}[\bar{a};\bar{p}]_{s=0}}{KL^{k-1} }\frac{(a_1^2;p_1)_{k-1}(a_2^2;p_2)_{k-1}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(K/a_3^{2};L/p_3)_k(K/a_{4}^2;L/p_4)_k (K_{0}/x_{3}^2;L_0/q_3)_{k}(K_{0}/x_{4}^{2};L_0/q_4)_{k}}\nonumber\\ &=-\frac {L/p_3-K/a_{3}^{2}}{p_1-a_1^2}\frac {L/p_4-K/a_{4}^{2}}{p_2-a_2^2}\\ &+ \frac{(a_1^2;p_1)_{n-1}(a_2^2;p_2)_{n-1}(x_1^2;q_1)_n(x_2^2;q_2)_n}{ (K/a_{3}^{2};L/p_3)_{n-1}(K/a_4^2;L/p_4)_{n-1} (K_{0}/x_{3}^2;L_0/q_3)_n(K_{0}/x_{4}^{2};L_0/q_4)_n}\\ &+\sum_{k=0}^{n-1}\frac{\Gamma_k[\bar{x};\bar{q}]_{t=0}} {K_{0}L_0^{k} }\frac{(a_1^2;p_1)_{k}(a_2^2;p_2)_{k}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(K/a_{3}^{2};L/p_3)_k(K/a_{4}^{2};L/p_4)_k(K_{0}/x_{3}^2;L_0/q_3)_{k+1}(K_{0}/x_{4}^{2};L_0/q_4)_{k+1}} \nonumber.\end{aligned}$$ A substitution of $\Gamma_{k-1}[\bar{a};\bar{p}]_{s=0}$ and $\Gamma_k[\bar{x};\bar{q}]_{t=0}$ yields $$\begin{aligned} &-\frac{1}{a_3a_4}\sum_{k=0}^{n-1} \frac{\mbox{Temp}_{k-1}[\bar{a};\bar{p}]}{(p_3p_4)^{(k-1)/2}}~ 
\frac{(a_1^2;p_1)_{k-1}(a_2^2;p_2)_{k-1}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(K/a_3^{2};L/p_3)_k(K/a_{4}^2;L/p_4)_k (K_{0}/x_{3}^2;L_0/q_3)_{k}(K_{0}/x_{4}^{2};L_0/q_4)_{k}}\nonumber\\ &=-\frac{1-Kp_3/a_{3}^{2}L}{1-a_1^2/p_1}\frac {1-Kp_4/a_{4}^{2}L}{1-a_2^2/p_2}\\ &\qquad\qquad+ \frac{(a_1^2;p_1)_{n-1}(a_2^2;p_2)_{n-1}(x_1^2;q_1)_n(x_2^2;q_2)_n}{ (K/a_{3}^{2};L/p_3)_{n-1}(K/a_4^2;L/p_4)_{n-1} (K_{0}/x_{3}^2;L_0/q_3)_n(K_{0}/x_{4}^{2};L_0/q_4)_n}\\ &+\frac{1}{x_3x_4}\sum_{k=0}^{n-1}\frac{\mbox{Temp}_k[\bar{x};\bar{q}]} {(q_3q_4)^{k/2} }~\frac{(a_1^2;p_1)_{k}(a_2^2;p_2)_{k}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(K/a_{3}^{2};L/p_3)_k(K/a_{4}^{2};L/p_4)_k (K_{0}/x_{3}^2;L_0/q_3)_{k+1}(K_{0}/x_{4}^{2};L_0/q_4)_{k+1}}\nonumber .\end{aligned}$$ As a last step, we reformulate this identity in terms of $X, Y, E_k[\bar{x};\bar{q}]$ and multiply both sides by $$-\frac{1-a_1^2/p_1}{1-Kp_3/a_{3}^{2}L}\frac {1-a_2^2/p_2}{1-Kp_4/a_{4}^{2}L}.$$ Hence the theorem is proved. ------------------------------------------------------------------------ **Theorem 5** (The second special $q$-Abel transformation). *With the same notation as in Lemma [Lemma 3](#type-i-i){reference-type="ref" reference="type-i-i"}.
Then, for any integer $n\geq 0$, we have $$\begin{aligned} &\sum_{k=0}^{n-1} \frac{E_{k-1}[\bar{a};\bar{p}]}{(p_3p_4)^{(k-1)/2}}\frac{(a_1^2/p_1;p_1)_{k}(a_2^2/p_2;p_2)_{k}(x_1^2;q_1)_k}{(Kp_3/La_3^2;L/p_3)_{k+1}(Kp_4/La_4^2;L/p_4)_{k+1} (K_0/x_2^2;L_0/q_2)_{k}}\nonumber\\ &=1- \frac{(a_1^2/p_1;p_1)_{n}(a_2^2/p_2;p_2)_{n}(x_1^2;q_1)_n}{ (Kp_3/La_3^2;L/p_3)_{n}(Kp_4/La_4^2;L/p_4)_{n} (K_0/x_2^2;L_0/q_2)_n} \\ &-\frac{x_1}{x_2}\sum_{k=0}^{n-1}D_k[\bar{x};\bar{q}]\bigg(\frac{q_1} {q_2}\bigg)^{k/2}\frac{(a_1^2/p_1;p_1)_{k+1}(a_2^2/p_2;p_2)_{k+1}(x_1^2;q_1)_k}{(Kp_3/La_3^2;L/p_3)_{k+1}(Kp_4/La_4^2;L/p_4)_{k+1}(K_0/x_2^2;L_0/q_2)_{k+1}}, \nonumber\end{aligned}$$ where $E_k$ is the same as in [\[YYY-123\]](#YYY-123){reference-type="eqref" reference="YYY-123"} and $$\begin{aligned} D_k[\bar{x};\bar{q}]:=x_1x_2(q_1q_2)^{k/2}-x_3x_4(q_3q_4)^{k/2}.\label{YYY-123123}\end{aligned}$$* To prove Theorem [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"}, we make the replacement of parameters $$\big(a_3, a_4\big) \rightarrow\big(a_3 s, a_4 s\big)~~ \text { and }~~(x_2, x_3, x_4) \rightarrow(x_2 t^2, x_3 t, x_4 t)$$ in [\[pppppp\]](#pppppp){reference-type="eqref" reference="pppppp"}. As in the proof of Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}, by simplifying and taking the limit as $s, t$ tend to zero, we obtain the desired transformation. The remaining details are similar and thus left to the reader. ------------------------------------------------------------------------ **Theorem 6** (The third special $q$-Abel transformation). *With the same notation as in Lemma [Lemma 3](#type-i-i){reference-type="ref" reference="type-i-i"} and Theorem [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"}.
Then we have $$\begin{aligned} &\frac{a_1}{a_2}\sum_{k=0}^{n-1}D_{k-1}[\bar{a};\bar{p}] \bigg(\frac{p_1}{p_2}\bigg)^{(k-1)/2} \frac{(a_1^2/p_1;p_1)_{k}(x_1^2;q_1)_k}{ (Kp_2/La_2^2;L/p_2)_{k+1} (K_0/x_2^2;L_0/q_2)_{k}}\nonumber\\ &=1- \frac{(a_1^2/p_1;p_1)_{n}(x_1^2;q_1)_n}{ (Kp_2/La_2^2;L/p_2)_{n} (K_0/x_2^2;L_0/q_2)_n} \label{eq116}\\ &-\frac{x_1}{x_2}\sum_{k=0}^{n-1}D_k[\bar{x};\bar{q}] \bigg(\frac{q_1} {q_2}\bigg)^{k/2}\frac{(a_1^2/p_1;p_1)_{k+1} (x_1^2;q_1)_k}{(Kp_2/La_2^2;L/p_2)_{k+1}(K_0/x_2^2;L_0/q_2)_{k+1}}. \nonumber\end{aligned}$$* To obtain this theorem, we apply the change of parameters $$(a_2,a_3, a_4) \rightarrow(a_2 s^2, a_3 s, a_4 s)~~ \text { and }~~(x_2, x_3, x_4) \rightarrow(x_2 t^2, x_3 t, x_4 t)$$ to [\[pppppp\]](#pppppp){reference-type="eqref" reference="pppppp"} and carry out the same procedure as above. We leave the details to the interested reader. ------------------------------------------------------------------------ **Remark 7**. *We believe that Theorems [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"} -- [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} are genuinely different from each other: Theorem [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} cannot be deduced from Theorem [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"} simply by letting $a_2=0$, while Theorem [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"} is not the special case $x_2=0$ of Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}.
Later as we will see, they lead us to different $q$-series transformations.* # Applications Throughout the sequel, we will establish a few special cases of Theorems [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"} -- [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} and then investigate their applications to $q$-series. ## Applications of Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"} {#applications-of-theorem-firsteq} To begin, we need to establish a specific but useful case of Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}. **Theorem 8**. *With the same notation as Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}. Let $p_1=(q_1 q_2 q_4/q_3)^{1/2}$. Then it holds for any integer $n\geq 0$ $$\begin{aligned} &\sum_{k=0}^{n-1} \big((p_2 p_4/p_3)^{(k-1)/2}a_2X-p_1^{(k-1)/2}a_1\big) \big((p_2 p_3/p_4)^{(k-1)/2}a_2/X-p_1^{(k-1)/2}a_1\big)\nonumber\\ &\quad\times~ \frac{(a_2^2/p_2;p_2)_{k}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(a_1a_2p_3X/L;L/p_3)_{k+1} (a_1a_2p_4/XL;L/p_4)_{k+1} (x_1^2x_2^2p_1/a_1^2;L_0/q_4)_{k}}\nonumber\\ &=1-\frac{(a_2^2/p_2;p_2)_{n}(x_1^2;q_1)_n(x_2^2;q_2)_n}{ (a_1a_2p_3X/L;L/p_3)_{n}(a_1a_2p_4/XL;L/p_4)_{n} (x_1^2x_2^2p_1/a_1^2;L_0/q_4)_n}\label{threethreeeq}\\ &-\frac{1-a_2^2/p_2}{1-x_1^2x_2^2p_1/a_1^2} \sum_{k=0}^{n-1} \big((q_1 q_4/q_3)^{k/2} -q_2^{k/2}p_1x_2^2/a_1^2\big)\big((q_1 q_3/ q_4)^{k/2}x_1^2-q_2^{k/2}a_1^2/p_1\big)\nonumber\\ &\quad\times~\frac{(a_2^2;p_2)_{k}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(a_1a_2p_3X/L;L/p_3)_{k+ 1}(a_1a_2p_4/XL;L/p_4)_{k+1} (x_1^2x_2^2p_1L_0/q_4a_1^2;L_0/q_4)_{k}}. \nonumber\end{aligned}$$* To show [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"}, we need to assume further that, besides $p_1=L_0/q_3=(q_1 q_2 q_4/q_3)^{1/2}$, $$\begin{aligned} a_1^2/p_1=x_1x_2Y, \quad\mbox{i.e.,}\quad Y=a_1^2/p_1x_1x_2. 
\label{conditionY}\end{aligned}$$ As such, we specialize Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"} to $$\begin{aligned} &\sum_{k=0}^{n-1} \frac{E_{k-1}[\bar{a};\bar{p}]}{(p_3p_4)^{(k-1)/2}}~ \frac{(a_2^2/p_2;p_2)_{k}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(a_1a_2p_3X/L;L/p_3)_{k+1} (a_1a_2p_4/XL;L/p_4)_{k+1} (x_1^2x_2^2p_1/a_1^2;L_0/q_4)_{k}}\nonumber\\ &=1-\frac{(a_2^2/p_2;p_2)_{n}(x_1^2;q_1)_n(x_2^2;q_2)_n}{ (a_1a_2p_3X/L;L/p_3)_{n}(a_1a_2p_4/XL;L/p_4)_{n} (x_1^2x_2^2p_1/a_1^2;L_0/q_4)_n}-\frac{1-a_2^2/p_2}{1-x_1x_2/Y}\nonumber\\ &\times\sum_{k=0}^{n-1}\frac{E_k[\bar{x};\bar{q}]} {(q_3q_4)^{k/2} }~\frac{(a_2^2;p_2)_{k}(x_1^2;q_1)_k(x_2^2;q_2)_k}{(a_1a_2p_3X/L;L/p_3)_{k+1} (a_1a_2p_4/XL;L/p_4)_{k+1} (x_1x_2L_0/q_4Y;L_0/q_4)_{k}}. \nonumber\end{aligned}$$ Next, referring to [\[YYY-123\]](#YYY-123){reference-type="eqref" reference="YYY-123"} and using the conditions that $a_1=(x_1x_2p_1Y)^{1/2}$ and $p_1=(q_1 q_2 q_4/q_3)^{1/2}$, we may compute $$\begin{aligned} E_k[\bar{a};\bar{p}]&= %\big(a_1(p_1p_3)^{\frac{k}{2}}-a_2 %X(p_2p_4)^{\frac{k}{2}}\big)~\big(a_1(p_1p_4)^{\frac{k}{2}}-a_2 (p_2p_3)^{\frac{k}{2}}/X\big)\\ %&= (p_3p_4)^{k/2}\big(a_1p_1^{k/2}-a_2 X(p_2p_4/p_3)^{k/2}\big)~ \big(a_1p_1^{k/2}-a_2 (p_2p_3/p_4)^{k/2}/X\big);\\ E_k[\bar{x};\bar{q}]&= %(q_3q_4)^{k/2} %\big((q_1 q_4/q_3)^{k/2} x_1-q_2^{k/2}x_2/Y\big)\big((q_1 q_3/ q_4)^{k/2}x_1-q_2^{k/2}x_2 Y\big)\\ %&= (q_3q_4)^{k/2} \big((q_1 q_4/q_3)^{k/2} -q_2^{k/2}p_1x_2^2/a_1^2\big)\big((q_1 q_3/ q_4)^{k/2}x_1^2-q_2^{k/2} a_1^2/p_1\big).\end{aligned}$$ Rewriting the above we get [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"}. 
------------------------------------------------------------------------ Regarding applications of Theorems [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"} -- [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} to $q$-series, we also need the general solution of an arbitrary recurrence relation of order one. As we will see later, such recurrence relations arise frequently as contiguous relations of the truncated series associated with Abel's lemma. **Lemma 9**. *Assume the sequence $\{F_n(W)\}_{n\geq 0}$ satisfies the functional equation $$\begin{aligned} F_n(W)=C(W)\sigma\big(F_n(W)\big)+D_n(W),\label{analticrec-fu}\end{aligned}$$ where the vector $W:=(a_1,a_2,\cdots,a_r)\in\mathbb{C}^r$ and the operator (namely, the replacement of parameters) $\sigma$ acting on $\mathbb{C}^r$ is defined by $$\begin{aligned} \sigma(W)&:=(\sigma(a_1),\sigma(a_2),\cdots,\sigma(a_r)); \\ \sigma^0(W)&:=W; \quad \sigma^k(W):=\sigma\big(\sigma^{k-1}(W)\big).\end{aligned}$$ For any function $G(W)$, we define $$\begin{aligned} \sigma\Big(G(W)\Big)&:=G(\sigma(W)).\end{aligned}$$ Then, for $m\geq 0$, we have $$\begin{aligned} F_n(W)=\sigma^m(F_n(W))\prod_{k=0}^{m-1}\sigma^k(C(W))+\sum_{k=0}^{m-1} \sigma^k(D_n(W))\prod_{i=0}^{k-1}\sigma^i(C(W)). \label{Fn}\end{aligned}$$* Identity [\[Fn\]](#Fn){reference-type="eqref" reference="Fn"} can be proved by induction on $m$. We omit the details and leave them to the reader. ------------------------------------------------------------------------ In the rest of this subsection, we turn to study three truncated series by virtue of [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"}. As we will see later, these finite sums are endowed with very good $q$-contiguous relations with respect to various parameters, and these $q$-contiguous relations often lead us to $q$-series transformations.
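Lemma [Lemma 9](#xxx-xxx){reference-type="ref" reference="xxx-xxx"} is elementary but worth a quick machine check. In the sketch below (our own illustration; the concrete choices of $\sigma$, $C$ and $D_n$ are arbitrary), the values $F_n(\sigma^k(W))$ are built by back-substitution from the recurrence [\[analticrec-fu\]](#analticrec-fu){reference-type="eqref" reference="analticrec-fu"} and then compared with the closed form [\[Fn\]](#Fn){reference-type="eqref" reference="Fn"} for every admissible $m$.

```python
import math

q = 0.5

def sigma(W):
    # an arbitrary parameter replacement acting on W = (a, b)
    a, b = W
    return (a * q, b / q)

def C(W):
    a, b = W
    return a / (1.0 + b)

def D(W):
    a, b = W
    return a + b

M = 12
Ws = [(1.3, 0.7)]
for _ in range(M):
    Ws.append(sigma(Ws[-1]))

# build F at sigma^k(W) by back-substitution from an arbitrary seed value
F = {M: 2.25}
for k in range(M - 1, -1, -1):
    F[k] = C(Ws[k]) * F[k + 1] + D(Ws[k])

# compare with the closed form (Fn) for every m = 0, ..., M
for m in range(M + 1):
    prod, tail = 1.0, 0.0
    for k in range(m):
        tail += D(Ws[k]) * prod
        prod *= C(Ws[k])
    assert math.isclose(F[0], F[m] * prod + tail, rel_tol=1e-9)
```

By construction $F$ satisfies the recurrence exactly, so the assertion amounts to the induction step of the lemma.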
For convenience, we record the following concept, which we originally proposed in [@xuxrma]. **Definition 10** ($(R,S)$-type transformation with degree $2m$: [@xuxrma Def. 4.2]). *The transformation [\[pppppp\]](#pppppp){reference-type="eqref" reference="pppppp"} is said to be of $(R,S)$-type with degree $2m$, provided that $$p_i=q^{2r_i},\quad q_i=q^{2s_i},\quad r_i,s_i>0;~~ m=\max\{r_i,s_i\,|\,1\leq i\leq 4\}$$ and $R,S$ are the cardinalities of the index sets $\{r_i\,|\,1\leq i\leq 4\}$ and $\{s_i\,|\,1\leq i\leq 4\}$, respectively.* We remark that this definition generalizes Gasper and Rahman's terminology [@Gasper90] on the quadratic, cubic, quartic, and quintic transformations. ### $(2,3)$-Type of cubic transformation As planned, we will apply [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"} to investigate a possible $q$-contiguous relation for the finite sum [\[macc-old\]](#macc-old){reference-type="eqref" reference="macc-old"} below, which was discussed by the first author in her master's thesis [@xujianan Chap. 3]: $$\begin{aligned} \mathbf{\Omega}_n(a,b,c):=\sum_{k=0}^{n-1} q^{k} \frac{(b/q;q)_{k}(cq;q)_{2k}}{(aq^3,c/aq;q)_k(bcq^2;q^3)_k}.\label{macc-old}\end{aligned}$$ Indeed, we can easily show **Theorem 11**. *Let $\mathbf{\Omega}_n(a,b,c)$ be given by [\[macc-old\]](#macc-old){reference-type="eqref" reference="macc-old"}.
Then for any integer $n\geq 0$, it holds the $q$-contiguous relation $$\begin{aligned} \mathbf{\Omega}_n(a,b,c)&=\frac{c(q-b)(aq;q)_2} {(b-aq^3)(1-c)(aq-c)}\mathbf{\Omega}_n(a/q^2,bq,c/q)\label{macc}\\ &+\frac{q(1-aq^2) (1-bc/q)}{(b-aq^3)(1-c)}\left\{1-\frac{(b/q;q)_{n}(c;q)_{2n}}{ (aq^2,c/aq;q)_{n}(bc/q;q^3)_{n} }\right\}\nonumber.\end{aligned}$$* In order to obtain [\[macc\]](#macc){reference-type="eqref" reference="macc"}, it suffices to take in [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"} that $$\begin{aligned} (p_1,p_2, p_3, p_4)\to (q^3,q,q^3, q)~~\mbox{and}~~(q_1,q_2, q_3, q_4)\to(q^2,q^2,q,q^3).\end{aligned}$$ Under such substitutions, it is easy to check $L=L_0=q^4$ and $$\begin{aligned} E_k[\bar{a};\bar{p}]&=q^{3k}(a_1-a_2/X)(a_1q^{2k}-a_2 X),\\ E_k[\bar{x};\bar{q}]&=q^{3k}(x_1-x_2Yq^k)(x_1q^k-x_2/Y), ~~ Y=a_1^2/x_1x_2q^3.\end{aligned}$$ After some simplification, we specialize [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"} to $$\begin{aligned} \displaystyle &\frac{-a_2X\big(a_1-a_2/X\big)}{q(1-a_1a_2X/q) }\sum_{k=0}^{n-1} \frac{q^{k}\big(1-a_1q^{2(k-1)}/a_2X\big)}{ 1-a_1a_2/Xq^3}~ \frac{(a_2^2/q;q)_{k}(x_1^2,x_2^2;q^2)_k}{(a_1a_2X;q)_k(a_1a_2/X;q^3)_k(x_1^2x_2^2q^3/a_1^2;q)_{k}}\nonumber\\ &=1-\frac{(a_2^2/q;q)_{n}}{ (a_1a_2X/q;q)_{n}(a_1a_2/Xq^3;q^3)_{n} }\frac{(x_1^2,x_2^2;q^2)_n} {(x_1^2x_2^2q^3/a_1^2;q)_n}+\frac{x_1^2x_2^2q^3}{a_1^2}\frac{1-a_2^2/q}{1-x_1^2x_2^2q^3/a_1^2}\label{eq34}\\ &\quad\times~\sum_{k=0}^{n-1}\frac{q^k\big(1-a_1^2q^{k-3}/x_1^2\big)\big(1-a_1^2q^{k-3}/x_2^2\big)}{(1-a_1a_2X/q) (1-a_1a_2/Xq^3)}~\frac{(a_2^2;q)_{k}(x_1^2,x_2^2;q^2)_k}{(a_1a_2X;q)_k(a_1a_2/X;q^3)_k (x_1^2x_2^2q^4/a_1^2;q)_{k}}. \nonumber\end{aligned}$$ Now we are ready to show [\[macc\]](#macc){reference-type="eqref" reference="macc"}. 
To do this, we make the following parametric replacements in [\[eq34\]](#eq34){reference-type="eqref" reference="eq34"}: $$\begin{aligned} (x_1^2,x_2^2,a_1^2,a_2^2, X^2)\to(c,cq,acq^5,b,aq/bc).\end{aligned}$$ Under this substitution, it is easy to check $$\begin{aligned} 1-a_1q^{2(k-1)}/a_2X&=1-cq^{2k},\\ \big(1-a_1^2q^{k-3}/x_1^2\big)\big(1-a_1^2q^{k-3}/x_2^2\big) &=(1-aq^{k+1})(1-aq^{k+2}).\end{aligned}$$ Simplifying [\[eq34\]](#eq34){reference-type="eqref" reference="eq34"} by these computational results, we get $$\begin{aligned} &\frac{(b-aq^3)(1-c)}{q(1-aq^2) (1-bc/q)}\sum_{k=0}^{n-1} q^{k} \frac{(b/q;q)_{k}(cq;q)_{2k}}{(aq^3,c/aq;q)_k (q^2bc;q^3)_k}\nonumber\\ &=1-\frac{(b/q;q)_{n}(c;q)_{2n}}{ (aq^2,c/aq;q)_{n}(bc/q;q^3)_{n} } \\ &+\frac{c}{aq}\frac{(1-b/q)(1-aq)} {(1-c/aq)(1-bc/q)}\sum_{k=0}^{n-1}q^k\frac{(b;q)_{k}(c;q)_{2k}}{(aq,c/a;q)_k(bcq^2;q^3)_k }. \nonumber\end{aligned}$$ Restating this in terms of $\mathbf{\Omega}_n(a,b,c)$, the conclusion follows. ------------------------------------------------------------------------ **Remark 12**. *As usual, we refer to the case $n<+\infty$ of any identity like [\[macc\]](#macc){reference-type="eqref" reference="macc"} as a contiguous relation, and to the corresponding case $n=+\infty$ as a transformation.* One purpose of our pursuit of the contiguous relation [\[macc\]](#macc){reference-type="eqref" reference="macc"} is to establish useful $q$-series transformations. As expected, by using Lemma [Lemma 9](#xxx-xxx){reference-type="ref" reference="xxx-xxx"}, one may easily find **Corollary 13** (cf. [@xujianan Thm 3.3.1]).
*For integers $m,n\geq 0$, it holds $$\begin{aligned} & \mathbf{\Omega}_n(a, b, c)-\mathbf{\Omega}_n\left(a / q^{2 m}, bq^m, c/ q^m\right) \frac{\left(1 / a q^2 ; q\right)_{2 m}(b / q ; q)_m}{(c / a q, 1 / c ; q)_m\left(b / a q^3 ; q^3\right)_m}\label{macc-1} \\ =& \frac{\left(1-a q^2\right)(1-q / b c)}{\left(1-a q^3 / b\right)(1-1 / c)}\left\{\mathcal{K}_m(a, b, c)-\mathcal{K}_m\left(aq^n, bq^n, cq^{2 n}\right) \frac{(b / q ; q)_n(c ; q)_{2 n}}{\left(a q^2, c / a q ; q\right)_n\left(b c / q ; q^3\right)_n}\right\},\nonumber\end{aligned}$$ where $$\mathcal{K}_m(a, b, c)=\sum_{k=0}^{m-1} q^k \frac{(1 / a q ; q)_{2 k}(b / q ; q)_k}{(q / c, c / a q ; q)_k\left(b / a ; q^3\right)_k}.$$* Observe that [\[macc\]](#macc){reference-type="eqref" reference="macc"} can be restated as the form $$\begin{aligned} \mathbf{\Omega}_n(a,b,c)&=C(a,b,c) \sigma\big(\mathbf{\Omega}_n(a,b,c)\big)+D_n(a,b,c), \label{macc-1-11}\end{aligned}$$ where the operator $$\sigma(a,b,c):=(a/q^2,bq,c/q)$$ and the coefficients $$\begin{aligned} C(a,b,c)&:=\frac{c(q-b)(aq;q)_2} {(b-aq^3)(1-c)(aq-c)},\\ D_n(a,b,c)&:=\frac{q(1-aq^2) (1-bc/q)}{(b-aq^3)(1-c)}\left\{1-\frac{(b/q;q)_{n}(c;q)_{2n}}{ (aq^2,c/aq;q)_{n}(bc/q;q^3)_{n} }\right\}.\end{aligned}$$ Hence, [\[macc-1\]](#macc-1){reference-type="eqref" reference="macc-1"} comes out by applying Lemma [Lemma 9](#xxx-xxx){reference-type="ref" reference="xxx-xxx"} to [\[macc-1-11\]](#macc-1-11){reference-type="eqref" reference="macc-1-11"}. ------------------------------------------------------------------------ ### $(3,1)$-Type of cubic transformation Now we reconsider the finite sum $$\label{Omega-3} \mathbf{\chi}_n(a,b,c):=\sum_{k=0}^{n-1}q^{k} \frac{(abc;q^3)_{k}(b,c;q)_k} {(bc/q^2;q)_{2k+2}(aq;q)_{k}} .$$ It was studied in [@chen0 (1.1)]. By virtue of [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"}, we now discover a $q$-contiguous relation of $\mathbf{\chi}_n(a,b,c)$. **Theorem 14** (cf. [@chen0 p.793]). 
*Let $\mathbf{\chi}_n(a,b,c)$ be given by [\[Omega-3\]](#Omega-3){reference-type="eqref" reference="Omega-3"}. Then, for any integer $n\geq 0$, it holds the $q$-contiguous relation $$\begin{aligned} \mathbf{\chi}_n(a,b,c)&= \frac{(1-abc)(1-aq^3/b)(1-aq^3/c) }{(a q;q)_3}\mathbf{\chi}_n(aq^3,b,c)\label{threethreeeq-12345600}\\ &+\frac{aq^3}{bc(aq;q)_2} \left\{1-\frac{(abc;q^3)_{n}(b,c;q)_n}{ (bc/q^2;q)_{2n} (aq^3;q)_n}\right\}. \nonumber\end{aligned}$$* To show [\[threethreeeq-12345600\]](#threethreeeq-12345600){reference-type="eqref" reference="threethreeeq-12345600"}, we specialize [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"} by setting $$\begin{aligned} (p_1,p_2,p_3,p_4)\to(q,q^3,q^2,q^2)~~\mbox{and}~~(q_1,q_2,q_3,q_4)\to(q,q,q,q).\end{aligned}$$ Under these circumstances, it is easy to check $L=q^4, L_0=q^2$ and then to reduce [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"} to $$\begin{aligned} &a_1^2\sum_{k=0}^{n-1}q^{k-1} \big(1-a_2Xq^{k-1}/a_1\big) \big(1-a_2q^{k-1}/a_1X\big)\nonumber\\ &\qquad\quad\times \frac{(a_2^2/q^3;q^3)_{k}(x_1^2,x_2^2;q)_k} {(a_1a_2X/q^2,a_1a_2/Xq^2;q^2)_{k+1} (x_1^2x_2^2q/a_1^2;q)_{k}}\nonumber\\ &=1-\frac{(a_2^2/q^3;q^3)_{n}(x_1^2,x_2^2;q)_n}{ (a_1a_2X/q^2,a_1a_2/Xq^2;q^2)_{n} (x_1^2x_2^2q/a_1^2;q)_n}\label{threethreeeq-123}\\ &-\big(1-x_2^2q/a_1^2\big)\big(x_1^2-a_1^2/q\big)\frac{1-a_2^2/q^3}{1-x_1^2x_2^2q/a_1^2} \nonumber\\ &\qquad\quad\times\sum_{k=0}^{n-1}q^{k}\frac{(a_2^2;q^3)_{k}(x_1^2,x_2^2;q)_k} {(a_1a_2X/q^2,a_1a_2/Xq^2;q^2)_{k+1} (x_1^2x_2^2q^2/a_1^2;q)_{k}}.
\nonumber\end{aligned}$$ If $a_1a_2X=x_1^2x_2^2$ and $a_1a_2/X=x_1^2x_2^2q$, then $$a_1a_2=x_1^2x_2^2q^{1/2}~~\mbox{and}~~ X=q^{-1/2}.$$ In this case, it is easy to check $$\begin{aligned} &\frac{(1-a_2Xq^{k-1}/a_1) (1-a_2q^{k-1}/a_1X)}{ (x_1^2x_2^2q/a_1^2;q)_{k}}=\frac{(1-a_1a_2Xq^{k-1}/a_1^2) (1-a_1a_2q^{k-1}/a_1^2X)}{ (x_1^2x_2^2q/a_1^2;q)_{k}}\\ &\quad\qquad=\frac{(1-x_1^2x_2^2q^{k-1}/a_1^2) (1-x_1^2x_2^2q^{k}/a_1^2)}{ (x_1^2x_2^2q/a_1^2;q)_{k}}=\frac{(1-x_1^2x_2^2/a_1^2q) (1-x_1^2x_2^2/a_1^2)}{ (x_1^2x_2^2/a_1^2q;q)_{k}}.\end{aligned}$$ Using these expressions, we may reformulate both sides of [\[threethreeeq-123\]](#threethreeeq-123){reference-type="eqref" reference="threethreeeq-123"} in the form $$\begin{aligned} &\frac{a_1^2\big(1-x_1^2x_2^2/a_1^2q\big) \big(1-x_1^2x_2^2/a_1^2\big)}{q}\sum_{k=0}^{n-1}q^{k} \frac{(x_1^4x_2^4/a_1^2q^2;q^3)_{k}(x_1^2,x_2^2;q)_k} {(x_1x_2/q^2,x_1x_2/q;q^2)_{k+1} (x_1^2x_2^2/a_1^2q;q)_{k}}\nonumber\\ &=1-\frac{(x_1^4x_2^4/a_1^2q^2;q^3)_{n}(x_1^2,x_2^2;q)_n}{ (x_1^2x_2^2/q^2,x_1^2x_2^2/q;q^2)_{n} (x_1^2x_2^2q/a_1^2;q)_n}-\big(1-x_2^2q/a_1^2\big) \big(x_1^2-a_1^2/q\big) \label{threethreeeq-123}\\ &\quad\times \frac{1-x_1^4x_2^4/a_1^2q^2}{1-x_1^2x_2^2q/a_1^2}\sum_{k=0}^{n-1}q^{k} \frac{(x_1^4x_2^4q/a_1^2;q^3)_{k}(x_1^2,x_2^2;q)_k} {(x_1^2x_2^2/q^2,x_1^2x_2^2/q;q^2)_{k+1} (x_1^2x_2^2q^2/a_1^2;q)_{k}}. \nonumber\end{aligned}$$ Evidently, under the replacement of parameters $$\begin{aligned} (a_1^2,x_1^2,x_2^2)\to(bc/aq^2,b,c),\end{aligned}$$ [\[threethreeeq-123\]](#threethreeeq-123){reference-type="eqref" reference="threethreeeq-123"} assumes the shape as [\[threethreeeq-12345600\]](#threethreeeq-12345600){reference-type="eqref" reference="threethreeeq-12345600"}. 
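Before moving on, it is reassuring to test [\[threethreeeq-12345600\]](#threethreeeq-12345600){reference-type="eqref" reference="threethreeeq-12345600"} numerically. The short script below (our own sanity check, with arbitrary generic parameter values $q=0.5$, $a=0.3$, $b=0.7$, $c=1.3$) evaluates both sides of the relation for several $n$:

```python
def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n."""
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def chi(n, a, b, c, q):
    """chi_n(a,b,c) = sum_{k<n} q^k (abc;q^3)_k (b;q)_k (c;q)_k
                      / [ (b c/q^2;q)_{2k+2} (a q;q)_k ]."""
    total = 0.0
    for k in range(n):
        num = q**k * qpoch(a*b*c, q**3, k) * qpoch(b, q, k) * qpoch(c, q, k)
        den = qpoch(b*c/q**2, q, 2*k + 2) * qpoch(a*q, q, k)
        total += num / den
    return total

q, a, b, c = 0.5, 0.3, 0.7, 1.3
for n in range(7):
    lhs = chi(n, a, b, c, q)
    C = (1 - a*b*c) * (1 - a*q**3/b) * (1 - a*q**3/c) / qpoch(a*q, q, 3)
    D = (a*q**3 / (b*c*qpoch(a*q, q, 2))) * (
        1 - qpoch(a*b*c, q**3, n) * qpoch(b, q, n) * qpoch(c, q, n)
            / (qpoch(b*c/q**2, q, 2*n) * qpoch(a*q**3, q, n)))
    rhs = C * chi(n, a*q**3, b, c, q) + D
    assert abs(lhs - rhs) < 1e-10
```

The case $n=0$ is trivial (both sides vanish), while $n\geq 1$ exercises all coefficients of the relation.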
------------------------------------------------------------------------ Starting from [\[threethreeeq-12345600\]](#threethreeeq-12345600){reference-type="eqref" reference="threethreeeq-12345600"} and using Lemma [Lemma 9](#xxx-xxx){reference-type="ref" reference="xxx-xxx"}, we recover without any difficulty **Corollary 15** (cf. [@chen0 p.796]). *For integer $m\geq 0$, it holds $$\begin{aligned} &\mathbf{\chi}_\infty(a,b,c) =\mathbf{\chi}_\infty(aq^{3m},b,c)\frac{\big(a b c, aq^3/ b, aq^3/ c;q^3)_m }{\big(aq;q)_{3m}} \nonumber\\ & +\frac{aq^3}{bc(1-aq)\left(1-aq^2 \right)} \sum_{k=0}^{m-1} q^{3 k}\frac{\big( a b c, aq^3/b, aq^3/c;q^3\big)_k }{\big( aq^3;q\big)_{3k}} \label{threethreeeq-12345600-new}\\ & -\frac{aq^3(b, c;q)_\infty (a b c ; q^3)_{\infty}}{bc(aq, b c/q^2;q)_{\infty}} \sum_{k=0}^{m-1}q^{3 k}(aq^3/ b ,aq^3/ c; q^3)_k .\nonumber\end{aligned}$$* Observe first that [\[threethreeeq-12345600\]](#threethreeeq-12345600){reference-type="eqref" reference="threethreeeq-12345600"} can be restated as the operator form $$\begin{aligned} \mathbf{\chi}_n(a,b,c)&=C(a,b,c) \sigma\big(\mathbf{\chi}_n(a,b,c)\big)+D_n(a,b,c), \label{threethreeeq-12345600-1}\end{aligned}$$ where the operator $$\sigma(a,b,c):=(aq^3,b,c)$$ and the coefficients $$\begin{aligned} C(a,b,c)&:= \frac{(1-abc)(1-aq^3/b)(1-aq^3/c) }{(a q;q)_3},\\ D_n(a,b,c)&:=\frac{aq^3}{bc(aq;q)_2} \left\{1-\frac{(abc;q^3)_{n}(b,c;q)_n}{ (bc/q^2;q)_{2n} (aq^3;q)_n}\right\}.\end{aligned}$$ Hence, by Lemma [Lemma 9](#xxx-xxx){reference-type="ref" reference="xxx-xxx"}, we may derive from [\[Fn\]](#Fn){reference-type="eqref" reference="Fn"} that $$\begin{aligned} \mathbf{\chi}_n(a,b,c)&=\mathbf{\chi}_n(aq^{3m},b,c) \prod_{i=0}^{m-1}\frac{(1-abcq^{3i})(1-aq^{3i+3}/b)(1-aq^{3i+3}/c) }{(aq^{3i+1};q)_3}\\ &+\sum_{i=0}^{m-1}\frac{aq^{3i+3}}{bc(aq^{3i+1};q)_2} \left\{1-\frac{(abcq^{3i};q^3)_{n}(b,c;q)_n}{ (bc/q^2;q)_{2n} (aq^{3i+3};q)_n}\right\}\\ 
&\qquad\quad\times\prod_{j=0}^{i-1}\frac{(1-abcq^{3j})(1-aq^{3j+3}/b)(1-aq^{3j+3}/c) }{(aq^{3j+1};q)_3}.\end{aligned}$$ Further simplification leads us to $$\begin{aligned} \mathbf{\chi}_n(a,b,c) &=\mathbf{\chi}_n(aq^{3m},b,c)\frac{\big(a b c, aq^3/ b, aq^3/ c;q^3)_m }{\big(aq;q)_{3m}} \\ & +\frac{aq^3 }{bc(1-aq)\left(1-aq^2 \right)} \sum_{i=0}^{m-1} q^{3 i}\frac{\big( a b c, aq^3/ b, aq^3/ c;q^3\big)_i }{\big( aq^3;q\big)_{3i}} \\ & -\frac{aq^3(b, c;q)_n}{bc(b c/q^2;q)_{2n}} \sum_{i=0}^{m-1}q^{3i}\frac{(abc; q^3)_{n+i}(aq^3/ b,aq^3/ c; q^3)_i}{ (aq; q)_{n+3i+2}}.\end{aligned}$$ Finally, [\[threethreeeq-12345600-new\]](#threethreeeq-12345600-new){reference-type="eqref" reference="threethreeeq-12345600-new"} comes out by taking the limit as $n\to \infty$. ------------------------------------------------------------------------ ### $(1,2)$-Type of quadratic transformation In the same way as above, we continue to investigate a new finite sum $$\mathbf{\Xi}_n(a,b,c,d)=\sum_{k=0}^{n-1}q^k \frac{(a,b;q)_k(c;q^2)_k} {(d,c/dq;q)_{k+1}(abq;q^2)_{k} }. \label{Chieq}$$ **Theorem 16**. *Let $\mathbf{\Xi}_n(a,b,c,d)$ be given by [\[Chieq\]](#Chieq){reference-type="eqref" reference="Chieq"}. Then, for any integer $n\geq 0$, it holds the $q$-contiguous relation $$\begin{aligned} \mathbf{\Xi}_n(a,b,c,d)&= \frac{d(a;q)_2(c-abq)}{\big(c-adq\big) \big(d-a\big)(1-abq)} \mathbf{\Xi}_n(aq^2,b,c,d)\label{xrma-1}\\ &+\frac{aq}{\big(c-adq\big) \big(1-a/d\big)}\left\{1- \frac{(a,b;q)_n(c;q^2)_n} {(d,c/dq;q)_{n} (abq;q^2)_n}\right\}\nonumber.
\nonumber\end{aligned}$$* To show [\[xrma-1\]](#xrma-1){reference-type="eqref" reference="xrma-1"}, it suffices to take in [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"} that $a_2^2=x_2/x_1Y$, and $$\begin{aligned} (p_1,p_2,p_3,p_4)&\to(q,q,q,q)~~\mbox{and}~~(q_1,q_2,q_3,q_4)\to(q,q^2,q^2,q).\end{aligned}$$ In this case, it is easy to check $L=q^2,L_0=q^3$ and specify [\[threethreeeq\]](#threethreeeq){reference-type="eqref" reference="threethreeeq"} as the form $$\begin{aligned} &\sum_{k=0}^{n-1} \big(a_2Xq^{(k-1)/2}-(x_1x_2Y)^{1/2}q^{k/2}\big) \big(a_2q^{(k-1)/2}/X-(x_1x_2Y)^{1/2}q^{k/2}\big)\nonumber\\ &\qquad\quad\qquad\times \frac{(a_2^2/q;q)_{k}(x_1^2;q)_k(x_2^2;q^2)_k}{(x_2X/q^{1/2},x_2/Xq^{1/2};q)_{k+1} (x_1x_2/Y;q^2)_{k}}\nonumber\\ &=1-\frac{(a_2^2/q;q)_{n}}{(x_2X/q^{1/2},x_2/Xq^{1/2};q)_{n} }\frac{(x_1^2;q)_n(x_2^2;q^2)_n} {(x_1x_2/Y;q^2)_n}\label{threethreeeq-eq-eq}\\ &-\frac{1-a_2^2/q}{1-x_1x_2/Y} \sum_{k=0}^{n-1} \big(x_1q^{k}-x_2Yq^k\big)\frac{\big(x_1-x_2q^{k}/Y\big)(a_2^2;q)_{k} (x_1^2;q)_k(x_2^2;q^2)_k}{(x_2X/q^{1/2},x_2/Xq^{1/2};q)_{k+1} (x_1x_2q^2/Y;q^2)_{k}}. \nonumber\end{aligned}$$ Observe that, when $a_2^2=x_2/x_1Y$, it holds $$(x_1-x_2q^{k}/Y)(a_2^2;q)_{k}=x_1(1-a_2^2)(a_2^2q;q)_{k}.$$ Therefore $$\begin{aligned} &a_2^2\big(X/q^{1/2}-x_2/a_2^2\big) \big(1/Xq^{1/2}-x_2/a_2^2\big)\sum_{k=0}^{n-1} q^k\frac{(a_2^2/q;q)_{k}(x_1^2;q)_k(x_2^2;q^2)_k}{(x_2X/q^{1/2},x_2/Xq^{1/2};q)_{k+1}(x_1^2a_2^2;q^2)_{k} }\nonumber\\ &=1-\frac{(a_2^2/q;q)_{n}}{(x_2X/q^{1/2},x_2/Xq^{1/2};q)_{n} }\frac{(x_1^2;q)_n(x_2^2;q^2)_n} {(x_1^2a_2^2;q^2)_n}\nonumber\\ &-\frac{(a_2^2/q;q)_2(x_1^2-x_2^2/a_2^2)}{1-x_1^2a_2^2} \sum_{k=0}^{n-1}q^{k}\frac{(a_2^2q;q)_{k}(x_1^2;q)_k(x_2^2;q^2)_k} {(x_2X/q^{1/2},x_2/Xq^{1/2};q)_{k+1}(x_1^2a_2^2q^2;q^2)_{k} }. 
\nonumber\end{aligned}$$ Next, after making the parametric replacements $$\begin{aligned} (a_2^2, x_1^2, x_2^2,X)\to( aq, b,c, dq^{1/2}/c^{1/2})\end{aligned}$$ and reformulating the result in terms of $\mathbf{\Xi}_n(a,b,c,d)$, we achieve [\[xrma-1\]](#xrma-1){reference-type="eqref" reference="xrma-1"} at once. ------------------------------------------------------------------------ It is somewhat surprising that, by using the contiguous relation [\[xrma-1\]](#xrma-1){reference-type="eqref" reference="xrma-1"}, we may establish the following new and peculiar $q$-identity. **Corollary 17**. *$$\begin{aligned} &\frac{(aq;q)_{\infty}(b/d;q^2)_\infty}{(q^2,aq^2/d,abq;q^2)_\infty} \sum_{k=0}^{\infty}q^k \frac{(b;q)_k(adq;q^2)_k} {(dq,aq;q)_{k}}\label{YYY-ZZZ-WWW} \\ &=\bigg(1-\frac{1}{d}\bigg)\sum_{k=0}^{\infty}q^{2k} \frac{(a,aq,b/d;q^2)_k}{(q^2,aq^2/d,abq;q^2)_k} +\frac{1}{d} \frac{(b;q)_\infty(adq;q^2)_{\infty}}{(dq;q)_\infty(abq;q^2)_\infty} \sum_{k=0}^{\infty}q^{2k} \frac{(b/d;q^2)_k}{(q^2,aq^2/d;q^2)_k}.\nonumber\end{aligned}$$* Observe that [\[xrma-1\]](#xrma-1){reference-type="eqref" reference="xrma-1"} can be restated in the form $$\begin{aligned} \mathbf{\Xi}_n(a,b,c,d)&=C(a,b,c,d) \sigma\big(\mathbf{\Xi}_n(a,b,c,d)\big)+D_n(a,b,c,d),\label{xrma-1-11}\end{aligned}$$ where the operator $$\sigma(a,b,c,d):=(aq^2,b,c,d)$$ and the coefficients $$\begin{aligned} C(a,b,c,d)&:= \frac{d(a;q)_2(c-abq)}{\big(c-adq\big) \big(d-a\big)(1-abq)},\\ D_n(a,b,c,d)&:=\frac{aq}{\big(c-adq\big) \big(1-a/d\big)}\left\{1- \frac{(a,b;q)_n(c;q^2)_n} {(d,c/dq;q)_{n} (abq;q^2)_n}\right\}.\end{aligned}$$ Hence, [\[YYY-ZZZ-WWW\]](#YYY-ZZZ-WWW){reference-type="eqref" reference="YYY-ZZZ-WWW"} comes out by applying Lemma [Lemma 9](#xxx-xxx){reference-type="ref" reference="xxx-xxx"} to [\[xrma-1-11\]](#xrma-1-11){reference-type="eqref" reference="xrma-1-11"}, taking the limit as $m,n\to \infty$, multiplying both sides of the resulting identity by $(1-adq/c)(1-a/d)$, and then
letting $c=adq$. ------------------------------------------------------------------------ Setting $d=1$ in [\[YYY-ZZZ-WWW\]](#YYY-ZZZ-WWW){reference-type="eqref" reference="YYY-ZZZ-WWW"}, we obtain at once **Proposition 18**. *$$\begin{aligned} &\sum_{k=0}^{\infty}q^k \frac{(b;q)_k(aq;q^2)_k} {(q,aq;q)_{k}}=(-q;q)_\infty(bq;q^2)_\infty\sum_{k=0}^{\infty}q^{2k} \frac{(b;q^2)_k}{(q^2,aq^2;q^2)_k}.\label{YYY-ZZZ-WWW-new}\end{aligned}$$* It is very interesting that [\[YYY-ZZZ-WWW-new\]](#YYY-ZZZ-WWW-new){reference-type="eqref" reference="YYY-ZZZ-WWW-new"} implies **Proposition 19**. *For any complex number $t$ with $|t|<1$, we have $$\begin{aligned} \sum_{k=0}^{\infty}t^{k} \frac{(aq;q^2)_k} {(q,aq;q)_{k}}= (-t;q)_\infty\sum_{k=0}^{\infty} \frac{t^{2k}}{(q^2,aq^2;q^2)_k}.\label{finalresult}\end{aligned}$$* Due to space limitations, we leave the full derivation of [\[finalresult\]](#finalresult){reference-type="eqref" reference="finalresult"} to the interested reader. ## Applications of Theorem [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"} {#applications-of-theorem-secondeq-second} First of all, we need to show **Theorem 20**. *Write $\mathcal{E}_0=\big(a_1-a_2 a_4/a_3\big)\big(a_1-a_2 a_3/a_4\big)$.
Then, for any integer $n\geq 0$, we have $$\begin{aligned} &\mathcal{E}_0 \sum_{k=0}^{n-1}q^{k-1} \frac{\big(a_1^2 / q, a_2^2 / q ; q\big)_k\big(x_1^2 ; q^2\big)_k}{\big(K / a_3^2 q, K / a_4^2 q ; q\big)_{k+1}\big(x_1 x_3 x_4 / x_2 ; q^2\big)_k} \nonumber\\ & =1-\frac{\big(a_1^2 / q, a_2^2 / q ; q\big)_n\big(x_1^2 ; q^2\big)_n}{\big(K / a_3^2 q, K / a_4^2 q ; q\big)_n\big(x_1 x_3 x_4 / x_2 ; q^2\big)_n} \label{eq16}\\ & - x_1(x_1-x_3x_4/x_2)\sum_{k=0}^{n-1} q^{2 k}\frac{\big(a_1^2 / q, a_2^2 / q ; q\big)_{k+1}\big(x_1^2 ; q^2\big)_k}{\big(K / a_3^2 q, K / a_4^2 q ; q\big)_{k+1}\big(x_1 x_3 x_4 / x_2 ; q^2\big)_{k+1}} .\nonumber\end{aligned}$$* It follows from Theorem [Theorem 5](#secondeq-second){reference-type="ref" reference="secondeq-second"} by setting $\big(p_i, q_i\big) \rightarrow (q, q^2)$ for $1\leq i\leq 4$. ------------------------------------------------------------------------ One of the most important cases covered by Theorem [Theorem 20](#secondeq){reference-type="ref" reference="secondeq"} is as follows: **Corollary 21**. *Let $K=a_1a_2a_3a_4$ as before. Then it holds $$\begin{aligned} &\frac{q\big(a_1 a_3-a_2 a_4\big)\big(a_1 a_4-a_2 a_3\big)}{(a_3q-a_1a_2a_4)(a_4q-a_1a_2a_3)}~{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 / q, a_2^2 / q,x_1,-x_1\\ K / a_3^2, K / a_4^2,-q \end{matrix};q,q\right]\nonumber\\ &=-\frac{\big(a_1^2 / q, a_2^2 / q,x_1,-x_1 ; q\big)_\infty}{\big(q,-q,K / a_3^2 q, K / a_4^2 q ; q\big)_\infty}+{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 / q, a_2^2 / q,x_1/q,-x_1/q\\ K / a_3^2q, K / a_4^2q,-q \end{matrix};q,q^2\right].\label{eq17}\end{aligned}$$* It is the direct consequence of [\[eq16\]](#eq16){reference-type="eqref" reference="eq16"} by setting $x_1x_3x_4/x_2=q^2$ and taking the limit as $n\to \infty$. 
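The ${}_{4}\phi_{3}$ series appearing in this subsection, and the ${}_{2}\phi_{1}$ series used below, are easy to explore numerically. A minimal evaluator for ${}_{r}\phi_{s}$ with $r=s+1$ (our own illustration, not part of the proofs), sanity-checked against the classical $q$-Gauss summation [@10 (II.8)], reads:

```python
def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n."""
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def phi(upper, lower, q, z, terms=100):
    """Basic hypergeometric series r phi s for r = s + 1 (so no extra
    (-1)^k q^(k choose 2) factor):
        sum_k prod(a;q)_k / [ (q;q)_k prod(b;q)_k ] * z^k."""
    total = 0.0
    for k in range(terms):
        num = 1.0
        for a in upper:
            num *= qpoch(a, q, k)
        den = qpoch(q, q, k)
        for b in lower:
            den *= qpoch(b, q, k)
        total += num / den * z**k
    return total

# sanity check: the q-Gauss summation
#   2phi1(a, b; c; q, c/(a b)) = (c/a, c/b; q)_inf / ((c, c/(a b); q)_inf)
q, a, b, c = 0.3, 4.0, 5.0, 0.7
lhs = phi([a, b], [c], q, c/(a*b))
rhs = (qpoch(c/a, q, 200) * qpoch(c/b, q, 200)
       / (qpoch(c, q, 200) * qpoch(c/(a*b), q, 200)))
assert abs(lhs - rhs) < 1e-10
```

With this helper, either side of a transformation such as [\[eq17\]](#eq17){reference-type="eqref" reference="eq17"} can be evaluated for concrete admissible parameters.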
------------------------------------------------------------------------ Another important case contained by [\[eq16\]](#eq16){reference-type="eqref" reference="eq16"} is the following transformation for $_{4}\phi_{3}$ series, which seems very different from Singh's transformation [@10 (III.21)]. **Corollary 22**. *$$\begin{aligned} &{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 / q, a_2^2 / q,x_1,-x_1\\ (a_1 a_2)^2/q,q,-q \end{matrix};q,q\right]\label{eq1818}\\ =&\frac{x_1^2-q^2}{1-q^2}~{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 , a_2^2,x_1,-x_1\\ (a_1 a_2)^2/q,q^2,-q^2 \end{matrix};q,q^2\right]+\frac{\big(a_1^2, a_2^2,x_1,-x_1 ; q\big)_\infty}{\big(q, (a_1 a_2)^2/q,q,-q; q\big)_\infty}.\nonumber\end{aligned}$$* Actually, [\[eq1818\]](#eq1818){reference-type="eqref" reference="eq1818"} is a special case of [\[eq17\]](#eq17){reference-type="eqref" reference="eq17"} under the assumption that $a_1a_2a_4=a_3q$. To see this clear, we reformulate [\[eq17\]](#eq17){reference-type="eqref" reference="eq17"} as the equivalent form $$\begin{aligned} &{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 / q, a_2^2 / q,x_1,-x_1\\ K / a_3^2, K / a_4^2,-q \end{matrix};q,q\right]\nonumber\\ &\qquad=-\frac{\big(a_1^2 / q, a_2^2 / q,x_1,-x_1 ; q\big)_\infty}{\big(q,-q,K / a_3^2 q, K / a_4^2 q ; q\big)_\infty}~ \frac{(a_3q-a_1a_2a_4)(a_4q-a_1a_2a_3)}{q\big(a_1 a_3-a_2 a_4\big)\big(a_1 a_4-a_2 a_3\big)}\\ &\qquad+\frac{(a_3q-a_1a_2a_4)(a_4q-a_1a_2a_3)}{q\big(a_1 a_3-a_2 a_4\big)\big(a_1 a_4-a_2 a_3\big)}\bigg(1+\sum_{k=1}^\infty q^{2k}\frac{(a_1^2 / q, a_2^2 / q,x_1/q,-x_1/q;q)_{k}}{(q,K / a_3^2q, K / a_4^2q,-q;q)_k}\bigg).\nonumber\end{aligned}$$ After a bit routine simplification, it equals $$\begin{aligned} &{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 / q, a_2^2 / q,x_1,-x_1\\ K / a_3^2, K / a_4^2,-q \end{matrix};q,q\right]=-\frac{\big(a_1^2 / q, a_2^2 / q,x_1,-x_1 ; q\big)_\infty}{\big(q,-q,K / a_3^2, K / a_4^2 ; q\big)_\infty}~ \frac{a_3a_4q}{\big(a_1 a_3-a_2 a_4\big)\big(a_1 a_4-a_2 a_3\big)}\\ 
&\qquad\qquad\qquad\qquad\qquad\qquad+\frac{(a_3q-a_1a_2a_4)(a_4q-a_1a_2a_3)}{q\big(a_1 a_3-a_2 a_4\big)\big(a_1 a_4-a_2 a_3\big)}\nonumber\\ &\qquad\qquad\qquad+\frac{a_3a_4q^3(1-a_1^2/q)(1-a_2^2/q)(1-x_1^2/q^2)}{(1-q^2)\big(a_1 a_3-a_2 a_4\big)\big(a_1 a_4-a_2 a_3\big)}\sum_{k=0}^\infty q^{2k}\frac{(a_1^2, a_2^2,x_1,-x_1;q)_{k}}{(K / a_3^2, K / a_4^2,q^2,-q^2;q)_k}.\nonumber\end{aligned}$$ Taking the assumption $a_1a_2a_4=a_3q$ into account, we thereby obtain $$\begin{aligned} &{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 / q, a_2^2 / q,x_1,-x_1\\ (a_3 / a_4)^2q,q,-q \end{matrix};q,q\right]\\ =&\frac{\big(a_1^2, a_2^2,x_1,-x_1 ; q\big)_\infty}{\big(q,(a_3 / a_4)^2q,q,-q; q\big)_\infty}+\frac{x_1^2-q^2}{1-q^2}~{}_{4}\phi_{3}\left[\begin{matrix}a_1^2 , a_2^2,x_1,-x_1\\ (a_3 / a_4)^2q,q^2,-q^2 \end{matrix};q,q^2\right].\nonumber\end{aligned}$$ Replacing $a_3/a_4$ with $a_1a_2/q$, we complete the proof of Transformation [\[eq1818\]](#eq1818){reference-type="eqref" reference="eq1818"}. ------------------------------------------------------------------------ ## Applications of Theorem [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} {#applications-of-theorem-secondeq-third} We conclude this section with a direct application of Theorem [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"}. For this, we introduce $$\begin{aligned} \mathbf{\Pi}_n(a_1,x_1,y,z):=\sum_{k=0}^{n-1} q^{k} \frac{(a_1^2/q,x_1^2;q)_k}{ (a_1y,x_1z;q)_{k}}\label{finalsum}\end{aligned}$$ provided that both $a_1y$ and $x_1z$ are not of form $q^{-m}$, $m$ being nonnegative integer. **Theorem 23**. *Let $\mathbf{\Pi}_n(a_1,x_1,y,z)$ be given by [\[finalsum\]](#finalsum){reference-type="eqref" reference="finalsum"}. 
Then there holds the $q$-contiguous relation $$\begin{aligned} \mathbf{\Pi}_n(a_1,x_1,y,z)&=\frac{x_1(x_1-z)(a_1^2-q)}{a_1(a_1-y)(1-x_1z)} \mathbf{\Pi}_n(a_1q^{1/2},x_1,y/q^{1/2},zq)\nonumber\\ &+\frac{q-a_1y}{a_1(a_1-y)} \left\{1-\frac{(a_1^2/q,x_1^2;q)_n}{(a_1y/q,x_1z;q)_n}\right\} \label{eq317317}\end{aligned}$$ and the transformation $$\begin{aligned} {}_2 \phi_1\left[\begin{array}{c} a_1^2 / q, x_1^2 \\ a_1 y \end{array} ; q, q\right]&+\frac{q-a_1y}{a_1(y-a_1)}{}_2 \phi_1\left[\begin{array}{c} a_1^2/q, x_1^2/q \\ a_1 y/q \end{array} ; q, q\right]=\frac{q}{a_1(y-a_1)}\frac{\big(a_1^2 / q,x_1^2; q\big)_{\infty}} {(a_1 y,q; q)_{\infty}}.\label{xrma-1-xrma}\end{aligned}$$* To prove [\[eq317317\]](#eq317317){reference-type="eqref" reference="eq317317"}, it is sufficient to set in [\[eq116\]](#eq116){reference-type="eqref" reference="eq116"} of Theorem [Theorem 6](#secondeq-third){reference-type="ref" reference="secondeq-third"} that for $1\leq i\leq 4$, $p_i=q_i=q,$ $a_3a_4=ya_2~~\mbox{and}~~x_3x_4=zx_2.$ All these together reduces [\[eq317317\]](#eq317317){reference-type="eqref" reference="eq317317"} to $$\begin{aligned} &\frac{a_1D_{0}(\bar{a},\bar{p})}{qa_2(1-a_1y/q)} \sum_{k=0}^{n-1} q^{k} \frac{(a_1^2/q,x_1^2;q)_k}{ (a_1y,x_1z;q)_{k}} \label{eq317} \\ &=1- \frac{(a_1^2/q,x_1^2;q)_n}{ (a_1y/q,x_1z;q)_n} -\frac{x_1D_{0}(\bar{x},\bar{q})(1-a_1^2/q)}{x_2(1-a_1y/q)(1-x_1z)}\sum_{k=0}^{n-1} q^k\frac{(a_1^2,x_1^2;q)_k}{(a_1y,x_1zq;q)_{k}}. \nonumber\end{aligned}$$ Upon substituting $D_{0}(\bar{a},\bar{p})$ and $D_{0}(\bar{x},\bar{q})$ in [\[eq317\]](#eq317){reference-type="eqref" reference="eq317"} and noting that $a_3a_4=ya_2$, we get $$\begin{aligned} &\frac{a_1(a_1-y)}{q-a_1y}\sum_{k=0}^{n-1} q^{k} \frac{(a_1^2/q,x_1^2;q)_k}{ (a_1y,x_1z;q)_{k}}\\ &\quad=1- \frac{(a_1^2/q,x_1^2;q)_n}{ (a_1y/q,x_1z;q)_n}-\frac{x_1(x_1-z)(q-a_1^2)}{(q-a_1y)(1-x_1z)}\sum_{k=0}^{n-1} q^k\frac{(a_1^2,x_1^2;q)_k}{(a_1y,x_1zq;q)_{k}}. 
\nonumber\end{aligned}$$ Finally, expressed in terms of $\mathbf{\Pi}_n(a_1,x_1,y,z)$, it takes the form of [\[eq317317\]](#eq317317){reference-type="eqref" reference="eq317317"}. As it turns out, [\[xrma-1-xrma\]](#xrma-1-xrma){reference-type="eqref" reference="xrma-1-xrma"} follows from [\[eq317317\]](#eq317317){reference-type="eqref" reference="eq317317"} after letting $x_1z=q$ and then taking $n\to \infty$. ------------------------------------------------------------------------ We believe that [\[xrma-1-xrma\]](#xrma-1-xrma){reference-type="eqref" reference="xrma-1-xrma"} is new to the literature, judging by a comparison with [@dlmf 17.6.E13]. Going further, by appealing to the contiguous relation [\[eq317317\]](#eq317317){reference-type="eqref" reference="eq317317"}, we may establish **Corollary 24**. *$$\begin{aligned} \sum_{k=0}^{\infty} q^{k} \frac{(a_1^2/q,x_1^2;q)_k}{ (a_1y,x_1z;q)_{k}}&=\bigg(\frac{x_1^2q}{a_1y}\bigg)^m \frac{(z/x_1,a_1^2/q;q)_m}{(a_1/y,x_1z;q)_m} \sum_{k=0}^{\infty} q^{k} \frac{(a_1^2q^{m-1},x_1^2;q)_k}{ (a_1y,x_1zq^{m};q)_{k}}\nonumber\\ &+\frac{q-a_1y}{a_1(a_1-y)}\sum_{k=0}^{m-1} \bigg(\frac{x_1^2q}{a_1y}\bigg)^k \frac{(z/x_1,a_1^2/q;q)_k}{(a_1q/y,x_1z;q)_k}\label{eq317317-new}\\ &-\frac{q}{a_1(a_1-y)}\frac{(x_1^2,a_1^2/q;q)_\infty}{(a_1y,x_1z;q)_\infty} \sum_{k=0}^{m-1} \bigg(\frac{x_1^2q}{a_1y}\bigg)^k\frac{(z/x_1;q)_k}{(a_1q/y;q)_k}.\nonumber\end{aligned}$$* Actually, [\[eq317317\]](#eq317317){reference-type="eqref" reference="eq317317"} can be restated as the operator form $$\begin{aligned} \mathbf{\Pi}_n(a_1,x_1,y,z)&=C(a_1,x_1,y,z) \sigma\big(\mathbf{\Pi}_n(a_1,x_1,y,z)\big)+D_n(a_1,x_1,y,z),\label{xrma-1-11-111}\end{aligned}$$ where the operator $$\sigma(a_1,x_1,y,z):=(a_1q^{1/2},x_1,y/q^{1/2},zq)$$ and the coefficients $$\begin{aligned} C(a_1,x_1,y,z)&:= \frac{x_1(x_1-z)(a_1^2-q)}{a_1(a_1-y)(1-x_1z)},\\ D_n(a_1,x_1,y,z)&:=\frac{q-a_1y}{a_1(a_1-y)} \left\{1-\frac{(a_1^2/q,x_1^2;q)_n}{(a_1y/q,x_1z;q)_n}\right\}.\end{aligned}$$ Hence,
[\[eq317317-new\]](#eq317317-new){reference-type="eqref" reference="eq317317-new"} comes out by applying Lemma [Lemma 9](#xxx-xxx){reference-type="ref" reference="xxx-xxx"} to [\[xrma-1-11-111\]](#xrma-1-11-111){reference-type="eqref" reference="xrma-1-11-111"} and then taking the limit as $n\to \infty$. ------------------------------------------------------------------------ **Corollary 25**. *For $|x_1^2q/a_1y|<1$, it holds $$\begin{aligned} \sum_{k=0}^{\infty} q^{k} \frac{(a_1^2/q,x_1^2;q)_k}{ (a_1y,x_1z;q)_{k}}&=\frac{q-a_1y}{a_1(a_1-y)}\sum_{k=0}^{\infty} \bigg(\frac{x_1^2q}{a_1y}\bigg)^k\frac{(z/x_1,a_1^2/q;q)_k} {(a_1q/y,x_1z;q)_k}\label{eq317317-new-new}\\ &-\frac{q}{a_1(a_1-y)}\frac{(x_1^2,a_1^2/q;q)_\infty}{(a_1y,x_1z;q)_\infty}\sum_{k=0}^{\infty} \bigg(\frac{x_1^2q}{a_1y}\bigg)^k\frac{(z/x_1;q)_k}{(a_1q/y;q)_k} .\nonumber\end{aligned}$$ Further, for $x_1z=q$, it holds $$\begin{aligned} {}_2 \phi_1\left[\begin{array}{c} a_1^2 / q, x_1^2 \\ a_1 y \end{array} ; q, q\right]&+\frac{q}{a_1(a_1-y)}\frac{\big(a_1^2 / q,x_1^2; q\big)_{\infty}} {(a_1 y,q; q)_{\infty}}{}_2 \phi_1\left[\begin{array}{c} q, q/x_1^2 \\ a_1q/y \end{array}; q, \frac{x_1^2q}{a_1y}\right]\nonumber\\ &=\frac{(q/a_1y,a_1x_1^2/y;q)_\infty}{(a_1/y,x_1^2q/a_1y;q)_\infty}. \label{xrma-1-xrma5}\end{aligned}$$* It is clear that [\[eq317317-new-new\]](#eq317317-new-new){reference-type="eqref" reference="eq317317-new-new"} is the limiting case of [\[eq317317-new\]](#eq317317-new){reference-type="eqref" reference="eq317317-new"} as $m\to\infty$. Note that $\lim_{m\to\infty}\big(x_1^2q/a_1y\big)^m=0$ for $|x_1^2q/a_1y|<1$. 
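As an independent numerical check, the contiguous relation [\[eq317317\]](#eq317317){reference-type="eqref" reference="eq317317"} can be tested directly; the sketch below (our own illustration, with arbitrary generic parameter values) verifies it for several $n$:

```python
import math

def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n."""
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def Pi(n, a1, x1, y, z, q):
    """Pi_n(a1,x1,y,z) = sum_{k<n} q^k (a1^2/q;q)_k (x1^2;q)_k
                          / ( (a1 y;q)_k (x1 z;q)_k )."""
    return sum(q**k * qpoch(a1**2/q, q, k) * qpoch(x1**2, q, k)
               / (qpoch(a1*y, q, k) * qpoch(x1*z, q, k)) for k in range(n))

q, a1, x1, y, z = 0.49, 0.8, 0.6, 1.1, 0.7
r = math.sqrt(q)  # the shift a1 -> a1 q^{1/2}, y -> y / q^{1/2}
for n in range(7):
    lhs = Pi(n, a1, x1, y, z, q)
    C = x1*(x1 - z)*(a1**2 - q) / (a1*(a1 - y)*(1 - x1*z))
    D = (q - a1*y)/(a1*(a1 - y)) * (
        1 - qpoch(a1**2/q, q, n)*qpoch(x1**2, q, n)
            / (qpoch(a1*y/q, q, n)*qpoch(x1*z, q, n)))
    rhs = C * Pi(n, a1*r, x1, y/r, z*q, q) + D
    assert abs(lhs - rhs) < 1e-10
```

Note that the product $a_1y$ is invariant under $\sigma$, which is why only $z$ shifts inside the right-hand sum.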
As a result, for $x_1z=q$, we can reformulate [\[eq317317-new-new\]](#eq317317-new-new){reference-type="eqref" reference="eq317317-new-new"} as $$\begin{aligned} {}_2 \phi_1\left[\begin{array}{c} a_1^2 / q, x_1^2 \\ a_1 y \end{array} ; q, q\right]&+\frac{q-a_1y}{a_1(y-a_1)}{}_2 \phi_1\left[\begin{array}{c} a_1^2/q, q/x_1^2\\ a_1q/y \end{array} ; q, \frac{x_1^2q}{a_1y}\right]\nonumber\\ &=\frac{q}{a_1(y-a_1)}\frac{\big(a_1^2 / q,x_1^2; q\big)_{\infty}} {(a_1 y,q; q)_{\infty}}{}_2 \phi_1\left[\begin{array}{c} q, q/x_1^2 \\ a_1q/y \end{array}; q, \frac{x_1^2q}{a_1y}\right].\label{xrma-1-xrma3}\end{aligned}$$ In this case, by the $q$-Gauss summation formula [@10 (II. 8)], we find $${}_2 \phi_1\left[\begin{array}{c} a_1^2/q, q/x_1^2\\ a_1q/y \end{array} ; q, \frac{x_1^2q}{a_1y}\right]= \frac{(q^2/a_1y,a_1x_1^2/y;q)_\infty}{(a_1q/y,x_1^2q/a_1y;q)_\infty}.$$ Substituting this into [\[xrma-1-xrma3\]](#xrma-1-xrma3){reference-type="eqref" reference="xrma-1-xrma3"}, we have $$\begin{aligned} {}_2 \phi_1\left[\begin{array}{c} a_1^2 / q, x_1^2 \\ a_1 y \end{array} ; q, q\right]&- \frac{(q/a_1y,a_1x_1^2/y;q)_\infty}{(a_1/y,x_1^2q/a_1y;q)_\infty}\nonumber\\ &=\frac{q}{a_1(y-a_1)}\frac{\big(a_1^2 / q,x_1^2; q\big)_{\infty}} {(a_1 y,q; q)_{\infty}}{}_2 \phi_1\left[\begin{array}{c} q, q/x_1^2 \\ a_1q/y \end{array}; q, \frac{x_1^2q}{a_1y}\right].\end{aligned}$$ This amounts to [\[xrma-1-xrma5\]](#xrma-1-xrma5){reference-type="eqref" reference="xrma-1-xrma5"}. ------------------------------------------------------------------------ # Concluding remarks A quick glance over all that we have done in the foregoing sections together with [@xuxrma] inspires us to wonder what will Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"} subject to two conditions $$\begin{aligned} \left\{\begin{matrix}~a_1^2/p_1=x_1x_2Y\\ ~a_2^2/p_2=x_1x_2/Y \end{matrix} \right. 
\quad\mbox{and}\quad \left\{\begin{matrix}~p_1=L_0/q_3\\ ~p_2=L_0/q_4 \end{matrix} \right.\qquad\mbox{become?}\end{aligned}$$ As a preliminary result, we have shown the following. **Theorem 26**. *With the same notation as in Theorem [Theorem 4](#firsteq){reference-type="ref" reference="firsteq"}, let $L=(p_1p_2p_3p_4)^{1/2}$ and assume further that $p_1p_2=q_1q_2$ and $a_1^2a_2^2=x_1^2x_2^2.$ Then, for any integer $n\geq 0$, it holds that $$\begin{aligned} \sum_{k=0}^{n-1}g(k)\frac{(x_1^2;q_1)_k(x_2^2;q_2)_k}{(a_1a_2X;L/p_3)_k(a_1a_2/X;L/p_4)_k }=\frac{(x_1^2;q_1)_n(x_2^2;q_2)_n}{ (a_1a_2X;L/p_3)_{n}(a_1a_2/X;L/p_4)_{n} }-1,\end{aligned}$$ where $$\begin{aligned} g(k):=(L/p_3)^k\frac{(a_1a_2X-x_1^2(q_1p_3/L)^k)(a_1a_2X-x_2^2(q_2p_3/L)^k)}{a_1a_2 (1-a_1a_2X(L/p_3)^{k})(X-a_1a_2(L/p_4)^k)}.\end{aligned}$$* More broadly, the same question about [\[pppppp\]](#pppppp){reference-type="eqref" reference="pppppp"} of Lemma [Lemma 3](#type-i-i){reference-type="ref" reference="type-i-i"} is also of interest, although its answer is likely to be considerably more complicated to work out. By contrast, any reduction of [\[oooooo-1\]](#oooooo-1){reference-type="eqref" reference="oooooo-1"} seems very likely to yield good $q$-contiguous relations for the truncated series involved. Further discussion of this topic will be given in our forthcoming paper. # Acknowledgement {#acknowledgement .unnumbered} This work was supported by the National Natural Science Foundation of China \[Grant No. 11971341\]. X. J. Chen and W. C. Chu, $q$-Analogues of five difficult hypergeometric evaluations, Turkish J. Math., **44** (2020) 791--800. X. J. Chen and W. C. Chu, Summation formulae for a class of terminating balanced $q$-series, J. Math. Anal. Appl., **451** (2017) 508--523. W. C. Chu, Bailey's very well-poised ${_6\psi_6}$-series identity, J. Combin. Theory Ser. A, **113** (2006) 966--979. W. C. Chu, Abel's lemma on summation by parts and basic hypergeometric series, Adv. Appl.
Math., **39** (2007) 490--514. W. C. Chu and C. Z. Jia, Abel's method on summation by parts and theta hypergeometric series, J. Combin. Theory Ser. A, **115** (2008) 815--844. W. C. Chu and C. Y. Wang, Abel's lemma on summation by parts and partial $q$-series transformations, Sci. China Ser. A, **52** (2009) 720--748. W. C. Chu and X. X. Wang, The modified Abel lemma on summation by parts and terminating hypergeometric series identities, Integral Transforms Spec. Funct., **20** (2009) 93--118. G. Gasper, Summation, transformation, and expansion formulas for bibasic series, Trans. Amer. Math. Soc., **312** (1989) 257--277. G. Gasper and M. Rahman, An indefinite bibasic summation formula and some quadratic, cubic, and quartic summation and transformation formulas, Canad. J. Math., **42** (1990) 1--27. G. Gasper and M. Rahman, Basic Hypergeometric Series (2nd edition), Cambridge University Press, Cambridge, 2004. K. Knopp, Theory and Application of Infinite Series, Dover Books on Mathematics, Dover Publications, 1990. T. H. Koornwinder, On the equivalence of two fundamental theta identities, Anal. Appl. (Singap.), **12** (2014) 711--725. F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark (editors), NIST Handbook of Mathematical Functions, Cambridge University Press, New York, NY, 2010. Print companion to \[DLMF\] or <http://dlmf.nist.gov/17.6.E13>. J. Wang, A new elliptic interpolation formula via the $(f,g)$-inversion, Proc. Amer. Math. Soc., **148** (2020) 3457--3471. J. N. Xu and X. R. Ma, General $q$-series transformations based on Abel's lemma on summation by parts and their applications, J. Difference Equ. Appl., submitted. <https://arxiv.org/abs/2307.07968>. J. N. Xu, Abel's Lemma on Summations by Parts and Hypergeometric Series Identities, Master Thesis, Nanjing University of Information Science and Technology, 2022.
--- abstract: | It is well-known that the Lebesgue integral generalises the Riemann integral. However, as is also well-known but less frequently well-explained, this generalisation alone is not the reason why the Lebesgue integral is important and needs to be a part of the arsenal of any mathematician, pure or applied. Those who understand the correct reasons for the importance of the Lebesgue integral realise there are at least two crucial differences between the Riemann and Lebesgue theories. One is the difference between the Dominated Convergence Theorem in the two theories, and another is the completeness of the normed vector spaces of integrable functions. Here topological interpretations are provided for the differences in the Dominated Convergence Theorems, and explicit counterexamples are given which illustrate the deficiencies of the Riemann integral. Also illustrated are the deleterious consequences of the defects in the Riemann integral on Fourier transform theory if one restricts to Riemann integrable functions. author: - "Andrew D. Lewis[^1]" date: 2008/03/04 title: "Should we fly in the Lebesgue-designed airplane? The correct defence of the Lebesgue integral[^2]" --- **Keywords.** Lebesgue integral, Riemann integral. **AMS Subject Classifications (2010).** 28-01 # Introduction The title of this paper is a reference to the well-known quote of the applied mathematician and engineer Richard W. Hamming (1915--1998): > Does anyone believe that the difference between the Lebesgue and Riemann integrals can have physical significance, and that whether say, an airplane would or would not fly could depend on this difference? If such were claimed, I should not care to fly in that plane. The statement by @RWH:80 is open to many interpretations, but the interpretation of @RWH:80 himself can be gleaned from [@RWH:80] and, particularly, [@RWH:98]; also see [@PJD:98] for a discussion of some of @RWH:80's views on mathematics and the "real world." 
Perhaps a fair summary of @RWH:98's views toward the Riemann and Lebesgue theories of integration is that the distinction between them is not apt to be seen in Nature. This seems about right to us. Unfortunately, however, this quote of @RWH:98's is often used in a confused manner that indicates the quoter's misunderstanding of the purpose and importance of the Lebesgue integral. Indeed, very often @RWH:98's quote is brought out as an excuse to disregard Lebesgue integration, the idea being that it is the product of some vapid pursuit of generality. Oxymoronically, this is often done simultaneously with the free use of results (like completeness of the $\mathsf{L}^p$-spaces) which rely crucially on the distinctions between the Riemann and Lebesgue integrals. That the value of the Lebesgue theory of integration (and all of the theories equivalent to or generalising it[^3]) may not be appreciated fully by non-mathematicians should not be too surprising: the Lebesgue theory is subtle. Moreover, it is definitely not the case that the importance of the Lebesgue theory over the Riemann theory is explained clearly in all texts on integration theory; in fact, the important distinctions are rarely explicitly stated, though they are almost always implicitly present. What is most discomforting, however, is that mathematicians themselves sometimes offer an *incorrect* defence of the Lebesgue theory. For example, it is not uncommon to see defences made along the lines of, "The class of functions that can be integrated using the Lebesgue theory is larger than that using the Riemann theory" [@MD/MI:02]. Sometimes, playing into the existing suspicions towards unnecessary generality, it is asserted that the mere fact that the Lebesgue theory generalises the Riemann theory is sufficient to explain its superiority. These sorts of defences of the Lebesgue theory are certainly factual. But they are also emphatically *not* the sorts of reasons why the Lebesgue theory is important. 
The functions that can be integrated using the Lebesgue theory, but which cannot be integrated using the Riemann theory, are not important; just try showing a Lebesgue integrable but not Riemann integrable function to someone who is interested in applications of mathematics, and see if they think the function is important. This certainly must at least partially underlie @RWH:98's motivation for his quote. The value of the Lebesgue theory over the Riemann theory is that it is superior, as a *theory of integration*. By this it is meant that there are theorems in the Lebesgue theory that are true and useful, but that are not true in the Riemann theory. Probably the most crucial such theorem is the powerful version of the Dominated Convergence Theorem that one has in the Lebesgue theory. This theorem is constantly used in the proof of many results that are important in applications. For example, the Dominated Convergence Theorem is used crucially in the proof of the completeness of $\mathsf{L}^p$-spaces. In turn, the completeness of these spaces is an essential part of why these spaces are useful in, for example, the theory of signals and systems that is taught to all Electrical Engineering undergraduates. For instance, in many texts on signals and systems one can find the statement that the Fourier transform is an isomorphism of $\mathsf{L}^2(\mathbb{R};\mathbb{C})$. This statement is one that needs a sort of justification that is (understandably) not often present in such texts. But its presence can at least be justified by its being correct. With only the Riemann theory of integration at one's disposal, the statement is simply not correct. We illustrate this in Section [3.3](#subsec:riemann-ccft){reference-type="ref" reference="subsec:riemann-ccft"}. In this paper we provide topological explanations for the differences between the Riemann and Lebesgue theories of integration. The intent is not to make these differences clear for non-mathematicians. 
Indeed, for non-mathematicians the contents of the preceding paragraphs, along with the statement that, "The Lebesgue theory of integration is to the Riemann theory of integration as the real numbers are to the rational numbers," (this is the content of our Example [\[eg:Riemann!Cauchy\]](#eg:Riemann!Cauchy){reference-type="ref" reference="eg:Riemann!Cauchy"}) seems about the best one can do. No, what we aim to do in this paper is clarify *for mathematicians* the reasons for the superiority of the Lebesgue theory. We do this by providing two theorems, both topological in nature, that are valid for the Lebesgue theory and providing counterexamples illustrating that they are not true for the Riemann theory. We also illustrate the consequences of the topological deficiencies of the Riemann integral by explicitly depicting the limitations of the $\mathsf{L}^2$-Fourier transform with the Riemann integral. One of the contributions in this paper is that we give the *correct* counterexamples. All too often one sees counterexamples that illustrate *some* point, but not always the one that one wishes to make. The core of what we say here exists in the literature in one form or another, although we have not seen in the literature counterexamples that illustrate what we illustrate in Examples [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"} and [\[eg:Riemann!Cauchy\]](#eg:Riemann!Cauchy){reference-type="ref" reference="eg:Riemann!Cauchy"}. The principal objective here is to organise the results and examples in an explicit and compelling way. In the event that the reader is consulting this paper in a panic just prior to boarding an airplane, let us answer the question posed in the title of the paper. The answer is, "The question is meaningless as the distinctions between the Riemann and Lebesgue integrals do not, and should not be thought to, contribute to such worldly matters as aircraft design." 
However, the salient point is that this is not a valid criticism of the Lebesgue integral. What follows is, we hope, a valid defence of the Lebesgue integral. # Spaces of functions {#sec:LR-topologies} To keep things simple and to highlight the important parts of our presentation, in this section and in most of the rest of the paper we consider $\mathbb{R}$-valued functions defined on $I=[0,1]\subseteq\mathbb{R}$. Extensions to more general settings are performed mostly by simple changes of notation. The Lebesgue measure on $[0,1]$ is denoted by $\lambda$. In order to distinguish the Riemann and Lebesgue integrals we denote them by $$\int_0^1f(x)\,\mathrm{d}x,\qquad\int_If\,\mathrm{d}\lambda,$$ respectively. In order to make our statements as strong as possible, by the Riemann integral we mean the improper Riemann integral to allow for unbounded functions [@JEM/MJH:93 Section 8.5]. ## Normed vector spaces of integrable functions {#subsec:nvs-integrable} We use slightly unconventional notation that is internally self-consistent and convenient for our purposes here. Let us first provide the large vector spaces whose subspaces are of interest. By $\mathbb{R}^{[0,1]}$ we denote the set of all $\mathbb{R}$-valued functions on $[0,1]$. This is also the product of $\textup{card}([0,1])$ copies of $\mathbb{R}$, and we shall alternatively think of elements of $\mathbb{R}^{[0,1]}$ as being functions or elements of a product of sets, as is convenient for our purposes. 
We consider the standard $\mathbb{R}$-vector space structure on $\mathbb{R}^{[0,1]}$: $$(f+g)(x)=f(x)+g(x),\quad(af)(x)=a(f(x)),\qquad f,g\in\mathbb{R}^{[0,1]},\ a\in\mathbb{R}.$$ In $\mathbb{R}^{[0,1]}$ consider the subspace $$Z([0,1];\mathbb{R})=\{f\in\mathbb{R}^{[0,1]}\;|\enspace\lambda(\{x\in[0,1]\;|\enspace f(x)\not=0\})=0\}.$$ Then denote $\hat{\mathbb{R}}^{[0,1]}=\mathbb{R}^{[0,1]}/Z([0,1];\mathbb{R})$; this is then the set of equivalence classes of functions agreeing almost everywhere. We shall denote by $[f]=f+Z([0,1];\mathbb{R})$ the equivalence class of $f\in\mathbb{R}^{[0,1]}$. Now let us consider subspaces of $\mathbb{R}^{[0,1]}$ and $\hat{\mathbb{R}}^{[0,1]}$ consisting of integrable functions. 
Let us denote by $$\mathsf{R}^1([0,1];\mathbb{R})=\{f\colon[0,1]\rightarrow\mathbb{R}\;|\enspace f\ \textrm{is Riemann integrable}\}.$$ On $\mathsf{R}^1([0,1];\mathbb{R})$ define a seminorm $\lVert\cdot\rVert_1$ by $$\lVert f\rVert_1=\int_0^1\lvert f(x)\rvert\,\mathrm{d}x,$$ and denote $$\mathsf{R}_0([0,1];\mathbb{R})=\{f\in\mathsf{R}^1([0,1];\mathbb{R})\;|\enspace\lVert f\rVert_1=0\}.$$ Then we define $$\hat{\mathsf{R}}^1([0,1];\mathbb{R})=\mathsf{R}^1([0,1];\mathbb{R})/\mathsf{R}_0([0,1];\mathbb{R}),$$ and note that this $\mathbb{R}$-vector space is then equipped with the norm $\lVert[f]\rVert_1=\lVert f\rVert_1$, accepting the slight abuse of notation of using $\lVert\cdot\rVert_1$ in different contexts. 
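The quotient by $\mathsf{R}_0([0,1];\mathbb{R})$ is genuinely needed: $\lVert\cdot\rVert_1$ is only a seminorm on $\mathsf{R}^1([0,1];\mathbb{R})$, since, for example, the indicator function of a single point is Riemann integrable with integral zero without being the zero function. A small Python sketch (our own illustration, not from the paper) exhibits this by computing upper Darboux sums:

```python
def upper_darboux_point_indicator(c, n):
    """Upper Darboux sum, over the uniform partition of [0, 1] into n
    subintervals, of the indicator function of the single point c: the
    supremum is 1 exactly on the subintervals containing c, 0 elsewhere."""
    total = 0.0
    for i in range(n):
        a, b = i / n, (i + 1) / n
        if a <= c <= b:  # this subinterval contains c, so sup f = 1 on it
            total += b - a
    return total

# The lower Darboux sums are identically 0, and the upper sums shrink like
# 1/n, so f is Riemann integrable with integral 0 although f is not zero:
# f lies in R_0([0,1]; R) and witnesses that ||.||_1 is only a seminorm.
for n in (10, 100, 1000):
    print(n, upper_darboux_point_indicator(0.5, n))
```

With $c=1/2$ and even $n$, the point $c$ is a partition point, so exactly two subintervals contribute and the upper sum is $2/n$; in every case the upper sums tend to $0$.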
The preceding constructions can be carried out, replacing "$\mathsf{R}$" with "$\mathsf{L}$" and replacing the Riemann integral with the Lebesgue integral, to arrive at the seminormed vector space $$\mathsf{L}^1([0,1];\mathbb{R})=\{f\colon[0,1]\rightarrow\mathbb{R}\;|\enspace f\ \textrm{is Lebesgue integrable}\}$$ with the seminorm $$\lVert f\rVert_1=\int_I\lvert f\rvert\,\mathrm{d}\lambda,$$ the subspace $$\mathsf{L}_0([0,1];\mathbb{R})=\{f\in\mathsf{L}^1([0,1];\mathbb{R})\;|\enspace\lVert f\rVert_1=0\},$$ and the normed vector space $$\hat{\mathsf{L}}^1([0,1];\mathbb{R})=\mathsf{L}^1([0,1];\mathbb{R})/\mathsf{L}_0([0,1];\mathbb{R}).$$ We denote the norm on $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$ by $\lVert\cdot\rVert_1$, this not being too serious an abuse of notation since $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$ is a subspace of $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$ with the restriction of the norm on $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$ to $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$ agreeing with the norm on $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$. 
This is a consequence of the well-known fact that the Lebesgue integral generalises the Riemann integral [@DLC:13 Theorem 2.5.4]. During the course of the development of the Lebesgue theory of integration one shows that $$\mathsf{L}_0([0,1];\mathbb{R})=Z([0,1];\mathbb{R})$$ [e.g., @DLC:13 Corollary 2.3.11]. The corresponding assertion is not true for the Riemann theory. [\[eg:Qcharfunc\]]{#eg:Qcharfunc label="eg:Qcharfunc"} Let us denote by $F\in\mathbb{R}^{[0,1]}$ the characteristic function of $\mathbb{Q}\cap[0,1]$. This is perhaps the simplest and most commonly seen example of a function that is Lebesgue integrable but not Riemann integrable [@DLC:13 Example 2.5.4]. Thus $F\not\in\mathsf{R}_0([0,1];\mathbb{R})$. However, $F\in Z([0,1];\mathbb{R})$. While the preceding example is often used as an example of a function that is not Riemann integrable but is Lebesgue integrable, one needs to be careful about exaggerating the importance, even mathematically, of this example. In Examples [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"} and [\[eg:Riemann!Cauchy\]](#eg:Riemann!Cauchy){reference-type="ref" reference="eg:Riemann!Cauchy"} below we shall see that this example is not sufficient for demonstrating some of the more important deficiencies of the Riemann integral. ## Pointwise convergence topologies For $x\in[0,1]$ let us denote by $p_x\colon\mathbb{R}^{[0,1]}\rightarrow\mathbb{R}_{\ge 0}$ the seminorm defined by $p_x(f)=\lvert f(x)\rvert$. 
The family $(p_x)_{x\in[0,1]}$ of seminorms on $\mathbb{R}^{[0,1]}$ defines a locally convex topology. A basis of open sets for this topology is given by products of the form $\prod_{x\in[0,1]}U_x$ where $U_x\subseteq\mathbb{R}$ is open and where $U_x=\mathbb{R}$ for all but finitely many $x\in[0,1]$. A sequence $(f_j)_{j\in\mathbb{Z}_{>0}}$ in $\mathbb{R}^{[0,1]}$ converges to $f\in\mathbb{R}^{[0,1]}$ if and only if the sequence converges pointwise in the usual sense [@SW:70 Theorem 42.2]. Let us, therefore, call this the ***topology of pointwise convergence*** and let us denote by $\mathsf{C}_p([0,1];\mathbb{R})$ the vector space $\mathbb{R}^{[0,1]}$ when equipped with this topology. For clarity, we shall prefix with "$\mathsf{C}_p$" topological properties in the topology of pointwise convergence. Thus, for example, we shall say "$\mathsf{C}_p$-open" to denote an open set in $\mathsf{C}_p([0,1];\mathbb{R})$. We will be interested in bounded subsets of $\mathsf{C}_p([0,1];\mathbb{R})$. We shall use the following characterisation of a bounded subset $B$ of a topological $\mathbb{R}$-vector space $\mathsf{V}$: a set is bounded if and only if, for every sequence $(v_j)_{j\in\mathbb{Z}_{>0}}$ in $B$ and for every sequence $(a_j)_{j\in\mathbb{Z}_{>0}}$ in $\mathbb{R}$ converging to $0$, it holds that the sequence $(a_jv_j)_{j\in\mathbb{Z}_{>0}}$ converges to zero in the topology of $\mathsf{V}$ [@WR:91 Theorem 1.30]. **Proposition 1**. 
*A subset $B\subseteq\mathsf{C}_p([0,1];\mathbb{R})$ is $\mathsf{C}_p$-bounded if and only if there exists a nonnegative-valued $g\in\mathbb{R}^{[0,1]}$ such that $$B\subseteq\{f\in\mathsf{C}_p([0,1];\mathbb{R})\;|\enspace\lvert f(x)\rvert\le g(x)\ \textrm{for every}\ x\in[0,1]\}.$$* *Suppose that there exists a nonnegative-valued $g\in\mathbb{R}^{[0,1]}$ such that $\lvert f(x)\rvert\le g(x)$ for every $x\in[0,1]$ if $f\in B$. Let $(f_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $B$ and let $(a_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $\mathbb{R}$ converging to $0$. If $x\in[0,1]$ then $$\lim_{j\to\infty}\lvert a_jf_j(x)\rvert\le\lim_{j\to\infty}\lvert a_j\rvert g(x)=0,$$ which gives $\mathsf{C}_p$-convergence of the sequence $(a_jf_j)_{j\in\mathbb{Z}_{>0}}$ to zero.* *Next suppose that there exists no nonnegative-valued function $g\in\mathbb{R}^{[0,1]}$ such that $\lvert f(x)\rvert\le g(x)$ for every $x\in[0,1]$ if $f\in B$. This means that there exists $x_0\in[0,1]$ such that, for every $M\in\mathbb{R}_{>0}$, there exists $f\in B$ such that $\lvert f(x_0)\rvert>M$. Let $(a_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $\mathbb{R}$ converging to $0$ and such that $a_j\not=0$ for every $j\in\mathbb{Z}_{>0}$. Then let $(f_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $B$ such that $\lvert f_j(x_0)\rvert>\left\lvert a_j^{-1}\right\rvert$ for every $j\in\mathbb{Z}_{>0}$. 
Then $\lvert a_jf_j(x_0)\rvert>1$ for every $j\in\mathbb{Z}_{>0}$, implying that the sequence $(a_jf_j)_{j\in\mathbb{Z}_{>0}}$ cannot $\mathsf{C}_p$-converge to zero. Thus $B$ is not $\mathsf{C}_p$-bounded.* Of course, in the theory of integration one is interested, not in pointwise convergence of arbitrary functions, but in pointwise convergence of measurable functions. Let us, therefore, denote $$\mathsf{M}([0,1];\mathbb{R})=\{f\in\mathbb{R}^{[0,1]}\;|\enspace f\ \textrm{is Lebesgue measurable}\},$$ where we understand the topology on $\mathsf{M}([0,1];\mathbb{R})$ to be the subspace topology inherited from $\mathsf{C}_p([0,1];\mathbb{R})$. Standard theorems on measurable functions show that $\mathsf{M}([0,1];\mathbb{R})$ is a subspace [@DLC:13 Proposition 2.1.6] and is $\mathsf{C}_p$-sequentially closed [@DLC:13 Proposition 2.1.5]. However, the stronger assertion of closedness does not hold. The following result shows this, as well as giving topological properties of $Z([0,1];\mathbb{R})$. **Proposition 2**. 
*The subspaces $\mathsf{M}([0,1];\mathbb{R})$ and $Z([0,1];\mathbb{R})$ of $\mathsf{C}_p([0,1];\mathbb{R})$ are not $\mathsf{C}_p$-closed, but are $\mathsf{C}_p$-sequentially closed.* *The $\mathsf{C}_p$-sequential closedness of $\mathsf{M}([0,1];\mathbb{R})$ and $Z([0,1];\mathbb{R})$ follows from standard theorems, as pointed out above. We first show that $\mathsf{C}_p([0,1];\mathbb{R})\setminus\mathsf{M}([0,1];\mathbb{R})$ is not $\mathsf{C}_p$-open. Let $f\in\mathsf{C}_p([0,1];\mathbb{R})\setminus\mathsf{M}([0,1];\mathbb{R})$ and let $V$ be a $\mathsf{C}_p$-open set containing $f$. Then $V$ contains a basic neighbourhood. Thus there exist $\epsilon\in\mathbb{R}_{>0}$, $x_1,\dots,x_k\in[0,1]$, and a basic neighbourhood $U=\prod_{x\in[0,1]}U_x$ contained in $V$ where* 1. *$U_{x_j}=(f(x_j)-\epsilon,f(x_j)+\epsilon)$ for $j\in\{1,\dots,k\}$ and* 2. *$U_x=\mathbb{R}$ for $x\in[0,1]\setminus\{x_1,\dots,x_k\}$.* *Then the function $g\in\mathbb{R}^{[0,1]}$ defined by $$g(x)=\begin{cases}f(x),&x\in\{x_1,\dots,x_k\},\\ 0,&\textrm{otherwise}\end{cases}$$ is in $U\cap\mathsf{M}([0,1];\mathbb{R})$. 
Thus $g\in V$, showing that every neighbourhood of $f$ contains a member of $\mathsf{M}([0,1];\mathbb{R})$.* *To show that $Z([0,1];\mathbb{R})$ is not $\mathsf{C}_p$-closed we shall show that $\mathsf{C}_p([0,1];\mathbb{R})\setminus Z([0,1];\mathbb{R})$ is not $\mathsf{C}_p$-open. Let $f\in\mathsf{C}_p([0,1];\mathbb{R})\setminus Z([0,1];\mathbb{R})$ be given by $f(x)=1$ for all $x\in[0,1]$. Let $V$ be a $\mathsf{C}_p$-open subset containing $f$. Then $V$ contains a basic neighbourhood from $\mathsf{C}_p([0,1];\mathbb{R})$, and in particular a basic neighbourhood of the form $U=\prod_{x\in[0,1]}U_x$ where the open sets $U_x\subseteq\mathbb{R}$, $x\in[0,1]$, have the following properties:* 1. *there exist $\epsilon\in(0,1)$ and a finite set $x_1,\dots,x_k\in[0,1]$ such that $U_{x_j}=(1-\epsilon,1+\epsilon)$ for each $j\in\{1,\dots,k\}$;* 2. *for $x\in[0,1]\setminus\{x_1,\dots,x_k\}$ we have $U_x=\mathbb{R}$.* *We claim that such a basic neighbourhood $U$ contains a function from $Z([0,1];\mathbb{R})$. 
Indeed, the function $$g(x)=\begin{cases}1,&x\in\{x_1,\dots,x_k\},\\0,&\textrm{otherwise}\end{cases}$$ is in $U\cap Z([0,1];\mathbb{R})$, and so is in $V\cap Z([0,1];\mathbb{R})$. This shows that $\mathsf{C}_p([0,1];\mathbb{R})\setminus Z([0,1];\mathbb{R})$ is not $\mathsf{C}_p$-open, as desired.* ## Almost everywhere pointwise convergence limit structures For many applications, it is the space $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$, not $\mathsf{L}^1([0,1];\mathbb{R})$, that is of interest, this by virtue of its possessing a norm and not a seminorm. (Of course, one might also be interested in $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$, but the point of this paper is to clarify the ways in which this space is deficient.) Thus one is interested in subspaces of $\hat{\mathbb{R}}^{[0,1]}$. 
The largest such subspace in which we shall be interested is the image of the Lebesgue measurable functions in $\hat{\mathbb{R}}^{[0,1]}$ under the projection from $\mathbb{R}^{[0,1]}$: $$\hat{\mathsf{M}}([0,1];\mathbb{R})=\mathsf{M}([0,1];\mathbb{R})/Z([0,1];\mathbb{R}).$$ Note that the quotient is well-defined since completeness of the Lebesgue measure gives $Z([0,1];\mathbb{R})\subseteq\mathsf{M}([0,1];\mathbb{R})$. Now, one wishes to provide structure on $\hat{\mathsf{M}}([0,1];\mathbb{R})$ such that there is a notion of convergence which agrees with almost everywhere pointwise convergence. First let us be clear about what we mean by almost everywhere pointwise convergence relative to the various function spaces we are using. 
A sequence $(f_j)_{j\in\mathbb{Z}_{>0}}$ in $\mathsf{M}([0,1];\mathbb{R})$ is ***almost everywhere pointwise convergent*** to $f\in\mathsf{M}([0,1];\mathbb{R})$ if $$\lambda(\{x\in[0,1]\mid (f_j(x))\ \textrm{does not converge to}\ f(x)\})=0.$$ A sequence $([f_j])_{j\in\mathbb{Z}_{>0}}$ in $\hat{\mathsf{M}}([0,1];\mathbb{R})$ is ***almost everywhere pointwise convergent*** to $[f]\in\hat{\mathsf{M}}([0,1];\mathbb{R})$ if $$\lambda(\{x\in[0,1]\mid (f_j(x))\ \textrm{does not converge to}\ f(x)\})=0.$$ We should ensure that the definition of almost everywhere pointwise convergence in $\hat{\mathsf{M}}([0,1];\mathbb{R})$ is well-defined. **Lemma 3**.
*For a sequence $([f_j])_{j\in\mathbb{Z}_{>0}}$ in $\hat{\mathsf{M}}([0,1];\mathbb{R})$ and for $[f]\in\hat{\mathsf{M}}([0,1];\mathbb{R})$ the following statements are equivalent:*

1. *there exist a sequence $(g_j)_{j\in\mathbb{Z}_{>0}}$ in $\mathsf{M}([0,1];\mathbb{R})$ and $g\in\mathsf{M}([0,1];\mathbb{R})$ such that*
    1. *$[g_j]=[f_j]$ for $j\in\mathbb{Z}_{>0}$,*
    2. *$[g]=[f]$, and*
    3. *$(g_j)_{j\in\mathbb{Z}_{>0}}$ converges pointwise almost everywhere to $g$;*
2. *for every sequence $(g_j)_{j\in\mathbb{Z}_{>0}}$ in $\mathsf{M}([0,1];\mathbb{R})$ and for every $g\in\mathsf{M}([0,1];\mathbb{R})$ satisfying*
    1. *$[g_j]=[f_j]$ for $j\in\mathbb{Z}_{>0}$ and*
    2. *$[g]=[f]$,*

    *it holds that $(g_j)_{j\in\mathbb{Z}_{>0}}$ converges pointwise almost everywhere to $g$.*

*It is clear that the second statement implies the first, so we only prove the converse. Thus we let $(g_j)_{j\in\mathbb{Z}_{>0}}$ in $\mathsf{M}([0,1];\mathbb{R})$ and $g\in\mathsf{M}([0,1];\mathbb{R})$ be such that*

1. *$[g_j]=[f_j]$ for $j\in\mathbb{Z}_{>0}$,*
2. *$[g]=[f]$, and*
3. *$(g_j)_{j\in\mathbb{Z}_{>0}}$ converges pointwise almost everywhere to $g$.*

*Let $(h_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $\mathsf{M}([0,1];\mathbb{R})$ and let $h\in\mathsf{M}([0,1];\mathbb{R})$ be such that*

1. *$[h_j]=[f_j]$ for $j\in\mathbb{Z}_{>0}$ and*
2.
*$[h]=[f]$.*

*Define $$A=\{x\in[0,1]\mid g(x)\not=f(x)\},\quad B=\{x\in[0,1]\mid h(x)\not=f(x)\}$$ and, for $j\in\mathbb{Z}_{>0}$, define $$A_j=\{x\in[0,1]\mid g_j(x)\not=f_j(x)\},\quad B_j=\{x\in[0,1]\mid h_j(x)\not=f_j(x)\}$$ and note that $$x\in[0,1]\setminus(A\cup B)=([0,1]\setminus A)\cap ([0,1]\setminus B)\quad\implies\quad h(x)=f(x)=g(x)$$ and $$x\in[0,1]\setminus(A_j\cup B_j)=([0,1]\setminus A_j)\cap([0,1]\setminus B_j)\quad\implies\quad h_j(x)=f_j(x)=g_j(x).$$ Thus, $$x\in[0,1]\setminus\left((\cup_{j\in\mathbb{Z}_{>0}}A_j\cup B_j)\cup (A\cup B)\right)\quad\implies\quad \lim_{j\to\infty}h_j(x)=\lim_{j\to\infty}g_j(x)=g(x)=h(x).$$ Since $(\cup_{j\in\mathbb{Z}_{>0}}A_j\cup B_j)\cup(A\cup B)$ is a countable union of sets of measure zero, it has zero measure, and so $(h_j)_{j\in\mathbb{Z}_{>0}}$ converges pointwise almost everywhere to $h$.*

Now that we understand just what sort of convergence we seek in $\hat{\mathsf{M}}([0,1];\mathbb{R})$, we can think about how to achieve this. The obvious first guess is to use the quotient topology on $\hat{\mathsf{M}}([0,1];\mathbb{R})$ inherited from the $\mathsf{C}_p$-topology on $\mathsf{M}([0,1];\mathbb{R})$.
However, convergence in this topology fails to agree with almost everywhere pointwise convergence. Indeed, we have the following more sweeping statement.

**Proposition 4**. *Let $\mathscr{T}_{\textup{a.e.}}$ be the set of topologies $\tau$ on $\hat{\mathsf{M}}([0,1];\mathbb{R})$ such that the convergent sequences in $\tau$ are precisely the almost everywhere pointwise convergent sequences. Then $\mathscr{T}_{\textup{a.e.}}=\emptyset$.*

*Suppose that $\mathscr{T}_{\textup{a.e.}}\not=\emptyset$ and let $\tau\in\mathscr{T}_{\textup{a.e.}}$. Let us denote by $z\in\mathsf{M}([0,1];\mathbb{R})$ the zero function. Let $(f_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $\mathsf{M}([0,1];\mathbb{R})$ converging in measure to $z$, i.e., for every $\epsilon\in\mathbb{R}_{>0}$, $$\lim_{j\to\infty}\lambda(\{x\in[0,1]\mid \lvert f_j(x)\rvert\ge\epsilon\})=0,$$ but not converging pointwise almost everywhere to $z$ [@DLC:13 Example 3.1.1(b)]. Since almost everywhere pointwise convergence agrees with convergence in $\tau$, the sequence $([f_j])_{j\in\mathbb{Z}_{>0}}$ does not converge to $[z]$ in $\tau$, and so there exist a neighbourhood $U$ of $[z]$ in $\hat{\mathsf{M}}([0,1];\mathbb{R})$ and a subsequence $(f_{j_k})_{k\in\mathbb{Z}_{>0}}$ of $(f_j)_{j\in\mathbb{Z}_{>0}}$ such that $[f_{j_k}]\notin U$ for every $k\in\mathbb{Z}_{>0}$. The subsequence $(f_{j_k})_{k\in\mathbb{Z}_{>0}}$ still converges in measure to $z$, and so, as is well-known [@DLC:13 Proposition 3.1.3], it possesses a further subsequence $(f_{j_{k_l}})_{l\in\mathbb{Z}_{>0}}$ that converges pointwise almost everywhere to $z$. Thus the sequence $([f_{j_{k_l}}])_{l\in\mathbb{Z}_{>0}}$ converges pointwise almost everywhere to $[z]$, and so converges to $[z]$ in $\tau$. In particular, $[f_{j_{k_l}}]\in U$ for all sufficiently large $l$, which contradicts $[f_{j_k}]\notin U$ for every $k\in\mathbb{Z}_{>0}$.*

It is moderately well-known that there can be no topology on $\hat{\mathsf{M}}([0,1];\mathbb{R})$ which gives rise to almost everywhere pointwise convergence. For instance, this is observed by [@MF:21]. Our proof of Proposition [Proposition 4](#prop:aetop!exist){reference-type="ref" reference="prop:aetop!exist"} is adapted slightly from the observation of @ETO:66. The upshot of the result is that, if one is going to provide some structure with which to describe almost everywhere pointwise convergence, this structure must be something different from a topology. This was addressed by @RA:50, who observed that the notion of convergence in measure *is* topological, but almost everywhere pointwise convergence is not. To structurally distinguish between the two sorts of convergence, @RA:50 introduces the notion of a limit structure. This idea is discussed in some generality for Borel measurable functions by @UH:00 using multiple valued topologies. Here we will introduce the notion of a limit structure in as direct a manner as possible, commensurate with our objectives. Readers wishing to explore the subject in more detail are referred to [@RB/HPB:02].

For a set $X$ let $\mathscr{F}(X)$ denote the set of filters on $X$ and, for $x\in X$, denote by $$\mathcal{F}_x=\{S\subseteq X\mid x\in S\}$$ the principal filter generated by $\{x\}$.
If $(\Lambda,\preceq)$ is a directed set and if $\phi\colon\Lambda\rightarrow X$ is a $\Lambda$-net, we denote the tails of the net $\phi$ by $$T_\phi(\lambda)=\{\phi(\lambda')\mid \lambda\preceq\lambda'\},\qquad\lambda\in\Lambda.$$ We then denote by $$\mathcal{F}_\phi=\{S\subseteq X\mid T_\phi(\lambda)\subseteq S\ \textrm{for some}\ \lambda\in\Lambda\}$$ the "tail filter" (also sometimes called the "Fréchet filter") associated to the $\Lambda$-net $\phi$. A ***limit structure*** on a set $X$ is a subset $\mathscr{L}\subseteq\mathscr{F}(X)\times X$ with the following properties:

1. [\[pl:ls1\]]{#pl:ls1 label="pl:ls1"} if $x\in X$ then $(\mathcal{F}_x,x)\in\mathscr{L}$;
2. [\[pl:ls2\]]{#pl:ls2 label="pl:ls2"} if $(\mathcal{F},x)\in\mathscr{L}$ and if $\mathcal{F}\subseteq\mathcal{G}\in\mathscr{F}(X)$ then $(\mathcal{G},x)\in\mathscr{L}$;
3. [\[pl:ls3\]]{#pl:ls3 label="pl:ls3"} if $(\mathcal{F},x),(\mathcal{G},x)\in\mathscr{L}$ then $(\mathcal{F}\cap\mathcal{G},x)\in\mathscr{L}$.

If $(\Lambda,\preceq)$ is a directed set, a $\Lambda$-net $\phi\colon\Lambda\rightarrow X$ is ***$\mathscr{L}$-convergent*** to $x\in X$ if $(\mathcal{F}_\phi,x)\in\mathscr{L}$. Let us denote by $\mathscr{S}(\mathscr{L})$ the set of $\mathscr{L}$-convergent $\mathbb{Z}_{>0}$-nets, i.e., the set of $\mathscr{L}$-convergent sequences. The intuition behind the notion of a limit structure is as follows. Condition [\[pl:ls1\]](#pl:ls1){reference-type="eqref" reference="pl:ls1"} says that the trivial filter converging to $x$ should be included in the limit structure, condition [\[pl:ls2\]](#pl:ls2){reference-type="eqref" reference="pl:ls2"} says that if a filter converges to $x$, then every finer filter also converges to $x$, and condition [\[pl:ls3\]](#pl:ls3){reference-type="eqref" reference="pl:ls3"} says that "mixing" filters converging to $x$ should give a filter converging to $x$.
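The tail-filter membership test, $S\in\mathcal{F}_\phi$ if and only if some tail $T_\phi(n)$ is contained in $S$, can be sketched concretely for $\mathbb{Z}_{>0}$-nets (sequences). The following Python sketch is illustrative only; the finite horizon `N_MAX` is an artificial assumption needed to make the search computable:

```python
# Sketch of the tail filter F_phi of a sequence phi: Z_{>0} -> X.  The
# membership test "S is in F_phi iff some tail T_phi(n) is contained in S"
# is truncated at a finite horizon N_MAX, an artificial assumption needed
# to make the check computable.

N_MAX = 1000  # hypothetical finite horizon

def tail(phi, n):
    """The tail T_phi(n) = {phi(j) : j >= n}, truncated at N_MAX."""
    return {phi(j) for j in range(n, N_MAX + 1)}

def in_tail_filter(S, phi):
    """Test whether S belongs to the tail filter of phi."""
    return any(tail(phi, n) <= S for n in range(1, N_MAX + 1))

# An eventually constant sequence: phi(j) = j for j < 5, then 0 forever.
def phi(j):
    return j if j < 5 else 0

print(in_tail_filter({0}, phi))           # True: the tail from n = 5 is {0}
print(in_tail_filter({1, 2, 3, 4}, phi))  # False: every tail contains 0
```

For an eventually constant sequence, every set containing the eventual value lies in the filter, mirroring the fact that the tail filter of a constant net is the principal filter $\mathcal{F}_x$.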
Starting from the definition of a limit structure, one can reproduce many of the concepts from topology, e.g., openness, closedness, compactness, continuity. Since we are not going to make use of any of these general constructions, we merely refer the interested reader to [@RB/HPB:02]. The one notion we will use is the following: a subset $A$ of a set $X$ with a limit structure $\mathscr{L}$ is ***$\mathscr{L}$-sequentially closed*** if, whenever a sequence $(x_j)_{j\in\mathbb{Z}_{>0}}$ in $A$ is $\mathscr{L}$-convergent to $x\in X$, it holds that $x\in A$. We are interested in the special case of limit structures on an $\mathbb{R}$-vector space $\mathsf{V}$; one will trivially see that there is nothing in the definitions that requires the field to be $\mathbb{R}$. For $\mathcal{F},\mathcal{G}\in\mathscr{F}(\mathsf{V})$ and for $a\in\mathbb{R}$ we denote $$\mathcal{F}+\mathcal{G}=\{A+B\mid A\in\mathcal{F},\ B\in\mathcal{G}\},\quad a\mathcal{F}=\{aA\mid A\in\mathcal{F}\},$$ where, as usual, $$A+B=\{u+v\mid u\in A,\ v\in B\},\quad aA=\{au\mid u\in A\}.$$ We say that a limit structure $\mathscr{L}$ on a vector space $\mathsf{V}$ is ***linear*** if $(\mathcal{F}_1,v_1),(\mathcal{F}_2,v_2)\in\mathscr{L}$ implies that $(\mathcal{F}_1+\mathcal{F}_2,v_1+v_2)\in\mathscr{L}$ and if $a\in\mathbb{R}$ and $(\mathcal{F},v)\in\mathscr{L}$ then $(a\mathcal{F},av)\in\mathscr{L}$. Following the characterisation of bounded subsets of topological vector spaces, we say a subset $B\subseteq\mathsf{V}$ is ***$\mathscr{L}$-bounded*** if, for every sequence $(v_j)_{j\in\mathbb{Z}_{>0}}$ in $B$ and for every sequence $(a_j)_{j\in\mathbb{Z}_{>0}}$ in $\mathbb{R}$ converging to $0$, the sequence $(a_jv_j)_{j\in\mathbb{Z}_{>0}}$ is $\mathscr{L}$-convergent to zero.
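The $\mathscr{L}$-boundedness condition can be illustrated numerically for a family dominated by a fixed function. In this Python sketch the dominating function $g$, the family $f_j$, and the null sequence $(a_j)$ are all illustrative assumptions, not taken from the text:

```python
# Sketch: a family {f_j} dominated pointwise by a fixed g is L-bounded in
# the sense above: for every null sequence (a_j) of scalars, the sequence
# a_j * f_j tends to zero pointwise.  g, f_j, and a_j are illustrative
# choices, not taken from the text.
import math

def g(x):
    return 1.0 / math.sqrt(x)         # dominating function on (0, 1]

def f(j, x):
    return math.sin(j * x) * g(x)     # |f(j, x)| <= g(x) for every j

def a(j):
    return 1.0 / j                    # null sequence of scalars

x = 0.3                               # a sample point in (0, 1]
scaled = [abs(a(j) * f(j, x)) for j in range(1, 201)]

# Each term is squeezed by |a_j| g(x), which tends to zero.
print(all(s <= a(j) * g(x) + 1e-12 for j, s in enumerate(scaled, start=1)))
print(scaled[-1] < 0.01)  # True: the last term is at most g(0.3)/200
```

The point of the squeeze is exactly the one used in the proof of Proposition 6 below: domination by a single $g$ turns a null sequence of scalars into pointwise convergence to zero off a null set.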
For $[f]\in\hat{\mathsf{M}}([0,1];\mathbb{R})$ define $$\begin{gathered} \mathscr{F}_{[f]}=\{\mathcal{F}\in \mathscr{F}(\hat{\mathsf{M}}([0,1];\mathbb{R}))\mid \mathcal{F}_\phi\subseteq\mathcal{F}\ \textrm{for some}\ \mathbb{Z}_{>0}\textrm{-net}\ \phi\ \textrm{such that}\\ (\phi(j))_{j\in\mathbb{Z}_{>0}}\ \textrm{is almost everywhere pointwise convergent to}\ [f]\}.\end{gathered}$$ We may now define a limit structure on $\hat{\mathsf{M}}([0,1];\mathbb{R})$ as follows.

**Theorem 5**. *The subset of $\mathscr{F}(\hat{\mathsf{M}}([0,1];\mathbb{R}))\times \hat{\mathsf{M}}([0,1];\mathbb{R})$ defined by $$\mathscr{L}_\lambda= \{(\mathcal{F},[f])\mid \mathcal{F}\in\mathscr{F}_{[f]}\}$$ is a linear limit structure on $\hat{\mathsf{M}}([0,1];\mathbb{R})$. Moreover, a sequence $([f_j])_{j\in\mathbb{Z}_{>0}}$ is $\mathscr{L}_\lambda$-convergent to $[f]$ if and only if the sequence is almost everywhere pointwise convergent to $[f]$.*

*Let $[f]\in\hat{\mathsf{M}}([0,1];\mathbb{R})$. Consider the trivial $\mathbb{Z}_{>0}$-net $\phi_{[f]}\colon\mathbb{Z}_{>0}\rightarrow\hat{\mathsf{M}}([0,1];\mathbb{R})$ defined by $\phi_{[f]}(j)=[f]$.
Since $\mathcal{F}_{\phi_{[f]}}=\mathcal{F}_{[f]}$ and since $(\mathcal{F}_{\phi_{[f]}},[f])\in\mathscr{L}_\lambda$, the condition [\[pl:ls1\]](#pl:ls1){reference-type="eqref" reference="pl:ls1"} for a limit structure is satisfied.*

*Let $(\mathcal{F},[f])\in\mathscr{L}_\lambda$ and suppose that $\mathcal{F}\subseteq\mathcal{G}$. Then $\mathcal{F}\in\mathscr{F}_{[f]}$ and so $\mathcal{F}\supseteq\mathcal{F}_\phi$ for some $\mathbb{Z}_{>0}$-net $\phi$ that converges pointwise almost everywhere to $[f]$. Therefore, we immediately have $\mathcal{F}_\phi\subseteq\mathcal{G}$ and so $(\mathcal{G},[f])\in\mathscr{L}_\lambda$. This verifies condition [\[pl:ls2\]](#pl:ls2){reference-type="eqref" reference="pl:ls2"} in the definition of a limit structure.*

*Finally, let $(\mathcal{F},[f]),(\mathcal{G},[f])\in\mathscr{L}_\lambda$ and let $\phi$ and $\psi$ be $\mathbb{Z}_{>0}$-nets that converge pointwise almost everywhere to $[f]$ and satisfy $\mathcal{F}_\phi\subseteq\mathcal{F}$ and $\mathcal{F}_\psi\subseteq\mathcal{G}$. Define a $\mathbb{Z}_{>0}$-net $\phi\wedge\psi$ by $$\phi\wedge\psi(j)=\begin{cases}\phi(\frac{1}{2}(j+1)),&j\ \textrm{odd},\\ \psi(\frac{1}{2}j),&j\ \textrm{even}.\end{cases}$$ We first claim that $\phi\wedge\psi$ converges pointwise almost everywhere to $[f]$.
Let $$A=\Bigl\{x\in[0,1]\Bigm|\lim_{j\to\infty}\phi(j)(x)\not=f(x)\Bigr\},\quad B=\Bigl\{x\in[0,1]\Bigm|\lim_{j\to\infty}\psi(j)(x)\not=f(x)\Bigr\}.$$ If $x\in[0,1]\setminus(A\cup B)$ then $$\lim_{j\to\infty}\phi(j)(x)=\lim_{j\to\infty}\psi(j)(x)=f(x).$$ Thus, for $x\in[0,1]\setminus(A\cup B)$ and $\epsilon\in\mathbb{R}_{>0}$ there exists $N\in\mathbb{Z}_{>0}$ such that $$\lvert f(x)-\phi(j)(x)\rvert,\lvert f(x)-\psi(j)(x)\rvert<\epsilon,\qquad j\ge N.$$ Therefore, for $j\ge2N$ and for $x\in[0,1]\setminus(A\cup B)$ we have $\lvert f(x)-\phi\wedge\psi(j)(x)\rvert<\epsilon$ and so $$\lim_{j\to\infty}\phi\wedge\psi(j)(x)=f(x),\qquad x\in[0,1]\setminus(A\cup B).$$ Since $\lambda(A\cup B)=0$ it indeed follows that $\phi\wedge\psi$ converges pointwise almost everywhere to $[f]$.*

*We next claim that $\mathcal{F}_{\phi\wedge\psi}\subseteq\mathcal{F}\cap\mathcal{G}$. Indeed, let $S\in\mathcal{F}_{\phi\wedge\psi}$. Then there exists $N\in\mathbb{Z}_{>0}$ such that $T_{\phi\wedge\psi}(N)\subseteq S$. Therefore, there exist $N_\phi,N_\psi\in\mathbb{Z}_{>0}$ such that $T_\phi(N_\phi)\subseteq S$ and $T_\psi(N_\psi)\subseteq S$. That is, $S\in\mathcal{F}_\phi\cap\mathcal{F}_\psi\subseteq\mathcal{F}\cap\mathcal{G}$.
This shows that $(\mathcal{F}\cap\mathcal{G},[f])\in\mathscr{L}_\lambda$ and so shows that condition [\[pl:ls3\]](#pl:ls3){reference-type="eqref" reference="pl:ls3"} in the definition of a limit structure holds.*

*Thus we have shown that $\mathscr{L}_\lambda$ is a limit structure. Let us show that it is a linear limit structure. Let $(\mathcal{F}_1,[f_1]),(\mathcal{F}_2,[f_2])\in\mathscr{L}_\lambda$. Thus there exist $\mathbb{Z}_{>0}$-nets $\phi_1$ and $\phi_2$ in $\hat{\mathsf{M}}([0,1];\mathbb{R})$ converging pointwise almost everywhere to $[f_1]$ and $[f_2]$, respectively, and such that $\mathcal{F}_{\phi_1}\subseteq\mathcal{F}_1$ and $\mathcal{F}_{\phi_2}\subseteq\mathcal{F}_2$. Let us denote by $(f_{1,j})_{j\in\mathbb{Z}_{>0}}$ and $(f_{2,j})_{j\in\mathbb{Z}_{>0}}$ sequences in $\mathsf{M}([0,1];\mathbb{R})$ such that $[f_{1,j}]=\phi_1(j)$ and $[f_{2,j}]=\phi_2(j)$ for $j\in\mathbb{Z}_{>0}$. Then, as in the proof of Lemma [Lemma 3](#lem:pwaec){reference-type="ref" reference="lem:pwaec"}, there exists a subset $A\subseteq[0,1]$ of zero measure such that $$\lim_{j\to\infty}f_{1,j}(x)=f_1(x),\quad\lim_{j\to\infty}f_{2,j}(x)=f_2(x), \qquad x\in[0,1]\setminus A.$$ Thus, for $x\in[0,1]\setminus A$, $$\lim_{j\to\infty}(f_{1,j}+f_{2,j})(x)=(f_1+f_2)(x).$$ This shows that the $\mathbb{Z}_{>0}$-net $\phi_1+\phi_2$ converges pointwise almost everywhere to $[f_1+f_2]$. Since $\mathcal{F}_{\phi_1+\phi_2}\subseteq\mathcal{F}_1+\mathcal{F}_2$, it follows that $(\mathcal{F}_1+\mathcal{F}_2,[f_1+f_2])\in\mathscr{L}_\lambda$.
An entirely similar argument gives $(a\mathcal{F},a[f])\in\mathscr{L}_\lambda$ for $a\in\mathbb{R}$ and $(\mathcal{F},[f])\in\mathscr{L}_\lambda$.*

*We now need to show that $\mathscr{S}(\mathscr{L}_\lambda)$ consists exactly of the almost everywhere pointwise convergent sequences. The very definition of $\mathscr{L}_\lambda$ ensures that if a $\mathbb{Z}_{>0}$-net $\phi$ is almost everywhere pointwise convergent then $\phi\in\mathscr{S}(\mathscr{L}_\lambda)$. We prove the converse, and so let $\phi$ be $\mathscr{L}_\lambda$-convergent to $[f]$. Therefore, by definition of $\mathscr{L}_\lambda$, there exists a $\mathbb{Z}_{>0}$-net $\psi$ converging pointwise almost everywhere to $[f]$ such that $\mathcal{F}_\psi\subseteq\mathcal{F}_\phi$.*

***Lemma 1**. *There exists a subsequence $\psi'$ of $\psi$ such that $\mathcal{F}_{\psi'}=\mathcal{F}_\phi$.**

**Let $n\in\mathbb{Z}_{>0}$ and note that $T_\psi(n)\in\mathcal{F}_\psi\subseteq\mathcal{F}_\phi$. Thus there exists $k\in\mathbb{Z}_{>0}$ such that $T_\phi(k)\subseteq T_\psi(n)$. Then define $$k_n=\min\{k\in\mathbb{Z}_{>0}\mid T_\phi(k)\subseteq T_\psi(n)\},$$ the minimum being well-defined since the set on the right is a nonempty subset of $\mathbb{Z}_{>0}$; note also that $$k>k'\quad\implies\quad T_\phi(k)\subseteq T_\phi(k').$$ This uniquely defines, therefore, a sequence $(k_n)_{n\in\mathbb{Z}_{>0}}$. Moreover, if $n_1>n_2$ then $T_\psi(n_1)\subseteq T_\psi(n_2)$, which implies that $T_\phi(k_{n_1})\subseteq T_\psi(n_2)$. Therefore, $k_{n_1}\ge k_{n_2}$, showing that the sequence $(k_n)_{n\in\mathbb{Z}_{>0}}$ is nondecreasing.**

**Now define $\theta\colon\mathbb{Z}_{>0}\rightarrow\mathbb{Z}_{>0}$ as follows. If $j<k_n$ for every $n\in\mathbb{Z}_{>0}$ then define $\theta(j)$ in an arbitrary manner. If $j\ge k_1$ then note that $\phi(j)\in T_\phi(k_1)\subseteq T_\psi(1)$. Thus there exists at least one $m\in\mathbb{Z}_{>0}$ such that $\phi(j)=\psi(m)$. More generally, if $j\ge k_n$ for $n\in\mathbb{Z}_{>0}$ then there exists at least one $m\ge n$ such that $\phi(j)=\psi(m)$.
Thus for any $j\in\mathbb{Z}_{>0}$ we can define $\theta(j)\in\mathbb{Z}_{>0}$ such that $\phi(j)=\psi(\theta(j))$ if $j\ge k_1$ and such that $\theta(j)\ge n$ if $j\ge k_n$.**

**Note that any function $\theta\colon\mathbb{Z}_{>0}\rightarrow\mathbb{Z}_{>0}$ as constructed above is unbounded. Therefore, there exists a strictly increasing function $\rho\colon\mathbb{Z}_{>0}\rightarrow\mathbb{Z}_{>0}$ such that $\operatorname{image}(\rho)=\operatorname{image}(\theta)$. We claim that $\mathcal{F}_\rho=\mathcal{F}_\theta$. First let $n\in\mathbb{Z}_{>0}$ and let $j\ge k_{\rho(n)}$. Then $\theta(j)\ge\rho(n)$. Since $\operatorname{image}(\rho)=\operatorname{image}(\theta)$ there exists $m\in\mathbb{Z}_{>0}$ such that $\rho(m)=\theta(j)\ge\rho(n)$. Since $\rho$ is strictly increasing, $m\ge n$. Thus $\theta(j)\in T_\rho(n)$ and so $T_\theta(k_{\rho(n)})\subseteq T_\rho(n)$. This implies that $\mathcal{F}_\rho\subseteq\mathcal{F}_\theta$.**

**Conversely, let $n\in\mathbb{Z}_{>0}$ and let $r_n\in\mathbb{Z}_{>0}$ be such that $$\rho(r_n)>\max\{\theta(1),\dots,\theta(n)\};$$ this is possible since $\rho$ is unbounded. If $j\ge r_n$ then $$\rho(j)\ge\rho(r_n)>\max\{\theta(1),\dots,\theta(n)\}.$$ Since $\operatorname{image}(\rho)=\operatorname{image}(\theta)$ we have $\rho(j)=\theta(m)$ for some $m\in\mathbb{Z}_{>0}$. We must have $m>n$ and so $\rho(j)\in T_\theta(n)$. Thus $T_\rho(r_n)\subseteq T_\theta(n)$ and so $\mathcal{F}_\theta\subseteq\mathcal{F}_\rho$.**

**To arrive at the conclusions of the lemma we first note that, by definition of $\theta$, $\mathcal{F}_\phi=\mathcal{F}_{\psi\circ\theta}$.
We now define $\psi'=\psi\circ\rho$ and note that $$\mathcal{F}_\phi=\mathcal{F}_{\psi\circ\theta}= \psi(\mathcal{F}_\theta)=\psi(\mathcal{F}_\rho)=\mathcal{F}_{\psi\circ\rho},$$ as desired.**

*Since a subsequence of an almost everywhere pointwise convergent sequence is almost everywhere pointwise convergent to the same limit, it follows that $\psi'$, and so $\phi$, converges almost everywhere pointwise to $[f]$.*

The preceding theorem seems to be well-known; see [@RB/HPB:02] where, in particular, the essential lemma in the proof is given. Nonetheless, we have never seen the ingredients of the proof laid out clearly in one place, so the result is worth recording.

Let us record a characterisation of $\mathscr{L}_\lambda$-bounded subsets of $\hat{\mathsf{M}}([0,1];\mathbb{R})$.

**Proposition 6**. *A subset $B\subseteq\hat{\mathsf{M}}([0,1];\mathbb{R})$ is $\mathscr{L}_\lambda$-bounded if and only if there exists a nonnegative-valued $g\in\mathsf{M}([0,1];\mathbb{R})$ such that $$B\subseteq\{[f]\in\hat{\mathsf{M}}([0,1];\mathbb{R})\mid \lvert f(x)\rvert\le g(x)\ \textrm{for almost every}\ x\in[0,1]\}.$$*

*We first observe that the condition that $\lvert f(x)\rvert\le g(x)$ for almost every $x\in[0,1]$ is independent of the choice of representative $f$ from the equivalence class $[f]$.*

*Suppose that there exists a nonnegative-valued $g\in\mathsf{M}([0,1];\mathbb{R})$ such that, if $[f]\in B$, then $\lvert f(x)\rvert\le g(x)$ for almost every
$x\in[0,1]$. Let $([f_j])_{j\in\mathbb{Z}_{>0}}$ be a sequence in $B$ and let $(a_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $\mathbb{R}$ converging to zero. For $j\in\mathbb{Z}_{>0}$ define $$A_j=\{x\in[0,1]\mid \lvert f_j(x)\rvert> g(x)\},$$ noting that each $A_j$ has measure zero. Note that if $x\in[0,1]\setminus(\cup_{j\in\mathbb{Z}_{>0}}A_j)$ then $$\lim_{j\to\infty}\lvert a_jf_j(x)\rvert\le\lim_{j\to\infty}\lvert a_j\rvert g(x)=0.$$ Since $\lambda(\cup_{j\in\mathbb{Z}_{>0}}A_j)=0$ this implies that the sequence $(a_j[f_j])_{j\in\mathbb{Z}_{>0}}$ is $\mathscr{L}_\lambda$-convergent to zero. One may show that this argument is independent of the choice of representatives $f_j$ from the equivalence classes $[f_j]$, $j\in\mathbb{Z}_{>0}$.*

*Conversely, suppose that there exists no nonnegative-valued function $g\in\mathsf{M}([0,1];\mathbb{R})$ such that, for every $[f]\in B$, $\lvert f(x)\rvert\le g(x)$ for almost every $x\in[0,1]$. This means that there exists a set $E\subseteq[0,1]$ of positive measure such that, for any $M\in\mathbb{R}_{>0}$, there exists $[f]\in B$ such that $\lvert f(x)\rvert>M$ for almost every $x\in E$. Let $(a_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence in $\mathbb{R}$ converging to $0$ and such that $a_j\not=0$ for every $j\in\mathbb{Z}_{>0}$. Then let $([f_j])_{j\in\mathbb{Z}_{>0}}$ be a sequence in $B$ such that $\lvert f_j(x)\rvert>\left\lvert a_j^{-1}\right\rvert$ for almost every $x\in E$ and for every $j\in\mathbb{Z}_{>0}$. Define $$A_j=\{x\in E\mid \lvert f_j(x)\rvert\le\left\lvert a_j^{-1}\right\rvert\},$$ noting that each $A_j$ has measure zero. If $x\in E\setminus(\cup_{j\in\mathbb{Z}_{>0}}A_j)$ then $\lvert a_jf_j(x)\rvert>1$ for every $j\in\mathbb{Z}_{>0}$.
Since $\lambda(E\setminus(\cup_{j\in\mathbb{Z}_{>0}}A_j))>0$ it follows that $(a_j[f_j])_{j\in\mathbb{Z}_{>0}}$ cannot $\mathscr{L}_\lambda$-converge to zero, and so $B$ is not $\mathscr{L}_\lambda$-bounded.*

# Two topological distinctions between the Riemann and Lebesgue theories of integration

In this section we give topological characterisations of the differences between the Riemann and Lebesgue theories. In Section [3.3](#subsec:riemann-ccft){reference-type="ref" reference="subsec:riemann-ccft"} we also explicitly see how these distinctions lead to a deficiency in the Fourier transform theory using the Riemann integral.

## The Dominated Convergence Theorems

Both the Lebesgue and Riemann theories of integration possess a Dominated Convergence Theorem. This gives us two versions of the Dominated Convergence Theorem that we can compare and contrast. Moreover, there are also "pointwise convergent" and "almost everywhere pointwise convergent" versions of both theorems. Typically, the "pointwise convergent" version is stated for the Riemann integral[^4] and the "almost everywhere pointwise convergent" version is stated for the Lebesgue integral. However, both versions are valid for both integrals, so this gives, in actuality, four theorems to compare and contrast. What we do here is state both versions of the Dominated Convergence Theorem for the Lebesgue integral using topological and limit structures, and we give counterexamples illustrating why these statements are not valid for the Riemann integral. Let us first state the various Dominated Convergence Theorems in their usual form. The Dominated Convergence Theorem, including the pointwise convergent and almost everywhere pointwise convergent statements, for the Riemann integral is the following. **Theorem 7**.
*Let $(f_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence of $\mathbb{R}$-valued functions on $[0,1]$ satisfying the following conditions:*

1. *$f_j\in\mathsf{R}^1([0,1];\mathbb{R})$ for each $j\in\mathbb{Z}_{>0}$;*
2. *there exists a nonnegative-valued $g\in\mathsf{R}^1([0,1];\mathbb{R})$ such that $\lvert f_j(x)\rvert\le g(x)$ for every (resp. almost every) $x\in[0,1]$ and for every $j\in\mathbb{Z}_{>0}$;*
3. *the limit $\lim_{j\to\infty}f_j(x)$ exists for every (resp. almost every) $x\in[0,1]$;*
4. *the function $f\colon[0,1]\rightarrow\mathbb{R}$ defined by $f(x)=\lim_{j\to\infty}f_j(x)$ is in $\mathsf{R}^1([0,1];\mathbb{R})$ (resp. there exists $f\in\mathsf{R}^1([0,1];\mathbb{R})$ such that $\lim_{j\to\infty}f_j(x)=f(x)$ for almost every $x\in[0,1]$).*

*Then $$\lim_{j\to\infty}\int_0^1f_j(x)\,\mathrm{d}x=\int_0^1f(x)\,\mathrm{d}x.$$*

For the Lebesgue integral we have the following Dominated Convergence Theorem(s).

**Theorem 8**. *Let $(f_j)_{j\in\mathbb{Z}_{>0}}$ be a sequence of $\mathbb{R}$-valued functions on $[0,1]$ satisfying the following conditions:*

1. *$f_j$ is measurable for each $j\in\mathbb{Z}_{>0}$;*
2. *there exists a nonnegative-valued $g\in\mathsf{L}^1([0,1];\mathbb{R})$ such that $\lvert f_j(x)\rvert\le g(x)$ for every (resp. almost every) $x\in[0,1]$ and for every $j\in\mathbb{Z}_{>0}$;*
3. *the limit $\lim_{j\to\infty}f_j(x)$ exists for every (resp. almost every) $x\in[0,1]$.*

*Then the function $f\colon[0,1]\rightarrow\mathbb{R}$ defined by $$f(x)=\begin{cases}\lim_{j\to\infty}f_j(x),&\textrm{the limit exists},\\ 0,&\textrm{otherwise}\end{cases}$$ and each of the functions $f_j$, $j\in\mathbb{Z}_{>0}$, are in $\mathsf{L}^1([0,1];\mathbb{R})$ and $$\lim_{j\to\infty}\int_If_j\,\mathrm{d}\lambda=\int_If\,\mathrm{d}\lambda.$$*

Our statements make it clear that there is one real difference between the Riemann and Lebesgue theories: the condition of integrability of the limit function $f$ is an *hypothesis* in the Riemann theory but a *conclusion* in the Lebesgue theory. This distinction is crucial and explains why the Lebesgue theory is more powerful than the Riemann theory. Moreover, the structure we introduced in Section [2](#sec:LR-topologies){reference-type="ref" reference="sec:LR-topologies"} allows for an elegant expression of this distinction. The result which follows is simply a rephrasing of the Dominated Convergence Theorem(s) for the Lebesgue integral, and follows from that theorem, along with Theorem [Theorem 5](#the:hatM-convergence){reference-type="ref" reference="the:hatM-convergence"} and Proposition [Proposition 6](#prop:hatM-bounded){reference-type="ref" reference="prop:hatM-bounded"}. **Theorem 9**.
*The following statements hold:*

1. *[\[pl:DCT1\]]{#pl:DCT1 label="pl:DCT1"} $\mathsf{C}_p$-bounded subsets of $\mathsf{L}^1([0,1];\mathbb{R})$ are $\mathsf{C}_p$-sequentially closed;*
2. *[\[pl:DCT2\]]{#pl:DCT2 label="pl:DCT2"} $\mathscr{L}_\lambda$-bounded subsets of $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$ are $\mathscr{L}_\lambda$-sequentially closed.*

The necessity of the weaker conclusions for the Dominated Convergence Theorem(s) for the Riemann integral is illustrated by the following examples. First we show why part [\[pl:DCT1\]](#pl:DCT1){reference-type="eqref" reference="pl:DCT1"} of Theorem [Theorem 9](#the:DCT){reference-type="ref" reference="the:DCT"} does not hold for the Riemann integral.

[\[eg:R1!Cp-closed\]]{#eg:R1!Cp-closed label="eg:R1!Cp-closed"} By means of an example, we show that there are $\mathsf{C}_p$-bounded subsets of the seminormed vector space $\mathsf{R}^1([0,1];\mathbb{R})$ that are not sequentially closed in the topology of $\mathsf{C}_p([0,1];\mathbb{R})$. Let us denote $$B=\{f\in\mathsf{R}^1([0,1];\mathbb{R})\mid \lvert f(x)\rvert\le 1\ \textrm{for all}\ x\in[0,1]\},$$ noting by Proposition [Proposition 1](#prop:Cp-bounded){reference-type="ref" reference="prop:Cp-bounded"} that $B$ is $\mathsf{C}_p$-bounded.
Let $(q_j)_{j\in\mathbb{Z}_{>0}}$ be an enumeration of the rational numbers in $[0,1]$ and define a sequence $(F_k)_{k\in\mathbb{Z}_{>0}}$ in $\mathsf{R}^1([0,1];\mathbb{R})$ by $$F_k(x)=\begin{cases}1,&x\in\{q_1,\dots,q_k\},\\ 0,&\textrm{otherwise}.\end{cases}$$ The sequence converges in $\mathsf{C}_p([0,1];\mathbb{R})$ to the characteristic function of $\mathbb{Q}\cap[0,1]$; let us denote this function by $F$. This limit function is not Riemann integrable and so not in $\mathsf{R}^1([0,1];\mathbb{R})$. Thus $B$ is not $\mathsf{C}_p$-sequentially closed. Next we show why part [\[pl:DCT2\]](#pl:DCT2){reference-type="eqref" reference="pl:DCT2"} of Theorem [Theorem 9](#the:DCT){reference-type="ref" reference="the:DCT"} does not hold for the Riemann integral. [\[eg:R1!hatMp-closed\]]{#eg:R1!hatMp-closed label="eg:R1!hatMp-closed"} We give an example that shows that $\mathscr{L}_\lambda$-bounded subsets of the normed vector space $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$ are not $\mathscr{L}_\lambda$-sequentially closed. We first remark that the construction of Example [\[eg:R1!Cp-closed\]](#eg:R1!Cp-closed){reference-type="ref" reference="eg:R1!Cp-closed"}, projected to $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$, does not suffice because $[F]$ is equal to the equivalence class of the zero function which *is* Riemann integrable, even though $F$ is not, cf. the statements following Example 3.117 in [@DSK:04].
The fact that $[F]$ contains functions that are Riemann integrable and functions that are not Riemann integrable is a reflection of the fact that $\mathsf{R}_0([0,1];\mathbb{R})$ is not sequentially closed. This is a phenomenon of interest, but it is not what is of interest here. The construction we use is the following. Let $(q_j)_{j\in\mathbb{Z}_{>0}}$ be an enumeration of the rational numbers in $[0,1]$. Let $\ell\in(0,1)$ and for $j\in\mathbb{Z}_{>0}$ define $$I_j=[0,1]\cap(q_j-\tfrac{\ell}{2^{j+1}},q_j+\tfrac{\ell}{2^{j+1}})$$ to be the interval of length $\frac{\ell}{2^j}$ centred at $q_j$. Then define $A_k=\cup_{j=1}^kI_j$, $k\in\mathbb{Z}_{>0}$, and $A=\cup_{j\in\mathbb{Z}_{>0}}A_j$. Also let $G_k=\chi_{A_k}$, $k\in\mathbb{Z}_{>0}$, and $G=\chi_A$ be the characteristic functions of $A_k$ and $A$, respectively. Note that $A_k$ is a union of a finite number of intervals and so $G_k$ is Riemann integrable for each $k\in\mathbb{Z}_{>0}$. However, we claim that $G$ is not Riemann integrable. Indeed, the characteristic function of a set is Riemann integrable if and only if the boundary of the set has measure zero; this is a direct consequence of Lebesgue's theorem stating that a function is Riemann integrable if and only if its set of discontinuities has measure zero [@DLC:13 Theorem 2.5.4].
Note that since $\operatorname{closure}(\mathbb{Q}\cap[0,1])=[0,1]$ we have $$[0,1]=\operatorname{closure}(A)=A\cup\operatorname{boundary}(A).$$ Thus $$\lambda([0,1])\le\lambda(A)+\lambda(\operatorname{boundary}(A)).$$ Since $$\lambda(A)\le\sum_{j=1}^\infty\lambda(I_j)\le\ell,$$ it follows that $\lambda(\operatorname{boundary}(A))\ge1-\ell\in\mathbb{R}_{>0}$. Thus $G$ is not Riemann integrable, as claimed. It is clear that $(G_k)_{k\in\mathbb{Z}_{>0}}$ is $\mathsf{C}_p$-convergent to $G$. Therefore, by Theorem [Theorem 5](#the:hatM-convergence){reference-type="ref" reference="the:hatM-convergence"} it follows that $([G_k])_{k\in\mathbb{Z}_{>0}}$ is $\mathscr{L}_\lambda$-convergent to $[G]$. We claim that $[G]\not\in\hat{\mathsf{R}}^1([0,1];\mathbb{R})$. This requires that we show that if $g\in\mathsf{C}_p([0,1];\mathbb{R})$ satisfies $[g]=[G]$, then $g$ is not Riemann integrable. To show this, it suffices to show that $g$ is discontinuous on a set of positive measure. We shall show that $g$ is discontinuous on the set $g^{-1}(0)\cap\operatorname{boundary}(A)$. Indeed, let $x\in g^{-1}(0)\cap\operatorname{boundary}(A)$. Then, for any $\epsilon\in\mathbb{R}_{>0}$ we have $(x-\epsilon,x+\epsilon)\cap A\not=\emptyset$ since $x\in\operatorname{boundary}(A)$. Since $(x-\epsilon,x+\epsilon)\cap A$ is a nonempty open set, it has positive measure. Therefore, since $G$ and $g$ agree almost everywhere, there exists $y\in(x-\epsilon,x+\epsilon)\cap A$ such that $g(y)=1$.
Since this holds for every $\epsilon\in\mathbb{R}_{>0}$ and since $g(x)=0$, it follows that $g$ is discontinuous at $x$. Finally, it suffices to show that $g^{-1}(0)\cap\operatorname{boundary}(A)$ has positive measure. But this follows since $\operatorname{boundary}(A)=G^{-1}(0)$ has positive measure and since $G$ and $g$ agree almost everywhere. To complete the example, we note that the sequence $([G_k])_{k\in\mathbb{Z}_{>0}}$ is in the set $$B=\left\{[f]\in\hat{\mathsf{R}}^1([0,1];\mathbb{R})\;\middle|\;\lvert f(x)\rvert\le 1\ \textrm{for almost every}\ x\in[0,1]\right\},$$ which is $\mathscr{L}_\lambda$-bounded by Proposition [Proposition 6](#prop:hatM-bounded){reference-type="ref" reference="prop:hatM-bounded"}. The example shows that this $\mathscr{L}_\lambda$-bounded subset of $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$ is not $\mathscr{L}_\lambda$-sequentially closed. ## Completeness of spaces of integrable functions In Section [2.1](#subsec:nvs-integrable){reference-type="ref" reference="subsec:nvs-integrable"} we constructed the two normed vector spaces $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$ and $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$. Using the Dominated Convergence Theorem, one proves the following well-known and important result [@DLC:13 Theorem 3.4.1]. **Theorem 10**.
*$(\hat{\mathsf{L}}^1([0,1];\mathbb{R}),\lVert\cdot\rVert_1)$ is a Banach space.* It is generally understood that $(\hat{\mathsf{R}}^1([0,1];\mathbb{R}),\lVert\cdot\rVert_1)$ is *not* a Banach space. However, we have not seen this demonstrated in a sufficiently compelling manner, so the following example will hopefully be interesting in this respect. [\[eg:Riemann!Cauchy\]]{#eg:Riemann!Cauchy label="eg:Riemann!Cauchy"} Let us consider the sequence of functions $(G_k)_{k\in\mathbb{Z}_{>0}}$ in $\mathsf{R}^1([0,1];\mathbb{R})$ constructed in Example [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"}. We also use the pointwise limit function $G$ defined in that same example. We shall freely borrow the notation introduced in this example. We claim that the sequence $([G_k])_{k\in\mathbb{Z}_{>0}}$ is Cauchy in $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$. Let $\epsilon\in\mathbb{R}_{>0}$. Note that $\sum_{j=1}^\infty\lambda(I_j)\le\ell$. This implies that there exists $N\in\mathbb{Z}_{>0}$ such that $\sum_{j=k+1}^m\lambda(I_j)<\epsilon$ for all $k,m\ge N$. Now note that for $k,m\in\mathbb{Z}_{>0}$ with $m>k$, the functions $G_k$ and $G_m$ agree except on a subset of $I_{k+1}\cup\dots\cup I_m$. On this subset, $G_m$ has value $1$ and $G_k$ has value $0$. Thus $$\int_0^1\lvert G_m(x)-G_k(x)\rvert\,\mathrm{d}x\le\lambda(I_{k+1}\cup\dots\cup I_m)\le \sum_{j=k+1}^m\lambda(I_j).$$ Thus we can choose $N\in\mathbb{Z}_{>0}$ sufficiently large that $\lVert G_m-G_k\rVert_1<\epsilon$ for $k,m\ge N$. Thus the sequence $([G_k])_{k\in\mathbb{Z}_{>0}}$ is Cauchy, as claimed.
We next show that the sequence $([G_k])_{k\in\mathbb{Z}_{>0}}$ converges to $[G]$ in $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$. In Example [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"} we showed that $([G_k])_{k\in\mathbb{Z}_{>0}}$ is $\mathscr{L}_\lambda$-convergent to $[G]$. Since the sequence $([G-G_k])_{k\in\mathbb{Z}_{>0}}$ is in the subset $$\{[f]\in\hat{\mathsf{L}}^1([0,1];\mathbb{R})\;|\;\lvert f(x)\rvert\le 1\ \textrm{for almost every}\ x\in[0,1]\},$$ and since this subset is $\mathscr{L}_\lambda$-bounded by Proposition [Proposition 6](#prop:hatM-bounded){reference-type="ref" reference="prop:hatM-bounded"}, it follows from Theorem [Theorem 9](#the:DCT){reference-type="ref" reference="the:DCT"}([\[pl:DCT2\]](#pl:DCT2){reference-type="ref" reference="pl:DCT2"}) that $$\lim_{k\to\infty}\lVert G-G_k\rVert_1= \int_I\lim_{k\to\infty}\lvert G-G_k\rvert\,\mathrm{d}\lambda=0.$$ This gives us the desired convergence of $([G_k])_{k\in\mathbb{Z}_{>0}}$ to $[G]$ in $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$. However, in Example [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"} we showed that $[G]\not\in\hat{\mathsf{R}}^1([0,1];\mathbb{R})$. Thus the Cauchy sequence $([G_k])_{k\in\mathbb{Z}_{>0}}$ in $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$ is not convergent in $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$, giving the desired incompleteness of $(\hat{\mathsf{R}}^1([0,1];\mathbb{R}),\lVert\cdot\rVert_1)$.
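Though no computation appears in the text, the two measure estimates underlying Examples [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"} and [\[eg:Riemann!Cauchy\]](#eg:Riemann!Cauchy){reference-type="ref" reference="eg:Riemann!Cauchy"} are easy to check numerically. The following Python sketch (our illustration, using one particular finite enumeration of rationals and $\ell=\frac{1}{2}$) builds the intervals $I_j$, computes $\lambda(A_k)$ by merging overlapping intervals, and confirms both that $\lambda(A_k)\le\ell$ and that the tail measures $\lambda(I_{k+1}\cup\dots\cup I_m)$, which bound $\lVert G_m-G_k\rVert_1$, are small:

```python
from fractions import Fraction

def union_measure(intervals):
    """Lebesgue measure of a finite union of intervals, computed by
    sorting the intervals and merging overlapping runs."""
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in sorted(intervals):
        if cur_hi is None or lo > cur_hi:   # start a new disjoint run
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:                               # overlap: extend the run
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

# an enumeration of the rationals in [0,1] with denominator < 30
qs = list(dict.fromkeys(
    Fraction(p, r) for r in range(1, 30) for p in range(r + 1)))

ell = 0.5
# I_j = [0,1] ∩ (q_j - ell/2^(j+1), q_j + ell/2^(j+1)), length at most ell/2^j
I = [(max(0.0, float(q) - ell / 2 ** (j + 1)),
      min(1.0, float(q) + ell / 2 ** (j + 1)))
     for j, q in enumerate(qs, start=1)]

lam_Ak = union_measure(I)       # λ(A_k) for k = len(qs); at most ell
tail = union_measure(I[10:])    # bounds ||G_m - G_10||_1, at most ell/2^10
print(lam_Ak, tail)
```

Since $A_k$ contains every $q_j$ with $j\le k$ yet has measure at most $\ell$, the dense-but-small character of $A$ is visible directly in the output.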
In [@DSK:04 Example 3.117] the sequence $(f_j)_{j\in\mathbb{Z}_{>0}}$ defined by $$f_j(x)=\begin{cases}0,&x\in[0,\tfrac{1}{j}],\\ x^{-1/2},&x\in(\tfrac{1}{j},1]\end{cases}$$ is shown to be Cauchy in $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$, but not convergent in $\hat{\mathsf{R}}^1([0,1];\mathbb{R})$. This sequence, however, is not as interesting as that in our preceding example since the limit function $f\in\hat{\mathsf{L}}^1([0,1];\mathbb{R})$ in @DSK:04's example *is* Riemann integrable using the usual rule for defining the improper Riemann integral for unbounded functions. In the construction used in Example [\[eg:Riemann!Cauchy\]](#eg:Riemann!Cauchy){reference-type="ref" reference="eg:Riemann!Cauchy"}, the limit function in $\mathsf{L}^1([0,1];\mathbb{R})$ is not Riemann integrable in the sense of bounded functions defined on compact intervals, i.e., in the sense of the usual construction involving approximation above and below by step functions. In [@WD/HM/RJN/EvW:88] a convergence structure is introduced on the set of Riemann integrable functions that is sequentially complete. The idea in this work is to additionally constrain convergence in $\hat{\mathsf{L}}^1([0,1];\mathbb{R})$ in such a way that Riemann integrability is preserved by limits. ## The $\mathsf{L}^2$-Fourier transform for the Riemann integral {#subsec:riemann-ccft} In order to illustrate why it is important that spaces of integrable functions be complete, we consider the theory of the $\mathsf{L}^2$-Fourier transform restricted to square Riemann integrable functions. Let us first recall the essential elements of the theory.
For $p\in[1,\infty)$ we denote by $\mathsf{L}^p(\mathbb{R};\mathbb{C})$ the set of $\mathbb{C}$-valued functions $f$ defined on $\mathbb{R}$ which satisfy $$\int_{\mathbb{R}}\lvert f\rvert^p\,\mathrm{d}\lambda<\infty,$$ where we now denote by $\lambda$ the Lebesgue measure on $\mathbb{R}$. We let $$\mathsf{L}_0(\mathbb{R};\mathbb{C})=\left\{f\in\mathsf{L}^p(\mathbb{R};\mathbb{C})\;\middle|\;\int_{\mathbb{R}}\lvert f\rvert^p\,\mathrm{d}\lambda=0\right\}$$ and denote $$\hat{\mathsf{L}}^p(\mathbb{R};\mathbb{C})=\mathsf{L}^p(\mathbb{R};\mathbb{C})/\mathsf{L}_0(\mathbb{R};\mathbb{C}).$$ As we have done previously, we denote $[f]=f+\mathsf{L}_0(\mathbb{R};\mathbb{C})$. If we define $$\lVert[f]\rVert_p=\left(\int_{\mathbb{R}}\lvert f\rvert^p\,\mathrm{d}\lambda\right)^{1/p}$$ then one shows that $(\hat{\mathsf{L}}^p(\mathbb{R};\mathbb{C}),\lVert\cdot\rVert_p)$ is a Banach space [@DLC:13 Theorem 3.4.1]. For $a\in\mathbb{C}$ let us denote $\exp_a\colon\mathbb{R}\rightarrow\mathbb{C}$ by $\exp_a(x)=\textup{e}^{ax}$. For $[f]\in\hat{\mathsf{L}}^1(\mathbb{R};\mathbb{C})$ one defines $\mathscr{F}_1([f])\colon\mathbb{R}\rightarrow\mathbb{C}$ by $$\mathscr{F}_1([f])(\omega)=\int_{\mathbb{R}}f\exp_{-2\pi\textup{i}\omega}\,\mathrm{d}\lambda,$$ the ***$\mathsf{L}^1$-Fourier transform*** of $[f]\in\hat{\mathsf{L}}^1(\mathbb{R};\mathbb{C})$.
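As a quick sanity check of this definition (our example, not from the text), for the indicator function $f=\chi_{[0,1]}$ one computes directly that $\mathscr{F}_1([f])(\omega)=(1-\textup{e}^{-2\pi\textup{i}\omega})/(2\pi\textup{i}\omega)$ for $\omega\ne 0$, and a midpoint Riemann sum of the defining integral reproduces this value:

```python
import cmath

def ft_indicator_numeric(omega, n=100_000):
    """Midpoint Riemann sum approximating ∫_0^1 e^(-2πiωx) dx,
    the L^1-Fourier transform of the indicator of [0,1]."""
    h = 1.0 / n
    return sum(cmath.exp(-2j * cmath.pi * omega * (k + 0.5) * h) * h
               for k in range(n))

def ft_indicator_exact(omega):
    # closed form, valid for omega != 0
    return (1 - cmath.exp(-2j * cmath.pi * omega)) / (2j * cmath.pi * omega)

w = 0.5
print(abs(ft_indicator_numeric(w) - ft_indicator_exact(w)))
```

The modulus of the exact value is bounded by $\lVert f\rVert_1=1$, consistent with $\mathscr{F}_1$ mapping into bounded continuous functions.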
If we define $\mathsf{C}^0_{0,\textup{uc}}(\mathbb{R};\mathbb{C})$ to be the set of uniformly continuous $\mathbb{C}$-valued functions $f$ on $\mathbb{R}$ that satisfy $\lim_{\lvert x\rvert\to\infty}\lvert f(x)\rvert=0$, then $(\mathsf{C}^0_{0,\textup{uc}}(\mathbb{R};\mathbb{C}),\lVert\cdot\rVert_\infty)$ is a Banach space with $$\lVert f\rVert_\infty=\sup\{\lvert f(x)\rvert\;|\;x\in\mathbb{R}\}.$$ Moreover, $\mathscr{F}_1([f])\in\mathsf{C}^0_{0,\textup{uc}}(\mathbb{R};\mathbb{C})$ and the linear map $\mathscr{F}_1\colon\hat{\mathsf{L}}^1(\mathbb{R};\mathbb{C})\rightarrow\mathsf{C}^0_{0,\textup{uc}}(\mathbb{R};\mathbb{C})$ is continuous [@CG/PW:99 Theorem 17.1.3]. For $[f]\in\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ the preceding construction cannot be applied verbatim since $\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ is not a subspace of $\hat{\mathsf{L}}^1(\mathbb{R};\mathbb{C})$. However, one can make an adaptation as follows [@CG/PW:99 Lesson 22]. One shows that $\hat{\mathsf{L}}^1(\mathbb{R};\mathbb{C})\cap\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ is dense in $\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$. One can do this explicitly by defining, for $[f]\in\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$, a sequence $([f_j])_{j\in\mathbb{Z}_{>0}}$ in $\hat{\mathsf{L}}^1(\mathbb{R};\mathbb{C})\cap\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ by $$f_j(x)=\begin{cases}f(x),&x\in[-j,j],\\ 0,&\textrm{otherwise},\end{cases}$$ and showing, using the Cauchy-Bunyakovsky-Schwarz inequality, that this sequence converges in $\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ to $[f]$.
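To see the truncation argument in action, take (our illustration) $f(x)=1/(1+\lvert x\rvert)$, which is in $\mathsf{L}^2(\mathbb{R};\mathbb{C})$ but not in $\mathsf{L}^1(\mathbb{R};\mathbb{C})$. The truncation errors satisfy $\lVert f-f_j\rVert_2^2=\int_{\lvert x\rvert>j}(1+\lvert x\rvert)^{-2}\,\mathrm{d}\lambda=2/(1+j)\to 0$, which a direct numerical integration confirms:

```python
def l2_tail_exact(j):
    # ||f - f_j||_2^2 = ∫_{|x|>j} dx/(1+|x|)^2 = 2/(1+j), with
    # f(x) = 1/(1+|x|) and f_j its truncation to [-j, j]
    return 2.0 / (1.0 + j)

def l2_tail_numeric(j, R=10_000.0, n=400_000):
    # midpoint sum of 2 * ∫_j^R dx/(1+x)^2; the mass beyond R is ~2/R
    h = (R - j) / n
    return 2.0 * sum(h / (1.0 + j + (i + 0.5) * h) ** 2 for i in range(n))

print(l2_tail_exact(3), l2_tail_numeric(3))
```

By contrast, $\int_{-j}^j f\,\mathrm{d}\lambda=2\ln(1+j)$ diverges as $j\to\infty$, so the $f_j$ converge in the $\mathsf{L}^2$-norm even though $f$ itself has no finite $\mathsf{L}^1$-norm.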
Moreover, one can show that the sequence $(\mathscr{F}_1([f_j]))_{j\in\mathbb{Z}_{>0}}$ is a Cauchy sequence in $\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ and so converges to some element of $\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ that we denote by $\mathscr{F}_2([f])$, the ***$\hat{\mathsf{L}}^2$-Fourier transform*** of $[f]\in\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$. The map $\mathscr{F}_2\colon\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})\rightarrow\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ so defined is, moreover, a Hilbert space isomorphism. The inverse has the property that $$\mathscr{F}_2^{-1}([f])(x)=\int_{\mathbb{R}}f\exp_{2\pi\textup{i}x}\,\mathrm{d}\lambda$$ for almost every $x\in\mathbb{R}$, where the same constructions leading to the definition of $\mathscr{F}_2$ for functions in $\hat{\mathsf{L}}^2(\mathbb{R};\mathbb{C})$ are applied. Let us see that $\mathscr{F}_2$ restricted to the subspace of square Riemann integrable functions is problematic. We denote by $\mathsf{R}^p(\mathbb{R};\mathbb{C})$ the collection of functions $f\colon\mathbb{R}\rightarrow\mathbb{C}$ which satisfy $$\int_{-\infty}^\infty\lvert f(x)\rvert^p\,\mathrm{d}x<\infty,$$ where we use, as above, the Riemann integral for possibly unbounded functions defined on unbounded domains [@JEM/MJH:93 Section 8.5]. We also define $$\mathsf{R}_0(\mathbb{R};\mathbb{C})=\left\{f\in\mathsf{R}^p(\mathbb{R};\mathbb{C})\;\middle|\;\int_{-\infty}^\infty\lvert f(x)\rvert^p\,\mathrm{d}x=0\right\}$$ and denote $$\hat{\mathsf{R}}^p(\mathbb{R};\mathbb{C})=\mathsf{R}^p(\mathbb{R};\mathbb{C})/\mathsf{R}_0(\mathbb{R};\mathbb{C}).$$ As we have done previously, we denote $[f]=f+\mathsf{R}_0(\mathbb{R};\mathbb{C})$.
If we define $$\lVert[f]\rVert_p=\left(\int_{-\infty}^\infty\lvert f(x)\rvert^p\,\mathrm{d}x\right)^{1/p}$$ then $(\hat{\mathsf{R}}^p(\mathbb{R};\mathbb{C}),\lVert\cdot\rVert_p)$ is a normed vector space. It is not a Banach space since the construction of Example [\[eg:Riemann!Cauchy\]](#eg:Riemann!Cauchy){reference-type="ref" reference="eg:Riemann!Cauchy"} can be extended to $\hat{\mathsf{R}}^p(\mathbb{R};\mathbb{C})$ by taking all functions to be zero outside the interval $[0,1]$. Let us show that $\mathscr{F}_2|\hat{\mathsf{R}}^2(\mathbb{R};\mathbb{C})$ does not take values in $\hat{\mathsf{R}}^2(\mathbb{R};\mathbb{C})$, and thus show that the "$\hat{\mathsf{R}}^2$-Fourier transform" is not well-defined. We denote by $G$ the function defined in Example [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"}, but now extended to be defined on $\mathbb{R}$ by taking it to be zero off $[0,1]$. We have $G\in\mathsf{L}^1(\mathbb{R};\mathbb{C})\cap\mathsf{L}^2(\mathbb{R};\mathbb{C})$ since $G$ is bounded and measurable with compact support. Now define $F\colon\mathbb{R}\rightarrow\mathbb{C}$ by $$F(x)=\int_{\mathbb{R}}G\exp_{2\pi\textup{i}x}\,\mathrm{d}\lambda;$$ thus $F$ is the inverse Fourier transform of $G$. Since $G\in\mathsf{L}^1(\mathbb{R};\mathbb{C})$ it follows that $F\in\mathsf{C}^0_{0,\textup{uc}}(\mathbb{R};\mathbb{C})$. Therefore, $F|[-R,R]$ is continuous and bounded, and hence Riemann integrable for every $R\in\mathbb{R}_{>0}$.
Since $G\in\mathsf{L}^2(\mathbb{R};\mathbb{C})$ we have $F\in\mathsf{L}^2(\mathbb{R};\mathbb{C})$ which implies that $$\int_{-R}^R\lvert F(x)\rvert^2\,\mathrm{d}x= \int_{[-R,R]}\lvert F\rvert^2\,\mathrm{d}\lambda\le \int_{\mathbb{R}}\lvert F\rvert^2\,\mathrm{d}\lambda,\qquad R\in\mathbb{R}_{>0}.$$ Thus the limit $$\lim_{R\to\infty}\int_{-R}^R\lvert F(x)\rvert^2\,\mathrm{d}x$$ exists. This is exactly the condition for Riemann integrability of $F$ as a function on an unbounded domain [@JEM/MJH:93 Section 8.5]. Now, since $[F]=\mathscr{F}_2^{-1}([G])$ by definition, we have $\mathscr{F}_2([F])=[G]$. In Example [\[eg:R1!hatMp-closed\]](#eg:R1!hatMp-closed){reference-type="ref" reference="eg:R1!hatMp-closed"} we showed that $[G]|[0,1]\not\in\hat{\mathsf{R}}^1([0,1];\mathbb{C})$. From this we conclude that $[G]\not\in\hat{\mathsf{R}}^1(\mathbb{R};\mathbb{C})$ and, since $\lvert G\rvert^2=G$, $[G]\not\in\hat{\mathsf{R}}^2(\mathbb{R};\mathbb{C})$. Thus $\mathscr{F}_2(\hat{\mathsf{R}}^2(\mathbb{R};\mathbb{C}))\not\subseteq \hat{\mathsf{R}}^2(\mathbb{R};\mathbb{C})$, as desired. [^1]: Professor, Department of Mathematics and Statistics, Queen's University, Kingston, ON K7L 3N6, Canada, email: `andrew.lewis@queensu.ca` [^2]: Research supported in part by a grant from the Natural Sciences and Engineering Research Council of Canada [^3]: @RWH:98 himself seems to find the Henstock integral, which generalises the Lebesgue integral, to be a more satisfactory construction than the Lebesgue integral. [^4]: Since the "almost everywhere pointwise convergent" version actually requires the Lebesgue theory of integration.
--- abstract: | For a timely decarbonization of our economy, power systems need to accommodate increasing numbers of clean but stochastic resources. This requires new operational methods that internalize this stochasticity to ensure safety and efficiency. This paper proposes a novel approach to compute adaptive safety intervals for each stochastic resource that internalize power flow physics and optimize the expected cost of system operations, making them "prescriptive". The resulting intervals are interpretable and can be used in a tractable robust optimal power flow problem as uncertainty sets. We use stochastic gradient descent with differentiable optimization layers to compute a mapping that obtains these intervals from a given vector of context parameters that captures the expected system state. We demonstrate and discuss the proposed approach on two case studies. author: - | \ bibliography: - literature.bib title: Prescribed Robustness in Optimal Power Flow --- [^1] # Introduction The ongoing deployment of clean but stochastic energy resources challenges established power system operations by threatening their ability to ensure system security and economic efficiency. To tackle this, numerous effective stochastic optimization methods have been proposed, covering, for example, system scheduling and control [@roald2017chance; @lee2021robust; @bienstock2014chance], electricity markets and the unit commitment process [@kazempour2018stochastic; @bertsimas2012adaptive], and inertia provision [@paturet2020stochastic; @liang2022inertia]. However, the adoption of [@roald2017chance; @lee2021robust; @bienstock2014chance; @kazempour2018stochastic; @bertsimas2012adaptive; @paturet2020stochastic; @liang2022inertia] and similar proposals is obstructed by the necessity to significantly alter established operations, an expensive and risky process for system operators.
Moreover, stochastic approaches to power system operations typically optimize reserve allocations and control policies implicitly based on probabilistic models and risk parameters defined by the system operator. This complicates transparent communication with power system stakeholders (e.g., owners of generation assets, electricity market participants who are interested in anticipating price formation and scheduling procedures). To overcome these barriers and further facilitate the adoption of clean energy technology, actionable methods for power system operation and planning that smartly internalize resource stochasticity while remaining interpretable and largely compatible with established processes are required. Motivated by this requirement, this paper proposes a data-driven approach inspired by [@wang2023learning] to compute uncertainty sets for stochastic resources that (i) robustify generation and reserve allocation, (ii) internalize system physics, and (iii) minimize risk-adjusted cost while avoiding the need for a stochastic decision-making process. Deviating from [@wang2023learning], we introduce an additional prescription step that makes it possible to adjust the uncertainty sets based on the problem context. ## General problem formulation Consider power system operations as a two-stage optimization problem with uncertain parameters $\bm{\xi}$, e.g., real-time nodal demand or renewable energy injection. Given a current parametrization of the system $\bm{\zeta}$ (e.g., net-demand forecast, generator availability), the first stage computes a resource allocation $\bm{x}$ (e.g., generation schedules, reserve allocations). The second stage then decides on recourse actions $\bm{y}$ depending on the first-stage allocation $\bm{x}^*$ and the realization of $\bm{\xi}$.
In current industry practice, the two stages are solved deterministically and independently using fixed security parameters $\bm{\theta}$ (e.g., reserve requirements) in the first stage: $$\begin{aligned} {3} &\text{1st stage:} \quad \hspace{1.8em} \mathcal{P}(\bm{\zeta},\bm{\theta}) &&= \min_{\bm{x}\in\mathscr{X}(\bm{\zeta},\bm{\theta})} \mathcal{C}^{\rm F}(\bm{x}, \bm{\zeta}, \bm{\theta}) \label{eq:first_stage}\\ &\text{2nd stage:} \quad \mathcal{Q}(\bm{x}^*, \bm{\zeta}, \bm{\xi}) &&= \min_{\bm{y}\in\mathscr{Y}(\bm{x}^*, \bm{\zeta}, \bm{\xi})} \mathcal{C}^{\rm S} (\bm{x}^*, \bm{\zeta}, \bm{\xi})\end{aligned}$$ While this process is computationally efficient and transparent, it ignores any probabilistic information that may be available from models or historic observations of $\bm{\xi}$. As an alternative to replacing this deterministic two-step procedure with probabilistic optimization (see discussion above), this paper proposes an adaptive computation of parameters $\bm{\theta}$ that internalizes information on the stochasticity of $\bm{\xi}$ and is aware of $\bm{\zeta}$. We highlight that vector $\bm{\zeta}$ can be considered richer than just containing problem parameters, but may also contain additional covariates of $\bm{\xi}$ (e.g., weather information, forecasts). We therefore call $\bm{\zeta}$ *context parameters*. Given $\bm{\zeta}$, the optimal choice of $\bm{\theta}$ in the first stage is $$\bm{\theta}^*(\bm{\zeta}) \in \mathop{\mathrm{arg\,min}}_{\bm{\theta}}\mathbb{E}_{(\bm{\xi}\mid\bm{\zeta})}\big[\mathcal{P}(\bm{\zeta},\bm{\theta}) + \mathcal{Q}(\bm{x}^*, \bm{\zeta}, \bm{\xi})\big]. \label{eq:general_problem}$$ Following [@bertsimas2020predictive], we call $\bm{\theta}^*$ *prescriptive* in the context of $\bm{\zeta}$, as it provides a parametrization that minimizes the conditional expectation over $\bm{\xi}$.
To avoid re-solving [\[eq:general_problem\]](#eq:general_problem){reference-type="ref" reference="eq:general_problem"} every time (which might be hard or impossible), the objective of this paper is to obtain a mapping $\mathcal{M}$ that computes $\bm{\theta}^*$ from $\bm{\zeta}$, i.e.: $$\min_{\mathcal{M}}\mathbb{E}_{(\bm{\zeta,\bm{\xi}})}\big[\mathcal{P}(\bm{\zeta}, \mathcal{M}(\bm{\zeta})) + \mathcal{Q}(\bm{x}^*, \bm{\zeta}, \bm{\xi})\big]. \label{eq:general_training_problem}$$ Solving problem [\[eq:general_training_problem\]](#eq:general_training_problem){reference-type="ref" reference="eq:general_training_problem"} is hard in general and its tractability depends on the definition of the operational problem $\mathcal{P}$, $\mathcal{Q}$ and the mapping $\mathcal{M}$. This paper studies the following approach: - For $\mathcal{P}$ and $\mathcal{Q}$ we focus on an optimal power flow (OPF) problem where the first stage computes a reserve allocation and a second-stage control policy that ensures system safety for predefined uncertainty regions given by $\bm{\theta}$. This approach follows the idea of robust optimization, but we can avoid overly conservative solutions by properly tuning $\bm{\theta}$. Section [2](#sec:operation_model){reference-type="ref" reference="sec:operation_model"} explains the model in detail. - For $\mathcal{M}$ we focus on an interpretable linear model $\mathcal{M}_{\bm{w}}(\bm{\zeta}) = \bm{M} \bm{\zeta} + \bm{m}$ parametrized by $\bm{w} = (\bm{M}, \bm{m})$ similar to the approach in [@morales2023prescribing]. We will refer to $\bm{w}$ as *weights* throughout this paper to avoid confusion with *parameters* $\bm{\zeta}$ and $\bm{\theta}$. - The resulting problem is a bi-level program that we solve using stochastic gradient descent with differentiable optimization layers. We discuss this in Section [3](#sec:solution_approach){reference-type="ref" reference="sec:solution_approach"}.
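To make the training idea tangible, consider a deliberately simplified sketch (ours, with made-up costs and a scalar toy problem in place of the OPF layers): the first stage buys reserve $r=\mathcal{M}_{\bm{w}}(\zeta)=w\zeta+m$ at unit cost $c^{\rm R}$, the second stage pays $c^{\rm S}$ per unit of shortfall $\max(\xi-r,0)$, and $\xi\mid\zeta$ is uniform on $[0,\zeta]$. Stochastic (sub)gradient descent on the sampled total cost then drives the linear map towards the cost-optimal quantile $r^*(\zeta)=(1-c^{\rm R}/c^{\rm S})\,\zeta$, mirroring how problem [\[eq:general_training_problem\]](#eq:general_training_problem){reference-type="ref" reference="eq:general_training_problem"} tunes $\bm{\theta}$ through the downstream objective (the actual method differentiates through the optimization layers instead of a closed-form cost):

```python
import random

random.seed(0)
c_res, c_short = 1.0, 5.0      # assumed reserve and shortfall costs
w, m = 0.0, 0.0                # weights of the linear map r = w*zeta + m

for t in range(200_000):
    zeta = random.uniform(1.0, 2.0)      # sampled context parameter
    xi = random.uniform(0.0, zeta)       # realized uncertainty given context
    r = w * zeta + m
    # subgradient of  c_res*r + c_short*max(xi - r, 0)  with respect to r
    g = c_res - (c_short if xi > r else 0.0)
    lr = 1.0 / (100.0 + t) ** 0.75       # Robbins-Monro step sizes
    w -= lr * g * zeta                   # chain rule: dr/dw = zeta
    m -= lr * g                          # chain rule: dr/dm = 1

# the cost-optimal map is the (1 - c_res/c_short)-quantile, here 0.8*zeta
print(w, m)
```

The learned weights approximate $w\approx 0.8$, $m\approx 0$, i.e., the prescriptive parameter internalizes both the conditional distribution of $\xi$ and the asymmetric cost structure without ever solving a stochastic program at decision time.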
## Related literature Robust optimization for OPF has been studied alongside numerous proposals for general stochastic OPF. Mainly introduced by the seminal work in [@bertsimas2012adaptive], robust OPF can provide security with manageable computational complexity and transparent communication of the considered uncertain region for each resource. More recent variants include data-driven approaches [@bertsimas2018data] and tractable extensions to AC-OPF [@louca2018robust; @lee2021robust]. However, how to define good uncertainty sets that avoid overly conservative results remains tricky [@golestaneh2018polyhedral]. Adoption barriers for stochastic optimization in power systems have been previously highlighted in [@morales2014electricity; @vdvorkin2018setting] and options for approximating their performance with no or little alteration of current industry practice have been explored in [@wang2013flexiramp; @morales2014electricity; @vdvorkin2018setting; @garcia2021application; @morales2023prescribing]. While [@wang2013flexiramp; @vdvorkin2018setting] study more flexible reserve products and reserve requirements informed by an auxiliary stochastic program, respectively, [@morales2014electricity; @garcia2021application; @morales2023prescribing] propose methods that prescribe intentionally biased input parameters (forecasts) to the first stage problem. This approach fits our formulations in [\[eq:general_problem\]](#eq:general_problem){reference-type="ref" reference="eq:general_problem"} and [\[eq:general_training_problem\]](#eq:general_training_problem){reference-type="ref" reference="eq:general_training_problem"}. In [@morales2014electricity] the authors solve a bi-level program for each instance of the first stage to obtain a (prescriptive) alternative wind power forecast. Building on this idea, [@morales2023prescribing] computes a mapping from the original to a prescribed net-demand forecast. 
To this end, the authors include the first stage optimality conditions in the second stage to compute the map in a single stochastic program. Pursuing a similar objective of obtaining optimally biased forecasts that reflect the asymmetric power system cost structure (generation excess can typically be handled more cheaply than shortage), [@garcia2021application] propose a bi-level program with a scalable solution heuristic. Models (e.g., for forecasting) that minimize the loss of a downstream optimization task have gained general popularity as *end-to-end learning* [@kotary2021end] or *smart predict-and-optimize* [@elmachtoub2022smart]. Many exciting results in this direction have been unlocked by differentiable optimization frameworks, e.g., [@agrawal2019differentiable], that enable efficient iterative model training procedures that internalize optimization layers. For example, [@donti2017task] train a demand forecast model that minimizes the expected cost of generation excess and shortage, [@liang2022inertia] train a generative network to obtain adversarial forecast scenarios to improve reserve allocation, [@wahdany2023more] create a wind power forecast model that minimizes wind spillage, and [@vdvorkin2023price] tune their wind power prediction model to minimize forecast errors in resulting electricity prices. # Operation model {#sec:operation_model} We consider a short-term generator dispatch problem with balancing control (e.g., automatic generator control) with uncertain injections from stochastic wind generators. We model these uncertain injections as a $D$-dimensional vector $\bm{u}(\bm{\xi})=\bm{u} + \bm{\xi}$, where $\bm{u}$ is a (deterministic) forecast and $\bm{\xi}$ is a vector of random forecast errors. Power imbalances caused by forecast errors $\bm{\xi}$ are corrected by controllable generators. The generator output is a $G$-dimensional vector $\bm{p}(\bm{\xi}) = \bm{p} - \bm{A}\bm{\xi}$.
By ensuring $\bm{A}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{1}_{G} = \bm{1}_D$, where $\bm{1}_{D}$ is a $D$-dimensional vector of ones, all forecast errors are balanced. ## Robust optimal power flow with fixed recourse {#ssec:robust_opf} To immunize the system against uncertain injections $\bm{u}(\bm{\xi})$, $\bm{p}(\bm{\xi})$ and the resulting uncertain power flows, the system operator defines an uncertainty set $\mathscr{U}$ that captures all outcomes of $\bm{\xi}$ for which all system constraints should hold. Given a parametrization $\bm{\zeta}$ (forecast $\bm{u}$ and demand vector $\bm{d}$[^2]) the system operator then solves the following robust OPF problem to decide on the generator dispatch $\bm{p}$ and reserves $\bm{r^+}$, $\bm{r^-}$: $$\begin{aligned} \min \quad & (\bm{c}^{\rm E})^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{p} + (\bm{c}^{\rm R})^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}(\bm{r}^+ + \bm{r}^{-}) \label{base_dcopf:objective}\\ \text{s.t.}\quad & \bm{1}_G^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{p} = \bm{1}_V^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{d} - \bm{1}_D^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{u} \label{base_dcopf:enerbal} \\ & \bm{A}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{1}_G= \bm{1}_D \label{base_dcopf:resbal}\\ & \bm{p} + \bm{r}^+ \le \bm{p}^{\rm max} \label{base_dcopf:gen_uplim}\\ & \bm{p} - \bm{r}^- \ge \bm{p}^{\rm min} \label{base_dcopf:gen_dnlim}\\ & \bm{B}^{\rm G} \bm{p} + \bm{B}^{\rm W} \bm{u} - \bm{B}^{\rm B}\bm{d} = \bm{f}^{\rm max} - \bm{f}^{\rm RAM+} \label{base_dcopf:lin_uplim}\\ &-(\bm{B}^{\rm G} \bm{p} + \bm{B}^{\rm W} \bm{u} - \bm{B}^{\rm B}\bm{d}) = \bm{f}^{\rm max} - \bm{f}^{\rm RAM-} \label{base_dcopf:lin_dnlim} \\ & -\bm{A}\bm{\xi} \le \bm{r}^+ &&\hspace{-2cm} \forall \bm{\xi}\in\mathscr{U} \label{base_dcopf:rob_resconst_up}\\ & \bm{A}\bm{\xi} \le \bm{r}^- &&\hspace{-2cm} \forall \bm{\xi}\in\mathscr{U} \label{base_dcopf:rob_resconst_lo}\\ & (\bm{B}^{\rm W} - \bm{B}^{\rm G}\bm{A})\bm{\xi} \le \bm{f}^{\rm RAM+}\!
&&\hspace{-2cm} \forall \bm{\xi}\in\mathscr{U} \label{base_dcopf:rob_flowconst_up}\\ & -(\bm{B}^{\rm W} - \bm{B}^{\rm G}\bm{A})\bm{\xi} \le \bm{f}^{\rm RAM-}\! &&\hspace{-2cm} \forall \bm{\xi}\in\mathscr{U} \label{base_dcopf:rob_flowconst_lo}\end{aligned}$$ [\[eq:base_dcopf\]]{#eq:base_dcopf label="eq:base_dcopf"} The objective [\[base_dcopf:objective\]](#base_dcopf:objective){reference-type="ref" reference="base_dcopf:objective"} minimizes system cost given energy cost and reserve provision cost vectors $\bm{c}^{\rm E}$ and $\bm{c}^{\rm R}$. Energy balance [\[base_dcopf:enerbal\]](#base_dcopf:enerbal){reference-type="ref" reference="base_dcopf:enerbal"} ensures that the total controllable generation equals the total system demand $\bm{d}$ net of the forecast wind injections $\bm{u}$. Similarly, [\[base_dcopf:resbal\]](#base_dcopf:resbal){reference-type="ref" reference="base_dcopf:resbal"} ensures that all forecast errors are balanced. Constraints [\[base_dcopf:gen_uplim,base_dcopf:gen_dnlim\]](#base_dcopf:gen_uplim,base_dcopf:gen_dnlim){reference-type="ref" reference="base_dcopf:gen_uplim,base_dcopf:gen_dnlim"} enforce the technical production limits of each controllable generator. Constraints [\[base_dcopf:lin_uplim,base_dcopf:lin_dnlim\]](#base_dcopf:lin_uplim,base_dcopf:lin_dnlim){reference-type="ref" reference="base_dcopf:lin_uplim,base_dcopf:lin_dnlim"} map the power injections and withdrawals of each resource and load to a resulting power flow using suitable linear maps [@bolognani2015fast], e.g., obtained from the DC power flow approximation. Vectors $\bm{f}^{\rm RAM+}$ and $\bm{f}^{\rm RAM-}$ are the remaining available margins for each power transmission line, i.e., the difference between the power flow caused by the forecast injections and the upper and lower line limits.
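The affine recourse model $\bm{p}(\bm{\xi}) = \bm{p} - \bm{A}\bm{\xi}$ with $\bm{A}^{\mathsf{T}}\bm{1}_G = \bm{1}_D$ can be checked numerically. The following minimal sketch (all numbers are illustrative and not taken from the case study) verifies that, whenever the columns of $\bm{A}$ sum to one, the recourse absorbs any forecast-error realization:

```python
# Toy check of the affine recourse p(xi) = p - A*xi with column sums of A equal to 1.
# G controllable generators, D wind farms; all values illustrative.
G, D = 3, 2
A = [[0.5, 0.2],   # participation of generator g in balancing wind farm d
     [0.3, 0.3],
     [0.2, 0.5]]
p = [1.0, 0.8, 0.6]  # scheduled dispatch
u = [0.9, 1.1]       # wind forecast

# A^T 1_G = 1_D: every forecast error is fully balanced.
assert all(abs(sum(A[g][d] for g in range(G)) - 1.0) < 1e-12 for d in range(D))

def total_injection(xi):
    """Total system injection for a forecast-error realization xi."""
    p_adj = [p[g] - sum(A[g][d] * xi[d] for d in range(D)) for g in range(G)]
    u_real = [u[d] + xi[d] for d in range(D)]
    return sum(p_adj) + sum(u_real)

# The net injection is invariant to xi: the recourse absorbs every error.
base = total_injection([0.0, 0.0])
for xi in ([0.2, -0.1], [-0.3, 0.4]):
    assert abs(total_injection(xi) - base) < 1e-12
```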
Constraints [\[base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up\]](#base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up){reference-type="ref" reference="base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up"} enforce robust constraints on the system response to uncertain forecast errors $\bm{\xi}$. Constraints [\[base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up\]](#base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up){reference-type="ref" reference="base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up"} ensure that generator balancing responses do not exceed the available reserves and [\[base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up\]](#base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up){reference-type="ref" reference="base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up"} ensure that the resulting power flow changes do not exceed the remaining available margins on each power line for any $\bm{\xi}\in\mathscr{U}$. Problem [\[eq:base_dcopf\]](#eq:base_dcopf){reference-type="ref" reference="eq:base_dcopf"} can be re-written in a more concise form. 
We collect all decision variables in a vector $\bm{x}$, cost vectors $\bm{c}^{\rm E}$, $\bm{c}^{\rm R}$ in a vector $\bm{c}$, denote the feasible space defined by constraints [\[base_dcopf:resbal,base_dcopf:enerbal,base_dcopf:gen_uplim,base_dcopf:gen_dnlim,base_dcopf:lin_uplim,base_dcopf:lin_dnlim\]](#base_dcopf:resbal,base_dcopf:enerbal,base_dcopf:gen_uplim,base_dcopf:gen_dnlim,base_dcopf:lin_uplim,base_dcopf:lin_dnlim){reference-type="ref" reference="base_dcopf:resbal,base_dcopf:enerbal,base_dcopf:gen_uplim,base_dcopf:gen_dnlim,base_dcopf:lin_uplim,base_dcopf:lin_dnlim"} as $\mathscr{F}(\bm{\zeta})$ and write: $$\begin{aligned} \min_{\bm{x}\in\mathscr{F}(\bm{\zeta})}\ & \bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x} \label{compact_dcopf:objective} \\ \text{s.t.}\quad & \max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k] \le 0, \quad \forall\bm{\xi}\in\mathscr{U} \label{compact_dcopf:max_of_affine_robust}\end{aligned}$$ [\[eq:compact_dcopf\]]{#eq:compact_dcopf label="eq:compact_dcopf"} where $\bm{a}_k$ and $b_k$ are, respectively, the $k$-th row and $k$-th entry of the $K\times D$ matrix and $K\times 1$ vector $$\begin{bmatrix} -\bm{A} \\ \bm{A} \\ (\bm{B}^{\rm W} - \bm{B}^{\rm G}\bm{A}) \\ -(\bm{B}^{\rm W} - \bm{B}^{\rm G}\bm{A}) \end{bmatrix} \text{ and } \begin{bmatrix} -\bm{r}^+ \\ -\bm{r}^- \\ - \bm{f}^{\rm RAM+} \\ -\bm{f}^{\rm RAM-} \end{bmatrix}.
$$ Note that [\[compact_dcopf:max_of_affine_robust\]](#compact_dcopf:max_of_affine_robust){reference-type="ref" reference="compact_dcopf:max_of_affine_robust"} is an exact reformulation of [\[base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up\]](#base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up){reference-type="ref" reference="base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up"} as the maximum of $K$ affine functions. ## Uncertainty set formulation Model [\[eq:base_dcopf\]](#eq:base_dcopf){reference-type="ref" reference="eq:base_dcopf"} cannot be solved directly but requires a definition of $\mathscr{U}$ alongside a tractable reformulation of constraints [\[base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up\]](#base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up){reference-type="ref" reference="base_dcopf:rob_resconst_lo,base_dcopf:rob_resconst_up,base_dcopf:rob_flowconst_lo,base_dcopf:rob_flowconst_up"}. This paper focuses on box uncertainty sets, as they provide a clear safety region for each uncertain resource. Other formulations are possible [@bertsimas2011theory]. Box uncertainty sets ensure that constraints are feasible for a security interval along each dimension of the uncertain vector $\bm{\xi}$. We can define such a set as $\mathscr{U}^{\rm box}(\bm{\theta}) = \{\bm{\xi}\in\mathbb{R}^D\mid\xi_j\in[\mu_j-\sigma_j, \mu_j+\sigma_j],\ j=1,...,D\}$ parametrized by $\bm{\theta}=(\bm{\mu}, \bm{\sigma})$ with $\bm{\mu}=[\mu_1,...,\mu_D]$ and $\bm{\sigma}=[\sigma_1,...,\sigma_D]$. Parameter $\bm{\mu}$ is the center of the security interval and can be interpreted as a forecast error bias. Parameter $\bm{\sigma}$ defines the half-width of the interval.
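The box set $\mathscr{U}^{\rm box}(\bm{\theta})$ can be sketched in a few lines. The snippet below (all values illustrative) checks membership and also verifies the standard fact that a linear function $\bm{a}^{\mathsf{T}}\bm{\xi}$ attains its maximum over a box at one of its corners, with the closed form $\bm{a}^{\mathsf{T}}\bm{\mu} + |\bm{a}|^{\mathsf{T}}\bm{\sigma}$:

```python
from itertools import product

# Box uncertainty set U_box(theta), theta = (mu, sigma); values illustrative.
mu = [0.1, -0.05]   # interval centers (forecast-error bias)
sigma = [0.3, 0.2]  # interval half-widths
D = len(mu)

def in_box(xi):
    """Membership test: xi_j in [mu_j - sigma_j, mu_j + sigma_j] for all j."""
    return all(mu[j] - sigma[j] <= xi[j] <= mu[j] + sigma[j] for j in range(D))

assert in_box([0.1, -0.05])     # the center is always contained
assert in_box([0.4, 0.15])      # a corner (boundary) point
assert not in_box([0.45, 0.0])  # outside along dimension 1

# Worst case of a linear function over the box: corner enumeration
# matches the closed form a^T mu + |a|^T sigma.
a = [1.0, -2.0]
corners = [[mu[j] + s[j] * sigma[j] for j in range(D)]
           for s in product([-1, 1], repeat=D)]
corner_max = max(sum(a[j] * c[j] for j in range(D)) for c in corners)
closed_form = sum(a[j] * mu[j] + abs(a[j]) * sigma[j] for j in range(D))
assert abs(corner_max - closed_form) < 1e-12
```

The closed-form worst case is exactly what the robust counterpart in the next paragraph exploits.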
Using $\mathscr{U} = \mathscr{U}^{\rm box}$ and introducing auxiliary variables $\bm{t}_k$, [\[compact_dcopf:max_of_affine_robust\]](#compact_dcopf:max_of_affine_robust){reference-type="ref" reference="compact_dcopf:max_of_affine_robust"} becomes $$\bm{a}_k^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\mu} + \bm{t}_k^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\sigma} + b_k \le 0, \quad \bm{t}_k \ge |\bm{a}_k|, \quad \forall k=1,...,K.$$ See also [@gorissen2013robust] for an in-depth discussion on robust counterparts of constraints with the form of [\[compact_dcopf:max_of_affine_robust\]](#compact_dcopf:max_of_affine_robust){reference-type="ref" reference="compact_dcopf:max_of_affine_robust"}. ## Real-time cost and security {#ssec:real_time_cost} The choice of uncertainty set parameters $\bm{\theta}$ implies a trade-off between security in real-time and cost in the first-stage decision. For example, choosing $\bm{\theta}$ such that $\mathscr{U}$ is large and covers all potential outcomes of $\bm{\xi}$ will lead to high security in real-time but also to high first-stage cost. A good choice of $\bm{\theta}$ will balance this trade-off by minimizing the combined first- and second-stage cost as defined in [\[eq:general_problem\]](#eq:general_problem){reference-type="ref" reference="eq:general_problem"}. We consider two relevant approaches to quantify security of the robust problem from Section [2.1](#ssec:robust_opf){reference-type="ref" reference="ssec:robust_opf"} in real-time. ### Cost of exceedance We can define the real-time cost as cost of exceedance by imposing a penalty for insufficient reserves $\bm{r}^+$, $\bm{r}^-$, $\bm{f}^{\rm RAM +}$, $\bm{f}^{\rm RAM -}$.
Using the notation from [\[eq:compact_dcopf\]](#eq:compact_dcopf){reference-type="ref" reference="eq:compact_dcopf"}, we compute this cost as $$\mathcal{C}^{\rm S}(\bm{x}, \bm{\xi}) = \sum_{k=1}^K c_k^{\rm viol}\big[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k\big]^+ ,$$ where $[\cdot]^+ = \max\{\cdot, 0\}$ and $c_k^{\rm viol}$ is the cost for exceeding the reserve given by $b_k$. Cost $c_k^{\rm viol}$ could, for example, reflect the cost of procuring emergency resources or load shedding. ### Probability of exceedance Instead of minimizing cost of exceedance, the system operator may be interested in a probabilistic guarantee that real-time operations do not exceed reserves, i.e.: $$\mathbb{P}\Big[\max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k] < 0 \Big] \ge 1-\gamma.$$ Here, $(1-\gamma)$ defines the target probability of no constraint exceeding its limits and $\gamma$ is a small risk factor. # Solution Approach {#sec:solution_approach} We now present an iterative method, inspired by the results in [@wang2023learning], to obtain the desired mapping $\mathcal{M}_{\bm{w}}(\bm{\zeta})=\bm{M}\bm{\zeta} + \bm{m}$ that returns uncertainty set parameters $\bm{\theta}$, which, in turn, optimally parametrize the first-stage uncertainty set with respect to the cost and probability of exceedance discussed in Section [2.3](#ssec:real_time_cost){reference-type="ref" reference="ssec:real_time_cost"}.
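The two real-time security metrics from Section 2.3 are straightforward to evaluate empirically. The sketch below (illustrative $\bm{a}_k$, $b_k$, and penalties, not from the case study) computes the cost of exceedance $\mathcal{C}^{\rm S}$ and the empirical probability that no constraint exceeds its limit over a handful of error samples:

```python
# Sketch of the real-time metrics: cost of exceedance and probability of
# exceedance. All numbers are illustrative.
a = [[1.0, 0.0], [0.0, 1.0]]  # rows a_k of the stacked response matrix
b = [-0.5, -0.4]              # b_k: negated reserves / line margins
c_viol = [20.0, 20.0]         # exceedance penalty per constraint

def margin(k, xi):
    """Signed constraint value a_k . xi + b_k (negative means safe)."""
    return sum(a[k][j] * xi[j] for j in range(len(xi))) + b[k]

def cost_of_exceedance(xi):
    """C^S(x, xi) = sum_k c_k^viol * [a_k . xi + b_k]^+ ."""
    return sum(c_viol[k] * max(margin(k, xi), 0.0) for k in range(len(b)))

samples = [[0.2, 0.1], [0.7, 0.1], [0.6, 0.9], [-0.2, 0.3]]
avg_cost = sum(cost_of_exceedance(s) for s in samples) / len(samples)
# Empirical probability that no constraint exceeds its limit.
p_safe = sum(max(margin(k, s) for k in range(len(b))) < 0
             for s in samples) / len(samples)

assert cost_of_exceedance([0.2, 0.1]) == 0.0             # within reserves
assert abs(cost_of_exceedance([0.7, 0.1]) - 4.0) < 1e-9  # 20 * (0.7 - 0.5)
assert p_safe == 0.5                                     # 2 of 4 samples safe
```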
## Cost of exceedance {#ssec:coe_solution_approach} The problem to compute the optimal choice of weights $\bm{w}=(\bm{M}, \bm{m})$ that minimize the combined expected first- and second-stage cost is the bi-level problem: $$\begin{aligned} \min_{\bm{w}}\quad & \mathbb{E}_{(\bm{\zeta},\bm{\xi})}\big[\bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x}^* + \mathcal{C}^{\rm S}(\bm{x}^*, \bm{\xi})\big] \label{eq:coe_objective}\\ \text{s.t.}\quad & \bm{\theta} = \mathcal{M}_{\bm{w}}(\bm{\zeta}) \label{eq:coe_prescription}\\ & \hspace{-0.6cm}\bm{x}^*(\bm{\zeta},\bm{\theta})\! \in \! \left\{\begin{array}{l} \!\!\mathop{\mathrm{arg\,min}}_{\bm{x}\in\mathscr{F}(\bm{\zeta})}\ \!\bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x} \\[1.3ex] \!\!\text{s.t.}\ \max_{k}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k]\! \le\! 0,\ \forall\bm{\xi}\!\in\!\mathscr{U}(\bm{\theta}). \end{array} \right. \label{eq:coe_inner}\end{aligned}$$ [\[eq:coe_bilevel_problem\]]{#eq:coe_bilevel_problem label="eq:coe_bilevel_problem"} We solve [\[eq:coe_bilevel_problem\]](#eq:coe_bilevel_problem){reference-type="ref" reference="eq:coe_bilevel_problem"} using a stochastic gradient descent approach with $v^{\rm max}$ outer steps (epochs) indexed by $v$ and $z^{\rm max}$ inner steps ("mini-batches" [@gower2019sgd]) indexed by $z$. For each step $z$ we assume access to an *individual* context parameter sample $\bm{\zeta}^z$ (e.g., obtained from historic observations or a sample generation mechanism) and a *set of $N_z$ samples* $\mathscr{X}^z = \{\bm{\xi}_{i}^z\}_{i=1}^{N_z}$ of $\bm{\xi}$ (again, either obtained from historical observations or a sample generation mechanism). We note that $\mathscr{X}^z$ may be conditional to $\bm{\zeta}^z$. See, for example, [@dvorkin2015uncertainty] who also provide an approach to sample $\mathscr{X}^z$ for a given wind power forecast $\bm{u}^z$ (which is part of the parametrization $\bm{\zeta}^z$).
We provide additional discussion on the relationship between $\bm{\zeta}^z$ and $\mathscr{X}^z$ in the case study below. We define $\bm{x}^z = \bm{x}^*(\bm{\zeta}^z, \bm{\theta}^z)$ as the solution to the inner problem [\[eq:coe_inner\]](#eq:coe_inner){reference-type="ref" reference="eq:coe_inner"}. The loss function $L^{\rm C}(\bm{w}^z; \bm{\zeta}^z, \mathscr{X}^z)$ corresponds to the outer objective [\[eq:coe_objective\]](#eq:coe_objective){reference-type="ref" reference="eq:coe_objective"} and we compute it as the empirical mean given $\mathscr{X}^z$: $$L^{\rm C}\!(\bm{w}^z\!; \bm{\zeta}^z\!, \mathscr{X}^z\!)\! =\! \bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x}^z\! +\! \frac{1}{N_z}\! \sum_{i=1}^{N_z} \sum_{k=1}^K\! c_k^{\rm viol}\big[ (\bm{a}_{k}^z)^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi}_{i}^z\! +\! b_k^z\big]^+.$$ Recall from [\[compact_dcopf:max_of_affine_robust\]](#compact_dcopf:max_of_affine_robust){reference-type="ref" reference="compact_dcopf:max_of_affine_robust"} that $\bm{a}_k$ and $b_k$ are decision variables and a part of $\bm{x}$. Following [@wang2023learning] we estimate the derivative of $L^{\rm C}(\bm{w}^z; \bm{\zeta}^z, \mathscr{X}^z)$ using a subgradient $G^{\rm C}(\bm{w}^z; \bm{\zeta}^z, \mathscr{X}^z)$ computed over samples of $\bm{\zeta}$ and $\bm{\xi}$. Notably, the computation of $G^{\rm C}(\bm{w}^z; \bm{\zeta}^z, \mathscr{X}^z)$ requires computing gradient $\nabla_{\bm{\theta}}\bm{x}^*(\bm{\zeta}, \bm{\theta})$, i.e., the derivative of the decision variables of the inner problem over the uncertainty set parameters. We discuss an efficient approach to obtain this gradient in Section [3.3](#ssec:implementation){reference-type="ref" reference="ssec:implementation"} below. Choosing initial weights $(\bm{M}^{\rm init}, \bm{m}^{\rm init})$ and a learning rate $\rho$ the resulting solution steps are itemized in Algorithm [\[alg:coe_sgd\]](#alg:coe_sgd){reference-type="ref" reference="alg:coe_sgd"}.
**Algorithm 1** (SGD for expected cost of exceedance). Input: $\bm{w}^0 = (\bm{M}^{\rm init}, \bm{m}^{\rm init})$ and learning rate $\rho$. For each epoch $v=1,...,v^{\rm max}$, repeat for each mini-batch step $z=1,...,z^{\rm max}$:

1. $\bm{\zeta}^z \gets$ sample context parameter
2. $\mathscr{X}^z \gets$ sample forecast errors conditional to $\bm{\zeta}^z$
3. $\bm{\theta}^z \gets \mathcal{M}_{\bm{w}^{v-1}}(\bm{\zeta}^z)$
4. $\bm{x}^z \gets$ solve inner problem [\[eq:coe_inner\]](#eq:coe_inner){reference-type="ref" reference="eq:coe_inner"}
5. $\bm{g}^z \gets G^{\rm C}(\bm{w}^{v-1}; \bm{\zeta}^z, \mathscr{X}^z)$

then aggregate $\bm{g}^v \gets \frac{1}{z^{\rm max}}\sum_{z=1}^{z^{\rm max}}\bm{g}^z$, update $\bm{w}^{v} \gets \bm{w}^{v-1} - \rho \bm{g}^v$, and return $\bm{w}^{v^{\rm max}}$ after the last epoch. ## Probability of exceedance {#ssec:poe_solution_approach} The problem to compute the optimal choice of weights $\bm{w}$ that ensure real-time operations do not exceed their limits with a probability of at least $(1-\gamma)$ is the bi-level problem: $$\begin{aligned} \min_{\bm{w}}\quad & \mathbb{E}_{(\bm{\zeta},\bm{\xi})}\big[\bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x}^* \big] \\ \text{s.t.}\quad & \mathbb{P}\Big[\max_{k=1,...,K}[ (\bm{a}_{k}^*)^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k^*] < 0 \Big] \ge 1-\gamma \label{eq:poe_chance_constraint} \\ & \big[\text{$\bm{\theta}$ and $\bm{x}^*(\bm{\zeta, \bm{\theta}})$ as in \cref{eq:coe_prescription,eq:coe_inner}}\big]. \end{aligned}$$ [\[eq:poe_bilevel_problem\]]{#eq:poe_bilevel_problem label="eq:poe_bilevel_problem"} Constraint [\[eq:poe_chance_constraint\]](#eq:poe_chance_constraint){reference-type="ref" reference="eq:poe_chance_constraint"} is generally non-convex and complicates the solution of [\[eq:poe_bilevel_problem\]](#eq:poe_bilevel_problem){reference-type="ref" reference="eq:poe_bilevel_problem"}. To create a tractable problem, we reformulate [\[eq:poe_chance_constraint\]](#eq:poe_chance_constraint){reference-type="ref" reference="eq:poe_chance_constraint"} using conditional value-at-risk [@wang2023learning; @mieth2023data]: $$\begin{aligned} & \mathbb{P}\big[\max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k] \le 0 \big] \ge 1-\gamma \\ \Leftrightarrow \quad &\mathop{\mathrm{VaR}}_{\gamma}\big( \max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k] \big) \le 0 \\ \Leftarrow \quad & \mathop{\mathrm{CVaR}}_{\gamma}\big( \max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k ]\big) \le 0 \\ \end{aligned} \label{eq:cvar_reformulation}$$ where $\mathop{\mathrm{VaR}}_{\gamma}$ and $\mathop{\mathrm{CVaR}}_{\gamma}$ are the value-at-risk and conditional value-at-risk at risk level $\gamma$. Reformulation [\[eq:cvar_reformulation\]](#eq:cvar_reformulation){reference-type="ref" reference="eq:cvar_reformulation"} utilizes the fact that limiting CVaR implies a limit on VaR [@rockafellar2000optimization].
CVaR further allows the convex reformulation [@rockafellar2000optimization] $$\mathop{\mathrm{CVaR}}_{\gamma}\big( \max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k] \big) = \inf_{\tau}\Big\{\mathbb{E}\Big[\frac{1}{\gamma}\big[ \max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k] - \tau\big]^+ + \tau\Big]\Big\}, \label{eq:cvar_as_inf}$$ which we use to define $$h(\bm{x}^*, \tau, \bm{\xi}) = \frac{1}{\gamma}\big[ \max_{k=1,...,K}[ \bm{a}_{k}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi} + b_k] - \tau\big]^+ + \tau,$$ and to reformulate [\[eq:poe_bilevel_problem\]](#eq:poe_bilevel_problem){reference-type="ref" reference="eq:poe_bilevel_problem"} as [@wang2023learning]: $$\begin{aligned} \min_{\bm{w}, \tau}\quad & \mathbb{E}_{(\bm{\zeta}, \bm{\xi})} \big[\bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x}^* \big] \\ \text{s.t.}\quad & \mathbb{E}_{(\bm{\zeta}, \bm{\xi})} \big[h(\bm{x}^*, \tau, \bm{\xi}) \big] = 0 \label{eq:poe_reform_cvar} \\ & \big[\text{$\bm{\theta}$ and $\bm{x}^*(\bm{\zeta, \bm{\theta}})$ as in \cref{eq:coe_prescription,eq:coe_inner}}\big].\end{aligned}$$ [\[eq:poe_reformulated\]]{#eq:poe_reformulated label="eq:poe_reformulated"} We note that [\[eq:poe_reformulated\]](#eq:poe_reformulated){reference-type="ref" reference="eq:poe_reformulated"} has an additional auxiliary variable $\tau$ related to the CVaR reformulation [\[eq:cvar_as_inf\]](#eq:cvar_as_inf){reference-type="ref" reference="eq:cvar_as_inf"}. Also, following the logic in [@wang2023learning], we note that [\[eq:poe_reform_cvar\]](#eq:poe_reform_cvar){reference-type="ref" reference="eq:poe_reform_cvar"} is an equality constraint, because zero is the optimal CVaR target given [\[eq:cvar_reformulation\]](#eq:cvar_reformulation){reference-type="ref" reference="eq:cvar_reformulation"} and the convexity of [\[eq:cvar_as_inf\]](#eq:cvar_as_inf){reference-type="ref" reference="eq:cvar_as_inf"}.
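The variational CVaR formulation can be verified on a finite sample: minimizing $\tau + \frac{1}{\gamma}\mathbb{E}[(X-\tau)^+]$ over $\tau$ recovers the mean of the worst $\gamma$-fraction of outcomes. A minimal sketch with illustrative losses:

```python
# Empirical check of the Rockafellar-Uryasev formulation
# CVaR_gamma(X) = inf_tau { tau + E[(X - tau)^+] / gamma }  on a finite sample.
losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # illustrative loss realizations
gamma = 0.2
N = len(losses)

def ru_objective(tau):
    """tau + empirical mean of (X - tau)^+ scaled by 1/gamma."""
    return tau + sum(max(x - tau, 0.0) for x in losses) / (gamma * N)

# Minimize over a grid of tau values; the infimum is attained at the VaR.
taus = [t / 10.0 for t in range(0, 101)]
cvar = min(ru_objective(t) for t in taus)

# For gamma = 0.2 the CVaR is the mean of the worst 20% of losses: (9 + 10)/2.
assert abs(cvar - 9.5) < 1e-9
```

This also illustrates why the formulation is convenient for training: the objective is convex and piecewise linear in $\tau$, so it admits subgradients everywhere.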
Similar to the procedure outlined in [@wang2023learning; @zhang2022solving] we can now solve the equality-constrained problem [\[eq:poe_reformulated\]](#eq:poe_reformulated){reference-type="ref" reference="eq:poe_reformulated"} by introducing Lagrange multiplier $\lambda$ and defining the loss function $$L^{\rm P}((\bm{w}\!,\tau); \lambda\!, \bm{\zeta}\!, \bm{\xi})\! = \!\mathbb{E}_{(\bm{\zeta}, \bm{\xi})} \big[\bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x}^* \big]\! +\! \lambda \mathbb{E}_{(\bm{\zeta}, \bm{\xi})} \big[h(\bm{x}^*\!, \tau\!, \bm{\xi}) \big]. \label{eq:lagrangian_expectation}$$ Using the notation introduced in Section [3.1](#ssec:coe_solution_approach){reference-type="ref" reference="ssec:coe_solution_approach"} above, we compute [\[eq:lagrangian_expectation\]](#eq:lagrangian_expectation){reference-type="ref" reference="eq:lagrangian_expectation"} in each step $z$ as $$\begin{aligned} &L^{\rm P}((\bm{w}^z, \tau^z); \lambda^z, \bm{\zeta}^z, \mathscr{X}^z) = \label{eq:poe_loss_function}\\ &\quad \bm{c}^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{x}^z\! +\! \lambda^z \underbrace{\frac{1}{N_z}\sum_{i=1}^{N_z}\! \! \Big( \frac{1}{\gamma}\big[\max_{k=1,...,K}[ (\bm{a}_{k}^z)^{\hspace{-0.1em}\mkern-1.mu\mathsf{T}}\bm{\xi}_i^z\! +\! b_k^z]\! -\! \tau^z\big]^+\!\! + \!\tau^z \Big)}_{H((\bm{w}^z, \tau^z); \bm{\zeta}^z, \mathscr{X}^z)}. \nonumber\end{aligned}$$ Denoting $G^{\rm P}((\bm{w}^z, \tau^z); \lambda^z, \bm{\zeta}^z, \mathscr{X}^z)$ as the gradient of [\[eq:poe_loss_function\]](#eq:poe_loss_function){reference-type="ref" reference="eq:poe_loss_function"}, $\tau^{\rm init}$, $\lambda^{\rm init}$ as initial values for $\tau$, $\lambda$, and $\kappa$ as the step size for $\lambda$, the resulting solution steps are itemized in Algorithm [\[alg:poe_sgd\]](#alg:poe_sgd){reference-type="ref" reference="alg:poe_sgd"}.
**Algorithm 2** (SGD for probability of exceedance). Input: $\bm{w}^0 = (\bm{M}^{\rm init}, \bm{m}^{\rm init})$, $\tau^0=\tau^{\rm init}$, $\lambda^0 = \lambda^{\rm init}$, and step sizes $\rho$, $\kappa$. For each epoch $v=1,...,v^{\rm max}$, repeat for each mini-batch step $z=1,...,z^{\rm max}$:

1. $\bm{\zeta}^z \gets$ sample context parameter
2. $\mathscr{X}^z \gets$ sample forecast errors conditional to $\bm{\zeta}^z$
3. $\bm{\theta}^z \gets \mathcal{M}_{\bm{w}^{v-1}}(\bm{\zeta}^z)$
4. $\bm{x}^z \gets$ solve inner problem [\[eq:coe_inner\]](#eq:coe_inner){reference-type="ref" reference="eq:coe_inner"}
5. $\bm{g}^z \gets G^{\rm P}((\bm{w}^{v-1}, \tau^{v-1}); \lambda^{v-1}, \bm{\zeta}^z, \mathscr{X}^z)$
6. $H^z \gets H((\bm{w}^{v-1}, \tau^{v-1}); \bm{\zeta}^z, \mathscr{X}^z)$

then aggregate $\bm{g}^v \gets \frac{1}{z^{\rm max}}\sum_{z=1}^{z^{\rm max}}\bm{g}^z$ and $H^v \gets \frac{1}{z^{\rm max}}\sum_{z=1}^{z^{\rm max}}H^z$, update $(\bm{w}^{v}, \tau^v) \gets (\bm{w}^{v-1}, \tau^{v-1}) - \rho \bm{g}^v$ and $\lambda^{v} \gets \lambda^{v-1} + \kappa H^v$, and return $(\bm{w}^{v^{\rm max}}, \tau^{v^{\rm max}})$ after the last epoch. ## Implementation {#ssec:implementation} We implement Algorithms [\[alg:coe_sgd\]](#alg:coe_sgd){reference-type="ref" reference="alg:coe_sgd"} and [\[alg:poe_sgd\]](#alg:poe_sgd){reference-type="ref" reference="alg:poe_sgd"} in Python using state-of-the-art machine learning packages (PyTorch) and results from [@agrawal2019differentiable] (package Cvxpylayers) that allow numerical differentiation through the inner optimization [\[eq:coe_inner\]](#eq:coe_inner){reference-type="ref" reference="eq:coe_inner"}. The computation steps are illustrated in Fig. [1](#fig:dolayers_pipeline){reference-type="ref" reference="fig:dolayers_pipeline"}. The differentiable optimization layer obtains the required gradient $\nabla_{\bm{\theta}}\bm{x}^*(\bm{\zeta}, \bm{\theta})$ by differentiating through the Karush-Kuhn-Tucker conditions of the inner problem at optimality. This is performed efficiently by utilizing the implicit function theorem and by solving an inner quadratic program to obtain the required matrix inversion [@agrawal2019differentiating].
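The mechanics of the SGD loops above can be illustrated on a one-dimensional toy problem that trades first-stage reserve cost against an expected exceedance penalty. This is a deliberately simplified sketch, not the paper's implementation: all costs, the uniform error distribution, and the scalar half-width `sigma` are illustrative assumptions.

```python
import random

# Toy version of Algorithm 1: learn a fixed interval half-width sigma
# minimizing  c_reserve*sigma + c_viol*E[(|xi| - sigma)^+],
# with |forecast error| ~ U(0, 1). All parameters illustrative.
random.seed(7)
c_reserve, c_viol = 1.0, 5.0   # first-stage reserve cost, exceedance penalty
rho, batch, epochs = 0.01, 64, 1500
sigma = 0.1                    # initial half-width

for _ in range(epochs):
    xi = [random.random() for _ in range(batch)]  # mini-batch of error samples
    # Subgradient of the expected cost w.r.t. sigma: the penalty term
    # contributes -c_viol for every sample exceeding the interval.
    exceed_frac = sum(x > sigma for x in xi) / batch
    g = c_reserve - c_viol * exceed_frac
    sigma -= rho * g

# At optimality the marginal reserve cost balances the marginal penalty:
# P(|xi| > sigma) = c_reserve / c_viol = 0.2, i.e. sigma near 0.8 here.
assert 0.7 < sigma < 0.9
```

The same structure carries over to the full problem, where the scalar subgradient is replaced by $G^{\rm C}$ or $G^{\rm P}$ obtained via the differentiable optimization layer.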
![Computation pipeline for the steps itemized in Algorithms [\[alg:coe_sgd\]](#alg:coe_sgd){reference-type="ref" reference="alg:coe_sgd"} and [\[alg:poe_sgd\]](#alg:poe_sgd){reference-type="ref" reference="alg:poe_sgd"}. The differentiable optimization layer using `cvxpylayers` [@agrawal2019differentiable] allows the computation of subgradient $G = \nabla_{\bm{w}}L$ with a single backwards pass.](figures/dolayers_pipeline_paper.pdf){#fig:dolayers_pipeline width="0.85\\linewidth"} We add the following implementation remarks: 1. Formulation [\[eq:base_dcopf\]](#eq:base_dcopf){reference-type="ref" reference="eq:base_dcopf"} limits the decision variables $\bm{A}$ to the interval $[0,1]$. This leads to $\bm{A}$ taking small values compared to the other decision variables, causing conditioning problems in the Cvxpylayers solver ECOS [@domahidi2013ecos]. Introducing a scaling factor for $\bm{A}$ solves this problem consistently. [\[ir:conditioniong\]]{#ir:conditioniong label="ir:conditioniong"} 2. The inner problem must always be feasible. To ensure this, we add a slack variable to [\[base_dcopf:enerbal\]](#base_dcopf:enerbal){reference-type="ref" reference="base_dcopf:enerbal"} to allow the curtailment of wind power $\bm{u}$ if needed, and a slack variable to [\[compact_dcopf:max_of_affine_robust\]](#compact_dcopf:max_of_affine_robust){reference-type="ref" reference="compact_dcopf:max_of_affine_robust"} to account for the case that the required uncertainty set cannot be met with the available reserves. Both sets of slack variables are penalized in the problem objective and conditioned using a scaling factor as in [\[ir:conditioniong\]](#ir:conditioniong){reference-type="ref" reference="ir:conditioniong"}. 3. Formulation [\[eq:base_dcopf\]](#eq:base_dcopf){reference-type="ref" reference="eq:base_dcopf"} is a linear problem and as such may not generally be differentiable with respect to its parameters [@wilder2019melding].
This can be avoided by introducing a regularization $\pi\left\lVert\bm{x}\right\rVert_2^2$ with regularization factor $\pi$ to the objective of the problem. We note that in contrast to [@wilder2019melding], our problem is linear with *continuous* variables and adding a regularization term with a small $\pi$ did not significantly impact our results. Lastly, we note that we implement $\mathcal{M}_{\bm{w}}$ as two linear models $\mathcal{M}^{\bm{\mu}}=\bm{M}^{\mu}\bm{\zeta} + \bm{m}^{\mu}$ and $\mathcal{M}^{\bm{\sigma}}=\bm{M}^{\sigma}\bm{\zeta} + \bm{m}^{\sigma}$. # Numerical Experiments ## Illustrative 5-bus case We first illustrate the suggested approach using synthetic data on the small-scale "case5" data set from MATPOWER [@matpowercase5]. The system topology with two configurations of wind generators is shown in Fig. [2](#fig:5bus_system){reference-type="ref" reference="fig:5bus_system"}. Its parameters follow [@mieth2023data] with , $\bm{c}^{\rm E}=(14,15,30,40,10)$, and $\bm{c}^{\rm R}=(80,80,15,30,80)$. For training and testing the proposed approach we create a collection of $N=2000$ samples of $\bm{\zeta}=(\bm{d},\bm{u})$ with corresponding samples of $\bm{\xi}$ as follows. First, we define the nominal demand as and the nominal wind forecast as . We then create $N$ samples of $\bm{d}$ by uniformly drawing from the set $\{\bm{d}\in\mathbb{R}^{V}\mid 0.5\bm{d}_0 \le \bm{d} \le 1.1\bm{d}_0\}$ and $N$ samples of $\bm{u}$ by uniformly drawing from the set $\{\bm{u}\in\mathbb{R}^{D}\mid 0.5\bm{u}_0 \le \bm{u} \le 1.1\bm{u}_0\}$. Next, for each available sample of $\bm{\zeta}=(\bm{d},\bm{u})$ we draw a sample of forecast errors $\bm{\xi}$ from a multivariate normal distribution with a uniform correlation coefficient of $\phi=0.5$ between all wind farms. Following the observations in [@dvorkin2015uncertainty], we model the forecast error standard deviation to be conditional to the forecast and the forecast error mean to be zero.
The resulting normal distribution is given as: $$\setlength\arraycolsep{2pt} \bm{\xi}\,|\,\bm{u} \!\sim\! \mathcal{N}(\bm{0}, \bm{\Sigma}_{\bm{u}}),\ \bm{\Sigma}_{\bm{u}} \!= \!\! \begin{bmatrix}(0.15u_1)^2 & 0.15^2\phi u_1 u_2 \\ 0.15^2\phi u_1 u_2 & (0.15u_2)^2 \end{bmatrix}\!. \label{eq:forecast_error_sample_distribution}$$ Finally, we set the maximum wind farm capacity to $\bm{u}^{\rm max} = \unit[(2.0, 3.0)]{p.u.}$ and truncate all generated samples of $\bm{\xi}\,|\,\bm{u}$ such that $\bm{\xi}\ge-\bm{u}$ and $\bm{\xi}\le\bm{u}^{\rm max} - \bm{u}$. The resulting collection of samples of $\bm{\zeta}$ with a single sample of $\bm{\xi}$ corresponds to what would be available in practice. We use 1500 of these generated samples for training and 500 for testing. We define the following 5 cases that we use to demonstrate the proposed prescribed robust sets: - *Full*: Reference case for which the security interval of each wind farm covers the entire empirical forecast error support. - *90 Perc*: Reference case for which the security interval of each wind farm is fixed between the and percentile of the forecast error training data. - *Single*: Learning a single fixed uncertainty set without prescription, i.e., $\mathcal{M}^{\mu} = \bm{m}^{\mu}$ and $\mathcal{M}^{\sigma} = \bm{m}^{\sigma}$. - *P-All*: Learning $\bm{w}$ using all available forecast errors. As a result, the training is ignorant to the error distribution being conditional to $\bm{u}$. - *P-Cond*: Learning $\bm{w}$ assuming access to samples from the true conditional distribution. In each step $z$, given $\bm{u}^z$, we generate 200 new samples of $\bm{\xi}$ using [\[eq:forecast_error_sample_distribution\]](#eq:forecast_error_sample_distribution){reference-type="ref" reference="eq:forecast_error_sample_distribution"}.
- *P-Bins*: Learning $\bm{w}$ with each set of forecast error samples $\mathscr{X}^z$ obtained by separating the available training samples of $\bm{u}$ into 10 bins of equal width, as in [@dvorkin2015uncertainty], collecting the forecast errors of each bin, and assigning each sample $\bm{u}^z$ the set of forecast error samples corresponding to its bin. We set $v^{\rm max}=100$, although we typically observed satisfactory convergence within 40 iterations. We used a mini-batch size of $z^{\rm max}=20$ and a learning rate of $\rho=10^{-6}$. Finally, we set $\bm{w}^{\rm init}$ such that all entries of $\bm{M}^{\mu}$, $\bm{M}^{\sigma}$, and $\bm{m}^{\mu}$ are zero and the entries of $\bm{m}^{\sigma}$ correspond to two times the empirical standard deviation of the training data. All experiments are implemented in Python (see also Section [3.3](#ssec:implementation){reference-type="ref" reference="ssec:implementation"}) and available online [@git_p-robust_dcopf]. We used a standard PC workstation with memory and an Intel i5 processor. ![Schematic of the 5 bus test system in configuration A (wind farms at buses 3 and 5) and B (wind farms at buses 2 and 3).](figures/5bus_system_AB.pdf){#fig:5bus_system width="0.75\\linewidth"} ### Cost-based box uncertainty set {#ssec:case_study_coe_5bus} We first train $\mathcal{M}_{\bm{w}}$ to optimize the expected cost of constraint exceedance (see Section [3.1](#ssec:coe_solution_approach){reference-type="ref" reference="ssec:coe_solution_approach"}). We set $c_k^{\rm viol} = \unit[20k]{\nicefrac{\$}{MW}},\ \forall k$, which corresponds to the lower end of the value of lost load estimated by New York ISO [@nyiso2019ancilary]. The average computation time for each epoch across all cases was around , including sampling, solving the inner optimization problems, computing the loss function, and obtaining the gradients. Fig.
[3](#fig:5bus_cost_based_training){reference-type="ref" reference="fig:5bus_cost_based_training"} shows the out-of-sample (OOS) average cost for the studied cases. The fully robust approach (*Full*) is overly conservative, which we can infer from the fact that the lowest costs in this case lie around the 25th percentile of all other cases. Also, it is often infeasible, leading to higher cost from infeasibility penalties (see Section [3.3](#ssec:implementation){reference-type="ref" reference="ssec:implementation"} IR2). The fixed *90 Perc.* case is less conservative and mostly avoids infeasibility, but is outperformed by the other cases. The *Single* uncertainty set improves average cost relative to *90 Perc*, but the set remains too small, as it tries to avoid infeasibility penalties. Introducing the prescription step overcomes this problem. The prescriptive cases *P-All*, *P-Cond*, and *P-Bins* further improve upon *Single* by , , and , respectively. Case *P-Cond* with access to the true conditional distribution slightly outperforms *Single*. Case *P-Bins* performs worse than *P-All*, which we attribute to the loss of correlation information in the binning of the forecast errors. Re-running the experiment without correlation ($\phi=0$) confirms this. Now, *P-All*, *P-Cond*, and *P-Bins* result in average OOS cost of , , and , with *P-Bins* now outperforming *P-All* and only being slightly worse than the ideal *P-Cond*. ![Out-of-sample (OOS) results for 5-bus system in configuration A. Optimized for expected cost of exceedance. The boxes extend from the first quartile to the third quartile. Red line shows the median. The whiskers extend from the box to the farthest data point within 1.5 times the inter-quartile range from the box.](plots/5bus_cost_based_training_with_expected_cost.pdf){#fig:5bus_cost_based_training width="0.9\\linewidth"} Fig. 
[4](#fig:two_configs_with_prescription){reference-type="ref" reference="fig:two_configs_with_prescription"} shows how a change of the system topology impacts the prescribed uncertainty set, highlighting the relevance of making the training problem-aware. By moving wind farm $j=2$ from bus 5 (configuration A) to bus 2 (configuration B), it can no longer rely on the direct balancing from the generator at bus 5. As a result, transmission lines are now more likely to exceed their remaining available margins in real time, which leads to larger required safety intervals for both wind farms. ![Prescribed sets from case *P-All* (optimized for cost of exceedance) for (a) $\bm{\zeta}=(\bm{d}_0, \bm{u}_0)$ and (b) $\bm{\zeta}=0.7(\bm{d}_0, \bm{u}_0)$, each for the two network configurations shown in Fig. [2](#fig:5bus_system){reference-type="ref" reference="fig:5bus_system"}.](plots/two_configs_with_prescription.pdf){#fig:two_configs_with_prescription width="0.9\\linewidth"} ### Constraint-based box uncertainty set {#ssec:case_study_poe_5bus} We now train $\mathcal{M}_{\bm{w}}$ such that the probability of constraint exceedance remains below $1\%$, i.e., $\gamma=0.01$, as described in Section [3.2](#ssec:poe_solution_approach){reference-type="ref" reference="ssec:poe_solution_approach"}. For this experiment we increase the learning rate to $\tau=10^{-5}$ and set $\lambda^{\rm init} = 100$ and $\kappa = 0.1$. The time per iteration increased slightly to around per epoch, which we can mainly attribute to the more complex loss function. For *Single* the CVaR at convergence is 0.96 with an in-training probability of exceedance of 3.1% and a testing probability of exceedance of 13%. As discussed in Section [4.1.1](#ssec:case_study_coe_5bus){reference-type="ref" reference="ssec:case_study_coe_5bus"} above, this unsatisfying result can be explained with the algorithm avoiding infeasible uncertainty sets, resulting in a set that is too small for many parametrizations (see Fig. 
[5](#fig:5bus_cvar_boxes){reference-type="ref" reference="fig:5bus_cvar_boxes"}). For *P-All*, on the other hand, the CVaR at convergence is 0.06 and we achieve an in-training probability of exceedance of 1% and a testing probability of exceedance of 1.8%. We note that the exact match of the in-training probability of exceedance with the target is a coincidence. The actual target of a CVaR equal to zero would lead to a smaller in-training probability of exceedance. However, the algorithm converges above this target as it again tries to avoid creating infeasible sets. Fig. [5](#fig:5bus_cvar_boxes){reference-type="ref" reference="fig:5bus_cvar_boxes"} shows the resulting uncertainty sets for *Single* and *P-All* and highlights the improved ability of the set in the latter case to adapt to the current system state. ![Uncertainty sets for cases *P-All* and *Single* optimized for probability of exceedance. For *P-All* all prescriptions resulting from the test samples of $\bm{\zeta}$ are shown. (System configuration A.)](plots/5bus_cvar_boxes.pdf){#fig:5bus_cvar_boxes width="0.9\\linewidth"} ## RTS 96-bus case We now test the suggested approach on more realistic data using the system provided by the Reliability Test System Grid Modernization Lab Consortium (RTS-GLMC [@rts_glmc_git]). This update of the RTS-96 test system has 73 buses, 120 transmission lines, 73 conventional generators, 4 wind farms, and 76 other resources (hydro, PV). In our experiment we focus on the 4 wind farms as uncertain resources and treat hydro and PV injections as fixed negative demand, i.e., as part of $\bm{d}$. The RTS-GLMC data set includes data for one year. To have a richer data set for the wind farms, we use the coordinates provided in the RTS-GLMC data set to map the 4 wind farms to the closest data points available in the extensive NREL WIND Toolkit [@draxl2015overview]. 
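This farm-to-site matching can be implemented as a simple nearest-neighbor lookup over coordinates. The sketch below illustrates the idea; the listed coordinates are hypothetical placeholders, not the actual RTS-GLMC or WIND Toolkit locations.

```python
import numpy as np

# Hypothetical (lon, lat) pairs for the 4 RTS-GLMC wind farms and for
# candidate NREL WIND Toolkit sites; real values come from the data sets.
farm_coords = np.array([[-104.7, 34.0], [-105.1, 33.5],
                        [-104.2, 34.6], [-105.6, 34.1]])
site_coords = np.array([[-104.8, 34.1], [-105.0, 33.4], [-104.3, 34.5],
                        [-105.5, 34.2], [-104.9, 33.9]])

def nearest_sites(farms, sites):
    """Map each farm to the index of the closest Toolkit site
    (plain Euclidean distance in coordinate space is adequate here)."""
    d = np.linalg.norm(farms[:, None, :] - sites[None, :, :], axis=-1)
    return d.argmin(axis=1)

idx = nearest_sites(farm_coords, site_coords)  # one site index per farm
```

Each farm is then assigned the wind-speed time series of its matched site, which is subsequently rescaled to the farm's installed capacity.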
We scale this data to fit the wind farms in the RTS-GLMC data set and obtain 7 years of wind power injections and realistic forecast errors. We select the data from 2012 to replace the wind data from the RTS-GLMC data set, as the yearly wind structure matches the original data most closely (measured in terms of average deviation of hourly total wind injections). From the resulting 8760 available net-demand and wind-injection samples of the respective day-ahead data sets, we select 1500 for training and 500 for testing. We use all forecast errors for training as in *P-All* above and focus on the analysis of the cost of exceedance-based training. We select the same meta-parameters as for the 5-bus case, but reduce the mini-batch size to $z^{\rm max}=10$. Training requires an average of per epoch and we observe convergence after around 30 iterations. We note that around half of the time per iteration is spent on computing the gradient. This is expected because both the larger parameter matrices $\bm{M}^{\sigma}$, $\bm{M}^{\mu}$ and the inner optimization disproportionately increase the size of the computational graph from which the gradient is computed. However, because $\mathcal{M}_{\bm{w}}$ has to be trained only once offline, training time and resources are not a critical limiting factor. In this case, the reference *90 Perc.* leads to an expected out-of-sample cost of , while the *P-All* trained prescribed sets achieve a significant improvement of . (We performed an additional grid search to find a better percentile-based uncertainty set. The best result was attained for the 78th percentile with .) Fig. [6](#fig:rts96_coe_pall){reference-type="ref" reference="fig:rts96_coe_pall"} shows the OOS forecast errors alongside the collection of prescribed security intervals for the 4 wind farms in the system. We observe that the uncertainty sets are biased towards negative forecast errors. 
This can largely be explained by the fact that the model chooses to curtail wind if the forecast is very high, which (i) leads to an insensitivity to upward forecast errors and (ii) amplifies the fact that larger negative forecast errors are more likely at high wind power forecasts. This result highlights the advantage of internalizing the problem cost structure into the training. ![OOS results for the RTS 96 bus case using the *P-All* training. Box plots show the distribution of the OOS forecast errors and red lines show the various security intervals obtained from the trained mapping. See Fig. [3](#fig:5bus_cost_based_training){reference-type="ref" reference="fig:5bus_cost_based_training"} for box plot explanation.](plots/rts96_coe.pdf){#fig:rts96_coe_pall width="0.95\\linewidth"} # Conclusion Motivated by the need for actionable methods to deal with stochastic resources in power systems, this paper has demonstrated an approach to compute uncertainty sets for robust optimal power flow that (i) are prescriptive, i.e., minimize the expected system cost, and (ii) are adaptive, i.e., are prescribed individually for each expected system state given by a vector of context parameters. Our approach to obtain these sets was inspired by [@wang2023learning] but additionally achieves property (ii). The problem in [\[eq:general_training_problem\]](#eq:general_training_problem){reference-type="ref" reference="eq:general_training_problem"} opens a wide range of future research [@sadana2023survey]. For the approach studied in this paper we are pursuing the following avenues for further research. (i) Including richer context vectors. For example, in our data for the RTS96 case study, we observed a clear dependency of the forecast error distribution on the wind direction. 
Making the problem more context-aware and better estimating conditional error distributions from real data should reveal impactful relations between system security and observable parameters, but is a non-trivial task [@sadana2023survey]. (ii) AC power flow. Using a higher fidelity operational model, e.g., along the lines of [@lee2021robust], promises improved applicability in practice and improved system security. (iii) Non-fixed recourse. Relaxing the fixed-recourse assumption by modeling the second stage as a decision-making problem (or a tractable proxy, e.g., a trained neural network) should allow for a broader set of applications of the method. (iv) Improving scalability by utilizing ongoing improvements in differentiable optimization, e.g., [@kotary2023folded]. # Acknowledgment {#acknowledgment .unnumbered} The authors would like to thank Irina Wang and Bartolomeo Stellato of Princeton University for helpful discussions. [^1]: This work was supported in part by a grant from the U.S. National Science Foundation under grant number ECCS-2039716 and a grant from the C3.ai Digital Transformation Institute. The work of R. Mieth was supported by a fellowship from the German Academy of Sciences, Leopoldina. [^2]: We assume in this paper that cost and system topology remain constant.
--- abstract: | This paper addresses the problem of Age-of-Information (AoI) in UAV-assisted networks. Our objective is to minimize the expected AoI across devices by optimizing UAVs' stopping locations and device selection probabilities. To tackle this problem, we first derive a closed-form expression of the expected AoI that involves the probabilities of selection of devices. Then, we formulate the problem as a non-convex minimization subject to quality of service constraints. Since the problem is challenging to solve, we propose an Ensemble Deep Neural Network (EDNN) based approach which takes advantage of the dual formulation of the studied problem. Specifically, the Deep Neural Networks (DNNs) in the ensemble are trained in an unsupervised manner using the Lagrangian function of the studied problem. Our experiments show that the proposed EDNN method outperforms traditional DNNs in reducing the expected AoI, achieving a remarkable reduction of $29.5\%$. author: - Mouhamed Naby Ndiaye - El Houcine Bergou - Hajar El Hammouti bibliography: - IEEEabrv.bib - biblio_traps_dynamics.bib title: Ensemble DNN for Age-of-Information Minimization in UAV-assisted Networks --- Age-of-Information, DNN, Ensemble DNN, Trajectory optimization, UAV-assisted networks, Unsupervised Learning. # Introduction Over the past few years, there has been a significant surge in research around the concept of Age of Information (AoI). This interest is driven by various network applications that require timely information to carry out some specific tasks. Examples of such applications include providing real-time traffic to smartphone users and delivering status updates to smart systems [@yates2021age; @kadota2021age]. For such applications, the AoI is an important metric as it measures the freshness of the data and evaluates how quickly the data update reaches the destination [@yang2020age; @wang2020priority]. 
In this paper, we are interested in the scenario where a set of unmanned aerial vehicles (UAVs) is deployed to gather time-sensitive data and send it to a server for analysis and decision-making [@bajracharya20226g; @wei2022uav]. To this end, the UAVs should dynamically adjust their trajectories and strategically select the subsets of users from whom data is collected so that the AoI is minimized. Specifically, we answer the question: what is the optimal frequency (or equivalently, the probability to select users) at which UAVs should visit and gather data from devices, and what are the optimal locations of UAVs over time so that the global AoI is minimized? ## Related work Minimizing the AoI in UAV-assisted networks is a daunting task. First, the dynamic movements of UAVs which are often constrained by limited energy resources make the optimization problem a challenging task. Second, the distribution of IoT devices and users across the target area can be uneven, which makes balancing data collection to minimize AoI across all users a complex problem. Recently, many works have investigated the AoI minimization in UAV-assisted networks. In [@abd2018average], the authors minimize the peak of AoI between source-destination pairs. To this end, the authors simultaneously optimize the UAV's flight trajectory and service time for packet transmissions. To solve the problem, they propose an iterative approach where the initial optimization is divided into sub-problems. Each sub-problem is solved analytically and a closed-form expression of the sub-solution is provided. Similarly, in [@gao2023aoi], the problem of the average peak of AoI is divided into two sub-problems. First, a clustering algorithm is proposed to determine the locations of data collection points. Then, the collection points are grouped into clusters, and finally, the flight trajectories of the UAVs are optimized using an ant colony optimization algorithm. 
In [@ndiaye2022age], a probabilistic approach is proposed to optimize the probabilities of associations between users and UAVs, and between UAVs and the base station, so that the AoI is minimized. The authors propose a convex reformulation of the problem, which is then solved numerically. The previously cited works propose heuristics to solve the AoI optimization. These methods suffer from several limitations. First, they do not scale well with high-dimensional variables. Additionally, their convergence time is considerably long, and they lack the ability to adapt and generalize to new setups [@arulkumaran2017deep]. To overcome these limitations, machine learning (ML) based approaches have been proposed. In [@sun2021aoi], the authors propose a deep-learning based method to obtain an efficient solution for the flight speed and the trajectory of a single UAV that collects data from IoT devices. A similar approach is proposed in [@liu2021average] where the AoI of ground users is minimized by simultaneously optimizing the trajectory of a UAV, the scheduling of information transmission, and energy harvesting for the ground users. The proposed approach uses deep reinforcement learning (DRL) to efficiently find optimal solutions. In [@zhu2022uav], the authors tackle the problem of AoI minimization by using a transformer network that outputs the optimal visiting order for the ground clusters. The transformer network is combined with a weighted A\* algorithm that is used to determine the most suitable hovering point for each cluster. Unlike previous works which considered a single-UAV setup, the authors in [@naby2023muti] consider a multi-UAV setup where the AoI is minimized. They introduce a centralized multi-agent reinforcement learning approach to optimize the UAV trajectories. The proposed scheme relies on centralized training, where information about the environment is shared between UAVs, and decentralized execution. Our paper presents two distinctive differences from existing works. 
First, instead of considering the association variables as binary, it rather deals with the probability that a UAV visits a given device. This probability can be interpreted as the frequency at which a UAV collects data from a device during its flight. Accordingly, the event of collecting data becomes stochastic, which justifies the use of the *expected AoI* as the target of our optimization problem. Second, unlike existing works, we propose a novel approach where a collection of DNNs is trained using unsupervised learning. The proposed approach guarantees accurate and robust results. ## Contribution In this paper, we aim to minimize the expected AoI while optimizing the stopping locations of UAVs and the probabilities of device selection. The probabilities of device selection can be interpreted as the frequencies at which UAVs visit devices during the target time. Our contributions can be summarized as follows.

- First, we provide a closed-form expression of the expected AoI of the network which involves the probabilities of selection of devices. Then, we formulate the AoI minimization problem as an optimization with quality of service constraints.

- To address the studied problem, we leverage the framework of EDNNs. The EDNN is based on training a collection of DNNs. Each DNN is trained individually in an unsupervised manner. The training of DNNs relies on the primal-dual formulation of the initial optimization problem.

- Our simulation results show that the proposed EDNN approach outperforms traditional DNNs, leading to a reduction of $29.5\%$ of the expected AoI.

## Organization The remainder of the paper is organized as follows. First, the system model is described in Section [2](#Sys){reference-type="ref" reference="Sys"}. The mathematical formulation of the problem is given in Section [3](#Prob){reference-type="ref" reference="Prob"}. In Section [4](#Ensemble){reference-type="ref" reference="Ensemble"}, we describe in detail the proposed solution. 
Next, in Section [5](#Simu){reference-type="ref" reference="Simu"}, we show the performance of our approach using simulation experiments. Finally, concluding remarks are provided in Section [6](#Conc){reference-type="ref" reference="Conc"}. # System Model {#Sys} Consider a wireless network where a set $\mathcal{I}$ of $I$ IoT devices periodically generate data updates. The data is transmitted to a server located at the base station (BS). Due to the restricted communication range of IoT devices, a set $\mathcal{U}$ of UAVs is deployed to collect data updates from IoT devices at regular intervals, and then re-transmit the collected data to the BS. During each time interval $t \in \mathcal{T}\triangleq\{0,\dots,T-1\}$, an IoT device $i$ sends its data to a UAV $u$ with probability $p_{i,u}[t]$ using the air-to-ground channel. Our aim is to timely collect the generated data so that its expected age is minimized. ## Communication Model To model the uplink channel between device $i$ and UAV $u$, we assume a block Rician-fading model, where the channel conditions remain constant over a time interval $t$. As a consequence, the channel response between device $i$ and UAV $u$ at time step $t$ is given by $$h_{i,u}[t] = \sqrt{\frac{\Phi}{\Phi+1}} {\xi}^{\rm LoS}_{i,u}[t] + \sqrt{\frac{1}{\Phi+1}} {\xi_{i,u}^{\rm NLoS}}[t],$$ where $\Phi$ represents the Rician factor, ${\xi}^{\rm LoS}_{i,u}[t]$ is the line-of-sight (LoS) component with magnitude $\left|{\xi}^{\rm LoS}_{i,u}[t]\right| = 1$, and ${\xi}^{\rm NLoS}_{i,u}[t]$ is the random non-line-of-sight (NLoS) component, modeled as a zero-mean, unit-variance circularly-symmetric complex Gaussian variable (i.e., with Rayleigh-distributed magnitude). Let $(x_u[t], y_u[t], H_u)$ be the 3D position of UAV $u$ at time interval $t$, where $H_u$ is the altitude of UAV $u$ that is assumed fixed. Similarly, we denote by $(x_i, y_i, 0)$ the position of device $i$. 
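As a quick numerical check, the Rician model above can be simulated directly. The snippet below is a sketch with an arbitrary Rician factor (not a value from the paper) and verifies that the average channel gain $\mathbb{E}\left(|h_{i,u}[t]|^2\right)$ is normalized to one, as implied by the unit-magnitude LoS term and unit-variance NLoS term.

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_channel(Phi, size, rng):
    """Sample h = sqrt(Phi/(Phi+1)) * xi_LoS + sqrt(1/(Phi+1)) * xi_NLoS,
    with a unit-magnitude LoS phase term and a zero-mean, unit-variance
    circularly-symmetric complex Gaussian NLoS term."""
    xi_los = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size))
    xi_nlos = (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2.0)
    return np.sqrt(Phi / (Phi + 1.0)) * xi_los + np.sqrt(1.0 / (Phi + 1.0)) * xi_nlos

h = rician_channel(Phi=10.0, size=100_000, rng=rng)   # Phi chosen for illustration
mean_gain = np.mean(np.abs(h) ** 2)   # E[|h|^2] = Phi/(Phi+1) + 1/(Phi+1) = 1
```

With this normalization, the average received power in the SNR expression that follows is governed entirely by the transmit power and the device--UAV distance.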
Hence, the distance between device $i$ and UAV $u$ during time interval $t$ is given by $d_{i,u}[t]=\sqrt{(x_{u}[t] - x_{i})^2 + (y_{u}[t] - y_{i})^2 + (H_{u})^2}$. We assume that devices use orthogonal frequency division multiple access (OFDMA) to communicate with the UAVs. Hence, the signal-to-noise ratio (SNR) of IoT device $i$ with respect to UAV $u$ at time slot $t$ is given by $$\Gamma_{i,u}[t] = \frac{P_{i}[t] \left|h_{i,u}[t]\right|^{2}}{\sigma^{2}d_{i,u}[t]^{2}},$$ where $P_i[t]$ is the transmit power of device $i$ during time interval $t$, and $\sigma^{2}$ is the variance of an additive white Gaussian noise. Accordingly, the rate of IoT device $i$ with respect to UAV $u$ during time slot $t$ can be expressed as $${R}_{i,u}[t] = B_{i,u}[t] \log_2\left(1 + \Gamma_{i,u}[t]\right),$$ where $B_{i,u}[t]$ is the allocated bandwidth between device $i$ and UAV $u$ during time slot $t$. We assume that the generated data from IoT devices is stored in a buffer until it is collected by a UAV for transmission. We also suppose that the size of the devices' buffers is large enough to save all the generated data during the entire time span $T$. For a successful and efficient data transmission between device $i$ and UAV $u$ during time interval $t$, the data rate $R_{i,u}[t]$ between device-UAV pair should exceed a predefined threshold denoted as $R^{\rm min}$. This threshold $R^{\rm min}$ is carefully chosen to guarantee that data updates can be transmitted almost instantaneously, ensuring rapid and reliable communication between the device and the UAV. Throughout their flights, UAVs make stops to collect data from subsets of IoT devices. We assume that the data collection time is negligible compared to the overall flight time. We also assume that the UAVs maintain a constant speed $V$ during their flight. Consequently, the total flight time of UAV $u$, denoted as $\zeta_u$, can be expressed as: $$\begin{aligned} \zeta_{u}(\!\boldsymbol{x}_u,\boldsymbol{y}_u\!)\!=\!\! 
\sum_{t=0}^{T-1}\frac{\sqrt{(x_{u}[t+1]-x_{u}[t])^2+(y_{u}[t+1]-y_{u}[t])^2}}{V}. \label{flighttime}\end{aligned}$$

## Age of Information

The objective of this work is to optimize the UAVs' 3D locations over time jointly with the probabilities to collect data while maximizing the freshness of the data updates. In particular, our aim is to minimize the expected AoI. The AoI is defined as the time elapsed since the last update was successfully received by the UAV. Let $\alpha_{i,u}[t]$ be the binary random variable indicating that UAV $u$ collects data from IoT device $i$ at time interval $t$, and let $p_{i,u}[t]$ be the probability that $\alpha_{i,u}[t]=1$. Specifically, $$\alpha_{i, u}[t]=\left\{\begin{array}{lc} 1, & \text { with probability } p_{i,u}[t] \\ 0, & \text { with probability } 1-p_{i,u}[t]. \end{array}\right. \label{alphauit}$$ We define the AoI of IoT device $i$ with respect to UAV $u$ at time interval $t\geq 1$ using a recursive formula as follows $$A_{i,u}[t]=\left(A_{i,u}[t-1]+1\right)\left(1-\alpha_{i,u}[t]\right), \label{Aiu}$$ where $A_{i,u}[0]=0$. Accordingly, when the data updates of device $i$ are not collected during time interval $t$ (i.e., $\alpha_{i,u}[t]=0$), the AoI is increased by one unit of time. Conversely, when the updates are transmitted, the AoI is reinitialized to zero. In this context, it is judicious to consider the expected AoI with respect to the probabilities of data collection over a number of intervals $T$. The following lemma provides a closed-form expression of the expected AoI.

**Lemma 1**. *The expected AoI $\mathbb{E}(A_{i,u}[t])$ for an IoT device $i$ associated with UAV $u$ at time step $t$ can be expressed as*

*$$\label{AgeClosedForm} \mathbb{E}(A_{i,u}[t])= \overline{p}_{i,u}[t] \left(1 + \overline{p}_{i,u}[t-1]+\sum_{k=1}^{t-1} \left(\prod_{j=1}^{k}\overline{p}_{i,u}[j]\,\overline{p}_{i,u}[j-1]\right)\right),$$ where $\overline{p}_{i,u}[t] = 1 - p_{i,u}[t]$, and the expectation $\mathbb{E}(.)$ is with respect to the probabilistic event that UAV $u$ collects data from IoT device $i$ at time $t$.*

*Proof.* We prove the lemma by induction.

**Base Case:** For $t = 1$, we have $$\mathbb{E}(A_{i,u}[1])= \overline{p}_{i,u}[1] \left(1 + \overline{p}_{i,u}[0]\right)=\overline{p}_{i,u}[1],$$ which matches the derived expression.

**Inductive Step:** Let us assume that the lemma holds for $t = n$, $1<n<T-1$, i.e., $$\label{ASSU} \mathbb{E}(A_{i,u}[n])=\overline{p}_{i,u}[n] \left(1 + \overline{p}_{i,u}[n-1] +\sum_{k=1}^{n-1} \left(\prod_{j=1}^{k}\overline{p}_{i,u}[j]\,\overline{p}_{i,u}[j-1]\right)\right).$$ In the following, we prove that it holds for $t = n+1$. $$\begin{aligned} \mathbb{E}(A_{i,u}[n+1]) &= \mathbb{E}\left((A_{i,u}[n]+1)(1-\alpha_{i,u}[n+1])\right)\\ &=(1-p_{i,u}[n+1])\,\mathbb{E}(A_{i,u}[n])+1-p_{i,u}[n+1]. \end{aligned}$$ Using our assumption in equation ([\[ASSU\]](#ASSU){reference-type="ref" reference="ASSU"}), we replace $\mathbb{E}(A_{i,u}[n])$ by its expression and obtain $$\begin{aligned} \mathbb{E}(A_{i,u}[n+1]) &=\overline{p}_{i,u}[n+1]\,\overline{p}_{i,u}[n] \left(1 + \overline{p}_{i,u}[n-1] +\sum_{k=1}^{n-1} \left(\prod_{j=1}^{k}\overline{p}_{i,u}[j]\,\overline{p}_{i,u}[j-1]\right)\right)+\overline{p}_{i,u}[n+1]. 
\end{aligned}$$ Finally, by arranging the expression above, we obtain $$\begin{aligned} \mathbb{E}(A_{i,u}[n+1])&= \overline{p}_{i,u}[n+1] \left(1 + \overline{p}_{i,u}[n] +\sum_{k=1}^{n} \left(\prod_{j=1}^{k}\overline{p}_{i,u}[j]\,\overline{p}_{i,u}[j-1]\right)\right). \end{aligned}$$ Therefore, the lemma holds for $t = n+1$. By induction, the lemma is proven for all $t \geq 1$. ◻

From equation ([\[AgeClosedForm\]](#AgeClosedForm){reference-type="ref" reference="AgeClosedForm"}), we can observe that when $p_{i,u}[t]=1$ (or equivalently $\overline{p}_{i,u}[t]=0$) for all $t\in \mathcal{T}$, i.e., the data is collected from user $i$ by UAV $u$ for all time intervals, the corresponding expected AoI becomes zero. Conversely, when $p_{i,u}[t]=0$, i.e., no data has been collected over the considered time, the expected AoI related to user $i$ and UAV $u$ reaches its maximum value, which is equal to $T$.

# Problem Formulation {#Prob}

The objective of this work is to minimize the expected AoI across devices during a number of time intervals $T$. The optimization problem involves finding the optimal probabilities $\boldsymbol{p}$ of selecting devices to collect data updates and the stopping points $(\boldsymbol{x},\boldsymbol{y})$ of UAVs over time, while considering various constraints. Accordingly, our problem is formulated as follows $$\begin{aligned} \min_{\boldsymbol{p},\boldsymbol{x},\boldsymbol{y}}\quad & \sum_{\substack{(i,u,t)\in\\ \mathcal{I}\times\mathcal{U}\times\mathcal{T}}} \overline{p}_{i,u}[t] \left(1+\overline{p}_{i,u}[t-1]+\sum_{k=1}^{t-1}\left(\prod_{j=1}^{k}\overline{p}_{i,u}[j]\,\overline{p}_{i,u}[j-1]\right)\right) \label{objective}\\ \text{s.t.}\quad & p_{i,u}[t]\,R_{i,u}[t]\geq R^{\rm min}, \quad \forall (i,u,t), \label{Rate}\\ & \sum_{u\in\mathcal{U}} p_{i,u}[t]\leq 1, \quad \forall (i,t), \label{Association2}\\ & \sum_{i\in\mathcal{I}} p_{i,u}[t]\leq N_u, \quad \forall (u,t), \label{CapacityUAV}\\ & \zeta_{u}(\boldsymbol{x}_u,\boldsymbol{y}_u)\leq \zeta^{\rm max}_{u}, \quad \forall u, \label{TimeUAV}\\ & 0\leq x_{u}[t]\leq x^{\rm max}, \quad \forall (u,t), \label{xposition}\\ & 0\leq y_{u}[t]\leq y^{\rm max}, \quad \forall (u,t), \label{yposition}\\ & 0\leq p_{i,u}[t]\leq 1, \quad \forall (i,u,t). \label{alpha} \end{aligned} \label{GeneralOptimizati}$$ Constraint ([\[Rate\]](#Rate){reference-type="ref" reference="Rate"}) ensures that the expected rate between each UAV and its served IoT device is above a predefined threshold $R^{\min}$. Constraint ([\[Association2\]](#Association2){reference-type="ref" reference="Association2"}) guarantees that a device can transmit to at most one UAV at a time, on average. 
Similarly, constraint ([\[CapacityUAV\]](#CapacityUAV){reference-type="ref" reference="CapacityUAV"}) ensures that the expected number of devices served by UAV $u$ does not exceed its maximum capacity $N_u$. Constraint ([\[TimeUAV\]](#TimeUAV){reference-type="ref" reference="TimeUAV"}) guarantees that each UAV $u$ adheres to a maximum flight time, denoted as $\zeta^{\rm max}_{u}$, which aligns with its energy budget. Constraints ([\[xposition\]](#xposition){reference-type="ref" reference="xposition"}) and ([\[yposition\]](#yposition){reference-type="ref" reference="yposition"}) limit UAVs' movements to a specific area. Constraint ([\[alpha\]](#alpha){reference-type="ref" reference="alpha"}) bounds the probabilities of device selection between $0$ and $1$. Solving the expected AoI minimization is challenging due to the non-convexity of both the objective function and constraints ([\[Rate\]](#Rate){reference-type="ref" reference="Rate"}) and ([\[TimeUAV\]](#TimeUAV){reference-type="ref" reference="TimeUAV"}). To address this problem, we leverage the power of EDNNs. EDNNs take advantage of the impressive ability of DNNs to approximate highly complex functions. Specifically, an EDNN is a collection of DNNs trained with different initial weights and training data. Each DNN model in the ensemble is individually trained and stored. During testing, the DNNs' results are combined using an aggregation rule (e.g., averaging). In the next section, we first explain how a single DNN model can efficiently solve the expected AoI minimization; then, we describe how the EDNN solution is leveraged to provide accurate results.

# Ensemble Deep Neural Networks based Approach {#Ensemble}

To address the constrained AoI problem, an alternative approach is to solve its primal-dual formulation. 
In fact, while the optimal solution of the dual problem may not necessarily be the optimal solution for the original AoI minimization (due to the non-convexity of the problem), it can still offer an efficient local optimum. Specifically, the Lagrangian function for the problem under study is defined in ([\[LossFunction\]](#LossFunction){reference-type="ref" reference="LossFunction"}). $$\begin{aligned} L(\boldsymbol{p},\boldsymbol{x},\boldsymbol{y},\boldsymbol{\mu})=&\sum \limits_{\substack{(i, u,t)\in\\\mathcal{I}\times \mathcal{U}\times \mathcal{T}}}\mathbb{E}(A_{i,u}[t]) +\sum \limits_{\substack{(i, u,t)\in\\\mathcal{I}\times \mathcal{U}\times \mathcal{T}}} \mu_{i,u,t}^1 C_{i,u,t}^1 +\sum \limits_{\substack{(i, t)\in\\\mathcal{I}\times \mathcal{T}}} \mu_{i,t}^2 C_{i,t}^2 +\sum \limits_{\substack{(u,t)\in\\\mathcal{U}\times \mathcal{T}}} \mu_{u,t}^3 C_{u,t}^3\\ &+\sum \limits_{u\in \mathcal{U}} \mu_{u}^4 C_{u}^4 +\sum \limits_{\substack{(u,t)\in\\ \mathcal{U}\times \mathcal{T}}} \mu^5_{u,t} C_{u,t}^5 +\sum \limits_{\substack{(u,t)\in\\\mathcal{U}\times \mathcal{T}}} \mu_{u,t}^6 C_{u,t}^6 +\sum \limits_{\substack{(i,u,t)\in\\\mathcal{I}\times \mathcal{U}\times \mathcal{T}}} \mu_{i,u,t}^7 C_{i,u,t}^7. 
\end{aligned} \label{LossFunction}$$ $(C_{.}^j)_{j=1}^7$ in ([\[LossFunction\]](#LossFunction){reference-type="ref" reference="LossFunction"}) captures the constraints of the problem, which are expressed as follows: $C_{i,u,t}^1=\operatorname{ReLU}\left(R^{\rm min}-p_{i,u}[t]R_{i,u}[t]\right)$, $C_{i,t}^2=\operatorname{ReLU}(\sum \limits_{u\in \mathcal{U}}p_{i,u}[t]- 1)$, $C_{u,t}^3=\operatorname{ReLU}(\sum \limits_{i \in \mathcal{I}}p_{i,u}[t]-N_u)$, $C_{u}^4=\operatorname{ReLU}\left(\zeta_{u}(\boldsymbol{x}_u,\boldsymbol{y}_u)-\zeta^{\rm max}_{u}\right)$, $C_{u,t}^5=\operatorname{ReLU}\left(x_{u}[t]-x^{\rm max}\right)$, $C_{u,t}^6=\operatorname{ReLU}\left(y_{u}[t]-y^{\rm max}\right)$, $C_{i,u,t}^7=\operatorname{ReLU}\left(p_{i,u}[t]-1\right)$, where $\operatorname{ReLU}(x)=\max(0,x)$ is the rectified linear function, and $(\boldsymbol{\mu}^j_{.})_{j=1}^7$ are the non-negative Lagrange multipliers. Accordingly, an alternative formulation to solve the expected AoI minimization problem is given by $$\label{lagr} \max \limits_{\{\boldsymbol{\mu}^j\}} \min \limits_{\boldsymbol{x},\boldsymbol{y},\boldsymbol{p}} L(\boldsymbol{p},\boldsymbol{x},\boldsymbol{y},\boldsymbol{\mu}),$$ whose solution yields the optimized variables $\boldsymbol{x}$, $\boldsymbol{y}$, and $\boldsymbol{p}$. To solve problem ([\[lagr\]](#lagr){reference-type="ref" reference="lagr"}), we leverage the ability of DNNs to approximate complex functions. To this end, the data collection probabilities and the scheduling of UAVs' locations are modeled as the output of a DNN. Specifically, $$(\boldsymbol{x},\boldsymbol{y},\boldsymbol{p})\triangleq f(\boldsymbol{w};\theta),$$ where $\boldsymbol{w}$ is the DNN's vector of weights, $\theta$ is the input vector composed of environment parameters (e.g., the transmit powers, bandwidth, channel gains, etc.), and $f(.)$ is the DNN model. Hence, to find the optimal data collection probabilities and effectively schedule the UAVs' locations over time, we adopt an unsupervised learning approach. 
This approach differs from traditional supervised learning, where the DNN's training depends on a numerical algorithm to solve the optimization problem. Instead, we utilize the Lagrangian of the optimization problem as a cost function to train the DNN in an unsupervised manner. Moreover, to optimize the Lagrange multipliers, we employ gradient ascent. Accordingly, at the $t^{\rm th}$ iteration, the variables of problem ([\[lagr\]](#lagr){reference-type="ref" reference="lagr"}) are optimized using stochastic gradient descent and gradient ascent as follows $$\boldsymbol{w}^{t+1}=\boldsymbol{w}^{t}-\eta \frac{\partial \hat{L}(\boldsymbol{w},\boldsymbol{\mu})}{\partial \boldsymbol{w}},$$ and, for all $j \in \{1,\dots,7\}$, $${(\boldsymbol{\mu}^j)}^{t+1}={(\boldsymbol{\mu}^j)}^t+\beta \frac{\partial \hat{L}(\boldsymbol{w},\boldsymbol{\mu})}{\partial\boldsymbol{\mu}^j}={(\boldsymbol{\mu}^j)}^t+\beta \hat{C}^j,$$ where $\boldsymbol{w}^t$ and ${(\boldsymbol{\mu}^j)}^t$ are the vectors of weights and Lagrange multipliers at the $t^{\rm th}$ iteration, respectively, $\hat{L}(\boldsymbol{w},\boldsymbol{\mu})$ is the expected value of the Lagrangian function over a batch of input data, and $\hat{C}^j$ is the expected value of the $j^{\rm th}$ constraint over the same batch. Finally, $\eta$ and $\beta$ are the learning rates of stochastic gradient descent and ascent, respectively. ![image](images/qgevsepoch.eps){width="1\\linewidth"} ![image](images/agetestbar.eps){width="1\\linewidth"} ![image](images/agenvseddnsize.eps){width="1\\linewidth"} The DNN is trained with the aim of outputting the optimal UAV positions $\boldsymbol{x}$, $\boldsymbol{y}$, and data collection probabilities $\boldsymbol{p}$. To ensure that each UAV's position remains within the target area, a $\operatorname{ReLU}$ function is applied to the outputs related to the UAVs' 2D positions at the output layer. 
Similarly, a $\operatorname{Sigmoid}$ function is applied to the outputs related to the data collection probabilities. These activation functions guarantee that the DNN's outputs are bounded within the specified intervals. To enhance the generalization performance and ensure the robustness of the proposed DNN, we leverage the framework of EDNNs. Compared to single DNNs, EDNNs combine multiple DNN models into an ensemble, which leads to enhanced accuracy and robustness [@ganaie2022ensemble]. In the context of AoI minimization, the EDNN improves the generalization ability of the model to unseen scenarios and handles the uncertainty of the wireless environment. In the following, we describe how the EDNN is efficiently trained and tested. The description of the proposed approach is provided in Algorithm [\[alg:3\]](#alg:3){reference-type="ref" reference="alg:3"}. 1. **Training EDNN**: We implement an EDNN structure in which the DNN models within the ensemble share the same architecture. However, each DNN is initialized and trained using different initial weights and training sets. In fact, to achieve an efficient training of EDNNs and avoid overfitting, it is important to ensure a minimal overlap in the datasets used to train each DNN within the ensemble. For the studied AoI problem, the input data is composed of the channel gains, the bandwidth allocations, the transmit powers, and the locations of IoT devices. The data is generated randomly and is equally divided between the DNNs within the ensemble. Then, each DNN is initialized randomly. At each iteration of the training, a mini-batch is randomly selected to perform gradient descent and ascent updates. 2. **Testing EDNN**: During the testing phase, the test set is drawn from the same distribution as the training data. Each trained DNN is provided with the test data and produces the UAVs' scheduling and data collection probabilities. 
The final output of the EDNN is computed by taking a weighted average of all the output vectors, where the weights are proportional to the AoI achieved by each DNN within the ensemble. It is important to note that, due to the computational complexity of the training and the relatively small performance gain obtained by adding more DNNs, the number of models in the EDNN is kept small (generally up to $10$). # Simulation Results {#Simu} To evaluate the performance of the proposed approach, we consider an area of $1000$ m $\times$ $1000$ m, where $30$ IoT devices are randomly scattered. We also suppose that $3$ UAVs are deployed to collect the data and keep it as fresh as possible. The UAVs hover at altitudes between $80$ m and $100$ m. Moreover, the devices are assigned a fixed bandwidth, randomly picked from $[1.5,2]$ GHz, and a constant power from $[0,1]$ mW. To satisfy the quality-of-service constraint, the minimum rate is set to $150$ Kbit/s. The parameters of our simulation setup are summarized in Table [1](#tab:my-table-parameter){reference-type="ref" reference="tab:my-table-parameter"}.

  **Parameter**   **Value**       **Parameter**   **Value**
  --------------- --------------- --------------- --------------
  $I$             $30$            $U$             $3$
  $x_{max}$       $1000$ m        $y_{max}$       $1000$ m
  $H_u$           $[80,100]$ m    $R_{min}$       $150$ Kbit/s
  $N_u$           $8$             $T$             $40$
  $B_{i,u}$       $[1.5,2]$ GHz   $\sigma^{2}$    $-120$ dBm
  $P_{i}$         $[0,1]$ mW                      

  : Experiment setup

The mini-batch size and the gradient descent learning rate are set to $50$ and $0.001$, respectively. At each iteration, the number of epochs is $150$. Moreover, the step size $\beta$ for updating the penalty parameters is set to $0.1$. For each single DNN, the number of neurons from the input layer to the output layer is given as $\{600,1200,2400,4800\}$. Finally, the ensemble size is set to $8$. In Fig.  
[\[fig:AoIvsepoch\]](#fig:AoIvsepoch){reference-type="ref" reference="fig:AoIvsepoch"}, we observe a consistent reduction in the expected AoI for both the DNN and the EDNN during the training phase. Moreover, the figure shows that, by the end of the training, the EDNN achieves a substantial decrease in the average AoI. These results are confirmed in the testing phase, as illustrated by Fig. [\[fig:AoItestphase\]](#fig:AoItestphase){reference-type="ref" reference="fig:AoItestphase"}. Specifically, Fig. [\[fig:AoItestphase\]](#fig:AoItestphase){reference-type="ref" reference="fig:AoItestphase"} plots the expected AoI in the test and compares it with that of a single DNN and of a numerical method (based on the interior-point algorithm). As can be seen from the figure, the EDNN approach outperforms both baselines, achieving a reduction of approximately $29.5\%$ compared to the DNN and of approximately $35.5\%$ compared to the numerical method. In Figure [\[fig:aoivsednnsize\]](#fig:aoivsednnsize){reference-type="ref" reference="fig:aoivsednnsize"}, we investigate the impact of the ensemble size of the EDNN on the achieved expected AoI during the test. It can be seen from the figure that, as the ensemble size increases, the expected AoI is further minimized, which indicates the potential for even better performance with larger ensemble sizes. # Conclusion {#Conc} In this paper, we studied the problem of AoI minimization in UAV-assisted networks. Specifically, we proposed an EDNN-based approach to efficiently schedule the UAVs' 2D positions over time and optimize the data collection probabilities. The EDNN is trained using unsupervised learning, which relies on the minimization of the Lagrangian function of the studied problem. Our simulation results show that the proposed approach outperforms traditional DNNs in minimizing the AoI. 
# Acknowledgment {#acknowledgment .unnumbered} This document has been produced with the financial assistance of the European Union (Grant no. DCI-PANAF/2020/420-028), through the African Research Initiative for Scientific Excellence (ARISE) pilot programme. ARISE is implemented by the African Academy of Sciences with support from the European Commission and the African Union Commission.
--- abstract: | Stochastic approximation (SA) is a powerful and scalable computational method for iteratively estimating the solution of optimization problems in the presence of randomness, and it is particularly well suited for large-scale and streaming-data settings. In this work, we propose a theoretical framework for stochastic approximation applied to non-parametric least squares in reproducing kernel Hilbert spaces (RKHS), enabling online statistical inference in non-parametric regression models. We achieve this by constructing asymptotically valid pointwise (and simultaneous) confidence intervals (bands) for local (and global) inference of the nonlinear regression function, via an online multiplier bootstrap approach applied to a functional stochastic gradient descent (SGD) algorithm in the RKHS. Our main theoretical contributions consist of a unified framework for characterizing the non-asymptotic behavior of the functional SGD estimator and demonstrating the consistency of the multiplier bootstrap method. The proof techniques involve the development of a higher-order expansion of the functional SGD estimator under the supremum norm metric and the Gaussian approximation of suprema of weighted and non-identically distributed empirical processes. Our theory specifically reveals an interesting relationship between the tuning of step sizes in SGD for estimation and the accuracy of uncertainty quantification. 
author: - "Meimei Liu[^1]  , Zuofeng Shang[^2]  and Yun Yang[^3]" bibliography: - ref.bib title: Scalable Statistical Inference in Non-parametric Least Squares --- # Introduction Stochastic approximation (SA) [@robbins1951stochastic; @ruppert1988efficient; @bottou2008tradeoffs] is a class of iterative stochastic algorithms to solve the stochastic optimization problem $\min_{\theta\in\Theta}\big\{\mathcal L(\theta):\,= \mathbb{E}_{Z}[\ell(\theta;Z)]\big\}$, where $\ell(\theta;z)$ is some loss function, $Z$ denotes the internal random variable, and $\Theta$ is the domain of the loss function. Statistical inference, such as parameter estimation, can be viewed as a special case of stochastic optimization where the goal is to estimate the minimizer $\theta^\ast=\mathop{\mathrm{argmin}}_{\theta\in\Theta} \mathcal L(\theta)$ of the expected loss function $\mathcal L(\theta)$ based on a finite number of i.i.d. observations $\{Z_1,\ldots,Z_n\}$. Classical estimation procedures based on minimizing an empirical version $\mathcal L_n(\theta)=n^{-1}\sum_{i=1}^n \ell(\theta;Z_i)$ of the loss correspond to the sample average approximation (SAA) [@kleywegt2002sample; @kim2015guide] for solving the stochastic optimization problem. However, directly minimizing $\mathcal L_n$ with massive data is computationally wasteful in both time and space, and may pose numerical challenges. For example, in applications involving streaming data where new and dynamic observations are generated on a continuous basis, it may not be necessary or feasible to store all historical data. Instead, stochastic gradient descent (SGD), or the Robbins-Monro type SA algorithm [@robbins1951stochastic], is a scalable approximation algorithm for parameter estimation with constant per-iteration time and space complexity. 
SGD can be viewed as a stochastic version of the gradient descent method that uses a noisy gradient, such as $\nabla\ell(\cdot\,;Z)$ based on a single $Z$, to replace the true gradient $\nabla \mathcal L(\cdot)$. In this work, we explore the use of SA for statistical inference in infinite-dimensional models where $\Theta$ is a functional space or, more precisely, in solving non-parametric least squares in reproducing kernel Hilbert spaces (RKHS). Consider the standard random-design non-parametric regression model $$\label{eq:model:0} Y_i = f^\ast(X_i) + \epsilon_i, \quad \epsilon_i \sim N(0,\sigma^2) \quad \; \textrm{for} \; i=1,\cdots, n,$$ with $X_i\in\mathcal{X}$ denoting the $i$-th copy of random covariate $X$, $Y_i$ the $i$-th copy of response $Y$, and $f^\ast$ the unknown regression function in a reproducing kernel Hilbert space (RKHS, [@aronszajn1950theory; @wahba1990spline]) $\mathbb H$ to be estimated. For simplicity, we assume that $\mathcal X=[0,1]^d$ is the unit cube in $\mathbb R^d$. Since $f^\ast$ minimizes the population-level expected squared error loss objective $\mathcal L(f) = \mathbb E\big[\ell\big(f;\,(X,Y)\big)\big]$ over all functions $f:\,\mathcal X\to \mathbb R$, with $\ell\big(f;\,(X,Y)\big)=(f(X)-Y)^2$ representing the squared loss function, one can adopt the SAA approach to estimate $f^\ast$ by minimizing a penalized sample-level squared error loss objective. Given a sample $\{(X_i,Y_i)\}_{i=1}^n$ of size $n$, a commonly used SAA approach for estimating $f$ is kernel ridge regression (KRR). KRR incorporates a penalty term that depends on the norm $\|\cdot\|_{\mathbb H}$ associated with the RKHS $\mathbb H$. Although the KRR estimator enjoys many attractive statistical properties [@koltchinskii2006local; @mendelson2002geometric; @yang2017], its computational complexity of $\mathcal{O}(n^3)$ time and $\mathcal{O}(n^2)$ space hinders its practicality in large-scale problems [@saunders1998ridge]. 
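As a point of reference, the KRR baseline above can be sketched in a few lines; the Gaussian kernel and the regularization level below are illustrative choices, and the $\mathcal O(n^2)$ storage of the Gram matrix and the $\mathcal O(n^3)$ dense solve are visible directly in the code.

```python
import numpy as np

def krr_fit(X, y, kernel, lam):
    """Kernel ridge regression (an SAA approach): f_hat(x) = k(x)^T alpha,
    where alpha solves (K + n*lam*I) alpha = y. Forming the Gram matrix K
    takes O(n^2) space and the dense linear solve takes O(n^3) time."""
    n = len(X)
    K = kernel(X[:, None], X[None, :])                 # n x n Gram matrix
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda x: kernel(np.atleast_1d(x)[:, None], X[None, :]) @ alpha

# toy example: recover f*(x) = x^2 from noiseless samples
gauss = lambda a, b: np.exp(-(a - b) ** 2)             # illustrative kernel
X = np.linspace(0.0, 1.0, 20)
f_hat = krr_fit(X, X ** 2, gauss, lam=1e-4)
```

Every prediction requires the stored training sample and the fitted coefficient vector, which is exactly the memory burden that the online SA-type approach considered next avoids.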
In this work, we instead consider an SA-type approach for directly minimizing the functional $\mathcal{L}(f)$ over the infinite-dimensional RKHS. By operating SGD in this non-parametric setting (see Section [2.2](#sec:sgd:problem){reference-type="ref" reference="sec:sgd:problem"} for details), the resulting algorithm achieves $\mathcal{O}(n^2)$ time complexity and $\mathcal{O}(n)$ space complexity. In a recent study [@Bach2016], the authors demonstrate that the online estimator of $f$ resulting from the SGD achieves optimal rates of convergence for a variety of $f\in \mathbb H$. It is interesting to note that since the functional gradient is defined with respect to the RKHS norm $\|\cdot\|_{\mathbb H}$, the functional SGD implicitly induces an algorithmic regularization due to the "early stopping" in the RKHS, which is controlled by the accumulated step sizes. Therefore, with a proper step-size decaying scheme, no explicit regularization is needed to achieve optimal convergence rates. The aim of this research is to take a step further by constructing a new inferential framework for quantifying the estimation uncertainty in the SA procedure. This will be achieved through the construction of pointwise confidence intervals and simultaneous confidence bands for the functional SGD estimator of $f$. Previous SGD algorithms and their variants, such as those discussed in [@bottou2008tradeoffs; @bottou1998online; @le2011optimization; @bottou2010large; @cao2019generalization; @cheridito2021non], are mainly utilized to solve finite-dimensional parametric learning problems with a root-$n$ convergence rate. In the parametric setting, asymptotic properties of estimators arising in SGD, such as consistency and asymptotic normality, have been well established in the literature; for example, see [@Bach2016; @P1992; @Fang2017; @Chen2016]. However, the problem of uncertainty quantification for functional SGD estimators in non-parametric settings is rarely addressed in the literature. 
In the parametric setting, several methods have been proposed to conduct uncertainty quantification in SGD. [@nesterov2008; @nemirovski2009] appear to be among the first to formally characterize the magnitudes of random fluctuations in SA; however, their notion of confidence level is based on the large deviation properties of the solution and can be quite conservative. More recently, [@Fang2017] proposes applying a multiplier bootstrap method for the construction of SGD confidence intervals, whose asymptotic confidence level is shown to exactly match the nominal level. [@Chen2016] proposes a batch mean method to estimate the asymptotic covariance matrix of the estimator based on a single SGD trajectory. Due to the limited information from a single run of SGD, the best achievable error of their confidence interval (in terms of coverage probability) is of the order $\mathcal O(n^{-1/8})$, which is worse than the error of the order $\mathcal O(n^{-1/3})$ achieved by the multiplier bootstrap. [@su2018uncertainty] proposes a different method called HiGrad, which constructs a hierarchical tree of SGD estimators and uses the outputs at its leaves to construct confidence intervals. In this work, we develop a multiplier bootstrap method for uncertainty quantification in SA for solving online non-parametric least squares. Bootstrap methods [@efron1994introduction; @diciccio1988review] are widely used in statistics to estimate the sampling distribution of a statistic for uncertainty quantification. Traditional resampling-based bootstrap methods are unsuitable for streaming data inference as the resampling step necessitates storing all historical data, which contradicts the objective of maintaining constant space and time complexity in online learning. Instead, we extend the parametric online multiplier bootstrap method from [@Fang2017] to the non-parametric setting. 
We achieve this by employing a perturbed stochastic functional gradient, represented as an element of the RKHS and evaluated upon the arrival of each new covariate-response pair $(X_i,Y_i)$, to capture the stochastic fluctuation arising from the random streaming data. To theoretically justify the use of the proposed multiplier bootstrap method, we make two main contributions. First, we build a novel theoretical framework to characterize the non-asymptotic behavior of the infinite-dimensional functional SGD estimator via expanding it into higher orders under the supremum norm metric. This framework enables us to perform local inference to construct pointwise confidence intervals for $f$ and global inference to construct a simultaneous confidence band. Second, we demonstrate the consistency of the multiplier bootstrap method by proving that the perturbation injected into the stochastic functional gradient accurately mimics the randomness pattern in the online estimation procedure, so that the conditional law of the bootstrapped functional SGD estimator given the data asymptotically coincides with the sampling law of the functional SGD estimator. Our proof is non-trivial and contains several major improvements that refine the best (to our knowledge) convergence analysis of SGD for non-parametric least squares in [@Bach2016], and also advances the consistency analysis of the multiplier bootstrap in a non-parametric setting. Concretely, in [@Bach2016], the authors derive the convergence rate of the functional SGD estimator relative to the $L_2$ norm metric. Their theory only concerns the $L_2$ convergence rate of the estimation; hence, the proof involves decomposing the SGD recursion into a leading first-order recursion and the remaining higher-order recursions, and bounding their $L_2$ norms by directly bounding their expectations. 
In comparison, our analysis for statistical inference in online non-parametric regression requires a functional central limit theorem type result and calls for several substantial refinements in proof techniques. Our first improvement is to refine the SGD recursion analysis by using a stronger supremum norm metric. This enables us to accurately characterize the stochastic fluctuation of the functional estimator uniformly across all locations. As a result, we can study the coverage probability of simultaneous confidence bands in our subsequent inference tasks. Analyzing the supremum convergence is significantly more intricate than analyzing the $L_2$ convergence. In the proof, we introduce an augmented RKHS different from $\mathbb H$ as a bridge in order to better align its induced norm with the supremum metric; see Remark [Remark 2](#remark:3_2){reference-type="ref" reference="remark:3_2"} or equation [\[eqn:augmented\]](#eqn:augmented){reference-type="eqref" reference="eqn:augmented"} in Section [6](#sec:proof_sketch){reference-type="ref" reference="sec:proof_sketch"} for further details. Additionally, we have to employ uniform laws of large numbers and leverage ideas from empirical processes to uniformly control certain stochastic terms that emerge in the expansions. Our second improvement comes from the need of characterizing the large-sample distributional limit of the functional SGD estimator. By using the same recursion decomposition, we must now analyze a high-probability supremum norm bound for recursions of all orders and determine the large-sample distributional limit of the leading term in the expansion. It is worth noting that the second-order recursion is the most complicated and challenging one to analyze. This recursion requires specialized treatment that involves substantially more effort than the remaining higher-order recursions. 
A loose analysis, achieved by directly converting an $L_2$ norm bound into the supremum norm bound using the reproducing kernel property of the original RKHS $\mathbb H$ --- which suffices for bounding the higher-order recursions --- might result in a bound whose order is comparable to that of the leading term. This is where we introduce an augmented RKHS and directly analyze the supremum norm using empirical process tools. Last but not least, in order to analyze the distributional limit of the leading bias and variance terms appearing in the expansion of the functional SGD estimator, we develop new tools by extending the recent technique of Gaussian approximation of suprema of empirical processes [@chernozhukov2014gaussian] from equally weighted sums to weighted sums. This extension is important and unique for analyzing functional SGD, since earlier-arrived data points will have larger weights in the leading bias and variance terms than later-arrived data points; see Remark [Remark 3](#rem:asym_var){reference-type="ref" reference="rem:asym_var"} for more discussions. Towards the analysis of our bootstrap procedure, we further develop Gaussian approximation bounds for multiplier bootstraps for suprema of weighted and non-identically distributed empirical processes, which can be used to control the Kolmogorov distance between the sampling distribution of the pointwise evaluation (local inference) of the functional SGD estimator, or of its supremum norm (global inference), and that of its bootstrap counterpart. Our results also elucidate the interplay between early stopping (controlled by the step size) for optimal estimation and the accuracy of uncertainty quantification. The rest of the article is organized as follows. 
In Section [2](#sec:background){reference-type="ref" reference="sec:background"} we introduce the background of RKHS and the functional stochastic gradient descent algorithms in the RKHS; in Section [3](#sec:SGD_inference){reference-type="ref" reference="sec:SGD_inference"}, we establish the distributional convergence of SGD for non-parametric least squares; in Section [4](#sec:bootstrap_SGD){reference-type="ref" reference="sec:bootstrap_SGD"}, we develop the scalable uncertainty quantification in RKHS via multiplier bootstrapped SGD estimators; Section [5](#sec:numerical){reference-type="ref" reference="sec:numerical"} includes extensive numerical studies to demonstrate the performance of the proposed SGD inference. Section [6](#sec:proof_sketch){reference-type="ref" reference="sec:proof_sketch"} presents a sketched proof highlighting some important technical details and key steps; Section [7](#sec:dis){reference-type="ref" reference="sec:dis"} provides an overview and future directions for our work. Section [8](#sec:key_proof){reference-type="ref" reference="sec:key_proof"} includes some key proofs of the theorems. In this paper, we use $C, C', C_1, C_2,\dots$ to denote generic positive constants whose values may change from one line to another, but are independent of everything else. We use the notation $\|f\|_{\infty}$ to denote the supremum norm of a function $f$, defined as $\|f\|_{\infty} = \sup_{x\in \mathcal{X}} |f(x)|$, where $\mathcal{X}$ is the domain of $f$. The notations $a \lesssim b$ and $a\gtrsim b$ denote inequalities up to a constant multiple; we write $a\asymp b$ when both $a\lesssim b$ and $a \gtrsim b$ hold. For $k>0$, let $\lfloor k \rfloor$ denote the largest integer smaller than or equal to $k$. For two operators $M$ and $N$, we write $M \preccurlyeq N$ if $N-M$ is positive semi-definite. 
# Background and Problem Formulation {#sec:background} We begin by introducing some background on reproducing kernel Hilbert spaces (RKHS) and functional stochastic gradient descent algorithms in the RKHS. ## Reproducing kernel Hilbert spaces {#sub:rkhs} To describe the structure of the regression function $f$ in the non-parametric regression model [\[eq:model:0\]](#eq:model:0){reference-type="eqref" reference="eq:model:0"}, we adopt the standard framework of a reproducing kernel Hilbert space (RKHS, [@wahba1990spline; @berlinet2011reproducing; @gu2013smoothing]) by assuming that $f^\ast=\mathop{\mathrm{argmin}}_{f} \mathbb E\big[(f(X)-Y)^2\big]$ belongs to an RKHS $\mathbb H$. Let $\mathbb P_X$ denote the marginal distribution of the random design $X$, and let $L^2(\mathbb P_X)=\big\{f:\,\mathcal X\to\mathbb R\,\big|\,\int_{\mathcal X} f^2(x)\,\mathbb P_X(dx) <\infty\big\}$ denote the space of all square-integrable functions over $\mathcal X$ with respect to $\mathbb P_X$. Briefly speaking, an RKHS is a Hilbert space $\mathbb H\subset L^2(\mathbb P_X)$ of functions defined over a set $\mathcal X$, equipped with an inner product $\langle\cdot,\,\cdot\rangle_{\mathbb H}$, so that for any $x\in\mathcal X$, the evaluation functional at $x$ defined by $L_x(f) = f(x)$ is a continuous linear functional on the RKHS. Uniquely associated with $\mathbb H$ is a positive-definite function $K:\mathcal X\times \mathcal X\to \mathbb R$, called the reproducing kernel. The key property of the reproducing kernel is that it satisfies the reproducing property: the evaluation functional $L_x$ can be represented by the reproducing kernel function $K_x:\,=K(x,\,\cdot)$ so that $f(x) = L_x(f) =\langle K_x,\,f\rangle_{\mathbb H}$. 
According to Mercer's theorem [@aronszajn1950theory], the kernel function $K$ has the following spectral decomposition: $$\label{M:decom:K} K(x,x') = \sum_{j=1}^\infty \mu_j \,\phi_j(x)\,\phi_j(x'), \,\,\,\,x,x'\in\mathcal{X},$$ where the convergence is absolute and uniform on $\mathcal X\times\mathcal X$. Here, $\mu_1 \geq \mu_2 \geq \cdots \geq 0$ is the sequence of eigenvalues, and $\{\phi_{j}\}_{j=1}^{\infty}$ are the corresponding eigenfunctions forming an orthonormal basis in $L^2(\mathbb P_X)$, with the following property: for any $j,k \in \mathbb{N}$, $$\langle \phi_{j}, \phi_{k}\rangle_{L^2(\mathbb P_X)} = \delta_{jk} \quad \mbox{and} \quad \langle \phi_{j}, \phi_{k}\rangle_{\mathbb{H}} = \delta_{jk}/\mu_j,$$ where $\delta_{jk} =1$ if $j=k$ and $\delta_{jk} =0$ otherwise. Moreover, any $f\in \mathbb H$ can be decomposed into $f=\sum_{j=1}^\infty f_j \phi_j$ with $f_j=\langle f, \phi_j \rangle_{L^2(\mathbb P_X)}$, and its RKHS norm can be computed via $\|f\|_{\mathbb H}^2 = \sum_{j=1}^\infty \mu_j^{-1} f_j^2$. We introduce some technical conditions on the reproducing kernel $K$ in terms of its spectral decomposition. **Assumption 1**. *The eigenfunctions $\{\phi_k\}_{k=1}^\infty$ of $K$ are uniformly bounded on $\mathcal{X}$, i.e., there exists a finite constant $c_\phi>0$ such that $\sup_{k\ge 1} \|\phi_k\|_{\infty} \le c_\phi$. Moreover, they satisfy the Lipschitz condition $|\phi_k(s)-\phi_k(t)|\leq L\, k\, |s-t|$ for any $s, t \in[0,1]$, where $L$ is a finite constant.* **Assumption 2**. *The eigenvalues $\{\mu_k\}_{k=1}^\infty$ of $K$ satisfy $\mu_k \asymp k^{-\alpha}$ for some $\alpha> 1$.* The uniform boundedness condition in Assumption [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"} is common in the literature [@mendelson2010regularization]. Assumption [Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"} assumes the kernel to have polynomially decaying eigenvalues. 
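As a quick numerical illustration of the spectral formulas above, one can truncate Mercer's expansion and evaluate the RKHS norm $\|f\|_{\mathbb H}^2=\sum_j \mu_j^{-1}f_j^2$ of a finitely supported expansion. The choices $\alpha=2$, $\mu_k=k^{-\alpha}$, and the cosine basis below are placeholder assumptions, not values prescribed by our theory.

```python
import numpy as np

ALPHA = 2.0          # polynomial eigenvalue decay, mu_k = k^{-alpha} (Assumption 2)
J = 200              # truncation level of Mercer's expansion

def mu(k):
    """Toy eigenvalue sequence with polynomial decay."""
    return k ** (-ALPHA)

def phi(k, x):
    """A uniformly bounded orthonormal basis on [0,1] (illustrative choice)."""
    return np.sqrt(2.0) * np.cos(np.pi * k * x)

def K_trunc(x, xp):
    """Truncated Mercer expansion K(x,x') ~= sum_k mu_k phi_k(x) phi_k(x')."""
    ks = np.arange(1, J + 1)
    return float(np.sum(mu(ks) * phi(ks, x) * phi(ks, xp)))

def rkhs_norm_sq(coefs):
    """||f||_H^2 = sum_j f_j^2 / mu_j for f = sum_j f_j phi_j."""
    ks = np.arange(1, len(coefs) + 1)
    return float(np.sum(np.asarray(coefs) ** 2 / mu(ks)))
```

For instance, the function $f=\phi_1 + 0.5\,\phi_2$ has $\|f\|_{\mathbb H}^2 = 1/\mu_1 + 0.25/\mu_2 = 2$ under these toy eigenvalues, showing how higher-frequency components are penalized more heavily by the RKHS norm.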
Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"} together also imply that the kernel function is bounded, as $\sup_x K(x,x)\leq c^2_\phi \sum_{k=1}^\infty \mu_k := R^2$. One special class of kernels satisfying Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"} is composed of translation-invariant kernels $K(t,s)=g(t-s)$ for some even function $g$ of period one. In fact, by utilizing the Fourier series expansion of the function $g$, we observe that the eigenfunctions of the corresponding kernel $K$ are the trigonometric functions $$\phi_{2k-1}(x) = \sin (\pi k x), \quad \phi_{2k}(x)= \cos (\pi k x), \quad k=1,2, \dots$$ on $\mathcal{X}=[0,1]$. It is easy to see that we can choose $c_\phi=1$ and $L = \pi$ to satisfy Assumption [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}. Although we primarily consider kernels with eigenvalues that decay polynomially for the sake of clarity in this paper, it is worth mentioning that our theory extends to other kernel classes, such as squared exponential kernels and polynomial kernels [@bach2017equivalence]. ## Stochastic gradient descent in RKHS {#sec:sgd:problem} To motivate functional SGD in the RKHS, we first review SGD in the Euclidean setting for minimizing the expected loss function $\mathcal L(\theta) = \mathbb E_Z[\ell(\theta;Z)]$, where $\theta\in \mathbb{R}^d$ is the parameter of interest, $\ell:\,\mathbb R^d \times \mathcal Z\to\mathbb R$ is the loss function and $Z$ denotes a generic random sample, e.g. $Z=(X,Y)$ in the non-parametric regression setting [\[eq:model:0\]](#eq:model:0){reference-type="eqref" reference="eq:model:0"}. 
By first-order Taylor's expansion, one can locally approximate $\mathcal L(\theta + s)$ for any small deviation $s$ by $\mathcal L(\theta+s) \approx \mathcal L(\theta) + \langle \nabla \mathcal L(\theta),\, s\rangle$, where $\nabla \mathcal L(\theta)$ denotes the gradient (vector) of $\mathcal L(\cdot)$ evaluated at $\theta$. The gradient $\nabla \mathcal L(\theta)$ therefore encodes the (infinitesimal) steepest descent direction of $\mathcal L$ at $\theta$, leading to the following *gradient descent* (GD) updating formula: $$\begin{aligned} \widehat{\theta}_i = \widehat{\theta}_{i-1} - \gamma_i \,\nabla \mathcal L (\widehat{\theta}_{i-1}), \quad\mbox{for}\quad i=1,2,\ldots,\end{aligned}$$ starting from some initial value $\widehat\theta_0$, where $\gamma_i>0$ is the step size (also called learning rate) at iteration $i$. GD typically requires the computation of the full gradient $\nabla \mathcal L(\theta)$, which is unavailable due to the unknown data distribution of $Z$. In stochastic approximation, SGD takes a more efficient approach by using an unbiased estimate of the gradient, $G_i(\theta)= \nabla \ell(\theta;Z_i)$, based on one sample $Z_i$ to substitute $\nabla \mathcal L(\theta)$ in the updating rule. Accordingly, the SGD updating rule takes the form of $$\begin{aligned} \widehat{\theta}_i = \widehat{\theta}_{i-1} - \gamma_i \,G_i(\widehat{\theta}_{i-1}), \quad\mbox{for}\quad i=1,2,\ldots.\end{aligned}$$ Let us now extend the concept of SGD from minimizing an expected loss function in Euclidean space to minimizing an expected loss functional in function space. Here for concreteness, we develop SGD for minimizing the expected squared error loss $\mathcal L(f)=\mathbb E\big[(f(X) - Y)^2\big]$ over an RKHS $\mathbb H$ equipped with inner product $\langle \cdot,\cdot\rangle_{\mathbb H}$. Let us begin by extending the concept of the "gradient". 
By identifying the gradient (operator) $\nabla \mathcal L:\mathbb H \to \mathbb H$ of the functional $\mathcal L(\cdot)$ as the steepest descent "direction\" in $\mathbb H$ through the following first-order "Taylor expansion\" $$\begin{aligned} \mathcal L(f) = \mathcal L(g) + \langle \nabla \mathcal L(g),\, f-g\rangle_{\mathbb H} + \mathcal O\big(\|f-g\|_{\mathbb H}^2\big),\ \ \mbox{as }f\to g,\notag\end{aligned}$$ we obtain after some simple algebra that $$\begin{aligned} \langle \nabla \mathcal L(g),\, f-g\rangle_{\mathbb H} + \mathcal O\big(\|f-g\|_{\mathbb H}^2\big) &\,= \mathcal L(f) - \mathcal L(g) \\ & \,= \mathbb E\big[\big(f(X) - g(X)\big)\cdot \big(g(X)-Y\big)\big] + \mathbb E\big[(f(X) - g(X))^2\big].\end{aligned}$$ Now by using the reproducing property $h(x) = \langle h,\, K_x\rangle_{\mathbb H}$ for any $h\in\mathbb H$, we further obtain $$\begin{aligned} \label{eqn:RKHS_grad} \langle \nabla \mathcal L(g),\, f-g\rangle_{\mathbb H} = \big\langle \mathbb E\big[\big(g(X) -Y\big)K_X\big],\, f-g\big\rangle_{\mathbb H} + \mathcal O\big(\|f-g\|_{\mathbb H}^2\big).\end{aligned}$$ Here, we have used the fact that by the Cauchy-Schwarz inequality, $$(f(x)-g(x))^2 = \langle f-g,\, K_x\rangle^2 \leq \|f-g\|_{\mathbb H}^2 \cdot \|K_x\|_{\mathbb H}^2 = K(x,x)\,\|f-g\|_{\mathbb H}^2 = \mathcal O\big(\|f-g\|_{\mathbb H}^2\big),$$ since Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"} together with Mercer's expansion [\[M:decom:K\]](#M:decom:K){reference-type="eqref" reference="M:decom:K"} imply that $K$ is uniformly bounded: $K(x,x) \leq c_\phi^2\sum_{j=1}^\infty \mu_j \leq C\sum_{j=1}^\infty j^{-\alpha} \leq C'$, as long as $\alpha>1$.
From equation [\[eqn:RKHS_grad\]](#eqn:RKHS_grad){reference-type="eqref" reference="eqn:RKHS_grad"}, we can identify the gradient $\nabla \mathcal L(g)$ at $g\in\mathbb H$ as $$\begin{aligned} \nabla \mathcal L(g) = \mathbb E\big[\big(g(X) -Y\big)K_X\big] \in \mathbb H.\end{aligned}$$ Throughout the rest of the paper, we will refer to the above $\nabla \mathcal L(g)$ as the RKHS gradient of the functional $\mathcal L$ at $g$. Upon the arrival of the $i$th data point $(X_i,Y_i)$, we can form an unbiased estimator $G_i(g)$ of the RKHS gradient $\nabla \mathcal L(g)$ as $G_i(g) = \big(g(X_i) -Y_i\big)K_{X_i}$. This leads to the following SGD in RKHS for solving non-parametric least squares: for a given initial estimate $\widehat{f}_0$, the SGD recursively updates the estimate of $f^\ast$ upon the arrival of each data point as $$\label{eq:sgd:ini} \widehat{f}_i = \widehat{f}_{i-1} - \gamma_i \,G_i(\widehat{f}_{i-1}) = \widehat{f}_{i-1} + \gamma_i \big(Y_i - \widehat{f}_{i-1}(X_i)\big) K_{X_i},\quad \mbox{for }i=1,2,\ldots.$$ By utilizing the reproducing property, the above iterative updating formula can be rewritten as $$\begin{aligned} \widehat{f}_i = \widehat{f}_{i-1} + \gamma_i \,\big(Y_i - \langle \widehat{f}_{i-1}, K_{X_i}\rangle_{\mathbb H} \big)\, K_{X_i} = (I - \gamma_i\, K_{X_i}\otimes K_{X_i})\, \widehat f_{i-1} + \gamma_i \,Y_i\, K_{X_i}, \label{eq:stand:sgd} \end{aligned}$$ where $I$ denotes the identity map on $\mathbb H$, and $\otimes$ is the tensor product operator defined through $g \otimes h(f)=\langle f, h \rangle_{\mathbb H} \,g$ for all $g,h,f \in \mathbb H$. Formula [\[eq:sgd:ini\]](#eq:sgd:ini){reference-type="eqref" reference="eq:sgd:ini"} is more straightforward to use for practical implementation, while formula [\[eq:stand:sgd\]](#eq:stand:sgd){reference-type="eqref" reference="eq:stand:sgd"} provides a more tractable expression that will facilitate our theoretical analysis.
Following [@ruppert1988efficient] and [@polyak1992acceleration], we consider the so-called Polyak averaging scheme to further improve the estimation accuracy by averaging over the entire updating trajectory, i.e. we use $\bar{f}_n = n^{-1}\sum_{i=1}^n\, \widehat{f}_i$ as the final functional SGD estimator of $f^\ast$ based on a dataset of sample size $n$. Note that this averaged estimator can be efficiently computed, without storing all past estimators, by using the recursive updating formula $\bar{f}_i = (1-i^{-1})\,\bar{f}_{i-1}\, +\, i^{-1}\,\widehat{f}_i$ for $i=1,\dots, n$ on the fly. We will refer to the above SGD as functional SGD in order to differentiate it from the SGD in Euclidean space, and $\bar f_n$ as the functional SGD estimator (using $n$ samples). Throughout the remainder of the paper, we consider a zero initialization, $\widehat{f}_{0}=0$, without loss of generality. In functional SGD with total sample size (time horizon) $n$, the only adjustable component is the step size scheme $\{\gamma_i:\,i=1,2,\ldots,n\}$, which is crucial for achieving fast convergence and accurate estimation (c.f. Remark [Remark 1](#rk:step_size){reference-type="ref" reference="rk:step_size"}). We examine two common schemes [@bottou2010large; @bottou2007tradeoffs]: (1) the constant step size scheme, where $\gamma_i \equiv \gamma = \gamma(n)$ only depends on the total sample size $n$; (2) the non-constant step size scheme, where $\gamma_i = i^{-\xi}$ decays polynomially in $i$ for $i=1,2,\ldots,n$ and some $\xi>0$. While the constant step size scheme is more amenable to theoretical analysis, it suffers from two notable drawbacks: (1) it assumes prior knowledge of the sample size $n$, which is typically unavailable in streaming data scenarios, and (2) the optimal estimation error is only achieved at the $n$-th iteration, leading to suboptimal performance before that time point.
In contrast, the non-constant step size scheme, despite significantly complicating our theoretical analysis, overcomes the aforementioned limitations and leads to a truly online algorithm that achieves rate-optimal estimation at any intermediate time point (c.f. Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}). Due to this characteristic, we will also refer to the non-constant step size scheme as the online scheme. Although functional SGD operates in the infinite-dimensional RKHS, it can be implemented using a finite-dimensional representation enabled by the kernel trick. Concretely, upon the arrival of the $i$-th observation $(X_i,Y_i)$, we can express the time-$i$ intermediate estimator $\widehat f_i$ as $\widehat{f}_i = \sum_{j=1}^i \widehat{\beta}_{j}\, K_{X_j}$ due to equation [\[eq:sgd:ini\]](#eq:sgd:ini){reference-type="eqref" reference="eq:sgd:ini"} and the zero-initialization condition $\widehat f_0=0$, where only the last entry $\widehat \beta_i$ in the coefficient vector $(\widehat{\beta}_{1},\,\widehat{\beta}_{2},\,\dots,\, \widehat{\beta}_{i})^\top$ needs to be updated, $$\begin{aligned} \widehat{\beta}_i= \gamma_i\,\big(Y_i-\widehat{f}_{i-1}(X_i)\big) = \gamma_i \,Y_i - \gamma_i \sum_{j=1}^{i-1} \widehat \beta_j \,K(X_j, \,X_i).\end{aligned}$$ Note that the computational complexity at time $i$ is $\mathcal O(i)$ for $i=1,2,\ldots,n$. Correspondingly, the functional SGD estimator at time $i$ can be computed through $\bar{f}_i = (1-i^{-1})\,\bar{f}_{i-1} + i^{-1} \widehat{f}_i=\sum_{j=1}^i \bar\beta_j\, K_{X_j}$, where (as can be proved by induction) $$\begin{aligned} \bar\beta_j = \Big(1 - \frac{j-1}{i}\Big) \,\widehat \beta_j,\quad \mbox{for}\quad j=1,2,\ldots,i.\end{aligned}$$ Consequently, the overall time complexity of the resulting algorithm is $\mathcal O(n^2)$, and the space complexity is $\mathcal O(n)$.
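The coefficient updates above can be sketched directly. The Gaussian kernel, bandwidth, toy regression function, and step-size exponent below are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

# Sketch of functional SGD via the kernel trick; kernel, bandwidth, f*, and
# step sizes are illustrative. Zero initialization f_hat_0 = 0 is assumed.
rng = np.random.default_rng(1)
kern = lambda s, t: np.exp(-0.5 * (s - t) ** 2 / 0.1 ** 2)
f_star = lambda x: np.sin(2 * np.pi * x)

n = 500
X = rng.uniform(size=n)
Y = f_star(X) + 0.1 * rng.normal(size=n)

beta_hat = np.zeros(n)
for i in range(n):                               # 0-based: observation i + 1
    gamma_i = (i + 1) ** (-0.5)                  # polynomially decaying step size
    f_prev = beta_hat[:i] @ kern(X[:i], X[i])    # f_hat_{i}(X_{i+1}), O(i) cost
    beta_hat[i] = gamma_i * (Y[i] - f_prev)      # only the last entry is updated

# Averaged estimator via the closed form bar_beta_j = (1 - (j-1)/n) * hat_beta_j.
j = np.arange(1, n + 1)
beta_bar = (1.0 - (j - 1) / n) * beta_hat

grid = np.linspace(0.0, 1.0, 101)
pred = beta_bar @ kern(X[:, None], grid[None, :])
rmse = np.sqrt(np.mean((pred - f_star(grid)) ** 2))
```

Since $\gamma_1 = 1$ and $\widehat f_0 = 0$, the first coefficient is exactly $Y_1$; the averaged estimator then tracks the toy regression function with a modest error on a grid.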
## Problem formulation {#sec:problem_formulation} Our objective is to develop online inference for the non-parametric regression function $f^\ast$ based on the functional SGD estimator $\bar{f}_n$. Specifically, we aim to construct level-$\beta$ pointwise confidence intervals (local inference) $CI_n(x;\,\beta) = [U_n(x;\,\beta),\, V_n(x;\,\beta)]$ for $f^\ast(x)$, where $x\in\mathcal X$, and a level-$\beta$ simultaneous confidence band (global inference) $CB_n(\beta) = \big\{g:\, \mathcal X\to\mathbb R \,\big|\, g(x)\in[\bar f_n(x) - b_n(\beta),\,\bar f_n(x) + b_n(\beta)],\ \forall x\in \mathcal X\big\}$ for $f^\ast$. We require these intervals and band to be asymptotically valid, meaning that the coverage probabilities, i.e., the probabilities of $f^\ast(x)$ or $f^\ast$ falling within $CI_n(x;\,\beta)$ or $CB_n(\beta)$ respectively, are close to their nominal level $\beta$. Mathematically, this means $\mathbb{P}[f^\ast(x)\in CI_n(x;\,\beta)]=\beta + o(1)$ and $\mathbb{P}[f^\ast\in CB_n(\beta)]=\beta + o(1)$ as $n\to\infty$. The coverage probability analysis of these intervals and band requires us to examine and prove the distributional convergence of two random quantities (with appropriate rescaling) based on the functional SGD estimator $\bar{f}_n$: the pointwise difference $\bar{f}_n(x) - f^\ast(x)$ for $x\in\mathcal{X}$ and the supremum norm $\|\bar{f}_n - f^\ast\|_{\infty}$ of $\bar{f}_n - f^\ast$. In particular, the appropriate rescaling choice determines a precise convergence rate of $\bar{f}_n$ towards $f^\ast$ under the supremum norm metric. The characterization of the convergence rate of a non-parametric regression estimator under the supremum norm metric is a challenging and important problem in its own right. We note that the distribution of the supremum norm $\|\bar{f}_n - f^\ast\|_{\infty}$, after a proper re-scaling, behaves asymptotically like the supremum norm of a Gaussian process, which is not practically feasible to estimate.
Therefore, for inference purposes, it is not necessary to explicitly characterize this distributional limit; instead, we will prove a bootstrap consistency by showing that the Kolmogorov distance between the sampling distributions of this supremum norm and its bootstrapping counterpart converges to zero as $n\to\infty$. In our theoretical development to address these problems, we will utilize a recursive expansion of the functional SGD updating formula to construct a higher-order expansion of $\bar{f}_n$ under the $\|\cdot\|_\infty$ norm metric. Building upon this expansion, we will establish in Section [3](#sec:SGD_inference){reference-type="ref" reference="sec:SGD_inference"} the distributional convergence of the two aforementioned random quantities and characterize their limiting distributions with an explicit representation of the limiting variance for $\bar{f}_n(x) - f^\ast(x)$ in the large-sample setting. However, these limiting distributions and variances depend on the spectral decomposition of the kernel $K$, the marginal distribution of the design variable $X$, and the unknown noise variance $\sigma^2$, which are either inaccessible or computationally expensive to evaluate in an online learning scenario. To overcome this challenge, we will propose a scalable bootstrap-based inference method in Section [4](#sec:bootstrap_SGD){reference-type="ref" reference="sec:bootstrap_SGD"}, enabling efficient online inference for $f^\ast$. # Finite-Sample Analysis of Functional SGD Estimator {#sec:SGD_inference} In this section, we start by deriving a higher-order expansion of $\bar{f}_n$ under the $\|\cdot\|_\infty$ norm metric. We then proceed to establish the distributional convergence of $\bar f_n(x)-f^\ast(x)$ for any $x\in\mathcal{X}$ by characterizing the leading term in the expansion. These results will be useful for motivating our online local and global inference for $f^\ast$ in the following section.
## Higher-order expansion under supremum norm {#sec:higher-order} We begin by decomposing the functional SGD update of $\widehat{f}_n-f^\ast$ into two leading recursive formulas and a higher-order remainder term. This decomposition allows us to distinguish between the deterministic term responsible for the estimation bias and the stochastic fluctuation term contributing to the estimation variance. Concretely, we obtain the following by plugging $Y_i=f^\ast(X_i) + \epsilon_i$ into the recursive updating formula [\[eq:stand:sgd\]](#eq:stand:sgd){reference-type="eqref" reference="eq:stand:sgd"}, $$\label{eq:sgd:recursion_f0} \widehat{f}_i - f^\ast = (I - \gamma_i\, K_{X_i}\otimes K_{X_i}) \,(\widehat{f}_{i-1}-f^\ast) + \gamma_i \,\epsilon_i \,K_{X_i}.$$ Let $\Sigma := \mathbb{E}[K_{X_1}\otimes K_{X_1}]:\, \mathbb{H} \to \mathbb{H}$ denote the population-level covariance operator, so that for any $f$, $g \in \mathbb{H}$ we have $\langle f, \, \Sigma\, g \rangle_\mathbb{H} = \mathbb{E}[f(X_1)\,g(X_1)]$.
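The defining identity $\langle f,\Sigma g\rangle_{\mathbb H} = \mathbb E[f(X_1)g(X_1)]$ can be checked numerically for a toy finite-rank Mercer kernel, in whose eigenbasis $\Sigma$ acts as multiplication by the eigenvalues. All choices below (spectrum, basis, coefficients) are illustrative:

```python
import numpy as np

# Sketch verifying <f, Sigma g>_H = E[f(X) g(X)] for a toy finite-rank Mercer
# kernel K(s,t) = sum_v mu_v phi_v(s) phi_v(t), with X ~ Unif[0,1] and an
# L2-orthonormal trigonometric basis (all choices illustrative).
mu = np.array([1.0, 0.25, 1.0 / 9.0, 1.0 / 16.0])

def phi(x):
    x = np.asarray(x, dtype=float)
    return np.stack([np.sqrt(2) * np.sin(2 * np.pi * x),
                     np.sqrt(2) * np.cos(2 * np.pi * x),
                     np.sqrt(2) * np.sin(4 * np.pi * x),
                     np.sqrt(2) * np.cos(4 * np.pi * x)])

f_coef = np.array([1.0, -0.5, 0.3, 0.2])   # f = sum_v f_v phi_v
g_coef = np.array([0.4, 0.1, -0.2, 0.7])

# In this basis Sigma multiplies the v-th coefficient by mu_v, while
# <phi_u, phi_v>_H = delta_{uv} / mu_v, so <f, Sigma g>_H = sum_v f_v g_v.
lhs = np.sum(f_coef * g_coef)

# Compare with E[f(X) g(X)] via a midpoint Riemann sum over Unif[0,1].
t = (np.arange(4096) + 0.5) / 4096
rhs = np.mean((f_coef @ phi(t)) * (g_coef @ phi(t)))
gap = abs(lhs - rhs)
```

The midpoint sum is exact to machine precision for this band-limited periodic integrand, so the two sides coincide.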
Now we recursively define the *leading bias term* through $$\label{eq:bias:lead:main} \eta_0^{bias,0}= \widehat f_0-f^\ast = -f^\ast \quad \mbox{and}\quad \eta_i^{bias,0}= (I - \gamma_i\, \Sigma)\, \eta_{i-1}^{bias, 0} \quad \mbox{for} \quad i=1,2,\ldots$$ that collects the leading deterministic component in [\[eq:sgd:recursion_f0\]](#eq:sgd:recursion_f0){reference-type="eqref" reference="eq:sgd:recursion_f0"}; and the *leading noise term* through $$\begin{aligned} \eta_0^{noise,0}= 0 \quad\mbox{and}\quad \eta_i^{noise,0} = (I - \gamma_i\,\Sigma)\, \eta^{noise,0}_{i-1} + \gamma_i \,\epsilon_i\, K_{X_i} \quad \mbox{for} \quad i=1,2,\ldots \label{eq:lead:noise}\end{aligned}$$ that collects the leading stochastic fluctuation component in [\[eq:sgd:recursion_f0\]](#eq:sgd:recursion_f0){reference-type="eqref" reference="eq:sgd:recursion_f0"}; so that we have the following decomposition for the recursion: $$\label{eq:sgd:higher_order_exp:1} \widehat{f}_i -f^\ast =\underbrace{\eta_i^{bias,0}}_\text{leading bias} + \underbrace{\eta_i^{noise,0}}_\text{leading noise} + \ \ \underbrace{\big(\widehat{f}_i -f^\ast -\eta_i^{bias,0} - \eta_i^{noise,0}\big)}_\text{remainder term} \quad \mbox{for} \quad i=1,2,\ldots.$$ Correspondingly, we define $\bar{\eta}_i^{bias,0} = i^{-1}\sum_{j=1}^i \eta_j^{bias,0}$ and $\bar{\eta}_i^{noise,0} = i^{-1}\sum_{j=1}^i \eta_j^{noise,0}$ as the leading bias and noise terms, respectively, in the functional SGD estimator (after averaging). The following Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"} presents finite-sample bounds for the two leading terms and the remainder term associated with $\bar f_n$ under the supremum norm metric. The results indicate that the remainder term is of strictly higher order (in terms of dependence on $n$) compared to the two leading terms, validating the term "leading\" for them. **Theorem 1** (Finite-sample error bound under supremum norm).
*Suppose that the kernel $K$ satisfies Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"}. Assume $f^\ast\in \mathbb{H}$ satisfies $\sum_{\nu=1}^\infty |\langle f^\ast, \phi_\nu \rangle_{L_2}|\,\mu_\nu^{-1/2} < \infty$.* 1. *(constant step size) Assume that the step size $\gamma_i\equiv\gamma$ satisfies $\gamma \in(0,\, \mu_1^{-1})$; then we have $$\sup_{x\in \mathcal X}|\bar{\eta}_n^{bias,0}(x)|\leq C \frac{1}{\sqrt{n\gamma}},\quad \textrm{and}\; \sup_{x\in \mathcal X} \mathop{\mathrm{{\rm Var}}}(\bar{\eta}_n^{noise,0}(x))\leq C' \frac{(n\gamma)^{1/\alpha}}{n},$$ where $C,\, C'$ are constants independent of $(n,\gamma)$. Furthermore, if the step size satisfies $0<\gamma < n^{-\frac{2}{2+3\alpha}}$, then we have $$\mathbb{P}\Big(\|\bar{f}_n - f^\ast - \bar{\eta}_n^{bias,0}-\bar{\eta}_n^{noise,0}\|^2_{\infty} \geq \gamma^{1/2} (n\gamma)^{-1}+ \gamma^{1/4} (n\gamma)^{1/\alpha}n^{-1}\log n\Big) \leq C/n + C\gamma^{1/4},$$ where the probability is with respect to the randomness in $\{(X_i, \epsilon_i)\}_{i=1}^n$.* 2. *(non-constant step size) Assume the step size satisfies $\gamma_i = i^{-\xi}$ for some $\xi\in(0,\, 1/2)$; then we have $$\sup_{x\in \mathcal X}|\bar{\eta}_n^{bias,0}(x)|\leq C \frac{1}{\sqrt{n\gamma_n}},\quad \textrm{and}\; \sup_{x\in \mathcal X}\mathop{\mathrm{{\rm Var}}}(\bar{\eta}_n^{noise,0}(x))\leq C' \frac{(n\gamma_n)^{1/\alpha}}{n},$$ where $C,\,C'$ are constants independent of $(n,\gamma_n)$.
For the special choice of $\xi = \frac{1}{\alpha+1}$, we have $$\mathbb{P}\Big(\|\bar{f}_n - f^\ast - \bar{\eta}_n^{bias,0}-\bar{\eta}_n^{noise,0}\|^2_{\infty} \geq \gamma_n^{1/2} (n\gamma_n)^{-1}+ \gamma_n^{1/2} (n\gamma_n)^{1/\alpha}n^{-1}\log n\Big) \leq C/n + C\gamma_n^{1/2}.$$* The proof of this theorem is based on a higher-order recursive expansion and a careful supremum norm analysis of the recursive formula; see Remark [Remark 2](#remark:3_2){reference-type="ref" reference="remark:3_2"} and the proof sketch in Section [6](#sec:proof_sketch){reference-type="ref" reference="sec:proof_sketch"}. The detailed proof is provided in [@liu2023supp]. **Remark 1**. *As demonstrated in Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}, the selection of the step size $\gamma$ (or $\gamma_n$ for non-constant step size) in the SGD estimator entails a trade-off between bias and variance. A larger $\gamma$ (or $\gamma_n$) increases bias while reducing variance, and vice versa. This trade-off can be optimized by choosing the (optimal) step size $\gamma_n = n^{-\frac{1}{\alpha+1}}$. This is why we specifically focus on this particular choice in the non-constant step size setting in the theorem, which also significantly simplifies the proof. Interestingly, the step size (scheme) in the functional SGD plays a similar role as the regularization parameter in regularization-based approaches in preventing overfitting according to Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}. To see this, let us consider the classic kernel ridge regression (KRR), where the estimator $\widehat{f}_{n,\lambda}$ is constructed as $$\widehat{f}_{n,\lambda} = \mathop{\mathrm{argmin}}_{f\in \mathbb{H}} \Big\{ \frac{1}{n}\sum_{i=1}^n \big(Y_i - f(X_i)\big)^2 + \lambda \|f\|_{\mathbb{H}}^2 \Big\},$$ where $\lambda$ serves as the regularization parameter to avoid overfitting.
It can be shown (e.g., [@yang2017]) that the squared bias of $\widehat{f}_{n,\lambda}$ has an order of $\lambda$, while the variance has an order of $d_\lambda/n$, where $d_\lambda = \sum_{\nu=1}^\infty \min\{1,\,\mu_\nu/\lambda\}$ represents the effective dimension of the model and is of order $\lambda^{-1/\alpha}$ under Assumption [Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"}. In comparison, the squared bias and variance of the functional SGD estimator $\bar f_n$ are of order $(n\gamma_n)^{-1}$ and $(n\gamma_n)^{1/\alpha} / n$ respectively. Therefore, $(n\gamma_n)^{-1}$ and $(n\gamma_n)^{1/\alpha}$ respectively play the same role as the regularization parameter $\lambda$ and effective dimension $d_\lambda$ in KRR. More generally, a step size scheme $\{\gamma_i\}_{i=1}^n$ corresponds to an effective regularization parameter of the order $\lambda = \big(\sum_{i=1}^n \gamma_i\big)^{-1}$, which in our considered settings is of order $(n\gamma_n)^{-1}$. Note that the accumulated step size $\sum_{i=1}^n \gamma_i$ can be interpreted as the total path length in the functional SGD algorithm. This total path length determines the early stopping of the algorithm, effectively controlling the complexity of the learned model and preventing overfitting.*
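The claimed order of the effective dimension can be verified numerically. The sketch below assumes the convention $d_\lambda = \sum_\nu \min\{1,\,\mu_\nu/\lambda\}$ with a polynomially decaying spectrum and fits the log-log slope:

```python
import numpy as np

# Sketch: with mu_v = v^{-alpha}, the effective dimension
# d_lambda = sum_v min{1, mu_v / lambda} scales as lambda^{-1/alpha}.
alpha = 2.0
mu = np.arange(1.0, 2_000_001.0) ** (-alpha)   # truncated spectrum

def d_lam(lam):
    return np.sum(np.minimum(1.0, mu / lam))

lams = np.array([1e-2, 1e-3, 1e-4])
dims = np.array([d_lam(l) for l in lams])
slope = np.polyfit(np.log(lams), np.log(dims), 1)[0]  # should be ~ -1/alpha
```

With $\alpha = 2$, the fitted slope is very close to $-1/2$, matching the stated $\lambda^{-1/\alpha}$ order.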
Note that a loose analysis of bounding the noise remainder under the $\|\cdot\|_{\infty}$ metric, by directly converting an $L_2$ norm bound into the supremum norm bound using the reproducing kernel property of the original RKHS $\mathbb H$, would result in a bound whose order is comparable to that of the leading term. This motivates us to introduce an augmented RKHS $\mathbb{H}_a = \{f= \sum_{\nu=1}^\infty f_\nu \phi_\nu \mid \sum_{\nu=1}^\infty f_\nu^2 \mu_\nu^{2a-1}< \infty\}$ with $0\leq a\leq 1/2-1/(2\alpha)$, equipped with the kernel function $K^a(x,y)= \sum_{\nu=1}^\infty \phi_\nu(x)\phi_\nu(y)\mu_\nu^{1-2a}$ and norm $\|f\|_a=\big(\sum_{\nu=1}^\infty f_\nu^2 \mu_\nu^{2a-1}\big)^{1/2}$ for any $f=\sum_{\nu=1}^\infty f_\nu \phi_\nu\in\mathbb{H}_a$. This augmented RKHS norm weakens the impact of high-frequency components compared to the original RKHS norm $\|f\|_{\mathbb{H}}=\big(\sum_{\nu=1}^\infty f_\nu^2 \mu_\nu^{-1}\big)^{1/2}$, and turns out to be better aligned with the functional supremum norm in our context. As a result, we have $\|f\|_{\infty}\leq c_a \|f\|_{a} \leq c_k \|f\|_{\mathbb{H}}$ for any $f\in\mathbb{H}$, where $(c_a,\,c_k)$ are constants.
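The norm domination chain just stated can be checked numerically on a truncated spectrum. The sketch below uses illustrative choices ($\mu_\nu = \nu^{-\alpha}$, $|\phi_\nu|\leq\sqrt 2$, and a strictly admissible $a$), with the constant $c_a$ coming from a Cauchy-Schwarz bound:

```python
import numpy as np

# Sketch checking ||f||_inf <= c_a ||f||_a <= c_k ||f||_H with illustrative
# mu_v = v^{-alpha}, |phi_v| <= sqrt(2), and a < 1/2 - 1/(2*alpha).
alpha, a = 2.0, 0.2
v = np.arange(1.0, 10001.0)
mu = v ** (-alpha)
f_coef = v ** (-2.5)          # a test element of H: sum_v f_v^2 / mu_v < inf

norm_a = np.sqrt(np.sum(f_coef ** 2 * mu ** (2 * a - 1)))
norm_H = np.sqrt(np.sum(f_coef ** 2 / mu))

# |f(x)| <= sqrt(2) * sum_v |f_v|, and Cauchy-Schwarz gives
# sum_v |f_v| <= norm_a * sqrt(sum_v mu_v^{1-2a}); hence c_a = sqrt(2 sum mu^{1-2a}).
sup_bound = np.sqrt(2) * np.sum(np.abs(f_coef))
c_a = np.sqrt(2 * np.sum(mu ** (1 - 2 * a)))
ok_chain = bool(sup_bound <= c_a * norm_a + 1e-9 and norm_a <= norm_H + 1e-9)
```

Note that $\|f\|_a \leq \|f\|_{\mathbb H}$ holds term by term here since $\mu_\nu \leq 1$ implies $\mu_\nu^{2a-1} \leq \mu_\nu^{-1}$.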
In particular, a supremum norm bound based on controlling the $\|f\|_{a}$ norm with an appropriate choice of $a$ can be substantially better than one based on $\|f\|_{\mathbb{H}}$; see Section [6](#sec:proof_sketch){reference-type="ref" reference="sec:proof_sketch"} and Section [8.2](#app:le:rem_bias:con){reference-type="ref" reference="app:le:rem_bias:con"} for further details.* As we discussed in Section [2.3](#sec:problem_formulation){reference-type="ref" reference="sec:problem_formulation"}, for inference purposes it is not necessary to explicitly characterize the distributional limit of the supremum norm $\|\bar{f}_n - f^\ast\|_{\infty}$; instead, we will prove a bootstrap consistency by showing that the Kolmogorov distance between the sampling distributions of this supremum norm and its bootstrapping counterpart converges to zero as $n\to\infty$. However, the pointwise convergence limit of $\bar{f}_n(z_0) - f^\ast(z_0)$ for fixed $z_0\in[0,1]$ has an easy characterization. Therefore, we present the pointwise convergence limit and use it to discuss the impact of online estimation in the non-parametric regression model in the following subsection. ## Pointwise distributional convergence According to Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}, the large-sample behavior of the functional SGD estimator $\bar f_n$ is completely determined by the two leading processes: the bias term and the noise term.
According to ([\[eq:bias:lead:main\]](#eq:bias:lead:main){reference-type="ref" reference="eq:bias:lead:main"}), under the constant step size $\gamma$, the leading bias term has an explicit expression as $$\label{eq:local_bias} \begin{aligned} \bar{\eta}_n^{bias,0}(x)= & \frac{1}{n}\gamma^{-1}\Sigma^{-1}\,(I-\gamma \Sigma)\, [I-(I-\gamma\Sigma)^n\,]f^\ast(x) \\ = & \frac{1}{\sqrt{\gamma}n}\sum_{k=1}^n \sum_{\nu=1}^\infty \langle f^\ast, \phi_\nu\rangle_{L_2} \mu_\nu^{-1/2} (1-\gamma \mu_\nu)^k (\gamma\mu_\nu)^{1/2}\phi_\nu(x) ,\quad \forall x\in\mathcal X, \end{aligned}$$ and the leading noise term is $$\label{eq:local_noise} \begin{aligned} \bar{\eta}_n^{noise,0}(x) =&\, \frac{1}{n}\sum_{k=1}^n \Sigma^{-1}\big[I-(I-\gamma \Sigma)^{n+1-k}\big]\, K(X_k,\,x)\, \epsilon_k \\ =&\, \frac{1}{n}\sum_{k=1}^n \ \epsilon_k\, \cdot\, \underbrace{\bigg\{ \sum_{\nu=1}^\infty \big[1-(1-\gamma \mu_\nu)^{n+1-k}\big]\,\phi_\nu(X_k)\,\phi_\nu(x)\bigg\}}_{\Omega_{n,k}(x)},\quad \forall x\in\mathcal X. \end{aligned}$$ For each fixed $z_0\in\mathcal X$, conditional on the design $\{X_i\}_{i=1}^n$, the leading noise term $\bar{\eta}_n^{noise,0}(z_0)$ is a weighted average of $n$ independent and centered normally distributed random variables. This representation enables us to identify the limiting distribution of $\bar{\eta}_n^{noise,0}(z_0)$ (this subsection) and conduct local inference (i.e. pointwise confidence intervals) by a bootstrap method (next section). Under Assumption [Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"}, the weight $\Omega_{n,k}(z_0)$ associated with the $k$-th observation pair $(X_k, \, Y_k)$ is of order $\sum_{\nu=1}^\infty \big[1-(1-\gamma \mu_\nu)^{n+1-k}\big] \asymp \big[(n+1-k)\gamma\big]^{1/\alpha}$, which decreases in $k$. This diminishing impact trend is inherent to online learning, as later observations tend to have a smaller influence compared to earlier observations.
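Both displayed facts can be checked numerically in the basis that diagonalizes $\Sigma$ (where $\Sigma$ acts as multiplication by $\mu_\nu$). The spectrum and coefficients below are illustrative; the overall sign of the averaged bias follows the recursive definition with $\eta_0^{bias,0}=-f^\ast$:

```python
import numpy as np

# Sketch, in the diagonalizing basis of Sigma (all numbers illustrative):
# (i) the averaged leading-bias recursion matches a geometric-series closed form;
# (ii) the deterministic size of the weight Omega_{n,k} decreases in k.
n, gamma = 300, 0.05
mu = np.arange(1.0, 21.0) ** (-2.0)
cstar = 1.0 / np.arange(1.0, 21.0)       # toy basis coefficients of f*

b = -cstar.copy()                        # eta_0^{bias,0} = -f*
acc = np.zeros_like(b)
for _ in range(n):                       # eta_i = (I - gamma * Sigma) eta_{i-1}
    b = (1.0 - gamma * mu) * b
    acc += b
bias_recursion = acc / n
bias_closed = -(1 - gamma * mu) * (1 - (1 - gamma * mu) ** n) / (n * gamma * mu) * cstar
gap = np.max(np.abs(bias_recursion - bias_closed))

k = np.arange(1, n + 1)
w = np.array([np.sum(1.0 - (1.0 - gamma * mu) ** (n + 1 - kk)) for kk in k])
monotone = bool(np.all(np.diff(w) < 0))  # later observations weigh less
```

The recursion and the closed form agree to machine precision (a geometric-series identity), and the weight profile is strictly decreasing in the arrival index, matching the diminishing-impact discussion above.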
This characteristic is radically different from offline estimation settings, where all observations contribute equally to the final estimator, and will change the asymptotic variance (i.e., the $\sigma^2_{z_0}$ in Theorem [Theorem 2](#thm:local:main1){reference-type="ref" reference="thm:local:main1"}). Furthermore, the entire leading noise process $\bar{\eta}_n^{noise,0}(\cdot)$ can be viewed as a weighted and non-identically distributed empirical process indexed by the spatial location. This characterization enables us to conduct global inference (i.e. simultaneous confidence band) for non-parametric online learning by borrowing and extending the recent developments [@chernozhukov2014gaussian; @chernozhukov2016empirical; @chernozhukov2014anti] on Gaussian approximation and multiplier bootstraps for suprema of (equally-weighted and identically distributed) empirical processes, which will be the main focus of the next section. In the following Theorem [Theorem 2](#thm:local:main1){reference-type="ref" reference="thm:local:main1"}, we prove, by analyzing the leading noise term $\bar{\eta}_n^{noise,0}$, a finite-sample upper bound on the Kolmogorov distance between the sampling distribution of $\bar{f}_n(z_0)-f^\ast(z_0)$ and the distribution of a standard normal random variable (i.e., the supremum distance between the two cumulative distribution functions) for any $z_0\in\mathcal X$. **Theorem 2** (Pointwise convergence). *Assume that the kernel $K$ satisfies Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"}.* 1. *(Constant step size) Consider the step size $\gamma(n) = \gamma$ with $0< \gamma < n^{-\frac{2}{2+3\alpha}}$.
For any fixed $z_0\in[0,1]$, we have $$\sup_{u\in \mathbb R} \Big|\, \mathbb{P}\Big(\sigma^{-1}_{z_0} \sqrt{n(n\gamma)^{-1/\alpha}}\big(\bar{f}_n (z_0) - f^\ast(z_0) - \bar{\eta}_n^{bias,0}(z_0)\big)\leq u \Big) - \Phi(u)\Big|\leq \frac{C_1}{\sqrt{n(n\gamma)^{-1/\alpha}}} + \kappa_n,$$ where $\kappa_n =C_2\sqrt{\gamma^{1/2}(n\gamma)^{-1}}+ \sqrt{\gamma^{1/2}(n\gamma)^{1/\alpha}n^{-1}}$. Here, the bias term has an explicit expression as given in [\[eq:local_bias\]](#eq:local_bias){reference-type="eqref" reference="eq:local_bias"}, and the (limiting) variance is $$\sigma_{z_0}^2=\sigma^2(n\gamma)^{-1/\alpha}n^{-1} \sum_{k=1}^n \sum_{\nu=1}^\infty \big[\big(1-(1-\gamma\mu_\nu)^{n+1-k}\big)^2\big]\, \phi_\nu^2(z_0).$$* 2. *(Non-constant step size) Consider the step size $\gamma_i = i^{-\frac{1}{\alpha+1}}$ for $i=1,\dots, n$. For any fixed $z_0\in[0,1]$, we have $$\sup_{u\in \mathbb R} \big| \mathbb{P}\Big(\sigma^{-1}_{z_0} \sqrt{n(n\gamma_n)^{-1/\alpha}}\big(\bar{f}_n (z_0) - f^\ast(z_0) - \bar{\eta}_n^{bias,0}(z_0)\big)\leq u \Big) - \Phi(u)\big|\leq \frac{C_1}{\sqrt{n(n\gamma_n)^{-1/\alpha}}}.$$ Here, the bias term takes an explicit expression as $\bar{\eta}_n^{bias,0}(z_0)= n^{-1}\sum_{k=1}^n \prod_{i=1}^k(I-\gamma_i\Sigma)\,f^\ast(z_0)$, and the variance is $$\sigma_{z_0}^2=\frac{\sigma^2}{n^2} \sum_{k=1}^n \gamma_k^2 \,\sum_{\nu=1}^\infty \mu_\nu^2\, \phi_\nu^2(z_0) \Big(\sum_{j=k}^n \prod_{i=k+1}^j (1-\gamma_i\mu_\nu)\Big)^2.$$* Theorem [Theorem 2](#thm:local:main1){reference-type="ref" reference="thm:local:main1"} establishes that the sampling distribution of $\bar f_n-f^*$ at any fixed $z_0$ can be approximated by a normal distribution $N(\bar{\eta}_n^{bias,0}(z_0), n^{-1}(n\gamma_n)^{1/\alpha}\sigma^2_{z_0})$. 
According to Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}, the bias $\bar{\eta}_n^{bias,0}(z_0)$ has the order of $(n\gamma_n)^{-1/2}$ while the variance has the order of $n^{-1}(n\gamma_n)^{1/\alpha}$; Theorem [Theorem 2](#thm:local:main1){reference-type="ref" reference="thm:local:main1"} also implies that the minimax convergence rate $n^{-\frac{\alpha}{2(\alpha+1)}}$ of estimating $f^\ast$ can be achieved with $\gamma=\gamma_n = n^{-\frac{1}{\alpha+1}}$, which attains an optimal bias-variance tradeoff. In practice, the bias term can be suppressed by applying an undersmoothing technique; see Remark [Remark 7](#rem:undersmooth){reference-type="ref" reference="rem:undersmooth"} for details. **Remark 3**. *From the theorem, we see that the (limiting) variance $\sigma_{z_0}^2$ is precisely the variance of the scaled leading noise $\sqrt{n(n\gamma_n)^{-1/\alpha}}\bar{\eta}_n^{noise,0}(z_0)$ at $z_0$, that is, $\mathop{\mathrm{{\rm Var}}}\big(\sqrt{n(n\gamma_n)^{-1/\alpha}}\bar{\eta}_n^{noise,0}(z_0)\big)$; and $\sigma^2_{z_0}$ has the same $\mathcal O(1)$ order for both the constant and non-constant cases. The contribution of each data point to the variance differs between the constant and non-constant step size cases. Concretely, in the constant step size case, let ${\bf{C}}=(c_1, \dots, c_n)$ be the vector of variation, where $c_k$ $(k=1,\dots, n)$ represents the contribution to $\sigma^2_{z_0}$ from the $k$-th arriving observation $(X_k, Y_k)$. According to equation ([\[eq:local_noise\]](#eq:local_noise){reference-type="ref" reference="eq:local_noise"}), $c_k = \mathbb{E}\Omega^2_{n,k}(z_0)\asymp (n\gamma)^{-1/\alpha}n^{-1} \big((n+1-k)\gamma\big)^{1/\alpha}$ and is of order $(n+1-k)^{1/\alpha}$ in the observation index $k$, which decreases monotonically to nearly $0$ as $k$ grows to $n$.
In comparison, in the online (nonconstant) step case, we denote ${\bf{O}}=(o_1,\dots, o_n)$ as the vector of variation, with $o_k$ being the contribution from the $k$-th observation. A careful calculation shows that $o_k= n^{-2}\gamma^2_k\sum_{\nu=1}^\infty \mu_\nu^2 \, \phi_\nu^2(z_0) \Big(\sum_{j=k}^n \prod_{i=k+1}^j (1-\gamma_i\mu_\nu)\Big)^2$, which has order $n^{-2}\gamma^2_k \gamma^{-2}_{n+1-k} \big((n+1-k)\gamma_{n+1-k}\big)^{1/\alpha} + \big((n+1-k)\gamma_k\big)^{1/\alpha}$ and decreases more slowly than in the constant step size case. This means that the nonconstant step scheme yields a more balanced weighted average over the entire dataset, which tends to lead to a smaller asymptotic variance.* *Figure [1](#fig:var_comp){reference-type="ref" reference="fig:var_comp"} compares the individual variation contributions for both the constant and non-constant step cases. We keep the total step size budget the same for both cases (which also makes the two leading bias terms roughly equal); that is, we choose the constant $B$ in the nonconstant step size $\gamma_i= B \cdot i^{-\frac{1}{\alpha+1}}$ so that $n\gamma = \sum_{i=1}^n \gamma_i$, with $\gamma = n^{-\frac{1}{\alpha+1}}$ being the constant step size. The data index $k$ is plotted on the $x$ axis of Figure [1](#fig:var_comp){reference-type="ref" reference="fig:var_comp"} (A), with the variation contribution summarized by the $y$ axis. As we can see, the variation contribution from each observation decreases as observations arrive later in both cases. However, the pattern is flatter in the non-constant step case. Figure [1](#fig:var_comp){reference-type="ref" reference="fig:var_comp"} (B) is a violin plot visualizing the distributions of the components in $\bf{C}$ and $\bf{O}$. Specifically, the variation among $\{o_k\}_{k=1}^n$ (depicted by the short blue interval) is smaller in the non-constant case, suggesting reduced fluctuation in individual variation for this setting.
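The per-observation contributions behind Figure 1 can be recomputed directly from the formulas above. The sketch below uses an illustrative small spectrum, sets the $\phi_\nu^2(z_0)$ factors to one, and matches the total step size budget as described:

```python
import numpy as np

# Sketch of the variance-contribution comparison: constant vs. non-constant
# step sizes, with illustrative mu_v = v^{-alpha} and phi_v^2(z0) set to 1.
n, alpha = 200, 2.0
mus = np.arange(1.0, 51.0) ** (-alpha)
gam_nc = np.arange(1, n + 1) ** (-1.0 / (alpha + 1))  # non-constant steps
gam_c = np.full(n, gam_nc.sum() / n)                   # matched total budget

def contributions(gammas):
    out = np.zeros(n)
    for m_ in mus:
        for k in range(1, n + 1):
            prod, s = 1.0, 0.0
            for jj in range(k, n + 1):   # s = sum_{j=k}^n prod_{i=k+1}^j (1 - gamma_i mu)
                s += prod
                if jj < n:
                    prod *= 1.0 - gammas[jj] * m_   # gammas[jj] is gamma_{jj+1}
            out[k - 1] += (m_ * gammas[k - 1] * s) ** 2
    return out / n ** 2

c_contrib = contributions(gam_c)   # constant step size
o_contrib = contributions(gam_nc)  # non-constant (online) step size
```

Printing the normalized spreads of `c_contrib` and `o_contrib` (e.g. their coefficients of variation) reproduces the qualitative comparison in Figure 1: both profiles decrease in $k$, with the non-constant profile flatter.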
As detailed in Section [5](#sec:numerical){reference-type="ref" reference="sec:numerical"}, our numerical analysis further confirms that using a nonconstant learning rate outperforms using a constant learning rate (e.g., Figure [\[fig:sim1\]](#fig:sim1){reference-type="ref" reference="fig:sim1"}). An interesting direction for future research might be to identify an optimal learning rate decay scheme by minimizing the variance $\sigma^2_{z_0}$ as a function of $\{\gamma_i\}_{i=1}^n$. It is also interesting to determine whether this scheme results in an equal contribution from each observation. However, this is beyond the scope of this paper.* ![Comparison of the individual variation contribution of each observation in two cases: the constant step size case (red curve) and the non-constant step size case (blue curve). In (A), the $x$-axis is the observation index, and the $y$-axis is the variance contributed by the $k$-th observation. (B) is the violin plot of the individual variance contributions for the two cases; the solid dots represent means while the intervals represent variances. ](images/variance_compare.png){#fig:var_comp width="90%"} **Remark 4**. *The Kolmogorov distance bound between the sampling distribution of $\sigma^{-1}_{z_0} \sqrt{n(n\gamma)^{-1/\alpha}}\big(\bar{f}_n(z_0)-f^\ast(z_0)\big)$ and the standard normal distribution depends on the step size $\gamma_n$ and sample size $n$. In particular, $\kappa_n$ is the remainder bound stated in Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}, which is negligible compared to $\frac{C_1}{\sqrt{n(n\gamma)^{-1/\alpha}}}$ when $\gamma > n^{-\frac{2}{\alpha+2}}$ in the constant step size case. Consequently, a smaller $\gamma$ or larger sample size $n$ leads to a smaller Kolmogorov distance.
The same conclusion also applies to the non-constant step size case if we choose $\gamma_i= i^{-\frac{1}{\alpha+1}}$.*

Although Theorem [Theorem 2](#thm:local:main1){reference-type="ref" reference="thm:local:main1"} explicitly characterizes the distribution of the SGD estimator, the expression for the standard deviation $\sigma_{z_0}$ depends on the eigenvalues and eigenfunctions of $\mathbb{H}$, the underlying distribution of the design $X$, and the unknown noise variance $\sigma^2$, all of which are typically unknown in practice. One approach is to use plug-in estimators for these unknown quantities, such as empirical eigenvalues and eigenfunctions obtained through a singular value decomposition of the empirical kernel matrix $\mathbf{K}\in \mathbb{R}^{n\times n}$, whose $ij$-th element is $\mathbf{K}_{ij}= K(X_i, X_j)$. However, computing these plug-in estimators requires access to all observed data points $\{(X_i,\,Y_i)\}_{i=1}^n$ and has a computational complexity of $\mathcal O(n^3)$, which undermines the sequential updating advantages of SGD. In the following section, we develop a scalable inference framework that uses multiplier-type bootstraps to generate randomly perturbed SGD estimators upon the arrival of each observation. This approach enables us to bypass the evaluation of $\sigma_{z_0}$ when constructing confidence intervals.

# Online Statistical Inference via Multiplier Bootstrap {#sec:bootstrap_SGD}

In this section, we first propose a multiplier bootstrap method for inference based on the functional SGD estimator. We then study the theoretical properties of the proposed method, which serve as the cornerstone of bootstrap consistency for local inference (constructing pointwise confidence intervals) and global inference (constructing simultaneous confidence bands). Finally, we describe the resulting online inference algorithm for non-parametric regression based on the functional SGD estimator.
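As a point of reference for the $\mathcal O(n^3)$ plug-in computation discussed above, the batch eigendecomposition of the empirical kernel matrix can be sketched as follows. The "min" kernel and the $\sqrt{n}$ eigenvector scaling are illustrative assumptions, not choices made in the text.

```python
import numpy as np

def plugin_eigensystem(X, kernel):
    """Plug-in eigenvalues/eigenfunctions from the empirical kernel matrix.

    Builds K_ij = K(X_i, X_j) and eigendecomposes K/n; this is the O(n^3)
    batch computation that the multiplier bootstrap avoids. The sqrt(n)
    scaling of eigenvectors (so they estimate phi_nu(X_i)) follows the usual
    empirical-operator convention and is an assumption here.
    """
    n = len(X)
    Kmat = kernel(X[:, None], X[None, :])      # n x n empirical kernel matrix
    evals, evecs = np.linalg.eigh(Kmat / n)    # O(n^3) step
    order = np.argsort(evals)[::-1]            # sort eigenvalues decreasingly
    return evals[order], np.sqrt(n) * evecs[:, order]

rng = np.random.default_rng(0)
X = rng.uniform(size=300)
# illustrative choice: the first-order Sobolev ("min") kernel
mu_hat, phi_hat = plugin_eigensystem(X, np.minimum)
```

Because every arriving observation would force a fresh $\mathcal O(n^3)$ decomposition, this route is incompatible with one-pass streaming, which motivates the bootstrap approach below.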
## Multiplier bootstrap for functional SGD {#sec:MBootstrap}

Recall that Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"} provides a high-probability decomposition of the functional SGD estimator $\bar f_n$ (relative to the supremum norm metric) into the following sum $$\begin{aligned} \bar f_n \ =\ f^\ast \ +\ \bar{\eta}_n^{bias,0}\ + \ \bar{\eta}_n^{noise,0} \ +\ \mbox{smaller remainder term},\end{aligned}$$ where $\bar{\eta}_n^{bias,0}$ is the leading bias process defined in equation [\[eq:local_bias\]](#eq:local_bias){reference-type="eqref" reference="eq:local_bias"} and $\bar{\eta}_n^{noise,0}$ is the leading noise process defined in equation [\[eq:local_noise\]](#eq:local_noise){reference-type="eqref" reference="eq:local_noise"}. Motivated by this result, we propose in this section a multiplier bootstrap method to mimic and capture the random fluctuation of the leading noise process $\bar{\eta}_n^{noise,0}(\cdot) = n^{-1}\sum_{k=1}^n \epsilon_k \cdot \Omega_{n,k}(\cdot)$, where recall that the term $\Omega_{n,k}$ only depends on the $k$-th design point $X_k$, and the primary source of randomness in $\bar{\eta}_n^{noise,0}$ comes from the random noises $\{\epsilon_k\}_{k=1}^n$, which are i.i.d. normally distributed under a standard non-parametric regression setting. Our online inference approach is inspired by the multiplier bootstrap idea proposed in [@Fang2017] for online inference of parametric models using SGD. Remarkably, we demonstrate that their development can be naturally adapted to enable online inference of non-parametric models based on functional SGD. The key idea is to perturb the stochastic gradient in the functional SGD by incorporating a random multiplier upon the arrival of each data point. Specifically, let $w_1$, $w_2$, $\ldots$ denote a sequence of i.i.d. random bootstrap multipliers, whose mean and variance are both equal to one.
At time $i$ with the observed data point $(X_i,\, Y_i)$, we use the randomly perturbed functional SGD updating formula $$\label{eq:bootstrap:sgd} \begin{aligned} \widehat{f}^b_i = &\, \widehat{f}_{i-1}^b + \gamma_i\, w_{i}\, G_i(\widehat{f}^b_{i-1}) = \widehat{f}_{i-1}^b + \gamma_i \,w_{i}\, (Y_i - \langle \widehat{f}^b_{i-1}, \,K_{X_i}\rangle_{\mathbb{H}})\, K_{X_i} \\ = & \, (I - \gamma_i\, w_{i}\, K_{X_i}\otimes K_{X_i}) \,\widehat{f}^b_{i-1} + \gamma_i\, w_{i} \,Y_i \,K_{X_i}\quad \mbox{for}\quad i=1,2,\ldots, \end{aligned}$$ which modifies equations [\[eq:sgd:ini\]](#eq:sgd:ini){reference-type="eqref" reference="eq:sgd:ini"} and [\[eq:stand:sgd\]](#eq:stand:sgd){reference-type="eqref" reference="eq:stand:sgd"} for functional SGD by multiplying the stochastic gradient $G_i(\widehat{f}^b_{i-1})$ by the random multiplier $w_i$. We adopt the same zero initialization $\widehat{f}^b_0 = \widehat{f}_0=0$ and call the (Polyak) averaged estimator $\bar{f}_n^b = n^{-1}\sum_{i=1}^n \widehat{f}_i^b$ the bootstrapped functional SGD estimator (with $n$ samples).

## Bootstrap consistency {#sec:bs-consistency}

Let us now proceed to derive a higher-order expansion of $\bar{f}_n^b$ analogous to that in Section [3.1](#sec:higher-order){reference-type="ref" reference="sec:higher-order"} and compare its leading terms with those associated with the original functional SGD estimator $\bar f_n$. Utilizing equation [\[eq:bootstrap:sgd\]](#eq:bootstrap:sgd){reference-type="eqref" reference="eq:bootstrap:sgd"} and plugging in $Y_i=f^\ast(X_i)+\epsilon_i$, we obtain the following expression: $$\widehat{f}^b_i - f^\ast = (I - \gamma_i\, w_{i} \, K_{X_i}\otimes K_{X_i}) \,(\widehat{f}^b_{i-1}-f^\ast) + \gamma_i \,w_{i}\, \epsilon_i \,K_{X_i}.$$ Since $w_i$ has unit mean and is independent of $X_i$, we have the important identity $\Sigma =\mathbb{E}(w_i\,K_{X_i}\otimes K_{X_i})$.
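The perturbed update above can be sketched in a few lines once the iterate is represented by its kernel-expansion coefficients. This is a minimal illustration, not the paper's implementation: the "min" kernel, the synthetic data, and the Gaussian multipliers are all assumed choices.

```python
import numpy as np

def bootstrap_functional_sgd(X, Y, gammas, kernel, w):
    """One pass of the perturbed update in eq. (bootstrap:sgd), as a sketch.

    The iterate is stored through coefficients c, with
    f_i(.) = sum_{j<=i} c[j] * K(X_j, .); the update
    f_i = f_{i-1} + gamma_i w_i (Y_i - f_{i-1}(X_i)) K_{X_i}
    therefore creates exactly one new coefficient per observation.
    Passing w = ones recovers the unperturbed functional SGD estimator.
    """
    n = len(X)
    c = np.zeros(n)        # coefficients of the current iterate f_i
    csum = np.zeros(n)     # running sum of iterates for Polyak averaging
    for i in range(n):
        resid = Y[i] - c[:i] @ kernel(X[:i], X[i])   # Y_i - f_{i-1}(X_i)
        c[i] = gammas[i] * w[i] * resid
        csum += c
    return csum / n        # coefficients of the averaged estimator bar f_n^b

rng = np.random.default_rng(1)
n = 400
X = rng.uniform(size=n)
Y = np.sin(1.5 * np.pi * X) + 0.2 * rng.normal(size=n)
gammas = np.arange(1, n + 1) ** (-1.0 / 3.0)   # gamma_i = i^{-1/(alpha+1)}, alpha = 2
w = rng.normal(1.0, 1.0, size=n)               # multipliers with mean = var = 1
cbar = bootstrap_functional_sgd(X, Y, gammas, np.minimum, w)

def fbar_b(z):
    """Evaluate the bootstrapped averaged estimator at z."""
    return cbar @ np.minimum(X, z)
```

Since each step touches only one new coefficient plus an inner product with the past design points, the cost at time $i$ is $\mathcal O(i)$, matching the one-pass streaming budget discussed later.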
Similarly to equations [\[eq:sgd:recursion_f0\]](#eq:sgd:recursion_f0){reference-type="eqref" reference="eq:sgd:recursion_f0"}-[\[eq:sgd:higher_order_exp:1\]](#eq:sgd:higher_order_exp:1){reference-type="eqref" reference="eq:sgd:higher_order_exp:1"}, thanks to this key identity, we can still recursively define the leading bootstrapped bias term through $$\label{eq:bootstrap:bias:lead} \eta_0^{b,bias,0}=\widehat f_0^{b} -f^\ast = -f^\ast \quad \mbox{and}\quad \eta_i^{b,bias,0}= (I - \gamma_i\, \Sigma) \,\eta_{i-1}^{b,bias, 0} \quad\mbox{for}\quad i=1,2,\ldots,$$ which coincides with the original leading bias term, i.e. ${\eta}_i^{b,bias,0}\equiv {\eta}_i^{bias,0}$; and the leading bootstrapped noise term through $$\begin{aligned} \eta_0^{b,noise,0}= 0 \quad \mbox{and}\quad \eta_i^{b,noise,0} = (I - \gamma_i\,\Sigma) \eta^{b,noise,0}_{i-1} + \gamma_i \,w_i\,\epsilon_i\, K_{X_i} \quad\mbox{for}\quad i=1,2,\ldots,\end{aligned}$$ so that a decomposition similar to equation [\[eq:sgd:higher_order_exp:1\]](#eq:sgd:higher_order_exp:1){reference-type="eqref" reference="eq:sgd:higher_order_exp:1"} holds, $$\label{eq:sgd:higher_order_exp} \widehat{f}^b_i -f^\ast =\ \underbrace{\eta_i^{b,bias,0}}_\text{leading bias} \ +\ \underbrace{\eta_i^{b,noise,0}}_\text{leading noise} \ + \ \ \underbrace{\big(\widehat{f}^b_i -f^\ast -\eta_i^{b,bias,0} - \eta_i^{b,noise,0}\big)}_\text{remainder term} \quad \mbox{for} \quad i=1,2,\ldots.$$ Correspondingly, we define $\bar{\eta}_i^{b,bias,0} = i^{-1}\sum_{j=1}^i\eta_j^{b,bias,0}$ and $\bar{\eta}_i^{b,noise,0} = i^{-1}\sum_{j=1}^i\eta_j^{b,noise,0}$ as the leading bootstrapped bias and noise terms, respectively, in the bootstrapped functional SGD estimator. Notice that $\bar{\eta}_i^{b,bias,0}$ also coincides with the original leading bias term $\bar{\eta}_i^{bias,0}$, i.e. $\bar{\eta}_i^{b,bias,0}\equiv \bar{\eta}_i^{bias,0}$.
Therefore, $\bar{\eta}_i^{b,bias,0}$ has the same explicit expression as equation [\[eq:local_bias\]](#eq:local_bias){reference-type="eqref" reference="eq:local_bias"}, while the leading bootstrapped noise term $\bar{\eta}_i^{b,noise,0}$ has a slightly different expression that incorporates the bootstrap multipliers as $$\label{eq:local_noise_bs} \begin{aligned} \bar{\eta}_n^{b,noise,0}(x) =&\, \frac{1}{n}\sum_{k=1}^n w_k\cdot \epsilon_k \cdot \Omega_{n,k}(x),\quad \forall x\in\mathcal X, \end{aligned}$$ where recall that $\Omega_{n,k}(\cdot)$ is defined in equation [\[eq:local_noise\]](#eq:local_noise){reference-type="eqref" reference="eq:local_noise"} and only depends on $X_k$. By taking the difference between $\bar{\eta}_n^{b,noise,0}$ and $\bar{\eta}_n^{noise,0}$, we obtain $$\begin{aligned} \label{eq:boot:noise} \bar{\eta}_n^{b,noise,0}(x) - \bar{\eta}_n^{noise,0}(x) = \frac{1}{n}\sum_{k=1}^n (w_k-1)\cdot \epsilon_k \cdot \Omega_{n,k}(x),\quad \forall x\in\mathcal X.\end{aligned}$$ This expression also takes the form of a weighted and non-identically distributed empirical process with "effective\" noises $\big\{(w_i-1)\epsilon_i\big\}_{i=1}^n$. Since $w_i$ has unit mean and variance, these effective noises have the same first two moments as the original noises $\{\epsilon_i\}_{i=1}^n$, suggesting that the difference $\bar{\eta}_n^{b,noise,0}(\cdot) - \bar{\eta}_n^{noise,0}(\cdot)$, conditional on the data $\{(X_i,\,Y_i)\}_{i=1}^n$, tends to capture the random pattern of the original leading noise term $\bar{\eta}_n^{noise,0}(\cdot)$, leading to the so-called bootstrap consistency formally stated in the theorem below. **Assumption 3**. *For $i=1,\dots, n$, the bootstrap multipliers $w_{i}$ are i.i.d.
samples of a random variable $W$ that satisfies $\mathbb{E}(W)=1$, $\mathop{\mathrm{{\rm Var}}}(W)=1$ and $\mathbb{P}(|W|\geq t)\leq 2 \exp(-t^2/C)$ for all $t\geq 0$ with a constant $C>0$.* One simple example that satisfies Assumption [Assumption 3](#asmp:weights){reference-type="ref" reference="asmp:weights"} is $W\sim N(1,1)$. A second example is a bounded random variable, such as a uniform random variable scaled and shifted to the interval $[1-\sqrt{3},\, 1+\sqrt{3}]$ so that its mean and variance are both one. Another popular choice in practice is a discrete random variable, such as $W$ with $\mathbb{P}(W=0)= \mathbb{P}(W=2)= 1/2$. Let $\mathcal D_n:=\{(X_i,\, Y_i)\}_{i=1}^n$ denote the data of sample size $n$, and let $\mathbb{P}^*(\cdot)= \mathbb{P}(\,\cdot\, |\, \mathcal{D}_n)$ denote the conditional probability measure given $\mathcal{D}_n$. We first establish the bootstrap consistency for local inference of the leading noise term in the following Theorem [Theorem 3](#thm:boot_local:main1){reference-type="ref" reference="thm:boot_local:main1"}.

**Theorem 3** (Bootstrap consistency for local inference of leading noise term). *Assume that the kernel $K$ satisfies Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"} and the multiplier weights $\{w_i\}_{i=1}^n$ satisfy Assumption [Assumption 3](#asmp:weights){reference-type="ref" reference="asmp:weights"}.*

1. *(Constant step size) Consider the step size $\gamma(n)=\gamma$ with $\gamma\in(0,\, n^{-\frac{\alpha-3}{3}})$ for some $\alpha>1$.
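The multiplier distributions just mentioned can all be drawn with a few lines of NumPy; the sketch below simply packages the three Assumption 3-compliant examples (Gaussian, rescaled uniform, two-point) behind one hypothetical helper.

```python
import numpy as np

def draw_multipliers(n, scheme="normal", rng=None):
    """i.i.d. bootstrap multipliers with mean 1 and variance 1 (Assumption 3).

    All three schemes are sub-Gaussian, so the tail condition
    P(|W| >= t) <= 2 exp(-t^2 / C) holds for a suitable constant C.
    """
    rng = np.random.default_rng() if rng is None else rng
    if scheme == "normal":      # W ~ N(1, 1)
        return rng.normal(1.0, 1.0, size=n)
    if scheme == "uniform":     # uniform on [1 - sqrt(3), 1 + sqrt(3)]
        return rng.uniform(1.0 - np.sqrt(3.0), 1.0 + np.sqrt(3.0), size=n)
    if scheme == "two_point":   # P(W = 0) = P(W = 2) = 1/2
        return rng.choice([0.0, 2.0], size=n)
    raise ValueError(f"unknown scheme: {scheme}")

w = draw_multipliers(200_000, "two_point", np.random.default_rng(0))
```

A quick sanity check on a large sample confirms that each scheme has sample mean and variance close to one.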
Then for any $z_0\in [0,1]$, we have with probability at least $1-C n^{-1}$, $$\begin{aligned} & \sup_{u \in \mathbb{R}} \Big|\, \mathbb{P}^* \Big(\sqrt{n (n\gamma)^{-\frac{1}{\alpha}}}\, \big(\bar{\eta}_n^{b,noise,0}(z_0)- \bar{\eta}_n^{noise,0}(z_0) \big)\leq u \Big) -\, \mathbb{P}\Big( \sqrt{n (n\gamma)^{-\frac{1}{\alpha}}}\, \bar{\eta}_n^{noise,0}(z_0) \leq u \Big) \Big|\\ &\qquad \leq C' (\log n)^{3/2} (n(n\gamma)^{-1/\alpha})^{-1/6}, \label{eq:GP_approx:local:con}\end{aligned}$$ where $C, C'$ are constants independent of $n$.*

2. *(Non-constant step size) Consider the step size $\gamma_i =i^{-\xi}$, $i=1,\dots, n$, for some $\xi\in(\min\{0, 1-\alpha/3\},\,1/2)$. Then the following bound holds with probability at least $1-2n^{-1}$, $$\begin{aligned} \sup_{u \in \mathbb{R}} \Big|\, \mathbb{P}^* \Big( \sqrt{n (n\gamma_n)^{-\frac{1}{\alpha}}} \,\big(\bar{\eta}_n^{b,noise,0}(z_0)- \bar{\eta}_n^{noise,0}(z_0) \big) \leq u \Big) -&\, \mathbb{P}\Big( \sqrt{n (n\gamma_n)^{-\frac{1}{\alpha}}} \,\bar{\eta}_n^{noise,0}(z_0) \leq u \Big) \Big| \nonumber\\ &\qquad \leq \frac{C' (\log n)^{3/2}}{\sqrt{n(n\gamma_n)^{-3/(2\alpha)}}}. \end{aligned}$$*

**Remark 5**. *Recall from ([\[eq:local_noise\]](#eq:local_noise){reference-type="ref" reference="eq:local_noise"}) and ([\[eq:boot:noise\]](#eq:boot:noise){reference-type="ref" reference="eq:boot:noise"}) that we can express $\bar{\eta}_n^{noise,0}(z_0)= \frac{1}{n}\sum_{k=1}^n \epsilon_k \cdot \Omega_{n,k}(z_0)$ and $\bar{\eta}_n^{b,noise,0}(z_0) - \bar{\eta}_n^{noise,0}(z_0) = \frac{1}{n}\sum_{k=1}^n (w_k-1)\cdot \epsilon_k \cdot \Omega_{n,k}(z_0)$. Theorem [Theorem 2](#thm:local:main1){reference-type="ref" reference="thm:local:main1"} shows that $\bar{\eta}_n^{noise,0}(z_0)$ can be approximated by the normal distribution $N\big(0,\, n^{-1}(n\gamma_n)^{1/\alpha}\sigma^2_{z_0}\big)$.
To prove Theorem [Theorem 3](#thm:boot_local:main1){reference-type="ref" reference="thm:boot_local:main1"}, we introduce an intermediate empirical process evaluated at $z_0$, namely $\frac{1}{n}\sum_{k=1}^n (e_k-1)\cdot \epsilon_k \cdot \Omega_{n,k}(z_0)$, where the $e_k$'s are independent and identically distributed $N(1,1)$ random variables, so that, conditional on $\mathcal{D}_n$, this process has the same variance as $\bar{\eta}_n^{b,noise,0}(z_0)- \bar{\eta}_n^{noise,0}(z_0)$.*

**Theorem 4** (Bootstrap consistency for global inference of leading noise term). *Assume that the kernel $K$ satisfies Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"} and the multiplier weights $\{w_i\}_{i=1}^n$ satisfy Assumption [Assumption 3](#asmp:weights){reference-type="ref" reference="asmp:weights"}.*

1. *(Constant step size) Consider the step size $\gamma(n)=\gamma$ with $\gamma\in(0,\, n^{\frac{\alpha-3}{3}})$ for some $\alpha>2$. Then the following bound holds with probability at least $1-5 n^{-1}$ (with respect to the randomness in the data $\mathcal D_n$): $$\begin{aligned} \sup_{u\in \mathbb{R}} \Big|\, \mathbb{P}^* \Big(\sqrt{n (n\gamma)^{-\frac{1}{\alpha}}}\, \|\,\bar{\eta}_n^{b,noise,0}- \bar{\eta}_n^{noise,0}\,\|_{\infty} \leq u \Big) -&\, \mathbb{P}\Big( \sqrt{n (n\gamma)^{-\frac{1}{\alpha}}}\, \| \,\bar{\eta}_n^{noise,0}\,\|_{\infty} \leq u \Big) \Big|\\ &\quad\leq C(\log n)^{3/2}\big(n(n\gamma)^{-3/\alpha}\big)^{-1/8}. \label{eq:GP_approx:con}\end{aligned}$$*

2. *(Non-constant step size) Consider the step size $\gamma_i =i^{-\xi}$, $i=1,\dots, n$, for some $\xi\in(\min\{0, 1-\alpha/3\},\,1/2)$.
Then the following bound holds with probability at least $1-5n^{-1}$, $$\begin{aligned} \sup_{u \in \mathbb{R}} \Big|\, \mathbb{P}^* \Big( \sqrt{n (n\gamma_n)^{-\frac{1}{\alpha}}} \,\|\,\bar{\eta}_n^{b,noise,0}- \bar{\eta}_n^{noise,0}\,\|_{\infty}\leq u \Big) -&\, \mathbb{P}\Big( \sqrt{n (n\gamma_n)^{-\frac{1}{\alpha}}} \,\|\,\bar{\eta}_n^{noise,0}\,\|_{\infty} \leq u \Big) \Big| \nonumber\\ &\quad\leq C(\log n)^{3/2}\big(n(n\gamma_n)^{-3/\alpha}\big)^{-1/8}.\end{aligned}$$* **Remark 6**. *Theorem [Theorem 4](#thm:global:main1){reference-type="ref" reference="thm:global:main1"} demonstrates that the sampling distribution of $\sqrt{n (n\gamma_n)^{-1/\alpha}} \|\bar{\eta}_n^{noise,0}\|_{\infty}$ can be approximated closely by the conditional distribution of $\sqrt{n (n\gamma_n)^{-1/\alpha}} \|\bar{\eta}_n^{b,noise,0}- \bar{\eta}_n^{noise,0}\|_{\infty}$ given data set $\mathcal{D}_n$. This theorem serves as the theoretical foundation for adopting the multiplier bootstrap method detailed in Section [4.1](#sec:MBootstrap){reference-type="ref" reference="sec:MBootstrap"} for global inference. Recall that the optimal step size for achieving the minimax optimal estimation error is $\gamma = n^{-\frac{1}{\alpha+1}}$ for the constant step size and $\gamma_i= i^{-\frac{1}{\alpha+1}}$ for the non-constant step size (Theorem [Theorem 2](#thm:local:main1){reference-type="ref" reference="thm:local:main1"}). To ensure that the Kolmogorov distance bound in Theorem [Theorem 4](#thm:global:main1){reference-type="ref" reference="thm:global:main1"} decays to $0$ as $n \to \infty$ under these step sizes, we require $\alpha > 2$. 
It is likely that our current Kolmogorov distance bound, which is dominated by an error term arising from applying the Gaussian approximation to analyze $\|\bar{\eta}_n^{noise,0}\|_{\infty}$ and $\|\bar{\eta}_n^{b,noise,0}- \bar{\eta}_n^{noise,0}\|_{\infty}$ through space-discretization (see Section [6.2](#sec:sketch:pf:GA){reference-type="ref" reference="sec:sketch:pf:GA"}), can be substantially refined. We leave this improvement of the Kolmogorov distance bound, which would consequently lead to a weaker requirement on $\alpha$, to future research.* Since the leading noise terms $\bar{\eta}_n^{noise,0}$ and $\bar{\eta}_n^{b,noise,0}$ constitute the primary source of randomness in the functional SGD estimator and its bootstrapped counterpart (Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}), Theorem [Theorem 4](#thm:global:main1){reference-type="ref" reference="thm:global:main1"} implies bootstrap consistency for statistical inference of $f^\ast$ based on the bootstrapped functional SGD. In particular, we present the following corollary, which establishes a high-probability supremum norm bound for the remainder term in the bootstrapped functional SGD decomposition [\[eq:sgd:higher_order_exp\]](#eq:sgd:higher_order_exp){reference-type="eqref" reference="eq:sgd:higher_order_exp"}. Such a bound further implies that the sampling distribution of $\sqrt{n (n\gamma_n)^{-1/\alpha}} \|\bar{f}_n - f^\ast\|_{\infty}$ can be effectively approximated by the conditional distribution of $\sqrt{n (n\gamma_n)^{-1/\alpha}} \|\bar{f}^b_n - \bar{f}_n\|_{\infty}$ given the data $\mathcal D_n$. Recall that we use $\mathbb{P}^*(\cdot)= \mathbb{P}(\,\cdot\, |\, \mathcal{D}_n)$ to denote the conditional probability measure given $\mathcal{D}_n=\{(X_i,\, Y_i)\}_{i=1}^n$. **Corollary 5** (Bootstrap consistency for functional SGD inference).
*Assume that the kernel $K$ satisfies Assumptions [Assumption 1](#asmp:A1){reference-type="ref" reference="asmp:A1"}-[Assumption 2](#asmp:A2){reference-type="ref" reference="asmp:A2"} and the multiplier weights $\{w_i\}_{i=1}^n$ satisfy Assumption [Assumption 3](#asmp:weights){reference-type="ref" reference="asmp:weights"}.*

1. *(Constant step size) Consider the step size $\gamma(n)=\gamma$ with $\gamma\in(0,\, n^{\frac{\alpha-3}{3}})$ for some $\alpha>2$. Then it holds with probability at least $1-\gamma^{1/4}-\gamma^{1/2}-1/n$ with respect to the randomness of $\mathcal{D}_n$ that $$\label{eq:bootstrap_remainder} \mathbb{P}^* \Big(\|\,\bar{f}^b_n - \bar{f}_n - \bar{\eta}_n^{b,bias,0}-\bar{\eta}_n^{b,noise,0}+ \bar{\eta}_n^{bias,0}+ \bar{\eta}_n^{noise,0}\,\|^2_{\infty} \geq \gamma^{1/4} (n\gamma)^{1/\alpha}n^{-1}\Big) \leq \gamma^{1/4}+ \gamma^{1/2}+ 1/n.$$ Furthermore, for $0<\gamma< n^{-\frac{4}{7\alpha +1}}(\log n)^{-3/2}$, it holds with probability at least $1-5n^{-1}-3\gamma^{1/2}-\gamma^{1/4}$ that $$\begin{aligned} \sup_{u \in \mathbb{R}} \Big|\, \mathbb{P}^* \Big(\sqrt{n (n\gamma)^{-1/\alpha}} \,\|\,\bar{f}^b_n \,- & \,\bar{f}_n\,\|_{\infty}\leq u \Big) - \mathbb{P}\Big( \sqrt{n (n\gamma)^{-1/\alpha}}\, \|\, \bar{f}_n - f^\ast-\mbox{Bias}(f^\ast)\,\|_{\infty} \leq u \Big) \Big| \\ &\qquad \leq C_1(\log n)^{3/2} n^{-1/8} (n\gamma)^{3/(8\alpha)} +C\gamma^{1/4},\end{aligned}$$ where $\mbox{Bias}(f^\ast) = \bar \eta_n^{bias,0}$ denotes the bias term and $C_1, C>0$ are constants.*

2. *(Non-constant step size) Consider the step size $\gamma_i = i^{-\frac{1}{\alpha+1}}$ for $i=1,\dots, n$.
Then it holds with probability at least $1-\gamma_n^{1/4}-\gamma_n^{1/2}-1/n$ that $$\mathbb{P}^* \Big(\|\,\bar{f}^b_n - f^\ast - \bar{\eta}_n^{b,bias,0}-\bar{\eta}_n^{b,noise,0}\,\|^2_{\infty} \geq \gamma_n^{1/4} (n\gamma_n)^{1/\alpha}n^{-1}\Big) \leq \gamma_n^{1/4}+ \gamma_n^{1/2}+ 1/n.$$ Furthermore, it holds with probability at least $1-5n^{-1}-\gamma_n^{1/2}-\gamma_n^{1/4}$ that $$\begin{aligned} \sup_{u \in \mathbb{R}} \Big| \,\mathbb{P}^* \Big(\sqrt{n (n\gamma_n)^{-1/\alpha}}\, \|\,\bar{f}^b_n \,- &\,\bar{f}_n\,\|_{\infty}\leq u \Big) - \mathbb{P}\Big( \sqrt{n (n\gamma_n)^{-1/\alpha}} \,\|\,\bar{f}_n - f^\ast-\mbox{Bias}(f^\ast)\,\|_{\infty} \leq u \Big) \Big| \\ &\qquad \lesssim (\log n)^{3/2} n^{-1/8}(n\gamma_n)^{\frac{3}{8\alpha}}+ \gamma_n^{1/4}.\end{aligned}$$* **Remark 7**. *Corollary [Corollary 5](#cor:global:main){reference-type="ref" reference="cor:global:main"} suggests that a smaller step size $\gamma$ (or $\gamma_n$) and a larger sample size $n$ result in more accurate uncertainty quantification. As discussed in Section [4.2](#sec:bs-consistency){reference-type="ref" reference="sec:bs-consistency"}, the functional SGD estimator and its bootstrap counterpart share the same leading bias term, which eliminates the bias in the conditional distribution of $\bar f_n^b - \bar f_n$ given $\mathcal D_n$. However, the bias term $\mbox{Bias}(f^\ast)$ still exists in the sampling distribution of $\bar f_n - f^\ast$. According to Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}, this bias term can be bounded by $O(1/\sqrt{n\gamma})$ with high probability, while the convergence rate of the leading noise term under the supremum norm metric is of order $O(1/\sqrt{n(n\gamma)^{-1/\alpha}})$. Therefore, to make the bias term asymptotically negligible, we can adopt the common practice of "undersmoothing\" [@neumann1998simultaneous; @armstrong2020simple]. 
In our context, this means slightly enlarging the step size as $\gamma=\gamma(n)= n^{-\frac{1}{\alpha+1}+\varepsilon}$ (constant step size) or $\gamma_i = i^{-\frac{1}{\alpha+1}+\varepsilon}$ for $i=1, \dots, n$ (non-constant step size), where $\varepsilon$ is any small positive constant.*

## Online inference algorithm {#sec:online_alg}

Output: the SGD estimator $\bar{f}_{n}$ and the bootstrap estimates $\{\bar{f}^{b,j}_{n}\}_{j=1}^J$; calculate $\{\bar{f}^{b,j}_{n} - \bar{f}_{n}\}_{j=1}^J$.\
Construct the $100(1-\alpha)\%$ confidence interval for $f$ evaluated at any fixed $z_0$ via

1. Normal CI: $(\bar{f}_n(z_0) - z_{\alpha/2}\sqrt{T_n^b(z_0)},\, \bar{f}_n(z_0) + z_{\alpha/2}\sqrt{T_n^b(z_0)})$, where $T_n^{b}(z_0)= \frac{1}{J-1}\sum_{j=1}^J \big(\bar{f}_n^{b,j}(z_0)- \bar{f}_n(z_0)\big)^2$.

2. Percentile CI: $\big(\bar{f}_n(z_0)-C_{\alpha/2},\,\bar{f}_n(z_0)+ C_{1-\alpha/2} \big)$, where $C_{\alpha/2}$ and $C_{1-\alpha/2}$ are the sample $\alpha/2$-th and $(1-\alpha/2)$-th quantiles of $\{\bar{f}^{b,j}_{n}(z_0) - \bar{f}_{n}(z_0)\}_{j=1}^J$.

Construct the $100(1-\alpha)\%$ confidence band for $f$ at any $x\in \mathcal{X}$:\
Step 1: Evenly choose $t_1, \dots, t_M \in \mathcal{X}$.\
Step 2: For $j = 1, \dots, J$, calculate $\max_{1\leq m \leq M} \big|\bar{f}_n^{b,j}(t_m)-\bar{f}_n(t_m)\big|.$\
Step 3: Calculate the sample $\alpha/2$-th and $(1-\alpha/2)$-th quantiles of $$\max_{1\leq m \leq M} \big|\bar{f}_n^{b,1}(t_m)-\bar{f}_n(t_m)\big|, \dots, \max_{1\leq m \leq M} \big|\bar{f}_n^{b,J}(t_m)-\bar{f}_n(t_m)\big|,$$ and denote them by $Q_{\alpha/2}$ and $Q_{1-\alpha/2}$.\
Step 4: Construct the $100(1-\alpha) \%$ confidence band as $\big\{g:\, \mathcal X\to\mathbb R \,\big|\, g(x)\in[\bar f_n(x) - Q_{\alpha/2},\,\bar f_n(x) + Q_{1-\alpha/2}],\ \forall x\in \mathcal X\big\}$.
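The interval and band constructions above can be sketched directly from a set of bootstrap replicates. This is an illustrative implementation that follows the quantile conventions exactly as stated; the synthetic replicates standing in for $\{\bar f_n^{b,j}\}$ are an assumption for demonstration.

```python
import numpy as np
from statistics import NormalDist

def pointwise_cis(fbar_x, boot_x, alpha=0.05):
    """Normal and percentile CIs at a fixed point; fbar_x is bar f_n(x) and
    boot_x holds the replicates bar f_n^{b,j}(x), j = 1..J."""
    diffs = np.asarray(boot_x) - fbar_x
    # Normal CI with the bootstrap variance T_n^b(x)
    se = np.sqrt(np.sum(diffs ** 2) / (len(diffs) - 1))
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    normal_ci = (fbar_x - z * se, fbar_x + z * se)
    # Percentile CI from the sample quantiles C_{alpha/2}, C_{1-alpha/2}
    c_lo, c_hi = np.quantile(diffs, [alpha / 2.0, 1.0 - alpha / 2.0])
    percentile_ci = (fbar_x - c_lo, fbar_x + c_hi)
    return normal_ci, percentile_ci

def simultaneous_band(fbar_grid, boot_grid, alpha=0.05):
    """Band from the J max-deviations over the grid t_1..t_M (Steps 1-4);
    boot_grid has shape (J, M) and fbar_grid has shape (M,)."""
    sup_dev = np.max(np.abs(np.asarray(boot_grid) - fbar_grid), axis=1)
    q_lo, q_hi = np.quantile(sup_dev, [alpha / 2.0, 1.0 - alpha / 2.0])
    return fbar_grid - q_lo, fbar_grid + q_hi

rng = np.random.default_rng(2)
boot_x = 0.7 + 0.05 * rng.normal(size=500)    # synthetic replicates at one x
normal_ci, percentile_ci = pointwise_cis(0.7, boot_x)
```

Both constructions consume only the replicates $\bar f_n^{b,j}$ and the point estimate $\bar f_n$, so no eigen-quantities or noise variance need to be estimated.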
As demonstrated in Corollary [Corollary 5](#cor:global:main){reference-type="ref" reference="cor:global:main"}, the sampling distribution of $\sqrt{n(n\gamma_n)^{-1/\alpha}}(\bar{f}_n - f^\ast)$ can be effectively approximated by the conditional distribution of $\sqrt{n(n\gamma_n)^{-1/\alpha}}(\bar{f}_n^b - \bar{f}_n)$ given the data $\mathcal D_n$ using the bootstrapped functional SGD. This result provides a strong foundation for conducting bootstrap-based online statistical inference. Specifically, we can run $J$ bootstrapped functional SGDs in parallel, producing $J$ estimators $\bar{f}_n^{b,j} = \frac{1}{n}\sum_{i=1}^n \widehat{f}_i^{b,j}$ for $j=1, \dots, J$ with $$\widehat{f}^{b,j}_i = \widehat{f}_{i-1}^{b,j} + \gamma_i\, w_{i,j} (Y_i - \langle \widehat{f}^{b,j}_{i-1}, K_{X_i}\rangle_{\mathbb{H}}) K_{X_i}, \quad \mbox{for}\quad i=1,2,\ldots,$$ where the $w_{i,j}$ are i.i.d. bootstrap weights satisfying Assumption [Assumption 3](#asmp:weights){reference-type="ref" reference="asmp:weights"}. Then we can approximate the sampling distribution of $(\bar{f}_n - f^\ast)$ using the empirical distribution of $\{\bar{f}^{b,j}_n - \bar{f}_n, \, j =1, \dots, J\}$ conditional on $\mathcal D_n$, and further construct point-wise confidence intervals and a simultaneous confidence band for $f^\ast$. We can also use the empirical variance of $\{\bar{f}_n^{b,j},\, j=1,\dots,J\}$ to approximate the variance of $\bar{f}_n$. Based on these quantities, we can construct the point-wise confidence interval for $f^\ast(x)$ at any fixed $x\in\mathcal X$ in two ways: 1. Normal CI: given the sequence of bootstrapped estimators $\bar{f}_n^{b,j}(x)$ for $j=1,\dots, J$, we calculate the variance as $T_n^{b}(x)= \frac{1}{J-1}\sum_{j=1}^J \big(\bar{f}_n^{b,j}(x)- \bar{f}_n(x)\big)^2$, and construct the $100(1 - \alpha)\%$ confidence interval for $f^\ast(x)$ as $(\bar{f}_n(x) - z_{\alpha/2}\sqrt{T_n^b(x)}, \bar{f}_n(x) + z_{\alpha/2}\sqrt{T_n^b(x)})$; 2.
Percentile CI: given the sequence of bootstrapped estimators $\bar{f}_n^{b,j}(x)$ for $j=1,\dots, J$, we calculate $\{\bar{f}_n^{b,j}(x)-\bar{f}_n(x)\}_{j=1}^J$, denote its sample $\alpha/2$-th and $(1-\alpha/2)$-th quantiles by $C_{\alpha/2}$ and $C_{1-\alpha/2}$, and then construct the $100(1 - \alpha)\%$ CI for $f^\ast(x)$ as $\big(\bar{f}_n(x)-C_{\alpha/2},\,\bar{f}_n(x)+ C_{1-\alpha/2} \big)$. To construct the simultaneous confidence band, we first choose dense grid points $t_1,\dots, t_M\in \mathcal{X}$; then for each $j\in\{1,\dots, J\}$, we calculate $\max_{1\leq m \leq M} \big|\bar{f}_n^{b,j}(t_m)-\bar{f}_n(t_m)\big|$ to approximate $\sup_t |\bar{f}_n^{b,j}(t)-\bar{f}_n(t)|$. Accordingly, we obtain the following $J$ bootstrapped supremum norms: $$\label{eq:CB} \max_{1\leq m \leq M} \big|\bar{f}_n^{b,1}(t_m)-\bar{f}_n(t_m)\big|\ , \ \max_{1\leq m \leq M} \big|\bar{f}_n^{b,2}(t_m)-\bar{f}_n(t_m)\big|\ ,\ \dots \ ,\ \mbox{and}\ \max_{1\leq m \leq M} \big|\bar{f}_n^{b,J}(t_m)-\bar{f}_n(t_m)\big|.$$ Denote the sample $\alpha/2$-th and $(1-\alpha/2)$-th quantiles of ([\[eq:CB\]](#eq:CB){reference-type="ref" reference="eq:CB"}) by $Q_{\alpha/2}$ and $Q_{1-\alpha/2}$. Then we construct a $100(1 - \alpha)\%$ confidence band for $f^\ast$ as $\big\{g:\, \mathcal X\to\mathbb R \,\big|\, g(x)\in[\bar f_n(x) - Q_{\alpha/2},\,\bar f_n(x) + Q_{1-\alpha/2}],\ \forall x\in \mathcal X\big\}$. Our online inference algorithm is computationally efficient, as it only requires one pass over the data, and the bootstrapped functional SGD runs can be computed in parallel. The detailed algorithm is summarized in Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}.

# Numerical Study {#sec:numerical}

In this section, we test our proposed online inference approach via simulations. Concretely, we generate synthetic data in a streaming setting with a total sample size of $n$. We use $(X_t, Y_t)$ to represent the $t$-th observed data point for $t=1, \dots, n$.
We evaluate the performance of our proposed method, as described in Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}, for constructing confidence intervals for $f(x)$ at $x=X_t$ for $t=501,1000,1500,2000,2500,3000,3500,4000$, and compare our method with three existing alternative approaches, which we refer to as "offline\" methods. "Offline\" methods calculate the confidence intervals only after all data up to the $t$-th observation have been collected, which necessitates refitting the model each time new data arrive. We also evaluate the coverage probabilities of the simultaneous confidence bands constructed in Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}. We first enumerate the compared offline confidence interval methods as follows: (i) Offline Bayesian confidence interval (Offline BA) proposed in [@wahba1983bayesian]: According to [@wahba1978improper], a smoothing spline method corresponds to a Bayesian procedure when using a partially improper prior. Given this relationship between smoothing splines and Bayes estimates, confidence intervals can be derived from the posterior covariance function of the estimate. In practice, we implement Offline BA using the "gss\" R package [@gu2013smoothing]. (ii) Offline bootstrap normal interval (Offline BN) proposed in [@wang1995bootstrap]: Let $\widehat{f}_{\lambda}$ and $\widehat{\sigma}$ denote the estimates of $f$ and $\sigma$, respectively, obtained by minimizing ([\[eq:plr\]](#eq:plr){reference-type="ref" reference="eq:plr"}) with $\{(X_i,\, Y_i)\}_{i=1}^t$ as below: $$\label{eq:plr} \sum_{i = 1}^{t} (Y_i - f(X_i))^2 + \frac{t}{2}\lambda \int_{0}^1 (f^{''}(u))^2 du,$$ where $\lambda$ is the roughness penalty and $f^{''}(u)$ is the second derivative of $f$ evaluated at $u$. A bootstrap sample is generated from $$Y_i^\dag = \widehat{f}_{\lambda}(X_i) + \epsilon_i^\dag,\quad i = 1,\dots,t,$$ where the $\epsilon^\dag_i$'s are i.i.d.
Gaussian white noise with variance $\widehat{\sigma}^2$. Based on the bootstrap sample, we calculate the bootstrap estimate $\widehat{f}_\lambda^\dag$. Repeating this $J$ times, we obtain a sequence of offline bootstrap estimates $\widehat{f}_\lambda^{\dag,1}, \dots, \widehat{f}_\lambda^{\dag,J}$. We estimate the variance of $\widehat f_\lambda(X_t)$ as $T^\dag_t= \frac{1}{J-1}\sum_{j=1}^J\big(\widehat{f}_\lambda^{\dag,j}(X_t)-\widehat{f}_\lambda(X_t) \big)^2$. A $100(1 - \alpha)\%$ offline normal bootstrap confidence interval for $f(X_t)$ is then constructed as $\big(\,\widehat{f}_\lambda(X_t) - z_{\alpha/2}\sqrt{T^\dag_t}, \,\widehat{f}_\lambda(X_t) + z_{\alpha/2}\sqrt{T^\dag_t}\,\big)$. (iii) Offline bootstrap percentile interval (Offline BP): We apply the same data bootstrapping procedure as in Offline BN, which produces the estimate $\widehat{f}^\dag_\lambda(X_t)$ based on the bootstrap sample. The confidence interval is then constructed using the percentile method suggested in [@efron1982jackknife]. Specifically, let $C^\dag_{\alpha/2}(X_t)$ and $C^\dag_{1-\alpha/2}(X_t)$ represent the $\alpha/2$-th and $(1-\alpha/2)$-th quantiles of the empirical distribution of $\big\{\widehat{f}_\lambda^{\dag,j}(X_t)-\widehat{f}_\lambda(X_t)\big\}_{j=1}^J$, respectively. A $100(1 -\alpha)\%$ confidence interval for $f(X_t)$ is then constructed as $\big(\,\widehat{f}_\lambda(X_t) -C^\dag_{\alpha/2}(X_t),\,\widehat{f}_\lambda(X_t) +C^\dag_{1-\alpha/2}(X_t)\,\big)$. As $t$ increases, the offline methods incur a considerable increase in computational cost. For instance, Offline BA/BN theoretically has a total time complexity of order $\mathcal O(t^4)$ (with an $\mathcal O(t^3)$ cost at time $t$). In contrast, online bootstrap confidence intervals are computed sequentially as new data points become available, making them well-suited for streaming data settings.
They have a theoretical complexity of at most $\mathcal O(t^2)$ (with an $\mathcal O(t)$ cost at time $t$). We examine both the normal CI and the percentile CI, as outlined in Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}, when constructing the confidence intervals. We also examine the effects of various step size schemes. Specifically, we consider a constant step size $\gamma=\gamma(t)= t^{-\frac{1}{\alpha+1}}$, where $t$ represents the total sample size at which the CIs are constructed, and an online step size $\gamma_i = i^{-\frac{1}{\alpha+1}}$ for $i=1,\dots, t$. A limitation of the constant step size scheme is its dependence on prior knowledge of the total time horizon $t$; consequently, the estimator is only rate-optimal at the $t$-th step. We assess our proposed online bootstrap confidence intervals in four different scenarios: (i) Online BNC, which uses a constant step size for the normal interval; (ii) Online BPC, which uses a constant step size for the percentile interval; (iii) Online BNN, which employs a non-constant step size for the normal interval; and (iv) Online BPN, which utilizes a non-constant step size for the percentile interval. We generate our data as i.i.d. copies of random variables $(X,Y)$, where $X$ is drawn from a uniform distribution on the interval $(0,1)$ and $Y = f(X) + \epsilon$. Here $f$ is the unknown regression function to be estimated, and $\epsilon$ represents Gaussian white noise with a variance of $0.2$.
We consider the following three cases of $f=f_\ell$, $\ell=1,2,3$: $$\begin{aligned} \mbox{Case 1:}\quad & f_1(x)= \sin (3\pi x/2), \\ \mbox{Case 2:}\quad & f_2(x) = \frac{1}{3}\beta_{10,5}(x) + \frac{1}{3}\beta_{7,7}(x) + \frac{1}{3}\beta_{5,10}(x), \\ \mbox{Case 3:}\quad & f_3(x) = \frac{6}{19}\beta_{30,17}(x) + \frac{4}{10}\beta_{3,11}(x).\end{aligned}$$ Here, $\beta_{p,q}(x)= \frac{x^{p-1}(1-x)^{q-1}}{B(p,q)}$, where $B(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}$ denotes the beta function and $\Gamma$ is the gamma function, with $\Gamma(p)=(p-1)!$ when $p\in\mathbb N_+$. Cases 2 and 3 are designed to mimic the increasingly complex "truth" scenarios similar to the settings in [@wahba1983bayesian; @wang1995bootstrap]. We draw training data of size $n=3000$ from these models. In our online approaches, we first use 500 data points to build an initial estimate and then employ SGD to derive online estimates from the 501st to the 3000th data point. Given that our framework is designed for online settings, we can construct the confidence intervals based on datasets of size $501$, $1000$, $1500$, $2000$, $2500$ and $3000$, i.e., using the averaged estimators $\bar{f}_t$ at $t=501,1000, 1500, 2000, 2500, 3000$. We repeat the data generation process $200$ times for each case. For each replicate, upon the arrival of a new data point, we apply the proposed multiplier bootstrap method for online inference, using $500$ bootstrap samples (i.e., $J=500$ in Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}) with bootstrap weights $W$ generated from a normal distribution with mean $1$ and standard deviation $1$. We then construct $95\%$ confidence intervals based on Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}. Our results show the coverage and the distribution of the lengths of the confidence intervals built at $t=501, 1000, 1500, 2000, 2500$, and $3000$.
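The three regression functions and the sampling scheme can be reproduced with standard-library Python (the helper names are ours; `math.lgamma` is used to evaluate $B(p,q)$ stably):

```python
import math
import random

def beta_pdf(x, p, q):
    # beta_{p,q}(x) = x^{p-1} (1-x)^{q-1} / B(p,q); B(p,q) computed via log-gamma
    log_b = math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    return math.exp((p - 1) * math.log(x) + (q - 1) * math.log(1 - x) - log_b)

def f1(x):
    return math.sin(3 * math.pi * x / 2)

def f2(x):
    return (beta_pdf(x, 10, 5) + beta_pdf(x, 7, 7) + beta_pdf(x, 5, 10)) / 3

def f3(x):
    return 6 / 19 * beta_pdf(x, 30, 17) + 4 / 10 * beta_pdf(x, 3, 11)

def simulate(n, f, noise_var=0.2, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        # keep x strictly inside (0,1) so the beta densities are well defined
        x = min(max(rng.random(), 1e-9), 1 - 1e-9)
        data.append((x, f(x) + rng.gauss(0.0, math.sqrt(noise_var))))
    return data
```

Note the noise standard deviation is $\sqrt{0.2}$, since the text specifies the noise *variance* as $0.2$.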
-------------------------------------------------------------- ----------------------------------------------------------
**Case 1:** (A1) Coverage                                      (A2) Length of Confidence Interval
![image](images/sim1-cov_new.pdf){width="0.49\\columnwidth"}   ![image](images/sim1-len.pdf){width="0.49\\columnwidth"}
**Case 2:** (B1) Coverage                                      (B2) Length of Confidence Interval
![image](images/sim2-cov_new.pdf){width="0.49\\columnwidth"}   ![image](images/sim2-len.pdf){width="0.49\\columnwidth"}
**Case 3:** (C1) Coverage                                      (C2) Length of Confidence Interval
![image](images/sim3-cov_new.pdf){width="0.49\\columnwidth"}   ![image](images/sim3-len.pdf){width="0.49\\columnwidth"}
-------------------------------------------------------------- ----------------------------------------------------------

![Confidence band constructed using an online bootstrap approach with a non-constant step size. Data are generated in three cases with sample sizes of 1000 (red), 2000 (blue), and 3000 (yellow). The colored band represents the confidence band, the solid black curve is the true function curve, and the colored curve is the estimated function curve based on SGD.](images/CB.png){#fig:band width="90%"}

As shown in Figure [\[fig:sim1\]](#fig:sim1){reference-type="ref" reference="fig:sim1"}, the coverage of all methods approaches the predetermined level of $95\%$ as $t$ increases. The offline Bayesian method exhibits the lowest coverage of all; while it has the longest average confidence interval length in Cases 1-3, it also has the smallest variance in confidence interval lengths. The bootstrap-based methods demonstrate higher coverage and shorter average confidence interval lengths than the offline Bayesian method. The variance in confidence interval lengths for these bootstrap-based methods is larger, due to the bootstrap multiplier resampling procedure or the random step size used in our proposed online bootstrap procedures.
As the sample size grows, the variance in the length of the confidence interval diminishes for all methods. Our online bootstrap procedure with a non-constant step size outperforms the others in both the average length and the variance of the confidence interval: it offers the shortest average confidence interval length and the smallest variance, compared with the Bayesian confidence interval, the offline bootstrap methods, and the online bootstrap procedure with a constant step size. Moreover, the online bootstrap method with a non-constant step size achieves the predetermined coverage level of $95\%$ more quickly than the other methods. Due to computational costs, we only tested our methods (Online BNN and Online BPN) at the larger horizons $t=3500$ and $t=4000$. As observed in Figure [\[fig:sim1\]](#fig:sim1){reference-type="ref" reference="fig:sim1"} (A1), (B1), and (C1), the coverage stabilizes at the predetermined level of $95\%$. We also use our proposed online bootstrap method, as outlined in Algorithm [\[algorithm1\]](#algorithm1){reference-type="ref" reference="algorithm1"}, to construct a $95\%$ confidence band with step size $\gamma_i = i^{-\frac{1}{\alpha+1}}$ at $n=1000,2000,3000$. As seen in Figure [2](#fig:band){reference-type="ref" reference="fig:band"}, the average width of the confidence band decreases as the sample size increases for Cases 1-3, and all of the bands cover the true function curve represented by the solid black curve, indicating that the accuracy of our confidence band estimates improves with a larger sample size.
![(A) Cumulative computation time, recorded as $t$ increases from $501$ to $4000$.](images/time1.pdf){#fig:computing width="0.45\\columnwidth"}
![(B) Computation time for constructing the confidence interval, recorded at different time points. The computation time is shown on the Y-axis with a scaled interval to differentiate between the blue and green curves.](images/time2.pdf){width="0.45\\columnwidth"}

Finally, we compared the computational time of various methods for constructing confidence intervals on a computer workstation equipped with a 32-core 3.50 GHz AMD Threadripper Pro 3975WX CPU and 64GB RAM. We recorded the computational times as data points $501, 1000, \dots, 4000$ arrived and calculated the cumulative computational times up to $t=501, 1000, \dots, 4000$ for both offline and online algorithms. The normal and percentile bootstrap methods displayed similar computational times, so we report the computational time of the percentile bootstrap interval for both offline and online approaches. Despite leveraging parallel computing to accelerate the bootstrap procedures, the offline bootstrap algorithms still demanded significant computation time. This is attributed to the need to refit the model each time a new data point arrives, which substantially raises the computational expense.
The computational complexity of offline methods for computing the estimate of $f$ at time $t$ is $\mathcal O(t^3)$, leading to a cumulative computational complexity of order $\mathcal O(t^4)$. Including the cost of $J$ bootstrap replicates, the complexity at time $t$ becomes $\mathcal O(Jt^3)$, with a cumulative complexity of order $\mathcal O(Jt^4)$. As shown in Figure [4](#fig:computing){reference-type="ref" reference="fig:computing"}, the cumulative computational time reaches approximately $60$ hours for the offline bootstrap method and around $8$ hours for the offline Bayesian method. Conversely, the cumulative computational time for our proposed bootstrap method grows almost linearly with $t$ and requires less than $30$ minutes up to $t=4000$. At $t=4000$, the offline bootstrap methods take about $200$ seconds, and the Bayesian confidence interval requires roughly $30$ seconds to construct. Our proposed online bootstrap method requires fewer than $3$ seconds, demonstrating its potential for time-sensitive applications such as medical diagnosis and treatment, financial trading, and traffic management, where real-time decision-making is essential as data continuously flow in.

# Proof Sketch of Main Results {#sec:proof_sketch}

In this section, we present sketched proofs for the expansion of the functional SGD estimator relative to the supremum norm (Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}) and the bootstrap consistency for global inference (Theorem [Theorem 4](#thm:global:main1){reference-type="ref" reference="thm:global:main1"}), while highlighting some important technical details and key steps.
## Proof sketch for estimator expansion under supremum norm metric {#subsec:sketch1}

Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"} establishes a high-probability supremum norm bound for the high-order expansion of the SGD estimator. This result is crucial for the inference framework: since the remainders are negligible, we only need to track the distributional behavior of the leading terms. In the sketched proof, we denote $\eta_n = \widehat{f}_{n}- f^\ast$. According to ([\[eq:sgd:recursion_f0\]](#eq:sgd:recursion_f0){reference-type="ref" reference="eq:sgd:recursion_f0"}), we have $$\eta_n = (I - \gamma_n K_{X_n}\otimes K_{X_n}) \eta_{n-1} + \gamma_n \epsilon_n K_{X_n}.$$ We split the recursion of $\eta_n$ into two finer recursions: the bias recursion of $\eta_n^{bias}$ and the noise recursion of $\eta_n^{noise}$, such that $\eta_n = \eta_n^{bias} + \eta_n^{noise}$, where $$\begin{aligned} \mbox{bias recursion:} \qquad &\eta_n^{bias} = (I - \gamma_n K_{X_n}\otimes K_{X_n}) \eta^{bias}_{n-1} \quad \quad \textrm{with} \quad \eta_0^{bias}= -f^*;\\ \mbox{noise recursion:} \qquad & \eta^{noise}_n = (I - \gamma_n K_{X_n}\otimes K_{X_n}) \eta^{noise}_{n-1} + \gamma_n \epsilon_n K_{X_n} \quad \quad \textrm{with} \quad \eta_0^{noise}= 0.\end{aligned}$$ To proceed, we further decompose the bias recursion into two parts: (1) the leading bias recursion $\eta_n^{bias,0}$; and (2) the remainder bias recursion $\eta_n^{bias}-\eta_n^{bias,0}$, as follows: $$\begin{aligned} \eta_n^{bias,0}= &\, (I - \gamma_n \Sigma) \eta_{n-1}^{bias,0} \quad \quad \textrm{with} \quad \eta_0^{bias,0}= -f^*; \\ \eta_n^{bias} - \eta_n^{bias,0} = &\, (I- \gamma_n K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{bias} - \eta_{n-1}^{bias,0}) + \gamma_n (\Sigma - K_{X_n}\otimes K_{X_n} )\eta_{n-1}^{bias,0}.\end{aligned}$$ It is worth noting that the leading bias recursion essentially replaces $K_{X_n}\otimes K_{X_n}$ by its expectation $\Sigma = \mathbb{E}[K_{X_n}\otimes K_{X_n}]$.
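Because all the recursions above are linear, the split $\eta_n = \eta_n^{bias} + \eta_n^{noise}$ is exact, which can be checked numerically in a finite-rank toy model (our own illustrative choices: eigenvalues $\mu_\nu=\nu^{-\alpha}$ and cosine eigenfunctions; coefficients are tracked in the eigenbasis, where $\Sigma$ is diagonal):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma, alpha = 50, 400, 0.05, 2.0
idx = np.arange(1, d + 1)
mu = idx.astype(float) ** (-alpha)                        # eigenvalues of Sigma
phi = lambda x: np.sqrt(2.0) * np.cos(np.pi * idx * x)    # L2-orthonormal eigenfunctions

fstar = 1.0 / idx.astype(float) ** 2    # toy coefficients of f*
eta_full = -fstar.copy()                # eta_0 = hat{f}_0 - f* with hat{f}_0 = 0
eta_bias = -fstar.copy()                # bias recursion,  eta_0^{bias}  = -f*
eta_noise = np.zeros(d)                 # noise recursion, eta_0^{noise} = 0

for _ in range(n):
    x, eps = rng.random(), rng.normal(0.0, 0.5)
    px = phi(x)
    Kx = mu * px                        # coefficients of K_{X_i}
    # (K_X (x) K_X) v = v(X) K_X, and v(X) = v @ px in this basis
    eta_full = eta_full - gamma * (eta_full @ px) * Kx + gamma * eps * Kx
    eta_bias = eta_bias - gamma * (eta_bias @ px) * Kx
    eta_noise = eta_noise - gamma * (eta_noise @ px) * Kx + gamma * eps * Kx

# linearity makes the split exact: eta_n = eta_n^{bias} + eta_n^{noise}
assert np.allclose(eta_full, eta_bias + eta_noise)
```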
To bound the residual term $\| \bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\|_\infty$ associated with the leading bias term of the averaged estimator, we introduce an augmented RKHS (with $a\in[0,\, 1/2-1/(2\alpha)]$) $$\label{eqn:augmented} \mathbb{H}_a = \Big\{f= \sum_{\nu=1}^\infty f_\nu \phi_\nu \,\mid \,\sum_{\nu=1}^\infty f_\nu^2 \mu_\nu^{2a-1}< \infty\Big\}$$ equipped with the kernel function $K^a(x,y)= \sum_{\nu=1}^\infty \phi_\nu(x)\phi_\nu(y)\mu_\nu^{1-2a}$. To verify that $K^a(\cdot,\cdot)$ is the reproducing kernel of $\mathbb{H}_a$, we note that $$\|K_x^a\|_a^2 = \Big\|\sum_{\nu=1}^\infty \mu_\nu^{1-2a}\phi_\nu(x)\phi_\nu \Big\|_a^2 = \sum_{\nu=1}^\infty (\phi_\nu(x)\mu_\nu^{1-2a})^2 \mu_\nu^{2a-1} = \sum_{\nu=1}^\infty \phi^2_\nu(x)\mu_\nu^{1-2a} < c^2_a,$$ where $c_a$ is a constant. Moreover, $K_x^a(\cdot)$ satisfies the reproducing property, since $$\begin{aligned} \langle K_x^a, f \rangle_a = &\langle \sum_{\nu=1}^\infty \mu_\nu^{1-2a}\phi_\nu(x)\phi_\nu, f \rangle_a = \sum_{\nu=1}^\infty \phi_\nu(x)\mu_\nu^{1-2a} f_\nu \langle \phi_\nu, \phi_\nu \rangle_a = \sum_{\nu=1}^\infty f_\nu \phi_\nu(x) = f(x). \end{aligned}$$ For any $f\in \mathbb{H}\subset \mathbb{H}_a$, we can use the above reproducing property to bound the supremum norm of $f$ as $\|f\|_{\infty} =\sup_{x\in[0,1]}|f(x)| = \sup_{x\in[0,1]}|\langle K_x^a, f \rangle_a| \leq \|f\|_a \cdot \sup_{x\in[0,1]}\|K_x^a\|_a< c_a \|f\|_a$. Also note that for any $f\in \mathbb{H}$, $\|f\|^2_{a}= \sum_{\nu=1}^\infty f_\nu^2 \mu_\nu^{2a-1} \leq \sum_{\nu=1}^\infty f_\nu^2 \mu_\nu^{-1}= \|f\|^2_{\mathbb{H}}$ for $a\geq 0$; therefore, we have the relationship $\|f\|_{\infty}\leq c_a \|f\|_{a} \leq c_a \|f\|_{\mathbb{H}}$, meaning that $\|\cdot\|_a$ provides a tighter bound for the supremum norm compared with $\|\cdot\|_\mathbb{H}$.
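The chain $\|f\|_\infty \leq c_a\|f\|_a \leq c_a\|f\|_{\mathbb H}$ can be checked numerically in a truncated toy model (illustrative assumptions of ours: $\mu_\nu=\nu^{-\alpha}$ with $\alpha=2$, eigenfunctions $\phi_\nu(x)=\sqrt2\cos(\nu\pi x)$, and $a=0.2\in[0,1/2-1/(2\alpha)]$):

```python
import numpy as np

alpha, a, d = 2.0, 0.2, 1000           # a in [0, 1/2 - 1/(2*alpha)] = [0, 0.25]
nu = np.arange(1, d + 1, dtype=float)
mu = nu ** (-alpha)
f_nu = nu ** (-2.0)                    # toy coefficients; f in H since sum f_nu^2/mu_nu < inf

grid = np.linspace(0.0, 1.0, 2001)
Phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(nu, grid))   # phi_nu(x) = sqrt(2) cos(nu pi x)

sup_norm = np.max(np.abs(f_nu @ Phi))                     # ||f||_inf on the grid
norm_a = np.sqrt(np.sum(f_nu ** 2 * mu ** (2 * a - 1)))   # ||f||_a
norm_H = np.sqrt(np.sum(f_nu ** 2 / mu))                  # ||f||_H
c_a = np.sqrt(np.max((mu ** (1 - 2 * a)) @ (Phi ** 2)))   # sup_x ||K_x^a||_a on the grid

assert sup_norm <= c_a * norm_a <= c_a * norm_H
```

The pointwise Cauchy-Schwarz inequality guarantees the first bound at every grid point, so the grid maxima inherit it.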
In Section [8.2](#app:le:rem_bias:con){reference-type="ref" reference="app:le:rem_bias:con"} (Lemma [Lemma 6](#app:le:bias_rem:con){reference-type="ref" reference="app:le:bias_rem:con"}), we use this augmented RKHS to show that the bias remainder term satisfies $\|\bar{\eta}_n^{bias}-\bar{\eta}_n^{bias,0}\|_{\infty} = o_{\mathbb{P}}\big(\|\bar{\eta}_n^{bias,0}\|_{\infty}\big)$, by computing the expectation $\mathbb{E}\big[\|\bar{\eta}_n^{bias}-\bar{\eta}_n^{bias,0}\|^2_{a}\big]$ and applying the Markov inequality. For the noise recursion of $\eta_n^{noise}$, we can similarly split it into a leading noise recursion term and a residual noise recursion term as $$\begin{aligned} \eta_n^{noise,0} = &\, (I - \gamma_n\Sigma) \eta^{noise,0}_{n-1} + \gamma_n \epsilon_n K_{X_n} \quad \quad \textrm{with}\quad \eta_{0}^{noise, 0}=0;\\ \eta_n^{noise} - \eta_n^{noise,0} = & \, (I - \gamma_n K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{noise} - \eta_{n-1}^{noise,0}) + \gamma_n (\Sigma - K_{X_n}\otimes K_{X_n} ) \eta_{n-1}^{noise, 0}.\end{aligned}$$ The leading noise recursion is called a "semi-stochastic" recursion induced by $\eta_n^{noise}$ in [@Bach2016], since it keeps the randomness in the noise recursion $\eta_n^{noise}$ due to the noise $\{\epsilon_i\}_{i=1}^n$, but gets rid of the randomness arising from $K_{X_n}\otimes K_{X_n}$, which is due to the random design $\{X_i\}_{i=1}^n$. For the residual noise recursion, directly bounding $\|\bar{\eta}_n^{noise} - \bar{\eta}_n^{noise,0}\|_\infty$ is difficult. Instead, following [@Bach2016], we further decompose $\eta_n^{noise} - \eta_n^{noise,0}$ into a sequence of higher-order "semi-stochastic" recursions as follows.
We first define a semi-stochastic recursion induced by $\eta_n^{noise} - \eta_n^{noise,0}$, denoted as $\eta_n^{noise,1}$: $$\eta_n^{noise,1} = (I-\gamma_n \Sigma)\eta_{n-1}^{noise,1} + \gamma_n (\Sigma - K_{X_n}\otimes K_{X_n})\eta_{n-1}^{noise,0}.\label{eq:noise_1:recursion}$$ Here, $\eta_n^{noise,1}$ replaces the random operator $K_{X_n}\otimes K_{X_n}$ with its expectation $\Sigma$ in the residual noise recursion for $\eta_n^{noise} - \eta_n^{noise,0}$, and can be viewed as a second-order term in the expansion of the noise recursion, i.e., the leading remainder noise term. The remaining noise terms can be expressed as $$\begin{aligned} &\, \eta_n^{noise} - \eta_n^{noise,0} -\eta_n^{noise,1} = (I -\gamma_n K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{noise}-\eta_{n-1}^{noise,0}) - (I-\gamma_n \Sigma)\eta_{n-1}^{noise,1}\\ &\qquad = (I-\gamma_n K_{X_n}\otimes K_{X_n})(\eta_{n-1}^{noise}-\eta_{n-1}^{noise,0}-\eta_{n-1}^{noise,1}) + \gamma_n (\Sigma - K_{X_n}\otimes K_{X_n})\eta_{n-1}^{noise,1}.\end{aligned}$$ We can then further define a semi-stochastic recursion induced by $\eta_n^{noise} - \eta_n^{noise,0} -\eta_n^{noise,1}$, and repeat this process. If we define $\mathcal{E}_n^{r} = (\Sigma - K_{X_n}\otimes K_{X_n})\,\eta_{n-1}^{noise,r-1}$ for $r\geq 1$, then we can expand $\eta_n^{noise}$ into $(r+2)$ terms as $$\eta_n^{noise} = \eta_n^{noise,0} + \eta_n^{noise,1} + \eta_n^{noise,2} +\cdots + \eta_n^{noise,r} + \textrm{Remainder},$$ where $\eta_n^{noise,d} = (I-\gamma_n \Sigma)\eta_{n-1}^{noise,d} + \gamma_n \mathcal{E}_n^{d}$ for $1\leq d\leq r$.
The Remainder term $\eta_n^{noise}-\sum_{d=0}^r\eta_n^{noise,d}$ also has a recursive characterization: $$\label{eq:remainder:recursion} \eta_n^{noise} - \sum_{d=0}^r \eta_n^{noise,d} = (I -\gamma_n K_{X_n}\otimes K_{X_n})\Big(\eta_{n-1}^{noise} - \sum_{d=0}^r \eta_{n-1}^{noise,d}\Big) + \gamma_n \mathcal{E}_n^{r+1}.$$ To establish the supremum norm bound of $\bar{\eta}_n^{noise} - \bar{\eta}_n^{noise,0}$, the idea is to show that $\bar{\eta}_n^{noise,r}$ decays as $r$ increases, that is, to prove $$\bar{\eta}_n^{noise} = \bar{\eta}_n^{noise,0} + \underbrace{\bar{\eta}_n^{noise,1}}_{=o(\bar{\eta}_n^{noise,0})} + \underbrace{\bar{\eta}_n^{noise,2}}_{=o(\bar{\eta}_n^{noise,1})} + \dots + \underbrace{\bar{\eta}_n^{noise,r}}_{=o(\bar{\eta}_n^{noise,r-1})}+ \underbrace{ \bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}}_{\textrm{negligible}}.$$ Concretely, we consider the constant step size case for simplicity of presentation. By accumulating the effects of the iterations, we can further express $\eta_n^{noise,1}$ as $$\begin{aligned} \eta_n^{noise,1} = &\gamma \sum_{i=1}^{n-1} (I-\gamma\Sigma)^{n-i-1}\big(\Sigma - K_{X_{i+1}}\otimes K_{X_{i+1}}\big)\eta_i^{noise,0}, \end{aligned}$$ and accordingly, the averaged version is $$\bar{\eta}_n^{noise,1} = \frac{1}{n} \sum_{j=1}^{n-1}\, \epsilon_j \cdot \underbrace{\gamma^2 \,\Big[\sum_{i=j}^{n-1}\Big(\sum_{\ell=i}^{n-1}(I-\gamma\Sigma)^{\ell-i}\Big)\big(\Sigma - K_{X_{i+1}}\otimes K_{X_{i+1}}\big)(I-\gamma\Sigma)^{i-j}K_{X_j}\Big]}_{g_j}.$$ This implies that, conditional on the covariates $\{X_1,\dots, X_n\}$, the empirical process $\bar{\eta}_n^{noise,1}(\cdot) = \frac{1}{n}\sum_{j=1}^{n-1} \epsilon_j\cdot g_j (\cdot)$ over $[0,1]$ is a Gaussian process with (function) weights $\{g_j\}_{j=1}^{n-1}$.
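In the same finite-rank toy model as before (our illustrative settings, not the paper's experiment), one can check numerically that the averaged second-order term is indeed of smaller order than the leading noise term:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, gamma, alpha, reps = 30, 300, 0.05, 2.0, 20
idx = np.arange(1, d + 1)
mu = idx.astype(float) ** (-alpha)                        # Sigma = diag(mu) in the eigenbasis
phi = lambda x: np.sqrt(2.0) * np.cos(np.pi * idx * x)

r0 = r1 = 0.0
for _ in range(reps):
    e0, e1 = np.zeros(d), np.zeros(d)                     # eta^{noise,0}, eta^{noise,1}
    s0, s1 = np.zeros(d), np.zeros(d)                     # running sums for the averages
    for _ in range(n):
        x, eps = rng.random(), rng.normal(0.0, 0.5)
        px = phi(x)
        Kx = mu * px
        # driving term (Sigma - K_X (x) K_X) eta_{i-1}^{noise,0}
        drive = mu * e0 - (e0 @ px) * Kx
        e1 = (1.0 - gamma * mu) * e1 + gamma * drive
        e0 = (1.0 - gamma * mu) * e0 + gamma * eps * Kx
        s0 += e0
        s1 += e1
    r0 += np.sum((s0 / n) ** 2)
    r1 += np.sum((s1 / n) ** 2)

# averaged second-order term is much smaller than the leading noise term
assert r1 < r0
```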
We can then prove a bound on $\|\bar{\eta}_n^{noise,1}\|_{\infty}$ by carefully analyzing the random function $\sum_{j=1}^{n-1} g_j^2(\cdot)$; see Appendix [8.3](#app:le:rem_var:con){reference-type="ref" reference="app:le:rem_var:con"} for further details. A complete proof of Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"} under constant step size is included in [@liu2023supp]; see Figure [\[float_chart\]](#float_chart){reference-type="ref" reference="float_chart"} for a flow chart explaining the relationship among the different components of its proof. The proof for the non-constant step size case is conceptually similar but considerably more involved, requiring a much more refined analysis of the accumulated step size effect on the iterations of the recursions in [@liu2023supp].

## Proof sketch for bootstrap consistency of global inference {#sec:sketch:pf:GA}

Recall that $\mathcal{D}_n = \{X_i, Y_i\}_{i=1}^n$ represents the data. The goal is to bound the difference between the sampling distribution of $\sqrt{n(n\gamma)^{-1/\alpha}}\, \|\bar{\eta}_n^{noise,0}\|_\infty$ and the conditional distribution of $\sqrt{n(n\gamma)^{-1/\alpha}}\, \|\bar{\eta}_n^{b,noise,0} - \bar{\eta}_n^{noise,0}\|_\infty$ given $\mathcal{D}_n$; see Section [4.2](#sec:bs-consistency){reference-type="ref" reference="sec:bs-consistency"} for detailed definitions of these quantities. We sketch the proof idea under the constant step size scheme. We will use the shorthand $\bar{\alpha}_n = \sqrt{n(n\gamma)^{-1/\alpha}} \,\bar{\eta}_n^{noise,0}$ and $\bar{\alpha}_n^b = \sqrt{n (n\gamma)^{-1/\alpha}} \,\big(\bar{\eta}_n^{b,noise,0}- \bar{\eta}_n^{noise,0}\big)$.
Recall that from equations [\[eq:local_noise\]](#eq:local_noise){reference-type="eqref" reference="eq:local_noise"} and [\[eq:boot:noise\]](#eq:boot:noise){reference-type="eqref" reference="eq:boot:noise"}, we have $$\begin{aligned} \bar{\alpha}_n (\cdot)= \frac{1}{\sqrt{n(n\gamma)^{1/\alpha}}}\sum_{i=1}^n \epsilon_i \cdot \Omega_{n,i}(\cdot) \quad\mbox{and}\quad \bar{\alpha}_n^b(\cdot) = \frac{1}{\sqrt{n(n\gamma)^{1/\alpha}}}\sum_{i=1}^n (w_i-1)\cdot \epsilon_i \cdot \Omega_{n,i}(\cdot). \end{aligned}$$ From this display, we see that for any $t\in \mathcal{X}$, $\bar{\alpha}_n (t)$ is a weighted sum of Gaussian random variables, with the weights being functions of the covariates $\{X_i\}_{i=1}^n$; conditional on $\mathcal{D}_n$, $\bar{\alpha}_n^b (t)$ is a weighted sum of sub-Gaussian random variables. In the proof, we also require a sufficiently dense space discretization given by $0=t_1<t_2<\cdots<t_N=1$. This discretization forms an $\varepsilon$-covering for some $\varepsilon$ with respect to a specific distance metric that will be detailed later. To bound the difference between the distribution of $\|\bar{\alpha}_n\|_{\infty}$ and the conditional distribution of $\|\bar{\alpha}_n^b\|_{\infty}$ given $\mathcal{D}_n$, we introduce two intermediate processes: (1) $\bar{\alpha}_n^e (\cdot) = \frac{1}{\sqrt{n(n\gamma)^{1/\alpha}}}\sum_{i=1}^n e_i \cdot \epsilon_i \cdot \Omega_{n,i}(\cdot)$, with $e_i$ being i.i.d. standard normal random variables for $i=1,\cdots, n$; (2) an $N$-dimensional multivariate normal random vector $\big(\bar{Z}_n (t_k) = \frac{1}{\sqrt{n}}\sum_{i=1}^n Z_{i}(t_k),\, k=1,2,\ldots,N\big)$ (recall that $0=t_1<t_2<\cdots<t_N=1$ is the space discretization defined earlier), where $\big\{\big(Z_i(t_1), Z_i(t_2),\ldots,Z_i(t_N)\big)\big\}_{i=1}^n$ are i.i.d.
(zero mean) normally distributed random vectors having the same covariance structure as $\big(\bar{\alpha}_n (t_1),\bar{\alpha}_n (t_2),\ldots,\bar{\alpha}_n (t_N)\big)$; that is, $Z_{i}(t_k) \sim N \big(0, (n\gamma)^{-1/\alpha} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu^2(t_k)\big)$, $\mathbb{E}\big(Z_{i}(t_k)\cdot Z_{i}(t_\ell)\big) =(n\gamma)^{-1/\alpha} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu(t_k)\phi_\nu(t_\ell)$, and $\mathbb{E}\big(Z_{i}(t_k)\cdot Z_{j}(t_\ell)\big) = 0$ for $(k,\ell)\in[N]^2$ and $(i,j)\in[n]^2$, $i\neq j$. These two intermediate processes are introduced so that the conditional distribution of $\max_{1\leq j\leq N} \bar{\alpha}_n^e(t_j)$ given $\mathcal{D}_n$ can be used to approximate the conditional distribution of $\|\bar{\alpha}_n^b\|_{\infty}$ given $\mathcal{D}_n$, while the distribution of $\max_{1\leq j\leq N} \bar{Z}_n (t_j)$ can be used to approximate the distribution of $\|\bar{\alpha}_n\|_{\infty}$. Since both the distribution of $\big(\bar{Z}_n (t_1),\bar{Z}_n (t_2),\ldots,\bar{Z}_n (t_N)\big)$ and the conditional distribution of $\big(\bar{\alpha}_n^e (t_1),\bar{\alpha}_n^e (t_2),\ldots,\bar{\alpha}_n^e (t_N)\big)$ given $\mathcal{D}_n$ are centered multivariate normal distributions, we can use a Gaussian comparison inequality to bound the difference between them via the difference between their covariances. The actual proof is even more complicated, as we also need to control the discretization error. See Figure [\[fig:flowchart:GA\]](#fig:flowchart:GA){reference-type="ref" reference="fig:flowchart:GA"} for a flow chart that summarizes all the intermediate approximation steps and the corresponding lemmas in the appendix.
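The logic of the comparison — the conditional law of a multiplier-weighted maximum should track the sampling law of the original maximum — can be illustrated on a synthetic grid. This is only a sanity check: the weight matrix `G` is a stand-in for $\Omega_{n,i}(\cdot)$, not the paper's actual weights.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, reps = 200, 25, 2000
grid = np.linspace(0.0, 1.0, N)
# fixed "weight functions" on a grid, standing in for Omega_{n,i}(.)
G = np.sin(np.pi * np.outer(np.arange(1, n + 1) / n, grid)) + 1.0   # shape (n, N)

# sampling distribution of max_t |alpha_n(t)| over the grid, Gaussian errors eps_i
eps = rng.normal(0.0, 0.5, size=(reps, n))
M = np.max(np.abs(eps @ G) / np.sqrt(n), axis=1)

# multiplier version: fix one error draw eps0, vary Gaussian multipliers e_i
eps0 = rng.normal(0.0, 0.5, size=n)
e = rng.normal(0.0, 1.0, size=(reps, n))
Mb = np.max(np.abs((e * eps0) @ G) / np.sqrt(n), axis=1)

# Kolmogorov distance between the two empirical distributions of the maxima
xs = np.sort(np.concatenate([M, Mb]))
F1 = np.searchsorted(np.sort(M), xs, side="right") / reps
F2 = np.searchsorted(np.sort(Mb), xs, side="right") / reps
ks = np.max(np.abs(F1 - F2))
```

For moderate $n$, the conditional multiplier variance $\sum_i \epsilon_{0,i}^2 g_i(t)^2$ concentrates around its mean, so the Kolmogorov distance `ks` is small.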
In the first two steps in Figure [\[fig:flowchart:GA\]](#fig:flowchart:GA){reference-type="ref" reference="fig:flowchart:GA"}, we approximate the continuous supremum norms of $\bar{\alpha}_n$ and $\bar{\alpha}_n^b$ by the finite maxima of $\big(\bar{\alpha}_n(t_1),\ldots, \bar{\alpha}_n(t_N)\big)$ and $\big(\bar{\alpha}_n^b(t_1),\ldots, \bar{\alpha}_n^b(t_N)\big)$, respectively. Here, $N$ is chosen as the $\varepsilon$-covering number of the unit interval $[0,1]$ with respect to the metric defined by $e_P^2(t,s)= \mathbb{E}\big[\big(\bar{\alpha}_n(t)-\bar{\alpha}_n(s)\big)^2\big]$ for $(t,s)\in[0,1]^2$; that is, there exist $t_1, \dots, t_N \in [0,1]$ such that for every $t\in [0,1]$ there exists $1\leq j \leq N$ with $e_P(t, t_j) < \varepsilon$. We refer to the Supplementary Material for the detailed proofs of these steps. Notice that $\bar{\alpha}_n$ is a weighted and non-identically distributed empirical process. In the next step, we develop Gaussian approximation bounds to control the Kolmogorov distance between the sampling distribution of $\max_{1\leq j \leq N} \bar{\alpha}_n(t_j)$ and the distribution of $\max_{1\leq j \leq N} \bar{Z}_n(t_j)$; see the proof in Lemma [Lemma 8](#app:le:GP:atoz:con){reference-type="ref" reference="app:le:GP:atoz:con"}. In the final step, by noticing that, conditional on $\mathcal{D}_n$, $\bar{\alpha}_n^b$ is a weighted and non-identically distributed sub-Gaussian process with randomness coming from the bootstrap multipliers $\{w_i\}_{i=1}^n$, we adopt a similar argument to bound the Kolmogorov distance between the conditional distributions of $\max_{1\leq j \leq N} \bar{\alpha}_n^e(t_j)$ and $\max_{1\leq j \leq N} \bar{\alpha}_n^b(t_j)$ given $\mathcal D_n$.

# Discussion {#sec:dis}

Uncertainty quantification (UQ) for large-scale streaming data is a central challenge in statistical inference. We have developed a multiplier bootstrap-based inferential framework for UQ in online non-parametric least squares regression.
We propose using perturbed stochastic functional gradients to generate a sequence of bootstrapped functional SGD estimators for constructing point-wise confidence intervals (local inference) and simultaneous confidence bands (global inference) for the function parameter in an RKHS. Theoretically, we establish a framework to derive the non-asymptotic law of the infinite-dimensional SGD estimator and demonstrate the consistency of the multiplier bootstrap method. This work assumes that the random errors in non-parametric regression follow a Gaussian distribution. However, in many real-world applications, heavy-tailed distributions are more common and better suited to capturing outlier behaviors. One future research direction is to extend the current methods to heavy-tailed errors, thereby offering a more robust approach to online non-parametric inference. Another direction to explore is the generalization of the multiplier bootstrap weights to independent sub-exponential random variables and even exchangeable weights. Finally, a promising direction is online non-parametric inference for dependent data. Such an extension is necessary to address problems like multi-armed bandits and reinforcement learning, where data dependence is common and real-time updates are essential. Adapting our methods to these problems could provide deeper insights into the interplay between statistical inference and online decision-making.

# Some Key Proofs {#sec:key_proof}

## Proof of leading terms in Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"} in constant step size case {#append:proof:lemma:bias_variance:1}

Recall that in Section [6.1](#subsec:sketch1){reference-type="ref" reference="subsec:sketch1"} we split the recursion of $\eta_n= \widehat{f}_n - f^*$ into the bias recursion and the noise recursion, that is, $\eta_n = \eta_n^{bias} + \eta_n^{noise}$.
Here $\eta_n^{bias}$ can be further decomposed into its leading bias term $\eta_n^{bias,0}$ and the remainder $\eta_n^{bias}-\eta_n^{bias,0}$, satisfying the recursions $$\begin{aligned} \eta_n^{bias,0}= & (I - \gamma_n \Sigma) \eta_{n-1}^{bias,0} \quad \quad \textrm{with} \quad \eta_0^{bias,0}=f^\ast \label{eq:bias:lead}\\ \eta_n^{bias} - \eta_n^{bias,0} = & (I- \gamma_n K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{bias} - \eta_{n-1}^{bias,0}) + \gamma_n (\Sigma - K_{X_n}\otimes K_{X_n} )\eta_{n-1}^{bias,0}. \label{eq:bias:rem}\end{aligned}$$ We further decompose $\eta_n^{noise}$ into its main recursion term and residual recursion term as $$\begin{aligned} \eta_n^{noise,0} = & (I - \gamma_n\Sigma) \eta^{noise,0}_{n-1} + \gamma_n \epsilon_n K_{X_n} \quad \quad \textrm{with} \quad \eta_{0}^{noise, 0}=0 \\ \eta_n^{noise} - \eta_n^{noise,0} = & (I - \gamma_n K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{noise} - \eta_{n-1}^{noise,0}) + \gamma_n (\Sigma - K_{X_n}\otimes K_{X_n} ) \eta_{n-1}^{noise, 0}. \label{eq:noise:rem}\end{aligned}$$ We focus on the averaged version $\bar{\eta}_n = \bar{f}_n - f^\ast$ with $$\begin{aligned} \bar{\eta}_n = & \bar{\eta}_n^{bias,0} + \bar{\eta}_n^{noise,0} + Rem_{noise} + Rem_{bias}, \end{aligned}$$ where $Rem_{noise} = \bar{\eta}_n^{noise}-\bar{\eta}_n^{noise,0}$ and $Rem_{bias} = \bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}$.
Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"} for the constant step case includes the following three results: $$\begin{aligned} & \sup_{z_0\in\mathcal{X}}|\bar{\eta}_n^{bias,0}(z_0)|\lesssim \frac{1}{\sqrt{n\gamma}} \label{eq:le:bias_variance:1}\\ &\sup_{z_0\in\mathcal{X}} \mathop{\mathrm{{\rm Var}}}\big(\bar{\eta}_n^{noise,0}(z_0)\big)\lesssim \frac{(n\gamma)^{1/\alpha}}{n}\label{eq:le:bias_variance:2}\\ & \mathbb{P}\Big(\|\bar{\eta}_n - \bar{\eta}_n^{bias,0}-\bar{\eta}_n^{noise,0}\|^2_{\infty} \geq \gamma^{1/2} (n\gamma)^{-1}+ \gamma^{1/2} (n\gamma)^{1/\alpha}n^{-1}\log n\Big) \leq 1/n + \gamma^{1/2} \label{eq:le:bias_variance:3}. \end{aligned}$$ In this section, we will bound the sup-norm of the leading bias term $\bar{\eta}_n^{bias,0}$ and the leading variance term $\bar{\eta}_n^{noise,0}$. To complete the proof of ([\[eq:le:bias_variance:3\]](#eq:le:bias_variance:3){reference-type="ref" reference="eq:le:bias_variance:3"}), we will bound $\|Rem_{bias}\|_\infty$ in Section [8.2](#app:le:rem_bias:con){reference-type="ref" reference="app:le:rem_bias:con"} and $\|Rem_{noise}\|_\infty$ in Section [8.3](#app:le:rem_var:con){reference-type="ref" reference="app:le:rem_var:con"}. We first provide explicit expressions for $\eta_n^{bias,0}$ and $\eta_n^{noise,0}$. Denote $$\begin{aligned} D(k, n, \gamma_i) = & \prod_{i=k}^n (I - \gamma_i \Sigma) \quad \textrm{and} \quad D(k, n, \gamma) = \prod_{i=k}^n (I - \gamma \Sigma)= (I - \gamma \Sigma)^{n-k+1}\\ M(k, n, \gamma_i ) = & \prod_{i=k}^n (I - \gamma_i K_{X_i} \otimes K_{X_i}) \quad \textrm{and} \quad M(k, n, \gamma) = \prod_{i=k}^n (I - \gamma K_{X_i} \otimes K_{X_i}),\end{aligned}$$ with $D(n+1, n, \gamma_i)=D(n+1, n, \gamma)=1$.
We have $$\label{eq:bias:lead:exp} \eta_n^{bias,0} = D(1, n, \gamma_i)f^\ast \quad \textrm{and} \quad \bar{\eta}_n^{bias,0}= \frac{1}{n}\sum_{k=1}^n D(1, k, \gamma_i)f^\ast;$$ $$\label{eq:noise:lead:exp} \eta_n^{noise,0} = \sum_{i=1}^n D(i+1,n, \gamma_i) \gamma_i \epsilon_i K_{X_i} \quad \textrm{and} \quad \bar{\eta}_n^{noise,0} = \frac{1}{n} \sum_{i=1} ^n \big(\sum_{j=i}^n D(i+1,j, \gamma_i) \big) \gamma_i \epsilon_i K_{X_i}.$$ [**Bound the leading bias term ([\[eq:le:bias_variance:1\]](#eq:le:bias_variance:1){reference-type="ref" reference="eq:le:bias_variance:1"})**]{.ul} For the case of constant step size, based on ([\[eq:bias:lead:exp\]](#eq:bias:lead:exp){reference-type="ref" reference="eq:bias:lead:exp"}), we have $$\begin{aligned} \bar{\eta}_n^{bias,0}(z_0)= & \frac{1}{n}\sum_{k=1}^n (I - \gamma \Sigma)^{n-k+1} f^\ast(z_0). \end{aligned}$$ Note that any $f\in \mathbb{H}$ can be represented as $f=\sum_{\nu=1}^\infty \langle f, \phi_\nu \rangle_{L_2}\phi_\nu$, where $\{\phi_\nu\}_{\nu=1}^\infty$ satisfies $\|\phi_\nu\|_{L_2}^2 = 1 = \mathbb{E}(\phi_\nu^2(x))$, $\langle \phi_\nu, \phi_\nu \rangle_{\mathbb{H}} = \mu_\nu^{-1}$, and $\Sigma \phi_\nu = \mu_\nu \phi_\nu$, $\Sigma^{-1}\phi_\nu = \mu_\nu^{-1}\phi_\nu$. Then for any $z_0\in \mathcal{X}$, $f^\ast(z_0) = \sum_{\nu=1}^\infty \langle f^\ast, \phi_\nu\rangle_{L_2} \phi_\nu(z_0)$. 
By the assumption that $f^\ast\in \mathbb{H}$ satisfies $\sum_{\nu=1}^\infty \langle f^\ast, \phi_\nu\rangle_{L_2} \mu_\nu^{-1/2} < \infty$, we have $$\begin{aligned} \bar{\eta}_n^{bias,0}(z_0) = & \frac{1}{\sqrt{\gamma}n}\sum_{k=1}^n \sum_{\nu=1}^\infty \langle f^\ast, \phi_\nu\rangle_{L_2} \mu_\nu^{-1/2} (1-\gamma \mu_\nu)^k (\gamma\mu_\nu)^{1/2}\phi_\nu(z_0)\\ \leq & c_\phi \frac{1}{\gamma^{1/2}n} \sum_{\nu=1}^\infty \langle f^\ast, \phi_\nu\rangle_{L_2} \mu_\nu^{-1/2} \Big(\sup_{0\leq x \leq 1} \big(\sum_{k=1}^n (1-x)^k x^{1/2}\big) \Big) \\ \leq & c_\phi \frac{1}{\sqrt{n \gamma}} \sum_{\nu=1}^\infty \langle f^\ast, \phi_\nu\rangle_{L_2} \mu_\nu^{-1/2} \lesssim \frac{1}{\sqrt{n \gamma}}, \end{aligned}$$ where the last inequality uses the elementary bound $\sup_{0\leq x \leq 1} \big(\sum_{k=1}^n (1-x)^k x^{1/2}\big)\leq \sqrt{n}$. [**Bound the leading noise term ([\[eq:le:bias_variance:2\]](#eq:le:bias_variance:2){reference-type="ref" reference="eq:le:bias_variance:2"})**]{.ul} We first deduce the explicit expression of $\bar{\eta}_n^{noise,0}(z_0)$ and its variance. Based on ([\[eq:noise:lead:exp\]](#eq:noise:lead:exp){reference-type="ref" reference="eq:noise:lead:exp"}), for the constant step case, we have for any $z_0\in \mathcal{X}$, $$\bar{\eta}_n^{noise,0}(z_0) = \frac{1}{n}\sum_{k=1}^n \Sigma^{-1}\big(I - (I-\gamma\Sigma)^{n+1-k} \big) K(X_k,z_0)\epsilon_k.$$ Note that for any $x,z$, $K(x,z) = \sum_{\nu=1}^\infty \mu_\nu \phi_\nu(x)\phi_\nu(z)$. 
Then $$\begin{aligned} \Sigma^{-1}\big(I - (I-\gamma\Sigma)^{n+1-k} \big) K(X_k,z_0) = & \Sigma^{-1}\big(I - (I-\gamma\Sigma)^{n+1-k} \big) \big(\sum_{\nu=1}^\infty \mu_\nu\phi_\nu(X_k) \phi_\nu(z_0)\big)\\ = & \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n+1-k})\phi_\nu(X_k)\phi_\nu(z_0).\end{aligned}$$ Therefore, $\bar{\eta}_n^{noise, 0}(z_0) = \frac{1}{n} \sum_{k=1}^n \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n+1-k})\phi_\nu(X_k)\phi_\nu(z_0)\epsilon_k$ with $\mathbb{E}(\bar{\eta}_n^{noise,0}(z_0)) = 0$, and $$\mathop{\mathrm{{\rm Var}}}\big(\bar{\eta}_n^{noise,0}(z_0)\big) = \frac{\sigma^2}{n^2} \sum_{\nu=1}^\infty \phi_\nu^2(z_0)\sum_{k=1}^n[1-(1-\gamma \mu_\nu)^{n+1-k} ]^2.$$ Note that $$\sum_{k=1}^n \sum_{\nu=1}^\infty \big(1-(1-\gamma \mu_\nu)^k\big)^2 \asymp \sum_{k=1}^n \sum_{\nu=1}^\infty \min\bigl\{1, (k\gamma \mu_\nu)^2\bigr\}.$$ On the other hand, $\sum_{\nu=1}^\infty \min\{1,(k\gamma \mu_\nu)^2\} \asymp (k\gamma)^{1/\alpha} + \sum_{\nu=(k\gamma)^{1/\alpha}+1}^\infty (k\gamma \mu_\nu)^2$. Since $$\sum_{\nu=(k\gamma)^{1/\alpha}+1}^\infty (k\gamma \mu_\nu)^2 \leq \sum_{\nu=(k\gamma)^{1/\alpha}+1}^\infty k\gamma\mu_\nu = k \gamma \sum_{\nu=(k\gamma)^{1/\alpha}+1}^\infty \nu^{-\alpha} \leq k \gamma \int_{(k\gamma)^{1/\alpha}}^\infty x^{-\alpha} dx \asymp (k\gamma)^{1/\alpha},$$ we have $$\sum_{k=1}^n \sum_{\nu=1}^\infty \big(1-(1-\gamma \mu_\nu)^k\big)^2 \asymp \sum_{k=1}^n (k\gamma)^{1/\alpha} \asymp \gamma^{1/\alpha} n^{(\alpha+1)/\alpha} = (n\gamma)^{1/\alpha} n.$$ Accordingly, $\mathop{\mathrm{{\rm Var}}}(\bar{\eta}_n^{noise,0}(z_0))\lesssim \frac{(n\gamma)^{1/\alpha}}{n}$. Meanwhile, $\sum_{\nu=1}^\infty \min\{1,(k\gamma \mu_\nu)^2\} \gtrsim (k\gamma)^{1/\alpha}$ implies $\sum_{k=1}^n \sum_{\nu=1}^\infty \big(1-(1-\gamma \mu_\nu)^k\big)^2 \gtrsim n(n\gamma)^{1/\alpha},$ thus $\mathop{\mathrm{{\rm Var}}}(\bar{\eta}_n^{noise,0}(z_0))\gtrsim \frac{(n\gamma)^{1/\alpha}}{n}$. 
Therefore, $\mathop{\mathrm{{\rm Var}}}(\bar{\eta}_n^{noise,0}(z_0)) \asymp \frac{(n\gamma)^{1/\alpha}}{n}$. [**Bound the remaining term ([\[eq:le:bias_variance:3\]](#eq:le:bias_variance:3){reference-type="ref" reference="eq:le:bias_variance:3"})**]{.ul} Recall $$\begin{aligned} \bar{f}_n - f^\ast = & \bar{\eta}_n = \bar{\eta}_n^{bias,0} + \bar{\eta}_n^{noise,0} + \big(\bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\big) + \big(\bar{\eta}_n^{noise} - \bar{\eta}_n^{noise,0}\big). \end{aligned}$$ To prove ([\[eq:le:bias_variance:3\]](#eq:le:bias_variance:3){reference-type="ref" reference="eq:le:bias_variance:3"}) in Theorem [Theorem 1](#le:bias_variance){reference-type="ref" reference="le:bias_variance"}, we bound $\|\bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\|_\infty$ and $\|\bar{\eta}_n^{noise} - \bar{\eta}_n^{noise,0}\|_\infty$ separately in Section [8.2](#app:le:rem_bias:con){reference-type="ref" reference="app:le:rem_bias:con"} and Section [8.3](#app:le:rem_var:con){reference-type="ref" reference="app:le:rem_var:con"}. ## Bound the bias remainder in constant step case {#app:le:rem_bias:con} Recall from ([\[eq:bias:rem\]](#eq:bias:rem){reference-type="ref" reference="eq:bias:rem"}) that the bias remainder satisfies the recursion $$\begin{aligned} \eta_n^{bias} - \eta_n^{bias,0} = & (I -\gamma K_{X_n}\otimes K_{X_n})(\eta_{n-1}^{bias} - \eta_{n-1}^{bias,0}) + \gamma (\Sigma - K_{X_n}\otimes K_{X_n})\eta_{n-1}^{bias,0}. \end{aligned}$$ Our goal is to bound $\|\bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\|_{\infty}$. Letting $\beta_{n} = \eta_n^{bias} - \eta_n^{bias,0}$ with $\beta_{0} = 0$, we have $$\beta_{n} = (I -\gamma K_{X_n}\otimes K_{X_n})\beta_{n-1} + \gamma (\Sigma -K_{X_n}\otimes K_{X_n}) \eta_{n-1}^{bias,0}$$ with $\eta_n^{bias,0} = (I-\gamma \Sigma)\eta_{n-1}^{bias,0}$ and $\eta_0^{bias,0}=f^\ast$. We first express $\beta_n$ in an explicit form as follows. 
Set $S_{n} = I-\gamma K_{X_n}\otimes K_{X_n}$, $T_n=\Sigma- K_{X_n}\otimes K_{X_n}$ and $T= I - \gamma \Sigma$; then $\beta_{n} = S_n \beta_{n-1} + \gamma T_n \eta_{n-1}^{bias,0}.$ We can further represent $\beta_{n}$ as $$\beta_{n} = \gamma (T_n \eta_{n-1}^{bias,0} + S_n T_{n-1} \eta_{n-2}^{bias,0}+ \cdots + S_n S_{n-1}\dots S_{2}T_{1} \eta_0^{bias,0});$$ on the other hand, $\eta_{i}^{bias,0} = (I-\gamma \Sigma)^i f^\ast$. Therefore, for any $1\leq i\leq n$, we have $$\label{eq:beta} \beta_{i} = \gamma (T_{i} T^{i-1} + S_{i} T_{i-1}T^{i-2}+ \cdots + S_{i} S_{i-1}\cdots S_{2} T_1)\cdot f^\ast \equiv \gamma U_{i}.$$ Note that $\|\bar{\beta}_{n}\|_{\infty}\leq \|\Sigma^a \bar{\beta}_{n}\|_\mathbb{H}$. In [Lemma 6](#app:le:bias_rem:con){reference-type="ref" reference="app:le:bias_rem:con"} below, we bound $\|\bar{\beta}_{n}\|_{\infty}$ through $\mathbb{E}\langle \bar{\beta}_{n}, \Sigma^{2a}\bar{\beta}_{n}\rangle$, and show that $\|\bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\|^2_{\infty} = o(\|\bar{\eta}_n^{bias,0}\|^2_{\infty})$ with high probability. **Lemma 6**. *Suppose the step size $\gamma(n) = \gamma$ with $0< \gamma < \mu_1^{-1}$. Then $$\mathbb{P}\Big(\|\bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\|^2_{\infty} \geq \gamma^{1/2} (n\gamma)^{-1}\Big) \leq \gamma^{1/2}.$$* To simplify the notation, we write $\langle \cdot, \cdot \rangle$ for $\langle \cdot, \cdot \rangle_\mathbb{H}$. 
For $\bar{\beta}_{n}= \bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}$, by ([\[eq:beta\]](#eq:beta){reference-type="ref" reference="eq:beta"}), we have $$\mathbb{E}\langle \bar{\beta}_{n}, \Sigma^{2a}\bar{\beta}_{n}\rangle = \mathbb{E}\langle \frac{1}{n}\sum_{i=1}^n\gamma U_{i}, \Sigma^{2a}\frac{1}{n}\sum_{i=1}^n \gamma U_{i}\rangle = \frac{\gamma^2}{n^2} \sum_{i=1}^n \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{i} \rangle + \frac{2\gamma^2}{n^2}\sum_{i<j} \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j} \rangle.$$ That is, we split $\mathbb{E}\langle \bar{\beta}_{n}, \Sigma^{2a}\bar{\beta}_{n}\rangle$ into two parts, and will bound each part separately. We first bound $\frac{\gamma^2}{n^2} \sum_{i=1}^n \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{i} \rangle$. Denote $H_{i\ell} = S_iS_{i-1}\cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast$ with $H_{ii}= T_i T^{i-1}f^\ast$, then $U_{i} = H_{ii}+ H_{i(i-1)}+ \cdots + H_{i1}$. $$\begin{aligned} \mathbb{E}\langle U_{i}, \Sigma^{2a} U_{i} \rangle = & \mathbb{E}\langle H_{ii}+ H_{i(i-1)}+ \cdots + H_{i1} , \Sigma^{2a}(H_{ii}+ H_{i(i-1)}+ \cdots + H_{i1}) \rangle\\ = & \sum_{j,k=1}^i \mathbb{E}\langle H_{ij}, \Sigma^{2a}H_{ik}\rangle = \sum_{j=1}^i \mathbb{E}\langle H_{ij}, \Sigma^{2a} H_{ij}\rangle + \sum_{j\neq k} \mathbb{E}\langle H_{ij}, \Sigma^{2a}H_{ik}\rangle.\end{aligned}$$ If $j\neq k$, suppose $i\geq j >k \geq 1$, then $$\begin{aligned} &\mathbb{E}\langle H_{ij}, \Sigma^{2a} H_{ik} \rangle = \mathbb{E}\langle S_i S_{i-1}\cdots S_{j+1}T_j T^{j-1}f^\ast, \Sigma^{2a} S_i S_{i-1}\cdots S_{k+1}T_kT^{k-1}f^\ast\rangle \\ = &\mathbb{E}\Big[ \mathbb{E}\big[\langle S_iS_{i-1}\cdots S_{j+1}T_jT^{j-1}f^\ast, \Sigma^{2a}S_iS_{i-1}\cdots S_{k+1}T_kT^{k-1}f^\ast\rangle \mid X_i,\dots, X_j\big]\Big]\\ = & \mathbb{E}\big\langle S_iS_{i-1}\cdots S_{j+1}T_jT^{j-1}f^\ast, \Sigma^{2a}S_iS_{i-1}\cdots S_j \mathbb{E}(S_{j-1}\cdots S_{k+1}T_k)T^{k-1}f^\ast\big\rangle = 0, \end{aligned}$$ where the last step is due to $\mathbb{E}(S_{j-1}\cdots S_{k+1}T_k)= 
\mathbb{E}S_{j-1}\cdots \mathbb{E}S_{k+1}\mathbb{E}T_k = 0$ using the fact that $\mathbb{E}T_k =0$. Therefore, we have $\mathbb{E}\langle U_{i}, \Sigma^{2a} U_{i} \rangle = \sum_{j=1}^i \mathbb{E}\langle H_{ij}, \Sigma^{2a} H_{ij}\rangle$. Furthermore, $$\begin{aligned} & \mathbb{E}\langle H_{ij}, \Sigma^{2a}H_{ij} \rangle = \mathbb{E}\langle S_iS_{i-1}\cdots S_{j+1}T_jT^{j-1}f^\ast, \Sigma^{2a}S_iS_{i-1}\cdots S_{j+1}T_jT^{j-1}f^\ast \rangle \\ = & \langle f^\ast, \mathbb{E}(T^{j-1}T_jS_{j+1}\cdots S_i\Sigma^{2a}S_iS_{i-1}\cdots S_{j+1}T_jT^{j-1})f^\ast\rangle = \langle f^\ast, \Delta f^\ast\rangle . \end{aligned}$$ Note that $\Delta = \mathbb{E}\big( T^{j-1}T_jS_{j+1}\cdots \mathbb{E}(S_i\Sigma^{2a}S_i)S_{i-1}\cdots S_{j+1}T_jT^{j-1} \big)$, with $$\begin{aligned} \mathbb{E}(S_i \Sigma^{2a} S_i) = & \mathbb{E}\big((I-\gamma K_{X_i}\otimes K_{X_i})\Sigma^{2a} (I-\gamma K_{X_i}\otimes K_{X_i})\big)\nonumber\\ = & \Sigma^{2a} - \gamma (\Sigma \cdot \Sigma^{2a} + \Sigma^{2a}\cdot \Sigma - \gamma S\Sigma^{2a}) = \Sigma^{2a}-\gamma G\Sigma^{2a}, \label{eq:s_sigma_s}\end{aligned}$$ where $G\Sigma^{2a}= \Sigma \cdot \Sigma^{2a} + \Sigma^{2a}\cdot\Sigma - \gamma S\Sigma^{2a}$ with $S\Sigma^{2a}=\mathbb{E}\big((K_{X}\otimes K_{X})\Sigma^{2a}(K_{X}\otimes K_{X})\big)$. More abstractly, for any operator $A$, $\mathbb{E}S_i A S_i =A - \gamma (\Sigma A + A \Sigma - \gamma SA) = A - \gamma GA = (I-\gamma G)A,$ where $GA= \Sigma A + A \Sigma - \gamma SA$. 
Then $\Delta$ can be written as $$\begin{aligned} \Delta = & \mathbb{E}\big(T^{j-1} T_j S_{j+1}\cdots S_{i-1} \big((I-\gamma G) \Sigma^{2a}\big) S_{i-1} \cdots S_{j+1}T_j T^{j-1}\big) = \mathbb{E}\big(T^{j-1}T_j(I-\gamma G)^{i-j}\Sigma^{2a}T_j T^{j-1}\big),\end{aligned}$$ where the second equality iterates the same conditioning over $X_{i-1},\dots,X_{j+1}$. Furthermore, for any $A$, $$\begin{aligned} \mathbb{E}T_j A T_j = & \mathbb{E}( K_{X_j}\otimes K_{X_j} - \Sigma)A ( K_{X_j}\otimes K_{X_j} - \Sigma) = \mathbb{E}(K_{X_j}\otimes K_{X_j})A(K_{X_j}\otimes K_{X_j}) - \Sigma A \Sigma \label{eq:T1}\\ \leq & 2 \mathbb{E}(K_{X_j}\otimes K_{X_j})A(K_{X_j}\otimes K_{X_j}) = 2 SA.\nonumber\end{aligned}$$ Therefore, $\Delta \prec 2 T^{j-1}S(I-\gamma G)^{i-j}\Sigma^{2a}T^{j-1}$, and hence, by ([\[eq:T1\]](#eq:T1){reference-type="ref" reference="eq:T1"}), we have $\mathbb{E}\langle H_{ij}, \Sigma^{2a} H_{ij} \rangle \leq 2 \langle f^\ast, T^{j-1}S(I-\gamma G)^{i-j}\Sigma^{2a}T^{j-1}f^\ast\rangle.$ Then we have $$\begin{aligned} \sum_{i=1}^n \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{i} \rangle = \sum_{i=1}^n \sum_{j=1}^i \mathbb{E}\langle H_{ij}, \Sigma^{2a} H_{ij}\rangle \leq & 2 \langle f^\ast, \sum_{i=1}^n \sum_{j=1}^i T^{j-1}S(I-\gamma G)^{i-j}\Sigma^{2a}T^{j-1}f^\ast \rangle. \end{aligned}$$ Denote $P=\sum_{i=1}^n \sum_{j=1}^i T^{j-1}S(I-\gamma G)^{i-j}\Sigma^{2a}T^{j-1}$, then $$\begin{aligned} P = & \sum_{i=1}^n \sum_{j=1}^i T^{j-1}S(I-\gamma G)^{i-j} \Sigma^{2a} T^{j-1} = \sum_{j=1}^n T^{j-1}S \sum_{i=j}^n (I-\gamma G)^{i-j}\Sigma^{2a} T^{j-1} \leq n\sum_{j=1}^n T^{j-1}S\Sigma^{2a}T^{j-1}.\end{aligned}$$ Recalling $S\Sigma^{2a} = \mathbb{E}\big((K_X\otimes K_X)\Sigma^{2a}(K_X\otimes K_X)\big)$, we can bound $S\Sigma^{2a}\leq c_k \Sigma$ as follows. 
$$\begin{aligned} \langle (S\Sigma^{2a})f, f \rangle = & \langle \mathbb{E}\big((K_X\otimes K_X)\Sigma^{2a}(K_X\otimes K_X)\big)f, f \rangle = \langle \mathbb{E}f(X)\Sigma^{2a} K_X(X)K_X, f \rangle \\ = & \mathbb{E}f^2(X) (\Sigma^{2a}K_X)(X) \leq c_k \mathbb{E}f^2(X) = c_k \langle \Sigma f, f\rangle,\end{aligned}$$ where the last inequality is due to the fact that $$(\Sigma^{2a}K_X)(X) = \sum_{\nu=1}^\infty \phi_\nu(X)\phi_\nu(X)\mu_\nu \mu_\nu^{2a} = \sum_{\nu=1}^\infty \mu_\nu^{2a+1}\phi_{\nu}^2(X) < \infty.$$ Accordingly, $P \leq n c_k \sum_{j=1}^n T^{2(j-1)}\Sigma \leq n c_k (I-T^2)^{-1}\Sigma \leq n c_k \gamma^{-1}I$; and $$\frac{\gamma^2}{n^2}\sum_{i=1}^n \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{i} \rangle \leq \frac{2\gamma^2}{n^2}\langle f^\ast, P f^\ast \rangle \lesssim \frac{\gamma}{n}\|f^\ast\|^2_{\mathbb{H}}.$$ Next, we analyze $\mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j}\rangle$, with $U_i$ as in ([\[eq:beta\]](#eq:beta){reference-type="ref" reference="eq:beta"}), for $1\leq i < j \leq n$. Note that $$\begin{aligned} \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j}\rangle = & \mathbb{E}\langle H_{ii}+ \cdots + H_{i1}, \Sigma^{2a}(H_{jj} + \cdots + H_{j1})\rangle = \sum_{\ell=1}^i \sum_{k=1}^j \mathbb{E}\langle H_{i\ell}, \Sigma^{2a}H_{jk}\rangle.\end{aligned}$$ We first consider $\ell \neq k$; assume $\ell > k$ and note that $i < j$. Then $$\begin{aligned} & \mathbb{E}\langle H_{i\ell}, \Sigma^{2a} H_{jk} \rangle = \mathbb{E}\langle S_iS_{i-1}\cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast, \Sigma^{2a}S_jS_{j-1}\cdots S_{k+1}T_kT^{k-1}f^\ast\rangle \nonumber \\ = & \mathbb{E}\langle S_i S_{i-1}\cdots S_{\ell+1}T_{\ell}T^{\ell-1}f^\ast, \Sigma^{2a} S_j \cdots S_{\ell} \mathbb{E}(S_{\ell -1}\cdots S_{k+1}T_k)T^{k-1}f^\ast \rangle = 0.\label{eq:H:con}\end{aligned}$$ Similarly, for $\ell <k$, $\mathbb{E}\langle H_{i\ell}, \Sigma^{2a}H_{jk} \rangle = \mathbb{E}\big[\mathbb{E}[\langle H_{i\ell}, \Sigma^{2a}H_{jk} \rangle \mid X_j, \dots, X_k]\big] = 0.$ Therefore, $$\begin{aligned} \mathbb{E}\langle 
U_{i}, \Sigma^{2a} U_{j} \rangle = & \sum_{\ell=1}^i \mathbb{E}\langle H_{i\ell}, \Sigma^{2a}H_{j\ell} \rangle = \sum_{\ell=1}^i \mathbb{E}\langle S_i\cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast, \Sigma^{2a}S_j\cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast \rangle\\ = &\sum_{\ell=1}^i \mathbb{E}\langle f^\ast, T^{\ell-1}T_\ell S_{\ell+1}\cdots S_i \Sigma^{2a}S_j \cdots S_i S_{i-1}\cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast \rangle \\ = & \sum_{\ell=1}^i \mathbb{E}\langle f^\ast, T^{\ell-1}T_\ell S_{\ell+1}\cdots S_i \Sigma^{2a} (I-\gamma \Sigma)^{j-i}S_i \cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast \rangle.\end{aligned}$$ Hence, $\sum_{i<j} \mathbb{E}\langle U_{i}, \Sigma^{2a} U_{j} \rangle = \sum_{i=1}^{n-1} \sum_{\ell=1}^i \mathbb{E}\langle f^\ast, T^{\ell-1}T_\ell S_{\ell+1} \cdots S_i \Sigma^{2a}(\sum_{j=i+1}^n (I -\gamma \Sigma)^{j-i})S_i \cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast\rangle.$ Since $\Sigma^{2a} \sum_{j=i+1}^n (I -\gamma \Sigma)^{j-i} = \Sigma^{2a} \sum_{\ell=1}^{n-i}(I - \gamma \Sigma)^\ell \leq \Sigma^{2a}\sum_{\ell=1}^{n-1}(I-\gamma \Sigma)^{\ell} \equiv A,$ we have $$\begin{aligned} & \sum_{i<j} \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j} \rangle \leq \sum_{i=1}^{n-1}\sum_{\ell=1}^i \mathbb{E}\langle f^\ast, T^{\ell-1}T_\ell S_{\ell+1}\cdots S_i A S_i \cdots S_{\ell+1}T_\ell T^{\ell-1}f^\ast \rangle \nonumber\\ = & \sum_{\ell=1}^{n-1}\sum_{i=\ell}^{n-1} \langle f^\ast, T^{\ell-1}\mathbb{E}(T_\ell S_{\ell+1}\cdots S_i A S_i \cdots S_{\ell+1}T_\ell)T^{\ell-1}f^\ast \rangle \leq \sum_{\ell=1}^{n-1} \langle f^\ast, T^{\ell-1}S (\sum_{i=\ell}^{n-1}(I-\gamma G)^{i-\ell})AT^{\ell-1}f^\ast \rangle \nonumber\\ \leq & \sum_{\ell=1}^{n-1} \langle f^\ast, T^{\ell-1}BAT^{\ell-1}f^\ast\rangle, \label{eq: bias_ij}\end{aligned}$$ where $B=S\sum_{i=\ell}^{n-1}(I-\gamma G)^{i-\ell}$, and $BA= S\big(\sum_{i=0}^{n-1}(I-\gamma G)^i\big)A \leq nSA = n \mathbb{E}\big((K_X \otimes K_X)A(K_X \otimes K_X)\big) \leq 
n\gamma^{-1}\mathbb{E}\big((K_X \otimes K_X) \Sigma^{-1+2a}(K_X \otimes K_X)\big) \leq n \gamma^{-1} c_k \Sigma,$ where the last step is due to the fact that $$\begin{aligned} & \langle \mathbb{E}\big((K_X \otimes K_X) \Sigma^{-1+2a}(K_X \otimes K_X)\big)f,f \rangle = \mathbb{E}\langle (K_X \otimes K_X) \Sigma^{-1+2a}K_X f(X),f \rangle \\ = &\mathbb{E}f(X)\langle K_X\Sigma^{-1+2a}K_X(X), f \rangle = \mathbb{E}f^2(X)\langle \Sigma^{-1+2a}K_X, K_X \rangle \leq C \langle \Sigma f, f \rangle\end{aligned}$$ with $\langle \Sigma^{-1+2a}K_X, K_X \rangle = \sum_{\nu=1}^\infty \phi_\nu(X)\mu_\nu^{2a}\phi_\nu(X)\leq c_\phi^2 \sum_{\nu=1}^\infty \nu^{-2a\alpha}< \infty$ for $2a\alpha>1$. By equation ([\[eq: bias_ij\]](#eq: bias_ij){reference-type="ref" reference="eq: bias_ij"}), we have $\sum_{i<j} \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j} \rangle \leq \sum_{\ell=1}^{n-1} \langle f^\ast, T^{\ell-1}BAT^{\ell-1}f^\ast \rangle$. Recall $T= I - \gamma \Sigma$. For notational simplicity, let $C=BA$, then $TCT$ can be written as $(I-\gamma \Sigma) C (I-\gamma \Sigma) = C - \gamma \Sigma C - \gamma C \Sigma + \gamma^2 \Sigma C \Sigma = C- \gamma \Theta C = (I-\gamma \Theta)C,$ where $\Theta$ is an operator such that for any $C$, $\Theta C = \Sigma C + C \Sigma - \gamma \Sigma C \Sigma$. Replacing $C$ with $BA$, we have $T^{\ell-1}BAT^{\ell-1} = (I-\gamma \Theta)^{\ell-1}BA$, and $$\sum_{i<j} \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j} \rangle \leq \langle f^\ast, \sum_{\ell=1}^{n-1} (I-\gamma \Theta)^{\ell -1} BA f^\ast \rangle.$$ Since $\sum_{\ell=1}^{n-1} (I-\gamma \Theta)^{\ell -1} \leq \gamma^{-1}\Theta^{-1}$, we further need to bound $\Theta^{-1}$. Letting $C=\Theta^{-1}$ gives $I = \Sigma \Theta^{-1} + \Theta^{-1}\Sigma - \gamma \Sigma \Theta^{-1}\Sigma$. Note that $\Sigma \Theta^{-1}\Sigma \leq \mathrm{tr}(\Sigma) \Theta^{-1}\Sigma \leq c \Theta^{-1}\Sigma$, where $c$ is a constant. 
Then $$\begin{aligned} I \succeq & \Sigma \Theta^{-1} + \Theta^{-1}\Sigma - c\gamma\Theta^{-1}\Sigma = \Sigma \Theta^{-1} + (1-c\gamma) \Theta^{-1}\Sigma = (\Sigma \otimes I + (1-c\gamma)I \otimes \Sigma) \Theta^{-1}.\end{aligned}$$ Therefore, $\Theta^{-1} \preceq (\Sigma \otimes I + (1-c\gamma)I \otimes \Sigma)^{-1} I$, and $$\begin{aligned} \sum_{\ell=1}^{n-1} (I -\gamma \Theta)^{\ell -1} BA \preceq & \gamma^{-1}\Theta^{-1}n\gamma^{-1}\Sigma \preceq \frac{1}{1+(1-c\gamma)} n\gamma^{-2} I. \end{aligned}$$ Accordingly, we have $\frac{\gamma^2}{n^2}\sum_{i<j} \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j} \rangle \lesssim \frac{\gamma}{n\gamma}\|f^\ast\|^2_{\mathbb{H}}$. Therefore, $$\begin{aligned} \mathbb{E}\langle \bar{\beta}_{n}, \Sigma^{2a}\bar{\beta}_{n} \rangle = & \frac{\gamma^2}{n^2} \sum_{i=1}^n \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{i} \rangle + \frac{2\gamma^2}{n^2}\sum_{i<j} \mathbb{E}\langle U_{i}, \Sigma^{2a}U_{j} \rangle \lesssim \frac{\gamma}{n\gamma} \|f^\ast\|^2_{\mathbb{H}}.\end{aligned}$$ Then by Markov's inequality, we have $$\mathbb{P}\Big( \|\Sigma^a \big(\bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\big)\|^2_\mathbb{H} > \gamma^{-1/2}\mathbb{E}\|\Sigma^a \bar{\beta}_{n}\|^2_\mathbb{H}\Big) \leq \gamma^{1/2} .$$ That is, $\|\bar{\eta}_n^{bias} - \bar{\eta}_n^{bias,0}\|^2_{\infty} \lesssim \frac{\gamma^{1/2}}{n\gamma }$ with probability at least $1-\gamma^{1/2}$. 
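The bias bounds of this section rest on two elementary scalar estimates: $\sup_{0\leq x\leq 1}\sum_{k=1}^n(1-x)^k x^{1/2}\leq\sqrt n$ (used for the leading bias) and, eigenvalue-wise, $\mu\sum_{j=1}^n(1-\gamma\mu)^{2(j-1)}\leq\gamma^{-1}$ (the spectral content of $P \leq n c_k \gamma^{-1} I$). A minimal numerical sketch checking both; the grids and the test values of $\mu,\gamma,n$ are arbitrary assumptions, not part of the proof:

```python
import math

# Check 1: sup_{0<=x<=1} sum_{k=1}^n (1-x)^k * sqrt(x) <= sqrt(n).
def bias_kernel_sup(n, grid=20000):
    best = 0.0
    for i in range(1, grid + 1):
        x = i / grid
        r = 1.0 - x
        s = r * (1.0 - r ** n) / x  # closed form of sum_{k=1}^n (1-x)^k
        best = max(best, s * math.sqrt(x))
    return best

# Check 2: mu * sum_{j=1}^n (1-gamma*mu)^{2(j-1)} <= 1/gamma, eigenvalue-wise.
def spectral_sum(mu, gamma, n):
    return mu * sum((1.0 - gamma * mu) ** (2 * (j - 1)) for j in range(1, n + 1))

sup_checks = {n: bias_kernel_sup(n) for n in (10, 100, 1000)}
gamma = 0.1
spec_checks = [spectral_sum(mu, gamma, 500) for mu in (0.01, 0.1, 1.0, 5.0)]
```

In both cases the supremum is approached (near $x\asymp 1/n$ in the first, for small $\mu$ in the second) but never exceeded, consistent with the constants absorbed into $\lesssim$ above.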
## Proof of the sup-norm bound of the noise remainder in the constant step case {#app:le:rem_var:con} Recall that the noise remainder satisfies the recursion $$\eta_n^{noise} - \eta_n^{noise,0} = (I-\gamma K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{noise}- \eta_{n-1}^{noise,0}) + \gamma(\Sigma-K_{X_n}\otimes K_{X_n})\eta_{n-1}^{noise,0}.$$ Following the recursion decomposition in Section [6.1](#subsec:sketch1){reference-type="ref" reference="subsec:sketch1"}, we can split $\eta_n^{noise} - \eta_n^{noise,0}$ into higher order expansions as $$\eta_n^{noise} = \eta_n^{noise,0} + \eta_n^{noise,1} + \eta_n^{noise,2} +\cdots + \eta_n^{noise,r} + \textrm{Remainder},$$ where $\eta_n^{noise,d}$ satisfies $\eta_n^{noise,d} = (I-\gamma \Sigma)\eta_{n-1}^{noise,d} + \gamma \mathcal{E}_n^{d}$ and $\mathcal{E}_n^{d} = (\Sigma - K_{X_n}\otimes K_{X_n})\eta_{n-1}^{noise,d-1}$ for $1\leq d\leq r$ and $r\geq 1$. The remainder term satisfies the recursion $$\eta_n^{noise} - \sum_{d=0}^r \eta_n^{noise,d} = (I -\gamma K_{X_n}\otimes K_{X_n})(\eta_{n-1}^{noise} - \sum_{d=0}^r \eta_{n-1}^{noise,d}) + \gamma \mathcal{E}_n^{r+1}.$$ The following lemma (see [Lemma 7](#app:le:noise_rem:con){reference-type="ref" reference="app:le:noise_rem:con"}) demonstrates that the high-order expansion terms $\bar{\eta}_n^{noise,d}$ (for $d\geq 1$) decrease as the value of $d$ increases. In particular, we first characterize the behavior of $\|\bar{\eta}_n^{noise,1}\|_\infty$ by representing it as a weighted empirical process and establish the rate $\|\bar{\eta}_n^{noise,1}\|_\infty=o(\|\bar{\eta}_n^{noise,0}\|_\infty)$ with high probability. Next, we show that $\|\bar{\eta}_n^{noise,d+1}\|_\infty= o(\|\bar{\eta}_n^{noise,d}\|_\infty)$ for $d\geq 1$ using mathematical induction. 
Finally, we bound $\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}$ through its $\mathbb{H}$-norm based on the property that $\|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|_\infty \leq \|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|_\mathbb{H}$. **Lemma 7**. *Suppose the step size $\gamma(n) = \gamma$ with $0< \gamma < n^{-\frac{2}{2+3\alpha}}$. Then* 1. *$$\mathbb{P}\Big(\|\bar{\eta}_n^{noise,1}\|_\infty > \sqrt{\gamma^{1/2}(n\gamma)^{1/\alpha}n^{-1}\log n} \Big) \leq 2\gamma^{1/2}.$$* 2. *$$\begin{aligned} \mathbb{P}\Big( \|\bar{\eta}_n^{noise,d}\|^2_{\infty} \geq \gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big) \leq (n\gamma)^{1/\alpha + 2 \varepsilon}\gamma^{d-1/4}.\end{aligned}$$ Furthermore, for $d\geq 2$ and $0<\gamma < n^{-\frac{2}{2+3\alpha}}$, we have $(n\gamma)^{1/\alpha + 2 \varepsilon}\gamma^{d-1/4}\leq \gamma^{1/4}$.* 3. *$$\mathbb{P}\Big( \|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|^2_{\infty} \geq \gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big) \leq n^{-1}$$ for $r$ large enough.* *Furthermore, combining (a)--(c), we have $$\mathbb{P}\Big( \|\bar{\eta}_n^{noise} - \bar{\eta}_n^{noise,0}\|^2_{\infty} \geq C \gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big)\leq \gamma^{1/4},$$ where $C$ is a constant.* [**Proof of Lemma [Lemma 7](#app:le:noise_rem:con){reference-type="ref" reference="app:le:noise_rem:con"} (a) by analyzing $\|\bar{\eta}_n^{noise,1}\|_{\infty}$.**]{.ul} First, we calculate the explicit expression of $\eta_n^{noise,1}$. Let $T= I-\gamma\Sigma$ and $T_n = \Sigma - K_{X_n}\otimes K_{X_n}$, then $\eta_n^{noise,1}= T \eta_{n-1}^{noise,1} + \gamma T_n \eta_{n-1}^{noise,0}$ with $\eta_0^{noise,1}=0$. 
Therefore, $$\begin{aligned} \eta_n^{noise,1} = &\gamma \sum_{i=1}^{n-1} T^{n-i-1}T_{i+1}\eta_i^{noise,0} = \gamma^2 \sum_{i=1}^{n-1}\sum_{j=1}^i \epsilon_j T^{n-i-1}T_{i+1}T^{i-j}K_{X_j},\end{aligned}$$ where the last step is by plugging in $\eta_i^{noise,0} = \gamma \sum_{j=1}^{i} T^{i-j}\epsilon_jK_{X_j}$ in ([\[eq:noise:lead:exp\]](#eq:noise:lead:exp){reference-type="ref" reference="eq:noise:lead:exp"}) with $\gamma= \gamma(n)$. Accordingly, $$\begin{aligned} \bar{\eta}_n^{noise,1} = & \frac{\gamma^2}{n} \sum_{\ell=1}^{n-1}\sum_{i=1}^\ell \sum_{j=1}^i \epsilon_j T^{\ell-i}T_{i+1}T^{i-j}K_{X_j} = \frac{\gamma^2}{n} \sum_{j=1}^{n-1}\big(\sum_{i=j}^{n-1}(\sum_{\ell=i}^{n-1}T^{\ell-i})T_{i+1}T^{i-j}K_{X_j}\big)\epsilon_j. \label{eq:eta_1:con}\end{aligned}$$ Let $g_{j} = \sum_{i=j}^{n-1}(\sum_{\ell=i}^{n-1}T^{\ell-i})T_{i+1}T^{i-j}K_{X_j}$, where the randomness of $g_{j}$ involves $X_j,X_{j+1}, \dots, X_n$. Then $\bar{\eta}_n^{noise,1}(\cdot) = \frac{\gamma^2}{n} \sum_{j=1}^{n-1} \epsilon_j \cdot g_{j}(\cdot)$, which is a Gaussian process conditional on $(X_1,\dots, X_n)$. We can further express $g_{j}(\cdot)$ as a function of the eigenvalues and eigenfunctions that follows $$\label{eq:gj:con} g_{j}(\cdot) = \gamma^{-1} \sum_{\nu,k=1}^\infty \mu_\nu \sum_{i=j}^{n-1}(1-\gamma \mu_\nu)^{i-j}(1-(1-\gamma \mu_k)^{n-i})\phi_{i\nu k}\phi_\nu(X_j)\phi_k(\cdot)$$ with $\phi_{i\nu k} = \phi_\nu(X_{i+1})\phi_k(X_{i+1})- \delta_{\nu k}$; we refer to [@liu2023supp] for the proof. This expression facilitates the downstream analysis of $\bar{\eta}_n^{noise, 1}$. Denote $a_{ij\nu}=(1-\gamma \mu_\nu)^{i-j}$ and $b_{ik}= 1-(1-\gamma \mu_k)^{n-i}$. Then $g_j$ can be simplified as $g_{j}= \gamma^{-1}\sum_{\nu, k=1}^\infty \mu_\nu \big(\sum_{i=j}^{n-1}a_{ij\nu}b_{ik}\phi_{i\nu k} \big)\phi_\nu(X_j)\phi_k$. 
We are ready to prove that $\|\bar{\eta}_n^{noise,1} \|_{\infty} \leq \sqrt{\gamma^{1/2}(n\gamma)^{1/\alpha}n^{-1}\log n}$ with high probability, where $\bar{\eta}_n^{noise,1} (\cdot) = \frac{\gamma^2}{n}\sum_{j=1}^{n-1} \epsilon_j\cdot g_{j}(\cdot)$. It involves two steps: (1) for any fixed $s$, we see that $\bar{\eta}_n^{noise,1} (s) = \frac{\gamma^2}{n}\sum_{j=1}^{n-1} \epsilon_j\cdot g_{j}(s)$ is a weighted Gaussian random variable with variance $\frac{\gamma^4}{n^2}\sum_{j=1}^{n-1}g^2_{j}(s)$ conditional on $X_{1:n}= (X_1, \dots, X_n)$. Therefore, we first bound $\bar{\eta}_n^{noise,1} (s)$ with an exponentially decaying probability by characterizing $\sum_{j=1}^{n-1}g^2_{j}(s)$; (2) we then bridge $\bar{\eta}_n^{noise,1} (s)$ to $\|\bar{\eta}_n^{noise,1}\|_\infty$. We illustrate the details as follows. Conditional on $X_{1:n}$, $\bar{\eta}_n^{noise,1} (s) = \frac{\gamma^2}{n}\sum_{j=1}^{n-1} \epsilon_j\cdot g_{j}(s)$ is a weighted Gaussian random variable; by Hoeffding's inequality, $$\label{eq:noise1:con:original} \mathbb{P}\Big( \frac{\gamma^2}{n}|\sum_{j=1}^{n-1} \epsilon_j\cdot g_{j}(s)| > u \mid X_{1:n} \Big) \leq \exp\Big(-\frac{u^2 n^2}{\gamma^4 \sum_{j=1}^{n-1}g_j^2(s)}\Big).$$ We then bound $\sum_{j=1}^{n-1}\mathbb{E}g_j^2(s)$. We separate $\sum_{j=1}^{n-1}g^2_j(s)$ into two parts as follows: $$\begin{aligned} & \sum_{j=1}^{n-1}g^2_j(s) \\ \leq & \gamma^{-2} \sum_{\nu, \nu'=1}^\infty \sum_{j=1}^{n-1} \mu_\nu \mu_{\nu'}(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}) \sum_{i,\ell=j}^{n-1}a_{ij\nu}a_{\ell j\nu'}\sum_{k,k'=1}^\infty b_{ik}b_{\ell k'}\phi_{i\nu k}\phi_{\ell\nu' k'}\phi_k(s)\phi_{k'}(s)\\ & + \gamma^{-2}\sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1}\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu}\sum_{k,k'=1}^\infty b_{ik}b_{\ell k'}\phi_{i\nu k}\phi_{\ell \nu k'}\phi_k(s)\phi_{k'}(s)\\ = &\Delta_1 + \Delta_2,\end{aligned}$$ where $\Delta_1$ involves the interaction terms indexed by $\nu, \nu'$ and $\Delta_2$ includes the terms with $\nu=\nu'$. 
Recall $b_{ik}=(1-(1-\gamma\mu_k)^{n-i})$. Then $b_{ik}<(1-(1-\gamma\mu_k)^{n})\equiv b_k$ for $1\leq i \leq n$. For $\Delta_1$, we have $$\begin{aligned} \Delta_1 \leq & \gamma^{-2} \sum_{k,k'=1}^\infty b_{k}b_{k'}\phi_k(s)\phi_{k'}(s) \sum_{\nu, \nu'=1}^\infty \mu_\nu\mu_{\nu'}\sum_{j=1}^{n-1} \big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big)\sum_{i,\ell=j}^{n-1}a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}. \end{aligned}$$ To take the expectation of $\Delta_1$, we first control its random factor: $$\begin{aligned} & \mathbb{E}|\sum_{\nu,\nu'=1}^\infty \mu_\nu\mu_{\nu'}\sum_{j=1}^{n-1} \big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big)\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2 \label{eq:noise:1:delta:con}\\ \leq & \big(\sum_{\nu,\nu'=1}^\infty \mu_\nu^{\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{\frac{1+\varepsilon}{\alpha}}\big) \big(\sum_{\nu, \nu'=1}^\infty \mu_\nu^{2-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{2-\frac{1+\varepsilon}{\alpha}} \mathbb{E}|\sum_{j=1}^{n-1} \big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big) \sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2\big)\nonumber\\ \lesssim & \big(\sum_{\nu=1}^\infty \mu_\nu^{\frac{1+\varepsilon}{\alpha}}\big)^2 \big(\sum_{\nu, \nu'=1}^\infty \mu_\nu^{2-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{2-\frac{1+\varepsilon}{\alpha}} \sum_{j=1}^{n-1}\mathbb{E}\big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big)^2\cdot \mathbb{E}|\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2\big),\nonumber\end{aligned}$$ where the last step is due to the calculation that $$\begin{aligned} & \mathbb{E}|\sum_{j=1}^{n-1} \big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big) \sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2 \\ = & \sum_{j=1}^{n-1} \mathbb{E}\big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big)^2\cdot \mathbb{E}|\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2 \\ & + 2 \sum_{j_1< j_2} \mathbb{E}\big(\phi_\nu(X_{j_1})\phi_{\nu'}(X_{j_1})-\delta_{\nu\nu'}\big) \cdot \mathbb{E}\big( \big(\phi_\nu(X_{j_2})\phi_{\nu'}(X_{j_2})-\delta_{\nu\nu'}\big)\cdot(\sum_{i,\ell=j_1}^{n-1} a_{ij_1\nu}a_{\ell j_1\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'})\\ & \cdot (\sum_{i,\ell=j_2}^{n-1} a_{ij_2\nu}a_{\ell j_2\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}) \big)= \sum_{j=1}^{n-1} \mathbb{E}\big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big)^2\cdot \mathbb{E}|\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2,\end{aligned}$$ with $\mathbb{E}\big(\phi_\nu(X_{j_1})\phi_{\nu'}(X_{j_1})-\delta_{\nu\nu'}\big) = 0$. Note that $$\begin{aligned} & \sum_{j=1}^{n-1}\big(\mathbb{E}\big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big)^2\cdot \mathbb{E}|\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2\big) \leq \sum_{j=1}^{n-1} \mathbb{E}|\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2\\ = & \sum_{j=1}^{n-1}\sum_{i_1,i_2=j}^{n-1} \sum_{\ell_1,\ell_2=j}^{n-1} a_{i_1j\nu}a_{\ell_1 j \nu'}a_{i_2 j \nu}a_{\ell_2 j \nu'} \mathbb{E}(\phi_{i_1\nu k}\phi_{i_2\nu k}\phi_{\ell_1\nu'k'}\phi_{\ell_2 \nu'k'})\\ \overset{(i)}\lesssim & \sum_{j=1}^n \big(\sum_{i=j}^{n-1} a_{ij\nu}^2a_{ij\nu'}^2 + \sum_{i,\ell=j}^{n-1}a_{ij\nu}^2a_{\ell j\nu'}^2 + \sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{ij\nu'}a_{\ell j \nu}a_{\ell j \nu'}\big) \\ \leq & \sum_{j=1}^{n-1}\Big[\big(\sum_{i=j}^{n-1}a_{ij\nu}^2\big)\big(\sum_{i=j}^{n-1}a^2_{ij\nu'}\big) + \big(\sum_{i=j}^{n-1}a_{ij\nu}a_{ij\nu'}\big)^2\Big].\end{aligned}$$ In the $(i)$-step, $\mathbb{E}(\phi_{i_1\nu k}\phi_{i_2\nu k}\phi_{\ell_1\nu'k'}\phi_{\ell_2 \nu'k'})\neq 0$ if and only if the following cases hold: (1) $i_1 = i_2 = \ell_1= \ell_2$; (2) $i_1 = i_2$ and $\ell_1= \ell_2$; (3) $i_1= \ell_1$ and $i_2= \ell_2$. Recall $a_{ij\nu}=(1-\gamma \mu_\nu)^{i-j}$. 
Then we have $$\begin{aligned} \sum_{i=j}^{n-1}a_{ij\nu}a_{ij\nu'} = & \sum_{i=j}^{n-1}[(1-\gamma \mu_\nu)(1-\gamma \mu_{\nu'})]^{i-j} \leq (1-(1-\gamma \mu_\nu)(1-\gamma \mu_{\nu'}))^{-1} \lesssim \gamma^{-1}(\mu_\nu + \mu_{\nu'})^{-1}.\end{aligned}$$ For $\sum_{i=j}^{n-1} a^2_{ij\nu}$, we have $\sum_{i=j}^{n-1} a^2_{ij\nu} = \sum_{i=j}^{n-1} (1-\gamma \mu_\nu)^{2(i-j)} \lesssim \gamma^{-1} \mu_\nu^{-1} .$ Therefore, $$\begin{aligned} & \mathbb{E}|\sum_{\nu,\nu'=1}^\infty \mu_\nu\mu_{\nu'}\sum_{j=1}^{n-1} \big(\phi_\nu(X_j)\phi_{\nu'}(X_j)-\delta_{\nu\nu'}\big)\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu'}\phi_{i\nu k}\phi_{\ell \nu' k'}|^2\\ \lesssim & (\sum_{\nu=1}^\infty \mu_\nu^{\frac{1+\varepsilon}{\alpha}})^2 \sum_{\nu,\nu'=1}^\infty \mu_\nu^{2-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{2-\frac{1+\varepsilon}{\alpha}} \sum_{j=1}^{n-1}\big(\gamma^{-2}(\mu_\nu+ \mu_\nu')^{-2} + \gamma^{-1}\mu_\nu^{-1}\gamma^{-1}\mu_{\nu'}^{-1}\big)\\ \lesssim & n\gamma^{-2}\big(\sum_{\nu,\nu'=1}^\infty \mu_\nu^{1-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{1-\frac{1+\varepsilon}{\alpha}}+\sum_{\nu,\nu'=1}^\infty \mu_\nu^{2-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{2-\frac{1+\varepsilon}{\alpha}}(\mu_\nu + \mu_{\nu'})^{-2} \big) \lesssim n\gamma^{-2}, \end{aligned}$$ with $\varepsilon\to 0$. The final step is due to the fact that $$\begin{aligned} & \sum_{\nu,\nu'=1}^\infty \mu_\nu^{2-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{2-\frac{1+\varepsilon}{\alpha}}(\mu_\nu + \mu_{\nu'})^{-2} \\ =& \sum_{\nu,\nu'=1}^\infty \frac{\mu_\nu \mu_{\nu'}}{(\mu_\nu+\mu_{\nu'})^2}\mu_{\nu}^{1-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{1-\frac{1+\varepsilon}{\alpha}} \leq \sum_{\nu,\nu'=1}^\infty \mu_{\nu}^{1-\frac{1+\varepsilon}{\alpha}}\mu_{\nu'}^{1-\frac{1+\varepsilon}{\alpha}} = (\sum_{\nu=1}^\infty \mu_{\nu}^{1-\frac{1+\varepsilon}{\alpha}})^2 \leq C. 
\end{aligned}$$ Since $b_{k}\leq \min\{1, n\gamma\mu_k\}$, we have $\sum_{k,k' =1}^\infty b_k b_{k'} = (\sum_{k=1}^\infty (1-(1-\gamma \mu_k)^n))^2 \leq (n\gamma)^{\frac{2}{\alpha}}.$ Therefore, we have $$\label{eq:delta_1} \mathbb{E}\Delta_1 \leq \sqrt{\mathbb{E}\Delta_1^2} \leq \sqrt{n} \gamma^{-3} (n\gamma)^{\frac{2}{\alpha}}.$$ For $\Delta_2$, we rewrite $\Delta_2$ as $$\begin{aligned} \Delta_2= & \gamma^{-2}\sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1}\sum_{i,\ell=j}^{n-1} a_{ij\nu}a_{\ell j\nu}\sum_{k,k'=1}^\infty b_{ik}b_{\ell k'}\phi_{i\nu k}\phi_{\ell \nu k'} \nonumber\\ = & \gamma^{-2}\sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1}\sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu}\sum_{k,k'=1}^\infty b_{ik}b_{\ell k'}\phi_{i\nu k}\phi_{\ell \nu k'}\nonumber\\ & \quad + \gamma^{-2}\sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu}\sum_{k,k'=1}^\infty b_{ik}b_{ik'}\phi_{i\nu k}\phi_{i \nu k'} \nonumber\\ = &\Delta_{21} + \Delta_{22}, \label{eq:noise:con:delta2}\end{aligned}$$ where $\Delta_{21}$ collects the terms with $i\neq \ell$ and $\Delta_{22}$ collects the terms with $i=\ell$.
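The effective-dimension bound $\sum_{k} b_k = \sum_k (1-(1-\gamma\mu_k)^n) \lesssim (n\gamma)^{1/\alpha}$ used just above can also be checked numerically. A minimal sketch (illustration only; $\alpha$, $n$, $\gamma$ and the truncation level $K$ below are arbitrary test values, not from the paper), using the split of the sum at $k_0=(n\gamma)^{1/\alpha}$:

```python
# Sanity check (illustration only; alpha, n, gamma, K arbitrary) that with
# polynomially decaying eigenvalues mu_k = k^(-alpha), the sum of
# b_k = 1 - (1 - gamma*mu_k)^n <= min(1, n*gamma*mu_k) is O((n*gamma)^(1/alpha)).

def eff_dim(n, gamma, alpha, K=100000):
    return sum(1.0 - (1.0 - gamma * k ** (-alpha)) ** n for k in range(1, K))

alpha, n, gamma = 2.0, 10000, 0.001
s = eff_dim(n, gamma, alpha)
k0 = (n * gamma) ** (1.0 / alpha)
# head: at most k0 terms, each bounded by 1;
# tail: n*gamma * sum_{k > k0} k^(-alpha) <= n*gamma * k0^(1-alpha)/(alpha-1)
bound = k0 * (1.0 + 1.0 / (alpha - 1.0)) + 1.0
assert 1.0 <= s <= bound
```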
For $\Delta_{21}$, for any small $\varepsilon>0$, we have $$\begin{aligned} \Delta_{21} \leq & 2\gamma^{-2}\sum_{k,k'=1}^\infty b_{k}b_{k'}\sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1} \sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'} \\ = & 2 \gamma^{-2}\sum_{k,k'=1}^\infty b_{k}b_{k'} \sum_{\nu=1}^\infty \mu_\nu^{\frac{1+2\varepsilon}{2\alpha}} \mu_\nu^{2-\frac{1+2\varepsilon}{2\alpha}}\sum_{j=1}^{n-1} \sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}\\ \leq & 2 \gamma^{-2} \sum_{k,k'=1}^\infty b_{k}b_{k'} \sqrt{\sum_{\nu=1}^\infty \mu_\nu^{\frac{1+2\varepsilon}{\alpha}} }\sqrt{ \sum_{\nu=1}^\infty \mu_\nu^{4-\frac{1+2\varepsilon}{\alpha}}\big(\sum_{j=1}^{n-1}\sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}\big)^2}.\end{aligned}$$ To bound the expectation of $\Delta_{21}$, we need to bound $\mathbb{E}|\sum_{j=1}^{n-1}\sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}|^2$.
Note that $$\begin{aligned} & \mathbb{E}|\sum_{j=1}^{n-1}\sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}|^2\\ = & \sum_{j,d=1}^{n-1} \mathbb{E}\big(\sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}\big)\big(\sum_{d\leq i< \ell \leq n-1} a_{id\nu}a_{\ell d\nu}\phi_{i\nu k}\phi_{\ell \nu k'}\big)\\ = & \sum_{j=1}^{n-1} \mathbb{E}|\sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}|^2 \\ & + 2 \sum_{d=1}^{n-1}\sum_{j=1}^{d-1} \mathbb{E}\big(\sum_{d\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu}\phi_{i\nu k}\phi_{\ell \nu k'}\big)\big(\sum_{d\leq i< \ell \leq n-1} a_{id\nu}a_{\ell d\nu} \phi_{i\nu k}\phi_{\ell \nu k'} \big) \\ & + 2 \sum_{d=1}^{n-1}\sum_{j=1}^{d-1} \mathbb{E}\big(\sum_{j\leq i< \ell \leq d-1} a_{ij\nu}a_{\ell j\nu}\phi_{i\nu k}\phi_{\ell \nu k'}\big)\big(\sum_{d\leq i< \ell \leq n-1} a_{id\nu}a_{\ell d\nu} \phi_{i\nu k}\phi_{\ell \nu k'} \big),\end{aligned}$$ where the last term is $0$, since its two factors are independent and the second factor has mean zero.
Then we have $$\begin{aligned} & \sum_{d=1}^{n-1}\sum_{j=1}^{d-1} \mathbb{E}\big(\sum_{d\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}\big)\big(\sum_{d\leq i< \ell \leq n-1} a_{id\nu}a_{\ell d\nu} \phi_{i\nu k}\phi_{\ell \nu k'} \big)\\ = & \sum_{d=1}^{n-1}\sum_{j=1}^{d-1} \sum_{d\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} a_{id\nu}a_{\ell d\nu}\mathbb{E}\phi^2_{i\nu k}\phi^2_{\ell \nu k'} \lesssim \sum_{j<d} \sum_{d\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} a_{id\nu}a_{\ell d\nu}\\ = & \sum_{d=1}^{n-1}\sum_{j=1}^{d-1} \sum_{d\leq i< \ell \leq n-1} (1-\gamma \mu_\nu)^{i-j}(1-\gamma \mu_\nu)^{\ell - j} (1-\gamma \mu_\nu)^{i-d}(1-\gamma \mu_\nu)^{\ell-d} \\ \leq & 2 \sum_{d=1}^{n-1}\sum_{j=1}^{d-1} \big[\sum_{d\leq i < \ell \leq n-1} (1-\gamma \mu_\nu)^{2(i-d)}(1-\gamma \mu_\nu)^{2(\ell-d)} \big](1-\gamma \mu_\nu)^{2(d-j)}\\ \leq & 2\big( \sum_{d=1}^{n-1}\sum_{j=1}^{d-1} (1-\gamma \mu_\nu)^{2(d-j)}\big)\big(\sum_{i=d}^{n-1}(1-\gamma \mu_\nu)^{2(i-d)}\big)\big(\sum_{\ell=d}^{n-1}(1-\gamma \mu_\nu)^{2(\ell-d)}\big) \lesssim n(\gamma \mu_\nu)^{-3}.\end{aligned}$$ Accordingly, by Jensen's inequality, $$\label{eq:delta_21} \mathbb{E}\Delta_{21}\leq \gamma^{-2}\sum_{k,k'=1}^\infty b_k b_{k'} \sqrt{\sum_{\nu=1}^\infty \mu_\nu^{\frac{1+2\varepsilon}{\alpha}} } \cdot \sqrt{\sum_{\nu=1}^\infty \mu_\nu^{4-\frac{1+2\varepsilon}{\alpha}}\mathbb{E}|\sum_{j=1}^{n-1}\sum_{j\leq i< \ell \leq n-1} a_{ij\nu}a_{\ell j\nu} \phi_{i\nu k}\phi_{\ell \nu k'}|^2} \lesssim (n\gamma)^{\frac{2}{\alpha}}\sqrt{n}\gamma^{-\frac{7}{2}}.$$ For $\Delta_{22}$, we have $$\begin{aligned} & \gamma^{-2}\sum_{k,k'=1}^\infty b_k b_{k'} \sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu}\phi_{i\nu k}\phi_{i \nu k'}\\ = & \gamma^{-2} \sum_{k,k'=1}^\infty b_k b_{k'} \sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu} (\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}))\\ + & \gamma^{-2} \sum_{k,k'=1}^\infty b_k b_{k'} \sum_{\nu=1}^\infty
\mu_\nu^2 \sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu} \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'})\\ = & \Delta_{22}^{(1)}+ \Delta_{22}^{(2)}. \end{aligned}$$ We first bound $| \Delta_{22}^{(1)} |$: $$\begin{aligned} | \Delta_{22}^{(1)} | \leq & \gamma^{-2} \sum_{k,k'=1}^\infty b_{k}b_{k'}\sqrt{\sum_{\nu=1}^\infty \mu_\nu^{\frac{1+2\varepsilon}{\alpha}} }\sqrt{ \sum_{\nu=1}^\infty \mu_\nu^{4-\frac{1+2\varepsilon}{\alpha}}\big(\sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu} (\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}))\big)^2}.\end{aligned}$$ Notice that $$\begin{aligned} & \mathbb{E}\big|\sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu} (\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}))\big|^2\\ = & \sum_{i=1}^{n-1} \mathbb{E}\big(\sum_{j=1}^{i} a^2_{ij\nu} (\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}))\big)^2 \\ & + 2\sum_{1\leq i_1< i_2\leq n-1} \mathbb{E}\big[\big( \sum_{j=1}^{i_1} a^2_{i_1j\nu} (\phi_{i_1\nu k}\phi_{i_1 \nu k'}- \mathbb{E}(\phi_{i_1\nu k}\phi_{i_1 \nu k'}))\big) \cdot \big( \sum_{j=1}^{i_2} a^2_{i_2j\nu} (\phi_{i_2\nu k}\phi_{i_2 \nu k'}- \mathbb{E}(\phi_{i_2\nu k}\phi_{i_2 \nu k'}))\big)\big]\\ = & \sum_{i=1}^{n-1} \mathbb{E}\big(\sum_{j=1}^{i} a^2_{ij\nu} (\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}))\big) ^2,\end{aligned}$$ since the centered factors for distinct $i$ are independent and $\mathbb{E}\big(\phi_{i_1\nu k}\phi_{i_1 \nu k'}- \mathbb{E}(\phi_{i_1\nu k}\phi_{i_1 \nu k'})\big)=0$.
Then we have $$\mathbb{E}\big( \sum_{j=1}^{n-1} \sum_{i=j}^{n-1}a^2_{ij\nu} (\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'})) \big)^2 \lesssim \sum_{i=1}^{n-1}(\sum_{j=1}^i a^2_{ij\nu})^2 \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}))^2 \lesssim n (\gamma^{-1}\mu_\nu^{-1})^2,$$ using the property that $\sum_{j=1}^i a^2_{ij\nu} = \sum_{j=1}^i (1-\gamma \mu_\nu)^{2(i-j)}\leq \gamma^{-1}\mu_\nu^{-1}$. Accordingly, $$\mathbb{E}\Big[\sum_{k,k'=1}^\infty b_{k}b_{k'} \sqrt{\sum_{\nu=1}^\infty \mu_\nu^{\frac{1+2\varepsilon}{\alpha}} }\sqrt{ \sum_{\nu=1}^\infty \mu_\nu^{4-\frac{1+2\varepsilon}{\alpha}}\big(\sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu} (\phi_{i\nu k}\phi_{i \nu k'}- \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}))\big)^2}\Big] \lesssim \sqrt{n}\gamma^{-1}(n\gamma)^{\frac{2}{\alpha}}.$$ Therefore, $$\label{eq:delta22_1} \mathbb{E}|\Delta^{(1)}_{22}|\lesssim \sqrt{n}\gamma^{-3}(n\gamma)^{\frac{2}{\alpha}}.$$ We next deal with $\Delta^{(2)}_{22}$: $$\begin{aligned} &\mathbb{E}\Delta^{(2)}_{22} = \gamma^{-2} \sum_{k,k'=1}^\infty \sum_{\nu=1}^\infty \mu_\nu^2 \sum_{j=1}^{n-1} \sum_{i=j}^{n-1} a^2_{ij\nu}b_{ik}b_{ik'} \mathbb{E}(\phi_{i\nu k}\phi_{i \nu k'}) \nonumber\\ \leq & \gamma^{-2}\sum_{k,k'=1}^\infty \sum_{\nu=1}^\infty\mu_\nu^2 \sum_{j=1}^{n-1}\sum_{i=j}^{n-1} a^2_{ij\nu} b_{ik}b_{ik'}\mathbb{E}(\phi^2_\nu(X_{i+1})\phi_k(X_{i+1})\phi_{k'}(X_{i+1}))\nonumber\\ = & \gamma^{-2} \sum_{\nu=1}^\infty\mu_\nu^2 \sum_{j=1}^{n-1}\sum_{i=j}^{n-1} a^2_{ij\nu} \mathbb{E}\big(\phi^2_\nu(X_{i+1})\cdot\big(\sum_{k=1}^\infty b_{ik}\phi_k(X_{i+1})\big)^2\big)\nonumber\\ \leq& c_\phi^2\gamma^{-2} \sum_{\nu=1}^\infty\mu_\nu^2 \sum_{j=1}^{n-1}\sum_{i=j}^{n-1} a^2_{ij\nu} \mathbb{E}\big(\sum_{k=1}^\infty b_{ik}\phi_k(X_{i+1})\big)^2 = c_\phi^2\gamma^{-2} \sum_{\nu=1}^\infty\mu_\nu^2 \sum_{j=1}^{n-1}\sum_{i=j}^{n-1} a^2_{ij\nu} \sum_{k=1}^\infty b^2_{ik} \lesssim \gamma^{-3} n (n\gamma)^{\frac{1}{\alpha}}. \label{eq:delta_22_2}\end{aligned}$$
Combining ([\[eq:delta_1\]](#eq:delta_1){reference-type="ref" reference="eq:delta_1"}), ([\[eq:delta_21\]](#eq:delta_21){reference-type="ref" reference="eq:delta_21"}), ([\[eq:delta22_1\]](#eq:delta22_1){reference-type="ref" reference="eq:delta22_1"}), and ([\[eq:delta_22_2\]](#eq:delta_22_2){reference-type="ref" reference="eq:delta_22_2"}), and noting that $\sqrt{n}\gamma^{-\frac{7}{2}}(n\gamma)^{\frac{2}{\alpha}} \leq n\gamma^{-3}(n\gamma)^{\frac{1}{\alpha}}$ for $\gamma \geq n^{-1}$, we obtain $$\mathbb{E}\sum_{j=1}^{n-1} g^2_j(s)\lesssim n\gamma^{-3}(n\gamma)^{\frac{1}{\alpha}}.$$ Define the event $\mathcal{E}_1= \{\sum_{j=1}^{n-1} g^2_j(s)\leq \gamma^{-7/2} n (n\gamma)^{1/\alpha}\}$; by Markov's inequality, $\mathbb{P}\big(\mathcal{E}_1\big)> 1-\gamma^{1/2}$. Conditioning on the event $\mathcal{E}_1$ and taking $u=C n^{-\frac{1}{2}}\gamma^{\frac{1}{4}}(n\gamma)^{\frac{1}{2\alpha}}\sqrt{\log n}$ in equation ([\[eq:noise1:con:original\]](#eq:noise1:con:original){reference-type="ref" reference="eq:noise1:con:original"}), we have $$\label{eq:noise1:con} \mathbb{P}\Big( \frac{\gamma^2}{n}\bigl|\sum_{j=1}^{n-1} \epsilon_j\cdot g_{j}(s)\bigr| > C n^{-\frac{1}{2}}\gamma^{\frac{1}{4}}(n\gamma)^{\frac{1}{2\alpha}}\sqrt{\log n} \bigm| \mathcal{E}_1\Big) \leq \exp\Big(-C' \log n\Big).$$ Combined with the lemma bridging $\bar{\eta}_n^{noise,1}(t)$ and $\|\bar{\eta}_n^{noise,1}\|_\infty$ in the Supplementary Material [@liu2023supp], we obtain the result. [**Next, we prove Lemma [Lemma 7](#app:le:noise_rem:con){reference-type="ref" reference="app:le:noise_rem:con"} (b) and analyze $\|\bar{\eta}_n^{noise,d}\|_{\infty}$ for $d\geq 2$.**]{.ul} Note that $\|\bar{\eta}_n^{noise,d}\|_{\infty} \leq \|\Sigma^a \bar{\eta}_n^{noise,d}\|_\mathbb{H}$. In what follows, we bound $\mathbb{E}\|\Sigma^a \bar{\eta}_n^{noise,d}\|^2_\mathbb{H}$.
Recall from Section [6](#sec:proof_sketch){reference-type="ref" reference="sec:proof_sketch"} that $\eta_n^{noise,d}$ follows the recursion $\eta_n^{noise,d} = (I-\gamma \Sigma)\eta_{n-1}^{noise,d} + \gamma \mathcal{E}_n^{d},$ where $\mathcal{E}_n^{d} = (\Sigma -K_{X_n}\otimes K_{X_n})\eta_{n-1}^{noise,d-1}$ for $d\geq 1$ and $\mathcal{E}_n^{0} = \varepsilon_n$. Let $T=I-\gamma \Sigma$; then $\eta_j^{noise,d} =\gamma \sum_{k=1}^j T^{j-k}\mathcal{E}_k^{d}$, and $\bar{\eta}_n^{noise,d} = \gamma \frac{1}{n}\sum_{j=1}^n \sum_{k=1}^j T^{j-k}\mathcal{E}_k^{d}$. $$\begin{aligned} & \mathbb{E}\langle \bar{\eta}_n^{noise,d}, \Sigma^{2a}\bar{\eta}_n^{noise,d} \rangle \\ = & \frac{\gamma^2}{n^2} \mathbb{E}\langle \sum_{j=1}^n \sum_{k=1}^j T^{j-k} \mathcal{E}_k^{d}, \Sigma^{2a}\sum_{j=1}^n \sum_{k=1}^j T^{j-k}\mathcal{E}_k^{d} \rangle\\ = & \frac{\gamma^2}{n^2} \mathbb{E}\langle \sum_{k=1}^n (\sum_{j=k}^n T^{j-k}) \mathcal{E}_k^{d}, \Sigma^{2a}\sum_{k=1}^n (\sum_{j=k}^n T^{j-k})\mathcal{E}_k^{d}\rangle \\ = & \frac{\gamma^2}{n^2} \sum_{k=1}^n \mathbb{E}\langle M_{n,k}\mathcal{E}_k^{d}, \Sigma^{2a}M_{n,k}\mathcal{E}_k^{d} \rangle = \frac{\gamma^2}{n^2} \sum_{k=1}^n \mathbb{E}\mathop{\mathrm{tr}}(\mathcal{E}_k^{d} M_{n,k}\Sigma^{2a}M_{n,k}\mathcal{E}_k^{d}) = \frac{\gamma^2}{n^2} \sum_{k=1}^n \mathbb{E}\mathop{\mathrm{tr}}(M_{n,k}\Sigma^{2a}M_{n,k}\mathcal{E}_k^{d} \otimes \mathcal{E}_k^{d})\\ = & \frac{\gamma^2}{n^2} \sum_{k=1}^n \mathop{\mathrm{tr}}(M_{n,k}\Sigma^{2a}M_{n,k}\mathbb{E}\big(\mathcal{E}_k^{d} \otimes \mathcal{E}_k^{d})\big) \lesssim \frac{\gamma^{2+d}}{n^2} \sum_{k=1}^n \mathop{\mathrm{tr}}\big(M_{n,k} \Sigma^{2a} M_{n,k}\Sigma \big)\end{aligned}$$ where the cross terms over $k$ vanish because the $\mathcal{E}_k^{d}$ are martingale differences, and we use the property that $\mathbb{E}\big(\mathcal{E}_k^{d} \otimes \mathcal{E}_k^{d}\big)\lesssim \gamma^d\Sigma$. Since $M_{n,k}= \sum_{j=k}^n T^{j-k} = I + T + T^2 + \cdots + T^{n-k} \preceq nI$, we have $M_{n,k}\Sigma^{2a} \preceq n \Sigma^{2a}$.
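On each eigendirection with eigenvalue $\mu$, $M_{n,k}$ acts as the geometric sum $\sum_{j=k}^{n}(1-\gamma\mu)^{j-k}$, which is bounded both by the number of terms and by $(\gamma\mu)^{-1}$; interpolating the two bounds gives the estimate used in the text. A quick numerical sanity check (illustration only; $\gamma$, $\mu$, $n$, $k$, $q$ below are arbitrary test values):

```python
# Sanity check (illustration only; parameter values arbitrary): on the
# eigendirection with eigenvalue mu, M_{n,k} = sum_{j=k}^n (1-gamma*mu)^(j-k)
# obeys  M_{n,k} <= min(n-k+1, 1/(gamma*mu)),  hence the interpolated bound
#        M_{n,k} <= (n-k+1)^q * (gamma*mu)^(q-1)  for every 0 <= q <= 1.

def M(n, k, gamma, mu):
    return sum((1.0 - gamma * mu) ** (j - k) for j in range(k, n + 1))

n, gamma = 2000, 0.005
for mu in (0.01, 0.1, 1.0):
    for k in (1, 500, 1900):
        m = M(n, k, gamma, mu)
        assert m <= min(n - k + 1, 1.0 / (gamma * mu)) + 1e-9
        for q in (0.0, 0.3, 0.7, 1.0):
            assert m <= (n - k + 1) ** q * (gamma * mu) ** (q - 1) + 1e-6
```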
On the other hand, $M_{n,k}\Sigma^{2a} = \gamma^{-1}\Sigma^{-1}(I-T^{n-k+1})\Sigma^{2a} \preceq \gamma^{-1}\Sigma^{2a-1}$. Therefore, we have $$M_{n,k}\Sigma^{2a} \preceq (n\Sigma^{2a})^{q}(\gamma^{-1}\Sigma^{2a-1})^{1-q}$$ with $0\leq q \leq 1$. Also, $M_{n,k}\Sigma \preceq \gamma^{-1}\Sigma^{-1}(I-T^{n-k+1})\Sigma \preceq \gamma^{-1}I$. Then $$\mathop{\mathrm{tr}}\big(M_{n,k} \Sigma^{2a} M_{n,k}\Sigma \big) \leq n^q \gamma^{q-1}\gamma^{-1}\sum_{\nu=1}^\infty \mu_\nu^{2aq+ (2a-1)(1-q)}.$$ Therefore, $\mathbb{E}\langle \bar{\eta}_n^{noise,d}, \Sigma^{2a}\bar{\eta}_n^{noise,d} \rangle \lesssim \frac{\gamma^{2+d}}{n^2} n \gamma^{-1}n^{q}\gamma^{q-1}\sum_{\nu=1}^\infty \mu_\nu^{2a-1+q} \leq (n\gamma)^q n^{-1}\gamma^d \sum_{\nu=1}^\infty \mu_\nu^{2a-1+q}.$ Let $2a-1+q=1/\alpha + \varepsilon$ with $a=1/2-1/(2\alpha)-\varepsilon$ and $\varepsilon \to 0$; then we have $\sum_{\nu=1}^\infty \mu_\nu^{2a-1+q}=\sum_{j=1}^\infty \mu_j^{1/\alpha+\varepsilon} = \sum_{j=1}^\infty j^{-1-\alpha \varepsilon} < \infty$ and $$\mathbb{E}\langle \bar{\eta}_n^{noise,d}, \Sigma^{2a}\bar{\eta}_n^{noise,d} \rangle \lesssim (n\gamma)^{1/\alpha} n^{-1} (n\gamma)^{1/\alpha + 2 \varepsilon}\gamma^{d}.$$ By Markov's inequality, we have $$\begin{aligned} \mathbb{P}\Big( \|\bar{\eta}_n^{noise,d}\|^2_{\infty} \geq \gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big) \leq & \frac{\mathbb{E}\|\Sigma^a \bar{\eta}_n^{noise,d}\|^2_{\mathbb{H}}}{\gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1}}\leq (n\gamma)^{1/\alpha + 2 \varepsilon}\gamma^{d-1/4}. \end{aligned}$$ For $d\geq 2$ and $0<\gamma < n^{-\frac{2}{2+3\alpha}}$, we have $(n\gamma)^{1/\alpha + 2 \varepsilon}\gamma^{d-1/4}\leq \frac{1}{2}\gamma^{1/4}$. [**Proof of Lemma [Lemma 7](#app:le:noise_rem:con){reference-type="ref" reference="app:le:noise_rem:con"} (c) - the remainder term $\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}$**]{.ul}.
Note that for any $f\in \mathbb{H}$, $|f(x)| = |\langle f, K_x \rangle_\mathbb{H}| \leq \|K_x\|_\mathbb{H} \|f\|_\mathbb{H} \leq C \|f\|_\mathbb{H}$. Therefore, $\|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|_{\infty} \leq \|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|_{\mathbb{H}}$. Next, we will bound $\|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|_{\mathbb{H}}$. For $i=1,\dots,n$, recall that $\eta_i^{noise} - \sum_{d=0}^r \eta_i^{noise,d} = (I-\gamma K_{X_i}\otimes K_{X_i})(\eta_{i-1}^{noise} - \sum_{d=0}^r \eta_{i-1}^{noise,d}) + \gamma \mathcal{E}_i^{r+1}.$ Then we have $$\|\eta_i^{noise} - \sum_{d=0}^r \eta_i^{noise,d}\|_\mathbb{H} \leq \|\eta_{i-1}^{noise} - \sum_{d=0}^r \eta_{i-1}^{noise,d}\|_\mathbb{H} + \gamma \|\mathcal{E}_i^{r+1}\|_\mathbb{H}\leq \sum_{j=1}^i \gamma\|\mathcal{E}_j^{r+1}\|_\mathbb{H}.$$ Accordingly, by the Cauchy--Schwarz inequality, $\mathbb{E}\|\eta_i^{noise} - \sum_{d=0}^r \eta_i^{noise,d}\|_\mathbb{H}^2 \leq \gamma^2 i \sum_{j=1}^i \mathbb{E}\|\mathcal{E}_j^{r+1}\|^2_\mathbb{H}$. Since $\mathbb{E}\|\mathcal{E}_j^{r+1}\|_\mathbb{H}^2 = \mathbb{E}\mathop{\mathrm{tr}}(\mathcal{E}_j^{r+1}\otimes \mathcal{E}_j^{r+1}) = \mathop{\mathrm{tr}}\mathbb{E}(\mathcal{E}_j^{r+1}\otimes \mathcal{E}_j^{r+1}) \leq \sigma^2 \gamma^{r+1}R^{2r+2}\mathop{\mathrm{tr}}(\Sigma)$, we have $$\begin{aligned} \mathbb{E}\|\eta_i^{noise} - \sum_{d=0}^r \eta_i^{noise,d}\|_\mathbb{H}^2 \leq \gamma^2 i^2 \sigma^2 \gamma^{r+1}R^{2r+2}\mathop{\mathrm{tr}}(\Sigma),\end{aligned}$$ and accordingly $$\begin{aligned} \mathbb{E}\|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|_\mathbb{H}^2 \leq & \frac{2}{n}\sum_{i=1}^n \mathbb{E}\|\eta_i^{noise} - \sum_{d=0}^r \eta_i^{noise,d}\|_\mathbb{H}^2 \nonumber\\ \leq & \sigma^2 \gamma^{r+3}R^{2r+2}\mathop{\mathrm{tr}}(\Sigma) \frac{1}{n}\sum_{i=1}^n i^2 \leq \sigma^2 \gamma^{r+3}R^{2r+2}\mathop{\mathrm{tr}}(\Sigma) n^2.
\label{eq:app:remind_b}\end{aligned}$$ By Markov's inequality, $$\mathbb{P}\Big( \|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|^2_{\infty} \geq \gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big) \leq \frac{\mathbb{E}\|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|^2_{\mathbb{H}}}{\gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1}} \leq 1/n$$ for a large enough constant $r$. Finally, we have $$\begin{aligned} & \mathbb{P}\Big( \|\bar{\eta}_n^{noise} - \bar{\eta}_n^{noise,0}\|^2_{\infty} \geq (r+1)\gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big)\\ \leq & \sum_{d=1}^r\mathbb{P}\Big( \|\bar{\eta}_n^{noise,d}\|^2_{\infty} \geq \gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big) + \mathbb{P}\Big( \|\bar{\eta}_n^{noise} - \sum_{d=0}^r \bar{\eta}_n^{noise,d}\|^2_{\infty} \geq \gamma^{1/4}(n\gamma)^{1/\alpha} n^{-1} \Big) \leq \gamma^{1/4}. \end{aligned}$$ ## Bootstrap SGD decomposition {#app:bSGD:decomp:con} Similar to the SGD recursion decomposition in Section [6](#sec:proof_sketch){reference-type="ref" reference="sec:proof_sketch"}, we define the bootstrap SGD recursion decomposition as follows. Based on ([\[eq:bootstrap:sgd\]](#eq:bootstrap:sgd){reference-type="ref" reference="eq:bootstrap:sgd"}), denote $\eta_n^b = \widehat{f}_n^b - f^\ast$; then $$\label{eq:boot:recursion} \eta_n^b = (I - \gamma_n w_n K_{X_n}\otimes K_{X_n}) \eta_{n-1}^b + \gamma_n w_n \epsilon_n K_{X_n}.$$ We split the recursion ([\[eq:boot:recursion\]](#eq:boot:recursion){reference-type="ref" reference="eq:boot:recursion"}) into two recursions $\eta_n^{b,bias}$ and $\eta_n^{b,noise}$ such that $\eta_n^b = \eta_n^{b,bias} + \eta_n^{b,noise}$.
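Because every recursion involved is affine in the iterate, this additive split is exact path by path, not merely in expectation. A toy finite-dimensional sketch (illustration only; the dimension, step size, weight distribution, and the modeling of $K_{X_n}$ by the feature vector itself are arbitrary choices, not from the paper):

```python
# Toy sketch (illustration only; dimension, step size, and distributions are
# arbitrary, with K_x modeled by the vector x itself).  Since each update is
# affine in the iterate, running the bias and noise sub-recursions with the
# SAME draws (X_n, w_n, eps_n) and summing reproduces the full bootstrap
# recursion eta^b = eta^{b,bias} + eta^{b,noise} exactly.
import random

random.seed(0)
d, n, gamma = 3, 200, 0.02
f_star = [1.0, -0.5, 0.25]

def step(eta, x, w, noise):
    # eta <- (I - gamma*w*(x (x) x)) eta + gamma*w*noise*x
    inner = sum(e * xi for e, xi in zip(eta, x))
    return [e - gamma * w * inner * xi + gamma * w * noise * xi
            for e, xi in zip(eta, x)]

eta_b = f_star[:]          # full recursion, started at eta_0^b = f*
eta_bias = f_star[:]       # bias part: same recursion, noise switched off
eta_noise = [0.0] * d      # noise part: started at zero
for _ in range(n):
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    w = random.uniform(0.5, 1.5)       # bootstrap weight with E w = 1
    eps = random.gauss(0.0, 1.0)
    eta_b = step(eta_b, x, w, eps)
    eta_bias = step(eta_bias, x, w, 0.0)
    eta_noise = step(eta_noise, x, w, eps)

assert all(abs(b - (s + t)) < 1e-9
           for b, s, t in zip(eta_b, eta_bias, eta_noise))
```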
Specifically, $$\begin{aligned} \eta_n^{b,bias} = & (I - \gamma_n w_n K_{X_n}\otimes K_{X_n}) \eta^{b,bias}_{n-1} \quad \textrm{with} \quad \eta_0^{b,bias} = f^\ast, \label{eq:eta:init}\\ \eta_n^{b,noise} = & (I - \gamma_n w_n K_{X_n}\otimes K_{X_n}) \eta^{b,noise}_{n-1} + \gamma_n w_n \epsilon_n K_{X_n} \quad \textrm{with} \quad \eta_0^{b,noise} = 0. \label{eq:eta:noise}\end{aligned}$$ Since $\mathbb{E}[w_n K_{X_n}\otimes K_{X_n}] = \Sigma$, we further decompose $\eta_n^{b,bias}$ into two parts: (1) its main recursion terms, which determine the bias order; (2) residual recursion terms. That is, $$\begin{aligned} \eta_n^{b,bias,0}= & (I - \gamma_n \Sigma) \eta_{n-1}^{b,bias,0} \quad \quad \textrm{with} \quad \eta_{0}^{b,bias,0}=f^\ast, \\ \eta_n^{b,bias} - \eta_n^{b,bias,0} = & (I- \gamma_n w_n K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{b,bias} - \eta_{n-1}^{b,bias,0}) + \gamma_n (\Sigma - w_n K_{X_n}\otimes K_{X_n} )\eta_{n-1}^{b,bias,0}. \end{aligned}$$ Similarly, we decompose $\eta_n^{b,noise}$ into its main recursion term, which dominates the variation, and residual recursion terms: $$\begin{aligned} \eta_n^{b,noise,0} = & (I - \gamma_n\Sigma) \eta^{b,noise,0}_{n-1} + \gamma_n w_n \epsilon_n K_{X_n} \label{eq:noise:main:boot}\\ \eta_n^{b,noise} - \eta_n^{b,noise,0} = & (I - \gamma_n w_n K_{X_n}\otimes K_{X_n}) (\eta_{n-1}^{b,noise} - \eta_{n-1}^{b,noise,0}) + \gamma_n (\Sigma - w_n K_{X_n}\otimes K_{X_n} ) \eta_{n-1}^{b, noise, 0}, \nonumber\end{aligned}$$ with $\eta_0^{b,noise,0}=0$. We aim to quantify the distributional behavior of $\bar{f}_n^b - \bar{f}_n$ conditional on $\mathcal{D}_n$. Denote $\bar{\eta}_n^b=\frac{1}{n}\sum_{i=1}^n \big(\widehat{f}_i^b - f^\ast\big)$.
Then $$\begin{aligned} \bar{f}_n^b -\bar{f}_n = & \bar{\eta}_n^b - \bar{\eta}_n = \frac{1}{n}\sum_{i=1}^n \big(\widehat{f}_i^b - f^\ast\big) - \frac{1}{n}\sum_{i=1}^n \big(\widehat{f}_i - f^\ast \big) \\ = & \underbrace{\bar{\eta}_n^{b,bias,0}- \bar{\eta}_n^{bias,0}}_{\textrm{leading bias}} + \underbrace{\bar{\eta}_n^{b,noise,0}-\bar{\eta}_n^{noise,0}}_{\textrm{leading noise}} + \underbrace{Rem_{noise}^b + Rem_{bias}^b- Rem_{noise} - Rem_{bias}}_{\textrm{negligible terms}}, \end{aligned}$$ where $Rem^b_{noise} = \bar{\eta}_n^{b,noise} - \bar{\eta}_n^{b,noise,0}$, $Rem^b_{bias} = \bar{\eta}_n^{b,bias} - \bar{\eta}_n^{b,bias,0}$, and $Rem_{noise}, Rem_{bias}$ are the remainder terms in the original SGD recursion, with $Rem_{noise} = \bar{\eta}_n^{noise}-\bar{\eta}_n^{noise,0}$ (bounded in Lemma [Lemma 7](#app:le:noise_rem:con){reference-type="ref" reference="app:le:noise_rem:con"}) and $Rem_{bias} = \bar{\eta}_n^{bias}-\bar{\eta}_n^{bias,0}$ (bounded in Section [8.2](#app:le:rem_bias:con){reference-type="ref" reference="app:le:rem_bias:con"}). Since $\bar{\eta}_n^{b,bias,0}$ and $\bar{\eta}_n^{bias,0}$ follow the same recursion, the leading bias of $\bar{f}_n^b -\bar{f}_n$ is $0$. We next need to: (1) characterize the distributional behavior of $\bar{\eta}_n^{b,noise,0} - \bar{\eta}_n^{noise,0}$ conditional on $\mathcal{D}_n$; and (2) prove that the term $Rem_{noise}^b + Rem_{bias}^b- Rem_{noise} - Rem_{bias}$ is negligible. In the following, we give an explicit expression for $\bar{\eta}_n^{b,noise,0} - \bar{\eta}_n^{noise,0}$.
Similar to the expression of $\eta_n^{noise,0} = \sum_{i=1}^n D(i+1,n, \gamma_i) \gamma_i \epsilon_i K_{X_i}$ in ([\[eq:noise:lead:exp\]](#eq:noise:lead:exp){reference-type="ref" reference="eq:noise:lead:exp"}), a simple calculation from the recursion ([\[eq:noise:main:boot\]](#eq:noise:main:boot){reference-type="ref" reference="eq:noise:main:boot"}) shows that $\eta_n^{b,noise,0} = \sum_{i=1}^n D(i+1,n, \gamma_i) \gamma_i w_i \epsilon_i K_{X_i}.$ Accordingly, $$\eta_n^{b,noise,0} - \eta_n^{noise,0} = \sum_{i=1}^n D(i+1,n, \gamma_i) \gamma_i (w_i -1) \epsilon_i K_{X_i} .$$ Then $$\begin{aligned} & \bar{\eta}_n^{b,noise,0} - \bar{\eta}_n^{noise,0} \nonumber\\ = & \frac{1}{n}\sum_{j=1}^n \sum_{i=1}^j D(i+1,j, \gamma_i) \gamma_i (w_i -1) \epsilon_i K_{X_i} = \frac{1}{n} \sum_{i=1} ^n \big(\sum_{j=i}^n D(i+1,j, \gamma_i) \big) \gamma_i (w_i -1) \epsilon_i K_{X_i}. \label{eq:noise:lead}\end{aligned}$$ ## Proof of the Bootstrap consistency in Theorem [Theorem 4](#thm:global:main1){reference-type="ref" reference="thm:global:main1"} for constant step size case {#pf:thm:global:con} We follow the proof sketch in Section [6.2](#sec:sketch:pf:GA){reference-type="ref" reference="sec:sketch:pf:GA"} and complete the proofs of the remaining steps in this section. For the reader's convenience, we restate the following notation. Denote $$\begin{array}{rcl@{\qquad}rcl} \bar{\alpha}_n (\cdot) & = & \frac{1}{\sqrt{n(n\gamma)^{1/\alpha}}}\sum_{i=1}^n \epsilon_i \cdot \Omega_{n,i}(\cdot) &\bar{\alpha}_n^b(\cdot) & = & \frac{1}{\sqrt{n(n\gamma)^{1/\alpha}}}\sum_{i=1}^n (w_i-1)\cdot \epsilon_i \cdot \Omega_{n,i}(\cdot) \\ \bar{\alpha}_n^e (\cdot) & = & \frac{1}{\sqrt{n(n\gamma)^{1/\alpha}}}\sum_{i=1}^n e_i \cdot \epsilon_i \cdot \Omega_{n,i}(\cdot) & \bar{Z}_n (\cdot) & = & \frac{1}{\sqrt{n(n\gamma)^{1/\alpha}}}\sum_{i=1}^n Z_{i}(\cdot) \end{array}$$ where $e_i$'s, for $i=1,\cdots, n$, are i.i.d.
standard normal random variables, and $Z_{i}(t) \sim N \big(0, (n\gamma)^{-1/\alpha} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu^2(t)\big)$ satisfying $\mathbb{E}\big(Z_{i}(t_k)\cdot Z_{i}(t_\ell)\big) =(n\gamma)^{-1/\alpha} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu(t_k)\phi_\nu(t_\ell)$ and $\mathbb{E}\big(Z_{i}(t_k)\cdot Z_{j}(t_\ell)\big) = 0$ for $i\neq j$. **Lemma 8**. *(Proof of Step ) Suppose $\alpha>2$ and $\gamma= n^{-\xi}$ with $\xi > \max\{1-\alpha/3, 0\}$. We have $$\label{eq:max:1} \sup_{\nu\in\mathbb{R}}\Big|\mathbb{P}(\max_{1\le k\le N} \bar{\alpha}_n(t_k)\le\nu) -\mathbb{P}(\max_{1\le k\le N}\bar{Z}_n(t_k)\le\nu)\Big|\leq \frac{(\log N)^{3/2}}{\big(n(n\gamma)^{-3/\alpha}\big)^{1/8}},$$ which converges to $0$ as $n$ increases.* **Lemma 9**. *(Proof of Step ) Suppose $\alpha>2$ and $\gamma= n^{-\xi}$ with $\xi > \max\{1-\alpha/3, 0\}$. With probability at least $1-\exp(-C \log n)$, $$\sup_{\nu\in \mathbb{R}}\Big| \mathbb{P}^*\Big(\max_{1\leq j \leq N} \bar{\alpha}^e_n(t_j) \leq \nu \Big) - \mathbb{P}\Big(\max_{1\leq j \leq N} \bar{Z}_n(t_j) \leq \nu \Big) \Big| \preceq \big((n\gamma)^{1/\alpha}n^{-1}\big)^{1/6}(\log n)^{1/3} (\log N)^{2/3}.$$* **Lemma 10**. *(Proof of Step ) Suppose $\alpha>2$ and $\gamma= n^{-\xi}$ with $\xi > \max\{1-\alpha/3, 0\}$. With probability at least $1-4/n$, $$\sup_{\zeta\in\mathbb{R}}\Big|\mathbb{P}^*(\max_{1\le k\le N}\bar{\alpha}_n^b(t_k)\le\zeta) -\mathbb{P}^*(\max_{1\le k\le N}\bar{\alpha}_n^e(t_k)\le\zeta)\Big| \leq \frac{(\log N)^{3/2}}{\big(n(n\gamma)^{-3/\alpha}\big)^{1/8}}.$$* [**Proof of Lemma [Lemma 8](#app:le:GP:atoz:con){reference-type="ref" reference="app:le:GP:atoz:con"}**]{.ul} We define $g_{m} (i, X_i, \epsilon_i) = \frac{1}{\sqrt{(n\gamma)^{1/\alpha}}} \epsilon_i \cdot \Omega_{n,i} (t_m)$ for $t_m \in \{t_1, \dots, t_N\}$. With a slight abuse of notation, we use $g_{i,m}$ to represent $g_{m}(i,X_i,\epsilon_i)$.
Then $\bar{\alpha}_n(t_m) = \frac{1}{\sqrt{n}} \sum_{i=1}^n g_{i,m}$. Define $\bm{g}_i = (g_{i,1},\cdots,g_{i,N})^\top$ and $\bar{\bm{\alpha}}_n =\big(\bar{\alpha}_n(t_1), \cdots, \bar{\alpha}_n(t_N)\big)^\top\in \mathbb{R}^N$; then $\bar{\bm{\alpha}}_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n\bm{g}_i$. For $1\leq m\leq k\leq N$, $$\begin{aligned} \mathbb{E}(g_{i,m}\cdot g_{i,k}) = & \frac{\sigma^2}{(n\gamma)^{1/\alpha}} \mathbb{E}[\big( \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})\phi_\nu(t_m)\phi_\nu(X_i)\big)\big( \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})\phi_\nu(t_k)\phi_\nu(X_i)\big)]\\ = & \frac{\sigma^2}{(n\gamma)^{1/\alpha}} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu(t_m)\phi_\nu(t_k).\end{aligned}$$ When $m=k$, $\mathbb{E}(g_{i,m}\cdot g_{i,m}) = \frac{\sigma^2}{(n\gamma)^{1/\alpha}} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu^2(t_m)$. We also have $\mathbb{E}(g_{i,m}\cdot g_{j,m}) =0$ for $i\neq j$. We use the notation $Z_{i,m}$ to represent $Z_{i}(t_m)$ defined in Section [6.2](#sec:sketch:pf:GA){reference-type="ref" reference="sec:sketch:pf:GA"}. Let $\bm{Z}_i= (Z_{i,1},\cdots, Z_{i,N})^\top\in \mathbb{R}^N$ for $i=1,\dots, n$, and $\bar{\bm{Z}}_n= \frac{1}{\sqrt{n}}\sum_{i=1}^n \bm{Z}_i = (\bar{Z}_n(t_1),\cdots, \bar{Z}_n(t_N))^\top \in \mathbb{R}^N$. We remark that $\bar{\bm{\alpha}}_n$ has the same mean and covariance structure as $\bar{\bm{Z}}_n$. Let $q$ be a scalar depending on $n$, to be determined. For $\bm{\beta}=(\beta_1,\ldots,\beta_{N})^\top\in\mathbb{R}^{N}$, define $F_q(\bm{\beta})=q^{-1}\log(\sum_{l=1}^{N}\exp(q\beta_l)).$ It follows from [@chernozhukov2013gaussian] that $F_q(\bm{\beta})$ satisfies $0\le F_q(\bm{\beta})-\max_{1\le l\le N}\beta_l\le q^{-1}\log{N}.$ Let $U_0:\mathbb{R}\rightarrow[0,1]$ be a $C^3$-function such that $U_0(s)=1$ for $s\le0$ and $U_0(s)=0$ for $s\ge1$. Let $U_\zeta(s)=U_0(\psi_n(s-\zeta-q^{-1}\log{N}))$, for $\zeta\in\mathbb{R}$, where $\psi_n$ is to be determined.
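The smooth-max inequality $0\le F_q(\bm\beta)-\max_l\beta_l\le q^{-1}\log N$ is the engine of the whole comparison argument. A quick numerical sanity check (illustration only; the grid of $q$ values and the random vectors are arbitrary):

```python
# Sanity check (illustration only) of the smooth-max property of
# F_q(beta) = q^{-1} * log( sum_l exp(q * beta_l) ):
#   0 <= F_q(beta) - max_l beta_l <= q^{-1} * log(N).
import math, random

def F(q, beta):
    m = max(beta)  # stabilized log-sum-exp
    return m + math.log(sum(math.exp(q * (b - m)) for b in beta)) / q

random.seed(1)
for q in (0.5, 2.0, 10.0):
    for _ in range(100):
        beta = [random.uniform(-5.0, 5.0) for _ in range(50)]
        gap = F(q, beta) - max(beta)
        assert -1e-12 <= gap <= math.log(len(beta)) / q + 1e-12
```

Both bounds are tight in the extremes: the lower bound when one coordinate dominates, the upper bound when all coordinates are equal.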
Then $$\mathbb{P}(\max_{1\le m\le N}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}g_{i,m} \le \zeta) \le \mathbb{P}(F_q(\bar{\bm{\alpha}}_n)\le\zeta+q^{-1}\log{N})\le \mathbb{E}\{U_\zeta(F_q(\bar{\bm{\alpha}}_n))\}.$$ To proceed, we approximate $\mathbb{E}\{U_\zeta(F_q(\bar{\bm{\alpha}}_n))-U_\zeta(F_q(\bar{\bm{Z}}_n))\}$ using the techniques of [@chernozhukov2013gaussian]. Let $G=U_\zeta\circ F_q$. Define $\Psi(t)=\mathbb{E}\{G(\sqrt{t}\bar{\bm{\alpha}}_n+\sqrt{1-t}\bar{\bm{Z}}_n)\}$, $W(t)=\sqrt{t}\bar{\bm{\alpha}}_n+\sqrt{1-t}\bar{\bm{Z}}_n$, $W_i(t)=\frac{1}{\sqrt{n}}(\sqrt{t}\bm{g}_{i}+\sqrt{1-t}\bm{Z}_{i})$ and $W_{-i}(t)=W(t)-W_i(t)$, for $i=1,\ldots,n$. Let $G_{k}(\bm{\beta})=\frac{\partial}{\partial \beta_k}G(\bm{\beta})$, $G_{kl}(\bm{\beta})=\frac{\partial^2}{\partial \beta_k\partial \beta_l}G(\bm{\beta})$ and $G_{klq}(\bm{\beta})=\frac{\partial^3}{\partial \beta_k\partial \beta_l\partial \beta_q}G(\bm{\beta})$, for $1\le k,l,q\le N$. Then $W'_{ik}(t) = \frac{1}{2\sqrt{n}} (g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})$.
Then $$\begin{aligned} &&\mathbb{E}\{G(\bar{\bm{\alpha}}_n) - G(\bar{\bm{Z}}_n)\} = \mathbb{E}\{U_\zeta(F_q(\bar{\bm{\alpha}}_n))-U_\zeta(F_q(\bar{\bm{Z}}_n))\} =\Psi(1)-\Psi(0) =\int_0^1\Psi'(t)dt\\ &=&\frac{1}{2\sqrt{n}}\sum_{k=1}^{N}\int_0^1 \mathbb{E}\{G_k(W(t))(\sum_{i=1}^{n} g_{i,k}/\sqrt{t}-\sum_{i=1}^{n}Z_{i,k}/\sqrt{1-t})\}dt\\ &=&\frac{1}{2\sqrt{n}}\sum_{k=1}^{N}\sum_{i=1}^{n} \int_0^1\mathbb{E}\big\{G_k(W(t))(g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})\big\}dt\\ &=&\frac{1}{2\sqrt{n}}\sum_{k=1}^{N}\sum_{i=1}^{n}\int_0^1\mathbb{E} \big\{\big[G_k(W_{-i}(t))+\frac{1}{\sqrt{n}}\sum_{l=1}^{N}G_{kl}(W_{-i}(t))(\sqrt{t}g_{i,l}+\sqrt{1-t}Z_{i,l}) \\ &&+\frac{1}{n}\sum_{l=1}^{N}\sum_{d=1}^{N}\int_0^1(1-t')G_{kld}(W_{-i}(t)+t'W_i(t))(\sqrt{t}g_{i,l}+\sqrt{1-t}Z_{i,l})(\sqrt{t}g_{i,d}+\sqrt{1-t}Z_{i,d})dt'\big]\\ &&\times (g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})\big\}dt\\ &=&\frac{1}{2\sqrt{n}}\sum_{k=1}^{N}\sum_{i=1}^{n}\int_0^1 \mathbb{E}\{G_k(W_{-i}(t))\}\mathbb{E}\{g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t}\}dt\\ &&+\frac{1}{2n}\sum_{k,l=1}^{N}\sum_{i=1}^{n}\int_0^1\mathbb{E}\{G_{kl}(W_{-i}(t))\} \times \mathbb{E}\{(\sqrt{t}g_{i,l}+\sqrt{1-t}Z_{i,l}) (g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})\}dt\\ &&+\frac{1}{2n^{3/2}}\sum_{k,l,d=1}^{N}\sum_{i=1}^{n}\int_0^1\int_0^1 (1-t')\mathbb{E}\{G_{kld}(W_{-i}(t)+t'W_i(t))(\sqrt{t}g_{i,l}+\sqrt{1-t}Z_{i,l})\\ &&(\sqrt{t}g_{i,d}+\sqrt{1-t}Z_{i,d})(g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})\}dtdt'\\ &\equiv&J_1/2+J_2/2 + J_3/2,\end{aligned}$$ where $$\begin{aligned} J_1& = &\frac{1}{\sqrt{n}} \sum_{k=1}^{N}\sum_{i=1}^{n}\int_0^1 \mathbb{E}\{G_k(W_{-i}(t))\}\mathbb{E}\{g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t}\}dt = 0\\ J_2&=&\frac{1}{n}\sum_{k,l=1}^{N}\sum_{i=1}^{n}\int_0^1\mathbb{E}\{G_{kl}(W_{-i}(t))\}\times \mathbb{E}\{(\sqrt{t}g_{i,l}+\sqrt{1-t}Z_{i,l}) (g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})\}dt \\ J_3&=&\frac{1}{n^{3/2}}\sum_{k,l,d=1}^{N}\sum_{i=1}^{n}\int_0^1\int_0^1 (1-t')\mathbb{E}\{G_{kld}(W_{-i}(t)+t'W_i(t))(\sqrt{t}g_{i,l}+\sqrt{1-t}Z_{i,l}) 
(\sqrt{t}g_{i,d}+\sqrt{1-t}Z_{i,d}) \\ &&(g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})\}dtdt'.\end{aligned}$$ We further note that $J_2= 0$ since $\mathbb{E}\{(\sqrt{t}g_{i,l}+\sqrt{1-t}Z_{i,l}) (g_{i,k}/\sqrt{t}-Z_{i,k}/\sqrt{1-t})\} = \mathbb{E}(g_{i,l}g_{i,k}) - \mathbb{E}(Z_{i,l}Z_{i,k})=0$. For $J_3$, it follows from [@chernozhukov2013gaussian] that for any $z\in\mathbb{R}^{N}$, $$\sum_{k,l,d=1}^{N}|G_{kld}(z)|\le (C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n),$$ where $C_3=\|U_0'''\|_{\infty}$ and $C_1, C_2$ are finite constants. Then $$\begin{aligned} |J_3|\le& \frac{1}{n^{3/2}}\sum_{k,l,d=1}^{N}\sum_{i=1}^{n}\int_0^1\int_0^1 \mathbb{E}\{|G_{kld}(W_{-i}(t)+t'W_i(t))|\max_{1\le k\le N}(|g_{i,k}|+|Z_{i,k}|)^3\}\nonumber\\ &\times (1/\sqrt{t}+1/\sqrt{1-t})dtdt'\nonumber\\ \le&\frac{1}{n^{3/2}}4(C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n)\sum_{i=1}^{n} \mathbb{E}\{\max_{1\le k\le N}(|g_{i,k}|+|Z_{i,k}|)^3\}\nonumber\\ \le&\frac{1}{n^{3/2}}32(C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n) \big\{\sum_{i=1}^{n}(\mathbb{E}\{\max_{1\leq k\leq N}|g_{i,k}|^3\}+\mathbb{E}\{\max_{1\leq k\leq N}|Z_{i,k}|^3\})\big\}.\label{pf:J3:con}\end{aligned}$$ We need to bound $\sum_{i=1}^{n}\mathbb{E}\{\max\limits_{1\leq k\leq N}|g_{i,k}|^3\}$ and $\sum_{i=1}^{n}\mathbb{E}\{\max\limits_{1\leq k\leq N}|Z_{i,k}|^3\}$.
$$\begin{aligned} \sum_{i=1}^n \mathbb{E}\max_{1\leq k \leq N} |g_{i,k}|^3 = & \frac{1}{(n\gamma)^{3/(2\alpha)}} \sum_{i=1}^n \mathbb{E}\max_{1\leq k \leq N} |\epsilon_i \cdot \Omega_{n,i}(t_k)|^3\\ \leq & \frac{1}{(n\gamma)^{3/(2\alpha)}} \sum_{i=1}^n \mathbb{E}|\epsilon_i|^3 \cdot \mathbb{E}\max_{1\leq k \leq N} |\Omega_{n,i}(t_k)|^3\\ \lesssim & \frac{\sigma^3}{(n\gamma)^{3/(2\alpha)}}\sum_{i=1}^n\mathbb{E}\max_{1\leq k \leq N} |\Omega_{n,i}(t_k)|^3 \leq c_\phi^6 \sigma^3 n (n\gamma)^{3/(2\alpha)}, \end{aligned}$$ where the last step is due to the property that $|\Omega_{n,i}(t_k)| \leq \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})\cdot |\phi_\nu(X_i)|\cdot |\phi_\nu(t_k)| \leq c_\phi^2 (n\gamma)^{1/\alpha}$, and hence $\max\limits_{1\leq k \leq N} |\Omega_{n,i}(t_k)|^3 \leq c_\phi^6 (n\gamma)^{3/\alpha}$. Next we deal with $\mathbb{E}\max\limits_{1\leq k \leq N} |Z_{i,k}|^3$, where $Z_{i,k} \sim N \big(0, \frac{1}{(n\gamma)^{1/\alpha}} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu^2(t_k)\big)$, and $\mathbb{E}(Z_{i,k}\cdot Z_{i,l}) = \frac{1}{(n\gamma)^{1/\alpha}} \sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu(t_k)\phi_\nu(t_l)$.
For even $p>3$, we have $$\begin{aligned} \mathbb{E}\max_{1\leq k \leq N} |Z_{i,k}|^3= & \mathbb{E}\max_{1\leq k \leq N} (|Z_{i,k}|^p)^{3/p} \leq \big(\mathbb{E}\max_{1\leq k \leq N} |Z_{i,k}|^p\big)^{3/p} \leq \big(\sum_{k=1}^N \mathbb{E}|Z_{i,k}|^p\big)^{3/p}\\ = & \frac{1}{(n\gamma)^{3/(2\alpha)}}\big[\sum_{k=1}^N \big(\sum_{\nu=1}^\infty (1-(1-\gamma \mu_\nu)^{n-i})^2\phi_\nu^2(t_k)\big)^{p/2}\big]^{3/p} ((p-1)!!)^{3/p}\\ \leq & c_\phi^2 ((p-1)!!)^{3/p} \frac{1}{(n\gamma)^{3/(2\alpha)}} N^{3/p} [(n-i)\gamma]^{\frac{3}{2\alpha}}. \end{aligned}$$ Then we have $$\begin{aligned} J_3 \leq & 32\sigma^3(C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n)\big(n^{-3/2} n(n\gamma)^{3/(2\alpha)} + n^{-3/2} c_\phi^2 ((p-1)!!)^{3/p} N^{3/p} n\big) \\ \leq & C' (C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n)\big(n^{-1/2} (n\gamma)^{\frac{3}{2\alpha}} + ((p-1)!!)^{3/p} N^{3/p}n^{-1/2}\big). \end{aligned}$$ Therefore, $$\begin{aligned} \label{at:eq:gaussian:bound} & |\mathbb{E}\{U_\zeta(F_q(\bar{\bm{\alpha}}_n))-U_\zeta(F_q(\bar{\bm{Z}}_n))\}|\nonumber\\ \le & C' (C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n)\big(n^{-1/2} (n\gamma)^{\frac{3}{2\alpha}} + ((p-1)!!)^{3/p} N^{3/p}n^{-1/2}\big). \end{aligned}$$ In the meantime, it follows by Lemma 2.1 of [@chernozhukov2013gaussian] that $$\begin{aligned} \mathbb{E}\{U_\zeta(F_q(\bar{\bm{Z}}_n))\} &\le& \mathbb{P}(\max_{1\le k\le N}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}Z_{i,k}\le\zeta+q^{-1}\log{N}+\psi_n^{-1})\\ &\le& \mathbb{P}(\max_{1\le k\le N}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}Z_{i,k}\le\zeta)+C'(q^{-1}\log{N}+\psi_n^{-1})(1+\sqrt{2\log{N}}),\end{aligned}$$ where $C'>0$ is a universal constant. Therefore, for any $\zeta\in \mathbb{R}$, $$\begin{aligned} & \mathbb{P}(\max_{1\le k\le N}\bar{\alpha}_n (t_k)\le\zeta) -\mathbb{P}(\max_{1\le k\le N}\bar{Z}_n(t_k)\le\zeta)\\ \le & C_4(C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n)\big(n^{-1/2} (n\gamma)^{\frac{3}{2\alpha}} + ((p-1)!!)^{3/p} N^{3/p}n^{-1/2}\big) \\ & \quad \quad + c'(q^{-1}\log N + \psi_n^{-1})(1+\sqrt{2\log N}). 
\end{aligned}$$ On the other hand, let $V_\zeta(s)=U_0(\psi_n(s-\zeta)+1)$. Then $$\mathbb{P}(\max_{1\le k\le N}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}g_{i,k}\le\zeta) \ge \mathbb{P}(F_q(\bar{\bm{\alpha}}_n)\le\zeta)\ge \mathbb{E}\{V_\zeta(F_q(\bar{\bm{\alpha}}_n))\}.$$ Using the same arguments, it can be shown that $|\mathbb{E}\{V_\zeta(F_q(\bar{\bm{\alpha}}_n))-V_\zeta(F_q(\bar{\bm{Z}}_n))\}|$ has the same upper bound as in ([\[at:eq:gaussian:bound\]](#at:eq:gaussian:bound){reference-type="ref" reference="at:eq:gaussian:bound"}). Furthermore, by Lemma 2.1 of [@chernozhukov2013gaussian] and direct calculations, we have $$\begin{aligned} \mathbb{E}\{V_\zeta(F_q(\bar{\bm{Z}}_n))\} &\ge& \mathbb{P}(F_q(\bar{\bm{Z}}_n)\le\zeta-\psi_n^{-1}) \ge \mathbb{P}(\max_{1\le k\le N}\frac{1}{\sqrt{n}}\sum_{i=1}^{n} Z_{i,k}\le\zeta-(\psi_n^{-1}+q^{-1}\log{N}))\\ &\ge& \mathbb{P}(\max_{1\le k\le N}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}Z_{i,k}\le\zeta)-C'(\psi_n^{-1}+q^{-1}\log{N})(1+\sqrt{2\log{N}}).\end{aligned}$$ Therefore, $$\begin{aligned} &&\mathbb{P}(\max_{1\le k\le N}\bar{\alpha}_n(t_k)\le\zeta) -\mathbb{P}(\max_{1\le k\le N}\bar{Z}_n(t_k)\le\zeta)\\ &\ge&-C_0^{''}(C_3\psi_n^3+6C_2q\psi_n^2 + 6C_1 q^2\psi_n)\big( n^{-1/2} (n\gamma)^{\frac{3}{2\alpha}} + ((p-1)!!)^{3/p} N^{3/p}n^{-1/2}\big)\\ && - C^{''}(\psi_n^{-1} + q^{-1}\log N)(1+\sqrt{2\log N}).\end{aligned}$$ Consequently, letting $\psi_n = q = \big(n(n\gamma)^{-3/\alpha}\big)^{1/8}$ and taking $p$ large enough, we have $$\sup_{\zeta\in\mathbb{R}}\Big|\mathbb{P}(\max_{1\le k\le N} \bar{\alpha}_n(t_k)\le\zeta) -\mathbb{P}(\max_{1\le k\le N}\bar{Z}_n(t_k)\le\zeta)\Big|\lesssim \frac{(\log N)^{3/2}}{\big(n(n\gamma)^{-3/\alpha}\big)^{1/8}},$$ which converges to $0$ as $n\to\infty$ when $\alpha>2$ and $\gamma= n^{-\xi}$ with $\xi > \max\{1-\alpha/3, 0\}$. 
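To make this final rate concrete, the right-hand side $(\log N)^{3/2}\big(n(n\gamma)^{-3/\alpha}\big)^{-1/8}$ can be evaluated directly. The following stdlib-only snippet (the function name and the parameter values $\alpha=3$, $\xi=0.6$, $N=100$ are ours, chosen to satisfy $\alpha>2$ and $\xi>\max\{1-\alpha/3,0\}$) illustrates that the bound is decreasing in $n$:

```python
import math

def gaussian_approx_bound(n, alpha, xi, N):
    """(log N)^{3/2} / (n * (n*gamma)^{-3/alpha})^{1/8} with gamma = n^{-xi}."""
    gamma = n ** (-xi)
    return math.log(N) ** 1.5 / (n * (n * gamma) ** (-3.0 / alpha)) ** 0.125

# With gamma = n^{-0.6} and alpha = 3, the denominator grows like n^{0.6/8},
# so the bound decays polynomially in n for fixed N.
bounds = [gaussian_approx_bound(n, alpha=3.0, xi=0.6, N=100)
          for n in (10**3, 10**4, 10**5, 10**6)]
assert all(b1 > b2 for b1, b2 in zip(bounds, bounds[1:]))
```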
------------------------------------------------------------------------ \ [**Proof of Lemma [Lemma 9](#app:le:GP:ztoe:con){reference-type="ref" reference="app:le:GP:ztoe:con"}**]{.ul} Let $\bar{\bm{\alpha}}^e_n =\big(\bar{\alpha}^e_n(t_1), \cdots, \bar{\alpha}^e_n(t_N)\big)^\top$ and $\bar{\bm{Z}}_n= \frac{1}{\sqrt{n}}\sum_{i=1}^n \bm{Z}_i = (\bar{Z}_n(t_1),\cdots, \bar{Z}_n(t_N))^\top$. Then $\bar{\bm{\alpha}}^e_n \mid \mathcal{D}_n \sim N(0, \Sigma^{\bar{\alpha}^e_n})$ and $\bar{\bm{Z}}_n \sim N(0,\Sigma^{\bar{Z}_n})$. Denote the $jk$-th element of the covariance matrices as $\Sigma_{j,k}^{\bar{\alpha}^e_n}$ and $\Sigma_{j,k}^{\bar{Z}_n}$, respectively. Set $b_{i\nu} = (1-(1-\gamma \mu_\nu)^{n-i})$. Then $$\begin{aligned} \Sigma_{j,k}^{\bar{\alpha}^e_n} = & \frac{1}{n(n\gamma)^{1/\alpha}} \sum_{i=1}^n \epsilon_i^2 \big(\sum_{\nu=1}^\infty b_{i\nu}\phi_\nu(X_i)\phi_\nu(t_j)\big)\cdot \big(\sum_{\nu=1}^\infty b_{i\nu}\phi_\nu(X_i)\phi_\nu(t_k)\big), \end{aligned}$$ and $\Sigma_{j,k}^{\bar{Z}_n} = \frac{1}{n(n\gamma)^{1/\alpha}}\sum_{i=1}^n \sum_{\nu=1}^\infty b_{i\nu}^2 \phi_\nu(t_k)\phi_\nu(t_j)$. Following a lemma in [@liu2023supp], we have $$\mathbb{P}\Big(|\Sigma_{j,k}^{\bar{\alpha}^e_n} - \Sigma_{j,k}^{\bar{Z}_n}| \geq C(n\gamma)^{1/(2\alpha)}n^{-1/2} \log n \Big) \leq \exp(-C_1 \log n).$$ Then, by a union bound, $$\begin{aligned} & \mathbb{P}\Big(\max_{1\leq j,k \leq N} |\Sigma_{j,k}^{\bar{\alpha}^e_n} - \Sigma_{j,k}^{\bar{Z}_n}| \geq C(n\gamma)^{1/(2\alpha)}n^{-1/2} \log n \Big) \leq N^2\exp\{-C_1\log n\}. 
\end{aligned}$$ Consequently, we have with probability at least $1-\exp\{-C\log n\}$, $$\sup_{\nu\in \mathbb{R}}\Big| \mathbb{P}^*\Big(\max_{1\leq j \leq N} \bar{\alpha}^e_n(t_j) \leq \nu \Big) - \mathbb{P}\Big(\max_{1\leq j \leq N} \bar{Z}_n(t_j) \leq \nu \Big) \Big| \preceq \big((n\gamma)^{1/\alpha}n^{-1}\big)^{1/6}(\log n)^{1/3} (\log N)^{2/3}.$$ ------------------------------------------------------------------------ \ [**Proof of Lemma [Lemma 10](#app:le:GP:etob:con){reference-type="ref" reference="app:le:GP:etob:con"}**]{.ul} Define $\alpha_{i,j}^e = e_i \cdot g_{j} (i, X_i, \epsilon_i)$ with $g_{j} (i, X_i, \epsilon_i) = \frac{1}{\sqrt{(n\gamma)^{1/\alpha}}} \epsilon_i \cdot \Omega_{n,i}(t_j)=\frac{1}{(n\gamma)^{1/(2\alpha)}}\sum_{\nu=1}^\infty \big(1-(1-\gamma \mu_\nu)^{n-i}\big)\phi_\nu(X_i)\phi_\nu(t_j)\epsilon_i.$ We have $\mathbb{E}^* (\alpha_{i,j}^e \cdot \alpha_{i,\ell}^e )= g_{j} (i, X_i, \epsilon_i)g_{\ell} (i, X_i, \epsilon_i)$ and $\mathbb{E}^*(\alpha_{i,j}^e \cdot \alpha_{k,j}^e) =0$ for $i\neq k$. Define $\bm{\alpha}_{i}^e= (\alpha_{i,1}^e,\dots, \alpha_{i,N}^e)^\top$; then $\bm{\alpha}_{i}^e$ and $\bm{\alpha}_{k}^e$ are independent for $i\neq k$ and $i,k=1,\dots,n$. Let $\bar{\bm{\alpha}}_n^e = \frac{1}{\sqrt{n}}\sum_{i=1}^n \bm{\alpha}_{i}^e= \big( \bar{\alpha}_n^e(t_1), \dots, \bar{\alpha}_n^e(t_N) \big)^\top$ with $\bar{\alpha}_n^e(t_j) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \alpha^e_{i,j} = \frac{1}{\sqrt{n}} \sum_{i=1}^n e_i \cdot g_{j} (i, X_i, \epsilon_i)$ for $j=1,\dots, N$. Similarly, denote $\alpha_{i,j}^b = (w_i-1) \cdot g_{j} (i, X_i, \epsilon_i)$ and $\bm{\alpha}_{i}^b= (\alpha_{i,1}^b,\dots, \alpha_{i,N}^b)^\top$. 
Then we have $\mathbb{E}^* (\alpha_{i,j}^b \cdot \alpha_{i,\ell}^b )= g_{j} (i, X_i, \epsilon_i)g_{\ell} (i, X_i, \epsilon_i)$, and $\mathbb{E}^*(\alpha_{i,j}^b \cdot \alpha_{k,j}^b) =0$ for $i\neq k$. Denote $\bar{\bm{\alpha}}_n^b = \frac{1}{\sqrt{n}}\sum_{i=1}^n \bm{\alpha}_{i}^b= \big( \bar{\alpha}_n^b(t_1), \dots, \bar{\alpha}_n^b(t_N) \big)^\top$ with $\bar{\alpha}_n^b(t_j) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \alpha^b_{i,j}$. The proof of Lemma [Lemma 10](#app:le:GP:etob:con){reference-type="ref" reference="app:le:GP:etob:con"} follows that of Lemma [Lemma 8](#app:le:GP:atoz:con){reference-type="ref" reference="app:le:GP:atoz:con"}. We adopt the notation and follow the proof of Lemma [Lemma 8](#app:le:GP:atoz:con){reference-type="ref" reference="app:le:GP:atoz:con"} step by step, with only the following changes: (1) replacing $\bar{\bm{\alpha}}_n$ with $\bar{\bm{\alpha}}_n^b$; (2) replacing $\bar{\bm{Z}}_n$ with $\bar{\bm{\alpha}}_n^e$; (3) replacing the probability $\mathbb{P}(\cdot)$ and expectation $\mathbb{E}(\cdot)$ with the conditional probability $\mathbb{P}^*(\cdot)=\mathbb{P}(\cdot \mid \mathcal{D}_n)$ and conditional expectation $\mathbb{E}^*(\cdot)= \mathbb{E}(\cdot \mid \mathcal{D}_n)$. 
Then equation ([\[pf:J3:con\]](#pf:J3:con){reference-type="ref" reference="pf:J3:con"}) here will be adapted to $$\label{eq:J3:conditional:con} |J_3| \le\frac{1}{n^{3/2}}32(C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n) \Big(\sum_{i=1}^{n}(\mathbb{E}^*\{\max_{1\leq k\leq N}|\alpha^b_{i,k}|^3\}+\mathbb{E}^*\{\max_{1\leq k\leq N}|\alpha^e_{i,k}|^3\})\Big).$$ We have $$\begin{aligned} \mathbb{E}^* \max_{1\leq k \leq N} |\alpha^b_{i,k}|^3 \leq & \frac{1}{(n\gamma)^{3/(2\alpha)}} \max_{1\leq k \leq N} |\epsilon_i\cdot \Omega_{n,i}(t_k)|^3 \cdot \mathbb{E}|w_i-1|^3 \lesssim |\epsilon_i|^3(n\gamma)^{3/(2\alpha)}, \end{aligned}$$ where the last inequality uses the bound $\max\limits_{1\leq k \leq N} |\Omega_{n,i}(t_k)|^3 \leq c_\phi^6 (n\gamma)^{3/\alpha}$ from the proof of Lemma [Lemma 8](#app:le:GP:atoz:con){reference-type="ref" reference="app:le:GP:atoz:con"}. Then with probability at least $1-n^{-1}$, we have $$\begin{aligned} \frac{1}{n^{3/2}}\sum_{i=1}^n\mathbb{E}^*\big(\max_{1\leq k \leq N} |\alpha^b_{i,k}|^3\big) \leq & n^{-1/2} (n\gamma)^{3/(2\alpha)} \frac{1}{n}\sum_{i=1}^n |\epsilon_i|^3 \leq C n^{-1/2} (n\gamma)^{3/(2\alpha)}. \end{aligned}$$ Similarly, with probability at least $1-n^{-1}$, $\frac{1}{n^{3/2}}\sum_{i=1}^n\mathbb{E}^*\big(\max_{1\leq k \leq N} |\alpha^e_{i,k}|^3\big) \leq C n^{-1/2} (n\gamma)^{3/(2\alpha)},$ where $C$ is a constant independent of $n$. Then we have with probability at least $1-2n^{-1}$, $$\begin{aligned} J_3 \leq & C (C_3\psi_n^3+6C_2q\psi_n^2+6C_1q^2\psi_n) n^{-1/2} (n\gamma)^{3/(2\alpha)}. \end{aligned}$$ Therefore, following the proof of Lemma [Lemma 8](#app:le:GP:atoz:con){reference-type="ref" reference="app:le:GP:atoz:con"}, we have $$\begin{aligned} \label{eq:max:1} & |\mathbb{P}^*(\max_{1\le k\le N}\bar{\alpha}_n^b(t_k)\le\zeta) -\mathbb{P}^*(\max_{1\le k\le N}\bar{\alpha}_n^e(t_k)\le\zeta)| \\ \leq & C_0(C_3\psi_n^3+6C_2q\psi_n^2 + 6C_1 q^2\psi_n) n^{-1/2} (n\gamma)^{3/(2\alpha)}+ C^{''}(\psi_n^{-1} + q^{-1}\log N)(1+\sqrt{2\log N}). 
\end{aligned}$$ Consequently, letting $\psi_n = q = \big(n(n\gamma)^{-3/\alpha}\big)^{1/8}$, we have with probability at least $1-4/n$, $$\sup_{\zeta\in\mathbb{R}}\Big|\mathbb{P}^*(\max_{1\le k\le N}\bar{\alpha}_n^b(t_k)\le\zeta) -\mathbb{P}^*(\max_{1\le k\le N}\bar{\alpha}_n^e(t_k)\le\zeta)\Big| \lesssim \frac{(\log N)^{3/2}}{\big(n(n\gamma)^{-3/\alpha}\big)^{1/8}}.$$ ------------------------------------------------------------------------ \ [^1]: Department of Statistics, Virginia Tech, Blacksburg, VA. Email: meimeiliu\@vt.edu [^2]: Mathematical Sciences, NJIT, Newark, NJ. Email: zshang\@njit.edu [^3]: Department of Statistics, UIUC, Champaign, IL. Email: yy84\@illinois.edu
*Source: arXiv:2310.00881, "Scalable Statistical Inference in Non-parametric Least Squares", by Meimei Liu, Zuofeng Shang and Yun Yang (math.ST, stat.TH).*
--- abstract: | We develop a new kind of nonnegativity certificate for univariate polynomials on an interval. In many applications, nonnegative Bernstein coefficients are often used as a simple way of certifying polynomial nonnegativity. Our proposed condition is instead an explicit lower bound for each Bernstein coefficient in terms of the geometric mean of its adjacent coefficients, which is provably less restrictive than the usual test based on nonnegative coefficients. We generalize to matrix-valued polynomials of arbitrary degree, and we provide numerical experiments suggesting the practical benefits of this condition. The techniques for constructing this inexpensive certificate could potentially be applied to other semialgebraic feasibility problems. author: - | Mitchell Tong Harris\ [mitchh\@mit.edu](mitchh@mit.edu) - | Pablo A. Parrilo\ [parrilo\@mit.edu](parrilo@mit.edu) bibliography: - references.bib title: | Improved Nonnegativity Testing in the Bernstein Basis\ via Geometric Means --- # Introduction {#sec:intro} The cubic Bernstein polynomials (e.g., [@farouki2012bernstein; @lorentz2012bernstein]) are $$b_0(x) = (1-x)^3, \qquad b_1(x) = 3x(1-x)^2, \qquad b_2(x) = 3x^2(1-x), \qquad b_3(x) = x^3.$$ Suppose we have a polynomial $p(x)$ such that $$\label{eqn:cubic} p(x) = p_0 b_0(x) + p_1b_1(x) + p_2b_2(x) + p_3b_3(x).$$ We would like to find explicit conditions on the real numbers $p_0, p_1,p_2,p_3$ (called Bernstein coefficients) that guarantee $p(x) \geq 0$ on $[0,1]$. This task and its higher degree variants discussed in Section [4](#sec:generalization){reference-type="ref" reference="sec:generalization"} are central questions in applied mathematics. Research on nonnegativity certificates of different kinds is a classical topic in real algebraic geometry, including well-known work by Sturm, Hilbert, Artin, and others; see e.g. [@BPRbook; @powers2021certificates] and the references therein. 
More recently, concrete connections to applications in statistics, control, and optimization have heightened interest in finding computation-friendly nonnegativity certificates [@parrilo2000structured; @papp2011optimization; @ahmadi2016some]. Existing conditions that have been used in applications as nonnegativity certificates of [\[eqn:cubic\]](#eqn:cubic){reference-type="eqref" reference="eqn:cubic"} on intervals can roughly be classified as follows: 1. **Exact characterizations:** The Markov--Lukács Theorem ([@szeg1939orthogonal p. 4]) gives a necessary and sufficient condition for nonnegativity on an interval. It states that a polynomial $p$ is nonnegative on $[0,1]$ if and only if there exist polynomials $s_1$ and $s_2$ such that $$p(x) = x \, s_1(x)^2 + (1-x) \, s_2(x)^2.$$ In the cubic case, the polynomials $s_1$ and $s_2$ are linear, so writing this condition in the language of Bernstein polynomials along with a characterization of polynomials that are sums of squares (SOS) yields the second-order cone (SOCP) condition in Lemma [\[lem:nonnegative_socp\]](#lem:nonnegative_socp){reference-type="ref" reference="lem:nonnegative_socp"}. While this condition has fixed computational cost and is necessary and sufficient for a polynomial to be nonnegative on the interval, it is nontrivial to verify -- one has to search for the linear polynomials $s_1$ and $s_2$, typically via convex optimization. Alternatively, an exact characterization can be given in terms of the discriminant of $p$. We elaborate on this approach in Section [3.3](#sec:cubic-exact){reference-type="ref" reference="sec:cubic-exact"}. As mentioned in [@schmidt1988positivity], conditions of this type on $p$ are typically more complicated to understand and to compute, particularly for higher degrees. 2. **Sufficient conditions:** A simple, sufficient condition for nonnegativity of the polynomial $p$ is $p_0,p_1,p_2,p_3 \geq 0$. 
Because each of the Bernstein polynomials is nonnegative on the interval, a nonnegative combination of them is nonnegative too. This idea was mentioned by Bernstein [@bernshtein1915representation] in 1915, expanded in 1966 by [@cargo1966bernstein], and further discussed more recently in [@lin1995methods]. The downside to this condition is that it is potentially conservative -- there are plenty of polynomials nonnegative on $[0,1]$ that have at least one negative Bernstein coefficient. We give names to the sets of polynomials that satisfy each of these conditions. **Definition 1**. *$\mathcal{P}$ ("Positive") is the set of cubic polynomials such that $p(x) \geq 0$ on $[0,1]$.* **Definition 2**. *$\mathcal{NB}$ ("Nonnegative Bernstein") is the set of cubic polynomials whose Bernstein coefficients are all nonnegative.* These conditions reflect two extremes in the tradeoff space between cost and quality of approximation. On the one hand, $\mathcal{P}$ is the full set, but testing membership is nontrivial. On the other hand, $\mathcal{NB}$ is a smaller set for which checking membership is easy. The purpose of this paper is to introduce a set that lies in between $\mathcal{NB}$ and $\mathcal{P}$, a set that better trades off accuracy for computational cost. In doing so, we highlight a technique for generating new sufficient conditions for nonnegativity based on making an explicit data-dependent choice of feasible solutions in the SOS characterizations. In the cubic case, these conditions become the explicit set of inequalities $p_0, p_3 \geq 0$ and $$p_2 \geq -2\sqrt{\frac{p_3\max(p_1,0)}{3}} \qquad \text{and} \qquad p_1 \geq -2\sqrt{\frac{p_0\max(p_2,0)}{3}}.$$ These inequalities imply nonnegativity of $p$ on the interval without requiring solving a program with additional decision variables. This new condition, and its higher degree generalizations, more gracefully balance the tradeoffs between efficiency and approximation power. 
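These cubic inequalities are cheap enough to spell out in code. Below is a minimal, stdlib-only sketch (function names are ours); the example coefficients $(1,-2,3,1)$, which belong to $(1-x)(1-4x)^2 + x^3$, fail the all-nonnegative test yet satisfy the geometric-mean condition:

```python
import math

def geometric_mean_certificate(p0, p1, p2, p3):
    """Sufficient test for p0*b0 + p1*b1 + p2*b2 + p3*b3 >= 0 on [0, 1]:
    p0, p3 >= 0, plus the two explicit geometric-mean lower bounds."""
    return (
        p0 >= 0 and p3 >= 0  # checked first, so the sqrt arguments below are >= 0
        and p2 >= -2 * math.sqrt(p3 * max(p1, 0) / 3)
        and p1 >= -2 * math.sqrt(p0 * max(p2, 0) / 3)
    )

def bernstein_eval(coeffs, x):
    """Evaluate a cubic given in the Bernstein basis."""
    p0, p1, p2, p3 = coeffs
    return (p0 * (1 - x) ** 3 + 3 * p1 * x * (1 - x) ** 2
            + 3 * p2 * x ** 2 * (1 - x) + p3 * x ** 3)

coeffs = (1.0, -2.0, 3.0, 1.0)          # a negative Bernstein coefficient
assert geometric_mean_certificate(*coeffs)
assert all(bernstein_eval(coeffs, k / 1000) >= 0 for k in range(1001))
```

Note that the test reads only the four coefficients and takes two square roots; no optimization problem is solved.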
We show how these fast-to-check, explicit conditions naturally generalize to arbitrary degree (Section 4) and polynomial matrices (Section 5); see Definition [\[def:GBmatrix\]](#def:GBmatrix){reference-type="ref" reference="def:GBmatrix"} and Theorem [\[thm:arbitrary_general_inclusions\]](#thm:arbitrary_general_inclusions){reference-type="ref" reference="thm:arbitrary_general_inclusions"}. Finally, we illustrate how using them in the right context may have practical benefits. ## Related work There is a long history of work related to nonnegativity certificates. Some of the oldest results are due to Descartes, Sylvester, Sturm, Budan and Fourier (see, for instance, [@bochnack2013real; @BPRbook; @anderson1998descartes]). In the multivariate case, Hilbert's 1888 paper on sums of squares [@hilbert1888ueber] again revitalized the area. Ever since, there has been consistent work on characterizing large classes of nonnegative polynomials [@powers2011positive; @powers2021certificates]. Due to the intrinsic computational difficulty of the problem, all these methods trade off conservativeness for computational ease to varying extents. Bernstein's original 1915 work [@bernshtein1915representation] provided a sufficient condition for nonnegativity on the unit interval. This condition is exceptionally easy to test, and within about fifty years a higher-dimensional version was adopted for applications in the context of computer aided design. The higher-dimensional version is the familiar Bézier curve [@de1963courbes; @bezier1966definition; @bezier1967definition], which is a parametric curve such that each coordinate is a univariate Bernstein polynomial. Bernstein's sufficient condition in one dimension is exactly the defining condition of $\mathcal{NB}$; it can be generalized to the containment of a Bézier curve in the polytope defined by the convex hull of the Bernstein coefficient vectors [@biran2018geometry]. 
This condition is tractable enough that it has been used in applications ranging from geometric constraint systems [@foufou2012bernstein] to problems of robust stability [@garloff2000application; @garloff1997speeding; @malan1992b; @zettler1998robustness]. On the other hand, sum of squares nonnegativity certificates were not too practical until the development of further connections to tractable optimization problems. Sum of squares polynomials became useful for applications when Shor [@shor1987class], Nesterov [@nesterov2000squared], Parrilo [@parrilo2000structured] and Lasserre [@lasserre2001global] used convex optimization and semidefinite programming to algorithmically check this property. Sum of squares techniques allow a more precise tuning of the computational costs -- the more conservative one wants to be, the more expensive the method becomes. Further work built on this by restricting to subsets of the set of sum of squares for which checking membership is more tractable. One proposal is to look for easier-to-check certificates; e.g., instead of requiring the associated Gram matrix to be positive semidefinite, one can use the stronger condition of diagonal dominance or scaled diagonal dominance [@ahmadi2019dsos]. Other classes of nonnegativity certificates have also been explored, with an eye towards potential computational benefits. A thread of work that goes back to Reznick [@reznick1989forms] uses a decomposition of a polynomial as a sum whose terms are nonnegative due to the AM-GM inequality. This idea, combined with relative entropy optimization [@chandrasekaran2016relative], has been used to generate a variety of new nonnegativity conditions including those presented in [@fidalgo2011positive; @chandrasekaran2016relative; @iliman2016amoebas]. In general, these sets are incomparable with the set of sum of squares, so they exemplify another compromise between conservativeness and computational ease. 
These approaches have a similar flavor to our proposal; however, checking the membership condition of the existing methods still requires the solution of some program, so they are generally more expensive than checking Bernstein's condition. Because our conditions are explicit, and do not require solving any optimization programs, they sit closer to the cheaper, less explored end of the tradeoff spectrum. # Preliminaries {#sec:preliminaries-parent} ## Notation We use $\mathbb{R}_d[x]$ to denote the vector space of univariate polynomials of degree less than or equal to $d$. Given a set $S$, let $S^\circ$ be the interior of the set $S$. While the paper begins with a focus on univariate polynomials, we eventually generalize our results to polynomial matrices in Section [5](#sec:polynomial-matrices){reference-type="ref" reference="sec:polynomial-matrices"}. In that case, the analogue of being pointwise nonnegative is to be pointwise positive semidefinite. Let $\mathcal{P}_{d}^{n}$ denote the set of symmetric $n \times n$ univariate polynomial matrices of degree $d$ with nonnegative eigenvalues on the unit interval. For example, $\mathcal{P}_3^1$ is the set of cubic polynomials nonnegative on $[0,1]$. We use the same subscript/superscript convention for other sets such as in $\mathcal{NB}_3^1$. If we drop the dimension superscript or the degree subscript, it should be clear from context. If $x \in \mathbb{R}$, let $x_+ = \max(0, x)$. Let $X, Y \in S^n$ be $n \times n$ symmetric matrices. Let $X_+$ be the orthogonal projection of $X$ onto the positive semidefinite cone. We write $X \succeq Y$ if $X-Y$ is positive semidefinite and $X \succ Y$ if $X-Y$ is positive definite. If $X$ is positive semidefinite, then $X^{\frac{1}{2}}$ is its matrix square root, which is the unique positive semidefinite matrix $Z$ such that $Z^2 = X$. 
Both the projection and the square root can be computed with eigenvalue decompositions; a review of the algorithms is given in Appendix [8](#sec:preliminaries-proofs){reference-type="ref" reference="sec:preliminaries-proofs"}. ## Second order cones and Bernstein polynomials {#sec:preliminaries} The two main ingredients in our method are Bernstein polynomials and second order cones. We define both before presenting the second order cone condition for nonnegativity of a cubic Bernstein polynomial. **Definition 3**. *The three-dimensional *rotated second order cone* $\mathcal{Q} \subset \mathbb{R}^3$ is the set $$\label{eqn:definingQ} \mathcal{Q} = \{ (x_0, x_1; x_2) \in \mathbb{R}^+ \times \mathbb{R}^+ \times \mathbb{R} \, | \, 2x_0 x_1 \geq x_2^2\}.$$* This is a closed convex cone. The point $(x_0, x_1; x_2) \in \mathcal{Q}$ if and only if the matrix $$\label{eqn:2x2psdQversion} \begin{pmatrix} x_0 & \frac{1}{\sqrt{2}}x_2 \\ \frac{1}{\sqrt{2}}x_2 & x_1 \end{pmatrix}$$ is positive semidefinite. The second order cone is self-dual; i.e., $\mathcal{Q}$ is the set of points whose inner product with every point in $\mathcal{Q}$ is nonnegative. In particular, if $(x_0, x_1; x_2) \in \mathcal{Q}$ and $(y_0, y_1; y_2) \in \mathcal{Q}$, then $x_0 y_0 + x_1 y_1 + x_2y_2 \geq 0$. See [@alizadeh2003second] for more background on $\mathcal{Q}$. **Definition 4**. *Let $0 \leq i \leq d$. The $i$th degree $d$ *Bernstein polynomial* is $$b_i(x) = \binom{d}{i}x^{i}(1-x)^{d-i}.$$ The degree $d$ of $b_i$ will be made clear from context.* There are two well-known properties of Bernstein polynomials we use: 1. They form a basis: $\{b_i\}_{i=0}^d$ is a basis of $\mathbb{R}_d[x]$. 2. The basis is nonnegative: For every $d$, $0 \leq i \leq d$, and $0 \leq x \leq 1$, $b_i(x) \geq 0$. Even though the second property is stated for the unit interval, Bernstein polynomials are useful in a more general context. 
If the interval of interest is not $[0,1]$ but rather $[r,s]$, change $b_i(x)$ to $b_i(\frac{x-r}{s-r})$. This change of variables allows us to restrict our attention to the $[0,1]$ case without loss of generality. The following lemma is an algebraic identity relating consecutive Bernstein polynomials. The identity provides a connection between Bernstein polynomials and the second order cone $\mathcal{Q}$. lembernsteinsocp[\[lemma:bernstein_socp\]]{#lemma:bernstein_socp label="lemma:bernstein_socp"} Let $$\label{eqn:midef}m_i = \frac12 \cdot \frac{(i+1) \cdot (d-i+1)}{i \cdot (d-i)}.$$ Then for $1 \leq i \leq d-1$, $$2m_i \, b_{i-1}(x) b_{i+1}(x) = b_{i}(x)^2.$$ In particular, for every $0 \leq x \leq 1$, $$\label{eqn:bernstein_socp} \Bigg(b_{i-1}(x), b_{i+1}(x); \frac{1}{\sqrt{m_i}}b_i(x) \Bigg) \in \mathcal{Q}.$$ *Proof.* We calculate $$\begin{aligned} 2m_ib_{i-1}(x)b_{i+1}(x) &= \frac{(i+1) \cdot (d-i+1)}{i \cdot (d-i)}\left(\binom{d}{i-1}x^{i-1}(1-x)^{d-i+1}\right) \left( \binom{d}{i+1}x^{i+1}(1-x)^{d-i-1}\right)\\ &= \frac{(i+1) \cdot (d-i+1)}{i \cdot (d-i)} \frac{d!}{(i-1)!(d-i+1)!}\frac{d!}{(i+1)!(d-i-1)!}x^{2i}(1-x)^{2(d-i)}\\ &= \left(\frac{d!}{i!(d-i)!}\right)^2x^{2i}(1-x)^{2(d-i)}\\ &= b_{i}^2(x). \end{aligned}$$ Therefore, the quadratic inequality required by [\[eqn:bernstein_socp\]](#eqn:bernstein_socp){reference-type="eqref" reference="eqn:bernstein_socp"} holds with equality -- geometrically, these points all lie on the boundary of $\mathcal{Q}$. ◻ **Definition 5**. *Let $p \in \mathbb{R}_d[x]$. The *Bernstein coefficients* of $p$ are the unique real numbers $\{p_i\}_{i=0}^d$ such that $$p(x) =\sum_{i=0}^d p_i b_i( x).$$* We write $p_i$ to refer to the Bernstein coefficients of a polynomial $p$. #### Subdivision method The Bernstein coefficients are a key ingredient of the *subdivision method* to prove nonnegativity of a polynomial on the interval. The basic idea is to recursively bisect the domain, until a given termination criterion is satisfied. 
The subdivision method with nonnegativity criterion $S$ is described by the following algorithm. > - Check if $p \in S$. If yes, terminate. > - Otherwise, subdivide $p$ into $p^1$ and $p^2$ (explained below). Repeat algorithm with $p^1$ and $p^2$ and wait until both calls terminate. If the algorithm terminates, then $p \in \mathcal{P}$. Termination of the subdivision algorithm is guaranteed for positive polynomials when $S = \mathcal{NB}$ [@leroy2012convergence]. As explained below, if the polynomial $p$ is written in the Bernstein basis, each of the steps can be efficiently computed. Step 1 of the subdivision method is to check an explicit nonnegativity condition (which we interchangeably call a "termination" or "nonnegativity" criterion). If $S = \mathcal{NB}$, then we can efficiently check whether $p \in \mathcal{NB}$ when the Bernstein coefficients are given. Step 2 of the subdivision method is to subdivide a polynomial. The idea of subdivision is to split the unit interval into two subintervals and give two polynomials that agree with the original on each half. Those new polynomials are rescaled to be defined on $[0,1]$. An illustration of this idea is given for the polynomial $(1-2x)^2$ in Figure [\[fig:subdiv-explanation\]](#fig:subdiv-explanation){reference-type="ref" reference="fig:subdiv-explanation"}. Let $p(x) \in \mathbb{R}_d[x]$. Subdividing $p$ gives polynomials $p^1(x)$ and $p^2(x)$ such that $$\label{eqn:subdivision-def} p(x) = \begin{cases} p^1(2x) \qquad &0 \leq x \leq \frac{1}{2}\\ p^2(2x-1) \qquad &\frac{1}{2} \leq x \leq 1.\end{cases}$$ When the Bernstein coefficients of a polynomial $p$ are known, De Casteljau's algorithm [@farouki2012bernstein] provides an efficient formula for computing the Bernstein coefficients of $p^1$ and $p^2$. 
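A hedged, stdlib-only sketch of the midpoint step for cubics (helper names ours), checked against the defining identity ([\[eqn:subdivision-def\]](#eqn:subdivision-def){reference-type="eqref" reference="eqn:subdivision-def"}):

```python
def cubic_bernstein(coeffs, x):
    """Evaluate a cubic given by its Bernstein coefficients."""
    p0, p1, p2, p3 = coeffs
    return (p0 * (1 - x) ** 3 + 3 * p1 * x * (1 - x) ** 2
            + 3 * p2 * x ** 2 * (1 - x) + p3 * x ** 3)

def subdivide_cubic(coeffs):
    """De Casteljau subdivision at x = 1/2: Bernstein coefficients of the
    left-half and right-half polynomials, each rescaled back to [0, 1]."""
    p0, p1, p2, p3 = coeffs
    left = (p0,
            (p0 + p1) / 2,
            (p0 + 2 * p1 + p2) / 4,
            (p0 + 3 * p1 + 3 * p2 + p3) / 8)
    right = ((p0 + 3 * p1 + 3 * p2 + p3) / 8,
             (p1 + 2 * p2 + p3) / 4,
             (p2 + p3) / 2,
             p3)
    return left, right

coeffs = (1.0, -2.0, 3.0, 1.0)
left, right = subdivide_cubic(coeffs)
for k in range(501):
    x = k / 1000                      # x ranges over [0, 1/2]
    assert abs(cubic_bernstein(coeffs, x) - cubic_bernstein(left, 2 * x)) < 1e-12
    assert abs(cubic_bernstein(coeffs, x + 0.5) - cubic_bernstein(right, 2 * x)) < 1e-12
```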
For cubic polynomials, the Bernstein coefficients of $p^1$ and $p^2$ are $$\begin{pmatrix} 1 & 0 & 0 & 0\\ \frac{1}{2} & \frac{1}{2} & 0 & 0\\ \frac{1}{4} & \frac{2}{4} & \frac{1}{4} & 0 \\ \frac{1}{8} & \frac{3}{8} & \frac{3}{8} & \frac{1}{8} \end{pmatrix}\begin{pmatrix} p_0\\p_1\\p_2\\p_3 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} \frac{1}{8} & \frac{3}{8} & \frac{3}{8} & \frac{1}{8}\\ 0 & \frac{1}{4} &\frac{2}{4} & \frac{1}{4}\\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0&0&0&1 \end{pmatrix} \begin{pmatrix} p_0\\p_1\\p_2\\p_3 \end{pmatrix}.$$ In Section [6](#sec:numerical){reference-type="ref" reference="sec:numerical"}, we explore how the number of subdivision steps required by this method is impacted by using our new nonnegativity criterion versus using $\mathcal{NB}$. While we consider just one variant of univariate subdivision, the general idea of subdivision is useful for a variety of problems in robustness analysis and computer-aided design [@lane1980theoretical]. # The cubic case We begin with the cubic case because it is the simplest setting in which our results are nontrivial. First we present the exact condition for nonnegativity alluded to in Section [1](#sec:intro){reference-type="ref" reference="sec:intro"} and describe a strategy to use it to get new conditions on polynomials that imply nonnegativity. Then we employ this method to provide one based on geometric means in Section [3.1](#sec:P2_def){reference-type="ref" reference="sec:P2_def"} and justify why it is natural in Section [3.2](#sec:why){reference-type="ref" reference="sec:why"}. Since in the cubic case the set of nonnegative polynomials can be understood via the discriminant, Section [3.3](#sec:cubic-exact){reference-type="ref" reference="sec:cubic-exact"} contains an exact discriminantal condition, unrelated to our geometric mean one, in order to compare how much simpler ours is. 
We finish this section by showing how the geometric mean condition can also be used to construct an exact nonnegativity characterization. The following lemma, written in the language of Bernstein polynomials and second order cones, gives an exact characterization of cubic nonnegativity and is the basis for how we develop our new conditions. lemnonnegativesocp\[Cubic nonnegativity and SOCP\][\[lem:nonnegative_socp\]]{#lem:nonnegative_socp label="lem:nonnegative_socp"} Let $p \in\mathbb{R}_3[x]$. The polynomial $p \in \mathcal{P}$ if and only if there exist $c_1, c_2 \in \mathbb{R}$ such that $$\label{eqn:nonnegative_socp} \Big(p_0, \; p_2-\frac{c_2}{3} ; \; \frac{c_1}{\sqrt{6}} \Big) \in \mathcal{Q} \qquad \text{ and } \qquad \Big(p_1-\frac{c_1}{3}, \; p_3; \; \frac{c_2 }{\sqrt{6}}\Big)\in \mathcal{Q}.$$ *Proof.* First we prove sufficiency: If the second order cone conditions hold for $p$, then $p \in \mathcal{P}$. In this proof, we write $b_i$ to mean $b_i(x)$ for any fixed $0 \leq x \leq 1$. Plugging $d = 3$ into Lemma [\[lemma:bernstein_socp\]](#lemma:bernstein_socp){reference-type="ref" reference="lemma:bernstein_socp"}, we get $(b_0, b_2;\sqrt{\frac{2}{3}} b_1) \in \mathcal{Q}$ and $(b_1, b_3; \sqrt{\frac{2}{3}}b_2) \in \mathcal{Q}$. Applying self duality of the second order cone, we get $$\begin{aligned}(p_0, p_2-\frac{c_2}{3}; \frac{c_1}{\sqrt{6}}) \cdot (b_0, b_2;\sqrt{\frac{2}{3}} b_1)& %=3p_0b_0 + (3p_2-a)b_2 + bb_1 \geq 0 = &-\frac{c_2}{3}b_2 + &\frac{c_1}{3}b_1 + p_0b_0 + p_2b_2 &\geq 0 \\ (p_1-\frac{c_1}{3}, p_3; \frac{c_2}{\sqrt{6}}) \cdot (b_1, b_3; \sqrt{\frac{2}{3}}b_2)& %= (3p_1 - b)b_1 + 3p_3b_3 + ab_2 = &\frac{c_2}{3}b_2 -&\frac{c_1}{3}b_1 + p_1b_1 + p_3b_3 &\geq 0.\end{aligned}$$ Adding these, we get $p_0b_0 + p_1b_1 + p_2b_2 + p_3b_3 \geq 0$ as desired. That proves sufficiency. Next we prove necessity. If $p \geq 0$ on $[0,1]$, then by the Markov--Lukács Theorem ([@szeg1939orthogonal p. 
4]), $p(x) = (1-x)q_1^2(x) + x q_2^2(x)$ where $q_1(x)$ and $q_2(x)$ are linear functions. A basis of linear functions is given by $\{1-x, x\}$, so there exist $M_1, M_2 \succeq 0$ such that $$p(x) = (1-x) \begin{pmatrix} 1-x \\ x \end{pmatrix}^T M_1 \begin{pmatrix} 1-x \\ x \end{pmatrix} + x\begin{pmatrix} 1-x \\ x \end{pmatrix}^T M_2 \begin{pmatrix} 1-x \\ x \end{pmatrix}.$$ The affine constraints on the coefficients give $M_1 = \begin{pmatrix} p_0 & \frac{c_1}{2} \\ \frac{c_1}{2} & 3p_2 - c_2 \end{pmatrix}$ and $M_2 = \begin{pmatrix} 3p_1-c_1 & \frac{c_2}{2} \\ \frac{c_2}{2} &p_3 \end{pmatrix}$ for some $c_1, c_2\in \mathbb{R}$. Multiplying $M_1$ by $\begin{pmatrix} 1 & 0 \\ 0 &\frac{1}{\sqrt{3}} \end{pmatrix}$ on both sides and $M_2$ by $\begin{pmatrix} \frac{1}{\sqrt{3}} & 0 \\ 0 &1 \end{pmatrix}$ on both sides gives two matrices that are positive semidefinite if and only if [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"} is satisfied. ◻

**Remark 1**. *Lemma [\[lem:nonnegative_socp\]](#lem:nonnegative_socp){reference-type="ref" reference="lem:nonnegative_socp"} above can be interpreted as a specialization to the cubic case of the usual SDP characterization of nonnegative polynomials, adapted to the interval; see e.g., [@BPRbook Lemma 3.33]. This is because SDPs of size $2 \times 2$ can be equivalently written as second-order cone programs.*

## A new strategy {#sec:P2_def}

If we want to use Lemma [\[lem:nonnegative_socp\]](#lem:nonnegative_socp){reference-type="ref" reference="lem:nonnegative_socp"} to prove that a given polynomial $p \in \mathcal{P}$ is nonnegative on the interval, we must find $c_1$ and $c_2$ that satisfy [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"}. Here are three possible strategies to find $c_1$ and $c_2$.
#### Convex optimization

Given $p$, one option is to solve the convex feasibility problem [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"} for some feasible $c_1$ and $c_2$. This is a foolproof strategy, in the sense that for every $p \in \mathcal{P}$ there always exist (generally nonunique) solutions $c_i$. The drawback of this strategy is the computational cost.

#### Constant guess, independent of $p$

At the other extreme, one could ignore $p$ and always try the same constants $c_1$ and $c_2$. Any fixed choice of constants $c_1$ and $c_2$ induces a set of polynomials $p$ that are feasible for [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"}; this set is a (possibly empty) convex subset of $\mathcal{P}$. Given $p$ and the fixed constant choices $c_1$ and $c_2$, checking feasibility of [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"} is as simple as verifying a few inequalities. In fact, this is the strategy used to construct $\mathcal{NB}$: the set of polynomials that are feasible in [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"} when $c_1 = c_2 = 0$ is exactly the set $\mathcal{NB}$. This also points out the main weakness of this strategy: there are plenty of polynomials in $\mathcal{P}$ that are not feasible in [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"} for $c_1 = c_2 = 0$. For example, consider $p = (1-x)(1-4x)^2 + x^3$, whose Bernstein coefficients are $(1,-2,3,1)$, not all of which are nonnegative. On the other hand, the conditions in Lemma [\[lem:nonnegative_socp\]](#lem:nonnegative_socp){reference-type="ref" reference="lem:nonnegative_socp"} hold for $c_1 = -6$ and $c_2 = 0$.
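This last claim is easy to verify numerically. In the sketch below (assuming `numpy`; we encode membership $(a, b; c) \in \mathcal{Q}$ as $a, b \geq 0$ and $2ab \geq c^2$, the conditions used in the proof of the lemma above), the coefficients $(1,-2,3,1)$ satisfy the certificate with $c_1 = -6$ and $c_2 = 0$:

```python
import numpy as np

def in_Q(a, b, c, tol=1e-12):
    """Rotated second order cone: a, b >= 0 and 2ab >= c^2."""
    return a >= -tol and b >= -tol and 2 * a * b >= c * c - tol

p0, p1, p2, p3 = 1.0, -2.0, 3.0, 1.0   # Bernstein coeffs of (1-x)(1-4x)^2 + x^3
c1, c2 = -6.0, 0.0

cert1 = in_Q(p0, p2 - c2 / 3, c1 / np.sqrt(6))
cert2 = in_Q(p1 - c1 / 3, p3, c2 / np.sqrt(6))
assert cert1 and cert2          # the lemma's certificate of nonnegativity

# Cross-check: the polynomial really is nonnegative on [0, 1].
x = np.linspace(0, 1, 1001)
b = np.array([(1 - x)**3, 3*x*(1 - x)**2, 3*x**2*(1 - x), x**3])
assert (np.array([p0, p1, p2, p3]) @ b).min() >= 0
```

Note that the first cone condition holds with equality ($2 \cdot 1 \cdot 3 = 36/6$), so $c_1 = -6$ is an extreme choice for this polynomial.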
#### Explicit function of $p$

We propose instead a middle ground between the two strategies above, where $c_1$ and $c_2$ are "simple," explicit functions of $p$. The decision variables depend on the data, but they are explicit, low-complexity computable functions. What would be a good definition for $c_1$ and $c_2$ of this form? A "good choice" for $c_1$ and $c_2$ is one with the following informally stated properties:

- Non-inferiority: it should be no worse than fixing $c_1 = c_2 = 0$ and
- Maximality: it should be maximal in some sense.

Formal statements of these properties and additional intuition for our choices are presented in Section [3.2](#sec:why){reference-type="ref" reference="sec:why"}. For the cubic case, we propose the choice $$c_1 = -2\sqrt{3p_0(p_2)_+} \quad \text{ and } \quad c_2 = -2\sqrt{3p_3(p_1)_+}. \label{eqn:cubic_c1c2}$$ While this is more complicated than $c_1 = c_2 = 0$, Section [3.2](#sec:why){reference-type="ref" reference="sec:why"} provides additional motivation and an argument for why our proposal satisfies the desiderata mentioned above. Substituting the values in [\[eqn:cubic_c1c2\]](#eqn:cubic_c1c2){reference-type="eqref" reference="eqn:cubic_c1c2"} into [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"} gives the conditions that define our new set.

**Definition 6**. *The set $\mathcal{GB}$ ("Geometric Bernstein") is $$\label{eqn:definingGB} \mathcal{GB} = \Bigg\{ \sum_{i=0}^3p_ib_i^3(x) \quad \Bigg| \quad p_0, p_3 \geq 0, \qquad p_2 \geq -2\sqrt{\frac{p_3(p_1)_+}{3}}, \qquad p_1 \geq -2\sqrt{\frac{p_0(p_2)_+}{3}} \Bigg\}.$$*

Each Bernstein coefficient is bounded below by a multiple of the geometric mean of its neighboring Bernstein coefficients, which motivates the name of the set.
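Checking membership in $\mathcal{GB}$ amounts to a handful of arithmetic operations. A minimal sketch (plain Python; `in_GB` and `in_NB` are our own helper names), using the polynomial with coefficients $(1,-2,3,1)$ from the previous subsection, which lies in $\mathcal{GB}$ but not in $\mathcal{NB}$:

```python
import math

def pos(t):                     # (t)_+ = max(t, 0)
    return max(t, 0.0)

def in_GB(p, tol=1e-12):
    """Defining inequalities of GB for cubic Bernstein coefficients p."""
    p0, p1, p2, p3 = p
    return (p0 >= -tol and p3 >= -tol
            and p2 >= -2 * math.sqrt(pos(p3) * pos(p1) / 3) - tol
            and p1 >= -2 * math.sqrt(pos(p0) * pos(p2) / 3) - tol)

def in_NB(p):
    return all(pi >= 0 for pi in p)

p = (1.0, -2.0, 3.0, 1.0)
assert in_GB(p) and not in_NB(p)        # GB strictly contains NB
assert in_GB((1.0, 0.0, 0.0, 1.0))      # nonnegative coefficients always pass
assert not in_GB((1.0, -2.0, 0.0, 0.0)) # this one is not nonnegative on [0,1]
```

For this $p$ the bound $p_1 \geq -2\sqrt{p_0(p_2)_+/3} = -2$ holds with equality, so $p$ sits on the boundary of $\mathcal{GB}$.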
A visual comparison of $\mathcal{GB}$ to $\mathcal{NB}$ and $\mathcal{P}$ is given in Figure [\[fig:inclusion-cross-sections11\]](#fig:inclusion-cross-sections11){reference-type="ref" reference="fig:inclusion-cross-sections11"}.

## Why this definition? {#sec:why}

#### Non-inferiority

The following theorem formalizes the claim that our choices of $c_1$ and $c_2$ are at least as good as $c_1 = c_2 = 0$. thmthmoneintwointhree[\[thm:1in2in3\]]{#thm:1in2in3 label="thm:1in2in3"} $\mathcal{NB} \subset \mathcal{GB}\subset \mathcal{P}$

*Proof of Theorem [\[thm:1in2in3\]](#thm:1in2in3){reference-type="ref" reference="thm:1in2in3"}.* The containment $\mathcal{NB} \subset \mathcal{GB}$ is simple: if all $p_i \geq 0$, then the left-hand sides of the inequalities in [\[eqn:definingGB\]](#eqn:definingGB){reference-type="eqref" reference="eqn:definingGB"} are nonnegative and the right-hand sides are nonpositive. Therefore $p \in \mathcal{GB}$. Next, we show that $\mathcal{GB} \subset \mathcal{P}$. Let $p \in \mathcal{GB}$. We claim that setting $c_1$ and $c_2$ as in [\[eqn:cubic_c1c2\]](#eqn:cubic_c1c2){reference-type="eqref" reference="eqn:cubic_c1c2"} satisfies [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"}. The constraint $(p_0, p_2 - \frac{c_2}{3}; \frac{c_1}{\sqrt{6}}) \in \mathcal{Q}$ is a combination of three inequalities:

- $p_0 \geq 0$: This is required by membership in $\mathcal{GB}$.
- $p_2 - \frac{c_2}{3} \geq 0$: Since $p \in \mathcal{GB}$, $0 \leq p_2 + 2\sqrt{\frac{(p_1)_+p_3}{3}} = p_2 - \frac{c_2}{3}$.
- $2p_0(p_2 - \frac{c_2}{3}) \geq \frac{c_1^2}{6}$: First consider the case $p_2 \geq 0$. We always have $c_2 \leq 0$, so $2p_0(p_2 - \frac{c_2}{3}) \geq 2p_0p_2 = \frac{c_1^2}{6}$. Otherwise $p_2 < 0$, and so $c_1 = 0$; in that case the first two inequalities imply that the product is nonnegative.

Therefore this second order cone condition is satisfied.
The argument that the other second order cone condition holds is analogous. ◻

A less precise, but more insightful, "picture proof" that $\mathcal{NB} \subset \mathcal{GB}$ is given in Figure [\[fig:abchoice\]](#fig:abchoice){reference-type="ref" reference="fig:abchoice"}, which also motivates the choice of $c_1$ and $c_2$. The feasible region of each second order cone condition is the shaded region inside of each parabola. We want a point $(c_1,c_2)$ inside the intersection of these two regions. The set $\mathcal{NB}$ results from picking the origin. The $\star$ will always be in the intersection whenever the origin is. The intuition for why $\mathcal{GB}$ strictly contains $\mathcal{NB}$ is the following: if $p_1 < 0$, the horizontal parabola moves left. We would then choose $c_2 = 0$, but a point with negative $c_1$ may still lie in the intersection even when the origin does not.

#### Maximality

Now that we have established that our choice of $c_1$ and $c_2$ is "better" than $c_1 =c_2 = 0$, we formalize the sense in which our choice is maximal. In general, $c_1$ and $c_2$ could be functions of all of the Bernstein coefficients; i.e., $c_1 = g(p_0,p_1,p_2,p_3)$ and $c_2 = h(p_0,p_1,p_2,p_3)$.[^1] For the sake of simplicity, our choice of $g$ and $h$ will be such that they are only functions of *two* of the Bernstein coefficients each. Furthermore, by symmetry, we restrict our attention to those where $c_1 = g(p_0,p_1,p_2,p_3) = f(p_0,p_2)$ and $c_2 = h(p_0,p_1,p_2,p_3) = f(p_3,p_1)$ for some $f :\mathbb{R}^2 \to \mathbb{R}$. Let $S(f)$ be the set $$S(f) = \Big\{ p \in \mathbb{R}_3[x] \, \Big| \, c_1 = f(p_0,p_2) \text{ and } c_2 = f(p_3, p_1) \text{ are feasible for \eqref{eqn:nonnegative_socp}} \Big\}.$$ Using a single $f$ for both $c_1$ and $c_2$ ensures that membership in $S(f)$ does not change when transforming $p$ by $x \to 1-x$.
For our choice in [\[eqn:cubic_c1c2\]](#eqn:cubic_c1c2){reference-type="eqref" reference="eqn:cubic_c1c2"}, we take $f(z_1,z_2) = -2\sqrt{3(z_1)_+(z_2)_+}$, which we denote as $f^*$. For reference, $S(f^*) = \mathcal{GB}$ and $S(0) = \mathcal{NB}$. Within this family, $S(f^*)$ is maximal in the sense of the following proposition, which is proven in Appendix [10](#sec:p2proofs){reference-type="ref" reference="sec:p2proofs"}. proptight[\[prop:tight\]]{#prop:tight label="prop:tight"} If there exists a $p \in \mathcal{GB}\setminus \mathcal{NB}$ such that both $f(p_0, p_2) \neq f^*(p_0, p_2)$ and $f(p_3, p_1) \neq f^*(p_3, p_1)$, then $S(f^*) \not\subset S(f)$. Equivalently, if $S(f^*) \subset S(f)$ then for every $p \in \mathcal{GB} \setminus \mathcal{NB}$, either $f(p_0,p_2) = f^*(p_0,p_2)$ or $f(p_3, p_1) = f^*(p_3,p_1)$. ## Aside: an exact condition with the discriminant {#sec:cubic-exact} One can also characterize strict positivity through the *discriminant* of a polynomial. As opposed to the second order cone constraints that require solving a convex optimization problem, this condition can be checked explicitly. The discriminant is defined as follows. **Definition 7**. *The *discriminant* of a polynomial $p$ with leading coefficient $a_n$ in the monomial basis and roots $\alpha_i$ for $1\leq i \leq n$ is $$\label{eqn:discdef} D(p) = a_n^{2n-2}\prod_{1 \leq i < j \leq n}(\alpha_i - \alpha_j)^2.$$* Although not obvious from this definition, $D(p)$ is a polynomial in the (monomial or Bernstein) coefficients of $p$. For now we are only concerned with cubic polynomials, so we explicitly compute the discriminant for this case in terms of Bernstein coefficients. **Example 1**. *The discriminant of the polynomial $\sum_{i=0}^3 p_i b_i(x)$ is $$-27 (-3 p_1^2 p_2^2 + 4 p_0p_2^3 + 4 p_1^3 p_3 - 6 p_0p_1p_2p_3 + p_0^2 p_3^2).$$* Polynomials with repeated roots provide an intuitive connection between the discriminant and the set of nonnegative polynomials. 
On the one hand, the discriminant of $p$ vanishes precisely when $p$ has repeated roots, as can be seen directly from the definition in [\[eqn:discdef\]](#eqn:discdef){reference-type="eqref" reference="eqn:discdef"}. On the other hand, the boundary of the set of polynomials nonnegative on $\mathbb{R}$ consists of polynomials with repeated roots. This connection via polynomials with repeated roots suggests that changes in nonnegativity can be characterized by changes in sign of the discriminant. There is a caveat for using this intuition for nonnegativity on the interval: it matters whether the repeated root is actually *on* the interval, since it could be outside the interval or even complex-valued. In those cases the discriminant would vanish, but $p$ need not be on the boundary of $\mathcal{P}$. This concern is addressed by imposing additional conditions, not just sign conditions on $D(p)$. Theorem [\[thm:exactcubic\]](#thm:exactcubic){reference-type="ref" reference="thm:exactcubic"} carefully formalizes the connection but requires the definition of a few sets. The interior of the set $\mathcal{P}$, which we denote $\mathcal{P}^\circ$, is the set of strictly positive cubic polynomials on the closed interval $[0,1]$. Similarly, $\mathcal{NB}^\circ$ is the set of cubic polynomials with strictly positive Bernstein coefficients. Let $\mathcal{D}^{>}$ be the set of cubic polynomials $p$ such that $p_0, p_3 > 0$ and $-D(p) > 0$. Let $\mathcal{D}^{\geq}$ be the set of cubic polynomials $p$ such that $p_0,p_3 \geq0$ and $-D(p) \geq 0$.[^2] It is evident from Figure [\[fig:NB_D\]](#fig:NB_D){reference-type="ref" reference="fig:NB_D"} that $\mathcal{D}^>$ is not all of $\mathcal{P}^\circ$ (because $\mathcal{P}^\circ$ includes $\mathcal{NB}^\circ$). All that is needed is to include $\mathcal{NB}^\circ$: thmexactcubic[\[thm:exactcubic\]]{#thm:exactcubic label="thm:exactcubic"} $\mathcal{P}^\circ = \mathcal{D}^> \cup \mathcal{NB}^\circ$.
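The characterization can be exercised numerically. The sketch below (exact arithmetic via the standard library's `fractions`; the monomial-basis discriminant formula is the classical one and is included here as a cross-check, with helper names of our own) implements the Bernstein-form discriminant from Example 1 and tests the certificate $\mathcal{D}^> \cup \mathcal{NB}^\circ$ on two sample cubics:

```python
from fractions import Fraction as F

def disc_bernstein(p0, p1, p2, p3):
    """Discriminant of sum p_i b_i(x), in Bernstein coefficients (Example 1)."""
    return -27 * (-3*p1**2*p2**2 + 4*p0*p2**3 + 4*p1**3*p3
                  - 6*p0*p1*p2*p3 + p0**2*p3**2)

def disc_monomial(a0, a1, a2, a3):
    """Classical discriminant of a3 x^3 + a2 x^2 + a1 x + a0."""
    return (18*a3*a2*a1*a0 - 4*a2**3*a0 + a2**2*a1**2
            - 4*a3*a1**3 - 27*a3**2*a0**2)

def to_monomial(p0, p1, p2, p3):
    """Monomial coefficients (a0, a1, a2, a3) of the cubic with Bernstein coeffs p."""
    return (p0, -3*p0 + 3*p1, 3*p0 - 6*p1 + 3*p2, -p0 + 3*p1 - 3*p2 + p3)

def strictly_positive_certificate(p):
    """Membership in D^> (discriminant condition) or NB-interior."""
    p0, p1, p2, p3 = p
    in_D_pos = p0 > 0 and p3 > 0 and -disc_bernstein(*p) > 0
    in_NB_int = all(pi > 0 for pi in p)
    return in_D_pos or in_NB_int

# The two discriminant formulas agree, e.g. on x(x-1)(x-2), which has D = 4.
q = (F(0), F(2, 3), F(1, 3), F(0))
assert disc_bernstein(*q) == disc_monomial(*to_monomial(*q)) == 4

p = (F(1), F(-2), F(3), F(1))            # strictly positive on [0, 1]
assert disc_bernstein(*p) == -135 and strictly_positive_certificate(p)

r = (F(1), F(-2), F(0), F(0))            # r(1/2) = -5/8, so not nonnegative
assert disc_bernstein(*r) == 0 and not strictly_positive_certificate(r)
```

Strict positivity of the polynomial with coefficients $(1,-2,3,1)$ on $[0,1]$ was checked separately; the certificate agrees in both cases.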
On the other hand, $\mathcal{P} \neq \mathcal{D}^\geq \cup \mathcal{NB}$.[^3] If the discriminant of $p$ vanishes when $p \not \in \mathcal{NB}$, it is not possible to characterize the sign of $p$ solely based on this information. We can see this in Figure [\[fig:NB_D2\]](#fig:NB_D2){reference-type="ref" reference="fig:NB_D2"}. The negative horizontal axis has vanishing discriminant but does not contain nonnegative polynomials. The left hand side of the parabola consists of nonnegative polynomials. To be completely explicit, we provide two polynomials, one from each of these sets, marked with a dot. These examples show that both $\mathcal{P}$ and its complement have a nonempty intersection with $\mathcal{D}^\geq$.

**Example 2**. *Let $r = (1-x)^3 - 6x(1-x)^2$. Not all the Bernstein coefficients are positive and the discriminant of $r$ is zero, but $r(\frac{1}{2}) = -\frac{5}{8}$, so $r$ is not nonnegative.*

**Example 3**. *Let $s = (1-x)^3-6x(1-x)^2+9x^2(1-x) = (1-x)(1-4x)^2$. Not all the Bernstein coefficients are positive and the discriminant of $s$ is zero. Nevertheless, the factored form demonstrates nonnegativity of $s$ on the interval.*

While the exact condition discussed in this section has an explicit form, it only characterizes strictly positive polynomials. A traditional proof of Theorem [\[thm:exactcubic\]](#thm:exactcubic){reference-type="ref" reference="thm:exactcubic"} would involve a somewhat tedious enumeration of cases. Instead of that route, we take a more modern computational approach and use a quantifier elimination program to "derive" the theorem. See Appendix [11](#sec:cubic-exact-proof){reference-type="ref" reference="sec:cubic-exact-proof"} for a demonstration.

## An exact condition with $\mathcal{GB}$

One can also use $\mathcal{GB}$ to provide an exact characterization of nonnegativity. Recall that we have the strict containment $\mathcal{GB} \subset \mathcal{P}$.
Perhaps surprisingly, any element in $\mathcal{P}$ is actually the sum of two elements in the smaller set $\mathcal{GB}$, i.e., $$\label{eqn:gbplusgb} \mathcal{GB} + \mathcal{GB} = \mathcal{P}.$$ The following proposition shows how to construct $\mathcal{P}$ from two convex subsets of $\mathcal{GB}$ and yields [\[eqn:gbplusgb\]](#eqn:gbplusgb){reference-type="eqref" reference="eqn:gbplusgb"} as a corollary.

**Proposition 1**. *Let $$\begin{aligned} S = &\Bigg\{\sum_{i=0}^3s_ib_i^3(x) \, \Bigg | \, \Big(s_0,\, s_2;\, \sqrt{\frac32}s_1\Big) \in \mathcal{Q} \textrm{ and } s_3 \geq 0\Bigg\} \textrm{ and }\\ T = &\Bigg\{\sum_{i=0}^3t_ib_i^3(x) \, \Bigg | \, \Big(t_1,\, t_3;\, \sqrt{\frac32}t_2\Big) \in \mathcal{Q} \textrm{ and } t_0 \geq 0\Bigg\}. \end{aligned}$$ Then, $S + T = \mathcal{P},$ which implies $\mathcal{GB} + \mathcal{GB} = \mathcal{P}$.*

*Proof.* Since $S, T \subseteq \mathcal{GB}$ and $\mathcal{GB} \subset \mathcal{P}$, we have $S, T \subset \mathcal{P}$. Furthermore, $\mathcal{P}$ is a convex cone, so $S + T \subset \mathcal{P}$. For the other direction, suppose that $p \in \mathcal{P}$. We will decompose $p = s + t$ and show that $s \in S$ and $t \in T$. Let $c_1, c_2$ be the constants guaranteed to exist for $p$ by Lemma [\[lem:nonnegative_socp\]](#lem:nonnegative_socp){reference-type="ref" reference="lem:nonnegative_socp"}. Let the Bernstein coefficients of $s$ be $(p_0, \frac{c_1}{3}, p_2 - \frac{c_2}{3}, 0)$ and let the Bernstein coefficients of $t$ be $(0, p_1 - \frac{c_1}{3}, \frac{c_2}{3}, p_3)$. First, $s \in S$ because $(s_0, s_2; \sqrt{\frac{3}{2}}s_1) \in \mathcal{Q}$ if and only if $(p_0, p_2 - \frac{c_2}{3}; \frac{c_1}{\sqrt{6}}) \in \mathcal{Q}$, which is guaranteed by the choice of $c_1$ and $c_2$. The same argument shows $t \in T$. Hence the decomposition is valid, and therefore $\mathcal{P} \subset S + T$.
◻ # Arbitrary degree {#sec:generalization} Now we extend the definitions of $\mathcal{NB}$, $\mathcal{P}$, and eventually $\mathcal{GB}$ to arbitrary degree. **Definition 8**. *$\mathcal{P}_d$ ("Positive") is the set of $p \in \mathbb{R}_d[x]$ such that $p$ is nonnegative on $[0,1]$.* **Definition 9**. *$\mathcal{NB}_d$ ("Nonnegative Bernstein") is the set of $p \in \mathbb{R}_d[x]$ such that $p$ has all nonnegative Bernstein coefficients.* In this section, we propose the corresponding generalization of $\mathcal{GB}$. To accomplish this, one can attempt to generalize Lemma [\[lem:nonnegative_socp\]](#lem:nonnegative_socp){reference-type="ref" reference="lem:nonnegative_socp"} for arbitrary degree. However, whereas in the cubic case the second order cone conditions are necessary and sufficient for nonnegativity, in the general case this requires instead a semidefinite program. This is too computationally expensive, so we restrict to only *tridiagonal* matrices, which will give an SOCP condition. We then make strategic choices for the decision variables in the SOCP. To ease notation, define $$\label{eqn:widef}w_i = \begin{cases}\frac{1}{2} &\qquad 1 < i < d-1 \\ 1 &\qquad \text{otherwise}\end{cases}.$$ Recall that $m_i = \frac12 \cdot \frac{(i+1) \cdot (d-i+1)}{i \cdot (d-i)}$. Here is a sufficient condition for nonnegativity written in terms of second order cones and Bernstein coefficients. lemgennonnegativesocp[\[lem:gen_nonnegative_socp\]]{#lem:gen_nonnegative_socp label="lem:gen_nonnegative_socp"} Let $p\in \mathbb{R}_d[x]$ with Bernstein coefficients $\{p_i\}_{i=0}^{d}$. Let $c_0 = c_d = 0$. If there exist $c_1, \dots, c_{d-1} \in \mathbb{R}$ such that for all $1 \leq i \leq d-1$ $$\label{eqn:general-scalar-sufficient} \begin{aligned} \left( p_{i-1}-c_{i-1}, \quad p_{i+1} - c_{i+1};\quad c_i\sqrt{\frac{m_i}{w_{i-1}w_{i+1}}} \right) \quad &\in \quad \mathcal{Q}, \end{aligned}$$ then $p\in \mathcal{P}_d$. 
*Proof.* Throughout this proof we use $b_i$ to mean $b_i(x)$ for any $0 \leq x \leq 1$. Constraint [\[eqn:general-scalar-sufficient\]](#eqn:general-scalar-sufficient){reference-type="eqref" reference="eqn:general-scalar-sufficient"} is equivalent to $$\label{eqn:wgeneral-scalar-sufficient} \left( w_{i-1}(p_{i-1}-c_{i-1}), \quad w_{i+1}(p_{i+1} - c_{i+1});\quad c_i\sqrt{m_i} \right) \quad \in \quad \mathcal{Q}.$$ By Lemma [\[lemma:bernstein_socp\]](#lemma:bernstein_socp){reference-type="ref" reference="lemma:bernstein_socp"}, for all $1 \leq i \leq d-1$, $(b_{i-1}, b_{i+1}; \frac{1}{\sqrt{m_i}}b_i) \in \mathcal{Q}$. The inner product between constraint $i$ of Lemma [\[lemma:bernstein_socp\]](#lemma:bernstein_socp){reference-type="ref" reference="lemma:bernstein_socp"} and the $i$th constraint of [\[eqn:wgeneral-scalar-sufficient\]](#eqn:wgeneral-scalar-sufficient){reference-type="eqref" reference="eqn:wgeneral-scalar-sufficient"} is $$w_{i-1}(p_{i-1} - c_{i-1})b_{i-1} + w_{i+1}(p_{i+1} - c_{i+1})b_{i+1} + c_ib_i \geq 0$$ by self-duality of the second order cone. Sum all of these results from $1 \leq i \leq d-1$ to get $$\begin{aligned}\label{eqn:lemma4-allbutspecial} 0 &\leq \sum_{i=1}^{d-1}\left(w_{i-1}(p_{i-1} - c_{i-1})b_{i-1} + w_{i+1}(p_{i+1} - c_{i+1})b_{i+1} + c_ib_i \right) \\ &= \sum_{i=0}^{d}p_ib_i = p(x); \end{aligned}$$ the last equality holds by the definition of $w_i$: each $p_jb_j$ is collected with total coefficient one, and the $c_j$ terms cancel (using $c_0 = c_d = 0$). ◻

In Appendix [9](#sec:altsosproof){reference-type="ref" reference="sec:altsosproof"} we give an alternative proof of the lemma that shows [\[eqn:general-scalar-sufficient\]](#eqn:general-scalar-sufficient){reference-type="eqref" reference="eqn:general-scalar-sufficient"} is a restricted case of sums of squares, where the Gram matrix is constrained to be tridiagonal. Unlike in the cubic case, Lemma [\[lem:gen_nonnegative_socp\]](#lem:gen_nonnegative_socp){reference-type="ref" reference="lem:gen_nonnegative_socp"} is only a sufficient condition for nonnegativity.
To emphasize that it is sufficient but not necessary, consider the following example of a nonnegative polynomial for which [\[eqn:general-scalar-sufficient\]](#eqn:general-scalar-sufficient){reference-type="eqref" reference="eqn:general-scalar-sufficient"} is not feasible.

**Example 4**. *Let $p = (2x-1)^4$. Clearly $p(x) \geq 0$. The Bernstein coefficients are $(1, -1,1, -1, 1)$, but there are no feasible $c_i$ for [\[eqn:general-scalar-sufficient\]](#eqn:general-scalar-sufficient){reference-type="eqref" reference="eqn:general-scalar-sufficient"}.*

Just as in the cubic case, we can produce sufficient conditions for nonnegativity as follows. Fix an explicit formula for $c_i$ in Lemma [\[lem:gen_nonnegative_socp\]](#lem:gen_nonnegative_socp){reference-type="ref" reference="lem:gen_nonnegative_socp"}. If a polynomial satisfies those fixed constraints then $p \in \mathcal{P}_d$. Setting $c_i = 0$ in this procedure gives the defining conditions of $\mathcal{NB}_d$. We propose the choice $$\label{eqn:arbitrary_ci}c_i = -\sqrt{\frac{2w_{i-1}w_{i+1}}{m_i}(p_{i-1})_+(p_{i+1})_+},$$ which gives rise to the condition below. Define $p_{i} = 0$ if $i<0$ or $i > d$.

**Definition 10**. *$\mathcal{GB}_d$ is the set of $p \in \mathbb{R}_d[x]$ such that for every $0 \leq i \leq d$, $$\label{eqn:scalar-ci}p_i \geq -\sqrt{\frac{2w_{i-1}w_{i+1}}{m_i}(p_{i-1})_+(p_{i+1})_+} .$$*

The following theorem shows that this choice of $c_i$ is no worse than $c_i = 0$. thmgenoneintwointhree [\[thm:1in2in3_general\]]{#thm:1in2in3_general label="thm:1in2in3_general"} $\mathcal{NB}_d \subset \mathcal{GB}_d \subset \mathcal{P}_d$.

*Proof.* For this proof we drop the subscript $d$, but it is understood that all polynomials have degree $d$. If every $p_i \geq 0$, then each $p_i$ is at least the nonpositive right-hand side of [\[eqn:scalar-ci\]](#eqn:scalar-ci){reference-type="eqref" reference="eqn:scalar-ci"}, so $p\in \mathcal{GB}$. Next, we show that $\mathcal{GB} \subset \mathcal{P}$. Let $p \in \mathcal{GB}_d$.
We claim that setting $c_i$ as in [\[eqn:arbitrary_ci\]](#eqn:arbitrary_ci){reference-type="eqref" reference="eqn:arbitrary_ci"} for every $i$ makes [\[eqn:general-scalar-sufficient\]](#eqn:general-scalar-sufficient){reference-type="eqref" reference="eqn:general-scalar-sufficient"} feasible. The second order cone condition is satisfied if and only if both of the following hold:

1. $p_{i-1} \geq c_{i-1}$ and $p_{i+1} \geq c_{i+1}$: These are implied by the definition of $p \in \mathcal{GB}$.

2. $2w_{i-1}w_{i+1}(p_{i-1}-c_{i-1})(p_{i+1}-c_{i+1}) \geq m_i c_i^2$: If either $p_{i-1}$ or $p_{i+1}$ is not positive, then $c_i = 0$. Since the two factors are nonnegative, the inequality holds in that case. Otherwise, when $p_{i-1}, p_{i+1} \geq 0$, we have $$2w_{i-1}w_{i+1}(p_{i-1}-c_{i-1})(p_{i+1}-c_{i+1}) \geq 2w_{i-1}w_{i+1}(p_{i-1}p_{i+1}) = m_ic_i^2$$ due to the nonpositivity of $c_{i-1}$ and $c_{i+1}$ and the definition of $c_i^2$.

Therefore, the second order cone condition is feasible with this choice, and so $p \in \mathcal{P}$. ◻

# Polynomial matrices {#sec:polynomial-matrices}

The same problems and approaches we considered for scalar polynomials generalize to matrix-valued polynomials, or polynomial matrices. A (symmetric) polynomial matrix $P(x)$ is given by $$P(x) = \sum_{i=0}^d b_i(x)P_i ,$$ where $P_0, \dots, P_d \in S^n$; that is, the coefficients are symmetric matrices of the same dimensions. Because $P(x)$ is symmetric, it has real eigenvalues for every $x$. Before, we were concerned with nonnegativity of polynomials. The analogous property for polynomial matrices is positive semidefiniteness: we want conditions on the $P_i$ that guarantee that $P(x) \succeq 0$ for all $0 \leq x \leq 1$. The approaches from the scalar case can be extended to the polynomial matrix case. We define the matrix analogues of $\mathcal{P}$ and $\mathcal{NB}$ as follows:

**Definition 11**.
*$\mathcal{P}_d^n$ is the set of $n \times n$ symmetric polynomial matrices $P(x)$ whose entries are in $\mathbb{R}_d[x]$ and $P(x) \succeq 0$ for all $0 \leq x \leq 1$.*

**Definition 12**. *$\mathcal{NB}_d^n$ is the set of $n \times n$ symmetric polynomial matrices $P(x)$ whose entries are in $\mathbb{R}_d[x]$ and $P_i \succeq 0$ for every $0 \leq i \leq d$.*

To define sets in the tradeoff space between these extremes, we generalize Lemma [\[lem:gen_nonnegative_socp\]](#lem:gen_nonnegative_socp){reference-type="ref" reference="lem:gen_nonnegative_socp"} to the polynomial matrix case. Recall the definitions of $m_i$ and $w_i$ from [\[eqn:midef\]](#eqn:midef){reference-type="eqref" reference="eqn:midef"} and [\[eqn:widef\]](#eqn:widef){reference-type="eqref" reference="eqn:widef"}. lemgenmatrixnonnegativesocp[\[lem:gen_matrix_nonnegative_socp\]]{#lem:gen_matrix_nonnegative_socp label="lem:gen_matrix_nonnegative_socp"} Let $P(x) = \sum_{i=0}^d b_i(x) P_i$ where $P_i \in S^n$. Let $C_0$ and $C_d$ be the zero matrix. If there exist $C_i \in \mathbb{R}^{n\times n}$ for $1 \leq i \leq d-1$ such that for all $1 \leq i \leq d-1$ $$\label{eqn:general_deg_matrix_pmp} \begin{aligned} \begin{pmatrix} P_{i-1}-C_{i-1}-C_{i-1}^T & \sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i \\ \sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}} C_i^T & P_{i+1} - C_{i+1} - C_{i+1}^T \end{pmatrix} \succeq 0, \end{aligned}$$ then $P\in \mathcal{P}_d^n$.

*Proof.* Let $\mathbf{1}_n$ be the $n \times n$ matrix of all 1's and $I_n$ be the $n \times n$ identity matrix. By Lemma [\[lemma:bernstein_socp\]](#lemma:bernstein_socp){reference-type="ref" reference="lemma:bernstein_socp"}, together with a congruence by $\operatorname{diag}(\sqrt{w_{i-1}}, \sqrt{w_{i+1}})$, $$0 \preceq B_i(x) := \begin{pmatrix} w_{i-1}b_{i-1}(x) & \sqrt{\frac{w_{i-1}w_{i+1}}{2m_i}}b_i(x)\\ \sqrt{\frac{w_{i-1}w_{i+1}}{2m_i}}b_i(x)& w_{i+1}b_{i+1}(x) \end{pmatrix}$$ for every $0 \leq x \leq 1$. So on this domain, $B_i(x) \otimes \mathbf{1}_n \succeq 0$.
Suppose the $C_i$ satisfy equation [\[eqn:general_deg_matrix_pmp\]](#eqn:general_deg_matrix_pmp){reference-type="eqref" reference="eqn:general_deg_matrix_pmp"} for $P$. The Schur product theorem implies the Hadamard product of $B_i(x) \otimes \mathbf{1}_n$ with the $i$th matrix in [\[eqn:general_deg_matrix_pmp\]](#eqn:general_deg_matrix_pmp){reference-type="eqref" reference="eqn:general_deg_matrix_pmp"} is positive semidefinite. Summing these Hadamard products over $1 \leq i \leq d-1$ gives $$0 \preceq \begin{pmatrix} \sum_{i=0}^{d-2} w_i(P_i - C_i - C_i^T) b_i(x) & \sum_{i=1}^{d-1} C_i b_i(x) \\ \sum_{i=1}^{d-1} C_i^T b_i(x) & \sum_{i=2}^{d} w_i(P_i - C_i - C_i^T) b_i(x) \end{pmatrix}.$$ Finally, matrix congruence preserves positive semidefiniteness, so $$0 \preceq \begin{pmatrix} I_n \\ I_n \end{pmatrix}^T \begin{pmatrix} \sum_{i=0}^{d-2} w_i(P_i - C_i - C_i^T) b_i(x) & \sum_{i=1}^{d-1} C_i b_i(x) \\ \sum_{i=1}^{d-1} C_i^T b_i(x) & \sum_{i=2}^{d} w_i(P_i - C_i - C_i^T) b_i(x) \end{pmatrix} \begin{pmatrix} I_n \\ I_n \end{pmatrix} = P(x)$$ because the terms occurring twice have $w_i = \frac{1}{2}$ and the others have $w_i = 1$. Therefore $P \in \mathcal{P}_d^n$. ◻

Recall that in the scalar case our choice was given by [\[eqn:arbitrary_ci\]](#eqn:arbitrary_ci){reference-type="eqref" reference="eqn:arbitrary_ci"}. This expression suggests that the choice for $C_i$ may require the matrix version of two natural operations: projection onto the "positive" part, and a generalized notion of geometric mean. Luckily, both of these are possible. As mentioned earlier, given a matrix $X$, its projection $X_+$ onto the positive semidefinite cone can be computed with an eigenvalue decomposition; details are presented in Appendix [8](#sec:preliminaries-proofs){reference-type="ref" reference="sec:preliminaries-proofs"}.
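The eigenvalue computation just mentioned is short. A sketch assuming `numpy` (the helper name is ours): diagonalize, clip the negative eigenvalues to zero, and reassemble.

```python
import numpy as np

def psd_projection(X):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues."""
    w, V = np.linalg.eigh(X)                 # X = V diag(w) V^T
    return (V * np.clip(w, 0.0, None)) @ V.T

X = np.array([[1.0, 0.0],
              [0.0, -2.0]])
Xp = psd_projection(X)
assert np.allclose(Xp, np.diag([1.0, 0.0]))  # negative eigenvalue removed
assert np.linalg.eigvalsh(Xp).min() >= -1e-12
```

This $X_+$ is the nearest PSD matrix to $X$ in Frobenius norm, which is why it is the natural matrix analogue of the scalar positive part $(t)_+$.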
The key property of the scalar geometric mean that we want to extend is: given $a,b \geq 0$, their geometric mean is the largest number $x \geq 0$ such that $$\begin{pmatrix} a & x \\ x & b \end{pmatrix} \succeq 0.$$ For the matrix case, we can ask for the generalized property: given $A, B \succeq 0$ (which will be $(P_{i-1})_+$ and $(P_{i+1})_+$), their "geometric mean" should be an $X \succeq 0$ such that $$\label{eqn:geomeankeyfact} \begin{pmatrix} A & X \\ X & B \end{pmatrix} \succeq 0 .$$ It turns out that the *matrix geometric mean* of $A$ and $B$ has exactly this property. **Definition 13**. *Let $A, B \succ 0$. The *geometric mean* of $A$ and $B$, denoted $A\#B$, is given by $$A\#B = A^\frac{1}{2}(A^{-\frac{1}{2}}B A^{-\frac{1}{2}})^{\frac{1}{2}}A^{\frac{1}{2}}. \label{eq:geomean}$$ One can extend the definition to all $A,B \succeq 0$ by continuity. [\[def:geomean\]]{#def:geomean label="def:geomean"}* Although [\[eq:geomean\]](#eq:geomean){reference-type="eqref" reference="eq:geomean"} is not obviously symmetric in $A$ and $B$, it indeed holds that $A\#B = B\#A$. There are many other beautiful properties of the matrix geometric mean, including strong connections to Riemannian geometry; see e.g. [@bhatia2009positive; @hiai2014introduction; @bhatia2006riemannian]. The following fact is the key property we need. **Fact 1** (Theorem 3.4 of [@lawson2001geometric]). *The geometric mean $A \# B$ is the largest (in the Loewner order) $X \in S^n$ such that [\[eqn:geomeankeyfact\]](#eqn:geomeankeyfact){reference-type="eqref" reference="eqn:geomeankeyfact"} holds.* With the correct notions of positive projection and geometric mean in place, we can now define $\mathcal{GB}_d^n$ as a natural generalization of $\mathcal{GB}_d$. Let $\mathbf{0}_n$ be the $n\times n$ zero matrix. Let $P_{i} = \mathbf{0}_n$ if $i<0$ or $i > d$. 
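Definition [\[def:geomean\]](#def:geomean){reference-type="ref" reference="def:geomean"} and the fact above can be checked numerically. The sketch below (assuming `numpy`; the helper names are ours, and matrix powers are taken via symmetric eigendecompositions under the assumption $A \succ 0$) forms $A \# B$ and verifies the block-matrix property [\[eqn:geomeankeyfact\]](#eqn:geomeankeyfact){reference-type="eqref" reference="eqn:geomeankeyfact"} on random positive definite inputs:

```python
import numpy as np

def psd_power(A, t):
    """A^t for symmetric positive definite A, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None) ** t) @ V.T

def geometric_mean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, assuming A > 0."""
    Ah = psd_power(A, 0.5)
    Aih = psd_power(A, -0.5)
    return Ah @ psd_power(Aih @ B @ Aih, 0.5) @ Ah

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)); A = M @ M.T + np.eye(3)   # A > 0
M = rng.standard_normal((3, 3)); B = M @ M.T + np.eye(3)   # B > 0

G = geometric_mean(A, B)
assert np.allclose(G, geometric_mean(B, A))   # symmetry in A and B
block = np.block([[A, G], [G, B]])
assert np.linalg.eigvalsh(block).min() >= -1e-9   # the block matrix is PSD
assert np.allclose(geometric_mean(np.array([[4.0]]), np.array([[9.0]])), [[6.0]])
```

In the $1 \times 1$ case the construction reduces to the ordinary scalar geometric mean, as the last assertion illustrates.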
We choose $$\label{eqn:cichoicematrix} C_i = -\sqrt{\frac{w_{i-1}w_{i+1}}{2m_i}}((P_{i-1})_+\#(P_{i+1})_+),$$ which appropriately generalizes [\[eqn:cubic_c1c2\]](#eqn:cubic_c1c2){reference-type="eqref" reference="eqn:cubic_c1c2"} and [\[eqn:scalar-ci\]](#eqn:scalar-ci){reference-type="eqref" reference="eqn:scalar-ci"}. **Definition 14**. *$\mathcal{GB}_d^n$ is the set of $n\times n$ polynomial matrices $\sum_{i=0}^db_i(x)P_i$ such that for all $0 \leq i \leq d$, $$\label{eqn:GBdn} P_i \succeq -\sqrt{\frac{2w_{i-1}w_{i+1}}{{m_i}}}\left((P_{i-1})_+ \# (P_{i+1})_+\right).$$ [\[def:GBmatrix\]]{#def:GBmatrix label="def:GBmatrix"}* Finally, we show this definition is in the trade space between $\mathcal{NB}_d^n$ and $\mathcal{P}_d^n$, and this theorem implies Theorems [\[thm:1in2in3\]](#thm:1in2in3){reference-type="ref" reference="thm:1in2in3"} and [\[thm:1in2in3_general\]](#thm:1in2in3_general){reference-type="ref" reference="thm:1in2in3_general"}. thmarbitrarygeneralinclusions[\[thm:arbitrary_general_inclusions\]]{#thm:arbitrary_general_inclusions label="thm:arbitrary_general_inclusions"} $\mathcal{NB}_d^n \subset \mathcal{GB}_d^n \subset \mathcal{P}_d^n$. *Proof.* First suppose that $P \in \mathcal{NB}_d^n$. Since $P_i \succeq 0$, if $M \in S^n$ has nonpositive eigenvalues, then $P_i - M \succeq 0$. In particular, $-1$ times the geometric mean of $P_{i-1}$ and $P_{i+1}$ has nonpositive eigenvalues. Therefore $P \in \mathcal{GB}_d^n$. Next we argue that if $P \in \mathcal{GB}_d^n$ then choosing $C_i$ as in [\[eqn:cichoicematrix\]](#eqn:cichoicematrix){reference-type="eqref" reference="eqn:cichoicematrix"} is feasible for [\[eqn:general_deg_matrix_pmp\]](#eqn:general_deg_matrix_pmp){reference-type="eqref" reference="eqn:general_deg_matrix_pmp"}. If either $P_{i-1}$ or $P_{i+1}$ is indefinite, then the only thing to check is that both $P_{i-1} - C_{i-1} - C_{i-1}^T$ and $P_{i+1} - C_{i+1} - C_{i+1}^T$ are positive semidefinite (because the off-diagonal entries are 0). 
This is exactly the condition that $P \in \mathcal{GB}_d^n$. On the other hand, if both are positive semidefinite, then we need to show $$\label{eqn:GBdecomp} \begin{pmatrix} P_{i-1} & \sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}} C_i \\ \sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i^T & P_{i+1} \end{pmatrix} + \begin{pmatrix} - C_{i-1} - C_{i-1}^T & 0 \\ 0 & - C_{i+1} - C_{i+1}^T \end{pmatrix} \succeq 0.$$ By Fact [Fact 1](#fact:key-geom-mean){reference-type="ref" reference="fact:key-geom-mean"}, $$\begin{pmatrix} P_{i-1} & -\sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i \\ -\sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i^T &P_{i+1} \end{pmatrix} \succeq 0.$$ This means that for all $x, y \in \mathbb{R}^n$, $$\begin{aligned} 0 &\leq \begin{pmatrix} x \\y \end{pmatrix}^T\begin{pmatrix} P_{i-1} & -\sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i \\ -\sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i^T &P_{i+1} \end{pmatrix}\begin{pmatrix} x \\y \end{pmatrix}\\ &= \begin{pmatrix} x \\-y \end{pmatrix}^T\begin{pmatrix} P_{i-1} & \sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i \\ \sqrt{\frac{2m_i}{w_{i-1}w_{i+1}}}C_i^T &P_{i+1} \end{pmatrix}\begin{pmatrix} x \\-y \end{pmatrix}, \end{aligned}$$ so the first matrix in [\[eqn:GBdecomp\]](#eqn:GBdecomp){reference-type="eqref" reference="eqn:GBdecomp"} is positive semidefinite. The second matrix is positive semidefinite because its diagonal blocks are nonnegative multiples of geometric means, which are positive semidefinite. Therefore the sum is positive semidefinite as well. Hence $P \in \mathcal{P}_d^n$. ◻

# Numerical experiments {#sec:numerical}

For every application in which membership in $\mathcal{NB}$ is used as a sufficient condition for nonnegativity, it is worth considering whether testing membership in $\mathcal{GB}$ instead may provide a benefit. Our examples deal with certifying a lower bound $\delta$ on a polynomial $p$, or proving that $p - \delta$ is nonnegative. We compare how well $\mathcal{NB}$ and $\mathcal{GB}$ work as termination criteria for the subdivision method.
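For concreteness, the subdivision step being counted here, De Casteljau subdivision of the Bernstein coefficients at the midpoint, can be sketched as follows (a minimal Python sketch; the function names are ours):

```python
def decasteljau_subdivide(coeffs, t=0.5):
    """Split the Bernstein coefficients of a polynomial on [0, 1] into the
    coefficients of its two halves on [0, t] and [t, 1]."""
    pts = list(coeffs)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        # one level of the de Casteljau triangle
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    return left, right[::-1]

def decasteljau_eval(coeffs, t):
    """Evaluate a polynomial given in Bernstein form at t."""
    pts = list(coeffs)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]
```

Splitting at $t = \frac{1}{2}$ produces the coefficient lists of the two half-interval polynomials $P^1$ and $P^2$ with $P(x) = P^1(2x)$ on $[0, \frac{1}{2}]$ and $P(x) = P^2(2x-1)$ on $[\frac{1}{2}, 1]$.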
Recall the subdivision method described in Section [2.2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}. The computation required by De Casteljau's algorithm in Step 2 exceeds the cost of checking $p \in S$ in Step 1 when $S = \mathcal{NB}$ or $\mathcal{GB}$, so saving on the number of subdivisions can provide a significant advantage. We therefore count the number of subdivisions required when using the two different nonnegativity criteria. ## Varying root locations The first experiment considers the quadratic polynomials $(x-t)^2$, written in the cubic Bernstein basis. The rationale of using such a simple model is to gain a fundamental understanding of how the locations of roots affect the subdivision method. Generically, polynomials are locally quadratic near local minima, thus by studying subdivisions of quadratics one can hope to learn about the behavior of higher degree polynomials with several local minima. Given a small fixed value of $\delta>0$, in Figure [\[fig:numsubsquad\]](#fig:numsubsquad){reference-type="ref" reference="fig:numsubsquad"} we report the number of subdivisions required to prove $(x-t)^2\geq -\delta$. The fractal pattern is related to the binary expansion of the root $t$. Both methods do better when a subdivision occurs near the minimizer. For example, $(x- \frac{1}{2})^2$ only takes one subdivision because when we subdivide $[0,1]$, the first subdivision split is at $x = \frac12$. When $\mathcal{NB}$ is the termination criterion, a small change in the location of the minimizer may have a big effect on the number of subdivisions required. On the other hand, $\mathcal{GB}$ is less sensitive to the exact location of the minimizer. We can observe from Figure [\[fig:numsubsquad\]](#fig:numsubsquad){reference-type="ref" reference="fig:numsubsquad"} that the number of subdivisions required with $\mathcal{GB}$ is never bigger than the number required with $\mathcal{NB}$. 
The same phenomenon is presented in a different way in Figure [\[fig:avgsubsquad\]](#fig:avgsubsquad){reference-type="ref" reference="fig:avgsubsquad"}. We show the percentage of roots $t$ on $[0,1]$ for which fewer than $1,2,3,4,5,$ and 6 subdivisions are required with each of the nonnegativity criteria to prove $(x-t)^2 \geq -\delta$ for a small $\delta > 0$. We see that a nontrivial portion of the ensemble requires just a couple of subdivisions when we check nonnegativity with $\mathcal{GB}$, whereas checking with $\mathcal{NB}$ frequently requires the maximum number of subdivisions.

  subdivisions   $\mathcal{NB}$ (%)   $\mathcal{GB}$ (%)
  -------------- -------------------- --------------------
  0              0.03                 9.26
  1              0.12                 16.60
  2              0.48                 28.26
  3              1.95                 46.90
  4              8.18                 76.82
  5              62.67                100.00
  6              100.00               100.00

  : Percentage of roots $t$ for which at most the given number of subdivisions is required (data for Figure [\[fig:avgsubsquad\]](#fig:avgsubsquad){reference-type="ref" reference="fig:avgsubsquad"}).

## Varying matrix size

The second experiment compares different termination criteria in the subdivision method for matrix-valued polynomials of varying sizes whose eigenvalues approach 0 for many $x$. In this case, the subdivisions $P^1(x)$ and $P^2(x)$ are defined so that $$\label{eqn:subdivision-def-matrix} P(x) = \begin{cases} P^1(2x) \qquad &0 \leq x \leq \frac{1}{2}\\ P^2(2x-1) \qquad &\frac{1}{2} \leq x \leq 1.\end{cases}$$ For each $n$, we sample 100 $n\times n$ matrices of the form $$\label{eqn:random_matrix_model} T^T \begin{pmatrix} \rho_1(x) & 0 & \cdots & 0\\ 0 & \rho_2(x) & \cdots & 0\\ 0 & 0 & \ddots & 0\\ 0 & 0 & \cdots & \rho_n(x) \end{pmatrix} T,$$ where the entries of $T$ are independent and identically distributed Gaussian random variables and each $\rho_i(x)$ is a random cubic polynomial nonnegative on the interval. The random congruence transformation ensures that [\[eqn:random_matrix_model\]](#eqn:random_matrix_model){reference-type="eqref" reference="eqn:random_matrix_model"} is still always positive semidefinite, while its eigenvalues are no longer polynomials in $x$.
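One concrete way to instantiate this random model; generating each $\rho_i$ from nonnegative random Bernstein coefficients is our own choice here, one sufficient way to make the $\rho_i$ nonnegative on the interval:

```python
import numpy as np

def sample_matrix_poly(n, d=3, seed=0):
    """Sample Bernstein coefficient matrices P_0, ..., P_d of the matrix
    polynomial T^T diag(rho_1(x), ..., rho_n(x)) T, with Gaussian T and
    each rho_i built from nonnegative random Bernstein coefficients."""
    rng = np.random.default_rng(seed)
    T = rng.standard_normal((n, n))
    R = rng.random((n, d + 1))  # Bernstein coefficients of rho_i, in [0, 1)
    # The congruence acts linearly on coefficients: P_k = T^T diag(R[:, k]) T.
    return [T.T @ np.diag(R[:, k]) @ T for k in range(d + 1)]
```

Each coefficient matrix is then itself positive semidefinite (a congruence of a nonnegative diagonal matrix), hence so is $P(x)$ for every $x \in [0,1]$, while the eigenvalue curves of $P(x)$ are generally not polynomial.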
An example of the eigenvalues for one such $10 \times 10$ matrix is given in Figure [\[fig:example_eigs_10x10\]](#fig:example_eigs_10x10){reference-type="ref" reference="fig:example_eigs_10x10"}. We run the subdivision algorithm on the matrix polynomials to prove $P(x) \succeq -\delta I$ using $\mathcal{NB}$ and $\mathcal{GB}$ as nonnegativity criteria. For each matrix we compute the difference in the number of subdivisions required with each criterion. The average and standard deviation across all 100 matrices are plotted in Figure [\[fig:matrix_subdiv_demo2\]](#fig:matrix_subdiv_demo2){reference-type="ref" reference="fig:matrix_subdiv_demo2"}. As the matrices get larger, the average number of subdivisions saved with $\mathcal{GB}$ increases.

# Conclusion

We developed novel, simple, and explicit conditions to certify nonnegativity of Bernstein polynomials. The new tests better balance the tradeoff between exact but expensive conditions and the commonly used test based on nonnegative Bernstein coefficients. The method is based on making explicit choices for the decision variables in the SDP/SOCP characterizations of nonnegativity, bypassing the need to solve these programs numerically. There are several related directions for further work. One open question is whether there are other reasonable low-complexity choices for the decision variables (that may violate the hypotheses of Proposition [\[prop:tight\]](#prop:tight){reference-type="ref" reference="prop:tight"} or that may not satisfy the conditions of Theorem [\[thm:1in2in3\]](#thm:1in2in3){reference-type="ref" reference="thm:1in2in3"}). Generalizing the basic idea of Proposition [\[prop:tight\]](#prop:tight){reference-type="ref" reference="prop:tight"} to higher degrees and to the polynomial matrix case is also future work. Finally, it would be interesting to carry out a more comprehensive evaluation of how well these techniques perform in different applied settings.
# Projections, square roots, and geometric means {#sec:preliminaries-proofs}

The orthogonal projection $X_+$ of a symmetric matrix $X$ onto the positive semidefinite cone is defined as $$X_+ = \underset{{Z \succeq 0}}{\textrm{argmin}}\, \Vert X - Z \Vert_\text{Fro}^2,$$ where $\Vert X \Vert_\text{Fro}$ is the Frobenius norm of $X$. This projection can be computed with the standard algorithm:

1. Find an eigendecomposition of $X = PDP^{T}$ with $D$ diagonal.

2. Define $(D_+)_{ij} = (D_{ij})_+$ for every $1 \leq i,j \leq n$.

3. Return $X_+ = PD_+P^{T}$.

To compute the matrix geometric mean using Definition [\[def:geomean\]](#def:geomean){reference-type="ref" reference="def:geomean"}, we need the matrix square root. Recall that the square root of a diagonal matrix is the diagonal matrix whose entries are the square roots of the original entries. The square root $X^{\frac{1}{2}}$ of any positive semidefinite matrix $X$ can then be computed as $PD^{\frac{1}{2}}P^{T}$, where $PDP^{T}$ is an eigendecomposition of $X$. We also remark that there are other, more efficient algorithms to compute the matrix geometric mean; see e.g. [@Iannazzo].

# Alternative approach to Lemma [\[lem:gen_nonnegative_socp\]](#lem:gen_nonnegative_socp){reference-type="ref" reference="lem:gen_nonnegative_socp"} {#sec:altsosproof}

We note in Section [4](#sec:generalization){reference-type="ref" reference="sec:generalization"} that there is an alternative proof of Lemma [\[lem:gen_nonnegative_socp\]](#lem:gen_nonnegative_socp){reference-type="ref" reference="lem:gen_nonnegative_socp"}. We sketch this proof here.

*Alternative proof sketch of Lemma [\[lem:gen_nonnegative_socp\]](#lem:gen_nonnegative_socp){reference-type="ref" reference="lem:gen_nonnegative_socp"}.* Suppose the degree of $p$ is $2d+1$.
Let $M_1$ and $M_2$ be tridiagonal matrices with $$\begin{aligned} M_1 = \begin{pmatrix} a_0p_0 & f_1c_1 & 0 & 0 & \cdots & 0\\ f_1c_1 & a_2(p_2 - c_2) & f_3c_3 & 0& \cdots &0\\ 0 & f_3c_3 & a_4(p_4 - c_4) & f_5c_5 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & a_{2d}(p_{2d} - c_{2d}) \end{pmatrix}, \\M_2 = \begin{pmatrix} a_1(p_1-c_1) & f_2c_2 & 0 & 0 & \cdots & 0\\ f_2c_2 & a_3(p_3 - c_3) & f_4c_4 & 0& \cdots &0\\ 0 & f_4c_4 & a_5(p_5 - c_5) & f_6c_6 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & a_{2d+1} p_{2d+1} \end{pmatrix}, \end{aligned}$$ where $a_{2k} = \frac{\binom{2d+1}{2k}}{\binom{d}{k}^2}$, $a_{2k+1} = \frac{\binom{2d+1}{2k+1}}{\binom{d}{k}^2}$, $f_{2k+2} = \frac{\binom{2d+1}{2k+2}}{2\binom{d}{k}\binom{d}{k+1}}, f_{2k+1} = \frac{\binom{2d+1}{2k+1}}{2\binom{d}{k}\binom{d}{k+1}}$. Let $\vec{b} = [b_0(x), \dots, b_d(x)]^T$. Then one can show $$\label{eqn:generalmarkovlukacs} p = (1-x) \vec{b}^{T} M_1 \vec{b} + x \vec{b}^{T} M_2 \vec{b}.$$ Therefore, if $M_1$ and $M_2$ are positive semidefinite, then $p$ is nonnegative on the interval. If [\[eqn:general-scalar-sufficient\]](#eqn:general-scalar-sufficient){reference-type="eqref" reference="eqn:general-scalar-sufficient"} is feasible, then $M_1, M_2\succeq 0$, because they can be constructed by interlacing positive semidefinite $2 \times 2$ blocks. Here, interlacing $2 \times 2$ matrices means forming a tridiagonal matrix by adding the $(1,1)$ entry of each subsequent $2 \times 2$ matrix to the $(2,2)$ entry of the preceding $2 \times 2$ matrix along the diagonal. What remains is to show that [\[eqn:general-scalar-sufficient\]](#eqn:general-scalar-sufficient){reference-type="eqref" reference="eqn:general-scalar-sufficient"} implies the required $2 \times 2$ matrices are positive semidefinite. The case when $d$ is even is similar.
In both cases, the condition is equivalent to certain tridiagonal matrices being positive semidefinite. ◻

# Maximality of $\mathcal{GB}$ {#sec:p2proofs}

This section contains the proof of Proposition [\[prop:tight\]](#prop:tight){reference-type="ref" reference="prop:tight"}.

*Proof of Proposition [\[prop:tight\]](#prop:tight){reference-type="ref" reference="prop:tight"}.* We enumerate the inequalities equivalent to the second-order cone conditions [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"} so that we can refer to each of them individually. They are: $$\begin{aligned} 4p_0(3p_2-c_2) &\geq c_1^2 \label{eqn:p1constraintquad}\\ 3p_2-c_2 &\geq 0 \label{eqn:p1constraintlin}\\ 4p_3(3p_1-c_1) &\geq c_2^2\label{eqn:p2constraintquad} \\3p_1 - c_1 &\geq 0 \label{eqn:p2constraintlin} .\end{aligned}$$ For this proof, $c_1$ and $c_2$ are explicit functions of $p$. We let $c_1 = f(p_0,p_2)$, $c_1^*=f^*(p_0,p_2)$, $c_2 = f(p_3,p_1)$ and $c_2^* = f^*(p_3,p_1)$. The proof structure is as follows. Let $p \in \mathcal{GB} \setminus \mathcal{NB}$ be such that $c_1 \neq c_1^*$ and $c_2 \neq c_2^*$. If $p \not \in S(f)$, we are done, because we have a point in $S(f^*)$ that is not in $S(f)$. First, we consider $c_2 > c_2^*$ and produce a point $q$ with $q \in S(f^*)$ and $q \not \in S(f)$. A symmetric argument produces such a $q$ if $c_1> c_1^*$. Second, we assume that $c_1 < c_1^*$ and $c_2 < c_2^*$, and then show the existence of a sequence $\{q^k\}_{k=1}^\infty$ for which eventually $q^k \in S(f^*)$ but $q^k \not \in S(f)$. If $p \in \mathcal{GB} \setminus \mathcal{NB}$, then exactly one of $p_1$ and $p_2$ is negative. We treat these possibilities as two subcases.

#### $\mathbf{c_2 > c_2^*:}$

- Let $q = (p_0, p_1, \frac{1}{3}c_2^*, p_3)$. The value $c_2 = f(q_3, q_1) = f(p_3, p_1)$ remains unchanged.
Constraint [\[eqn:p1constraintlin\]](#eqn:p1constraintlin){reference-type="eqref" reference="eqn:p1constraintlin"} requires that $3q_2 - f(q_3, q_1) = c_2^* - c_2 \geq 0$, which is violated because $c_2 > c_2^*$. Therefore $q \not \in S(f)$. On the other hand, $3q_2 - f^*(q_3, q_1) = c_2^* - c_2^* = 0$. Furthermore $f^*(q_0,q_2) = 0$ (since $c_2^* \leq 0$), so [\[eqn:p1constraintquad\]](#eqn:p1constraintquad){reference-type="eqref" reference="eqn:p1constraintquad"} is satisfied too. Because $c_1^* = 0$ and $f^*(p_3,p_1) = f^*(q_3,q_1)$, [\[eqn:p2constraintquad\]](#eqn:p2constraintquad){reference-type="eqref" reference="eqn:p2constraintquad"} and [\[eqn:p2constraintlin\]](#eqn:p2constraintlin){reference-type="eqref" reference="eqn:p2constraintlin"} are still satisfied for $q$, so $q \in S(f^*)$.

- Let $q = (\frac{3p_1^2}{4p_2}, p_1, p_2, p_3)$. When $p_2 >0$ we have $p_1 <0$, so $f^*(p_3,p_1) = 0$. Therefore $f(p_3,p_1) > 0$, since $c_2 > c_2^* = 0$ in this case. Suppose for contradiction that $q \in S(f)$. If [\[eqn:p1constraintquad\]](#eqn:p1constraintquad){reference-type="eqref" reference="eqn:p1constraintquad"} were satisfied for $q$, then $9p_1^2 > c_1^2$. Furthermore, [\[eqn:p2constraintlin\]](#eqn:p2constraintlin){reference-type="eqref" reference="eqn:p2constraintlin"} would imply $3p_1 - c_1 \geq 0$, or $3p_1 \geq c_1$. Since $p_1 < 0$, this implies $9p_1^2 \leq c_1^2$, which contradicts what we showed above. Therefore $q \not \in S(f)$. However, the values $c_2^* = f^*(q_3,q_1) = 0$ and $c_1^* = f^*(q_0, q_2) = 3p_1$ satisfy all the constraints, so $q \in S(f^*)$.

#### $\mathbf{c_2< c_2^*:}$

Suppose $f(p_3, p_1) < f^*(p_3, p_1)$ and $f(p_0,p_2) < f^*(p_0,p_2)$. Without loss of generality suppose $p_2 < 0$. Let $q^k = (\frac{1}{k}, p_1, \frac{1}{3}f^*(p_3,p_1), p_3)$. We argue that for some sufficiently large $k$, $q^k \not \in S(f)$. Notice that $f(q_3,q_1)$ does not change as $k$ increases.
If $q^k \in S(f)$, then [\[eqn:p1constraintquad\]](#eqn:p1constraintquad){reference-type="eqref" reference="eqn:p1constraintquad"} implies $4q_0(3q_2 - f(q_3,q_1)) \geq f(q_0,q_2)^2$, or substituting, $\frac{4}{k}(f^*(p_3,p_1) - f(p_3,p_1)) \geq f(\frac{1}{k}, \frac{1}{3}f^*(p_3,p_1))^2$, so $$\label{eqn:tightlb}f\Big(\frac{1}{k}, \frac{1}{3}f^*(p_3,p_1)\Big) \geq -\sqrt{\frac{4}{k}(f^*(p_3,p_1) - f(p_3,p_1))}.$$ At the same time, [\[eqn:p2constraintquad\]](#eqn:p2constraintquad){reference-type="eqref" reference="eqn:p2constraintquad"} would imply $c_1 \leq 3q_1 - \frac{c_2^2}{4q_3}$, or $$\label{eqn:tightub}f\Big(\frac{1}{k}, \frac{1}{3}f^*(p_3,p_1)\Big) \leq 3p_1 - \frac{1}{4p_3}f(p_3,p_1)^2 < 0.$$ The second inequality holds because $f(p_3, p_1) < f^*(p_3, p_1)$, and the middle expression vanishes when $f^*(p_3,p_1)$ is substituted for $f(p_3,p_1)$. For large enough $k$, the lower bound [\[eqn:tightlb\]](#eqn:tightlb){reference-type="eqref" reference="eqn:tightlb"} approaches zero and therefore exceeds the constant negative upper bound provided by [\[eqn:tightub\]](#eqn:tightub){reference-type="eqref" reference="eqn:tightub"}, a contradiction. Finally, we show that for every $k$, $q^k \in S(f^*)$. As $f^*(p_3,p_1) \leq 0$, for every $k$ we have $f^*(q^k_0,q^k_2) = 0$. Therefore [\[eqn:p1constraintquad\]](#eqn:p1constraintquad){reference-type="eqref" reference="eqn:p1constraintquad"} and [\[eqn:p1constraintlin\]](#eqn:p1constraintlin){reference-type="eqref" reference="eqn:p1constraintlin"} are satisfied, since $\frac{1}{k} > 0$ and $3(\frac{1}{3}f^*(p_3,p_1)) - f^*(p_3,p_1) = 0$. Since $p \in S(f^*)$, [\[eqn:p2constraintquad\]](#eqn:p2constraintquad){reference-type="eqref" reference="eqn:p2constraintquad"} and [\[eqn:p2constraintlin\]](#eqn:p2constraintlin){reference-type="eqref" reference="eqn:p2constraintlin"} must be satisfied for each $q^k$ as well, because nothing in those conditions has changed. Hence $q^k \in S(f^*)$.
◻

# Quantifier elimination {#sec:cubic-exact-proof}

Theorem [\[thm:exactcubic\]](#thm:exactcubic){reference-type="ref" reference="thm:exactcubic"} can be verified automatically using a decision procedure for real arithmetic. Specifically, quantifier elimination algorithms are systematic procedures for eliminating quantifiers in first-order formulas over the real field; see e.g. [@CavinessJohnson]. A well-known implementation is `QEPCAD` [@brown2003qepcad]. We demonstrate how to use `QEPCAD` to eliminate the quantifier in the logical expression defining $p \in \mathcal{P}^\circ$, $$\forall{x}\, \Big( ((0\leq x) \land (x \leq 1)) \Longrightarrow p_0(1-x)^3 + 3 p_1x(1-x)^2 + 3p_2x^2(1-x) + p_3x^3 > 0 \Big),$$ to get $$p_0 >0 \land p_3 > 0 \land (D(p) < 0 \lor (p_1 > 0 \land p_2>0)).$$

``` {multicols="2" caption="Verification of Theorem~\\ref{thm:exactcubic}"}
$ ./qepcad +N80000000
=======================================================
Quantifier Elimination
in
Elementary Algebra and Geometry
by
Partial Cylindrical Algebraic Decomposition

Version B 1.65, 10 May 2011
by
Hoon Hong
(hhong@math.ncsu.edu)

With contributions by: Christopher W. Brown, George
E. Collins, Mark J. Encarnacion, Jeremy R. Johnson
Werner Krandick, Richard Liska, Scott McCallum,
Nicolas Robidoux, and Stanly Steinberg
=======================================================
Enter an informal description between '[' and ']':
[Eliminate the quantifiers in P^circ description]
Enter a variable list:
(p0,p1,p2,p3,x)
Enter the number of free variables:
4
Enter a prenex formula:
(A x)[ [0 <= x /\ x <= 1] ==>
(1-x)^3 p0 + 3 p1 (1-x)^2 x + 3 p2 (1-x) x^2 + p3 x^3 > 0 ].
=======================================================

Before Normalization >
go

Before Projection (x) >
proj-op (m,m,h,h)

Before Projection (x) >
finish

An equivalent quantifier-free formula:

p0 > 0 /\ p3 > 0 /\ [ p0^2 p3^2 - 6 p0 p1 p2 p3
  + 4 p1^3 p3 + 4 p0 p2^3 - 3 p1^2 p2^2 > 0
  \/ [ p1 > 0 /\ p2 > 0 ] ]

=====================  The End  =======================

-----------------------------------------------------------------------------
0 Garbage collections, 0 Cells and 0 Arrays reclaimed, in 0 milliseconds.
16973738 Cells in AVAIL, 40000000 Cells in SPACE.
System time: 1453 milliseconds.
System time after the initialization: 1268 milliseconds.
-----------------------------------------------------------------------------
```

The projection option we use tells `QEPCAD` to perform Hong's projection algorithm [@hong1990improvement]. The default projection algorithm in `QEPCAD` sometimes gives warnings even when they can be safely ignored, as is the case with our formula.

[^1]: For instance, if $g$ and $h$ were defined as giving the analytic center [@BoydBook §8.5] of the convex set [\[eqn:nonnegative_socp\]](#eqn:nonnegative_socp){reference-type="eqref" reference="eqn:nonnegative_socp"}, then those $g$ and $h$ would always give $c_1$ and $c_2$ that prove nonnegativity whenever $p$ is nonnegative on the interval. This yields $g$ and $h$ that are semialgebraic functions, but they would be "too complicated" for a simple test.

[^2]: In fact, $\mathcal{D}^\geq$ is not the closure of $\mathcal{D}^>$. To see this, note that the polynomial $r$ in Example [Example 2](#example:strictinterior1){reference-type="ref" reference="example:strictinterior1"} is in $\mathcal{D}^\geq$ but is not the limit point of a sequence in $\mathcal{D}^>$.

[^3]: It is easy to overlook these subtle complications on the boundary, but doing so could lead to imprecise results such as Proposition 2 of [@schmidt1988positivity].
{ "id": "2309.10675", "title": "Improved Nonnegativity Testing in the Bernstein Basis via Geometric\n Means", "authors": "Mitchell Tong Harris and Pablo A. Parrilo", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
---
abstract: |
  Distributed optimization is an important direction of research in modern optimization theory. Its applications include large-scale machine learning, distributed signal processing and many others. This paper studies decentralized min-max optimization for saddle point problems. Saddle point problems arise in training adversarial networks and in robust machine learning. The focus of the work is optimization over (slowly) time-varying networks. The topology of the network changes over time, and the velocity of these changes is limited. We show that, analogously to decentralized optimization, it is sufficient to change only two edges per iteration in order to slow convergence down to that of the arbitrary time-varying case. At the same time, we investigate several classes of time-varying graphs for which the communication complexity can be reduced.
author:
- Nhat Trung Nguyen, Alexander Rogozin, Dmitriy Metelev, Alexander Gasnikov
bibliography:
- references.bib
title: Min-max optimization over slowly time-varying graphs
---

saddle point problem, decentralized optimization, time-varying graph, extragradient method

# Introduction

This paper studies min-max optimization problems of type $$\label{eq:main_prob} \min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}}\text{ } f(x, y) := \frac{1}{M} \sum_{m = 1}^{M} f_m(x, y),$$ where the functions $f_m(x, y)$ are convex in $x$ and concave in $y$ and ${\mathcal{X}}, {\mathcal{Y}}$ are closed convex sets. Each function $f_m(x, y)$ is held locally at some computational node. The nodes are connected to each other via a decentralized communication network. Each agent can perform local computations and exchange information with its immediate neighbors in the network. Additionally, the network is allowed to change with time. Due to malfunctions or disturbances, the links in the network may fail or reappear from time to time. Networks of this type are called *time-varying graphs*.
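As a single-node illustration of problem [\[eq:main_prob\]](#eq:main_prob){reference-type="eqref" reference="eq:main_prob"}, the sketch below runs the classical extragradient method, the base method underlying our algorithms, on the toy function $f(x, y) = \frac{1}{2}x^2 + xy - \frac{1}{2}y^2$, which is strongly convex in $x$ and strongly concave in $y$ with saddle point at the origin; the step size is an illustrative choice, not a tuned one.

```python
import numpy as np

def extragradient(F, z0, gamma, iters):
    """Classical extragradient method on Z = R^{n_z} (no projection needed):
    z_{k+1/2} = z_k - gamma * F(z_k),  z_{k+1} = z_k - gamma * F(z_{k+1/2})."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - gamma * F(z)
        z = z - gamma * F(z_half)
    return z

# Toy objective f(x, y) = x^2/2 + x*y - y^2/2, for which
# F(z) = (df/dx, -df/dy) = (x + y, y - x); the saddle point is (0, 0).
F = lambda z: np.array([z[0] + z[1], z[1] - z[0]])
z_star = extragradient(F, z0=[1.0, 1.0], gamma=0.2, iters=100)
```

For this $F$ one checks directly that $\langle F(z_1) - F(z_2), z_1 - z_2 \rangle = \|z_1 - z_2\|^2$, so the operator is strongly monotone with $\mu = 1$, and the iterates converge linearly to the origin.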
There are numerous applications of optimization over time-varying networks [@nedic2010cooperative; @nedic2020distributed]. They include distributed machine learning [@rabbat2004distributed; @forero2010consensus; @nedic2017fast], distributed control of power systems [@ram2009distributed; @gan2012optimal], vehicle control [@ren2008distributed], and distributed sensing [@bazerque2009distributed]. First-order methods for decentralized optimization and min-max optimization use two types of steps: local gradient updates and inter-node communications. We consider the case when communications are done in synchronized rounds. Therefore, the complexity of a method is measured by two quantities: the number of communication rounds and the number of local oracle calls. These quantities depend on the problem characteristics, which include the network condition number $\chi$, the function condition number $L/\mu$ and the desired accuracy $\varepsilon$. Here $L$ is the Lipschitz constant of the objective gradient and $\mu$ is the strong convexity constant. This paper is devoted to *slowly time-varying graphs*. This means that only a limited number of edges is allowed to change after each communication round. We provide lower complexity bounds for the considered class of problems. We also propose min-max optimization methods with better communication complexity for two particular classes of slowly time-varying networks. For optimization over networks (without the min-max structure), lower bounds are known, as well as corresponding optimal algorithms. For static graphs, lower bounds were derived in [@scaman2017optimal], and in the same paper the optimal dual (i.e. using a dual oracle) algorithm was proposed. Optimal decentralized methods with primal oracle were proposed in [@kovalev2020optimal]. Considering time-varying graphs (with arbitrary changes at each iteration), lower complexity bounds were proposed in [@kovalev2021lower].
An optimal primal algorithm was proposed in the same paper [@kovalev2021lower], and an optimal dual method first appeared in [@kovalev2021adom]. After that, lower bounds for slowly time-varying graphs with different velocities of network changes were studied in [@metelev2023consensus]. In [@metelev2023decentralized], it was shown that it is sufficient to change only two edges at each iteration to make the communication complexity match that of an arbitrary time-varying graph. An overview of lower bounds for decentralized optimization is presented in Table [1](#table:optimization_lower_bounds){reference-type="ref" reference="table:optimization_lower_bounds"} (the $\Omega(\cdot)$ notation is omitted).

           static                                                     time-var.                                           slowly time-var.
  -------- ---------------------------------------------------------- --------------------------------------------------- -----------------------------------------------------
  comm.    $\sqrt{\chi}\sqrt\frac{L}{\mu}\log\frac{1}{\varepsilon}$   $\chi\sqrt\frac{L}{\mu}\log\frac{1}{\varepsilon}$   ${\chi}\sqrt\frac{L}{\mu}\log\frac{1}{\varepsilon}$
  oracle   $\sqrt\frac{L}{\mu}\log\frac{1}{\varepsilon}$              $\sqrt\frac{L}{\mu}\log\frac{1}{\varepsilon}$       $\sqrt\frac{L}{\mu}\log\frac{1}{\varepsilon}$
  paper    [@scaman2017optimal]                                       [@kovalev2021lower]                                 [@metelev2023decentralized]

  : Lower bounds for optimization

The results for decentralized saddle-point problems are analogous to those for optimization. Lower bounds for min-max optimization over static graphs were given in [@beznosikov2021distributed_2]. The same paper [@beznosikov2021distributed_2] proposed algorithms that are optimal up to a logarithmic term. The case of (arbitrary) time-varying graphs was studied in [@beznosikov2021near], along with methods optimal up to a logarithmic factor. Optimal algorithms for sum-type variational inequalities (a generalization of saddle point problems) were proposed in [@kovalev2022optimal].
Finally, this paper studies lower bounds for saddle-point problems over slowly time-varying graphs (only two edge changes per iteration). The corresponding results are presented in Table [2](#table:saddle_lower_bounds){reference-type="ref" reference="table:saddle_lower_bounds"}. It is worth noting that the lower complexity bounds are the same as for optimization (Table [1](#table:optimization_lower_bounds){reference-type="ref" reference="table:optimization_lower_bounds"}), except for replacing $\sqrt{L/\mu}$ by $L/\mu$.

           static                                                time-var.                                      slowly time-var.
  -------- ----------------------------------------------------- ---------------------------------------------- ------------------------------------------------
  comm.    $\sqrt{\chi}\frac{L}{\mu}\log\frac{1}{\varepsilon}$   $\chi\frac{L}{\mu}\log\frac{1}{\varepsilon}$   ${\chi}\frac{L}{\mu}\log\frac{1}{\varepsilon}$
  oracle   $\frac{L}{\mu}\log\frac{1}{\varepsilon}$              $\frac{L}{\mu}\log\frac{1}{\varepsilon}$       $\frac{L}{\mu}\log\frac{1}{\varepsilon}$
  paper    [@beznosikov2021distributed_2]                        [@beznosikov2021near]                          This paper

  : Lower bounds for saddle point problems

This paper is organized as follows. In Section [2](#sec:notation_and_assumptions){reference-type="ref" reference="sec:notation_and_assumptions"}, we introduce the necessary assumptions and notation. After that, in Section [3](#sec:upper_bounds){reference-type="ref" reference="sec:upper_bounds"}, we show how to obtain an acceleration in communications using additional assumptions on the time-varying network. In Section [4](#sec:lower_bounds){reference-type="ref" reference="sec:lower_bounds"}, we provide lower bounds for slowly time-varying networks.

# Notation and Assumptions {#sec:notation_and_assumptions}

**Smoothness and strong convexity**. We work with the problem [\[eq:main_prob\]](#eq:main_prob){reference-type="eqref" reference="eq:main_prob"}, where the sets $\mathcal{X} \subseteq \mathbb{R}^{n_x}$ and $\mathcal{Y} \subseteq \mathbb{R}^{n_y}$ are closed convex sets.
Additionally, we introduce the set $\mathcal{Z} = \mathcal{X} \times \mathcal{Y} \subseteq \mathbb{R}^{n_z}$, $z = (x, y)$, $n_z = n_x + n_y$, and the operator $F$: $$\label{eq:operatorF} F_m(z) = F_m(x, y) = \begin{pmatrix} \nabla_x f_m(x, y) \\ -\nabla_y f_m(x, y) \end{pmatrix},\hspace{0.2cm} F(z) = \frac{1}{M} \sum_{m=1}^M F_m(z).$$

**Assumption 1**. Let the functions $f(x,y)$ and $f_m(x, y)$ satisfy the following properties:

1. Function $f(x, y)$ is $L$-smooth, i.e. for all $z_1, z_2 \in \mathcal{Z}$ it holds $$\| F(z_1) - F(z_2) \| \leq L \| z_1 - z_2 \|.$$

2. For all $m$, $f_m(x, y)$ is $L_{\text{max}}$-smooth, i.e. for all $z_1, z_2 \in \mathcal{Z}$ it holds $$\| F_m(z_1) - F_m(z_2) \| \leq L_{\text{max}} \| z_1 - z_2 \|.$$

3. Function $f(x, y)$ is strongly-convex-strongly-concave with constant $\mu$, i.e. for all $z_1, z_2 \in \mathcal{Z}$ it holds $$\langle F(z_1) - F(z_2), z_1 - z_2 \rangle \geq \mu \|z_1 - z_2\|^2.$$

**Decentralized Communication**. At each communication round, we use a graph to represent the connections between computing nodes. Denote the network of communications over time by the sequence of graphs $\{\mathcal{G}^k\}_{k=0}^\infty = \{(\mathcal{V}, \mathcal{E}^k)\}_{k=0}^{\infty}$, where $\mathcal{V} = \{1, \dots, M \}$ is the set of nodes and $\mathcal{E}^k$ is the set of available connections at the $k$-th communication round. For each node $m \in \mathcal{V}$, we use the notation $\mathcal{N}_m^k = \{i \in \mathcal{V} | (i, m) \in \mathcal{E}^k \}$ to denote the set of its neighbors at round $k$; at that round, node $m$ can only communicate with nodes in $\mathcal{N}_m^k$.

**Gossip Matrices**. Each computational node $m$ holds its own local vector $z_m = (x_m, y_m)$. These vectors are required to satisfy the consensus constraints $z_1 = \dots = z_M$. For this purpose, we use a concept called the *gossip matrix*.

**Assumption 2**.
Each graph in the time-varying network corresponds to a gossip matrix $W^k \in \mathbb R^{M \times M}$ that satisfies the following properties.

1. $W^k$ is positive semi-definite,

2. $W^k_{i, j} = 0$ if $i \neq j$ and $(i, j) \notin \mathcal{E}^k$,

3. $\ker W^k = \mathop{\mathrm{span}}(\mathbf{1})$, where $\mathbf{1} = (1, \dots, 1) \in \mathbb R^M$.

The number $\chi(W)= \frac{\lambda_{\max}(W) }{\lambda^+_{\min} (W) }$ is called the *condition number* of the gossip matrix $W$, where $\lambda_{\max}(W)$ and $\lambda^+_{\min}(W)$ denote the largest and smallest positive eigenvalue of $W$. For a time-varying network $\{\mathcal{G}^k\}_{k=1}^{\infty}$, the condition number is given by $\chi = \sup\limits_{k \in \mathbb N \cup \{0\}} \frac{\lambda_{\max}(W^k)}{\lambda^+_{\min}(W^k)}$. In this paper, we also consider networks with Laplacian matrices $\mathbf{L}(\mathcal{G}^k)$, which are a typical example of gossip matrices.

We introduce the *consensus space* $\mathcal{L} \subseteq \mathbb{R}^{Mn_z}$, defined by $$\mathcal{L} = \{ \mathbf{z} = (z_1^T, \dots, z_M^T)^T \in \mathbb{R}^{Mn_z}: z_1 = \dots = z_M\}.$$ Consider also the space $\mathcal{L}^{\perp} \subseteq \mathbb{R}^{Mn_z}$, the orthogonal complement of the space $\mathcal{L}$, defined by $$\mathcal{L}^{\perp} = \{ \mathbf{z} = (z_1^T, \dots, z_M^T)^T \in \mathbb{R}^{Mn_z}: \sum_{m = 1}^M z_m = 0\}.$$

# Upper bounds {#sec:upper_bounds}

In this section, we cover two classes of time-varying networks: networks with a connected skeleton and networks with small random Markovian changes. For both scenarios, we propose a decentralized optimization algorithm that uses an auxiliary consensus procedure. We show that for the considered classes of problems, the dependence of the communication complexity on the factor $\chi$ may be reduced from $\chi$ to $\sqrt\chi$ at the cost of additional terms.
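The factor $\chi$ appearing in these bounds can be computed directly from the gossip matrices of the previous section; a small numpy sketch for a graph Laplacian (the path graph is an arbitrary illustrative choice):

```python
import numpy as np

def laplacian(n_nodes, edges):
    """Graph Laplacian L(G), a standard example of a gossip matrix."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def condition_number(W, tol=1e-9):
    """chi(W) = lambda_max(W) / lambda_min^+(W), the smallest positive eigenvalue."""
    w = np.linalg.eigvalsh(W)  # eigenvalues in ascending order
    return w[-1] / w[w > tol].min()

W = laplacian(4, [(0, 1), (1, 2), (2, 3)])  # path graph on 4 nodes
```

The properties of Assumption 2 can be read off directly: $W$ is positive semidefinite, vanishes off the edge set, and $W\mathbf{1} = 0$ (for a connected graph the kernel is exactly $\mathop{\mathrm{span}}(\mathbf{1})$); for this path graph $\chi = (2+\sqrt{2})/(2-\sqrt{2}) \approx 5.83$.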
The overview of results is presented in Table [3](#table:tw_upper_bounds){reference-type="ref" reference="table:tw_upper_bounds"} (the $O(\cdot)$ notation is omitted).

|        | arbitrary time-var. | slowly time-var. | connected skeleton | Markovian |
|--------|---------------------|------------------|--------------------|-----------|
| comm.  | $\chi\frac{L}{\mu}\log\frac{1}{\varepsilon}$ | $\chi\frac{L}{\mu}\log\frac{1}{\varepsilon}$ | $\sqrt\chi\log\chi\frac{L}{\mu}\log^2\frac{1}{\varepsilon}$ | $\tau\left(\sqrt{\chi} + \frac{\rho^2}{(\lambda_{\min}^+)^2}\right)\frac{L}{\mu}\log^2\frac{1}{\varepsilon}$ |
| oracle | $\frac{L}{\mu}\log\frac{1}{\varepsilon}$ | $\frac{L}{\mu}\log\frac{1}{\varepsilon}$ | $\frac{L}{\mu}\log\frac{1}{\varepsilon}$ | $\frac{L}{\mu}\log\frac{1}{\varepsilon}$ |
| paper  | [@kovalev2022optimal] | [@kovalev2022optimal] | This paper | This paper |

: Upper bounds for saddle point problems over arbitrary and slowly time-varying graphs

Our algorithms are based on the extragradient method with a consensus subroutine. They make several communication rounds after each extragradient step. After a sufficient number of communications, consensus is reached up to a desired accuracy. Such approximate averaging makes the trajectories of the computational nodes almost synchronized. For each class of time-varying networks, we use a corresponding consensus subroutine and incorporate it into the extragradient method to obtain a decentralized optimization algorithm. **Accelerated Gossip with Connected Skeleton and Non-Recoverable Links**. First, we focus on graphs with a connected skeleton. We assume that all graphs in the sequence have a common connected subgraph that we call a *skeleton*.
The edges may still appear and disappear, but each node remembers which incident links have failed at least once and stops communicating over those links. We call this strategy *non-recoverable links*. Effectively, the communication network only loses edges over time but remains connected. In other words, the graph of interest can be called \"monotonically decreasing\". **Assumption 3**. The graph sequence $\{\mathcal{G}^k = (\mathcal{V}, \mathcal{E}^k)\}_{k = 0}^{\infty}$ has a connected skeleton: there exists a connected graph $\hat{\mathcal{G}} = (\mathcal{V}, \hat{\mathcal{E}})$ such that for all $k \in \mathbb{N} \cup \{ 0 \}$ we have $\hat{\mathcal{E}} \subset \mathcal{E}^k$, $\lambda_{\max} (\mathbf{L}(\mathcal{G}^k)) \leq \lambda_{\max}$ and $\lambda_{\min}^+ \leq \lambda_{\min}^+(\hat{\mathcal{G}})$. With these properties of the network, we introduce the following consensus algorithm (`AccGossipNonRecoverable`).

**Input:** vectors $z_1, \dots, z_M$, number of iterations $H$, current communication round number $k_0$, step sizes $\eta, \beta > 0$.

1. Construct the column vector $\mathbf{z} = (z_1^T, \dots, z_M^T)^T$ and set $\mathbf{u}^0 = \mathbf{z}^0 = \mathbf{z}$.
2. Every node $i = 1, 2, \dots, M$ initializes its set of neighbors $\mathcal{N}_i = \mathcal{N}_i^{k_0}$.
3. For $k = 0, \dots, H - 1$, every node $i$:
   - updates the set of nodes with which it communicates: $\mathcal{N}_i = \mathcal{N}_i \cap \mathcal{N}_i^{k_0 + k}$;
   - computes $u_i^{k + 1} = z_i^k - \eta\left(|\mathcal{N}_i| z_i^k - \sum_{j \in \mathcal{N}_i} z_j^k\right)$;
   - computes $z_i^{k+1} = (1 + \beta)u_i^{k+1} - \beta u_i^k$.

**Output:** $z_1^H, \dots, z_M^H$.

**Lemma 1**.
**(From the proof of Theorem 4.3 in [@metelev2023consensus])** Let Assumption [Assumption 3](#assum:sleketon){reference-type="ref" reference="assum:sleketon"} hold and let $\{ \hat{z}_m \}_{m=1}^M$ be the output of Algorithm [\[alg:AccNonRecoverable\]](#alg:AccNonRecoverable){reference-type="ref" reference="alg:AccNonRecoverable"} with input $\{ z_m \}_{m=1}^M$ and step sizes $\eta = 1 / \lambda_{\max}$, $\beta = ( \sqrt{\chi} - 1) / ( \sqrt{\chi} + 1)$, where $\chi = \lambda_{\max} / \lambda_{\min}^+$. Then after $H$ iterations it holds that $$\label{eq:algo-output-est} \frac{1}{M} \sum_{m = 1}^M \| \hat{z}_m - \bar{{z}} \|^2 \leq \frac{2 \chi}{M} \left( 1 - \frac{1}{\sqrt{\chi}} \right)^H \sum_{m=1}^M \|z_m - \bar z\|^2,$$ where $\bar z = \frac{1}{M} \sum_{m = 1}^M z_m$. Based on Algorithm [\[alg:AccNonRecoverable\]](#alg:AccNonRecoverable){reference-type="ref" reference="alg:AccNonRecoverable"}, it is possible to develop a decentralized algorithm for solving the saddle point problem [\[eq:main_prob\]](#eq:main_prob){reference-type="ref" reference="eq:main_prob"} over a sequence of graphs that have a connected skeleton and non-recoverable links.

**Input:** step size $\gamma > 0$; number of `AccGossipNonRecoverable` steps $H$; number of communication rounds $K$; number of iterations $N$. Choose $(x^0, y^0) = z^0 \in \mathcal{Z}$ and set $z_m^0 = z^0$ for all $m$.
For $k = 0, \dots, N - 1$:

1. Each machine $m$ computes $\hat{z}_m^{k + 1/2} = z_m^k - \gamma \cdot F_m(z_m^k)$.
2. Communication: $(\tilde{z}_1^{k + 1/2}, \dots, \tilde{z}_M^{k + 1/2}) = \texttt{AccGossipNonRecoverable}(\hat{z}_1^{k + 1/2}, \dots, \hat{z}_M^{k + 1/2}, H)$.
3. Each machine $m$ computes $z_m^{k + 1/2} = \mathop{\mathrm{proj}}_{\mathcal{Z}}(\tilde{z}_m^{k + 1/2})$.
4. Each machine $m$ computes $\hat{z}_m^{k + 1} = z_m^k - \gamma \cdot F_m(z_m^{k + 1/2})$.
5. Communication: $(\tilde{z}_1^{k + 1}, \dots, \tilde{z}_M^{k + 1}) = \texttt{AccGossipNonRecoverable}(\hat{z}_1^{k + 1}, \dots, \hat{z}_M^{k + 1}, H)$.
6. Each machine $m$ computes $z_m^{k + 1} = \mathop{\mathrm{proj}}_{\mathcal{Z}}(\tilde{z}_m^{k + 1})$.

**Theorem 1**. Suppose Assumptions [Assumption 1](#assum:func-properties){reference-type="ref" reference="assum:func-properties"} and [Assumption 3](#assum:sleketon){reference-type="ref" reference="assum:sleketon"} hold. Let problem [\[eq:main_prob\]](#eq:main_prob){reference-type="eqref" reference="eq:main_prob"} be solved by Algorithm [\[alg:DESM-NonRecoverable\]](#alg:DESM-NonRecoverable){reference-type="ref" reference="alg:DESM-NonRecoverable"} with $\gamma \leq \frac{1}{4 L_{\max}}$. Then, in order to achieve $\varepsilon_0$-approximate consensus at each iteration, it suffices to take $$H = \mathcal{O}\left( \sqrt{\chi} \log \left[ \chi \left( 4 + \frac{\frac{1}{2}\|z^0 - z^* \|^2 + \frac{Q^2}{2 L_{\max}^2}}{\varepsilon_0^2} \right) \right] \right) \text{ communications,}$$ where $Q^2 = \frac{1}{M} \sum_{m=1}^M \|F_m(z^*) \|^2$ and $z^*$ is a solution of [\[eq:main_prob\]](#eq:main_prob){reference-type="eqref" reference="eq:main_prob"}. *Proof.* Suppose that we have $\varepsilon_0$-accuracy of consensus after $k$ iterations, i.e.
$$\frac{1}{M} \sum_{m=1}^M \|z_m^k - z^k \|^2 \leq \varepsilon_0^2.$$ We introduce the following notation: $$\begin{aligned} g_m^k = F_m(z_m^k), \quad g_m^{k+1/2} = F_m(z_m^{k + 1/2}), \end{aligned}$$ and $$\begin{aligned} z^k = \frac{1}{M} \sum_{m=1}^M z_m^k, \quad z^{k + 1/2} = \frac{1}{M} \sum_{m=1}^{M} z_m^{k+1/2}, \quad g^k = \frac{1}{M} \sum_{m=1}^M g_m^k, \quad g^{k+1/2} = \frac{1}{M} \sum_{m=1}^M g_m^{k+1/2}, \\ \hat z^k = \frac{1}{M} \sum_{m=1}^M \hat z_m^k, \quad \hat z^{k + 1/2} = \frac{1}{M} \sum_{m=1}^{M} \hat z_m^{k+1/2}, \quad \tilde z^k = \frac{1}{M} \sum_{m=1}^M \tilde z_m^k, \quad \tilde z^{k + 1/2} = \frac{1}{M} \sum_{m=1}^{M} \tilde z_m^{k+1/2}. \end{aligned}$$ We have $$\begin{aligned} \frac{1}{M} \sum_{m=1}^M \|g_m^k - g^k\|^2 \leq \frac{1}{M} \sum_{m=1}^M \|g_m^k\|^2. \end{aligned}$$ Let $T = 2 \chi \left( 1 - \frac{1}{\sqrt{\chi}} \right)^H$. Then $$\begin{aligned} \frac{1}{M} \sum_{m=1}^M \|\tilde{z}_m^{k+1/2} - \tilde{z}^{k+1/2}\|^2 &\leq \frac{T}{M} \sum_{m=1}^M \|\hat{z}_m^{k+1/2} - \hat{z}^{k+1/2}\|^2 = \frac{T}{M} \sum_{m=1}^M \|z_m^k -\gamma g_m^k - z^k +\gamma g^k\|^2 \\ & \leq \frac{2T}{M} \sum_{m=1}^M \|z_m^k - z^k \|^2 + \frac{2 T \gamma^2}{M} \sum_{m=1}^M \|g_m^k - g^k \|^2 \\ & \leq \frac{2T}{M} \sum_{m=1}^M \|z_m^k - z^k \|^2 + \frac{2 T \gamma^2}{M} \sum_{m=1}^M \|g_m^k\|^2 = 2T \varepsilon_0^2 + \frac{2 T \gamma^2}{M} \sum_{m=1}^M \|g_m^k\|^2. \end{aligned}$$ On the other hand, $$\begin{aligned} \frac{1}{M} \sum_{m=1}^M \| g_m^k \|^2 & = \frac{1}{M} \sum_{m=1}^M \| F_m(z_m^k)\|^2 \leq \frac{2}{M} \sum_{m=1}^M \| F_m(z_m^k) - F_m(z^*)\|^2 + \frac{2}{M} \sum_{m=1}^M \|F_m(z^*)\|^2 \\ & \leq \frac{2 L_{\max}^2}{M} \sum_{m=1}^M \|z_m^k - z^*\|^2 + \frac{2}{M} \sum_{m=1}^M \|F_m(z^*)\|^2 \leq 2 L_{\max}^2 \|z^0 - z^*\|^2 + \frac{2}{M} \sum_{m=1}^M \|F_m(z^*)\|^2,
\end{aligned}$$ where the last step uses $z_m^0 = z^0$ for all $m$. Let $Q^2 = \frac{1}{M} \sum_{m=1}^M \|F_m(z^*) \|^2$; then we have $$\begin{aligned} \frac{1}{M} \sum_{m=1}^M \|z_m^{k+1/2} - z^{k+1/2}\|^2 &= \frac{1}{M} \sum_{m=1}^M \|\mathop{\mathrm{proj}}_{\mathcal Z} \tilde {z}_m^{k+1/2} - \mathop{\mathrm{proj}}_{\mathcal Z} \tilde{z}^{k+1/2}\|^2 \leq \frac{1}{M} \sum_{m=1}^M \|\tilde{z}_m^{k+1/2} - \tilde{z}^{k+1/2}\|^2 \\ & \leq 2T \left(\varepsilon_0^2 + 2 {\gamma}^2\left( L_{\max}^2 \|z^0 - z^*\|^2 + Q^2 \right) \right) \leq 2T \left(\varepsilon_0^2 + \frac{1}{8} \|z^0 - z^*\|^2 + \frac{Q^2}{8 L_{\max}^2}\right)\\ & = \chi \left( 1 - \frac{1}{\sqrt{\chi}} \right)^H \left(4 \varepsilon_0^2 + \frac{1}{2} \|z^0 - z^*\|^2 + \frac{Q^2}{2 L_{\max}^2} \right). \end{aligned}$$ If we take $$\begin{aligned} H \geq \sqrt{\chi} \log \left[\chi \left( 4 + \frac{\frac{1}{2}\|z^0 - z^* \|^2 + \frac{Q^2}{2 L_{\max}^2}}{\varepsilon_0^2} \right) \right], \end{aligned}$$ then $$\begin{aligned} \frac{1}{M} \sum_{m = 1}^M\|z_m^{k+1/2} - z^{k+1/2}\|^2 \leq \varepsilon_0^2. \end{aligned}$$ Analogously, we obtain the same estimate for $H$ to ensure that $$\begin{aligned} \frac{1}{M} \sum_{m = 1}^M \| z_m^{k+1} - z^{k+1} \|^2 \leq \varepsilon_0^2. \end{aligned}$$ Hence, to achieve accuracy $\varepsilon_0$, we need to perform $$H = \mathcal{O} \left( \sqrt{\chi} \log \left[ \chi \left( 4 + \frac{\frac{1}{2}\|z^0 - z^* \|^2 + \frac{Q^2}{2 L_{\max}^2}}{\varepsilon_0^2} \right) \right] \right) \text{ communications.}$$ ◻ **Theorem 2**. **(From Theorem 6 in [@beznosikov2020distributed])** Let $\{z_m^k\}_{k \geq 0}$ denote the iterates of Algorithm [\[alg:DESM-NonRecoverable\]](#alg:DESM-NonRecoverable){reference-type="ref" reference="alg:DESM-NonRecoverable"} for solving problem [\[eq:main_prob\]](#eq:main_prob){reference-type="ref" reference="eq:main_prob"}.
Let Assumptions [Assumption 1](#assum:func-properties){reference-type="ref" reference="assum:func-properties"} and [Assumption 3](#assum:sleketon){reference-type="ref" reference="assum:sleketon"} be satisfied. Then, if $\gamma \leq \frac{1}{4L_{\max}}$, we have the following estimate: $$\| \bar{z}^{N} - z^* \|^2 = \mathcal{O} \left( \|z^0 - z^*\|^2 \exp \left( - \frac{\mu K}{8L \cdot H} \right) \right).$$ **Corollary 1**. In the setting of Theorems [Theorem 1](#th:eps-est){reference-type="ref" reference="th:eps-est"} and [Theorem 2](#th:DESM-est){reference-type="ref" reference="th:DESM-est"}, if $H = \mathcal{O} \left( \sqrt{\chi} \log\chi \log(1/\varepsilon) \right)$, then the number of communication rounds required for Algorithm [\[alg:DESM-NonRecoverable\]](#alg:DESM-NonRecoverable){reference-type="ref" reference="alg:DESM-NonRecoverable"} to obtain an $\varepsilon$-solution is upper bounded by $$\mathcal{O} \left( \sqrt{\chi} \log\chi \frac{L}{\mu} \log^2\frac{1}{\varepsilon} \right),$$ and the number of local computations on each device is upper bounded by $$\mathcal{O} \left( \frac{L}{\mu} \log \frac{1}{\varepsilon} \right).$$ **Consensus for Networks with Markovian Changes**. This subsection is devoted to slowly time-varying graphs whose random changes follow a Markovian law. At each iteration, several randomly chosen edges may appear or disappear. The choice of edges depends only on the current graph topology, so the sequence of graphs is a Markov process. Let us introduce some requirements on a time-varying network with Markovian changes. **Assumption 4**. The communication network satisfies the following conditions:

1. $\{ W^k \}_{k=0}^\infty$ is a stationary Markov chain on $(W_G, W_\sigma)$, where $W_G$ is the set of all possible gossip matrices for the network and $W_{\sigma}$ is the $\sigma$-field on $W_G$; the chain $\{ W^k \}_{k=0}^\infty$ has a Markov kernel $Q$ and a unique stationary distribution $\pi$.

2.
$Q$ is uniformly geometrically ergodic with mixing time $\tau \in \mathbb{N}$, i.e., for every $m \in \mathbb N$, $$\Delta (Q^m) = \sup_{W, W' \in W_G} (1/2) \| Q^m(W, \cdot) - Q^m(W', \cdot) \|_{TV} \leq (1/4)^{\lfloor m / \tau \rfloor}.$$

3. For all $k \in \mathbb N \cup \{0\}$, it holds that $\mathbb E_\pi [ W^k ] = \bar W$, and $\bar W$ satisfies Assumption [Assumption 2](#assum:gossip){reference-type="ref" reference="assum:gossip"}. Denote $\lambda_{\max} = \lambda_{\max}(\bar W)$, $\lambda_{\min}^+ = \lambda_{\min}^+(\bar W)$, $\chi = \frac{\lambda_{\max}}{\lambda_{\min}^+}$.

4. For any graph $\mathcal{G}$ that can appear in the network, it holds that $$\| W(\mathcal{G}) - \bar{W} \| \leq \rho.$$

Consider the consensus search problem: $$\label{eq:markov-consensus} \begin{aligned} \min_{\mathbf{z} \in \mathbb{R}^{M n_z}} & \left[ r(\mathbf{z}) = \left\| \left(\sqrt{\bar W } \otimes \mathbf{I}_{n_z}\right) \mathbf{z} \right\|^2 \right] \\ \textrm{s.t.} & \sum_{m = 1}^M z_m = \sum_{m=1}^M z_m^0 \end{aligned},$$ where $\mathbf{z} = (z_1^T, \dots, z_M^T)^T$. The consensus algorithm for this setting (`ACOGWMC`) is as follows.

**Input:** stepsize $\gamma > 0$, momentum parameters $\theta, \eta, \beta, p$, number of iterations $N$, batchsize limit $S$. Choose $z_f^0 = z^0$, $T^0 = 0$, and set the same random seed for generating $\{J_k\}$ on all devices. For $k = 0, \dots, N - 1$:

1. $z_g^k = \theta z_f^k + (1 - \theta) z^k$.
2. Sample $J_k \sim \text{Geom}(1/2)$.
3. Send $z_g^k$ to neighbors in the networks $\{ \mathcal{G}^{T^k+i} \}_{i=1}^{2^{J_k} B}$.
4. Compute $g^k = g_0^k + \begin{cases} 2^{J_k} \left( g_{J_k}^k - g_{J_k - 1} ^ k \right), & \text{if} \; 2^{J_k} \leq S \\ 0, & \text{otherwise} \end{cases}$ with $g_j^k = 2^{-j}B^{-1}\sum_{i = 1}^{2^j B} W^{T^k+i} z_g^k$.
5. $z_f^{k+1} = z_g^k - p \gamma g^k$.
6. $z^{k+1} = \eta z_f^{k+1} + (p - \eta)z_f^k + (1-p)(1-\beta)z^k + (1-p)\beta z_g^k$.
7. $T^{k+1} = T^k + 2^{J_k} B$.

**Theorem 3**. **(Theorem 1 from [@metelev2023decentralized])** Let Assumption [Assumption 4](#assum:markov-chain){reference-type="ref" reference="assum:markov-chain"} hold.
Let problem [\[eq:markov-consensus\]](#eq:markov-consensus){reference-type="eqref" reference="eq:markov-consensus"} be solved by Algorithm [\[alg:ACOGWMC\]](#alg:ACOGWMC){reference-type="ref" reference="alg:ACOGWMC"}. Then for any $b \in \mathbb{N}$, $$\gamma \in \left( 0; \min \left\{ \frac{3}{4 \lambda_{\max}} ; \frac{\lambda_{\min}^3}{[1800 \rho^2(\tau b^{-1} + \tau^2 b^{-2})]^2} \right\}\right),$$ and $\beta, \theta, \eta, p, S, B$ satisfying $$p = \frac{1}{4}, \, \beta = \sqrt{\frac{4p^2 \mu \gamma}{3}}, \, \eta = \frac{3\beta}{p \mu \gamma} = \sqrt{\frac{12}{\mu \gamma}}, \, \theta = \frac{p\eta^{-1} - 1}{\beta p \eta^{-1} - 1},$$ $$S = \max \left\{2; \sqrt{\frac{1}{4} \left(1 + \frac{2}{\beta} \right)}\right\}, \, B = \lceil b \,\log_2 S \rceil,$$ it holds that $$\begin{aligned} &\mathbb{E} \left[ \|z^N - z^*\|^2 + \frac{24}{\lambda_{\min}} (r(z_f^N) - r(z^*))\right] \\ &\quad= \mathcal{O} \left( \exp \left( -N \sqrt{\frac{p^2 \lambda_{\min} \gamma}{3}} \right) \left[\|z^0 - z^*\|^2 + \frac{24}{\lambda_{\min}} (r(z^0) - r(z^*)) \right] \right), \end{aligned}$$ where $z^*_{m} = \frac{1}{M} \sum_{i=1}^M z_{i}^0$ for $m = 1, \dots, M$. **Corollary 2**. In the setting of Theorem [Theorem 3](#th:markov-consensus-est){reference-type="ref" reference="th:markov-consensus-est"}, if $b = \tau$ and $\gamma \simeq \min \left\{ \frac{1}{\lambda_{\max}}; \frac{\lambda_{\min}^3}{\rho^4} \right\}$, then in order to achieve an $\varepsilon$-approximate solution (in terms of $\mathbb{E} [\|z - z^*\|^2] \lesssim \varepsilon$) it takes $$\tilde{\mathcal{O}} \left( \tau \left[ \sqrt{\chi} + \frac{\rho^2}{(\lambda_{\min}^+)^2} \right] \log \frac{1}{\varepsilon} \right) \text{ communications.}$$

**Input:** step size $\gamma \leq \frac{1}{4L_{\max}}$, number of `ACOGWMC` steps $H$, number of iterations $N$. Choose $(x^0, y^0) = z^0 \in \mathcal{Z}$ and set $z_m^0 = z^0$ for all $m$.
For $k = 0, \dots, N - 1$:

1. Each machine $m$ computes $\hat{z}_m^{k + 1/2} = z_m^k - \gamma \cdot F_m(z_m^k)$.
2. Communication: $(\tilde{z}_1^{k + 1/2}, \dots, \tilde{z}_M^{k + 1/2}) = \texttt{ACOGWMC} (\hat{z}_1^{k + 1/2}, \dots, \hat{z}_M^{k + 1/2}, H)$.
3. Each machine $m$ computes $z_m^{k + 1/2} = \mathop{\mathrm{proj}}_{\mathcal{Z}}(\tilde{z}_m^{k + 1/2})$.
4. Each machine $m$ computes $\hat{z}_m^{k + 1} = z_m^k - \gamma \cdot F_m(z_m^{k + 1/2})$.
5. Communication: $(\tilde{z}_1^{k + 1}, \dots, \tilde{z}_M^{k + 1}) = \texttt{ACOGWMC} (\hat{z}_1^{k + 1}, \dots, \hat{z}_M^{k + 1}, H)$.
6. Each machine $m$ computes $z_m^{k + 1} = \mathop{\mathrm{proj}}_{\mathcal{Z}}(\tilde{z}_m^{k + 1})$.

**Theorem 4**. Let Assumptions [Assumption 1](#assum:func-properties){reference-type="ref" reference="assum:func-properties"} and [Assumption 4](#assum:markov-chain){reference-type="ref" reference="assum:markov-chain"} hold. Let problem [\[eq:main_prob\]](#eq:main_prob){reference-type="eqref" reference="eq:main_prob"} be solved by Algorithm [\[alg:DESM-markov\]](#alg:DESM-markov){reference-type="ref" reference="alg:DESM-markov"}. Then, if $\gamma \leq \frac{1}{4 L_{\max}}$ and $H = \mathcal{O} \left( \tau \left[ \sqrt{\chi} + \frac{\rho^2}{(\lambda_{\min}^+)^2} \log \frac{1}{\varepsilon} \right] \right)$, in order to achieve an $\varepsilon$-solution (in terms of $\mathbb{E} [f(z) - f(z^*)] \lesssim \varepsilon$) it takes $$\tilde{\mathcal{O}} \left( \tau \left[ \sqrt{\chi} + \frac{\rho^2}{(\lambda_{\min}^+)^2} \right] \frac{L}{\mu} \; \log^2\frac{1}{\varepsilon} \right) \textit{ communications and}$$ $$\mathcal{O} \left( \frac{L}{\mu} \; \log\frac{1}{\varepsilon} \right) \textit{ local computations on each node.}$$ # Lower bounds {#sec:lower_bounds} The previous section was devoted to applying new consensus algorithms to specific types of time-varying networks: networks with a connected skeleton and graphs changing according to a Markovian law.
In this section, we show that without specific constraints on the network changes, such as a connected skeleton or Markovian changes, acceleration cannot be obtained on general slowly changing graphs, even under very strict constraints on the rate of change. That is, when at most a constant number of edges is changed per iteration, the worst-case dependence on $\chi$ cannot be improved in comparison with arbitrarily changing networks. In particular, we show that it is sufficient to change only two edges per iteration. We start with the definition of the class of black-box procedures over which we evaluate the lower bound. **Definition 1**. An algorithm with $T$ local iterations and $K$ communication rounds that satisfies the following properties is called a *black-box procedure*, denoted by $\textbf{BBP}(T, K)$. Each node $m$ maintains local memories $\mathcal{M}^x_m$ and $\mathcal{M}^y_m$ for the $x$- and $y$-variables, which are initialized as $\mathcal{M}^x_m = \mathcal{M}^y_m = \{0\}$. $\mathcal{M}^x_m$ and $\mathcal{M}^y_m$ are updated as follows: - **Local computation:** Each node $m$ computes and adds to its $\mathcal{M}^x_m$ and $\mathcal{M}^y_m$ a finite number of points $x$, $y$, each satisfying $$x \in \mathop{\mathrm{span}}\{ x', \nabla_x f_m(x'', y'')\}, \quad y \in \mathop{\mathrm{span}}\{ y', \nabla_y f_m(x'', y'')\},$$ for given $x', x'' \in \mathcal{M}^x_m$ and $y', y'' \in \mathcal{M}^y_m$. - **Communication:** $\mathcal{M}^x_m$ and $\mathcal{M}^y_m$ are updated according to $$\mathcal{M}^x_m := \mathop{\mathrm{span}}\left\{ \bigcup\limits_{(i, m) \in \mathcal{E}^k} \mathcal{M}^x_i\right\}, \quad \mathcal{M}^y_m := \mathop{\mathrm{span}}\left\{ \bigcup\limits_{(i, m) \in \mathcal{E}^k} \mathcal{M}^y_i \right\},$$ where $\mathcal{G}^k = (\mathcal{V}, \mathcal{E}^k)$ is the current state of the network.
- **Output:** The final global output at the current moment of time is calculated as $$\hat x \in \mathop{\mathrm{span}}\left\{ \bigcup\limits_{m = 1}^M \mathcal{M}^x_m\right\}, \quad \hat y \in \mathop{\mathrm{span}}\left\{ \bigcup\limits_{m = 1}^M \mathcal{M}^y_m\right\}.$$ To establish the lower bound for the distributed saddle point problem [\[eq:main_prob\]](#eq:main_prob){reference-type="eqref" reference="eq:main_prob"}, we need to provide a \"bad function\" and a \"bad sequence of graphs\" such that no black-box procedure can solve the problem using fewer than a given number of rounds. Using the time-varying network from [@metelev2023decentralized] and the objective function from [@zhang2019lower], we can prove the following theorem. **Theorem 5**. For any $L > \mu > 0$ and any $\chi \geq 1$, there exists a decentralized distributed saddle point problem on $\mathcal{X} \times \mathcal{Y} = \mathbb{R}^n \times \mathbb{R}^n$ (where $n$ is sufficiently large) with $x^*, y^* \ne 0$, a sequence of graphs $\{\mathcal{G}^k = (\mathcal{V}, \mathcal{E}^k)\}_{k = 0}^{\infty}$, where consecutive graphs differ in no more than two edges, and a sequence of corresponding gossip matrices $\{W^k\}_{k = 0}^{\infty}$ with condition number $\chi$, such that for any output $\hat{x}, \hat{y}$ after $K$ communication rounds of any **BBP**, the following estimate holds: $$\|\hat{x} - x^*\|^2 + \|\hat{y} - y^*\|^2 \geq \Omega \left(\exp\left(-\frac{32 \mu}{L - \mu} \cdot \frac{K}{\chi} \right) \| y_0 - y^* \| ^ 2\right).$$ *Proof.* Let us introduce the graph $T_{a, b}$ from [@metelev2023decentralized]. This graph consists of two partitions, left and right, containing $a$ and $b$ vertices, respectively. Each vertex of a partition is connected to its partition root, and the two roots are connected to another vertex called the central root. Consider the network with $|\mathcal{V}| = 2 d + 3$ nodes ($d \geq 2$).
We select a node to be the central root and two other nodes to be the left and right roots. The central root can change over time, but the partition roots are fixed. We also select $\left[\frac{d}{2}\right]$ fixed vertices in each partition and denote them by $\mathcal{V}_1$ (left side) and $\mathcal{V}_2$ (right side). At each communication round $k$, the graph $\mathcal{G}^k$ has the form $T_{a, b}$, where $a + b = 2d$ and $a, b \geq \left[\frac{d}{2}\right]$. The communication network changes alternately in two phases. The first phase starts with the graph $T_{2 d - \left[\frac{d}{2} \right], \left[\frac{d}{2} \right]}$ and ends with the graph $T_{\left[\frac{d}{2} \right], 2 d - \left[\frac{d}{2} \right]}$. At each iteration, the central root moves to the right partition, and one vertex from the left partition that is not in $\mathcal{V}_1$ becomes the new central root. The second phase proceeds in the same way, but from right to left. We modify the objective function from Section B.1 in [@beznosikov2020distributed] according to our graph type: $$\label{eq:node-func} f_m(x, y) = \begin{cases} f_1(x,y) = \frac{M}{2 | \mathcal{V}_2|} \cdot \frac{L}{2} x^T A_1 y + \frac{\mu}{2} \|x\|^2 - \frac{\mu}{2} \|y\|^2 + \frac{M}{2|\mathcal{V}_2|} \cdot \frac{L^2}{2\mu} e_1^Ty, & m \in \mathcal{V}_2 \\ f_2(x,y) = \frac{M}{2|\mathcal{V}_1|} \cdot \frac{L}{2} x^T A_2 y + \frac{\mu}{2} \|x\|^2 - \frac{\mu}{2} \|y\|^2, & m \in \mathcal{V}_1\\ f_3(x,y) = \frac{\mu}{2} \|x\|^2 - \frac{\mu}{2} \|y\|^2, & \text{otherwise} \end{cases}\,,$$ where $e_1 = (1, 0, \dots , 0)$ and $A_1 =$ $\begin{pmatrix} & 1 & 0 & & & & & & & \\ & & 1 & -2 & & & & & & \\ & & & 1 & 0 & & & & & \\ & & & & 1 & -2 & & & & \\ & & & & & \dots & \dots & & & \\ & & & & & & 1 & -2 & & \\ & & & & & & & 1 & 0 & \\ & & & & & & & & 1 & \end{pmatrix}$, $A_2 =$ $\begin{pmatrix} & 1 & -2 & & & & & & & \\ & & 1 & 0 & & & & & & \\ & & & 1 & -2 & & & & & \\ & & & & 1 & 0 & & & & \\ & & & & & \dots & \dots & & & \\ & & & & & & 1 & 0 & & \\ & & & & & & & 1 & -2 & \\ & & & & & & & & 1 & \end{pmatrix}$. Consider the problem with the global objective function: $$f(x,y) := \frac{1}{M} \sum_{m=1}^{M} f_m(x, y) = \frac{1}{M}\left(|\mathcal{V}_2| \cdot f_1(x, y) + |\mathcal{V}_1| \cdot f_2(x, y) + (M - |\mathcal{V}_1| - |\mathcal{V}_2|) \cdot f_3(x, y)\right)$$ $$\label{eq:global-func} = \frac{L}{2} x^T A y + \frac{\mu}{2} \|x\|^2 - \frac{\mu}{2} \|y\|^2 + \frac{L^2}{4\mu} e_1^Ty, \; \text{with} \;A = \frac{1}{2}(A_1 + A_2).$$ We estimate the number of communication rounds required to obtain a new non-zero element in local memory using the following lemma. **Lemma 2**. Let problem [\[eq:main_prob\]](#eq:main_prob){reference-type="eqref" reference="eq:main_prob"} be solved by any **BBP**. Then after $K$ communication rounds, only the first $\lfloor \frac{K}{d} \rfloor$ coordinates of the global output can be non-zero, while the remaining $n - \lfloor \frac{K}{d} \rfloor$ coordinates are strictly equal to zero. *Proof.* Between two consecutive communication rounds, only nodes from $\mathcal{V}_1$ and $\mathcal{V}_2$ can add a new non-zero coordinate to their local memory, but in this interval the two groups $\mathcal{V}_1$ and $\mathcal{V}_2$ cannot progress simultaneously. Moreover, at most one new non-zero coordinate can be added between two communication rounds (see [@beznosikov2020distributed] for more details). Hence, we constantly have to transfer information from the group of nodes $\mathcal{V}_1$ to $\mathcal{V}_2$ and back to obtain new non-zero coordinates. For each new non-zero coordinate, we need at least one local computation, so $T > K$ is required. We know from [@metelev2023decentralized] that each transfer requires at least $d$ communication rounds. Therefore, after $K$ communication rounds, at most the first $\lfloor \frac{K}{d} \rfloor$ coordinates of the global output can be non-zero. ◻ We use the following auxiliary lemmas to estimate the lower bound on convergence. **Lemma 3**.
**(Lemma 4 from [@beznosikov2020distributed])** Let $\alpha = \frac{4\mu^2}{L^2}$ and let $q = \frac{1}{2}(2 + \alpha - \sqrt{\alpha^2 + 4\alpha}) \in (0,1)$ be the smallest root of $q^2 - (2 + \alpha)q + 1 = 0$, and introduce the approximation $\bar{y}^*$ given by $$\bar{y}_i^* = \frac{q^i}{1-q}.$$ Then the error between the approximation and the true solution of [\[eq:global-func\]](#eq:global-func){reference-type="eqref" reference="eq:global-func"} can be bounded as $$\| \bar{y}^* - y^* \| \leq \frac{q^{n+1}}{\alpha(1-q)}.$$ **Lemma 4**. **(Lemma 5 from [@beznosikov2020distributed])** Consider a distributed saddle point problem of the form [\[eq:node-func\]](#eq:node-func){reference-type="eqref" reference="eq:node-func"}, [\[eq:global-func\]](#eq:global-func){reference-type="eqref" reference="eq:global-func"} with a sequence of graphs $\{\mathcal{G}_k = (\mathcal{V}, \mathcal{E}_k)\}_{k = 0}^{\infty}$ and a sequence of corresponding gossip matrices $\{W(\mathcal{G}_k)\}_{k = 0}^{\infty}$. For any pair $T, K$ ($T > K$), one can find a problem size $n \geq \max \left\{2 \log_q \left( \frac{\alpha}{4 \sqrt{2}} \right), 2K \right\}$, where $\alpha = \frac{4\mu^2}{L^2}$ and $q = \frac{1}{2} (2 + \alpha - \sqrt{\alpha^2 + 4 \alpha}) \in (0,1)$, such that any output $\hat{x}, \hat{y}$ produced by any $\mathbf{BBP}(T, K)$ after $K$ communication rounds and $T$ local computations satisfies $$\|\hat{x} - x^*\|^2 + \|\hat{y} - y^*\|^2 \geq q^{\frac{2K}{d}} \frac{\|y_0 - y^* \|^2}{16}.$$ Using a result from the proof of Proposition 3.6 in [@zhang2019lower], we have $$\ln(q) \geq \frac{q -1}{q} = \frac{2}{1 - \sqrt{\frac{L^2}{\mu^2} + 1}} \geq \frac{2}{1 - \frac{L}{\mu}} = \frac{-2 \mu}{L - \mu}.$$ To each graph in our sequence we assign a weighted Laplacian from Lemma 8 in [@metelev2023decentralized], so that $\chi \leq 8d$.
Hence $$\ln (q) \cdot \frac{2K}{d} \geq \frac{-4 \mu}{L - \mu} \cdot \frac{K}{d} \geq \frac{-32 \mu}{L - \mu} \cdot \frac{K}{\chi}.$$ We get $$q^{\frac{2K}{d}} = \exp \left( \ln (q) \cdot \frac{2K}{d} \right) \geq \exp \left( \frac{-32 \mu}{L - \mu} \cdot \frac{K}{\chi} \right).$$ Thus we obtain $$\|\hat{x} - x^*\|^2 + \|\hat{y} - y^*\|^2 = \Omega \left(\exp\left(-\frac{32 \mu}{L - \mu} \cdot \frac{K}{\chi} \right) \| y_0 - y^* \| ^ 2\right).$$ ◻ **Corollary 3**. In the setting of Theorem 5, the number of communication rounds required to obtain an $\varepsilon$-solution is lower bounded by $$\Omega \left( \chi \frac{L}{\mu} \cdot \log \left( \frac{\| y^* \| ^ 2}{\varepsilon} \right) \right),$$ and the number of local computations on each node is lower bounded by $$\Omega \left( \frac{L}{\mu} \cdot \log \left( \frac{\| y^* \| ^ 2}{\varepsilon} \right) \right).$$ # Conclusion In this paper, we studied min-max optimization over slowly time-varying graphs. We showed that if the graph changes in an adversarial manner and a constant number of edges is changed at each iteration, then the lower complexity bounds coincide with those for arbitrarily changing networks. Moreover, we showed that for particular time-varying graphs -- networks with a connected skeleton and networks with Markovian changes -- acceleration of the communication procedures is possible. We proposed the corresponding algorithms for saddle point problems for these two classes of networks. The research was supported by Russian Science Foundation (project No. 23-11-00229),\ <https://rscf.ru/en/project/23-11-00229/>.
--- author: - | Yongang Wang, Huiqiu Lin[^1]\ School of Mathematics, East China University of Science and Technology,\ Shanghai 200237, P.R. China\ title: "**The largest eigenvalue of $\\mathcal{C}_4^{-}$-free signed graphs[^2]**" --- **Abstract** Let $\mathcal{C}_{k}^{-}$ be the set of all negative $C_k$. For odd cycles, Wang, Hou and Li [@C3free] gave a spectral condition for the existence of a negative $C_3$ in unbalanced signed graphs. For even cycles, we determine the maximum index among all $\mathcal{C}_4^{-}$-free unbalanced signed graphs and completely characterize the extremal signed graph in this paper. This could be regarded as a signed graph version of the results by Nikiforov [@NikiKr] and Zhai and Wang [@zhaiC4]. **Keywords:** Signed graph; eigenvalues; largest eigenvalue **AMS Classification:** 05C50 # Introduction All graphs in this paper are simple. Let $\mathcal{F}$ be a family of graphs. A graph $G$ is $\mathcal{F}$-free if $G$ does not contain any graph in $\mathcal{F}$ as a subgraph. The classical spectral Turán problem is to determine the maximum spectral radius of an $\mathcal{F}$-free graph of order $n$, which is known as the *spectral Turán number* of $\mathcal{F}$. This problem was originally proposed by Nikiforov [@proposeNikiforov]. With regard to unsigned graphs, much attention has been paid to the spectral Turán problem in the past decades, see [@1Babai; @triangleLin; @WilfKr; @NikiC4; @Li; @JCTBlin; @Bollobas]. In this paper, we focus on the spectral Turán problem in signed graphs. A *signed graph* $\Gamma=(G,\sigma)$ consists of a graph $G=(V(G),E(G))$ and a sign function $\sigma : E\rightarrow \{-1,1\}$, where $G$ is its underlying graph and $\sigma$ is its sign function. An edge $e$ is *positive* (*negative*) if $\sigma(e)=1$ (resp. $\sigma(e)=-1$). A cycle $C$ in a signed graph $\Gamma$ is called *positive* (resp. *negative*) if the number of its negative edges is even (resp. odd).
A signed graph is called *balanced* if all its cycles are positive; otherwise, it is called *unbalanced*. The *adjacency* *matrix* of $\Gamma$ is denoted by $A(\Gamma)=(a^{\sigma}_{ij})$, where $a^{\sigma}_{ij} =\sigma(v_{i}v_{j})$ if $v_{i}\sim v_{j}$, and $0$ otherwise. The eigenvalues of $A({\Gamma})$ are called the eigenvalues of $\Gamma$. The largest eigenvalue of ${\Gamma}$ is called the *index* of $\Gamma$ and denoted by $\lambda_1(\Gamma)$. For more details about the notion of signed graphs, we refer to [@treepositive; @cycle]. The spectral Turán problem of signed graphs has been studied in recent years. Let $\mathcal{K}_4^{-}$ be the set of all unbalanced $K_4$. Chen and Yuan [@K4free] gave the spectral Turán number of $\mathcal{K}_4^{-}$. For the largest eigenvalue of a signed graph with certain structures, Koledin and Stanić [@connectedindex] studied connected signed graphs of fixed order, size and number of negative edges that maximize the index of their adjacency matrices. After that, signed graphs maximizing the index in suitable subsets of signed complete graphs have been studied by Ghorbani and Majidi [@maxindex], Li, Lin and Meng [@Meng] and Akbari, Dalvandi, Heydari and Maghasedi [@kedge]. It is well known that the eigenvalues of a balanced signed graph are the same as those of its underlying graph. Therefore, the largest eigenvalue of an unbalanced signed graph has attracted more attention of scholars. In $2019$, Akbari, Belardo, Heydari, Maghasedi and Souri [@unicyclic] determined the signed graphs achieving the minimal or maximal index in the class of unbalanced signed unicyclic graphs. In 2021, He, Li, Shan and Wang [@shuangquan] gave the first five largest indices among all unbalanced signed bicyclic graphs of order $n\geq36$. In $2022$, Brunetti and Stanić [@unbalancedindex] studied the extremal spectral radius among all unbalanced connected signed graphs.
More results on the spectral theory of signed graphs can be found in [@open; @signedturan; @WYQ; @JCTA; @huanghao], where [@open] is an excellent survey of general results and problems on the spectra of signed graphs. The study of cycles from the eigenvalue perspective has a long history, such as $C_{2k+1}$ [@oddNiki], $C_4$ [@NikiKr; @zhaiC4], $C_6$ [@C6Zhai], $C_{2k}$ for $k\geq 4$ [@CDT], cycles of consecutive lengths [@LiNing; @NP; @ZL; @oddNiki] and long cycles [@LiEJC; @GH]. Let $\mathcal{C}_{k}^{-}$ be the set of all negative $C_k$. For signed graphs, Wang, Hou and Li [@C3free] determined the spectral Turán number of $\mathcal{C}_3^{-}.$ Denote by $(G,+)$ (resp. $(G,-)$) the signed graph whose edges are all positive (resp. negative). Note that $(K_n,-)$ is unbalanced and $\mathcal{C}_{2k}^{-}$-free, and its spectral radius is always $n-1$ for $n\geq4$. It is therefore interesting to study the existence of a negative $C_{2k}$ under a largest-eigenvalue condition (see Problem [Problem 1](#problem){reference-type="ref" reference="problem"}). **Problem 1**. *What is the maximum of the largest eigenvalue among all $\mathcal{C}_{2k}^{-}$-free unbalanced signed graphs for $k\geq 2?$* Suppose $\Gamma=(G,\sigma)$ is a signed graph and $U\subset V(G)$. The operation that changes the sign of all edges between $U$ and $V(G)\backslash U$ is called a *switching operation*. If a signed graph $\Gamma^{\prime}$ is obtained from $\Gamma$ by applying finitely many switching operations, then $\Gamma$ is said to be *switching equivalent* to $\Gamma^{\prime}$. For $n\geq5$, we define the signed graphs $\Gamma_1$ and $\Gamma_2$ as shown in Figure [\[fig2\]](#fig2){reference-type="ref" reference="fig2"}, where the circles represent unsigned complete graphs, the red lines represent negative edges and the other lines represent positive edges; in particular, the green lines joined to the circles represent all possible edges between the corresponding vertex sets.
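The switching operation has a convenient matrix form: switching at $U$ replaces $A(\Gamma)$ by $D\,A(\Gamma)\,D$, where $D$ is the diagonal $\pm1$ matrix with $-1$ exactly on $U$, so switching-equivalent signed graphs have the same eigenvalues. A minimal numerical sketch (using a hypothetical small example, $(K_4,+)$ with one edge made negative):

```python
import numpy as np

# Signed K4 with a single negative edge v1v2 (an unbalanced signed graph).
n = 4
A = np.ones((n, n)) - np.eye(n)   # all-positive complete graph (K4, +)
A[0, 1] = A[1, 0] = -1            # make the edge v1v2 negative

# Switching at U = {v1} flips the sign of every edge between v1 and the rest;
# algebraically, A -> D A D with D = diag(-1, 1, 1, 1).
D = np.diag([-1.0, 1.0, 1.0, 1.0])
A_switched = D @ A @ D

# D A D is similar to A, so the spectrum is invariant under switching.
same = np.allclose(np.sort(np.linalg.eigvalsh(A)),
                   np.sort(np.linalg.eigvalsh(A_switched)))
print(same)  # True
```

This invariance is what allows the proofs below to pass freely between $\Gamma$ and a switching-equivalent $\Gamma^{\prime}$.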
In this paper, we give an answer to Problem [Problem 1](#problem){reference-type="ref" reference="problem"} for $k=2$ as follows. **Theorem 1**. *Let $\Gamma=(G,\sigma)$ be an unbalanced signed graph of order $n\geq5$. If $$\lambda_{1}(\Gamma)\geq\lambda_{1}(\Gamma_{1}),$$ then $\Gamma$ contains a negative $C_4$ unless $\Gamma$ is switching equivalent to $\Gamma_{1}$ (see Figure [\[fig2\]](#fig2){reference-type="ref" reference="fig2"}).* # The largest eigenvalues of signed graphs $\Gamma_1$ and $\Gamma_2$ In this section, we shall show that $\lambda_{1}(\Gamma_1)>n-3.$ We first recall the definition of an equitable quotient matrix. Let $M$ be a real symmetric matrix of order $n$, and let $[n]=\{1,2, \ldots, n\}$. Given a partition $\Pi:[n]=X_1 \cup X_2 \cup \cdots \cup X_k$, the matrix $M$ can be written as $$M=\left[\begin{array}{cccc} M_{1,1} & M_{1,2} & \cdots & M_{1, k} \\ M_{2,1} & M_{2,2} & \cdots & M_{2, k} \\ \vdots & \vdots & \ddots & \vdots \\ M_{k, 1} & M_{k, 2} & \cdots & M_{k, k} \end{array}\right] .$$ If, for all $i, j \in\{1,2, \ldots, k\}$, all row sums of $M_{i, j}$ are the same, say $b_{i, j}$, then $\Pi$ is called an *equitable partition* of $M$, and the matrix $Q=\left(b_{i, j}\right)_{i, j=1}^k$ is called an *equitable quotient matrix* of $M$. **Lemma 1**. ***([@fz p.24])**[\[quotient\]]{#quotient label="quotient"} Let $M$ be a real symmetric matrix, and let $Q$ be an equitable quotient matrix of $M$. Then the eigenvalues of $Q$ are also eigenvalues of $M$.* **Lemma 2**. *Let $\Gamma_1$ and $\Gamma_2$ be the signed graphs as shown in Figure [\[fig2\]](#fig2){reference-type="ref" reference="fig2"}. Then we have the following statements.* - *$\lambda_{1}(\Gamma_1)>n-3.$* - *$\lambda_{1}(\Gamma_2)< \lambda_{1}(\Gamma_1).$* *Proof.* $(1)$ Let $J$, $I$ and $O$ denote the all-ones matrix, the identity matrix and the all-zeros matrix, respectively.
By a suitable partition, $$A(\Gamma_1)=\left[\begin{array}{cccc} 0 & -1 & 1 & O\\ -1 & 0& 1 & O\\ 1 & 1& 0 & J\\ O & O& J& J-I\\ \end{array}\right],$$ and $A(\Gamma_1)$ has the equitable quotient matrix $$Q_1=\left[\begin{array}{cccc} 0 & -1 & 1 & 0\\ -1 & 0& 1 & 0\\ 1 & 1& 0 & n-3\\ 0 & 0& 1& n-4\\ \end{array}\right].$$ Note that rank$(A(\Gamma_1)+I)=4$. Then $A(\Gamma_1)+I$ has $0$ as an eigenvalue with multiplicity $n-4.$ Hence, $A(\Gamma_1)$ has $-1$ as an eigenvalue with multiplicity $n-4.$ By a simple calculation, the characteristic polynomial of $Q_1$ is $$f(x)=(x-1)(x^3+(5-n)x^2+(5-2n)x+n-5).$$ It is easy to check that $f(n-3)=-2n+8<0$ and $f(n-2)=(n-3)^2(n+1)>0$. Then $\lambda_1(Q_1)>n-3.$ Since $f(-1)\neq0,$ we have $\lambda_1(\Gamma_1)=\lambda_1(Q_1)>n-3$ by Lemma [\[quotient\]](#quotient){reference-type="ref" reference="quotient"}. $(2)$ For $5\leq n\leq 6,$ by a direct calculation, we have $\lambda_{1}(\Gamma_2)< \lambda_{1}(\Gamma_1).$ For $n\geq7,$ by a suitable partition, $$A(\Gamma_2)=\left[\begin{array}{cccc} 0 & -1 & J & O\\ -1 & 0& J & O\\ J & J& O_{2\times 2} & J\\ O & O& J& J-I\\ \end{array}\right],$$ and $A(\Gamma_2)$ has the equitable quotient matrix $$Q_2=\left[\begin{array}{cccc} 0 & -1 & 2 & 0\\ -1 & 0& 2 & 0\\ 1 & 1& 0 & n-4\\ 0 & 0& 2& n-5\\ \end{array}\right].$$ Note that rank$(A(\Gamma_2))\leq n-1$ and rank$(A(\Gamma_2)+I)=5$. Then $A(\Gamma_2)$ has $0$ as an eigenvalue with multiplicity at least $1,$ and $A(\Gamma_2)$ has $-1$ as an eigenvalue with multiplicity $n-5.$ By a simple calculation, the characteristic polynomial of $Q_2$ is $$g(x)=(x-1)h(x),$$ where $h(x)=x^3+(6-n)x^2+(9-3n)x+2n-12$. Observe that $h(-\infty)<0,$ $h(0)>0,$ $h(n-4)<0$ and $h(n-3)>0.$ Then the three roots of $h(x)$ lie in $(-\infty,0),$ $(0,n-4)$ and $(n-4,n-3)$, respectively.
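As a numerical sanity check (illustration only, not part of the proof), one can assemble $A(\Gamma_1)$ from the block form above for a fixed small order, say $n=8$, and confirm that its largest eigenvalue agrees with $\lambda_1(Q_1)$ and exceeds $n-3$:

```python
import numpy as np

n = 8  # any n >= 5 works; n = 8 is an arbitrary test size
A = np.zeros((n, n))
A[0, 1] = A[1, 0] = -1            # the negative edge v1v2
A[0, 2] = A[2, 0] = 1             # positive edges v1v3 and v2v3
A[1, 2] = A[2, 1] = 1
A[2, 3:] = 1                      # v3 joined to every vertex of the clique
A[3:, 2] = 1
A[3:, 3:] = 1 - np.eye(n - 3)     # all-positive clique K_{n-3}

# Equitable quotient matrix Q1 from the proof of Lemma 2(1).
Q1 = np.array([[0, -1, 1, 0],
               [-1, 0, 1, 0],
               [1, 1, 0, n - 3],
               [0, 0, 1, n - 4]], dtype=float)

lam_A = max(np.linalg.eigvalsh(A))
lam_Q = max(np.linalg.eigvals(Q1).real)
print(lam_A > n - 3)               # True
print(abs(lam_A - lam_Q) < 1e-9)   # True: the remaining eigenvalues are -1
```

The agreement reflects Lemma 1 together with the fact that the eigenvalues of $A(\Gamma_1)$ other than those of $Q_1$ all equal $-1$.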
Since $g(0)\neq0$ and $g(-1)\neq0,$ we have $\lambda_1(\Gamma_2)=\lambda_1(Q_2)<n-3<\lambda_1(\Gamma_1)$ by Lemma [\[quotient\]](#quotient){reference-type="ref" reference="quotient"} and Lemma [Lemma 2](#dayu n3){reference-type="ref" reference="dayu n3"}$(1)$. ◻ # Proof of Theorem [Theorem 1](#C4-free){reference-type="ref" reference="C4-free"} {#proof-of-theorem-c4-free} By the table of the spectra of signed graphs on five vertices [@table], we can check that Theorem [Theorem 1](#C4-free){reference-type="ref" reference="C4-free"} is true for $n=5.$ Therefore, we now assume that $n\geq 6.$ We first give two lemmas which are needed in the proof of Theorem [Theorem 1](#C4-free){reference-type="ref" reference="C4-free"}. **Lemma 3**. ***( [@SGX Lemma 2.5])**[\[Nonnegative\]]{#Nonnegative label="Nonnegative"} Let $\Gamma$ be a signed graph. Then there exists a signed graph $\Gamma^{\prime}$ switching equivalent to $\Gamma$ such that $A(\Gamma^{\prime})$ has a non-negative eigenvector corresponding to $\lambda_{1}(\Gamma^{\prime})$.* **Lemma 4**. ***([@treepositive Proposition 3.2])**[\[remaincycle\]]{#remaincycle label="remaincycle"} Two signed graphs with the same underlying graph are switching equivalent if and only if they have the same set of positive cycles.* Nikiforov [@NikiKr] and Zhai and Wang [@zhaiC4] determined spectral conditions for the existence of $C_4$ for odd $n$ and even $n$, respectively, and they further characterized the corresponding spectral extremal graphs. **Lemma 5**. *Let $G$ be a $C_4$-free graph of order $n$ with $\lambda_1(G)=\lambda.$ Then* *$(i)$ **([@NikiKr])** If $n$ is odd, then $\lambda^{2}-\lambda-(n-1) \leq 0.$* *$(ii)$ **([@zhaiC4])** If $n$ is even, then $\lambda^{3}-\lambda^{2}-(n-1)\lambda+1 \leq 0.$* Now, we are in a position to give the proof of Theorem [Theorem 1](#C4-free){reference-type="ref" reference="C4-free"}.
*Proof.* Suppose that $\Gamma=(G,\sigma)$ has the maximum index among all $\mathcal{C}_{4}^{-}$-free unbalanced signed graphs. We shall show that $\Gamma$ is switching equivalent to $\Gamma_1.$ Let $\Gamma^{\prime}=(G,\sigma^{\prime})$ be a signed graph switching equivalent to $\Gamma.$ By Lemma [\[Nonnegative\]](#Nonnegative){reference-type="ref" reference="Nonnegative"}, we can assume that $A(\Gamma^{\prime})$ has a non-negative eigenvector corresponding to $\lambda_{1}(\Gamma^{\prime})$. Then by Lemma [\[remaincycle\]](#remaincycle){reference-type="ref" reference="remaincycle"}, $\Gamma^{\prime}$ is also unbalanced and $\mathcal{C}_{4}^{-}$-free. Furthermore, $\Gamma^{\prime}$ also has the maximum index among all $\mathcal{C}_{4}^{-}$-free unbalanced signed graphs. Set $V(\Gamma^{\prime})=\{v_1,v_2,\ldots,v_n\}.$ Let $x=(x_1,x_2,\ldots,x_n)^{\top}$ be the non-negative unit eigenvector of $A(\Gamma^{\prime})$ corresponding to $\lambda_1(\Gamma^{\prime}),$ where $x_i$ corresponds to the vertex $v_i$ for $1\leq i\leq n.$ Then $$\lambda_1(\Gamma^{\prime})=x^{\top}A(\Gamma^{\prime})x.$$ Since $\Gamma_{1}$ is unbalanced and $\mathcal{C}_{4}^{-}$-free, we may suppose that $\lambda_1(\Gamma^{\prime})\geq\lambda_1(\Gamma_{1})>n-3$ by Lemma [Lemma 2](#dayu n3){reference-type="ref" reference="dayu n3"}. Now we begin to analyze the structure of $\Gamma^{\prime}$. First we give some claims. **Claim 1**. *[\[Claim1\]]{#Claim1 label="Claim1"} $x$ has at most one zero coordinate.* Otherwise, without loss of generality, assume $x_1=x_2=0,$ then $$\begin{aligned} \lambda_1(\Gamma^{\prime})&=x^{\top}A(\Gamma^{\prime})x=(x_3,x_4,\ldots,x_n)A(\Gamma^{\prime}-v_1-v_2)(x_3,x_4,\ldots,x_n)^{\top}\\ &\leq\lambda_1(\Gamma^{\prime}-v_1-v_2)\leq\lambda_1(K_{n-2})=n-3, \end{aligned}$$ a contradiction. So Claim [\[Claim1\]](#Claim1){reference-type="ref" reference="Claim1"} holds. Let $N_{\Gamma^{\prime}}(v_i)$ denote the set of neighbours of $v_i$ in $\Gamma^{\prime}.$ **Claim 2**. 
*$\Gamma^{\prime}$ is connected.* Otherwise, assume $\Gamma^{\prime}_1$ and $\Gamma^{\prime}_2$ are two distinct connected components of $\Gamma^{\prime}$, where $\lambda_1(\Gamma^{\prime})=\lambda_1(\Gamma^{\prime}_1)$. Without loss of generality, we choose two vertices $v_i\in V(\Gamma^{\prime}_1)$ and $v_j\in V({\Gamma^{\prime}_2}).$ Then we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by adding a positive edge $v_iv_j.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_ix_j\geq0. \end{aligned}$$ If $\lambda_1(\Gamma^{\ast})=\lambda_1(\Gamma^{\prime})$, then $x$ is also an eigenvector of $A(\Gamma^{\ast})$ corresponding to $\lambda_1(\Gamma^{\ast}).$ Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_{i}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_i)}\sigma^{\prime}(v_sv_i)x_s,$$ $$\lambda_1(\Gamma^{\prime})x_{j}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_j)}\sigma^{\prime}(v_sv_j)x_s,$$ $$\lambda_1(\Gamma^{\ast})x_{i}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_i)}\sigma^{\prime}(v_sv_i)x_s+x_j$$ and $$\lambda_1(\Gamma^{\ast})x_{j}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_j)}\sigma^{\prime}(v_sv_j)x_s+x_i,$$ we obtain that $x_i=x_j=0,$ which contradicts Claim [Claim 1](#1zero){reference-type="ref" reference="1zero"}. Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Since $\Gamma^{\prime}$ is unbalanced, $\Gamma^{\prime}$ contains at least one negative edge and at least one negative cycle. Let $\mathscr{C}$ be one of the shortest negative cycles of $\Gamma^{\prime}$. **Claim 3**. 
*$\mathscr{C}$ contains all negative edges of $\Gamma^{\prime}.$* Otherwise, without loss of generality, assume $e=v_iv_j$ is a negative edge of $\Gamma^{\prime}$ and $e\notin E(\mathscr{C}).$ Then we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by deleting $e.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_ix_j\geq0. \end{aligned}$$ If $\lambda_1(\Gamma^{\ast})=\lambda_1(\Gamma^{\prime})$, then $x$ is also an eigenvector of $A(\Gamma^{\ast})$ corresponding to $\lambda_1(\Gamma^{\ast}).$ Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_{i}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_i)}\sigma^{\prime}(v_sv_i)x_s,$$ $$\lambda_1(\Gamma^{\prime})x_{j}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_j)}\sigma^{\prime}(v_sv_j)x_s,$$ $$\lambda_1(\Gamma^{\ast})x_{i}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_i)}\sigma^{\prime}(v_sv_i)x_s+x_j$$ and $$\lambda_1(\Gamma^{\ast})x_{j}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_j)}\sigma^{\prime}(v_sv_j)x_s+x_i,$$ we obtain that $x_i=x_j=0,$ which contradicts Claim [Claim 1](#1zero){reference-type="ref" reference="1zero"}. Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. **Claim 4**. *$G$ contains $C_4$ as a subgraph.* Otherwise, assume $G$ is $C_4$-free. Then by Lemma [Lemma 5](#GC4free){reference-type="ref" reference="GC4free"}, $\lambda_1(\Gamma^{\prime})\leq \lambda_1(G)<n-3,$ a contradiction. Let $l=|V(\mathscr{C})|.$ Without loss of generality, we can suppose $\mathscr{C}=v_1v_2\cdots v_{l-1}v_lv_1.$ For $1\leq i<j\leq l,$ we assert that $v_{i}\nsim v_j$ if $j-i\geq2$ and $v_iv_j\neq v_1v_l$.
Otherwise, there exists a shorter negative cycle than $\mathscr{C}$, which contradicts the choice of $\mathscr{C}.$ Now, we define the signed graphs $\Gamma_{3}$ and $\Gamma_{4}$ as shown in Figure [\[fig45\]](#fig45){reference-type="ref" reference="fig45"}, where the black and red lines represent positive and negative edges, respectively; in particular, the blue lines represent edges with uncertain signs. **Claim 5**. *All edges of any cycle of order $4$ in $\Gamma^{\prime}$ are positive.* Otherwise, assume that $C_4^{\prime}$ is a signed cycle of order $4$ in $\Gamma^{\prime}$ and $C_4^{\prime}$ contains at least one negative edge. Since $\Gamma^{\prime}$ is $\mathcal{C}_4^{-}$-free, $C_4^{\prime}$ contains two or four negative edges. If $C_4^{\prime}$ contains four negative edges, then by Claim [Claim 3](#containallnegative){reference-type="ref" reference="containallnegative"}, $\mathscr{C}$ contains four negative edges, and thus $\mathscr{C}$ is positive, a contradiction. If $C_4^{\prime}$ contains two negative edges, say $e_1$ and $e_2$, then we assert that $e_1$ and $e_2$ must share a common vertex. Otherwise, without loss of generality, let $e_1=v_1v_2$ and $e_2=v_3v_4.$ Then $e_1,e_2\in E(\mathscr{C})$ and $l\geq 5.$ Thus, there exists a shorter negative cycle than $\mathscr{C}$, a contradiction. Without loss of generality, let $e_1=v_1v_2$ and $e_2=v_2v_3.$ Then $\Gamma^{\prime}$ must contain one of the signed graphs $\Gamma_{3}$ and $\Gamma_{4}$ (see Figure [\[fig45\]](#fig45){reference-type="ref" reference="fig45"}) as a subgraph, and we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by deleting $e_1$ and $e_2.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_2(x_1+x_3)\geq0.
\end{aligned}$$ If $\lambda_1(\Gamma^{\ast})=\lambda_1(\Gamma^{\prime})$, then $x$ is also an eigenvector of $A(\Gamma^{\ast})$ corresponding to $\lambda_1(\Gamma^{\ast}).$ Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_{2}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_2)}\sigma^{\prime}(v_sv_2)x_s$$ and $$\lambda_1(\Gamma^{\ast})x_{2}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_2)}\sigma^{\prime}(v_sv_2)x_s+x_1+x_3,$$ we obtain that $x_1=x_3=0,$ which contradicts Claim [Claim 1](#1zero){reference-type="ref" reference="1zero"}. Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. **Claim 6**. * $\Gamma^{\prime}$ contains exactly one negative edge.* Otherwise, assume $\Gamma^{\prime}$ contains $m$ $(m\neq1)$ negative edges. By Claim [Claim 3](#containallnegative){reference-type="ref" reference="containallnegative"}, $m\geq3$ and $m$ is odd. We first consider that $l=3.$ Without loss of generality, let $v_1v_2$ and $v_2v_3$ be two negative edges of $\Gamma^{\prime}.$ Then we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by reversing the sign of $v_1v_2$ and $v_2v_3.$ By Claim [Claim 5](#ALLedgeC4positive){reference-type="ref" reference="ALLedgeC4positive"}, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. Furthermore, by Rayleigh principle, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2(x_1x_2+x_2x_3)-2(-x_1x_2-x_2x_3)\\ &=4x_2(x_1+x_3)\geq0. 
\end{aligned}$$ If $\lambda_1(\Gamma^{\ast})=\lambda_1(\Gamma^{\prime})$, then $x$ is also an eigenvector of $A(\Gamma^{\ast})$ corresponding to $\lambda_1(\Gamma^{\ast}).$ Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_{2}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_2)}\sigma^{\prime}(v_sv_2)x_s$$ and $$\lambda_1(\Gamma^{\ast})x_{2}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_2)}\sigma^{\prime}(v_sv_2)x_s+2(x_1+x_3),$$ we obtain that $x_1=x_3=0,$ which contradicts Claim [Claim 1](#1zero){reference-type="ref" reference="1zero"}. Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Now, we consider that $l\geq 5.$ Without loss of generality, let $v_1v_2$ and $v_3v_4$ be two negative edges of $\Gamma^{\prime}.$ Then we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by reversing the sign of $v_1v_2$ and $v_3v_4.$ By Claim [Claim 5](#ALLedgeC4positive){reference-type="ref" reference="ALLedgeC4positive"}, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. Furthermore, by Rayleigh principle and Claim [Claim 1](#1zero){reference-type="ref" reference="1zero"}, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2(x_1x_2+x_3x_4)-2(-x_1x_2-x_3x_4)\\ &=4(x_1x_2+x_3x_4)>0. \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Thus, Claim [Claim 6](#only1negative){reference-type="ref" reference="only1negative"} holds. Without loss of generality, by Claim [Claim 6](#only1negative){reference-type="ref" reference="only1negative"}, we can suppose that $\mathscr{C}=v_1v_2\cdots v_{l-1}v_lv_1$ and $v_1v_2$ is the unique negative edge of $\Gamma^{\prime}.$ Let $d_{\Gamma^{\prime}}(v_i)$ denote the degree of $v_i$ in $\Gamma^{\prime}.$ **Claim 7**. 
*$x_i>0$ for $3\leq i\leq n.$* Otherwise, assume $x_i=0$ for $3\leq i\leq n.$ By Claims [Claim 2](#connected){reference-type="ref" reference="connected"} and [Claim 6](#only1negative){reference-type="ref" reference="only1negative"}, $d_{\Gamma^{\prime}}(v_i)\geq 1$ and all edges incident to $v_i$ are positive. Based on the following equation, $$0=\lambda_1(\Gamma^{\prime})x_i=\sum\limits_{v_j\in N_{\Gamma^{\prime}}(v_i)}x_j,$$ we have $x_j=x_i=0$ for any $v_j\in N_{\Gamma^{\prime}}(v_i),$ which contradicts Claim [Claim 1](#1zero){reference-type="ref" reference="1zero"}. Next, without loss of generality, we suppose $x_1\geq x_2\geq0.$ **Claim 8**. *$l=3.$* Otherwise, assume $l\geq5$. We first consider that $l\geq6.$ Then $v_{3}\nsim v_{l-1}$ and $v_{1}\nsim v_{l-1},$ and we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by adding a positive edge $v_3v_{l-1}.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle and Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_3x_{l-1}>0.\\ \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Thus, $l=5,$ $\mathscr{C}=v_1v_2v_3v_4v_5v_1,$ $v_1\nsim v_3,$ $v_2\nsim v_5,$ $v_1\nsim v_4$ and $v_2\nsim v_4.$ Let $W_1=N_{\Gamma^{\prime}}(v_1)\cap N_{\Gamma^{\prime}}(v_5)$ and $W_2=N_{\Gamma^{\prime}}(v_2)\cap N_{\Gamma^{\prime}}(v_3).$ We assert that $W_1\neq \emptyset.$ Otherwise, assume $W_1= \emptyset.$ Then we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by adding a positive edge $v_2v_5.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. 
By Rayleigh principle, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_2x_5\geq0. \end{aligned}$$ If $\lambda_1(\Gamma^{\ast})=\lambda_1(\Gamma^{\prime})$, then $x$ is also an eigenvector of $A(\Gamma^{\ast})$ corresponding to $\lambda_1(\Gamma^{\ast}).$ Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_{2}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_2)}\sigma^{\prime}(v_sv_2)x_s$$ and $$\lambda_1(\Gamma^{\ast})x_{2}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_2)}\sigma^{\prime}(v_sv_2)x_s+x_5,$$ we obtain that $x_5=0,$ which contradicts Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}. Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Similarly, $W_2\neq \emptyset.$ Recall that $\Gamma^{\prime}$ is $\mathcal{C}_{4}^-$-free. Then for any $v_p\in W_1,$ $v_p\nsim v_2$ and $v_p\nsim v_3$ and for any $v_q\in W_2,$ $v_q\nsim v_1$ and $v_q\nsim v_5.$ If $x_5\geq x_3,$ then for all $v_q\in W_2,$ we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by rotating all positive edges $v_2v_q$ to the non-edge position $v_1v_q,$ rotating all positive edges $v_3v_q$ to the non-edge position $v_5v_q$ and adding a positive edge $v_1v_3.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle and Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2\sum\limits_{v_q\in W_2}x_q(x_1-x_2)+2\sum\limits_{v_q\in W_2}x_q(x_5-x_3)+2x_1x_3\\ &>0. \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. 
If $x_5 \leq x_3,$ then for all $v_p\in W_1$ and $v_q\in W_2,$ we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by rotating all positive edges $v_5v_p$ to the non-edge position $v_3v_p,$ rotating all positive edges $v_2v_q$ to the non-edge position $v_1v_q,$ deleting the positive edge $v_2v_3$ and adding two positive edges $v_1v_3$ and $v_2v_5.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle and Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2\sum\limits_{v_p\in W_1}x_p(x_3-x_5)+2\sum\limits_{v_q\in W_2}x_q(x_1-x_2)+2x_3(x_1-x_2)+2x_2x_5 \\ &>0. \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Thus, Claim [Claim 8](#k=3){reference-type="ref" reference="k=3"} holds. By Claims [Claim 3](#containallnegative){reference-type="ref" reference="containallnegative"}, [Claim 6](#only1negative){reference-type="ref" reference="only1negative"} and [Claim 8](#k=3){reference-type="ref" reference="k=3"}, we have $\mathscr{C}=v_1v_2v_3v_1$ and $v_1v_2$ is the unique negative edge of $\Gamma^{\prime}$. Without loss of generality, suppose $x_r=\max\limits_{1\leq i\leq n}{x_i}.$ **Claim 9**. *$d_{\Gamma^{\prime}}(v_r)\geq n-2.$* Otherwise, assume $d_{\Gamma^{\prime}}(v_r)\leq n-3$. Then $$\lambda_1(\Gamma^{\prime})x_r=\sum_{v_j\in N_{\Gamma^{\prime}}(v_r)}\sigma^{\prime}(v_rv_j){x_j}\leq(n-3)x_r.$$ Thus, $\lambda_1(\Gamma^{\prime})\leq n-3,$ a contradiction. For $S\subset V(G),$ we denote by $G[S]$ the subgraph of $G$ induced by $S$ and $\Gamma^{\prime}[S]$ the signed induced subgraph of $\Gamma^{\prime}=(G,\sigma^{\prime})$ whose underlying graph is $G[S]$ and edges have the same signs as them in $\Gamma^{\prime}$. **Claim 10**. 
*$d_{\Gamma^{\prime}}(v_r)=n-1$.* Otherwise, assume $d_{\Gamma^{\prime}}(v_r)=n-2$ by Claim [Claim 9](#duwei N-2){reference-type="ref" reference="duwei N-2"}. We first consider that $r=1.$ Without loss of generality, let $N_{\Gamma^{\prime}}(v_r)=\{v_2,v_3,\ldots,v_{n-1}\}.$ Then $$\begin{aligned}\lambda_1(\Gamma^{\prime})x_{r}&=\lambda_1(\Gamma^{\prime})x_{1}=-x_2+\sum_{3\leq s\leq n-1}x_s\\ &\leq 0+(n-3)x_r=(n-3)x_r. \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\prime})\leq n-3,$ a contradiction. Similarly, $r\neq2.$ Next, we consider that $r=3.$ Without loss of generality, let $N_{\Gamma^{\prime}}(v_r)=\{v_1,v_2,v_5,\ldots,v_{n}\}.$ Since $\Gamma^{\prime}$ is $\mathcal{C}_4^-$-free, we have $v_i\nsim v_j$ for any $1\leq i\leq2$ and $5\leq j\leq n.$ We assert that $\Gamma^{\prime}[V(\Gamma^{\prime})\backslash\{v_1,v_2,v_3,v_4\}]=(K_{n-4},+).$ Otherwise, without loss of generality, assume $v_5\nsim v_6.$ Then we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by adding a positive edge $v_5v_6.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle and Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_5x_6>0. \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Similarly, $v_4v_i$ is a positive edge of $\Gamma^{\prime}$ for any $v_i\in V(\Gamma^{\prime})\backslash\{v_3,v_4\}.$ Thus, $\Gamma^{\prime}=\Gamma_{2}$ (see Figure [\[fig2\]](#fig2){reference-type="ref" reference="fig2"}). However, by Lemma [Lemma 2](#dayu n3){reference-type="ref" reference="dayu n3"}, $\lambda_1(\Gamma^{\prime})=\lambda_1(\Gamma_{2})<\lambda_1(\Gamma_{1}),$ a contradiction. 
It remains to consider the case $r>3.$ We assert that $v_r\nsim v_3.$ Otherwise, assume $v_r\sim v_3.$ Since $d_{\Gamma^{\prime}}(v_r)=n-2,$ at least one of $v_1$ and $v_2$ is adjacent to $v_r.$ Then there must exist a negative cycle $v_1v_2v_3v_rv_1$ or $v_1v_2v_rv_3v_1$ of order $4,$ a contradiction. Thus, $N_{\Gamma^{\prime}}(v_r)=V(\Gamma^{\prime})\backslash\{v_3,v_r\}.$ By similar arguments, we have $\Gamma^{\prime}=\Gamma_{2},$ a contradiction. So Claim [Claim 10](#d=n-1){reference-type="ref" reference="d=n-1"} holds. **Claim 11**. *$r=3.$* Otherwise, assume $r\neq 3.$ Recall that $x_r=\max\limits_{1\leq i\leq n}{x_i}$ and $d_{\Gamma^{\prime}}(v_r)=n-1.$ If $r>3,$ then there must exist a negative cycle $v_1v_2v_3v_rv_1$ of order $4$, a contradiction. If $r=1,$ then we assert that $x_2=\min\limits_{1\leq i\leq n}{x_i}.$ Otherwise, there exists $j\neq 2$ such that $x_j=\min\limits_{1\leq i\leq n}{x_i}<x_2.$ Then $$\begin{aligned}\lambda_1(\Gamma^{\prime})x_{r}&=\lambda_1(\Gamma^{\prime})x_{1}=-x_2+\sum_{3\leq s\leq n}x_s\\ &\leq (x_j-x_2)+(n-3)x_r\\ &< 0+(n-3)x_r=(n-3)x_r. \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\prime})< n-3,$ a contradiction. Now, we assert that at least one vertex of $V(\Gamma^{\prime})\backslash\{v_1,v_2,v_3\}$ is not adjacent to $v_2.$ Otherwise, assume that all vertices of $V(\Gamma^{\prime})\backslash\{v_1,v_2,v_3\}$ are adjacent to $v_2$. Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_1=-x_2+\sum\limits_{i=3}^{n}x_i$$ and $$\lambda_1(\Gamma^{\prime})x_2=-x_1+\sum\limits_{i=3}^{n}x_i,$$ we get that $x_1=x_2,$ i.e., $\max\limits_{1\leq i\leq n}{x_i}=\min\limits_{1\leq i\leq n}{x_i}.$ Recall that $\Gamma^{\prime}$ is $\mathcal{C}_4^-$-free. Then for any $4\leq k\leq n,$ $v_k\nsim v_3.$ Thus, $$\lambda_1(\Gamma^{\prime})x_3=x_1+x_2,\ \text{i.e.,} \ \lambda_1(\Gamma^{\prime})x_1=2x_1,$$ a contradiction.
Therefore, without loss of generality, we can suppose that $v_4$ is not adjacent to $v_2.$ For any $5\leq i\leq n,$ if $v_i\sim v_2,$ then $v_i\nsim v_4$ since $\Gamma^{\prime}$ is $\mathcal{C}_4^-$-free. We can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by rotating the positive edge $v_iv_2$ to the non-edge position $v_iv_4.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_i(x_4-x_2)\geq0. \end{aligned}$$ If $\lambda_1(\Gamma^{\ast})=\lambda_1(\Gamma^{\prime})$, then $x$ is also an eigenvector of $A(\Gamma^{\ast})$ corresponding to $\lambda_1(\Gamma^{\ast}).$ Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_{4}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_4)}\sigma^{\prime}(v_sv_4)x_s$$ and $$\lambda_1(\Gamma^{\ast})x_{4}=\sum_{v_s\in N_{\Gamma^{\prime}}(v_4)}\sigma^{\prime}(v_sv_4)x_s+x_i,$$ we obtain that $x_i=0,$ which contradicts Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}. Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Thus, for any $4\leq k\leq n,$ $v_k\nsim v_2$ and $v_k\nsim v_3$ (otherwise $v_1v_2v_3v_kv_1$ would be a negative cycle of order $4$). Based on the following equations, $$\lambda_1(\Gamma^{\prime})x_2=x_3-x_1$$ and $$\lambda_1(\Gamma^{\prime})x_3=x_1+x_2,$$ we obtain by adding them that $$\lambda_1(\Gamma^{\prime})(x_2+x_3)=x_2+x_3.$$ Since $x_2+x_3\geq x_3>0$ by Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}, this gives $\lambda_1(\Gamma^{\prime})=1,$ a contradiction. Therefore, $r\neq 1.$ Similarly, $r\neq2.$ So Claim [Claim 11](#r=3){reference-type="ref" reference="r=3"} holds. By Claims [Claim 10](#d=n-1){reference-type="ref" reference="d=n-1"} and [Claim 11](#r=3){reference-type="ref" reference="r=3"}, we have $d_{\Gamma^{\prime}}(v_3)=n-1.$ Thus, $v_i\nsim v_j$ for any $1\leq i\leq2$ and $4\leq j\leq n.$ **Claim 12**.
*$\Gamma^{\prime}[V(\Gamma^{\prime})\backslash\{v_1,v_2\}]=(K_{n-2},+).$* Otherwise, without loss of generality, assume $v_4\nsim v_5.$ Then we can construct a new signed graph $\Gamma^{\ast}$ obtained from $\Gamma^{\prime}$ by adding a positive edge $v_4v_5.$ Clearly, $\Gamma^{\ast}$ is also unbalanced and $\mathcal{C}_4^-$-free. By Rayleigh principle and Claim [Claim 7](#fenliangdayu0){reference-type="ref" reference="fenliangdayu0"}, we obtain that $$\begin{aligned} \lambda_1(\Gamma^{\ast})-\lambda_1(\Gamma^{\prime})&\geq x^{\top}A(\Gamma^{\ast})x-x^{\top}A(\Gamma^{\prime})x\\ &=2x_4x_5>0. \end{aligned}$$ Hence, $\lambda_1(\Gamma^{\ast})>\lambda_1(\Gamma^{\prime}),$ a contradiction. Altogether, $\Gamma^{\prime}=\Gamma_{1},$ which means $\Gamma$ is switching equivalent to $\Gamma_{1}$. This completes the proof. ◻ B.D. Acharya, Spectral criterion for cycle balance in networks, *J. Graph Theory* **4**(1) (1980) 1--11. S. Akbari, F. Belardo, F. Heydari, M. Maghasedi, M. Souri, On the largest eigenvalue of signed unicyclic graphs, *Linear Algebra Appl.* **581** (2019) 145--162. S. Akbari, S. Dalvandi, F. Heydari, M. Maghasedi, Signed complete graphs with maximum index, *Discuss. Math. Graph Theory* **40**(2) (2020) 393--403. L. Babai, B. Guiduli, Spectral extrema for graphs: the Zarankiewicz problem, *Electron. J. Combin.* **16**(1) (2009) Research Paper 123, 8 pp. F. Belardo, S.M. Cioabă, J. Koolen, J.F. Wang, Open problems in the spectral theory of signed graphs, *Art Discrete Appl. Math.* **1**(2) (2018) Paper No. 2.10, 23 pp. B. Bollobás, V. Nikiforov, Cliques and the spectral radius, *J. Combin. Theory Ser. B* **97**(5) (2007) 859--865. A.E. Brouwer, W.H. Haemers, Spectra of Graphs, Universitext, Springer, New York, 2012. M. Brunetti, Z. Stanić, Unbalanced signed graphs with extremal spectral radius or index, *Comput. Appl. Math.* **41**(3) (2022) Paper No. 118, 13 pp. F.C. Bussemaker, P.J. Cameron, J.J. Seidel, S.V.
Tsaranov, Tables of signed graphs, EUT Report 91-WSK-01, Eindhoven, 1991. F. Chen, X.Y. Yuan, Turán problem for $\mathcal{K}_4^{-}$-free signed graphs, arXiv:2306.06655. S.M. Cioabă, D. Desai, M. Tait, The spectral even cycle problem, arXiv:2205.00990. J. Gao, X.M. Hou, The spectral radius of graphs without long cycles, *Linear Algebra Appl.* **566** (2019) 17--33. E. Ghorbani, A. Majidi, Signed graphs with maximal index, *Discrete Math.* **344** (2021) 112463. C.X. He, Y.Y. Li, H.Y. Shan, W.Y. Wang, On the index of unbalanced signed bicyclic graphs, *Comput. Appl. Math.* **40**(4) (2021) Paper No. 124, 14 pp. H. Huang, Induced graphs of the hypercube and a proof of the Sensitivity Conjecture, *Ann. of Math.* **190** (2019) 949--955. M.R. Kannan, S. Pragada, Signed spectral Turán type theorems, *Linear Algebra Appl.* **663** (2023) 62--79. T. Koledin, Z. Stanić, Connected signed graphs of fixed order, size, and number of negative edges with maximal index, *Linear Multilinear Algebra* **65** (2017) 2187--2198. B.L. Li, B. Ning, Eigenvalues and cycles of consecutive lengths, *J. Graph Theory* **103**(3) (2023) 486--492. B.L. Li, B. Ning, Stability of Woodall's theorem and spectral conditions for large cycles, *Electron. J. Combin.* **30**(1) (2023) Paper No. 1.39, 20 pp. D. Li, H.Q. Lin, J.X. Meng, Extremal spectral results related to spanning trees of signed complete graphs, *Discrete Math.* **346**(2) (2023) 113250. S.C. Li, W.T. Sun, Y.T. Yu, Adjacency eigenvalues of graphs without short odd cycle, *Discrete Math.* **345** (2022) 112633. H.Q. Lin, B. Ning, B. Wu, Eigenvalues and triangles in graphs, *Combin. Probab. Comput.* **30**(2) (2021) 258--270. V. Nikiforov, Bounds on graph eigenvalues II, *Linear Algebra Appl.* **427** (2007) 183--189. V. Nikiforov, A spectral condition for odd cycles in graphs, *Linear Algebra Appl.* **428** (2008) 1492--1498. V. Nikiforov, The maximum spectral radius of $C_4$-free graphs of given order and size, *Linear Algebra Appl.*
**430**(11--12) (2009) 2898--2905. V. Nikiforov, The spectral radius of graphs without paths and cycles of specified length, *Linear Algebra Appl.* **432** (2010) 2243--2256. B. Ning, X. Peng, Extensions of the Erdős-Gallai theorem and Luo's theorem, *Combin. Probab. Comput.* **29**(1) (2020) 128--136. G.X. Sun, F. Liu, K.Y. Lan, A note on eigenvalues of signed graphs, *Linear Algebra Appl*. **652** (2022) 125--131. D.J. Wang, Y.P. Hou, D.Q. Li, Extremed signed graphs for triangle, arXiv:2212.11460. W. Wang, Z.D. Yan, J.G. Qian, Eigenvalues and chromatic number of a signed graph, *Linear Algebra Appl.* **619** (2021) 137--145. H. Wilf, Spectral bounds for the clique and independence numbers of graphs, *J. Combin. Theory Ser. B* **40** (1986) 113--117. P. Wissing, E.R. van Dam, Spectral fundamentals and characterizations of signed directed graphs, *J. Combin. Theory Ser. A*. **187** (2022) 105573. T. Zaslavsky, Signed graphs, *Discrete Appl. Math*. **4**(1) (1982) 47--74. M.Q. Zhai, H.Q. Lin, Spectral extrema of graphs: forbidden hexagon, *Discrete Math.* **343** (10) (2020) 112028. M.Q. Zhai, H.Q. Lin, Spectral extrema of $K_{s,t}$-minor free graphs---on a conjecture of M. Tait, *J. Combin. Theory Ser. B* **157** (2022) 184--215. M.Q. Zhai, H.Q. Lin, A strengthening of the spectral chromatic critical edge theorem: books and theta graphs, *J. Graph Theory* **102**(3) (2023) 502--520. M.Q. Zhai, B. Wang, Proof of a conjecture on the spectral radius of $C_4$-free graphs, *Linear Algebra Appl*. **437**(7) (2012) 1641--1647. [^1]: Corresponding author. Email: huiqiulin\@126.com [^2]: This work is supported by the National Natural Science Foundation of China (Grant No. 12271162 ), Natural Science Foundation of Shanghai (No. 22ZR1416300) and The Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning (No. TP2022031).
---
abstract: |
  This paper explores a construction of the elliptic classes of the Springer resolution using the periodic Hecke module. The module is established by employing the Poincaré line bundle over the product of the abelian variety of elliptic cohomology and its dual. Additionally, we introduce the elliptic twisted group algebra, which acts on the periodic module. The construction of the elliptic twisted group algebra is such that the Demazure-Lusztig (DL) operators with dynamical parameters are rational sections. We define elliptic classes as rational sections of the periodic module, and give explicit formulas for their restrictions to fixed points. Our main result shows that a natural assembly of the DL operators defines a rational isomorphism between the periodic module and the one associated to the Langlands dual root system. This isomorphism intertwines the (opposite) elliptic classes with the fixed point basis in the dual system.
address:
- State University of New York at Albany, 1400 Washington Avenue, Albany, NY 12222
- The University of Melbourne, School of Mathematics and Statistics, 813 Swanston Street, Parkville VIC 3010, Australia
- State University of New York at Albany, 1400 Washington Avenue, Albany, NY 12222
author:
- Cristian Lenart
- Gufang Zhao
- Changlong Zhong
title: Elliptic classes via the periodic Hecke module and its Langlands dual
---

# Introduction

Elliptic classes are a type of mathematical object that arises in the study of symplectic resolutions and their associated structures. They are essentially rational sections of certain vector bundles on abelian varieties, and they have many interesting properties and applications. In recent years, there has been a growing interest in understanding the behavior of elliptic classes in the context of Springer resolutions. Notable contributions include those of Aganagić and Okounkov [@AO21], following the idea of the stable envelope, and of Rimányi and Weber [@RW20; @RW22], following Borisov and Libgober [@BL03].
It is worth pointing out that in the latter approach, based on the Bott-Samelson resolution of Schubert varieties, an iteration formula for the elliptic classes is obtained in terms of elliptic Demazure-Lusztig operators with dynamical parameters. In this paper, we explore the elliptic classes via a purely algebraic approach using the structures of root systems and the Poincaré line bundle on an elliptic curve. We start with an algebraic definition of the (localized) elliptic Hecke algebras and the periodic Hecke modules. The elliptic classes are rational sections of the periodic modules. We give explicit formulas for the restrictions of elliptic classes to fixed points. Moreover, the definition of the elliptic Hecke algebra with dynamical parameters is well-behaved with respect to Langlands duality. We formulate such a relation in terms of the periodic Hecke modules. As a consequence, we deduce some interesting relations between elliptic classes of Langlands dual root systems. In the subsequent subsections of the introduction, we present a summary account of our findings, providing detailed descriptions and explanations.

## Elliptic Demazure-Lusztig (DL) operators with dynamical parameters

The idea of the elliptic Hecke algebra and elliptic DL operators goes back to Ginzburg, Kapranov, and Vasserot [@GKV97] (see also [@GKV95; @Ga12]). Rimányi and Weber realized that, after adding the dynamical parameters, there are sections, analogues of the Demazure-Lusztig operators, which satisfy the braid relations. We reformulate Rimányi-Weber's definition in terms of a sheafified version of these operators, using only the elliptic curve, the Poincaré line bundle, and the root systems as input. Following directly from this construction are the polynomial representation and, more closely related to this paper, the periodic module. More precisely, let $E$ be an elliptic curve over ${\mathbb C}$. Let $X^*$ and $X_*$ be the two dual lattices of the root datum.
Define $\mathfrak{A}=X_*\otimes E$; the dual of this abelian variety is then $\mathfrak{A}^\vee\cong X^*\otimes E$. The Weyl group $W$ acts on both $\mathfrak{A}$ and $\mathfrak{A}^\vee$, where the latter action is denoted by $W^{\operatorname{d}}$. Let ${\mathbb L}$ be (a certain $\rho$-shift of) the Poincaré line bundle on $\mathfrak{A}\times\mathfrak{A}^\vee$ (Definition [Definition 3](#def:bbL){reference-type="ref" reference="def:bbL"}). Utilizing the Weyl group actions, we obtain a new line bundle denoted as ${\mathbb S}_{w,v} = {\mathbb L}\otimes (w^{-1})^*(v^{-1})^{{{\operatorname{d}}*}}{\mathbb L}^{-1}$ for $v, w \in W$. Define ${\mathbb S}$ as the direct sum ${\mathbb S}:= \bigoplus_{v, w \in W} {\mathbb S}_{w,v}$, referred to as the elliptic twisted group algebra. The algebra structure is to be understood within a suitable monoidal category of coherent sheaves (see § [3](#sec:bbS){reference-type="ref" reference="sec:bbS"} for detailed explanations). Considering the vector space of rational sections, the algebra structure corresponds to the well-known Kostant-Kumar twisted product, augmented by the additional dynamical Weyl group action (refer to § [3.7](#subsec:Sproductrational){reference-type="ref" reference="subsec:Sproductrational"} for further details). A key feature of this construction is the existence of a rational section of ${\mathbb S}$ associated with each simple root $\alpha$, referred to as the elliptic Demazure-Lusztig operator and denoted by $T^{z,\lambda}_\alpha$ (abbreviated as $T_\alpha$ in this introduction). The appearance of these operators was first noted in [@RW20] in an iteration formula for the elliptic classes of Schubert varieties. They satisfy the braid relations, ensuring that $T_w$ is well-defined for any $w \in W$. By definition, the algebra ${\mathbb S}$ acts on a module called the periodic module, defined as ${\mathbb M}= \bigoplus_{w \in W} w^*{\mathbb L}$.
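For concreteness, in the smallest case $W=\{e,s\}$ (so $s^{-1}=s$), the formula for ${\mathbb S}_{w,v}$ above unwinds to four graded pieces:

```latex
% Rank-one case W = {e, s}: the four graded pieces of the elliptic
% twisted group algebra, from S_{w,v} = L ⊗ (w^{-1})^*(v^{-1})^{d*} L^{-1}.
{\mathbb S}_{e,e}={\mathcal O},\qquad
{\mathbb S}_{s,e}={\mathbb L}\otimes s^{*}{\mathbb L}^{-1},\qquad
{\mathbb S}_{e,s}={\mathbb L}\otimes s^{{\operatorname{d}}*}{\mathbb L}^{-1},\qquad
{\mathbb S}_{s,s}={\mathbb L}\otimes s^{*}s^{{\operatorname{d}}*}{\mathbb L}^{-1},
```

and the periodic module is ${\mathbb M}={\mathbb L}\oplus s^{*}{\mathbb L}$.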
The module ${\mathbb M}$ can be viewed as the equivariant elliptic cohomology of the Springer resolution $T^*G/B$. To establish the fixed point class associated with $w \in W$, we fix a rational section $\frak{c}$ of ${\mathbb L}$. We denote the fixed point class as $$\tilde{f}_w = {}^{w^{-1}}\frak{c},$$ which is a section of ${\mathbb M}_w = w^*{\mathbb L}$. Furthermore, we define the elliptic class for $w \in W$ as $${\mathbf E}_w = T_{w^{-1}}\tilde{f}_e.$$ These classes coincide with the classes defined by the elliptic stable envelope of Aganagic-Okounkov for suitably chosen parameters (chamber and polarization) [@AO21].

## Elliptic classes and Langlands dual

One of the key properties of the periodic module is that it is equipped with a natural pairing, called the Poincaré pairing, which is a bilinear form on the periodic module associated with a given root system. The precise definition of the Poincaré pairings involves a sequence of duality functors, and dual algebras and modules, elaborated in § [5](#sec:dual){reference-type="ref" reference="sec:dual"}. They are used to prove dualities between the elliptic classes ${\mathbf E}_v$ and the opposite elliptic classes ${\mathcal E}_v$, where the latter is defined by ${\mathcal E}_v=T_{v^{-1}w_0}\tilde f_{w_0}$. Moreover, these functors and modules allow us to relate the periodic modules associated with a given root system to those associated with its Langlands dual, and to derive various properties of these modules and their associated structures. More precisely, the periodic module associated to the Langlands dual system is also a vector bundle on $\mathfrak{A}\times\mathfrak{A}^\vee$, defined by $${\mathbb M}^{\operatorname{d}}:=\bigoplus_{w\in W} w^{{\operatorname{d}}*}{\mathbb L}.$$ Indeed, in the definition of ${\mathbb M}$ and ${\mathbb M}^{\operatorname{d}}$ there is an asymmetry between $\mathfrak{A}$ and $\mathfrak{A}^\vee$.
This asymmetry gives a notion of dynamical periodic module ${\mathbb M}^{\operatorname{d}}$ (§ [3](#sec:bbS){reference-type="ref" reference="sec:bbS"}), which we identify as the periodic module associated to the Langlands dual system (§ [6](#sec:Langlands){reference-type="ref" reference="sec:Langlands"}). The module ${\mathbb M}^{\operatorname{d}}$ contains similarly defined fixed-point classes $\tilde f^{\operatorname{d}}_w$ for $w\in W$, and is acted on by the operators $T^{\operatorname{d}}_w$ for $w\in W$, where $T_w^{\operatorname{d}}$ are rational sections of the same algebra ${\mathbb S}$. These operators define elliptic classes ${\mathbf E}^{\operatorname{d}}_w$ and opposite elliptic classes ${\mathcal E}^{\operatorname{d}}_w$ of the Langlands dual system. Naturally, the operators $T_w$ assemble into a rational map of vector bundles on $\mathfrak{A}\times\mathfrak{A}^\vee$ (§ [3.8](#subsec:sum1){reference-type="ref" reference="subsec:sum1"} and Definition [Definition 40](#def:bbT){reference-type="ref" reference="def:bbT"}) $${\mathbb T}:{\mathbb M}\dashrightarrow{\mathbb M}^{\operatorname{d}}.$$ Now we state the main theorem regarding elliptic classes of Langlands dual systems.

**Theorem 1** (Theorem [Theorem 50](#thm:inverse_maps){reference-type="ref" reference="thm:inverse_maps"}).

1. The map ${\mathbb T}$ is a rational isomorphism between vector bundles ${\mathbb M}\dashrightarrow{\mathbb M}^{\operatorname{d}}$;

2. ${\mathbb T}(\tilde f_v)={\mathcal E}_v^{\operatorname{d}}$, and ${\mathbb T}({\mathcal E}_v)=\tilde f^{\operatorname{d}}_v$;

3. The inverse map of ${\mathbb T}$ is given by ${\mathbb T}^{\operatorname{d}}$, which naturally assembles the operators $T^{\operatorname{d}}_v$.

The periodic modules ${\mathbb M}$ and ${\mathbb M}^{\operatorname{d}}$ correspond to the torus equivariant elliptic cohomology of the Springer resolution $T^*G/B$ and its Langlands dual $T^*G^\vee/B^\vee$, respectively.
These modules serve as representation-theoretical models for studying the respective systems. Our main theorem establishes a canonical isomorphism between the equivariant elliptic cohomology of $T^*G/B$ and its Langlands dual $T^*G^\vee/B^\vee$. This isomorphism maps the fixed point basis of $T^*G/B$ to the (opposite) elliptic classes of $T^*G^\vee/B^\vee$ and vice versa. To aid readers' comprehension, we provide a summary of all the classes constructed in the present paper in § [9.3](#subsec:diagram){reference-type="ref" reference="subsec:diagram"}. The proof of the theorem relies on the Poincaré pairing and the adjunction properties of the elliptic DL operators, which play crucial roles. In § [10](#sec:mirror){reference-type="ref" reference="sec:mirror"}, we utilize the same adjunction properties to reproduce a result previously established by Rimányi-Weber [@RW22]. This result compares restriction formulas of (normalized) elliptic classes and their Langlands duals, and it is linked to the concept of 3d-mirror symmetry, as explained in the aforementioned reference. The study of 3d-mirror symmetry (also called symplectic duality) in elliptic cohomology has gained significant attention in recent years, with notable contributions from various researchers (e.g., [@BR23; @FRV18; @H20; @KS22; @RSVZ19; @RSZV22; @RW22; @SZ22]). We refer interested readers to [@AO21 § 1.3.1], which presents a statement related to that of [@RW22]. The main theorem of our paper offers a distinct perspective on this topic. Elliptic classes are defined as rational sections of specific line bundles (direct sums) on an abelian variety. Consequently, the poles and zeros of an elliptic class are constrained by the properties of the line bundle. This constraint enables us to analyze the zeros and poles of the elliptic classes. In § [11](#sec:restriction){reference-type="ref" reference="sec:restriction"}, we provide examples of calculations to illustrate these properties.
Moreover, we define the logarithmic degree in $X^*\otimes X_*$ of pairs $(w,v)\in W^2$. It can be used to recover the zeroes and poles of the restriction coefficients. On the other hand, Allen Knutson has communicated to the authors that he was able to employ pipe dreams to identify restriction coefficients of cohomology Schubert classes with double Schubert polynomials for the root system of type $A$. Given this result, and since the elliptic classes are the sole classes associated with Schubert varieties in elliptic cohomology, one may regard our restriction coefficients as "elliptic double Schubert polynomials" (in type $A$). In this paper, we study elliptic classes in the elliptic Hecke algebra with dynamical parameters and its periodic module. We show that a localization of the elliptic Hecke algebra and its periodic module suffices to capture the essential properties of the elliptic classes. An integral structure of the elliptic Hecke algebra with dynamical parameters, that is, the algebra without localization, is of interest for representation-theoretic purposes. We refer the interested reader to [@ZZ23] for a definition of such an integral structure, and an application to the Fourier-Mukai transform of representations. In order to study the restriction formula of the elliptic classes, one can also derive a formula similar to the ones due to Billey [@billey] (for cohomological Schubert classes, cf. also [@ajs]) and Graham-Willems [@graham; @willems] (for $K$-theory Schubert classes). Such a formula (also known as a root polynomial formula) will offer a perspective different from the one presented in Theorem [Theorem 55](#thm:res){reference-type="ref" reference="thm:res"}, and has certain advantages. While the formula in the mentioned theorem concerns the $a_{v,w}$-coefficients, the root polynomial formula will focus on the $b_{v,w}$-coefficients.
Notably, the exploration of these $b$-coefficients served as the original motivation for the present paper. The root polynomial formula will be presented in the upcoming paper [@LZZ].

## Organization {#organization .unnumbered}

In § [2](#sec:poin){reference-type="ref" reference="sec:poin"} we introduce basic concepts of root systems and their associated structures, together with the Poincaré line bundle, and in § [3](#sec:bbS){reference-type="ref" reference="sec:bbS"} we define the elliptic twisted group algebra ${\mathbb S}$, and the product of its rational sections. We also introduce the periodic module ${\mathbb M}$. In § [4](#sec:S_property){reference-type="ref" reference="sec:S_property"} we study basic properties of ${\mathbb S}$. In § [5](#sec:dual){reference-type="ref" reference="sec:dual"} we introduce the duality functors that are used to construct Poincaré pairings. We compare the construction for the Langlands dual system in § [6](#sec:Langlands){reference-type="ref" reference="sec:Langlands"} with the dynamical periodic module. In § [7](#sec:DL){reference-type="ref" reference="sec:DL"} we define the elliptic DL operators and show that they are rational sections of ${\mathbb S}$. We define the elliptic classes in § [8](#sec:ell_class){reference-type="ref" reference="sec:ell_class"}, study their duals via the Poincaré pairings, and prove the main result. For the convenience of the readers, in § [9](#sec:complete){reference-type="ref" reference="sec:complete"}, we collect a list of all versions of elliptic classes. In § [10](#sec:mirror){reference-type="ref" reference="sec:mirror"}, we deduce a known result due to Rimányi and Weber. In § [11](#sec:restriction){reference-type="ref" reference="sec:restriction"} and § [12](#sec:examples){reference-type="ref" reference="sec:examples"}, we give a formula for the restriction coefficients, and define the logarithmic degree.

## Acknowledgements {#acknowledgements .unnumbered}

G. Z.
is partially supported by the Australian Research Council via grants DE190101222 and DP210103081. C. L. is partially supported by the NSF grant DMS-1855592. C. Z. would like to thank Allen Knutson and Richárd Rimányi for helpful conversations. Part of this research was conducted when C. Z. was visiting the University of Melbourne, the Sydney Mathematical Research Institute, the University of Ottawa, and the Max Planck Institute of Mathematics. He would like to thank these institutes for their hospitality and support.

# The Poincaré line bundle {#sec:poin}

In this section we collect some basic facts about the Poincaré line bundle ${\mathcal P}$ and root systems, and define the $\rho$-shift of ${\mathcal P}$; the latter serves as the polynomial representation of the DL operators with dynamical parameters.

## Coherent sheaves and line bundles

We clarify some notation about the Weyl group action on bundles and their rational sections. If ${\mathcal F}$ is a vector bundle over $X$, $w$ is an automorphism of $X$, and $U$ is an open subset, then a section $f\in {\mathcal F}(U)$ defines a section $f'\in (w^*{\mathcal F})(w^{-1}{U})$, and we will denote $$f'={}^{w^{-1}}f.$$ This is compatible with compositions; that is, given automorphisms $v,w$, a section $f\in {\mathcal F}(U)$ defines a section ${}^{v^{-1}}f\in (v^*{\mathcal F})(v^{-1}(U))$, and $${}^{(vw)^{-1}}f={}^{w^{-1}}({}^{v^{-1}}f)\in (w^*(v^*{\mathcal F}))(w^{-1}(v^{-1}(U)))= ((vw)^*{\mathcal F})((vw)^{-1}(U)).$$ Now let $f:X\to Y$ be a morphism of algebraic varieties; it induces a homomorphism $f^*:\mathop{\mathrm{Pic}}^0(Y)\to \mathop{\mathrm{Pic}}^0(X)$. It has the property that for any $\lambda\in \mathop{\mathrm{Pic}}^0(Y)$, with corresponding line bundle ${\mathcal O}(\lambda)$, we have $f^*{\mathcal O}(\lambda)={\mathcal O}(f^*\lambda)$. We state this property in families and functorially.

**Lemma 1**.
*Let $f:X\to Y$ be a morphism of algebraic varieties, which induces the following maps: $$\xymatrix{ Y\times \mathop{\mathrm{Pic}}^0(Y) & X\times \mathop{\mathrm{Pic}}^0(Y)\ar[l]_{f\times 1}\ar[r]^{1\times f^*} & X\times \mathop{\mathrm{Pic}}^0(X) .}$$ Let ${\mathcal P}_X$ and ${\mathcal P}_Y$ be the tautological line bundles on $X\times\mathop{\mathrm{Pic}}^0(X)$ and $Y\times\mathop{\mathrm{Pic}}^0(Y)$ respectively [@FGA Exercise 9.4.3]. Then, there is a canonical isomorphism $$(1\times f^*)^*{\mathcal P}_X\cong(f\times 1)^*{\mathcal P}_Y,$$ which is furthermore compatible with compositions of morphisms of algebraic varieties.*

*Proof.* This follows from the definition of the Picard functor [@FGA § 9]. ◻

## Elliptic curve

Let $E$ be an elliptic curve over ${\mathbb C}$. We have isomorphisms $$E\cong {\mathbb C}/({\mathbb Z}+{\mathbb Z}\tau)\cong {\mathbb C}^*/q^{\mathbb Z},$$ where $\tau\in {\mathbb C}$ satisfies $\mathop{\mathrm{im}}\tau>0$, and $q=e^{2\pi i\tau}$. Recall the Jacobi theta function $$\vartheta(u)=\frac{1}{2\pi i}(u^{1/2}-u^{-1/2})\prod_{s>0}(1-q^su)(1-q^su^{-1})\cdot \prod_{s>0}(1-q^s)^{-2}, \quad u\in {\mathbb C}^*.$$ Since $\mathop{\mathrm{im}}\tau>0$, we have $|q|<1$, so this product converges and defines a holomorphic function on a double cover of ${\mathbb C}^*$. The line bundle ${\mathcal O}(\{0\})$ becomes trivial when lifted to the cover ${\mathbb C}^*$. Denote the lifting by $\tilde{L}\to {\mathbb C}^*$, which is endowed with a lift $\tilde{s}:{\mathbb C}^*\to \tilde{L}$ of the canonical section. The trivialization $$\xymatrix{ \tilde{L}\ar@/^/[dr]\ar[rr]^{\cong}_{\phi}&&{\mathbb C}^*\times {\mathbb C}\ar[dl]\\ &{\mathbb C}^*\ar@/^/[ul]^{\tilde{s}}& }$$ can be uniquely fixed by the property that $\phi$ commutes with multiplication by $q^{\mathbb Z}$ and that the derivative of $\phi\circ\tilde{s}$ is 1 at $1\in {\mathbb C}^*$ [@Sie p.38].
The above formula for the theta function is written so that it defines a function on ${\mathbb C}^*$ which vanishes to order 1 at $q^{\mathbb Z}$, is non-zero everywhere else, and has derivative equal to 1 at $1\in {\mathbb C}^*$. In what follows, we denote $\theta(x)=\vartheta(e^{2\pi i x})$. We identify $\mathop{\mathrm{Pic}}^0(E)$ with $E$ itself, via the map $E\to \mathop{\mathrm{Pic}}^0(E)$, $\lambda\mapsto \{\lambda\}-\{0\}$. In particular, on $E\times E$, there is a universal line bundle ${\mathcal P}_E$, called the Poincaré line bundle. Writing the coordinate of $E\times E$ as $(z,\lambda)$, the Poincaré line bundle is characterized by the property that for any $\lambda\in E$, we have ${\mathcal P}_E|_{E\times \lambda}={\mathcal O}(\{\lambda\}-\{0\})$, and ${\mathcal P}_E|_{0\times E}={\mathcal O}$. Equivalently, with $\vartheta$ introduced above, a rational section of ${\mathcal P}_E$ is $$\frac{\theta(z-\lambda)}{\theta(z)}.$$ The following standard fact is used frequently in determining rational sections of line bundles over the elliptic curve.

**Proposition 2**. *Let $x\in E$. If $f$ is a rational section of the degree zero line bundle ${\mathcal O}(x)$ on an elliptic curve with zeros $\{p_i, i=1,...,n\}$ and poles $\{q_j, j=1,...,m\}$ (counted with multiplicities), then $$m=n, \text{ and } \sum p_i-\sum q_j=x.$$*

## Root system {#subsec:rootsys}

We recall the notion of root datum following [@SGA3 Exp. XXI, § 1.1]. A root datum of rank $n$ consists of two lattices $X^*$ and $X_*$ of rank $n$ with a perfect ${\mathbb Z}$-valued pairing $(-,-)$, together with non-empty finite subsets of roots $\Phi\subset X^*$ and coroots $\Phi^\vee\subset X_*$ with a bijection $\alpha\mapsto \alpha^\vee$. If the root datum comes from a reductive group with maximal torus $T$, the lattices above are respectively ${\mathbb X}^*(T)$ and ${\mathbb X}_*(T)$, the groups of characters and cocharacters of $T$.
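A classical example, spelled out for concreteness: the root datum of $SL_2$, with maximal torus $T$ the diagonal torus.

```latex
% Root datum of SL_2: T = {diag(t, t^{-1})}, both lattices of rank 1.
X^*={\mathbb X}^*(T)={\mathbb Z}\chi,\ \ \chi\big(\operatorname{diag}(t,t^{-1})\big)=t;
\qquad
X_*={\mathbb X}_*(T)={\mathbb Z}\eta,\ \ \eta(t)=\operatorname{diag}(t,t^{-1}),
```

with pairing $(\chi,\eta)=1$, roots $\Phi=\{\pm\alpha\}$ where $\alpha=2\chi$, and coroots $\Phi^\vee=\{\pm\alpha^\vee\}$ where $\alpha^\vee=\eta$, so that $(\alpha,\alpha^\vee)=2$. In this case $\mathfrak{A}=X_*\otimes E\cong E$ and $\mathfrak{A}^\vee=X^*\otimes E\cong E$, and the Weyl group ${\mathbb Z}/2$ acts by negation on both.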
In the present paper, we fix a set $\Sigma$ of simple roots $\{\alpha_1,\dots,\alpha_n\}$, with simple reflections $s_i:=s_{\alpha_i}$. The Weyl group is denoted by $W$. For each $w\in W$, define the inversion set $\Phi(w)=w\Phi_-\cap \Phi_+$, and the length as $\ell(w)$, with the sign $\epsilon_w=(-1)^{\ell(w)}$. Let $\rho$, resp. $\rho^\vee$, be the half-sum of the positive roots of $\Phi$, resp. $\Phi^\vee$. Define $\mathfrak{A}=X_*\otimes E$, and $\mathfrak{A}^\vee:=X^*\otimes E$. The Weyl group $W$ acts on both $\mathfrak{A}$ and $\mathfrak{A}^\vee$, and hence the product of two copies of the Weyl group acts on $\mathfrak{A}\times\mathfrak{A}^\vee$. To avoid confusion, we write the factor of $W$ acting on $\mathfrak{A}$ as $W$ and the factor acting on $\mathfrak{A}^\vee$ as $W^{\operatorname{d}}$. Similarly, for any element $w\in W$, we write the corresponding element of $W^{\operatorname{d}}$ as $w^{\operatorname{d}}$. For clarity, we call them respectively the *spectral Weyl group action* and the *dynamical Weyl group action*. Since they act on different factors of the product, they obviously commute. The isomorphism $E\cong \mathop{\mathrm{Pic}}^0(E)$ induces an isomorphism $$\mathfrak{A}^\vee\cong \mathop{\mathrm{Pic}}^0(\mathfrak{A}),$$ where $\mathop{\mathrm{Pic}}^0(\mathfrak{A})$ is the dual abelian variety of $\mathfrak{A}$. In particular, the notation $\mathfrak{A}^\vee$ is unambiguous. We now make this isomorphism explicit. We consider a special point $\mu \otimes t\in \mathfrak{A}^\vee=X^*\otimes E$, with $\mu \in X^*$ and $t\in E$. The character $\mu$ defines a map $\chi_\mu :\mathfrak{A}\to E$, and $t\in E=E^\vee$ defines a degree 0 line bundle ${\mathcal O}(t)$ on $E$. Under the above isomorphism, the line bundle on $\mathfrak{A}$ corresponding to $\mu \otimes t$ is $\chi_\mu ^*{\mathcal O}(t)$. In what follows, we write $\mu \otimes t$ simply as $t\mu$, and ${\mathcal O}(t\mu)=\chi^*_\mu{\mathcal O}(t)$.
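On points, the map $\chi_\mu$ is induced by the pairing; unwinding the definitions on a decomposable point gives, for instance:

```latex
% chi_mu on a decomposable point of A = X_* ⊗ E:
\chi_\mu\big(\mu^\vee\otimes t\big)=(\mu,\mu^\vee)\,t\in E,
\qquad \mu\in X^*,\ \mu^\vee\in X_*,\ t\in E.
```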
Also denote $z_\mu=\chi_\mu(z)$ for $z\in \mathfrak{A}$. Dually, each $\mu^\vee\in X_*$ defines a map $\chi_{\mu^\vee}:\mathfrak{A}^\vee\to E$, and we denote $\lambda_{\mu^\vee}=\chi_{\mu^\vee}(\lambda)$ for $\lambda\in \mathfrak{A}^\vee$. For $t\in E$, we similarly have $\chi_{\mu^\vee}^*{\mathcal O}(t)={\mathcal O}(t\mu^\vee)$.

## The $\rho$-shift of the Poincaré line bundle

In this section and in what follows, we fix $\hbar\in E$. On $\mathfrak{A}\times\mathfrak{A}^\vee$ there is a line bundle, the Poincaré line bundle, also denoted by ${\mathcal P}$ [@P03 § 9]. It satisfies the universal property: for any variety $S$ and any line bundle ${\mathcal L}$ over $\mathfrak{A}\times S$ whose restriction to each fiber $\mathfrak{A}\times \{s\}$ lies in $\mathop{\mathrm{Pic}}^0(\mathfrak{A})$, there is a unique map $g:\mathfrak{A}\times S\to \mathfrak{A}\times \mathfrak{A}^\vee$ such that ${\mathcal L}=g^*{\mathcal P}$. In particular, it satisfies the property that ${\mathcal P}|_{\mathfrak{A}\times \lambda}\cong {\mathcal O}(\lambda)$ for any $\lambda\in \mathfrak{A}^\vee$. Such a line bundle is unique if we impose a normalization condition ${\mathcal P}|_{a\times \mathfrak{A}^\vee}\cong {\mathcal O}_{\mathfrak{A}^\vee}$ for a given point $a\in \mathfrak{A}$. In what follows, we take $a=\hbar\rho^\vee$. Here the uniqueness follows from the See-Saw Lemma, which asserts that two line bundles on $\mathfrak{A}\times\mathfrak{A}^\vee$ are isomorphic if their restrictions to $\mathfrak{A}\times\{\lambda\}$ are isomorphic for all $\lambda\in \mathfrak{A}^\vee$, and to $\{z\}\times\mathfrak{A}^\vee$ for some $z\in \mathfrak{A}$. Indeed, this property will be used frequently below to show that two line bundles over $\mathfrak{A}\times \mathfrak{A}^\vee$ are isomorphic. We note that the See-Saw Lemma only implies the existence of an isomorphism but does not necessarily provide a canonical one.
Define the dotted action of $W$ on $\mathfrak{A}$ by $$w_\bullet z=w(z-\hbar\rho^\vee)+\hbar\rho^\vee.$$ By definition, the map $w_\bullet:\mathfrak{A}\to \mathfrak{A}$ induces a map $w_\bullet^*:\mathop{\mathrm{Pic}}^0(\mathfrak{A})\to \mathop{\mathrm{Pic}}^0(\mathfrak{A})$, which is denoted by $(w^{-1})^{\operatorname{d}}$. That is, ${\mathcal O}((w^{-1})^{\operatorname{d}}(\lambda))=w_\bullet^*{\mathcal O}(\lambda)$ as line bundles on $\mathfrak{A}$. Then Lemma [Lemma 1](#lem:Pic_Poincare){reference-type="ref" reference="lem:Pic_Poincare"} gives an isomorphism $$\label{eqn:Pincare_dot} (w_\bullet\times 1)^*{\mathcal P}\cong(1\times (w^{-1})^{\operatorname{d}})^*{\mathcal P},$$ which is compatible with compositions of group elements. [\[subsec:rho-shift\]]{#subsec:rho-shift label="subsec:rho-shift"}

**Definition 3**. Define ${\mathbb L}=(1\times {\operatorname{vor}})^*{\mathcal P}$ on $\mathfrak{A}\times \mathfrak{A}^\vee$, where ${\operatorname{vor}}: \mathfrak{A}^\vee\to \mathfrak{A}^\vee, ~ \lambda\mapsto \lambda+\rho \hbar.$ The line bundle ${\mathbb L}$ will be the polynomial representation of the dynamical elliptic Hecke algebra [@ZZ23]. See also Lemma [Lemma 6](#lem:poly){reference-type="ref" reference="lem:poly"} below.

# The elliptic twisted group algebra {#sec:bbS}

In this section we define the elliptic twisted group algebra ${\mathbb S}$ and the periodic module ${\mathbb M}$, together with some of their variant forms. We also explain the algebra structure of ${\mathbb S}$ and its action on ${\mathbb M}$ in terms of rational sections. Note that, in this paper, a rational section of a vector bundle means a section defined on some open subset.

## The elliptic twisted group algebra {#the-elliptic-twisted-group-algebra}

We define the twisted group algebra, a vector bundle induced by the Weyl group actions on the line bundle ${\mathbb L}$.

**Definition 4**.
We define, for each $w,v\in W$, $${\mathbb S}_{w,v}={\mathbb L}\otimes (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}.$$

## A tensor structure

We consider the category $\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$, that is, the $W\times W^{\operatorname{d}}$-graded category of coherent sheaves on $\mathfrak{A}\times \mathfrak{A}^\vee$. Objects of this category can be written as ${\mathcal F}=\bigoplus_{w\in W,v\in W^{\operatorname{d}}}{\mathcal F}_{w,v}$ with ${\mathcal F}_{w,v}$ in degree $(w,v)$. We then define the tensor product $\star$ by $${\mathcal F}_{w,v}\star{\mathcal F}_{w',v'}':={\mathcal F}_{w,v}\otimes(w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathcal F}'_{w',v'},$$ which is placed in degree $(ww',vv')$. It is straightforward to check that this defines a monoidal structure with unit ${\mathcal O}$, concentrated in degree $(e,e)$.

**Lemma 5**.

1. *[\[lem:associat\]]{#lem:associat label="lem:associat"} We have the following composition properties: $$\begin{aligned} {\mathbb S}_{w_1w_2, v_1v_2}&={\mathbb S}_{w_1,v_1}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}{\mathbb S}_{w_2,v_2},&\quad {\mathbb S}_{w,v}^{-1}&=(w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb S}_{w^{-1}, v^{-1}}.\end{aligned}$$*

2. *The objects $${\mathbb S}_{W\times W^{\operatorname{d}}}:=\bigoplus_{w,v\in W}{\mathbb S}_{w,v}\hbox{ and }~{\mathbb S}_{W\times W^{\operatorname{d}}}^{-1}:=\bigoplus_{w,v\in W}{\mathbb S}_{w,v}^{-1}$$ are monoidal objects, where the degrees of ${\mathbb S}_{w,v}$ and ${\mathbb S}_{w,v}^{-1}$ are both $(w,v)$. For simplicity, we write ${\mathbb S}$ and ${\mathbb S}^{-1}$ respectively.*

*Proof.* (1).
We have $$\begin{aligned} {\mathbb S}_{w_1w_2,v_1v_2}&={\mathbb L}\otimes ((w_1w_2)^{-1})^*((v_1v_2)^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}\\ &={\mathbb L}\otimes (w_1^{-1})^*(w_2^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}(v_2^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}\\ &={\mathbb L}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}\left((w_2^{-1})^*(v_2^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}\right)\\ &={\mathbb L}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}({\mathbb S}_{w_2, v_2}\otimes {\mathbb L}^{-1})\\ &={\mathbb L}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}{\mathbb S}_{w_2,v_2}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}\\ &={\mathbb S}_{w_1,v_1}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}{\mathbb S}_{w_2, v_2}. \end{aligned}$$ This proves the first identity. Taking $w_1w_2=e=v_1v_2$ in the first identity, we obtain the second identity.

(2). From part (1), we have an isomorphism $$\begin{aligned} {\mathbb S}_{w,v}\star{\mathbb S}_{x,y}&={\mathbb S}_{w,v}\otimes (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb S}_{x,y}\overset{\sim}\longrightarrow{\mathbb S}_{wx, vy}.\end{aligned}$$ It is then easy to see that this product is associative. The statement for ${\mathbb S}^{-1}_{W\times W^{\operatorname{d}}}$ can be proved similarly. ◻

The sheaf ${\mathbb S}$, together with its variant forms ${\mathbb S}', {\mathbb S}'', {\mathbb S}(-\lambda), {\mathbb S}(-\lambda)', {\mathbb S}(-\lambda)'', {\mathbb S}(-z), {\mathbb S}(-z)', {\mathbb S}(-z)''$ defined below, are called the elliptic twisted group algebras.

**Lemma 6**.
*The object ${\mathbb L}\in \mathop{\mathrm{Coh}}(\mathfrak{A}\times \mathfrak{A}^\vee)$ is a left module over the monoidal object ${\mathbb S}$.* *Proof.* The action is defined via the isomorphisms $$\begin{aligned} {\mathbb S}_{w,v}\star{\mathbb L}&= {\mathbb S}_{w,v}\otimes (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb L}\overset{\sim}\longrightarrow{\mathbb L},\end{aligned}$$ and it is then straightforward to check associativity. ◻ ## Equivalent tensor structures We define another tensor product on the category $\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$: for objects ${\mathcal F}_{w_1,v_1}$ and ${\mathcal G}_{w_2,v_2}$ of degrees $(w_1,v_1)$ and $(w_2,v_2)$, respectively, define $${\mathcal F}_{w_1, v_1}{\star'}{\mathcal G}_{w_2,v_2}=w_2^*{\mathcal F}_{w_1,v_1}\otimes (v_1^{-1})^{{\operatorname{d}}*}{\mathcal G}_{w_2,v_2},$$ whose degree is $(w_1w_2, v_1v_2)$. Similarly, we define a third tensor product structure on $\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$ by $${\mathcal F}_{w_1,v_1}{\star''}{\mathcal G}_{w_2,v_2}=v_2^{{\operatorname{d}}*}{\mathcal F}_{w_1,v_1}\otimes (w_1^{-1})^*{\mathcal G}_{w_2,v_2}.$$ **Lemma 7**. *We have the following commutative diagram of equivalences between these different tensor monoidal categories: $$\xymatrix{&\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee, \star)\ar[dl]_-{{\mathcal F}_{w,v}\mapsto w^*{\mathcal F}_{w,v}}\ar[dr]^-{{\mathcal F}_{w,v}\mapsto v^{{\operatorname{d}}*}{\mathcal F}_{w,v}}&\\ \mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee, {\star'})\ar[rr]^-{{\mathcal F}_{w,v}\mapsto (w^{-1})^*v^{{\operatorname{d}}*}{\mathcal F}_{w,v}}&&\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee, {\star''}).}$$* *Proof.* The proof is a straightforward check using the definitions.
◻ Define ${\mathbb S}_{w,v}'=w^*{\mathbb S}_{w,v}$, and $${\mathbb S}'={\mathbb S}'_{W\times W^{\operatorname{d}}}=\bigoplus_{w,v}{\mathbb S}'_{w,v},$$ which by Lemma [Lemma 7](#lem:monoidal_equivalence){reference-type="ref" reference="lem:monoidal_equivalence"} is a monoidal object in the category $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), {\star'})$. The algebra structure is obtained from the following isomorphism $${\mathbb S}'_{w_1,v_1}{\star'}{\mathbb S}'_{w_2,v_2}=w_2^*{\mathbb S}'_{w_1,v_1}\otimes (v_1^{-1})^{{\operatorname{d}}*}{\mathbb S}'_{w_2,v_2}\overset{\sim}\longrightarrow{\mathbb S}'_{w_1w_2,v_1v_2}.$$ Similarly, the object ${\mathbb S}''={\mathbb S}''_{W\times W^{\operatorname{d}}}=\bigoplus_{w,v}{\mathbb S}''_{w,v}$ with $${\mathbb S}''_{w,v}:=v^{{\operatorname{d}}*}{\mathbb S}_{w,v}$$ defines a monoidal object in $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), {\star''})$. ## The degree inversion {#subsec:deg_inv} We define the degree inversion functor, which will be used in § [5.3](#subsec:antiinvolution){reference-type="ref" reference="subsec:antiinvolution"} below in defining the anti-involution of the elliptic twisted group algebra. Define $$\frak{D}:\mathop{\mathrm{Coh}}^{W\times W^{{\operatorname{d}}}}(\mathfrak{A}\times \mathfrak{A}^\vee)\to \mathop{\mathrm{Coh}}^{W\times W^{{\operatorname{d}}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$$ $${\mathcal F}\mapsto \frak{D}({\mathcal F}) \text{ so that } \frak{D}({\mathcal F})_{w,v}={\mathcal F}_{w^{-1},v^{-1}}.$$ Notice that $\frak{D}^2=\mathop{\mathrm{id}}$. 
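As a sanity check on these definitions, the following toy sketch models $W=W^{\operatorname{d}}=S_3$ permuting coordinates, a graded sheaf concentrated in one bidegree by a scalar function of $(z,\lambda)$, pullback by precomposition, and tensor product by pointwise multiplication; all names (`star_p`, `D`, etc.) are ad hoc and not from the paper. It confirms numerically that $\frak{D}$ interchanges ${\star'}$ and ${\star''}$ as an anti-homomorphism, i.e. $\frak{D}({\mathcal F}\star'{\mathcal G})=\frak{D}({\mathcal G})\star''\frak{D}({\mathcal F})$.

```python
import itertools, random

N = 3                                   # toy model: W = W^d = S_3 permuting N coordinates

def compose(u, v):                      # (u∘v)[i] = u[v[i]]
    return tuple(u[v[i]] for i in range(N))

def inv(w):
    out = [0] * N
    for i, j in enumerate(w):
        out[j] = i
    return tuple(out)

def act(w, pt):                         # w·pt, with (w·pt)[w[i]] = pt[i]
    out = [0.0] * N
    for i in range(N):
        out[w[i]] = pt[i]
    return tuple(out)

# An object concentrated in one degree is a triple (f, w, v), f a function of (z, λ).
# ★' : F_{w1,v1} ★' G_{w2,v2} = w2^* F ⊗ (v1^{-1})^{d*} G, in degree (w1w2, v1v2)
def star_p(F, G):
    (f, w1, v1), (g, w2, v2) = F, G
    return (lambda z, lam: f(act(w2, z), lam) * g(z, act(inv(v1), lam)),
            compose(w1, w2), compose(v1, v2))

# ★'' : F_{w1,v1} ★'' G_{w2,v2} = v2^{d*} F ⊗ (w1^{-1})^* G, in degree (w1w2, v1v2)
def star_pp(F, G):
    (f, w1, v1), (g, w2, v2) = F, G
    return (lambda z, lam: f(z, act(v2, lam)) * g(act(inv(w1), z), lam),
            compose(w1, w2), compose(v1, v2))

def D(F):                               # degree inversion: piece of degree (w,v) moves to (w^{-1},v^{-1})
    f, w, v = F
    return (f, inv(w), inv(v))

random.seed(2)
PERMS = list(itertools.permutations(range(N)))
def rand_fn():
    cs = [random.random() for _ in range(2 * N)]
    return lambda z, lam: 1.0 + sum(c * t for c, t in zip(cs, z + lam))

F = (rand_fn(), random.choice(PERMS), random.choice(PERMS))
G = (rand_fn(), random.choice(PERMS), random.choice(PERMS))
lhs, rhs = D(star_p(F, G)), star_pp(D(G), D(F))
z0 = tuple(random.random() for _ in range(N))
l0 = tuple(random.random() for _ in range(N))
assert lhs[1:] == rhs[1:]                       # same bidegree
assert abs(lhs[0](z0, l0) - rhs[0](z0, l0)) < 1e-12
print("D(F *' G) = D(G) *'' D(F): OK")
```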
Moreover, we have $$\begin{aligned} \frak{D}({\mathcal F}\star'{\mathcal G})_{xw,yv}&=({\mathcal F}\star'{\mathcal G})_{w^{-1}x^{-1}, v^{-1}y^{-1}}\\ &={\mathcal F}_{w^{-1}, v^{-1}}\star'{\mathcal G}_{x^{-1}, y^{-1}}\\ &=(x^{-1})^*{\mathcal F}_{w^{-1}, v^{-1}}\otimes v^{{\operatorname{d}}*}{\mathcal G}_{x^{-1}, y^{-1}}\\ &=(x^{-1})^*\frak{D}({\mathcal F})_{w,v}\otimes v^{{\operatorname{d}}*}\frak{D}({\mathcal G})_{x,y}\\ &=\frak{D}({\mathcal G})_{x,y}\star'' \frak{D}({\mathcal F})_{w,v}\\ &=(\frak{D}({\mathcal G})\star''\frak{D}({\mathcal F}))_{xw,yv}.\end{aligned}$$ Therefore, $$\label{eqn:D-star} \frak{D}({\mathcal F}\star'{\mathcal G})=\frak{D}({\mathcal G})\star''\frak{D}({\mathcal F}).$$ In particular, we have $$\frak{D}({\mathbb S}'_{w,v}\star' {\mathbb S}'_{x,y})=\frak{D}({\mathbb S}'_{x,y})\star''\frak{D}({\mathbb S}'_{w,v}).$$ Therefore, if we view ${\mathbb S}'_{w,v}$ as sitting in degree $(w^{-1}, v^{-1})$, then ${\mathbb S}'$ becomes an algebra object in $\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times\mathfrak{A}^\vee,{\star''})$. A symmetric conclusion holds for ${\mathbb S}''$. ## The periodic modules {#subsec:act_periodic} The category $\mathop{\mathrm{Coh}}^W(\mathfrak{A}\times \mathfrak{A}^\vee)$ is a module over the monoidal category $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), \star)$, where the action is defined as $${\mathcal F}_{w,v}\star{\mathcal G}_x:=(wx)^*{\mathcal F}_{w,v}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathcal G}_x$$ sitting in degree $wx$. The associativity is straightforward to check. Similarly, $\mathop{\mathrm{Coh}}^{W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$ is also a module over the monoidal category $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), \star)$.
Via the equivalences in Lemma [Lemma 7](#lem:monoidal_equivalence){reference-type="ref" reference="lem:monoidal_equivalence"}, the monoidal categories $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), {\star'})$ and $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), {\star''})$ have module categories $\mathop{\mathrm{Coh}}^W(\mathfrak{A}\times \mathfrak{A}^\vee)$ and $\mathop{\mathrm{Coh}}^{W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$, respectively, where we also write ${\star'}$ and ${\star''}$ for the actions. We leave the details to the interested reader. **Definition 8**. Define the periodic module in $\mathop{\mathrm{Coh}}^W(\mathfrak{A}\times \mathfrak{A}^\vee)$ as follows: $${\mathbb M}=\bigoplus _{x\in W}{\mathbb M}_x=\bigoplus_{x\in W}x^*{\mathbb L}.$$ For each $w\in W$, denote by $\frak{p}_w:{\mathbb M}\to {\mathbb M}_w$ the projection onto, and by $\frak{i}_w:{\mathbb M}_w\to {\mathbb M}$ the embedding of, the corresponding summand. The action of ${\mathbb S}$ in $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), \star)$ on the object ${\mathbb M}$ in $\mathop{\mathrm{Coh}}^W(\mathfrak{A}\times \mathfrak{A}^\vee)$ is denoted by $$\bullet:{\mathbb S}\star{\mathbb M}\to{\mathbb M},$$ via, for each $w,v,x\in W$, the canonical isomorphism $$\bullet: {\mathbb S}_{w,v}\star x^*{\mathbb L}= (wx)^{*}{\mathbb S}_{w,v}\otimes (v^{-1})^{{\operatorname{d}}*}x^*{\mathbb L}=(wx)^{*}{\mathbb L}\otimes (wx)^{*}(w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}\otimes (v^{-1})^{{\operatorname{d}}*}x^*{\mathbb L}=(wx)^{*}{\mathbb L}.$$ Similarly, ${\mathbb M}^{\operatorname{d}}:=\bigoplus_y{\mathbb M}_y^{\operatorname{d}}$ with ${\mathbb M}_y^{\operatorname{d}}=y^{{\operatorname{d}}*}{\mathbb L}$ is an object in $\mathop{\mathrm{Coh}}^{W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$.
We denote by $\frak{p}_w^{\operatorname{d}}:{\mathbb M}^{\operatorname{d}}\to {\mathbb M}_w^{\operatorname{d}}$ and $\frak{i}_w^{\operatorname{d}}:{\mathbb M}_w^{\operatorname{d}}\to {\mathbb M}^{\operatorname{d}}$ the analogous projections and embeddings. The action of ${\mathbb S}$ on ${\mathbb M}^{\operatorname{d}}$ in $(\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee), \star)$ is denoted by $$\bullet^{\operatorname{d}}:{\mathbb S}\star{\mathbb M}^{\operatorname{d}}\to{\mathbb M}^{\operatorname{d}}.$$ **Remark 9**. The periodic module of the usual affine Hecke algebra, besides its applications in combinatorics and representation theory, also has a geometric realization as the $K$-theory of the Springer resolution $T^*G/B$ [@L]. Therefore, the periodic modules ${\mathbb M}$ and ${\mathbb M}^{\operatorname{d}}$ in the present paper should be considered as the elliptic cohomology of the Springer resolutions $T^*G/B$ and $T^*G^\vee/B^\vee$, respectively. A version of such a construction, in the absence of the dynamical parameters, can be found in [@ZZ22]. We note another construction by Hikita [@H] in elliptic cohomology related to Lusztig's original construction [@L]. We leave its relation with the present paper to future investigations. ## Variant forms of the actions From the equivalences of monoidal categories in Lemma [Lemma 7](#lem:monoidal_equivalence){reference-type="ref" reference="lem:monoidal_equivalence"}, the algebra object ${\mathbb S}'$ acts on the module objects ${\mathbb M}$ and ${\mathbb M}^{\operatorname{d}}$.
For example, the action $$\label{eq:periodicaction} \bullet: {\mathbb S}'_{w,v}{\star'}{\mathbb M}_x\overset\sim\longrightarrow {\mathbb M}_{wx}$$ is defined by $${\mathbb S}'_{w,v}{\star'}{\mathbb M}_x:=x^*{\mathbb S}'_{w,v}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathbb M}_x\overset\sim\to x^*{\mathbb S}'_{w,v}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathbb S}'_{x,e}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathbb L}\overset\sim\to {\mathbb S}'_{wx,v}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathbb L}\overset\sim \to{\mathbb M}_{wx}.$$ Similarly, the equivalences in Lemma [Lemma 7](#lem:monoidal_equivalence){reference-type="ref" reference="lem:monoidal_equivalence"} also induce an action of ${\mathbb S}''$ on ${\mathbb M}^{\operatorname{d}}$, denoted by $$\label{eq:periodicactiondyn} \bullet^{\operatorname{d}}:{\mathbb S}''_{w,v}{\star''}{\mathbb M}_y^{\operatorname{d}}\to {\mathbb M}^{\operatorname{d}}_{vy}.$$ More precisely, we have $${\mathbb S}''_{w,v}{\star''}{\mathbb M}^{\operatorname{d}}_{y}:=y^{{\operatorname{d}}*}{\mathbb S}''_{w,v}\otimes (w^{-1})^*{\mathbb M}^{\operatorname{d}}_y\overset\sim\to y^{{\operatorname{d}}*}{\mathbb S}''_{w,v}\otimes (w^{-1})^*{\mathbb S}''_{e,y}\otimes (w^{-1})^*{\mathbb L}\overset\sim\to {\mathbb S}''_{w,vy}\otimes (w^{-1})^*{\mathbb L}\overset\sim\to {\mathbb M}_{vy}^{\operatorname{d}}.$$ ## Twisted product of rational sections {#subsec:Sproductrational} We will perform calculations with rational sections of ${\mathbb S}$ and of its variant forms with $'$ and $''$. To keep track of the degrees in $W\times W^{\operatorname{d}}$, a rational section $a$ of ${\mathbb S}_{w,v}$ is denoted by $a\delta_w\delta_v^{\operatorname{d}}$. The functor sending a coherent sheaf to the vector space of its rational sections is a lax monoidal functor with respect to the tensor product $\star$ in Lemma [Lemma 5](#lem:tensor){reference-type="ref" reference="lem:tensor"}. Indeed, one has a stronger version, which we recall for illustration purposes, although it is not used in the present paper.
Let $\pi^2: \mathfrak{A}\times\mathfrak{A}^\vee\to \mathfrak{A}/W\times\mathfrak{A}^\vee/W^{\operatorname{d}}$ be the projection. Then the pushforward functor $\pi^2_*$ is a lax monoidal functor with the target endowed with the usual tensor structure of coherent sheaves [@ZZ23 Lemma 3.3]. In particular, we have for example $$\pi^2_*{\mathbb S}'_{w,v}\otimes \pi^2_*{\mathbb S}'_{x,y}\to \pi^2_*{\mathbb S}'_{wx,vy},$$ and similarly $$\pi^2_*{\mathbb S}'_{w,v}\to \bigoplus_x\mathop{\mathrm{{\mathscr{H}om}}}(\pi^2_*{\mathbb M}_x, \pi^2_*{\mathbb M}_{wx}).$$ For completeness, in the summary § [3.8](#subsec:sum1){reference-type="ref" reference="subsec:sum1"} below we write this morphism, after applying $\pi^2_*$, as $$\xymatrix{{\mathbb S}'_{w,v}\ar@{.>}[r]^-{\bullet} & \bigoplus_x\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_x, {\mathbb M}_{wx})}.$$ The space of rational sections of ${\mathbb S}$ has an algebra structure, which can be described using Kostant-Kumar's twisted product: if $a\delta_{w_1}\delta^{\operatorname{d}}_{v_1}$ and $b\delta_{w_2}\delta_{v_2}^{\operatorname{d}}$ are rational sections of ${\mathbb S}_{w_1,v_1}$ and ${\mathbb S}_{w_2,v_2}$, respectively, then the tensor product $\star$ gives $$\label{eq:prod} a\delta_{w_1}\delta^{\operatorname{d}}_{ v_1}b\delta_{w_2}\delta_{v_2}^{\operatorname{d}}=a\cdot {}^{w_1v_1^{\operatorname{d}}}b\delta_{w_1w_2}\delta_{v_1v_2}^{\operatorname{d}},$$ which is a rational section of ${\mathbb S}_{w_1w_2,v_1v_2}$ via ${\mathbb S}_{w_1,v_1}\star{\mathbb S}_{w_2,v_2}\overset\sim\longrightarrow {\mathbb S}_{w_1w_2,v_1v_2}$. If $a\delta_{w}\delta_v^{\operatorname{d}}$ is a rational section of ${\mathbb S}_{w,v}$, then $\delta_{w}\cdot {}^{w^{-1}}a\cdot\delta_{v}^{\operatorname{d}}$ can be viewed as a rational section of $w^*{\mathbb S}_{w, v}={\mathbb S}'_{w,v}$ (again, $\delta_w,\delta_v^{\operatorname{d}}$ can be thought of as the bi-degree of this rational section).
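As a concrete sanity check of the twisted product [\[eq:prod\]](#eq:prod){reference-type="eqref" reference="eq:prod"}, the following toy sketch models $W$ by $S_3$ permuting coordinates, coefficients by scalar functions of the spectral variables $z$ and dynamical variables $\lambda$, and the superscript action ${}^{w_1v_1^{\operatorname{d}}}b$ by precomposition; all names (`Section`, `twist`, etc.) are ad hoc and not from the paper. It verifies numerically that the product is associative.

```python
import itertools, random

N = 3                                    # toy rank: W = S_3 permuting N coordinates
PERMS = list(itertools.permutations(range(N)))

def compose(u, v):                       # (u∘v)[i] = u[v[i]]
    return tuple(u[v[i]] for i in range(N))

def inv(w):
    out = [0] * N
    for i, j in enumerate(w):
        out[j] = i
    return tuple(out)

def act(w, pt):                          # w·pt, with (w·pt)[w[i]] = pt[i]
    out = [0.0] * N
    for i in range(N):
        out[w[i]] = pt[i]
    return tuple(out)

def twist(w, v, f):                      # superscript action: (^{w v^d} f)(z,λ) = f(w^{-1}z, v^{-1}λ)
    wi, vi = inv(w), inv(v)
    return lambda z, lam: f(act(wi, z), act(vi, lam))

class Section:                           # a δ_w δ_v^d : coefficient function plus bidegree (w, v)
    def __init__(self, f, w, v):
        self.f, self.w, self.v = f, w, v
    def __mul__(self, other):            # eq. (prod): a δ_{w1}δ^d_{v1} · b δ_{w2}δ^d_{v2} = a·(^{w1 v1^d}b) δ_{w1w2}δ^d_{v1v2}
        a, b = self.f, twist(self.w, self.v, other.f)
        return Section(lambda z, lam: a(z, lam) * b(z, lam),
                       compose(self.w, other.w), compose(self.v, other.v))

random.seed(0)
def rand_fn():
    cs = [random.random() for _ in range(2 * N)]
    return lambda z, lam: 1.0 + sum(c * t for c, t in zip(cs, z + lam))

t1, t2, t3 = (Section(rand_fn(), random.choice(PERMS), random.choice(PERMS)) for _ in range(3))
lhs, rhs = (t1 * t2) * t3, t1 * (t2 * t3)
z0 = tuple(random.random() for _ in range(N))
l0 = tuple(random.random() for _ in range(N))
assert lhs.w == rhs.w and lhs.v == rhs.v          # degrees agree
assert abs(lhs.f(z0, l0) - rhs.f(z0, l0)) < 1e-12 # coefficients agree
print("associativity of the twisted product: OK")
```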
Then, in terms of rational sections, the multiplication of ${\mathbb S}'_{w,v}$ under ${\star'}$ can be written as $$\delta_{w_1}{}^{w_1^{-1}}a\delta_{v_1}^{\operatorname{d}}\delta_{w_2}{}^{w_2^{-1}}b\delta_{v_2}^{\operatorname{d}}=\delta_{w_1w_2}{}^{w_2^{-1}w_1^{-1}}a\cdot {}^{v_1^{\operatorname{d}}w_2^{-1}}b\delta_{v_1v_2}^{{\operatorname{d}}}.$$ This agrees with [\[eq:prod\]](#eq:prod){reference-type="eqref" reference="eq:prod"}. The same remark holds for ${\mathbb S}''$; that is, a rational section $a\delta_{w}\delta_{v}^{\operatorname{d}}$ of ${\mathbb S}_{w,v}$ defines a rational section $\delta_v^{\operatorname{d}}\cdot{}^{(v^{-1})^{\operatorname{d}}}a\cdot\delta_w$ of ${\mathbb S}''_{w,v}$, and $$\delta_{v_1}^{\operatorname{d}}{}^{(v_1^{-1})^{\operatorname{d}}}a\delta_{w_1}\delta_{v_2}^{\operatorname{d}}{}^{(v_2^{-1})^{\operatorname{d}}}b\delta_{w_2}=\delta_{v_1v_2}^{\operatorname{d}}{}^{(v_2^{-1}v_1^{-1})^{\operatorname{d}}} a \cdot {}^{w_1(v_2^{-1})^{\operatorname{d}}}b\delta_{w_1w_2},$$ which again coincides with the result from [\[eq:prod\]](#eq:prod){reference-type="eqref" reference="eq:prod"}. [\[subsec:rat_sec\]]{#subsec:rat_sec label="subsec:rat_sec"} In the same spirit, we spell out the action of ${\mathbb S}'$ on ${\mathbb M}$ from [\[eq:periodicaction\]](#eq:periodicaction){reference-type="eqref" reference="eq:periodicaction"} on the space of rational sections. Rational sections of ${\mathbb M}$ are finite sums of terms $a f_x$, where $a$ is a rational section of ${\mathbb M}_x=x^*{\mathbb L}$. Again, the symbol $f_x$ is there only to keep track of the degree of $a$. Then the action can be written as $$\delta_wb\delta_v^{\operatorname{d}}\bullet af_x={}^{v^{\operatorname{d}}}a\cdot {}^{x^{-1}}bf_{wx}.$$ One can check that the same formula is obtained if one instead starts from ${\mathbb S}$ or from ${\mathbb S}''$.
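In the same kind of toy model as before (again with ad hoc names; $W=S_3$ permuting coordinates, coefficients modeled by scalar functions of $(z,\lambda)$), one can check numerically that the ${\star'}$-multiplication and the $\bullet$-action on rational sections satisfy the module axiom $(s_1s_2)\bullet m=s_1\bullet(s_2\bullet m)$:

```python
import itertools, random

N = 3                                   # toy model: W = S_3 permuting N coordinates

def compose(u, v):                      # (u∘v)[i] = u[v[i]]
    return tuple(u[v[i]] for i in range(N))

def inv(w):
    out = [0] * N
    for i, j in enumerate(w):
        out[j] = i
    return tuple(out)

def act(w, pt):                         # w·pt, with (w·pt)[w[i]] = pt[i]
    out = [0.0] * N
    for i in range(N):
        out[w[i]] = pt[i]
    return tuple(out)

def sp(u, f):                           # spectral superscript: (^u f)(z,λ) = f(u^{-1}z, λ)
    ui = inv(u)
    return lambda z, lam: f(act(ui, z), lam)

def dy(v, f):                           # dynamical superscript: (^{v^d} f)(z,λ) = f(z, v^{-1}λ)
    vi = inv(v)
    return lambda z, lam: f(z, act(vi, lam))

# A primed section δ_w c δ_v^d is a triple (c, w, v); the ★'-product reads
# δ_{w1} c1 δ^d_{v1} · δ_{w2} c2 δ^d_{v2} = δ_{w1w2} (^{w2^{-1}}c1)(^{v1^d}c2) δ^d_{v1v2}.
def mul(s1, s2):
    (c1, w1, v1), (c2, w2, v2) = s1, s2
    a, b = sp(inv(w2), c1), dy(v1, c2)
    return (lambda z, lam: a(z, lam) * b(z, lam), compose(w1, w2), compose(v1, v2))

# The action: δ_w c δ_v^d ∙ a f_x = (^{v^d}a)(^{x^{-1}}c) f_{wx}.
def bullet(s, m):
    (c, w, v), (a, x) = s, m
    p, q = dy(v, a), sp(inv(x), c)
    return (lambda z, lam: p(z, lam) * q(z, lam), compose(w, x))

random.seed(1)
PERMS = list(itertools.permutations(range(N)))
def rand_fn():
    cs = [random.random() for _ in range(2 * N)]
    return lambda z, lam: 1.0 + sum(c * t for c, t in zip(cs, z + lam))

s1 = (rand_fn(), random.choice(PERMS), random.choice(PERMS))
s2 = (rand_fn(), random.choice(PERMS), random.choice(PERMS))
m  = (rand_fn(), random.choice(PERMS))
lhs, rhs = bullet(mul(s1, s2), m), bullet(s1, bullet(s2, m))
z0 = tuple(random.random() for _ in range(N))
l0 = tuple(random.random() for _ in range(N))
assert lhs[1] == rhs[1]
assert abs(lhs[0](z0, l0) - rhs[0](z0, l0)) < 1e-12
print("module axiom for the action: OK")
```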
For example, if $\delta_v^{\operatorname{d}}b\delta_w$ is a rational section of ${\mathbb S}''_{w,v}$, the $\bullet$ action would give $$\label{eq:bullet} \delta_v^{\operatorname{d}}b\delta_w\bullet af_x={}^{v^{\operatorname{d}}(wx)^{-1}}b\cdot {}^{v^{\operatorname{d}}}a f_{wx}.$$ Similarly, from [\[eq:periodicactiondyn\]](#eq:periodicactiondyn){reference-type="eqref" reference="eq:periodicactiondyn"}, the action of ${\mathbb S}''$ on ${\mathbb M}^{\operatorname{d}}$ can be described in the same way. Rational sections of ${\mathbb M}^{\operatorname{d}}_y$ are written as finite sums of $a f_y^{\operatorname{d}}$, and given a rational section $\delta_v^{\operatorname{d}}b\delta_w$ of ${\mathbb S}''$, the action is written as $$\label{eq:bulletdyn} \delta_v^{\operatorname{d}}b\delta_w\bullet^{\operatorname{d}}af_y^{\operatorname{d}}= {}^{(y^{-1})^{\operatorname{d}}}b\cdot{}^wa f^{\operatorname{d}}_{{vy}}.$$ If one takes a rational section $\delta_w b\delta_v^{{\operatorname{d}}}$ of ${\mathbb S}'$, the action would be written as $$\delta_w b\delta_v^{\operatorname{d}}\bullet^{\operatorname{d}}af_y^{\operatorname{d}}={}^w a\cdot {}^{((vy)^{-1})^{\operatorname{d}}w}bf_{vy}^{\operatorname{d}}.$$ ## A short summary {#subsec:sum1} Recall that we have embeddings and projections $$\xymatrix{{\mathbb M}\ar@/^.5pc/[r]^-{\frak{p}_w}&\ar@/^.5pc/[l]^-{\frak{i}_w} {\mathbb M}_w},\quad \xymatrix{{\mathbb M}^{\operatorname{d}}\ar@/^.5pc/[r]^-{\frak{p}^{\operatorname{d}}_w}&\ar@/^.5pc/[l]^-{\frak{i}^{\operatorname{d}}_w} {\mathbb M}^{\operatorname{d}}_w} .$$ By the definition of ${\mathbb S}$, we have $${\mathbb S}'_{w,v}=w^*{\mathbb L}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}\cong \mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_{v^{-1}}^{\operatorname{d}}, {\mathbb M}_w),$$ where a rational section $\delta_wa\delta_v^{\operatorname{d}}$ of ${\mathbb S}'_{w,v}$ is sent to $\frak{p}_w a\frak{i}^{\operatorname{d}}_{v^{-1}}$ as a rational section of
$\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}^{\operatorname{d}}, {\mathbb M})$. Similarly, the isomorphism $${\mathbb S}''_{w,v}=v^{{\operatorname{d}}*}{\mathbb L}\otimes (w^{-1})^*{\mathbb L}^{-1}\cong \mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_{w^{-1}}, {\mathbb M}_v^{\operatorname{d}})$$ shows that a rational section $\delta_v^{\operatorname{d}}a\delta_w$ of ${\mathbb S}''_{w,v}$ is sent to $\frak{p}_v^{\operatorname{d}}a \frak{i}_{w^{-1}}$ as a rational section of $\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M},{\mathbb M}^{\operatorname{d}})$. To summarize, we have $$\xymatrix{\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_{v^{-1}}^{\operatorname{d}}, {\mathbb M}_w) & {\mathbb S}'_{w,v}\ar[l]^-\sim\ar@{.>}[r]^-{\bullet} &\bigoplus_x\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_x, {\mathbb M}_{wx})},$$ $$\xymatrix{ \frak{p}_wa\frak{i}_{v^{-1}}^{\operatorname{d}}& \delta_w a\delta_v^{\operatorname{d}}\ar@{|->}[l]\ar@{|->}[r]& \delta_wa\delta_v^{\operatorname{d}}\bullet\_},$$ $$\xymatrix{\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_{w^{-1}}, {\mathbb M}_v^{\operatorname{d}}) & {\mathbb S}''_{w,v}\ar[l]^-\sim\ar@{.>}[r]^-{\bullet^{\operatorname{d}}} &\bigoplus_y\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_y^{\operatorname{d}}, {\mathbb M}_{vy}^{\operatorname{d}})},$$ $$\xymatrix{\frak{p}_v^{\operatorname{d}}a\frak{i}_{w^{-1}} & \delta_v^{\operatorname{d}}a\delta_w \ar@{|->}[l]\ar@{|->}[r]& \delta_v^{\operatorname{d}}a\delta_w\bullet^{\operatorname{d}}\_}.$$ # Basic properties of the Poincaré line bundle {#sec:S_property} In this section we establish some basic properties of, and compute restrictions of, the Poincaré line bundle, its shift ${\mathbb L}$, and the twisted group algebra ${\mathbb S}$. ## Weyl group actions on ${\mathcal P}$ We first fix some notation. For the bundle ${\mathcal O}(z)$ on $\mathfrak{A}^\vee$, denote by $\tilde{\mathcal O}(z)$ its pullback to $\mathfrak{A}\times \mathfrak{A}^\vee$.
Similarly, denote by $\tilde{\mathcal O}(\lambda)$ the pull back of ${\mathcal O}(\lambda)$ on $\mathfrak{A}$ to $\mathfrak{A}\times \mathfrak{A}^\vee$. The following lemma collects some elementary facts about line bundles, whose proofs are straightforward. **Lemma 10**. 1. *[\[item:basic-bundle1\]]{#item:basic-bundle1 label="item:basic-bundle1"}For any $z,z'\in \mathfrak{A}, \lambda, \lambda'\in \mathfrak{A}^\vee,$ we have $$\tilde{\mathcal O}(z)|_{z'}={\mathcal O}(z), ~ \tilde{\mathcal O}(z)|_\lambda={\mathcal O}_{\mathfrak{A}},~\tilde{\mathcal O}(\lambda)|_{\lambda'}={\mathcal O}(\lambda), ~ \tilde{\mathcal O}(\lambda)|_z={\mathcal O}_{\mathfrak{A}^\vee}.$$* 2. *[\[item:basic-bundle2\]]{#item:basic-bundle2 label="item:basic-bundle2"}For any $v\in W$, we have $$\begin{aligned} v^*\tilde{\mathcal O}(z)&=\tilde{\mathcal O}(z), & v^{{\operatorname{d}}*}\tilde{\mathcal O}(z)&=(v^{{\operatorname{d}}*}{\mathcal O}(z))^\sim=\tilde{\mathcal O}(v^{-1}(z)), \\ v^{{\operatorname{d}}*}\tilde{\mathcal O}(\lambda)&=\tilde{\mathcal O}(\lambda), & v^{*}\tilde{\mathcal O}(\lambda)&=(v^{*}{\mathcal O}(\lambda))^\sim=\tilde{\mathcal O}(v^{-1}(\lambda)).\end{aligned}$$* 3. *[\[item:basic-bundle3\]]{#item:basic-bundle3 label="item:basic-bundle3"}Let $\phi:\mathfrak{A}\to \mathfrak{A}$, $\phi^{\operatorname{d}}:\mathfrak{A}^\vee\to \mathfrak{A}^\vee$ be maps of varieties. 
For any sheaf ${\mathcal F}$ on $\mathfrak{A}\times \mathfrak{A}^\vee$, we have $$(\phi^*{\mathcal F})|_{\lambda}=\phi^*({\mathcal F}|_\lambda), ~(\phi^*{\mathcal F})|_z={\mathcal F}|_{\phi(z)}, ~(\phi^{{\operatorname{d}}*}{\mathcal F})|_\lambda={\mathcal F}|_{\phi^{\operatorname{d}}(\lambda)}, ~(\phi^{{\operatorname{d}}*}{\mathcal F})|_z=\phi^{{\operatorname{d}}*}({\mathcal F}|_z).$$* Recall the convention for the Poincaré line bundle in § [\[subsec:rho-shift\]](#subsec:rho-shift){reference-type="ref" reference="subsec:rho-shift"}, which has the property that $${\mathcal P}|_z={\mathcal O}(z-\hbar\rho^\vee), ~{\mathcal P}|_\lambda={\mathcal O}(\lambda).$$ The maps $w^{-1}$ and $w^{{\operatorname{d}}}$ from $\mathfrak{A}\times \mathfrak{A}^\vee$ to itself are related by the following property: **Lemma 11**. *For any $w\in W$, we have $(w^{-1})^*{\mathcal P}=w^{{\operatorname{d}}*}{\mathcal P}\otimes \tilde {\mathcal O}(w^{-1}(\hbar\rho^\vee)-\hbar\rho^\vee).$* *Proof.* By using Lemma [Lemma 10](#lem:basic-bundle){reference-type="ref" reference="lem:basic-bundle"}, we get $$\begin{aligned} (w^{{\operatorname{d}}*}{\mathcal P})|_{\hbar\rho^\vee}&=w^{{\operatorname{d}}*}({\mathcal P}|_{\hbar\rho^\vee})=w^{{\operatorname{d}}*}{\mathcal O}_{\mathfrak{A}^\vee}={\mathcal O}_{\mathfrak{A}^\vee}, \\ (w^{{\operatorname{d}}*}{\mathcal P})|_\lambda&={\mathcal P}|_{w(\lambda)}={\mathcal O}(w(\lambda)),\\ ((w^{-1})^*{\mathcal P})|_{\hbar\rho^\vee}&={\mathcal P}|_{w^{-1}(\hbar\rho^\vee)}={\mathcal O}(w^{-1}(\hbar\rho^\vee)-\hbar\rho^\vee)=\left(w^{{\operatorname{d}}*}{\mathcal P}\otimes \tilde{\mathcal O}(w^{-1}(\hbar\rho^\vee)-\hbar\rho^\vee)\right)|_{\hbar\rho^\vee},\\ ((w^{-1})^*{\mathcal P})|_\lambda&=(w^{-1})^*({\mathcal P}|_\lambda)=(w^{-1})^*{\mathcal O}(\lambda)={\mathcal O}(w(\lambda))=\left(w^{{\operatorname{d}}*}{\mathcal P}\otimes \tilde{\mathcal O}(w^{-1}(\hbar\rho^\vee)-\hbar\rho^\vee)\right)|_{\lambda}.\end{aligned}$$ Therefore, by the See-Saw Lemma, $$(w^{-1})^*{\mathcal
P}=w^{{\operatorname{d}}*}{\mathcal P}\otimes \tilde {\mathcal O}(w^{-1}(\hbar\rho^\vee)-\hbar\rho^\vee).$$ ◻ ## Weyl group actions on ${\mathbb L}$ By the definition of ${\mathbb L}$ in Definition [Definition 3](#def:bbL){reference-type="ref" reference="def:bbL"} and Lemma [Lemma 10](#lem:basic-bundle){reference-type="ref" reference="lem:basic-bundle"}, it is easy to see that $$\begin{aligned} \label{eq:Lres} {\mathbb L}|_\lambda&={\mathcal P}|_{\lambda+\rho\hbar}={\mathcal O}(\lambda+\rho \hbar), & {\mathbb L}|_{z }&={\mathcal O}(z-\hbar\rho^\vee).\end{aligned}$$ We define a new action of $W$ on $\mathfrak{A}^\vee$ by $$w_\bullet^{\operatorname{d}}\lambda=w(\lambda+\rho\hbar)-\rho\hbar, ~\lambda\in \mathfrak{A}^\vee,$$ and similarly a shifted action of $W$ on $\mathfrak{A}$ by $w_\bullet z=w(z-\hbar\rho^\vee)+\hbar\rho^\vee$, $z\in \mathfrak{A}$. We then have **Lemma 12**. *For any $w\in W$, we have isomorphisms $$w_\bullet^*{\mathcal O}(\lambda)=w^*{\mathcal O}(\lambda), \quad w_\bullet^{{\operatorname{d}}*}{\mathcal O}(z)=w^{{\operatorname{d}}*}{\mathcal O}(z), \quad \forall z\in \mathfrak{A}, \lambda\in \mathfrak{A}^\vee.$$* *Proof.* These follow from the fact that degree-zero line bundles are invariant under translations. ◻ The following lemma shows that ${\mathbb L}$ behaves well in terms of the compatibility of the two $\bullet$-actions. **Lemma 13**. *For any $w\in W$, we have a canonical isomorphism $$(w_\bullet^{-1})^*{\mathbb L}\cong w_\bullet^{{\operatorname{d}}*}{\mathbb L},$$ which is compatible with composition of group elements.* *Proof.* This follows directly from the definition of $w_\bullet^*$, the definition of ${\mathbb L}$, as well as the canonical functorial isomorphism [\[eqn:Pincare_dot\]](#eqn:Pincare_dot){reference-type="eqref" reference="eqn:Pincare_dot"}. ◻ **Remark 14**.
We may also use Lemma [Lemma 10](#lem:basic-bundle){reference-type="ref" reference="lem:basic-bundle"} to compute the restrictions: $$\begin{aligned} ((w_\bullet^{-1})^{*}{\mathbb L})|_{\lambda}&=(w_\bullet^{-1})^{*}({\mathbb L}|_{\lambda})\overset{\eqref{eq:Lres}}=(w^{-1}_\bullet)^{*}{\mathcal O}(\lambda+\hbar\rho)=(w^{-1})^*{\mathcal O}(\lambda+\hbar\rho)={\mathcal O}(w(\lambda+\hbar\rho)),\\ ((w_\bullet^{-1})^{*}{\mathbb L})|_{{\hbar\rho^\vee}}&={\mathbb L}|_{w^{-1}_\bullet\hbar\rho^\vee}={\mathbb L}|_{{\hbar\rho^\vee}}\overset{\eqref{eq:Lres}}={\mathcal O}_{\mathfrak{A}^\vee},\\ (w_\bullet^{{\operatorname{d}}*}{\mathbb L})|_{\lambda}&={\mathbb L}|_{w_\bullet\lambda}\overset{\eqref{eq:Lres}}={\mathcal O}(w_\bullet\lambda+\hbar\rho)={\mathcal O}(w(\lambda+\hbar\rho)),\\ (w_\bullet^{{\operatorname{d}}*}{\mathbb L})|_{{\hbar\rho^\vee}}&=w_\bullet^{{\operatorname{d}}*}({\mathbb L}|_{\hbar\rho^\vee})\overset{\eqref{eq:Lres}}=w_\bullet^{{\operatorname{d}}*}{\mathcal O}_{\mathfrak{A}^\vee}={\mathcal O}_{\mathfrak{A}^\vee}.\end{aligned}$$ This also shows the existence of an isomorphism $w^{{\operatorname{d}}*}_\bullet{\mathbb L}\cong (w^{-1}_\bullet)^*{\mathbb L}.$ However, Lemma [Lemma 13](#lem:bullettransfer){reference-type="ref" reference="lem:bullettransfer"} above provides a canonical isomorphism. The following lemma explains how ${\mathbb L}$ behaves under the maps on $\mathfrak{A}\times \mathfrak{A}^\vee$. **Lemma 15**. 1. *$w^{{\operatorname{d}}*}{\mathbb L}\cong{\mathbb L}\otimes w^{{\operatorname{d}}*}{\mathcal P}\otimes {\mathcal P}^{-1}.$* 2. *$w^*{\mathbb L}\cong(w^{-1}_\bullet)^{{\operatorname{d}}*}{\mathbb L}\otimes\tilde{\mathcal O}(w\hbar\rho^\vee-\hbar\rho^\vee)\cong {\mathbb L}\otimes (w_\bullet^{-1})^{{\operatorname{d}}*}{\mathcal P}\otimes {\mathcal P}^{-1} \otimes\tilde{\mathcal O}(w\hbar\rho^\vee-\hbar\rho^\vee).$* *Proof.* (1).
From Lemma [Lemma 10](#lem:basic-bundle){reference-type="ref" reference="lem:basic-bundle"}, we have $$\begin{aligned} (w^{{\operatorname{d}}*}{\mathbb L})|_\lambda&=(w^{{\operatorname{d}}*}{\operatorname{vor}}^*{\mathcal P})|_\lambda={\mathcal P}|_{w^{{\operatorname{d}}}(\lambda)+\rho\hbar}={\mathcal P}|_{\lambda+\rho\hbar+w^{{\operatorname{d}}}(\lambda)-\lambda}=({\mathbb L}\otimes w^{{\operatorname{d}}*}{\mathcal P}\otimes {\mathcal P}^{-1})|_\lambda,\\ (w^{{\operatorname{d}}*}{\mathbb L})|_{\hbar\rho^\vee}&=w^{{\operatorname{d}}*}({\mathbb L}|_{\hbar\rho^\vee})=w^{{\operatorname{d}}*}{\mathcal O}_{\mathfrak{A}^\vee}={\mathcal O}_{\mathfrak{A}^\vee}=({\mathbb L}\otimes w^{{\operatorname{d}}*}{\mathcal P}\otimes{\mathcal P}^{-1})|_{\hbar\rho^\vee}.\end{aligned}$$ By the See-Saw Lemma, we know that $w^{{\operatorname{d}}*}{\mathbb L}\cong{\mathbb L}\otimes w^{{\operatorname{d}}*}{\mathcal P}\otimes {\mathcal P}^{-1}$. (2). Similarly, we have $$\begin{aligned} (w^*{\mathbb L})|_{\lambda}&= w^*({\mathbb L}|_\lambda)= w^*_\bullet({\mathbb L}|_\lambda)= (w^*_\bullet{\mathbb L})|_\lambda\overset{\text{Lem. } \ref{lem:bullettransfer}}=((w_\bullet^{-1})^{{\operatorname{d}}*}{\mathbb L})|_\lambda,\\ (w^*{\mathbb L})|_{\hbar\rho^\vee}&={\mathbb L}|_{w(\hbar\rho^\vee)}={\operatorname{vor}}^*({\mathcal P}|_{w(\hbar\rho^\vee)})\overset{\sharp}={\mathcal P}|_{w(\hbar\rho^\vee)}={\mathcal O}(w(\hbar\rho^\vee)-\hbar\rho^\vee),\\ ((w_\bullet^{-1})^{{\operatorname{d}}*}{\mathbb L})|_{\hbar\rho^\vee}&=(w_\bullet^{-1})^{{\operatorname{d}}*}({\mathbb L}|_{\hbar\rho^\vee})=(w_\bullet^{-1})^{{\operatorname{d}}*}{\mathcal O}_{\mathfrak{A}^\vee}={\mathcal O}_{\mathfrak{A}^\vee}.\end{aligned}$$ Here $\sharp$ follows from the invariance of degree-zero line bundles under translations. Therefore, the See-Saw Lemma gives $$w^*{\mathbb L}\cong(w_\bullet^{-1})^{{\operatorname{d}}*}{\mathbb L}\otimes \tilde {\mathcal O}(w(\hbar\rho^\vee)-\hbar\rho^\vee).$$ ◻ **Example 16**.
For each simple root $\alpha$ and $\lambda\in \mathfrak{A}^\vee$, Lemma [Lemma 15](#lem:act){reference-type="ref" reference="lem:act"} gives $$\begin{aligned} (s_\alpha^{{\operatorname{d}}*}{\mathbb L})|_\lambda&={\mathbb L}|_{s_\alpha(\lambda)}={\mathcal O}(s_\alpha(\lambda)+\rho\hbar)={\mathcal O}(\lambda+\rho\hbar)\otimes {\mathcal O}(-\lambda_{\alpha^\vee}\alpha)={\mathbb L}|_\lambda\otimes {\mathcal O}(-\lambda_{\alpha^\vee}\alpha),\\ (s_\alpha^{{\operatorname{d}}*}{\mathbb L})|_{\hbar\rho^\vee}&=s_\alpha^{{\operatorname{d}}*}({\mathbb L}|_{\hbar\rho^\vee})=s_\alpha^{{\operatorname{d}}*}{\mathcal O}_{\mathfrak{A}^\vee}={\mathcal O}_{\mathfrak{A}^\vee},\\ (s_\alpha^*{\mathbb L})|_\lambda&=s_\alpha^*{\mathcal O}(\lambda+\rho\hbar)={\mathcal O}(s_\alpha(\lambda+\hbar\rho))={\mathbb L}|_\lambda\otimes {\mathcal O}(-\lambda_{\alpha^\vee}\alpha-\hbar\alpha),\\ (s_\alpha^*{\mathbb L})|_{\hbar\rho^\vee}&={\mathbb L}|_{s_\alpha(\hbar\rho^\vee)}={\mathcal O}(-\hbar\alpha^\vee). \end{aligned}$$ **Lemma 17**. 
*For each simple root $\alpha$, we have $$s_\alpha^*s_\alpha^{{\operatorname{d}}*}{\mathbb L}\cong{\mathbb L}\otimes \tilde{\mathcal O}(-\hbar\alpha)\otimes \tilde {\mathcal O}(\hbar\alpha^\vee)\cong{\mathbb L}\otimes ({\mathcal O}(-\hbar\alpha)\boxtimes {\mathcal O}(\hbar\alpha^\vee)).$$* *Proof.* We have $$\begin{aligned} (s_\alpha^*s_\alpha^{{\operatorname{d}}*}{\mathbb L})|_\lambda&=s_\alpha^*(s_\alpha^{{\operatorname{d}}*}{\mathbb L}|_\lambda)=s_\alpha^*({\mathbb L}|_{s_\alpha(\lambda)})=s_\alpha^*{\mathcal O}(s_\alpha(\lambda)+\rho\hbar)\\ &={\mathcal O}(\lambda+s_\alpha(\rho\hbar))={\mathcal O}(\lambda+\rho\hbar-\hbar\alpha),\\ ({\mathbb L}\otimes \tilde{\mathcal O}(-\hbar\alpha)\otimes \tilde {\mathcal O}(\hbar\alpha^\vee))|_\lambda&={\mathcal O}(\lambda+\rho\hbar)\otimes {\mathcal O}(-\hbar\alpha)={\mathcal O}(\lambda+\rho\hbar-\hbar\alpha),\\ (s_\alpha^*s_\alpha^{{\operatorname{d}}*}{\mathbb L})|_{\hbar\rho^\vee}&=(s_\alpha^{{\operatorname{d}}*}{\mathbb L})|_{s_\alpha(\hbar\rho^\vee)}=s_\alpha^{{\operatorname{d}}*}({\mathbb L}|_{s_\alpha(\hbar\rho^\vee)})=s_\alpha^{{\operatorname{d}}*}{\mathcal O}(s_\alpha(\hbar\rho^\vee)-\hbar\rho^\vee)\\ &=s_\alpha^{{\operatorname{d}}*}{\mathcal O}(-\hbar\alpha^\vee)={\mathcal O}(\hbar\alpha^\vee),\\ ({\mathbb L}\otimes \tilde{\mathcal O}(-\hbar\alpha)\otimes \tilde{\mathcal O}(\hbar\alpha^\vee))|_{\hbar\rho^\vee}&={\mathbb L}|_{\hbar\rho^\vee}\otimes \tilde{\mathcal O}(-\hbar\alpha)|_{\hbar\rho^\vee}\otimes \tilde{\mathcal O}(\hbar\alpha^\vee)|_{\hbar\rho^\vee}\\ &={\mathcal O}(\hbar\rho^\vee-\hbar\rho^\vee)\otimes {\mathcal O}_{\mathfrak{A}^\vee}\otimes {\mathcal O}(\hbar\alpha^\vee)={\mathcal O}(\hbar\alpha^\vee).\end{aligned}$$ The conclusion then follows from the See-Saw Lemma. ◻ ## Restrictions of ${\mathbb S}$ The following lemma will be used when we determine zeros and poles of rational sections. **Lemma 18**. 1.
*[\[item:basicS1\]]{#item:basicS1 label="item:basicS1"} We have the following restrictions: $$\begin{aligned} {\mathbb S}_{w,v}|_\lambda&={\mathcal O}(\lambda-w_\bullet v^{-1}(\lambda)),&\quad {\mathbb S}_{w,v}|_{z}&={\mathcal O}(z-v_\bullet w^{-1}(z)),\\ {\mathbb S}'_{w,v}|_\lambda&={\mathcal O}(w_\bullet^{-1}\lambda- v^{-1}(\lambda)),&\quad {\mathbb S}'_{w,v}|_{z}&={\mathcal O}(w(z)-v_\bullet z),\\ {\mathbb S}''_{w,v}|_\lambda&={\mathcal O}(v(\lambda)-w_\bullet \lambda),&\quad {\mathbb S}''_{w,v}|_{z}&={\mathcal O}(v_\bullet^{-1}z- w^{-1}(z)).\end{aligned}$$* 2. *[\[item:basicS2\]]{#item:basicS2 label="item:basicS2"}If $w=v=s_\alpha$, we have $${\mathbb S}_{\alpha, \alpha}={\mathbb S}_{s_\alpha, s_\alpha}={\mathcal O}(\hbar\alpha)\boxtimes {\mathcal O}(-\hbar\alpha^\vee).$$ In particular, ${\mathbb S}_{w_0,w_0}={\mathcal G}\otimes {\mathcal H}^{-1}$ where ${\mathcal G}:=\tilde{\mathcal O}(2\hbar\rho)$, ${\mathcal H}:=\tilde{\mathcal O}(2\hbar\rho^\vee)$.* *Proof.* (1). They follow from [\[eq:Lres\]](#eq:Lres){reference-type="eqref" reference="eq:Lres"} and Lemma [Lemma 10](#lem:basic-bundle){reference-type="ref" reference="lem:basic-bundle"}. We prove the restrictions to $\lambda$ as examples. $$\begin{aligned} {\mathbb S}_{w,v}|_\lambda&=({\mathbb L}\otimes (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1})|_\lambda={\mathcal O}(\lambda+\rho\hbar)\otimes (w^{-1})^*({\mathbb L}^{-1}|_{v^{-1}(\lambda)})\\ &={\mathcal O}(\lambda+\rho\hbar)\otimes (w^{-1})^*{\mathcal O}(-v^{-1}(\lambda)-\rho\hbar)={\mathcal O}(\lambda+\rho\hbar-wv^{-1}(\lambda)-w(\rho\hbar))\\ &={\mathcal O}(\lambda-w_\bullet v^{-1}(\lambda)). 
\\ {\mathbb S}'_{w,v}|_\lambda&=(w^*{\mathbb S}_{w,v})|_\lambda=w^*({\mathbb S}_{w,v}|_\lambda)=w^*{\mathcal O}(\lambda-w_\bullet v^{-1}(\lambda))={\mathcal O}(w^{-1}_\bullet \lambda-v^{-1}(\lambda)),\\ {\mathbb S}''_{w,v}|_\lambda&=(v^{{\operatorname{d}}*}{\mathbb S}_{w,v})|_\lambda={\mathbb S}_{w,v}|_{v(\lambda)}={\mathcal O}(v(\lambda)-w_\bullet \lambda).\end{aligned}$$ (2). They follow from Part (1) and the identities $s_\alpha(\rho)=\rho-\alpha$ and $w_0(\rho)=-\rho.$ ◻ # Duality functors and Poincaré pairings {#sec:dual} In this section we introduce an anti-involution of the elliptic twisted group algebra and the Poincaré pairings of periodic modules. ## Duality functors Denote $$\theta_\Pi(\hbar\pm z)=\prod_{\alpha>0}\theta(\hbar\pm z_\alpha), \quad \theta_\Pi(\hbar\pm \lambda)=\prod_{\alpha>0}\theta(\hbar\pm\lambda_{\alpha^\vee}), \quad \theta_\Pi(z)=\prod_{\alpha>0}\theta(z_\alpha), \quad \theta_\Pi(\lambda)=\prod_{\alpha>0}\theta(\lambda_{\alpha^\vee}).$$ Note that ${}^w\theta_\Pi(z)=\epsilon_w\theta_\Pi(z)$ and ${}^{w^{{\operatorname{d}}}}\theta_\Pi(\lambda)=\epsilon_w\theta_\Pi(\lambda)$. Define $${\mathbf g}=\frac{\theta_\Pi(\hbar-z)}{\theta_\Pi(z)},\quad {\mathbf h}=\frac{\theta_\Pi(\hbar-\lambda)}{\theta_\Pi(\lambda)}.$$ Recall from Lemma [Lemma 18](#lem:basicS){reference-type="ref" reference="lem:basicS"} that $${\mathcal G}=\tilde{\mathcal O}(2\hbar\rho),\, \ {\mathcal H}=\tilde{\mathcal O}(2\hbar\rho^\vee).$$ Note that $w^*{\mathcal H}={\mathcal H}, w^{{\operatorname{d}}*}{\mathcal G}={\mathcal G}$. **Lemma 19**. *The function ${\mathbf g}=\frac{\theta_\Pi(\hbar-z)}{\theta_\Pi(z)}$ is a rational section of the line bundle ${\mathcal G}=\tilde{\mathcal O}(2\hbar\rho )$, and ${\mathbf h}=\frac{\theta_\Pi(\hbar-\lambda)}{\theta_\Pi(\lambda)}$ is a rational section of the line bundle ${\mathcal H}=\tilde {\mathcal O}(2\hbar\rho^\vee)$.
Moreover, ${\mathbf g}$ is invariant under $w^{{\operatorname{d}}*}$ and ${\mathbf h}$ is invariant under $w^*$.* *Proof.* Let $\alpha>0$. We first look at the function $\frac{\theta(\hbar-z_\alpha)}{\theta(z_\alpha)}$ with variable $z_\alpha\in E$. It is a rational section of ${\mathcal O}(\hbar)$ over $E$. Pulling back to $\mathfrak{A}$ via $\chi_\alpha:\mathfrak{A}\to E$, we see that $\frac{\theta(\hbar-z_\alpha)}{\theta(z_\alpha)}$ is a rational section of ${\mathcal O}(\hbar\alpha)$ over $\mathfrak{A}$, which can also be viewed as a rational section of $\tilde{\mathcal O}(\hbar\alpha)$ over $\mathfrak{A}\times \mathfrak{A}^\vee$. Therefore, the fraction $\frac{\theta_\Pi(\hbar-z)}{\theta_\Pi(z)}$ is a rational section of $$\bigotimes_{\alpha>0}\tilde{\mathcal O}(\hbar\alpha)=\tilde{\mathcal O}(2\hbar\rho),$$ where the equality uses $\sum_{\alpha>0}\alpha=2\rho$. The claim for ${\mathbf h}$ is proved similarly. The last part follows from Lemma [Lemma 10](#lem:basic-bundle){reference-type="ref" reference="lem:basic-bundle"}. ◻ We have the following duality functors $$\mathop{\mathrm{{\mathscr{H}om}}}(-,{\mathcal H}^{-1}):\mathop{\mathrm{Coh}}(\mathfrak{A}\times \mathfrak{A}^\vee)\to \mathop{\mathrm{Coh}}(\mathfrak{A}\times \mathfrak{A}^\vee), \, \ \mathop{\mathrm{{\mathscr{H}om}}}(-,{\mathcal G}):\mathop{\mathrm{Coh}}(\mathfrak{A}\times \mathfrak{A}^\vee)\to \mathop{\mathrm{Coh}}(\mathfrak{A}\times \mathfrak{A}^\vee).$$ We will not use the functors in this form; instead, we give below simple expressions of these functors on the objects of interest. ## Dual algebras and modules Define the map $$D_\lambda:E\times \mathfrak{A}\times \mathfrak{A}^\vee\to E\times \mathfrak{A}\times \mathfrak{A}^\vee, (\hbar,z, \lambda)\mapsto (\hbar, z,-\lambda),$$ and similarly define $D_z, D_\hbar: E\times \mathfrak{A}\times \mathfrak{A}^\vee\to E\times \mathfrak{A}\times \mathfrak{A}^\vee$ inverting $z$ and $\hbar$, respectively.
We have the following (*non-commutative*) diagram $$\xymatrix{E\times \mathfrak{A}\times\mathfrak{A}^\vee\ar[rd]^-{D_z}\ar[rr]^-{D_\lambda}&&E\times \mathfrak{A}\times \mathfrak{A}^\vee\\ &E\times \mathfrak{A}\times \mathfrak{A}^\vee\ar[ur]^{D_\hbar}&}.$$ These maps induce auto-functors on the categories $\mathop{\mathrm{Coh}}^{W\times W^{{\operatorname{d}}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$, $\mathop{\mathrm{Coh}}^{W^{{\operatorname{d}}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$, $\mathop{\mathrm{Coh}}^{W}(\mathfrak{A}\times \mathfrak{A}^\vee)$, and $\mathop{\mathrm{Coh}}(\mathfrak{A}\times \mathfrak{A}^\vee)$. By definition, it is easy to see the following. **Lemma 20**. *We have* 1. *$D_z^*, D_\lambda^*, D_\hbar^*$ commute with each other, and commute with $w^*$ and $w^{{\operatorname{d}}*}$;* 2. *we have $(D_\lambda^*)^2=(D_z^*)^2=(D_\hbar^*)^2=\mathop{\mathrm{id}}$.* **Definition 21**. We define $$\begin{aligned} {\mathbb L}(-\lambda)&=D_\lambda^*{\mathbb L},& {\mathbb L}(-z)&=D_{z}^*{\mathbb L}, & {\mathbb S}(-\lambda)&=D_\lambda^*{\mathbb S},& {\mathbb S}(-z)&=D_z^*{\mathbb S}.\end{aligned}$$ We prove some basic properties of these functors. In particular, we are going to see that $D_\lambda^*{\mathbb L}=\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb L},{\mathcal G})$ and $D_z^*{\mathbb L}=\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb L},{\mathcal H}^{-1})$, so $D_\lambda^*$ and $D_z^*$ are actually the duality functors when applied to ${\mathbb L}$. **Lemma 22**. *We have the following canonical isomorphisms.* 1. *$D_\lambda^*{\mathcal G}={\mathcal G}, D_\lambda^*{\mathcal H}={\mathcal H}^{-1}$.* 2. *$D_z^*{\mathcal G}={\mathcal G}^{-1}$, $D_z^*{\mathcal H}={\mathcal H}$.* 3. *$D_\hbar^*{\mathcal G}={\mathcal G}^{-1},$ $D_\hbar^*{\mathcal H}={\mathcal H}^{-1}$.* 4. *$D_\lambda^*{\mathbb L}={\mathbb L}^{-1}\otimes {\mathcal G}=\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb L},{\mathcal G})$,* 5.
*$D_z^*{\mathbb L}={\mathbb L}^{-1}\otimes {\mathcal H}^{-1}=\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb L},{\mathcal H}^{-1})$,* 6. *$D_\hbar^*{\mathbb L}={\mathbb L}\otimes {\mathcal G}^{-1}\otimes {\mathcal H}$.* 7. *Applied on ${\mathbb L}$, ${\mathcal H}$, and ${\mathcal G}$, the composition of two of the maps in $\{D_z^*, D_\lambda^*, D_\hbar^*\}$ agrees with the third one.* *Proof.* (1)-(3). They follow from definitions. (4)-(6). They follow from the property of the Poincaré line bundle together with the definitions of ${\mathbb L}$, ${\mathcal H}$, and ${\mathcal G}$. (7). We only prove $D_\lambda^*{\mathbb L}=D_z^*D_\hbar^*{\mathbb L}$ as an example. By Parts (1)-(6), we have $$\begin{aligned} D_z^*D_\hbar^*{\mathbb L}&=D_z^*({\mathbb L}\otimes {\mathcal G}^{-1}\otimes {\mathcal H})={\mathbb L}^{-1}\otimes {\mathcal H}^{-1}\otimes {\mathcal G}\otimes {\mathcal H}={\mathbb L}^{-1}\otimes {\mathcal G}=D_\lambda^*{\mathbb L}.\end{aligned}$$ ◻ **Lemma 23**. *We have the following canonical isomorphisms.* 1. *${\mathbb S}(-\lambda)_{w,v}={\mathbb L}(-\lambda)\otimes (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb L}(-\lambda)^{-1}.$* 2. *${\mathbb S}(-z)_{w,v}={\mathbb L}(-z)\otimes (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb L}(-z)^{-1}.$* 3. *$D_\lambda^*{\mathbb S}_{w,v}={\mathbb S}(-\lambda)_{w,v}={\mathbb S}^{-1}_{w,v}\otimes {\mathcal G}\otimes (w^{-1})^*{\mathcal G}^{-1}.$* 4. *$D_z^*{\mathbb S}_{w,v}={\mathbb S}(-z)_{w,v}={\mathbb S}^{-1}_{w,v}\otimes {\mathcal H}^{-1}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathcal H}.$* 5. *$D_\hbar^*{\mathbb S}_{w,v}={\mathbb S}_{w,v}\otimes (w^{-1})^*{\mathcal G}\otimes {\mathcal G}^{-1}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathcal H}^{-1}\otimes {\mathcal H}.$* 6. *$D_\lambda^*{\mathbb S}(-z)_{w,v}={\mathbb S}(-z)_{w,v}^{-1}\otimes (w^{-1})^*{\mathcal G}\otimes {\mathcal G}^{-1}$.* 7.
*$D_z^*{\mathbb S}(-\lambda)_{w,v}={\mathbb S}(-\lambda)_{w,v}^{-1}\otimes {\mathcal H}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathcal H}^{-1}$.* *Proof.* (1)-(2). Applying $D_\lambda^*$ and $D_z^*$ to the identity from the definition of ${\mathbb S}_{w,v}$ in Definition [Definition 4](#def:bbS){reference-type="ref" reference="def:bbS"}: $${\mathbb S}_{w,v}={\mathbb L}\otimes (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1},$$ one obtains these two identities, respectively. (3)-(7). They all follow from Lemma [Lemma 22](#lem:D2){reference-type="ref" reference="lem:D2"}. ◻ **Lemma 24**. *The functors $D_\lambda^*$ and $D_z^*$ are monoidal functors on the monoidal category $\mathop{\mathrm{Coh}}^{W\times W^{\operatorname{d}}}(\mathfrak{A}\times \mathfrak{A}^\vee)$. In particular, ${\mathbb S}(-\lambda)$ and ${\mathbb S}(-z)$ are algebra objects acting on the modules ${\mathbb L}(-\lambda)$ and ${\mathbb L}(-z)$, respectively.* *Proof.* They follow from the fact that $D_\lambda$ and $D_z$ both commute with the Weyl actions. ◻ Hence, for any two morphisms $f:{\mathcal F}\to{\mathcal G}$ and $g:{\mathcal F}'\to {\mathcal G}'$, we have $D^*_\lambda(f\star g)=D^*_\lambda(f)\star D^*_\lambda(g)$. We can similarly define ${\mathbb S}(-\lambda)'$, ${\mathbb S}(-\lambda)''$, ${\mathbb S}(-z)'$, and ${\mathbb S}(-z)''$. **Remark 25**. Similar to § [\[subsec:rat_sec\]](#subsec:rat_sec){reference-type="ref" reference="subsec:rat_sec"}, one can write the twisted product of rational sections of ${\mathbb S}(-\lambda)$ and ${\mathbb S}(-z)$. Moreover, the actions of ${\mathbb S}(-z)$ and ${\mathbb S}(-\lambda)$ on the corresponding modules ${\mathbb M}(-\lambda)$, ${\mathbb M}(-\lambda)^{\operatorname{d}}$, ${\mathbb M}(-z)$, and ${\mathbb M}(-z)^{\operatorname{d}}$ can be described in the same way as in § [\[subsec:rat_sec\]](#subsec:rat_sec){reference-type="ref" reference="subsec:rat_sec"}.
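As a rank-one sanity check of Lemma [Lemma 23](#lem:D-S){reference-type="ref" reference="lem:D-S"}.(3), take $w=v=s_\alpha$ in a root system with a single positive root, so that $\rho=\alpha/2$ and ${\mathcal G}=\tilde{\mathcal O}(2\hbar\rho)=\tilde{\mathcal O}(\hbar\alpha)$. Combining with Lemma [Lemma 18](#lem:basicS){reference-type="ref" reference="lem:basicS"}.(2) gives $$\begin{aligned} {\mathbb S}(-\lambda)_{s_\alpha,s_\alpha}&={\mathbb S}^{-1}_{s_\alpha,s_\alpha}\otimes {\mathcal G}\otimes s_\alpha^*{\mathcal G}^{-1}=({\mathcal O}(-\hbar\alpha)\boxtimes {\mathcal O}(\hbar\alpha^\vee))\otimes \tilde{\mathcal O}(\hbar\alpha)\otimes \tilde{\mathcal O}(\hbar\alpha)\\ &={\mathcal O}(\hbar\alpha)\boxtimes {\mathcal O}(\hbar\alpha^\vee),\end{aligned}$$ where we used $s_\alpha^*\tilde{\mathcal O}(-\hbar\alpha)=\tilde{\mathcal O}(\hbar\alpha)$, since $s_\alpha(\alpha)=-\alpha$.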
It is easy to see that ${\mathbb S}(-\lambda)_{w,v}$ and ${\mathbb S}(-z)_{w,v}$ share properties similar to those of ${\mathbb S}_{w,v}$ in Lemma [Lemma 18](#lem:basicS){reference-type="ref" reference="lem:basicS"}. Notably, we have $$\begin{aligned} {\mathbb S}(-\lambda)_{w_1w_2,v_1v_2}&={\mathbb S}(-\lambda)_{w_1,v_1}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}{\mathbb S}(-\lambda)_{w_2,v_2},\\ {\mathbb S}(-\lambda)_{w,v}^{-1}&= (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb S}(-\lambda)_{w^{-1}, v^{-1}},\\ {\mathbb S}(-\lambda)_{\alpha, \alpha}&={\mathcal O}(\hbar\alpha)\boxtimes {\mathcal O}(\hbar\alpha^\vee),\\ {\mathbb S}(-z)_{w_1w_2,v_1v_2}&={\mathbb S}(-z)_{w_1,v_1}\otimes (w_1^{-1})^*(v_1^{-1})^{{\operatorname{d}}*}{\mathbb S}(-z)_{w_2,v_2},\\ {\mathbb S}(-z)_{w,v}^{-1}&= (w^{-1})^*(v^{-1})^{{\operatorname{d}}*}{\mathbb S}(-z)_{w^{-1}, v^{-1}},\\ {\mathbb S}(-z)_{\alpha, \alpha}&={\mathcal O}(-\hbar\alpha)\boxtimes {\mathcal O}(-\hbar\alpha^\vee).\end{aligned}$$ ## The anti-involutions {#subsec:antiinvolution} From Lemma [Lemma 23](#lem:D-S){reference-type="ref" reference="lem:D-S"}.(3), we have the following canonical isomorphisms $${\mathbb S}(-\lambda)''_{w^{-1},v^{-1}}=(v^{-1})^{{\operatorname{d}}*}{\mathbb S}(-\lambda)_{w^{-1},v^{-1}}= w^*{\mathbb S}_{w, v}\otimes w^*{\mathcal G}^{-1}\otimes {\mathcal G}={\mathbb S}_{w,v}'\otimes w^*{\mathcal G}^{-1}\otimes {\mathcal G}.$$ The rational section ${\mathbf g}$ defines a rational section $\frac{{\mathbf g}}{{}^{w^{-1}}{\mathbf g}}$ of $w^*{\mathcal G}^{-1}\otimes {\mathcal G}$. We define the rational map $$\label{eq:iotaDla} {\iota}_{\lambda}:{\mathbb S}_{w,v}'\dashrightarrow {\mathbb S}(-\lambda)_{w^{-1},v^{-1}}''=\frak{D}({\mathbb S}(-\lambda)''_{w,v}),$$ given by multiplication by $\frac{{\mathbf g}}{{}^{w^{-1}}{\mathbf g}}$. Here $\frak{D}$ is the degree inversion functor defined in § [3.4](#subsec:deg_inv){reference-type="ref" reference="subsec:deg_inv"}.
Taking the sum over all $w,v\in W$, we obtain a rational map of $W\times W^{\operatorname{d}}$-graded coherent sheaves $$\label{eq:iota_la} {\iota}_{\lambda}:{\mathbb S}'\dashrightarrow \frak{D}({\mathbb S}(-\lambda)''), \quad \delta_wa\delta_v^{\operatorname{d}}\mapsto \delta_{v^{-1}}^{\operatorname{d}}a\cdot \frac{{\mathbf g}}{{}^{w^{-1}}{\mathbf g}}\delta_{w^{-1}}.$$ **Proposition 26**. *The rational map $\iota_{\lambda}$ defines an anti-homomorphism of sheaves of algebras. That is, we have the following commutative diagram $$\xymatrix{{\mathbb S}'_{w_1,v_1}\star'{\mathbb S}'_{w_2,v_2}\ar[d]^-\sim\ar@{-->}[rr]^-{\sigma\circ (\iota_{\lambda}\times \iota_{\lambda})}&&\frak{D}({\mathbb S}(-\lambda)_{w_2,v_2}''){\star'}\frak{D}({\mathbb S}(-\lambda)_{w_1,v_1}'')\ar@{=}[d]_-{\eqref{eqn:D-star}}&\\ {\mathbb S}'_{w_1w_2,v_1v_2}\ar@{-->}[r]_-{\iota_{\lambda}}&\frak{D}({\mathbb S}(-\lambda)_{w_1w_2,v_1v_2}'')& \frak{D}\left({\mathbb S}(-\lambda)_{w_2,v_2}''{\star''}{\mathbb S}(-\lambda)_{w_1,v_1}''\right)\ar[l]^-{\sim}}.$$ Here $\sigma$ means switching the two components.* *Proof.* It suffices to show this on graded pieces.
On ${\mathbb S}'_{w_1,v_1}\star' {\mathbb S}'_{w_2,v_2}$, the composite of the top horizontal maps sends $(\delta_{w_1}a_1\delta_{v_1}^{\operatorname{d}}, \delta_{w_2}a_2\delta_{v_2}^{\operatorname{d}})$ to $$\delta_{v_2^{-1}}^{\operatorname{d}}a_2\cdot \frac{{\mathbf g}}{{}^{w_2^{-1}}{\mathbf g}}\delta_{w_2^{-1}}\delta_{v_1^{-1}}^{\operatorname{d}}a_1\cdot \frac{{\mathbf g}}{{}^{w_1^{-1}}{\mathbf g}}\delta_{w_1^{-1}}=\delta_{(v_1v_2)^{-1}}^{\operatorname{d}}{}^{v_1^{\operatorname{d}}}a_2\cdot {}^{w_2^{-1}}a_1\cdot \frac{{\mathbf g}}{{}^{w_2^{-1}}{\mathbf g}}\cdot {}^{w_2^{-1}}(\frac{{\mathbf g}}{{}^{w_1^{-1}}{\mathbf g}})\delta_{(w_1w_2)^{-1}}.$$ On the other hand, we have $$\iota_\lambda(\delta_{w_1}a_1\delta_{v_1}^{\operatorname{d}}\delta_{w_2}a_2\delta_{v_2}^{\operatorname{d}})=\iota_\lambda(\delta_{w_1w_2}{}^{w_2^{-1}}a_1\cdot {}^{v_1^{\operatorname{d}}}a_2\delta_{v_1v_2}^{\operatorname{d}})=\delta_{(v_1v_2)^{-1}}^{\operatorname{d}}{}^{v_1^{\operatorname{d}}}a_2\cdot {}^{w_2^{-1}}a_1\cdot \frac{{\mathbf g}}{{}^{(w_1w_2)^{-1}}{\mathbf g}}\delta_{(w_1w_2)^{-1}}.$$ Therefore, the two agree, since the cocycle factors telescope: $\frac{{\mathbf g}}{{}^{w_2^{-1}}{\mathbf g}}\cdot {}^{w_2^{-1}}(\frac{{\mathbf g}}{{}^{w_1^{-1}}{\mathbf g}})=\frac{{\mathbf g}}{{}^{w_2^{-1}w_1^{-1}}{\mathbf g}}=\frac{{\mathbf g}}{{}^{(w_1w_2)^{-1}}{\mathbf g}}$.
◻ Similarly, the rational section ${\mathbf h}$ of ${\mathcal H}$ and the canonical isomorphisms $${\mathbb S}(-z)_{w^{-1}, v^{-1}}''=(v^{-1})^{{\operatorname{d}}*}{\mathbb S}(-z)_{w^{-1},v^{-1}}=w^*{\mathbb S}_{w,v}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathcal H}^{-1}\otimes {\mathcal H}={\mathbb S}'_{w,v}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathcal H}^{-1}\otimes {\mathcal H}$$ define an anti-homomorphism by multiplying by $\frac{{\mathbf h}}{{}^{v^{\operatorname{d}}}{\mathbf h}}$: $$\label{eq:iota_z} \iota_{z}: {\mathbb S}'_{w, v}\dashrightarrow {\mathbb S}(-z)''_{w^{-1},v^{-1}}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathcal H}\otimes {\mathcal H}^{-1}, \quad \delta_w a\delta_v^{\operatorname{d}}\mapsto \delta_{v^{-1}}^{\operatorname{d}}a\cdot \frac{{\mathbf h}}{{}^{v^{\operatorname{d}}}{\mathbf h}}\delta_{w^{-1}}.$$ The statement corresponding to Proposition [Proposition 26](#prop:anti){reference-type="ref" reference="prop:anti"} holds as well, that is, $$\iota_z(\delta_{w_1}a_1\delta_{v_1}^{\operatorname{d}}\delta_{w_2}a_2\delta_{v_2}^{\operatorname{d}})=\iota_z(\delta_{w_2}a_2\delta_{v_2}^{\operatorname{d}})\iota_z(\delta_{w_1}a_1\delta_{v_1}^{\operatorname{d}}).$$ **Remark 27**. Via the equivalences of categories in Lemma [Lemma 7](#lem:monoidal_equivalence){reference-type="ref" reference="lem:monoidal_equivalence"}, we can replace the domains and codomains of $\iota_\lambda$ and $\iota_z$ by the other versions of ${\mathbb S}, {\mathbb S}(-\lambda)$ and ${\mathbb S}(-z)$. 
For instance, if one considers $\iota_\lambda:{\mathbb S}(-\lambda)_{w,v}\dashrightarrow {\mathbb S}_{w^{-1}, v^{-1}}'$, in terms of rational sections, one would get $$\iota_\lambda(a\delta_w\delta_v^{\operatorname{d}})=\iota_\lambda(\delta_w {}^{w^{-1}}a\delta_v^{\operatorname{d}})=\delta_{v^{-1}}^{\operatorname{d}}{}^{w^{-1}}a\cdot \frac{{\mathbf g}}{{}^{w^{-1}}{\mathbf g}}\delta_{w^{-1}}=\delta_{w^{-1}}{}^{(v^{-1})^{\operatorname{d}}}a\cdot \frac{{}^w{\mathbf g}}{{\mathbf g}}\delta_{v^{-1}}^{\operatorname{d}}.$$ ## Poincaré pairings on periodic modules Recall that $$\begin{aligned} {\mathbb M}&=\bigoplus_{x\in W}{\mathbb M}_x=\bigoplus_{x\in W}x^*{\mathbb L}, &{\mathbb M}^{\operatorname{d}}&=\bigoplus_{y\in W}{\mathbb M}^{\operatorname{d}}_y=\bigoplus_{y\in W}y^{{\operatorname{d}}*}{\mathbb L}.\end{aligned}$$ Applying $D^*_\lambda$ and $D^*_z$ gives $$\begin{aligned} {\mathbb M}(-\lambda)&:=\bigoplus_{x\in W}{\mathbb M}(-\lambda)_x=\bigoplus_{x\in W}x^*{\mathbb L}(-\lambda), &{\mathbb M}(-\lambda)^{\operatorname{d}}&:=\bigoplus_{y\in W}{\mathbb M}(-\lambda)^{\operatorname{d}}_y=\bigoplus_{y\in W}y^{{\operatorname{d}}*}{\mathbb L}(-\lambda),\\ {\mathbb M}(-z)&:=\bigoplus_{x\in W}{\mathbb M}(-z)_x=\bigoplus_{x\in W}x^*{\mathbb L}(-z), &{\mathbb M}(-z)^{\operatorname{d}}&:=\bigoplus_{y\in W}{\mathbb M}(-z)_y^{\operatorname{d}}=\bigoplus_{y\in W}y^{{\operatorname{d}}*}{\mathbb L}(-z).\end{aligned}$$ As in § [3.5](#subsec:act_periodic){reference-type="ref" reference="subsec:act_periodic"}, ${\mathbb M}(-\lambda)$ and ${\mathbb M}(-\lambda)^{\operatorname{d}}$ (resp. ${\mathbb M}(-z)$ and ${\mathbb M}(-z)^{\operatorname{d}}$) are modules over ${\mathbb S}(-\lambda)$, ${\mathbb S}(-\lambda)'$, and ${\mathbb S}(-\lambda)''$ (resp. ${\mathbb S}(-z)$, ${\mathbb S}(-z)'$, and ${\mathbb S}(-z)''$), with actions denoted by $\bullet$ (resp. $\bullet^{\operatorname{d}}$).
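To see that the graded pairing defined next indeed lands in ${\mathcal O}$, note that by Lemma [Lemma 22](#lem:D2){reference-type="ref" reference="lem:D2"}.(4) we have $${\mathbb M}_x\otimes {\mathbb M}(-\lambda)_x=x^*{\mathbb L}\otimes x^*({\mathbb L}^{-1}\otimes {\mathcal G})=x^*{\mathcal G}.$$ Since ${\mathbf g}$ is a rational section of ${\mathcal G}$, the twist ${}^{x^{-1}}{\mathbf g}$ is a rational section of $x^*{\mathcal G}$, so multiplication by $\frac{1}{{}^{x^{-1}}{\mathbf g}}$ defines a rational map $x^*{\mathcal G}\dashrightarrow {\mathcal O}$.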
We define the following pairing $$\langle\_, \_\rangle_{\lambda}: {\mathbb M}\otimes {\mathbb M}(-\lambda)\dashrightarrow {\mathcal O}, ~{\mathbb M}_x\otimes {\mathbb M}(-\lambda)_x=x^*{\mathcal G}\overset{\frac{1}{{}^{x^{-1}}{\mathbf g}}}\dashrightarrow {\mathcal O}.$$ In terms of rational sections, $$\label{eq:D-la-pair} \langle af_{x_1}, bf_{x_2}\rangle_{\lambda}= \delta_{x_1,x_2}^{\mathop{\mathrm{{{Kr}}}}}a\cdot b\cdot \frac{1}{{}^{x_1^{-1}}{\mathbf g}}.$$ **Proposition 28** (The Adjointness). *For any $v,w,x\in W$, the following diagram commutes $$\xymatrix{ v^{{\operatorname{d}}*}\big(({\mathbb S}_{w,v}'\star' {\mathbb M}_x)\otimes {\mathbb M}(-\lambda)_{wx}\big) \ar@{-->}[d]^-{ (p\star'af_x,bf_{wx}) \mapsto (af_x,\iota_\lambda(p)\star'' bf_{wx}) }\ar[rr]^-{v^{{\operatorname{d}}*}(\bullet\otimes \mathop{\mathrm{id}})}&&v^{{\operatorname{d}}*}\big({\mathbb M}_{wx}\otimes {\mathbb M}(-\lambda)_{wx}\big)\ar@{-->}[rr]^-{v^{{\operatorname{d}}*}\langle-,-\rangle_\lambda}&&{\mathcal O}\ar@{=}[d]\\ {\mathbb M}_x\otimes \left({\mathbb S}''(-\lambda)_{w^{-1}, v^{-1}}\star'' {\mathbb M}(-\lambda)_{wx}\right)\ar[rr]_-{\mathop{\mathrm{id}}\otimes \bullet}& &{\mathbb M}_x\otimes {\mathbb M}(-\lambda)_x\ar@{-->}[rr]_-{\langle-,-\rangle_\lambda} && {\mathcal O}}.$$ Here the left vertical map is given by the composite $$\xymatrix{v^{{\operatorname{d}}*}\big(({\mathbb S}_{w,v}'\star' {\mathbb M}_x)\otimes {\mathbb M}(-\lambda)_{wx}\big)\ar[r]^-\sim &v^{{\operatorname{d}}*}x^*{\mathbb S}'_{w,v}\otimes {\mathbb M}_x\otimes v^{{\operatorname{d}}*}{\mathbb M}(-\lambda)_{wx}\ar@{-->}[d]^-{\iota_\lambda\otimes \mathop{\mathrm{id}}\otimes \mathop{\mathrm{id}}}\\ {\mathbb M}_x\otimes \left({\mathbb S}''(-\lambda)_{w^{-1}, v^{-1}}\star'' {\mathbb M}(-\lambda)_{wx}\right)& x^*v^{{\operatorname{d}}*}{\mathbb S}(-\lambda)''_{w^{-1}, v^{-1}}\otimes {\mathbb M}_x\otimes v^{{\operatorname{d}}*}{\mathbb M}(-\lambda)_{wx}\ar[l]^-\sim}.$$ In particular, for any rational section $p$ of ${\mathbb S}_{w,v}'$, and $f$ and
$f'$ rational sections of the corresponding modules, we have $$\begin{aligned} \label{eq:adjD-la}{}^{(v^{-1})^{{\operatorname{d}}}}\langle p\bullet f,f'\rangle_{\lambda}&=\langle f,\iota_{\lambda}(p)\bullet f'\rangle_{\lambda}.\end{aligned}$$* *Proof.* One can check the commutativity by using the formulas of $\iota_\lambda$ in [\[eq:iota_la\]](#eq:iota_la){reference-type="eqref" reference="eq:iota_la"} and the pairing $\langle\_, \_\rangle_\lambda$ in [\[eq:D-la-pair\]](#eq:D-la-pair){reference-type="eqref" reference="eq:D-la-pair"}. ◻ **Remark 29**. Similarly, we also have the following pairings: $$\begin{aligned} \langle\_, \_\rangle_{ \lambda}^{\operatorname{d}}:{\mathbb M}^{\operatorname{d}}\otimes {\mathbb M}(-\lambda)^{\operatorname{d}}\dashrightarrow {\mathcal O},& \quad \langle\_, \_\rangle_{ \lambda}^{\operatorname{d}}:{\mathbb M}_y^{\operatorname{d}}\otimes {\mathbb M}(-\lambda)^{\operatorname{d}}_y={\mathcal G}\overset{\frac{1}{{\mathbf g}}}\dashrightarrow {\mathcal O}.\\ \langle\_, \_\rangle_{ z}: {\mathbb M}\otimes {\mathbb M}(-z)\dashrightarrow {\mathcal O}, &\quad \langle\_, \_\rangle_{ z}: {\mathbb M}_x\otimes {\mathbb M}(-z)_x={\mathcal H}^{-1}\overset{\mathbf h}\dashrightarrow {\mathcal O}.\\ \langle\_, \_\rangle_{ z}^{\operatorname{d}}: {\mathbb M}^{\operatorname{d}}\otimes {\mathbb M}(-z)^{\operatorname{d}} \dashrightarrow {\mathcal O},&\quad \langle\_,\_\rangle_z^{\operatorname{d}}: {\mathbb M}_y^{\operatorname{d}}\otimes {\mathbb M}(-z)^{\operatorname{d}}_y=y^{{\operatorname{d}}*}{\mathcal H}^{-1}\overset{{}^{(y^{-1})^{\operatorname{d}}}{\mathbf h}}\dashrightarrow {\mathcal O}.\end{aligned}$$ For instance, in terms of rational sections, the last one can be written as $$\label{eq:D-z-dyn-pair} \langle af_{y_1}^{\operatorname{d}}, bf_{y_2}^{\operatorname{d}}\rangle_{ z}^{\operatorname{d}}= \delta^{\mathop{\mathrm{{{Kr}}}}}_{y_1,y_2}a\cdot b\cdot {}^{(y_1^{-1})^{\operatorname{d}}}{\mathbf h}.$$ They satisfy the following adjointness: given $p$ a rational section of
${\mathbb S}_{w,v}$ (or ${\mathbb S}'_{w,v}$ or ${\mathbb S}''_{w,v}$), $$\begin{aligned} \label{eq:adjD-la-dyn}\langle p\bullet^{\operatorname{d}}f^{\operatorname{d}}, (f')^{\operatorname{d}}\rangle_{ \lambda}^{\operatorname{d}}&={}^w\langle f^{\operatorname{d}}, \iota_{ \lambda}(p)\bullet^{\operatorname{d}}(f')^{\operatorname{d}}\rangle_{ \lambda}^{\operatorname{d}},\\ \label{eq:adjD-z}\langle p\bullet f,f'\rangle_{ z}&={}^{v^{{\operatorname{d}}}}\langle f,\iota_{ z}(p)\bullet f'\rangle_{ z},\\ \label{eq:adjD-z-dyn}\langle p\bullet^{\operatorname{d}}f^{\operatorname{d}}, (f')^{\operatorname{d}}\rangle_{ z}^{\operatorname{d}}&={}^w\langle f^{\operatorname{d}}, \iota_{ z}(p)\bullet^{\operatorname{d}}(f')^{\operatorname{d}}\rangle^{\operatorname{d}}_{ z}.\end{aligned}$$ # Langlands dual {#sec:Langlands} In this section we consider the Langlands dual system. In Proposition [Proposition 31](#prop:i*S){reference-type="ref" reference="prop:i*S"} and Lemma [Lemma 36](#lem:i*T){reference-type="ref" reference="lem:i*T"} below, we identify the DL operators of the Langlands dual system with the operators $T^{z,\lambda,{\operatorname{d}}}_\alpha$ defined in § [7](#sec:DL){reference-type="ref" reference="sec:DL"}. Let ${\mathcal A}=\mathfrak{A}^\vee$; we then have a natural isomorphism of abelian varieties ${\mathcal A}^\vee\cong\mathfrak{A}$ that preserves group structures. Notice that ${\mathcal A}=X^*\otimes E$ and ${\mathcal A}^\vee=X_*\otimes E$, where the lattices $X^*$ and $X_*$ come from the root system and hence carry an action of its Weyl group $W$. The actions are denoted by $w^\vee$ on ${\mathcal A}$ and $w^{\vee{\operatorname{d}}}$ on ${\mathcal A}^\vee$, respectively. Fixing $\hbar':=-\hbar\in E$, the construction of § [\[subsec:rho-shift\]](#subsec:rho-shift){reference-type="ref" reference="subsec:rho-shift"} defines a line bundle ${\mathbb L}^\vee$ on ${\mathcal A}\times{\mathcal A}^\vee$.
For concreteness and to avoid possible confusion, we spell out the line bundle ${\mathbb L}^\vee$[^1]. The construction of § [2.3](#subsec:rootsys){reference-type="ref" reference="subsec:rootsys"} using the lattices $X^*$ and $X_*$ defines coordinates $(z^\vee,\lambda^\vee)\in {\mathcal A}\times{\mathcal A}^\vee$. Clearly, we may identify $(z^\vee,\lambda^\vee)\in {\mathcal A}\times{\mathcal A}^\vee$ with $(\lambda, z)\in \mathfrak{A}^\vee\times \mathfrak{A}$. Moreover, $(\rho^\vee)^\vee$ is identified with $\rho$. Then, similarly to [\[eq:Lres\]](#eq:Lres){reference-type="eqref" reference="eq:Lres"}, we have: $$\label{eq:Lvee-res} {\mathbb L}^\vee|_{\lambda^\vee}={\mathcal O}(\lambda^\vee+\hbar'\rho^\vee)={\mathcal O}(\lambda^\vee-\hbar\rho^\vee), ~{\mathbb L}^\vee|_{z^\vee}={\mathcal O}(z^\vee-\hbar'\rho)={\mathcal O}(z^\vee+\hbar\rho).$$ As a consequence, we obtain the following. **Proposition 30**. *Let $i:\mathfrak{A}\times\mathfrak{A}^\vee\to {\mathcal A}\times{\mathcal A}^\vee$ be the isomorphism $\lambda\mapsto z^\vee$ and $z\mapsto \lambda^\vee$. Then there is a canonical isomorphism $$i^*{\mathbb L}^\vee={\mathbb L}.$$* *Proof.* This follows from the definition of Poincaré line bundles and the following identifications under $i$: $$\hbar'\rho^\vee=-\hbar\rho^\vee\hbox{ and }-\hbar'(\rho^\vee)^\vee=\hbar\rho.$$ ◻ It is easy to see that $$\begin{aligned} \label{eq:dualweyl} w^*i^*&=i^*(w^{\vee})^{{\operatorname{d}}*}, & v^{{\operatorname{d}}*}i^*&=i^*(v^\vee)^*.\end{aligned}$$ Similarly to Definition [Definition 4](#def:bbS){reference-type="ref" reference="def:bbS"}, we define $${\mathbb S}^\vee_{w,v}={\mathbb L}^\vee\otimes ((w^\vee)^{-1})^*((v^\vee)^{-1})^{{\operatorname{d}}*}({\mathbb L}^\vee)^{-1}.$$ **Proposition 31**.
*We have $$i^*{\mathbb S}^\vee_{w,v}={\mathbb S}_{v,w},\quad i^*{\mathbb S}^{\vee'}_{w,v}={\mathbb S}''_{v,w}, \quad i^*{\mathbb S}^{\vee''}_{w,v}={\mathbb S}'_{v,w}.$$* *Proof.* The first one follows from [\[eq:dualweyl\]](#eq:dualweyl){reference-type="eqref" reference="eq:dualweyl"} and Proposition [Proposition 30](#prop:i-Lvee){reference-type="ref" reference="prop:i-Lvee"}. For the second one, we have $$i^*({\mathbb S}^\vee_{w,v})'=i^*(w^\vee)^*{\mathbb S}^\vee_{w,v}=w^{{\operatorname{d}}*}i^*{\mathbb S}^\vee_{w,v}=w^{{\operatorname{d}}*}{\mathbb S}_{v,w}={\mathbb S}''_{v,w}.$$ The third identity can be proved similarly. ◻ **Example 32**. We have $$\begin{aligned} (s_\alpha^{\vee*}{\mathbb L}^\vee)|_{z^\vee}&={\mathbb L}^\vee|_{s_\alpha^\vee(z^\vee)}={\mathcal O}(s_\alpha^\vee(z^\vee)+\hbar\rho)={\mathcal O}(z^\vee+\hbar\rho-z^\vee_{\alpha^\vee}\alpha)={\mathbb L}^\vee|_{z^\vee}\otimes {\mathcal O}(-z^\vee_{\alpha^\vee}\alpha),\\ ~(s_\alpha^{\vee*}{\mathbb L}^\vee)|_{\lambda^\vee}&=s_\alpha^{\vee*}({\mathbb L}^\vee|_{\lambda^\vee})=s_\alpha^{\vee*}{\mathcal O}(\lambda^\vee-\hbar\rho^\vee)={\mathcal O}(s_\alpha^{\vee{\operatorname{d}}}(\lambda^\vee-\hbar\rho^\vee))\\ &={\mathcal O}(\lambda^\vee-\hbar\rho^\vee-\lambda^\vee_{\alpha}\alpha^\vee+\hbar\alpha^\vee)={\mathbb L}^\vee|_{\lambda^\vee}\otimes {\mathcal O}(\hbar\alpha^\vee-\lambda^\vee_\alpha\alpha^\vee).\\ (s_\alpha^{\vee})^{{\operatorname{d}}*}{\mathbb L}^\vee|_{z^\vee}&=(s_{\alpha}^\vee)^{{\operatorname{d}}*}({\mathbb L}^\vee|_{z^\vee})=(s_\alpha^\vee)^{{\operatorname{d}}*}{\mathcal O}(z^\vee+\hbar\rho)={\mathcal O}(s_\alpha^\vee(z^\vee+\hbar\rho))={\mathcal O}(z^\vee+\hbar\rho-z^\vee_{\alpha^\vee}\alpha-\hbar\alpha), \\ (s_\alpha^\vee)^{{\operatorname{d}}*}{\mathbb L}^\vee|_{\lambda^\vee}&={\mathbb L}^\vee|_{s_\alpha^\vee(\lambda^\vee)}={\mathcal O}(s_\alpha^\vee(\lambda^\vee)-\hbar\rho^\vee)={\mathcal O}(\lambda^\vee-\lambda^\vee_\alpha\alpha^\vee-\hbar\rho^\vee).\end{aligned}$$ Similarly to Lemma [Lemma
18](#lem:basicS){reference-type="ref" reference="lem:basicS"}, we have $${\mathbb S}^\vee_{\alpha, \alpha}={\mathbb L}^\vee\otimes (s_\alpha^*s_\alpha^{{\operatorname{d}}*}{\mathbb L}^{\vee})^{-1}={\mathcal O}(\hbar\alpha)\boxtimes {\mathcal O}(-\hbar\alpha^\vee).$$ One can then verify that $i^*{\mathbb S}_{\alpha,\alpha}^\vee={\mathbb S}_{\alpha,\alpha}$ because from Lemma [Lemma 18](#lem:basicS){reference-type="ref" reference="lem:basicS"}, we have ${\mathbb S}_{\alpha,\alpha}={\mathcal O}(\hbar\alpha)\boxtimes {\mathcal O}(-\hbar\alpha^\vee)$. # The elliptic DL operators {#sec:DL} In this section, we define the DL operators with dynamical parameters and show that they are rational sections of the elliptic twisted group algebras. ## The DL operators {#subsec:DL_operator} Let $z\in \mathfrak{A}, \lambda\in \mathfrak{A}^\vee$. For any simple root $\alpha$, define the DL operators with dynamical parameters by $$\begin{aligned} \label{eq:Tzla} T_{\alpha}^{z,\lambda}&=\delta_\alpha^{\operatorname{d}}\frac{\theta(\hbar)\theta(z_\alpha+\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar+\lambda_{\alpha^\vee})}+\delta_\alpha^{{\operatorname{d}}}\frac{\theta(\hbar-z_\alpha)\theta(-\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar+\lambda_{\alpha^\vee})}\delta_\alpha.\\ \label{eq:Tzladyn} T_\alpha^{z,\lambda, {\operatorname{d}}}&=\delta_\alpha\frac{\theta(\hbar)\theta(\lambda_{\alpha^\vee}+z_\alpha)}{\theta(\lambda_{\alpha^\vee})\theta(\hbar-z_{\alpha})}+\delta_\alpha\frac{\theta(\hbar+\lambda_{\alpha^\vee})\theta(-z_{\alpha})}{\theta(\lambda_{\alpha^\vee})\theta(\hbar-z_{\alpha})}\delta_\alpha^{{\operatorname{d}}}.\end{aligned}$$ Using $\iota_{ \lambda}, \iota_{ z}$ defined in [\[eq:iota_la\]](#eq:iota_la){reference-type="eqref" reference="eq:iota_la"} and [\[eq:iota_z\]](#eq:iota_z){reference-type="eqref" reference="eq:iota_z"}, we define $$\begin{aligned} \iota_{ \lambda}(T_\alpha^{z,\lambda})=T_{\alpha}^{z,-\lambda}, & ~\iota_{ 
\lambda}(T^{z,\lambda,{\operatorname{d}}}_{\alpha})=T_\alpha^{z, -\lambda,{\operatorname{d}}}, \\ \iota_{ z}(T^{z,\lambda}_\alpha)=T_\alpha^{-z, \lambda}, &~ \iota_{ z}(T_\alpha^{z,\lambda, {\operatorname{d}}})=T_\alpha^{-z, \lambda,{\operatorname{d}}}, \end{aligned}$$ explicitly given by $$\begin{aligned} \label{eq:Tz-la} T_{\alpha}^{z,-\lambda}&=\delta_\alpha^{\operatorname{d}}\frac{\theta(\hbar)\theta(z_\alpha-\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar-\lambda_{\alpha^\vee})}+\delta_\alpha^{\operatorname{d}}\frac{\theta(\hbar-z_\alpha)\theta(\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar-\lambda_{\alpha^\vee})}\delta_\alpha, \\ \label{eq:Tz-ladyn}T_{\alpha}^{z,-\lambda, {\operatorname{d}}}&=\delta_\alpha\frac{\theta(\hbar)\theta(z_\alpha-\lambda_{\alpha^\vee})}{\theta(\lambda_{\alpha^\vee})\theta(\hbar-z_\alpha)}+\delta_\alpha\frac{\theta(\hbar-\lambda_{\alpha^\vee})\theta(z_\alpha)}{\theta(\lambda_{\alpha^\vee})\theta(\hbar-z_\alpha)}\delta_\alpha^{{\operatorname{d}}}, \\ \label{eq:T-zla} T_{\alpha}^{-z,\lambda}&=\delta_\alpha^{\operatorname{d}}\frac{\theta(\hbar)\theta(z_\alpha-\lambda_{\alpha^\vee})}{\theta(-z_\alpha)\theta(\hbar+\lambda_{\alpha^\vee})}+\delta_\alpha^{{\operatorname{d}}}\frac{\theta(\hbar+z_\alpha)\theta(\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar+\lambda_{\alpha^\vee})}\delta_\alpha, \\ \label{eq:T-zladyn}T_{\alpha}^{-z,\lambda, {\operatorname{d}}}&=\delta_\alpha\frac{\theta(\hbar)\theta(\lambda_{\alpha^\vee}-z_\alpha)}{\theta(\lambda_{\alpha^\vee})\theta(\hbar+z_\alpha)}+\delta_\alpha\frac{\theta(\hbar+\lambda_{\alpha^\vee})\theta(z_\alpha)}{\theta(\lambda_{\alpha^\vee})\theta(\hbar+z_\alpha)} \delta_\alpha^{{\operatorname{d}}}. \end{aligned}$$ Note that the operators $T^{\pm z,\pm \lambda}_\alpha$ have constant degree $s_\alpha^{\operatorname{d}}$ in $W^{\operatorname{d}}$, and $T^{\pm z,\pm \lambda, {\operatorname{d}}}_\alpha$ have constant degree $s_\alpha$ in $W$.
They are the dynamical DL operators for $G/B$ and for its Langlands dual $G^\vee/B^\vee$, respectively. **Remark 33**. Notice that $T^{z,-\lambda}_\alpha$ and $T^{z,-\lambda,{\operatorname{d}}}_\alpha$ are defined to be $\iota_\lambda$ applied to $T^{z,\lambda}_\alpha$ and $T^{z,\lambda,{\operatorname{d}}}_\alpha$, respectively. This is in general not the same as applying $D_\lambda^*$ to $T^{z,\lambda}_\alpha$ and $T^{z,\lambda,{\operatorname{d}}}_\alpha$, whereas the latter simply substitutes $-\lambda$ in place of $\lambda$ in [\[eq:Tzla\]](#eq:Tzla){reference-type="eqref" reference="eq:Tzla"} and [\[eq:Tzladyn\]](#eq:Tzladyn){reference-type="eqref" reference="eq:Tzladyn"} respectively. Indeed, one can simply check that $T^{z,-\lambda}_\alpha= D_\lambda^*T^{z,\lambda}_\alpha$ but $T^{z,-\lambda,{\operatorname{d}}}_\alpha\neq D_\lambda^*T^{z,\lambda, {\operatorname{d}}}_\alpha$. Similarly, $T^{-z,\lambda,{\operatorname{d}}}_\alpha= D_z^*T^{z,\lambda,{\operatorname{d}}}_\alpha$ but $T^{-z,\lambda}_\alpha\neq D_z^*T^{z,\lambda}_\alpha$. **Remark 34**. The formula of the DL operator with dynamical parameters first appeared in [@RW20]. Indeed, one obtains $-T_\alpha^{z,-\lambda}$ from the formula in [@RW20 Theorem 1.3] by plugging in $-\ln h=\hbar$, $\ln h^{\alpha^\vee}=\lambda_{\alpha^\vee}$, and $c_1^{coh}({\mathcal L}_\alpha)=z_\alpha$ with ${\mathcal L}_\alpha=T\times_B{\mathbb C}_{-\alpha}$. As noted in [@ZZ22 Remark 4.10], the same formula can alternatively be obtained from Felder's elliptic R-matrices with dynamical parameters, which originated from solutions to the dynamical Yang-Baxter equation. **Theorem 35**. 1. *The operators $T^{z,\lambda}_\alpha$ and $T^{z,\lambda,{\operatorname{d}}}_\alpha$ are rational sections of ${\mathbb S}''_{}$ and ${\mathbb S}'_{}$, respectively.* 2. *The operators $T^{z,-\lambda}_\alpha$ and $T^{z,-\lambda,{\operatorname{d}}}_\alpha$ are rational sections of ${\mathbb S}(-\lambda)''_{}$ and ${\mathbb S}(-\lambda)'_{}$, respectively.* 3.
*The operators $T^{-z,\lambda}_\alpha$ and $T^{-z,\lambda,{\operatorname{d}}}_\alpha$ are rational sections of ${\mathbb S}(-z)''_{}$ and ${\mathbb S}(-z)'_{}$, respectively.* *Proof.* We only prove the conclusion for $T^{z,\lambda}_\alpha$ as an example. By Lemma [Lemma 7](#lem:monoidal_equivalence){reference-type="ref" reference="lem:monoidal_equivalence"} and §[3.7](#subsec:Sproductrational){reference-type="ref" reference="subsec:Sproductrational"}, it suffices to show that its variant form (move $\delta_\alpha^{\operatorname{d}}$ to the right of the coefficients) $$\label{eq:T-in-bbS} \frac{\theta(\hbar)\theta(z_\alpha-\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar-\lambda_{\alpha^\vee})}\delta_\alpha^{{\operatorname{d}}}+\frac{\theta(\hbar-z_\alpha)\theta(\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar-\lambda_{\alpha^\vee})}\delta_\alpha\delta_\alpha^{{\operatorname{d}}}$$ is a rational section of ${\mathbb S}$. In other words, we show that the coefficient of $\delta_\alpha^{\operatorname{d}}$ is a rational section of ${\mathbb S}_{e,s_\alpha}$ and that of $\delta_\alpha\delta_\alpha^{\operatorname{d}}$ is a rational section of ${\mathbb S}_{\alpha,\alpha}$. Recall from Lemma [Lemma 18](#lem:basicS){reference-type="ref" reference="lem:basicS"}.[\[item:basicS1\]](#item:basicS1){reference-type="eqref" reference="item:basicS1"} that $$\begin{aligned} {\mathbb S}_{e,s_\alpha}|_z&={\mathcal O}(z_\alpha\alpha^\vee-\hbar\alpha^\vee), & {\mathbb S}_{e,s_\alpha}|_\lambda&={\mathcal O}(\lambda_{\alpha^\vee}\alpha).\\ {\mathbb S}_{s_\alpha,s_\alpha}|_z&={\mathcal O}(-\hbar\alpha^\vee), &{\mathbb S}_{s_\alpha,s_\alpha}|_\lambda&={\mathcal O}(\hbar\alpha). \end{aligned}$$ Let us consider the fraction $\frac{\theta(\hbar)\theta(z_\alpha-\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar-\lambda_{\alpha^\vee})}$ in [\[eq:T-in-bbS\]](#eq:T-in-bbS){reference-type="eqref" reference="eq:T-in-bbS"}. 
When restricting ${\mathbb S}_{e,s_\alpha}$ to $z$, one obtains a line bundle over $\mathfrak{A}^\vee$ and $\lambda$ becomes the only variable. Viewed as a function of $\lambda_{\alpha^\vee}$, the fraction has a zero at $z_\alpha$ and a pole at $\hbar$, so it is a rational section of ${\mathcal O}(z_\alpha-\hbar)$ on $E$. Pulling back to $\mathfrak{A}^\vee$ via the map $\chi_{\alpha^\vee}$, we see that the fraction is a rational section of ${\mathbb S}_{e,s_\alpha}|_z={\mathcal O}((z_\alpha-\hbar)\alpha^\vee)$. Now restricting to $\lambda$, $z\in \mathfrak{A}$ is the only variable, and the fraction has a zero at $\lambda_{\alpha^\vee}$ and a pole at $0$ on $E$. Pulling back to $\mathfrak{A}$, we see that it is a rational section of ${\mathbb S}_{e,s_\alpha}|_\lambda$. By the See-Saw Lemma, the fraction is a rational section of ${\mathbb S}_{e,s_\alpha}$. For simplicity, we call the above analysis the 'zeros-poles' property. For the second fraction $\frac{\theta(\hbar-z_\alpha)\theta(\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar-\lambda_{\alpha^\vee})}$, the 'zeros-poles' for $\lambda$ is $-\hbar\alpha^\vee$, and the 'zeros-poles' for $z$ is $\hbar\alpha$, so it is a rational section of ${\mathbb S}_{s_\alpha,s_\alpha}$. ◻ ## DL operators of the Langlands dual We define the dynamical DL operators for the Langlands dual system (recall from §[6](#sec:Langlands){reference-type="ref" reference="sec:Langlands"} that $\hbar'=-\hbar$): $$\begin{aligned} T^{z^\vee,\lambda^\vee}_{\alpha^\vee}&=\delta_\alpha^{{\operatorname{d}}\vee}\frac{\theta(\hbar)\theta(z^\vee_{\alpha^\vee}+\lambda^\vee_{\alpha})}{\theta(z^\vee_{\alpha^\vee})\theta(\hbar-\lambda^\vee_{\alpha})}+\delta_\alpha^{{\operatorname{d}}\vee}\frac{\theta(\hbar+z^\vee_{\alpha^\vee})\theta(-\lambda^\vee_{\alpha})}{\theta(z^\vee_{\alpha^\vee})\theta(\hbar-\lambda^\vee_{\alpha})}\delta_\alpha^{\vee},
\\ T^{z^\vee,\lambda^\vee,{\operatorname{d}}}_{\alpha^\vee}&=\delta_\alpha^\vee\frac{\theta(\hbar)\theta(\lambda^\vee_{\alpha}+z^\vee_{\alpha^\vee})}{\theta(\lambda^\vee_{\alpha})\theta(\hbar+z^\vee_{\alpha^\vee})}+\delta_\alpha^\vee\frac{\theta(\hbar-\lambda^\vee_{\alpha})\theta(-z^\vee_{\alpha^\vee})}{\theta(\lambda^\vee_{\alpha})\theta(\hbar+z^\vee_{\alpha^\vee})} \delta_\alpha^{{\operatorname{d}}\vee}. \end{aligned}$$ By Theorem [Theorem 35](#thm:Trational){reference-type="ref" reference="thm:Trational"}, they are rational sections of $({\mathbb S}^\vee)''$ and $({\mathbb S}^\vee)'$, respectively. The following lemma explains how we obtain the operators $T_\alpha^{z,\lambda,{\operatorname{d}}}$ from $T^{z^\vee,\lambda^\vee}_{\alpha^\vee}$, that is, $T^{z,\lambda,{\operatorname{d}}}_\alpha$ is really the corresponding '$T^{z,\lambda}_\alpha$' in the Langlands dual system. **Lemma 36**. *We have $$i^* T_{\alpha^\vee}^{z^\vee, \lambda^\vee, {\operatorname{d}}}=T_\alpha^{z,\lambda},\quad i^*T_{\alpha^\vee}^{z^\vee, \lambda^\vee}=T_\alpha^{z,\lambda,{\operatorname{d}}}.$$* *Proof.* They follow from Proposition [Proposition 31](#prop:i*S){reference-type="ref" reference="prop:i*S"} and a direct computation using the definition of $i^*$. ◻ ## Braid relations {#subsec:braid} Let $\sharp$ stand for one of $({\pm z,\pm\lambda}), ({\pm z,\pm\lambda},{\operatorname{d}})$. It is proved in [@RW20] that $(T^\sharp_\alpha)^2=1$ for any $\alpha\in\Sigma$ and that the braid relations are satisfied (see also [@ZZ22 Proposition 4.11] for the present form). Given a reduced word for $v\in W$, by using the twisted product in § [3.7](#subsec:Sproductrational){reference-type="ref" reference="subsec:Sproductrational"}, one defines $T^{\sharp}_v$ as the product of the corresponding $T^\sharp_\alpha$.
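As a numerical sanity check (not part of the argument), the rank-one involution property can be tested directly. Writing $T^{z,-\lambda}_\alpha=\delta_\alpha^{\operatorname{d}}A+\delta_\alpha^{\operatorname{d}}B\,\delta_\alpha$ with $A,B$ the two coefficients in [\[eq:Tz-la\]](#eq:Tz-la){reference-type="eqref" reference="eq:Tz-la"}, and moving all $\delta$'s to the right, $(T^{z,-\lambda}_\alpha)^2=1$ is equivalent to the two scalar identities ${}^{s_\alpha^{\operatorname{d}}}A\cdot A+{}^{s_\alpha^{\operatorname{d}}}B\cdot{}^{s_\alpha}B=1$ and ${}^{s_\alpha^{\operatorname{d}}}A\cdot B+{}^{s_\alpha^{\operatorname{d}}}B\cdot{}^{s_\alpha}A=0$; since $A,B$ depend only on $z_\alpha$ and $\lambda_{\alpha^\vee}$, the reflections act by $z_\alpha\mapsto -z_\alpha$ and $\lambda_{\alpha^\vee}\mapsto-\lambda_{\alpha^\vee}$. The sketch below checks these with a truncated odd Jacobi theta product; the modular parameter, truncation order, and sample points are arbitrary choices, not data from the paper.

```python
import cmath
from math import pi

# Truncated odd Jacobi theta function (up to a constant factor independent
# of x; such constants cancel in the degree-zero ratios checked below).
TAU = 1.2j                               # modular parameter (arbitrary)
Q = cmath.exp(2j * pi * TAU)

def theta(x):
    val = cmath.sin(pi * x)
    for n in range(1, 40):
        val *= (1 - Q**n * cmath.exp(2j * pi * x)) \
             * (1 - Q**n * cmath.exp(-2j * pi * x))
    return val

hbar = 0.17 + 0.05j                      # generic sample points (arbitrary)
z, lam = 0.23 + 0.11j, 0.41 - 0.07j

# A and B are the two coefficients of T^{z,-lambda}_alpha in eq. (Tz-la):
# the free term and the coefficient of delta_alpha, respectively.
def A(z, lam):
    return theta(hbar) * theta(z - lam) / (theta(z) * theta(hbar - lam))

def B(z, lam):
    return theta(hbar - z) * theta(lam) / (theta(z) * theta(hbar - lam))

# Scalar identities equivalent to (T^{z,-lambda}_alpha)^2 = 1; the simple
# reflections act here by z -> -z and lam -> -lam.
c1 = A(z, -lam) * A(z, lam) + B(z, -lam) * B(-z, lam)   # should be 1
c2 = A(z, -lam) * B(z, lam) + B(z, -lam) * A(-z, lam)   # should be 0
print(abs(c1 - 1), abs(c2))
```

Both residues are of the order of machine precision; the second identity holds term-by-term from the oddness of $\theta$, while the first is the classical three-term theta identity.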
Expanding using the direct sum ${\mathbb S}''=\bigoplus_{v,w}{\mathbb S}''_{w,v}$, we define the following coefficients via $$\begin{aligned} \label{eq:Tlinearcomb} T^{\pm z,\pm\lambda}_{v}&=\sum_{w\le v}\delta_v^{\operatorname{d}}a^{\pm z,\pm \lambda}_{v,w}\delta_w,& T^{\pm z,\pm\lambda,{\operatorname{d}}}_{v}&=\sum_{w\le v}\delta_v a^{\pm z,\pm \lambda, {\operatorname{d}}}_{v,w}\delta_w^{\operatorname{d}}.\end{aligned}$$ Note that $T^{\pm z, \pm \lambda}_v$ have constant degree $v^{\operatorname{d}}\in W^{\operatorname{d}}$ and $T^{\pm z, \pm \lambda, {\operatorname{d}}}_v$ have constant degree $v\in W$, respectively. A standard calculation using the twisted product gives $$\begin{aligned} \label{eq:aw0} a^{z,\lambda}_{w_0,w_0}&=\frac{{\mathbf g}}{{}^{w_0^{\operatorname{d}}}{\mathbf h}},& a^{z,-\lambda}_{w_0,w_0}&=\frac{{\mathbf g}}{{\mathbf h}},& a^{-z,\lambda}_{w_0,w_0}&=\frac{{}^{w_0}{\mathbf g}}{{}^{w_0^{\operatorname{d}}}{\mathbf h}}.\\ \label{eq:aw0dyn} a^{z,\lambda, {\operatorname{d}}}_{w_0,w_0}&=\frac{{}^{w_0^{\operatorname{d}}}{\mathbf h}}{{\mathbf g}},& a^{z,-\lambda, {\operatorname{d}}}_{w_0,w_0}&=\frac{{\mathbf h}}{{\mathbf g}}, &a^{-z,\lambda, {\operatorname{d}}}_{w_0,w_0}&=\frac{{}^{w_0^{\operatorname{d}}}{\mathbf h}}{{}^{w_0}{\mathbf g}}.\end{aligned}$$ We compute $a^{z,\lambda}_{w_0,w_0}$ as an example. Let $w_0=s_{i_1}\cdots s_{i_k}$.
We have $$\begin{aligned} \delta_{w_0}^{\operatorname{d}}a^{z,\lambda}_{w_0,w_0}\delta_{w_0}=\delta_{\alpha_{i_1}}^{\operatorname{d}}\frac{\theta(\hbar-z_{\alpha_{i_1}})\theta(-\lambda_{\alpha^\vee_{i_1}})}{\theta(z_{\alpha_{i_1}})\theta(\hbar+\lambda_{\alpha^\vee_{i_1}})}\delta_{\alpha_{i_1}}\cdots \delta_{\alpha_{i_k}}^{\operatorname{d}}\frac{\theta(\hbar-z_{\alpha_{i_k}})\theta(-\lambda_{\alpha^\vee_{i_k}})}{\theta(z_{\alpha_{i_k}})\theta(\hbar+\lambda_{\alpha^\vee_{i_k}})}\delta_{\alpha_{i_k}}.\end{aligned}$$ By moving all $\delta_\alpha^{\operatorname{d}}$ to the left and $\delta_\alpha$ to the right, we see that the coefficient in the middle is $$\prod_{\alpha>0} \frac{\theta(\hbar-z_{\alpha})\theta(-\lambda_{\alpha^\vee})}{\theta(z_{\alpha})\theta(\hbar+\lambda_{\alpha^\vee})}=\frac{{\mathbf g}}{{}^{w_0^{\operatorname{d}}}{\mathbf h}}.$$ Furthermore, by using the anti-involutions in §[5.3](#subsec:antiinvolution){reference-type="ref" reference="subsec:antiinvolution"}, we see that $$\begin{aligned} \iota_\lambda(T_v^{z,\lambda})&=T_{v^{-1}}^{z,-\lambda}, &\iota_\lambda(T_v^{z,\lambda, {\operatorname{d}}})&=T_{v^{-1}}^{z,-\lambda, {\operatorname{d}}},\\ \iota_z(T_v^{z,\lambda})&=T_{v^{-1}}^{-z,\lambda}, &\iota_z(T_v^{z,\lambda, {\operatorname{d}}})&=T_{v^{-1}}^{-z,\lambda, {\operatorname{d}}}.\end{aligned}$$ **Lemma 37**. 
*We have the following identity $$\begin{aligned} {}^w{\mathbf g}\cdot a^{z,\pm\lambda}_{v,w}&={\mathbf g}\cdot{}^{(v^{-1})^{\operatorname{d}}w}a^{z, \mp\lambda}_{v^{-1}, w^{-1}}.\end{aligned}$$* *Proof.* We have $$\begin{aligned} \sum_w\delta_v^{\operatorname{d}}a^{z,\pm \lambda}_{v,w}\delta_w&=T^{z,\pm \lambda}_v=\iota_{\lambda}(T^{z,\mp \lambda}_{v^{-1}})=\iota_{\lambda}(\sum_w\delta_{v^{-1}}^{\operatorname{d}}a^{z,\mp \lambda}_{v^{-1}, w^{-1}}\delta_{w^{-1}})\\ &\overset{\eqref{eq:iota_la}}=\sum_w\delta_v^{\operatorname{d}}\frac{{\mathbf g}}{{}^w{\mathbf g}}\cdot {}^{(v^{-1})^{\operatorname{d}}w}a^{z,\mp \lambda}_{v^{-1}, w^{-1}}\delta_w.\end{aligned}$$ Comparing the coefficients of $\delta_v^{\operatorname{d}}?\delta_w$, we obtain the conclusion. ◻ **Remark 38**. We can also verify $$\begin{aligned} {\mathbf g}\cdot a^{z,\pm \lambda, {\operatorname{d}}}_{v,w}&={}^{v^{-1}}{\mathbf g}\cdot {}^{w^{\operatorname{d}}v^{-1}}a^{z,\mp\lambda, {\operatorname{d}}}_{v^{-1}, w^{-1}}, &{}^{(v^{-1})^{\operatorname{d}}}{\mathbf h}\cdot a^{\pm z,\lambda}_{v,w}&={\mathbf h}\cdot {}^{(v^{-1})^{\operatorname{d}}w}a^{\mp z, \lambda}_{v^{-1}, w^{-1}}, \\ {\mathbf h}\cdot a_{v,w}^{\pm z, \lambda, {\operatorname{d}}}&={}^{w^{\operatorname{d}}}{\mathbf h}\cdot {}^{v^{-1}w^{\operatorname{d}}}a_{v^{-1}, w^{-1}}^{\mp z, \lambda, {\operatorname{d}}}.&&\end{aligned}$$ Following from Proposition [Proposition 28](#prop:adj){reference-type="ref" reference="prop:adj"}, we have **Corollary 39**.
*For any rational sections $f,g, f^{\operatorname{d}}, g^{\operatorname{d}}$ of ${\mathbb M}$, ${\mathbb M}(-\lambda)$, ${\mathbb M}^{\operatorname{d}}$ and ${\mathbb M}(-z)^{\operatorname{d}}$, respectively, we have $$\begin{aligned} \label{eq:adjT-la} \langle T^{z,\lambda}_{v^{-1}}\bullet f,g\rangle_{ \lambda}&={}^{(v^{-1})^{\operatorname{d}}}\langle f, T^{z,-\lambda}_v\bullet g\rangle_{ \lambda}, \\ \langle T^{z,\lambda, {\operatorname{d}}}_{v^{-1}}\bullet^{\operatorname{d}}f^{\operatorname{d}},g^{\operatorname{d}}\rangle_{ z}&={}^{(v^{-1})^{\operatorname{d}}}\langle f^{\operatorname{d}}, T^{-z, \lambda,{\operatorname{d}}}_v\bullet^{\operatorname{d}}g^{\operatorname{d}}\rangle_{ z}.\end{aligned}$$* # The elliptic classes {#sec:ell_class} In this section, we define the elliptic classes for $T^*G/B$ and its Langlands dual as rational sections of the periodic modules. We also prove the main result. Our definition of elliptic classes relies on a pre-fixed rational section $\frak{c}$ of ${\mathbb L}$. ## Transition matrices {#subsec:trans} In the study of the elliptic classes defined below, we need to consider the inverse of the matrix of rational sections in [\[eq:Tlinearcomb\]](#eq:Tlinearcomb){reference-type="eqref" reference="eq:Tlinearcomb"}. Recall from § [3.8](#subsec:sum1){reference-type="ref" reference="subsec:sum1"} that for $w,v\in W$ $${\mathbb S}_{w,v}''=v^{{\operatorname{d}}*}{\mathbb S}_{w,v}\overset{\sim}{\longrightarrow}\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_{w^{-1}},{\mathbb M}^{\operatorname{d}}_v),$$ and the rational section of ${\mathbb S}''$ $$T_v^{z,\lambda}=\sum_{w\le v}\delta_v^{\operatorname{d}}a^{z,\lambda}_{v,w}\delta_w.$$ Here $a^{z,\lambda}_{v,w}$ is a rational section of ${\mathbb S}_{w,v}''$, and hence defines a rational map of line bundles $${\mathbb M}_{w^{-1}}\dashrightarrow{\mathbb M}^{\operatorname{d}}_v.$$ **Definition 40**.
Define the rational map $$\begin{aligned} {\mathbb T}^{z, \lambda}=\sum_vT_v^{z,\lambda}=\sum_{v,w} \frak{p}^{\operatorname{d}}_v a^{z,\lambda}_{v,w}\frak{i}_{w^{-1}}: {\mathbb M}=\bigoplus_{w}{\mathbb M}_{w^{-1}}\dashrightarrow \bigoplus_{v}{\mathbb M}_{v}^{\operatorname{d}}={\mathbb M}^{\operatorname{d}}.\end{aligned}$$ **Lemma 41**. *This rational map ${\mathbb T}^{z,\lambda}$ is invertible.* *Proof.* The matrix $(a^{z,\lambda}_{v,w})_{w,v\in W}$ is lower triangular with nonzero (hence invertible) diagonal entries. ◻ Therefore, there exists for each $w,v\in W$ a $b_{w,v}^{z,\lambda}$, a rational section of $$\label{eq:b}\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_v^{\operatorname{d}},{\mathbb M}_{w^{-1}})\cong v^{{\operatorname{d}}*}{\mathbb S}_{w,v}^{-1}=(w^{-1})^*{\mathbb S}_{w^{-1},v^{-1}}={\mathbb S}'_{w^{-1}, v^{-1}},$$ so that $$\sum_w\frak{p}^{\operatorname{d}}_v a^{z,\lambda}_{v,w}\frak{i}_{w^{-1}}\circ\frak{p}_{w^{-1}}b^{z,\lambda}_{w,u}\frak{i}^{\operatorname{d}}_u=\delta_{v,u}$$ as a rational section of $$\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_u^{\operatorname{d}},{\mathbb M}_{w^{-1}})\otimes \mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_{w^{-1}},{\mathbb M}_v^{\operatorname{d}})\overset{\sim}\longrightarrow \mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_u^{\operatorname{d}},{\mathbb M}_v^{\operatorname{d}}).$$ Similarly, we have for each $v,w\in W$ $${\mathbb S}_{v,w}'=v^*{\mathbb S}_{v,w}=\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_{w^{-1}}^{\operatorname{d}}, {\mathbb M}_v),$$ and hence the coefficients of $$T_v^{z,\lambda,{\operatorname{d}}}=\sum_{w}\delta_v a^{z,\lambda,{\operatorname{d}}}_{v,w}\delta_w^{\operatorname{d}}$$ define $$\begin{aligned} {\mathbb T}^{ z, \lambda, {\operatorname{d}}}=\sum_vT_v^{z,\lambda,{\operatorname{d}}}&=\sum_{v,u}\frak{p}_v a^{z,\lambda, {\operatorname{d}}}_{v,u}\frak{i}_{u^{-1}}^{\operatorname{d}}: {\mathbb M}^{\operatorname{d}}=\oplus_u{\mathbb M}_{u^{-1}}^{\operatorname{d}}\dashrightarrow \oplus_v {\mathbb M}_v=
{\mathbb M}.\end{aligned}$$ There is also the inverse matrix $(b^{z,\lambda,{\operatorname{d}}}_{w,u})_{w,u\in W}$ with $b^{z,\lambda,{\operatorname{d}}}_{w,u}$ a rational section of $$u^* {\mathbb S}^{-1}_{u,w}\overset{\text{Lem. }\ref{lem:tensor}}=(w^{-1})^{{\operatorname{d}}*}{\mathbb S}_{u^{-1}, w^{-1}}=\mathop{\mathrm{{\mathscr{H}om}}}({\mathbb M}_u,{\mathbb M}^{\operatorname{d}}_{w^{-1}}).$$ **Remark 42**. One could repeat the construction, and define the matrices $(b_{w,v}^{\pm z,\pm \lambda,{\operatorname{d}}})_{w,v}$ as inverses of $(a_{w,v}^{\pm z,\pm \lambda, {\operatorname{d}}})$ (or without the superscript ${\operatorname{d}}$). Moreover, one could also define variant forms of ${\mathbb T}^{z,\lambda}$. See § [9.2](#subsec:invmat){reference-type="ref" reference="subsec:invmat"} below for more details. ## Definition of the elliptic classes We fix a rational section $\frak{c}$ of ${\mathbb L}$. **Lemma 43**. *The four functions $$\frac{{\mathbf g}}{\frak{c}}, \quad \frac{1}{\frak{c}{\mathbf h}}, \quad \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}, \quad \frac{{}^{w_0}{\mathbf g}}{{\mathbf h}\cdot {}^{w_0^{\operatorname{d}}}{\mathbf h}\cdot {}^{w_0}\frak{c}},$$ are rational sections of the following line bundles: $${\mathbb M}(-\lambda), \quad {\mathbb M}(-z), \quad {\mathbb M}(-\lambda)_{w_0}, \quad {\mathbb M}(-z)^{\operatorname{d}}_{w_0}.$$* *Proof.* Lemma [Lemma 22](#lem:D2){reference-type="ref" reference="lem:D2"} gives ${\mathbb L}(-\lambda)\cong {\mathbb L}^{-1}\otimes {\mathcal G}$ and ${\mathbb L}(-z)\cong{\mathbb L}^{-1}\otimes {\mathcal H}^{-1}$, which have the rational sections $\frac{{\mathbf g}}{\frak{c}}$ and $\frac{1}{\frak{c}{\mathbf h}}$, respectively.
From Lemma [Lemma 18](#lem:basicS){reference-type="ref" reference="lem:basicS"}, we know $$\label{eq:Sw0} {\mathbb S}_{w_0, w_0}={\mathcal G}\otimes {\mathcal H}^{-1}\cong w_0^*w_0^{{\operatorname{d}}*}{\mathbb L}^{-1}\otimes {\mathbb L},$$ which gives $${\mathbb M}(-\lambda)_{w_0}=w_0^*{\mathbb L}(-\lambda)=w_0^*{\mathbb L}^{-1}\otimes w_0^*{\mathcal G}\overset{\eqref{eq:Sw0}}=w_0^{{\operatorname{d}}*}{\mathbb L}^{-1}\otimes {\mathcal H},$$ so $\frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}$ is a rational section of it. Lastly, $$\begin{aligned} {\mathbb M}(-z)_{w_0}^{\operatorname{d}}&=w_0^{{\operatorname{d}}*}{\mathbb L}(-z)&\\ &=w_0^{{\operatorname{d}}*}{\mathbb L}^{-1}\otimes w_0^{{\operatorname{d}}*}{\mathcal H}^{-1}\\ &=w_0^*{\mathbb L}^{-1}\otimes w_0^{{\operatorname{d}}*}{\mathcal H}^{-1}\otimes {\mathcal H}^{-1}\otimes w_0^*{\mathcal G}\otimes w_0^*{\mathcal G}^{-1}\otimes {\mathcal H}\otimes w_0^{{\operatorname{d}}*}{\mathbb L}^{-1}\otimes w_0^*{\mathbb L}\\ &\overset{\eqref{eq:Sw0}}=w_0^*{\mathbb L}^{-1}\otimes w_0^{{\operatorname{d}}*}{\mathcal H}^{-1}\otimes {\mathcal H}^{-1}\otimes w_0^*{\mathcal G}.\end{aligned}$$ Therefore, $\frac{{}^{w_0}{\mathbf g}}{{\mathbf h}\cdot {}^{w_0^{\operatorname{d}}}{\mathbf h}\cdot {}^{w_0}\frak{c}}$ is a rational section of ${\mathbb M}(-z)^{\operatorname{d}}_{w_0}$. ◻ **Definition 44**. We define the elliptic classes as rational sections of ${\mathbb M}$ and ${\mathbb M}^{\operatorname{d}}$, as follows: $$\begin{aligned} {\mathbf E}_v^{z,\lambda}&=T^{z,\lambda}_{v^{-1}}\bullet \frak{c}f_e,& {\mathbf E}_v^{z,\lambda,{\operatorname{d}}}&=T^{z,\lambda,{\operatorname{d}}}_{v^{-1}}\bullet^{\operatorname{d}}\frak{c}f_e^{\operatorname{d}}.\end{aligned}$$ Recall that under the Langlands duality $i$ (Lemma [Lemma 36](#lem:i*T){reference-type="ref" reference="lem:i*T"}) the classes ${\mathbf E}_v^{z,\lambda,{\operatorname{d}}}$ are identified with the elliptic classes of the Langlands dual system. 
Note also that both ${\mathbf E}_v^{z,\lambda}$ and ${\mathbf E}_v^{z,\lambda,{\operatorname{d}}}$ are supported on $w\le v^{-1}$. We also define the following classes. $$\begin{aligned} \mathop{\mathrm{\mathcal{E}}}_{v}^{z,-\lambda}&=T^{z,-\lambda}_{v^{-1}w_0}\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}f_{w_0},&\mathop{\mathrm{\mathcal{E}}}_v^{-z,\lambda,{\operatorname{d}}}&=T^{-z,\lambda,{\operatorname{d}}}_{v^{-1}w_0}\bullet^{\operatorname{d}}\frac{{}^{w_0}{\mathbf g}}{{\mathbf h}\cdot {}^{w_0^{\operatorname{d}}}{\mathbf h}\cdot {}^{w_0}\frak{c}}f_{w_0}^{\operatorname{d}}.\end{aligned}$$ The superscript $(z,-\lambda)$ (resp. $(-z,\lambda,{\operatorname{d}})$) is to indicate that they are rational sections of ${\mathbb M}(-\lambda)$ (resp. ${\mathbb M}(-z)^{\operatorname{d}}$). We will show that, via the Poincaré pairing, they are dual to the corresponding elliptic classes; they are therefore referred to as the opposite elliptic classes. By using the definition, together with the expression of $T^?_v$ in [\[eq:Tlinearcomb\]](#eq:Tlinearcomb){reference-type="eqref" reference="eq:Tlinearcomb"}, one can write down ${\mathbf E}^?_v$ as linear combinations of $f_?$ or $f^{\operatorname{d}}_?$. We compute ${\mathbf E}^{z,\lambda}_v$ as an example (see also [\[eq:res1\]](#eq:res1){reference-type="eqref" reference="eq:res1"} and $\eqref{eq:res2}$ below): $$\begin{aligned} {\mathbf E}^{z,\lambda}_v&\overset{\eqref{eq:Tlinearcomb}}=\sum_{w}\delta_{v^{-1}}^{\operatorname{d}}a^{z,\lambda}_{v^{-1},w^{-1}}\delta_{w^{-1}}\bullet \frak{c}f_e\overset{\eqref{eq:bullet}}=\sum_w {}^{(v^{-1})^{\operatorname{d}}w}a^{z,\lambda}_{v^{-1}, w^{-1}}\cdot {}^{(v^{-1})^{\operatorname{d}}}\frak{c}f_{w^{-1}}\\ \label{eq:bfE-z-la}&\overset{\text{Lem.}~\ref{lem:a-iota}}=\sum_w\frac{{}^w{\mathbf g}}{{\mathbf g}}\cdot a^{z,-\lambda}_{v,w}\cdot {}^{(v^{-1})^{\operatorname{d}}}\frak{c}f_{w^{-1}}.
\end{aligned}$$ The rational sections in front of $f_{w^{-1}}$ are called restriction coefficients. **Theorem 45** (Poincaré Duality). *We have the following dualities $$\begin{aligned} \langle{\mathbf E}_v^{z, \lambda},\mathop{\mathrm{\mathcal{E}}}_u^{ z,- \lambda}\rangle_{\lambda}&=\delta_{v,u}, & \langle{\mathbf E}_v^{ z, \lambda,{\operatorname{d}}},\mathop{\mathrm{\mathcal{E}}}_u^{- z, \lambda,{\operatorname{d}}}\rangle_{z}^{\operatorname{d}}=\delta_{v,u}.\end{aligned}$$* *Proof.* Note that $T^{z,\lambda}_{v^{-1}}$ is homogeneous of degree $(v^{-1})^{\operatorname{d}}$, so we have $$\begin{aligned} \langle{\mathbf E}^{z,\lambda}_v, \mathop{\mathrm{\mathcal{E}}}^{z,-\lambda}_u\rangle_{\lambda}&\overset{\text{Def.}~\ref{def:ellclass}}=\langle T^{z,\lambda}_{v^{-1}}\bullet \frak{c}f_e, T^{z,-\lambda}_{u^{-1}w_0}\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}f_{w_0}\rangle_{\lambda}\overset{\eqref{eq:adjT-la}}={}^{(v^{-1})^{\operatorname{d}}}\langle\frak{c}f_e, T^{z,-\lambda}_vT^{z,-\lambda}_{u^{-1}w_0}\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}f_{w_0}\rangle_{\lambda}.\end{aligned}$$ By the definition of the pairing $\langle\_,\_\rangle_{\lambda}$ in [\[eq:D-la-pair\]](#eq:D-la-pair){reference-type="eqref" reference="eq:D-la-pair"}, it suffices to look at the term of degree $e$ in $T^{z,-\lambda}_vT^{z,-\lambda}_{u^{-1}w_0}\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}f_{w_0}$, and by the definition of the $\bullet$-action in [\[eq:bullet\]](#eq:bullet){reference-type="eqref" reference="eq:bullet"}, it suffices to look at the term of degree ${w_0}\in W$ (i.e., $\delta_{w_0}$) in $T^{z,-\lambda}_vT^{z,-\lambda}_{u^{-1}w_0}=T^{z,-\lambda}_{vu^{-1}w_0}$, which is equal to $0$ unless $u=v$.
If $u=v$, then by [\[eq:aw0\]](#eq:aw0){reference-type="eqref" reference="eq:aw0"}, the term involving $\delta_{w_0}^{\operatorname{d}}?\delta_{w_0}$ is $\delta_{w_0}^{\operatorname{d}}\frac{{\mathbf g}}{{\mathbf h}}\delta_{w_0}$, so in this case, we have $$\begin{aligned} \langle{\mathbf E}^{z,\lambda}_v,\mathop{\mathrm{\mathcal{E}}}^{z,-\lambda}_v\rangle_{\lambda}&={}^{(v^{-1})^{\operatorname{d}}}\langle\frak{c}f_e, \delta_{w_0}^{\operatorname{d}}\frac{{\mathbf g}}{{\mathbf h}}\delta_{w_0}\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}f_{w_0}\rangle_{\lambda}\overset{\eqref{eq:D-la-pair}}={}^{(v^{-1})^{\operatorname{d}}}(\frak{c}\cdot\frac{{\mathbf g}}{{}^{w_0^{\operatorname{d}}}{\mathbf h}}\cdot \frac{{}^{w_0^{\operatorname{d}}}{\mathbf h}}{\frak{c}}\cdot \frac{1}{{\mathbf g}})=1.\end{aligned}$$ The second duality is proved similarly. ◻ ## Inverse matrices and dual classes The following are some auxiliary classes. $$\begin{aligned} \label{eq:T*} (T_u^{z,-\lambda})^*&=\sum_{x\ge u}b^{z,-\lambda}_{x, u}\cdot \frac{{\mathbf g}}{ {}^{(u^{-1})^{\operatorname{d}}}\frak{c}} f_{x^{-1}},& (T_u^{-z,\lambda, {\operatorname{d}}})^*&=\sum_{y\ge u}b^{-z,\lambda, {\operatorname{d}}}_{y,u}\cdot \frac{1}{{\mathbf h}\cdot {}^{u^{-1}}\frak{c}}f^{\operatorname{d}}_{y^{-1}}.\end{aligned}$$ **Lemma 46**. *The classes are rational sections of the modules ${\mathbb M}(-\lambda)$ and ${\mathbb M}(-z)^{\operatorname{d}}$, respectively.* *Proof.* By definition (see [\[eq:b\]](#eq:b){reference-type="eqref" reference="eq:b"} and Remark [Remark 42](#rem:b){reference-type="ref" reference="rem:b"}), we know $b^{z,-\lambda}_{x,u}$ is a rational section of ${\mathbb S}(-\lambda)'_{x^{-1}, u^{-1}}$. So $b^{z,-\lambda}_{x,u}\cdot \frac{{\mathbf g}}{{}^{(u^{-1})^{\operatorname{d}}}\frak{c}}$ is a rational section of $$(x^{-1})^*{\mathbb S}(-\lambda)_{x^{-1}, u^{-1}}\otimes {\mathcal G}\otimes u^{{\operatorname{d}}*}{\mathbb L}^{-1}\overset{\text{Lem. 
}\ref{lem:D2}}\cong(x^{-1})^*{\mathbb S}(-\lambda)_{x^{-1}, u^{-1}}\otimes u^{{\operatorname{d}}*}{\mathbb L}(-\lambda)\cong (x^{-1})^*{\mathbb L}(-\lambda).$$ This shows that $(T^{z,-\lambda}_u)^*$ is a rational section of ${\mathbb M}(-\lambda)$. One can prove a similar conclusion for $(T^{-z,\lambda,{\operatorname{d}}}_u)^*$. ◻ **Lemma 47**. *We have the following dualities $$\begin{aligned} \langle{\mathbf E}_v^{z, \lambda}, (T^{z,- \lambda}_u)^*\rangle_{\lambda}&=\delta_{v,u}, & \langle{\mathbf E}_v^{z, \lambda, {\operatorname{d}}}, (T^{- z, \lambda, {\operatorname{d}}}_u)^*\rangle^{\operatorname{d}}_{z}&=\delta_{v,u}.\end{aligned}$$* *Proof.* We prove the first one as an example: $$\begin{aligned} \langle{\mathbf E}_v^{z,\lambda}, (T_u^{z,-\lambda})^*\rangle_{\lambda}&\overset{\eqref{eq:bfE-z-la}}= \langle\sum_w\frac{{}^w{\mathbf g}}{{\mathbf g}}\cdot a^{z,-\lambda}_{v,w} \cdot {}^{(v^{-1})^{\operatorname{d}}}\frak{c}f_{w^{-1}}, \sum_{x}b^{z,-\lambda}_{x,u}\cdot \frac{{\mathbf g}}{ {}^{(u^{-1})^{\operatorname{d}}}\frak{c}} f_{x^{-1}}\rangle_{\lambda}\\ &\overset{\eqref{eq:D-la-pair}}=\sum_w\frac{{}^w{\mathbf g}}{{\mathbf g}}\cdot a^{z,-\lambda}_{v,w}\cdot b^{z,-\lambda}_{w, u}\cdot {}^{(v^{-1})^{\operatorname{d}}}\frak{c}\cdot\frac{{\mathbf g}}{{}^{(u^{-1})^{\operatorname{d}}}\frak{c}} \cdot\frac{1}{{}^w{\mathbf g}}=\delta_{v,u}.\\\end{aligned}$$ ◻ Combining Lemma [Lemma 47](#lem:bfE-T*){reference-type="ref" reference="lem:bfE-T*"} and Theorem [Theorem 45](#thm:Poin){reference-type="ref" reference="thm:Poin"}, we get the following. **Corollary 48**. *We have $$\mathop{\mathrm{\mathcal{E}}}^{ z,- \lambda}_v=(T^{ z,- \lambda}_v)^*, \quad \mathop{\mathrm{\mathcal{E}}}^{ -z, \lambda,{\operatorname{d}}}_v=(T^{- z, \lambda,{\operatorname{d}}}_v)^*.$$* **Remark 49**.
Here is a short summary $$\xymatrix{ {\mathbf E}_v^{z,\lambda}\ar@{..>}[r]&{\mathbb M}\ar@{<->}[rr]^-{\langle\_,\_\rangle_{\lambda}}\ar@{-->}@/^1pc/[d]^-{{\mathbb T}}& &{\mathbb M}(-\lambda) &\ar@{..>}[l] \mathop{\mathrm{\mathcal{E}}}_v^{z,-\lambda}\ar@{=}[r]&(T^{z,-\lambda}_v)^*\\ {\mathbf E}_v^{z,\lambda,{\operatorname{d}}}\ar@{..>}[r]&{\mathbb M}^{\operatorname{d}}\ar@{-->}@/^1pc/[u]^-{{\mathbb T}^{\operatorname{d}}}\ar@{<->}[rr]^-{\langle\_,\_\rangle_z}& &{\mathbb M}(-z)^{\operatorname{d}}& \mathop{\mathrm{\mathcal{E}}}_v^{-z,\lambda,{\operatorname{d}}}\ar@{=}[r]\ar@{..>}[l]& (T^{-z,\lambda,{\operatorname{d}}}_v)^*.}$$ Here $\xymatrix{\ar@{..>}[r]&}$ indicates a rational section of a vector bundle. For a more complete list, see § [9](#sec:complete){reference-type="ref" reference="sec:complete"} below. ## The main theorem We now show that the elliptic DL operators with dynamical parameters intertwine the elliptic classes and the fixed-point bases of the Langlands dual system. The notation in the next theorem is introduced in §[8.1](#subsec:trans){reference-type="ref" reference="subsec:trans"}. **Theorem 50**. *The following rational maps of vector bundles on $\mathfrak{A}\times\mathfrak{A}^\vee$ are inverses to each other $$\begin{aligned} {\mathbb T}^{z,\lambda}&:{\mathbb M}\dashrightarrow{\mathbb M}^{\operatorname{d}}, & {\mathbb T}^{z,\lambda,{\operatorname{d}}}: {\mathbb M}^{\operatorname{d}}\dashrightarrow {\mathbb M}.\end{aligned}$$ Furthermore, under these isomorphisms, we have $$\begin{aligned} {\mathbb T}^{z,\lambda}(\mathop{\mathrm{\mathcal{E}}}^{z,\lambda}_u)&={}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_u^{\operatorname{d}},&{\mathbb T}^{z,\lambda,{\operatorname{d}}}(\mathop{\mathrm{\mathcal{E}}}_u^{z,\lambda,{\operatorname{d}}})&={}^{u^{-1}}\frak{c}f_u. \end{aligned}$$* **Remark 51**.
The fixed rational section $\frak{c}$ of ${\mathbb L}$ naturally induces the rational sections ${}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_u^{\operatorname{d}}$ and ${}^{u^{-1}}\frak{c}f_u$ of ${\mathbb M}^{\operatorname{d}}_u$ and ${\mathbb M}_u$, respectively. Hence, one can define the opposite elliptic classes as $${\mathcal E}_u^{z,\lambda}={\mathbb T}^{z,\lambda,{\operatorname{d}}}({}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_u^{\operatorname{d}}), \quad {\mathcal E}_u^{z,\lambda,{\operatorname{d}}}={\mathbb T}^{z,\lambda}( {}^{u^{-1}}\frak{c}f_u).$$ In other words, the opposite elliptic classes are really the fixed point basis from the Langlands dual system. *Proof of Theorem [Theorem 50](#thm:inverse_maps){reference-type="ref" reference="thm:inverse_maps"}.* The statement of the two maps being inverses to each other is reformulated and proved in Theorem [Theorem 52](#thm:invmat){reference-type="ref" reference="thm:invmat"} below. The statement ${\mathbb T}^{z,\lambda}(\mathop{\mathrm{\mathcal{E}}}^{z,\lambda}_u)={}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_u^{\operatorname{d}}$ follows from the definition together with Corollary [Corollary 48](#cor:opp=*){reference-type="ref" reference="cor:opp=*"}. Indeed, $$\begin{aligned} {\mathbb T}^{z,\lambda}(\mathop{\mathrm{\mathcal{E}}}^{z,\lambda}_{u})&\overset{\text{ Def.}~\ref{def:bbT}}=(\sum_v T_v^{z,\lambda})((T_u^{z,\lambda})^*)\overset{\text{ Cor.}~\ref{cor:opp=*}}=\sum_{v,w}\frak{p}_v^{\operatorname{d}}a^{z,\lambda}_{v,w}\frak{i}_{w^{-1}}(\sum_xb^{z,\lambda}_{x,u}\cdot {}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_{x^{-1}})\\ &=\sum_v\sum_wa^{z,\lambda}_{v,w}b^{z,\lambda}_{w,u}\cdot{}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_{v}=\sum_v\delta_{v,u}\cdot {}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_v={}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_u.\end{aligned}$$ The other equality is proved similarly. 
◻ ## Proof of the main theorem The following is a reformulation of (part of) Theorem [Theorem 50](#thm:inverse_maps){reference-type="ref" reference="thm:inverse_maps"}. **Theorem 52**. *The matrix $(a^{z,\lambda}_{v,w})_{w,v\in W}$ is the inverse of the matrix $(a^{z,\lambda,{\operatorname{d}}}_{v^{-1}, w^{-1}})_{w,v\in W}$.* *Proof.* The statement is equivalent to $a^{z,\lambda,{\operatorname{d}}}_{v^{-1}, w^{-1}}=b^{z,\lambda}_{v,w}$ for any $v,w\in W$. We prove this by induction along the Bruhat order of $v$. Keep in mind that $(a_{v,w}^{z,\lambda})_{w,v}$ is lower triangular. If $v=e$, then $T^{z,\lambda,{\operatorname{d}}}_e=1$. We also know that $a^{z,\lambda}_{e,e}=1$, which implies that $b^{z,\lambda}_{e,e}=1$. So the conclusion holds for $v=e$. In the inductive process, for simplicity we write $T_\alpha^{z,\lambda}=\delta_\alpha^{\operatorname{d}}p_\alpha\delta_\alpha+\delta_\alpha^{\operatorname{d}}q_\alpha$. Note that we have in the same notation $$T^{z,\lambda, {\operatorname{d}}}_{\alpha}=\delta_\alpha\frac{1}{p_\alpha}\delta_\alpha^{\operatorname{d}}-\delta_\alpha\frac{q_\alpha}{p_\alpha}.$$ Assume that $a^{z,\lambda, {\operatorname{d}}}_{v^{-1},w^{-1}}=b^{z,\lambda}_{v,w}$ for any $w$; we want to show that $b^{z,\lambda}_{vs_\alpha, w}=a^{z,\lambda,{\operatorname{d}}}_{s_\alpha v^{-1}, w^{-1}}$ for any $w$.
We have $$T^{z,\lambda}_\alpha\bullet (T^{z,\lambda}_{w})^*\overset{\text{Cor.}~\ref{cor:opp=*}}=T_\alpha^{z,\lambda}\bullet \mathop{\mathrm{\mathcal{E}}}_w^{z,\lambda}\overset{\text{ Def.}~\ref{def:ellclass}}=\mathop{\mathrm{\mathcal{E}}}_{ws_\alpha}^{z,\lambda}=(T^{z,\lambda}_{ws_\alpha})^*.$$ Plugging in the definition $(T_u^{z,\lambda})^*=\sum_{x\ge u}b_{x,u}^{z,\lambda}\cdot {}^{(u^{-1})^{\operatorname{d}}}\frak{c}f_{x^{-1}}$, we obtain $${}^{s_\alpha^{\operatorname{d}}}b^{z,\lambda}_{v,ws_\alpha}=b^{z,\lambda}_{vs_\alpha, w}\cdot {}^vp_\alpha+b^{z,\lambda}_{v,w}\cdot {}^v q_\alpha.$$ This gives $$\label{eq:brecur} b^{z,\lambda}_{vs_\alpha, w}={}^{s_\alpha^{\operatorname{d}}}b^{z,\lambda}_{v,ws_\alpha}\cdot \frac{1}{{}^v p_\alpha}-b^{z,\lambda}_{v,w}\cdot {}^{v}(\frac{q_\alpha}{ p_\alpha}).$$ We have for each $w,v\in W$ a canonical isomorphism $$({\mathbb S}''_{w^{-1}, v^{-1}})^{-1}= w^*{\mathbb L}\otimes (v^{-1})^{{\operatorname{d}}*}{\mathbb L}^{-1}= w^*{\mathbb S}_{w,v}={\mathbb S}_{w,v}',$$ and hence $$\overline{\{\cdot \}}:~{\mathbb S}'_{w,v}= ({\mathbb S}''_{w^{-1}, v^{-1}})^{-1}, \quad \delta_w a\delta_v^{\operatorname{d}}\mapsto\overline{\delta_w a\delta_v^{\operatorname{d}}}:= \delta_{v^{-1}}^{\operatorname{d}}a\delta_{w^{-1}}.$$ Notice that both ${\mathbb S}'$ and $({\mathbb S}'')^{-1}$ are algebra objects, and it is easy to see that $\overline{\{\cdot \}}$ is an anti-involution.
Indeed, we have $$\begin{aligned} \overline{\delta_{w_1}a_1\delta_{v_{1}}^{\operatorname{d}}\cdot \delta_{w_2}a_2\delta_{v_2}^{\operatorname{d}}}&=\overline{\delta_{w_1w_2}{}^{(w_2)^{-1}}a_1\cdot {}^{v_1^{\operatorname{d}}}a_2\delta_{v_1v_2}^{\operatorname{d}}}\\ &=\delta_{v_2^{-1}v_1^{-1}}^{\operatorname{d}}{}^{(w_2)^{-1}}a_1\cdot {}^{v_1^{\operatorname{d}}}a_2\delta_{w_2^{-1}w_1^{-1}}\\ &=\delta^{\operatorname{d}}_{v_2^{-1}}a_2\delta_{w_2^{-1}}\delta_{v_1^{-1}}^{\operatorname{d}}a_1\delta_{w_1^{-1}}\\ &=\overline{\delta_{w_2}a_2\delta_{v_2}^{{\operatorname{d}}}}\cdot \overline{\delta_{w_1}a_1\delta_{v_1}^{\operatorname{d}}}.\end{aligned}$$ In particular, we have $$\overline{T^{z,\lambda,{\operatorname{d}}}_{w'}\cdot T^{z,\lambda,{\operatorname{d}}}_{w}}=\overline {T^{z,\lambda,{\operatorname{d}}}_{w}}\cdot \overline{T^{z,\lambda,{\operatorname{d}}}_{w'}}.$$ It is easy to see that $$\overline{T^{z,\lambda,{\operatorname{d}}}_\alpha}=\delta^{\operatorname{d}}_\alpha\frac{1}{p_\alpha}\delta_\alpha-\frac{q_\alpha}{p_\alpha}\delta_\alpha.$$ We then have $$\begin{aligned} \sum_w\delta_{w}^{\operatorname{d}}a^{z,\lambda,{\operatorname{d}}}_{s_\alpha v^{-1}, w^{-1}}\delta_{vs_\alpha}= &\overline{\sum_{w}\delta_{s_\alpha v^{-1}}a_{s_\alpha v^{-1}, w^{-1}}^{z,\lambda,{\operatorname{d}}}\delta^{\operatorname{d}}_{w^{-1}}}\\ &=\overline{T_{s_\alpha v^{-1}}^{z,\lambda,{\operatorname{d}}}}=\overline{T_{\alpha}^{z,\lambda,{\operatorname{d}}}T_{v^{-1}}^{z,\lambda,{\operatorname{d}}}}=\overline{T^{z,\lambda,{\operatorname{d}}}_{v^{-1}}}\cdot \overline{T^{z,\lambda,{\operatorname{d}}}_\alpha}\\ &=\overline{\sum_w\delta_{v^{-1}} a^{z,\lambda,{\operatorname{d}}}_{v^{-1},w^{-1}}\delta_{w^{-1}}^{\operatorname{d}}}\cdot (\delta_\alpha^{\operatorname{d}}\frac{1}{p_\alpha}\delta_\alpha-\frac{q_\alpha}{p_\alpha}\delta_\alpha)\\ &=\sum_w\delta_{w}^{\operatorname{d}}a^{z,\lambda,{\operatorname{d}}}_{v^{-1},w^{-1}}\delta_{v}\cdot 
(\delta_\alpha^{\operatorname{d}}\frac{1}{p_\alpha}\delta_\alpha-\frac{q_\alpha}{p_\alpha}\delta_\alpha)\\ &=\sum_w\delta_{ws_\alpha}^{\operatorname{d}}{}^{s_\alpha^{\operatorname{d}}}a^{z,\lambda,{\operatorname{d}}}_{v^{-1},w^{-1}}\cdot\frac{1}{{}^vp_\alpha}\delta_{vs_\alpha}-\delta_w^{\operatorname{d}}a^{z,\lambda,{\operatorname{d}}}_{v^{-1},w^{-1}}\cdot {}^v(\frac{q_\alpha}{p_\alpha})\delta_{vs_\alpha}\\ &=\sum_w \delta_w^{\operatorname{d}}\left( {}^{s_\alpha^{\operatorname{d}}}a^{z,\lambda,{\operatorname{d}}}_{v^{-1},s_\alpha w^{-1}}\cdot \frac{1}{{}^vp_\alpha}-a^{z,\lambda,{\operatorname{d}}}_{v^{-1},w^{-1}}\cdot {}^v(\frac{q_\alpha}{p_\alpha})\right)\delta_{vs_\alpha} .\end{aligned}$$ Therefore, $$a^{z,\lambda,{\operatorname{d}}}_{s_\alpha v^{-1}, w^{-1}}={}^{s_\alpha^{\operatorname{d}}}a^{z,\lambda,{\operatorname{d}}}_{v^{-1},s_\alpha w^{-1}}\cdot \frac{1}{{}^vp_\alpha}-a^{z,\lambda,{\operatorname{d}}}_{v^{-1},w^{-1}}\cdot {}^v(\frac{q_\alpha}{p_\alpha}).$$ Comparing with the recursive formula for $b^{z,\lambda}_{v,w}$ in [\[eq:brecur\]](#eq:brecur){reference-type="eqref" reference="eq:brecur"}, we see that $b^{z,\lambda}_{v,w}=a^{z,\lambda,{\operatorname{d}}}_{v^{-1}, w^{-1}}$. ◻ *Proof of Theorem [Theorem 50](#thm:inverse_maps){reference-type="ref" reference="thm:inverse_maps"}.* We only prove ${\mathbb T}^{z,\lambda}\circ {\mathbb T}^{z,\lambda,{\operatorname{d}}}=\mathop{\mathrm{id}}$. 
We have $$\begin{aligned} {\mathbb T}^{z,\lambda}\circ {\mathbb T}^{z,\lambda,{\operatorname{d}}}&=\sum_uT_u^{z,\lambda}\circ \sum_vT^{z,\lambda,{\operatorname{d}}}_{v^{-1}}\\ &=\sum_u\sum_{w_1}\frak{p}_u^{\operatorname{d}}a^{z,\lambda}_{u,w_1}\frak{i}_{w_1^{-1}}\circ \sum_{v}\sum_{w_2}\frak{p}_{v^{-1}}a^{z,\lambda, {\operatorname{d}}}_{v^{-1},w_2^{-1}}\frak{i}_{w_2}^{\operatorname{d}}\\ &=\sum_{u,w_2}\frak{p}_u^{\operatorname{d}}\left(\sum_v a^{z,\lambda}_{u,v} a^{z,\lambda, {\operatorname{d}}}_{v^{-1},w_2^{-1}}\right)\frak{i}_{w_2}^{\operatorname{d}}\overset{\text{Thm.}~\ref{thm:invmat}}=\sum_{u,w_2}\frak{p}_u^{\operatorname{d}}\left(\sum_v a^{z,\lambda}_{u,v} b^{z,\lambda}_{v, w_2}\right)\frak{i}_{w_2}^{\operatorname{d}}\\ &=\sum_{u,w_2}\delta_{u,w_2}\frak{p}^{\operatorname{d}}_u\frak{i}^{\operatorname{d}}_{w_2}=\sum_u\frak{p}^{\operatorname{d}}_u\frak{i}^{\operatorname{d}}_u=\mathop{\mathrm{id}}_{{\mathbb M}(\lambda)^{\operatorname{d}}}. \end{aligned}$$ ◻ # A complete list of elliptic classes {#sec:complete} In this section, we collect all elliptic classes that one could consider in our setting. This is for the sake of completeness, and for the convenience of the readers. The proofs of the properties in this section are similar to the corresponding ones in the previous section. 
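The key step in the proof above is pure linear algebra: composing two maps of the form $\sum_{u,w}\frak{p}_u^{\operatorname{d}}a_{u,w}\frak{i}_{w^{-1}}$ multiplies their coefficient matrices, so mutually inverse coefficient matrices yield mutually inverse maps. The following toy sketch (an illustration only, not part of the paper: rational numbers stand in for the rational sections, and a total order on indices stands in for the Bruhat order) inverts a lower-triangular coefficient matrix column by column, in the spirit of the recursion [\[eq:brecur\]](#eq:brecur){reference-type="eqref" reference="eq:brecur"} that determines the $b$-coefficients, and then checks that the composition is the identity.

```python
from fractions import Fraction

# Toy model: the coefficient matrices (a_{u,w}) are lower triangular with
# respect to (a linearization of) the Bruhat order, with invertible
# diagonal.  Inverting such a matrix by forward substitution mirrors how
# the recursion for the b-coefficients determines (b_{v,w}) = (a_{u,v})^{-1}
# entry by entry.  Rational numbers stand in for rational sections.

def invert_lower_triangular(a):
    """Return b with sum_v a[u][v]*b[v][w] == delta_{u,w}."""
    n = len(a)
    b = [[Fraction(0)] * n for _ in range(n)]
    for w in range(n):
        for u in range(w, n):          # rows in increasing order
            if u == w:
                b[u][w] = 1 / a[u][u]
            else:
                s = sum(a[u][v] * b[v][w] for v in range(w, u))
                b[u][w] = -s / a[u][u]
    return b

a = [[Fraction(x) for x in row] for row in
     [[2, 0, 0], [3, 5, 0], [1, 4, 7]]]
b = invert_lower_triangular(a)

# Composition of the two transfer maps = product of coefficient matrices.
prod = [[sum(a[u][v] * b[v][w] for v in range(3)) for w in range(3)]
        for u in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

In the paper, of course, the entries $a^{z,\lambda}_{u,w}$ are rational sections rather than numbers, and the triangularity is with respect to the Bruhat order; the bookkeeping, however, is exactly this matrix computation.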
## Complete list of elliptic classes Here is a complete list of elliptic classes one can consider, including the ones we defined before: $$\begin{aligned} {\mathbf E}_v^{z,\lambda}&=T^{z,\lambda}_{v^{-1}}\bullet \frak{c}f_e,&\mathop{\mathrm{\mathcal{E}}}_v^{z,\lambda}&=T^{z,\lambda}_{v^{-1}w_0}\bullet \frac{{}^{w_0^{\operatorname{d}}}{\mathbf h}\cdot {}^{w_0^{\operatorname{d}}}\frak{c}}{{\mathbf g}}f_{w_0},\\ {\mathbf E}_v^{z,\lambda,{\operatorname{d}}}&=T^{z,\lambda,{\operatorname{d}}}_{v^{-1}}\bullet \frak{c}f_e^{\operatorname{d}},&\mathop{\mathrm{\mathcal{E}}}_v^{z,\lambda,{\operatorname{d}}}&=T^{z,\lambda,{\operatorname{d}}}_{v^{-1}w_0}\bullet \frac{{}^{w_0}\frak{c}\cdot {\mathbf g}}{{}^{w_0^{\operatorname{d}}}{\mathbf h}} f_{w_0}^{\operatorname{d}},\\ {\mathbf E}_v^{z,-\lambda}&=T^{z,-\lambda}_{v^{-1}}\bullet \frac{{\mathbf g}}{\frak{c}} f_e,&\mathop{\mathrm{\mathcal{E}}}_{v}^{z,-\lambda}&=T^{z,-\lambda}_{v^{-1}w_0}\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}}f_{w_0},\\ {\mathbf E}_v^{z,-\lambda,{\operatorname{d}}}&=T^{z,-\lambda, {\operatorname{d}}}_{v^{-1}}\bullet \frac{{\mathbf g}}{\frak{c}} f_e^{\operatorname{d}},&\mathop{\mathrm{\mathcal{E}}}_v^{z,-\lambda,{\operatorname{d}}}&=T^{z,-\lambda,{\operatorname{d}}}_{v^{-1}w_0}\bullet \frac{{\mathbf g}\cdot {}^{w_0}{\mathbf g}}{{\mathbf h}\cdot {}^{w_0}\frak{c}}f_{w_0}^{\operatorname{d}},\\ {\mathbf E}_v^{-z,\lambda}&=T^{-z,\lambda}_{v^{-1}}\bullet \frac{1}{{\mathbf h}\frak{c}} f_e,&\mathop{\mathrm{\mathcal{E}}}_v^{-z,\lambda}&=T^{-z,\lambda}_{v^{-1}w_0}\bullet \frac{1}{{}^{w_0}{\mathbf g}\cdot {}^{w_0^{\operatorname{d}}}\frak{c}}f_{w_0},\\ {\mathbf E}_v^{-z,\lambda,{\operatorname{d}}}&=T^{-z,\lambda,{\operatorname{d}}}_{v^{-1}}\bullet \frac{1}{{\mathbf h}\frak{c}} f_e^{\operatorname{d}},&\mathop{\mathrm{\mathcal{E}}}_v^{-z,\lambda,{\operatorname{d}}}&=T^{-z,\lambda,{\operatorname{d}}}_{v^{-1}w_0}\bullet \frac{{}^{w_0}{\mathbf g}}{{\mathbf h}\cdot {}^{w_0^{\operatorname{d}}}{\mathbf h}\cdot 
{}^{w_0}\frak{c}}f_{w_0}^{\operatorname{d}},\\ (T^{z,\lambda}_{u})^*&=\sum_{x\ge u}b^{z,\lambda}_{x,u}\cdot {{}^{(u^{-1})^{\operatorname{d}}}\frak{c}}f_{x^{-1}},&(T_u^{z,\lambda,{\operatorname{d}}})^*&=\sum_{y\ge u}b^{z,\lambda,{\operatorname{d}}}_{y,u}\cdot {}^{u^{-1}}\frak{c}f_{y^{-1}}^{\operatorname{d}},\\ (T_u^{z,-\lambda})^*&=\sum_{x\ge u}b^{z,-\lambda}_{x, u}\cdot \frac{{\mathbf g}}{ {}^{(u^{-1})^{\operatorname{d}}}\frak{c}} f_{x^{-1}},& (T^{z,-\lambda, {\operatorname{d}}}_u)^*&=\sum_{y\ge u}b^{z,-\lambda,{\operatorname{d}}}_{y,u}\cdot \frac{{}^{u^{-1}}{\mathbf g}}{{}^{u^{-1}}\frak{c}}f^{\operatorname{d}}_{y^{-1}},\\ (T_u^{-z,\lambda})^*&=\sum_{x\ge u}b^{-z,\lambda}_{x,u}\cdot \frac{1}{{}^{(u^{-1})^{\operatorname{d}}}{\mathbf h}\cdot {}^{(u^{-1})^{\operatorname{d}}}\frak{c}}f_{x^{-1}},& (T_u^{-z,\lambda, {\operatorname{d}}})^*&=\sum_{y\ge u}b^{-z,\lambda, {\operatorname{d}}}_{y,u}\cdot \frac{1}{{\mathbf h}\cdot {}^{u^{-1}}\frak{c}}f^{\operatorname{d}}_{y^{-1}}.\end{aligned}$$ Using the definitions of the periodic modules, one can verify that the ${\mathbf E}^?_v$ and $\mathop{\mathrm{\mathcal{E}}}_v^?$ are rational sections of the corresponding vector bundles. Similarly, one can use the definition of the $b$-coefficients to check that the $(T^?_u)^*$ are all rational sections of the corresponding vector bundles. Note also that in the definition of the classes $\mathop{\mathrm{\mathcal{E}}}_v^?$, we apply $T^?_{{v^{-1}w_0}}$ to certain rational sections of degree $w_0$ or $w_0^{\operatorname{d}}$. These sections are chosen so as to obtain Poincaré duality without any normalization factor.
## Rational maps on the other periodic modules {#subsec:invmat} For the operators $T^{z,-\lambda}_v, T^{-z,\lambda}_v, T_v^{z,-\lambda, {\operatorname{d}}}, T_v^{-z,\lambda,{\operatorname{d}}}$, we can repeat the construction in § [8.1](#subsec:trans){reference-type="ref" reference="subsec:trans"}, and define $$\begin{aligned} {\mathbb T}^{z,-\lambda}&=\sum_v\epsilon_vT_v^{z,-\lambda}, & {\mathbb T}^{-z,\lambda}&=\sum_v\epsilon_vT_v^{-z,\lambda}, \\ {\mathbb T}^{z,-\lambda, {\operatorname{d}}}&=\sum_{v}\epsilon_vT^{z,-\lambda, {\operatorname{d}}}_v, &{\mathbb T}^{-z,\lambda, {\operatorname{d}}}&=\sum_{v}\epsilon_vT^{-z,\lambda, {\operatorname{d}}}_v.\end{aligned}$$ Their domains and codomains are indicated in the diagram below. Note that we add the sign $\epsilon_v$ in these four identities. Indeed, similarly to Theorem [Theorem 50](#thm:inverse_maps){reference-type="ref" reference="thm:inverse_maps"}, we have $$\begin{aligned} {\mathbb T}^{z,-\lambda}(\mathop{\mathrm{\mathcal{E}}}_u^{z,-\lambda})&=\frac{\epsilon_u {\mathbf g}}{{}^{(u^{-1})^{\operatorname{d}}}\frak{c}}f_u^{\operatorname{d}}, & {\mathbb T}^{z,-\lambda,{\operatorname{d}}}(\mathop{\mathrm{\mathcal{E}}}_u^{z,-\lambda,{\operatorname{d}}})&=\frac{\epsilon_u \cdot {}^{u^{-1}}{\mathbf g}}{{}^{u^{-1}}\frak{c}}f_u,\\ {\mathbb T}^{-z,\lambda}(\mathop{\mathrm{\mathcal{E}}}_u^{-z,\lambda})&=\frac{\epsilon_u }{{}^{(u^{-1})^{\operatorname{d}}}{\mathbf h}\cdot {}^{(u^{-1})^{\operatorname{d}}}\frak{c}}f_u^{\operatorname{d}}, &{\mathbb T}^{-z,\lambda,{\operatorname{d}}}(\mathop{\mathrm{\mathcal{E}}}_u^{-z,\lambda,{\operatorname{d}}})&=\frac{\epsilon_u }{{\mathbf h}\cdot {}^{u^{-1}}\frak{c}}f_u.\end{aligned}$$ As in Remark [Remark 51](#rem:fixed){reference-type="ref" reference="rem:fixed"}, the rational sections in front of the $f$'s are naturally induced by the prefixed rational sections $\frak{c}, {\mathbf g}, {\mathbf h}$. Moreover, we have 1.
The matrix $(a^{z,-\lambda}_{v,w})_{v,w\in W}$ is the inverse of the matrix $(\epsilon_w\epsilon_v a^{z,-\lambda,{\operatorname{d}}}_{v^{-1}, w^{-1}})_{v,w\in W}$. Consequently, ${\mathbb T}^{z,-\lambda}$ is the inverse of ${\mathbb T}^{z,-\lambda,{\operatorname{d}}}$. 2. The matrix $(a^{-z,\lambda}_{v,w})_{v,w\in W}$ is the inverse of the matrix $(\epsilon_w\epsilon_v a^{-z,\lambda,{\operatorname{d}}}_{v^{-1}, w^{-1}})_{v,w\in W}$. Consequently, ${\mathbb T}^{-z,\lambda}$ is the inverse of ${\mathbb T}^{-z,\lambda,{\operatorname{d}}}$. Their relations with the periodic modules are summarized in the diagram below. ## A diagram {#subsec:diagram} Relations of the periodic modules and elliptic classes are summarized in the following triangular prism: $$\xymatrix{{\mathbf E}^{z,\lambda}_v,\mathop{\mathrm{\mathcal{E}}}_v^{z,\lambda}=(T_v^{z,\lambda})^*\ar@{.>}[r]&{\mathbb M}\ar@{-->}@/^1pc/[ddd]^{{\mathbb T}^{z,\lambda}}\ar@{<->}[rr]^-{\langle\_,\_\rangle_{\lambda}}\ar@{<->}[dr]^-{\langle\_,\_\rangle_{z}}&&{\mathbb M}(-\lambda)\ar@{-->}@/^1pc/[ddd]^{ {\mathbb T}^{z,-\lambda}}&{\mathbf E}_v^{z,-\lambda}, \ar@{.>}[l]\mathop{\mathrm{\mathcal{E}}}_v^{z,-\lambda}=(T^{z,-\lambda}_v)^*\\ &&{\mathbb M}(-z)\ar@{-->}@/^1pc/[ddd]^{{\mathbb T}^{-z,\lambda}} &&{\mathbf E}_v^{-z,\lambda}, \mathop{\mathrm{\mathcal{E}}}_v^{-z,\lambda}=(T^{-z,\lambda}_v)^*\ar@{.>}[ll]\\ &&&&&\\ {\mathbf E}_v^{z,\lambda,{\operatorname{d}}}, \mathop{\mathrm{\mathcal{E}}}_v^{z,\lambda,{\operatorname{d}}}=(T^{z,\lambda,{\operatorname{d}}}_v)^*\ar@{.>}[r]&{\mathbb M}^{\operatorname{d}}\ar@{-->}@/^1pc/[uuu]^-{{\mathbb T}^{z,\lambda,{\operatorname{d}}}} \ar@{<->}[rr]\ar@{<->}[dr]&&{\mathbb M}(-\lambda)^{\operatorname{d}}\ar@{-->}@/^1pc/[uuu]^{{\mathbb T}^{z,-\lambda,{\operatorname{d}}}}&{\mathbf E}_{v}^{z,-\lambda,{\operatorname{d}}}, \mathop{\mathrm{\mathcal{E}}}_v^{z,-\lambda,{\operatorname{d}}}=(T^{z,-\lambda,{\operatorname{d}}}_v)^*\ar@{.>}[l]\\ &&{\mathbb
M}(-z)^{\operatorname{d}}\ar@{-->}@/^1pc/[uuu]^{{\mathbb T}^{-z,\lambda,{\operatorname{d}}}}&&{\mathbf E}_v^{-z,\lambda,{\operatorname{d}}},\mathop{\mathrm{\mathcal{E}}}_v^{-z,\lambda,{\operatorname{d}}}=(T^{-z,\lambda,{\operatorname{d}}}_v)^*\ar@{.>}[ll]}$$ The top face denotes the periodic modules for the chosen root system and the bottom face denotes the modules for the Langlands dual system. The solid two-headed arrows on the top and bottom faces denote the four Poincaré pairings $\langle\_,\_\rangle_{\lambda}$, $\langle\_,\_\rangle_{z}$, $\langle\_,\_\rangle_{\lambda}^{\operatorname{d}}$, $\langle\_,\_\rangle_{z}^{\operatorname{d}}$. The vertical dashed arrows denote the rational maps ${\mathbb T}$ induced by the dynamical DL operators, which are mutually inverse to each other (Theorem [Theorem 50](#thm:inverse_maps){reference-type="ref" reference="thm:inverse_maps"}). The twelve elliptic classes are rational sections of the corresponding periodic modules, indicated by the dotted arrows. Moreover, the ${\mathbf E}$ classes are always dual to the ${\mathcal E}$ classes, and the ${\mathcal E}$ classes are always equal to the $T^*$ classes. # 3d mirror symmetry {#sec:mirror} The transition matrices in Theorem [Theorem 52](#thm:invmat){reference-type="ref" reference="thm:invmat"} (and also in § [9.2](#subsec:invmat){reference-type="ref" reference="subsec:invmat"}) are not only inverses of each other, but are also equal to each other after certain shifts by $w_0$ and normalization by the rational sections ${\mathbf g}, {\mathbf h}$. We establish such a statement in this section. ## In addition to the relation between the $a$-coefficients in Remark [Remark 38](#rem:acoeffrelation){reference-type="ref" reference="rem:acoeffrelation"}, Theorem [Theorem 52](#thm:invmat){reference-type="ref" reference="thm:invmat"} also gives the following result. **Theorem 53**.
*We have $$\begin{aligned} {\mathbf g}\cdot \epsilon_x\epsilon_ua^{z,-\lambda, {\operatorname{d}}}_{x, u}={}^{(uw_0)^{\operatorname{d}}}{\mathbf h}\cdot {}^{(uw_0)^{\operatorname{d}}x^{-1}}a^{z,-\lambda}_{uw_0, xw_0}, \quad {\mathbf g}\cdot a_{x, u}^{z,\lambda,{\operatorname{d}}}={}^{u^{\operatorname{d}}}{\mathbf h}\cdot {}^{(uw_0)^{\operatorname{d}}x^{-1}} a^{z,\lambda}_{uw_0, xw_0}. \end{aligned}$$* *Proof.* We have $$\begin{aligned} \mathop{\mathrm{\mathcal{E}}}_u^{z,-\lambda}&=T^{z, -\lambda}_{u^{-1}w_0}\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}} f_{w_0}\\ &=\sum_{w}\delta_{u^{-1}w_0}^{\operatorname{d}}a^{z,-\lambda}_{u^{-1}w_0, w}\delta_w\bullet \frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}} f_{w_0}\\ &=\sum_w {}^{(u^{-1}w_0)^{\operatorname{d}}}(\frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}} )\cdot {}^{(u^{-1}w_0)^{\operatorname{d}}(ww_0)^{-1}}a^{z,-\lambda}_{u^{-1}w_0, w}f_{ww_0}\\ &=\sum_x {}^{(u^{-1}w_0)^{\operatorname{d}}}(\frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}} )\cdot {}^{(u^{-1}w_0)^{\operatorname{d}}x}a^{z,-\lambda}_{u^{-1}w_0, x^{-1}w_0}f_{x^{-1}}. \end{aligned}$$ Recall from Corollary [Corollary 48](#cor:opp=*){reference-type="ref" reference="cor:opp=*"}, ${\mathcal E}_u^{z,-\lambda}=(T_u^{z,-\lambda})^*$. 
Comparing with the formula for $(T^{z,-\lambda}_u)^*$ in [\[eq:T\*\]](#eq:T*){reference-type="eqref" reference="eq:T*"}, we get $${}^{(u^{-1}w_0)^{\operatorname{d}}}(\frac{{\mathbf h}}{{}^{w_0^{\operatorname{d}}}\frak{c}} )\cdot {}^{(u^{-1}w_0)^{\operatorname{d}}x}a^{z,-\lambda}_{u^{-1}w_0, x^{-1}w_0}=b^{z,-\lambda}_{x,u}\cdot \frac{{\mathbf g}}{{}^{(u^{-1})^{\operatorname{d}}}\frak{c}},$$ which gives $${\mathbf g}\cdot b^{z,-\lambda}_{x,u}={}^{(u^{-1}w_0)^{\operatorname{d}}}{\mathbf h}\cdot {}^{(u^{-1}w_0)^{\operatorname{d}}x}a^{z,-\lambda}_{u^{-1}w_0, x^{-1}w_0}.$$ By § [9.2](#subsec:invmat){reference-type="ref" reference="subsec:invmat"}, the matrix $(\epsilon_w\epsilon_v a^{z,-\lambda,{\operatorname{d}}}_{v^{-1}, w^{-1}})_{v,w\in W}$ is inverse to $(a^{z,-\lambda}_{v,w})_{v,w\in W}$, so $${\mathbf g}\cdot \epsilon_x\epsilon_ua^{z,-\lambda, {\operatorname{d}}}_{x^{-1}, u^{-1}}={\mathbf g}\cdot b^{z,-\lambda}_{x,u}={}^{(u^{-1}w_0)^{\operatorname{d}}}{\mathbf h}\cdot {}^{(u^{-1}w_0)^{\operatorname{d}}x}a^{z,-\lambda}_{u^{-1}w_0, x^{-1}w_0}.$$ We can rewrite it as $${\mathbf g}\cdot \epsilon_x\epsilon_ua^{z,-\lambda, {\operatorname{d}}}_{x, u}={}^{(uw_0)^{\operatorname{d}}}{\mathbf h}\cdot {}^{(uw_0)^{\operatorname{d}}x^{-1}}a^{z,-\lambda}_{uw_0, xw_0}.$$ This proves the first identity. Applying $\Gamma_\lambda$ (see Remark [Remark 54](#rem:Gamma){reference-type="ref" reference="rem:Gamma"} below) and using the fact that $$\Gamma_\lambda({}^{(uw_0)^{\operatorname{d}}}{\mathbf h})={}^{u^{\operatorname{d}}}{\mathbf h},$$ we obtain the second identity.
◻ ## Comparison with a 3d-mirror symmetry result of Rimanyi and Weber Recall from [\[eq:bfE-z-la\]](#eq:bfE-z-la){reference-type="eqref" reference="eq:bfE-z-la"}, we have $$\begin{aligned} \label{eq:res1} {\mathbf E}^{z,\lambda}_v&=\sum_w\frac{{}^w{\mathbf g}}{{\mathbf g}}\cdot a^{z,-\lambda}_{v,w}\cdot {}^{(v^{-1})^{\operatorname{d}}}\frak{c}f_{w^{-1}}=:\sum_{w}c^{z,\lambda}_{v,w}f_{w^{-1}}.\end{aligned}$$ A similar calculation gives $$\begin{aligned} \label{eq:res2} {\mathbf E}_{v}^{z,\lambda, {\operatorname{d}}}&=\sum_{w}\frac{{\mathbf g}}{{}^{v^{-1}}{\mathbf g}}a^{z,-\lambda, {\operatorname{d}}}_{v,w} \cdot {}^{v^{-1}}\frak{c}f_{w^{-1}}^{\operatorname{d}}=:\sum_wc^{z,\lambda,{\operatorname{d}}}_{v,w}f_{w^{-1}}^{\operatorname{d}}.\end{aligned}$$ From the first identity of Theorem [Theorem 53](#thm:trans_mat){reference-type="ref" reference="thm:trans_mat"}, we obtain $${}^v c^{z,\lambda,{\operatorname{d}}}_{v,w}\epsilon_v\epsilon_w \cdot {}^{vw_0}{\mathbf g}={}^{(ww_0)^{\operatorname{d}}}{\mathbf h}\cdot {}^{(ww_0)^{\operatorname{d}}}c_{ww_0, vw_0}^{z,\lambda}.$$ Explicitly, we have $${}^vc_{v,w}^{z,\lambda,{\operatorname{d}}}\cdot \prod_{\alpha>0}\frac{\theta(\hbar+z_v(\alpha))}{\theta(z_\alpha)}=\prod_{\alpha>0}\frac{\theta(\hbar+\lambda_{w(\alpha^\vee)})}{\theta(\lambda_{\alpha^\vee})}\cdot {}^{(ww_0)^{\operatorname{d}}}c_{ww_0,vw_0}^{z,\lambda}.$$ This identity recovers [@RW22 Theorem 7], which provides a comparison of the restriction coefficients of the elliptic classes (after a certain normalization) with those of the Langlands dual system. Such a comparison was predicted by 3d mirror symmetry [@AO21 § 1.3.1]. A large class of examples is given in [@RSVZ19; @RSZV22]. **Remark 54**. Let $\Gamma_\lambda$ (resp. $\Gamma_z$) be the operation on rational sections that formally replaces $\lambda$ by $-\lambda$ (resp. $z$ by $-z$).
Then, by comparing the formulas of the operators in § [7.1](#subsec:DL_operator){reference-type="ref" reference="subsec:DL_operator"} and using the calculation in § [7.3](#subsec:braid){reference-type="ref" reference="subsec:braid"}, we easily obtain the following four identities. $$\begin{aligned} \label{eq:aGammala} a^{z,\lambda}_{v,w}&=\Gamma_\lambda(a^{z,-\lambda}_{v,w}),& ~a^{z,\lambda,{\operatorname{d}}}_{v,w}&=\Gamma_z(a^{-z, \lambda,{\operatorname{d}}}_{v,w}).\\ \label{eq:aGammaz}a^{z,\lambda,{\operatorname{d}}}_{v,w}&=\epsilon_v\epsilon_w\Gamma_\lambda(a^{z,-\lambda, {\operatorname{d}}}_{v,w}), & a^{z,\lambda}_{v,w}&=\epsilon_v\epsilon_w\Gamma_z(a^{-z,\lambda}_{v,w}).\end{aligned}$$ Passing to the inverse matrices, we also obtain the following four identities. $$\begin{aligned} \label{eq:bGammala}b^{z,\lambda}_{v,w}&=\Gamma_\lambda(b^{z,-\lambda}_{v,w}),& ~b^{z,\lambda,{\operatorname{d}}}_{v,w}&=\Gamma_z(b^{-z, \lambda,{\operatorname{d}}}_{v,w}).\\ \label{eq:bGammaz}b^{z,\lambda,{\operatorname{d}}}_{v,w}&=\epsilon_w\epsilon_v\Gamma_\lambda(b^{z,-\lambda, {\operatorname{d}}}_{v,w}), & b^{z,\lambda}_{v,w}&=\epsilon_w\epsilon_v\Gamma_z(b^{-z,\lambda}_{v,w}).\end{aligned}$$ Using these identities, a property of one transition matrix implies the same property of the others. # A formula of the transition matrix/restriction formula {#sec:restriction} In this section, we provide a formula for the transition matrix between the Weyl group elements and the dynamical DL operators. Using [\[eq:bfE-z-la\]](#eq:bfE-z-la){reference-type="eqref" reference="eq:bfE-z-la"} and [\[eq:res1\]](#eq:res1){reference-type="eqref" reference="eq:res1"}, this gives restriction formulas for the elliptic classes. ## The formula for the transition matrices Let $T_\alpha$ be one of $T^{\pm z,\pm \lambda}_\alpha$ or their counterparts for the Langlands dual system.
We write $$T_\alpha=\delta_\alpha^{\operatorname{d}}\frak{a}(z_\alpha, \lambda_{\alpha^\vee})+\delta_\alpha^{\operatorname{d}}\frak{b}(z_\alpha, \lambda_{\alpha^\vee})\delta_\alpha,$$ and write $$T_v=\sum_{w\le v }\delta_v^{\operatorname{d}}a_{v,w}\delta_w.$$ Specializing $T_\alpha$ to $T^{\pm z,\pm \lambda}_\alpha$, the $a_{v,w}$ above becomes $a^{\pm z,\pm \lambda}_{v,w}$. Similarly for the Langlands dual system. For any reduced word $v=s_{i_1}\cdots s_{i_l}$, denote $I=(i_1,...,i_l)$. Define $\beta_l^\vee=\alpha^\vee_{i_l}$ and $\beta_j^\vee=s_{i_l}\cdots s_{i_{j+1}}(\alpha^\vee _{i_j}), j=l-1,...,1$. For any $J\subset[l]$, denote $J_0=\emptyset$ and $J_j=J\cap [j]$ for $j\le l-1$, $w_{I,J_j}=\prod_{k\in J_j}s_{i_k}\in W$, and $\gamma_{j}=w_{I,J_{j-1}}(\alpha_{i_j})$. Clearly the set $\{\gamma_1,...,\gamma_l\}$ depends on $I$ and $J$, but the sequence $(\beta_1^\vee ,...,\beta^\vee _l)$ depends only on $I$. Moreover, $\{\beta_1^\vee ,\cdots ,\beta_l^\vee \}=\Phi^\vee (v^{-1})$. **Theorem 55**. *We have $$a_{v,w}=\sum_{J\subset [l], w_{I,J}=w}\zeta_J, \text{ where }\zeta_J:=\prod_{j\in J}\frak{b}(z_{\gamma_j}, \lambda_{\beta_j^\vee})\prod_{j\not \in J}\frak{a}(z_{\gamma_j}, \lambda_{\beta_j^\vee}).$$* *Proof.* By definition, $a_{v,w}$ is the sum of terms, one for each $J\subset [l]$ with $w_{I,J}=w$. For each such $J$, the corresponding term is the ordered product $$\prod_{j=1}^{l}t_j,\qquad t_j=\begin{cases}\delta_{\alpha_{i_j}}^{\operatorname{d}}\frak{b}(z_{\alpha_{i_j}}, \lambda_{\alpha^\vee_{i_j}})\delta_{\alpha_{i_j}}, & j\in J,\\ \delta_{\alpha_{i_j}}^{\operatorname{d}}\frak{a}(z_{\alpha_{i_j}}, \lambda_{\alpha^\vee_{i_j}}), & j\not\in J,\end{cases}$$ where the product follows the order $j=1,...,l$. We then move the $\delta_\alpha^{\operatorname{d}}$'s to the left and the $\delta_\alpha$'s to the right, and obtain the conclusion. ◻ **Remark 56**.
By combining the formula for the corresponding $a$-coefficients with [\[eq:res1\]](#eq:res1){reference-type="eqref" reference="eq:res1"}, [\[eq:res2\]](#eq:res2){reference-type="eqref" reference="eq:res2"} and their analogues, one obtains a formula for the restriction coefficients of the elliptic classes. In a forthcoming paper [@LZZ], we will derive a root polynomial formula for the $b$-coefficients, generalizing the Billey and Graham-Willems formulas [@billey; @ajs; @graham; @willems]. This will be a more advantageous formula for the restriction coefficients of the elliptic classes, for reasons which will be specified. ## The coefficients $a^{z,\lambda}_{v,w}$ {#subsec:T_res} In the remaining part of this paper, we focus on the $T_\alpha=T_{\alpha}^{z,\lambda}$ version. The other versions are similar. Recall that $$T_{\alpha}^{z,\lambda}=\delta_\alpha^{\operatorname{d}}\frac{\theta(\hbar)\theta(z_\alpha+\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar+\lambda_{\alpha^\vee})}+\delta_\alpha^{{\operatorname{d}}}\frac{\theta(\hbar-z_\alpha)\theta(-\lambda_{\alpha^\vee})}{\theta(z_\alpha)\theta(\hbar+\lambda_{\alpha^\vee})}\delta_\alpha.$$ From Theorem [Theorem 55](#thm:res){reference-type="ref" reference="thm:res"}, we have $$\label{eq:T_zeta} \zeta_J=\frac{\prod_{j\in J}\theta(\hbar-z_{\gamma_j})\theta(-\lambda_{{\beta_j^\vee}})\cdot \prod_{j\not\in J}\theta(\hbar)\theta(z_{\gamma_j}+\lambda_{\beta_{j}^\vee})}{ \prod_{j=1}^l\theta(z_{\gamma_j})\theta(\hbar+\lambda_{\beta_j^\vee})}.$$ **Example 57**. We list all coefficients in type $A_2$. In this case there are two simple roots $\alpha_1,\alpha_2$. For simplicity, denote $\delta_i=\delta_{s_i}$, $\delta_i^{\operatorname{d}}=\delta_{s_i}^{\operatorname{d}}, z_{i+j}=z_{\alpha_i+\alpha_j}, \lambda_{i+j}=\lambda_{\alpha_i^\vee+\alpha_j^\vee}$, $T_{ij}^{z,\lambda}=T_{s_is_j}^{z,\lambda}$.
$$\begin{aligned} T^{z,\lambda}_{1}&=\delta_1^{\operatorname{d}}\frac{\theta(\hbar)\theta(z_1+\lambda_{1})}{\theta(z_1)\theta(\hbar+\lambda_{1})}+\delta_1^{{\operatorname{d}}}\frac{\theta(\hbar-z_1)\theta(-\lambda_{1})}{\theta(z_1)\theta(\hbar+\lambda_{1})}\delta_1.\\ T^{z,\lambda}_{2}&=\delta_2^{\operatorname{d}}\frac{\theta(\hbar)\theta(z_2+\lambda_{2})}{\theta(z_2)\theta(\hbar+\lambda_{2})}+\delta_2^{{\operatorname{d}}}\frac{\theta(\hbar-z_2)\theta(-\lambda_{2})}{\theta(z_2)\theta(\hbar+\lambda_{2})}\delta_2.\\ T^{z,\lambda}_{12}&=\delta_{12}^{\operatorname{d}}\frac{1}{\theta(\hbar+\lambda_{1+2})\theta(\hbar+\lambda_2)}\bigg(\frac{\theta(\hbar)^2\theta(z_1+\lambda_{1+2})\theta(z_2+\lambda_2)}{\theta(z_1)\theta(z_2)}+\frac{\theta(\hbar-z_1)\theta(-\lambda_{1+2})\theta(\hbar)\theta(z_{1+2}+\lambda_2)}{\theta(z_1)\theta(z_{1+2})}\delta_1\\ &+\frac{\theta(\hbar)\theta(z_1+\lambda_{1+2})\theta(\hbar-z_2)\theta(-\lambda_2)}{\theta(z_1)\theta(z_2)}\delta_2+\frac{\theta(\hbar-z_1)\theta(-\lambda_{1+2})\theta(\hbar-z_{1+2})\theta(-\lambda_2)}{\theta(z_1)\theta(z_{1+2})}\delta_{12}\bigg).\\ T^{z,\lambda}_{21}&=\delta_{21}^{\operatorname{d}}\frac{1}{\theta(\hbar+\lambda_{1+2})\theta(\hbar+\lambda_1)}\bigg(\frac{\theta(\hbar)^2\theta(z_2+\lambda_{1+2})\theta(z_1+\lambda_1)}{\theta(z_2)\theta(z_1)}+\frac{\theta(\hbar-z_2)\theta(-\lambda_{1+2})\theta(\hbar)\theta(z_{1+2}+\lambda_1)}{\theta(z_2)\theta(z_{1+2})}\delta_2\\ &+\frac{\theta(\hbar)\theta(z_2+\lambda_{1+2})\theta(\hbar-z_1)\theta(-\lambda_1)}{\theta(z_2)\theta(z_1)}\delta_1+\frac{\theta(\hbar-z_2)\theta(-\lambda_{1+2})\theta(\hbar-z_{1+2})\theta(-\lambda_1)}{\theta(z_2)\theta(z_{1+2})}\delta_{21}\bigg).\\ T^{z,\lambda}_{121}&=\delta_{121}^{\operatorname{d}}\frac{1}{\theta(\hbar+\lambda_1)\theta(\hbar+\lambda_2)\theta(\hbar+\lambda_{1+2})}\bigg(\frac{\theta(\hbar)^3\theta(z_1+\lambda_2)\theta(z_2+\lambda_{1+2})\theta(z_1+\lambda_1)}{\theta(z_1)^2\theta(z_2)}\\ 
&+\frac{\theta(\hbar)^2\theta(z_1+\lambda_2)\theta(z_2+\lambda_{1+2})\theta(\hbar-z_1)\theta(-\lambda_1)}{\theta(z_1)^2\theta(z_2)}\delta_1\\ &+\frac{\theta(\hbar)^2\theta(\hbar-z_1)\theta(-\lambda_2)\theta(z_{1+2}+\lambda_{1+2})\theta(z_{-1}+\lambda_1)}{-\theta(z_1)^2\theta(z_{1+2})}\delta_1\\ &+\frac{\theta(\hbar)\theta(\hbar-z_1)\theta(-\lambda_2)\theta(z_{1+2}+\lambda_{1+2})\theta(\hbar+z_1)\theta(-\lambda_1)}{-\theta(z_1)^2\theta(z_{1+2})}\\ &+\frac{\theta(\hbar)^2\theta(z_1+\lambda_2)\theta(\hbar-z_2)\theta(-\lambda_{1+2})\theta(z_{1+2}+\lambda_1)}{\theta(z_1)\theta(z_2)\theta(z_{1+2})}\delta_2\\ &+\frac{\theta(\hbar)\theta(z_1+\lambda_2)\theta(\hbar-z_2)\theta(-\lambda_{1+2})\theta(-\lambda_1)\theta(\hbar-z_{1+2})}{\theta(z_1)\theta(z_2)\theta(z_{1+2})}\delta_{21}\\ &+\frac{\theta(\hbar)\theta(\hbar-z_1)\theta(-\lambda_2)\theta(-\lambda_{1+2})\theta(\hbar-z_{1+2})\theta(z_2+\lambda_1)}{\theta(z_1)\theta(z_2)\theta(z_{1+2})}\delta_{12}\\ &+\frac{\theta(\hbar-z_1)\theta(\hbar-z_2)\theta(\hbar-z_{1+2})\theta(-\lambda_1)\theta(-\lambda_2)\theta(-\lambda_{1+2})}{\theta(z_1)\theta(z_2)\theta(z_{1+2})}\delta_{121}\bigg).\end{aligned}$$ In the expansion of $T_{121}^{z,\lambda}$, one can use Fay's trisecant identity to verify that the coefficient of $\delta_1$ coincides with that of $\delta_2$ if one switches $1$ and $2$, and also that the coefficient of $\delta_e=1$ is symmetric with respect to $1$ and $2$. This shows that $T_{121}^{z,\lambda}=T_{212}^{z,\lambda}$. That is, the braid relation for $A_2$ is satisfied. **Example 58**. Let us consider the operator $T^{z,\lambda}_\alpha$ in the $A_3$ case with simple roots $\alpha_1,\alpha_2,\alpha_3$. For simplicity, denote $\alpha_{i\pm j}=\alpha_i\pm \alpha_j$, $\lambda_{i\pm j}=\lambda_{\alpha_i^\vee\pm \alpha_j^\vee}$, $z_{i\pm j}=z_{\alpha_i}\pm z_{\alpha_j}$. Note that $\alpha_{i+j}^\vee=\alpha_i^\vee+\alpha_j^\vee$. Let $v=s_1s_2s_1s_3s_2$ and $w=s_1s_2$.
So $$\beta_1^\vee =\alpha_3^\vee , \beta_2^\vee =\alpha^\vee _{1+2+3},\beta^\vee _3=\alpha_{1+2}^\vee ,\beta_4^\vee =\alpha_{2+3}^\vee , \beta_5^\vee =\alpha_2^\vee .$$ There are three subsets $J$ of $[5]$ with $w_{I,J}=w$, namely, $$J=\{1,1,-,-,-\}, ~J'=\{1,-,-,-,1\},~ J''=\{-,-,1,-,1\}.$$ We compute $\zeta_{J'}$ as an example. In this case $$\gamma_1=\alpha_1,\gamma_2=\alpha_{1+2}, \gamma_3=-\alpha_1,\gamma_4=\alpha_3,\gamma_5=\alpha_{1+2}.$$ So $$\zeta_{J'}=\frac{\theta(\hbar)^3\theta(z_{1+2}+\lambda_{1+2+3})\theta(z_{-1}+\lambda_{1+2})\theta(z_{3}+\lambda_{2+3})\theta(-\lambda_{3})\theta(-\lambda_{2})\theta(\hbar-z_{1})\theta(\hbar-z_{1+2})}{\theta(z_1)\theta(z_{1+2})\theta(z_{-1})\theta(z_{3})\theta(z_{1+2})\theta(\hbar+\lambda_{3})\theta(\hbar+\lambda_{1+2+3})\theta(\hbar+\lambda_{1+2})\theta(\hbar+\lambda_{2+3})\theta(\hbar+\lambda_{2})}.$$ ## Restrictions of line bundles {#subsec:restr_linebdl} Recall from Theorem [Theorem 35](#thm:Trational){reference-type="ref" reference="thm:Trational"} that $T^{z,\lambda}_{v}$ is a rational section of ${\mathbb S}''$. So $a^{z,\lambda}_{v,w}$ is a rational section of ${\mathbb S}''_{w,v}=v^{{\operatorname{d}}*}{\mathbb S}_{w,v}$. Using Lemma [Lemma 10](#lem:basic-bundle){reference-type="ref" reference="lem:basic-bundle"} and Lemma [Lemma 18](#lem:basicS){reference-type="ref" reference="lem:basicS"}, we have $$\begin{aligned} \label{eq:bbS''res-la} {\mathbb S}''_{w,v}|_\lambda&=(v^{{\operatorname{d}}*}{\mathbb S}_{w,v})|_\lambda={\mathbb S}_{w,v}|_{v(\lambda)}={\mathcal O}(v(\lambda)-w_\bullet \lambda),\\ \label{eq:bbS''res-z}{\mathbb S}''_{w,v}|_z&=(v^{{\operatorname{d}}*}{\mathbb S}_{w,v})|_z=v^{{\operatorname{d}}*}{\mathcal O}(z-v_\bullet w^{-1}(z))={\mathcal O}(v^{-1}_\bullet z-w^{-1}(z)).
\end{aligned}$$ Let $v=s_{i_1}\cdots s_{i_k}$ and denote $\mu_j=s_{i_1}\cdots s_{i_{j-1}}\alpha_{i_j}$ and $\mu_j^\vee=s_{i_1}\cdots s_{i_{j-1}}\alpha^\vee_{i_j}$. Then it is easy to see that $$\begin{aligned} v(\lambda)-\lambda&=-\sum_{j}\lambda_{\alpha_{i_j}^\vee}\mu_j,& \quad v(z)-z&=-\sum_jz_{\alpha_{i_j}}\mu_j^\vee,\\ v(\hbar\rho)-\hbar\rho&=-\hbar\sum_j\mu_j,&v(\hbar\rho^\vee)-\hbar\rho^\vee&=-\hbar\sum_j\mu_j^\vee.\end{aligned}$$ These identities can be used in calculations. **Example 59**. We continue with Example [Example 58](#ex:1){reference-type="ref" reference="ex:1"}, and illustrate Theorem [Theorem 35](#thm:Trational){reference-type="ref" reference="thm:Trational"}. We first compute ${\mathbb S}''_{w,v}|_z$ and ${\mathbb S}''_{w,v}|_\lambda$, and get $$\begin{aligned} {\mathbb S}''_{s_1s_2, s_1s_2s_1s_3s_2}|_z&={\mathcal O}((2\hbar-z_2)\alpha_1^\vee+(4\hbar-z_2-z_3)\alpha_2^\vee+(3\hbar-z_1-z_2-z_3)\alpha_3^\vee),\\ {\mathbb S}''_{s_1s_2,s_1s_2s_1s_3s_2}|_\lambda&={\mathcal O}((2\hbar-\lambda_3)\alpha_1+(\hbar-\lambda_1-\lambda_2-\lambda_3)\alpha_2+(-\lambda_2-\lambda_3)\alpha_3).\end{aligned}$$ Consider $J=\{1,1,-,-,-\}$. We have $$\gamma_1=\alpha_1,\gamma_2=\alpha_{1+2},\gamma_3=\alpha_{2}, \gamma_4=\alpha_{1+2+3}, \gamma_5=\alpha_{-1-2},$$ so $$\zeta_{J}=\frac{\theta(\hbar)^3\theta(\hbar-z_1)\theta(\hbar-z_{1+2})\theta(-\lambda_{3})\theta(-\lambda_{1+2+3})\theta(z_2+\lambda_{1+2})\theta(z_{1+2+3}+\lambda_{2+3})\theta(z_{-1-2}+\lambda_{2})}{\theta(z_{1})\theta(z_{1+2})\theta(z_{2})\theta(z_{1+2+3})\theta(z_{-1-2})\theta(\hbar+\lambda_{2})\theta(\hbar+\lambda_{2+3})\theta(\hbar+\lambda_{1+2})\theta(\hbar+\lambda_{1+2+3})\theta(\hbar+\lambda_{3})}.$$ One can verify that this is a rational section of ${\mathbb S}''_{s_1s_2,s_1s_2s_1s_3s_2}$ by using the argument in the proof of Corollary [Corollary 60](#cor:zero){reference-type="ref" reference="cor:zero"} below (or see Example [Example 61](#ex:C2){reference-type="ref" reference="ex:C2"} below).
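The combinatorial data entering Theorem [Theorem 55](#thm:res){reference-type="ref" reference="thm:res"} (the elements $w_{I,J}$, the coroots $\beta_j^\vee$, and the roots $\gamma_j$) can be generated mechanically. The following sketch (an illustration only, not part of the paper) realizes the type $A_3$ Weyl group as $S_4$ acting on $\mathbb{Z}^4$ with $\alpha_i=e_i-e_{i+1}$, uses the self-duality of type $A$ to compute the $\beta_j^\vee$'s with the same vectors, and verifies the three subsets $J$ of Example 58 together with the $\gamma_j$'s of Example 59.

```python
from itertools import combinations

N = 4  # type A_3: W = S_4 acting on Z^4, with alpha_i = e_i - e_{i+1}

def alpha(i):                      # simple root alpha_i as a vector in Z^4
    v = [0] * N
    v[i - 1], v[i] = 1, -1
    return v

def s(i):                          # simple reflection s_i as a permutation
    p = list(range(N))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

IDENT = tuple(range(N))

def compose(p, q):                 # (p o q)(k) = p(q(k))
    return tuple(p[q[k]] for k in range(N))

def act(p, v):                     # permutation acting on coordinates of v
    w = [0] * N
    for k in range(N):
        w[p[k]] = v[k]
    return w

def word(I, J=None):               # w_{I,J} = prod_{k in J} s_{i_k}
    J = range(1, len(I) + 1) if J is None else J
    w = IDENT
    for k in sorted(J):
        w = compose(w, s(I[k - 1]))
    return w

I = (1, 2, 1, 3, 2)                # reduced word for v = s1 s2 s1 s3 s2
w = word((1, 2))                   # w = s1 s2

# Example 58: exactly three subsets J of [5] with w_{I,J} = w.
subsets = [J for r in range(6) for J in combinations(range(1, 6), r)
           if word(I, J) == w]
assert subsets == [(1, 2), (1, 5), (3, 5)]

def gammas(I, J):                  # gamma_j = w_{I, J cap [j-1]}(alpha_{i_j})
    cur, out = IDENT, []
    for j, i in enumerate(I, start=1):
        out.append(act(cur, alpha(i)))
        if j in J:
            cur = compose(cur, s(i))
    return out

def betas(I):                      # beta_j = s_{i_l}...s_{i_{j+1}}(alpha_{i_j})
    cur, out = IDENT, [None] * len(I)
    for j in range(len(I), 0, -1):
        out[j - 1] = act(cur, alpha(I[j - 1]))
        cur = compose(cur, s(I[j - 1]))
    return out

a1, a2, a3 = alpha(1), alpha(2), alpha(3)
add = lambda u, v: [x + y for x, y in zip(u, v)]
neg = lambda u: [-x for x in u]
a12, a23, a123 = add(a1, a2), add(a2, a3), add(add(a1, a2), a3)

# gamma's for J = {1,2}, as listed in Example 59
assert gammas(I, {1, 2}) == [a1, a12, a2, a123, neg(a12)]
# beta's as listed in Example 58 (type A is self-dual)
assert betas(I) == [a3, a123, a12, a23, a2]
```

Running the enumeration confirms that exactly the subsets $\{1,2\}$, $\{1,5\}$, $\{3,5\}$ (the $J$, $J'$, $J''$ above) satisfy $w_{I,J}=s_1s_2$.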
# Further properties and examples {#sec:examples} The method in § [11.3](#subsec:restr_linebdl){reference-type="ref" reference="subsec:restr_linebdl"} has further applications. We work with the notation of Theorem [Theorem 55](#thm:res){reference-type="ref" reference="thm:res"}, for the operators $T^{z,\lambda}_w$. We would like to thank Allen Knutson for helpful suggestions in rephrasing some of the results in this section. **Corollary 60**. *For any $\lambda\in \mathfrak{A}^\vee, z\in \mathfrak{A}$, and for any $J\subset [l]$ so that $w_{I,J}=w$, we have the following two identities in $\mathfrak{A}^\vee$ and $\mathfrak{A}$, respectively: $$\begin{aligned} \label{eq:zero} v(\lambda)-w_\bullet (\lambda)&=\hbar\sum_{j\in J}\gamma_j-\sum_{j\not \in J} \lambda_{\beta_j^\vee}\gamma_j, & v_\bullet^{-1}z-w^{-1}(z)&=\hbar\sum_{j=1}^l\beta_j^\vee-\sum_{j\not \in J}z_{\gamma_j}\beta_j^\vee. \end{aligned}$$* *Proof.* We use the expression [\[eq:T_zeta\]](#eq:T_zeta){reference-type="eqref" reference="eq:T_zeta"}, which is a rational section of ${\mathbb S}''_{w,v}$. First, fix $\lambda\in \mathfrak{A}^\vee$ so that $\zeta_J$ only depends on $z\in \mathfrak{A}$. If $j\in J$, then $\frac{\theta(\hbar-z_{\gamma_j})}{\theta(z_{\gamma_j})}$ is a rational section of ${\mathcal O}(\hbar \gamma_j)$. If $j\not \in J$, then $\frac{\theta(z_{\gamma_j}+\lambda_{\beta_j^\vee})}{\theta(z_{\gamma_j})}$ is a rational section of ${\mathcal O}(-\lambda_{\beta_j^\vee}\gamma_j)$. Therefore, $\zeta_J$ is a rational section of $${\mathcal O}(\hbar\sum_{j\in J}\gamma_j-\sum_{j\not \in J} \lambda_{\beta_j^\vee}\gamma_j).$$ Comparing with ${\mathbb S}''_{w,v}|_\lambda$ in [\[eq:bbS\'\'res-la\]](#eq:bbS''res-la){reference-type="eqref" reference="eq:bbS''res-la"}, we obtain $$\hbar\sum_{j\in J}\gamma_j-\sum_{j\not \in J} \lambda_{\beta_j^\vee}\gamma_j=v(\lambda)-w_\bullet (\lambda).$$ Similarly, fix $z\in\mathfrak{A}$.
If $j\in J$, then $\frac{\theta(-\lambda_{\beta_{j}^\vee})}{\theta(\hbar+\lambda_{\beta_j^\vee})}$ is a rational section of ${\mathcal O}(\hbar\beta_j^\vee)$, and if $j\not\in J$, then $\frac{\theta(z_{\gamma_j}+\lambda_{\beta_j^\vee})}{\theta(\hbar+\lambda_{\beta_j^\vee})}$ is a rational section of ${\mathcal O}((\hbar-z_{\gamma_j})\beta_j^\vee)$. Therefore, $\zeta_J$ is a rational section of $${\mathcal O}(\hbar\sum_{j\in J}\beta_j^\vee+\sum_{j\not\in J}(\hbar-z_{\gamma_j})\beta_j^\vee)={\mathcal O}(\hbar \sum_{j=1}^l\beta_j^\vee-\sum_{j\not\in J}z_{\gamma_j}\beta_j^\vee).$$ Comparing with ${\mathbb S}''_{w,v}|_z$, we obtain $$\hbar\sum_{j=1}^l\beta_j^\vee-\sum_{j\not\in J}z_{\gamma_j}\beta_j^\vee=v_\bullet^{-1}z-w^{-1}(z).$$ ◻ **Example 61**. We illustrate the proof of Corollary [Corollary 60](#cor:zero){reference-type="ref" reference="cor:zero"} in the $C_2$ case. The Langlands dual system is of type $B_2$. There are two simple roots $\alpha_1,\alpha_2$ such that $$s_1(\alpha_2)=\alpha_2+2\alpha_1,~s_2(\alpha_1)=\alpha_1+\alpha_2,~ s_1(\alpha_2^\vee)=\alpha_2^\vee+\alpha_1^\vee,~ s_2(\alpha_1^\vee)=\alpha_1^\vee+2\alpha_2^\vee.$$ Let $v=s_{1}s_2s_1s_2, w=s_1$, then by computations using [\[eq:bbS\'\'res-la\]](#eq:bbS''res-la){reference-type="eqref" reference="eq:bbS''res-la"} and [\[eq:bbS\'\'res-z\]](#eq:bbS''res-z){reference-type="eqref" reference="eq:bbS''res-z"}, we obtain $$\begin{aligned} {\mathbb S}''_{w,v}|_\lambda&={\mathcal O}((\hbar-\lambda_1-2\lambda_2)\alpha_1+(-\lambda_1-2\lambda_2)\alpha_2),\\ {\mathbb S}''_{w,v}|_z&={\mathcal O}((3\hbar-z_1-z_2)\alpha_1^\vee+(4\hbar-2z_1-2z_2)\alpha_2^\vee).\end{aligned}$$ On the other hand, with $v=s_{1}s_2s_1s_2$, we have $$\beta_1^\vee=\alpha_1^\vee ,\beta_2^\vee=\alpha_1^\vee+\alpha_2^\vee ,\beta_3^\vee =\alpha_{1}^\vee+2\alpha_2^\vee,\beta_4^\vee =\alpha_2^\vee.$$ Consider $J=\{1,-,-,-\}$, so $$\gamma_1=\alpha_1,\gamma_2=\alpha_2+2\alpha_1, \gamma_3=-\alpha_1,\gamma_4=\alpha_2+2\alpha_1.$$ Then 
[\[eq:T_zeta\]](#eq:T_zeta){reference-type="eqref" reference="eq:T_zeta"} gives $$\begin{aligned} \zeta_J&=\frac{ \theta(\hbar-z_{\alpha_1})\theta(-\lambda_{\alpha_1^\vee})\theta(\hbar)^3\theta(z_{\alpha_2+2\alpha_1}+\lambda_{\alpha_1^\vee+\alpha_2^\vee})\theta(z_{-\alpha_1}+\lambda_{\alpha_1^\vee+2\alpha_2^\vee})\theta(z_{2\alpha_1+\alpha_2}+\lambda_{\alpha_2^\vee})}{ \theta(z_{\alpha_1})\theta(z_{2\alpha_{1}+\alpha_2})\theta(-z_{\alpha_1})\theta(z_{2\alpha_1+\alpha_2})\theta(\hbar+\lambda_{\alpha_1^\vee})\theta(\hbar+\lambda_{\alpha_1^\vee+\alpha_2^\vee})\theta(\hbar+\lambda_{\alpha_1^\vee+2\alpha_2^\vee})\theta(\hbar+\lambda_{\alpha_2^\vee})}\\ &=\frac{\theta(\hbar-z_{\alpha_1})}{\theta(z_{\alpha_1})}\cdot \frac{\theta(z_{2\alpha_1+\alpha_2}+\lambda_{\alpha_1^\vee+\alpha_2^\vee})}{\theta(z_{2\alpha_1+\alpha_2})}\cdot \frac{\theta(z_{-\alpha_1}+\lambda_{\alpha_1^\vee+2\alpha_2^\vee})}{\theta(-z_{\alpha_1})}\cdot \frac{\theta(z_{2\alpha_1+\alpha_2}+\lambda_{\alpha_2^\vee})}{\theta(z_{2\alpha_1+\alpha_2})}\\ &\cdot \frac{\theta(-\lambda_{\alpha_1^\vee})\theta(\hbar)^3}{\theta(\hbar+\lambda_{\alpha_1^\vee})\theta(\hbar+\lambda_{\alpha_1^\vee+\alpha_2^\vee})\theta(\hbar+\lambda_{\alpha_1^\vee+2\alpha_2^\vee})\theta(\hbar+\lambda_{\alpha_2^\vee})}.\end{aligned}$$ Fix $\lambda\in \mathfrak{A}^\vee$, and let $z\in \mathfrak{A}$ vary. The first fraction is a rational section of ${\mathcal O}(\hbar)$ in the variable $z_{\alpha_1}\in E$, so it is a rational section of ${\mathcal O}(\hbar\alpha_1)=\chi_{\alpha_1}^*{\mathcal O}(\hbar)$. Similarly, the second, third, and fourth fractions are rational sections of $${\mathcal O}(-\lambda_{\alpha_1^\vee+\alpha_2^\vee}(2\alpha_1+\alpha_2)),~ {\mathcal O}(-\lambda_{\alpha_1^\vee+2\alpha_2^\vee}(-\alpha_1)),~ {\mathcal O}(-\lambda_{\alpha_2^\vee}(2\alpha_1+\alpha_2)).$$ The last fraction is constant, since no $z$ variable is involved.
So $\zeta_J$ (viewed as on $\mathfrak{A}\times\{\lambda\}$) is a rational section of $$\begin{aligned} &{\mathcal O}(\hbar\alpha_1)\otimes {\mathcal O}(-\lambda_{\alpha_1^\vee+\alpha_2^\vee}(2\alpha_1+\alpha_2))\otimes{\mathcal O}(-\lambda_{\alpha_1^\vee+2\alpha_2^\vee}(-\alpha_1))\otimes {\mathcal O}(-\lambda_{\alpha_2^\vee}(2\alpha_1+\alpha_2))\\ &={\mathcal O}(\hbar \alpha_1-\lambda_{\alpha_1^\vee+\alpha_2^\vee}(2\alpha_1+\alpha_2)+\lambda_{\alpha_1^\vee+2\alpha_2^\vee}\alpha_1-\lambda_{\alpha_2^\vee}(2\alpha_1+\alpha_2))\\ &={\mathcal O}((\hbar-\lambda_{\alpha_1^\vee}-2\lambda_{\alpha_2^\vee})\alpha_1-(\lambda_{\alpha_1^\vee}+2\lambda_{\alpha_2^\vee})\alpha_2)={\mathbb S}''_{w,v}|_\lambda.\end{aligned}$$ One can similarly fix $z$, and let $\lambda\in \mathfrak{A}^\vee$ vary. Rewriting $\zeta_J$, one can show that it is a rational section of ${\mathbb S}''_{w,v}|_z$. This verifies that $\zeta_J$ is a rational section of ${\mathbb S}''_{w,v}$. The following is an application to root systems. **Corollary 62**. *For any $\mu\in X^*, \mu^\vee\in X_*$ and $v\ge w$, with notations as in Theorem [Theorem 55](#thm:res){reference-type="ref" reference="thm:res"}, we have $$\begin{aligned} \label{eq:la}v(\mu)-w(\mu)&=-\sum_{j\not\in J}\langle\mu, \beta_j^\vee\rangle\gamma_j, & v^{-1}(\mu^\vee)-w^{-1}(\mu^\vee)&=-\sum_{j\not\in J}\langle\mu^\vee, \gamma_j\rangle\beta_j^\vee. \end{aligned}$$ In particular, $\sum_{j\not\in J}\gamma_j\otimes \beta_j^\vee\in X^*\otimes X_*$ is independent of the choice of $I$ or $J$.* *Proof.* We prove the second identity of [\[eq:la\]](#eq:la){reference-type="eqref" reference="eq:la"} first.
Let $z=\hbar\otimes \mu^\vee$; then the second identity in Corollary [Corollary 60](#cor:zero){reference-type="ref" reference="cor:zero"} can be written as $$\hbar\otimes (v^{-1}(\mu^\vee-\rho^\vee)+\rho^\vee-w^{-1}(\mu^\vee))=\hbar\otimes ( \sum_{j=1}^l\beta_j^\vee-\sum_{j\not\in J}\langle\mu^\vee, \gamma_j\rangle\beta_j^\vee),$$ which implies the following identity: $$v^{-1}(\mu^\vee-\rho^\vee)+\rho^\vee-w^{-1}(\mu^\vee)=\sum_{j=1}^l\beta_j^\vee-\sum_{j\not\in J}\langle\mu^\vee, \gamma_j\rangle\beta_j^\vee.$$ It is easy to see that $\rho^\vee-v^{-1}(\rho^\vee)=\sum_{j=1}^l\beta_j^\vee$, so the identity in [\[eq:la\]](#eq:la){reference-type="eqref" reference="eq:la"} holds. The first identity of [\[eq:la\]](#eq:la){reference-type="eqref" reference="eq:la"} can be proved similarly. ◻ ## Logarithmic degree Motivated by the previous corollary, for any $w\in W$, we consider the following map $$\Uptheta_{w}:X^*\to X^*, \mu\mapsto \mu-w(\mu).$$ Via the isomorphism $\mathop{\mathrm{{Hom}}}_{\mathbb Z}(X^*,X^*)\cong X^*\otimes X_*$, the map $\Uptheta_{w}$ is uniquely determined by an element of $X^*\otimes X_*$. That is, if $\Uptheta_{w}=\sum_j\gamma_j\otimes \beta_j^\vee\in X^*\otimes X_*$, then $$\label{eq:up} \Uptheta_{w}(\mu)=\sum_{j}\langle\mu, \beta_j^\vee\rangle\gamma_j, \quad \mu\in X^*.$$ Moreover, the isomorphism $\mathop{\mathrm{{Hom}}}_{\mathbb Z}(X_*,X_*)\cong X^*\otimes X_*$ makes $\Uptheta_{w}$ into a map $$\Uptheta_{w}:X_*\to X_*, \quad \Uptheta_{w}(\mu^\vee)=\sum_j\langle\mu^\vee, \gamma_j\rangle\beta_j^\vee.$$ Note that for any $\mu'\in X^*, \mu^\vee\in X_*$, we have $\langle w(\mu'), \mu^\vee\rangle=\langle\mu', w^{-1}(\mu^\vee)\rangle.$ Therefore, $$\Uptheta_{w}(\mu^\vee)=\mu^\vee-w^{-1}(\mu^\vee).$$ We call this element $\Uptheta_{w}$ the *logarithmic degree* of the element $w\in W$.
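The logarithmic degree can be computed mechanically. The sketch below (in the standard $\mathbb{R}^3$ realization of $A_2$, an illustrative choice of ours) checks that the two descriptions $\Uptheta_{w}(\mu)=\mu-w(\mu)$ and $\Uptheta_{w}(\mu^\vee)=\mu^\vee-w^{-1}(\mu^\vee)$ are adjoint, i.e. $\langle \Uptheta_{w}(\mu),\mu^\vee\rangle=\langle\mu,\Uptheta_{w}(\mu^\vee)\rangle$, and reproduces one line of Example 67 below.

```python
# Logarithmic degree Theta_w in the standard R^3 realization of A2 (simply
# laced, so roots and coroots are identified and <.,.> is the dot product).
ALPHA = {1: (1.0, -1.0, 0.0), 2: (0.0, 1.0, -1.0)}

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(i, v):
    c = dot(v, ALPHA[i])
    return tuple(x - c * a for x, a in zip(v, ALPHA[i]))

def act(word, v):
    """w = s_{i_1}...s_{i_k}; the rightmost letter acts first."""
    for i in reversed(word):
        v = reflect(i, v)
    return v

def theta(word, mu):
    """Theta_w(mu) = mu - w(mu)."""
    return tuple(m - x for m, x in zip(mu, act(word, mu)))

mu, mu_vee = (0.7, -0.2, 1.4), (1.0, 2.0, -3.0)
w, w_inv = [1, 2, 1], [1, 2, 1]  # s1s2s1 is an involution

# Adjointness: <Theta_w(mu), mu^vee> = <mu, mu^vee - w^{-1}(mu^vee)>.
lhs = dot(theta(w, mu), mu_vee)
rhs = dot(mu, tuple(m - x for m, x in zip(mu_vee, act(w_inv, mu_vee))))
assert abs(lhs - rhs) < 1e-12

# Example 67 below: Theta_{s1s2s1} - Theta_e = alpha_{1+2} alpha_{1+2}^vee,
# i.e. mu |-> <mu, a1^vee + a2^vee> (a1 + a2).
a12 = tuple(x + y for x, y in zip(ALPHA[1], ALPHA[2]))
expected = tuple(dot(mu, a12) * c for c in a12)
assert all(abs(p - q) < 1e-12 for p, q in zip(theta(w, mu), expected))
```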
Clearly from [\[eq:up\]](#eq:up){reference-type="eqref" reference="eq:up"}, one can write $\Uptheta_{w}=\sum_{j}\gamma_j\otimes \beta_j^\vee$ so that the $\gamma_j$ are positive roots and the $\beta_j^\vee$ are simple coroots for all $j$. We will not distinguish $\Uptheta_{w}\in X^*\otimes X_*$ from the element it defines in $\mathop{\mathrm{End}}_{\mathbb Z}(X^*)$ or $\mathop{\mathrm{End}}_{\mathbb Z}(X_*)$. For convenience, in the following proposition we denote the Weyl group action on $X^*$ and $X_*$ by $w$ and $w^\vee$, respectively. **Proposition 63**. 1. *$\Uptheta_{e}=0$.* 2. *$u(\Uptheta_{v}-\Uptheta_w)=\Uptheta_{uv}-\Uptheta_{uw}$, $u^\vee(\Uptheta_{v}-\Uptheta_w)=\Uptheta_{vu^{-1}}-\Uptheta_{wu^{-1}}$.* *Proof.* (1) is obvious. For (2), write $\Uptheta_{v}-\Uptheta_w=\sum_{j}\gamma_{j}\otimes \beta_{j}^\vee$, so $$(\Uptheta_v-\Uptheta_w)(\mu)=w(\mu)-v(\mu)=\sum_{j}\langle\mu, \beta_{j}^\vee\rangle\gamma_{j}.$$ Then $$u(\Uptheta_v-\Uptheta_w)=\sum_{j}u(\gamma_j)\otimes \beta_j^\vee,$$ $$u(\Uptheta_v-\Uptheta_w)(\mu)=\sum_{j}\langle\mu, \beta_j^\vee\rangle u(\gamma_j)=u((\Uptheta_{v}-\Uptheta_w)(\mu))=u(w(\mu)-v(\mu))=uw(\mu)-uv(\mu)=(\Uptheta_{uv}-\Uptheta_{uw})(\mu).$$ Therefore, $u(\Uptheta_{v}-\Uptheta_w)=\Uptheta_{uv}-\Uptheta_{uw}$. The second part can be proved by using the identity $(\Uptheta_{v}-\Uptheta_w)(\mu^\vee)=w^{-1}(\mu^\vee)-v^{-1}(\mu^\vee).$ ◻ **Remark 64**. Suppose $w\le v$. From Corollary [Corollary 60](#cor:zero){reference-type="ref" reference="cor:zero"} and Corollary [Corollary 62](#cor:la){reference-type="ref" reference="cor:la"}, one has $$\sum_{\gamma\in \Phi(w)}\gamma=\sum_{j\in J}\gamma_j, \quad \sum_{\beta^\vee\in \Phi^\vee(v^{-1})}\beta^\vee=\sum_{j\in I}\beta_j^\vee, \quad \Uptheta_v-\Uptheta_w=\sum_{j\not\in J}\gamma_j\otimes \beta_j^\vee.$$ In other words, one can compute the right-hand sides of these identities without using the notations of Theorem [Theorem 55](#thm:res){reference-type="ref" reference="thm:res"}.
Then via [\[eq:bbS\'\'res-la\]](#eq:bbS''res-la){reference-type="eqref" reference="eq:bbS''res-la"} and [\[eq:bbS\'\'res-z\]](#eq:bbS''res-z){reference-type="eqref" reference="eq:bbS''res-z"}, these three elements uniquely determine the line bundle ${\mathbb S}''_{w,v}$. For simplicity, we write $\lambda\mu^\vee$ for $\lambda\otimes \mu^\vee$ in $X^*\otimes X_*$. **Example 65**. In the $A_3$ case, we compute $\Uptheta_{v}-\Uptheta_w$ with $v=s_1s_2s_3s_1s_2s_1$ and $w=s_1$. We have $$\Uptheta_v=\alpha_1\alpha_1^\vee+\alpha_2\alpha_{1+2}^\vee+\alpha_3\alpha_{1+2+3}^\vee+\alpha_1\alpha_2^\vee+\alpha_2\alpha_{2+3}^\vee+\alpha_1\alpha_3^\vee, \quad \Uptheta_w=\alpha_1\alpha_1^\vee,$$ so $$\Uptheta_v-\Uptheta_w=\alpha_2\alpha_{1+2}^\vee+\alpha_3\alpha_{1+2+3}^\vee+\alpha_1\alpha_2^\vee+\alpha_2\alpha_{2+3}^\vee+\alpha_1\alpha_3^\vee.$$ **Example 66**. In the case of $C_2$, we compute $\Uptheta_v-\Uptheta_w$ with $v=s_1s_2s_1, w=e$. We have $$\Uptheta_v-\Uptheta_e=\alpha_1\alpha_1^\vee+\alpha_2(\alpha_1^\vee+\alpha_2^\vee)+\alpha_1(\alpha_1^\vee+2\alpha_2^\vee).$$ **Example 67**. We consider the $A_2$ case. Let $v=s_1s_2s_1$. We compute $\Uptheta_{s_1s_2s_1}-\Uptheta_w$ for all $w\le s_1s_2s_1$. Then $$\begin{aligned} \Uptheta_{s_1s_2s_1}-\Uptheta_e&=\alpha_1\alpha_2^\vee+\alpha_2\alpha_{1+2}^\vee+\alpha_1\alpha_1^\vee=\alpha_{1+2}\alpha_{1+2}^\vee.\\ \Uptheta_{s_1s_2s_1}-\Uptheta_{s_1}&=\alpha_1\alpha_2^\vee+\alpha_2\alpha_{1+2}^\vee.\\ \Uptheta_{s_1s_2s_1}-\Uptheta_{s_2}&=\alpha_2\alpha_1^\vee+\alpha_{1}\alpha_{1+2}^\vee.\\ \Uptheta_{s_1s_2s_1}-\Uptheta_{s_1s_2}&=\alpha_2\alpha_1^\vee.\\ \Uptheta_{s_1s_2s_1}-\Uptheta_{s_2s_1}&=\alpha_1\alpha_2^\vee.\\\end{aligned}$$ [^1]: In the present paper we avoid the usual dual of vector bundles and reserve the notation $\vee$ for Langlands duality, which is consistent with the dual of abelian varieties.
--- abstract: | We prove that the scaled maximum steady-state waiting time and the scaled maximum steady-state queue length among $N$ $GI/GI/1$-queues in the $N$-server fork-join queue converge to a normally distributed random variable as $N\to\infty$. The maximum steady-state waiting time in this queueing system scales around $\frac{1}{\gamma}\log N$, where $\gamma$ is determined by the cumulant generating function $\Lambda$ of the service distribution and solves the Cramér-Lundberg equation with stochastic service times and deterministic inter-arrival times. This value $\frac{1}{\gamma}\log N$ is reached at a certain hitting time. The number of arrivals until that hitting time satisfies the central limit theorem, with standard deviation $\frac{\sigma_A}{\sqrt{\Lambda'(\gamma)\gamma}}$. By using distributional Little's law, we can extend this result to the maximum queue length. Finally, we extend these results to a fork-join queue with different classes of servers. author: - Dennis Schol, Maria Vlasiou, and Bert Zwart title: Extreme values for the waiting time in large fork-join queues --- # Introduction Fork-join queues are a useful modeling tool for congestion in complex networks, such as assembly systems, communication networks, and supply chains. Such networks can be large, and assembly is only possible once all parts are available. Thus, the bottleneck of the system is the slowest production line. This setting motivates us to investigate such delays in a stylized version of a large fork-join queueing system. In this setting, a key quantity of interest is the behavior of the longest queue when the system is in steady state. Furthermore, we assume that the arrival and service processes are general and mutually independent. As we try to model systems with many servers, we are typically interested in the behavior of this random variable as $N\to\infty$.
In [@MeijerSchol2021], it is shown that $\max_{i\leq N}(B_i(s)+B_A(s)-\beta s)$ is in the domain of attraction of the normal distribution: $$\begin{aligned} \label{eq: central limit convergence} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}(B_i(s)+B_A(s)-\beta s)>\frac{\sigma^2}{2\beta}\log N+x\sqrt{\log N}}\overset{N\to\infty}{\longrightarrow}\operatorname{\mathbb{P}}\probarg*{\frac{\sigma\sigma_A}{\sqrt{2}\beta}X>x},\end{aligned}$$ with $X\overset{d}{=}\mathcal{N}(0,1)$, where $\{B_i(t),t\geq 0\}$ and $\{B_A(t),t\geq 0\}$ are Brownian motions with standard deviations $\sigma$ and $\sigma_A$, respectively. We see from the limit in [\[eq: central limit convergence\]](#eq: central limit convergence){reference-type="eqref" reference="eq: central limit convergence"} that $\max_{i\leq N}(B_i(s)+B_A(s)-\beta s)$ centers around $\frac{\sigma^2}{2\beta}\log N$ and fluctuates on the order of $\sqrt{\log N}$. This convergence result provides a prediction of the typical delay. In this study, we aim to extend this result to a more general setting. In particular, we investigate the maximum steady-state waiting time among the $N$ servers with a common arrival process $\max_{i\leq N}W_i(\infty)=\max_{i\leq N}\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-A(j))$. This expression follows from Lindley's recursion. Furthermore, we have that both $(S_i(j),j\geq 1,1\leq i\leq N)$ and $(A(j),j\geq 1)$ are i.i.d. and the inter-arrival times and service times are mutually independent. Thus $S_i(j)$ indicates the service time of the $j$-th customer in queue $i$, and $A(j)$ indicates the inter-arrival time between the $(j-1)$-st and the $j$-th customer. We see that the maximum steady-state waiting time is a maximum of $N$ dependent random variables, due to the common arrival process $(A(j),j\geq 1)$. The earliest literature on fork-join queues focuses on systems with two service stations.
Analytic results, such as asymptotics on limiting distributions, can be found in [@baccelli1985two; @flatto1984two; @de1988fredholm; @wright1992two]. However, due to the complexity of fork-join queues, these results cannot be extended to fork-join queues with more than two service stations. Thus, most of the work on fork-join queues with more than two service stations is focused on finding approximations of performance measures. For example, an approximation of the distribution of the response time in $M/M/s$ fork-join queues is given by Ko and Serfozo [@ko2004response]. Upper and lower bounds for the mean response time of servers, and other performance measures, are given by Nelson and Tantawi [@nelson1988approximate] and by Baccelli and Makowski [@baccelli1989queueing]. These bounds can be used for large fork-join queues, but apart from this, there is little literature on the convergence of the longest queue length in a fork-join queue as $N\to\infty$. Some results can be found in [@MeijerSchol2021; @scholMOR; @schol2022tail]. In [@MeijerSchol2021], the same convergence results are given as in this paper, but only for the Brownian fork-join queue; the present paper thus extends those results. This paper is organized as follows. In Section [2](#sec: model){reference-type="ref" reference="sec: model"}, we present our main results; in Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"} we state that the longest steady-state waiting time satisfies a central limit result; in Theorem [Theorem 2](#thm: convergence maximum queue length){reference-type="ref" reference="thm: convergence maximum queue length"} we show that a similar result holds for the longest queue length; and in Corollary [Corollary 1](#cor: K classes){reference-type="ref" reference="cor: K classes"} we present a similar result when the service distributions can differ among the different queues.
In Section [3](#sec: heuristic analysis){reference-type="ref" reference="sec: heuristic analysis"} we give an intuition why the results hold and how we prove these. Section [4](#sec: proofs){reference-type="ref" reference="sec: proofs"} is devoted to proofs. # Model {#sec: model} We investigate a fork-join queue with $N$ servers. Each of the $N$ servers has the same arrival stream of jobs and works independently from all other servers but with the same service distribution. In this section, we state the main result for the longest steady-state waiting time in Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}. We also show that a similar result holds for the maximum queue length in Lemma [Lemma 2](#lem: dist little law){reference-type="ref" reference="lem: dist little law"} and Theorem [Theorem 2](#thm: convergence maximum queue length){reference-type="ref" reference="thm: convergence maximum queue length"}. Furthermore, we extend the result in Theorem [Theorem 2](#thm: convergence maximum queue length){reference-type="ref" reference="thm: convergence maximum queue length"} to a heterogeneous model in Corollary [Corollary 1](#cor: K classes){reference-type="ref" reference="cor: K classes"}. We now specify some properties of the service times and interarrival times in this fork-join queueing system. First, the sequence of non-negative random variables $(S_i(j),i\geq 1,j\geq 1)$ are i.i.d. with $S_i(j)\sim S$, and $S_i(j)$ indicating the service time of the $j$-th subtask in queue $i$. Furthermore, the sequence of non-negative random variables $(A(j),j\geq 1)$ are i.i.d. with $A(j)\sim A$, $\mathbb{E}[A(j)]=1/\lambda$, $\text{Var}(A(j))=\sigma_A^2$, and $A(j)$ indicating the interarrival time between the $(j-1)$-st and the $j$-th task. Finally, we have that $\mathbb{E}[S_i(j)-A(j)]=-\mu$, with $\mu>0$, and $(A(j),j\geq 1)$ and $(S_i(j),i\geq 1,j\geq 1)$ are mutually independent. 
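The model just specified is straightforward to simulate. Below is a minimal Monte Carlo sketch using Lindley's recursion (which reappears in the next lemma); the exponential service and inter-arrival distributions and the parameter values are illustrative choices of ours, not part of the model, which allows general distributions.

```python
import random

def max_waiting_time(N, n_jobs, lam=1.0, nu=1.5, seed=7):
    """Lindley's recursion W_i(n+1) = max(W_i(n) + S_i(n) - A(n), 0) for N
    queues driven by a single common arrival stream (A(n) is shared), with
    S_i(n) ~ Exp(nu) i.i.d. and A(n) ~ Exp(lam) i.i.d.  Returns max_i W_i
    after n_jobs arrivals."""
    rng = random.Random(seed)
    W = [0.0] * N
    for _ in range(n_jobs):
        A = rng.expovariate(lam)   # common inter-arrival time
        W = [max(w + rng.expovariate(nu) - A, 0.0) for w in W]
    return max(W)

# Stable regime: E[S - A] = 1/nu - 1/lam < 0.  The paper's results predict
# that this maximum grows like (1/gamma) log N as N grows.
print(max_waiting_time(N=100, n_jobs=20_000))
```

Since all queues see the same arrivals, the $W_i$ produced here are dependent, which is exactly the feature that distinguishes this maximum from a maximum of independent queues.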
We can now write the cumulative distribution function of the longest steady-state waiting time as the cumulative distribution function of the maximum of $N$ all-time suprema of random walks involving the interarrival and service times. **Lemma 1**. *For the model given in Section [2](#sec: model){reference-type="ref" reference="sec: model"} with $W_i(1)=0$ for all $i\leq N$, we have that the longest waiting time in steady state satisfies $$\begin{aligned} \max_{i\leq N}W_i(\infty)\overset{d}{=}\max_{i\leq N}\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-A(j)).\end{aligned}$$* *Proof.* Using Lindley's recursion [@lindley1952theory], we can write the waiting time of tasks in front of server $i$ as $$W_i(n)=\sup_{0\leq k\leq n}\sum_{j=k+1}^{n}(S_i(j)-A(j)).$$ Thus, the longest steady-state waiting time satisfies $$\max_{i\leq N}W_i(n)=\max_{i\leq N}\sup_{0\leq k\leq n}\sum_{j=k+1}^{n}(S_i(j)-A(j)).$$ We have that $$\operatorname{\mathbb{P}}\probarg{\max_{i\leq N}W_i(\infty)\geq x}=\lim_{n\to\infty}\operatorname{\mathbb{P}}\probarg{\max_{i\leq N}W_i(n)\geq x}.$$ Because $$\begin{aligned} \max_{i\leq N}W_i(n)\overset{d}{=}\max_{i\leq N}\sup_{0\leq k\leq n}\sum_{j=1}^{k}(S_i(j)-A(j)),\end{aligned}$$ we obtain the lemma by using the monotone convergence theorem. ◻ In order to be able to prove convergence of the longest steady-state waiting time, we need some additional structure for the service-time distribution. We define $$\begin{aligned} \Lambda(\theta):=\log(\mathbb{E}[\exp(\theta(S-1/\lambda))]).\end{aligned}$$ Moreover, we write $\mathcal{D}(\Lambda):=\{\theta: \Lambda(\theta)<\infty\}$ and $\mathcal{D}^{\circ}(\Lambda)$ as the interior of $\mathcal{D}(\Lambda)$. **Assumption 1**. *We assume there exists a $\gamma>0$ such that* 1. *$\Lambda(\gamma)=0$,* 2. *$\gamma\in \mathcal{D}^{\circ}(\Lambda)$.* The first assumption indicates that the random variable $S-1/\lambda$ has a tail that is bounded by an exponential. The second assumption is needed for our proofs.
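For a concrete service distribution, the root $\gamma$ of $\Lambda(\gamma)=0$ can be computed numerically. The sketch below assumes $S\sim\mathrm{Exp}(\nu)$ (an illustrative choice of ours), for which $\Lambda(\theta)=\log(\nu/(\nu-\theta))-\theta/\lambda$ on $\theta<\nu$, so both parts of Assumption 1 hold.

```python
import math

def cramer_lundberg_gamma(nu=1.5, lam=1.0, tol=1e-12):
    """Bisection for the positive root of
    Lambda(theta) = log(nu / (nu - theta)) - theta / lam,
    the cumulant generating function of S - 1/lam when S ~ Exp(nu)."""
    def Lam(theta):
        return math.log(nu / (nu - theta)) - theta / lam

    # Lambda is convex with Lambda(0) = 0 and Lambda'(0) = 1/nu - 1/lam < 0
    # in the stable regime, so the positive root lies in (0, nu) and is
    # bracketed by a tiny positive theta and a point just below nu.
    lo, hi = 1e-9, nu - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if Lam(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

gamma = cramer_lundberg_gamma()
print(gamma)
```

For these parameters the maximum steady-state waiting time then centers around $\frac{1}{\gamma}\log N$.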
Namely, in [@dembo2009large Ex. 2.2.24] it is stated that when $\gamma\in \mathcal{D}^{\circ}(\Lambda)$, $\Lambda$ is infinitely differentiable at the point $\gamma$. For example, when $S-1/\lambda$ has density function $f_{S-1/\lambda}(x)=c_1\exp(-x)/(1+x^2)$ for $x>0$, where $c_1,\lambda$ are chosen such that $\operatorname{\mathbb{P}}\probarg{S-1/\lambda<x}$ is a cumulative distribution function and $\gamma=1$, then the first assumption is satisfied but the second is not, since $\Lambda(\theta)$ is not differentiable at $\theta=\gamma$. Our main result is given in Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}. **Theorem 1**. *For the model in Section [2](#sec: model){reference-type="ref" reference="sec: model"} where the sequence of service times $(S_i(j),i\geq 1,j\geq 1)$ satisfies Assumption [Assumption 1](#assump: 1){reference-type="ref" reference="assump: 1"}, we have that $$\begin{aligned} \label{eq: limit result maximum waiting time} \frac{\max_{i\leq N}W_i(\infty)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}\overset{d}{\longrightarrow}\frac{\sigma_A}{\sqrt{\Lambda'(\gamma)\gamma}}X,\end{aligned}$$ with $X\sim\mathcal{N}(0,1)$, as $N\to\infty$.* **Lemma 2** (Distributional Little's Law). *For $t\geq 0$, let $\mathbf{N}_A(t)$ indicate the number of arrivals up to time $t$, where the interarrival times are i.i.d. with $A(j)\sim A$. Then $$\begin{aligned} \max_{i\leq N}Q_i(\infty)\overset{d}{=}\mathbf{N}_A\left(\max_{i\leq N}W_i(\infty)\right).\end{aligned}$$* *Proof.* In [@haji1971relation], a short proof is given that for the $GI/GI/1$ queue under the FCFS policy, $Q\overset{d}{=}\mathbf{N}_A(W)$. We follow the same steps to prove that $\max_{i\leq N}Q_i(\infty)\overset{d}{=}\mathbf{N}_A\left(\max_{i\leq N}W_i(\infty)\right)$. First, let $t>0$ be given such that the system is in steady state.
Furthermore, let $\tilde{W}_i(j)$ be the waiting time of the $i$-th subtask of the $j$-th task numbered backward in time, beginning at time $t$. Thus, $\tilde{W}_i(1)$ is the waiting time of the $i$-th subtask of the last task arriving before time $t$. Now, let the random variable $T(j)$ be such that $t-T(j)$ is the arrival time of the $j$-th task numbered backward in time. Then, observe that the event $\{\max_{i\leq N}Q_i(t)\geq j\}$ is equivalent to the event that at least one subtask of the $j$-th task numbered backward in time is still in the queue at time $t$. Thus, $$\left\{\max_{i\leq N}Q_i(t)\geq j\right\}=\left\{\max_{i\leq N}\tilde{W}_i(j)\geq T(j)\right\},$$ for $j\geq 1$. The event $\{T(j)\leq x\}$ is equivalent to the event that the number of arrivals during the period $[t-x,t)$ is larger than or equal to $j$. The arrival process is a stationary process, thus the event $\{T(j)\leq x\}$ is equivalent to the event $\{\mathbf{N}_A(x)\geq j\}$. Additionally, the random variables $\max_{i\leq N}\tilde{W}_i(j)$ and $T(j)$ are independent. Therefore, $$\left\{\max_{i\leq N}Q_i(t)\geq j\right\}=\left\{\mathbf{N}_A\big(\max_{i\leq N}\tilde{W}_i(j)\big)\geq j\right\}.$$ As the system is in steady state, we get that $$\left\{\max_{i\leq N}Q_i(\infty)\geq j\right\}=\left\{\mathbf{N}_A\big(\max_{i\leq N}\tilde{W}_i(\infty)\big)\geq j\right\}.$$ ◻ Now, combining the result in Lemma [Lemma 2](#lem: dist little law){reference-type="ref" reference="lem: dist little law"} with the main result in Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}, we can find a similar convergence result for the maximum queue length in steady state. **Theorem 2**. 
*For the model in Section [2](#sec: model){reference-type="ref" reference="sec: model"} where the sequence of service times $(S_i(j),i\geq 1,j\geq 1)$ satisfies Assumption [Assumption 1](#assump: 1){reference-type="ref" reference="assump: 1"}, we have that $$\begin{aligned} \frac{\max_{i\leq N}Q_i(\infty)-\frac{\lambda}{\gamma}\log N}{\sqrt{\log N}}\overset{d}{\longrightarrow}\sqrt{\frac{\lambda^2\sigma_A^2}{\Lambda'(\gamma)\gamma}+\frac{\lambda^3\sigma_A^2}{\gamma}}X,\end{aligned}$$ with $X\sim\mathcal{N}(0,1)$, as $N\to\infty$.* *Proof.* Let $\hat{A}(j)\sim A$, let $(\hat{A}(j),j\geq 1)$ be mutually independent, and $\hat{A}(j)$ and $\max_{i\leq N}W_i(\infty)$ be mutually independent for all $j\geq 1$. Then, using Lemma [Lemma 2](#lem: dist little law){reference-type="ref" reference="lem: dist little law"} and Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}, we get that $$\begin{aligned} &\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}Q_i(\infty)\leq \frac{\lambda}{\gamma}\log N+x\sqrt{\log N}}\\ &\quad=\operatorname{\mathbb{P}}\probarg*{\mathbf{N}_A\left(\max_{i\leq N}W_i(\infty)\right)\leq \big\lfloor \frac{\lambda}{\gamma}\log N+x\sqrt{\log N}\big\rfloor}\\ &\quad=\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}W_i(\infty)\leq \sum_{j=1}^{\big\lfloor \frac{\lambda}{\gamma}\log N+x\sqrt{\log N}\big\rfloor}\hat{A}(j)}\\ &\quad=\operatorname{\mathbb{P}}\probarg*{\frac{\max_{i\leq N}W_i(\infty)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}\leq \frac{\sum_{j=1}^{\big\lfloor \frac{\lambda}{\gamma}\log N+x\sqrt{\log N}\big\rfloor}\hat{A}(j)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}}\\ &\quad\overset{N\to\infty}{\longrightarrow}\operatorname{\mathbb{P}}\probarg*{\frac{\sigma_A}{\sqrt{\Lambda'(\gamma)\gamma}}X_1\leq \frac{\sigma_A\sqrt{\lambda}}{\sqrt{\gamma}}X_2+ \frac{x}{\lambda}},\end{aligned}$$ with $X_1,X_2$ independent and standard normally distributed, this convergence holds, as 
$(\hat{A}(j),j\geq 1)$ and $\max_{i\leq N}W_i(\infty)$ are independent. Thus, the theorem follows. ◻ Until now, we considered the fork-join queueing system where each server has the same service distribution. In Corollary [Corollary 1](#cor: K classes){reference-type="ref" reference="cor: K classes"}, we show that we can extend the convergence of the longest steady-state waiting time to a more heterogeneous setting. We examine a fork-join queueing system with $N$ servers, where each of these $N$ servers belongs to one of $K$ classes. Additionally, we assume that the size of class $k$ with $k\in\{1,\ldots,K\}$ grows as $\alpha_kN$, as $N$ becomes large, with $0<\alpha_k<1$. **Corollary 1**. *Let $K\in\mathbb{N}$ and let $k=1,\ldots,K$. Furthermore, take an increasing sequence of integers given by $M_0^{(N)},M_1^{(N)},M_2^{(N)},\ldots, M_K^{(N)}>0$ with $M_0^{(N)}=1$, $M_K^{(N)}=N$, and $M_{k}^{(N)}-M_{k-1}^{(N)}\in \mathbb{N}$. Moreover, $(M_{k}^{(N)}-M_{k-1}^{(N)})/N\overset{N\to\infty}{\longrightarrow}\alpha_k\in(0,1]$ with $\sum_{k=1}^K\alpha_k=1$. Let $(S_i(j),j\geq 1,M_{k-1}^{(N)}< i\leq M_{k}^{(N)})$ be i.i.d. with $S_i(j)\sim S_k$, $(A(j),j\geq 1)$ be i.i.d. with $A(j)\sim A$, $\mathbb{E}[A(j)]=1/\lambda$, $\text{Var}(A(j))=\sigma_A^2$, $\mathbb{E}[S_i(j)-A(j)]=-\mu_k$ with $\mu_k>0$, $\Lambda_k(\theta)=\log(\mathbb{E}[\exp(\theta(S_k-1/\lambda))])$, $\Lambda_k$ satisfies Assumption [Assumption 1](#assump: 1){reference-type="ref" reference="assump: 1"}. Furthermore, $S_{i_1}(j_1)$ and $S_{i_2}(j_2)$ are mutually independent for all $i_1,i_2,j_1,j_2$. Let $K^*=\arg \min\{\gamma_k,k=1,\ldots,K\}$. We assume that $|K^*|=1$ and $k^*\in K^*$.
Then, $$\begin{aligned} \label{eq: K classes limit} \frac{\max_{i\leq N}W_i(\infty)-\frac{1}{\gamma_{k^*}}\log N}{\sqrt{\log N}}\overset{d}{\longrightarrow}\frac{\sigma_A}{\sqrt{\Lambda_{k^*}'(\gamma_{k^*})\gamma_{k^*}}}X,\end{aligned}$$ with $X\sim\mathcal{N}(0,1)$, as $N\to\infty$.* *Proof.* We prove this corollary by giving asymptotically sharp lower and upper bounds. First, observe that $$\max_{i\leq N}W_i(\infty)\geq_{st.} \max_{M_{k^*-1}^{(N)}< i\leq M_{k^*}^{(N)}}\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-A(j)),$$ with $X\geq_{st.}Y$ meaning that $\operatorname{\mathbb{P}}\probarg{X\geq x}\geq \operatorname{\mathbb{P}}\probarg{Y\geq x}$ for all $x$. Applying the result from Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"} to the lower bound yields [\[eq: K classes limit\]](#eq: K classes limit){reference-type="eqref" reference="eq: K classes limit"}. By using the union bound, we get the following upper bound: $$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}W_i(\infty)\geq\frac{1}{\gamma_{k^*}}\log N+x\sqrt{\log N}}\\ \leq\sum_{l=1}^K\operatorname{\mathbb{P}}\probarg*{\max_{M_{l-1}^{(N)} < i\leq M_l^{(N)}}\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-A(j))\geq\frac{1}{\gamma_{k^*}}\log N+x\sqrt{\log N}}.\end{gathered}$$ When $l\neq k^*$, applying the results from Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"} gives that $$\operatorname{\mathbb{P}}\probarg*{\max_{M_{l-1}^{(N)} < i\leq M_l^{(N)}}\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-A(j))\geq\frac{1}{\gamma_{l}}\log N+x\sqrt{\log N}} \overset{N\to\infty}{\longrightarrow}1-\Phi\left(\frac{\sqrt{\Lambda_l'(\gamma_l)\gamma_l}}{\sigma_A}x\right),$$ with $\Phi$ the cumulative distribution function of a standard normal random variable.
Because $\gamma_{k^*}<\gamma_l$, we get that $$\operatorname{\mathbb{P}}\probarg*{\max_{M_{l-1}^{(N)} < i\leq M_l^{(N)}}\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-A(j))\geq\frac{1}{\gamma_{k^*}}\log N+x\sqrt{\log N}}\overset{N\to\infty}{\longrightarrow}0.$$ The corollary follows. ◻ **Remark 1**. *In Corollary [Corollary 1](#cor: K classes){reference-type="ref" reference="cor: K classes"} we assume that $|K^*|=1$. The case $|K^*|>1$ follows analogously. Assume, for instance, that $|K^*|=2$. Then we can introduce a new random variable $\tilde{S}$ such that $\tilde{S}_i(j)\sim S_1$ with probability $\alpha$ and $\tilde{S}_i(j)\sim S_2$ with probability $1-\alpha$, such that $\gamma_1=\gamma_2=\gamma_{K^*}$. When $N$ is large enough, this fork-join queueing system behaves analogously to the original fork-join queue, and for this system $|K^*|=1$.* We give the proofs of the convergence of the longest steady-state waiting time in Section [4](#sec: proofs){reference-type="ref" reference="sec: proofs"}. First, we give a heuristic explanation of why the convergence result in Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"} holds, and we illustrate the structure of the proof.

# Heuristic analysis {#sec: heuristic analysis}

To prove Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}, we analyze lower and upper bounds of the tail probability $\operatorname{\mathbb{P}}\probarg{\max_{i\leq N}W_i(\infty)>\frac{1}{\gamma}\log N+x\sqrt{\log N}}$ of the longest steady-state waiting time among the $N$ servers, and we show that these lower and upper bounds converge to the same limit as $N\to\infty$. The longest steady-state waiting time has the form $\max_{i\leq N}W_i(\infty)\overset{d}{=}\sup_{k\geq 0}\max_{i\leq N}\sum_{j=1}^k(S_i(j)-A(j))$.
Thus, the longest steady-state waiting time is the all-time supremum of the maximum of $N$ random walks. For any process $(X(t),t\geq 0)$ and any $t>0$, we have $$\begin{aligned} \label{eq: lower bound} \operatorname{\mathbb{P}}\probarg*{\sup_{s>0}X(s)>x}\geq \operatorname{\mathbb{P}}\probarg*{X(t)>x}.\end{aligned}$$ Furthermore, due to the union bound, we have for all $0<t_1<t_2$ that $$\begin{aligned} \label{eq: upper bound} \operatorname{\mathbb{P}}\probarg*{\sup_{s>0}X(s)>x} \leq \operatorname{\mathbb{P}}\probarg*{\sup_{0<s<t_1}X(s)>x}+\operatorname{\mathbb{P}}\probarg*{\sup_{t_1\leq s<t_2}X(s)>x}+\operatorname{\mathbb{P}}\probarg*{\sup_{s\geq t_2}X(s)>x}.\end{aligned}$$ We use these types of lower and upper bounds to prove Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}. Obviously, not all choices of $t,t_1$, and $t_2$ give sharp bounds. We can, however, make an educated guess about which choices will give the sharpest bounds. Let us first replace the sequence of random variables $(A(j),j\geq 1)$ with their expectation $1/\lambda$. Thus, we look at a simplified fork-join queue with deterministic arrivals. Because the arrivals are deterministic, the waiting times are mutually independent, and we are able to use standard extreme-value theory. We know from the Cramér-Lundberg approximation [@asmussen2003applied Ch. XIII, Thm. 5.2] that $\operatorname{\mathbb{P}}\probarg{\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-1/\lambda)>x}\sim C\exp(-\gamma x)$, as $x\to\infty$, with $0<C<1$. Thus, $\operatorname{\mathbb{P}}\probarg{\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-1/\lambda)>\frac{1}{\gamma}\log N}\sim C/N$, as $N\to\infty$. By using basic extreme-value results (see [@de2007extreme Thm. 5.4.1, p. 188]), we can now conclude that $$\frac{\max_{i\leq N}\sup_{k\geq 0}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)}{\log N}\overset{\mathbb{P}}{\longrightarrow}\frac{1}{\gamma},$$ as $N\to\infty$.
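As a concrete numerical illustration (a sketch under assumed parameters, not part of the model above), suppose the service times are $\operatorname{Exp}(\mu)$ with $\mu=2$ and the interarrival times are deterministic with $1/\lambda=1$, so that $\Lambda(\theta)=\log(\mu/(\mu-\theta))-\theta/\lambda$. Classical $GI/M/1$ theory gives the decay rate of the stationary waiting-time tail as $\mu(1-\sigma)$, where $\sigma\in(0,1)$ solves $\sigma=e^{-\mu(1-\sigma)/\lambda}$, and this rate should coincide with the Cramér root $\gamma$:

```python
import math

# Assumed parameters for illustration only: deterministic interarrival
# times 1/lambda_ and Exp(mu) service times, so that
# Lambda(theta) = log(mu/(mu - theta)) - theta/lambda_ for theta < mu.
lambda_, mu = 1.0, 2.0

def Lambda(theta):
    return math.log(mu / (mu - theta)) - theta / lambda_

def positive_root(f, lo, hi, tol=1e-12):
    """Bisection for the root of f on (lo, hi), assuming f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

# Cramer root: the unique gamma > 0 with Lambda(gamma) = 0.
gamma = positive_root(Lambda, 1e-6, mu - 1e-6)

# GI/M/1 cross-check: sigma solves sigma = exp(-mu*(1 - sigma)/lambda_),
# and the tail decay rate of the stationary waiting time is mu*(1 - sigma).
sigma = 0.5
for _ in range(200):  # fixed-point iteration; the map is a contraction here
    sigma = math.exp(-mu * (1 - sigma) / lambda_)

print(gamma, mu * (1 - sigma))
```

Both computations return the same rate (roughly $1.59$ for these parameters), consistent with the role of $\gamma$ in the Cramér-Lundberg approximation above.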
Thus, we know that $\max_{i\leq N}\sup_{k\geq 0}\sum_{j=1}^k(S_i(j)-1/\lambda)$ centers around $\frac{1}{\gamma}\log N$. In order to find suitable lower and upper bounds of the form given in [\[eq: lower bound\]](#eq: lower bound){reference-type="eqref" reference="eq: lower bound"} and [\[eq: upper bound\]](#eq: upper bound){reference-type="eqref" reference="eq: upper bound"}, we need to estimate the hitting time $$\tau^{(N)}:=\inf\left\{k\geq 0:\max_{i\leq N}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)\geq \frac{1}{\gamma}\log N\right\}.$$ As mentioned before, we have that $\operatorname{\mathbb{P}}\probarg{\sup_{k\geq 0}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)>\frac{1}{\gamma}\log N}\sim C/N$ as $N\to\infty$. Thus, a good estimate $\hat{\tau}^{(N)}$ for $\tau^{(N)}$ should also satisfy the property that $$\begin{aligned} \label{eq: liminf tail prob} \liminf_{N\to\infty}N\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^{\hat{\tau}^{(N)}}\left(S_i(j)-\frac{1}{\lambda}\right)>\frac{1}{\gamma}\log N}>0 \end{aligned}$$ and $$\begin{aligned} \label{eq: limsup tail prob} \limsup_{N\to\infty}N\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^{\hat{\tau}^{(N)}}\left(S_i(j)-\frac{1}{\lambda}\right)>\frac{1}{\gamma}\log N}<\infty.\end{aligned}$$ Now, by using Cramér's theorem and the fact that $\Lambda$ is at least twice differentiable at $\gamma$, we know that $$\begin{aligned} \label{eq: cramers theorem} \lim_{n\to\infty}\frac{1}{n}\log\left(\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^n\left(S_i(j)-\frac{1}{\lambda}\right)\geq nx}\right)= -\Lambda^*(x),\end{aligned}$$ for all $x>\mathbb{E}[S_i(j)-1/\lambda]$ with $\Lambda^*(x)=\sup_{t\in\mathbb{R}}(tx-\Lambda(t))$; see [@asmussen2003applied Ch. XIII, Thm. 2.1 (2.3)]. We write $\hat{\tau}^{(N)}=\hat{c}\log N$.
Then we can conclude from Equation [\[eq: cramers theorem\]](#eq: cramers theorem){reference-type="eqref" reference="eq: cramers theorem"} that $$\begin{aligned} \label{eq: cramers theorem2} \lim_{N\to\infty}\frac{1}{\log N}\log\left(\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^{\lfloor\hat{c}\log N\rfloor}\left(S_i(j)-\frac{1}{\lambda}\right)\geq x\hat{c}\log N}\right)=-\Lambda^*(x)\hat{c}.\end{aligned}$$ Thus, in order to find a good estimate $\hat{\tau}^{(N)}$ for the hitting time $\tau^{(N)}$ we need to solve two equations. First, $x\hat{c}=1/\gamma$, because we know that the longest steady-state waiting time under deterministic arrivals is approximately equal to $\frac{1}{\gamma}\log N$. Therefore the expression $x\hat{c}\log N$ in [\[eq: cramers theorem2\]](#eq: cramers theorem2){reference-type="eqref" reference="eq: cramers theorem2"} should be the same as $\frac{1}{\gamma}\log N$. Second, $-\Lambda^*(x)\hat{c}=-1$, because we know from [\[eq: liminf tail prob\]](#eq: liminf tail prob){reference-type="eqref" reference="eq: liminf tail prob"}, [\[eq: limsup tail prob\]](#eq: limsup tail prob){reference-type="eqref" reference="eq: limsup tail prob"}, and [\[eq: cramers theorem2\]](#eq: cramers theorem2){reference-type="eqref" reference="eq: cramers theorem2"} that for large $N$ $$\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^{\lfloor\hat{c}\log N\rfloor}\left(S_i(j)-\frac{1}{\lambda}\right)\geq x\hat{c}\log N}\approx \frac{1}{N}=\exp(-\Lambda^*(x)\hat{c}\log N).$$ Combining these two equations gives $\hat{c}=\frac{1}{\Lambda'(\gamma)\gamma}$ and $x=\Lambda'(\gamma)$. Clearly, $x\hat{c}=1/\gamma$, and $$\Lambda^*(x)\hat{c}=\frac{\Lambda^*(\Lambda'(\gamma))}{\gamma\Lambda'(\gamma)}.$$ From [@dembo2009large Lem. 2.2.5(c)], we know that $\Lambda^*(\Lambda'(\gamma))=\gamma\Lambda'(\gamma)$, thus indeed, $\Lambda^*(x)\hat{c}=1$. Finally, we can conclude that $\hat{\tau}^{(N)}=\hat{c}\log N=\frac{1}{\gamma\Lambda'(\gamma)}\log N$. 
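These identities are easy to verify numerically in a concrete case. The sketch below uses assumed parameters that are not from the model above ($\operatorname{Exp}(2)$ service times and deterministic unit interarrivals) and checks that $x\hat{c}=1/\gamma$ and $\Lambda^*(x)\hat{c}=1$; for this $\Lambda$, the Legendre transform has the explicit maximizer $t^*=\mu-1/(x+1/\lambda)$ solving $\Lambda'(t^*)=x$:

```python
import math

# Assumed parameters for illustration only: Exp(mu) service times and
# deterministic interarrival times 1/lambda_, so that
# Lambda(theta) = log(mu/(mu - theta)) - theta/lambda_.
lambda_, mu = 1.0, 2.0

def Lambda(theta):
    return math.log(mu / (mu - theta)) - theta / lambda_

def Lambda_prime(theta):
    return 1.0 / (mu - theta) - 1.0 / lambda_

def Lambda_star(x):
    # Legendre transform sup_t (t*x - Lambda(t)); for this Lambda the
    # maximizer solves Lambda'(t) = x, i.e. t = mu - 1/(x + 1/lambda_).
    t = mu - 1.0 / (x + 1.0 / lambda_)
    return t * x - Lambda(t)

# Cramer root gamma of Lambda(gamma) = 0 via bisection.
lo, hi = 1e-6, mu - 1e-6
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if Lambda(mid) < 0 else (lo, mid)
gamma = (lo + hi) / 2

x = Lambda_prime(gamma)                      # x = Lambda'(gamma)
c_hat = 1.0 / (Lambda_prime(gamma) * gamma)  # c_hat = 1/(Lambda'(gamma)*gamma)

# The two defining equations of the hitting-time estimate.
print(x * c_hat - 1.0 / gamma, Lambda_star(x) * c_hat - 1.0)
```

Both differences vanish up to the bisection tolerance, confirming the identity $\Lambda^*(\Lambda'(\gamma))=\gamma\Lambda'(\gamma)$ in this example.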
Obviously, in order for $\hat{\tau}^{(N)}$ to be a good estimate of the hitting time, we need that $\Lambda'(\gamma)>0$. This is the case because $\Lambda(\theta)$ is convex; see [@asmussen2003applied Ch. XIII, Thm. 5.1]. Until this point, we know the first-order scaling of the largest of $N$ steady-state waiting times with deterministic arrivals, and we can give an estimate of the hitting time of this value. Now, we can use these results to obtain a second-order convergence result for the longest steady-state waiting time with stochastic arrivals. Following the analysis above together with the lower bound in [\[eq: lower bound\]](#eq: lower bound){reference-type="eqref" reference="eq: lower bound"}, we see that $$\begin{gathered} \label{eq: lower bound2} \operatorname{\mathbb{P}}\probarg*{\frac{\max_{i\leq N}W_i(\infty)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}\geq x}\\\geq \operatorname{\mathbb{P}}\probarg*{\frac{\max_{i\leq N}\sup_{\big(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\big)\log N<k<\frac{1}{\Lambda'(\gamma)\gamma}\log N}\sum_{j=1}^{k}(S_i(j)-A(j))-\frac{1}{\gamma}\log N}{\sqrt{\log N}}\geq x},\end{gathered}$$ with $\epsilon>0$ small. For this lower bound, we can write $$\begin{gathered} \frac{\max_{i\leq N}\sup_{\big(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\big)\log N<k<\frac{1}{\Lambda'(\gamma)\gamma}\log N}\sum_{j=1}^{k}(S_i(j)-A(j))-\frac{1}{\gamma}\log N}{\sqrt{\log N}}\\ \geq \frac{\max_{i\leq N}\sum_{j=1}^{\big\lfloor\frac{1}{\Lambda'(\gamma)\gamma}\log N\big\rfloor}\left(S_i(j)-\frac{1}{\lambda}\right)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}+\frac{\sum_{j=1}^{\big\lfloor\frac{1}{\Lambda'(\gamma)\gamma}\log N\big\rfloor}(\frac{1}{\lambda}-A(j))}{\sqrt{\log N}}.
\end{gathered}$$ Obviously, the second term on the right-hand side of the display above satisfies the central limit theorem. In Lemma [Lemma 3](#lem: lower bound){reference-type="ref" reference="lem: lower bound"}, we prove that the right-hand side in [\[eq: lower bound2\]](#eq: lower bound2){reference-type="eqref" reference="eq: lower bound2"} converges to a function that is close to the tail probability of a normally distributed random variable. Furthermore, we show in Lemmas [Lemma 4](#lem: supremum 0 t){reference-type="ref" reference="lem: supremum 0 t"}, [Lemma 5](#lem: t-e t+e){reference-type="ref" reference="lem: t-e t+e"}, and [Lemma 6](#lem: t+e infty){reference-type="ref" reference="lem: t+e infty"} that this lower bound is sharp. To achieve this, we first divide the supremum over all $k\geq 0$ in the random variable $\max_{i\leq N}W_i(\infty)$ into three parts. After that, we take the supremum over the intervals $\left[0,\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\log N\right]$, $\left(\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\log N,\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)\log N\right]$, and $\left(\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)\log N,\infty\right)$, with $\epsilon>0$ small. Consequently, we show that the tail probabilities of the first and third suprema of the maximum of $N$ random walks asymptotically vanish, while $$\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{\big(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\big)\log N<k<\big(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\big)\log N}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}$$ converges to a limit close to the lower bound as $N\to\infty$. **Remark 2**.
*The lower bound presented in Equation [\[eq: lower bound\]](#eq: lower bound){reference-type="eqref" reference="eq: lower bound"} gives us information about the convergence rate of the result in Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}. From the Berry-Esséen theorem [@michel1981constant], we know that when $\frac{1}{\sqrt{n}}\sum_{i=1}^nX_i\overset{d}{\longrightarrow}X\sim\mathcal{N}(0,1)$, the convergence rate is of order $1/\sqrt{n}$. Thus, the lower bound in [\[eq: lower bound\]](#eq: lower bound){reference-type="eqref" reference="eq: lower bound"} shows that the convergence rate is of order $1/\sqrt{\log N}$.* # Proofs {#sec: proofs} **Lemma 3**. *Given the model in Section [2](#sec: model){reference-type="ref" reference="sec: model"} where the sequence of service times $(S_i(j),i\geq 1,j\geq 1)$ satisfies Assumption [Assumption 1](#assump: 1){reference-type="ref" reference="assump: 1"}, $0<\epsilon<\frac{1}{\Lambda'(\gamma)\gamma}$, $t_1^{(N)}=\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\log N$, and $t_2^{(N)}=\frac{1}{\Lambda'(\gamma)\gamma}\log N$, then for all $x\in \mathbb{R}$, we have that $$\begin{gathered} \label{eq: lower bound convergence} \liminf_{N\to\infty}\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{t_1^{(N)}<k<t_2^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ \geq \operatorname{\mathbb{P}}\probarg*{\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon}X_1-\sigma_A\sqrt{\epsilon}\left|X_2\right|>x},\end{gathered}$$ with $X_1,X_2\sim\mathcal{N}(0,1)$ and independent.* *Proof.* In order to prove this convergence result, we first bound $$\max_{i\leq N}\sup_{t_1^{(N)}<k<t_2^{(N)}}\sum_{j=1}^k(S_i(j)-A(j)) \geq\max_{i\leq N}\sup_{t_1^{(N)}<k<t_2^{(N)}}\sum_{j=1}^k\bigg(S_i(j)-\frac{1}{\lambda}\bigg) +\inf_{t_1^{(N)}< k< t_2^{(N)}}\sum_{j=1}^{k}\bigg(\frac{1}{\lambda}-A(j)\bigg).$$ We treat the terms on the 
right-hand side separately. We first prove that $$\begin{aligned} \label{eq: lower bound arrival process clt} \frac{\inf_{t_1^{(N)}< k< t_2^{(N)}}\sum_{j=1}^{k}\left(\frac{1}{\lambda}-A(j)\right)}{\sqrt{\log N}}\overset{d}{\longrightarrow}\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon}X_1-\sigma_A\sqrt{\epsilon}\left|X_2\right|,\end{aligned}$$ as $N\to\infty$. Afterwards, we prove that $$\begin{aligned} \label{eq: independent part to zero} \frac{\max_{i\leq N}\sup_{t_1^{(N)}<k<t_2^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}\overset{\mathbb{P}}{\longrightarrow}0,\end{aligned}$$ as $N\to\infty$. The first convergence result follows from Donsker's theorem. The left-hand side in [\[eq: lower bound arrival process clt\]](#eq: lower bound arrival process clt){reference-type="eqref" reference="eq: lower bound arrival process clt"} is an infimum of a random walk with drift 0. Then for $(B(t),t\geq 0)$ a Brownian motion with drift 0 and standard deviation 1, by using Donsker's theorem [@donsker1951invariance] and the fact that the infimum is a continuous functional, we obtain that $$\operatorname{\mathbb{P}}\probarg*{\frac{\inf_{t_1^{(N)}< k< t_2^{(N)}}\sum_{j=1}^{k}(\frac{1}{\lambda}-A(j))}{\sqrt{\log N}}>x}\overset{N\to\infty}{\longrightarrow}\operatorname{\mathbb{P}}\probarg*{\inf_{\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)<s<\frac{1}{\Lambda'(\gamma)\gamma}}\sigma_A B(s)>x}.$$ Furthermore, we can rewrite $$\inf_{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon<s<\frac{1}{\Lambda'(\gamma)\gamma}}\sigma_A B(s)\overset{d}{=}\sigma_AB\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)-\sup_{0<s<\epsilon}\sigma_A\tilde{B}(s),$$ where $\tilde{B}$ is an independent copy of $B$. Obviously, we have that $\sigma_AB\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\overset{d}{=}\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon}X_1$ with $X_1\sim\mathcal{N}(0,1)$. Because $\sup_{0<s<\epsilon}\sigma_A\tilde{B}(s)\overset{d}{=}\sigma_A\sqrt{\epsilon}|X_2|$, with $X_2\sim\mathcal{N}(0,1)$, we have that the limit in [\[eq: lower bound arrival process clt\]](#eq: lower bound arrival process clt){reference-type="eqref" reference="eq: lower bound arrival process clt"} follows. In order to prove the second convergence result, we define for $A\in\mathcal{F}_k$, with $\{\mathcal{F}_k,k\geq 1\}$ the natural filtration, the probability measure $$\mathbb{P}_i(A):=\mathbb{E}\left[\exp\left(\gamma\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)\right)\mathbbm{1}(A)\right];$$ see [@asmussen2003applied Ch. XIII, Par. 3]. Now, we know that $$\mathbb{E}_i\left[S_i(j)-\frac{1}{\lambda}\right]=\mathbb{E}\left[\left(S_i(j)-\frac{1}{\lambda}\right)\exp\left(\gamma\left(S_i(j)-\frac{1}{\lambda}\right)\right)\right]=\Lambda'(\gamma).$$ Thus, by checking the conditions in [@asmussen2003applied Ch. XIII, Thm. 5.6], we see that $$\begin{gathered} \label{eq: supremum conv 1} \operatorname{\mathbb{P}}\probarg*{\sup_{0\leq k< t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ =C\exp\bigg(-\gamma\bigg(\frac{1}{\gamma}\log N+x\sqrt{\log N}\bigg)\bigg)\Phi\bigg(-x\frac{\sqrt{\gamma\Lambda'(\gamma)}}{\sqrt{\Lambda''(\gamma)}}\bigg)(1+o(1)).\end{gathered}$$ With the same approach, we get from [@asmussen2003applied Ch. XIII, Thm. 5.6] that $$\begin{aligned} \label{eq: supremum conv 2} \operatorname{\mathbb{P}}\probarg*{\sup_{0\leq k< t_1^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}} =o\bigg(C\exp\bigg(-\gamma\bigg(\frac{1}{\gamma}\log N+x\sqrt{\log N}\bigg)\bigg)\bigg),\end{aligned}$$ as $N\to\infty$, for all $x\in\mathbb{R}$.
By applying the union bound, we get that $$\begin{aligned} &\operatorname{\mathbb{P}}\probarg*{\sup_{0\leq k< t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ &\quad\leq\operatorname{\mathbb{P}}\probarg*{\sup_{0\leq k< t_1^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}}+\operatorname{\mathbb{P}}\probarg*{\sup_{t_1^{(N)}< k< t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ &\quad\leq\operatorname{\mathbb{P}}\probarg*{\sup_{0\leq k< t_1^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}}+\operatorname{\mathbb{P}}\probarg*{\sup_{0\leq k< t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}}.\end{aligned}$$ We can conclude from these bounds, together with [\[eq: supremum conv 1\]](#eq: supremum conv 1){reference-type="eqref" reference="eq: supremum conv 1"} and [\[eq: supremum conv 2\]](#eq: supremum conv 2){reference-type="eqref" reference="eq: supremum conv 2"} that $$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\sup_{t_1^{(N)}< k< t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\geq\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ =C\exp\bigg(-\gamma\bigg(\frac{1}{\gamma}\log N+x\sqrt{\log N}\bigg)\bigg)\Phi\bigg(-x\frac{\sqrt{\gamma\Lambda'(\gamma)}}{\sqrt{\Lambda''(\gamma)}}\bigg)(1+o(1)).\end{gathered}$$ By using this expression it is easy to derive that for $x>0$ $$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{t_1^{(N)}< k<t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\leq\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ =\operatorname{\mathbb{P}}\probarg*{\sup_{t_1^{(N)}< k< t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\leq\frac{1}{\gamma}\log N+x\sqrt{\log N}}^N\overset{N\to\infty}{\longrightarrow}1.\end{gathered}$$ Similarly, for $x<0$, 
$$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{t_1^{(N)}< k<t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\leq\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ =\operatorname{\mathbb{P}}\probarg*{\sup_{t_1^{(N)}< k< t_2^{(N)}}\sum_{j=1}^{k}\left(S_i(j)-\frac{1}{\lambda}\right)\leq\frac{1}{\gamma}\log N+x\sqrt{\log N}}^N\overset{N\to\infty}{\longrightarrow}0.\end{gathered}$$ Combining these two results gives us the limit in [\[eq: independent part to zero\]](#eq: independent part to zero){reference-type="eqref" reference="eq: independent part to zero"}. Finally, the convergence result in [\[eq: lower bound convergence\]](#eq: lower bound convergence){reference-type="eqref" reference="eq: lower bound convergence"} follows from the two limits in [\[eq: lower bound arrival process clt\]](#eq: lower bound arrival process clt){reference-type="eqref" reference="eq: lower bound arrival process clt"} and [\[eq: independent part to zero\]](#eq: independent part to zero){reference-type="eqref" reference="eq: independent part to zero"}. ◻ **Lemma 4**. *Given the model in Section [2](#sec: model){reference-type="ref" reference="sec: model"} where the sequence of service times $(S_i(j),i\geq 1,j\geq 1)$ satisfies Assumption [Assumption 1](#assump: 1){reference-type="ref" reference="assump: 1"}, $t_1^{(N)}=\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\log N$, $\delta=\frac{\delta_1}{\Lambda'(\gamma)\gamma}+\delta_2$ with $\delta_{1,2}>0$ and small, and $\epsilon=\delta^{1/4}$, then for all $x\in \mathbb{R}$, we have that $$\begin{aligned} \label{eq: sup 0 t} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{0\leq k<t_1^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\overset{N\to\infty}{\longrightarrow}0.\end{aligned}$$* *Proof.* We derive upper bounds for the left-hand side of [\[eq: sup 0 t\]](#eq: sup 0 t){reference-type="eqref" reference="eq: sup 0 t"} that converge to 0 as $N\to\infty$. 
By using the subadditivity property of the sup operator and the union bound, we get that $$\begin{aligned} &\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{0\leq k<t_1^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ &\quad\leq \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{0\leq k<t_1^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}+\delta_1\right)>\left(\frac{1}{\gamma}-\delta_2\right)\log N}\label{subeq: upper bound 0}\\ &\quad\quad+\operatorname{\mathbb{P}}\probarg*{\sup_{k\geq 0}\sum_{j=1}^k\left(\frac{1}{\lambda}-\delta_1-A(j)\right)>\delta_2\log N+x\sqrt{\log N}}\label{subeq: upper bound 1 performance}.\end{aligned}$$ First, because $\mathbb{E}[\frac{1}{\lambda}-\delta_1-A(j)]<0$, we get that $$\operatorname{\mathbb{P}}\probarg*{\sup_{k\geq 0}\sum_{j=1}^k\left(\frac{1}{\lambda}-\delta_1-A(j)\right)>\delta_2\log N+x\sqrt{\log N}}\overset{N\to\infty}{\longrightarrow}0.$$ Second, we can bound the term in [\[subeq: upper bound 0\]](#subeq: upper bound 0){reference-type="eqref" reference="subeq: upper bound 0"} as follows: $$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{0\leq k<t_1^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}+\delta_1\right)>\left(\frac{1}{\gamma}-\delta_2\right)\log N}\\ \leq \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{0\leq k<t_1^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)>\left(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\right)\log N}.\end{gathered}$$ Now, we can bound this further: $$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{0\leq k<t_1^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)>\left(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\right)\log N}\\ \leq \sum_{k=0}^{\lfloor t_1^{(N)}\rfloor }N\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)>\left(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\right)\log N}.\end{gathered}$$ By using
Chernoff's bound we obtain that for $\Lambda(\theta)<\infty$ $$\begin{aligned} &\sum_{k=0}^{\lfloor t_1^{(N)}\rfloor }N\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)>\left(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\right)\log N}\\ &\quad\leq N \sum_{k=0}^{\lfloor t_1^{(N)}\rfloor }\exp(k \Lambda(\theta))\exp\bigg(-\theta\bigg(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\bigg)\log N\bigg)\nonumber\\ &\quad = N\frac{-1+\exp\left((\lfloor t_1^{(N)}\rfloor+1)\Lambda(\theta)\right)}{\exp(\Lambda(\theta))-1}\exp\bigg(-\theta\bigg(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\bigg)\log N\bigg).\label{subeq: upper bound 2 performance}\end{aligned}$$ Now, $$\begin{gathered} \frac{\log\left(N\frac{-1+\exp\left((\lfloor t_1^{(N)}\rfloor+1)\Lambda(\theta)\right)}{\exp(\Lambda(\theta))-1}\exp\bigg(-\theta\bigg(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\bigg)\log N\bigg)\right)}{\log N}\\ \overset{N\to\infty}{\longrightarrow}1-\left(\theta\left(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\Lambda(\theta)\right).\end{gathered}$$ In order to make the bound in [\[subeq: upper bound 2 performance\]](#subeq: upper bound 2 performance){reference-type="eqref" reference="subeq: upper bound 2 performance"} as sharp as possible, we need to choose a convenient $\theta$. The choice of $\theta$ that gives the sharpest bound maximizes the function $\theta\left(\frac{1}{\gamma}-\frac{\delta_1}{\Lambda'(\gamma)\gamma}-\delta_2\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\Lambda(\theta)$. We have that $\delta=\frac{\delta_1}{\Lambda'(\gamma)\gamma}+\delta_2$ and $\epsilon=\delta^{1/4}$. Furthermore, we choose $\theta=\gamma+\sqrt{\delta}$. 
This gives us a sharp enough bound in [\[subeq: upper bound 2 performance\]](#subeq: upper bound 2 performance){reference-type="eqref" reference="subeq: upper bound 2 performance"}. We obviously have that $$\sup_{\eta\in\mathbb{R}}\left(\eta\left(\frac{1}{\gamma}-\delta\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}-\delta^{1/4}\right)\Lambda(\eta)\right) \geq \left((\gamma+\sqrt{\delta})\left(\frac{1}{\gamma}-\delta\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}-\delta^{1/4}\right)\Lambda(\gamma+\sqrt{\delta})\right).$$ The first-order Taylor expansion of $\Lambda(\gamma+\sqrt{\delta})$ around $\gamma$ gives $$\Lambda(\gamma+\sqrt{\delta})=\Lambda(\gamma)+\sqrt{\delta}\Lambda'(\gamma)+O(\delta)=\sqrt{\delta}\Lambda'(\gamma)+O(\delta).$$ Thus, $$\left((\gamma+\sqrt{\delta})\left(\frac{1}{\gamma}-\delta\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}-\delta^{1/4}\right)\Lambda(\gamma+\sqrt{\delta})\right)=1+\delta^{3/4}\Lambda'(\gamma)+O(\delta)>1,$$ for $\delta$ small enough. Hence, the expression in [\[subeq: upper bound 2 performance\]](#subeq: upper bound 2 performance){reference-type="eqref" reference="subeq: upper bound 2 performance"} is upper bounded by $N^{-\delta^{3/4}\Lambda'(\gamma)-O(\delta)}\overset{N\to\infty}{\longrightarrow}0$. ◻ **Lemma 5**.
*Given the model in Section [2](#sec: model){reference-type="ref" reference="sec: model"} where the sequence of service times $(S_i(j),i\geq 1,j\geq 1)$ satisfies Assumption [Assumption 1](#assump: 1){reference-type="ref" reference="assump: 1"}, $0<\epsilon<\frac{1}{\Lambda'(\gamma)\gamma}$, $t_1^{(N)}=\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\log N$, and $t_3^{(N)}=\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)\log N$, then for all $x\in \mathbb{R}$, we have that $$\begin{gathered} \limsup_{N\to\infty}\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{t_1^{(N)}\leq k<t_3^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ \leq \operatorname{\mathbb{P}}\probarg*{\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon}X_1+\sigma_A\sqrt{2\epsilon}\left|X_2\right|>x},\end{gathered}$$ with $X_1,X_2\sim\mathcal{N}(0,1)$ and independent.* *Proof.* In order to prove this lemma, we first bound $$\begin{aligned} &\frac{\max_{i\leq N}\sup_{t_1^{(N)}\leq k<t_3^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))-\frac{1}{\gamma}\log N}{\sqrt{\log N}}\nonumber\\ &\quad\leq \frac{\max_{i\leq N}\sup_{t_1^{(N)}\leq k<t_3^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}+ \frac{\sup_{t_1^{(N)}\leq k<t_3^{(N)}}\sum_{j=1}^k\left(\frac{1}{\lambda}-A(j)\right)}{\sqrt{\log N}} \nonumber\\ &\quad\leq \frac{\max_{i\leq N}\sup_{k\geq 0}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}+ \frac{\sup_{t_1^{(N)}\leq k<t_3^{(N)}}\sum_{j=1}^k\left(\frac{1}{\lambda}-A(j)\right)}{\sqrt{\log N}}.\label{subeq: upper bound term 1}\end{aligned}$$ We first look at the first term in [\[subeq: upper bound term 1\]](#subeq: upper bound term 1){reference-type="eqref" reference="subeq: upper bound term 1"}. This term gives the rescaled longest steady-state waiting time of $N$ i.i.d. $D/G/1$ queues.
We know that $$\operatorname{\mathbb{P}}\probarg*{\sup_{k\geq 0}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)>x}\sim C\exp(-\gamma x),$$ as $x\to\infty$, with $0<C<1$; see [@asmussen2003applied Ch. XIII, Thm. 5.2]. Thus for $x>0$, $$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\frac{\max_{i\leq N}\sup_{k\geq 0}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}>x}\\\sim 1-\left(1-C\exp(-\gamma (1/\gamma\log N+x\sqrt{\log N}))\right)^N\overset{N\to\infty}{\longrightarrow}0.\end{gathered}$$ Similarly, for $x<0$, $$\begin{gathered} \operatorname{\mathbb{P}}\probarg*{\frac{\max_{i\leq N}\sup_{k\geq 0}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}\right)-\frac{1}{\gamma}\log N}{\sqrt{\log N}}>x}\\\sim 1-\left(1-C\exp(-\gamma (1/\gamma\log N+x\sqrt{\log N}))\right)^N\overset{N\to\infty}{\longrightarrow}1.\end{gathered}$$ Thus, the first term in [\[subeq: upper bound term 1\]](#subeq: upper bound term 1){reference-type="eqref" reference="subeq: upper bound term 1"} converges in probability to 0. Now, we prove convergence of the tail probability of the second term in [\[subeq: upper bound term 1\]](#subeq: upper bound term 1){reference-type="eqref" reference="subeq: upper bound term 1"}. This term is a supremum of a random walk with drift 0. Then for $(B(t),t\geq 0)$ a Brownian motion with drift 0 and standard deviation 1, by using Donsker's theorem [@donsker1951invariance] and the fact that the supremum is a continuous functional, we obtain with a similar analysis as in Lemma [Lemma 3](#lem: lower bound){reference-type="ref" reference="lem: lower bound"}, that $$\operatorname{\mathbb{P}}\probarg*{\frac{\sup_{t_1^{(N)}\leq k<t_3^{(N)}}\sum_{j=1}^k(\frac{1}{\lambda}-A(j))}{\sqrt{\log N}}>x} \overset{N\to\infty}{\longrightarrow}\operatorname{\mathbb{P}}\probarg*{\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon}X_1+\sigma_A\sqrt{2\epsilon}\left|X_2\right|>x}.$$ ◻ **Lemma 6**. 
*Given the model in Section [2](#sec: model){reference-type="ref" reference="sec: model"} where the sequence of service times $(S_i(j),i\geq 1,j\geq 1)$ satisfies Assumption [Assumption 1](#assump: 1){reference-type="ref" reference="assump: 1"}, $\delta=\frac{\delta_1}{\Lambda'(\gamma)\gamma}+\delta_2$ with $\delta_{1,2}>0$ and small, $\epsilon=\delta^{1/4}$, and $t_3^{(N)}=\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)\log N$, then for all $x\in \mathbb{R}$, we have that $$\begin{aligned} \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{k\geq t_3^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\overset{N\to\infty}{\longrightarrow}0.\end{aligned}$$* *Proof.* As in the proof of Lemma [Lemma 4](#lem: supremum 0 t){reference-type="ref" reference="lem: supremum 0 t"}, we derive upper bounds for $$\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{k\geq t_3^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}$$ that converge to 0 as $N\to\infty$. 
First, by using subadditivity and the union bound, we obtain $$\begin{aligned} &\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{k\geq t_3^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ &\quad\leq \operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{k\geq t_3^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}+\delta_1\right)>\left(\frac{1}{\gamma}-\delta_2\right)\log N}\\ &\quad\quad+\operatorname{\mathbb{P}}\probarg*{\sup_{k\geq 0}\sum_{j=1}^k\left(\frac{1}{\lambda}-\delta_1-A(j)\right)>\delta_2\log N+x\sqrt{\log N}}.\end{aligned}$$ As in the proof of Lemma [Lemma 4](#lem: supremum 0 t){reference-type="ref" reference="lem: supremum 0 t"}, we have that $$\operatorname{\mathbb{P}}\probarg*{\sup_{k\geq 0}\sum_{j=1}^k\left(\frac{1}{\lambda}-\delta_1-A(j)\right)>\delta_2\log N+x\sqrt{\log N}}\overset{N\to\infty}{\longrightarrow}0.$$ Furthermore, observe that $\log\mathbb{E}[\exp(\theta(S_i(j)-1/\lambda+\delta_1))]=\Lambda(\theta)+\theta\delta_1$. Now, as in the proof of Lemma [Lemma 4](#lem: supremum 0 t){reference-type="ref" reference="lem: supremum 0 t"}, we can bound $$\begin{aligned} &\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{k\geq t_3^{(N)}}\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}+\delta_1\right)>\left(\frac{1}{\gamma}-\delta_2\right)\log N}\\ &\quad \leq N\sum_{k=\lfloor t_3^{(N)}\rfloor }^{\infty}\operatorname{\mathbb{P}}\probarg*{\sum_{j=1}^k\left(S_i(j)-\frac{1}{\lambda}+\delta_1\right)>\left(\frac{1}{\gamma}-\delta_2\right)\log N}\\ &\quad\leq N \sum_{k=\lfloor t_3^{(N)}\rfloor }^{\infty}\exp(k (\Lambda(\theta)+\theta\delta_1))\exp\left(-\theta\left(\frac{1}{\gamma}-\delta_2\right)\log N\right)\nonumber\\ &\quad = N\frac{\exp\left(\lfloor t_3^{(N)}\rfloor(\Lambda(\theta)+\theta\delta_1)\right)}{1-\exp(\Lambda(\theta)+\theta\delta_1)}\exp\left(-\theta\left(\frac{1}{\gamma}-\delta_2\right)\log N\right),\label{subeq: upper bound 3}\end{aligned}$$ when $\Lambda(\theta)+\theta\delta_1<0$, where the last equality sums the geometric series with ratio $\exp(\Lambda(\theta)+\theta\delta_1)<1$.
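The geometric-series step in the Chernoff bound above is easy to sanity-check numerically; in the sketch below, `rho` and `K` are arbitrary stand-ins for $\Lambda(\theta)+\theta\delta_1$ and $\lfloor t_3^{(N)}\rfloor$.

```python
import math

# When Lambda(theta) + theta*delta1 < 0, the ratio r = exp(rho) is below 1
# and the tail of the geometric series collapses to r**K / (1 - r).
rho = -0.3            # stands in for Lambda(theta) + theta*delta1
K = 25                # stands in for floor(t_3^{(N)})
r = math.exp(rho)

tail_direct = sum(r**k for k in range(K, 2000))   # numerically truncated tail
tail_closed = r**K / (1.0 - r)

print(tail_direct, tail_closed)   # the two values agree
```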
When $\Lambda(\theta)+\theta\delta_1\geq 0$, the sum in the upper bound diverges to $\infty$. For the case $\Lambda(\theta)+\theta\delta_1<0$, we have that $$\begin{aligned} \frac{\log\left(N\frac{\exp\left(\lfloor t_3^{(N)}\rfloor(\Lambda(\theta)+\theta\delta_1)\right)}{1-\exp(\Lambda(\theta)+\theta\delta_1)}\exp\left(-\theta\left(\frac{1}{\gamma}-\delta_2\right)\log N\right)\right)}{\log N}\overset{N\to\infty}{\longrightarrow}1+\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)(\Lambda(\theta)+\theta\delta_1)-\theta\left(\frac{1}{\gamma}-\delta_2\right).\end{aligned}$$ As in the proof of Lemma [Lemma 4](#lem: supremum 0 t){reference-type="ref" reference="lem: supremum 0 t"}, we have $\delta=\frac{\delta_1}{\Lambda'(\gamma)\gamma}+\delta_2$ and $\epsilon=\delta^{1/4}$. After a derivation similar to that in the proof of Lemma [Lemma 4](#lem: supremum 0 t){reference-type="ref" reference="lem: supremum 0 t"}, we find that $\theta=\gamma-\sqrt{\delta}$ gives a sharp bound. First, observe that $\Lambda(\gamma-\sqrt{\delta})=-\sqrt{\delta}\Lambda'(\gamma)+O(\delta)$, so $\Lambda(\theta)+\theta\delta_1=-\sqrt{\delta}\Lambda'(\gamma)+(\gamma-\sqrt{\delta})\delta_1+O(\delta)=-\sqrt{\delta}\Lambda'(\gamma)+O(\delta)<0$ for $\delta$ small enough; hence the upper bound in [\[subeq: upper bound 3\]](#subeq: upper bound 3){reference-type="eqref" reference="subeq: upper bound 3"} holds.
Second, we see that $$\begin{gathered} \sup_{\eta\in\mathbb{R}}\left(\eta\left(\frac{1}{\gamma}-\delta_2\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)(\Lambda(\eta)+\eta\delta_1)\right)\\ \geq (\gamma-\sqrt{\delta})\left(\frac{1}{\gamma}-\delta_2\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)(\Lambda(\gamma-\sqrt{\delta})+(\gamma-\sqrt{\delta})\delta_1).\end{gathered}$$ So, we can conclude that $$(\gamma-\sqrt{\delta})\left(\frac{1}{\gamma}-\delta_2\right)-\left(\frac{1}{\Lambda'(\gamma)\gamma}+\delta^{1/4}\right)(\Lambda(\gamma-\sqrt{\delta})+(\gamma-\sqrt{\delta})\delta_1)=1+\delta^{3/4}\Lambda'(\gamma)+O(\delta)>1$$ for $\delta$ small enough; thus the expression in [\[subeq: upper bound 3\]](#subeq: upper bound 3){reference-type="eqref" reference="subeq: upper bound 3"} converges to 0 as $N\to\infty$. ◻ *Proof of Theorem [Theorem 1](#thm: convergence maximum waiting time){reference-type="ref" reference="thm: convergence maximum waiting time"}.* First, to prove a lower bound, we observe that $$\max_{i\leq N}W_i(\infty)\geq_{st.} \max_{i\leq N}\sum_{j=1}^{\big\lfloor \frac{1}{(\Lambda'(\gamma)\gamma)}\log N\big\rfloor}(S_i(j)-A(j)).$$ Combining this inequality with the result of Lemma [Lemma 3](#lem: lower bound){reference-type="ref" reference="lem: lower bound"}, we see that $$\liminf_{N\to\infty}\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}W_i(\infty)>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\geq \operatorname{\mathbb{P}}\probarg*{\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}}X>x}.$$ Second, by using union bounds of the type given in [\[eq: upper bound\]](#eq: upper bound){reference-type="eqref" reference="eq: upper bound"} and explained in Section [3](#sec: heuristic analysis){reference-type="ref" reference="sec: heuristic analysis"}, we get from Lemmas [Lemma 4](#lem: supremum 0 t){reference-type="ref" reference="lem: supremum 0 t"}, [Lemma 5](#lem: t-e t+e){reference-type="ref" reference="lem: t-e t+e"}, and [Lemma 6](#lem: t+e infty){reference-type="ref" reference="lem: t+e infty"}, with $t_1^{(N)}=\left(\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon\right)\log N$ and $t_3^{(N)}=\left(\frac{1}{\Lambda'(\gamma)\gamma}+\epsilon\right)\log N$, that $$\begin{aligned} &\limsup_{N\to\infty}\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}W_i(\infty)>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ &\quad\leq \limsup_{N\to\infty}\operatorname{\mathbb{P}}\probarg*{\max_{i\leq N}\sup_{t_1^{(N)}\leq k<t_3^{(N)}}\sum_{j=1}^k(S_i(j)-A(j))>\frac{1}{\gamma}\log N+x\sqrt{\log N}}\\ &\quad\leq \operatorname{\mathbb{P}}\probarg*{\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon}X_1+\sigma_A\sqrt{2\epsilon}\left|X_2\right|>x}.\end{aligned}$$ Finally, we have that $$\operatorname{\mathbb{P}}\probarg*{\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}-\epsilon}X_1+\sigma_A\sqrt{2\epsilon}\left|X_2\right|>x}\overset{\epsilon\downarrow 0}{\longrightarrow} \operatorname{\mathbb{P}}\probarg*{\sigma_A\sqrt{\frac{1}{\Lambda'(\gamma)\gamma}}X>x}.$$ ◻

Søren Asmussen. , volume 2. Springer, 2003.
François Baccelli. . , 1985.
François Baccelli and Armand M. Makowski. Queueing models for systems with synchronization constraints. , 77(1):138--161, 1989.
Amir Dembo and Ofer Zeitouni. , volume 38. Springer Science & Business Media, 2009.
Monroe David Donsker. , volume 6. Memoirs of the American Mathematical Society, 1951.
Leopold Flatto and Sann Hahn. . , 44(5):1041--1053, 1984.
Laurens de Haan and Ana Ferreira. . Springer Science & Business Media, 2006.
Rasoul Haji and Gordon F. Newell. A relation between stationary queue and waiting time distributions. , 8(3):617--620, 1971.
Stephanus J. de Klein. . PhD thesis, Rijksuniversiteit Utrecht, 1988.
Sung-Seok Ko and Richard F. Serfozo. . , 36(3):854--871, 2004.
David V. Lindley. The theory of queues with a single server. In *Mathematical Proceedings of the Cambridge Philosophical Society*, volume 48, pages 277--289. Cambridge University Press, 1952.
Mirjam Meijer, Dennis Schol, Willem van Jaarsveld, Maria Vlasiou, and Bert Zwart. . , 2021.
Reinhard Michel. . , 55(1):109--117, 1981.
Randolph Nelson and Asser N. Tantawi. Approximate analysis of fork/join synchronization in parallel queues. , 37(6):739--743, 1988.
Dennis Schol, Maria Vlasiou, and Bert Zwart. Large fork-join queues with nearly deterministic arrival and service times. , 47(2):1335--1364, 2021.
Dennis Schol, Maria Vlasiou, and Bert Zwart. Tail asymptotics for the delay in a Brownian fork-join queue. , 2022.
Paul E. Wright. Two parallel processors with coupled inputs. , 24(4):986--1007, 1992.
--- abstract: | We present Zeroth-order Riemannian Averaging Stochastic Approximation (`Zo-RASA`) algorithms for stochastic optimization on Riemannian manifolds. We show that `Zo-RASA` achieves optimal sample complexities for generating $\epsilon$-approximate first-order stationary solutions using only one-sample or constant-order batches in each iteration. Our approach employs Riemannian moving-average stochastic gradient estimators, and a novel Riemannian-Lyapunov analysis technique for convergence analysis. We improve the algorithm's practicality by using retractions and vector transport, instead of exponential mappings and parallel transports, thereby reducing per-iteration complexity. Additionally, we introduce a novel geometric condition, satisfied by manifolds with bounded second fundamental form, which enables new error bounds for approximating parallel transport with vector transport. author: - "Jiaxiang Li [^1]" - "Krishnakumar Balasubramanian[^2]" - "Shiqian Ma [^3]" - bibliography: - reference.bib title: Zeroth-order Riemannian Averaging Stochastic Approximation Algorithms --- # Introduction We consider zeroth-order algorithms for solving the following Riemannian optimization problem, $$\begin{aligned} \label{stochastic_problem} \min_{x\in\mathcal{M}} f(x):=\mathbb{E}_{\xi}[F(x,\xi)],\end{aligned}$$ where $\mathcal{M}$ is a $d$-dimensional complete manifold, $f:\mathcal{M}\rightarrow\mathbb{R}$ is a smooth function, and we can access only the noisy function evaluations $F(x,\xi)$. A natural zeroth-order algorithm is to estimate the gradients of $f$ and use them in the context of Riemannian stochastic gradient descent. The main difficulty in doing so is the construction of the zeroth-order gradient estimator.
Assuming that we have independent samples $u_i$ that are standard normal random vectors supported on $\operatorname{T}_{x}\mathcal{M}$, the tangent space at $x\in\mathcal{M}$, [@li2022stochastic] proposed to construct the zeroth-order gradient estimator as $$\begin{aligned} \label{zeroth_order_estimator} G^\mathsf{Exp}_{\mu}(x) = \frac{1}{m}\sum^m_{i=1}\frac{F(\mathsf{Exp}_{x}(\mu u_i),\xi_i) - F(x, \xi_i)}{\mu} u_i\end{aligned}$$ where $\mu>0$ is a smoothing parameter. Note here that if a retraction is available, then one could also replace the exponential mapping with a retraction based estimator, $$\begin{aligned} \label{zeroth_order_estimator_retr} G^\mathsf{Retr}_{\mu}(x) = \frac{1}{m}\sum^m_{i=1}\frac{F(\mathsf{Retr}_{x}(\mu u_i),\xi_i) - F(x, \xi_i)}{\mu} u_i.\end{aligned}$$ The merit of having a Gaussian distribution on the tangent space is that the variance of the constructed estimator $G_{\mu}(x)$ will only depend on the intrinsic dimension $d$ of the manifold, and is independent of the dimension $n$ of the ambient Euclidean space. We refer to [@li2022stochastic] for the details of our zeroth-order estimator and its applications. See also [@wang2021greene; @wang2023sharp] for additional follow-up works. To obtain an $\epsilon$-approximate stationary solution of [\[stochastic_problem\]](#stochastic_problem){reference-type="eqref" reference="stochastic_problem"} (as in Definition [Definition 2](#def:epsstat){reference-type="ref" reference="def:epsstat"}) using the above approach, [@li2022stochastic] established a sample complexity of $\mathcal{O}(d/\epsilon^4)$, with $\mathcal{O}(1/\epsilon^2)$ iteration complexity and $m=\mathcal{O}(d/\epsilon^2)$ per-iteration batch size. Even considering $d=1$ for simplicity, this suggests for example that to get an accuracy of $\epsilon \approx 10^{-3}$, one needs batch-sizes of order $m\approx 10^{6}$ resulting in a highly impractical per-iteration complexity. 
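As a concrete illustration of the estimator above (a sketch of ours, not the authors' code), consider the unit sphere $S^{n-1}\subset\mathbb{R}^n$: tangent Gaussians are obtained by projecting ambient Gaussians onto $\operatorname{T}_x S^{n-1}$, and the exponential map follows great circles. For a noiseless linear objective $f(x)=\langle a,x\rangle$ (an assumed toy choice), the estimator should approach the Riemannian gradient $a-\langle a,x\rangle x$ as $m$ grows and $\mu$ shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_sphere(x, v):
    """Exponential map on the unit sphere S^{n-1} (great circles)."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def zo_grad_exp(f, x, mu=1e-4, m=64):
    """Tangent-Gaussian finite differences through Exp_x, mimicking the
    estimator G^Exp_mu; f is a deterministic stand-in for F(., xi)."""
    d = x.size
    G = np.zeros(d)
    for _ in range(m):
        z = rng.standard_normal(d)
        u = z - (z @ x) * x          # project z onto T_x S^{n-1}
        G += (f(exp_sphere(x, mu * u)) - f(x)) / mu * u
    return G / m

# Toy objective f(x) = <a, x>; its Riemannian gradient at x is the tangent
# projection a - <a, x> x.  ('a' is an illustrative choice.)
a = np.array([1.0, 2.0, 3.0])
f = lambda x: a @ x
x = np.array([1.0, 0.0, 0.0])

g_hat = zo_grad_exp(f, x, m=5000)
g_true = a - (a @ x) * x
print(np.linalg.norm(g_hat - g_true))   # small for large m and small mu
```

Note that the estimate lives in the tangent space by construction, and its variance depends only on the intrinsic dimension of the sphere, in line with the discussion above.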
Intriguingly, when implementing these algorithms in practice, favorable results are obtained even when the batch-size is simply set between ten and fifty. Thus, there exists a discrepancy between the current theory and practice of stochastic zeroth-order Riemannian optimization. Furthermore, in online Riemannian optimization problems [@maass2022tracking; @wang2023online], where the data sequence is observed in a streaming fashion, waiting for very long time periods in each iteration in order to obtain the required order of batch-sizes is highly undesirable.

| Result | Objective | Manifold | Operations | $m$ | $N$ |
|--------|-----------|----------|------------|-----|-----|
|        |           | general  | Retr       | $\mathcal{O}(d/\epsilon^2)$ | $\Omega(1)$ |
|        |           |          |            | $\mathcal{O}(d)$ | $\Omega(1)$ |
|        |           |          |            | $\mathcal{O}(1)$ | $\Omega(d)$ |
|        |           |          |            | $\mathcal{O}(d)$ | $\Omega(1)$ |
|        |           |          |            | $\mathcal{O}(1)$ | $\Omega(1)$ |

The main motivation of the current work stems from the above-mentioned undesirable issues associated with the use of mini-batches in the stochastic Riemannian optimization algorithms of [@li2022stochastic]. We address the problem by getting rid of the use of mini-batches altogether, and by developing a batch-free, fully-online algorithm, the Zeroth-order Riemannian Averaging Stochastic Approximation (`Zo-RASA`) algorithm, for solving [\[stochastic_problem\]](#stochastic_problem){reference-type="eqref" reference="stochastic_problem"}. We show that to obtain the sample complexity of $\mathcal{O}(d/\epsilon^4)$, `Zo-RASA` only requires $m=1$ (see the remark after Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}), which is a significant improvement compared to [@li2022stochastic]. The first version of `Zo-RASA` in Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"} uses exponential mappings and parallel transports. However, this version is not implementation-friendly.
As a case in point, consider the Stiefel manifold (see [\[stiefel\]](#stiefel){reference-type="eqref" reference="stiefel"}), for which there is no closed-form expression for the parallel transport $P_{x^k}^{x^{k+1}}$. Indeed, parallel transports there are only available as solutions to certain ordinary differential equations, which increases the per-iteration complexity of implementing Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}. To overcome this issue and to develop a practical version of the RASA framework, we replace the exponential mapping and parallel transport by retraction and vector transport, respectively, resulting in the practical version of the `Zo-RASA` method in Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}. As we will discuss in Section [2](#sec_manifold_basics){reference-type="ref" reference="sec_manifold_basics"}, in the case of Stiefel manifolds, retractions cost only $1/4$ of the time of an exponential mapping. Also, while there is no closed-form expression for parallel transport on Stiefel manifolds, vector transport has an easy closed-form implementation. We establish that Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} has the same sample complexity as Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}, with significantly improved per-iteration complexity. We now highlight two specific novelties that we introduce in this work to establish the above result.
- **Moving-average gradient estimators and Lifting-based Riemannian-Lyapunov analysis.** We introduce a Riemannian moving-average technique (see Line 4 in Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"} and Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}) and a corresponding novel Riemannian-Lyapunov technique for analyzing zeroth-order stochastic Riemannian optimization problems, which works in the lifted space by tracking both the optimization trajectory and the gradient along the trajectory (see [\[eta_def\]](#eta_def){reference-type="eqref" reference="eta_def"}). For Euclidean problems, these techniques were introduced and extended in [@ruszczynski1983stochastic; @ruszczynski1987linearization; @ghadimi2020single; @ruszczynski2021stochastic; @balasubramanian2022stochastic]. However, those works rely heavily on the Euclidean structure. Non-trivial adaptations are needed to extend such methodology and analyses to the Riemannian setting; see Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"} and Theorem [Theorem 3](#theorem2_vec){reference-type="ref" reference="theorem2_vec"}. - **Approximation error between parallel and vector transports.** A major challenge in analyzing Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} is to handle the additional errors introduced by the use of retractions and vector transports. We identify a novel geometric condition on the manifolds under consideration (see Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}) under which we provide novel error bounds between parallel and vector transports (see Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"}).
We further show that the proposed condition, which plays a crucial role in our subsequent convergence analysis, is naturally satisfied if the *second fundamental form* of the manifold is bounded. We remark that the obtained error bounds between parallel and vector transport are of independent interest and are potentially applicable to a variety of other Riemannian optimization problems. In Table [\[table0\]](#table0){reference-type="ref" reference="table0"}, we summarize the sample complexities of stochastic zeroth-order Riemannian optimization algorithms.

## Prior works

We refer to [@absil2009optimization; @boumal2020introduction] for a discussion on general Riemannian optimization methods. To the best of our knowledge, [@li2022stochastic] provided the first oracle complexity results for zeroth-order stochastic Riemannian optimization. Following this, [@wang2021greene; @wang2023sharp; @maass2022tracking] improved and extended the applicability of zeroth-order Riemannian optimization. A central concern in Riemannian optimization is the increased per-iteration complexity caused by the use of exponential mapping and (sometimes) parallel transport. To tackle this, retraction and vector transport are often preferred [@absil2009optimization; @boumal2020introduction]. Such replacements have thus far been considered in deterministic settings, in the context of Riemannian quasi-Newton methods [@huang2015broyden], Riemannian variance reduction methods [@sato2019riemannian], Riemannian proximal gradient methods [@chen2020proximal; @huang2022riemannian] and Riemannian conjugate gradient methods [@sato2022riemannian]. We discuss precise comparisons to this work later in Section [4.1.1](#sec:comparisonprior){reference-type="ref" reference="sec:comparisonprior"}. Stochastic gradient averaging methods in the Euclidean setting were studied in several earlier works [@polyak1977comparison; @ruszczynski1983stochastic; @xiao2009dual].
For nonconvex problems, [@ghadimi2020single] analyzed the averaging stochastic approximation algorithm and established a sample complexity of $\mathcal{O}(1/\epsilon^4)$ for obtaining an $\epsilon$-approximate first-order stationary solution without using mini-batches; see also [@ghadimi2022stochastic] for a zeroth-order extension. For the smooth Riemannian setting, [@han2020riemannian] used a related moving-average technique and achieved an $\mathcal{O}(\epsilon^{-3})$ sample complexity. However, [@han2020riemannian] assumes a Lipschitz smooth-type inequality on $\mathsf{grad}F(x;\xi)$ itself under a given retraction (which is stronger than our assumption) and assumes access to the computationally demanding isometric vector transport (see [\[eq_isometry_parallel_trans\]](#eq_isometry_parallel_trans){reference-type="eqref" reference="eq_isometry_parallel_trans"}). More importantly, that work assumes an opaque and rather strong condition that all iterates of the algorithm are close to a local optimum of the problem in order to carry out the analysis.

# Basics of Riemannian optimization {#sec_manifold_basics}

A differentiable manifold $\mathcal{M}$ is a Riemannian manifold if it is equipped with an inner product (called a Riemannian metric) on the tangent space, $\langle \cdot, \cdot \rangle _x : \operatorname{T}_x\mathcal{M}\times \operatorname{T}_x\mathcal{M}\rightarrow \mathbb{R}$, that varies smoothly on $\mathcal{M}$. The norm of a tangent vector is defined as $\|\xi\|_x\coloneqq\sqrt{\langle \xi, \xi\rangle _x}$. We drop the subscript $x$ and simply write $\langle \cdot, \cdot \rangle$ (and $\|\xi\|$) if $\mathcal{M}$ is an embedded submanifold with the Euclidean metric. Here we use the notion of the tangent space $\operatorname{T}_{x}\mathcal{M}$ of a differentiable manifold $\mathcal{M}$, whose precise definition can be found in [@tu2011manifolds Chapter 8].
As an example, consider the Stiefel manifold given by $$\begin{aligned} \label{stiefel} \mathcal{M}= \operatorname{St}(n, p):=\{X\in\mathbb{R}^{n\times p}: X^\top X=I_p\}.\end{aligned}$$ The tangent space of $\operatorname{St}(n, p)$ is given by $\operatorname{T}_X\mathcal{M}=\{\xi\in\mathbb{R}^{n\times p}: X^\top \xi+\xi^\top X=0\}.$ One could equip the tangent space with the standard trace inner product $\langle X, Y\rangle:=\mathop\mathrm{tr}(X^\top Y)$ to form a Riemannian manifold. For additional examples, see @absil2009optimization [Chapter 3] or @boumal2020introduction [Chapter 7]. We now introduce the concept of a Riemannian gradient and the notion of an $\epsilon$-approximate first-order stationary solution for [\[stochastic_problem\]](#stochastic_problem){reference-type="eqref" reference="stochastic_problem"}. **Definition 1** (Riemannian Gradient). *Suppose $f$ is a smooth function on a Riemannian manifold $\mathcal{M}$. The Riemannian gradient $\mathsf{grad}f(x)$ is a vector in $\operatorname{T}_x\mathcal{M}$ satisfying $\left.\frac{d(f(\gamma(t)))}{d t}\right|_{t=0}=\langle v, \mathsf{grad}f(x)\rangle_{x}$ for any $v\in \operatorname{T}_x\mathcal{M}$, where $\gamma(t)$ is a curve satisfying $\gamma(0)=x$ and $\gamma'(0)=v$.* **Definition 2** ($\epsilon$-approximate first-order stationary solution for [\[stochastic_problem\]](#stochastic_problem){reference-type="eqref" reference="stochastic_problem"}).
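The defining equations of $\operatorname{St}(n,p)$ and its tangent space are easy to verify numerically; the sketch below (ours, with arbitrarily chosen sizes) produces a random point on the manifold and a random tangent vector, and checks both conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 2

# A random point on St(n, p): the Q-factor of a Gaussian matrix is orthonormal.
X, _ = np.linalg.qr(rng.standard_normal((n, p)))

# A random tangent vector at X: subtract the symmetric part of X^T Z so that
# X^T xi becomes skew-symmetric, i.e. X^T xi + xi^T X = 0.
Z = rng.standard_normal((n, p))
xi = Z - X @ (X.T @ Z + Z.T @ X) / 2

print(np.linalg.norm(X.T @ X - np.eye(p)))   # ~0: X lies on St(n, p)
print(np.linalg.norm(X.T @ xi + xi.T @ X))   # ~0: xi lies in T_X St(n, p)
```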
*We call a point $\bar{x}$ an $\epsilon$-approximate first-order stationary solution for [\[stochastic_problem\]](#stochastic_problem){reference-type="eqref" reference="stochastic_problem"} if it satisfies $\mathbb{E}[\| \mathsf{grad}f(\bar{x})\|_{\bar{x}}^2] \leq \epsilon^2$, where the expectation is with respect to both the problem and algorithm-based randomness.* **Geodesics, retractions and exponential mappings.** Given two tangent vectors $\xi, \eta\in \operatorname{T}\mathcal{M}$, the Levi-Civita connection $\nabla:\operatorname{T}\mathcal{M}\times \operatorname{T}\mathcal{M}\rightarrow \operatorname{T}\mathcal{M}$, $(\xi,\eta)\rightarrow\nabla_{\xi}\eta\in \operatorname{T}\mathcal{M}$ is the "directional differential\" of $\eta$ along the direction of $\xi$, which is determined uniquely by the metric tensor $\langle\cdot, \cdot\rangle_x$. In Euclidean spaces, $\nabla_{\xi}\eta$ is just the directional derivative of the vector field $\eta$ along $\xi$. For a Riemannian manifold $\mathcal{M}$, a geodesic $\gamma$ is a curve on $\mathcal{M}$ that satisfies $\nabla_{\gamma'} \gamma'=0$, i.e., the directional derivative along the tangent direction is always zero. Usually we find the geodesic through the initial value conditions, $\nabla_{\gamma'} \gamma'=0,\ \gamma(0)=x,\ \gamma'(0)=v,$ whose solution locally exists and is unique by the existence and uniqueness theorem for ordinary differential equations. Given any curve $\gamma(t)$ on $\mathcal{M}$, one can calculate the length of the curve and define the distance between two points $x,y\in\mathcal{M}$, respectively, by $L(\gamma)\coloneqq\int_{a}^{b}\|\gamma'(t)\|_{\gamma(t)}dt\quad\text{and}\quad \mathsf{d}(x,y)\coloneqq\min_{\gamma,\gamma(a)=x,\gamma(b)=y}L(\gamma)$. If the manifold is a complete Riemannian manifold, then according to [@do1992riemannian Corollary 3.9], there exists a unique minimal geodesic $\gamma$ satisfying $\gamma(a)=x,\gamma(b)=y$ that minimizes $L(\gamma)$.
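On the unit sphere with the Euclidean metric, geodesics are great circles and the Riemannian distance reduces to the angle between points; the following sketch (with arbitrarily chosen point and direction, not taken from the paper) checks numerically that integrating the speed $\|\gamma'(t)\|$ along the geodesic recovers $\mathsf{d}(x,y)=\arccos\langle x,y\rangle$.

```python
import numpy as np

# On S^2, gamma(t) = cos(t) x + sin(t) v (x, v orthonormal) is a unit-speed
# geodesic, and d(x, y) = arccos(<x, y>).
x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

def gamma(t):
    return np.cos(t) * x + np.sin(t) * v

b = 1.2                                  # travel time along the geodesic
ts = np.linspace(0.0, b, 20001)
pts = np.array([gamma(t) for t in ts])
vels = np.gradient(pts, ts, axis=0)      # numerical gamma'(t)
speeds = np.linalg.norm(vels, axis=1)

# Trapezoidal arc length L(gamma) on [0, b]; unit speed gives L = b.
length = float(np.sum((speeds[1:] + speeds[:-1]) / 2 * np.diff(ts)))

print(length, float(np.arccos(pts[-1] @ x)))   # both close to b = 1.2
```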
Therefore, we can always calculate the distance with respect to the minimal geodesic as $\mathsf{d}(x,y)=\int_{a}^{b}\|\gamma'(t)\|_{\gamma(t)}dt, \nabla_{\gamma'}\gamma'=0,\gamma(a)=x,\gamma(b)=y$, which will be utilized in our error analysis in Section [4](#sec_avg_retr){reference-type="ref" reference="sec_avg_retr"}. A retraction mapping $\mathsf{Retr}_x$ is a smooth mapping from $\operatorname{T}_x\mathcal{M}$ to $\mathcal{M}$ such that: $\mathsf{Retr}_x(0)=x$, where $0$ is the zero element of $\operatorname{T}_x\mathcal{M}$, and the differential of $\mathsf{Retr}_x$ at $0$ is an identity mapping, i.e., $\left.\frac{d \mathsf{Retr}_x(t\eta)}{d t}\right|_{t=0}=\eta$, $\forall \eta\in \operatorname{T}_x\mathcal{M}$. In particular, the exponential mapping $\mathsf{Exp}_x$ on a Riemannian manifold is a retraction generated by geodesics, i.e., $\mathsf{Exp}_{x}(t\xi)\coloneqq\gamma(t)$ where $\gamma$ is a geodesic with $\gamma(0)=x$ and $\gamma'(0)=\xi$. Notice that a retraction is not always injective from $\operatorname{T}_x\mathcal{M}$ to $\mathcal{M}$ for a given point $x\in\mathcal{M}$; thus the existence of the inverse $\mathsf{Retr}_{x}^{-1}$ is not guaranteed. However, when $\mathcal{M}$ is complete, the exponential mapping $\mathsf{Exp}_x$ is always defined for every $\xi\in \operatorname{T}_x\mathcal{M}$, and the inverse of the exponential mapping $\mathsf{Exp}_{x}^{-1}(y)\in \operatorname{T}_x\mathcal{M}$ is always well-defined for any $x,y\in\mathcal{M}$. Also, since $\mathsf{Exp}_{x}(t\xi)$ generates geodesics, we have $\mathsf{d}(x, \mathsf{Exp}_{x}(t\xi))=t\|\xi\|_x$. These facts are used in Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} and in the convergence proofs. As an example, retractions on Stiefel manifolds can be defined by the QR decomposition: $R_X(\xi):=Q$, where $X+\xi = QR$.
It can also be defined through the polar decomposition as $R_X(\xi):=U V^\top$, where $X+\xi = U\Sigma V^\top$ is the (thin) singular value decomposition of $X+\xi$. The geodesic on the Stiefel manifold is given by: $X(t)=\left[\begin{array}{ll} X(0) & \dot{X}(0) \end{array}\right] \exp \left(t\left[\begin{array}{cc} A(0) & -S(0) \\ I & A(0) \end{array}\right]\right)\left[\begin{array}{l} I \\ 0 \end{array}\right] \exp (-A(0) t),$ for $A(t)=X^\top(t) \dot{X}(t)$ and $S(t)=\Dot{X}^\top(t)\dot{X}(t)$ with initial point $X(0)$ and initial speed $\dot{X}(0)$. The exponential mapping is thus given by $\mathsf{Exp}_{X(0)}(\dot{X}(0))=X(1)$. The computational costs of the QR and polar decomposition retractions are of order $2dk^2+\mathcal{O}(k^3)$ and $3dk^2+\mathcal{O}(k^3)$, respectively, whereas, as shown in @chen2020proximal [Section 3], the exponential mapping takes $8dk^2+\mathcal{O}(k^3)$, which illustrates the favorability of retractions in practical computations. We refer to @absil2009optimization [Chapter 4] and @boumal2020introduction [Chapter 3] for additional examples and more discussions on retractions and exponential mappings. **Vector and parallel transport.** Vector transports are linear mappings from one tangent space to another, which can be formally defined as follows. **Definition 3** (Vector and parallel transport).
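Both retractions just described are cheap to implement and easy to check numerically; the sketch below (a toy verification of ours, with sizes chosen arbitrarily) confirms that each lands back on $\operatorname{St}(n,p)$ and agrees with $\xi$ to first order at $0$, the two defining properties of a retraction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 2
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
Z = rng.standard_normal((n, p))
xi = Z - X @ (X.T @ Z + Z.T @ X) / 2          # a tangent vector at X

def retr_qr(X, xi):
    """QR retraction: Q-factor of X + xi, signs fixed so R has positive diag."""
    Q, R = np.linalg.qr(X + xi)
    return Q * np.sign(np.diag(R))

def retr_polar(X, xi):
    """Polar retraction: U V^T from the thin SVD of X + xi."""
    U, _, Vt = np.linalg.svd(X + xi, full_matrices=False)
    return U @ Vt

for retr in (retr_qr, retr_polar):
    Y = retr(X, xi)
    print(np.linalg.norm(Y.T @ Y - np.eye(p)))             # ~0: on St(n, p)
    t = 1e-6
    print(np.linalg.norm((retr(X, t * xi) - X) / t - xi))  # ~0: identity differential
```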
*A vector transport $\mathcal{T}$ on a smooth manifold $\mathcal{M}$ is a smooth mapping $\operatorname{T}\mathcal{M}\times \operatorname{T}\mathcal{M}\rightarrow \operatorname{T}\mathcal{M}:(\eta_x, \xi_x)\rightarrow \mathcal{T}_{\eta_x}(\xi_x)\in \operatorname{T}\mathcal{M}$, where the subscript $x$ means that the vector is in $\operatorname{T}_{x}\mathcal{M}$, such that: (i) There exists a retraction $R$ so that $\mathcal{T}_{\eta_x}(\xi_x)\in \operatorname{T}_{R_x(\eta_x)}\mathcal{M}$, (ii) $\mathcal{T}_{0_x}\xi_x = \xi_x$ for all $\xi_x\in \operatorname{T}_x\mathcal{M}$, and (iii) $\mathcal{T}_{\eta_x}(a\xi_x+b\zeta_x) = a\mathcal{T}_{\eta_x}(\xi_x)+b\mathcal{T}_{\eta_x}(\zeta_x)$, i.e., linearity. Particularly, for a complete Riemannian manifold $(\mathcal{M}, \langle\cdot,\cdot\rangle)$, we can construct a special vector transport, namely the parallel transport $P$, that can map vectors to another tangent space "parallelly", i.e., $\forall \eta,\xi\in \operatorname{T}_x\mathcal{M}$ and $y\in\mathcal{M}$, $$\begin{aligned} \label{eq_isometry_parallel_trans} \langle P_{\mathsf{Exp}_{x}^{-1}(y)}(\eta), P_{\mathsf{Exp}_{x}^{-1}(y)}(\xi) \rangle_y=\langle \eta, \xi \rangle_x. \end{aligned}$$ Notice that parallel transport is not the only transport that satisfies [\[eq_isometry_parallel_trans\]](#eq_isometry_parallel_trans){reference-type="eqref" reference="eq_isometry_parallel_trans"}, and we call the vector transport an isometric vector transport if it satisfies [\[eq_isometry_parallel_trans\]](#eq_isometry_parallel_trans){reference-type="eqref" reference="eq_isometry_parallel_trans"}.* We can equivalently view $P$ as a mapping from the tangent space $\operatorname{T}_{x}\mathcal{M}$ to $\operatorname{T}_y\mathcal{M}$. We hence denote $P_{x}^{y}:\operatorname{T}_{x}\mathcal{M}\rightarrow \operatorname{T}_{y}\mathcal{M}$. Note that parallel transport depends on the curve along which the vectors are moving. 
If the curve is not specified, it refers to the case when we are considering the minimal geodesic connecting the two points, which exists due to completeness. As an example, for the Stiefel manifold in [\[stiefel\]](#stiefel){reference-type="eqref" reference="stiefel"}, there is no closed-form expression for the parallel transport, whereas one can always utilize the projection onto the tangent space, given by $\mathsf{proj}_{\operatorname{T}_X\mathcal{M}}(\xi) = (I-X X^\top)\xi + X\operatorname{skew}(X^\top \xi)$, where $\operatorname{skew}(A):=(A-A^\top)/2$, to transport $\xi\in \operatorname{T}_{X_0}\operatorname{St}(d, p)$ to $\operatorname{T}_{X}\operatorname{St}(d, p)$. We refer to @absil2009optimization [Chapter 8] and @boumal2020introduction [Chapter 10] for additional examples and more discussions on vector and parallel transports. **Second fundamental form.** We now discuss the notion of second fundamental form, which will be helpful in characterizing a geometric condition used in Section [4](#sec_avg_retr){reference-type="ref" reference="sec_avg_retr"} to quantify the error of approximating parallel transports with vector transports. In general, the notion of second fundamental form can be studied for general isometric immersions and we restrict here to the embedding in Euclidean spaces only for brevity. **Definition 4** (Second fundamental form). *Suppose $\mathcal{M}\subset\mathbb{R}^D$ is a complete Riemannian manifold equipped with the Euclidean metric. For any $\xi, \eta\in \operatorname{T}\mathcal{M}$, denote the extension of two vector fields to $\mathbb{R}^D$ as $\bar{\xi},\bar{\eta}\in \mathbb{R}^D$, also the directional derivative of $\Bar{\eta}$ along $\Bar{\xi}$ as $\Bar{\nabla}_{\bar{\xi}}\bar{\eta}\in\mathbb{R}^D$. 
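The projection formula above yields a simple (non-isometric) vector transport on the Stiefel manifold: transport $\xi$ from $\operatorname{T}_{X_0}$ to $\operatorname{T}_{X_1}$ by projecting it onto the target tangent space. The sketch below (our toy check, with arbitrary sizes) verifies that the transported vector lands in $\operatorname{T}_{X_1}$ and that the orthogonal projection is non-expansive.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 2

def proj_tangent(X, V):
    """Orthogonal projection onto T_X St(n, p):
    (I - X X^T) V + X skew(X^T V), with skew(A) = (A - A^T)/2."""
    skew = (X.T @ V - V.T @ X) / 2
    return (np.eye(n) - X @ X.T) @ V + X @ skew

X0, _ = np.linalg.qr(rng.standard_normal((n, p)))
X1, _ = np.linalg.qr(rng.standard_normal((n, p)))      # another point
xi = proj_tangent(X0, rng.standard_normal((n, p)))     # tangent at X0

eta = proj_tangent(X1, xi)     # projection-based vector transport to T_{X1}
print(np.linalg.norm(X1.T @ eta + eta.T @ X1))         # ~0: eta is in T_{X1}
print(np.linalg.norm(eta), np.linalg.norm(xi))         # |eta| <= |xi|
```

Unlike parallel transport, this map can shrink vectors (it is not isometric), which is exactly the kind of approximation error quantified later via the second fundamental form.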
The second fundamental form is the symmetric bilinear map, $B(\xi,\eta) = \Bar{\nabla}_{\bar{\xi}}\bar{\eta}- \nabla_{\xi}\eta \in (\operatorname{T}\mathcal{M})^\bot,$ which quantifies the deviation of the Riemannian directional derivative (depicted by the Levi-Civita connection $\nabla$) from the Euclidean one (the common directional derivative $\Bar{\nabla}$).* Finally, we remark that there are various definitions of second fundamental forms, among which the most common one is a quadratic form related to $B$; see [@do1992riemannian Chapter 6, Definition 2.2]. Here we simply refer to $B$ as the second fundamental form.

# Zeroth-order RASA for smooth manifold optimization {#sec_smooth_avg}

We now introduce the Zeroth-order Riemannian Averaging Stochastic Approximation (`Zo-RASA`) algorithm for solving [\[stochastic_problem\]](#stochastic_problem){reference-type="eqref" reference="stochastic_problem"}. The formal procedure is stated in Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}, where $P_{x}^{y}$ is the parallel transport from $\operatorname{T}_{x}\mathcal{M}$ to $\operatorname{T}_{y}\mathcal{M}$ along the minimal geodesic connecting $x$ and $y$. To establish the sample complexity of Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}, we extend the analysis of [@ghadimi2020single], which is, in turn, motivated by the lifting technique introduced in [@ruszczynski1983stochastic; @ruszczynski1987linearization], to the Riemannian setting. As such works heavily rely on the Euclidean structure, our proofs involve a non-trivial adaptation of these techniques. Initial point $x^0\in\mathcal{M}$, $g^0=G_{\mu}^{\mathsf{Exp}}(x^0)$, total number of iterations $N$, parameters $\beta>0$, $\tau_0=1$, $\tau_k=1/\sqrt{N}$ or $\tau_k=1/\sqrt{dN}$ when $k\geq 1$, and stepsize $t_k=\tau_k/\beta$.
$x^{k+1}\leftarrow \mathsf{Exp}_{x^{k}}(- t_k g^{k})$ $g^{k+1}\leftarrow (1-\tau_k) P_{x^{k}}^{x^{k+1}} g^{k} + \tau_k {P_{x^{k}}^{x^{k+1}} G_{\mu}^k}$ where $G_{\mu}^k=G_{\mu}^{\mathsf{Exp}}(x^k)$ is given by [\[zeroth_order_estimator\]](#zeroth_order_estimator){reference-type="eqref" reference="zeroth_order_estimator"} with batch-size $m=m_k$ In our convergence analysis, we always choose $\tau_0=1$, and we consider two choices of $\tau_k$ when $k\geq 1$: $$\label{tau-2-choices} \tau_k =1/\sqrt{N} \mbox{ or } \tau_k=1/\sqrt{d N}, k\geq 1,$$ which correspond to the large-batch and single-batch settings, respectively. Moreover, we always choose $t_k=\tau_k/\beta$, where $\beta$ is a positive constant determined by the smoothness constant in Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} (see Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}), so that the step-size and the averaging weights are of the same order. Furthermore, we define $$\label{def-Gamma} \Gamma_{0}=\Gamma_{1} = 1, \mbox{ and } \Gamma_k=\Gamma_1\prod_{i=1}^{k-1}(1-\tau_i^2).$$ This leads to the following inequalities, which will be used frequently in our convergence analysis: $$\label{tau-conditions} \sum_{i=k+1}^{N} \tau_{i}\Gamma_{i}\leq \Gamma_{k+1} \mbox{ and } \sum_{i=k+1}^{N} \tau_{i}^2\Gamma_{i}\leq \tau_k\Gamma_{k+1}.$$ To proceed, we construct the following potential function $$\begin{aligned} \label{eta_def} W(x,g) \coloneqq (f(x) - f^*) - \eta(x,g),\quad\text{where}\quad \eta(x,g)\coloneqq-\frac{1}{2\beta}\|g\|_x^2,\ g\in \operatorname{T}_x\mathcal{M},\end{aligned}$$ where $f^*=\min_{x\in\mathcal{M}}f(x)$ and $\beta>0$ is a constant to be determined later. Note that the potential function in [\[eta_def\]](#eta_def){reference-type="eqref" reference="eta_def"} combines the function-value gap and the norm of the (estimated) gradient; note also that $W$ is always non-negative.
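To make the update in Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"} concrete, consider the unit sphere $S^{d-1}$, where the exponential map and parallel transport have closed forms. The following Python sketch (our illustration with hypothetical names, not the authors' implementation) carries out one `Zo-RASA` step; the argument `G_mu` stands in for the zeroth-order estimate $G_{\mu}^k$, whose construction is omitted here.

```python
import numpy as np

def exp_map(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x with velocity v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def parallel_transport(x, v, w):
    """Parallel-transport w in T_x S^{d-1} along the geodesic t -> Exp_x(t v), t in [0, 1]."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return w
    u = v / nv                       # unit direction of the geodesic
    a = w @ u                        # component of w along the geodesic direction
    return (w - a * u) + a * (np.cos(nv) * u - np.sin(nv) * x)

def zo_rasa_step(x, g, G_mu, tau, beta):
    """One Zo-RASA update: x^{k+1} = Exp_x(-t g), then transport and average gradients."""
    t = tau / beta                   # step size t_k = tau_k / beta
    step = -t * g                    # initial velocity of the connecting geodesic
    x_next = exp_map(x, step)
    g_next = (1.0 - tau) * parallel_transport(x, step, g) \
        + tau * parallel_transport(x, step, G_mu)
    return x_next, g_next
```

By construction the iterate stays on the sphere and $g^{k+1}$ lies in the tangent space at $x^{k+1}$; the transport is an exact isometry, matching the property [\[eq_isometry_parallel_trans\]](#eq_isometry_parallel_trans){reference-type="eqref" reference="eq_isometry_parallel_trans"} used in the analysis.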
In our analysis, we proceed by bounding the difference of the potential function between successive iterates. More specifically, using the convexity of the squared norm, for any pair $(x,g)$, we have $\|\mathsf{grad}f(x)\|_{x}^2\leq -2\beta\,\eta(x,g) + 2\|g-\mathsf{grad}f(x)\|_{x}^2$. This observation will be leveraged in the proof of Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"} to obtain the sample complexity of Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"} for obtaining an $\epsilon$-approximate stationary solution. We also highlight that our convergence analysis extensively utilizes the isometry property of parallel transport, stated in [\[eq_isometry_parallel_trans\]](#eq_isometry_parallel_trans){reference-type="eqref" reference="eq_isometry_parallel_trans"}, i.e., $\langle P_{x}^{y}(\eta), P_{x}^{y}(\xi) \rangle_y=\langle \eta, \xi \rangle_x$. This identity generalizes the invariance of the Euclidean inner product, which is unchanged when both vectors are translated to a common base point. A direct consequence of this identity is that parallel transport preserves lengths, namely $\|P_{x}^{y}(\xi)\|_{y}=\|\xi\|_{x}$, which we will also use extensively. We now introduce the assumptions needed for our analysis. **Assumption 1**. *The function $f:\mathcal{M}\to\mathbb{R}$ is $L$-smooth on $\mathcal{M}$, i.e., $\forall x,y\in\mathcal{M}$, we have $\|P_{x}^{y}\mathsf{grad}f(x) - \mathsf{grad}f(y)\|_{y}\leq L\,\mathsf{d}(x,y)$.
An immediate consequence (see, for example, @boumal2020introduction [Proposition 10.53]) of this condition is that we have $|f(y)-f(x) - \langle \mathsf{grad}f(x), \mathsf{Exp}_{x}^{-1}(y) \rangle_{x}|\leq \frac{L}{2}\|\mathsf{Exp}_{x}^{-1}(y) \|_{x}^2$.* Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} is a generalization of the standard gradient-Lipschitz assumption in Euclidean optimization [@nesterov2018lectures; @lan2020first] to the Riemannian setting, and is made in several works [@boumal2020introduction]. Since $\mathsf{grad}f(x)$ and $\mathsf{grad}f(y)$ do not lie in the same tangent space, the parallel transport $P_{x}^{y}$ is needed to compare the two vectors within a common tangent space. Throughout the paper, we define $\mathcal{F}_{k}$ as the $\sigma$-algebra generated by all the randomness up to iteration $k$ of the algorithms. Namely, for Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}, we have $\mathcal{F}_{k}=\sigma(\xi_0,\ldots,\xi_k,x_0,\ldots, x_k, g_0,\ldots, g_k)$. **Assumption 2**. *Along the trajectory of the algorithm, the stochastic gradients are unbiased and have bounded variance, i.e., for $k \in \{1,\ldots, N\}$, we have $\mathbb{E}_\xi [\mathsf{grad}F(x^k;\xi_k)|\mathcal{F}_{k-1}] = \mathsf{grad}f(x^k)$ and $\mathbb{E}_\xi[\|\mathsf{grad}F(x^k;\xi_k) - \mathsf{grad}f(x^k)\|_{x^{k}}^2|\mathcal{F}_{k-1}]\leq \sigma^2.$* The above assumption is widely used in the stochastic Riemannian optimization literature; see, for example, [@zhang2016riemannian; @li2022stochastic; @boumal2020introduction], and generalizes the standard assumption used in Euclidean stochastic optimization [@nesterov2018lectures; @lan2020first]. Now we proceed to the convergence analysis of Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}.
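As a toy illustration of the quadratic bound in Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} (our own example, not taken from the paper), take the unit sphere with $f(x)=\langle a,x\rangle$: the Riemannian Hessian of $f$ is $-\langle a,x\rangle\,\mathrm{id}$ on each tangent space, so $L=\|a\|$ is a valid smoothness constant, and the inequality can be checked numerically over random pairs of points.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
a = rng.standard_normal(d)
L = np.linalg.norm(a)  # Hessian of f(x) = <a, x> on the sphere is -<a, x> id, so ||Hess|| <= ||a||

def f(x):
    return a @ x

def grad_f(x):
    # Riemannian gradient: projection of the Euclidean gradient a onto T_x S^{d-1}
    return a - (a @ x) * x

def log_map(x, y):
    """Inverse exponential map Exp_x^{-1}(y) on the unit sphere."""
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))  # geodesic distance d(x, y)
    w = y - (x @ y) * x                           # tangent direction towards y
    nw = np.linalg.norm(w)
    return theta * (w / nw) if nw > 1e-12 else np.zeros_like(x)

# Smallest slack of |f(y) - f(x) - <grad f(x), Exp_x^{-1}(y)>| <= (L/2) ||Exp_x^{-1}(y)||^2
worst_slack = np.inf
for _ in range(2000):
    x = rng.standard_normal(d); x /= np.linalg.norm(x)
    y = rng.standard_normal(d); y /= np.linalg.norm(y)
    v = log_map(x, y)
    lhs = abs(f(y) - f(x) - grad_f(x) @ v)
    rhs = 0.5 * L * (v @ v)
    worst_slack = min(worst_slack, rhs - lhs)
```

The recorded slack `worst_slack` is the smallest value of the right-hand side minus the left-hand side over all sampled pairs; it stays non-negative, as the quadratic upper bound predicts.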
We first state the following standard result characterizing the approximation error of $G_{\mu}^{\mathsf{Exp}}$ (given by [\[zeroth_order_estimator\]](#zeroth_order_estimator){reference-type="eqref" reference="zeroth_order_estimator"}) to the true Riemannian gradient. **Lemma 1** (Proposition 1 in [@li2022stochastic] with exponential mapping). *Under Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} and [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"}, we have $\|\mathbb{E}G_{\mu}^{\mathsf{Exp}}(x) - \mathsf{grad}f(x)\|_{x}^2 \leq \frac{\mu^2L^2}{4}(d+3)^3$, $\mathbb{E}\|G_{\mu}^{\mathsf{Exp}}(x)\|_{x}^2 \leq \mu^2 L^2(d + 6)^3 + 2(d+4) \|\mathsf{grad}f(x)\|_{x}^2$ and $\mathbb{E}\|G_{\mu}^{\mathsf{Exp}}(x) - \mathsf{grad}f(x)\|_{x}^2 \leq \mu^2 L^2(d + 6)^3 + \frac{8(d+4)}{m}\sigma^2+\frac{8(d+4)}{m}\|\mathsf{grad}f(x)\|_{x}^2$, where the expectation is taken with respect to all the Gaussian vectors in $G_{\mu}$ and the random variable $\xi$.* Based on the above result, we have the following Lemma [Lemma 2](#lemma_zo_gk_gradk){reference-type="ref" reference="lemma_zo_gk_gradk"}, which bounds the deviation of $g^k$ from the true Riemannian gradient $\mathsf{grad}f(x^k)$, and Lemma [Lemma 3](#lemma_zo_sum_gkplus1_gk){reference-type="ref" reference="lemma_zo_sum_gkplus1_gk"}, which bounds the difference between two consecutive estimates, where parallel transport places $g^k$ and $g^{k+1}$ in the same tangent space, i.e., we bound $\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2$. **Lemma 2**. *Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} and [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"} hold, and $\{x^k,g^k\}$ is generated by Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}.
We have $$\begin{aligned} \label{zo_gk_gradk_bound} %\begin{aligned} & \mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \\ \leq & \Gamma_{k}\tilde{\sigma}_0^2 + \Gamma_{k}\sum_{i=1}^{k}\Big( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2}+\frac{\tau_{i-1}^2}{\Gamma_{i}}\tilde{\sigma}_{i-1}^2+\frac{\tau_{i-1}}{\Gamma_{i}}\hat{\sigma}^2 \Big),\nonumber %\end{aligned} \end{aligned}$$ where the expectation $\mathbb{E}$ is taken with respect to all random variables up to iteration $k$, including the random variables $\{u_i\}_{i=1}^k$ used to construct the zeroth-order estimator as in [\[zeroth_order_estimator\]](#zeroth_order_estimator){reference-type="eqref" reference="zeroth_order_estimator"}. Here the notation is defined as: $$\begin{aligned} \label{def-tilde-sigma} \begin{aligned} \hat{\sigma}^2&\coloneqq \frac{\mu^2L^2}{4}(d+3)^3\\ \tilde{\sigma}_k^2&\coloneqq \sigma_{k}^2+\frac{8(d+4)}{m_k}\mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^k}^2\text{ where }\sigma_{k}^2\coloneqq\mu^2 L^2(d + 6)^3 + \frac{8(d+4)}{m_k}\sigma^2.
\end{aligned}\end{aligned}$$ Moreover, from [\[tau-conditions\]](#tau-conditions){reference-type="eqref" reference="tau-conditions"} we have $$\begin{split} \sum_{k=1}^{N}\tau_k\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \sum_{k=0}^{N-1}\bigg( (1+\tau_{k})\tau_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2} + \tau_{k}^2\tilde{\sigma}_k^2+\tau_k\hat{\sigma}^2 \bigg)+\tilde{\sigma}_0^2, \\%\label{zo_sum_gk_gradk_bound_1}\\ \sum_{k=1}^{N}\tau_k^2\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \sum_{k=0}^{N-1}\bigg( (1+\tau_{k})\tau^2_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2} + \tau_{k}^3\tilde{\sigma}_k^2+\tau^2_k\hat{\sigma}^2 \bigg)+ \sum_{k=1}^{N}\tau_k^2\tilde{\sigma}_0^2.%\label{zo_sum_gk_gradk_bound_2} \end{split}$$* **Proof.** Firstly, note that we have the following: $g^{k} - \mathsf{grad}f(x^{k}) = (1-\tau_{k-1}) P_{x^{k-1}}^{x^{k}} g^{k-1} + \tau_{k-1} P_{x^{k-1}}^{x^{k}} G_{\mu}^{k-1} - \mathsf{grad}f(x^{k}) =(1-\tau_{k-1}) P_{x^{k-1}}^{x^{k}}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + (P_{x^{k-1}}^{x^{k}}\mathsf{grad}f(x^{k-1}) - \mathsf{grad}f(x^{k}))+ \tau_{k-1} P_{x^{k-1}}^{x^{k}}(G_{\mu}^{k-1} - \mathsf{grad}f(x^{k-1}))=(1-\tau_{k-1}) P_{x^{k-1}}^{x^{k}}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1} e_{k-1} + \tau_{k-1}\Delta_{k-1}^{f}$. Hence, we have $$\begin{aligned} \label{zo_temp1} \begin{aligned} & \|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \\ \leq &(1-\tau_{k-1})\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \tau_{k-1}\|e_{k-1}\|_{x^{k}}^2+\tau_{k-1}^2\|\Delta_{k-1}^{f}\|_{x^{k}}^2\\ & + 2\tau_{k-1}\langle (1-\tau_{k-1})P_{x^{k-1}}^{x^{k}}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}, \Delta_{k-1}^{f}\rangle_{x^{k}}, \end{aligned} \end{aligned}$$ where the notation is defined as $e_{k-1} \coloneqq \frac{1}{\tau_{k-1}}(P_{x^{k-1}}^{x^{k}}\mathsf{grad}f(x^{k-1}) - \mathsf{grad}f(x^{k})),$ and $\Delta_{k-1}^{f} \coloneqq P_{x^{k-1}}^{x^{k}} (G_{\mu}^{k-1} - \mathsf{grad}f(x^{k-1}))$. 
Denote $\delta_{k-1}=\langle (1-\tau_{k-1})P_{x^{k-1}}^{x^{k}}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}, \Delta_{k-1}^{f}\rangle_{x^{k}}$. The main novelty in the proof of this lemma is handling the fact that $\delta_{k-1}$ no longer has zero conditional expectation (as it does in the first-order setting). We have by Lemma [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"} that $$\begin{split} &2\mathbb{E}_{u^k}[\delta_{k-1}] = 2\langle (1-\tau_{k-1})P_{x^{k-1}}^{x^{k}}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}, \mathbb{E}_{u^k}[\Delta_{k-1}^{f}|\mathcal{F}_{k-2}]\rangle_{x^{k}} \\ \leq & \|(1-\tau_{k-1})P_{x^{k-1}}^{x^{k}}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}\|_{x^{k} }^2 + \|\mathbb{E}_{u^k} G_{\mu}^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 \\ \leq & (1-\tau_{k-1})\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \tau_{k-1}\|e_{k-1}\|_{x^{k} }^2 + \hat{\sigma}^2. \end{split}$$ Notice that in the above computation, the expectation is only taken with respect to the Gaussian random variables that we used to construct $G_{\mu}(x^{k-1})$. Plugging this back into [\[zo_temp1\]](#zo_temp1){reference-type="eqref" reference="zo_temp1"}, we have $\mathbb{E}_{u^k}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2\leq \tau_{k-1}\hat{\sigma}^2+ (1-\tau_{k-1}^2)\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \tau_{k-1}(1+\tau_{k-1})\|e_{k-1}\|_{x^{k}}^2 + \tau_{k-1}^2\|\Delta_{k-1}^{f}\|_{x^{k}}^2$. Now dividing both sides of this inequality by $\Gamma_k$ defined in [\[def-Gamma\]](#def-Gamma){reference-type="eqref" reference="def-Gamma"}, we get $\frac{1}{\Gamma_k}\mathbb{E}_{u^k}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2\leq \frac{1}{\Gamma_{k-1}}\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \frac{(1+\tau_{k-1})\tau_{k-1}}{\Gamma_k}\|e_{k-1}\|_{x^{k}}^2 + \frac{\tau_{k-1}^2}{\Gamma_k}\|\Delta_{k-1}^{f}\|_{x^{k}}^2 + \frac{\tau_{k-1}}{\Gamma_k}\hat{\sigma}^2$.
By Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"} and Lemma [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"}, we have that $\|e_{i}\|_{x^{i+1}}^2\leq \frac{L^2}{\tau_i^2}\mathsf{d}(x^i,x^{i+1})^2 \leq \frac{L^2 t_i^2\|g^i\|_{x^i}^2}{\tau_i^2} = \frac{L^2\|g^i\|_{x^i}^2}{\beta^2}$, and $\mathbb{E}[\|\Delta_{i}^{f}\|_{x^{i+1}}^2|\mathcal{F}_{i-1}]\leq \sigma_i^2+\frac{8(d+4)}{m_i}\mathbb{E}[\|\mathsf{grad}f(x^i)\|_{x^i}^2|\mathcal{F}_{i-1}]$. Hence, by applying the law of total expectation (to take the expectation over all random variables), we have $\frac{1}{\Gamma_k}\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \frac{1}{\Gamma_{k-1}}\mathbb{E}\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \frac{(1+\tau_{k-1})\tau_{k-1}}{\Gamma_k}\frac{L^2\mathbb{E}\|g^{k-1}\|_{x^{k-1}}^2}{\beta^2} + \frac{\tau_{k-1}^2}{\Gamma_k}\Tilde{\sigma}_{k-1}^2 + \frac{\tau_{k-1}}{\Gamma_k}\hat{\sigma}^2$. Now, telescoping the above inequality, we get (recall that we take $g^0=G_{\mu}(x^0)$) $$\begin{aligned} \begin{aligned} &\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2\leq \Gamma_{k}\mathbb{E}\|G_{\mu}(x^0) - \mathsf{grad}f(x^{0})\|_{x^{0}}^2 \\ &+ \Gamma_{k}\sum_{i=1}^{k}\bigg( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2} + \frac{\tau_{i-1}^2}{\Gamma_{i}}\Tilde{\sigma}_{i-1}^2 + \frac{\tau_{i-1}}{\Gamma_i}\hat{\sigma}^2 \bigg) \\ &\leq \Gamma_{k}\tilde{\sigma}_0^2+ \Gamma_{k}\sum_{i=1}^{k}\bigg( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2} + \frac{\tau_{i-1}^2}{\Gamma_{i}}\Tilde{\sigma}_{i-1}^2 + \frac{\tau_{i-1}}{\Gamma_i}\hat{\sigma}^2 \bigg). \end{aligned} \end{aligned}$$ This proves [\[zo_gk_gradk_bound\]](#zo_gk_gradk_bound){reference-type="eqref" reference="zo_gk_gradk_bound"}.
From [\[tau-conditions\]](#tau-conditions){reference-type="eqref" reference="tau-conditions"} we have $$\begin{aligned} %\label{zo_temp2} \begin{split} & \sum_{k=1}^{N}\tau_k\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \\ \leq &\sum_{k=1}^{N}\tau_k\Gamma_{k} \sum_{i=1}^{k}\bigg( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2} + \frac{\tau_{i-1}^2}{\Gamma_{i}}\Tilde{\sigma}_{i-1}^2 + \frac{\tau_{i-1}}{\Gamma_i}\hat{\sigma}^2 \bigg)+\tilde{\sigma}_0^2 \\ = & \sum_{k=0}^{N-1}\bigg(\sum_{i=k+1}^{N}\tau_i\Gamma_{i}\bigg)\frac{1}{\Gamma_{k+1}}\bigg( (1+\tau_{k})\tau_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2} + \tau_{k}^2\Tilde{\sigma}_{k}^2 + \tau_{k}\hat{\sigma}^2 \bigg)+\tilde{\sigma}_0^2\\ \leq & \sum_{k=0}^{N-1}\bigg( (1+\tau_{k})\tau_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2} + \tau_{k}^2\Tilde{\sigma}_{k}^2 +\tau_k\hat{\sigma}^2 \bigg)+\tilde{\sigma}_0^2, \end{split} \end{aligned}$$ where we used $\sum_{k=1}^{N}\tau_k\Gamma_{k}\leq\Gamma_{1} = 1$ due to [\[tau-conditions\]](#tau-conditions){reference-type="eqref" reference="tau-conditions"}, so that the last term is simply $\tilde{\sigma}_0^2$. 
By using similar calculations, we have that $$\begin{aligned} %\label{zo_temp3} &\sum_{k=1}^{N}\tau_k^2\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2\leq\\ &\sum_{k=1}^{N}\tau_k^2\Gamma_{k} \sum_{i=1}^{k}\bigg( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2}+ \frac{\tau_{i-1}^2}{\Gamma_{i}}\Tilde{\sigma}_{i-1}^2 + \frac{\tau_{i-1}}{\Gamma_{i}}\hat{\sigma}^2 \bigg)+ \sum_{k=1}^{N}\tau_k^2\tilde{\sigma}_0^2\\ = & \sum_{k=0}^{N-1}\left(\sum_{i=k+1}^{N}\tau_i^2\Gamma_{i}\right)\frac{1}{\Gamma_{k+1}}\bigg( (1+\tau_{k})\tau_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2}+\tau_{k}^2\Tilde{\sigma}_{k}^2 + {\tau_{k}}\hat{\sigma}^2 \bigg)+ \sum_{k=1}^{N}\tau_k^2\tilde{\sigma}_0^2\\ \leq & \sum_{k=0}^{N-1}\tau_k\bigg( (1+\tau_{k})\tau_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2} + \tau_{k}^2\Tilde{\sigma}_{k}^2+\tau_k\hat{\sigma}^2 \bigg)+ \sum_{k=1}^{N}\tau_k^2\tilde{\sigma}_0^2, \end{aligned}$$ which completes the proof. 0◻\ **Lemma 3**. *Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} and [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"} hold. 
We have $$\begin{aligned} \label{zo_sum_gkplus1_gk} \begin{split} \sum_{k=1}^{N}&\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2\leq 2\sum_{k=0}^{N}\tau_k^2\hat{\sigma}^2 + 2\sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\sigma_{k}^2+2\sum_{k=0}^{N}\tau_k^2\Tilde{\sigma}_0^2\\& +2\sum_{k=0}^{N}(1+\tau_{k})\tau_{k}^2\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2}+ 2\sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\frac{8(d+4)}{m_k}\mathbb{E}\|\mathsf{grad}f(x^k)\|_{x^k}^2 \end{split} \end{aligned}$$ where the expectation $\mathbb{E}$ is taken with respect to all random variables up to iteration $k$, which includes the Gaussian variables $u$ in the zeroth-order estimator as in [\[zeroth_order_estimator\]](#zeroth_order_estimator){reference-type="eqref" reference="zeroth_order_estimator"}.* **Proof.** First note that $\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2 = \tau_k^2\|G_{\mu}^k - g^k\|_{x^k}^2 = \tau_k^2\|G_{\mu}^k - \mathsf{grad}f(x^k) + \mathsf{grad}f(x^k) - g^k\|_{x^k}^2 \leq 2\tau_k^2\|G_{\mu}^k - \mathsf{grad}f(x^k)\|_{x^k}^2 + 2\tau_k^2\|\mathsf{grad}f(x^k) - g^k\|_{x^k}^2$. Taking the expectation conditioned on $\mathcal{F}_{k-1}$, we get $$\begin{aligned} &\frac{1}{2}\mathbb{E}[\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2|\mathcal{F}_{k-1}]\\ \leq & \tau_k^2\mathbb{E}[\|G_{\mu}^k - \mathsf{grad}f(x^k)\|_{x^k}^2|\mathcal{F}_{k-1}] + \tau_k^2\mathbb{E}[\|\mathsf{grad}f(x^k) - g^k\|_{x^k}^2|\mathcal{F}_{k-1}]\\ \leq & \tau_k^2\bigg(\sigma_{k}^2+\frac{8(d+4)}{m_k}\mathbb{E}[\|\mathsf{grad}f(x^k)\|_{x^k}^2|\mathcal{F}_{k-1}]\bigg)+ \tau_k^2\mathbb{E}[\|\mathsf{grad}f(x^k) - g^k\|_{x^k}^2|\mathcal{F}_{k-1}], \end{aligned}$$ where last inequality is by Lemma [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"}. 
Now, using the law of total expectation to take the expectation over all random variables and summing over $k=1,\ldots,N$, we have $$\begin{aligned} & \frac{1}{2}\sum_{k=1}^{N}\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2\\ \leq & \sum_{k=1}^{N}\tau_k^2\sigma_{k}^2+ \sum_{k=1}^{N}\tau_k^2\frac{8(d+4)}{m_k}\mathbb{E}\|\mathsf{grad}f(x^k)\|_{x^k}^2 + \sum_{k=1}^{N}\tau_k^2\mathbb{E}\|\mathsf{grad}f(x^k) - g^k\|_{x^k}^2 \\ \leq & \sum_{k=0}^{N}\tau_k^2\hat{\sigma}^2 + \sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\sigma_{k}^2+\sum_{k=0}^{N}\tau_k^2\Tilde{\sigma}_0^2\\& +\sum_{k=0}^{N}(1+\tau_{k})\tau_{k}^2\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2}+ \sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\frac{8(d+4)}{m_k}\mathbb{E}\|\mathsf{grad}f(x^k)\|_{x^k}^2, \end{aligned}$$ where the second inequality is by Lemma [Lemma 2](#lemma_zo_gk_gradk){reference-type="ref" reference="lemma_zo_gk_gradk"}. 0◻\ Now we are ready to present our main result. **Theorem 1**. *Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} and [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"} hold. In Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}, we set $\mu=\mathcal{O}\bigg(\frac{1}{L d^{3/2}N^{1/4}}\bigg)$, and $\beta\geq 4 L$.
Then the following holds.* - *If we choose $\tau_0=1$, $\tau_k= {1}/{\sqrt{N}}$, $k\geq 1$ and $m_k\equiv 8(d+4)$, $k\geq 0$, then we have $\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \mathcal{O}({1}/{\sqrt{N}})$.* - *If we choose $\tau_0=1$, $\tau_k= {1}/{\sqrt{dN}}$, $k\geq 1$, $m_0=d$ and $m_k=1$ for $k\geq 1$, then we have $\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \mathcal{O}(\sqrt{{d}/{N}})$, for all $N = \Omega(d)$.* *Here the expectation $\mathbb{E}$ is taken with respect to all random variables up to iteration $k$, which includes the random variables $u$ in the zeroth-order estimator [\[zeroth_order_estimator\]](#zeroth_order_estimator){reference-type="eqref" reference="zeroth_order_estimator"}.* Before proceeding to the proof of Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}, we state the following Lemma [Lemma 4](#lemma_zo_inequalities){reference-type="ref" reference="lemma_zo_inequalities"}, which will be used in the proof. **Lemma 4**. *Suppose the parameters are chosen as in Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}. Then we have* *$$\begin{gathered} \frac{\tau_k}{2\beta} - \frac{\tau_k^2 L}{2 \beta^2} - \frac{(1+\tau_{k})\tau_{k}^2}{\beta}\frac{L^2}{\beta^2}\geq \frac{\tau_k}{4\beta}, \label{zo_temp_ineq1}\\ \frac{\tau_k}{2} - \bigg( 4\bigg(\frac{2L^2}{\beta^2}+1\bigg)(1+\tau_k)+1 \bigg)\frac{8(d+4)}{m_k}\tau_k^2\geq \frac{\tau_k}{4}. \label{zo_temp_ineq2} \end{gathered}$$* **Proof.** To show [\[zo_temp_ineq1\]](#zo_temp_ineq1){reference-type="eqref" reference="zo_temp_ineq1"}, using $\beta\geq 4L$, we just need to show that $\tau_k/8+(1+\tau_k)\tau_k/16\leq 1/4$, which clearly holds in both cases (i) and (ii) since $\tau_k\leq 1$.
As for [\[zo_temp_ineq2\]](#zo_temp_ineq2){reference-type="eqref" reference="zo_temp_ineq2"}, again by $\beta\geq 4L$ we just need to show that $\big( 4({1}/{8}+1)(1+\tau_k)+1 \big)({8(d+4)}/{m_k})\tau_k\leq {1}/{4}$. In case (i), this is equivalent to $18\tau_k^2+22\tau_k-1\leq 0$, which is guaranteed when $N \geq 520$. For case (ii), a similar calculation shows that we need $\tau_k\leq ({\sqrt{22^2+9/(d+4)} - 22})/{36}$, which is guaranteed when $N\geq 3.2\cdot 10^4\cdot {(d+4)^2}/{d}$. 0◻\ **Proof of Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}.** By the isometry property of parallel transport, $$\begin{aligned} & \eta(x^{k},g^{k}) - \eta(x^{k+1},g^{k+1}) = \frac{1}{2\beta}\|g^{k+1}\|_{x^{k+1}}^2 - \frac{1}{2\beta}\|g^{k}\|_{x^{k}}^2 \\ = & \frac{1}{2\beta}\|P_{x^{k+1}}^{x^{k}}g^{k+1}\|_{x^{k}}^2 - \frac{1}{2\beta}\|g^{k}\|_{x^{k}}^2 \\ = &-\langle -\frac{1}{\beta}g^{k}, P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k \rangle_{x^k} + \frac{1}{2\beta} \|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2.\end{aligned}$$ By combining this and Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, we have the following bound for the difference of the merit function (defined in [\[eta_def\]](#eta_def){reference-type="eqref" reference="eta_def"}), evaluated at successive iterates: $$%\label{thm3.1-proof-W-decerase} \begin{split} &W(x^{k+1}, g^{k+1}) - W(x^{k}, g^{k}) \\ \leq & - t_k \langle \mathsf{grad}f(x^k),g^k \rangle_{x^{k}} + \frac{t_k^2 L}{2} \|g^k\|_{x^k}^2 + \frac{1}{\beta}\langle g^k, P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k \rangle_{x^{k}} + \frac{1}{2\beta}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2 \\ % = & - t_k \langle \grad f(x^k),g^k \rangle_{x^{k}} + \frac{t_k^2 L}{2} \|g^k\|_{x^k}^2 + \frac{\tau_k}{\beta}\langle g^k, G_{\mu}^k - g^k \rangle_{x^{k}} + \frac{1}{2\beta}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2 \\ = & \bigg(\frac{ t_k^2 L}{2} - t_k\bigg) \|g^k\|_{x^k}^2 + t_k\langle g^k, G_{\mu}^k -
\mathsf{grad}f(x^k) \rangle_{x^{k}} + \frac{1}{2\beta}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2. \end{split}$$ Moreover, we have $$\begin{aligned} &\mathbb{E}_{u^k}[\langle g^k, G_{\mu}(x^k) - \mathsf{grad}f(x^k) \rangle_{x^k}] = \langle g^k, \mathbb{E}_{u^k} G_{\mu}(x^k) - \mathsf{grad}f(x^k) \rangle_{x^k} \\ \leq & \frac{1}{2}\|g^k\|_{x^k}^2 + \frac{1}{2}\|\mathbb{E}_{u^k} G_{\mu}(x^k) - \mathsf{grad}f(x^k)\|_{x^k}^2 \leq \frac{1}{2}\|g^k\|_{x^k}^2 + \frac{1}{2}\hat{\sigma}^2,\end{aligned}$$ where the expectation is only taken with respect to the Gaussian random variables that we used to construct $G_{\mu}(x^{k})$. Therefore, by using the law of total expectation, we have $\mathbb{E}W(x^{k+1}, g^{k+1}) - \mathbb{E}W(x^{k}, g^{k}) \leq \frac{1}{\beta}\left(\frac{\tau_k^2 L}{2\beta} - \frac{\tau_k}{2}\right) \mathbb{E}\|g^k\|_{x^k}^2 + \frac{\tau_k}{2\beta}\hat{\sigma}^2 + \frac{1}{2\beta}\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2$, and we thus have (by summing up the above inequality over $k=0,...,N$): $$\begin{aligned} \label{zo_W_decrease} \begin{aligned} &\sum_{k=0}^{N}\left(\mathbb{E}W(x^{k+1}, g^{k+1}) - \mathbb{E}W(x^{k}, g^{k})\right) \\ \leq & \sum_{k=0}^{N}\frac{1}{2\beta}\left(\frac{\tau_k^2 L}{\beta} - \tau_k\right) \mathbb{E}\|g^k\|_{x^k}^2 + \sum_{k=0}^{N}\frac{\tau_k}{2\beta}\hat{\sigma}^2 + \frac{1}{2\beta}\sum_{k=1}^{N}\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2, \end{aligned} \end{aligned}$$ where the last term sums from $1$ since $g^{1} - P_{x^{0}}^{x^{1}}g^0 = \tau_0(G_{\mu}^0-g^0)=0$. 
Utilizing [\[zo_sum_gkplus1_gk\]](#zo_sum_gkplus1_gk){reference-type="eqref" reference="zo_sum_gkplus1_gk"} and [\[zo_W\_decrease\]](#zo_W_decrease){reference-type="eqref" reference="zo_W_decrease"}, we have (note that $W\geq 0$) $$\begin{split} &\sum_{k=0}^{N}\frac{1}{2\beta}\left(\tau_k - \frac{\tau_k^2 L}{\beta}\right) \mathbb{E}\|g^k\|_{x^k}^2\leq W(x^0, g^0) + \sum_{k=0}^{N}\frac{\tau_k}{2\beta}\hat{\sigma}^2 + \frac{1}{2\beta}\sum_{k=1}^{N}\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2 \\ \leq & W(x^0, g^0) + \frac{1}{2\beta}\sum_{k=0}^{N}(\tau_k+2\tau_k^2)\hat{\sigma}^2 + \frac{1}{\beta}\sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\sigma_{k}^2+\frac{1}{\beta}\sum_{k=0}^{N}\tau_k^2\Tilde{\sigma}_0^2\\& +\frac{1}{\beta}\sum_{k=0}^{N}(1+\tau_{k})\tau_{k}^2\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2}+ \frac{1}{\beta}\sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\frac{8(d+4)}{m_k}\mathbb{E}\|\mathsf{grad}f(x^k)\|_{x^k}^2. \end{split}$$ Combining this with [\[zo_temp_ineq1\]](#zo_temp_ineq1){reference-type="eqref" reference="zo_temp_ineq1"} we have $$\begin{aligned} \label{zo_temp5} \begin{aligned} \sum_{k=0}^{N}\tau_k& \mathbb{E}\|g^k\|_{x^k}^2 \leq 4\beta W(x^0, g^0) + 2\sum_{k=0}^{N}(\tau_k+2\tau_k^2)\hat{\sigma}^2 + 4\sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\sigma_{k}^2\\& +4\sum_{k=0}^{N}\tau_k^2\Tilde{\sigma}_0^2+4\sum_{k=0}^{N}\left(\tau_k^2+\tau_k^3\right)\frac{8(d+4)}{m_k}\mathbb{E}\|\mathsf{grad}f(x^k)\|_{x^k}^2. 
\end{aligned} \end{aligned}$$ By Lemma [Lemma 2](#lemma_zo_gk_gradk){reference-type="ref" reference="lemma_zo_gk_gradk"} and [\[zo_temp5\]](#zo_temp5){reference-type="eqref" reference="zo_temp5"}, we get (also by $\tau_k\leq 1$) $$\begin{aligned} \label{zo_temp_ineq1.5} \begin{aligned} &\frac{1}{2}\sum_{k=0}^{N}\tau_k \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \sum_{k=0}^{N}\tau_k\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 + \sum_{k=0}^{N}\tau_k\mathbb{E}\|g^k\|_{x^{k}}^2 \\ \leq & \sum_{k=0}^{N-1}\tau_{k}^2\tilde{\sigma}_k^2+\sum_{k=0}^{N-1}\tau_k\hat{\sigma}^2 + \left(\frac{2L^2}{\beta^2}+1\right)\sum_{k=0}^{N}\tau_k\mathbb{E}\|g^k\|_{x^{k}}^2+2\tilde{\sigma}_0^2 \\ \leq & \left(\frac{8L^2}{\beta}+4\beta\right) W(x^0,g^0) + \sum_{k=0}^{N}\bigg[\tau_k+2\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k+2\tau_k^2)\bigg]\hat{\sigma}^2\\ &+\sum_{k=0}^{N}\bigg[ \tau_k^2+4\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k^2+\tau_k^3) \bigg]\sigma_{k}^2 +\bigg[ 4\left(\frac{2L^2}{\beta^2}+1\right)\sum_{k=0}^{N}\tau_k^2 + 2 \bigg]\Tilde{\sigma}_0^2 \\ &+ \sum_{k=0}^{N}\bigg[ 4\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k^2+\tau_k^3)+\tau_k^2 \bigg]\frac{8(d+4)}{m_k}\mathbb{E}\|\mathsf{grad}f(x^k)\|_{x^k}^2, \end{aligned} \end{aligned}$$ where $\tau_0\mathbb{E}\|g^{0} - \mathsf{grad}f(x^{0})\|_{x^{0}}^2\leq \Tilde{\sigma}_0^2$ is used in the last term on the second line. 
By combining [\[zo_temp_ineq1.5\]](#zo_temp_ineq1.5){reference-type="eqref" reference="zo_temp_ineq1.5"} and [\[zo_temp_ineq2\]](#zo_temp_ineq2){reference-type="eqref" reference="zo_temp_ineq2"} we get $$\begin{aligned} \label{zo_temp7} \begin{aligned} & \sum_{k=0}^{N}\frac{\tau_k}{4} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \\ \leq &\sum_{k=0}^{N}\bigg[\frac{\tau_k}{2} - \bigg( 4\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k^2+\tau_k^3)+\tau_k^2 \bigg)\frac{8(d+4)}{m_k}\bigg]\mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \\ \leq & \left(\frac{8L^2}{\beta}+4\beta\right) W(x^0,g^0) + \sum_{k=0}^{N}\bigg[\tau_k+2\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k+2\tau_k^2)\bigg]\hat{\sigma}^2\\ &+\sum_{k=0}^{N}\bigg[ \tau_k^2+4\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k^2+\tau_k^3) \bigg]\sigma_{k}^2 +\bigg[ 4\left(\frac{2L^2}{\beta^2}+1\right)\sum_{k=0}^{N}\tau_k^2 + 2 \bigg]\Tilde{\sigma}_0^2. \end{aligned} \end{aligned}$$ For case (i) in Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}, [\[zo_temp7\]](#zo_temp7){reference-type="eqref" reference="zo_temp7"} can be rewritten as $$\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \frac{c_1W(x^0, g^0)}{\sqrt{N}}+ c_2\hat{\sigma}^2 + \frac{c_3\frac{1}{N}\sum_{k=0}^{N}\sigma_{k}^2}{\sqrt{N}} + \frac{c_4}{\sqrt{N}}\Tilde{\sigma}_0^2,$$ for some absolute positive constants $c_1$, $c_2$, $c_3$ and $c_4$. The proof for case (i) is completed by noting that (see [\[def-tilde-sigma\]](#def-tilde-sigma){reference-type="eqref" reference="def-tilde-sigma"}) $\hat{\sigma}^2=\mathcal{O}(1/\sqrt{N})$, $\frac{1}{N}\sum_{k=0}^{N}\sigma_{k}^2=\mathcal{O}(1)$ and $\Tilde{\sigma}_0^2=\mathcal{O}(1)$. 
For case (ii) in Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}, [\[zo_temp7\]](#zo_temp7){reference-type="eqref" reference="zo_temp7"} can be rewritten as $$\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq c_1' W(x^0, g^0)\sqrt{\frac{d}{N}}+ c_2'\hat{\sigma}^2 + \frac{c_3'\frac{1}{N}\sum_{k=0}^{N}\sigma_{k}^2}{\sqrt{d N}} + c_4'\sqrt{\frac{d}{N}}\Tilde{\sigma}_0^2,$$ for some positive constants $c_1'$, $c_2'$, $c_3'$ and $c_4'$. The proof of case (ii) is completed by noting that $\Tilde{\sigma}_0^2=\mathcal{O}(1)$, $\hat{\sigma}^2=\mathcal{O}(1/\sqrt{N})$ and $\frac{1}{N}\sum_{k=0}^{N}\sigma_{k}^2=\mathcal{O}(d)$. 0◻\ **Remark 1**. *If we sample $R\in\{0,1,2,...,N\}$ with $\mathbb{P}(R=k)=\tau_k/(\textstyle \sum_{k=0}^{N}\tau_k)$, then the left-hand side of the inequalities in Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}, i.e., $\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2$, becomes $\mathbb{E}\|\mathsf{grad}f(x^{R})\|_{x^{R}}^2$. If we use this sampling in case (i) of Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}, then to get an $\epsilon$-approximate stationary solution as in Definition [Definition 2](#def:epsstat){reference-type="ref" reference="def:epsstat"}, we require an iteration complexity of $N=\mathcal{O}(1/\epsilon^4)$ and so an oracle complexity of $Nm=\mathcal{O}(d/\epsilon^4)$. Case (i) requires a batch size of $m=\mathcal{O}(d)$ per iteration, which might be inconvenient in practice. Case (ii) of Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"} avoids this, as in case (ii) both the iteration complexity and the oracle complexity are $N=\mathcal{O}(d/\epsilon^4)$, with batch size $m=\mathcal{O}(1)$. This makes case (ii) more convenient to use in practice, from a streaming or online perspective.
For the simulations in Section [5](#sec:experiments){reference-type="ref" reference="sec:experiments"}, we thus choose $m=\mathcal{O}(1)$ and apply the result from case (ii). We also remark that the above results provide a concrete answer to the question raised in [@scheinberg2022finite] on the need for mini-batches (and their per-iteration order) in zeroth-order stochastic optimization[^4].* **Remark 2**. *Notice that to prove [\[zo_temp_ineq2\]](#zo_temp_ineq2){reference-type="eqref" reference="zo_temp_ineq2"}, we need $N=\Omega(d)$ for case (ii) in Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}. We can remove this condition if, in addition, $\mathsf{grad}f(x)$ is uniformly bounded: $\|\mathsf{grad}f(x)\|_x\leq G,~ \forall x\in\mathcal{M}$; see also Assumption [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"} which we utilize in the next section. Under this condition, [\[zo_temp_ineq1.5\]](#zo_temp_ineq1.5){reference-type="eqref" reference="zo_temp_ineq1.5"} directly gives: $$\begin{aligned} \begin{aligned} &\frac{1}{2}\sum_{k=0}^{N}\tau_k \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2\leq \sum_{k=0}^{N}\bigg[\tau_k+2\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k+2\tau_k^2)\bigg]\hat{\sigma}^2\\ &+\sum_{k=0}^{N}\bigg[ \tau_k^2+4\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k^2+\tau_k^3) \bigg]\sigma_k^2 +\bigg[ 4\left(\frac{2L^2}{\beta^2}+1\right)\sum_{k=0}^{N}\tau_k^2 + 2 \bigg]\Tilde{\sigma}_0^2 \\ &+ \sum_{k=0}^{N}\bigg[ 4\left(\frac{2L^2}{\beta^2}+1\right)(\tau_k^2+\tau_k^3)+\tau_k^2 \bigg]\frac{8(d+4)}{m_k}G^2+\left(\frac{8L^2}{\beta}+4\beta\right) W(x^0,g^0), \end{aligned} \end{aligned}$$ whose right-hand side has the same order as [\[zo_temp7\]](#zo_temp7){reference-type="eqref" reference="zo_temp7"}.
Therefore in this case we do not need $N=\Omega(d)$ for case (ii) to achieve the same rates of convergence as in Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}.* # RASA with retractions and vector transports {#sec_avg_retr} Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"} is based on exponential mapping and parallel transport, which have a high per-iteration complexity for various manifold choices $\mathcal{M}$. In this section, we focus on reducing the per-iteration complexity of the `Zo-RASA` algorithm. The approach is based on replacing the exponential mapping and parallel transport with retractions and vector transports, respectively, which leads to practically efficient implementations and improved per-iteration complexity. The convergence analysis of algorithms with retractions and vector transports is sharply different from, and considerably harder than, the one we presented in Section [3](#sec_smooth_avg){reference-type="ref" reference="sec_smooth_avg"}. Recall that the analysis in Section [3](#sec_smooth_avg){reference-type="ref" reference="sec_smooth_avg"} relied on the isometry property [\[eq_isometry_parallel_trans\]](#eq_isometry_parallel_trans){reference-type="eqref" reference="eq_isometry_parallel_trans"} of the parallel transports, which is no longer available for vector transports. We hence assume, in Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, explicit global error bounds on the difference between the retraction and the exponential mapping, as well as between the vector transport and the parallel transport. In Section [4.1.2](#sec:examples){reference-type="ref" reference="sec:examples"} we provide conditions on the manifold under which such assumptions are naturally satisfied, together with explicit examples. 
Based on this, we establish that under a bounded fourth (instead of the second) central moment condition, the same sample complexity result as in the previous section can be obtained for the practical versions of `Zo-RASA` algorithm on compact manifolds. ## Approximation error of retractions and vector transports {#sec:errorbounds} We start with the following condition on the vector transport used; recall the notation from Definition [Definition 3](#def_vec_para_trans){reference-type="ref" reference="def_vec_para_trans"}. **Assumption 3**. *If $x^{+}=\mathsf{Retr}_{x}(g)$, $g\in \operatorname{T}_{x}\mathcal{M}$, then with $\mathsf{d}$ denoting the geodesic distance, the vector transport $\mathcal{T}_{g}$ satisfies the following inequalities: $$\begin{aligned} \label{eq_assump_vec_trans1} \|\mathcal{T}_{g}(v)\|_{x^{+}}\leq \|v\|_{x},\ \mathsf{d}(x, x^+)\leq \|g\|_x,\ \|\mathcal{T}_{g}(v) - P_{x}^{x^{+}}(v)\|_{x^{+}}\leq C \|v\|_{x}\mathsf{d}(x, x^+) \end{aligned}$$ for any vector $v\in \operatorname{T}_{x}\mathcal{M}$.* An intuitive explanation of the first two inequalities in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} is that our retraction and vector transport are "conservative": they do not produce lengths/magnitudes longer than those of the exact operations of exponential mapping and parallel transport. As for the last inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"}, we are essentially positing that the vector transport does not "twist" the vector too much, so that its difference from the parallel-transported vector stays small. In short, the conditions in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} require the vector transport to remain close to the parallel-transported vector in the new tangent space. 
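As a quick sanity check, the three inequalities in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} can be verified numerically on the unit sphere, where the projection retraction, the projection vector transport, and the parallel transport all have closed forms. The following is a sketch (assuming NumPy); the constant $C=1$ is used, which suffices on the unit sphere since its second fundamental form has unit norm:

```python
import numpy as np

rng = np.random.default_rng(0)

def retract(x, g):
    # projection retraction on the unit sphere: Retr_x(g) = (x + g) / ||x + g||
    w = x + g
    return w / np.linalg.norm(w)

def transport(y, v):
    # projection vector transport: orthogonal projection onto T_y S^{d-1}
    return v - np.dot(y, v) * y

def parallel(x, y, v):
    # closed-form parallel transport of v in T_x S^{d-1} along the minimal
    # geodesic from x to y (assumes x and y are not antipodal)
    u = x + y
    return v - (np.dot(u, v) / (1.0 + np.dot(x, y))) * u

d, tol = 5, 1e-12
for _ in range(100):
    x = rng.standard_normal(d); x /= np.linalg.norm(x)
    g = rng.standard_normal(d); g -= np.dot(x, g) * x   # tangent at x
    v = rng.standard_normal(d); v -= np.dot(x, v) * x   # tangent at x
    y = retract(x, g)
    dist = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))  # geodesic distance d(x, x+)
    # first inequality: the transport does not increase length
    assert np.linalg.norm(transport(y, v)) <= np.linalg.norm(v) + tol
    # second inequality: d(x, x+) <= ||g||  (here d(x, x+) = arctan||g||)
    assert dist <= np.linalg.norm(g) + tol
    # last inequality with C = 1: ||T(v) - P(v)|| <= ||v|| d(x, x+)
    assert np.linalg.norm(transport(y, v) - parallel(x, y, v)) <= np.linalg.norm(v) * dist + tol
```

All three assertions hold for every sampled tangent direction, consistent with [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"}.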
### Comparison to prior works {#sec:comparisonprior} We now provide a detailed comparison to similar types of conditions proposed in two prior works, [@huang2015broyden] and [@sato2022riemannian], and highlight the differences and advantages of our proposal. According to the definition of vector transport in Definition [Definition 3](#def_vec_para_trans){reference-type="ref" reference="def_vec_para_trans"}, we need to specify a retraction associated with the transport so that $\mathcal{T}_{\eta_x}(\xi_x)\in \operatorname{T}_{R_x(\eta_x)}\mathcal{M}$. In this section, we consider the projection retraction, denoted simply as $R$. Given two transports, $\mathcal{T}_{S}$ and $\mathcal{T}_{R}$, [@huang2015broyden] propose certain conditions on approximating one with the other. First, they require that $\mathcal{T}_{S}$ is isometric, i.e., $\langle\mathcal{T}_{S_{\eta}} (\xi), \mathcal{T}_{S_{\eta}} (\zeta)\rangle_{R_{x}(\eta)} = \langle\xi, \zeta\rangle_{x}$, hence we can essentially regard $\mathcal{T}_{S}$ as the parallel transport for the purpose of comparison. Let $\mathcal{T}_{R}$ denote the differential of the retraction, given by $\mathcal{T}_{R_\eta}(\xi)= D R_x(\eta)[\xi] = \frac{d}{d t}R_x(\eta + t\xi)\big|_{t=0} \in \operatorname{T}_{R_x(\eta)}\mathcal{M}$. Now the conditions stated in Equations (2.5) and (2.6) in [@huang2015broyden] are as follows: there exists a *neighborhood* $\mathcal{U}$ of $x$, such that $\forall y\in\mathcal{U}$ we have $\|\mathcal{T}_{S_\eta} - \mathcal{T}_{R_\eta}\|_{\mathrm{op}}\leq c_0\|\eta\|_x$ and $\|\mathcal{T}_{S_\eta}^{-1} - \mathcal{T}_{R_\eta}^{-1}\|_{\mathrm{op}}\leq c_0\|\eta\|_x$, where $\eta = R_{x}^{-1}(y)$ and $\|\cdot\|_{\mathrm{op}}$ is the operator norm. 
These assumptions are essentially local results, and as a consequence, [@huang2015broyden] needs to impose an additional stringent condition (see their Assumption 3.2) that all the updates in their algorithms are already sufficiently close to the (local) optimal value to prove their convergence results. With the above conditions (in particular for a $\mathcal{T}_{1_\eta}$ satisfying their conditions in (2.5) and (2.6)), [@huang2015broyden] shows in Lemma 3.5 that *locally* we have $\|\mathcal{T}_{1_\eta}(\xi) - \mathcal{T}_{2_\eta}(\xi)\|_{y}\leq c_0\|\eta\|_x\|\xi\|_x$. The proof of their Lemma 3.5 relies on the smoothness of the local coordinate form of the vector transports, which holds only when a single coordinate chart covers the local neighborhood under consideration. Hence, the assumptions in [@huang2015broyden] are of a different flavor than ours. In particular, our assumptions are global, and we show in Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"} that they follow from a certain (global) assumption on the second fundamental form of the manifold $\mathcal{M}$. The existing work [@huang2015broyden] also assumes the so-called locking condition $\mathcal{T}_{S_\eta}(\xi) =\beta \mathcal{T}_{R_\eta}(\xi)$, where $\beta=\|\xi\|_x/\|\mathcal{T}_{R_\xi}(\xi)\|_{R_{x}(\xi)}$, which means that the approximating transport keeps the same direction as the parallel transport $\mathcal{T}_{S}$. In our analysis, we avoid such a condition since we are trying to transport two vectors $g^k$ and $G_{\mu}^k$ (see Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}), and not just one previous gradient as in the Riemannian quasi-Newton method [@huang2015broyden]. Another existing work [@sato2022riemannian] requires algorithm-specific conditions in their Assumption 3.1. 
To elaborate, we recall that the *deterministic* Riemannian conjugate gradient iterates (Algorithm 1 in [@sato2022riemannian]) are given by $x_{k+1}\leftarrow R_{x_k}(t_k\eta_k)$ and $\eta_{k+1}\leftarrow-\mathsf{grad}f(x_{k+1})+\beta_{k+1} s_k \mathscr{T}^{k}(\eta_k),$ where $t_k$, $\beta_k$ and $s_k$ are parameters and $\mathscr{T}^{k}$ is a transport map from $\operatorname{T}_{x_k}\mathcal{M}$ to $\operatorname{T}_{x_{k+1}}\mathcal{M}$. Given this, their Assumption 3.1 requires that there exist $C\geq 0$ and index sets $K_1\subset\mathbb{N}$ and $K_2 = \mathbb{N}-K_1$ such that $\left\|\mathscr{T}^{(k)}\left(\eta_k\right)-\mathrm{D} R_{x_k}\left(t_k \eta_k\right)\left[\eta_k\right]\right\|_{x_{k+1}} \leq C t_k\left\|\eta_k\right\|_{x_k}^2, k \in K_1$ and $\left\|\mathscr{T}^{(k)}\left(\eta_k\right)-\mathrm{D} R_{x_k}\left(t_k \eta_k\right)\left[\eta_k\right]\right\|_{x_{k+1}} \leq C\left(t_k+t_k^2\right)\left\|\eta_k\right\|_{x_k}^2, k \in K_2$. Our assumption differs from the above in three aspects: (i) we do not make algorithm-specific assumptions in which each inequality depends on the iteration number $k$; (ii) we compare the transport not only of $\eta_k$ (which is the direction along which we update $x^k$) but also of the zeroth-order estimator $G_{\mu}^k$ (see Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}), i.e., we assume a more general inequality by replacing $\mathrm{D} R_{x}(t_k \eta)[\eta]$ with $\mathrm{D} R_{x}(t_k \eta)[\xi]$, where $\xi$ can be different from $\eta$; (iii) we *derive* the last inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} from a global assumption on the second fundamental form of the manifold $\mathcal{M}$ in Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"}, instead of *assuming* it. 
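To make this comparison concrete on a simple example: for the unit sphere with the projection retraction, the differentiated retraction $\mathcal{T}_{R_\eta}(\xi)=\mathrm{D} R_x(\eta)[\xi]$ has a closed form (by the quotient rule), and its deviation from the projection vector transport scales as $\mathcal{O}(\|\eta\|\|\xi\|)$, the same scaling as in the local bound of Lemma 3.5 of [@huang2015broyden]. A sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

def retract(x, g):
    # projection retraction on the unit sphere
    w = x + g
    return w / np.linalg.norm(w)

def diff_retract(x, eta, xi):
    # D R_x(eta)[xi] for the sphere's projection retraction (quotient rule):
    # d/dt (w + t xi)/||w + t xi|| at t = 0, with w = x + eta
    w = x + eta
    y = w / np.linalg.norm(w)
    return (xi - np.dot(y, xi) * y) / np.linalg.norm(w)

d = 6
for _ in range(100):
    x = rng.standard_normal(d); x /= np.linalg.norm(x)
    eta = rng.standard_normal(d); eta -= np.dot(x, eta) * x
    eta *= rng.uniform(0.0, 1.0) / np.linalg.norm(eta)   # keep ||eta|| <= 1
    xi = rng.standard_normal(d); xi -= np.dot(x, xi) * x
    y = retract(x, eta)
    t_r = diff_retract(x, eta, xi)
    t_proj = xi - np.dot(y, xi) * y                      # projection transport
    # the two transports differ by a factor 1/||x + eta||,
    # so the gap is of order ||eta|| ||xi||
    assert np.linalg.norm(t_r - t_proj) <= np.linalg.norm(eta) * np.linalg.norm(xi) + 1e-12
```

Here the gap is exactly $(1-1/\|x+\eta\|)\,\|(I-yy^\top)\xi\|$, which is at most $\tfrac{1}{2}\|\eta\|^2\|\xi\|$.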
### Illustrative Examples {#sec:examples} We now further inspect Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} by checking the conditions under which [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} holds in general, and also verifying it for various matrix manifolds arising in applications. We start with the first inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"}. It holds naturally if the manifold is a submanifold and the vector transport is the orthogonal projection, due to the non-expansiveness of orthogonal projections. The second inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} is much trickier. For the scope of this work, we show that the second inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} holds for projectional retractions and projectional vector transports on the Stiefel manifold, which includes spheres and orthogonal groups as special cases. If the inverse of the retraction in Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} is well-defined, the second inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} can equivalently be stated as $\|\mathsf{Exp}_{x}^{-1}(x^+)\|_{x}\leq \|\mathsf{Retr}_{x}^{-1}(x^+)\|_x$, which may hold for a larger class of manifolds and retractions. We leave a detailed study of this as future work. 
**Stiefel manifold.** Consider the Stiefel manifold $\operatorname{St}(d, p)$ defined in [\[stiefel\]](#stiefel){reference-type="eqref" reference="stiefel"}, with the tangent space $\operatorname{T}_{X}\operatorname{St}(d, p)=\{\xi|X^\top \xi+\xi^\top X=0\}$ and Euclidean inner product $\langle X, Y\rangle:=\mathop\mathrm{tr}(X^\top Y)$. We consider the projectional retraction [@absil2012projection] given by $X^+=R_X(\xi):=U V^\top$, where $X+\xi = U\Sigma V^\top$ is the (thin) singular value decomposition of $X+\xi$. Also, the projectional vector transport $\mathcal{T}$ simply projects a tangent vector $\xi\in \operatorname{T}_{X_0}\operatorname{St}(d, p)$ onto $\operatorname{T}_{X}\operatorname{St}(d, p)$. It is clear that $\|\mathcal{T}(\xi)\|\leq \|\xi\|$ due to the non-expansiveness of orthogonal projections (note that $\operatorname{T}_{X}\operatorname{St}(d, p)$ is simply a linear subspace). To show $\mathsf{d}(X, X^+)\leq \|\xi\|$, let $\gamma(t)$ denote the minimal geodesic connecting $X$ and $X^+$ with $\gamma(0)=X$ and $\gamma(1)=X^+$, so that $\mathsf{d}(X, X^+) = \int_{0}^{1}\|\gamma'(t)\| d t$. Notice that we can define another curve $c(t)=U(t)V^\top(t)$, where $X+t \xi=U(t)\Sigma(t)V^\top(t)$ is the singular value decomposition; the curve $c(t)=\mathsf{Retr}_{X}(t\xi)$ is the curve traced out by the projectional retraction. Since the minimal geodesic has the least length among curves on $\operatorname{St}(d, p)$ joining $X$ and $X^+$, we have $\mathsf{d}(X, X^+) = \int_{0}^{1}\|\gamma'(t)\| d t \leq \int_{0}^{1}\|c'(t)\| d t \leq \int_{0}^{1}\|\xi\| d t = \|\xi\|$, where $\|c'(t)\|\leq \|\xi\|$ is due to the non-expansiveness of orthogonal projections, namely, $\|c(t_1)-c(t_2)\|\leq \|X+t_1\xi - (X+t_2\xi)\|$. 
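The two ingredients of this argument, i.e., the non-expansiveness of the projection along $c(t)$ and the norm bound on the projectional vector transport, can be checked numerically; a sketch assuming NumPy, with random Stiefel points generated via QR:

```python
import numpy as np

rng = np.random.default_rng(2)
d, p = 8, 3

def rand_stiefel(rng, d, p):
    # random point on St(d, p) via QR of a Gaussian matrix
    q, _ = np.linalg.qr(rng.standard_normal((d, p)))
    return q

def proj_retract(x, xi):
    # projectional retraction: nearest point on St(d, p) to x + xi, i.e. U V^T
    u, _, vt = np.linalg.svd(x + xi, full_matrices=False)
    return u @ vt

def tangent_proj(x, a):
    # orthogonal projection of an ambient matrix a onto T_x St(d, p)
    s = x.T @ a
    return a - x @ (s + s.T) / 2

for _ in range(50):
    x = rand_stiefel(rng, d, p)
    xi = tangent_proj(x, rng.standard_normal((d, p)))
    t1, t2 = rng.uniform(0.0, 1.0, size=2)
    c1 = proj_retract(x, t1 * xi)
    c2 = proj_retract(x, t2 * xi)
    # non-expansiveness of the projection along the segment x + t xi
    assert np.linalg.norm(c1 - c2) <= abs(t1 - t2) * np.linalg.norm(xi) + 1e-10
    # projectional vector transport does not increase the norm
    y = proj_retract(x, xi)
    v = tangent_proj(x, rng.standard_normal((d, p)))
    assert np.linalg.norm(tangent_proj(y, v)) <= np.linalg.norm(v) + 1e-12
```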
Indeed, although $\operatorname{St}(d, p)$ is not a convex set, the non-expansiveness condition still holds [@gallivan2010note], because $(X+\xi)^\top(X+\xi)=I_p+\xi^\top\xi\succeq I_p$, and the projection of $X+\xi$ onto the Stiefel manifold is the same as the projection onto its convex hull $\{X\in\mathbb{R}^{d\times p}|\|X\|_2\leq 1\}$. Now we turn to the last inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"}. For a complete embedded submanifold, we show in Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"} that the last inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} holds under boundedness of the second fundamental form, provided the vector transport is the orthogonal projection onto the new tangent space. **Theorem 2**. *Suppose $\mathcal{M}$ is an embedded complete Riemannian submanifold of Euclidean space, and suppose that for all unit vectors $\xi,\eta\in \operatorname{T}\mathcal{M}$ with $\|\xi\|=\|\eta\|=1$, the norm of the second fundamental form $B(\xi,\eta)$ is bounded by a constant $C$. Then, for the parallel transport $P_{x}^{y}$ along the minimal geodesic from $x\in\mathcal{M}$ to $y\in\mathcal{M}$, we have $\|\mathsf{proj}_{\operatorname{T}_{y}\mathcal{M}}(v) - P_{x}^{y}(v)\|\leq C \|v\|\mathsf{d}(x,y)$, for any $v\in \operatorname{T}_{x}\mathcal{M}$. That is, the last inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} holds with constant $C$.* **Proof.** Without loss of generality, we assume $\|v\|=1$; otherwise, conduct the proof for $v/\|v\|$. Let $\gamma$ be the unit-speed minimal geodesic connecting $x$ and $y$, parameterized by $t$, and let $v(t)$ denote the parallel transport of $v$ along $\gamma$, so that $v(0)=v$. 
Now, for the extrinsic geometry, we decompose the fixed ambient vector $v$ as $v=v^{\top}(t)+v^{\bot}(t)$, where $v^{\top}(t)\in \operatorname{T}_{\gamma(t)}\mathcal{M}$ and $v^{\bot}(t)$ is orthogonal to $\operatorname{T}_{\gamma(t)}\mathcal{M}$. Note that the left-hand side of the inequality we want to prove is now parameterized as $\|v(t) - v^{\top}(t)\|$. Since $v(t)$ is a parallel transport of $v$, the tangential component of its derivative must vanish, i.e., $(v'(t))^\top = 0$. Now consider any parallel unit vector $z(t)\in \operatorname{T}_{\gamma(t)}\mathcal{M}$ along $\gamma$; then $\langle (v^\bot)'(t), z(t) \rangle = -\langle v^\bot(t), z'(t) \rangle= -\langle v^\bot(t), B(\gamma'(t),z(t)) \rangle,$ where $B$ is the second fundamental form. Along with the fact that $(v^\top)'=-(v^\bot)'$ (as $v$ is constant), we get $\langle (v^\top)'(t), z(t) \rangle =\langle v^\bot(t), B(\gamma'(t),z(t)) \rangle$. The right-hand side is uniformly bounded by $C$ (as $\|v^\bot(t)\|\leq\|v\|=1$), and since $z(t)\in \operatorname{T}_{\gamma(t)}\mathcal{M}$ was chosen arbitrarily, we get $\|((v^\top)'(t))^\top\|\leq C$. We can now bound the derivative of $\|v(t) - v^{\top}(t)\|$ as $(\|v(t) - v^{\top}(t)\|^2)' = (1 - 2\langle v(t), v^{\top}(t)\rangle + \|v^{\top}(t)\|^2)' = - 2\langle v(t), (v^{\top}(t))'\rangle + 2\langle v^{\top}(t), (v^{\top}(t))'\rangle = 2\langle v^{\top}(t) - v(t), ((v^{\top}(t))')^\top\rangle\leq 2 C\|v^{\top}(t) - v(t)\|,$ where we use $\|v(t)\|=1$, $\langle v'(t), v^{\top}(t)\rangle=0$ (since $v'(t)$ is normal), and the fact that $v^{\top}(t)-v(t)$ is tangent at $\gamma(t)$. Therefore, we get $\|v(t) - v^{\top}(t)\|'\leq C$. Integrating this inequality from $t=0$ to $t=\mathsf{d}(x,y)$ (recall that $\gamma$ has unit speed), we obtain $\|\mathsf{proj}_{\operatorname{T}_{y}\mathcal{M}}(v) - P_{x}^{y}(v)\|\leq C \mathsf{d}(x, y)$, which completes the proof. 
0◻\ Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"} connects extrinsic and intrinsic geometry by measuring the difference between orthogonal projection (an extrinsic operation) and parallel transport (an intrinsic operation), which might be of independent interest for the study of embedded submanifolds. The condition in Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"} is stronger than the bounded sectional curvature condition, since if the second fundamental form is bounded, the sectional curvature is also bounded by the Gauss formula (see Chapter 6, Theorem 2.5 in [@do1992riemannian]). We point out that the condition of Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"} is still satisfied by all the embedded submanifold applications we consider, namely the sphere, the orthogonal group and the Stiefel manifold. In particular, we have the following observation. **Proposition 1**. *Suppose $\mathcal{M}$ is a compact complete embedded Riemannian submanifold of Euclidean space (i.e., satisfying Assumption [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}). Then the norm of the second fundamental form $\|B(\xi,\eta)\|$ is uniformly bounded over all unit vectors $\xi,\eta\in \operatorname{T}\mathcal{M}$ with $\|\xi\|=\|\eta\|=1$.* The proof is immediate: $(\xi,\eta)\mapsto\|B(\xi,\eta)\|$ is a continuous function on the unit sphere bundle of $\mathcal{M}$, which is compact, and is therefore bounded above. As a result, Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} holds for all the embedded submanifold applications we consider, namely the sphere, the orthogonal group and the Stiefel manifold. **Remark 3**. 
*We remind the readers that Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"} requires the embedded submanifold assumption, yet Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} does not, as long as [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} holds. This is also the main reason why we summarize our assumption as in Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, rather than presenting Theorem [Theorem 2](#thm_assump_vec_trans){reference-type="ref" reference="thm_assump_vec_trans"} directly.* **Example: Grassmann manifold.** Above, we have shown that Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} holds for a class of embedded matrix submanifolds. Yet another setting is that of quotient manifolds (e.g., the Grassmann manifold), which arise in applications of Riemannian optimization. Such manifolds are not naturally embedded submanifolds of a Euclidean space, so we instead inspect Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} directly for such manifolds. Taking the Grassmann manifold as an example, we next verify Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}. To proceed, we utilize the following result. **Lemma 5**. *Suppose $X\in \operatorname{St}(d, p)$, $G\in\mathbb{R}^{d\times p}$ with $X^\top G=0$, and let $X+G=Q R$ be the QR decomposition, where $Q\in\operatorname{St}(d, p)$ and $R\in\mathbb{R}^{p\times p}$ is upper triangular. The principal-angle distance between the subspaces spanned by $X$ and $Q$ is given by $\|\Theta\|_F$, where $\Theta:=\arccos(\Sigma)$ and $\Sigma$ is the singular value matrix of $X^\top Q$, i.e., $X^\top Q=U \Sigma V^\top$; see, for example @edelman1998geometry [Section 4.3]. 
We have that $\|\Theta\|_F\leq \|G\|_F.$* **Proof.** Since $R^\top R = (X+G)^\top(X+G) = I_p+G^\top G$, we know that all the singular values of $R$ are greater than or equal to $1$. Denote $\Sigma=\mathop\mathrm{diag}([\sigma_1,...,\sigma_p])$. Since $X^\top Q = X^\top(X+G) R^{-1}=R^{-1}$, we know that the singular value decomposition of $R$ is $R=V \Sigma^{-1} U^\top$ (which implies that $\sigma_i\leq 1$, $\forall i=1,2,...,p$) and $\|R\|_F^2= \|\Sigma^{-1}\|_F^2 = \sum_{i=1}^{p}\frac{1}{\sigma_i^2}$. Also, as $\|R\|_F^2=\|X+G\|_F^2=\mathop\mathrm{tr}((X+G)^\top(X+G))=p+\|G\|_F^2,$ we get $\|G\|_F^2 = \sum_{i=1}^{p}\frac{1}{\sigma_i^2} - p$. Thus, $\|\Theta\|_F^2 = \|\arccos(\Sigma)\|_F^2=\sum_{i=1}^{p}(\arccos(\sigma_i))^2 \leq \sum_{i=1}^{p} (\frac{1}{\sigma_i^2} - 1)= \|G\|_F^2$, where we use the fact that $(\arccos(t))^2\leq \frac{1}{t^2}-1$, $\forall t\in(0, 1]$. 0◻\ Now we can inspect the Grassmann manifold. The Grassmann manifold $\operatorname{Gr}(d, p)$ is the set of all $p$-dimensional subspaces of $\mathbb{R}^d$; see, for example, [@absil2009optimization Section 2.1]. A quotient formulation writes $\operatorname{Gr}(d, p)=\operatorname{St}(d, p)/\mathcal{O}(p)$ with $\mathcal{O}(p)=\{Q\in\mathbb{R}^{p\times p}|Q^\top Q=I_p\}$ being the orthogonal group. The elements of the Grassmann manifold can be expressed as $[X]\in\operatorname{Gr}(d,p)$ with $[X]:=\{XQ|Q\in\mathcal{O}(p)\}$ and $X\in\operatorname{St}(d, p)$. An element $\bar{\xi}$ of the tangent space $\operatorname{T}_{[X]}\operatorname{Gr}(d, p)$ can be identified, via a one-to-one mapping (called the horizontal lift), with the set $[\xi]$, where $\xi\in \operatorname{T}_{X}\operatorname{St}(d, p)$ and $X^\top\xi=0$. Suppose we start from an element $[X]\in\operatorname{Gr}(d, p)$ with $X\in\operatorname{St}(d, p)$ and the initial speed $\bar{G}\in \operatorname{T}_{[X]}\operatorname{Gr}(d, p)$, where $G\in \operatorname{T}_{X}\operatorname{St}(d, p)$ and $X^\top G=0$. 
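Lemma [Lemma 5](#lemma_principal_angle){reference-type="ref" reference="lemma_principal_angle"} itself admits a quick numerical check before we use it: sample a random $X\in\operatorname{St}(d,p)$ and a random $G$ with $X^\top G=0$, form the QR decomposition of $X+G$, and compare $\|\Theta\|_F$ with $\|G\|_F$ (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
d, p = 10, 3

for _ in range(50):
    x, _ = np.linalg.qr(rng.standard_normal((d, p)))   # X in St(d, p)
    g = rng.standard_normal((d, p))
    g -= x @ (x.T @ g)                                 # enforce X^T G = 0
    q, r = np.linalg.qr(x + g)                         # QR of X + G
    sigma = np.linalg.svd(x.T @ q, compute_uv=False)   # singular values of X^T Q
    theta = np.arccos(np.clip(sigma, -1.0, 1.0))       # principal angles
    # Lemma 5: ||Theta||_F <= ||G||_F
    assert np.linalg.norm(theta) <= np.linalg.norm(g) + 1e-10
```

Note that possible sign flips in the columns of the computed $Q$ do not matter here, since they leave the singular values of $X^\top Q$ (and the spanned subspace) unchanged.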
We denote the singular value decomposition of $G=U\Sigma V^\top$ with $U\in\mathbb{R}^{d\times p}$ and $\Sigma, V\in\mathbb{R}^{p\times p}$. Then the exponential mapping is given by $Y := \mathsf{Exp}_{[X]}(\bar{G})=[X V\cos(\Sigma)+U\sin(\Sigma)]$, where $\sin$ and $\cos$ are matrix trigonometric functions; see [@absil2009optimization Example 5.4.3]. Also, the parallel transport is given by $\Bar{\xi_1}=P_{[X]}^{[Y]}(\bar{\xi})$ with $\xi_1=-X V\sin(\Sigma)U^\top \xi + U \cos(\Sigma)U^\top \xi + (I-U U^\top) \xi$; see [@absil2009optimization Example 8.1.3]. The projectional retraction is given by $Y' := \mathsf{Retr}_{[X]}(\bar{G})=[X + G] = [Q],$ where $X+G = QR$ is the QR decomposition of $X+G$; see [@absil2009optimization Example 4.1.5]. Furthermore, the projectional vector transport is given by $\Bar{\xi_2}=\mathcal{T}_{\Bar{G}}(\bar{\xi})$ with $\xi_2=(I - Y Y^\top)\xi$; see [@absil2009optimization Example 8.1.10]. Now we show that [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} is satisfied. It is obvious that $\|\mathcal{T}_{\Bar{G}}(\bar{\xi})\| = \|(I - Y Y^\top)\xi\|\leq \|\xi\|$. The geodesic distance between $[X]$ and the projectional retraction $[Q]$ is exactly $\|\Theta\|_F$, the principal-angle distance between the subspaces spanned by $X$ and $Q$; see [@edelman1998geometry Section 4.3]. Hence, by Lemma [Lemma 5](#lemma_principal_angle){reference-type="ref" reference="lemma_principal_angle"}, we conclude that $\mathsf{d}([X], [Q]) = \|\Theta\|_F\leq \|G\|_F$. Now we inspect the last inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"}. 
We can directly check that $\|\xi_1 - \xi_2\|_{F}=\|A\xi\|_{F}$, with $$\begin{aligned} A\coloneqq& -X V\sin(\Sigma)U^\top + U \cos(\Sigma)U^\top + Y Y^\top-U U^\top \\ =&-X V\sin(\Sigma)U^\top + U \cos(\Sigma)U^\top - U \cos^2(\Sigma)U^\top + X V\cos^2(\Sigma) V^\top X^\top \\&+ U \sin(\Sigma)\cos(\Sigma)V^\top X^\top + X V \cos(\Sigma)\sin(\Sigma)U^\top. \end{aligned}$$ Since $X^\top \xi=0$, the term $X V\cos^2(\Sigma) V^\top X^\top$ annihilates $\xi$ and can be dropped, so that $$\begin{aligned} \|A\xi\|\leq&\|-X V\sin(\Sigma)U^\top + U \cos(\Sigma)U^\top - U \cos^2(\Sigma)U^\top \\&+ U \sin(\Sigma)\cos(\Sigma)V^\top X^\top + X V \cos(\Sigma)\sin(\Sigma)U^\top\|\,\|\xi\| \\ \leq & \big(\|\sin(\Sigma)\| + \|\cos(\Sigma)(I - \cos(\Sigma))\| + 2\|\sin(\Sigma)\cos(\Sigma)\|\big)\|\xi\| \leq 4\|\sin(\Sigma)\|\|\xi\|\leq 4\|G\|\|\xi\|, \end{aligned}$$ where we use the fact that $X^\top X=U^\top U=V^\top V=I_p$ and all norms are the Frobenius norm. Therefore, we see that the last inequality in [\[eq_assump_vec_trans1\]](#eq_assump_vec_trans1){reference-type="eqref" reference="eq_assump_vec_trans1"} is satisfied with $C=4$. ## Convergence of retraction and vector transport based `Zo-RASA` {#sec:pracconvergence} We now proceed to the convergence analysis of the `Zo-RASA` algorithm with retractions and vector transports. Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} is the analog of Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"}, using retraction and vector transport. Notice that the zeroth-order estimator used in Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} is the one defined in [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"}, which is with respect to the retraction, in contrast to [\[zeroth_order_estimator\]](#zeroth_order_estimator){reference-type="eqref" reference="zeroth_order_estimator"}. 
Here $\mathcal{T}$ is the vector transport, and we write $\mathcal{T}^k:=\mathcal{T}_{- t_k g^{k}}$ for brevity. The vector transport we use in experiments is simply the orthogonal projection onto the target tangent space. For our analysis, apart from the smoothness condition in Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, we also need to assume that the manifold is compact. **Assumption 4**. *The manifold $\mathcal{M}$ is compact with diameter $D$, and the Riemannian gradient satisfies $\|\mathsf{grad}f(x)\|_x\leq G$.* Here, $G$ could potentially be a function of $D$ and the constant $L$ from Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, due to compactness and smoothness. We remark that this compactness assumption is satisfied by various matrix manifolds like the Stiefel manifold and the Grassmann manifold (see, for example, Lemma 5.1 in [@milnor1974characteristic]). The updates of $x^{k+1}$ and $g^{k+1}$ in Algorithm [\[algorithm2\]](#algorithm2){reference-type="ref" reference="algorithm2"} are changed, respectively, to $$\begin{aligned} x^{k+1}&\leftarrow \mathsf{Retr}_{x^{k}}(- t_k g^{k}) \quad\text{and}\quad g^{k+1}\leftarrow (1-\tau_k) \mathcal{T}^k (g^{k}) + \tau_k \mathcal{T}^k (G_{\mu}^k),\end{aligned}$$ where $G_{\mu}^k=G_{\mu}^{\mathsf{Retr}}(x^k)$ is given by [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"} with batch-size $m=m_k$. Turning to the stochastic gradient oracles, the bounded second moment condition in Assumption [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"} is now replaced by the following bounded fourth central moment condition. Such a condition is needed to conduct our convergence analysis. 
It would be interesting to relax this assumption, or to show that this condition is necessary and sufficient, for the design of batch-free, fully-online algorithms with vector transports and retractions. **Assumption 5**. *Along the trajectory of the algorithm, the stochastic gradients are unbiased and have bounded fourth central moment, i.e., for each $k \in \{1,\ldots, N\}$, we have $\mathbb{E}_\xi [\mathsf{grad}F(x^k;\xi_k)|\mathcal{F}_{k-1}] = \mathsf{grad}f(x^k)$ and $\mathbb{E}_\xi[\|\mathsf{grad}F(x^k;\xi_k) - \mathsf{grad}f(x^k)\|_{x^{k}}^4|\mathcal{F}_{k-1}]\leq \sigma^4$.* Note that Assumption [Assumption 5](#assumption3){reference-type="ref" reference="assumption3"} implies Assumption [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"}. To proceed with the convergence analysis of Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}, we also need to assume that the retraction we use in Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} is a second-order retraction, as in Assumption [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"}. **Assumption 6**. *The retraction we use in Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} is a second-order retraction, i.e., $\forall \xi\in \operatorname{T}_{x}\mathcal{M}$, we have $\mathsf{d}(\mathsf{Retr}_{x}(\xi), \mathsf{Exp}_{x}(\xi)) \leq C \|\xi\|_x^2$.* Note that the notion of a second-order retraction is only a local property, i.e., the above inequality holds only when $\|\xi\|$ is not too large. 
We use the term second-order retraction without this locality restriction: since we assume compactness of $\mathcal{M}$ in Assumption [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}, the condition in Assumption [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"} also holds for large $\|\xi\|$, with a constant $C$ that depends globally on the curvature of the manifold. We also point out that the condition in Assumption [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"} is satisfied by projectional retractions; see, for example, [@absil2012projection Proposition 2.2]. The study of higher-order (better) approximations to the exponential mapping by retractions is still an ongoing research topic [@gawlik2018high]; here we only need a second-order retraction. The following result in Lemma [Lemma 6](#lemma_comparison){reference-type="ref" reference="lemma_comparison"}, which is a standard comparison-type result, will be utilized in the subsequent proof. **Lemma 6** (Theorem 6.5.6 in [@burago2022course]). *Suppose the sectional curvature of $\mathcal{M}$ is bounded above. Then $\forall\xi,\eta\in \operatorname{T}_{x}\mathcal{M}$, we have $\|\xi - \eta\|_x\leq C\,\mathsf{d}(\mathsf{Exp}_{x}(\xi), \mathsf{Exp}_{x}(\eta))$; without loss of generality, we assume the constant to be $C=1$ for the rest of the paper.* The following result shows that with a second-order retraction, smoothness with respect to the exponential mapping implies smoothness with respect to the retraction. **Lemma 7**. 
*Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} and [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"} hold. If the retraction we use in Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} and [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"} satisfies Assumption [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"}, then there exists a parameter $L'>0$ such that $f$ is also $L'$-smooth with respect to the retraction, i.e., $|f(\mathsf{Retr}_{x}(\eta)) - f(x) - \langle\mathsf{grad}f(x), \eta\rangle_x|\leq \frac{L'}{2}\|\eta\|_x^2,\ \forall \eta\in \operatorname{T}_{x}\mathcal{M}.$ From now on, we denote by $L$ a parameter that satisfies both Assumption [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"} and Lemma [Lemma 7](#lemma_exp_retr_smoothness){reference-type="ref" reference="lemma_exp_retr_smoothness"}, for brevity.* **Proof.** Denote $y=\mathsf{Retr}_x(\eta)$. 
Note that we have $|f(y) - f(x) - \langle\mathsf{grad}f(x), \eta\rangle_x| \leq |f(y) - f(x) - \langle\mathsf{grad}f(x), \mathsf{Exp}_{x}^{-1}(y)\rangle_x| + |\langle\mathsf{grad}f(x), \mathsf{Exp}_{x}^{-1}(y) - \eta\rangle_x| \leq L \|\mathsf{Exp}_{x}^{-1}(y)\|_x^2 + \|\mathsf{grad}f(x)\|_x \|\eta - \mathsf{Exp}_{x}^{-1}(y)\|_x \leq L \|\eta\|_x^2 + G \mathsf{d}(\mathsf{Exp}_{x}(\eta), y) \leq (L+G C) \|\eta\|_x^2 =: \frac{L'}{2} \|\eta\|_x^2$ (i.e., we may take $L'=2(L+GC)$), where the second inequality is by Assumption [Assumption 2](#assumption0_1){reference-type="ref" reference="assumption0_1"} together with the Cauchy--Schwarz inequality, the third is by Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} and Lemma [Lemma 6](#lemma_comparison){reference-type="ref" reference="lemma_comparison"}, and the last inequality is by Assumption [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"}. 0◻\ We remind the reader that Lemma [Lemma 7](#lemma_exp_retr_smoothness){reference-type="ref" reference="lemma_exp_retr_smoothness"} guarantees that the retraction-based zeroth-order estimator [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"} still satisfies the bounds in Lemma [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"}. In addition, we have the following bound on the fourth moment of $G_{\mu}^{\mathsf{Retr}}$. **Lemma 8**. *Consider $G_{\mu}$ given by [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"}.
Under Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"} and [Assumption 5](#assumption3){reference-type="ref" reference="assumption3"}, we have $\mathbb{E}\|G_{\mu}^{\mathsf{Retr}}(x)\|_{x}^4 \leq \frac{\mu^4 L^4}{2}(d+12)^6 + 3d^2\|\mathsf{grad}f(x)\|_{x}^4$, where the expectation is taken with respect to the Gaussian vectors used in constructing $G_{\mu}$ and the random variable $\xi$.* **Proof.** Note that $\mathbb{E}\|G_{\mu}^{\mathsf{Retr}}(x)\|_{x}^4 = \frac{1}{\mu^4}\mathbb{E}_{u}[(f(\mathsf{Retr}_x(\mu u)) - f(x))^4\|u\|_{x}^4]$ and $$\begin{aligned} &(f(\mathsf{Retr}_x(\mu u)) - f(x))^4 \\ = & (f(\mathsf{Retr}_x(\mu u)) - f(x) - \langle\mathsf{grad}f(x), \mu u\rangle_x + \langle\mathsf{grad}f(x), \mu u\rangle_x)^4 \\ \leq & 8(f(\mathsf{Retr}_x(\mu u)) - f(x) - \langle\mathsf{grad}f(x), \mu u\rangle_x)^4+8(\langle\mathsf{grad}f(x), \mu u\rangle_x)^4 \\ \leq & 8 \left(\frac{L}{2}\|\mu u\|_x^2\right)^4 + 8(\langle\mathsf{grad}f(x), \mu u\rangle_x)^4, \end{aligned}$$ where the last inequality is by Lemma [Lemma 7](#lemma_exp_retr_smoothness){reference-type="ref" reference="lemma_exp_retr_smoothness"}. Therefore, we have $$\begin{aligned} \mathbb{E}\|G_{\mu}^{\mathsf{Retr}}(x)\|_{x}^4 & \leq \frac{\mu^4 L^4}{2}\mathbb{E}\|u\|_{x}^{12} + 8\mathbb{E}[\langle\mathsf{grad}f(x), u\rangle_x^4\|u\|_{x}^4] \\ & \leq \frac{\mu^4 L^4}{2}(d+12)^6 + 8\mathbb{E}[\langle\mathsf{grad}f(x), u\rangle_x^4\|u\|_{x}^4], \end{aligned}$$ where the last inequality is by Lemma 2 in [@li2022stochastic]. It remains to bound the last term on the right-hand side, for which we apply the same trick as in Proposition 1 in [@li2022stochastic].
Since $u$ is a Gaussian vector on the $d$-dimensional tangent space $\operatorname{T}_x\mathcal{M}$, we can calculate the expectation directly as an integral (denote $g=\mathsf{grad}f(x)$ and omit the subscript $x$ for simplicity): $$\begin{aligned} \begin{aligned} & \mathbb{E}(\| \langle\mathsf{grad}f(x),u\rangle u\|^4) = \frac{1}{\kappa(d)}\int_{\mathbb{R}^d}\langle g, x\rangle^4 \|x\|^4 e^{-\frac{1}{2}\|x\|^2}d x \\ \leq & \frac{1}{\kappa(d)}\int_{\mathbb{R}^d}\|x\|^4 e^{-\frac{\tau}{2}\|x\|^2}\langle g, x\rangle^4 e^{-\frac{1-\tau}{2}\|x\|^2}d x \leq \frac{1}{\kappa(d)}\bigg(\frac{4}{\tau e}\bigg)^2\int_{\mathbb{R}^d}\langle g, x\rangle^4 e^{-\frac{1-\tau}{2}\|x\|^2}d x \\ = &\frac{1}{\kappa(d)}\bigg(\frac{4}{\tau e}\bigg)^2\bigg(\frac{1}{1-\tau}\bigg)^{d/2+2}\int_{\mathbb{R}^d} \langle g, x\rangle^4 e^{-\frac{1}{2}\|x\|^2}d x = 48\bigg(\frac{1}{\tau e}\bigg)^2\bigg(\frac{1}{1-\tau}\bigg)^{d/2+2}\|g\|^4, \end{aligned} \end{aligned}$$ where $\kappa(d):=\int_{\mathbb{R}^d}e^{-\frac{1}{2}\|x\|^2}d x$ is the normalizing constant of the Gaussian distribution, the second inequality uses the fact that $x^p e^{-\frac{\tau}{2}x^2}\leq (\frac{p}{\tau e})^{p/2}$ for $x\geq 0$ (applied with $p=4$), the second equality is by a change of variables, and the last equality is by $\mathbb{E}_{x\sim\mathcal{N}(0,I_d)}\langle g, x\rangle^4=3\|g\|^4$. Taking $\tau = {4}/{d}$ gives the desired result. 0◻\ We now provide the convergence result for `Zo-RASA` (Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}). We remind the reader that we assume $C=1$ in both Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} and Assumption [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"}.
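The Gaussian moment facts driving the proof above are easy to sanity-check numerically. The following Monte-Carlo sketch (ours, not from the paper) verifies the identity $\mathbb{E}\langle g,u\rangle^4=3\|g\|^4$ used in the last equality, and the $\mathcal{O}(d^2\|g\|^4)$ order of the mixed moment $\mathbb{E}[\langle g,u\rangle^4\|u\|^4]$; the constants in the assertions are loose order-of-magnitude checks, not the paper's exact constants.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 200_000
g = rng.standard_normal(d)
u = rng.standard_normal((n, d))      # n samples of u ~ N(0, I_d)
ip = u @ g                           # <g, u> for each sample
gn4 = np.linalg.norm(g) ** 4

# <g, u> ~ N(0, ||g||^2), hence E<g,u>^4 = 3 ||g||^4
m4 = np.mean(ip ** 4)
assert abs(m4 / (3 * gn4) - 1) < 0.1

# the mixed moment E[<g,u>^4 ||u||^4] is of order d^2 ||g||^4
mixed = np.mean(ip ** 4 * np.sum(u ** 2, axis=1) ** 2)
assert d ** 2 * gn4 < mixed < 10 * d ** 2 * gn4
```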
We first need the following Lemma [Lemma 9](#lemma_zo_gk_gradk_vec_trans){reference-type="ref" reference="lemma_zo_gk_gradk_vec_trans"}, which is an analogue of Lemma [Lemma 2](#lemma_zo_gk_gradk){reference-type="ref" reference="lemma_zo_gk_gradk"}. **Lemma 9**. *Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}, [Assumption 5](#assumption3){reference-type="ref" reference="assumption3"} and [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"} hold, and $\{x^k,g^k\}$ is generated by Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}. We have $$\begin{split} \mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \Gamma_{k}\tilde{\sigma}_0^2 + \Gamma_{k}\sum_{i=1}^{k}\Big( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2}+\frac{\tau_{i-1}^2}{\Gamma_{i}}\tilde{\sigma}_{i-1}^2+\frac{\tau_{i-1}}{\Gamma_{i}}\hat{\sigma}^2 \Big), \end{split}$$ where the expectation $\mathbb{E}$ is taken with respect to all random variables up to iteration $k$, including the Gaussian variables $\{u_i\}_{i=1}^k$ in the zeroth-order estimator [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"}, and $\tilde{\sigma}_k^2$ is defined in [\[def-tilde-sigma\]](#def-tilde-sigma){reference-type="eqref" reference="def-tilde-sigma"}.
Further, from the definition of $\tau_k$ in [\[tau-2-choices\]](#tau-2-choices){reference-type="eqref" reference="tau-2-choices"}, we have $$\begin{split} \sum_{k=1}^{N}\tau_k\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \sum_{k=0}^{N-1}\bigg( (1+\tau_{k})\tau_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2} + \tau_{k}^2\tilde{\sigma}_k^2+\tau_k\hat{\sigma}^2 \bigg)+\tilde{\sigma}_0^2, \\%\label{zo_sum_gk_gradk_bound_vec_1}\\ \sum_{k=1}^{N}\tau_k^2\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \sum_{k=0}^{N-1}\bigg( (1+\tau_{k})\tau^2_{k}\frac{L^2\mathbb{E}\|g^{k}\|_{x^{k}}^2}{\beta^2} + \tau_{k}^3\tilde{\sigma}_k^2+\tau^2_k\hat{\sigma}^2 \bigg)+ \sum_{k=1}^{N}\tau_k^2\tilde{\sigma}_0^2.% \label{zo_sum_gk_gradk_bound_vec_2} \end{split}$$* **Proof.**\[Proof of Lemma [Lemma 9](#lemma_zo_gk_gradk_vec_trans){reference-type="ref" reference="lemma_zo_gk_gradk_vec_trans"}\] Following the same process as the proof of Lemma [Lemma 2](#lemma_zo_gk_gradk){reference-type="ref" reference="lemma_zo_gk_gradk"} (the key difference being that here we need Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} to show $\mathsf{d}(x^i,x^{i+1})^2 \leq t_i^2\|g^i\|_{x^i}^2$), we have $$\begin{aligned} \label{zo_temp_vec1} \begin{split} \|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq& (1-\tau_{k-1})\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \tau_{k-1}\|e_{k-1}\|_{x^{k}}^2 + \tau_{k-1}^2\|\Delta_{k-1}^{f}\|_{x^{k}}^2 \\ & + 2\tau_{k-1}\langle (1-\tau_{k-1})\mathcal{T}^{k-1}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}, \Delta_{k-1}^{f}\rangle_{x^{k}}, \end{split} \end{aligned}$$ where the notation is now defined as: $$\begin{aligned} \begin{split} e_{k-1} &\coloneqq \frac{1}{\tau_{k-1}}(\mathcal{T}^{k-1}\mathsf{grad}f(x^{k-1}) - \mathsf{grad}f(x^{k})),\\ \Delta_{k-1}^{f} &\coloneqq \mathcal{T}^{k-1} (G_{\mu}^{k-1} - \mathsf{grad}f(x^{k-1})). \end{split} \end{aligned}$$ Denote $\delta_{k-1}=\langle (1-\tau_{k-1})\mathcal{T}^{k-1}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}, \Delta_{k-1}^{f}\rangle_{x^{k}}$. We have by Lemma [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"} and Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} that $$\begin{aligned} \begin{split} 2\mathbb{E}_{u^k}[\delta_{k-1}] =& 2\langle (1-\tau_{k-1})\mathcal{T}^{k-1}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}, \mathbb{E}_{u^k}[\Delta_{k-1}^{f}|\mathcal{F}_{k-2}]\rangle_{x^{k}} \\ \leq & \|(1-\tau_{k-1})\mathcal{T}^{k-1}(g^{k-1} - \mathsf{grad}f(x^{k-1})) + \tau_{k-1}e_{k-1}\|_{x^{k} }^2 + \|\mathbb{E}_{u^k} G_{\mu}^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 \\ \leq & (1-\tau_{k-1})\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \tau_{k-1}\|e_{k-1}\|_{x^{k} }^2 + \hat{\sigma}^2.
\end{split} \end{aligned}$$ Notice that, in the above computation, the expectation is only taken with respect to the Gaussian random variables used to construct $G_{\mu}(x^{k-1})$. Plugging this back into [\[zo_temp_vec1\]](#zo_temp_vec1){reference-type="eqref" reference="zo_temp_vec1"}, we have $$\begin{aligned} \mathbb{E}_{u^k}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2\leq & \tau_{k-1}\hat{\sigma}^2+ (1-\tau_{k-1}^2)\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 \\ & + \tau_{k-1}(1+\tau_{k-1})\|e_{k-1}\|_{x^{k}}^2 + \tau_{k-1}^2\|\Delta_{k-1}^{f}\|_{x^{k}}^2. \end{aligned}$$ Now, dividing both sides of this inequality by our new definition of $\Gamma_k$, we get $$\begin{aligned} \begin{split} &\frac{1}{\Gamma_k}\mathbb{E}_{u^k}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2\\ \leq& \frac{1}{\Gamma_{k-1}}\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2 + \frac{(1+\tau_{k-1})\tau_{k-1}}{\Gamma_k}\|e_{k-1}\|_{x^{k}}^2 + \frac{\tau_{k-1}^2}{\Gamma_k}\|\Delta_{k-1}^{f}\|_{x^{k}}^2 + \frac{\tau_{k-1}}{\Gamma_k}\hat{\sigma}^2. \end{split} \end{aligned}$$ By Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"} and [Assumption 5](#assumption3){reference-type="ref" reference="assumption3"}, we have: $$\begin{aligned} \begin{split} \|e_{i}\|_{x^{i+1}}^2&\leq \frac{L^2}{\tau_i^2}\mathsf{d}(x^i,x^{i+1})^2 \leq \frac{L^2 t_i^2\|g^i\|_{x^i}^2}{\tau_i^2} = \frac{L^2\|g^i\|_{x^i}^2}{\beta^2}. \\ \mathbb{E}[\|\Delta_{i}^{f}\|_{x^{i+1}}^2|\mathcal{F}_{i-1}]&\leq \sigma_i^2+\frac{8(d+4)}{m_i}\mathbb{E}[\|\mathsf{grad}f(x^i)\|_{x^i}^2|\mathcal{F}_{i-1}].
\end{split} \end{aligned}$$ Hence, by applying the law of total expectation (to take the expectation over all random variables), we have $$\begin{aligned} \begin{split} \frac{1}{\Gamma_k}\mathbb{E}\|g^{k} &- \mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \frac{1}{\Gamma_{k-1}}\mathbb{E}\|g^{k-1} - \mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2\\ &+ \frac{(1+\tau_{k-1})\tau_{k-1}}{\Gamma_k}\frac{L^2\mathbb{E}\|g^{k-1}\|_{x^{k-1}}^2}{\beta^2} + \frac{\tau_{k-1}^2}{\Gamma_k}\Tilde{\sigma}_{k-1}^2 + \frac{\tau_{k-1}}{\Gamma_k}\hat{\sigma}^2. \end{split} \end{aligned}$$ Now, by telescoping the sum in the above equation, we get (note that we take $g^0=G_{\mu}(x^0)$) $$\begin{aligned} \begin{aligned} &\mathbb{E}\|g^{k} - \mathsf{grad}f(x^{k})\|_{x^{k}}^2\leq \Gamma_{k}\mathbb{E}\|G_{\mu}(x^0) - \mathsf{grad}f(x^{0})\|_{x^{0}}^2 \\ &+ \Gamma_{k}\sum_{i=1}^{k}\bigg( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2} + \frac{\tau_{i-1}^2}{\Gamma_{i}}\Tilde{\sigma}_{i-1}^2 + \frac{\tau_{i-1}}{\Gamma_i}\hat{\sigma}^2 \bigg) \\ &\leq \Gamma_{k}\tilde{\sigma}_0^2+ \Gamma_{k}\sum_{i=1}^{k}\bigg( \frac{(1+\tau_{i-1})\tau_{i-1}}{\Gamma_{i}}\frac{L^2\mathbb{E}\|g^{i-1}\|_{x^{i-1}}^2}{\beta^2} + \frac{\tau_{i-1}^2}{\Gamma_{i}}\Tilde{\sigma}_{i-1}^2 + \frac{\tau_{i-1}}{\Gamma_i}\hat{\sigma}^2 \bigg). \end{aligned} \end{aligned}$$ This proves the first inequality in the lemma statement. The rest of the proof is exactly the same as that of Lemma [Lemma 2](#lemma_zo_gk_gradk){reference-type="ref" reference="lemma_zo_gk_gradk"}. 0◻\ To bound the term $\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^k}^2$, we first need the following bound on $\|g^k\|_{x^{k}}$. **Lemma 10**. *Consider $g^k$ given by Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}.
Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}, [Assumption 5](#assumption3){reference-type="ref" reference="assumption3"} and [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"} hold. Then, we have $\mathbb{E}\|g^{k}\|_{x^{k}}^2\leq \mu^2 L^2(d + 6)^3 + 2(d+4) G^2$ and $\mathbb{E}\|g^{k}\|_{x^{k}}^4\leq \frac{\mu^4L^4}{2}(d+12)^6 + 3d^2 G^4$, where the expectation $\mathbb{E}$ is taken with respect to all random variables up to iteration $k$.* **Proof.** Note that we have $$\begin{aligned} \|g^{k}\|_{x^{k}}^2 & = \|(1-\tau_{k-1}) \mathcal{T}^{k-1} (g^{k-1}) + \tau_{k-1} \mathcal{T}^{k-1} (G_{\mu}^{k-1})\|_{x^{k}}^2 \\ & \leq (1-\tau_{k-1})\|g^{k-1}\|_{x^{k-1}}^2 + \tau_{k-1}\|G_{\mu}^{k-1}\|_{x^{k-1}}^2. \end{aligned}$$ Taking the expectation conditioned on $\mathcal{F}_{k-1}$ (with respect to which $g^{k-1}$ is measurable), we have by Lemma [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"} that $\mathbb{E}[\|g^{k}\|_{x^{k}}^2|\mathcal{F}_{k-1}]\leq (1-\tau_{k-1})\|g^{k-1}\|_{x^{k-1}}^2 + \tau_{k-1}(\mu^2 L^2(d + 6)^3 + 2(d+4) \|\mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^2)$. We remove the conditioning by the law of total expectation; combined with Assumption [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}, this gives $$\begin{aligned} \mathbb{E}\|g^{k}\|_{x^{k}}^2\leq (1-\tau_{k-1})\mathbb{E}\|g^{k-1}\|_{x^{k-1}}^2 + \tau_{k-1}(\mu^2 L^2(d + 6)^3 + 2(d+4) G^2). \end{aligned}$$ Denoting $A_k=\mathbb{E}\|g^{k}\|_{x^{k}}^2$, we thus have $A_k\leq (1-\tau_{k-1})A_{k-1} + \tau_{k-1}(\mu^2 L^2(d + 6)^3 + 2(d+4) G^2)$.
Again from Lemma [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"} we have $A_0 \leq \mu^2 L^2(d + 6)^3 + 2(d+4) G^2$, from which, by induction, we conclude that $A_k = \mathbb{E}\|g^{k}\|_{x^{k}}^2\leq \mu^2 L^2(d + 6)^3 + 2(d+4) G^2$. As for the fourth moment, note that $$\begin{aligned} % \begin{aligned} \mathbb{E}&(\|g^{k}\|_{x^{k}}^2)^2 \leq \mathbb{E}\left((1-\tau_{k-1})\|g^{k-1}\|_{x^{k-1}}^2 + \tau_{k-1}\|G_{\mu}^{k-1}\|_{x^{k-1}}^2\right)^2 \\ \leq& (1-\tau_{k-1})\mathbb{E}\|g^{k-1}\|_{x^{k-1}}^4 + \tau_{k-1}\mathbb{E}\|G_{\mu}^{k-1}\|_{x^{k-1}}^4\\ \leq & (1-\tau_{k-1})\mathbb{E}\|g^{k-1}\|_{x^{k-1}}^4 + \tau_{k-1}\bigg(\frac{\mu^4 L^4}{2 }(d+12)^6 + 3d^2\,\mathbb{E}\|\mathsf{grad}f(x^{k-1})\|_{x^{k-1}}^4\bigg) % \end{aligned} \end{aligned}$$ where the last inequality is by Lemma [Lemma 8](#bound_zo_estimator_4th){reference-type="ref" reference="bound_zo_estimator_4th"}. The final result follows similarly to the second-moment case. 0◻\ Now we are ready to study the difference between $g^k$ and $g^{k+1}$. **Lemma 11**. *Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}, [Assumption 5](#assumption3){reference-type="ref" reference="assumption3"} and [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"} hold, and take $\tau_k$ as in [\[tau-2-choices\]](#tau-2-choices){reference-type="eqref" reference="tau-2-choices"}.
Then, we have $$\begin{aligned} \label{zo_sum_gkplus1_gk_retr} \begin{split} \sum_{k=1}^{N}\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} &- g^k\|_{x^k}^2\leq \frac{4L^2}{\beta^2}\sum_{k=0}^{N-1}(1+\tau_{k})\tau^2_{k}\mathbb{E}\|g^{k}\|_{x^{k}}^2 + 4\sum_{k=0}^{N}(\tau_k^2+\tau_k^3)\Tilde{\sigma}_k^2\\& +\left[4\tilde{\sigma}_0^2 + 4\hat{\sigma}^2 + \frac{8}{\beta^2}\bigg(\frac{\mu^4L^4}{2}(d+12)^6 + 3d^2 G^4\bigg)\right]\sum_{k=0}^{N}\tau_k^2, \end{split} \end{aligned}$$ where the expectation $\mathbb{E}$ is taken with respect to all random variables up to iteration $k$, which includes the random variables $u$ in the zeroth-order estimator [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"}.* **Proof.** Since $$\begin{aligned} \begin{aligned} &\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^{k}}^2 = \|g^{k+1} - P_{x^{k}}^{x^{k+1}}g^k\|_{x^{k+1}}^2 \\ \leq & 2\|g^{k+1} - \mathcal{T}^k g^k\|_{x^{k+1}}^2 + 2\|\mathcal{T}^k g^k - P_{x^{k}}^{x^{k+1}}g^k\|_{x^{k+1}}^2 \\ \leq & 2\tau_k^2 \|G_{\mu}^k - g^k\|_{x^{k}}^2+2\mathsf{d}(x^{k+1}, x^{k})^2\|g^k\|_{x^{k}}^2\\ \leq & 4\tau_k^2 \|G_{\mu}^k - \mathsf{grad}f(x^k)\|_{x^{k}}^2 + 4\tau_k^2 \|\mathsf{grad}f(x^k) - g^k\|_{x^{k}}^2 +2\frac{\tau_k^2}{\beta^2} \|g^k\|_{x^{k}}^4, \end{aligned} \end{aligned}$$ where the second inequality is by the update and Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, and the last inequality is by Assumption [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}. Now taking the expectation conditioned on $\mathcal{F}_{k-1}$ we get: $$\begin{aligned} \mathbb{E}[\|P_{x^{k+1}}^{x^{k}}g^{k+1} &- g^k\|_{x^{k}}^2|\mathcal{F}_{k-1}]\leq 4\tau_k^2 \mathbb{E}[\|G_{\mu}^k - \mathsf{grad}f(x^k)\|_{x^{k}}^2|\mathcal{F}_{k-1}] \\&+ 4\tau_k^2 \mathbb{E}[\|\mathsf{grad}f(x^k) - g^k\|_{x^{k}}^2|\mathcal{F}_{k-1}] +2\frac{\tau_k^2}{\beta^2} \mathbb{E}[\|g^k\|_{x^{k}}^4|\mathcal{F}_{k-1}]. 
\end{aligned}$$ Thus we have (by the law of total expectation): $$\begin{split} & \sum_{k=1}^{N}\mathbb{E}\|P_{x^{k+1}}^{x^{k}}g^{k+1} - g^k\|_{x^{k}}^2\\ \leq & 4\sum_{k=1}^{N}\tau_k^2\mathbb{E}\|G_{\mu}^k - \mathsf{grad}f(x^k)\|_{x^{k}}^2 + 4\sum_{k=1}^{N}\tau_k^2\mathbb{E}\|\mathsf{grad}f(x^k) - g^k\|_{x^{k}}^2+\frac{2}{\beta^2} \sum_{k=1}^{N}\tau_k^2\mathbb{E}\|g^k\|_{x^{k}}^4 \\ \leq& 4\sum_{k=1}^{N}\tau_k^2\Tilde{\sigma}_k^2 + 4\sum_{k=1}^{N}\tau_k^2\mathbb{E}\|\mathsf{grad}f(x^k) - g^k\|_{x^{k}}^2+\frac{8}{\beta^2}\bigg(\frac{\mu^4L^4}{2}(d+12)^6 + 3d^2 G^4\bigg) \sum_{k=1}^{N}\tau_k^2 % \\ \end{split}$$ where the second inequality is by Lemmas [Lemma 1](#bound_zo_estimator){reference-type="ref" reference="bound_zo_estimator"} and [Lemma 10](#lemma_zo_gk_bounded){reference-type="ref" reference="lemma_zo_gk_bounded"}. The desired result follows by applying Lemma [Lemma 9](#lemma_zo_gk_gradk_vec_trans){reference-type="ref" reference="lemma_zo_gk_gradk_vec_trans"} to the above inequality. 0◻\ We now state the main result in Theorem [Theorem 3](#theorem2_vec){reference-type="ref" reference="theorem2_vec"}, as an analogue of Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}. Notice that, unlike in Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}, we do not need $N=\Omega(d)$ in case (ii), in view of Remark [Remark 2](#rmk_bdd_grad_no_d){reference-type="ref" reference="rmk_bdd_grad_no_d"} and Assumption [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}. **Theorem 3**.
*Suppose Assumptions [Assumption 1](#assumption0_2){reference-type="ref" reference="assumption0_2"}, [Assumption 3](#assumption1){reference-type="ref" reference="assumption1"}, [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}, [Assumption 5](#assumption3){reference-type="ref" reference="assumption3"} and [Assumption 6](#assumption4){reference-type="ref" reference="assumption4"} hold. In Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"}, we set $\mu=\mathcal{O}\big(\frac{1}{L d^{3/2}N^{1/4}}\big)$ and $\beta\geq \sqrt{d} L$. Then the following holds.* - *If we choose $\tau_0=1$, $\tau_k= {1}/{\sqrt{N}}$, $k\geq 1$ and $m_k\equiv 8(d+4)$, $k\geq 0$, then we have $\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \mathcal{O}({1}/{\sqrt{N}})$.* - *If we choose $\tau_0=1$, $\tau_k= {1}/{\sqrt{dN}}$, $k\geq 1$, $m_0=d$ and $m_k=1$ for $k\geq 1$, then we have $\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \mathcal{O}(\sqrt{{d}/{N}})$.* *Here the expectation $\mathbb{E}$ is taken with respect to all random variables up to iteration $k$, which includes the random variables $u$ in zeroth-order estimator [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"}.* **Proof.**\[Proof of Theorem [Theorem 3](#theorem2_vec){reference-type="ref" reference="theorem2_vec"}\] The proof is very similar to the proof of Theorem [Theorem 1](#theorem3){reference-type="ref" reference="theorem3"}. 
We first have the following inequality, analogous to [\[zo_temp5\]](#zo_temp5){reference-type="eqref" reference="zo_temp5"}: $$\begin{aligned} \begin{aligned} \frac{1}{8\beta^2}\sum_{k=0}^{N}\tau_k\mathbb{E}\|g^k\|_{x^k}^2 \leq & W^0 + \frac{1}{2\beta}\sum_{k=0}^{N}\tau_k\hat{\sigma}^2 +\frac{2}{\beta}\sum_{k=0}^{N}(\tau_k^2+\tau_k^3)\Tilde{\sigma}_k^2\\&+\frac{1}{2\beta}[4\Tilde{\sigma}_0^2+4\hat{\sigma}^2+\frac{8}{\beta^2}(\frac{\mu^4L^4}{2}(d+12)^6+3d^2G^4)]\sum_{k=0}^{N}\tau_k^2 \end{aligned} \end{aligned}$$ Note that we still need [\[zo_temp_ineq1\]](#zo_temp_ineq1){reference-type="eqref" reference="zo_temp_ineq1"} to show the above inequality. We then directly provide the result corresponding to [\[zo_temp7\]](#zo_temp7){reference-type="eqref" reference="zo_temp7"}: $$\begin{aligned} \label{zo_temp7_vec} \begin{aligned} \sum_{k=1}^{N}\frac{\tau_k}{2}& \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq (8\beta^2+16 L^2)\bigg(W^0 + \frac{1}{2\beta}\sum_{k=0}^{N}\tau_k\hat{\sigma}^2 +\frac{2}{\beta}\sum_{k=0}^{N}(\tau_k^2+\tau_k^3)\Tilde{\sigma}_k^2\\&+\frac{1}{2\beta}[4\Tilde{\sigma}_0^2+4\hat{\sigma}^2+\frac{8}{\beta^2}(\frac{\mu^4L^4}{2}(d+12)^6+3d^2G^4)]\sum_{k=0}^{N}\tau_k^2\bigg) + \sum_{k=0}^{N-1}\tau_k^2\Tilde{\sigma}_k^2+\sum_{k=0}^{N-1}\tau_k^2\hat{\sigma}^2+\Tilde{\sigma}_0^2 \end{aligned} \end{aligned}$$ Now, by Assumption [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}, we have $\Tilde{\sigma}_k^2\leq \sigma_k^2+\frac{8(d+4)}{m_k}G^2$, which is exactly why we do not need to show an inequality similar to [\[zo_temp_ineq2\]](#zo_temp_ineq2){reference-type="eqref" reference="zo_temp_ineq2"}.
For case (i) in Theorem [Theorem 3](#theorem2_vec){reference-type="ref" reference="theorem2_vec"}, [\[zo_temp7_vec\]](#zo_temp7_vec){reference-type="eqref" reference="zo_temp7_vec"} can be rewritten as $$\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq \frac{c_1W(x^0, g^0)}{\sqrt{N}}+ c_2\hat{\sigma}^2 + \frac{c_3\frac{1}{N}\sum_{k=0}^{N}\tilde{\sigma}_{k}^2}{\sqrt{N}} + \frac{c_4}{\sqrt{N}}\Tilde{\sigma}_0^2,$$ for some absolute positive constants $c_1$, $c_2$, $c_3$ and $c_4$. The proof for case (i) is completed by noting that (see [\[def-tilde-sigma\]](#def-tilde-sigma){reference-type="eqref" reference="def-tilde-sigma"}) $\hat{\sigma}^2=\mathcal{O}(1/\sqrt{N})$, $\frac{1}{N}\sum_{k=0}^{N}\Tilde{\sigma}_{k}^2=\mathcal{O}(1)$ and $\Tilde{\sigma}_0^2=\mathcal{O}(1)$. For case (ii) in Theorem [Theorem 3](#theorem2_vec){reference-type="ref" reference="theorem2_vec"}, [\[zo_temp7_vec\]](#zo_temp7_vec){reference-type="eqref" reference="zo_temp7_vec"} can be rewritten as $$\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\|\mathsf{grad}f(x^{k})\|_{x^{k}}^2 \leq c_1' W(x^0, g^0)\sqrt{\frac{d}{N}}+ c_2'\hat{\sigma}^2 + \frac{c_3'\frac{1}{N}\sum_{k=0}^{N}\tilde{\sigma}_{k}^2}{\sqrt{d N}} + c_4'\sqrt{\frac{d}{N}}\Tilde{\sigma}_0^2,$$ for some positive constants $c_1'$, $c_2'$, $c_3'$ and $c_4'$. The proof of case (ii) is completed by noting that $\Tilde{\sigma}_0^2=\mathcal{O}(1)$, $\hat{\sigma}^2=\mathcal{O}(1/\sqrt{N})$ and $\frac{1}{N}\sum_{k=0}^{N}\Tilde{\sigma}_{k}^2=\mathcal{O}(d)$. 0◻\ **Remark 4**. 
*By the technique discussed in Remark [Remark 1](#sec:samplingtrick){reference-type="ref" reference="sec:samplingtrick"}, to obtain an $\epsilon$-approximate stationary point in the sense of Definition [Definition 2](#def:epsstat){reference-type="ref" reference="def:epsstat"}, we need an oracle complexity of $\mathcal{O}(d/\epsilon^4)$.* # Numerical experiments {#sec:experiments} ## $k$-PCA We now provide numerical results on the $k$-PCA problem to demonstrate the effectiveness of the Zo-RASA algorithms. For a given centered random vector $\mathbf{z}\in\mathbb{R}^n$, the $k$-PCA problem corresponds to finding the subspace spanned by the top-$k$ eigenvectors (with $k=r$ in the notation below) of its positive definite covariance matrix $\Sigma=\mathbb{E}[\mathbf{z} \mathbf{z}^\top]$. Formally, we have the following problem on the Stiefel manifold: $$\begin{aligned} \label{problem_kPCA} \min_{X\in\operatorname{St}(n, r)} f(X) := -\frac{1}{2}\mathop\mathrm{tr}(X^\top \mathbb{E}[\mathbf{z} \mathbf{z}^\top] X).\end{aligned}$$ Note that the dimension of the Stiefel manifold is given by $d=nr-r(r+1)/2$. For any $Y=XQ$ with orthogonal $Q\in\mathbb{R}^{r\times r}$ (i.e., $Q^\top Q = QQ^\top=I_r$), we have $f(X)=f(Y)$. Hence, we can equivalently view [\[problem_kPCA\]](#problem_kPCA){reference-type="eqref" reference="problem_kPCA"} as the following minimization problem on the Grassmann manifold: $$\begin{aligned} %\label{problem_kPCA_gr} \min_{[X]\in\operatorname{Gr}(n, r)} f([X]) := -\frac{1}{2}\mathop\mathrm{tr}(X^\top \mathbb{E}[\mathbf{z} \mathbf{z}^\top] X).\end{aligned}$$ Note that the dimension of the Grassmannian is given by $d=r(n-r)$. We solve [\[problem_kPCA\]](#problem_kPCA){reference-type="eqref" reference="problem_kPCA"} using Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} and compare it with the zeroth-order Riemannian SGD method from [@li2022stochastic].
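For concreteness, the following Python sketch (ours, not the paper's implementation) shows how a zeroth-order Riemannian gradient estimate for [\[problem_kPCA\]](#problem_kPCA){reference-type="eqref" reference="problem_kPCA"} can be formed from function values alone. It assumes the single-sample finite-difference form of the retraction-based estimator [\[zeroth_order_estimator_retr\]](#zeroth_order_estimator_retr){reference-type="eqref" reference="zeroth_order_estimator_retr"} and uses a QR retraction on the Stiefel manifold; all function names are illustrative.

```python
import numpy as np

def proj_tangent(X, Z):
    """Project an ambient matrix Z onto the tangent space of St(n, r) at X."""
    XtZ = X.T @ Z
    return Z - X @ (XtZ + XtZ.T) / 2

def retr_qr(X, xi):
    """QR retraction on the Stiefel manifold (a projection-type retraction)."""
    Q, R = np.linalg.qr(X + xi)
    return Q * np.where(np.diag(R) < 0, -1.0, 1.0)  # fix column signs

def zo_grad(F, X, z, mu, rng):
    """Single-sample retraction-based zeroth-order gradient estimator."""
    U = proj_tangent(X, rng.standard_normal(X.shape))
    return (F(retr_qr(X, mu * U), z) - F(X, z)) / mu * U

def F_kpca(X, z):
    """Stochastic k-PCA objective from one observation z."""
    v = z @ X
    return -0.5 * float(v @ v)

rng = np.random.default_rng(0)
n, r = 8, 2
X = np.linalg.qr(rng.standard_normal((n, r)))[0]
z = rng.standard_normal(n)

G = zo_grad(F_kpca, X, z, mu=1e-6, rng=rng)
assert np.allclose(X.T @ G + (X.T @ G).T, 0, atol=1e-8)  # G is tangent at X

# averaged over many Gaussian directions, the estimate aligns with the
# Riemannian gradient of F(.; z) at X
rgrad = proj_tangent(X, -np.outer(z, z) @ X)
G_avg = sum(zo_grad(F_kpca, X, z, 1e-6, rng) for _ in range(2000)) / 2000
cos = np.sum(G_avg * rgrad) / (np.linalg.norm(G_avg) * np.linalg.norm(rgrad))
assert cos > 0.9
```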
In all the experiments, we used the projecting vector transport rather than parallel transport on Stiefel manifolds because, as mentioned earlier, parallel transport on the Stiefel manifold has no closed form and is time-consuming to compute numerically. In the stochastic zeroth-order setting, for each query point $X^k$, the stochastic oracle returns a noisy estimate of $f$ based on a single observation $\mathbf{z}_k$, i.e., $F(X^{k};\mathbf{z}_k)=-1/2\mathop\mathrm{tr}((X^k)^\top \mathbf{z}_k \mathbf{z}_k^\top X^k)$. For our experiments, we assume $\mathbf{z}_k$ is sampled from a centered Gaussian distribution with covariance matrix given by $\Sigma = \sum_{i=1}^r \lambda_i v_i v_i^\top + \sum_{i=r+1}^{n} \lambda_i v_i v_i^\top$, where $V=[v_1, ..., v_n]$ is an orthogonal matrix. The first $r$ values $\lambda_i$ are drawn uniformly from $[100, 200]$ and the remaining $n-r$ uniformly from $[1, 50]$. We fix $r$ and try different $n$ (reflected in the different rows of Figure [\[fig:kpca2\]](#fig:kpca2){reference-type="ref" reference="fig:kpca2"}). We set $N=50000\times n$ for the `Zo-RASA` and one-batch Zo-RSGD (`Zo-RSGD-1`) algorithms, and $N=50000$ for our mini-batch Zo-RSGD algorithm (`Zo-RSGD-m`). The reason is that for `Zo-RSGD-m` we take $m=n=\mathcal{O}(d)$, since we fix $r$ and change $n$. While the theoretical result in [@li2022stochastic] requires the batch size $m$ to be $\mathcal{O}(d/\epsilon^2)$, they empirically observed that batch sizes of reasonable order suffice. For `Zo-RASA`, according to our theory, we again take $\tau_k=0.01/\sqrt{N}$ and $\beta=100$. For `Zo-RSGD-1` and `Zo-RSGD-m`, we set $t_k=10^{-4}/\sqrt{N}$ and $t_k=5\times10^{-4}/\sqrt{N}$, respectively. For all algorithms, we compare the function value, the norm of the Riemannian gradient, and the principal angles between the current iterate and the optimal subspace.
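The principal angles reported in the experiments can be computed from the singular values of $X^\top V_r$, where $V_r$ (our notation) collects an orthonormal basis of the optimal subspace, i.e., the top-$r$ eigenvectors of $\Sigma$; a short sketch with the function name ours:

```python
import numpy as np

def principal_angles(X, Y):
    """Principal angles between the column spans of orthonormal bases X and Y."""
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# sanity checks: identical subspaces give zero angles;
# orthogonal subspaces give angles of pi/2
I = np.eye(6)
X, Y = I[:, :2], I[:, 2:4]
assert np.allclose(principal_angles(X, I[:, :2]), 0.0)
assert np.allclose(principal_angles(X, Y), np.pi / 2)
```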
Figure [\[fig:kpca2\]](#fig:kpca2){reference-type="ref" reference="fig:kpca2"} plots the results. The experimental results provide support for the proposed algorithms (and the established theory), demonstrating that the proposed Zo-RASA algorithm is more efficient in decreasing the Riemannian gradient and principal angles than conventional zeroth-order Riemannian stochastic gradient descent methods that utilize mini-batches. ## Identification of a fixed-rank symmetric positive semi-definite matrix We now provide another numerical example from [@bonnabel2013stochastic]. Consider a matrix version of the linear model in [@tsuda2005matrix]: $$\begin{aligned} y_t=\mathop\mathrm{tr}(W \mathbf{x}_t \mathbf{x}_t^\top) = \mathbf{x}_t^\top W \mathbf{x}_t\end{aligned}$$ where $\mathbf{x}_t\in\mathbb{R}^n$ is the input, $y_t\in\mathbb{R}$ is the output, and the unknown matrix $W\in\mathbb{R}^{n\times n}$ is a positive semi-definite matrix of fixed rank $r$ ($r\leq n$). Denote the set $$\begin{aligned} \label{eq_frpd_manifold} S_{+}(n, r) = \{W\in\mathbb{R}^{n\times n}|\ W=W^\top,\ W\succeq 0,\ \mathrm{rank}(W)=r\}\end{aligned}$$ which is the set of symmetric positive semi-definite matrices of rank $r$. The problem is thus formulated as a matrix least-squares problem $$\begin{aligned} \label{problem_frpd} \min_{W\in S_{+}(n, r)}f(W):=\frac{1}{2}\mathbb{E}_{\mathbf{x},y}(\mathbf{x}^\top W \mathbf{x} - y)^2\end{aligned}$$ Notice that $W$ can be represented as $W=GG^\top$, where $G\in\mathbb{R}^{n\times r}$ is a matrix with full column rank.
Also notice that for any orthogonal matrix $O\in\mathbb{R}^{r\times r}$ we have $W= G O O^\top G^\top=GG^\top$. This yields the following quotient representation of the set of fixed-rank positive semi-definite matrices: $S_{+}(n, r) \simeq \mathbb{R}_*^{n \times r} / \mathcal{O}(r)$, where the right-hand side denotes the set of equivalence classes $$\begin{aligned} [G]=\{GO|\ O\in\mathcal{O}(r)\}.\end{aligned}$$ We can thus conduct our experiment on the quotient manifold $\mathbb{R}_*^{n \times r} / \mathcal{O}(r)$, with the following reformulated problem: $$\begin{aligned} \label{problem_frpd_re} \min_{[G]\in \mathbb{R}_*^{n \times r} / \mathcal{O}(r)}f(G):=\frac{1}{2}\mathbb{E}_{\mathbf{x},y}(\mathbf{x}^\top G G^\top \mathbf{x} - y)^2\end{aligned}$$ The manifold $S_{+}(n, r)$ has dimension $d = nr - r(r-1)/2$ and is not compact. We test [\[problem_frpd_re\]](#problem_frpd_re){reference-type="eqref" reference="problem_frpd_re"} to show the efficiency of our proposed algorithm even without the compactness assumption (Assumption [Assumption 4](#assumption_compactness){reference-type="ref" reference="assumption_compactness"}), which our theoretical analysis requires. We solve [\[problem_frpd_re\]](#problem_frpd_re){reference-type="eqref" reference="problem_frpd_re"} using Algorithm [\[algorithm2_vec_tran\]](#algorithm2_vec_tran){reference-type="ref" reference="algorithm2_vec_tran"} and compare it with the zeroth-order Riemannian SGD method from [@li2022stochastic]. In all the experiments, we again used the retraction and projecting vector transport rather than the exponential mapping and parallel transport. The ground truth $G^\star\in\mathbb{R}^{n\times r}$ is sampled randomly with standard Gaussian entries. For our experiments, we sample $\mathbf{x}\sim \mathcal{N}(0, I_n)$ and construct $y=\mathbf{x}^\top W \mathbf{x}$ noiselessly.
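To make the data-generation and oracle model concrete, here is a small Python sketch (ours; all names are illustrative) of the noiseless stochastic zeroth-order oracle for the reformulated problem, which also illustrates the quotient invariance $f(G)=f(GO)$ for orthogonal $O$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 10, 3
G_star = rng.standard_normal((n, r))   # ground truth, W* = G* G*^T

def oracle(G, x):
    """Noiseless stochastic zeroth-order oracle for the reformulated problem:
    given one observation (x, y) with y = x^T G* G*^T x, return the loss value."""
    y = x @ G_star @ (G_star.T @ x)
    pred = x @ G @ (G.T @ x)
    return 0.5 * (pred - y) ** 2

x = rng.standard_normal(n)
# at the ground truth the loss is zero
assert oracle(G_star, x) == 0.0
# ... and it is invariant (up to rounding) under G -> G O with O orthogonal
O = np.linalg.qr(rng.standard_normal((r, r)))[0]
assert abs(oracle(G_star @ O, x)) < 1e-18
```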
Specifically, given a query point $G^t$ and a Gaussian sample $\mathbf{x}_t$ with $y_t=\mathbf{x}_t^\top G^\star (G^\star)^\top \mathbf{x}_t$, the stochastic zeroth-order oracle gives the value $\frac{1}{2}(\mathbf{x}_t^\top G^t (G^t)^\top \mathbf{x}_t - y_t)^2$. For our experiments, we fix $r$ and test with different $n$ (reflected in different rows in Figure [\[fig:frpd\]](#fig:frpd){reference-type="ref" reference="fig:frpd"}). We set $N=5000\times n$ for the `Zo-RASA` and one-batch Zo-RSGD (`Zo-RSGD-1`) algorithms, while $N=5000$ for our mini-batch Zo-RSGD algorithm (`Zo-RSGD-m`), for the same reason as in the kPCA experiments. For `Zo-RASA`, in accordance with our theory, we again take $\tau_k=10^{-3}/\sqrt{N}$ and $\beta=100$. For `Zo-RSGD-1` and `Zo-RSGD-m`, we set $t_k=10^{-5}/\sqrt{N}$. For all algorithms, we again compare the function value, the norm of the Riemannian gradient and the quantity $\|G^t (G^t)^\top - G^\star (G^\star)^\top\|$, which measures the error to the ground-truth positive semi-definite matrix. Figure [\[fig:frpd\]](#fig:frpd){reference-type="ref" reference="fig:frpd"} plots the results. It is worth noting that mini-batch Zo-RSGD performs worst in the plots, because we use the same step sizes for `Zo-RSGD-1` and `Zo-RSGD-m`. The reason we cannot enlarge the step size for `Zo-RSGD-m` is that the projection-based retraction and vector transport require solving a Sylvester equation, which leads to numerical stability issues when the step sizes become large (see [@manopt] for details). The experimental results provide support for the proposed algorithms (and the established theory), demonstrating that the proposed Zo-RASA algorithm is more efficient in terms of decreasing the Riemannian gradient and function values compared to conventional zeroth-order Riemannian stochastic gradient descent methods that utilize mini-batches. # Acknowledgements {#acknowledgements .unnumbered} We thank Prof.
Otis Chodosh (Stanford) for several helpful discussions and clarifications regarding several differential geometric concepts. JL thanks Xuxing Chen for helpful discussions. KB was supported in part by National Science Foundation (NSF) grant DMS-2053918. SM was supported in part by NSF grants DMS-2243650, CCF-2308597, CCF-2311275 and ECCS-2326591, UC Davis CeDAR (Center for Data Science and Artificial Intelligence Research) Innovative Data Science Seed Funding Program, and a startup fund from Rice University. [^1]: Department of Mathematics, University of California, Davis. `jxjli@ucdavis.edu` [^2]: Department of Statistics, University of California, Davis. `kbala@ucdavis.edu` [^3]: Department of Computational Applied Math and Operations Research, Rice University. `sqma@rice.edu` [^4]: *Although [@scheinberg2022finite] focuses on the Euclidean case, the discussion there also holds in the Riemannian setting.*
--- abstract: | The approach for Poisson bialgebras characterized by Manin triples with respect to the invariant bilinear forms on both the commutative associative algebras and the Lie algebras is not available for giving a bialgebra theory for transposed Poisson algebras. Alternatively, we consider Manin triples with respect to the commutative 2-cocycles on the Lie algebras instead. Explicitly, we first introduce the notion of anti-pre-Lie bialgebras as the equivalent structure of Manin triples of Lie algebras with respect to the commutative 2-cocycles. Then we introduce the notion of anti-pre-Lie Poisson bialgebras, characterized by Manin triples of transposed Poisson algebras with respect to the bilinear forms which are invariant on the commutative associative algebras and commutative 2-cocycles on the Lie algebras, giving a bialgebra theory for transposed Poisson algebras. Finally the coboundary cases and the related structures such as analogues of the classical Yang-Baxter equation and $\mathcal O$-operators are studied. address: - Chern Institute of Mathematics & LPMC, Nankai University, Tianjin 300071, China - Chern Institute of Mathematics & LPMC, Nankai University, Tianjin 300071, China author: - Guilai Liu - Chengming Bai title: A bialgebra theory for transposed Poisson algebras via anti-pre-Lie bialgebras and anti-pre-Lie-Poisson bialgebras --- # Introduction  This paper aims to give a bialgebra theory for transposed Poisson algebras in terms of the bialgebra structures corresponding to Manin triples of transposed Poisson algebras with respect to the invariant bilinear forms on the commutative associative algebras and the commutative 2-cocycles on the Lie algebras. ## Transposed Poisson algebras   Poisson algebras arose in the study of Poisson geometry ([@BV1; @Li77; @Wei77]), and are closely related to a lot of topics in mathematics and physics such as classical mechanics, quantum mechanics and deformation quantization. 
The notion of transposed Poisson algebras is given as the dual notion of Poisson algebras, which exchanges the roles of the two binary operations in the Leibniz rule defining the Poisson algebras. **Definition 1**. ([@Bai2020])[\[defi:transposed Poisson algebra\]]{#defi:transposed Poisson algebra label="defi:transposed Poisson algebra"} A **transposed Poisson algebra** is a triple $(A,\cdot,[-,-])$, where the pair $(A,\cdot)$ is a commutative associative algebra, the pair $(A,[-,-])$ is a Lie algebra, and the following equation holds: $$\label{eq:defi:transposed Poisson algebra} 2z\cdot[x,y]=[z\cdot x,y]+[x,z\cdot y],\;\; \forall x,y,z\in A.$$ Transposed Poisson algebras share common properties with Poisson algebras, such as the closure under taking tensor products and the Koszul self-duality as an operad. They also closely relate to a lot of other algebraic structures such as Novikov-Poisson algebras ([@Xu1997]) and 3-Lie algebras ([@Fil]). In particular, due to the relationships between transposed Poisson algebras and 3-Lie algebras, the factor $2$ in Eq. ([\[eq:defi:transposed Poisson algebra\]](#eq:defi:transposed Poisson algebra){reference-type="ref" reference="eq:defi:transposed Poisson algebra"}) is interpreted as the arity of the operation of the Lie algebra ([@Bai2020]). There are further studies on transposed Poisson algebras in [@BFK; @BOK; @FKL; @KK; @KK2; @KLZ; @LS; @YH]. On the other hand, there are the following examples of transposed Poisson algebras constructed from commutative differential (associative) algebras, and conversely, any unital transposed Poisson algebra $(A,\cdot, [-,-])$ in the sense that $(A,\cdot)$ is a unital commutative associative algebra is exactly obtained this way. **Example 2**. ([@Bai2020])[\[ex:TPA\]]{#ex:TPA label="ex:TPA"} Let $(A,\cdot)$ be a commutative associative algebra with a derivation $P$. 
Then there is a (Witt type) Lie algebra ([@SXZ; @Xu]) defined by $$\label{eq:Lie algebras form differential commutative associative algebras} [x,y]=P(x)\cdot y-x\cdot P(y),\;\;\forall x,y\in A.$$ Moreover, $(A,\cdot,[-,-])$ is a transposed Poisson algebra. A bialgebra structure is a vector space equipped with both an algebra structure and a coalgebra structure satisfying certain compatibility conditions. The known examples of such structures include Lie bialgebras ([@CP1; @Dri]) and antisymmetric infinitesimal bialgebras ([@Agu2000; @Agu2001; @Agu2004; @Bai2010]). Lie bialgebras, as equivalent structures of Manin triples of Lie algebras (with respect to the invariant bilinear forms), are closely related to Poisson-Lie groups and play an important role in the infinitesimalization of quantum groups ([@CP1]). Antisymmetric infinitesimal bialgebras, as the associative analogues of Lie bialgebras, can be characterized as double constructions of Frobenius algebras, which are widely applied in 2d topological field theories and string theory ([@Kock; @Lau]). A bialgebra theory for Poisson algebras was given in [@NB], where the notion of Poisson bialgebras was introduced as a combination of Lie bialgebras and commutative and cocommutative infinitesimal bialgebras satisfying certain compatibility conditions. Equivalently, a Poisson bialgebra is characterized as a Manin triple of Poisson algebras which is simultaneously a Manin triple of Lie algebras (with respect to the invariant bilinear form) and a double construction of commutative Frobenius algebras. It is quite natural to consider giving a bialgebra theory for transposed Poisson algebras. The main idea is still to characterize the bialgebra theory for transposed Poisson algebras obtained from a specific kind of Manin triples of transposed Poisson algebras with respect to bilinear forms satisfying suitable conditions.
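Returning to Example [\[ex:TPA\]](#ex:TPA){reference-type="ref" reference="ex:TPA"}, the transposed Leibniz rule can be checked symbolically for the Witt-type construction; a minimal sketch (the choice $A=\mathbb{R}[t]$ with $P=d/dt$ and the test polynomials are ours):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

def P(f):
    """The derivation P = d/dt on A = R[t]."""
    return f.deriv()

def bracket(f, g):
    # Witt-type bracket [f, g] = P(f) * g - f * P(g) of Example 2
    return P(f) * g - f * P(g)

# Test elements (our choice): x = 1 + t^2, y = -t + t^3, z = 2 + 5t
x, y, z = Poly([1, 0, 1]), Poly([0, -1, 0, 1]), Poly([2, 5])

# Transposed Leibniz rule: 2 z * [x, y] = [z * x, y] + [x, z * y]
diff = 2 * z * bracket(x, y) - (bracket(z * x, y) + bracket(x, z * y))
assert np.allclose(diff.coef, 0)
```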
Unfortunately, the approach for Poisson bialgebras in terms of Manin triples with respect to the invariant bilinear forms on both the commutative associative algebras and the Lie algebras in [@NB] is not available for transposed Poisson algebras. In fact, if there is a nondegenerate symmetric bilinear form $\mathcal{B}$ on a transposed Poisson algebra $(A,\cdot,[-,-])$ such that it is **invariant** on both $(A,\cdot)$ and $(A,[-,-])$ in the sense that $$\mathcal{B}(x\cdot y,z)=\mathcal{B}(x,y\cdot z),\;\; \mathcal{B}([x,y],z)=\mathcal{B}(x,[y,z]),\;\;\forall x,y,z\in A,$$ then one shows (see Proposition [Proposition 60](#pro:tpa bilinear form){reference-type="ref" reference="pro:tpa bilinear form"}) that $(A,\cdot,[-,-])$ satisfies $$\label{eq:coherent TPA} [x,y]\cdot z=[x\cdot y,z]=0,\;\;\forall x,y,z\in A.$$ This is regarded as a "trivial" case for Eq. ([\[eq:defi:transposed Poisson algebra\]](#eq:defi:transposed Poisson algebra){reference-type="ref" reference="eq:defi:transposed Poisson algebra"}). Note that such an algebra is also a Poisson algebra, which is likewise regarded as a "trivial" case for the Leibniz rule. Also note that this fact exhibits an obvious difference between Poisson algebras and transposed Poisson algebras. On the other hand, recall that a **commutative 2-cocycle** ([@Dzh]) on a Lie algebra $(\mathfrak{g},[-,-])$ is a symmetric bilinear form $\mathcal{B}$ such that $$\label{defi:2-cocycle} \mathcal{B}([x,y],z)+\mathcal{B}([y,z],x)+\mathcal{B}([z,x],y)=0,\;\;\forall x,y,z\in\mathfrak{g}.$$ Commutative 2-cocycles appear in the study of non-associative algebras satisfying certain skew-symmetric identities ([@Dzh09]), and in the description of the second cohomology of current Lie algebras ([@Zu]).
For a transposed Poisson algebra $(A,\cdot,[-,-])$ given in Example [\[ex:TPA\]](#ex:TPA){reference-type="ref" reference="ex:TPA"}, if $(A,\cdot)$ is a commutative symmetric Frobenius algebra, that is, if there exists a nondegenerate symmetric bilinear form $\mathcal{B}$ on $A$ such that it is invariant on $(A,\cdot)$, then $\mathcal B$ is a commutative 2-cocycle on $(A,[-,-])$ due to the following fact: **Proposition 3**. *([@LB2022])[\[pro:comm 2-cocycle\]]{#pro:comm 2-cocycle label="pro:comm 2-cocycle"} Let $(A,\cdot)$ be a commutative associative algebra with a derivation $P$. Let $\mathcal{B}$ be a symmetric invariant bilinear form on $(A,\cdot)$. Then $\mathcal{B}$ is a commutative 2-cocycle on the Witt type Lie algebra $(A,[-,-])$ defined by Eq. ([\[eq:Lie algebras form differential commutative associative algebras\]](#eq:Lie algebras form differential commutative associative algebras){reference-type="ref" reference="eq:Lie algebras form differential commutative associative algebras"}).* Hence there exists a nontrivial transposed Poisson algebra with a nondegenerate symmetric bilinear form which is invariant on the commutative associative algebra and a commutative 2-cocycle on the Lie algebra. Therefore it is reasonable to consider a Manin triple of transposed Poisson algebras with respect to such a bilinear form instead of the invariant bilinear form on both the commutative associative algebras and the Lie algebras. Obviously, it is a combination of a double construction of commutative Frobenius algebras and a Manin triple of Lie algebras with respect to the commutative 2-cocycle. Note that the former corresponds to a commutative and cocommutative infinitesimal bialgebra in [@Bai2010]. 
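For orientation, Proposition [\[pro:comm 2-cocycle\]](#pro:comm 2-cocycle){reference-type="ref" reference="pro:comm 2-cocycle"} admits a one-line verification: writing $[x,y]=P(x)\cdot y-x\cdot P(y)$ and using the invariance and symmetry of $\mathcal{B}$ together with the commutativity of $\cdot$, the cyclic sum telescopes (here $\sum_{\mathrm{cyc}}$ denotes the sum over cyclic permutations of $(x,y,z)$, a notation of ours):

```latex
\sum_{\mathrm{cyc}}\mathcal{B}([x,y],z)
  =\sum_{\mathrm{cyc}}\Big(\mathcal{B}\big(P(x)\cdot y,z\big)-\mathcal{B}\big(x\cdot P(y),z\big)\Big)
  =\sum_{\mathrm{cyc}}\Big(\mathcal{B}\big(P(x),y\cdot z\big)-\mathcal{B}\big(P(y),x\cdot z\big)\Big)
  =0,
```

since, by the commutativity of $\cdot$, each term $\mathcal{B}(P(x),y\cdot z)$ appears exactly once with each sign as $(x,y,z)$ runs over its cyclic permutations.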
## Anti-pre-Lie algebras and anti-pre-Lie bialgebras As the first step for considering a bialgebra theory for transposed Poisson algebras along the approach given at the end of the previous subsection, we consider the bialgebra structures corresponding to Manin triples of Lie algebras with respect to the commutative 2-cocycles, which are closely related to the following algebraic structures. **Definition 4**. ([@LB2022]) Let $A$ be a vector space with a bilinear operation $\circ: A\otimes A\rightarrow A$. The pair $(A,\circ)$ is called an **anti-pre-Lie algebra** if the following equations are satisfied: $$\label{eq:defi:anti-pre-Lie algebras1} x\circ(y\circ z)-y\circ(x\circ z)=[y,x]\circ z,$$ $$\label{eq:defi:anti-pre-Lie algebras2} [x,y]\circ z+[y,z]\circ x+[z,x]\circ y=0,$$ where $[x,y]=x\circ y-y\circ x$, for all $x,y,z\in A$. For an anti-pre-Lie algebra $(A,\circ)$, the bilinear operation $[-,-]$ defines a Lie algebra, which is called the **sub-adjacent Lie algebra** of $(A,\circ)$ and denoted by $(\mathfrak{g}(A),[-,-])$, and $(A,\circ)$ is called a **compatible anti-pre-Lie algebra** structure on $(\mathfrak{g}(A),[-,-])$. Conversely, anti-pre-Lie algebras are characterized as a class of Lie-admissible algebras whose negative left multiplication operators make representations of the commutator Lie algebras, justifying the notion by the comparison with pre-Lie algebras ([@Bur]) which are characterized as a class of Lie-admissible algebras whose left multiplication operators do so. Moreover, they are regarded as the underlying algebraic structures of Lie algebras with nondegenerate commutative 2-cocycles due to the following relationship between them. **Theorem 5**. * ([@LB2022]) Let $\mathcal{B}$ be a nondegenerate commutative 2-cocycle on a Lie algebra $(\frak g,[-,-])$. 
Then there exists a unique compatible anti-pre-Lie algebra structure $\circ$ on $(\frak g,[-,-])$ such that $$\label{eq:thm:commutative 2-cocycles and anti-pre-Lie algebras} \mathcal{B}(x\circ y,z)=\mathcal{B}(y,[x,z]), \;\;\forall x,y,z\in \frak g.$$* Therefore Manin triples of Lie algebras with respect to the commutative 2-cocycles are interpreted in terms of anti-pre-Lie algebras, and in particular, they are equivalent to certain bialgebra structures for anti-pre-Lie algebras, namely, anti-pre-Lie bialgebras. Furthermore, both of them are equivalent to the matched pairs of Lie algebras with respect to the representations given by the compatible anti-pre-Lie algebras. The study of coboundary cases of anti-pre-Lie bialgebras leads to the introduction of an analogue of the classical Yang-Baxter equation in a Lie algebra, called the anti-pre-Lie Yang-Baxter equation (APL-YBE). The skew-symmetric solutions of the APL-YBE in anti-pre-Lie algebras give anti-pre-Lie bialgebras. Moreover, the notions of $\mathcal{O}$-operators on anti-pre-Lie algebras and pre-anti-pre-Lie (pre-APL) algebras are introduced to provide skew-symmetric solutions of the APL-YBE in anti-pre-Lie algebras. ## Anti-pre-Lie Poisson algebras and anti-pre-Lie Poisson bialgebras Now we consider, as a bialgebra theory for transposed Poisson algebras, the bialgebra structures corresponding to Manin triples of transposed Poisson algebras with respect to the invariant bilinear forms on the commutative associative algebras and the commutative 2-cocycles on the Lie algebras. The notion of anti-pre-Lie Poisson algebras was introduced as follows. **Definition 6**.
([@LB2022]) An **anti-pre-Lie Poisson algebra** is a triple $(A,\cdot,\circ)$, where the pair $(A,\cdot)$ is a commutative associative algebra and the pair $(A,\circ)$ is an anti-pre-Lie algebra such that the following equations hold: $$\begin{aligned} \label{eq:defi:anti-pre-Lie Poisson1} 2(x\circ y)\cdot z-2(y\circ x)\cdot z&=&y\cdot(x\circ z)-x\cdot(y\circ z),\\ %\end{equation} %\begin{equation} \label{eq:defi:anti-pre-Lie Poisson2} 2x\circ(y\cdot z)&=&(z\cdot x)\circ y+z\cdot(x\circ y),\;\;\forall x,y,z\in A.\end{aligned}$$ Anti-pre-Lie Poisson algebras play a similar role here as anti-pre-Lie algebras in the previous subsection. In particular, there are relationships between anti-pre-Lie Poisson algebras and transposed Poisson algebras which are analogues of the relationships between anti-pre-Lie algebras and the sub-adjacent Lie algebras. Explicitly, for an anti-pre-Lie Poisson algebra $(A,\cdot,\circ)$ with $(A,[-,-])$ being the sub-adjacent Lie algebra of $(A,\circ)$, $(A,\cdot, [-,-])$ is a transposed Poisson algebra, called the **sub-adjacent transposed Poisson algebra**. Conversely, anti-pre-Lie Poisson algebras are characterized in terms of representations of the sub-adjacent transposed Poisson algebras on the dual spaces of themselves. Moreover, like Theorem [\[thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="thm:commutative 2-cocycles and anti-pre-Lie algebras"}, for a transposed Poisson algebra $(A,\cdot, [-,-])$ with a nondegenerate symmetric bilinear form $\mathcal B$ such that it is invariant on $(A,\cdot)$ and a commutative 2-cocycle on $(A,[-,-])$, there is an anti-pre-Lie Poisson algebra $(A,\cdot,\circ)$, where $(A,\circ)$ is defined by Eq. 
([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B$ ([@LB2022]). Hence the role of anti-pre-Lie Poisson algebras in the study of the bialgebra structures corresponding to Manin triples of transposed Poisson algebras with respect to the invariant bilinear forms on the commutative associative algebras and the commutative 2-cocycles on the Lie algebras is the same as the role of anti-pre-Lie algebras in the study of the bialgebra structures corresponding to Manin triples of Lie algebras with respect to the commutative 2-cocycles. Consequently, we introduce the notion of anti-pre-Lie Poisson bialgebras as the equivalent structures for the above Manin triples of transposed Poisson algebras as well as the needed bialgebra theory for transposed Poisson algebras. The study of the coboundary cases and the related structures, such as an analogue of the classical Yang-Baxter equation and $\mathcal O$-operators on anti-pre-Lie Poisson algebras, also carries over to this setting. ## Layout of the paper This paper is organized as follows. In Section [2](#S2){reference-type="ref" reference="S2"}, we introduce the notion of anti-pre-Lie bialgebras as the bialgebra structures corresponding to Manin triples of Lie algebras with respect to the commutative 2-cocycles. Both of them are interpreted in terms of certain matched pairs of Lie algebras as well as the compatible anti-pre-Lie algebras. The study of coboundary cases leads to the introduction of the APL-YBE, whose skew-symmetric solutions give coboundary anti-pre-Lie bialgebras. The notions of $\mathcal{O}$-operators of anti-pre-Lie algebras and pre-APL algebras are introduced to construct skew-symmetric solutions of the APL-YBE in anti-pre-Lie algebras.
In Section [3](#S4){reference-type="ref" reference="S4"}, we characterize anti-pre-Lie Poisson algebras in terms of representations of the sub-adjacent transposed Poisson algebras on their dual spaces. Then we introduce the notion of anti-pre-Lie Poisson bialgebras as the bialgebra structures corresponding to Manin triples of transposed Poisson algebras with respect to the invariant bilinear forms on the commutative associative algebras and the commutative 2-cocycles on the Lie algebras, characterized by certain matched pairs of anti-pre-Lie Poisson algebras and transposed Poisson algebras. The study of coboundary cases and the related structures is given. Throughout this paper, unless otherwise specified, all the vector spaces and algebras are finite-dimensional over an algebraically closed field $\mathbb{K}$ of characteristic zero, although many results and notions remain valid in the infinite-dimensional case. # Anti-pre-Lie bialgebras {#S2} We introduce the notions of representations and matched pairs of anti-pre-Lie algebras, and give their relationships with representations and matched pairs of the sub-adjacent Lie algebras. Then we introduce the notion of Manin triples of Lie algebras with respect to the commutative 2-cocycles and give their equivalence with certain matched pairs of Lie algebras as well as the compatible anti-pre-Lie algebras. Consequently, we introduce the notion of anti-pre-Lie bialgebras as their equivalent structures. Finally, we study the coboundary anti-pre-Lie bialgebras, which lead to the introduction of the anti-pre-Lie Yang-Baxter equation (APL-YBE). In particular, a skew-symmetric solution of the APL-YBE in an anti-pre-Lie algebra gives a coboundary anti-pre-Lie bialgebra. We also introduce the notions of $\mathcal{O}$-operators of anti-pre-Lie algebras and pre-anti-pre-Lie (pre-APL) algebras to construct skew-symmetric solutions of the APL-YBE in anti-pre-Lie algebras.
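Before turning to representations, we record a short verification (the grouping of terms is ours) that the commutator $[x,y]=x\circ y-y\circ x$ of an anti-pre-Lie algebra indeed satisfies the Jacobi identity. Since $[[x,y],z]=[x,y]\circ z-z\circ[x,y]$ and, writing $\sum_{\mathrm{cyc}}$ for the cyclic sum over $(x,y,z)$, Eq. ([\[eq:defi:anti-pre-Lie algebras2\]](#eq:defi:anti-pre-Lie algebras2){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras2"}) gives $\sum_{\mathrm{cyc}}[x,y]\circ z=0$, it remains to check $\sum_{\mathrm{cyc}} z\circ[x,y]=0$:

```latex
\sum_{\mathrm{cyc}} z\circ[x,y]
  =\big(z\circ(x\circ y)-x\circ(z\circ y)\big)
  +\big(x\circ(y\circ z)-y\circ(x\circ z)\big)
  +\big(y\circ(z\circ x)-z\circ(y\circ x)\big)
  =[x,z]\circ y+[y,x]\circ z+[z,y]\circ x=0,
```

where each bracketed pair is rewritten via Eq. ([\[eq:defi:anti-pre-Lie algebras1\]](#eq:defi:anti-pre-Lie algebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras1"}) and the final sum vanishes again by Eq. ([\[eq:defi:anti-pre-Lie algebras2\]](#eq:defi:anti-pre-Lie algebras2){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras2"}).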
## Representations and matched pairs of anti-pre-Lie algebras   Recall some basic facts on the representations of Lie algebras. A **representation** of a Lie algebra $(\mathfrak{g},[-,-])$ is a pair $(\rho,V)$, such that $V$ is a vector space and $\rho:\mathfrak{g}\rightarrow\mathfrak{gl}(V)$ is a Lie algebra homomorphism for the natural Lie algebra structure on $\mathfrak{gl}(V)=\mathrm{End}(V)$. In particular, the linear map ${\rm ad}:\mathfrak{g}\rightarrow \mathfrak{gl}(\mathfrak{g})$ defined by ${\rm ad}(x)(y)=[x,y]$ for all $x,y\in \mathfrak{g}$, gives a representation $(\mathrm{ad},\mathfrak{g})$, called the **adjoint representation** of $(\mathfrak{g},[-,-])$. For a vector space $V$ and a linear map $\rho:\mathfrak{g}\rightarrow\mathfrak{gl}(V)$, the pair $(\rho,V)$ is a representation of a Lie algebra $(\mathfrak{g},[-,-])$ if and only if $\mathfrak{g}\oplus V$ is a (**semi-direct product**) Lie algebra by defining the multiplication $[-,-]_{\mathfrak{g}\oplus V}$ (often still denoted by $[-,-]$ for simplicity) on $\mathfrak{g}\oplus V$ by $$\label{eq:SDLie} [x+u,y+v]_{\mathfrak{g}\oplus V}=[x,y]+\rho(x)v-\rho(y)u,\;\;\forall x,y\in \mathfrak{g}, u,v\in V.$$ We denote it by $\mathfrak{g}\ltimes_{\rho}V$. Now we give the notion of representations of anti-pre-Lie algebras. **Definition 7**. Let $(A,\circ)$ be an anti-pre-Lie algebra. 
A **representation** of $(A,\circ)$ is a triple $(l_{\circ},r_{\circ},V)$, such that $V$ is a vector space, and $l_{\circ},r_{\circ}:A\rightarrow \mathrm{End}(V)$ are linear maps satisfying $$\begin{aligned} l_{\circ}(y\circ x)-l_{\circ}(x\circ y)&=&l_{\circ}(x)l_{\circ}(y)-l_{\circ}(y)l_{\circ}(x),\label{eq:defi:rep anti-pre-Lie algebra1}\\ r_{\circ}(x\circ y)&=&l_{\circ}(x)r_{\circ}(y)+r_{\circ}(y)l_{\circ}(x)-r_{\circ}(y)r_{\circ}(x),\label{eq:defi:rep anti-pre-Lie algebra2}\\ l_{\circ}(y\circ x)-l_{\circ}(x\circ y)&=&r_{\circ}(x)l_{\circ}(y)-r_{\circ}(y)l_{\circ}(x)-r_{\circ}(x)r_{\circ}(y)+r_{\circ}(y)r_{\circ}(x),\label{eq:defi:rep anti-pre-Lie algebra3}\end{aligned}$$ for all $x,y\in A$. **Example 8**. Let $(A,\circ)$ be an anti-pre-Lie algebra. Define linear maps $\mathcal{L}_{\circ},\mathcal{R}_{\circ}:A\rightarrow\mathrm{End}(A)$ by $\mathcal{L}_{\circ}(x)y=\mathcal{R}_{\circ}(y)x=x\circ y$, for all $x,y\in A$. Then $(\mathcal{L}_{\circ},\mathcal{R}_{\circ},A)$ is a representation of $(A,\circ)$, called the **adjoint representation** of $(A,\circ).$ **Proposition 9**. *Let $(A,\circ)$ be an anti-pre-Lie algebra and $\big(\mathfrak{g}(A),[-,-]\big)$ be the sub-adjacent Lie algebra of $(A,\circ)$. Let $V$ be a vector space and $l_{\circ},r_{\circ}:A\rightarrow \mathrm{End}(V)$ be two linear maps.* 1. *[\[rep1\]]{#rep1 label="rep1"} $(l_{\circ},r_{\circ},V)$ is a representation of $(A,\circ)$ if and only if the direct sum $A\oplus V$ of vector spaces is a (**semi-direct product**) anti-pre-Lie algebra by defining the bilinear operation $\circ_{A\oplus V}$ (often still denoted by $\circ$) on $A\oplus V$ by $$\label{eq:pro:repandsemidirectproduct1} (x+u)\circ_{A\oplus V}(y+v)=x\circ y+l_{\circ}(x)v+r_{\circ}(y)u,\;\;\forall x,y\in A, u,v\in V.$$ We denote it by $A\ltimes_{l_{\circ},r_{\circ}}V$.* 2. *Let $(l_{\circ},r_{\circ},V)$ be a representation of $(A,\circ)$. 
[\[rep2\]]{#rep2 label="rep2"}* - *[\[rep2-1\]]{#rep2-1 label="rep2-1"} $(-l_{\circ},V)$ is a representation of $\big(\mathfrak{g}(A),[-,-]\big)$. In particular, $(-\mathcal{L}_{\circ},A)$ is a representation of $\big(\mathfrak{g}(A),[-,-]\big)$.* - *[\[rep2-2\]]{#rep2-2 label="rep2-2"} $(l_{\circ}-r_{\circ},V)$ is a representation of $\big(\mathfrak{g}(A),[-,-]\big)$.* *Proof.* ([\[rep1\]](#rep1){reference-type="ref" reference="rep1"}). It is a special case of the matched pair of anti-pre-Lie algebras in Theorem [\[thm:matched pairs of anti-pre-Lie algebras\]](#thm:matched pairs of anti-pre-Lie algebras){reference-type="ref" reference="thm:matched pairs of anti-pre-Lie algebras"} when $B=V$ is equipped with the zero multiplication. ([\[rep2\]](#rep2){reference-type="ref" reference="rep2"}) (a). It follows directly from Eq. ([\[eq:defi:rep anti-pre-Lie algebra1\]](#eq:defi:rep anti-pre-Lie algebra1){reference-type="ref" reference="eq:defi:rep anti-pre-Lie algebra1"}). ([\[rep2\]](#rep2){reference-type="ref" reference="rep2"}) (b). It is a special case of Proposition  [\[pro:from matched pairs of anti-pre-Lie algebras to matched pairs of Lie algebras\]](#pro:from matched pairs of anti-pre-Lie algebras to matched pairs of Lie algebras){reference-type="ref" reference="pro:from matched pairs of anti-pre-Lie algebras to matched pairs of Lie algebras"} when $B=V$ is equipped with the zero multiplication. ◻ Let $A$ and $V$ be vector spaces. For a linear map $\rho:A\rightarrow{\rm End}(V)$, we set $\rho^{*}:A\rightarrow\mathrm{End}(V^{*})$ by $$\langle\rho^{*}(x)u^{*},v\rangle=-\langle u^{*},\rho(x)v\rangle,\;\;\forall x\in A, u^{*}\in V^{*}, v\in V.$$ Here $\langle\ ,\ \rangle$ is the usual pairing between $V$ and $V^*$. It is known that if $(\rho,V)$ is a representation of a Lie algebra $(\mathfrak{g},[-,-])$, then $(\rho^{*},V^{*})$ is also a representation of $(\mathfrak{g},[-,-])$. 
In particular, $(\mathrm{ad}^{*},\mathfrak{g}^{*})$ is a representation of $(\mathfrak{g},[-,-])$. **Proposition 10**. *Let $(l_{\circ}, r_{\circ},V)$ be a representation of an anti-pre-Lie algebra $(A,\circ)$.* 1. *[\[it:a\]]{#it:a label="it:a"}$(-l^{*}_{\circ}, V^{*})$ is a representation of the sub-adjacent Lie algebra $\big(\mathfrak{g}(A),[-,-]\big)$. In particular, $(-\mathcal{L}^{*}_{\circ}, A^{*})$ is a representation of $\big(\mathfrak{g}(A),[-,-]\big)$.* 2. *[\[it:b\]]{#it:b label="it:b"} $(r_{\circ}^{*}-l_{\circ}^{*}$,$r_{\circ}^{*}$,$V^{*})$ is a representation of $(A,\circ)$. In particular, $(\mathcal{R}_{\circ}^{*}-\mathcal{L}_{\circ}^{*}=-\mathrm{ad}^{*},\mathcal{R}_{\circ}^{*},A^{*})$ is a representation of $(A,\circ)$.* *Proof.* ([\[it:a\]](#it:a){reference-type="ref" reference="it:a"}). It follows from Proposition [Proposition 9](#pro:repandsemidirectproduct){reference-type="ref" reference="pro:repandsemidirectproduct"} ([\[rep2\]](#rep2){reference-type="ref" reference="rep2"}) (a). ([\[it:b\]](#it:b){reference-type="ref" reference="it:b"}). Let $x,y\in A, u^{*} \in V^{*}, v\in V$. 
Then we have $$\begin{aligned} &&\langle \big( (r_{\circ}^{*}-l_{\circ}^{*})(y\circ x)-(r_{\circ}^{*}-l_{\circ}^{*})(x\circ y)-(r_{\circ}^{*}-l_{\circ}^{*})(x)(r_{\circ}^{*}-l_{\circ}^{*})(y)+(r_{\circ}^{*}-l_{\circ}^{*})(y)(r_{\circ}^{*}-l_{\circ}^{*})(x) \big) u^{*},v\rangle\\ &&=\langle u^{*}, \big( (l_{\circ}-r_{\circ})(y\circ x)-(l_{\circ}-r_{\circ})(x\circ y)-(l_{\circ}-r_{\circ})(y)(l_{\circ}-r_{\circ})(x)+(l_{\circ}-r_{\circ})(x)(l_{\circ}-r_{\circ})(y) \big) v\rangle\\ &&\overset{(\ref{eq:defi:rep anti-pre-Lie algebra3})}{=}\langle u^{*},\big( -r_{\circ}(y\circ x)+r_{\circ}(x\circ y)-l_{\circ}(y)l_{\circ}(x)+l_{\circ}(y)r_{\circ}(x)+l_{\circ}(x)l_{\circ}(y)-l_{\circ}(x)r_{\circ}(y) \big) v \rangle\\ &&\overset{(\ref{eq:defi:rep anti-pre-Lie algebra1}),(\ref{eq:defi:rep anti-pre-Lie algebra2})}{=}\langle u^{*},\big( l_{\circ}(x)r_{\circ}(y)+r_{\circ}(y)l_{\circ}(x)-r_{\circ}(y)r_{\circ}(x) -l_{\circ}(y)r_{\circ}(x)-r_{\circ}(x)l_{\circ}(y)+r_{\circ}(x)r_{\circ}(y)\\ &&\hspace{1cm} +l_{\circ}(y)r_{\circ}(x)-l_{\circ}(x)r_{\circ}(y)+l_{\circ}(y\circ x)-l_{\circ}(x\circ y)\big) v\rangle\\ &&\overset{(\ref{eq:defi:rep anti-pre-Lie algebra3})}{=}0. \end{aligned}$$ Thus Eq. ([\[eq:defi:rep anti-pre-Lie algebra1\]](#eq:defi:rep anti-pre-Lie algebra1){reference-type="ref" reference="eq:defi:rep anti-pre-Lie algebra1"}) holds for the triple $(r_{\circ}^{*}-l_{\circ}^{*}$,$r_{\circ}^{*}$,$V^{*})$. Similarly, Eqs. ([\[eq:defi:rep anti-pre-Lie algebra2\]](#eq:defi:rep anti-pre-Lie algebra2){reference-type="ref" reference="eq:defi:rep anti-pre-Lie algebra2"}) and ([\[eq:defi:rep anti-pre-Lie algebra3\]](#eq:defi:rep anti-pre-Lie algebra3){reference-type="ref" reference="eq:defi:rep anti-pre-Lie algebra3"}) hold for the triple $(r_{\circ}^{*}-l_{\circ}^{*}$,$r_{\circ}^{*}$,$V^{*})$. Thus $(r_{\circ}^{*}-l_{\circ}^{*}$,$r_{\circ}^{*}$,$V^{*})$ is a representation of $(A,\circ)$. ◻ Next we consider matched pairs of anti-pre-Lie algebras. **Definition 11**. 
Let $(A,\circ_{A})$ and $(B,\circ_{B})$ be two anti-pre-Lie algebras. Suppose that $(l_{\circ_{A}},r_{\circ_{A}},B)$ and $(l_{\circ_{B}}$, $r_{\circ_{B}}$, $A)$ are representations of $(A,\circ_{A})$ and $(B,\circ_{B})$ respectively. Let $\big(\mathfrak{g}(A),[-,-]_{A}\big)$ and $\big(\mathfrak{g}(B)$, $[-,-]_{B}\big)$ be the sub-adjacent Lie algebras of $(A,\circ_{A})$ and $(B,\circ_{B})$ respectively. Suppose that the following equations hold: $$\label{eq:defi:matched pairs of anti-pre-Lie algebras1} r_{\circ_{B}}\big ( l_{\circ_{A}}(y)a\big)x+x\circ_{A}r_{\circ_{B}}(a)y-r_{\circ_{B}}\big( l_{\circ_{A}}(x)a\big) y-y\circ_{A}r_{\circ_{B}}(a)x=r_{\circ_{B}}(a)([y,x]_{A}),$$ $$\label{eq:defi:matched pairs of anti-pre-Lie algebras2} x\circ_{A}l_{\circ_{B}}(a)y+r_{\circ_{B}}\big( r_{\circ_{A}}(y)a \big) x-l_{\circ_{B}}(a)(x\circ_{A}y)=(l_{\circ_{B}}-r_{\circ_{B}})(a)x\circ_{A}y+l_{\circ_{B}}\big( (r_{\circ_{A}}-l_{\circ_{A}})(x)a\big) y,$$ $$\label{eq:defi:matched pairs of anti-pre-Lie algebras3} r_{\circ_{B}}(a)([x,y]_{A})=(l_{\circ_{B}}-r_{\circ_{B}})(a)y\circ_{A}x+(r_{\circ_{B}}-l_{\circ_{B}})(a)x\circ_{A}y+l_{\circ_{B}}\big((r_{\circ_{A}}-l_{\circ_{A}})(y)a\big)x+l_{\circ_{B}}\big((l_{\circ_{A}}-r_{\circ_{A}})(x)a\big)y,$$ $$\label{eq:defi:matched pairs of anti-pre-Lie algebras4} r_{\circ_{A}}\big(l_{\circ_{B}}(b)x\big)a+a\circ_{B}r_{\circ_{A}}(x)b-r_{\circ_{A}}\big(l_{\circ_{B}}(a)x\big)b-b\circ_{B}r_{\circ_{A}}(x)a=r_{\circ_{A}}(x)([b,a]_{B}),$$ $$\label{eq:defi:matched pairs of anti-pre-Lie algebras5} a\circ_{B}l_{\circ_{A}}(x)b+r_{\circ_{A}}\big(r_{\circ_{B}}(b)x\big)a-l_{\circ_{A}}(x)(a\circ_{B}b)=(l_{\circ_{A}}-r_{\circ_{A}})(x)a\circ_{B}b+l_{\circ_{A}}\big((r_{\circ_{B}}-l_{\circ_{B}})(a)x\big)b,$$ $$\label{eq:defi:matched pairs of anti-pre-Lie algebras6} 
r_{\circ_{A}}(x)([a,b]_{B})=(l_{\circ_{A}}-r_{\circ_{A}})(x)b\circ_{B}a+(r_{\circ_{A}}-l_{\circ_{A}})(x)a\circ_{B}b+l_{\circ_{A}}\big((r_{\circ_{B}}-l_{\circ_{B}})(b)x\big)a+l_{\circ_{A}}\big((l_{\circ_{B}}-r_{\circ_{B}})(a)x\big)b,$$ for all $x,y\in A,a,b\in B$. Such a structure is called a **matched pair of anti-pre-Lie algebras** $(A,\circ_{A})$ and $(B,\circ_{B})$. We denote it by $(A,B,l_{\circ_{A}},r_{\circ_{A}},l_{\circ_{B}},r_{\circ_{B}})$. **Theorem 12**. *Let $(A,\circ_{A})$ and $(B,\circ_{B})$ be two anti-pre-Lie algebras. Suppose that $l_{\circ_{A}},r_{\circ_{A}}:A\rightarrow\mathrm{End}(B)$ and $l_{\circ_{B}},r_{\circ_{B}}:B\rightarrow\mathrm{End}(A)$ are linear maps. Define a bilinear operation on $A\oplus B$ by $$\label{thm:matched pairs of anti-pre-Lie algebras1} (x+a)\circ(y+b)=x\circ_{A}y+l_{\circ_{B}}(a)y+r_{\circ_{B}}(b)x+l_{\circ_{A}}(x)b+r_{\circ_{A}}(y)a+a\circ_{B}b,$$ for all $x,y\in A, a,b\in B$. Then $(A\oplus B,\circ)$ is an anti-pre-Lie algebra if and only if $(A,B,l_{\circ_{A}},r_{\circ_{A}},l_{\circ_{B}}$, $r_{\circ_{B}})$ is a matched pair of anti-pre-Lie algebras. In this case, we denote this anti-pre-Lie algebra structure on $A\oplus B$ by $A\bowtie^{l_{\circ_{A}},r_{\circ_{A}}}_{l_{\circ_{B}},r_{\circ_{B}}}B$. Conversely, every anti-pre-Lie algebra which is the direct sum of the underlying vector spaces of two subalgebras can be obtained from a matched pair of anti-pre-Lie algebras by this construction.* *Proof.* It is straightforward. ◻ Recall the notion of matched pairs of Lie algebras ([@Maj]). Let $(\mathfrak{g},[-,-]_{\mathfrak{g}})$ and $(\mathfrak{h},[-,-]_{\mathfrak{h}})$ be two Lie algebras. Suppose that $(\rho_{\mathfrak{g}},\mathfrak{h})$ and $(\rho_{\mathfrak{h}},\mathfrak{g})$ are representations of $(\mathfrak{g},[-,-]_{\mathfrak{g}})$ and $(\mathfrak{h},[-,-]_{\mathfrak{h}})$ respectively. 
If the following equations are satisfied: $$\label{eq:matched pair of Lie algebras1} \rho_{\mathfrak{g}}(x)[a,b]_{\mathfrak{h}}-[\rho_{\mathfrak{g}}(x)a,b]_{\mathfrak{h}}-[a,\rho_{\mathfrak{g}}(x)b]_{\mathfrak{h}}+\rho_{\mathfrak{g}}\big(\rho_{\mathfrak{h}}(a)x\big)b-\rho_{\mathfrak{g}}\big(\rho_{\mathfrak{h}}(b)x\big)a=0,$$ $$\label{eq:matched pair of Lie algebras2} \rho_{\mathfrak{h}}(a)[x,y]_{\mathfrak{g}}-[\rho_{\mathfrak{h}}(a)x,y]_{\mathfrak{g}}-[x,\rho_{\mathfrak{h}}(a)y]_{\mathfrak{g}}+\rho_{\mathfrak{h}}\big(\rho_{\mathfrak{g}}(x)a\big)y-\rho_{\mathfrak{h}}\big(\rho_{\mathfrak{g}}(y)a\big)x=0,$$ for all $x,y\in \mathfrak{g},a,b\in \mathfrak{h}$, then $(\mathfrak{g},\mathfrak{h},\rho_{\mathfrak{g}},\rho_{\mathfrak{h}})$ is called a **matched pair of Lie algebras**. In fact, for Lie algebras $(\mathfrak{g},[-,-]_{\mathfrak{g}})$, $(\mathfrak{h},[-,-]_{\mathfrak{h}})$ and linear maps $\rho_{\mathfrak{g}}:\mathfrak{g}\rightarrow\mathrm{End}(\mathfrak{h})$, $\rho_{\mathfrak{h}}:\mathfrak{h}\rightarrow\mathrm{End}(\mathfrak{g})$, there is a Lie algebra structure on the vector space $\mathfrak{g}\oplus \mathfrak{h}$ given by $$\label{eq:Lie} [x+a,y+b]=[x,y]_{\mathfrak{g}}+\rho_{\mathfrak{h}}(a)y-\rho_{\mathfrak{h}}(b)x+[a,b]_{\mathfrak{h}}+\rho_{\mathfrak{g}}(x)b-\rho_{\mathfrak{g}}(y)a,\;\;\forall x,y\in \mathfrak{g}, a,b\in \mathfrak{h}$$ if and only if $(\mathfrak{g},\mathfrak{h},\rho_{\mathfrak{g}},\rho_{\mathfrak{h}})$ is a matched pair of Lie algebras. In this case, we denote the Lie algebra structure on $\mathfrak{g}\oplus \mathfrak{h}$ by $\mathfrak{g}\bowtie_{\rho_{\mathfrak{h}}}^{\rho_{\mathfrak{g}}}\mathfrak{h}$. Conversely, every Lie algebra which is the direct sum of the underlying vector spaces of two subalgebras can be obtained from a matched pair of Lie algebras by this construction. **Proposition 13**.
*Let $(A,\circ_{A})$ and $(B,\circ_{B})$ be two anti-pre-Lie algebras and their sub-adjacent Lie algebras be $\big(\mathfrak{g}(A),[-,-]_{A}\big)$ and $\big(\mathfrak{g}(B),[-,-]_{B}\big)$ respectively. If $(A,B,l_{\circ_{A}},r_{\circ_{A}},l_{\circ_{B}},r_{\circ_{B}})$ is a matched pair of anti-pre-Lie algebras, then $\big(\mathfrak{g}(A),\mathfrak{g}(B),l_{\circ_{A}}-r_{\circ_{A}},l_{\circ_{B}}-r_{\circ_{B}}\big)$ is a matched pair of Lie algebras.* *Proof.* By Theorem [Theorem 12](#thm:matched pairs of anti-pre-Lie algebras){reference-type="ref" reference="thm:matched pairs of anti-pre-Lie algebras"}, there is an anti-pre-Lie algebra $A\bowtie^{l_{\circ_{A}},r_{\circ_{A}}}_{l_{\circ_{B}},r_{\circ_{B}}}B$, whose sub-adjacent Lie algebra is given by $$\begin{aligned} {[x+a,y+b]}&=&(x+a)\circ (y+b)-(y+b)\circ (x+a)\\ &=&x\circ_{A}y+l_{\circ_{B}}(a)y+r_{\circ_{B}}(b)x+l_{\circ_{A}}(x)b+r_{\circ_{A}}(y)a+a\circ_{B}b\\ &&-\big(y\circ_{A}x+l_{\circ_{B}}(b)x+r_{\circ_{B}}(a)y+l_{\circ_{A}}(y)a+r_{\circ_{A}}(x)b+b\circ_{B}a\big)\\ &=&[x,y]_{A}+(l_{\circ_{B}}-r_{\circ_{B}})(a)y-(l_{\circ_{B}}-r_{\circ_{B}})(b)x+[a,b]_{B}+(l_{\circ_{A}}-r_{\circ_{A}})(x)b-(l_{\circ_{A}}-r_{\circ_{A}})(y)a, \end{aligned}$$ for all $x,y\in A, a,b\in B$. Thus $\big(\mathfrak{g}(A),\mathfrak{g}(B),l_{\circ_{A}}-r_{\circ_{A}},l_{\circ_{B}}-r_{\circ_{B}}\big)$ is a matched pair of Lie algebras. ◻ ## Manin triples of Lie algebras with respect to the commutative 2-cocycles and anti-pre-Lie bialgebras **Definition 14**. Let $(\mathfrak{g},[-,-]_{\mathfrak{g}})$ be a Lie algebra. Assume that there is a Lie algebra structure $(\mathfrak{g}^{*},[-,-]_{\mathfrak{g}^{*}})$ on the dual space $\mathfrak{g}^{*}$.
Suppose that there is a Lie algebra structure $(\mathfrak{g}\oplus\mathfrak{g}^{*},[-,-])$ on the direct sum $\mathfrak{g}\oplus\mathfrak{g}^{*}$ of vector spaces such that the nondegenerate symmetric bilinear form $\mathcal{B}_{d}$ defined by $$\label{eq:defi:Manin triples of Lie algebras} \mathcal{B}_{d}(x+a^{*},y+b^{*})=\langle x,b^{*}\rangle+\langle a^{*},y\rangle, \;\;\forall x,y\in \mathfrak{g},a^{*},b^{*}\in \mathfrak{g}^{*},$$ is a commutative 2-cocycle on $(\mathfrak{g}\oplus\mathfrak{g}^{*},[-,-])$, and $(\mathfrak{g},[-,-]_{\mathfrak{g}})$ and $(\mathfrak{g}^{*},[-,-]_{\mathfrak{g}^{*}})$ are Lie subalgebras of $(\mathfrak{g}\oplus\mathfrak{g}^{*},[-,-])$. Such a structure is called a **(standard) Manin triple of Lie algebras with respect to the commutative 2-cocycle**. We denote it by $\big((\mathfrak{g}\oplus\mathfrak{g}^{*},[-,-],\mathcal{B}_{d}),\mathfrak{g},\mathfrak{g}^{*}\big)$. **Lemma 15**. *Let $\big((\mathfrak{g}\oplus\mathfrak{g}^{*},[-,-],\mathcal{B}_{d}),\mathfrak{g},\mathfrak{g}^{*}\big)$ be a Manin triple of Lie algebras with respect to the commutative 2-cocycle. Then there exists a compatible anti-pre-Lie algebra structure $\circ$ on $\frak g\oplus \frak g^*$ defined by Eq. ([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B_d$.
Moreover, with this operation, $(\frak g,\circ_{\frak g}:=\circ|_{\frak g\otimes \frak g})$ and $(\frak g^*,\circ_{\frak g^*}:=\circ|_{\frak g^*\otimes \frak g^*})$ are anti-pre-Lie subalgebras whose sub-adjacent Lie algebras are $(\mathfrak{g},[-,-]_{\mathfrak{g}})$ and $(\mathfrak{g}^{*},[-,-]_{\mathfrak{g}^{*}})$ respectively.* *Proof.* The first conclusion follows from Theorem [\[thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="thm:commutative 2-cocycles and anti-pre-Lie algebras"}. Let $x,y\in \frak g$. Suppose that $x\circ_{\frak g} y=w+w^*$, where $w\in \frak g$ and $w^*\in \frak g^*$. Then we have $$\langle w^*,z\rangle=\mathcal B_d(x\circ_{\frak g} y,z)=\mathcal B_d(y, [x,z])=0,\;\;\forall z\in \frak g.$$ Hence $w^*=0$ and thus $x\circ_{\frak g} y\in \frak g$. So $(\frak g,\circ_{\frak g})$ is an anti-pre-Lie subalgebra. Similarly, $(\frak g^*,\circ_{\frak g^*})$ is also an anti-pre-Lie subalgebra. Obviously, the sub-adjacent Lie algebras of $(\frak g,\circ_{\frak g})$ and $(\frak g^*,\circ_{\frak g^*})$ are $(\mathfrak{g},[-,-]_{\mathfrak{g}})$ and $(\mathfrak{g}^{*},[-,-]_{\mathfrak{g}^{*}})$ respectively. ◻ **Theorem 16**. *Let $(A,\circ_{A})$ and $(A^{*},\circ_{A^{*}})$ be two anti-pre-Lie algebras and their sub-adjacent Lie algebras be $\big(\mathfrak{g}(A),[-,-]_{A}\big)$ and $\big(\mathfrak{g}(A^{*}),[-,-]_{A^{*}}\big)$ respectively. Then the following conditions are equivalent:* 1. *[\[it:APL1\]]{#it:APL1 label="it:APL1"} There is a Manin triple $\Big(\big(\mathfrak{g}(A)\oplus\mathfrak{g}(A^{*}),[-,-],\mathcal{B}_{d}\big),\mathfrak{g}(A),\mathfrak{g}(A^{*})\Big)$ of Lie algebras with respect to the commutative 2-cocycle such that the compatible anti-pre-Lie algebra $(A\oplus A^{*},\circ)$ defined by Eq.
([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B_d$ contains $(A,\circ_{A})$ and $(A^{*},\circ_{A^{*}})$ as anti-pre-Lie subalgebras.* 2. *[\[it:APL2\]]{#it:APL2 label="it:APL2"} $(A,A^{*},-\mathrm{ad}^{*}_{A},\mathcal{R}^{*}_{\circ_{A}},-\mathrm{ad}^{*}_{A^{*}},\mathcal{R}^{*}_{\circ_{A^{*}}})$ is a matched pair of anti-pre-Lie algebras.* 3. *[\[it:APL3\]]{#it:APL3 label="it:APL3"} $\big(\mathfrak{g}(A),\mathfrak{g}(A^{*}),-\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\circ_{A^{*}}}\big)$ is a matched pair of Lie algebras.* *Moreover, every Manin triple of Lie algebras with respect to the commutative 2-cocycle can be obtained this way.* *Proof.* $(\ref{it:APL1})\Longrightarrow (\ref{it:APL2})$. By assumption and Theorem [Theorem 12](#thm:matched pairs of anti-pre-Lie algebras){reference-type="ref" reference="thm:matched pairs of anti-pre-Lie algebras"}, there are linear maps $l_{\circ_{A}},r_{\circ_{A}}:A\rightarrow\mathrm{End}(A^{*})$ and $l_{\circ_{A^{*}}},r_{\circ_{A^{*}}}:A^{*}\rightarrow\mathrm{End}(A)$ such that $(A,A^{*},l_{\circ_{A}},r_{\circ_{A}},l_{\circ_{A^{*}}},r_{\circ_{A^{*}}})$ is a matched pair of anti-pre-Lie algebras and $$x\circ a^{*}=l_{\circ_{A}}(x)a^{*}+r_{\circ_{A^{*}}}(a^{*})x, \;\;a^{*}\circ x=l_{\circ_{A^{*}}}(a^{*})x+r_{\circ_{A}}(x)a^{*},\;\;\forall x\in A, a^*\in A^*.$$ Then we have $$\langle l_{\circ_{A}}(x)a^{*}, y\rangle=\mathcal{B}_{d}(x\circ a^{*}, y)=\mathcal{B}_{d}([x,y]_{A},a^{*})=\langle[x,y]_{A}, a^{*}\rangle=\langle -{\rm ad}_A^*(x)a^*, y\rangle,\;\;\forall x,y\in A,a^{*}\in A^{*}.$$ Thus $l_{\circ_{A}}=-\mathrm{ad}^*_{A}$, and similarly $l_{\circ_{A^{*}}}=-\mathrm{ad}_{A^{*}}^*$. 
Moreover, for all $x,y\in A,a^{*}\in A^{*}$, we have $$\begin{aligned} \langle a^{*}, x\circ_{A}y\rangle&=& \mathcal{B}_{d}(a^{*},x\circ_{A}y)=\mathcal{B}_{d}([x,a^{*}],y)=\mathcal{B}_{d}(x\circ a^{*}-a^{*}\circ x,y)\\ &=&\langle(l_{\circ_{A}}-r_{\circ_{A}})(x)a^{*},y\rangle=-\langle a^{*},(l^*_{\circ_{A}}-r^*_{\circ_{A}})(x)y\rangle. \end{aligned}$$ Since $l_{\circ_{A}}=-\mathrm{ad}^*_{A}$, we have $r^*_{\circ_{A}}=\mathcal{R}_{\circ_{A}}$ and thus $r_{\circ_{A}}=\mathcal{R}^*_{\circ_{A}}$. Similarly we have $r_{\circ_{A^{*}}}=\mathcal{R}^*_{\circ_{A^{*}}}$. Thus $(A,A^{*},-\mathrm{ad}^{*}_{A},\mathcal{R}^{*}_{\circ_{A}},-\mathrm{ad}^{*}_{A^{*}}$, $\mathcal{R}^{*}_{\circ_{A^{*}}})$ is a matched pair of anti-pre-Lie algebras. $(\ref{it:APL2})\Longrightarrow (\ref{it:APL3})$. It follows from Proposition [Proposition 13](#pro:from matched pairs of anti-pre-Lie algebras to matched pairs of Lie algebras){reference-type="ref" reference="pro:from matched pairs of anti-pre-Lie algebras to matched pairs of Lie algebras"}. $(\ref{it:APL3})\Longrightarrow (\ref{it:APL1})$. Suppose that $\big(\mathfrak{g}(A),\mathfrak{g}(A^{*}),-\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\circ_{A^{*}}}\big)$ is a matched pair of Lie algebras. Then there is a Lie algebra $\mathfrak{g}(A)\bowtie_{-\mathcal{L}^{*}_{\circ_{A^{*}}}}^{-\mathcal{L}^{*}_{\circ_{A}}}\mathfrak{g}(A^{*})$ with $$[x,a^{*}]=-\mathcal{L}^{*}_{\circ_{A}}(x)a^{*}+\mathcal{L}^{*}_{\circ_{A^{*}}}(a^{*})x,\;\;\forall x\in A,a^{*}\in A^{*}.$$ It is straightforward to show that $\mathcal{B}_{d}$ is a commutative 2-cocycle on this Lie algebra. Therefore $\Big(\big(\mathfrak{g}(A)\oplus\mathfrak{g}(A^{*}),[-,-],\mathcal{B}_{d}\big),\mathfrak{g}(A),\mathfrak{g}(A^{*})\Big)$ is a Manin triple of Lie algebras with respect to the commutative 2-cocycle. Moreover, by Lemma [Lemma 15](#lem:111){reference-type="ref" reference="lem:111"}, with the compatible anti-pre-Lie algebra structure $(A\oplus A^{*},\circ)$ defined by Eq. 
([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B_d$, the two anti-pre-Lie subalgebra structures on $A$ and $A^*$ are exactly $(A,\circ_{A})$ and $(A^{*},\circ_{A^{*}})$ respectively, that is, Item ([\[it:APL1\]](#it:APL1){reference-type="ref" reference="it:APL1"}) holds. Finally, the last conclusion also follows from Lemma [Lemma 15](#lem:111){reference-type="ref" reference="lem:111"}. ◻ **Definition 17**. An **anti-pre-Lie coalgebra** is a pair $(A,\delta)$, such that $A$ is a vector space and $\delta:A\rightarrow A\otimes A$ is a linear map satisfying $$\label{eq:defi:anti-pre-Lie coalgebras1} (\mathrm{id}^{\otimes 3}-\tau\otimes \mathrm{id})(\mathrm{id}\otimes\delta)\delta=(\tau\otimes \mathrm{id}-\mathrm{id}^{\otimes 3})(\delta\otimes \mathrm{id})\delta,$$ where $\tau:A\otimes A\rightarrow A\otimes A$ is the flip map defined by $\tau(x\otimes y)=y\otimes x$, for all $x,y\in A$, and $$\label{eq:defi:anti-pre-Lie coalgebras2} (\mathrm{id}^{\otimes 3}+\xi+\xi^{2})(\tau\otimes \mathrm{id}-\mathrm{id}^{\otimes 3})(\delta\otimes \mathrm{id})\delta=0,$$ where $\xi(x\otimes y\otimes z)=y\otimes z\otimes x$, for all $x,y,z\in A$. **Proposition 18**. *Let $A$ be a vector space and $\delta:A\rightarrow A\otimes A$ be a linear map. Let $\circ_{A^{*}}:A^{*}\otimes A^{*}\rightarrow A^{*}$ be the linear dual of $\delta$, that is, $$\label{eq:pro:anti-pre-Lie coalgebras and anti-pre-Lie algebras} \langle a^{*}\circ_{A^{*}}b^{*},x\rangle=\langle\delta^{*}(a^{*}\otimes b^{*}),x\rangle=\langle a^{*}\otimes b^{*},\delta(x)\rangle, \;\;\forall a^{*},b^{*}\in A^{*}, x\in A.$$ Then $(A,\delta)$ is an anti-pre-Lie coalgebra if and only if $(A^{*},\circ_{A^{*}})$ is an anti-pre-Lie algebra.* *Proof.* For all $x\in A, a^{*},b^{*},c^{*}\in A^{*}$, by Eq. 
([\[eq:pro:anti-pre-Lie coalgebras and anti-pre-Lie algebras\]](#eq:pro:anti-pre-Lie coalgebras and anti-pre-Lie algebras){reference-type="ref" reference="eq:pro:anti-pre-Lie coalgebras and anti-pre-Lie algebras"}), we have $$\begin{aligned} \langle (\mathrm{id}^{\otimes 3}-\tau\otimes \mathrm{id})(\mathrm{id}\otimes\delta)\delta(x),a^{*}\otimes b^{*}\otimes c^{*}\rangle &=&\langle x,\delta^{*}(\mathrm{id}\otimes \delta^{*})(\mathrm{id}^{\otimes 3}-\tau\otimes \mathrm{id})(a^{*}\otimes b^{*}\otimes c^{*})\rangle\\ &=&\langle x,a^{*}\circ_{A^{*}}(b^{*}\circ_{A^{*}} c^{*})-b^{*}\circ_{A^{*}}(a^{*}\circ_{A^{*}}c^{*})\rangle,\end{aligned}$$ $$\begin{aligned} \langle (\tau\otimes \mathrm{id}-\mathrm{id}^{\otimes 3})(\delta\otimes \mathrm{id})\delta(x), a^{*}\otimes b^{*}\otimes c^{*}\rangle&=&\langle x, \delta^{*}(\delta^{*}\otimes \mathrm{id})(\tau\otimes \mathrm{id}-\mathrm{id}^{\otimes 3})(a^{*}\otimes b^{*}\otimes c^{*})\rangle\\ &=&\langle x,(b^{*}\circ_{A^{*}} a^{*})\circ_{A^{*}} c^{*}-(a^{*}\circ_{A^{*}} b^{*})\circ_{A^{*}} c^{*}\rangle.\end{aligned}$$ Thus Eq. ([\[eq:defi:anti-pre-Lie coalgebras1\]](#eq:defi:anti-pre-Lie coalgebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie coalgebras1"}) holds if and only if Eq. ([\[eq:defi:anti-pre-Lie algebras1\]](#eq:defi:anti-pre-Lie algebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras1"}) holds on $A^{*}$. Similarly Eq. ([\[eq:defi:anti-pre-Lie coalgebras2\]](#eq:defi:anti-pre-Lie coalgebras2){reference-type="ref" reference="eq:defi:anti-pre-Lie coalgebras2"}) holds if and only if Eq. ([\[eq:defi:anti-pre-Lie algebras2\]](#eq:defi:anti-pre-Lie algebras2){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras2"}) holds on $A^{*}$. Hence the conclusion follows. ◻ **Remark 19**. By [@LB2022], $(A,\circ)$ is an anti-pre-Lie algebra if and only if Eq. 
([\[eq:defi:anti-pre-Lie algebras1\]](#eq:defi:anti-pre-Lie algebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras1"}) and the following equation $$\label{eq:defi:anti-pre-Lie algebras5} x\circ[y,z]+y\circ[z,x]+z\circ[x,y]=0, \;\;\forall x,y,z\in A,$$ hold. Therefore by dualization, $(A,\delta)$ is an anti-pre-Lie coalgebra if and only if Eq. ([\[eq:defi:anti-pre-Lie coalgebras1\]](#eq:defi:anti-pre-Lie coalgebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie coalgebras1"}) and the following equation $$\label{eq:defi:anti-pre-Lie coalgebras3} (\mathrm{id}^{\otimes 3}+\xi+\xi^{2})(\tau\otimes\mathrm{id}-\mathrm{id}^{\otimes 3})(\mathrm{id}\otimes\delta)\delta=0,$$ hold. **Definition 20**. An **anti-pre-Lie bialgebra** is a triple $(A,\circ,\delta)$, such that the pair $(A,\circ)$ is an anti-pre-Lie algebra, the pair $(A,\delta)$ is an anti-pre-Lie coalgebra, and the following equations hold: $$\begin{aligned} &&(\mathrm{id}^{\otimes 2}-\tau)\Big(\delta(x\circ y)-\big(\mathcal{L}_{\circ}(x)\otimes \mathrm{id}\big)\delta(y)-\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(x)\big)\delta(y)+\big(\mathrm{id}\otimes\mathcal{R}_{\circ}(y)\big)\delta(x)\Big)=0,\label{eq:defi:anti-pre-Lie bialgebra1}\\ &&\delta([x,y])=\big(\mathrm{id}\otimes\mathrm{ad}(x)-\mathcal{L}_{\circ}(x)\otimes \mathrm{id}\big)\delta(y)-\big(\mathrm{id}\otimes\mathrm{ad}(y)-\mathcal{L}_{\circ}(y)\otimes \mathrm{id}\big)\delta(x),\label{eq:defi:anti-pre-Lie bialgebra2}\end{aligned}$$ for all $x,y\in A$. For a representation $(\rho,V)$ of a Lie algebra $(\mathfrak{g},[-,-])$, recall that a **1-cocycle** of $(\mathfrak{g},[-,-])$ associated to $(\rho,V)$ is a linear map $f:\mathfrak{g}\rightarrow V$ such that $$f([x,y])=\rho(x)f(y)-\rho(y)f(x),\;\;\forall x,y\in\mathfrak{g}.$$ In particular, if there exists $u\in V$ such that $f(x)=\rho(x) u$ for all $x\in \frak g$, then $f$ is called a **1-coboundary**. Any 1-coboundary is a 1-cocycle. 
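The fact that every 1-coboundary is a 1-cocycle can be checked directly on a small matrix example. The following sketch is an illustration only, not part of the formal development: it takes the Lie algebra of $2\times 2$ matrices under the commutator, acting on column vectors by $\rho(x)=x$, with arbitrarily chosen $x$, $y$, and $u$, and verifies that $f(x)=\rho(x)u$ satisfies $f([x,y])=\rho(x)f(y)-\rho(y)f(x)$.

```python
# Sketch: a 1-coboundary f(x) = rho(x)u satisfies the 1-cocycle identity
# f([x, y]) = rho(x) f(y) - rho(y) f(x).
# Illustration only: the Lie algebra is gl_2 (2x2 matrices with the
# commutator bracket) acting on column vectors by rho(x) = x; the matrices
# x, y and the vector u are arbitrary choices, not data from the text.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(x, y):  # [x, y] = xy - yx
    xy, yx = matmul(x, y), matmul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(2)] for i in range(2)]

def matvec(a, v):
    return [sum(a[i][k] * v[k] for k in range(2)) for i in range(2)]

x = [[1, 2], [0, -1]]
y = [[3, 0], [1, 4]]
u = [2, -5]

f = lambda z: matvec(z, u)          # the 1-coboundary attached to u
lhs = f(bracket(x, y))              # f([x, y])
rhs = [matvec(x, f(y))[i] - matvec(y, f(x))[i] for i in range(2)]
print(lhs == rhs)                   # True
```

The identity holds exactly here because $\rho([x,y])=\rho(x)\rho(y)-\rho(y)\rho(x)$ for any matrix representation, so $f([x,y])=\rho(x)\rho(y)u-\rho(y)\rho(x)u=\rho(x)f(y)-\rho(y)f(x)$.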
For two representations $(\rho,V)$ and $(\phi,W)$ of a Lie algebra $(\mathfrak{g},[-,-])$, it is known that $(\rho\otimes\mathrm{id}+\mathrm{id}\otimes\phi, V\otimes W)$ is also a representation of $(\mathfrak{g},[-,-])$, where $$(\rho\otimes\mathrm{id}+\mathrm{id}\otimes\phi)(x)(v\otimes w)=\rho(x)v\otimes w+v\otimes\phi(x)w, \;\;\forall x\in\mathfrak{g}, v\in V,w\in W.$$ **Theorem 21**. *Let $(A,\circ_{A})$ be an anti-pre-Lie algebra. Suppose that there is an anti-pre-Lie algebra structure $(A^{*},\circ_{A^{*}})$ on the dual space $A^{*}$. Let $\big(\mathfrak{g}(A),[-,-]_{A}\big)$ and $\big(\mathfrak{g}(A^{*}),[-,-]_{A^{*}}\big)$ be the sub-adjacent Lie algebras of $(A,\circ_{A})$ and $(A^{*},\circ_{A^{*}})$ respectively. Let $\delta:A\rightarrow A\otimes A$ and $\beta:A^{*}\rightarrow A^{*}\otimes A^{*}$ be linear duals of $\circ_{A^{*}}$ and $\circ_{A}$ respectively. Then the following conditions are equivalent:* 1. *[\[it:aa1\]]{#it:aa1 label="it:aa1"} $(A,\circ_{A},\delta)$ is an anti-pre-Lie bialgebra.* 2. *[\[it:aa2\]]{#it:aa2 label="it:aa2"} $\delta$ is a 1-cocycle of $\big(\mathfrak{g}(A),[-,-]_{A}\big)$ associated to $(-\mathcal{L}_{\circ_{A}}\otimes\mathrm{id}+\mathrm{id}\otimes\mathrm{ad}_{A})$, and $\beta$ is a 1-cocycle of $\big(\mathfrak{g}(A^{*}),[-,-]_{A^{*}}\big)$ associated to $(-\mathcal{L}_{\circ_{A^{*}}}\otimes\mathrm{id}+\mathrm{id}\otimes\mathrm{ad}_{A^{*}})$.* 3. *[\[it:aa3\]]{#it:aa3 label="it:aa3"} $\big(\mathfrak{g}(A),\mathfrak{g}(A^{*}),-\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\circ_{A^{*}}}\big)$ is a matched pair of Lie algebras.* *Proof.* $(\ref{it:aa1})\Longleftrightarrow (\ref{it:aa2})$. Obviously Eq.
([\[eq:defi:anti-pre-Lie bialgebra2\]](#eq:defi:anti-pre-Lie bialgebra2){reference-type="ref" reference="eq:defi:anti-pre-Lie bialgebra2"}) holds if and only if $\delta$ is a 1-cocycle of $(\mathfrak{g}(A),[-,-]_{A})$ associated to $(-\mathcal{L}_{\circ_{A}}\otimes\mathrm{id}+\mathrm{id}\otimes\mathrm{ad}_{A})$. Let $x,y\in A, a^{*},b^{*}\in A^{*}$. Then we have $$\begin{aligned} \langle\beta([a^{*},b^{*}]_{A^{*}}),x\otimes y\rangle&=&\langle a^{*}\circ_{A^{*}}b^{*}, x\circ_{A}y\rangle-\langle b^{*}\circ_{A^{*}}a^{*}, x\circ_{A}y\rangle\\ &=&\langle a^*\otimes b^*, (\mathrm{id}^{\otimes 2}-\tau)\delta(x\circ_{A}y)\rangle,\\ \langle \big(\mathrm{id}\otimes\mathrm{ad}_{A^{*}}(b^{*})-\mathcal{L}_{\circ_{A^{*}}}(b^{*})\otimes\mathrm{id}\big)\beta(a^{*}), x\otimes y\rangle &=&-\langle\beta(a^{*}),x\otimes\mathrm{ad}^{*}_{A^{*}}(b^{*})y\rangle+\langle\beta(a^{*}),\mathcal{L}^{*}_{\circ_{A^{*}}}(b^{*})x\otimes y\rangle\\ &=&-\langle a^{*},x\circ_{A}\mathrm{ad}^{*}_{A^{*}}(b^{*})y\rangle+\langle a^{*},\mathcal{L}^{*}_{\circ_{A^{*}}}(b^{*})x\circ_{A}y\rangle\\ &=&\langle[\mathcal{L}^{*}_{\circ_{A}}(x)a^{*},b^{*}]_{A^{*}},y\rangle+\langle b^{*}\circ_{A^{*}}\mathcal{R}^{*}_{\circ_{A}}(y)a^{*},x\rangle\\ &=&\langle a^{*}\otimes b^{*},\tau\big(\mathrm{id}\otimes\mathcal{L}_{\circ_{A}}(x)\big)\delta(y)-\tau\big(\mathrm{id}\otimes\mathcal{R}_{\circ_{A}}(y)\big)\delta(x)\\ &&\hspace{1.6cm}-\big(\mathcal{L}_{\circ_{A}}(x)\otimes\mathrm{id}\big)\delta(y)\rangle,\\ -\langle \big(\mathrm{id}\otimes\mathrm{ad}_{A^{*}}(a^{*})-\mathcal{L}_{\circ_{A^{*}}}(a^{*})\otimes\mathrm{id}\big)\beta(b^{*}), x\otimes y\rangle&=& \langle a^{*}\otimes b^{*}, -\big(\mathrm{id}\otimes\mathcal{L}_{\circ_{A}}(x)\big)\delta(y)+\big(\mathrm{id}\otimes\mathcal{R}_{\circ_{A}}(y)\big)\delta(x)\\ &&\hspace{1.6cm}+\tau\big(\mathcal{L}_{\circ_{A}}(x)\otimes\mathrm{id}\big)\delta(y)\rangle.\end{aligned}$$ Hence $\beta$ is a 1-cocycle of
$\big(\mathfrak{g}(A^{*}),[-,-]_{A^{*}}\big)$ associated to $(-\mathcal{L}_{\circ_{A^{*}}}\otimes\mathrm{id}+\mathrm{id}\otimes\mathrm{ad}_{A^{*}})$ if and only if Eq. ([\[eq:defi:anti-pre-Lie bialgebra1\]](#eq:defi:anti-pre-Lie bialgebra1){reference-type="ref" reference="eq:defi:anti-pre-Lie bialgebra1"}) holds. $(\ref{it:aa2})\Longleftrightarrow (\ref{it:aa3})$. For all $x,y\in A, a^{*},b^{*}\in A^{*}$, we have $$\begin{aligned} -\langle\mathcal{L}^{*}_{\circ_{A^{*}}}(a^{*})([x,y]_{A}),b^{*}\rangle&=&\langle[x,y]_{A},a^{*}\circ_{A^{*}}b^{*}\rangle=\langle\delta([x,y]_{A}),a^{*}\otimes b^{*}\rangle,\\ \langle[\mathcal{L}^{*}_{\circ_{A^{*}}}(a^{*})x,y]_{A},b^{*}\rangle&=&\langle\mathcal{L}^{*}_{\circ_{A^{*}}}(a^{*})x,\mathrm{ad}^{*}_{A}(y)b^{*}\rangle=-\langle x,a^{*}\circ_{A^{*}}\mathrm{ad}^{*}_{A}(y)b^{*}\rangle\\ &=&-\langle x,\delta^{*}\big(\mathrm{id}\otimes\mathrm{ad}^{*}_{A}(y)\big)(a^{*}\otimes b^{*})\rangle=\langle\big(\mathrm{id}\otimes\mathrm{ad}_{A}(y)\big)\delta(x),a^{*}\otimes b^{*}\rangle,\\ \langle[x,\mathcal{L}^{*}_{\circ_{A^{*}}}(a^{*})y]_{A},b^{*}\rangle&=&-\langle\big(\mathrm{id}\otimes\mathrm{ad}_{A}(x)\big)\delta(y),a^{*}\otimes b^{*}\rangle,\\ \langle\mathcal{L}^{*}_{\circ_{A^{*}}}\big(\mathcal{L}^{*}_{\circ_{A}}(x)a^{*}\big)y,b^{*}\rangle&=&-\langle y,\mathcal{L}^{*}_{\circ_{A}}(x)a^{*}\circ_{A^{*}}b^{*}\rangle=-\langle y,\delta^{*}\big(\mathcal{L}^{*}_{\circ_{A}}(x)\otimes \mathrm{id}\big)(a^{*}\otimes b^{*})\rangle\\ &=&\langle\big(\mathcal{L}_{\circ_{A}}(x)\otimes \mathrm{id}\big)\delta(y),a^{*}\otimes b^{*}\rangle,\\ -\langle\mathcal{L}^{*}_{\circ_{A^{*}}}\big(\mathcal{L}^{*}_{\circ_{A}}(y)a^{*}\big)x,b^{*}\rangle&=&-\langle\big(\mathcal{L}_{\circ_{A}}(y)\otimes \mathrm{id}\big)\delta(x),a^{*}\otimes b^{*}\rangle. \end{aligned}$$ Thus $\delta$ is a 1-cocycle of $\big(\mathfrak{g}(A),[-,-]_{A}\big)$ associated to $(-\mathcal{L}_{\circ_{A}}\otimes\mathrm{id}+\mathrm{id}\otimes\mathrm{ad}_{A})$ if and only if Eq.
([\[eq:matched pair of Lie algebras2\]](#eq:matched pair of Lie algebras2){reference-type="ref" reference="eq:matched pair of Lie algebras2"}) holds for $\rho_{\frak g(A)}=-\mathcal{L}^{*}_{\circ_{A}}, \rho_{\frak g (A^{*})}=-\mathcal{L}^{*}_{\circ_{A^{*}}}$. By symmetry, $\beta$ is a 1-cocycle of $\big(\mathfrak{g}(A^{*}),[-,-]_{A^{*}}\big)$ associated to $(-\mathcal{L}_{\circ_{A^{*}}}\otimes\mathrm{id}+\mathrm{id}\otimes\mathrm{ad}_{A^{*}})$ if and only if Eq. ([\[eq:matched pair of Lie algebras1\]](#eq:matched pair of Lie algebras1){reference-type="ref" reference="eq:matched pair of Lie algebras1"}) holds for $\rho_{\frak g(A)}=-\mathcal{L}^{*}_{\circ_{A}}, \rho_{\frak g (A^{*})}=-\mathcal{L}^{*}_{\circ_{A^{*}}}$. ◻ Combining Theorems [Theorem 16](#thm:Manin triples and matched pairs){reference-type="ref" reference="thm:Manin triples and matched pairs"} and [\[thm:equivalence matched pairs of Lie algebras and anti-pre-Lie bialgebras\]](#thm:equivalence matched pairs of Lie algebras and anti-pre-Lie bialgebras){reference-type="ref" reference="thm:equivalence matched pairs of Lie algebras and anti-pre-Lie bialgebras"} together, we have **Corollary 22**. *Let $(A,\circ_{A})$ be an anti-pre-Lie algebra. Suppose that there is an anti-pre-Lie algebra structure $(A^{*},\circ_{A^{*}})$ on the dual space $A^{*}$ and $\delta:A\rightarrow A\otimes A$ is the linear dual of $\circ_{A^{*}}$. Let $\big(\mathfrak{g}(A),[-,-]_{A}\big)$ and $\big(\mathfrak{g}(A^{*}),[-,-]_{A^{*}}\big)$ be the sub-adjacent Lie algebras of $(A,\circ_{A})$ and $(A^{*},\circ_{A^{*}})$ respectively. Then the following conditions are equivalent:* 1. *There is a Manin triple $\Big(\big(\mathfrak{g}(A)\oplus\mathfrak{g}(A^{*}),[-,-],\mathcal{B}_{d}\big),\mathfrak{g}(A),\mathfrak{g}(A^{*})\Big)$ of Lie algebras with respect to the commutative 2-cocycle such that the compatible anti-pre-Lie algebra $(A\oplus A^{*},\circ)$ defined by Eq. 
([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B_d$ contains $(A,\circ_{A})$ and $(A^{*},\circ_{A^{*}})$ as anti-pre-Lie subalgebras.* 2. *$(A,A^{*},-\mathrm{ad}^{*}_{A},\mathcal{R}^{*}_{\circ_{A}},-\mathrm{ad}^{*}_{A^{*}},\mathcal{R}^{*}_{\circ_{A^{*}}})$ is a matched pair of anti-pre-Lie algebras.* 3. *$\big(\mathfrak{g}(A),\mathfrak{g}(A^{*}),-\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\circ_{A^{*}}}\big)$ is a matched pair of Lie algebras.* 4. *$(A,\circ_{A},\delta)$ is an anti-pre-Lie bialgebra.* ## Coboundary anti-pre-Lie bialgebras and the anti-pre-Lie Yang-Baxter equation **Definition 23**. An anti-pre-Lie bialgebra $(A,\circ,\delta)$ is called **coboundary** if $\delta:A\rightarrow A\otimes A$ is a 1-coboundary of $\big(\mathfrak{g}(A),[-,-]\big)$ associated to $(-\mathcal{L}_{\circ}\otimes\mathrm{id}+\mathrm{id}\otimes\mathrm{ad})$, that is, there exists an $r\in A\otimes A$ such that $$\label{eq:defi:coboundary anti-pre-Lie bialgebras} \delta(x):=\delta_{r}(x):=\big(-\mathcal{L}_{\circ}(x)\otimes \mathrm{id}+\mathrm{id}\otimes\mathrm{ad}(x)\big)r,\;\; \forall x\in A.$$ **Proposition 24**. *Let $(A,\circ)$ be an anti-pre-Lie algebra and $r=\sum\limits_{i}a_{i}\otimes b_{i}\in A\otimes A$. Let $\delta=\delta_{r}:A\rightarrow A\otimes A$ be a linear map defined by Eq. ([\[eq:defi:coboundary anti-pre-Lie bialgebras\]](#eq:defi:coboundary anti-pre-Lie bialgebras){reference-type="ref" reference="eq:defi:coboundary anti-pre-Lie bialgebras"}). Denote $$\textbf{T}(r)=r_{12}\circ r_{13}+r_{12}\circ r_{23}-[r_{13},r_{23}],$$ where $$r_{12}\circ r_{13}=\sum_{i,j}a_{i}\circ a_{j}\otimes b_{i}\otimes b_{j}, r_{12}\circ r_{23}=\sum_{i,j}a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}, [r_{13},r_{23}]=\sum_{i,j}a_{i}\otimes a_{j}\otimes [b_{i},b_{j}].$$* 1. 
*[\[it:bb1\]]{#it:bb1 label="it:bb1"}Eq. ([\[eq:defi:anti-pre-Lie coalgebras1\]](#eq:defi:anti-pre-Lie coalgebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie coalgebras1"}) holds if and only if for all $x\in A$, the following equation holds: $$\label{eq:pro:cob coalg1} \begin{split} &\Big(\mathcal{L}_{\circ }(x)\otimes\mathrm{id}\otimes\mathrm{id}-(\tau\otimes\mathrm{id})\big(\mathcal{L}_{\circ }(x)\otimes\mathrm{id}\otimes\mathrm{id}\big)-\mathrm{id}\otimes\mathrm{id}\otimes\mathrm{ad}(x)\Big)\textbf{T}(r)\\ &+\sum_{j}\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(a_{j})\otimes\mathrm{ad}(x)-\mathrm{ad}(a_{j})\otimes\mathrm{id}\otimes\mathrm{ad}(x)\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)\\ &+(\mathrm{id}^{\otimes 3}-\tau\otimes\mathrm{id})\sum_{j}\big(\mathcal{L}_{\circ}(x\circ a_{j})\otimes\mathrm{id}\otimes\mathrm{id}-\mathcal{L}_{\circ}(x)\mathcal{R}_{\circ}(a_{j})\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)=0. \end{split}$$* 2. *[\[it:bb2\]]{#it:bb2 label="it:bb2"} Eq. 
([\[eq:defi:anti-pre-Lie coalgebras2\]](#eq:defi:anti-pre-Lie coalgebras2){reference-type="ref" reference="eq:defi:anti-pre-Lie coalgebras2"}) holds if and only if for all $x\in A$, the following equation holds:* *$$\label{eq:pro:cob coalg2} \begin{split} &(\mathrm{id}^{\otimes 3}+\xi+\xi^{2})\bigg(-\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\circ}(x)\big)\textbf{T}(r)+\sum_{j}\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\circ}([x,b_{j}])\big)(\mathrm{id}^{\otimes 3}-\tau\otimes\mathrm{id})\Big(a_{j}\otimes \big(r+\tau(r)\big)\Big)\\ &+\sum_{j}\big(\mathrm{id}\otimes\mathrm{ad}(b_{j})\otimes \mathcal{L}_{\circ}(x)\big)\Big(a_{j}\otimes \big(r+\tau(r)\big)\Big)+\sum_{j}\big(\mathrm{id}\otimes\mathrm{ad}(a_{j})\otimes \mathcal{L}_{\circ}(x)\big)\Big(b_{j}\otimes \big(r+\tau(r)\big)\Big)\\ &+\sum_{j}\big(\mathcal{L}_{\circ}(a_{j})\otimes\mathrm{id}\otimes \mathcal{L}_{\circ}(x)\big)(\tau\otimes\mathrm{id})\Big(b_{j}\otimes \big(r+\tau(r)\big)\Big)+\sum_{j}\big(\mathrm{ad}(b_{j})\otimes \mathrm{id}\otimes\mathcal{L}_{\circ}(x)\big) \Big(\big(r+\tau(r)\big)\otimes a_{j}\Big)\bigg)=0. \end{split}$$* 3. *[\[it:bb3\]]{#it:bb3 label="it:bb3"} Eq. ([\[eq:defi:anti-pre-Lie bialgebra1\]](#eq:defi:anti-pre-Lie bialgebra1){reference-type="ref" reference="eq:defi:anti-pre-Lie bialgebra1"}) holds if and only if for all $x,y\in A$, the following equation holds:* *$$\label{eq:pro:coboundary anti-pre-Lie bialgebras1} \big(\mathrm{id}\otimes\mathcal{L}_{\circ}(x\circ y)-\mathrm{id}\otimes\mathcal{L}_{\circ}(x)\mathcal{L}_{\circ}(y)+\mathcal{L}_{\circ}(x)\mathcal{L}_{\circ}(y)\otimes \mathrm{id}-\mathcal{L}_{\circ}(x\circ y)\otimes \mathrm{id}+\mathcal{L}_{\circ}(y)\otimes\mathcal{L}_{\circ}(x)-\mathcal{L}_{\circ}(x)\otimes\mathcal{L}_{\circ}(y)\big)\big(r+\tau(r)\big)=0.$$* *Proof.* The proof follows from a careful term-by-term interpretation; since it is a little lengthy, we put it in the Appendix. ◻ **Theorem 25**.
*Let $(A,\circ)$ be an anti-pre-Lie algebra and $r=\sum\limits_{i}a_{i}\otimes b_{i}\in A\otimes A$. Let $\delta=\delta_{r}$ be a linear map defined by Eq. ([\[eq:defi:coboundary anti-pre-Lie bialgebras\]](#eq:defi:coboundary anti-pre-Lie bialgebras){reference-type="ref" reference="eq:defi:coboundary anti-pre-Lie bialgebras"}). Then $(A,\circ,\delta)$ is an anti-pre-Lie bialgebra if and only if Eqs. ([\[eq:pro:cob coalg1\]](#eq:pro:cob coalg1){reference-type="ref" reference="eq:pro:cob coalg1"})-([\[eq:pro:coboundary anti-pre-Lie bialgebras1\]](#eq:pro:coboundary anti-pre-Lie bialgebras1){reference-type="ref" reference="eq:pro:coboundary anti-pre-Lie bialgebras1"}) hold.* *Proof.* Eq. ([\[eq:defi:anti-pre-Lie bialgebra2\]](#eq:defi:anti-pre-Lie bialgebra2){reference-type="ref" reference="eq:defi:anti-pre-Lie bialgebra2"}) holds automatically. By Proposition [Proposition 24](#pro:cob coalg){reference-type="ref" reference="pro:cob coalg"}, the conclusion holds. ◻ The simplest way to satisfy Eqs. ([\[eq:pro:cob coalg1\]](#eq:pro:cob coalg1){reference-type="ref" reference="eq:pro:cob coalg1"})-([\[eq:pro:coboundary anti-pre-Lie bialgebras1\]](#eq:pro:coboundary anti-pre-Lie bialgebras1){reference-type="ref" reference="eq:pro:coboundary anti-pre-Lie bialgebras1"}) is to take $r$ skew-symmetric (that is, $\tau(r)=-r$) with $\textbf{T}(r)=0$; that is, we have **Corollary 26**. *Let $(A,\circ)$ be an anti-pre-Lie algebra and $r\in A\otimes A$ be skew-symmetric. If $\textbf{T}(r)=0$, then $(A,\circ,\delta)$ is an anti-pre-Lie bialgebra, where $\delta=\delta_{r}$ is defined by Eq. ([\[eq:defi:coboundary anti-pre-Lie bialgebras\]](#eq:defi:coboundary anti-pre-Lie bialgebras){reference-type="ref" reference="eq:defi:coboundary anti-pre-Lie bialgebras"}).* **Definition 27**. Let $(A,\circ)$ be an anti-pre-Lie algebra and $r\in A\otimes A$.
We say that $r$ is a solution of the **anti-pre-Lie Yang-Baxter equation** (**APL-YBE** for short) in $(A,\circ)$ if $\textbf{T}(r)=0$. **Example 28**. Let $(A,\cdot)$ be a commutative associative algebra with a derivation $P$. Then according to [@LB2022], there is an anti-pre-Lie algebra $(A,\circ)$ given by $$\label{eq:diff anti-pre-Lie} x\circ y=P(x\cdot y)+P(x)\cdot y,\;\;\forall x,y\in A.$$ Let $r\in A\otimes A$ be a solution of the **associative Yang-Baxter equation (AYBE)** in $(A,\cdot)$, that is, $r$ satisfies $$\label{eq:AYBE}{\bf A}(r):=r_{12}\cdot r_{13}-r_{23}\cdot r_{12}+r_{13}\cdot r_{23}=0,$$ where for $r=\sum\limits_{i} a_{i}\otimes b_{i}$, $$r_{12}\cdot r_{13}=\sum_{i,j} a_{i}\cdot a_{j}\otimes b_{i}\otimes b_{j},\ r_{23}\cdot r_{12}=\sum_{i,j}a_{j}\otimes a_{i}\cdot b_{j}\otimes b_{i},\ r_{13}\cdot r_{23}=\sum_{i,j} a_{i}\otimes a_{j}\otimes b_{i}\cdot b_{j}.$$ If, in addition, $r$ satisfies $$\label{eq:-P} (\mathrm{id}\otimes P+P\otimes\mathrm{id})r=0,$$ then $r$ is also a solution of the APL-YBE in $(A,\circ)$. In fact, we have $$\begin{aligned} \textbf{T}(r)&=&r_{12}\circ r_{13}+r_{12}\circ r_{23}-[r_{13}, r_{23}]\\ &=&\sum_{i,j}\big(P(a_{i}\cdot a_{j})\otimes b_{i}\otimes b_{j}+P(a_{i})\cdot a_{j}\otimes b_{i}\otimes b_{j}+a_{i}\otimes P(b_{i}\cdot a_{j})\otimes b_{j}\\ &&\ +a_{i}\otimes P(b_{i})\cdot a_{j}\otimes b_{j}+a_{i}\otimes a_{j}\otimes b_{i}\cdot P(b_{j})-a_{i}\otimes a_{j}\otimes P(b_{i})\cdot b_{j}\big)\\ &{\overset{(\ref{eq:-P})}{=}}&\sum_{i,j}\big(P(a_{i}\cdot a_{j})\otimes b_{i}\otimes b_{j}-a_{i}\cdot a_{j}\otimes P(b_{i})\otimes b_{j}+a_{i}\otimes P(b_{i}\cdot a_{j})\otimes b_{j}\\ &&\ -P(a_{i})\otimes b_{i}\cdot a_{j}\otimes b_{j}-a_{i}\otimes P(a_{j})\otimes b_{i}\cdot b_{j}+P(a_{i})\otimes a_{j}\otimes b_{i}\cdot b_{j}\big)\\ &=&(P\otimes\mathrm{id}\otimes\mathrm{id}-\mathrm{id}\otimes P\otimes\mathrm{id})(r_{12}\cdot r_{13}-r_{23}\cdot r_{12}+r_{13}\cdot r_{23})\\ &=&0.
\end{aligned}$$ For a vector space $A$, the isomorphism $A\otimes A\cong {\rm Hom}(A^*,A)$ identifies an $r\in A\otimes A$ with a map from $A^*$ to $A$ which we still denote by $r$. Explicitly, writing $r=\sum\limits_{i}a_{i}\otimes b_{i}\in A\otimes A$, we have $$\label{eq:identify} r:A^{*}\rightarrow A,\;\; r(a^{*})=\sum_{i}\langle a^{*}, a_{i}\rangle b_{i},\;\;\forall a^{*}\in A^{*}.$$ **Theorem 29**. *Let $(A,\circ)$ be an anti-pre-Lie algebra and $r\in A\otimes A$ be skew-symmetric. Then the following conditions are equivalent.* 1. *[\[it:ss1\]]{#it:ss1 label="it:ss1"} $r$ is a solution of the APL-YBE in $(A,\circ)$.* 2. *[\[it:ss2\]]{#it:ss2 label="it:ss2"} The following equation holds. $$\label{eq:APL-O} r(a^{*})\circ r(b^{*})+r\Big(\mathrm{ad}^{*}\big(r(a^{*})\big)b^{*}\Big)-r\Big(\mathcal{R}^{*}_{\circ}\big(r(b^{*})\big)a^{*}\Big)=0,\;\;\forall a^{*},b^{*}\in A^{*}.$$* 3. *[\[it:ss3\]]{#it:ss3 label="it:ss3"} The following equation holds. $$\label{eq:Lie-O} [r(a^{*}),r(b^{*})]+r\Big(\mathcal{L}^{*}_{\circ}\big(r(a^{*})\big)b^{*}\Big)-r\Big(\mathcal{L}^{*}_{\circ}\big(r(b^{*})\big)a^{*}\Big)=0,\;\;\forall a^{*},b^{*}\in A^{*}.$$* *Proof.* Let $r=\sum\limits_{i}a_{i}\otimes b_{i}\in A\otimes A$ and $a^{*},b^{*},c^{*}\in A^{*}$. ([\[it:ss1\]](#it:ss1){reference-type="ref" reference="it:ss1"}) $\Longleftrightarrow$ ([\[it:ss2\]](#it:ss2){reference-type="ref" reference="it:ss2"}). By Eq. 
([\[eq:identify\]](#eq:identify){reference-type="ref" reference="eq:identify"}), we have $$\begin{aligned} \langle r(a^{*})\circ r(b^{*}), c^{*}\rangle&=&\sum_{i,j}\langle a^{*},a_{i}\rangle\langle b^{*},a_{j}\rangle\langle b_{i}\circ b_{j}, c^{*}\rangle=\sum_{i,j}\langle c^{*}\otimes a^{*}\otimes b^{*}, b_{i}\circ b_{j}\otimes a_{i}\otimes a_{j}\rangle\\ &=&\sum_{i,j}\langle c^{*}\otimes a^{*}\otimes b^{*}, a_{i}\circ a_{j}\otimes b_{i}\otimes b_{j}\rangle,\\ \langle r\Big(\mathrm{ad}^{*}\big(r(a^{*})\big)b^{*}\Big), c^{*}\rangle&=&\sum_{i}\langle\mathrm{ad}^{*}\big(r(a^{*})\big)b^{*}, a_{i}\rangle\langle b_{i},c^{*}\rangle=-\sum_{i}\langle b^{*},[r(a^{*}),a_{i}]\rangle\langle b_{i},c^{*}\rangle\\ &=&-\sum_{i,j}\langle a^{*}, a_{j}\rangle\langle b^{*},[b_{j},a_{i}]\rangle\langle b_{i},c^{*}\rangle=-\sum_{i,j}\langle c^{*}\otimes a^{*}\otimes b^{*}, b_{i}\otimes a_{j}\otimes [b_{j},a_{i}]\rangle\\ &=&-\sum_{i,j}\langle c^{*}\otimes a^{*}\otimes b^{*}, a_{i}\otimes a_{j}\otimes [b_{i},b_{j}]\rangle,\\ -\langle r\Big(\mathcal{R}^{*}_{\circ}\big(r(b^{*})\big)a^{*}\Big),c^{*}\rangle&=&-\sum_{i}\langle \mathcal{R}^{*}_{\circ}\big(r(b^{*})\big)a^{*},a_{i}\rangle\langle b_{i},c^{*}\rangle=\sum_{i}\langle a^{*},a_{i}\circ r(b^{*})\rangle\langle b_{i},c^{*}\rangle\\ &=&\sum_{i,j}\langle b^{*},a_{j}\rangle\langle a^{*}, a_{i}\circ b_{j}\rangle\langle b_{i},c^{*}\rangle=\sum_{i,j}\langle c^{*}\otimes a^{*}\otimes b^{*}, b_{i}\otimes a_{i}\circ b_{j}\otimes a_{j}\rangle\\ &=&\sum_{i,j}\langle c^{*}\otimes a^{*}\otimes b^{*}, a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}\rangle.\end{aligned}$$ Hence Item ([\[it:ss1\]](#it:ss1){reference-type="ref" reference="it:ss1"}) holds if and only if Eq. ([\[eq:APL-O\]](#eq:APL-O){reference-type="ref" reference="eq:APL-O"}) holds. ([\[it:ss1\]](#it:ss1){reference-type="ref" reference="it:ss1"}) $\Longleftrightarrow$ ([\[it:ss3\]](#it:ss3){reference-type="ref" reference="it:ss3"}). 
By a similar computation as above, we have $$\begin{aligned} \langle [r(a^{*}), r(b^{*})], c^{*}\rangle &=&\sum_{i,j}\langle a^{*}\otimes b^{*}\otimes c^{*}, a_{i}\otimes a_{j}\otimes [b_{i},b_{j}]\rangle,\\ \langle r\Big(\mathcal{L}^{*}_{\circ}\big(r(a^{*})\big)b^{*}\Big),c^{*}\rangle &=&-\sum_{i,j}\langle a^{*}\otimes b^{*}\otimes c^{*}, a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}\rangle,\\ -\langle r\Big(\mathcal{L}^{*}_{\circ}\big(r(b^{*})\big)a^{*}\Big),c^{*}\rangle &=&-\sum_{i,j}\langle a^{*}\otimes b^{*}\otimes c^{*}, a_{i}\circ a_{j}\otimes b_{i}\otimes b_{j}\rangle. \end{aligned}$$ Hence Item ([\[it:ss1\]](#it:ss1){reference-type="ref" reference="it:ss1"}) holds if and only if Eq. ([\[eq:Lie-O\]](#eq:Lie-O){reference-type="ref" reference="eq:Lie-O"}) holds. ◻ ## $\mathcal{O}$-operators of anti-pre-Lie algebras and pre-anti-pre-Lie algebras   Theorem [Theorem 29](#thm:O-operator and T-equation){reference-type="ref" reference="thm:O-operator and T-equation"} motivates us to introduce the following notion. **Definition 30**. Let $(A,\circ)$ be an anti-pre-Lie algebra and $(l_{\circ},r_{\circ},V)$ be a representation of $(A,\circ)$. A linear map $T:V\rightarrow A$ is called **an $\mathcal{O}$-operator of $(A,\circ)$ associated to** $(l_{\circ},r_{\circ},V)$ if $$\label{eq:defi:O-operators} T(u)\circ T(v)=T\Big(l_{\circ}\big(T(u)\big)v+r_{\circ}\big(T(v)\big)u\Big),\;\;\forall u,v\in V.$$ Recall ([@Kup]) that an **$\mathcal O$-operator $T$ of a Lie algebra $(\frak g,[-,-])$ associated to a representation $(\rho, V)$** is a linear map $T:V\rightarrow \frak g$ satisfying $$[T(u),T(v)]=T\Big(\rho\big(T(u)\big)v-\rho\big(T(v)\big)u\Big),\;\;\forall u,v\in V.$$ Obviously, if $T$ is an $\mathcal O$-operator of an anti-pre-Lie algebra $(A,\circ)$ associated to a representation $(l_{\circ},r_{\circ},V)$, then $T$ is an $\mathcal O$-operator of the sub-adjacent Lie algebra $\big(\frak g(A),[-,-]\big)$ associated to $(l_\circ-r_\circ, V)$. **Example 31**.
Theorem [Theorem 29](#thm:O-operator and T-equation){reference-type="ref" reference="thm:O-operator and T-equation"} can be restated as follows. Let $(A,\circ)$ be an anti-pre-Lie algebra and $r\in A\otimes A$ be skew-symmetric. Then $r$ is a solution of the APL-YBE in $(A,\circ)$ if and only if $r$ is an $\mathcal{O}$-operator of $(A,\circ)$ associated to $(-\mathrm{ad}^{*},\mathcal{R}^{*}_{\circ},A^{*})$, or equivalently, $r$ is an $\mathcal O$-operator of the sub-adjacent Lie algebra $\big(\frak g(A),[-,-]\big)$ associated to $(-\mathcal L^*_\circ, A^{*})$. **Theorem 32**. *Let $(A,\circ)$ be an anti-pre-Lie algebra and $(l_{\circ},r_{\circ},V)$ be a representation of $(A,\circ)$. Set $\hat{A}=A\ltimes_{r^{*}_{\circ}-l^{*}_{\circ},r^{*}_{\circ}}V^{*}$. Let $T:V\rightarrow A$ be a linear map which is identified with an element of the vector space $\hat{A}\otimes\hat{A}$ (through $\mathrm{Hom}(V,A)\cong A\otimes V^{*} \subseteq \hat{A}\otimes\hat{A}$). Then $r=T-\tau(T)$ is a skew-symmetric solution of the APL-YBE in the anti-pre-Lie algebra $\hat{A}$ if and only if $T$ is an $\mathcal{O}$-operator of $(A,\circ)$ associated to $(l_{\circ},r_{\circ},V)$.* *Proof.* Let $\lbrace v_{1},\cdots ,v_{n}\rbrace$ be a basis of $V$ and $\lbrace v^{*}_{1},\cdots ,v^{*}_{n}\rbrace$ be the dual basis.
Then $$T=\sum_{i=1}^{n}T(v_{i})\otimes v^{*}_{i}\in T(V)\otimes V^{*}\subset \hat{A}\otimes \hat{A}.$$ Note that $$l^{*}_{\circ}\big(T(v_{i})\big)v^{*}_{j}=\sum_{k=1}^{n}-\langle v^{*}_{j},l_{\circ}\big(T(v_{i})\big)v_{k}\rangle v^{*}_{k},\ \ r^{*}_{\circ}\big(T(v_{i})\big)v^{*}_{j}=\sum_{k=1}^{n}-\langle v^{*}_{j},r_{\circ}\big(T(v_{i})\big)v_{k}\rangle v^{*}_{k}.$$ Then we have $$\begin{aligned} r_{12}\circ r_{13} &=&\sum_{i,j=1}^{n}T(v_{i})\circ T(v_{j})\otimes v^{*}_{i}\otimes v^{*}_{j}-T(v_{i})\circ v^{*}_{j}\otimes v^{*}_{i}\otimes T(v_{j})-v^{*}_{i}\circ T(v_{j})\otimes T(v_{i})\otimes v^{*}_{j}\\ &=&\sum_{i,j=1}^{n}T(v_{i})\circ T(v_{j})\otimes v^{*}_{i}\otimes v^{*}_{j} -(r^{*}_{\circ}-l^{*}_{\circ})\big(T(v_{i})\big)v^{*}_{j}\otimes v^{*}_{i}\otimes T(v_{j}) -r^{*}_{\circ}\big(T(v_{j})\big)v^{*}_{i}\otimes T(v_{i})\otimes v^{*}_{j}\\ &=&\sum_{i,j=1}^{n}T(v_{i})\circ T(v_{j})\otimes v^{*}_{i}\otimes v^{*}_{j} +v^{*}_{j}\otimes v^{*}_{i}\otimes T\Big((r_{\circ}-l_{\circ})\big(T(v_{i})\big)v_{j}\Big) +v^{*}_{i}\otimes T\Big(r_{\circ}\big(T(v_{j})\big)v_{i}\Big)\otimes v^{*}_{j}. \end{aligned}$$ Similarly, we have $$\begin{aligned} r_{12}\circ r_{23}&=&\sum_{i,j=1}^{n}-T\Big(r_{\circ}\big(T(v_{j})\big)v_{i}\Big)\otimes v^{*}_{i}\otimes v^{*}_{j}-v^{*}_{i}\otimes T(v_{i})\circ T(v_{j})\otimes v^{*}_{j}\\ &\mbox{}&\hspace{4cm} +v^{*}_{i}\otimes v^{*}_{j}\otimes T\Big((r_{\circ}-l_{\circ})\big(T(v_{i})\big)v_{j}\Big),\\ -[r_{13},r_{23}]&=&\sum_{i,j=1}^{n}v^{*}_{i}\otimes T\Big(l_{\circ}\big(T(v_{i})\big)v_{j}\Big)\otimes v^{*}_{j}-T\Big(l_{\circ}\big(T(v_{j})\big)v_{i}\Big)\otimes v^{*}_{j}\otimes v^{*}_{i}+v^{*}_{i}\otimes v^{*}_{j}\otimes[T(v_{i}),T(v_{j})]. \end{aligned}$$ Hence the conclusion follows. ◻ **Definition 33**. 
A **pre-anti-pre-Lie algebra** or a **pre-APL algebra** in short, is a triple $(A,\succ$, $\prec)$, where $A$ is a vector space and $\succ,\prec:A\otimes A\rightarrow A$ are bilinear operations, such that the following equations hold: $$\begin{aligned} (y\succ x+y\prec x)\succ z-(x\succ y+x\prec y)\succ z&=&x\succ(y\succ z)-y\succ(x\succ z),\ \ \ \ \ \ \label{eq:defi:quasi anti-pre-Lie algebras1}\\ z\prec(x\succ y+ x\prec y)&=&x\succ(z\prec y)+(x\succ z)\prec y-(z\prec x)\prec y,\ \ \ \ \ \ \label{eq:defi:quasi anti-pre-Lie algebras2}\\ (y\succ x+y\prec x)\succ z-(x\succ y+x\prec y)\succ z &=&(y\succ z)\prec x-(x\succ z)\prec y-(z\prec y)\prec x+(z\prec x)\prec y,\ \ \ \ \ \ \label{eq:defi:quasi anti-pre-Lie algebras3}\end{aligned}$$ for all $x,y,z\in A$. **Proposition 34**. *Let $(A,\succ,\prec)$ be a pre-APL algebra. Define a bilinear operation $\circ:A\otimes A\rightarrow A$ by $$\label{eq:defi:quasi anti-pre-Lie algebras and anti-pre-Lie algebras} x\circ y=x\succ y+ x\prec y,\;\;\forall x,y\in A.$$ Then $(A,\circ)$ is an anti-pre-Lie algebra, called the **sub-adjacent anti-pre-Lie algebra** of $(A,\succ,\prec)$, and $(A,\succ,\prec)$ is called a **compatible pre-APL algebra** of $(A,\circ)$. Moreover, $(\mathcal{L}_{\succ},\mathcal{R}_{\prec},A)$ is a representation of $(A,\circ)$, where $\mathcal{L}_{\succ},\mathcal{R}_{\prec}:A\rightarrow\mathrm{End}(A)$ are linear maps defined by $$\mathcal{L}_{\succ}(x)y=x\succ y,\; \mathcal{R}_{\prec}(x)y=y\prec x,\;\forall x,y\in A.$$* *Proof.* Let $x,y,z\in A$. Then by Eqs. 
([\[eq:defi:quasi anti-pre-Lie algebras1\]](#eq:defi:quasi anti-pre-Lie algebras1){reference-type="ref" reference="eq:defi:quasi anti-pre-Lie algebras1"}) and ([\[eq:defi:quasi anti-pre-Lie algebras2\]](#eq:defi:quasi anti-pre-Lie algebras2){reference-type="ref" reference="eq:defi:quasi anti-pre-Lie algebras2"}), we have $$\begin{aligned} &&x\circ(y\circ z)-y\circ(x\circ z)+(x\circ y)\circ z-(y\circ x)\circ z\\ &&=x\prec(y\succ z) +x\prec( y\prec z) - y\succ(x\prec z)+ ( x\prec y)\prec z- (y\succ x)\prec z\\ &&\ \ -y\prec (x\succ z) -y\prec( x\prec z) + x\succ( y\prec z)- ( y\prec x)\prec z+ (x\succ y)\prec z\\ &&\ \ +x\succ(y\succ z)-y\succ(x\succ z)+(x\succ y)\succ z+( x\prec y)\succ z-(y\succ x)\succ z-( y\prec x)\succ z\\ &&=0. \end{aligned}$$ Hence Eq. ([\[eq:defi:anti-pre-Lie algebras1\]](#eq:defi:anti-pre-Lie algebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras1"}) holds. Similarly, Eq. ([\[eq:defi:anti-pre-Lie algebras2\]](#eq:defi:anti-pre-Lie algebras2){reference-type="ref" reference="eq:defi:anti-pre-Lie algebras2"}) holds by Eq. ([\[eq:defi:quasi anti-pre-Lie algebras3\]](#eq:defi:quasi anti-pre-Lie algebras3){reference-type="ref" reference="eq:defi:quasi anti-pre-Lie algebras3"}), and thus $(A,\circ)$ is an anti-pre-Lie algebra. Moreover, by Eqs. ([\[eq:defi:rep anti-pre-Lie algebra1\]](#eq:defi:rep anti-pre-Lie algebra1){reference-type="ref" reference="eq:defi:rep anti-pre-Lie algebra1"})-([\[eq:defi:rep anti-pre-Lie algebra3\]](#eq:defi:rep anti-pre-Lie algebra3){reference-type="ref" reference="eq:defi:rep anti-pre-Lie algebra3"}), $(\mathcal{L}_{\succ},\mathcal{R}_{\prec},A)$ is a representation of $(A,\circ)$. ◻ Recall the definition of Zinbiel algebras. **Definition 35**.
([@Lod]) A **Zinbiel algebra** is a pair $(A,\star)$, where $A$ is a vector space and $\star:A\otimes A\rightarrow A$ is a bilinear operation such that the following equation holds: $$\label{eq:Zinbiel} x\star(y\star z)=(y\star x)\star z+(x\star y)\star z, \;\;\forall x,y,z\in A.$$ Let $(A,\star)$ be a Zinbiel algebra. Define a new bilinear operation $\cdot$ on $A$ by $$\label{eq:ZintoAss} x\cdot y=x\star y+y\star x,\;\;\forall x,y\in A.$$ Then $(A,\cdot)$ is a commutative associative algebra. Moreover, $(\mathcal{L}_{\star},A)$ is a representation of the commutative associative algebra $(A,\cdot)$ (see Eq. [\[eq:54\]](#eq:54){reference-type="eqref" reference="eq:54"}), where $\mathcal{L}_{\star}:A\rightarrow\mathrm{End}(A)$ is given by $\mathcal{L}_{\star}(x)y=x\star y$, for all $x,y\in A$. Similar to the fact that a commutative associative algebra with a derivation gives an anti-pre-Lie algebra in Example [Example 28](#ex:AYBE){reference-type="ref" reference="ex:AYBE"}, there is the following construction of pre-APL algebras from Zinbiel algebras with derivations, which can be regarded as the "splitting viewpoint\" of the former fact. **Example 36**. Let $(A,\star)$ be a Zinbiel algebra with a derivation $P$. Define two bilinear operations $\succ,\prec:A\otimes A\rightarrow A$ respectively by $$\label{eq:ex:Zinbiel derivation} x\succ y=P(x\star y)+P(x)\star y,\; x\prec y=P(y\star x)+y\star P(x),\;\;\forall x,y\in A.$$ Then $(A,\succ,\prec)$ is a pre-APL algebra. Moreover, the sub-adjacent anti-pre-Lie algebra $(A,\circ)$ of $(A,\succ,\prec)$ satisfies Eq. ([\[eq:diff anti-pre-Lie\]](#eq:diff anti-pre-Lie){reference-type="ref" reference="eq:diff anti-pre-Lie"}). **Proposition 37**. *Let $(A,\circ)$ be an anti-pre-Lie algebra and $(l_{\circ},r_{\circ},V)$ be a representation of $(A,\circ)$. Let $T:V\rightarrow A$ be an $\mathcal{O}$-operator of $(A,\circ)$ associated to $(l_{\circ},r_{\circ},V)$. 
Then there exists a pre-APL algebra structure $(V,\succ,\prec)$ on $V$, where $$\label{eq:thm:O-operator and quasi anti-pre-Lie algebras1} u\succ v=l_{\circ}\big(T(u)\big)v,\;u\prec v=r_{\circ}\big(T(v)\big)u,\;\;\forall u,v\in V.$$ Conversely, let $(A,\succ,\prec)$ be a pre-APL algebra and $(A,\circ)$ be the sub-adjacent anti-pre-Lie algebra. Then the identity map $\mathrm{id}:A\rightarrow A$ is an $\mathcal{O}$-operator of $(A,\circ)$ associated to $(\mathcal{L}_{\succ},\mathcal{R}_{\prec},A)$.* *Proof.* It is straightforward. ◻ **Proposition 38**. *Let $(A,\succ,\prec)$ be a pre-APL algebra and $(A,\circ)$ be the sub-adjacent anti-pre-Lie algebra of $(A,\succ,\prec)$. Let $\lbrace e_{1},\cdots ,e_{n}\rbrace$ be a basis of $A$ and $\lbrace e^{*}_{1},\cdots ,e^{*}_{n}\rbrace$ be the dual basis. Then $$\label{eq:pro:O-operator and T-equation} r=\sum_{i=1}^{n}(e_{i}\otimes e^{*}_{i}-e^{*}_{i}\otimes e_{i})$$ is a skew-symmetric solution of the APL-YBE in the anti-pre-Lie algebra $A\ltimes_{\mathcal{R}^{*}_{\prec}-\mathcal{L}^{*}_{\succ},\mathcal{R}^{*}_{\prec}}A^{*}$.* *Proof.* By Proposition [Proposition 37](#thm:O-operator and quasi anti-pre-Lie algebras){reference-type="ref" reference="thm:O-operator and quasi anti-pre-Lie algebras"}, the identity map $\mathrm{id}$ is an $\mathcal{O}$-operator of $(A,\circ)$ associated to $(\mathcal{L}_{\succ},\mathcal{R}_{\prec}$, $A)$. Note that $\mathrm{id}=\sum\limits_{i=1}^{n}e_{i}\otimes e^{*}_{i}$. Thus by Theorem [Theorem 32](#thm:O-operator and T-equation:semi-direct product version){reference-type="ref" reference="thm:O-operator and T-equation:semi-direct product version"}, the conclusion follows. ◻ # Anti-pre-Lie Poisson bialgebras {#S4} We give the notion of representations of transposed Poisson algebras.
Anti-pre-Lie Poisson algebras are thus characterized in terms of representations of their sub-adjacent transposed Poisson algebras on the dual spaces. We also introduce the notion of representations of anti-pre-Lie Poisson algebras as well as the notions of matched pairs of both transposed Poisson algebras and anti-pre-Lie Poisson algebras, and study the relationships between them. We then introduce the notion of Manin triples of transposed Poisson algebras with respect to invariant bilinear forms on the commutative associative algebras and commutative 2-cocycles on the Lie algebras; such a triple combines a double construction of commutative Frobenius algebras with a Manin triple of Lie algebras with respect to a commutative 2-cocycle, and is characterized by certain matched pairs of anti-pre-Lie Poisson algebras and transposed Poisson algebras. Consequently, we introduce the notion of anti-pre-Lie Poisson bialgebras as their equivalent structures. Finally, we study coboundary anti-pre-Lie Poisson bialgebras, which lead to the introduction of the anti-pre-Lie Poisson Yang-Baxter equation (APLP-YBE). In particular, a skew-symmetric solution of the APLP-YBE in an anti-pre-Lie Poisson algebra gives a coboundary anti-pre-Lie Poisson bialgebra. We also introduce the notions of $\mathcal{O}$-operators of anti-pre-Lie Poisson algebras and pre-(anti-pre-Lie Poisson) algebras to construct skew-symmetric solutions of the APLP-YBE in anti-pre-Lie Poisson algebras.
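Before turning to representations, the differential constructions underlying these structures can be checked symbolically. The following sketch (assuming the `sympy` library; the sample polynomials are arbitrary illustrative choices) verifies, on the polynomial algebra $\mathbb{C}[t]$ with the derivation $P=d/dt$, the two anti-pre-Lie identities for the product $x\circ y=P(x\cdot y)+P(x)\cdot y$ of Example 28, and the transposed Poisson identity $2z\cdot[x,y]=[z\cdot x,y]+[x,z\cdot y]$ for the sub-adjacent bracket $[x,y]=x\circ y-y\circ x$:

```python
import sympy as sp

t = sp.symbols('t')
D = lambda f: sp.diff(f, t)  # the derivation P = d/dt on the polynomial algebra C[t]

def circ(x, y):
    # x o y = P(x.y) + P(x).y, the anti-pre-Lie product of Example 28
    return sp.expand(D(x * y) + D(x) * y)

def br(x, y):
    # sub-adjacent bracket [x, y] = x o y - y o x (here it equals x'y - xy')
    return sp.expand(circ(x, y) - circ(y, x))

# arbitrary sample polynomials (illustrative choices)
x, y, z = t**2 + 1, 3*t, t**3 - t

# first anti-pre-Lie identity: x o (y o z) - y o (x o z) = [y, x] o z
assert sp.expand(circ(x, circ(y, z)) - circ(y, circ(x, z)) - circ(br(y, x), z)) == 0

# second anti-pre-Lie identity: [x, y] o z + [y, z] o x + [z, x] o y = 0
assert sp.expand(circ(br(x, y), z) + circ(br(y, z), x) + circ(br(z, x), y)) == 0

# transposed Poisson identity for (C[t], ., [-,-]): 2 z.[x, y] = [z.x, y] + [x, z.y]
assert sp.expand(2*z*br(x, y) - br(z*x, y) - br(x, z*y)) == 0

print("all identities verified")
```

Since `circ` is bilinear, the assertions hold identically for any choice of polynomials; this is only a sanity check of the stated constructions, not a substitute for the proofs.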
## Representations of transposed Poisson algebras and anti-pre-Lie Poisson algebras   Recall that a **representation** of a commutative associative algebra $(A,\cdot)$ is a pair $(\mu,V)$, where $V$ is a vector space and $\mu:A\rightarrow\mathrm{End}(V)$ is a linear map satisfying $$\label{eq:54} \mu(x\cdot y)=\mu(x)\mu(y),\;\;\forall x,y\in A.$$ For a commutative associative algebra $(A,\cdot)$, let $\mathcal{L}_{\cdot}:A\rightarrow\mathrm{End}(A)$ be a linear map given by $\mathcal{L}_{\cdot}(x)y=x\cdot y$, for all $x,y\in A$. Then $(\mathcal{L}_{\cdot},A)$ is a representation of $(A,\cdot)$, called the **adjoint representation** of $(A,\cdot)$. In fact, $(\mu,V)$ is a representation of a commutative associative algebra $(A,\cdot)$ if and only if the direct sum $A\oplus V$ of vector spaces is a (**semi-direct product**) commutative associative algebra by defining the multiplication $\cdot_{A\oplus V}$ (often still denoted by $\cdot$) on $A\oplus V$ by $$\label{eq:SDASSO} (x+u)\cdot_{A\oplus V}(y+v)=x\cdot y+\mu(x)v+\mu(y)u,\;\;\forall x,y\in A, u,v\in V.$$ We denote it by $A\ltimes_{\mu}V$. If $(\mu,V)$ is a representation of a commutative associative algebra $(A,\cdot)$, then $(-\mu^{*},V^{*})$ is also a representation of $(A,\cdot)$. In particular, $(-\mathcal{L}^{*}_{\cdot},A^{*})$ is a representation of $(A,\cdot)$. Now we introduce the notion of representations of transposed Poisson algebras. **Definition 39**. A **representation** of a transposed Poisson algebra $(A,\cdot,[-,-])$ is a triple $(\mu,\rho,V)$, such that the pair $(\mu,V)$ is a representation of the commutative associative algebra $(A,\cdot)$, the pair $(\rho,V)$ is a representation of the Lie algebra $(A,[-,-])$, and the following conditions hold: $$\begin{aligned} &&2\mu(x)\rho(y)=\rho(x\cdot y)+\rho(y)\mu(x),\label{eq:defi:TPA REP2}\\ &&2\mu([x,y])=\rho(x)\mu(y)-\rho(y)\mu(x),\label{eq:defi:TPA REP1}\end{aligned}$$ for all $x,y\in A$. **Example 40**.
Let $(A,\cdot,[-,-])$ be a transposed Poisson algebra. Then $(\mathcal{L}_{\cdot},\mathrm{ad},A)$ is a representation of $(A,\cdot,[-,-])$, called the **adjoint representation** of $(A,\cdot,[-,-])$. **Proposition 41**. *Let $(A,\cdot,[-,-])$ be a transposed Poisson algebra and $V$ be a vector space. Let $\mu,\rho:A\rightarrow \mathrm{End}(V)$ be linear maps. Then $(\mu,\rho,V)$ is a representation of $(A,\cdot,[-,-])$ if and only if the direct sum $A\oplus V$ of vector spaces is a (**semi-direct product**) transposed Poisson algebra by defining the bilinear operations on $A\oplus V$ by Eqs. ([\[eq:SDASSO\]](#eq:SDASSO){reference-type="ref" reference="eq:SDASSO"}) and ([\[eq:SDLie\]](#eq:SDLie){reference-type="ref" reference="eq:SDLie"}) respectively. We denote it by $A\ltimes_{\mu,\rho}V$.* *Proof.* It is a special case of the matched pair of transposed Poisson algebras in Theorem [Theorem 56](#thm:mp TPA){reference-type="ref" reference="thm:mp TPA"} when $B=V$ is equipped with the zero multiplications. ◻ **Proposition 42**. *Let $(A,\cdot,[-,-])$ be a transposed Poisson algebra and $(\mu,\rho,V)$ be a representation of $(A,\cdot,[-,-])$. Then $(-\mu^{*},\rho^{*},V^{*})$ is a representation of $(A,\cdot,[-,-])$ if and only if $$\label{eq:d0} \mu([x,y])=0,\ \rho(x\cdot y)=\mu(x)\rho(y),\;\;\forall x,y\in A.$$ In particular, $(-\mathcal{L}^{*}_{\cdot},\mathrm{ad}^{*},A^{*})$ is a representation of $(A,\cdot,[-,-])$ if and only if Eq. ([\[eq:coherent TPA\]](#eq:coherent TPA){reference-type="ref" reference="eq:coherent TPA"}) holds, that is, the following equation holds: $$\label{eq:coherent TPA1} [x,y]\cdot z=[x\cdot y,z]=0,\;\;\forall x,y,z\in A.$$* *Proof.* Let $x,y\in A, u^{*}\in V^{*}, v\in V$. 
Then we have $$\label{eq:d1} \langle \big(\rho^{*}(x\cdot y)-\rho^{*}(y)\mu^{*}(x)+2\mu^{*}(x)\rho^{*}(y)\big)u^{*}, v\rangle=\langle u^{*}, \big(-\rho(x\cdot y)-\mu(x)\rho(y)+2\rho(y)\mu(x)\big)v\rangle,$$ $$\label{eq:d2} \langle \big(-\rho^{*}(x)\mu^{*}(y)+\rho^{*}(y)\mu^{*}(x)+2\mu^{*}([x,y])\big)u^{*},v\rangle=\langle u^{*},\big(-\mu(y)\rho(x)+\mu(x)\rho(y)-2\mu([x,y])\big)v\rangle.$$ Thus $(-\mu^{*},\rho^{*},V^{*})$ is a representation if and only if the following equations hold: $$\label{eq:d3} \rho(x\cdot y)+\mu(x)\rho(y)-2\rho(y)\mu(x)=0,$$ $$\label{eq:d4} \mu(y)\rho(x)-\mu(x)\rho(y)+2\mu([x,y])=0.$$ By Eqs. ([\[eq:defi:TPA REP2\]](#eq:defi:TPA REP2){reference-type="ref" reference="eq:defi:TPA REP2"}) and ([\[eq:defi:TPA REP1\]](#eq:defi:TPA REP1){reference-type="ref" reference="eq:defi:TPA REP1"}), Eqs. ([\[eq:d3\]](#eq:d3){reference-type="ref" reference="eq:d3"}) and ([\[eq:d4\]](#eq:d4){reference-type="ref" reference="eq:d4"}) hold if and only if Eq. ([\[eq:d0\]](#eq:d0){reference-type="ref" reference="eq:d0"}) holds. The particular case follows immediately. ◻ **Remark 43**. By [@Bai2020], a transposed Poisson algebra $(A,\cdot,[-,-])$ is also a Poisson algebra if and only if Eq. ([\[eq:coherent TPA\]](#eq:coherent TPA){reference-type="ref" reference="eq:coherent TPA"}) holds. On the other hand, by [@NB], if $(\mu,\rho,V)$ is a representation of a Poisson algebra $(A,\cdot,[-,-])$, then $(-\mu^{*},\rho^{*},V^{*})$ is automatically a representation of $(A,\cdot,[-,-])$. Hence there is an obvious difference between Poisson algebras and transposed Poisson algebras: for the latter, a representation does not in general admit a "natural\" dual representation. Next we interpret the relationship between transposed Poisson algebras and anti-pre-Lie Poisson algebras in terms of representations of transposed Poisson algebras. **Proposition 44**.
*([@LB2022]) Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $(A,[-,-])$ be the sub-adjacent Lie algebra of $(A,\circ)$. Then $(A,\cdot,[-,-])$ is a transposed Poisson algebra.* Hence we give the following notion. **Definition 45**. Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $(A,[-,-])$ be the sub-adjacent Lie algebra of $(A,\circ)$. We call $(A,\cdot,[-,-])$ the **sub-adjacent transposed Poisson algebra** of $(A,\cdot,\circ)$, and $(A,\cdot,\circ)$ a **compatible anti-pre-Lie Poisson algebra** of $(A,\cdot,[-,-])$. On the other hand, anti-pre-Lie Poisson algebras are characterized in terms of representations of the sub-adjacent transposed Poisson algebras on the dual spaces of themselves. **Proposition 46**. *Let $(A,\cdot,[-,-])$ be a transposed Poisson algebra. Suppose that $(A,[-,-])$ is the sub-adjacent Lie algebra of an anti-pre-Lie algebra $(A,\circ)$. Then $(-\mathcal{L}^{*}_{\cdot},-\mathcal{L}^{*}_{\circ},A^{*})$ is a representation of $(A,\cdot,[-,-])$ if and only if $(A,\cdot,\circ)$ is an anti-pre-Lie Poisson algebra.* *Proof.* It is obvious that $(-\mathcal{L}^{*}_{\cdot}, A^{*})$ is a representation of $(A,\cdot)$, and $(-\mathcal{L}^{*}_{\circ}, A^{*})$ is a representation of $(A,\circ)$. Moreover, for all $x,y,z\in A, a^{*}\in A^{*}$, we have $$\langle \big(2\mathcal{L}^{*}_{\cdot}(x)\mathcal{L}^{*}_{\circ}(y)+\mathcal{L}^{*}_{\circ}(x\cdot y)-\mathcal{L}^{*}_{\circ}(y)\mathcal{L}^{*}_{\cdot}(x)\big)a^{*},z\rangle=\langle a^{*}, 2y\circ(x\cdot z)-(x\cdot y)\circ z-x\cdot(y\circ z)\rangle,$$ $$\langle \big(2\mathcal{L}^{*}_{\cdot}([x,y])-\mathcal{L}^{*}_{\circ}(x)\mathcal{L}^{*}_{\cdot}(y)+\mathcal{L}^{*}_{\circ}(y)\mathcal{L}^{*}_{\cdot}(x)\big)a^{*},z\rangle=\langle a^{*}, -2[x,y]\cdot z-y\cdot (x\circ z)+x\cdot(y\circ z)\rangle.$$ Hence the conclusion follows. ◻ **Corollary 47**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $(A,\cdot,[-,-])$ be the sub-adjacent transposed Poisson algebra. 
Then $(\mathcal{L}_{\cdot},-\mathcal{L}_{\circ},A)$ is a representation of $(A,\cdot,[-,-])$ if and only if the following equation holds: $$\label{eq:cor} [x,y]\cdot z=0,\ (x\cdot y)\circ z=y\circ(x\cdot z),\;\;\forall x,y,z\in A.$$* *Proof.* By Proposition [Proposition 46](#pro:anti-pre-Lie Poisson){reference-type="ref" reference="pro:anti-pre-Lie Poisson"}, $(-\mathcal{L}^{*}_{\cdot},-\mathcal{L}^{*}_{\circ},A^{*})$ is a representation of $(A,\cdot,[-,-])$. Then by Proposition [Proposition 42](#pro:dual rep TPA){reference-type="ref" reference="pro:dual rep TPA"}, $\big(-(-\mathcal{L}^{*}_{\cdot})^{*},(-\mathcal{L}^{*}_{\circ})^{*},(A^{*})^{*}\big)=(\mathcal{L}_{\cdot},-\mathcal{L}_{\circ}, A)$ is a representation of $(A,\cdot,[-,-])$ if and only if $$-\mathcal{L}^{*}_{\cdot}([x,y])=0, -\mathcal{L}^{*}_{\circ}(x\cdot y)=\mathcal{L}^{*}_{\cdot}(x)\mathcal{L}^{*}_{\circ}(y),\;\;\forall x,y\in A,$$ or equivalently, Eq. ([\[eq:cor\]](#eq:cor){reference-type="ref" reference="eq:cor"}) holds. ◻ **Remark 48**. Unlike anti-pre-Lie algebras ([@LB2022]) and pre-Lie algebras ([@Bur]), which are characterized in terms of representations of their sub-adjacent Lie algebras both on the underlying spaces themselves and, equivalently, on the dual spaces, anti-pre-Lie Poisson algebras are characterized only through representations of their sub-adjacent transposed Poisson algebras on the dual spaces. **Definition 49**. Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra.
A **representation** of $(A,\cdot,\circ)$ is a quadruple $(\mu,l_{\circ}, r_{\circ},V)$, such that the pair $(\mu,V)$ is a representation of $(A,\cdot)$, the triple $(l_{\circ}, r_{\circ},V)$ is a representation of $(A,\circ)$, and the following conditions hold: $$\begin{aligned} 2\mu(y)l_{\circ}(x)- 2\mu(y)r_{\circ}(x)&=&\mu(x\circ y)-\mu(x)r_{\circ}(y), \label{eq:defi:anti-pre-Lie Poisson rep1}\\ 2\mu(x\circ y)- 2\mu(y\circ x)&=&\mu(y)l_{\circ}(x)-\mu(x)l_{\circ}(y), \label{eq:defi:anti-pre-Lie Poisson rep2}\\ 2r_{\circ}(x\cdot y)&=&r_{\circ}(y)\mu(x)+\mu(x)r_{\circ}(y), \label{eq:defi:anti-pre-Lie Poisson rep3}\\ 2l_{\circ}(x)\mu(y)&=&l_{\circ}(x\cdot y)+\mu(y)l_{\circ}(x), \label{eq:defi:anti-pre-Lie Poisson rep4}\\ 2l_{\circ}(x)\mu(y)&=&r_{\circ}(y)\mu(x)+\mu(x\circ y), \label{eq:defi:anti-pre-Lie Poisson rep5}\end{aligned}$$ for all $x,y\in A$. **Example 50**. Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra. Then $(\mathcal{L}_{\cdot},\mathcal{L}_{\circ},\mathcal{R}_{\circ},A)$ is a representation of $(A,\cdot,\circ)$, called the **adjoint representation** of $(A,\cdot,\circ)$. **Proposition 51**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $(A,\cdot,[-,-])$ be the sub-adjacent transposed Poisson algebra. Let $V$ be a vector space and $\mu,l_{\circ},r_{\circ}:A\rightarrow \mathrm{End}(V)$ be linear maps.* 1. *[\[it:hh1\]]{#it:hh1 label="it:hh1"} $(\mu,l_{\circ},r_{\circ},V)$ is a representation of $(A,\cdot,\circ)$ if and only if the direct sum $A\oplus V$ of vector spaces is a (**semi-direct product**) anti-pre-Lie Poisson algebra by defining the bilinear operations on $A\oplus V$ by Eqs. ([\[eq:SDASSO\]](#eq:SDASSO){reference-type="ref" reference="eq:SDASSO"}) and ([\[eq:pro:repandsemidirectproduct1\]](#eq:pro:repandsemidirectproduct1){reference-type="ref" reference="eq:pro:repandsemidirectproduct1"}) respectively. We denote it by $A\ltimes_{\mu,l_{\circ},r_{\circ}}V$.* 2.
*[\[rep32\]]{#rep32 label="rep32"} If $(\mu,l_{\circ},r_{\circ},V)$ is a representation of $(A,\cdot,\circ)$, then $(\mu,l_{\circ}-r_{\circ},V)$ is a representation of $(A,\cdot,[-,-])$.* *Proof.* ([\[it:hh1\]](#it:hh1){reference-type="ref" reference="it:hh1"}). It is a special case of the matched pair of anti-pre-Lie Poisson algebras in Theorem [Theorem 58](#thm:mp AP){reference-type="ref" reference="thm:mp AP"} when $B=V$ is equipped with the zero multiplications. ([\[rep32\]](#rep32){reference-type="ref" reference="rep32"}) It is a special case of Proposition  [Proposition 59](#pro:TPA-APL){reference-type="ref" reference="pro:TPA-APL"} when $B=V$ is equipped with the zero multiplications. ◻ **Proposition 52**. *Let $(\mu,l_{\circ}, r_{\circ},V)$ be a representation of an anti-pre-Lie Poisson algebra $(A,\cdot,\circ)$. Then $(-\mu^{*}, r^{*}_{\circ}-l^{*}_{\circ},r^{*}_{\circ},V^{*})$ is a representation of $(A,\cdot,\circ)$. In particular, $(-\mathcal{L}^{*}_{\cdot},-\mathrm{ad}^{*},\mathcal{R}^{*}_{\circ},A^{*})$ is a representation of $(A,\cdot,\circ)$.* *Proof.* For all $x,y\in A, u^{*}\in V^{*}, v\in V$, we have $$\begin{aligned} &&\langle \Big(-2\mu^{*}(y)\big(r^{*}_{\circ}(x)-l^{*}_{\circ}(x)\big)+2\mu^{*}(y)r^{*}_{\circ}(x)+\mu^{*}(x\circ y)-\mu^{*}(x)r^{*}_{\circ}(y)\Big)u^{*},v\rangle\\ &&=\langle u^{*}, \big(2l_{\circ}(x)\mu(y)-\mu(x\circ y)-r_{\circ}(y)\mu(x)\big)v\rangle \overset{(\ref{eq:defi:anti-pre-Lie Poisson rep5})}{=}0. \end{aligned}$$ Thus Eq. ([\[eq:defi:anti-pre-Lie Poisson rep1\]](#eq:defi:anti-pre-Lie Poisson rep1){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson rep1"}) holds for the quadruple $(-\mu^{*}, r^{*}_{\circ}-l^{*}_{\circ},r^{*}_{\circ},V^{*})$. Similarly Eqs. 
([\[eq:defi:anti-pre-Lie Poisson rep2\]](#eq:defi:anti-pre-Lie Poisson rep2){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson rep2"})-([\[eq:defi:anti-pre-Lie Poisson rep5\]](#eq:defi:anti-pre-Lie Poisson rep5){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson rep5"}) hold for $(-\mu^{*}, r^{*}_{\circ}-l^{*}_{\circ},r^{*}_{\circ},V^{*})$. Thus $(-\mu^{*}, r^{*}_{\circ}-l^{*}_{\circ},r^{*}_{\circ},V^{*})$ is a representation of $(A,\cdot,\circ)$. ◻ **Corollary 53**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $(A,\cdot,[-,-])$ be the sub-adjacent transposed Poisson algebra. If $(\mu,l_{\circ},r_{ \circ},V)$ is a representation of $(A,\cdot,\circ)$, then $(-\mu^{*}$, $-l^{*}_{\circ}$, $V^{*})$ is a representation of $(A,\cdot,[-,-])$.* *Proof.* It follows from Propositions [Proposition 51](#pro:repandsemidirectproduct3){reference-type="ref" reference="pro:repandsemidirectproduct3"} ([\[rep32\]](#rep32){reference-type="ref" reference="rep32"}) and [Proposition 52](#pro:rep of anti-pre-Lie Poisson){reference-type="ref" reference="pro:rep of anti-pre-Lie Poisson"}. ◻ **Remark 54**. As a direct consequence of Corollary [Corollary 53](#cor:dual rep TPA){reference-type="ref" reference="cor:dual rep TPA"}, for an anti-pre-Lie Poisson algebra $(A,\cdot,\circ)$, the triple $(-\mathcal{L}^{*}_{\cdot},-\mathcal{L}^{*}_{\circ},A^{*})$ is a representation of the sub-adjacent transposed Poisson algebra $(A$, $\cdot$, $[-,-])$, which recovers the "if\" part of the result in Proposition [Proposition 46](#pro:anti-pre-Lie Poisson){reference-type="ref" reference="pro:anti-pre-Lie Poisson"}. ## Matched pairs of transposed Poisson algebras and anti-pre-Lie Poisson algebras   Recall matched pairs of commutative associative algebras ([@Bai2010]). Let $(A,\cdot_{A})$ and $(B,\cdot_{B})$ be two commutative associative algebras, and $(\mu_{A},B)$ and $(\mu_{B},A)$ be representations of $(A,\cdot_{A})$ and $(B,\cdot_{B})$ respectively. 
If the following equations are satisfied: $$\begin{aligned} &&\mu_{A}(x)(a\cdot_{B} b)=\big(\mu_{A}(x)a\big)\cdot_{B} b+\mu_{A}\big(\mu_{B}(a)x\big)b,\\ &&\mu_{B}(a)(x\cdot_{A} y)=\big(\mu_{B}(a)x\big)\cdot_{A} y+\mu_{B}\big(\mu_{A}(x)a\big)y,\end{aligned}$$ for all $x,y\in A,a,b\in B$, then $(A,B,\mu_{A},\mu_{B})$ is called a **matched pair of commutative associative algebras**. In fact, for commutative associative algebras $(A,\cdot_{A})$ , $(B,\cdot_{B})$ and linear maps $\mu_{A}:A\rightarrow\mathrm{End}(B)$, $\mu_{B}:B\rightarrow\mathrm{End}(A)$, there is a commutative associative algebra structure on the vector space $A\oplus B$ given by $$\label{eq:Asso} (x+a)\cdot (y+b)=x\cdot_{A} y+\mu_{B}(a)y+\mu_{B}(b)x+a\cdot_{B} b+\mu_{A}(x)b+\mu_{A}(y)a,\;\;\forall x,y\in A, a,b\in B,$$ if and only if $(A,B,\mu_{A},\mu_{B})$ is a matched pair of commutative associative algebras. In this case, we denote the commutative associative algebra structure on $A\oplus B$ by $A\bowtie^{\mu_{A}}_{\mu_{B}}B$. Conversely, every commutative associative algebra which is the direct sum of the underlying vector spaces of two subalgebras can be obtained from a matched pair of commutative associative algebras by this construction. **Definition 55**. Let $(A,\cdot_{A},[-,-]_{A})$ and $(B,\cdot_{B},[-,-]_{B})$ be two transposed Poisson algebras. Let $\mu_{A},\rho_{A}:A\rightarrow\mathrm{End}(B)$ and $\mu_{B},\rho_{B}:B\rightarrow\mathrm{End}(A)$ be linear maps. Suppose that the following conditions are satisfied: 1. $(A,B,\mu_{A},\mu_{B})$ is a matched pair of commutative associative algebras and $(A,B,\rho_{A},\rho_{B})$ is a matched pair of Lie algebras. 2. $(\mu_{A},\rho_{A},B)$ is a representation of $(A,\cdot_{A},[-,-]_{A})$ and $(\mu_{B},\rho_{B},A)$ is a representation of $(B$, $\cdot_{B}$, $[-,-]_{B})$. 3. 
For all $x,y\in A, a,b\in B$, the following equations hold: $$\begin{aligned} &&2\mu_{B}\big(\rho_{A}(y)a\big)x-2x\cdot_{A}\rho_{B}(a)y=-\rho_{B}(a)(x\cdot_{A} y)-\rho_{B}\big(\mu_{A}(x)a\big)y+[y,\mu_{B}(a)x]_{A},\label{eq:defi:MP TPA a}\\ && 2\mu_{B}(a)([x,y]_{A})=[\mu_{B}(a)x,y]_{A}+\rho_{B}\big(\mu_{A}(x)a\big)y+[x,\mu_{B}(a)y]_{A}-\rho_{B}\big(\mu_{A}(y)a\big)x,\label{eq:defi:MP TPA b}\\ &&2\mu_{A}\big(\rho_{B}(b)x\big)a-2a\cdot_{B}\rho_{A}(x)b=-\rho_{A}(x)(a\cdot_{B} b)-\rho_{A}\big(\mu_{B}(a)x\big)b+[b,\mu_{A}(x)a]_{B},\label{eq:defi:MP TPA c}\\ &&2\mu_{A}(x)([a,b]_{B})=[\mu_{A}(x)a,b]_{B}+\rho_{A}\big(\mu_{B}(a)x\big)b+[a,\mu_{A}(x)b]_{B}-\rho_{A}\big(\mu_{B}(b)x\big)a.\label{eq:defi:MP TPA d} \end{aligned}$$ Such a structure is called a **matched pair of transposed Poisson algebras** $(A,\cdot_{A},[-,-]_{A})$ and $(B,\cdot_{B},[-,-]_{B})$. We denote it by $(A,B,\mu_{A},\rho_{A},\mu_{B},\rho_{B})$. **Theorem 56**. *Let $(A,\cdot_{A},[-,-]_{A})$ and $(B,\cdot_{B},[-,-]_{B})$ be two transposed Poisson algebras. Suppose that $\mu_{A},\rho_{A}:A\rightarrow\mathrm{End}(B)$ and $\mu_{B},\rho_{B}:B\rightarrow\mathrm{End}(A)$ are linear maps. Define two bilinear operations $\cdot,[-,-]$ on $A\oplus B$ by Eqs. ([\[eq:Asso\]](#eq:Asso){reference-type="ref" reference="eq:Asso"}) and ([\[eq:Lie\]](#eq:Lie){reference-type="ref" reference="eq:Lie"}) respectively. Then $(A\oplus B,\cdot,[-,-])$ is a transposed Poisson algebra if and only if $(A,B,\mu_{A},\rho_{A},\mu_{B},\rho_{B})$ is a matched pair of transposed Poisson algebras. In this case, we denote this transposed Poisson algebra structure on $A\oplus B$ by $A\bowtie^{\mu_{A},\rho_{A}}_{\mu_{B},\rho_{B}}B$. Conversely, every transposed Poisson algebra which is the direct sum of the underlying vector spaces of two subalgebras can be obtained from a matched pair of transposed Poisson algebras by this construction.* *Proof.* It is straightforward. ◻ **Definition 57**. 
Let $(A,\cdot_{A},\circ_{A})$ and $(B,\cdot_{B},\circ_{B})$ be two anti-pre-Lie Poisson algebras. Let $\mu_{A},l_{\circ_{A}},$ $r_{\circ_{A}}:A\rightarrow\mathrm{End}(B)$ and $\mu_{B},l_{\circ_{B}},r_{\circ_{B}}:B\rightarrow\mathrm{End}(A)$ be linear maps. Suppose that the following conditions are satisfied: 1. $(A,B,\mu_{A},\mu_{B})$ is a matched pair of commutative associative algebras and $(A,B,l_{\circ_{A}},r_{\circ_{A}},l_{\circ_{B}},$ $r_{\circ_{B}})$ is a matched pair of anti-pre-Lie algebras. 2. $(\mu_{A},l_{\circ_{A}},r_{\circ_{A}},B)$ is a representation of $(A,\cdot_{A},\circ_{A})$ and $(\mu_{B},l_{\circ_{B}},r_{\circ_{B}},A)$ is a representation of $(B,\cdot_{B},\circ_{B})$. 3. For all $x,y\in A, a,b\in B$, the following equations hold: $$\begin{aligned} &&2\mu_{B}(a)(x\circ_{A}y)-2\mu_{B}(a)(y\circ_{A}x)=\mu_{B}\big(l_{\circ_{A}}(x)a\big)y+y\cdot_{A}r_{\circ_{B}}(a)x-\mu_{B}\big(l_{\circ_{A}}(y)a\big)x-x\cdot_{A}r_{\circ_{B}}(a)y,\\ && 2\mu_{B}\big((l_{\circ_{A}}-r_{\circ_{A}})(x)a\big)y-2(l_{\circ_{B}}-r_{\circ_{B}})(a)x\cdot_{A}y =\mu_{B}(a)(x\circ_{A}y)-x\cdot_{A}l_{\circ_{B}}(a)y-\mu_{B}\big(r_{\circ_{A}}(y)a\big)x,\\ &&2r_{\circ_{B}}\big(\mu_{A}(y)a\big)x+2x\circ_{A}\mu_{B}(a)y=\mu_{B}(a)x\circ_{A}y+l_{\circ_{B}}\big(\mu_{A}(x)a\big)y+\mu_{B}(a)(x\circ_{A}y),\\ &&2r_{\circ_{B}}\big(\mu_{A}(y)a\big)x+2x\circ_{A}\mu_{B}(a)y=r_{\circ_{B}}(a)(x\cdot_{A}y)+\mu_{B}\big(l_{\circ_{A}}(x)a\big)y+y\cdot_{A}r_{\circ_{B}}(a)x,\\ &&2l_{\circ_{B}}(a)(x\cdot_{A}y)=l_{\circ_{B}}\big(\mu_{A}(y)a\big)x+\mu_{B}(a)y\circ_{A}x+y\cdot_{A}l_{\circ_{B}}(a)x+\mu_{B}\big(r_{\circ_{A}}(x)a\big)y,\\ &&2\mu_{A}(x)(a\circ_{B}b)-2\mu_{A}(x)(b\circ_{B}a)=\mu_{A}\big(l_{\circ_{B}}(a)x\big)b+b\cdot_{B}r_{\circ_{A}}(x)a-\mu_{A}\big(l_{\circ_{B}}(b)x\big)a-a\cdot_{B}r_{\circ_{A}}(x)b,\\ && 2\mu_{A}\big((l_{\circ_{B}}-r_{\circ_{B}})(a)x\big)b-2(l_{\circ_{A}}-r_{\circ_{A}})(x)a\cdot_{B}b =\mu_{A}(x)(a\circ_{B}b)-a\cdot_{B}l_{\circ_{A}}(x)b-\mu_{A}\big(r_{\circ_{B}}(b)x\big)a,\\ 
&&2r_{\circ_{A}}\big(\mu_{B}(b)x\big)a+2a\circ_{B}\mu_{A}(x)b=\mu_{A}(x)a\circ_{B}b+l_{\circ_{A}}\big(\mu_{B}(a)x\big)b+\mu_{A}(x)(a\circ_{B}b),\\ &&2r_{\circ_{A}}\big(\mu_{B}(b)x\big)a+2a\circ_{B}\mu_{A}(x)b=r_{\circ_{A}}(x)(a\cdot_{B}b)+\mu_{A}\big(l_{\circ_{B}}(a)x\big)b+b\cdot_{B}r_{\circ_{A}}(x)a,\\ &&2l_{\circ_{A}}(x)(a\cdot_{B}b)=l_{\circ_{A}}\big(\mu_{B}(b)x\big)a+\mu_{A}(x)b\circ_{B}a+b\cdot_{B}l_{\circ_{A}}(x)a+\mu_{A}\big(r_{\circ_{B}}(a)x\big)b.\end{aligned}$$ Such a structure is called a **matched pair of anti-pre-Lie Poisson algebras** $(A,\cdot_{A},\circ_{A})$ and $(B,\cdot_{B},\circ_{B})$. We denote it by $(A,B,\mu_{A},l_{\circ_{A}},r_{\circ_{A}},\mu_{B},l_{\circ_{B}},r_{\circ_{B}})$. **Theorem 58**. *Let $(A,\cdot_{A},\circ_{A})$ and $(B,\cdot_{B},\circ_{B})$ be two anti-pre-Lie Poisson algebras. Suppose that $\mu_{A},l_{\circ_{A}},$ $r_{\circ_{A}}:A\rightarrow\mathrm{End}(B)$ and $\mu_{B},l_{\circ_{B}},r_{\circ_{B}}:B\rightarrow\mathrm{End}(A)$ are linear maps. Define two bilinear operations $\cdot,\circ$ on $A\oplus B$ by Eqs. ([\[eq:Asso\]](#eq:Asso){reference-type="ref" reference="eq:Asso"}) and ([\[thm:matched pairs of anti-pre-Lie algebras1\]](#thm:matched pairs of anti-pre-Lie algebras1){reference-type="ref" reference="thm:matched pairs of anti-pre-Lie algebras1"}) respectively. Then $(A\oplus B,\cdot,\circ)$ is an anti-pre-Lie Poisson algebra if and only if $(A,B,\mu_{A},l_{\circ_{A}},r_{\circ_{A}},\mu_{B},l_{\circ_{B}},r_{\circ_{B}})$ is a matched pair of anti-pre-Lie Poisson algebras. In this case, we denote this anti-pre-Lie Poisson algebra structure on $A\oplus B$ by $A\bowtie^{\mu_{A},l_{\circ_{A}},r_{\circ_{A}}}_{\mu_{B},l_{\circ_{B}},r_{\circ_{B}}}B$. Conversely, every anti-pre-Lie Poisson algebra which is the direct sum of the underlying vector spaces of two subalgebras can be obtained from a matched pair of anti-pre-Lie Poisson algebras by this construction.* *Proof.* It is straightforward. ◻ **Proposition 59**. 
*Let $(A,\cdot_{A},\circ_{A})$ and $(B,\cdot_{B},\circ_{B})$ be two anti-pre-Lie Poisson algebras and their sub-adjacent transposed Poisson algebras be $(A,\cdot_{A},[-,-]_{A})$ and $(B,\cdot_{B},[-,-]_{B})$ respectively. If $(A,B,\mu_{A},l_{\circ_{A}},r_{\circ_{A}},\mu_{B},l_{\circ_{B}},r_{\circ_{B}})$ is a matched pair of anti-pre-Lie Poisson algebras, then $(A,B,\mu_{A},$ $l_{\circ_{A}}-r_{\circ_{A}},\mu_{B},l_{\circ_{B}}-r_{\circ_{B}})$ is a matched pair of transposed Poisson algebras.* *Proof.* It is similar to the proof of Proposition [Proposition 13](#pro:from matched pairs of anti-pre-Lie algebras to matched pairs of Lie algebras){reference-type="ref" reference="pro:from matched pairs of anti-pre-Lie algebras to matched pairs of Lie algebras"}. ◻ ## Manin triples of transposed Poisson algebras and anti-pre-Lie Poisson bialgebras   As mentioned in the Introduction, transposed Poisson algebras are "inconsistent\" with the nondegenerate (symmetric) bilinear forms which are invariant on both the commutative associative algebras and the Lie algebras in the following sense. **Proposition 60**. *Let $(A,\cdot,[-,-])$ be a transposed Poisson algebra. If there is a nondegenerate bilinear form $\mathcal{B}$ such that it is invariant on both $(A,\cdot)$ and $(A,[-,-])$, then Eq. ([\[eq:coherent TPA\]](#eq:coherent TPA){reference-type="ref" reference="eq:coherent TPA"}) holds.* *Proof.* Let $x,y,z,w\in A$. We have $$\begin{aligned} &&\mathcal{B}(2z\cdot [x,y]-[z\cdot x,y]-[x,z\cdot y],w)=\mathcal{B}(z,2[x,y]\cdot w-x\cdot[y,w]+y\cdot[x,w]),\\ &&\mathcal{B}(2z\cdot [x,y]-[z\cdot x,y]-[x,z\cdot y],w)=\mathcal{B}(x,2[y,z\cdot w]-z\cdot[y,w]-[z\cdot y,w]). \end{aligned}$$ Since the left-hand sides vanish by the defining identity of transposed Poisson algebras, the nondegeneracy of $\mathcal{B}$ yields $$\label{eq:invariant tpa} 2[x,y]\cdot z-x\cdot [y,z]+y\cdot[x,z]=0,\ 2[y,z\cdot x]-z\cdot[y,x]-[z\cdot y,x]=0.$$ Combining Eq. ([\[eq:invariant tpa\]](#eq:invariant tpa){reference-type="ref" reference="eq:invariant tpa"}) with Eq. 
([\[eq:defi:transposed Poisson algebra\]](#eq:defi:transposed Poisson algebra){reference-type="ref" reference="eq:defi:transposed Poisson algebra"}), we get Eq. ([\[eq:coherent TPA\]](#eq:coherent TPA){reference-type="ref" reference="eq:coherent TPA"}). ◻ On the other hand, the relationship between anti-pre-Lie Poisson algebras and transposed Poisson algebras with the nondegenerate symmetric bilinear forms which are invariant on the commutative associative algebras and commutative 2-cocycles on the Lie algebras is given as follows. **Proposition 61**. *([@LB2022]) Let $(A,\cdot,[-,-])$ be a transposed Poisson algebra. Suppose that there is a nondegenerate symmetric bilinear form $\mathcal{B}$ on $A$ such that it is invariant on $(A,\cdot)$ and a commutative 2-cocycle on $(A,[-,-])$. Then there is an anti-pre-Lie algebra $(A,\circ)$ defined by Eq. ([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B$ such that $(A,\cdot,\circ)$ is a compatible anti-pre-Lie Poisson algebra of $(A,\cdot,[-,-])$.* Recall that a **Frobenius algebra** is a triple $(A,\cdot,\mathcal B)$, where the pair $(A,\cdot)$ is an associative algebra and $\mathcal{B}$ is a nondegenerate invariant bilinear form on $(A,\cdot)$. Furthermore, we recall the definition of double constructions of (commutative) Frobenius algebras. **Definition 62**. ([@Bai2010]) Let $(A,\cdot_{A})$ be a commutative associative algebra. Suppose that there is a commutative associative algebra structure $\cdot_{A^{*}}$ on the dual space $A^{*}$. 
A **double construction of commutative Frobenius algebras** associated to $(A,\cdot_{A})$ and $(A^{*},\cdot_{A^{*}})$ is a collection $\big((A\oplus A^{*},\cdot,\mathcal{B}_{d}),A,A^{*}\big)$, such that $(A\oplus A^{*},\cdot)$ is a commutative associative algebra which contains $(A,\cdot_{A})$ and $(A^{*},\cdot_{A^{*}})$ as subalgebras, and the nondegenerate symmetric bilinear form $\mathcal{B}_{d}$ defined by Eq. ([\[eq:defi:Manin triples of Lie algebras\]](#eq:defi:Manin triples of Lie algebras){reference-type="ref" reference="eq:defi:Manin triples of Lie algebras"}) makes $(A\oplus A^{*},\cdot,\mathcal{B}_{d})$ a Frobenius algebra. **Definition 63**. Let $(A,\cdot_{A},[-,-]_{A})$ be a transposed Poisson algebra. Assume that there is a transposed Poisson algebra structure $(A^{*},\cdot_{A^{*}},[-,-]_{A^{*}})$ on the dual space $A^{*}$. Suppose that there is a transposed Poisson algebra structure $(A\oplus A^{*},\cdot,[-,-])$ on the direct sum $A\oplus A^{*}$ of vector spaces, such that $\big((A\oplus A^{*},\cdot,\mathcal{B}_{d}),A,A^{*}\big)$ is a double construction of commutative Frobenius algebras, $\big((A\oplus A^{*},[-,-],\mathcal{B}_{d}), A, A^{*}\big)$ is a Manin triple of Lie algebras with respect to the commutative 2-cocycle, and $(A,\cdot_{A},[-,-]_{A})$ and $(A^{*},\cdot_{A^{*}},[-,-]_{A^{*}})$ are transposed Poisson subalgebras of $(A\oplus A^{*},\cdot,[-,-])$. Such a structure is called a **Manin triple of transposed Poisson algebras with respect to the invariant bilinear form on $(A\oplus A^{*},\cdot)$ and the commutative 2-cocycle on $(A\oplus A^{*},[-,-])$**, or simply a **Manin triple of transposed Poisson algebras**, and is denoted by $\big((A\oplus A^{*},\cdot,[-,-],\mathcal{B}_{d}),A,A^{*}\big)$. **Lemma 64**. *Let $\big((A\oplus A^{*},\cdot,[-,-],\mathcal{B}_{d}),A,A^{*}\big)$ be a Manin triple of transposed Poisson algebras. Then there exists a compatible anti-pre-Lie algebra structure $\circ$ on $A\oplus A^*$ defined by Eq. 
([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B_d$ such that $(A\oplus A^*, \cdot, \circ)$ is a compatible anti-pre-Lie Poisson algebra. Moreover, $(A,\cdot_A,\circ_A=\circ|_{A\otimes A})$ and $(A^*,\cdot_{A^*},\circ_{A^*}=\circ|_{A^*\otimes A^*})$ are anti-pre-Lie Poisson subalgebras, whose sub-adjacent transposed Poisson algebras are $(A,\cdot_{A},[-,-]_{A})$ and $(A^{*},\cdot_{A^{*}}, [-,-]_{A^{*}})$ respectively.* *Proof.* It is similar to the proof of Lemma [Lemma 15](#lem:111){reference-type="ref" reference="lem:111"}. ◻ **Theorem 65**. *Let $(A,\cdot_{A},\circ_{A})$ and $(A^{*},\cdot_{A^{*}},\circ_{A^{*}})$ be two anti-pre-Lie Poisson algebras and their sub-adjacent transposed Poisson algebras be $(A,\cdot_{A},[-,-]_{A})$ and $(A^{*},\cdot_{A^{*}},[-,-]_{A^{*}})$ respectively. Then the following conditions are equivalent:* 1. *There is a Manin triple of transposed Poisson algebras $\big((A\oplus A^{*},\cdot,[-,-],\mathcal{B}_{d}),A,A^{*}\big)$ such that the compatible anti-pre-Lie Poisson algebra $(A\oplus A^{*},\cdot,\circ)$ in which $\circ$ is defined by Eq. ([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B_d$ contains $(A,\cdot_{A},\circ_{A})$ and $(A^{*},\cdot_{A^{*}},\circ_{A^{*}})$ as anti-pre-Lie Poisson subalgebras.* 2. *$(A,A^{*},-\mathcal{L}^{*}_{\cdot_{A}},-\mathrm{ad}^{*}_{A},\mathcal{R}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\cdot_{A^{*}}},-\mathrm{ad}^{*}_{A^{*}},\mathcal{R}^{*}_{\circ_{A^{*}}})$ is a matched pair of anti-pre-Lie Poisson algebras.* 3. 
*$(A,A^{*},-\mathcal{L}^{*}_{\cdot_{A}},-\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\cdot_{A^{*}}},-\mathcal{L}^{*}_{\circ_{A^{*}}})$ is a matched pair of transposed Poisson algebras.* *Proof.* Note that by [@Bai2010], there is a double construction of commutative Frobenius algebras $\big((A\oplus A^{*},\cdot,\mathcal{B}_{d}),A,A^{*}\big)$ if and only if $(A,A^{*},-\mathcal{L}^{*}_{\cdot_{A}},-\mathcal{L}^{*}_{\cdot_{A^{*}}})$ is a matched pair of commutative associative algebras. Hence the conclusion follows from a proof similar to the one of Theorem [Theorem 16](#thm:Manin triples and matched pairs){reference-type="ref" reference="thm:Manin triples and matched pairs"}. ◻ Recall ([@Bai2010]) that a **cocommutative coassociative coalgebra** is a pair $(A,\Delta)$, such that $A$ is a vector space and $\Delta:A\rightarrow A\otimes A$ is a linear map satisfying $$\begin{aligned} \tau\Delta&=&\Delta,\label{eq:symmetric}\\ (\mathrm{id}\otimes \Delta)\Delta&=&(\Delta\otimes\mathrm{id})\Delta.\label{AssoCo}\end{aligned}$$ A **commutative and cocommutative infinitesimal bialgebra** is a triple $(A,\cdot,\Delta)$ such that the pair $(A,\cdot)$ is a commutative associative algebra, the pair $(A,\Delta)$ is a cocommutative coassociative coalgebra, and the following equation holds: $$\label{AssoBia} \Delta(x\cdot y)=\big(\mathcal{L}_{\cdot}(x)\otimes \mathrm{id}\big)\Delta(y)+\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(y)\big)\Delta(x),\;\;\forall x,y\in A.$$ **Definition 66**. 
An **anti-pre-Lie Poisson coalgebra** is a triple $(A,\Delta,\delta)$, such that $(A,\Delta)$ is a cocommutative coassociative coalgebra, $(A,\delta)$ is an anti-pre-Lie coalgebra, and the following conditions are satisfied: $$\begin{aligned} 2(\delta\otimes\mathrm{id})\Delta-2(\tau\otimes\mathrm{id})(\delta\otimes\mathrm{id})\Delta&=&(\tau\otimes\mathrm{id})(\mathrm{id}\otimes\delta)\Delta-(\mathrm{id}\otimes\delta)\Delta,\label{eq:defi:anti-pre-Lie Poisson coalg1}\\ 2(\mathrm{id}\otimes\Delta)\delta&=&(\mathrm{id}\otimes\tau)(\delta\otimes\mathrm{id})\delta+(\delta\otimes\mathrm{id})\Delta.\label{eq:defi:anti-pre-Lie Poisson coalg2}\end{aligned}$$ **Proposition 67**. *Let $A$ be a vector space and $\Delta,\delta:A\rightarrow A\otimes A$ be linear maps. Let $\cdot_{A^{*}},\circ_{A^{*}}:A^{*}\otimes A^{*}\rightarrow A^{*}$ be linear duals of $\Delta$ and $\delta$ respectively. Then $(A,\Delta,\delta)$ is an anti-pre-Lie Poisson coalgebra if and only if $(A^{*},\cdot_{A^{*}},\circ_{A^{*}})$ is an anti-pre-Lie Poisson algebra.* *Proof.* By [@Bai2010], $(A,\Delta)$ is a cocommutative coassociative coalgebra if and only if $(A^{*},\cdot_{A^{*}})$ is a commutative associative algebra. Moreover, by a proof similar to the one of Proposition [\[pro:anti-pre-Lie coalgebras and anti-pre-Lie algebras\]](#pro:anti-pre-Lie coalgebras and anti-pre-Lie algebras){reference-type="ref" reference="pro:anti-pre-Lie coalgebras and anti-pre-Lie algebras"}, Eqs. ([\[eq:defi:anti-pre-Lie Poisson1\]](#eq:defi:anti-pre-Lie Poisson1){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson1"})-([\[eq:defi:anti-pre-Lie Poisson2\]](#eq:defi:anti-pre-Lie Poisson2){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson2"}) hold on $A^*$ if and only if Eqs. 
([\[eq:defi:anti-pre-Lie Poisson coalg1\]](#eq:defi:anti-pre-Lie Poisson coalg1){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson coalg1"})-([\[eq:defi:anti-pre-Lie Poisson coalg2\]](#eq:defi:anti-pre-Lie Poisson coalg2){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson coalg2"}) hold respectively. Hence the conclusion follows. ◻ **Definition 68**. Let $(A,\cdot_{A},\circ_{A})$ be an anti-pre-Lie Poisson algebra and $(A,\Delta,\delta)$ be an anti-pre-Lie Poisson coalgebra. Suppose that the following conditions are satisfied: 1. $(A,\cdot_{A},\Delta)$ is a commutative and cocommutative infinitesimal bialgebra. 2. $(A,\circ_{A},\delta)$ is an anti-pre-Lie bialgebra. 3. The following equations hold: $$\label{eq:Poisson bialg 1} 2\big(\mathcal{L}_{\circ_{A}}(x)\otimes\mathrm{id}\big)\Delta(y)-2\big(\mathrm{id}\otimes\mathcal{L}_{\cdot_{A}}(y)\big)\delta(x)+\delta(x\cdot_{A} y)+\big(\mathcal{L}_{\cdot_{A}}(y)\otimes\mathrm{id}\big)\delta(x)-\big(\mathrm{id}\otimes\mathrm{ad}_{A}(x)\big)\Delta(y)=0,$$ $$\label{eq:Poisson bialg 2} 2\Delta([x,y]_{A})+\big(\mathrm{id}\otimes\mathrm{ad}_{A}(y)\big)\delta(x)-\big(\mathcal{L}_{\cdot_{A}}(x)\otimes\mathrm{id}\big)\delta(y)-\big(\mathrm{id}\otimes\mathrm{ad}_{A}(x)\big)\delta(y)+\big(\mathcal{L}_{\cdot_{A}}(y)\otimes\mathrm{id}\big)\delta(x)=0,$$ $$\label{eq:Poisson bialg 3} \begin{array}{ll} &2\big(\mathrm{id}\otimes\mathcal{L}_{\cdot_{A}}(y)\big)\delta(x)-2\big(\mathcal{L}_{\circ_{A}}(x)\otimes\mathrm{id}\big)\Delta(y)+\Delta(x\circ_{A} y)+\big(\mathcal{R}_{\circ_{A}}(y)\otimes\mathrm{id}\big)\Delta(x)\\ &+\tau\big(\mathcal{L}_{\cdot_{A}}(x)\otimes\mathrm{id}\big)\delta(y)-\big(\mathrm{id}\otimes\mathcal{L}_{\cdot_{A}}(x)\big)\delta(y)=0, \end{array}$$ $$\label{eq:Poisson bialg 4} (\tau-\mathrm{id}^{\otimes 2})\Big(2\delta(x\cdot_{A} 
y)-\big(\mathcal{L}_{\cdot_{A}}(x)\otimes\mathrm{id}\big)\delta(y)-\big(\mathrm{id}\otimes\mathcal{L}_{\cdot_{A}}(x)\big)\delta(y)-\big(\mathrm{id}\otimes\mathcal{R}_{\circ_{A}}(y)\big)\Delta(x)\Big)=0,$$ for all $x,y\in A$. Such a structure is called an **anti-pre-Lie Poisson bialgebra**. We denote it by $(A,\cdot_{A},\circ_{A},\Delta,\delta)$. **Theorem 69**. *Let $(A,\cdot_{A},\circ_{A})$ be an anti-pre-Lie Poisson algebra. Suppose that there is an anti-pre-Lie Poisson algebra structure $(A^{*},\cdot_{A^{*}},\circ_{A^{*}})$ on the dual space $A^{*}$. Let $(A,\cdot_A,[-,-]_A)$ and $(A^*,\cdot_{A^*}$, $[-,-]_{A^*})$ be the sub-adjacent transposed Poisson algebras respectively. Let $\Delta,\delta:A\rightarrow A\otimes A$ be the linear duals of $\cdot_{A^{*}}$ and $\circ_{A^{*}}$ respectively. Then $(A,\cdot_{A},\circ_{A},\Delta,\delta)$ is an anti-pre-Lie Poisson bialgebra if and only if $(A,A^{*},-\mathcal{L}^{*}_{\cdot_{A}},-\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\cdot_{A^{*}}},-\mathcal{L}^{*}_{\circ_{A^{*}}})$ is a matched pair of transposed Poisson algebras.* *Proof.* By [@Bai2010], $(A,A^{*}, -\mathcal{L}^{*}_{\cdot_{A}},-\mathcal{L}^{*}_{\cdot_{A^{*}}})$ is a matched pair of commutative associative algebras if and only if $(A,\cdot_{A},\Delta)$ is a commutative and cocommutative infinitesimal bialgebra, and by Theorem [Theorem 21](#thm:equivalence matched pairs of Lie algebras and anti-pre-Lie bialgebras){reference-type="ref" reference="thm:equivalence matched pairs of Lie algebras and anti-pre-Lie bialgebras"}, $(A,A^{*}, -\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\circ_{A^{*}}})$ is a matched pair of Lie algebras if and only if $(A,\circ_{A},\delta)$ is an anti-pre-Lie bialgebra. 
By Proposition [Proposition 46](#pro:anti-pre-Lie Poisson){reference-type="ref" reference="pro:anti-pre-Lie Poisson"}, $(-\mathcal{L}^{*}_{\cdot_{A}},-\mathcal{L}^{*}_{\circ_{A}}, A^*)$ and $(-\mathcal{L}^{*}_{\cdot_{A^{*}}},-\mathcal{L}^{*}_{\circ_{A^{*}}},A)$ are representations of the transposed Poisson algebras $(A,\cdot_A,[-,-]_A)$ and $(A^*,\cdot_{A^*},[-,-]_{A^*})$ respectively. By Proposition [\[pro:Poisson algs and Poisson coalgs\]](#pro:Poisson algs and Poisson coalgs){reference-type="ref" reference="pro:Poisson algs and Poisson coalgs"}, $(A,\Delta,\delta)$ is an anti-pre-Lie Poisson coalgebra if and only if $(A^{*},\cdot_{A^{*}},\circ_{A^{*}})$ is an anti-pre-Lie Poisson algebra. Moreover, for all $x,y\in A, a^{*}, b^{*}\in A^{*}$, we have $$\begin{aligned} \langle 2\mathcal{L}^{*}_{\cdot_{A^{*}}}\big(\mathcal{L}^{*}_{\circ_{A}}(x)a^{*}\big)y,b^{*}\rangle&=&-\langle 2y, \mathcal{L}^{*}_{\circ_{A}}(x)a^{*}\cdot_{A^{*}}b^{*}\rangle=\langle 2\big(\mathcal{L}_{\circ_{A}}(x)\otimes\mathrm{id}\big)\Delta(y),a^{*}\otimes b^{*}\rangle,\\ \langle 2y\cdot_{A}\mathcal{L}^{*}_{\circ_{A^{*}}}(a^{*})x, b^{*}\rangle&=&\langle 2x, a^{*}\circ_{A^{*}}\mathcal{L}^{*}_{\cdot_{A}}(y)b^{*}\rangle=-\langle 2\big(\mathrm{id}\otimes\mathcal{L}_{\cdot_{A}}(y)\big)\delta(x), a^{*}\otimes b^{*}\rangle,\\ \langle\mathcal{L}^{*}_{\circ_{A^{*}}}(a^{*})(x\cdot_{A}y), b^{*}\rangle&=&-\langle \delta(x\cdot_{A}y), a^{*}\otimes b^{*}\rangle,\\ -\langle\mathcal{L}^{*}_{\circ_{A^{*}}}\big(\mathcal{L}^{*}_{\cdot_{A}}(y)a^{*}\big)x,b^{*}\rangle&=&\langle x, \mathcal{L}^{*}_{\cdot_{A}}(y)a^{*}\circ_{A^{*}}b^{*}\rangle=-\langle\big(\mathcal{L}_{\cdot_{A}}(y)\otimes\mathrm{id}\big)\delta(x), a^{*}\otimes b^{*}\rangle,\\ -\langle[x,\mathcal{L}^{*}_{\cdot_{A^{*}}}(a^{*})y]_{A},b^{*}\rangle&=&-\langle y, a^{*}\cdot_{A^{*}} \mathrm{ad}^{*}_{A}(x)b^{*}\rangle=\langle\big(\mathrm{id}\otimes \mathrm{ad}_{A}(x)\big)\Delta(y), a^{*}\otimes b^{*}\rangle.\end{aligned}$$ Thus Eq. 
([\[eq:defi:MP TPA a\]](#eq:defi:MP TPA a){reference-type="ref" reference="eq:defi:MP TPA a"}) holds if and only if Eq. ([\[eq:Poisson bialg 1\]](#eq:Poisson bialg 1){reference-type="ref" reference="eq:Poisson bialg 1"}) holds for $\mu_{A}=-\mathcal{L}^{*}_{\cdot_{A}}$, $\mu_{B}=-\mathcal{L}^{*}_{\cdot_{A^{*}}}$, $\rho_{A}=-\mathcal{L}^{*}_{\circ_{A}}$, $\rho_{B}=-\mathcal{L}^{*}_{\circ_{A^{*}}}$. Similarly Eqs. ([\[eq:defi:MP TPA b\]](#eq:defi:MP TPA b){reference-type="ref" reference="eq:defi:MP TPA b"})-([\[eq:defi:MP TPA d\]](#eq:defi:MP TPA d){reference-type="ref" reference="eq:defi:MP TPA d"}) hold if and only if Eqs. ([\[eq:Poisson bialg 2\]](#eq:Poisson bialg 2){reference-type="ref" reference="eq:Poisson bialg 2"})-([\[eq:Poisson bialg 4\]](#eq:Poisson bialg 4){reference-type="ref" reference="eq:Poisson bialg 4"}) hold for $\mu_{A}=-\mathcal{L}^{*}_{\cdot_{A}}$, $\mu_{B}=-\mathcal{L}^{*}_{\cdot_{A^{*}}}$, $\rho_{A}=-\mathcal{L}^{*}_{\circ_{A}}$, $\rho_{B}=-\mathcal{L}^{*}_{\circ_{A^{*}}}$ respectively. Hence the conclusion follows. ◻ Combining Theorems [Theorem 65](#thm:ddd){reference-type="ref" reference="thm:ddd"} and [Theorem 69](#thm:eee){reference-type="ref" reference="thm:eee"} together, we have **Corollary 70**. *Let $(A,\cdot_{A},\circ_{A})$ be an anti-pre-Lie Poisson algebra. Suppose that there is an anti-pre-Lie Poisson algebra structure $(A^{*},\cdot_{A^{*}},\circ_{A^{*}})$ on the dual space $A^{*}$ and $\Delta,\delta:A\rightarrow A\otimes A$ are the linear duals of $\cdot_{A^{*}}$ and $\circ_{A^{*}}$ respectively. Let $(A,\cdot_{A},[-,-]_{A})$ and $(A^{*},\cdot_{A^{*}},[-,-]_{A^{*}})$ be the sub-adjacent transposed Poisson algebras of $(A,\cdot_{A},\circ_{A})$ and $(A^{*},\cdot_{A^{*}},\circ_{A^*})$ respectively. Then the following conditions are equivalent:* 1. 
*There is a Manin triple of transposed Poisson algebras $\big((A\oplus A^{*},\cdot,[-,-],\mathcal{B}_{d}),A,A^{*}\big)$ such that the compatible anti-pre-Lie Poisson algebra $(A\oplus A^{*},\cdot,\circ)$ in which $\circ$ is defined by Eq. ([\[eq:thm:commutative 2-cocycles and anti-pre-Lie algebras\]](#eq:thm:commutative 2-cocycles and anti-pre-Lie algebras){reference-type="ref" reference="eq:thm:commutative 2-cocycles and anti-pre-Lie algebras"}) through $\mathcal B_d$ contains $(A,\cdot_{A},\circ_{A})$ and $(A^{*},\cdot_{A^{*}},\circ_{A^{*}})$ as anti-pre-Lie Poisson subalgebras.* 2. *$(A,A^{*},-\mathcal{L}^{*}_{\cdot_{A}},-\mathrm{ad}^{*}_{A},\mathcal{R}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\cdot_{A^{*}}},-\mathrm{ad}^{*}_{A^{*}},\mathcal{R}^{*}_{\circ_{A^{*}}})$ is a matched pair of anti-pre-Lie Poisson algebras.* 3. *$(A,A^{*},-\mathcal{L}^{*}_{\cdot_{A}},-\mathcal{L}^{*}_{\circ_{A}},-\mathcal{L}^{*}_{\cdot_{A^{*}}},-\mathcal{L}^{*}_{\circ_{A^{*}}})$ is a matched pair of transposed Poisson algebras.* 4. *$(A,\cdot_{A},\circ_{A},\Delta,\delta)$ is an anti-pre-Lie Poisson bialgebra.* ## Coboundary anti-pre-Lie Poisson bialgebras and the anti-pre-Lie Poisson Yang-Baxter equation   Recall ([@Bai2010]) that a commutative and cocommutative infinitesimal bialgebra $(A,\cdot,\Delta)$ is called **coboundary** if there exists an $r\in A\otimes A$ such that $$\label{AssoCob} \Delta(x):=\Delta_{r}(x):=\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)-\mathcal{L}_{\cdot}(x)\otimes \mathrm{id}\big)r,\ \ \forall x\in A.$$ Let $(A,\cdot)$ be a commutative associative algebra and $r\in A\otimes A$. Let $\Delta:A\rightarrow A\otimes A$ be a linear map defined by Eq. ([\[AssoCob\]](#AssoCob){reference-type="ref" reference="AssoCob"}). 
Then by [@Bai2010], $(A,\cdot,\Delta)$ is a commutative and cocommutative infinitesimal bialgebra if and only if for all $x\in A,$ $$\begin{aligned} \big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)-\mathcal{L}_{\cdot}(x)\otimes \mathrm{id}\big)\big(r+\tau(r)\big)&=&0,\label{eq:AYBE1}\\ \big(\mathrm{id}\otimes \mathrm{id}\otimes\mathcal{L}_{\cdot}(x)-\mathcal{L}_{\cdot}(x)\otimes \mathrm{id}\otimes \mathrm{id}\big)\textbf{A}(r)&=&0,\label{eq:AYBE2}\end{aligned}$$ where $\textbf{A}(r)$ is defined by Eq. ([\[eq:AYBE\]](#eq:AYBE){reference-type="ref" reference="eq:AYBE"}). **Definition 71**. An anti-pre-Lie Poisson bialgebra $(A,\cdot,\circ,\Delta,\delta)$ is called **coboundary** if there exists an $r\in A\otimes A$ such that Eqs. ([\[AssoCob\]](#AssoCob){reference-type="ref" reference="AssoCob"}) and ([\[eq:defi:coboundary anti-pre-Lie bialgebras\]](#eq:defi:coboundary anti-pre-Lie bialgebras){reference-type="ref" reference="eq:defi:coboundary anti-pre-Lie bialgebras"}) hold. A coboundary anti-pre-Lie Poisson bialgebra is clearly both a coboundary commutative and cocommutative infinitesimal bialgebra and a coboundary anti-pre-Lie bialgebra. **Proposition 72**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $r=\sum\limits_{i}a_{i}\otimes b_{i}\in A\otimes A$. Let $\Delta=\Delta_{r}$ and $\delta=\delta_{r}$ be two linear maps defined by Eqs. ([\[AssoCob\]](#AssoCob){reference-type="ref" reference="AssoCob"}) and ([\[eq:defi:coboundary anti-pre-Lie bialgebras\]](#eq:defi:coboundary anti-pre-Lie bialgebras){reference-type="ref" reference="eq:defi:coboundary anti-pre-Lie bialgebras"}) respectively.* 1. *[\[it:1\]]{#it:1 label="it:1"} Eq. 
([\[eq:defi:anti-pre-Lie Poisson coalg1\]](#eq:defi:anti-pre-Lie Poisson coalg1){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson coalg1"}) holds if and only if for all $x\in A$, the following equation holds:* *$$\label{eq:TPBA1} \begin{split} &\big(2\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)-\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\otimes\mathrm{id}-\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\big)\textbf{T}(r)\\ &\ \ +\sum\limits_{j}\Big(2\big(\mathrm{ad}(a_{j})\otimes\mathrm{id}\otimes \mathcal{L}_{\cdot}(x)-\mathrm{id}\otimes\mathcal{L}_{\circ}(a_{j})\otimes\mathcal{L}_{\cdot}(x)-\mathrm{ad}(x\cdot a_{j})\otimes\mathrm{id}\otimes\mathrm{id}+\mathrm{id}\otimes\mathcal{L}_{\circ}(x\cdot a_{j})\otimes\mathrm{id}\big)\\ &\ \ +\mathcal{R}_{\circ}(a_{j})\otimes \mathcal{L}_{\cdot}(x)\otimes\mathrm{id}-\mathcal{R}_{\circ}(a_{j})\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\otimes\mathrm{id}\Big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)=0. \end{split}$$* 2. *[\[it:2\]]{#it:2 label="it:2"} Eq. ([\[eq:defi:anti-pre-Lie Poisson coalg2\]](#eq:defi:anti-pre-Lie Poisson coalg2){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson coalg2"}) holds if and only if for all $x\in A$, the following equation holds:* *$$\label{eq:TPBA2} \begin{split} &\big(2\mathcal{L}_{\circ}(x)\otimes\mathrm{id}\otimes\mathrm{id}-\mathrm{id}\otimes\mathrm{ad}(x)\otimes\mathrm{id}\big)\textbf{A}(r)-\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\big)(\mathrm{id}\otimes \tau)\textbf{T}(r)\\ &\ \ +\sum\limits_{j} \big(\mathrm{id}\otimes\mathrm{ad}(x)\otimes\mathcal{L}_{\cdot}(b_{j})+\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(b_{j})-\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(b_{j})\mathcal{L}_{\circ}(x)\\ &\ \ -\mathrm{id}\otimes\mathrm{ad}(b_{j})\otimes\mathcal{L}_{\cdot}(x)\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big)=0. \end{split}$$* 3. 
*[\[it:3\]]{#it:3 label="it:3"} Eq. ([\[eq:Poisson bialg 1\]](#eq:Poisson bialg 1){reference-type="ref" reference="eq:Poisson bialg 1"}) holds automatically.* 4. *[\[it:4\]]{#it:4 label="it:4"} Eq. ([\[eq:Poisson bialg 2\]](#eq:Poisson bialg 2){reference-type="ref" reference="eq:Poisson bialg 2"}) holds automatically.* 5. *[\[it:5\]]{#it:5 label="it:5"} Eq. ([\[eq:Poisson bialg 3\]](#eq:Poisson bialg 3){reference-type="ref" reference="eq:Poisson bialg 3"}) holds if and only if for all $x,y\in A$, the following equation holds: $$\label{eq:TPBA3} \big(\mathrm{ad}(y)\otimes\mathcal{L}_{\cdot}(x)-\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(y)\big)\big(r+\tau(r)\big)=0.$$* 6. *[\[it:6\]]{#it:6 label="it:6"} Eq. ([\[eq:Poisson bialg 4\]](#eq:Poisson bialg 4){reference-type="ref" reference="eq:Poisson bialg 4"}) holds if and only if for all $x,y\in A$, the following equation holds:* *$$\label{eq:TPBA4} \big(\mathcal{L}_{\cdot}(x)\otimes\mathcal{L}_{\circ}(y)-\mathcal{L}_{\circ}(y)\otimes\mathcal{L}_{\cdot}(x)+2\mathcal{L}_{\circ}(x\cdot y)\otimes\mathrm{id}-2\mathrm{id}\otimes\mathcal{L}_{\circ}(x\cdot y)+\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(y)-\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(y)\otimes\mathrm{id}\big)\big(r+\tau(r)\big)=0.$$* *Proof.* It follows from a direct computation, which is given in the Appendix. ◻ Therefore, combining with Theorem [Theorem 25](#thm:cob bialg){reference-type="ref" reference="thm:cob bialg"}, we have the following conclusion. **Theorem 73**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $r=\sum\limits_{i}a_{i}\otimes b_{i}\in A\otimes A$. Let $\Delta=\Delta_{r}$ and $\delta=\delta_{r}$ be two linear maps defined by Eqs. ([\[AssoCob\]](#AssoCob){reference-type="ref" reference="AssoCob"}) and ([\[eq:defi:coboundary anti-pre-Lie bialgebras\]](#eq:defi:coboundary anti-pre-Lie bialgebras){reference-type="ref" reference="eq:defi:coboundary anti-pre-Lie bialgebras"}) respectively. 
Then $(A,\cdot,\circ,\Delta,\delta)$ is an anti-pre-Lie Poisson bialgebra if and only if Eqs. ([\[eq:pro:cob coalg1\]](#eq:pro:cob coalg1){reference-type="ref" reference="eq:pro:cob coalg1"})- ([\[eq:pro:coboundary anti-pre-Lie bialgebras1\]](#eq:pro:coboundary anti-pre-Lie bialgebras1){reference-type="ref" reference="eq:pro:coboundary anti-pre-Lie bialgebras1"}) and ([\[eq:AYBE1\]](#eq:AYBE1){reference-type="ref" reference="eq:AYBE1"})-([\[eq:TPBA4\]](#eq:TPBA4){reference-type="ref" reference="eq:TPBA4"}) hold.* **Definition 74**. Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $r\in A\otimes A$. We say $r$ is a solution of the **anti-pre-Lie Poisson Yang-Baxter equation** or the **APLP-YBE** in short, in $(A,\cdot,\circ)$ if $r$ satisfies both the AYBE and the APL-YBE, that is, $$\textbf{A}(r)=\textbf{T}(r)=0.$$ **Example 75**. Let $(A,\cdot)$ be a commutative associative algebra with a derivation $P$ and $(A,\circ)$ be the anti-pre-Lie algebra defined by Eq. ([\[eq:diff anti-pre-Lie\]](#eq:diff anti-pre-Lie){reference-type="ref" reference="eq:diff anti-pre-Lie"}). Then by [@LB2022], $(A,\cdot,\circ)$ is an anti-pre-Lie Poisson algebra. Moreover, by Example [Example 28](#ex:AYBE){reference-type="ref" reference="ex:AYBE"}, if $r$ is a solution of the AYBE in $(A,\cdot)$ satisfying Eq. ([\[eq:-P\]](#eq:-P){reference-type="ref" reference="eq:-P"}), then $r$ is also a solution of the APLP-YBE in $(A,\cdot,\circ)$. **Corollary 76**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $r\in A\otimes A$ be a skew-symmetric solution of the APLP-YBE in $(A,\cdot,\circ)$. Then $(A,\cdot,\circ,\Delta,\delta)$ is an anti-pre-Lie Poisson bialgebra, where $\Delta=\Delta_{r}$ and $\delta=\delta_{r}$ are defined by Eqs. 
([\[AssoCob\]](#AssoCob){reference-type="ref" reference="AssoCob"}) and ([\[eq:defi:coboundary anti-pre-Lie bialgebras\]](#eq:defi:coboundary anti-pre-Lie bialgebras){reference-type="ref" reference="eq:defi:coboundary anti-pre-Lie bialgebras"}) respectively.* *Proof.* It follows from Theorem [Theorem 73](#thm:lllll){reference-type="ref" reference="thm:lllll"}. ◻ ## $\mathcal{O}$-operators of anti-pre-Lie Poisson algebras and pre-(anti-pre-Lie Poisson) algebras   Let $(A,\cdot)$ be a commutative associative algebra and $(\mu,V)$ be a representation of $(A,\cdot)$. Recall ([@Bai2010]) that a linear map $T:V\rightarrow A$ is called an **$\mathcal{O}$-operator of $(A,\cdot)$ associated to** $(\mu,V)$ if $$\label{eq:O-op} T(u)\cdot T(v)=T\Big(\mu\big(T(u)\big)v+\mu\big(T(v)\big)u\Big),\;\;\forall u,v\in V.$$ **Definition 77**. Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $(\mu,l_{\circ},r_{\circ},V)$ be a representation of $(A,\cdot,\circ)$. A linear map $T:V\rightarrow A$ is called an **$\mathcal{O}$-operator of $(A,\cdot,\circ)$ associated to $(\mu,l_{\circ},r_{\circ},V)$** if $T$ is both an $\mathcal{O}$-operator of $(A,\cdot)$ associated to $(\mu,V)$ and an $\mathcal{O}$-operator of $(A,\circ)$ associated to $(l_{\circ},r_{\circ},V)$, that is, Eqs. ([\[eq:O-op\]](#eq:O-op){reference-type="ref" reference="eq:O-op"}) and ([\[eq:defi:O-operators\]](#eq:defi:O-operators){reference-type="ref" reference="eq:defi:O-operators"}) hold. **Theorem 78**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $r\in A\otimes A$ be skew-symmetric. Then $r$ is a solution of the APLP-YBE in $(A,\cdot,\circ)$ if and only if $r$ is an $\mathcal{O}$-operator of $(A,\cdot,\circ)$ associated to $(-\mathcal{L}^{*}_{\cdot},-\mathrm{ad}^{*},\mathcal{R}^{*}_{\circ},A^{*})$.* *Proof.* By [@Bai2010], $r$ is a solution of the AYBE if and only if $r$ is an $\mathcal{O}$-operator of $(A,\cdot)$ associated to $(-\mathcal{L}^{*}_{\cdot}, A^{*})$.
Hence the conclusion follows from Theorem [Theorem 29](#thm:O-operator and T-equation){reference-type="ref" reference="thm:O-operator and T-equation"}. ◻ **Theorem 79**. *Let $(A,\cdot,\circ)$ be an anti-pre-Lie Poisson algebra and $(\mu,l_{\circ},r_{\circ},V)$ be a representation of $(A,\cdot,\circ)$. Set $\hat{A}=A\ltimes_{-\mu^{*},r^{*}_{\circ}-l^{*}_{\circ},r^{*}_{\circ}}V^{*}$. Let $T:V\rightarrow A$ be a linear map, identified with an element of $\hat{A}\otimes \hat{A}$ via $\mathrm{Hom}(V,A)\cong A\otimes V^{*}\subseteq \hat{A}\otimes\hat{A}$. Then $r=T-\tau(T)$ is a skew-symmetric solution of the APLP-YBE in the anti-pre-Lie Poisson algebra $\hat{A}$ if and only if $T$ is an $\mathcal{O}$-operator of $(A,\cdot,\circ)$ associated to $(\mu,l_{\circ},r_{\circ},V)$.* *Proof.* By [@Bai2012], $r$ is a skew-symmetric solution of the AYBE in $A\ltimes_{-\mu^{*}}V^{*}$ if and only if $T$ is an $\mathcal{O}$-operator associated to $(\mu,V)$. Hence the conclusion follows from Theorem [Theorem 32](#thm:O-operator and T-equation:semi-direct product version){reference-type="ref" reference="thm:O-operator and T-equation:semi-direct product version"}. ◻ **Definition 80**. A **pre-(anti-pre-Lie Poisson) algebra**, or a **pre-APLP algebra** in short, is a quadruple $(A,\star,\succ,\prec)$, such that $(A,\star)$ is a Zinbiel algebra, $(A,\succ,\prec)$ is a pre-APL algebra, and the following equations hold: $$\begin{aligned} 2y\star(x\succ z)-2y\star( z\prec x)&=&(x\succ y+x\prec y)\star z-x\star( z\prec y),\label{eq:PP1}\\ 2(x\succ y+x\prec y)\star z-2(y\succ x+y\prec x)\star z&=&y\star(x\succ z)-x\star(y\succ z),\label{eq:PP2}\\ 2z\prec (x\star y+y\star x) &=& (x\star z)\prec y+x\star( z\prec y),\label{eq:PP3}\\ 2x\succ (y\star z)&=&{(x\star y +y\star x)\succ z}+y\star(x\succ z),\label{eq:PP4}\\ 2x\succ (y\star z)&=& (x\star z)\prec y+{(x\succ y+x\prec y)\star z},\label{eq:PP5} \end{aligned}$$ for all $x,y,z\in A$. **Remark 81**.
In fact, the operad of pre-APLP algebras is the successor of the operad of anti-pre-Lie Poisson algebras in the sense of [@BBGN]. Note that they are analogues of pre-Poisson algebras ([@A2]), whose operad is the successor of the operad of Poisson algebras. **Proposition 82**. *Let $(A,\star,\succ,\prec)$ be a pre-APLP algebra. Let $\cdot,\circ:A\otimes A\rightarrow A$ be two bilinear operations defined by Eqs. ([\[eq:ZintoAss\]](#eq:ZintoAss){reference-type="ref" reference="eq:ZintoAss"}) and ([\[eq:defi:quasi anti-pre-Lie algebras and anti-pre-Lie algebras\]](#eq:defi:quasi anti-pre-Lie algebras and anti-pre-Lie algebras){reference-type="ref" reference="eq:defi:quasi anti-pre-Lie algebras and anti-pre-Lie algebras"}) respectively. Then $(A,\cdot,\circ)$ is an anti-pre-Lie Poisson algebra, called the **sub-adjacent anti-pre-Lie Poisson algebra** of $(A,\star,\succ,\prec)$, and $(A,\star,\succ,\prec)$ is called a **compatible pre-APLP algebra** of $(A,\cdot,\circ)$. Moreover, $(\mathcal{L}_{\star},\mathcal{L}_{\succ},\mathcal{R}_{\prec},A)$ is a representation of $(A,\cdot,\circ)$.* *Proof.* Let $x,y,z\in A$. By Eqs. ([\[eq:PP1\]](#eq:PP1){reference-type="ref" reference="eq:PP1"}) and ([\[eq:PP2\]](#eq:PP2){reference-type="ref" reference="eq:PP2"}), we have $$\begin{aligned} &&2(x\circ y)\cdot z-2(y\circ x)\cdot z+x\cdot(y\circ z)-y\cdot(x\circ z)\\ &&=2\big((x\succ y)\star z+( x\prec y)\star z+z\star(x\succ y)+z\star( x\prec y)\big)\\ &&\ \ -2\big((y\succ x)\star z+( y\prec x)\star z+z\star(y\succ x)+z\star( y\prec x)\big)\\ &&\ \ +x\star(y\succ z)+x\star( y\prec z)+(y\succ z)\star x+( y\prec z)\star x\\ &&\ \ -y\star(x\succ z)-y\star( x\prec z)-(x\succ z)\star y-( x\prec z)\star y\\ &&=0. \end{aligned}$$ Hence Eq. ([\[eq:defi:anti-pre-Lie Poisson1\]](#eq:defi:anti-pre-Lie Poisson1){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson1"}) holds. Similarly, Eq.
([\[eq:defi:anti-pre-Lie Poisson2\]](#eq:defi:anti-pre-Lie Poisson2){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson2"}) holds by Eqs. ([\[eq:PP3\]](#eq:PP3){reference-type="ref" reference="eq:PP3"})-([\[eq:PP5\]](#eq:PP5){reference-type="ref" reference="eq:PP5"}). Thus $(A,\cdot,\circ)$ is an anti-pre-Lie Poisson algebra. By Eqs. ([\[eq:defi:anti-pre-Lie Poisson rep1\]](#eq:defi:anti-pre-Lie Poisson rep1){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson rep1"})-([\[eq:defi:anti-pre-Lie Poisson rep5\]](#eq:defi:anti-pre-Lie Poisson rep5){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson rep5"}), $(\mathcal{L}_{\star},\mathcal{L}_{\succ},\mathcal{R}_{\prec},A)$ is a representation of $(A,\cdot,\circ)$. ◻ **Example 83**. Let $(A,\star)$ be a Zinbiel algebra with a derivation $P$. Let $(A,\succ,\prec)$ be the corresponding pre-APL algebra defined by Eq. ([\[eq:ex:Zinbiel derivation\]](#eq:ex:Zinbiel derivation){reference-type="ref" reference="eq:ex:Zinbiel derivation"}). Then it is straightforward to show that $(A,\star,\succ,\prec)$ is a pre-APLP algebra. **Proposition 84**. *Let $(A,\cdot ,\circ )$ be an anti-pre-Lie Poisson algebra and $(\mu,l_{\circ},r_{\circ},V)$ be a representation of $(A,\cdot ,\circ )$. Let $T:V\rightarrow A$ be an $\mathcal{O}$-operator of $(A,\cdot ,\circ )$ associated to $(\mu,l_{\circ},r_{\circ},V)$. Then there exists a pre-APLP algebra structure $(V,\star ,\succ ,\prec )$ on $V$ defined by $$u\star v=\mu\big(T(u)\big)v,\; u\succ v=l_{\circ}\big(T(u)\big)v,\; u\prec v=r_{\circ}\big(T(v)\big)u,\;\;\forall u,v\in V.$$ Conversely, let $(A,\star,\succ,\prec)$ be a pre-APLP algebra and $(A,\cdot, \circ)$ be the sub-adjacent anti-pre-Lie Poisson algebra. Then the identity map $\mathrm{id}:A\rightarrow A$ is an $\mathcal{O}$-operator of $(A,\cdot, \circ)$ associated to $(\mathcal{L}_{\star},\mathcal{L}_{\succ},\mathcal{R}_{\prec},A)$.* *Proof.* It is straightforward. ◻ **Proposition 85**. 
*Let $(A,\star,\succ,\prec)$ be a pre-APLP algebra and $(A,\cdot,\circ)$ be the sub-adjacent anti-pre-Lie Poisson algebra of $(A,\star,\succ,\prec)$. Let $\lbrace e_{1},\cdots ,e_{n}\rbrace$ be a basis of $A$ and $\lbrace e^{*}_{1},\cdots,e^{*}_{n}\rbrace$ be the dual basis. Then $$r=\sum_{i=1}^{n}(e_{i}\otimes e^{*}_{i}-e^{*}_{i}\otimes e_{i})$$ is a skew-symmetric solution of the APLP-YBE in the anti-pre-Lie Poisson algebra $A\ltimes_{-\mathcal{L}^{*}_{\star},\mathcal{R}^{*}_{\prec}-\mathcal{L}^{*}_{\succ}, \mathcal{R}^{*}_{\prec}}A^{*}$.* *Proof.* It follows from Proposition [Proposition 84](#thm:O-operator and pre anti-pre-Lie Poisson algebras){reference-type="ref" reference="thm:O-operator and pre anti-pre-Lie Poisson algebras"} and Theorem [Theorem 79](#thm:AP2){reference-type="ref" reference="thm:AP2"}. ◻ **Acknowledgements.** This work is supported by NSFC (11931009, 12271265, 12261131498), the Fundamental Research Funds for the Central Universities and Nankai Zhide Foundation. The authors thank the referee for valuable suggestions. M. Aguiar, Infinitesimal Hopf algebras, in: New Trends in Hopf Algebra Theory, La Falda, 1999, *Contemp. Math.* 267, Amer. Math. Soc., Providence (2000) 1-29. M. Aguiar, Pre-Poisson algebras, *Lett. Math. Phys.* 54 (2000) 263-277. M. Aguiar, On the associative analog of Lie bialgebras, *J. Algebra* 244 (2001) 492-532. M. Aguiar, Infinitesimal bialgebras, pre-Lie and dendriform algebras, in: Hopf Algebras, *Lecture Notes in Pure and Appl. Math.* 237, Marcel Dekker, New York (2004) 1-33. C. Bai, Double constructions of Frobenius algebras, Connes cocycles and their duality, *J. Noncommut. Geom.* 4 (2010) 475-530. C. Bai, R. Bai, L. Guo and Y. Wu, Transposed Poisson algebras, Novikov-Poisson algebras and 3-Lie algebras, *J. Algebra* 632 (2023) 535-566. C. Bai, O. Bellier, L. Guo and X. Ni, Splitting of operations, Manin products and Rota-Baxter operators, *Int. Math. Res. Not.
IMRN.* (2013) 485-524. C. Bai, L. Guo and X. Ni, $\mathcal{O}$-operators on associative algebras and associative Yang-Baxter equations, *Pac. J. Math.* 256 (2012) 257-289. P.D. Beites, B.L.M. Ferreira and I. Kaygorodov, Transposed Poisson structures, arXiv:2207.00281. P.D. Beites, A. Fernández Ouaridi and I. Kaygorodov, The algebraic and geometric classification of transposed Poisson algebras, *Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat.* 117 (2023) 55, 25pp. K.H. Bhaskara and K. Viswanath, Poisson Algebras and Poisson Manifolds, *Pitman Res. Notes Math. Ser.* 174, Longman Scientific & Technical, Harlow (1988). D. Burde, Left-symmetric algebras, or pre-Lie algebras in geometry and physics, *Cent. Eur. J. Math.* 4 (2006) 323-357. V. Chari and A. Pressley, A Guide to Quantum Groups, Cambridge University Press, Cambridge (1994). V.G. Drinfeld, Hamiltonian structures on Lie groups, Lie bialgebras and the geometric meaning of the classical Yang-Baxter equations, *Sov. Math. Dokl.* 27 (1983) 68-71. A. Dzhumadil'daev, Algebras with skew-symmetric identity of degree 3, *J. Math. Sci.* 161 (2009) 11-30. A. Dzhumadil'daev and P. Zusmanovich, Commutative 2-cocycles on Lie algebras, *J. Algebra* 324 (2010) 732-748. B.L.M. Ferreira, I. Kaygorodov and V. Lopatkin, $\frac{1}{2}$-derivations of Lie algebras and transposed Poisson algebras, *Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat.* 115 (2021) 142, 19pp. V.T. Filippov, Lie algebras satisfying identities of degree 5, *Algebra Log.* 34 (1996) 379-394. I. Kaygorodov and M. Khrypchenko, Transposed Poisson structures on Block Lie algebras and superalgebras, *Linear Algebra Appl.* 656 (2023) 167-197. I. Kaygorodov and M. Khrypchenko, Transposed Poisson structures on Witt type algebras, *Linear Algebra Appl.* 665 (2023) 196-210. I. Kaygorodov, V. Lopatkin and Z. Zhang, Transposed Poisson structures on Galilean and solvable Lie algebras, *J. Geom. Phys.* 187 (2023) 104781, 13pp. J.
Kock, Frobenius Algebras and 2d Topological Quantum Field Theories, Cambridge University Press, Cambridge (2004). B.A. Kupershmidt, What a classical $r$-matrix really is, *J. Nonlinear Math. Phys.* 6 (1999) 448-488. I. Laraiedh and S. Silvestrov, Transposed Hom-Poisson and Hom-pre-Lie Poisson algebras and bialgebras, arXiv:2106.03277. A. Lauda and H. Pfeiffer, Open-closed strings: two-dimensional extended TQFTs and Frobenius algebras, *Topology Appl.* 155 (2008) 623-666. A. Lichnerowicz, Les variétés de Poisson et leurs algèbres de Lie associées, *J. Diff. Geom.* 12 (1977) 253-300. G. Liu and C. Bai, Anti-pre-Lie algebras, Novikov algebras and commutative 2-cocycles on Lie algebras, *J. Algebra* 609 (2022) 337-379. J.-L. Loday, Cup product for Leibniz cohomology and dual Leibniz algebras, *Math. Scand.* 77 (1995) 189-196. S. Majid, Matched pairs of Lie groups associated to solutions of the Yang-Baxter equation, *Pac. J. Math.* 141 (1990) 311-332. X. Ni and C. Bai, Poisson bialgebras, *J. Math. Phys.* 54 (2013) 023515, 14pp. Y. Su, X. Xu and H. Zhang, Derivation-simple algebras and structures of Lie algebras of Witt type, *J. Algebra* 233 (2000) 642-662. A. Weinstein, Lectures on Symplectic Manifolds, *CBMS Regional Conference Series in Mathematics* 29, Amer. Math. Soc., Providence (1979). X. Xu, Novikov-Poisson algebras, *J. Algebra* 190 (1997) 253-279. X. Xu, New generalized simple Lie algebras of Cartan type over a field with characteristic zero, *J. Algebra* 224 (2000) 23-58. L. Yuan and Q. Hua, $\frac{1}{2}$-(bi)derivations and transposed Poisson algebra structures on Lie algebras, *Linear Multilinear Algebra* 70 (2022) 7672-7701. P. Zusmanovich, The second homology group of current Lie algebras, *Astérisque* 226 (1994) 435-452.
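As a concrete sanity check before the appendix: the $\mathcal{O}$-operator identity ([\[eq:O-op\]](#eq:O-op){reference-type="ref" reference="eq:O-op"}) for the regular representation $\mu=\mathcal{L}_{\cdot}$ of a commutative associative algebra is exactly the weight-zero Rota-Baxter identity, and Examples 75 and 83 produce anti-pre-Lie Poisson and pre-APLP structures from a derivation. The following minimal sketch in Python verifies these identities in one concrete realization on polynomials in $t$, namely $T(f)=\int_{0}^{t}f$, $f\star g=f\int_{0}^{t}g$ and $P(f)=(tf)'$, using the Zinbiel convention $(f\star g)\star h=f\star(g\star h)+f\star(h\star g)$; these particular choices are illustrative assumptions of this check, not the constructions of Eqs. ([\[eq:diff anti-pre-Lie\]](#eq:diff anti-pre-Lie){reference-type="ref" reference="eq:diff anti-pre-Lie"}) and ([\[eq:ex:Zinbiel derivation\]](#eq:ex:Zinbiel derivation){reference-type="ref" reference="eq:ex:Zinbiel derivation"}).

```python
from fractions import Fraction

# Polynomials in t with rational coefficients, as coefficient lists
# indexed by degree: [c0, c1, ...] represents c0 + c1*t + ...

def add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else Fraction(0)) +
            (q[i] if i < len(q) else Fraction(0)) for i in range(n)]

def mul(p, q):  # the commutative associative product of polynomials
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def T(p):  # T(f) = \int_0^t f, a weight-zero Rota-Baxter operator
    return [Fraction(0)] + [c / Fraction(i + 1) for i, c in enumerate(p)]

def star(p, q):  # f * g := f \int_0^t g, a Zinbiel product
    return mul(p, T(q))

def P(p):  # P(f) = (t f)' = f + t f', a derivation of star (checked below)
    return [Fraction(i + 1) * c for i, c in enumerate(p)]

def eq(p, q):  # equality of polynomials up to trailing zeros
    n = max(len(p), len(q))
    return all((p[i] if i < len(p) else 0) == (q[i] if i < len(q) else 0)
               for i in range(n))

f = [Fraction(1), Fraction(2)]                # 1 + 2t
g = [Fraction(0), Fraction(1), Fraction(3)]   # t + 3t^2
h = [Fraction(2), Fraction(0), Fraction(1)]   # 2 + t^2

# O-operator identity for the regular representation:
# T(u).T(v) = T(T(u).v + T(v).u)
assert eq(mul(T(f), T(g)), T(add(mul(T(f), g), mul(T(g), f))))

# Zinbiel identity: (f*g)*h = f*(g*h) + f*(h*g)
assert eq(star(star(f, g), h), add(star(f, star(g, h)), star(f, star(h, g))))

# P is a derivation of the Zinbiel product: P(f*g) = P(f)*g + f*P(g)
assert eq(P(star(f, g)), add(star(P(f), g), star(f, P(g))))
print("all three identities verified")
```

Exact rational arithmetic (`fractions.Fraction`) is used so that each check is an identity of coefficients rather than a floating-point approximation.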
# Appendix: Proofs of Propositions [Proposition 24](#pro:cob coalg){reference-type="ref" reference="pro:cob coalg"} and [Proposition 72](#pro:fff2){reference-type="ref" reference="pro:fff2"} {#appendix-proofs-of-propositions-procob-coalg-and-profff2 .unnumbered} *Proof of Proposition [Proposition 24](#pro:cob coalg){reference-type="ref" reference="pro:cob coalg"}.* ([\[it:bb1\]](#it:bb1){reference-type="ref" reference="it:bb1"}). Let $x\in A$. Then we have $$\begin{aligned} &&(\mathrm{id}\otimes\delta)\delta(x)-(\tau\otimes\mathrm{id})(\mathrm{id}\otimes\delta)\delta(x)+(\delta\otimes\mathrm{id})\delta(x)-(\tau\otimes\mathrm{id})(\delta\otimes\mathrm{id})\delta(x)\\ &&=\sum\limits_{i,j}\big(a_{i}\otimes a_{j}\otimes[[x,b_{i}],b_{j}]-a_{i}\otimes [x,b_{i}]\circ a_{j}\otimes b_{j}-x\circ a_{i}\otimes a_{j}\otimes [b_{i},b_{j}]+x\circ a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}\\ &&\hspace{0.2cm}-a_{j}\otimes a_{i}\otimes[[x,b_{i}],b_{j}]+[x,b_{i}]\circ a_{j}\otimes a_{i}\otimes b_{j}+a_{j}\otimes x\circ a_{i}\otimes [b_{i},b_{j}]-b_{i}\circ a_{j}\otimes x\circ a_{i}\otimes b_{j}\\ &&\hspace{0.2cm}+a_{j}\otimes[a_{i},b_{j}]\otimes [x,b_{i}]-a_{i}\circ a_{j}\otimes b_{j}\otimes [x,b_{i}]-a_{j}\otimes[x\circ a_{i},b_{j}]\otimes b_{i}+(x\circ a_{i})\circ a_{j}\otimes b_{j}\otimes b_{i}\\ &&\hspace{0.2cm}-[a_{i},b_{j}]\otimes a_{j}\otimes [x,b_{i}]+b_{j}\otimes a_{i}\circ a_{j}\otimes[x,b_{i}]+[x\circ a_{i},b_{j}]\otimes a_{j}\otimes b_{i}-b_{j}\otimes(x\circ a_{i})\circ a_{j}\otimes b_{i}\big)\\ &&=A(1)+A(2)+A(3),\end{aligned}$$ where $$\begin{aligned} A(1)&=&\sum\limits_{i,j}(a_{i}\otimes a_{j}\otimes[[x,b_{i}],b_{j}]-a_{j}\otimes a_{i}\otimes[[x,b_{i}],b_{j}]+a_{j}\otimes[a_{i},b_{j}]\otimes[x,b_{i}]\\ &&-a_{i}\circ a_{j}\otimes b_{j}\otimes [x,b_{i}]-[a_{i},b_{j}]\otimes a_{j}\otimes[x,b_{i}]+b_{j}\otimes a_{i}\circ a_{j}\otimes[x,b_{i}])\\ &=&\sum\limits_{i,j}(a_{i}\otimes a_{j}\otimes[x,[b_{i},b_{j}]]+a_{j}\otimes a_{i}\circ b_{j}\otimes[x,b_{i}]-a_{j}\otimes b_{j}\circ a_{i}\otimes [x,b_{i}]\\ &&+b_{j}\otimes a_{i}\circ a_{j}\otimes [x,b_{i}]-a_{i}\circ a_{j}\otimes
b_{j}\otimes[x,b_{i}] -[a_{i}, b_{j}]\otimes a_{j}\otimes[x,b_{i}]\\ &&-[a_{i}, a_{j}]\otimes b_{j}\otimes[x,b_{i}]+[a_{i}, a_{j}]\otimes b_{j}\otimes[x,b_{i}] )\\ &=&-\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathrm{ad}(x)\big)\textbf{T}(r)+\sum_{j}\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(a_{j})\otimes\mathrm{ad}(x)\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)\\ &&-\sum_{j}\big(\mathrm{ad}(a_{j})\otimes\mathrm{id}\otimes\mathrm{ad}(x)\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big),\\ A(2)&=&\sum\limits_{i,j}\big([x,b_{i}]\circ a_{j}\otimes a_{i}\otimes b_{j}+(x\circ a_{i})\circ a_{j}\otimes b_{j}\otimes b_{i}+[x\circ a_{i},b_{j}]\otimes a_{j}\otimes b_{i}\\ &&-x\circ a_{i}\otimes a_{j}\otimes [b_{i},b_{j}]+x\circ a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie algebras1})}{=}&\sum_{i,j}\big(b_{i}\circ(x\circ a_{j})\otimes a_{i}\otimes b_{j}-x\circ(b_{i}\circ a_{j})\otimes a_{i}\otimes b_{j}+(x\circ a_{i})\circ a_{j}\otimes b_{j}\otimes b_{i}\\ &&+(x\circ a_{i})\circ b_{j}\otimes a_{j}\otimes b_{i}-b_{j}\circ(x\circ a_{i})\otimes a_{j}\otimes b_{i}-x\circ(a_{i}\circ a_{j})\otimes b_{i}\otimes b_{j}\\ &&+x\circ (a_{i}\circ a_{j})\otimes b_{i}\otimes b_{j}-x\circ a_{i}\otimes a_{j}\otimes [b_{i}, b_{j}]+x\circ a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}\big)\\ &=&\big(\mathcal{L}_{\circ}(x)\otimes\mathrm{id}\otimes\mathrm{id}\big)\textbf{T}(r)+\sum_{j}\big(\mathcal{L}_{\circ}(x\circ a_{j})\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)\\ &&-\sum_{j}\big(\mathcal{L}_{\circ}(x)\mathcal{R}_{\circ}(a_{j})\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big),\\ A(3)&=&\sum_{i,j}(-a_{i}\otimes [x,b_{i}]\circ a_{j}\otimes b_{j}+a_{j}\otimes x\circ a_{i}\otimes[b_{i},b_{j}]-b_{i}\circ a_{j}\otimes x\circ a_{i}\otimes b_{j}\\ &&-a_{j}\otimes[x\circ a_{i},b_{j}]\otimes b_{i}-b_{j}\otimes(x\circ a_{i})\circ a_{j}\otimes b_{i})\\ &=&-(\tau\otimes\mathrm{id})A(2)\\ 
&=&-(\tau\otimes \mathrm{id})\big(\mathcal{L}_{\circ}(x)\otimes\mathrm{id}\otimes\mathrm{id}\big)\textbf{T}(r)-\sum_{j}(\tau\otimes\mathrm{id})\big(\mathcal{L}_{\circ}(x\circ a_{j})\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)\\ &&+\sum_{j}(\tau\otimes\mathrm{id})\big(\mathcal{L}_{\circ}(x)\mathcal{R}_{\circ}(a_{j})\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big).\end{aligned}$$ Hence Eq. ([\[eq:defi:anti-pre-Lie coalgebras1\]](#eq:defi:anti-pre-Lie coalgebras1){reference-type="ref" reference="eq:defi:anti-pre-Lie coalgebras1"}) holds if and only if Eq. ([\[eq:pro:cob coalg1\]](#eq:pro:cob coalg1){reference-type="ref" reference="eq:pro:cob coalg1"}) holds. ([\[it:bb2\]](#it:bb2){reference-type="ref" reference="it:bb2"}). Let $x\in A$. Then we have $$\begin{aligned} &&(\mathrm{id}^{\otimes 3}+\xi+\xi^{2})(\mathrm{id}^{\otimes 3}-\tau\otimes \mathrm{id})(\mathrm{id}\otimes\delta)\delta(x)\\ &&=(\mathrm{id}^{\otimes 3}+\xi+\xi^{2})\sum_{i,j}(a_{i}\otimes a_{j}\otimes[[x,b_{i}],b_{j}]-b_{j}\otimes a_{i}\otimes[x,b_{i}]\circ a_{j}\\ &&\ \ -a_{j}\otimes [b_{i},b_{j}]\otimes x\circ a_{i}+b_{i}\circ a_{j}\otimes b_{j}\otimes x\circ a_{i}- a_{j}\otimes a_{i}\otimes [[x,b_{i}],b_{j}]\\ &&\ \ +a_{i}\otimes b_{j}\otimes[x,b_{i}]\circ a_{j}+[b_{i},b_{j}]\otimes a_{j}\otimes x\circ a_{i}-b_{j}\otimes b_{i}\circ a_{j}\otimes x\circ a_{i})\\ &&=(\mathrm{id}^{\otimes 3}+\xi+\xi^{2})\big(B(1)+B(2)+B(3)\big),\end{aligned}$$ where $$\begin{aligned} B(1)&=&\sum_{i,j}(a_{i}\otimes a_{j}\otimes[[x,b_{i}],b_{j}]-b_{j}\otimes a_{i}\otimes[x,b_{i}]\circ a_{j}- a_{j}\otimes a_{i}\otimes [[x,b_{i}],b_{j}]+a_{i}\otimes b_{j}\otimes[x,b_{i}]\circ a_{j})\\ &=&\sum_{i,j}(a_{i}\otimes a_{j}\otimes[[x,b_{i}],b_{j}]-b_{j}\otimes a_{i}\otimes[x,b_{i}]\circ a_{j}-a_{j}\otimes a_{i}\otimes[x,b_{i}]\circ b_{j}+a_{j}\otimes a_{i}\otimes[x,b_{i}]\circ b_{j}\\ &&- a_{j}\otimes a_{i}\otimes [[x,b_{i}],b_{j}]+a_{i}\otimes 
b_{j}\otimes[x,b_{i}]\circ a_{j}+a_{i}\otimes a_{j}\otimes[x,b_{i}]\circ b_{j}-a_{i}\otimes a_{j}\otimes[x,b_{i}]\circ b_{j})\\ &=&\sum_{i,j}a_{i}\otimes a_{j}\otimes x\circ[b_{i},b_{j}]-\sum_{j}\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\circ}([x,b_{j}])\big)(\tau\otimes\mathrm{id})\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big)\\ &&+\sum_{j}\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\circ}([x,b_{j}])\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big),\\ B(2)&=&\sum_{i,j}(-a_{j}\otimes [b_{i},b_{j}]\otimes x\circ a_{i}-b_{j}\otimes b_{i}\circ a_{j}\otimes x\circ a_{i})\\ &=&\sum_{i,j}(-a_{j}\otimes [b_{i},b_{j}]\otimes x\circ a_{i}-a_{j}\otimes [a_{i},b_{j}]\otimes x\circ b_{i}+a_{j}\otimes [a_{i},b_{j}]\otimes x\circ b_{i}\\ &&-b_{j}\otimes b_{i}\circ a_{j}\otimes x\circ a_{i}-b_{j}\otimes a_{i}\circ a_{j}\otimes x\circ b_{i}\\ &&+b_{j}\otimes a_{i}\circ a_{j}\otimes x\circ b_{i}+a_{j}\otimes a_{i}\circ b_{j}\otimes x\circ b_{i}-a_{j}\otimes a_{i}\circ b_{j}\otimes x\circ b_{i})\\ &=&-\sum_{i,j}a_{j}\otimes b_{j}\circ a_{i}\otimes x\circ b_{i}+\sum_{j}\big(\mathrm{id}\otimes\mathrm{ad}(b_{j})\otimes\mathcal{L}_{\circ}(x)\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big)\\ &&-\sum_{j}\big(\mathrm{id}\otimes\mathcal{R}_{\circ}(a_{j})\otimes\mathcal{L}_{\circ}(x)\big)\Big(b_{j}\otimes\big(r+\tau(r)\big)\Big)+\sum_{j}\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(a_{j})\otimes\mathcal{L}_{\circ}(x)\big)\Big(b_{j}\otimes\big(r+\tau(r)\big)\Big),\\ B(3)&=&\sum_{i,j}(b_{i}\circ a_{j}\otimes b_{j}\otimes x\circ a_{i}+[b_{i},b_{j}]\otimes a_{j}\otimes x\circ a_{i})\\ &=&\sum_{i,j}(b_{i}\circ a_{j}\otimes b_{j}\otimes x\circ a_{i}+[b_{i},b_{j}]\otimes a_{j}\otimes x\circ a_{i}+[b_{i},a_{j}]\otimes b_{j}\otimes x\circ a_{i}-[b_{i},a_{j}]\otimes b_{j}\otimes x\circ a_{i})\\ &=&\sum_{i,j}(a_{j}\circ b_{i}\otimes b_{j}\otimes x\circ a_{i}+a_{j}\circ a_{i}\otimes b_{j}\otimes x\circ b_{i}-a_{j}\circ a_{i}\otimes b_{j}\otimes x\circ b_{i}\\ &&+[b_{i},b_{j}]\otimes 
a_{j}\otimes x\circ a_{i}+[b_{i},a_{j}]\otimes b_{j}\otimes x\circ a_{i})\\ &=&-\sum_{i,j}a_{i}\circ a_{j}\otimes b_{i}\otimes x\circ b_{j} +\sum_{j}\big(\mathcal{L}_{\circ}(a_{j})\otimes\mathrm{id}\otimes\mathcal{L}_{\circ}(x)\big)(\tau\otimes\mathrm{id})\Big(b_{j}\otimes\big(r+\tau(r)\big)\Big)\\ &&+\sum_{j}\big(\mathrm{ad}(b_{j})\otimes\mathrm{id}\otimes\mathcal{L}_{\circ}(x)\big)\Big(\big(r+\tau(r)\big)\otimes a_{j}\Big).\end{aligned}$$ Hence Eq. ([\[eq:defi:anti-pre-Lie coalgebras2\]](#eq:defi:anti-pre-Lie coalgebras2){reference-type="ref" reference="eq:defi:anti-pre-Lie coalgebras2"}) holds if and only if Eq. ([\[eq:pro:cob coalg2\]](#eq:pro:cob coalg2){reference-type="ref" reference="eq:pro:cob coalg2"}) holds. ([\[it:bb3\]](#it:bb3){reference-type="ref" reference="it:bb3"}). Let $x,y\in A$. Then we have $$\begin{aligned} &&(\mathrm{id}^{\otimes 2}-\tau)\Big(\delta(x\circ y)-\big(\mathcal{L}_{\circ}(x)\otimes \mathrm{id}\big)\delta(y)-\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(x)\big)\delta(y)+\big(\mathrm{id}\otimes\mathcal{R}_{\circ}(y)\big)\delta(x)\Big)\\ &&=\sum_{i}\big(a_{i}\otimes [x\circ y,b_{i}]-(x\circ y)\circ a_{i}\otimes b_{i}-[x\circ y,b_{i}]\otimes a_{i}+b_{i}\otimes(x\circ y)\circ a_{i}\\ &&\ \ -x\circ a_{i}\otimes [y,b_{i}]+x\circ(y\circ a_{i})\otimes b_{i}+[y,b_{i}]\otimes x\circ a_{i}-b_{i}\otimes x\circ(y\circ a_{i})\\ &&\ \ -a_{i}\otimes x\circ[y,b_{i}]+y\circ a_{i}\otimes x\circ b_{i}+x\circ[y,b_{i}]\otimes a_{i}-x\circ b_{i}\otimes y\circ a_{i}\\ &&\ \ +a_{i}\otimes [x,b_{i}]\circ y-x\circ a_{i}\otimes b_{i}\circ y-[x,b_{i}]\circ y\otimes a_{i}+b_{i}\circ y\otimes x\circ a_{i}\big)\\ &&=C(1)+C(2)+C(3),\end{aligned}$$ where $$\begin{aligned} C(1)&=&\sum_{i}\big(a_{i}\otimes [x\circ y,b_{i}]+b_{i}\otimes(x\circ y)\circ a_{i}-b_{i}\otimes x\circ(y\circ a_{i})-a_{i}\otimes x\circ[y,b_{i}]+a_{i}\otimes [x,b_{i}]\circ y\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie algebras1})}{=}&\sum_{i}\big(a_{i}\otimes (x\circ y)\circ b_{i}-a_{i}\otimes
x\circ(y\circ b_{i})+b_{i}\otimes (x\circ y)\circ a_{i}-b_{i}\otimes x\circ(y\circ a_{i})\big)\\ &=&\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(x\circ y)-\mathrm{id}\otimes\mathcal{L}_{\circ}(x)\mathcal{L}_{\circ}(y)\big)\big(r+\tau(r)\big),\\ C(2)&=&\sum_{i}\big(-(x\circ y)\circ a_{i}\otimes b_{i}-[x\circ y,b_{i}]\otimes a_{i}+x\circ(y\circ a_{i})\otimes b_{i}+x\circ[y,b_{i}]\otimes a_{i}-[x,b_{i}]\circ y\otimes a_{i}\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie algebras1})}{=}&\sum_{i}\big(-(x\circ y)\circ a_{i}\otimes b_{i}+x\circ(y\circ a_{i})\otimes b_{i}-(x\circ y)\circ b_{i}\otimes a_{i}+x\circ(y\circ b_{i})\otimes a_{i}\big)\\ &=&\big(\mathcal{L}_{\circ}(x)\mathcal{L}_{\circ}(y)\otimes \mathrm{id}-\mathcal{L}_{\circ}(x\circ y)\otimes \mathrm{id}\big)\big(r+\tau(r)\big),\\ C(3)&=&\sum_{i}(-x\circ a_{i}\otimes [y,b_{i}]+[y,b_{i}]\otimes x\circ a_{i}+y\circ a_{i}\otimes x\circ b_{i}-x\circ b_{i}\otimes y\circ a_{i}-x\circ a_{i}\otimes b_{i}\circ y\\ &&\hspace{1cm}+b_{i}\circ y\otimes x\circ a_{i})\\ &=&\sum_{i}(-x\circ a_{i}\otimes y\circ b_{i}+y\circ b_{i}\otimes x\circ a_{i}+y\circ a_{i}\otimes x\circ b_{i}-x\circ b_{i}\otimes y\circ a_{i})\\ &=&\big(\mathcal{L}_{\circ}(y)\otimes\mathcal{L}_{\circ}(x)-\mathcal{L}_{\circ}(x)\otimes\mathcal{L}_{\circ}(y)\big)\big(r+\tau(r)\big). \end{aligned}$$ Thus Eq. ([\[eq:defi:anti-pre-Lie bialgebra1\]](#eq:defi:anti-pre-Lie bialgebra1){reference-type="ref" reference="eq:defi:anti-pre-Lie bialgebra1"}) holds if and only if Eq. ([\[eq:pro:coboundary anti-pre-Lie bialgebras1\]](#eq:pro:coboundary anti-pre-Lie bialgebras1){reference-type="ref" reference="eq:pro:coboundary anti-pre-Lie bialgebras1"}) holds. $\Box$ *Proof of Proposition [Proposition 72](#pro:fff2){reference-type="ref" reference="pro:fff2"}.* ([\[it:1\]](#it:1){reference-type="ref" reference="it:1"}). Let $x\in A$.
Then we have $$\begin{aligned} &&2(\delta\otimes\mathrm{id})\Delta(x)-2(\tau\otimes\mathrm{id})(\delta\otimes\mathrm{id})\Delta(x)+(\mathrm{id}\otimes\delta)\Delta(x)-(\tau\otimes\mathrm{id})(\mathrm{id}\otimes\delta)\Delta(x)\\ &&=\sum_{i,j}\big(2a_{j}\otimes[x\cdot a_{i},b_{j}]\otimes b_{i}-2(x\cdot a_{i})\circ a_{j}\otimes b_{j}\otimes b_{i}-2a_{j}\otimes [a_{i},b_{j}]\otimes x\cdot b_{i}+2a_{i}\circ a_{j}\otimes b_{j}\otimes x\cdot b_{i}\\ &&\ \ -2[x\cdot a_{i},b_{j}]\otimes a_{j}\otimes b_{i}+2b_{j}\otimes (x\cdot a_{i})\circ a_{j}\otimes b_{i}+2[a_{i},b_{j}]\otimes a_{j}\otimes x\cdot b_{i}-2b_{j}\otimes a_{i}\circ a_{j}\otimes x\cdot b_{i}\\ &&\ \ +x\cdot a_{i}\otimes a_{j}\otimes[b_{i},b_{j}]-x\cdot a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}-a_{i}\otimes a_{j}\otimes[x\cdot b_{i}, b_{j}]+a_{i}\otimes(x\cdot b_{i})\circ a_{j}\otimes b_{j}\\ &&\ \ -a_{j}\otimes x\cdot a_{i}\otimes [b_{i},b_{j}]+b_{i}\circ a_{j}\otimes x\cdot a_{i}\otimes b_{j}+a_{j}\otimes a_{i}\otimes[x\cdot b_{i},b_{j}]-(x\cdot b_{i})\circ a_{j}\otimes a_{i}\otimes b_{j}\big)\\ &&=D(1)+D(2)+D(3), \end{aligned}$$ where $$\begin{aligned} D(1)&=&\sum_{i,j}\big(-2(x\cdot a_{i})\circ a_{j}\otimes b_{j}\otimes b_{i}-2[x\cdot a_{i},b_{j}]\otimes a_{j}\otimes b_{i}\\ &&+x\cdot a_{i}\otimes a_{j}\otimes[b_{i},b_{j}]-x\cdot a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}-(x\cdot b_{i})\circ a_{j}\otimes a_{i}\otimes b_{j}\big)\\ &=&\sum_{i,j}\big(-2(x\cdot a_{i})\circ a_{j}\otimes b_{j}\otimes b_{i}-2[x\cdot a_{i},b_{j}]\otimes a_{j}\otimes b_{i}-2[x\cdot a_{i}, a_{j}]\otimes b_{j}\otimes b_{i}\\ &&+2[x\cdot a_{i}, a_{j}]\otimes b_{j}\otimes b_{i}+x\cdot a_{i}\otimes a_{j}\otimes[b_{i},b_{j}]-x\cdot a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}\\ &&-(x\cdot a_{i})\circ a_{j}\otimes b_{i}\otimes b_{j}-(x\cdot b_{i})\circ a_{j}\otimes a_{i}\otimes b_{j}+(x\cdot a_{i})\circ a_{j}\otimes b_{i}\otimes b_{j}\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie Poisson2})}{=}&\sum_{i,j}\big(-x\cdot (a_{i}\circ a_{j})\otimes 
b_{i}\otimes b_{j} -x\cdot a_{i}\otimes b_{i}\circ a_{j}\otimes b_{j}+x\cdot a_{i}\otimes a_{j}\otimes[b_{i},b_{j}]\big)\\ &&+ \sum_{j}\bigg(-2\big(\mathrm{ad}(x\cdot a_{j})\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)-\big(\mathcal{R}_{\circ}(a_{j})\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)\bigg)\\ &=&-\big(\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\otimes\mathrm{id}\big)\textbf{T}(r)-\sum_{j}\big(2\mathrm{ad}(x\cdot a_{j})\otimes\mathrm{id}\otimes\mathrm{id}+\mathcal{R}_{\circ}(a_{j})\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big),\\ D(2)&=&\sum_{i,j}\big(2a_{j}\otimes[x\cdot a_{i},b_{j}]\otimes b_{i}+2b_{j}\otimes (x\cdot a_{i})\circ a_{j}\otimes b_{i}\\ &&\ \ +a_{i}\otimes(x\cdot b_{i})\circ a_{j}\otimes b_{j} -a_{j}\otimes x\cdot a_{i}\otimes [b_{i},b_{j}]+b_{i}\circ a_{j}\otimes x\cdot a_{i}\otimes b_{j}\big)\\ &=&\sum_{i,j}\big(2b_{i}\otimes (x\cdot a_{j})\circ a_{i}\otimes b_{j}+2a_{i}\otimes (x\cdot a_{j})\circ b_{i}\otimes b_{j}-2a_{i}\otimes (x\cdot a_{j})\circ b_{i}\otimes b_{j}\\ &&\ \ +2a_{i}\otimes[x\cdot a_{j},b_{i}]\otimes b_{j}+a_{i}\otimes(x\cdot b_{i})\circ a_{j}\otimes b_{j}+a_{i}\otimes x\cdot a_{j}\otimes[b_{i},b_{j}]\\ &&\ \ +b_{i}\circ a_{j}\otimes x\cdot a_{i}\otimes b_{j}+a_{i}\circ a_{j}\otimes x\cdot b_{i}\otimes b_{j}-a_{i}\circ a_{j}\otimes x\cdot b_{i}\otimes b_{j}\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie Poisson2})}{=}&\sum_{i,j}\big(-a_{i}\otimes x\cdot(b_{i}\circ a_{j})\otimes b_{j}+a_{i}\otimes x\cdot a_{j}\otimes [b_{i},b_{j}]-a_{i}\circ a_{j}\otimes x\cdot b_{i}\otimes b_{j}\big)\\ &&+\sum_{j}\bigg(2\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(x\cdot a_{j})\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)+\big(\mathcal{R}_{\circ}(a_{j})\otimes\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)\bigg)\\ 
&=&-\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\big)\textbf{T}(r)+\sum_{j}\big(2\mathrm{id}\otimes\mathcal{L}_{\circ}(x\cdot a_{j})\otimes\mathrm{id}+\mathcal{R}_{\circ}(a_{j})\otimes\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big),\\ D(3)&=&\sum_{i,j}(-2a_{j}\otimes [a_{i},b_{j}]\otimes x\cdot b_{i}+2a_{i}\circ a_{j}\otimes b_{j}\otimes x\cdot b_{i}+2[a_{i},b_{j}]\otimes a_{j}\otimes x\cdot b_{i}\\ &&\ \ -2b_{j}\otimes a_{i}\circ a_{j}\otimes x\cdot b_{i}-a_{i}\otimes a_{j}\otimes[x\cdot b_{i},b_{j}]+a_{j}\otimes a_{i}\otimes[x\cdot b_{i},b_{j}])\\ &=&\sum_{i,j}( -2a_{i}\otimes[a_{j},b_{i}]\otimes x\cdot b_{j}+2a_{j}\circ a_{i}\otimes b_{i}\otimes x\cdot b_{j}\\ &&\ \ +2[a_{j},b_{i}]\otimes a_{i}\otimes x\cdot b_{j}+2[a_{j},a_{i}]\otimes b_{i}\otimes x\cdot b_{j}-2[a_{j},a_{i}]\otimes b_{i}\otimes x\cdot b_{j}\\ &&\ \ -2b_{i}\otimes a_{j}\circ a_{i}\otimes x\cdot b_{j}-2a_{i}\otimes a_{j}\circ b_{i}\otimes x\cdot b_{j} +2a_{i}\otimes a_{j}\circ b_{i}\otimes x\cdot b_{j}\\ &&\ \ -a_{i}\otimes a_{j}\otimes[x\cdot b_{i},b_{j}] -a_{i}\otimes a_{j}\otimes[ b_{i},x\cdot b_{j}])\\ &\overset{(\ref{eq:defi:transposed Poisson algebra})}{=}&\sum_{i,j}(2a_{i}\circ a_{j}\otimes b_{i}\otimes x\cdot b_{j}+2a_{i}\otimes b_{i}\circ a_{j}\otimes x\cdot b_{j}-2a_{i}\otimes a_{j}\otimes x\cdot [b_{i},b_{j}])\\ &&+\sum_{j}\bigg(2\big(\mathrm{ad}(a_{j})\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)-2\big(\mathrm{id}\otimes\mathcal{L}_{\circ}(a_{j})\otimes\mathcal{L}_{\cdot}(x)\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big)\bigg)\\ &=&2\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\big)\textbf{T}(r)+\sum_{j}2\big(\mathrm{ad}(a_{j})\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)-\mathrm{id}\otimes\mathcal{L}_{\circ}(a_{j})\otimes\mathcal{L}_{\cdot}(x)\big)\Big(\big(r+\tau(r)\big)\otimes b_{j}\Big). \end{aligned}$$ Hence Eq.
([\[eq:defi:anti-pre-Lie Poisson coalg1\]](#eq:defi:anti-pre-Lie Poisson coalg1){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson coalg1"}) holds if and only if Eq. ([\[eq:TPBA1\]](#eq:TPBA1){reference-type="ref" reference="eq:TPBA1"}) holds. ([\[it:2\]](#it:2){reference-type="ref" reference="it:2"}). Let $x\in A$. Then we have $$\begin{aligned} &&2(\mathrm{id}\otimes\Delta)\delta(x)-(\mathrm{id}\otimes\tau)(\Delta\otimes\mathrm{id})\delta(x)+(\delta\otimes\mathrm{id})\Delta(x)\\ &&=\sum_{i,j}\big(2a_{i}\otimes[x,b_{i}]\cdot a_{j}\otimes b_{j}-2a_{i}\otimes a_{j}\otimes[x,b_{i}]\cdot b_{j} -2x\circ a_{i}\otimes b_{i}\cdot a_{j}\otimes b_{j}+2x\circ a_{i}\otimes a_{j}\otimes b_{i}\cdot b_{j} \\ &&\ \ -a_{i}\cdot a_{j}\otimes[x,b_{i}]\otimes b_{j}+a_{j}\otimes[x,b_{i}]\otimes a_{i}\cdot b_{j}+(x\circ a_{i})\cdot a_{j}\otimes b_{i}\otimes b_{j}-a_{j}\otimes b_{i}\otimes(x\circ a_{i})\cdot b_{j}\\ &&\ \ -a_{j}\otimes[x\cdot a_{i},b_{j}]\otimes b_{i}+(x\cdot a_{i})\circ a_{j}\otimes b_{j}\otimes b_{i}+a_{j}\otimes [a_{i},b_{j}]\otimes x\cdot b_{i}-a_{i}\circ a_{j}\otimes b_{j}\otimes x\cdot b_{i}\big)\\ &&=E(1)+E(2)+E(3),\end{aligned}$$ where $$\begin{aligned} E(1)&=&\sum_{i,j}\big(-2x\circ a_{i}\otimes b_{i}\cdot a_{j}\otimes b_{j}+2x\circ a_{i}\otimes a_{j}\otimes b_{i}\cdot b_{j}+(x\circ a_{i})\cdot a_{j}\otimes b_{i}\otimes b_{j}\\ &\mbox{}&\ \ +(x\circ a_{j})\cdot a_{i}\otimes b_{i}\otimes b_{j}\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie Poisson2})}{=}&\sum_{i,j}\big(-2x\circ a_{i}\otimes b_{i}\cdot a_{j}\otimes b_{j}+2x\circ a_{i}\otimes a_{j}\otimes b_{i}\cdot b_{j}+2x\circ (a_{i}\cdot a_{j})\otimes b_{i}\otimes b_{j}\big)\\ &=&2\big(\mathcal{L}_{\circ}(x)\otimes\mathrm{id}\otimes\mathrm{id}\big)\textbf{A}(r),\\ E(2)&=&\sum_{i,j}(2a_{i}\otimes[x,b_{i}]\cdot a_{j}\otimes b_{j}-a_{i}\cdot a_{j}\otimes [x,b_{i}]\otimes b_{j}+a_{j}\otimes[x,b_{i}]\otimes a_{i}\cdot b_{j}-a_{j}\otimes[x\cdot a_{i},b_{j}]\otimes b_{i})\\ 
&=&\sum_{i,j}(2a_{i}\otimes[x,b_{i}]\cdot a_{j}\otimes b_{j}-a_{i}\cdot a_{j}\otimes [x,b_{i}]\otimes b_{j}+a_{j}\otimes[x,b_{i}]\otimes a_{i}\cdot b_{j}\\ &&\ \ +a_{j}\otimes[x,a_{i}]\otimes b_{i}\cdot b_{j}-a_{i}\otimes[x,a_{j}]\otimes b_{i}\cdot b_{j}-a_{i}\otimes[x\cdot a_{j},b_{i}]\otimes b_{j})\\ &\overset{(\ref{eq:defi:transposed Poisson algebra})}{=}&\sum_{i,j}(a_{i}\otimes[x,a_{j}\cdot b_{i}]\otimes b_{j}-a_{i}\cdot a_{j}\otimes[x,b_{i}]\otimes b_{j}-a_{i}\otimes[x,a_{j}]\otimes b_{i}\cdot b_{j})\\ &&\ \ +\sum_{j}\big(\mathrm{id}\otimes\mathrm{ad}(x)\otimes\mathcal{L}_{\cdot}(b_{j})\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big)\\ &=&-\big(\mathrm{id}\otimes\mathrm{ad}(x)\otimes\mathrm{id}\big)\textbf{A}(r)+\sum_{j}\big(\mathrm{id}\otimes\mathrm{ad}(x)\otimes\mathcal{L}_{\cdot}(b_{j})\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big),\\ E(3)&=&\sum_{i,j}\big(-2a_{i}\otimes a_{j}\otimes[x,b_{i}]\cdot b_{j}-a_{j}\otimes b_{i}\otimes(x\circ a_{i})\cdot b_{j}+a_{j}\otimes [a_{i},b_{j}]\otimes x\cdot b_{i}-a_{i}\circ a_{j}\otimes b_{j}\otimes x\cdot b_{i}\big)\\ &=&\sum_{i,j}\big(-2a_{i}\otimes a_{j}\otimes[x,b_{i}]\cdot b_{j}-a_{j}\otimes b_{i}\otimes(x\circ a_{i})\cdot b_{j}-a_{j}\otimes a_{i}\otimes(x\circ b_{i})\cdot b_{j}\\ &&\ \ +a_{j}\otimes a_{i}\otimes(x\circ b_{i})\cdot b_{j}+a_{j}\otimes[a_{i},b_{j}]\otimes x\cdot b_{i} +a_{j}\otimes[b_{i},b_{j}]\otimes x\cdot a_{i}-a_{j}\otimes[b_{i},b_{j}]\otimes x\cdot a_{i}\\ &&\ \ -a_{i}\circ a_{j}\otimes b_{j}\otimes x\cdot b_{i}\big)\\ &=&\sum_{i,j}\big(-2a_{i}\otimes a_{j}\otimes[x,b_{i}]\cdot b_{j}+a_{j}\otimes a_{i}\otimes(x\circ b_{i})\cdot b_{j}-a_{j}\otimes[b_{i},b_{j}]\otimes x\cdot a_{i}-a_{i}\circ a_{j}\otimes b_{j}\otimes x\cdot b_{i}\big)\\ &&\ \ 
-\sum_{j}\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(b_{j})\mathcal{L}_{\circ}(x)\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big)-\sum_{j}\big(\mathrm{id}\otimes\mathrm{ad}(b_{j})\otimes\mathcal{L}_{\cdot}(x)\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie Poisson2})}{=}&\sum_{i,j}\big(a_{i}\otimes a_{j}\otimes x\cdot(b_{i}\circ b_{j})+a_{i}\otimes b_{j}\otimes x\cdot(b_{i}\circ a_{j})-a_{i}\otimes b_{j}\otimes x\cdot(b_{i}\circ a_{j})\\ &&\ \ +a_{i}\otimes [b_{i},b_{j}]\otimes x\cdot a_{j}-a_{i}\circ a_{j}\otimes b_{j}\otimes x\cdot b_{i} \big)\\ &&\ \ -\sum_{j}\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(b_{j})\mathcal{L}_{\circ}(x)+\mathrm{id}\otimes\mathrm{ad}(b_{j})\otimes\mathcal{L}_{\cdot}(x)\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big)\\ &=&-\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\big)(\mathrm{id}\otimes\tau)\textbf{T}(r)\\ &&\ +\sum_{j}\big(\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(b_{j})-\mathrm{id}\otimes\mathrm{id}\otimes\mathcal{L}_{\cdot}(b_{j})\mathcal{L}_{\circ}(x)-\mathrm{id}\otimes\mathrm{ad}(b_{j})\otimes\mathcal{L}_{\cdot}(x)\big)\Big(a_{j}\otimes\big(r+\tau(r)\big)\Big).\end{aligned}$$ Hence Eq. ([\[eq:defi:anti-pre-Lie Poisson coalg2\]](#eq:defi:anti-pre-Lie Poisson coalg2){reference-type="ref" reference="eq:defi:anti-pre-Lie Poisson coalg2"}) holds if and only if Eq. ([\[eq:TPBA2\]](#eq:TPBA2){reference-type="ref" reference="eq:TPBA2"}) holds. ([\[it:3\]](#it:3){reference-type="ref" reference="it:3"}). Let $x,y\in A$. 
Then we have $$\begin{aligned} &&2\big(\mathcal{L}_{\circ}(x)\otimes\mathrm{id}\big)\Delta(y)-2\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(y)\big)\delta(x)+\delta(x\cdot y)+\big(\mathcal{L}_{\cdot}(y)\otimes\mathrm{id}\big)\delta(x)-\big(\mathrm{id}\otimes\mathrm{ad}(x)\big)\Delta(y)\\ &&=\sum_{i}\big(2x\circ(y\cdot a_{i})\otimes b_{i}-2x\circ a_{i}\otimes y\cdot b_{i} -2a_{i}\otimes y\cdot[x,b_{i}]+2x\circ a_{i}\otimes y\cdot b_{i}+a_{i}\otimes[x\cdot y, b_{i}]\\ &&\ \ -(x\cdot y)\circ a_{i}\otimes b_{i} +y\cdot a_{i}\otimes[x,b_{i}]-y\cdot(x\circ a_{i}) \otimes b_{i} -y\cdot a_{i}\otimes[x,b_{i}]+a_{i}\otimes[x,y\cdot b_{i}] \big)\overset{(\ref{eq:defi:anti-pre-Lie Poisson2})}{=}0.\end{aligned}$$ Thus Eq.([\[eq:Poisson bialg 1\]](#eq:Poisson bialg 1){reference-type="ref" reference="eq:Poisson bialg 1"}) holds automatically. ([\[it:4\]](#it:4){reference-type="ref" reference="it:4"}). It follows from a similar proof of Item ([\[it:3\]](#it:3){reference-type="ref" reference="it:3"}). ([\[it:5\]](#it:5){reference-type="ref" reference="it:5"}). Let $x,y\in A$. 
Then we have $$\begin{aligned} &&2\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(y)\big)\delta(x)-2\big(\mathcal{L}_{\circ}(x)\otimes\mathrm{id}\big)\Delta(y)+\Delta(x\circ y)+ {\big(\mathcal{R}_{\circ}(y)\otimes\mathrm{id}\big)\Delta(x)}+\tau\big(\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\big)\delta(y) \\ &&\hspace{5cm}-\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\big)\delta(y)\\ &&=\sum_{i}\big(2a_{i}\otimes y\cdot[x,b_{i}]-2x\circ a_{i}\otimes y\cdot b_{i}-2x\circ(y\cdot a_{i})\otimes b_{i}+2x\circ a_{i}\otimes y\cdot b_{i}\\ &&\ \ +(x\circ y)\cdot a_{i}\otimes b_{i}-a_{i}\otimes(x\circ y)\cdot b_{i} +(x\cdot a_{i})\circ y\otimes b_{i}-a_{i}\circ y\otimes x\cdot b_{i}\\ &&\ \ +[y,b_{i}]\otimes x\cdot a_{i}-b_{i}\otimes x\cdot(y\circ a_{i})-a_{i}\otimes x\cdot[y,b_{i}]+y\circ a_{i}\otimes x\cdot b_{i}\big)\\ &&=F(1)+F(2)+F(3)+F(4),\end{aligned}$$ where $$\begin{aligned} F(1)&=&\sum_{i}\big(-2x\circ(y\cdot a_{i})\otimes b_{i}+(x\circ y)\cdot a_{i}\otimes b_{i}+(x\cdot a_{i})\circ y\otimes b_{i}\big)\overset{(\ref{eq:defi:anti-pre-Lie Poisson2})}{=}0,\\ F(2)&=&\sum_{i}(-2x\circ a_{i}\otimes y\cdot b_{i}+2x\circ a_{i}\otimes y\cdot b_{i})=0,\\ F(3)&=&\sum_{i}(-a_{i}\circ y\otimes x\cdot b_{i}+[y,b_{i}]\otimes x\cdot a_{i}+y\circ a_{i}\otimes x\cdot b_{i})=\big(\mathrm{ad}(y)\otimes\mathcal{L}_{\cdot}(x)\big)\big(r+\tau(r)\big),\\ F(4)&=&\sum_{i}\big({2}a_{i}\otimes y\cdot[x,b_{i}]-a_{i}\otimes(x\circ y)\cdot b_{i}-b_{i}\otimes x\cdot (y\circ a_{i})-a_{i}\otimes x\cdot[y,b_{i}]\big)\\ %&=&\sum_{i}\big({2}a_{i}\otimes y\cdot[x,b_{i}]-a_{i}\otimes(x\circ y)\cdot b_{i}-b_{i}\otimes x\cdot (y\circ a_{i})\\ %&&\ \ -a_{i}\otimes x\cdot (y\circ b_{i})+a_{i}\otimes x\cdot (y\circ b_{i})-a_{i}\otimes x\cdot[y,b_{i}]\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie Poisson1})}{=}&-\big({\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(y)}\big)\big(r+\tau(r)\big).\end{aligned}$$ Hence Eq.([\[eq:Poisson bialg 3\]](#eq:Poisson bialg 3){reference-type="ref" reference="eq:Poisson 
bialg 3"}) holds if and only if Eq. ([\[eq:TPBA3\]](#eq:TPBA3){reference-type="ref" reference="eq:TPBA3"}) holds. ([\[it:6\]](#it:6){reference-type="ref" reference="it:6"}). Let $x,y\in A$. Then we have $$\begin{aligned} &&(\tau-\mathrm{id}^{\otimes 2})\Big(2\delta(x\cdot y)-\big(\mathcal{L}_{\cdot}(x)\otimes\mathrm{id}\big)\delta(y)-\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\big)\delta(y)-\big(\mathrm{id}\otimes\mathcal{R}_{\circ}(y)\big)\Delta(x)\Big)\\ &&=\sum_{i,j}\big( 2[x\cdot y,b_{i}]\otimes a_{i}-2b_{i}\otimes(x\cdot y)\circ a_{i}-2a_{i}\otimes[x\cdot y,b_{i}]+2(x\cdot y)\circ a_{i}\otimes b_{i}\\ &&\ \ +[y,b_{i}]\otimes x\cdot a_{i}+b_{i}\otimes x\cdot(y\circ a_{i})+x\cdot a_{i}\otimes [y,b_{i}]-x\cdot(y\circ a_{i})\otimes b_{i}\\ &&\ \ -x\cdot[y,b_{i}]\otimes a_{i}+x\cdot b_{i}\otimes y\circ a_{i}+a_{i}\otimes x\cdot[y,b_{i}]-y\circ a_{i}\otimes x\cdot b_{i}\\ &&\ \ -b_{i}\circ y\otimes x\cdot a_{i}+(x\cdot b_{i})\circ y\otimes a_{i}+x\cdot a_{i}\otimes b_{i}\circ y-a_{i}\otimes(x\cdot b_{i})\circ y\big)\\ &&=G(1)+G(2)+G(3)+G(4), \end{aligned}$$ where $$\begin{aligned} G(1)&=&\sum_{i}\big( 2[x\cdot y,b_{i}]\otimes a_{i}+2(x\cdot y)\circ a_{i}\otimes b_{i}-x\cdot(y\circ a_{i})\otimes b_{i}-x\cdot[y,b_{i}]\otimes a_{i}+(x\cdot b_{i})\circ y\otimes a_{i}\big)\\ &=&\sum_{i}\big(2(x\cdot y)\circ b_{i}\otimes a_{i}-2b_{i}\circ(x\cdot y)\otimes a_{i}+2(x\cdot y)\circ a_{i}\otimes b_{i}-x\cdot(y\circ a_{i})\otimes b_{i}\\ &&-x\cdot( y\circ b_{i})\otimes a_{i}+x\cdot( b_{i}\circ y)\otimes a_{i}+(x\cdot b_{i})\circ y\otimes a_{i}\big)\\ &\overset{(\ref{eq:defi:anti-pre-Lie Poisson2})}{=}&2\big(\mathcal{L}_{\circ}(x\cdot y)\otimes\mathrm{id}\big)\big(r+\tau(r)\big)-\big(\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(y)\otimes\mathrm{id}\big)\big(r+\tau(r)\big),\\ G(2)&=&\sum_{i}(x\cdot a_{i}\otimes[y,b_{i}]+x\cdot b_{i}\otimes y\circ a_{i}+x\cdot a_{i}\otimes b_{i}\circ y)=\big(\mathcal{L}_{\cdot}(x)\otimes\mathcal{L}_{\circ}(y)\big)\big(r+\tau(r)\big),\end{aligned}$$ 
and similarly $$\begin{aligned} G(3)&=&\sum_{i}(-[y,b_{i}]\otimes x\cdot a_{i}-y\circ a_{i}\otimes x\cdot b_{i} -b_{i}\circ y\otimes x\cdot a_{i})=-\big(\mathcal{L}_{\circ}(y)\otimes\mathcal{L}_{\cdot}(x)\big)\big(r+\tau(r)\big),\\ G(4)&=&\sum_{i}\big( -2b_{i}\otimes(x\cdot y)\circ a_{i}-2a_{i}\otimes[x\cdot y,b_{i}]+b_{i}\otimes x\cdot(y\circ a_{i}) +a_{i}\otimes x\cdot[y,b_{i}]-a_{i}\otimes(x\cdot b_{i})\circ y\big)\\ &=&\big(\mathrm{id}\otimes\mathcal{L}_{\cdot}(x)\mathcal{L}_{\circ}(y)-2\mathrm{id}\otimes\mathcal{L}_{\circ}(x\cdot y)\big)\big(r+\tau(r)\big). \end{aligned}$$ Hence Eq.([\[eq:Poisson bialg 4\]](#eq:Poisson bialg 4){reference-type="ref" reference="eq:Poisson bialg 4"}) holds if and only if Eq. ([\[eq:TPBA4\]](#eq:TPBA4){reference-type="ref" reference="eq:TPBA4"}) holds. $\Box$
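The identity invoked repeatedly above via (eq:defi:transposed Poisson algebra) is the compatibility condition of a transposed Poisson algebra, which in the usual convention reads $2x\cdot[y,z]=[x\cdot y,z]+[y,x\cdot z]$ for the commutative product $\cdot$ and the Lie bracket $[\cdot,\cdot]$. As an illustrative sanity check (not part of the proof), the bracket $[f,g]=fg'-gf'$ induced by the derivation $d/dt$ on a polynomial algebra satisfies it; in the sketch below, polynomials are coefficient lists and all helper names are ad hoc:

```python
def pmul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pderiv(p):
    """Formal derivative d/dt of a coefficient list."""
    return [k * a for k, a in enumerate(p)][1:] or [0]

def padd(*ps):
    """Sum of several coefficient lists of possibly different lengths."""
    n = max(len(p) for p in ps)
    return [sum(p[k] if k < len(p) else 0 for p in ps) for k in range(n)]

def pscale(c, p):
    return [c * a for a in p]

def bracket(f, g):
    # [f, g] = f g' - g f', the Lie bracket induced by the derivation d/dt
    return padd(pmul(f, pderiv(g)), pscale(-1, pmul(g, pderiv(f))))

def transposed_poisson_defect(x, y, z):
    # 2 x.[y,z] - [x.y, z] - [y, x.z]; vanishes identically for this bracket
    return padd(pscale(2, pmul(x, bracket(y, z))),
                pscale(-1, bracket(pmul(x, y), z)),
                pscale(-1, bracket(y, pmul(x, z))))

# x = 1 + t^2, y = -t + t^3, z = 5 + 2t
x, y, z = [1, 0, 1], [0, -1, 0, 1], [5, 2]
print(all(c == 0 for c in transposed_poisson_defect(x, y, z)))  # True
```

A short expansion confirms the cancellation in general: $2x(yz'-zy')-\big(xyz'-z(xy)'\big)-\big(y(xz)'-xzy'\big)=yzx'-yzx'=0$ after applying the Leibniz rule, so the defect is zero for all polynomial triples, not just the one tested.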
arxiv_math
{ "id": "2309.16174", "title": "A bialgebra theory for transposed Poisson algebras via anti-pre-Lie\n bialgebras and anti-pre-Lie-Poisson bialgebras", "authors": "Guilai Liu and Chengming Bai", "categories": "math.QA math-ph math.MP math.RA math.RT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We prove that the definitions of a $d$-dimensional pseudocharacter (or pseudorepresentation) given by Chenevier and V. Lafforgue agree over any ring. We also compare the scheme of Lafforgue's $G$-pseudocharacters of a group with its $G$-character variety. address: "CNRS et Unité De Mathématiques Pures Et Appliquées, ENS de Lyon, 69342 Lyon Cedex 07, France. `http://perso.ens-lyon.fr/sophie.morel/, sophie.morel@ens-lyon.fr.`" author: - Kathleen Emerson and Sophie Morel bibliography: - main.bib date: . title: Comparison of different definitions of pseudocharacters --- Pseudocharacters (also known as pseudorepresentations) were first introduced by Wiles ([@Wiles]) and Taylor ([@Taylor]) to study congruences modulo powers of prime numbers between representations of Galois groups. They defined congruence between representations as congruence between their characters, and this led them to the introduction of pseudocharacters, which are functions on a group whose properties mimic those of the character of a finite-dimensional representation. More precisely, let $\Gamma$ be a group, $A$ be a commutative unital ring and $d$ be a positive integer. 
A map $T:\Gamma\rightarrow A$ is a *$d$-dimensional pseudocharacter* if $d!\in A^\times$, $T(1)=d$, $T(\gamma_1\gamma_2)= T(\gamma_2\gamma_1)$ for all $\gamma_1,\gamma_2\in\Gamma$ and $T$ satisfies the $d$-dimensional *pseudocharacter identity*: for all $\gamma_1,\ldots,\gamma_{d+1}\in\Gamma$, $$\sum_{\sigma\in\mathfrak{S}_{d+1}}\mathop{\mathrm{sgn}}(\sigma)T^\sigma(\gamma_1,\ldots,\gamma_{d+1})=0,$$ where, if the decomposition into cycles of $\sigma\in\mathfrak{S}_{d+1}$ (including trivial cycles) is $$\sigma=(i_{1,1}i_{1,2}\ldots i_{1,n_1})\ldots(i_{k,1}i_{k,2}\ldots i_{k,n_k}),$$ then $$T^\sigma(\gamma_1,\ldots,\gamma_{d+1})=T(\gamma_{i_{1,1}}\gamma_{i_{1,2}}\ldots \gamma_{i_{1,n_1}})\ldots T(\gamma_{i_{k,1}}\gamma_{i_{k,2}}\ldots \gamma_{i_{k,n_k}}).$$ We denote by $\mathop{\mathrm{PChar}}_{\Gamma,d}(A)$ the set of $d$-dimensional pseudocharacters on $\Gamma$ with values in $A$. This is a contravariant functor from the category of commutative $\mathbb{Z}[1/d!]$-algebras to that of sets, and it is not hard to see that it is representable, unlike the functor sending $A$ to the set of equivalence classes of $d$-dimensional representations $\Gamma\rightarrow{\bf {GL}}_d(A)$. Moreover, if $A$ is an algebraically closed field (in which $d!$ is invertible), then Taylor has shown in [@Taylor Theorem 1] that the trace induces a bijection from the set of equivalence classes of $d$-dimensional *semi-simple* representations of $\Gamma$ over $A$ to $\mathop{\mathrm{PChar}}_{\Gamma,d}(A)$ and, if $k$ is a field of characteristic $0$, then Chenevier has shown in [@Chenevier2 Proposition 2.3] that $\mathop{\mathrm{PChar}}_{\Gamma,d,\mathop{\mathrm{Spec}}k}$ is naturally isomorphic to the ${\bf {GL}}_d$-character variety of $\Gamma$ over $\mathop{\mathrm{Spec}}k$. The original definition of a $d$-dimensional pseudocharacter does not work well in characteristic $p$ dividing $d!$, because a semi-simple $d$-dimensional representation is no longer determined by its trace. 
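To unpack the identity just defined: for $d=1$ it reduces to $T(\gamma_1)T(\gamma_2)-T(\gamma_1\gamma_2)=0$, i.e. $T$ is an ordinary multiplicative character, and for $d=2$ with $T=\mathrm{tr}$ it is the classical trace identity $\mathrm{tr}(A)\mathrm{tr}(B)\mathrm{tr}(C)-\mathrm{tr}(AB)\mathrm{tr}(C)-\mathrm{tr}(AC)\mathrm{tr}(B)-\mathrm{tr}(BC)\mathrm{tr}(A)+\mathrm{tr}(ABC)+\mathrm{tr}(ACB)=0$, valid for all $2\times 2$ matrices by the Cayley–Hamilton theorem. The following self-contained numerical check is purely illustrative (it is not from the paper):

```python
from itertools import permutations

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def cycles(perm):
    """Cycle decomposition (including fixed points) of a permutation of {0,...,n-1}."""
    seen, out = set(), []
    for i in range(len(perm)):
        if i not in seen:
            c, j = [], i
            while j not in seen:
                seen.add(j)
                c.append(j)
                j = perm[j]
            out.append(c)
    return out

def pseudocharacter_sum(T, mats):
    """Sum over sigma in S_{d+1} of sgn(sigma) * T^sigma(g_1,...,g_{d+1}),
    where T^sigma is the product of T over the cycles of sigma."""
    n = len(mats[0])
    identity = [[int(i == j) for j in range(n)] for i in range(n)]
    total = 0
    for perm in permutations(range(len(mats))):
        cyc = cycles(perm)
        sgn = (-1) ** sum(len(c) - 1 for c in cyc)
        term = 1
        for c in cyc:
            prod = identity
            for j in c:
                prod = matmul(prod, mats[j])
            term *= T(prod)
        total += sgn * term
    return total

# T = trace of 2x2 integer matrices satisfies the d = 2 identity exactly:
A = [[1, 2], [3, 4]]
B = [[0, -1], [1, 1]]
C = [[2, 0], [5, -3]]
print(pseudocharacter_sum(trace, [A, B, C]))  # 0
```

Because the arithmetic is exact over the integers, the output is exactly zero, matching the fact that the trace of a $d$-dimensional representation is always a $d$-dimensional pseudocharacter.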
However, it is determined if we know all the coefficients of its characteristic polynomial, and Chenevier ([@Chenevier1]) came up with a compact way of packaging all that data in his theory of determinants. Determinants make sense over an arbitrary ring $A$ without the requirement that $d!\in A^\times$, and the functor $\mathop{\mathrm{Det}}_{\Gamma,d}$ that they define is representable. Moreover, if $\rho:\Gamma\rightarrow{\bf {GL}}_d(A)$ is a morphism, then it defines a determinant $D_\rho\in\mathop{\mathrm{Det}}_{\Gamma,d}(A)$. Chenevier shows that, if $k$ is an algebraically closed field, then $\rho\mapsto D_\rho$ is a bijection from the set of equivalence classes of $d$-dimensional semi-simple representations of $\Gamma$ over $k$ to $\mathop{\mathrm{Det}}_{\Gamma,d}(k)$. However, it is not known whether the determinant functor $\mathop{\mathrm{Det}}_{\Gamma,d}$ is isomorphic to the ${\bf {GL}}_d$-character variety of $\Gamma$. Vincent Lafforgue also gave a very general definition of pseudocharacters, which this time makes sense for representations with values in any affine group scheme $G$ over a base ring $C$. The definition of his pseudocharacters is less compact, as it requires data for every invariant function on any power of $G$. Again, we obtain a contravariant functor $\mathop{\mathrm{LPC}}_{\Gamma,G}$ on the category of commutative $C$-algebras, which turns out to be representable by an affine scheme. Also, if $\rho:\Gamma\rightarrow G(A)$ is a morphism, then it defines a Lafforgue pseudocharacter $\Theta_\rho\in\mathop{\mathrm{LPC}}_{\Gamma,G}(A)$. 
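Concretely, for $G={\bf {GL}}_d$ the invariant functions on $G^n$ include the traces of words in the $n$ matrix arguments (by classical invariant theory), and $\Theta_\rho$ records the values of all such functions on $(\rho(\gamma_1),\ldots,\rho(\gamma_n))$; these values are unchanged under simultaneous conjugation, which is why $\Theta_\rho$ only sees $\rho$ up to conjugacy. A quick illustrative check with exact integer arithmetic (the names below are ad hoc, not from the paper):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def word_trace(word, mats):
    """f(g_1,...,g_n) = tr(g_{word[0]} * ... * g_{word[-1]}), a conjugation-invariant
    function on GL_d^n evaluated at the tuple mats."""
    prod = mats[word[0]]
    for idx in word[1:]:
        prod = matmul(prod, mats[idx])
    return trace(prod)

h = [[1, 1], [0, 1]]       # invertible over the integers
h_inv = [[1, -1], [0, 1]]  # exact inverse of h
conj = lambda g: matmul(matmul(h, g), h_inv)  # simultaneous conjugation by h

g1 = [[2, -1], [3, 0]]
g2 = [[0, 1], [1, 4]]

word = (0, 1, 0, 0)        # the function tr(g1 g2 g1 g1) on GL_2 x GL_2
print(word_trace(word, [g1, g2]) == word_trace(word, [conj(g1), conj(g2)]))  # True
```

The equality holds exactly because $\mathrm{tr}(h w h^{-1})=\mathrm{tr}(w)$ for any word $w$ in the conjugated matrices, so every word-trace coordinate of $\Theta_\rho$ is constant on the conjugacy class of $\rho$.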
Lafforgue proves in [@Lafforgue1 Proposition 11.7] (see also [@Lafforgue2 Proposition 8.2]) that, if $A=k$ is an algebraically closed field, then $\rho\mapsto\Theta_\rho$ is a bijection from the set of $G(k)$-conjugacy classes of $G$-completely reducible morphisms $\Gamma\rightarrow G(k)$ to $\mathop{\mathrm{LPC}}_{\Gamma,G}(k)$; see also Theorem 4.5 of the paper [@BHKT] of Böckle, Harris, Khare and Thorne for the details of the proof in positive characteristic, where they also give the more general definition of a Lafforgue pseudocharacter (Lafforgue himself was working over a field). In Section 2, we construct a morphism of schemes from $\mathop{\mathrm{LPC}}_{\Gamma,G}$ to the $G$-character variety of $\Gamma$ compatible with the bijections of the previous paragraph. We prove that, for $G$ a geometrically reductive group scheme with connected geometric fibers over a Dedekind domain or a field, this morphism is an *adequate homeomorphism* in the sense of Alper (see [@Alper Definition 3.3.1]), that is, an integral universal homeomorphism which is a local isomorphism at all points with residue characteristic $0$; see Proposition [Proposition 1](#prop_charvar){reference-type="ref" reference="prop_charvar"}. We come back to the case $G={\bf {GL}}_d$. The main result of this note is that, over any ring, Lafforgue pseudocharacters of $\Gamma$ are in bijection with $d$-dimensional determinants. More precisely, we prove the following theorem: **Theorem 1**. *(See Theorem [Theorem 1](#thm_main){reference-type="ref" reference="thm_main"}.) There exists an isomorphism $\alpha:\mathop{\mathrm{LPC}}_{\Gamma,{\bf {GL}}_d}\rightarrow\mathop{\mathrm{Det}}_{\Gamma,d}$ such that, for any commutative ring $A$ and any morphism $\rho:\Gamma\rightarrow{\bf {GL}}_d(A)$, $\alpha$ sends $\Theta_\rho$ to $D_\rho$.* The plan of the paper is as follows. 
In Section [1](#Det){reference-type="ref" reference="Det"}, we recall Chenevier's definition of determinants and the properties of determinants that we will need; in particular, we give Vaccarino's formula for the universal determinant ring of a free monoid (see [@Vaccarino] and [@Chenevier2 Theorem 1.15]). In Section [2](#LPC){reference-type="ref" reference="LPC"}, we define Lafforgue pseudocharacters and give some of their basic properties, in particular the (easy) fact that they define a representable functor and the relationship with the character variety (Proposition [Proposition 1](#prop_charvar){reference-type="ref" reference="prop_charvar"}). We construct the morphism $\alpha$ in Section [3](#LPCDet){reference-type="ref" reference="LPCDet"} and, in Section [4](#DetLPC){reference-type="ref" reference="DetLPC"}, we prove that this morphism is invertible. The main ingredients are Donkin's ([@Donkin]) description of generators of $\mathscr{O}({\bf {GL}}_{d,\mathbb{Z}}^n)^{{\bf {GL}}_{d,\mathbb{Z}}}$ and the result of Vaccarino recalled in Section [1](#Det){reference-type="ref" reference="Det"}. We thank Julian Quast and Olivier Taïbi for useful discussions. # Determinants of algebras {#Det} In this section, we recall the definition of determinants by Chenevier ([@Chenevier2]) and some basic properties of the functor of determinants. Throughout this section, $A$ denotes a commutative unital ring, and $A$-algebras are always supposed to be associative and unital. Let $\mathscr{C}_A$ be the category of commutative $A$-algebras and $\mathbf{Set}$ be the category of sets. We first recall Roby's definition of a polynomial law ([@Roby], I.2), a homogeneous polynomial law ([@Roby], I.8) and a multiplicative polynomial law ([@Roby2], III.4). **Definition 1**. Let $M$ and $N$ be $A$-modules. A *polynomial law* $f:M\rightarrow N$ is a morphism of functors from $\mathscr{C}_A$ to $\mathbf{Set}$ between the functors $B\mapsto M\otimes_A B$ and $B\mapsto N\otimes_A B$. 
For every object $B$ of $\mathscr{C}_A$, we denote the resulting map $M\otimes_A B\rightarrow N\otimes_A B$ by $f_B$. **Definition 1**. Let $M$ and $N$ be $A$-modules, and let $f:M\rightarrow N$ be a polynomial law. Let $d\in\mathbb{N}$. - We say that $f$ is *homogeneous of degree $d$* if, for all $B\in\mathscr{C}_A$, $x\in M\otimes_A B$ and $b\in B$, we have $$f_B(bx)=b^d f_B(x).$$ - Suppose that $M$ and $N$ are $A$-algebras. We say that $f$ is *multiplicative* if, for every $B\in\mathscr{C}_A$, we have $f_B(1)=1$ and $f_B(xy)=f_B(x)f_B(y)$ if $x,y\in M\otimes_A B$. If $M$ and $N$ are $A$-algebras and $d\in\mathbb{N}$, we write $\mathscr{M}_A^d(M,N)$ for the set of multiplicative polynomial laws from $M$ to $N$ that are homogeneous of degree $d$. The following definition is due to Chenevier (see [@Chenevier2], 1.5). **Definition 1**. Let $R$ be an $A$-algebra. An *$A$-valued $d$-dimensional determinant on $R$* is an element of $\mathscr{M}_A^d(R,A)$. When $R=A[\Gamma]$ for some group $\Gamma$, we also talk about *$A$-valued $d$-dimensional determinants on $\Gamma$*. We write $\mathop{\mathrm{Det}}_{R,d}(A)$ (resp. $\mathop{\mathrm{Det}}_{\Gamma,d}(A)$) for the set of $A$-valued $d$-dimensional determinants on $R$ (resp. $\Gamma$); so $\mathop{\mathrm{Det}}_{\Gamma,d}(A)=\mathop{\mathrm{Det}}_{A[\Gamma],d}(A)$. **Example 1**. - Let $X$ be a set, and write $A\{X\}$ for the free unital associative $A$-algebra on $X$. Any map $\rho:X\rightarrow M_d(A)$ gives rise to an $A$-valued $d$-dimensional determinant $D_\rho$ on $A\{X\}$ as follows. 
For any object $B$ of $\mathscr{C}_A$, the map $\rho$ induces a morphism of $B$-algebras $\rho_B:B\{X\}\rightarrow M_d(B)$, where $M_d$ is the scheme of $d\times d$ matrices; moreover, if $B\rightarrow B'$ is a morphism of commutative $A$-algebras, then we get a commutative diagram: $$\xymatrix{B\{X\}\ar[d]\ar[r]^-{\rho_B} & M_d(B)\ar[d] \\ B'\{X\}\ar[r]_-{\rho_{B'}} & M_d(B')}$$ We define $D_{\rho,B}:B\{X\}\rightarrow B$ by $D_{\rho,B}(\omega)=\det(\rho_B(\omega))$, where $\det$ is the usual determinant on $M_d(B)$. - Let $\Gamma$ be a group and $\rho:\Gamma\rightarrow{\bf {GL}}_d(A)$ be a morphism of groups. Then we get an $A$-valued $d$-dimensional determinant $D_\rho$ on $A[\Gamma]$ in a similar way: for every commutative $A$-algebra $B$ and every $x\in B[\Gamma]$, we set $D_{\rho,B}(x)=\det(\rho_B(x))$, where $\rho_B:B[\Gamma]\rightarrow M_d(B)$ sends an element $\sum_{i=1}^n b_i\gamma_i$ of $B[\Gamma]$, with $b_i\in B$ and $\gamma_i\in\Gamma$, to $\sum_{i=1}^n b_i\rho(\gamma_i)$. **Example 1**. Let $R$ be an $A$-algebra and $D\in\mathscr{M}_A^d(R,A)$. Following Chenevier ([@Chenevier2], 1.10), we can define polynomial laws $\Lambda_i:R\rightarrow A$, for $0\leq i\leq d$, as follows. For any commutative $A$-algebra $B$ and any $x\in R\otimes_A B$, set $$D_{B[T]}(T-x)=\sum_{i=0}^d\Lambda_{i,B}(x)T^{d-i}.$$ Note that $\Lambda_i$ is homogeneous of degree $i$. If $D=D_\rho$ for $\rho$ as in Example [Example 1](#ex_det){reference-type="ref" reference="ex_det"}(1) or (2), then $\Lambda_{i,B}(x)$ is the coefficient of the degree $d-i$ term in the characteristic polynomial of the matrix $\rho_B(x)\in M_d(B)$. **Definition 1**. Let $R$ be an $A$-algebra and $d\in\mathbb{N}$. The *determinant functor* $\mathop{\mathrm{Det}}_{R,d}:\mathscr{C}_A\rightarrow\mathbf{Set}$ is defined by $$\mathop{\mathrm{Det}}_{R,d}(B)=\mathscr{M}_B^d(R\otimes_A B,B)=\mathscr{M}_A^d(R,B)$$ (the equality $\mathscr{M}_B^d(R\otimes_A B,B)=\mathscr{M}_A^d(R,B)$ is proved in [@Chenevier2 3.4]). 
If $R=A[\Gamma]$ with $\Gamma$ a group, we also write $\mathop{\mathrm{Det}}_{\Gamma,d}$ instead of $\mathop{\mathrm{Det}}_{R,d}$. **Remark 1**. If $D_1\in\mathop{\mathrm{Det}}_{R,d_1}$ and $D_2\in\mathop{\mathrm{Det}}_{R,d_2}$, we can define $D_1\times D_2\in\mathop{\mathrm{Det}}_{R,d_1+d_2}$ as follows. For every commutative $A$-algebra $B$ and every $x\in R\otimes_A B$, take $$(D_1\times D_2)_B(x)=D_{1,B}(x)D_{2,B}(x).$$ If $\Gamma$ is a group, $\rho_1:\Gamma\rightarrow{\bf {GL}}_{d_1}(A)$, $\rho_2:\Gamma\rightarrow {\bf {GL}}_{d_2}(A)$ are representations and $D_1=D_{\rho_1}$, $D_2=D_{\rho_2}$, then $D_1\times D_2=D_{\rho_1\oplus\rho_2}$. **Example 1**. Let $\Gamma$ be a group, and let $X$ be the underlying set of $\Gamma$. Then the canonical surjective $A$-algebra morphism $A\{X\}\rightarrow A[\Gamma]$ gives rise to an injective morphism of functors $\mathop{\mathrm{Det}}_{\Gamma,d}\rightarrow\mathop{\mathrm{Det}}_{A\{X\},d}$. **Theorem 1** (Roby, see [@Roby] III.1 and [@Chenevier2] 1.6). *Let $R$ be an $A$-algebra and $d\in\mathbb{N}$. Then the functor $\mathop{\mathrm{Det}}_{R,d}$ is representable by the $A$-algebra $(\Gamma^d_A(R))^\mathrm{ab}$, where $\Gamma^d_A(R)$ is the $A$-algebra of divided powers of order $d$ relative to $A$ and $(-)^\mathrm{ab}$ denotes the abelianization.* See Roby's papers ([@Roby] III.1 and [@Roby2] II) for the definition of $\Gamma^d_A(M)$ if $M$ is an $A$-module, and the $A$-algebra structure on $\Gamma^d_A(R)$ if $R$ is an $A$-algebra. We will also write $\mathop{\mathrm{Det}}_{R,d}$ for the affine scheme representing the functor $\mathop{\mathrm{Det}}_{R,d}$. **Remark 1**. The scheme $\mathop{\mathrm{Det}}_{R,d}$ is a contravariant functor of the $A$-algebra $R$. 
Indeed, if $u:R\rightarrow S$ is a morphism of $A$-algebras, then, for every commutative $A$-algebra $B$ and every $D\in\mathop{\mathrm{Det}}_{S,d}(B)= \mathscr{M}_A^d(S,B)$, the maps $u^*(D)_C:R\otimes_A C\rightarrow B\otimes_A C$, $x\mapsto D_C((u\otimes\mathrm{id}_C)(x))$, for $C$ an object of $\mathscr{C}_A$, define an element of $\mathscr{M}_A^d(R,B)=\mathop{\mathrm{Det}}_{R,d}(B)$. In particular, the scheme $\mathop{\mathrm{Det}}_{A\{X\},d}$ (resp. $\mathop{\mathrm{Det}}_{\Gamma,d}$) is a contravariant functor of the set $X$ (resp. the group $\Gamma$). **Remark 1**. For every commutative $A$-algebra $B$, we have a natural morphism of schemes $\mathop{\mathrm{Det}}_{R\otimes_A B,d}\rightarrow\mathop{\mathrm{Det}}_{R,d}\times_{\mathop{\mathrm{Spec}}A} \mathop{\mathrm{Spec}}B$. This is an isomorphism by Theorem III.3 on page 262 of [@Roby]. Let $X$ be a set. We recall Vaccarino's construction of the universal ring of $\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}$, following Chenevier's presentation in [@Chenevier2 1.15]. Let $F_X(d)=\mathbb{Z}[x_{i,j},\ x\in X, 1\leq i,j\leq d]$ be the ring of polynomials in the variables $x_{i,j}$ for $x\in X$ and $i,j\in\{1,2,\ldots,d\}$. The *generic matrices representation* is the ring morphism $$\rho^\mathrm{univ}:\mathbb{Z}\{X\}\rightarrow M_d(F_X(d))$$ sending each $x\in X$ to the matrix $(x_{i,j})_{1\leq i,j\leq d}$. This defines a degree $d$ homogeneous multiplicative polynomial law $D_{\rho^\mathrm{univ}}:\mathbb{Z}\{X\}\rightarrow E_X(d)$, where $E_X(d)$ is the subring of $F_X(d)$ generated by the coefficients of the characteristic polynomials of the $\rho^\mathrm{univ}(p)$, $p\in\mathbb{Z}\{X\}$. **Theorem 1** (Vaccarino, see Theorem 1.15 of [@Chenevier2]). *The ring $E_X(d)$ and $D_{\rho^\mathrm{univ}}\in\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}(E_X(d))$ represent the functor $\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}$.* Using results of Donkin, we deduce the following corollary. **Corollary 1**. 
*The morphism $M_d^X\rightarrow\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}$ sending $\rho$ to $D_\rho$ (see Example [Example 1](#ex_det){reference-type="ref" reference="ex_det"}(1)) induces an isomorphism $\mathop{\mathrm{Spec}}(\mathscr{O}(M_d^X)^{{\bf {GL}}_{d}})\stackrel{\sim}{\rightarrow}\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}$.* *Proof.* Note that $M_d^X=\mathop{\mathrm{Spec}}(F_X(d))$, and that the morphism $M_d^X\rightarrow\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}$ of the statement corresponds to the multiplicative polynomial law $\mathbb{Z}\{X\}\stackrel{D_{\rho^\mathrm{univ}}}{\rightarrow}E_X(d)\subset F_X(d)$. On the other hand, by the results of Donkin on the generators of $\mathscr{O}(M_d^X)^{{\bf {GL}}_{d}}$ (see [@Donkin 3.1]), we have $E_X(d)=\mathscr{O}(M_d^X)^{{\bf {GL}}_{d}}$. The corollary follows. ◻ # Lafforgue's definition of pseudocharacters {#LPC} Let $A$ be a commutative unital ring and $G$ be an affine group scheme over $A$. As in Section [1](#Det){reference-type="ref" reference="Det"}, we denote by $\mathscr{C}_A$ the category of commutative unital $A$-algebras. If $B$ is a commutative $A$-algebra and $X$ is a set, we write $\mathscr{C}(X,B)$ for the $B$-algebra of functions $X\rightarrow B$. For every set $X$ (resp. integer $n\in\mathbb{N}$), we make $G$ act on the scheme $G^X$ (resp. $G^n$) by diagonal conjugation, and we denote by $\mathscr{O}(G^X)^G$ (resp. $\mathscr{O}(G^n)^G$) the $A$-algebra of $G$-invariant regular functions on $G^X$ (resp. $G^n$). **Definition 1** (See [@Lafforgue1] Proposition 11.7 and [@BHKT] Definition 4.1). Let $X$ be a set and $B$ be an object of $\mathscr{C}_A$. 
A *$B$-valued $G$-pseudocharacter for the set $X$* is a family of $A$-algebra morphisms $\Theta_n:\mathscr{O}(G^n)^G\rightarrow\mathscr{C}(X^n,B)$, for $n\geq 1$, satisfying the following condition: - For all $n,m\geq 1$, every map $\zeta:\{1,2,\ldots,m\}\rightarrow\{1,2,\ldots,n\}$ and every $f\in\mathscr{O}(G^m)^G$, if we define $f^\zeta\in\mathscr{O}(G^n)^G$ by $f^\zeta(g_1,\ldots,g_n)=f(g_{\zeta(1)},\ldots,g_{\zeta(m)})$, then we have $$\Theta_m(f)(x_{\zeta(1)},\ldots,x_{\zeta(m)})=\Theta_n(f^\zeta)(x_1,\ldots,x_n)$$ for all $x_1,\ldots,x_n\in X$. Suppose that $X$ is the underlying set of a group $\Gamma$. Then a *$B$-valued $G$-pseudocharacter for the group $\Gamma$* is a $B$-valued $G$-pseudocharacter for the set $X$ satisfying the following additional condition: - For all $n\geq 1$ and $f\in\mathscr{O}(G^n)^G$, if we define $\widehat{f}\in \mathscr{O}(G^{n+1})^G$ by $\widehat{f}(g_1,\ldots,g_{n+1})=f(g_1,\ldots,g_{n-1}, g_ng_{n+1})$, then we have $$\Theta_{n+1}(\widehat{f})(\gamma_1,\ldots,\gamma_{n+1})= \Theta_n(f)(\gamma_1,\ldots,\gamma_{n-1},\gamma_n\gamma_{n+1})$$ for all $\gamma_1,\ldots,\gamma_{n+1}\in\Gamma$. We denote by $\mathop{\mathrm{LPC}}^1_{X,G}$ (resp. $\mathop{\mathrm{LPC}}_{\Gamma,G}$) the functor $\mathscr{C}_A\rightarrow\mathbf{Set}$ sending $B$ to the set of $B$-valued $G$-pseudocharacters for the set $X$ (resp. for the group $\Gamma$). If $X$ is the underlying set of a group $\Gamma$, then $\mathop{\mathrm{LPC}}_{\Gamma,G}$ is a subfunctor of $\mathop{\mathrm{LPC}}^1_{X,G}$. **Example 1**. - If $X$ is a set and $\rho:X\rightarrow G(A)$ is a map, then we define an $A$-valued $G$-pseudocharacter $\Theta^1_\rho$ for the set $X$ by $$\Theta^1_{\rho,n}(f)(x_1,\ldots,x_n)=f(\rho(x_1),\ldots,\rho(x_n)),$$ for every $n\geq 1$, every $f\in\mathscr{O}(G^n)^G$ and all $x_1,\ldots,x_n\in X$. Note that $\Theta^1_\rho$ only depends on the conjugacy class of $\rho$. 
- Similarly, if $\Gamma$ is a group and $\rho:\Gamma\rightarrow G(A)$ is a morphism of groups, then we define an $A$-valued $G$-pseudocharacter $\Theta_\rho$ for the group $\Gamma$ by $$\Theta_{\rho,n}(f)(\gamma_1,\ldots,\gamma_n)=f(\rho(\gamma_1),\ldots, \rho(\gamma_n)),$$ for every $n\geq 1$, every $f\in\mathscr{O}(G^n)^G$ and all $\gamma_1,\ldots,\gamma_n \in\Gamma$. Again, $\Theta_\rho$ only depends on the conjugacy class of $\rho$. **Remark 1**. The functor $\mathop{\mathrm{LPC}}^1_{X,G}$ depends contravariantly on the set $X$. Indeed, if $u:X\rightarrow Y$ is a map, then, for every commutative $A$-algebra $B$ and every $\Theta=(\Theta_n)_{n\geq 1}\in\mathop{\mathrm{LPC}}^1_{Y,G}(B)$, the family $u^*(\Theta)=(u^*(\Theta)_n)_{n\geq 1}$ defined by $$u^*(\Theta)_n(f)(x_1,\ldots,x_n)=\Theta_n(f)(u(x_1),\ldots,u(x_n))$$ for $n\geq 1$, $f\in\mathscr{O}(G^n)^G$ and $x_1,\ldots,x_n\in X$ is a $B$-valued $G$-pseudocharacter for the set $X$. Similarly, the functor $\mathop{\mathrm{LPC}}_{\Gamma,G}$ depends contravariantly on the group $\Gamma$. If $X$ is a set, $x_1,\ldots,x_n\in X$ and $f\in\mathscr{O}(G^n)$, we define a regular function $f_{x_1,\ldots,x_n}\in\mathscr{O}(G^X)$ by $$f_{x_1,\ldots,x_n}((g_x)_{x\in X})=f(g_{x_1},\ldots,g_{x_n}).$$ Note that $f\in\mathscr{O}(G^n)^G$ if and only if $f_{x_1,\ldots,x_n}\in\mathscr{O}(G^X)^G$. **Proposition 1**. *Let $X$ be a set and let $R^1_{X,G}=\mathscr{O}(G^X)^G$. Consider the element $\Theta^\mathrm{univ}$ of $\mathop{\mathrm{LPC}}^1_{X,G}(R^1_{X,G})$ defined by $$\Theta^\mathrm{univ}_n(f)(x_1,\ldots,x_n)=f_{x_1,\ldots,x_n}\in R^1_{X,G},$$ for every $n\geq 1$, every $f\in\mathscr{O}(G^n)^G$ and all $x_1,\ldots,x_n\in X$. 
Then $R^1_{X,G}$ represents the functor $\mathop{\mathrm{LPC}}^1_{X,G}$, and $\Theta^\mathrm{univ}\in\mathop{\mathrm{LPC}}^1_{X,G} (R^1_{X,G})$ is the universal element.* *Proof.* The morphism of functors $\mathop{\mathrm{Spec}}(R^1_{X,G})\rightarrow\mathop{\mathrm{LPC}}^1_{X,G}$ corresponding to $\Theta^\mathrm{univ}$ sends a morphism of rings $u:R^1_{X,G}\rightarrow B$ to the $G$-pseudocharacter $\Theta_u$ defined by $$\Theta_{u,n}(f)(x_1,\ldots,x_n)=u(\Theta^\mathrm{univ}_n(f)(x_1,\ldots,x_n)),$$ for $n\geq 1$, $f\in\mathscr{O}(G^n)^G$ and $x_1,\ldots,x_n\in X$. We check that it is an isomorphism. The central point is that, by definition of the product $G^X$, we have $\mathscr{O}(G^X)=\varinjlim_{Y\subset X\ \mathrm{finite}}\mathscr{O}(G^Y)$; in particular, every element of $\mathscr{O}(G^X)^G$ is of the form $f_{x_1,\ldots,x_n}$, for some $n\geq 1$, $x_1,\ldots,x_n\in X$ and $f\in\mathscr{O}(G^n)^G$. Let $B$ be a commutative $A$-algebra. Let $u,v:R^1_{X,G}\rightarrow B$ be two morphisms of $A$-algebras such that $\Theta_u=\Theta_v$. Let $h\in R^1_{X,G}$, and choose an integer $n\geq 1$, $x_1,\ldots,x_n\in X$ and $f\in\mathscr{O}(G^n)^G$ such that $h=f_{x_1,\ldots,x_n}$. Then $$u(h)=\Theta_{u,n}(f)(x_1,\ldots,x_n)=\Theta_{v,n}(f)(x_1,\ldots,x_n)=v(h).$$ This proves that $u=v$. Now let $\Theta\in\mathop{\mathrm{LPC}}^1_{X,G}(B)$; we want to find a morphism of $A$-algebras $w:R^1_{X,G}\rightarrow B$ such that $\Theta=\Theta_w$. Let $Y$ be a finite subset of $X$. 
If $\zeta:Y\stackrel{\sim}{\rightarrow}\{1,2,\ldots,n\}$ is a bijection, then we get an isomorphism of $A$-algebras $\mathscr{O}(G^Y)^G\stackrel{\sim}{\rightarrow}\mathscr{O}(G^n)^G$ sending $h\in\mathscr{O}(G^Y)^G$ to the regular function $h^\zeta:(g_1,\ldots,g_n)\mapsto h((g_{\zeta(y)})_{y\in Y})$, and we define $w_Y:\mathscr{O}(G^Y)^G\rightarrow B$ by $$w_Y(h)=\Theta_n(h^\zeta)(\zeta^{-1}(1),\ldots,\zeta^{-1}(n)).$$ By condition [\[LPC1\]](#LPC1){reference-type="ref" reference="LPC1"}, this does not depend on the choice of $\zeta$ and, if $Y\subset Z$ are finite subsets of $X$, then $w_{Z\mid\mathscr{O}(G^Y)^G}=w_Y$. So the family $(w_Y)_{Y\subset X\ \mathrm{finite}}$ defines a morphism of $A$-algebras $w:R^1_{X,G}\rightarrow B$, and it follows immediately from the definition of $\Theta^\mathrm{univ}$ that we have $\Theta=\Theta_w$. ◻ Let $\Gamma$ be a group. For all $\gamma,\delta\in\Gamma$, we define a $G$-equivariant morphism of $A$-modules $\varphi_{\gamma,\delta}:\mathscr{O}(G\times G^\Gamma)\rightarrow\mathscr{O}(G^\Gamma)$ by $$\varphi_{\gamma,\delta}(f)((g_\alpha)_{\alpha\in\Gamma})=f(g_{\gamma\delta}, (g_\alpha)_{\alpha\in\Gamma})-f(g_\gamma g_\delta,(g_\alpha)_{\alpha\in\Gamma}).$$ We have a morphism of $A$-algebras $\mathscr{O}(G^\Gamma)\rightarrow\mathscr{O}(G\times G^\Gamma)$ induced by the projection of $G\times G^\Gamma$ on its second factor, and this morphism is equivariant for the action of $G$ by diagonal conjugation. The map $\varphi_{\gamma,\delta}$ becomes $\mathscr{O}(G^\Gamma)$-linear for this action of $\mathscr{O}(G^\Gamma)$ on $\mathscr{O}(G\times G^\Gamma)$. In particular, $\varphi_{\gamma,\delta}(\mathscr{O}(G\times G^\Gamma))$ (resp. $\varphi_{\gamma,\delta}(\mathscr{O}(G\times G^\Gamma)^G)$) is an ideal of $\mathscr{O}(G^\Gamma)$ (resp. $\mathscr{O}(G^\Gamma)^G$). **Proposition 1**. 1. *Let $J_{\Gamma,G}\subset\mathscr{O}(G^\Gamma)$ be the sum of the images of the $\varphi_{\gamma,\delta}$, for all $\gamma,\delta\in\Gamma$. 
Then $J_{\Gamma,G}$ is an ideal of $\mathscr{O}(G^\Gamma)$, and the closed subscheme $\mathop{\mathrm{Spec}}(\mathscr{O}(G^\Gamma)/J_{\Gamma,G}) \subset\mathop{\mathrm{Spec}}(\mathscr{O}(G^\Gamma))=G^\Gamma$ sends any commutative $A$-algebra $B$ to the set of maps $\rho:\Gamma\rightarrow G(B)$ that are morphisms of groups.* 2. *Let $I_{\Gamma,G}\subset J_{\Gamma,G}^G$ be the sum of the $\varphi_{\gamma,\delta}(\mathscr{O}(G\times G^\Gamma)^G)$, for all $\gamma,\delta\in\Gamma$. Then $I_{\Gamma,G}$ is an ideal of $R^1_{\Gamma,G}=\mathscr{O}(G^\Gamma)^G$, and, if we set $R_{\Gamma,G}=R^1_{\Gamma,G}/I_{\Gamma,G}$, then the isomorphism $\mathop{\mathrm{Spec}}(R^1_{\Gamma,G})\stackrel{\sim}{\rightarrow}\mathop{\mathrm{LPC}}^1_{\Gamma,G}$ of Proposition [Proposition 1](#prop_rep_LPC1){reference-type="ref" reference="prop_rep_LPC1"} induces an isomorphism between the closed subscheme $\mathop{\mathrm{Spec}}(R_{\Gamma,G})$ of $\mathop{\mathrm{Spec}}(R^1_{\Gamma,G})$ and $\mathop{\mathrm{LPC}}_{\Gamma,G} \subset \mathop{\mathrm{LPC}}^1_{\Gamma,G}$.* *Proof.* We already know that $J_{\Gamma,G}$ and $I_{\Gamma,G}$ are ideals, because they are sums of ideals. Let $B$ be a commutative $A$-algebra and $\rho:\Gamma\rightarrow G(B)$ be a map; this corresponds to a morphism of $A$-algebras $u:\mathscr{O}(G^\Gamma)\rightarrow B$, and we have $$u(f)=f((\rho(\gamma))_{\gamma\in\Gamma})$$ for every $f\in\mathscr{O}(G^\Gamma)$. We want to prove that $\rho$ is a morphism of groups if and only if $u(J_{\Gamma,G})=0$. Suppose that $\rho$ is a morphism of groups. Let $\gamma,\delta\in\Gamma$ and $f\in\mathscr{O}(G\times G^\Gamma)$. Then $$\begin{aligned} u(\varphi_{\gamma,\delta}(f))= f(\rho(\gamma\delta),(\rho(\alpha))_{\alpha\in\Gamma})- f(\rho(\gamma)\rho(\delta),(\rho(\alpha))_{\alpha\in\Gamma})=0\end{aligned}$$ as $\rho$ is a morphism. This shows that $J_{\Gamma,G}\subset\mathop{\mathrm{Ker}}u$. Conversely, suppose that $J_{\Gamma,G}\subset\mathop{\mathrm{Ker}}u$. Let $\gamma,\delta\in\Gamma$.
Then, for every $f\in\mathscr{O}(G)$, viewing $f$ as an element of $\mathscr{O}(G\times G^\Gamma)$ via the first projection, we have $$0=u(\varphi_{\gamma,\delta}(f))=f(\rho(\gamma\delta))- f(\rho(\gamma)\rho(\delta)),$$ hence $\rho(\gamma\delta)=\rho(\gamma)\rho(\delta)$. This finishes the proof of (i). We now prove the second statement of (ii). Let $B$ be a commutative $A$-algebra, let $\Theta\in\mathop{\mathrm{LPC}}^1_{\Gamma,G}(B)$, and let $u:R^1_{\Gamma,G}\rightarrow B$ be the morphism of $A$-algebras corresponding to $\Theta$. Suppose that $\Theta$ satisfies condition [\[LPC2\]](#LPC2){reference-type="ref" reference="LPC2"}. Let $\gamma,\delta\in \Gamma$ and $f\in\mathscr{O}(G\times G^\Gamma)^G$. Choose a finite subset $\{\gamma_1,\ldots,\gamma_n\}$ of $\Gamma$ and $h\in\mathscr{O}(G^{n+1})^G$ such that $$f(g,(g_\alpha)_{\alpha\in\Gamma})=h(g_{\gamma_1},\ldots,g_{\gamma_n},g).$$ Then we have $$f(g_\gamma g_\delta,(g_\alpha)_{\alpha\in\Gamma})=\widehat{h}(g_{\gamma_1},\ldots,g_{\gamma_n},g_\gamma,g_\delta),$$ so $$\begin{aligned} u(\varphi_{\gamma,\delta}(f)) = \Theta_{n+1}(h)(\gamma_1,\ldots,\gamma_n,\gamma\delta)- \Theta_{n+2}(\widehat{h})(\gamma_1,\ldots,\gamma_n,\gamma,\delta)=0.\end{aligned}$$ This proves that $I_{\Gamma,G}\subset\mathop{\mathrm{Ker}}u$. Conversely, suppose that $I_{\Gamma,G}\subset\mathop{\mathrm{Ker}}u$. Let $n\geq 1$, $h\in\mathscr{O}(G^n)^G$ and $\gamma_1,\ldots,\gamma_{n+1}\in\Gamma$. Define $f\in\mathscr{O}(G\times G^\Gamma)^G$ by $$f(g,(g_\alpha)_{\alpha\in\Gamma})=h(g_{\gamma_1},\ldots,g_{\gamma_{n-1}},g).$$ Then $$0=u(\varphi_{\gamma_n,\gamma_{n+1}}(f))= \Theta_n(h)(\gamma_1,\ldots,\gamma_{n-1},\gamma_n\gamma_{n+1})- \Theta_{n+1}(\widehat{h})(\gamma_1,\ldots,\gamma_{n-1},\gamma_n,\gamma_{n+1}).$$ This implies that $\Theta$ satisfies condition [\[LPC2\]](#LPC2){reference-type="ref" reference="LPC2"}. ◻ Next we discuss the behavior of the functors $\mathop{\mathrm{LPC}}^1_{X,G}$ and $\mathop{\mathrm{LPC}}_{\Gamma,G}$ under change of the base ring $A$.
We will use the notion of *adequate homeomorphism* defined by Alper (see [@Alper Definition 3.3.1]): a morphism of schemes is an adequate homeomorphism if it is integral, a universal homeomorphism and a local isomorphism at all points whose residue field is of characteristic $0$. We will also use Alper's notion of *geometrically reductive group schemes* (see [@Alper Definition 9.1.1]), which generalizes that of reductive group schemes. In particular, if $G$ is an affine smooth algebraic group over a field $k$, then it is geometrically reductive if and only if it is reductive (Lemma 9.2.8 of [@Alper]), and, if $G\rightarrow S$ is a smooth group scheme with connected fibers, then it is a geometrically reductive group scheme if and only if it is reductive (Theorem 9.7.5 of [@Alper]). Fix a commutative ring $A$ and a flat affine group scheme $G$ over $A$. Let $B$ be a commutative $A$-algebra. For every $A$-module $V$ with an action of $G$, we have a canonical morphism $V^G\otimes_A B\rightarrow (V\otimes_A B)^{G_B}$. As $\mathscr{O}(G^X)\otimes_A B=\mathscr{O}(G_B^X)$ for every set $X$, we deduce that $J_{\Gamma,G_B}=J_{\Gamma,G}\otimes_A B$ and $I_{\Gamma,G_B}\supset I_{\Gamma,G}\otimes_A B$. So we get morphisms of $B$-algebras, for $X$ a set and $\Gamma$ a group, $$R^1_{X,G}\otimes_A B=\mathscr{O}(G^X)^G\otimes_A B\rightarrow\mathscr{O}(G_B^X)^{G_B}=R^1_{X,G_B}$$ and $$R_{\Gamma,G}\otimes_A B= (\mathscr{O}(G^\Gamma)^G\otimes_A B)/(I_{\Gamma,G}\otimes_A B) \rightarrow R_{\Gamma,G_B}$$ and corresponding morphisms of schemes $$\beta_X:\mathop{\mathrm{LPC}}^1_{X,G_B}\rightarrow\mathop{\mathrm{LPC}}^1_{X,G}\otimes_A B$$ and $$\beta_\Gamma:\mathop{\mathrm{LPC}}_{\Gamma,G_B}\rightarrow\mathop{\mathrm{LPC}}_{\Gamma,G}\otimes_A B.$$ **Proposition 1**. 1. *The morphisms $\beta_X$ and $\beta_\Gamma$ are isomorphisms in the following cases:* - *$B$ is a flat $A$-algebra;* - *$A$ is a Dedekind domain and $G$ is geometrically reductive over $A$ and has connected geometric fibers.* 2.
*If $G$ is geometrically reductive over $A$ and has connected geometric fibers, then $\beta_X$ and $\beta_\Gamma$ are always adequate homeomorphisms.* *Proof.* Point (ii) follows from Proposition 5.2.9(3) of [@Alper]. We prove (i). Suppose first that $B$ is flat over $A$. Then, by Lemma 2 of Seshadri's paper [@Seshadri], for every $A$-module $V$ with an action of $G$, the canonical morphism $V^G\otimes_A B\rightarrow(V\otimes_A B)^{G_B}$ is an isomorphism. Applying this lemma to $\mathscr{O}(G^X)$, we see that $\beta_X$ is an isomorphism. As the functor $(-)\otimes_A B$ is right exact, Seshadri's lemma also implies that $I_{\Gamma,G}\otimes_A B=I_{\Gamma,G_B}$, hence that $\beta_\Gamma$ is an isomorphism. We finally assume that $A$ is a Dedekind domain and that $G$ is geometrically reductive over $A$ with connected geometric fibers. By Lemma [Lemma 1](#lemme_no_way){reference-type="ref" reference="lemme_no_way"}, the morphism $\mathscr{O}(G^Y)^G\otimes_A B\rightarrow\mathscr{O}(G_B^Y)^{G_B}$ is an isomorphism for every set $Y$; as in the proof of (i)(a), we conclude that $\beta_X$ and $\beta_\Gamma$ are isomorphisms. ◻ **Lemma 1**. *Let $A$ be a Dedekind domain, $G$ be a geometrically reductive group scheme with connected geometric fibers over $A$ and $B$ be a commutative $A$-algebra. Then, for every set $Y$, the injective morphism $\mathscr{O}(G^Y)^G\otimes_A B\rightarrow\mathscr{O}(G_B^Y)^{G_B}$ is an isomorphism.* **Remark 1**. Lemma [Lemma 1](#lemme_no_way){reference-type="ref" reference="lemme_no_way"} and its application to the functor $\mathop{\mathrm{LPC}}$ are probably well-known to many people in some form. We found a version of it in a note by Chen (see [@Chen]), as well as in a preprint by Quast (see Section 1.3 of [@Quast]).
*Proof of Lemma [Lemma 1](#lemme_no_way){reference-type="ref" reference="lemme_no_way"}.* We will use a number of results about the cohomology of algebraic groups that are gathered in Jantzen's book [@Jantzen]; the particular results that we need are due to Donkin and Mathieu, see [@Jantzen] for the original references. First, as the tensor product and the functor of invariants commute with direct limits and as $\mathscr{O}(G^Y)$ is the direct limit of the $\mathscr{O}(G^Z)$ for $Z \subset Y$ finite, we may assume that $Y$ is finite. By the universal coefficients theorem (Proposition 4.18 in Chapter I of [@Jantzen]), we have an exact sequence $$0\rightarrow\mathscr{O}(G^Y)^G\otimes_A B\rightarrow\mathscr{O}(G_B^Y)^{G_B}\rightarrow \mathop{\mathrm{Tor}}_1^A(\mathrm{H}^1(G,\mathscr{O}(G^Y)),B)\rightarrow 0.$$ So it suffices to show that $\mathrm{H}^1(G,\mathscr{O}(G^Y))=0$. For this, it suffices to prove that the localization of $\mathrm{H}^1(G,\mathscr{O}(G^Y))$ at every prime ideal of $A$ is zero; as localizations are flat, by the universal coefficients theorem again, we may replace $A$ by one of its localizations, hence assume that $A$ is a field or a discrete valuation ring. Now, by Lemma B.9 of [@Jantzen], it suffices to prove that, for every maximal ideal $\mathfrak{m}$ of $A$, if $k=A/\mathfrak{m}$, then $\mathscr{O}(G_k^Y)$ has a good filtration (in the sense of II.4.16 of [@Jantzen]) as a $G_k$-module. This follows immediately from Propositions 4.20 and 4.21 of Chapter II of [@Jantzen]. ◻ We finally investigate the relationship between pseudocharacters and the character variety. **Definition 1**. Let $A$ be a commutative ring, $G$ be a flat affine group scheme over $A$ and $\Gamma$ be a group. The *$G$-character variety of $\Gamma$* is the affine scheme $\mathop{\mathrm{Char}}_{\Gamma,G}=\mathop{\mathrm{Spec}}((\mathscr{O}(G^\Gamma)/J_{\Gamma,G})^G)$.
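To make the definition concrete in the simplest nonabelian case, here is a small numerical sanity check, written as a sketch in Python with NumPy (the helper name `fricke_coords` is ours, for illustration only). Take $\Gamma$ the free group on two generators and $G={\bf {GL}}_2$ over $\mathbb{C}$; since a morphism of groups out of a free group is determined by the images of the generators, $\mathscr{O}(G^\Gamma)/J_{\Gamma,G}\simeq\mathscr{O}(G\times G)$. The functions $\mathop{\mathrm{Tr}}(a)$, $\mathop{\mathrm{Tr}}(b)$, $\mathop{\mathrm{Tr}}(ab)$, $\det(a)$, $\det(b)$ are invariant under simultaneous conjugation, hence define regular functions on $\mathop{\mathrm{Char}}_{\Gamma,G}$; in the ${\bf {SL}}_2$ case, the three traces classically generate the invariant ring (Fricke).

```python
import numpy as np

rng = np.random.default_rng(1)

def fricke_coords(a, b):
    # conjugation-invariant functions on GL_2 x GL_2: two traces of the
    # generators, the trace of their product, and the two determinants
    return np.array([np.trace(a), np.trace(b), np.trace(a @ b),
                     np.linalg.det(a), np.linalg.det(b)])

a = rng.normal(size=(2, 2))
b = rng.normal(size=(2, 2))
h = rng.normal(size=(2, 2)) + 3 * np.eye(2)   # generically invertible
hinv = np.linalg.inv(h)

# simultaneous conjugation leaves the coordinates unchanged, so they
# descend to functions on the character variety Char_{Gamma, GL_2}
assert np.allclose(fricke_coords(a, b),
                   fricke_coords(h @ a @ hinv, h @ b @ hinv))
```

The invariance checked here on random matrices is exactly the statement that these functions lie in $(\mathscr{O}(G\times G))^G$, i.e. that they descend to $\mathop{\mathrm{Char}}_{\Gamma,G}$.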
If $B$ is an $A$-algebra, then we have $(\mathscr{O}(G^\Gamma)/J_{\Gamma,G})\otimes_A B=\mathscr{O}(G_B^\Gamma)/J_{\Gamma,G_B}$, so we get a morphism of $B$-algebras $$(\mathscr{O}(G^\Gamma)/J_{\Gamma,G})^G\otimes_A B\rightarrow ((\mathscr{O}(G^\Gamma)/J_{\Gamma,G})\otimes_A B)^{G_B}= (\mathscr{O}(G_B^\Gamma)/J_{\Gamma,G_B})^{G_B},$$ and a corresponding morphism of schemes $$\beta_\Gamma':\mathop{\mathrm{Char}}_{\Gamma,G_B}\rightarrow\mathop{\mathrm{Char}}_{\Gamma,G}\otimes_A B.$$ **Lemma 1**. *Suppose that $G$ is geometrically reductive over $A$ and has connected geometric fibers.* 1. *If $B$ is a flat $A$-algebra, then $\beta'_\Gamma$ is an isomorphism.* 2. *In general, $\beta'_\Gamma$ is an adequate homeomorphism.* *Proof.* Point (i) follows from Proposition 5.2.9(1) of [@Alper], and point (ii) from Proposition 5.2.9(3) of the same paper. ◻ **Proposition 1**. *Suppose that $G$ is a geometrically reductive group scheme with connected geometric fibers over $A$ and that $A$ is a Dedekind domain or a field. Let $\iota$ be the morphism $\mathop{\mathrm{Char}}_{\Gamma,G}\rightarrow\mathop{\mathrm{LPC}}_{\Gamma,G}$ induced by $\mathscr{O}(G^\Gamma)^G/I_{\Gamma,G}\twoheadrightarrow\mathscr{O}(G^\Gamma)^G/J_{\Gamma,G}^G \hookrightarrow(\mathscr{O}(G^\Gamma)/J_{\Gamma,G})^G$.* 1. *If $A$ is a field of characteristic $0$, then $\iota$ is an isomorphism.* 2. *In general, $\iota$ is an adequate homeomorphism.* The statement of the proposition is very close to that of Proposition 11.7 of [@Lafforgue1] and Theorem 4.5 of [@BHKT], and its proof uses the same kind of ideas. We nevertheless include it here, since the proof is not very hard. *Proof.* We prove (i). As $A$ is a field, the group $G$ is reductive over $A$.
So, for every algebraic representation $V$ of $G$ over $A$ (not necessarily finite-dimensional), we have the Reynolds operator $E=E_V:V\rightarrow V$ (see [@Mumford] Definition 1.5), which is a $G$-equivariant projection with image $V^G$, compatible with any morphism of representations $V\rightarrow W$; also, if $V$ is an $A$-algebra and the action of $G$ preserves its multiplication, then $E_V$ is $V^G$-linear. We claim that $I_{\Gamma,G}=J_{\Gamma,G}^G$. We already know that $I_{\Gamma,G} \subset J_{\Gamma,G}^G$. Conversely, let $h\in J_{\Gamma,G}^G$. Write $h=\sum_{i=1}^n \varphi_{\gamma_i,\delta_i}(f_i)$, with $\gamma_i,\delta_i\in\Gamma$ and $f_i\in\mathscr{O}(G\times G^\Gamma)$. As the $\varphi_{\gamma_i,\delta_i}$ are $G$-equivariant morphisms, they are compatible with the Reynolds operators, so $$h=E(h)=\sum_{i=1}^n\varphi_{\gamma_i,\delta_i}(E(f_i))\in I_{\Gamma,G}.$$ It remains to prove that the injective morphism $\mathscr{O}(G^\Gamma)^G/J_{\Gamma,G}^G\rightarrow (\mathscr{O}(G^\Gamma)/J_{\Gamma,G})^G$ is also surjective. Let $f\in\mathscr{O}(G^\Gamma)$, and suppose that the class of $f$ modulo $J_{\Gamma,G}$ is $G$-invariant. As the canonical surjection $\mathscr{O}(G^\Gamma)\rightarrow\mathscr{O}(G^\Gamma)/J_{\Gamma,G}$ is $G$-equivariant, it is compatible with the Reynolds operators, so we deduce that $f-E(f)\in J_{\Gamma,G}$, which means that $f+J_{\Gamma,G}$ is in the image of $\mathscr{O}(G^\Gamma)^G/J_{\Gamma,G}^G$. We prove (ii). We know that $\mathop{\mathrm{Char}}_{\Gamma,G}\rightarrow \mathop{\mathrm{Spec}}(\mathscr{O}(G^\Gamma)^G/J_{\Gamma,G}^G)$ is an adequate homeomorphism by Lemma 5.2.12 of [@Alper] (see Remark 5.2.14 of *loc. cit.*). So it remains to prove that $\iota':\mathop{\mathrm{Spec}}(\mathscr{O}(G^\Gamma)^G/J_{\Gamma,G}^G)\rightarrow \mathop{\mathrm{Spec}}(\mathscr{O}(G^\Gamma)^G/I_{\Gamma,G})$ is an adequate homeomorphism. This morphism is a closed embedding, hence it is integral, universally injective and universally closed.
If $\mathop{\mathrm{Frac}}(A)$ is of characteristic $0$, then $\iota'$ becomes an isomorphism after we tensor it by $\mathop{\mathrm{Frac}}(A)$ by Proposition [Proposition 1](#prop_BC){reference-type="ref" reference="prop_BC"}(ii); otherwise, its source and target have no point with residue field of characteristic $0$. So it remains to show that $\iota'$ is surjective, which is equivalent to the fact that $\iota$ is surjective. Let $x$ be a point of $\mathop{\mathrm{Spec}}(\mathscr{O}(G^\Gamma)^G/I_{\Gamma,G})$, which corresponds to a morphism of $A$-algebras $u:\mathscr{O}(G^\Gamma)^G/I_{\Gamma,G}\rightarrow K$, with $K$ a field. We want to find an extension $L$ of $K$ and a morphism of $A$-algebras $v:(\mathscr{O}(G^\Gamma)/J_{\Gamma,G})^G\rightarrow L$ such that $v\circ{\iota}^*=u$. We may always enlarge $K$ and $L$, so we may assume that they are algebraically closed. Then, by Proposition [Proposition 1](#prop_BC){reference-type="ref" reference="prop_BC"}(ii) and Lemma [Lemma 1](#lemme_BC){reference-type="ref" reference="lemme_BC"}(ii), $\mathop{\mathrm{Char}}_{\Gamma,G}$ and $\mathop{\mathrm{Char}}_{\Gamma,G_K}$ (resp. $\mathop{\mathrm{LPC}}_{\Gamma,G}$ and $\mathop{\mathrm{LPC}}_{\Gamma,G_K}$) have the same points over any algebraically closed extension of $K$, so we may assume that $A=K$ (and so that $G$ is reductive over $K$). By Theorem 5.13 of [@Popp] (or Lemma 5.2.1 and Remark 5.2.2 of [@Alper]), there exists an extension $L$ of $K$ and a morphism of $K$-algebras $w:\mathscr{O}(G^\Gamma)\rightarrow L$ such that $u$ is induced by $w_{\mid\mathscr{O}(G^\Gamma)^G}$. The morphism $w$ corresponds to an element $(g_\gamma)_{\gamma\in\Gamma}\in G^\Gamma(L)$, and we denote by $H$ the closed subgroup of $G_L$ generated by the set $\{g_\gamma,\ \gamma\in\Gamma\}$. We choose $w$ such that the dimension of the maximal tori in $Z_G(H)$ is maximal.
We claim that $H$ is then strongly reductive in $G$ in the sense of Definition 16.1 of Richardson's paper [@Richardson], that is, $H$ is not contained in any proper parabolic subgroup of $Z_G(S)$, where $S$ is a maximal torus of $Z_G(H)$. Indeed, let $\lambda$ be a cocharacter of $Z_G(S)$, and suppose that $H$ is contained in $P(\lambda):=\{g\in Z_G(S)\mid \lim_{t\to 0}\lambda(t)g \lambda(t)^{-1}\ \mathrm{exists}\}$. For every $\gamma\in\Gamma$, let $g'_\gamma=\lim_{t\to 0}\lambda(t)g_\gamma\lambda(t)^{-1}\in G(L)$. Then $(g_\gamma)_{\gamma\in\Gamma}$ and $(g'_\gamma)_{\gamma\in\Gamma}$ have the same image in $\mathop{\mathrm{Spec}}(\mathscr{O}(G^\Gamma)^G)(L)$, so the morphism $w':\mathscr{O}(G^\Gamma)\rightarrow L$ corresponding to $(g'_\gamma)_{\gamma\in\Gamma}\in G^\Gamma(L)$ satisfies $w'_{\mid\mathscr{O}(G^\Gamma)^G}=w_{\mid\mathscr{O}(G^\Gamma)^G}$. On the other hand, the closed subgroup $H'$ of $G$ generated by the $g'_\gamma$ is contained in the centralizer of $\lambda$ in $Z_G(S)$, so $\lambda$ has to be central in $Z_G(S)$, otherwise $Z_G(H')$, which contains the group generated by $S$ and the image of $\lambda$, would have a maximal torus of dimension greater than $\dim(S)$. So $H$ is not contained in any proper parabolic subgroup of $Z_G(S)$. Also, as $H$ is a Noetherian scheme, for every big enough finite subset $X$ of $\Gamma$, the closed subgroup of $G$ generated by $\{g_\gamma,\ \gamma\in X\}$ is equal to $H$. We now prove that, for this choice of $w$, the map $\gamma\mapsto g_\gamma$ is a morphism of groups; this implies that $w$ factors through $\mathscr{O}(G^\Gamma)/J_{\Gamma,G}$, hence defines a point $y$ of $\mathop{\mathrm{Char}}_{\Gamma,G}(L)$ such that $\iota(y)=x$. Let $\gamma,\delta\in\Gamma$. Choose a finite subset $X$ of $\Gamma$ such that $\gamma,\delta,\gamma\delta\in X$ and the closed subgroup of $G$ generated by $\{g_\alpha,\ \alpha\in X\}$ is equal to $H$.
As $w_{\mid\mathscr{O}(G^\Gamma)^G}$ vanishes on $I_{\Gamma,G}$, the images of $(g_{\gamma\delta},(g_\alpha)_{\alpha\in\Gamma})$ and $(g_{\gamma}g_{\delta},(g_\alpha)_{\alpha\in\Gamma})$ by the map $(G\times G^\Gamma)(L)\rightarrow\mathop{\mathrm{Spec}}(\mathscr{O}(G\times G^\Gamma)^G)(L)$ are equal, so the images of $(g_{\gamma\delta},(g_\alpha)_{\alpha\in X})$ and $(g_{\gamma}g_{\delta},(g_\alpha)_{\alpha\in X})$ by the map $(G\times G^X)(L)\rightarrow\mathop{\mathrm{Spec}}(\mathscr{O}(G\times G^X)^G)(L)$ are also equal. The closed subgroups of $G$ generated by the families $(g_{\gamma\delta},(g_\alpha)_{\alpha\in X})$ and $(g_{\gamma}g_{\delta},(g_\alpha)_{\alpha\in X})$ are both equal to $H$, hence strongly reductive in $G$; so, by Theorem 16.4 of Richardson's paper [@Richardson], the $G$-orbits of these families in $(G\times G^X)(L)$ are closed. As they have the same image in $\mathop{\mathrm{Spec}}(\mathscr{O}(G\times G^X)^G)(L)$, and as $G\times G^X$ is of finite type over $K$, this implies that they are in the same $G$-conjugacy class. Let $h\in G(L)$ be such that $h(g_{\gamma\delta},(g_\alpha)_{\alpha\in X})h^{-1}= (g_{\gamma}g_{\delta},(g_\alpha)_{\alpha\in X})$. Then $h$ centralizes all the $g_\alpha$ for $\alpha\in X$, so $g_\gamma g_\delta=hg_{\gamma\delta}h^{-1}= g_{\gamma\delta}$. This finishes the proof. ◻ **Remark 1**. The morphism $\mathop{\mathrm{Char}}_{\Gamma,G}\rightarrow\mathop{\mathrm{LPC}}_{\Gamma,G}$ of Proposition [Proposition 1](#prop_charvar){reference-type="ref" reference="prop_charvar"} is an isomorphism if and only if both morphisms $\mathscr{O}(G^\Gamma)^G/I_{\Gamma,G}\twoheadrightarrow\mathscr{O}(G^\Gamma)^G/J_{\Gamma,G}^G$ and $\mathscr{O}(G^\Gamma)^G/J_{\Gamma,G}^G \hookrightarrow(\mathscr{O}(G^\Gamma)/J_{\Gamma,G})^G$ are isomorphisms. This seems unlikely, but we cannot offer a counterexample. # A Lafforgue pseudocharacter gives rise to a determinant {#LPCDet} Let $A$ be a commutative unital ring and $d$ be a positive integer.
If $X$ is a set and $\Gamma$ is a group, we write $\mathop{\mathrm{LPC}}^1_{X,d}$ and $\mathop{\mathrm{LPC}}_{\Gamma,d}$ instead of $\mathop{\mathrm{LPC}}^1_{X,{\bf {GL}}_{d,A}}$ and $\mathop{\mathrm{LPC}}_{\Gamma,{\bf {GL}}_{d,A}}$. We will also use the notation of Example [Example 1](#ex_LPC){reference-type="ref" reference="ex_LPC"}. Let $X$ be a set, $B$ be a commutative $A$-algebra and $\Theta= (\Theta_n)$ be an element of $\mathop{\mathrm{LPC}}^1_{X,d}(B)$. We want to construct an element $\alpha^1_X(\Theta)$ of $\mathop{\mathrm{Det}}_{A\{X\},d}(B)$, that is, a degree $d$ homogeneous multiplicative polynomial law from $A\{X\}$ to $B$ (seen as $A$-algebras). Let $Y,Z$ be sets and $\sigma:Y\rightarrow Z$ be a map. If $C$ is a commutative $A$-algebra, then we define a map $\det_{\sigma,C}:C\{Y\}=A\{Y\}\otimes_A C\rightarrow\mathscr{O}({{\bf {GL}}}_{d,C}^Z)^{{\bf {GL}}_{d,C}}$ in the following way. Let $p\in C\{Y\}$. Then $\det_{\sigma,C}(p)$ is the regular function on ${{\bf {GL}}}_{d,C}^Z$ sending $(g_z)_{z\in Z}$ to $\det(p((g_{\sigma(y)})_{y\in Y}))$ (where $\det$ is the usual determinant on ${\bf {GL}}_d$), which is clearly ${\bf {GL}}_{d,C}$-invariant. Note that this construction is functorial in $C$, and that we have $\mathscr{O}({\bf {GL}}_{d,C}^Z)^{{\bf {GL}}_{d,C}}= \mathscr{O}({{\bf {GL}}}_{d,A}^Z)^{{\bf {GL}}_{d,A}}\otimes_A C$ by Lemma [Lemma 1](#lemme_no_way){reference-type="ref" reference="lemme_no_way"}. Hence the family $(\det_{\sigma,C})_{C\in\mathop{\mathrm{\mathrm{Ob}}}(\mathscr{C}_A)}$ defines a degree $d$ homogeneous multiplicative polynomial law from $A\{Y\}$ to $\mathscr{O}({\bf {GL}}_{d,A}^Z)^{{\bf {GL}}_{d,A}}$, which we denote by $\det_\sigma$. We come back to our $\Theta\in\mathop{\mathrm{LPC}}^1_{X,d}(B)$.
By Proposition [Proposition 1](#prop_rep_LPC1){reference-type="ref" reference="prop_rep_LPC1"}, it corresponds to a morphism of $A$-algebras $u_\Theta:\mathscr{O}({\bf {GL}}_{d,A}^X)^{{\bf {GL}}_{d,A}}\rightarrow B$, and we send it to the polynomial law $$\alpha^1_X(\Theta)=u_\Theta\circ\det\nolimits_{\mathrm{id}_X}:A\{X\} \rightarrow B.$$ In other words, for every commutative $A$-algebra $C$ and every $p\in C\{X\}$, the element $\alpha^1_X(\Theta)_C(p)$ of $B\otimes_A C$ is the image by $u_{\Theta}\otimes\mathrm{id}_C$ of the element $(g_x)_{x\in X}\mapsto\det(p((g_x)_{x\in X}))$ of $\mathscr{O}({\bf {GL}}_{d,C}^X)^{{\bf {GL}}_{d,C}}$. The functoriality of $\alpha^1_X(\Theta)_C$ in $C$ follows immediately from its definition, and the fact that it defines a degree $d$ homogeneous multiplicative polynomial law follows from the properties of the determinant on ${\bf {GL}}_d$. **Proposition 1**. 1. *Let $X$ be a set and $d\geq 1$. Then the maps $\mathop{\mathrm{LPC}}^1_{X,d}(B)\rightarrow\mathop{\mathrm{Det}}_{A\{X\},d}(B)=\mathscr{M}_A(A\{X\},B)$, $\Theta\mapsto \alpha^1_X(\Theta)$, form a morphism of functors $\alpha^1_X:\mathop{\mathrm{LPC}}^1_{X,d}\rightarrow\mathop{\mathrm{Det}}_{A\{X\},d}$ such that, for every commutative $A$-algebra $B$ and every map $\rho:X\rightarrow{\bf {GL}}_d(B)$, we have $\alpha^1_X(\Theta^1_\rho)=D_\rho$. Moreover, the morphisms $\alpha^1_X$ are natural in $X$.* 2. *Let $\Gamma$ be a group and $d\geq 1$. We denote by $X$ the underlying set of $\Gamma$. Then the morphism $\alpha^1_{X}:\mathop{\mathrm{LPC}}^1_{X,d}\rightarrow\mathop{\mathrm{Det}}_{A\{X\},d}$ restricts to a morphism $\alpha_\Gamma:\mathop{\mathrm{LPC}}_{\Gamma,d}\rightarrow\mathop{\mathrm{Det}}_{A[\Gamma],d}$ such that, for every commutative $A$-algebra $B$ and every morphism of groups $\rho:\Gamma\rightarrow{\bf {GL}}_d(B)$, we have $\alpha_\Gamma(\Theta_\rho)=D_\rho$. Moreover, the morphisms $\alpha_\Gamma$ are natural in $\Gamma$.* *Proof.* 1.
The fact that $\alpha^1_X$ is a morphism of functors and the naturality in $X$ follow easily from the definition of $\alpha^1_X(\Theta)$. Let $B$ be a commutative $A$-algebra and $\rho:X\rightarrow{\bf {GL}}_d(B)$ be a map; then the morphism of $A$-algebras $u:\mathscr{O}({\bf {GL}}_{d,A}^X)^{{\bf {GL}}_{d,A}} \rightarrow B$ corresponding to $\Theta^1_\rho$ is given by $u(f)=f((\rho(x))_{x\in X})$. So, if $C$ is a commutative $A$-algebra and $p\in C\{X\}$, we have $$\alpha^1_X(\Theta^1_\rho)_C(p)=\det(p((\rho(x))_{x\in X}))=\det(\rho(p))=D_\rho(p).$$ 2. It suffices to prove that $\alpha^1_{X}$ sends $\mathop{\mathrm{LPC}}_{\Gamma,d}$ to $\mathop{\mathrm{Det}}_{A[\Gamma],d}$; the other statements then follow immediately from (i). Let $B$ be a commutative $A$-algebra and $\Theta\in\mathop{\mathrm{LPC}}_{\Gamma,d}(B)$. As $\Theta$ is also in $\mathop{\mathrm{LPC}}^1_{X,d}(B)$, we have a degree $d$ homogeneous multiplicative polynomial law $D=\alpha^1_{X}(\Theta):A\{X\}\rightarrow B$. Saying that $D$ is in the image of $\mathscr{M}_A(A[\Gamma],B)\rightarrow\mathscr{M}_A(A\{X\},B)$ means that, for every commutative $A$-algebra $C$, the map $D_C:C\{X\}\rightarrow B\otimes_A C$ factors through the obvious surjection $\pi_C:C\{X\}\rightarrow C[\Gamma]$. Fix a commutative $A$-algebra $C$. We want to prove that, for all $p,q\in C\{X\}$ such that $\pi_C(p)=\pi_C(q)$, we have $D_C(p)=D_C(q)$. For $p\in C\{X\}$, let $n(p)$ be the sum over all the nonconstant monomials $m$ appearing in $p$ of $\deg(m)-1$; so $n(p)=0$ if and only if $p$ is of degree $\leq 1$. We claim that, for every $p\in C\{X\}$ such that $n(p)\geq 1$, there exists $p_1\in C\{X\}$ such that $n(p_1)=n(p)-1$, $\pi_C(p_1)=\pi_C(p)$ and $D_C(p_1)=D_C(p)$.
This claim implies the desired result; indeed, if $p,q\in C\{X\}$ are such that $\pi_C(p)=\pi_C(q)$, then, by applying the claim repeatedly, we can find $p_1,q_1\in C\{X\}$ such that $n(p_1)=n(q_1)=0$, $\pi_C(p_1)=\pi_C(p)=\pi_C(q)=\pi_C(q_1)$, $D_C(p_1)=D_C(p)$ and $D_C(q_1)=D_C(q)$; as $n(p_1)=n(q_1)=0$, the polynomials $p_1$ and $q_1$ are of degree $\leq 1$, so $\pi_C(p_1)=\pi_C(q_1)$ implies that $p_1=q_1$, and then we have $D_C(p)=D_C(p_1)=D_C(q_1)=D_C(q)$. So it suffices to prove the claim. Let $p\in C\{X\}$ such that $n(p)\geq 1$, and let $m$ be a monomial of degree $\geq 2$ appearing in $p$. Write $m=c x_{\gamma_1}\ldots x_{\gamma_k}$ with $c\in C\setminus\{0\}$, $k\geq 2$ and $\gamma_1,\ldots,\gamma_k\in\Gamma$, where, for every $\gamma\in\Gamma$, we denote by $x_\gamma$ the corresponding element of $X$. Let $r=p-m$ and set $p_1=r+c x_{\gamma_1\gamma_2}x_{\gamma_3}\ldots x_{\gamma_k}$. Then $n(p_1)=n(p)-1$ and $\pi_C(p_1)=\pi_C(p)$. It remains to prove that $D_C(p_1)=D_C(p)$. Let $u_\Theta:\mathscr{O}({\bf {GL}}_{d,A}^X)^{{\bf {GL}}_{d,A}}\rightarrow B$ be the morphism of $A$-algebras corresponding to the image of $\Theta$ in $\mathop{\mathrm{LPC}}^1_{X,d}(B)$. We have $$D_C(p)=\alpha_X^1(\Theta)_C(p)=u_\Theta(\det\nolimits_{\mathrm{id}_X}(p))\quad\mbox{and} \quad D_C(p_1)=u_\Theta(\det\nolimits_{\mathrm{id}_X}(p_1)).$$ As $\Theta$ is in $\mathop{\mathrm{LPC}}_{\Gamma,d}(B)$, the morphism $u_\Theta$ vanishes on $I_{\Gamma,{\bf {GL}}_{d,A}}$, hence $u_\Theta\otimes\mathrm{id}_C$ vanishes on $I_{\Gamma,{\bf {GL}}_{d,C}}$. So, to show that $D_C(p)=D_C(p_1)$, it suffices to note that $\det\nolimits_{\mathrm{id}_X}(p_1)-\det\nolimits_{\mathrm{id}_X}(p)=\varphi_{\gamma_1,\gamma_2}(F)$, with $F\in\mathscr{O}({\bf {GL}}_{d,C}\times {\bf {GL}}_{d,C}^X)^{{\bf {GL}}_{d,C}}$ the function $(g,(g_\gamma)_{\gamma\in\Gamma})\mapsto \det\left(r((g_\gamma)_{\gamma\in\Gamma})+c gg_{\gamma_3}\ldots g_{\gamma_k}\right)$.  ◻ # From determinants to pseudocharacters {#DetLPC} The main result of this note is the following theorem. **Theorem 1**. *Let $A$ be a commutative ring and $d$ be a positive integer.* 1. *Let $X$ be a set.
The morphism $\alpha^1_X:\mathop{\mathrm{LPC}}^1_{X,d}\rightarrow \mathop{\mathrm{Det}}_{A\{X\},d}$ of Proposition [Proposition 1](#prop_LPCDet){reference-type="ref" reference="prop_LPCDet"}(i) corresponds by the isomorphisms $\mathop{\mathrm{LPC}}^1_{X,d}\simeq\mathop{\mathrm{Spec}}(\mathscr{O}({\bf {GL}}_d^X)^{{\bf {GL}}_d})$ and $\mathop{\mathrm{Det}}_{A\{X\},d}\simeq\mathop{\mathrm{Spec}}(\mathscr{O}(M_d^X)^{{\bf {GL}}_d})$ of Proposition [Proposition 1](#prop_rep_LPC1){reference-type="ref" reference="prop_rep_LPC1"} and Corollary [Corollary 1](#cor_DetX){reference-type="ref" reference="cor_DetX"} to the restriction morphism $\mathscr{O}(M_d^X)^{{\bf {GL}}_d}\rightarrow\mathscr{O}({\bf {GL}}_d^X)^{{\bf {GL}}_d}$. Moreover, $\alpha^1_X$ is injective, and, for every commutative $A$-algebra $B$, an element $D$ of $\mathop{\mathrm{Det}}_{A\{X\},d}(B)$ is in the image of $\alpha^1_X$ if and only if $D_B(x)\in B^\times$ for every $x\in X$.* 2. *For every group $\Gamma$, the morphism $\alpha_\Gamma:\mathop{\mathrm{LPC}}_{\Gamma,d}\rightarrow \mathop{\mathrm{Det}}_{A[\Gamma],d}$ of Proposition [Proposition 1](#prop_LPCDet){reference-type="ref" reference="prop_LPCDet"}(ii) is an isomorphism.* *Proof.* By Remark [Remark 1](#rmk_BC){reference-type="ref" reference="rmk_BC"} and Proposition [Proposition 1](#prop_BC){reference-type="ref" reference="prop_BC"}(i)(b), we may assume that $A=\mathbb{Z}$. Let $X$ be a set. As at the end of Section [1](#Det){reference-type="ref" reference="Det"}, we denote the universal matrix representation by $\rho^\mathrm{univ}:X\rightarrow M_d(F_X(d))$, where $F_X(d)=\mathbb{Z}[x_{i,j},\ x\in X,\ 1\leq i,j\leq d]$. The universal element of $\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}$ is then $D^\mathrm{univ}=D_{\rho^\mathrm{univ}}:\mathbb{Z}\{X\}\rightarrow E_X(d)$, where $E_X(d)\subset F_X(d)$ is the subring generated by the coefficients of the characteristic polynomials of the $\rho^\mathrm{univ}(p)$, $p\in\mathbb{Z}\{X\}$. We prove (i).
The first statement follows from the fact that $\alpha^1_X(\Theta^1_\rho)=D_\rho$ for every commutative ring $A$ and every map $\rho:X\rightarrow{\bf {GL}}_d(A)$. For the injectivity of $\alpha^1_X$, it suffices to prove that $\mathscr{O}({\bf {GL}}_d^X)^{{\bf {GL}}_d}$ is a localization of $\mathscr{O}(M_d^X)^{{\bf {GL}}_d}$, but this follows from the fact that $\mathscr{O}({\bf {GL}}_d^X)$ is the localization of $\mathscr{O}(M_d^X)$ by the multiplicative set generated by the functions $\det_{x}:(g_y)_{y\in X}\mapsto\det(g_{x})$, $x\in X$, which are in $\mathscr{O}(M_d^X)^{{\bf {GL}}_d}$. Finally, let $A$ be a commutative ring, let $D\in\mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}(A)$, and let $u:\mathscr{O}(M_d^X)^{{\bf {GL}}_d}\rightarrow A$ be the morphism of rings corresponding to $D$. Then $D$ is in the image of $\alpha^1_X$ if and only if $u$ extends to a morphism of rings $\mathscr{O}({\bf {GL}}_d^X)^{{\bf {GL}}_d}\rightarrow A$, which is equivalent to the condition that $u(\det_x)\in A^\times$ for every $x\in X$. As $u(\det_x)=u(\det(\rho^\mathrm{univ}(x)))=D_A(x)$ for every $x\in X$, we get the conclusion. We prove (ii). Let $X$ be the underlying set of $\Gamma$. We have a commutative diagram $$\xymatrix{\mathop{\mathrm{LPC}}_{\Gamma,d}\ar[r]^-{\alpha_\Gamma}\ar@{^{(}->}[d] & \mathop{\mathrm{Det}}_{\mathbb{Z}[\Gamma],d}\ar@{^{(}->}[d] \\ \mathop{\mathrm{LPC}}^1_{\Gamma,d}\ar[r]_-{\alpha^1_X} & \mathop{\mathrm{Det}}_{\mathbb{Z}\{X\},d}}$$ and $\alpha^1_X$ is injective by (i), so it suffices to check that the image of $\alpha_\Gamma$ is $\mathop{\mathrm{Det}}_{\mathbb{Z}[\Gamma],d}$. For this, we check that the universal element of $\mathop{\mathrm{Det}}_{\mathbb{Z}[\Gamma],d}$ is in the image of $\alpha_\Gamma$. Let $\varphi:E_X(d)\simeq(\Gamma_\mathbb{Z}^d(\mathbb{Z}\{X\}))^\mathrm{ab}\rightarrow A:=(\Gamma_\mathbb{Z}^d(\mathbb{Z}[\Gamma]))^\mathrm{ab}$ be the morphism induced by the quotient map $\pi:\mathbb{Z}\{X\}\rightarrow\mathbb{Z}[\Gamma]$.
The universal element $D$ of $\mathop{\mathrm{Det}}_{\mathbb{Z}[\Gamma],d}$ is defined by $D\circ\pi=\varphi\circ D^\mathrm{univ}$, and its preimage by $\alpha^1_X$ is the element $\Theta$ of $\mathop{\mathrm{LPC}}^1_{\Gamma,d}$ defined by $$\Theta_n(f)(\gamma_1,\ldots,\gamma_n)=\varphi(\Theta^\mathrm{univ}_n(f)(\gamma_1,\ldots, \gamma_n)),$$ where $\Theta^\mathrm{univ}=(\alpha^1_X)^{-1}(D^\mathrm{univ})$. It suffices to show that $\Theta$ satisfies condition [\[LPC2\]](#LPC2){reference-type="ref" reference="LPC2"}; we will then have $\Theta\in\mathop{\mathrm{LPC}}_{\Gamma,d}(A)$ and $\alpha_\Gamma(\Theta)=D$. Let $n$ be a positive integer and $\gamma_1,\ldots,\gamma_{n+1}\in\Gamma$. We want to check that $\Theta_n(f)(\gamma_1,\ldots,\gamma_{n-1},\gamma_n\gamma_{n+1})= \Theta_{n+1}(\widehat{f})(\gamma_1,\ldots,\gamma_{n+1})$ for every $f\in\mathscr{O}({\bf {GL}}_d^n)^{{\bf {GL}}_d}$. As both sides are $\mathbb{Z}$-algebra morphisms in $f$, it suffices to check it on generators of $\mathscr{O}({\bf {GL}}_d^n)^{{\bf {GL}}_d}$. So, by results of Donkin (cf. [@Donkin 3.1]), we may assume that $$f(g_1,\ldots,g_n)=\Lambda_k(g_{i_1}\ldots g_{i_r}),$$ where $k\in\{1,\ldots,d\}$, $\Lambda_k$ is the $k$th coefficient of the characteristic polynomial (i.e. $\Lambda_k(g)=(-1)^k\mathop{\mathrm{Tr}}(\Lambda^k g)$), $r\in\mathbb{N}$ and $i_1,\ldots,i_r\in\{1,2,\ldots,n\}$. For every $\gamma\in\Gamma$, we denote by $x_\gamma$ the element of $X$ corresponding to $\gamma$. For $i\in\{1,2,\ldots,n-1\}$, we set $y_i=z_i=x_{\gamma_i}$, and we also set $y_n=x_{\gamma_n\gamma_{n+1}}$, $z_n=x_{\gamma_n}x_{\gamma_{n+1}}$; these are all elements of $\mathbb{Z}\{X\}$.
We then have $$\begin{aligned} \Theta_n(f)(\gamma_1,\ldots,\gamma_{n-1},\gamma_n\gamma_{n+1}) &= \varphi(\Theta^\mathrm{univ}_n(f)(\gamma_1,\ldots,\gamma_{n-1},\gamma_n\gamma_{n+1}))\\ & = \varphi(f(\rho^\mathrm{univ}(x_{\gamma_1}),\ldots,\rho^\mathrm{univ}(x_{\gamma_{n-1}}), \rho^\mathrm{univ}(x_{\gamma_n\gamma_{n+1}})))\\ & = \varphi(\Lambda_k(\rho^\mathrm{univ}(y_{i_1}\ldots y_{i_r})))\end{aligned}$$ and $$\begin{aligned} \Theta_{n+1}(\widehat{f})(\gamma_1,\ldots,\gamma_{n+1}) &= \varphi(\Theta^\mathrm{univ}_{n+1}(\widehat{f})(\gamma_1,\ldots,\gamma_{n+1}))\\ & = \varphi(\widehat{f}(\rho^\mathrm{univ}(x_{\gamma_1}),\ldots,\rho^\mathrm{univ}(x_{\gamma_{n+1}})))\\ & = \varphi(\Lambda_k(\rho^\mathrm{univ}(z_{i_1}\ldots z_{i_r}))).\end{aligned}$$ As $\varphi\circ\det\circ\rho^\mathrm{univ}= \varphi\circ D^\mathrm{univ}=D\circ\pi$ as polynomial laws, we get a commutative diagram $$\xymatrix{\mathbb{Z}\{X\}[t]\ar[r]^-{\det\circ\rho^\mathrm{univ}} \ar[d]_{\pi\otimes\mathrm{id}_{\mathbb{Z}[t]}} & E_X(d)[t]\ar[d]^{\varphi\otimes\mathrm{id}_{\mathbb{Z}[t]}} \\ \mathbb{Z}[\Gamma][t]\ar[r]_-{D} & (\Gamma_\mathbb{Z}^d(\mathbb{Z}[\Gamma]))^\mathrm{ab}[t] }$$ As $(\pi\otimes\mathrm{id}_{\mathbb{Z}[t]})(t-y_{i_1}\ldots y_{i_r})= (\pi\otimes\mathrm{id}_{\mathbb{Z}[t]})(t-z_{i_1}\ldots z_{i_r})$, we get that $$\tag{**}\label{eq_det} (\varphi\otimes\mathrm{id}_{\mathbb{Z}[t]})(\det\circ\rho^\mathrm{univ}(t-y_{i_1}\ldots y_{i_r})) =(\varphi\otimes\mathrm{id}_{\mathbb{Z}[t]})(\det\circ\rho^\mathrm{univ}(t-z_{i_1}\ldots z_{i_r})).$$ But, for every $p\in\mathbb{Z}\{X\}$, we have $$\det\circ\rho^\mathrm{univ}(t-p)=t^d+\sum_{k=1}^d\Lambda_k(\rho^\mathrm{univ}(p))t^{d-k} \in E_X(d)[t],$$ so identity [\[eq_det\]](#eq_det){reference-type="eqref" reference="eq_det"} implies that $$\varphi(\Lambda_k(\rho^\mathrm{univ}(y_{i_1}\ldots y_{i_r})))= \varphi(\Lambda_k(\rho^\mathrm{univ}(z_{i_1}\ldots z_{i_r}))),$$ as desired. ◻
--- abstract: | The Model Hypothesis (abbreviated $\mathsf{MH}$) and $\Delta$ are set-theoretic axioms introduced by J. Roitman in her work on the box product problem. Answering some questions of Roitman and Williams on these two principles, we show (1) $\mathsf{MH}$ implies the existence of $P$-points in $\omega^*$ and is therefore not a theorem of $\mathsf{ZFC}$; (2) $\mathsf{MH}$ also fails in the side-by-side Sacks models; (3) as $\Delta$ holds in these models, this implies $\Delta$ is strictly stronger than $\mathsf{MH}$; (4) furthermore, $\Delta$ holds in a large class of forcing extensions in which it was not previously known to hold. address: - | H. Barriga-Acosta\ Department of Mathematics and Statistics\ University of North Carolina at Charlotte\ Charlotte, NC 28223, USA - | W. R. Brian\ Department of Mathematics and Statistics\ University of North Carolina at Charlotte\ Charlotte, NC 28223, USA - | A. S. Dow\ Department of Mathematics and Statistics\ University of North Carolina at Charlotte\ Charlotte, NC 28223, USA author: - Hector Barriga-Acosta - Will Brian - Alan Dow title: On Roitman's principles $\mathsf{MH}$ and $\Delta$ --- [^1] # Introduction The *box product problem* asks whether the countable box product $\square \mathbb{R}^\omega$ is normal. First posed by Tietze in the 1940's, this is perhaps the oldest open question in general or set-theoretic topology (and it is certainly one of the oldest); see [@Williams], [@RW], and [@Roitman2] for references. The problem was partly solved in the 1970's when Mary Ellen Rudin proved that the Continuum Hypothesis ($\mathsf{CH}$) implies $\square \mathbb{R}^\omega$ is paracompact, hence normal, in [@Rudin]. But whether the normality of $\square \mathbb{R}^\omega$ is a consequence of $\mathsf{ZFC}$ or independent of it remains an important open question, and this is what is meant by the "box product problem" today. 
An important variant of the box product problem asks about the space $\square (\omega+1)^\omega$ instead of $\square \mathbb{R}^\omega$. Because $\square (\omega+1)^\omega$ is a closed subspace of $\square \mathbb{R}^\omega$, the normality of $\square \mathbb{R}^\omega$ implies the normality of $\square (\omega+1)^\omega$, and in particular Rudin's work shows that $\square (\omega+1)^\omega$ is normal under $\mathsf{CH}$. But whether $\square (\omega+1)^\omega$ is consistently non-normal is an open problem. This version of the box product problem rose to prominence in the 1970's, in large part because of a result of Kunen proved in [@Kunen]: *If $X_n$ is a compact metric space for each $n \in \omega$, then the box product $\square_{n \in \omega}X_n$ is paracompact if and only if the nabla product $\nabla_{n \in \omega}X_n$ is paracompact.* Recall that the *nabla product* $\nabla_{n \in \omega}X_n$ is defined as the quotient $\square_{n \in \omega}X_n / \!=^*$ of the box product by the "almost equal" relation: i.e., the relation defined by taking $x =^* y$ if and only if $\left\lbrace n \in \omega \colon x(n) \neq y(n) \right\rbrace$ is finite. In particular, $\square (\omega+1)^\omega$ is paracompact if and only if $\nabla (\omega+1)^\omega$ is. Many set-theoretic principles are known to imply $\square (\omega+1)^\omega$ is paracompact: $\mathsf{CH}$(Rudin, [@Rudin]), $\mathfrak d= \omega_1$ (Williams, [@Williams]), $\mathfrak b= \mathfrak d$ (van Douwen, [@vanDouwen]), $\mathfrak d= \mathfrak{c}$ (Roitman, [@Roitman1]), and the axioms $\ensuremath{\mathsf{MH}}\xspace$ and $\Delta$ (Roitman, [@Roitman2]). These last two axioms, $\mathsf{MH}$ and $\Delta$, were introduced by Roitman in [@Roitman2] (see also [@Roitman0; @Roitman1]) specifically to deal with the box product problem, or rather its variation for $\square (\omega+1)^\omega$. 
These axioms represent the minimal sufficient set-theoretic tools required to push through certain arguments showing the paracompactness of $\nabla (\omega+1)^\omega$ (hence the paracompactness of $\square (\omega+1)^\omega$, by Kunen's result). In other words, they are very weak assumptions, seemingly the weakest possible that still enable us to prove the normality of $\square (\omega+1)^\omega$. In fact, the first author proved in joint work with Paul Gartside (see [@BAG]) that $\Delta$ is equivalent to the paracompactness of $\nabla (\omega+1)^\omega$. Roitman left open the question of whether the principles $\mathsf{MH}$ and $\Delta$ might simply follow from $\mathsf{ZFC}$. If so, this would answer the $\square (\omega+1)^\omega$ variant of the box product problem once and for all. Another important question in set-theoretic topology and combinatorial set theory is whether, in the random model, there are $P$-points in $\omega^*$. An alleged proof of a "yes" answer was published by Paul E. Cohen in [@Cohen] (not to be confused with Paul J. Cohen, who invented the technique of forcing), and the problem was thought to be settled for decades. Cohen's argument proceeds in two steps: $(1)$ he defines something called a "pathway" and proves that the existence of pathways implies the existence of $P$-points; $(2)$ he argues that there are pathways in the random model. Several years ago, Osvaldo Guzmán found a flaw in part $(2)$ of Cohen's argument (the flaw is explained in the appendix to [@CG]). Thus the problem of whether there are $P$-points in the random model was reopened, and it remains unsolved. However, part $(1)$ of Cohen's argument is correct: the existence of pathways implies the existence of $P$-points. The first theorem of this paper observes a connection between these two important open problems: 1. $\mathsf{MH}$ implies the existence of pathways. 
Consequently, $\mathsf{MH}$ implies there are $P$-points in $\omega^*$, and $\mathsf{MH}$ is not a theorem of $\mathsf{ZFC}$. All known models without $P$-points satisfy $\mathfrak b= \mathfrak d= \aleph_1$, and therefore also satisfy $\Delta$. Thus, as a fairly immediate consequence of $(1)$, we deduce: - $\Delta$ does not imply $\mathsf{MH}$. This answers a question of Roitman from [@Roitman2]. Furthermore, we prove: - Pathways do not exist in the side-by-side Sacks models (so $\mathsf{MH}$ fails there too). Consequently, the existence of pathways is not equivalent to the existence of $P$-points (and neither is $\mathsf{MH}$). All these results are proved in Section [3](#sec:P){reference-type="ref" reference="sec:P"}. In Section [4](#sec:ccc){reference-type="ref" reference="sec:ccc"}, we show: - Pathways exist in a large class of forcing extensions. What precisely this last result means is explained in Section [4](#sec:ccc){reference-type="ref" reference="sec:ccc"}. For now, let us say only that the result significantly enlarges the variety of models in which $\Delta$ is known to hold. In particular, the results of Section [4](#sec:ccc){reference-type="ref" reference="sec:ccc"} answer a question of Roitman and Williams by showing that forcing to add $\aleph_2$ Cohen reals and then $\aleph_3$ random reals over a model of $\mathsf{CH}$ produces a model of $\Delta$. # $\mathsf{MH}$ and $\Delta$ In this section we define $\mathsf{MH}$ and $\Delta$ and lay out some of what is already known about them. Recall that $H(\kappa)$ denotes the set of all sets hereditarily smaller than $\kappa$; in particular, $H(\omega_1)$ is the set of hereditarily countable sets. 
Roitman's Model Hypothesis $\mathsf{MH}$ states: - For some cardinal $\kappa$, there is an increasing sequence $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ of elementary submodels of $H(\omega_1)$, with $H(\omega_1) = \bigcup_{\alpha< \kappa} M_\alpha$, such that for every $\alpha< \kappa$, there is some $f \in M_{\alpha+1} \cap \omega^\omega$ such that $f \not\leq^* g$ for all $g \in M_\alpha\cap \omega^\omega$. Observe that if $\mathsf{MH}$ holds, then the $\kappa$ from this definition must satisfy $\mathfrak b\leq \kappa\leq \mathfrak d$. For the first inequality, write $f_{\alpha+1}$ for a function in $M_{\alpha+1} \cap \omega^\omega$ witnessing the definition at stage $\alpha$, and note that if $\mathfrak b> \kappa$ then we could find a single function $g \geq^* f_{\alpha+1}$ for every $\alpha< \kappa$; but then $g \in M_\alpha$ for some $\alpha< \kappa$, contradicting the fact that $f_{\alpha+1}$ is not dominated by any function in $M_\alpha$. For the second inequality, if $\mathcal{D}$ is a dominating family, then we cannot have $\mathcal{D}\subseteq M_\alpha$ for any particular $\alpha$, but this situation would certainly arise if $\kappa> \mathfrak d$. Observe also that $\mathfrak d= \mathfrak{c}$ implies $\mathsf{MH}$. The reason is that a Löwenheim-Skolem argument gives us an increasing sequence $\left\langle M_\alpha \colon \alpha< \mathfrak{c} \right\rangle$ of elementary submodels of $H(\omega_1)$ with $H(\omega_1) = \bigcup_{\alpha< \mathfrak{c}} M_\alpha$ such that $|M_\alpha| < |H(\omega_1)| = \mathfrak{c}$ and $M_\alpha\in M_{\alpha+1}$ for every $\alpha< \mathfrak{c}$. But $M_\alpha\in M_{\alpha+1}$ and $|M_\alpha| < \mathfrak d$ implies there is some $f \in M_{\alpha+1} \cap \omega^\omega$ that is unbounded over $M_\alpha\cap \omega^\omega$. Let $\partial \omega^\omega= \bigcup \left\lbrace \omega^A \colon A \subseteq\omega\text{ is infinite and co-infinite} \right\rbrace$. (In [@Roitman2], Roitman denotes this set of partial functions by $\omega^{\subset \omega}$).
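Though $\leq^*$ is a relation on infinite objects, its combinatorics can be previewed in the simplest possible case. The following sketch (purely illustrative, and not part of any argument in this paper; the representation and the function names are ours) encodes an eventually constant function $\omega\to\omega$ by a pair (finite prefix, tail value). For such functions $f \leq^* g$ is decided by the tail values alone, and the $\mathsf{MH}$-style condition "$f$ is not $\leq^*$-dominated by any member of a family" becomes a finite check:

```python
# Toy model: an eventually constant function f: ω → ω is a pair
# (prefix, tail), with f(n) = prefix[n] for n < len(prefix) and
# f(n) = tail for all larger n.

def value(f, n):
    """Evaluate the encoded function at n."""
    prefix, tail = f
    return prefix[n] if n < len(prefix) else tail

def le_star(f, g):
    """f ≤* g: f(n) ≤ g(n) for all but finitely many n.
    For eventually constant f, g this holds iff tail(f) ≤ tail(g),
    since the prefixes only affect finitely many coordinates."""
    return f[1] <= g[1]

def unbounded_over(f, family):
    """f is not ≤*-dominated by any g in the family -- the condition
    MH imposes on some f in M_{α+1} relative to M_α, in this toy setting."""
    return all(not le_star(f, g) for g in family)
```

In this miniature setting a "witness" to the $\mathsf{MH}$-style condition is simply a function whose tail value exceeds every tail value in the given family; the interest of $\mathsf{MH}$, of course, lies in the genuinely infinite case, where no such finite reduction exists.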
Observe that each $x \in \partial \omega^\omega$ corresponds to an element $\bar x \in \square (\omega+1)^\omega$, namely $$\bar x(n) \,=\, \begin{cases} x(n) &\text{ if } n \in \mathrm{dom}(x), \\ \omega&\text{ if not.} \end{cases}$$ If $x,y \in \partial \omega^\omega$, then the set-theoretic difference $x \setminus y$ is a (possibly finite) partial function $\omega\to \omega$. If $h \in \omega^\omega$ and $x \in \partial \omega^\omega$, we define $x \not>^* h$ to mean that there are infinitely many $n \in \mathrm{dom}(x)$ such that $x(n) \leq h(n)$. Roitman's principle $\Delta$ states: - There exists a mapping $\partial \omega^\omega\to \omega^\omega$, which we denote by $x \mapsto h_x$, such that for any $x,y \in \partial \omega^\omega$, if 1. $|x \setminus y| = |y \setminus x| = \aleph_0$ and 2. $\left\lvert \left\lbrace n \in \mathrm{dom}(x) \cap \mathrm{dom}(y) \colon x(n) \neq y(n) \right\rbrace \right\rvert < \aleph_0$, then either $x \setminus y \not>^* h_y$ or $y \setminus x \not>^* h_x$. Observe that $\mathfrak b= \mathfrak d$ implies $\Delta$. To see this, recall that if $\mathfrak b= \mathfrak d$ then there is a $\leq^*$-increasing enumeration $\left\langle f_\alpha \colon \alpha< \mathfrak d \right\rangle$ of a dominating family in $\omega^\omega$ (i.e., a scale). Given $x \in \partial \omega^\omega$, define $h_x = f_\alpha$ where $\alpha$ is the minimal ordinal with $x <^* f_\alpha$ (by which we mean $x(n) < f_\alpha(n)$ for all but finitely many $n \in \mathrm{dom}(x)$). If $x,y \in \partial \omega^\omega$ satisfy (1) and (2) from the statement of $\Delta$, then either $x \setminus y \not>^* h_y$ or $y \setminus x \not>^* h_x$ depending on which of $x$ or $y$ is dominated by an earlier member of the scale. Observe that $\mathsf{MH}$ also implies $\Delta$. To see this, suppose $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ is a sequence witnessing $\mathsf{MH}$.
By replacing each $f_{\alpha+1}$ from the definition of $\ensuremath{\mathsf{MH}}\xspace$ with a strictly larger, strictly increasing function, we may (and do) assume $x \not>^* f_{\alpha+1}$ for every $x \in M_\alpha\cap \partial \omega^\omega$. Now given $x \in \partial \omega^\omega$, define $h_x = f_{\alpha+1}$ where $\alpha$ is minimal with $x \in M_\alpha$. If $x,y \in \partial \omega^\omega$ satisfy (1) and (2) from the statement of $\Delta$, then either $x \setminus y \not>^* h_y$, if $y$ does not appear before $x$ in the sequence of $M_\alpha$'s, or $y \setminus x \not>^* h_x$ if $x$ does not appear before $y$. All the above observations, plus some of the results mentioned in the introduction, are summarized in the following diagram: Arrows that are open at the back indicate implications that are not known to reverse, and arrows that are closed at the back indicate implications that are known not to reverse. For example, it is open whether $\Delta$, or for that matter any of the five statements on the right side of the diagram, is a theorem of $\mathsf{ZFC}$. We have no models in which any of these statements fail, hence no way to see that implications involving them fail to reverse. Thus all the arrows on the right side of the diagram are open at the back. The fact that "$P$-points exist" does not imply "pathways exist" is proved in Section [3](#sec:P){reference-type="ref" reference="sec:P"}, where we show the side-by-side Sacks models have $P$-points but no pathways. (The same is true for the iterated Sacks model, by essentially the same proof, but we provide the details for the side-by-side models.) These same models satisfy $\Delta$, which is how we know that $\Delta$ does not imply the existence of pathways. We also prove in Section [3](#sec:P){reference-type="ref" reference="sec:P"} that the existence of pathways implies both $\Delta$ and the existence of $P$-points.
The fact that neither $\mathfrak b= \mathfrak d$ nor $\mathfrak d= \mathfrak{c}$ implies $\mathsf{CH}$ is common knowledge. So too is the fact that $\mathfrak b< \mathfrak d= \mathfrak{c}$ is consistent, and as $\mathfrak d= \mathfrak{c}$ implies $\Delta$, this means $\Delta$ does not imply $\mathfrak b= \mathfrak d$. To finish justifying how we have drawn our arrows, we claim that $\ensuremath{\mathsf{MH}}\xspace$ does not imply $\mathfrak d= \mathfrak{c}$. To see this, begin with a model $V$ of $\neg \ensuremath{\mathsf{CH}}\xspace$, and then add a generic $G$ for the length-$\omega_1$ finite support iteration of Hechler forcing. In the resulting model $V[G]$, we have $\mathfrak d= \aleph_1 < \mathfrak{c}$, and so $\mathfrak d= \mathfrak{c}$ fails. For each $\alpha< \omega_1$, let $M_\alpha= H(\omega_1)^{V[G_\alpha]}$ (i.e., the hereditarily countable sets in the intermediate model $V[G_\alpha]$ obtained by restricting $G$ to the first $\alpha$ stages of the iteration). If $V$ contains sufficiently large cardinals, then $M_\alpha\preceq H(\omega_1)^{V[G]}$ for all $\alpha$, and therefore $\left\langle M_\alpha \colon \alpha< \omega_1 \right\rangle$ witnesses $\mathsf{MH}$ in $V[G]$. (For example, if there is a weakly compact Woodin cardinal, then by a result of Neeman and Zapletal in [@NZ] there is an elementary embedding $L(\mathbb{R}^{V[G_\alpha]}) \to L(\mathbb{R}^{V[G]})$ for all $\alpha$. This implies $H(\omega_1)^{V[G_\alpha]} \preceq H(\omega_1)^{V[G]}$ with room to spare.) Thus, unless certain large cardinal axioms turn out to be inconsistent, $\mathsf{MH}$ does not imply $\mathfrak d= \mathfrak{c}$. # The consistent failure of $\mathsf{MH}$ {#sec:P} What follows is a slight generalization of the notion of a pathway defined by Paul E. Cohen in [@Cohen].
Given $f,g \in \omega^\omega$, define the *join* of $f$ and $g$ to be the function $f \vee g \in \omega^\omega$ given by $$(f \vee g)(n) \,=\, \begin{cases} f(\frac{n}{2}) &\text{ if $n$ is even,} \\ g(\frac{n-1}{2}) &\text{ if $n$ is odd.} \end{cases}$$ A *pathway* is an increasing sequence $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$ of subsets of $\omega^\omega$, for some cardinal $\kappa$, such that $\bigcup_{\alpha< \kappa}A_\alpha= \omega^\omega$ and, for all $\alpha< \kappa$, - $A_\alpha$ does not dominate $A_{\alpha+1}$, - if $f,g \in A_\alpha$ then $f \vee g \in A_\alpha$, and - $A_\alpha$ is closed under Turing reducibility. The reason this is a slight generalization of Cohen's definition is that Cohen requires $\kappa= \mathfrak d$. We omit this requirement because (1) it is not needed to prove that the existence of pathways implies the existence of $P$-points, and (2) omitting it enables us to prove the following theorem. Note, however, that if $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$ is a pathway, then $\mathfrak b\leq \kappa\leq \mathfrak d$, for exactly the same reasons that a witness to $\mathsf{MH}$ must have length between $\mathfrak b$ and $\mathfrak d$. **Theorem 1**. *$\mathsf{MH}$ implies there is a pathway.* *Proof.* Suppose $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ is a sequence of models witnessing $\mathsf{MH}$. For each $\alpha< \kappa$, let $A_\alpha= M_\alpha\cap \omega^\omega$. Then $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$ is a sequence of subsets of $\omega^\omega$. It is increasing because $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ is increasing, and $\bigcup_{\alpha< \kappa}A_\alpha= \omega^\omega\cap \bigcup_{\alpha< \kappa}M_\alpha= \omega^\omega$. By the definition of $\mathsf{MH}$, some $f_{\alpha+1} \in M_{\alpha+1}$ is not dominated by $M_\alpha\cap \omega^\omega$; i.e., $A_\alpha$ does not dominate $A_{\alpha+1}$.
Because each $M_\alpha$ is an elementary substructure of $H(\omega_1)$, each $M_\alpha$ is closed under basic set-theoretic operations (like taking the join of two functions), which means each $A_\alpha$ is closed under the join operator. Finally, if $f \in A_\alpha$ then $\left\lbrace g \in \omega^\omega \colon g \leq_T f \right\rbrace$ (where $g \leq_T f$ means that $g$ is Turing reducible to $f$) is countable. If $f \in M_\alpha$ then $\left\lbrace g \in \omega^\omega \colon g \leq_T f \right\rbrace \in M_\alpha$ by elementarity, because this set is a member of $H(\omega_1)$ definable from $f$. But countable members of $M_\alpha$ are subsets of $M_\alpha$ (another consequence of elementarity), so $\left\lbrace g \in \omega^\omega \colon g \leq_T f \right\rbrace \subseteq M_\alpha$. Hence $A_\alpha$ is closed under Turing reducibility, and $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$ is a pathway. ◻ A notion of "strong pathways" is defined in [@FBH], and we note in passing that the existence of these strong pathways is very similar to the assertion that there is a witness to $\mathsf{MH}$ with length $\kappa= \omega_1$. (It would be equivalent to it if the word "elementary" were deleted from the definition of $\mathsf{MH}$, and replaced with the weaker requirement that each $M_\alpha$ be a model of $\ensuremath{\mathsf{ZFC}}\xspace^-$.) More on pathways can also be found in [@FB] or in the appendix to [@CG]. **Question 2**. *Is the existence of pathways equivalent to $\mathsf{MH}$?* **Theorem 3**. *The existence of a pathway implies $\Delta$.* The proof of this theorem is just a small modification of the proof that $\mathsf{MH}$ implies $\Delta$ (which is due to Roitman). *Proof.* Suppose $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$ is a pathway. For each $\alpha< \kappa$, there is some function not dominated by $A_\alpha$, which implies there is a non-decreasing function not dominated by $A_\alpha$. 
Let $f_\alpha$ be some such function. For each $x \in \partial \omega^\omega$, define a function $\tilde x \in \omega^\omega$ by setting $$\tilde x (n) \,=\, \begin{cases} x(n)+1 &\text{ if } n \in \mathrm{dom}(x),\\ 0 &\text{ if } n \notin \mathrm{dom}(x) \end{cases}$$ for all $n$. The total function $\tilde x$ is computable from $x$, and the partial function $x$ is computable from $\tilde x$. For each $x \in \partial \omega^\omega$, there is some $\alpha< \kappa$ such that $\tilde x \in A_\alpha$. Let $\alpha_x$ denote the minimum such $\alpha< \kappa$, and define $h_x = f_{\alpha_x}$. Now suppose that $x,y \in \partial \omega^\omega$ satisfy (1) and (2) from the statement of $\Delta$: i.e., $\left\lvert x \setminus y \right\rvert = \left\lvert y \setminus x \right\rvert = \aleph_0$ and $\left\lvert \left\lbrace n \in \mathrm{dom}(x) \cap \mathrm{dom}(y) \colon x(n) \neq y(n) \right\rbrace \right\rvert < \aleph_0$. Furthermore, suppose $\alpha_x \leq \alpha_y$. Because the $A_\alpha$'s are increasing, we have $\tilde x,\tilde y \in A_{\alpha_y}$, and so $\tilde x \vee \tilde y \in A_{\alpha_y}$. Because $x$ and $y$ are each computable from $\tilde x \vee \tilde y$, this implies that any function computable from $x$ and $y$ is a member of $A_{\alpha_y}$. In particular, $A_{\alpha_y}$ contains the function $g$ defined by $$g(n) \,=\, x(\min \left\lbrace m \geq n \colon m \in \mathrm{dom}(x \setminus y) \right\rbrace),$$ which is well-defined because $\mathrm{dom}(x \setminus y)$ is infinite. Thus $h_y = f_{\alpha_y} \not\leq^* g$, that is, there are infinitely many $n$ such that $f_{\alpha_y}(n) > g(n)$. Fix $n$, and let $m = \min \left\lbrace m \geq n \colon m \in \mathrm{dom}(x \setminus y) \right\rbrace$. Using the fact that $f_{\alpha_y}$ is non-decreasing, $f_{\alpha_y}(n) > g(n)$ implies $$h_y(m) = f_{\alpha_y}(m) \geq f_{\alpha_y}(n) > g(n) = x(m)$$ and $m \in \mathrm{dom}(x \setminus y)$. 
Because there are infinitely many $n$ with $f_{\alpha_y}(n) > g(n)$, this implies there are infinitely many $m \in \mathrm{dom}(x \setminus y)$ with $h_y(m) > x(m)$. In other words, $x \setminus y \not>^* h_y$. Similarly, if $\alpha_y \leq \alpha_x$ then $y \setminus x \not>^* h_x$. ◻ Next we include a proof that the existence of pathways implies there are $P$-points in $\omega^*$. While this can be found in Cohen's paper [@Cohen], we have chosen to include a proof for three reasons. First, we have slightly expanded Cohen's definition of a pathway, so we must argue that our generalized pathways still imply the existence of $P$-points. Second, some readers may not trust a proof in a paper known to contain an incorrect proof. The third reason is simply the convenience of the reader who wishes to see the proof. Ketonen proved in [@Ketonen] that $\mathfrak d= \mathfrak{c}$ implies there are $P$-points in $\omega^*$. Given that $\mathfrak d= \mathfrak{c}$ implies the existence of pathways, but not vice versa, the following theorem can be seen as strengthening Ketonen's result. **Theorem 4**. *The existence of a pathway implies there are $P$-points in $\omega^*$.* *Proof.* Our proof more or less follows Ketonen's. Fix a pathway $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$. To begin, note that subsets of $\omega$ can be "coded" by their characteristic functions. Thus, even though $A_\alpha$ contains functions and not sets, we may think of it as describing a collection of subsets of $\omega$: for each $\alpha< \kappa$, define $$\mathrm{Set}_\alpha\,=\, \left\lbrace B \subseteq\omega \colon \chi_{{}_B} \in A_\alpha \right\rbrace.$$ Given $B,C \subseteq\omega$, note that the characteristic function $\chi_{{}_{B \cap C}}$ is computable from $\chi_{{}_B} \vee \chi_{{}_C}$. Because $A_\alpha$ is closed under the $\vee$ operator and Turing reducibility, this means that $\mathrm{Set}_\alpha$ is closed under binary intersections.
It follows (via induction) that $\mathrm{Set}_\alpha$ is closed under finite intersections. Furthermore, $\mathrm{Set}_\alpha$ is closed under Turing reducibility, because $A_\alpha$ is. Also, if $B \in \mathrm{Set}_\alpha$ then the natural enumeration of $B$ (i.e., the function mapping $n$ to $\text{the $n^{\mathrm{th}}$ element of $B$}$) is in $A_\alpha$. Sequences of subsets of $\omega$ can also be coded with functions. Fix some coding/decoding functions $c$ and $d$ such that for any sequence $\vec{s} = \left\langle B_n \colon n \in \omega \right\rangle$ of subsets of $\omega$, $c(\vec s\,) \in \omega^\omega$ and $d(c(\vec s\,)) = \vec s$. Furthermore, do this in such a way that $\chi_{{}_{B_n}}$ (the characteristic function of the $n^\mathrm{th}$ member of $\vec s\,$) is uniformly (in $n$) computable from $c(\vec s\,)$ for all $n$. (This can be accomplished, for example, with a computable pairing function $\omega\times \omega\to \omega$, which can be used to code a sequence of characteristic functions into a single $0$-$1$ sequence.) For each $\alpha< \kappa$, let $$\mathrm{Seq}_\alpha\,=\, \left\lbrace \vec s \in (\mathcal{P}(\omega))^\omega \colon c(\vec s\,) \in A_\alpha \right\rbrace.$$ Because the $A_\alpha$ are closed under Turing reducibility, every subset of $\omega$ computable from some $\vec s \in \mathrm{Seq}_\alpha$ is in $\mathrm{Set}_\alpha$, and every sequence of sets computable from $\vec s$ is in $\mathrm{Seq}_\alpha$. For example, if $\vec s = \left\langle B_n \colon n \in \omega \right\rangle \in \mathrm{Seq}_\alpha$, then $B_n \in \mathrm{Set}_\alpha$ for all $n$, and $\bigcap_{i \leq n}B_i \in \mathrm{Set}_\alpha$ for all $n$, and $\left\langle \bigcap_{i \leq n}B_i \colon n \in \omega \right\rangle \in \mathrm{Seq}_\alpha$. **Claim 1**. *Fix $\alpha< \kappa$, and suppose $\mathcal{F}\subseteq \mathrm{Set}_\alpha$ is a free filter on $\omega$.
For every $f \in \omega^\omega$ not dominated by $A_\alpha$, there is a function $\psi$, computable from $f$, such that $\psi$ maps $\mathrm{Seq}_\alpha\cap \mathcal{F}^\omega$ to infinite subsets of $\omega$, in such a way that* 1. *For every $\vec s \in \mathrm{Seq}_\alpha\cap \mathcal{F}^\omega$, $\psi(\vec s\,)$ is a pseudo-intersection for $\vec s$.* 2. *$\mathcal{F}\cup \left\lbrace \psi(\vec s\,) \colon \vec s \in \mathrm{Seq}_\alpha\cap \mathcal{F}^\omega \right\rbrace$ is a filter base.* *Proof of Claim.$\ $* Fix some $f \in \omega^\omega$ that is not dominated by $A_\alpha$: i.e., $f \not\leq^* g$ for every $g \in A_\alpha$. Given $\vec s = \left\langle B_n \colon n \in \omega \right\rangle \in \mathrm{Seq}_\alpha\cap \mathcal{F}^\omega$, define $$\psi(\vec s\,) \,=\, \textstyle \bigcup_{n \in \omega} \big( f(n) \cap \bigcap_{i \leq n} B_i \big).$$ For each $n \in \omega$, the set $\bigcap_{i \leq n} B_i$ is infinite (because $\mathcal{F}$ is a free filter and $B_0,\dots,B_n \in \mathcal{F}$). As mentioned above, $\vec s \in \mathrm{Seq}_\alpha$ implies $\left\langle \bigcap_{i \leq n}B_i \colon n \in \omega \right\rangle \in \mathrm{Seq}_\alpha$. From (the code for) this sequence, one can compute the function $g$ mapping $n \in \omega$ to the $n^\mathrm{th}$ member of $\bigcap_{i \leq n}B_i$. So $g \in A_\alpha$. Hence $g \not>^* f$, which implies $f(n) \cap \bigcap_{i \leq n} B_i$ has size $\geq\! n$ for infinitely many $n$. Thus $\psi(\vec s\,)$ is infinite. As $\psi(\vec s\,) \setminus B_n \subseteq \bigcup_{m < n} f(m)$ is finite for all $n$, this means $\psi(\vec s\,)$ is a pseudo-intersection for $\vec s$. To finish the proof of the claim, it remains to check that $$\mathcal{G}\,=\, \mathcal{F}\cup \left\lbrace \psi(\vec s\,) \colon \vec s \in \mathrm{Seq}_\alpha\cap \mathcal{F}^\omega \right\rbrace$$ is a filter base.
To this end, let us fix some $F_0,F_1,\dots,F_{k-1} \in \mathcal{F}$, and some $\vec s_0 = \left\langle B^0_n \colon n \in \omega \right\rangle,\vec s_1 = \left\langle B^1_n \colon n \in \omega \right\rangle,\dots,\vec s_{\ell-1} = \left\langle B^{\ell-1}_n \colon n \in \omega \right\rangle$ in $\mathrm{Seq}_\alpha\cap \mathcal{F}^\omega$. Then define a new sequence $\vec{\hspace{.4mm}t}\, = \left\langle Y_n \colon n \in \omega \right\rangle$ of subsets of $\omega$ by setting $$Y_n \,=\, F_0 \cap F_1 \cap \dots \cap F_{k-1} \cap B^0_n \cap B^1_n \cap \dots \cap B_n^{\ell-1}$$ for all $n \in \omega$. Note that $F_i \in \mathrm{Set}_\alpha$ for every $i < k$ and $\left\langle B^i_n \colon n \in \omega \right\rangle \in \mathrm{Seq}_\alpha$ for every $i < \ell$. Because the sequence $\vec{\hspace{.4mm}t}$ is computable from these inputs, $\vec{\hspace{.4mm}t} \in \mathrm{Seq}_\alpha$. Furthermore, each $Y_n$ is a member of $\mathcal{F}$. Thus $\vec{\hspace{.4mm}t}$ is in the domain of $\psi$. By our definition of $\psi$ and of the $Y_n$, we have $\psi(\vec{\hspace{.4mm}t}\,) \subseteq F_i$ for every $i < k$ and $\psi(\vec{\hspace{.4mm}t}\,) \subseteq\psi(\vec s_i)$ for every $i < \ell$. Hence $\psi(\vec{\hspace{.4mm}t}\,) \in \mathcal{G}$ and $$\psi(\vec{\hspace{.4mm}t}\,) \,\subseteq\, \textstyle \bigcap_{i < k}F_i \cap \bigcap_{i < \ell} \psi(\vec s_i).$$ Thus any finitely many members of $\mathcal{G}$ have a subset of their intersection in $\mathcal{G}$; in other words, $\mathcal{G}$ is a filter base. Returning to the proof of the theorem, we now produce an increasing sequence $\left\langle \mathcal{F}_\gamma \colon \gamma< \kappa \right\rangle$ of filter bases via transfinite recursion. For the base case, let $\mathcal{F}_0$ be the Fréchet filter. If $\beta$ is a limit ordinal, let $\mathcal{F}_\beta= \bigcup_{\xi < \beta}\mathcal{F}_\xi$.
If $\beta$ is a successor ordinal, say $\beta= \alpha+1$, then at stage $\beta$ of the recursion we have an increasing sequence $\left\langle \mathcal{F}_\xi \colon \xi \leq \alpha \right\rangle$ of filter bases. Let $\mathcal{U}_\alpha$ be some ultrafilter extending $\mathcal{F}_\alpha$, and let $\mathcal{F}$ be the (free) filter generated by the filter base $\mathcal{U}_\alpha\cap \mathrm{Set}_\alpha$. Note that $\mathcal{F}_\alpha\subseteq\mathcal{F}$. There is a function $f \in A_\beta$ not dominated by $A_\alpha$. Let $\psi$ be the function described in our claim, as defined from $f$. Applying our claim, $$\mathcal{F}_\beta\,=\, \mathcal{F}\cup \left\lbrace \psi(\vec s\,) \colon \vec s \in \mathrm{Seq}_\alpha\cap \mathcal{F}^\omega \right\rbrace$$ is a filter base. As $\mathcal{F}_\alpha\subseteq\mathcal{F}$, we have $\mathcal{F}_\alpha\subseteq\mathcal{F}_\beta$. Furthermore, because $\psi(\vec s\,)$ is computable from $f$ and $\vec s$, we have $\psi(\vec s\,) \in \mathrm{Set}_\beta$ whenever $\vec s \in \mathrm{Seq}_\alpha$. Hence $\mathcal{F}_\beta\subseteq\mathrm{Set}_\beta$. This completes the recursion. Let $\mathcal{U}= \bigcup_{\gamma< \kappa} \mathcal{F}_\gamma$. We claim that $\mathcal{U}$ is a $P$-point in $\omega^*$. To see that $\mathcal{U}$ is an ultrafilter, fix some $A \subseteq\omega$. There is some $\alpha< \kappa$ such that $A \in \mathrm{Set}_\alpha$. At stage $\beta= \alpha+1$ of our recursion, we chose an ultrafilter $\mathcal{U}_\alpha$ and then described a filter base $\mathcal{F}_\beta$ extending $\mathcal{U}_\alpha\cap \mathrm{Set}_\alpha$. This implies that either $A \in \mathcal{F}_\beta$ or else $\omega\setminus A \in \mathcal{F}_\beta$. Hence either $A \in \mathcal{U}$ or else $\omega\setminus A \in \mathcal{U}$. As we already know $\mathcal{U}$ is a filter base, this means $\mathcal{U}$ is an ultrafilter.
To see that $\mathcal{U}$ is a $P$-point, suppose $\vec s = \left\langle B_n \colon n \in \omega \right\rangle$ is a sequence of sets in $\mathcal{U}$. There is some $\alpha< \kappa$ with $\vec s \in \mathrm{Seq}_\alpha$. At stage $\beta= \alpha+1$ of our recursion, we chose an ultrafilter $\mathcal{U}_\alpha$ and obtained a filter base $\mathcal{F}_\beta$ extending $\mathcal{U}_\alpha\cap \mathrm{Set}_\alpha$. For each $n \in \omega$, we have $B_n \in \mathrm{Set}_\alpha$, so because $\mathcal{U}_\alpha$ is an ultrafilter, either $B_n \in \mathcal{U}_\alpha$ or $\omega\setminus B_n \in \mathcal{U}_\alpha$. But $\mathcal{U}\supseteq \mathcal{F}_\beta\supseteq \mathcal{U}_\alpha\cap \mathrm{Set}_\alpha$, so in fact $B_n \in \mathcal{U}_\alpha$ for every $n$. Thus $\vec s = \left\langle B_n \colon n \in \omega \right\rangle \in \mathrm{Seq}_\alpha\cap (\mathcal{U}_\alpha\cap \mathrm{Set}_\alpha)^\omega$, and at stage $\beta$ of our recursion we added to $\mathcal{F}_\beta$ a pseudo-intersection $\psi(\vec s\,)$ for the sequence $\vec s$. Because $\mathcal{F}_\beta\subseteq\mathcal{U}$, this shows that $\mathcal{U}$ contains a pseudo-intersection for $\vec s$. ◻ Ketonen proved a little more than just that $\mathfrak d= \mathfrak{c}$ implies the existence of $P$-points; he showed that $\mathfrak d= \mathfrak{c}$ implies every filter generated by $<\!\mathfrak d$ sets extends to a $P$-point. Let us point out that, with a little more work, the above argument can be adjusted to show that if there is a pathway of length $\kappa$, then every filter generated by $<\!\kappa$ sets extends to a $P$-point. **Corollary 5**. *It is consistent that $\mathsf{MH}$ is false.* This follows from the previous two theorems, plus the fact that it is consistent there are no $P$-points in $\omega^*$. This consistency result was first proved by Shelah (see [@Shelah; @Wimmers]). 
Later work by Chodounský and Guzmán in [@CG] shows there are no $P$-points in the Silver model, or even in side-by-side Silver extensions where (unlike in Shelah's model) $\mathfrak{c}$ may be arbitrarily large. **Corollary 6**. *$\Delta$ does not imply $\mathsf{MH}$.* *Proof.* In the model of Shelah without $P$-points, or in the Silver extensions studied by Chodounský and Guzmán, $\mathfrak b= \mathfrak d= \aleph_1$. As mentioned in the previous section, $\mathfrak b= \mathfrak d$ implies $\Delta$. So these are models of $\Delta$ and not $\ensuremath{\mathsf{MH}}\xspace$. ◻ Our next result gives yet another instance of the failure of $\mathsf{MH}$ and the non-existence of pathways, this time in a model with $P$-points. Thus the converse of Theorem [Theorem 4](#thm:Ppoints){reference-type="ref" reference="thm:Ppoints"} does not hold: the existence of pathways is not equivalent to the existence of $P$-points. The Sacks forcing $\mathbb{S}$ is the poset of all perfect subtrees of $2^{<\omega}$, ordered by inclusion. (Recall that $T \subseteq 2^{<\omega}$ is *perfect* if every node in $T$ has two incompatible extensions.) The Sacks poset is an $\omega^\omega$-bounding, Axiom A (hence proper) forcing. (See [@BL] or [@GQ] for a reference.) For a given cardinal $\kappa$, let $\mathbb{S}_\kappa$ denote the countable support product of $\kappa$ copies of $\mathbb{S}$. Any model obtained by forcing with $\mathbb{S}_\kappa$, for some regular $\kappa> \aleph_1$, over a model of $\mathsf{GCH}$ is called the side-by-side Sacks model with $\mathfrak{c}= \kappa$. **Theorem 7**. *Let $\kappa$ be the successor of a regular uncountable cardinal. There are no pathways in the side-by-side Sacks model with $\mathfrak{c}= \kappa$.* *Proof.* Let $\kappa$ be the successor of an uncountable regular cardinal, suppose $V \models \ensuremath{\mathsf{GCH}}\xspace$, and let $G$ be a $\mathbb{S}_\kappa$-generic filter over $V$. Aiming for a contradiction, suppose there is a pathway in $V[G]$. 
Recall that $\mathbb{S}_\kappa\Vdash\mathfrak d= \aleph_1$, and that a pathway cannot have length $>\!\mathfrak d$. Thus all pathways in $V[G]$ have length $\omega_1$. Furthermore, we claim there is a pathway $\left\langle A_\alpha \colon \alpha< \omega_1 \right\rangle$ in $V[G]$ such that $A_\alpha$ does not dominate $A_{\alpha+1} \cap V$ for all $\alpha$. To this end, let $\left\langle A^0_\alpha \colon \alpha< \omega_1 \right\rangle$ be an arbitrary pathway in $V[G]$. Fix $\alpha< \omega_1$. By the definition of a pathway, there is some $g \in A_{\alpha+1}^0$ not dominated by $A_\alpha^0$. Because $\mathbb{S}_\kappa$ is $\omega^\omega$-bounding, there is some $h \in V \cap \omega^\omega$ with $g \leq^* h$; then $h$ is not dominated by $A_\alpha^0$ either. Furthermore, $h \in A_\beta^0$ for some $\beta$, and we may take $\beta> \alpha$ because the members of a pathway are increasing. Thus there is some $\beta> \alpha$ such that $A_\beta^0 \cap V$ is not dominated by $A_\alpha^0$. Therefore, by thinning out the sequence $\left\langle A_\alpha^0 \colon \alpha< \omega_1 \right\rangle$ appropriately, we can obtain a pathway $\left\langle A_\alpha \colon \alpha< \omega_1 \right\rangle$ such that $A_\alpha$ does not dominate $A_{\alpha+1} \cap V$ for all $\alpha$. Fix a pathway $\langle A_\alpha:\, \alpha< \omega_1 \rangle$ in $V[G]$ such that $A_\alpha$ does not dominate $A_{\alpha+1} \cap V$ for all $\alpha< \omega_1$. Also fix a corresponding sequence $\langle \dot A_\alpha:\, \alpha< \omega_1 \rangle$ of nice names in $V$. As in the proof of Theorem [Theorem 4](#thm:Ppoints){reference-type="ref" reference="thm:Ppoints"}, we wish to reason not only about the functions in some $A_\alpha$, but also about the things coded by functions in $A_\alpha$. For example, we can fix a computable bijection $\omega\to 2^{<\omega}$, and via this bijection, Sacks conditions $T \subseteq 2^{<\omega}$ can be "coded" as subsets of $\omega$, which in turn can be coded (via characteristic functions) as members of $\omega^\omega$.
Likewise, a mapping between two subsets of $2^{<\omega}$ can be coded as a function $\omega\to \omega$ in a canonical, computable way. Furthermore, because each $A_\alpha$ is closed under Turing reducibility, so are all these coded objects. For example, a subtree of $2^{<\omega}$ is coded in $A_\alpha$ if it is computable from a bijection between two subsets of $2^{<\omega}$ that is coded in $A_\alpha$. All of this will be used without further comment in what follows. Let $\dot x_\gamma$ be a (nice) name for the $\mathbb{S}$-generic real added by the $\gamma^{\mathrm{th}}$ coordinate of $\mathbb{S}_\kappa$. In $V[G]$, every subset of $2^{<\omega}$ is canonically coded as a function, and in particular the real $x_\gamma= (\dot x_\gamma)_G$ (which is naturally identified with a branch through $2^{<\omega}$) has a code appearing in some $A_\alpha$. Thus, in $V$, there is for each $\gamma< \kappa$ some $p_\gamma\in \mathbb{S}_\kappa$ and $\alpha_\gamma< \omega_1$ such that $p_\gamma\Vdash\dot x_\gamma$ is coded in $\dot A_{\alpha_\gamma}$. Working in the ground model, we have $|\mathbb{S}| = \aleph_1$ (by $\mathsf{CH}$). Because $\kappa$ is regular and $>\!\aleph_1$, there is some particular $T \in \mathbb{S}$ and a stationary $S \subseteq\kappa$ such that $p_\gamma(\gamma) = T$ for all $\gamma\in S$. By the same reasoning, we may (and do) assume, by thinning out $S$ if necessary, that there is some particular $\alpha< \omega_1$ with $\alpha_\gamma= \alpha$ for all $\gamma\in S$. Using the generalized $\Delta$-system lemma (which applies because $\mathsf{GCH}$ holds and $\kappa$ is not the successor of a singular cardinal), we may (and do) assume, by thinning out $S$ again if needed, that there is some $\bar p \in \mathbb{S}_\kappa$ such that $p_\gamma\!\restriction\!\gamma= \bar p$ for all $\gamma\in S$, and $(\mathrm{supp}(p_\gamma) \cap \mathrm{supp}(p_\delta)) = \mathrm{supp}(\bar p)$ for all $\gamma,\delta\in S$ with $\gamma\neq \delta$. 
(In other words, the supports of the conditions $p_\gamma$, $\gamma\in S$, form a generalized $\Delta$-system of countable sets, with $\mathrm{supp}(\bar p)$ being the root of the $\Delta$-system.) To summarize: we have a stationary $S \subseteq\kappa$, $T \in \mathbb{S}$, $\alpha< \omega_1$, and $\bar p \in \mathbb{S}_\kappa$ such that, for all $\gamma\in S$, $p_\gamma\Vdash\dot x_\gamma$ is coded in $\dot A_\alpha$, $p_\gamma(\gamma) = T$, $p_\gamma\!\restriction\!\gamma= \bar p$, and if $\gamma\neq \delta\in S$ then $(\mathrm{supp}(p_\gamma) \cap \mathrm{supp}(p_\delta)) = \mathrm{supp}(\bar p)$. Let $B \subseteq T$ denote the set of all branching nodes of $T$: that is, $B = \left\lbrace t \in T \colon t \!\,^{\frown}0, t \!\,^{\frown}1 \in T \right\rbrace$. Fix an order-isomorphism $\varphi: 2^{<\omega} \to B$. (Because $T$ is a perfect tree, $B$ is in fact order-isomorphic to $2^{<\omega}$.) Because $\varphi$ can be canonically coded as a function in $\omega^\omega$, there is some $\mathbb{S}_\kappa$-condition $\bar q \leq \bar p$ and some $\beta\geq \alpha$ such that $\bar q \Vdash$ $\varphi$ is coded in $\dot A_\beta$. Recall that, in $V[G]$, $A_\beta$ does not dominate $A_{\beta+1} \cap V$. Thus, in $V$, there is some condition $\bar r \leq \bar q$ and some function $f \in \omega^\omega$ such that $\bar r \Vdash f \in \dot A_{\beta+1}$ and $f$ is not dominated by any function in $\dot A_\beta$. Because $\left\lbrace \mathrm{supp}(p_\gamma) \colon \gamma\in S \right\rbrace$ is a (generalized) $\Delta$-system and $\mathrm{supp}(\bar r)$ is countable, $\mathrm{supp}(p_\gamma) \cap \mathrm{supp}(\bar r) = \mathrm{supp}(\bar p)$ for all but countably many $\gamma\in S$. Thinning out $S$ one last time, let us suppose $\mathrm{supp}(p_\gamma) \cap \mathrm{supp}(\bar r) = \mathrm{supp}(\bar p)$ for all $\gamma\in S$.
Note that this implies $p_\gamma$ and $\bar r$ have a common extension (namely $(p_\gamma\setminus p_\gamma\!\restriction\!\mathrm{supp}(\bar p)) \cup \bar r$) for all $\gamma\in S$. Fix $\gamma\in S$. Let $\dot y_\gamma$ be a nice name for the function $\varphi^{-1} \circ x_\gamma$ in $V[G]$. In other words, $\dot y_\gamma$ is a name for an element of $2^\omega$ which reveals, via $\varphi$, the way in which the $\mathbb{S}$-generic real $x_\gamma$ traces through $B$. Any common extension of $\bar r$ and $p_\gamma$ forces that all of $\dot x_\gamma,\varphi,\varphi^{-1},\dot y_\gamma$ are coded in $\dot A_\beta$. (For $\dot x_\gamma$, this is true because $p_\gamma\Vdash$ $\dot x_\gamma$ is coded in $\dot A_\alpha$ and $\dot A_\alpha\subseteq\dot A_\beta$; for $\varphi$ this is true because $\bar r \Vdash$ $\varphi$ is coded in $\dot A_\beta$; for $\varphi^{-1}$ and $\dot y_\gamma$, this is true because, in the extension, (codes for) $\varphi^{-1}$ and $y_\gamma$ are computable from (codes for) $\varphi$ and $x_\gamma$.) In $V[G]$, let $I_\gamma= \left\lbrace j \colon y_\gamma(j) = 1 \right\rbrace = y_\gamma^{-1}(1)$ and let $h_\gamma(n)$ denote the $n^\mathrm{th}$ element of $I_\gamma$. Let $\dot I_\gamma$ and $\dot h_\gamma$ be nice names for these two objects in $V$. Because $I_\gamma$ and $h_\gamma$ are computable from $y_\gamma$, any common extension of $\bar r$ and $p_\gamma$ forces that $\dot I_\gamma$ and $\dot h_\gamma$ are coded in $\dot A_\beta$ (as well as $\dot x_\gamma,\varphi,\varphi^{-1},\dot y_\gamma$). Given $C \in [\omega]^\omega$, let $T_C \,=\, \left\lbrace t \in 2^{<\omega} \colon t^{-1}(1) \subseteq C \right\rbrace.$ In other words, $T_C$ is the tree that branches to $0$ at every node, and branches also to $1$ at (and only at) levels in $C$. Let $\varphi[T_C] \!\downarrow$ denote the downward closure of the image of $T_C$ under $\varphi$. (So, for example, $T_\omega= 2^{<\omega}$ and $\varphi[T_\omega] \!\downarrow \ = B\!\downarrow\ = T$.)
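The trees $T_C$ can be pictured concretely. The following sketch (Python, illustrative only; the choice $C = \{0,2\}$ and the length cutoff $3$ are arbitrary) enumerates the nodes of $T_C$ up to a finite length and checks the two defining features: every node extends by $0$, and a node extends by $1$ exactly at levels in $C$.

```python
from itertools import product

def T_C(C, max_len):
    """All nodes of T_C = { t in 2^{<omega} : t^{-1}(1) ⊆ C } of length <= max_len,
    represented as tuples of 0s and 1s."""
    return {t for k in range(max_len + 1)
              for t in product((0, 1), repeat=k)
              if all(i in C for i, b in enumerate(t) if b == 1)}

nodes = T_C({0, 2}, 3)
# Every node of T_C extends by 0, and a node of length i extends by 1 iff i is in C,
# so T_C is a perfect tree whenever C is infinite.
```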
Note that $\varphi[T_C] \!\downarrow\, \in \mathbb{S}$ and $\varphi[T_C] \!\downarrow\ \leq T$ (as $\mathbb{S}$-conditions). Because $f \in V$, there is some infinite $C \subseteq\omega$ with $C \in V$ such that the $n^\mathrm{th}$ element of $C$ is $\geq\!f(n)$. For any $\gamma\in S$, observe that $$s \,=\, (p_\gamma\setminus \bar p) \cup \bar r \cup (\gamma,\varphi[T_C]\!\downarrow)$$ is a common extension of $p_\gamma$ and $\bar r$. Because $s(\gamma) = \varphi[T_C]\!\downarrow\,$, $s$ forces that $\dot x_\gamma$ is a branch through the tree $\varphi[T_C]\!\downarrow\,$. But for any branch $b$ through $\varphi[T_C]\!\downarrow\,$, we can have $\varphi^{-1} \circ b(n) = 1$ only if $n \in C$ (by the definition of $\varphi[T_C]\!\downarrow\,$). Thus $s \Vdash\dot I_\gamma\subseteq C$. But if $I_\gamma\subseteq C$, then the $n^\mathrm{th}$ element of $I_\gamma$ is at least as large as the $n^\mathrm{th}$ element of $C$, which in turn is $\geq\!f(n)$. Thus $s$ forces that the function $\dot h_\gamma$ enumerating $\dot I_\gamma$ dominates $f$: that is, $s \Vdash\dot h_\gamma\geq f$. But $s \leq \bar r$, so $s$ also forces that $f$ is not dominated by any function in $\dot A_\beta$, while at the same time $s \Vdash\dot h_\gamma\in \dot A_\beta$. Contradiction! ◻ Let us note in passing that, with a little more work, the above proof can be modified to give the same conclusion in the iterated Sacks model. **Corollary 8**. *The existence of $P$-points does not imply the existence of pathways.* *Proof.* By the previous theorem, it suffices to note that there are $P$-points in the side-by-side Sacks models. This was proved by Laver in [@Laver]. ◻ Interestingly, Laver's argument in [@Laver] does not show that all $P$-points from the ground model are preserved by $\mathbb{S}_\kappa$ (although this is true for the iterated Sacks poset). Instead, Laver constructs specific $P$-points in the ground model that are preserved by $\mathbb{S}_\kappa$. It is an open question whether all ground model $P$-points generate $P$-points in side-by-side Sacks models.
Note that neither of the results from this section addresses the question of whether it is consistent for $\Delta$ to fail. In fact, $\mathfrak b= \mathfrak d= \aleph_1$ in every known model without $P$-points, and in the side-by-side and iterated Sacks models. Because $\mathfrak b= \mathfrak d$ implies $\Delta$, all the models considered in this section satisfy $\Delta$. **Question 9** (Roitman). *Is $\Delta$ a theorem of $\mathsf{ZFC}$?* Because both $\mathfrak b= \mathfrak d$ and $\mathfrak d= \mathfrak{c}$ imply $\Delta$, any model in which $\Delta$ fails must satisfy $\mathfrak b< \mathfrak d< \mathfrak{c}$, and in particular it must satisfy $\mathfrak{c}\geq \aleph_3$. On the one hand, this means that a countable support iteration of proper posets is not useful for solving the problem. On the other hand, the results in the next section make it seem doubtful that a ccc poset could be useful either. # Finding pathways in forcing extensions {#sec:ccc} Roitman proved in [@Roitman2] that $\Delta$ implies $\nabla (\omega+1)^\omega$ is paracompact. As mentioned in Section 2, later work of the first author and Gartside in [@BAG] shows that $\Delta$ is in fact equivalent to the paracompactness of $\nabla (\omega+1)^\omega$. Roitman also proved in [@Roitman1] that after forcing with any ccc iteration whose length has uncountable cofinality, $\nabla (\omega+1)^\omega$ is paracompact (even more: she showed $\nabla_{n \in \omega}X_n$ is paracompact whenever each $X_n$ is compact). As we now know $\Delta$ is equivalent to the paracompactness of $\nabla (\omega+1)^\omega$, this means that $\Delta$ holds in such forcing extensions. In this section we strengthen Roitman's result in two ways: by strengthening the conclusion from $\Delta$ to the existence of pathways, and by extending the class of posets for which this conclusion holds.
Given a ccc poset $\mathbb{P}$, let $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$ denote the following statement: - For some uncountable cardinal $\kappa$, there is an increasing sequence $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ of transitive models of $\ensuremath{\mathsf{ZFC}}\xspace^-$ such that $\mathbb{P}\subseteq\bigcup_{\alpha< \kappa}M_\alpha$, $\bigcup_{\alpha< \kappa}M_\alpha$ is countably closed, and for all $\alpha< \kappa$, - $M_{\alpha+1} \cap \omega^\omega$ is not dominated by $M_\alpha\cap \omega^\omega$, - $\mathbb{P}_\alpha= M_\alpha\cap \mathbb{P}\in M_\alpha$, and $\mathbb{P}_\alpha<\hspace{-2.5mm} \cdot \hspace{1.5mm}\mathbb{P}$, and - $M_\alpha$ witnesses that $M_\alpha\cap \mathbb{P}= \mathbb{P}_\alpha$ is $\omega^\omega$-bounding, in the sense that for every nice name $\dot f$ for a function in $\omega^\omega$ with $\dot f \in M_\alpha$, there is some $g \in M_\alpha\cap \omega^\omega$ such that $\Vdash_{\mathbb{P}_\alpha} \dot f <^* g$. Note that $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$ implies $\mathbb{P}$ is $\omega^\omega$-bounding. This is because if $\dot f$ is a (nice) name for a function in $\omega^\omega$, then because $\mathbb{P}$ is ccc, $\mathbb{P}\subseteq\bigcup_{\alpha< \kappa}M_\alpha$, and $\bigcup_{\alpha< \kappa}M_\alpha$ is countably closed, we get $\dot f \in \bigcup_{\alpha< \kappa}M_\alpha$. Because $M_\alpha\cap \mathbb{P}<\hspace{-2.5mm} \cdot \hspace{1.5mm}\mathbb{P}$ and $M_\alpha$ witnesses that $M_\alpha\cap \mathbb{P}$ is $\omega^\omega$-bounding, this implies that the evaluation of $\dot f$ in an extension will be dominated by some function in the ground model. Hence $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$ may be thought of as a strong version of the $\omega^\omega$-bounding property. The insistence that the $M_\alpha$ be transitive is not strictly necessary. It is fine, for example, if the $M_\alpha$ are elementary submodels of some $H(\theta)$.
(If so, then identifying the members of $\mathbb{P}$ with ordinals $<\!\mu = |\mathbb{P}|$ and replacing each $M_\alpha$ with its transitive collapse does not change any other aspect of the definition. In this sense, an elementary submodels version of the definition implies the stated version.) What is really needed is that the $M_\alpha$ agree with $V$ on what $\omega$ and $\omega^\omega$ are, which can fail in non-standard models. **Theorem 10**. *If $\mathbb{P}$ is a ccc poset and $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$ holds, then $\mathbb{P}$ forces that pathways exist.* *Proof.* Let $\mathbb{P}$ be a ccc poset and suppose $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$ holds in the ground model $V$. Let $G$ be a $\mathbb{P}$-generic filter over $V$. Fix a sequence $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ in $V$ witnessing $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$. For each $\alpha< \kappa$, let $$A_\alpha\,=\, \left\lbrace (\dot f)_G \colon \dot f \in M_\alpha\text{ and $\dot f$ is a nice $\mathbb{P}$-name for a function $\omega\to \omega$} \right\rbrace.$$ We claim that $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$ is a pathway in $V[G]$. As the sequence $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ is increasing, $\left\langle A_\alpha \colon \alpha< \kappa \right\rangle$ is too. If $f \in \omega^\omega$ in $V[G]$, then there is a nice name $\dot f$ for $f$ in $V$. Because $\mathbb{P}$ is ccc, $\dot f$ consists of countably many pairs of the form $((m,n),p)$ with $p \in \mathbb{P}$. Because $\mathbb{P}\subseteq\bigcup_{\alpha< \kappa}M_\alpha$ and $\bigcup_{\alpha< \kappa}M_\alpha$ is countably closed, $\dot f \in \bigcup_{\alpha< \kappa}M_\alpha$. This implies $f \in A_\alpha$ for some $\alpha< \kappa$. As $f$ was arbitrary, $\bigcup_{\alpha< \kappa}A_\alpha= \omega^\omega$. Next, fix $\alpha< \kappa$. 
Because $M_{\alpha+1} \cap \omega^\omega$ is not dominated by $M_\alpha\cap \omega^\omega$, there is some $g \in M_{\alpha+1} \cap \omega^\omega$ such that $g \not<^* h$ for all $h \in M_\alpha\cap \omega^\omega$. If $f \in A_\alpha$, there is some nice name $\dot f \in M_\alpha$ with $(\dot f)_G = f$. Because $M_\alpha$ witnesses that $\mathbb{P}_\alpha$ is $\omega^\omega$-bounding, there is some $h \in M_\alpha\cap \omega^\omega$ such that $\Vdash_{\mathbb{P}_\alpha} \dot f <^* h$. Because $\mathbb{P}_\alpha<\hspace{-2.5mm} \cdot \hspace{1.5mm}\mathbb{P}$, this means $\Vdash_\mathbb{P}\dot f <^* h$. Thus in the extension, $f <^* h$, which implies $g \not<^* f$. As $f$ was an arbitrary member of $A_\alpha$, this means $g$ witnesses that $A_{\alpha+1}$ is not dominated by $A_\alpha$. Now suppose $f,g \in A_\alpha$. This means there are nice names $\dot f$ and $\dot g$ for $f$ and $g$ in $M_\alpha$. But then $$\left\lbrace ((2i,j),p) \colon ((i,j),p) \in \dot f \right\rbrace \cup \left\lbrace ((2i+1,j),p) \colon \vphantom{\dot f}((i,j),p) \in \dot g \right\rbrace$$ is a nice name for $f \vee g$, and this name is in $M_\alpha$ because it is definable from $\dot f$ and $\dot g$ and $M_\alpha\models \ensuremath{\mathsf{ZFC}}\xspace^-$. Hence $f \vee g \in A_\alpha$. Finally, suppose $g \in A_\alpha$ and $f \in \omega^\omega$ is Turing reducible to $g$. This means $f$ is computable from $g$, in the sense that there is an oracle Turing machine $T$ that, when using $g$ as an oracle, outputs $f(n)$ on input $n$ for all $n \in \omega$. For any given $n$, there is some $h(n)$ large enough that $T$ computes $f(n)$ in $\leq\! h(n)$ steps. In particular, $T$ does not read more than the first $h(n)$ values in $g$, and if $g'$ is any function with $g \!\restriction\!h(n) = g' \!\restriction\!h(n)$, then $T$ will correctly compute $f(n)$ using $g'$ as an oracle instead of $g$.
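This finiteness of oracle use is easy to see in miniature. Below is an illustrative Python toy (the reduction `compute_f` and its usage bound are hypothetical, standing in for the machine $T$ and the function $h$): a computation that queries its oracle only below a known bound returns the same answer on any oracle agreeing below that bound.

```python
def compute_f(n, oracle):
    """A toy oracle reduction: to produce f(n) it queries oracle(i) only for
    i < n + 2, so n + 2 plays the role of the usage bound h(n)."""
    return sum(oracle(i) for i in range(n + 2))

g       = lambda i: i * i
g_prime = lambda i: i * i if i < 10 else 0   # agrees with g below 10

# Whenever the usage bound n + 2 stays at or below 10, the two oracles are
# interchangeable, as in the use-principle remark above.
agree = all(compute_f(n, g) == compute_f(n, g_prime) for n in range(8))
```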
For any given finite sequence $\left\langle k_i \colon i < n \right\rangle$ of natural numbers, define a function $G_{\left\langle k_i \colon i < n \right\rangle}: \omega\to \omega$ by $$G_{\left\langle k_i \colon i < n \right\rangle}(i) \,=\, \begin{cases} k_i &\text{ if } i < n \\ 0 &\text{ if } i \geq n. \end{cases}$$ Because $M_\alpha$ witnesses that $\mathbb{P}_\alpha$ is $\omega^\omega$-bounding, there is some ground model function $\bar h \in M_\alpha\cap \omega^\omega$ such that $h(n) \leq \bar h(n)$ for all $n$. Thus, letting $\dot g \in M_\alpha$ be some nice name for $g$, $$\begin{aligned} \Big\{ \, ((n,m),p) \,:\ \, &p \in \mathbb{P}_\alpha\text{ decides $\left\langle \dot g(i) \colon i < \bar h(n) \right\rangle = \left\langle k_i \colon i < \bar h(n) \right\rangle$, and } \\ & T \text{ outputs } m \text{ with input } n \text{ using $G_{\left\langle k_i \colon i < \bar h(n) \right\rangle}$ as an oracle} \, \Big\}\end{aligned}$$ is a name for $f$. (This uses the fact that $\mathbb{P}_\alpha<\hspace{-2.5mm} \cdot \hspace{1.5mm}\mathbb{P}$, which implies that the members of $\mathbb{P}_\alpha$ that decide $\left\langle \dot g(i) \colon i < \bar h(n) \right\rangle$ form a dense subset of $\mathbb{P}$.) This name is in $M_\alpha$, because it is definable (as above) from $\dot g$, $\bar h$, $T$, and $\mathbb{P}$, all of which are in $M_\alpha$. Because $M_\alpha\models \ensuremath{\mathsf{ZFC}}\xspace^-$, there is also a nice name for $f$ in $M_\alpha$. Hence $f \in A_\alpha$, and this shows $A_\alpha$ is closed under Turing reducibility. ◻ It is unclear whether the existence of pathways can be improved to $\mathsf{MH}$ in the conclusion of this theorem. The problem is that $\mathsf{MH}$ requires the $A_\alpha$ to be closed under set-theoretic definability, not merely Turing reducibility. Our proof explicitly relies on the fact that if $f \leq_T g$, then any given entry of $f$ can be computed by knowing some finitely many entries of $g$.
The same idea does not work for set-theoretic definability, where some of the entries of $f$ might depend somehow on infinitely many entries of $g$. **Question 11**. *If $\mathbb{P}$ is a ccc poset and $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$ holds, does $\mathbb{P}$ force $\mathsf{MH}$?* Of course, Theorem [Theorem 10](#thm:AlmostAllCCC){reference-type="ref" reference="thm:AlmostAllCCC"} raises the question: Under what circumstances, and for which posets $\mathbb{P}$, does $\ensuremath{\mathsf{MH}}\xspace(\mathbb{P})$ hold? We shall be particularly (but not exclusively) interested in this question when $\mathbb{P}$ is the measure algebra $\mathbb{B}_\mu$ of weight $\mu$, the standard poset for adding $\mu$ random reals simultaneously. For technical reasons (that become clear in the proof of Theorem [Theorem 13](#thm:RandomPoset){reference-type="ref" reference="thm:RandomPoset"}), we define the members of $\mathbb{B}_\mu$ to be Borel codes, rather than Borel sets or equivalence classes of Borel sets. **Lemma 12**. *For any cardinal $\mu$, $\ensuremath{\mathsf{MH}}\xspace(\mathbb{B}_\mu)$ holds if, for some uncountable cardinal $\kappa$, there is an increasing sequence $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ of transitive models of $\ensuremath{\mathsf{ZFC}}\xspace^-$ such that $\mu \in M_0$, $\mathbb{B}_\mu \subseteq\bigcup_{\alpha< \kappa}M_\alpha$, $\bigcup_{\alpha< \kappa}M_\alpha$ is countably closed, and $M_{\alpha+1} \cap \omega^\omega$ is not dominated by $M_\alpha\cap \omega^\omega$ for all $\alpha< \kappa$. In other words, the last two items in the definition of $\ensuremath{\mathsf{MH}}\xspace(\mathbb{B}_\mu)$ are automatic, in the sense that when $\mathbb{P}= \mathbb{B}_\mu$, they follow already from the previous conditions (plus the condition that $\mu \in M_0$).* *Proof.* Suppose $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ is a sequence of models having the properties stated in the lemma.
We must check that $M_\alpha\cap \mathbb{B}_\mu \in M_\alpha$, that $M_\alpha\cap \mathbb{B}_\mu <\hspace{-2.5mm} \cdot \hspace{1.5mm}\mathbb{B}_\mu$, and that $M_\alpha$ witnesses that $M_\alpha\cap \mathbb{B}_\mu$ is $\omega^\omega$-bounding. Fix $\alpha< \kappa$. Because $\mu \in M_0 \subseteq M_\alpha$, we have $\mu \in M_\alpha$. Thus $\mathbb{B}_\mu$ is definable as the set of all codes for non-null Borel subsets of $2^\mu$, and this definition is absolute for transitive models containing $\mu$. Hence $M_\alpha\cap \mathbb{B}_\mu \in M_\alpha$. To see that $M_\alpha\cap \mathbb{B}_\mu <\hspace{-2.5mm} \cdot \hspace{1.5mm}\mathbb{B}_\mu$, suppose $A$ is a maximal antichain in $M_\alpha\cap \mathbb{B}_\mu$. This means that $A$ consists of codes for non-null Borel sets, with the codes in $M_\alpha$, such that (1) any two of these Borel sets intersect in a null set, and (2) if any other Borel set coded in $M_\alpha$ has null intersection with all these Borel sets, it is null. Because $M_\alpha\models \ensuremath{\mathsf{ZFC}}\xspace^-$, and because basic facts about Borel codes (like the measure of the set they code) are absolute between models of $\ensuremath{\mathsf{ZFC}}\xspace^-$, condition (2) is equivalent to ($2'$) the sum of the measures of these Borel sets is equal to $1$. But $(1)$ and $(2')$ together imply that $A$ is a maximal antichain in $\mathbb{B}_\mu$, not just in $M_\alpha\cap \mathbb{B}_\mu$. To see that $M_\alpha$ witnesses that $M_\alpha\cap \mathbb{B}_\mu$ is $\omega^\omega$-bounding, let $\dot f$ be a nice $(M_\alpha\cap \mathbb{B}_\mu)$-name for a function in $\omega^\omega$. That is, $$\dot f = \left\lbrace ((m,n),p_{m,n}) \colon m,n \in \omega \right\rbrace,$$ where each $p_{m,n} \in M_\alpha\cap \mathbb{B}_\mu$.
Going through the usual proof that $\mathbb{B}_\mu$ is $\omega^\omega$-bounding, we obtain a ground model function $$g(n) \,=\, \min \left\lbrace k \colon \textstyle \sum_{i < k} \lambda(p_{n,i}) > 1 - \nicefrac{1}{4^n} \right\rbrace$$ (where $\lambda$ denotes the Lebesgue measure) such that $\Vdash_{\mathbb{B}_\mu} \dot f <^* g$. But this function $g$ is definable from $\dot f$, so $g \in M_\alpha$. Moreover, the usual proof that $\Vdash_{\mathbb{B}_\mu} \dot f <^* g$ can be carried out in $M_\alpha$, and this adaptation of the proof shows that $\Vdash_{M_\alpha\cap \mathbb{B}_\mu} \dot f <^* g$. ◻ Note that if $\mu$ is a definable cardinal with a "nice" definition that is absolute to transitive models (e.g., if $\mu = \aleph_n$ for some $n$, or $\mu = \aleph_{\omega_1+\omega^2+49}$), then the condition $\mu \in M_0$ is superfluous, and the statement of the lemma can be strengthened from "if" to "if and only if". **Theorem 13**. *Suppose $\mathbb{P}$ is a forcing iteration (with any support) of length $\lambda$, where $\mathrm{cf}(\lambda) > \omega$. Furthermore, suppose $\mathbb{P}$ adds unbounded reals at cofinally many stages of the iteration. In any forcing extension by $\mathbb{P}$, if $\mathbb{B}_\mu$ denotes the measure algebra of weight $\mu$, then $\ensuremath{\mathsf{MH}}\xspace(\mathbb{B}_\mu)$ holds.* *Proof.* Let $\mathbb{P}$ be a forcing iteration as described in the statement of the theorem, and let $G$ be $\mathbb{P}$-generic over $V$. For each $\gamma< \lambda$, let $G_\gamma$ denote the restriction of $G$ to only the first $\gamma$ stages of the iteration.
Let $\kappa= \mathrm{cf}(\lambda)$, and by recursion obtain a $\kappa$-sequence $\left\langle \gamma_\alpha \colon \alpha< \kappa \right\rangle$ of ordinals $<\!\lambda$ such that $\left\langle \gamma_\alpha \colon \alpha< \kappa \right\rangle$ is cofinal in $\lambda$, and for all $\alpha< \kappa$ there is an unbounded real added at stage $\xi$ of the iteration for some $\xi \in (\gamma_\alpha,\gamma_{\alpha+1}]$. In $V[G]$, define $M_\alpha= H(\mu^+)^{V[G_{\gamma_\alpha}]}$ for all $\alpha< \kappa$. We claim $\left\langle M_\alpha \colon \alpha< \kappa \right\rangle$ witnesses $\ensuremath{\mathsf{MH}}\xspace(\mathbb{B}_\mu)$. Clearly $\mu \in M_0$. The sequence of $M_\alpha$'s is increasing because the sequence of $V[G_{\gamma_\alpha}]$'s is increasing. Because each $V[G_{\gamma_\alpha}]$ is a model of $\mathsf{ZFC}$, we have $V[G_{\gamma_\alpha}] \models (H(\mu^+) \models \ensuremath{\mathsf{ZFC}}\xspace^-)$. By the absoluteness of the satisfaction relation, $M_\alpha\models \ensuremath{\mathsf{ZFC}}\xspace^-$ for all $\alpha$. Every code for a Borel set in $\mathbb{B}_\mu$ (depending on one's choice of coding) is essentially a countable partial function $\mu \times \omega\to \omega$, and every such countable partial function appears in $V[G_\gamma]$ for some $\gamma< \lambda$ (because $\mathrm{cf}(\lambda) > \omega$). Thus $\mathbb{B}_\mu \subseteq\bigcup_{\alpha< \kappa}M_\alpha$. Similar reasoning shows that $\bigcup_{\alpha< \kappa}M_\alpha$ is countably closed. Finally, $M_{\alpha+1} \cap \omega^\omega$ is not dominated by $M_\alpha\cap \omega^\omega$ because an unbounded real is added at stage $\xi$ of the iteration for some $\xi \in (\gamma_\alpha,\gamma_{\alpha+1}]$. That $\ensuremath{\mathsf{MH}}\xspace(\mathbb{B}_\mu)$ holds now follows from Lemma [Lemma 12](#lem:RandomLemma){reference-type="ref" reference="lem:RandomLemma"}. ◻ By convention, "the random model" means any model obtained from a model of $\mathsf{CH}$ after adding $\aleph_2$ random reals.
But the "the" in this name is misleading, because some properties of the forcing extension may depend on precisely which model of $\mathsf{CH}$ we started with. An unpublished result of Kunen shows that if we begin with a model of $\mathsf{CH}$, then force to add $\aleph_1$ Cohen reals, and then force to add $\geq\!\aleph_2$ random reals, then we get a model with $P$-points. Thus it is consistent that "the" random model contains $P$-points. (Another result along these lines was obtained by the third author in [@Dow]: adding $\aleph_2$ random reals to a model of $\ensuremath{\mathsf{CH}}\xspace+\square_{\omega_1}$ gives a model with $P$-points.) However, as mentioned in the introduction, it is an important open problem whether adding random reals to *any* model of $\mathsf{CH}$ produces a model with $P$-points. The following corollary to the previous two theorems contains Kunen's result as a special case, and shows that in fact many forcings can be used to produce models $V$ of $\mathsf{CH}$ such that "the" random model built from $V$ contains $P$-points. **Corollary 14**. *Suppose $\mathbb{P}$ is a forcing iteration (with any support) of length $\lambda$, where $\mathrm{cf}(\lambda) > \omega$, and suppose $\mathbb{P}$ adds unbounded reals at cofinally many stages of the iteration. Then forcing with $\mathbb{P}* \dot \mathbb{B}_\mu$ produces a model with pathways (consequently, a model where $\Delta$ holds and $P$-points exist).* *Proof.* This follows directly from Theorems [Theorem 10](#thm:AlmostAllCCC){reference-type="ref" reference="thm:AlmostAllCCC"} and [Theorem 13](#thm:RandomPoset){reference-type="ref" reference="thm:RandomPoset"}. ◻ Note that every finite support iteration of nontrivial forcings (of the appropriate length) satisfies the hypotheses of this corollary. This is because finite support iterations of nontrivial forcings add Cohen reals at limit stages of countable cofinality. **Question 15**. 
*Does $\mathsf{CH}$ imply $\ensuremath{\mathsf{MH}}\xspace(\mathbb{B}_{\omega_2})$?* A positive answer to this question would imply that there are $P$-points in the random model (indeed, in every version of it). In [@RW], Roitman and Williams ask whether $\Delta$ holds in the model obtained by adding $\aleph_3$ random reals to the $\aleph_2$-Cohen model, i.e., the model obtained by forcing with $\mathbb{C}_{\omega_2} * \mathbb{B}_{\omega_3}$ over a model of $\mathsf{CH}$. This was seen to be the simplest model for which it was unknown whether $\Delta$ holds (in part because $\mathfrak b= \aleph_1 < \mathfrak d= \aleph_2 < \mathfrak{c}= \aleph_3$ in this model). **Corollary 16**. *$\Delta$ holds in any model obtained by forcing with $\mathbb{C}_{\omega_2} * \mathbb{B}_{\omega_3}$.* *Proof.* One may view $\mathbb{C}_{\omega_2}$ as a length-$\omega_2$ finite support iteration that adds Cohen reals at every stage. Hence $\mathbb{C}_{\omega_2} * \mathbb{B}_{\omega_3}$ satisfies the hypotheses of the previous corollary, and the conclusion follows. ◻ Lastly, let us consider an arbitrary ccc poset $\mathbb{P}$. **Question 17**. *Suppose $\mathsf{CH}$ holds in the ground model and $\Vdash_\mathbb{P}\mathfrak d< \aleph_\omega$. Does $\mathbb{P}$ force that $\Delta$ holds?* To approach this question, suppose $G$ is a $\mathbb{P}$-generic filter over $V$. Let $\mathfrak d= \mathfrak d^{V[G]}$, the dominating number of the extension. (The dominating number of the ground model is $\aleph_1$, because $\mathsf{CH}$ holds.) If $\mathfrak d= \aleph_1$ then $\mathfrak b= \mathfrak d$ in $V[G]$, and consequently $\Delta$ holds. Thus in what follows, we may (and do) assume $\mathfrak d> \aleph_1$. Let $D = \langle \dot f_\alpha:\, \alpha< \mathfrak d\rangle$ be a sequence of $\mathbb{P}$-names for members of $\omega^\omega$ such that $\mathbf{1}_\mathbb{P}\Vdash D$ is a dominating family. 
In $V$, we can find an increasing sequence $\left\langle M_\alpha \colon \alpha< \mathfrak d \right\rangle$ of elementary submodels of $H(\theta)$ (where $\theta$ is a sufficiently large regular cardinal) such that - each $M_\alpha$ is countably closed (in $V$), with $\left\lvert M_\alpha \right\rvert < \mathfrak d$, - if $\mathrm{cf}(\alpha) > \omega$, then $M_\alpha= \bigcup_{\xi < \alpha}M_\xi$, and - $D \in M_\alpha$ for all $\alpha< \mathfrak d$. Note that $\mathfrak d$ has uncountable cofinality, which implies $M = \bigcup_{\alpha< \mathfrak d}M_\alpha$ is a countably closed elementary submodel of $H(\theta)$ with $\left\lvert M \right\rvert = \mathfrak d$. Let $\mathbb{P}_M = \mathbb{P}\cap M$. Because $\mathbb{P}$ is ccc and $M$ is countably closed, $\mathbb{P}_M <\hspace{-2.5mm} \cdot \hspace{1.5mm}\mathbb{P}$. Thus there is a $\mathbb{P}_M$-name $\dot \mathbb{Q}= \mathbb{P}/ \mathbb{P}_M$ for a Boolean algebra such that $\mathbb{P}= \mathbb{P}_M * \dot \mathbb{Q}$. Furthermore, $\Vdash_{\mathbb{P}_M} \dot \mathbb{Q}$ is $\omega^\omega$-bounding. In other words, we are able to factor $\mathbb{P}$ into two pieces: one part $\mathbb{P}_M$ that adds unbounded reals, and another part $\dot \mathbb{Q}$ such that $\Vdash_{\mathbb{P}_M} \dot \mathbb{Q}$ is $\omega^\omega$-bounding. Now the real question behind Question [Question 17](#q:a){reference-type="ref" reference="q:a"} is whether $\Vdash_{\mathbb{P}_M} \ensuremath{\mathsf{MH}}\xspace(\dot \mathbb{Q})$. If $\Vdash_{\mathbb{P}_M} \ensuremath{\mathsf{MH}}\xspace(\dot \mathbb{Q})$, then by Theorem [Theorem 10](#thm:AlmostAllCCC){reference-type="ref" reference="thm:AlmostAllCCC"}, $\mathbb{P}= \mathbb{P}_M * \dot \mathbb{Q}$ forces that pathways exist, and therefore $\Delta$ holds. To see that this may be plausible, consider the posets $\mathbb{P}_\alpha= \mathbb{P}\cap M_\alpha$ for each $\alpha< \mathfrak d$, which are all completely embedded in $\mathbb{P}$. 
These $\mathbb{P}_\alpha$'s act like the initial stages of a forcing iteration, and something like the proof of Theorem [Theorem 13](#thm:RandomPoset){reference-type="ref" reference="thm:RandomPoset"} can then be used to show that if $\Vdash_{\mathbb{P}_M} \dot \mathbb{Q}= \dot \mathbb{B}_\mu$, then $\Vdash_{\mathbb{P}_M} \ensuremath{\mathsf{MH}}\xspace(\dot \mathbb{Q})$. The problem, however, is that if $\dot \mathbb{Q}$ is a badly behaved poset, the intermediate models arising from these $\mathbb{P}_\alpha$ do not seem to give a witness to $\ensuremath{\mathsf{MH}}\xspace(\dot \mathbb{Q})$ after forcing with $\mathbb{P}_M$. What kind of structure do these intermediate models impose on $\dot \mathbb{Q}$, and is this structure enough to deduce $\Delta$ (or even pathways)? We do not yet know, and so Question [Question 17](#q:a){reference-type="ref" reference="q:a"} remains open for now. H. Barriga-Acosta and P. M. Gartside, "Monotone normality and nabla products," *Fundamenta Mathematicae* **254** (2021), pp. 99--120. J. E. Baumgartner and R. Laver, "Iterated perfect-set forcing," *Annals of Mathematical Logic* **17** (1979), pp. 271--288. A. Blass, "Combinatorial cardinal characteristics of the continuum," in *Handbook of Set Theory,* eds. M. Foreman and A. Kanamori, Springer-Verlag (2010), pp. 395--489. D. Chodounský and O. Guzmán, "There are no $P$-points in Silver extensions," *Israel Journal of Mathematics* **232** (2019), pp. 759--773. P. E. Cohen, "$P$-points in random universes," *Proceedings of the American Mathematical Society* **74** (1979), pp. 318--321. A. Dow, "$P$-filters and Cohen, random, and Laver forcing," *Topology and Its Applications* **281** (2020), article no. 107200. E. K. van Douwen, "Covering and separation properties of box products," *Surveys in General Topology*, Elsevier (1980), pp. 55--129. D. J. Fernández-Bretón, "Generalized pathways," unpublished manuscript available at `https://arxiv.org/abs/1810.06093`. D. J. 
Fernández-Bretón and M. Hrušák, "Gruff ultrafilters," *Topology and Its Applications* **210** (2016), pp. 355--365. S. Geschke and S. Quickert, "On Sacks forcing and the Sacks property," in *Classical and New Paradigms of Computation and their Complexity Hierarchies*, eds. B. Löwe et al., Trends in Logic **23** (2007), pp. 95--139. J. Ketonen, "On the existence of $P$-points in the Stone-Čech compactification of integers," *Fundamenta Mathematicae* **92** (1976), pp. 91--94. K. Kunen, "Paracompactness of box products of compact spaces," *Transactions of the American Mathematical Society* **240** (1978), pp. 307--316. R. Laver, "Products of infinitely many perfect trees," *Journal of the London Mathematical Society* **29** (1984), pp. 385--396. I. Neeman and J. Zapletal, "Proper forcings and absoluteness in $\mathrm{L}(\mathbb{R})$," *Commentationes Mathematicae Universitatis Carolinae* **39** (1998), pp. 281--301. J. Roitman, "Paracompact box products in forcing extensions," *Fundamenta Mathematicae* **102** (1979), pp. 219--228. J. Roitman, "More paracompact box products," *Proceedings of the American Mathematical Society* **74** (1979), pp. 171--176. J. Roitman, "Paracompactness and normality in box products: old and new," in *Set Theory and its Applications* (2011), ed. L. Babinkostova et al., Contemporary Mathematics 533, Providence, RI, pp. 157--181. J. Roitman and S. Williams, "Paracompactness, normality, and related properties of topologies on infinite products," *Topology and its Applications* **195** (2015), pp. 79--92. M. E. Rudin, "The box product of countably many compact metric spaces," *General Topology and its Applications* **2** (1972), pp. 293--298. S. Shelah, *Proper and Improper Forcing*, $2^\mathrm{nd}$ ed., Perspectives in Mathematical Logic (1998), Springer-Verlag, Berlin. S. Williams, "Box products," in *Handbook of Set-Theoretic Topology* (1984), eds. K. Kunen and J. E. Vaughan, North-Holland, pp. 169--200. E. L. 
Wimmers, "The Shelah $P$-point independence theorem," *Israel Journal of Mathematics* **43** (1982), pp. 28--48. [^1]: The second author is supported by NSF grant DMS-2154229.
--- abstract: | We prove sharp characterizations of higher order fractional powers $(-L)^s$, where $s>0$ is noninteger, of generators $L$ of uniformly bounded $C_0$-semigroups on Banach spaces via extension problems, which in particular include results of Caffarelli--Silvestre, Stinga--Torrea and Galé--Miana--Stinga when $0<s<1$. More precisely, we prove existence and uniqueness of solutions $U(y)$, $y\geq0$, to initial value problems for both higher order and second order extension problems and characterizations of $(-L)^su$, $s>0$, in terms of boundary derivatives of $U$ at $y=0$, under the sharp hypothesis that $u$ is in the domain of $(-L)^s$. Our results resolve the question of setting up the correct initial conditions that guarantee well-posedness of both extension problems. Furthermore, we discover new explicit subordination formulas for the solution $U$ in terms of the semigroup $\{e^{tL}\}_{t\geq0}$ generated by $L$. address: - | Department of Mathematics\ University of Nebraska-Lincoln\ 210 Avery Hall, Lincoln\ NE 68588, United States of America - | Department of Mathematics\ Iowa State University\ 396 Carver Hall, Ames\ IA 50011, United States of America author: - Animesh Biswas - Pablo Raúl Stinga title: Sharp extension problem characterizations for higher fractional power operators in Banach spaces --- # Introduction Extension problem characterizations of fractional powers of linear operators [@Caffarelli-Silvestre; @Gale-Miana-Stinga; @Stinga-Torrea] are powerful tools in the study of nonlocal fractional equation problems in analysis, PDEs, geometry, fractional calculus, mathematical finance, continuum mechanics, numerical analysis and computational mathematics, among other areas. The extension problem characterization for the fractional Laplacian $(-L)^s=(-\Delta)^s$ in $\mathbb{R}^n$, for $0<s<1$, was introduced by Caffarelli and Silvestre [@Caffarelli-Silvestre] in 2007. 
In 2010, Stinga and Torrea developed in [@Stinga-Torrea] the method of semigroups and generalized the extension problem characterization to fractional powers $(-L)^s$, $0<s<1$, of any nonnegative normal operator $-L$ in Hilbert spaces. The most general extension description was proved in 2013 by Galé, Miana and Stinga, see [@Gale-Miana-Stinga]. The result in [@Gale-Miana-Stinga] is established through the method of semigroups for infinitesimal generators $L$ of tempered $\alpha$-times integrated semigroups on Banach spaces $X$, for $\alpha\geq0$. In particular, it applies to generators $L$ of uniformly bounded $C_0$-semigroups $\{e^{tL}\}_{t\geq0}$ on $X$ and, in this case, it states the following. Given $u\in X$ and $0<s<1$, define $$\label{eq:Usemigroup} U(y)=U[u](y)=\frac{y^{2s}}{4^s\Gamma(s)}\int_0^\infty e^{-y^2/(4t)}e^{tL}u\,\frac{dt}{t^{1+s}}\qquad y>0.$$ Then $U$ is a bounded classical solution to the $X$-valued extension problem $$\label{eq:extension_problem_X} \begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X. \end{cases}$$ If, in addition, $u\in D(L)$ (the domain of $L$ in $X$) then $$\label{eq:normalderivative} -\lim_{y \to 0}y^{1-2s}\partial_yU(y)=c_s(-L)^su\qquad\hbox{in}~X$$ where $c_s>0$ is a constant explicitly computed in [@Stinga-Torrea] that depends only on $s$. In fact, $U$ given by [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"} is the unique bounded classical solution to [\[eq:extension_problem_X\]](#eq:extension_problem_X){reference-type="eqref" reference="eq:extension_problem_X"} and if $u\in D((-L)^s)$, for $0<s<1$, then [\[eq:normalderivative\]](#eq:normalderivative){reference-type="eqref" reference="eq:normalderivative"} still holds, see [@Meichsner-Seifert]. Formula [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"} was first discovered in [@Stinga-Torrea]. 
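As a quick sanity check of the subordination formula [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"} (a sketch of ours, not taken from the papers cited above): in the scalar case $X=\mathbb{R}$ with $L=-a$ for $a>0$, so that $e^{tL}u=e^{-ta}u$, the integral evaluates in closed form to $U(y)=\frac{2}{\Gamma(s)}\big(\frac{y\sqrt{a}}{2}\big)^sK_s(y\sqrt{a})\,u$, where $K_s$ is the modified Bessel function of the second kind. The function names below are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

def U_subordination(y, a, s, u=1.0):
    # Numerical quadrature of U(y) = y^{2s}/(4^s Gamma(s)) * \int_0^infty
    # e^{-y^2/(4t)} e^{-ta} u dt/t^{1+s}, the scalar instance of eq:Usemigroup.
    integrand = lambda t: np.exp(-y ** 2 / (4.0 * t) - a * t) * t ** (-1.0 - s)
    val, _ = quad(integrand, 0.0, np.inf)
    return y ** (2.0 * s) / (4.0 ** s * gamma(s)) * val * u

def U_closed_form(y, a, s, u=1.0):
    # Closed form in the scalar case via the modified Bessel function K_s.
    z = y * np.sqrt(a)
    return 2.0 / gamma(s) * (z / 2.0) ** s * kv(s, z) * u

for y in (0.5, 1.0, 2.0):
    print(y, U_subordination(y, 2.0, 0.6), U_closed_form(y, 2.0, 0.6))
```

As $y\to0$ both expressions tend to $u$, matching the boundary condition of the extension problem.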
In this paper we prove sharp characterizations for all fractional powers $(-L)^s$, where $s>0$ is noninteger, with both higher order and second order extension problems, see [\[eq:uniqueness s greater 2\]](#eq:uniqueness s greater 2){reference-type="eqref" reference="eq:uniqueness s greater 2"} and [\[eq:bvpinyalls\]](#eq:bvpinyalls){reference-type="eqref" reference="eq:bvpinyalls"}, respectively. We show existence and uniqueness of the classical solution to these extension problems and the characterizations of $(-L)^su$ as certain derivatives of $U$ at $y=0$ under the sharp hypothesis that $u\in D((-L)^s)$, see [\[eq:introNeumannL\]](#eq:introNeumannL){reference-type="eqref" reference="eq:introNeumannL"} and [\[eq:introNeumanny\]](#eq:introNeumanny){reference-type="eqref" reference="eq:introNeumanny"}. In particular, we set up the correct initial conditions for well-posedness of the problems. Furthermore, we find new explicit representations of the solution $U$ in the form of subordination formulas involving the semigroup $\{e^{tL}\}_{t\geq0}$, see [\[eq:U component s Greater 2\]](#eq:U component s Greater 2){reference-type="eqref" reference="eq:U component s Greater 2"}. Our main result is the following (see Section [2](#sec:balakrishnan){reference-type="ref" reference="sec:balakrishnan"} for notation). **Theorem 1** (Extension problems for any noninteger $s>0$). *Let $L$ be the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ on a Banach space $X$. Assume that $0\in\rho(L)$, the resolvent set of $L$. Let $s>0$ be any noninteger and let $[s]$ be the integer part of $s$. Given any $u\in X$, let $U(y)=U[u](y)$ be as in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"}. The following statements hold.* 1. 
*For any $k\geq0$, $U\in C^\infty((0,\infty);D(L^k))\cap C([0,\infty);X)$ and $U$ is a bounded classical solution to the higher order extension problem $$\label{eq:extension_problem_s_greater1} \begin{cases} \big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^{[s]+1}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X \end{cases}$$ and to the second order extension problem $$\label{eq:introextension} \begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X. \end{cases}$$* 2. *If $f\in X$ and $u\in D((-L)^s)$ is the unique solution to $(-L)^su=f$ then we have $$\label{eq:U component s Greater 2} \begin{aligned} U(y) &= \sum^{[s]}_{k=0} \frac{y^{2k}\Gamma(s-k)}{4^kk!\Gamma(s)}L^ku +\frac{1}{\Gamma(s)} \int^\infty_0\bigg[e^{-y^2/(4t)}- \sum^{[s]}_{k=0} \frac{(-1)^ky^{2k}}{k!(4t)^k}\bigg] e^{tL}f\,\frac{dt}{t^{1-s}} \\ &= \sum^{[s]}_{k=0} \frac{y^{2k}\Gamma(s-k)}{4^kk!\Gamma(s)}L^ku +\frac{y^{2s}}{4^s\Gamma(s)}\int^\infty_0\bigg[e^{-r}-\sum^{[s]}_{k=0} \frac{(-r)^k}{k!}\bigg]e^{\frac{y^2}{4r}L}f\,\frac{dr}{r^{1+s}}. \end{aligned}$$* 3. *Furthermore, $u\in D((-L)^s)$ if and only if the limits $$\lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(L+\tfrac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^{[s]}U(y)$$ or $$\label{eq:U Ls Greater 2} \lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(\tfrac{2}{y}\partial_y\big)^{[s]} U(y)$$ exist in $X$ and, in these cases, $$\label{eq:introNeumannL} \lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(L+\tfrac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^{[s]}U(y) =c_s[s]!(-L)^su$$ and $$\label{eq:introNeumanny} \lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(\tfrac{2}{y}\partial_y\big)^{[s]}U(y) =c_s(-L)^su$$ where $c_s=\frac{(-1)^{[s]+1}\Gamma([s]+1-s)}{4^{s-([s]+1/2)}\Gamma(s)}$.* 4. 
*If $u\in D((-L)^s)$ then $U$ is the unique classical solution to the higher order initial value extension problem $$\label{eq:uniqueness s greater 2} \begin{cases} \big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^{[s]+1}U=0&\hbox{for}~y>0\\ \lim_{y\to0}\big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^mU =\frac{[s]!\Gamma(s-m)}{([s]-m)!\Gamma(s)} L^mu&\hbox{for}~0\leq m\leq [s] \\ \lim_{y\to0}y^{1-2(s-[s])}\partial_y \big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^mU=0&\hbox{for}~0\leq m<[s] \\ \lim_{y\to0}y^{1-2(s-[s])}\partial_y \big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^{[s]}U=c_s[s]!(-L)^su, \end{cases}$$ and to the second order initial value extension problem $$\label{eq:bvpinyalls} \begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}\big(\frac{2}{y}\partial_y\big)^mU(y) =\frac{\Gamma(s-m)}{\Gamma(s)}L^mu&\hbox{for}~0\leq m\leq [s] \\ \lim_{y\to0}y^{1-2(s-[s])}\partial_y \big(\frac{2}{y}\partial_y\big)^mU(y)=0&\hbox{for}~0\leq m<[s] \\ \lim_{y\to0}y^{1-2(s-[s])}\partial_y \big(\frac{2}{y}\partial_y\big)^{[s]}U(y)=c_s(-L)^su. \end{cases}$$* By classical solution here we mean a solution $U(y)$ that is as continuously differentiable in $(0,\infty)$ as the equation requires with values in the corresponding domains of $L$ and integer powers of $L$, and whose initial conditions are satisfied with continuity up to $y=0$. One of the main difficulties of the assumption $u\in D((-L)^s)$ in Theorem [Theorem 1](#thm:extension-general){reference-type="ref" reference="thm:extension-general"} is the actual semigroup description of the domain of $(-L)^s$. 
Indeed, Berens, Butzer and Westphal proved in [@Berens-Butzer-Westphal] that $u\in D((-L)^s)$ if and only if, for any integer $k>s$, the limit $$v:=\lim_{\varepsilon\to0}\frac{1}{c(s,k)}\int_\varepsilon^\infty\big(e^{tL}-I\big)^ku\,\frac{dt}{t^{1+s}}$$ exists in $X$, where $$c(s,k)=\int_0^\infty\big(e^{-t}-1\big)^k\,\frac{dt}{t^{1+s}}$$ and, in this case, $v=(-L)^su$. In general, this limit has no explicit rate of convergence, the integral does not converge absolutely (however, it does if $u$ is more regular, say, $u\in D((-L)^{[s]+1})$, see [@Gale-Miana-Stinga] where this assumption is used when $0<s<1$) and may even oscillate, involving cancelations. Hence we cannot directly use this description in our semigroup analysis. We overcome these difficulties thanks to our new semigroup formulas [\[eq:U component s Greater 2\]](#eq:U component s Greater 2){reference-type="eqref" reference="eq:U component s Greater 2"}. Another problem that we solve and clarify is that of imposing appropriate initial conditions to [\[eq:extension_problem_s\_greater1\]](#eq:extension_problem_s_greater1){reference-type="eqref" reference="eq:extension_problem_s_greater1"} and [\[eq:introextension\]](#eq:introextension){reference-type="eqref" reference="eq:introextension"} that guarantee well-posedness of the extension problem. Indeed, [\[eq:extension_problem_s\_greater1\]](#eq:extension_problem_s_greater1){reference-type="eqref" reference="eq:extension_problem_s_greater1"} involves a $(2[s]+2)$-order $X$-valued ODE. Thus, it is natural to impose $(2[s]+2)$ initial conditions that will ensure uniqueness of solutions. This is indeed the case as we prove here, see [\[eq:uniqueness s greater 2\]](#eq:uniqueness s greater 2){reference-type="eqref" reference="eq:uniqueness s greater 2"} and Theorem [Theorem 6](#thm:uniquenessgeneral){reference-type="ref" reference="thm:uniquenessgeneral"}. 
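To see the Berens--Butzer--Westphal description at work in the simplest possible setting (again a scalar sketch of ours): for $L=-a$ on $X=\mathbb{R}$ one has $(e^{tL}-I)^ku=(e^{-ta}-1)^ku$, and the change of variables $t\mapsto t/a$ gives $\int_0^\infty(e^{-at}-1)^k\,\frac{dt}{t^{1+s}}=a^s\,c(s,k)$, so the quotient recovers $(-L)^su=a^su$, as it should. The following checks this scaling numerically (the function name `bbw_integral` is ours):

```python
import numpy as np
from scipy.integrate import quad

def bbw_integral(a, s, k):
    # \int_0^infty (e^{-a t} - 1)^k dt / t^{1+s}; converges absolutely when k > s.
    # expm1 avoids cancellation in e^{-at} - 1 for small t.
    f = lambda t: np.expm1(-a * t) ** k * t ** (-1.0 - s)
    head, _ = quad(f, 0.0, 1.0)   # integrable singularity ~ t^{k-1-s} at 0
    tail, _ = quad(f, 1.0, np.inf)
    return head + tail

a, s, k = 3.0, 1.7, 2             # need the integer k > s
ratio = bbw_integral(a, s, k) / bbw_integral(1.0, s, k)
print(ratio, a ** s)              # the ratio should equal a^s
```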
The case $0<s<1$ is revisited in our context in Theorem [Theorem 9](#lem:unique_A_unbounded){reference-type="ref" reference="lem:unique_A_unbounded"}. On the other hand, [\[eq:introextension\]](#eq:introextension){reference-type="eqref" reference="eq:introextension"} is a second order $X$-valued ODE problem, so one may think that imposing an extra initial Neumann condition involving $U'(y)$ at $y=0$ would suffice for uniqueness. However, we show that the only case in which uniqueness can be achieved in general in [\[eq:introextension\]](#eq:introextension){reference-type="eqref" reference="eq:introextension"} with a Neumann-type initial condition is when $0<s<1$, see Lemma [Lemma 7](#lem:Besselivp){reference-type="ref" reference="lem:Besselivp"}. When $s>1$, we prove uniqueness for the second order initial value problem [\[eq:introextension\]](#eq:introextension){reference-type="eqref" reference="eq:introextension"} with $(2[s]+2)>2$ initial conditions [\[eq:bvpinyalls\]](#eq:bvpinyalls){reference-type="eqref" reference="eq:bvpinyalls"}, see Theorem [Theorem 6](#thm:uniquenessgeneral){reference-type="ref" reference="thm:uniquenessgeneral"}. Observe that considering the initial value problems [\[eq:uniqueness s greater 2\]](#eq:uniqueness s greater 2){reference-type="eqref" reference="eq:uniqueness s greater 2"} and [\[eq:bvpinyalls\]](#eq:bvpinyalls){reference-type="eqref" reference="eq:bvpinyalls"} instead of trying to impose conditions for $y$ near infinity in [\[eq:extension_problem_s\_greater1\]](#eq:extension_problem_s_greater1){reference-type="eqref" reference="eq:extension_problem_s_greater1"} and [\[eq:introextension\]](#eq:introextension){reference-type="eqref" reference="eq:introextension"} to prove well-posedness is enough for analytical, numerical and computational applications. Indeed, the extension problem is most significant when $y\to 0$. 
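For concreteness, we record the specialization of [\[eq:bvpinyalls\]](#eq:bvpinyalls){reference-type="eqref" reference="eq:bvpinyalls"} to the range $1<s<2$ (our own rewriting, obtained by setting $[s]=1$ and using $\Gamma(s-1)/\Gamma(s)=1/(s-1)$): the second order initial value problem consists of the equation together with $2[s]+2=4$ initial conditions, $$\begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X\\ \lim_{y\to0}\frac{2}{y}\partial_yU(y)=\frac{1}{s-1}Lu\\ \lim_{y\to0}y^{3-2s}\partial_yU(y)=0\\ \lim_{y\to0}y^{3-2s}\partial_y\big(\tfrac{2}{y}\partial_y\big)U(y)=c_s(-L)^su. \end{cases}$$ In particular, well-posedness for $1<s<2$ requires two more conditions than the Dirichlet and Neumann data that suffice when $0<s<1$.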
Our analysis is finally able to clarify the role of initial conditions in extension problems for higher fractional power operators. In fact, extension problems of this kind have been considered in recent years. For Hilbert spaces, the first result was proved by Roncal and Stinga in [@Roncal-Stinga], where it was shown that $U[u](y)$ solves [\[eq:introextension\]](#eq:introextension){reference-type="eqref" reference="eq:introextension"} and satisfies [\[eq:introNeumanny\]](#eq:introNeumanny){reference-type="eqref" reference="eq:introNeumanny"}, for any $s>0$ noninteger. For the fractional Laplacian in $\mathbb{R}^n$, Yang in [@Yang] applied the Fourier transform to prove a characterization of $(-\Delta)^s$, $s>0$ noninteger, through a higher order extension equation as in [\[eq:extension_problem_s\_greater1\]](#eq:extension_problem_s_greater1){reference-type="eqref" reference="eq:extension_problem_s_greater1"} in adequate Sobolev spaces and satisfying a number of initial conditions that are a mix between some of those in [\[eq:uniqueness s greater 2\]](#eq:uniqueness s greater 2){reference-type="eqref" reference="eq:uniqueness s greater 2"} and some from [\[eq:bvpinyalls\]](#eq:bvpinyalls){reference-type="eqref" reference="eq:bvpinyalls"}. Later on, Cora and Musina in [@Cora-Musina] further expanded the results of Yang by using variational methods and the Poisson kernel from [@Caffarelli-Silvestre]. In particular, [@Cora-Musina] shows that minimizers of the corresponding energy, which imposes conditions on $U$ as $y\to\infty$, are unique. They also prove various properties of derivatives of $U$ at $y=0$. However, it is not clear from their work which initial conditions would suffice for uniqueness. More recently, Musina and Nazarov proved in [@Musina-Nazarov] an extension problem characterization of higher fractional powers $(-L)^s$ for nonnegative symmetric operators $-L$ on Hilbert spaces with an extension equation like in [@Cora-Musina; @Yang]. 
For this, they used spectral separation of variables and Bessel functions, extending the methodology initially put forth in [@Stinga-Torrea] for $0<s<1$. The results of our paper show that, under the adequate initial conditions that we establish, the extension characterizations for Hilbert spaces of [@Roncal-Stinga] and [@Musina-Nazarov] are equivalent. In turn, we provide new semigroup subordination formulas for the solution $U$ that are not present in [@Cora-Musina; @Musina-Nazarov; @Yang], and generalize to Banach spaces by completely different techniques. Although fractional powers of linear operators on Banach spaces is a fairly classical topic in functional analysis and operator theory [@Balakrishnan; @Butzer-Berens-Book; @Martinez; @Martinez-book; @Yosida], for the past 15 to 20 years the mathematics, physics, engineering, biology and computer science communities have shown a still increasing interest in the theory and applications of nonlinear nonlocal problems involving fractional power operators. Higher fractional powers $s>1$ are central in many pure and applied problems. We only mention one application to fluid mechanics here. Let $v=(v_1(x_1,x_2),v_2(x_1,x_2))$ solve the $2/3$-fractional Stokes equation of anomalous turbulence $(-\Delta)^{2/3}v+\nabla p=f$, with $\mathop{\mathrm{div}}v=0$ in $\mathbb{R}^2$, for a given vector field $f=(f_1,f_2)$ and a scalar function $p$, see [@Chen]. Since $v$ is divergence free, one can introduce a stream function $u=u(x_1,x_2)$ set by $v_1=-\partial_{x_2}u$ and $v_2=\partial_{x_1}u$. By letting $g:=\partial_{x_2}f_1-\partial_{x_1}f_2$ it easily follows that $u$ is a solution to the higher order fractional equation $(-\Delta)^{1+2/3}u=g$. In addition, we especially point out that extension problems have been crucial for the numerical analysis and computational implementation of fractional nonlocal equations. 
Indeed, the seminal work [@Nochetto-et-al] generates finite element approximations by using the extension and formulas of [@Stinga-Torrea]. Our results are general and can be applied, for instance, to higher fractional powers of second order elliptic operators, parabolic operators, hyperbolic operators, Laplacians on manifolds and graphs, among many others. They will be useful for numerical implementation. Furthermore, our theorems can be extended to generators $L$ of tempered $\alpha$-times integrated semigroups, for $\alpha\geq0$, to fractional complex powers $s\in\mathbb{C}\setminus\mathbb{N}$ with positive real part $\operatorname{Re}(s)>0$, and the solution $U(y)$ can be analytically extended to a complex sector including the half line $(0,\infty)$. These will appear elsewhere. We also mention here that the condition $0\in\rho(L)$ is not restrictive for applications. For example, for the fractional Neumann Laplacian $(-\Delta_N)^s$ in a bounded domain or the fractional Ornstein--Uhlenbeck operator $(-\Delta+2x\cdot\nabla)^s$ in the whole space $\mathbb{R}^n$, where the first eigenvalue is $0$, one looks for solutions in quotient spaces over constants so that uniqueness holds. The paper is organized as follows. Section [2](#sec:balakrishnan){reference-type="ref" reference="sec:balakrishnan"} contains preliminary facts about fractional powers of nonnegative closed operators and of generators of uniformly bounded $C_0$-semigroups on Banach spaces. In particular, Theorems [Theorem 2](#lem:approx_identity){reference-type="ref" reference="lem:approx_identity"} and [Theorem 3](#lem:inverse){reference-type="ref" reference="lem:inverse"} will be essential to prove our main results. In Section [3](#sec:extension){reference-type="ref" reference="sec:extension"} we analyze the explicit semigroup subordination formula of the solution to the extension equation [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"}. 
Uniqueness is addressed in Section [4](#sec:uniqueness){reference-type="ref" reference="sec:uniqueness"}. In Section [5](#sec: extension s small){reference-type="ref" reference="sec: extension s small"} we present and prove Theorem [Theorem 1](#thm:extension-general){reference-type="ref" reference="thm:extension-general"} for the cases $0<s<1$ and $1<s<2$. We do this for two reasons. First, they are the cases that are mostly used in applications and for which we can say more about characterizing $(-L)^su$ in terms of limits of incremental quotients of $U$, see [\[eq:neumanns\]](#eq:neumanns){reference-type="eqref" reference="eq:neumanns"} and [\[eq:1s2incremental\]](#eq:1s2incremental){reference-type="eqref" reference="eq:1s2incremental"}. The latter will be helpful when performing finite difference approximations of $U$. Second, we believe that it will help the reader in understanding the structure of the proof of the general Theorem [Theorem 1](#thm:extension-general){reference-type="ref" reference="thm:extension-general"}, which is done in Section [6](#sec: extension s general){reference-type="ref" reference="sec: extension s general"}. # Fractional power operators in Banach spaces {#sec:balakrishnan} Throughout the paper, $X$ denotes a Banach space with norm $\|\cdot\|_X$, $I$ is the identity operator and $A:D(A)\subset X\to X$ is a linear operator with domain $D(A)$ and range $R(A)$. The resolvent set $\rho(A)$ of $A$ is the set of all $\lambda\in\mathbb{C}$ such that $R(\lambda I - A)$ is dense in $X$ and $(\lambda I - A)^{-1}$ is a bounded operator on its domain $D((\lambda I-A)^{-1})=R(\lambda I -A)$. Hence, $(\lambda I -A)^{-1}$ extends as a bounded linear operator on $X$. The spectrum of $A$ is $\sigma(A)=\mathbb{C}\setminus\rho(A)$. We say that $A$ is nonnegative if $(-\infty,0)\subset\rho(A)$ and $$M_A:=\displaystyle\sup_{\mu>0}\|\mu(\mu I +A)^{-1}\|<\infty.$$ In this case, we call $M_A$ the nonnegativity constant of $A$. 
From now on, and for the rest of the paper, we assume that $A$ is a nonnegative operator. ## Fractional powers of nonnegative operators The construction of fractional powers of $A$ is classical, see [@Balakrishnan; @Martinez; @Martinez-book; @Yosida]. For $s>0$, consider first the Balakrishnan operators $J^s$ defined as follows. For $0<s<1$, $D(J^s)=D(A)$ and $$J^s u = \frac{\sin(s\pi)}{\pi} \int^\infty_0 \mu^{s-1} (\mu I +A)^{-1} Au\, d\mu.$$ For $0<s<2$, $D(J^s) = D(A^2)$ and $$J^s u = \frac{\sin(s\pi)}{\pi} \int^\infty_0 \mu^{s-1} \bigg[(\mu I +A)^{-1} - \frac{\mu}{1+\mu^2} \bigg]Au \, d\mu + \sin(s \pi/2)Au.$$ For $n<s<n+1$, $D(J^s)=D(A^{n+1})$ and $J^s u = J^{s-n}A^n u$. Finally, for $n<s\leq n+1$, $D(J^s) = D(A^{n+2})$ and $J^s u = J^{s-n}A^n u$. We next define the positive fractional powers of $A$. If $A$ is bounded and $s>0$ then $A^s=J^s$, with domain $D(A^s)=D(J^s)=D(A)=X$. If $A$ is unbounded and $0 \in \rho(A)$ then $A^s = [(A^{-1})^s]^{-1}$, with domain $D(A^s)=R((A^{-1})^s)$. If $A$ is unbounded and $0 \in \sigma(A)$ then $$A^s u = \lim_{\varepsilon \to 0} (\varepsilon I+A)^s u.$$ The domain of $A^s$ in this last case is the collection of all $u$'s for which the limit exists. If $A$ is bounded then $A^s$ is bounded. If $A$ is unbounded, $A^s$ is closed and $D(A^s)\subset\overline{D(A)}$. Also, $A^1=A$. If $0\in\rho(A)$ then $(A^{-1})^s$ is injective. We can then consider negative fractional powers of $A$ as well. Indeed, in this case, $A^{-s} = (A^{-1})^s$ with domain $D(A^{-s})=D((A^{-1})^s)=X$. We have that $A^{-s}=(A^s)^{-1}$. One of the main results we will need is the following consequence of Propositions 7.4.1 and 7.2.2 of [@Martinez-book], in which it is important to note the presence of the hypothesis $u\in D(A^\alpha)$. **Theorem 2**. *Let $A$ be a nonnegative operator with $0\in\rho(A)$. 
Then, for all $u\in D(A^\alpha)$ and $0\leq\beta\leq\alpha$, $$\lim_{\varepsilon\to0} (\varepsilon I+A)^{-\beta} A^\alpha u = A^{\alpha-\beta}u.$$* ## Fractional powers of generators Here we collect classical facts about semigroups of linear operators and their infinitesimal generators, see [@Butzer-Berens-Book; @Pazy; @Yosida]. A family $\{S_t\}_{t \geq 0}$ of bounded linear operators on $X$ is a semigroup if $S_0 = I$ and $S_{t_1}\circ S_{t_2} = S_{t_1+t_2}$ for every $t_1,t_2\geq0$. If, in addition, $S_tu \to u$ in $X$ as $t\to 0$ for all $u \in X$, then we say that $\{S_t\}_{t \geq 0}$ is a $C_0$-semigroup. If $\{S_t\}_{t\geq0}$ is a $C_0$-semigroup then there exist constants $\omega\geq0$ and $M\geq1$ such that the operator norm of $S_t$ satisfies the estimate $\|S_t\|\leq Me^{\omega t}$, for all $t\geq0$. If $\omega=0$, that is, $\|S_tu\|_X\leq M\|u\|_X$ for all $t\geq0$ and $u\in X$, then the $C_0$-semigroup is said to be uniformly bounded. From now on, and for the rest of the paper, we will only consider uniformly bounded $C_0$-semigroups. The infinitesimal generator of $\{S_t\}_{t \geq 0}$ is the linear operator $L$ defined as $$Lu=\lim_{t\to0}\frac{S_tu-u}{t}$$ with domain $D(L)=\{u\in X:Lu~\hbox{exists}\}\subset X$. In this case, we write $$S_t\equiv e^{tL}.$$ If $u\in D(L)$ then the $X$-valued function $v=e^{tL}u$ is differentiable for $t\geq0$ and satisfies the equation $\partial_tv = Lv$ for $t>0$, with $v=u$ at $t=0$. The infinitesimal generator $L$ is a closed operator with domain $D(L)$ dense in $X$. Conversely, a linear operator $(L,D(L))$ on $X$ is said to generate a semigroup if there is a semigroup for which $L$ is its infinitesimal generator. 
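In finite dimensions these notions can be made concrete (an illustrative sketch with an ad hoc matrix): for $L=-A$ with $A$ symmetric positive semidefinite, $e^{tL}$ is a uniformly bounded (in fact contraction) $C_0$-semigroup, and the generator is recovered as the limit of $(S_hu-u)/h$.

```python
import numpy as np

# Ad hoc example: L = -A with A symmetric positive semidefinite, so that
# S_t = e^{tL} is a contraction semigroup on R^2 (M = 1, omega = 0).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # eigenvalues 0 and 2
L = -A
w, V = np.linalg.eigh(L)

def S(t):
    # matrix exponential e^{tL} via the spectral decomposition of L
    return V @ np.diag(np.exp(t * w)) @ V.T

u = np.array([1.0, 2.0])
# semigroup property S_{t1} S_{t2} = S_{t1 + t2} and uniform boundedness
prop_err = np.linalg.norm(S(0.3) @ (S(0.7) @ u) - S(1.0) @ u)
norm_S = max(np.linalg.norm(S(t), 2) for t in [0.1, 1.0, 10.0, 100.0])
# infinitesimal generator: (S_h u - u)/h -> L u as h -> 0
h = 1e-6
gen_err = np.linalg.norm((S(h) @ u - u) / h - L @ u)
print(prop_err, norm_S, gen_err)
```

The matrix, vector, and time values are arbitrary choices; any symmetric positive semidefinite $A$ would behave the same way.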
It follows from the Hille--Yosida and the Lumer--Phillips theorems that a linear operator $(L,D(L))$ on $X$ is the infinitesimal generator of a $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ satisfying $\|e^{tL}\|\leq M$, for some $M\geq1$ and all $t\geq0$, if and only if $L$ is closed, $D(L)$ is dense in $X$, $(-\infty,0)\subset\rho(-L)$ and $$\sup_{\mu>0}\|\mu^n(\mu I-L)^{-n}\|\leq M\qquad\hbox{for}~n\geq1.$$ If $L$ is the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ then the last statement implies that $A=-L$ is a nonnegative operator. Thus, the fractional powers $A^s=(-L)^s$ can be defined for any $s>0$ as in the previous subsection. We will need the following result, proved in [@Martinez-book Lemma 6.1.5] when $0<\alpha<1$. The generalization to $\alpha>1$ is obtained by induction using Balakrishnan's operators. **Theorem 3**. *Let $L$ be the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ on $X$. Then, for any $\varepsilon>0$ and $\alpha>0$, $$(\varepsilon I-L)^{-\alpha}u = \frac{1}{\Gamma(\alpha)} \int^\infty_0 e^{-\varepsilon t} e^{tL}u\, \frac{dt}{t^{1-\alpha}}$$ where $\Gamma$ denotes the Gamma function. If $0\in\rho(L)$ then this formula is also valid for $\varepsilon=0$.* # Basic properties of the subordination formula solution {#sec:extension} In this section, $L$ is the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ such that $\|e^{tL}u\|_X\leq M\|u\|_X$, for all $u\in X$ and $t\geq0$, for some $M\geq1$. **Lemma 4**. *Fix any $s>0$ noninteger. Given any $u\in X$, define $U(y)=U[u](y)$, $y>0$, as in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"}. The following properties hold.* 1. *The integral defining $U(y)$ is absolutely convergent in the sense of Bochner and $$\|U(y)\|_X\leq M\|u\|_X\qquad\hbox{for all}~y>0.$$* 2. *$U\in C([0,\infty);X)$ and $\lim_{y\to0}U(y)=u$ in $X$.* 3.
*For each $y>0$ and $k\geq0$, $U(y)\in D(L^k)$ and $$L^kU(y)=\frac{(-1)^k}{4^s\Gamma(s)}\int_0^\infty\big(\tfrac{1-2s}{y}\partial_y+\partial_{yy}\big)^k\big(y^{2s}e^{-y^2/(4t)}\big) e^{tL}u\,\frac{dt}{t^{1+s}}.$$* 4. *For any $k\geq0$, $U\in C^\infty((0,\infty);D(L^k))$ and, for any $m\geq1$, $$\frac{d^m}{dy^m}U(y)=\frac{1}{4^s\Gamma(s)}\int_0^\infty\frac{\partial^m}{\partial y^m}\big(y^{2s}e^{-y^2/(4t)}\big)e^{tL}u\,\frac{dt}{t^{1+s}}.$$ In particular, $$\big(\tfrac{1-2s}{y}\partial_y+\partial_{yy}\big)^kU=(-L)^kU(y)\qquad\hbox{for all}~y>0.$$* *Proof.* The result was proved for $0<s<1$ in [@Gale-Miana-Stinga], see also [@Stinga-Torrea] for the case of Hilbert spaces. A careful inspection shows that the proofs in [@Gale-Miana-Stinga] extend to any noninteger $s>0$ without major modifications. We only sketch the argument for $(3)$, which uses an integration by parts idea of Miana [@Miana]. Let us define $$v(t)=\int_0^te^{rL}u\,dr\qquad t>0.$$ Then $v(t)\in D(L)$ and $\partial_tv(t) = e^{t L} u$. 
Integration by parts gives, $$\begin{aligned} U(y) &= \frac{y^{2s}}{4^s\Gamma(s)}\int_0^\infty e^{-y^2/(4t)}\partial_tv(t)\,\frac{dt}{t^{1+s}} \\ &=-\frac{y^{2s}}{4^s\Gamma(s)}\int_0^\infty \partial_t \bigg( \frac{e^{-y^2/(4t)}}{t^{1+s}} \bigg)v(t)\,dt \\ &= -\frac{1}{4^s\Gamma(s)}\int_0^\infty\big(\tfrac{1-2s}{y}\partial_y+\partial_{yy}\big)\big(y^{2s}e^{-y^2/(4t)}\big) v(t)\,\frac{dt}{t^{1+s}}.\end{aligned}$$ Using that $Lv(t)=e^{tL}u-u$ and the dominated convergence theorem, it can be verified that $U(y)\in D(L)$ and $$\begin{aligned} LU(y) &= -\frac{1}{4^s\Gamma(s)}\int_0^\infty \big(\tfrac{1-2s}{y}\partial_y+\partial_{yy}\big)\big(y^{2s}e^{-y^2/(4t)}\big)\big(e^{tL}u-u\big)\,\frac{dt}{t^{1+s}}\\ &= -\frac{1}{4^s\Gamma(s)}\int_0^\infty \big(\tfrac{1-2s}{y}\partial_y+\partial_{yy}\big)\big(y^{2s}e^{-y^2/(4t)}\big) e^{tL}u\,\frac{dt}{t^{1+s}}.\end{aligned}$$ In the last line we used that $$\frac{1}{4^s\Gamma(s)}\int_0^\infty y^{2s}e^{-y^2/(4t)}\,\frac{dt}{t^{1+s}} =1\qquad y>0$$ so its derivative with respect to $y$ vanishes. This shows $(3)$ for the case $k=1$. The general case $k\geq2$ is proved by induction using the same integration by parts strategy. ◻ The next formula for $U$ in terms of $(-L)^su$ was derived for $0<s<1$ and $u\in D(L)$ in [@Gale-Miana-Stinga]. Inspecting the proof in [@Gale-Miana-Stinga], we see that, because Theorem [Theorem 2](#lem:approx_identity){reference-type="ref" reference="lem:approx_identity"} holds for $\alpha=s$ and $u\in D((-L)^s)$, we can extend it to any $s>0$ noninteger. Thus, the proof is omitted. **Theorem 5**. *Suppose that $0 \in \rho(L)$. Let $s>0$ be noninteger, $f\in X$ and assume that $u\in D((-L)^s)$ is the unique solution to $(-L)^su=f$. 
Then $U(y)=U[u](y)$ given in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"} can also be written as $$U(y)=\lim_{\varepsilon\to0}\frac{1}{\Gamma(s)}\int_0^\infty e^{-\varepsilon t}e^{-y^2/(4t)}e^{tL}f\,\frac{dt}{t^{1-s}} \qquad\hbox{in}~X.$$* # Uniqueness {#sec:uniqueness} The main result of this section is the following. **Theorem 6** (Uniqueness for any $s>0$ noninteger). *Let $L$ be the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ on $X$. Assume that $0\in\rho(L)$. Fix $s>0$ noninteger. Then the $(2[s]+2)$-order initial value extension problem $$\label{eq:problem1general} \begin{cases} \big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^{[s]+1}U=0&\hbox{for}~y>0\\ \lim_{y\to0}\big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^m U =u_m&\hbox{in}~X \\ \lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(L+\frac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^mU=v_m&\hbox{in}~X \end{cases}$$ and the second order extension problem with $(2[s]+2)$ initial conditions $$\label{eq:problem2general} \begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}\big(\frac{2}{y}\partial_y\big)^m U =u_m&\hbox{in}~X \\ \lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(\frac{2}{y}\partial_y\big)^mU=v_m&\hbox{in}~X \end{cases}$$ where $u_m,v_m\in X$, $0\leq m\leq [s]$, have at most one classical solution.* We start by showing that uniqueness in [\[eq:problem2general\]](#eq:problem2general){reference-type="eqref" reference="eq:problem2general"} fails for $s>1$ if we only impose two initial conditions in terms of $U$ and $\partial_yU$. For this, we begin by finding two independent solutions to the following Bessel ODE: $$\label{eq:BesselODE} \phi''(y)+ \frac{a}{y}\phi'(y)=\lambda\phi(y)\qquad \hbox{for}~y,\lambda>0.$$ In the extension problem, $a=1-2s$, so if $s>0$ then $a\in(-\infty,1)$. 
For this analysis, we allow any $a<1$, $a\neq0$, and study when we can obtain existence and uniqueness of solutions to a typical initial value problem for [\[eq:BesselODE\]](#eq:BesselODE){reference-type="eqref" reference="eq:BesselODE"} where we prescribe $\phi$ and $\phi'$ at $y=0$. Since $y=0$ is a regular singular point for [\[eq:BesselODE\]](#eq:BesselODE){reference-type="eqref" reference="eq:BesselODE"}, we can find solutions by applying the classical method of power series expansions as in [@Simmons Chapter 5]. The indicial equation is $\rho(\rho-1)+\rho a=0$, which has roots $\rho_1=0$ and $\rho_2=1-a$. We look for solutions of the form $\phi_i(y,\lambda)=y^{\rho_i}\sum_{k=0}^\infty a_{i,k}y^k$, $i=1,2$. Inserting these into [\[eq:BesselODE\]](#eq:BesselODE){reference-type="eqref" reference="eq:BesselODE"} one can find the recurrence relations for the coefficients $a_{i,k}$ and get that $$\phi_1(y,\lambda)=\Gamma(1-\tfrac{1-a}{2})\sum_{k=0}^\infty\frac{y^{2k}\lambda^k}{2^{2k}\Gamma(k+1)\Gamma(k+1-\frac{1-a}{2})}$$ and $$\phi_2(y,\lambda)=\frac{\Gamma(\tfrac{1-a}{2})}{2}y^{1-a}\sum_{k=0}^\infty\frac{y^{2k}\lambda^k}{2^{2k}\Gamma(k+1)\Gamma(k+1+\frac{1-a}{2})}$$ are two independent solutions. By taking into account the power series expansions of the Bessel functions $I_{\pm\nu}$ (see [@Lebedev p. 108]), $$\phi_1(y,\lambda)=\Gamma(1-\tfrac{1-a}{2})(\lambda^{1/2} y/2)^{\frac{1-a}{2}}I_{-\frac{1-a}{2}}(\lambda^{1/2} y)$$ and $$\phi_2(y,\lambda)=\frac{\Gamma(\tfrac{1-a}{2})}{2}(2\lambda^{-1/2}y)^{\frac{1-a}{2}} I_{\frac{1-a}{2}}(\lambda^{1/2} y).$$ Next, using the power series expansions above, it follows that, as $y\to0$, $$\label{eq:boundaryconditionsBessel} \begin{cases} \phi_1(y,\lambda)\to1,&\partial_y\phi_1(y,\lambda)\sim\frac{\lambda}{1+a}y,\\ \phi_2(y,\lambda)\sim\frac{1}{1-a}y^{1-a},&\partial_y\phi_2(y,\lambda)\sim y^{-a}. \end{cases}$$ (The asymptotic expansion of $\partial_y\phi_1$ requires $a\neq-1$; when $a=-1$ the roots of the indicial equation differ by an integer and $\phi_1$ contains a logarithmic term.) **Lemma 7**. *Fix $\lambda>0$, $a<1$ and $b\in\mathbb{R}$. The initial value problem $$\begin{cases} \phi''(y)+ \frac{a}{y}\phi'(y)=\lambda\phi(y)&\hbox{for}~y>0\\ \lim_{y\to0}\phi(y)=\alpha\\ \lim_{y\to0}y^b\phi'(y)=\beta \end{cases}$$ has at most one classical solution for arbitrary $\alpha,\beta\in\mathbb{R}$ if and only if $a=b\in[-1,1)$.* *Proof.* Suppose that $a\neq0$. The general solution is $\phi=c_1\phi_1+c_2\phi_2$, where $c_1,c_2\in\mathbb{R}$. Since $\phi(0)=c_1$ and $1-a>0$, it follows from [\[eq:boundaryconditionsBessel\]](#eq:boundaryconditionsBessel){reference-type="eqref" reference="eq:boundaryconditionsBessel"} that $\phi=\alpha\phi_1+c_2\phi_2$, where $c_2$ is arbitrary. Next, using again [\[eq:boundaryconditionsBessel\]](#eq:boundaryconditionsBessel){reference-type="eqref" reference="eq:boundaryconditionsBessel"}, $$y^b\phi'(y)\sim\frac{\alpha\lambda}{1+a}y^{1+b}+c_2y^{b-a}\qquad\hbox{as}~y\to0.$$ If $b<a$ this limit blows up unless $c_2=0$, but then we are forced to take $\beta=0$ if $b>-1$ or $\beta=\frac{\alpha\lambda}{1+a}$ if $b=-1$. If $b>a$, the second term vanishes in the limit independently of $c_2$, so there is no uniqueness for all $\alpha$ and $\beta$. If $b=a$, the first term vanishes if and only if $-1<a<1$ and we can choose $c_2=\beta$ to get the unique solution. If $b=a={-1}$, the first solution contains a logarithmic term; nevertheless, the difference of two solutions has $\alpha=\beta=0$, which forces $c_1=c_2=0$, and uniqueness again follows. When $a=0$, one can use the independent solutions $e^{y\lambda^{1/2}}$ and $e^{-y\lambda^{1/2}}$. ◻ Recall that $s>0$ is noninteger. Therefore, we focus our attention on [\[eq:BesselODE\]](#eq:BesselODE){reference-type="eqref" reference="eq:BesselODE"} for $-1<a<1$. For the nonhomogeneous problem $$\label{eq:BesselODEg} \phi''(y)+ \frac{a}{y} \phi'(y) = \lambda\phi(y)+g(y)\qquad y,\lambda>0$$ where now $a\in(-1,1)$, we apply the method of variation of parameters.
Using the Wronskian for Bessel functions [@Lebedev], a particular solution $\phi_p$ can thus be found as $$\label{eq:formulaforphip} \phi_p(y,\lambda) =\int^y_0\big(\phi_2(y,\lambda)\phi_1(t,\lambda)-\phi_1(y,\lambda)\phi_2(t,\lambda)\big)g(t)t^a\,dt.$$ It is easy to verify that $$\label{eq:Neumannphip} \lim_{y\to0}\phi_p(y,\lambda)=\lim_{y\to0}y^a\phi_p'(y,\lambda)=0.$$ Since the power series representations of $\phi_1$ and $\phi_2$ have infinite radius of convergence, we can replace $\lambda$ by any bounded linear operator $T$ on $X$ to get well-defined $X$-valued functions of $y$: $$\phi_1(y,T)u=\Gamma(1-\tfrac{1-a}{2})\sum_{k=0}^\infty\frac{y^{2k}}{2^{2k}\Gamma(k+1)\Gamma(k+1-\frac{1-a}{2})}T^ku$$ and $$\phi_2(y,T)v=\frac{\Gamma(\tfrac{1-a}{2})}{2}y^{1-a}\sum_{k=0}^\infty\frac{y^{2k}}{2^{2k}\Gamma(k+1)\Gamma(k+1+\frac{1-a}{2})}T^kv.$$ Clearly, these are classical solutions to the $X$-valued Bessel equation [\[eq:BesselODE\]](#eq:BesselODE){reference-type="eqref" reference="eq:BesselODE"} with $\lambda=T$ and with corresponding boundary conditions (see [\[eq:boundaryconditionsBessel\]](#eq:boundaryconditionsBessel){reference-type="eqref" reference="eq:boundaryconditionsBessel"}) $$\begin{cases} \lim_{y \to 0}\phi_1(y,T)u = u,& \lim_{y \to 0} y^{a}\phi'_1(y,T)u = 0, \\ \lim_{y \to 0}\phi_2(y,T)v = 0, & \lim_{y \to 0} y^{a}\phi'_2(y,T)v = v. \end{cases}$$ Similarly, defining $\phi_p(y,T)u$ by replacing $\lambda$ by $T$ in the formula for $\phi_p$ in [\[eq:formulaforphip\]](#eq:formulaforphip){reference-type="eqref" reference="eq:formulaforphip"}, one can verify that $\phi_p(y,T)u$ satisfies [\[eq:BesselODEg\]](#eq:BesselODEg){reference-type="eqref" reference="eq:BesselODEg"} and [\[eq:Neumannphip\]](#eq:Neumannphip){reference-type="eqref" reference="eq:Neumannphip"} with $T$ in place of $\lambda$. **Lemma 8**. *Let $T:X\to X$ be a bounded linear operator and fix $f\in C([0,\infty);X)$. Let $a\in(-1,1)$.
Then the problem $$\begin{cases} \partial_{yy}U+\frac{a}{y}\partial_yU=TU+f&\hbox{for}~y>0 \\ \lim_{y\to0}U(y)=u&\hbox{in}~X \\ \lim_{y \to 0} y^a\partial_y U(y)=v&\hbox{in}~X \end{cases}$$ for $u,v\in X$, has a unique classical solution. If $u=v=0$, the unique solution is $$U(y)=\int^y_0\big(\phi_2(y,T)\phi_1(t,T)-\phi_1(y,T)\phi_2(t,T)\big)f(t) t^a\,dt.$$* *Proof.* For uniqueness, we use Gronwall's inequality as in [@Meichsner-Seifert]. By linearity, it is enough to assume that $u=v=f=0$. Notice that the equation for $U$ can be written as $$\partial_y\big(y^a\partial_yU\big)=y^{a}TU.$$ Integrating this twice, using the initial conditions and Fubini's theorem, we get $$U(y)=\frac{1}{1-a}\int_0^y\big(y^{1-a}-t^{1-a}\big)TU(t)t^a\,dt.$$ Since $T$ is bounded, $$\|U(y)\|_X\leq\frac{\|T\|}{1-a}\int_0^y\big(y^{1-a}-t^{1-a}\big)\|U(t)\|_Xt^a\,dt$$ and Gronwall's inequality implies that $U=0$. ◻ Clearly, the proof of Lemma [Lemma 8](#lem:unique_A_bounded_nonhomogenous){reference-type="ref" reference="lem:unique_A_bounded_nonhomogenous"} fails when $T$ is unbounded. However, the result is still true for $-1<a<1$ in the unbounded case (see Theorem [Theorem 9](#lem:unique_A_unbounded){reference-type="ref" reference="lem:unique_A_unbounded"}) as was shown in [@Stinga-Torrea] for Hilbert spaces and in [@Meichsner-Seifert] for Banach spaces. We next revisit the proof of the general case of [@Meichsner-Seifert] in our context and provide some of the missing details where necessary. To this end, we need to introduce sectorial operators and their functional calculus, see [@Hasse; @Martinez-book]. A sector in the complex plane of angle $\omega\in(0,\pi]$ is defined as $S_\omega= \{z \in \mathbb{C}: z \neq 0, |\arg z|< \omega \}$, and we set $S_0= (0,\infty)$.
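The scalar ingredients behind Lemma 8 can be sanity-checked numerically (an illustrative sketch with ad hoc parameters $a$ and $\lambda$; the series are differentiated termwise, so no finite differences are involved): $\phi_1$ and $\phi_2$ solve the Bessel ODE, satisfy the stated initial conditions, and obey the Wronskian identity $y^a(\phi_1\phi_2'-\phi_2\phi_1')=1$ that underlies the variation-of-parameters formula.

```python
import numpy as np
from math import gamma

# Ad hoc parameters for the scalar Bessel ODE phi'' + (a/y) phi' = lam * phi
a, lam = 0.5, 2.0
nu = (1.0 - a) / 2.0
K = np.arange(40)  # series truncation; the terms decay factorially

def phi(y, kind, d=0):
    # d-th derivative of phi_1 or phi_2 at y, differentiating the series termwise
    if kind == 1:
        coef = np.array([gamma(1 - nu) * lam**k / (4.0**k * gamma(k + 1) * gamma(k + 1 - nu)) for k in K])
        expo = 2.0 * K
    else:
        coef = np.array([gamma(nu) / 2 * lam**k / (4.0**k * gamma(k + 1) * gamma(k + 1 + nu)) for k in K])
        expo = 2.0 * K + (1.0 - a)
    for _ in range(d):
        coef, expo = coef * expo, expo - 1.0
    return float(np.sum(coef * y**expo))

y, eps = 0.7, 1e-6
res1 = phi(y, 1, 2) + a / y * phi(y, 1, 1) - lam * phi(y, 1)  # ODE residual for phi_1
res2 = phi(y, 2, 2) + a / y * phi(y, 2, 1) - lam * phi(y, 2)  # ODE residual for phi_2
wron = y**a * (phi(y, 1) * phi(y, 2, 1) - phi(y, 2) * phi(y, 1, 1))  # should be 1
bc1 = phi(eps, 1)               # phi_1 -> 1 as y -> 0
bc2 = eps**a * phi(eps, 2, 1)   # y^a phi_2' -> 1 as y -> 0
print(res1, res2, wron, bc1, bc2)
```

Other values of $a\in(-1,1)$ and $\lambda>0$ behave the same way; only the truncation length would need adjusting for large $\lambda y^2$.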
A linear operator $(A,D(A))$ on $X$ is called sectorial if there is an angle $\omega\in[0,\pi)$ such that $\sigma(A) \subseteq \overline{S_\omega}$ and $\sup\{\|\lambda(\lambda I - A)^{-1}\| : \lambda \in \mathbb{C}\setminus \overline{S_{\omega'}} \} < \infty$, for all $\omega' \in (\omega,\pi)$. For such an operator $A$, $\omega_A = \min \{0 \leq\omega < \pi : A~\hbox{is sectorial with angle}~\omega \}$ is the angle of sectoriality of $A$. It is well known that every closed nonnegative operator is a sectorial operator and any sectorial operator is nonnegative. If $L$ is the generator of a uniformly bounded $C_0$-semigroup then $A=-L$ is sectorial, with $\omega_A\leq\pi/2$. We say that a meromorphic function $F$ on a sector $S_\omega$ has a finite polynomial limit $c\in\mathbb{C}$ at $z=0$ if $F(z)-c=O(|z|^\alpha)$, as $z\to0$, for some $\alpha>0$. Similarly, $F$ is said to have a finite polynomial limit $d\in\mathbb{C}$ at $\infty$ if $F(z^{-1})$ has polynomial limit $d$ at $0$. Let $A$ be a sectorial operator with angle of sectoriality $\omega_A$. For bounded, holomorphic functions $F$ on an open sector containing $S_{\omega_A}$ that have finite polynomial limits both at $0$ and at $\infty$, there is a well defined primary functional calculus, namely, the operator $F(A)$ is defined via a Cauchy integral over an open sector containing $S_{\omega_A}$, see [@Hasse Lemma 2.3.2]. A sequence $\{A_n\}_{n\geq1}$ of sectorial operators is called uniformly sectorial with angle $\omega\in[0,\pi)$ if each $A_n$ is sectorial with angle of sectoriality $\omega$ and $\sup_n\sup\{\|\lambda(\lambda I-A_n)^{-1}\|:\lambda\in\mathbb{C}\setminus\overline{S_{\omega'}}\}<\infty$ for every $\omega'\in(\omega,\pi)$. A uniformly sectorial sequence $\{A_n\}_{n\geq1}$ with angle $\omega$ is a sectorial approximation of a sectorial operator $A$ with the same angle of sectoriality if $\lambda\in\rho(A)$ and $(\lambda I-A_n)^{-1}\to(\lambda I-A)^{-1}$ in the operator norm, for some $\lambda\notin\overline{S_\omega}$.
In this case, this is in fact true for all $\lambda\notin\overline{S_\omega}$. Moreover, $F(A_n) \to F(A)$ in operator norm, whenever $F(A)$ is defined by the primary functional calculus for $A$, see [@Hasse Lemma 2.6.7]. **Theorem 9** (Uniqueness for $a\in(-1,1)$). *Let $L$ be the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ on $X$. Assume that $0\in\rho(L)$. Let $a\in(-1,1)$. Then the second order initial value problem $$\begin{cases} LU+\frac{a}{y}\partial_y U+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X \\ \lim_{y \to0}y^a\partial_yU(y)=v&\hbox{in}~X, \end{cases}$$ for $u,v\in X$, has at most one classical solution.* *Proof.* By linearity, we can assume that $u=v=0$. Since $L$ is injective, it is enough to prove that $(-L)^{-1}U=0$. Now $(-L)^{-1}U$ satisfies the same equation as $U$, therefore we may assume that $U\in C((0,\infty);D(L^2)) \cap C([0, \infty); D(L)) \cap C^2((0,\infty);D(L))$. As $L$ is the infinitesimal generator of a uniformly bounded $C_0$-semigroup, the operator $A=-L$ is closed, nonnegative and has dense domain $\overline{D(A)}=X$. For any $\varepsilon>0$, define $$A_\varepsilon = A(I+\varepsilon A)^{-1}.$$ Since $A$ is nonnegative, it can be seen that $\{A_\varepsilon\}_{\varepsilon>0}$ is a family of bounded, nonnegative operators, with nonnegativity constant uniformly bounded in $\varepsilon$. Moreover, $0\in\rho(A_\varepsilon)$. Furthermore, by using [@Hasse Proposition 2.1.1(f)], $\{A_\varepsilon\}_{\varepsilon>0}$ is uniformly sectorial. Finally, it can be proved that $A_\varepsilon$ is a sectorial approximation of $A$. 
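In finite dimensions the behavior of these regularizations is easy to observe (an illustrative sketch with an ad hoc matrix; the operators $A_\varepsilon=A(I+\varepsilon A)^{-1}$ are as defined above): each $A_\varepsilon$ is bounded, and $A_\varepsilon u\to Au$ as $\varepsilon\to0$.

```python
import numpy as np

# Ad hoc positive definite matrix and vector
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
I = np.eye(2)
u = np.array([1.0, 3.0])

errs = []
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    A_eps = A @ np.linalg.inv(I + eps * A)  # bounded regularization of A
    errs.append(np.linalg.norm(A_eps @ u - A @ u))
print(errs)  # the error A_eps u - A u decays like eps * ||A^2 u||
```

The linear decay in $\varepsilon$ reflects the identity $Au-A_\varepsilon u=\varepsilon A^2(I+\varepsilon A)^{-1}u$, valid for $u\in D(A^2)$.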
Next, we define $$U_\varepsilon=(I+\varepsilon A)^{-1}U.$$ Since $A$ is nonnegative and $D(A)$ is dense in $X$, $$\label{eq:UepstoU} \lim_{\varepsilon\to0}U_\varepsilon(y)=U(y)\qquad\hbox{for all}~y\geq0.$$ Moreover, $U_\varepsilon$ is a solution to $$\label{eq:regularizedproblem} \begin{cases} \partial_{yy}U_\varepsilon+\frac{a}{y}\partial_y U_\varepsilon=A_\varepsilon U_\varepsilon+f_\varepsilon &\hbox{for}~y>0 \\ \lim_{y\to0}U_\varepsilon(y)=0&\hbox{in}~X \\ \lim_{y \to 0}y^a\partial_y U_\varepsilon(y)= 0&\hbox{in}~X \end{cases}$$ where $$f_\varepsilon(y)=AU_\varepsilon(y)-A_\varepsilon U_\varepsilon(y).$$ In view of [\[eq:UepstoU\]](#eq:UepstoU){reference-type="eqref" reference="eq:UepstoU"}, it is enough to prove that, for each $y>0$, $U_\varepsilon(y)\to0$ as $\varepsilon\to0$. Since $A_\varepsilon$ is a bounded linear operator on $X$ and $f_\varepsilon\in C([0,\infty);X)$, Lemma [Lemma 8](#lem:unique_A_bounded_nonhomogenous){reference-type="ref" reference="lem:unique_A_bounded_nonhomogenous"} implies that the unique classical solution to [\[eq:regularizedproblem\]](#eq:regularizedproblem){reference-type="eqref" reference="eq:regularizedproblem"} is $$U_\varepsilon(y)=\int^y_0\big(\phi_2(y,A_\varepsilon)\phi_1(t,A_\varepsilon)-\phi_1(y,A_\varepsilon)\phi_2(t,A_\varepsilon)\big)f_\varepsilon(t)t^a\,dt\qquad y>0.$$ Recall that $U(y)\in D(A)$. Then $A(I+\varepsilon A)^{-1}U=(I+\varepsilon A)^{-1}AU$ and we can write $$\label{eq:f_epsilon} f_\varepsilon= \varepsilon^{-1}(\varepsilon^{-1}+A)^{-1}A(\varepsilon^{-1} +A)^{-1} (AU).$$ From here, using again that $A$ is nonnegative, we conclude that $f_\varepsilon(y)\to 0$, for each $y\geq0$. Furthermore, since $AU \in C([0,\infty);X)$, and the bounded operators $\varepsilon^{-1}(\varepsilon^{-1}+A)^{-1}$ and $A(\varepsilon^{-1} +A)^{-1}$ have norms uniformly bounded in $\varepsilon>0$, it follows that $f_\varepsilon(y)$ is uniformly bounded for $y$ in compact intervals $[0,y_0]$ and $\varepsilon>0$.
At this point, we cannot take the limit in [\[eq:UepstoU\]](#eq:UepstoU){reference-type="eqref" reference="eq:UepstoU"} inside the integral defining $U_\varepsilon$ because, say, $\phi_1(y,A)$ is not well defined unless stronger conditions on $U$ are imposed. Instead, we consider $e^{-y_0A^{1/2}}U_\varepsilon(y)$, where $y_0>0$ is fixed and $\{e^{-tA^{1/2}}\}_{t\geq0}$ is the Poisson semigroup associated to $A$ (see [@Martinez-book; @Yosida]), and prove that $$\label{eq:uniquelimitepsilon} e^{-y_0A^{1/2}}U(y)=\lim_{\varepsilon\to0}e^{-y_0A_\varepsilon^{1/2}}U_\varepsilon(y)=0\qquad\hbox{for all}~0<y<y_0.$$ Thus, by the injectivity of $e^{-y_0A^{1/2}}$, the desired conclusion $U=0$ follows. To this end, fix $0<y<y_0$ and define the holomorphic functions $$F_1(y,z)=e^{-y_0z}\phi_1(y,z^2)=e^{-y_0z}\Gamma(1-\tfrac{1-a}{2}) \sum^{\infty}_{k=0}\frac{y^{2k}z^{2k}}{2^{2k}\Gamma(k+1)\Gamma(k+1-\frac{1-a}{2})}$$ and $$F_2(y,z)=e^{-y_0z}\phi_2(y,z^2)=e^{-y_0z}\frac{\Gamma(\frac{1-a}{2})}{2}y^{1-a} \sum^{\infty}_{k=0} \frac{y^{2k}z^{2k}}{2^{2k}\Gamma(k+1)\Gamma(k+1+\frac{1-a}{2})}.$$ Then we can write $$e^{-y_0A_\varepsilon^{1/2}}U_\varepsilon(y)=\int^y_0 \big(F_2(y,A^{1/2}_\varepsilon)F_1(t,A^{1/2}_\varepsilon)- F_1(y,A^{1/2}_\varepsilon)F_2(t,A^{1/2}_\varepsilon)\big)f_\varepsilon(t)t^a\,dt.$$ Using the power series representations (or, equivalently, the asymptotic expansions of Bessel functions [@Lebedev]), it is not difficult to check that $F_1(y,z)$ and $F_2(y,z)$ are holomorphic and bounded on proper subsectors of $\operatorname{Re}(z)>0$, and have finite polynomial limits when approaching $0$ and $\infty$ from within $\operatorname{Re}(z)>0$. 
If we prove that $A_\varepsilon^{1/2}$ is a sectorial approximation of angle $\omega\leq\pi/4$ (note that $\overline{S_{\pi/4}}$ is still a proper subsector of $\operatorname{Re}(z)>0$) of $A^{1/2}$ then, by the primary functional calculus, $$F_1(y,A^{1/2}_\varepsilon)\to F_1(y,A^{1/2})\qquad\hbox{and}\qquad F_2(y,A^{1/2}_\varepsilon)\to F_2(y,A^{1/2})$$ as $\varepsilon\to0$, where both $F_1(y,A^{1/2})$ and $F_2(y,A^{1/2})$ are now bounded operators. Therefore, the dominated convergence theorem can be applied and [\[eq:uniquelimitepsilon\]](#eq:uniquelimitepsilon){reference-type="eqref" reference="eq:uniquelimitepsilon"} follows. We are left to show that $A_\varepsilon^{1/2}$ is a sectorial approximation of $A^{1/2}$. We want to prove that $(\lambda I + A^{1/2}_\varepsilon)^{-1} \to (\lambda I+ A^{1/2})^{-1}$ given that $(\lambda I + A_\varepsilon)^{-1} \to (\lambda I + A)^{-1}$. By [@Martinez-book Proposition 5.3.2 and Theorem 5.4.1], $A^{1/2}$ is a sectorial operator with angle of sectoriality $\omega_A/2$, where $\omega_A$ is the angle of sectoriality of $A$. Since $A_\varepsilon$ is a nonnegative operator with nonnegativity constant uniform in $\varepsilon$, so is $A^{1/2}_\varepsilon$. Hence $(\lambda I+A^{1/2}_\varepsilon)^{-1}$ exists for any $\lambda>0$ and $$(\lambda I+ A^{1/2}_\varepsilon)^{-1} = \lambda^{-1}(A^{-1/2}_\varepsilon+\lambda^{-1}I)^{-1} A^{-1/2}_\varepsilon.$$ Next, we see that $(A^{-1/2}_\varepsilon+\lambda^{-1}I) \to (A^{-1/2}+\lambda^{-1}I)$ in norm as $\varepsilon\to 0$, so that $(A^{-1/2}_\varepsilon+\lambda^{-1}I)^{-1} \to (A^{-1/2}+\lambda^{-1}I)^{-1}$ as $\varepsilon\to 0$. Putting everything together, we have that $(\lambda I+ A^{1/2}_\varepsilon)^{-1} \to (\lambda I+ A^{1/2})^{-1}$ in norm. Therefore, $\{A^{1/2}_\varepsilon\}_{\varepsilon>0}$ is a sectorial approximation of $A^{1/2}$.
◻ Before proving Theorem [Theorem 6](#thm:uniquenessgeneral){reference-type="ref" reference="thm:uniquenessgeneral"}, we will need the following differential identities lemma. **Lemma 10**. *Let $L$ be a linear operator on $X$ and $a\in\mathbb{R}$. The differential identity $$\big(L+\tfrac{a}{y}\partial_y+\partial_{yy}\big)\big(\tfrac{2}{y}\partial_y\big)= \big(\tfrac{2}{y}\partial_y\big)\big(L+\tfrac{a-2}{y}\partial_y+\partial_{yy}\big)$$ holds. Moreover, if $U$ is a sufficiently smooth function that satisfies the equation $$LU+\tfrac{a}{y}\partial_yU+\partial_{yy}U=0$$ then, for any $m\geq0$, $$\big(L+\tfrac{a+2m}{y}\partial_y+\partial_{yy}\big)^{m+1}U=0$$ and, for any $0\leq m\leq n$, $$\big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)^{m}U=\frac{n!}{(n-m)!}\big(\tfrac{2}{y}\partial_y\big)^{m}U.$$* *Proof.* We can write $$\big(L+\tfrac{a}{y}\partial_y+\partial_{yy}\big)\big(\tfrac{2}{y}\partial_y\big)= \big(\tfrac{2}{y}\partial_y\big)L+\big(\tfrac{a}{y}\partial_y+\partial_{yy}\big)\big(\tfrac{2}{y}\partial_y\big)$$ and it is easy to check that $$\big(\tfrac{a}{y}\partial_y + \partial_{yy} \big) \big(\tfrac{2}{y}\partial_y\big) =\big(\tfrac{2}{y}\partial_y\big)\big(\tfrac{a-2}{y} \partial_y + \partial_{yy} \big).$$ Let $U$ be as in the statement.
For any $m\geq0$, by the differential identity, $$\begin{aligned} \big(L+\tfrac{a+2m}{y}\partial_y+\partial_{yy}\big)^{m+1}U &= \big(L+\tfrac{a+2m}{y}\partial_y+\partial_{yy}\big)^m\big[\big(L+\tfrac{a}{y}\partial_y+\partial_{yy}\big)U+\big(\tfrac{2m}{y}\partial_y\big)U\big] \\ &=\big(L+\tfrac{a+2m}{y}\partial_y+\partial_{yy}\big)^m\big(\tfrac{2m}{y} \partial_y \big) U \\ &= \big(\tfrac{2m}{y} \partial_y\big)\big(L+\tfrac{a+2m-2}{y}\partial_y+\partial_{yy}\big)^mU \\ &= m\big(\tfrac{2}{y} \partial_y\big)\big(L+\tfrac{a+2m-2}{y}\partial_y+\partial_{yy}\big)^{m-1}\big(\tfrac{2m-2}{y}\partial_y\big)U \\ &= m(m-1)\big(\tfrac{2}{y} \partial_y\big)^2\big(L+\tfrac{a+2m-4}{y}\partial_y+\partial_{yy}\big)^{m-1}U.\end{aligned}$$ Continuing this process $m$ times, $$\big(L+\tfrac{a+2m}{y}\partial_y+\partial_{yy}\big)^{m+1}U=m!\big(\tfrac{2}{y} \partial_y\big)^m\big(L+\tfrac{a}{y}\partial_y+\partial_{yy}\big)U=0.$$ The statement for $0\leq m\leq n$ is proved in a similar way: $$\begin{aligned} \big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)^mU &=\big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)^{m-1}\big(\tfrac{2n}{y}\partial_y\big)U \\ &=n\big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)^{m-2}\big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)\big(\tfrac{2}{y}\partial_y\big)U \\ &=n\big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)^{m-2}\big(\tfrac{2}{y}\partial_y\big)\big(L+\tfrac{a+2n-2}{y}\partial_y+\partial_{yy}\big)U \\ &=n(n-1)\big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)^{m-2}\big(\tfrac{2}{y}\partial_y\big)^2U.\end{aligned}$$ Continuing this process $m$ times, $$\begin{aligned} \big(L+\tfrac{a+2n}{y}\partial_y+\partial_{yy}\big)^mU &= n(n-1)(n-2)\cdots (n-m+1)\big(\tfrac{2}{y} \partial_y\big)^mU \\ &=\frac{n!}{(n-m)!}\big(\tfrac{2}{y} \partial_y\big)^mU.\end{aligned}$$ ◻ For the reasons mentioned at the end of the introduction, we believe that stating and proving Theorem [Theorem 6](#thm:uniquenessgeneral){reference-type="ref"
reference="thm:uniquenessgeneral"} for the case when $1<s<2$ will be useful and instructive. **Theorem 11** (Uniqueness for $1<s<2$). *Let $L$ be the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ on $X$. Assume that $0 \in \rho(L)$. Fix $1<s<2$. Then the fourth order initial value extension problem $$\label{eq:problem1} \begin{cases} \big(L+\frac{3-2s}{y}\partial_y+\partial_{yy}\big)^{2}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u_1&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_yU(y)=u_2&\hbox{in}~X \\ \lim_{y\to0}\big(LU+\frac{3-2s}{y}\partial_yU+\partial_{yy}U\big)= u_3&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_y\big(LU+\frac{3-2s}{y}\partial_yU+\partial_{yy}U\big)=u_4&\hbox{in}~X \end{cases}$$ and the second order initial value extension problem $$\label{eq:problem2} \begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u_1&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_yU=u_2&\hbox{in}~X \\ \lim_{y\to0}\frac{2}{y}\partial_yU= u_3&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_y\big(\frac{2}{y}\partial_yU\big)=u_4&\hbox{in}~X \end{cases}$$ where $u_m\in X$, $1\leq m\leq4$, have at most one classical solution.* *Proof.* By linearity, it is enough to assume that $u_1=u_2=u_3=u_4=0$. Let $U$ be a classical solution to [\[eq:problem1\]](#eq:problem1){reference-type="eqref" reference="eq:problem1"}. Then $V=LU+\tfrac{3-2s}{y}\partial_yU+\partial_{yy}U$ solves $$\begin{cases} LV+\frac{a}{y}\partial_yV+\partial_{yy}V=0&\hbox{for}~y>0\\ \lim_{y\to0}V(y) =0&\hbox{in}~X \\ \lim_{y\to0}y^{a}\partial_y V = 0&\hbox{in}~X \end{cases}$$ where $a=3-2s\in(-1,1)$. By Theorem [Theorem 9](#lem:unique_A_unbounded){reference-type="ref" reference="lem:unique_A_unbounded"}, $V(y)=0$ for all $y\geq0$. Consequently, $U$ solves $$\begin{cases} LU+\frac{3-2s}{y}\partial_yU+\partial_{yy}U = 0 &\hbox{for}~y>0\\ \lim_{y\to0}U(y)=0&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_yU=0&\hbox{in}~X. 
\end{cases}$$ Applying again Theorem [Theorem 9](#lem:unique_A_unbounded){reference-type="ref" reference="lem:unique_A_unbounded"} with $a=3-2s\in(-1,1)$, $U(y) = 0$ for all $y\geq0$. Next, suppose that $U$ is a classical solution to [\[eq:problem2\]](#eq:problem2){reference-type="eqref" reference="eq:problem2"}. For all $y>0$, by Lemma [Lemma 10](#lem:recursion){reference-type="ref" reference="lem:recursion"} with $a=1-2s$ and $m=n=1$, $\big(L+\tfrac{3-2s}{y}\partial_y+\partial_{yy}\big)^2U=0$ and $LU+\tfrac{3-2s}{y}\partial_yU+\partial_{yy}U=\tfrac{2}{y}\partial_yU$. Therefore, $U$ solves [\[eq:problem1\]](#eq:problem1){reference-type="eqref" reference="eq:problem1"} with $u_m=0$ so, by the first part of this proof, $U=0$. ◻ *Proof of Theorem [Theorem 6](#thm:uniquenessgeneral){reference-type="ref" reference="thm:uniquenessgeneral"}.* With an induction argument using the reduction idea of the proof of Theorem [Theorem 11](#cor:unique1s2){reference-type="ref" reference="cor:unique1s2"} and applying Theorem [Theorem 9](#lem:unique_A_unbounded){reference-type="ref" reference="lem:unique_A_unbounded"}, uniqueness for [\[eq:problem1general\]](#eq:problem1general){reference-type="eqref" reference="eq:problem1general"} follows. Similarly as in the proof of Theorem [Theorem 11](#cor:unique1s2){reference-type="ref" reference="cor:unique1s2"}, Lemma [Lemma 10](#lem:recursion){reference-type="ref" reference="lem:recursion"} gives that a classical solution to [\[eq:problem2general\]](#eq:problem2general){reference-type="eqref" reference="eq:problem2general"} is also a solution to a problem of the form [\[eq:problem1general\]](#eq:problem1general){reference-type="eqref" reference="eq:problem1general"}, and so it is unique. Details are left to the reader. 
◻ # The sharp extension problem for $0<s<1$ and $1<s<2$ {#sec: extension s small} In this section we state and prove Theorem [Theorem 1](#thm:extension-general){reference-type="ref" reference="thm:extension-general"} for $0<s<1$ and $1<s<2$. This has a two-fold purpose. First, these are the typical fractional powers used in most applications, and we believe that presenting them separately will give the reader a better understanding of the strategy of proof for the general case $s>0$. Second, in both cases we can say something more about the derivative limits characterizing $(-L)^su$ in terms of difference quotients, see [\[eq:neumanns\]](#eq:neumanns){reference-type="eqref" reference="eq:neumanns"} and [\[eq:U Ls\]](#eq:U Ls){reference-type="eqref" reference="eq:U Ls"}, which we were not able to generalize to $s>2$. These latter descriptions are important as they give the correct way of approximating the conormal derivative of $U$ with finite differences, a fundamental aspect of computing $(-L)^su$ numerically using the extension. **Theorem 12** (Extension problem for $0<s<1$). *Let $L$ be the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ on $X$. Assume that $0 \in \rho(L)$. For $0<s<1$ and $u\in X$, let $U(y)=U[u](y)$ be its extension as in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"}. The following statements hold.* 1. *If $u\in D((-L)^s)$ and $f=(-L)^su$ then $$\label{eq:newidentity} \begin{aligned} U(y) &= u+\frac{1}{\Gamma(s)}\int_0^\infty\big(e^{-y^2/(4t)}-1\big)e^{tL}f\,\frac{dt}{t^{1-s}} \\ &= u+\frac{y^{2s}}{4^s\Gamma(s)}\int_0^\infty\big(e^{-r}-1\big)e^{\frac{y^2}{4r}L}f\,\frac{dr}{r^{1+s}}. \end{aligned}$$* 2.
*We have that $u\in D((-L)^s)$ if and only if the limits $$\lim_{y\to0}y^{1-2s}\partial_yU(y)\qquad\hbox{or}\qquad\lim_{y\to0}\frac{U(y)-u}{y^{2s}}$$ exist in $X$ and, in this case, $$\label{eq:neumanns} \lim_{y\to0}y^{1-2s}\partial_yU(y)=c_s(-L)^su=2s\lim_{y\to0}\frac{U(y)-u}{y^{2s}}\quad\hbox{in}~X,$$ where $c_s=\frac{-\Gamma(1-s)}{4^{s-1/2}\Gamma(s)}$.* 3. *If $u\in D((-L)^s)$ then its extension $U$ given by [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"} is the unique classical solution to the second order initial value problem $$\label{eq:extension_problem} \begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X\\ \lim_{y\to0}y^{1-2s}\partial_yU(y)=c_s(-L)^su&\hbox{in}~X. \end{cases}$$* *Proof.* For $(a)$, we point out that [\[eq:newidentity\]](#eq:newidentity){reference-type="eqref" reference="eq:newidentity"} was proved in [@Gale-Miana-Stinga] for the case $u\in D(L)$. The same idea works if we suppose that $u\in D((-L)^s)$ because of Theorem [Theorem 2](#lem:approx_identity){reference-type="ref" reference="lem:approx_identity"}. Indeed, let $f=(-L)^su$. By Theorems [Theorem 2](#lem:approx_identity){reference-type="ref" reference="lem:approx_identity"} and [Theorem 3](#lem:inverse){reference-type="ref" reference="lem:inverse"}, $$u=(-L)^{-s}f=\lim_{\varepsilon\to0}(\varepsilon I-L)^{-s}f=\lim_{\varepsilon\to0}\frac{1}{\Gamma(s)}\int_0^\infty e^{-\varepsilon t}e^{tL}f\,\frac{dt}{t^{1-s}}.$$ Using this and Theorem [Theorem 5](#lem: extension_epsilon_presentation){reference-type="ref" reference="lem: extension_epsilon_presentation"}, $$\begin{aligned} U(y) -u &= \lim_{\varepsilon \to 0}\frac{1}{\Gamma(s)}\int_0^\infty e^{-\varepsilon t}\big(e^{-y^2/(4t)}-1\big)e^{tL}f\,\frac{dt}{t^{1-s}} \\ &= \frac{1}{\Gamma(s)}\int_0^\infty\big(e^{-y^2/(4t)}-1\big)e^{tL}f\,\frac{dt}{t^{1-s}},\end{aligned}$$ where in the last line we applied the dominated convergence theorem. 
The change of variables $r=y^2/(4t)$ gives the second formula in [\[eq:newidentity\]](#eq:newidentity){reference-type="eqref" reference="eq:newidentity"}. Let us consider $(b)$. Suppose first that $u\in D((-L)^s)$ and let $f=(-L)^su$. By [\[eq:newidentity\]](#eq:newidentity){reference-type="eqref" reference="eq:newidentity"} and the dominated convergence theorem, $$\lim_{y\to0}\frac{U(y) - u}{y^{2s}}=\lim_{y\to0}\frac{1}{4^s\Gamma(s)}\int^\infty_0 (e^{-r} - 1) e^{\frac{y^2}{4r}L}f \, \frac{dr}{r^{1+s}} =\frac{c_s}{2s}f.$$ Similarly, after differentiating inside the integral sign in the first identity in [\[eq:newidentity\]](#eq:newidentity){reference-type="eqref" reference="eq:newidentity"} and changing variables, we get $$\lim_{y\to0}y^{1-2s}\partial_yU(y)=-\lim_{y\to0}\frac{1}{4^{s-1/2}\Gamma(s)}\int_0^\infty e^{-r}e^{\frac{y^2}{4r}L}f\,\frac{dr}{r^{s}} = c_sf.$$ For the converse in $(b)$, let $U[v](y)$ denote the extension of $v\in X$ as in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"} and define $$T_sv:=\lim_{y\to0}\frac{U[v](y)-v}{y^{2s}}$$ with domain $D(T_s) =\big\{v\in X:T_sv~\hbox{exists}\big\}$. We just proved that $D((-L)^s)\subset D(T_s)$. To show the opposite inclusion, let $u\in D(T_s)$ and set $g=T_su\in X$. Since $(-L)^{-s}$ is a bounded linear operator that commutes with $e^{tL}$, it is clear that $(-L)^{-s}U[u](y)=U[(-L)^{-s}u](y)$ and $$(-L)^{-s}g= (-L)^{-s}[T_su]=T_s[(-L)^{-s}u].$$ But now, by the direct statement we just proved applied to $v:=(-L)^{-s}u\in D((-L)^s)$, $$T_s[(-L)^{-s}u]=T_s[v]=\frac{c_s}{2s}(-L)^sv=\frac{c_s}{2s}(-L)^s[(-L)^{-s}u]=\frac{c_s}{2s}u.$$ Hence, $(-L)^{-s}g=\frac{c_s}{2s}u$ and $u\in D((-L)^s)$, as desired. A similar argument using the identity $(-L)^{-s}\big(y^{1-2s}\partial_yU[u](y)\big)=y^{1-2s}\partial_yU[(-L)^{-s} u](y)$ can be done to establish that if the limit $\lim_{y\to0}y^{1-2s}\partial_yU[u](y)$ exists then $u\in D((-L)^s)$. 
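**Remark** (numerical illustration). The limits in $(b)$ are easy to check numerically in the scalar model case $Lv=-\lambda v$, $\lambda>0$, where $e^{tL}v=e^{-\lambda t}v$ and $(-L)^su=\lambda^su$. The following self-contained script (an illustration only; the quadrature scheme and all parameters are our own choices, not part of the argument) evaluates $U(y)$ from the first identity in [\[eq:newidentity\]](#eq:newidentity){reference-type="eqref" reference="eq:newidentity"} for $s=1/2$ and $\lambda=1$, where the extension is explicitly $U(y)=e^{-y}u$, and confirms that $(U(y)-u)/y^{2s}$ approaches $\tfrac{c_s}{2s}(-L)^su=-u$:

```python
import math

def extension_scalar(y, lam=1.0, u=1.0, n_steps=40000, cutoff=12.0):
    # Scalar illustration for s = 1/2: with e^{tL} = e^{-lam t} we have
    # f = (-L)^{1/2} u = sqrt(lam) u, and the first identity in (a) reads
    #   U(y) = u + (1/Gamma(1/2)) int_0^oo (e^{-y^2/(4t)} - 1) e^{-lam t} f dt / t^{1/2}.
    # The substitution t = w^2 removes the t^{-1/2} singularity at t = 0;
    # the integral is then evaluated by the composite Simpson rule on [0, cutoff].
    f = math.sqrt(lam) * u

    def integrand(w):
        if w == 0.0:
            return -2.0  # limiting value of 2 (e^{-y^2/(4 w^2)} - 1) e^{-lam w^2}
        return 2.0 * (math.exp(-y * y / (4.0 * w * w)) - 1.0) * math.exp(-lam * w * w)

    h = cutoff / n_steps
    acc = integrand(0.0) + integrand(cutoff)
    for k in range(1, n_steps):
        acc += (4.0 if k % 2 == 1 else 2.0) * integrand(k * h)
    return u + f * (acc * h / 3.0) / math.gamma(0.5)

# For lam = 1, u = 1 the extension is explicit: U(y) = e^{-y}, and
# (U(y) - u)/y^{2s} -> (c_s/(2s)) f = -1 as y -> 0.
print(extension_scalar(0.1), math.exp(-0.1))
print((extension_scalar(0.1) - 1.0) / 0.1)
```

For $y=0.1$ the computed value agrees with $e^{-y}$ to high accuracy, while the difference quotient is within roughly $y/2$ of its limit $-1$, consistent with the first-order error of the quotient in this scalar example.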
Finally, $(c)$ follows from Lemma [Lemma 4](#lem:extension_equation){reference-type="ref" reference="lem:extension_equation"} and Theorem [Theorem 9](#lem:unique_A_unbounded){reference-type="ref" reference="lem:unique_A_unbounded"}. ◻ **Theorem 13** (Extension problem for $1<s<2$). *Let $L$ be the infinitesimal generator of a uniformly bounded $C_0$-semigroup $\{e^{tL}\}_{t\geq0}$ on $X$. Assume that $0 \in \rho(L)$. For $1<s<2$ and $u\in X$, let $U(y)=U[u](y)$ be its extension as in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"}. The following statements hold.* 1. *If $u\in D((-L)^s)$ and $f=(-L)^su$ then $$\label{eq:U component s large} \begin{aligned} U(y) &= u+\frac{y^2 \Gamma(s-1)}{4 \Gamma(s)}Lu+\frac{1}{\Gamma(s)}\int^\infty_0 \bigg[e^{-y^2/(4t)}-1+\frac{y^2}{4t}\bigg]e^{tL} f\,\frac{dt}{t^{1-s}} \\ &= u+\frac{y^2 \Gamma(s-1)}{4 \Gamma(s)}Lu+\frac{y^{2s}}{4^s\Gamma(s)}\int^\infty_0 \big(e^{-r}-1+r\big)e^{\frac{y^2}{4r}L}f\,\frac{dr}{r^{1+s}}. \end{aligned}$$* 2. *We have that $u\in D((-L)^s)$ if and only if the limits $$\label{eq:1s2incremental} \lim_{y\to0}y^{3-2s}\partial_y \big(\tfrac{2}{y} \partial_y U(y) \big)\qquad\hbox{or}\qquad \lim_{y \to 0}\frac{U(2y) - 4 U(y) + 3u}{y^{2s}}$$ or $$\lim_{y \to 0}y^{3-2s}\partial_y \big( LU +\tfrac{3-2s}{y} \partial_y U + \partial_{yy}U \big)$$ exist in $X$ and, in these cases, $$\label{eq:U Ls} \lim_{y\to0^+}y^{3-2s}\partial_y \big(\tfrac{2}{y}\partial_yU(y)\big)=\lim_{y \to 0}y^{3-2s}\partial_y \big( LU +\tfrac{3-2s}{y} \partial_y U + \partial_{yy}U \big)=c_s(-L)^su$$ and $$\label{eq:limincremental1s2} \lim_{y \to 0}\frac{U(2y) - 4 U(y) + 3U(0)}{y^{2s}}=d_s(-L)^su$$ where $c_s=\frac{\Gamma(2-s)}{4^{s-3/2}\Gamma(s)}$ and $d_s = \frac{(4^{1-s}-1)\Gamma(1-s)}{\Gamma(1+s)}$.* 3. 
*If $u\in D((-L)^s)$ then its extension $U$ given by [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"} is the unique classical solution to the fourth order initial value problem $$\label{eq:uniquenessL1s2} \begin{cases} \big(L+\frac{3-2s}{y}\partial_y+\partial_{yy}\big)^{2}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_yU(y)=0&\hbox{in}~X \\ \lim_{y\to0}\big(LU+\frac{3-2s}{y}\partial_yU+\partial_{yy}U\big)=\frac{ \Gamma(s-1)}{ \Gamma(s)}Lu&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_y\big(LU+\frac{3-2s}{y}\partial_yU+\partial_{yy}U \big)=c_s(-L)^su &\hbox{in}~X \end{cases}$$ and to the second order initial value problem $$\label{eq:uniquenessy1s2} \begin{cases} LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0&\hbox{for}~y>0\\ \lim_{y\to0}U(y)=u&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_yU(y)=0&\hbox{in}~X \\ \lim_{y\to0}\frac{2}{y}\partial_yU(y)=\frac{ \Gamma(s-1)}{ \Gamma(s)}Lu&\hbox{in}~X \\ \lim_{y\to0}y^{3-2s}\partial_y\big(\frac{2}{y}\partial_yU(y)\big)=c_s(-L)^su &\hbox{in}~X. \end{cases}$$* *Proof.* To begin with $(a)$, let $u\in D((-L)^s)$ and $f=(-L)^su$. 
By Theorem [Theorem 2](#lem:approx_identity){reference-type="ref" reference="lem:approx_identity"}, $$u=\lim_{\varepsilon\to0}(\varepsilon I-L)^{-s}f\qquad\hbox{and}\qquad -Lu=\lim_{\varepsilon\to0}(\varepsilon I-L)^{-(s-1)}f$$ Therefore, by Theorems [Theorem 5](#lem: extension_epsilon_presentation){reference-type="ref" reference="lem: extension_epsilon_presentation"} and [Theorem 3](#lem:inverse){reference-type="ref" reference="lem:inverse"} and the dominated convergence theorem, $$\begin{aligned} U(y)-u-\frac{y^2 \Gamma(s-1)}{4 \Gamma(s)}Lu &= \lim_{\varepsilon\to 0}\frac{1}{\Gamma(s)}\int^\infty_0 e^{-\varepsilon t} \bigg[e^{-y^2/(4t)}-1+\frac{y^2}{4t}\bigg]e^{tL}f\,\frac{dt}{t^{1-s}} \\ &= \frac{1}{\Gamma(s)}\int^\infty_0\bigg[e^{-y^2/(4t)}-1+\frac{y^2}{4t}\bigg]e^{tL}f\,\frac{dt}{t^{1-s}}.\end{aligned}$$ This and the change of variables $r=y^2/(4t)$ give [\[eq:U component s large\]](#eq:U component s large){reference-type="eqref" reference="eq:U component s large"}. For $(b)$, suppose that $u\in D((-L)^s)$ and set $f=(-L)^su$. Notice that, by Lemmas [Lemma 4](#lem:extension_equation){reference-type="ref" reference="lem:extension_equation"} and [Lemma 10](#lem:recursion){reference-type="ref" reference="lem:recursion"} with $a=1-2s$ and $n=m=1$, $$\label{eq:Landdy} LU+\tfrac{3-2s}{y}\partial_yU+\partial_{yy}U=\tfrac{2}{y}\partial_yU.$$ Differentiating the first formula in [\[eq:U component s large\]](#eq:U component s large){reference-type="eqref" reference="eq:U component s large"} and changing variables $r=y^2/(4t)$, we obtain $$y^{3-2s}\partial_y\big(\tfrac{2}{y}\partial_yU\big)(y)= \frac{1}{4^{s-3/2}\Gamma(s)}\int^\infty_0 e^{-r} e^{\frac{y^2}{4r}L}f\,\frac{dr}{r^{s-1}}$$ Using this and [\[eq:Landdy\]](#eq:Landdy){reference-type="eqref" reference="eq:Landdy"}, we get [\[eq:U Ls\]](#eq:U Ls){reference-type="eqref" reference="eq:U Ls"}. 
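In more detail (a step left implicit above), letting $y\to0$ in the last display, dominated convergence and the identity $\int_0^\infty e^{-r}r^{1-s}\,dr=\Gamma(2-s)$ give $$\lim_{y\to0}y^{3-2s}\partial_y\big(\tfrac{2}{y}\partial_yU\big)(y)=\frac{\Gamma(2-s)}{4^{s-3/2}\Gamma(s)}\,f=c_sf.$$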
Next, by [\[eq:U component s large\]](#eq:U component s large){reference-type="eqref" reference="eq:U component s large"}, $$\frac{U(2y) - 4U(y) + 3U(0)}{y^{2s}} = \frac{1}{4^s\Gamma(s)}\int^\infty_0 \big(e^{-r}-1+r\big)\big(4^se^{\frac{y^2}{r}L}f-4e^{\frac{y^2}{4r}L}f\big)\,\frac{dr}{r^{1+s}}.$$ We can take the limit as $y\to0$ here and get [\[eq:limincremental1s2\]](#eq:limincremental1s2){reference-type="eqref" reference="eq:limincremental1s2"}. Let us prove the converse of $(b)$. For any $v\in X$, let $U(y)=U[v](y)$ denote its extension as in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"}. Define the linear operator $$T_s v =\lim_{y \to 0}y^{3-2s}\partial_y\big(\tfrac{2}{y}\partial_yU[v](y)\big)= \lim_{y\to0}y^{3-2s}\partial_y\big(LU[v](y)+\tfrac{3-2s}{y} \partial_y U[v](y) + \partial_{yy}U[v](y) \big)$$ for $v\in D(T_s)=\{v\in X: T_sv~\hbox{exists}\}$, where in the second identity we applied [\[eq:Landdy\]](#eq:Landdy){reference-type="eqref" reference="eq:Landdy"}. We just proved that $D((-L)^s)\subset D(T_s)$. For the opposite inclusion, let $u \in D(T_s)$ and $g = T_s u$. Since $(-L)^{-s}$ is a bounded linear operator that commutes with $e^{tL}$ and $(-L)^{-s}u\in D((-L)^s)$, $$(-L)^{-s} g=(-L)^{-s}T_su=T_s[(-L)^{-s}u]=c_s(-L)^s[(-L)^{-s}u]=c_su,$$ that is, $u \in D((-L)^s)$. A similar argument can be done to show that if the second limit in [\[eq:1s2incremental\]](#eq:1s2incremental){reference-type="eqref" reference="eq:1s2incremental"} exists then $u\in D((-L)^s)$, and the details are left to the interested reader.
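To track the constant $d_s$ in [\[eq:limincremental1s2\]](#eq:limincremental1s2){reference-type="eqref" reference="eq:limincremental1s2"} (a routine computation, not spelled out above): letting $y\to0$ in the formula for the second-order difference quotient at the beginning of this part of the proof and using $\int_0^\infty(e^{-r}-1+r)\,r^{-1-s}\,dr=\Gamma(-s)$, valid for $1<s<2$ by two integrations by parts, we get $$\lim_{y \to 0}\frac{U(2y)-4U(y)+3u}{y^{2s}}=\frac{4^s-4}{4^s\Gamma(s)}\,\Gamma(-s)f=\frac{(4^{1-s}-1)\Gamma(1-s)}{\Gamma(1+s)}\,f=d_sf,$$ where in the last step we used the functional equations $\Gamma(1-s)=-s\Gamma(-s)$ and $\Gamma(1+s)=s\Gamma(s)$.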
For $(c)$, Lemma [Lemma 4](#lem:extension_equation){reference-type="ref" reference="lem:extension_equation"} and Lemma [Lemma 10](#lem:recursion){reference-type="ref" reference="lem:recursion"} give that $U$ solves the first equations in [\[eq:uniquenessy1s2\]](#eq:uniquenessy1s2){reference-type="eqref" reference="eq:uniquenessy1s2"} and [\[eq:uniquenessL1s2\]](#eq:uniquenessL1s2){reference-type="eqref" reference="eq:uniquenessL1s2"}. By [\[eq:Landdy\]](#eq:Landdy){reference-type="eqref" reference="eq:Landdy"}, it is enough to verify that $U$ satisfies the boundary conditions of [\[eq:uniquenessy1s2\]](#eq:uniquenessy1s2){reference-type="eqref" reference="eq:uniquenessy1s2"}. Differentiating the first formula in [\[eq:U component s large\]](#eq:U component s large){reference-type="eqref" reference="eq:U component s large"} and changing variables, $$\partial_yU(y)=\frac{y \Gamma(s-1)}{2 \Gamma(s)}Lu - \frac{y^{2s-1}}{4^{s-1/2}\Gamma(s)}\int^\infty_0\big( e^{-r} - 1 \big) e^{\frac{y^2}{4r} L}f\, \frac{dr}{r^{s}}$$ so that $\lim_{y\to0}y^{3-2s}\partial_yU(y)=0$ and $\lim_{y\to0}\frac{2}{y}\partial_yU(y)=\frac{\Gamma(s-1)}{\Gamma(s)}Lu$. The uniqueness is established in Theorem [Theorem 11](#cor:unique1s2){reference-type="ref" reference="cor:unique1s2"}. ◻ # Proof of Theorem [Theorem 1](#thm:extension-general){reference-type="ref" reference="thm:extension-general"} {#sec: extension s general} For the proof of Theorem [Theorem 1](#thm:extension-general){reference-type="ref" reference="thm:extension-general"} we fix $s>0$ with integer part $[s]$. The cases $[s]=0$ and $[s]=1$ are contained in Theorems [Theorem 12](#thm: extension frac domain){reference-type="ref" reference="thm: extension frac domain"} and [Theorem 13](#thm:s in between 1 and 2){reference-type="ref" reference="thm:s in between 1 and 2"}, respectively. Thus, we consider $[s]\geq2$. 
Lemmas [Lemma 4](#lem:extension_equation){reference-type="ref" reference="lem:extension_equation"} and [Lemma 10](#lem:recursion){reference-type="ref" reference="lem:recursion"} with $a=1-2s$ and $m=[s]$ give $(a)$. For $(b)$, let $u\in D((-L)^s)$ and $f=(-L)^su$. Theorem [Theorem 2](#lem:approx_identity){reference-type="ref" reference="lem:approx_identity"} implies that, for any $0\leq k\leq[s]$, $$\lim_{\varepsilon\to0} (\varepsilon I-L)^{-(s-k)}f=\lim_{\varepsilon\to0} (\varepsilon I-L)^{-(s-k)} (-L)^s u = (-L)^ku.$$ Using this, the semigroup expression for $(\varepsilon I-L)^{-(s-k)}$ given in Theorem [Theorem 3](#lem:inverse){reference-type="ref" reference="lem:inverse"} and the limit formula for $U$ in Theorem [Theorem 5](#lem: extension_epsilon_presentation){reference-type="ref" reference="lem: extension_epsilon_presentation"}, we can write $$U(y) - \sum^{[s]}_{k=0}\bigg(\frac{y^2}{4}\bigg)^k \frac{\Gamma(s-k)}{k!\Gamma(s)}L^k u \\ =\lim_{\varepsilon\to0}\frac{1}{\Gamma(s)}\int_0^\infty\bigg[e^{-y^2/(4t)}-\sum_{k=0}^{[s]}\frac{(-\frac{y^2}{4t})^k}{k!}\bigg] e^{-\varepsilon t} e^{tL}f\,\frac{dt}{t^{1-s}}.$$ The limit can be placed inside the integral sign because its integrand is bounded by $$C(s,y,M,\|f\|_X)\bigg[\frac{\chi_{(0,1)}(t)}{t^{1-s}}+\frac{\chi_{(1,\infty)}(t)}{t^{2+[s]-s}}\bigg]\in L^1((0,\infty),dt)$$ uniformly in $\varepsilon>0$. This and the change of variables $r=y^2/(4t)$ give [\[eq:U component s Greater 2\]](#eq:U component s Greater 2){reference-type="eqref" reference="eq:U component s Greater 2"}. To prove $(c)$ and $(d)$, let $u\in D((-L)^s)$.
Since $LU+\frac{1-2s}{y}\partial_yU+\partial_{yy}U=0$, we can apply Lemma [Lemma 10](#lem:recursion){reference-type="ref" reference="lem:recursion"} with $a=1-2s$ and $0\leq m\leq n=[s]$ to get $$\big(L+\tfrac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^mU=\frac{[s]!}{([s]-m)!}\big(\tfrac{2}{y} \partial_y\big)^mU.$$ In particular, it is enough to prove that [\[eq:U Ls Greater 2\]](#eq:U Ls Greater 2){reference-type="eqref" reference="eq:U Ls Greater 2"} exists and that $U$ solves [\[eq:bvpinyalls\]](#eq:bvpinyalls){reference-type="eqref" reference="eq:bvpinyalls"}, as uniqueness follows from Theorem [Theorem 6](#thm:uniquenessgeneral){reference-type="ref" reference="thm:uniquenessgeneral"}. For this, we define the functions $$S_{n,N}(r) = \sum^N_{k=n}\frac{(-1)^{k-n}}{(k-n)!} r^{k-n} \frac{\Gamma(s-k)}{\Gamma(s)}(-L)^k u$$ and $$F_N(r) = e^{-r} - \sum^N_{k=0}\frac{(-r)^k}{k!}$$ for $r>0$. Then $$U(y)= S_{0,[s]}\big(\tfrac{y^2}{4}\big)+\frac{1}{\Gamma(s)} \int^\infty_0 F_{[s]}\big(\tfrac{y^2}{4t}\big)e^{tL}(-L)^s u \, \frac{dt}{t^{1-s}}.$$ Since $S_{n,N}'(r)=-S_{n+1,N}(r)$, by induction, for any $m\geq0$, $$\big(\tfrac{2}{y} \partial_y \big)^mS_{n,N}\big(\tfrac{y^2}{4}\big)=(-1)^mS_{n+m,N}\big(\tfrac{y^2}{4}\big).$$ Similarly, $F'_N(r)=-F_{N-1}(r)$ gives by induction that $$\big(\tfrac{2}{y}\partial_y\big)^mF_N\big(\tfrac{y^2}{4t}\big)=\tfrac{(-1)^m}{t^m}F_{N-m}\big(\tfrac{y^2}{4t}\big).$$ With these identities and the change of variables $r=y^2/(4t)$, $$\begin{aligned} \big(\tfrac{2}{y}\partial_y\big)^mU(y) &= (-1)^mS_{m,[s]}\big(\tfrac{y^2}{4}\big)+\frac{(-1)^m}{\Gamma(s)} \int^\infty_0 F_{[s]-m}\big(\tfrac{y^2}{4t}\big)e^{tL}(-L)^su\,\frac{dt}{t^{1-s+m}} \\ &= (-1)^mS_{m,[s]}\big(\tfrac{y^2}{4}\big)+\frac{(-1)^my^{2(s-m)}}{4^{s-m}\Gamma(s)} \int^\infty_0F_{[s]-m} (r)e^{\frac{y^2}{4r}L}(-L)^s u\,\frac{dr}{r^{1+s-m}}.\end{aligned}$$ From dominated convergence, it follows that, for any $0\leq m\leq [s]$, 
$$\lim_{y\to0}\big(\tfrac{2}{y}\partial_y\big)^mU(y)=\frac{\Gamma(s-m)}{\Gamma(s)}L^mu$$ as the integral term vanishes in the limit. One can also easily see that, for $0\leq m<[s]$, $$2\lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(\tfrac{2}{y}\partial_y\big)^mU(y)= \lim_{y\to0}y^{2-2(s-[s])}\big(\tfrac{2}{y}\partial_y\big)^{m+1}U(y)=0.$$ Finally, when $m=[s]$, $$\big(\tfrac{2}{y}\partial_y\big)^{[s]}U(y)=\frac{\Gamma(s-[s])}{\Gamma(s)}L^{[s]}u+ \frac{(-1)^{[s]}}{\Gamma(s)}\int_0^\infty\big(e^{-y^2/(4t)}-1\big)e^{tL}(-L)^su\,\frac{dt}{t^{1-(s-[s])}}.$$ We can differentiate inside the integral sign and change variables $r=y^2/(4t)$ to get $$\partial_y\big(\tfrac{2}{y}\partial_y\big)^{[s]}U(y)=\frac{(-1)^{[s]+1}y^{-1+2(s-[s])}}{4^{s-([s]+1/2)}\Gamma(s)} \int_0^\infty e^{-r}e^{\frac{y^2}{4r}L}(-L)^su\,\frac{dr}{r^{s-[s]}}.$$ By using dominated convergence, we conclude that $$\lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(\tfrac{2}{y}\partial_y\big)^{[s]}U(y)= \frac{(-1)^{[s]+1}\Gamma([s]+1-s)}{4^{s-([s]+1/2)}\Gamma(s)}(-L)^su.$$ This finishes the proof of $(d)$ and the direct statement of $(c)$. For the converse of $(c)$, given $v\in X$, let $U[v](y)$ denote its extension as in [\[eq:Usemigroup\]](#eq:Usemigroup){reference-type="eqref" reference="eq:Usemigroup"}. Define the linear operator $$\begin{aligned} T_sv &= \lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(\big(\tfrac{2}{y}\partial_y\big)^{[s]} U[v](y) \big) \\ &=\frac{1}{[s]!}\lim_{y\to0}y^{1-2(s-[s])}\partial_y\big(\big(L+\tfrac{1-2(s-[s])}{y}\partial_y+\partial_{yy}\big)^{[s]}U[v](y)\big)\end{aligned}$$ with domain $D(T_s)=\big\{v\in X:T_sv~\hbox{exists}\big\}$. We already know that $D((-L)^s)\subset D(T_s)$. Let $u\in D(T_s)$ and set $g=T_su$. Then, using the continuity of $(-L)^{-s}$, $$\begin{aligned} (-L)^{-s}g &= \lim_{y\to0}y^{1-2(s-[s])}\partial_y \big( \big(\tfrac{2}{y}\partial_y\big)^{[s]} U[(-L)^{-s}u](y) \big) \\ &= c_s(-L)^s(-L)^{-s}u=c_su.\end{aligned}$$ Hence, $u\in D((-L)^s)$. ◻ A. V.
Balakrishnan, Fractional powers of closed operators and the semigroups generated by them, *Pacific J. Math.* **10** (1960), 419--437. H. Berens, P. L. Butzer and U. Westphal, Representation of fractional powers of infinitesimal generators of semigroups, *Bull. Amer. Math. Soc.* **74** (1968), 191--196. P. L. Butzer and H. Berens, *Semi-groups of Operators and Approximation*, Die Grundlehren der mathematischen Wissenschaften **145**, Springer-Verlag New York Inc., New York, 1967. L. A. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, *Comm. Partial Differential Equations* **32** (2007), 1245--1260. W. Chen, A speculative study of 2/3-order fractional Laplacian modeling of turbulence: some thoughts and conjectures, *Chaos* **16** (2006), 023126. G. Cora and R. Musina, The $s$-polyharmonic extension problem and higher-order fractional Laplacians, *J. Funct. Anal.* **283** (2022), Paper No. 109555, 33 pp. J. E. Galé, P. J. Miana, and P. R. Stinga, Extension problem and fractional operators: semigroups and wave equations, *J. Evol. Equ.* **13** (2013), 343--386. M. Haase, *The Functional Calculus for Sectorial Operators*, Oper. Theory Adv. Appl. **169**, Birkhäuser Verlag, Basel, 2006. N. N. Lebedev, *Special Functions and Their Applications*, revised edition, Dover Publications, Inc., New York, 1972. C. Martínez, M. Sanz and L. Marco, Fractional powers of operators, *J. Math. Soc. Japan* **40** (1988), 331--347. C. Martínez Carracedo and M. Sanz Alix, *The Theory of Fractional Powers of Operators*, North-Holland Mathematics Studies **187**, North-Holland Publishing Co., Amsterdam, 2001. J. Meichsner and C. Seifert, On the harmonic extension approach to fractional powers in Banach spaces, *Fract. Calc. Appl. Anal.* **23** (2020), 1054--1089. P. J. Miana, $\alpha$-times integrated semigroups and fractional derivation, *Forum Math.* **14** (2002), 23--46. R. Musina and A. I.
Nazarov, Fractional operators as traces of operator-valued curves, arXiv:2208.06873 (2022), 38 pp. R. H. Nochetto, E. Otárola and A. J. Salgado, A PDE approach to fractional diffusion in general domains: a priori error analysis, *Found. Comput. Math.* **15** (2015), 733--791. A. Pazy, *Semigroups of Linear Operators and Applications to Partial Differential Equations*, Applied Mathematical Sciences **44**, Springer-Verlag, New York, 1983. L. Roncal and P. R. Stinga, Fractional Laplacian on the torus, *Commun. Contemp. Math.* **18** (2016), 1550033, 26 pp. G. F. Simmons, *Differential Equations with Applications and Historical Notes*, third edition, Textbooks in Mathematics, CRC Press, Boca Raton, FL, 2017. P. R. Stinga and J. L. Torrea, Extension problem and Harnack's inequality for some fractional operators, *Comm. Partial Differential Equations* **35** (2010), 2092--2122. R. Yang, On higher order extensions for the fractional Laplacian, arXiv:1302.4413 (2013), 14 pp. K. Yosida, *Functional Analysis*, reprint of the sixth (1980) edition, Classics in Mathematics, Springer-Verlag, Berlin, 1995.
{ "id": "2309.12512", "title": "Sharp extension problem characterizations for higher fractional power\n operators in Banach spaces", "authors": "A. Biswas, P. R. Stinga", "categories": "math.AP math.CA math.FA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this work, we develop first-order (Hessian-free) and zero-order (derivative-free) implementations of the Cubically regularized Newton method for solving general non-convex optimization problems. For that, we employ finite difference approximations of the derivatives. We use a special adaptive search procedure in our algorithms, which simultaneously fits both the regularization constant and the parameters of the finite difference approximations. This makes our schemes free from the need to know the actual Lipschitz constants. Additionally, we equip our algorithms with the lazy Hessian update that reuses a previously computed Hessian approximation matrix for several iterations. Specifically, we prove a global complexity bound of $\mathcal{O}( n^{1/2} \epsilon^{-3/2})$ function and gradient evaluations for our new Hessian-free method, and a bound of $\mathcal{O}( n^{3/2} \epsilon^{-3/2} )$ function evaluations for the derivative-free method, where $n$ is the dimension of the problem and $\epsilon$ is the desired accuracy for the gradient norm. These complexity bounds significantly improve the previously known ones in terms of the joint dependence on $n$ and $\epsilon$, for first-order and zeroth-order non-convex optimization. author: - "Nikita Doikov [^1]" - "Geovani N. Grapiglia [^2]" bibliography: - bibliography.bib date: September 5, 2023 title: | **First and zeroth-order implementations\ of the regularized Newton method with lazy approximated Hessians** --- # Introduction #### Motivation. The Newton Method is a powerful algorithm for solving numerical optimization problems. Employing the matrix of second derivatives (the Hessian of the objective), the Newton Method is able to efficiently tackle *ill-conditioned problems*, which can be very difficult to solve with first-order Gradient Methods.
While the Newton Method has remained popular for many decades due to its exceptional practical performance, the study of its *global* complexity bounds is relatively recent. One of the most theoretically established versions of this method is the Cubically Regularized Newton Method [@nesterov2006cubic], which achieves a global complexity of the order $\mathcal{O}( \epsilon^{-3/2} )$ for finding a second-order stationary point of a non-convex objective with Lipschitz continuous Hessian, where $\epsilon > 0$ is the desired accuracy for the gradient norm. The corresponding complexity of the Gradient Method [@nesterov2018lectures] for non-convex functions with Lipschitz continuous gradient is $\mathcal{O}( \epsilon^{-2})$, which is significantly worse. Thus, the Cubic Newton Method (CNM) achieves a *provable improvement* of the global complexity, as compared to the first-order methods. In recent years, many efficient modifications of CNM have been developed, including *adaptive* and *universal* methods [@cartis2011adaptive1; @cartis2011adaptive2; @grapiglia2017regularized; @grapiglia2019accelerated; @doikov2021minimizing; @doikov2022super] that do not require knowledge of the actual Lipschitz constant of the Hessian and can automatically adapt to the best problem class among the functions with Hölder continuous derivatives, and *accelerated* second-order schemes [@nesterov2008accelerating; @monteiro2013accelerated; @nesterov2018lectures; @grapiglia2019accelerated; @kovalev2022first; @carmon2022optimal] with further improved convergence rates for convex functions, matching the lower complexity bounds [@agarwal2018lower; @nesterov2018lectures; @arjevani2019oracle]. Clearly, we pay a significant price for the better convergence rates of CNM: the computation of second derivatives and the solution of a more difficult subproblem at each step.
Note that for some of the most difficult modern applications, our available information about the objective function $f(\cdot)$ can be restricted to the black-box *First-order oracle:* $\;\; x \; \mapsto \; \{ f(x), \nabla f(x) \}$, or even to *Zeroth-order oracle:* $\;\; x \;\; \mapsto \; \{ f(x) \}$, without direct access to the problem structure and any ability to compute the second derivatives $\nabla^2 f(x)$ exactly. Thus, in these black-box scenarios, we are interested in optimization schemes that efficiently employ only the information we have access to. First-order implementations of CNM were proposed and analysed in [@cartis2012oracle] and [@grapiglia2022cubic]. In both of these works, the methods employ finite-difference Hessian approximations, and complexity bounds of $\mathcal{O}(n\epsilon^{-3/2})$ calls of the oracle were proved, where $n$ is the dimension of the problem. In [@cartis2012oracle], a zeroth-order implementation of CNM was also proposed, for which the authors showed a complexity bound of $\mathcal{O}(n^{2}\epsilon^{-3/2})$ calls of the oracle. At each iteration, the methods in [@cartis2012oracle] and [@grapiglia2022cubic] require the computation of one or more Hessian approximations. Recently, in [@doikov2023second], a second-order variant of CNM with *lazy Hessians* was proposed, in which the same Hessian matrix is reused during $m \geq 1$ consecutive iterations (as in [@shamanskii1967modification]). Remarkably, the method with lazy Hessians retains the iteration complexity bound of $\mathcal{O}(\epsilon^{-3/2})$ for nonconvex problems. Moreover, when $m=n$, it requires in the worst case a number of Hessian evaluations smaller by a factor of $\sqrt{n}$ in comparison with the standard CNM. In this paper, we efficiently combine the use of finite-differences with the reuse of previously computed Hessian approximations to obtain new first and zeroth-order implementations of the CNM.
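To fix ideas, the basic primitive behind the Hessian-free approach is a finite-difference Hessian assembled from $n$ extra gradient evaluations. The following is a minimal sketch of this idea only, with a fixed difference interval $h$ (in our actual methods, $h$ is fitted adaptively together with the regularization parameter):

```python
import numpy as np

def fd_hessian(grad, x, h):
    # Forward-difference Hessian approximation from n extra gradient calls:
    #   column j of B is (grad(x + h e_j) - grad(x)) / h,
    # followed by symmetrization. For a Lipschitz continuous Hessian this
    # gives ||B - Hessian(x)|| = O(h), which is the kind of approximation
    # error that the analysis of the Hessian-free method has to control.
    n = x.size
    g0 = grad(x)
    B = np.empty((n, n))
    for j in range(n):
        xj = x.copy()
        xj[j] += h
        B[:, j] = (grad(xj) - g0) / h
    return 0.5 * (B + B.T)

# Quadratic example f(x) = 0.5 x^T A x: grad f(x) = A x, Hessian = A.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = fd_hessian(lambda x: A @ x, np.array([1.0, -1.0]), h=1e-6)
print(B)
```

On this quadratic example the approximation is exact up to rounding; for general smooth $f$, the choice of $h$ trades off the $O(h)$ discretization error against numerical cancellation.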
Specifically, our algorithms employ adaptive searches by which the regularization parameters in the models and the finite-difference intervals are simultaneously adjusted (as in [@grapiglia2022cubic]). Additionally, to improve the total oracle complexity of our schemes, we employ lazy Hessian updates [@doikov2023second], reusing each Hessian approximation for several consecutive steps. As a result, we obtain purely first-order (Hessian-free) and zeroth-order (derivative-free) implementations of CNM that are adaptive and need, respectively, at most $\mathcal{O}(n^{1/2}\epsilon^{-3/2})$ and $\mathcal{O}(n^{3/2}\epsilon^{-3/2})$ calls of the oracle to find an $\epsilon$-approximate second-order stationary point of the objective function. These complexity bounds significantly improve the corresponding bounds in [@cartis2012oracle] and [@grapiglia2022cubic] in terms of the dependence on $n$. Note that our new methods also support the *composite problem* formulation (as, e.g. in [@grapiglia2019accelerated]), which includes both unconstrained minimization and minimization with respect to simple convex constraints or additive regularization. In turn, the smooth (and the difficult) part of the problem can be non-convex. Finally, we report the results of preliminary numerical experiments that illustrate the practical efficiency of the proposed methods. #### Contents. In Section [2](#SectionCubic){reference-type="ref" reference="SectionCubic"} we introduce the inexact step of CNM, which is the main primitive of all our algorithmic schemes. Section [3](#SectionFiniteDiff){reference-type="ref" reference="SectionFiniteDiff"} is devoted to the finite difference approximations of the second- and first-order derivatives of a smooth function. In Section [4](#SectionHF){reference-type="ref" reference="SectionHF"}, we present the first-order (*Hessian-free*) implementation of CNM and establish its global complexity bounds.
Section [5](#SectionZO){reference-type="ref" reference="SectionZO"} contains the zeroth-order (*derivative-free*) implementation of CNM. In Section [6](#SectionLocal){reference-type="ref" reference="SectionLocal"}, we establish local superlinear convergence for our schemes. Section [7](#SectionExperiments){reference-type="ref" reference="SectionExperiments"} presents illustrative numerical experiments. In Section [8](#SectionDiscussion){reference-type="ref" reference="SectionDiscussion"}, we discuss our results. #### Notation and Assumptions. By $\| \cdot \|$ we denote the standard Euclidean norm for vectors and the spectral norm for matrices, while the notation $\| \cdot \|_F$ is reserved for the matrix Frobenius norm. We denote by $e_1, \ldots, e_n$ the standard basis vectors in $\mathbb{R}^n$. We want to solve the following minimization problem $$\label{MainProblem} \begin{array}{rcl} \min\limits_{x \in Q} \Bigl\{ F(x) & \stackrel{\mathrm{def}}{=}& f(x) + \psi(x) \Bigr\}, \end{array}$$ where $Q \stackrel{\mathrm{def}}{=}\mathop{\mathrm{dom}}\psi \subseteq \mathbb{R}^n$. The function $f : \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable and potentially *non-convex*, while the composite part $\psi: \mathbb{R}^n \to \mathbb{R}\cup \{ +\infty \}$ is a *simple* proper, closed, and convex function, possibly non-differentiable (e.g. the indicator of a given closed convex set $Q$). Our goal is to find a point $\bar{x} \in Q$ with a small (sub)gradient norm: $$\label{InexactSolution} \begin{array}{rcl} \| \nabla f(\bar{x}) + \psi'(\bar{x}) \| & \leq & \epsilon, \end{array}$$ where $\psi'(\bar{x}) \in \partial \psi(\bar{x})$ and $\epsilon > 0$ is a desired tolerance. We aim to find a point satisfying [\[InexactSolution\]](#InexactSolution){reference-type="eqref" reference="InexactSolution"}, using only *first-order* or *zeroth-order* black-box oracle calls for $f$.
At the same time, the composite component $\psi$ is assumed to be simple enough that the corresponding auxiliary minimization problems that involve $\psi$ can be efficiently solved (the explicit form of the subproblem that we need to solve is presented in the next section). We assume that $F$ is bounded from below on $Q$ and denote $$\begin{array}{rcl} F^{\star} & \stackrel{\mathrm{def}}{=}& \inf\limits_{x \in Q} F(x) \;\; > \;\; -\infty. \end{array}$$ To characterize the smoothness of the differentiable part of the objective, we assume the following: **A1** The Hessian of $f$ is Lipschitz continuous, i.e., $$\label{LipHess} \begin{array}{rcl} \|\nabla^{2}f(y)-\nabla^{2}f(x)\| & \leq & L\|y-x\|,\qquad \forall x,y \in \mathbb{R}^n, \end{array}$$ where $L \geq 0$ is the Lipschitz constant. Note that in all our methods, we do not need to know the exact value of $L$, estimating it *automatically* with an adaptive procedure. # Inexact Cubic Newton Step {#SectionCubic} In this section, we analyze one step of the Cubically regularized Newton method with *approximate* second- and first-order information. We also assume that the step of the method is computed *inexactly*, which allows applying our methods in the large-scale setting.
Given $x \in Q$ and $\sigma>0$, let us define two models for $f(y)$ around $x$: the exact second-order model with cubic regularization: $$\begin{array}{rcl} \Omega_{x,\sigma}(y) & \stackrel{\mathrm{def}}{=}& f(x) + \langle \nabla f(x),y-x \rangle + \dfrac{1}{2}\langle\nabla^{2}f(x)(y-x),y-x\rangle + \dfrac{\sigma}{6}\|y-x\|^{3}, \label{ExactModel} \end{array}$$ and an *approximate model*: $$\begin{array}{rcl} M_{x,\sigma}(y) & \stackrel{\mathrm{def}}{=}& f(x)+\langle g,y-x\rangle+\dfrac{1}{2}\langle B(y-x),y-x\rangle+\dfrac{\sigma}{6}\|y-x\|^{3}, \end{array} \label{ApproxModel}$$ where $g\in\mathbb{R}^{n}$ is an approximation to $\nabla f(x)$ and $B\in\mathbb{R}^{n\times n}$ is an approximation to $\nabla^{2}f(z)$, where $z \in \mathbb{R}^n$ is some previously computed point. In the simplest case, we can set $z := x$. However, to reduce the iteration cost of our methods, we will use the same anchor point $z$ for several iterations (which we call *lazy Hessian updates*). Note that due to the cubic regularizer, we can minimize model [\[ApproxModel\]](#ApproxModel){reference-type="eqref" reference="ApproxModel"} globally even when the quadratic part is non-convex. Efficient techniques for solving such subproblems using Linear Algebra tools or gradient-based solvers have been extensively developed in the context of trust-region methods [@conn2000trust] and for the Cubically regularized Newton methods [@nesterov2006cubic; @cartis2011adaptive1; @cartis2011adaptive2; @carmon2019gradient]. Let us consider a minimizer for our approximate model [\[ApproxModel\]](#ApproxModel){reference-type="eqref" reference="ApproxModel"} augmented by the composite component: $$\label{CompositeSubproblem} \boxed{ \begin{array}{rcl} x^+ & \approx & \mathop{\mathrm{argmin}}\limits_{y \in Q} \Bigl\{ M_{x, \sigma}(y) + \psi(y) \Bigr\} \end{array} }$$ We will use such a point $x^+$ as the main iteration step in all our methods.
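For illustration, in the unconstrained case $\psi \equiv 0$ the model [\[ApproxModel\]](#ApproxModel){reference-type="eqref" reference="ApproxModel"} can be minimized globally by diagonalizing $B$ and solving a one-dimensional equation, following the standard approach from the trust-region literature. The sketch below is a minimal illustration of this idea (it uses a simple bisection and ignores the degenerate "hard case"); it is not presented as the production solver:

```python
import numpy as np

def cubic_model_step(g, B, sigma):
    # Globally minimize <g, d> + 0.5 <B d, d> + (sigma/6) ||d||^3 over d in R^n
    # (the model M_{x,sigma} with psi = 0, written in d = y - x). The global
    # minimizer satisfies (B + (sigma/2) r I) d = -g with r = ||d|| and
    # B + (sigma/2) r I positive semidefinite. We diagonalize B once and
    # locate r by bisection; the degenerate "hard case" is ignored here.
    lam, Q = np.linalg.eigh(B)          # eigenvalues in ascending order
    gq = Q.T @ g

    def d_norm(r):
        return np.linalg.norm(gq / (lam + 0.5 * sigma * r))

    r_lo = max(0.0, -2.0 * lam[0] / sigma) + 1e-14  # keep B + (sigma/2) r I > 0
    r_hi = max(2.0 * r_lo, 1.0)
    while d_norm(r_hi) > r_hi:          # bracket the root of ||d(r)|| = r
        r_hi *= 2.0
    for _ in range(200):                # ||d(r)|| - r is strictly decreasing
        r = 0.5 * (r_lo + r_hi)
        if d_norm(r) > r:
            r_lo = r
        else:
            r_hi = r
    return Q @ (-gq / (lam + 0.5 * sigma * r_hi))

# Indefinite example: the returned step satisfies the stationarity condition
# g + B d + (sigma/2) ||d|| d = 0 of the cubic model.
g = np.array([1.0, 1.0])
B = np.array([[-1.0, 0.0], [0.0, 2.0]])
d = cubic_model_step(g, B, sigma=1.0)
print(np.linalg.norm(g + B @ d + 0.5 * np.linalg.norm(d) * d))
```

The bisection exploits that $\|d(r)\|$ is decreasing in $r$ once $B + \tfrac{\sigma r}{2}I \succ 0$, so the scalar equation $\|d(r)\| = r$ has a unique root giving the global minimizer even for indefinite $B$.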
Note that if $x^+$ is an *exact solution* to [\[CompositeSubproblem\]](#CompositeSubproblem){reference-type="eqref" reference="CompositeSubproblem"}, then the following first-order optimality condition holds (see, e.g., Theorem 3.1.23 in [@nesterov2018lectures]): $$\label{ExactStatCond} \begin{array}{rcl} {\langle}g + B(x^+ - x) + \frac{\sigma}{2}\|x^+ - x\| (x^+ - x), y - x^+ {\rangle} + \psi(y) & \geq & \psi(x^+), \quad \forall y \in Q. \end{array}$$ Hence, we have an explicit expression for a specific subgradient of $\psi$ at the new point: $$\begin{array}{rcl} - g - B(x^+ - x) - \frac{\sigma}{2}\|x^+ - x\| (x^+ - x) & \overset{\eqref{ExactStatCond}}{\in} & \partial \psi(x^+). \end{array}$$ Thus, for most solvers of [\[CompositeSubproblem\]](#CompositeSubproblem){reference-type="eqref" reference="CompositeSubproblem"}, we are able to compute the corresponding subgradient vector along with $x^+$. In what follows, we will consider *inexact minimizers* of our model. First, we provide the bound for the new gradient norm. **Lemma 1**. *Let $x^{+}$ be an inexact minimizer of subproblem [\[CompositeSubproblem\]](#CompositeSubproblem){reference-type="eqref" reference="CompositeSubproblem"} satisfying the following condition, for some $\theta \geq 0$: $$\label{InexactCond} \begin{array}{rcl} \| \nabla M_{x, \sigma}(x^{+}) + \psi'(x^{+}) \| & \leq & \theta \|x^{+} - x\|^2, \end{array}$$ for a certain $\psi'(x^{+}) \in \partial \psi(x^{+})$. Let, for some $\delta_g, \delta_B \geq 0$, it hold that $$\label{GradHessApprox} \begin{array}{rcl} \|g - \nabla f(x) \| & \leq & \delta_g, \\ \\ \| B - \nabla^2 f(z) \| & \leq & \delta_B. 
\end{array}$$ Then, we have $$\label{NewGradBound} \begin{array}{rcl} \| \nabla f(x^{+}) + \psi'(x^{+}) \| & \leq & \bigl( \theta + \frac{\sigma + L}{2} \bigr) r^2 \, + \, \bigl( \delta_B + L \|x - z\| \bigr) r \, + \, \delta_g, \end{array}$$ where $r := \|x^+ - x\|$.* Indeed, $$\begin{array}{rcl} \| \nabla f(x^{+}) + \psi'(x^{+}) \| & \leq & \| \nabla f(x^{+}) - \nabla \Omega_{x, \sigma}(x^{+}) \| + \| \nabla \Omega_{x, \sigma}(x^+) - \nabla M_{x, \sigma}(x^+) \| \\ \\ & & \quad + \; \| \nabla M_{x, \sigma}(x^+) + \psi'(x^+) \| \\ \\ & = & \bigl\| \nabla f(x^+) - \nabla f(x) - \nabla^2 f(x)(x^+ - x) - \frac{\sigma}{2}r(x^+ - x) \bigr\| \\ \\ & & \quad + \; \| \nabla f(x) - g + (\nabla^2 f(x) - B)(x^+ - x) \| + \| \nabla M_{x, \sigma}(x^+) + \psi'(x^{+}) \| \\ \\ & \overset{\eqref{LipHess}, \eqref{InexactCond}}{\leq} & \bigl( \theta + \frac{\sigma + L}{2} \bigr) r^2 \, + \, \| \nabla^2 f(x) - B \| r \, + \, \| \nabla f(x) - g\| \\ \\ & \overset{\eqref{LipHess}, \eqref{GradHessApprox}}{\leq} & \bigl( \theta + \frac{\sigma + L}{2} \bigr) r^2 \, + \, \bigl( \delta_B + L\|x - z\| \bigr) r \, + \, \delta_g. \hfill \Box \end{array}$$ Now, we can express the progress of one step in terms of the objective function value. **Lemma 2**. *Let $x^+$ satisfy the following condition: $$\label{XplusCond} \begin{array}{rcl} M_{x, \sigma}(x^{+}) + \psi(x^+) & \leq & F(x), \end{array}$$ and let $g$ and $B$ satisfy [\[GradHessApprox\]](#GradHessApprox){reference-type="eqref" reference="GradHessApprox"} for some $\delta_g, \delta_B \geq 0$. 
Then, we have $$\label{NewFuncProgress} \begin{array}{rcl} F(x) - F(x^+) & \geq & \frac{\sigma - L}{6}r^3 - \frac{1}{2}(\delta_B + L\|x - z\|) r^2 - \delta_g r, \end{array}$$ where $r := \|x^+ - x\|$.* Indeed, we have $$\begin{array}{rcl} F(x^+) & \overset{\eqref{LipHess}}{\leq} & \Omega_{x, L}(x^+) + \psi(x^+) \\ \\ & = & f(x) + {\langle}\nabla f(x), x^{+} - x {\rangle} + \frac{1}{2} {\langle}\nabla^2 f(x)(x^+ - x), x^+ - x {\rangle} + \frac{L}{6}\|x^+ - x\|^3 + \psi(x^+) \\ \\ & = & M_{x, \sigma}(x^+) + {\langle}\nabla f(x) - g, x^{+} - x {\rangle} + \frac{1}{2} {\langle}(\nabla^2 f(x) - B)(x^+ - x), x^+ - x {\rangle}\\ \\ & & \quad \; + \; \frac{L - \sigma}{6}\|x^+ - x\|^3 + \psi(x^+) \\ \\ & \overset{\eqref{XplusCond}, \eqref{GradHessApprox}, \eqref{LipHess}}{\leq} & F(x) + \delta_g r + \frac{1}{2} (\delta_B + L\|x - z\|) r^2 + \frac{L - \sigma}{6} r^3, \end{array}$$ and this is [\[NewFuncProgress\]](#NewFuncProgress){reference-type="eqref" reference="NewFuncProgress"}. ◻ Finally, we analyze the smallest eigenvalue of the Hessian of our problem. Let us consider the case when the composite part $\psi$ is *twice differentiable*, so the Hessian of the full objective in [\[MainProblem\]](#MainProblem){reference-type="eqref" reference="MainProblem"} is well-defined. Then, we denote $$\label{XiDef} \begin{array}{rcl} \xi(y) & \stackrel{\mathrm{def}}{=}& \max\Bigl\{ -\lambda_{\min}( \nabla^2 F(y) ), 0 \Bigr\}, \qquad y \in Q. \end{array}$$ Thus, the value of $\xi(y) \geq 0$ measures the negative part of the smallest eigenvalue of the Hessian at the point $y$. Note that if $x^{+}$ is an *exact solution* to our subproblem [\[CompositeSubproblem\]](#CompositeSubproblem){reference-type="eqref" reference="CompositeSubproblem"}, we can use the following second-order optimality condition (see, e.g., 
Theorem 1.2.2 in [@nesterov2018lectures]): $$\label{SOStat} \begin{array}{rcl} B + \frac{\sigma}{2} \|x^+ - x\| I + \frac{\sigma}{2r} (x^+ - x) (x^+ - x)^{\top} + \nabla^2 \psi(x^+) & \succeq & 0, \end{array}$$ where $I$ is the identity matrix. In order to provide the guarantee for $\xi(x^+)$, we can use the relaxed version of [\[SOStat\]](#SOStat){reference-type="eqref" reference="SOStat"}. **Lemma 3**. *Let $\psi$ be twice differentiable. Let $x^+$ satisfy the following condition, for some $\theta \geq 0$: $$\label{HessInexactCondition} \begin{array}{rcl} B + \theta \|x^+ - x\| I + \nabla^2 \psi(x^+) & \succeq & 0. \end{array}$$ Let, for some $\delta_B \geq 0$, it hold that $$\label{SOInexHess} \begin{array}{rcl} \| B - \nabla^2 f(z) \| & \leq & \delta_B. \end{array}$$ Then, we have $$\label{XiBound} \begin{array}{rcl} \xi(x^+) & \leq & (L + \theta) r + L\|x - z\| + \delta_B, \end{array}$$ where $r := \|x^+ - x\|$.* Using Lipschitzness of the Hessian of $f$ [\[LipHess\]](#LipHess){reference-type="eqref" reference="LipHess"}, we have $$\begin{array}{rcl} \nabla^2 F(x^+) & \succeq & \nabla^2 f(x) + \nabla^2 \psi(x^+) - Lr I \\ \\ & \succeq & \nabla^2 f(z) + \nabla^2 \psi(x^+) - (Lr + L\|x - z\|) I \\ \\ & \overset{\eqref{SOInexHess}}{\succeq} & B + \nabla^2 \psi(x^+) - (Lr + L\|x - z\| + \delta_B) I \\ \\ & \overset{\eqref{HessInexactCondition}}{\succeq} & - ( Lr + \theta r + L \|x - z\| + \delta_B ) I, \end{array}$$ which leads to [\[XiBound\]](#XiBound){reference-type="eqref" reference="XiBound"}. ◻ Let us now combine all our lemmas. We justify the following bound for the progress of one step for our inexact composite Cubic Newton Method (CNM): **Theorem 4**. *Let $\sigma \geq 2L$. 
Let $x^+$ be an inexact minimizer of model [\[ApproxModel\]](#ApproxModel){reference-type="eqref" reference="ApproxModel"} satisfying the following two conditions, for a certain $\psi'(x^+) \in \partial \psi(x^+)$: $$\label{InexactThStep} \begin{array}{rcl} \| \nabla M_{x, \sigma}(x^+) + \psi'(x^+) \| & \leq & \frac{\sigma}{4}\|x^+ - x\|^2, \\ \\ M_{x, \sigma}(x^+) + \psi(x^+) & \leq & F(x), \end{array}$$ where $g$ and $B$ satisfy [\[GradHessApprox\]](#GradHessApprox){reference-type="eqref" reference="GradHessApprox"} for some $\delta_g, \delta_B \geq 0$. Then, we have $$\label{InexactProgressThStep} \begin{array}{rcl} F(x) - F(x^+) & \geq & \frac{1}{3 \cdot 2^6 \sigma^{1/2}} \| \nabla f(x^+) + \psi'(x^+) \|^{3/2} + \mathcal{E}, \end{array}$$ where $$\begin{array}{rcl} \mathcal{E} & \stackrel{\mathrm{def}}{=}& \frac{\sigma}{48} \|x^+ - x\|^3 - \frac{171}{\sigma^2}\Bigl[ \delta_B^3 + L^3 \|x - z\|^3 \Bigr] - \frac{3}{\sigma^{1/2}} \delta_g^{3/2}. \end{array}$$ Assume additionally that $\psi$ is twice differentiable, and $x^+$ satisfies the following extra condition: $$\label{InexatSOThStep} \begin{array}{rcl} B + \sigma \|x^+ - x\| I + \nabla^2 \psi(x^+) & \succeq & 0. \end{array}$$ Then, we can improve [\[InexactProgressThStep\]](#InexactProgressThStep){reference-type="eqref" reference="InexactProgressThStep"}, as follows: $$\label{InexactProgressSOThStep} \begin{array}{rcl} F(x) - F(x^+) & \geq & \max\Bigl\{ \frac{1}{3 \cdot 2^6 \sigma^{1/2}} \| \nabla f(x^+) + \psi'(x^+) \|^{3/2}, \; \frac{1}{2 \cdot 3^6 \sigma^2} \bigl[ \xi(x^+) \bigr]^3 \Bigr\} + \mathcal{E}. \end{array}$$* We denote $r := \|x^+ - x\|$. Firstly, we bound the negative terms from [\[NewFuncProgress\]](#NewFuncProgress){reference-type="eqref" reference="NewFuncProgress"}, by using Young's inequality: $ab \leq \frac{a^3}{3} + \frac{2b^{3/2}}{3}$, $a, b \geq 0$. 
We have $$\label{FuncProgThB1} \begin{array}{rcl} \frac{1}{2} (\delta_B + L \|x - z\|) r^2 & = & \Bigl[ \frac{\sigma^{2/3} r^2}{2^{10/3}} \Bigr] \cdot \Bigl[ \frac{2^{10 / 3}}{2 \sigma^{2/3}} \cdot \bigl( \delta_B + L\|x - z\| \bigr) \Bigr] \\ \\ & \leq & \frac{2}{3} \Bigl[ \frac{\sigma^{2/3} r^2}{2^{10/3}} \Bigr]^{3/2} + \frac{1}{3} \Bigl[ \frac{2^{10 / 3}}{2 \sigma^{2/3}} \cdot \bigl( \delta_B + L \|x - z\| \bigr) \Bigr]^3 \\ \\ & = & \frac{\sigma r^3}{3 \cdot 2^4} + \frac{2^7}{3\sigma^2} \bigl( \delta_B + L \|x - z\| \bigr)^3 \;\; \leq \;\; \frac{\sigma r^3}{3 \cdot 2^4} + \frac{2^9}{3 \sigma^2} \bigl( \delta_B^3 + L^3\|x - z\|^3 \bigr), \end{array}$$ and $$\label{FuncProgThB2} \begin{array}{rcl} \delta_g r & = & \Bigl[ \frac{\sigma^{1/3} r}{2^{4/3}} \Bigr] \cdot \Bigl[ \frac{2^{4/3} \delta_g}{\sigma^{1/3}} \Bigr] \;\; \leq \;\; \frac{\sigma r^3}{48} + \frac{2^3 \delta_g^{3/2}}{3\sigma^{1/2}}. \end{array}$$ Therefore, for the functional progress, we obtain $$\label{FuncProgThStep1} \begin{array}{rcl} F(x) - F(x^+) & \overset{ \eqref{NewFuncProgress}, \eqref{FuncProgThB1}, \eqref{FuncProgThB2} }{\geq} & \frac{\sigma}{24} r^3 \; - \; \frac{2^9}{3 \sigma^2} \bigl( \delta_B^3 + L^3\|x - z\|^3 \bigr) \; - \; \frac{2^3 \delta_g^{3/2}}{3\sigma^{1/2}}. \end{array}$$ Secondly, we can relate $r$ and the new gradient norm by using [\[NewGradBound\]](#NewGradBound){reference-type="eqref" reference="NewGradBound"}. 
We get $$\label{FuncProgThStep2} \begin{array}{rcl} \| \nabla f(x^+) + \psi'(x^+) \|^{3/2} & \overset{\eqref{NewGradBound}}{\leq} & \Bigl( \sigma r^2 + \delta_B r + L \|x - z\| r + \delta_g \Bigr)^{3/2} \\ \\ & \overset{(*)}{\leq} & 2 \sigma^{1/2} \Bigl( \sigma r^3 \; + \; \frac{\delta_B^{3/2} r^{3/2}}{\sigma^{1/2}} \; + \; \frac{L^{3/2} \|x - z\|^{3/2} r^{3/2}}{\sigma^{1/2}} \; + \; \frac{ \delta_g^{3/2}}{\sigma^{1/2}} \Bigr)\\ \\ & \overset{(**)}{\leq} & 2 \sigma^{1/2} \Bigl( 2 \sigma r^3 + \frac{\delta_B^{3}}{2 \sigma^2} + \frac{L^3 \|x - z\|^3}{2 \sigma^2} + \frac{\delta_g^{3/2}}{\sigma^{1/2}} \Bigr), \end{array}$$ where we used in $(*)$ Jensen's inequality: $(\sum_{i = 1}^4 a_i )^{3/2} \leq 2 \sum_{i = 1}^4 a_i^{3/2}$ for non-negative numbers $\{ a_i \}_{i = 1}^4$, and in $(**)$ Young's inequality: $ab \leq \frac{a^2}{2} + \frac{b^2}{2}$, $a, b \geq 0$. Rearranging the terms, we obtain $$\label{FuncProgThStep3} \begin{array}{rcl} \sigma r^3 & \overset{\eqref{FuncProgThStep2}}{\geq} & \frac{1}{4 \sigma^{1/2}} \| \nabla f(x^+) + \psi'(x^+) \|^{3/2} - \frac{1}{4\sigma^2} \bigl( \delta_B^3 + L^3\|x - z\|^3 \bigr) - \frac{\delta_g^{3/2}}{2 \sigma^{1/2}}. \end{array}$$ Combining [\[FuncProgThStep1\]](#FuncProgThStep1){reference-type="eqref" reference="FuncProgThStep1"} and [\[FuncProgThStep3\]](#FuncProgThStep3){reference-type="eqref" reference="FuncProgThStep3"} gives [\[InexactProgressThStep\]](#InexactProgressThStep){reference-type="eqref" reference="InexactProgressThStep"}. 
Finally, assuming twice differentiability of the composite part and using Lemma [Lemma 3](#LemmaNewHess){reference-type="ref" reference="LemmaNewHess"} for the extra condition [\[InexatSOThStep\]](#InexatSOThStep){reference-type="eqref" reference="InexatSOThStep"} on $x^+$, we get $$\label{Xi3Bound} \begin{array}{rcl} \bigl[ \xi(x^+) \bigr]^3 & \overset{\eqref{XiBound}}{\leq} & \Bigl[ \frac{3}{2} \sigma r + L\|x - z\| + \delta_B \Bigr]^3 \\ \\ & \overset{(*)}{\leq} & \frac{3^5}{2^3} \sigma^3 r^3 + 3^2 L^3 \|x - z\|^3 + 3^2 \delta_B^3, \end{array}$$ where we used in $(*)$ Jensen's inequality: $( \sum_{i = 1}^3 a_i )^3 \leq 3^2 \sum_{i = 1}^3 a_i^3$ for non-negative numbers $\{ a_i \}_{i = 1}^3$. Hence, rearranging the terms, we obtain $$\begin{array}{rcl} \sigma r^3 & \overset{\eqref{Xi3Bound}}{\geq} & \frac{2^3}{3^5 \sigma^2} \bigl[ \xi(x^+) \bigr]^3 - \bigl( \frac{2}{3} \bigr)^3 \frac{1}{\sigma^2} \bigl( \delta_B^3 + L^3\|x - z\|^3 \bigr). \end{array}$$ Combining it with [\[FuncProgThStep1\]](#FuncProgThStep1){reference-type="eqref" reference="FuncProgThStep1"} justifies the improved bound [\[InexactProgressSOThStep\]](#InexactProgressSOThStep){reference-type="eqref" reference="InexactProgressSOThStep"}. ◻ # Finite Difference Approximations {#SectionFiniteDiff} In this section, we recall important bounds on finite difference approximations for the Hessian and for the gradient of our objective. Let us start with the first-order approximation of the Hessian, which will lead us to the first-order (Hessian-free) implementation of the Cubic Newton Method. See, e.g., Lemma 3 in [@grapiglia2022cubic]. **Lemma 5**. *Suppose that A1 holds. Given $\bar{x}\in\mathbb{R}^{n}$ and $h>0$, let $A\in\mathbb{R}^{n\times n}$ be defined by $$\label{HessFODefA} \begin{array}{rcl} A &= & \left[\dfrac{\nabla f(\bar{x}+he_{1})-\nabla f(\bar{x})}{h},\ldots,\dfrac{\nabla f(\bar{x}+he_{n})-\nabla f(\bar{x})}{h}\right]. 
\end{array}$$ Then, the matrix $$\label{HessFODefB} \begin{array}{rcl} B &= & \frac{1}{2}\left(A+A^{\top}\right) \end{array}$$ satisfies $$\label{HessFOBound} \begin{array}{rcl} \|B-\nabla^{2}f(\bar{x})\| & \leq & \frac{\sqrt{n} L}{2} h. \end{array}$$* Now, let us consider zeroth-order approximations of the derivatives, which require computing only objective function values (see, e.g., Section 7.1 in [@nocedal2006numerical]). We establish explicit bounds necessary for the analysis of our methods and provide their proofs to ensure completeness of our presentation. The following lemma gives a zeroth-order approximation guarantee for the gradient. **Lemma 6**. *Suppose that A1 holds. Given $\bar{x}\in\mathbb{R}^{n}$ and $h>0$, let $g\in\mathbb{R}^{n}$ be defined by $$\label{GradZODef} \begin{array}{rcl} g_{i} & = & \dfrac{f(\bar{x}+he_{i})-f(\bar{x}-he_{i})}{2h},\quad i=1,\ldots,n. \end{array}$$ Then, $$\label{GradZOBound} \begin{array}{rcl} \| g - \nabla f(\bar{x}) \| & \leq & \frac{\sqrt{n} L}{6} h^2. \end{array}$$* *Proof.* By A1 we have $$\begin{array}{rcl} \left|f(\bar{x}+he_{i})-f(\bar{x})-h\langle\nabla f(\bar{x}),e_{i}\rangle-\dfrac{h^{2}}{2}\langle\nabla^{2}f(\bar{x})e_{i},e_{i}\rangle\right| & \leq & \frac{Lh^{3}}{6} \end{array} \label{eq:2.22}$$ and $$\begin{array}{rcl} \left|f(\bar{x})-h\langle\nabla f(\bar{x}),e_{i}\rangle+\dfrac{h^{2}}{2}\langle\nabla^{2}f(\bar{x})e_{i},e_{i}\rangle-f(\bar{x}-he_{i})\right| & \leq & \frac{Lh^{3}}{6}. 
\end{array} \label{eq:2.23}$$ Summing ([\[eq:2.22\]](#eq:2.22){reference-type="ref" reference="eq:2.22"}) and ([\[eq:2.23\]](#eq:2.23){reference-type="ref" reference="eq:2.23"}) and using the triangle inequality, we get $$\label{GFOinterm} \begin{array}{rcl} \left|f(\bar{x}+he_{i})-f(\bar{x}-he_{i})-2h\left[\nabla f(\bar{x})\right]_{i}\right| & \leq & \frac{Lh^{3}}{3}. \end{array}$$ Therefore, $$\begin{array}{rcl} |g_i - [\nabla f(\bar{x})]_i |& = & \left| \frac{f(\bar{x} + he_i) - f(\bar{x} - he_i)}{2h} - [\nabla f(\bar{x})]_i \right| \;\; \overset{\eqref{GFOinterm}}{\leq} \;\; \frac{L h^2}{6}. \end{array}$$ Thus, we conclude $$\begin{array}{rcl} \|g - \nabla f(\bar{x})\| & \leq & \sqrt{n}\|g-\nabla f(\bar{x})\|_{\infty} \;\; \leq \;\; \frac{\sqrt{n}L}{6}h^{2}. \end{array}$$ ◻ Finally, we provide a zeroth-order approximation guarantee for the Hessian. **Lemma 7**. *Suppose that A1 holds. Given $\bar{x}\in\mathbb{R}^{n}$ and $h>0$, let $A\in\mathbb{R}^{n\times n}$ be defined by $$\label{HessZO} \begin{array}{rcl} A_{ij} & = & \dfrac{f(\bar{x}+he_{i}+he_{j})-f(\bar{x}+he_{i})-f(\bar{x}+he_{j})+f(\bar{x})}{h^{2}},\quad i,j=1,\ldots,n. \end{array}$$ Then, the matrix $$\label{HessZOBDef} \begin{array}{rcl} B & = & \frac{1}{2}\left(A+A^{\top}\right) \end{array}$$ satisfies $$\label{HessZOBBound} \begin{array}{rcl} \|B-\nabla^{2}f(\bar{x})\| & \leq & \frac{2 nL}{3}h. 
\end{array}$$* *Proof.* By A1 we have the following inequalities: $$\label{HessZOEqBound1} \begin{array}{cl} & \Bigl|f(\bar{x} + h e_{i} + h e_{j}) - f(\bar{x}) - h\langle\nabla f(\bar{x}),e_{i}\rangle - h\langle\nabla f(\bar{x}),e_{j}\rangle \\ \\ & \; - \frac{h^{2}}{2}\langle\nabla^{2}f(\bar{x})e_{i},e_{i}\rangle-h^{2}\langle\nabla^{2}f(\bar{x})e_{i},e_{j}\rangle - \frac{h^{2}}{2}\langle\nabla^{2}f(\bar{x})e_{j},e_{j}\rangle \Bigr| \;\; \leq \;\; \frac{Lh^{3}}{3}, \end{array}$$ $$\label{HessZOEqBound2} \begin{array}{c} \Bigl|f(\bar{x}) + h\langle\nabla f(\bar{x}),e_{i}\rangle + \frac{h^{2}}{2}\langle\nabla^{2}f(\bar{x})e_{i},e_{i}\rangle - f(\bar{x}+he_{i}) \Bigr| \;\; \leq \;\; \frac{Lh^{3}}{6}, \end{array}$$ and $$\label{HessZOEqBound3} \begin{array}{c} \Bigl|f(\bar{x}) + h\langle\nabla f(\bar{x}),e_{j}\rangle + \frac{h^{2}}{2}\langle\nabla^{2}f(\bar{x})e_{j},e_{j}\rangle - f(\bar{x}+he_{j}) \Bigr| \;\; \leq \;\; \frac{Lh^{3}}{6}. \end{array}$$ Summing [\[HessZOEqBound1\]](#HessZOEqBound1){reference-type="eqref" reference="HessZOEqBound1"}-[\[HessZOEqBound3\]](#HessZOEqBound3){reference-type="eqref" reference="HessZOEqBound3"}, and using the triangle inequality, we get $$\begin{array}{c} \Bigl|f(\bar{x}+he_{i}+he_{j})-f(\bar{x}+he_{i})-f(\bar{x}+he_{j}) + f(\bar{x})-h^{2} \langle\nabla^{2}f(\bar{x})e_{i},e_{j}\rangle\Bigr|\;\; \leq \;\;\frac{2Lh^{3}}{3}. \end{array}$$ Hence, $$\begin{array}{c} h^{2}\Bigl|\frac{f(\bar{x}+he_{i}+he_{j})-f(\bar{x}+he_{i})-f(\bar{x}+he_{j})+f(\bar{x})}{h^{2}}-\left[\nabla^{2}f(\bar{x})\right]_{ij}\Bigr| \;\; \leq \;\; \frac{2Lh^{3}}{3} \end{array}$$ and, consequently, $$\begin{array}{rcl}\left|A_{ij}-\left[\nabla^{2}f(\bar{x})\right]_{ij}\right| & \leq & \frac{2L}{3}h. \end{array}$$ Thus, we finally obtain $$\begin{array}{rcl} \|B-\nabla^{2}f(\bar{x})\|\leq \|A-\nabla^{2}f(\bar{x})\| & \leq & n\|A-\nabla^{2}f(\bar{x})\|_{\max}\leq\frac{2nL}{3}h, \end{array}$$ which is the required bound. 
◻ # Hessian-Free CNM with Lazy Hessians {#SectionHF} Let us present our first algorithm, which is the *Hessian-free* implementation of the Cubic Newton Method (CNM) [@nesterov2006cubic]. In each iteration of our algorithm, we use an adaptive search to simultaneously fit the regularization constant $\sigma$ and the parameter $h$ of the finite difference approximation of the Hessian (see Lemma [Lemma 5](#LemmaHessFO){reference-type="ref" reference="LemmaHessFO"}). Therefore, our algorithm does not need to fix these parameters in advance, adjusting them automatically. After the new approximation $B_{k, \ell} \approx \nabla^2 f(x_k)$ of the Hessian is computed, where $k \geq 0$ is the current iteration and $\ell$ is the adaptive search index, we keep using the same matrix $B_{k, \ell}$ for the next $m$ Cubic Newton steps [\[CompositeSubproblem\]](#CompositeSubproblem){reference-type="eqref" reference="CompositeSubproblem"}, where $m \geq 1$ is our global key parameter. If we set $m := 1$, we update the Hessian approximation at each Cubic Newton step, which can be computationally costly. Instead, we can use $m > 1$ (lazy Hessian updates [@doikov2023second]), which reuses the same Hessian approximation for several steps and thus reduces the arithmetic complexity. Let us denote by $(\hat{x}, \alpha) = \text{\ttfamily CubicSteps}(x, B, \sigma, m, \epsilon)$ an auxiliary procedure that performs $m$ inexact Cubic Newton steps [\[CompositeSubproblem\]](#CompositeSubproblem){reference-type="eqref" reference="CompositeSubproblem"}, starting from the point $x \in Q$ and using the same given matrix $B = B^{\top}$ and regularization constant $\sigma > 0$ for all steps, while recomputing the gradients. The parameter $\epsilon > 0$ is used for validating a certain stopping condition. We can write this procedure in algorithmic form as follows. **Step 0.** Set $x_0 := x$ and $t := 0$. 
**Step 1.** If $t = m$ then stop and **return** $(x_t, \, \text{\ttfamily success})$. **Step 2.** Compute $x_{t + 1}$ as an approximate solution to the subproblem $$\begin{array}{c} \min\limits_{y \in Q} \Bigl\{ \, M_{x_{t}, \sigma}(y) + \psi(y) \, \Bigr\}, \qquad \text{where} \end{array}$$ $$\begin{array}{rcl} M_{x_{t}, \sigma}(y) & \equiv & f(x_t) + {\langle}\nabla f(x_t), y - x_t {\rangle} + \frac{1}{2}{\langle}B(y - x_t), y - x_t {\rangle} + \frac{\sigma}{6}\|y - x_t\|^3 \end{array}$$ such that $$\label{FOMCond} \begin{array}{rcl} M_{x_{t},\sigma}(x_{t + 1}) + \psi(x_{t + 1}) &\leq & F(x_{t}) \quad \text{and} \\ \\ \|\nabla M_{x_{t}, \sigma} ( x_{t + 1} ) + \psi'(x_{t + 1}) \| & \leq & \frac{\sigma}{4} \|x_{t + 1} - x_t \|^{2} \quad \text{for some} \;\; \psi'(x_{t + 1}) \in \partial \psi(x_{t + 1}), \end{array}$$ and (**optionally**, if $\psi$ is twice differentiable) such that $$\label{FOMCond2} \begin{array}{rcl} B + \sigma \|x_{t + 1} - x_t \| I + \nabla^2 \psi(x_{t + 1}) & \succeq & 0. \end{array}$$ **Step 3.** If $\| \nabla f(x_{t + 1}) + \psi'(x_{t + 1}) \| \leq \epsilon$ then stop and **return** $(x_{t + 1}, \, \text{\ttfamily solution})$. **Step 4.** If $F(x_0) - F(x_{t + 1}) \geq \frac{\epsilon^{3/2}}{384 \sigma^{1/2}} (t + 1)$ holds then set $t := t + 1$ and go to Step 1. Otherwise, stop and **return** $(x_{t + 1}, \text{\ttfamily halt})$. This procedure returns the resulting point $\hat{x} \in Q$ and a status variable $$\begin{array}{rcl} \alpha &\in & \{\text{\ttfamily success}, \, \text{\ttfamily solution}, \, \text{\ttfamily halt} \} \end{array}$$ which correspond, respectively, to finishing all the steps *successfully*, finding a point with *small gradient norm*, and *halting* the procedure due to insufficient progress in terms of the objective function. In the last case, we will need to update our estimates $\sigma$ and $B$ adaptively and restart this procedure with new parameters. 
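For the smooth case ($\psi \equiv 0$), the procedure above can be sketched as follows; this is our schematic rendering, with the inexactness certificates of Step 2 delegated to an assumed external routine `solve_model` that returns an (approximate) global minimizer $h$ of the cubic model:

```python
import numpy as np

def cubic_steps(x, F, grad_f, B, sigma, m, eps, solve_model):
    """Schematic CubicSteps for psi == 0 (a sketch, not the exact pseudocode).
    `solve_model(g, B, sigma)` is assumed to return an (inexact) global
    minimizer h of <g,h> + 0.5<Bh,h> + (sigma/6)||h||^3."""
    F0 = F(x)
    for t in range(m):
        # Step 2: one inexact Cubic Newton step with the frozen matrix B.
        h = solve_model(grad_f(x), B, sigma)
        x_next = x + h
        # Step 3: small gradient norm -> approximate solution found.
        if np.linalg.norm(grad_f(x_next)) <= eps:
            return x_next, "solution"
        # Step 4: check the accumulated functional progress; halt otherwise.
        if F0 - F(x_next) < eps**1.5 / (384.0 * np.sqrt(sigma)) * (t + 1):
            return x_next, "halt"
        x = x_next
    # Step 1: all m steps completed.
    return x, "success"
```

The `halt` branch is what triggers the adaptive restart with updated $\sigma$ and $B$ described above.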
The next lemma shows that for a sufficiently large value of $\sigma$ and a small enough $h$ (the parameter of the finite difference approximation of the Hessian), the status returned by Algorithm [\[alg:HessianFree\]](#alg:HessianFree){reference-type="ref" reference="alg:HessianFree"} always belongs to $\{ \text{\ttfamily success}, \, \text{\ttfamily solution} \}$: the procedure either makes significant progress in the function value or solves the initial problem [\[MainProblem\]](#MainProblem){reference-type="eqref" reference="MainProblem"}. **Lemma 8**. *Suppose that A1 holds. Given $x \in Q$, $\epsilon > 0$, $\sigma > 0$, and $m \in \mathbb{N} \setminus \{0 \}$, let $(\hat{x}, \alpha)$ be the corresponding output of Algorithm [\[alg:HessianFree\]](#alg:HessianFree){reference-type="ref" reference="alg:HessianFree"} with $B = \frac{1}{2}(A + A^{\top})$, where $$\label{HFLemmaA} \begin{array}{rcl} A & = & \Bigl[ \frac{\nabla f(x + he_1) - \nabla f(x)}{h}, \ldots, \frac{\nabla f(x + h e_n) - \nabla f(x)}{h} \Bigr] \end{array}$$ for some $h > 0$. If $$\label{HFSimgaH} \begin{array}{rcl} \sigma & \geq & 2^4 \bigl( \frac{2}{3} \bigr)^{\frac{1}{3}} mL \qquad \text{and} \qquad h \;\; \leq \;\; \Bigl[ \frac{3 \sigma^{3/2} \epsilon^{3/2}}{ 2^7 \cdot (192) n^{3/2} L^3 } \Bigr]^{\frac{1}{3}}, \end{array}$$ then either $\alpha = \text{\normalfont \ttfamily solution}$ (and thus $\| \nabla f(\hat{x}) + \psi'(\hat{x}) \| \leq \epsilon$), or $\alpha = \text{\normalfont \ttfamily success}$ and so we have $$\label{FOFuncProgress} \begin{array}{rcl} F(x) - F(\hat{x}) & \geq & \frac{\epsilon^{3/2}}{2 \cdot (192) \sigma^{1/2}} m. \end{array}$$* *Proof.* Suppose that $$\label{BigGrad} \begin{array}{rcl} \| \nabla f(\hat{x}) + \psi'(\hat{x}) \| & > & \epsilon. \end{array}$$ Hence, $\alpha \not= \text{\ttfamily solution}$. Let us denote by $t^{\star}$ the last value of $t$ checked in Step 1. Clearly, $t^{\star} \leq m$ and we just need to prove that $t^{\star} = m$. 
Suppose that $t^{\star} < m$, and hence the inequality in Step 4 of the algorithm does not hold for $t := t^{\star}$. It follows from [\[HFLemmaA\]](#HFLemmaA){reference-type="eqref" reference="HFLemmaA"} and Lemma [Lemma 5](#LemmaHessFO){reference-type="ref" reference="LemmaHessFO"} that $$\label{FOBBound} \begin{array}{rcl} \| B - \nabla^2 f(x) \| & \leq & \delta_B \end{array}$$ for $$\begin{array}{rcl} \delta_B & = & \frac{\sqrt{n} L}{2} h. \end{array}$$ Then, by the second inequality in [\[HFSimgaH\]](#HFSimgaH){reference-type="eqref" reference="HFSimgaH"} we get $$\label{DeltaBBound} \begin{array}{rcl} \frac{2^9}{3 \sigma^2} \delta_B^3 & = & \frac{2^9}{3 \sigma^2} \cdot \frac{n^{3/2} L^3}{2^3} \cdot h^3 \;\; \leq \;\; \frac{\epsilon^{3/2}}{2 \cdot (192) \sigma^{1/2}}. \end{array}$$ Hence, in view of [\[FOMCond\]](#FOMCond){reference-type="eqref" reference="FOMCond"} and [\[FOBBound\]](#FOBBound){reference-type="eqref" reference="FOBBound"}, Theorem [Theorem 4](#ThStep){reference-type="ref" reference="ThStep"} with $\delta_g := 0$ and $z := x = x_0$ gives $$\label{FuncProgLemma1} \begin{array}{rcl} F(x_{t}) - F(x_{t + 1}) & \geq & \frac{\sigma}{48} \|x_{t + 1} - x_{t}\|^3 + \frac{1}{192 \sigma^{1/2}} \| \nabla f(x_{t + 1}) + \psi'(x_{t + 1}) \|^{3/2} \\ \\ & & \quad - \; \frac{2^9}{3 \sigma^2} \delta_B^3 - \frac{2^9 L^3}{3 \sigma^2} \|x_{t} - x_0\|^3 \\ \\ & \overset{\eqref{DeltaBBound}}{\geq} & \frac{\sigma}{48} \|x_{t + 1} - x_{t}\|^3 + \frac{1}{192 \sigma^{1/2}} \| \nabla f(x_{t + 1}) + \psi'(x_{t + 1}) \|^{3/2} \\ \\ & & \quad - \; \frac{\epsilon^{3/2}}{2(192)\sigma^{1/2}} - \frac{2^9 L^3}{3 \sigma^2} \|x_{t} - x_0\|^3 \\ \\ & \overset{\eqref{BigGrad}}{\geq} & \frac{\epsilon^{3/2}}{2(192)\sigma^{1/2}} + \frac{\sigma}{48} \|x_{t + 1} - x_{t}\|^3 - \frac{2^9 L^3}{3 \sigma^2} \|x_{t} - x_0\|^3, \end{array}$$ for any $0 \leq t \leq t^{\star}$. 
Finally, summing up these inequalities, and using the triangle inequality, we obtain $$\begin{array}{rcl} F(x_0) - F(x_{t^{\star} + 1}) & \geq & \frac{\epsilon^{3/2}}{2(192)\sigma^{1/2}} (t^{\star} + 1) + \frac{\sigma}{48} \sum\limits_{i = 1}^{t^{\star} + 1} r_i^3 - \frac{2^9 L^3}{3 \sigma^2} \sum\limits_{i = 1}^{t^{\star}} \Bigl( \sum\limits_{j = 1}^i r_j \Bigr)^3, \end{array}$$ where $r_i := \|x_i - x_{i - 1}\|$. Using Lemma B.1 from [@doikov2023second] and our choice of $\sigma$ [\[HFSimgaH\]](#HFSimgaH){reference-type="eqref" reference="HFSimgaH"} we conclude that $$\begin{array}{rcl} F(x_0) - F(x_{t^{\star} + 1}) & \geq & \frac{\epsilon^{3/2}}{2(192)\sigma^{1/2}} (t^{\star} + 1), \end{array}$$ which contradicts the assumption that the inequality in Step 4 does not hold. Hence, $t^{\star} = m$ and $\alpha = \text{\ttfamily success}$. ◻ For establishing the global convergence to a *second-order stationary point*, we can use our procedure with the stronger guarantee [\[FOMCond2\]](#FOMCond2){reference-type="eqref" reference="FOMCond2"} on the solution to the subproblem; this condition is optional. If we use the extra guarantee [\[FOMCond2\]](#FOMCond2){reference-type="eqref" reference="FOMCond2"}, the procedure should no longer be stopped in Step 3, since we are then interested in points with *both* a small gradient norm and a bounded smallest eigenvalue. We can justify the following analogue of Lemma [Lemma 8](#LemmaHF){reference-type="ref" reference="LemmaHF"} when using condition [\[FOMCond2\]](#FOMCond2){reference-type="eqref" reference="FOMCond2"}: **Lemma 9**. *Consider the sequence $\{ x_t \}_{t = 1}^m$ generated by Algorithm [\[alg:HessianFree\]](#alg:HessianFree){reference-type="ref" reference="alg:HessianFree"} with the extra condition [\[FOMCond2\]](#FOMCond2){reference-type="eqref" reference="FOMCond2"} on the inexact solution to the subproblem and without the stop[^3] in Step 3. 
Then, under the conditions of Lemma [Lemma 8](#LemmaHF){reference-type="ref" reference="LemmaHF"}, we have either $$\label{HF2SmallGrad} \begin{array}{rcl} \min\limits_{ 1 \leq t \leq m } \biggl[ \, \Delta_t \; \stackrel{\mathrm{def}}{=}\; \max\Bigl\{ \| \nabla f(x_t) + \psi'(x_t) \|, \; \frac{1}{\sigma} \bigl( \frac{2}{3} \bigr)^{\frac{10}{3}} \bigl[ \xi(x_t) \big]^2 \Bigr\} \, \biggr] & \leq & \epsilon, \end{array}$$ or $$\label{FOFuncProgress2} \begin{array}{rcl} F(x) - F(\hat{x}) & \geq & \frac{\epsilon^{3/2}}{2 \cdot (192) \sigma^{1/2}} m. \end{array}$$* *Proof.* Suppose that [\[HF2SmallGrad\]](#HF2SmallGrad){reference-type="eqref" reference="HF2SmallGrad"} does not hold, hence $$\label{HF2BigGrad} \begin{array}{rcl} \Delta_t & > & \epsilon, \qquad 1 \leq t \leq m. \end{array}$$ In view of the extra inexactness condition [\[FOMCond2\]](#FOMCond2){reference-type="eqref" reference="FOMCond2"}, from Theorem [Theorem 4](#ThStep){reference-type="ref" reference="ThStep"} with $\delta_g := 0$ and $z := x = x_0$ we obtain the following guarantee for one step: $$\begin{array}{rcl} F(x_t) - F(x_{t + 1}) & \overset{\eqref{InexactProgressSOThStep}}{\geq} & \frac{\sigma}{48}\|x_{t + 1} - x_t\|^3 + \frac{1}{192 \sigma^{1/2}} \Delta_{t + 1}^{3/2} - \frac{2^9}{3 \sigma^2} \delta_B^3 - \frac{2^9 L^3}{3\sigma^2}\|x_t - x_0\|^3 \\ \\ & \overset{\eqref{DeltaBBound}, \eqref{HF2BigGrad}}{\geq} & \frac{\epsilon^{3/2}}{2(192)\sigma^{1/2}} + \frac{\sigma}{48} \|x_{t + 1} - x_{t}\|^3 - \frac{2^9 L^3}{3 \sigma^2} \|x_{t} - x_0\|^3. \end{array}$$ It remains to sum up these inequalities for all $0 \leq t \leq m - 1$ and apply the same reasoning as in Lemma [Lemma 8](#LemmaHF){reference-type="ref" reference="LemmaHF"} to get [\[FOFuncProgress2\]](#FOFuncProgress2){reference-type="eqref" reference="FOFuncProgress2"}. ◻ We are ready to present our whole algorithm, which is a first-order implementation of CNM. It uses the procedure CubicSteps as its basic subroutine. 
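The gradient-difference Hessian approximation that Step 1.1 of the method relies on (the construction of Lemma 5) can be sketched as follows; the function name `fd_hessian` is ours:

```python
import numpy as np

def fd_hessian(grad_f, x, h):
    """Hessian approximation from n extra gradient calls (cf. Lemma 5):
    column i of A is (grad_f(x + h e_i) - grad_f(x)) / h, and the
    symmetrization B = (A + A^T)/2 satisfies
    ||B - Hess f(x)|| <= sqrt(n) * L * h / 2 under assumption A1."""
    n = x.size
    g0 = grad_f(x)
    A = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        A[:, i] = (grad_f(x + e) - g0) / h
    return 0.5 * (A + A.T)
```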
**Step 0.** Given $x_0 \in Q$, $\tau_0 > 0$, $\epsilon > 0$, $m \in \mathbb{N} \setminus \{ 0 \}$, set $k := 0$. **Step 1.** Set $\ell := 0$. **Step 1.1.** Using $$\label{AlgFOSigm} \begin{array}{rcl} \sigma_{k, \ell} & = & 2^4 \bigl( \frac{2}{3} \bigr)^{1/3} (2^{\ell} \tau_k) m \end{array}$$ and $$\label{AlgFOH} \begin{array}{rcl} h_{k, \ell} & = & \Bigl[ \frac{3 \sigma_{k, \ell}^{3/2} \epsilon^{3/2}}{2^7(192)n^{3/2} (2^{\ell} \tau_k)^3} \Bigr]^{1/3} \end{array}$$ compute $B_{k, \ell} = \frac{1}{2}\bigl(A_{k, \ell} + A_{k, \ell}^{\top} \bigr)$ with $$\label{AlgFOA} \begin{array}{rcl} A_{k, \ell} & = & \Bigl[ \frac{\nabla f(x_k + h_{k, \ell} e_1) - \nabla f(x_k)}{h_{k, \ell}} \; , \; \ldots \; , \; \frac{\nabla f(x_k + h_{k, \ell} e_n) - \nabla f(x_k)}{h_{k, \ell}} \Bigr]. \end{array}$$ **Step 1.2.** Perform $m$ inexact Cubic steps using the same Hessian approximation: $$\begin{array}{rcl} (\hat{x}_{k, \ell}, \alpha_{k, \ell}) & := & \text{\ttfamily CubicSteps}(x_k, \, B_{k, \ell}, \, \sigma_{k, \ell}, \, m, \, \epsilon). \end{array}$$ **Step 2.** If $\alpha_{k, \ell} = \text{\ttfamily halt}$, then set $\ell := \ell + 1$ and go to Step 1.1. **Step 3.** Set $x_{k + 1} = \hat{x}_{k, \ell}$. **Step 4.** If $\alpha_{k, \ell} = \text{\ttfamily success}$, then $\tau_{k + 1} = \max\{ \tau_0, \, 2^{\ell_k - 1} \tau_k\}$, where $\ell_k$ denotes the final value of $\ell$ at iteration $k$; set $k := k + 1$ and go to Step 1. Stop otherwise. Due to Lemmas [Lemma 8](#LemmaHF){reference-type="ref" reference="LemmaHF"} and [Lemma 9](#LemmaHF2){reference-type="ref" reference="LemmaHF2"}, this algorithm is well-defined and its inner adaptive search loop (Steps 1-2) always terminates with a finite value of $\ell$, after which the method continues to Step 3. In the following lemmas, we show how to bound the maximal value of the regularization parameter and the total number of inner-loop steps in our algorithm. **Lemma 10**. 
*Suppose that A1 holds and let $\{ \tau_k \}_{k \geq 0}$ be generated by Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}. Then $$\label{HFTauBound} \begin{array}{rcl} \tau_k & \leq & \max\bigl\{ \tau_0, L \bigr\}, \qquad \forall k \geq 0. \end{array}$$* Clearly, [\[HFTauBound\]](#HFTauBound){reference-type="eqref" reference="HFTauBound"} is true for $k = 0$. Suppose that it is also true for some $k \geq 0$. If $\ell_k = 0$, then it follows from the definition of $\tau_{k + 1}$ and from the induction assumption that $$\begin{array}{rcl} \tau_{k + 1} & = & \max\bigl\{ \tau_0, \tfrac{1}{2} \tau_k \bigr\} \;\; \leq \;\; \max\bigl\{ \tau_0, \tau_k \bigr\} \;\; \leq \;\; \max\bigl\{ \tau_0, L \bigr\}, \end{array}$$ and so [\[HFTauBound\]](#HFTauBound){reference-type="eqref" reference="HFTauBound"} is true for $k + 1$. Now, suppose that $\ell_k > 0$. In this case, we must also have $$\begin{array}{rcl} \tau_{k + 1} & \leq & \max\bigl\{ \tau_0, L \bigr\}, \end{array}$$ since otherwise we would have $$\begin{array}{rcl} 2^{\ell_k - 1} \tau_k & > & L \end{array}$$ and by [\[AlgFOSigm\]](#AlgFOSigm){reference-type="eqref" reference="AlgFOSigm"}, [\[AlgFOH\]](#AlgFOH){reference-type="eqref" reference="AlgFOH"}, [\[AlgFOA\]](#AlgFOA){reference-type="eqref" reference="AlgFOA"} and Lemma [Lemma 8](#LemmaHF){reference-type="ref" reference="LemmaHF"}, the inner procedure CubicSteps would return $\alpha_{k, \ell} \in \{ \text{\ttfamily success}, \text{ \ttfamily solution}\}$ for some $\ell \leq \ell_k - 1$, contradicting the definition of $\ell_k$. Thus, [\[HFTauBound\]](#HFTauBound){reference-type="eqref" reference="HFTauBound"} is also true for $k + 1$ in this case. ◻ **Lemma 11**. *Suppose that A1 holds and let $\text{FO}_{T}$ be the total number of function and gradient evaluations of $f(\cdot)$ performed by Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"} during the first $T$ iterations. 
Then $$\label{HFNumberBound} \begin{array}{rcl} \text{FO}_{T} & \leq & \bigl[ 5 + 2(n + m) \bigr] \cdot T \; + \; \bigl[ 2 + n + m\bigr] \cdot \log_2 \frac{\max\{ \tau_0, L \}}{\tau_0}. \end{array}$$* The total number of function and gradient evaluations performed at the $k$-th iteration of Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"} is bounded from above by $$\begin{array}{rcl} 1 + \bigl[ (n + 1) + (m + 1) \bigr] \cdot (\ell_k + 1). \end{array}$$ Since $\tau_{k + 1} = \max\{ \tau_0, \, 2^{\ell_k - 1} \tau_k \} \geq 2^{\ell_k - 1} \tau_k$, we have $$\begin{array}{rcl} \ell_k - 1 & \leq & \log_2 \tau_{k + 1} - \log_2 \tau_k, \end{array}$$ and so $$\begin{array}{rcl} 1 + \bigl[ (n + 1) + (m + 1) \bigr] \cdot (\ell_k + 1) & \leq & 1 + \bigl[ 2 + n + m \bigr] \cdot (2 + \log_2 \tau_{k + 1} - \log_2 \tau_k). \end{array}$$ Thus, $$\begin{array}{rcl} \text{FO}_{T} & \leq & \sum\limits_{k = 0}^{T - 1} \Bigl( 1 + \bigl[ 2 + n + m \bigr] \cdot (2 + \log_2 \tau_{k + 1} - \log_2 \tau_k) \Bigr) \\ \\ & = & T + \bigl[ 2 + n + m\bigr] \cdot 2T + \bigl[ 2 + n + m\bigr] \cdot \log_2 \frac{\tau_{T}}{\tau_0} \\ \\ & \leq & \bigl[5 + 2(n + m) \bigr] \cdot T + \bigl[ 2 + n + m \bigr] \cdot \log_2 \frac{\max\{\tau_0, L \}}{\tau_0}, \end{array}$$ where the last inequality follows from Lemma [Lemma 10](#LemmaHFTau){reference-type="ref" reference="LemmaHFTau"}. ◻ We are ready to establish the global complexity bound for our Hessian-free CNM. **Theorem 12**. *Suppose that A1 holds and let $\{ x_k \}_{k \geq 1}$ be generated by Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}. Let $T(\epsilon) \leq +\infty$ be the first iteration index such that $\| \nabla f(x_{T(\epsilon)}) + \psi'( x_{T(\epsilon)} ) \| \leq \epsilon$, for a certain $\psi'( x_{T(\epsilon)} ) \in \partial \psi( x_{T(\epsilon)} )$.
We have $$\label{HFTBound} \begin{array}{rcl} T(\epsilon) & \leq & \frac{(384) 2^{5/2} (\frac{2}{3})^{1/6} \max\{ \tau_0, L \}^{1/2} (F(x_0) - F^{\star}) }{ \sqrt{m} } \cdot \epsilon^{-3/2} \end{array}$$ and, consequently, the total number of function and gradient evaluations is bounded as $$\label{HFTotBound} \begin{array}{rcl} \text{FO}_{T(\epsilon)} & \leq & \frac{ [5 + 2(n + m)] }{\sqrt{m}} (384) 2^{5/2} \bigl( \frac{2}{3} \bigr)^{1/6} \max\{ \tau_0, L \}^{1/2} ( F(x_0) - F^{\star} ) \cdot \epsilon^{-3/2} \\ \\ & & \qquad + \; [2 + n + m] \log_2 \frac{\max\{ \tau_0, L \}}{\tau_0}. \end{array}$$* By the definition of $T(\epsilon)$, we have $$\begin{array}{rcl} \| \nabla f(x_k) + \psi'(x_k) \| & \geq & \epsilon, \quad \text{for} \quad k = 0, \ldots, T(\epsilon) - 1, \quad \text{and} \quad \forall \psi'(x_k) \in \partial \psi(x_k). \end{array}$$ Consequently, by Lemma [Lemma 8](#LemmaHF){reference-type="ref" reference="LemmaHF"} we have $$\label{HFFuncGlob} \begin{array}{rcl} F(x_k) - F(x_{k + 1}) & \geq & \frac{\epsilon^{3/2}}{(384) \sigma_{k, \ell_k}^{1/2}} \quad \text{for} \quad k = 0, \ldots , T(\epsilon) - 1. \end{array}$$ Moreover, by Lemma [Lemma 10](#LemmaHFTau){reference-type="ref" reference="LemmaHFTau"} we also have $$\label{HFSigmaBound} \begin{array}{rcl} \sigma_{k, \ell_k} & = & 2^4 \bigl( \frac{2}{3} \bigr)^{1/3} m (2^{\ell_k} \tau_k) \;\; \leq \;\; 2^4 \bigl( \frac{2}{3} \bigr)^{1/3} m (2 \tau_{k + 1}) \;\; \leq \;\; 2^5 \bigl( \frac{2}{3} \bigr)^{1/3} m \cdot \max\{ \tau_0, L \}. \end{array}$$ Combining [\[HFFuncGlob\]](#HFFuncGlob){reference-type="eqref" reference="HFFuncGlob"} and [\[HFSigmaBound\]](#HFSigmaBound){reference-type="eqref" reference="HFSigmaBound"}, it follows that $$\begin{array}{rcl} F(x_k) - F(x_{k + 1}) & \geq & \frac{\epsilon^{3/2} \sqrt{m}}{(384) 2^{5/2} (\frac{2}{3})^{1/6} \max\{ \tau_0, L \}^{1/2}}, \quad \text{for} \quad k = 0, \ldots, T(\epsilon) - 1.
\end{array}$$ Summing up these inequalities and using the lower bound $F^{\star}$ on $F(\cdot)$, we get $$\begin{array}{rcl} F(x_0) - F^{\star} & \geq & F(x_0) - F(x_{T(\epsilon)}) \\ \\ & = & \sum\limits_{k = 0}^{T(\epsilon) - 1} \bigl( F(x_k) - F(x_{k + 1}) \bigr) \\ \\ & \geq & \frac{\epsilon^{3/2} \sqrt{m} }{ (384) 2^{5/2} (\frac{2}{3})^{1/6} \max\{ \tau_0, L \}^{1/2}} T(\epsilon) \end{array}$$ which implies [\[HFTBound\]](#HFTBound){reference-type="eqref" reference="HFTBound"}. Finally, combining [\[HFTBound\]](#HFTBound){reference-type="eqref" reference="HFTBound"} and Lemma [Lemma 11](#LemmaHFNumber){reference-type="ref" reference="LemmaHFNumber"} we obtain [\[HFTotBound\]](#HFTotBound){reference-type="eqref" reference="HFTotBound"}. ◻ **Corollary 13**. *By taking $m := n$, it follows from Theorem [Theorem 12](#TheoremHF){reference-type="ref" reference="TheoremHF"} that Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"} needs at most $$\begin{array}{c} \mathcal{O}\bigl( n^{1/2} \epsilon^{-3/2} + n \bigr) \end{array}$$ total function and gradient evaluations of $f(\cdot)$ to generate $x_k$ such that $\| \nabla f(x_k) + \psi'(x_k) \| \leq \epsilon$.* Let us establish a similar complexity result for reaching *second-order* stationary points by Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}, providing a guarantee on the values of $\xi(\cdot)$ (see definition [\[XiDef\]](#XiDef){reference-type="eqref" reference="XiDef"}). **Theorem 14**. *Suppose that A1 holds.
Let $x_{k, \ell}(t)$ be the $t$-th iterate of Algorithm [\[alg:HessianFree\]](#alg:HessianFree){reference-type="ref" reference="alg:HessianFree"} with extra condition [\[FOMCond2\]](#FOMCond2){reference-type="eqref" reference="FOMCond2"} and without the stop in Step 3, applied at the $k$-th iteration of Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}. Let $T(\epsilon) \leq +\infty$ be the first iteration index such that $$\label{HFSOGuarantee} \begin{array}{rcl} \max\Bigl\{ \, \| \nabla f(x_{T( \epsilon ), \ell}(t) ) + \psi'(x_{T( \epsilon ), \ell}(t) ) \|, \; \frac{1}{2^2 3^3 \cdot m \cdot \max\{\tau_0, L \}} \bigl[ \xi(x_{T( \epsilon ), \ell}(t) ) \bigr]^2 \, \Bigr\} & \leq & \epsilon, \end{array}$$ for some $\ell \geq 0$ and $t \in \{0, \ldots, m\}$. Then, bounds [\[HFTBound\]](#HFTBound){reference-type="eqref" reference="HFTBound"} and [\[HFTotBound\]](#HFTotBound){reference-type="eqref" reference="HFTotBound"} hold.* *Proof.* The proof is similar to that of Theorem [Theorem 12](#TheoremHF){reference-type="ref" reference="TheoremHF"}, using Lemma [Lemma 9](#LemmaHF2){reference-type="ref" reference="LemmaHF2"} instead of Lemma [Lemma 8](#LemmaHF){reference-type="ref" reference="LemmaHF"}. ◻ Therefore, we conclude that our Hessian-free scheme achieves the second-order stationary guarantee [\[HFSOGuarantee\]](#HFSOGuarantee){reference-type="eqref" reference="HFSOGuarantee"}, even though the method never computes *any second-order information* directly, relying solely on the first-order oracle for $f(\cdot)$. # Zeroth-Order CNM {#SectionZO} In this section, we present the *zeroth-order* implementation of the Cubic Newton Method, which uses only *function evaluations* of $f(\cdot)$ to solve our optimization problem [\[MainProblem\]](#MainProblem){reference-type="eqref" reference="MainProblem"}. Hence, we will use finite-difference approximations *both* for the Hessian and for the gradients.
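As a rough illustration of these two building blocks (a minimal Python sketch with hypothetical function names, not taken from the paper), the central-difference gradient estimate and the symmetrized function-value-based Hessian estimate used in this section can be written as follows:

```python
import numpy as np

def fd_gradient(f, x, h):
    # Central differences: [g]_i = (f(x + h e_i) - f(x - h e_i)) / (2h),
    # i.e. 2n function evaluations per gradient estimate.
    n = x.size
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def fd_hessian(f, x, h):
    # Second-order differences from function values only:
    # A_ij = (f(x + h e_i + h e_j) - f(x + h e_i) - f(x + h e_j) + f(x)) / h^2,
    # then symmetrize: B = (A + A^T) / 2.  Costs O(n^2) evaluations.
    n = x.size
    E = h * np.eye(n)
    fx = f(x)
    fi = np.array([f(x + E[i]) for i in range(n)])
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = (f(x + E[i] + E[j]) - fi[i] - fi[j] + fx) / h**2
    return 0.5 * (A + A.T)
```

On a quadratic objective both estimates are exact up to rounding; the specific step sizes $h_g$ and $h$ prescribed below are chosen so that the resulting errors $\delta_g$ and $\delta_B$ stay small enough for the convergence analysis.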
Note that approximating the Hessian matrix [\[HessZOBDef\]](#HessZOBDef){reference-type="eqref" reference="HessZOBDef"} remains *$n$ times more expensive* than approximating the gradient vector [\[GradZODef\]](#GradZODef){reference-type="eqref" reference="GradZODef"}. Therefore, we keep using each approximation $B_{k, \ell} \approx \nabla^2 f(x_k)$ for $m \geq 1$ consecutive inexact cubic steps, while updating the gradient estimates at each step. In what follows, we show that the optimal schedule is $\boxed{m := n}$, which gives the best zeroth-order oracle complexity for our scheme. Let us denote by $(\hat{x}, \alpha) = \texttt{ZeroOrderCubicSteps}(x, B, \sigma, m, \epsilon)$ an auxiliary procedure that performs $m$ inexact Cubic Newton steps [\[CompositeSubproblem\]](#CompositeSubproblem){reference-type="eqref" reference="CompositeSubproblem"}, starting from a point $x \in Q$, using the same given matrix $B = B^{\top}$, and estimating the new gradients with finite differences. We use $\sigma > 0$ as a regularization parameter, and $\epsilon > 0$ is the target accuracy [\[InexactSolution\]](#InexactSolution){reference-type="eqref" reference="InexactSolution"}. The procedure returns the last computed iterate $\hat{x}$ and a status variable $$\begin{array}{rcl} \alpha & \in & \{ \texttt{success}, \, \texttt{halt} \}, \end{array}$$ which indicates whether the progress condition was satisfied for all steps or not. We define this procedure formally as Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"}. **Step 0.** Set $x_0 := x$ and $t := 0$. **Step 1.** If $t = m$ then stop and **return** $(x_t, \, \text{\ttfamily success})$.
**Step 2.** For $$\label{ZO_hgdef} \begin{array}{rcl} h_g & = & \frac{1}{3^{1/3}} \Bigl[ \frac{\epsilon m }{\sigma n^{1/2} } \Bigr]^{1/2} \end{array}$$ compute $g_t \in \mathbb{R}^n$ by $$\label{ZO_gtdef} \begin{array}{rcl} \bigl[ g_t \bigr]^{(i)} & = & \frac{f(x_t + h_g e_i) - f(x_t - h_g e_i)}{2 h_g}, \quad i = 1, \ldots, n. \end{array}$$ **Step 3.** Compute $x_{t + 1}$ as an approximate solution to the subproblem $$\begin{array}{c} \min\limits_{y \in Q} \Bigl\{ \, M_{x_{t}, \sigma}(y) + \psi(y) \, \Bigr\}, \qquad \text{where} \end{array}$$ $$\begin{array}{rcl} M_{x_{t}, \sigma}(y) & \equiv & f(x_t) + {\langle}g_t, y - x_t {\rangle} + \frac{1}{2}{\langle}B(y - x_t), y - x_t {\rangle} + \frac{\sigma}{6}\|y - x_t\|^3 \end{array}$$ such that $$\label{ZOMCond} \begin{array}{rcl} M_{x_{t},\sigma}(x_{t + 1}) + \psi(x_{t + 1}) &\leq & F(x_{t}) \quad \text{and} \\ \\ \|\nabla M_{x_{t}, \sigma} ( x_{t + 1} ) + \psi'(x_{t + 1}) \| & \leq & \frac{\sigma}{4} \|x_{t + 1} - x_t \|^{2} \quad \text{for some} \;\; \psi'(x_{t + 1}) \in \partial \psi(x_{t + 1}), \end{array}$$ and (**optionally**, if $\psi$ is twice differentiable) such that $$\label{ZOMCon2} \begin{array}{rcl} B + \sigma \|x_{t + 1} - x_t \| I + \nabla^2 \psi(x_{t + 1}) & \succeq & 0. \end{array}$$ **Step 4.** If $F(x_0) - F(x_{t + 1}) \geq \frac{\epsilon^{3/2}}{384 \sigma^{1/2}} (t + 1)$ holds then set $t := t + 1$ and go to Step 1. Otherwise, stop and **return** $(x_{t + 1}, \text{\ttfamily halt})$. We can prove the following main result about this procedure. **Lemma 15**. *Suppose that A1 holds. 
Given $x \in Q$, $\epsilon > 0$, $\sigma > 0$, and $m \in \mathbb{N} \setminus \{ 0 \}$, let $(\hat{x}, \alpha)$ be the corresponding output of Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"} with $B = \frac{1}{2}(A + A^{\top})$, where $$\label{ZOADef} \begin{array}{rcl} A^{(i, j)} & = & \frac{f(x + h e_i + h e_j) - f(x + h e_i) - f(x + h e_j) + f(x)}{h^2}, \quad i, j = 1, \ldots, n, \end{array}$$ for some $h > 0$. If $$\label{SigmaHBZOCondition} \begin{array}{rcl} \sigma & \geq & 2^4 \bigl( \frac{2}{3} \bigr)^{1/3} mL \quad \text{and} \quad h \;\; \leq \;\; \Bigl[ \frac{3^4 \sigma^{3/2} \epsilon^{3/2}}{ 2^{14} (192) n^3 L^3 } \Bigr]^{1/3}, \end{array}$$ then, for the iterations $\{ x_t \}_{t = 1}^{m}$ of Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"}, we have either $$\label{ZOMinGrad} \begin{array}{rcl} \min\limits_{t = 1, \ldots, m} \| \nabla f(x_t) + \psi'(x_t) \| & \leq & \epsilon, \end{array}$$ or $$\label{ZOFuncProgress} \begin{array}{rcl} F(x) - F(\hat{x}) & \geq & \frac{\epsilon^{3/2}}{2 (192) \sigma^{1/2}} m. \end{array}$$* By [\[ZO_gtdef\]](#ZO_gtdef){reference-type="eqref" reference="ZO_gtdef"} and Lemma [Lemma 6](#LemmaGradZO){reference-type="ref" reference="LemmaGradZO"} we have $$\label{ZO_gtbound} \begin{array}{rcl} \| g_t - \nabla f(x_t) \| & \leq & \delta_g \end{array}$$ for $$\label{DeltaG_ZO_expr} \begin{array}{rcl} \delta_g & = & \frac{\sqrt{n} L}{6} h_g^2.
\end{array}$$ In view of [\[ZO_hgdef\]](#ZO_hgdef){reference-type="eqref" reference="ZO_hgdef"} and the assumption [\[SigmaHBZOCondition\]](#SigmaHBZOCondition){reference-type="eqref" reference="SigmaHBZOCondition"} it follows that $$\label{ZO_deltaG_bound} \begin{array}{rcl} \frac{3}{\sigma^{1/2}} \cdot \delta_g^{3/2} & \overset{\eqref{DeltaG_ZO_expr}}{=} & \frac{3}{\sigma^{1/2}} \cdot \frac{n^{3/4} L^{3/2}}{6^{3/2}} h_g^3 \;\; \overset{\eqref{ZO_hgdef}}{=} \;\; \frac{\epsilon^{3/2}}{ 2^{8} 3 \sigma^{1/2}} \cdot \frac{1}{\sigma^{3/2}} \cdot \frac{ 2^{13/2} m^{3/2} L^{3/2}}{3^{1/2}} \\ \\ & \overset{\eqref{SigmaHBZOCondition}}{\leq} & \frac{\epsilon^{3/2}}{ 4(192) \sigma^{1/2}}. \end{array}$$ On the other hand, by [\[ZOADef\]](#ZOADef){reference-type="eqref" reference="ZOADef"} and Lemma [Lemma 7](#LemmaHessZO){reference-type="ref" reference="LemmaHessZO"} we have $$\label{ZO_Bbound} \begin{array}{rcl} \| B - \nabla^2 f(x) \| & \leq & \delta_B \end{array}$$ for $$\begin{array}{rcl} \delta_B & = & \frac{2n L}{3} h. \end{array}$$ Then, in view of [\[SigmaHBZOCondition\]](#SigmaHBZOCondition){reference-type="eqref" reference="SigmaHBZOCondition"}, it follows that $$\label{ZO_deltaB_bound} \begin{array}{rcl} \frac{2^9}{3 \sigma^2} \cdot \delta_B^3 & = & \frac{2^9}{3 \sigma^2} \cdot \frac{2^3 n^3 L^3}{3^3} \cdot h^3 \;\; \leq \;\; \frac{2^9}{3 \sigma^2} \cdot \frac{2^3 n^3 L^3}{3^3} \cdot \frac{3^4 \sigma^{3/2} \epsilon^{3/2} }{2^{14} (192) n^3 L^3} \\ \\ & = & \frac{\epsilon^{3/2}}{ 4(192) \sigma^{1/2} } \end{array}$$ Combining [\[ZO_deltaG_bound\]](#ZO_deltaG_bound){reference-type="eqref" reference="ZO_deltaG_bound"} and [\[ZO_deltaB_bound\]](#ZO_deltaB_bound){reference-type="eqref" reference="ZO_deltaB_bound"}, we have $$\label{ZO_deltas_bound} \begin{array}{rcl} \frac{2^9}{3 \sigma^2} \delta_B^3 + \frac{3}{\sigma^{1/2}} \delta_g^{3/2} & \leq & \frac{\epsilon^{3/2}}{2(192) \sigma^{1/2}}. 
\end{array}$$ Then, by [\[ZOMCond\]](#ZOMCond){reference-type="eqref" reference="ZOMCond"}, [\[ZO_gtbound\]](#ZO_gtbound){reference-type="eqref" reference="ZO_gtbound"}, [\[ZO_Bbound\]](#ZO_Bbound){reference-type="eqref" reference="ZO_Bbound"}, [\[ZO_deltas_bound\]](#ZO_deltas_bound){reference-type="eqref" reference="ZO_deltas_bound"} and Theorem [Theorem 4](#ThStep){reference-type="ref" reference="ThStep"} with $z = x$, we obtain $$\label{ZO_func_prog} \begin{array}{rcl} F(x_{t - 1}) - F(x_t) & \geq & \frac{\sigma}{48} \|x_t - x_{t - 1}\|^3 + \frac{1}{192 \sigma^{1/2}} \| \nabla f(x_t) + \psi'(x_t) \|^{3/2} \\ \\ & & \qquad \; - \; \frac{1}{2(192)\sigma^{1/2}} \epsilon^{3/2} - \frac{2^9 L^3}{3\sigma^2} \|x_{t - 1} - x_0\|^3, \end{array}$$ for $t = 1, \ldots, m$. Consequently, if [\[ZOMinGrad\]](#ZOMinGrad){reference-type="eqref" reference="ZOMinGrad"} is not true, then $$\begin{array}{rcl} F(x_{t - 1}) - F(x_t) & \geq & \frac{\sigma}{48} \|x_t - x_{t - 1} \|^3 + \frac{1}{2(192) \sigma^{1/2}} \epsilon^{3/2} - \frac{2^9 L^3}{3 \sigma^2} \|x_{t - 1} - x_0\|^3 \end{array}$$ for $t = 1, \ldots, m$. Finally, summing up these inequalities and using Lemma B.1 in [@doikov2023second] for our choice [\[SigmaHBZOCondition\]](#SigmaHBZOCondition){reference-type="eqref" reference="SigmaHBZOCondition"} of $\sigma$, we conclude that [\[ZOFuncProgress\]](#ZOFuncProgress){reference-type="eqref" reference="ZOFuncProgress"} is true. ◻ Let us formulate our new optimization method for solving problem [\[MainProblem\]](#MainProblem){reference-type="eqref" reference="MainProblem"}, which is the zeroth-order implementation of CNM. **Step 0.** Given $x_0 \in Q$, $\tau_0 > 0$, $\epsilon > 0$, $m \in \mathbb{N} \setminus \{ 0 \}$, set $k := 0$. **Step 1.** Set $\ell := 0$.
**Step 1.1.** Using $$\label{AlgZOSigm} \begin{array}{rcl} \sigma_{k, \ell} & = & 2^4 \bigl( \frac{2}{3} \bigr)^{1/3} (2^{\ell} \tau_k) m \end{array}$$ and $$\label{AlgZOH} \begin{array}{rcl} h_{k, \ell} & = & \Bigl[ \frac{3^4 \sigma_{k, \ell}^{3/2} \epsilon^{3/2}}{ 2^{14}(192)n^{3} (2^{\ell} \tau_k)^3} \Bigr]^{1/3} \end{array}$$ compute $B_{k, \ell} = \frac{1}{2}\bigl(A_{k, \ell} + A_{k, \ell}^{\top} \bigr)$ with $$\label{AlgZOA} \begin{array}{rcl} \bigl[ A_{k, \ell} \bigr]^{(i, j)} & = & \frac{f(x_k + h_{k, \ell} e_i + h_{k, \ell} e_j ) - f(x_k + h_{k, \ell} e_i) - f(x_k + h_{k, \ell} e_j) + f(x_k) }{h_{k, \ell}^2} \end{array}$$ for $i, j = 1, \ldots, n$. **Step 1.2.** Perform $m$ inexact zeroth-order Cubic steps with the same Hessian approximation: $$\begin{array}{rcl} (\hat{x}_{k, \ell}, \alpha_{k, \ell}) & = & \texttt{ZeroOrderCubicSteps}(x_k, B_{k, \ell}, \sigma_{k, \ell}, m, \epsilon). \end{array}$$ **Step 2.** If $\alpha_{k, \ell} = \texttt{halt}$, then set $\ell := \ell + 1$ and go to Step 1.1. **Step 3.** Set $x_{k + 1} = \hat{x}_{k, \ell_k}$, $\tau_{k + 1} = \max\{ \tau_0, \, 2^{\ell_k - 1} \tau_k\}$, $k := k + 1$, and go to Step 1. Employing a stronger condition [\[ZOMCon2\]](#ZOMCon2){reference-type="eqref" reference="ZOMCon2"} on the solution to the subproblem, we can also justify the progress of our procedure in terms of the *second-order stationarity measure*. **Lemma 16**. *Consider the sequence $\{ x_t \}_{t = 1}^m$ generated by Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"} with extra condition [\[ZOMCon2\]](#ZOMCon2){reference-type="eqref" reference="ZOMCon2"} on the inexact solution to the subproblem.
Then, under the assumptions of Lemma [Lemma 15](#LemmaZOStep){reference-type="ref" reference="LemmaZOStep"}, we have either $$\label{ZOMinGrad2} \begin{array}{rcl} \min\limits_{ 1 \leq t \leq m } \biggl[ \, \Delta_t \; \stackrel{\mathrm{def}}{=}\; \max\Bigl\{ \| \nabla f(x_t) + \psi'(x_t) \|, \; \frac{1}{\sigma} \bigl( \frac{2}{3} \bigr)^{\frac{10}{3}} \bigl[ \xi(x_t) \big]^2 \Bigr\} \, \biggr] & \leq & \epsilon, \end{array}$$ or $$\label{ZOFuncProgress2} \begin{array}{rcl} F(x) - F(\hat{x}) & \geq & \frac{\epsilon^{3/2}}{2 \cdot (192) \sigma^{1/2}} m. \end{array}$$* The proof follows the reasoning of Lemma [Lemma 15](#LemmaZOStep){reference-type="ref" reference="LemmaZOStep"}, using the stronger one-step guarantee provided by Theorem [Theorem 4](#ThStep){reference-type="ref" reference="ThStep"}. ◻ **Lemma 17**. *Suppose that A1 holds and let $\{ \tau_k \}_{k \geq 0}$ be generated by Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"}. Then $$\label{AlgZOTauKBound} \begin{array}{rcl} \tau_k & \leq & \max\{ \tau_0, L \}, \qquad \forall k \geq 0. \end{array}$$* It follows exactly as in the proof of Lemma [Lemma 10](#LemmaHFTau){reference-type="ref" reference="LemmaHFTau"}, using Lemma [Lemma 15](#LemmaZOStep){reference-type="ref" reference="LemmaZOStep"} to conclude that $$\begin{array}{rcl} \tau_{k + 1} & \leq & \max\{ \tau_0, L \} \end{array}$$ when $\ell_k > 0$. ◻ **Lemma 18**. *Suppose that A1 holds and let $\text{ZO}_{T}$ be the total number of function evaluations of $f(\cdot)$ performed by Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} during the first $T$ iterations. Then, $$\begin{array}{rcl} \text{ZO}_{T} & \leq & \bigl[4 + 4 mn + 6n^2\bigr] \cdot T \; + \; \bigl[2 + 2mn + 3n^2\bigr] \cdot \log_2\frac{\max\{ \tau_0, L \}}{\tau_0}.
\end{array}$$* The number of function evaluations performed by Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} at the $k$-th iteration (including those performed by Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"} in Step 2) is bounded from above by $$\begin{array}{rcl} [2 + 2 mn + 3n^2] \cdot (\ell_k + 1). \end{array}$$ Since $\tau_{k + 1} = \max\{ \tau_0, \, 2^{\ell_k - 1} \tau_k \} \geq 2^{\ell_k - 1} \tau_k$, we have $$\begin{array}{rcl} \ell_k + 1 & \leq & 2 + \log_2 \tau_{k + 1} - \log_2 \tau_k. \end{array}$$ Thus, $$\begin{array}{rcl} \text{ZO}_{T} & \leq & \sum\limits_{k = 0}^{T - 1} [2 + 2mn + 3n^2] \cdot (2 + \log_2 \tau_{k + 1} - \log_2 \tau_k) \\ \\ & = & [2 + 2mn + 3n^2] \cdot ( 2T + \log_2 \tau_T - \log_2 \tau_0 ) \\ \\ & \leq & [2 + 2mn + 3n^2] \cdot \Bigl( 2T + \log_2 \frac{\max\{ \tau_0, L \}}{\tau_0} \Bigr), \end{array}$$ where the last inequality follows from Lemma [Lemma 17](#LemmaZOTau){reference-type="ref" reference="LemmaZOTau"}. ◻ We prove the following main result. **Theorem 19**. *Suppose that A1 holds. Let $x_{k, \ell}(t)$ be the $t$-th iterate of Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"} applied at the $k$-th iteration of Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} in the $\ell$-th inner loop. Let $T(\epsilon) \leq +\infty$ be the first iteration index such that $$\begin{array}{rcl} \| \nabla f( x_{T(\epsilon), \ell}(t) ) + \psi'( x_{T(\epsilon), \ell}(t) ) \| & \leq & \epsilon \end{array}$$ for some $\ell \geq 0$ and $t \in \{ 0, \ldots, m \}$.
Then, $$\label{ZOTBound} \begin{array}{rcl} T(\epsilon) & \leq & \frac{(384) 2^{5/2} (\frac{2}{3})^{1/6} \max\{ \tau_0, L \}^{1/2} (F(x_0) - F^{\star}) }{\sqrt{m}} \epsilon^{-3/2} \end{array}$$ and, consequently, the total number of function evaluations is bounded as $$\label{ZOTBoundTotal} \begin{array}{rcl} \text{ZO}_{T(\epsilon)} & \leq & \mathcal{O}\Bigl( \frac{mn + n^2}{\sqrt{m}} \max\{ \tau_0, L \}^{1/2} (F(x_0) - F^{\star}) \cdot \epsilon^{-3/2} + (mn + 3n^2) \log_2 \frac{\max\{ \tau_0, L \}}{\tau_0} \Bigr). \end{array}$$* Similarly to the proof of Theorem [Theorem 12](#TheoremHF){reference-type="ref" reference="TheoremHF"}, we get [\[ZOTBound\]](#ZOTBound){reference-type="eqref" reference="ZOTBound"} from Lemma [Lemma 15](#LemmaZOStep){reference-type="ref" reference="LemmaZOStep"} and Lemma [Lemma 17](#LemmaZOTau){reference-type="ref" reference="LemmaZOTau"}. Then, combining [\[ZOTBound\]](#ZOTBound){reference-type="eqref" reference="ZOTBound"} with Lemma [Lemma 18](#LemmaZONumber){reference-type="ref" reference="LemmaZONumber"}, we get [\[ZOTBoundTotal\]](#ZOTBoundTotal){reference-type="eqref" reference="ZOTBoundTotal"}. ◻ **Corollary 20**. *By taking $\boxed{m := n}$, it follows from Theorem [Theorem 19](#TheoremZO){reference-type="ref" reference="TheoremZO"} that Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} needs at most $$\begin{array}{c} \mathcal{O}\bigl( n^{3/2} \epsilon^{-3/2} + n^2 \bigr) \end{array}$$ function evaluations of $f(\cdot)$ to find a point $\bar{x}$ such that $\| \nabla f(\bar{x}) + \psi'(\bar{x}) \| \leq \epsilon$.* Finally, we can establish the convergence result in terms of *second-order stationary points*. The proof is identical, replacing Lemma [Lemma 15](#LemmaZOStep){reference-type="ref" reference="LemmaZOStep"} with Lemma [Lemma 16](#LemmaZOStep2){reference-type="ref" reference="LemmaZOStep2"}. **Theorem 21**. *Suppose that A1 holds.
Let $x_{k, \ell}(t)$ be the $t$-th iterate of Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"} with extra condition [\[ZOMCon2\]](#ZOMCon2){reference-type="eqref" reference="ZOMCon2"} on the inexact solution to the subproblem, applied at the $k$-th iteration of Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} in the $\ell$-th inner loop. Let $T(\epsilon) \leq +\infty$ be the first iteration index such that $$\begin{array}{rcl} \max\Bigl\{ \, \| \nabla f( x_{T(\epsilon), \ell}(t) ) + \psi'(x_{T(\epsilon), \ell}(t)) \|, \; \frac{1}{2^2 3^3 \cdot m \cdot \max\{ \tau_0, L \}} \bigl[ \xi( x_{T(\epsilon), \ell}(t) ) \bigr]^2 \, \Bigr\} & \leq & \epsilon \end{array}$$ for some $\ell \geq 0$ and $t \in \{ 0, \ldots, m \}$. Then, bounds [\[ZOTBound\]](#ZOTBound){reference-type="eqref" reference="ZOTBound"} and [\[ZOTBoundTotal\]](#ZOTBoundTotal){reference-type="eqref" reference="ZOTBoundTotal"} hold.* # Local Superlinear Convergence {#SectionLocal} One of the main classical results about Newton's Method is its *local quadratic convergence*, which dates back to the works of Fine [@fine1916newton], Bennett [@bennett1916newton], and Kantorovich [@kantorovich1948newton]. It assumes that the iterates of the method are already in a neighbourhood of a non-degenerate solution (a strict local minimum $x^{\star}$ satisfying $\nabla^2 f(x^{\star}) \succ 0$), and, importantly, it shows that under this condition the method converges very fast. Later on, local superlinear convergence of the Newton Method that uses the same Hessian for $m \geq 1$ consecutive steps, where $m$ is a parameter, was established by Shamanskii in [@shamanskii1967modification], and more recently in [@doikov2023second]. The local quadratic convergence of the CNM with finite-difference Hessian approximations was studied in [@grapiglia2022cubic].
In this section, we justify local superlinear convergence for our implementations of the inexact composite CNM. To quantify our problem class, we additionally assume the following[^4]: **A2** The Hessian of $f$ is bounded below on $Q$, for some $\mu > 0$: $$\label{HessBounded} \begin{array}{rcl} \nabla^2 f(x) & \succeq & \mu I, \qquad \forall x \in Q. \end{array}$$ It is well known that bound [\[HessBounded\]](#HessBounded){reference-type="eqref" reference="HessBounded"} means that our composite objective $F(\cdot)$ is *strongly convex* on $Q$ with parameter $\mu > 0$. Thus, it has a unique minimizer $x^{\star} \in Q$, and the following standard inequality holds [@nesterov2018lectures]: $$\label{SolBound} \begin{array}{rcl} \| x - x^{\star} \| & \leq & \frac{1}{\mu} \| F'(x) \|, \qquad \forall x \in Q, \; F'(x) \in \partial F(x). \end{array}$$ Let us study one iteration $k \geq 0$ of our first-order CNM (Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}). First, we have the following bounds, for any $\ell \geq 0$: $$\label{LocalSigmaHBound} \begin{array}{rcl} \sigma_{k, \ell} & \overset{\textit{Step~1.1}}{=} & 2^4 \bigl( \frac{2}{3} \bigr)^{1/3} (2^{\ell} \tau_k) m \;\; \overset{ \textit{Step~4}, \, \eqref{HFTauBound} }{\leq} \;\; 2^5 \bigl( \frac{2}{3} \bigr)^{1/3} m\cdot \max\{ \tau_0, L \}, \\ \\ h_{k, \ell} & \overset{\textit{Step~1.1}}{=} & \Bigl[ \frac{3 \cdot 2^6 ( \frac{2}{3} )^{1/2} m^{3/2} \epsilon^{3/2} }{ 2^7 (192) n^{3/2} (2^\ell \tau_k)^{3/2} } \Bigr]^{1/3} \;\; = \;\; c \cdot \sqrt{\frac{m \epsilon}{2^{\ell} n \tau_k } } \\ \\ & \leq & c \cdot \sqrt{\frac{m \epsilon}{n \tau_k } } \;\; \overset{\textit{Step~4}}{\leq} \;\; c \cdot \sqrt{\frac{m \epsilon}{n \tau_0 } }, \end{array}$$ where $c := \frac{1}{3^{1/6} \cdot 2^{13/6}}$ is a numerical constant.
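This simplification of the step size is easy to sanity-check numerically. The following throwaway Python snippet (with arbitrary sample values; it is a verification aid, not part of the analysis) compares the defining expression for $h_{k, \ell}$ from Step 1.1 against the closed form $c \sqrt{m \epsilon / (2^{\ell} n \tau_k)}$:

```python
import math

# Check that h_{k,l}, built from (AlgFOH) with sigma_{k,l} from (AlgFOSigm),
# collapses to c * sqrt(m * eps / (2^l * n * tau_k)),
# where c = 1 / (3^(1/6) * 2^(13/6)).  Sample values below are arbitrary.
m, n, eps, tau, l = 7, 5, 1e-3, 2.5, 3
sigma = 2**4 * (2 / 3)**(1 / 3) * (2**l * tau) * m
h_def = (3 * sigma**1.5 * eps**1.5
         / (2**7 * 192 * n**1.5 * (2**l * tau)**3))**(1 / 3)
c = 1 / (3**(1 / 6) * 2**(13 / 6))
h_closed = c * math.sqrt(m * eps / (2**l * n * tau))
assert abs(h_def - h_closed) <= 1e-12 * h_closed
```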
Let us consider the following set, for a fixed $\epsilon, \kappa > 0$ and some given selection of subgradients $F'(x) \in \partial F(x)$: $$\label{BoundSubgr} \begin{array}{rcl} \mathcal{Q}_{\epsilon, \kappa} & \stackrel{\mathrm{def}}{=}& \Bigl\{ \, x \in Q \; : \; \epsilon \, \leq \, \| F'(x) \| \, \leq \, \frac{\kappa }{2} \, \Bigr\}, \end{array}$$ where $\kappa$ is the following constant describing the *region of quadratic convergence*: $$\label{KappaDef} \begin{array}{rcl} \kappa & \stackrel{\mathrm{def}}{=}& \mu^2 \cdot \frac{1}{2} \biggl[ 3 \cdot 2^6 \bigl( \frac{2}{3} \bigr)^{1/3} m \cdot \max\{\tau_0, L \} + 8L + \frac{c^2 L^2 m}{\tau_0} \biggr]^{-1} \;\; \sim \;\;\; \frac{\mu^2}{mL}. \end{array}$$ By [\[BoundSubgr\]](#BoundSubgr){reference-type="eqref" reference="BoundSubgr"}, we assume the desired accuracy $\epsilon$ to be sufficiently small: $$\label{LocalEpsBound} \begin{array}{rcl} \epsilon & \leq & \frac{\kappa}{2} \;\; \leq \;\; \frac{\tau_0 \mu^2}{m L^2 c^2}. \end{array}$$ Then, we have $$\label{LocalHklBound} \begin{array}{rcl} h_{k, \ell} & \overset{\eqref{LocalSigmaHBound}}{\leq} & c \cdot \sqrt{ \frac{m \epsilon}{n \tau_0} } \;\; \overset{\eqref{LocalEpsBound}}{\leq} \;\; \frac{\mu}{L\sqrt{n}}. \end{array}$$ Therefore, due to Lemma [Lemma 5](#LemmaHessFO){reference-type="ref" reference="LemmaHessFO"}, for all Hessian approximations $B_{k, \ell}$ constructed in Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}, it holds: $$\label{LocalBBound} \begin{array}{rcl} \| B_{k, \ell} - \nabla^2 f(x_k) \| & \overset{\eqref{HessFOBound}}{\leq} & \frac{\sqrt{n} L}{2} h_{k, \ell} \;\; \overset{\eqref{LocalHklBound}}{\leq} \;\; \frac{\mu}{2}. \end{array}$$ Taking into account our assumption A2, we conclude that our Hessian approximations are *always positive definite*: $$\label{LocalBStrongConvex} \begin{array}{rcl} B_{k, \ell} & \succeq & \frac{\mu}{2} I. 
\end{array}$$ In this case, we can easily bound the length of one inexact CNM step, as follows. **Lemma 22**. *Let $x^+$ be an inexact minimizer of model [\[ApproxModel\]](#ApproxModel){reference-type="eqref" reference="ApproxModel"} satisfying the following condition: $$\label{InexactThStep2} \begin{array}{rcl} \| \nabla M_{x, \sigma}(x^+) + \psi'(x^+) \| & \leq & \frac{\sigma}{4}\|x^+ - x\|^2, \end{array}$$ for a certain $\psi'(x^+) \in \partial \psi(x^+)$, where $g$ satisfies [\[GradHessApprox\]](#GradHessApprox){reference-type="eqref" reference="GradHessApprox"} for some $\delta_g \geq 0$, and $B \succeq \frac{\mu}{2}I$. Then, we have $$\label{CubicStepBound} \begin{array}{rcl} r \;\; := \;\; \|x^+ - x \| & \leq & \frac{2}{\mu}\Bigl( \| F'(x) \| + \delta_g \Bigr), \qquad \forall F'(x) \in \partial F(x). \end{array}$$* Indeed, we get that $$\label{StepLemma1} \begin{array}{rcl} \frac{\sigma r^3}{4} & \overset{\eqref{InexactThStep2}}{\geq} & {\langle}\nabla M_{x, \sigma}(x^+) + \psi'(x^+), x^+ - x {\rangle}\\ \\ & = & {\langle}g + B(x^+ - x) + \frac{\sigma}{2}r(x^+ - x) + \psi'(x^+), x^+ - x {\rangle}\\ \\ & \geq & {\langle}g + \psi'(x^+), x^+ - x {\rangle} + \frac{\mu r^2}{2} + \frac{\sigma r^3}{2}. \end{array}$$ Hence, rearranging the terms and using the convexity of $\psi$, we obtain, for any $\psi'(x) \in \partial \psi(x)$, $$\begin{array}{rcl} \frac{\mu r^2}{2} & \overset{\eqref{StepLemma1}}{\leq} & {\langle}g + \psi'(x^+), x - x^+ {\rangle} - \frac{\sigma r^3}{4} \;\; \leq \;\; {\langle}g + \psi'(x), x - x^+ {\rangle}- \frac{\sigma r^3}{4} \\ \\ & \leq & r \Bigl( \| \nabla f(x) + \psi'(x) \| + \delta_g \Bigr) \;\; = \;\; r \Bigl( \| F'(x) \| + \delta_g \Bigr), \end{array}$$ which is [\[CubicStepBound\]](#CubicStepBound){reference-type="eqref" reference="CubicStepBound"}. ◻ Now, let us look at the local progress given by one inexact CNM step $x \mapsto x^+$, with anchor point $z := x_k$.
Assuming that $x \in \mathcal{Q}_{\epsilon, \kappa}$ and under the assumptions of Lemma [Lemma 1](#LemmaNewGrad){reference-type="ref" reference="LemmaNewGrad"} with $\theta = \frac{\sigma_{k, \ell} }{4}$, $\delta_g = 0$, and $\delta_B \overset{\eqref{HessFOBound}}{=} \frac{\sqrt{n} L}{2} h_{k, \ell}$, we get $$\label{LocalOneStepProgress} \begin{array}{rcl} \| F'(x^+) \| & \overset{\eqref{NewGradBound} }{\leq} & \Bigl( \frac{3}{4} \sigma_{k, \ell} + \frac{L}{2} \Bigr) r^2 + \Bigl( \frac{\sqrt{n} L}{2} h_{k, \ell} + L\|x - x_k \|\Bigr) r \\ \\ & \overset{\eqref{LocalSigmaHBound}}{\leq} & \Bigl( \frac{3}{4} \sigma_{k, \ell} + \frac{L}{2} \Bigr) r^2 + \Bigl( \frac{cL}{2}\sqrt{\frac{m}{\tau_0}} \cdot \sqrt{\epsilon} + L\|x - x_k \| \Bigr) r \\ \\ & \leq & \Bigl( \frac{3}{4} \sigma_{k, \ell} + \frac{L}{2} + \frac{c^2 L^2 m}{8 \tau_0} \Bigr) r^2 + L r \|x - x^{\star} \| + L r \|x_k - x^{\star} \| + \frac{\epsilon}{2} \\ \\ & \overset{\eqref{CubicStepBound}, \eqref{SolBound}}{\leq} & \frac{1}{\mu^2} \Bigl( 3 \sigma_{k, \ell} + 4L + \frac{c^2 L^2 m}{2\tau_0} \Bigr) \| F'(x) \|^2 + \frac{2L}{\mu^2} \| F'(x) \| \cdot \| F'(x_k) \| + \frac{\epsilon}{2}. \end{array}$$ We see that the first term in the right-hand side of [\[LocalOneStepProgress\]](#LocalOneStepProgress){reference-type="eqref" reference="LocalOneStepProgress"} is responsible for the local quadratic convergence in terms of the (sub)gradient norm, as in the classical Newton's Method, and the last two terms appear due to the inexactness of our Hessian approximations. It remains to combine all our observations. **Theorem 23**. *Suppose that A1 and A2 hold. Let $x_0 \in \mathcal{Q}_{\epsilon, \kappa}$, with $\mathcal{Q}_{\epsilon, \kappa}$ given by [\[BoundSubgr\]](#BoundSubgr){reference-type="eqref" reference="BoundSubgr"} and $\kappa$ given by [\[KappaDef\]](#KappaDef){reference-type="eqref" reference="KappaDef"}.
Let $\{ x_k \}_{k \geq 1}$ be generated by Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"} and let $T(\epsilon) \leq +\infty$ be the first iteration index such that $\| \nabla f( x_{T(\epsilon)} ) + \psi'(x_{T(\epsilon)}) \| \leq \epsilon$, for a certain $\psi'(x_{T(\epsilon)}) \in \partial \psi(x_{T(\epsilon)})$. We have $$\label{LocalTBound} \begin{array}{rcl} T(\epsilon) & \leq & \frac{1}{\log_2(1 + m)} \log_2 \log_2 \frac{\kappa}{\epsilon} + 1. \end{array}$$ By the definition of $T(\epsilon)$, we have $$\begin{array}{rcl} \| F'(x_k) \| \;\; \equiv \;\; \| \nabla f(x_k) + \psi'(x_k) \| & \geq & \epsilon, \quad \text{for} \quad k = 0, \ldots, T(\epsilon) - 1, \end{array}$$ and for all iterations generated by Algorithm [\[alg:HessianFree\]](#alg:HessianFree){reference-type="ref" reference="alg:HessianFree"} launched from Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}. We prove by induction that $$\label{LocalGkBound} \begin{array}{rcl} \frac{1}{\kappa} \| F'(x_k) \| & \leq & \bigl( \frac{1}{2} \bigr)^{(1 + m)^k + 1}, \qquad k = 0, \ldots, T(\epsilon) - 1, \end{array}$$ which immediately leads to the desired bound.* *For $k = 0$, inequality [\[LocalGkBound\]](#LocalGkBound){reference-type="eqref" reference="LocalGkBound"} holds due to our assumption: $x_0 \in \mathcal{Q}_{\epsilon, \kappa}$, and this is the base of our induction. Assume that it holds for some $k \geq 0$, and consider one iteration of Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}. 
In *Step 1.2* it runs `CubicSteps` (Algorithm [\[alg:HessianFree\]](#alg:HessianFree){reference-type="ref" reference="alg:HessianFree"}) and performs the adaptive search until it gets the status $\alpha_{k, \ell} = \texttt{success}$ ($\alpha_{k, \ell} = \texttt{solution}$ is impossible by our assumption).* *Hence, $x_{k + 1}$ will be computed as $m$ inexact Cubic steps performed from the point $x_k$. Denoting these steps by $x_k^{0} \mapsto x_k^{1} \mapsto \ldots \mapsto x_k^{m}$ ($x_k^{0} \equiv x_k$ and $x_k^{m} \equiv x_{k + 1}$), we conclude that, for each $0 \leq t \leq m - 1$: $$\label{LocalProgress2} \begin{array}{rcl} \| F'(x_k^{t + 1}) \| & \overset{\eqref{LocalOneStepProgress}, \eqref{LocalSigmaHBound}}{\leq} & \frac{1}{2\kappa} \Bigl( \| F'(x_k^t) \|^2 + g_k \| F'(x_k^t) \| \Bigr) + \frac{\epsilon}{2} \\ \\ & \leq & \frac{1}{2\kappa} \Bigl( \| F'(x_k^t) \|^2 + \| F'(x_k^0) \| \| F'(x_k^t) \| \Bigr) + \frac{1}{2} \| F'(x_k^{t + 1}) \|. \end{array}$$ Now, assuming that $$\label{Induct2} \begin{array}{rcl} \frac{1}{\kappa} \| F'(x_k^t) \| & \leq & \bigl( \frac{1}{2} \bigr)^{(1 + t)(1 + m)^k + 1} \end{array}$$ (which holds for $t = 0$ by [\[LocalGkBound\]](#LocalGkBound){reference-type="eqref" reference="LocalGkBound"}), we have $$\begin{array}{cl} & \frac{1}{\kappa} \| F'(x_k^{t + 1}) \| \;\; \overset{\eqref{LocalProgress2}}{\leq} \;\; \frac{1}{\kappa} \Bigl( \| F'(x_k^t) \| + \|F'(x_k^0) \| \Bigr) \cdot \frac{1}{\kappa} \| F'(x_k^t) \| \\ \\ & \overset{\eqref{Induct2}}{\leq} \;\; \Bigl( \bigl( \frac{1}{2} \bigr)^{ (1 + t)(1 + m)^k + 1} + \bigl( \frac{1}{2} \bigr)^{ (1 + m)^k + 1} \Bigr) \cdot \bigl(\frac{1}{2} \bigr)^{ (1 + t)(1 + m)^k + 1} \;\; \leq \;\; \bigl(\frac{1}{2} \bigr)^{ (1 + t + 1)(1 + m)^k + 1}. \end{array}$$ Thus, [\[Induct2\]](#Induct2){reference-type="eqref" reference="Induct2"} holds for all $0 \leq t \leq m$, and for $t = m$ it gives [\[LocalGkBound\]](#LocalGkBound){reference-type="eqref" reference="LocalGkBound"} for the next iterate. 
0◻* Finally, let us discuss the local superlinear convergence for our derivative-free CNM (Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"}); the analysis remains similar to that of the Hessian-free version. For the derivative-free method, we have, for a fixed iteration $k \geq 0$ and for any $\ell \geq 0$: $$\label{ZOHBounds} \begin{array}{rcl} \sigma_{k, \ell} & \overset{\textit{Step 1.1}}{=} & 2^4 \bigl( \frac{2}{3} \bigr)^{1/3} (2^\ell \tau_k) m \;\; \overset{\textit{Step 3}, \;\eqref{AlgZOTauKBound}}{\leq} \;\; 2^5 \bigl(\frac{2}{3} \bigr)^{1/3} m \cdot \max\{ \tau_0, L \}, \\ \\ h_{k, \ell} & \overset{\textit{Step 1.1}}{=} & \Bigl[ \bigl(\frac{2}{3}\bigr)^{1/2} \frac{3^3 \epsilon^{3/2} m^{3/2} }{2^{14} n^3 (2^{\ell} \tau_k)^{3/2}} \Bigr]^{1/3} \;\; \overset{\textit{Step 3}}{\leq} \;\; \frac{c_B \epsilon^{1/2} m^{1/2}}{n \tau_0^{1/2}}, \quad \text{where} \quad c_B \; := \; \frac{3}{2^{14/3}} \cdot \bigl( \frac{2}{3} \bigr)^{1/6}, \end{array}$$ and in each call of Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"}, the gradient finite difference parameter is $$\label{ZOHGBound} \begin{array}{rcl} h_g & \overset{\textit{Step 2}}{=} & \frac{1}{3^{1/3}} \Bigl[ \frac{\epsilon m}{\sigma_{k,\ell} n^{1/2}} \Bigr]^{1/2} \;\; \leq \;\; c_g \cdot \frac{\epsilon^{1/2}}{n^{1/4} \tau_0^{1/2}}, \quad \text{where} \quad c_g \; := \; \frac{1}{3^{1/3} 2^2 (2/3)^{1/6}}. 
\end{array}$$ Therefore, due to Lemma [Lemma 6](#LemmaGradZO){reference-type="ref" reference="LemmaGradZO"} and [Lemma 7](#LemmaHessZO){reference-type="ref" reference="LemmaHessZO"}, all our gradient and Hessian approximations used in Algorithms [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"} and [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} satisfy the following guarantees: $$\label{GradHessGuarantees} \begin{array}{rcl} \| g_t - \nabla f(x_t)\| & \overset{\eqref{GradZOBound}}{\leq} & \frac{\sqrt{n} L}{6} h_g^2 \;\; \overset{\eqref{ZOHGBound}}{\leq} \;\; \frac{c_g^2}{6} \cdot \frac{ \epsilon L}{\tau_0}, \\ \\ \| B_{k, \ell} - \nabla^2 f(x_k) \| & \overset{\eqref{HessZOBBound}}{\leq} & \frac{2nL}{3} h_{k,\ell} \;\; \overset{\eqref{ZOHBounds}}{\leq} \;\; \frac{2 c_B}{3} \cdot \frac{\epsilon^{1/2} m^{1/2} L}{\tau_0^{1/2}}. \end{array}$$ In particular, assuming that $\epsilon$ is sufficiently small [\[LocalEpsBound\]](#LocalEpsBound){reference-type="eqref" reference="LocalEpsBound"}, we ensure $\| B_{k, \ell} - \nabla^2 f(x_k) \| \leq \frac{\mu}{2}$, and hence our Hessian approximations are positive definite: $B_{k, \ell} \succeq \frac{\mu}{2} I$. Let us assume that the initial regularization parameter is sufficiently big: $$\label{Tau0Big} \begin{array}{rcl} \tau_0 & \geq & \frac{2 c_g^2 L}{3}. 
\end{array}$$ Using Lemma [Lemma 22](#LemmaStepBound){reference-type="ref" reference="LemmaStepBound"}, we can bound one (zeroth-order) inexact CNM step $x \mapsto x^+$ for a point $x \in \mathcal{Q}_{\epsilon, \kappa}$, as follows: $$\label{ZORBound} \begin{array}{rcl} r & := & \|x^+ - x\| \;\; \overset{\eqref{CubicStepBound}, \eqref{GradHessGuarantees}}{\leq} \;\; \frac{2}{\mu} \Bigl( \| F'(x)\| + \frac{c_g^2}{6} \cdot \frac{\epsilon L}{\tau_0} \Bigr) \\ \\ & \overset{\eqref{Tau0Big}}{\leq} & \frac{2}{\mu} \Bigl( \| F'(x)\| + \frac{\epsilon}{4} \Bigr) \;\; \overset{\eqref{BoundSubgr}}{\leq} \;\; \frac{5}{2\mu} \|F'(x) \|. \end{array}$$ It remains to apply Lemma [Lemma 1](#LemmaNewGrad){reference-type="ref" reference="LemmaNewGrad"} with $\theta = \frac{\sigma_{k,\ell}}{4}$, $\delta_g =\frac{c_g^2 \epsilon L}{6\tau_0} \overset{\eqref{Tau0Big}}{\leq} \frac{\epsilon}{4}$, $\delta_B = \frac{2c_B \epsilon^{1/2}m^{1/2}L}{3\tau_0^{1/2}}$ and anchor point $z := x_k$. We obtain $$\label{ZOLocalGrad} \begin{array}{rcl} \| F'(x^+) \| & \overset{\eqref{NewGradBound}}{\leq} & \Bigl( \frac{3}{4} \sigma_{k, \ell} + \frac{L}{2} \Bigr) r^2 + \Bigl( \frac{2c_B}{3} \cdot \frac{\epsilon^{1/2} m^{1/2} L}{\tau_0^{1/2}} + L\|x - x_k\| \Bigr)r + \frac{\epsilon}{4} \\ \\ & \leq & \Bigl( \frac{3}{4} \sigma_{k, \ell} + \frac{L}{2} + \frac{4c_B^2 mL^2}{9\tau_0} \Bigr) r^2 + Lr\|x - x^{\star} \| + Lr\|x_k - x^{\star}\| + \frac{\epsilon}{2} \\ \\ & \overset{\eqref{ZORBound}, \eqref{SolBound}}{\leq} & \frac{1}{\mu^2} \Bigl( \frac{75}{4} \sigma_{k, \ell} + \frac{45L}{8} + \frac{4 c_B^2 mL^2}{9\tau_0} \Bigr) \| F'(x) \|^2 + \frac{5L}{2\mu^2} \|F'(x) \| \cdot \| F'(x_k) \| + \frac{\epsilon}{2}. \end{array}$$ We see that this inequality has the same structure as [\[LocalOneStepProgress\]](#LocalOneStepProgress){reference-type="eqref" reference="LocalOneStepProgress"} established for the Hessian-free CNM. 
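The gradient guarantee in [\[GradHessGuarantees\]](#GradHessGuarantees){reference-type="eqref" reference="GradHessGuarantees"} reflects the standard second-order accuracy of central finite differences: for a function with $L$-Lipschitz Hessian, each coordinate derivative is approximated with error at most $\frac{L}{6}h^2$. A minimal Python sketch checking this $O(h^2)$ behavior on a toy cubic function (our own illustration; the constant $L = 6$ below is the Hessian Lipschitz constant of that toy function):

```python
import math

def fd_gradient(f, x, h):
    """Central-difference approximation of the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

f = lambda x: sum(t ** 3 for t in x)   # Hessian 6*diag(x) is 6-Lipschitz: L = 6
x, h, L = [0.5, -1.0, 2.0], 1e-3, 6.0
g = fd_gradient(f, x, h)
exact = [3.0 * t ** 2 for t in x]      # exact gradient: 3*x_i^2
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(g, exact)))
n = len(x)
# matches the sqrt(n)*(L/6)*h^2 rate used for h_g in the analysis
assert err <= math.sqrt(n) * (L / 6.0) * h ** 2 * 1.001
```

For this cubic test function the per-coordinate error is exactly $h^2$, so the bound is attained with equality up to rounding.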
Applying bound [\[ZOHBounds\]](#ZOHBounds){reference-type="eqref" reference="ZOHBounds"}, it is easy to verify that we can use the same local region, given by [\[BoundSubgr\]](#BoundSubgr){reference-type="eqref" reference="BoundSubgr"}, [\[KappaDef\]](#KappaDef){reference-type="eqref" reference="KappaDef"}. Therefore, repeating the previous reasoning, we prove the following local superlinear convergence. **Theorem 24**. *Suppose that A1 and A2 hold. Let $x_0 \in \mathcal{Q}_{\epsilon, \kappa}$, with $\mathcal{Q}_{\epsilon, \kappa}$ given by [\[BoundSubgr\]](#BoundSubgr){reference-type="eqref" reference="BoundSubgr"} and $\kappa$ given by [\[KappaDef\]](#KappaDef){reference-type="eqref" reference="KappaDef"}. Let the initial regularization parameter $\tau_0$ be sufficiently big [\[Tau0Big\]](#Tau0Big){reference-type="eqref" reference="Tau0Big"}. Let $x_{k, \ell}(t)$ be the $t$-th iterate of Algorithm [\[alg:ZO_CNM\]](#alg:ZO_CNM){reference-type="ref" reference="alg:ZO_CNM"} applied at the $k$-th iteration of Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} in the $\ell$-th inner loop. Let $T(\epsilon) \leq +\infty$ be the first iteration index such that $\| \nabla f( x_{T(\epsilon), \ell}(t) ) + \psi'(x_{T(\epsilon), \ell}(t)) \| \leq \epsilon$, for some $\ell \geq 0$ and $t \in \{0, \ldots, m\}$. Then, $$\label{LocalTBoundZO} \begin{array}{rcl} T(\epsilon) & \leq & \frac{1}{\log_2(1 + m)} \log_2 \log_2 \frac{\kappa}{\epsilon} + 1. \end{array}$$* # Illustrative Numerical Experiments {#SectionExperiments} We performed preliminary numerical experiments with Matlab implementations of the proposed methods applied to the set of 35 problems from the Moré-Garbow-Hillstrom collection [@more1981testing][^5]. For both algorithms, we considered $\tau_{0}=1$ and $\epsilon=10^{-4}$, allowing a maximum of $3,000$ calls of the oracle. Moreover, each cubic subproblem was approximately solved by a BFGS method with Armijo line search (using the origin as the initial point). 
Figure [1](#fig:01){reference-type="ref" reference="fig:01"} presents the performance profiles [@dolan2002benchmarking][^6] for Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}, comparing the variants with $m=1$, $m=n$ and $m=2n$ in terms of the number of calls of the oracle required to find the first $\epsilon$-approximate stationary point. For each value $x$ on the x-axis, the y-axis shows the percentage of problems for which the corresponding code performs within a factor $2^x$ of the best performance among all the methods. In accordance with our theory, $m=n$ resulted in the best performance, with the corresponding code requiring fewer calls of the oracle in $48.6\%$ of the problems. ![Performance profiles in $\log_{2}$ scale for Algorithm [\[alg:FirstOrderCNM\]](#alg:FirstOrderCNM){reference-type="ref" reference="alg:FirstOrderCNM"}. For each choice of $m$, the caption indicates the percentage of problems in which the corresponding code was the best in terms of the number of calls of the oracle.](performance_fo_cnm.png){#fig:01} We performed similar experiments with Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"}, comparing the choices $m=1$, $m=n$ and $m=2n$ in terms of the number of function evaluations required to find $\bar{x}$ such that $$\begin{array}{rcl} f(\bar{x})-f_{best} & \leq & \epsilon\left(f(x_{0})-f_{best}\right). \end{array}$$ For each problem, $f_{best}$ is the smallest value of the objective function obtained by applying the three variants of Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"} with a budget of $3,000$ function evaluations. Figure [2](#fig:02){reference-type="ref" reference="fig:02"} presents the corresponding performance profiles. Again, the variant with $m=n$ outperformed the others, requiring fewer function evaluations in $60.0\%$ of the problems. 
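Performance profiles of this kind are computed from a matrix of per-problem costs. The following is a minimal sketch of the Dolan-Moré construction in the $\log_2$ scale used here (our own illustration, independent of the `perf.m` script used for the figures):

```python
def performance_profile(costs, taus):
    """Dolan-More performance profiles on a log2 scale.

    costs[p][s]: cost (e.g. oracle calls) of solver s on problem p,
    with float('inf') marking a failure. Returns profiles[s][j], the
    fraction of problems on which solver s is within a factor 2**taus[j]
    of the best solver."""
    n_prob = len(costs)
    n_solv = len(costs[0])
    profiles = [[0.0] * len(taus) for _ in range(n_solv)]
    for row in costs:
        best = min(row)
        for s in range(n_solv):
            ratio = row[s] / best
            for j, tau in enumerate(taus):
                if ratio <= 2.0 ** tau:
                    profiles[s][j] += 1.0 / n_prob
    return profiles

# Toy data: 3 problems, 2 solvers; solver 0 wins problem 0, solver 1 wins
# problem 1, and they tie on problem 2.
costs = [[100.0, 150.0], [200.0, 100.0], [300.0, 300.0]]
profiles = performance_profile(costs, taus=[0.0, 1.0, 2.0])
```

On the toy data, each solver is best on $2/3$ of the problems at $\tau = 0$, and both reach $100\%$ once a factor of $2$ is allowed.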
![Performance profiles in $\log_{2}$ scale for Algorithm [\[alg:ZeroOrderCNM\]](#alg:ZeroOrderCNM){reference-type="ref" reference="alg:ZeroOrderCNM"}.](performance_zo_cnm.png){#fig:02} # Discussion {#SectionDiscussion} In this paper, we have developed new first-order and zeroth-order implementations of the Cubically regularized Newton method, which need, respectively, at most $\mathcal{O}( n^{1/2} \epsilon^{-3/2} )$ and $\mathcal{O}( n^{3/2} \epsilon^{-3/2} )$ calls of the oracle to find an $\epsilon$-approximate second-order stationary point. Along with improved complexity guarantees, one of the main advantages of our schemes is the adaptive search, which makes the algorithms free from the need to fix the actual Lipschitz constant and the finite-difference approximation parameters. While in this work we study the general class of non-convex optimization problems, it would be interesting to investigate the global performance of our methods for convex objectives. Indeed, it is well-known that, when the problem is convex, the rate of minimizing the gradient norm can be improved and the methods can be accelerated [@grapiglia2019accelerated]. Hence, it seems to be an important direction for future research to study the complexities of first-order and zeroth-order regularized Newton schemes in the convex case. Another interesting question is related to the comparison of our new schemes with derivative-free implementations of first-order and direct-search methods [@nesterov2017random; @bergou2020stochastic; @gratton2015direct; @grapiglia2023worst]. These methods need at most $\mathcal{O}( n \epsilon^{-2} )$ function evaluations to find a first-order $\epsilon$-stationary point (in expectation or with high probability for stochastic methods [@nesterov2017random; @bergou2020stochastic; @gratton2015direct], or in terms of the full gradient norm for a deterministic method [@grapiglia2023worst]). 
We see that the bound $\mathcal{O}( n \epsilon^{-2} )$ is worse than our $\mathcal{O}( n^{3/2} \epsilon^{-3/2} )$ in terms of the dependence on $\epsilon$, but has a better dimension factor. However, note that these complexity bounds are obtained for *different problem classes*, assuming either the first or second derivative to be Lipschitz continuous. Therefore, the development of universal schemes that can automatically achieve the best possible complexity bounds across various problem classes appears to be important, both from practical and theoretical perspectives. We leave these questions for future research. [^1]: École Polytechnique Fédérale de Lausanne (EPFL), Machine Learning and Optimization Laboratory (MLO), Switzerland (nikita.doikov\@epfl.ch). The work was supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 22.00133. [^2]: Université catholique de Louvain (UCLouvain), Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM/INMA), Belgium (geovani.grapiglia\@uclouvain.be). [^3]: *Thus, $\alpha$ can be either success or halt in this case.* [^4]: Note that for simplicity we assume here strong convexity for the whole feasible set $Q$, while it can be possible to restrict our analysis to a neighbourhood of a non-degenerate local minimum. [^5]: For each problem, $n$ was chosen as in [@birgin2020use], resulting in a set of problems with dimensions ranging from $2$ to $40$. [^6]: The performance profiles were generated using the code **perf.m** freely available in the website <https://www.mcs.anl.gov/~more/cops/>.
--- abstract: | We prove a conjecture of Schmid and the second named author [@SV] that the unitarity of a representation of a real reductive Lie group with real infinitesimal character can be read off from a canonical filtration, the Hodge filtration. Our proof rests on two main ingredients. The first is a wall crossing theory for mixed Hodge modules: the key result is that, in certain natural families, the Hodge filtration varies semi-continuously with jumps controlled by extension functors. The second ingredient is a Hodge-theoretic refinement of Beilinson-Bernstein localization: we show that the Hodge filtration of a mixed Hodge module on the flag variety satisfies the usual cohomology vanishing and global generation properties enjoyed by the underlying ${\mathcal{D}}$-module. As byproducts of our work, we obtain a version of Saito's Kodaira vanishing for twisted mixed Hodge modules, a calculation of the Hodge filtration on a certain object in category ${\mathcal{O}}$, and a host of new vanishing results for, for example, homogeneous vector bundles on flag varieties. address: - School of Mathematics and Statistics, University of Melbourne, VIC 3010, Australia - School of Mathematics and Statistics, University of Melbourne, VIC 3010, Australia, also Department of Mathematics and Statistics, University of Helsinki, Helsinki, Finland author: - Dougal Davis - Kari Vilonen date: 23 September 2023 title: Unitary representations of real groups and localization theory for Hodge modules --- # Introduction In [@SV], Schmid and the second named author proposed a conceptual approach to the notoriously difficult problem of computing the unitary dual of a real reductive Lie group. They observed that an irreducible representation with real infinitesimal character carries a canonical Hodge filtration and conjectured that unitarity of the representation can be read off from the Hodge filtration. We prove this conjecture here. 
The main ingredients in the proof are new algebro-geometric vanishing theorems and deformation arguments for Hodge modules. The problem of determining the unitary dual of a Lie group, i.e., the set of its irreducible unitary representations, has a long history, going back at least to the 1930s; see, for example, [@ALTV §1] for a brief overview. The key case to consider is that of the real reductive groups, the groups which arise as real forms of connected complex reductive groups. We will not attempt to recall here all that is known about the unitary duals of real groups. Suffice it to say that a number of cases are known by explicit calculation (e.g., for ${\mathrm{GL}}_n(F)$, $F = {\mathbb{R}}, {\mathbb{C}}, {\mathbb{H}}$ [@vogan-gln], complex classical groups [@barbasch], and several others) and that there is an algorithm, developed by Adams, van Leeuwen, Trapa and Vogan [@ALTV] (building on several decades of work by many authors) that can in principle compute the list of unitary representations for any real reductive group. This algorithm has been implemented effectively in the `atlas` software, although computational resources become an issue for large examples. Nevertheless, we are still far from a complete understanding of the unitary dual. At present, there is not even a precise conjecture as to which representations are unitary in general. In the 1960's a conceptual approach to the problem was proposed, the orbit method, whose main idea is to produce the unitary representations by quantizing co-adjoint orbits. It has turned out to be difficult to implement, however, and has so far served more as a guiding principle. The motivation behind [@SV] was to provide a different conceptual geometric approach to the problem of the unitary dual that could potentially overcome the limitations of the existing methods. 
They proposed that the representations carry an (infinite dimensional) Hodge structure and that this Hodge structure should be obtained from Morihiko Saito's theory of mixed Hodge modules. Once the problem is cast within this framework many tools and techniques, in particular functoriality, can be used which were not available before. For a gentle introduction, see [@SV2], where the theory is worked out for ${\mathrm{SL}}_2({\mathbb{R}})$. In this paper, we implement enough of this program to obtain the desired results about unitarity: the main result is Theorem [Theorem 6](#thm:intro unitarity criterion){reference-type="ref" reference="thm:intro unitarity criterion"} below, which gives a complete characterization of unitary representations in terms of Hodge theory. We think of our results as a version of Beilinson-Bernstein localization theory for Hodge modules. We explain the results from this point of view in detail in §[2](#sec:long intro){reference-type="ref" reference="sec:long intro"}; for now, let us content ourselves with a brief overview. We begin with a connected complex reductive group $G$ with Lie algebra ${\mathfrak{g}}$. We write ${\mathcal{B}}$ for the flag variety of $G$ (a projective algebraic variety over the complex numbers) and ${\mathfrak{h}}$ for the Lie algebra of the universal Cartan of $G$. 
Beilinson-Bernstein localization [@beilinson-ICM; @BB1; @BB2] associates to each $\lambda \in {\mathfrak{h}}^*$ a sheaf ${\mathcal{D}}_\lambda$ of twisted differential operators on ${\mathcal{B}}$ and a pair of adjoint functors $$\Gamma \colon {\mathrm{Mod}}({\mathcal{D}}_\lambda) \to {\mathrm{Mod}}(U({\mathfrak{g}}))_{\chi_\lambda}, \quad \Delta \colon {\mathrm{Mod}}(U({\mathfrak{g}}))_{\chi_\lambda} \to {\mathrm{Mod}}({\mathcal{D}}_\lambda);$$ here ${\mathrm{Mod}}(U({\mathfrak{g}}))_{\chi_\lambda}$ is the category of $U({\mathfrak{g}})$-modules on which the center $Z(U({\mathfrak{g}}))$ acts via the scalar $$\chi_\lambda \colon Z(U({\mathfrak{g}})) \cong S({\mathfrak{h}})^W \subset S({\mathfrak{h}}) \xrightarrow{\lambda} {\mathbb{C}},$$ where $W$ is the Weyl group and the isomorphism above is the Harish-Chandra isomorphism. The right adjoint $\Gamma$ is given by taking global sections of the ${\mathcal{D}}_\lambda$-module. Beilinson and Bernstein's main results are that $\Gamma$ is exact whenever $\lambda$ is integrally dominant, and an equivalence if, in addition, $\lambda$ is regular. Now let us assume that $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is real. Then the category ${\mathrm{Mod}}({\mathcal{D}}_\lambda)$ has a mixed Hodge module version ${\mathrm{MHM}}({\mathcal{D}}_\lambda)$. The general theory of mixed Hodge modules, constructed by Saito [@S1; @S2], is a vast generalization of classical Hodge theory. The objects are ${\mathcal{D}}$-modules equipped with several extra structures, such as a Hodge filtration $F_\bullet$ and a weight filtration $W_\bullet$. These satisfy remarkable strictness properties, as well as all the good functoriality properties of holonomic ${\mathcal{D}}$-modules. A key fact is that the irreducible ${\mathcal{D}}$-modules coming from intermediate extensions of unitary local systems all admit essentially unique mixed Hodge module structures. 
We direct the reader to §[2.2](#subsec:monodromic){reference-type="ref" reference="subsec:monodromic"} for an explanation of the twisted setting from our perspective. In the setting of the flag variety, we prove the following powerful refinements of Beilinson and Bernstein's results for twisted mixed Hodge modules. First, we have a refinement of the exactness theorem for dominant $\lambda$: **Theorem 1**. *Let $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ be real dominant and let ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_\lambda)$ with Hodge filtration $F_\bullet {\mathcal{M}}$. Then $${\mathrm{H}}^i({\mathcal{B}}, F_p{\mathcal{M}}) = 0 \quad \text{for $i > 0$ and all $p$}.$$ Hence, the functor $$\Gamma \colon {\mathrm{MHM}}({\mathcal{D}}_\lambda) \to {\mathrm{Mod}}_{\mathit{fil}}(U({\mathfrak{g}}))_{\chi_\lambda}$$ is filtered exact.* Theorem [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"} is restated as Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} in §[2](#sec:long intro){reference-type="ref" reference="sec:long intro"}, and proved in §[8](#sec:vanishing){reference-type="ref" reference="sec:vanishing"}. It was originally stated as a theorem in [@SV] without proof[^1]. We highlight one feature, which is typical of the theory: the condition that $\lambda$ be real dominant is semi-algebraic, while the weaker integral dominance condition in Beilinson and Bernstein's results is algebraic. This kind of semi-algebraic behavior of the Hodge filtration is one hint that Hodge modules are better equipped to detect unitarity than plain ${\mathcal{D}}$-modules; see also Theorem [Theorem 7](#thm:intro semi-continuity){reference-type="ref" reference="thm:intro semi-continuity"} below. The vanishing in Theorem [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"} is remarkably strong. 
To illustrate this, we note the following corollary, obtained by applying Theorem [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"} when ${\mathcal{M}}$ is the pushforward of a line bundle on a closed subvariety of ${\mathcal{B}}$. **Corollary 2**. *Let $X \subset {\mathcal{B}}$ be a smooth closed subvariety with normal bundle ${\mathcal{N}}_{X/{\mathcal{B}}}$ and canonical bundle $\omega_X$. Then for any ample line bundle ${\mathcal{L}}$ on ${\mathcal{B}}$, we have $${\mathrm{H}}^i(X, {\mathrm{Sym}}^p({\mathcal{N}}_{X/{\mathcal{B}}}) \otimes \omega_X \otimes {\mathcal{L}}) = 0 \quad \text{for all $i>0$ and all $p$}.$$* In the case of a closed orbit under a reductive subgroup $K \subset G$, this gives the following vanishing result for homogeneous vector bundles on the flag variety of $K$, which appears to be new. **Corollary 3**. *Let $K \subset G$ be a reductive subgroup and $B \subset G$ be a Borel subgroup such that $K \cap B \subset K$ is also a Borel. Then, for $\lambda \in {\mathbb{X}}^*(H)$ dominant for $G$, we have $${\mathrm{H}}^i\left(\frac{K}{K \cap B}, {\mathrm{Sym}}^p\left(\frac{{\mathfrak{g}}}{{\mathfrak{b}} + {\mathfrak{k}}}\right) \otimes {\mathcal{O}}(\lambda + \rho - 2\rho_K)\right) = 0 \quad \text{for all $i> 0$ and all $p$}.$$* Here we have identified homogeneous vector bundles on the flag variety $K/(K \cap B)$ of $K$ with the corresponding representations of $K \cap B$. We have no idea how to prove such a statement without appealing to Theorem [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"}, even in the cases that arise in the study of real groups. In the very special case of $G \subset G \times G$, we obtain a version of the main result of [@broer]. 
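As a sanity check on Corollary [Corollary 2](#cor:intro vanishing){reference-type="ref" reference="cor:intro vanishing"} in the simplest case (an illustration of ours, not taken from the text): for $G = {\mathrm{SL}}_2$ and $X = {\mathcal{B}} = {\mathbb{P}}^1$, the normal bundle vanishes and the statement reduces to a classical computation.

```latex
% Here X = \mathcal{B} = \mathbb{P}^1, so \mathcal{N}_{X/\mathcal{B}} = 0,
% hence \mathrm{Sym}^p(\mathcal{N}_{X/\mathcal{B}}) = 0 for p > 0 and
% \mathrm{Sym}^0(\mathcal{N}_{X/\mathcal{B}}) = \mathcal{O}_X. With
% \omega_{\mathbb{P}^1} = \mathcal{O}(-2) and \mathcal{L} = \mathcal{O}(k),
% k \geq 1 ample, the corollary asserts
\mathrm{H}^i\bigl(\mathbb{P}^1, \omega_{\mathbb{P}^1} \otimes \mathcal{O}(k)\bigr)
  = \mathrm{H}^i\bigl(\mathbb{P}^1, \mathcal{O}(k-2)\bigr) = 0
  \quad \text{for } i > 0 \text{ and } k \geq 1,
% which is the classical vanishing \mathrm{H}^1(\mathbb{P}^1, \mathcal{O}(d)) = 0
% for d \geq -1, i.e. Kodaira vanishing on \mathbb{P}^1.
```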
A key ingredient in the proof of Theorem [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"} is the following twisted version of Saito's Kodaira vanishing theorem for Hodge modules [@S2 Proposition 2.33]. This is the obvious generalization of Saito's result in the twisted setting, and has been independently proved by Christian Schnell and Ruijie Yang [@schnell-yang]. We give the statement here in its most pedestrian form---a more general version is given in the text as Theorem [Theorem 80](#thm:twisted kodaira){reference-type="ref" reference="thm:twisted kodaira"}. Let $X$ be a smooth projective variety and let ${\mathcal{L}}$ be a line bundle on $X$. Then to every $a \in {\mathbb{R}}$ we have the associated category of ${\mathcal{L}}^{a}$-twisted mixed Hodge modules on $X$, ${\mathrm{MHM}}_{{\mathcal{L}}^{a}}(X)$ ($={\mathrm{MHM}}_a(\tilde{X})$ for $\tilde{X} = {\mathcal{L}}^\times$, in the notation of §[2.2](#subsec:monodromic){reference-type="ref" reference="subsec:monodromic"}). For each ${\mathcal{M}}$ in this category and each $p \in {\mathbb{Z}}$, we have a complex $$\mathop{\mathrm{Gr}}_p^F {\mathrm{DR}}({\mathcal{M}}) = [\mathop{\mathrm{Gr}}_p^F {\mathcal{M}} \to \mathop{\mathrm{Gr}}_{p + 1}^F{\mathcal{M}} \otimes \Omega^1_X \to \cdots \to \mathop{\mathrm{Gr}}_{p + n}^F{\mathcal{M}} \otimes \Omega_X^n]$$ of coherent sheaves on $X$, concentrated in degrees $[-n, 0]$. Here $n = \dim X$. **Theorem 4**. *Assume that ${\mathcal{L}}$ is ample, $a > 0$ and ${\mathcal{M}} \in {\mathrm{MHM}}_{{\mathcal{L}}^a}(X)$. 
Then $${\mathbb{H}}^i(X, \mathop{\mathrm{Gr}}^F_p{\mathrm{DR}}({\mathcal{M}})) = 0 \quad \text{for $i > 0$ and all $p$}.$$* For example, setting $a = 1$, an ${\mathcal{L}}$-twisted Hodge module is the same thing as an untwisted Hodge module tensored with ${\mathcal{L}}$, so Theorem [Theorem 4](#thm:intro twisted kodaira){reference-type="ref" reference="thm:intro twisted kodaira"} recovers Saito's result in this case. Returning to localization theory for Hodge modules on the flag variety, we also have a Hodge analog of Beilinson and Bernstein's equivalence of categories for regular dominant $\lambda$. In the statement below, ${\mathrm{MHM}}^w(U({\mathfrak{g}}))_{\chi_\lambda}$ denotes the category of weak mixed Hodge $U({\mathfrak{g}})$-modules defined in §[2.6](#subsec:polarization localization){reference-type="ref" reference="subsec:polarization localization"}. **Theorem 5**. *Let $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ be real dominant and regular. Then the global sections functor $$\Gamma \colon {\mathrm{MHM}}({\mathcal{D}}_\lambda) \to {\mathrm{MHM}}^w(U({\mathfrak{g}}))_{\chi_\lambda}$$ is fully faithful.* We give a more precise statement, as Theorem [Theorem 30](#thm:full faithfulness){reference-type="ref" reference="thm:full faithfulness"} in §[2.6](#subsec:polarization localization){reference-type="ref" reference="subsec:polarization localization"}. The main ingredient (Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"}) is a general result on global generation of the Hodge filtration. The category ${\mathrm{MHM}}^w(U({\mathfrak{g}}))_{\chi_\lambda}$ contains the category of $U({\mathfrak{g}})$-modules equipped with (possibly infinite dimensional) mixed Hodge structures. The main conjecture in [@SV], in its most general form, would imply that the functor of Theorem [Theorem 5](#thm:intro full faithfulness){reference-type="ref" reference="thm:intro full faithfulness"} actually lands inside this subcategory. 
While we do not prove this statement here, we prove via a slightly different route that the main consequence for unitary representations of real groups still holds. Let us now explain the connection to the unitary dual of a real form $G_{\mathbb{R}}$ of $G$. Recall that to each admissible representation of $G_{\mathbb{R}}$ is associated a dense subspace, the corresponding Harish-Chandra $({\mathfrak{g}}, K)$-module. This is a complex vector space equipped with compatible algebraic actions of the Lie algebra ${\mathfrak{g}}$ (the complexification of ${\mathfrak{g}}_{\mathbb{R}} = {\mathrm{Lie}}(G_{\mathbb{R}})$) and of the complexification $K \subset G$ of a maximal compact subgroup $K_{\mathbb{R}} \subset G_{\mathbb{R}}$. The irreducible Harish-Chandra modules were completely classified by Harish-Chandra and Langlands. One of Harish-Chandra's main results is that the irreducible unitary representations of $G_{\mathbb{R}}$ are in bijection with the irreducible unitary Harish-Chandra modules; here we say that a Harish-Chandra module $V$ is *unitary* if it admits a positive definite $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form. So the problem of computing the unitary dual is reduced to the question of which irreducible Harish-Chandra modules are unitary. This can be reduced to the case of real infinitesimal character (see, for example, [@knapp1986book Chapter 16]), so we will consider only this case from now on. By Beilinson-Bernstein localization, each irreducible Harish-Chandra module $V$ with real infinitesimal character arises as the global sections of a unique irreducible $K$-equivariant ${\mathcal{D}}_\lambda$-module ${\mathcal{M}}$ on ${\mathcal{B}}$ with $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ real dominant. Moreover, all such ${\mathcal{D}}_\lambda$-modules lift to twisted Hodge modules in ${\mathrm{MHM}}({\mathcal{D}}_\lambda)$. 
So the Harish-Chandra module $V$ carries a canonical Hodge filtration $F_\bullet V = \Gamma(F_\bullet {\mathcal{M}})$. It is well-known and not difficult to determine which irreducible Harish-Chandra modules admit a non-degenerate $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form (see, e.g., Remark [Remark 33](#rmk:hermitian){reference-type="ref" reference="rmk:hermitian"}). We call such modules *Hermitian*. In the case of real infinitesimal character, these are precisely those Harish-Chandra modules $V$ that admit a Cartan involution $\theta \colon V \to V$, compatible with the Cartan involution on $G$. Our main theorem is the following result. **Theorem 6**. *Let $V$ be an irreducible Harish-Chandra module with real infinitesimal character as above, and suppose that $V$ is Hermitian. Then $V$ is unitary if and only if the Cartan involution $\theta \colon V \to V$ acts on $\mathop{\mathrm{Gr}}^F_p V$ with eigenvalue $(-1)^p$ for all $p$, or $(-1)^{p + 1}$ for all $p$.* Theorem [Theorem 6](#thm:intro unitarity criterion){reference-type="ref" reference="thm:intro unitarity criterion"} is an immediate consequence of the more general conjecture of [@SV]. We explain the statement in more detail in §[2.7](#subsec:intro harish-chandra){reference-type="ref" reference="subsec:intro harish-chandra"} (where we also restate it as Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"}) and give the proof in §[4](#sec:pf of unitarity){reference-type="ref" reference="sec:pf of unitarity"}. We remark here that the purpose of the involution is to relate the invariant Hermitian form with the natural polarization on the Hodge module, for which a more general result is true (see Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"}). 
Based on conversations with Schmid and the second named author, Adams, Trapa and Vogan [@ATV] observed that Theorem [Theorem 6](#thm:intro unitarity criterion){reference-type="ref" reference="thm:intro unitarity criterion"} follows rather formally from the methods of [@ALTV] assuming a number of conjectures about Hodge modules. We have proved some of these conjectures previously [@DV1; @DV2]; in this paper we prove the rest. Very roughly, the argument goes as follows: given an irreducible Harish-Chandra module $V$, one considers a deformation of $V$ to a tempered Harish-Chandra module. Tempered representations are known to be unitary on general grounds, so to calculate the signature of a Hermitian form on $V$ one only needs to know how the signature jumps at a certain discrete set of reducibility walls in the deformation. Theorem [Theorem 6](#thm:intro unitarity criterion){reference-type="ref" reference="thm:intro unitarity criterion"} is proved by checking that the result is true for tempered modules (which we did in [@DV2]) and that the Hodge filtration behaves in the same way as the Hermitian form under deformations and wall crossing. This last step rests on some general theory, which we develop in §§[3](#sec:deformations){reference-type="ref" reference="sec:deformations"}, [5](#sec:semi-continuity){reference-type="ref" reference="sec:semi-continuity"} and [6](#sec:hodge-lefschetz){reference-type="ref" reference="sec:hodge-lefschetz"}. We give the highlights here in the case of Harish-Chandra sheaves; this is a special case of a more general geometric context, which is explained in §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"}. The irreducible Harish-Chandra sheaves are indexed by tuples $(Q, \lambda, \gamma)$, where $j \colon Q \hookrightarrow {\mathcal{B}}$ is a $K$-orbit, $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ and $\gamma$ is an irreducible $K$-equivariant $\lambda$-twisted local system on $Q$. 
For each such tuple, we have the standard, irreducible and costandard ${\mathcal{D}}_\lambda$-modules $$j_!\gamma \twoheadrightarrow j_{!*}\gamma \hookrightarrow j_*\gamma.$$ The parameters $(Q, \lambda, \gamma)$ can be varied continuously to $(Q, \lambda + s\varphi, \gamma_s)$ for $s \in {\mathbb{R}}$ and $\varphi$ in an explicit real subspace of ${\mathfrak{h}}^*_{\mathbb{R}}$. Assuming that $\varphi$ is positive with respect to $Q$ in a precise sense (see §§[3.1](#subsec:deformation construction){reference-type="ref" reference="subsec:deformation construction"} and [4.3](#subsec:hc deformations){reference-type="ref" reference="subsec:hc deformations"}), we have $j_!\gamma_s = j_*\gamma_s = j_{!*}\gamma_s$ for $s$ outside a discrete set. For such a positive deformation, we show that the Hodge filtrations behave as follows. **Theorem 7**. *Let $\varphi \in {\mathfrak{h}}^*_{\mathbb{R}}$ be positive with respect to $Q$. Then the Hodge filtrations on $j_!\gamma_s$ (resp., $j_*\gamma_s$) are lower (resp., upper) semi-continuous in the sense that $$\mathop{\mathrm{Gr}}^F j_!\gamma_{s - \epsilon} \cong \mathop{\mathrm{Gr}}^F j_!\gamma_s \quad \text{and} \quad \mathop{\mathrm{Gr}}^F j_*\gamma_{s + \epsilon} \cong \mathop{\mathrm{Gr}}^F j_*\gamma_s$$ for $0 < \epsilon \ll 1$. 
In particular, the isomorphism class of $\mathop{\mathrm{Gr}}^F j_{!*}\gamma_s$ is locally constant in the region where $j_!\gamma_s = j_*\gamma_s = j_{!*}\gamma_s$.* When combined with Theorem [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"} and a generalization (Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}) of our earlier result on Jantzen forms [@DV1 Theorem 1.2], Theorem [Theorem 7](#thm:intro semi-continuity){reference-type="ref" reference="thm:intro semi-continuity"} gives enough control over deformations and wall-crossing to push through the proof of Theorem [Theorem 6](#thm:intro unitarity criterion){reference-type="ref" reference="thm:intro unitarity criterion"}. Theorem [Theorem 7](#thm:intro semi-continuity){reference-type="ref" reference="thm:intro semi-continuity"} is restated in a more general context in §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"} as Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}. The proof is given in §[5](#sec:semi-continuity){reference-type="ref" reference="sec:semi-continuity"}. We believe that the general statement Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} should be of interest far beyond the scope of the present work: for example, it provides us with a short and very transparent proof of the twisted Kodaira vanishing Theorem [Theorem 4](#thm:intro twisted kodaira){reference-type="ref" reference="thm:intro twisted kodaira"} (see §[7](#sec:twisted kodaira){reference-type="ref" reference="sec:twisted kodaira"}). 
We conclude this introduction by highlighting one more result, which we prove en route to Theorems [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"} and [Theorem 5](#thm:intro full faithfulness){reference-type="ref" reference="thm:intro full faithfulness"}. The unitarity criterion reduces the problem of determining unitarity of a Harish-Chandra module $V$ to the knowledge of the associated graded of its Hodge filtration. Thus, the key to unlocking the unitary dual is to understand these Hodge filtrations. In the case of tempered Harish-Chandra sheaves, we gave an explicit formula for the Hodge filtration in [@DV2]. We add one more example where the associated graded of the Hodge filtration is known explicitly in §[8.2](#subsec:big projective){reference-type="ref" reference="subsec:big projective"}. **Theorem 8**. *Let $\Xi$ denote the projective cover of the irreducible Harish-Chandra sheaf for the complex pair $({\mathfrak{g}} \times {\mathfrak{g}}, G)$ associated with the closed orbit at infinitesimal character $0$. Then $$\mathop{\mathrm{Gr}}^F \Xi \cong {\mathcal{O}}_{{\mathrm{St}}},$$ where $${\mathrm{St}} = \tilde{{\mathfrak{g}}}^* \times_{{\mathfrak{g}}^*} \tilde{{\mathcal{N}}}^*$$ is the Steinberg scheme.* The second named author learned this fact from Roman Bezrukavnikov many years ago. We give a more precise statement (as Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"}) and proof in §[8.2](#subsec:big projective){reference-type="ref" reference="subsec:big projective"}. We simply note here that the asymmetry in the fiber product defining ${\mathrm{St}}$ is an artefact of the category in which we chose to take the projective cover. The proof we give for Theorem [Theorem 8](#thm:intro xi hodge){reference-type="ref" reference="thm:intro xi hodge"} is a version of the same argument we used in [@DV2] in the case of tempered modules for split groups. 
In fact, one can generalize both to give a formula for the Hodge filtration on the projective cover of the spherical tempered sheaf for any quasi-split real group. To make this statement precise (in particular, to specify the precise category in which to take the projective cover) would take us too far afield, so we have chosen to omit it from the present work. ## Conventions Throughout this paper, all varieties are assumed quasi-projective and defined over the complex numbers. When working with ${\mathcal{D}}$-modules, we will always take these to be left ${\mathcal{D}}$-modules unless otherwise specified. When working with mixed Hodge modules, we will always mean the complex Hodge modules of, say, [@SS]. See also Notation [Notation 13](#notation:functors){reference-type="ref" reference="notation:functors"} for our notation for various pullbacks/pushforwards, and §[2.3](#subsec:flag variety){reference-type="ref" reference="subsec:flag variety"} for our conventions for reductive groups and their flag varieties. ## Outline of the paper The paper is structured as follows. In §[2](#sec:long intro){reference-type="ref" reference="sec:long intro"}, we recall some necessary background (mainly concerning Hodge modules) before carefully stating our main theorems in precise form. At the end of the section, we also give an illustrative example of our unitarity criterion in action by using it to reprove Vogan's theorem on unitarity of cohomological induction [@vogan-annals Theorem 1.3]. In §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"}, we state our results on deformations and wall crossing for mixed Hodge modules, which we use in §[4](#sec:pf of unitarity){reference-type="ref" reference="sec:pf of unitarity"} to give the proof of the unitarity criterion. 
We then return to the proofs of the wall crossing results in §§[5](#sec:semi-continuity){reference-type="ref" reference="sec:semi-continuity"}--[6](#sec:hodge-lefschetz){reference-type="ref" reference="sec:hodge-lefschetz"}. In §[7](#sec:twisted kodaira){reference-type="ref" reference="sec:twisted kodaira"}, we state and prove the twisted Kodaira vanishing theorem. In §[8](#sec:vanishing){reference-type="ref" reference="sec:vanishing"}, we address the main results on localization for Hodge modules: here we give the proof of Theorem [Theorem 1](#thm:intro filtered exactness){reference-type="ref" reference="thm:intro filtered exactness"} and most of Theorem [Theorem 5](#thm:intro full faithfulness){reference-type="ref" reference="thm:intro full faithfulness"}, as well as our Hodge filtration calculation Theorem [Theorem 8](#thm:intro xi hodge){reference-type="ref" reference="thm:intro xi hodge"}. Finally, in §[9](#sec:integral){reference-type="ref" reference="sec:integral"} we prove a technical proposition required to complete the proof of Theorem [Theorem 5](#thm:intro full faithfulness){reference-type="ref" reference="thm:intro full faithfulness"} on full faithfulness. ## Acknowledgements {#acknowledgements .unnumbered} We would like to thank Jeff Adams, Wilfried Schmid, Peter Trapa, David Vogan, Ruijie Yang and Xinwen Zhu for helpful conversations. # Background and statements of the main theorems {#sec:long intro} In this section, we give a more detailed overview of the main results of this paper, with precise statements. We begin with a general discussion of mixed Hodge modules in §[2.1](#subsec:mhm){reference-type="ref" reference="subsec:mhm"} and the monodromic and twisted versions of this theory in §[2.2](#subsec:monodromic){reference-type="ref" reference="subsec:monodromic"}. 
We then specialize to the case of the flag variety in §[2.3](#subsec:flag variety){reference-type="ref" reference="subsec:flag variety"} and recall the main results of Beilinson-Bernstein localization theory in §[2.4](#subsec:beilinson-bernstein){reference-type="ref" reference="subsec:beilinson-bernstein"}. We then state our main results on localization for Hodge modules in §§[2.5](#subsec:hodge localization){reference-type="ref" reference="subsec:hodge localization"}--[2.6](#subsec:polarization localization){reference-type="ref" reference="subsec:polarization localization"}. In §[2.7](#subsec:intro harish-chandra){reference-type="ref" reference="subsec:intro harish-chandra"}, we explain how our results specialize to the Harish-Chandra setting and state our main theorem on unitary representations and Hodge theory. Finally, in §[2.8](#subsec:cohomological induction){reference-type="ref" reference="subsec:cohomological induction"}, we give an illustrative application of our unitarity criterion, a short proof of a classic theorem of Vogan [@vogan-annals] on cohomological induction of unitary representations. ## Mixed Hodge modules {#subsec:mhm} In this subsection, we recall some aspects of the theory of mixed Hodge modules, following the treatment in [@SS]. This is (of course) far from a comprehensive introduction; our aim is rather to give an overview of the ideas of the subject as it pertains to the present paper. In particular, we have omitted any mention of ${\mathbb{Q}}$-structures (we consider only complex Hodge structures and their Hodge module versions) as well as any real discussion of classical Hodge theory. For our purposes, the starting point is the notion of polarized Hodge structure: **Definition 9**. Let $w \in {\mathbb{Z}}$. 
A *pure Hodge structure of weight $w$* is a finite dimensional complex vector space $V$ equipped with a *Hodge decomposition* $$V = \bigoplus_{p + q = w} V^{p, q}.$$ A *polarized Hodge structure of weight $w$* is a pure Hodge structure $V$ equipped with a Hermitian form $S \colon V \otimes \overline{V} \to {\mathbb{C}}$ (the *polarization*) such that 1. the Hodge decomposition is orthogonal with respect to $S$, and 2. the form $S|_{V^{p, q}}$ is $(-1)^q$-definite. If the polarization $S$ and the weight $w$ are fixed, then we can recover the Hodge decomposition from the (increasing) *Hodge filtration* $$F_p V := \bigoplus_{p' \geq -p} V^{p', w - p'}$$ via the formula $$V^{p, w - p} = F_{-p} V \cap (F_{-p - 1} V)^\perp.$$ Thus, a polarized Hodge structure is captured by the data $(V, F_\bullet, S, w)$. The next step towards mixed Hodge modules is to consider families of polarized Hodge structures over a smooth (say quasi-projective) variety $X$. The basic structure is an algebraic vector bundle ${\mathcal{V}}$ on $X$ equipped with a flat connection $$\nabla \colon {\mathcal{V}} \to {\mathcal{V}} \otimes \Omega^1_X$$ with regular singularities at infinity. We equip this with an additional Hodge filtration, which is a filtration $F_\bullet {\mathcal{V}}$ by algebraic sub-bundles satisfying the Griffiths transversality condition $$\nabla(F_p {\mathcal{V}}) \subset F_{p + 1}{\mathcal{V}} \otimes \Omega^1_X,$$ and a polarization, which is an ${\mathcal{O}}_{X} \otimes \overline{{\mathcal{O}}}_{X}$-linear Hermitian pairing $$S \colon {\mathcal{V}} \otimes \overline{{\mathcal{V}}} \to {\mathcal{C}}^\infty_X$$ satisfying the flatness condition $$\operatorname{d} S(v, \overline{v'}) = S(\nabla v, \overline{v'}) + S(v, \overline{\nabla v'})$$ for $v, v'$ local sections of ${\mathcal{V}}$. Here ${\mathcal{C}}^\infty_X$ is the sheaf of smooth complex-valued functions on $X$.[^2] **Definition 10**. 
We say that $({\mathcal{V}}, F_\bullet {\mathcal{V}}, S)$ as above forms a *polarized variation of Hodge structure of weight $w$* if, for every $x \in X$, the fiber $({\mathcal{V}}_x, F_\bullet {\mathcal{V}}_x, S_x)$ at $x$ is a polarized Hodge structure of weight $w$. **Remark 11**. We have given the definition here in terms of algebraic vector bundles, as this is the most convenient for our purposes; in other contexts, the holomorphic version of the theory is more natural. On an algebraic variety, however, the two notions of polarized variation of Hodge structure coincide. At the level of connections, analytification $({\mathcal{V}}, \nabla) \mapsto ({\mathcal{V}}^{\mathit{an}}, \nabla^{\mathit{an}})$ defines an equivalence of categories between algebraic vector bundles ${\mathcal{V}}$ on $X$ with a flat algebraic connection $\nabla$ with regular singularities at infinity, and holomorphic vector bundles ${\mathcal{V}}^{\mathit{an}}$ on $X$ equipped with a flat holomorphic connection $\nabla^{\mathit{an}}$. At the level of Hodge filtrations, it is a non-trivial fact that if $({\mathcal{V}}^{\mathit{an}}, \nabla^{\mathit{an}})$ underlies a polarized variation of Hodge structure, then the Hodge filtration $F_\bullet {\mathcal{V}}^{\mathit{an}}$ is necessarily the analytification of a filtration $F_\bullet {\mathcal{V}}$ by algebraic sub-bundles. This last fact is due to Schmid [@schmid] in the case of quasi-unipotent monodromy, while in the general case it is essentially due to Takuro Mochizuki; see, for example, [@deng]. The basic objects in mixed Hodge module theory are singular versions of polarized variations of Hodge structure: the polarized Hodge modules. The idea is that to get good functoriality, one should enlarge the theory to include degenerations as follows. First, the singular version of a flat vector bundle ${\mathcal{V}}$ is a regular holonomic ${\mathcal{D}}$-module ${\mathcal{M}}$. 
As in the case of flat vector bundles, one can work in either the algebraic or the analytic category: in this setting, the category of algebraic regular holonomic ${\mathcal{D}}$-modules embeds as a full subcategory of the analytic category. For our purposes, the algebraic category will be most convenient, so we will continue to work in that context. For the Hodge filtration, one now takes a filtration $F_\bullet {\mathcal{M}}$ by coherent ${\mathcal{O}}_X$-submodules such that $$\label{eq:griffiths d-module} F_p {\mathcal{D}}_X \cdot F_q {\mathcal{M}} \subset F_{p + q} {\mathcal{M}}$$ for $p, q \in {\mathbb{Z}}$. Here $F_\bullet {\mathcal{D}}_X$ is the filtration by order of differential operators. One easily checks that [\[eq:griffiths d-module\]](#eq:griffiths d-module){reference-type="eqref" reference="eq:griffiths d-module"} is equivalent to the Griffiths transversality condition when ${\mathcal{M}}$ is a vector bundle. Finally, for the polarization, to get a good theory one expands the target from the sheaf ${\mathcal{C}}^\infty_X$ of smooth functions to the sheaf ${{\mathcal{D}}\mathit{b}}_X$ of distributions on $X$, so that the polarization becomes a ${\mathcal{D}}_X \otimes \overline{{\mathcal{D}}}_X$-linear homomorphism $$S \colon {\mathcal{M}} \otimes \overline{{\mathcal{M}}} \to {{\mathcal{D}}\mathit{b}}_X\,;$$ here we view distributions as continuous duals of smooth compactly supported top-dimensional differential forms. Such pairings to distributions were first considered by Kashiwara [@kashiwara]. In order for the data $({\mathcal{M}}, F_\bullet{\mathcal{M}}, S)$ to define a polarized Hodge module, one needs to impose a condition, analogous to Definition [Definition 10](#defn:polarized vhs){reference-type="ref" reference="defn:polarized vhs"}, that we recover a polarized Hodge structure upon specialization to a point. 
At smooth points (i.e., points near which ${\mathcal{M}}$ is a vector bundle) this can be done by "taking the fiber" naively as before, but this operation ceases to make sense at singularities. The key idea of Saito is to impose conditions on nearby and vanishing cycles. We direct the reader to, for example, [@SS Chapter 14] for the precise definition of polarized Hodge module. A key consequence of the definition is that the polarization is *perfect*, i.e., it induces an isomorphism $${\mathcal{M}} \cong {\mathcal{M}}^h$$ at the level of ${\mathcal{D}}$-modules, where ${\mathcal{M}}^h$ (the *Hermitian dual* of ${\mathcal{M}}$) is the regular holonomic ${\mathcal{D}}$-module characterized by $$({\mathcal{M}}^h)^{\mathit{an}} = {{\mathcal{H}}{\mathit{om}}}_{\overline{{\mathcal{D}}}_X}(\overline{{\mathcal{M}}}, {{\mathcal{D}}\mathit{b}}_X);$$ see also [@DV1 §3.1]. In our study, the polarized Hodge modules are in some sense the main objects of interest: roughly speaking, they are analogous to (although much more general than) unitary representations. Like unitary representations, however, they do not form a very good category on their own. The main idea in mixed Hodge theory is to embed the polarized objects into a much larger abelian category with better formal properties. First, one relaxes the structure slightly to obtain an abelian category of pure objects (aka polarizable, in this context). There are a few ways to set this up; the most convenient for us is the language of "triples" (cf., [@SS]). 
In place of the polarization $S$ pairing ${\mathcal{M}}$ with itself, we introduce a second filtered ${\mathcal{D}}$-module $({\mathcal{M}}', F_\bullet{\mathcal{M}}')$ and a perfect pairing $${\mathfrak{s}} \colon {\mathcal{M}} \otimes \overline{{\mathcal{M}}'} \to {{\mathcal{D}}\mathit{b}}_X.$$ We say that the data $(({\mathcal{M}}, F_\bullet), ({\mathcal{M}}', F_\bullet), {\mathfrak{s}})$ defines a *polarizable Hodge module of weight $w$* if there exists a filtered isomorphism $\phi \colon {\mathcal{M}}' \cong {\mathcal{M}}(w)$ (where $F_p{\mathcal{M}}(w) := F_{p - w}{\mathcal{M}}$) such that $({\mathcal{M}}, F_\bullet, S)$ is a polarized Hodge module of weight $w$ with $S = {\mathfrak{s}}(\cdot, \overline{\phi(\cdot)})$. Note that if ${\mathcal{M}}$ (equivalently ${\mathcal{M}}'$) is irreducible, then the polarization is unique up to a positive scalar (if it exists). For example, when $X$ is a point, the polarizable Hodge modules are precisely the pure Hodge structures: if $V$ and $V'$ are finite dimensional vector spaces with finite filtrations $F_\bullet$ and ${\mathfrak{s}}$ is a perfect sesquilinear pairing between them, then the triple $((V, F_\bullet), (V', F_\bullet), {\mathfrak{s}})$ defines a polarizable Hodge module of weight $w$ on a point if and only if $$V = \bigoplus_{p + q = w} V^{p, q},$$ where $$V^{p, q} = F_{-p} V \cap \bar{F}_{-q}V, \quad \mbox{for} \quad \bar{F}_{-q} V := (F_{q - 1} V')^\perp.$$ The data $((V, F_\bullet), (V', F_\bullet), {\mathfrak{s}})$ are of course equivalent to the more traditional bi-filtered vector space $(V, F_\bullet, \bar{F}_\bullet)$. Finally, the abelian category of mixed Hodge modules is built by allowing certain extensions of pure objects of different weights. 
Over a point, one obtains the category of *mixed Hodge structures*; that is, tuples $(V, F_\bullet, \bar{F}_\bullet, W_\bullet)$, where $V$ is a finite dimensional vector space equipped with finite increasing filtrations $F_\bullet$, $\bar{F}_\bullet$, $W_\bullet$ such that $(\mathop{\mathrm{Gr}}^W_w V, F_\bullet, \bar{F}_\bullet)$ is a pure (=polarizable) Hodge structure of weight $w$ for all $w$. One can equivalently write this definition in terms of the data $(V, V', {\mathfrak{s}})$. Over a general smooth variety, a *mixed Hodge module* is specified by a tuple $$(({\mathcal{M}}, F_\bullet, W_\bullet), ({\mathcal{M}}', F_\bullet, W_\bullet), {\mathfrak{s}}),$$ where $({\mathcal{M}}, F_\bullet)$, $({\mathcal{M}}', F_\bullet)$ and ${\mathfrak{s}}$ are as before, and $W_\bullet{\mathcal{M}}$, $W_\bullet{\mathcal{M}}'$ are finite filtrations by ${\mathcal{D}}$-submodules such that $W_w {\mathcal{M}}' = (W_{-w - 1}{\mathcal{M}})^\perp$ and $$((\mathop{\mathrm{Gr}}^W_w{\mathcal{M}}, F_\bullet), (\mathop{\mathrm{Gr}}^W_{-w}{\mathcal{M}}', F_\bullet), {\mathfrak{s}})$$ is a polarizable Hodge module of weight $w$ for all $w \in {\mathbb{Z}}$. The filtration $W_\bullet$ is called the *weight filtration*. Over a point this is the full definition, but on a general $X$ one needs to impose further conditions on the weight filtration near singularities (again relating to nearby and vanishing cycles). For example, when ${\mathcal{M}}$ and ${\mathcal{M}}'$ are vector bundles, a tuple $({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}})$ as above is the same thing as a variation of mixed Hodge structure, and the condition that $({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}})$ be a mixed Hodge module is the condition that this variation of mixed Hodge structure be admissible at infinity [@S2 Theorem 3.27] [@kashiwara2]. **Remark 12**. 
If $(({\mathcal{M}}, F_\bullet, W_\bullet), ({\mathcal{M}}', F_\bullet, W_\bullet), {\mathfrak{s}})$ is a mixed Hodge module, the ${\mathcal{D}}$-module ${\mathcal{M}}'$ plays a rather auxiliary role: its sole purpose is to carry the filtration $F_\bullet {\mathcal{M}}'$. While this is an important part of the theory (e.g., without it one would not have an abelian category), we will mostly be interested in the ${\mathcal{D}}$-module ${\mathcal{M}}$ equipped with its Hodge and weight filtrations, and sometimes a polarization. We will therefore often suppress the rest of the triple, and write simply ${\mathcal{M}}$ in place of $({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}})$. As an illustrative example, the conventions are such that $({\mathcal{O}}_X, S)$ is a polarized Hodge module of weight $\dim X$, where $$F_p {\mathcal{O}}_X = \begin{cases} 0, & \mbox{if $p < 0$}, \\ {\mathcal{O}}_X, &\mbox{otherwise}\end{cases}$$ and $$S(f, \bar{g}) = f\bar{g}.$$ Hence, the triple $(({\mathcal{O}}_X, F_\bullet), ({\mathcal{O}}_X, F_{\bullet - \dim X}), {\mathfrak{s}} = S)$ is a pure Hodge module of weight $\dim X$. The category ${\mathrm{MHM}}(X)$ of mixed Hodge modules on a variety $X$ has many remarkable properties. For example: 1. The category ${\mathrm{MHM}}(X)$ is abelian. 2. Every pure object in ${\mathrm{MHM}}(X)$ is semi-simple. 3. The forgetful functor ${\mathrm{MHM}}(X) \to {\mathrm{Mod}}({\mathcal{D}}_X)_{rh}$ is exact and faithful and sends irreducibles to irreducibles. Here ${\mathrm{Mod}}({\mathcal{D}}_X)_{rh}$ is the category of regular holonomic ${\mathcal{D}}$-modules on $X$. 4. Every morphism in ${\mathrm{MHM}}(X)$ is strict with respect to the weight and Hodge filtrations. 
Hence, if $$0 \to {\mathcal{M}} \to {\mathcal{N}} \to {\mathcal{P}} \to 0$$ is an exact sequence in ${\mathrm{MHM}}(X)$, then the sequences $$0 \to W_w{\mathcal{M}} \to W_w{\mathcal{N}} \to W_w{\mathcal{P}} \to 0$$ and $$0 \to F_p {\mathcal{M}} \to F_p {\mathcal{N}} \to F_p {\mathcal{P}} \to 0$$ are also exact. 5. If ${\mathcal{M}} \in {\mathrm{MHM}}(X)$, then $\mathop{\mathrm{Gr}}^F{\mathcal{M}}$ is a coherent $\mathop{\mathrm{Gr}}^F{\mathcal{D}}_X = {\mathcal{O}}_{T^*X}$-module. It is Cohen-Macaulay of dimension $\dim X$, i.e., we have $${{\mathcal{E}}{\mathit{xt}}}_{{\mathcal{O}}_{T^*X}}^i(\mathop{\mathrm{Gr}}^F{\mathcal{M}}, {\mathcal{O}}_{T^*X}) = 0 \quad \mbox{for $i \neq \dim X$}.$$ Another fundamental part of the theory is that the standard six functor operations for regular holonomic ${\mathcal{D}}$-modules (or equivalently, for perverse sheaves) lift to mixed Hodge modules. First, we have the operation of linear duality. This is an anti-equivalence $${\mathbb{D}} \colon {\mathrm{MHM}}(X)^{\mathit{op}} \to {\mathrm{MHM}}(X)$$ given on the underlying filtered ${\mathcal{D}}$-modules by the filtered derived hom $${\mathbb{D}}({\mathcal{M}}, F_\bullet) = {\mathrm{R}}{{\mathcal{H}}{\mathit{om}}}_{({\mathcal{D}}_X, F_\bullet)}(({\mathcal{M}}, F_\bullet), ({\mathcal{D}}_X \otimes \omega_X^{-1}, F_{\bullet - 2\dim X}))[\dim X].$$ Note that the Cohen-Macaulay property of $\mathop{\mathrm{Gr}}^F{\mathcal{M}}$ is equivalent to this filtered complex being strict and concentrated in degree zero. 
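To make the conventions in the duality formula concrete, here is our own unwinding over a point (a sketch, not taken from the text; it assumes the standard filtration on a filtered $\operatorname{Hom}$, namely $F_p\operatorname{Hom} = \{\phi : \phi(F_q) \subset F_{p+q}\}$):

```latex
% Over a point we have D_X = C with F_p D_X = C for p >= 0, and both the shift
% [dim X] and the twist F_{• - 2 dim X} disappear, so the formula reads
\mathbb{D}(V, F_\bullet) = \operatorname{Hom}_{\mathbb{C}}(V, \mathbb{C}),
\qquad
F_p\,\mathbb{D}(V) = \{ \phi \mid \phi(F_{-p-1} V) = 0 \} = (F_{-p-1} V)^{\perp}.
% If V is pure of weight w, then
F_p\,\mathbb{D}(V)
  = \Big( \bigoplus_{p' \geq p + 1} V^{p', w - p'} \Big)^{\perp}
  = \bigoplus_{p' \leq p} \big( V^{p', w - p'} \big)^{*},
% which is the Hodge filtration of a pure structure of weight -w with
% (D(V))^{-p, -q} = (V^{p, q})^*, as one expects of duality.
```

In particular, with these conventions ${\mathbb{D}}$ sends pure objects of weight $w$ to pure objects of weight $-w$.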
Since we are working with complex coefficients, we also have the related operation of Hermitian duality on ${\mathrm{MHM}}(X)$, given in terms of triples by $$({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}})^h = ({\mathcal{M}}', {\mathcal{M}}, {\mathfrak{s}}^h),$$ where ${\mathfrak{s}}^h$ is the transpose of ${\mathfrak{s}}$, $${\mathfrak{s}}^h(m', \overline{m}) = \overline{{\mathfrak{s}}(m, \overline{m'})}.$$ Of course, $(-)^h$ commutes with the forgetful functor since ${\mathfrak{s}}$ identifies ${\mathcal{M}}'$ with the Hermitian dual of ${\mathcal{M}}$ as a ${\mathcal{D}}$-module. Second, given two smooth varieties $X$ and $Y$ equipped with mixed Hodge modules ${\mathcal{M}}$ and ${\mathcal{N}}$, the external tensor product ${\mathcal{M}} \boxtimes {\mathcal{N}}$ is naturally a mixed Hodge module on $X \times Y$. Finally, if $f \colon X \to Y$ is a morphism, then we have functors $$f_*, f_! \colon {\mathrm{D}}^b{\mathrm{MHM}}(X) \to {\mathrm{D}}^b {\mathrm{MHM}}(Y)$$ and $$f^*, f^! \colon {\mathrm{D}}^b{\mathrm{MHM}}(Y) \to {\mathrm{D}}^b {\mathrm{MHM}}(X)$$ such that $f^*$ is left adjoint to $f_*$ and $f^!$ is right adjoint to $f_!$. The six operations $({\mathbb{D}}, \boxtimes, f_*, f^*, f_!, f^!)$ are related by a number of standard axioms (due to Grothendieck) that we do not recall here. The main upshot for us is that any ${\mathcal{D}}$-module constructed functorially from a polarizable variation of Hodge structure is endowed with a canonical mixed Hodge module structure. **Notation 13**. In this paper, we will reserve the notation $f_*, f_!, f^*, f^!$ for the above functors coming from the six functor formalism for mixed Hodge modules or regular holonomic ${\mathcal{D}}$-modules. For the usual pushforward of sheaves (or ${\mathcal{O}}$-modules) along a morphism $f \colon X \to Y$, we will write $f_{\mathpalette\bigcdot@{.5}}$. We write $f^{-1}$ for the pullback of sheaves and $f^{\mathpalette\bigcdot@{.5}}$ for the pullback of ${\mathcal{O}}$-modules. 
Finally, when the morphism $f$ is smooth of relative dimension $d$, we write $$f^\circ := f^*[d] = f^![-d] \colon {\mathrm{Mod}}({\mathcal{D}}_Y)_{rh} \to {\mathrm{Mod}}({\mathcal{D}}_X)_{rh}$$ and $$f^\circ = f^*[d] = f^!(-d)[-d] \colon {\mathrm{MHM}}(Y) \to {\mathrm{MHM}}(X),$$ where $(-d)$ denotes a Tate twist. The conventions are such that $f^\circ {\mathcal{M}} = f^{\mathpalette\bigcdot@{.5}}{\mathcal{M}}$ as ${\mathcal{O}}_X$-modules, with Hodge and weight filtrations given by $$F_pf^\circ {\mathcal{M}} = f^{\mathpalette\bigcdot@{.5}}F_p{\mathcal{M}} \quad \mbox{and} \quad W_wf^\circ {\mathcal{M}} = f^{\mathpalette\bigcdot@{.5}}W_{w - d}{\mathcal{M}}.$$ For example, we have $f^\circ {\mathcal{O}}_Y = {\mathcal{O}}_X$ as mixed Hodge modules. ## The monodromic setting {#subsec:monodromic} For applications to representation theory, we need to work not only with the sheaf ${\mathcal{D}}_X$ of differential operators on a variety $X$, but also with certain sheaves of twisted differential operators and their modules. In this subsection, we recall the definitions from [@BB2] and explain how the theory of mixed Hodge modules extends to this setting. In order to transfer standard ${\mathcal{D}}$-module (and mixed Hodge module) theory to the twisted setting, it is convenient to introduce an auxiliary space $\tilde{X} \to X$ and define our twisted objects in terms of untwisted ${\mathcal{D}}$-modules on $\tilde{X}$. This should be regarded mainly as a formal device, however: we explain at the end of the subsection how to view these as modules over a sheaf of rings on the original space $X$. We will often adopt this latter point of view when convenient. The starting point is the notion of weak equivariance for ${\mathcal{D}}$-modules. **Definition 14**. Let $\tilde{X}$ be a smooth variety equipped with a left action of an algebraic group $H$. 
A *weakly $H$-equivariant ${\mathcal{D}}$-module on $\tilde{X}$* is a ${\mathcal{D}}$-module ${\mathcal{M}}$ on $\tilde{X}$ equipped with an action of $H$ as an ${\mathcal{O}}_{\tilde{X}}$-module such that the map ${\mathcal{D}}_{\tilde X} \otimes {\mathcal{M}} \to {\mathcal{M}}$ is $H$-equivariant. The left $H$-action on $\tilde{X}$ naturally induces a right $H$-action on ${\mathcal{O}}_{\tilde{X}}$. This action differentiates to an algebra homomorphism $$i \colon U({\mathfrak{h}})^{\mathit{op}} \to {\mathcal{D}}_{\tilde{X}},$$ for ${\mathfrak{h}} = {\mathrm{Lie}}(H)$, restricting to the map ${\mathfrak{h}} \to {\mathcal{T}}_{\tilde{X}}$ given by the pointwise derivative of the action map. Here $(-)^{\mathit{op}}$ denotes the opposite algebra structure. For ${\mathcal{M}}$ weakly $H$-equivariant, we may also differentiate the (right) $H$-action on ${\mathcal{M}}$ to obtain a $U({\mathfrak{h}})^{\mathit{op}}$-action $$\begin{aligned} U({\mathfrak{h}})^{\mathit{op}} \otimes {\mathcal{M}} &\to {\mathcal{M}} \\ h \otimes m &\mapsto h \cdot m.\end{aligned}$$ An easy calculation shows that the difference $$\label{eq:weak difference} \begin{aligned} {\mathfrak{h}} \otimes {\mathcal{M}} &\to {\mathcal{M}} \\ h \otimes m &\mapsto hm := i(h)m - h \cdot m \end{aligned}$$ defines an action of $U({\mathfrak{h}})^{\mathit{op}}$ on ${\mathcal{M}}$ commuting with the action of ${\mathcal{D}}_{\tilde{X}}$. **Definition 15**. We say that a weakly $H$-equivariant ${\mathcal{M}}$ as above is *strongly $H$-equivariant* if the action [\[eq:weak difference\]](#eq:weak difference){reference-type="eqref" reference="eq:weak difference"} is trivial. 
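For instance, if $H$ acts trivially on $\tilde{X}$, then $i$ vanishes on ${\mathfrak{h}}$, so [\[eq:weak difference\]](#eq:weak difference){reference-type="eqref" reference="eq:weak difference"} reduces to $$hm = i(h)m - h \cdot m = -h \cdot m,$$ and strong equivariance forces the derivative of the $H$-action on ${\mathcal{M}}$ to vanish, hence (for $H$ connected) forces the action itself to be trivial. At the other extreme, for any action of $H$ on $\tilde{X}$, the structure sheaf ${\mathcal{O}}_{\tilde{X}}$ with its natural $H$-action is strongly $H$-equivariant, since $i(h)$ is by definition the derivative of the $H$-action on functions.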
It is a standard exercise to check that a weakly $H$-equivariant ${\mathcal{D}}$-module ${\mathcal{M}}$ is strongly $H$-equivariant if and only if the isomorphism of ${\mathcal{O}}$-modules $$a^{\mathpalette\bigcdot@{.5}} {\mathcal{M}} = a^\circ {\mathcal{M}} \cong {\mathcal{O}}_H \boxtimes {\mathcal{M}}$$ defining the $H$-action is an isomorphism of ${\mathcal{D}}_{H \times \tilde{X}}$-modules. Here $a \colon H \times \tilde{X} \to \tilde{X}$ is the action map and $a^{\mathpalette\bigcdot@{.5}}$ denotes the pullback of ${\mathcal{O}}$-modules. So strong equivariance is defined "internally" to the category of ${\mathcal{D}}$-modules (e.g., it corresponds to the natural notion of equivariance for perverse sheaves under the Riemann-Hilbert correspondence), while weak equivariance is not. Let us now specialize to the case where $H$ is a torus. In this case, since $H$ is commutative, we have $U({\mathfrak{h}})^{\mathit{op}} = U({\mathfrak{h}}) = S({\mathfrak{h}})$. We make the following definition. **Definition 16**. Let ${\mathcal{M}}$ be a weakly $H$-equivariant ${\mathcal{D}}$-module on $\tilde{X}$ and fix $\lambda \in {\mathfrak{h}}^*$. We say that ${\mathcal{M}}$ is *$\lambda$-twisted* (resp., *$\lambda$-monodromic*) if the ${\mathfrak{h}}$-action [\[eq:weak difference\]](#eq:weak difference){reference-type="eqref" reference="eq:weak difference"} is such that $h - \lambda(h)$ acts by zero (resp., nilpotently) on ${\mathcal{M}}$ for all $h \in {\mathfrak{h}}$. We say that ${\mathcal{M}}$ is *monodromic* if it is a direct sum of its generalized eigenspaces under [\[eq:weak difference\]](#eq:weak difference){reference-type="eqref" reference="eq:weak difference"}. 
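For instance (a standard example, with the signs depending on the chosen equivariant structure), take $\tilde{X} = H = {\mathbb{C}}^\times$ acting on itself by translation, with coordinate $t$, and fix $\lambda \in {\mathbb{C}}$. The ${\mathcal{D}}$-module $${\mathcal{D}}_{{\mathbb{C}}^\times}/{\mathcal{D}}_{{\mathbb{C}}^\times}(t\partial_t - \lambda),$$ an algebraic incarnation of "${\mathcal{O}} \cdot t^\lambda$", is $\lambda$-twisted, while $${\mathcal{D}}_{{\mathbb{C}}^\times}/{\mathcal{D}}_{{\mathbb{C}}^\times}(t\partial_t - \lambda)^2,$$ spanned over ${\mathcal{O}}$ by "$t^\lambda$" and "$t^\lambda \log t$", is $\lambda$-monodromic but not $\lambda$-twisted: here $t\partial_t - \lambda$ acts nilpotently (with square zero) but not by zero. For $\lambda = 0$ the first module is ${\mathcal{O}}_{{\mathbb{C}}^\times}$, recovering the strongly equivariant case.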
We write $${\mathrm{Mod}}_\lambda({\mathcal{D}}_{\tilde{X}}) \subset {\mathrm{Mod}}_{\widehat{\lambda}}({\mathcal{D}}_{\tilde{X}}) \subset {\mathrm{Mod}}_{{\mathit{mon}}}({\mathcal{D}}_{\tilde{X}})$$ for the categories of $\lambda$-twisted, $\lambda$-monodromic, and monodromic ${\mathcal{D}}$-modules respectively. It is an easy lemma that any weakly $H$-equivariant regular holonomic ${\mathcal{D}}$-module is automatically monodromic in the sense above. By the following observation of Beilinson and Bernstein, $\lambda$-monodromicity (for fixed $\lambda$) is in fact a property of the underlying ${\mathcal{D}}$-module, not extra structure. **Proposition 17** ([@BB2 Lemma 2.5.4]). *The forgetful functor $${\mathrm{D}}^b{\mathrm{Mod}}_{\widehat{\lambda}}({\mathcal{D}}_{\tilde{X}}) \to {\mathrm{D}}^b {\mathrm{Mod}}({\mathcal{D}}_{\tilde{X}})$$ is fully faithful.* We extend the theory of mixed Hodge modules to the monodromic setting as follows. We define a mixed Hodge module ${\mathcal{M}}$ on $\tilde X$ to be *$\lambda$-monodromic* or *$\lambda$-twisted* if the underlying ${\mathcal{D}}$-module is. A *monodromic mixed Hodge module* is a mixed Hodge module equipped with a decomposition $${\mathcal{M}} = \bigoplus_{\lambda \in {\mathfrak{h}}^*} {\mathcal{M}}_\lambda,$$ where each ${\mathcal{M}}_\lambda$ is a $\lambda$-monodromic mixed Hodge module. We write $${\mathrm{MHM}}_\lambda(\tilde{X}) \subset {\mathrm{MHM}}_{\widehat{\lambda}}(\tilde{X}) \subset {\mathrm{MHM}}_{\mathit{mon}}(\tilde{X})$$ for the categories of $\lambda$-twisted, $\lambda$-monodromic, and monodromic mixed Hodge modules respectively. The following proposition shows that our definition of monodromic mixed Hodge module is a reasonable one. **Proposition 18**. *Let $({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}})$ be a $\lambda$-monodromic mixed Hodge module on $\tilde{X}$. We have the following.* 1. 
*[\[itm:monodromic mhm 1\]]{#itm:monodromic mhm 1 label="itm:monodromic mhm 1"} If ${\mathcal{M}} \neq 0$, then $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}} := {\mathbb{X}}^*(H) \otimes {\mathbb{R}}$.* 2. *[\[itm:monodromic mhm 2\]]{#itm:monodromic mhm 2 label="itm:monodromic mhm 2"} If $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$, then the ${\mathcal{D}}$-module ${\mathcal{M}}'$ is also $\lambda$-monodromic.* 3. *[\[itm:monodromic mhm 3\]]{#itm:monodromic mhm 3 label="itm:monodromic mhm 3"} The weak $H$-actions on ${\mathcal{M}}$ and ${\mathcal{M}}'$ preserve the weight and Hodge filtrations $W_\bullet$ and $F_\bullet$.* *Proof.* In general, if ${\mathcal{M}} \in {\mathrm{Mod}}({\mathcal{D}}_{\tilde{X}})_{rh}$ is $\lambda$-monodromic, then its Hermitian dual ${\mathcal{M}}^h$ is $\bar{\lambda}$-monodromic. Since ${\mathcal{M}}' = {\mathcal{M}}^h$, this implies [\[itm:monodromic mhm 2\]](#itm:monodromic mhm 2){reference-type="eqref" reference="itm:monodromic mhm 2"}. Since a pure Hodge module is assumed polarizable (so ${\mathcal{M}} \cong {\mathcal{M}}'$ as ${\mathcal{D}}$-modules in this case), this also implies [\[itm:monodromic mhm 1\]](#itm:monodromic mhm 1){reference-type="eqref" reference="itm:monodromic mhm 1"}. The property [\[itm:monodromic mhm 3\]](#itm:monodromic mhm 3){reference-type="eqref" reference="itm:monodromic mhm 3"} follows, for example, from results of Takahiro Saito [@S3]. First, since $H$ is generated by its $1$-parameter subgroups, we may assume without loss of generality that $H = {\mathbb{C}}^\times$. By pulling back along the action map ${\mathbb{C}}^\times \times \tilde{X} \to \tilde{X}$ if necessary, we may further reduce to the case where $\tilde{X} = {\mathbb{C}}^\times \times X$ for some smooth variety $X$. 
Now, since ${\mathcal{M}}$ is $\lambda$-monodromic, we have $$\label{eq:monodromic mhm 1} {\mathcal{M}} = \bigoplus_{\alpha \equiv \lambda \mod {\mathbb{Z}}} {\mathcal{M}}^\alpha,$$ as a ${\mathcal{D}}_{\tilde{X}} = {\mathcal{D}}_X[t, t^{-1}, \partial_t]$-module, where $t$ denotes the coordinate on ${\mathbb{C}}^\times$ and ${\mathcal{M}}^\alpha \in {\mathrm{Mod}}({\mathcal{D}}_X)$ is the $\alpha$-generalized eigenspace of $t\partial_t \in {\mathfrak{h}}$. The weak $H = {\mathbb{C}}^\times$-action preserves each generalized eigenspace ${\mathcal{M}}^\alpha$, acting on it with eigenvalue $\alpha - \lambda \in {\mathbb{Z}}$. The weight filtration is obviously compatible with the decomposition [\[eq:monodromic mhm 1\]](#eq:monodromic mhm 1){reference-type="eqref" reference="eq:monodromic mhm 1"} (since it is a filtration by ${\mathcal{D}}$-submodules) and the Hodge filtration is compatible with [\[eq:monodromic mhm 1\]](#eq:monodromic mhm 1){reference-type="eqref" reference="eq:monodromic mhm 1"} by [@S3 Theorem 2.2]. The same argument applies to ${\mathcal{M}}'$, so this proves [\[itm:monodromic mhm 3\]](#itm:monodromic mhm 3){reference-type="eqref" reference="itm:monodromic mhm 3"}. ◻ In light of [\[itm:monodromic mhm 1\]](#itm:monodromic mhm 1){reference-type="eqref" reference="itm:monodromic mhm 1"}, we will always assume $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ when working with monodromic mixed Hodge modules. In our applications to real groups, we are interested in monodromic mixed Hodge modules that are strongly equivariant under another group $K$ acting on $\tilde{X}$. If $K$ is an algebraic group acting compatibly on both $\tilde{X}$ and $H$, then a *$K$-equivariant monodromic ${\mathcal{D}}_{\tilde{X}}$-module* is a ${\mathcal{D}}_{\tilde{X}}$-module equipped with a weak $K \ltimes H$-action such that the $K$-action is strong. 
Equivalently, ${\mathcal{M}}$ is a monodromic ${\mathcal{D}}_{\tilde{X}}$-module equipped with a strong $K$-action such that for all $k \in K$ and all $\lambda \in {\mathfrak{h}}^*$, the action map $$k^*{\mathcal{M}} \to {\mathcal{M}}$$ sends $k^*{\mathcal{M}}_{k\lambda}$ to ${\mathcal{M}}_\lambda$. Similarly, a *$K$-equivariant monodromic mixed Hodge module* is a monodromic mixed Hodge module ${\mathcal{M}}$, equipped with a $K$-action as a mixed Hodge module (i.e., a strong $K$-action such that the map $a^\circ{\mathcal{M}} \to {\mathcal{O}}_K \boxtimes {\mathcal{M}}$ is a map of mixed Hodge modules) satisfying the above condition on eigenspaces. For $\lambda \in ({\mathfrak{h}}^*)^K$ we can restrict to the categories of $\lambda$-twisted and $\lambda$-monodromic objects. We write $${\mathrm{Mod}}^K_\lambda({\mathcal{D}}_{\tilde{X}}) \subset {\mathrm{Mod}}^K_{\widehat{\lambda}}({\mathcal{D}}_{\tilde{X}}) \subset {\mathrm{Mod}}^K_{{\mathit{mon}}}({\mathcal{D}}_{\tilde{X}})$$ and $${\mathrm{MHM}}^K_\lambda(\tilde{X}) \subset {\mathrm{MHM}}^K_{\widehat{\lambda}}(\tilde{X}) \subset {\mathrm{MHM}}^K_{{\mathit{mon}}}(\tilde{X})$$ for the categories of $K$-equivariant objects. **Remark 19**. Most of the time, we can restrict to the setting where $K$ acts trivially on $H$, i.e., the actions of $K$ and $H$ commute. However, we will need the more general setting to apply the trick of passing to the extended group when dealing with Hermitian representations; see §[4.1](#subsec:hodge and signature){reference-type="ref" reference="subsec:hodge and signature"}. Now suppose that $\tilde{X}$ is an $H$-torsor over another variety $X$ and write $\pi \colon \tilde{X} \to X$ for the quotient map. Consider the sheaf $$\tilde{{\mathcal{D}}} = \pi_{\mathpalette\bigcdot@{.5}}({\mathcal{D}}_{\tilde{X}})^H$$ of rings on $X$, a central extension of ${\mathcal{D}}_X$ by $S({\mathfrak{h}})$. 
We have an equivalence of categories $$\label{eq:monodromic descent} \begin{aligned} {\mathrm{Mod}}_{\mathit{mon}}({\mathcal{D}}_{\tilde X}) &\cong {\mathrm{Mod}}(\tilde{{\mathcal{D}}}) \\ {\mathcal{M}} &\mapsto \pi_{\mathpalette\bigcdot@{.5}}({\mathcal{M}})^H \\ \pi^{\mathpalette\bigcdot@{.5}} {\mathcal{N}} &\mathrel{\reflectbox{$\mapsto$}} {\mathcal{N}} \end{aligned}$$ such that the action [\[eq:weak difference\]](#eq:weak difference){reference-type="eqref" reference="eq:weak difference"} on the left corresponds to the action of the subalgebra $S({\mathfrak{h}}) \subset \tilde{{\mathcal{D}}}$ on the right. In particular, [\[eq:monodromic descent\]](#eq:monodromic descent){reference-type="eqref" reference="eq:monodromic descent"} identifies the full subcategories ${\mathrm{Mod}}_0({\mathcal{D}}_{\tilde{X}})$ and ${\mathrm{Mod}}({\mathcal{D}}_X)$ on either side. We will often identify a monodromic ${\mathcal{D}}_{\tilde{X}}$-module with its corresponding $\tilde{{\mathcal{D}}}$-module on $X$. 
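For instance, when $X$ is a point and $\tilde{X} = H$, we have $$\tilde{{\mathcal{D}}} = \Gamma(H, {\mathcal{D}}_H)^H = U({\mathfrak{h}}) = S({\mathfrak{h}}),$$ and [\[eq:monodromic descent\]](#eq:monodromic descent){reference-type="eqref" reference="eq:monodromic descent"} identifies monodromic ${\mathcal{D}}_H$-modules with $S({\mathfrak{h}})$-modules. Under this equivalence, the strongly equivariant module ${\mathcal{O}}_H$ corresponds to the one-dimensional module $S({\mathfrak{h}})/{\mathfrak{h}}S({\mathfrak{h}}) = {\mathbb{C}}_0$, in accordance with the identification of ${\mathrm{Mod}}_0({\mathcal{D}}_{\tilde{X}})$ with ${\mathrm{Mod}}({\mathcal{D}}_X)$.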
If ${\mathcal{M}} \in {\mathrm{MHM}}_{\mathit{mon}}(\tilde{X})$ is a monodromic mixed Hodge module, then by Proposition [Proposition 18](#prop:monodromic mhm){reference-type="ref" reference="prop:monodromic mhm"}, we may endow the associated $\tilde{{\mathcal{D}}}$-module $\pi_{\mathpalette\bigcdot@{.5}}({\mathcal{M}})^H$ with a Hodge filtration (compatible with the natural filtration on $\tilde{{\mathcal{D}}}$) and a weight filtration (by $\tilde{{\mathcal{D}}}$-submodules) given by $$\label{eq:dtilde filtrations} F_p\pi_{\mathpalette\bigcdot@{.5}}({\mathcal{M}})^H = \pi_{\mathpalette\bigcdot@{.5}}(F_p{\mathcal{M}})^H \quad \mbox{and} \quad W_w\pi_{\mathpalette\bigcdot@{.5}}({\mathcal{M}})^H = \pi_{\mathpalette\bigcdot@{.5}}(W_{w + \dim H}{\mathcal{M}})^H.$$ The shift in the weight filtration above is chosen so that the natural equivalence $$\pi^\circ \colon {\mathrm{MHM}}(X) \overset{\sim}\to {\mathrm{MHM}}_0(\tilde{X})$$ is compatible with the forgetful functor to bi-filtered $\tilde{{\mathcal{D}}}$-modules; cf. Notation [Notation 13](#notation:functors){reference-type="ref" reference="notation:functors"}. ## The flag variety {#subsec:flag variety} Fix now a connected reductive group $G$. Our main example of interest is $X = {\mathcal{B}}$ the flag variety of $G$, $\tilde{X} = \tilde{{\mathcal{B}}}$ the base affine space and $H$ the universal Cartan of $G$. We will also write $\Phi \subset {\mathbb{X}}^*(H)$ and $\check\Phi \subset {\mathbb{X}}_*(H)$ for the set of roots and coroots respectively. Recall from [@DV1 §2.2] our conventions for positive and negative roots. If we fix a maximal torus and Borel subgroup $T \subset B \subset G$, then $T$ is identified naturally with $H = B/N$, where $N$ is the unipotent radical of $B$. We define the set $\Phi_- \subset {\mathbb{X}}^*(H)$ of *negative* roots to be the characters of $T = H$ acting on ${\mathfrak{n}} = {\mathrm{Lie}}(N)$. 
With this convention, a character $\lambda \in {\mathbb{X}}^*(H)$ is dominant if and only if the associated line bundle on ${\mathcal{B}}$ is semi-ample. In this setting, we will use the following notation for twisted and monodromic objects. We write $${\mathcal{D}}_\lambda := \tilde{{\mathcal{D}}} \otimes_{S({\mathfrak{h}}), \lambda - \rho} {\mathbb{C}}$$ for $\lambda \in {\mathfrak{h}}^*$. Here $\rho$ is half the sum of the positive roots. Note that we have $${\mathrm{Mod}}({\mathcal{D}}_\lambda) = {\mathrm{Mod}}_{\lambda - \rho}({\mathcal{D}}_{\tilde{X}}).$$ The shift by $\rho$ is convenient for the purposes of Beilinson-Bernstein localization. We will also write $${\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}}) := {\mathrm{Mod}}_{\widehat{\lambda - \rho}}({\mathcal{D}}_{\tilde{X}}) \subset {\mathrm{Mod}}(\tilde{{\mathcal{D}}}) = {\mathrm{Mod}}_{\mathit{mon}}({\mathcal{D}}_{\tilde{X}}).$$ For mixed Hodge modules, we write $${\mathrm{MHM}}({\mathcal{D}}_\lambda) = {\mathrm{MHM}}_{\lambda - \rho}(\tilde{{\mathcal{B}}}), \quad {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}}) = {\mathrm{MHM}}_{\widehat{\lambda - \rho}}(\tilde{{\mathcal{B}}})$$ $$\mbox{and} \quad {\mathrm{MHM}}(\tilde{{\mathcal{D}}}) = {\mathrm{MHM}}_{\mathit{mon}}(\tilde{{\mathcal{B}}}).$$ We will often use notation identifying a monodromic mixed Hodge module on $\tilde{{\mathcal{B}}}$ with its underlying bi-filtered $\tilde{{\mathcal{D}}}$-module on ${\mathcal{B}}$ via [\[eq:dtilde filtrations\]](#eq:dtilde filtrations){reference-type="eqref" reference="eq:dtilde filtrations"}. In particular, when discussing weights in this context, we will use the weight filtration on ${\mathcal{B}}$, which is shifted by $\dim H$ relative to the weight filtration on $\tilde{{\mathcal{B}}}$. 
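For example, take $G = {\mathrm{SL}}_2$, so that ${\mathcal{B}} = {\mathbb{P}}^1$ and $\tilde{{\mathcal{B}}} = {\mathbb{C}}^2 \smallsetminus \{0\}$, and identify ${\mathbb{X}}^*(H) \cong {\mathbb{Z}}$ so that $\rho \mapsto 1$. With these conventions, for integral $\lambda = n \in {\mathbb{Z}}$ the sheaf ${\mathcal{D}}_\lambda$ is identified with the sheaf of differential operators acting on the line bundle ${\mathcal{O}}_{{\mathbb{P}}^1}(n - 1)$; in particular, $${\mathcal{D}}_\rho = {\mathcal{D}}_{{\mathbb{P}}^1},$$ in accordance with the identification of ${\mathrm{Mod}}_0({\mathcal{D}}_{\tilde{{\mathcal{B}}}})$ with ${\mathrm{Mod}}({\mathcal{D}}_{{\mathcal{B}}})$.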
If $K$ acts compatibly on $\tilde{{\mathcal{B}}}$ and $H$ and $\lambda \in ({\mathfrak{h}}^*)^K$, we write $${\mathrm{Mod}}({\mathcal{D}}_\lambda, K) \subset {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}}, K) \subset {\mathrm{Mod}}(\tilde{{\mathcal{D}}}, K) := {\mathrm{Mod}}^K_{\mathit{mon}}({\mathcal{D}}_{\tilde{X}})$$ and $${\mathrm{MHM}}({\mathcal{D}}_\lambda, K) = {\mathrm{MHM}}_{\lambda - \rho}^K(\tilde{{\mathcal{B}}}) ,\quad {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}}, K) = {\mathrm{MHM}}_{\widehat{\lambda - \rho}}^K(\tilde{{\mathcal{B}}})$$ $$\mbox{and} \quad {\mathrm{MHM}}(\tilde{{\mathcal{D}}}, K) = {\mathrm{MHM}}^K_{\mathit{mon}}(\tilde{{\mathcal{B}}})$$ for the corresponding $K$-equivariant categories. ## Beilinson-Bernstein localization {#subsec:beilinson-bernstein} We now briefly recall the localization theory of Beilinson and Bernstein [@BB1; @BB2], which links ${\mathcal{D}}$-modules with representation theory. The first part of localization theory is the observation that the global differential operators on ${\mathcal{B}}$ (or $\tilde{{\mathcal{B}}}$) are closely related to the universal enveloping algebra of ${\mathfrak{g}} = {\mathrm{Lie}}(G)$. More precisely, consider the sheaf $\tilde{{\mathcal{D}}} = \pi_{\mathpalette\bigcdot@{.5}}({\mathcal{D}}_{\tilde{{\mathcal{B}}}})^H$. The action of $G$ on $\tilde{{\mathcal{B}}}$ together with the inclusion $S({\mathfrak{h}}) \subset \tilde{{\mathcal{D}}}$ define an algebra homomorphism $$\label{eq:bb 1} U({\mathfrak{g}}) \otimes S({\mathfrak{h}}) \to \Gamma({\mathcal{B}}, \tilde{{\mathcal{D}}}).$$ **Theorem 20**. 
*The morphism [\[eq:bb 1\]](#eq:bb 1){reference-type="eqref" reference="eq:bb 1"} descends to an isomorphism $$U({\mathfrak{g}}) \otimes_{Z(U({\mathfrak{g}}))} S({\mathfrak{h}}) \overset{\sim}\to \Gamma({\mathcal{B}}, \tilde{{\mathcal{D}}}),$$ where the map to $S({\mathfrak{h}})$ is via the Harish-Chandra homomorphism $$Z(U({\mathfrak{g}})) \overset{\sim}\to S({\mathfrak{h}})^W\subset S({\mathfrak{h}})$$ composed with the $\rho$-shift $$S({\mathfrak{h}}) \to S({\mathfrak{h}}); \quad h \mapsto h + \rho(h).$$ Here $W$ is the Weyl group.* In particular, for $\lambda \in {\mathfrak{h}}^*$, we get functors $$\label{eq:bb 2} \Gamma \colon {\mathrm{Mod}}({\mathcal{D}}_\lambda) \to {\mathrm{Mod}}(U({\mathfrak{g}}))_{\chi_\lambda}$$ and $$\label{eq:bb 3} \Gamma \colon {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}}) \to {\mathrm{Mod}}(U({\mathfrak{g}}))_{\widehat{\chi_\lambda}},$$ where $\chi_\lambda \colon Z(U({\mathfrak{g}})) \to {\mathbb{C}}$ is the character corresponding to $\lambda$, and ${\mathrm{Mod}}(U({\mathfrak{g}}))_{\chi_\lambda}$ (resp., ${\mathrm{Mod}}(U({\mathfrak{g}}))_{\widehat{\chi_\lambda}}$) is the category of $U({\mathfrak{g}})$-modules on which $Z(U({\mathfrak{g}}))$ acts with (generalized) character $\chi_\lambda$. The character $\chi_\lambda$ is often called the *infinitesimal character*. The next part of Beilinson and Bernstein's theory is a cohomology vanishing theorem asserting the exactness of the functors [\[eq:bb 2\]](#eq:bb 2){reference-type="eqref" reference="eq:bb 2"} and [\[eq:bb 3\]](#eq:bb 3){reference-type="eqref" reference="eq:bb 3"} for most $\lambda$. In the statement below, we say that $\lambda \in {\mathfrak{h}}^*$ is *integrally dominant* if $\langle \lambda, \check \alpha \rangle \not\in {\mathbb{Z}}_{< 0}$ for all positive coroots $\check \alpha$. **Theorem 21**. *Assume $\lambda \in {\mathfrak{h}}^*$ is integrally dominant and let ${\mathcal{M}} \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}})$. 
Then $${\mathrm{H}}^i({\mathcal{B}}, {\mathcal{M}}) = 0 \quad \mbox{for $i > 0$.}$$ In particular, the functors [\[eq:bb 2\]](#eq:bb 2){reference-type="eqref" reference="eq:bb 2"} and [\[eq:bb 3\]](#eq:bb 3){reference-type="eqref" reference="eq:bb 3"} are exact.* Note that for fixed $\chi \colon Z(U({\mathfrak{g}})) \to {\mathbb{C}}$, we can always find $\lambda \in {\mathfrak{h}}^*$ integrally dominant such that $\chi = \chi_\lambda$. The final part of the localization theory asserts that, under further assumptions on $\lambda$, the functors [\[eq:bb 2\]](#eq:bb 2){reference-type="eqref" reference="eq:bb 2"} and [\[eq:bb 3\]](#eq:bb 3){reference-type="eqref" reference="eq:bb 3"} are equivalences: **Theorem 22**. *Assume $\lambda \in {\mathfrak{h}}^*$ is integrally dominant and regular (i.e., $\langle \lambda, \check\alpha \rangle \neq 0$ for $\check\alpha \in \check\Phi$). Then for any ${\mathcal{M}} \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}})$, the canonical morphism $$\tilde{{\mathcal{D}}} \otimes_{U({\mathfrak{g}})} \Gamma({\mathcal{M}}) \to {\mathcal{M}}$$ is surjective. Moreover, the functors [\[eq:bb 2\]](#eq:bb 2){reference-type="eqref" reference="eq:bb 2"} and [\[eq:bb 3\]](#eq:bb 3){reference-type="eqref" reference="eq:bb 3"} are equivalences of categories.* Of course, the global generation in Theorem [Theorem 22](#thm:bb 3){reference-type="ref" reference="thm:bb 3"} follows from the equivalences of categories. We have highlighted it in the statement as it is the main step in the proof: the "moreover" follows easily from this given Theorems [Theorem 20](#thm:bb 1){reference-type="ref" reference="thm:bb 1"} and [Theorem 21](#thm:bb 2){reference-type="ref" reference="thm:bb 2"}. ## Localization for the Hodge filtration {#subsec:hodge localization} We now turn to the main results of this paper concerning the interaction between localization theory and the extra structures furnished by Hodge theory. 
In this subsection, we state our results concerning the Hodge filtration only. The main idea is that each of the main results in localization theory has a Hodge-filtered refinement. First, we note the following well-known refinement of Theorem [Theorem 20](#thm:bb 1){reference-type="ref" reference="thm:bb 1"}, which forms part of the standard proof of that theorem. **Proposition 23**. *The isomorphism $$U({\mathfrak{g}}) \otimes_{Z(U({\mathfrak{g}}))} S({\mathfrak{h}}) \overset{\sim}\to \Gamma({\mathcal{B}}, \tilde{{\mathcal{D}}})$$ is a filtered isomorphism. Here we equip $U({\mathfrak{g}})$ and $S({\mathfrak{h}}) = U({\mathfrak{h}})$ with the usual Poincaré-Birkhoff-Witt filtrations and $\tilde{{\mathcal{D}}}$ with its filtration by order of differential operator.* Our first main result concerning the Hodge filtration is an analog of Theorem [Theorem 21](#thm:bb 2){reference-type="ref" reference="thm:bb 2"}. Recall that the theory of monodromic mixed Hodge modules is non-trivial only when $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}} = {\mathbb{X}}^*(H) \otimes {\mathbb{R}}$. In the statement below, we say that $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is *dominant* if $\langle \lambda, \check\alpha \rangle \geq 0$ for all $\check\alpha \in \check\Phi_+$; note that this is a more restrictive condition than integral dominance. **Theorem 24**. *Assume that $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is dominant. Then for any ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$, we have $${\mathrm{H}}^i({\mathcal{B}}, F_p {\mathcal{M}}) = 0 \quad \mbox{for all $i > 0$ and all $p$}.$$* We give the proof of Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} in §§[8.3](#subsec:regular vanishing){reference-type="ref" reference="subsec:regular vanishing"}--[8.4](#subsec:singular vanishing){reference-type="ref" reference="subsec:singular vanishing"}. 
In particular, Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} implies that any short exact sequence $$0 \to {\mathcal{M}} \to {\mathcal{N}} \to {\mathcal{P}} \to 0$$ in ${\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$ gives rise to a short exact sequence $$0 \to \Gamma(F_p{\mathcal{M}}) \to \Gamma(F_p{\mathcal{N}}) \to \Gamma(F_p{\mathcal{P}}) \to 0$$ for every $p$, i.e., the functor $\Gamma$ is "filtered exact" on mixed Hodge modules. Our second main result is an analog of Theorem [Theorem 22](#thm:bb 3){reference-type="ref" reference="thm:bb 3"}. **Theorem 25**. *Assume $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is dominant and let ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$. If the morphism of $\tilde{{\mathcal{D}}}$-modules $$\label{eq:hodge generation 1} \tilde{{\mathcal{D}}} \otimes_{U({\mathfrak{g}})} \Gamma({\mathcal{M}}) \to {\mathcal{M}}$$ is surjective, then it is filtered surjective, where we equip ${\mathcal{M}}$ (and hence $\Gamma({\mathcal{M}})$) with its Hodge filtration and $\tilde{{\mathcal{D}}}$ with the filtration by order of differential operator. In particular, if $\lambda$ is regular, then [\[eq:hodge generation 1\]](#eq:hodge generation 1){reference-type="eqref" reference="eq:hodge generation 1"} is filtered surjective for any ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$.* In other words, in suitable situations, the Hodge filtration can be recovered from its global sections. We give the proof of Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"} also in §§[8.3](#subsec:regular vanishing){reference-type="ref" reference="subsec:regular vanishing"}--[8.4](#subsec:singular vanishing){reference-type="ref" reference="subsec:singular vanishing"}. 
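As a sanity check in the simplest case, take $G = {\mathrm{SL}}_2$, ${\mathcal{B}} = {\mathbb{P}}^1$, and $\lambda = n \in {\mathbb{Z}}_{>0}$ regular dominant (identifying ${\mathbb{X}}^*(H) \cong {\mathbb{Z}}$ so that $\rho \mapsto 1$), and let ${\mathcal{M}}$ be a twisted Hodge module whose underlying ${\mathcal{D}}_\lambda$-module is the line bundle ${\mathcal{O}}_{{\mathbb{P}}^1}(n - 1)$, with Hodge filtration concentrated in a single degree. Then $\Gamma({\mathcal{M}}) = {\mathrm{H}}^0({\mathbb{P}}^1, {\mathcal{O}}(n-1))$ is the $n$-dimensional irreducible representation, and in its lowest filtered piece the filtered surjectivity of [\[eq:hodge generation 1\]](#eq:hodge generation 1){reference-type="eqref" reference="eq:hodge generation 1"} amounts to the surjectivity of $${\mathcal{O}}_{{\mathbb{P}}^1} \otimes_{\mathbb{C}} {\mathrm{H}}^0({\mathbb{P}}^1, {\mathcal{O}}(n-1)) \to {\mathcal{O}}(n-1),$$ i.e., to the global generation of the semi-ample line bundle ${\mathcal{O}}(n-1)$ for $n \geq 1$.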
## Localization for polarizations {#subsec:polarization localization} The core idea behind the main conjecture in [@SV] is that localization theory also interacts with polarizations in a nice way. We discuss here the main construction and what we can (and cannot) prove for general twisted Hodge modules on ${\mathcal{B}}$. Let $\lambda \in {\mathfrak{h}}^*$, let ${\mathcal{M}} \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}})_{rh}$ and ${\mathcal{M}}' \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\bar{\lambda}}})_{rh}$ and suppose that $${\mathfrak{s}} \colon {\mathcal{M}} \otimes \overline{{\mathcal{M}}'} \to {{\mathcal{D}}\mathit{b}}_{\tilde{{\mathcal{B}}}}$$ is a ${\mathcal{D}}_{\tilde{{\mathcal{B}}}} \otimes \overline{{\mathcal{D}}}_{\tilde{{\mathcal{B}}}}$-linear pairing. We define a pairing on the global sections modules as follows. Fix once and for all a maximal compact subgroup $U_{\mathbb{R}} \subset G$ and a base point in $\tilde{{\mathcal{B}}}$ defining an equivariant embedding $U_{\mathbb{R}} \subset \tilde{{\mathcal{B}}}$ (see Remark [Remark 26](#rmk:integration choices){reference-type="ref" reference="rmk:integration choices"} below for the dependence of the construction on these choices). Then we obtain an isomorphism $$\tilde{{\mathcal{B}}} \cong U_{\mathbb{R}} \times H_{\mathbb{R}}^\circ,$$ where $H_{\mathbb{R}}^\circ = {\mathbb{X}}_*(H) \otimes {\mathbb{R}}_{>0} = \exp({\mathfrak{h}}_{\mathbb{R}})$ is the non-compact part of $H$. For $$m \in \Gamma({\mathcal{M}}) = \Gamma(\tilde{{\mathcal{B}}}, {\mathcal{M}})^H \quad \text{and} \quad m' \in \Gamma({\mathcal{M}}') = \Gamma(\tilde{{\mathcal{B}}}, {\mathcal{M}}')^H,$$ we have a distribution $${\mathfrak{d}}(m, \overline{m}') := \int_{U_{\mathbb{R}}} {\mathfrak{s}}(m, \overline{m'}) \in {{\mathcal{D}}\mathit{b}}_{H_{\mathbb{R}}^\circ},$$ where we integrate with respect to the invariant volume form on $U_{\mathbb{R}}$ of volume 1. 
By monodromicity, we have $$(i(h) - (\lambda - \rho)(h))^n {\mathfrak{s}}(m, \overline{m'}) = 0 \quad \mbox{and} \quad (\overline{i(h)} - (\lambda - \rho)(h))^n {\mathfrak{s}}(m, \overline{m'}) = 0$$ for $h \in {\mathfrak{h}}$ and $n \gg 0$. Here we denote by $i(h)$ (resp., $\overline{i(h)}$) the holomorphic (resp., anti-holomorphic) vector field on $\tilde{{\mathcal{B}}}$ coming from the action of $H$. In this notation, the action of the real Lie group $H_{\mathbb{R}}^\circ$ is generated by the vector fields $i(h) + \overline{i(h)}$ for $h \in {\mathfrak{h}}_{\mathbb{R}}$. So we deduce $$\label{eq:monodromic distribution} (h - 2(\lambda - \rho)(h))^n {\mathfrak{d}}(m, \overline{m}') = 0 \quad \mbox{for $h \in {\mathfrak{h}}_{\mathbb{R}}$ and $n \gg 0$}.$$ Hence, the distribution ${\mathfrak{d}}(m, \overline{m}')$ is in fact a smooth function, equal to a polynomial function on ${\mathfrak{h}}_{\mathbb{R}}$ times $\exp(2(\lambda - \rho))$; here we identify $H_{\mathbb{R}}^\circ$ with ${\mathfrak{h}}_{\mathbb{R}}$ via the exponential map. We define $$\Gamma({\mathfrak{s}}) \colon \Gamma({\mathcal{M}}) \otimes \overline{\Gamma({\mathcal{M}}')} \to {\mathbb{C}}$$ by evaluating ${\mathfrak{d}}(m, \overline{m'})$ at the identity: $$\Gamma({\mathfrak{s}})(m, \overline{m'}) = {\mathfrak{d}}(m, \overline{m'})|_{1 \in H_{\mathbb{R}}^\circ} = \int_{U_{\mathbb{R}}} {\mathfrak{s}}(m, \overline{m'})|_{U_{\mathbb{R}}}.$$ By construction, the pairing $\Gamma({\mathfrak{s}})$ is ${\mathfrak{u}}_{\mathbb{R}}$-invariant, where ${\mathfrak{u}}_{\mathbb{R}} = {\mathrm{Lie}}(U_{\mathbb{R}}) \subset {\mathfrak{g}}$. **Remark 26**. The pairing $\Gamma({\mathfrak{s}})$ depends on the choices as follows. The dependence on the choice of compact form $U_{\mathbb{R}}$ is serious (as is, of course, the notion of ${\mathfrak{u}}_{\mathbb{R}}$-invariant pairing). The dependence on the base point in $\tilde{{\mathcal{B}}}$ is mild, but non-trivial. 
When ${\mathcal{M}}$ and ${\mathcal{M}}'$ are twisted, the equation [\[eq:monodromic distribution\]](#eq:monodromic distribution){reference-type="eqref" reference="eq:monodromic distribution"} holds with $n = 1$. So ${\mathfrak{d}}(m, \bar{m}')\exp(-2(\lambda - \rho))$ is a constant, and hence $\Gamma({\mathfrak{s}})$ depends on the choice of base point in $\tilde{{\mathcal{B}}}$ only up to a positive real scalar. This is the case, for instance, for the integral of the polarization on a polarized Hodge module below. When ${\mathcal{M}}$ is merely $\lambda$-monodromic, however, there is a subtle dependence on the $U_{\mathbb{R}}$-orbit of the base point. This dependence is, in fact, a familiar phenomenon in Hodge theory, see Remark [Remark 29](#rmk:unipotent variation){reference-type="ref" reference="rmk:unipotent variation"} below. We have the following proposition. **Proposition 27**. *In the setting above, assume $\lambda \in {\mathfrak{h}}^*$ is integrally dominant. Then we have the following.* 1. *[\[itm:integral pairing 1\]]{#itm:integral pairing 1 label="itm:integral pairing 1"} If ${\mathfrak{s}}$ is non-degenerate, then so is $\Gamma({\mathfrak{s}})$.* 2. *[\[itm:integral pairing 2\]]{#itm:integral pairing 2 label="itm:integral pairing 2"} Assume ${\mathcal{M}}$ and ${\mathcal{M}}'$ are generated by their global sections as $\tilde{{\mathcal{D}}}$-modules. If $\Gamma({\mathfrak{s}}) = 0$, then ${\mathfrak{s}} = 0$.* We prove Proposition [Proposition 27](#prop:integral pairing){reference-type="ref" reference="prop:integral pairing"} in §[9](#sec:integral){reference-type="ref" reference="sec:integral"} assuming either that $\lambda$ is regular or that ${\mathcal{M}}$ and ${\mathcal{M}}'$ underlie mixed Hodge modules. We also give a sketch of proof in the general case. Using the integral pairings $\Gamma({\mathfrak{s}})$, we can reflect all the structure of a Hodge module in structures on its global sections. 
We consider separately the cases of polarized Hodge modules and of mixed Hodge modules. Let $({\mathcal{M}}, S)$ be a polarized Hodge module of weight $w$ in ${\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$, for $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ dominant. Then the corresponding $U({\mathfrak{g}})$-module $\Gamma({\mathcal{M}})$ is equipped with a good filtration $F_\bullet\Gamma({\mathcal{M}}) = \Gamma(F_\bullet{\mathcal{M}})$ and a non-degenerate ${\mathfrak{u}}_{\mathbb{R}}$-invariant Hermitian form. The main conjecture of [@SV] is: **Conjecture 28** ([@SV Conjecture 5.12]). *The tuple $(\Gamma({\mathcal{M}}), \Gamma(F_\bullet {\mathcal{M}}), \Gamma(S))$ is a polarized Hodge structure of weight $w - \dim {\mathcal{B}}$. That is, $\Gamma(S)$ is $(-1)^{p + w - \dim {\mathcal{B}}}$-definite on the subspace $$\Gamma(F_p{\mathcal{M}}) \cap \Gamma(F_{p - 1}{\mathcal{M}})^\perp \subset \Gamma({\mathcal{M}})$$ for all $p$.* Conjecture [Conjecture 28](#conj:schmid-vilonen){reference-type="ref" reference="conj:schmid-vilonen"} is known for tempered Harish-Chandra sheaves by [@DV2]. The relevant integrals are also analyzed explicitly in [@chaves] in the cases of anti-dominant Verma modules and discrete series representations. Unfortunately, we are not yet able to prove Conjecture [Conjecture 28](#conj:schmid-vilonen){reference-type="ref" reference="conj:schmid-vilonen"} in full. Nevertheless, we are able to prove independently that the main consequences for unitary representations of real groups are indeed true: see §[2.7](#subsec:intro harish-chandra){reference-type="ref" reference="subsec:intro harish-chandra"} below. Consider now the case of general mixed Hodge modules, regarded as triples $({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}})$. 
For $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$, let ${\mathrm{MHM}}^w(U({\mathfrak{g}}))_{\widehat{\chi_\lambda}}$ denote the category of triples $$((V, W_\bullet, F_\bullet), (V', W_\bullet, F_\bullet), S),$$ where $V$ and $V'$ are $U({\mathfrak{g}})$-modules with generalized infinitesimal character $\chi_\lambda$, $$S \colon V \otimes \overline{V'} \to {\mathbb{C}}$$ is a non-degenerate ${\mathfrak{u}}_{\mathbb{R}}$-invariant pairing, $W_\bullet V$ and $W_\bullet V'$ are finite filtrations by $U({\mathfrak{g}})$-submodules (equal to each other's orthogonal complements under $S$), and $F_\bullet V$ and $F_\bullet V'$ are good filtrations of $U({\mathfrak{g}})$-modules. We think of such things as "weak mixed Hodge $U({\mathfrak{g}})$-modules". We have a functor $$\label{eq:hodge global sections} \Gamma \colon {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}}) \to {\mathrm{MHM}}^w(U({\mathfrak{g}}))_{\widehat{\chi_\lambda}}$$ sending $({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}})$ to the triple $$((\Gamma({\mathcal{M}}), \Gamma(F_\bullet{\mathcal{M}}), \Gamma(W_{\bullet + \dim {\mathcal{B}}}{\mathcal{M}})), (\Gamma({\mathcal{M}}'), \Gamma(F_\bullet{\mathcal{M}}'), \Gamma(W_{\bullet - \dim {\mathcal{B}}}{\mathcal{M}}')), \Gamma({\mathfrak{s}})).$$ Objects in the image of [\[eq:hodge global sections\]](#eq:hodge global sections){reference-type="eqref" reference="eq:hodge global sections"} deserve the name "mixed Hodge $U({\mathfrak{g}})$-modules". In this context, Conjecture [Conjecture 28](#conj:schmid-vilonen){reference-type="ref" reference="conj:schmid-vilonen"} would imply these are always infinite-dimensional mixed Hodge structures in the obvious sense. **Remark 29**. 
In view of Remark [Remark 26](#rmk:integration choices){reference-type="ref" reference="rmk:integration choices"}, the functor [\[eq:hodge global sections\]](#eq:hodge global sections){reference-type="eqref" reference="eq:hodge global sections"} on mixed objects depends on a choice of base point in $\tilde{{\mathcal{B}}}$, even though its restriction to pure objects does not. This is a necessary consequence of the following fact: there exist monodromic mixed Hodge modules ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$ such that, as mixed Hodge modules on $\tilde{{\mathcal{B}}}$, we have $h^*{\mathcal{M}} \not\cong {\mathcal{M}}$ for some $h \in H$ (even though $h^*{\mathcal{M}} \cong {\mathcal{M}}$ as ${\mathcal{D}}$-modules always). This comes from a familiar phenomenon in Hodge theory: if we restrict to a fiber of $\tilde{{\mathcal{B}}} \to {\mathcal{B}}$, an object in ${\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$ becomes a unipotent variation of mixed Hodge structure (see, e.g., [@hain-zucker]) tensored with a rank $1$ local system on $H$. Such variations of mixed Hodge structure are trivial on their pure subquotients, but the extensions generally vary from point to point. Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"} and Proposition [Proposition 27](#prop:integral pairing){reference-type="ref" reference="prop:integral pairing"} imply the following localization result for Hodge structures. **Theorem 30**. *Assume $\lambda$ is regular dominant. Then [\[eq:hodge global sections\]](#eq:hodge global sections){reference-type="eqref" reference="eq:hodge global sections"} is fully faithful.* *Proof.* Since the global sections functor from ${\mathcal{D}}$-modules to $U({\mathfrak{g}})$-modules is fully faithful, the functor [\[eq:hodge global sections\]](#eq:hodge global sections){reference-type="eqref" reference="eq:hodge global sections"} is clearly faithful. 
It therefore remains to show that it is full, i.e., if $$({\mathcal{M}}, {\mathcal{M}}', {\mathfrak{s}}), ({\mathcal{N}}, {\mathcal{N}}', {\mathfrak{s}}) \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$$ and $$f \colon {\mathcal{M}} \to {\mathcal{N}}, \quad f' \colon {\mathcal{N}}' \to {\mathcal{M}}'$$ are morphisms of ${\mathcal{D}}$-modules such that the maps $\Gamma(f)$ and $\Gamma(f')$ on global sections are compatible with the filtrations and pairings, then so are $f$ and $f'$ themselves, and hence $(f, f')$ defines a morphism of mixed Hodge modules. First, it is clear that $f$ and $f'$ preserve the weight filtrations, since these are filtrations in the category of ${\mathcal{D}}$-modules and $\Gamma$ is an equivalence of categories here. That $f$ and $f'$ preserve the Hodge filtrations follows by Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"}. Finally, the claim that $f$ and $f'$ preserve the pairings reduces to the claim that the pairing $${\mathfrak{s}}(f(\cdot), \bar{\cdot}) - {\mathfrak{s}}(\cdot, \overline{f'(\cdot)}) \colon {\mathcal{M}} \otimes \overline{{\mathcal{N}}'} \to {{\mathcal{D}}\mathit{b}}_{\tilde{{\mathcal{B}}}}$$ is equal to zero. But this follows from the fact that its global sections are zero by Proposition [Proposition 27](#prop:integral pairing){reference-type="ref" reference="prop:integral pairing"}, so we are done. ◻ ## The Harish-Chandra setting {#subsec:intro harish-chandra} Let us now consider what Hodge theory can say about the unitary representations of a real reductive group. Recall that a real reductive group $G_{\mathbb{R}}$ is by definition the set of fixed points $G_{\mathbb{R}} = G^\sigma$ of a complex reductive group $G$ under an anti-holomorphic involution $\sigma \colon \bar{G} \to G$. 
If we fix a compact real form $U_{\mathbb{R}} = G^{\sigma_c} \subset G$ whose complex conjugation $\sigma_c \colon \bar{G} \to G$ commutes with $\sigma$, then $\theta = \sigma\circ \sigma_c \colon G \to G$ is an algebraic involution such that $K = G^\theta$ is the complexification of the maximal compact subgroup $K_{\mathbb{R}} = U_{\mathbb{R}} \cap G_{\mathbb{R}} \subset G_{\mathbb{R}}$. We are interested in the classical problem of classifying the irreducible unitary representations of $G_{\mathbb{R}}$. While this is *a priori* a question in analysis (recall that unitary representations of non-compact Lie groups are generally infinite dimensional Hilbert spaces), it can be reduced to algebra by the following celebrated theory of Harish-Chandra. First, Harish-Chandra showed that every irreducible unitary representation ${\mathbb{V}}$ of $G_{\mathbb{R}}$ is admissible; that is, every irreducible representation of the maximal compact subgroup $K_{\mathbb{R}}$ appears with finite multiplicity. The subspace $V = {\mathbb{V}}^{K_{\mathbb{R}}\text{-fin}}$ of $K_{\mathbb{R}}$-finite vectors, of which ${\mathbb{V}}$ itself is a completion, is therefore a Harish-Chandra module: i.e., an admissible, finitely generated $({\mathfrak{g}}, K)$-module, where ${\mathfrak{g}} = {\mathrm{Lie}}(G)$. The main result of Harish-Chandra is that this procedure defines a bijection between irreducible unitary representations of $G_{\mathbb{R}}$ and irreducible Harish-Chandra modules admitting a (necessarily unique) positive definite $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form, where ${\mathfrak{g}}_{\mathbb{R}} = {\mathrm{Lie}}(G_{\mathbb{R}})$. We are therefore left with the problem of determining when an irreducible Harish-Chandra module $V$ admits a positive definite $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form. We will call such Harish-Chandra modules *unitary* (=unitarizable) for short. 
It is not difficult to reduce this question to the case of real infinitesimal character, i.e., to Harish-Chandra modules on which the center $Z(U({\mathfrak{g}}))$ acts by a character $\chi_\lambda$ for $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ (see, for example, [@knapp1986book Chapter 16]). A key insight (due to [@ALTV]) is that in this setting one always has a $({\mathfrak{u}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form, to which the $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant form can be related when it exists. More precisely, we have the following. **Proposition 31** (e.g., [@ALTV]). *Let $V$ be an irreducible $({\mathfrak{g}}, K)$-module with real infinitesimal character.* 1. *There exists a non-degenerate $({\mathfrak{u}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form $\langle\,,\,\rangle_{{\mathfrak{u}}_{\mathbb{R}}}$ on $V$.* 2. *The module $V$ carries a non-zero $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form if and only if $V \cong \theta^*V$ (where $\theta^*$ denotes the twist of the ${\mathfrak{g}}$-action via the involution $\theta$). Moreover, if we fix an isomorphism $\theta \colon V \to \theta^*V$ such that $\theta^2 = 1$, then $$\langle u, \bar{v}\rangle_{{\mathfrak{g}}_{\mathbb{R}}} := \langle \theta(u), \bar{v}\rangle_{{\mathfrak{u}}_{\mathbb{R}}}$$ is such a form.* Thus, the problem of determining the unitarity of $V$ is reduced to the problem of computing the signature of a ${\mathfrak{u}}_{\mathbb{R}}$-invariant form and the action of $\theta$. In practice, it is important to normalize the sign of the ${\mathfrak{u}}_{\mathbb{R}}$-invariant form. In [@ALTV], Adams, van Leeuwen, Trapa and Vogan showed that the form can be normalized to be positive definite on the minimal $K$-types; using this normalization, they were able to write down a recursive algorithm to compute its signature. 
Conjecture [Conjecture 28](#conj:schmid-vilonen){reference-type="ref" reference="conj:schmid-vilonen"}, on the other hand, addresses the problem as follows. First, let us introduce some notation. We will write $${\mathrm{HC}}({\mathfrak{g}}, K) \subset {\mathrm{Mod}}({\mathfrak{g}}, K)$$ for the category of Harish-Chandra modules and, for $\chi \colon Z(U({\mathfrak{g}})) \to {\mathbb{C}}$ a character, $${\mathrm{HC}}({\mathfrak{g}}, K)_{\widehat{\chi}} \subset {\mathrm{Mod}}({\mathfrak{g}}, K)_{\widehat{\chi}}$$ for the category of Harish-Chandra modules with generalized infinitesimal character $\chi$. Now, the Beilinson-Bernstein localization theory provides a functor $$\Gamma \colon {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}}, K) \to {\mathrm{Mod}}({\mathfrak{g}}, K)_{\widehat{\chi_\lambda}},$$ which is an equivalence for $\lambda \in {\mathfrak{h}}^*$ regular and integrally dominant. It is a well-known fact that a finitely generated $({\mathfrak{g}}, K)$-module with generalized infinitesimal character is automatically admissible. Moreover, the subgroup $K \subset G$ acts on the (complex) flag variety ${\mathcal{B}}$ with finitely many orbits, so any finitely generated module in ${\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}}, K)$ is necessarily regular holonomic. Therefore, $\Gamma$ restricts to a functor $$\label{eq:hc global sections} \Gamma \colon {\mathrm{HC}}({\mathcal{D}}_{\widehat{\lambda}}, K) := {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}}, K)_{rh} \to {\mathrm{HC}}({\mathfrak{g}}, K)_{\widehat{\chi_\lambda}},$$ which is again an equivalence for $\lambda \in {\mathfrak{h}}^*$ regular and integrally dominant. We call the objects in the domain *Harish-Chandra sheaves*. 
When $\lambda$ is integrally dominant but not necessarily regular, the functor [\[eq:hc global sections\]](#eq:hc global sections){reference-type="eqref" reference="eq:hc global sections"} is exact and a simple exercise shows that $\Gamma$ sends irreducibles to irreducibles or zero, and that each irreducible object in the target is the image of a unique irreducible object in the domain. Thus, each irreducible Harish-Chandra module $V$ with real infinitesimal character may be written uniquely as $V = \Gamma({\mathcal{M}})$, where ${\mathcal{M}} \in {\mathrm{HC}}({\mathcal{D}}_\lambda, K)$ is irreducible and $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is (real) dominant. As explained in [@DV1 §2.3], for example, any such ${\mathcal{M}}$ is the intermediate extension of a rank $1$ unitary local system on a $K \times H$-orbit in $\tilde{{\mathcal{B}}}$, and hence has a standard (and almost unique) lift to a polarizable Hodge module in ${\mathrm{MHM}}({\mathcal{D}}_\lambda, K)$. Integrating the polarization $S$ on ${\mathcal{M}}$ produces a non-degenerate $({\mathfrak{u}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form $\Gamma(S)$ on $V = \Gamma({\mathcal{M}})$. We have previously shown [@DV1 Theorem 4.5 and Proposition 4.7] that, with our standard choice of Hodge structure on ${\mathcal{M}}$, the form $\Gamma(S)$ is the unique one that is positive definite on the minimal $K$-types, i.e., it is precisely the same form considered in [@ALTV]. Now let us consider the $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant forms in terms of geometry. Observe that the involution $\theta$ acts canonically on the flag variety ${\mathcal{B}}$ and hence on the universal Cartan $H = {\mathrm{Hom}}_{\mathbb{Z}}({\mathrm{Pic}}^G({\mathcal{B}}), {\mathbb{C}}^\times)$. We will denote the involution on $H$ by $\delta$ to avoid confusion with the action on $\theta$-stable maximal tori in $G$. 
Note that $\delta$ preserves the sets of positive roots and dominant weights in ${\mathfrak{h}}^*$. Tautologically, we have $\theta^*\tilde{{\mathcal{B}}} \cong \tilde{{\mathcal{B}}}$ as $H$-torsors over ${\mathcal{B}}$ (after twisting the $H$-action on one side by $\delta$), so $\theta$ lifts to a compatible involution $\theta \colon \tilde{{\mathcal{B}}} \to \tilde{{\mathcal{B}}}$. This lift is not unique: we can compose it with the action of any element $h \in H$ such that $\delta(h) = h^{-1}$ to get another one. Fixing a lift, we get a pullback functor $$\label{eq:hodge involution} \theta^* \colon {\mathrm{MHM}}({\mathcal{D}}_\lambda, K) \to {\mathrm{MHM}}({\mathcal{D}}_{\delta\lambda}, K)$$ compatible with the functor $\theta^* \colon {\mathrm{HC}}({\mathfrak{g}}, K) \to {\mathrm{HC}}({\mathfrak{g}}, K)$. For mixed objects, the functor [\[eq:hodge involution\]](#eq:hodge involution){reference-type="eqref" reference="eq:hodge involution"} depends subtly on the choice of lift, but restricted to pure ones it does not, cf., Remark [Remark 29](#rmk:unipotent variation){reference-type="ref" reference="rmk:unipotent variation"}. Since the pure Hodge module ${\mathcal{M}}$ is determined uniquely by $V = \Gamma({\mathcal{M}})$, we deduce from Proposition [Proposition 31](#prop:gr vs ur){reference-type="ref" reference="prop:gr vs ur"} that $V$ admits a $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form if and only if $\delta \lambda = \lambda$ and $\theta^*{\mathcal{M}} \cong {\mathcal{M}}$. In this case, if we fix an isomorphism $\theta \colon {\mathcal{M}} \to \theta^*{\mathcal{M}}$ such that $\theta^2 = 1$, then $$\langle u, \bar{v}\rangle_{{\mathfrak{g}}_{\mathbb{R}}} := \Gamma(S)(\theta(u), \bar{v})$$ is the unique (up to scale) non-degenerate $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form on $V$. 
Conjecture [Conjecture 28](#conj:schmid-vilonen){reference-type="ref" reference="conj:schmid-vilonen"} predicts the following result. **Theorem 32**. *Let $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ be dominant, let ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_\lambda, K)$ be an irreducible object such that the $({\mathfrak{g}}, K)$-module $V = \Gamma({\mathcal{M}})$ is Hermitian, and fix an involution $\theta \colon {\mathcal{M}} \to \theta^*{\mathcal{M}}$ as above. Then $V$ is unitary if and only if $\theta$ acts on $\mathop{\mathrm{Gr}}^F_p V$ with eigenvalue $(-1)^p$ for all $p$, or with eigenvalue $(-1)^{p + 1}$ for all $p$.* We prove Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"} unconditionally in §[4](#sec:pf of unitarity){reference-type="ref" reference="sec:pf of unitarity"}. **Remark 33**. We note that the condition $\delta \lambda = \lambda$, $\theta^*{\mathcal{M}} \cong {\mathcal{M}}$ for the existence of an invariant Hermitian form is very easy to check in practice. Recall from [@DV1 §2.4] that ${\mathcal{M}}$ is classified by the parameter $(Q, \lambda, \Lambda)$, where $Q \subset {\mathcal{B}}$ is a $K$-orbit, $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is the infinitesimal character and $(\lambda - \rho, \Lambda)$ is a character for the associated Harish-Chandra pair $({\mathfrak{h}}, H^{\theta_Q})$. The condition $\delta \lambda = \lambda$ simply picks out a linear subspace for the infinitesimal character, and the condition $\theta^*{\mathcal{M}} \cong {\mathcal{M}}$ is simply that $\theta(Q) = Q$ and $\delta \Lambda = \Lambda$. 
## Application: cohomological induction {#subsec:cohomological induction} To illustrate the power of our unitarity criterion (Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"}), we use it here to give a very short proof of the classical (but highly non-trivial) fact that cohomological induction preserves unitarity in the good range [@vogan-annals Theorem 1.3]. We first recall the definition of (left) cohomological induction (see, for example, [@knapp-vogan Chapter V]). In the setting of the previous subsection, let $P \subset G$ be a $\theta$-stable parabolic subgroup with Levi factor $L$. The restriction functor $${\mathrm{Mod}}({\mathfrak{g}}, K) \to {\mathrm{Mod}}({\mathfrak{g}}, K \cap L)$$ has a left adjoint $\Pi$ with left derived functors $\Pi_j$. If $V$ is an $({\mathfrak{l}}, L \cap K)$-module, the *$j$th cohomological induction* of $V$ is $${\mathcal{L}}_j(V) := \Pi_j(U({\mathfrak{g}}) \otimes_{U({\mathfrak{p}})} (V \otimes \det ({\mathfrak{g}}/{\mathfrak{p}}))) \in {\mathrm{Mod}}({\mathfrak{g}}, K)$$ where ${\mathfrak{p}} = {\mathrm{Lie}}(P)$ and ${\mathfrak{l}} = {\mathrm{Lie}}(L)$, and we regard $V$ as a $U({\mathfrak{p}})$-module via the quotient map ${\mathfrak{p}} \to {\mathfrak{l}}$. Cohomological induction sends Harish-Chandra modules to Harish-Chandra modules. The operation of cohomological induction has the following geometric description. Suppose that $V = \Gamma({\mathcal{B}}_L, {\mathcal{M}})$ for ${\mathcal{M}} \in {\mathrm{HC}}({\mathcal{D}}_{{\mathcal{B}}_L, \lambda}, K \cap L)$, where ${\mathcal{B}}_L$ is the flag variety of $L$. We suppose for the sake of convenience that $\lambda$ is integrally dominant for $L$; in particular, the higher cohomology of ${\mathcal{M}}$ vanishes. 
The choice of $\theta$-stable parabolic $P$ determines a $\theta$-fixed point $x \in {\mathcal{P}}$ in the corresponding partial flag variety, an equivariant embedding ${\mathcal{B}}_L = \pi^{-1}(x) \hookrightarrow {\mathcal{B}}$, and a closed immersion $$i \colon K \times^{K \cap P} {\mathcal{B}}_L = \pi^{-1}(Q) \hookrightarrow {\mathcal{B}},$$ where $\pi \colon {\mathcal{B}} \to {\mathcal{P}}$ is the projection and $Q = Kx \subset {\mathcal{P}}$ is the $K$-orbit of $x$. We write ${\mathcal{M}}_K$ for the $K$-equivariant twisted ${\mathcal{D}}$-module on $\pi^{-1}(Q)$ restricting to ${\mathcal{M}}$ on ${\mathcal{B}}_L$. Pushing forward to ${\mathcal{B}}$, we obtain a module $$i_*{\mathcal{M}}_K \in {\mathrm{HC}}({\mathcal{D}}_{{\mathcal{B}}, \lambda + \rho_P}, K),$$ where $\rho_P = \rho - \rho_L \in {\mathfrak{h}}^*$ is half the sum of the roots in ${\mathfrak{g}}/{\mathfrak{p}}$. The shift by $\rho_P$ just comes from the $\rho$-shift built into the definition of the rings ${\mathcal{D}}_\lambda$. The following well-known result follows reasonably formally from the various adjunctions in play, cf., e.g., [@hmsw1 Theorem 4.3] and [@bien Theorem 4.4]. **Proposition 34**. *We have $${\mathcal{L}}_j(V) = {\mathrm{H}}^{S - j}({\mathcal{B}}, i_*{\mathcal{M}}_K),$$ where $S = \dim K/(K \cap P)$.* Let us now assume that the $({\mathfrak{l}}, L\cap K)$-module $V$ has real infinitesimal character, and choose ${\mathcal{M}}$ so that $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is dominant for $L$. **Theorem 35**. *Assume $\lambda + \rho_P$ is dominant for $G$. Then ${\mathcal{L}}_j(V) = 0$ for $j \neq S$, and if $V$ is unitary then so is ${\mathcal{L}}_S(V)$.* *Proof.* Under the dominance assumption we have $${\mathrm{H}}^j({\mathcal{B}}, i_*{\mathcal{M}}_K) = 0 \quad \mbox{for $j \neq 0$}$$ by Beilinson-Bernstein localization, so the first statement is clear. 
Moreover, if $V$ is unitary, then by Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"} we may choose an action of $\theta$ on ${\mathcal{M}}$ acting on $\mathop{\mathrm{Gr}}^F_p V$ by $(-1)^p$. This induces an involution on $i_*{\mathcal{M}}_K$. Moreover, from the definition of closed pushforward for the Hodge filtration (see, e.g., §[5.2](#subsec:hodge pushforward){reference-type="ref" reference="subsec:hodge pushforward"}), we also have, up to a shift of the filtration, $$\mathop{\mathrm{Gr}}^F{\mathcal{L}}_S(V) = \Gamma({\mathcal{B}}, \mathop{\mathrm{Gr}}^Fi_*{\mathcal{M}}_K) = \Gamma(Q, \mathop{\mathrm{Gr}}^F V \otimes \mathop{\mathrm{Sym}}({\mathcal{N}}_{Q/{\mathcal{P}}}) \otimes \det {\mathcal{N}}_{Q/{\mathcal{P}}}),$$ where ${\mathcal{N}}_{Q/{\mathcal{P}}}$ denotes the normal bundle and we have identified the $K \cap L$-module $\mathop{\mathrm{Gr}}^F V$ with the corresponding $K$-equivariant vector bundle on $Q = K/(K \cap P)$. But now $\theta$ acts trivially on $Q$ and acts on ${\mathcal{N}}_{Q/{\mathcal{P}}}$ with eigenvalue $-1$, so it follows that $\theta$ acts on $\mathop{\mathrm{Gr}}^F_p{\mathcal{L}}_S(V)$ with eigenvalue $(-1)^p$. Hence, ${\mathcal{L}}_S(V)$ is unitary by Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"}. ◻ **Remark 36**. Theorem [Theorem 35](#thm:cohomological induction){reference-type="ref" reference="thm:cohomological induction"} is a version of Vogan's result [@vogan-annals Theorem 1.3]. Note that Vogan's result is stated in terms of the contragredient dual functors ${\mathcal{R}}^j$ instead of ${\mathcal{L}}_j$; this reverses the sign convention for the dominance condition (note that we always assume that unipotent radicals etc. consist of *negative* roots). 
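For orientation, we note the standard special case (our own gloss, not spelled out in the text; normalizations of $A_{\mathfrak{q}}(\lambda)$ differ across the literature, so the identification below should be read up to such conventions):

```latex
% Taking V = \mathbb{C}_\lambda a one-dimensional unitary character of
% (\mathfrak{l}, L \cap K), cohomological induction produces the Zuckerman module
\mathcal{L}_S(\mathbb{C}_\lambda) \cong A_{\mathfrak{q}}(\lambda),
\qquad S = \dim K/(K \cap P),
% so Theorem 35 recovers the unitarity of A_{\mathfrak{q}}(\lambda) in the good
% range (\lambda + \rho_P dominant), the case of [@vogan-annals Theorem 1.3].
```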
# Deformations and wall crossing for mixed Hodge modules {#sec:deformations} In this section, we state our key technical results on the behavior of mixed Hodge modules under certain natural deformations. The main results are generalizations of Theorem [Theorem 7](#thm:intro semi-continuity){reference-type="ref" reference="thm:intro semi-continuity"} from the introduction and [@DV1 Theorem 3.2]. These techniques form the backbone of our proof of the unitarity criterion for Harish-Chandra modules. We believe this perspective is also of independent interest from a purely algebro-geometric point of view: for example, our techniques yield a quick and easy proof of the Kodaira vanishing theorem for twisted Hodge modules (see §[7](#sec:twisted kodaira){reference-type="ref" reference="sec:twisted kodaira"}). We work in the general geometric setting of an affine locally closed immersion $j \colon Q \to X$ (i.e., for which the boundary of $Q$ is a divisor) and a mixed Hodge module ${\mathcal{M}}$ on $Q$. To these data, we associate a real vector space $\Gamma_{\mathbb{R}}(Q)$ and a deformation $f{\mathcal{M}}$ indexed by $f \in \Gamma_{\mathbb{R}}(Q)$. Roughly speaking, our main results are that the intermediate extension $j_{!*}f{\mathcal{M}}$ varies continuously for $f$ outside a discrete set of hyperplanes in $\Gamma_{\mathbb{R}}(Q)$, and that Hodge filtrations and polarizations behave in a controlled way when we cross these hyperplanes in a "positive" direction (specified by a natural positive cone $\Gamma_{\mathbb{R}}(Q)_+ \subset \Gamma_{\mathbb{R}}(Q)$). These positive deformations are a generalization of the deformations constructed from a boundary equation of $Q$, such as in [@BB2 §4.2] or [@DV1 §3.3]. The outline of the section is as follows. In §[3.1](#subsec:deformation construction){reference-type="ref" reference="subsec:deformation construction"}, we write down the general setting for our deformations and discuss the cone of positive deformations. 
We then state the main results precisely in §[3.2](#subsec:wall crossing){reference-type="ref" reference="subsec:wall crossing"}. Finally, in §[3.3](#subsec:monodromic deformation){reference-type="ref" reference="subsec:monodromic deformation"}, we discuss how our results apply in the setting of equivariant and monodromic mixed Hodge modules. ## Construction of standard deformations {#subsec:deformation construction} Let $X$ be a smooth quasi-projective variety and $j \colon Q \hookrightarrow X$ the inclusion of a smooth affinely embedded locally closed subvariety. Consider the set $\Gamma(Q, {\mathcal{O}}_Q^\times)$ of non-vanishing regular functions on $Q$, regarded as an abelian group under multiplication. We will consider deformations of mixed Hodge modules parametrized by the ${\mathbb{R}}$-vector space $$\Gamma_{\mathbb{R}}(Q) := \frac{\Gamma(Q, {\mathcal{O}}_Q^\times)}{{\mathbb{C}}^\times}\otimes {\mathbb{R}}.$$ To be consistent with the notation for $\Gamma(Q, {\mathcal{O}}_Q^\times)$, we will write the operations in this vector space multiplicatively. So, for example, a general element $f \in \Gamma_{\mathbb{R}}(Q)$ is of the form $f = f_1^{s_1} \cdots f_n^{s_n}$ for some $f_i \in \Gamma(Q, {\mathcal{O}}_Q^\times)$ and $s_i \in {\mathbb{R}}$. If ${\mathcal{M}}$ is a mixed Hodge module on $Q$, we define a family of deformed mixed Hodge modules $f{\mathcal{M}}$ parametrized by $f \in \Gamma_{\mathbb{R}}(Q)$ as follows. To each $f = f_1^{s_1}\cdots f_n^{s_n}$, we associate the local system $f{\mathcal{O}}_Q$ given by ${\mathcal{O}}_Q$ with the ${\mathcal{D}}$-module structure $$\partial (fv) = f \partial v + \left(s_1\frac{\partial f_1}{f_1} + \cdots + s_n \frac{\partial f_n}{f_n}\right) fv$$ for vector fields $\partial$ and $v \in {\mathcal{O}}_Q$. 
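As a concrete illustration of the construction (our own toy example), take $X = {\mathbb{C}}$ and $Q = {\mathbb{C}}^\times$, so that the boundary is the single divisor $D = \{0\}$:

```latex
% Units on Q = C^x are the functions c z^n (c in C^x, n in Z), so
\Gamma(Q, \mathcal{O}_Q^\times)/\mathbb{C}^\times \cong \mathbb{Z},
\qquad
\Gamma_{\mathbb{R}}(Q) \cong \mathbb{R},
\qquad
f = z^s \ (s \in \mathbb{R}).
% The deformed local system z^s \mathcal{O}_Q is \mathcal{O}_Q with
\partial_z(z^s v) = z^s\, \partial_z v + \frac{s}{z}\, z^s v,
% i.e., the rank 1 local system on C^x with monodromy e^{2\pi i s}.
% Since ord_D(z^s) = s, the element z^s is positive in the sense of
% Definition 37 below exactly when s > 0.
```

In this example, for ${\mathcal{M}} = {\mathcal{O}}_Q$ the hyperplane arrangement of Proposition 39 below is just ${\mathbb{Z}} \subset {\mathbb{R}}$: the map $j_! z^s{\mathcal{O}}_Q \to j_* z^s{\mathcal{O}}_Q$ is an isomorphism exactly for $s \notin {\mathbb{Z}}$.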
Note that the positive definite Hermitian form $S(fu, \overline{fv}) = |f|^2 u\bar{v}$ (well-defined up to a positive real scalar) is flat with respect to this ${\mathcal{D}}$-module structure, so the local system $f{\mathcal{O}}_Q$ is unitary and therefore underlies a mixed Hodge module with trivial Hodge structure. Hence, if ${\mathcal{M}}$ is any mixed Hodge module on $Q$ then $$f{\mathcal{M}} := f{\mathcal{O}}_Q \otimes {\mathcal{M}}$$ is also a mixed Hodge module with the Hodge and weight filtrations induced from ${\mathcal{M}}$. We are interested in the behavior of the extensions $$j_!f{\mathcal{M}} \twoheadrightarrow j_{!*}f{\mathcal{M}} \hookrightarrow j_* f{\mathcal{M}}$$ as we vary $f$. Our main results concern this behavior when $f$ varies along a ray inside the following positive cone. Choose any normal variety $Q'$ with a proper birational map $Q' \to \bar{Q} \subset X$ that is an isomorphism over $Q$. (For example, we could take $Q'$ to be the normalization of $\bar{Q}$, but it will be convenient to allow other choices also.) Note that, since $Q$ is assumed affinely embedded in $X$, the complement of $Q$ in $\bar{Q}$ is a divisor, and hence so is the complement of $Q$ in $Q'$. If $D \subset Q' - Q$ is an irreducible component, we therefore have a linear map $$\operatorname{ord}_D \colon \Gamma_{\mathbb{R}}(Q) \to {\mathbb{R}}$$ taking the order of vanishing along $D$. **Definition 37**. We say that $f \in \Gamma_{\mathbb{R}}(Q)$ is *positive* if $$\operatorname{ord}_D f > 0$$ for all irreducible components $D \subset Q' - Q$. We write $$\Gamma_{\mathbb{R}}(Q)_+ \subset \Gamma_{\mathbb{R}}(Q)$$ for the set of positive elements. In other words, $f$ is positive if the associated ${\mathbb{R}}$-divisor on $Q'$ is effective and has support equal to the entire boundary of $Q$. The following proposition ensures that the notion of positivity is independent of the choice of $Q'$. 
For the statement, we say that $f \in \Gamma(Q, {\mathcal{O}}_Q^\times)$ is a *boundary equation* if $f$ extends to a regular function on $\bar{Q}$ such that $Q = f^{-1}({\mathbb{C}}^\times)$. **Proposition 38**. *Let $f \in \Gamma_{\mathbb{R}}(Q)$.* 1. *[\[itm:positivity 1\]]{#itm:positivity 1 label="itm:positivity 1"} If $$f \in \frac{\Gamma(Q, {\mathcal{O}}_Q^\times)}{{\mathbb{C}}^\times} \otimes {\mathbb{Q}} \subset \Gamma_{\mathbb{R}}(Q)$$ then $f$ is positive if and only if $f^n$ is the image of a boundary equation for some $n \in {\mathbb{Z}}_{>0}$.* 2. *[\[itm:positivity 2\]]{#itm:positivity 2 label="itm:positivity 2"} In general, $f \in \Gamma_{\mathbb{R}}(Q)$ is positive if and only if $f = \prod_i f_i^{a_i}$ with $f_i$ boundary equations and $a_i > 0$.* *Proof.* To prove [\[itm:positivity 1\]](#itm:positivity 1){reference-type="eqref" reference="itm:positivity 1"}, it is clear that if $f^n$ is a boundary equation, then $f$ is positive. For the converse, by replacing $f$ with some positive integer power, we may assume without loss of generality that $f \in \Gamma(Q, {\mathcal{O}}^\times)$. The condition that $f$ be positive implies that $f$ extends to a regular function on $Q'$ (since $Q'$ is normal and $f$ extends outside codimension $2$) such that $Q = f^{-1}({\mathbb{C}}^\times)$. It therefore remains to show that some power of $f$ descends to a regular function on $\bar{Q}$. First, it is clear that $f$ extends to a regular function on the normalization $Q^{\mathit{norm}}$ of $\bar{Q}$, since the map $Q' \to Q^{\mathit{norm}}$ must be an isomorphism outside a set of codimension $2$ in $Q^{\mathit{norm}}$. Then, locally on $\bar{Q}$, we may write $\bar{Q} = \mathop{\mathrm{Spec}}R$ and $Q^{\mathit{norm}} = \mathop{\mathrm{Spec}}S$. Since $Q$ is affinely embedded in $\bar{Q}$, we may choose $g \in R$ a boundary equation. 
Since $f$ and $g$ have the same vanishing locus on $Q^{\mathit{norm}}$, we have $f^p \in gS$ for some $p \in {\mathbb{Z}}_{> 0}$ by the Nullstellensatz. Moreover, the quotient $S/R$ is a finitely generated $R$-module such that the localization $(S/R)[g^{-1}] = 0$. Hence there exists $q \in {\mathbb{Z}}_{>0}$ such that $g^q(S/R) = 0$. So $g^q S \subset R$. Hence, setting $n = pq$, we have $$f^n \in (gS)^q \subset g^q S \subset R.$$ So $f^n$ extends to a boundary equation for $R$ as claimed. To prove [\[itm:positivity 2\]](#itm:positivity 2){reference-type="eqref" reference="itm:positivity 2"}, write $$f = g_1^{s_1} \cdots g_n^{s_n}$$ for $g_i \in \Gamma(Q, {\mathcal{O}}^\times)$ and $s_i \in {\mathbb{R}}$, and consider the map $\phi \colon {\mathbb{R}}^n \to \Gamma_{\mathbb{R}}(Q)$ sending $(t_1, \ldots, t_n)$ to $g_1^{t_1} \cdots g_n^{t_n}$. Note that since the positivity condition is defined by the positivity of finitely many linear functionals, the set of $t \in {\mathbb{R}}^n$ such that $\phi(t)$ is positive is open. Moreover, for any $\epsilon > 0$, we may choose vectors $v_0, v_1, \ldots, v_n \in {\mathbb{Q}}^n$ within a ball of radius $\epsilon$ around $s = (s_1, \ldots, s_n)$ such that $s$ lies in the interior of the convex hull of the $v_i$. So $$f = \prod_{i = 0}^{n} \phi(v_i)^{b_i}$$ for some $b_i > 0$ with $\sum_i b_i = 1$. But for small enough $\epsilon$, $\phi(v_i) \in \Gamma_{\mathbb{Q}}(Q)$ will be positive for all $i$, and hence $\phi(v_i) = f_i^{1/n_i}$ for some boundary equations $f_i$ and $n_i > 0$ by [\[itm:positivity 1\]](#itm:positivity 1){reference-type="eqref" reference="itm:positivity 1"}. So $$f = \prod_i f_i^{a_i} \quad \text{with} \quad a_i = b_i/n_i > 0$$ as claimed. ◻ ## Wall crossing phenomena {#subsec:wall crossing} We now state our main results. The first result describes the locus of $f \in \Gamma_{\mathbb{R}}(Q)$ such that $j_!f{\mathcal{M}} \to j_*f{\mathcal{M}}$ is an isomorphism. **Proposition 39**. 
*There exists a (non-canonical) finite set $\Psi$ of linear functionals $\alpha \colon \Gamma(Q, {\mathcal{O}}^\times)/{\mathbb{C}}^\times \to {\mathbb{Z}}$ with the following properties.* 1. *[\[itm:hyperplanes 1\]]{#itm:hyperplanes 1 label="itm:hyperplanes 1"} An element $f \in \Gamma_{\mathbb{R}}(Q)$ is positive if and only if $\alpha(f) > 0$ for all $\alpha \in \Psi$.* 2. *[\[itm:hyperplanes 2\]]{#itm:hyperplanes 2 label="itm:hyperplanes 2"} If ${\mathcal{M}}$ is a mixed Hodge module on $Q$, then there exists a hyperplane arrangement in $\Gamma_{\mathbb{R}}(Q)$ of the form $$\bigcup_{\alpha \in \Psi} \alpha^{-1}(A_\alpha + {\mathbb{Z}})$$ for some finite sets $A_\alpha \subset {\mathbb{R}}/{\mathbb{Z}}$, such that $j_!f{\mathcal{M}} \to j_*f{\mathcal{M}}$ is an isomorphism as long as $f$ lies in the complement $${\mathcal{A}} := \Gamma_{\mathbb{R}}(Q) - \bigcup_{\alpha \in \Psi} \alpha^{-1}(A_\alpha + {\mathbb{Z}}).$$* In particular, Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} implies that, for $f$ positive, $j_!f^s{\mathcal{M}} \to j_*f^s{\mathcal{M}}$ is an isomorphism for $s$ outside some discrete set in ${\mathbb{R}}$. The proof of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} is given in §[5.1](#subsec:pf of thm:hyperplanes){reference-type="ref" reference="subsec:pf of thm:hyperplanes"}. In the proof of our main results on real groups, we will need to analyze the behavior of both Hodge filtrations and polarizations as we cross the hyperplanes in Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}. Our next result describes this wall-crossing for Hodge filtrations. **Theorem 40**. 
*In the setting of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} [\[itm:hyperplanes 2\]](#itm:hyperplanes 2){reference-type="eqref" reference="itm:hyperplanes 2"}, the isomorphism class of the coherent sheaf $\mathop{\mathrm{Gr}}^F j_{!*}f{\mathcal{M}}$ on $T^*X$ is constant for $f$ in each connected component of ${\mathcal{A}}$. Moreover, if $f$ is positive, then the Hodge filtrations on $j_!f^s{\mathcal{M}}$ and $j_*f^s{\mathcal{M}}$ are semi-continuous in the sense that $$\mathop{\mathrm{Gr}}^F j_!f^{s - \epsilon}{\mathcal{M}} \cong \mathop{\mathrm{Gr}}^F j_!f^s{\mathcal{M}} \quad \text{and} \quad \mathop{\mathrm{Gr}}^F j_*f^{s + \epsilon} {\mathcal{M}} \cong \mathop{\mathrm{Gr}}^Fj_*f^s{\mathcal{M}}$$ for all $0 < \epsilon \ll 1$.* The proof of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} is given in §[5.3](#subsec:pf of thm:semi-continuity){reference-type="ref" reference="subsec:pf of thm:semi-continuity"}. Our final result determines the wall-crossing behavior of polarizations. It is a generalization to real deformations of [@DV1 Theorem 3.2], the polarized Hodge module version of Beilinson and Bernstein's result [@BB2] on the Jantzen filtration. Let $f \in \Gamma_{\mathbb{R}}(Q)$ be positive. As in [@DV1 §3.3], we assume ${\mathcal{M}}$ pure of weight $w$ and consider the family of morphisms $j_! f^s{\mathcal{M}} \to j_* f^s{\mathcal{M}}$ for $s \in {\mathbb{R}}$ and the formal completion $$\label{eq:jantzen 1} j_! f^s{\mathcal{M}}[[s]] \to j_*f^s{\mathcal{M}} [[s]].$$ The completions are endowed with Hodge and weight filtrations as in [@DV1 §7.2], for example (cf., also §[5.3](#subsec:pf of thm:semi-continuity){reference-type="ref" reference="subsec:pf of thm:semi-continuity"}). 
By Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}, [\[eq:jantzen 1\]](#eq:jantzen 1){reference-type="eqref" reference="eq:jantzen 1"} is an isomorphism after inverting $s$, and hence induces finite Jantzen filtrations on $j_!{\mathcal{M}}$ and $j_*{\mathcal{M}}$ given by $$J_{-n} j_!{\mathcal{M}} = (j_!f^s{\mathcal{M}}[[s]] \cap s^{-n}j_*f^s{\mathcal{M}}[[s]])/(s)$$ and $$J_n j_*{\mathcal{M}} = (s^{-n}j_!f^s{\mathcal{M}}[[s]] \cap j_*f^s{\mathcal{M}}[[s]])/(s)$$ and isomorphisms $$s^n \colon \mathop{\mathrm{Gr}}^J_n j_*{\mathcal{M}} \overset{\sim}\to \mathop{\mathrm{Gr}}^J_{-n} j_!{\mathcal{M}}(-n).$$ Fixing a polarization $S$ on ${\mathcal{M}}$, we obtain a family of perfect pairings $S_s \colon j_!f^s{\mathcal{M}} \to (j_*f^s{\mathcal{M}})^h(-w)$ by pushing forward the polarizations $$f^sm \otimes \overline{f^sm'} \mapsto |f|^{2s}S(m, \overline{m'})$$ on $f^s{\mathcal{M}}$. The pairings $S_s$ induce polarizations on $j_{!*}f^s{\mathcal{M}}$. The Jantzen filtrations on $j_!{\mathcal{M}}$ and $j_*{\mathcal{M}}$ are dual under $S_0$, so we obtain nondegenerate pairings ("Jantzen forms") $$s^{-n} \mathop{\mathrm{Gr}}^J_{-n}(S) \colon \mathop{\mathrm{Gr}}^J_{-n} j_!{\mathcal{M}} \to (\mathop{\mathrm{Gr}}^J_{-n} j_!{\mathcal{M}})^h (-w + n)$$ for all $n$. **Theorem 41**. *For all $n$, $\mathop{\mathrm{Gr}}^J_{-n}j_!{\mathcal{M}}$ is a pure Hodge module of weight $w - n$, and the Jantzen form $s^{-n} \mathop{\mathrm{Gr}}^J_{-n}(S)$ is a polarization. In particular, $J_n j_!{\mathcal{M}} = W_{w + n} j_!{\mathcal{M}}$.* The proof of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"} is given in §[6.4](#subsec:pf of thm:jantzen){reference-type="ref" reference="subsec:pf of thm:jantzen"}. 
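For orientation, here is the simplest instance of these definitions, worked out by hand (our illustration, not taken from the cited sources; we suppress choices of Tate twist conventions where harmless). Take $X = {\mathbb{A}}^1$, $Q = {\mathbb{G}}_m$, $f = x$, and ${\mathcal{M}} = {\mathbb{Q}}^H_Q[1]$ the constant Hodge module, pure of weight $w = 1$. The map $j_!x^s{\mathcal{M}} \to j_*x^s{\mathcal{M}}$ is an isomorphism for $s \notin {\mathbb{Z}}$, and at $s = 0$ one has the standard exact sequences $$0 \to \delta_0 \to j_!{\mathcal{M}} \to j_{!*}{\mathcal{M}} \to 0 \quad \text{and} \quad 0 \to j_{!*}{\mathcal{M}} \to j_*{\mathcal{M}} \to \delta_0(-1) \to 0,$$ where $\delta_0$ denotes the skyscraper Hodge module at the origin (pure of weight $0$) and $j_{!*}{\mathcal{M}} = {\mathbb{Q}}^H_{{\mathbb{A}}^1}[1]$. Here the Jantzen filtration is $J_{-1}j_!{\mathcal{M}} = \delta_0$, so $$\mathop{\mathrm{Gr}}^J_0 j_!{\mathcal{M}} \cong j_{!*}{\mathcal{M}} \ (\text{weight } 1 = w) \quad \text{and} \quad \mathop{\mathrm{Gr}}^J_{-1} j_!{\mathcal{M}} \cong \delta_0 \ (\text{weight } 0 = w - 1),$$ in agreement with both the purity statement and the equality $J_n j_!{\mathcal{M}} = W_{w + n}j_!{\mathcal{M}}$ of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}.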
In the case of rational deformation directions (where $f$ is a boundary equation), Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"} is [@DV1 Theorem 7.4]; the work we do in §[6](#sec:hodge-lefschetz){reference-type="ref" reference="sec:hodge-lefschetz"} is to generalize this to real deformation directions. **Remark 42**. We have presented here a theory that handles deformations along arbitrary real rays inside the positive cone. The theory simplifies somewhat if one considers only rays of rational slope: in this case, the deformations are given by the standard $f^s$ construction associated with a boundary equation $f$ for $Q$. The generalization to real directions has a relatively minor impact on the proofs of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} and Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}, but a larger impact on the proof of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}. We have chosen to write down the full theory for the real directions for two reasons. First, it makes the proof of our main result on unitarity considerably simpler and more transparent. Second, we feel it is helpful to settle once and for all the generality of the result, which should help to avoid many awkward "reduction to rational infinitesimal character" arguments in the future. This kind of issue already arises in the unitarity algorithm, which rests heavily on Beilinson and Bernstein's result that the Jantzen filtration equals the weight filtration; see, for example, [@ALTV Theorem 18.11]. ## The monodromic setting {#subsec:monodromic deformation} In this subsection, we explain how the results above extend to the monodromic and equivariant settings. Let us first define the appropriate deformation spaces. 
For the monodromic version, suppose we are given a Cartesian diagram $$\begin{tikzcd} \tilde{Q} \ar[r] \ar[d, "\pi_Q"] & \tilde{X} \ar[d, "\pi_X"] \\ Q \ar[r] & X, \end{tikzcd}$$ where the vertical arrows are principal $H$-bundles for some torus $H$ and the top arrow is $H$-equivariant. Define $$\Gamma(\tilde Q, {\mathcal{O}}_{\tilde Q}^\times)^{\mathit{mon}}= \coprod_{\mu \in {\mathbb{X}}^*(H)} \Gamma(Q, {\mathcal{O}}_Q(\mu)^\times) \subset \Gamma(\tilde{Q}, {\mathcal{O}}_{\tilde{Q}}),$$ where $${\mathcal{O}}_Q(\mu) = (\pi_{Q*}{\mathcal{O}}_{\tilde{Q}} \otimes {\mathbb{C}}_\mu)^H$$ is the line bundle associated to $\mu \in {\mathbb{X}}^*(H)$, and ${\mathcal{O}}_Q(\mu)^\times$ is its sheaf of non-vanishing sections. We remark that, since any non-vanishing function on a torus is a constant multiple of a character, we have $$\Gamma(\tilde Q, {\mathcal{O}}_{\tilde Q}^\times)^{\mathit{mon}}= \Gamma(\tilde Q, {\mathcal{O}}_{\tilde Q}^\times)$$ if $Q$ is connected. We set $$\Gamma_{\mathbb{R}}(\tilde Q)^{\mathit{mon}}= \frac{\Gamma(\tilde Q, {\mathcal{O}}_{\tilde Q}^\times)^{\mathit{mon}}}{{\mathbb{C}}^\times} \otimes {\mathbb{R}} \subset \Gamma_{\mathbb{R}}(\tilde{Q}).$$ We have a canonical linear map $$\varphi \colon \Gamma_{\mathbb{R}}(\tilde Q)^{\mathit{mon}}\to {\mathfrak{h}}^*_{\mathbb{R}}$$ sending $f = f_1^{s_1} \cdots f_n^{s_n}$ to $$\varphi(f) = \sum_i s_i \mu_i \quad \text{for $f_i \in \Gamma(Q, {\mathcal{O}}_Q(\mu_i)^\times)$}.$$ One easily checks that if ${\mathcal{M}} \in {\mathrm{MHM}}(\tilde{Q})$ is $\lambda$-monodromic then $f{\mathcal{M}}$ is naturally $(\lambda + \varphi(f))$-monodromic. Now suppose that an algebraic group $K$ acts compatibly on $\tilde{Q}$, $\tilde{X}$, $Q$, $X$ and $H$.
We set $$\Gamma_{\mathbb{R}}^K(\tilde Q) = \frac{\Gamma(\tilde Q, {\mathcal{O}}_{\tilde Q}^\times)^K}{{\mathbb{C}}^\times} \otimes {\mathbb{R}} \subset \Gamma_{\mathbb{R}}(\tilde Q)$$ and $$\Gamma_{\mathbb{R}}^K(\tilde Q)^{\mathit{mon}}= \Gamma_{\mathbb{R}}^K(\tilde{Q}) \cap \Gamma_{\mathbb{R}}(\tilde Q)^{\mathit{mon}}.$$ Elements $f \in \Gamma_{\mathbb{R}}^K(\tilde Q)^{\mathit{mon}}$ have the property that $\varphi(f) \in ({\mathfrak{h}}^*_{\mathbb{R}})^K$, and if ${\mathcal{M}}$ is a $K$-equivariant monodromic mixed Hodge module, then so is $f{\mathcal{M}}$. We now consider how the results stated in §[3.2](#subsec:wall crossing){reference-type="ref" reference="subsec:wall crossing"} carry over to this setting. The presence of monodromic or equivariant structures makes no difference to the statements of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} and Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}. For Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}, it is helpful to know that the isomorphisms between associated gradeds respect the group actions: **Proposition 43**. *If ${\mathcal{M}} \in {\mathrm{MHM}}_{{\mathit{mon}}}^K(\tilde{Q})$ and $f \in \Gamma_{\mathbb{R}}^K(\tilde Q)^{{\mathit{mon}}}$ is positive, then the isomorphisms of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} respect the actions of $K$ and $H$.
In particular, identifying a monodromic mixed Hodge module on $\tilde X$ with its underlying filtered $\tilde{{\mathcal{D}}}$-module on $X$, for $0 < \epsilon \ll 1$ we have isomorphisms $$\mathop{\mathrm{Gr}}^F j_!f^{s - \epsilon} {\mathcal{M}} \cong \mathop{\mathrm{Gr}}^F j_!f^s{\mathcal{M}} \quad \text{and} \quad \mathop{\mathrm{Gr}}^F j_*f^{s + \epsilon}{\mathcal{M}} \cong \mathop{\mathrm{Gr}}^F j_*f^s{\mathcal{M}}$$ of $K$-equivariant graded $\mathop{\mathrm{Gr}}^F \tilde{{\mathcal{D}}}$-modules.* Proposition [Proposition 43](#prop:monodromic semi-continuity){reference-type="ref" reference="prop:monodromic semi-continuity"} is in fact clear from the proof of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}: the isomorphisms are constructed in a sufficiently canonical way that they are automatically equivariant. # Proof of the unitarity criterion {#sec:pf of unitarity} In this section, we give the proof of Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"}, the unitarity criterion for Harish-Chandra modules. The proof makes use of several of our technical results whose proofs have been deferred until later. 
These are: - Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} on cohomology vanishing for the Hodge filtration (proved in §[8](#sec:vanishing){reference-type="ref" reference="sec:vanishing"}), - Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} and Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} on semi-continuity of the Hodge filtration (proved in §[5](#sec:semi-continuity){reference-type="ref" reference="sec:semi-continuity"}), and - Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"} on the Jantzen forms (proved in §[6](#sec:hodge-lefschetz){reference-type="ref" reference="sec:hodge-lefschetz"}). Given these ingredients, the proof more or less follows the sketch outlined by Adams, Trapa and Vogan in the manuscript [@ATV]. In §[4.1](#subsec:hodge and signature){reference-type="ref" reference="subsec:hodge and signature"}, we reduce Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"} to a numerical statement about Hodge and signature characters of Harish-Chandra modules (Theorem [Theorem 46](#thm:hodge and signature K'){reference-type="ref" reference="thm:hodge and signature K'"}), a weak form of Conjecture [Conjecture 28](#conj:schmid-vilonen){reference-type="ref" reference="conj:schmid-vilonen"}. This numerical statement is then proved by running a version of the [@ALTV] algorithm to reduce to the case of tempered representations (for which the conjecture is known by [@DV2]): we give this argument in §§[4.4](#subsec:pf of signature K){reference-type="ref" reference="subsec:pf of signature K"}--[4.5](#subsec:pf of signature K'){reference-type="ref" reference="subsec:pf of signature K'"}. 
In order to make the argument work, we need to know the cones of positive deformations of §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"} in the case of Harish-Chandra sheaves: we compute these explicitly in §[4.3](#subsec:hc deformations){reference-type="ref" reference="subsec:hc deformations"} after recalling some necessary background about the structure of $K$-orbits on the flag variety in §[4.2](#subsec:K-orbits){reference-type="ref" reference="subsec:K-orbits"}. ## Hodge and signature characters {#subsec:hodge and signature} We recall the setup of §[2.7](#subsec:intro harish-chandra){reference-type="ref" reference="subsec:intro harish-chandra"}. So $G$ is a complex reductive group, $G_{\mathbb{R}} \subset G$ is a real form, $U_{\mathbb{R}} \subset G$ is a compatible compact real form (so that $K_{\mathbb{R}} = G_{\mathbb{R}} \cap U_{\mathbb{R}} \subset G_{\mathbb{R}}$ is a maximal compact subgroup), $\theta \colon G \to G$ is the corresponding Cartan involution and $K = K_{\mathbb{R}}\otimes {\mathbb{C}} = G^\theta$. Let us now fix $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ dominant and $({\mathcal{M}}, S)$ a polarized Hodge module in ${\mathrm{MHM}}({\mathcal{D}}_{\lambda}, K)$. We wish to relate the signature of the ${\mathfrak{u}}_{\mathbb{R}}$-invariant form $\Gamma(S)$ on the Harish-Chandra module $\Gamma({\mathcal{M}})$ to the Hodge structure on ${\mathcal{M}}$. To this end, we introduce the following numerical invariants. 
First, we write $$\chi^{\mathit{sig}}({\mathcal{M}}, \zeta) = \sum_{\mu \in \widehat{K}} (m^+({\mathcal{M}}, \mu) + \zeta m^-({\mathcal{M}}, \mu))[\mu] \in \widehat{{\mathrm{Rep}}}(K)[\zeta]/(\zeta^2 - 1),$$ where the sum is over isomorphism classes of irreducible representations $\mu$ of $K$, $(m^+({\mathcal{M}}, \mu), m^-({\mathcal{M}}, \mu))$ is the signature of the non-degenerate Hermitian form $\Gamma(S)$ on the (finite dimensional) multiplicity space ${\mathrm{Hom}}_K(\mu, \Gamma({\mathcal{M}}))$ and $$\widehat{{\mathrm{Rep}}}(K) = \prod_{\mu \in \widehat{K}}{\mathbb{Z}}\cdot [\mu]$$ is the completion of the Grothendieck group ${\mathrm{Rep}}(K)$ of finite dimensional $K$-modules. We will call the quantity $\chi^{\mathit{sig}}({\mathcal{M}}, \zeta)$ the *signature $K$-character*; it keeps track of numerical information about the ${\mathfrak{u}}_{\mathbb{R}}$-invariant form $\Gamma(S)$. We also write $$\chi^H({\mathcal{M}}, u) = \sum_{p \in {\mathbb{Z}}} [\mathop{\mathrm{Gr}}^F_p\Gamma({\mathcal{M}})]u^p \in {\mathrm{Rep}}(K)((u)).$$ We call $\chi^H({\mathcal{M}}, u)$ the *Hodge $K$-character*; it keeps track of numerical information about the Hodge filtration on $\Gamma({\mathcal{M}})$. Note that $$\mathop{\mathrm{Gr}}^F_p \Gamma({\mathcal{M}}) = \Gamma(\mathop{\mathrm{Gr}}^F_p{\mathcal{M}})$$ by Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"}. Since $\Gamma({\mathcal{M}})$ is an admissible $({\mathfrak{g}}, K)$-module, the Hodge $K$-character lies in the subgroup $${\mathrm{Rep}}(K)((u))^{\mathit{adm}} \subset {\mathrm{Rep}}(K)((u))$$ consisting of formal Laurent series $$a_n u^n + a_{n + 1}u^{n + 1} + \cdots, \quad a_i \in {\mathrm{Rep}}(K)$$ such that the class $[\mu]$ of each irreducible $K$-module $\mu$ appears in only finitely many of the $a_i$. 
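To illustrate the admissibility condition (our example): if $\mu_0, \mu_1, \mu_2, \ldots$ are pairwise distinct irreducible $K$-modules, then $\sum_{n \geq 0} [\mu_n]u^n$ lies in ${\mathrm{Rep}}(K)((u))^{\mathit{adm}}$ even though it has infinitely many non-zero coefficients, whereas $[\mu](1 + u + u^2 + \cdots)$ does not, since $[\mu]$ appears in every coefficient. For $\Gamma({\mathcal{M}})$ itself, admissibility simply reflects the finiteness of the $K$-multiplicities: each multiplicity space ${\mathrm{Hom}}_K(\mu, \Gamma({\mathcal{M}}))$ is finite dimensional, so $[\mu]$ can contribute to only finitely many Hodge degrees.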
There is a well-defined reduction map $$\begin{aligned} {\mathrm{Rep}}(K)((u))^{\mathit{adm}} &\to \widehat{{\mathrm{Rep}}}(K)[\zeta]/(\zeta^2 - 1) \\ \chi(u) &\mapsto \chi(\zeta) \mod \zeta^2 - 1.\end{aligned}$$ We prove the following result in §[4.4](#subsec:pf of signature K){reference-type="ref" reference="subsec:pf of signature K"}. **Theorem 44**. *Assume $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is dominant, and let $({\mathcal{M}}, S)$ be a polarized object of weight $w$ in ${\mathrm{MHM}}({\mathcal{D}}_\lambda, K)$. Then $$\chi^{\mathit{sig}}({\mathcal{M}}, \zeta) = \zeta^c\chi^H({\mathcal{M}}, \zeta) \mod \zeta^2 - 1,$$ where $c = \dim {\mathcal{B}} - w$.* **Remark 45**. If ${\mathcal{M}}$ is the intermediate extension of a local system on a $K$-orbit $Q$ equipped with its standard Hodge structure as in [@DV1 §2.3], then $c$ is simply the codimension of $Q$, which is the lowest index $p$ for which $F_p{\mathcal{M}} \neq 0$. Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"} is an obvious consequence of Conjecture [Conjecture 28](#conj:schmid-vilonen){reference-type="ref" reference="conj:schmid-vilonen"}; since we prove it unconditionally, we regard it as strong evidence for the full conjecture. Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"} shows that the Hodge filtration controls the signature of the $({\mathfrak{u}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form on $\Gamma({\mathcal{M}})$. In order to deduce the signature of the $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant form (when it exists), we need the following mild refinement.
Recall from §[2.7](#subsec:intro harish-chandra){reference-type="ref" reference="subsec:intro harish-chandra"} that the Harish-Chandra module $V = \Gamma({\mathcal{M}})$ carries a non-degenerate $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant Hermitian form if and only if $V \cong \theta^*V$, which in turn holds if and only if $\lambda = \delta \lambda$ and ${\mathcal{M}} \cong \theta^*{\mathcal{M}}$. Here we recall that $\theta$ acts on the flag variety ${\mathcal{B}}$ and the base affine space $\tilde{{\mathcal{B}}}$ and that we write $\delta \colon H \to H$ for the induced action on the universal Cartan $H$. If we fix an isomorphism $\theta \colon \theta^*{\mathcal{M}} \cong {\mathcal{M}}$, squaring to the identity, then we have a $({\mathfrak{g}}_{\mathbb{R}}, K_{\mathbb{R}})$-invariant form on $\Gamma({\mathcal{M}})$ given by $$\langle u, \bar{v} \rangle_{{\mathfrak{g}}_{\mathbb{R}}} = \Gamma(S)(\theta(u), \bar{v}).$$ We thus need a version of Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"} that keeps track of the action of $\theta$ on ${\mathcal{M}}$. We can package this neatly using the trick of passing to the extended group (cf., [@ALTV Chapter 12]). Let $K' = K \times \{1, \theta\}$. Then we have compatible actions of $K'$ on ${\mathcal{B}}$, $\tilde{{\mathcal{B}}}$ and $H$. The choice of involution on ${\mathcal{M}}$ upgrades it to an object in ${\mathrm{MHM}}({\mathcal{D}}_\lambda, K')$. More generally, we can consider polarized objects in the larger monodromic category $${\mathcal{M}} \in {\mathrm{MHM}}(\tilde{{\mathcal{D}}}, K').$$ We will say that ${\mathcal{M}}$ has *dominant monodromies* if $${\mathcal{M}} = \bigoplus_{\lambda} {\mathcal{M}}_\lambda, \quad {\mathcal{M}}_\lambda \in {\mathrm{MHM}}({\mathcal{D}}_\lambda, K)$$ with each $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ dominant. 
In this setting, we may define signature and Hodge $K'$-characters $$\chi^{\mathit{sig}}({\mathcal{M}}, \zeta) \in \widehat{{\mathrm{Rep}}}(K')[\zeta]/(\zeta^2 - 1) \quad \mbox{and} \quad \chi^H({\mathcal{M}}, u) \in {\mathrm{Rep}}(K')((u))^{\mathit{adm}}$$ just as before. The $K'$-equivariant version of Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"} also holds. **Theorem 46**. *Let ${\mathcal{M}}$ be an irreducible object in ${\mathrm{MHM}}(\tilde{{\mathcal{D}}}, K')$ with dominant monodromies and polarization $S$. If ${\mathcal{M}}$ is pure of weight $w$, then $$\chi^{\mathit{sig}}({\mathcal{M}}, \zeta) = \zeta^c\chi^H({\mathcal{M}}, \zeta) \mod \zeta^2 - 1,$$ where $c = \dim {\mathcal{B}} - w$.* We explain how to extend the proof of Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"} to the $K'$-equivariant setting in §[4.5](#subsec:pf of signature K'){reference-type="ref" reference="subsec:pf of signature K'"}. Theorem [Theorem 46](#thm:hodge and signature K'){reference-type="ref" reference="thm:hodge and signature K'"} implies the unitarity criterion: *Proof of Theorem [Theorem 32](#thm:unitarity criterion){reference-type="ref" reference="thm:unitarity criterion"} modulo Theorem [Theorem 46](#thm:hodge and signature K'){reference-type="ref" reference="thm:hodge and signature K'"}.* The irreducible Harish-Chandra module $V$ is unitary if and only if the ${\mathfrak{g}}_{\mathbb{R}}$-invariant form $$\label{eq:unitarity criterion 1} \langle u, \bar{v}\rangle_{{\mathfrak{g}}_{\mathbb{R}}} = \Gamma(S)(\theta(u), \bar{v})$$ is either positive or negative definite. This form is positive definite if and only if, for each $K'$-type $\mu$, the form $\Gamma(S)$ is $\epsilon(\mu)$-definite on the multiplicity space ${\mathrm{Hom}}_{K'}(\mu, V)$, where $\epsilon(\mu)$ is the eigenvalue of $\theta \in Z(K')$ acting on $\mu$. 
But by Theorem [Theorem 46](#thm:hodge and signature K'){reference-type="ref" reference="thm:hodge and signature K'"}, this holds if and only if $\mathop{\mathrm{Gr}}^F_p{\mathrm{Hom}}_{K'}(\mu, V) \neq 0$ implies $\epsilon(\mu) = (-1)^{p + c}$ for all $p$, which in turn holds if and only if $\theta$ acts on $\mathop{\mathrm{Gr}}^F_p V$ with eigenvalue $(-1)^{p + c}$ for all $p$. Similarly, the ${\mathfrak{g}}_{\mathbb{R}}$-invariant form [\[eq:unitarity criterion 1\]](#eq:unitarity criterion 1){reference-type="eqref" reference="eq:unitarity criterion 1"} is negative definite if and only if $\theta$ acts on $\mathop{\mathrm{Gr}}^F_p V$ with eigenvalue $(-1)^{p + c + 1}$ for all $p$, so this proves the theorem. ◻ ## Recollections on $K$-orbits and Harish-Chandra sheaves {#subsec:K-orbits} In this subsection, we briefly recall some basic terminology and properties of $K$-orbits and Harish-Chandra sheaves on ${\mathcal{B}}$, which we will use presently. Let $Q \subset {\mathcal{B}}$ be a $K$-orbit. Choose any point $x \in Q$. Then, by a result of Matsuki, there exists a $\theta$-stable maximal torus $T \subset G$ fixing $x$. Writing $B_x = {\mathrm{Stab}}_G(x)$ for the associated Borel subgroup and $\tau_x \colon B_x \to H$ for the quotient by the unipotent radical, we obtain an isomorphism $$\tau_x \colon T \overset{\sim}\to H.$$ Transporting $\theta|_T$ along this map, we obtain an involution $\theta_Q \colon H \to H$. One can easily check that $\theta_Q$ is independent of the choice of $x \in Q$. By construction, $\theta_Q$ acts naturally on the root datum of $G$. We have the following standard terminology. **Definition 47**. Let $\alpha \in \Phi_+$ be a root and $Q \subset {\mathcal{B}}$ a $K$-orbit. 
We say that $\alpha$ is - *complex* if $\theta_Q \alpha \not\in\{\pm \alpha\}$, - *real* if $\theta_Q \alpha = -\alpha$, - *compact imaginary* if $\theta_Q \alpha = \alpha$ and $\theta|_{{\mathfrak{g}}_\alpha} = + 1$, - *non-compact imaginary* if $\theta_Q \alpha = \alpha$ and $\theta|_{{\mathfrak{g}}_\alpha} = -1$. Here $${\mathfrak{g}} = {\mathfrak{t}} \oplus \bigoplus_{\alpha \in \Phi} {\mathfrak{g}}_\alpha$$ is the root space decomposition with respect to a $\theta$-stable maximal torus $T$ fixing a point $x \in Q$ as above. We emphasize that these notions depend on the choice of orbit $Q$. The different root types correspond to different shapes of the orbit when intersected with standard subvarieties in ${\mathcal{B}}$. For example, if $\alpha$ is a simple root and $L \cong {\mathbb{P}}^1$ is the corresponding line through $x \in Q$, then $$\label{eq:root lines} L \cap Q \cong \begin{cases} {\mathbb{A}}^1, & \text{if $\alpha$ is complex and $\theta_Q \alpha \in \Phi_-$}, \\ {\mathrm{pt}}, & \text{if $\alpha$ is complex and $\theta_Q\alpha \in \Phi_+$}, \\ {\mathbb{G}}_m, & \text{if $\alpha$ is real}, \\ {\mathbb{P}}^1, & \text{if $\alpha$ is compact imaginary}, \\ {\mathrm{pt}} \text{ or } {\mathrm{pt}} \cup {\mathrm{pt}}, & \text{if $\alpha$ is non-compact imaginary}.\end{cases}$$ See, for example, [@vogan-ic3 Lemma 5.1]. We now consider $K$-equivariant $\lambda$-twisted local systems $\gamma$ on $Q$ for some $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$. These have an explicit combinatorial parametrization, see, e.g., [@DV1 §2.4]. The following two properties play an important role. **Definition 48**. We say that $\gamma$ (or $j_{!*}\gamma$) is - *regular* if $\lambda$ is dominant and $\Gamma(j_{!*}\gamma) \neq 0$, and - *tempered* if it is regular and $\lambda = \theta_Q\lambda$. Here $j \colon Q \to {\mathcal{B}}$ is the inclusion. 
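By way of example (a standard computation, included here for orientation): take $G = {\mathrm{SL}}_2$ with $G_{\mathbb{R}} = {\mathrm{SL}}_2({\mathbb{R}})$, so that $K \cong {\mathbb{G}}_m$ acts on ${\mathcal{B}} \cong {\mathbb{P}}^1$ with two fixed points and one open orbit $Q \cong {\mathbb{G}}_m$. On the open orbit, the simple root $\alpha$ is real ($\theta_Q\alpha = -\alpha$) and $L \cap Q \cong {\mathbb{G}}_m$; at either fixed point, $\alpha$ is non-compact imaginary and $L \cap Q = {\mathrm{pt}}$, in agreement with [\[eq:root lines\]](#eq:root lines){reference-type="eqref" reference="eq:root lines"}. (For the compact form $G_{\mathbb{R}} = {\mathrm{SU}}(2)$ one has $K = G$, a single orbit $Q = {\mathcal{B}}$, and $\alpha$ compact imaginary.) In terms of Definition 48: on the open orbit $\theta_Q = -1$ on ${\mathfrak{h}}^*_{\mathbb{R}}$, so temperedness forces $\lambda = 0$ (the tempered spherical principal series), while at the fixed points $\theta_Q = +1$, so every regular parameter is tempered (the discrete series).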
The regular local systems agree with the final parameters of, say, [@ALTV], and are in bijection with irreducible Harish-Chandra modules with real infinitesimal character (see also [@DV1 §4.1]). The tempered local systems are precisely those such that the corresponding Harish-Chandra module is tempered in the traditional sense; in particular, they are always unitary. The Schmid-Vilonen conjecture (and hence Theorem [Theorem 46](#thm:hodge and signature K'){reference-type="ref" reference="thm:hodge and signature K'"}) is known for tempered Harish-Chandra sheaves [@DV2]. The partial flag varieties determined by the set of singular coroots for $\lambda$ play an important role in what follows. Let us briefly recall how the general theory works. Fix any subset $S$ of the simple roots for $G$, with corresponding simple coroots $\check S$. Then there is an associated conjugacy class of parabolic subgroups of $G$, parametrized by a partial flag variety ${\mathcal{P}}_S$ with projection map $\pi_S \colon {\mathcal{B}} \to {\mathcal{P}}_S$. We fix the bijection so that, for any choice of Borel $B \subset G$, with roots $\Phi_-$, the corresponding parabolic subgroup $P$ containing $B$ has roots $\Phi_- \cup \Phi_S$, where $\Phi_S \subset \Phi$ is the set of roots spanned by $S$. We note that, with this convention, $${\mathrm{Pic}}^G({\mathcal{P}}_S) \subset {\mathrm{Pic}}^G({\mathcal{B}}) = {\mathbb{X}}^*(H)$$ is the set of characters $\mu$ such that $\langle \mu, \check\alpha \rangle = 0$ for $\check\alpha \in \check S$. We write $H_S$ for the quotient of $H$ with this character group. The cone of ample line bundles on ${\mathcal{P}}_S$ is $${\mathbb{X}}^*(H_S)_+ = \{\mu \in {\mathbb{X}}^*(H_S) \mid \langle \mu, \check\alpha \rangle > 0 \text{ for } \check\alpha \in \check\Phi_+ - \check\Phi_S\}.$$ **Proposition 49**. *Let $\gamma$ be a regular twisted local system on $Q$. Then:* 1.
*[\[itm:regular hc 1\]]{#itm:regular hc 1 label="itm:regular hc 1"} If $\alpha$ is a simple root such that $\theta_Q \alpha \in \Phi_-$ and $\langle \lambda, \check\alpha\rangle = 0$, then $\alpha$ is real.* 2. *[\[itm:regular hc 2\]]{#itm:regular hc 2 label="itm:regular hc 2"} Let $S$ be the set of real simple roots $\alpha$ such that $\langle \lambda, \check\alpha\rangle = 0$. Let $\pi_S \colon {\mathcal{B}} \to {\mathcal{P}}_S$ be the map to the corresponding partial flag variety and set $Q_S = \pi_S^{-1}\pi_S(Q)$. Then $Q$ is open in $Q_S$ and $\gamma$ extends cleanly to $Q_S$ (i.e., the $!$ and $*$ extensions coincide).* Statement [\[itm:regular hc 1\]](#itm:regular hc 1){reference-type="eqref" reference="itm:regular hc 1"} is an elementary consequence of the definitions and [\[eq:root lines\]](#eq:root lines){reference-type="eqref" reference="eq:root lines"}, while [\[itm:regular hc 2\]](#itm:regular hc 2){reference-type="eqref" reference="itm:regular hc 2"} follows from [@hmsw Theorems 8.7 and 9.1]. ## Positive deformations for $K$-orbits {#subsec:hc deformations} We now specialize the theory of §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"} to the setting of certain unions of $K$-orbits and write down the deformation spaces and cones of positive elements explicitly. We consider the following locally closed subvarieties of the flag variety ${\mathcal{B}}$ (cf., Proposition [Proposition 49](#prop:regular hc){reference-type="ref" reference="prop:regular hc"} [\[itm:regular hc 2\]](#itm:regular hc 2){reference-type="eqref" reference="itm:regular hc 2"} above). Fix a $K$-orbit $Q \subset {\mathcal{B}}$ and a set $S$ of simple roots. This defines a partial flag variety ${\mathcal{P}}_S$ and a fibration $\pi_S \colon {\mathcal{B}} \to {\mathcal{P}}_S$ as above. Define $$Q_S := \pi_S^{-1}\pi_S(Q).$$ We note that $Q_S$ is $K$-stable. If $\theta(Q) = Q$ and $\delta(S) = S$, then $Q_S$ is also stable under $K'$. 
In what follows, we write $${\mathfrak{h}}^*_{S, {\mathbb{R}}} = {\mathbb{X}}^*(H_S) \otimes {\mathbb{R}} = \{\mu \in {\mathfrak{h}}^*_{\mathbb{R}} \mid \langle \mu, \check\alpha \rangle = 0 \text{ for } \alpha \in S\}$$ and $${\mathfrak{h}}^*_{S, {\mathbb{R}}, +} = {\mathbb{R}}_{>0}\text{-span}({\mathbb{X}}^*(H_S)_+) = \{\mu \in {\mathfrak{h}}^*_{S, {\mathbb{R}}} \mid \langle \mu, \check\alpha \rangle > 0 \text{ for }\alpha \in \Phi_+ - \Phi_S\}.$$ We also write $\tilde Q$ and $\tilde{Q}_S$ for the pre-images of $Q$ and $Q_S$ in $\tilde{{\mathcal{B}}}$. **Proposition 50**. *In the setting above, assume that the root system $\Phi_S \subset \Phi$ spanned by $S$ is stable under $\theta_Q$. Then $Q_S$ is affinely embedded in ${\mathcal{B}}$, and:* 1. *[\[itm:hc deformations 1\]]{#itm:hc deformations 1 label="itm:hc deformations 1"} The map $$\varphi \colon \Gamma_{\mathbb{R}}^K(\tilde{Q}_S)^{\mathit{mon}}\to {\mathfrak{h}}^*_{\mathbb{R}}$$ factors through an isomorphism $$\Gamma_{\mathbb{R}}^K(\tilde{Q}_S)^{\mathit{mon}}\cong ({\mathfrak{h}}^*_{S, {\mathbb{R}}})^{-\theta_Q}$$ onto the $(-1)$-eigenspace of $\theta_Q$ acting on ${\mathfrak{h}}^*_{S, {\mathbb{R}}} \subset {\mathfrak{h}}^*_{\mathbb{R}}$.* 2. *[\[itm:hc deformations 2\]]{#itm:hc deformations 2 label="itm:hc deformations 2"} Under the above identification, we have $$\Gamma_{\mathbb{R}}^K(\tilde{Q}_S)^{\mathit{mon}}_+ \supset (1 - \theta_Q){\mathfrak{h}}^*_{S, {\mathbb{R}}, +}.$$* 3. *[\[itm:hc deformations 3\]]{#itm:hc deformations 3 label="itm:hc deformations 3"} If $Q_S$ is $K'$-stable, then $$\Gamma_{\mathbb{R}}^{K'}(\tilde{Q}_S)^{\mathit{mon}}= ({\mathfrak{h}}^*_{S, {\mathbb{R}}})^{-\theta_Q} \cap ({\mathfrak{h}}^*_{\mathbb{R}})^\delta.$$* In fact, one can easily check that the containment in [\[itm:hc deformations 2\]](#itm:hc deformations 2){reference-type="eqref" reference="itm:hc deformations 2"} is an equality, but we will not use this fact.
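To make this concrete (our computation, in the smallest case): take $G = {\mathrm{SL}}_2$ with $G_{\mathbb{R}} = {\mathrm{SL}}_2({\mathbb{R}})$, let $Q \subset {\mathcal{B}} \cong {\mathbb{P}}^1$ be the open $K$-orbit, and set $S = \emptyset$. Then $\theta_Q = -1$ on ${\mathfrak{h}}^*_{\mathbb{R}}$, so [\[itm:hc deformations 1\]](#itm:hc deformations 1){reference-type="eqref" reference="itm:hc deformations 1"} gives $$\Gamma_{\mathbb{R}}^K(\tilde{Q})^{\mathit{mon}}\cong ({\mathfrak{h}}^*_{\mathbb{R}})^{-\theta_Q} = {\mathfrak{h}}^*_{\mathbb{R}} \cong {\mathbb{R}},$$ and [\[itm:hc deformations 2\]](#itm:hc deformations 2){reference-type="eqref" reference="itm:hc deformations 2"} says that the positive cone contains $$(1 - \theta_Q){\mathfrak{h}}^*_{\emptyset, {\mathbb{R}}, +} = 2\{\mu \in {\mathfrak{h}}^*_{\mathbb{R}} \mid \langle \mu, \check\alpha \rangle > 0\},$$ i.e., the full dominant ray. Deforming by such an $f^s$ moves the monodromy parameter $\lambda$ along the real spherical principal series; deformations of this kind drive the reduction to tempered parameters sketched in §[4.4](#subsec:pf of signature K){reference-type="ref" reference="subsec:pf of signature K"}.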
*Proof.* We first prove [\[itm:hc deformations 1\]](#itm:hc deformations 1){reference-type="eqref" reference="itm:hc deformations 1"} and [\[itm:hc deformations 3\]](#itm:hc deformations 3){reference-type="eqref" reference="itm:hc deformations 3"} when $S = \emptyset$; in this case, $Q_S = Q$ is a single $K$-orbit. Fix $x \in Q$. Then we may identify the fiber of $\tilde{{\mathcal{B}}} \to {\mathcal{B}}$ over $x$ with $H$, and $$\Gamma(\tilde{Q}, {\mathcal{O}}_{\tilde{Q}}^\times)^K \cong \Gamma(H, {\mathcal{O}}_H^\times)^{{\mathrm{Stab}}_K(x)}.$$ By construction, ${\mathrm{Stab}}_K(x)$ acts on $H$ via the surjective homomorphism $$\tau_x \colon {\mathrm{Stab}}_K(x) \to H^{\theta_Q},$$ so $$\Gamma(\tilde{Q}, {\mathcal{O}}_{\tilde{Q}}^\times)^K \cong \Gamma(H, {\mathcal{O}}_H^\times)^{H^{\theta_Q}} = \Gamma(H/H^{\theta_Q}, {\mathcal{O}}^\times).$$ Now for $\mu \in {\mathbb{X}}^*(H)$, $\Gamma(Q, {\mathcal{O}}_Q(\mu)^\times)^K$ is identified with the subset $$\Gamma(Q, {\mathcal{O}}_Q(\mu)^\times)^K = \begin{cases} {\mathbb{C}}^\times \cdot \mu, &\mbox{if $\mu \in {\mathbb{X}}^*(H/H^{\theta_Q})$,} \\ \emptyset, &\mbox{otherwise.} \end{cases}$$ So we deduce that $\varphi$ defines an isomorphism $$\Gamma_{\mathbb{R}}^K(\tilde Q)^{\mathit{mon}}\overset{\sim}\to {\mathbb{X}}^*(H/H^{\theta_Q}) \otimes {\mathbb{R}} = ({\mathfrak{h}}^*_{\mathbb{R}})^{-\theta_Q} \subset {\mathfrak{h}}^*_{\mathbb{R}},$$ proving [\[itm:hc deformations 1\]](#itm:hc deformations 1){reference-type="eqref" reference="itm:hc deformations 1"} for $S = \emptyset$. We also deduce [\[itm:hc deformations 3\]](#itm:hc deformations 3){reference-type="eqref" reference="itm:hc deformations 3"} by observing that $$\Gamma(Q, {\mathcal{O}}_Q(\mu)^\times)^{K'} = \Gamma(Q, {\mathcal{O}}_Q(\mu)^\times)^{K}$$ as long as $\mu \in 2{\mathbb{X}}^*(H/H^{\theta_Q})^\delta$.
Now consider [\[itm:hc deformations 1\]](#itm:hc deformations 1){reference-type="eqref" reference="itm:hc deformations 1"} and [\[itm:hc deformations 3\]](#itm:hc deformations 3){reference-type="eqref" reference="itm:hc deformations 3"} when $S \neq \emptyset$. We may assume without loss of generality that $Q \subset Q_S$ is open; the claims now follow from the fact that $f \in {\mathrm{H}}^0(Q, {\mathcal{O}}(\mu))$ extends to a non-vanishing section of ${\mathcal{O}}(\mu)$ on $Q_S$ if and only if ${\mathcal{O}}(\mu)$ descends to ${\mathcal{P}}_S$. We now prove that $Q_S$ is affinely embedded, and [\[itm:hc deformations 2\]](#itm:hc deformations 2){reference-type="eqref" reference="itm:hc deformations 2"}. Following [@BB2 Lemma 3.5.3], we will show that $Q_S$ has a boundary equation in ${\mathrm{H}}^0(\bar{Q}_S, {\mathcal{O}}(\mu - \theta_Q\mu))^K$ for any $\mu \in {\mathbb{X}}^*(H_S)_+$. This implies the result by Proposition [Proposition 38](#prop:positivity){reference-type="ref" reference="prop:positivity"}. To construct the boundary equation, consider the map $$p \colon {\mathcal{B}} \xrightarrow{(1, \theta)} {\mathcal{B}} \times {\mathcal{B}} \xrightarrow{(\pi_S, \pi_{\delta(S)})} {\mathcal{P}}_S \times {\mathcal{P}}_{\delta(S)}.$$ Fix $x \in Q \subset {\mathcal{B}}$ corresponding to a Borel $B = B_x$, let $P = P_{\pi_S(x)}$ be the corresponding parabolic and let $$X = G \cdot (P, \theta(P)) \subset {\mathcal{P}}_S \times {\mathcal{P}}_{\delta(S)}$$ be the corresponding $G$-orbit. Then, by the argument of [@BB2 Lemma 3.5.3], $$Q_S \subset p^{-1}(X)$$ is a connected component. Let us write ${\mathcal{P}}_S = G/P$ and ${\mathcal{P}}_{\delta(S)} = G/\theta(P)$. Then, since $\theta_Q$ preserves $\Phi_S$, $P \cap \theta(P)$ contains a Levi subgroup of both $P$ and $\theta(P)$. Now, $\mu \in {\mathbb{X}}^*(H_S)_+$ defines a character of $P$, and hence also of $\theta(P)$ via this common Levi; note that $\mu$ is dominant for $P$ but not necessarily for $\theta(P)$. 
Departing slightly from our usual conventions, we write ${\mathcal{O}}(\mu, -\mu)$ for the corresponding line bundle on $G/P \times G/\theta(P)$. We claim that $X$ has a $G$-invariant boundary equation in ${\mathrm{H}}^0(\bar{X}, {\mathcal{O}}(\mu, -\mu))$; pulling back along $p$, this gives the desired boundary equation for $\bar{Q}_S$. It remains to prove the claim. This is equivalent to the claim that the $\theta(P)$-orbit $X_0$ of the base point in $G/P$ has a boundary equation in ${\mathrm{H}}^0(\bar{X}_0, {\mathcal{O}}(\mu))$ on which $\theta(P)$ acts with character $\mu$. Since $P$ and $\theta(P)$ contain a common Levi, this orbit is also a $\theta(N)$-orbit (where $N \subset B$ is the unipotent radical), so we may apply the argument of [@BB2 Lemma 3.5.1] to conclude that there is such a boundary equation, given by the highest weight vector in ${\mathrm{H}}^0(G/P, {\mathcal{O}}(\mu))$ restricted to the orbit. ◻ ## Proof in the $K$-equivariant case {#subsec:pf of signature K} We now give the proof of Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"}, i.e., that $$\label{eq:hodge and signature pf 1} \chi^{\mathit{sig}}({\mathcal{M}}, \zeta) = \zeta^{c({\mathcal{M}})} \chi^{H}({\mathcal{M}}, \zeta) \mod \zeta^2 - 1,$$ for $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ dominant and ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_\lambda, K)$ a polarized Hodge module of weight $w$, where $$c({\mathcal{M}}) = \dim {\mathcal{B}} - w.$$ Clearly it is enough to prove the theorem in the case where ${\mathcal{M}}$ is irreducible. So we may as well assume that ${\mathcal{M}} = j_{!*}\gamma$ with $j \colon Q \hookrightarrow {\mathcal{B}}$ a $K$-orbit and $\gamma$ a $K$-equivariant $\lambda$-twisted local system on $Q$. We work by induction on $\dim Q$. We first reduce to the case of regular local systems. **Lemma 51**. *Assume the local system $\gamma$ on $Q$ is not regular. 
Then $$\Gamma({\mathcal{B}}, \mathop{\mathrm{Gr}}^Fj_{!*}\gamma) = 0.$$ Hence, [\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds trivially for ${\mathcal{M}} = j_{!*}\gamma$.* *Proof.* By definition, we have $\Gamma({\mathcal{B}}, j_{!*}\gamma) = 0$ since $\gamma$ is not regular. But by Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"}, the spectral sequence associated with the Hodge filtration degenerates at the first page, so this implies that $\Gamma({\mathcal{B}}, \mathop{\mathrm{Gr}}^Fj_{!*}\gamma) = 0$ as claimed. ◻ We next note the following base case. **Lemma 52**. *If $(1 - \theta_Q)\lambda = 0$ and $\gamma$ is regular, then [\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds for ${\mathcal{M}} = j_{!*}\gamma$.* *Proof.* The condition says that $j_{!*}\gamma$ is tempered. Hence, [\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds for ${\mathcal{M}} = j_{!*}\gamma$ by [@DV2 Theorem 3.2]. ◻ Suppose now that $\gamma$ is regular and $(1 - \theta_Q)\lambda \neq 0$. In this setting, we use the techniques of §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"} to construct an appropriate deformation of $\gamma$ to one with $(1 - \theta_Q)\lambda = 0$. First, let $S$ denote the set of simple roots $\alpha$ such that $\langle \lambda, \check\alpha \rangle = 0$ and $\theta_Q\alpha \in \Phi_-$. 
By Proposition [Proposition 49](#prop:regular hc){reference-type="ref" reference="prop:regular hc"}, the assumption that $\gamma$ is regular implies that these are all real with respect to the orbit $Q$ and that $\gamma$ extends cleanly to a pure Hodge module ${\mathcal{N}}$ on $Q_S:= \pi_S^{-1}\pi_S(Q)$ in the notation of §[4.3](#subsec:hc deformations){reference-type="ref" reference="subsec:hc deformations"}. Since the extension is clean, writing $j_S \colon Q_S \to {\mathcal{B}}$ for the inclusion, we have $$j_! f\gamma = j_{S!}f{\mathcal{N}}, \quad j_*f\gamma = j_{S*}f{\mathcal{N}} \quad \mbox{and} \quad j_{!*}f\gamma = j_{S!*}f{\mathcal{N}}$$ for all $f \in \Gamma_{\mathbb{R}}^K(\tilde{Q}_S)^{{\mathit{mon}}}$. By construction, $\lambda \in {\mathfrak{h}}^*_{S, {\mathbb{R}}, +}$, so by Proposition [Proposition 50](#prop:hc deformations){reference-type="ref" reference="prop:hc deformations"}, we have a unique $f \in \Gamma_{\mathbb{R}}^K(\tilde{Q}_S)^{\mathit{mon}}_+$ such that $\varphi(f) = (1 - \theta_Q)\lambda$. We consider the associated family $$j_{S!*}f^s{\mathcal{N}} = j_{!*}f^s\gamma \in {\mathrm{MHM}}({\mathcal{D}}_{\lambda + s (1 - \theta_Q)\lambda}).$$ **Lemma 53**. *Let $$s_0 = \min\left\{\left. s \in \left[-\frac{1}{2}, 0\right]\, \right| \,\mbox{$\lambda + s(1 - \theta_Q)\lambda$ is dominant}\right\}.$$ If $s_0 > -\frac{1}{2}$ then $f^{s_0}\gamma$ is not regular.* *Proof.* Write $$\lambda' = \lambda + s_0(1 - \theta_Q)\lambda.$$ If $s_0 > -\frac{1}{2}$ then, by minimality of $s_0$, there exists a simple root $\alpha$ such that $\langle \lambda', \check\alpha\rangle = 0$ and $\langle (1 - \theta_Q)\lambda, \check\alpha \rangle > 0$. Since $\theta_Q$ is an involution and $\langle \lambda', \check\alpha \rangle = 0$, we have $$\langle \lambda', \theta_Q\check\alpha \rangle = -\langle(1 - \theta_Q) \lambda', \check\alpha\rangle = -(1 + 2s_0)\langle (1 - \theta_Q)\lambda, \check\alpha \rangle < 0.$$ Hence, we must have that $\theta_Q\alpha \in \Phi_-$ and that $\alpha$ is not real.
So by Proposition [Proposition 49](#prop:regular hc){reference-type="ref" reference="prop:regular hc"}, $f^{s_0}\gamma$ is not regular as claimed. ◻ By Lemmas [Lemma 51](#lem:irregular parameters){reference-type="ref" reference="lem:irregular parameters"} and [Lemma 52](#lem:continuous param zero){reference-type="ref" reference="lem:continuous param zero"}, we deduce that [\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds for ${\mathcal{M}} = j_{!*}f^{s_0}\gamma$. We now apply Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}: since $f$ is positive for $Q_S$, there exist $s_0 < s_1 < s_2 < \cdots < s_n < s_{n + 1} = 0$ such that $$\mbox{$j_!f^s\gamma \to j_*f^s\gamma$ is an isomorphism for $s \in (s_0, 0) - \{s_1, \ldots, s_n\}$}.$$ By continuity of eigenvalues, the signature characters $\chi^{\mathit{sig}}(j_{!*}f^s\gamma, \zeta)$ are constant on the open intervals $(s_i, s_{i + 1})$. By Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} and Proposition [Proposition 43](#prop:monodromic semi-continuity){reference-type="ref" reference="prop:monodromic semi-continuity"}, the same is true for $\mathop{\mathrm{Gr}}^F j_{!*}f^s\gamma$, and hence $\chi^H(j_{!*}f^s\gamma, u)$. So, by induction, it suffices to prove the following lemma. **Lemma 54**. *Assume that [\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds for dominantly twisted Harish-Chandra sheaves with supports of lower dimension than $Q$. Then the following are equivalent:* 1. *[\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds for ${\mathcal{M}} = j_{!*}f^s\gamma$ with $s_i < s < s_{i + 1}$.* 2. 
*[\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds for ${\mathcal{M}} = j_{!*}f^{s_i}\gamma$.* 3. *[\[eq:hodge and signature pf 1\]](#eq:hodge and signature pf 1){reference-type="eqref" reference="eq:hodge and signature pf 1"} holds for ${\mathcal{M}} = j_{!*}f^s\gamma$ with $s_{i - 1} < s < s_i$.* *Proof.* We first relate the Hodge characters. By Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}, we have $$\chi^H(j_{!*}f^s\gamma, u) = \begin{cases}\chi^H(j_*f^{s_i}\gamma, u), & \mbox{if $s_i < s < s_{i + 1}$}, \\ \chi^H(j_!f^{s_i}\gamma, u), & \mbox{if $s_{i - 1} < s < s_i$}.\end{cases}$$ We also have the Jantzen filtrations $J_\bullet$ on $j_!f^{s_i}\gamma$ and $j_*f^{s_i}\gamma$ and isomorphisms $$(s - s_i)^n \colon \mathop{\mathrm{Gr}}^J_n j_*f^{s_i}\gamma \overset{\sim}\to \mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma(-n)$$ for $n \geq 0$. We have $\mathop{\mathrm{Gr}}^J_0 j_!f^{s_i}\gamma = j_{!*}f^{s_i}\gamma$, and for $n > 0$, $\mathop{\mathrm{Gr}}^J_{-n} j_!f^{s_i}\gamma$ is supported on the boundary of $Q_S$. By Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"}, the Hodge character $\chi^H$ is additive in exact sequences. We deduce that $$\label{eq:hs wall crossing 1} \chi^H(j_{!*}f^{s}\gamma, u) = \begin{cases} \chi^H(j_{!*}f^{s_i}\gamma, u) + \sum_{n > 0} u^{-n}\chi^H(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma, u), &\mbox{for $s_i < s < s_{i + 1}$}, \\ \chi^H(j_{!*}f^{s_i}\gamma, u) + \sum_{n > 0} \chi^H(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma, u), &\mbox{for $s_{i - 1} < s < s_i$}.\end{cases}$$ We next relate the signature characters.
By Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}, the Hodge modules $\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma$ are pure of weight $w - n$ (where $w = \dim \tilde{Q}$ is the weight of $\gamma$) and the Jantzen forms are polarizations. By construction, the signature for $s_i < s < s_{i + 1}$ is that of the direct sum of the Jantzen forms, and the signature for $s_{i - 1} < s < s_i$ is that of the direct sum of the Jantzen forms with alternating sign. So we deduce that $$\label{eq:hs wall crossing 2} \chi^{\mathit{sig}}(j_{!*}f^s\gamma, \zeta) = \begin{cases} \chi^{\mathit{sig}}(j_{!*}f^{s_i}\gamma, \zeta) + \sum_{n > 0} \chi^{\mathit{sig}}(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma, \zeta), &\mbox{if $s_i < s < s_{i + 1}$}, \\ \chi^{\mathit{sig}}(j_{!*}f^{s_i}\gamma, \zeta) + \sum_{n > 0} \zeta^n \chi^{\mathit{sig}}(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma, \zeta), &\mbox{if $s_{i - 1} < s < s_i$}.\end{cases}$$ Applying the induction hypothesis on dimension (and recalling that the constant $c$ depends on the weight), we have $$\chi^{\mathit{sig}}(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma, \zeta) = \zeta^{c(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma)} \chi^H(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma, \zeta) = \zeta^{c + n} \chi^H(\mathop{\mathrm{Gr}}^J_{-n}j_!f^{s_i}\gamma, \zeta),$$ where $c = c(j_{!*}\gamma)$. We deduce from [\[eq:hs wall crossing 1\]](#eq:hs wall crossing 1){reference-type="eqref" reference="eq:hs wall crossing 1"} and [\[eq:hs wall crossing 2\]](#eq:hs wall crossing 2){reference-type="eqref" reference="eq:hs wall crossing 2"} that $$\chi^{\mathit{sig}}(j_{!*}f^s\gamma, \zeta) - \zeta^c\chi^H(j_{!*}f^s\gamma, \zeta) = \chi^{\mathit{sig}}(j_{!*}f^{s_i}\gamma, \zeta) - \zeta^c\chi^H(j_{!*}f^{s_i}\gamma, \zeta)$$ for $s_{i - 1} < s < s_{i + 1}$. The statement of the lemma now follows. 
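Explicitly, setting $u = \zeta$ in [\[eq:hs wall crossing 1\]](#eq:hs wall crossing 1){reference-type="eqref" reference="eq:hs wall crossing 1"} and subtracting $\zeta^c$ times the result from [\[eq:hs wall crossing 2\]](#eq:hs wall crossing 2){reference-type="eqref" reference="eq:hs wall crossing 2"}, the boundary terms cancel modulo $\zeta^2 - 1$: for $s_i < s < s_{i+1}$,

```latex
\begin{aligned}
  \chi^{\mathit{sig}}(j_{!*}f^{s}\gamma, \zeta)
    - \zeta^{c}\chi^{H}(j_{!*}f^{s}\gamma, \zeta)
  &= \chi^{\mathit{sig}}(j_{!*}f^{s_i}\gamma, \zeta)
    - \zeta^{c}\chi^{H}(j_{!*}f^{s_i}\gamma, \zeta) \\
  &\quad + \sum_{n > 0}
      \bigl(\zeta^{c + n} - \zeta^{c - n}\bigr)
      \chi^{H}(\mathop{\mathrm{Gr}}^{J}_{-n}j_{!}f^{s_i}\gamma, \zeta),
\end{aligned}
% and \zeta^{c+n} - \zeta^{c-n} = \zeta^{c-n}(\zeta^{2n} - 1) \equiv 0
% modulo \zeta^2 - 1. For s_{i-1} < s < s_i the same computation applies,
% with \zeta^{c+2n} - \zeta^{c} in place of \zeta^{c+n} - \zeta^{c-n}.
```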
◻ This concludes the proof of Theorem [Theorem 44](#thm:hodge and signature K){reference-type="ref" reference="thm:hodge and signature K"}. ## Proof in the $K'$-equivariant case {#subsec:pf of signature K'} In this subsection, we explain how to prove Theorem [Theorem 46](#thm:hodge and signature K'){reference-type="ref" reference="thm:hodge and signature K'"} by extending the arguments of §[4.4](#subsec:pf of signature K){reference-type="ref" reference="subsec:pf of signature K"} to the $K'$-equivariant setting. Let ${\mathcal{M}} \in {\mathrm{MHM}}(\tilde{{\mathcal{D}}}, K')$ be irreducible with dominant monodromies. Then ${\mathcal{M}}$ takes one of two forms: either ${\mathcal{M}} = {\mathcal{N}} \oplus \theta^*{\mathcal{N}}$, where ${\mathcal{N}} \in {\mathrm{MHM}}(\tilde{{\mathcal{D}}}, K)$ satisfies $\theta^*{\mathcal{N}} \not\cong {\mathcal{N}}$, or ${\mathcal{M}}$ is already irreducible as a $K$-equivariant Hodge module. In the first case, the desired result follows from the $K$-equivariant case and the following obvious lemma. Here we note that we have a well-defined map $$(1 + \xi) \colon {\mathrm{Rep}}(K) \to {\mathrm{Rep}}(K'),$$ where $\xi$ is the non-trivial character of $\{1, \theta\}$. **Lemma 55**. *We have $$\chi^H({\mathcal{N}} \oplus \theta^*{\mathcal{N}}, u) = (1 + \xi)\chi^H({\mathcal{N}}, u)$$ and $$\chi^{\mathit{sig}}({\mathcal{N}} \oplus \theta^*{\mathcal{N}}, \zeta) = (1 + \xi)\chi^{\mathit{sig}}({\mathcal{N}}, \zeta).$$* So we may as well assume that ${\mathcal{M}} = j_{!*}\gamma$, where $\gamma$ is a $K'$-equivariant $\lambda$-twisted local system on a $\theta$-stable $K$-orbit $Q$. In this case, we run exactly the same argument as in §[4.4](#subsec:pf of signature K){reference-type="ref" reference="subsec:pf of signature K"}. The only extra thing we need to check is that the deformations we construct are $K'$-equivariant.
By Proposition [Proposition 50](#prop:hc deformations){reference-type="ref" reference="prop:hc deformations"} [\[itm:hc deformations 3\]](#itm:hc deformations 3){reference-type="eqref" reference="itm:hc deformations 3"}, this follows from $$\label{eq:K'-equivariant 1} (1 - \theta_Q)\lambda \in ({\mathfrak{h}}^*_{S, {\mathbb{R}}})^{-\theta_Q} \cap ({\mathfrak{h}}^*_{\mathbb{R}})^\delta.$$ To show [\[eq:K\'-equivariant 1\]](#eq:K'-equivariant 1){reference-type="eqref" reference="eq:K'-equivariant 1"}, we observe that since $\theta^*\gamma \cong \gamma$, we must have $\delta(\lambda) = \lambda$, so it is enough to show that $\theta_Q$ and $\delta$ commute. If we choose $x \in Q$, with corresponding Borel $B_x$ and a $\theta$-stable maximal torus $T \subset B_x$, then we have a commutative diagram $$\begin{tikzcd} T \ar[r, "\theta"] \ar[d, "\tau_{\theta(x)}"] & T \ar[r, "\theta"] \ar[d, "\tau_x"] & T \ar[r, "\theta"] \ar[d, "\tau_x"] & T \ar[d, "\tau_{\theta(x)}"] \\ H \ar[r, "\delta"] & H \ar[r, "\theta_Q"] & H \ar[r, "\delta"] & H, \end{tikzcd}$$ where $\tau_x, \tau_{\theta(x)} \colon T \to H$ are the isomorphisms determined by $x$ and $\theta(x)$. Here commutativity of the left and right squares follows from the fact that the action of $\theta$ on $G$ and $\tilde{{\mathcal{B}}}$ intertwines the action of $\delta$ on $H$, and commutativity of the middle square is the definition of $\theta_Q$. Since $\theta(Q) = Q$, we have $\theta(x) \in Q$ also, so commutativity of the outer rectangle gives $$\theta_Q = \delta \theta_Q \delta.$$ So $\theta_Q$ and $\delta$ commute as claimed. This completes the proof of Theorem [Theorem 46](#thm:hodge and signature K'){reference-type="ref" reference="thm:hodge and signature K'"}. # Proof of semi-continuity of the Hodge filtration {#sec:semi-continuity} We now turn to the proofs of our more technical Hodge-theoretic results, which were used in the proof of the unitarity criterion. 
In this section, we give the proofs of two of the main results from §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"}, namely Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} on the existence of a hyperplane arrangement in $\Gamma_{\mathbb{R}}(Q)$ and Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} on semi-continuity of the Hodge filtration. The outline of the section is as follows. In §[5.1](#subsec:pf of thm:hyperplanes){reference-type="ref" reference="subsec:pf of thm:hyperplanes"}, we recall the theory of the $V$-filtration and give the proof of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}. In §[5.2](#subsec:hodge pushforward){reference-type="ref" reference="subsec:hodge pushforward"} we recall the behavior of Hodge filtrations under pushforwards. Finally, we give the proof of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} in §[5.3](#subsec:pf of thm:semi-continuity){reference-type="ref" reference="subsec:pf of thm:semi-continuity"}. ## Proof of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} {#subsec:pf of thm:hyperplanes} Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"} is a relatively straightforward consequence of the theory of $V$-filtrations. We briefly recall this here. Suppose $X$ is a smooth variety and $D \subset X$ is a smooth divisor. The idea of the $V$-filtration is to capture the notion of "order of pole along $D$" in the language of ${\mathcal{D}}$-modules. Formally, we proceed as follows. 
The $V$-filtration on the sheaf of differential operators is the decreasing ${\mathbb{Z}}$-indexed filtration $$V^n {\mathcal{D}}_X = \{ \partial \in {\mathcal{D}}_X \mid \partial {\mathcal{I}}^m \subset {\mathcal{I}}^{m + n} \text{ for all } m \geq 0\},$$ where ${\mathcal{I}}$ is the ideal sheaf of $D$ and we adopt the convention ${\mathcal{I}}^m = {\mathcal{O}}_X$ for $m \leq 0$. If ${\mathcal{M}}$ is a mixed Hodge module on $X$, then the underlying ${\mathcal{D}}$-module has a unique decreasing filtration $V^\bullet {\mathcal{M}}$, indexed by the set of real numbers with its usual ordering and compatible with $V^\bullet {\mathcal{D}}_X$, such that: 1. The filtration $V^\bullet {\mathcal{M}}$ is exhaustive and finitely generated over $({\mathcal{D}}_X, V^\bullet)$ in the sense that, locally on $X$, there exist sections $m_1, \ldots, m_k \in {\mathcal{M}}$ and degrees $\alpha_1, \ldots, \alpha_k \in {\mathbb{R}}$ such that $$V^\alpha {\mathcal{M}} = \sum_{i = 1}^k\, \sum_{n \colon \alpha_i + n \geq \alpha} V^{n}{\mathcal{D}}_X\, m_i.$$ 2. For all $\alpha \in {\mathbb{R}}$, the operator $${\mathrm{N}} := \alpha - x\partial_x \colon \mathop{\mathrm{Gr}}^\alpha_V {\mathcal{M}} \to \mathop{\mathrm{Gr}}^\alpha_V {\mathcal{M}}$$ is nilpotent, where $x$ is a local coordinate on $X$ such that $D = \{x = 0\}$. (We remark that this theory extends to arbitrary regular holonomic ${\mathcal{D}}$-modules if one allows $V$-filtrations indexed by ${\mathbb{C}}$.) We next make the following observations concerning extensions of ${\mathcal{D}}$-modules across normal crossings divisors. Suppose that $X$ is a smooth variety and $D$ is a simple normal crossings divisor with irreducible components $D_i$, $i \in I$. Let $U = X - D$. The following standard lemma can be readily proved using the ideas of [@DV1 §6.1], for example. **Lemma 56**. *Suppose that ${\mathcal{M}}$ is a regular holonomic ${\mathcal{D}}$-module underlying a mixed Hodge module on $U$.
Then $j_!{\mathcal{M}} \to j_*{\mathcal{M}}$ is an isomorphism if and only if $$\mathop{\mathrm{Gr}}_{V_{i}}^0 j_*{\mathcal{M}} = 0 \quad \mbox{for all $i \in I$},$$ where we write $V_{i}^\bullet$ for the $V$-filtration along the smooth divisor $D_i$.* We will also need the following fact, which is immediate from the definitions. **Lemma 57**. *Let $f \in \Gamma_{\mathbb{R}}(U)$. If ${\mathcal{M}}$ is a mixed Hodge module on $U$, then we have $$V^\alpha_{i} j_*f{\mathcal{M}} = V^{\alpha - \operatorname{ord}_{D_i}(f)}_{i} j_*{\mathcal{M}}$$ under the identification of sheaves of ${\mathcal{O}}$-modules $j_*f{\mathcal{M}} \cong j_*{\mathcal{M}}$.* We now give the proof of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}. *Proof of Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}.* Fix $\pi \colon Q' \to \bar{Q} \subset X$ a resolution of singularities such that $\pi^{-1}(Q) \to Q$ is an isomorphism and $Q' - Q$ is a divisor with simple normal crossings. Set $$\Psi = \{ \operatorname{ord}_D \mid D \subset Q' - Q \mbox{ an irreducible component}\}.$$ Note that the assertion [\[itm:hyperplanes 1\]](#itm:hyperplanes 1){reference-type="eqref" reference="itm:hyperplanes 1"} of the statement of the proposition is true by definition. To prove [\[itm:hyperplanes 2\]](#itm:hyperplanes 2){reference-type="eqref" reference="itm:hyperplanes 2"}, suppose that ${\mathcal{M}}$ is a mixed Hodge module on $Q$. Then writing $j' \colon Q \to Q'$ for the inclusion, we have $$j_!f{\mathcal{M}} = \pi_*j'_!f{\mathcal{M}} \quad \mbox{and} \quad j_*f{\mathcal{M}} = \pi_*j'_*f{\mathcal{M}}.$$ So we may reduce without loss of generality to the case where $Q' = X$ and $\pi$ is the identity. We are now in the setting of Lemma [Lemma 56](#lem:cleanness){reference-type="ref" reference="lem:cleanness"} with $U = Q$. By the lemma, we have that $j_!
f{\mathcal{M}} \to j_*f{\mathcal{M}}$ is an isomorphism if and only if $$\mathop{\mathrm{Gr}}_{V_{i}}^0 j_*f{\mathcal{M}} = 0 \quad \mbox{for all $i \in I$},$$ where $V_{i}$ is the $V$-filtration along the irreducible component $D_i$ of $X - Q = \bigcup_{i \in I} D_i$. Applying Lemma [Lemma 57](#lem:normal crossings V twist){reference-type="ref" reference="lem:normal crossings V twist"}, this becomes $$\mathop{\mathrm{Gr}}_{V_{i}}^{ - \alpha_i(f)} j_*{\mathcal{M}} = 0 \quad \mbox{for all $i \in I$},$$ where $\alpha_i = \operatorname{ord}_{D_i} \in \Psi$. Setting $$A_{\alpha_i} + {\mathbb{Z}} = \{ \alpha \in {\mathbb{R}} \mid \mathop{\mathrm{Gr}}_{V_{i}}^{ - \alpha} j_*{\mathcal{M}} \neq 0\}$$ we deduce that $j_!f{\mathcal{M}} \to j_*f{\mathcal{M}}$ is an isomorphism as long as $f \not\in \alpha_i^{-1}(A_{\alpha_i} + {\mathbb{Z}})$ for each $i$, as required. ◻ ## The Hodge filtration on a pushforward {#subsec:hodge pushforward} In preparation for the proof of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}, we recall in this subsection the behavior of the Hodge filtration under a pushforward of mixed Hodge modules. We first recall the naive pushforward operation for filtered ${\mathcal{D}}$-modules (see, e.g., [@laumon]). Let $g \colon X \to Y$ be a morphism of smooth varieties. If ${\mathcal{M}}$ is any ${\mathcal{D}}$-module on $X$, then the ${\mathcal{D}}$-module pushforward is $$g_+{\mathcal{M}} := {\mathrm{R}}g_{\bullet}({\mathcal{D}}_{Y \leftarrow X} \overset{{\mathrm{L}}}\otimes_{{\mathcal{D}}_X}{\mathcal{M}}),$$ where $${\mathcal{D}}_{Y \leftarrow X} = g^{-1}{\mathcal{D}}_Y \otimes_{g^{-1}{\mathcal{O}}_Y} \omega_{X/Y},$$ with right ${\mathcal{D}}_X$-module action defined so that the action map $${\mathcal{D}}_{Y \leftarrow X} \to {\mathrm{Hom}}(g^{-1}\omega_Y, \omega_X)$$ is a map of right ${\mathcal{D}}_X$-modules.
We endow ${\mathcal{D}}_{Y \leftarrow X}$ with the filtration by order of differential operator, shifted to start in degree $\dim Y - \dim X$. If $({\mathcal{M}}, F_\bullet)$ is a filtered ${\mathcal{D}}_X$-module, then there is an induced (naive) filtration on the complex $g_+{\mathcal{M}}$ given by $$F_p g_+{\mathcal{M}} = {\mathrm{R}}g_{\bullet}F_p(({\mathcal{D}}_{Y \leftarrow X}, F_\bullet) \overset{{\mathrm{L}}}\otimes_{({\mathcal{D}}_X, F_\bullet)} ({\mathcal{M}}, F_\bullet)).$$ We note that if ${\mathcal{M}}$ is regular holonomic, then $g_+{\mathcal{M}} = g_*{\mathcal{M}}$ as complexes of ${\mathcal{D}}$-modules. If $g$ is proper, then the naive pushforward is also the correct operation at the level of Hodge filtrations: we have $$g_*{\mathcal{M}} = g_!{\mathcal{M}} = g_+{\mathcal{M}}$$ as complexes of filtered ${\mathcal{D}}_Y$-modules for ${\mathcal{M}} \in {\mathrm{MHM}}(X)$. When $g$ is not proper, the filtration needs to be adjusted. By choosing an appropriate factorization of $g$, the behavior is fixed once we give the formula for the extension across a smooth divisor. So, suppose now that $U \subset X$ is the complement of a smooth divisor $D$ and write $j \colon U \to X$ for the inclusion.
Then, for ${\mathcal{M}} \in {\mathrm{MHM}}(U)$, Saito's theory endows the extensions $j_!{\mathcal{M}}$ and $j_*{\mathcal{M}}$ with a Hodge filtration coming from the order of pole along $D$.[^3] More precisely, one easily checks that $$j_!{\mathcal{M}} = {\mathcal{D}}_X \otimes_{V^0{\mathcal{D}}_X} V^{>-1}j_+{\mathcal{M}} \quad \text{and} \quad j_*{\mathcal{M}} = {\mathcal{D}}_X \otimes_{V^0{\mathcal{D}}_X} V^{\geq -1}j_+{\mathcal{M}}$$ as ${\mathcal{D}}$-modules, where $V$ is the $V$-filtration on the regular holonomic ${\mathcal{D}}$-module $j_+{\mathcal{M}}$; the Hodge filtrations are then given by the formulae $$(j_!{\mathcal{M}}, F_\bullet) = ({\mathcal{D}}_X, F_\bullet) \otimes_{(V^0{\mathcal{D}}_X, F_\bullet)} V^{> -1}j_+({\mathcal{M}}, F_\bullet)$$ and $$(j_*{\mathcal{M}}, F_\bullet) = ({\mathcal{D}}_X, F_\bullet) \otimes_{(V^0{\mathcal{D}}_X, F_\bullet)} V^{\geq -1}j_+({\mathcal{M}}, F_\bullet).$$ ## Proof of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} {#subsec:pf of thm:semi-continuity} In this subsection, we give the proof of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}. The basic idea behind the proof is that the inequalities in the $V$-filtration formula for the Hodge filtration on the extension across a divisor lead to semi-continuity properties of the associated gradeds. In order to exploit this idea to produce canonical isomorphisms, we will need the following constructions. Fix ${\mathcal{M}} \in {\mathrm{MHM}}(Q)$ and $f \in \Gamma_{\mathbb{R}}(Q)$ positive.
Consider the (infinitely generated) ${\mathcal{D}}_Q$-module $$f^s{\mathcal{M}}[s]$$ given by $f^s{\mathcal{M}}[s] = {\mathcal{M}}\otimes {\mathbb{C}}[s]$ as ${\mathcal{O}}_Q$-modules, with differential operators acting by $$\partial\, f^s m(s) = f^s\left( \partial m(s) + s \frac{\partial f}{f} m(s)\right)$$ for vector fields $\partial$ on $Q$, where, for $f = f_1^{a_1} \cdots f_k^{a_k}$ with $f_i \in \Gamma(Q, {\mathcal{O}}^\times)$ and $a_i \in {\mathbb{R}}$, we write $$\frac{\partial f}{f} := a_1 \frac{\partial f_1}{f_1} + \cdots + a_k \frac{\partial f_k}{f_k}.$$ We endow $f^s{\mathcal{M}}[s]$ with a filtration by setting $$F_pf^s{\mathcal{M}}[s] = \sum_{k + q \leq p} s^kf^sF_q{\mathcal{M}},$$ i.e., the filtration comes from the Hodge filtration on ${\mathcal{M}}$ and the polynomial order in $s$. Via the naive filtered pushforward, we obtain a filtered ${\mathcal{D}}$-module $j_+f^s{\mathcal{M}}[s]$ on $X$. Now fix $s_0 \in {\mathbb{R}}$. Our aim is to realize the filtered ${\mathcal{D}}$-modules $j_!f^{s_0}{\mathcal{M}}$ and $j_*f^{s_0}{\mathcal{M}}$ as subquotients of $j_+f^s{\mathcal{M}}[s]$. To do so, observe that, for any $n > 0$, the quotient module $$\frac{f^s{\mathcal{M}}[s]}{(s - s_0)^n},$$ equipped with its inherited filtration, underlies a mixed Hodge module on $Q$, given by tensoring ${\mathcal{M}}$ with a rank $n$ admissible variation of mixed Hodge structure on $Q$. Moreover, for any $p$, there exists $n$ such that $$F_{q} \frac{f^s{\mathcal{M}}[s]}{(s - s_0)^{m + 1}} \to F_{q} \frac{f^s{\mathcal{M}}[s]}{(s - s_0)^{m}}$$ is an isomorphism for all $q \leq p$ and all $m \geq n$. It follows that for each $p$, the sequences $$F_p j_!\left(\frac{f^s{\mathcal{M}}[s]}{(s - s_0)^n}\right) \quad \text{and} \quad F_p j_*\left(\frac{f^s{\mathcal{M}}[s]}{(s - s_0)^n}\right)$$ stabilize for $n \gg 0$.
We define (infinitely generated) filtered ${\mathcal{D}}$-modules $j_!^{(s_0)}f^s{\mathcal{M}}[s]$ and $j_*^{(s_0)}f^s{\mathcal{M}}[s]$ on $X$ by setting $$F_p j_!^{(s_0)}f^s{\mathcal{M}}[s] := \varprojlim_n F_p j_! \left(\frac{f^s{\mathcal{M}}[s]}{(s - s_0)^n}\right), \quad j_!^{(s_0)}f^s{\mathcal{M}}[s] = \bigcup_p F_pj_!^{(s_0)}f^s{\mathcal{M}}[s]$$ and $$F_p j_*^{(s_0)}f^s{\mathcal{M}}[s] := \varprojlim_n F_p j_*\left(\frac{f^s{\mathcal{M}}[s]}{(s - s_0)^n}\right), \quad j_*^{(s_0)}f^s{\mathcal{M}}[s] = \bigcup_p F_pj_*^{(s_0)}f^s{\mathcal{M}}[s].$$ By construction, we have strict short exact sequences $$0 \to j_!^{(s_0)}f^s{\mathcal{M}}[s]\{1\} \xrightarrow{s - s_0} j_!^{(s_0)}f^s{\mathcal{M}}[s] \to j_!f^{s_0}{\mathcal{M}} \to 0$$ and $$0 \to j_*^{(s_0)}f^s{\mathcal{M}}[s]\{1\} \xrightarrow{s - s_0} j_*^{(s_0)}f^s{\mathcal{M}}[s] \to j_*f^{s_0}{\mathcal{M}} \to 0$$ and canonical morphisms of filtered ${\mathcal{D}}$-modules $$\label{eq:semi-continuity inclusion 1} j_!^{(s_0)} f^s{\mathcal{M}}[s] \to j_*^{(s_0)} f^s{\mathcal{M}}[s] \to j_+f^s{\mathcal{M}}[s]$$ restricting to the identity on $Q$. Here $\{n\}$ denotes the filtration shift $F_p({\mathcal{N}}\{n\}) = F_{p - n}{\mathcal{N}}$. **Lemma 58**. *The morphisms [\[eq:semi-continuity inclusion 1\]](#eq:semi-continuity inclusion 1){reference-type="eqref" reference="eq:semi-continuity inclusion 1"} are injective and strict with respect to the filtrations. 
Moreover, for $0 < \epsilon \ll 1$, we have $$\label{eq:semi-continuity inclusion shriek} j_!^{(s_0)} f^s{\mathcal{M}}[s] = j_!^{(s_0 - \epsilon)}f^s{\mathcal{M}}[s] = j_*^{(s_0 - \epsilon)}f^s{\mathcal{M}}[s]$$ and $$\label{eq:semi-continuity inclusion star} j_*^{(s_0)} f^s{\mathcal{M}}[s] = j_*^{(s_0 + \epsilon)}f^s{\mathcal{M}}[s] = j_!^{(s_0 + \epsilon)}f^s{\mathcal{M}}[s]$$ as subsheaves of $j_+f^s{\mathcal{M}}[s]$, and $$j_+f^s{\mathcal{M}}[s] = \bigcup_{s_0 \in {\mathbb{R}}} j_!^{(s_0)}f^s{\mathcal{M}}[s] = \bigcup_{s_0 \in {\mathbb{R}}} j_*^{(s_0)} f^s{\mathcal{M}}[s].$$* *Proof.* Let us first consider the case where $f$ is a boundary equation for $Q$. Since the claim is local on $X$, we may assume without loss of generality that $f$ lifts to a regular function $f \colon X \to {\mathbb{C}}$. Using the standard trick of embedding via the graph of $f$, we may reduce to the case where $X = X' \times {\mathbb{C}}$, $Q = X' \times {\mathbb{C}}^\times$ and $f = t$ is the coordinate on ${\mathbb{C}}$ for some smooth variety $X'$. In this setting, the formula for the Hodge filtration on the extension across a smooth divisor gives $$j_!^{(s_0)} f^s{\mathcal{M}}[s] = {\mathcal{D}}_X \otimes_{V^0{\mathcal{D}}_X} f^{s - s_0}(V^{>-1}j_+f^{s_0}{\mathcal{M}})[s] = {\mathcal{D}}_X \otimes_{V^0{\mathcal{D}}_X}f^s(V^{>-1 - s_0}j_+{\mathcal{M}})[s]$$ and $$j_*^{(s_0)} f^s{\mathcal{M}}[s] = {\mathcal{D}}_X \otimes_{V^0{\mathcal{D}}_X}f^s(V^{\geq -1 - s_0}j_+{\mathcal{M}})[s]$$ as filtered ${\mathcal{D}}$-modules. 
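For the reader's convenience, the second equality in each of the last two displays is an instance of Lemma [Lemma 57](#lem:normal crossings V twist){reference-type="ref" reference="lem:normal crossings V twist"}, applied with $f^{s_0} = t^{s_0}$ (so $\operatorname{ord}_D(f^{s_0}) = s_0$) and using that $j_+$ agrees with $j_*$ on the underlying ${\mathcal{D}}$-modules:

```latex
\[
  V^{\alpha} j_{+} f^{s_0}{\mathcal{M}}
    = f^{s_0}\, V^{\alpha - s_0} j_{+}{\mathcal{M}},
  \qquad\text{hence}\qquad
  f^{s - s_0}\bigl(V^{> -1} j_{+} f^{s_0}{\mathcal{M}}\bigr)[s]
    = f^{s}\bigl(V^{> -1 - s_0} j_{+}{\mathcal{M}}\bigr)[s],
\]
```

and similarly with $\geq$ in place of $>$ for the $j_*$ case.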
The semi-continuity properties [\[eq:semi-continuity inclusion shriek\]](#eq:semi-continuity inclusion shriek){reference-type="eqref" reference="eq:semi-continuity inclusion shriek"} and [\[eq:semi-continuity inclusion star\]](#eq:semi-continuity inclusion star){reference-type="eqref" reference="eq:semi-continuity inclusion star"} are now clear, and also $$j_+ f^s{\mathcal{M}}[s] = \varinjlim_{s_0 \in {\mathbb{R}}} j_!^{(s_0)}f^s{\mathcal{M}}[s] = \varinjlim_{s_0 \in {\mathbb{R}}} j_*^{(s_0)}f^s{\mathcal{M}}[s].$$ It therefore remains to prove that the maps $$\label{eq:semi-continuity inclusion 4} j_!^{(s_0)} f^s{\mathcal{M}}[s] \to j_!^{(s_1)} f^s{\mathcal{M}}[s]$$ induced by the inclusions of the $V$-filtrations for $s_0 < s_1$ are strict and injective. But by the semi-continuity properties noted above, these are all finite compositions of filtered isomorphisms and maps of the form $$\label{eq:semi-continuity inclusion 2} j_!^{(s_0)}f^s{\mathcal{M}}[s] \to j_*^{(s_0)}f^s{\mathcal{M}}[s].$$ But [\[eq:semi-continuity inclusion 2\]](#eq:semi-continuity inclusion 2){reference-type="eqref" reference="eq:semi-continuity inclusion 2"} is injective (this follows from Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}, for example) and strict since it is the limit of a tower of morphisms of mixed Hodge modules. Hence [\[eq:semi-continuity inclusion 4\]](#eq:semi-continuity inclusion 4){reference-type="eqref" reference="eq:semi-continuity inclusion 4"} is strict and injective as claimed, and the lemma is proved in the case where $f$ is a genuine boundary equation. Let us now consider the general case. Using Proposition [Proposition 38](#prop:positivity){reference-type="ref" reference="prop:positivity"}, we write $f$ as a product of positive real powers of boundary equations. 
By induction on the number of boundary equations required, it suffices to prove the lemma for $f = f_1 f_2$ under the assumption that it holds for $f_1$ and $f_2$ separately. For $a_1, a_2 \in {\mathbb{R}}$, we may form the two-variable version of our construction to define $$j_!^{(a_1, a_2)} f_1^{s_1}f_2^{s_2} {\mathcal{M}}[s_1, s_2] \quad \text{and} \quad j_*^{(a_1, a_2)} f_1^{s_1}f_2^{s_2}{\mathcal{M}}[s_1, s_2]$$ with $$F_p j_!^{(a_1, a_2)} f_1^{s_1}f_2^{s_2} {\mathcal{M}}[s_1, s_2] := \varprojlim_{m, n} F_p j_!\left(\frac{f_1^{s_1}f_2^{s_2}{\mathcal{M}}[s_1, s_2]}{((s_1 - a_1)^m, (s_2 - a_2)^n)}\right)$$ and similarly for $j_*$. Applying the induction hypotheses to $f_1$ and $f_2$, we deduce that the maps $$j_!^{(a_1, a_2)} f_1^{s_1}f_2^{s_2} {\mathcal{M}}[s_1, s_2] \to j_*^{(a_1, a_2)} f_1^{s_1}f_2^{s_2} {\mathcal{M}}[s_1, s_2] \to j_+ f_1^{s_1}f_2^{s_2}{\mathcal{M}}[s_1, s_2]$$ are injective and strict, and that, for any fixed $a \in {\mathbb{R}}$, we have $$j_+ f_1^{s_1}f_2^{s_2}{\mathcal{M}}[s_1, s_2] = \bigcup_{a_1 \in {\mathbb{R}}} j_!^{(a_1, a)} f_1^{s_1}f_2^{s_2} {\mathcal{M}}[s_1, s_2] = \bigcup_{a_2 \in {\mathbb{R}}} j_!^{(a, a_2)} f_1^{s_1}f_2^{s_2}{\mathcal{M}}[s_1, s_2].$$ Passing to the quotient by setting $s_1 = s_2 = s$, we deduce that $$\label{eq:semi-continuity inclusion 3} j_+f^s{\mathcal{M}}[s] = \varinjlim_{a \in {\mathbb{R}}} j_!^{(a)}f^s{\mathcal{M}}[s] = \varinjlim_{a \in {\mathbb{R}}} j_*^{(a)}f^s{\mathcal{M}}[s];$$ here, for example, the morphisms $$j_!^{(a)}f^s{\mathcal{M}}[s] \to j_!^{(b)}f^s{\mathcal{M}}[s]$$ for $a < b$ in the first colimit come from passing to the quotient $s_1 = s_2 = s$ of the inclusion $$j_!^{(a, a)}f_1^{s_1}f_2^{s_2}{\mathcal{M}}[s_1, s_2] \subset j_!^{(b, b)}f_1^{s_1}f_2^{s_2} {\mathcal{M}}[s_1, s_2].$$ Now, semi-continuity for $f_1$ and $f_2$ implies that [\[eq:semi-continuity inclusion shriek\]](#eq:semi-continuity inclusion shriek){reference-type="eqref" reference="eq:semi-continuity inclusion shriek"} and
[\[eq:semi-continuity inclusion star\]](#eq:semi-continuity inclusion star){reference-type="eqref" reference="eq:semi-continuity inclusion star"} hold also for $j_!^{(s_0)}f^s{\mathcal{M}}[s]$ and $j_*^{(s_0)}f^s{\mathcal{M}}[s]$ with respect to the maps constructed in this way; together with [\[eq:semi-continuity inclusion 3\]](#eq:semi-continuity inclusion 3){reference-type="eqref" reference="eq:semi-continuity inclusion 3"}, this implies that the inclusions [\[eq:semi-continuity inclusion 1\]](#eq:semi-continuity inclusion 1){reference-type="eqref" reference="eq:semi-continuity inclusion 1"} are injective and strict as argued above, so we are done. ◻ *Proof of Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"}.* Let us prove the semi-continuity statement for $j_!$; the proof for $j_*$ is identical. We have, canonically, $$j_!f^{s_0}{\mathcal{M}} = \mathop{\mathrm{coker}}((s - s_0) \colon j_!^{(s_0)}f^s{\mathcal{M}}[s] \{1\} \to j_!^{(s_0)}f^s{\mathcal{M}}[s])$$ as filtered ${\mathcal{D}}$-modules, and hence $$\mathop{\mathrm{Gr}}^F j_!f^{s_0}{\mathcal{M}} = \mathop{\mathrm{coker}}(s \colon \mathop{\mathrm{Gr}}^F j_!^{(s_0)}f^s{\mathcal{M}}[s]\{1\} \to \mathop{\mathrm{Gr}}^Fj_!^{(s_0)}f^s{\mathcal{M}}[s])$$ as coherent sheaves on $T^*X$. But now, canonically, $j_!^{(s_0 - \epsilon)} f^s{\mathcal{M}}[s] = j_!^{(s_0)} f^s{\mathcal{M}}[s]$ by Lemma [Lemma 58](#lem:semi-continuity inclusion){reference-type="ref" reference="lem:semi-continuity inclusion"}, so we get $$\mathop{\mathrm{Gr}}^F j_{!*}f^{s_0 - \epsilon}{\mathcal{M}} = \mathop{\mathrm{Gr}}^Fj_!f^{s_0 - \epsilon}{\mathcal{M}} \cong \mathop{\mathrm{Gr}}^F j_!f^{s_0}{\mathcal{M}} \quad \text{for $0 < \epsilon \ll 1$}.$$ This proves the semi-continuity statements for positive deformations. It remains to prove that the isomorphism class of $j_!f^s{\mathcal{M}}$ is constant on connected components of the complement ${\mathcal{A}}$ of the hyperplane arrangement. 
Assume first that there exists some positive $f \in \Gamma_{\mathbb{R}}(Q)$. Then the set of positive elements is open (for the colimit topology) and non-empty, hence there exists a basis for the vector space $\Gamma_{\mathbb{R}}(Q)$ consisting of positive elements. But now we have shown above that the isomorphism class of $\mathop{\mathrm{Gr}}^F j_{!*}f{\mathcal{M}}$ does not change if $f$ is perturbed by a small (positive or negative) multiple of a positive element. Since we can obtain all small perturbations in this way, the statement is proved in this case. Observe now that the above argument in fact produces canonical isomorphisms between the $\mathop{\mathrm{Gr}}^F j_{!*}f{\mathcal{M}}$. So in the general case, we may pass to an open cover of $X$ on which boundary equations for $Q$ exist and glue the resulting isomorphisms. This completes the proof. ◻ # Hodge-Lefschetz theory {#sec:hodge-lefschetz} In this section, we recall some necessary Hodge-theoretic background, then give the proof of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"} (i.e., that the associated gradeds of the Jantzen filtration for an $f^s$-deformation are polarized Hodge modules when $f$ is positive). We remark that the material in this section is used only in the proof of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}: we do not use it anywhere else in this paper. Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"} is a generalization from rational to real deformations of [@DV1 Theorem 7.4]. In the rational case, the deformation is constructed from an honest function $f$, and the proof works by reducing the statement to a standard "Hodge-Lefschetz" property [@DV1 Corollary 7.3] of the nearby cycles with respect to $f$. In the real case, an identical reduction works, but the required Hodge-Lefschetz property of the generalized nearby cycles module is no longer standard. 
The proof of this property is a straightforward generalization of the classical case, but nevertheless requires a substantial detour through mixed Hodge module theory, which takes up the majority of this section (§§[6.1](#subsec:HL){reference-type="ref" reference="subsec:HL"}--[6.3](#subsec:normal crossings){reference-type="ref" reference="subsec:normal crossings"}). The outline of the section is as follows. First, in §[6.1](#subsec:HL){reference-type="ref" reference="subsec:HL"}, we recall the definition of Hodge-Lefschetz structures and modules and recall Saito's theorem on their behavior under pushforward. In §[6.2](#subsec:nilpotent orbit){reference-type="ref" reference="subsec:nilpotent orbit"}, we recall the closely related notion of nilpotent orbit. In §[6.3](#subsec:normal crossings){reference-type="ref" reference="subsec:normal crossings"}, we recall the gluing theory for mixed Hodge modules of normal crossings type; the key result (Proposition [Proposition 71](#prop:polarized gluing data){reference-type="ref" reference="prop:polarized gluing data"}) is that polarized Hodge modules are characterized by their gluing data being systems of nilpotent orbits. Finally, in §[6.4](#subsec:pf of thm:jantzen){reference-type="ref" reference="subsec:pf of thm:jantzen"}, we put all the ingredients together to give the proof of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}. ## Hodge-Lefschetz modules {#subsec:HL} In this subsection, we recall the basic structures arising in Hodge-Lefschetz theory and their properties. **Definition 59**. A *Hodge-Lefschetz module* of weight $w$ on a smooth variety $X$ is a mixed Hodge module ${\mathcal{M}}$ on $X$ equipped with a (necessarily nilpotent) endomorphism ${\mathrm{N}} \colon {\mathcal{M}} \to {\mathcal{M}}(-1)$ such that ${\mathrm{N}}^n \colon {\mathrm{Gr}}^W_{w + n} {\mathcal{M}} \to {\mathrm{Gr}}^W_{w - n}{\mathcal{M}}(-n)$ is an isomorphism for all $n \geq 0$. 
In other words, a Hodge-Lefschetz module is a mixed Hodge module whose weight filtration is equal (up to a shift) to the monodromy weight filtration with respect to a specified nilpotent operator ${\mathrm{N}}$. In the case where $X$ is a point (so ${\mathcal{M}}$ is a mixed Hodge structure), we will often use the term Hodge-Lefschetz structure in place of Hodge-Lefschetz module. This terminology is borrowed from [@SS]. The following is standard and elementary. **Proposition 60**. *Let $({\mathcal{M}}, {\mathrm{N}})$ be a Hodge-Lefschetz module of weight $w$. Then we have the Lefschetz decomposition $$\mathop{\mathrm{Gr}}^W{\mathcal{M}} = \bigoplus_{n \geq 0, 0 \leq k \leq n} {\mathrm{N}}^k{\mathrm{P}}_n{\mathcal{M}},$$ where $${\mathrm{P}}_n{\mathcal{M}} = \ker({\mathrm{N}}^{n + 1} \colon \mathop{\mathrm{Gr}}^W_{w + n} {\mathcal{M}} \to \mathop{\mathrm{Gr}}^W_{w - n - 2} {\mathcal{M}}(-n - 1)).$$* **Definition 61**. Let $({\mathcal{M}}, {\mathrm{N}})$ be a Hodge-Lefschetz module of weight $w$. A *polarization* of $({\mathcal{M}}, {\mathrm{N}})$ is a Hermitian form $S \colon {\mathcal{M}} \to {\mathcal{M}}^h(-w)$ such that 1. $S({\mathrm{N}} \cdot, \bar{\cdot}) = S(\cdot, \overline{{\mathrm{N}}\cdot})$, and 2. $(-1)^nS(\cdot, \overline{{\mathrm{N}}^n\cdot})$ is a polarization on ${\mathrm{P}}_n{\mathcal{M}}$ for all $n \geq 0$. A *polarized Hodge-Lefschetz module* is a Hodge-Lefschetz module equipped with a polarization. Polarized Hodge-Lefschetz modules are fundamental in the theory of Hodge modules. For example, one of the defining properties of a polarized Hodge module is that the nearby and vanishing cycles are always polarized Hodge-Lefschetz modules. One of the key properties of Hodge-Lefschetz modules is that they behave well under pushforward; this is one of the main results in [@S1]. To give the precise statement, recall the definition of pushforward for sesquilinear pairings on ${\mathcal{D}}$-modules. 
First, if $f \colon X \to Y$ is a closed immersion and ${\mathcal{M}}$ and ${\mathcal{N}}$ are regular holonomic ${\mathcal{D}}$-modules on $X$ with a sesquilinear pairing ${\mathfrak{s}} \colon {\mathcal{M}} \otimes \overline{{\mathcal{N}}} \to {{\mathcal{D}}\mathit{b}}_X$, we define $$\langle \eta, f_*{\mathfrak{s}}(\xi \otimes u, \overline{\xi' \otimes v}) \rangle = \frac{(-1)^{n(n + 1)/2}}{(2\pi i)^n} \langle f^*(\eta \mathrel{\lrcorner}(\xi \wedge \overline{\xi'})), {\mathfrak{s}}(u, \bar{v}) \rangle,$$ where $n = \dim X - \dim Y$ is the negative of the codimension and we use the sign convention of [@DV1 §3.1] for the contraction. Here we have specified the formula for local sections of $f_{\mathpalette\bigcdot@{.5}}(\omega_{X/Y} \otimes {\mathcal{M}}) \subset f_*{\mathcal{M}}$. Next, if $f$ is the projection $X = Y \times Z \to Y$, then we have $$f_*{\mathcal{M}} = {\mathrm{R}}f_{\mathpalette\bigcdot@{.5}}(\Omega^\bullet_Z \otimes {\mathcal{M}} [n]) = f_{\mathpalette\bigcdot@{.5}}({\mathcal{E}}^\bullet_Z \otimes {\mathcal{M}}[n]),$$ where ${\mathcal{E}}^\bullet_Z$ is the de Rham complex of smooth differential forms on $Z$. In this case, we define a map of complexes $$f_*{\mathfrak{s}} \colon f_*{\mathcal{M}} \otimes \overline{f_*{\mathcal{N}}} \to {{\mathcal{D}}\mathit{b}}_X$$ by the formula $$\langle \eta, f_*{\mathfrak{s}}(\alpha \otimes u, \overline{\beta \otimes v})\rangle = \frac{(-1)^{n(n - 1)/2}}{(2\pi i)^n} (-1)^{n(n - j)}\langle f^*\eta \wedge \alpha \wedge \bar{\beta}, {\mathfrak{s}}(u, \bar{v}) \rangle$$ where $n = \dim X - \dim Y$, and $\alpha \in {\mathcal{E}}_Z^{n - j}$, $\beta \in {\mathcal{E}}_Z^{n + j}$. **Theorem 62** ([@S1 Proposition 1]). *Let $f \colon X \to Y$ be projective, and let $\ell \in {\mathrm{H}}^2(X, 2\pi i{\mathbb{Z}})$ be the first Chern class of an $f$-ample line bundle on $X$. If $({\mathcal{M}}, {\mathrm{N}}, S)$ is a polarized Hodge-Lefschetz module of weight $w$, then for all $j \geq 0$,* 1. 
*$\ell^j \colon {\mathcal{H}}^{-j}f_*{\mathcal{M}} \to {\mathcal{H}}^jf_*{\mathcal{M}}(j)$ is an isomorphism, and* 2. *the triple $$({\mathrm{P}}^\ell {\mathcal{H}}^{-j}f_*{\mathcal{M}}, {\mathrm{N}}, (-1)^{j(j + 1)/2}f_*S(\cdot, \overline{\ell^j\cdot}))$$ is a polarized Hodge-Lefschetz module of weight $w - j$, where $${\mathrm{P}}^\ell {\mathcal{H}}^{-j}f_*{\mathcal{M}} := \ker(\ell^{j + 1} \colon {\mathcal{H}}^{-j}f_*{\mathcal{M}} \to {\mathcal{H}}^{j + 2}f_*{\mathcal{M}}(j + 1)).$$* **Remark 63**. We note that Saito uses slightly different conventions for polarizations: his polarizations are real bilinear forms, while ours are complex Hermitian forms following [@SS]. See, for example, [@DV1 §A.2] for a discussion of the relationship between the two notions. Compared with [@S1], the signs in Theorem [Theorem 62](#thm:HL pushforward){reference-type="ref" reference="thm:HL pushforward"} have been adjusted accordingly. ## Nilpotent orbits {#subsec:nilpotent orbit} We will also have need of the closely related notion of nilpotent orbit (in the sense of Hodge theory). **Definition 64**. Let $V$ be a finite dimensional complex vector space equipped with finite filtrations $F_\bullet$ and $\bar{F}_\bullet$, a Hermitian form $S$, and commuting self-adjoint nilpotent operators ${\mathrm{N}}_1, \ldots, {\mathrm{N}}_n \colon V \to V$ such that ${\mathrm{N}}_i F_p V \subset F_{p + 1}V$ and ${\mathrm{N}}_i \bar{F}_p V \subset \bar{F}_{p + 1}V$. We say that these data define a *nilpotent orbit of weight $w$ and dimension $n$* if there exist $\alpha_i \in {\mathbb{R}}$ such that $$\left(V, \exp\left(\sum_i z_i {\mathrm{N}}_i \right)\cdot F_\bullet, \exp\left(-\sum_i \bar{z}_i {\mathrm{N}}_i\right)\cdot \bar{F}_\bullet, S\right)$$ is a polarized Hodge structure of weight $w$ for all $z_i \in {\mathbb{C}}$ with $\mathop{\mathrm{Re}}z_i > \alpha_i$. 
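As a point of reference, we recall the explicit shape of the monodromy weight filtration attached to a single nilpotent operator; this is standard linear algebra, independent of the Hodge-theoretic data, and in the one-variable case the filtration $W_\bullet$ produced by the theorem of Cattani and Kaplan below is this filtration shifted so as to be centered at the weight $w$.

```latex
% Monodromy weight filtration of a single Jordan block, centered at 0.
% Suppose N^{k+1} = 0 and V has basis v, Nv, ..., N^k v. The unique
% increasing filtration W with N W_j \subset W_{j-2} and
% N^j : Gr^W_j V \xrightarrow{\;\sim\;} Gr^W_{-j} V for all j \geq 0 is
\[
  W_{k - 2i} V \;=\; W_{k - 2i + 1} V
  \;=\; \operatorname{span}\bigl(N^i v, N^{i+1} v, \ldots, N^k v\bigr),
  \qquad 0 \leq i \leq k,
\]
% with W_{-k-1} V = 0. Thus Gr^W_{k-2i} V is spanned by the class of
% N^i v, and N^{k-2i} sends this class to that of N^{k-i} v, which is
% exactly the required isomorphism Gr^W_{k-2i} \cong Gr^W_{2i-k}.
```

For a general nilpotent operator, the monodromy weight filtration is the direct sum of the filtrations of its Jordan blocks.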
The notion of nilpotent orbit is abstracted from the Nilpotent Orbit Theorem of Schmid [@schmid], which shows that these are the natural structures arising when one considers limits of a polarized variation of Hodge structure over an $n$-dimensional base. A key property, due to Cattani and Kaplan, is that a nilpotent orbit is equipped with a canonical weight filtration making it into a mixed Hodge structure. **Theorem 65** ([@cattani-kaplan Theorem 3.3]). *Given a nilpotent orbit as above, there exists a unique filtration $W_\bullet V$ such that $$(V, F_\bullet, \bar{F}_\bullet, W_\bullet, S, \sum_i a_i {\mathrm{N}}_i)$$ is a polarized Hodge-Lefschetz structure of weight $w$ for all $a_i > 0$.* We will also need the following. **Proposition 66**. *Let $(V, {\mathrm{N}}_1, \ldots, {\mathrm{N}}_n, S)$ be a nilpotent orbit of weight $w$. Then for all $m \geq 0$, we have that $$({\mathrm{P}}_m^{{\mathrm{N}}_n} V, {\mathrm{N}}_1, \ldots, {\mathrm{N}}_{n - 1}, (-1)^mS(\cdot, \overline{{\mathrm{N}}_n^m\cdot}))$$ is a nilpotent orbit of weight $w + m$. Here $${\mathrm{P}}_m^{{\mathrm{N}}_n}V = \ker({\mathrm{N}}_n^{m + 1} \colon \mathop{\mathrm{Gr}}^{W({\mathrm{N}}_n)}_m V \to \mathop{\mathrm{Gr}}^{W({\mathrm{N}}_n)}_{-m - 2} V (- m - 1))$$ for $W({\mathrm{N}}_n)$ the monodromy weight filtration with respect to ${\mathrm{N}}_n$.* *Proof.* Unwinding the definitions, we need to show that, for $z_i \in {\mathbb{C}}$ with $\mathop{\mathrm{Re}}z_i > \alpha_i$, $i = 1, \ldots, n - 1$, the tuple $$\left(V, \exp\left(\sum_{i = 1}^{n - 1} z_i{\mathrm{N}}_i\right)\cdot F_\bullet, \exp\left(-\sum_{i = 1}^{n - 1} \bar{z}_i{\mathrm{N}}_i\right) \cdot \bar{F}_\bullet, S, {\mathrm{N}}_n\right)$$ is a polarized Hodge-Lefschetz structure of weight $w$ when equipped with the appropriate shifted monodromy weight filtration. But it is clearly a nilpotent orbit (of dimension one, in the single operator ${\mathrm{N}}_n$), so this follows from Theorem [Theorem 65](#thm:nilpotent orbit HL){reference-type="ref" reference="thm:nilpotent orbit HL"}.
◻ ## Mixed Hodge modules of normal crossings type {#subsec:normal crossings} We next recall the notion of mixed Hodge module of normal crossings type and their explicit description via gluing data. The main reference is [@S2 §3]. The main point for us is Proposition [Proposition 71](#prop:polarized gluing data){reference-type="ref" reference="prop:polarized gluing data"}, which allows one to recognize a polarized Hodge module from its gluing data. **Definition 67**. Let $X$ be a smooth variety and $D \subset X$ a divisor with normal crossings. A mixed Hodge module ${\mathcal{M}}$ on $X$ is *of normal crossings type (with respect to $D$)* if ${\mathcal{M}}$ is constructible with respect to the stratification determined by $D$. Disregarding the Hodge structures, the category of perverse sheaves (or ${\mathcal{D}}$-modules) of normal crossings type has an elementary description in terms of local systems on the strata together with some gluing data [@GGM]. One of the main results of Saito [@S2] was to extend this to mixed Hodge modules, as we now explain. We will suppose for simplicity that the divisor $D$ has simple normal crossings, and moreover that $D$ is given by a reduced equation $x_1 \cdots x_n = 0$ for some regular functions $x_i \colon X \to {\mathbb{C}}$. We write $D_i = \{x_i = 0\}$ for the irreducible components of $D$. 
For $I \subset \{1, \ldots, n\}$, we set $$D_I = \bigcap_{i \in I} D_i.$$ For $\alpha \in [-1, 0]$ and $i \not\in I$, we have the functor $$\mathop{\mathrm{Gr}}_{V_i}^\alpha \colon {\mathrm{MHM}}(D_I) \to {\mathrm{MHM}}(D_{I \cup \{i\}}),$$ where $V_i$ is the $V$-filtration with respect to the coordinate function $x_i$.[^4] For the sake of notational simplicity, we will shift the Hodge and weight filtrations on $\mathop{\mathrm{Gr}}_{V_i}^{-1}{\mathcal{M}}$ so that the nearby and vanishing cycles functors are given by $$\psi_{x_i}{\mathcal{M}} = \bigoplus_{-1 < \alpha \leq 0} \mathop{\mathrm{Gr}}_{V_i}^\alpha{\mathcal{M}} \quad \mbox{and} \quad \phi_{x_i} {\mathcal{M}} = \bigoplus_{-1 \leq \alpha < 0} \mathop{\mathrm{Gr}}_{V_i}^\alpha{\mathcal{M}},$$ cf., e.g., [@SS Definition 9.4.3]. The $\mathop{\mathrm{Gr}}_{V_i}^\alpha$ come equipped with functorial monodromy operators $${\mathrm{N}}_i := - (x_i \partial_{x_i} - \alpha) \colon \mathop{\mathrm{Gr}}_{V_i}^\alpha {\mathcal{M}} \to \mathop{\mathrm{Gr}}_{V_i}^\alpha {\mathcal{M}} (-1)$$ and maps $${\mathrm{can}}_i := -\partial_{x_i} \colon \mathop{\mathrm{Gr}}_{V_i}^0{\mathcal{M}} \to \mathop{\mathrm{Gr}}_{V_i}^{-1}{\mathcal{M}} \quad \mbox{and} \quad {\mathrm{var}}_i := x_i \colon \mathop{\mathrm{Gr}}_{V_i}^{-1}{\mathcal{M}} \to \mathop{\mathrm{Gr}}_{V_i}^0{\mathcal{M}}(-1)$$ satisfying ${\mathrm{can}}_i \circ {\mathrm{var}}_i = {\mathrm{N}}_i$ and ${\mathrm{var}}_i \circ {\mathrm{can}}_i = {\mathrm{N}}_i$. 
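For later orientation, it may help to recall the classical one-variable dictionary between extensions across $D_i$ and the $({\mathrm{can}}_i, {\mathrm{var}}_i)$ data. This is standard in Beilinson's gluing formalism, and is stated here informally; in this block only, $j$ denotes the open inclusion of the complement of $D_i$, rather than the inclusion of $Q$.

```latex
% One-variable dictionary (informal): for a module M on X whose
% restriction to the complement of D_i is N,
\begin{itemize}
  \item ${\mathcal{M}} \cong j_!{\mathcal{N}}$ if and only if
        ${\mathrm{can}}_i$ is an isomorphism;
  \item ${\mathcal{M}} \cong j_*{\mathcal{N}}$ if and only if
        ${\mathrm{var}}_i$ is an isomorphism;
  \item ${\mathcal{M}} \cong j_{!*}{\mathcal{N}}$ if and only if
        ${\mathrm{can}}_i$ is surjective and ${\mathrm{var}}_i$ is injective.
\end{itemize}
```

The last characterization is the one used in the proof of Lemma 72 below.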
If ${\mathcal{M}}$ is of normal crossings type, then the $V$-filtrations on ${\mathcal{M}}$ with respect to the different coordinates are compatible, so ${\mathcal{M}}$ gives rise to a system of admissible variations of mixed Hodge structure $${\mathcal{M}}_I^{\boldsymbol{\alpha}_I} := {\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}{\mathcal{M}}|_{D_I^\circ} \quad \text{for $\boldsymbol{\alpha}_I \in [-1, 0]^I$},$$ on each open stratum $D_I^\circ$, equipped with nilpotent endomorphisms, $${\mathrm{N}}_i \colon {\mathcal{M}}_I^{\boldsymbol{\alpha}_I} \to {\mathcal{M}}_I^{\boldsymbol{\alpha}_I}(-1) \quad \mbox{for $i \in I$},$$ can and var maps, $${\mathrm{can}}_i \colon {\mathcal{M}}_I^{(\boldsymbol{\alpha}_{I - \{i\}}, 0_i)} \to {\mathcal{M}}_I^{(\boldsymbol{\alpha}_{I - \{i\}}, (-1)_i)} \quad \mbox{and} \quad {\mathrm{var}}_i \colon {\mathcal{M}}_I^{(\boldsymbol{\alpha}_{I - \{i\}}, (-1)_i)} \to {\mathcal{M}}_I^{(\boldsymbol{\alpha}_{I - \{i\}}, (0)_i)} (-1)$$ satisfying ${\mathrm{can}}_i \circ {\mathrm{var}}_i = {\mathrm{var}}_i \circ {\mathrm{can}}_i = {\mathrm{N}}_i$, and compatible isomorphisms $${\mathrm{Gr}}_{V_i}^{\alpha_i} {\mathcal{M}}_I^{\boldsymbol{\alpha}_I} \cong {\mathcal{M}}_{I \cup \{i\}}^{(\boldsymbol{\alpha}_I, \alpha_i)}$$ for $i \not\in I$ and $-1 < \alpha_i \leq 0$. (Note that for $-1 < \alpha_i \leq 0$, $\mathop{\mathrm{Gr}}_{V_i}^{\alpha_i}{\mathcal{M}}$ depends only on ${\mathcal{M}}|_{\{x_i \neq 0\}}$, so the functors $$\mathop{\mathrm{Gr}}_{V_i}^{\alpha_i} \colon {\mathrm{MHM}}(D_I^\circ) \to {\mathrm{MHM}}(D_{I \cup \{i\}}^\circ)$$ are well-defined.) We will call the data $\{{\mathcal{M}}_I^{\boldsymbol{\alpha}_I}, {\mathrm{N}}_i, {\mathrm{can}}_i, {\mathrm{var}}_i\}$ the *gluing data* for ${\mathcal{M}}$. We will often suppress the maps ${\mathrm{N}}_i$, ${\mathrm{can}}_i$ and ${\mathrm{var}}_i$ from the notation. A key result of Saito is that ${\mathcal{M}}$ is completely specified by its gluing data. **Theorem 68**. 
*The functor $${\mathcal{M}} \mapsto \{{\mathcal{M}}^{\boldsymbol{\alpha}_I}_I\}$$ from mixed Hodge modules of normal crossings type to gluing data is an equivalence of categories.* *Proof.* Iterate [@S2 Proposition 2.4 and Theorem 3.27]. ◻ We now turn to the following problem: given a pair $({\mathcal{M}}, S)$ of a mixed Hodge module ${\mathcal{M}}$ of normal crossings type and a Hermitian form $S \colon {\mathcal{M}} \to {\mathcal{M}}^h(-w)$, what property of the gluing data ensures that $({\mathcal{M}}, S)$ is a polarized Hodge module? We first note that Hermitian duality acts on gluing data as follows. **Proposition 69**. *There are canonical identifications $$({\mathcal{M}}^h)_I^{\boldsymbol{\alpha}_I} = ({\mathcal{M}}_I^{\boldsymbol{\alpha}_I})^h(\#\{i \in I \mid \alpha_i \neq -1\})$$ such that $${\mathrm{N}}_i^h = {\mathrm{N}}_i, \quad {\mathrm{can}}_i^h = -{\mathrm{var}}_i, \quad \mbox{and} \quad {\mathrm{var}}_i^h = -{\mathrm{can}}_i.$$* For ${\mathcal{M}}$ of normal crossings type, a Hermitian form $S \colon {\mathcal{M}} \to {\mathcal{M}}^h(-w)$ of weight $w$ is therefore equivalent to a system of Hermitian forms $$S_I \colon {\mathcal{M}}_I^{\boldsymbol{\alpha}_I} \to ({\mathcal{M}}_I^{\boldsymbol{\alpha}_I})^h(-w + \#\{i \in I \mid \alpha_i \neq -1\})$$ satisfying $$S_I({\mathrm{N}}_i \cdot, \bar{\cdot}) = S_I(\cdot, \overline{{\mathrm{N}}_i\cdot}) \quad \mbox{and} \quad S_I({\mathrm{can}}_i (\cdot), \bar{\cdot}) = - S_I(\cdot, \overline{{\mathrm{var}}_i(\cdot)}).$$ Consider the following condition. **Definition 70**. We say that $({\mathcal{M}}, S)$ as above has *polarized gluing data* if, for all $I \subset \{1, \ldots, n\}$ and $\boldsymbol{\alpha}_I \in [-1, 0]^I$, $$({\mathcal{M}}_I^{\boldsymbol{\alpha}_I}, S, \{{\mathrm{N}}_i \mid i \in I\})$$ is (pointwise) a nilpotent orbit of weight $w - n + \#\{i \in I \mid \alpha_i = -1\}$. **Proposition 71**. *Let $({\mathcal{M}}, S)$ be as above.
Then $({\mathcal{M}}, S)$ has polarized gluing data if and only if the mixed Hodge module ${\mathcal{M}}$ is pure of weight $w$ and $S$ is a polarization.* *Proof.* The "if" direction is a standard consequence of Schmid's Nilpotent Orbit Theorem [@schmid] (cf., e.g., [@S2 (3.18.7)]); the key input is the fact that the iterated nearby cycles of a polarized variation of Hodge structure form a nilpotent orbit. The converse follows from [@S2 Theorem 3.20] and Lemma [Lemma 72](#lem:S decomposability){reference-type="ref" reference="lem:S decomposability"} below. ◻ **Lemma 72**. *Assume $({\mathcal{M}}, S)$ has polarized gluing data. Then $({\mathcal{M}}, S)$ is a direct sum of intermediate extensions of polarized variations of Hodge structure on strata $D_I^\circ$.* *Proof.* We prove the lemma by induction on the number of irreducible components $n$ of $D$. First, if $n = 0$, then the lemma is vacuous. So assume $n \geq 1$. We claim that, for all $I$ with $n \in I$ and all $\boldsymbol{\alpha}_I$ with $\alpha_n = -1$, we have the orthogonal direct sum decomposition $$\label{eq:S decomposability 1} {\mathcal{M}}_I^{\boldsymbol{\alpha}_I} = \mathop{\mathrm{im}}{\mathrm{can}}_n \oplus \ker {\mathrm{var}}_n.$$ Given the claim, we prove the result as follows. Each $\ker {\mathrm{var}}_n$ is a nilpotent orbit of dimension $|I|$ with ${\mathrm{N}}_n = 0$, which is the same thing as a nilpotent orbit of dimension $|I| - 1$. So by the induction hypothesis applied to $D_n$, we have that $${\mathcal{N}}_I^{\boldsymbol{\alpha}_I} := \begin{cases} \ker {\mathrm{var}}_n \subset {\mathcal{M}}_I^{\boldsymbol{\alpha}_I}, & \mbox{if $n \in I$ and $\alpha_n = -1$}, \\ 0, & \mbox{otherwise}\end{cases}$$ is the gluing data for a direct sum ${\mathcal{N}}$ of intermediate extensions of polarized variations of Hodge structure on strata of $D_n$.
Similarly, an elementary exercise shows that intermediate extensions from $X - D_n$ are characterized by the property that ${\mathrm{can}}_n$ is surjective and ${\mathrm{var}}_n$ is injective, so $${\mathcal{P}}_I^{\boldsymbol{\alpha}_I} := \begin{cases} \mathop{\mathrm{im}}{\mathrm{can}}_n \subset {\mathcal{M}}_I^{\boldsymbol{\alpha}_I}, & \mbox{if $n \in I$ and $\alpha_n = -1$}, \\ {\mathcal{M}}_I^{\boldsymbol{\alpha}_I}, & \mbox{otherwise}\end{cases}$$ is the gluing data for a direct sum ${\mathcal{P}}$ of intermediate extensions of polarized variations of Hodge structure on strata of $X - D_n$. The claimed direct sum decomposition [\[eq:S decomposability 1\]](#eq:S decomposability 1){reference-type="eqref" reference="eq:S decomposability 1"} implies that $${\mathcal{M}} = {\mathcal{N}} \oplus {\mathcal{P}},$$ so we conclude the result. It remains to prove the claim, which can be checked pointwise in $D_I^\circ$. So let us fix $p \in D_I^\circ$ and write $$\psi = ({\mathcal{M}}_I^{(\boldsymbol{\alpha}_{I - \{n\}}, 0)})_p \quad \mbox{and} \quad \phi = ({\mathcal{M}}_I^{\boldsymbol{\alpha}_I})_p.$$ Replacing the Hodge filtrations $F_\bullet$ and $\bar{F}_\bullet$ with $$\exp\left(\sum_{i \in I - \{n\}} z_i {\mathrm{N}}_i\right) \cdot F_\bullet \quad \mbox{and} \quad \exp\left(\sum_{i \in I - \{n\}} -\bar{z}_i {\mathrm{N}}_i\right) \cdot \bar{F}_\bullet$$ for $\mathop{\mathrm{Re}}z_i \gg 0$, we may assume without loss of generality that $\psi$ and $\phi$ are $1$-dimensional nilpotent orbits (i.e., polarized Hodge-Lefschetz structures) of weights $w'$ and $w' + 1$ respectively, where $$w' = w - n + \#\{i \in I - \{n\} \mid \alpha_i = - 1\}.$$ In this setting, the decomposition [\[eq:S decomposability 1\]](#eq:S decomposability 1){reference-type="eqref" reference="eq:S decomposability 1"} is standard: see, e.g., [@SS Theorem 3.4.22].
◻ ## Proof of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"} {#subsec:pf of thm:jantzen} We now turn to the proof of Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}. Recall that we are in the context of an affine locally closed immersion $j \colon Q \to X$ of smooth varieties and a positive element $f \in \Gamma_{\mathbb{R}}(Q)$. We first briefly outline the proof given in [@DV1 §7] for the case where $f$ is a boundary equation, and explain which modifications are needed for more general $f$. First, in [@DV1 §7.1], we defined the Beilinson functors $$\pi_f^a({\mathcal{M}}) := \mathop{\mathrm{coker}}(j_!f^s{\mathcal{M}}[[s]](a) \xrightarrow{s^a} j_*f^s{\mathcal{M}}[[s]]) \in {\mathrm{MHM}}(X)$$ for ${\mathcal{M}} \in {\mathrm{MHM}}(Q)$ and $a \geq 0$, equipped with the nilpotent endomorphism $${\mathrm{N}} := s \colon \pi_f^a({\mathcal{M}}) \to \pi_f^a({\mathcal{M}})(-1).$$ For $({\mathcal{M}}, S)$ polarized of weight $w$, $\pi_f^a({\mathcal{M}})$ is also equipped with a non-degenerate Hermitian form $$\pi_f^a(S) \colon \pi_f^a({\mathcal{M}}) \to \pi_f^a({\mathcal{M}})^h(-w + a - 1),$$ constructed as follows.
There is a canonical pairing $$\mathop{\mathrm{Res}}S \colon f^s{\mathcal{M}}((s)) \to f^s{\mathcal{M}}((s))^h(- w - 1)$$ given by $$\mathop{\mathrm{Res}}S(f^s m, \overline{f^sm'}) := \mathop{\mathrm{Res}}_{s = 0} |f|^{2s} S(m, \overline{m'}).$$ Pushing forward, we obtain a pairing $$j_*(\mathop{\mathrm{Res}}S) \colon j_*f^s{\mathcal{M}}((s)) \to (j_!f^s{\mathcal{M}}((s)))^h (-w -1).$$ Observing that the canonical map $$\beta \colon j_!f^s{\mathcal{M}}((s)) \to j_*f^s{\mathcal{M}}((s))$$ is an isomorphism, we set $$\pi_f^a(S)(\cdot, \bar{\cdot}) = j_*(\mathop{\mathrm{Res}}S)(s^{-a}\beta^{-1}(\cdot), \bar{\cdot}).$$ These constructions all make sense for any positive $f \in \Gamma(Q, {\mathcal{O}}^\times) \otimes {\mathbb{R}}$, so, choosing any pre-image of our given $f \in \Gamma_{\mathbb{R}}(Q)$ in this space, we may proceed as in [@DV1]. Next, in [@DV1 §7.3], we adapted the arguments of [@BB2] to reduce Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"} to the following statement about $\pi_f^0({\mathcal{M}})$ (cf. [@DV1 Corollary 7.3]): **Lemma 73**. *The triple $(\pi_f^0({\mathcal{M}}), {\mathrm{N}}, -\pi_f^0(S))$ is a polarized Hodge-Lefschetz module of weight $w + 1$ in the sense of §[6.1](#subsec:HL){reference-type="ref" reference="subsec:HL"}.* This reduction carries over word for word to our setting, so it remains to prove Lemma [Lemma 73](#lem:beilinson polarization){reference-type="ref" reference="lem:beilinson polarization"}. Finally, in [@DV1 §7.2], we proved Lemma [Lemma 73](#lem:beilinson polarization){reference-type="ref" reference="lem:beilinson polarization"} by relating the Beilinson functor $\pi_f^0({\mathcal{M}})$ to the unipotent nearby cycles functor $\psi_f^{\mathit{un}} {\mathcal{M}}$ and appealing to the standard Hodge-Lefschetz property of the nearby cycles of a polarized Hodge module. It is only at this step that the proof requires modification.
The proof of Lemma [Lemma 73](#lem:beilinson polarization){reference-type="ref" reference="lem:beilinson polarization"} for positive $f \in \Gamma(Q, {\mathcal{O}}^\times) \otimes {\mathbb{R}}$ occupies the rest of the subsection. The first step is to reduce to a normal crossings situation. **Lemma 74**. *Assume that ${\mathcal{M}}$ is the intermediate extension of a variation of Hodge structure on an open set $U \subset Q$ such that $U$ is the complement of a divisor with simple normal crossings in $X$. Then Lemma [Lemma 73](#lem:beilinson polarization){reference-type="ref" reference="lem:beilinson polarization"} holds.* *Proof that Lemma [Lemma 74](#lem:beilinson polarization normal crossings){reference-type="ref" reference="lem:beilinson polarization normal crossings"} implies Lemma [Lemma 73](#lem:beilinson polarization){reference-type="ref" reference="lem:beilinson polarization"}.* Since ${\mathcal{M}}$ is a pure (polarized) Hodge module, we have by [@S2 Theorem 3.21] that $({\mathcal{M}}, S) = (j'_{!*}{\mathcal{M}}', S')$ for some locally closed embedding $j' \colon Q' \hookrightarrow Q$, with $Q'$ smooth, and some polarized variation of Hodge structures $({\mathcal{M}}', S')$ on $Q'$. Take a resolution $g \colon \tilde{Q}' \to \bar{Q}'$ for the closure of $Q'$ in $X$ such that $Q' \cong g^{-1}(Q') \subset \tilde{Q}'$ is the complement of a divisor with simple normal crossings, and write $j'' \colon Q' \hookrightarrow g^{-1}(Q)$ for the inclusion. Now $({\mathcal{M}}, S)$ is a direct summand of the polarized Hodge module $${\mathrm{P}}^\ell{\mathcal{H}}^0 g_*j''_{!*}({\mathcal{M}}', S').$$ So $(\pi_f^0({\mathcal{M}}), {\mathrm{N}}, -\pi_f^0(S))$ is a direct summand of $$\pi_f^0{\mathrm{P}}^\ell {\mathcal{H}}^0g_*j_{!*}''({\mathcal{M}}', -S') = {\mathrm{P}}^\ell {\mathcal{H}}^0g_*(\pi_f^0j''_{!*}{\mathcal{M}}', {\mathrm{N}}, -\pi_f^0j''_{!*}S')$$ for $\ell$ the first Chern class of a $g$-ample line bundle. 
By Lemma [Lemma 74](#lem:beilinson polarization normal crossings){reference-type="ref" reference="lem:beilinson polarization normal crossings"} and Theorem [Theorem 62](#thm:HL pushforward){reference-type="ref" reference="thm:HL pushforward"}, the latter is a polarized Hodge-Lefschetz module of weight $w + 1$, so the result follows. ◻ It remains to prove Lemma [Lemma 74](#lem:beilinson polarization normal crossings){reference-type="ref" reference="lem:beilinson polarization normal crossings"}. To this end, write $D = X - U$, and suppose $D$ is given by the reduced equation $x_1 \cdots x_n = 0$. (This is no loss of generality: we may always ensure this is the case by passing to an open cover of $X$.) We then have $f = g x_1^{a_1} \cdots x_n^{a_n}$ with $a_i = \operatorname{ord}_{D_i}(f) \geq 0$ and $g \in \Gamma(X, {\mathcal{O}}_X^\times)\otimes {\mathbb{R}}$. Lemma [Lemma 74](#lem:beilinson polarization normal crossings){reference-type="ref" reference="lem:beilinson polarization normal crossings"} is proved by writing down the gluing data for the mixed Hodge module $\pi_f^0({\mathcal{M}})$ of normal crossings type and applying Proposition [Proposition 71](#prop:polarized gluing data){reference-type="ref" reference="prop:polarized gluing data"}. Let us write $J = \{i \in \{1, \ldots, n\} \mid a_i = 0\}$, so that $X - Q = \bigcup_{i \not\in J} D_i$. Then we have well-defined gluing data for $({\mathcal{M}}, S)$ given by $$({\mathcal{M}}^{\boldsymbol{\alpha}_I}_I, S) = (\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}{\mathcal{M}}, \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}S)$$ for all $I$, $\boldsymbol{\alpha}_I$ such that $\alpha_i = -1$ implies $i \in J$. Here the pairing $\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I} S$ is given explicitly by iterating [@SS Definitions 12.5.10 and 12.5.23]. Similarly, we can form the gluing data for $f^s{\mathcal{M}}[[s]]$ and $(f^s{\mathcal{M}}((s)), \mathop{\mathrm{Res}}S)$. 
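Before stating the lemmas, it may help to record informally the shape of this gluing data in one variable, which in particular makes the isomorphism $\beta$ of the previous subsection transparent; the two lemmas below make this precise. **Example**. Take $n = 1$ and $f = x_1^a$ with $a > 0$. On the gluing data, the logarithm of monodromy of $f^s{\mathcal{M}}((s))$ acts by ${\mathrm{N}} - as$, where ${\mathrm{N}}$ is the (nilpotent) monodromy operator of ${\mathcal{M}}$, and the canonical map $\beta \colon j_!f^s{\mathcal{M}}((s)) \to j_*f^s{\mathcal{M}}((s))$ is given by ${\mathrm{N}} - as$ on the piece $\mathop{\mathrm{Gr}}_V^{-1}$. Since ${\mathrm{N}}$ is nilpotent, $$({\mathrm{N}} - as)^{-1} = -\sum_{k \geq 0} (as)^{-k-1}{\mathrm{N}}^k$$ is a finite sum, so $\beta$ is invertible; this is the mechanism behind the invertibility of $\beta$ in general.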
The following lemma is an easy consequence of the definitions. **Lemma 75**. *We have $$(\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}f^s{\mathcal{M}}[[s]], \{{\mathrm{N}}_i\}, s) \cong (f_I^s(\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}{\mathcal{M}})[[s]], \{{\mathrm{N}}_i - a_is\}, s)$$ and $$(\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}f^s{\mathcal{M}}((s)), \{{\mathrm{N}}_i\}, s, \mathop{\mathrm{Res}}S) \cong (f_I^s(\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}{\mathcal{M}})((s)), \{{\mathrm{N}}_i - a_is\}, s, \mathop{\mathrm{Res}}S),$$ where $$f_I = \left.\frac{f}{\prod_{i \in I} x_i^{a_i}}\right|_{D_I^\circ}.$$ If $i \in J$, so that the ${\mathrm{can}}_i$ and ${\mathrm{var}}_i$ maps are defined, they are identified by the above isomorphisms.* There is a similarly straightforward description of gluing data for $!$ and $*$ extensions. In the statement below, for $I \subset \{1, \ldots, n\}$ and $\boldsymbol{\alpha}_I \in [-1, 0]^I$, we define $I(\boldsymbol{\alpha}) \subset I$ and $\boldsymbol{\alpha}_I' \in [-1, 0]^I$ by $$I(\boldsymbol{\alpha}) = \{i \in I \mid \mbox{$i \not\in J$ and $\alpha_i = -1$}\}$$ and $$\alpha_i' = \begin{cases} 0, &\mbox{if $i \in I(\boldsymbol{\alpha})$}, \\ \alpha_i, &\mbox{otherwise}.\end{cases}$$ **Lemma 76**. *Let ${\mathcal{N}}$ be a mixed Hodge module of normal crossings type on $Q$ and $S \colon {\mathcal{N}} \to {\mathcal{N}}^h(-w)$ a Hermitian form. Then:* 1. *[\[itm:extension gluing 1\]]{#itm:extension gluing 1 label="itm:extension gluing 1"} We have identifications $$\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I} j_!{\mathcal{N}} = \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I'}{\mathcal{N}}$$ such that ${\mathrm{can}}_i$ is the identity and ${\mathrm{var}}_i = {\mathrm{N}}_i$ for $i \in I(\boldsymbol{\alpha})$.* 2.
*[\[itm:extension gluing 2\]]{#itm:extension gluing 2 label="itm:extension gluing 2"} We have identifications $$\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I} j_*{\mathcal{N}} = \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I'}{\mathcal{N}}(-|I(\boldsymbol{\alpha})|)$$ such that ${\mathrm{can}}_i = {\mathrm{N}}_i$ and ${\mathrm{var}}_i$ is the identity for $i \in I(\boldsymbol{\alpha})$.* 3. *[\[itm:extension gluing 3\]]{#itm:extension gluing 3 label="itm:extension gluing 3"} Under the above identifications, the canonical map $j_!{\mathcal{N}} \to j_*{\mathcal{N}}$ is given by $$\prod_{i \in I(\boldsymbol{\alpha})} {\mathrm{N}}_i \colon \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I} j_!{\mathcal{N}} = \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I'}{\mathcal{N}} \to \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I'}{\mathcal{N}}(-|I(\boldsymbol{\alpha})|) = \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I} j_*{\mathcal{N}}.$$* 4. *[\[itm:extension gluing 4\]]{#itm:extension gluing 4 label="itm:extension gluing 4"} Under the above identifications, the pairing $j_*S \colon j_!{\mathcal{N}} \to (j_*{\mathcal{N}})^h(-w)$ is given on gluing data by $$\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I} S = (-1)^{|I(\boldsymbol{\alpha})|} \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I'} S.$$* *Proof.* The identifications [\[itm:extension gluing 1\]](#itm:extension gluing 1){reference-type="eqref" reference="itm:extension gluing 1"} and [\[itm:extension gluing 2\]](#itm:extension gluing 2){reference-type="eqref" reference="itm:extension gluing 2"} follow easily from the universal properties of $j_!{\mathcal{N}}$ and $j_*{\mathcal{N}}$, from which [\[itm:extension gluing 3\]](#itm:extension gluing 3){reference-type="eqref" reference="itm:extension gluing 3"} is also clear. By definition, $j_*S$ is the unique pairing restricting to $S$ on $Q$.
The formula in [\[itm:extension gluing 4\]](#itm:extension gluing 4){reference-type="eqref" reference="itm:extension gluing 4"} defines gluing data for such an extension, so this must be the gluing data for $j_*S$ as claimed. ◻ Combining Lemmas [Lemma 75](#lem:fs gluing){reference-type="ref" reference="lem:fs gluing"} and [Lemma 76](#lem:extension gluing){reference-type="ref" reference="lem:extension gluing"}, we obtain the following description of the gluing data of $(\pi_f^0({\mathcal{M}}), {\mathrm{N}}, \pi_f^0(S))$. **Lemma 77**. *We have an isomorphism $$\begin{aligned} \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}\pi_f^0({\mathcal{M}}) &= \mathop{\mathrm{coker}}(\tilde{{\mathrm{N}}} \colon f_I^s{\mathcal{M}}^{\boldsymbol{\alpha}_I'}[[s]] \to f_I^s{\mathcal{M}}^{\boldsymbol{\alpha}_I'}[[s]](-|I(\boldsymbol{\alpha})|)) \\ {\mathrm{N}}_i &\mapsto {\mathrm{N}}_i - a_i s,\\ {\mathrm{N}} &\mapsto s, \\ \pi_f^0(S) &\mapsto (-1)^{|I(\boldsymbol{\alpha})|} \mathop{\mathrm{Res}}_{s = 0} S(\tilde{{\mathrm{N}}}^{-1} \cdot, \bar{\cdot}),\end{aligned}$$ where $$\tilde{{\mathrm{N}}} = \prod_{i \in I(\boldsymbol{\alpha})} ({\mathrm{N}}_i - a_i s) \colon f_I^s{\mathcal{M}}^{\boldsymbol{\alpha}_I'}[[s]] \to f_I^s{\mathcal{M}}^{\boldsymbol{\alpha}_I'}[[s]](-|I(\boldsymbol{\alpha})|)$$ which we note becomes invertible after inverting $s$.* The final step is to apply the following result of Kashiwara. **Proposition 78** (Kashiwara). *Let $(V, {\mathrm{N}}_1, \ldots, {\mathrm{N}}_n, S)$ be a nilpotent orbit of weight $w$ and dimension $n$. Fix $I \subset \{1, \ldots, n\}$ and $a_i > 0$ for $i \in I$. 
Set $$\tilde{V} = \mathop{\mathrm{coker}}(\tilde{{\mathrm{N}}} \colon V[[s]] \to V[[s]](-|I|))$$ and $$\tilde{S}(\cdot, \bar{\cdot}) = \mathop{\mathrm{Res}}_{s = 0} S(\tilde{{\mathrm{N}}}^{-1}\cdot, \bar{\cdot})$$ where $$\tilde{{\mathrm{N}}} = \prod_{i \in I} ({\mathrm{N}}_i - a_i s).$$ Then $(\tilde{V}, {\mathrm{N}}_1, \ldots, {\mathrm{N}}_n, {\mathrm{N}} = s, -\tilde{S})$ is a nilpotent orbit of weight $w + 1 - |I|$.* *Proof.* This is [@S2 Proposition 3.19], with signs adjusted to our conventions. Note that while the proposition is stated in the reference for $a_i \in {\mathbb{Z}}_{> 0}$, the integrality is never used in the proof. ◻ We now complete the proof of Lemma [Lemma 74](#lem:beilinson polarization normal crossings){reference-type="ref" reference="lem:beilinson polarization normal crossings"} and hence Theorem [Theorem 41](#thm:jantzen){reference-type="ref" reference="thm:jantzen"}. Since $({\mathcal{M}}, S)$ is polarized of weight $w$, Propositions [Proposition 71](#prop:polarized gluing data){reference-type="ref" reference="prop:polarized gluing data"} and [Proposition 78](#prop:kashiwara){reference-type="ref" reference="prop:kashiwara"} and Lemma [Lemma 77](#lem:beilinson gluing){reference-type="ref" reference="lem:beilinson gluing"} imply that $$(\mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}\pi_f^0({\mathcal{M}}), \{{\mathrm{N}}_i \mid i \in I - I(\boldsymbol{\alpha})\}, \{{\mathrm{N}}_i + a_i {\mathrm{N}} \mid i \in I(\boldsymbol{\alpha})\}, {\mathrm{N}}, -\pi_f^0(S))$$ is pointwise a nilpotent orbit of weight $w + 1 - n + \#\{i \in I \mid \alpha_i = -1\}$ and dimension $|I| + 1$. 
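It is instructive to check Proposition [Proposition 78](#prop:kashiwara){reference-type="ref" reference="prop:kashiwara"} in the smallest case, which also shows where the positivity of the $a_i$ enters. **Example**. Let $(V, S)$ be a polarized Hodge structure of weight $w$, viewed as a nilpotent orbit with ${\mathrm{N}}_1 = 0$, and take $I = \{1\}$ and $a_1 = a > 0$. Then $\tilde{{\mathrm{N}}} = -as$, so $\tilde{V} = \mathop{\mathrm{coker}}(-as) \cong V$ with ${\mathrm{N}} = s$ acting by zero, and for $v, v' \in V$ we find $$\tilde{S}(v, \bar{v}') = \mathop{\mathrm{Res}}_{s = 0} S(-(as)^{-1}v, \bar{v}') = -a^{-1}S(v, \bar{v}').$$ Hence $-\tilde{S} = a^{-1}S$ is again a polarization, and $(\tilde{V}, {\mathrm{N}}_1 = 0, {\mathrm{N}} = 0, -\tilde{S})$ is a constant nilpotent orbit of weight $w + 1 - |I| = w$, as the proposition asserts; note that for $a < 0$ the sign of the pairing would come out wrong.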
By Proposition [Proposition 66](#prop:nilpotent orbit primitive){reference-type="ref" reference="prop:nilpotent orbit primitive"}, the primitive parts $$({\mathrm{P}}^{{\mathrm{N}}}_m \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}\pi_f^0({\mathcal{M}}), \{{\mathrm{N}}_i \mid i \in I\}, (-1)^{m - 1}\pi_f^0(S)(\cdot, \overline{{\mathrm{N}}^m\cdot}))$$ are therefore nilpotent orbits of weight $w + 1 - n + \#\{i \in I \mid \alpha_i = -1\} + m$ for all $m \geq 0$. Here we note that ${\mathrm{N}}_i + a_i {\mathrm{N}}$ and ${\mathrm{N}}_i$ induce the same operator on $\mathop{\mathrm{Gr}}^{W({\mathrm{N}})}$. Since $${\mathrm{P}}^{{\mathrm{N}}}_m \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I}\pi_f^0({\mathcal{M}}) = \mathop{\mathrm{Gr}}_{V_I}^{\boldsymbol{\alpha}_I} {\mathrm{P}}^{{\mathrm{N}}}_m \pi_f^0({\mathcal{M}})$$ we deduce that $$({\mathrm{P}}^{{\mathrm{N}}}_m \pi_f^0({\mathcal{M}}), (-1)^{m - 1}\pi_f^0(S)(\cdot, \overline{{\mathrm{N}}^m\cdot}))$$ has polarized gluing data of weight $w + 1 + m$, and is hence pure and polarized of weight $w + 1 + m$. So $(\pi_f^0({\mathcal{M}}), {\mathrm{N}}, -\pi_f^0(S))$ is a polarized Hodge-Lefschetz module of weight $w + 1$. This completes the proof. # The twisted Kodaira vanishing theorem {#sec:twisted kodaira} In this section, we state and prove the twisted version of Saito's Kodaira vanishing theorem for mixed Hodge modules. We first recall (cf., e.g., [@laumon]) the interaction between proper pushforwards and associated gradeds for mixed Hodge modules. Let $g \colon X \to Y$ be a proper morphism of smooth varieties and let ${\mathcal{M}} \in {\mathrm{MHM}}(X)$. 
Recall from §[5.2](#subsec:hodge pushforward){reference-type="ref" reference="subsec:hodge pushforward"} that the filtered complex underlying $g_*{\mathcal{M}}$ is given by $$(g_*{\mathcal{M}}, F_\bullet) = {\mathrm{R}}g_{\mathpalette\bigcdot@{.5}}(({\mathcal{D}}_{Y\leftarrow X}, F_\bullet) \overset{{\mathrm{L}}}\otimes_{({\mathcal{D}}_X, F_\bullet)} ({\mathcal{M}}, F_\bullet)),$$ where ${\mathcal{D}}_{Y \leftarrow X}$ is equipped with the filtration by order of differential operator, shifted to start in degree $\dim Y - \dim X$. Identifying $\mathop{\mathrm{Gr}}^F {\mathcal{D}}_X \cong {\mathcal{O}}_{T^*X}$ and $\mathop{\mathrm{Gr}}^F {\mathcal{D}}_Y \cong {\mathcal{O}}_{T^*Y}$, we have, canonically, $$\mathop{\mathrm{Gr}}^F {\mathcal{D}}_{Y \leftarrow X} \cong {\mathcal{O}}_{T^*Y \times_Y X} \otimes \omega_{X/Y}\{\dim Y - \dim X\}$$ as a bimodule over $g^{-1}{\mathcal{O}}_{T^*Y}$ and ${\mathcal{O}}_{T^*X}$, where $\{\cdot\}$ denotes grading shift. We deduce the following. **Proposition 79**. *Let $g \colon X \to Y$ be a proper morphism between smooth varieties as above. Define $$\mathop{\mathrm{Gr}}(g)_* \colon {\mathrm{D}}^b{\mathrm{Coh}}^{{\mathbb{G}}_m}(T^*X) \to {\mathrm{D}}^b{\mathrm{Coh}}^{{\mathbb{G}}_m}(T^*Y)$$ by $$\mathop{\mathrm{Gr}}(g)_*{\mathcal{M}} = {\mathrm{R}} p_{g\mathpalette\bigcdot@{.5}} {\mathrm{L}} q_g^{\mathpalette\bigcdot@{.5}}({\mathcal{M}} \otimes \omega_{X/Y})\{\dim Y - \dim X\},$$ where $q_g$ and $p_g$ are the morphisms $$T^*X \xleftarrow{q_g} T^*Y \times_Y X \xrightarrow{p_g} T^*Y.$$ Then we have a canonical isomorphism $$\mathop{\mathrm{Gr}}^F(g_*{\mathcal{M}}) \cong \mathop{\mathrm{Gr}}(g)_*\mathop{\mathrm{Gr}}^F{\mathcal{M}}$$ for ${\mathcal{M}} \in {\mathrm{MHM}}(X)$.* Now suppose that $\pi \colon \tilde{X} \to X$ is an $H$-torsor, for some torus $H$, and $g \colon X \to Y$ is a projective morphism. 
If $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ and ${\mathcal{M}} \in {\mathrm{MHM}}_\lambda(\tilde{X})$ is $\lambda$-twisted, then the pushforward $g_*{\mathcal{M}}$ does not make sense, but the associated graded pushforward $\mathop{\mathrm{Gr}}(g)_*\mathop{\mathrm{Gr}}^F{\mathcal{M}} \in {\mathrm{D}}^b{\mathrm{Coh}}^{{\mathbb{G}}_m}(T^*Y)$ does; here we have identified ${\mathcal{M}}$ with its underlying filtered module on $X$ so that $\mathop{\mathrm{Gr}}^F{\mathcal{M}} \in {\mathrm{Coh}}^{{\mathbb{G}}_m}(T^*X)$. We have the following relative Kodaira vanishing result for twisted mixed Hodge modules. Define an element $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ to be *$g$-ample* if it is an ${\mathbb{R}}_{>0}$-linear combination of elements $\mu \in {\mathbb{X}}^*(H)$ such that the line bundle ${\mathcal{O}}_X(\mu) = \pi_{\mathpalette\bigcdot@{.5}}({\mathcal{O}}_{\tilde{X}} \otimes {\mathbb{C}}_\mu)^H$ is $g$-ample. **Theorem 80**. *Let $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ and ${\mathcal{M}} \in {\mathrm{MHM}}_\lambda(\tilde X)$. Suppose that $\lambda$ is $g$-ample. Then $${\mathcal{H}}^i\mathop{\mathrm{Gr}}(g)_*\mathop{\mathrm{Gr}}^F{\mathcal{M}} = 0 \quad \mbox{for $i > 0$}.$$* In particular, taking $g$ to be the map from $X$ to a point, we recover the absolute statement (Theorem [Theorem 4](#thm:intro twisted kodaira){reference-type="ref" reference="thm:intro twisted kodaira"}) given in the introduction. *Proof.* We prove the desired vanishing by induction on the dimension of the support of ${\mathcal{M}}$. By tensoring with a line bundle on $Y$ if necessary, we may assume without loss of generality that $\lambda$ is ample.
In this case, in the language of §[3](#sec:deformations){reference-type="ref" reference="sec:deformations"}, there exists a divisor $D \subset X$ such that $U = X - D$ is affine over $Y$ and $\dim D \cap \operatorname{Supp}{\mathcal{M}} < \dim \operatorname{Supp}{\mathcal{M}}$, and a positive element $f \in \Gamma_{\mathbb{R}}(\tilde U)^{\mathit{mon}}$ such that $\varphi(f) = \lambda$. Here $\tilde U = \pi^{-1}(U)$. Consider the family $f^sj^*{\mathcal{M}}$, where $j \colon U \to X$ is the inclusion. By Proposition [Proposition 39](#prop:hyperplanes){reference-type="ref" reference="prop:hyperplanes"}, there exist $-1 < s_1 < \cdots < s_n < 0$ such that the extension $j_*f^sj^*{\mathcal{M}}$ is clean for $s \in (-1, 0) - \{s_1, \ldots, s_n\}$. We prove the vanishing by a second induction on $n$. We first claim that it suffices to prove the desired vanishing for the Hodge module $j_!j^*{\mathcal{M}}$. To see this, observe that the tautological map $j_!j^*{\mathcal{M}} \to {\mathcal{M}}$ gives exact sequences $$0 \to {\mathcal{N}} \to j_!j^*{\mathcal{M}} \to {\mathcal{P}} \to 0$$ and $$0 \to {\mathcal{P}} \to {\mathcal{M}} \to {\mathcal{Q}} \to 0$$ such that ${\mathcal{N}}$ and ${\mathcal{Q}}$ are supported in $D \cap \operatorname{Supp}{\mathcal{M}}$. So by induction on the dimension of the support, we have the vanishing for ${\mathcal{N}}$ and ${\mathcal{Q}}$. The associated long exact sequences for ${\mathcal{H}}^i \mathop{\mathrm{Gr}}(g)_*\mathop{\mathrm{Gr}}^F$ show that the vanishing for $j_!j^*{\mathcal{M}}$ implies the vanishing for ${\mathcal{P}}$, which implies the vanishing for ${\mathcal{M}}$ as claimed. We now proceed with the induction. First, if $n = 0$, then $$\mathop{\mathrm{Gr}}^Fj_!j^*{\mathcal{M}} \cong \mathop{\mathrm{Gr}}^F j_*f^{-1}j^*{\mathcal{M}}$$ by Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} and Proposition [Proposition 43](#prop:monodromic semi-continuity){reference-type="ref" reference="prop:monodromic semi-continuity"}.
But now $f^{-1}j^*{\mathcal{M}}$ is an untwisted mixed Hodge module on $U$, so $$\mathop{\mathrm{Gr}}(g)_*\mathop{\mathrm{Gr}}^Fj_*f^{-1}j^*{\mathcal{M}} = \mathop{\mathrm{Gr}}^F g_*j_*f^{-1}j^*{\mathcal{M}}$$ by Proposition [Proposition 79](#prop:associated graded pushforward){reference-type="ref" reference="prop:associated graded pushforward"}. But ${\mathcal{H}}^i g_*j_*f^{-1}j^*{\mathcal{M}} = 0$ for $i > 0$ since $g \circ j$ is affine, so we are done in this case. If $n > 0$, on the other hand, then we have $$\mathop{\mathrm{Gr}}^F j_!j^*{\mathcal{M}} \cong \mathop{\mathrm{Gr}}^F j_*f^{s_n}j^*{\mathcal{M}}$$ by Theorem [Theorem 40](#thm:semi-continuity){reference-type="ref" reference="thm:semi-continuity"} and Proposition [Proposition 43](#prop:monodromic semi-continuity){reference-type="ref" reference="prop:monodromic semi-continuity"} again. But now the $(1 + s_n)\lambda$-twisted Hodge module $j_*f^{s_n}j^*{\mathcal{M}}$ satisfies our hypotheses, but has only $n - 1$ reducibility points in the corresponding $f^s$ family. So by induction on $n$, the vanishing holds for $j_*f^{s_n}j^*{\mathcal{M}}$, so we are done in this case as well. This completes the proof. ◻ # Filtered exactness and global generation {#sec:vanishing} The aim of this section is to prove the theorems on cohomology vanishing (Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"}) and global generation (Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"}) of the Hodge filtrations of monodromic mixed Hodge modules on ${\mathcal{B}}$. Along the way, we also prove a result of independent interest on the Hodge filtration on the big projective in category ${\mathcal{O}}$. The main idea behind the proof is to reduce the statements to a cohomology vanishing statement for certain convolutions, which is in turn proved using the twisted Kodaira vanishing theorem.
To this end, we discuss in §[8.1](#subsec:convolutions){reference-type="ref" reference="subsec:convolutions"} the convolution operations for mixed Hodge modules on the flag variety and coherent sheaves on its cotangent bundle. In §[8.2](#subsec:big projective){reference-type="ref" reference="subsec:big projective"}, we study a particular convolution kernel, the big projective, and compute its Hodge filtration explicitly (Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"}). We then turn to the proofs of Theorems [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} and [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"}. It turns out that the proofs for regular dominant $\lambda$ are somewhat simpler, so we treat this case first in §[8.3](#subsec:regular vanishing){reference-type="ref" reference="subsec:regular vanishing"}, before extending the arguments to the case of singular $\lambda$ in §[8.4](#subsec:singular vanishing){reference-type="ref" reference="subsec:singular vanishing"}. ## Convolutions {#subsec:convolutions} Given $\lambda, \mu \in {\mathfrak{h}}^*$, we recall here the well-known category of functors $${\mathrm{D}}^b{\mathrm{Mod}}({\mathcal{D}}_\lambda) \to {\mathrm{D}}^b{\mathrm{Mod}}({\mathcal{D}}_\mu)$$ constructed via convolution. This has a number of incarnations and minor variants.
We will work with the category $${\mathrm{Mod}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}, G)_{rh} = {\mathrm{Mod}}_{\widehat{\mu - \rho}, -\lambda - \rho}^G({\mathcal{D}}_{\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}})_{rh}$$ of $G$-equivariant monodromic ${\mathcal{D}}$-modules on $\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}$ such that the monodromy action of ${\mathfrak{h}}$ coming from the first (resp., second) $\tilde{{\mathcal{B}}}$ has generalized eigenvalue $\mu - \rho$ (resp., simple eigenvalue $-\lambda - \rho$). In other words, the objects are monodromic in the first factor and twisted in the second. Other well-studied variations include sheaves that are monodromic (resp., twisted) on both factors, as well as pro-completions of such categories; the formulation above will suffice for our purposes, however. We remark that the categories ${\mathrm{Mod}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}, G)_{rh}$ are very familiar in representation theory: if we choose a Borel subgroup $B \subset G$ with unipotent radical $N$, then restriction to the corresponding fiber of the first projection $\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}$ defines a fully faithful embedding $${\mathrm{Mod}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}, G)_{rh} \hookrightarrow {\mathrm{Mod}}({\mathcal{D}}_{-\lambda}, N)_{rh};$$ the right hand side is the (geometric) category ${\mathcal{O}}$ at this infinitesimal character, and our subcategory is just a direct sum of blocks. The convolution action is given in this context as follows. We write $${\mathrm{D}}^b_G{\mathrm{Mod}}({\mathcal{D}}_{(\widehat{\mu}, -\lambda)})_{rh}$$ for the corresponding $G$-equivariant derived category. 
We have a functor $$* \colon {\mathrm{D}}^b_G{\mathrm{Mod}}({\mathcal{D}}_{(\widehat{\mu}, -\lambda)})_{rh} \times {\mathrm{D}}^b{\mathrm{Mod}}({\mathcal{D}}_\lambda) \to {\mathrm{D}}^b{\mathrm{Mod}}({\mathcal{D}}_{\widehat{\mu}})$$ given in terms of $\tilde{{\mathcal{D}}}$-modules by $$\label{eq:d-module convolution} {\mathcal{K}} * {\mathcal{M}} = {\mathrm{R}}{\mathrm{pr}}_{1\mathpalette\bigcdot@{.5}}({\mathcal{K}} \overset{{\mathrm{L}}}\otimes_{{\mathrm{pr}}_2^{-1}{\mathcal{D}}_\lambda}{\mathrm{pr}}_2^{-1}{\mathcal{M}}),$$ where ${\mathcal{D}}_\lambda$ acts on ${\mathcal{K}}$ on the right via the usual isomorphism ${\mathcal{D}}_\lambda^{\mathit{op}} = {\mathcal{D}}_{-\lambda}$. Now suppose that $\lambda, \mu \in {\mathfrak{h}}^*_{\mathbb{R}}$. Then we have mixed Hodge module versions $${\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}, G) \quad \text{and} \quad {\mathrm{D}}^b_G{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda})$$ of the same categories. The convolution lifts to a functor $$* \colon {\mathrm{D}}^b_G{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}) \times {\mathrm{D}}^b {\mathrm{MHM}}({\mathcal{D}}_\lambda) \to {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}})$$ defined as follows. First, for $${\mathcal{K}} \in {\mathrm{D}}^b_G{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}) \quad \text{and} \quad {\mathcal{M}} \in {\mathrm{D}}^b {\mathrm{MHM}}({\mathcal{D}}_\lambda),$$ we take the external tensor product $${\mathcal{K}} \boxtimes {\mathcal{M}} \in {\mathrm{D}}^b_G{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda, \lambda}),$$ a monodromic mixed Hodge module on $\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}$. 
Next, we write $$\Delta_{2 3} \colon \tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}} \to \tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}$$ for the map sending $(x, y)$ to $(x, y, y)$, and take the intermediate pullback $$\Delta_{2 3}^\circ ({\mathcal{K}} \boxtimes {\mathcal{M}}) := \Delta_{23}^*({\mathcal{K}} \boxtimes {\mathcal{M}})[\dim \tilde{{\mathcal{B}}}] \in {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\rho}).$$ The $-\rho$ comes from the $\rho$-shift built into our notation. Note that $G$-equivariance of ${\mathcal{K}}$ implies that ${\mathcal{K}} \boxtimes {\mathcal{M}}$ is always non-characteristic for $\Delta_{2 3}$, so $$\Delta_{2 3}^\circ({\mathcal{K}} \boxtimes {\mathcal{M}}) = \Delta_{2 3}^!({\mathcal{K}} \boxtimes {\mathcal{M}})(\dim \tilde{{\mathcal{B}}})[-\dim \tilde{{\mathcal{B}}}] = {\mathrm{L}}\Delta_{2 3}^{\mathpalette\bigcdot@{.5}}({\mathcal{K}} \boxtimes {\mathcal{M}})$$ is just the naive tensor product of ${\mathcal{K}}$ and ${\mathcal{M}}$. 
Tensoring with the line bundle ${\mathcal{O}}(0, 2\rho) = {\mathcal{O}}\boxtimes \omega_{{\mathcal{B}}}^{-1}$ on ${\mathcal{B}} \times {\mathcal{B}}$ translates this to an object $${\mathcal{O}}(0, 2\rho) \otimes \Delta_{23}^\circ ({\mathcal{K}} \boxtimes {\mathcal{M}}) \in {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, \rho}) := {\mathrm{D}}^b{\mathrm{MHM}}_{\widehat{\mu - \rho}, 0}(\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}).$$ Now we write $$\pi_2 \colon \tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}} \to \tilde{{\mathcal{B}}} \times {\mathcal{B}}$$ for the natural projection, and $$\pi_2^\circ \colon {\mathrm{MHM}}_{\widehat{\mu - \rho}}(\tilde{{\mathcal{B}}} \times {\mathcal{B}}) \overset{\sim}\to {\mathrm{MHM}}_{\widehat{\mu - \rho}, 0}(\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}})$$ for the equivalence given by pullback; we therefore obtain an object $$(\pi_2^\circ)^{-1}({\mathcal{O}}(0, 2\rho)\otimes \Delta_{23}^\circ({\mathcal{K}} \boxtimes {\mathcal{M}})) \in {\mathrm{D}}^b{\mathrm{MHM}}_{\widehat{\mu - \rho}}(\tilde{{\mathcal{B}}} \times {\mathcal{B}}).$$ Finally, we push this forward along the first projection $${\mathrm{pr}}_1 \colon \tilde{{\mathcal{B}}} \times {\mathcal{B}} \to \tilde{{\mathcal{B}}},$$ and define $$\label{eq:hodge convolution} \begin{aligned} {\mathcal{K}} * {\mathcal{M}} := {\mathrm{pr}}_{1*}(\pi_2^\circ)^{-1}({\mathcal{O}}(0, 2\rho) &\otimes \Delta_{23}^\circ({\mathcal{K}} \boxtimes {\mathcal{M}})) \\ &\in {\mathrm{D}}^b{\mathrm{MHM}}_{\widehat{\mu - \rho}}(\tilde{{\mathcal{B}}}) = {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}}). \end{aligned}$$ The following proposition is an elementary (if tedious) check. **Proposition 81**. 
*The convolutions [\[eq:d-module convolution\]](#eq:d-module convolution){reference-type="eqref" reference="eq:d-module convolution"} and [\[eq:hodge convolution\]](#eq:hodge convolution){reference-type="eqref" reference="eq:hodge convolution"} agree under the forgetful functor from mixed Hodge modules to ${\mathcal{D}}$-modules.* **Remark 82**. Our convolutions, which are natural from the point of view of localization, are $\rho$-shifted relative to the natural topological convolutions. For example, the category $${\mathrm{D}}^b_G{\mathrm{Mod}}({\mathcal{D}}_{{\mathcal{B}} \times {\mathcal{B}}})_{rh}$$ is the usual (equivariant) Hecke category acting on $${\mathrm{D}}^b{\mathrm{Mod}}({\mathcal{D}}_{{\mathcal{B}}})_{rh}.$$ In our notation, this is ${\mathrm{D}}^b_G{\mathrm{Mod}}({\mathcal{D}}_{\rho, \rho})_{rh}$ acting on ${\mathrm{D}}^b{\mathrm{Mod}}({\mathcal{D}}_\rho)_{rh}$, whereas we have defined an action of ${\mathrm{D}}_G^b{\mathrm{Mod}}({\mathcal{D}}_{\rho, -\rho})_{rh}$. We conclude this subsection by observing that, on taking associated gradeds with respect to the Hodge filtration, we recover a version of the well-known convolution for coherent sheaves on the Steinberg variety. Let us write $\tilde{{\mathcal{N}}}^* = T^*{\mathcal{B}}$. Then we have the Springer map $$\mu \colon \tilde{{\mathcal{N}}}^* \to {\mathcal{N}}^* \subset {\mathfrak{g}}^*.$$ The notation comes from the fact that $\mu$ is a resolution of singularities of the nilpotent cone ${\mathcal{N}}^* \subset {\mathfrak{g}}^*$.
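It is worth keeping the smallest example of these maps in mind; everything in it is standard. **Example**. For ${\mathfrak{g}} = {\mathfrak{sl}}_2$ we have ${\mathcal{B}} = {\mathbb{P}}^1$ and, identifying ${\mathfrak{g}} \cong {\mathfrak{g}}^*$ via the Killing form, $${\mathcal{N}}^* \cong \{A \in {\mathfrak{sl}}_2 \mid A^2 = 0\} = \left\{\begin{pmatrix} x & y \\ z & -x \end{pmatrix} \;\middle|\; x^2 + yz = 0\right\},$$ a quadric cone with an $A_1$ singularity at the origin. The Springer map $T^*{\mathbb{P}}^1 \to {\mathcal{N}}^*$ contracts the zero section and is the minimal resolution, while the Grothendieck-Springer map $\tilde{{\mathfrak{g}}}^* \to {\mathfrak{g}}^*$ is finite of degree $|W| = 2$ over the regular semisimple locus.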
Similarly, setting $$\tilde{{\mathfrak{g}}}^* = T^*\tilde{{\mathcal{B}}}/H,$$ we have the Grothendieck-Springer map $$\tilde \mu \colon \tilde{{\mathfrak{g}}}^* \to {\mathfrak{g}}^*.$$ We form the *Steinberg scheme* $${\mathrm{St}} := \tilde{{\mathfrak{g}}}^* \times_{{\mathfrak{g}}^*} \tilde{{\mathcal{N}}}^*.$$ We note that passage to the associated graded of the Hodge filtration defines functors $$\mathop{\mathrm{Gr}}^F \colon {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_\lambda) \to {\mathrm{D}}^b{\mathrm{Coh}}^{{\mathbb{G}}_m}(\tilde{{\mathcal{N}}}^*), \quad \mathop{\mathrm{Gr}}^F \colon {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}}) \to {\mathrm{D}}^b_{\tilde{{\mathcal{N}}}^*}{\mathrm{Coh}}^{{\mathbb{G}}_m}(\tilde{{\mathfrak{g}}}^*)$$ and $$\mathop{\mathrm{Gr}}^F \colon {\mathrm{D}}^b_G{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}) \to {\mathrm{D}}^b_{{\mathrm{St}}}{\mathrm{Coh}}^{G \times {\mathbb{G}}_m}(\tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^*),$$ where ${\mathrm{D}}^b_Z$ denotes the derived category of complexes whose cohomologies are set-theoretically supported on $Z$. 
Now, we have an obvious convolution functor $$* \colon {\mathrm{D}}^b_{{\mathrm{St}}}{\mathrm{Coh}}^{G \times {\mathbb{G}}_m}(\tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^*) \times {\mathrm{D}}^b{\mathrm{Coh}}^{{\mathbb{G}}_m}(\tilde{{\mathcal{N}}}^*) \to {\mathrm{D}}^b_{\tilde{{\mathcal{N}}}^*}{\mathrm{Coh}}^{{\mathbb{G}}_m}(\tilde{{\mathfrak{g}}}^*),$$ given by $${\mathcal{F}} * {\mathcal{G}} = {\mathrm{R}}{\mathrm{pr}}_{1\mathpalette\bigcdot@{.5}} (\Delta_{23}^-)^{\mathpalette\bigcdot@{.5}}({\mathcal{F}} \boxtimes {\mathcal{G}}),$$ where $${\mathrm{pr}}_1 \colon \tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^* \to \tilde{{\mathfrak{g}}}^*$$ denotes the projection to the first factor and $$\Delta_{23}^- \colon \tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^* \to \tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^* \times \tilde{{\mathcal{N}}}^*$$ is the map sending $(x, y)$ to $(x, y, -y)$; here negation is taken fiberwise over ${\mathcal{B}}$. **Proposition 83**. *For $${\mathcal{K}} \in {\mathrm{D}}^b_G{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda}) \quad \text{and} \quad {\mathcal{M}} \in {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_\lambda)$$ we have $$\mathop{\mathrm{Gr}}^F({\mathcal{K}} * {\mathcal{M}}) \cong \mathop{\mathrm{Gr}}^F{\mathcal{K}} * \mathop{\mathrm{Gr}}^F {\mathcal{M}}.$$* *Proof.* Note first that since ${\mathcal{K}} \boxtimes {\mathcal{M}}$ is non-characteristic for $\Delta_{23}$, $\Delta_{23}^\circ ({\mathcal{K}} \boxtimes {\mathcal{M}})$ coincides with the naive pullback of filtered ${\mathcal{D}}$-modules. 
Hence, $$\mathop{\mathrm{Gr}}^F\Delta_{23}^\circ({\mathcal{K}} \boxtimes {\mathcal{M}}) = {\mathrm{R}}q_{\Delta_{23} \mathpalette\bigcdot@{.5}} {\mathrm{L}}p_{\Delta_{23}}^{\mathpalette\bigcdot@{.5}}(\mathop{\mathrm{Gr}}^F{\mathcal{K}} \boxtimes \mathop{\mathrm{Gr}}^F{\mathcal{M}})$$ in the notation of Proposition [Proposition 79](#prop:associated graded pushforward){reference-type="ref" reference="prop:associated graded pushforward"}. Applying Proposition [Proposition 79](#prop:associated graded pushforward){reference-type="ref" reference="prop:associated graded pushforward"}, we deduce that $$\mathop{\mathrm{Gr}}^F({\mathcal{K}} * {\mathcal{M}}) = \mathop{\mathrm{Gr}}({\mathrm{pr}}_{1})_*({\mathcal{O}}(0, 2\rho) \otimes {\mathrm{R}}q_{\Delta_{23} \mathpalette\bigcdot@{.5}} {\mathrm{L}}p_{\Delta_{23}}^{\mathpalette\bigcdot@{.5}}(\mathop{\mathrm{Gr}}^F{\mathcal{K}} \boxtimes \mathop{\mathrm{Gr}}^F{\mathcal{M}})).$$ We deduce the assertion by applying Lemma [Lemma 84](#lem:springer convolution){reference-type="ref" reference="lem:springer convolution"} below. ◻ **Lemma 84**. *If $${\mathcal{F}} \in {\mathrm{D}}^b_{{\mathrm{St}}}{\mathrm{Coh}}^{G \times {\mathbb{G}}_m}(\tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^*) \quad \text{and} \quad {\mathcal{G}} \in {\mathrm{D}}^b{\mathrm{Coh}}^{{\mathbb{G}}_m}(\tilde{{\mathcal{N}}}^*)$$ then we have $${\mathcal{F}} * {\mathcal{G}} = \mathop{\mathrm{Gr}}({\mathrm{pr}}_1)_*({\mathcal{O}}(0, 2\rho) \otimes {\mathrm{R}}q_{\Delta_{23}\mathpalette\bigcdot@{.5}} {\mathrm{L}}p_{\Delta_{23}}^{\mathpalette\bigcdot@{.5}}({\mathcal{F}} \boxtimes {\mathcal{G}})).$$* *Proof.* The statement follows by an easy diagram chase, using the fact that $\omega_{{\mathcal{B}}} = {\mathcal{O}}(-2\rho)$. 
◻ ## The big projective {#subsec:big projective} Let us now specialize our attention to the categories $${\mathrm{Mod}}({\mathcal{D}}_{\widehat{0}, 0}, G)_{rh} \quad \text{and} \quad {\mathrm{MHM}}({\mathcal{D}}_{\widehat{0}, 0}, G).$$ The group $G$ acts on ${\mathcal{B}} \times {\mathcal{B}}$ with finitely many orbits, indexed by the Weyl group $W$; for $w \in W$, we write $$j_w \colon X_w \hookrightarrow {\mathcal{B}} \times {\mathcal{B}}$$ for the inclusion of the corresponding orbit. We fix the indexing so that $X_1 = {\mathcal{B}} \subset {\mathcal{B}} \times {\mathcal{B}}$ is the closed orbit and $X_{w_0}$ is the open orbit, where $w_0 \in W$ is the longest element. For each orbit $X_w$, there is a unique rank $1$ $G$-equivariant twisted local system $\gamma_w$ on $X_w$ defining standard, costandard and irreducible objects $$j_{w!}\gamma_w, \quad j_{w*}\gamma_w \quad \text{and} \quad j_{w!*}\gamma_w \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{0}, 0}, G).$$ Here we normalize the Hodge structures as usual so that $j_{w!*}\gamma_w$ is pure of weight $\dim X_w = \dim {\mathcal{B}} + \ell(w)$ and has Hodge filtration starting in degree $\mathop{\mathrm{codim}}X_w = \dim {\mathcal{B}} - \ell(w)$. These objects generate the entire category under extensions and Hodge twists. We are interested in the following special object in ${\mathrm{Mod}}({\mathcal{D}}_{\widehat{0}, 0}, G)_{rh}$. **Definition 85**. The *big projective* is the $G$-equivariant $\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0$-module $$\Xi = (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0) \otimes_{U({\mathfrak{g}})} {\mathbb{C}},$$ where $U({\mathfrak{g}})$ acts trivially on ${\mathbb{C}}$ and $G$ acts in the obvious way. One can think of $\Xi$ as a "universal object" that packages all the intertwining functors together. **Proposition 86**. 
*The object $\Xi$ is the projective cover of the irreducible object $j_{1!*}\gamma_1$ associated with the closed $G$-orbit $X_1 \subset {\mathcal{B}} \times {\mathcal{B}}$ in the category ${\mathrm{Mod}}({\mathcal{D}}_{\widehat{0}, 0}, G)_{rh}$.* *Proof.* One easily checks that $\Xi \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{0}, 0}, G)_{rh}$. Moreover, we have, tautologically, $${\mathrm{Hom}}_{{\mathrm{Mod}}({\mathcal{D}}_{\widehat{0}, 0}, G)_{rh}}(\Xi, {\mathcal{M}}) = \Gamma({\mathcal{M}})^G.$$ Since the right hand side is exact in ${\mathcal{M}}$, we deduce that $\Xi$ is projective. Moreover, we have $$\Gamma(j_{w!*}\gamma_w) = 0 \quad \text{for $w \neq 1$}$$ and an easy (and standard) calculation shows that $$\Gamma(j_{1!*}\gamma_1)^G \cong {\mathbb{C}}.$$ So $\Xi$ is indeed the projective cover of $j_{1!*}\gamma_1$ as claimed. ◻ The main result of this subsection is: **Theorem 87**. *There exists a mixed Hodge module structure on $\Xi$ such that the Hodge filtration on $\Xi = (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0) \otimes_{U({\mathfrak{g}})} {\mathbb{C}}$ is generated by the trivial $G$-type ${\mathbb{C}}$, i.e., it is the obvious one given by the order filtrations on ${\mathcal{D}}_0$ and $\tilde{{\mathcal{D}}}$. Hence $$\mathop{\mathrm{Gr}}^F\Xi \cong {\mathcal{O}}_{{\mathrm{St}}},$$ where $${\mathrm{St}} = \tilde{{\mathfrak{g}}}^* \times_{{\mathfrak{g}}^*} \tilde{{\mathcal{N}}}^*$$ is the Steinberg scheme.* **Remark 88**. In fact, the proof of Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"} shows that the result holds up to isomorphism and Hodge twists for any mixed Hodge module structure on $\Xi$. The proof of Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"} follows the same lines as [@DV2 Theorem 4.1], which gives a similar result for the Hodge filtration on the spherical tempered Harish-Chandra sheaf for a split real group.
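As a quick, purely illustrative sanity check on the weight and Hodge normalizations fixed above (my own example, not from the text): for $G = \mathrm{SL}_3$ one has $W = S_3$, $\dim \mathcal{B} = 3$, $\dim H = 2$ and $\dim \mathfrak{g} = 8$, and the following sketch tabulates, orbit by orbit, the weight $\dim\mathcal{B} + \ell(w)$ and the starting degree $\dim\mathcal{B} - \ell(w)$ of the Hodge filtration of $j_{w!*}\gamma_w$.

```python
from itertools import permutations

# Purely illustrative numerology for G = SL_3: W = S_3, dim B = 3 (number of
# positive roots), dim H = 2, dim g = 8.  We check 2*dim B + dim H = dim g and
# tabulate, for each w, weight(j_{w!*}) = dim B + l(w) and the starting degree
# of the Hodge filtration, codim X_w = dim B - l(w).
DIM_B, DIM_H, DIM_G = 3, 2, 8
assert 2 * DIM_B + DIM_H == DIM_G

def length(w):
    # Coxeter length of a permutation = number of inversions.
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

table = {w: {"weight": DIM_B + length(w), "hodge_start": DIM_B - length(w)}
         for w in permutations((1, 2, 3))}

# Closed orbit (w = 1): weight dim B, Hodge filtration starting in degree dim B.
assert table[(1, 2, 3)] == {"weight": 3, "hodge_start": 3}
# Open orbit (w = w_0, length 3): weight 2*dim B = dim(B x B), start in degree 0.
assert table[(3, 2, 1)] == {"weight": 6, "hodge_start": 0}
```

In particular the open orbit carries no Tate twist in Lemma 90 below ($\ell(w_0) = \dim\mathcal{B}$), consistent with $F_p\Xi = 0$ for $p < 0$.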
We first need a few remarks about duality. Recall the ${\mathbb{C}}$-linear duality functor on mixed Hodge modules. In the monodromic setting, this defines a functor $${\mathbb{D}} \colon {\mathrm{MHM}}({\mathcal{D}}_{\widehat{0}, 0}, G)^{\mathit{op}} \overset{\sim}\to {\mathrm{MHM}}({\mathcal{D}}_{\widehat{0}, 0}, G),$$ compatible with the duality functor for filtered $\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0$-modules given by $${\mathbb{D}}({\mathcal{M}}, F_\bullet) = {\mathrm{R}}{{\mathcal{H}}{\mathit{om}}}_{(\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0, F_\bullet)}(({\mathcal{M}}, F_\bullet), (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0, F_{\bullet - 4\dim {\mathcal{B}} - \dim H}))[2 \dim {\mathcal{B}} + \dim H].$$ Note that the cohomological and filtration shifts are arranged so that $${\mathbb{D}} {\mathcal{O}}(-\rho, -\rho) = {\mathcal{O}}(-\rho, -\rho)\{2 \dim {\mathcal{B}}\}$$ with respect to the usual (trivial) filtration on ${\mathcal{O}}(-\rho, -\rho)$. Now let $F_\bullet'\Xi$ denote the order filtration on $\Xi$ described in Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"}. **Lemma 89**. *The filtered $\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0$-module $(\Xi, F_\bullet')$ is Cohen-Macaulay and satisfies $${\mathbb{D}}(\Xi, F_\bullet') \cong (\Xi, F_\bullet')\{2\dim {\mathcal{B}}\} := (\Xi, F_{\bullet - 2\dim {\mathcal{B}}}).$$* *Proof.* Consider the filtered Koszul complex $$\label{eq:xi koszul 1} 0 \to (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0) \otimes \wedge^n {\mathfrak{g}}\{ n\} \to \cdots \to (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0) \otimes {\mathfrak{g}}\{1\} \to \tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0\to (\Xi, F_\bullet') \to 0,$$ where $n = \dim {\mathfrak{g}}$. 
Passing to associated gradeds, we get the Koszul resolution $$0 \to {\mathcal{O}}_{\tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^*} \otimes \wedge^n {\mathfrak{g}}\{n\} \to \cdots \to {\mathcal{O}}_{\tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^*} \otimes {\mathfrak{g}}\{ 1\} \to {\mathcal{O}}_{\tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^*} \to {\mathcal{O}}_{{\mathrm{St}}} \to 0,$$ which is exact since the map $(\tilde{\mu}, -\mu) \colon \tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^* \to {\mathfrak{g}}^*$ is flat. So [\[eq:xi koszul 1\]](#eq:xi koszul 1){reference-type="eqref" reference="eq:xi koszul 1"} is filtered exact, i.e., the Koszul complex is a resolution of $(\Xi, F_\bullet')$. Taking the filtered dual, we find that ${\mathbb{D}}(\Xi, F_\bullet')$ is quasi-isomorphic to the complex $$\begin{aligned} (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0) \{n + 2 \dim {\mathcal{B}}\} \to & (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0) \otimes {\mathfrak{g}}^*\{n - 1 + 2 \dim {\mathcal{B}}\} \to \cdots \\ &\to (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0) \otimes \wedge^n{\mathfrak{g}}^* \{2 \dim {\mathcal{B}}\}, \end{aligned}$$ where we note that $2 \dim {\mathcal{B}} + \dim H = n$. But this is precisely the Koszul resolution for the module $$(\Xi, F_\bullet') \otimes \wedge^n{\mathfrak{g}}^*\{2 \dim {\mathcal{B}}\} \cong (\Xi, F_\bullet')\{2 \dim {\mathcal{B}}\}$$ so this proves the lemma. ◻ We next endow $\Xi$ with a Hodge module structure as follows. Since $\Xi$ is a projective cover of the irreducible object $j_{1*}\gamma_1$, by a completely formal argument [@BGS Lemma 4.5.3], there exists a lift of $\Xi$ to a mixed Hodge module such that: $$\label{eq:xi hodge condition} \text{there exists a non-zero morphism $\Xi \to j_{1*}\gamma_1(-\dim {\mathcal{B}})$.}$$ The Tate twist is arranged so that the Hodge filtrations start in degree $0$. We fix such a mixed Hodge module structure from here on. **Lemma 90**. 
*For any $w \in W$, the stalk $j_w^*\Xi$ is given by $$j_w^*\Xi = \gamma_w(\ell(w) - \dim{\mathcal{B}}).$$* *Proof.* That all the stalks are (perverse) local systems (or equivalently, that $\Xi$ has a standard filtration) is a well-known fact about the underlying ${\mathcal{D}}$-module, see, for example [@MV]. To fix the ranks and the Hodge structures, we note that $j_{1*}\gamma_1(-\ell(w))$ appears as a multiplicity one composition factor in $j_{w*}\gamma_w$. So, as a Hodge structure, $${\mathrm{Hom}}(j_w^*\Xi, \gamma_w) = {\mathrm{Hom}}(\Xi, j_{w*}\gamma_w) = {\mathbb{C}}(-\ell(w) + \dim{\mathcal{B}}),$$ which shows that the Hodge structure on the stalk is as claimed. ◻ **Lemma 91**. *The mixed Hodge module dual ${\mathbb{D}}\Xi$ admits a non-zero map $${\mathbb{D}} \Xi \to j_{1*}\gamma_1(\dim {\mathcal{B}}).$$ Hence, ${\mathbb{D}}\Xi (-2 \dim {\mathcal{B}})$ is another Hodge lift of $\Xi$ satisfying the same condition [\[eq:xi hodge condition\]](#eq:xi hodge condition){reference-type="eqref" reference="eq:xi hodge condition"}.* *Proof.* By Lemma [Lemma 90](#lem:xi stalks){reference-type="ref" reference="lem:xi stalks"}, we have an injective map $$j_{w_0!}\gamma_{w_0} = j_{w_0!}j_{w_0}^* \Xi \to \Xi,$$ where $w_0 \in W$ is the longest element (so $j_{w_0}$ is the inclusion of the open orbit). But now we have an inclusion $$j_{1*}\gamma_1 \hookrightarrow j_{w_0!}\gamma_{w_0}$$ as the lowest piece of the weight filtration. So $$j_{1*}\gamma_1 \hookrightarrow \Xi.$$ Taking duals, we deduce the lemma. ◻ The final ingredient in the proof of Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"} is to give some control over the lowest piece of the Hodge filtration in $\Xi$. To this end, we make the following general observation. **Lemma 92**. *Let $X$ be a smooth projective variety, $H$ a torus and $\tilde{X} \to X$ an $H$-torsor. 
Suppose that $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is ample, let ${\mathcal{M}} \in {\mathrm{MHM}}_\lambda(\tilde{X})$ be a monodromic mixed Hodge module and let $c \in {\mathbb{Z}}$ be the lowest index such that $F_c{\mathcal{M}} \neq 0$. Then $${\mathrm{H}}^i(X, F_c{\mathcal{M}} \otimes \omega_X) = 0 \quad \text{for $i > 0$}.$$* *Proof.* Applying Theorem [Theorem 80](#thm:twisted kodaira){reference-type="ref" reference="thm:twisted kodaira"} to the morphism from $X$ to a point, we have $${\mathbb{H}}^i(X, [F_{p - n}{\mathcal{M}} \to F_{p - n + 1}{\mathcal{M}} \otimes \Omega^1_X \to \cdots \to F_p{\mathcal{M}} \otimes \Omega^n_X]) = 0 \quad \text{for $i > 0$},$$ where $n = \dim X$ and the complex is placed in degrees $-n, \ldots, 0$. Setting $p = c$, this gives $${\mathrm{H}}^i(X, F_c{\mathcal{M}} \otimes \Omega^n_X) = {\mathrm{H}}^i(X, F_c{\mathcal{M}} \otimes \omega_X) = 0 \quad \text{for $i > 0$}$$ as claimed. ◻ We observe that for $${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{0}, 0}, G) = {\mathrm{MHM}}_{\widehat{-\rho}, -\rho}^G(\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}),$$ we have $${\mathcal{M}} \otimes \omega_{{\mathcal{B}} \times {\mathcal{B}}}^{-1} = {\mathcal{M}} \otimes {\mathcal{O}}(2\rho, 2\rho) \in {\mathrm{MHM}}_{\widehat{\rho}, \rho}(\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}).$$ So Lemma [Lemma 92](#lem:lowest hodge vanishing){reference-type="ref" reference="lem:lowest hodge vanishing"} gives $$\label{eq:lowest hodge vanishing} {\mathrm{H}}^i({\mathcal{B}} \times {\mathcal{B}}, F_c{\mathcal{M}}) = 0 \quad \text{for $i > 0$}$$ where $F_c{\mathcal{M}}$ is the lowest piece of the Hodge filtration. **Lemma 93**. *The unique $G$-invariant vector in $\Gamma(F_0j_{1*}\gamma_1(-\dim {\mathcal{B}}))$ lifts to a $G$-invariant vector in $\Gamma(F_0\Xi)$.* *Proof.* We need to show that the map $$\Gamma(F_0\Xi) \to \Gamma(F_0 j_{1*}\gamma_1(-\dim {\mathcal{B}}))$$ is surjective. 
By [\[eq:lowest hodge vanishing\]](#eq:lowest hodge vanishing){reference-type="eqref" reference="eq:lowest hodge vanishing"}, it is enough to show that $$F_p \Xi = 0 \quad \text{for $p < 0$}.$$ Lemma [Lemma 90](#lem:xi stalks){reference-type="ref" reference="lem:xi stalks"} implies that $\Xi$ has a filtration by the objects $$j_{w!}\gamma_w(\ell(w) - \dim {\mathcal{B}}).$$ But now $$F_p j_{w!}\gamma_w = 0 \quad \text{for $p < \mathop{\mathrm{codim}}X_w$},$$ and hence $$F_p j_{w!}\gamma_w(\ell(w) - \dim {\mathcal{B}}) = 0 \quad \text{for $p < 0$}$$ so we are done. ◻ *Proof of Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"}.* Given a Hodge lift of $\Xi$ as above, by Lemma [Lemma 93](#lem:xi lowest piece){reference-type="ref" reference="lem:xi lowest piece"}, there exists a map $$\label{eq:xi hodge filtration 0} \Xi = (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0)\otimes_{U({\mathfrak{g}})}{\mathbb{C}} \to \Xi$$ respecting the map to $j_{1*}\gamma_1$ and sending the generator in $$F_0'\Xi = {\mathbb{C}} \subset \Xi = (\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_0)\otimes_{U({\mathfrak{g}})} {\mathbb{C}}$$ to a $G$-invariant vector in $\Gamma(F_0\Xi)$. By uniqueness of projective covers, any such map must be an isomorphism of ${\mathcal{D}}$-modules; so we may as well choose the original mixed Hodge structure on $\Xi$ so that [\[eq:xi hodge filtration 0\]](#eq:xi hodge filtration 0){reference-type="eqref" reference="eq:xi hodge filtration 0"} is the identity.
In other words, we can choose the Hodge module structure such that $F_p'\Xi \subset F_p\Xi$ for all $p$, i.e., so that the identity is a filtered map $$\label{eq:xi hodge filtration 1} (\Xi, F_\bullet') \to (\Xi, F_\bullet).$$ Passing to duals and applying Lemma [Lemma 89](#lem:xi koszul){reference-type="ref" reference="lem:xi koszul"}, we get a map $$({\mathbb{D}}\Xi, F_\bullet) = {\mathbb{D}}(\Xi, F_\bullet) \to {\mathbb{D}}(\Xi, F_\bullet') \cong (\Xi, F_\bullet')\{2 \dim {\mathcal{B}}\},$$ an isomorphism on the underlying ${\mathcal{D}}$-modules. But now ${\mathbb{D}}\Xi(-2\dim {\mathcal{B}})$ is another Hodge lift of $\Xi$ satisfying condition [\[eq:xi hodge condition\]](#eq:xi hodge condition){reference-type="eqref" reference="eq:xi hodge condition"} by Lemma [Lemma 91](#lem:xi hodge dual){reference-type="ref" reference="lem:xi hodge dual"}, so we conclude that we also have a map $$(\Xi, F_\bullet')\{2 \dim {\mathcal{B}}\} \to ({\mathbb{D}}\Xi, F_\bullet),$$ which is also an isomorphism on the underlying ${\mathcal{D}}$-modules. We deduce that the composition $$(\Xi, F_\bullet')\{2\dim {\mathcal{B}}\} \to ({\mathbb{D}}\Xi, F_\bullet) \to (\Xi, F_\bullet')\{2\dim {\mathcal{B}}\}$$ must be the identity (up to scale), and hence our original map [\[eq:xi hodge filtration 1\]](#eq:xi hodge filtration 1){reference-type="eqref" reference="eq:xi hodge filtration 1"} must be a filtered isomorphism. This concludes the proof.
◻ ## Proof of Theorems [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} and [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"} for regular $\lambda$ {#subsec:regular vanishing} We now give the proofs of Theorems [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} and [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"} in the case where $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is regular dominant. The key input is the following consequence of twisted Kodaira vanishing. **Proposition 94**. *Suppose $\lambda, \lambda', \mu \in {\mathfrak{h}}^*_{\mathbb{R}}$ and let ${\mathcal{K}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda'}, G)$ and ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\lambda})$. If $\lambda - \lambda' \in {\mathfrak{h}}^*_{\mathbb{R}}$ is regular dominant, then $${\mathcal{H}}^i(\mathop{\mathrm{Gr}}^F{\mathcal{K}} * \mathop{\mathrm{Gr}}^F{\mathcal{M}}) = 0 \quad \text{for $i > 0$.}$$* *Proof.* As in the proof of Proposition [Proposition 83](#prop:convolution gr){reference-type="ref" reference="prop:convolution gr"}, we have that $${\mathcal{O}}(0, 2\rho) \otimes {\mathrm{R}}q_{\Delta_{23}\mathpalette\bigcdot@{.5}}{\mathrm{L}}p^{\mathpalette\bigcdot@{.5}}_{\Delta_{23}} (\mathop{\mathrm{Gr}}^F{\mathcal{K}} \boxtimes \mathop{\mathrm{Gr}}^F{\mathcal{M}}) = {\mathcal{O}}(0, 2\rho) \otimes \mathop{\mathrm{Gr}}^F\Delta_{23}^\circ({\mathcal{K}} \boxtimes {\mathcal{M}})$$ is the associated graded of an object $${\mathcal{O}}(0, 2\rho) \otimes \Delta_{23}^\circ ({\mathcal{K}} \boxtimes {\mathcal{M}}) \in {\mathrm{MHM}}_{\widehat{\mu - \rho}, \lambda - \lambda'}(\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}).$$ We deduce the claim from Lemma [Lemma 84](#lem:springer convolution){reference-type="ref" reference="lem:springer convolution"} and Theorem [Theorem 80](#thm:twisted 
kodaira){reference-type="ref" reference="thm:twisted kodaira"}. ◻ *Proof of Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} in the regular case.* Fix $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ regular dominant. We need to show that if ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$ then $$\label{eq:filtered exactness 1} {\mathrm{H}}^i({\mathcal{B}}, \mathop{\mathrm{Gr}}^F{\mathcal{M}}) = 0 \quad \text{for $i > 0$}.$$ Since the claim is stable under taking extensions, we may as well assume that ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_\lambda)$ is twisted rather than monodromic. Regarding $\mathop{\mathrm{Gr}}^F{\mathcal{M}}$ as a coherent sheaf on $\tilde{{\mathcal{N}}}^* := T^*{\mathcal{B}}$, [\[eq:filtered exactness 1\]](#eq:filtered exactness 1){reference-type="eqref" reference="eq:filtered exactness 1"} is equivalent to the claim that $$\label{eq:filtered exactness 2} {\mathcal{H}}^i{\mathrm{L}}\tilde{\mu}^{\mathpalette\bigcdot@{.5}}{\mathrm{R}}\mu_{\mathpalette\bigcdot@{.5}} \mathop{\mathrm{Gr}}^F{\mathcal{M}} = 0 \quad \text{for $i > 0$},$$ where $$\mu \colon \tilde{{\mathcal{N}}}^* \to {\mathfrak{g}}^* \quad \text{and} \quad \tilde{\mu} \colon \tilde{{\mathfrak{g}}}^* \to {\mathfrak{g}}^*$$ are the Springer map and Grothendieck-Springer map respectively. 
Since the morphism $$(\tilde{\mu}, -\mu) \colon \tilde{{\mathfrak{g}}}^* \times \tilde{{\mathcal{N}}}^* \to {\mathfrak{g}}^*$$ is flat, we have by flat base change $${\mathrm{L}}\tilde{\mu}^{\mathpalette\bigcdot@{.5}}{\mathrm{R}}\mu_{\mathpalette\bigcdot@{.5}} \mathop{\mathrm{Gr}}^F{\mathcal{M}} = {\mathcal{O}}_{{\mathrm{St}}} * \mathop{\mathrm{Gr}}^F{\mathcal{M}} = \mathop{\mathrm{Gr}}^F\Xi * \mathop{\mathrm{Gr}}^F {\mathcal{M}}$$ using Theorem [Theorem 87](#thm:xi hodge filtration){reference-type="ref" reference="thm:xi hodge filtration"}, where $${\mathrm{St}} = \tilde{{\mathfrak{g}}}^* \times_{{\mathfrak{g}}^*} \tilde{{\mathcal{N}}}^*$$ is the scheme-theoretic fiber product and $\Xi \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{0}, 0}, G)$ is the big projective. Since we have assumed $\lambda$ regular dominant, [\[eq:filtered exactness 2\]](#eq:filtered exactness 2){reference-type="eqref" reference="eq:filtered exactness 2"} now follows by Proposition [Proposition 94](#prop:convolution vanishing){reference-type="ref" reference="prop:convolution vanishing"}. ◻ *Proof of Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"} in the regular case.* Given $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ regular dominant and ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\lambda}})$, we need to show that the morphism $$\label{eq:regular hodge generation 1} \mathop{\mathrm{Gr}}^F(\tilde{{\mathcal{D}}} \overset{{\mathrm{L}}}\otimes_{U({\mathfrak{g}})} \Gamma({\mathcal{M}})) \to \mathop{\mathrm{Gr}}^F{\mathcal{M}}$$ is surjective on ${\mathcal{H}}^0$. Let us first consider the case where ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_\lambda)$ is twisted rather than monodromic.
In this case, [\[eq:regular hodge generation 1\]](#eq:regular hodge generation 1){reference-type="eqref" reference="eq:regular hodge generation 1"} is given by the morphism $$\mathop{\mathrm{Gr}}^F \tilde{{\mathcal{D}}} \overset{{\mathrm{L}}}\otimes_{S({\mathfrak{g}})} {\mathrm{R}}\Gamma(\mathop{\mathrm{Gr}}^F{\mathcal{M}}) = {\mathcal{O}}_{{\mathrm{St}}} * \mathop{\mathrm{Gr}}^F{\mathcal{M}} = \mathop{\mathrm{Gr}}^F\Xi * \mathop{\mathrm{Gr}}^F{\mathcal{M}} \to \mathop{\mathrm{Gr}}^F{\mathcal{M}}.$$ So it suffices to show that $${\mathcal{H}}^i(\mathop{\mathrm{Gr}}^F{\mathcal{K}} * \mathop{\mathrm{Gr}}^F{\mathcal{M}}) = 0 \quad \text{for $i > 0$},$$ where ${\mathcal{K}} = \ker(\Xi \to j_{1*}\gamma_1(-\dim{\mathcal{B}}))$. (Here we note that $\mathop{\mathrm{Gr}}^F j_{1*}\gamma_1(-\dim{\mathcal{B}})$ is the structure sheaf of the diagonal in $\tilde{{\mathcal{N}}}^* \times \tilde{{\mathcal{N}}}^*$, which acts via convolution as the identity.) But this follows from Proposition [Proposition 94](#prop:convolution vanishing){reference-type="ref" reference="prop:convolution vanishing"}, so we are done in this case. In the general monodromic case, we argue as follows. We first note that, for any $H$-torsor $\pi \colon \tilde{X} \to X$ over a smooth variety $X$, the natural functor $$\iota \colon {\mathrm{D}}^b{\mathrm{MHM}}_\mu(\tilde X) \to {\mathrm{D}}^b{\mathrm{MHM}}_{\widehat{\mu}}(\tilde X)$$ has a left adjoint $\iota^L$ satisfying $$\label{eq:twisted monodromy inclusion} \iota \circ \iota^L{\mathcal{M}} = {\mathbb{C}} \overset{{\mathrm{L}}}\otimes_{S({\mathfrak{h}}(1))} {\mathcal{M}},$$ where ${\mathfrak{h}}(1)$ acts on ${\mathcal{M}}$ via the log monodromy action [\[eq:weak difference\]](#eq:weak difference){reference-type="eqref" reference="eq:weak difference"} translated by $\mu$ so that the generalized eigenvalue is zero.
(It is a standard fact about monodromic mixed Hodge modules that this action is always given by a morphism of mixed Hodge modules ${\mathfrak{h}}(1) \otimes {\mathcal{M}} \to {\mathcal{M}}$.) To see that the left adjoint exists, observe that since both the twisted and monodromic categories satisfy Zariski descent (this follows formally from the standard properties of the restriction and extension functors for open immersions), it is enough to check the case where the $H$-torsor is trivial, and hence the case $\mu = 0$. In this case, the functor is simply $\iota = \pi^\circ = \pi^!(-\dim H)[-\dim H]$, which has a left adjoint given by $\iota^L = \pi_!(\dim H)[\dim H]$. To deduce the formula [\[eq:twisted monodromy inclusion\]](#eq:twisted monodromy inclusion){reference-type="eqref" reference="eq:twisted monodromy inclusion"}, we observe that there is a natural map $${\mathbb{C}} \overset{{\mathrm{L}}}\otimes_{S({\mathfrak{h}}(1))}{\mathcal{M}} \to {\mathbb{C}} \overset{{\mathrm{L}}}\otimes_{S({\mathfrak{h}}(1))}(\iota\circ \iota^L{\mathcal{M}}) \to \iota\circ \iota^L{\mathcal{M}},$$ which one checks is an isomorphism by a local calculation. Now, returning to our specific situation, to show that the Hodge filtration on ${\mathcal{M}}$ is globally generated, we need to show that the morphism $$\mathop{\mathrm{Gr}}^F(\tilde{{\mathcal{D}}} \overset{{\mathrm{L}}}\otimes_{U({\mathfrak{g}})} \Gamma({\mathcal{M}})) \to \mathop{\mathrm{Gr}}^F{\mathcal{M}}$$ is surjective on ${\mathcal{H}}^0$.
The natural (nilpotent) actions of ${\mathfrak{h}}(1)$ on $\Gamma({\mathcal{M}})$ and ${\mathcal{M}}$ are compatible via this morphism, so it is enough to show that the map $$\mathop{\mathrm{Gr}}^F(\tilde{{\mathcal{D}}} \overset{{\mathrm{L}}}\otimes_{U({\mathfrak{g}})} {\mathrm{R}}\Gamma({\mathbb{C}} \overset{{\mathrm{L}}}\otimes_{S({\mathfrak{h}}(1))}{\mathcal{M}})) \to \mathop{\mathrm{Gr}}^F({\mathbb{C}} \overset{{\mathrm{L}}}\otimes_{S({\mathfrak{h}}(1))} {\mathcal{M}})$$ is surjective on ${\mathcal{H}}^0$. But this now follows by the same argument as in the twisted case applied with the object $$\iota^L{\mathcal{M}} \in {\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_\lambda)$$ in place of ${\mathcal{M}}$. ◻ ## Proof of Theorems [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} and [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"} in the general case {#subsec:singular vanishing} Let us now extend the above argument to the case where $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ is dominant but not necessarily regular. We first need to refine Proposition [Proposition 94](#prop:convolution vanishing){reference-type="ref" reference="prop:convolution vanishing"} as follows. Fix a subset $S$ of the simple roots for $G$ and let $\pi_S \colon {\mathcal{B}} \to {\mathcal{P}}_S$ be the corresponding partial flag variety (cf., §[4.2](#subsec:K-orbits){reference-type="ref" reference="subsec:K-orbits"}). Recall that $${\mathrm{Pic}}^G({\mathcal{P}}_S) = {\mathbb{X}}^*(H_S) = \{\mu \in {\mathbb{X}}^*(H) \mid \langle \mu, \check\alpha\rangle = 0 \text{ for } \alpha \in S\}$$ is the character group of a quotient $H_S$ of $H$. We therefore have a tautological $H_S$-torsor $\tilde{{\mathcal{P}}}_S \to {\mathcal{P}}_S$ and an $H$-equivariant morphism $\tilde{{\mathcal{B}}} \to \tilde{{\mathcal{P}}}_S$. 
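For concreteness, here is a small illustrative sketch (my own example, in type $A_2$; not from the text) of the description of ${\mathrm{Pic}}^G({\mathcal{P}}_S) = {\mathbb{X}}^*(H_S)$ just given: writing weights in fundamental-weight coordinates, so that pairing against the simple coroots reads off the coordinates, the characters of $H_S$ for $S = \{\alpha_1\}$ are exactly the multiples of $\omega_2$, and $\omega_2$ pairs strictly positively with the positive coroots outside $\mathrm{span}(S)$, as required for membership in the cone ${\mathfrak{h}}^*_{S, {\mathbb{R}}, +}$.

```python
# Illustrative computation in type A_2 (my choice, not from the text).  Weights
# are written in fundamental-weight coordinates, so <omega_i, a_j-check> is the
# Kronecker delta; the positive coroots are a1, a2 and a1 + a2.
COROOTS = {"a1": (1, 0), "a2": (0, 1), "a1+a2": (1, 1)}

def pair(mu, coroot):
    # <mu, coroot> with mu in fundamental-weight coordinates.
    return sum(m * c for m, c in zip(mu, coroot))

def in_character_lattice_of_H_S(mu, S):
    # X*(H_S) = {mu : <mu, a-check> = 0 for a in S}.
    return all(pair(mu, COROOTS[a]) == 0 for a in S)

S = {"a1"}
omega2 = (0, 1)
assert in_character_lattice_of_H_S(omega2, S)
assert not in_character_lattice_of_H_S((1, 0), S)  # omega_1 pairs to 1 with a1
# omega_2 pairs strictly positively with the positive coroots outside span(S):
assert pair(omega2, COROOTS["a2"]) > 0 and pair(omega2, COROOTS["a1+a2"]) > 0
```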
We also have the corresponding cone $${\mathfrak{h}}^*_{S, {\mathbb{R}}, +} = \{\mu \in {\mathfrak{h}}^*_{S, {\mathbb{R}}} \mid \langle \mu, \check\alpha \rangle > 0 \text{ for }\alpha \in \Phi_+ - {\mathrm{span}}(S)\} \subset {\mathfrak{h}}^*_{S, {\mathbb{R}}} = {\mathbb{X}}^*(H_S) \otimes {\mathbb{R}}$$ of ample elements for ${\mathcal{P}}_S$. For $w \in W$, let us write $i_{w, S}$ and $j_{w, S}$ for the inclusions $$X_w \xrightarrow{i_{w, S}} ({\mathrm{id}}, \pi_S)^{-1}({\mathrm{id}}, \pi_S)(X_w) \xrightarrow{j_{w, S}} {\mathcal{B}} \times {\mathcal{B}},$$ where $$({\mathrm{id}}, \pi_S) \colon {\mathcal{B}} \times {\mathcal{B}} \to {\mathcal{B}} \times {\mathcal{P}}_S$$ is the obvious map. We have the following result, which specializes to Proposition [Proposition 94](#prop:convolution vanishing){reference-type="ref" reference="prop:convolution vanishing"} when $S = \emptyset$ and ${\mathfrak{h}}^*_{S, {\mathbb{R}}, +}$ is the set of regular dominant elements. **Proposition 95**. *Let $\lambda, \lambda', \mu \in {\mathfrak{h}}^*_{\mathbb{R}}$, ${\mathcal{K}} \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda'}, G)$ and ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_{\lambda})$. Suppose that $\lambda - \lambda' \in {\mathfrak{h}}^*_{S, {\mathbb{R}}, +}$ and that, for every $G$-orbit $X_w \subset {\mathcal{B}} \times {\mathcal{B}}$, we have $${\mathcal{H}}^i(i_{w, S}^! j_{w, S}^*{\mathcal{K}}) = 0 \quad \text{for $i > 0$}.$$ Then $${\mathcal{H}}^i(\mathop{\mathrm{Gr}}^F {\mathcal{K}} * \mathop{\mathrm{Gr}}^F {\mathcal{M}}) = 0 \quad \text{for $i > 0$}.$$* *Proof.* The assumption on ${\mathcal{K}}$ implies that, as an object in ${\mathrm{D}}^b{\mathrm{MHM}}({\mathcal{D}}_{\widehat{\mu}, -\lambda'}, G)$, ${\mathcal{K}}$ has a filtration by complexes of the form $$j_{w,S !}i_{w, S *} {\mathcal{K}}_w,$$ where ${\mathcal{H}}^i({\mathcal{K}}_w) = 0$ for $i > 0$. 
So applying Lemma [Lemma 84](#lem:springer convolution){reference-type="ref" reference="lem:springer convolution"}, the complex $\mathop{\mathrm{Gr}}^F{\mathcal{K}} *\mathop{\mathrm{Gr}}^F{\mathcal{M}}$ has a filtration by the complexes $$\label{eq:refined convolution vanishing 1} \mathop{\mathrm{Gr}}({\mathrm{pr}}_1)_*\mathop{\mathrm{Gr}}^F( {\mathcal{O}}(0, 2\rho) \otimes j_{w, S !}i_{w, S *} \Delta_{23}^\circ({\mathcal{K}}_w \boxtimes {\mathcal{M}})).$$ Now, since $\lambda - \lambda' \in {\mathfrak{h}}^*_{S, {\mathbb{R}}} = {\mathrm{Pic}}^G({\mathcal{P}}_S) \otimes {\mathbb{R}}$ is pulled back from ${\mathcal{P}}_S$, we have a well-defined pushforward $$({\mathrm{id}}, \pi_{S})_* \colon {\mathrm{D}}^b{\mathrm{MHM}}_{\widehat{\mu - \rho}, \lambda -\lambda'}(\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{B}}}) \to {\mathrm{D}}^b{\mathrm{MHM}}_{\widehat{\mu - \rho}, \lambda - \lambda'}(\tilde{{\mathcal{B}}} \times \tilde{{\mathcal{P}}}_S).$$ So we can rewrite [\[eq:refined convolution vanishing 1\]](#eq:refined convolution vanishing 1){reference-type="eqref" reference="eq:refined convolution vanishing 1"} as $$\begin{aligned} {\mathrm{Gr}}({\mathrm{pr}}_{1, S})_* \mathop{\mathrm{Gr}}^F &({\mathrm{id}}, \pi_{S })_*({\mathcal{O}}(0, 2\rho) \otimes j_{w, S!}i_{w, S*} \Delta_{23}^\circ({\mathcal{K}}_w \boxtimes {\mathcal{M}}))\\ &= {\mathrm{Gr}}({\mathrm{pr}}_{1, S})_* \mathop{\mathrm{Gr}}^F j_{({\mathrm{id}}, \pi_{S})(X_w)!} \pi_{w, S *}({\mathcal{O}}(0, 2\rho) \otimes\Delta_{23}^\circ({\mathcal{K}}_w \boxtimes {\mathcal{M}})) ,\end{aligned}$$ where ${\mathrm{pr}}_{1, S} \colon {\mathcal{B}} \times {\mathcal{P}}_S \to {\mathcal{B}}$ is the projection to the first factor, $$j_{({\mathrm{id}}, \pi_S)(X_w)} \colon ({\mathrm{id}}, \pi_S)(X_w) \to {\mathcal{B}} \times {\mathcal{P}}_S$$ is the inclusion and $\pi_{w, S} \colon X_w \to ({\mathrm{id}}, \pi_S)(X_w)$ is the restriction of $({\mathrm{id}}, \pi_S)$. 
Observing that $j_{({\mathrm{id}}, \pi_S)(X_w)}$ and $\pi_{w, S}$ are affine morphisms and $\lambda - \lambda'$ is ample on ${\mathcal{P}}_S$, we conclude by Theorem [Theorem 80](#thm:twisted kodaira){reference-type="ref" reference="thm:twisted kodaira"} that [\[eq:refined convolution vanishing 1\]](#eq:refined convolution vanishing 1){reference-type="eqref" reference="eq:refined convolution vanishing 1"} and hence $\mathop{\mathrm{Gr}}^F{\mathcal{K}} * \mathop{\mathrm{Gr}}^F{\mathcal{M}}$ have vanishing higher cohomology as claimed. ◻ We next check that the big projective satisfies the hypotheses of Proposition [Proposition 95](#prop:refined convolution vanishing){reference-type="ref" reference="prop:refined convolution vanishing"}. **Proposition 96**. *For any $w \in W$ and subset $S$ of the simple roots, we have $${\mathcal{H}}^i(i_{w, S}^!j_{w, S}^*\Xi) = 0 \quad \text{for $i > 0$}.$$* *Proof.* Observe that we have an equivalence of categories $$\label{eq:xi mixed stalks 1} {\mathrm{Mod}}({\mathcal{D}}_{({\mathrm{id}}, \pi_S)^{-1}({\mathrm{id}}, \pi_S)(X_w), \widehat{0}, 0}, G)_{rh} \cong {\mathrm{Mod}}({\mathcal{D}}_{{\mathcal{B}}_L \times {\mathcal{B}}_L, \widehat{0}, 0}, L)_{rh}$$ given by restriction to a fiber ${\mathcal{B}}_L \times {\mathcal{B}}_L$ of ${\mathcal{B}} \times {\mathcal{B}} \to {\mathcal{P}}_S \times {\mathcal{P}}_S$; here $L \subset G$ is a Levi subgroup of a parabolic in the conjugacy class determined by $S$. By Proposition [Proposition 86](#prop:xi projective){reference-type="ref" reference="prop:xi projective"} and Lemma [Lemma 90](#lem:xi stalks){reference-type="ref" reference="lem:xi stalks"}, the ${\mathcal{D}}$-module $j_{w, S}^*\Xi$ must be a projective object in this category, satisfying $${\mathrm{Hom}}(j_{w, S}^*\Xi, i_{w', S*}\gamma_{w'}) = {\mathbb{C}}$$ for all orbits $X_{w'} \subset ({\mathrm{id}}, \pi_S)^{-1}({\mathrm{id}}, \pi_S)(X_w)$.
Under the equivalence [\[eq:xi mixed stalks 1\]](#eq:xi mixed stalks 1){reference-type="eqref" reference="eq:xi mixed stalks 1"}, one easily sees that the only such projective object is the big projective $$\Xi_L \in {\mathrm{Mod}}({\mathcal{D}}_{{\mathcal{B}}_L \times {\mathcal{B}}_L, \widehat{0}, 0}, L)_{rh}$$ for $L$. So, as ${\mathcal{D}}$-modules $$i_{w, S}^!j_{w, S}^*\Xi = i_{w, S}^!\Xi_L = {\mathbb{D}} i_{w, S}^*\Xi_L \cong \gamma_w$$ by Lemmas [Lemma 89](#lem:xi koszul){reference-type="ref" reference="lem:xi koszul"} and [Lemma 90](#lem:xi stalks){reference-type="ref" reference="lem:xi stalks"} applied to $L$, so we are done. ◻ *Proof of Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"}.* As in the regular case, it is enough to prove the statement when ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_\lambda)$ is twisted. Let $S$ be the set of simple roots $\alpha$ such that $\langle \lambda, \check\alpha \rangle = 0$. Then ${\mathcal{K}} = \Xi$ and ${\mathcal{M}}$ satisfy the hypotheses of Proposition [Proposition 95](#prop:refined convolution vanishing){reference-type="ref" reference="prop:refined convolution vanishing"}, so exactly the same argument as in the regular case applies. ◻ We now turn to the proof of Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"}. For this statement, we will need an additional lemma. Let us fix $\lambda \in {\mathfrak{h}}^*_{\mathbb{R}}$ dominant and let $S$ be the set of simple roots $\alpha$ such that $\langle \lambda, \check\alpha \rangle = 0$. Let us write $$X_S = {\mathcal{B}} \times_{{\mathcal{P}}_S} {\mathcal{B}} \subset {\mathcal{B}} \times {\mathcal{B}},$$ where ${\mathcal{B}} \to {\mathcal{P}}_S$ is the projection to the associated partial flag variety, and $$j_S \colon X_S \to {\mathcal{B}} \times {\mathcal{B}}$$ for the inclusion.
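As an illustrative aside (my own $\mathrm{SL}_3$ example, not from the text), the singular set $S$ attached to a dominant $\lambda$ and the parabolic subgroup $W_S \subseteq W$ can be computed mechanically; here I am assuming the standard fact that the $G$-orbits contained in $X_S = {\mathcal{B}} \times_{{\mathcal{P}}_S} {\mathcal{B}}$ are exactly the $X_w$ with $w \in W_S$.

```python
# Illustrative example for G = SL_3 (W = S_3 acting on {1, 2, 3}).  For
# lam = (lam1, lam2) in fundamental-weight coordinates, S = {i : lam_i = 0}
# is the set of simple roots pairing to zero with lam.
def singular_simple_roots(lam):
    return {i + 1 for i, c in enumerate(lam) if c == 0}

def parabolic_subgroup(S):
    # W_S = subgroup of S_3 generated by the simple transpositions s_i, i in S,
    # computed by closing the identity under right multiplication by generators.
    gens = {1: (2, 1, 3), 2: (1, 3, 2)}
    group = {(1, 2, 3)}
    frontier = set(group)
    while frontier:
        new = set()
        for w in frontier:
            for i in S:
                g = gens[i]
                wg = tuple(w[g[k] - 1] for k in range(3))  # composition w o s_i
                if wg not in group:
                    new.add(wg)
        group |= new
        frontier = new
    return group

S = singular_simple_roots((0, 2))  # lam = 2*omega_2, singular along alpha_1
assert S == {1}
assert parabolic_subgroup(S) == {(1, 2, 3), (2, 1, 3)}  # W_S = {1, s_1}
```

So for this $\lambda$ the fiber product $X_S$ is the union of the closed orbit and one two-dimensional orbit.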
We define $$\Xi_S = j_{S*}j_S^*\Xi \in {\mathrm{MHM}}({\mathcal{D}}_{\widehat{0}, 0}, G)$$ and $$\Xi_\lambda = ((\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_{-\lambda}) \otimes_{U({\mathfrak{g}})} {\mathbb{C}})_{\widehat{\lambda}} \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}, -\lambda}, G)_{rh}.$$ Here the subscript $(-)_{\widehat{\lambda}}$ denotes the $\lambda$-generalized eigenspace of ${\mathfrak{h}} \subset \tilde{{\mathcal{D}}}$. **Lemma 97**. *There is a Hodge structure on $\Xi_\lambda$ such that $$\mathop{\mathrm{Gr}}^F \Xi_\lambda \cong \mathop{\mathrm{Gr}}^F\Xi_S.$$* *Proof.* One easily checks that, for $\mu \in {\mathbb{X}}^*(H_S)$, the line bundle ${\mathcal{O}}(\mu, -\mu)$ is trivial on $X_S$; we deduce that there is an $f \in \Gamma_{\mathbb{R}}^G(X_S)^{\mathit{mon}}$ such that $\varphi(f) = (\lambda, -\lambda)$. Since $X_S$ is a closed subvariety in ${\mathcal{B}} \times {\mathcal{B}}$, we deduce that $$\mathop{\mathrm{Gr}}^F\Xi_S \cong \mathop{\mathrm{Gr}}^F j_{S*}fj_S^*\Xi.$$ Now, we note that, as a ${\mathcal{D}}$-module, $\Xi_S$ is the projective cover of $j_{1*}\gamma_1$ inside the category $${\mathrm{Mod}}_{X_S}({\mathcal{D}}_{\widehat{0}, 0}, G)_{rh}$$ of equivariant modules supported on $X_S$. We deduce that $j_{S*}fj_S^*\Xi$ is the projective cover of the irreducible object $$j_{1*}f\gamma_1 \in {\mathrm{Mod}}_{X_S}({\mathcal{D}}_{\widehat{\lambda}, -\lambda}, G)_{rh}.$$ Now, one can check that $\Xi_\lambda$ is also supported on $X_S$. 
Since the functor $\Gamma$ is exact when restricted to the subcategory ${\mathrm{Mod}}_{X_S}({\mathcal{D}}_{\widehat{\lambda}, -\lambda}, G)_{rh}$ (this follows, for example, from Theorem [Theorem 24](#thm:filtered exactness){reference-type="ref" reference="thm:filtered exactness"} and the fact that the equivalence ${\mathcal{M}} \mapsto f{\mathcal{M}}$ preserves associated gradeds), the argument of Proposition [Proposition 86](#prop:xi projective){reference-type="ref" reference="prop:xi projective"} shows that $\Xi_\lambda$ is also a projective cover of $j_{1*}\gamma_1$ in this subcategory. So we have $\Xi_\lambda \cong j_{S*}fj_S^*\Xi$, from which the lemma follows. ◻ *Proof of Theorem [Theorem 25](#thm:hodge generation){reference-type="ref" reference="thm:hodge generation"}.* We give the proof in the case when ${\mathcal{M}} \in {\mathrm{MHM}}({\mathcal{D}}_\lambda)$ is twisted. The modification required to prove the monodromic statement is the same as in the regular case. We need to show that the map $$\mathop{\mathrm{Gr}}^F(\tilde{{\mathcal{D}}} \overset{{\mathrm{L}}}\otimes_{U({\mathfrak{g}})}\Gamma({\mathcal{M}})) = {\mathcal{O}}_{{\mathrm{St}}} * \mathop{\mathrm{Gr}}^F{\mathcal{M}} = \mathop{\mathrm{Gr}}^F\Xi * \mathop{\mathrm{Gr}}^F{\mathcal{M}} \to \mathop{\mathrm{Gr}}^F{\mathcal{M}}$$ is surjective. 
We factor this as $$\mathop{\mathrm{Gr}}^F\Xi * \mathop{\mathrm{Gr}}^F{\mathcal{M}} \to \mathop{\mathrm{Gr}}^F \Xi_S * \mathop{\mathrm{Gr}}^F{\mathcal{M}} \to \mathop{\mathrm{Gr}}^F {\mathcal{M}}.$$ By Lemma [Lemma 97](#lem:hodge generation 2){reference-type="ref" reference="lem:hodge generation 2"}, the second map may be identified with the canonical morphism $$\label{eq:hodge generation 2} \mathop{\mathrm{Gr}}^F\Xi_\lambda * \mathop{\mathrm{Gr}}^F{\mathcal{M}} = \mathop{\mathrm{Gr}}^F(\Xi_\lambda * {\mathcal{M}}) \to \mathop{\mathrm{Gr}}^F {\mathcal{M}}.$$ But we have, by definition, $$\Xi_\lambda * {\mathcal{M}} = (\tilde{{\mathcal{D}}} \overset{{\mathrm{L}}}\otimes_{U({\mathfrak{g}})} \Gamma({\mathcal{M}}))_{\widehat{\lambda}},$$ so the morphism ${\mathcal{H}}^0(\Xi_\lambda * {\mathcal{M}}) \to {\mathcal{M}}$ is a surjective morphism of mixed Hodge modules by our assumption that ${\mathcal{M}}$ is globally generated. Hence, [\[eq:hodge generation 2\]](#eq:hodge generation 2){reference-type="eqref" reference="eq:hodge generation 2"} is surjective as well. It therefore remains to show that the map $$\mathop{\mathrm{Gr}}^F \Xi * \mathop{\mathrm{Gr}}^F{\mathcal{M}} \to \mathop{\mathrm{Gr}}^F \Xi_S * \mathop{\mathrm{Gr}}^F {\mathcal{M}}$$ is surjective on ${\mathcal{H}}^0$. 
Consider the kernel $${\mathcal{K}} = \ker (\Xi \to \Xi_S).$$ For any $G$-orbit $X_w \subset {\mathcal{B}} \times {\mathcal{B}}$, we have $$j_{w, S}^*{\mathcal{K}} = \begin{cases} 0, &\text{if $X_w \subset {\mathcal{B}} \times_{{\mathcal{P}}_S}{\mathcal{B}}$,} \\ j_{w, S}^*\Xi, & \text{otherwise}.\end{cases}$$ So ${\mathcal{K}}$ satisfies the hypotheses of Proposition [Proposition 95](#prop:refined convolution vanishing){reference-type="ref" reference="prop:refined convolution vanishing"} by Proposition [Proposition 96](#prop:xi mixed stalks){reference-type="ref" reference="prop:xi mixed stalks"}, so $${\mathcal{H}}^i(\mathop{\mathrm{Gr}}^F {\mathcal{K}} * \mathop{\mathrm{Gr}}^F{\mathcal{M}}) = 0 \quad \text{for $i > 0$}$$ and the desired surjectivity follows. This completes the proof. ◻ # Non-degeneracy of integral pairings {#sec:integral} We conclude the paper by tying off the remaining loose end: the proof of Proposition [Proposition 27](#prop:integral pairing){reference-type="ref" reference="prop:integral pairing"}. So let us take $\lambda \in {\mathfrak{h}}^*$ integrally dominant and consider regular holonomic monodromic ${\mathcal{D}}$-modules ${\mathcal{M}} \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}})_{rh}$ and ${\mathcal{M}}' \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\bar{\lambda}}})_{rh}$ equipped with a pairing $${\mathfrak{s}} \colon {\mathcal{M}} \otimes \overline{{\mathcal{M}}'} \to {{\mathcal{D}}\mathit{b}}_{\tilde{{\mathcal{B}}}}.$$ We begin with the following lemma. Note that if $\lambda$ is regular or if ${\mathcal{M}}$ is an extension of Hermitian self-dual objects (as is the case whenever ${\mathcal{M}}$ underlies a mixed Hodge module) then the lemma is obvious. **Lemma 98**. *Let ${\mathcal{M}} \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\lambda}})_{rh}$ as above, and let ${\mathcal{M}}^h \in {\mathrm{Mod}}({\mathcal{D}}_{\widehat{\bar{\lambda}}})_{rh}$ be its Hermitian dual. 
Then $\Gamma({\mathcal{M}}) = 0$ if and only if $\Gamma({\mathcal{M}}^h) = 0$.* *Sketch of proof.* Since both $\Gamma$ and Hermitian duality are exact, we may assume without loss of generality that ${\mathcal{M}} \in {\mathrm{Mod}}({\mathcal{D}}_\lambda)_{rh}$. Recall that $$\Xi_\lambda := ((\tilde{{\mathcal{D}}} \boxtimes {\mathcal{D}}_{-\lambda}) \otimes_{U({\mathfrak{g}})} {\mathbb{C}})_{\widehat{\lambda}},$$ so $$\Xi_\lambda * {\mathcal{M}} = (\tilde{{\mathcal{D}}} \overset{{\mathrm{L}}}\otimes_{U({\mathfrak{g}})} {\mathrm{R}}\Gamma({\mathcal{M}}))_{\widehat{\lambda}}$$ is the localization of $\Gamma({\mathcal{M}})$. So $\Gamma({\mathcal{M}}) = 0$ if and only if $\Xi_\lambda * {\mathcal{M}} = 0$. We claim that $$\Xi_\lambda^h = \Xi_{\bar{\lambda}};$$ the assertion of the lemma now follows since $$(\Xi_\lambda * {\mathcal{M}})^h \cong \Xi_\lambda^h *{\mathcal{M}}^h$$ as $\tilde{{\mathcal{D}}}$-modules. ◻ Now, let $\Omega(\lambda)$ denote the space of smooth top forms $\eta$ on $\tilde{{\mathcal{B}}}$ satisfying $$\eta \cdot (i(h) - (\lambda - \rho)(h)) = \eta \cdot (\overline{i(h)} - (\lambda - \rho)(h)) = 0$$ for $h \in {\mathfrak{h}}$ with respect to the usual right action of differential operators on top forms. For $\mu \in {\mathfrak{h}}^*$, let us write $|\cdot|^{\mu}$ for the composition $$|\cdot|^{\mu} \colon \tilde{{\mathcal{B}}} \to H_{{\mathbb{R}}}^\circ = \tilde{{\mathcal{B}}}/U_{\mathbb{R}} \xrightarrow{\log} {\mathfrak{h}}_{\mathbb{R}} \xrightarrow{\exp(\mu(\cdot))} {\mathbb{C}}^\times.$$ In this notation, $\Omega(\lambda)$ is precisely the space of forms $\eta$ such that $|\cdot|^{2(\lambda - \rho)}\eta$ is $H$-invariant. Assume for the moment that ${\mathcal{M}}$ and ${\mathcal{M}}'$ are twisted. 
Then for $m \in \Gamma({\mathcal{M}})$ and $m' \in \Gamma({\mathcal{M}}')$, the current $\eta \cdot {\mathfrak{s}}(m, \overline{m'})$ is $H$-invariant, so $$\int_{U_{\mathbb{R}}} \eta \cdot {\mathfrak{s}}(m, \overline{m'})$$ is an $H_{\mathbb{R}}^\circ$-invariant current on $H_{\mathbb{R}}^\circ$, i.e., an invariant volume form. Fixing a choice of reference invariant volume form on $H_{\mathbb{R}}^\circ$, we may identify this with a complex number, so we get a $U({\mathfrak{g}}) \otimes U(\bar{{\mathfrak{g}}})$-module map $${\mathfrak{s}}' \colon \Gamma({\mathcal{M}}) \otimes \overline{\Gamma({\mathcal{M}}')} \to \Omega(\lambda)^*,$$ where the target is the usual continuous dual of $\Omega(\lambda)$. Unwinding the definitions, the pairing $\Gamma({\mathfrak{s}})$ is given by $$\Gamma({\mathfrak{s}})(m, \overline{m'}) = \langle \eta_0, {\mathfrak{s}}'(m, \overline{m'}) \rangle,$$ where $\eta_0 \in \Omega(\lambda)$ is the $U_{\mathbb{R}}$-invariant vector given by our chosen invariant volume forms on $U_{\mathbb{R}}$ and $H_{\mathbb{R}}^\circ$ times $|\cdot|^{-2(\lambda - \rho)}$. **Lemma 99**. *The subspace $$\eta_0 U(\bar{{\mathfrak{g}}}) \subset \Omega(\lambda)$$ is dense.* *Proof.* As a left $G({\mathbb{C}})$-module, $\Omega(\lambda)$ is the principal series module $$\Omega(\lambda) = {\mathrm{Ind}}_{B({\mathbb{C}})}^{G({\mathbb{C}})}(|\cdot|^{-2(\lambda + \rho)}).$$ Here $B \subset G$ denotes the Borel subgroup given by our chosen base point in $\tilde{{\mathcal{B}}}$ and ${\mathrm{Ind}}$ denotes smooth induction. The switch from $\lambda - \rho$ to $\lambda + \rho$ comes from the bundle of top forms on ${\mathcal{B}}$. The calculation of the corresponding Harish-Chandra module is a standard exercise: one sees that the module is generated by the $U_{\mathbb{R}}$-invariant vector $\eta_0$ under either $U({\mathfrak{g}})$ or $U(\bar{{\mathfrak{g}}})$ as long as $\lambda$ is integrally dominant.
Since the Harish-Chandra module is dense in the $G({\mathbb{C}})$-module, the statement of the lemma follows. ◻ *Proof of Proposition [Proposition 27](#prop:integral pairing){reference-type="ref" reference="prop:integral pairing"}.* Let us first consider the case where ${\mathcal{M}}$ and ${\mathcal{M}}'$ are irreducible (hence twisted). In this case, note that [\[itm:integral pairing 2\]](#itm:integral pairing 2){reference-type="eqref" reference="itm:integral pairing 2"} is just the contrapositive of [\[itm:integral pairing 1\]](#itm:integral pairing 1){reference-type="eqref" reference="itm:integral pairing 1"}. To prove [\[itm:integral pairing 1\]](#itm:integral pairing 1){reference-type="eqref" reference="itm:integral pairing 1"}, we argue as follows. Suppose ${\mathfrak{s}}$ is non-degenerate. Note that if $\Gamma({\mathcal{M}}) = 0$ then $\Gamma({\mathcal{M}}') = 0$ also by Lemma [Lemma 98](#lem:singular hermitian dual){reference-type="ref" reference="lem:singular hermitian dual"}, so $\Gamma({\mathfrak{s}})$ is trivially non-degenerate. So we may as well assume that $\Gamma({\mathcal{M}}) \neq 0$ and hence $\Gamma({\mathcal{M}}') \neq 0$. In this case, ${\mathcal{M}}$ and ${\mathcal{M}}'$ are generated by global sections, so we must have ${\mathfrak{s}}(m, \overline{m}') \neq 0$ for some $m \in \Gamma({\mathcal{M}})$, $m' \in \Gamma({\mathcal{M}}')$. So there is some top form $\eta$ on $\tilde{{\mathcal{B}}}$ such that $\langle \eta, {\mathfrak{s}}(m, \overline{m}') \rangle \neq 0$. 
Moreover, for an appropriate choice of invariant volume form $\sigma$ on $H$, the form $$\int_{a(H)} (|\cdot|^{-2(\lambda - \rho)}\sigma) \wedge \eta \in \Omega(\lambda)$$ satisfies $$\langle \eta, {\mathfrak{s}}(m, \overline{m}') \rangle = \left \langle \int_{a(H)} (|\cdot|^{-2(\lambda - \rho)}\sigma) \wedge \eta, {\mathfrak{s}}'(m, \overline{m'})\right\rangle,$$ where the integral over $a(H)$ denotes integration along the action map $a \colon H \times \tilde{{\mathcal{B}}} \to \tilde{{\mathcal{B}}}$. Hence, ${\mathfrak{s}}'(m, \overline{m}') \neq 0$. But now, by Lemma [Lemma 99](#lem:hc density){reference-type="ref" reference="lem:hc density"} there must exist $x \in U({\mathfrak{g}})$ such that $$0 \neq \langle \eta_0\bar{x}, {\mathfrak{s}}'(m, \overline{m'}) \rangle = \langle \eta_0, {\mathfrak{s}}'(m, \overline{x m'})\rangle = \Gamma({\mathfrak{s}})(m, \overline{xm'}).$$ So $\Gamma({\mathfrak{s}}) \neq 0$. Hence, since $\Gamma({\mathcal{M}})$ and $\Gamma({\mathcal{M}}')$ are irreducible $U({\mathfrak{g}})$-modules, $\Gamma({\mathfrak{s}})$ is non-degenerate as claimed. Now consider the general case. To prove [\[itm:integral pairing 1\]](#itm:integral pairing 1){reference-type="eqref" reference="itm:integral pairing 1"}, simply note that if ${\mathfrak{s}}$ is non-degenerate, then there exist dual composition series for ${\mathcal{M}}$ and ${\mathcal{M}}'$ and apply the result to each irreducible subquotient. To prove [\[itm:integral pairing 2\]](#itm:integral pairing 2){reference-type="eqref" reference="itm:integral pairing 2"}, let us assume ${\mathfrak{s}} \neq 0$. Note that if ${\mathcal{M}}$ is generated by its global sections, then so is ${\mathcal{M}}/\ker {\mathfrak{s}}$. So $\Gamma({\mathcal{M}}/\ker {\mathfrak{s}}) \neq 0$ and hence $\Gamma(({\mathcal{M}}/\ker {\mathfrak{s}})^h) \neq 0$ by Lemma [Lemma 98](#lem:singular hermitian dual){reference-type="ref" reference="lem:singular hermitian dual"}. 
Hence, by [\[itm:integral pairing 1\]](#itm:integral pairing 1){reference-type="eqref" reference="itm:integral pairing 1"}, it suffices to show that $\Gamma({\mathcal{M}}') \to \Gamma(({\mathcal{M}}/\ker{\mathfrak{s}})^h)$ is surjective. But ${\mathcal{M}}' \to ({\mathcal{M}}/\ker{\mathfrak{s}})^h$ is a surjective map of ${\mathcal{D}}$-modules, so this follows from Beilinson-Bernstein exactness. This completes the proof of the proposition. ◻ HMSW2 J. Adams, M. van Leeuwen, P. Trapa and D. Vogan, "Unitary representations of real reductive groups", *Astérisque* 417 (2020). J. Adams, P. Trapa and D. Vogan, *Computing Hodge filtrations*. Notes available for download at <http://www.liegroups.org/papers/atlasHodge.pdf>. D. Barbasch, "The unitary dual for complex classical Lie groups", *Invent. Math.* 96 (1989), 103--176. A. Beilinson and J. Bernstein, "Localisation de ${\mathfrak{g}}$-modules", *C. R. Acad. Sci.* 292 (1981), no. 1, 15--18. A. Beilinson and J. Bernstein, "A proof of Jantzen conjectures", *Advances in Soviet Mathematics* 16 (1991), no. 1, 1--50. A. Beilinson, V. Ginzburg and W. Soergel, "Koszul duality patterns in representation theory", *J. Amer. Math. Soc.* 9 (1996), 473--527. A. Beilinson, "Localization of representations of reductive Lie algebras", *Proceedings of the ICM, August 16-24, Warsawa* (1983), 699--710. F. Bien, *$D$-Modules and Spherical Representations*, Princeton University Press (1990). B. Broer, "Line bundles on the cotangent bundle of the flag variety", *Invent. Math.* 113 (1993), 1--20. A. S. Chaves Aguilar, *The Schmid-Vilonen Conjecture for Verma modules of highest antidominant weight and discrete series representations*, [PhD thesis](https://oatd.org/oatd/record?record=oai%5C:uchicago.tind.io%5C:4862), University of Chicago, 2022. E. Cattani and A. Kaplan, "Polarized mixed Hodge structures and the local monodromy of a variation of Hodge structure", *Invent. Math.* 67 (1982), 101--115. P. Deligne, "Théorie de Hodge : II", *Pub. 
Math. IHES* 40 (1971), 5--57. Y. Deng, *On the nilpotent orbit theorem of complex variation of Hodge structures*, [arXiv:2203.04266](https://arxiv.org/abs/2203.04266). D. Davis and K. Vilonen, *Mixed Hodge modules and real groups*, [arXiv:2202.08797](https://arxiv.org/abs/2202.08797). D. Davis and K. Vilonen, *Hodge filtrations on tempered Hodge modules*, [arXiv:2206.09091](https://arxiv.org/abs/2206.09091). P. A. Griffiths, "On the periods of certain rational integrals. I, II.", *Ann. of Math.* (2) 90 (1969), 460--495; 90 (1969), 496--541. A. Galligo, M. Granger, and Ph. Maisonobe, "${\mathcal{D}}$-modules et faisceaux pervers dont le support singulier est un croisement normal. II", in *Differential systems and singularities (Luminy, 1983)*. Astérisque No. 130 (1985), 240--259. R. Hain and S. Zucker, "Unipotent variations of mixed Hodge structure", *Invent. Math.* 88 (1987), 83--124. H. Hecht, D. Miličić, W. Schmid and J. Wolf, "Localization and standard modules for real semisimple Lie groups I: the duality theorem", *Invent. Math.* 90 (1987), 297--332. H. Hecht, D. Miličić, W. Schmid and J. Wolf, "Localization and standard modules for real semisimple Lie groups II: irreducibility, vanishing theorems, and classification", to appear in *Pure Appl. Math. Q.*. M. Kashiwara, "Regular holonomic ${\mathcal{D}}$-modules and distributions on complex manifolds", *Advanced Studies in Pure Mathematics (Complex Analytic Singularities)* 8 (1986), 199--206. M. Kashiwara, "A study of variation of mixed Hodge structure", *Publ. Res. Inst. Math. Sci.* 22 (1986), 991--1024. A. Knapp, *Representation Theory of Semisimple Groups, An Overview Based on Examples*, Princeton Landmarks in Mathematics, (1986). A. Knapp and D. Vogan, *Cohomological induction and unitary representations*, Princeton University Press, (1995). G.
Laumon, "Sur la catégorie dérivée des ${\mathcal{D}}$-modules filtrés", *Algebraic Geometry (Tokyo/Kyoto 1982)*, Lecture Notes in Mathematics 1016, Springer, Berlin, (1983), 151--237. R. Mirollo and K. Vilonen, "Bernstein-Gelfand-Gelfand reciprocity on perverse sheaves", *Ann. Sci. Ecole Norm. Sup.* (4) 20 (1987), No. 3, 311--323. H. Poincaré, "Sur les résidus des intégrales doubles" (French), *Acta Math.* 9 (1887), no. 1, 321--380. C. Sabbah and C. Schnell, *Mixed Hodge Module Project*, <http://www.math.polytechnique.fr/cmat/sabbah/MHMProject/mhm.html>. M. Saito, "Modules de Hodge polarisables", *Publ. Res. Inst. Math. Sci.* 24 (1988), 849--995. M. Saito, "Mixed Hodge modules", *Publ. Res. Inst. Math. Sci.* 26 (1990), no. 2, 221--333. T. Saito, "A description of monodromic mixed Hodge modules", *J. für Reine Angew. Math.* 786 (2022), 107--153. W. Schmid, "Variation of Hodge structure: the singularities of the period mapping", *Invent. Math.* 22 (1973), 211--319. W. Schmid and K. Vilonen, "Hodge theory and unitary representations of reductive Lie groups", *Frontiers of Mathematical Sciences*, International Press, Somerville MA (2011), 397--420. W. Schmid and K. Vilonen, "Hodge theory and unitary representations", *Representations of reductive groups*, Progr. Math. 312, Birkhäuser/Springer, Cham (2015), 443--453. C. Schnell and R. Yang, *Higher multiplier ideals*, to appear. D. Vogan, "Irreducible characters of semisimple Lie groups III. Proof of Kazhdan-Lusztig Conjecture in the integral case", *Invent. Math.* 71 (1983), 381--417. D. Vogan, "Unitarizability of certain sets of representations", *Annals of Math.* 120 (1984), 141--187. D. Vogan, "The unitary dual of ${\mathrm{GL}}(n)$ over an archimedean field", *Invent. Math.* 83 (1986), 449--505. [^1]: The argument that the authors of [@SV] had in mind was not correct. 
[^2]: While *a priori* ${\mathcal{V}} \otimes \overline{{\mathcal{V}}}$ is a sheaf in the Zariski topology and ${\mathcal{C}}^\infty_X$ is a sheaf in the analytic topology, one may define a morphism from the first to the second either by restricting ${\mathcal{C}}^\infty_X$ to the Zariski topology or by analytifying ${\mathcal{V}}$, as preferred. [^3]: This is a very old idea. It plays a central role in Deligne's work on mixed Hodge structures [@deligne] and traces its origins via Griffiths [@G] as far back as Poincaré [@P]. [^4]: Strictly speaking, it takes some effort to define the mixed Hodge module structure on $\mathop{\mathrm{Gr}}_{V_i}^\alpha$; in the language of triples, the hard part is the pairing ${\mathfrak{s}}$. See, for example [@SS §12.5].
--- abstract: | We show anomalous dissipation of scalar fields advected by a (typical) weak solution to the Euler equations with $C^{\frac{1}{3}^-}$ regularity in the 3D periodic setting. address: - Mathematisches Institut, Leipzig University, Augustusplatz 10, 04109 Leipzig, Germany and Max Planck Institute for Mathematics in the Sciences, Inselstraße 22-26, 04103 Leipzig, Germany - Max Planck Institute for Mathematics in the Sciences, Inselstraße 22-26, 04103 Leipzig, Germany - Max Planck Institute for Mathematics in the Sciences, Inselstraße 22-26, 04103 Leipzig, Germany author: - Jan Burczak - László Székelyhidi, Jr. - Bian Wu title: Anomalous dissipation and Euler flows --- # Introduction We consider the Cauchy problem for the linear advection-diffusion equation $$\label{e:advectiondiffusion} \begin{split} \partial_t\rho_{\kappa}+u\cdot\nabla \rho_\kappa&=\kappa\Delta\rho_{\kappa}\\ \rho_{\kappa}|_{t=0}&=\rho_{in}, \end{split}$$ on the 3-dimensional flat torus $\ensuremath{\mathbb{T}}^3$. The quantity $\rho_{\kappa}$ is a passive scalar and $\kappa>0$ represents molecular diffusivity. We are interested in the case where the vectorfield $u=u(x,t)$ is a weak solution of the incompressible Euler equations $$\label{e:Euler} \begin{split} \partial_tu+u\cdot \nabla u+\nabla p&=0,\\ \mathop{\rm div}\nolimits u&=0. \end{split}$$ We study the behavior of $$\label{e:ad} \kappa\int_0^T\|\nabla\rho_\kappa\|_{L^2}^2\,dt$$ as $\kappa \to 0$. Our main result is the following rigorous verification of the phenomenon of scalar anomalous dissipation induced by turbulent flows, in the deterministic setting. **Theorem 1**. *Let $0<\beta<1/3$. 
There exists a weak solution $u\in C^{\beta}(\ensuremath{\mathbb{T}}^3\times [0,T])$ to [\[e:Euler\]](#e:Euler){reference-type="eqref" reference="e:Euler"} such that for every non-zero initial datum $\rho_{in}\in H^1(\ensuremath{\mathbb{T}}^3)$ with zero mean, the family of unique solutions $\{\rho_{\kappa}\}_{\kappa>0}$ to [\[e:advectiondiffusion\]](#e:advectiondiffusion){reference-type="eqref" reference="e:advectiondiffusion"} satisfies $$\label{e:anomalousdissipation} \limsup_{\kappa\to 0}\quad \kappa\int_0^T\|\nabla\rho_\kappa\|_{L^2}^2\,dt\geq c \|\rho_{in}\|_{L^2}^2,$$ where $c>0$ depends only on $\frac{\|\nabla \rho_{in}\|_{L^2(\ensuremath{\mathbb{T}}^3)}}{\|\rho_{in}\|_{_{L^2(\ensuremath{\mathbb{T}}^3)}}}$ and $T>0$. Moreover, any smooth solution of [\[e:Euler\]](#e:Euler){reference-type="eqref" reference="e:Euler"} can be uniformly approximated by such weak solutions $u$.* The inequality [\[e:anomalousdissipation\]](#e:anomalousdissipation){reference-type="eqref" reference="e:anomalousdissipation"}, meaning that as molecular diffusivity $\kappa\to 0$, the dissipation rate becomes independent of the diffusivity, is referred to as anomalous dissipation, and is a highly nonlinear feature of the vectorfield $u$. We remark also that if we drop the requirement that $u$ is a weak solution of [\[e:Euler\]](#e:Euler){reference-type="eqref" reference="e:Euler"} and only require $u$ to be divergence-free, then the set of such vectorfields is $C^0$-dense in the space of all time-dependent divergence-free vectorfields. Also, the vectorfields we construct have the stronger feature that anomalous dissipation happens on every time interval contained in $[0,T]$. 
That is, given $u$ as in Theorem [Theorem 1](#t:main){reference-type="ref" reference="t:main"} and any $\mathcal{I}\subset[0,T]$, the restriction of $u$ to $\mathcal{I}$ has the same property, of course with a constant $c>0$ in [\[e:anomalousdissipation\]](#e:anomalousdissipation){reference-type="eqref" reference="e:anomalousdissipation"} depending on $\mathcal{I}$. ## Context ### Turbulence In the 1949 paper [@Obu49] titled 'Structure of the Temperature Field in Turbulent Flow' A.N. Obukhov writes *In other words, the turbulent motion inside a thermally heterogeneous medium with gradients which are initially weak can contribute to the local gradients of temperature, which are subsequently smoothed out by the action of molecular heat conductivity.* This statement is precisely a prediction that little ('molecular') diffusion $\kappa \Delta$ present in [\[e:advectiondiffusion\]](#e:advectiondiffusion){reference-type="eqref" reference="e:advectiondiffusion"} can be amplified by advection $u \cdot \nabla$ by a turbulent velocity field $u$. The responsible mechanism should be transport of modes towards higher frequencies (smaller scales), where eventually diffusivity of a molecular length-scale steps in. This prediction has been further corroborated by phenomenological, experimental and numerical arguments, see classical [@Cor51], [@Bat59], and more recent [@SreSch10], [@DonSreYou05], [@ShrSig00]. Furthermore, the expected amplification of diffusivity is supposed to keep the quantity [\[e:ad\]](#e:ad){reference-type="eqref" reference="e:ad"} bounded away from zero, a phenomenon referred to as 'anomalous dissipation'. Thus, it is strongly believed that anomalous dissipation is a key feature of turbulence. On the other hand, it is argued that turbulent flows are irregular. More precisely, in his famous 1949 note on statistical hydrodynamics [@Ons49], L. 
Onsager conjectured that the threshold regularity for the validity of energy conservation in the class of Hölder continuous weak solutions of the Euler equations is the exponent $1/3$. Onsager's interest in this issue came from an effort to explain the primary mechanism of energy dissipation in turbulence, one that prevails even in the absence of viscosity. Thus, the implicit suggestion was that Hölder continuous weak solutions of the Euler equations may be an appropriate mathematical description of turbulent flows in the inviscid limit. Putting these two pieces together, considering non-smooth velocity fields advecting a scalar is of primary interest for turbulence. There are several analytical models of scalar turbulence, including the 1968 Kraichnan model [@Kra68] (seen nowadays as rather incompatible, compare the relevant discussion and references of [@ArmstrongVicol]) and the more recent alternating shear flows, discussed in more detail below.

### Anomalous dissipation in PDEs

For any fixed initial datum, the energy solutions of [\[e:advectiondiffusion\]](#e:advectiondiffusion){reference-type="eqref" reference="e:advectiondiffusion"} converge (along subsequences) to a distributional solution of the transport equation $\partial_t\rho+u\cdot\nabla \rho = 0$. These are given by a uniquely defined and measure-preserving flow map, provided the drift is regular enough (classically $u \in L^1 (W^{1, \infty})$, but also $u \in L^1 (W^{1, 1})$ suffices [@DiPernaLions]) and divergence-free. In such a case, anomalous dissipation is excluded, since measure preservation implies in particular conservation of the $L^2$ norm. On the other hand, below the regularity threshold which is required for a well-posedness theory of the transport equation, there are examples of divergence-free velocity fields that give rise to anomalous dissipation.
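To see concretely why some irregularity of $u$ is needed, recall that testing the advection-diffusion equation with $\rho_\kappa$ and using $\mathop{\rm div}\nolimits u=0$ gives the energy balance $\kappa\int_0^T\|\nabla\rho_\kappa\|_{L^2}^2\,dt=\tfrac12\left(\|\rho_{in}\|_{L^2}^2-\|\rho_\kappa(T)\|_{L^2}^2\right)$; for the trivial drift $u=0$ this quantity can be evaluated mode by mode and vanishes as $\kappa\to 0$. The following sketch (not part of the paper; the initial datum, a hypothetical combination of three Fourier modes on the circle, is chosen arbitrarily) verifies this in closed form.

```python
import numpy as np

def pure_diffusion_dissipation(kappa, T=1.0, modes=(1, 4, 16), amps=(1.0, 0.5, 0.25)):
    """Closed-form value of kappa * int_0^T ||grad rho_kappa||_{L^2}^2 dt for the
    heat equation (the case u = 0) on the circle [0, 2*pi), with the hypothetical
    initial datum rho_in(x) = sum_j amps[j] * cos(modes[j] * x).

    Each Fourier mode evolves as rho_hat_k(t) = exp(-kappa * k^2 * t) * rho_hat_k(0),
    so the contribution of the mode k with amplitude a is
        kappa * a^2 * pi * k^2 * int_0^T exp(-2 kappa k^2 t) dt
          = (a^2 * pi / 2) * (1 - exp(-2 kappa k^2 T)),
    in agreement with the energy balance (1/2)(||rho_in||^2 - ||rho_kappa(T)||^2).
    """
    total = 0.0
    for k, a in zip(modes, amps):
        l2_energy = np.pi * a**2  # ||a cos(kx)||_{L^2(0,2pi)}^2
        total += 0.5 * l2_energy * (1.0 - np.exp(-2.0 * kappa * k**2 * T))
    return total

# The dissipated fraction of the initial energy tends to 0 with kappa:
for kappa in (1.0, 1e-2, 1e-4, 1e-6):
    print(f"kappa = {kappa:.0e}   dissipation = {pure_diffusion_dissipation(kappa):.3e}")
```

The point of the examples discussed next is precisely that a sufficiently irregular advecting field keeps this quantity bounded away from zero in the limit $\kappa \to 0$.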
We mention the recent [@ElgLis23], where $u \in C^\infty ([0,T]; C^\alpha (\ensuremath{\mathbb{T}}^2))$, $\alpha<1$ (in fact it is merely logarithmically below Lipschitz regularity), and the earlier [@DriElgIyeJeo22], where $u \in L^1 (C^\alpha), \alpha<1$. The constructions of $u$ in the aforementioned articles are essentially based on (i) shear flows alternating in time in a quasi self-similar fashion, which by 'stirring' push towards smaller scales, where molecular diffusion takes over (inspired by [@DeP03; @Pie94; @Aiz78]), and on (ii) comparison with the inviscid case [@CZDelEl2020]. It is worth mentioning the recent papers [@BruDeL23], [@Bru_etals22], partially drawing from [@JeoYon21; @JeoYon22], that manage to embed the above-mentioned passive scalar methodology into nonlinear fluid dynamics (Navier-Stokes equations), at the cost of an arising forcing term. A different approach based on *iterated quantitative homogenization* has recently been proposed by Armstrong-Vicol in [@ArmstrongVicol]. Although both iterated quantitative homogenization (see e.g. [@NiuShenXu2020]) and homogenization in the present context of passive tracers in turbulent flows [@McLaughlin1985] have been studied previously, in [@ArmstrongVicol] this idea was successfully combined with an infinite iterative scheme, and it has substantially inspired our work. The methodology of [@ArmstrongVicol] has several advantages over the previous ones: First of all, it abandons the need for the aforementioned comparison with the inviscid case, and consequently becomes compatible with the existence of an inertial range in turbulence. Secondly, in works based on alternating shear flows focussing towards a singular time, the anomalous dissipation actually occurs at only one instant of time. This seems a major drawback in the context of turbulent flows in light of the statistical time-invariance.
Thirdly, as in classical homogenization, the method allows for enhanced and anomalous dissipation of arbitrary initial tracer fields, in contrast with previous constructions based on convex integration [@MoSz2018; @MoSz2019; @BruCoDe2021; @MoSa2020], where the tracer is constructed in parallel to the velocity field scale-by-scale. The main step forward that we make compared with [@ArmstrongVicol] is that our velocity field solves the incompressible Euler equations, thus providing the first deterministic link on the PDE level between the Obukhov-Corrsin theory and fluid mechanics. Indeed, to our knowledge this is the first result on anomalous dissipation where the advecting vectorfield is itself the solution of a PDE. In general, we interpret iterative quantitative homogenization in parallel with convex integration, with the basic principle in mind that whilst convex integration is a form of \"inverse renormalization\" strategy, iterated quantitative homogenization can be seen as \"forward renormalization\". As an interesting byproduct of our approach, which intertwines convex integration techniques with iterated homogenization, we construct not just a single velocity field exhibiting anomalous dissipation, but a $C^0$-dense set of such fields. Moreover, in our approach we abandon the shear-flow setting, giving us more flexibility in constructing such vectorfields.

### Related topics: enhanced dissipation, mixing, and beyond

Within or above Lipschitz regularity for $u$, divergence-free drift can of course also assist dissipation (while it cannot counteract it), but in a weaker sense than anomalous dissipation (roughly, $T$ must grow at a rate commensurate with regularity), which is referred to as enhanced dissipation.
The enhanced dissipation by a single shear flow, or a related simple 'lower dimensional' flow, is well studied, with roots in the computations by Kelvin and Kolmogorov [@Kol34], compare [@AlbBeeNov22], [@BedCZ17], [@CZElgWid20], [@CZGal], [@ColCZWid21] and their references. In such cases, one needs to restrict the initial datum to a class not orthogonal to the shearing direction, but the setting usually allows one to extract precise decay rates. For a more general related approach see [@Vuk21]. The general condition in the Lipschitz case has been provided in the seminal [@ConKisRyzZla08] (albeit without a clear rate; for an optimal example see [@ElgLissMatt2023]), whose results have recently been slightly strengthened and reproved by a different method in [@Wei21]. Let us finally point out, without getting into details, natural connections between enhanced dissipation and mixing (which is the phenomenon of inviscid-case migration towards smaller scales caused by a vector field), see [@CZDelElg20] and the recent survey [@CZsurvey] with its references. There are also interesting developments in the stochastic setting, see [@BedBluPS22a], [@BedBluPS22b] and [@Hofmanova2023], in the kinetic setting (Landau damping), cf. [@MouVil08], and in the suppression of chemotactic blowup, including the recent beautiful [@HuKisYao23]. In this context we mention the recent work [@Otto2023], where optimal quantitative enhanced dissipation estimates are obtained for a random drift given by the Gaussian free field - the strategy there, based on computing effective diffusivities scale-by-scale in a renormalization fashion, is very much reminiscent of our point of view.

## Notation

We use $\ensuremath{\mathbb{R}}^3 := \ensuremath{\mathbb{R}}^{3 \times 1}$. For two vectors $v_1, v_2 \in \ensuremath{\mathbb{R}}^3$, we use $v_1^T v_2$ or $v_1 \cdot v_2$ to denote their inner product.
For an invertible matrix $A \in \ensuremath{\mathbb{R}}^{3 \times 3}$, we use $A^T$ to denote its transpose, $A^{-1}$ to denote its inverse and $A^{-T}$ to denote the inverse of its transpose. For a vector-valued function $\chi = (\chi_1, \chi_2, \chi_3)^T: \ensuremath{\mathbb{R}}^3 \rightarrow \ensuremath{\mathbb{R}}^3$, we use $\nabla \chi$ to denote $( \nabla \chi_1, \nabla \chi_2, \nabla \chi_3 )^T$. We use $\boldsymbol{\alpha}$ to denote a multi-index of partial derivatives and $|\boldsymbol{\alpha}|$ to denote its total order of differentiation.

# Outline of proof of Theorem [Theorem 1](#t:main){reference-type="ref" reference="t:main"} {#outline-of-proof-of-theorem-tmain}

In this section we provide an overview of the key steps in our proof, and postpone technical details to subsequent sections. As mentioned in the introduction, our strategy involves a \"backward renormalization\" in constructing the velocity field in Section [2.1](#s:vectorfield){reference-type="ref" reference="s:vectorfield"} (in the sense of an iteration from large-scale to small-scale, $u_q\mapsto u_{q+1}\mapsto\dots$), and a \"forward renormalization\" in constructing the passive tracer in Section [2.2](#s:enhanceddis){reference-type="ref" reference="s:enhanceddis"} (in the sense of an iteration from small-scale to large-scale, $\rho_{q+1}\mapsto\rho_q\mapsto\dots$). We have tried to present the proofs in later sections in a self-contained manner, because we believe these may be of independent interest.

## Construction of the vectorfield $u$ - Convex integration {#s:vectorfield}

The general scheme for producing Hölder continuous weak solutions of the Euler equations [\[e:Euler\]](#e:Euler){reference-type="eqref" reference="e:Euler"} is by now well understood.
One proceeds via an inductive process on a sequence of approximate solutions $u_q$ and associated Reynolds defect $\mathring{R}_q$ and pressure $p_q$, $q=0,1,2,\dots$, which satisfy the *Euler-Reynolds system* $$\label{e:EulerReynolds} \begin{split} \partial_tu_q+\mathop{\rm div}\nolimits(u_q\otimes u_q)+\nabla p_q&=\mathop{\rm div}\nolimits\mathring{R}_q\,,\\ \mathop{\rm div}\nolimits u_q&=0\,, \end{split}$$ with constraints $$\label{e:ERconstraints} \ensuremath{\mathrm{tr\,}}\mathring{R}_q(x,t)=0,\quad \int_{\ensuremath{\mathbb{T}}^3}u_q(x,t)\,dx=0,\quad \int_{\ensuremath{\mathbb{T}}^3}p_q(x,t)\,dx=0.$$ We note in passing that these normalizations determine the pressure $p_q$ uniquely from $(u_q,\mathring{R}_q)$ so that one may speak of the pair $(u_q,\mathring{R}_q)$ being a solution of [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"}. ### Inductive assumptions The induction process involves a set of *inductive estimates*. These estimates are in terms of a frequency parameter $\lambda_q$ and amplitude $\delta_q$, which are given by $$\label{e:lambdadelta} \lambda_q:= 2\pi \ceil{a^{(b^q)}},\quad \delta_q:=\lambda_q^{-2\beta}$$ where $\ceil{x}$ denotes the smallest integer $n\geq x$, $a\gg 1$ is a large parameter, $b>1$ is close to $1$ and $0<\beta<\sfrac13$ is the exponent of Theorem [Theorem 1](#t:main){reference-type="ref" reference="t:main"}. The parameters $a$ and $b$ are then related to $\beta$. 
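To get a concrete feel for the scales involved, the following short computation uses illustrative values of $a,b,\beta$ (the paper only requires $a\gg1$, $b>1$ close to $1$ and $0<\beta<\sfrac13$; the numbers below are hypothetical) and exhibits the double-exponential growth of $\lambda_q$ together with the fact that, up to the ceiling and the $2\pi$ normalization, $\lambda_{q+1}\approx\lambda_q^b$:

```python
import math

# Illustrative values only: the paper requires a >> 1, b > 1 close to 1 and
# 0 < beta < 1/3; these concrete numbers are hypothetical.
a, b, beta = 4.0, 1.5, 0.3

def lam(q):
    """lambda_q = 2*pi*ceil(a^(b^q)), cf. (e:lambdadelta)."""
    return 2.0 * math.pi * math.ceil(a ** (b ** q))

def delta(q):
    """delta_q = lambda_q^(-2*beta)."""
    return lam(q) ** (-2.0 * beta)

# Frequencies grow double-exponentially while amplitudes decay:
assert lam(4) > lam(3) > lam(2)
assert delta(4) < delta(3) < delta(2)

# Up to the integer ceiling and the 2*pi normalization, lambda_{q+1} ~ lambda_q^b:
ratios = [math.log(lam(q + 1) / (2 * math.pi)) / math.log(lam(q) / (2 * math.pi))
          for q in (2, 3)]
assert all(abs(r - b) < 0.05 for r in ratios)
```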
With these parameters the inductive estimates take the form[^1] $$\begin{aligned} \left\|\mathring R_q\right\|_{C^0}&\leq \delta_{q+1}\lambda_q^{-\gamma_R},\label{e:R_q_inductive_est}\\ \left\|u_q\right\|_{C^n}&\leq M \delta_q^{\sfrac12}\lambda_q^n\quad\textrm{ for }n=1,2,\dots,\bar{N},\label{e:u_q_inductive_est}\\ \left|e(t)-\int_{\ensuremath{\mathbb{T}}^3}\left|u_q\right|^2\,dx-\bar{e}\delta_{q+1}\right|&\leq \delta_{q+1}\lambda_q^{-\gamma_E},\label{e:energy_inductive_assumption}\end{aligned}$$ where $\gamma_R,\gamma_E>0$ are small parameters, $\bar{N}\in\ensuremath{\mathbb{N}}$ is a large parameter to be chosen suitably (depending on $\beta>0$ and $b>1$), and $\bar{e}>0$, $M\geq 1$ are universal constants, which will be fixed throughout the iteration and depend on the particular geometric form of the perturbing building blocks; they will be specified in Definition [Definition 29](#d:defebar){reference-type="ref" reference="d:defebar"} and Definition [Definition 33](#d:defM){reference-type="ref" reference="d:defM"}. We remark that in [@BDSV] only the case $\bar{N}=1$ is required in [\[e:u_q\_inductive_est\]](#e:u_q_inductive_est){reference-type="eqref" reference="e:u_q_inductive_est"}, and [\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} there is slightly weaker. Moreover, in [@BDSV] a generic small parameter $\alpha>0$ is used in place of $\gamma_R,\gamma_E$, but for our purposes we need to choose these small corrections more carefully.

### Parameter choices

The inductive construction for passing from $u_q$ to $u_{q+1}$ involves three steps: *mollification*, *gluing* and *perturbation*. In these steps two more scales are introduced, an adjusted length-scale $\ell_q$ and an adjusted time-scale $\tau_q$.
In our case these will be defined as $$\label{e:elltau} \ell_q:=\lambda_q^{-1-\gamma_L},\quad \tau_q:=\lambda_q^{-1+\beta-\gamma_T},$$ where $\gamma_L,\gamma_T$ are additional small parameters to be chosen suitably in dependence of $\beta,b$. It is worth pointing out that, if one were to set $\gamma_L=\gamma_T=0$, then $\ell_q,\tau_q$ would be the natural (dimensionally consistent) length- and time-scales induced by the velocity field $u_q$ (cf. [\[e:lambdadelta\]](#e:lambdadelta){reference-type="eqref" reference="e:lambdadelta"} and [\[e:u_q\_inductive_est\]](#e:u_q_inductive_est){reference-type="eqref" reference="e:u_q_inductive_est"}). Indeed, we can think of $\delta_q^{\sfrac12}$ as having physical dimension of velocity, i.e. $LT^{-1}$, and $\lambda_q^{-1}$ as having physical dimension of length $L$. Moreover, setting $\gamma_R=0$ would be the consistent estimate in light of the basic principle of convex integration, that the error $\mathring{R}_q$ is cancelled by the new average stress $\langle (u_{q+1}-u_q)\otimes (u_{q+1}-u_q)\rangle$. In addition to these small parameters we will also use $\alpha>0$, as is also done in [@BDSV], to take care of the lack of Schauder estimates in $C^0, C^1,\dots$ spaces. Thus, in summary, we will use the following set of additional parameters, which will all be chosen depending on $\beta,b$:

- $\gamma_R\in(0,1)$: smallness of $\|\mathring{R}_q\|_{C^0}$ with respect to $\delta_{q+1}$;
- $\gamma_L\in(0,1)$: smallness of $\ell_q$ with respect to $\lambda_q^{-1}$;
- $\gamma_T\in(0,1)$: smallness of $\tau_q$ with respect to $(\delta_q^{\sfrac12}\lambda_q)^{-1}$;
- $\gamma_E\in(0,1)$: smallness of the energy gap with respect to $\delta_{q+1}$;
- $\alpha\in(0,1)$: Schauder exponent;
- $\bar{N}\in\ensuremath{\mathbb{N}}$: number of derivatives in the induction.

With a suitable choice of these parameters we have the following analogue of [@BDSV Proposition 2.1]: **Proposition 2**.
*There exist universal constants $M\geq 1$, $\bar{e}>0$ with the following property. Assume $0<\beta<\frac13$ and $$\label{e:b_beta_rel} 1<b<\frac{1-\beta}{2\beta}\,.$$ Further, assume that $\gamma_T,\gamma_R,\gamma_E>0$ satisfy $$\label{e:Onsager_Conditions} \max\{\gamma_T+b\gamma_R,\gamma_E\}<(b-1)\bigl(1-(2b+1)\beta\bigr)$$ and let $e:[0,T]\to\ensuremath{\mathbb{R}}$ be a strictly positive smooth function. Then there exist $\gamma_L>0$, $\bar{N}\in \ensuremath{\mathbb{N}}$ and $\alpha_0>0$ depending on $\beta, b, \gamma_T,\gamma_R,\gamma_E$ such that, for any $0<\alpha<\alpha_0$, there exists $a_0\gg 1$ (depending in addition on $e$) such that for any $a\geq a_0$ the following holds:*

*Let $(u_q,\mathring R_q)$ be a smooth solution of [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} satisfying the estimates [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}--[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"}. Then there exists another solution $(u_{q+1}, \mathring R_{q+1})$ to [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} satisfying [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}--[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} with $q$ replaced by $q+1$, and we have $$\left\|u_{q+1}-u_q\right\|_{C^0}+\frac{1}{\lambda_{q+1}}\left\|u_{q+1}-u_q\right\|_{C^1} \leq M\delta_{q+1}^{\sfrac12}\label{e:v_diff_prop_est}.$$* The new velocity field $u_{q+1}$ is obtained as $$\label{e:iteration} u_{q+1}:=\bar{u}_q+w_{q+1},$$ where $\bar{u}_q$ is constructed from $u_q$ via Isett's gluing procedure [@Isett2018], and $w_{q+1}$ is the new perturbation consisting of a deformed family of Mikado flows.
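As a sanity check on the compatibility of the hypotheses, the following sketch (with hypothetical values of $\beta$ and $b$, not the choices made elsewhere in the paper) verifies numerically that the window prescribed by [\[e:b_beta_rel\]](#e:b_beta_rel){reference-type="eqref" reference="e:b_beta_rel"} and [\[e:Onsager_Conditions\]](#e:Onsager_Conditions){reference-type="eqref" reference="e:Onsager_Conditions"} is nonempty:

```python
# Hypothetical values of beta and b (not the paper's eventual choices), used only
# to check that (e:b_beta_rel) and (e:Onsager_Conditions) are simultaneously
# satisfiable.
beta = 0.30
b = 1.05
assert 1 < b < (1 - beta) / (2 * beta)          # (e:b_beta_rel)

# Right-hand side of (e:Onsager_Conditions); note b < (1-beta)/(2*beta) is
# equivalent to (2b+1)*beta < 1, so this quantity is positive:
rhs = (b - 1) * (1 - (2 * b + 1) * beta)
assert rhs > 0

# Any gamma_T, gamma_R, gamma_E below the threshold will do, for instance:
gamma_T = gamma_R = gamma_E = rhs / (2 * (1 + b))
assert max(gamma_T + b * gamma_R, gamma_E) < rhs
```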
In Sections [2.1.3](#s:gluingsketch){reference-type="ref" reference="s:gluingsketch"}-[2.1.4](#s:Mikado){reference-type="ref" reference="s:Mikado"} below we sketch the proof of Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}. The detailed proof, which closely follows [@BDSV], is deferred to Section [6](#s:Onsager){reference-type="ref" reference="s:Onsager"}.

### Gluing procedure {#s:gluingsketch}

The gluing procedure amounts to the following statement: **Proposition 3**. *Within the setting of Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"} the following holds. Let $(u_q,\mathring R_q)$ be a smooth solution of [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} satisfying the estimates [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}--[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"}.
Then there exists another solution $(\bar{u}_q,\mathring{\bar{R}}_q)$ to [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} such that $$%\label{e:gluedsupport} \mathop{\rm supp}\nolimits\mathring{\bar{R}}_q\subset \ensuremath{\mathbb{T}}^3\times \bigcup_{i\in\ensuremath{\mathbb{N}}}(i\tau_q+\tfrac{1}{3}\tau_q,i\tau_q+\tfrac{2}{3}\tau_q)$$ and the following estimates hold for any $N\geq 0$: $$\begin{aligned} \|\bar{u}_q\|_{C^{N+1}}&\lesssim \delta_q^{\sfrac12}\lambda_q\ell_q^{-N}\,,\label{e:gluingu}\\ \|\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}}&\lesssim \mathring{\delta}_{q+1}\ell_q^{-N-2\alpha}\,,\label{e:gluingR}\\ \|(\partial_t+\bar{u}_q\cdot\nabla)\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}}&\lesssim \tau_q^{-1}\mathring{\delta}_{q+1}\ell_q^{-N-2\alpha}\,,\label{e:gluingDR}\\ \left|\int_{\ensuremath{\mathbb{T}}^3}|\bar{u}_q|^2-|u_q|^2\,dx\right|&\lesssim\mathring{\delta}_{q+1}\,.\label{e:gluingE}\end{aligned}$$ Moreover, the vector potentials of $u_q$ and $\bar{u}_q$ satisfy $$\label{e:gluingz} \|z_q-\bar{z}_{q}\|_{C^\alpha}\lesssim \tau_q\mathring{\delta}_{q+1}\ell_q^{-\alpha}\,.$$* Here and in the sequel, the vector potential $z$ of a divergence-free velocity field $u$ is given by the Biot-Savart operator on $\ensuremath{\mathbb{T}}^3$, defined as $\mathcal{B}=(-\Delta)^{-1}\mathop{\rm curl}\nolimits$, so that $z=\mathcal{B}u$ is the unique solution of $$\label{e:Biot_Savart} \mathop{\rm div}\nolimits z=0\qquad\textrm{ and }\qquad\mathop{\rm curl}\nolimits z=u \,,$$ recalling that we assume $u$ has zero spatial average, $\int_{\ensuremath{\mathbb{T}}^3} u(x,t)\, dx=0$.
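In Fourier variables the Biot-Savart operator is explicit, $\hat z(k)= i\,k\times\hat u(k)/|k|^2$; the following minimal spectral sketch (a hypothetical numpy implementation, for illustration only) checks [\[e:Biot_Savart\]](#e:Biot_Savart){reference-type="eqref" reference="e:Biot_Savart"} on a random mean-free solenoidal field:

```python
import numpy as np

N = 16                                    # grid points per direction on T^3
k1 = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing="ij")
K = np.stack([KX, KY, KZ])                # shape (3, N, N, N)
K2 = (K ** 2).sum(axis=0)
K2[0, 0, 0] = 1.0                         # avoid 0/0; the mean mode is zero anyway

rng = np.random.default_rng(0)
# Random real field, then Leray projection in Fourier space -> div u = 0, mean zero:
v = rng.standard_normal((3, N, N, N))
vh = np.fft.fftn(v, axes=(1, 2, 3))
uh = vh - K * (K * vh).sum(axis=0) / K2
uh[:, 0, 0, 0] = 0.0

def curl_hat(wh):
    """Spectral curl: (curl w)^hat = i k x w_hat."""
    return 1j * np.stack([K[1] * wh[2] - K[2] * wh[1],
                          K[2] * wh[0] - K[0] * wh[2],
                          K[0] * wh[1] - K[1] * wh[0]])

zh = curl_hat(uh) / K2                    # Biot-Savart: z_hat = i k x u_hat / |k|^2

assert np.allclose(curl_hat(zh), uh, atol=1e-10)     # curl z = u
assert np.allclose((K * zh).sum(axis=0), 0.0)        # div z = 0
```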
Proposition [Proposition 3](#p:gluing){reference-type="ref" reference="p:gluing"} is restated below as Corollary [Corollary 26](#c:gluing){reference-type="ref" reference="c:gluing"} and is a direct consequence of Corollary [Corollary 23](#c:mollification){reference-type="ref" reference="c:mollification"} and Proposition [Proposition 25](#p:p_gluing){reference-type="ref" reference="p:p_gluing"}.

### The perturbation {#s:Mikado}

The formula for the new perturbation $w_{q+1}$ uses Mikado flows, which we recall here briefly. Mikado flows were originally introduced in [@DaSz2017] and have been widely used since in applications of convex integration to fluid dynamics. We fix a finite set $\Lambda\subset\ensuremath{\mathbb{R}}^3$ consisting of nonzero vectors $\vec{k}\in\ensuremath{\mathbb{R}}^3$ with rational coordinates, and for each $\vec{k}\in\Lambda$ let $\varphi_{\vec{k}}$ be a periodic function with the properties

- $\vec{k}\cdot\nabla\varphi_{\vec{k}}=0$,
- For any $\vec{k}\neq \vec{k}'\in\Lambda$ we have $\mathop{\rm supp}\nolimits\varphi_{\vec{k}}\cap \mathop{\rm supp}\nolimits\varphi_{\vec{k}'}=\emptyset$,
- $\fint_{\ensuremath{\mathbb{T}}^3}\varphi_{\vec{k}}(\xi)\,d\xi=0$ and $\fint_{\ensuremath{\mathbb{T}}^3}|\Delta\varphi_{\vec{k}}(\xi)|^2\,d\xi=\fint_{\ensuremath{\mathbb{T}}^3}|\nabla\varphi_{\vec{k}}(\xi)|^2\,d\xi=1$.

Next, let $\psi_{\vec{k}}=\Delta\varphi_{\vec{k}}$ and define $$\label{e:defUW} W_{\vec{k}}(\xi)=\psi_{\vec{k}}(\xi)\vec{k},\quad U_{\vec{k}}(\xi)=\vec{k}\times\nabla\varphi_{\vec{k}},$$ so that[^2] $\mathop{\rm curl}\nolimits U_{\vec{k}}=W_{\vec{k}}$ and, for any $a_{\vec{k}}$, $\vec{k}\in \Lambda$, the vectorfield $W=\sum_{\vec{k}\in\Lambda}a_{\vec{k}}W_{\vec{k}}$ satisfies $$\fint W\,d\xi=0,\quad \fint W\otimes W\,d\xi=\sum_{\vec{k}\in\Lambda}a_{\vec{k}}^2\vec{k}\otimes\vec{k}.$$ Further, we define $H_{\vec{k}}=H_{\vec{k}}(\xi)$ to be the antisymmetric zero-mean matrix with the property that, for any $v \in \ensuremath{\mathbb{R}}^3$ we have $H_{\vec{k}} v = U_{\vec{k}} \times v$. The following lemma (originating in the work of Nash [@Nash54]) is crucial: **Lemma 4**. *For any compact set $\mathcal{N}\subset\mathcal{S}^{3\times 3}_{+}$ there exists a finite $\Lambda\subset\mathbb{Q}^3$ such that there exist smooth functions $a_{\vec{k}}:\mathcal{N}\to\ensuremath{\mathbb{R}}_+$ with $$\label{e:MikadoProperty} \sum_{\vec{k}\in\Lambda}a_{\vec{k}}^2(R)\vec{k}\otimes\vec{k}=R\quad\textrm{ for any }R\in\mathcal{N}.$$* The corresponding vectorfield $W=W(R,\xi)=\sum_{\vec{k}\in\Lambda}a_{\vec{k}}(R)W_{\vec{k}}(\xi)$ is called a Mikado flow. In the following we will fix $\mathcal{N}:=B_{1/2}(\ensuremath{\mathrm{Id}})$, the metric ball of radius $1/2$ around the identity matrix in $\mathcal{S}^{3\times 3}_{+}$, and denote the corresponding Mikado vectorfield by $W=W(R,\xi)$.
Following [@BDSV] the new perturbation is then defined as $$\label{e:defwq+1} w_{q+1}=\frac{1}{\lambda_{q+1}}\mathop{\rm curl}\nolimits\left[\sum_{i\in\ensuremath{\mathbb{N}}}\sum_{\vec{k}\in\Lambda}\eta_i\sigma_{q}^{\sfrac12}a_{\vec{k}}(\tilde R_{q,i})\nabla\Phi_i^TU_{\vec{k}}(\lambda_{q+1}\Phi_i)\right]\,,$$ where

- $\eta_i=\eta_i(x,t)$, $i\in\ensuremath{\mathbb{N}}$, are smooth nonnegative cutoff functions with pairwise disjoint supports such that $$\label{e:propertyofetai} \|\partial_t^m\nabla^n_x\eta_i\|_{L^\infty}\leq C_{n,m}\tau_q^{-m},\quad \sum_i\eta_i^2(x,t)=\bar{\eta}^2(x,\tau_q^{-1}t)\,,$$ where $\bar{\eta}=\bar{\eta}(x,t)$ is a (universal) function, $1$-periodic in $t$, such that $$\fint_{\ensuremath{\mathbb{T}}^3}\bar{\eta}^2(x,t)\,dx=c_0,\quad \fint_0^1\bar{\eta}^2(x,t)\,dt=c_1$$ for some universal constants $c_0,c_1>0$.
- $\sigma_q=\sigma_q(t)$ is a positive scalar function with the property $$\label{e:propertyofsigmaq} |\sigma_q(t)-c_1^{-1}\delta_{q+1}|\leq C\delta_{q+1}\lambda_q^{-\min\{\gamma_E,\gamma_R,(b-1)\beta\}}\leq \tfrac{1}{3c_1}\delta_{q+1}$$
- The maps $\Phi_i=\Phi_i(x,t)$ are the volume-preserving diffeomorphisms defined as the inverse flow map of the velocity field $\bar{u}_q$, which satisfy for $(x,t)\in\mathop{\rm supp}\nolimits\eta_i$: $$\label{e:propertyofPhii} \|\nabla\Phi_i(x,t)\|_{C^n}+\|(\nabla\Phi_i)^{-1}(x,t)\|_{C^n}\leq C_n\ell_q^{-n},\quad \|\nabla\Phi_i-\ensuremath{\mathrm{Id}}\|_{L^\infty}\leq C\lambda_q^{-\gamma_T}\,.$$
- $\tilde R_{q,i}=\tilde R_{q,i}(x,t)$, $i\in\ensuremath{\mathbb{N}}$, satisfies $$\label{e:propertyoftildeRqi} \tilde R_{q,i}=\nabla\Phi_i(\ensuremath{\mathrm{Id}}-\sigma_q^{-1}\mathring{\bar{R}}_q)\nabla\Phi_i^T,\quad \|\tilde R_{q,i}\|_{C^n}\leq C_n\ell_q^{-n}\,.$$

These properties will be obtained in Section [6.3](#s:perturbation){reference-type="ref" reference="s:perturbation"}, where we will conclude the proof of Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}.

## Enhanced dissipation via iterative stages {#s:enhanceddis}

We define an additional $q$-dependent parameter which plays a key role in this paper: $$\label{e:defkappaq} \kappa_q:=\lambda_q^{-\theta},\quad \theta=\frac{2b}{b+1}(1+\beta)\,.$$ Using [\[e:lambdadelta\]](#e:lambdadelta){reference-type="eqref" reference="e:lambdadelta"} it is easy to verify the recursive identity $$\label{e:kappaqkappaq+1} \kappa_q=\frac{\delta_{q+1}}{\lambda_{q+1}^2\kappa_{q+1}}.$$ In turn, this identity indicates that we can think of $\kappa_q$ as having the physical dimension of a diffusion coefficient, $L^2T^{-1}$.
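In the idealized case $\lambda_q=a^{(b^q)}$ (dropping the ceiling and the $2\pi$ factor) one has $\lambda_{q+1}=\lambda_q^b$, and [\[e:kappaqkappaq+1\]](#e:kappaqkappaq+1){reference-type="eqref" reference="e:kappaqkappaq+1"} becomes an exact identity of exponents, which is precisely what determines $\theta$; a short symbolic check:

```python
import sympy as sp

b, beta = sp.symbols("b beta", positive=True)
theta = 2 * b * (1 + beta) / (b + 1)

# With the idealized lambda_q = a**(b**q) we have lambda_{q+1} = lambda_q**b, so we
# compare exponents with respect to lambda_q:
#   kappa_q                                   has exponent -theta,
#   delta_{q+1} / (lambda_{q+1}^2 kappa_{q+1}) has exponent b*(-2*beta - 2 + theta).
lhs_exp = -theta
rhs_exp = b * (-2 * beta - 2 + theta)

assert sp.simplify(lhs_exp - rhs_exp) == 0     # (e:kappaqkappaq+1) holds exactly
```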
We consider the sequence of advection-diffusion equations on $\ensuremath{\mathbb{T}}^3\times[0,T]$: $$\label{e:equationq} \begin{split} \partial_t\rho_{q}+u_{q}\cdot\nabla\rho_{q}&=\kappa_{q}\Delta\rho_{q}\,,\\ \rho_{q}|_{t=0}&=\rho_{in}\,, \end{split}$$ where $u_q$ is the sequence of velocity fields obtained in Section [2.1](#s:vectorfield){reference-type="ref" reference="s:vectorfield"} via Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}. We are interested in comparing the cumulative dissipation for subsequent values of $q$, given by $$\label{e:defDq} D_q:=\kappa_q\int_0^T\|\nabla\rho_q\|_{L^2}^2\,dt=\tfrac{1}{2}(\|\rho_{in}\|_{L^2}^2-\|\rho_q(T)\|_{L^2}^2).$$ Our main result is **Proposition 5**. *Let $0<\beta<\frac13$ and $b>1$ as in [\[e:b_beta_rel\]](#e:b_beta_rel){reference-type="eqref" reference="e:b_beta_rel"}. Assume that $\gamma_L, \gamma_T,\gamma_R \in (0,1)$ satisfy $$\label{e:Dissipation} \gamma_T< \frac{b-1}{b+1}(1-(2b+1)\beta)<\gamma_R+\gamma_T,$$ $$\label{e:Dissipation2} 2\gamma_L< \frac{b-1}{b+1} (1+\beta).$$ Then there exist $\gamma>0$ and $\alpha_1>0$ such that for all $0<\alpha<\alpha_1$ there exist $\tilde{N}\in\ensuremath{\mathbb{N}}$ and $q_0\in\ensuremath{\mathbb{N}}$ with the following property.*

*For any $q\geq q_0$ and any initial datum $\rho_{in}\in L^2(\ensuremath{\mathbb{T}}^3)$ with $\int_{\ensuremath{\mathbb{T}}^3}\rho_{in}\,dx=0$ such that $$\label{e:boundoninitialdatum} \|\rho_{in}\|_{H^n} \leq \lambda_q^n \min \{ D_q^{\sfrac12}, D_{q+1}^{\sfrac12} \} \qquad\textrm{ for }1\leq n\leq \tilde N,$$ we have $$|D_{q+1}-D_q|\leq \tfrac{1}{2}\lambda_q^{-\gamma}D_q.$$*

*Proof.* We start by fixing $\gamma,\alpha_1>0$. The exponent $\gamma>0$ is determined by the inequalities $$\label{e:global_gamma} \begin{split} \gamma < \tfrac{1}{4} \min \Bigl\{\gamma_T,\gamma_R,\gamma_E,(b-1)\beta,(b-1)\theta,\gamma_T+\gamma_R+(2b-1)\beta+1-\theta, \\ \frac{b-1}{b+1}(1-(2b+1)\beta) - \gamma_T \Bigr\}.
\end{split}$$ We remark that the last condition in the first line above can be satisfied because of the right inequality in [\[e:Dissipation\]](#e:Dissipation){reference-type="eqref" reference="e:Dissipation"}. In turn, $\alpha_1$ is then chosen sufficiently small so that $$\label{e:conditiononalpha1} \alpha_1(1+\gamma_L)+\gamma+\theta<\gamma_T+\gamma_R+(2b-1)\beta+1.$$ We also need to choose $$\label{e:global_N} \begin{split} \tilde N \geq \max \Bigl\{ (b+1) \left( \frac{2}{b-1} + \frac{\theta}{b} \right), \, 1 + \frac{2b(b-1)(1+\beta)}{\gamma_T(b+1)} \Bigr\}, \end{split}$$ where the first lower bound is given by $N_h$ in [\[e:hom_para_assump:Nh\]](#e:hom_para_assump:Nh){reference-type="eqref" reference="e:hom_para_assump:Nh"} and the second lower bound is given by $N$ to validate (A1) in section [5](#s:time){reference-type="ref" reference="s:time"}. *Step 1: Preparation ($\rho_{q+1} \leadsto \tilde \rho_{q+1}$)* We start with the equation $$\label{e:equationq+1} \begin{split} \partial_t\rho_{q+1}+u_{q+1}\cdot\nabla\rho_{q+1}&=\kappa_{q+1}\Delta\rho_{q+1}\,,\\ \rho_{q+1}|_{t=0}&=\rho_{in}\,, \end{split}$$ and recall the form $u_{q+1}=\bar{u}_q+w_{q+1}$ of the advecting field $u_{q+1}$, where $w_{q+1}$ is given in [\[e:defwq+1\]](#e:defwq+1){reference-type="eqref" reference="e:defwq+1"}. Then we make use of the linear algebra identity $(Au)\times(Av)=\textrm{cof }A(u\times v)=\det A A^{-T}(u\times v)$ for any $3\times 3$ matrix $A$ and vectors $u,v\in\ensuremath{\mathbb{R}}^3$. From this we deduce that if $H$ is the antisymmetric matrix such that $Hv=u\times v$ and $\det A=1$, then $\tilde H=A^{-1}HA^{-T}$ is the antisymmetric matrix such that $\tilde Hv=(A^Tu)\times v$. 
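Both linear algebra facts are elementary but easy to get wrong by a transpose; a short numeric sketch (with arbitrary illustrative matrices and vectors) confirms them:

```python
import numpy as np

def hat(u):
    """The antisymmetric matrix H with H @ w = u x w."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])          # any invertible matrix (here det A = 5)
u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.0, 2.0])

# (Au) x (Av) = cof(A)(u x v) = det(A) A^{-T} (u x v):
lhs = np.cross(A @ u, A @ v)
rhs = np.linalg.det(A) * np.linalg.inv(A).T @ np.cross(u, v)
assert np.allclose(lhs, rhs)

# Rescale so that det A = 1; then A^{-1} hat(u) A^{-T} represents w -> (A^T u) x w:
A = A / np.cbrt(np.linalg.det(A))
H_tilde = np.linalg.inv(A) @ hat(u) @ np.linalg.inv(A).T
assert np.allclose(H_tilde @ v, np.cross(A.T @ u, v))
```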
In particular, using the antisymmetric matrix-valued functions $H_{\vec{k}}(\xi)$ introduced above in Section [2.1.4](#s:Mikado){reference-type="ref" reference="s:Mikado"}, we deduce $$(\nabla\Phi^{-1}_i H_{\vec{k}}\nabla\Phi^{-T}_i)v=(\nabla\Phi_i^TU_{\vec{k}})\times v.$$ Therefore, using the identity $\mathop{\rm div}\nolimits(z\times\nabla\rho)=(\mathop{\rm curl}\nolimits z)\cdot\nabla\rho$, we can write equation [\[e:equationq+1\]](#e:equationq+1){reference-type="eqref" reference="e:equationq+1"} equivalently as $$\label{e:equationAq+1} \begin{split} \partial_t\rho_{q+1}+\bar{u}_{q}\cdot\nabla\rho_{q+1}&=\mathop{\rm div}\nolimits A_{q+1}\nabla\rho_{q+1}\,,\\ \rho_{q+1}|_{t=0}&=\rho_{in}\,, \end{split}$$ where the elliptic matrix $A_{q+1}=A_{q+1}(x,t)$ is defined as $$\label{e:defAq+1} A_{q+1} (x,t)=\kappa_{q+1}\ensuremath{\mathrm{Id}}+\frac{1}{\lambda_{q+1}}\sum_i\eta_i (x,t)\nabla\Phi_i^{-1} (x,t) H^{(i)}(x,t,\lambda_{q+1}\Phi_i)\nabla\Phi_i^{-T} (x,t) \,,$$ and $$\begin{aligned} H^{(i)}(x,t,\xi)&=\sum_{\vec{k}\in\Lambda}\sigma_q^{\sfrac12}(t)a_{\vec{k}}(\tilde R_{q,i}(x,t))H_{\vec{k}}(\xi)\,.
\end{aligned}$$ Let $(\tilde\eta_i)_i$ be a partition of unity such that $\tilde\eta_i\eta_i=\eta_i$, $\tilde\eta_i\eta_j=0$ for $j\neq i$ and satisfying estimates of the same type as [\[e:propertyofetai\]](#e:propertyofetai){reference-type="eqref" reference="e:propertyofetai"}: $$\label{e:propertyoftildeetai} \|\partial_t^m\nabla^n_x\tilde\eta_i\|_{L^\infty}\leq \tilde C_{n,m}\tau_q^{-m}.$$ Define the elliptic matrix $$\label{e:defAq+1tilde} \tilde A_{q+1}(x,t)=\sum_i\tilde\eta_i(x,t)\nabla\Phi_i^{-1}(x,t)\left[\kappa_{q+1}\ensuremath{\mathrm{Id}}+\frac{\eta_i(x,t)}{\lambda_{q+1}}H^{(i)}(x,t,\lambda_{q+1}\Phi_i(x,t))\right]\nabla\Phi_i^{-T}(x,t)\,.$$ The estimate [\[e:propertyofPhii\]](#e:propertyofPhii){reference-type="eqref" reference="e:propertyofPhii"} implies $$\label{e:AtildeA} \|A_{q+1}-\tilde A_{q+1}\|_{L^\infty}\leq \kappa_{q+1}\sum_{i}\tilde\eta_i\|\nabla\Phi_i^{-1}\nabla\Phi_i^{-T}-\ensuremath{\mathrm{Id}}\|_{L^\infty}\leq C\kappa_{q+1}\lambda_{q}^{-\gamma_T}.$$ Therefore we can compare the cumulative dissipation in [\[e:equationAq+1\]](#e:equationAq+1){reference-type="eqref" reference="e:equationAq+1"} with that in $$\label{e:equationAq+1tilde} \begin{split} \partial_t\tilde\rho_{q+1}+\bar{u}_{q}\cdot\nabla\tilde\rho_{q+1}&=\mathop{\rm div}\nolimits\tilde A_{q+1}\nabla\tilde \rho_{q+1}\,,\\ \tilde \rho_{q+1}|_{t=0}&=\rho_{in}\,. 
\end{split}$$ More precisely, assuming that $q_0$ is sufficiently large and $q\geq q_0$, we can ensure $C\lambda_q^{-\gamma_T}<\tfrac{1}{2}\lambda_q^{-\sfrac{\gamma_T}{2}}$, and then apply Proposition [Proposition 10](#p:stability_in_ellipticity){reference-type="ref" reference="p:stability_in_ellipticity"} with $\varepsilon=\tfrac{1}{2}\lambda_q^{-\sfrac{\gamma_T}{2}}<\tfrac12$ to conclude $$\left|\|\rho_{q+1}(T)\|^2_{L^2}-\|\tilde\rho_{q+1}(T)\|_{L^2}^2\right|\lesssim \lambda_q^{-\sfrac{\gamma_T}{2}}\left|\|\rho_{in}\|^2_{L^2}-\|\tilde\rho_{q+1}(T)\|_{L^2}^2\right| .$$ We may also write this as $$\label{e:step1} \left|D_{q+1}-\tilde D_{q+1}\right|\lesssim \lambda_q^{-\sfrac{\gamma_T}{2}}\tilde D_{q+1},$$ where $\tilde D_{q+1}=\tfrac12\left|\|\rho_{in}\|^2_{L^2}-\|\tilde\rho_{q+1}(T)\|_{L^2}^2\right|$. *Step 2: Spatial homogenization ($\tilde \rho_{q+1} \leadsto \bar\rho_{q}^{(1)}$)* In the second step, we use classical estimates in quantitative homogenization and an explicit formula for the corrector, to replace [\[e:equationAq+1tilde\]](#e:equationAq+1tilde){reference-type="eqref" reference="e:equationAq+1tilde"} by the homogenized equation $$\label{e:equationAqbar} \begin{split} \partial_t\bar{\rho}_{q}^{(1)}+\bar{u}_{q}\cdot\nabla \bar{\rho}^{(1)}_{q}&=\mathop{\rm div}\nolimits\bar{A}_{q}\nabla\bar{\rho}^{(1)}_{q}\,,\\ \bar{\rho}_{q}^{(1)}|_{t=0}&=\rho_{in}\,. 
\end{split}$$ The homogenized elliptic coefficient is given, as in classical homogenization theory, by $$\bar A_{q}(x,t) = \fint\tilde A_{q+1}(x,t,\xi) \Big( \ensuremath{\mathrm{Id}}+ \sum_i \eta_i(x,t) \nabla\Phi_i^T(x,t) \nabla_\xi\chi_i^T(x,t,\xi) \Big) \,d\xi\,$$ with corrector $\chi_i: \ensuremath{\mathbb{T}}^3 \times [0,T] \times \ensuremath{\mathbb{T}}^3 \rightarrow \ensuremath{\mathbb{R}}^3$. We will show below in Section [4](#s:homogenization){reference-type="ref" reference="s:homogenization"} that, because of the special structure of the oscillating vectorfield $w_{q+1}$, $\chi_i$ can be defined *explicitly* by $$\label{e:explicitchii} \chi_i(x,t,\xi) = - \frac{\sigma_q^{1/2}(t)}{\kappa_{q+1} \lambda_{q+1}} \nabla \Phi_i^{-1}(x,t) \sum_{\vec{k}} a_{\vec{k}} \big( \tilde R_{q,i}(x,t) \big) \varphi_{\vec{k}}(\xi) \vec{k}\,.$$ Using the properties of Mikado flows in Section [2.1.4](#s:Mikado){reference-type="ref" reference="s:Mikado"}, this formula allows us to obtain an explicit expression for $\bar{A}_{q}$.
Indeed, since both $H^{(i)}$ and $\chi_i$ satisfy $\fint H^{(i)}\,d\xi=0$, $\fint\chi_i\,d\xi=0$, we have $$\bar{A}_q=\kappa_{q+1}\sum_i\tilde\eta_i\nabla\Phi_i^{-1}\nabla\Phi_i^{-T}+\frac{1}{\lambda_{q+1}}\sum_i\eta_i^2\nabla\Phi_i^{-1}\fint H^{(i)}(\nabla_\xi\chi_i)^T\,d\xi.$$ On the other hand, using [\[e:explicitchii\]](#e:explicitchii){reference-type="eqref" reference="e:explicitchii"} and the definition of $H^{(i)}(x,t,\xi)$, $$\begin{aligned} H^{(i)}(\nabla_\xi\chi_i)^T&=-\frac{\sigma_q}{\lambda_{q+1}\kappa_{q+1}}\sum_{\vec{k}}a_{\vec{k}}^2(\tilde R_{q,i})H_{\vec{k}}(\nabla\varphi_{\vec{k}}\otimes(\nabla\Phi_i^{-1}\vec{k}))\\ &=\frac{\sigma_q}{\lambda_{q+1}\kappa_{q+1}}\sum_{\vec{k}}a_{\vec{k}}^2(\tilde R_{q,i})|\nabla\varphi_{\vec{k}}|^2(\vec{k}\otimes\vec{k})\nabla\Phi_i^{-T}\,.\end{aligned}$$ Here we used the definition of $H_{\vec{k}}(\xi)$ together with $\vec{k}\cdot\nabla\varphi_{\vec{k}}=0$ to deduce $$H_{\vec{k}}\nabla\varphi_{\vec{k}}=(\vec{k}\times\nabla\varphi_{\vec{k}})\times\nabla\varphi_{\vec{k}}=-|\nabla\varphi_{\vec{k}}|^2\vec{k}\,.$$ Using the normalization in the definition of Mikado flows in Section [2.1.4](#s:Mikado){reference-type="ref" reference="s:Mikado"} as well as Lemma [Lemma 4](#l:Mikado){reference-type="ref" reference="l:Mikado"}, we conclude $$\begin{aligned} \fint H^{(i)}(\nabla_\xi\chi_i)^T\,d\xi=\frac{\sigma_q}{\lambda_{q+1}\kappa_{q+1}}\tilde R_{q,i}\nabla\Phi_i^{-T}\,.\end{aligned}$$ Using [\[e:propertyoftildeRqi\]](#e:propertyoftildeRqi){reference-type="eqref" reference="e:propertyoftildeRqi"} we deduce $$\label{e:defbarA} \bar{A}_q(x,t)=\sum_i\tilde\eta_i\kappa_{q+1}\nabla\Phi^{-1}_i\nabla\Phi_i^{-T}+\frac{\sigma_q}{\lambda_{q+1}^2\kappa_{q+1}}\sum_i\eta_i^2(\ensuremath{\mathrm{Id}}-\sigma_q^{-1}\mathring{\bar R}_q)\,.$$ Next, let $$\label{e:defkappabarq} \begin{split} \tilde{\kappa}_q(x,t)&=\kappa_{q+1}+c_1^{-1}\frac{\delta_{q+1}}{\lambda_{q+1}^2\kappa_{q+1}}\sum_i\eta_i^2(x,t)\\ &=\kappa_{q+1}+c_1^{-1}\bar{\eta}^2(x,\tau_q^{-1}t)\kappa_{q}, \end{split}$$ where we have used [\[e:kappaqkappaq+1\]](#e:kappaqkappaq+1){reference-type="eqref" reference="e:kappaqkappaq+1"} and
[\[e:propertyofetai\]](#e:propertyofetai){reference-type="eqref" reference="e:propertyofetai"}. Then we compute $$\begin{aligned} \bar{A}_q -\tilde\kappa_q\ensuremath{\mathrm{Id}}&=\kappa_{q+1}\sum_i\tilde\eta_i(\nabla\Phi_i^{-1}\nabla\Phi_i^{-T}-\ensuremath{\mathrm{Id}})+\\ &+\kappa_q\sum_i\eta_i^2(\delta_{q+1}^{-1}\sigma_q-c_1^{-1})\ensuremath{\mathrm{Id}}-\kappa_q\sum_i\eta_i^2\delta_{q+1}^{-1}\mathring{\bar{R}}_q.\end{aligned}$$ It then follows from [\[e:gluingR\]](#e:gluingR){reference-type="eqref" reference="e:gluingR"}, [\[e:propertyofsigmaq\]](#e:propertyofsigmaq){reference-type="eqref" reference="e:propertyofsigmaq"} and [\[e:propertyofPhii\]](#e:propertyofPhii){reference-type="eqref" reference="e:propertyofPhii"} that, for all $(x,t)$, $$\label{e:barAbarkappa} |\bar{A}_{q}(x,t)-\tilde{\kappa}_q(x,t)\ensuremath{\mathrm{Id}}|\leq C \tilde{\kappa}_{q}(x,t)\lambda_{q}^{-\min\{\gamma_T,\gamma_R,\gamma_E,(b-1)\beta\}}\,,$$ for some fixed constant $C$. Then, as in Step 1, we can choose $q_0$ sufficiently large to ensure that $C\lambda_q^{-2\gamma}<\tfrac{1}{2}\lambda_q^{-\sfrac{\gamma}{2}}$. In particular, we may bound pointwise $$\label{e:comparebarAtildekappa} \frac{1}{2}\tilde\kappa_q\ensuremath{\mathrm{Id}}\leq \bar{A}_q\leq 2\tilde\kappa_q\ensuremath{\mathrm{Id}}.$$ In Section [4](#s:homogenization){reference-type="ref" reference="s:homogenization"}, specifically Corollary [Corollary 14](#c:hom_dissipation){reference-type="ref" reference="c:hom_dissipation"}, we compare the cumulative dissipation of $\tilde\rho_{q+1}$ with that of $\bar{\rho}_q^{(1)}$.
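The pointwise identity $H_{\vec{k}}\nabla\varphi_{\vec{k}}=-|\nabla\varphi_{\vec{k}}|^2\vec{k}$ entering the computation of $\bar{A}_q$ above is the vector triple product combined with $\vec{k}\cdot\nabla\varphi_{\vec{k}}=0$; a one-line numeric check (with stand-in values for $\vec{k}$ and $\nabla\varphi_{\vec{k}}$):

```python
import numpy as np

k = np.array([0.0, 0.0, 1.0])          # Mikado direction
g = np.array([0.7, -1.2, 0.0])         # stand-in for grad(phi_k); note k . g = 0

# H_k g = (k x g) x g = (k . g) g - |g|^2 k = -|g|^2 k, since k . g = 0:
assert np.allclose(np.cross(np.cross(k, g), g), -(g @ g) * k)
```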
The setting in Section [4](#s:homogenization){reference-type="ref" reference="s:homogenization"} is based on the validity of the inequality [\[e:comparebarAtildekappa\]](#e:comparebarAtildekappa){reference-type="eqref" reference="e:comparebarAtildekappa"} as well as [\[e:Dissipation\]](#e:Dissipation){reference-type="eqref" reference="e:Dissipation"}, [\[e:Dissipation2\]](#e:Dissipation2){reference-type="eqref" reference="e:Dissipation2"}. Then, Corollary [Corollary 14](#c:hom_dissipation){reference-type="ref" reference="c:hom_dissipation"} and Remark [Remark 15](#r:hom_dissipation){reference-type="ref" reference="r:hom_dissipation"} yield: $$\left|\|\tilde\rho_{q+1}(T)\|^2_{L^2}-\|\bar{\rho}_{q}^{(1)}(T)\|_{L^2}^2\right|\lesssim \tfrac{1}{2}\lambda_q^{-\gamma}\left|\|\rho_{in}\|^2_{L^2}-\|\bar{\rho}^{(1)}_{q}(T)\|_{L^2}^2\right| ,$$ or equivalently $$\label{e:step2} \left|\tilde D_{q+1}-D_{q}^{(1)}\right|\lesssim \lambda_q^{-\gamma}D_{q}^{(1)},$$ where $D_{q}^{(1)}=\tfrac12\left|\|\rho_{in}\|^2_{L^2}-\|\bar\rho^{(1)}_{q}(T)\|_{L^2}^2\right|$. *Step 3: Diagonal reduction ($\bar\rho_{q}^{(1)} \leadsto \bar\rho_{q}^{(2)}$)* Using once more [\[e:barAbarkappa\]](#e:barAbarkappa){reference-type="eqref" reference="e:barAbarkappa"} we now apply Proposition [Proposition 10](#p:stability_in_ellipticity){reference-type="ref" reference="p:stability_in_ellipticity"} with $\varepsilon=\tfrac{1}{2}\lambda_q^{-\gamma}<\tfrac12$ to conclude $$\label{e:step3} \left|D_q^{(2)}-D_q^{(1)}\right|\lesssim \lambda_q^{-\gamma}D_{q}^{(2)},$$ where $D_{q}^{(2)}=\tfrac12\left|\|\rho_{in}\|^2_{L^2}-\|\bar\rho^{(2)}_{q}(T)\|_{L^2}^2\right|$ and $\bar{\rho}^{(2)}_{q}$ is the solution of $$\label{e:equationkappaqbar} \begin{split} \partial_t\bar{\rho}^{(2)}_{q}+\bar{u}_{q}\cdot\nabla\bar{\rho}^{(2)}_{q}&=\mathop{\rm div}\nolimits\tilde{\kappa}_q \nabla\bar{\rho}^{(2)}_{q}\,,\\ \bar{\rho}_{q}^{(2)}|_{t=0}&=\rho_{in}\,.
\end{split}$$ *Step 4: Time averaging ($\bar\rho_{q}^{(2)} \leadsto \bar\rho_{q}^{(3)}$)* The advection-diffusion equation [\[e:equationkappaqbar\]](#e:equationkappaqbar){reference-type="eqref" reference="e:equationkappaqbar"} has two different characteristic time-scales: the advective time-scale $\|\nabla \bar{u}_q\|_{L^\infty}^{-1}$ and the time-scale $\tau_q$ given by the time-oscillatory behaviour of the ellipticity coefficient $\tilde{\kappa}_q(x,t)$ given in [\[e:defkappabarq\]](#e:defkappabarq){reference-type="eqref" reference="e:defkappabarq"}, with the relationship given by [\[e:elltau\]](#e:elltau){reference-type="eqref" reference="e:elltau"}: $\|\nabla \bar{u}_q\|_{L^\infty}\tau_q\leq M \lambda_q^{-\gamma_T}$. Let us now invoke Proposition [Proposition 18](#p:t_avg){reference-type="ref" reference="p:t_avg"} with the following choices (the left-hand sides are the parameters of Proposition [Proposition 18](#p:t_avg){reference-type="ref" reference="p:t_avg"}, to which we assign values from the current construction) $$\eta := c_1^{-1}\bar{\eta}^2, \quad \kappa_0 := \kappa_{q+1}, \quad \kappa_1 := \kappa_{q}, \quad \mu := \delta_q^{\sfrac{1}{2}} \lambda_q, \quad \tau := \tau_q.$$ We remark that Proposition [Proposition 18](#p:t_avg){reference-type="ref" reference="p:t_avg"} requires assumptions (A1)-(A4). Control of the cutoff functions in [\[e:propertyofetai\]](#e:propertyofetai){reference-type="eqref" reference="e:propertyofetai"} yields (A4), whereas (A2)-(A3) follow from assumptions [\[e:Dissipation\]](#e:Dissipation){reference-type="eqref" reference="e:Dissipation"}, [\[e:Dissipation2\]](#e:Dissipation2){reference-type="eqref" reference="e:Dissipation2"}, [\[e:gluingu\]](#e:gluingu){reference-type="eqref" reference="e:gluingu"} as well as [\[e:elltau\]](#e:elltau){reference-type="eqref" reference="e:elltau"}. (A1) gives the second lower bound for $\tilde N$ in [\[e:global_N\]](#e:global_N){reference-type="eqref" reference="e:global_N"}.
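The mechanism behind the time averaging can be illustrated on a scalar caricature of [\[e:equationkappaqbar\]](#e:equationkappaqbar){reference-type="eqref" reference="e:equationkappaqbar"}: for $\dot y=-\kappa(t)y$ with diffusivity oscillating at period $\tau$ around its mean, the terminal value differs from that of the averaged problem by $O(\tau)$. The following sketch is purely illustrative (all values are ad hoc; this is not the actual averaging argument of Proposition [Proposition 18](#p:t_avg){reference-type="ref" reference="p:t_avg"}):

```python
import math

KBAR, T = 1.0, 1.0  # illustrative mean diffusivity and time horizon

def terminal_value(tau: float) -> float:
    """y(T) for y' = -kappa(t)*y, kappa(t) = KBAR*(1 + cos(2*pi*t/tau)), y(0) = 1.
    Since y(T) = exp(-int_0^T kappa dt), we evaluate the integral in closed form."""
    integral = KBAR * (T + tau / (2 * math.pi) * math.sin(2 * math.pi * T / tau))
    return math.exp(-integral)

def averaging_error(tau: float) -> float:
    """Gap to the time-averaged problem y' = -KBAR*y; it is O(tau) as tau -> 0."""
    return abs(terminal_value(tau) - math.exp(-KBAR * T))
```

Here the gap is bounded by $e^{-1}\bigl(e^{\tau/2\pi}-1\bigr)$, so it vanishes linearly in the oscillation period.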
In this setting one can average over the faster time-scale $\tau_q$ and obtain the estimate $$\label{e:step4-1} \left|\|\bar{\rho}^{(3)}_{q}(T)\|^2_{L^2}-\|\bar{\rho}^{(2)}_{q}(T)\|_{L^2}^2\right|\leq C\lambda_q^{-\gamma_T}(D^{(2)}_q+D^{(3)}_q),$$ where $$D^{(i)}_q=\tfrac12\left|\|\rho_{in}\|^2_{L^2}-\|\bar{\rho}^{(i)}_{q}(T)\|_{L^2}^2\right|$$ and $\bar{\rho}^{(3)}_q$ is the solution of $$\label{e:equationkappaq} \begin{split} \partial_t\bar{\rho}^{(3)}_{q}+\bar{u}_{q}\cdot\nabla\bar{\rho}^{(3)}_{q}&=(\kappa_{q+1}+\kappa_{q})\Delta\bar{\rho}^{(3)}_{q}\,,\\ \bar{\rho}_{q}^{(3)}|_{t=0}&=\rho_{in}\,. \end{split}$$ Choosing $q_0$ sufficiently large, we may ensure that $C\lambda_q^{-\gamma_T}<\tfrac{1}{4}\lambda_q^{-\gamma}$ in [\[e:step4-1\]](#e:step4-1){reference-type="eqref" reference="e:step4-1"}, from which we can then conclude $$\left|\frac{D_q^{(2)}}{D_q^{(3)}}-1\right|\leq \frac{1}{4}\lambda_q^{-\gamma}\left(1+\frac{D_q^{(2)}}{D_q^{(3)}}\right).$$ Therefore we deduce $$\label{e:step4} \left|\frac{D_q^{(2)}}{D_q^{(3)}}-1\right|\leq \frac{1}{2}\lambda_q^{-\gamma}.$$ *Step 5: Gluing estimate ($\bar\rho_{q}^{(3)} \leadsto \rho_{q}$)* Finally, we compare [\[e:equationkappaq\]](#e:equationkappaq){reference-type="eqref" reference="e:equationkappaq"} to [\[e:equationq\]](#e:equationq){reference-type="eqref" reference="e:equationq"} by using the estimate [\[e:gluingz\]](#e:gluingz){reference-type="eqref" reference="e:gluingz"}. Indeed, we can write [\[e:equationkappaq\]](#e:equationkappaq){reference-type="eqref" reference="e:equationkappaq"} as $$\begin{split} \partial_t\bar{\rho}^{(3)}_{q}+u_{q}\cdot\nabla\bar{\rho}^{(3)}_{q}&=\mathop{\rm div}\nolimits\left(\kappa_q \nabla\bar{\rho}^{(3)}_{q}+\kappa_{q+1}\nabla\bar{\rho}^{(3)}_{q}+(z_q-\bar{z}_q)\times\nabla\bar{\rho}^{(3)}_q\right)\,,\\ \bar{\rho}_{q}^{(3)}|_{t=0}&=\rho_{in}\,.
\end{split}$$ On the other hand [\[e:gluingz\]](#e:gluingz){reference-type="eqref" reference="e:gluingz"} and our choice of parameters imply $$\label{e:gluingestimate} \|z_q-\bar{z}_q\|_{L^\infty}\lesssim \tau_q\delta_{q+1}\lambda_q^{-\gamma_R+\alpha(1+\gamma_L)}\leq \kappa_q\lambda_q^{-2\gamma}\,,\quad \kappa_{q+1}=\lambda_q^{-(b-1)\theta}\kappa_q\leq \lambda_q^{-2\gamma}\kappa_q.$$ A final application of Proposition [Proposition 10](#p:stability_in_ellipticity){reference-type="ref" reference="p:stability_in_ellipticity"}, assuming $q_0$ is sufficiently large to absorb the implicit constant, leads to $$\left|\|\bar{\rho}^{(3)}_{q}(T)\|^2_{L^2}-\|\rho_{q}(T)\|_{L^2}^2\right|\lesssim \lambda_q^{-2\gamma}\left|\|\rho_{in}\|^2_{L^2}-\|\rho_{q}(T)\|_{L^2}^2\right|,$$ where $\rho_q$ is the solution of [\[e:equationq\]](#e:equationq){reference-type="eqref" reference="e:equationq"}, equivalently $$\label{e:step5} \left|D_q^{(3)}-D_{q}\right|\lesssim \lambda_q^{-2\gamma}D_{q}.$$ *Final estimate* Overall, we see that after the five steps above, we achieve the estimate $$\frac{D_{q+1}}{D_q}=\frac{D_{q+1}}{\tilde D_{q+1}}\frac{\tilde D_{q+1}}{D_q^{(1)}}\frac{D_{q}^{(1)}}{D_q^{(2)}}\frac{D_{q}^{(2)}}{D_q^{(3)}}\frac{D_{q}^{(3)}}{D_q}\geq (1-C\lambda_q^{-\gamma})^5\geq 1-\tfrac{1}{2}\lambda_q^{-\gamma/2},$$ where we have again assumed $q_0$ is sufficiently large to absorb the constants and the exponent $5$ at the expense of decreasing the exponent from $\gamma$ to $\gamma/2$. This concludes the proof of Proposition [Proposition 5](#p:main){reference-type="ref" reference="p:main"}, with $\gamma$ replaced by $\gamma/2$. ◻ ## Construction of the vectorfield $u$ - h-principle {#s:hprinciple} In Section [2.1](#s:vectorfield){reference-type="ref" reference="s:vectorfield"} we detailed the main iteration scheme for producing Hölder-continuous weak solutions of the Euler equations.
What remains is to produce an initial vectorfield and associated Reynolds tensor, which satisfies the inductive assumptions [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}-[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} for *some* $q\in\ensuremath{\mathbb{N}}$. To this end we recall that a smooth strict subsolution to the Euler equations is a smooth triple $(\bar{u},\bar{p},\bar{R})$ on $\ensuremath{\mathbb{T}}^3\times[0,T]$ solving the Euler-Reynolds system [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} with the normalizations $\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\bar{u}\,dx=0$, $\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\bar{p}\,dx=0$, such that $\bar{R}(x,t)>0$ is (uniformly) positive definite on $\ensuremath{\mathbb{T}}^3\times[0,T]$. The energy associated with a subsolution is (c.f. 
[@DaSz2017; @DaRuSz2023; @BDSV]) $$\label{e:hprinciple_energy} e(t)=\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}|\bar{u}|^2+\ensuremath{\mathrm{tr\,}}\bar{R}\,dx.$$ We have the following statement, which is a variant of [@DaSz2017 Proposition 3.1], see also [@BDSV Theorem 7.1] and [@DaRuSz2023 Proposition 4.1]. **Proposition 6**. *Let $(\bar{u},\bar{p},\bar{R})$ be a smooth strict subsolution with energy $e$. Let the parameters $M,\bar{e}$, $\beta,b,\gamma_T,\gamma_R,\gamma_E,\alpha_0>0$ and $a_0\gg 1$ be as in Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}, and $(\delta_q,\lambda_q)_{q\in\ensuremath{\mathbb{N}}}$ be as in [\[e:lambdadelta\]](#e:lambdadelta){reference-type="eqref" reference="e:lambdadelta"}.* *Then, there exists $q_0\in\ensuremath{\mathbb{N}}$ such that, for any $q\geq q_0$ there exists a solution $(u_q,\mathring{R}_q)$ of [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} with [\[e:ERconstraints\]](#e:ERconstraints){reference-type="eqref" reference="e:ERconstraints"} such that [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}-[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} hold.
Moreover, $$\begin{aligned} \|\bar{u}-u_q\|_{C^0}&\leq \bar{C}\|\bar{R}\|_{C^0}^{\sfrac12}\,,\label{e:hprinciple-C0}\\ \|\bar{z}-z_q\|_{C^\alpha}&\leq \bar{C}\|\bar{R}\|_{C^\alpha}^{\sfrac12}\lambda_q^{-1+\alpha}\label{e:hpriciple-C-1}\,, \end{aligned}$$ where the constant $\bar{C}$ only depends on $\bar{u}$ and on $\delta:=\inf_{(x,t)}\frac{\min\{\bar{R}\zeta\cdot\zeta:|\zeta|=1\}}{\ensuremath{\mathrm{tr\,}}\bar{R}}$.* *Proof.* We construct, following [@DaSz2017; @DaRuSz2023], $u_q:=\bar{u}+w$, where $w$ is from the perturbation step of Section [6](#s:Onsager){reference-type="ref" reference="s:Onsager"}, but without the need for time-cutoffs. More precisely, we define $$\tilde R=\bar{R}-\tfrac13\delta_{q+1}\ensuremath{\mathrm{Id}}.$$ For $q$ sufficiently large, we ensure that $$\frac{1}{\ensuremath{\mathrm{tr\,}}\tilde R}\tilde R\geq \tfrac{1}{2}\delta\ensuremath{\mathrm{Id}}.$$ Further, define $\Phi=\Phi(x,t)$ to be the inverse flow associated to the vectorfield $\bar{u}$, i.e. such that $(\partial_t+\bar{u}\cdot\nabla)\Phi=0$ and $\Phi(x,0)=x$. Since $\bar{u}\in C^\infty(\ensuremath{\mathbb{T}}^3\times[0,T])$ with $\mathop{\rm div}\nolimits\bar{u}=0$, for every $t$ the map $\Phi(\cdot,t)$ is a volume-preserving diffeomorphism, and there exists a constant $C$ such that $$C^{-1}\leq |\nabla\Phi|,|\nabla\Phi^{-1}|\leq C.$$ Now we apply Lemma [Lemma 4](#l:Mikado){reference-type="ref" reference="l:Mikado"} with $$\mathcal{N}:=\left\{R\in \mathcal{S}^{3\times 3}_+:\,R=\tfrac{1}{\ensuremath{\mathrm{tr\,}}S}ASA^T\textrm{ where $C^{-1}\leq |A|,|A^{-1}|\leq C$ and $S\geq \tfrac12\delta\ensuremath{\mathrm{tr\,}}S\ensuremath{\mathrm{Id}}$}\right\}\,,$$ to obtain $\Lambda\subset \ensuremath{\mathcal{Q}}^3$ and $a_{\vec{k}}$ satisfying [\[e:MikadoProperty\]](#e:MikadoProperty){reference-type="eqref" reference="e:MikadoProperty"}. We set (c.f.
[\[e:defRqi\]](#e:defRqi){reference-type="eqref" reference="e:defRqi"}) $$\begin{aligned} \tilde S=\frac{1}{\ensuremath{\mathrm{tr\,}}\tilde R}\nabla\Phi \tilde R\nabla\Phi^T, \end{aligned}$$ and observe that $\tilde S(x,t)\in \mathcal{N}$. The new perturbation $w$ is then defined, in analogy with [\[e:defwq+1\]](#e:defwq+1){reference-type="eqref" reference="e:defwq+1"}, as $$\label{e:hprinciple_defw} w=\frac{1}{\lambda}\mathop{\rm curl}\nolimits\left[\sum_{\vec{k}\in\Lambda}(\ensuremath{\mathrm{tr\,}}\tilde R)^{\sfrac12}a_{\vec{k}}(\tilde S)\nabla\Phi^TU_{\vec{k}}(\lambda\Phi)\right]\,.$$ The associated Reynolds stress is (c.f. [\[e:decompR\]](#e:decompR){reference-type="eqref" reference="e:decompR"}) $$\mathring{R}_{q}=\mathcal{R}\left[\partial_tw+\bar{u}\cdot\nabla w+w\cdot\nabla\bar{u}+\mathop{\rm div}\nolimits(w\otimes w-\tilde R)\right],$$ where $\mathcal R$ is the inverse divergence operator on symmetric $2$-tensors, introduced in [@DSz13]. Since $\bar{R}-\tilde R$ is a constant multiple of the identity, it easily follows, as in [@DaSz2017; @BDSV], that $u_q,\mathring{R}_q$ is a solution of [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"}. 
Moreover, the estimates in [@DaSz2017; @BDSV] (see also Section [6.3](#s:perturbation){reference-type="ref" reference="s:perturbation"}) imply $$\begin{aligned} \|\mathring{R}_q\|_{C^0}&\leq C\lambda^{-1+\alpha}\,,\\ \|w\|_{C^n}&\leq C\|\bar{R}\|_{C^0}^{\sfrac12}\lambda^n\quad\textrm{ for all }n\leq \bar{N}\,,\\ \left|\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}|u_q|^2-|\bar u|^2-\ensuremath{\mathrm{tr\,}}\tilde R\,dx\right|&\leq C\lambda^{-1+\alpha}\,. \end{aligned}$$ Here the constant $C$ depends on $(\bar{u},\bar{R})$ and on $\mathcal{N}$. It remains to choose $\lambda$ and $q$. We fix an exponent $0<\gamma_o$ such that $$\beta<\gamma_o\textrm{ and }\alpha+\gamma_o<1-2b\beta-\max\{\gamma_E,\gamma_R\}$$ and define $\lambda=\lambda_q^{1-\gamma_o}$ ($q$ still to be fixed). Observe that [\[e:b_beta_rel\]](#e:b_beta_rel){reference-type="eqref" reference="e:b_beta_rel"}-[\[e:Onsager_Conditions\]](#e:Onsager_Conditions){reference-type="eqref" reference="e:Onsager_Conditions"} guarantee the existence of such $\gamma_o$. 
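The existence claim for $\gamma_o$ can be checked numerically for concrete parameter values. The sketch below assumes the choice $\gamma_E=\gamma_T=\gamma_R$ made later in Step 1 of the proof of the main theorem, and the sample values of $\beta$, $b$, $\alpha$ are illustrative only:

```python
def gamma_o_window(beta: float, b: float, alpha: float):
    """Admissible open interval (lo, hi) for gamma_o: beta < gamma_o and
    alpha + gamma_o < 1 - 2*b*beta - max(gamma_E, gamma_R), assuming
    gamma_E = gamma_R = (b-1)/(b*(b+1)) * (1 - (2*b+1)*beta) as in Step 1
    of the proof of the main theorem."""
    gamma = (b - 1) / (b * (b + 1)) * (1 - (2 * b + 1) * beta)
    return beta, 1 - 2 * b * beta - gamma - alpha
```

For the sample values below the window is nonempty, so a valid $\gamma_o$ exists; the general statement is of course the inequality chain in the text, not this spot check.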
Now, validity of [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}-[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} follows from $$C\lambda^{-1+\alpha}\leq \delta_{q+1}\lambda_q^{-\max\{\gamma_E,\gamma_R\}},\textrm{ as well as }C\lambda\leq M\delta_q^{\sfrac12}\lambda_q,\,\lambda\leq\lambda_q.$$ But by our choice of exponent $\gamma_o$ these inequalities are satisfied for $q$ sufficiently large. This concludes the proof. ◻ **Remark 7**. *An obvious way to apply Proposition [Proposition 6](#p:initialu){reference-type="ref" reference="p:initialu"} is to take $\bar{u}=0$, $\bar{p}=0$ and $\bar{R}=\tfrac{1}{3}e(t)\ensuremath{\mathrm{Id}}$. More generally, one can take any smooth pair $(\bar{u},\bar{p})$ which is a classical solution of the Euler equations [\[e:Euler\]](#e:Euler){reference-type="eqref" reference="e:Euler"}.* ## Proof of Theorem [Theorem 1](#t:main){reference-type="ref" reference="t:main"} {#proof-of-theorem-tmain} *Proof.* *Step 1.
Choice of parameters.* Given $0<\beta<\tfrac13$, we choose $1<b<\min\{\sqrt{\tfrac32},\tfrac{1-\beta}{2\beta}\}$, and then fix $\gamma_T,\gamma_R,\gamma_E>0$ as $$\label{e:choiceofgammaTgammaR} \gamma_E=\gamma_T=\gamma_R=\frac{b-1}{b(b+1)}(1-(2b+1)\beta)\,.$$ One easily verifies that then $$\begin{aligned} \gamma_T+b\gamma_R&<(b-1)(1-(2b+1)\beta)\,,\label{e:conditiontransport}\\ \gamma_T&<\frac{b-1}{b+1}(1-(2b+1)\beta)\,,\label{e:conditionhomogenization}\\ \gamma_R+\gamma_T&>\frac{b-1}{b+1}(1-(2b+1)\beta)\,.\label{e:conditiongluing}\end{aligned}$$ Indeed, we have $$\gamma_T+b\gamma_R=\frac{b-1}{b}(1-(2b+1)\beta),\quad \gamma_R+\gamma_T=\frac{2}{b}\frac{b-1}{b+1}(1-(2b+1)\beta).$$ Note that [\[e:conditiontransport\]](#e:conditiontransport){reference-type="eqref" reference="e:conditiontransport"} is the requirement [\[e:Onsager_Conditions\]](#e:Onsager_Conditions){reference-type="eqref" reference="e:Onsager_Conditions"} in Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}, whereas [\[e:conditionhomogenization\]](#e:conditionhomogenization){reference-type="eqref" reference="e:conditionhomogenization"} and [\[e:conditiongluing\]](#e:conditiongluing){reference-type="eqref" reference="e:conditiongluing"} are the left and right inequalities in [\[e:Dissipation\]](#e:Dissipation){reference-type="eqref" reference="e:Dissipation"} in Proposition [Proposition 5](#p:main){reference-type="ref" reference="p:main"}. Therefore, with this choice of parameters both Propositions are applicable. *Step 2. 
Construction of the vectorfield $u$.* Having fixed the parameters $\beta,b,\gamma_T,\gamma_R,\gamma_E$ in Step 1, as well as $\alpha_0,\alpha_1,a_0$ in Propositions [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"} and [Proposition 5](#p:main){reference-type="ref" reference="p:main"}, we apply Proposition [Proposition 6](#p:initialu){reference-type="ref" reference="p:initialu"} to obtain $q_0\in\ensuremath{\mathbb{N}}$ and $(u_{q_0},\mathring{R}_{q_0})$ satisfying the estimates [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}-[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"}. Then, Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"} applies inductively for any $q\geq q_0$ and yields a sequence $\{u_q\}_{q\geq q_0}$. Arguing as in [@BDSV], we deduce that $u_q\to u$ in $C([0,T];C^{\beta'}(\ensuremath{\mathbb{T}}^3))$ for any $\beta'<\beta$, and moreover, that $u\in C^{\beta''}([0,T]\times\ensuremath{\mathbb{T}}^3)$ for any $\beta''<\beta'$. Since $\beta''<\beta'<\beta<1/3$ were arbitrary in this argument, upon renaming $\beta''$ as $\beta$ we deduce the existence of $u\in C^\beta([0,T]\times\ensuremath{\mathbb{T}}^3)$ as in the statement of the Theorem. *Step 3. Macroscopic diffusion for mollified initial datum.* Next, we turn to the enhanced dissipation properties of the velocity field $u$. First, we fix $\gamma>0$ and $\tilde N$ as in Proposition [Proposition 5](#p:main){reference-type="ref" reference="p:main"}.
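The exponent algebra of Step 1 reduces, upon substituting [\[e:choiceofgammaTgammaR\]](#e:choiceofgammaTgammaR){reference-type="eqref" reference="e:choiceofgammaTgammaR"}, to the inequalities $1/b<1$ and $b<2$; it can also be confirmed numerically, as in the following sketch (the pairs $(\beta,b)$ are sample admissible values only):

```python
def step1_exponents(beta: float, b: float) -> float:
    """Common value gamma_T = gamma_R = gamma_E from e:choiceofgammaTgammaR."""
    return (b - 1) / (b * (b + 1)) * (1 - (2 * b + 1) * beta)

def step1_checks(beta: float, b: float) -> bool:
    """Verify e:conditiontransport, e:conditionhomogenization, e:conditiongluing."""
    g = step1_exponents(beta, b)
    X = 1 - (2 * b + 1) * beta          # positive whenever b < (1-beta)/(2*beta)
    return (g + b * g < (b - 1) * X
            and g < (b - 1) / (b + 1) * X
            and g + g > (b - 1) / (b + 1) * X)
```

The identity $\gamma_T+b\gamma_R=\frac{b-1}{b}(1-(2b+1)\beta)$ stated in Step 1 is exact by construction and holds up to floating-point error.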
Let $\rho_{in}\in H^1(\ensuremath{\mathbb{T}}^3)$ be a nonzero initial datum with $\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\rho_{in}\,dx=0$. We define the length-scale $$\ell:=\frac{\|\rho_{in}\|_{L^2}} {\|\nabla \rho_{in}\|_{L^2}}.$$ Define $$\tilde\rho_{in}:=\rho_{in}*\psi_{r\ell}\,,$$ where $\psi$ is a standard symmetric mollifier (or one can use e.g. $\psi$ from Section [6.1](#s:mollify){reference-type="ref" reference="s:mollify"}) and $0<r<1$ is still to be fixed. Then (see e.g. Lemma [Lemma 21](#l:mollify){reference-type="ref" reference="l:mollify"}) $$\begin{aligned} \|\tilde\rho_{in}-\rho_{in}\|_{L^2}&\leq Cr\ell\|\nabla\rho_{in}\|_{L^2}\leq Cr\|\rho_{in}\|_{L^2}\label{e:comparerhotilde}\\ \|\tilde\rho_{in}\|_{H^{n+1}}&\leq C(r\ell)^{-n}\|\nabla\rho_{in}\|_{L^2}\leq C(r\ell)^{-n}\ell^{-1}\|\rho_{in}\|_{L^2}\quad\textrm{ for all }0\leq n\leq \tilde N.\label{e:Hnrhotilde}\end{aligned}$$ The small factor $r$ will be chosen below; for now we merely assume $r<1$ is sufficiently small so that the first inequality yields $$\label{e:comparerhotilderho} \|\tilde\rho_{in}-\rho_{in}\|_{L^2}\leq \tfrac{1}{2}\|\rho_{in}\|_{L^2}\,.$$ Next, for any $q\geq q_0$ we consider the solution $\tilde\rho_q$ of the equation $$\label{e:main_proof:e5.5} \begin{split} \partial_t \tilde\rho_{q} + u_{q} \cdot \nabla \tilde\rho_{q} =& \kappa_{q} \Delta \tilde\rho_{q}\,, \\ \tilde\rho_{q}|_{t=0} =& \tilde\rho_{in}\,.
\end{split}$$ By the Poincaré inequality, there exists $c_0>0$ such that $$\begin{aligned} \label{e:main_proof:e6} \frac{d}{dt} \| \tilde\rho_{q}(t) \|^2_{L^2} \leq -2 \kappa_{q} \| \nabla \tilde\rho_{q}(t) \|^2_{L^2} \leq - c_0 \kappa_q\| \tilde\rho_{q}(t) \|^2_{L^2},\end{aligned}$$ and therefore $$\label{e:mainproof_diffusionbound1} \begin{split} \kappa_{q} \int_0^T \| \nabla \tilde\rho_{q} \|^2_{L^2} dt =& \frac{1}{2} \left( \|\tilde\rho_{in}\|_{L^2}^2 - \|\tilde\rho_{q}(T) \|_{L^2}^2 \right) \\ \geq& \frac{1}{2} \left( 1-\exp(-c_0\kappa_{q}T) \right) \|\tilde\rho_{in}\|_{L^2}^2\\ \geq& 16c\kappa_{q}T \|\tilde\rho_{in}\|_{L^2}^2\geq 4c\kappa_qT\|\rho_{in}\|^2_{L^2} \end{split}$$ for some universal constant $c$, provided $q$ is sufficiently large so that $c_0\kappa_qT<1/2$. Here we also used [\[e:comparerhotilderho\]](#e:comparerhotilderho){reference-type="eqref" reference="e:comparerhotilderho"}. *Step 4. The inertial range - choice of $q_I$ and $r$.* For some $q_I\geq q_0$ to be fixed, set $$\label{e:choiceofr} r:=\lambda_{q_I}^{-\sfrac{b\theta}{2}}.$$ We claim that for sufficiently large $q_I$ the following inequalities hold (the first for every $q\geq q_I$): $$\begin{aligned} C\ell^{-1}(r\ell)^{-n}&\leq \lambda_q^{n+1}(cT\kappa_{q+1})^{\sfrac12}\textrm{ for all }n\geq 0\,,\label{e:condition-q1}\\ Cr^2&\leq \tfrac14 cT\kappa_{q_I}\,.\label{e:condition-q2}\end{aligned}$$ Indeed, [\[e:condition-q1\]](#e:condition-q1){reference-type="eqref" reference="e:condition-q1"} is satisfied if $$\label{e:conditionq1theta} (r\ell)^{-1}\leq\lambda_q\textrm{ and }C\ell^{-1}\leq (cT)^{\sfrac12}\lambda_q^{1-\sfrac{b\theta}{2}}\,,$$ where we used [\[e:defkappaq\]](#e:defkappaq){reference-type="eqref" reference="e:defkappaq"}.
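The mollification estimate [\[e:comparerhotilde\]](#e:comparerhotilde){reference-type="eqref" reference="e:comparerhotilde"} used above reflects, mode by mode, the multiplier bound $|1-\hat\psi(\delta\xi)|\leq \delta|\xi|$. A quick numerical check of this bound, with a Gaussian profile as an illustrative stand-in for the mollifier $\psi$ of the construction:

```python
import math

def multiplier_gap(delta: float, xi: float) -> float:
    """|1 - psi_hat(delta*xi)| for the Gaussian profile psi_hat(s) = exp(-s^2/2):
    the damping that mollification at scale delta applies to the Fourier mode xi."""
    # Since 1 - exp(-s^2/2) <= s for all s >= 0, the gap is at most delta*|xi|,
    # which is the mode-by-mode content of ||rho*psi_delta - rho|| <= delta*||grad rho||.
    return abs(1.0 - math.exp(-(delta * xi) ** 2 / 2))
```

The inequality $1-e^{-s^2/2}\leq s$ holds because the left-hand side is at most $\min\{s^2/2,1\}$.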
Comparing powers of $\lambda_q$ we then observe that [\[e:condition-q1\]](#e:condition-q1){reference-type="eqref" reference="e:condition-q1"} is valid provided $\lambda_{q_I}^{b\theta-\theta}$ is sufficiently large, whereas [\[e:condition-q2\]](#e:condition-q2){reference-type="eqref" reference="e:condition-q2"} is valid provided $\lambda_{q_I}^{1-b\theta/2}$ is sufficiently large. Since by choice $b<\sqrt{\tfrac32}$, from [\[e:defkappaq\]](#e:defkappaq){reference-type="eqref" reference="e:defkappaq"} it follows $b\theta/2<1$. We conclude that with sufficiently large $q_I$ and the choice [\[e:choiceofr\]](#e:choiceofr){reference-type="eqref" reference="e:choiceofr"}, the inequalities [\[e:condition-q1\]](#e:condition-q1){reference-type="eqref" reference="e:condition-q1"}-[\[e:condition-q2\]](#e:condition-q2){reference-type="eqref" reference="e:condition-q2"} hold. *Step 5. Enhanced dissipation in the inertial range.* Combining [\[e:condition-q1\]](#e:condition-q1){reference-type="eqref" reference="e:condition-q1"} with [\[e:mainproof_diffusionbound1\]](#e:mainproof_diffusionbound1){reference-type="eqref" reference="e:mainproof_diffusionbound1"} and [\[e:Hnrhotilde\]](#e:Hnrhotilde){reference-type="eqref" reference="e:Hnrhotilde"} we observe that for any $q\geq q_I$ $$\|\tilde\rho_{in}\|_{H^n}\leq \lambda_q^n\left(\kappa_{q+1} \int_0^T \| \nabla \tilde\rho_{q+1} \|^2_{L^2} dt\right)^{\sfrac12}\textrm{ for any }1\leq n\leq\tilde N\,,$$ i.e. condition [\[e:boundoninitialdatum\]](#e:boundoninitialdatum){reference-type="eqref" reference="e:boundoninitialdatum"} in Proposition [Proposition 5](#p:main){reference-type="ref" reference="p:main"} holds. 
Therefore, for any $q \geq q_{I}$, we may apply Proposition [Proposition 5](#p:main){reference-type="ref" reference="p:main"} with initial data $\tilde\rho_{in}$, to obtain $$\begin{aligned} (1-\tfrac12\lambda_q^{-\gamma}) \kappa_{q} \int_0^T \| \nabla \tilde\rho_{q} \|^2_{L^2} dt \leq \kappa_{q+1} \int_0^T \| \nabla \tilde\rho_{q+1} \|^2_{L^2} dt.\end{aligned}$$ Next, observe that there exist $\tilde{q}\in\ensuremath{\mathbb{N}}$ and $\tilde c>0$ (depending only on $\gamma>0$ and the choice of $a,b$ for defining the sequence $(\lambda_q)_q$), such that $\prod_{q'\geq \tilde{q}}(1-\tfrac12\lambda_{q'}^{-\gamma})\geq e^{-\tilde{c}\lambda_{\tilde{q}}^{-\gamma}}\geq \tfrac12$. Consequently, assuming $q_{I}\geq \tilde{q}$, we have $$\begin{aligned} \label{e:mainproof:e10} \kappa_{q} \int_0^T \| \nabla \tilde\rho_{q} \|^2_{L^2} dt \geq \kappa_{q_I} \int_0^T \| \nabla \tilde\rho_{q_I} \|^2_{L^2} dt \prod_{q'=q_I}^{q-1} (1-\tfrac12\lambda_{q'}^{-\gamma}) \geq \tfrac12\kappa_{q_I} \int_0^T \| \nabla \tilde\rho_{q_I} \|^2_{L^2} dt .\end{aligned}$$ We deduce, for any $q\geq q_I$ $$\label{e:mainproof:e11} \kappa_{q} \int_0^T \| \nabla \tilde\rho_{q} \|^2_{L^2} dt \geq 2c\kappa_{q_I}T \|\rho_{in}\|_{L^2}^2\,.$$ *Step 6. Dissipation in the molecular range.* Now, let us fix $\kappa=\kappa_{q_M}$ for some $q_M\geq q_I$, and compare $\tilde\rho_{q_M}$ to the solution $\tilde\rho$ of $$\label{e:mainproof-uqu} \begin{split} \partial_t \tilde\rho + u \cdot \nabla \tilde \rho =& \kappa\Delta \tilde \rho, \\ \tilde \rho|_{t=0} =& \tilde\rho_{in}. \end{split}$$ To this end we consider the vector potentials $z_{q_M},z$ of $u_{q_M},u$. We have, for any $q$, $$\|z_{q+1}-z_q\|_{C^0}\leq \|z_{q+1}-\bar{z}_q\|_{C^0}+\|\bar{z}_{q}-z_q\|_{C^0},$$ where $\bar{z}_q$ is the vector potential of $\bar{u}_q$ obtained in Proposition [Proposition 3](#p:gluing){reference-type="ref" reference="p:gluing"}.
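The lower bound on the telescoping product used above relies only on the double-exponential growth $\lambda_q=a^{(b^q)}$, which makes $\sum_q\lambda_q^{-\gamma}$ summable with rapidly decaying tails. An illustrative computation (the values of $a$, $b$, $\gamma$ below are chosen far smaller than those of the actual construction, purely so that the decay is visible):

```python
import math

def tail_product(q_start: int, q_end: int, a: float = 2.0, b: float = 1.2,
                 gamma: float = 0.5) -> float:
    """prod_{q=q_start}^{q_end} (1 - 0.5*lambda_q^(-gamma)) with lambda_q = a**(b**q);
    lambda_q^(-gamma) is evaluated in log form so that huge lambda_q cannot overflow."""
    prod = 1.0
    for q in range(q_start, q_end + 1):
        lam_neg_gamma = math.exp(-gamma * (b ** q) * math.log(a))
        prod *= 1.0 - 0.5 * lam_neg_gamma
    return prod
```

Starting the product at a larger $\tilde q$ only increases it, which is the role of the assumption $q_I\geq\tilde q$.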
Using the same arguments as in Section [6](#s:Onsager){reference-type="ref" reference="s:Onsager"} (see for instance Proposition [Proposition 35](#p:newReynolds){reference-type="ref" reference="p:newReynolds"}) it is easy to verify $$\|z_{q+1}-\bar{z}_q\|_{C^0}\lesssim \|\mathcal{R}w_{q+1}\|_{C^\alpha}\lesssim \delta_{q+1}^{\sfrac12}\lambda_{q+1}^{-1+\alpha}=\lambda_q^{-b(1+\beta-\alpha)}.$$ Since $b(1+\beta)-\theta=b\frac{(b-1)}{b+1}(1+\beta)>2\gamma$, we deduce (assuming $\alpha$ sufficiently small) $$\|z_{q+1}-z_q\|_{C^0}\lesssim \kappa_q\lambda_q^{-2\gamma},$$ where we used [\[e:gluingestimate\]](#e:gluingestimate){reference-type="eqref" reference="e:gluingestimate"} for estimating $\bar{z}_{q}-z_q$. In particular we obtain $$\|z-z_{q_M}\|_{C^0}\leq \sum_{q=q_M}^\infty\|z_{q+1}-z_{q}\|_{C^0}\lesssim \kappa_{q_M}\lambda_{q_M}^{-2\gamma}=\kappa\lambda_{q_M}^{-2\gamma}.$$ Writing the equation [\[e:mainproof-uqu\]](#e:mainproof-uqu){reference-type="eqref" reference="e:mainproof-uqu"} as $$\partial_t \tilde\rho + u_{q_M} \cdot \nabla \tilde \rho = \mathop{\rm div}\nolimits(\kappa\nabla\tilde\rho+(z-z_{q_M})\times\nabla\tilde\rho),$$ we may then apply Proposition [Proposition 10](#p:stability_in_ellipticity){reference-type="ref" reference="p:stability_in_ellipticity"} to deduce $$\begin{aligned} \label{e:main_proof:e10} \kappa \int_0^T \| \nabla \tilde \rho \|^2_{L^2} dt \geq c\kappa_{q_I}T \|\rho_{in}\|_{L^2}^2.\end{aligned}$$ *Step 7. Enhanced dissipation for original initial datum.* Finally, we compare $\tilde\rho$ to the solution $\rho$ of $$\label{e:mainproof:finaleq} \begin{split} \partial_t \rho + u \cdot \nabla \rho =& \kappa \Delta \rho, \\ \rho|_{t=0} =& \rho_{in}.
\end{split}$$ The basic energy estimate together with [\[e:comparerhotilde\]](#e:comparerhotilde){reference-type="eqref" reference="e:comparerhotilde"} gives $$\kappa\int_0^T\|\nabla\rho-\nabla\tilde\rho\|_{L^2}^2\,dt\leq \frac{1}{2}\|\rho_{in}-\tilde\rho_{in}\|_{L^2}^2\leq Cr^2\|\rho_{in}\|_{L^2}^2\,.$$ Consequently $$\begin{aligned} \kappa^{\sfrac12}\left(\int_0^T\|\nabla\rho\|_{L^2}^2\,dt\right)^{\sfrac12}&\geq \kappa^{\sfrac12}\left(\int_0^T\|\nabla\tilde\rho\|_{L^2}^2\,dt\right)^{\sfrac12}-C^{\sfrac12}r\|\rho_{in}\|_{L^2}\\ &\geq \left[(c\kappa_{q_I}T)^{\sfrac12}-C^{\sfrac12}r\right]\|\rho_{in}\|_{L^2}\\ &\geq \tfrac12(c\kappa_{q_I}T)^{\sfrac12}\|\rho_{in}\|_{L^2},\end{aligned}$$ where we used [\[e:condition-q2\]](#e:condition-q2){reference-type="eqref" reference="e:condition-q2"}. Since this is true for $\kappa=\kappa_{q_M}$ for any $q_M\geq q_I$, we deduce $$\limsup_{\kappa\to 0}\kappa\int_0^T\|\nabla\rho\|_{L^2}^2\,dt\geq \tfrac14c\kappa_{q_I}T\|\rho_{in}\|_{L^2}^2.$$ This concludes the proof. ◻ # Energy estimates {#s:energy} ## Estimates for advection-diffusion with Laplacian Estimates in this section are needed for both spatial homogenisation and time averaging. We consider the advection-diffusion equation $$\label{e:eqE1} \begin{aligned} \partial_t\rho+u\cdot\nabla \rho&=\kappa\Delta\rho \qquad \textrm{ on }\ensuremath{\mathbb{T}}^3 \times [0, T]\\ \rho|_{t=0}&=\rho_{in} \end{aligned}$$ with smooth initial datum. For brevity, in this section we use space-time norms on the time interval $[0,T]$, denoted by $L_{xt}$. We assume the following relationship between scales $$\label{e:scales_u1} \left(\frac{\|\nabla u\|_{L^\infty}}{\kappa}\right)^n \ge \left( \frac{\|\nabla^{n} u\|_{L^\infty}}{ \kappa} \right)^\frac{2n}{n+1}.$$ **Lemma 8**. *Let $\kappa>0$ and $u\in C^{\infty}(\ensuremath{\mathbb{T}}^d;\ensuremath{\mathbb{R}}^d)$ be divergence free. Assume [\[e:scales_u1\]](#e:scales_u1){reference-type="eqref" reference="e:scales_u1"}.
Then the solution $\rho$ of the advection-diffusion equation [\[e:eqE1\]](#e:eqE1){reference-type="eqref" reference="e:eqE1"} satisfies, for any ${t \le T}$, $$\frac{1}{2} \|\rho (t)\|_{L^2}^2+\kappa\int_0^t\|\nabla\rho (s)\|_{L^2}^2\,ds = \frac{1}{2} \|\rho_{in}\|_{L^2}^2,$$ and, for any $n\geq 1$, $$\label{e:ener_un} \sup_{t \le T} \|(\nabla^n \rho)(t)\|^2_{L^2}+ \kappa\int_0^T \|\nabla^{n+1} \rho (s)\|_{L^2}^2 ds \le \|\nabla^n \rho_{in}\|^2_{L^2} + C_n \left(\frac{\|\nabla u\|_{L^\infty}}{\kappa}\right)^n \kappa \int_0^T \|\nabla \rho (s)\|_{L^2}^2.$$* *Proof.* Apply $\ensuremath{ \partial^{\boldsymbol{\alpha}}}$, where $|\boldsymbol{\alpha}|=n$, to the equation and test the result with $\ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho$ to get, after integration in time, $$\frac{1}{2} \sup_{t \le T} \|(\ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho)(t) \|^2_{L^2}+ \kappa \|\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho\|_{L_{xt}^2}^2 \le \frac{1}{2} \|\nabla^n \rho_{in}\|^2_{L^2} + \| [u \cdot \nabla, \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho \|_{L_{xt}^2} \|\ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho\|_{L_{xt}^2}.$$ For the last term use the commutator estimate $$\| [u \cdot \nabla, \ensuremath{ \partial^{\boldsymbol{\alpha}}}] f \|_{L^2} \lesssim_n \| \nabla^n u \|_{L^\infty} \| \nabla f \|_{L^2} + \| \nabla u \|_{L^\infty} \| \nabla^n f \|_{L^2}$$ (followed by Hölder's inequality in time) to write $$\begin{split} \| [u \cdot \nabla, \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho \|_{L_{xt}^2} \|\ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho\|_{L_{xt}^2} &\lesssim_n \| \nabla^n u \|_{L^\infty} \| \nabla \rho \|_{L_{xt}^2} \| \nabla^n \rho \|_{L_{xt}^2} + \| \nabla u \|_{L^\infty} \| \nabla^n \rho \|^2_{L_{xt}^2}\\ &\lesssim_n \| \nabla^n u \|_{L^\infty} \| \nabla \rho \|^\frac{n+1}{n}_{L_{xt}^2} \| \nabla^{n+1} \rho \|^\frac{n-1}{n}_{L_{xt}^2} + \| \nabla u \|_{L^\infty} \| \nabla \rho \|^\frac{2}{n}_{L_{xt}^2} \| \nabla^{n+1} \rho \|^\frac{2n-2}{n}_{L_{xt}^2}. \end{split}$$ Here, for the latter $\lesssim$ we use interpolation.
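The interpolation inequality used here, $\|\nabla^n\rho\|_{L^2}\lesssim\|\nabla\rho\|_{L^2}^{\sfrac{1}{n}}\|\nabla^{n+1}\rho\|_{L^2}^{\sfrac{n-1}{n}}$, follows on the Fourier side from Hölder's inequality with exponents $n$ and $\tfrac{n}{n-1}$, with constant $1$. A numerical sanity check on a one-dimensional Fourier caricature with random coefficients (illustrative only; the lemma itself is on $\ensuremath{\mathbb{T}}^d$):

```python
import random

def seminorm(coeffs, m: int) -> float:
    """||grad^m rho||_{L^2} = (sum_k k^(2m) c_k^2)^(1/2) for the 1D trigonometric
    polynomial rho with Fourier coefficients c_1, ..., c_K."""
    return sum(k ** (2 * m) * c * c for k, c in enumerate(coeffs, start=1)) ** 0.5

def interpolation_gap(coeffs, n: int) -> float:
    """RHS minus LHS of ||grad^n rho|| <= ||grad rho||^(1/n) * ||grad^(n+1) rho||^((n-1)/n);
    nonnegative by Hoelder's inequality with exponents n and n/(n-1)."""
    lhs = seminorm(coeffs, n)
    rhs = seminorm(coeffs, 1) ** (1.0 / n) * seminorm(coeffs, n + 1) ** ((n - 1.0) / n)
    return rhs - lhs
```

Equality holds exactly when a single Fourier mode is active, so random multi-mode data give a strictly positive gap.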
Thus, summing over all partial derivatives of order $n$ we have $$\begin{split} \frac{1}{2} \sup_{t \le T} \|(\nabla^n \rho)(t) \|^2_{L^2}+ \kappa \|\nabla^{n+1} &\rho\|_{L_{xt}^2}^2 \le C_n \kappa^{-1} \| \nabla^n u \|_{L^\infty} (\kappa^\frac{1}{2} \| \nabla \rho \|_{L_{xt}^2})^\frac{n+1}{n} (\kappa^\frac{1}{2} \| \nabla^{n+1} \rho \|_{L_{xt}^2})^\frac{n-1}{n} \\ + C_n & \kappa^{-1} \| \nabla u \|_{L^\infty} (\kappa^\frac{1}{2} \| \nabla \rho \|_{L_{xt}^2})^\frac{2}{n} (\kappa^\frac{1}{2} \| \nabla^{n+1} \rho \|_{L_{xt}^2})^\frac{2n-2}{n} + \frac{1}{2} \|\nabla^n \rho_{in}\|^2_{L^2}. \end{split}$$ The above and Young's inequality (applied with the conjugate exponent pairs $\big(\tfrac{2n}{n+1},\tfrac{2n}{n-1}\big)$ for the first right-hand side term and $\big(n,\tfrac{n}{n-1}\big)$ for the second, so as to absorb the powers of $\kappa^\frac{1}{2} \| \nabla^{n+1} \rho \|_{L_{xt}^2}$ into the left-hand side) give $$\sup_{t \le T} \|(\nabla^n \rho)(t) \|^2_{L^2}+ \kappa \|\nabla^{n+1} \rho\|_{L_{xt}^2}^2 \le \|\nabla^n \rho_{in}\|^2_{L^2} + C_n \left( \left(\frac{\|\nabla u\|_{L^\infty}}{\kappa}\right)^n + \left( \frac{\|\nabla^{n} u\|_{L^\infty}}{ \kappa} \right)^\frac{2n}{n+1} \right) \kappa \|\nabla \rho\|_{L_{xt}^2}^2,$$ which with assumption [\[e:scales_u1\]](#e:scales_u1){reference-type="eqref" reference="e:scales_u1"} gives [\[e:ener_un\]](#e:ener_un){reference-type="eqref" reference="e:ener_un"}. ◻ We will also need the following immediate corollary for the forced advection-diffusion equation $$\label{e:eqE1f} \begin{aligned} \partial_t\rho+u\cdot\nabla \rho&=\kappa\Delta\rho +\mathop{\rm div}\nolimits f \qquad \textrm{ on }\ensuremath{\mathbb{T}}^3 \times [0, T]\\ \rho|_{t=0}&=\rho_{in} \end{aligned}$$ with smooth initial datum and forcing. **Corollary 9**. *Let $\kappa>0$ and $u\in C^{\infty}(\ensuremath{\mathbb{T}}^d;\ensuremath{\mathbb{R}}^d)$ be divergence free. Assume [\[e:scales_u1\]](#e:scales_u1){reference-type="eqref" reference="e:scales_u1"}.
Then the forced advection-diffusion equation [\[e:eqE1f\]](#e:eqE1f){reference-type="eqref" reference="e:eqE1f"} satisfies for any ${t \le T}$ $$\frac{1}{2} \|\rho (t)\|_{L^2}^2+\kappa\int_0^t\|\nabla\rho (s)\|_{L^2}^2\,ds = \frac{1}{2} \|\rho_{in}\|_{L^2}^2 + \int_0^t \int f \cdot \nabla\rho,$$ and for any $n\geq 1$: $$\label{e:ener_un_f} \begin{split} &\sup_{t \le T} \|(\nabla^n \rho)(t)\|^2_{L^2}+ \kappa\int_0^T \|\nabla^{n+1} \rho (s)\|_{L^2}^2 ds \le \\ &\|\nabla^n \rho_{in}\|^2_{L^2} + C_n \left(\frac{\|\nabla u\|_{L^\infty}}{\kappa}\right)^n \kappa \int_0^T \|\nabla \rho (s)\|_{L^2}^2 + \sum_{|\boldsymbol{\alpha}|=n}\left|\int_0^T \int \ensuremath{ \partial^{\boldsymbol{\alpha}}}f \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho \right|. \end{split}$$* Observe that no norms of the forcing appear in the estimates above; this will be important later. In particular, this is why we integrated in time immediately while deriving estimate [\[e:ener_un\]](#e:ener_un){reference-type="eqref" reference="e:ener_un"}. ## Estimates for advection-diffusion with an elliptic matrix The estimates in this section are needed mainly in the spatial homogenisation proposition. We will consider $$\label{e:e_homogenized} \begin{aligned} \partial_t\bar\rho+u\cdot\nabla\bar\rho&=\mathop{\rm div}\nolimits\bar A\nabla\bar\rho \qquad \textrm{ on }\ensuremath{\mathbb{T}}^3 \times [0, T]\\ \bar \rho|_{t=0}&=\rho_{in} \end{aligned}$$ with smooth initial datum and smooth matrix $\bar A$, or a difference of two solutions $\bar\rho_1$ and $\bar\rho_2$ with respective matrices $\bar A_1$, $\bar A_2$. ### Comparison First, we prove an estimate that allows us to compare two solutions of [\[e:e_homogenized\]](#e:e_homogenized){reference-type="eqref" reference="e:e_homogenized"} with two different elliptic matrices. **Proposition 10** (Stability estimates).
*Let $\varrho_1$ and $\varrho_2$ solve the following equations on $\ensuremath{\mathbb{T}}^3 \times [0, T]$ $$\begin{aligned} \partial_t \varrho_1 + u \cdot \nabla \varrho_1 &= \mathop{\rm div}\nolimits( A_1 \nabla \varrho_1 ), \\ \partial_t \varrho_2 + u \cdot \nabla \varrho_2 &= \mathop{\rm div}\nolimits( A_2 \nabla \varrho_2 )\end{aligned}$$ with initial data $\varrho_1(0) = \varrho_2(0) = \rho_{\text{in}} \in L^2(\ensuremath{\mathbb{T}}^3)$ and uniformly elliptic symmetric matrices $A_1, A_2: \ensuremath{\mathbb{T}}^3 \times [0,T] \rightarrow \ensuremath{\mathbb{R}}^{3 \times 3}$ satisfying for $\varepsilon \le \frac{1}{2}$ $$\begin{aligned} \label{p:stability_in_ellipticity_c0} \big| (A_1 - A_2) \xi \cdot \zeta \big| \leq& \varepsilon (A_1 \xi \cdot \xi)^{\frac{1}{2}} (A_1 \zeta \cdot \zeta)^{\frac{1}{2}}, \quad \text{for any } (x,t) \in \ensuremath{\mathbb{T}}^3 \times [0,T] \text{ and } \xi,\zeta \in \ensuremath{\mathbb{R}}^3.\end{aligned}$$ Let $\tilde \varrho := \varrho_1 - \varrho_2$. Then we have $$\begin{aligned} A_1 \nabla \varrho_2 \cdot \nabla \varrho_2 \leq& \; 2 A_2 \nabla \varrho_2 \cdot \nabla \varrho_2 \quad \text{pointwise on } \ensuremath{\mathbb{T}}^3 \times [0,T], \label{p:stability_in_ellipticity_e0} \\ \sup_{t \le T} \| \tilde \varrho(t) \|^2_{L^2} + \int_0^T \int A_1 \nabla \tilde \varrho \cdot \nabla \tilde \varrho dxdt \leq& \varepsilon^2 \int_0^T \int A_1 \nabla \varrho_2 \cdot \nabla \varrho_2 dxdt, \label{p:stability_in_ellipticity_e1} \\ \big| \|\varrho_1(t)\|_{L^2}^2 - \|\varrho_2(t)\|_{L^2}^2 \big| \leq& 9 \varepsilon \int_0^t \int A_1 \nabla \varrho_2 \cdot \nabla \varrho_2 dxdt \le 18 \varepsilon \int_0^t \int A_2 \nabla \varrho_2 \cdot \nabla \varrho_2 dxdt \label{p:stability_in_ellipticity_e2}\end{aligned}$$ for any $t \le T$.* *Proof.* The pointwise inequality [\[p:stability_in_ellipticity_e0\]](#p:stability_in_ellipticity_e0){reference-type="eqref" reference="p:stability_in_ellipticity_e0"} follows from
[\[p:stability_in_ellipticity_c0\]](#p:stability_in_ellipticity_c0){reference-type="eqref" reference="p:stability_in_ellipticity_c0"}, ellipticity, and $\varepsilon \le \frac{1}{2}$. Taking the difference of the two equations, we have $$\begin{aligned} \label{p:stability_in_ellipticity_e5} \partial_t \tilde \varrho + u \cdot \nabla \tilde \varrho &= \mathop{\rm div}\nolimits\big( A_1 \nabla \tilde \varrho \big) + \mathop{\rm div}\nolimits\big( (A_1-A_2) \nabla \varrho_2 \big).\end{aligned}$$ Test with $\tilde \varrho$ and integrate by parts. For the last term in [\[p:stability_in_ellipticity_e5\]](#p:stability_in_ellipticity_e5){reference-type="eqref" reference="p:stability_in_ellipticity_e5"}, we use [\[p:stability_in_ellipticity_c0\]](#p:stability_in_ellipticity_c0){reference-type="eqref" reference="p:stability_in_ellipticity_c0"} and Young's inequality to absorb the resulting $\nabla \tilde \varrho$ contribution into the dissipative term coming from the first term on the right hand side. This gives [\[p:stability_in_ellipticity_e1\]](#p:stability_in_ellipticity_e1){reference-type="eqref" reference="p:stability_in_ellipticity_e1"}. From the equations for $\varrho_1$ and $\varrho_2$, we can also derive $$\label{p:stability_in_ellipticity_e6} \begin{split} \partial_t \big( \tilde \varrho \varrho_2 \big) = & -\tilde \varrho u \cdot \nabla \varrho_2 + \tilde \varrho \mathop{\rm div}\nolimits( A_2 \nabla \varrho_2 ) \\ & -\varrho_2 u \cdot \nabla \tilde \varrho + \varrho_2 \mathop{\rm div}\nolimits\big( A_1 \nabla \tilde \varrho \big) + \varrho_2 \mathop{\rm div}\nolimits\big( (A_1-A_2) \nabla \varrho_2 \big). \end{split}$$ Integrating over $\ensuremath{\mathbb{T}}^3 \times [0,t]$, the first and the third terms on the right hand side of [\[p:stability_in_ellipticity_e6\]](#p:stability_in_ellipticity_e6){reference-type="eqref" reference="p:stability_in_ellipticity_e6"} cancel.
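For the reader's convenience, the cancellation can be spelled out: since $u$ is divergence free, $$\int \big( \tilde \varrho\, u \cdot \nabla \varrho_2 + \varrho_2\, u \cdot \nabla \tilde \varrho \big) dx = \int u \cdot \nabla \big( \tilde \varrho \varrho_2 \big) dx = - \int (\mathop{\rm div}\nolimits u)\, \tilde \varrho \varrho_2\, dx = 0.$$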
For the rest, we integrate by parts $$\label{p:stability_in_ellipticity_e7} \begin{split} \left| \int \tilde \varrho(t) \varrho_2(t) dx \right| &\le \left|\int_0^t \int (A_1 + A_2) \nabla\tilde \varrho \cdot \nabla \varrho_2 dx ds \right|+ \left| \int_0^t \int (A_1-A_2) \nabla \varrho_2 \cdot \nabla \varrho_2 dx ds \right| \\ & \le 2\left|\int_0^t \int A_1 \nabla\tilde \varrho \cdot \nabla \varrho_2 dx ds \right|+ 2\left| \int_0^t \int (A_1-A_2) \nabla \varrho_2 \cdot \nabla \varrho_2 dx ds \right|. \end{split}$$ We estimate the latter right-hand side term by $2\varepsilon \int_0^t \int A_1 \nabla \varrho_2 \cdot \nabla \varrho_2 dx ds$ using the assumption. For the first term we use $A_1 \xi \cdot \zeta \le (A_1 \xi \cdot \xi)^{\frac{1}{2}} (A_1 \zeta \cdot \zeta)^{\frac{1}{2}}$ (Cauchy-Schwarz) followed by [\[p:stability_in_ellipticity_e1\]](#p:stability_in_ellipticity_e1){reference-type="eqref" reference="p:stability_in_ellipticity_e1"}, whose factor $\varepsilon^2$ yields the claimed power of $\varepsilon$, to get $$\left| \int \tilde \varrho(t) \varrho_2(t) dx \right| \le 4\varepsilon \int_0^t \int A_1 \nabla \varrho_2 \cdot \nabla \varrho_2 dx ds.$$ Then the following fact concludes the proof of [\[p:stability_in_ellipticity_e2\]](#p:stability_in_ellipticity_e2){reference-type="eqref" reference="p:stability_in_ellipticity_e2"}, $$\begin{aligned} \left| \|\varrho_1(t)\|_{L^2}^2 - \|\varrho_2(t)\|_{L^2}^2 \right| = \left| \int \tilde \varrho(t) \big( \varrho_1(t) + \varrho_2(t) \big) dx \right| \le \| \tilde \varrho(t) \|^2_{L^2} + 2\left|\int \tilde \varrho(t) \varrho_2(t) dx\right|.\end{aligned}$$ ◻ ### General weighted estimate We define $$\bar D = \int_0^T\|\bar \kappa^\frac{1}{2} \nabla \bar\rho (s)\|_{L^2}^2\,ds.$$ Assume the following inequalities $$\label{e:e_ell} 2 \bar\kappa(x,t) \ensuremath{\mathrm{Id}}\geq \bar{A} (x,t) \geq \frac{\bar\kappa (x,t)}{2} \ensuremath{\mathrm{Id}}$$ and $$\label{e:e_kappatransported} \left\|\frac{D^u_t \bar\kappa }{\bar\kappa} \right\|_{L^\infty} \leq C \tau^{-1}.$$ Assume
further that for $m \ge 1$ $$\label{e:e_modelAestimates} \|\bar \kappa^\frac{m-2}{2} \nabla^m\bar A\|_{L^\infty} \leq C (\tau^{-1})^\frac{m}{2} \qquad \|\bar \kappa^\frac{m-2}{2} \nabla^m\bar \kappa\|_{L^\infty} \lesssim (\tau^{-1})^\frac{m}{2}$$ and $$\label{e:e_model-estimates2} \| \bar\kappa^\frac{m-1}{2} \nabla^m u\|_{L^\infty} \leq C \tau^{-1} (\tau^{-1})^\frac{m-1}{2}.$$ **Lemma 11**. *Let $u$ be divergence free. Assume the inequalities [\[e:e_ell\]](#e:e_ell){reference-type="eqref" reference="e:e_ell"}, [\[e:e_kappatransported\]](#e:e_kappatransported){reference-type="eqref" reference="e:e_kappatransported"}, [\[e:e_model-estimates2\]](#e:e_model-estimates2){reference-type="eqref" reference="e:e_model-estimates2"}, [\[e:e_modelAestimates\]](#e:e_modelAestimates){reference-type="eqref" reference="e:e_modelAestimates"} hold. Then the general advection-diffusion equation [\[e:e_homogenized\]](#e:e_homogenized){reference-type="eqref" reference="e:e_homogenized"} satisfies for any ${t \le T}$ $$\frac{1}{2} \|\bar \rho (t)\|_{L^2}^2+\int_0^t\|\bar \kappa^\frac{1}{2} \nabla \bar\rho (s)\|_{L^2}^2\,ds = \frac{1}{2} \|\rho_{in}\|_{L^2}^2$$ and for any $n \ge 1$ $$\label{e:rhobarenerg} \sup_{t\le T} \|(\bar\kappa^\frac{n}{2} \nabla^n\bar\rho)(t)\|^2_{L^2}+\int_0^T \|\bar\kappa^\frac{n+1}{2}\nabla^{n+1}\bar\rho\|_{L^2}^2 \lesssim (\tau^{-1})^n \bar D + \sum_{i=1}^n (\tau^{-1})^{n-i} \|(\bar\kappa^{i/2} \nabla^i\bar\rho)_{in}\|^2_{L^2}.$$ Further, for any $|\boldsymbol{\alpha}|=n$ $$\label{e:rhobarflowB} \int_0^T \|\bar\kappa^\frac{n}{2} D_t \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho\|^2_{L^2} \lesssim (\tau^{-1})^{n+1} \bar D + \sum_{i=1}^{n+1} \left(\tau^{-1} \right)^{{n+1-i}} \|(\bar\kappa^{i/2} \nabla^i\bar\rho)_{in}\|^2_{L^2}.$$ The constants in $\lesssim$ depend on the constants in the assumptions and on $n$.* The proof occupies the rest of this section. *Proof.* *Step 1: Preliminary $n$-th order estimate*.
Apply $\ensuremath{ \partial^{\boldsymbol{\alpha}}}$, where $|\boldsymbol{\alpha}|=n$, to the equation [\[e:e_homogenized\]](#e:e_homogenized){reference-type="eqref" reference="e:e_homogenized"} and test the result with $\bar\kappa^n \partial^{\boldsymbol{\alpha}} \bar\rho$ $$\label{e:rhobarenergym2_m1} \begin{split} \frac{1}{2} \frac{d}{dt} &\int |(\bar\kappa^{n/2} \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho)(t)|^2+ \int \bar\kappa^n \bar A (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho) (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho) =\frac{1}{2} \int \frac{D^u_t (\bar\kappa^n) }{\bar\kappa^n} \bar\kappa^n |\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho|^2 \\ & -\sum_{{\boldsymbol{\beta}} + {\boldsymbol{\gamma}} ={\boldsymbol{\alpha}} , {\boldsymbol{\beta}} >0} c_{\boldsymbol{\beta}} \int \Big( \partial^{\boldsymbol{\beta}} u \cdot \partial^{\boldsymbol{\gamma}} \nabla \bar\rho (\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho) \bar\kappa^n + \partial^{\boldsymbol{\beta}} \bar A \nabla \partial^{\boldsymbol{\gamma}} \bar\rho \nabla (\bar\kappa^n \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho ) \Big) - \int \bar A \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho \nabla (\bar\kappa^n ) \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho, \end{split}$$ where $c_{\boldsymbol{\beta}}$ are binomial coefficients. We now estimate the four right-hand side terms of [\[e:rhobarenergym2_m1\]](#e:rhobarenergym2_m1){reference-type="eqref" reference="e:rhobarenergym2_m1"} in order of their appearance.
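To see where the first right-hand side term comes from, note that since $u$ is divergence free, the time-derivative and transport contributions combine into a material derivative of the weight: $$\int \partial_t\Big(\tfrac{1}{2}|\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho|^2\Big) \bar\kappa^n + \int u\cdot\nabla\Big(\tfrac{1}{2}|\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho|^2\Big) \bar\kappa^n = \frac{1}{2}\frac{d}{dt}\int \bar\kappa^n |\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho|^2 - \frac{1}{2}\int \frac{D^u_t (\bar\kappa^n)}{\bar\kappa^n}\, \bar\kappa^n |\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho|^2.$$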
For the first one, use $\frac{D^u_t (\bar\kappa^n) }{\bar\kappa^n} = n \frac{D^u_t (\bar\kappa) }{\bar\kappa}$ and estimate in modulus $$\int \frac{D^u_t (\bar\kappa^n) }{\bar\kappa^n} \bar\kappa^n |\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho|^2 \le n \|\bar\kappa^\frac{n}{2}\nabla^{n}\bar\rho\|_{L^2}^2 \left\|\frac{D^u_t \bar\kappa }{\bar\kappa} \right\|_{L^\infty}.$$ For a single summand of the second one, estimate its modulus by distributing the weights according to derivatives as follows $$\int \bar\kappa^{\frac{n-|\boldsymbol{\gamma}|-1}{2} } \partial^{\boldsymbol{\beta}} u \cdot (\bar\kappa^\frac{|\boldsymbol{\gamma}|+1}{2} \partial^{\boldsymbol{\gamma}} \nabla \bar\rho) ((\ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho) \bar\kappa^{n/2}) \le \|\bar\kappa^\frac{n}{2}\nabla^{n}\bar\rho\|_{L^2} \|\bar\kappa^\frac{|\boldsymbol{\gamma}|+1}{2}\nabla^{|\boldsymbol{\gamma}|+1}\bar\rho\|_{L^2} \left\| \bar\kappa^{\frac{n-|\boldsymbol{\gamma}|-1}{2} } \nabla^{n-|\boldsymbol{\gamma}|} u \right\|_{L^\infty}.$$ For a single summand of the third one we estimate its modulus by $$\label{e:rhobarenergym2_2short3} \begin{split} \int&\partial^{\boldsymbol{\beta}} \bar A \nabla \partial^{\boldsymbol{\gamma}} \bar\rho \nabla (\bar\kappa^n \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho ) = n \int\partial^{\boldsymbol{\beta}} \bar A (\nabla \partial^{\boldsymbol{\gamma}} \bar\rho) \bar\kappa^{n-1} \nabla \bar\kappa \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho + \int\partial^{\boldsymbol{\beta}} \bar A (\nabla \partial^{\boldsymbol{\gamma}} \bar\rho) \bar\kappa^n \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho \le \\ & n \|\bar\kappa^{\frac{n-|\boldsymbol{\gamma}|-2}{2} } \nabla^{n-|\boldsymbol{\gamma}|} \bar A \|_{L^\infty} \|\bar\kappa^{-\frac{1}{2}} \nabla \bar\kappa\|_{L^\infty} \| \bar\kappa^\frac{|\boldsymbol{\gamma}|+1}{2}\nabla^{|\boldsymbol{\gamma}|+1} \bar\rho\|_{L^2} \|\bar\kappa^{\frac{n}{2}} \nabla^n \bar\rho \|_{L^2} \\ +& 
\|\bar\kappa^{\frac{n-|\boldsymbol{\gamma}|-2}{2} } \nabla^{n-|\boldsymbol{\gamma}|} \bar A \|_{L^\infty} \|\bar\kappa^\frac{|\boldsymbol{\gamma}|+1}{2}\nabla^{|\boldsymbol{\gamma}|+1} \bar\rho\|_{L^2} \|\bar\kappa^\frac{n+1}{2} \nabla^{n+1} \bar\rho\|_{L^2}. \end{split}$$ For the fourth, last term of [\[e:rhobarenergym2_m1\]](#e:rhobarenergym2_m1){reference-type="eqref" reference="e:rhobarenergym2_m1"} we estimate it in modulus writing, thanks to the upper bound [\[e:e_ell\]](#e:e_ell){reference-type="eqref" reference="e:e_ell"} $$n \int \bar\kappa^{n-1} \bar A \nabla \bar\kappa \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho \le 2n \int \bar\kappa^{n} |\nabla \bar\kappa |\, | \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho |\, |\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho| \le 2n \|\bar\kappa^{-\frac{1}{2}} \nabla \bar \kappa \|_{L^\infty} \|\bar\kappa^\frac{n}{2}\nabla^{n} \bar\rho\|_{L^2} \|\bar\kappa^\frac{n+1}{2} \nabla^{n+1} \bar\rho\|_{L^2}.$$ Together, these estimates for the four right-hand side terms of [\[e:rhobarenergym2_m1\]](#e:rhobarenergym2_m1){reference-type="eqref" reference="e:rhobarenergym2_m1"}, absorbing their terms $\|\bar\kappa^\frac{n+1}{2} \nabla^{n+1} \bar\rho\|_{L^2}$ by the dissipative part using Young's inequality, after summing over all multi-indices of order $n$ yield $$\label{e:rhobarenergym2_m2} \begin{split} \frac{1}{2} \frac{d}{dt} &\|(\bar\kappa^{n/2} \nabla^n\bar\rho)(t)\|^2_{L^2}+ \frac{1}{2} \|\bar\kappa^\frac{n+1}{2}\nabla^{n+1}\bar\rho\|_{L^2}^2 \lesssim_n \|\bar\kappa^\frac{n}{2}\nabla^{n}\bar\rho\|_{L^2}^2 \left\|\frac{D^u_t \bar\kappa }{\bar\kappa} \right\|_{L^\infty} \\ & + \|\bar\kappa^\frac{n}{2}\nabla^{n}\bar\rho\|_{L^2} \sum_{j=0}^{n-1} \|\bar\kappa^\frac{j+1}{2}\nabla^{j+1}\bar\rho\|_{L^2} \left\| \bar\kappa^{\frac{n-j-1}{2} } \nabla^{n-j} u \right\|_{L^\infty} \\ & + \|\bar\kappa^{-\frac{1}{2}} \nabla \bar\kappa\|_{L^\infty}
\|\bar\kappa^\frac{n}{2} \nabla^n \bar\rho \|_{L^2} \sum_{j=0}^{n-1} \|\bar\kappa^{\frac{n-j-2}{2} } \nabla^{n-j} \bar A\|_{L^\infty} \| \bar\kappa^\frac{j+1}{2}\nabla^{j+1} \bar\rho\|_{L^2} \\ &+ \sum_{j=0}^{n-1} \|\bar\kappa^{\frac{n-j-2}{2} } \nabla^{n-j} \bar A\|^2_{L^\infty} \|\bar\kappa^\frac{j+1}{2}\nabla^{j+1} \bar\rho \|^2_{L^2} + \|\bar\kappa^{-\frac{1}{2}} \nabla \bar \kappa \|^2_{L^\infty} \|\bar\kappa^\frac{n}{2}\nabla^{n} \bar\rho\|^2_{L^2}. \end{split}$$ *Step 2: Plugging in scales assumptions.* Use the assumptions [\[e:e_kappatransported\]](#e:e_kappatransported){reference-type="eqref" reference="e:e_kappatransported"}, [\[e:e_modelAestimates\]](#e:e_modelAestimates){reference-type="eqref" reference="e:e_modelAestimates"}, [\[e:e_model-estimates2\]](#e:e_model-estimates2){reference-type="eqref" reference="e:e_model-estimates2"} on the right-hand side of [\[e:rhobarenergym2_m2\]](#e:rhobarenergym2_m2){reference-type="eqref" reference="e:rhobarenergym2_m2"} to obtain $$\label{e:rhobarenergym2_m3} \frac{d}{dt} \|(\bar\kappa^{n/2} \nabla^n\bar\rho)(t)\|^2_{L^2}+ \|\bar\kappa^\frac{n+1}{2}\nabla^{n+1}\bar\rho\|_{L^2}^2 \lesssim_n \sum_{j=0}^{n-1} \|\bar\kappa^\frac{j+1}{2}\nabla^{j+1}\bar\rho\|^2_{L^2} (\tau^{-1})^{n-j}$$ which after integrating in time and writing $P(i) =\int_0^T \|\bar\kappa^\frac{i}{2}\nabla^{i}\bar\rho\|_{L^2}^2$ yields for any $n \ge 1$ $$\label{e:rhobarenergym2_m4} \sup_{t\le T} \|(\bar\kappa^\frac{n}{2} \nabla^n\bar\rho)(t)\|^2_{L^2} + P(n+1) \lesssim_n \|(\bar\kappa^\frac{n}{2} \nabla^n\bar\rho)_{in}\|^2_{L^2} + \sum_{j=0}^{n-1} P(j+1) (\tau^{-1})^{n-j}.$$ *Step 3: Iterations.* Take [\[e:rhobarenergym2_m4\]](#e:rhobarenergym2_m4){reference-type="eqref" reference="e:rhobarenergym2_m4"} with $n=1$.
Observing that $P(1) = \bar D$ we have $$\label{e:rhobarenergym2_1} \sup_{t\le T} \|(\bar\kappa^{1/2} \nabla\bar\rho)(t)\|^2_{L^2}+\int_0^T\|\bar\kappa\nabla^2\bar\rho\|_{L^2}^2\,dt \lesssim \tau^{-1}\bar D + \|(\bar\kappa^{1/2} \nabla\bar\rho)_{in}\|^2_{L^2},$$ i.e. [\[e:rhobarenerg\]](#e:rhobarenerg){reference-type="eqref" reference="e:rhobarenerg"} with $n=1$; this starts the induction. Assume [\[e:rhobarenerg\]](#e:rhobarenerg){reference-type="eqref" reference="e:rhobarenerg"} holds for all $j \le n$, so that in particular $$P(j+1) \lesssim_n (\tau^{-1})^j \bar D + \sum_{i=1}^j (\tau^{-1})^{j-i} \|(\bar\kappa^{i/2} \nabla^i\bar\rho)_{in}\|^2_{L^2}.$$ For $n+1$ we have via [\[e:rhobarenergym2_m4\]](#e:rhobarenergym2_m4){reference-type="eqref" reference="e:rhobarenergym2_m4"} $$\begin{split} P(n+2) & \lesssim_n \|(\bar\kappa^\frac{n+1}{2} \nabla^{n+1}\bar\rho)_{in}\|^2_{L^2} + \sum_{j=0}^{n} P(j+1) (\tau^{-1})^{n+1-j} \\ &\lesssim_n \|(\bar\kappa^\frac{n+1}{2} \nabla^{n+1}\bar\rho)_{in}\|^2_{L^2} + \sum_{j=0}^{n} \left((\tau^{-1})^j \bar D + \sum_{i=1}^j (\tau^{-1})^{j-i} \|(\bar\kappa^{i/2} \nabla^i\bar\rho)_{in}\|^2_{L^2} \right)(\tau^{-1})^{n+1-j}, \end{split}$$ which gives [\[e:rhobarenerg\]](#e:rhobarenerg){reference-type="eqref" reference="e:rhobarenerg"} for $n+1$.
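For clarity, the double sum above collapses to the claimed form by collecting equal powers of $\tau^{-1}$: $$\sum_{j=0}^{n} (\tau^{-1})^{j} (\tau^{-1})^{n+1-j} \bar D = (n+1) (\tau^{-1})^{n+1} \bar D, \qquad \sum_{j=0}^{n} \sum_{i=1}^{j} (\tau^{-1})^{n+1-i} \|(\bar\kappa^{i/2} \nabla^i\bar\rho)_{in}\|^2_{L^2} \lesssim_n \sum_{i=1}^{n+1} (\tau^{-1})^{n+1-i} \|(\bar\kappa^{i/2} \nabla^i\bar\rho)_{in}\|^2_{L^2},$$ while the initial-datum term $\|(\bar\kappa^\frac{n+1}{2} \nabla^{n+1}\bar\rho)_{in}\|^2_{L^2}$ is the $i=n+1$ summand of the last sum.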
*Step 4: Transport estimate.* Apply $\ensuremath{ \partial^{\boldsymbol{\alpha}}}$, where $|\boldsymbol{\alpha}|=n$, to the equation [\[e:e_homogenized\]](#e:e_homogenized){reference-type="eqref" reference="e:e_homogenized"} $$\begin{split} D_t \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho = -\sum_{{\boldsymbol{\beta}} + {\boldsymbol{\gamma}} ={\boldsymbol{\alpha}} , {\boldsymbol{\beta}} >0} c_{\boldsymbol{\beta}} \partial^{\boldsymbol{\beta}} u \cdot \partial^{\boldsymbol{\gamma}} \nabla \bar\rho + \mathop{\rm div}\nolimits\sum_{{\boldsymbol{\beta}} + {\boldsymbol{\gamma}} ={\boldsymbol{\alpha}}} c_{\boldsymbol{\beta}} \partial^{\boldsymbol{\beta}} \bar A \partial^{\boldsymbol{\gamma}} \nabla \bar\rho. \end{split}$$ Multiply both sides by $\bar\kappa^\frac{n}{2}$, distribute the weights, and take space-time $L^2$ norms on the time interval $[0,T]$, denoted by $L^2_{xt}$. This gives $$\begin{split} \|\bar\kappa^\frac{n}{2} D_t \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho\|_{L^2_{xt}} &\lesssim_n \sum_{i+j=n, i>0} \|\bar\kappa^\frac{i-1}{2} \nabla^i u\|_{L^\infty} \| \bar\kappa^\frac{j+1}{2} \nabla^{j+1} \bar\rho\|_{L^2_{xt}} \\ &+ \sum_{i+j=n} \|\bar\kappa^\frac{i-1}{2} \nabla^{i+1} \bar A \|_{L^\infty} \| \bar\kappa^\frac{j+1}{2} \nabla^{j+1} \bar\rho\|_{L^2_{xt}} + \sum_{i+j=n} \| \bar\kappa^\frac{i-2}{2} \nabla^i \bar A \|_{L^\infty} \| \bar\kappa^\frac{j+2}{2} \nabla^{j+2} \bar\rho\|_{L^2_{xt}}.
\end{split}$$ Use the assumptions [\[e:e_modelAestimates\]](#e:e_modelAestimates){reference-type="eqref" reference="e:e_modelAestimates"}, [\[e:e_model-estimates2\]](#e:e_model-estimates2){reference-type="eqref" reference="e:e_model-estimates2"} $$\|\bar\kappa^\frac{n}{2} D_t \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho\|_{L^2_{xt}} \lesssim_n \tau^{-1} \sum_{j=1}^{n+2} (\tau^{-1})^\frac{n-j}{2} \| \bar\kappa^\frac{j}{2} \nabla^{j} \bar\rho\|_{L^2_{xt}}.$$ Squaring both sides above yields $$\int_0^T \|\bar\kappa^\frac{n}{2} D_t \ensuremath{ \partial^{\boldsymbol{\alpha}}}\bar\rho\|^2_{L^2} \lesssim_n (\tau^{-1})^2 \sum_{j=1}^{n+2} (\tau^{-1})^{n-j} P(j).$$ [\[e:rhobarflowB\]](#e:rhobarflowB){reference-type="eqref" reference="e:rhobarflowB"} now follows from using [\[e:rhobarenerg\]](#e:rhobarenerg){reference-type="eqref" reference="e:rhobarenerg"} to control $P(j)$. ◻ # Spatial homogenisation {#s:homogenization} ## Setup In this section, we use $\xi$ to denote the variable in the cell. For a function $f: \ensuremath{\mathbb{T}}^3 \rightarrow \ensuremath{\mathbb{R}}$ defined in the cell, i.e. taking the variable $\xi$ as its argument, we use $\langle f \rangle$ to denote its integral over $\ensuremath{\mathbb{T}}^3$. Without further specification, the domain of integration is $\ensuremath{\mathbb{T}}^3$ for the variables $x,\xi \in \ensuremath{\mathbb{T}}^3$. For a function $g$ taking arguments $(x,t,\xi)$, we use $\|g(x,t,\cdot)\|_{L^\infty_\xi}$ to denote the supremum norm in the $\xi$ variable. Note that $\|g\|_{L^\infty_\xi}$ is still a function of $(x,t)$. The usual $L^2$ and $H^{-1}$ norms are in the $x$ variable. The constant in $\lesssim$ in this section depends on $N_h$ defined in [\[e:hom_para_assump:Nh\]](#e:hom_para_assump:Nh){reference-type="eqref" reference="e:hom_para_assump:Nh"}.
In this section we consider the following advection-diffusion equation for $\rho: \ensuremath{\mathbb{T}}^3 \times [0,T] \rightarrow \ensuremath{\mathbb{R}}$ $$\label{e:hom_base:eq} \begin{split} \partial_t\rho+u\cdot\nabla \rho =& \mathop{\rm div}\nolimits\big( \tilde A\nabla\rho \big), \\ \rho|_{t=0} =& \rho_{\text{in}}, \end{split}$$ with elliptic tensors $\tilde A: \ensuremath{\mathbb{T}}^3 \times [0,T] \rightarrow \ensuremath{\mathbb{R}}^{3 \times 3}$ and $A: \ensuremath{\mathbb{T}}^3 \times [0,T] \times \ensuremath{\mathbb{T}}^3 \rightarrow \ensuremath{\mathbb{R}}^{3 \times 3}$ given by $$\begin{aligned} \tilde A(x,t) :=& A \big( x, t, \lambda \Phi_i(x,t) \big), \label{e:hom_base:tA} \\ A(x,t,\xi) :=& \sum_i \tilde\eta_i(x,t) \nabla \Phi_i^{-1}(x,t) \Bigg( \kappa \ensuremath{\mathrm{Id}}+ \frac{\eta_i(x,t) \sigma^{1/2}(t)}{\lambda} \sum_{\vec{k}} a_{\vec{k}} \big( \tilde R_{i}(x,t) \big) H_{\vec{k}}(\xi) \Bigg) \nabla \Phi_i^{-T}(x,t), \label{e:hom_base:A}\end{aligned}$$ where, compared to the setting in Step 2 of the proof of Proposition [Proposition 5](#p:main){reference-type="ref" reference="p:main"}, we drop the index $q$ for simplicity, so that $$\label{e:hom_base_para_ex} \begin{split} \kappa =& \kappa_{q+1}, \\ \lambda =& \lambda_{q+1}, \\ \bar \kappa =& \tilde \kappa_q, \\ \tau =& \tau_q, \end{split} \quad \quad \quad \begin{split} \ell =& \ell_q, \\ u =& \bar u_q, \\ \sigma =& \sigma_q, \\ \tilde R_{i} =& \tilde R_{q,i}, \end{split}$$ and $\bar u_q$, $\Phi_i$, $\sigma_q$, $\eta_i$, $a_{\vec{k}}$, $\tilde R_{q,i}$ are given in Section [2.1.4](#s:Mikado){reference-type="ref" reference="s:Mikado"}. The partition of unity $\tilde\eta_i$ is defined in [\[e:propertyoftildeetai\]](#e:propertyoftildeetai){reference-type="eqref" reference="e:propertyoftildeetai"}. Here we also clarify a notational convention, taking [\[e:hom_base:tA\]](#e:hom_base:tA){reference-type="eqref" reference="e:hom_base:tA"} as an example.
We often use $( x, t, \lambda \Phi_i(x,t))$ as an argument of a function that involves the family $(\eta_i)_i$ of cutoff functions with disjoint supports; this means that, for each $i$, the argument is $( x, t, \lambda \Phi_i(x,t))$ on the support of $\eta_i$. Our goal in this section is to show that the solution $\rho$ to [\[e:hom_base:eq\]](#e:hom_base:eq){reference-type="eqref" reference="e:hom_base:eq"} homogenises to the solution $\bar \rho: \ensuremath{\mathbb{T}}^3 \times [0,T] \rightarrow \ensuremath{\mathbb{R}}$ of the following equation $$\label{e:hom_ed:eq} \begin{split} \partial_t \bar \rho + u \cdot \nabla \bar \rho =& \mathop{\rm div}\nolimits\big( \bar A \nabla \bar \rho \big), \\ \bar \rho|_{t=0} =& \rho_{\text{in}}, \end{split}$$ with elliptic tensor $\bar A: \ensuremath{\mathbb{T}}^3 \times [0,T] \rightarrow \ensuremath{\mathbb{R}}^{3 \times 3}$ given by $$\begin{aligned} \label{e:hom_ed:bA} \bar A(x,t) = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int A(x,t,\xi) \Big( \ensuremath{\mathrm{Id}}+ \sum_i \eta_i(x,t) \nabla\Phi_i^T(x,t) \nabla_\xi\chi_i^T(x,t,\xi) \Big) \,d\xi, \end{aligned}$$ and $\chi_i, \chi: \ensuremath{\mathbb{T}}^3 \times [0,T] \times \ensuremath{\mathbb{T}}^3 \rightarrow \ensuremath{\mathbb{R}}^3$ given by $$\begin{aligned} \label{e:hom_ed:chi} \chi_i(x,t,\xi) =& - \frac{\sigma^{1/2}(t)}{\kappa \lambda} \nabla \Phi_i^{-1}(x,t) \sum_{\vec{k}} a_{\vec{k}} \big( \tilde R_i(x,t) \big) \varphi_{\vec{k}}(\xi) \vec{k}, \\ \chi(x,t,\xi) =& \sum_i \chi_i(x,t,\xi) \eta_i(x,t).\end{aligned}$$ With the choice in
[\[e:hom_base_para_ex\]](#e:hom_base_para_ex){reference-type="eqref" reference="e:hom_base_para_ex"}, we collect the following facts: $$\begin{aligned} \| \nabla \Phi_i - \ensuremath{\mathrm{Id}}\|_{L^\infty} \leq& \lambda^{-2\gamma}, \label{e:hom_assump:Phi} \\ \tfrac12\bar \kappa \leq \bar A &\leq 2 \bar \kappa, \quad\textrm{ where }\bar \kappa= \kappa \left( 1 + \sum_i \frac{\sigma \eta_i^2}{\kappa^2\lambda^2} \right), \label{e:hom_assump:barkappa}\end{aligned}$$ and for $n=0,1$ $$\begin{aligned} \| D_t^n\nabla_x^m \chi_i(x,t,\cdot) \|_{C^1_\xi} \lesssim& \frac{\sigma^{1/2}}{\kappa \lambda} \tau^{-n}\ell^{-m}, \label{e:hom_assump:chi}\\ \| D_t^n\nabla_x^m \sigma \|_{L^\infty} \lesssim& \sigma \tau^{-n}\ell^{-m}, \label{e:hom_assump:sigma}\\ \| D_t^n\nabla_x^m \eta_i \|_{L^\infty} + \| D_t^n\nabla_x^m \tilde \eta_i \|_{L^\infty} +& \| D_t^n\nabla_x^m \nabla \Phi_i \|_{L^\infty} + \| D_t^n\nabla_x^m (a_{\vec{k}} \circ \tilde R_{i}) \|_{L^\infty} \lesssim \tau^{-n}\ell^{-m}. \label{e:hom_assump:slow}\end{aligned}$$ Notice that from [\[e:Dissipation\]](#e:Dissipation){reference-type="eqref" reference="e:Dissipation"} and [\[e:Dissipation2\]](#e:Dissipation2){reference-type="eqref" reference="e:Dissipation2"}, and with $\gamma>0$ given in [\[e:global_gamma\]](#e:global_gamma){reference-type="eqref" reference="e:global_gamma"}, we have $$\begin{aligned} \tau \|\bar \kappa\|_{L^\infty} \ell^{-2} \leq& 1, \label{e:hom_para_assump:taukappaell} \\ \tau \|\bar\kappa\|_{L^\infty} \lambda^2 \geq& \lambda^{\frac{b-1}{b+1}}, \label{e:hom_para_assump:excep} \\ \tau \|\bar \kappa\|_{L^\infty} \lambda^{\frac{2}{b}} \leq& \lambda^{-2\gamma}, \label{e:hom_para_assump:for_rem} \\ \|\bar \kappa\|_{L^\infty}^{-1} \tau^{-1} \lambda^{-2} \leq \kappa^{-1} \tau^{-1} \lambda^{-2} <& \lambda^{-2\gamma}. 
\label{e:hom_para_assump:kappataulambda}\end{aligned}$$ Furthermore, we need to choose $N_h \geq 3$ sufficiently large such that $$\begin{aligned} N_h \geq (b+1) \left( \frac{2}{b-1} + \frac{\theta}{b} \right). \label{e:hom_para_assump:Nh}\end{aligned}$$ Then [\[e:hom_para_assump:Nh\]](#e:hom_para_assump:Nh){reference-type="eqref" reference="e:hom_para_assump:Nh"} and [\[e:hom_para_assump:excep\]](#e:hom_para_assump:excep){reference-type="eqref" reference="e:hom_para_assump:excep"} give the following relation $$\begin{aligned} \lambda^2 \| \bar \kappa \|_{L^\infty} \kappa^{-1} &\lesssim \big( \tau \|\bar\kappa\|_{L^\infty} \lambda^2 \big)^{N_h}. \label{e:hom_para_assump:Nh_result}\end{aligned}$$ ## Quantitative estimates The quantitative version of the above homogenisation process is given as follows. **Proposition 12**. *Given [\[e:hom_base_para_ex\]](#e:hom_base_para_ex){reference-type="eqref" reference="e:hom_base_para_ex"} and the parameter setting in Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"} and Proposition [Proposition 5](#p:main){reference-type="ref" reference="p:main"}, let $\rho$ be the solution to [\[e:hom_base:eq\]](#e:hom_base:eq){reference-type="eqref" reference="e:hom_base:eq"}-[\[e:hom_base:A\]](#e:hom_base:A){reference-type="eqref" reference="e:hom_base:A"}. Let $\bar \rho$ be the solution to [\[e:hom_ed:eq\]](#e:hom_ed:eq){reference-type="eqref" reference="e:hom_ed:eq"}-[\[e:hom_ed:chi\]](#e:hom_ed:chi){reference-type="eqref" reference="e:hom_ed:chi"}. Choose $N_h$ such that [\[e:hom_para_assump:Nh\]](#e:hom_para_assump:Nh){reference-type="eqref" reference="e:hom_para_assump:Nh"} holds.
Define $\tilde \rho: \ensuremath{\mathbb{T}}^3 \times [0,T] \rightarrow \ensuremath{\mathbb{R}}$ such that $$\label{e:hom_trho_ansatz} \rho(x,t) = \bar{\rho}(x,t) + \frac{1}{\lambda} \chi \big( x,t,\lambda \Phi_i(x,t) \big)\cdot\nabla\bar\rho(x,t)+\tilde\rho(x,t).$$ Then $$\label{e:hom_trho_est} \begin{split} \| \tilde \rho(\cdot, T) \|^2_{L^2} +& \kappa \int_0^T \| \nabla \tilde \rho \|^2_{L^2} dt \lesssim \frac{1}{\lambda^2 \kappa \tau} \mathcal{D}_{N_h}, \\ \mathcal{D}_l :=& \int_0^T \| \bar \kappa^{\sfrac{1}{2}} \nabla \bar \rho \|^2_{L^2} dt + \sum_{i=1}^{l} \tau^{i} \int_0^T \| \bar \kappa^{\sfrac{i}{2}} \nabla^i \rho_{\text{in}} \|_{L^2}^2 dt \quad \text{ for } l \in \ensuremath{\mathbb{N}}. \end{split}$$* We also have the following corollaries. **Corollary 13**. *Let $\rho, \bar \rho$ and $\mathcal{D}_l$ be as in Proposition [Proposition 12](#p:hom_trho){reference-type="ref" reference="p:hom_trho"}, then $$\label{c:hom_trho_est} \begin{split} \| \rho(t) - \bar\rho(t) \|^2_{L^2} +& \kappa \int_0^T \| \nabla \rho - \nabla \bar \rho \|^2_{L^2} dt \lesssim \frac{1}{\lambda^2 \kappa \tau} \mathcal{D}_{N_h}. \end{split}$$* **Corollary 14**. *Let $\rho, \bar \rho$ and $\mathcal{D}_l$ be as in Proposition [Proposition 12](#p:hom_trho){reference-type="ref" reference="p:hom_trho"}, then $$\label{c:hom_dissipation_est} \begin{split} \Big| \| \rho(T) \|^2_{L^2} - \| \bar\rho(T) \|^2_{L^2} \Big| \lesssim \Big( \frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } + \lambda^{-2\gamma} \Big) \mathcal{D}_{N_h}. \end{split}$$* **Remark 15**.
*If $\rho_{\text{in}}$ satisfies, for every integer $i \in [0,N_h]$, $$\begin{aligned} \|\nabla^i \rho_{\text{in}}\|_{L^2}^2 \leq \lambda^{\frac{2i}{b}} \kappa \int_0^T \| \nabla \rho \|_{L^2}^2 dt,\end{aligned}$$ then for $\mathcal{D}_{N_h}$, we have $$\label{r:hom_dissipation_e1} \begin{split} \mathcal{D}_{N_h} \leq& \int_0^T \| \bar \kappa^{\sfrac{1}{2}} \nabla \bar \rho \|^2_{L^2} dt + \sum_{i=1}^{N_h} \tau^i \|\bar \kappa\|_{L^\infty}^i \lambda^{\frac{2i}{b}} \kappa \int_0^T \| \nabla \rho \|_{L^2}^2 dt, \\ \leq&_{\eqref{e:hom_para_assump:for_rem}} \int_0^T \| \bar \kappa^{\sfrac{1}{2}} \nabla \bar \rho \|^2_{L^2} dt + \kappa \int_0^T \| \nabla \rho \|_{L^2}^2 dt. \end{split}$$ Therefore, combining [\[c:hom_dissipation_est\]](#c:hom_dissipation_est){reference-type="eqref" reference="c:hom_dissipation_est"}, [\[r:hom_dissipation_e1\]](#r:hom_dissipation_e1){reference-type="eqref" reference="r:hom_dissipation_e1"} and [\[e:hom_para_assump:kappataulambda\]](#e:hom_para_assump:kappataulambda){reference-type="eqref" reference="e:hom_para_assump:kappataulambda"}, we have $$\begin{aligned} \Big| \| \rho(T) \|^2_{L^2} - \| \bar\rho(T) \|^2_{L^2} \Big| \lesssim \lambda^{ -\gamma } \min \left\{ \int_0^T \| \bar \kappa^{\sfrac{1}{2}} \nabla \bar \rho \|^2_{L^2} dt, \kappa \int_0^T \| \nabla \rho \|_{L^2}^2 dt \right\}.\end{aligned}$$* The proofs of Proposition [Proposition 12](#p:hom_trho){reference-type="ref" reference="p:hom_trho"}, Corollary [Corollary 13](#c:hom_trho){reference-type="ref" reference="c:hom_trho"} and Corollary [Corollary 14](#c:hom_dissipation){reference-type="ref" reference="c:hom_dissipation"} are given at the end of this section. The term $\tilde \rho$ in Proposition [Proposition 12](#p:hom_trho){reference-type="ref" reference="p:hom_trho"} is the homogenisation error. To estimate this error term, we first show in the following lemma that $\tilde \rho$ satisfies an explicit equation. **Lemma 16**.
*Let $\tilde \rho$ be as in [\[e:hom_trho_ansatz\]](#e:hom_trho_ansatz){reference-type="eqref" reference="e:hom_trho_ansatz"}. Then $\tilde \rho$ satisfies $$\label{e:hom_trho_eq} \begin{split} \partial_t\tilde\rho &+ u\cdot\nabla \tilde\rho - \mathop{\rm div}\nolimits\big( \tilde A\nabla\tilde\rho \big) = \mathop{\rm div}\nolimits\big( \tilde B \nabla\bar{\rho} \big) \\ &+ \frac{1}{\lambda} \Big( \mathop{\rm div}\nolimits\big( \tilde A \nabla^2\bar\rho \chi \big) + \mathop{\rm div}\nolimits\big( \tilde A\nabla_x \chi^T\nabla\bar{\rho} \big) \Big) \\ &- \frac{1}{\lambda} \Big( \chi \cdot D_t\nabla\bar{\rho} + \nabla\bar\rho \cdot D_t\chi \Big) \end{split}$$ with the matrices $\tilde B: \ensuremath{\mathbb{T}}^3 \times [0,T] \rightarrow \ensuremath{\mathbb{R}}^{3 \times 3}$ and $B: \ensuremath{\mathbb{T}}^3 \times [0,T] \times \ensuremath{\mathbb{T}}^3 \rightarrow \ensuremath{\mathbb{R}}^{3 \times 3}$ given by $$\begin{aligned} \tilde B(x,t) =& B \big( x, t, \lambda \Phi_i(x,t) \big), \label{e:hom_trho:tB} \\ B(x,t,\xi) =& A \Big( \ensuremath{\mathrm{Id}}+ \sum_i \eta_i \nabla\Phi_i^T \nabla_\xi\chi_i^T \Big) - \Big\langle A \Big( \ensuremath{\mathrm{Id}}+ \sum_i \eta_i \nabla\Phi_i^T \nabla_\xi\chi_i^T \Big) \Big\rangle, \label{e:hom_trho:B}\end{aligned}$$ and $D_t \chi$ denotes the transport derivative in $x$, i.e. $$\begin{aligned} D_t\chi = \big( \partial_t\chi + (u\cdot\nabla_x) \chi \big) (x,t,\lambda\Phi_i).
\end{aligned}$$* *Proof of Lemma [Lemma 16](#l:hom_trho_comp){reference-type="ref" reference="l:hom_trho_comp"}.* From the ansatz [\[e:hom_trho_ansatz\]](#e:hom_trho_ansatz){reference-type="eqref" reference="e:hom_trho_ansatz"}, omitting the argument $\big( x, t, \lambda \Phi_i(x,t) \big)$, direct computations give $$\begin{aligned} \nabla\tilde\rho &= \nabla\rho - \Big( \ensuremath{\mathrm{Id}}+ \sum_i \eta_i \nabla\Phi_i^T \nabla_{\xi} \chi_i^T \Big) \nabla\bar{\rho} - \frac{1}{\lambda} \sum_i \Big( \nabla^2\bar\rho\chi_i\eta_i + \nabla_x (\chi_i\eta_i)^T\nabla\bar{\rho} \Big), \label{l:hom_trho_comp:e3} \\ \partial_t\tilde\rho &= \partial_t\rho - \partial_t\bar\rho - \sum_i \eta_i \partial_t\Phi_i^T \nabla_\xi\chi_i^T \nabla\bar\rho - \frac{1}{\lambda} \sum_i \Big( \partial_t\nabla\bar{\rho}\cdot\chi_i\eta_i + \nabla\bar\rho \cdot \partial_t(\chi_i\eta_i) \Big). \label{l:hom_trho_comp:e4}\end{aligned}$$ Using $\partial_t\Phi_i + (u\cdot\nabla)\Phi_i=0$ and omitting the argument $\big( x, t, \lambda \Phi_i(x,t) \big)$, we obtain $$\begin{split} \partial_t\tilde\rho + u\cdot\nabla \tilde\rho - \mathop{\rm div}\nolimits\tilde A\nabla\tilde\rho =& \mathop{\rm div}\nolimits\left(\left[ A \Big( \ensuremath{\mathrm{Id}}+ \sum_i \eta_i \nabla\Phi_i^T \nabla_\xi\chi_i^T \Big) - \Big\langle A \Big( \ensuremath{\mathrm{Id}}+ \sum_i \eta_i \nabla\Phi_i^T \nabla_\xi\chi_i^T \Big) \Big\rangle \right]\nabla\bar{\rho} \right)\\ &+ \frac{1}{\lambda} \sum_i \Big( \mathop{\rm div}\nolimits\big( \tilde A \nabla^2\bar\rho \chi_i \eta_i \big) + \mathop{\rm div}\nolimits\big( \tilde A \nabla_x (\chi_i\eta_i) ^T\nabla\bar{\rho} \big) \Big) \\ &- \frac{1}{\lambda} \sum_i \Big( \partial_t\nabla\bar{\rho} \chi_i \eta_i + \partial_t(\chi_i\eta_i) \nabla\bar\rho \Big) - \frac{1}{\lambda} \sum_i \Big( u \nabla^2\bar\rho\chi_i\eta_i + u \nabla_x (\chi_i\eta_i)^T\nabla\bar{\rho} \Big) \\ =& \mathop{\rm div}\nolimits\big( B \nabla\bar{\rho} \big) + \frac{1}{\lambda} \sum_i \Big( \mathop{\rm 
div}\nolimits\big( \tilde A \nabla^2\bar\rho \chi_i \eta_i \big) + \mathop{\rm div}\nolimits\big( \tilde A \nabla_x (\chi_i\eta_i) ^T\nabla\bar{\rho} \big) \Big) \\ &- \frac{1}{\lambda} \sum_i \Big( \eta_i \chi_i \cdot D_t\nabla\bar{\rho} + \nabla\bar\rho \cdot D_t(\chi_i\eta_i) \Big). \end{split}$$ ◻ **Lemma 17**. *Let $\tilde \rho$ be as in [\[e:hom_trho_ansatz\]](#e:hom_trho_ansatz){reference-type="eqref" reference="e:hom_trho_ansatz"}. Then for any $N \in \ensuremath{\mathbb{N}}^+$, $\tilde \rho$ satisfies $$\label{e:hom_trho_eq_detail} \begin{split} \partial_t\tilde\rho &+ u\cdot\nabla \tilde\rho - \mathop{\rm div}\nolimits\big( \tilde A\nabla\tilde\rho \big) = \frac{1}{\lambda} \Big( E_1 + E_2 + E_3 + E_4 + \sum_{l=1}^N F_l + G_N \Big) \end{split}$$ with $$\begin{aligned} E_1 =& - \sum_{i,j} \mathop{\rm div}\nolimits\big( \nabla\partial_j\bar\rho \times (\nabla\Phi_i^T \tilde c^{(i)}_j) \big) \\ E_2 =& - \sum_{i,j,l} \mathop{\rm div}\nolimits\big( \big( \nabla_x \tilde c^{(i)}_{jl} \times \nabla\Phi_{i,l} \big) \partial_j\bar\rho \big) \\ E_3 =& \mathop{\rm div}\nolimits\big( \tilde A \nabla^2\bar\rho \chi \big) \\ E_4 =& \mathop{\rm div}\nolimits\big( \tilde A\nabla_x \chi^T\nabla\bar{\rho} \big)\end{aligned}$$ and $$\begin{aligned} F_1 =& 0, \\ F_l =& \frac{1}{\lambda^{l-1}} \sum_{i,1 \leq |\boldsymbol{\alpha}| \leq l-1} \mathop{\rm div}\nolimits\Big( f^{(l-1)}_{0,\boldsymbol{\alpha}}( x, t, \lambda \Phi_i) D_t \partial^{\boldsymbol{\alpha}} \bar \rho + f^{(l-1)}_{1,\boldsymbol{\alpha}}( x, t, \lambda \Phi_i) \partial^{\boldsymbol{\alpha}} \bar \rho \Big), \text{ for } l \geq 2, \label{e:hom_trho_FN} \\ G_l =& \frac{1}{\lambda^{l-1}} \sum_{i,1 \leq |\boldsymbol{\alpha}| \leq l} \Big( g^{(l)}_{0,\boldsymbol{\alpha}} ( x, t, \lambda \Phi_i) D_t \partial^{\boldsymbol{\alpha}} \bar \rho + g^{(l)}_{1,\boldsymbol{\alpha}} ( x, t, \lambda \Phi_i) \partial^{\boldsymbol{\alpha}} \bar \rho \Big) \label{e:hom_trho_GN}\end{aligned}$$ where $\tilde
c_j^{(i)}(x,t)=c_j^{(i)}(x,t,\lambda\Phi_i(x,t))$ and the functions $c_j^{(i)}$, $f^{(l)}_{0,\boldsymbol{\alpha}}$, $f^{(l)}_{1,\boldsymbol{\alpha}}$, $g^{(l)}_{0,\boldsymbol{\alpha}}$ and $g^{(l)}_{1,\boldsymbol{\alpha}}$, taking arguments $(x,t,\xi)$, satisfy the following estimates, for $n \in \{0,1\}$, $$\begin{aligned} \langle f^{(l)}_{0,\boldsymbol{\alpha}} \rangle = \langle f^{(l)}_{1,\boldsymbol{\alpha}} \rangle &= \langle g^{(l)}_{0,\boldsymbol{\alpha}} \rangle = \langle g^{(l)}_{1,\boldsymbol{\alpha}} \rangle = 0, \label{e:hom_trho_div_free} \\ \| D_t^n\nabla_x^m c^{(i)}_j \|_{L^\infty_\xi} &\lesssim \kappa \Big( 1 + \frac{\sigma^{1/2}\eta_i}{\kappa \lambda} \Big) \frac{\sigma^{1/2}\eta_i}{\kappa \lambda} \tau^{-n}\ell^{-m}, \label{e:hom_trho_c_est} \\ \big\| D_t^n\nabla_x^m f^{(l)}_{0,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} + \big\| D_t^n\nabla_x^m g^{(l)}_{0,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} &\lesssim \frac{\sigma^{1/2} \eta_i}{\kappa \lambda} \tau^{-n} \ell^{-(l-|\boldsymbol{\alpha}|+m)}, \label{e:hom_trho_f0g0_est} \\ \big\| D_t^n\nabla_x^m f^{(l)}_{1,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} + \big\| D_t^n\nabla_x^m g^{(l)}_{1,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} &\lesssim \frac{\sigma^{1/2} \eta_i}{\kappa \lambda} \tau^{-(n+1)} \ell^{-(l-|\boldsymbol{\alpha}|+m)}. \label{e:hom_trho_f1g1_est}\end{aligned}$$* *Proof of Lemma [Lemma 17](#l:hom_b_comp){reference-type="ref" reference="l:hom_b_comp"}.* Define $\chi_{ij} = \chi_i \cdot e_j$. Let $b_j = B e_j$ and $\tilde b_j = \tilde B e_j$. Direct computations show $$\label{e:hom_b_comp:e2} \begin{split} b_j&(x,t,\xi) = \sum_i \eta_i b^{(i)}_j \\ =& \sum_i \eta_i \nabla\Phi_i^{-1} \Bigg[ \kappa \nabla_\xi\chi_{ij} + \frac{\sigma^{1/2}}{\lambda} \sum_{\vec{k}} a_{\vec{k}}(\tilde R_i) \Big( H_{\vec{k}} \nabla\Phi_i^{-T}e_j + \eta_i H_{\vec{k}} \nabla_\xi \chi_{ij} - \eta_i \langle H_{\vec{k}} \nabla_\xi \chi_{ij} \rangle \Big) \Bigg]. 
\end{split}$$ We claim $\mathop{\rm div}\nolimits_\xi (\nabla \Phi_i b^{(i)}_j)=0$. Indeed, notice that $$\label{e:hom_b_comp:e3} \begin{split} \mathop{\rm div}\nolimits_\xi \big( H_{\vec{k}} \nabla_\xi \chi_{ij} \big) =& \big( W_{\vec{k}} \cdot \nabla_\xi \big) \chi_{ij} = - \frac{\sigma^{1/2}}{\kappa \lambda} \nabla \Phi_i^{-1} \sum_{\vec{k}} a_{\vec{k}} \big( \tilde R_i \big) \big( \psi_{\vec{k}} \vec{k} \cdot \nabla_\xi \big) \varphi_{\vec{k}} \vec{k} \cdot e_j = 0, \\ \mathop{\rm div}\nolimits_\xi \big( H_{\vec{k}} \nabla \Phi_i^{-T} e_j \big) =& W_{\vec{k}} \cdot \big( \nabla \Phi_i^{-T}e_j \big) = \psi_{\vec{k}} \big( \nabla \Phi_i^{-1} \vec{k} \big) \cdot e_j. \end{split}$$ Then plugging [\[e:hom_b\_comp:e3\]](#e:hom_b_comp:e3){reference-type="eqref" reference="e:hom_b_comp:e3"} and [\[e:hom_ed:chi\]](#e:hom_ed:chi){reference-type="eqref" reference="e:hom_ed:chi"} into [\[e:hom_b\_comp:e2\]](#e:hom_b_comp:e2){reference-type="eqref" reference="e:hom_b_comp:e2"} gives $\mathop{\rm div}\nolimits_\xi (\nabla \Phi_i b^{(i)}_j)=0$. Also notice that $\langle \nabla \Phi_i b^{(i)}_j \rangle = 0$; hence we can find a vector potential $c^{(i)}_j=c^{(i)}_j(x,t,\xi)$ so that $\eta_i \nabla\Phi_i \, b^{(i)}_j=\nabla_\xi\times c^{(i)}_j$. Using the fact $\det \nabla\Phi_i^T = 1$ and omitting the argument $\big( x, t, \lambda \Phi_i(x,t) \big)$, we have[^3] $$\begin{split} \frac{1}{\lambda}\nabla\times \big( \nabla \Phi_i^T c^{(i)}_j \big) &= \frac{1}{\lambda} \nabla\times \big( c^{(i)}_{jl}\nabla\Phi_{i,l} \big) = \frac{1}{\lambda} \nabla c^{(i)}_{jl}\times\nabla\Phi_{i,l} \quad \text{(chain rule)}\\ &= \frac{1}{\lambda}\nabla_x c^{(i)}_{jl}\times\nabla\Phi_{i,l} + \big( \nabla\Phi_i^T\nabla_\xi c^{(i)}_{jl} \big) \times \big( \nabla\Phi_i^Te_l \big)\\ &= \frac{1}{\lambda} \nabla_x c^{(i)}_{jl}\times\nabla\Phi_{i,l} + \nabla\Phi_i^{-1} \big( \nabla_\xi\times c^{(i)}_j \big) \quad \text{(linear transform for } \nabla \times ).
\end{split}$$ Therefore, omitting the argument $\big( x, t, \lambda \Phi_i(x,t) \big)$, we have $$\begin{aligned} b_j =& \sum_i \nabla \Phi_i^{-1}\nabla_\xi\times c^{(i)}_j = \sum_i \Big( \frac{1}{\lambda}\nabla\times (\nabla\Phi_i^T c^{(i)}_j)-\frac{1}{\lambda}\nabla_x c^{(i)}_{jl}\times \nabla\Phi_{i,l} \Big), \\ b_j\partial_j\bar\rho =& \frac{1}{\lambda}\nabla \times \big( \partial_j\bar\rho \nabla\Phi_i^T c^{(i)}_j \big) - \frac{1}{\lambda} \nabla\partial_j\bar\rho \times \big( \nabla\Phi_i^T c^{(i)}_j \big) - \frac{1}{\lambda} \big( \nabla_x c^{(i)}_{jl}\times\nabla\Phi_{i,l} \big) \partial_j\bar\rho.\end{aligned}$$ Let $\tilde c^{(i)}_j(x,t) = c^{(i)}_j \big( x,t, \lambda\Phi_i(x,t) \big)$. The estimate [\[e:hom_trho_c\_est\]](#e:hom_trho_c_est){reference-type="eqref" reference="e:hom_trho_c_est"} follows from [\[e:hom_assump:chi\]](#e:hom_assump:chi){reference-type="eqref" reference="e:hom_assump:chi"} by the Schauder estimates. This gives $E_1$ and $E_2$. $E_3$ and $E_4$ come directly from the second line of [\[e:hom_trho_eq\]](#e:hom_trho_eq){reference-type="eqref" reference="e:hom_trho_eq"}. Next, by induction, we show that the last line of [\[e:hom_trho_eq\]](#e:hom_trho_eq){reference-type="eqref" reference="e:hom_trho_eq"} gives $\sum_{l=1}^N F_l+G_N$. For $l=1$ and $|\boldsymbol{\alpha}| = 1$, let $g^{(1)}_{0,\boldsymbol{\alpha}}$ be given by $\chi$ and $g^{(1)}_{1,\boldsymbol{\alpha}}$ by $D_t\chi$ componentwise, and set $F_1=0$. Using [\[e:hom_assump:chi\]](#e:hom_assump:chi){reference-type="eqref" reference="e:hom_assump:chi"}, we have [\[e:hom_trho_div_free\]](#e:hom_trho_div_free){reference-type="eqref" reference="e:hom_trho_div_free"} and the estimates [\[e:hom_trho_f0g0_est\]](#e:hom_trho_f0g0_est){reference-type="eqref" reference="e:hom_trho_f0g0_est"} and [\[e:hom_trho_f1g1_est\]](#e:hom_trho_f1g1_est){reference-type="eqref" reference="e:hom_trho_f1g1_est"} for $l=1$ by the Schauder estimates.
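The step labelled "linear transform for $\nabla \times$" in the computation above rests on the cofactor identity $(Aa)\times(Ab)=\det(A)\,A^{-T}(a\times b)$, which for $A=\nabla\Phi_i^T$ with $\det \nabla\Phi_i^T = 1$ yields $\sum_l \big( \nabla\Phi_i^T\nabla_\xi c^{(i)}_{jl} \big) \times \big( \nabla\Phi_i^T e_l \big) = \nabla\Phi_i^{-1} \big( \nabla_\xi\times c^{(i)}_j \big)$. A minimal numerical sanity check of this identity (the matrix and vectors below are generic, not taken from the construction):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(3) + 0.2 * rng.standard_normal((3, 3))  # generic invertible matrix
a, b = rng.standard_normal(3), rng.standard_normal(3)

# cofactor identity: (A a) x (A b) = det(A) A^{-T} (a x b)
lhs = np.cross(A @ a, A @ b)
rhs = np.linalg.det(A) * np.linalg.inv(A).T @ np.cross(a, b)
assert np.allclose(lhs, rhs)

# volume-preserving case det(A) = 1: the determinant prefactor drops out
A /= np.cbrt(np.linalg.det(A))
assert np.allclose(np.cross(A @ a, A @ b), np.linalg.inv(A).T @ np.cross(a, b))
```

Together with $\nabla_\xi \times c = \sum_l \nabla_\xi c_l \times e_l$, this is exactly the computation carried out above.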
Assume [\[e:hom_trho_FN\]](#e:hom_trho_FN){reference-type="eqref" reference="e:hom_trho_FN"}, [\[e:hom_trho_GN\]](#e:hom_trho_GN){reference-type="eqref" reference="e:hom_trho_GN"}, [\[e:hom_trho_f0g0_est\]](#e:hom_trho_f0g0_est){reference-type="eqref" reference="e:hom_trho_f0g0_est"} and [\[e:hom_trho_f1g1_est\]](#e:hom_trho_f1g1_est){reference-type="eqref" reference="e:hom_trho_f1g1_est"} are true for $l$. Using the chain rule, for any function $h(x,t,\xi): \ensuremath{\mathbb{T}}^3 \times [0,T] \times \ensuremath{\mathbb{T}}^3 \rightarrow \ensuremath{\mathbb{R}}^3$, we have $$\begin{split} \frac{1}{\lambda} \mathop{\rm div}\nolimits\bigl( \nabla\Phi_i^{-1} h (x,t,\lambda\Phi_i) \bigr) &= (\mathop{\rm div}\nolimits_\xi h)(x,t,\lambda\Phi_i) + \frac{1}{\lambda} \mathop{\rm div}\nolimits_x( \nabla\Phi_i^{-1} h) (x,t,\lambda\Phi_i). \end{split}$$ Let $h_0 = \mathop{\rm div}\nolimits^{-1}_\xi g^{(l)}_{0,\boldsymbol{\alpha}}$ and $h_1 = \mathop{\rm div}\nolimits^{-1}_\xi g^{(l)}_{1,\boldsymbol{\alpha}}$; then $\langle h_0 \rangle = \langle h_1 \rangle = 0$. We can deduce (dropping the arguments $(x,t,\lambda\Phi_i)$) $$\label{e:hom_b_comp:e7} \begin{split} \frac{1}{\lambda} \mathop{\rm div}\nolimits\big( \nabla\Phi_i^{-1} h_0 D_t \partial^{\boldsymbol{\alpha}} \bar\rho +& \nabla\Phi_i^{-1} h_1 \partial^{\boldsymbol{\alpha}} \bar\rho \big) = g^{(l)}_{0,\boldsymbol{\alpha}} D_t \partial^{\boldsymbol{\alpha}} \bar\rho + g^{(l)}_{1,\boldsymbol{\alpha}} \partial^{\boldsymbol{\alpha}} \bar\rho\\ &+ \frac{1}{\lambda} \mathop{\rm div}\nolimits_x \big( \nabla\Phi_i^{-1} h_0 \big) D_t \partial^{\boldsymbol{\alpha}} \bar\rho + \frac{1}{\lambda} \mathop{\rm div}\nolimits_x \big( \nabla\Phi_i^{-1} h_1 \big) \partial^{\boldsymbol{\alpha}} \bar\rho\\ &+ \frac{1}{\lambda}\nabla\Phi_i^{-1} h_0 \cdot D_t \partial^{\boldsymbol{\alpha}} \nabla\bar\rho + \frac{1}{\lambda}\nabla\Phi_i^{-1} h_1 \cdot \partial^{\boldsymbol{\alpha}} \nabla \bar\rho.
\end{split}$$ Now let $f^{(l)}_{0,\boldsymbol{\alpha}} = \nabla\Phi_i^{-1} h_0$ and $f^{(l)}_{1,\boldsymbol{\alpha}} = \nabla\Phi_i^{-1} h_1$. The second and the third lines in [\[e:hom_b\_comp:e7\]](#e:hom_b_comp:e7){reference-type="eqref" reference="e:hom_b_comp:e7"} give the formulas for $g^{(l+1)}_{0,\boldsymbol{\alpha}}$ and $g^{(l+1)}_{1,\boldsymbol{\alpha}}$. Because $h_0$ and $h_1$ have zero mean in $\xi$, $g^{(l+1)}_{0,\boldsymbol{\alpha}}$ and $g^{(l+1)}_{1,\boldsymbol{\alpha}}$ also have zero mean in $\xi$. The estimates [\[e:hom_trho_f0g0_est\]](#e:hom_trho_f0g0_est){reference-type="eqref" reference="e:hom_trho_f0g0_est"} and [\[e:hom_trho_f1g1_est\]](#e:hom_trho_f1g1_est){reference-type="eqref" reference="e:hom_trho_f1g1_est"} for $l+1$ follow directly from the form of $g^{(l+1)}_{0,\boldsymbol{\alpha}}$ and $g^{(l+1)}_{1,\boldsymbol{\alpha}}$ and [\[e:hom_assump:chi\]](#e:hom_assump:chi){reference-type="eqref" reference="e:hom_assump:chi"}. ◻ *Proof of Proposition [Proposition 12](#p:hom_trho){reference-type="ref" reference="p:hom_trho"} and Corollary [Corollary 13](#c:hom_trho){reference-type="ref" reference="c:hom_trho"}.* Test [\[e:hom_trho_eq_detail\]](#e:hom_trho_eq_detail){reference-type="eqref" reference="e:hom_trho_eq_detail"} with $\tilde \rho$.
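The chain-rule identity displayed at the start of the induction step above can be sanity-checked by finite differences. Below is a minimal numerical sketch; the map $\Phi$, the field $h$, the base point and the value of $\lambda$ are toy choices introduced only for this check (the identity itself needs neither periodicity nor $\det \nabla\Phi = 1$):

```python
import numpy as np

lam = 3.0  # toy oscillation parameter

def Phi(x):  # toy smooth map playing the role of Phi_i
    return np.array([x[0] + 0.1 * np.sin(x[1]),
                     x[1] + 0.1 * np.sin(x[2]),
                     x[2] + 0.1 * np.sin(x[0])])

def DPhi(x):  # analytic Jacobian, (DPhi)_{mk} = d Phi_m / d x_k
    return np.array([[1.0, 0.1 * np.cos(x[1]), 0.0],
                     [0.0, 1.0, 0.1 * np.cos(x[2])],
                     [0.1 * np.cos(x[0]), 0.0, 1.0]])

def h(x, xi):  # toy smooth vector field h(x, xi)
    return np.array([np.sin(xi[0]) * x[1],
                     np.cos(xi[1]) + x[2],
                     np.sin(xi[2]) * x[0]])

def div(F, x, eps=1e-5):  # central-difference divergence of F: R^3 -> R^3
    out = 0.0
    for k in range(3):
        e = np.zeros(3); e[k] = eps
        out += (F(x + e)[k] - F(x - e)[k]) / (2 * eps)
    return out

x0 = np.array([0.3, 0.7, 1.1])
xi0 = lam * Phi(x0)

# LHS: (1/lam) div_x [ (DPhi)^{-1} h(x, lam Phi(x)) ], full x-dependence
lhs = div(lambda x: np.linalg.solve(DPhi(x), h(x, lam * Phi(x))), x0) / lam

# RHS: (div_xi h) + (1/lam) div_x [ (DPhi)^{-1} h ], with xi frozen at xi0
rhs = (div(lambda xi: h(x0, xi), xi0)
       + div(lambda x: np.linalg.solve(DPhi(x), h(x, xi0)), x0) / lam)

assert abs(lhs - rhs) < 1e-6
```

The point of the identity is visible in the check: the fast variable $\xi = \lambda\Phi(x)$ contributes the leading term $\mathop{\rm div}_\xi h$ with no factor of $\lambda^{-1}$, while the slow $x$-dependence enters only at order $\lambda^{-1}$.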
By Hölder's inequality and Young's inequality, we have $$\label{p:hom_trho:p:e1} \begin{split} \| \tilde \rho(\cdot, t) \|^2_{L^2} + \kappa \int_0^T \| \nabla \tilde \rho \|^2_{L^2} dt \lesssim& \frac{1}{\lambda^2 \kappa} \bigg( \sum_{l=1}^4 \int_0^T \|E_l\|^2_{H^{-1}} dt + \sum_{l=1}^N \int_0^T \|F_l\|^2_{H^{-1}} dt + \int_0^T \|G_N\|^2_{L^2} dt \bigg) \\ &+ \frac{1}{\lambda^2} \| (\chi \cdot \nabla \bar \rho) (\cdot,0)\|^2_{L^2} \end{split}$$ Recall the estimates for $\bar \rho$ from [\[e:rhobarenerg\]](#e:rhobarenerg){reference-type="eqref" reference="e:rhobarenerg"} and [\[e:rhobarflowB\]](#e:rhobarflowB){reference-type="eqref" reference="e:rhobarflowB"}, $$\label{p:hom_trho:p:e3} \begin{split} \int_0^T \| \bar \kappa^{\sfrac{m}{2}} \nabla^{m} \bar \rho \|_{L^2}^2 dt \lesssim& \tau^{-(m-1)} \mathcal{D}_{m-1}, \\ \int_0^T \| \bar \kappa^{\sfrac{m}{2}} D_t \nabla^{m} \bar \rho \|_{L^2}^2 dt \lesssim& \tau^{-(m+1)} \mathcal{D}_{m+1}. \\ \end{split}$$ Note that [\[e:hom_trho_c\_est\]](#e:hom_trho_c_est){reference-type="eqref" reference="e:hom_trho_c_est"} gives $$\begin{aligned} \label{p:hom_trho:p:e4} \| \bar \kappa^{-1} D_t^n\nabla_x^m c^{(i)}_j \|_{L^\infty_\xi} &\lesssim \tau^{-n}\ell^{-m}.\end{aligned}$$ Also [\[e:hom_assump:chi\]](#e:hom_assump:chi){reference-type="eqref" reference="e:hom_assump:chi"} and [\[e:hom_base:A\]](#e:hom_base:A){reference-type="eqref" reference="e:hom_base:A"} give $$\begin{aligned} \label{p:hom_trho:p:e5} \| \bar \kappa^{-1} \tilde A D_t^n\nabla_x^m \chi \|_{L^\infty_\xi} &\lesssim \tau^{-n}\ell^{-m}.\end{aligned}$$ With [\[p:hom_trho:p:e4\]](#p:hom_trho:p:e4){reference-type="eqref" reference="p:hom_trho:p:e4"} and [\[p:hom_trho:p:e5\]](#p:hom_trho:p:e5){reference-type="eqref" reference="p:hom_trho:p:e5"}, the estimates for $E_1$ and $E_3$ are the same. The estimates for $E_2$ and $E_4$ are also the same. Here, we estimate $E_1$ and $E_2$ as examples. 
$$\label{p:hom_trho:p:e6} \begin{split} \int_0^T \|E_1\|^2_{H^{-1}} dt \lesssim& \| \bar \kappa^{-1} c^{(i)}_j \|^2_{L^\infty} \int_0^T \| \bar \kappa \nabla^2 \bar \rho \|^2_{L^2} dt \\ \lesssim& \tau^{-1} \mathcal{D}_1, \\ \int_0^T \|E_2\|^2_{H^{-1}} dt \lesssim& \|\bar \kappa^{\sfrac{1}{2}}\|^2_{L^\infty} \| \bar \kappa^{-1} \nabla_x c^{(i)}_j \|^2_{L^\infty} \int_0^T \| \bar \kappa^{\sfrac{1}{2}} \nabla \bar \rho \|_{L^2}^2 dt \\ \lesssim& \|\bar \kappa\|_{L^\infty} \ell^{-2} \mathcal{D}_0 \lesssim_{\eqref{e:hom_para_assump:taukappaell}} \tau^{-1} \mathcal{D}_0. \end{split}$$ Note that $\mathcal{D}_1$ is defined in [\[e:hom_trho_est\]](#e:hom_trho_est){reference-type="eqref" reference="e:hom_trho_est"}. Next, we estimate $F_l$ and $G_{N_h}$. In the rest of this proof, we repeatedly use the fact that $|\nabla\Phi_i|<2$, which contributes a constant factor ultimately depending only on $N_h$. Note that from [\[e:hom_trho_f0g0_est\]](#e:hom_trho_f0g0_est){reference-type="eqref" reference="e:hom_trho_f0g0_est"}, [\[e:hom_trho_f1g1_est\]](#e:hom_trho_f1g1_est){reference-type="eqref" reference="e:hom_trho_f1g1_est"} and [\[e:hom_assump:barkappa\]](#e:hom_assump:barkappa){reference-type="eqref" reference="e:hom_assump:barkappa"} we have $$\label{p:hom_trho:p:e8} \begin{split} \big\| \bar \kappa^{-\sfrac{1}{2}} f^{(l)}_{0,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} + \big\| \bar \kappa^{-\sfrac{1}{2}} g^{(l)}_{0,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} &\lesssim \kappa^{-\sfrac{1}{2}} \ell^{-(l-|\boldsymbol{\alpha}|)}, \\ \big\| \bar \kappa^{-\sfrac{1}{2}} f^{(l)}_{1,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} + \big\| \bar \kappa^{-\sfrac{1}{2}} g^{(l)}_{1,\boldsymbol{\alpha}} \big\|_{L^\infty_\xi} &\lesssim \kappa^{-\sfrac{1}{2}} \tau^{-1} \ell^{-(l-|\boldsymbol{\alpha}|)}. \end{split}$$ Note that $F_1 = 0$.
For $F_l$ with $l \geq 2$, using [\[p:hom_trho:p:e3\]](#p:hom_trho:p:e3){reference-type="eqref" reference="p:hom_trho:p:e3"} and [\[p:hom_trho:p:e8\]](#p:hom_trho:p:e8){reference-type="eqref" reference="p:hom_trho:p:e8"}, we have $$\label{p:hom_trho:p:e9} \begin{split} \int_0^T \|F_l\|_{H^{-1}}^2 dt \lesssim& \frac{1}{\lambda^{2(l-1)}} \sum_{m=1,|\boldsymbol{\alpha}|=m}^{l-1} \| \bar \kappa^{-(m-1)} \|_{L^\infty} \| \bar \kappa^{-\sfrac{1}{2}} f^{(l-1)}_{0,\boldsymbol{\alpha}} \|^2_{L^\infty} \int_0^T \| \bar \kappa^{\sfrac{m}{2}} D_t \nabla^m \bar \rho \|_{L^2}^2 dt \\ &+ \| \bar \kappa^{-(m-1)} \|_{L^\infty} \| \bar \kappa^{-\sfrac{1}{2}} f^{(l-1)}_{1,\boldsymbol{\alpha}} \|^2_{L^\infty} \int_0^T \| \bar \kappa^{\sfrac{m}{2}} \nabla^m \bar \rho \|_{L^2}^2 dt \\ \lesssim& \frac{1}{\lambda^{2(l-1)}} \sum_{m=1}^{l-1} \| \bar \kappa \|_{L^\infty}^{-(m-1)} |\nabla \Phi_i|^{2(l-2)} \kappa^{-1} \ell^{-2(l-1-m)} \tau^{-(m+1)} \mathcal{D}_{m+1} \\ \lesssim& (\lambda \ell)^{-2(l-1)} \| \bar \kappa \|_{L^\infty} \kappa^{-1} \sum_{m=1}^{l-1} \big( \tau \|\bar\kappa\|_{L^\infty} \ell^{-2} \big)^{-m} \tau^{-1} \mathcal{D}_{m+1} \\ (\eqref{e:hom_para_assump:taukappaell}, m=l-1) \lesssim& (\lambda \ell)^{-2(l-1)} \| \bar \kappa \|_{L^\infty} \kappa^{-1} \big( \tau \|\bar\kappa\|_{L^\infty} \ell^{-2} \big)^{-(l-1)} \tau^{-1} \mathcal{D}_{l} \\ \lesssim& \| \bar \kappa \|_{L^\infty} \kappa^{-1} \big( \tau \|\bar\kappa\|_{L^\infty} \lambda^2 \big)^{-(l-1)} \tau^{-1} \mathcal{D}_{l} \\ (\eqref{e:hom_para_assump:kappataulambda}, l=2) \lesssim& \kappa^{-1} \tau^{-1} \lambda^{-2} \tau^{-1} \mathcal{D}_{l} \\ \lesssim& \tau^{-1} \mathcal{D}_{l}.
\end{split}$$ For $G_{N_h}$, using [\[p:hom_trho:p:e3\]](#p:hom_trho:p:e3){reference-type="eqref" reference="p:hom_trho:p:e3"} and [\[p:hom_trho:p:e8\]](#p:hom_trho:p:e8){reference-type="eqref" reference="p:hom_trho:p:e8"}, we have $$\label{p:hom_trho:p:e10} \begin{split} \int_0^T \|G_{N_h}\|_{L^2}^2 dt \lesssim& \frac{1}{\lambda^{2(N_h-1)}} \sum_{m=1,|\boldsymbol{\alpha}|=m}^{N_h} \| \bar \kappa^{-(m-1)} \|_{L^\infty} \| \bar \kappa^{-\sfrac{1}{2}} g^{(N_h)}_{0,\boldsymbol{\alpha}} \|^2_{L^\infty} \int_0^T \| \bar \kappa^{\sfrac{m}{2}} D_t \nabla^m \bar \rho \|_{L^2}^2 dt \\ &+ \| \bar \kappa^{-(m-1)} \|_{L^\infty} \| \bar \kappa^{-\sfrac{1}{2}} g^{(N_h)}_{1,\boldsymbol{\alpha}} \|^2_{L^\infty} \int_0^T \| \bar \kappa^{\sfrac{m}{2}} \nabla^m \bar \rho \|_{L^2}^2 dt \\ \lesssim& \lambda^{-2(N_h-1)} \sum_{m=1}^{N_h} \| \bar \kappa \|_{L^\infty}^{-(m-1)} |\nabla \Phi_i|^{2(N_h-1)} \kappa^{-1} \ell^{-2(N_h-m)} \tau^{-(m+1)} \mathcal{D}_{m+1} \\ \lesssim& (\lambda \ell)^{-2(N_h-1)} \| \bar \kappa \|_{L^\infty} \kappa^{-1} \ell^{-2} \sum_{m=1}^{N_h} \big( \tau \|\bar\kappa\|_{L^\infty} \ell^{-2} \big)^{-m} \tau^{-1} \mathcal{D}_{m+1} \\ (\eqref{e:hom_para_assump:taukappaell}, m=N_h) \lesssim& (\lambda \ell)^{-2(N_h-1)} \| \bar \kappa \|_{L^\infty} \kappa^{-1} \ell^{-2} \big( \tau \|\bar\kappa\|_{L^\infty} \ell^{-2} \big)^{-N_h} \tau^{-1} \mathcal{D}_{N_h+1} \\ \lesssim& \lambda^2 \| \bar \kappa \|_{L^\infty} \kappa^{-1} \big( \tau \|\bar\kappa\|_{L^\infty} \lambda^2 \big)^{-N_h} \tau^{-1} \mathcal{D}_{N_h+1} \\ (\eqref{e:hom_para_assump:Nh_result}) \lesssim& \tau^{-1} \mathcal{D}_{N_h+1}.
\end{split}$$ For the last term in [\[p:hom_trho:p:e1\]](#p:hom_trho:p:e1){reference-type="eqref" reference="p:hom_trho:p:e1"} involving initial data, we have $$\begin{aligned} \label{p:hom_trho:p:e7} \| (\chi \cdot \nabla \bar \rho) (\cdot,0)\|^2_{L^2} \lesssim& \| \bar \kappa^{-\sfrac{1}{2}} \chi \|_{L^\infty}^2 \| \bar \kappa^{\sfrac{1}{2}} \nabla \rho_{\text{in}} \|_{L^2}^2 \lesssim \kappa^{-1} \| \bar \kappa^{\sfrac{1}{2}} \nabla \rho_{\text{in}} \|_{L^2}^2 \lesssim \kappa^{-1}\tau^{-1} \mathcal{D}_0.\end{aligned}$$ For the proof of Corollary [Corollary 13](#c:hom_trho){reference-type="ref" reference="c:hom_trho"}, the only difference is that we need to estimate the corrector as an error term, which is the same as [\[p:hom_trho:p:e7\]](#p:hom_trho:p:e7){reference-type="eqref" reference="p:hom_trho:p:e7"}. This concludes our proof. ◻ *Proof of Corollary [Corollary 14](#c:hom_dissipation){reference-type="ref" reference="c:hom_dissipation"}.* To simplify the formulas, we introduce the following shorthand notation: $$\begin{aligned} M(x,t,\xi) :=& \ensuremath{\mathrm{Id}}+ \sum_{i}\eta_i(x,t)\nabla\Phi_i^T( x, t) \nabla_{\xi} \chi_i^T\big( x, t, \xi\big).\end{aligned}$$ We first claim the following estimates, omitting the argument $(x,t,\lambda \Phi_i)$: $$\begin{aligned} \Big| \kappa \int_0^T \int \big( M^TM - \langle M^TM \rangle \big) \nabla \bar \rho \cdot \nabla \bar \rho dxdt \Big| \lesssim& \Big( \frac{1}{\lambda \ell} + \frac{1}{ \lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \Big) \mathcal{D}_1 , \label{c:hom_dissipation:claim1} \\ \frac{\kappa}{\lambda^2} \int_0^T \| \nabla^2\bar\rho \chi \|^2_{L^2} + \| \nabla_x \chi^T \nabla\bar{\rho} \|^2_{L^2} dt \lesssim& \frac{1}{\lambda^2 \kappa \tau} \mathcal{D}_1.
\label{c:hom_dissipation:claim2}\end{aligned}$$ Direct computations give $$\begin{aligned} M^TM &= \ensuremath{\mathrm{Id}}+ \sum_i\eta_i(\nabla\Phi_i^T \nabla_{\xi} \chi_i^T + \nabla_{\xi} \chi_i \nabla\Phi_i) + \sum_i\eta_i^2\nabla_{\xi} \chi_i \nabla\Phi_i \nabla\Phi_i^T \nabla_{\xi} \chi_i^T, \\ M^TM - \langle M^TM \rangle &= \sum_i\eta_i(\nabla\Phi_i^T \nabla_{\xi} \chi_i^T + \nabla_{\xi} \chi_i \nabla\Phi_i) \\ &+\sum_i\eta_i^2(\nabla_{\xi} \chi_i \nabla\Phi_i \nabla\Phi_i^T \nabla_{\xi} \chi_i^T - \langle \nabla_{\xi} \chi_i \nabla\Phi_i \nabla\Phi_i^T \nabla_{\xi} \chi_i^T \rangle).\end{aligned}$$ Furthermore, by using [\[e:hom_ed:chi\]](#e:hom_ed:chi){reference-type="eqref" reference="e:hom_ed:chi"} and the properties of Mikado flows in Section [2.1.4](#s:Mikado){reference-type="ref" reference="s:Mikado"}, we compute $$\begin{aligned} \kappa\nabla_\xi\chi_i\nabla\Phi_i\nabla\Phi_i^T\nabla_\xi\chi_i^T&=\frac{\sigma}{\kappa\lambda^2}\sum_{\vec{k}}a_{\vec{k}}^2\nabla\Phi_i^{-1}(\vec{k}\otimes\nabla_\xi\varphi_{\vec{k}})\nabla\Phi_i\nabla\Phi_i^T(\nabla_\xi\varphi_{\vec{k}}\otimes\vec{k})\nabla\Phi_i^{-T}\\ &=\frac{\sigma}{\kappa\lambda^2}\sum_{\vec{k}}a_{\vec{k}}^2\nabla\Phi_i^{-1}|\nabla\Phi_i^T\nabla_\xi\varphi_{\vec{k}}|^2 \big( \vec{k}\otimes\vec{k} \big) \nabla\Phi_i^{-T}.\end{aligned}$$ In particular, comparing with the derivation of $\bar{A}$, we deduce that $$\begin{aligned} \label{c:hom_dissipation_eq3} \big| \kappa \langle M^TM \rangle - \bar A \big| & \lesssim \sum_i\eta_i^2\|\nabla\Phi_i-\ensuremath{\mathrm{Id}}\|_{L^\infty}|\bar{A}|\lesssim \lambda^{-2\gamma} |\bar A|.\end{aligned}$$ Here we use [\[e:hom_ed:chi\]](#e:hom_ed:chi){reference-type="eqref" reference="e:hom_ed:chi"}, [\[e:hom_assump:Phi\]](#e:hom_assump:Phi){reference-type="eqref" reference="e:hom_assump:Phi"}, [\[e:hom_assump:barkappa\]](#e:hom_assump:barkappa){reference-type="eqref" reference="e:hom_assump:barkappa"} and the formula for the homogenised matrix [\[e:defbarA\]](#e:defbarA){reference-type="eqref"
reference="e:defbarA"}.\ We now prove the estimate claimed in this corollary. Using [\[l:hom_trho_comp:e3\]](#l:hom_trho_comp:e3){reference-type="eqref" reference="l:hom_trho_comp:e3"} and omitting $(x,t,\lambda \Phi_i)$, we have $$\begin{aligned} \kappa &\int_0^T \| \nabla \rho\|_{L^2}^2 dt = \kappa \int_0^T \Big\| M \nabla \bar \rho + \frac{1}{\lambda} (\nabla^2\bar\rho \chi + \nabla_x \chi^T \nabla\bar{\rho}) + \nabla \tilde\rho \Big\|_{L^2}^2 dt \\ \leq& \Big( 1+\frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \Big) \kappa \int_0^T \| M \nabla \bar \rho \|_{L^2}^2 dt + C\big( 1 + \lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} \big) \kappa \int_0^T \| \nabla \tilde\rho \|_{L^2}^2 dt \\ & + C\big( 1 + \lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} \big) \frac{\kappa}{\lambda^2} \int_0^T \| \nabla^2\bar\rho \chi \|^2_{L^2} + \| \nabla_x \chi^T \nabla\bar{\rho} \|^2_{L^2} dt \\ \leq & \Big( 1+\frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \Big) \kappa \int_0^T \int \big( M^TM - \langle M^TM \rangle \big) \nabla \bar \rho \cdot \nabla \bar \rho dxdt + C\Big( 1+\frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \Big) \int_0^T \int \bar A \nabla \bar \rho \cdot \nabla \bar \rho dxdt \\ &+ \Big( 1+\frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \Big) \int_0^T \int \big( \kappa \langle M^TM \rangle - \bar A \big) \nabla \bar \rho \cdot \nabla \bar \rho dxdt + C\big( 1 + \lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} \big) \kappa \int_0^T \| \nabla \tilde\rho \|_{L^2}^2 dt \\ &+ C\big( 1 + \lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} \big) \frac{\kappa}{\lambda^2} \int_0^T \| \nabla^2\bar\rho \chi \|^2_{L^2} + \| \nabla_x \chi^T \nabla\bar{\rho} \|^2_{L^2} dt \\ \leq& C\Big( \frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } + \frac{1}{\lambda \ell} + \lambda^{-2\gamma} \Big) \mathcal{D}_{N_h} +\Big( 1+\frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \Big) \int_0^T \int \bar A \nabla
\bar \rho \cdot \nabla \bar \rho dxdt.\end{aligned}$$ We estimate the terms after the second inequality by [\[c:hom_dissipation:claim1\]](#c:hom_dissipation:claim1){reference-type="eqref" reference="c:hom_dissipation:claim1"}, [\[c:hom_dissipation_eq3\]](#c:hom_dissipation_eq3){reference-type="eqref" reference="c:hom_dissipation_eq3"} together with Proposition [Proposition 10](#p:stability_in_ellipticity){reference-type="ref" reference="p:stability_in_ellipticity"}, Proposition [Proposition 12](#p:hom_trho){reference-type="ref" reference="p:hom_trho"} and [\[c:hom_dissipation:claim2\]](#c:hom_dissipation:claim2){reference-type="eqref" reference="c:hom_dissipation:claim2"}, respectively. Then we conclude $$\begin{aligned} \kappa \int_0^T \| \nabla \rho\|_{L^2}^2 dt- \int_0^T \int \bar A \nabla \bar \rho \cdot \nabla \bar \rho dxdt\lesssim \Big( \frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } + \frac{1}{\lambda \ell} + \lambda^{-2\gamma} \Big)\mathcal{D}_{N_h}.\end{aligned}$$ The proof of [\[c:hom_dissipation:claim2\]](#c:hom_dissipation:claim2){reference-type="eqref" reference="c:hom_dissipation:claim2"} is the same as that of the terms $E_3$ and $E_4$ in the proof of Proposition [Proposition 12](#p:hom_trho){reference-type="ref" reference="p:hom_trho"}, so we omit it here. Now we prove our claim [\[c:hom_dissipation:claim1\]](#c:hom_dissipation:claim1){reference-type="eqref" reference="c:hom_dissipation:claim1"}.
As in the proof of Lemma [Lemma 17](#l:hom_b_comp){reference-type="ref" reference="l:hom_b_comp"}, we can find a matrix potential $\omega := \sum_i \eta_i \omega^{(i)}$ taking arguments $(x,t,\xi)$ such that, for $n \in \{0,1\}$, $$\begin{aligned} \big( M^TM - \langle M^TM \rangle \big)_{jl} =& \partial_{\xi_1} \omega_{jl} = \sum_i \eta_i \partial_{\xi_1} \omega_{jl}^{(i)}, \nonumber \\ \big\| D_t^n\nabla_x^m \omega_j \big\|_{L^\infty_\xi} \lesssim& \sum_i\Big( \frac{\sigma \eta_i^2}{\kappa^2 \lambda^2} + \frac{\sigma^{1/2} \eta_i}{\kappa \lambda} \Big) \tau^{-n} \ell^{-m} \nonumber \\ \lesssim& \sum_i\Big( 1 + \frac{\sigma \eta_i^2}{\kappa^2 \lambda^2} \Big) \tau^{-n} \ell^{-m} \lesssim \bar \kappa \kappa^{-1} \tau^{-n} \ell^{-m}. \label{c:hom_dissipation_eq7}\end{aligned}$$ Now by the chain rule, $$\begin{aligned} \nabla \big( \omega_{jl}^{(i)}(x,t,\lambda \Phi_i) \big) &= ( \nabla_{x} \omega_{jl}^{(i)}) (x,t,\lambda \Phi_i) + \lambda \nabla \Phi_i^T (\nabla_\xi \omega_{jl}^{(i)}) (x,t,\lambda \Phi_i).\end{aligned}$$ This leads to $$\begin{aligned} (\partial_{\xi_1} \omega_{jl}^{(i)}) (x,t,\lambda \Phi_i) = \frac{1}{\lambda} (\nabla \Phi_i^{-T})_{1m} \partial_{m} \big( \omega_{jl}^{(i)}(x,t,\lambda \Phi_i) \big) - \frac{1}{\lambda} (\nabla \Phi_i^{-T})_{1m} ( \partial_{x_m} \omega_{jl}^{(i)}) (x,t,\lambda \Phi_i).\end{aligned}$$ Omitting the argument $(x,t,\lambda \Phi_i)$, we consider the following oscillation term and use integration by parts: $$\label{c:hom_dissipation_eq9} \begin{split} \kappa \int_0^T & \int \big( (M^TM) - \langle M^TM \rangle \big) \nabla \bar \rho \cdot \nabla \bar \rho dxdt \\ =& \frac{\kappa}{\lambda} \sum_{i,j,l,m} \Big( \int_0^T \int \eta_i (\nabla \Phi_i^{-T})_{1m} \partial_{m} \omega_{jl}^{(i)} \partial_j \bar \rho \partial_l \bar \rho dxdt - \int_0^T \int \eta_i (\nabla \Phi_i^{-T})_{1m} \partial_{x_m} \omega_{jl}^{(i)} \partial_j \bar \rho \partial_l \bar \rho dxdt \Big) \\ =& \frac{\kappa}{\lambda} \sum_{i,j,l,m} \bigg( - \int_0^T
\int \omega_{jl}^{(i)} \partial_{m} \Big( \eta_i (\nabla \Phi_i^{-T})_{1m} \partial_j \bar \rho \partial_l \bar \rho \Big) dxdt - \int_0^T \int \eta_i (\nabla \Phi_i^{-T})_{1m} \partial_{x_m} \omega_{jl}^{(i)} \partial_j \bar \rho \partial_l \bar \rho dxdt \bigg). \end{split}$$ To estimate the terms above, we use [\[c:hom_dissipation_eq7\]](#c:hom_dissipation_eq7){reference-type="eqref" reference="c:hom_dissipation_eq7"} and [\[e:hom_assump:slow\]](#e:hom_assump:slow){reference-type="eqref" reference="e:hom_assump:slow"}, together with Young's inequality applied to the decomposition $\bar \kappa | \nabla \bar \rho|| \nabla^2 \bar \rho| = \tau^{-\sfrac{1}{4}} \bar \kappa^{\sfrac{1}{4}} | \nabla \bar \rho| \cdot \tau^{\sfrac{1}{4}} \bar \kappa^{\sfrac{3}{4}} | \nabla^2 \bar \rho|$. Then we invoke the energy estimate [\[p:hom_trho:p:e3\]](#p:hom_trho:p:e3){reference-type="eqref" reference="p:hom_trho:p:e3"} and use [\[e:hom_para_assump:kappataulambda\]](#e:hom_para_assump:kappataulambda){reference-type="eqref" reference="e:hom_para_assump:kappataulambda"}: $$\begin{aligned} \kappa \int_0^T \int& \big( (M^TM) - \langle M^TM \rangle \big) \nabla \bar \rho \cdot \nabla \bar \rho dxdt \\ \lesssim&_{\eqref{c:hom_dissipation_eq7}, \eqref{e:hom_assump:slow}} \frac{1}{\lambda \ell} \int_0^T \int \bar \kappa | \nabla \bar \rho|^2 dxdt + \frac{1}{\lambda} \int_0^T \int \bar \kappa | \nabla \bar \rho|| \nabla^2 \bar \rho| dxdt \\ \lesssim& \frac{1}{\lambda \ell} \int_0^T \int \bar \kappa | \nabla \bar \rho|^2 dxdt + \frac{1}{\lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \int_0^T \int \bar \kappa | \nabla \bar \rho|^2 dxdt + \frac{\tau^{\sfrac{1}{2}}}{\lambda\kappa^{\sfrac{1}{2}}} \int_0^T \int | \bar \kappa \nabla^2 \bar \rho|^2 dxdt \\ \lesssim&_{\eqref{p:hom_trho:p:e3}} \Big( \frac{1}{\lambda \ell} + \frac{1}{ \lambda \kappa^{\sfrac{1}{2}} \tau^{\sfrac{1}{2}} } \Big) \mathcal{D}_1.\end{aligned}$$ This concludes the proof. ◻ # Time averaging {#s:time} ## Setup Consider constants $\kappa_1>\kappa_0>0$, and the oscillatory coefficient $\tilde\kappa=\tilde\kappa(x,t,\tau^{-1}t)$ of the form $$\label{e:t-osc-tildekappa} \tilde\kappa(x,t,\tau^{-1}t)=\kappa_0+\eta(x,t,\tau^{-1}t)\kappa_1\,,$$ where $s\mapsto\eta(x,t,s)$ is $1$-periodic, nonnegative and smooth with $$\langle\eta\rangle:=\int_0^1\eta(x,t,s)\,ds=1\,.$$ We define $$\label{e:t_defg} \kappa:=\kappa_0+\kappa_1,\quad g(x,t,s):=\frac{\kappa_1}{\kappa}(\eta(x,t,s)-1),\quad g_\tau(x,t):=g(x,t,\tau^{-1}t),$$ so that $\langle g\rangle=0$, $g$ is bounded, and we have the identity $$\label{e:t-osc-g} \tilde\kappa(x,t,\tau^{-1}t)=\kappa_0+\kappa_1 \eta(x,t,\tau^{-1}t)=\kappa+\kappa g_\tau(x,t).$$ In this section we consider the solutions $\rho$ and $\bar{\rho}$ of the following equations $$\label{e:equ-t-oscillating} \begin{split} \partial_t\rho+u\cdot\nabla \rho&=\mathop{\rm div}\nolimits\left(\tilde\kappa\nabla\rho\right)\\ \rho|_{t=0}&=\rho_{in}, \end{split}$$ and $$\label{e:equ-t-average} \begin{split} \partial_t\bar{\rho}+u\cdot\nabla \bar{\rho}&=\mathop{\rm div}\nolimits\left(\kappa\nabla\bar{\rho}\right)\\ \bar\rho|_{t=0}&=\rho_{in}. \end{split}$$ In view of the definitions of $\kappa$ and $\tilde \kappa$, [\[e:equ-t-average\]](#e:equ-t-average){reference-type="eqref" reference="e:equ-t-average"} can be seen as the time-averaged version of [\[e:equ-t-oscillating\]](#e:equ-t-oscillating){reference-type="eqref" reference="e:equ-t-oscillating"}. Further, we define the respective total dissipations $$\label{e:t-osc-D} \tilde D:=\int_0^T\|\tilde\kappa^{\sfrac12}\nabla\rho\|_{L^2}^2\,dt\,,\quad D:=\kappa\int_0^T\|\nabla\bar\rho\|_{L^2}^2\,dt.$$ We make the following assumptions: there exists $\mu \ge \|\nabla u\|_{\infty}$ (one should think of $\mu^{-1} \sim \|\nabla u\|_{\infty}^{-1}$ as the advection time-scale) such that 1.
The fast time-scale is shorter than the advection time-scale, i.e. $\tau \mu <1/2$. Moreover, there exists $N\in\ensuremath{\mathbb{N}}$ such that $$(\tau\mu )^{N-1}\kappa<\kappa_0.$$ 2. Control of higher-order spatial derivatives: for any $n\leq N$ $$\label{Atime_scales} \|\nabla u\|_{C^n}^2\leq C_N\left(\frac{\mu}{\kappa}\right)^n\mu^2.$$ 3. Bound on the initial data: for any $1\leq n\leq N$ $$\label{Atime_Ini} \|\rho_{in}\|_{H^n}^2\leq C_N \left(\frac{\mu}{\kappa}\right)^nD.$$ 4. Control of cutoff and its slow derivatives: for any $1\leq n\leq N$ $$\label{e:Ggcon} \| \nabla^n \eta \|_{L^\infty} \leq C_N (\tau\mu )^2 \left(\frac{\mu}{\kappa}\right)^\frac{n}{2}, \qquad \| D_t \eta \|_{L^\infty} \leq C_N \mu.$$ **Proposition 18**. *Under the assumptions (A1), (A2), (A3), (A4) we have $$\label{e:osc-t-proximity} \|\rho(T)-\bar{\rho}(T)\|_{L^2}^2+\kappa\int_0^T\|\nabla\rho-\nabla\bar{\rho}\|^2_{L^2}\,dt\lesssim (\tau\mu )^2D.$$ Moreover, the total dissipation satisfies $$\label{e:osc-t-dissipation} |D-\tilde D|\lesssim (\tau\mu )(D+\tilde D).$$ The implicit constants depend only on $N$ in the assumptions.* *Proof.* Starting with $\rho^{(0)}:=\bar{\rho}$ being the solution of [\[e:equ-t-average\]](#e:equ-t-average){reference-type="eqref" reference="e:equ-t-average"}, we construct the solution $\rho$ of [\[e:equ-t-oscillating\]](#e:equ-t-oscillating){reference-type="eqref" reference="e:equ-t-oscillating"} by successive approximations $\rho^{(i)}$, $i=1,2,\dots,N$, defined to be solutions of $$\label{e:time_setup} \begin{split} \partial_t\rho^{(i)}+u\cdot\nabla \rho^{(i)}&=\kappa\Delta\rho^{(i)}+\kappa\mathop{\rm div}\nolimits\left(g_\tau\nabla\rho^{(i-1)}\right)\\ \rho^{(i)}|_{t=0}&=\rho_{in}.
\end{split}$$ We will show below in Lemma [Lemma 19](#l:improved-energy){reference-type="ref" reference="l:improved-energy"} the improved energy bound[^4] $$\|\rho^{(i)}(T)-\rho^{(i-1)}(T)\|_{L^2}^2+\kappa\int_0^T\|\nabla\rho^{(i)}-\nabla\rho^{(i-1)}\|^2_{L^2}\,dt\leq C_N(\tau \mu)^{2i}D.$$ Setting $$\rho^{(error)}:=\rho-\rho^{(N)},$$ and using [\[e:equ-t-oscillating\]](#e:equ-t-oscillating){reference-type="eqref" reference="e:equ-t-oscillating"}, [\[e:time_setup\]](#e:time_setup){reference-type="eqref" reference="e:time_setup"} and [\[e:t-osc-g\]](#e:t-osc-g){reference-type="eqref" reference="e:t-osc-g"}, we may write the equation for $\rho^{(error)}$ as $$\begin{split} \partial_t\rho^{(error)}+u\cdot\nabla\rho^{(error)}&=\kappa\Delta\rho^{(error)}+\kappa\mathop{\rm div}\nolimits(g_\tau\nabla\rho^{(error)})+\kappa\mathop{\rm div}\nolimits(g_\tau (\nabla\rho^{(N)}-\nabla\rho^{(N-1)}))\,\\ &= \kappa_0\Delta\rho^{(error)}+\kappa_1\mathop{\rm div}\nolimits(\eta\nabla\rho^{(error)})+\kappa \mathop{\rm div}\nolimits(g_\tau (\nabla\rho^{(N)}-\nabla\rho^{(N-1)}))\,. \end{split}$$ Since $\eta\geq 0$ and $|g|\leq 2$, the standard energy estimate and an application of Young's inequality yield $$\|\rho^{(error)}(T)\|_{L^2}^2+\kappa_0\int_0^T\|\nabla\rho^{(error)}\|_{L^2}^2\,dt\leq C\frac{\kappa}{\kappa_0}\kappa\int_0^T\|\nabla\rho^{(N)}-\nabla\rho^{(N-1)}\|_{L^2}^2\,dt.$$ Combining with the improved energy bound and (A1) we deduce $$\begin{split} \|\rho^{(error)}(T)\|_{L^2}^2+\kappa\int_0^T\|\nabla\rho^{(error)}\|_{L^2}^2\,dt&\leq C_N\left(\frac{\kappa}{\kappa_0}\right)^2(\tau\mu)^{2N}D\\ &\leq C_N(\tau\mu)^2D\,.
\end{split}$$ Consequently, $$\begin{aligned} \|\rho(T)-\bar{\rho}(T)\|_{L^2}^2+\kappa\int_0^T\|\nabla\rho-\nabla\bar{\rho}\|^2_{L^2}\,dt&\leq 2\|\rho(T)-\rho^{(N)}(T)\|_{L^2}^2+2\kappa\int_0^T\|\nabla\rho-\nabla\rho^{(N)}\|^2_{L^2}\,dt \\ &\quad+2^N\sum_{i=0}^{N-1}\Big(\|\rho^{(i+1)}(T)-\rho^{(i)}(T)\|_{L^2}^2+\kappa\int_0^T\|\nabla\rho^{(i+1)}-\nabla\rho^{(i)}\|^2_{L^2}\,dt\Big)\\ &\lesssim (\tau\mu)^2D\,,\end{aligned}$$ verifying [\[e:osc-t-proximity\]](#e:osc-t-proximity){reference-type="eqref" reference="e:osc-t-proximity"}. Next, we consider the difference in total dissipation, and write $$\int_0^T\int_{\ensuremath{\mathbb{T}}^3}\kappa|\nabla\bar\rho|^2-\tilde\kappa|\nabla\rho|^2\,dx\,dt=\underbrace{\kappa\int_0^T\int_{\ensuremath{\mathbb{T}}^3}g_\tau|\nabla\bar{\rho}|^2\,dx\,dt}_{(I)}+\underbrace{\int_0^T\int_{\ensuremath{\mathbb{T}}^3}\tilde\kappa(|\nabla\bar\rho|^2-|\nabla\rho|^2)\,dx\,dt}_{(II)}.$$ We show below (see [\[l:time_indR\]](#l:time_indR){reference-type="eqref" reference="l:time_indR"}) the bound $$|(I)|\lesssim (\tau\mu)D,$$ since, in the notation introduced below (cf. [\[n:bili\]](#n:bili){reference-type="eqref" reference="n:bili"}), $(I)=B^{0}(\nabla\rho^{(0)},\nabla\rho^{(0)})$. Next, we use Young's inequality to estimate $$\begin{aligned} |(II)|&=\left|\int_0^T\int_{\ensuremath{\mathbb{T}}^3}\tilde\kappa(\nabla\rho+\nabla\bar\rho)\cdot (\nabla\rho-\nabla\bar\rho)\,dx\,dt\right|\,\\ &\lesssim \varepsilon^{-1}\kappa\int_0^T\|\nabla\rho-\nabla\bar\rho\|_{L^2}^2\,dt+\varepsilon\left(\kappa\int_0^T\|\nabla\bar\rho\|_{L^2}^2\,dt+\int_0^T\|\tilde\kappa^{\sfrac12}\nabla\rho\|_{L^2}^2\,dt\right).\end{aligned}$$ Choosing $\varepsilon=\tau\mu$ and using [\[e:osc-t-proximity\]](#e:osc-t-proximity){reference-type="eqref" reference="e:osc-t-proximity"} we deduce $$|(II)| \lesssim (\tau\mu)(D+\tilde D),$$ verifying [\[e:osc-t-dissipation\]](#e:osc-t-dissipation){reference-type="eqref" reference="e:osc-t-dissipation"}.
◻ The rest of this section is concerned with estimates on the successive approximations [\[e:time_setup\]](#e:time_setup){reference-type="eqref" reference="e:time_setup"}. Since they may be of separate interest, we present them independently from the specific choices made in the previous section. To this end let us observe that the definition [\[e:t_defg\]](#e:t_defg){reference-type="eqref" reference="e:t_defg"} of $g$ and of $\eta$ implies the existence of the fast-time potential $G (x,t,s)= \int_0^s g(x,t,a)\, da$ such that $$\ensuremath{\partial_s}G (x,t,s) = g(x,t,s), \quad \text{ where } \quad g_\tau (x,t) = g(x,t, \tau^{-1}t).$$ Let us denote $g^{0}:=g, \; g^{l+1}:=G^{l} g, \; \ensuremath{\partial_s}G^{l} = g^{l}$. Since $\frac{\kappa_1}{\kappa} \le 1$ and $\eta$ is bounded, assumption (A4) and the definitions yield that for $f= g^l \text{ or } G^l$ we have $$\label{Atime_Gspace}%\tag{A4'} \begin{split} &\| f \|_{L^\infty} \lesssim 1, \qquad \| (D^{slow}_t f) \|_{L^\infty} \lesssim \mu,\\ &\text{and for } n>0 \qquad \| \nabla^n f\|_{L^\infty} \lesssim (\tau \mu)^2 \left(\frac{\mu}{\kappa}\right)^\frac{n}{2} \lesssim (\tau \mu) \left(\frac{\mu}{\kappa}\right)^\frac{n}{2}. \end{split}$$ We will usually use the weaker bound $\| \nabla^n f\|_{L^\infty} \lesssim (\tau \mu) \left(\frac{\mu}{\kappa}\right)^\frac{n}{2}$; the stronger one is needed only once.
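Before turning to the notation, we record a toy sanity check for the time-averaging principle of Proposition 18 (an illustrative sketch under simplifying assumptions, not part of the proof): in the advection-free case $u=0$, a single Fourier mode of the oscillating equation can be integrated exactly, and its distance to the averaged solution shrinks with the fast time-scale $\tau$. The profile $\eta(s)=1+\sin(2\pi s)$ ($1$-periodic, nonnegative, mean one) and all parameter values below are illustrative choices.

```python
import math

# Advection-free (u = 0) sanity check: a single Fourier mode of the
# oscillating equation satisfies
#   rho_hat(T) = exp(-k^2 * int_0^T kappa_tilde(t/tau) dt),
# while the averaged equation gives rho_bar_hat(T) = exp(-k^2 * kappa * T).
# With eta(s) = 1 + sin(2*pi*s), the integral of kappa_tilde is available
# in closed form. All parameters are illustrative, not taken from the text.

kappa0, kappa1 = 0.1, 0.4
kappa = kappa0 + kappa1        # kappa = kappa0 + kappa1 as in the setup
k, T = 2.0, 1.0                # wavenumber and final time

def rho_osc(tau):
    # int_0^T kappa_tilde dt
    #   = kappa*T + kappa1 * (tau / 2pi) * (1 - cos(2pi T / tau)),
    # using the exact antiderivative of the sine oscillation.
    integral = kappa * T + kappa1 * tau / (2 * math.pi) * (
        1 - math.cos(2 * math.pi * T / tau)
    )
    return math.exp(-k**2 * integral)

rho_bar = math.exp(-k**2 * kappa * T)  # averaged equation, single mode
errs = [abs(rho_osc(tau) - rho_bar) for tau in (0.13, 0.071, 0.037)]
print(errs)
assert errs[2] < errs[0] and errs[2] < 1e-3  # error shrinks as tau -> 0
```

The deviation of $\int_0^T\tilde\kappa$ from $\kappa T$ is of size $\kappa_1\tau$, mirroring the $O(\tau\mu)$ proximity in [\[e:osc-t-proximity\]](#e:osc-t-proximity){reference-type="eqref" reference="e:osc-t-proximity"} in this degenerate setting.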
## Notation $$\label{n:D} D := \kappa\int_0^T\|\nabla\rho^{(0)} (s)\|_{L^2}^2\,ds$$ $$\label{n:pnorm} \VE f \VE^2_n := \sup_{t \le T} \|(\nabla^n f)(t)\|^2_{L^2}+ \kappa\int_0^T \|\nabla^{n+1} f (s)\|_{L^2}^2 ds + \left(\frac{\|\nabla u\|_{L^\infty}}{\kappa}\right)^n \left(\kappa\int_0^T \|\nabla f (s)\|_{L^2}^2 ds\right)$$ $$\label{n:bili} B^l (f,g) := \kappa \int_0^T \int g^{l}_\tau f \cdot g, \quad \text{where} \quad g^{0}:=g, \; g^{l+1}:=G^{l} g, \; \ensuremath{\partial_s}G^{l} = g^{l}$$ In particular $$\label{e:fastslow} g^{l}_\tau = \tau D_t (G^{l}_\tau) - \tau (D^{slow}_t G^{l})_\tau$$ ## Commutators We will need several estimates for commutators. Let $|\boldsymbol{\alpha}|=n$; then $$\label{Atime_comm1} \| [g \cdot \nabla, \ensuremath{ \partial^{\boldsymbol{\alpha}}}] f \|_{L^2} \lesssim \| \nabla^n g \|_{L^\infty} \| \nabla f \|_{L^2} + \| \nabla g \|_{L^\infty} \| \nabla^n f \|_{L^2}$$ $$\label{Atime_comm2} \| [\ensuremath{ \partial^{\boldsymbol{\alpha}}}, g ] \nabla f\|_{L^2} \lesssim \|\nabla^{n} g \|_{L^\infty} \| \nabla f \|_{L^2} + \|\nabla g \|_{L^\infty} \| \nabla^{n} f \|_{L^2}$$ ## Improved energy bound The following is the key technical lemma of this section. Consider $$\label{e:time_setup2} \begin{split} \partial_t \rho^{(i)}+u \cdot\nabla \rho^{(i)}&- \kappa \Delta \rho^{(i)} = \kappa \mathop{\rm div}\nolimits\left(g_\tau \nabla \rho^{(i-1)} \right) \\ \rho^{(i)}|_{t=0}&=0 \; \text{ for } i>0, \qquad \rho^{(0)}|_{t=0}=\rho_{in} \end{split}$$ and $\rho^{(-1)} \equiv 0$. **Lemma 19**. *Assume that there is $G(x,t,s)$ with $\ensuremath{\partial_s}G (x,t,s) = g(x,t,s)$ such that [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"} holds for both $f=g$ and $f=G$.
Assume [\[Atime_scales\]](#Atime_scales){reference-type="eqref" reference="Atime_scales"}, [\[Atime_Ini\]](#Atime_Ini){reference-type="eqref" reference="Atime_Ini"} and [\[e:Ggcon\]](#e:Ggcon){reference-type="eqref" reference="e:Ggcon"}; then the energy solutions of [\[e:time_setup2\]](#e:time_setup2){reference-type="eqref" reference="e:time_setup2"} satisfy\ **(1) for $\rho^{(0)}$** $$\label{e:time_main_0} \|\rho^{(0)} (t)\|_{L^2}^2+\kappa\int_0^t\|\nabla\rho^{(0)} (s)\|_{L^2}^2\,ds = \|\rho_{in}\|_{L^2}^2$$ and, for any $n\geq 1$ $$\label{e:time_main_0n} \VE \rho^{(0)} \VE^2_n \lesssim_{n} \left(\frac{ \mu}{\kappa}\right)^n D$$ **(2) for $\rho^{(i)}$, $i>0$** $$\label{e:time_main_impr} \begin{split} \VE \rho^{(i)} \VE^2_n \lesssim_{n,i} (\tau \mu)^{2i} \left(\frac{\mu}{\kappa}\right)^n D \end{split}$$ Furthermore $$\label{l:time_indR} \begin{split} &\sup_{|\boldsymbol{\alpha}|=n, |\boldsymbol{\beta}|=m} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(i)}) \right| \lesssim_{n,m,i} (\tau \mu)^{2i+1} \left(\frac{\mu}{\kappa}\right)^{\frac{n+m}{2}} D. \end{split}$$* Note that the equation [\[e:time_setup\]](#e:time_setup){reference-type="eqref" reference="e:time_setup"} with assumptions (A2)-(A4) falls within the scope of the above lemma. *Proof.* We complete this proof in several steps. *Step 1: Preliminary bound for the bilinear term.* Let $|\boldsymbol{\alpha}|=n$ and $|\boldsymbol{\beta}|=m$.
It holds by definition and [\[e:fastslow\]](#e:fastslow){reference-type="eqref" reference="e:fastslow"} $$\label{e:time_Bpre} \begin{split} B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}, &\nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) = \kappa \tau \int_0^T \int (D_t (G^{l}_\tau) - (D^{slow}_t G^{l})_\tau) \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \\ =& \kappa \tau \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \bigg|_{t=0}^{t=T} -\kappa \tau \int_0^T \int G^{l}_\tau D_t (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) \\ &- \kappa \tau \int_0^T \int (D^{slow}_t G^{l})_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}.
\\ \end{split}$$ We estimate the (time) boundary term in [\[e:time_Bpre\]](#e:time_Bpre){reference-type="eqref" reference="e:time_Bpre"} by integrating once by parts in space and using [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"}, via[^5] $$\kappa \tau \VE \rho^{(i)} \VE_n \VE \rho^{(j)} \VE_{m+2} + \kappa \tau (\tau \mu) \left(\frac{\mu}{\kappa}\right)^\frac{1}{2} \VE \rho^{(i)} \VE_n \VE \rho^{(j)} \VE_{m+1}.$$ For the last term of [\[e:time_Bpre\]](#e:time_Bpre){reference-type="eqref" reference="e:time_Bpre"} we simply write the bound $$\tau \| D^{slow}_t G^{l} \|_{L^\infty} \VE \rho^{(i)} \VE_n \VE \rho^{(j)} \VE_{m} \le \tau \mu \VE \rho^{(i)} \VE_n \VE \rho^{(j)} \VE_{m}.$$ Together, for any $|\boldsymbol{\alpha}|=n$ and $|\boldsymbol{\beta}|=m$ $$\label{e:time_Bpre2} \begin{split} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) \right| \lesssim \kappa \tau \VE \rho^{(i)} \VE_n \VE \rho^{(j)} \VE_{m+2} + \kappa \tau (\tau \mu) \left(\frac{\mu}{\kappa}\right)^\frac{1}{2} \VE \rho^{(i)} \VE_n \VE \rho^{(j)} \VE_{m+1} \\ + \tau \mu \VE \rho^{(i)} \VE_n \VE \rho^{(j)} \VE_{m} + \kappa \tau \left| \int_0^T \int G^{l}_\tau D_t (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) \right|. \end{split}$$ *Step 2: Reformulating energy estimates.* [\[ss:ree\]]{#ss:ree label="ss:ree"} The case $i=0$ (nonzero initial datum, zero forcing since $\rho^{(-1)} \equiv 0$) is the energy estimate of Lemma [Lemma 8](#l:energy_lapl){reference-type="ref" reference="l:energy_lapl"}, together with the assumption [\[Atime_Ini\]](#Atime_Ini){reference-type="eqref" reference="Atime_Ini"}.
$$\begin{split} &\sup_{t \le T} \|(\nabla^n \rho)(t)\|^2_{L^2}+ \kappa\int_0^T \|\nabla^{n+1} \rho (s)\|_{L^2}^2 ds \le \\ &\|\nabla^n \rho_{in}\|^2_{L^2} + C_n \left(\frac{\mu}{\kappa}\right)^n \kappa \int_0^T \|\nabla \rho (s)\|_{L^2}^2 + \sum_{|\boldsymbol{\alpha}|=n}\left|\int_0^T \int \ensuremath{ \partial^{\boldsymbol{\alpha}}}f \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho \right|. \end{split}$$ For $i>0$, the energy estimate with forcing (Corollary [Corollary 9](#c:energy_lapl){reference-type="ref" reference="c:energy_lapl"}) gives (zero initial datum for $i>0$) $$\label{e:ener_n1} \begin{split} \sup_{t \le T}& \|\nabla^n \rho^{(i)}(t)\|^2_{L^2}+ \kappa \|\nabla^{n+1} \rho^{(i)}\|_{L_{xt}^2}^2 \le \\ &\kappa \sum_{|\boldsymbol{\alpha}|=n} \left| \int_0^T \int \ensuremath{ \partial^{\boldsymbol{\alpha}}}(g_\tau \nabla \rho^{(i-1)}) \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \right| + C_n \left(\frac{\mu}{\kappa}\right)^n \kappa \|\nabla \rho^{(i)}\|_{L_{xt}^2}^2, \end{split}$$ where in the case $n=0$ the second term vanishes. Rewriting, we obtain $$\begin{split} \VE \rho^{(i)} \VE^2_n \le & \sum_{|\boldsymbol{\alpha}|=n} \left| B^{0} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i-1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}) \right| + \kappa \sum_{|\boldsymbol{\alpha}|=n} \left| \int_0^T \int([g_\tau \nabla, \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(i-1)})\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \right| \\ + &C_n \left(\frac{\mu}{\kappa}\right)^n \kappa \|\nabla \rho^{(i)}\|_{L_{xt}^2}^2, \end{split}$$ where in the case $n=0$ only the first term is present.
For $n>0$, using the case $n=0$ of the above for the last right-hand side term, we get $$\begin{split} \VE \rho^{(i)} \VE^2_n \le & \sum_{|\boldsymbol{\alpha}|=n} \left| B^{0} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i-1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}) \right| + \kappa \sum_{|\boldsymbol{\alpha}|=n} \left| \int_0^T \int ([g_\tau \nabla, \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(i-1)})\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)} \right| +\\ &C_n \left(\frac{\mu}{\kappa}\right)^n \left| B^{0} (\nabla \rho^{(i-1)}, \nabla\rho^{(i)}) \right|. \end{split}$$ The commutator estimate [\[Atime_comm1\]](#Atime_comm1){reference-type="eqref" reference="Atime_comm1"} allows us to bound the middle right-hand side term above with $$\frac12 \kappa \| \nabla^{n+1}\rho^{(i)} \|^2_{L_{xt}^2} + C \kappa \| \nabla^n g \|^2_{L^\infty} \| \nabla \rho^{(i-1)} \|^2_{L_{xt}^2} + C \kappa \| \nabla g \|^2_{L^\infty} \| \nabla^n \rho^{(i-1)} \|^2_{L_{xt}^2},$$ so absorbing the $\nabla^{n+1}$ term above into the left-hand side and using [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"} we arrive at $$\label{e:ener_n2} \begin{split} \VE \rho^{(i)} \VE^2_n &\lesssim_n \left(\frac{\mu}{\kappa}\right)^n \left| B^{0} (\nabla \rho^{(i-1)}, \nabla\rho^{(i)}) \right| + \sum_{|\boldsymbol{\alpha}|=n} \left| B^{0} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i-1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}) \right| \\ +& (\tau \mu)^2 \left(\frac{\mu}{\kappa}\right)^n \kappa \| \nabla \rho^{(i-1)} \|^2_{L_{xt}^2} + (\tau \mu)^2 \left(\frac{\mu}{\kappa}\right) \kappa \| \nabla^n \rho^{(i-1)} \|^2_{L_{xt}^2}, \end{split}$$ where in the case $n=0$ only the first term is present.
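For the reader's convenience, the passage from the commutator bound to the second line of [\[e:ener_n2\]](#e:ener_n2){reference-type="eqref" reference="e:ener_n2"} can be spelled out as follows (a routine substitution of [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"} with $f=g$, recorded here for bookkeeping): $$\kappa \| \nabla^n g \|^2_{L^\infty} \| \nabla \rho^{(i-1)} \|^2_{L_{xt}^2} \lesssim (\tau \mu)^2 \left(\frac{\mu}{\kappa}\right)^n \kappa \| \nabla \rho^{(i-1)} \|^2_{L_{xt}^2}, \qquad \kappa \| \nabla g \|^2_{L^\infty} \| \nabla^n \rho^{(i-1)} \|^2_{L_{xt}^2} \lesssim (\tau \mu)^2 \left(\frac{\mu}{\kappa}\right) \kappa \| \nabla^n \rho^{(i-1)} \|^2_{L_{xt}^2}.$$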
*Step 3: Setting up induction.* Consider the second line of [\[e:ener_n2\]](#e:ener_n2){reference-type="eqref" reference="e:ener_n2"}: (i) it agrees with the desired estimate [\[e:time_main_impr\]](#e:time_main_impr){reference-type="eqref" reference="e:time_main_impr"} for $i=1$, once we use the definition [\[n:D\]](#n:D){reference-type="eqref" reference="n:D"} and [\[e:time_main_0n\]](#e:time_main_0n){reference-type="eqref" reference="e:time_main_0n"}; (ii) moreover, if we knew the estimate [\[e:time_main_impr\]](#e:time_main_impr){reference-type="eqref" reference="e:time_main_impr"} for $\rho^{(i-1)}$, it agrees with it for $\rho^{(i)}$. Therefore, if we knew that for $\boldsymbol{\alpha}=0$ and $|\boldsymbol{\alpha}|=n$ it holds $$\label{e:Bneeded} \left| B^{0} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i-1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}) \right| \lesssim (\tau \mu)^{2i} \left(\frac{\mu}{\kappa}\right)^{|\boldsymbol{\alpha}|} D,$$ then [\[e:time_main_impr\]](#e:time_main_impr){reference-type="eqref" reference="e:time_main_impr"} could be deduced from [\[e:ener_n2\]](#e:ener_n2){reference-type="eqref" reference="e:ener_n2"} by induction. It is clear then that we should prove [\[e:time_main_impr\]](#e:time_main_impr){reference-type="eqref" reference="e:time_main_impr"} inductively, keeping track of $B$. We induct over $i=0,1, \dots$. The inductive assumption is that for any $n,m,l$ $$\label{l:time_ind} \begin{split} (a)\qquad & \VE \rho^{(j)} \VE^2_n \lesssim (\tau \mu)^{2j} \left(\frac{\mu}{\kappa}\right)^n D \qquad \forall_{j \le i}\\ (b)\qquad &\sup_{|\boldsymbol{\alpha}|=n, |\boldsymbol{\beta}|=m} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(j)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(k)}) \right| \lesssim (\tau \mu)^{j+k+1} \left(\frac{\mu}{\kappa}\right)^{\frac{n+m}{2}} D \qquad \forall_{j,k \le i} \end{split}$$ (where the constants in $\lesssim$ may depend on $n,m,l,i$), and we want to show it for $i+1$.
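The exponent bookkeeping behind this induction can be checked mechanically. The following snippet (an illustrative aid, not part of the proof) tracks the powers of $(\tau\mu)$ and $(\mu/\kappa)$ through the boundary term $\kappa\tau\VE\rho^{(i)}\VE_n\VE\rho^{(j)}\VE_{m+2}$ of the preliminary bound, using $\VE\rho^{(i)}\VE_n\sim(\tau\mu)^i(\mu/\kappa)^{n/2}D^{1/2}$ from part (a) and the normalization $\kappa\tau\cdot(\mu/\kappa)=\tau\mu$.

```python
from fractions import Fraction as F

# Represent a quantity (tau*mu)^a * (mu/kappa)^b * D by the pair (a, b).
# By part (a) of the inductive assumption,
#   |||rho^(i)|||_n ~ (tau*mu)^i * (mu/kappa)^(n/2) * D^(1/2).
# Illustrative check only; it mirrors, not replaces, the proof.

def norm_exponents(i, n):
    """Exponents (a, b) of |||rho^(i)|||_n relative to D^(1/2)."""
    return (F(i), F(n, 2))

def kappatau_product(p, q):
    """Exponents of kappa*tau * X * Y, using kappa*tau*(mu/kappa) = tau*mu."""
    a, b = p[0] + q[0], p[1] + q[1]
    return (a + 1, b - 1)  # trade one (mu/kappa) for the prefactor kappa*tau

# Boundary term kappa*tau * |||rho^(i)|||_n * |||rho^(j)|||_{m+2}:
for (i, j, n, m) in [(0, 0, 0, 0), (1, 0, 2, 1), (3, 2, 4, 1)]:
    a, b = kappatau_product(norm_exponents(i, n), norm_exponents(j, m + 2))
    # target of part (b): (tau*mu)^(i+j+1) * (mu/kappa)^((n+m)/2) * D
    assert (a, b) == (F(i + j + 1), F(n + m, 2))
print("exponent bookkeeping closes")
```

The same arithmetic explains why the two extra spatial derivatives in the $\VE\cdot\VE_{m+2}$ factor are exactly compensated by the prefactor $\kappa\tau$.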
*Step 4: Initializing the induction, $i=0$.* At the beginning of Step [\[ss:ree\]](#ss:ree){reference-type="ref" reference="ss:ree"} we have already shown [\[e:time_main_0\]](#e:time_main_0){reference-type="eqref" reference="e:time_main_0"}, [\[e:time_main_0n\]](#e:time_main_0n){reference-type="eqref" reference="e:time_main_0n"} for $i=0$, so part (a) of the inductive assumption for $i=0$ holds. We now need to show part (b) for $i=0$. From [\[e:time_Bpre2\]](#e:time_Bpre2){reference-type="eqref" reference="e:time_Bpre2"} with $i=0$ it holds, thanks to the already established part (a), $$\label{e:time_Bpre2ini} \begin{split} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(0)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(0)}) \right| \lesssim \tau \mu \left(\frac{ \mu}{\kappa}\right)^{\frac{n+m}{2}} D + \kappa \tau \left| \int_0^T \int G^{l}_\tau D_t (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(0)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(0)})\right| \end{split}$$ for any $|\boldsymbol{\alpha}|=n$, $|\boldsymbol{\beta}|=m$. We focus on the last term of [\[e:time_Bpre2ini\]](#e:time_Bpre2ini){reference-type="eqref" reference="e:time_Bpre2ini"}.
Since via [\[e:time_setup\]](#e:time_setup){reference-type="eqref" reference="e:time_setup"} for $\rho^{(0)}$ (no forcing) we have $$D_t (\ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(0)}) = \kappa \Delta \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(0)} + [u \cdot \nabla, \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(0)},$$ it holds $$\label{e:time_B0a} \kappa \tau \int_0^T \int G^{l}_\tau D_t (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(0)}) \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(0)} = \kappa \tau \int_0^T \int G^{l}_\tau ( \kappa \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(0)} + [u \cdot \nabla, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(0)}) \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(0)}.$$ The first right-hand side term of [\[e:time_B0a\]](#e:time_B0a){reference-type="eqref" reference="e:time_B0a"} is estimated in modulus via [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"} and [\[e:time_main_0n\]](#e:time_main_0n){reference-type="eqref" reference="e:time_main_0n"} by $$\kappa \tau (\kappa^\frac{1}{2} \| \nabla^{n+3} \rho^{(0)} \|_{L_{xt}^2}) (\kappa^\frac{1}{2} \|\nabla^{m+1}\rho^{(0)} \|_{L_{xt}^2}) \lesssim \kappa \tau \VE \rho^{(0)} \VE_{n+2} \VE \rho^{(0)} \VE_{m} \lesssim (\tau \mu) \left(\frac{ \mu}{\kappa}\right)^\frac{n+m}{2} D.$$ For the second right-hand side term of [\[e:time_B0a\]](#e:time_B0a){reference-type="eqref" reference="e:time_B0a"}, using the commutator estimate [\[Atime_comm1\]](#Atime_comm1){reference-type="eqref" reference="Atime_comm1"} (for $n+1$) and then [\[e:time_main_0n\]](#e:time_main_0n){reference-type="eqref" reference="e:time_main_0n"} $$\begin{split} & \kappa \tau \left| \int_0^T \int G^{l}_\tau [u \cdot \nabla, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(0)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(0)} \right| \\ \lesssim& \tau \left(\| \nabla^{n+1} u
\|_{L^\infty} \kappa^\frac{1}{2} \| \nabla \rho^{(0)} \|_{L_{xt}^2} + \| \nabla u \|_{L^\infty} \kappa^\frac{1}{2} \| \nabla^{n+1} \rho^{(0)} \|_{L_{xt}^2} \right) \kappa^\frac{1}{2} \|\nabla^{m+1}\rho^{(0)} \|_{L_{xt}^2} \\ \lesssim& \tau \left( \| \nabla^{n+1} u \|_{L^\infty} + \| \nabla u \|_{L^\infty} \left(\frac{ \mu}{\kappa}\right)^\frac{n}{2} \right) \left(\frac{ \mu}{\kappa}\right)^\frac{m}{2} D \lesssim (\tau \mu) \left(\frac{ \mu}{\kappa}\right)^\frac{n+m}{2} D, \end{split}$$ where the last inequality follows from the scales assumption [\[Atime_scales\]](#Atime_scales){reference-type="eqref" reference="Atime_scales"}. Hence the entire right-hand side of [\[e:time_B0a\]](#e:time_B0a){reference-type="eqref" reference="e:time_B0a"} is estimated by $(\tau \mu) \left(\frac{ \mu}{\kappa}\right)^\frac{n+m}{2} D$. This and the analogous estimate with $D_t$ acting on $\nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(0)}$ give via [\[e:time_Bpre2ini\]](#e:time_Bpre2ini){reference-type="eqref" reference="e:time_Bpre2ini"} the bound $$\label{e:indB0} \sup_{|\boldsymbol{\alpha}|=n, |\boldsymbol{\beta}|=m} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(0)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(0)}) \right| \lesssim (\tau \mu) \left(\frac{ \mu}{\kappa}\right)^\frac{n+m}{2} D,$$ which is part (b) of the inductive assumption for $i=0$.
*Step 5: Inductive step $i \to i+1$.* Now we assume [\[l:time_ind\]](#l:time_ind){reference-type="eqref" reference="l:time_ind"} and want to show (for any $n$, $m$, $l$) $$\label{l:time_indH} \begin{split} (a')\qquad & \VE \rho^{(i+1)} \VE^2_n \lesssim (\tau \mu)^{2i+2} \left(\frac{\mu}{\kappa}\right)^n D \\ (b')\qquad & \sup_{|\boldsymbol{\alpha}|=n, |\boldsymbol{\beta}|=m} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(i)}) \right| \lesssim (\tau \mu)^{2i+2} \left(\frac{\mu}{\kappa}\right)^{\frac{n+m}{2}} D. \end{split}$$ For any $|\boldsymbol{\alpha}|=n$, $|\boldsymbol{\beta}|=m$ the preliminary inequality [\[e:time_Bpre2\]](#e:time_Bpre2){reference-type="eqref" reference="e:time_Bpre2"} for the pair $(i+1,j)$ with $j\le i$ gives via part (a) of the inductive assumption [\[l:time_ind\]](#l:time_ind){reference-type="eqref" reference="l:time_ind"} $$\label{e:time_Bpre2H} \begin{split} &\left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) \right| \lesssim \VE \rho^{(i+1)} \VE_n (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} D^\frac{1}{2} + \kappa \tau \left| \int_0^T \int G^{l}_\tau D_t (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)})\right|. \end{split}$$ We will focus on the last term of [\[e:time_Bpre2H\]](#e:time_Bpre2H){reference-type="eqref" reference="e:time_Bpre2H"}.
By [\[e:time_setup\]](#e:time_setup){reference-type="eqref" reference="e:time_setup"} for $\rho^{(\iota)}$ (arbitrary natural $\iota$), we have $$\label{e:time_Dt} D_t (\partial^{\boldsymbol{\gamma}} \rho^{(\iota)}) = \kappa \Delta \partial^{\boldsymbol{\gamma}} \rho^{(\iota)} + [u \cdot \nabla, \partial^{\boldsymbol{\gamma}} ] \rho^{(\iota)} + \kappa \mathop{\rm div}\nolimits\partial^{\boldsymbol{\gamma}} \left(g_\tau \nabla \rho^{(\iota-1)} \right).$$ In what follows, we consider different cases to complete the estimates at step $i+1$. *Substep 5.1: The case when $D_t$ in the last term of [\[e:time_Bpre2H\]](#e:time_Bpre2H){reference-type="eqref" reference="e:time_Bpre2H"} acts on $\rho^{(j)}$*. Use [\[e:time_Dt\]](#e:time_Dt){reference-type="eqref" reference="e:time_Dt"} with $\iota=j$, $$\label{e:time_low1} \begin{split} &\kappa \tau \left| \int_0^T \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \left( \kappa \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} + [u \cdot \nabla, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}] \rho^{(j)} + \kappa \nabla \mathop{\rm div}\nolimits\ensuremath{ \partial^{\boldsymbol{\beta}}}\left(g_\tau \nabla \rho^{(j-1)} \right) \right) \right| =\\ &\kappa \tau \left| \int_0^T \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot ( \kappa \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} + [u \cdot \nabla, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}] \rho^{(j)} + \kappa g_\tau \nabla \mathop{\rm div}\nolimits\ensuremath{ \partial^{\boldsymbol{\beta}}}\nabla \rho^{(j-1)}+ \kappa [\nabla \mathop{\rm div}\nolimits\ensuremath{ \partial^{\boldsymbol{\beta}}}, g_\tau ] \nabla \rho^{(j-1)} ) \right| \\ &\lesssim \kappa \tau \VE \rho^{(i+1)} \VE_n \left( \VE \rho^{(j)} \VE_{m+2} + \frac{\| \nabla^{m+1} u \|_{L^\infty}}{\kappa} \VE \rho^{(j)} \VE_0 + \frac{\| \nabla u \|_{L^\infty}}{\kappa} \VE \rho^{(j)} \VE_m \right) \\ & +\kappa
\tau \VE \rho^{(i+1)} \VE_n \left( \|\nabla^{m+2} g \|_{L^\infty} \VE \rho^{(j-1)}\VE_0 + \|\nabla g \|_{L^\infty} \VE \rho^{(j-1)}\VE_{m+1} \right) \\ &+\kappa^2 \tau \left| \int_0^T \int G^{l}_\tau g_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \nabla \Delta \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j-1)} \right|. \end{split}$$ The inequality in [\[e:time_low1\]](#e:time_low1){reference-type="eqref" reference="e:time_low1"} holds via the commutator estimates [\[Atime_comm1\]](#Atime_comm1){reference-type="eqref" reference="Atime_comm1"} and [\[Atime_comm2\]](#Atime_comm2){reference-type="eqref" reference="Atime_comm2"}. For the three terms on the right-hand side of [\[e:time_low1\]](#e:time_low1){reference-type="eqref" reference="e:time_low1"} we use the inductive assumption (a) and assumption [\[Atime_scales\]](#Atime_scales){reference-type="eqref" reference="Atime_scales"} to bound them as follows: the first one by $$\VE \rho^{(i+1)} \VE_n (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} D^\frac{1}{2},$$ the second one, additionally using [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"} (this is the only place where we need the more restrictive bound in [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"}), by $$\VE \rho^{(i+1)} \VE_n (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} D^\frac{1}{2},$$ and we keep the third and last right-hand side term of [\[e:time_low1\]](#e:time_low1){reference-type="eqref" reference="e:time_low1"}, written as $\kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \Delta \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j-1)}) \right|$, in accordance with the definition of the bilinear form $B^l$.
These two estimates used in [\[e:time_low1\]](#e:time_low1){reference-type="eqref" reference="e:time_low1"} show that the last term of [\[e:time_Bpre2H\]](#e:time_Bpre2H){reference-type="eqref" reference="e:time_Bpre2H"} where $D_t$ acts on $\rho^{(j)}$ is estimated by the following expression $$\label{e:time_B1} \VE \rho^{(i+1)} \VE_n (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} D^\frac{1}{2} +\kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \Delta \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j-1)}) \right| .$$ *Substep 5.2: The case when $D_t$ in the last term of [\[e:time_Bpre2H\]](#e:time_Bpre2H){reference-type="eqref" reference="e:time_Bpre2H"} acts on $\rho^{({i+1})}$.* Use [\[e:time_Dt\]](#e:time_Dt){reference-type="eqref" reference="e:time_Dt"} with $\iota=i+1$ $$\label{e:time_Dtplus} \begin{split} &\kappa \tau \left| \int_0^T \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot \left( \kappa \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} + [u \cdot \nabla, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(i+1)} + \kappa \nabla \mathop{\rm div}\nolimits\ensuremath{ \partial^{\boldsymbol{\alpha}}}\left(g_\tau \nabla \rho^{(i)} \right) \right) \right| = \\ &\kappa \tau \left| \int_0^T \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot ( \kappa \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} + [u \cdot \nabla, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(i+1)} + \kappa g_\tau \Delta \ensuremath{ \partial^{\boldsymbol{\alpha}}}\nabla \rho^{(i)}+ \kappa [ \nabla \mathop{\rm div}\nolimits\ensuremath{ \partial^{\boldsymbol{\alpha}}}, g_\tau ] \nabla \rho^{(i)} ) \right|.
\end{split}$$ For the first right-hand side term of [\[e:time_Dtplus\]](#e:time_Dtplus){reference-type="eqref" reference="e:time_Dtplus"} we shift the laplacian: $$\begin{split} \kappa^2 \tau \int G^{l}_\tau \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} =& \kappa^2 \tau \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} + [G^{l}_\tau, \Delta ] \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \\ =& \kappa^2 \tau \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \nabla \Delta \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} + \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot [ \Delta ,G^{l}_\tau] \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}, \end{split}$$ so that via the commutator estimate [\[Atime_comm2\]](#Atime_comm2){reference-type="eqref" reference="Atime_comm2"} we can bound the left-hand side as follows $$\begin{split} &\kappa \tau \left| \int_0^T \int G^{l}_\tau \kappa \Delta \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \right| \lesssim \\ & \kappa^\frac{1}{2} \|\nabla^{n+1}\rho^{(i+1)} \|_{L_{xt}^2} \kappa \tau \left( \kappa^\frac{1}{2} \| \nabla^{m+3} \rho^{(j)} \|_{L_{xt}^2} + \|\nabla G^{l}_\tau \|_{L^\infty} \kappa^\frac{1}{2} \| \nabla^{m+2} \rho^{(j)} \|_{L_{xt}^2} + \|\nabla^2 G^{l}_\tau \|_{L^\infty} \kappa^\frac{1}{2} \| \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \|_{L_{xt}^2} \right) \\ &\lesssim \VE \rho^{(i+1)} \VE_n (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} D^\frac{1}{2}, \end{split}$$ using for the last inequality the inductive assumption and 
[\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"}. The second right-hand side term of [\[e:time_Dtplus\]](#e:time_Dtplus){reference-type="eqref" reference="e:time_Dtplus"} is bounded as follows, by the commutator estimate [\[Atime_comm1\]](#Atime_comm1){reference-type="eqref" reference="Atime_comm1"} $$\begin{split} \kappa \tau \left| \int_0^T \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot [u \cdot \nabla, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}] \rho^{(i+1)} \right| &\lesssim \tau \VE \rho^{(j)} \VE_{m} (\| \nabla^{n+1} u \|_{L^\infty} \VE \rho^{(i+1)} \VE_0 + \| \nabla u \|_{L^\infty} \VE \rho^{(i+1)} \VE_n ) \\ &\lesssim (\tau\| \nabla u \|_{L^\infty} ) \VE \rho^{(j)} \VE_{m} \left( \left( \frac{\mu}{\kappa} \right)^\frac{n}{2} \VE \rho^{(i+1)} \VE_0 + \VE \rho^{(i+1)} \VE_n \right) \\ &\lesssim (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} D^\frac{1}{2} \left( \left( \frac{\mu}{\kappa} \right)^\frac{n}{2} \VE \rho^{(i+1)} \VE_0 + \VE \rho^{(i+1)} \VE_n \right), \end{split}$$ the second inequality by [\[Atime_scales\]](#Atime_scales){reference-type="eqref" reference="Atime_scales"}, the last by the inductive assumption. 
The third right-hand side term of [\[e:time_Dtplus\]](#e:time_Dtplus){reference-type="eqref" reference="e:time_Dtplus"} is bounded, after integrating by parts once ($\partial^\iota$ below is a standard partial derivative, $\iota=1,2,3$, and we sum over repeated $\iota$), as follows: $$\begin{split} & \kappa^2 \tau \left| \int_0^T \int G^{l}_\tau g_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\Delta \rho^{(i)} \right| \\ &\le \kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\partial^\iota \rho^{(j)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)}) \right| + \kappa^2 \tau \left| \int_0^T \int \partial^\iota G^{l+1}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)} \right|\\ & \lesssim \kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\partial^\iota \rho^{(j)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)}) \right|+ (\tau \mu)^{i+j+2} \left(\frac{\mu}{\kappa}\right)^{\frac{n+m}{2}} D, \end{split}$$ using the inductive assumption and [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"}.
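In more detail, the integration by parts reads as follows: writing $\Delta=\partial^\iota\partial^\iota$ and moving one derivative $\partial^\iota$ (here $G^{l+1}_\tau=G^{l}_\tau g_\tau$, an identification consistent with the appearance of $\partial^\iota G^{l+1}_\tau$ above), $$\int_0^T \int G^{l}_\tau g_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota\partial^\iota \rho^{(i)} = - \int_0^T \int G^{l+1}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\partial^\iota \rho^{(j)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)} - \int_0^T \int \partial^\iota G^{l+1}_\tau\, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)},$$ and the bound above follows from the triangle inequality after multiplying by $\kappa^2\tau$, the first resulting term being $\kappa\tau\left|B^{l+1}(\cdot,\cdot)\right|$ by the definition of the bilinear form.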
The fourth right-hand side term of [\[e:time_Dtplus\]](#e:time_Dtplus){reference-type="eqref" reference="e:time_Dtplus"} is estimated, via the commutator estimate [\[Atime_comm2\]](#Atime_comm2){reference-type="eqref" reference="Atime_comm2"}, as follows: $$\begin{split} \kappa^2 \tau \left| \int_0^T \int G^{l}_\tau \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)} \cdot [ \nabla \mathop{\rm div}\nolimits\ensuremath{ \partial^{\boldsymbol{\alpha}}}, g_\tau ] \nabla \rho^{(i)} \right| &\lesssim \kappa \tau \VE \rho^{(j)} \VE_{m} (\|\nabla^{n+2} g_\tau \|_{L^\infty} \VE \rho^{(i)}\VE_0 + \|\nabla g_\tau \|_{L^\infty} \VE \rho^{(i)} \VE_{n+1}) \\ & \lesssim (\tau \mu)^{i+j+2} \left(\frac{\mu}{\kappa}\right)^{\frac{n+m}{2}} D \end{split}$$ using again the inductive assumptions and [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"}. Altogether, the last term of [\[e:time_Bpre2H\]](#e:time_Bpre2H){reference-type="eqref" reference="e:time_Bpre2H"} where $D_t$ acts on $\rho^{(i+1)}$ is estimated by $$\label{e:time_B2} \begin{split} (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} \left( \VE \rho^{(i+1)} \VE_n + \left(\frac{\mu}{\kappa}\right)^\frac{n}{2} \VE \rho^{(i+1)} \VE_0 + (\tau \mu)^{i+1} \left(\frac{\mu}{\kappa}\right)^{\frac{n}{2}} D^\frac{1}{2} \right)& D^\frac{1}{2} \\ + \kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\partial^\iota \rho^{(j)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)}) \right|&.
\end{split}$$ *Substep 5.3: Estimate for $B$ involving $\rho^{(i+1)}$.* Together, [\[e:time_B1\]](#e:time_B1){reference-type="eqref" reference="e:time_B1"} and [\[e:time_B2\]](#e:time_B2){reference-type="eqref" reference="e:time_B2"} used for the right-hand side of [\[e:time_Bpre2H\]](#e:time_Bpre2H){reference-type="eqref" reference="e:time_Bpre2H"} give $$\label{e:time_B3tog} \begin{split} & \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) \right| \lesssim \\ & \kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \Delta \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j-1)}) \right| + \kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\partial^\iota \rho^{(j)}) \right| + Q (j,m), \end{split}$$ where we have denoted by $Q (j,m)$ the part containing the norms on the right-hand side, i.e. $$\begin{split} Q (j,m) =& (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^\frac{m}{2} \left( \VE \rho^{(i+1)} \VE_n + \left(\frac{\mu}{\kappa}\right)^\frac{n}{2} \VE \rho^{(i+1)} \VE_0 + (\tau \mu)^{i+1} \left(\frac{\mu}{\kappa}\right)^{\frac{n}{2}} D^\frac{1}{2} \right) D^\frac{1}{2}.
\end{split}$$ Using the inductive assumption [\[l:time_ind\]](#l:time_ind){reference-type="eqref" reference="l:time_ind"} part (b) for the first right-hand side term of [\[e:time_B3tog\]](#e:time_B3tog){reference-type="eqref" reference="e:time_B3tog"} we get $$\label{e:time_B3} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) \right| \lesssim \kappa \tau \left| B^{l+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \Delta \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j-1)}) \right| + Q (j,m),$$ which holds for any $j \le i$ and any multiindices $|\boldsymbol{\alpha}|=n$, $|\boldsymbol{\beta}|=m$. Thus, denoting by $$\mathcal{B} (l,m,j) = \sup_{|\boldsymbol{\alpha}|=n, |\boldsymbol{\beta}|=m} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(j)}) \right|$$ (with $(i+1)$ and $n$ fixed), we rewrite [\[e:time_B3\]](#e:time_B3){reference-type="eqref" reference="e:time_B3"} and iterate it: $$\label{e:iter_back} \begin{split} \mathcal{B} (l,m,j) &\lesssim \kappa \tau \mathcal{B} (l+1,m+2,j-1)+ Q (j,m)\\ &\lesssim (\kappa \tau)^2 \mathcal{B} (l+2,m+4,j-2)+ \kappa \tau Q (j-1,m+2) + Q (j,m)\\ &\dots \lesssim (\kappa \tau)^j \mathcal{B} (l+j,m+2j,0) + \sum_{k=0}^{j-1} (\kappa \tau)^k Q (j-k,m+2k), \end{split}$$ which holds for any $j \le i$. Observe that $\sum_{k=0}^{j} (\kappa \tau)^k Q (j-k,m+2k) \lesssim Q (j,m)$.
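This observation follows by inspecting a single summand: the bracket in the definition of $Q$ depends only on the fixed indices $i+1$ and $n$, while the prefactors combine, for $0\le k\le j$, as $$(\kappa \tau)^k (\tau \mu)^{j-k+1} \left(\frac{\mu}{\kappa}\right)^{\frac{m+2k}{2}} = (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^{\frac{m}{2}} \left( \frac{\kappa \tau}{\tau \mu}\cdot\frac{\mu}{\kappa}\right)^{k} = (\tau \mu)^{j+1} \left(\frac{\mu}{\kappa}\right)^{\frac{m}{2}},$$ so that $(\kappa \tau)^k Q (j-k,m+2k) = Q (j,m)$ for every such $k$, and the sum consists of at most $j+1$ identical terms.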
This in [\[e:iter_back\]](#e:iter_back){reference-type="eqref" reference="e:iter_back"} yields $$\label{e:iter_back_nw} \mathcal{B} (l,m,j) \lesssim (\kappa \tau)^j \mathcal{B} (l+j,m+2j,0) + Q (j,m).$$ Importantly, we can perform one last step[^6]: recall that we have $\rho^{(-1)}\equiv 0$ and observe that [\[e:time_B3tog\]](#e:time_B3tog){reference-type="eqref" reference="e:time_B3tog"} is valid for the choice $j=0$, where it yields $$\begin{split} (\kappa \tau)^j \mathcal{B} (l+j,m+2j,0) &\lesssim (\kappa \tau)^{j+1} \sup_{|\boldsymbol{\alpha}|=n, |\boldsymbol{\beta}|=m+2j} \left| B^{l+j+1} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\partial^\iota \rho^{(i)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\partial^\iota \rho^{(0)}) \right| + (\kappa \tau)^j Q (0,m+2j) \\ &\lesssim (\kappa \tau)^{j+1} (\tau \mu)^{i+1} \left(\frac{\mu}{\kappa}\right)^{\frac{n+m+2j+2}{2}} D + Q (j,m) \lesssim Q (j,m), \end{split}$$ the second inequality by the inductive assumption [\[l:time_ind\]](#l:time_ind){reference-type="eqref" reference="l:time_ind"} ($b'$) together with the already established bound $(\kappa \tau)^j Q (0,m+2j)\lesssim Q (j,m)$, and the last one by the definition of $Q (j,m)$. This estimate used in [\[e:iter_back_nw\]](#e:iter_back_nw){reference-type="eqref" reference="e:iter_back_nw"} with $j=i$ yields, via the definition of $\mathcal{B}$, $$\label{e:time_B4} \sup_{|\boldsymbol{\alpha}|=n, |\boldsymbol{\beta}|=m} \left| B^{l} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\beta}}}\rho^{(i)}) \right| \lesssim Q (i,m).$$ *Substep 5.4: Close the induction argument.* To close the argument, we need to return to estimates on $\rho^{(i+1)}$, since it appears in $Q (i,m)$.
Note that [\[e:ener_n2\]](#e:ener_n2){reference-type="eqref" reference="e:ener_n2"} for $i+1$ and $n=0$ gives $$\label{e:ener_n2H0} \begin{split} \VE \rho^{(i+1)} \VE^2_0 \lesssim \left| B^{0} (\nabla \rho^{(i)}, \nabla\rho^{(i+1)})\right| &\lesssim (\tau \mu)^{2i+2} D + Q (i,0) \lesssim (\tau \mu)^{2i+2} D + (\tau \mu)^{i+1} \VE \rho^{(i+1)} \VE_0 D^\frac{1}{2} \end{split}$$ with the second inequality following from [\[e:time_B4\]](#e:time_B4){reference-type="eqref" reference="e:time_B4"} with $m=n=0$ and the last one from the definition of the quantity $Q$. Splitting by Young's inequality gives the inductive hypothesis for $\VE \rho^{(i+1)}\VE_0$. Similarly for $n \ge 1$ via [\[e:ener_n2\]](#e:ener_n2){reference-type="eqref" reference="e:ener_n2"} $$\begin{split} \VE \rho^{(i+1)} \VE^2_n \lesssim& \left(\frac{\mu}{\kappa}\right)^n \left| B^{0} ( \nabla\rho^{(i+1)}, \nabla \rho^{(i)}) \right| + \sum_{|\boldsymbol{\alpha}|=n} \left| B^{0} (\nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i+1)}, \nabla \ensuremath{ \partial^{\boldsymbol{\alpha}}}\rho^{(i)}) \right| + (\tau \mu)^{2i+2} \left(\frac{\mu}{\kappa}\right)^n D, \end{split}$$ where we have used the inductive hypothesis [\[l:time_ind\]](#l:time_ind){reference-type="eqref" reference="l:time_ind"} part ($a$). Using [\[e:time_B4\]](#e:time_B4){reference-type="eqref" reference="e:time_B4"} with $m=n$, the known estimate for $\VE \rho^{(i+1)}\VE_0$, and the definition of $Q(i,n)$, we obtain the inductive hypothesis [\[l:time_indH\]](#l:time_indH){reference-type="eqref" reference="l:time_indH"} ($a'$), i.e. the one for $\rho^{(i+1)}$. Now, we use [\[l:time_indH\]](#l:time_indH){reference-type="eqref" reference="l:time_indH"} ($a'$) for $\rho^{(i+1)}$ in [\[e:time_B4\]](#e:time_B4){reference-type="eqref" reference="e:time_B4"} to close the induction. Lemma [Lemma 19](#l:improved-energy){reference-type="ref" reference="l:improved-energy"} is proven.
◻ # Construction of the vectorfield - Proof of Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"} {#s:Onsager} This section is devoted to the proof of Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}. We point out that this proposition is a mild variation of [@BDSV Proposition 2.1], and we very closely follow the proof. On the other hand, for our purposes in this paper we need to adjust certain parameters used in the construction of [@BDSV], and for this reason we will need to repeat all the steps. Nevertheless, in this section we do not claim any originality. We start with the following important remark: **Remark 20**. *The basic principle in the subsequent estimates in this section will be, as in [@BDSV], that implicit constants may depend on the parameters introduced below but do not depend on $\lambda_q$ and in particular do not depend on the large constant $a\gg 1$ in the definition of $\lambda_q$ (cf. [\[e:lambdadelta\]](#e:lambdadelta){reference-type="eqref" reference="e:lambdadelta"}). Consequently, for sufficiently large $a\gg 1$ the implicit constants can be absorbed, so that ultimately we obtain the estimates [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}-[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} for $q+1$.* Next, we choose the parameters $\gamma_T, \gamma_R, \gamma_E, \gamma_L, \bar{N}$ and $\alpha_0$. The universal constants $M,\bar{e}$ will be defined below in Definitions [Definition 33](#d:defM){reference-type="ref" reference="d:defM"} and [Definition 29](#d:defebar){reference-type="ref" reference="d:defebar"}.
Actually, for the proof below we will require the following inequalities relating these parameters: $$\begin{aligned} \gamma_L&<(b-1)\beta\,,\label{e:comparison}\\ 4\alpha(1+\gamma_L)+2\gamma_L&<2(b-1)\beta+\gamma_T+\gamma_R\,,\label{e:gluing2}\\ \alpha\gamma_L&<\gamma_T\,,\label{e:taucondition1}\\ 4\alpha(1+\gamma_L)&<\gamma_R\,,\label{e:gammaRalphacondition}\\ 2\beta(b-1)+1+\gamma_R&<\bar{N}\gamma_L\,,\label{e:gluing1}\\ b\alpha+\gamma_T+b\gamma_R&<(b-1)(1-(2b+1)\beta)\,,\label{e:transportcondition}\\ b\gamma_E&<(b-1)(1-(2b+1)\beta)\,.\label{e:energycondition}\end{aligned}$$ We first claim that [\[e:Onsager_Conditions\]](#e:Onsager_Conditions){reference-type="eqref" reference="e:Onsager_Conditions"} allow us to choose $\gamma_L, \bar{N}$ and $\alpha_0>0$ so that [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"}-[\[e:energycondition\]](#e:energycondition){reference-type="eqref" reference="e:energycondition"} are valid. To see this, we can first choose $\gamma_L>0$ sufficiently small so that [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"} holds and $$2\gamma_L<2(b-1)\beta+\gamma_T+\gamma_R.$$ Then we choose $\bar{N}$ sufficiently large (depending on $\gamma_L$) so that [\[e:gluing1\]](#e:gluing1){reference-type="eqref" reference="e:gluing1"} is valid. Finally, we choose $\alpha_0>0$ sufficiently small, so that for any $\alpha\leq \alpha_0$ also [\[e:gluing2\]](#e:gluing2){reference-type="eqref" reference="e:gluing2"}, [\[e:taucondition1\]](#e:taucondition1){reference-type="eqref" reference="e:taucondition1"}, [\[e:gammaRalphacondition\]](#e:gammaRalphacondition){reference-type="eqref" reference="e:gammaRalphacondition"} and [\[e:transportcondition\]](#e:transportcondition){reference-type="eqref" reference="e:transportcondition"} hold. 
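The consistency of this ordered choice can also be sanity-checked numerically; a minimal sketch follows, in which the concrete values (in particular those of $b,\beta,\gamma_T,\gamma_R,\gamma_E$) are illustrative assumptions and not the choices fixed elsewhere in the paper.

```python
# Numerical sanity check of the parameter hierarchy (sample values; illustrative only).
b, beta = 2.0, 0.1             # assumed sample exponents with (2b+1)*beta < 1
gT, gR, gE = 0.05, 0.1, 0.1    # hypothetical gamma_T, gamma_R, gamma_E

# Choose gamma_L small, then Nbar large, then alpha small, in this order:
gL = 0.05                      # gamma_L < (b-1)*beta
Nbar = 30                      # so that 2*beta*(b-1) + 1 + gR < Nbar*gL
alpha = 0.01                   # alpha <= alpha_0, sufficiently small

checks = [
    gL < (b - 1) * beta,                                  # (comparison)
    4*alpha*(1 + gL) + 2*gL < 2*(b - 1)*beta + gT + gR,   # (gluing2)
    alpha * gL < gT,                                      # (taucondition1)
    4*alpha*(1 + gL) < gR,                                # (gammaRalphacondition)
    2*beta*(b - 1) + 1 + gR < Nbar * gL,                  # (gluing1)
    b*alpha + gT + b*gR < (b - 1)*(1 - (2*b + 1)*beta),   # (transportcondition)
    b*gE < (b - 1)*(1 - (2*b + 1)*beta),                  # (energycondition)
]
print(all(checks))  # True
```

Shrinking $\alpha$ or $\gamma_L$ further (and enlarging $\bar{N}$ accordingly) keeps all seven inequalities valid, mirroring the order of choices described above.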
For notational convenience we introduce $$\mathring{\delta}_{q+1}:=\delta_{q+1}\lambda_q^{-\gamma_R}=\lambda_q^{-2b\beta-\gamma_R},$$ so that [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"} can be written as $\left\|\mathring{R}_q\right\|_{C^0}\leq \mathring{\delta}_{q+1}$. ## Mollification step {#s:mollify} Following [@BDSV Section 2.4] we define $$u_{\ell}:= u_q*\psi_{\ell_q},\quad \mathring{R}_{\ell}:= \mathring R_q*\psi_{\ell_q} -(u_q\mathring\otimes u_q)*\psi_{\ell_q} + u_{\ell}\mathring\otimes u_{\ell}$$ so that $(u_\ell,\mathring{R}_\ell)$ is another solution to [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"}. However, at variance with [@BDSV Section 2.4], we fix a mollifying kernel $\psi\in C^\infty_c(\ensuremath{\mathbb{R}}^3)$ which, in addition to the usual requirement $\int_{\ensuremath{\mathbb{R}}^3}\psi\,dx=1$, also satisfies $$\label{e:deepmollifier} \int_{\ensuremath{\mathbb{R}}^3}\psi(x)x^\theta\,dx=0\quad\textrm{ for any multiindex $\theta$ with $1\leq|\theta|\leq \bar{N}$}.$$ The construction and use of such mollifiers (called "deep smoothing operators of depth $\bar{N}$") is standard, see e.g. [@GromovBook Section 2.3.4]; the case of infinite depth was introduced by Nash [@Nash56]. We point out that if $\bar{N}\geq 2$, then $\psi$ cannot be nonnegative. The key point is the following lemma, a variant of the usual smoothing estimates. **Lemma 21**. *Let $\psi\in C^\infty_c(\ensuremath{\mathbb{R}}^n)$ be a smoothing operator of depth $\bar{N}\geq 1$ and such that $\int\psi=1$.
Then for any real $r,s\geq 0$ $$\|f*\psi_\ell\|_{C^{r+s}}\lesssim \ell^{-s}\|f\|_{C^r}\label{e:mollify1}$$ and for any $r\geq 0$, $0\leq s\leq \bar{N}$ $$\|f-f*\psi_\ell\|_{C^r} \lesssim \ell^{s}\|f\|_{C^{r+s}}\label{e:mollify2}$$ The implicit constants depend on the choice of $\psi$ as well as on $r,s$.* *Proof.* Concerning [\[e:mollify1\]](#e:mollify1){reference-type="eqref" reference="e:mollify1"}, assume first that $r=k$ and $s=l$ are integers and let $a,b$ be multi-indices with $|a|=k, |b|=l$. Then $\partial^{a+b}(f*\psi_\ell)=\partial^{a} f*\partial^{b} \psi_\ell$, hence $$|\partial^{a+b}(f*\psi_\ell)|\leq C_l\ell^{-l}\|f\|_k.$$ If $s=l+\alpha$, we write $$\begin{aligned} \partial^{a} f*\partial^{b}\psi_\ell(x+z)-\partial^{a}f*\partial^{b}\psi_\ell(x)&=\int_{\ensuremath{\mathbb{R}}^n}\partial^{a}f(x-y)\left(\partial^b\psi_\ell(y+z)-\partial^b\psi_\ell(y)\right)\,dy\\ &=\ell^{-l}\int_{\ensuremath{\mathbb{R}}^n}\partial^{a}f(x-\ell y)\left((\partial^b\psi)(y+\ell^{-1} z)-(\partial^b\psi)(y)\right)\,dy,\end{aligned}$$ from which we obtain $$\|\partial^{a+b}(f*\psi_\ell)\|_{\alpha}\leq C_{l,\alpha}\ell^{-l-\alpha}\|f\|_k.$$ Finally, if also $r=k+\beta$ for some $\beta\in(0,1)$, we obtain the required estimate by interpolation between $r=k$ and $r=k+1$. This concludes the proof of [\[e:mollify1\]](#e:mollify1){reference-type="eqref" reference="e:mollify1"} for $r,s\geq 0$. Next, by considering the Taylor expansion of $f$ at $x$ we can write $$f(x-y)=f(x)+Q_x(y)+R_x(y),$$ where $Q_x(y)$ is a sum of monomials in $y$ of degree $d$ with $1\leq d\leq \bar{N}$ and $|R_x(y)|\lesssim |y|^{s}\|f\|_{C^s}$. Moreover, from [\[e:deepmollifier\]](#e:deepmollifier){reference-type="eqref" reference="e:deepmollifier"} we deduce that $\int_{\ensuremath{\mathbb{R}}^n}Q_x(y)\psi_\ell(y)\,dy=0$.
Thus, $$|f-f*\psi_\ell|= \left|\int \psi_\ell(y)(f(x-y)-f(x))dy\right|\\ \lesssim \|f\|_{C^s}\int \ell^{-n}\left|\psi(\ell^{-1}y)\right||y|^{s}dy\lesssim \ell^{s}\|f\|_s\, .$$ This proves [\[e:mollify2\]](#e:mollify2){reference-type="eqref" reference="e:mollify2"} for the case $r=0$. To obtain the estimate for integer $r=k$, repeat the same argument for the partial derivatives $\partial^a f$ with $|a|=k$. For general real $r\geq 0$ we again proceed by interpolation. ◻ With the help of Lemma [Lemma 21](#l:mollify){reference-type="ref" reference="l:mollify"} we obtain the following bounds. **Proposition 22**. *For any $N\geq 0$ we have $$\begin{aligned} \left\|u_{\ell}\right\|_{C^{N+1}} &\lesssim \begin{cases} \delta_q^{\sfrac 12}\lambda_q^{N+1}&\textrm{ if }N+1\leq\bar{N}\\ \delta_q^{\sfrac 12}\lambda_q^{\bar{N}}\ell_q^{\bar{N}-N-1}& \textrm{ if }N+1\geq \bar{N}\end{cases}\,, \label{e:u:ell:k}\\ \left\|\mathring{R}_{\ell}\right\|_{C^{N}}&\lesssim \mathring{\delta}_{q+1}\ell_q^{-N}+\delta_q\lambda_q^{1+\bar{N}}\ell_q^{\bar{N}-N} \,. \label{e:R:ell}\\ \left|\int_{\ensuremath{\mathbb{T}}^3}\left|u_q\right|^2-\left|u_{\ell}\right|^2\,dx\right| &\lesssim \mathring{\delta}_{q+1}+\delta_q^{\sfrac12}\lambda_q^{\bar{N}}\ell_q^{\bar{N}}\,. 
\label{e:uq_vell_energy_diff}\end{aligned}$$ Moreover, if $z_q=\mathcal{B}u_q$ and $z_\ell=\mathcal{B}u_{\ell}=z_q*\psi_{\ell_q}$ are the vector potentials, we have in addition $$\begin{aligned} \left\|z_\ell-z_q\right\|_{C^{N+\alpha}}&\lesssim \delta_q^{\sfrac12}\lambda_q^{\bar{N}}\ell_q^{\bar{N}+1-N-\alpha}\,,\label{e:z:ell}\end{aligned}$$* *Proof.* The bounds [\[e:u:ell:k\]](#e:u:ell:k){reference-type="eqref" reference="e:u:ell:k"} and [\[e:z:ell\]](#e:z:ell){reference-type="eqref" reference="e:z:ell"} follow directly from [\[e:mollify1\]](#e:mollify1){reference-type="eqref" reference="e:mollify1"} and [\[e:mollify2\]](#e:mollify2){reference-type="eqref" reference="e:mollify2"} together with the classical Schauder estimates on the Calderón-Zygmund operator $\nabla\mathcal{B}$. For [\[e:R:ell\]](#e:R:ell){reference-type="eqref" reference="e:R:ell"} we use the bound [\[e:u_q\_inductive_est\]](#e:u_q_inductive_est){reference-type="eqref" reference="e:u_q_inductive_est"} and interpolation to obtain $$\|u_q\otimes u_q\|_{C^{\bar{N}}}\lesssim \|u_q\|_{C^0}\|u_q\|_{C^{\bar{N}}}\leq \|u_q\|_{C^1}\|u_q\|_{C^{\bar{N}}}\lesssim\delta_q\lambda_q^{\bar{N}+1}\,.$$ Then we apply [\[e:mollify2\]](#e:mollify2){reference-type="eqref" reference="e:mollify2"} to the decomposition $$\|\mathring{R}_\ell\|_{C^N}\leq \|\mathring{R}_q*\psi_{\ell_q}\|_{C^N}+\|(u_q\mathring\otimes u_q)*\psi_{\ell_q}-u_q\mathring\otimes u_q\|_{C^N}+2\|(u_q-u_{q}*\psi_{\ell_q})\otimes u_q\|_{C^N}\,.$$ ◻ Note that at variance with [@BDSV Proposition 2.2] we are not using a commutator estimate here. From [\[e:gluing1\]](#e:gluing1){reference-type="eqref" reference="e:gluing1"} we obtain $\delta_q^{\sfrac12}\lambda_q^{\bar{N}}\ell_q^{\bar{N}}\leq \delta_q\lambda_q^{1+\bar{N}}\ell_q^{\bar{N}}\leq \mathring{\delta}_{q+1}$. Consequently we have **Corollary 23**. 
*For any $N\geq 0$ we have the estimates $$\begin{aligned} \left\|u_{\ell}\right\|_{C^{N+1}} &\lesssim \delta_q^{\sfrac 12}\lambda_q\ell_q^{-N}\,, \\ \left\|\mathring{R}_{\ell}\right\|_{C^{N}}&\lesssim \mathring{\delta}_{q+1}\ell_q^{-N}\,, \\ \left|\int_{\ensuremath{\mathbb{T}}^3}\left|u_q\right|^2-\left|u_{\ell}\right|^2\,dx\right| &\lesssim \mathring{\delta}_{q+1}\,,\\ \left\|z_\ell-z_q\right\|_{C^{N+\alpha}}&\lesssim \mathring{\delta}_{q+1}\ell_q^{1-\alpha-N}\,.\end{aligned}$$* ## Gluing step The gluing step, introduced in [@Isett2018], proceeds as follows. For each $i\in\ensuremath{\mathbb{N}}$ we set $t_i=i\tau_q$ and let $u_i$ be the (classical) solution of the Euler equations $$\label{e:gluingEuler} \begin{split} \partial_tu_i+\mathop{\rm div}\nolimits(u_i\otimes u_i)+\nabla p_i&=0\,,\\ \mathop{\rm div}\nolimits u_i&=0\,,\\ u_i(\cdot,t_i)&=u_\ell(\cdot,t_i)\,. \end{split}$$ It is well-known (see for instance [@BDSV Proposition 3.1]) that there exists a constant $c(\alpha)>0$ such that, for each $i\in\ensuremath{\mathbb{N}}$ the solution $u_i$ is smooth, uniquely defined, and satisfies for any $N\geq 1$ the estimates $$\|u_i(t)\|_{C^{N+\alpha}}\lesssim \|u_\ell(t_i)\|_{C^{N+\alpha}}\quad \textrm{ for all }t\in(t_i-T,t_i+T)$$ for $T\leq c\|u_\ell(t_i)\|_{C^{1,\alpha}}^{-1}$, where the implicit constant depends on $N$ and $\alpha\in(0,1)$. 
In particular, from our choice of $\tau_q$ in [\[e:elltau\]](#e:elltau){reference-type="eqref" reference="e:elltau"} we obtain for any $N\geq 1$ (cf. [@BDSV Corollary 3.2]) $$\|u_i(t)\|_{C^{N+\alpha}}\lesssim \delta_q^{\sfrac12}\lambda_q\ell_q^{1-N-\alpha}\,,$$ provided $$\label{e:taucondition} \tau_q\|u_\ell\|_{C^{1,\alpha}}\leq c\,.$$ Taking into account Remark [Remark 20](#r:remarkona){reference-type="ref" reference="r:remarkona"} and [\[e:u:ell:k\]](#e:u:ell:k){reference-type="eqref" reference="e:u:ell:k"}, this is ensured by [\[e:taucondition1\]](#e:taucondition1){reference-type="eqref" reference="e:taucondition1"} and by choosing $a\gg 1$ sufficiently large. Following the derivation of the stability estimates in [@BDSV Proposition 3.3] and [@BDSV Proposition 3.4], we deduce **Proposition 24**. *For $|t-t_i|\leq \tau_q$ and $N\geq 0$ we have $$\begin{aligned} \|u_i-u_\ell\|_{C^{N+\alpha}}&\lesssim \tau_q\mathring{\delta}_{q+1}\ell_q^{-N-1-2\alpha} \,,\label{e:stabilityu}\\ \|z_i-z_\ell\|_{C^{N+\alpha}}&\lesssim \tau_q\mathring{\delta}_{q+1}\ell_q^{-N-2\alpha} \,,\label{e:stabilityz}\\ \|(\partial_t+u_\ell\cdot\nabla)(z_i-z_\ell)\|_{C^{N+\alpha}}&\lesssim \mathring{\delta}_{q+1}\ell_q^{-N-2\alpha} \,\label{e:stabilityDz}\end{aligned}$$* *Proof.* Using the equations satisfied by $u_\ell$ and $u_i$, we obtain the equation for the pressure difference $$\Delta (p_{\ell} - p_i) = \mathop{\rm div}\nolimits\bigl(\nabla u_\ell(u_i-u_\ell)\bigr)+\mathop{\rm div}\nolimits\bigl(\nabla u_i{(u_i-u_\ell)}\bigr)+\mathop{\rm div}\nolimits\mathop{\rm div}\nolimits\mathring{R}_{\ell},$$ and deduce $$\left\|p_{\ell} - p_i \right\|_{C^{1+\alpha}} \lesssim \left\|u_\ell\right\|_{C^{1+\alpha}}\left\|u_{i} - u_\ell\right\|_{C^\alpha}+\|\mathring{R}_\ell\|_{C^{1+\alpha}}\,.$$ Using the equations for $u_\ell$ and $u_i$ we then obtain $$\|(\partial_t+u_\ell\cdot\nabla)(u_\ell-u_i)\|_{C^\alpha}\lesssim \left\|u_\ell\right\|_{C^{1+\alpha}}\left\|u_{i} -
u_\ell\right\|_{C^\alpha}+\|\mathring{R}_\ell\|_{C^{1+\alpha}}\,.$$ Applying Corollary [Corollary 23](#c:mollification){reference-type="ref" reference="c:mollification"}, [\[e:taucondition\]](#e:taucondition){reference-type="eqref" reference="e:taucondition"} and Grönwall's inequality we then conclude $$\|u_i-u_\ell\|_{C^\alpha}\lesssim \tau_q\mathring{\delta}_{q+1}\ell_q^{-1-2\alpha}\,,$$ which is [\[e:stabilityu\]](#e:stabilityu){reference-type="eqref" reference="e:stabilityu"} with $N=0$. The case $N\geq 1$ follows analogously, following the computations in the proof of [@BDSV Proposition 3.3]. Furthermore, the estimates [\[e:stabilityz\]](#e:stabilityz){reference-type="eqref" reference="e:stabilityz"}-[\[e:stabilityDz\]](#e:stabilityDz){reference-type="eqref" reference="e:stabilityDz"} can be deduced in the same manner, following the computations in the proof of [@BDSV Proposition 3.4]. ◻ Next, as in [@BDSV Section 4], we partition time using a partition of unity $\{\chi_i\}_i$, with $\chi_i\in C^{\infty}_c(\ensuremath{\mathbb{R}})$ and $0\leq \chi_i\leq 1$, such that - $\sum_i\chi_i\equiv 1$ in $[0,T]$; - $\mathop{\rm supp}\nolimits\chi_i\subset (t_i-\tfrac{2}{3}\tau_q,t_i+\tfrac{2}{3}\tau_q)$, in particular $\mathop{\rm supp}\nolimits\chi_i\cap\mathop{\rm supp}\nolimits\chi_{i+2}=\emptyset$; - $\chi_i=1$ on $(t_i-\tfrac{1}{3}\tau_q,t_i+\tfrac{1}{3}\tau_q)$ and $\chi_i+\chi_{i+1}=1$ on $(t_i+\tfrac{1}{3}\tau_q,t_i+\tfrac{2}{3}\tau_q)$; - $\|\partial_t^N\chi_i\|_{C^0}\lesssim \tau_q^{-N}$, and define $$\bar u_q=\sum_i \chi_i u_i\,,\quad \bar p_q^{(1)}=\sum_i \chi_i p_i.$$ Further, we define $$\begin{aligned} \mathring{\bar{R}}_q&=\partial_t\chi_i\mathcal{R}(u_i-u_{i+1})-\chi_i(1-\chi_i)(u_i-u_{i+1})\mathring{\otimes} (u_i-u_{i+1})\\ \bar{p}_q^{(2)}&=-\chi_i(1-\chi_i)\left(|u_i-u_{i+1}|^2-\int_{\ensuremath{\mathbb{T}}^3}|u_i-u_{i+1}|^2\,dx\right),\end{aligned}$$ for $t\in (t_i+\tfrac{1}{3}\tau_q,t_i+\tfrac{2}{3}\tau_q)$ and $\mathring{\bar{R}}_q=0$, 
$\bar{p}_q^{(2)}=0$ for $t\notin\bigcup_{i}(t_i+\tfrac{1}{3}\tau_q,t_i+\tfrac{2}{3}\tau_q)$, where $\mathcal{R}$ is the "inverse divergence" operator for symmetric tracefree 2-tensors, defined as $$\label{e:R:def} \begin{split} ({\mathcal R} f)^{ij} &= {\mathcal R}^{ijk} f^k \\ {\mathcal R}^{ijk} &= - \frac 12 \Delta^{-2} \partial_i \partial_j \partial_k - \frac 12 \Delta^{-1} \partial_k \delta_{ij} + \Delta^{-1} \partial_i \delta_{jk} + \Delta^{-1} \partial_j \delta_{ik}. \end{split}$$ when acting on vectors $f$ with zero mean on $\ensuremath{\mathbb{T}}^3$. See [@BDSV Proposition 4.1] and [@DaSz2017 Definition 4.2 and Lemma 4.3]. Finally, we set $$\bar{p}_q=\bar{p}_q^{(1)}+\bar{p}_q^{(2)}.$$ As in [@BDSV Section 4.2], one can easily verify that - $\mathring{\bar{R}}_q$ is a smooth symmetric and traceless 2-tensor; - For all $(x,t)\in \ensuremath{\mathbb{T}}^3\times [0,T]$ $$\left\{\begin{array}{l} \partial_t\bar{u}_q+\mathop{\rm div}\nolimits(\bar{u}_q\otimes\bar{u}_q)+\nabla \bar{p}_q =\mathop{\rm div}\nolimits\mathring{\bar{R}}_q,\\ \\ \mathop{\rm div}\nolimits\bar{u}_q =0; \end{array}\right.$$ - The support of $\mathring{\bar{R}}_q$ satisfies $$\label{e:gluedsupport} \mathop{\rm supp}\nolimits\mathring{\bar{R}}_q\subset \ensuremath{\mathbb{T}}^3\times \bigcup_i(t_i+\tfrac{1}{3}\tau_q,t_i+\tfrac{2}{3}\tau_q).$$ With our choice of parameters $\tau_q,\ell_q$ the estimates in [@BDSV Section 4.3 and Section 4.4] are modified as follows. **Proposition 25**. *The velocity field $\bar{u}_q$ and its vector potential $\bar{z}_q=\mathcal{B}\bar{u}_q$ satisfy the following estimates:* *$$\begin{aligned} \left\|\bar{u}_q-u_\ell\right\|_{C^{N+\alpha}} &\lesssim \tau_q\mathring{\delta}_{q+1}\ell_q^{-1-N-2\alpha} \label{e:uq:vell:additional}\,,\\ \left\|\bar{z}_q-z_\ell\right\|_{C^{\alpha}}&\lesssim \tau_q\mathring{\delta}_{q+1}\ell_q^{-\alpha}\,. \label{e:zq:vell:additional}\end{aligned}$$ for all $N \geq 0$. 
The new Reynolds stress $\mathring{\bar{R}}_q$ satisfies the estimates: $$\begin{aligned} \left\|\mathring{\bar R}_q\right\|_{N+\alpha} &\lesssim \mathring{\delta}_{q+1}\ell_q^{-N-2\alpha}+\tau_q^2\mathring{\delta}_{q+1}^2\ell_q^{-N-2-4\alpha}\,, \label{e:Rq:1}\\ \left\|(\partial_t + \bar u_q\cdot \nabla) \mathring{\bar R}_q\right\|_{N+\alpha} &\lesssim \tau_q^{-1}\mathring{\delta}_{q+1}\ell_q^{-N-3\alpha}+\tau_q\mathring{\delta}_{q+1}^2\ell_q^{-N-2-4\alpha}\,. \label{e:Rq:Dt}\end{aligned}$$ Furthermore, we have the estimate $$\label{e:gluingenergy} \left|\int_{\ensuremath{\mathbb{T}}^3}|\bar{u}_q|^2-|u_\ell|^2\,dx\right|\lesssim \tau_q\mathring{\delta}_{q+1}\delta_{q}^{\sfrac12}\lambda_q+\tau_q^2\mathring{\delta}_{q+1}^2\ell_q^{-2-4\alpha}\,.$$* *Proof.* Using the identity $\bar{u}_q-u_\ell=\sum_i\chi_i(u_i-u_\ell)$, the bounds [\[e:uq:vell:additional\]](#e:uq:vell:additional){reference-type="eqref" reference="e:uq:vell:additional"} and [\[e:zq:vell:additional\]](#e:zq:vell:additional){reference-type="eqref" reference="e:zq:vell:additional"} follow directly from [\[e:stabilityu\]](#e:stabilityu){reference-type="eqref" reference="e:stabilityu"} and [\[e:stabilityz\]](#e:stabilityz){reference-type="eqref" reference="e:stabilityz"} in Proposition [Proposition 24](#p:stability){reference-type="ref" reference="p:stability"}. As in the proof of [@BDSV Proposition 4.4] we write the new Reynolds stress as $$\mathring{\bar{R}}_q=\partial_t\chi_i(\mathcal{R}\mathop{\rm curl}\nolimits)(z_i-z_{i+1})-\chi_i(1-\chi_i)(u_i-u_{i+1})\mathring{\otimes} (u_i-u_{i+1})$$ and note that $\mathcal{R}\mathop{\rm curl}\nolimits$ is a zero-order operator of Calderón-Zygmund type, for which Schauder estimates are valid.
Therefore we obtain, again applying Proposition [Proposition 24](#p:stability){reference-type="ref" reference="p:stability"}, $$\begin{aligned} \|\mathring{\bar{R}}_q\|_{C^{N+\alpha}}&\lesssim \tau_q^{-1}\|z_i-z_{i+1}\|_{C^{N+\alpha}}+\|u_i-u_{i+1}\|_{C^{N+\alpha}}\|u_i-u_{i+1}\|_{C^\alpha}\,\\ &\lesssim \mathring{\delta}_{q+1}\ell_q^{-N-2\alpha}+\tau_q^2\mathring{\delta}_{q+1}^2\ell_q^{-2-N-4\alpha}\,.\end{aligned}$$ Next, differentiating the expression for $\mathring{\bar{R}}_q$ as in the proof of [@BDSV Proposition 4.4], we obtain $$\begin{aligned} \|(\partial_t+u_\ell\cdot\nabla)\mathring{\bar{R}}_q\|_{C^{N+\alpha}}&\lesssim \tau_q^{-2}\|z_i-z_{i+1}\|_{C^{N+\alpha}}+\tau_q^{-1}\|(\partial_t+u_\ell\cdot\nabla)(z_i-z_{i+1})\|_{C^{N+\alpha}}\\ &+\tau_q^{-1}\|u_\ell\|_{C^{1+\alpha}}\|z_i-z_{i+1}\|_{C^{N+\alpha}}+\tau_q^{-1}\|u_\ell\|_{C^{1+N+\alpha}}\|z_i-z_{i+1}\|_{C^{\alpha}}\\ &+\tau_q^{-1}\|u_i-u_{i+1}\|_{C^{N+\alpha}}\|u_i-u_{i+1}\|_{C^{\alpha}}\\ &+\|(\partial_t+u_\ell\cdot\nabla)(u_i-u_{i+1})\|_{C^{N+\alpha}}\|u_i-u_{i+1}\|_{C^{\alpha}}\\ &+\|(\partial_t+u_\ell\cdot\nabla)(u_i-u_{i+1})\|_{C^{\alpha}}\|u_i-u_{i+1}\|_{C^{N+\alpha}}\,.\end{aligned}$$ Using again Proposition [Proposition 24](#p:stability){reference-type="ref" reference="p:stability"} we deduce $$\|(\partial_t+u_\ell\cdot\nabla)\mathring{\bar{R}}_q\|_{C^{N+\alpha}}\lesssim \tau_q^{-1}\mathring{\delta}_{q+1}\ell_q^{-N-3\alpha}+\tau_q\mathring{\delta}_{q+1}^2\ell_q^{-2-N-4\alpha}\,.$$ Finally, following the proof of [@BDSV Proposition 4.5] we have $$\left|\frac{d}{dt}\int_{\ensuremath{\mathbb{T}}^3}|u_i|^2-|u_\ell|^2\,dx\right|\lesssim \|u_\ell\|_{C^1}\|\mathring{R}_\ell\|_{C^0}\lesssim \mathring{\delta}_{q+1}\delta_q^{\sfrac12}\lambda_q,$$ so that $$\left|\int_{\ensuremath{\mathbb{T}}^3}|u_i|^2-|u_\ell|^2\,dx\right|\lesssim \tau_q\mathring{\delta}_{q+1}\delta_q^{\sfrac12}\lambda_q.$$ On the other hand $$\int_{\ensuremath{\mathbb{T}}^3}|u_{i}-u_{i+1}|^2\,dx\lesssim \|u_i-u_{i+1}\|^2_{C^\alpha}.$$ Using
the identity from the proof of [@BDSV Proposition 4.5] $$|\bar u_q|^2-|u_\ell|^2=\chi_i(|u_i|^2-|u_\ell|^2)+(1-\chi_i)(|u_{i+1}|^2-|u_\ell|^2)-\chi_i(1-\chi_i)|u_i-u_{i+1}|^2\,,$$ and Proposition [Proposition 24](#p:stability){reference-type="ref" reference="p:stability"}, we deduce [\[e:gluingenergy\]](#e:gluingenergy){reference-type="eqref" reference="e:gluingenergy"}. ◻ We conclude this section with the following summary of the mollification/gluing steps: **Corollary 26**. *Let $(u_q,\mathring{R}_q)$ be a smooth solution of [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} satisfying the inductive assumptions [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}-[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"}. Then there exists another smooth solution $(\bar{u}_q,\mathring{\bar{R}}_q)$ of [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} with the support condition [\[e:gluedsupport\]](#e:gluedsupport){reference-type="eqref" reference="e:gluedsupport"} such that the following estimates hold: $$\begin{aligned} \|\bar{u}_q\|_{C^{N+1}}&\lesssim \delta_q^{\sfrac12}\lambda_q\ell_q^{-N}\,\label{e:c_gluingu}\\ \|\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}}&\lesssim \mathring{\delta}_{q+1}\ell_q^{-N-2\alpha}\,,\label{e:c_gluingR}\\ \|(\partial_t+\bar{u}_q\cdot\nabla)\mathring{\bar{R}}_{q}\|_{C^{N+\alpha}}&\lesssim \tau_q^{-1}\mathring{\delta}_{q+1}\ell_q^{-N-2\alpha}\,,\label{e:c_gluingDR}\\ \left|\int_{\ensuremath{\mathbb{T}}^3}|\bar{u}_q|^2-|u_q|^2\,dx\right|&\lesssim\mathring{\delta}_{q+1}\,,\label{e:c_gluingE}\end{aligned}$$ and moreover the vector potentials satisfy $$\label{e:c_gluingz} \|\bar{z}_{q}-z_q\|_{C^\alpha}\lesssim \tau_q\mathring{\delta}_{q+1}\ell_q^{-\alpha}\,.$$* **Remark 27**. 
*It is useful to compare these estimates with the corresponding bounds obtained in the mollification/gluing steps in [@BDSV], namely the bounds in [@BDSV (4.7), (4.10), (4.11), (4.12)]. For this comparison let us denote the respective parameters in [@BDSV] by $\mathring{\delta}_{q+1}^{old},\,\ell_q^{old},\,\tau_q^{old}$, and the corresponding exponents (given in our case by $\gamma_R, \gamma_L, \gamma_T$) by $\gamma_R^{old},\,\gamma_L^{old},\,\gamma_T^{old}$, so that, comparing with [@BDSV (2.7), (2.19), (2.26)], we have $$\gamma_R^{old}=3\alpha,\quad \gamma_L^{old}=(b-1)\beta+\tfrac{3}{2}\alpha,\quad \gamma_T^{old}=2\alpha(1+\gamma_L^{old}).$$ It is not difficult to see that [\[e:c_gluingu\]](#e:c_gluingu){reference-type="eqref" reference="e:c_gluingu"}, [\[e:c_gluingR\]](#e:c_gluingR){reference-type="eqref" reference="e:c_gluingR"} and [\[e:c_gluingE\]](#e:c_gluingE){reference-type="eqref" reference="e:c_gluingE"} are sharper bounds than the corresponding bounds [@BDSV (4.7), (4.10), (4.12)], provided $\gamma_L<\gamma_L^{old}$ and $\gamma_R>3\alpha(1+\gamma_L^{old})$, in particular if $$\gamma_L<(b-1)\beta\,\textrm{ and $\alpha>0$ is sufficiently small,}$$ in agreement with [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"}. In contrast, estimate [\[e:c_gluingDR\]](#e:c_gluingDR){reference-type="eqref" reference="e:c_gluingDR"} would only be sharper than [@BDSV (4.11)] if $\gamma_R<\gamma_R^{old}$, a condition which we will not assume, since we will instead rely on the better bound [\[e:c_gluingz\]](#e:c_gluingz){reference-type="eqref" reference="e:c_gluingz"}.* ## Perturbation step {#s:perturbation} The construction of the new vector field $u_{q+1}=\bar{u}_q+w_{q+1}$ is done in [@BDSV Section 5.2] and [@BDSV Section 5.3]. We start by recalling the main steps.
First we define space-time cutoff functions $\eta_i$, adapted to the temporal support of $\mathring{\bar{R}}_q$ in [\[e:gluedsupport\]](#e:gluedsupport){reference-type="eqref" reference="e:gluedsupport"}, and supported in "squiggling stripes", as done in [@BDSV Lemma 5.3]. We start with the following construction, which is independent of $q$: **Lemma 28**. *There exists two geometric constants $c_0,c_1>0$ and a family of smooth nonnegative functions $\bar{\eta}_i\in C^\infty(\ensuremath{\mathbb{T}}^3\times \ensuremath{\mathbb{R}})$ with the following properties:* 1. *$0\leq \bar\eta_i(x,t)\leq 1$;* 2. *$\mathop{\rm supp}\nolimits\bar\eta_i\cap\mathop{\rm supp}\nolimits\bar\eta_j=\emptyset$ for $i\neq j$;* 3. *$\ensuremath{\mathbb{T}}^3\times (i+\tfrac13,i+\tfrac23)\subset \{(x,t):\bar\eta_i(x,t)=1\}$;* 4. *$\mathop{\rm supp}\nolimits\bar\eta_i\subset \ensuremath{\mathbb{T}}^3\times (i-\tfrac13,i+\tfrac43)$;* *Moreover, the function $\bar\eta(x,t):=\sum_i\bar\eta_i(x,t)$ is $1$-periodic in $t$ and satisfies* 1. *$\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\bar\eta^2(x,t)\,dx=c_0$ for all $t$;* 2. 
*$\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_0^1\bar\eta^2(x,s)\,ds=c_1$ for all $x$.* *Proof.* We start by following the proof of [@BDSV Lemma 5.3] and choose a suitable $h\in C^{\infty}_c(0,1)$ such that, setting $$\label{e:hetai} h_i(x,t):=h\left(t-\tfrac{1}{6}\sin(2\pi x_1)-i\right)$$ the family of functions $\{h_i\}$ satisfies (i)-(iv) above, and there exists a geometric constant $c_0>0$ such that $$\label{e:heta-xaverage} \sum_i\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}h_i^2(x,t)\,dx\geq c_0\qquad\textrm{ for all }t.$$ Then define $$\bar\eta_i(x,t)= \left(\frac{1}{c_0}\sum_i\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% 
\!\int_{\ensuremath{\mathbb{T}}^3}h_j^2(y,t)\,dy\right)^{-\sfrac12}h_i(x,t)\,,$$ and $$\bar\eta(x,t)=\sum_i\bar\eta_i(x,t).$$ Then the $\bar\eta_i$ satisfy (i)-(iv), whereas $t\mapsto \bar\eta(x,t)$ is $1$-periodic and satisfies (v). Finally, from [\[e:hetai\]](#e:hetai){reference-type="eqref" reference="e:hetai"} we see that $\bar\eta_i(x,t)=\bar\eta_i(0,t-\tfrac16\sin(2\pi x_1))$, and consequently $$\bar\eta(x,t)=\bar\eta(0,t-\tfrac16\sin(2\pi x_1)).$$ But then the $1$-periodicity of $t\mapsto\bar\eta(x,t)$ implies that $\int_0^1\bar\eta^2(x,t)\,dt$ is independent of $x$. This shows (vi). ◻ The constants $c_0,c_1>0$ in Lemma [Lemma 28](#l:Onsageretabar){reference-type="ref" reference="l:Onsageretabar"} determine our choice of $\bar{e}>0$: **Definition 29**. *The constant $\bar{e}$ in [\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} is defined to be $\bar{e}=\frac{3c_0}{c_1}$ for the universal constants in conditions (v) and (vi) of Lemma [Lemma 28](#l:Onsageretabar){reference-type="ref" reference="l:Onsageretabar"}.* Now we are ready to define the family of cutoff functions $\eta_i$, in analogy with [@BDSV]: let $$\label{e:defetai} \eta_i(x,t)=\bar\eta_i(x,\tau_q^{-1}t),\quad \eta(x,t)=\bar\eta(x,\tau_q^{-1}t).$$ It is easy to see that the $\eta_i$ have the following properties: - $0\leq \eta_i(x,t)\leq 1$; - $\mathop{\rm supp}\nolimits\eta_i\cap\mathop{\rm supp}\nolimits\eta_j=\emptyset$ for $i\neq j$; - $\ensuremath{\mathbb{T}}^3\times I_i\subset \{(x,t):\eta_i(x,t)=1\}$, where $I_i=(t_i+\tfrac13\tau_q,t_{i}+\tfrac23\tau_q)$; - $\mathop{\rm supp}\nolimits\eta_i\subset \ensuremath{\mathbb{T}}^3\times \tilde I_i$, where $\tilde I_i:=(t_i-\tfrac13\tau_q,t_{i+1}+\tfrac13\tau_q)$; - For any $m,n\in\ensuremath{\mathbb{N}}$ we have the estimate $$\label{e:etai_est} \|\partial_t^m\eta_i\|_{C^n}\lesssim \tau_q^{-m}.$$ The second step is to introduce a scalar function of time, which acts as the trace of the
Reynolds stress tensor. In our case this will be defined as $$\label{e:sigmaq} \sigma_q(t):=\frac{1}{3c_0}\left(e(t)-\int_{\ensuremath{\mathbb{T}}^3}|\bar{u}_q|^2\,dx-\bar{e}\delta_{q+2}\right).$$ We remark that the notation for this function in [@BDSV] is $\rho_q(t)$, but in this paper we reserve $\rho$ to denote density in subsequent sections. Moreover, in [@BDSV] the definition involves $\delta_{q+2}/2$ rather than $\bar{e}\delta_{q+2}$; this difference is related to our sharper inductive estimate [\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"}. In particular, this leads to the following bound, which follows from [\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} and [\[e:gluingenergy\]](#e:gluingenergy){reference-type="eqref" reference="e:gluingenergy"}: $$\label{e:sigmabound} \left|\sigma_q(t)-\frac{\bar{e}}{3c_0}\delta_{q+1}\right|\lesssim \delta_{q+1}(\lambda_q^{-\gamma_E}+\lambda_q^{-\gamma_R}+\lambda_q^{-(b-1)\beta})\,.$$ Next, we introduce, as in [@BDSV Section 5.2], the localized versions of the Reynolds stress as $$\label{e:defRqi} R_{q,i}=\eta_i^2(\sigma_{q}\ensuremath{\mathrm{Id}}-\mathring{\bar{R}}_q),\quad \tilde R_{q,i}=\frac{\nabla\Phi_i R_{q,i}\nabla\Phi_i^T}{\sigma_{q,i}},\quad \sigma_{q,i}=\eta_i^2\sigma_q$$ where $\Phi_i$ is the backward flow map for the velocity field $\bar{u}_q$, defined as the solution of the transport equation $$\begin{aligned} (\partial_t + \bar{u}_q \cdot \nabla) \Phi_i &=0 \\ \Phi_i(x,t_i) &= x.\end{aligned}$$ By our choice of $\eta_i$ and $\tau_q$ (cf.
[\[e:taucondition\]](#e:taucondition){reference-type="eqref" reference="e:taucondition"}) the backward flow $\Phi_i$ is well-defined in the support of $\eta_i$ and satisfies the estimate $$\label{e:backwardflow} \|\nabla\Phi_i-\ensuremath{\mathrm{Id}}\|_{C^0}\lesssim \tau_q\|\bar{u}_q\|_{C^1}\lesssim \lambda_q^{-\gamma_T}.$$ In particular, we have the following analogue of [@BDSV Lemma 5.4]: **Lemma 30**. *For $a\gg 1$ sufficiently large we have $$\begin{aligned} \|\nabla\Phi_i-\ensuremath{\mathrm{Id}}\|_{C^0}&\leq 1/2\quad\textrm{ for }t\in\tilde I_i.\label{e:backwardflow1}\\ |\sigma_q(t)-\tfrac13\bar{e}\delta_{q+1}|&\leq \tfrac{1}{9}\bar{e}\delta_{q+1}\quad\textrm{ for all }t\,.\label{e:sigma_range}\end{aligned}$$ and for any $N\geq 0$ $$\begin{aligned} \left\|\sigma_{q,i}\right\|_{C^N}&\lesssim \delta_{q+1}\,,\label{e:sigma_i_bnd_N}\\ \left\|\partial_t \sigma_{q,i}\right\|_{C^N} &\lesssim \delta_{q+1}\tau_q^{-1}\,. \label{e:sigma_i_bnd_t}\end{aligned}$$ Moreover, we also have, for any $t\in \tilde I_i$ $$\label{e:Rqi-I} \left|\frac{R_{q,i}}{\sigma_{q,i}}-\ensuremath{\mathrm{Id}}\right|=\left|\sigma_{q}^{-1}\mathring{\bar{R}}_q\right|\lesssim \lambda_q^{-\sfrac{\gamma_R}{2}}.$$ In particular, for $a\gg 1$ sufficiently large and for all $(x,t)$ $$\tilde R_{q,i}(x,t)\in B_{\sfrac12}(\ensuremath{\mathrm{Id}})\subset \mathcal{S}^{3\times 3}_+\,,$$ where $B_{\sfrac12}(\ensuremath{\mathrm{Id}})$ denotes the metric ball of radius $1/2$ around the identity $\ensuremath{\mathrm{Id}}$ in the space $\mathcal{S}^{3\times 3}$.* *Proof.* The proof follows closely the proof of [@BDSV Lemma 5.4]. 
In particular, [\[e:backwardflow1\]](#e:backwardflow1){reference-type="eqref" reference="e:backwardflow1"} follows from [\[e:backwardflow\]](#e:backwardflow){reference-type="eqref" reference="e:backwardflow"}, and [\[e:sigma_range\]](#e:sigma_range){reference-type="eqref" reference="e:sigma_range"} follows from [\[e:sigmabound\]](#e:sigmabound){reference-type="eqref" reference="e:sigmabound"}. The estimates [\[e:sigma_i\_bnd_N\]](#e:sigma_i_bnd_N){reference-type="eqref" reference="e:sigma_i_bnd_N"}-[\[e:sigma_i\_bnd_t\]](#e:sigma_i_bnd_t){reference-type="eqref" reference="e:sigma_i_bnd_t"} can be obtained as in [@BDSV (5.13)-(5.15)]. Indeed, we start by using equation [\[e:EulerReynolds\]](#e:EulerReynolds){reference-type="eqref" reference="e:EulerReynolds"} to estimate $$\left|\frac{d}{dt} \int \left|\bar{u}_{q}(x,t)\right|^2\,dx\right|= \left|2\int \nabla \bar{u}_q\cdot \mathring{\bar{R}}_q\,dx \right|\lesssim \delta_{q+1}\delta_q^{\sfrac 12}\lambda_q,$$ so that $$|\tfrac{d}{dt}\sigma_q(t)| \lesssim \|\tfrac{d}{dt}e\|_{C^0}+\delta_{q+1}\delta_q^{\sfrac 12}\lambda_q\lesssim \delta_{q+1}\tau_q^{-1},$$ where we assume $a\gg 1$ is sufficiently large to absorb the term $\|\tfrac{d}{dt}e\|_{C^0}$. Then we use [\[e:etai_est\]](#e:etai_est){reference-type="eqref" reference="e:etai_est"} to conclude the bounds [\[e:sigma_i\_bnd_N\]](#e:sigma_i_bnd_N){reference-type="eqref" reference="e:sigma_i_bnd_N"}-[\[e:sigma_i\_bnd_t\]](#e:sigma_i_bnd_t){reference-type="eqref" reference="e:sigma_i_bnd_t"}. The estimate [\[e:Rqi-I\]](#e:Rqi-I){reference-type="eqref" reference="e:Rqi-I"} follows directly from [\[e:gluingR\]](#e:gluingR){reference-type="eqref" reference="e:gluingR"} and [\[e:gammaRalphacondition\]](#e:gammaRalphacondition){reference-type="eqref" reference="e:gammaRalphacondition"}.
Consequently, the bound on the range of $\tilde R_{q,i}$ follows from [\[e:backwardflow\]](#e:backwardflow){reference-type="eqref" reference="e:backwardflow"} by choosing $a\gg 1$ sufficiently large. ◻ With Lemma [Lemma 30](#l:sigma){reference-type="ref" reference="l:sigma"} and the definitions in [\[e:defRqi\]](#e:defRqi){reference-type="eqref" reference="e:defRqi"} we define the new perturbation $w_{q+1}$, precisely as in [@BDSV Section 5.3], as follows:[^7] $$\label{e:neww} \begin{split} w_{q+1}&=\frac{1}{\lambda_{q+1}}\mathop{\rm curl}\nolimits\left[\sum_{i}\sum_{\vec{k}\in\Lambda}\sigma_{q,i}^{\sfrac12}a_{\vec{k}}(\tilde R_{q,i})\nabla\Phi_i^TU_{\vec{k}}(\lambda_{q+1}\Phi_i)\right]\\ &=\underbrace{\sum_{i}\sum_{\vec{k}\in\Lambda}\sigma_{q,i}^{\sfrac12}a_{\vec{k}}(\tilde R_{q,i})\nabla\Phi_i^{-1}W_{\vec{k}}(\lambda_{q+1}\Phi_i)}_{w_o}\\ &\quad+\underbrace{\frac{1}{\lambda_{q+1}}\sum_{i}\sum_{\vec{k}\in\Lambda}\nabla (\sigma_{q,i}^{\sfrac12}a_{\vec{k}}(\tilde R_{q,i}))\times \nabla\Phi_i^TU_{\vec{k}}(\lambda_{q+1}\Phi_i)}_{w_c} \end{split}$$ Note that in the formulas above $\vec{k}\in \Lambda$ denotes vectors in $\ensuremath{\mathbb{R}}^3$ and the corresponding sum is finite.
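Let us briefly indicate why the two representations agree. By the identity $\mathop{\rm curl}\nolimits(fV)=f\mathop{\rm curl}\nolimits V+\nabla f\times V$ applied with $f=\sigma_{q,i}^{\sfrac12}a_{\vec{k}}(\tilde R_{q,i})$, and since $\det\nabla\Phi_i=1$ ($\bar{u}_q$ being divergence-free), the transformation rule for pullbacks of vector potentials gives $$\mathop{\rm curl}\nolimits\left[\nabla\Phi_i^TU_{\vec{k}}(\lambda_{q+1}\Phi_i)\right]=\lambda_{q+1}\nabla\Phi_i^{-1}(\mathop{\rm curl}\nolimits U_{\vec{k}})(\lambda_{q+1}\Phi_i)\,,$$ which, recalling that $W_{\vec{k}}=\mathop{\rm curl}\nolimits U_{\vec{k}}$, yields the term $w_o$; the remaining term is $w_c$. In particular, being a curl, $w_{q+1}$ is automatically divergence-free.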
In contrast, the notation introduced in [@BDSV] is $$\label{e:wowc-old} w_o=\sum_i\sum_{k\in\ensuremath{\mathbb{Z}}^3\setminus\{0\}}(\nabla\Phi_i)^{-1}b_{i,k}e^{i\lambda_{q+1}k\cdot\Phi_i},\quad w_c=\sum_i\sum_{k\in\ensuremath{\mathbb{Z}}^3\setminus\{0\}}c_{i,k}e^{i\lambda_{q+1}k\cdot\Phi_i},$$ where $$b_{i,k}=\sigma_{q,i}^{\sfrac12}a_{k}(\tilde R_{q,i})A_{k},\quad c_{i,k}=\frac{-i}{\lambda_{q+1}}\mathop{\rm curl}\nolimits\left[\sigma_{q,i}^{\sfrac12}\frac{\nabla\Phi_i^T(k\times a_k(\tilde R_{q,i}))}{|k|^2}\right],$$ the index $k\in\ensuremath{\mathbb{Z}}^3\setminus\{0\}$ denotes the Fourier variable, and $A_k\in\ensuremath{\mathbb{C}}^3$ are complex vectors arising in the Fourier decomposition of Mikado flows, specifically of the functions $\psi_{\vec{k}}$ in [\[e:defUW\]](#e:defUW){reference-type="eqref" reference="e:defUW"}. In particular, since $\psi_{\vec{k}}$ is smooth, the Fourier coefficients $a_k$ in the expressions for $b_{i,k},c_{i,k}$, together with all their derivatives, are bounded and have polynomial decay in $k$ of arbitrary order (cf. [@BDSV (5.5)]). At variance with [@BDSV] we will make use of this fact in the form $$\label{e:decayofak} \|a_k(\tilde R_{q,i})\|_0\lesssim |k|^{-\bar{N}-3}\,.$$ The representation [\[e:wowc-old\]](#e:wowc-old){reference-type="eqref" reference="e:wowc-old"} is useful for obtaining estimates for $w_{q+1}$ and for the new Reynolds stress $\mathring{R}_{q+1}$, whereas the representation [\[e:neww\]](#e:neww){reference-type="eqref" reference="e:neww"} will be useful for computing the bulk diffusion coefficient induced by $w_{q+1}$ in Section [4](#s:homogenization){reference-type="ref" reference="s:homogenization"}. As far as the estimates on $w_{q+1}, \mathring{R}_{q+1}$ are concerned, in light of Remark [Remark 27](#r:comparison){reference-type="ref" reference="r:comparison"} all estimates in [@BDSV Section 5.3-5.5] which do not use transport derivatives remain valid. These are (cf.
[@BDSV Lemma 5.5 and Proposition 5.7]): **Lemma 31**. *There is a geometric constant $\bar{M}$ such that $$\label{e:barM} \|b_{i,k}\|_0 \leq \bar M\delta_{q+1}^{\sfrac{1}{2}}|k|^{-\bar{N}-3} \, .$$ Moreover, for $t\in \tilde I_i$ and any $N\geq 0$ $$\begin{aligned} \left\| (\nabla\Phi_i)^{-1}\right\|_{C^N} + \left\|\nabla\Phi_i\right\|_{C^N} &\lesssim \ell_q^{-N} \,,\label{e:est_PhiN}\\ \left\|\sigma_{q,i}^{-1}R_{q,i}\right\|_{C^N}+\left\|\tilde R_{q,i}\right\|_{C^N} &\lesssim \ell_q^{-N}\,,\label{e:est_Rqi}\\ \left\|b_{i,k}\right\|_{C^N} &\lesssim \delta_{q+1}^{\sfrac12}|k|^{-\bar{N}-3}\ell_q^{-N}\,,\label{e:est_b} \\ \left\|c_{i,k}\right\|_{C^N} &\lesssim \delta_{q+1}^{\sfrac12}\lambda_{q+1}^{-1}|k|^{-\bar{N}-3}\ell_q^{-N-1}\,.\label{e:est_c}\end{aligned}$$* *Proof.* The estimate [\[e:est_PhiN\]](#e:est_PhiN){reference-type="eqref" reference="e:est_PhiN"} follows from [\[e:backwardflow1\]](#e:backwardflow1){reference-type="eqref" reference="e:backwardflow1"} and [\[e:gluingu\]](#e:gluingu){reference-type="eqref" reference="e:gluingu"}. Let us denote $D_t=\partial_t+\bar{u}_q\cdot\nabla$. Since $D_t\Phi_i=0$ by definition, for $N\geq 1$ we have $$\begin{aligned} \left\|D_t\nabla\Phi_i\right\|_{C^N} &\lesssim \|\nabla\bar{u}_q^T\nabla\Phi_i\|_{C^N}\lesssim \|\nabla\bar{u}_q\|_{C^0}\|\nabla\Phi_i\|_{C^N}+\|\nabla\bar{u}_q\|_{C^N}\|\nabla\Phi_i\|_{C^0}\\ &\lesssim \tau_q^{-1}\|\nabla\Phi_i\|_{C^N}+\tau_q^{-1}\ell_q^{-N}\,.\end{aligned}$$ We deduce [\[e:est_PhiN\]](#e:est_PhiN){reference-type="eqref" reference="e:est_PhiN"} from here using Grönwall's inequality. 
Next, using [\[e:defRqi\]](#e:defRqi){reference-type="eqref" reference="e:defRqi"} we write $$\sigma_{q,i}^{-1}R_{q,i}=\ensuremath{\mathrm{Id}}-\sigma_q^{-1}\mathring{\bar{R}}_q,\quad \tilde R_{q,i}=\nabla\Phi_i(\ensuremath{\mathrm{Id}}-\sigma_q^{-1}\mathring{\bar{R}}_q)\nabla\Phi_i^T\,.$$ Then, applying [\[e:gluingR\]](#e:gluingR){reference-type="eqref" reference="e:gluingR"} and [\[e:sigma_range\]](#e:sigma_range){reference-type="eqref" reference="e:sigma_range"} we obtain $$\begin{aligned} \left\|\sigma_{q,i}^{-1}R_{q,i}\right\|_{C^N}\lesssim \delta_{q+1}^{-1}\left\|R_{q,i}\right\|_{C^{N+\alpha}}\lesssim \frac{\mathring{\delta}_{q+1}}{\delta_{q+1}}\ell_q^{-N-2\alpha}\lesssim \ell_q^{-N},\end{aligned}$$ where in the last inequality we used [\[e:gammaRalphacondition\]](#e:gammaRalphacondition){reference-type="eqref" reference="e:gammaRalphacondition"}. Similarly we obtain the estimate for $\tilde R_{q,i}$, leading to [\[e:est_Rqi\]](#e:est_Rqi){reference-type="eqref" reference="e:est_Rqi"}. The estimates [\[e:est_b\]](#e:est_b){reference-type="eqref" reference="e:est_b"} and [\[e:est_c\]](#e:est_c){reference-type="eqref" reference="e:est_c"} follow directly from [\[e:sigma_i\_bnd_N\]](#e:sigma_i_bnd_N){reference-type="eqref" reference="e:sigma_i_bnd_N"} and [\[e:decayofak\]](#e:decayofak){reference-type="eqref" reference="e:decayofak"} as well as the above. ◻ Because we require inductive estimates on $\bar{N}\geq 1$ derivatives of $u_q$, [@BDSV Corollary 5.9] is replaced by **Lemma 32**. 
*Under the assumption [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"} and assuming $a\gg 1$ is sufficiently large, we have for any $N=0,1,\dots,\bar{N}$ $$\begin{aligned} \left\|w_o\right\|_{C^N} &\leq \tilde M\delta_{q+1}^{\sfrac 12}\lambda_{q+1}^N\,,\\ \left\|w_c\right\|_{C^N} &\lesssim \delta_{q+1}^{\sfrac 12}\ell_q^{-1}\lambda_{q+1}^{-1}\lambda_{q+1}^N\,,\\ \left\|w_{q+1}\right\|_{C^N} &\leq 2\tilde M \delta_{q+1}^{\sfrac 12}\lambda_{q+1}^N\,, \end{aligned}$$ where the constant $\tilde M$ depends on $\bar{N}$ and $\bar{M}$.* *Proof.* We use the representation in [\[e:wowc-old\]](#e:wowc-old){reference-type="eqref" reference="e:wowc-old"}. First of all, using the chain rule we obtain $$\|e^{i\lambda_{q+1}k\cdot\Phi_i}\|_{C^m}\leq \lambda_{q+1}^m|k|^m\|\nabla\Phi_i\|_{C^0}^m+\sum_{j<m,\theta}C_{j,m}\lambda_{q+1}^j|k|^j\|\nabla\Phi_i\|_{C^0}^{\theta_1}\cdot\dots\cdot\|\nabla\Phi_i\|_{C^{m-1}}^{\theta_m}\,$$ for some constants $C_{j,m}$ (binomial coefficients), where the sum is over $1\leq j<m$ and multi-indices $\theta$ with $m=\theta_1+2\theta_2+\dots+m\theta_m$ and $j=\theta_1+\dots+\theta_m$.
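For instance, in the first nontrivial case $m=2$, differentiating $\nabla e^{i\lambda_{q+1}k\cdot\Phi_i}=i\lambda_{q+1}(\nabla\Phi_i^Tk)\,e^{i\lambda_{q+1}k\cdot\Phi_i}$ once more gives $$\|e^{i\lambda_{q+1}k\cdot\Phi_i}\|_{C^2}\lesssim \lambda_{q+1}^2|k|^2\|\nabla\Phi_i\|_{C^0}^2+\lambda_{q+1}|k|\|\nabla\Phi_i\|_{C^1}\,,$$ which corresponds to the leading term and to the single lower-order term $j=1$, $\theta=(0,1)$ in the formula above.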
Then, using Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"} we deduce $$\|e^{i\lambda_{q+1}k\cdot\Phi_i}\|_{C^m}\lesssim \lambda_{q+1}^m|k|^m+\lambda_{q+1}|k|\ell_q^{1-m}.$$ However, from [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"} it follows in particular that $\gamma_L<b-1$, hence $\ell_q^{-1}<\lambda_{q+1}$, so that we deduce $$\|e^{i\lambda_{q+1}k\cdot\Phi_i}\|_{C^m}\lesssim \lambda_{q+1}^m|k|^m.$$ By applying the product rule and Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"} we then conclude that there exists $\tilde{M}$ such that $$\|w_o\|_{C^m}\leq \tilde{M}\delta_{q+1}^{\sfrac12}\lambda_{q+1}^m\quad\textrm{ for all }m=0,1,\dots,\bar{N}.$$ The estimate on $w_c$ follows directly from Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"}. ◻ **Definition 33**. *The constant $M$ in [\[e:u_q\_inductive_est\]](#e:u_q_inductive_est){reference-type="eqref" reference="e:u_q_inductive_est"} is defined as $M:=4\tilde M$, where $\tilde M$ is the constant in Lemma [Lemma 32](#l:boundsonw){reference-type="ref" reference="l:boundsonw"}.* Finally, coming to estimates involving time-derivatives, we have the following variant of [@BDSV Proposition 5.9]: **Lemma 34**. *For any $t\in \tilde I_i$ and $N\geq 0$ we have $$\begin{aligned} \|D_t\nabla\Phi_i\|_{C^N}&\lesssim \delta_q^{\sfrac12}\lambda_q\ell_q^{-N}\,,\\ \|D_t\sigma_{q,i}\|_{C^N}&\lesssim \delta_{q+1}\tau_q^{-1}\ell_q^{-N}\,,\\ \|D_t\tilde{R}_{q,i}\|_{C^N}&\lesssim \tau_q^{-1}\ell_q^{-N}\,,\\ \|D_t c_{i,k}\|_{C^N}&\lesssim \delta_{q+1}^{\sfrac12}\tau_q^{-1}\lambda_{q+1}^{-1}\ell_q^{-N-1}|k|^{-\bar{N}-3} \,, \end{aligned}$$ where $D_t=\partial_t+\bar{u}_q\cdot\nabla$.* *Proof.* The proof follows [@BDSV Proposition 5.9], using this time Lemma [Lemma 30](#l:sigma){reference-type="ref" reference="l:sigma"} and Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"}.
In particular, using the expressions for the $D_t$ derivatives, we have $$\begin{aligned} \|D_t\nabla\Phi_i\|_{C^N}&\lesssim \|\nabla\Phi_i\nabla \bar{u}_q\|_{C^N}\lesssim \delta_q^{\sfrac12}\lambda_q\ell_q^{-N}\lesssim \tau_q^{-1}\ell_q^{-N}\,,\\ \|D_t\sigma_{q,i}\|_{C^N}&\lesssim \|\partial_t\sigma_{q,i}\|_{C^N}+\|\sigma_{q,i}\|_{C^{N+1}}\|\bar{u}_q\|_{C^1}+\|\sigma_{q,i}\|_{C^1}\|\bar{u}_q\|_{C^N}\\ &\lesssim \delta_{q+1}\tau_q^{-1}+\delta_{q+1}\delta_q^{\sfrac12}\lambda_q+\delta_{q+1}\delta_q^{\sfrac12}\lambda_q\ell_q^{-N+1}\\ &\lesssim \delta_{q+1}\tau_q^{-1}\ell_q^{-N}\,. \end{aligned}$$ Further, using [\[e:defRqi\]](#e:defRqi){reference-type="eqref" reference="e:defRqi"} we write $\sigma_{q,i}^{-1}R_{q,i}=\ensuremath{\mathrm{Id}}-\sigma_q^{-1}\mathring{\bar{R}}_q$ and compute $$\begin{aligned} \|D_t(\sigma_{q,i}^{-1}R_{q,i})\|_{C^N}&\lesssim \|\sigma_{q}^{-2}\partial_t\sigma_{q}\mathring{\bar{R}}_q\|_{C^{N+\alpha}}+\|\sigma_{q}^{-1}D_t\mathring{\bar{R}}_q\|_{C^{N+\alpha}}\\ &\lesssim \frac{\mathring{\delta}_{q+1}}{\delta_{q+1}}\tau_q^{-1}\ell_q^{-N-2\alpha}\lesssim \tau_q^{-1}\ell_q^{-N}\,, \end{aligned}$$ where we again used [\[e:gammaRalphacondition\]](#e:gammaRalphacondition){reference-type="eqref" reference="e:gammaRalphacondition"}. Then, using Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"}, $$\begin{aligned} \|D_t\tilde R_{q,i}\|_{C^N}&\lesssim \|D_t\nabla\Phi_i\|_{C^N}\|\sigma_{q,i}^{-1}R_{q,i}\|_{C^0}+ \|D_t\nabla\Phi_i\|_{C^0}\|\sigma_{q,i}^{-1}R_{q,i}\|_{C^{N}}+\\ &+ \|D_t\nabla\Phi_i\|_{C^0}\|\sigma_{q,i}^{-1}R_{q,i}\|_{C^0}\|\nabla\Phi_i\|_{C^N} +\\ &+ \|D_t(\sigma_{q,i}^{-1}R_{q,i})\|_{C^N}+\|D_t(\sigma_{q,i}^{-1}R_{q,i})\|_{C^0}\|\nabla\Phi_i\|_{C^N}\\ &\lesssim \tau_q^{-1}\ell_q^{-N}\,.
\end{aligned}$$ The estimate for $D_t c_{i,k}$ follows again from [\[e:decayofak\]](#e:decayofak){reference-type="eqref" reference="e:decayofak"}, Lemma [Lemma 30](#l:sigma){reference-type="ref" reference="l:sigma"}, Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"} and the above. ◻ Having obtained the analogous estimates for the perturbation $w_{q+1}$, we can estimate the new Reynolds stress $\mathring{R}_{q+1}$ precisely as in [@BDSV Section 6.1]. We set (cf. [@BDSV (5.21)]) $$\label{e:decompR} \mathring{R}_{q+1} = \underbrace{\ensuremath{\mathcal{R}}\left( w_{q+1} \cdot \nabla \bar u_q\right)}_{\mbox{Nash error}} + \underbrace{\ensuremath{\mathcal{R}}\left( \partial_t w_{q+1} + \bar u_q \cdot \nabla w_{q+1} \right)}_{\mbox{Transport error}} + \underbrace{\ensuremath{\mathcal{R}}\mathop{\rm div}\nolimits\left(- {\bar R}_{q} + (w_{q+1} \otimes w_{q+1}) \right)}_{\mbox{Oscillation error}}\,,$$ where $$\begin{aligned} \bar R_q = \sum_{i} R_{q,i}\, .\end{aligned}$$ With this definition one may verify that $$\left\{ \begin{array}{l} \partial_t u_{q+1} + \mathop{\rm div}\nolimits(u_{q+1} \otimes u_{q+1}) + \nabla p_{q+1} = \mathop{\rm div}\nolimits(\mathring{R}_{q+1}) \, , \\ \\ \mathop{\rm div}\nolimits u_{q+1} = 0 \, , \end{array}\right.$$ where the new pressure is defined by $$p_{q+1}(x,t) = \bar p_q(x,t) - \sum_{i} \sigma_{q,i}(x,t) + \sigma_{q}(t).$$ The analogue of [@BDSV Proposition 6.1] for estimating the new Reynolds stress is **Proposition 35**. *The Reynolds stress error $\mathring R_{q+1}$ satisfies the estimate $$\label{e:final_R_est} \left\|\mathring R_{q+1}\right\|_{0}\lesssim \frac{\delta_{q+1}^{\sfrac12}}{\tau_q\lambda_{q+1} ^{1-\alpha}} \,.$$* *Proof.* We follow the proof of [@BDSV Proposition 6.1] and estimate each term in [\[e:decompR\]](#e:decompR){reference-type="eqref" reference="e:decompR"} separately.
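All three terms are of the schematic form $\ensuremath{\mathcal{R}}(a\,e^{i\lambda_{q+1}k\cdot\Phi_i})$ and are bounded via the stationary phase estimate [@BDSV Proposition C.2], which, roughly speaking, gives for a smooth tensor field $a$ $$\|\ensuremath{\mathcal{R}}(a\,e^{i\lambda_{q+1}k\cdot\Phi_i})\|_{\alpha}\lesssim \frac{\|a\|_{C^0}}{\lambda_{q+1}^{1-\alpha}}+\frac{\|a\|_{C^{N+\alpha}}+\|a\|_{C^0}\|\nabla\Phi_i\|_{C^{N+\alpha}}}{\lambda_{q+1}^{N-\alpha}}\,;$$ the first term is the leading contribution, while the second becomes negligible for a suitable choice of $N$.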
Concerning the *Nash term*, we have, as in [@BDSV], for any $N\in\ensuremath{\mathbb{N}}$ $$\begin{aligned} \left\|\mathcal R\left(w_{q+1} \cdot \nabla \bar u_q \right)\right\|_{\alpha}&\lesssim \sum_{k\in\ensuremath{\mathbb{Z}}^3\setminus\{0\}}\frac{\delta_{q+1}^{\sfrac12} \delta_q^{\sfrac12}\lambda_q }{\lambda_{q+1}^{1-\alpha}|k|^{\bar{N}+3}} + \frac{ \delta_{q+1}^{\sfrac12} \delta_q^{\sfrac12}\lambda_{q}}{\lambda_{q+1}^{N-\alpha}\ell_q^{N+\alpha} |k|^{\bar{N}+3}}\\ &+\sum_{k\in\ensuremath{\mathbb{Z}}^3\setminus\{0\}}\frac{\delta_{q+1}^{\sfrac12}\delta_{q}^{\sfrac12}\lambda_q}{\ell_q\lambda_{q+1}^{2-\alpha}|k|^{\bar{N}+3}}+\frac{\delta_{q+1}^{\sfrac12}\delta_{q}^{\sfrac12}\lambda_q}{\ell_q^{N+1-\alpha}\lambda_{q+1}^{N+1-\alpha}|k|^{\bar{N}+3}}\end{aligned}$$ where we have used the representation [\[e:wowc-old\]](#e:wowc-old){reference-type="eqref" reference="e:wowc-old"}, Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"} and the stationary phase estimate [@BDSV Proposition C.2]. 
We claim that it is possible to choose $N\geq 1$ so that $$\lambda_{q+1}^{N-1}\ell_q^{N+\alpha}>1.$$ Using [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"}, this follows provided $$\label{e:choiceofN} N(b-1)(1-\beta)>b+\alpha(1+\gamma_L).$$ In turn, with this choice of $N$, using [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"} again to obtain $\lambda_{q+1}\ell_q>1$, and using $\bar{N}\geq 2$, we deduce $$\label{e:Nash_est} \left\|\mathcal R\left(w_{q+1} \cdot \nabla \bar u_q \right)\right\|_{\alpha}\lesssim \frac{\delta_{q+1}^{\sfrac12} \delta_q^{\sfrac12}\lambda_q }{\lambda_{q+1}^{1-\alpha}}\,.$$ Concerning the *transport error* we write, for $t\in \tilde I_i$, $$\label{e:wotransport} \begin{split} (\partial_t+\bar u_q\cdot \nabla) w_o =& \sum_{i,k} (\nabla\bar u_q)^T(\nabla\Phi_i)^{-1} b_{i,k} e^{i\lambda_{q+1}k\cdot \Phi_i}\\&\quad + \sum_{i,k} (\nabla\Phi_i)^{-1} (\partial_t+\bar u_q\cdot \nabla) \left(\sigma_{q,i}^{\sfrac12} a_k(\tilde R_{q,i})\right) e^{i\lambda_{q+1}k\cdot \Phi_i} \,. 
\end{split}$$ As in [@BDSV] we obtain, arguing again as above with a sufficiently large $N$ satisfying [\[e:choiceofN\]](#e:choiceofN){reference-type="eqref" reference="e:choiceofN"}, $$\begin{aligned} \left\|\mathcal R\left( (\nabla\bar u_q)^T(\nabla\Phi_i)^{-1} b_{i,k} e^{i\lambda_{q+1}k\cdot \Phi_i} \right)\right\|_{\alpha} &\lesssim \frac{\delta_{q+1}^{\sfrac12} \delta_q^{\sfrac12}\lambda_q }{\lambda_{q+1}^{1-\alpha}|k|^{\bar{N}+3}}\,,\end{aligned}$$ whereas, using Lemma [Lemma 34](#l:boundsonbc1){reference-type="ref" reference="l:boundsonbc1"}, $$\begin{aligned} \left\|\mathcal R\left( (\nabla\Phi_i)^{-1} (\partial_t+\bar u_q\cdot \nabla) (\sigma_{q,i}^{\sfrac 12} a_k(\tilde R_{q,i}))e^{i\lambda_{q+1}k\cdot \Phi_i} \right)\right\|_{\alpha} \lesssim \frac{\delta_{q+1} ^{\sfrac 12}}{\tau_q\lambda_{q+1}^{1-\alpha}|k|^{\bar{N}+3}}\,.\end{aligned}$$ Moreover, using [\[e:wowc-old\]](#e:wowc-old){reference-type="eqref" reference="e:wowc-old"}, we have $$\begin{aligned} (\partial_t+\bar u_q\cdot \nabla) w_c =& \sum_{i,k} \left((\partial_t+\bar u_q\cdot \nabla) c_{i,k }\right)e^{i\lambda_{q+1}k\cdot \Phi_i}\end{aligned}$$ and obtain, again using Lemma [Lemma 34](#l:boundsonbc1){reference-type="ref" reference="l:boundsonbc1"} and arguing as above, $$\begin{aligned} \left\|\mathcal R \left(\left((\partial_t+\bar u_q\cdot \nabla) c_{i,k }\right)e^{i\lambda_{q+1}k\cdot \Phi_i}\right)\right\|_{\alpha}\lesssim & \frac{\delta_{q+1} ^{\sfrac 12}}{\tau_q\ell_q \lambda_{q+1}^{2-\alpha}|k|^{\bar{N}+3}} \lesssim \frac{\delta_{q+1} ^{\sfrac 12}}{\tau_q\lambda_{q+1}^{1-\alpha}|k|^{\bar{N}+3}}\,.\end{aligned}$$ We deduce $$\label{e:trans_est} \|\ensuremath{\mathcal{R}}\left( \partial_t w_{q+1} + \bar u_q \cdot \nabla w_{q+1} \right)\|_\alpha
\lesssim \frac{\delta_{q+1} ^{\sfrac 12}}{\tau_q\lambda_{q+1}^{1-\alpha}}\, .$$ Concerning the *oscillation error* we argue precisely as in [@BDSV] and obtain $$\label{e:osc_est} \|\ensuremath{\mathcal{R}}\mathop{\rm div}\nolimits\left(- {\bar R}_q + w_{q+1} \otimes w_{q+1}\right)\|_\alpha \lesssim \frac{\delta_{q+1}}{\ell_q\lambda_{q+1}^{1-\alpha}}\, .$$ From [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"} we deduce $\delta_{q+1}^{\sfrac12}\ell_q^{-1}<\delta_q^{\sfrac12}\lambda_q$. We also recall $\gamma_T>0$, hence $\tau_q^{-1}>\delta_q^{\sfrac12}\lambda_q$. Consequently, combining [\[e:Nash_est\]](#e:Nash_est){reference-type="eqref" reference="e:Nash_est"}, [\[e:trans_est\]](#e:trans_est){reference-type="eqref" reference="e:trans_est"} and [\[e:osc_est\]](#e:osc_est){reference-type="eqref" reference="e:osc_est"} we finally deduce [\[e:final_R\_est\]](#e:final_R_est){reference-type="eqref" reference="e:final_R_est"} as required. ◻ Finally, the new energy can be estimated, following [@BDSV Section 6.2], as **Proposition 36**. *The energy of $u_{q+1}$ satisfies the following estimate: $$\label{e:final_E_est} \left|e(t)-\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\left|u_{q+1}\right|^2\,dx-\bar{e}\delta_{q+2} \right|\lesssim \frac{\delta_q^{\sfrac12}\delta_{q+1}^{\sfrac12}\lambda_q}{\lambda_{q+1}}\,.$$* *Proof.* We argue as in [@BDSV].
More precisely, we write $$\begin{aligned} \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\left|u_{q+1}\right|^2\,dx&=\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\left|\bar{u}_{q}\right|^2\,dx + 2\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3} w_{q+1}\cdot \bar{u}_q\,dx+\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 
0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\left|w_{q+1}(x,t)\right|^2\,dx\\ &=\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\left|\bar{u}_{q}\right|^2\,dx +\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\left|w_o\right|^2\,dx+\mathcal{E}_1,\end{aligned}$$ where, arguing as in [@BDSV] using stationary phase and Lemma [Lemma 32](#l:boundsonw){reference-type="ref" reference="l:boundsonw"}, $$\begin{aligned} |\mathcal{E}_1|=\left|\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int 2w_{q+1}\cdot\bar{u}_q+2w_o\cdot w_c+|w_c|^2\,dx\right|\lesssim 
\frac{\delta_{q+1}^{\sfrac12}\delta_q^{\sfrac12}\lambda_q}{\lambda_{q+1}}\end{aligned}$$ Similarly, $$\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\left|w_o\right|^2\,dx=\sum_i\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\ensuremath{\mathrm{tr\,}}R_{q,i}\,dx+\mathcal{E}_2$$ where, using Lemma [Lemma 30](#l:sigma){reference-type="ref" reference="l:sigma"}, Lemma [Lemma 31](#l:boundsonbc){reference-type="ref" reference="l:boundsonbc"} and [\[e:comparison\]](#e:comparison){reference-type="eqref" reference="e:comparison"}, $$\begin{aligned} |\mathcal{E}_2|\lesssim \frac{\delta_{q+1}}{\ell_q\lambda_{q+1}}\lesssim \frac{\delta_{q+1}^{\sfrac12}\delta_q^{\sfrac12}\lambda_q}{\lambda_{q+1}}. 
\end{aligned}$$ On the other hand, recalling the definition of $R_{q,i}$ in [\[e:defRqi\]](#e:defRqi){reference-type="eqref" reference="e:defRqi"} and using property (v) of $\bar\eta_i$ in Lemma [Lemma 28](#l:Onsageretabar){reference-type="ref" reference="l:Onsageretabar"} as well as the definition of $\sigma_q(t)$ in [\[e:sigmaq\]](#e:sigmaq){reference-type="eqref" reference="e:sigmaq"}, we have $$\begin{aligned} \sum_i\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\ensuremath{\mathrm{tr\,}}R_{q,i}(x,t)\,dx&=3\sigma_q(t)\sum_i\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% \!\int_{\ensuremath{\mathbb{T}}^3}\eta_{i}^2\,dx=3c_0\sigma_q(t)\\ &=e(t)-\mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$ } \vcenter{\hbox{$\textstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$ } \vcenter{\hbox{$\scriptscriptstyle-$ }}\kern-.6\wd 0}}% 
\!\int_{\ensuremath{\mathbb{T}}^3}|\bar{u}_q|^2\,dx-\bar{e}\delta_{q+2}.\end{aligned}$$ The statement of the proposition follows. ◻ We conclude this section with: *Proof of Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}.* We need to verify that $u_{q+1}:=\bar{u}_q+w_{q+1}$, with $\bar{u}_q$ from Corollary [Corollary 26](#c:gluing){reference-type="ref" reference="c:gluing"}, $w_{q+1}$ defined in [\[e:neww\]](#e:neww){reference-type="eqref" reference="e:neww"}, as well as $\mathring{R}_{q+1}$ defined in [\[e:decompR\]](#e:decompR){reference-type="eqref" reference="e:decompR"} satisfy the inductive estimates [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"}-[\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} with $q$ replaced by $q+1$. First of all note that [\[e:u_q\_inductive_est\]](#e:u_q_inductive_est){reference-type="eqref" reference="e:u_q_inductive_est"} follows from Lemma [Lemma 32](#l:boundsonw){reference-type="ref" reference="l:boundsonw"}, our choice of $M$ in Definition [Definition 33](#d:defM){reference-type="ref" reference="d:defM"} and by choosing $a\gg 1$ sufficiently large. Secondly, [\[e:R_q\_inductive_est\]](#e:R_q_inductive_est){reference-type="eqref" reference="e:R_q_inductive_est"} follows from [\[e:final_R\_est\]](#e:final_R_est){reference-type="eqref" reference="e:final_R_est"} and the inequality $$C\frac{\delta_{q+1}^{\sfrac12}}{\tau_q\lambda_{q+1}^{1-\alpha}}<\delta_{q+2}\lambda_{q+1}^{-\gamma_R},$$ where $C$ is the implicit constant in [\[e:final_R\_est\]](#e:final_R_est){reference-type="eqref" reference="e:final_R_est"}. In light of the inequality [\[e:transportcondition\]](#e:transportcondition){reference-type="eqref" reference="e:transportcondition"}, this is satisfied provided $a\gg 1$ is sufficiently large.
Similarly, [\[e:energy_inductive_assumption\]](#e:energy_inductive_assumption){reference-type="eqref" reference="e:energy_inductive_assumption"} follows from [\[e:final_E\_est\]](#e:final_E_est){reference-type="eqref" reference="e:final_E_est"} and the inequality $$C\frac{\delta_{q+1}^{\sfrac12}\delta_q^{\sfrac12}\lambda_q}{\lambda_{q+1}}<\delta_{q+2}\lambda_{q+1}^{-\gamma_E},$$ where $C$ is the implicit constant in [\[e:final_E\_est\]](#e:final_E_est){reference-type="eqref" reference="e:final_E_est"}. In light of the inequality [\[e:energycondition\]](#e:energycondition){reference-type="eqref" reference="e:energycondition"} this is satisfied provided $a\gg 1$ is sufficiently large. Finally, the estimate [\[e:v_diff_prop_est\]](#e:v_diff_prop_est){reference-type="eqref" reference="e:v_diff_prop_est"} follows directly from Lemma [Lemma 32](#l:boundsonw){reference-type="ref" reference="l:boundsonw"}. This concludes the proof of Proposition [Proposition 2](#p:Onsager){reference-type="ref" reference="p:Onsager"}. ◻ Dallas Albritton, Rajendra Beekie, and Matthew Novack. Enhanced dissipation and Hörmander's hypoellipticity. , 283(3):Paper No. 109522, 38, 2022. Michael Aizenman. On vector fields as generators of flows: A counterexample to Nelson's conjecture. , 107(2):287--296, 1978. Scott Armstrong and Vlad Vicol. . , 2023. George K. Batchelor. Small-scale variation of convected quantities like temperature in turbulent fluid. Part 1. General discussion and the case of small conductivity. , 5(1):113--133, 1959. Jacob Bedrossian, Alex Blumenthal, and Sam Punshon-Smith. Lagrangian chaos and scalar advection in stochastic fluid mechanics. , 24(6):1893--1990, 2022. Jacob Bedrossian, Alex Blumenthal, and Samuel Punshon-Smith. Almost-sure exponential mixing of passive scalars by the stochastic Navier-Stokes equations. , 50(1):241--303, 2022. Elia Bruè, Maria Colombo, Gianluca Crippa, Camillo De Lellis, and Massimo Sorella.
Onsager critical solutions of the forced Navier-Stokes equations, 2022. Elia Bruè, Maria Colombo, and Camillo De Lellis. Positive solutions of transport equations and classical nonuniqueness of characteristic curves. , 240(2):1055--1090, 2021. Jacob Bedrossian and Michele Coti Zelati. Enhanced dissipation, hypoellipticity, and anomalous small noise inviscid limits in shear flows. , 224(3):1161--1204, 2017. Elia Bruè and Camillo De Lellis. Anomalous dissipation for the forced 3D Navier-Stokes equations. , 400(3):1507--1533, 2023. Tristan Buckmaster, Camillo De Lellis, László Székelyhidi, and Vlad Vicol. . , 72(2):229--274, 2019. Maria Colombo, Michele Coti Zelati, and Klaus Widmayer. Mixing and diffusion for rough shear flows. , pages Paper No. 2, 22, 2021. Peter Constantin, Alexander Kiselev, Lenya Ryzhik, and Andrej Zlatoš. Diffusion and mixing in fluid flow. , 168(2):643--674, 2008. Georgiana Chatzigeorgiou, Peter Morfe, Felix Otto, and Lihan Wang. The Gaussian free-field as a stream function: asymptotics of effective diffusivity in infra-red cut-off, 2023. Stanley Corrsin. On the spectrum of isotropic temperature fluctuations in an isotropic turbulence. , 22:469--473, 1951. Michele Coti Zelati, Matias G. Delgadino, and Tarek M. Elgindi. On the relation between enhanced dissipation timescales and mixing rates. , 73(6):1205--1244, 2020. Michele Coti Zelati, Tarek M. Elgindi, and Klaus Widmayer. Enhanced dissipation in the Navier-Stokes equations near the Poiseuille flow. , 378(2):987--1010, 2020. Theodore D. Drivas, Tarek M. Elgindi, Gautam Iyer, and In-Jee Jeong. Anomalous dissipation in passive scalar transport. , 243(3):1151--1180, 2022. Nicolas Depauw. Non unicité des solutions bornées pour un champ de vecteurs BV en dehors d'un hyperplan. , 337, 2003. Ron J.
DiPerna and Pierre-Louis Lions. Ordinary differential equations, transport theory and Sobolev spaces. , 98(3):511--547, 1989. Camillo De Lellis and László Székelyhidi, Jr. Dissipative continuous Euler flows. , 193(2):377--407, 2013. Sara Daneri, Eris Runa, and László Székelyhidi. Non-uniqueness for the Euler equations up to Onsager's critical exponent. , 7(1):Paper No. 8, 44, 2021. Sara Daneri and László Székelyhidi, Jr. Non-uniqueness and h-principle for Hölder-continuous weak solutions of the Euler equations. , 224(2):471--514, 2017. D. A. Donzis, K. R. Sreenivasan, and P. K. Yeung. Scalar dissipation rate and dissipative anomaly in isotropic turbulence. , 532:199--216, 2005. Tarek Elgindi and Kyle Liss. Norm growth, non-uniqueness, and anomalous dissipation in passive scalars. , 2023. Tarek M. Elgindi, Kyle Liss, and Jonathan C. Mattingly. Optimal enhanced dissipation and mixing for a time-periodic, Lipschitz velocity field on $\mathbb{T}^2$, 2023. Mikhael Gromov. . Springer Berlin Heidelberg, 1986. Zhongtian Hu, Alexander Kiselev, and Yao Yao. Suppression of chemotactic singularity by buoyancy, 2023. Martina Hofmanová, Umberto Pappalettera, Rongchan Zhu, and Xiangchan Zhu. Anomalous and total dissipation due to advection by solutions of randomly forced Navier-Stokes equations, 2023. In-Jee Jeong and Tsuyoshi Yoneda. Vortex stretching and anomalous dissipation for the incompressible 3D Navier-Stokes equations, 2022. Philip Isett. . , 188(3):871, 2018. In-Jee Jeong and Tsuyoshi Yoneda. Quasi-streamwise vortices and enhanced dissipation for incompressible 3D Navier-Stokes equations. , 150(3):1279--1286, 2022. A. Kolmogorov. Zufällige Bewegungen (zur Theorie der Brownschen Bewegung). , 35(1):116--117, 1934. Robert H. Kraichnan. Small-scale structure of a scalar field convected by turbulence. , 11:945--953, 1968. D. W. McLaughlin, G. C. Papanicolaou, and O. R. Pironneau. . , 45(5):780--797, 1985. Stefano Modena and László Székelyhidi, Jr.
Non-uniqueness for the transport equation with Sobolev vector fields. , 4(2):Paper No. 18, 38, 2018. Stefano Modena and László Székelyhidi, Jr. Non-renormalized solutions to the continuity equation. , 58(6):Paper No. 208, 30, 2019. Stefano Modena and Gabriel Sattig. Convex integration solutions to the transport equation with full dimensional concentration. , 37(5):1075--1108, 2020. Clément Mouhot and Cédric Villani. On Landau damping. , 207(1):29--201, 2011. John Nash. $C^1$ isometric imbeddings. , 60:383--396, 1954. John Nash. The imbedding problem for Riemannian manifolds. , 63(1):20--63, 1956. Weisheng Niu, Zhongwei Shen, and Yao Xu. Quantitative estimates in reiterated homogenization. , 279(11):108759, 39, 2020. Alexander M. Obukhov. Structure of the temperature field in turbulent flow. , XIII(1):58--69, 1949. L. Onsager. Statistical hydrodynamics. , 6(2):279--287, 1949. R. T. Pierrehumbert. Tracer microstructure in the large-eddy dominated regime. , 4(6):1091--1110, 1994. Special Issue: Chaos Applied to Fluid Mixing. Boris I. Shraiman and Eric D. Siggia. Scalar turbulence. , 405(6787):639--646, 2000. Katepalli R. Sreenivasan and Jörg Schumacher. Lagrangian views on turbulent mixing of passive scalars. , 368(1916):1561--1577, 2010. Jesenko Vukadinovic. The limit of vanishing diffusivity for passive scalars in Hamiltonian flows. , 242(3):1395--1444, 2021. Dongyi Wei. Diffusion and mixing in fluid flow via the resolvent estimate. , 64(3):507--518, 2021. Michele Coti Zelati, Gianluca Crippa, Gautam Iyer, and Anna L. Mazzucato. Mixing in incompressible flows: transport, dissipation, and their interplay, 2023. Michele Coti Zelati and Thierry Gallay. Enhanced dissipation and Taylor dispersion in higher-dimensional parallel shear flows, 2023. [^1]: In [@BDSV] an additional estimate is added for $\|u_q\|_{C^0}$, but it turns out this can be avoided.
The only place where a bound on $\|u_q\|_{C^0}$ was needed in [@BDSV] is in [@BDSV Proposition 5.9], but as we show below, the (much worse) bound induced by [\[e:u_q\_inductive_est\]](#e:u_q_inductive_est){reference-type="eqref" reference="e:u_q_inductive_est"} for $n=1$ suffices to control the relevant terms; see Proposition [Proposition 22](#p:est_mollification){reference-type="ref" reference="p:est_mollification"} and Lemma [Lemma 34](#l:boundsonbc1){reference-type="ref" reference="l:boundsonbc1"}. [^2]: Here we use the vector calculus identity $\mathop{\rm curl}\nolimits(F\times G)=F\mathop{\rm div}\nolimits G-G\mathop{\rm div}\nolimits F+(G\cdot\nabla)F-(F\cdot\nabla)G$. [^3]: Here, we use the formula $(Av_1) \times (Av_2) = A^{-T} (v_1 \times v_2) \det A$ for $v_1, v_2 \in \ensuremath{\mathbb{R}}^3$ and $A \in \ensuremath{\mathbb{R}}^{3 \times 3}$ invertible. [^4]: In Section 4 of [@ArmstrongVicol] a similar strategy was employed to deal with fast temporal oscillations, albeit leading to an estimate with a much more convoluted right-hand side and weaker improvement; compare Lemma 4.2 there. [^5]: We use above the version $\| \nabla^n f\|_{L^\infty} \lesssim (\tau \mu) \left(\frac{\mu}{\kappa}\right)^\frac{n}{2}$ of bound [\[Atime_Gspace\]](#Atime_Gspace){reference-type="eqref" reference="Atime_Gspace"}. [^6]: This iteration 'all the way back' allows us to obtain 'optimal improvement'. [^7]: Here we use the calculus identities $\mathop{\rm curl}\nolimits[\nabla\Phi^TU(\Phi)]=\nabla\Phi^{-1}(\mathop{\rm curl}\nolimits U)(\Phi)$ and $\mathop{\rm curl}\nolimits(\varphi F)=\varphi \mathop{\rm curl}\nolimits F+\nabla\varphi\times F$.
arXiv:2310.02934, "Anomalous dissipation and Euler flows", by Jan Burczak, László Székelyhidi Jr., and Bian Wu (math.AP; license: CC BY 4.0).
--- abstract: | For symbol $a\in S^{n(\rho-1)/2}_{\rho,1}$ the pseudo-differential operator $T_a$ may not be $L^2$ bounded. However, under some mild extra assumptions on $a$, we show that $T_a$ is bounded from $L^{\infty}$ to $BMO$ and on $L^p$ for $2\leq p<\infty$. A key ingredient in our proof of the $L^{\infty}$-$BMO$ boundedness is that we decompose a cube, use $x$-regularity of the symbol and combine certain $L^2$, $L^\infty$ and $L^{\infty}$-$BMO$ boundedness. We use an almost orthogonality argument to prove an $L^2$ boundedness and then interpolation to obtain the desired $L^p$ boundedness. address: - | Department of Mathematics\ University of Science and Technology of China\ Hefei 230026, China - | Department of Mathematics\ Zhejiang Normal University\ Jinhua 321004, China author: - Jingwei Guo - Xiangrong Zhu title: $L^p$ boundedness of pseudo-differential operators with symbols in $S^{n(\rho-1)/2}_{\rho,1}$ --- [^1] # Introduction and main results A pseudo-differential operator is an operator given by $$T_a f(x)=\int_{\mathbb{R}^n}\!e^{2\pi ix\cdot\xi}a(x,\xi)\widehat{f}(\xi)\,\textrm{d}\xi, \quad f\in \mathscr{S}(\mathbb{R}^n), \label{1}$$ where $\widehat{f}$ is the Fourier transform of $f$ and the symbol $a$ belongs to a certain symbol class. One of the most important symbol classes is the Hörmander class $S^m_{\rho,\delta}$ introduced in Hörmander [@H66]. A function $a(x,\xi)\in C^{\infty}(\mathbb{R}^n\times\mathbb{R}^n)$ belongs to the Hörmander class $S^{m}_{\rho,\delta}$ $(m\in \mathbb{R},0\leq\rho,\delta\leq1)$ if it satisfies $$\sup_{x,\xi\in\mathbb{R}^{n}}(1+|\xi|)^{-m+\rho N-\delta M}\left|\nabla^{N}_{\xi}\nabla^{M}_{x}a(x,\xi)\right|=A_{N,M}<\infty$$ for all nonnegative integers $N$ and $M$. We may assume additionally that $a\in S^{m}_{\rho,\delta}$ is compactly supported. 
Since all our estimates in this paper are independent of the size of the support of symbol $a$ unless clearly stated, one can remove this extra assumption by following the argument in Stein [@S93 Sec. VII.2.5]. There is an extensive literature discussing whether pseudo-differential operators are bounded on the Lebesgue space $L^p(\mathbb{R}^n)$ and the Hardy space $H^1(\mathbb{R}^n)$. We mention a few examples. If $a\in S^{m}_{\rho,\delta}$ with $\delta<1$ and $m\leq \min\{0, n(\rho-\delta)/2\}$, then $T_a$ is bounded on $L^2$ and the range of $m$ is sharp. See Hörmander [@H71], Calderón-Vaillancourt [@CV71; @CV72], Hounie [@H86], etc. For $a\in S^{m}_{\rho,1}$, Rodino [@R76] proved that $T_a$ is bounded on $L^2$ if $m<n(\rho-1)/2$ and constructed a symbol $a\in S^{n(\rho-1)/2}_{\rho,1}$ such that $T_a$ is unbounded on $L^2$. For endpoint estimates, in some unpublished lecture notes, Stein showed that if $a\in S^{n(\rho-1)/2}_{\rho,\delta}$ and either $0\leq\delta<\rho=1$ or $0<\delta=\rho<1$, then $T_a$ is of weak type $(1,1)$ and bounded from $H^1$ to $L^1$. This result was extended in Álvarez-Hounie [@AH90] to symbols $a\in S^{m}_{\rho,\delta}$ with $0<\rho\leq 1, 0\leq \delta<1$ and $m=n(\rho-1+\min\{0,\rho-\delta\})/2$. For a systematic study on the $H^1$-$L^1$ boundedness of $T_a$ with $a\in S^{m}_{\rho,1}$ when $m$ equals the critical index $n(\rho-1)$, see the authors [@GZ]. On the other hand, Kenig-Staubach [@KW07] proved that $T_a$ is bounded on $L^{\infty}$ if $a\in S^{m}_{\rho,1}$ with $m<n(\rho-1)/2$. It is also known that even if $a\in S^{n(\rho-1)/2}_{\rho,0}$ is independent of $x$, $T_a$ is still not bounded on $L^{\infty}$ in general (see [@KW07 Remark 2.6]). When $a\in S^{n(\rho-1)/2}_{\rho,1}$, as $T_a$ is not bounded on $L^2$ in general, it is reasonable to expect that $T_a$ is not bounded from $L^{\infty}$ to $BMO$ in general either.
However, under some mild extra assumptions we will show that it is bounded from $L^{\infty}$ to $BMO$ and on $L^p$ as well for $2\leq p<\infty$. **Theorem 1**. *Suppose that $w$ is a function from $(0,\infty)$ to $(0,\infty)$ and there exist constants $A>1$ and $u\in (0,n/(n+2))$ such that $$\int^{\infty}_1\left(\frac{w(t)}{t}\right)^{u}\frac{\textrm{d}t}{t}\leq A \label{6}$$ and $w(t_1)\leq Aw(t_2)$ whenever $0<t_1\leq t_2$.* *If the symbol $a\in S^{n(\rho-1)/2}_{\rho,1}$, $0\leq\rho<1$, satisfies that $$\sup_{x,\xi\in\mathbb{R}^{n}} \left(1+|\xi|\right)^{-n(\rho-1)/2+\rho N}\left(w\left(|\xi|\right)\right)^{-1}\left|\nabla^{N}_{\xi}\nabla_{x}a(x,\xi)\right|=A_{N}<\infty$$ for all nonnegative integers $N$, then $T_a$ is bounded from $L^{\infty}(\mathbb{R}^n)$ to $BMO(\mathbb{R}^n)$.* *Remark 1*. The condition $\rho<1$ is assumed here for technical reasons. When $\rho=1$, this theorem still holds (and we actually have a better version). See the last remark of this section. *Remark 2*. Our theorem works for a class of general functions $w$. It includes some known results in the literature corresponding to specific $w$, for example, in [@AH90] (with $w(t)=t^{\delta}$, $0\leq \delta\leq \rho$ and $\delta<1$), [@W; @WC] (with $w(t)=t^{\delta}$, $0\leq\delta<1$) and [@RZ23] (with $w(t)=t/\log^6 (1+t)$). It also works for some "weaker" functions, for example $w(t)=t/\log^{\frac{n+2}{n}+\epsilon}(1+t)$ with $\epsilon>0$. *Remark 3*. The idea to prove this theorem is as follows. To estimate the $BMO$ norm of $T_a f$, we estimate $\inf_c \frac{1}{|Q|}\int_Q\!|T_a f(x)-c|\textrm{d}x$ for any cube $Q$. We first make a standard decomposition of $T_a f(x)$ on the frequency side. Depending on the size of the frequency, the side length of $Q$ and the function $w$, we may further decompose the cube $Q$ into an almost disjoint union of smaller cubes of the same side length, $Q=\cup_{k=1}^{K} Q_{j,k}$.
Over each $Q_{j,k}$, we localize the $x$-variable of symbol $a(x,\xi)$ to the center of $Q_{j,k}$ and split the function $f$ into two parts, one restricted to an enlarged concentric cube $\widetilde{Q}_{j,k}$ and another restricted to the complement of $\widetilde{Q}_{j,k}$. With all these decompositions, the average over $Q$ is split into four parts. To estimate them, we use the $x$-regularity assumption on symbol $a$, certain known $L^2$ and $L^{\infty}$-$BMO$ boundedness and an $L^\infty$ boundedness we prove in Lemma [Lemma 6](#l2.3){reference-type="ref" reference="l2.3"}. Finally, we balance the bounds of the different parts to obtain the desired estimate. *Remark 4*. Only finitely many derivatives of $a$ are needed in this theorem. In this paper we do not pursue the best order of derivatives needed to guarantee the conclusion, which depends on $n$ and $u$. By an almost orthogonality argument we obtain the following. **Theorem 2**. *Suppose that $w$ is a function from $(0,\infty)$ to $(0,\infty)$ and there exists a constant $A>1$ such that $$\int^{\infty}_1\frac{w^2(t)}{t^3}\,\textrm{d}t\leq A \label{2}$$ and $w(t_1)\leq Aw(t_2)$ whenever $0< t_1\leq t_2$.* *If the symbol $a\in S^{n(\rho-1)/2}_{\rho,1}$, $0\leq\rho\leq 1$, satisfies that $$\sup_{x,\xi\in\mathbb{R}^{n}}(1+|\xi|)^{-n(\rho-1)/2+\rho N}\left(w\left(|\xi|\right)\right)^{-1}\left|\nabla^{N}_{\xi}\nabla_{x}a(x,\xi)\right|=A_{N}<\infty$$ for all nonnegative integers $N\leq n$, then $T_a$ is bounded on $L^{2}(\mathbb{R}^n)$.* Since [\[6\]](#6){reference-type="eqref" reference="6"} implies that $w(t)/t$ is uniformly bounded when $t\geq 1$ (see inequality [\[w1\]](#w1){reference-type="eqref" reference="w1"} below), the assumptions of Theorem [Theorem 1](#main3){reference-type="ref" reference="main3"} imply those of Theorem [Theorem 2](#main1){reference-type="ref" reference="main1"}.
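This reduction can be made explicit. The following short computation is our own sketch (the constant $C_{A,u}$ is introduced here only for illustration): it uses only the quasi-monotonicity $w(t_1)\leq Aw(t_2)$ and condition [\[6\]](#6){reference-type="eqref" reference="6"}. For $t\geq 1$ and $s\in[t,2t]$ we have $w(t)\leq Aw(s)$ and $s\leq 2t$, hence

```latex
% boundedness of w(t)/t for t >= 1:
\left(\frac{w(t)}{2t}\right)^{u}\log 2
  \leq \int_t^{2t}\left(\frac{A\,w(s)}{s}\right)^{u}\frac{\textrm{d}s}{s}
  \leq A^{u+1},
\qquad\text{so}\qquad
\frac{w(t)}{t}\leq 2\left(\frac{A^{u+1}}{\log 2}\right)^{1/u}=:C_{A,u}.
% since u < 2, condition (6) then yields condition (2):
\int_1^{\infty}\frac{w^{2}(t)}{t^{3}}\,\textrm{d}t
  =\int_1^{\infty}\left(\frac{w(t)}{t}\right)^{2-u}\left(\frac{w(t)}{t}\right)^{u}\frac{\textrm{d}t}{t}
  \leq C_{A,u}^{\,2-u}\,A.
```

In particular, condition [\[6\]](#6){reference-type="eqref" reference="6"} implies condition [\[2\]](#2){reference-type="eqref" reference="2"} with the constant $A$ enlarged to $\max\{A,\,C_{A,u}^{2-u}A\}$.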
Applying interpolation and Theorems [Theorem 1](#main3){reference-type="ref" reference="main3"} and [Theorem 2](#main1){reference-type="ref" reference="main1"}, we obtain the following. **Theorem 3**. *Under assumptions of Theorem [Theorem 1](#main3){reference-type="ref" reference="main3"}, $T_a$ is bounded on $L^p(\mathbb{R}^n)$ for $2\leq p<\infty$.* *Remark 5*. When $a\in S^0_{1,1}$, it is well-known that the integral kernel $$k(x,y)=\int_{\mathbb{R}^n}\!e^{2\pi i(x-y)\cdot \xi}a(x,\xi)\,\textrm{d}\xi$$ associated to $T_a$ satisfies the Hörmander condition. So the $L^2$ boundedness of $T_a$ yields the $L^{\infty}$-$BMO$, $H^1$-$L^1$ and $L^1$-$L^{1,\infty}$ boundedness directly by the standard Calderón-Zygmund theory. Therefore, when $\rho=1$ under assumptions of Theorem [Theorem 2](#main1){reference-type="ref" reference="main1"} (or stronger assumptions of Theorem [Theorem 1](#main3){reference-type="ref" reference="main3"}) we obtain conclusions of Theorems [Theorem 1](#main3){reference-type="ref" reference="main3"} and [Theorem 3](#main2){reference-type="ref" reference="main2"} immediately. Throughout this note, we use $C$ to denote a positive constant, which may vary from line to line and depend only on $n,\rho,u,A$ and finitely many seminorms of $a$. # Preliminaries {#s2} We first recall two fundamental results on pseudo-differential operators. **Lemma 4** (Hörmander [@H71], Calderón-Vaillancourt [@CV71], Hounie [@H86]). *If\ $0\leq\rho\leq 1$ and $a\in S^{0}_{\rho,0}$, then $$\|T_{a}f\|_{2}\leq C\|f\|_{2},$$ where the constant $C$ depends only on $n$, $\rho$ and finitely many seminorms of $a$ in $S^{0}_{\rho,0}$.* **Lemma 5** (Álvarez-Hounie [@AH90], Stein [@S93 p. 322, Sec. VII.5.12(h)]). 
*If $0\leq \rho\leq 1$ and $a\in S^{n(\rho-1)/2}_{\rho,0}$, then $$\|T_{a}f\|_{BMO}\leq C\|f\|_{\infty},$$ where the constant $C$ depends only on $n,\rho$ and finitely many seminorms of $a$ in $S^{n(\rho-1)/2}_{\rho,0}$.* We will use the well-known Littlewood-Paley dyadic decomposition. Let $B_r$ be the ball in $\mathbb{R}^n$ centered at the origin with radius $r$. Take a nonnegative function $\eta\in C^{\infty}_{c}(B_{2})$ with $\eta\equiv 1$ on $B_{1}$ and set $\varphi(\xi)=\eta(\xi)-\eta(2\xi)$. It is obvious that $\varphi$ is supported in $\{\xi\in \mathbb{R}^n: 1/2<|\xi|<2\}$ and $$\eta(\xi)+\sum^{\infty}_{j=1}\varphi(2^{-j}\xi)=1, \textrm{ for all $\xi\in \mathbb{R}^n$}.$$ We denote functions $\varphi_0(\xi)=\eta(\xi)$ and $\varphi_j(\xi)=\varphi(2^{-j}\xi)$ for $j\geq 1$, and operators $\triangle_j$ and $S_j$ by $$\label{pddbmo2.1} \widehat{\triangle_jf}(\xi)=\varphi_j(\xi)\widehat{f}(\xi) \textrm{ and } S_jf=\sum^j_{k=0}\triangle_kf \textrm{ for } j\geq 0.$$ We also use the following notations $$a_j(x,\xi)=a(x,\xi)\varphi_j(\xi) \label{5}$$ and $$T_{a,j}f(x)=T_{a}(\triangle_jf)(x)=\int \! e^{2\pi ix\cdot\xi}a(x,\xi)\varphi_j(\xi)\widehat{f}(\xi)\textrm{d}\xi \textrm{ for } j\geq 0.\label{3}$$ For the pseudo-differential operator, we have the following estimate. **Lemma 6**. *Let $n\in \mathbb{N}$, $0\leq\rho\leq 1$ and $j$ a nonnegative integer. If the symbol $b(x,\xi)$ is supported in $\{(x,\xi)\in\mathbb{R}^n\times \mathbb{R}^n:|\xi|<2^{j+1}\}$ and satisfies that $$\sup_{x,\xi\in\mathbb{R}^{n}}\left|\partial^{\alpha}_{\xi}b(x,\xi)\right|\leq A_j2^{-j\rho|\alpha|}$$ for any multi-index $\alpha$ with $|\alpha|\leq n$, then we have $$\|T_b f\|_{p}\leq CA_j2^{jn(1-\rho)/2}\|f\|_{p}, \quad 2\leq p\leq \infty,$$ where the constant $C$ depends only on $n$.* *Proof.* Set $\sigma_j(z)=2^{jn\rho}(1+2^{j\rho}|z|)^{-2n}$. 
By the Cauchy-Schwarz inequality, we have $$\begin{aligned} &|T_b f(x)|\\ \leq &\left(\int\!\left(1+2^{j\rho}|x-y|\right)^{-2n}|f(y)|^2\textrm{d}y\right)^{1/2}\\ &\left(\int\!\left|\left(1+2^{j\rho}|x-y|\right)^{n}\int\! e^{2\pi i(x-y)\cdot \xi} b(x,\xi)\textrm{d}\xi\right|^2\textrm{d}y\right)^{1/2}\\ \leq & C_n2^{-\frac{jn\rho}{2}}\!\!\left(\sigma_j\ast|f|^2\right)^{\frac{1}{2}}\!(x)\!\!\sum_{|\alpha|\leq n}\!\!\left(\!\int\!\left|\int\! e^{2\pi i(x-y)\cdot \xi}2^{j\rho|\alpha|}\partial_\xi^{\alpha}b(x,\xi)\textrm{d}\xi\right|^2\!\!\!\textrm{d}y\!\right)^{\!\!\!1/2}\!\!.\end{aligned}$$ By the Plancherel theorem and assumptions, the right side is bounded by $$\begin{aligned} &C_n 2^{-\frac{jn\rho}{2}}\left(\sigma_j\ast|f|^2\right)^{1/2}(x)\sum_{|\alpha|\leq n}\left(\int \left|2^{j\rho|\alpha|}\partial_\xi^{\alpha}b(x,\xi)\right|^2\textrm{d}\xi\right)^{1/2}\\ \leq &C_n 2^{\frac{jn(1-\rho)}{2}}A_j\left(\sigma_j\ast|f|^2\right)^{1/2}(x).\end{aligned}$$ By Young's inequality, for $2\leq p\leq \infty$, we get that $$\|T_b f\|_p\leq C_n 2^{\frac{jn(1-\rho)}{2}}A_j\left\|(\sigma_j\ast|f|^2)^{1/2}\right\|_p \leq C_nA_j 2^{\frac{jn(1-\rho)}{2}}\|f\|_p.$$ This finishes the proof. ◻ # Proof of Theorem [Theorem 1](#main3){reference-type="ref" reference="main3"} {#proof-of-theorem-main3} For any cube $Q$, it is enough for us to choose a constant $\lambda_Q$ such that $$\frac{1}{|Q|}\int_{Q}|T_{a}f(x)-\lambda_Q|\,\textrm{d}x\leq C\|f\|_{\infty}$$ for some constant $C$ independent of $Q$. Let $l(Q)$ be the side length of $Q$ and $x_Q$ the center of $Q$. 
We first decompose $T_{a}f(x)$ into two parts $$\begin{aligned} T_{a}f(x)&=\int e^{2\pi ix\cdot\xi}a(x,\xi)\left(\sum^{j_Q}_{j=0}\varphi_j(\xi)+\sum^{\infty}_{j=j_Q+1}\varphi_j(\xi)\right)\widehat{f}(\xi)\,\textrm{d}\xi \nonumber\\ &=T_{a}(S_{j_Q}f)(x)+\sum^{\infty}_{j=j_Q+1} T_{a,j}f(x),\label{lppdo5.1}\end{aligned}$$ where operators $S_{j_Q}$ and $T_{a,j}$ are defined by [\[pddbmo2.1\]](#pddbmo2.1){reference-type="eqref" reference="pddbmo2.1"} and [\[3\]](#3){reference-type="eqref" reference="3"}. We take $j_Q=\infty$ if $l(Q)\int^{\infty}_1\frac{w(t)}{t}dt\leq 1$ (that is, [\[lppdo5.1\]](#lppdo5.1){reference-type="eqref" reference="lppdo5.1"} only has the first part) and $j_Q=-1$ if $l(Q)\int^4_1\frac{w(t)}{t}dt>1$ (that is, [\[lppdo5.1\]](#lppdo5.1){reference-type="eqref" reference="lppdo5.1"} only has the second part). Otherwise, we take $j_Q$ to be the unique nonnegative integer satisfying $$\int^{2^{j_Q+2}}_1\frac{w(t)}{t}\,\textrm{d}t\leq \frac{1}{l(Q)}<\int^{2^{j_Q+3}}_1\frac{w(t)}{t}\,\textrm{d}t.$$ **Step 1.** In this step we treat the first part of [\[lppdo5.1\]](#lppdo5.1){reference-type="eqref" reference="lppdo5.1"}. Set $$\widetilde{T}_{a_Q}f(x)=\int_{\mathbb{R}^{n}}e^{2\pi ix\cdot\xi}a(x_Q,\xi)\widehat{f}(\xi)\,\textrm{d}\xi.$$ Because $a_Q:=a(x_Q,\xi)\in S^{n(\rho-1)/2}_{\rho,0}$ and its semi-norms are independent of $Q$, Lemma [Lemma 5](#l2.2){reference-type="ref" reference="l2.2"} gives that $$\left\|\widetilde{T}_{a_Q}f\right\|_{BMO}\leq C\|f\|_{\infty}.$$ It is easy to see that $\|S_{j_Q}f\|_{\infty}\leq C\|f\|_{\infty}$ which yields that $$\left\|\widetilde{T}_{a_Q}(S_{j_Q}f)\right\|_{BMO}\leq C\|S_{j_Q}f\|_{\infty}\leq C\|f\|_{\infty}.$$ Thus we can choose a constant $\lambda_Q$ such that $$\begin{aligned} \frac{1}{|Q|}\int_{Q}\left|\widetilde{T}_{a_Q}(S_{j_Q}f)(x)-\lambda_Q\right|\,\textrm{d}x\leq C\|f\|_{\infty} \label{lppdo5.2} \end{aligned}$$ with a constant $C$ independent of $Q$. 
Set $$b_{Q}(x,\xi)=\frac{a(x,\xi)-a(x_Q,\xi)}{l(Q)}\eta\left(\frac{x-x_Q}{nl(Q)}\right).$$ It is easy to verify that $b_{Q}(x,\xi)=0$ when $|x-x_Q|\geq 2nl(Q)$ and that $b_{Q}(x,\xi)=(a(x,\xi)-a(x_Q,\xi))/l(Q)$ when $x\in Q$. By using assumptions on $a$ and $w$, we have for any multi-index $\alpha$ that $$\left|\partial^{\alpha}_{\xi}\left(b_{Q}(x,\xi)\varphi_j(\xi)\right)\right|\leq C2^{j(\frac {n(\rho-1)}{2}-\rho |\alpha|)}w\left(2^{j+1}\right).$$ Hence, by using Lemma [Lemma 6](#l2.3){reference-type="ref" reference="l2.3"} (with $p=\infty$), we obtain that $$\left\|T_{b_{Q},j}f\right\|_{\infty}\leq Cw\left(2^{j+1}\right)\|f\|_{\infty}.$$ Thus if $x\in Q$ then $$\begin{aligned} &\left|T_{a}(S_{j_Q}f)(x)-\widetilde{T}_{a_Q}(S_{j_Q}f)(x)\right|\\ \leq & l(Q)\sum^{j_Q}_{j=0}\left|T_{b_{Q},j}f(x)\right|\leq C l(Q)\sum^{j_Q}_{j=0} w\left(2^{j+1}\right)\|f\|_{\infty}. \end{aligned}$$ Applying the triangle inequality, the bound [\[lppdo5.2\]](#lppdo5.2){reference-type="eqref" reference="lppdo5.2"} and the above bound yields that $$\begin{aligned} &\frac{1}{|Q|}\int_Q\left|T_{a}(S_{j_Q}f)(x)-\lambda_Q\right|\,\textrm{d}x\nonumber\\ \leq &\frac{1}{|Q|}\int_Q\left|T_{a}(S_{j_Q}f)(x)-\widetilde{T}_{a_Q}(S_{j_Q}f)(x)\right|\,\textrm{d}x+C\|f\|_{\infty}\nonumber\\ \leq &C\left(l(Q)\sum^{j_Q}_{j=0}w\left(2^{j+1}\right)+1\right)\|f\|_{\infty}\nonumber\\ \leq &C\left(l(Q)\int^{2^{j_Q+2}}_1 \frac{w(t)}{t}\,\textrm{d}t+1\right)\|f\|_{\infty}\nonumber\\ \leq &C\|f\|_{\infty},\label{lppdo5.4} \end{aligned}$$ where in the last inequality we have used the definition of $j_Q$. **Step 2.** From this step we start to treat the second part of [\[lppdo5.1\]](#lppdo5.1){reference-type="eqref" reference="lppdo5.1"}. 
For each $j>j_Q$, we decompose the cube $Q$ into an almost disjoint union of finitely many cubes, $Q=\cup_{k=1}^{K} Q_{j,k}$, such that $$l_j/C\leq l(Q_{j,k})\leq l_j \textrm{ and } \ K\leq Cl^{-n}_j|Q|,$$ where $l_j=2^{-ju}w(2^{j+2})^{u-1}$ and $l(Q_{j,k})$ represents the side length of $Q_{j,k}$. This is feasible because by using assumptions on $w$, we get $$\begin{aligned} &\int^{2^{j+2}}_1\frac{w(t)}{t}\,\textrm{d}t \leq A^{1-u}2^{(j+2)u}w\left(2^{j+2}\right)^{1-u}\int^{2^{j+2}}_1\!\left(\frac{w(t)}{t}\right)^u \frac{\textrm{d}t}{t}\\ \leq &C_{A,u}2^{ju}w\left(2^{j+2}\right)^{1-u}=C_{A,u} l^{-1}_j, \end{aligned}$$ which implies that $l_j\leq C l(Q)$ if $j>j_Q$. Let $x_{j,k}$ be the center of $Q_{j,k}$, $a_{Q_{j,k}}\!\!:=a(x_{j,k},\xi)$ and $\widetilde{T}_{a_{Q_{j,k}},j}$, $b_{Q_{j,k}}$ be defined as above. Then, by using $l(Q_{j,k})\leq l_j$ and Lemma [Lemma 6](#l2.3){reference-type="ref" reference="l2.3"} (with $p=\infty$), we readily obtain, as in Step 1, that $$\begin{aligned} &\frac{1}{|Q|}\sum^K_{k=1}\int_{Q_{j,k}}\left|T_{a,j}f(x)-\widetilde{T}_{a_{Q_{j,k}},j}f(x)\right|\,\textrm{d}x\nonumber\\ = & \frac{1}{|Q|}\sum^K_{k=1}l(Q_{j,k})\int_{Q_{j,k}}\left|T_{b_{Q_{j,k}},j}f(x)\right|\,\textrm{d}x\nonumber\\ \leq & \frac{l_j}{|Q|}\sum^K_{k=1}\left|Q_{j,k}\right|\left\|T_{b_{Q_{j,k}},j}f\right\|_{\infty}\nonumber\\ \leq & Cl_jw\left(2^{j+1}\right)\|f\|_{\infty}\leq C\left(2^{-j}w\left(2^{j+2}\right)\right)^u\|f\|_{\infty},\label{lppdo5.5}\end{aligned}$$ where $C$ is independent of $Q$. **Step 3.** Set $L_j=2+2^{j(1-\rho)}(w(2^{j+2})/2^j)^{2u/n}$. Let $$\widetilde{Q}_{j,k}=Q(x_{j,k},L_j l(Q_{j,k}))$$ be an enlarged $Q_{j,k}$ with center $x_{j,k}$ and side length $L_j l(Q_{j,k})$, and $$f_{j,k}=f\chi_{\widetilde{Q}_{j,k}}$$ be $f$ restricted to $\widetilde{Q}_{j,k}$. One can easily check that $2^{jn(1-\rho)/2}a(x_{j,k},\xi)\varphi_j(\xi)\in S^{0}_{\rho,0}$ and its seminorms are independent of $j$, $k$ and $Q$.
Hence, by Lemma [Lemma 4](#l2.1){reference-type="ref" reference="l2.1"}, we get that $$\begin{aligned} \left\|\widetilde{T}_{a_{Q_{j,k}},j}f_{j,k}\right\|_2&=\left\|2^{\frac{jn(\rho-1)}{2}}\int_{\mathbb{R}^{n}}e^{2\pi ix\cdot\xi}2^{\frac{jn(1-\rho)}{2}}a(x_{j,k},\xi)\varphi_j(\xi)\widehat{f_{j,k}}(\xi)\,\textrm{d}\xi\right\|_2\\ &\leq C2^{\frac{jn(\rho-1)}{2}}\left\|f_{j,k}\right\|_2\\ &\leq C2^{\frac{jn(\rho-1)}{2}}L_j^{\frac n2}|Q_{j,k}|^{\frac 12}\|f\|_{\infty}, \end{aligned}$$ where $C$ is independent of $j$, $k$ and $Q$. Using this bound and Hölder's inequality, we have $$\begin{aligned} &\frac{1}{|Q|}\sum^K_{k=1}\int_{Q_{j,k}}\!\left|\widetilde{T}_{a_{Q_{j,k}},j}f_{j,k}(x)\right|\textrm{d}x\nonumber\\ \leq &\frac{1}{|Q|}\sum^K_{k=1}\left|Q_{j,k}\right|^{\frac 12}\left\|\widetilde{T}_{a_{Q_{j,k}},j}f_{j,k}\right\|_2 \leq C 2^{\frac{jn(\rho-1)}{2}}L_j^{\frac n2}\|f\|_{\infty}\nonumber\\ \leq &C\left(\left(\frac{w(2^{j+2})}{2^j}\right)^{u}+2^{\frac{jn(\rho-1)}{2}}\right)\|f\|_{\infty}.\label{lppdo5.8} \end{aligned}$$ **Step 4.** As $u<n/(n+2)$, we can take $N>n$ sufficiently large such that $$\left(N-\frac n2\right)\left(1-\frac{n+2}{n}u\right)\geq u.$$ By using assumptions on $w$, we have that for $t\geq 1$ $$\begin{aligned} \label{w1} \left(\!\frac{w(t)}{t}\!\right)^{\!\!u}=\int^{2t}_t \!\left(\!\frac{w(t)}{t}\!\right)^{\!\!u}\frac{\textrm{d}s}{t}\leq 2^{1+u}A^{u}\!\int^{2t}_t \!\left(\!\frac{w(s)}{s}\!\right)^{\!\!u}\frac{\textrm{d}s}{s}\leq (2A)^{1+u}.\end{aligned}$$ So, for any nonnegative integer $j$, one has $$\begin{aligned} \label{w2} (1+2^{j\rho}L_jl_j)^{\frac n2-N}\leq \left(\frac{w(2^{j+2})}{2^j}\right)^{\!\left(\frac{n+2}{n}u-1\right)\left(\frac n2-N\right)}\!\!\!\leq C\left(\frac{w(2^{j+2})}{2^j}\right)^{\!u}. 
\end{aligned}$$ As $L_j>2$, when $y\notin \widetilde{Q}_{j,k}$ and $x\in Q_{j,k}$, one also has $$|y-x|\geq \frac{1}{2}(L_j-1)l(Q_{j,k})\geq \frac{1}{4}L_j l(Q_{j,k})\geq c L_j l_j.$$ For $x\in Q_{j,k}$, applying the Cauchy-Schwarz inequality yields that $$\begin{aligned} &\left|\widetilde{T}_{a_{Q_{j,k}},j}\left(f-f_{j,k}\right)(x)\right|\nonumber\\ =&\left|\int_{\widetilde{Q}^c_{j,k}}\left(\int e^{2\pi i(x-y)\cdot \xi} a\left(x_{j,k},\xi\right)\varphi_j(\xi)\textrm{d}\xi\right)f(y)\,\textrm{d}y\right|\nonumber\\ \leq & \left(\int_{|y-x|\geq cL_jl_j}\left(1+2^{j\rho}|x-y|\right)^{-2N}\textrm{d}y\right)^{1/2}\nonumber\\ &\left(\int \!\!\left(\left(1+2^{j\rho}|x-y|\right)^{N}\!\!\!\int e^{2\pi i(x-y)\cdot \xi} a(x_{j,k},\xi)\varphi_j(\xi)\textrm{d}\xi\right)^2\!\!\!\textrm{d}y\right)^{\!\!\!1/2}\!\!\!\!\!\|f\|_{\infty}, \end{aligned}$$ where by [\[w2\]](#w2){reference-type="eqref" reference="w2"} the first factor is bounded by $$C2^{-\frac{jn\rho}{2}}\left(1+2^{j\rho} L_jl_j\right)^{\frac n2-N}\leq C2^{-\frac{jn\rho}{2}}\left(\frac{w(2^{j+2})}{2^j}\right)^{u},$$ and, by the Plancherel theorem and $a\in S^{n(\rho-1)/2}_{\rho,1}$, the second factor is bounded by $$\begin{aligned} &C\sum_{|\alpha|\leq N}\left(\int \left|\int e^{2\pi i(x-y)\cdot \xi} 2^{j\rho|\alpha|}\partial^{\alpha}_{\xi}\left(a(x_{j,k},\xi)\varphi_j(\xi)\right)\textrm{d}\xi\right|^2\textrm{d}y\right)^{1/2}\\ \leq &C\sum_{|\alpha|\leq N}\left(\int \left|2^{j\rho|\alpha|}\partial^{\alpha}_{\xi}(a(x_{j,k},\xi)\varphi_j(\xi))\right|^2\textrm{d}\xi\right)^{1/2}\\ \leq &C\sum_{|\alpha|\leq N}\left(\int_{2^{j-1}<|\xi|<2^{j+1}}2^{jn(\rho-1)}\textrm{d}\xi\right)^{1/2}\leq C 2^{\frac{jn\rho}{2}}. 
\end{aligned}$$ Hence we obtain $$\left|\widetilde{T}_{a_{Q_{j,k}},j}\left(f-f_{j,k}\right)(x)\right| \leq C\left(\frac{w(2^{j+2})}{2^j}\right)^{u}\|f\|_{\infty}.\label{lppdo5.10}$$ **Step 5.** Finally, we infer from [\[lppdo5.1\]](#lppdo5.1){reference-type="eqref" reference="lppdo5.1"}, [\[lppdo5.4\]](#lppdo5.4){reference-type="eqref" reference="lppdo5.4"}, [\[lppdo5.5\]](#lppdo5.5){reference-type="eqref" reference="lppdo5.5"}, [\[lppdo5.8\]](#lppdo5.8){reference-type="eqref" reference="lppdo5.8"} and [\[lppdo5.10\]](#lppdo5.10){reference-type="eqref" reference="lppdo5.10"} that $$\begin{aligned} &\frac{1}{|Q|}\int_{Q}\left|T_{a}f(x)-\lambda_Q\right|\,\textrm{d}x\\ \leq &\frac{1}{|Q|}\int_{Q}\left|T_{a}(S_{j_Q}f)(x)-\lambda_Q\right|+\sum^{\infty}_{j=j_Q+1} \left|T_{a,j}f(x)\right|\,\textrm{d}x\\ \leq &C\|f\|_{\infty}+\frac{1}{|Q|}\sum^{\infty}_{j=j_Q+1}\sum^K_{k=1}\int_{Q_{j,k}}\bigg(\left|T_{a,j}f(x)-\widetilde{T}_{a_{Q_{j,k}},j}f(x)\right|\\ &+\left|\widetilde{T}_{a_{Q_{j,k}},j}f_{j,k}(x)\right|+\left|\widetilde{T}_{a_{Q_{j,k}},j}(f-f_{j,k})(x)\right|\bigg)\,\textrm{d}x\\ \leq &C\bigg(1+\sum^{\infty}_{j=j_Q+1}\left(\left(\frac{w(2^{j+2})}{2^j}\right)^{u}+2^{\frac{jn(\rho-1)}{2}}\right)\bigg)\|f\|_{\infty}\\ \leq & C\left(1+\int^{\infty}_{2^{j_Q}}\left(\frac{w(t)}{t}\right)^{u}\frac{\textrm{d}t}{t}\right)\|f\|_{\infty}\leq C \|f\|_{\infty}, \end{aligned}$$ where we have used assumptions $\rho<1$ and [\[6\]](#6){reference-type="eqref" reference="6"}. This completes the proof of Theorem [Theorem 1](#main3){reference-type="ref" reference="main3"}. 
◻ # Proof of Theorem [Theorem 2](#main1){reference-type="ref" reference="main1"} {#proof-of-theorem-main1} We choose a smooth real function $\psi$ such that $$\widehat{\psi}\in C_c^{\infty}(B_{1/100}) \textrm{ and } \int_{\mathbb{R}^n}\!\psi(x)\,\textrm{d}x=1.$$ We then decompose $a$ as follows $$\begin{aligned} a(x,\xi)=&\sum^{\infty}_{j=0}a(x,\xi)\varphi_j(\xi)\\ =&\int\sum^{\infty}_{j=0} a_j(x-u,\xi)2^{jn}\psi\left(2^ju\right)\textrm{d}u\\ &+\sum^{\infty}_{j=0}\int \left(a_j(x,\xi)-a_j(x-u,\xi)\right)2^{jn}\psi\left(2^ju\right)\textrm{d}u\\ =:&b(x,\xi)+\sum^{\infty}_{j=0}\widetilde{a}_j(x,\xi).\end{aligned}$$ Hence $T_a=T_b+\sum^{\infty}_{j=0}T_{\widetilde{a}_j}$. Note that the $\xi$-support of the symbol $\widetilde{a}_j$ is contained in $\{\xi\in \mathbb{R}^n:|\xi|<2^{j+1}\}$ and $$\left|\partial^{\alpha}_{\xi}\widetilde{a}_j(x,\xi)\right|\leq C w\left(2^{j+1}\right)2^{j(\frac{n(\rho-1)}{2}-1-\rho|\alpha|)}.$$ Thus by Lemma [Lemma 6](#l2.3){reference-type="ref" reference="l2.3"} (with $p=2$) we get $$\|T_{\widetilde{a}_j}f\|_2\leq C w\left(2^{j+1}\right) 2^{-j}\|f\|_2.$$ To keep track of the frequency restriction of $\widetilde{a}_j$, we introduce the multiplier operators $$\widehat{\triangle^{\prime}_0f}(\xi)=\textbf{1}_{\{\xi : |\xi|\leq 2\}}(\xi)\widehat{f}(\xi)$$ and $$\widehat{\triangle^{\prime}_j f}(\xi)=\textbf{1}_{\{\xi : 2^{j-1}\leq |\xi|\leq 2^{j+1}\}}(\xi)\widehat{f}(\xi), \quad j\in\mathbb{N}.$$ Therefore, $$\begin{aligned} \left\|\sum^{\infty}_{j=0}T_{\widetilde{a}_j}f\right\|_{2}\leq & \sum^{\infty}_{j=0}\left\|T_{\widetilde{a}_j}f\right\|_2=\sum^{\infty}_{j=0}\left\|T_{\widetilde{a}_j}\left( \triangle^{\prime}_j f\right)\right\|_2\nonumber\\ \leq &C\sum^{\infty}_{j=0}w\left(2^{j+1}\right)2^{-j}\left\|\triangle^{\prime}_j f\right\|_2\nonumber\\ \leq &C \left(\sum^{\infty}_{j=0}w^2(2^{j+1})2^{-2j} \right)^{1/2} \left(\sum^{\infty}_{j=0}\left\|\triangle^{\prime}_j f\right\|_2^2 \right)^{1/2}\nonumber\\ \leq &C\left(\int^{\infty}_1\frac{w^2(t)}{t^3}\,\textrm{d}t\right)^{1/2}
\|f\|_2 \leq C\|f\|_2.\end{aligned}$$ It remains to handle the operator $T_b$. We observe that $T_b f=S(\widehat{f})$ with $$Sf(x)=\int_{\mathbb{R}^n}\!e^{2\pi i x\cdot\xi}b(x,\xi)f(\xi)\,\textrm{d}\xi.$$ By the Plancherel theorem, it suffices to establish the $L^2$ boundedness of the operator $S$. Let $S^*$ be the adjoint operator of $S$. It then suffices to show the $L^2$ boundedness of $$\begin{aligned} SS^*f(x)&=\iint e^{2\pi i (x-y)\cdot\xi}b(x,\xi)\overline{b(y,\xi)}f(y)\,\textrm{d}y\textrm{d}\xi\\ &=\sum_{j=0}^{\infty} \sum_{\substack{|k-j|\leq 1\\ k\geq 0}} \iint e^{2\pi i (x-y)\cdot\xi}a_j\ast_1\!\Psi_j (x,\xi)\overline{a_k}\ast_1 \!\Psi_k(y,\xi)f(y)\,\textrm{d}y\textrm{d}\xi\\ &=: \sum^{\infty}_{j=0}R_jf(x), \end{aligned}$$ where we denote $\Psi_j(u)=2^{jn}\psi(2^j u)$, $j\in\mathbb{N}$, and a partial convolution $f\!*_1\!\phi (x,\xi)=\int_{\mathbb{R}^n} \! f(x-u,\xi)\phi(u)\,\textrm{d}u$ for functions $f$ on $\mathbb{R}^{2n}$ and $\phi$ on $\mathbb{R}^n$. We observe that $$\begin{aligned} \label{lppdo3.1} R_jR_k^*=R^*_jR_k=0 \textrm{ if }|j-k|\geq 5,\end{aligned}$$ where $R_j^*$ denotes the adjoint operator of $R_j$. Indeed, notice that $$\begin{aligned} &\widehat{R_jf}(\eta)=\\ &\sum_{\substack{|k-j|\leq 1\\ k\geq 0}}\!\int \!\! e^{2\pi i (z\cdot(\xi-\eta)-y\cdot\xi)}a_j(z,\xi)\widehat{\psi}\left(2^{-j}(\eta-\xi)\right)\overline{a_k}\ast_1 \!\Psi_k(y,\xi)f(y)\textrm{d}z\textrm{d}y\textrm{d}\xi. \end{aligned}$$ Hence $\widehat{R_jf}(\eta)\neq 0$ implies that there exists a point $\xi$ such that $$a_j(z,\xi)\widehat{\psi}\left(2^{-j}(\eta-\xi)\right)\neq 0.$$ It is easy to see that $\widehat{R_j f}$ is supported in $\{\eta:|\eta|\leq \frac{201}{100}\}$ if $j=0$, and in the shell $\frac{49}{100}2^{j}\leq |\eta|\leq \frac{201}{100}2^{j}$ if $j\geq 1$. 
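The numerology behind the separation condition $|j-k|\geq 5$ can be checked directly. As a numerical sanity check (an illustration, not part of the proof), the following Python snippet uses the slightly larger shell $[\frac{49}{100}2^{j-1},\,\frac{201}{100}2^{j+1}]$, which contains the frequency supports attached to both $R_j$ and $R_j^*$, and verifies that two such shells are disjoint once the indices differ by at least $5$, while a gap of $4$ is not enough:

```python
# Sanity check (illustration only): the outer dyadic shell
#   shell(j) = [0.49 * 2**(j-1), 2.01 * 2**(j+1)]
# contains the frequency supports relevant to R_j and R_j^*; pairwise
# disjointness of these shells for |j - k| >= 5 underlies R_j R_k^* = R_j^* R_k = 0.

def shell(j):
    return (0.49 * 2.0 ** (j - 1), 2.01 * 2.0 ** (j + 1))

def disjoint(a, b):
    return a[1] < b[0] or b[1] < a[0]

def separation_works(max_j=40, gap=5):
    return all(disjoint(shell(j), shell(k))
               for j in range(2, max_j)
               for k in range(2, max_j)
               if abs(j - k) >= gap)

ok = separation_works()
```

Indeed, for $k=j+5$ the lower endpoint $0.49\cdot 2^{j+4}=3.92\cdot 2^{j+1}$ already exceeds the upper endpoint $2.01\cdot 2^{j+1}$, whereas for $k=j+4$ it does not.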
A similar computation shows that $\widehat{R_j^*f}$ is supported in $\{\eta:|\eta|\leq \frac{201}{100}2^{j+1}\}$ if $j=0,1$, and in the shell $\frac{49}{100}2^{j-1}\leq |\eta|\leq \frac{201}{100}2^{j+1}$ if $j\geq 2$. By the Plancherel theorem we have $$\begin{aligned} \langle R_j^*R_kf, g\rangle=\langle R_k f, R_j g\rangle=\langle \widehat{R_k f}, \widehat{R_j g}\rangle.\end{aligned}$$ Thus we get [\[lppdo3.1\]](#lppdo3.1){reference-type="eqref" reference="lppdo3.1"}. We also observe that $$\begin{aligned} \label{lppdo3.2} \|R_jf\|_2\leq C\|f\|_2.\end{aligned}$$ Indeed, if we denote $$Q_j f(x)=\int \! e^{2\pi i x\cdot\xi}a_j\ast_1\!\Psi_j (x,\xi)f(\xi)\,\textrm{d}\xi,$$ it is then easy to verify that $$R_jf(x)=\sum_{\substack{|k-j|\leq 1\\ k\geq 0}} Q_j Q_k^*f(x).$$ Lemma [Lemma 6](#l2.3){reference-type="ref" reference="l2.3"} (with $p=2$) and the Plancherel theorem readily give $\|Q_j f\|_2\leq C\|f\|_2$. Then [\[lppdo3.2\]](#lppdo3.2){reference-type="eqref" reference="lppdo3.2"} follows from the simple fact $\|Q_j\|_{L^2-L^2}=\|Q_j^*\|_{L^2-L^2}$. Applying [\[lppdo3.1\]](#lppdo3.1){reference-type="eqref" reference="lppdo3.1"}, [\[lppdo3.2\]](#lppdo3.2){reference-type="eqref" reference="lppdo3.2"} and Cotlar's lemma in [@S93 p. 280] yields the $L^2$ boundedness of the operator $SS^*$, as desired. ◻ # References Álvarez, J. and Hounie, J., *Estimates for the kernel and continuity properties of pseudo-differential operators*. **Ark. Mat.** 28 (1990), no. 1, 1--22. Calderón, A. P. and Vaillancourt, R., *On the boundedness of pseudo-differential operators*. **J. Math. Soc. Japan** 23 (1971), 374--378. Calderón, A. P. and Vaillancourt, R., *A class of bounded pseudo-differential operators*. **Proc. Nat. Acad. Sci. U.S.A.** 69 (1972), 1185--1187. Guo, J. and Zhu, X., *Some notes on endpoint estimates for pseudo-differential operators*. **Mediterr. J. Math.** 19 (2022), no. 6, Paper No. 260, 14 pp. Hörmander, L., *Pseudo-differential operators and hypoelliptic equations*.
**Singular integrals (Proc. Sympos. Pure Math.**, Chicago, Ill., 1966), 138--183. Amer. Math. Soc., Providence, R.I., 1967. Hörmander, L., *On the $L^{2}$ continuity of pseudo-differential operators*. **Comm. Pure Appl. Math.** 24 (1971), 529--535. Hounie, J., *On the $L^2$ continuity of pseudo-differential operators*. **Comm. Partial Differential Equations** 11 (1986), no. 7, 765--778. Kenig, C. E. and Staubach, W., *$\Psi$-pseudodifferential operators and estimates for maximal oscillatory integrals*. **Studia Math.** 183 (2007), no. 3, 249--258. Rodino, L., *On the boundedness of pseudo differential operators in the class $L^{m}_{\rho,1}$*. **Proc. Amer. Math. Soc.** 58 (1976), 211--215. Ruan, J. and Zhu, X., *$L^{\infty}$-BMO boundedness of some pseudo-differential operators*. **J. Pseudo-Differ. Oper. Appl.** 14 (2023), no. 3, Paper No. 33, 11 pp. Stein, E. M., *Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals. With the assistance of Timothy S. Murphy*. **Princeton Mathematical Series**, 43. Monographs in Harmonic Analysis, III. Princeton University Press, Princeton, NJ, 1993. Wang, G., *Sharp function and weighted $L^p$ estimates for pseudo-differential operators with symbols in general Hörmander classes*. Preprint, arXiv:2206.09825. Wang, G. and Chen, W., *A pointwise estimate for pseudo-differential operators*. **Bull. Math. Sci.** 13 (2023), no. 2, 2250001 (13 pages). [^1]: Xiangrong Zhu (the corresponding author) was supported by the National Key Research and Development Program of China (No. 2022YFA1005700) and the NSFC Grant (No. 11871436). Jingwei Guo was supported by the NSF of Anhui Province, China (No. 2108085MA12).
arXiv:2309.10380 --- *$L^p$ boundedness of pseudo-differential operators with symbols in $S^{n(\rho-1)/2}_{\rho,1}$*, Jingwei Guo and Xiangrong Zhu (math.CA).
--- abstract: | In this survey article for the Encyclopedia of Mathematical Physics, 2nd Edition, I give an introduction to quantum character varieties and quantum character stacks, with an emphasis on the unification between four different approaches to their construction. address: School of Mathematics, University of Edinburgh, Edinburgh, UK author: - David Jordan bibliography: - bib.bib date: May 2023 title: Quantum character varieties --- # Introduction The purpose of this article is to introduce the notion of a character variety, to explain its central role in mathematical physics -- specifically gauge theory -- and to highlight four different approaches to constructing its deformation quantization, what are colloquially called "quantum character varieties\". Rather remarkably, each distinct mechanism for quantization is motivated in turn by a distinct observation about the topology of surfaces and hence the geometry of classical character varieties. The term "character variety\" commonly refers to a moduli space of $G$-local systems on some topological space $X$, equivalently of homomorphisms $\rho:\pi_1(X)\to G$. In this article we discuss several different models for this moduli space -- the ordinary, framed, and decorated character varieties -- as well as their common stacky refinement, the character stack. The distinction between these different models becomes important due to the presence of stabilizers and singularities in the naively defined moduli problem. The term "quantum character variety\" typically refers to any of the following non-commutative deformations of character varieties in the case $X=\Sigma$ is a real surface: 1. **Skein algebras** of surfaces yield deformation quantizations of the algebra of functions on ordinary character varieties of surfaces. 2.
**Moduli algebras** attached to ciliated ribbon graphs yield deformation quantizations of the algebra of functions on framed character varieties of surfaces with at least one boundary component. 3. **Quantum cluster algebras** associated to marked and punctured surfaces quantize a cluster algebra structure on the decorated character variety. 4. **Factorization homology** of surfaces, with coefficients in the ribbon braided tensor category of representations of the quantum group, yields linear abelian categories quantizing the category of quasi-coherent sheaves on character stacks. Each of the above constructions depends on a non-zero complex parameter $q$, and reproduces the corresponding "classical\" or "undeformed\" character variety upon setting $q=1$. By definition the classical character variety depends on the underlying space $X$ only through its fundamental group -- in particular, only up to homotopy. The quantum character varieties are more subtle invariants, which depend on the homeomorphism type of the manifold in a way detected only when $q\neq 1$. In this survey we will recount the classical and quantum versions of each of these four constructions, outline their relations to one another, and explain how their study relates to super-symmetric quantum field theory and the mathematical framework of topological field theory. The topological input for each construction above is a surface; however, in each case there are natural extensions to 3-manifolds -- some more developed than others -- which we will also discuss. ## Flat connections One of the most fundamental notions in gauge theory is that of a principal $G$-bundle with connection. A principal $G$-bundle $E$ over a space $X$ consists of a map $\pi:E\to X$, together with a free $G$-action on $E$ which preserves fibers and makes each fiber of $\pi$ into a $G$-torsor, so that $E/G=X$.
A connection on $E$ is a 1-form $A\in \Omega^1(X,\operatorname{ad}(E))$ valued in the adjoint bundle, $$\operatorname{ad}(E) = \mathfrak{g}\times_G E = (\mathfrak{g}\times E)/G.$$ Among all connections on $E$ are distinguished the *flat connections* $A$ which satisfy the flatness equation $dA + [A,A]=0$. Equivalently, $A$ is flat if the parallel transport defined by $A$ along a curve $\gamma$ depends on $\gamma$ only up to homotopy of paths. ## Local systems From a principal $G$-bundle with flat connection we may extract the more combinatorial data of a principal $G$-bundle $E$, together with parallel transport isomorphisms $\nabla_\gamma:E_x\to E_y$ along each homotopy class of paths connecting $x$ to $y$. Such a pair $(E,\nabla)$ is called a $G$-local system. The choice of a basepoint $x$ in $X$ and a trivialisation of $E$ at $x$ reduces the data of a $G$-local system to that of a group homomorphism $\pi_1(X)\to G$. Changes of basepoint and changes of framing are both implemented by conjugation in $G$; hence two such homomorphisms represent the same $G$-local system if, and only if, they are related by post-composition with conjugation in $G$. ## Appearances in physics Classical character varieties appear very naturally in classical gauge theories such as Yang-Mills and Chern-Simons theory, in which a gauge field is by definition an adjoint-valued 1-form, and the classical equations of motion involve the curvature of the connection -- in particular, in Chern--Simons theory in 3 dimensions, the critical locus is precisely the space of flat connections. Quantum character varieties/stacks play a similar role in super-symmetric quantum field theories. Some notable examples are: 1. Quantum character varieties -- specifically in their skein-theoretic incarnation -- describe topological operators, known as Wilson lines, in the quantization of Chern-Simons theory [@Witten-Jones]. The parameter $\hbar=\log q$ appears as the quantization parameter. 2.
It is expected that state spaces of 3-manifolds, and categories of boundary conditions on surfaces, both attached to the Kapustin--Witten twist [@kapustin2006electric] of 4D $\mathcal{N}=4$ super Yang-Mills, may be described via skein modules and skein categories, respectively. Here, the parameter $\Psi=\log q$ identifies with the twisting parameter in Kapustin--Witten's construction. 3. Coulomb branches of 4d $\mathcal{N}=2$ theories of class S, compactified on a circle, are naturally described by character varieties. In this case, the deformation parameter $q$ appears as the exponentiated ratio of $\Omega$-deformation parameters for a pair of commuting circle actions coming from $SO(2)\times SO(2)\subset SO(4)$. [@GMN; @GMN2; @hollands2016spectral; @tachikawa2015skein] In more mathematical terms, Gaiotto has proposed quantizations of Coulomb branches (hence of character varieties) via $\mathbb{C}^\times$-equivariant cohomology. This has been discussed in [@schrader2019k]. ## Acknowledgements In preparing this survey article I have benefited from the inputs of a number of people. I particularly thank David Ben-Zvi, Adrien Brochier, François Costantino, Charlie Frohman, Andy Neitzke, Thang Le, Gus Schrader, and Alexander Shapiro for fact-checking and helping to find complete references. I also thank the Editor Catherine Meusberger, whose detailed comments and suggestions considerably improved the exposition. # Classical character varieties {#sec:classical} Each of the four quantization prescriptions highlighted in the introduction emerges from first understanding the relevant classical moduli space: its geometry and its Poisson structure. When viewed from the correct perspective, each classical moduli space admits a very natural deformation quantization. For this reason, although the article is focused on quantum character varieties, a significant portion is dedicated to reviewing the classical structures.
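Before recalling the general constructions, a toy example may help fix ideas (this is an illustrative sketch, not drawn from the text): for $X=T^2$ we have $\pi_1(X)=\mathbb{Z}^2$, so a framed $G$-local system is simply a pair of commuting elements of $G$, and conjugation-invariant functions such as traces of words in the holonomies descend to the quotient. The following pure-Python snippet checks this numerically for $G=SL_2$:

```python
# Toy model: a framed SL_2-local system on the torus T^2 is a commuting
# pair (a, b); trace functions are invariant under simultaneous conjugation.

def mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(m):
    return m[0][0] + m[1][1]

def inv2(m):                      # inverse of a 2x2 matrix
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

# A commuting pair of determinant-one matrices (diagonal, so they commute):
a = [[2.0, 0.0], [0.0, 0.5]]
b = [[3.0, 0.0], [0.0, 1.0 / 3.0]]

# A change of framing conjugates both holonomies by some g in SL_2...
g = [[1.0, 2.0], [1.0, 3.0]]      # det = 1
ginv = inv2(g)
conj = lambda m: mul(mul(g, m), ginv)
a_c, b_c = conj(a), conj(b)

# ...so tr(a), tr(b), tr(ab) are unchanged: they define functions on the
# (unframed) character variety of the torus.
traces = (tr(a), tr(b), tr(mul(a, b)))
traces_c = (tr(a_c), tr(b_c), tr(mul(a_c, b_c)))
```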
Let us begin by recalling in more detail the precise construction of the framed character variety, the ordinary character variety (the word "ordinary\" is non-standard, and appears here and throughout merely for emphasis), and the decorated character variety. We turn then to the ordinary and decorated character stacks, and then finally to enumerating the relations between them. Along the way we will introduce the Poisson brackets which will be the quasi-classical limits of the quantum constructions. ## Framed character variety Given a group $G$ and a compact manifold $X$ with basepoint $x$, the framed character variety $\mathrm{Ch}_G^{fr}(X)$ is an affine algebraic variety which parameterises pairs $(E,\eta)$, where $E$ is a $G$-local system, and $\eta:E_x\to G$ is a trivialisation of the fiber $E_x\cong G$ (equivalently, this is the data of a single point $e\in E_x$ which becomes the identity element in $G$ under the framing). In more concrete terms, we may identify the framed character variety with the set of representations $\pi_1(X)\to G$, where we do not however quotient by the $G$-action. This alternative description makes it clear that $\mathrm{Ch}_G^{fr}(X)$ is indeed an algebraic variety: choosing any presentation of $\pi_1(X)$ with $m$ generators and $n$ relations identifies $\mathrm{Ch}_G^{fr}(X)$ with a closed subvariety of the affine variety $G^m$ defined by the $n$ relations, regarded as $G$-valued equations. As there is no a priori reason for this closed subvariety to be equidimensional, the framed character variety will typically not be smooth. An important special case is the framed character variety of a surface $\Sigma_{g,r}$ of genus $g$ with $r\geq 1$ punctures. 
Since $\pi_1(\Sigma_{g,r})$ is the free group on $2g+r-1$ generators, we have that $$\mathrm{Ch}_G^{fr}(\Sigma_{g,r}) = G^{2g+r-1}.$$ ## Ordinary character variety The (ordinary) character variety $\mathrm{Ch}_G(X)$ is defined as the GIT quotient[^1] of the framed character variety by the $G$-action by conjugation. By definition, this means that the character variety is an affine variety defined as $$\mathrm{Ch}_G(X) = \operatorname{Spec}(\mathcal{O}(\mathrm{Ch}_G^{fr}(X))^G),$$ the spectrum of the sub-algebra of $G$-invariant functions on the framed character variety. More geometrically, the character variety so defined parameterises *closed $G$-orbits* in the framed character variety. The map sending a point of the framed character variety to the closure of its $G$-orbit defines a surjection $\mathrm{Ch}_G^{fr}(X)\to \mathrm{Ch}_G(X)$. ## Decorated character variety For simplicity we will restrict now to the case where $X=\Sigma$ is a compact surface possibly with boundary. An important enhancement of the notion of a $G$-local system is that of a decorated local system. In a series of three highly influential papers [@FG06; @FG09a; @FG09b], Fock and Goncharov established that the moduli space of decorated local systems, known as the decorated character variety, has an open locus admitting the geometric structure of a cluster variety, and they exploited this structure to define its quantization (discussed in Section [4](#sec:quantum){reference-type="ref" reference="sec:quantum"}). For the construction of decorated character varieties we fix in addition to the group $G$ a Borel subgroup $B$ and let $T=B/[B,B]$ denote the quotient of $B$ by its unipotent radical. 
A $G$-$B$-$T$-coloring of a surface $\Sigma$ consists of a partition of $\Sigma$ into three sets $\Sigma= \Sigma_G\sqcup \Sigma_B\sqcup \Sigma_T$, where $\Sigma_G$ and $\Sigma_T$ are open and $\Sigma_B=\partial \Sigma_G \cap \partial \Sigma_T$ is closed (see Figure [1](#fig:clusters){reference-type="ref" reference="fig:clusters"} for some examples). We will say that a "marked point\" on a decorated surface $\Sigma$ is a $T$-region which contracts onto an interval in the boundary of $\Sigma$, while a "puncture\" is a $T$-region contracting onto an entire boundary component of $\Sigma$. We will call a connected decorated surface all of whose $T$-regions are of those two types a "marked and punctured surface\". We note that a marked and punctured surface necessarily has a unique $G$-region. **Remark 1**. *In Fock and Goncharov's original work, and most works which follow them, the marked points and punctures are indeed regarded as a finite set of points of $\Sigma$ lying in the boundary and interior of $\Sigma$, respectively, rather than as two-dimensional regions contracting to the boundary, as we have described above. However, when one unpacks the data they attach to punctures and marked points, one sees that it expresses quite naturally in the framework of decorated surfaces, and in particular the resulting notion of decorated local system, and hence the decorated character variety defined in either convention is identical.* *For the topological field theory perspective it is important to "zoom in\" on these points and see them as one-dimensional "defects\" between adjoining 2-dimensional regions (the $G$- and $T$-regions discussed above).
For example, the "amalgamation\" prescription of Fock and Goncharov -- by which one glues together charts on each triangle of a triangulation to obtain a chart on the decorated character variety -- is just an instance of the excision axiom in factorization homology.* The decorated character variety is a moduli space parameterising triples $(E_G,E_B,E_T)$, where $E_G$ and $E_T$ are $G$- and $T$-local systems over $\Sigma_G$ and $\Sigma_T$, respectively, and where $E_B$ is a reduction of structure over $\Sigma_B$ of the product $E_G\times E_T$ restricted there. By a reduction of structure, we mean a $B$-sub-local system of the $B$-space $E_G\times E_T$. At each point of the curve $\Sigma_B$ this is simply the specification of a $B$-orbit $B\cdot (g,t)\subset G\times T$, for some $(g,t)\in G\times T$, equivalently a point of $(G\times T)/B \cong G/N$; however the local systems $E_G$ and $E_T$ can twist as we traverse loops in the surface, so that monodromy around punctures can introduce multiplication by some fixed element $t\in T$. In other words, the monodromy around punctures need only preserve the underlying flag $\overline{\mathcal{F}}\in G/B$ obtained as the image of $\mathcal{F}\in G/N$ under the projection $G/N\to G/B$. ![Three marked surfaces: the "triangle\", a disk with three contractible T-regions (indicated by shading), the "punctured disk with two marked points\", an annulus with one annular $T$-region and two contractible $T$-regions, and "the punctured torus\", the torus (opposite edges are identified) with a disk at the corner removed and an annular $T$-region around the resulting boundary circle. The thin lines on the latter two depict a triangulation. ](clusters.pdf){#fig:clusters height="1in"} As with the ordinary character variety, one may introduce a framed variant of the character variety, by requiring the additional data of a trivialisation of $E$ at a sufficiently rich system of basepoints.
It suffices to assume there is one basepoint in each connected region of $\Sigma_G$ and of $\Sigma_T$, and to require a trivialisation of the fiber there: in this case one can show that the resulting groupoid is rigid, so that the framed decorated character variety parameterising this data really is a variety. We may then construct the GIT quotient of $\mathrm{Ch}_G^{fr,dec}(\Sigma)$ by $G^r\times T^s$, where there are $r$ basepoints in $\Sigma_G$ and $s$ basepoints in $\Sigma_T$. It is an exercise analogous to that for the ordinary character variety to see that this construction is in fact independent of the chosen basepoints up to unique isomorphism. In passing from the framed decorated character variety to the decorated character variety, we have quotiented by a typically non-free action of $G^r$, which has the consequence that the GIT quotient is typically singular. Inspired by the Penner coordinate system [@penner1987decorated] on decorated Teichmüller space, Fock and Goncharov found a remarkable open subvariety of the framed and decorated character variety, on which the $G^r$-action is actually free, hence the GIT quotient there is smooth. Moreover, they produced a set of "cluster charts\" in the framework of cluster algebras as had been introduced in [@ClustersI; @ClustersII; @ClustersIII; @ClustersIV; @gekhtman2002cluster]. We recall the rudiments of their construction here. Fock and Goncharov defined a family of subsets $U_\alpha$ on the framed decorated character variety of a marked and punctured surface with three remarkable properties: 1. The $G$-action on $\mathrm{Ch}_G^{dec,fr}(\Sigma)$ restricts to a *free* $G$-action on $U_\alpha$, and 2. The quotient $U_\alpha/G$ is a torus $(\mathbb{C}^\times)^k$, for some explicitly given $k$. 3. The transition maps between charts $U_\alpha/G$ and $U_\beta/G$ take the form of a "cluster mutation\", an explicitly given birational transformation between tori.
The decorated character variety is defined as the union over the charts $\alpha$ of the $U_\alpha/G$; it is a subvariety of the decorated character stack, which, although a union of affine charts, is typically not affine. We emphasise moreover that the remaining $T^s$-action on each $U_\alpha/G$ is still not free; different ways of treating this non-free action, as well as of specifying the monodromy of the $G/N$-fibers around punctures, lead to various related formulations of decorated character varieties. We will not recall their complete definitions in this survey article. The collection of opens $U_\alpha/G$ forms what is called a cluster structure: briefly, this means that one has a combinatorial prescription to reconstruct the union of the charts $U_\alpha/G$ by starting from a single initial chart $U_0/G$ -- together with its coordinates and its Poisson structure, $U_0/G$ is called a seed -- and successively adding in new charts, glued via cluster mutations. To construct the seed, one may choose a triangulation of the surface $\Sigma$, which must be compatible with the decoration, in that each vertex of the triangulation should be some framed basepoint lying in $\Sigma_T\cap \partial \Sigma$. Each triangle contributes a torus, and one combines the different tori along the edges of the triangulation via a process called "amalgamation\". The coordinates and the Poisson structure are also specified in this construction. The end result is again an algebraic torus, whose coordinates are indexed by the vertices of a quiver $\Gamma$, and whose Poisson bracket is by construction log-canonical; the rest of the cluster charts $U_\alpha/G$ are indexed by mutated quivers, and the precise form of the cluster mutation $U_\alpha/G\to U_\beta/G$ is encoded in the quiver.
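The log-canonical brackets just mentioned, $\{x_i,x_j\}=a_{ij}x_ix_j$ for a skew-symmetric matrix $(a_{ij})$, automatically satisfy the Jacobi identity, since in the coordinates $\log x_i$ the bracket has constant coefficients. A minimal sympy check, with an arbitrary skew matrix chosen purely for illustration:

```python
from sympy import symbols, simplify

x = list(symbols('x1 x2 x3', positive=True))
a = [[0, 1, -2], [-1, 0, 3], [2, -3, 0]]   # arbitrary skew-symmetric matrix

def bracket(f, g):
    # log-canonical Poisson bracket determined by {x_i, x_j} = a_ij x_i x_j
    return sum(a[i][j] * x[i] * x[j] * f.diff(x[i]) * g.diff(x[j])
               for i in range(3) for j in range(3))

# Jacobi identity on a sample triple of Laurent-polynomial functions
f, g, h = x[0]**2 * x[1], x[1] + x[2]**3, x[0] * x[2]
jacobi = (bracket(f, bracket(g, h)) + bracket(g, bracket(h, f))
          + bracket(h, bracket(f, g)))
assert simplify(jacobi) == 0
```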
We will see in Section [4](#sec:quantum){reference-type="ref" reference="sec:quantum"} that the explicit combinatorial description of cluster charts underpins an equally explicit Poisson structure and its canonical quantization to a quantum cluster algebra. ## The character stack In each of the above approaches, the presence of stabilisers inhibits a naive definition and introduces complications: for character varieties, we retreat to a moduli space of closed $G$-orbits, and for decorated character varieties, we retreat to an open locus where the $G$-action becomes free. On the other hand, in both the ordinary and decorated case, our presentation has passed implicitly through a more universal notion of a character stack. Without recounting completely the framework of stacks, we will recall only that certain moduli problems -- including both that of classifying ordinary and decorated local systems up to isomorphism -- admit the structure of an Artin stack: this simply means that they may be presented as the group quotient of an algebraic variety by a reductive algebraic group. In fact, such a presentation is typically only required locally, but in our case we have a global such description. One studies stacks algebraically via their locally presentable abelian categories of quasi-coherent sheaves; in particular, we may consider the locally presentable abelian categories, $$\mathcal{QC}(\underline{\mathrm{Ch}}_G(X)), \quad \mathcal{QC}(\underline{\mathrm{Ch}}_G^{dec}(X)).$$ As for any stack, $\mathcal{QC}(\underline{\mathrm{Ch}}_G(X))$ carries a distinguished object called the *structure sheaf* $\mathcal{O}$. It follows from basic definitions that the algebra of functions on the ordinary character variety is isomorphic to $\operatorname{End}(\mathcal{O})$. On the other hand, we have a pullback square, $$\begin{tikzcd} \mathrm{Ch}_G^{fr}(X) \arrow{r}{} \arrow[swap]{d}{} & \underline{\mathrm{Ch}}_G(X) \arrow{d}{g} \\ pt \arrow{r} & pt/G.
\end{tikzcd}$$ Hence we may describe structures on $\underline{\mathrm{Ch}}_G(X)$ as $G$-equivariant structures on $\mathrm{Ch}_G^{fr}(X)$, and conversely we recover $\mathrm{Ch}_G^{fr}(X)$ by forgetting this equivariance. The relation between decorated character stacks and the decorated character varieties of Fock and Goncharov is somewhat more complicated. Each cluster chart $U_\alpha$ of the cluster variety $\mathrm{Ch}_G^{dec}(\Sigma)$ defines an object $\mathcal{O}_\alpha \in \mathcal{QC}(\mathrm{Ch}_G^{dec}(\Sigma))$: this is just the sheaf of functions which are regular on $U_\alpha$. We have that $\operatorname{End}(\mathcal{O}_\alpha)$ is a ring of Laurent polynomials (i.e. functions on the corresponding torus $U_\alpha$), that the full subcategory generated by $\mathcal{O}_\alpha$ is indeed affine, and finally that the transition maps between the different $U_\alpha$'s define exact functors between these subcategories, which are written explicitly as cluster transformations. A crucial feature of classical character stacks is that they fit into the framework of fully extended TQFT. For this we briefly recall that there is a "topological operad\" $E_n$ which encodes the embeddings of finite disjoint unions of disks $\mathbb{R}^n$ inside one another. An $E_n$-algebra is an algebraic structure governed by $E_n$. $E_n$-algebras may be regarded as the "locally constant\" or "topological\" specialisation of the notion of a factorisation algebra, which in physical terms captures the structure of local observables in a quantum field theory; the locally constant condition is a consequence of the QFT being topological. Most relevant to our discussion are the facts that $E_1$-, $E_2$- and $E_3$-algebras in the bi-category of linear categories are monoidal, braided monoidal, and symmetric monoidal categories, respectively. We refer to [@ayala2015factorization] for more technical details, and to [@brochiernotes] for a gentle exposition.
One may regard the symmetric monoidal category $\operatorname{Rep}(G)$ of representations of the reductive algebraic group $G$ as an $E_n$-algebra, for any $n$, in the symmetric monoidal 2-category of categories. The collection of such $E_n$-algebras forms an $(n+1)$-category, and factorisation homology defines a fully extended $n$-dimensional topological field theory valued in the symmetric monoidal 2-category of categories. We have an equivalence of categories [@BZFN]: $$\label{eqn:FHom} \mathcal{QC}(\underline{\mathrm{Ch}}_G(X)) \simeq \int_X \operatorname{Rep}(G),$$ where the integral notation on the righthand side denotes the factorization homology functor defined in [@ayala2015factorization]. This equivalence and its consequences are discussed in greater detail in Section [4](#sec:quantum){reference-type="ref" reference="sec:quantum"} below. A similar TFT description can be given for decorated character stacks using stratified factorization homology [@AFT]. For now, we note that it is not possible for either the ordinary or decorated character varieties to fit into the fully extended framework: if a manifold is given to us by gluing simpler manifolds, we can indeed build any $G$- or $G,B,T$-local system on it by gluing local systems on each piece; however, there will be automorphisms of the glued local system which are not the disjoint product of automorphisms on each piece. This simple observation prevents the ordinary and decorated character varieties from satisfying the gluing compatibilities satisfied by their stacky enhancements. # Poisson brackets {#sec:Poisson} Each of the framed, ordinary, and decorated character varieties, as well as the ordinary and decorated character stacks, carries a canonically defined Poisson bracket; these form the quasi-classical -- or leading order -- degenerations of the quantizations constructed in the next section. We review these here.
## The Atiyah-Bott symplectic structure For a reductive group $G$, Atiyah and Bott constructed in [@atiyah1983yang] a symplectic form on the smooth locus of the moduli space of flat $G$-connections on $\Sigma$. Given a $G$-bundle $E$ with a flat connection $A$ on $\Sigma$, the tangent space to the moduli space at $(E,A)$ consists of 1-forms valued in the associated adjoint bundle $\operatorname{ad}(E) = E\times_G \mathfrak{g}$ over $\Sigma$. We denote by $\kappa$ the Killing form on $\mathfrak{g}$. Given a pair $\chi_1, \chi_2$ of $\operatorname{ad}(E)$-valued 1-forms, regarded as tangent vectors, we define their symplectic pairing by the formula: $$\omega(\chi_1,\chi_2) = \int_{\Sigma} \kappa(\chi_1\wedge \chi_2).$$ We remark that this symplectic form arises naturally in the Chern-Simons action term for a 3-manifold of the form $\Sigma\times I$. The skew-symmetry, non-degeneracy and closedness of $\omega$ all follow relatively easily from the definition. ## The Goldman bracket on the ordinary character variety The character variety carries a combinatorial analog of the Atiyah--Bott symplectic structure, due to Goldman [@goldman1984symplectic] and Karshon [@karshon1992algebraic], which can be defined purely algebraically without appeal to analysis. The fundamental group $\pi_1(\Sigma)$ of a closed surface has as its second group cohomology $H^2(\pi_1(\Sigma),\mathbb{C}) = \mathbb{C}$. The tangent space to the character variety at a given representation $\rho:\pi_1(\Sigma)\to G$ identifies with $H^1(\pi_1(\Sigma),\mathfrak{g}),$ where $\pi_1(\Sigma)$ acts on $\mathfrak{g}$ through the conjugation action. In analogy with the Atiyah-Bott symplectic structure we obtain a skew pairing, $$H^1(\pi_1(\Sigma),\mathfrak{g})\times H^1(\pi_1(\Sigma),\mathfrak{g}) \xrightarrow{\kappa\circ \cup} H^2(\pi_1(\Sigma),\mathbb{C}) = \mathbb{C},$$ which again defines a symplectic form on the character variety. The corresponding Poisson bracket is known as the Goldman bracket.
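Although we will not need it below, it may help to record the explicit form the Goldman bracket takes on trace functions in the simplest case $G=\operatorname{SL}_2$. Writing $\operatorname{tr}_\alpha$ for the trace of the holonomy along a loop $\alpha$, and up to an overall normalisation depending on the choice of invariant form, Goldman's product formula reads $$\{\operatorname{tr}_\alpha,\operatorname{tr}_\beta\} = \tfrac{1}{2}\sum_{p\in\alpha\cap\beta}\varepsilon(p)\left(\operatorname{tr}_{\alpha_p\beta_p}-\operatorname{tr}_{\alpha_p\beta_p^{-1}}\right),$$ where the sum runs over the transverse intersection points of $\alpha$ and $\beta$, $\varepsilon(p)=\pm 1$ is the local intersection sign, and $\alpha_p\beta_p$ (resp. $\alpha_p\beta_p^{-1}$) denotes the loop obtained by concatenating $\alpha$ with $\beta$ (resp. $\beta^{-1}$) at $p$.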
## The Fock-Rosly Poisson structure on the framed character variety While both the Atiyah-Bott and Goldman symplectic forms are clearly very natural and general, they don't lead immediately to explicit formulas for the Poisson brackets of functions on the character variety. In the case of the framed character variety of surfaces with at least one boundary component, a much more explicit reformulation was given by Fock and Rosly in [@fock1998poisson]. First we recall that $\pi_1(\Sigma_{g,r})$ is the free group on $2g+r-1$ generators, where $\Sigma_{g,r}$ denotes a surface of genus $g$ with $r$ punctures. Hence the framed character variety is simply the product $G^{2g+r-1}$. The Poisson bracket between functions $f$ and $g$ of a single $G$ factor is given by the Poisson bivector, $$\label{eqn:STS} \pi_{STS}=\rho^{ad,ad}+t^{r,l}-t^{l,r}.$$ Here we denote with superscripts $r,l, ad$ the right, left and adjoint vector fields on $G$ determined by a Lie algebra element, we let $r\in \mathfrak g^{\otimes 2}$ denote the classical $r$-matrix, and $\rho$ and $t$ its anti-symmetric and symmetric parts. The bivector $\pi_{STS}$ induces on $G$ a Poisson structure which was introduced by Semenov--Tian--Shansky [@STS94]. The Poisson bracket between functions $f$ and $g$ of the $i$th and $j$th factors is given by $\pi_{ij}$, where $$\pi_{ij}= \begin{cases} \pm (r^{ad,ad}) & \text{if $i,j$ are $\pm$ unlinked}\\ \pm (r^{ad,ad}+2t^{r,l}) & \text{if $i,j$ are $\pm$ linked}\\ \pm (r^{ad,ad}-2t^{r,r}+2t^{r,l}) & \text{if $i,j$ are $\pm$ nested } \end{cases}$$ In total we have, $$\pi = \sum_i \pi_{STS}^{(i)}+\sum_{i<j} (\pi_{ij}-\pi_{ji}),$$ where $\pi_{ij}$ is now a 2-tensor whose first leg acts on the $i$th factor and whose second leg acts on the $j$th factor, as given by the formula above.
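The classical $r$-matrix entering these formulas satisfies the classical Yang--Baxter equation $[r_{12},r_{13}]+[r_{12},r_{23}]+[r_{13},r_{23}]=0$. For $\mathfrak{sl}_2$, in one common normalisation, $r = e\otimes f + \tfrac14\, h\otimes h$; the following numerical sanity check (not part of the Fock--Rosly construction itself) verifies the equation in the defining representation:

```python
import numpy as np

# Chevalley generators of sl2 in the defining representation
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
I = np.eye(2)

# Classical r-matrix for sl2:  r = e (x) f + (1/4) h (x) h
terms = [(1.0, e, f), (0.25, h, h)]

def r_in_slots(i, j):
    """Embed r into tensor factors i, j of a triple tensor product."""
    total = np.zeros((8, 8))
    for c, a, b in terms:
        mats = [I, I, I]
        mats[i], mats[j] = a, b
        total += c * np.kron(np.kron(mats[0], mats[1]), mats[2])
    return total

r12, r13, r23 = r_in_slots(0, 1), r_in_slots(0, 2), r_in_slots(1, 2)
comm = lambda A, B: A @ B - B @ A
cybe = comm(r12, r13) + comm(r12, r23) + comm(r13, r23)
assert np.abs(cybe).max() < 1e-12   # classical Yang-Baxter equation holds
```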
The appearance of the classical $r$-matrix in the Fock-Rosly Poisson bracket foreshadows the role of quantum groups and quantum $R$-matrices in the deformation quantization of Section [4](#sec:quantum){reference-type="ref" reference="sec:quantum"}. ## The Fock-Goncharov cluster Poisson structure on the decorated character variety Recall that the decorated character variety of a marked and punctured surface contains a system of open charts, each isomorphic to a torus $(\mathbb{C}^\times)^k$ for some $k$ depending on $G$ and on the decorated surface. Each chart $U_\alpha$ carries a "log canonical\" Poisson bracket: $$\{x_i,x_j\} = a_{ij}x_ix_j,$$ where $A = (a_{ij})$ is the adjacency matrix of the quiver attached to $U_\alpha$. It is called log-canonical because the formal logarithms of the generators $x_i$ satisfy $\{\log(x_i),\log(x_j)\} = a_{ij}$, which resembles Heisenberg's "canonical commutation relations\". The birational transformations given by cluster mutation intertwine the different Poisson brackets on each chart, so that they glue together to a globally defined Poisson bracket on the cluster Poisson variety. ## Shifted symplectic structures and character stacks The moduli problem given by the character stack can be phrased in terms of classifying spaces, and this allows a universal construction of the Atiyah--Bott/Goldman symplectic structure on character varieties, due to Pantev, Toën, Vaquié, and Vezzosi [@pantev2013shifted]; see [@CALAQUE] for an exposition. Specifically, there exists a classifying stack $BG$, such that for any surface $\Sigma$, we have: $$\underline{\mathrm{Ch}}_G(\Sigma) = \operatorname{Maps}(\Sigma,BG),$$ where $\operatorname{Maps}$ here denotes the stack of locally constant maps from a topological space into an algebraic stack, for instance as obtained by regarding $\Sigma$ as presented by a simplicial set.
The classifying space $BG$ has as its tangent and cotangent complexes the Lie algebra $\mathfrak{g}$ and its dual $\mathfrak{g}^*$ in homological degrees 1 and -1, respectively. Hence the Killing form gives an isomorphism from the tangent bundle to the 2-shifted cotangent bundle: such a structure (satisfying some additional properties which follow from those of the Killing form) is known as a 2-shifted symplectic structure. One may then transgress the 2-shifted symplectic structure on $BG$ through the mapping stack construction to give an ordinary -- or 0-shifted -- structure on the character stack. Remarkably, the descent of this symplectic structure to the smooth part of the character variety recovers the Atiyah-Bott/Goldman symplectic structure. Hence the PTVV structure on the character stack may be regarded as a stacky version of the Atiyah--Bott/Goldman construction. ## Hamiltonian reduction interpretation The framework of Hamiltonian reduction gives a natural and very general procedure to pass from the framed character variety of the surface $\Sigma_{g,1}$ to the ordinary character variety of the closed surface $\Sigma_g$. Let $\mu: \mathrm{Ch}_G^{fr}(\Sigma_{g,1})\to G$ denote the map which sends the $G$-local system to its monodromy around the unique boundary component (we assume for simplicity that the basepoint is on the boundary to ensure this map is well-defined, not only up to conjugation). Sealing the boundary component means imposing $\mu=\operatorname{Id}$, and forgetting the framing means quotienting by the $G$-action. Hence we have, $$\mathrm{Ch}_G(\Sigma_g) = \mu^{-1}(\operatorname{Id}) / G.$$ Formulas such as the above are common in the theory of *Hamiltonian reduction*, where one interprets $\mu$ as a *moment map* for a Hamiltonian action of the group $G$ on some phase space -- in this case the phase space is $\mathrm{Ch}^{fr}_G(\Sigma_{g,1})$. 
The target of the moment map is typically $\mathfrak{g}^*$ rather than $G$; however, Alekseev, Kosmann-Schwarzbach, Malkin, and Meinrenken developed in [@alekseev1998lie; @alekseev2002quasi], following [@lu1991momentum], a formalism of "quasi-Hamiltonian\" $G$-spaces, which feature "group-valued moment maps\" taking values in $G$ rather than $\mathfrak{g}^*$. The Hamiltonian reduction of a quasi-Hamiltonian $G$-space is symplectic, and recovers the Atiyah-Bott/Goldman symplectic structure on the closed surface. # Quantum character varieties {#sec:quantum} In this final section, let us recount the four most well-known constructions of quantum character varieties. ## The moduli algebra quantization of the framed character variety The Fock-Rosly Poisson structure on the framed character variety of $\Sigma_{g,r}$, with $r\geq 1$, admits a highly explicit and algebraic quantization, introduced independently by Alekseev, Grosse, and Schomerus [@AGSI; @AGSII] and by Buffenoir and Roche [@buffenoir1995two]. This quantization was originally called the "moduli algebra\", and is often subsequently referred to as the "AGS algebra\". The construction also goes under the names "combinatorial Chern-Simons theory\" and "lattice gauge theory\", and has been studied in many different contexts, see e.g. [@roche2002trace], [@MeusbergerSchroers], [@meusburger2017kitaev]. The starting point is that the classical $r$-matrices appearing in the Fock-Rosly Poisson bracket have well-known quantizations into quantum $R$-matrices, which themselves describe the braiding of representations of the quantum group. Artful insertion of quantum $R$-matrices in place of the classical $r$-matrices gives the deformation quantization of framed character varieties: let us now describe their construction in more detail.
In the case $g=1,r=1$, the Fock-Rosly Poisson bracket is identical to the Semenov-Tian-Shansky Poisson bracket $\pi_{STS}$ from Equation [\[eqn:STS\]](#eqn:STS){reference-type="eqref" reference="eqn:STS"}. Replacing classical $r$-matrices with quantum $R$-matrices leads to a deformation quantization $\mathcal{F}_q(G)$ of the algebra of functions[^2] on the group $G$, known as the *reflection equation algebra*. The name refers to the fact that for $G=\operatorname{GL}_N$, the commutation relations in this algebra are given[^3] via the "reflection equation\", $$\label{eqn:REA} R_{21}A_1 R_{12} A_2 = A_2 R_{21}A_1R_{12},$$ which appears as a defining relation in braid groups of type $B$, and in mathematical physical models for scattering matrices in 1+1 dimensions in the presence of a reflecting wall. A general surface with boundary may be presented combinatorially as a "ciliated ribbon graph\" -- essentially a gluing of the surface from disks. According to this prescription, each edge contributes a factor of $\mathcal{F}_q(G)$ to the moduli algebra (which may be regarded as the quantum monodromy of a connection along that edge), and the cross relations between different edge factors are given by explicit formulas resembling the reflection equation, but with asymmetries related to the unlinked, linked or nested crossings. ## The skein theory quantization of the ordinary character variety The skein algebra quantization is predicated on an elegant graphical formulation of the functions on the classical $\operatorname{SL}_2$ character variety in terms of (multi-)curves drawn on the surface. This is then deformed to a similar graphical calculus for curves drawn instead in the surface times an interval. Skein algebras were independently introduced by Przytycki [@Przytycki] and Turaev [@Turaev].
Following them, the vast majority of the skein theory literature concerns the so-called Kauffman bracket skein relations, a particular normalisation of the skein relations deforming the $\operatorname{SL}_2$-character variety. In that same tradition, we will recall this special case first in detail, only outlining the extension to general groups, and indeed to general ribbon braided tensor categories, afterwards. Given a loop $\gamma: S^1\to X$, a $G$-local system $E$ over $X$, and a finite-dimensional representation $V$ of $G$, we have a polynomial function, $\operatorname{tr}_{\gamma,V}$, sending $E$ to the trace of the parallel transport along $\gamma$ of the associated vector bundle with connection $E\times_G V$. The word "polynomial\" here means, for instance, that $\operatorname{tr}_{\gamma,V}$ defines a $G$-invariant function on the framed character variety, hence a polynomial function on the GIT quotient. The skein theoretic approach to quantizing character varieties begins by describing this commutative algebra structure "graphically\", i.e. via the image of the curve $\gamma$ sitting in $\Sigma$, and then introducing a deformation parameter in the graphical presentation. Let us consider a 3-manifold $M=\Sigma\times I$, the cylinder over some surface $\Sigma$. We may depict the function $\operatorname{tr}_{\gamma,V}$ by drawing $\gamma$ with the label $V$ (one often projects onto the $\Sigma$ coordinate, to draw $\gamma$ as a curve with normal crossings on $\Sigma$). We depict the product $\operatorname{tr}_{\gamma_1,V_1}\cdot \operatorname{tr}_{\gamma_2,V_2}$ of two such functions by super-imposing the drawings, as in Figure [3](#fig:skeinrelntorus){reference-type="ref" reference="fig:skeinrelntorus"}. The resulting diagrams will of course develop additional crossings when multiplied, however a basic observation -- which is somewhat special to the case of $\operatorname{SL}_2$ -- is that one can use local "skein relations\" to resolve crossings.
To better understand this, consider the Cayley-Hamilton identity for a matrix $X\in \operatorname{SL}_2$: $$X + X^{-1} = \operatorname{tr}(X)\cdot\operatorname{Id}_2.$$ Multiplying by an arbitrary second matrix $Y$ and taking traces gives an identity $$\operatorname{tr}(XY)+\operatorname{tr}(X^{-1}Y) = \operatorname{tr}(X)\operatorname{tr}(Y).\label{eq:classical-skein}$$ Suppose now that we have two paths $\gamma_1$ and $\gamma_2$ intersecting at a point $p \in \Sigma$, and that we consider the product $\operatorname{tr}_{\gamma_1,V_1}\cdot \operatorname{tr}_{\gamma_2,V_2}$, where $V_1=V_2=\mathbb{C}^2$ is the defining representation of $\operatorname{SL}_2$. Then the identity [\[eq:classical-skein\]](#eq:classical-skein){reference-type="eqref" reference="eq:classical-skein"} implies a graphical simplification as depicted in Figures [2](#fig:skeinreln){reference-type="ref" reference="fig:skeinreln"} and [3](#fig:skeinrelntorus){reference-type="ref" reference="fig:skeinrelntorus"}. ![At left, the equations $\operatorname{tr}(XY)+\operatorname{tr}(X^{-1}Y) = \operatorname{tr}(X)\operatorname{tr}(Y)$ and $\operatorname{tr}(\operatorname{Id})=2$ are expressed graphically as skein relations; the top relation holds between any three curves which are identical outside of the dotted region, and differ as indicated within it, while the bottom relation states that any isolated unknot can be erased at the price of a factor of 2. At right is stated the quantum deformation which depends on a parameter $A\in \mathbb{C}$.](skein-reln.pdf){#fig:skeinreln height="1in"} ![The product $\operatorname{tr}_{\gamma_1,V} \cdot \operatorname{tr}_{\gamma_2,V}$ of two curves is given by their stacking in the $I$ direction on $\Sigma\times I$.
The skein relations express this as a linear combination of two new curves as depicted at the right.](torus-skein-reln.pdf){#fig:skeinrelntorus height="1in"} A fundamental observation of Przytycki and Turaev (independently) is that the graphical relation in Figure [2](#fig:skeinreln){reference-type="ref" reference="fig:skeinreln"} can be naturally deformed by introducing coefficients into the relations which mimic the defining relations of the Jones polynomial, more precisely its normalisation known as the Kauffman bracket. Around the same time, Edward Witten had understood these same "skein relations\" to appear as fundamental relations between Wilson loops in a quantization of the Chern-Simons theory in 3-dimensions. The $\operatorname{SL}_2$-skein algebra of a surface is therefore defined as the quotient of the vector space spanned by links embedded into $\Sigma\times I$, modulo local skein relations: two skeins are declared equivalent if they agree outside of some cylindrical ball $\mathbb{D}^2\times I$ and differ inside as indicated in the Figure. An example of a skein relation is depicted in Figure [3](#fig:skeinrelntorus){reference-type="ref" reference="fig:skeinrelntorus"}: we imagine a thickening of $T^2$ to $T^2 \times I$, and a small cylindrical ball $D^2\times I$ around the intersection point indicated on the left hand side of the equation. Within the cylindrical ball, we replace the skein with the expression on the right hand side of the skein relation in Figure [2](#fig:skeinreln){reference-type="ref" reference="fig:skeinreln"}. We impose such relations for all cylindrical balls throughout $\Sigma\times I$. Turaev showed that the resulting relations are flat in the parameter $A$ -- more precisely he showed that the skein module[^4] is free as a $\mathbb{Z}[A,A^{-1}]$-module, and that the specialisation at $A=-1$ recovers the vector space of polynomial functions on the character variety of the surface[^5].
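The classical trace identity $\operatorname{tr}(XY)+\operatorname{tr}(X^{-1}Y)=\operatorname{tr}(X)\operatorname{tr}(Y)$ underlying this $A=-1$ specialisation is easy to test numerically on random $\operatorname{SL}_2$ matrices; the following sketch assumes nothing beyond numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2():
    # normalise a random complex 2x2 matrix to determinant 1
    m = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return m / np.sqrt(np.linalg.det(m))

for _ in range(100):
    X, Y = random_sl2(), random_sl2()
    lhs = np.trace(X @ Y) + np.trace(np.linalg.inv(X) @ Y)
    rhs = np.trace(X) * np.trace(Y)
    assert np.allclose(lhs, rhs)   # the SL2 trace identity
```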
Moreover, the skein module of $\Sigma\times I$ obtains an algebra structure by stacking skeins in the $I$-direction. With respect to this stacking operation, the skein module becomes a non-commutative algebra whose $q=1$ specialisation is the algebra of functions on the classical ordinary character variety. Due to the flatness in $q$, we obtain a Poisson bracket on the character variety by setting: $$\{f,g\} = \frac{f\cdot g - g\cdot f}{q-q^{-1}} \mod q-1.\label{eqn:degen}$$ It was shown in [@bullock1999understanding] that this Poisson bracket agrees with the Atiyah--Bott/Goldman bracket, and so the skein algebra is a deformation quantization of the character variety with its Atiyah--Bott/Goldman Poisson bracket. The definition above extends naturally to define the $\operatorname{SL}_2$-skein module of an oriented 3-manifold: one considers the formal linear span of links in $M^3$, modulo the skein relations imposed in each cylindrical ball $D^2\times I$ embedded in the 3-manifold. The $A=-1$ specialisation still coincides with the functions on the character variety of $M$ [@bullock1997rings]; however, the skein module in general no longer carries an algebra structure, as there is no distinguished direction along which to stack the skeins. The above discussion has been formulated for simplicity in the case of $\operatorname{SL}_2$-skeins, however it generalises naturally to any simple gauge group $G$, indeed to any ribbon tensor category. The careful reader will have noticed that it sufficed in the $\operatorname{SL}_2$-case to consider only the defining two-dimensional representation, and that we could always reduce to crossing-less diagrams. For a general group $G$, or more generally for an arbitrary ribbon tensor category, this is no longer possible.
Skeins for a general group $G$ consist of embedded oriented ribbon graphs in the 3-manifold, together with a network of labels of representations of the quantum group (more generally, objects of the ribbon braided tensor category) along each edge, and a morphism at every vertex, from the tensor product of incoming to outgoing edges. We refer to [@cooke2019excision] or [@GJS] for details about the general construction, or to [@kuperberg1996spiders] or [@sikora2005skein] for early examples beyond the Kauffman bracket skein module. An important ingredient in the definition is the Reshetikhin--Turaev evaluation map, which maps any skein on the cylindrical ball $\mathbb{D}^2\times I$ to a morphism from the tensor product of incoming labels along $\mathbb{D}^2\times \{0\}$ to the tensor product of the outgoing labels along $\mathbb{D}^2\times \{1\}$. The skein module is defined as the span of all skeins, modulo the kernel of the Reshetikhin--Turaev evaluation maps, ranging over all embedded disks in $M$. ## The Fock--Goncharov quantum cluster algebra structure on the decorated character variety Recall that the cluster algebra structure on the decorated character varieties consists of a collection of open subsets, each identified via a coordinate system with a torus $(\mathbb{C}^\times)^r$, carrying a "log-canonical\" Poisson bracket preserved by the birational transformations. There is a canonical quantization of such a torus given by introducing invertible operators satisfying $X_iX_j = q^{a_{ij}} X_jX_i$. The Poisson bracket obtained from these relations as in equation [\[eqn:degen\]](#eqn:degen){reference-type="eqref" reference="eqn:degen"} recovers the log-canonical Poisson bracket. An elementary and fundamental observation of Fock and Goncharov is that conjugation by the "quantum dilogarithm\" power series induces a birational equivalence between different quantum charts.
This birational isomorphism lifts the cluster mutation taking place at the classical level to a birational isomorphism of the associated quantum tori. Essentially by definition, the cluster quantization of the decorated character variety is the resulting collection of quantum tori, equipped with a preferred system of generators, known as cluster variables, related by quantum cluster mutations. In summary, Fock and Goncharov construct the quantization of the decorated character variety as a quantum cluster algebra. The resulting algebraic, combinatorial, and analytic structures are of considerable interest and are heavily studied; however, to survey them in proper depth would be beyond the scope of this article. Instead we highlight several important papers following in this tradition: [@Le19; @SS19; @Tes; @Ip18; @GS19; @FST08; @schrader2017continuous]. ## Quantum character stacks from factorization homology A basic ingredient in non-commutative algebra and gauge theory is the quantum group. In categorical terms, the category $\operatorname{Rep}_q(G)$ is a ribbon braided tensor category, which $q$-deforms the classical category of representations of the algebraic group $G$. Recall that the classical character stack is computed via factorization homology, as in Equation [\[eqn:FHom\]](#eqn:FHom){reference-type="eqref" reference="eqn:FHom"}. The quantum character stack of a surface is defined, by fiat, by the equation, $$Z(\Sigma) = \int_{\Sigma} \operatorname{Rep}_q(G).$$ Here, as in Section [2](#sec:classical){reference-type="ref" reference="sec:classical"}, we are regarding $\operatorname{Rep}_q(G)$ with its braided monoidal structure equivalently as an $E_2$-algebra in the symmetric monoidal bi-category of categories, in order to define its factorization homology on surfaces.
We also note that the ribbon structure on $\operatorname{Rep}_q(G)$ equips it with an $SO(2)$-fixed point structure, so that it descends from an invariant of framed surfaces to one of oriented surfaces. The construction via factorization homology opens up tools in topological field theory, extending the construction of quantum character stacks both *down* to the level of the point, and *up* to dimension 3. The algebraic framework for the extended theory involves a 4-category denoted $\operatorname{BrTens}$, whose objects are braided tensor categories and whose higher morphisms encode notions of algebras, bimodules, functors and natural transformations, respectively, between braided tensor categories. The braided tensor category $\operatorname{Rep}_q(G)$ defines a 3-dualizable object in $\operatorname{BrTens}$, and so according to the cobordism hypothesis it defines a fully extended framed 3-dimensional topological field theory. The $SO(2)$-fixed point structure descends this to an oriented topological field theory, and the resulting invariants of oriented surfaces coincide with the factorization homology functor $Z$ as defined above. The construction by factorization homology has a modification where we allow a pair of braided tensor categories and a morphism between them labelling a codimension-1 defect, and from this data produce an invariant of bipartite surfaces. Taking $\operatorname{Rep}_q(G)$ and $\operatorname{Rep}_q(T)$, with $\operatorname{Rep}_q(B)$ labelling the defect, one obtains a quantum deformation of the decorated character stack. ## Special structures at roots of unity Each flavour of quantum character variety discussed above exhibits special behavior when the quantum parameter $q$ is taken to be a root of unity: 1. Skein algebras at root-of-unity parameter $q$ have large centers, over which the entire algebra is a finitely generated module, as first proved by Bonahon--Wong [@bonahon2016representations].
Following them, numerous papers have studied the implications for the representation theory of the skein algebra, especially the determination of the Azumaya locus -- the points of the spectrum of the center for which the central reduction of the skein algebra is a full matrix algebra. Most such results are only proved in the case of $\operatorname{SL}_2$-skeins, but are expected to hold more generally. See [@frohman2019unicity; @ganev2019quantum; @karuo2022azumaya] 2. AGS moduli algebras at a root of unity develop a large centre, and their Azumaya locus is known to contain the preimage $\mu^{-1}(G^\circ)$ of the big cell. This follows easily from the Brown--Gordon/Kac--de Concini technique of Poisson orders [@brown2003poisson; @de1991representations]: the space $\mu^{-1}(G^\circ)$ is precisely the open symplectic leaf in the framed character variety, and the BG/KdC method gives an isomorphism between the fibers over any two points in the same symplectic leaf. See [@ganev2019quantum] for a precise formulation in the setting of AGS algebras. 3. Quantum cluster algebras attached to decorated quantum character varieties exhibit parallel behavior at roots of unity: chart by chart, the $\ell$th power of any cluster monomial is central, whenever $q^\ell=1$. The cluster mutations respect this central structure, and lead to a quantum Frobenius embedding of the classical decorated character variety (of each respective type) into the corresponding quantum cluster algebra. It is much more straightforward to determine the Azumaya locus in this setting, given the explicit control over the individual charts. See [@FG09a], and in greater generality [@nguyen2021root]. 4. The Azumaya algebras described above are each an instance of *invertibility*: indeed, an algebra $A$ is said to be invertible over its center if the algebra $A\otimes_{Z(A)} A^{op}$ is Morita equivalent to $Z(A)$.
A proof has been announced by Kinnear that quantum character stacks are invertible relative to classical character stacks in the strongest possible sense: the symmetric monoidal category $\mathcal{QC}(\mathrm{Ch}_G(\Sigma))$ acts monoidally on the quantum character stack, and the quantum character stack defines an invertible sheaf of categories for that action. At the level of surfaces, this implies that the factorization homology category is an invertible sheaf of categories over the quasi-coherent sheaves on the character stack. More fundamentally, the assertion is that the fully extended 3D TQFT defined by the quantum character stack construction at a root of unity is invertible relative to the fully extended 4D TQFT determined by $\operatorname{Rep}(G)$. ## Unifications of various approaches to quantum character varieties Recall that classically, each of the framed, ordinary, and decorated character stacks could be derived directly from the classical character stack under various geometric operations. This applies equally well to the quantum character stacks. We record the following unifications: - By a result of Cooke [@cooke2019excision], the skein category is computed via the factorization homology of the category $\operatorname{Rep}_q(G)^{cp}$ of compact-projective objects. The skein algebra is the algebra of endomorphisms of the empty skein object. - In [@BBJ18a] Alekseev--Grosse--Schomerus algebras are recovered from the quantum character stack via monadic reconstruction techniques. The quantum Hamiltonian reduction procedure is recast via monadic reconstruction in [@BBJ18b; @safronov2019categorical]. - The Fock--Goncharov quantum cluster algebra may be recovered via an open subcategory of the decorated quantum character stack: this is presently only proved in the $\operatorname{SL}_2$ case, in [@jordan2021quantum]. 
- The Alekseev--Grosse--Schomerus algebras have also been recovered directly in skein-theoretic terms, via the so-called "stated\" [@Costantino-Le], or "internal\" [@GJS] skein algebra construction. See [@haioun2022relating]. - The Bonahon--Wong approach to the representation theory of skein algebras involves describing skein algebras via quantum trace maps (cf. [@bonahon2011quantum; @kim2023sl2; @le2018triangular]). This relationship has been studied in the more physical literature under the term non-abelianization in [@hollands2016spectral; @GMN2; @neitzke2020q; @Neitzke_2020]. ## Quantum character varieties of 3-manifolds In contrast to the vast literature we have surveyed pertaining to quantum character varieties and character stacks of surfaces, much less is currently known about the quantization of character varieties and character stacks of 3-manifolds. Perhaps one reason for this, as discussed below, is that the very notion of quantization in the context of character stacks of 3-manifolds differs from the case of surfaces: neither character stacks nor character varieties of 3-manifolds are symplectic/Poisson; they are instead $(-1)$-shifted symplectic. This implies a quantization theory of a different nature -- in particular, as we discuss below, the skein module quantization of a 3-manifold is essentially never a flat deformation; flatness holds only when the character variety is a finite set of points. Let us nevertheless highlight what is known and currently under investigation concerning each of the four perspectives on quantization. 1. The AGS algebra attached to the surface $\Sigma_{g,1}$ acts naturally on the underlying vector space of the AGS algebra attached to $\Sigma_{0,g}$, which we should regard instead as attached to the handlebody $H_g$ of genus $g$. 
In [@GJS] it was established that the skein module of a closed oriented 3-manifold $M$ may be computed by choosing a Heegaard splitting, $$M = H_g \cup_{\Sigma_g} H_g,$$ where the gluing involves twisting by an element $\gamma$ of the mapping class group of $\Sigma_g$. 2. This presentation of the skein module was used in [@GJS] to establish the finite-dimensionality of skein modules of closed 3-manifolds, as previously conjectured by Witten. Several recent papers [@carregaGeneratorsSkeinSpace2017; @detcherryBasisKauffmanSkein2021; @detcherryInfiniteFamiliesHyperbolic2021; @gilmerKauffmanBracketSkein2007; @gilmerKauffmanBracketSkein2018; @detcherry2023kauffman; @GJVY] are devoted to determining these dimensions in special cases, and Gunningham and Safronov have announced an identification of the skein module with the space of sections of a certain perverse sheaf introduced by Joyce as a quantization of the $(-1)$-shifted symplectic structure on the character variety regarded as a critical locus. 3. Decorated character varieties of 3-manifolds (perhaps not by this name) were studied in the papers [@Dim; @DGG]: one fixes an ideal triangulation of a hyperbolic 3-manifold, and directly computes a deformation quantization of the $A$-polynomial ideal using quantum cluster algebra techniques tuned to each ideal tetrahedron, regarded as a filling of a decorated $\Sigma_{0,4}$. 4. It was established by Przytycki and Sikora [@Przytycki1997OnSA] that the $q=1$ specialisation of the skein module of $M$ indeed recovers the algebra of functions on the character variety of $M$. Echoing the Azumaya/invertibility phenomena at the level of surfaces, it is now expected that skein modules at root-of-unity parameters arise as the global sections of a line bundle on the classical character variety. 
Two approaches to this result have been announced, one by Kinnear using higher categorical techniques in parallel to the above discussion, and another by Frohman, Kania-Bartoszynska and Le, appealing to the Azumaya property for surfaces and invoking the structure of a (3,2) TQFT. # Further reading The following references may be helpful to a reader hoping to learn this subject in more detail: - "Lectures on gauge theory and Integrable systems\", [@Audin1997]. - "Quantum geometry of moduli spaces of local systems and representation theory\" [@GS19]. - *Cluster algebras and Poisson geometry*, [@gekhtman2010cluster]. - "Factorization homology of braided tensor categories\" [@brochiernotes]. - GEAR Lectures on quantum hyperbolic geometry [@Frohmannotes]. [^1]: with trivial stability condition, equivalently the "categorical quotient\" or "coarse moduli space\" [^2]: A point of disambiguation: the reflection equation algebra $\mathcal{F}_q(G)$ here does not coincide with the so-called FRT algebra quantization, often denoted $\mathcal{O}_q(G)$, which quantizes instead the Sklyanin Poisson bracket on $G$. [^3]: See, e.g., [@JordanWhite] for details about this notation. [^4]: The name "module\" refers simply to the fact that the base ring may be taken to be $\mathbb{Z}[A,A^{-1}]$ (to allow specialisation) rather than a field. However, when a 3-manifold has boundary, its skein module indeed becomes a module for the action by inserting skeins at the boundary. [^5]: The careful reader may note a perhaps unexpected sign here, and in the skein relations at the right side of Figure [2](#fig:skeinreln){reference-type="ref" reference="fig:skeinreln"}; this is a standard convention taken to ensure that the Kauffman bracket skein module of a 3-manifold depends only on a choice of orientation. Although in the case of surfaces the signs can be absorbed into the normalisation, we include the standard normalisation here for the sake of consistency. 
The parameter $A$ discussed here is a fixed square root of the parameter $q$ discussed elsewhere in the article.
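To make the root-of-unity phenomena above concrete, consider the rank-one toy model of a quantum torus with generators $x, y$ satisfying $yx = qxy$. The following NumPy sketch is purely illustrative (the matrices and variable names are ours, not drawn from the references): it exhibits the standard clock-and-shift representation at a primitive $\ell$th root of unity, where $x^\ell$ and $y^\ell$ become scalar, hence central. The algebra is then a finite module over a large center, and its central reduction is the full $\ell \times \ell$ matrix algebra, the prototype of the Azumaya behavior discussed in items 1--4.

```python
import numpy as np

ell = 5
q = np.exp(2j * np.pi / ell)           # primitive ell-th root of unity

# Clock and shift matrices: an ell-dimensional representation of the
# quantum torus relation  y x = q x y  at a root of unity.
X = np.diag(q ** np.arange(ell))       # "clock": X e_j = q^j e_j
Y = np.roll(np.eye(ell), -1, axis=0)   # "shift": Y e_j = e_{j-1}

# The defining relation holds: Y X = q X Y.
assert np.allclose(Y @ X, q * X @ Y)

# At q^ell = 1 the ell-th powers are the identity, hence central scalars:
assert np.allclose(np.linalg.matrix_power(X, ell), np.eye(ell))
assert np.allclose(np.linalg.matrix_power(Y, ell), np.eye(ell))
```

Since $X$ and $Y$ generate the full matrix algebra $M_\ell(\mathbb{C})$, this realizes the central reduction of the quantum torus over a point of its Azumaya locus.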
arxiv_math
{ "id": "2309.06543", "title": "Quantum character varieties", "authors": "David Jordan", "categories": "math.QA math-ph math.GT math.MP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | For any smooth complex projective surface $S$, we construct semistable refined Vafa--Witten invariants of $S$ which prove the main conjecture of [@Thomas2020]. This is done by extending part of Joyce's universal wall-crossing formalism to equivariant K-theory, and to moduli stacks with symmetric obstruction theories, particularly moduli stacks of sheaves on Calabi--Yau threefolds. An important technical tool which we introduce is the symmetrized pullback, along smooth morphisms, of symmetric obstruction theories. author: - Henry Liu bibliography: - VW-invars.bib title: Semistable refined Vafa--Witten invariants --- # Introduction ## Fix a smooth complex polarized surface $(S, \mathcal{O}_S(1))$ and let $\pi\colon \mathop{\mathrm{tot}}(K_S) \to S$ be the total space of its canonical bundle. Let $\mathfrak{M}$ be the moduli stack [^1] of compactly supported torsion sheaves $E$ on the polarized Calabi--Yau threefold $$(X, \mathcal{O}_X(1)) \coloneqq \left(\mathop{\mathrm{tot}}(K_S), \pi^*\mathcal{O}_S(1)\right).$$ The torus $\mathsf{T}\coloneqq \mathbb{C}^\times$ acts on $X$, and therefore on $\mathfrak{M}$, by scaling the fibers of $\pi$ with a weight which we denote $t^{-1}$. For example, $K_X \cong t \otimes \mathcal{O}_X$. By the spectral construction [@Tanaka2020 §2], a point $[E] \in \mathfrak{M}$ is equivalent to a *Higgs pair* $(\overline{E}, \phi)$ where $\overline{E} = \pi_*E \in \mathsf{Coh}(S)$ and $\phi \in \mathop{\mathrm{Hom}}_S(\overline{E}, \overline{E} \otimes K_S)$. If $\overline{\mathfrak{M}}$ is the moduli stack of all coherent sheaves on $S$, then this identifies $$\mathfrak{M}= T^*[-1]\overline{\mathfrak{M}}$$ as a $(-1)$-shifted cotangent bundle (of appropriate derived enhancements) where $\mathsf{T}$ scales the cotangent fibers. This perspective is useful psychologically but we will avoid any actual use of derived algebraic geometry. 
[^2] ## Unsurprisingly, there is an exact triangle $$\label{eq:VW-obstruction-theory} R\mathop{\mathrm{Hom}}_X(E, E) \to R\mathop{\mathrm{Hom}}_S(\overline{E}, \overline{E}) \xrightarrow{\circ\phi - \phi\circ} R\mathop{\mathrm{Hom}}_S(\overline{E}, \overline{E} \otimes K_S) \xrightarrow{[1]}$$ relating the obstruction theories of $E$ and $(\overline{E}, \phi)$. However if $H^1(\mathcal{O}_S)$ or $H^2(\mathcal{O}_S)$ is non-zero, then the obstruction theories of $\det \overline{E} \in \mathop{\mathrm{Pic}}(S)$ and $\mathop{\mathrm{tr}}\phi \in H^0(K_S)$ are constant $\mathsf{T}$-fixed pieces of the second and third terms in [\[eq:VW-obstruction-theory\]](#eq:VW-obstruction-theory){reference-type="eqref" reference="eq:VW-obstruction-theory"} respectively. They are removed by taking $$\label{eq:VW-stack} \mathfrak{N}\coloneqq \{\det \overline{E} = L, \; \mathop{\mathrm{tr}}\phi = 0\} \subset \mathfrak{M}$$ for a given $L \in \mathop{\mathrm{Pic}}(S)$. The induced obstruction theory for $\mathfrak{M}$ relative to $H^0(K_S) \times \mathop{\mathrm{Pic}}(S)$, and therefore for $\mathfrak{N}$ by restriction, is given by the first term in the exact triangle $$\label{eq:VW-reduced-obstruction-theory} R\mathop{\mathrm{Hom}}_X(E, E)_\perp \to R\mathop{\mathrm{Hom}}_S(\overline{E}, \overline{E})_0 \xrightarrow{\circ\phi - \phi\circ} R\mathop{\mathrm{Hom}}_S(\overline{E}, \overline{E} \otimes K_S)_0 \xrightarrow{[1]}$$ where a subscript $0$ denotes traceless part. Note that it remains symmetric, in the sense that $R\mathop{\mathrm{Hom}}_X(E, E)_\perp^\vee \simeq t \otimes R\mathop{\mathrm{Hom}}_X(E, E)_\perp[3]$. For details, particularly on the important step of how to construct this obstruction theory in families, see [@Tanaka2020 §5]. [^3] ## Consider Gieseker stability on $\mathfrak{N}$ with respect to $\mathcal{O}_X(1)$. 
Let $\mathfrak{N}_{r,L,c_2}$ be the component consisting of sheaves $\overline{E}$ with $\det \overline{E} = L$ and rank and Chern classes $$\alpha \coloneqq (r, c_1(L), c_2) \in \mathbb{Z}_{>0} \oplus H^2(S) \oplus H^4(S).$$ When $\alpha$ is such that there are no strictly semistable objects, the semistable locus $\mathfrak{N}_{r,L,c_2}^\mathrm{sst}\subset \mathfrak{N}_{r,L,c_2}$ is a quasi-projective scheme and the symmetric obstruction theory on $\mathfrak{N}_{r,L,c_2}$ becomes perfect upon restriction. The fixed locus $(\mathfrak{N}_{r,L,c_2}^\mathrm{sst})^\mathsf{T}$ is proper and therefore the *refined Vafa--Witten invariant* $$\label{eq:stable-VW-invariant} \mathsf{VW}_\alpha(t) \coloneqq \chi\left(\mathfrak{N}_{r,L,c_2}^\mathrm{sst}, \widehat{\mathcal{O}}^\mathrm{vir}\right) \in \mathbb{Q}(t^{\frac{1}{2}})$$ is well-defined by $\mathsf{T}$-equivariant localization; see §[4](#sec:joyce-song){reference-type="ref" reference="sec:joyce-song"} for details. Deformation-invariance implies that $\mathsf{VW}_\alpha(t)$ depends only on $c_1(L)$ instead of $L$. It is convenient to write $\mathfrak{N}_\alpha \coloneqq \mathfrak{N}_{r,L,c_2}$ for some (unspecified) choice of $L \in \mathop{\mathrm{Pic}}(S)$ with the first Chern class specified by $\alpha$. ## When $\alpha$ has strictly semistable objects, it becomes more difficult to define enumerative invariants. Following the philosophy for generalized Donaldson--Thomas invariants [@Joyce2012], Tanaka and Thomas [@Tanaka2017 §6] [@Thomas2020 §5] proposed to rigidify $\mathfrak{N}_\alpha^\mathrm{sst}$ using an auxiliary moduli stack $$\mathfrak{N}^{Q(k)}_{\alpha,1} = \left\{(E, s) : [E] \in \mathfrak{N}_\alpha, \; s \in H^0(E \otimes \mathcal{O}_X(k))\right\}$$ of *Joyce--Song pairs*, for $k \gg 0$ such that $H^{>0}(E(k)) = 0$. There are no strictly semistable Joyce--Song pairs. 
The semistable locus $\mathfrak{N}^{Q(k),\mathrm{sst}}_{\alpha,1}$ is a quasi-projective scheme with proper $\mathsf{T}$-fixed locus, and it carries a symmetric perfect obstruction theory. The *refined pairs invariant* $$\widetilde{\mathsf{VW}}_\alpha(k, t) \coloneqq \chi\left(\mathfrak{N}^{Q(k),\mathrm{sst}}_{\alpha,1}, \widehat{\mathcal{O}}^\mathrm{vir}\right) \in \mathbb{Q}(t^{\frac{1}{2}})$$ is well-defined by $\mathsf{T}$-equivariant localization; see §[4](#sec:joyce-song){reference-type="ref" reference="sec:joyce-song"} for details. The conjecture, which we prove in this paper, was as follows. Define the symmetrized quantum integer $$\label{eq:quantum-integer} [n]_t \coloneqq (-1)^{n-1} \frac{t^{\frac{n}{2}} - t^{-\frac{n}{2}}}{t^{\frac{1}{2}} - t^{-\frac{1}{2}}},$$ and let $\tau$ be the Gieseker stability condition (normalized Hilbert polynomial) of Definition [Definition 4](#def:stability-conditions){reference-type="ref" reference="def:stability-conditions"}. The extra sign $(-1)^{n-1}$ in $[n]_t$ saves on signs elsewhere. **Theorem 1** ([@Thomas2020 Conjecture 5.2]). *There exist $\mathsf{VW}_{\alpha_i}(t) \in \mathbb{Q}(t^{\frac{1}{2}})$ such that:* 1. *if $H^1(\mathcal{O}_S) = H^2(\mathcal{O}_S) = 0$, then $$\label{eq:semistable-VW-invariant} \widetilde{\mathsf{VW}}_\alpha(k, t) = \sum_{\substack{n > 0\\\alpha_1 + \cdots + \alpha_n = \alpha\\\forall i: \tau(\alpha_i) = \tau(\alpha)}} \frac{1}{n!} \prod_{i=1}^n \Big[\chi(\alpha_i(k)) - \chi\Big(\sum_{j=1}^{i-1} \alpha_j, \alpha_i\Big)\Big]_t \mathsf{VW}_{\alpha_i}(t);$$* 2. *otherwise $\widetilde{\mathsf{VW}}_\alpha(k, t) = [\chi(\alpha(k))]_t \mathsf{VW}_\alpha(t)$.* These $\mathsf{VW}_\alpha(t)$ are *refined semistable Vafa--Witten invariants*. In principle, there are many sensible ways to define semistable invariants and [\[eq:semistable-VW-invariant\]](#eq:semistable-VW-invariant){reference-type="eqref" reference="eq:semistable-VW-invariant"} is only one possible choice. 
We expect that it is a good choice because it will have good wall-crossing behavior under variation of stability condition, like in [@Joyce2012 Theorem 5.18]. This will be explored in future work. The original statement [@Thomas2020 Conjecture 5.2] assumed that $\mathcal{O}_S(1)$ is generic for $\alpha$, in the sense of [\[eq:generic-polarization\]](#eq:generic-polarization){reference-type="eqref" reference="eq:generic-polarization"}, which makes $\chi(\alpha_j, \alpha_i) = 0$ and simplifies the formula. ## Note that Theorem [Theorem 1](#thm:VW-invars){reference-type="ref" reference="thm:VW-invars"} would become trivial if the $\mathsf{VW}_\alpha(k, t)$ were allowed to depend on $k$, because [\[eq:semistable-VW-invariant\]](#eq:semistable-VW-invariant){reference-type="eqref" reference="eq:semistable-VW-invariant"} would be an upper-triangular and invertible transformation between $\{\widetilde{\mathsf{VW}}_\alpha(k, t)\}_\alpha$ and $\{\mathsf{VW}_\alpha(k, t)\}_\alpha$ in $\mathbb{Q}(t^{\frac{1}{2}})$. The non-trivial content is therefore that the $\mathsf{VW}_\alpha(k, t)$ defined (uniquely) in this way are actually independent of $k$. This is a wall-crossing result which we prove using ideas from Joyce's recent universal wall-crossing formalism [@Joyce2021 Theorem 5.7]. Specifically, we: 1. (§[3](#sec:quiver-framed-stacks){reference-type="ref" reference="sec:quiver-framed-stacks"}, §[5](#sec:master-space-calculation){reference-type="ref" reference="sec:master-space-calculation"}) construct a master space $M_\alpha \coloneqq \mathfrak{N}^{\widetilde{Q}(k_1,k_2),\mathrm{sst}}_{\alpha,\bm{1}}$ with a $\mathbb{C}^\times$-action whose fixed loci consist of $\mathfrak{N}_{\alpha,1}^{Q(k_1),\mathrm{sst}}$, $\mathfrak{N}_{\alpha,1}^{Q(k_2),\mathrm{sst}}$, and some "interaction" terms; 2. 
(§[2](#sec:symmetrized-pullback){reference-type="ref" reference="sec:symmetrized-pullback"}, §[3](#sec:quiver-framed-stacks){reference-type="ref" reference="sec:quiver-framed-stacks"}) put a symmetric perfect obstruction theory on $M_\alpha$ using *symmetrized pullback* of the symmetric obstruction theory on $\mathfrak{N}_\alpha$; 3. (§[5](#sec:master-space-calculation){reference-type="ref" reference="sec:master-space-calculation"}) take K-theoretic residue of the equivariant localization formula on $M_\alpha$ to get a wall-crossing formula relating $\widetilde{\mathsf{VW}}_\alpha(k_1, t)$ and $\widetilde{\mathsf{VW}}_\alpha(k_2, t)$; 4. (§[6](#sec:semistable-invariants){reference-type="ref" reference="sec:semistable-invariants"}) prove a combinatorial lemma which, applied to the wall-crossing formula, shows that $\{\mathsf{VW}_\alpha(k, t)\}_\alpha$ are independent of $k$. Various forms of master spaces and such localization arguments have existed since the beginnings of modern enumerative geometry [@Thaddeus1996; @Mochizuki2009; @Nakajima2011; @Kiem2013]. Step 1 was done in [@Joyce2021] and the general principle of step 3 has also appeared previously (mostly in cohomology), but it appears that steps 2 and 4 are genuinely new. Together, steps 1--3 demonstrate how master space arguments work in equivariant K-theory using symmetric obstruction theories, and should be very generally applicable. [^4] 
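As a sanity check, the symmetrized quantum integers [\[eq:quantum-integer\]](#eq:quantum-integer){reference-type="eqref" reference="eq:quantum-integer"} can be verified symbolically; the following SymPy snippet (ours, purely illustrative and not part of the proof) confirms that $[n]_t$ is a Laurent polynomial in $t^{\frac{1}{2}}$ and that its $t \to 1$ limit is the signed classical integer $(-1)^{n-1} n$.

```python
import sympy as sp

# u stands for t^(1/2); working with u avoids fractional powers.
u = sp.symbols('u', positive=True)

def qint(n):
    """Symmetrized quantum integer [n]_t of eq. (quantum-integer), in u = t^(1/2)."""
    return sp.cancel((-1)**(n - 1) * (u**n - u**(-n)) / (u - 1/u))

# [3]_t = t + 1 + t^(-1): a symmetric Laurent polynomial in t^(1/2).
assert sp.simplify(qint(3) - (u**2 + 1 + u**(-2))) == 0

# The limit t -> 1 (u -> 1) recovers the signed classical integer (-1)^(n-1) n,
# matching the degeneration to the unrefined Joyce--Song formula.
for m in range(1, 7):
    assert sp.limit(qint(m), u, 1) == (-1)**(m - 1) * m
```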
## In §[2](#sec:symmetrized-pullback){reference-type="ref" reference="sec:symmetrized-pullback"}, we review obstruction theories on Artin stacks and define and construct symmetrized pullbacks. Very generally, if $f\colon \mathfrak{X}\to \mathfrak{Y}$ is a morphism of Artin stacks and $f$ has a relative obstruction theory, then there is a notion of compatibility between obstruction theories (resp. symmetric obstruction theories) on $\mathfrak{X}$ and $\mathfrak{Y}$ due to Manolache [@Manolache2012] (resp. Park [@Park2021 Theorem 0.1]). Symmetrized pullback is an operation (§[2.4](#sec:symmetrized-pullback-construction){reference-type="ref" reference="sec:symmetrized-pullback-construction"}) which takes a symmetric obstruction theory on $\mathfrak{Y}$ and produces a compatible one on $\mathfrak{X}$, under some fairly restrictive assumptions on the relative obstruction theory for $f$. These assumptions are satisfied in our Vafa--Witten setting basically because all the stacks involved are secretly shifted cotangent bundles. Symmetrized pullback is crucial in step 2 above for equipping the master space $M_\alpha$ with *any* perfect obstruction theory, let alone a symmetric one; that the resulting perfect obstruction theory is indeed still symmetric is a bonus feature that dramatically simplifies step 3. ## Acknowledgements A good portion of this project is either inspired by or directly based on Dominic Joyce's work [@Joyce2021], and benefitted greatly from discussions with him. I am also grateful for discussions and ongoing collaborations with Arkadij Bojko, Nikolas Kuhn and Felix Thimm, particularly regarding symmetrized pullback and obstruction theories. This work was supported by the Simons Collaboration on Special Holonomy in Geometry, Analysis and Physics. # Symmetrized pullback and K-theory {#sec:symmetrized-pullback} ## **Definition 1**. 
Let $\mathfrak{X}$ be an Artin stack over a base Artin stack $\mathfrak{B}$, with action of a torus $\mathsf{T}$, and let $\mathsf{QCoh}_\mathsf{T}(\mathfrak{X})$ be the abelian category of $\mathsf{T}$-equivariant quasi-coherent sheaves, with derived category denoted $D\mathsf{QCoh}_\mathsf{T}(\mathfrak{X})$. The cotangent complex $\mathbb{L}_{\mathfrak{X}/\mathfrak{B}} \in D\mathsf{QCoh}_\mathsf{T}(\mathfrak{X})$ has a canonical $\mathsf{T}$-equivariant structure. Let $\mathbb{T}_{\mathfrak{X}/\mathfrak{B}} \coloneqq \mathbb{L}_{\mathfrak{X}/\mathfrak{B}}^\vee$ denote the tangent complex. A ($\mathsf{T}$-equivariant) *obstruction theory* of $\mathfrak{X}$ over $\mathfrak{B}$ is an object $\mathbb{E}\in D\mathsf{QCoh}_\mathsf{T}(\mathfrak{X})$, together with a morphism $\phi\colon \mathbb{E}\to \mathbb{L}_{\mathfrak{X}/\mathfrak{B}}$ in $D\mathsf{QCoh}_\mathsf{T}(\mathfrak{X})$, such that $h^{\geq 0}(\phi)$ are isomorphisms and $h^{-1}(\phi)$ is surjective. An obstruction theory is: - *perfect* if $\mathbb{E}$ is perfect of amplitude contained in $[-1,1]$; - *($\mathsf{T}$-equivariantly) $n$-symmetric* for $n \in \mathbb{Z}$ if, for some $\mathsf{T}$-weight $t$, there is an isomorphism $\Theta\colon \mathbb{E}\xrightarrow{\sim} t \otimes \mathbb{E}^\vee[n-2]$ such that $\Theta^\vee[n-2]=\Theta$. From here on, unless otherwise indicated: all objects and morphisms are $\mathsf{T}$-equivariant, and we take $n=3$ and just say "symmetric". ## Suppose $\mathfrak{X}$ is equipped with a symmetric obstruction theory $\mathbb{E}$ which is perfect as a complex. Symmetry implies $$h^{-i}(\mathbb{E}) = t \otimes h^{i-1}(\mathbb{E})^\vee = 0 \quad \forall i>2.$$ If in addition $h^1(\mathbb{L}_\mathfrak{X}) = 0$, then $h^1(\mathbb{E}) = h^1(\mathbb{L}_\mathfrak{X}) = 0$ and $\mathbb{E}$ is a perfect obstruction theory. This will not hold in general because objects in $\mathfrak{X}$ can have non-trivial automorphisms. 
However, if $\mathfrak{X}$ has a stability condition with no strictly semistable objects, then the stable locus $\mathfrak{X}^\mathrm{st}\subset \mathfrak{X}$ is open and $\mathsf{T}$-invariant. Stable objects have no automorphisms, so $h^1(\mathbb{L}_\mathfrak{X})\big|_{\mathfrak{X}^\mathrm{st}} = h^1(\mathbb{L}_{\mathfrak{X}^\mathrm{st}}) = 0$, and hence $\mathbb{E}\big|_{\mathfrak{X}^\mathrm{st}}$ is a symmetric perfect obstruction theory on $\mathfrak{X}^\mathrm{st}$. ## **Definition 2**. Let $f\colon \mathfrak{X}\to \mathfrak{Y}$ be a morphism of Artin stacks with a relative obstruction theory $\phi_f\colon \mathbb{E}_f \to \mathbb{L}_f$. Two symmetric obstruction theories $\phi_\mathfrak{X}\colon \mathbb{E}_\mathfrak{X}\to \mathbb{L}_\mathfrak{X}$ and $\phi_\mathfrak{Y}\colon \mathbb{E}_\mathfrak{Y}\to \mathbb{L}_\mathfrak{Y}$, for the same weight $t$, are *compatible under $f$* if there are morphisms of exact triangles $$\label{eq:symmetric-compatibility-diagram} \begin{tikzcd} \mathbb{E}_f[-1] \ar{r} \ar[equals]{d} & t \otimes \mathbb{F}^\vee[1] \ar{r}{\zeta^\vee} \ar{d}{\eta^\vee} & \mathbb{E}_\mathfrak{X}\ar{d}{\zeta} \ar{r}{+1} & {} \\ \mathbb{E}_f[-1] \ar{r}{\delta} \ar{d}{\phi_f[-1]} & f^*\mathbb{E}_\mathfrak{Y}\ar{r}{\eta} \ar{d}{f^*\phi_\mathfrak{Y}} & \mathbb{F}\ar{d} \ar{r}{+1} & {} \\ \mathbb{L}_f[-1] \ar{r} & f^*\mathbb{L}_\mathfrak{Y}\ar{r} & \mathbb{L}_\mathfrak{X}\ar{r}{+1} & {} \end{tikzcd}$$ for some $\mathbb{F}\in D\mathsf{QCoh}(\mathfrak{X})$ such that the right-most column is $\phi_\mathfrak{X}$. Note that the symmetry of $\mathbb{E}_\mathfrak{X}$ is implied by the symmetry of $\mathbb{E}_\mathfrak{Y}$ since the whole upper right square is symmetric. 
From a different perspective, if only $\phi_f$ and a symmetric $\phi_\mathfrak{Y}$ are given, a *symmetrized pullback* of $\phi_\mathfrak{Y}$ along $f$ is the data of an object $\mathbb{E}_\mathfrak{X}$ and maps $\delta, \zeta, \eta$ such that [\[eq:symmetric-compatibility-diagram\]](#eq:symmetric-compatibility-diagram){reference-type="eqref" reference="eq:symmetric-compatibility-diagram"} is a morphism of exact triangles. Then $\mathbb{E}_\mathfrak{X}$ inherits the symmetry of $\mathbb{E}_\mathfrak{Y}$, and the five lemma applied to the obvious cohomology long exact sequences shows that both $\mathbb{F}\to \mathbb{L}_\mathfrak{X}$ and $\mathbb{E}_\mathfrak{X}\to \mathbb{F}\to \mathbb{L}_\mathfrak{X}$ are obstruction theories. If $f$ is smooth, our convention is to take $\mathbb{E}_f \coloneqq \mathbb{L}_f$ as the relative obstruction theory. ## {#sec:symmetrized-pullback-construction} Generally, we try to construct symmetrized pullback in the following way. Suppose the solid bottom left square in the following commutative diagram is given: $$\label{eq:symmetrized-pullback-construction} \begin{tikzcd}[column sep=4em] A \ar[dotted]{r}{\gamma} \ar[equals]{d} & \mathop{\mathrm{cocone}}(\delta^\vee[1]) \ar[dotted]{r} \ar[dashed]{d} & \mathop{\mathrm{cocone}}(\beta) \ar[dashed]{d} \ar[dotted]{r}{+1} & {} \\ A \ar{r}{\delta} \ar{d} & f^*\mathbb{E}_\mathfrak{Y}\ar[dashed]{r} \ar{d}{\delta^\vee[1]} & \mathop{\mathrm{cone}}(\delta) \ar[dashed]{d}{\beta} \ar[dashed]{r}{+1} & {} \\ 0 \ar{r} & t \otimes A^\vee[1] \ar[equals]{r} & t \otimes A^\vee[1] \ar{r}{+1} & {} \end{tikzcd}$$ For symmetrized pullback we want $A = \mathbb{E}_f[-1]$, though the following construction works for any $A$ with amplitude in $[0, 2]$. The given data induces all dashed arrows, making all but the topmost row into exact triangles. The octahedral axiom then implies the topmost row can also be completed into an exact triangle. 
Suppose in addition that $\beta$ is unique (up to isomorphism of $\mathop{\mathrm{cone}}(\delta)$). Then $\gamma\colon A \to \mathop{\mathrm{cocone}}(\delta^\vee[1])$, which a priori came from the octahedral axiom, must actually be $\beta^\vee[1]$. It follows that the entire diagram is symmetric across the diagonal after applying $t \otimes (-)^\vee[1]$. Then set $\mathbb{F}\coloneqq \mathop{\mathrm{cone}}(\delta)$ and $\mathbb{E}_\mathfrak{X}\coloneqq \mathop{\mathrm{cocone}}(\beta)$ to conclude. In general, the conditions that $\delta^\vee[1] \circ \delta = 0$ and that $\beta$ is unique are difficult to fulfill. We will give two special settings where they do hold. ## {#section-10} **Example 1**. Let $\delta\colon A \to f^*\mathbb{E}_\mathfrak{Y}$ be given. Suppose $\mathop{\mathrm{Hom}}(A, A^\vee[1]) = 0 = \mathop{\mathrm{Hom}}(A, A^\vee)$. The first equality implies $\delta^\vee[1] \circ \delta = 0$. The second equality implies that $\beta$ in [\[eq:symmetrized-pullback-construction\]](#eq:symmetrized-pullback-construction){reference-type="eqref" reference="eq:symmetrized-pullback-construction"} is unique, since any two ways of filling in $\beta$ are homotopic but then the homotopy is zero. Then the construction in §[2.4](#sec:symmetrized-pullback-construction){reference-type="ref" reference="sec:symmetrized-pullback-construction"} produces the symmetrized pullback. For instance, suppose that $\mathfrak{X}$ is *affine*, $f$ is a (stacky) vector bundle, and $\mathbb{E}_f \coloneqq \mathbb{L}_f$ is the relative obstruction theory. Then $A = \mathbb{L}_f[-1]$ is locally free of amplitude $[1, 2]$, and the desired vanishings occur for degree reasons. 
In fact, in this setting, even the lift $\delta\colon A \to f^*\mathbb{E}_\mathfrak{Y}$ of the natural map $A \to f^*\mathbb{L}_\mathfrak{Y}$ exists uniquely: the obstructions to existence and uniqueness lie in $\mathop{\mathrm{Hom}}(A, \mathop{\mathrm{cone}}(f^*\phi_\mathfrak{Y}))$ and $\mathop{\mathrm{Hom}}(A, \mathop{\mathrm{cone}}(f^*\phi_\mathfrak{Y})[-1])$ respectively, which also vanish for degree reasons. In [@KLT], this observation is used to construct symmetrized pullbacks using *almost-perfect obstruction theories* [@Kiem2020], an étale-local version of perfect obstruction theory. ## {#section-11} **Example 2**. Suppose that $f\colon \mathfrak{X}\to \mathfrak{Y}$ arises as the classical truncation of the top row in the following homotopy-Cartesian square of derived Artin stacks: $$\label{eq:symmetrized-pullback-shifted-cotangent-bundle-diagram} \begin{tikzcd} \mathfrak{X}^\mathrm{der}\ar{r}{f} \ar{d}{\pi} & T^*[-1]\mathfrak{M}^\mathrm{der}\ar{d}{\pi} \\ \overline{\mathfrak{X}}^\mathrm{der}\ar{r}{\overline{f}} & \mathfrak{M}^\mathrm{der} \end{tikzcd}$$ where $\overline{f}\colon \overline{\mathfrak{X}}^\mathrm{der}\to \mathfrak{M}^\mathrm{der}$ is a given smooth morphism of derived stacks, and $T^*[-1]\mathfrak{M}^\mathrm{der}\coloneqq \mathop{\mathrm{tot}}(t \otimes \mathbb{L}_{\mathfrak{M}^\mathrm{der}}[-1])$ is the $(-1)$-shifted cotangent bundle. 
Recall that if $\mathfrak{M}^\mathrm{der}$ is a derived Artin stack with classical truncation $\mathfrak{M}$ and its natural inclusion $i\colon \mathfrak{M}\to \mathfrak{M}^\mathrm{der}$, then: - the cotangent complex $\mathbb{L}_{\mathfrak{M}^\mathrm{der}}$ exists in a pre-triangulated dg category $L_{\mathsf{QCoh}}(\mathfrak{M}^\mathrm{der})$, [^5] satisfying the usual properties of cotangent complexes; - $h^k(\mathbb{L}_i) = 0$ for $k \ge -1$ [@Schuerg2015 Proposition 1.2] and therefore $i^*\mathbb{L}_{\mathfrak{M}^\mathrm{der}} \to \mathbb{L}_{\mathfrak{M}}$ is an obstruction theory; - if $f\colon \mathfrak{M}^\mathrm{der}\to \mathfrak{N}^\mathrm{der}$ is a morphism of locally finitely presented derived Artin stacks, [^6] then $\mathbb{L}_f$ is perfect. We assume that all this machinery works equivariantly for the $\mathbb{C}^\times$-action which scales the (shifted) cotangent direction. By definition, $\mathbb{L}_\pi = t \otimes \pi^*\mathbb{L}_{\mathfrak{M}^\mathrm{der}}^\vee[1]$, and by base change, $\mathbb{L}_f \cong \pi^* \mathbb{L}_{\overline{f}}$. So one can form the dashed diagonal maps in $$\label{eq:symmetrized-section-cosection} \begin{tikzcd} \mathbb{L}_f[-1] \ar{d}{\overline{\delta}} \ar[dashed]{dr}{\delta} \\ f^*\pi^*\mathbb{L}_{\mathfrak{M}^\mathrm{der}} \ar{r} & f^*\mathbb{L}_{T^*[-1]\mathfrak{M}^\mathrm{der}} \ar{r} \ar[dashed]{dr}[swap]{\delta^\vee[1]} & t \otimes f^*\pi^*\mathbb{L}_{\mathfrak{M}^\mathrm{der}}^\vee[1] \ar{r}{+1} \ar{d}{\overline{\delta}^\vee[1]} & {} \\ {} & {} & t \otimes \mathbb{L}_f^\vee[2] \end{tikzcd}$$ where the middle row is the exact triangle of cotangent complexes for $\pi$, and $\overline{\delta}$ is the connecting map in the exact triangle of cotangent complexes for $\overline{f}$. Evidently $\delta^\vee[1] \circ \delta = 0$, so this $\delta$ can be used in the construction of §[2.4](#sec:symmetrized-pullback-construction){reference-type="ref" reference="sec:symmetrized-pullback-construction"}. 
Cones are functorial in pre-triangulated dg categories and so $\beta$ is unique. Classical truncation of the entire diagram [\[eq:symmetrized-pullback-construction\]](#eq:symmetrized-pullback-construction){reference-type="eqref" reference="eq:symmetrized-pullback-construction"} then produces the desired symmetrized pullback. ## {#section-12} **Remark 1**. In fact, the uniqueness of $\beta$, and therefore the symmetry of $\mathop{\mathrm{cocone}}(\beta)$, is often irrelevant in applications where it is enough to construct an obstruction theory $\mathbb{E}_\mathfrak{X}\to \mathbb{L}_\mathfrak{X}$ such that: - the restriction to a stable locus $\mathfrak{X}^\mathrm{st}\subset \mathfrak{X}$ is a perfect obstruction theory; - the *K-theory class* of $\mathbb{E}_\mathfrak{X}$ is symmetric, i.e. $\mathbb{E}_\mathfrak{X}= -t \mathbb{E}_\mathfrak{X}^\vee \in K_\mathsf{T}(\mathfrak{X})$. The construction [\[eq:symmetrized-pullback-construction\]](#eq:symmetrized-pullback-construction){reference-type="eqref" reference="eq:symmetrized-pullback-construction"} satisfies both properties without assuming uniqueness of $\beta$. Namely, the cohomology long exact sequence for the top row of [\[eq:symmetrized-pullback-construction\]](#eq:symmetrized-pullback-construction){reference-type="eqref" reference="eq:symmetrized-pullback-construction"} gives $$0 = h^{-2}(A) \to t \otimes h^1(\mathbb{F})^\vee \xrightarrow{\sim} h^{-2}(\mathbb{E}_\mathfrak{X}) \to h^{-1}(A) = 0,$$ but $h^1(\mathbb{F}) = h^1(\mathbb{L}_\mathfrak{X})$ since $\mathbb{F}$ is an obstruction theory for $\mathfrak{X}$, so indeed $h^{-2}(\mathbb{E}_\mathfrak{X})$ vanishes when restricted to $\mathfrak{X}^\mathrm{st}$. It is also clear that $$\label{eq:symmetrized-pullback-k-class} \mathbb{E}_\mathfrak{X}= f^*\mathbb{E}_\mathfrak{Y}- A + t A^\vee \in K_\mathsf{T}(\mathfrak{X})$$ is symmetric. 
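As a toy illustration of this K-theoretic symmetry, one can model $\mathsf{T}$-equivariant K-theory classes by Laurent polynomials in $t$ (virtual characters), with $(-)^\vee$ acting by $t \mapsto t^{-1}$. The following SymPy sketch (ours, a consistency check rather than part of the argument) verifies that for any class $A$ and any symmetric $\mathbb{E}_\mathfrak{Y}$, the combination [\[eq:symmetrized-pullback-k-class\]](#eq:symmetrized-pullback-k-class){reference-type="eqref" reference="eq:symmetrized-pullback-k-class"} again satisfies $\mathbb{E}_\mathfrak{X} = -t\,\mathbb{E}_\mathfrak{X}^\vee$.

```python
import sympy as sp

t = sp.symbols('t')

def dual(e):
    """K-theoretic dual in the toy model: invert the T-weight."""
    return sp.together(e.subs(t, 1/t))

# Arbitrary Laurent "characters" standing in for the classes involved.
a = 2*t**2 - 3 + 5/t            # plays the role of A = E_f[-1]
c = t**3 + 7*t - 1/t**2
E_Y = sp.expand(c - t*dual(c))  # any class of this shape satisfies E_Y = -t*E_Y^dual

# Symmetrized pullback on K-theory classes: E_X = f*E_Y - A + t*A^dual.
E_X = sp.expand(E_Y - a + t*dual(a))

# Both classes are symmetric: E = -t * E^dual, i.e. E + t*E^dual = 0.
assert sp.simplify(E_Y + t*dual(E_Y)) == 0
assert sp.simplify(E_X + t*dual(E_X)) == 0
```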
Abusing terminology, hopefully without too much confusion, we continue to refer to the constructed $\mathbb{E}_\mathfrak{X}\to \mathbb{L}_\mathfrak{X}$ as a symmetrized pullback regardless of whether $\beta$ is unique. ## {#section-13} **Lemma 1**. *Let $\mathbb{E}_\mathfrak{X}$ be a symmetrized pullback of $\mathbb{E}_\mathfrak{Y}$ along a smooth morphism $f\colon \mathfrak{X}\to \mathfrak{Y}$. Then a surjection $h^{-1}(\mathbb{E}_\mathfrak{Y}) \twoheadrightarrow \mathcal{O}_\mathfrak{Y}$ induces a surjection $h^{-1}(\mathbb{E}_\mathfrak{X}) \twoheadrightarrow \mathcal{O}_\mathfrak{X}$.* Such surjections are nowhere-vanishing cosections of the obstruction sheaves, and will be used later to show that certain virtual cycles vanish via Kiem--Li cosection localization [@Kiem2013a]. *Proof.* Smoothness of $f$ means $A \coloneqq \mathbb{E}_f[-1]$ has amplitude in $[1,2]$ only, so the cohomology long exact sequence for the middle row of [\[eq:symmetric-compatibility-diagram\]](#eq:symmetric-compatibility-diagram){reference-type="eqref" reference="eq:symmetric-compatibility-diagram"} or [\[eq:symmetrized-pullback-construction\]](#eq:symmetrized-pullback-construction){reference-type="eqref" reference="eq:symmetrized-pullback-construction"} gives $$0 = h^{-1}(A) \to h^{-1}(f^*\mathbb{E}_\mathfrak{Y}) \xrightarrow{\sim} h^{-1}(\mathbb{F}) \to h^0(A) = 0.$$ Similarly, the right-most column of [\[eq:symmetric-compatibility-diagram\]](#eq:symmetric-compatibility-diagram){reference-type="eqref" reference="eq:symmetric-compatibility-diagram"} or [\[eq:symmetrized-pullback-construction\]](#eq:symmetrized-pullback-construction){reference-type="eqref" reference="eq:symmetrized-pullback-construction"} gives $$h^{-1}(\mathbb{E}_\mathfrak{X}) \twoheadrightarrow h^{-1}(\mathbb{F}) \to h^{-1}(t \otimes A^\vee[1]) \cong t \otimes h^0(A)^\vee = 0.$$ The desired surjection is then $h^{-1}(\mathbb{E}_\mathfrak{X}) \twoheadrightarrow h^{-1}(\mathbb{F}) \cong h^{-1}(f^*\mathbb{E}_\mathfrak{Y}) 
\twoheadrightarrow f^*\mathcal{O}_\mathfrak{Y}= \mathcal{O}_\mathfrak{X}$. ◻ # Quiver-framed stacks {#sec:quiver-framed-stacks} ## {#section-14} **Definition 3**. Let $\mathfrak{N}= \bigsqcup_\alpha \mathfrak{N}_\alpha$ be the Vafa--Witten moduli stack [\[eq:VW-stack\]](#eq:VW-stack){reference-type="eqref" reference="eq:VW-stack"} from the introduction. For $k \in \mathbb{Z}_{> 0}$, let ${}_k\mathfrak{N}\subset \mathfrak{N}$ be the open locus of $k$-regular sheaves [@Huybrechts2010 §1.7], i.e. those $E$ such that $H^i(E(k-i)) = 0$ for all $i > 0$. The functor $$F_k\colon E \mapsto H^0(E \otimes \mathcal{O}_X(k)), \qquad \dim F_k(E) = \chi(E \otimes \mathcal{O}_X(k)),$$ is exact on $k$-regular sheaves and induces an injection $\mathop{\mathrm{Hom}}(E, E) \to \mathop{\mathrm{Hom}}(F_k(E), F_k(E))$. Set $\lambda_k(\alpha) \coloneqq \dim F_k(\alpha)$ where $\alpha$ is the class of $E$. Let $Q$ be a quiver with no cycles, with edges $Q_1$ and vertices $Q_0 = Q_0^o \sqcup Q_0^f$ split into ordinary vertices $\boldsymbol{\bullet} \in Q_0^o$ and *framing* vertices $\blacksquare \in Q_0^f$ such that ordinary vertices have no outgoing arrows. For $\kappa = (\kappa(v))_{v \in Q_0^o} \in \mathbb{Z}^{|Q_0^o|}$ and a dimension vector $\bm{d} = (d_v)_{v \in Q_0^f}$, let $\mathfrak{N}^{Q(\kappa)}_{\alpha,\bm{d}}$ be the moduli stack of triples $(E, \bm{V}, \bm{\rho})$ where: - $[E] \in {}_\kappa\mathfrak{N}_\alpha \coloneqq \bigcap_{v \in Q_0^o} {}_{\kappa(v)}\mathfrak{N}_\alpha$; - $\bm{V} = (V_v)_{v \in Q_0^f}$ where $V_v$ is a $d_v$-dimensional vector space; set $V_v \coloneqq F_{\kappa(v)}(E)$ for $v \in Q_0^o$; - $\bm{\rho }= (\rho_e)_{e \in Q_1}$ are linear maps between the $V_v$. A morphism between two triples $(E, \bm{V}, \bm{\rho})$ and $(E', \bm{V}', \bm{\rho}')$ consists of a morphism $E \to E'$, inducing morphisms $V_v \to V'_v$ for all $v \in Q_0^o$, along with morphisms $V_v \to V_v'$ for $v \in Q_0^f$ intertwining $\bm{\rho}$ and $\bm{\rho}'$. ## {#section-15} **Remark 2**. 
Every object in a $\mathbb{C}$-linear category has an action by the group $\mathbb{C}^\times$ of scaling automorphisms. Our convention is that a moduli stack $\mathfrak{X}= \bigsqcup_\beta \mathfrak{X}_\beta$ of such objects has already been *$\mathbb{C}^\times$-rigidified* [@Abramovich2003 § 5.1], meaning that this $\mathbb{C}^\times$ has been quotiented away from all stabilizer groups. Let $\mathfrak{X}^{\mathsf{unrig}} = \bigsqcup_\beta \mathfrak{X}_\beta^\mathsf{unrig}$ be the unrigidified moduli stack, with $\mathfrak{X}_0 = \{0\}$ containing only the zero object, so that $$\mathsf{rig}_\beta\colon \mathfrak{X}_\beta^{\mathsf{unrig}} \to \mathfrak{X}_\beta$$ is a principal $[\mathrm{pt}/\mathbb{C}^\times]$-bundle for all $\beta \neq 0$. When universal families exist on $\mathfrak{X}^{\mathsf{unrig}}$, they have $\mathbb{C}^\times$-weight one and so may not descend to $\mathfrak{X}$; see [@Huybrechts2010 §4.6]. On $\mathfrak{N}^{Q(\kappa),\mathsf{unrig}}_{\alpha,\bm{d}}$, for each vertex $v$, let $\mathcal{V}_v$ be the universal bundle of $V_v$. Indeed, by $\kappa(v)$-regularity, $\mathcal{V}_v$ for $v \in Q_0^o$ is a vector bundle of rank $\lambda_{\kappa(v)}(\alpha)$. Then $$\label{eq:quiver-stack-projection} \Pi_\alpha\colon \mathfrak{N}^{Q(\kappa),\mathsf{unrig}}_{\alpha,\bm{d}} = \bigg[\bigoplus_{[v \to v'] \in Q_1} \mathcal{V}_v^\vee \otimes \mathcal{V}_{v'} / \prod_{v \in Q_0^f} \mathop{\mathrm{GL}}(d_v)\bigg] \to {}_\kappa\mathfrak{N}_\alpha^{\mathsf{unrig}}$$ is a stacky vector bundle. In particular, $\Pi_\alpha$ is smooth. Clearly $\Pi_\alpha$ is equivariant for the diagonal action of $\mathbb{C}^\times$ on the $\mathop{\mathrm{GL}}(d_v)$ and the stabilizer groups of ${}_\kappa\mathfrak{N}_\alpha^{\mathsf{unrig}}$, so it descends to a projection $\mathfrak{N}^{Q(\kappa)}_{\alpha,\bm{d}} \to {}_\kappa\mathfrak{N}_\alpha$ which we continue to denote $\Pi_\alpha$. 
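For orientation, in the simplest case of one framing vertex $\blacksquare$ of dimension $d$ and a single arrow into one ordinary vertex of regularity $k$ (the shape of the pairs quiver used later, with $d = 1$), [\[eq:quiver-stack-projection\]](#eq:quiver-stack-projection){reference-type="eqref" reference="eq:quiver-stack-projection"} reads $$\Pi_\alpha\colon \big[\mathcal{V}_\blacksquare^\vee \otimes \mathcal{V}_{\boldsymbol{\bullet}} / \mathop{\mathrm{GL}}(d)\big] \to {}_k\mathfrak{N}_\alpha^{\mathsf{unrig}},$$ with fiber $[\mathop{\mathrm{Hom}}(\mathbb{C}^d, F_k(E))/\mathop{\mathrm{GL}}(d)]$ over a point $[E]$.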
It has the same fibers as [\[eq:quiver-stack-projection\]](#eq:quiver-stack-projection){reference-type="eqref" reference="eq:quiver-stack-projection"}: while the individual $\mathcal{V}_v$ do not descend, $\mathbb{C}^\times$-weight-zero combinations like $\mathcal{V}_v^\vee \otimes \mathcal{V}_{v'}$ do descend. ## {#sec:quiver-stacks-obstruction-theory} The symmetric obstruction theory on ${}_\kappa \mathfrak{N}_\alpha$ admits a symmetrized pullback along $\Pi_\alpha$ following Example [Example 2](#ex:symmetrized-pullback-shifted-cotangent-bundle){reference-type="ref" reference="ex:symmetrized-pullback-shifted-cotangent-bundle"}. First, replace all moduli stacks with their derived enhancements, and let $$\overline{\mathfrak{N}}_\alpha \coloneqq \{\det \overline{E} = L\} \subset \overline{\mathfrak{M}}_\alpha$$ be the derived moduli stack of coherent sheaves $\overline{E}$ on $S$ with fixed determinant $L$. The construction of Definition [Definition 3](#def:quiver-framed-stack){reference-type="ref" reference="def:quiver-framed-stack"} applies equally well to $\overline{\mathfrak{N}}_\alpha$. Since $H^0(E \otimes \mathcal{O}_X(k)) = H^0(\overline{E} \otimes \mathcal{O}_S(k))$, there is a homotopy-Cartesian square $$\begin{tikzcd} \mathfrak{N}_{\alpha,\bm{d}}^{Q(\kappa)} \ar{r}{\Pi_\alpha} \ar{d} & {}_\kappa \mathfrak{N}_\alpha \ar{d} \\ \overline{\mathfrak{N}}_{\alpha,\bm{d}}^{Q(\kappa)} \ar{r}{\Pi_\alpha} & {}_\kappa \overline{\mathfrak{N}}_\alpha \end{tikzcd}$$ of the form [\[eq:symmetrized-pullback-shifted-cotangent-bundle-diagram\]](#eq:symmetrized-pullback-shifted-cotangent-bundle-diagram){reference-type="eqref" reference="eq:symmetrized-pullback-shifted-cotangent-bundle-diagram"}. 
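The equality $H^0(E \otimes \mathcal{O}_X(k)) = H^0(\overline{E} \otimes \mathcal{O}_S(k))$ used above is the spectral correspondence: writing $p\colon X = \mathrm{Tot}(K_S) \to S$ for the projection and $\overline{E} = p_*E$, and noting $\mathcal{O}_X(k) = p^*\mathcal{O}_S(k)$, the projection formula for the affine morphism $p$ gives $$H^0(E \otimes \mathcal{O}_X(k)) = H^0(p_*E \otimes \mathcal{O}_S(k)) = H^0(\overline{E} \otimes \mathcal{O}_S(k)).$$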
Since the symmetric obstruction theory [\[eq:VW-reduced-obstruction-theory\]](#eq:VW-reduced-obstruction-theory){reference-type="eqref" reference="eq:VW-reduced-obstruction-theory"} on $\mathfrak{N}$ is the classical truncation of $\mathbb{L}_{T^*[-1]\overline{\mathfrak{N}}}$ [@Tanaka2020 §1.7] [@Schuerg2015 Proposition 5.2], we are done. ## {#sec:quiver-stacks-obstruction-theory-alternate} Alternatively, the obstruction theory on $\mathfrak{N}^{Q(\kappa)}_{\alpha,\bm{d}}$ may be constructed without resorting to derived algebraic geometry. The main question is how to explicitly construct a lift $$\begin{tikzcd} {} & \Pi_\alpha^* \mathbb{E}_{{}_\kappa \overline{\mathfrak{N}}_\alpha} \ar{d}{\Pi_\alpha^*\phi_\alpha} \\ \mathbb{L}_{\Pi_\alpha}[-1] \ar{r} \ar[dashed]{ur}{\overline{\delta}} & \Pi_\alpha^* \mathbb{L}_{{}_\kappa \overline{\mathfrak{N}}_\alpha} \end{tikzcd}$$ of the connecting map in the exact triangle of relative cotangent complexes. Here $\phi_\alpha$ is the natural obstruction theory [@Tanaka2020 §5.6] given by the traceless part of the Atiyah class of the universal sheaf $\overline{\mathcal{E}}$, or equivalently $\overline{\mathcal{E}}(k)$, on ${}_\kappa \overline{\mathfrak{N}}_\alpha^{\mathsf{unrig}}$. This $\overline{\delta}$ may then be used in [\[eq:symmetrized-section-cosection\]](#eq:symmetrized-section-cosection){reference-type="eqref" reference="eq:symmetrized-section-cosection"}, with the middle row replaced by the exact triangle [\[eq:VW-reduced-obstruction-theory\]](#eq:VW-reduced-obstruction-theory){reference-type="eqref" reference="eq:VW-reduced-obstruction-theory"} defining the obstruction theory on $\mathfrak{N}$, to obtain the desired map $\delta\colon \mathbb{L}_{\Pi_\alpha}[-1] \to \Pi_\alpha^* \mathbb{E}_{{}_\kappa \mathfrak{N}_\alpha}$ such that $\delta^\vee[1] \circ \delta = 0$. We construct $\overline{\delta}$. 
The universal bundles $(\mathcal{V}_v)_{v \in Q_0^o}$ on $\mathfrak{N}^{Q(\kappa),\mathsf{unrig}}_{\alpha,\bm{d}}$ are pulled back from similar ones on ${}_\kappa \overline{\mathfrak{N}}_\alpha^{\mathsf{unrig}}$, which induce a classifying morphism $$c_{\mathcal{V}}\colon {}_\kappa\overline{\mathfrak{N}}_\alpha^{\mathsf{unrig}} \to [\mathrm{pt}/\mathop{\mathrm{GL}}(\lambda_\kappa(\alpha))], \qquad \mathop{\mathrm{GL}}(\lambda_\kappa(\alpha)) \coloneqq \prod_{v \in Q_0^o} \mathop{\mathrm{GL}}(\lambda_{\kappa(v)}(\alpha)).$$ The construction of Definition [Definition 3](#def:quiver-framed-stack){reference-type="ref" reference="def:quiver-framed-stack"} may be done *universally* on $[\mathrm{pt}/\mathop{\mathrm{GL}}(\lambda_\kappa(\alpha))]$, and becomes $\Pi_\alpha\colon \overline{\mathfrak{N}}^{Q(\kappa),\mathsf{unrig}}_{\alpha,\bm{d}} \to {}_\kappa\overline{\mathfrak{N}}_\alpha^{\mathsf{unrig}}$ upon pullback along $c_\mathcal{V}$. The resulting Cartesian square, after rigidification, induces the dashed diagonal arrow in the commutative diagram $$\begin{tikzcd} & \Pi_\alpha^* c_\mathcal{V}^* \mathbb{L}_{[\mathrm{pt}/\mathop{\mathrm{PGL}}(\lambda_\kappa(\alpha))]} \ar{d} \ar[dotted]{r} & \Pi_\alpha^* \mathbb{E}_{{}_\kappa\overline{\mathfrak{N}}_\alpha} \ar{dl}{\Pi_\alpha^*\phi_\alpha} \\ \mathbb{L}_{\Pi_\alpha}[-1] \ar[dashed]{ur} \ar{r} & \Pi_\alpha^* \mathbb{L}_{{}_\kappa\overline{\mathfrak{N}}_\alpha}. \end{tikzcd}$$ The middle vertical arrow is the traceless part of the cotangent complex map for $c_\mathcal{V}$, which can be identified with the traceless part of the sum of Atiyah classes of $\mathcal{V}_v$ for $v \in Q_0^o$ (cf. [@Schuerg2015 Remark A.1]). By construction there are maps $\mathcal{V}_v \to \overline{\mathcal{E}}(k)$, so functoriality of the (traceless part of the) Atiyah class gives the dotted horizontal arrow. Thus we get $\overline{\delta}$. 
It is unclear from the classical construction whether the resulting $\beta$ in [\[eq:symmetrized-pullback-construction\]](#eq:symmetrized-pullback-construction){reference-type="eqref" reference="eq:symmetrized-pullback-construction"} is unique, but for our purposes this is irrelevant by Remark [Remark 1](#rem:symmetry-is-unnecessary){reference-type="ref" reference="rem:symmetry-is-unnecessary"}. ## {#section-16} **Definition 4**. For a class $\alpha = (r, c_1(L), c_2)$, let $$P_\alpha(n) \coloneqq \chi(\alpha(n)) = r(\alpha) n^{\dim \alpha} + O(n^{\dim \alpha - 1})$$ be the Hilbert polynomial of the class $\alpha$. Then $$\tau(\alpha) \coloneqq P_\alpha/r(\alpha)$$ is a monic polynomial. Following [@Joyce2021 Definition 7.7], put a total order $\le$ on monic polynomials: $f \le g$ if either $\deg f > \deg g$, or $\deg f = \deg g$ and $f(n) \le g(n)$ for all $n \gg 0$. Then $\tau$ is a stability condition on $\mathfrak{N}$ in Joyce's sense [@Joyce2021 Definition 3.1], and we call it *Gieseker stability* for obvious reasons. We say the polarization $\mathcal{O}_S(1)$ is *generic* for $\alpha$ if $$\label{eq:generic-polarization} \tau(\beta) = \tau(\alpha) \implies \beta = \text{constant} \cdot \alpha.$$ The quantity $\widetilde{r}(\alpha) \coloneqq (\dim \alpha)! \cdot r(\alpha)$ is a positive integer when $\alpha \neq 0$, and if $\tau(\alpha) = \tau(\beta)$ then $\widetilde{r}(\alpha + \beta) = \widetilde{r}(\alpha) + \widetilde{r}(\beta)$. Following [@Joyce2021 (5.13)], given $\bm{\mu }\in \mathbb{R}^{|Q_0^f|}$ define $$\label{eq:quiver-stack-stability-condition} \tau_{\bm{\mu}}(\alpha, \bm{d}) \coloneqq \begin{cases} \left(\tau(\alpha), \bm{\mu }\cdot \bm{d}/\widetilde{r}(\alpha)\right) & \alpha \neq 0 \\ \left(\infty, \bm{\mu }\cdot \bm{d} / |\bm{d}|\right) & \alpha = 0, \; \bm{\mu}\cdot\bm{d} > 0 \\ \left(-\infty, \bm{\mu }\cdot \bm{d} / |\bm{d}|\right) & \alpha = 0, \; \bm{\mu}\cdot \bm{d} \le 0. 
\end{cases}$$ Use the total order given by: $(a, b) \le (a', b')$ if either $a < a'$, or $a=a'$ and $b \le b'$. Then $\tau_{\bm{\mu}}$ is a stability condition on $\mathfrak{N}^{Q(\kappa)}$, using the additivity of the functions $\widetilde{r}$ and $\bm{\mu }\cdot (-)$. Clearly $\tau_{\bm{\mu}}$-semistability of $(E, \bm{V}, \bm{\rho})$ implies $\tau$-semistability of $E$. ## {#section-17} **Lemma 2**. *If $\mathfrak{N}_{\alpha,\bm{d}}^{Q(\kappa)}$ has no strictly $\tau_{\bm{\mu}}$-semistables, then the fixed locus $(\mathfrak{N}_{\alpha,\bm{d}}^{Q(\kappa),\mathrm{sst}})^\mathsf{T}$ is proper.* *Proof.* View $X$ as an open locus in the projective $3$-fold $\widehat{X} \coloneqq \mathbb{P}(K_S \oplus \mathcal{O}_S)$, and let $\widehat{\mathfrak{N}}_{\alpha,\bm{d}}^{Q(\kappa)}$ and $\widehat{\mathfrak{M}}_{\alpha,\bm{d}}^{Q(\kappa)}$ be the analogues of $\mathfrak{N}_{\alpha,\bm{d}}^{Q(\kappa)}$ and $\mathfrak{M}_{\alpha,\bm{d}}^{Q(\kappa)}$ respectively on $\widehat{X}$ instead of $X$. It suffices to prove that $\widehat{\mathfrak{M}}_{\alpha,\bm{d}}^{Q(\kappa),\mathrm{sst}}$ is proper, because of the chain of closed immersions $$\left(\mathfrak{N}_{\alpha,\bm{d}}^{Q(\kappa),\mathrm{sst}}\right)^\mathsf{T}\subset \left(\widehat{\mathfrak{N}}_{\alpha,\bm{d}}^{Q(\kappa),\mathrm{sst}}\right)^\mathsf{T}\subset \widehat{\mathfrak{N}}_{\alpha,\bm{d}}^{Q(\kappa),\mathrm{sst}} \subset \widehat{\mathfrak{M}}_{\alpha,\bm{d}}^{Q(\kappa),\mathrm{sst}}.$$ To get properness of $\widehat{\mathfrak{M}}_{\alpha,\bm{d}}^{Q(\kappa),\mathrm{sst}}$, we invoke the results of [@Joyce2021 §7.4] in order to avoid lengthy technicalities. [^7] The machinery there requires either $\dim \alpha = 0, 1, 3$ (which we do not satisfy) or the polarization for Gieseker stability to be a rational Kähler class (which we do satisfy). ◻ # Refined pairs invariants {#sec:joyce-song} ## {#section-18} We begin with some generalities on equivariant K-theory. 
Let $M$ be a finite-type scheme with an action by a torus $\mathsf{T}$, and suppose $M$ has the $\mathsf{T}$-equivariant resolution property [@Totaro2004 §2]; we say that *equivariant localization holds* for such $M$. [^8] Specifically, for us, $M$ will be a quasi-projective scheme. Let $K_\mathsf{T}(M)$, resp. $K_\mathsf{T}^\circ(M)$, be the Grothendieck group of $\mathsf{T}$-equivariant coherent sheaves, resp. vector bundles, on $M$. Both are modules for $\mathbbm{k}_\mathsf{T}\coloneqq K_\mathsf{T}(\mathrm{pt})$. Set $$\label{eq:localized-base-ring} \mathbbm{k}_{\mathsf{T},\mathrm{loc}} \coloneqq \mathbbm{k}_\mathsf{T}[(1 - w)^{-1} : w \neq 1],$$ where $w$ ranges over non-trivial weights of $\mathsf{T}$. Let $\iota\colon M^\mathsf{T}\hookrightarrow M$ be the inclusion of the $\mathsf{T}$-fixed locus. A perfect obstruction theory $\mathbb{E}_M$ on $M$ gives: - virtual structure sheaves $\mathcal{O}_M^\mathrm{vir}\in K_\mathsf{T}(M)$ and $\mathcal{O}_{M^\mathsf{T}}^\mathrm{vir}\in K_\mathsf{T}(M^\mathsf{T})$ [@Ciocan-Fontanine2009; @Fantechi2010], the latter because the fixed part $\mathbb{E}_{M^\mathsf{T}} \coloneqq (\iota^*\mathbb{E}_M)^\mathsf{T}$ is a perfect obstruction theory on $M^\mathsf{T}$ [@Graber1999]; - on $K_\mathsf{T}(M)_\mathrm{loc}\coloneqq K_\mathsf{T}(M) \otimes_{\mathbbm{k}_\mathsf{T}} \mathbbm{k}_{\mathsf{T},\mathrm{loc}}$, the equivariant localization formula [@Qu2018] $$\label{eq:equivariant-localization} \mathcal{O}_M^\mathrm{vir}= \iota_* \frac{\mathcal{O}_{M^\mathsf{T}}^\mathrm{vir}}{\mathsf{e}(N^\mathrm{vir}_\iota)} \in K_\mathsf{T}(M)_\mathrm{loc}$$ where $\mathsf{e}$ is the K-theoretic Euler class and $(N^\mathrm{vir}_\iota)^\vee$ is the non-fixed part of $\iota^*\mathbb{E}_M$. 
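As a toy sanity check on [\[eq:equivariant-localization\]](#eq:equivariant-localization){reference-type="eqref" reference="eq:equivariant-localization"} (illustrative only; not used in the sequel): for $M = \mathbb{P}^1$ with the standard $\mathbb{C}^\times$-action and $\mathbb{E}_M = \mathbb{L}_M$, one has $\mathcal{O}^\mathrm{vir}= \mathcal{O}$, the fixed points $0$ and $\infty$ have tangent weights $w$ and $w^{-1}$, and $\mathsf{e}$ of a line of weight $w$ is $1 - w^{-1}$, so the formula asserts $\chi(\mathbb{P}^1, \mathcal{O}) = \frac{1}{1-w^{-1}} + \frac{1}{1-w} = 1$. A minimal numerical verification at rational test values of $w$:

```python
from fractions import Fraction

def inv_euler(weights):
    # 1/e(N) for N a sum of weight lines, with e(line of weight w) = 1 - w^{-1};
    # evaluated at rational test values of the weights
    out = Fraction(1)
    for w in weights:
        out /= 1 - 1 / Fraction(w)
    return out

# P^1: fixed points 0 and infinity have tangent weights w and w^{-1}
w = Fraction(7, 3)
assert inv_euler([w]) + inv_euler([1 / w]) == 1  # chi(P^1, O) = 1
```

The same identity holds for any $w \neq 1$, matching the fact that the localized sum is independent of the equivariant parameter.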
Using the resolution property, $N^\mathrm{vir}_\iota$ can be presented as a two-term complex of vector bundles $[E_0 \to E_1]$ all of whose $\mathsf{T}$-weights are non-trivial, and, for such complexes, the K-theoretic Euler class $\mathsf{e}$ is $$\label{eq:k-theoretic-euler-class} \mathsf{e}(E) \coloneqq \mathsf{e}(E_0)/\mathsf{e}(E_1) \in K_\mathsf{T}(M)_\mathrm{loc}, \qquad \mathsf{e}(E_k) \coloneqq \sum_{i \ge 0} (-1)^i \wedge^i\! E_k^\vee.$$ When there is no ambiguity, we sometimes omit the subscripts on $\mathcal{O}^\mathrm{vir}$ and $N^\mathrm{vir}$. ## {#sec:symmetrized-virtual-localization} Suppose that, in addition, the K-theory class of $\mathbb{E}_M$ is symmetric. Then we also get: - a canonical square root [^9] of the restriction of $K^\mathrm{vir}_M \coloneqq \det \mathbb{E}_M$ to $M^\mathsf{T}$, given by $$K^{\mathrm{vir},\frac{1}{2}}_M \coloneqq t^{-\frac{1}{2} \mathop{\mathrm{rank}}\mathbb{E}_{>0}} \det \mathbb{E}_{>0}$$ where $\mathbb{E}_{>0}$ is the summand of $\iota^*\mathbb{E}_M$ with positive $\mathsf{T}$-weight [@Thomas2020 Proposition 2.6] [^10]; - a *symmetrized* virtual cycle (by twisting [\[eq:equivariant-localization\]](#eq:equivariant-localization){reference-type="eqref" reference="eq:equivariant-localization"} by $K^{\mathrm{vir},\frac{1}{2}}_M$) $$\label{eq:symmetrized-virtual-cycle} \widehat{\mathcal{O}}_M^\mathrm{vir}\coloneqq \iota_* \left(\frac{\mathcal{O}_{M^\mathsf{T}}^\mathrm{vir}}{\mathsf{e}(N^\mathrm{vir}_\iota)} \otimes K^{\mathrm{vir},\frac{1}{2}}_M\right) \in K_\mathsf{T}(M)_\mathrm{loc}.$$ In general, neither $K^{\mathrm{vir}}_M$ nor $\det N^\mathrm{vir}_\iota$ admits a square root, but for us it will often be the case that, for some $F$, $$(N^\mathrm{vir}_\iota)^\vee = F - t F^\vee + \cdots \in K^\circ_\mathsf{T}(M^\mathsf{T}).$$ Then the contribution of $E \coloneqq F - t F^\vee$ to $K_M^{\mathrm{vir},\frac{1}{2}}$ may be split off: if $E_{>0} = F_{>0} - t (F_{\le 0})^\vee$ denotes the part of $E$ with positive 
$\mathsf{T}$-weight, then $$\label{eq:half-canonical-for-balanced-classes} t^{-\frac{1}{2} \mathop{\mathrm{rank}}E_{>0}} \det E_{>0} = t^{-\frac{1}{2} \mathop{\mathrm{rank}}F} \det F.$$ Accordingly, define the *symmetrized* K-theoretic Euler class $\widehat{\mathsf{e}}(E) \coloneqq t^{-\frac{1}{2} \mathop{\mathrm{rank}}F} \det F \otimes \mathsf{e}(E)$. Sometimes we write $\widehat{\mathsf{e}}(F^\vee)/\widehat{\mathsf{e}}(t^{-1} F) \coloneqq \widehat{\mathsf{e}}(E^\vee)$ --- a mild but suggestive abuse of notation. ## {#section-19} A useful result of Thomas is that the K-theoretic virtual cycle $\mathcal{O}^\mathrm{vir}_M$ depends only on the K-theory class of the perfect obstruction theory $\mathbb{E}_M$ [@Thomas2022]. Then clearly the same is true for $\widehat{\mathcal{O}}^\mathrm{vir}_M$. We will use this fact to simplify some calculations in the remainder of this paper, in situations where $M$ has two perfect obstruction theories constructed in different ways. Rather than checking that they are isomorphic, which often involves some annoying diagram-chasing, it suffices to check that they are equal in K-theory and therefore induce the same (symmetrized) K-theoretic virtual cycle. ## {#section-20} **Definition 5**. Fix $k \gg 0$ and consider the quiver $$\label{eq:pairs-quiver} Q(k) \coloneqq \begin{tikzpicture} \coordinate[vertex, label=right:$F_k(E)$] (E); \coordinate[framing, left=of E, label=left:$V$] (V); \draw[->] (V) -- node[label=above:$\rho$]{} (E); \end{tikzpicture}$$ for class $\alpha$ and framing dimension $d = 1$. On $\mathfrak{N}^{Q(k)}_{\alpha,1}$, take the symmetric obstruction theory obtained by symmetrized pullback along $\Pi_\alpha$, and the stability condition $\tau_1$. There are no strictly semistables, so the semistable locus is a quasi-projective scheme. 
The *refined pairs invariant* $$\label{eq:pairs-invariant} \widetilde{\mathsf{VW}}_\alpha(k, t) \coloneqq \chi\left(\mathfrak{N}^{Q(k),\mathrm{sst}}_{\alpha,1}, \widehat{\mathcal{O}}^\mathrm{vir}\right) \in \mathbbm{k}_{\mathsf{T},\mathrm{loc}}$$ is well-defined by $\mathsf{T}$-equivariant localization (Lemma [Lemma 2](#lem:quiver-stack-fixed-locus-is-proper){reference-type="ref" reference="lem:quiver-stack-fixed-locus-is-proper"}). ## {#section-21} **Remark 3**. The refined pairs invariant in [@Thomas2020 §5] is defined like in [\[eq:pairs-invariant\]](#eq:pairs-invariant){reference-type="eqref" reference="eq:pairs-invariant"} but using a moduli scheme of stable Joyce--Song pairs and its Joyce--Song perfect obstruction theory. These can be matched with Definition [Definition 5](#def:refined-pairs-invariant){reference-type="ref" reference="def:refined-pairs-invariant"}. First, recall that a Joyce--Song pair $(E, s)$ is a point $[E] \in \mathfrak{N}$ along with a non-zero section $s\colon \mathcal{O}_X(-k) \to E$, and it is stable if and only if: - $E$ is $\tau$-semistable; - if $s$ factors through $0 \neq E' \subsetneq E$, then $\tau(E') < \tau(E)$. Clearly the data $(E, s)$ is equivalent to the data $(E, V, \rho)$ where $\dim V = 1$, and one can check [@Joyce2021 Example 5.6] that $\tau_1$-stability is equivalent to Joyce--Song stability. Hence $\mathfrak{N}^{Q(k),\mathrm{sst}}_{\alpha,1}$ is indeed a moduli scheme of stable Joyce--Song pairs. 
Second, by [@Tanaka2017 (6.2)], the standard Joyce--Song-style symmetric perfect obstruction theory on $\mathfrak{N}^{Q(k)}_{\alpha,1}$ is $R\mathop{\mathrm{Hom}}_X(I^\bullet, I^\bullet)_\perp$ where $I^\bullet \coloneqq [\mathcal{O}(-k) \xrightarrow{s} E] \in D^b\mathsf{Coh}_\mathsf{T}(X)$ and $$R\mathop{\mathrm{Hom}}_X(I^\bullet, I^\bullet) \cong R\mathop{\mathrm{Hom}}_X(I^\bullet, I^\bullet)_\perp \oplus H^*(\mathcal{O}_X) \oplus H^{\ge 1}(\mathcal{O}_S) \oplus H^{\le 1}(K_S)[-1].$$ Using the obvious exact triangles and [\[eq:VW-reduced-obstruction-theory\]](#eq:VW-reduced-obstruction-theory){reference-type="eqref" reference="eq:VW-reduced-obstruction-theory"}, in K-theory $$\begin{aligned} {3} R\mathop{\mathrm{Hom}}_X(I^\bullet, I^\bullet)_\perp &= R\mathop{\mathrm{Hom}}_X(E, E) && - R\mathop{\mathrm{Hom}}_X(\mathcal{O}_X(-k), E) - H^{\ge 1}(\mathcal{O}_S) \nonumber \\ {} & {} && - R\mathop{\mathrm{Hom}}_X(E, \mathcal{O}_X(-k)) + H^{\le 1}(K_S) \nonumber \\ &= R\mathop{\mathrm{Hom}}_X(E, E)_\perp && - \left(R\mathop{\mathrm{Hom}}_X(\mathcal{O}_X(-k), E) - H^0(\mathcal{O}_S)\right) \label{eq:pairs-obstruction-theory} \\ {} & {} && + \left(R\mathop{\mathrm{Hom}}_X(\mathcal{O}_X(-k), E) - H^0(\mathcal{O}_S)\right)^\vee \otimes t^{-1}. \nonumber \end{aligned}$$ Comparing with [\[eq:symmetrized-pullback-k-class\]](#eq:symmetrized-pullback-k-class){reference-type="eqref" reference="eq:symmetrized-pullback-k-class"} and using that $R^{>0}\mathop{\mathrm{Hom}}_X(\mathcal{O}_X(-k), E) = 0$, this is evidently the same K-theory class as the (dual of a shift of the) obstruction theory obtained from symmetrized pullback, and therefore induces the same $\widehat{\mathcal{O}}^\mathrm{vir}$. ## {#section-22} **Proposition 1** ([@Thomas2020 Proposition 5.5]). 
*If $\alpha$ has no strictly semistables, then [\[eq:semistable-VW-invariant\]](#eq:semistable-VW-invariant){reference-type="eqref" reference="eq:semistable-VW-invariant"} becomes $$\widetilde{\mathsf{VW}}_\alpha(k, t) = [\lambda_k(\alpha)]_t \cdot \mathsf{VW}_\alpha(t).$$ The projection $\Pi_\alpha\colon \mathfrak{N}^{Q(k),\mathrm{sst}}_{\alpha,1} \to \mathfrak{N}^\mathrm{sst}_\alpha$ is a $\mathbb{P}^{\lambda_k(\alpha)-1}$-bundle, and $\mathsf{VW}_\alpha(t)$ is given by [\[eq:stable-VW-invariant\]](#eq:stable-VW-invariant){reference-type="eqref" reference="eq:stable-VW-invariant"}.* *Proof sketch.* Induct on the rank $r$ of $\alpha = (r, c_1, c_2)$. If [\[eq:semistable-VW-invariant\]](#eq:semistable-VW-invariant){reference-type="eqref" reference="eq:semistable-VW-invariant"} had a non-zero contribution indexed by $\alpha_1, \ldots, \alpha_n$ with $n > 1$, then the non-vanishing of the $\mathsf{VW}_{\alpha_i}(t)$ (which equal [\[eq:stable-VW-invariant\]](#eq:stable-VW-invariant){reference-type="eqref" reference="eq:stable-VW-invariant"} by the induction hypothesis) would imply the $\mathfrak{N}_{\alpha_i}^{\mathrm{sst}}$ are non-empty, and picking an element $E_i$ in each would give a strictly semistable $[E_1 \oplus \cdots \oplus E_n] \in \mathfrak{N}_\alpha^\mathrm{sst}$, a contradiction. So only the $n=1$ term in [\[eq:semistable-VW-invariant\]](#eq:semistable-VW-invariant){reference-type="eqref" reference="eq:semistable-VW-invariant"} is present. It is also easy to check that, if $E$ is stable, then $(E, V, \rho)$ is (semi)stable if and only if $\rho \neq 0$, so indeed $\Pi_\alpha$ is a $[\mathop{\mathrm{Hom}}(V, F_k(E)) \setminus \{0\}/\mathbb{C}^\times]$-bundle. 
Finally, note that the first line of [\[eq:pairs-obstruction-theory\]](#eq:pairs-obstruction-theory){reference-type="eqref" reference="eq:pairs-obstruction-theory"} is the part of the obstruction theory on $\mathfrak{N}^{Q(k)}_{\alpha,1}$ compatible with the one on $\mathfrak{N}_\alpha$, and the second line is the vector bundle $\Omega_{\Pi_\alpha} \otimes t^{-1}$, so by virtual pullback [@Manolache2012] or otherwise, $$\widehat{\mathcal{O}}^\mathrm{vir}_{\mathfrak{N}^{Q(k)}_{\alpha,1}} = \Pi_\alpha^* \widehat{\mathcal{O}}^\mathrm{vir}_{\mathfrak{N}_\alpha} \otimes \mathsf{e}\left(\Omega_{\Pi_\alpha} \otimes t^{-1}\right) \otimes t^{-\frac{1}{2} \dim \Pi_\alpha} K_{\Pi_\alpha}$$ where $K_{\Pi_\alpha} \coloneqq \det \Omega_{\Pi_\alpha}$ is the relative canonical. The $t^{-\frac{1}{2} \dim \Pi_\alpha} K_{\Pi_\alpha}$ comes from [\[eq:half-canonical-for-balanced-classes\]](#eq:half-canonical-for-balanced-classes){reference-type="eqref" reference="eq:half-canonical-for-balanced-classes"}. Applying $\Pi_{\alpha *}$ and using the projection formula, it remains to compute $\Pi_{\alpha *} \left(\mathsf{e}\left(\Omega_{\Pi_\alpha} \otimes t^{-1}\right) \otimes K_{\Pi_\alpha}\right)$. This is some combination of relative cohomology bundles $R^i\Pi_{\alpha *}\Omega^j_{\Pi_\alpha}$, which are all canonically trivialized by powers of the hyperplane class. So it is enough to compute on fibers: $$\begin{aligned} \chi\left(\mathbb{P}^N, \mathsf{e}\left(\Omega_{\mathbb{P}^N} \otimes t^{-1}\right) \otimes K_{\mathbb{P}^N}\right) t^{-\frac{N}{2}} &= (-1)^N t^{-\frac{N}{2}} \sum_{i,j=0}^N (-1)^{i+j} \dim H^{i,j}(\mathbb{P}^N) t^j \\ &= [N+1]_t \in \mathbbm{k}_\mathsf{T}, \end{aligned}$$ using that the modules $H^{i,j}(\mathbb{P}^N) \subset H^{i+j}(\mathbb{P}^N, \mathbb{C})$ are $\mathsf{T}$-equivariantly trivial by Hodge theory. 
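The last equality can also be checked by computer. Hedging: [\[eq:quantum-integer\]](#eq:quantum-integer){reference-type="eqref" reference="eq:quantum-integer"} is only referenced here, so we assume the convention $[N]_t = (-1)^{N-1} \frac{t^{N/2} - t^{-N/2}}{t^{1/2} - t^{-1/2}}$, consistent with the sign remark concluding this proof. Evaluating both sides at rational values of $t^{1/2}$:

```python
from fractions import Fraction

def qint(N, s):
    # assumed convention: [N]_t = (-1)^(N-1) (t^(N/2) - t^(-N/2)) / (t^(1/2) - t^(-1/2)),
    # evaluated at t^(1/2) = s so that all half-integer powers of t are honest rationals
    return (-1) ** (N - 1) * (s ** N - s ** (-N)) / (s - 1 / s)

def chi_fiber(N, s):
    # the fiberwise Euler characteristic computed above: (-1)^N t^(-N/2) (1 + t + ... + t^N)
    return (-1) ** N * s ** (-N) * sum(s ** (2 * j) for j in range(N + 1))

s = Fraction(3, 2)
assert all(chi_fiber(N, s) == qint(N + 1, s) for N in range(10))
```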
This computation motivates the sign $(-1)^{N-1}$ in the definition [\[eq:quantum-integer\]](#eq:quantum-integer){reference-type="eqref" reference="eq:quantum-integer"} of $[N]_t$. ◻ ## {#sec:U-pairs-invariants} To be precise, our $\widetilde{\mathsf{VW}}_\alpha$ is the so-called refined $\mathrm{SU}(r)$ pairs invariant. The $\mathrm{S}$ here corresponds to fixing $\det \overline{E} = L$ and $\mathop{\mathrm{tr}}\phi = 0$, i.e. using the moduli stack $\mathfrak{N}_\alpha \subset \mathfrak{M}_\alpha$ instead of the entire $\mathfrak{M}_\alpha$. There is an analogous $\mathrm{U}(r)$ pairs invariant $$\chi\left(\mathfrak{M}_{\alpha,1}^{Q(k),\mathrm{sst}}, \widehat{\mathcal{O}}^\mathrm{vir}\right)$$ where the symmetric obstruction theory on $\mathfrak{M}_\alpha$ is also given by [\[eq:VW-reduced-obstruction-theory\]](#eq:VW-reduced-obstruction-theory){reference-type="eqref" reference="eq:VW-reduced-obstruction-theory"} except the subscript $0$ on the latter two terms now only removes a copy of $H^0(\mathcal{O}_S)$ and $H^2(K_S) \cong t \otimes H^0(\mathcal{O}_S)^\vee$ respectively, cf. [@Tanaka2020 § 4.1]. This $\mathrm{U}(r)$ pairs invariant agrees with $\widetilde{\mathsf{VW}}_\alpha$ if $H^1(\mathcal{O}_S) = H^2(\mathcal{O}_S) = 0$. Otherwise: - if $H^2(\mathcal{O}_S) \neq 0$, then the summand $H^2(\mathcal{O}_S)$ in $R\mathop{\mathrm{Hom}}_S(\overline{E}, \overline{E})_0$ forms a trivial sub-bundle in the obstruction sheaf on $\mathfrak{M}$, so $\widehat{\mathcal{O}}^\mathrm{vir}= 0$; - if $H^1(\mathcal{O}_S) \neq 0$, then $\mathop{\mathrm{Jac}}(S)$ is non-trivial and acts on $\mathfrak{M}$ by tensoring. This action has no fixed points and defines a nowhere-vanishing section of the tangent sheaf on $\mathfrak{M}$, which is equivalently a nowhere-vanishing cosection of the obstruction sheaf on $\mathfrak{M}$ by the symmetry of the obstruction theory. 
Lemma [Lemma 1](#lem:symmetrized-pullback-cosection){reference-type="ref" reference="lem:symmetrized-pullback-cosection"} lifts this to a nowhere-vanishing cosection of the obstruction sheaf on $\mathfrak{M}^{Q(k)}$, so $\widehat{\mathcal{O}}^\mathrm{vir}= 0$ by cosection localization [@Kiem2013a]. These vanishings are important motivation for using $\mathfrak{N}_\alpha \subset \mathfrak{M}_\alpha$ to define $\widetilde{\mathsf{VW}}_\alpha$, and will also be important in the master space calculation in §[5.11](#sec:mixed-fixed-locus-cases){reference-type="ref" reference="sec:mixed-fixed-locus-cases"}. # A master space calculation {#sec:master-space-calculation} ## {#section-23} **Definition 6**. Let $k_1, k_2 \gg 0$ and, as in [@Joyce2021 Definition 9.4], consider the quiver $$\label{eq:master-space-quiver} \widetilde{Q}(k_1, k_2) \coloneqq \begin{tikzpicture}[node distance=0.3cm and 1cm] \coordinate[framing, label=below:$V_3$] (V3); \coordinate[framing, above right=of V3, label=above:$V_1$] (V1); \coordinate[framing, below right=of V3, label=below:$V_2$] (V2); \coordinate[vertex, right=of V1, label=right:$F_{k_1}(E)$] (F1); \coordinate[vertex, right=of V2, label=right:$F_{k_2}(E)$] (F2); \draw[->] (V3) -- node[label=above:$\rho_3$]{} (V1); \draw[->] (V1) -- node[label=above:$\rho_1$]{} (F1); \draw[->] (V3) -- node[label=below:$\rho_4$]{} (V2); \draw[->] (V2) -- node[label=below:$\rho_2$]{} (F2); \end{tikzpicture}$$ for class $\alpha$ and framing dimension $\bm{d} = \bm{1} \coloneqq (1, 1, 1)$. On $\mathfrak{N}^{\widetilde{Q}(k_1,k_2)}_{\alpha,\bm{1}}$, take the symmetrized obstruction theory given by symmetrized pullback along $\Pi_\alpha$, and the stability condition $\tau_{\epsilon,\epsilon,1}$ for $\epsilon > 0$. For sufficiently small $\epsilon$ (depending on $\alpha$), there are no strictly semistables and the semistable locus $M_\alpha \coloneqq \mathfrak{N}_{\alpha,\bm{1}}^{\widetilde{Q}(k_1,k_2),\mathrm{sst}}$ is a quasi-projective scheme. 
We call it the *master space*. Let $\mathbb{C}^\times$ act on $M_\alpha$ by scaling the linear map $\rho_4$ with a weight which we denote $z$. This action commutes with the inherited $\mathsf{T}$-action and the obstruction theory may be made $\mathbb{C}^\times$-equivariant as well. The quantity $$\chi\left(M_\alpha, \widehat{\mathcal{O}}^\mathrm{vir}\right) \in \mathbbm{k}_{\mathbb{C}^\times \times \mathsf{T},\mathrm{loc}}$$ is well-defined by $\mathbb{C}^\times \times \mathsf{T}$-equivariant localization (Lemma [Lemma 2](#lem:quiver-stack-fixed-locus-is-proper){reference-type="ref" reference="lem:quiver-stack-fixed-locus-is-proper"}). We will apply the following Proposition [Proposition 2](#prop:master-space-relation){reference-type="ref" reference="prop:master-space-relation"} to $M_\alpha$ to get a wall-crossing formula. ## {#section-24} **Proposition 2**. *Let $M$ be a scheme with $\mathbb{C}^\times \times \mathsf{T}$-action and symmetric perfect obstruction theory for which equivariant localization holds, and let $\iota_{\mathbb{C}^\times}\colon M^{\mathbb{C}^\times} \hookrightarrow M$ be the $\mathbb{C}^\times$-fixed locus. Suppose that:* 1. *$\chi(M, \widehat{\mathcal{O}}^\mathrm{vir})$ is well-defined and lies in $\mathbbm{k}_{\mathsf{T},\mathrm{loc}}[z^\pm] \subset \mathbbm{k}_{\mathbb{C}^\times \times \mathsf{T},\mathrm{loc}}$;* 2. *$N^\mathrm{vir}_{\iota_{\mathbb{C}^\times}} = F^\vee - t^{-1} F \in K^\circ_{\mathbb{C}^\times \times \mathsf{T}}(M^{\mathbb{C}^\times})$ for some K-theory class $F$.* *Let $F_{>0}$ and $F_{<0}$ be the parts of $F$ with positive and negative $\mathbb{C}^\times$-weight respectively, and set $\mathsf{ind}\coloneqq \mathop{\mathrm{rank}}F_{>0} - \mathop{\mathrm{rank}}F_{<0}$. 
Then $$\label{eq:master-space-relation} 0 = \chi\bigg(M^{\mathbb{C}^\times}, \widehat{\mathcal{O}}^\mathrm{vir}_{M^{\mathbb{C}^\times}} \otimes (-1)^{\mathsf{ind}}(t^{\frac{\mathsf{ind}}{2}} - t^{-\frac{\mathsf{ind}}{2}})\bigg).$$* The quantity $\mathsf{ind}$ is a kind of Morse--Bott index for connected components $Z \subset M^{\mathbb{C}^\times}$. In particular $\mathsf{ind}$ is constant on each $Z$ and so the scalar $(-1)^{\mathsf{ind}}(t^{\frac{\mathsf{ind}}{2}} - t^{-\frac{\mathsf{ind}}{2}})$ factors out of $\chi(Z, \widehat{\mathcal{O}}^\mathrm{vir}_Z \otimes \cdots)$. Proposition [Proposition 2](#prop:master-space-relation){reference-type="ref" reference="prop:master-space-relation"} is a $\mathsf{T}$-equivariant and K-theoretic version of a common procedure in cohomology (see e.g. [@Kiem2013 §5], or [@Nakajima2011 §5] for a $\mathsf{T}$-equivariant version): if $M$ is proper, take residues of both sides of the $\mathbb{C}^\times$-equivariant localization formula for $[M]^\mathrm{vir}$. Accordingly, the proof of Proposition [Proposition 2](#prop:master-space-relation){reference-type="ref" reference="prop:master-space-relation"}, given in §[5.4](#sec:master-space-relation-proof){reference-type="ref" reference="sec:master-space-relation-proof"}, is by applying the following K-theoretic residue map to the $\mathbb{C}^\times \times \mathsf{T}$-equivariant localization formula. ## {#section-25} **Definition 7**. Given a rational function $f \in \mathbbm{k}_{\mathbb{C}^\times \times \mathsf{T},\mathrm{loc}}$, let $f_\pm \in \mathbbm{k}_{\mathsf{T},\mathrm{loc}}((z^\pm))$ be its formal series expansion around $z=0$ and $z=\infty$ respectively. The *K-theoretic residue map*, see [@Metzler2002] [@Liu2022 Appendix A], is the $\mathbb{Z}$-module homomorphism $$\begin{aligned} \mathop{\mathrm{Res}}^K_z\colon \mathbbm{k}_{\mathbb{C}^\times \times \mathsf{T},\mathrm{loc}} &\to \mathbbm{k}_{\mathsf{T},\mathrm{loc}} \\ f &\mapsto z^0 \text{ term in } (f_+ - f_-). 
\end{aligned}$$ Note that $\mathop{\mathrm{Res}}_z^K$ vanishes on the subring $\mathbbm{k}_{\mathsf{T},\mathrm{loc}}[z^\pm]$, and that $\mathop{\mathrm{Res}}_z^K(f) = \lim_{z \to 0} f - \lim_{z \to \infty} f$ whenever the rhs exists. ## {#sec:master-space-relation-proof} *Proof of Proposition [Proposition 2](#prop:master-space-relation){reference-type="ref" reference="prop:master-space-relation"}.* First, factorize $$\iota\colon M^{\mathbb{C}^\times \times \mathsf{T}} \xhookrightarrow{\iota_\mathsf{T}} M^{\mathbb{C}^\times} \xhookrightarrow{\iota_{\mathbb{C}^\times}} M,$$ and, in K-theory, split $N^\mathrm{vir}_\iota = \iota_\mathsf{T}^* N^\mathrm{vir}_{\iota_{\mathbb{C}^\times}} + N^\mathrm{vir}_{\iota_\mathsf{T}}$ into its non-$\mathbb{C}^\times$-fixed part and its $\mathbb{C}^\times$-fixed but non-$\mathsf{T}$-fixed part respectively. Then $$\label{eq:master-space-localization} \chi(M, \widehat{\mathcal{O}}^\mathrm{vir}_{M}) = \chi\bigg(M^{\mathbb{C}^\times \times \mathsf{T}}, \bigg(\frac{\mathcal{O}^\mathrm{vir}_{M^{\mathbb{C}^\times \times \mathsf{T}}}}{\mathsf{e}(N^\mathrm{vir}_{\iota_\mathsf{T}})} \otimes K^{\mathrm{vir},\frac{1}{2}}_{M^{\mathbb{C}^\times}}\bigg) \otimes \frac{1}{\widehat{\mathsf{e}}(\iota_\mathsf{T}^*N^\mathrm{vir}_{\iota_{\mathbb{C}^\times}})}\bigg)$$ using the definition [\[eq:symmetrized-virtual-cycle\]](#eq:symmetrized-virtual-cycle){reference-type="eqref" reference="eq:symmetrized-virtual-cycle"} of $\widehat{\mathcal{O}}^\mathrm{vir}_{M}$ and the discussion of §[4.2](#sec:symmetrized-virtual-localization){reference-type="ref" reference="sec:symmetrized-virtual-localization"}. Note that the inner bracketed term, if pushed forward along $\iota_\mathsf{T}$, becomes $\widehat{\mathcal{O}}^\mathrm{vir}_{M^{\mathbb{C}^\times}}$. [^11] Now apply $\mathop{\mathrm{Res}}^K_z$ to [\[eq:master-space-localization\]](#eq:master-space-localization){reference-type="eqref" reference="eq:master-space-localization"}. By hypothesis, the lhs vanishes.
On the rhs, the definition [\[eq:k-theoretic-euler-class\]](#eq:k-theoretic-euler-class){reference-type="eqref" reference="eq:k-theoretic-euler-class"} of K-theoretic Euler class gives $$\label{eq:virtual-normal-bundle-in-localization} \frac{1}{\widehat{\mathsf{e}}(\iota^*_\mathsf{T}N^\mathrm{vir}_{\iota_{\mathbb{C}^\times}})} = \frac{\widehat{\mathsf{e}}(t^{-1}\otimes \iota_\mathsf{T}^* F)}{\widehat{\mathsf{e}}(\iota_\mathsf{T}^* F^\vee)} = \prod_{z^a t^b L \in \iota_\mathsf{T}^* F} \left(-t^{-\frac{1}{2}}\cdot \frac{t - z^a t^b L}{1 - z^a t^b L}\right)$$ where the product ranges over all K-theoretic Chern roots of $\iota_\mathsf{T}^* F \in K^\circ_{\mathbb{C}^\times \times \mathsf{T}}(M^{\mathbb{C}^\times \times \mathsf{T}}) = K^\circ(M^{\mathbb{C}^\times \times \mathsf{T}}) \otimes \mathbbm{k}_{\mathbb{C}^\times \times \mathsf{T}}$. Each such Chern root has $a \neq 0$, so the $z \to 0$ and $z \to \infty$ limits of each term in the product exist and are $-t^{\pm \frac{1}{2}}$ depending on the sign of $a$. [^12] The result is that $$0 = \chi\bigg(M^{\mathbb{C}^\times \times \mathsf{T}}, \bigg(\frac{\mathcal{O}^\mathrm{vir}_{M^{\mathbb{C}^\times \times \mathsf{T}}}}{\mathsf{e}(N^\mathrm{vir}_{\iota_\mathsf{T}})} \otimes K^{\mathrm{vir},\frac{1}{2}}_{M^{\mathbb{C}^\times}}\bigg) \otimes (-1)^{\mathsf{ind}}(t^{\frac{\mathsf{ind}}{2}} - t^{-\frac{\mathsf{ind}}{2}})\bigg).$$ To conclude, pushforward the integrand along $\iota_\mathsf{T}$, recognizing the resulting bracketed term as the definition of $\widehat{\mathcal{O}}^\mathrm{vir}_{M^{\mathbb{C}^\times}}$. ◻ ## {#section-26} **Lemma 3**. *Let $M$ be a scheme with $\mathbb{C}^\times \times \mathsf{T}$-action, such that:* 1. *$M^\mathsf{T}$ is proper;* 2. 
*if a point $p \in M$ is fixed by $(x, y) \in \mathbb{C}^\times \times \mathsf{T}$, then it is fixed by $y \in \mathsf{T}$.* *If $M$ has a symmetric perfect obstruction theory and equivariant localization holds, then $\chi\big(M, \widehat{\mathcal{O}}^\mathrm{vir}\big) \in \mathbbm{k}_{\mathsf{T},\mathrm{loc}}[z^\pm]$.* For our $M_\alpha$ from Definition [Definition 6](#def:master-space){reference-type="ref" reference="def:master-space"}, condition 1 holds by Lemma [Lemma 2](#lem:quiver-stack-fixed-locus-is-proper){reference-type="ref" reference="lem:quiver-stack-fixed-locus-is-proper"}, and condition 2 holds because if $[(E, \bm{V}, \bm{\rho})] \in M_\alpha$ is $(x,y)$-fixed then $E$ is $y$-fixed. *Proof.* View $\chi(M, \widehat{\mathcal{O}}^\mathrm{vir})$ as a rational function in $z$ and $t$. By the definition [\[eq:localized-base-ring\]](#eq:localized-base-ring){reference-type="eqref" reference="eq:localized-base-ring"} of localized base rings, we must show that its denominator has no factors of $1 - w$ where $w \coloneqq z^a t^b$ for any integers $a \neq 0$ and $b$. Recall the following argument of [@Arbesfeld2021 Proposition 3.2]. Let $\mathsf{T}'$ be the maximal torus of $\ker(w)$. If the fixed locus $M^{\mathsf{T}'}$ is proper, then $\mathsf{T}'$-equivariant localization shows that $$\chi(M, \widehat{\mathcal{O}}^\mathrm{vir})\Big|_{w=1} = \chi\Big(M, \widehat{\mathcal{O}}^\mathrm{vir}\Big|_{w=1}\Big) \in \mathbbm{k}_{\mathsf{T}',\mathrm{loc}}$$ is well-defined. In particular, the original quantity $\chi(M, \widehat{\mathcal{O}}^\mathrm{vir})$ cannot have factors $1 - w^n$ in its denominator, for any integer $n \neq 0$, and we are done. It remains to show that $M^{\mathsf{T}'}$ is proper. Since $a \neq 0$, the projection from $\mathsf{T}' \subset \mathbb{C}^\times \times \mathsf{T}$ to the $\mathsf{T}$ factor is surjective, so condition 2 implies $M^{\mathsf{T}'} = (M^\mathsf{T})^{\mathsf{T}'} \subset M^\mathsf{T}$.
We are done since fixed loci are closed, and $M^\mathsf{T}$ is proper by condition 1. ◻ ## {#section-27} **Remark 4**. The combination of Proposition [Proposition 2](#prop:master-space-relation){reference-type="ref" reference="prop:master-space-relation"} and Lemma [Lemma 3](#lem:no-z-poles){reference-type="ref" reference="lem:no-z-poles"} works equally well in equivariant cohomology, upon replacing the K-theoretic residue map $\mathop{\mathrm{Res}}_z^K$ with its cohomological analogue $f \mapsto \mathop{\mathrm{res}}_{z=0}(f \, dz/z)$. Working cohomologically, the result is that terms like $t^{\mathsf{ind}/2}$ in [\[eq:master-space-relation\]](#eq:master-space-relation){reference-type="eqref" reference="eq:master-space-relation"} become $\mathsf{ind}\cdot t$, and the resulting quantum integers in the wall-crossing formulas [\[eq:VW-invariants-relation\]](#eq:VW-invariants-relation){reference-type="eqref" reference="eq:VW-invariants-relation"} and [\[eq:VW-invariants-relation-simple\]](#eq:VW-invariants-relation-simple){reference-type="eqref" reference="eq:VW-invariants-relation-simple"} become specialized to $t=1$. ## {#section-28} We apply Proposition [Proposition 2](#prop:master-space-relation){reference-type="ref" reference="prop:master-space-relation"} and Lemma [Lemma 3](#lem:no-z-poles){reference-type="ref" reference="lem:no-z-poles"} to $M_\alpha$. In §[5.8](#sec:master-space-fixed-loci-1){reference-type="ref" reference="sec:master-space-fixed-loci-1"}, §[5.9](#sec:master-space-fixed-loci-2){reference-type="ref" reference="sec:master-space-fixed-loci-2"}, and §[5.10](#sec:master-space-fixed-loci-3){reference-type="ref" reference="sec:master-space-fixed-loci-3"}, we identify $M_\alpha^{\mathbb{C}^\times}$ as a disjoint union of three types of loci; see [@Joyce2021 Proposition 9.5] for details.
All three types of loci are related in very manageable ways to pairs stacks $\mathfrak{N}^{Q(k)}_{\alpha,1}$, and [\[eq:master-space-relation\]](#eq:master-space-relation){reference-type="eqref" reference="eq:master-space-relation"} will become a wall-crossing formula relating the refined pairs invariants $\widetilde{\mathsf{VW}}_\alpha(k_1, t)$ and $\widetilde{\mathsf{VW}}_\alpha(k_2, t)$. On each fixed locus, we also need to identify the $\mathbb{C}^\times$-fixed and non-$\mathbb{C}^\times$-fixed parts of the restriction of the obstruction theory on $M_\alpha$. In K-theory, the obstruction theory is given by $$\label{eq:master-space-obstruction-theory} \begin{aligned} \mathbb{T}_{M_\alpha} &= \Big(\left(\mathcal{V}_3^\vee \otimes \mathcal{V}_1 + \mathcal{V}_3^\vee \otimes \mathcal{V}_2 - \mathcal{V}_3^\vee \otimes \mathcal{V}_3\right) - t^{-1} \otimes (\cdots)^\vee\Big) \\ &\quad+ \sum_{i=1}^2 \Big(\left(\mathcal{V}_i^\vee \otimes \mathcal{F}_{k_i}(\mathcal{E}) - \mathcal{V}_i^\vee \otimes \mathcal{V}_i\right) - t^{-1} \otimes (\cdots)^\vee\Big) - R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}, \mathcal{E})_\perp \end{aligned}$$ where $\mathcal{E}$ is the universal sheaf of $E$ on $\pi_X\colon \mathfrak{N}\times X \to \mathfrak{N}$, and $\mathcal{F}_{k_i}(\mathcal{E})$ is the universal bundle of $F_{k_i}(E)$, and each $(\cdots)^\vee$ indicates the dual of the preceding term. This formula can be obtained by combining [\[eq:symmetrized-pullback-k-class\]](#eq:symmetrized-pullback-k-class){reference-type="eqref" reference="eq:symmetrized-pullback-k-class"} and [\[eq:quiver-stack-projection\]](#eq:quiver-stack-projection){reference-type="eqref" reference="eq:quiver-stack-projection"}. 
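Before restricting [\[eq:master-space-obstruction-theory\]](#eq:master-space-obstruction-theory){reference-type="eqref" reference="eq:master-space-obstruction-theory"} to the individual fixed loci, it is worth spot-checking the limit computation from the proof of Proposition [Proposition 2](#prop:master-space-relation){reference-type="ref" reference="prop:master-space-relation"}: each Chern root $z^a t^b L$ of $F$ contributes a factor of $1/\widehat{\mathsf{e}}(N^\mathrm{vir})$ whose $z \to 0$ and $z \to \infty$ limits are $-t^{\pm\frac{1}{2}}$, with the sign of $a$ deciding which limit is which. The following is a minimal sympy sketch, not part of the argument; the weights $a, b$ and the root $L$ are toy inputs, not data from the moduli problem.

```python
import sympy as sp

# z: C*-equivariant variable, t: T-weight, L: one (toy) K-theoretic Chern root of F
z, t, L = sp.symbols('z t L', positive=True)

def euler_factor(a, b=0):
    """One factor of 1/e-hat(N^vir) for a Chern root z^a t^b L of F."""
    w = z**a * t**b * L
    return -t**sp.Rational(-1, 2) * (t - w) / (1 - w)

# a > 0: the z -> 0 and z -> oo limits are -t^(1/2) and -t^(-1/2) respectively
f = euler_factor(a=1)
assert sp.simplify(sp.limit(f, z, 0) + sp.sqrt(t)) == 0
assert sp.simplify(sp.limit(f, z, sp.oo) + 1 / sp.sqrt(t)) == 0

# a < 0: the two limits swap, matching "depending on the sign of a"
g = euler_factor(a=-1, b=2)
assert sp.simplify(sp.limit(g, z, 0) + 1 / sp.sqrt(t)) == 0
assert sp.simplify(sp.limit(g, z, sp.oo) + sp.sqrt(t)) == 0

# "limit at 0 minus limit at oo" for one positive root: -(t^(1/2) - t^(-1/2)),
# i.e. the ind = 1 case of the overall factor in the master space relation
res = sp.limit(f, z, 0) - sp.limit(f, z, sp.oo)
assert sp.simplify(res + sp.sqrt(t) - 1 / sp.sqrt(t)) == 0
```

Taking the difference of the two limits over all roots of $F$ is exactly how the factor $(-1)^{\mathsf{ind}}(t^{\frac{\mathsf{ind}}{2}} - t^{-\frac{\mathsf{ind}}{2}})$ in [\[eq:master-space-relation\]](#eq:master-space-relation){reference-type="eqref" reference="eq:master-space-relation"} arises.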
For clarity in keeping track of $\mathbb{C}^\times$-weights on $\mathbb{C}^\times$-fixed loci, for components $Z \subset M_\alpha^{\mathbb{C}^\times}$, let $\mathcal{W}_i$ denote the non-$\mathbb{C}^\times$-equivariant bundle on $Z$ such that $\mathcal{V}_i\big|_Z = z^a \otimes \mathcal{W}_i$ for some $a \in \mathbb{Z}$. ## {#sec:master-space-fixed-loci-1} Let $Z_{\rho_4=0} \coloneqq \{\rho_4=0\} \subset M_\alpha$. This locus is obviously $\mathbb{C}^\times$-fixed. By stability, $\rho_3 \neq 0$ and the forgetful map $$\Pi_{\rho_4=0}\colon Z_{\rho_4=0} \to \mathfrak{N}^{Q(k_1),\mathrm{sst}}_{\alpha,1}, \quad (E, \bm{V}, \bm{\rho}) \mapsto (E, V_1, \rho_1)$$ is a $\mathbb{P}^{\lambda_{k_2}(\alpha)-1}$-bundle, coming from the freedom to choose the map $\rho_2$. Now consider the restriction of $\mathbb{T}_{M_\alpha}$. The non-vanishing section $\rho_3$ trivializes the line bundle $\mathcal{V}_3^\vee \otimes \mathcal{V}_1$, which then cancels with $\mathcal{V}_3^\vee \otimes \mathcal{V}_3 \cong \mathcal{O}$ in the first line of [\[eq:master-space-obstruction-theory\]](#eq:master-space-obstruction-theory){reference-type="eqref" reference="eq:master-space-obstruction-theory"}. What remains is the non-$\mathbb{C}^\times$-fixed part $$N^{\mathrm{vir}}_{\iota_{\mathbb{C}^\times}}\Big|_{Z_{\rho_4=0}} = z \mathcal{W}_3^\vee \otimes \mathcal{W}_2 - t^{-1} \otimes (\cdots)^\vee.$$ Finally, the entire second line of [\[eq:master-space-obstruction-theory\]](#eq:master-space-obstruction-theory){reference-type="eqref" reference="eq:master-space-obstruction-theory"} is $\mathbb{C}^\times$-fixed. 
So, the computation in the proof of Proposition [Proposition 1](#prop:symmetrized-integration-on-projective-bundle){reference-type="ref" reference="prop:symmetrized-integration-on-projective-bundle"} applies to $\Pi_{\rho_4=0}$, yielding $$\label{eq:master-space-fixed-locus-1} \chi\left(Z_{\rho_4=0}, \widehat{\mathcal{O}}^\mathrm{vir}\right) = [\lambda_{k_2}(\alpha)]_t \cdot \chi\left(\mathfrak{N}_{\alpha,1}^{Q(k_1),\mathrm{sst}}, \widehat{\mathcal{O}}^\mathrm{vir}\right) = [\lambda_{k_2}(\alpha)]_t \cdot \widetilde{\mathsf{VW}}_\alpha(k_1, t).$$ ## {#sec:master-space-fixed-loci-2} Let $Z_{\rho_3=0} \coloneqq \{\rho_3=0\} \subset M_\alpha$. This locus is $\mathbb{C}^\times$-fixed because the scaling of $\rho_4$ can be undone by making $\mathbb{C}^\times$ act on $V_3$ with weight $z$. By stability, $\rho_4 \neq 0$ and the forgetful map $$\Pi_{\rho_3=0}\colon Z_{\rho_3=0} \to \mathfrak{N}^{Q(k_2),\mathrm{sst}}_{\alpha,1}, \quad (E, \bm{V}, \bm{\rho}) \mapsto (E, V_2, \rho_2)$$ is a $\mathbb{P}^{\lambda_{k_1}(\alpha)-1}$-bundle, coming from the freedom to choose the map $\rho_1$. Now consider the restriction of $\mathbb{T}_{M_\alpha}$. Like in §[5.8](#sec:master-space-fixed-loci-1){reference-type="ref" reference="sec:master-space-fixed-loci-1"} for $Z_{\rho_4=0}$, the non-$\mathbb{C}^\times$-fixed part is $$N^\mathrm{vir}_{\iota_{\mathbb{C}^\times}}\Big|_{Z_{\rho_3=0}} = z^{-1} \mathcal{W}_3^\vee \otimes \mathcal{W}_1 - t^{-1} \otimes (\cdots)^\vee,$$ where the $\mathbb{C}^\times$ action on $V_3$ creates the non-trivial $z$ weight. 
The entire second line of [\[eq:master-space-obstruction-theory\]](#eq:master-space-obstruction-theory){reference-type="eqref" reference="eq:master-space-obstruction-theory"} is $\mathbb{C}^\times$-fixed, and $$\label{eq:master-space-fixed-locus-2} \chi\left(Z_{\rho_3=0}, \widehat{\mathcal{O}}^\mathrm{vir}\right) = [\lambda_{k_1}(\alpha)]_t \cdot \chi\left(\mathfrak{N}_{\alpha,1}^{Q(k_2),\mathrm{sst}}, \widehat{\mathcal{O}}^\mathrm{vir}\right) = [\lambda_{k_1}(\alpha)]_t \cdot \widetilde{\mathsf{VW}}_\alpha(k_2, t).$$ ## {#sec:master-space-fixed-loci-3} Finally, when both $\rho_3,\rho_4 \neq 0$, by stability all $\rho_i \neq 0$. In order for all maps to be $\mathbb{C}^\times$-equivariant, $\mathbb{C}^\times$ must act on $V_1$, $V_2$, and $V_3$ with weights $z$, $1$, and $z$ respectively, and $E = zE_1 \oplus E_2$ must split into weight-$z$ and weight-$1$ pieces [^13] such that $\rho_i$ factors as $\rho_i\colon V_i \to F_{k_i}(E_i)$ for $i = 1, 2$. Semistability of $E$ implies $\tau(E_1) = \tau(E_2)$. Note that while $\det \overline{E} = \det(\overline{E}_1) \det(\overline{E}_2) = L \in \mathop{\mathrm{Pic}}(S)$ is pre-specified, $\det(\overline{E}_1)$ and $\det(\overline{E}_2)$ themselves are not. Hence there are fixed loci $$Z_{\alpha_1,\alpha_2} \coloneqq \{\det(\overline{E}_1)\det(\overline{E}_2) = L\} \subset \mathfrak{M}^{Q(k_1),\mathrm{sst}}_{\alpha_1,1} \times \mathfrak{M}^{Q(k_2),\mathrm{sst}}_{\alpha_2,1} \subset M_\alpha$$ for any non-trivial decomposition $\alpha = \alpha_1 + \alpha_2$ with $\tau(\alpha_1) = \tau(\alpha_2)$. 
Put differently, if $\alpha_i = (r_i, c_{1,i}, c_{2,i})$, fix a splitting $L_1 \otimes L_2 = L$ where $c_1(L_i) = c_{1,i}$, and then there is a map $\det_1\colon Z_{\alpha_1,\alpha_2} \to \mathop{\mathrm{Jac}}(S)$, given by $L_1^\vee \otimes \det \overline{E}_1$, whose fiber at $L_0$ is $$\det\nolimits_1^{-1}(L_0) \cong \mathfrak{N}^{Q(k_1),\mathrm{sst}}_{(r_1,L_1 \otimes L_0,c_{2,1}),1} \times \mathfrak{N}^{Q(k_2),\mathrm{sst}}_{(r_2, L_2 \otimes L_0^{-1}, c_{2,2}),1}.$$ Now consider the restriction of $\mathbb{T}_{M_\alpha}$. The non-vanishing sections $\rho_3$ and $\rho_4$ trivialize the line bundles $\mathcal{V}_3^\vee \otimes \mathcal{V}_1$ and $\mathcal{V}_3^\vee \otimes \mathcal{V}_2$, so the first line of [\[eq:master-space-obstruction-theory\]](#eq:master-space-obstruction-theory){reference-type="eqref" reference="eq:master-space-obstruction-theory"} becomes $\mathcal{O}- t\mathcal{O}^\vee$. The non-trivial $\mathbb{C}^\times$-weight in the splitting $\mathcal{E}\big|_{Z_{\alpha_1,\alpha_2}} = z\mathcal{E}_1 \oplus \mathcal{E}_2$ produces the non-$\mathbb{C}^\times$-fixed part $$\begin{aligned} N^{\mathrm{vir}}_{\iota_{\mathbb{C}^\times}}\Big|_{Z_{\alpha_1,\alpha_2}} &= \left(z^{-1} \mathcal{W}_1^\vee \otimes \mathcal{F}_{k_1}(\mathcal{E}_2) + z \mathcal{W}_2^\vee \otimes \mathcal{F}_{k_2}(\mathcal{E}_1)\right) - t \otimes (\cdots)^\vee \\ &\qquad- \left(z^{-1} R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_1, \mathcal{E}_2) + z R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_2, \mathcal{E}_1)\right)\end{aligned}$$ where the subscript $\perp$ can be removed because trace does not see off-diagonal parts. Recall that Serre duality says $\chi(\alpha_1, \alpha_2) \coloneqq \mathop{\mathrm{rank}}R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_1, \mathcal{E}_2)$ is anti-symmetric. 
The remaining $\mathbb{C}^\times$-fixed part of the second line of [\[eq:master-space-obstruction-theory\]](#eq:master-space-obstruction-theory){reference-type="eqref" reference="eq:master-space-obstruction-theory"}, combined with the $\mathcal{O}- t\mathcal{O}^\vee$ from the first line, gives the total $\mathbb{C}^\times$-fixed part $$\begin{aligned} &\sum_{i=1}^2 \Big(\left(\mathcal{W}_i^\vee \otimes \mathcal{F}_{k_i}(\mathcal{E}_i) - \mathcal{W}_i^\vee \otimes \mathcal{W}_i\right) - t \otimes (\cdots)^\vee\Big) \nonumber \\ &- \Big(R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_1, \mathcal{E}_1) + R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_2, \mathcal{E}_2)\Big)_\perp + \mathcal{O}- t\mathcal{O}^\vee. \label{eq:split-fixed-loci-fixed-Ext-term}\end{aligned}$$ Non-canonically, the second line is $-R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_1, \mathcal{E}_1)_\perp - (R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_2, \mathcal{E}_2) - \mathcal{O}+ t\mathcal{O}^\vee)$. ## {#sec:mixed-fixed-locus-cases} There are now three cases for the computation of $\chi(Z_{\alpha_1,\alpha_2}, \widehat{\mathcal{O}}^\mathrm{vir})$.
If $H^1(\mathcal{O}_S) = H^2(\mathcal{O}_S) = 0$, then from [\[eq:VW-reduced-obstruction-theory\]](#eq:VW-reduced-obstruction-theory){reference-type="eqref" reference="eq:VW-reduced-obstruction-theory"}, $R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_2, \mathcal{E}_2)_\perp = R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_2, \mathcal{E}_2) - \mathcal{O}+ t\mathcal{O}^\vee$, and so the $\mathbb{C}^\times$-fixed part [\[eq:split-fixed-loci-fixed-Ext-term\]](#eq:split-fixed-loci-fixed-Ext-term){reference-type="eqref" reference="eq:split-fixed-loci-fixed-Ext-term"} becomes, canonically, $$-R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_1, \mathcal{E}_1)_\perp - R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_X}(\mathcal{E}_2, \mathcal{E}_2)_\perp.$$ The base $\mathop{\mathrm{Jac}}(S)$ of the map $\det_1$ is trivial. Hence $$\label{eq:master-space-fixed-locus-3} \chi\left(Z_{\alpha_1,\alpha_2}, \widehat{\mathcal{O}}^\mathrm{vir}\right) = \chi\left(\mathfrak{N}^{Q(k_1),\mathrm{sst}}_{\alpha_1,1} \times \mathfrak{N}^{Q(k_2),\mathrm{sst}}_{\alpha_2,1}, \widehat{\mathcal{O}}^\mathrm{vir}\boxtimes \widehat{\mathcal{O}}^\mathrm{vir}\right) = \widetilde{\mathsf{VW}}_{\alpha_1}(k_1, t) \widetilde{\mathsf{VW}}_{\alpha_2}(k_2, t).$$ If $H^2(\mathcal{O}_S) \neq 0$, then the obstruction sheaf in [\[eq:split-fixed-loci-fixed-Ext-term\]](#eq:split-fixed-loci-fixed-Ext-term){reference-type="eqref" reference="eq:split-fixed-loci-fixed-Ext-term"} has the extra trivial summands $R^2\pi_{S*}\mathcal{O}$ from $R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_S}(\overline{\mathcal{E}}, \overline{\mathcal{E}}) = R\mathop{\mathrm{\mathcal{H}{\it om}}}_{\pi_S}(\overline{\mathcal{E}}, \overline{\mathcal{E}})_0 \oplus R^2\pi_{S*}\mathcal{O}\oplus \cdots$, where $\overline{\mathcal{E}}$ is the universal sheaf of $\overline{E}$ on $\pi_S\colon\mathfrak{N}\times S \to \mathfrak{N}$. Hence $\widehat{\mathcal{O}}^\mathrm{vir}_{Z_{\alpha_1,\alpha_2}} = 0$. 
Finally, if $H^1(\mathcal{O}_S) \neq 0$, then the base $\mathop{\mathrm{Jac}}(S)$ is non-trivial. Following the vanishing argument in §[4.7](#sec:U-pairs-invariants){reference-type="ref" reference="sec:U-pairs-invariants"}, $\mathop{\mathrm{Jac}}(S)$ acts with no fixed points on the moduli stack $$\{\det(\overline{E}_1) \det(\overline{E}_2) = L\} \subset \mathfrak{M}_{\alpha_1} \times \mathfrak{M}_{\alpha_2},$$ giving a nowhere-vanishing cosection of the obstruction sheaf in [\[eq:split-fixed-loci-fixed-Ext-term\]](#eq:split-fixed-loci-fixed-Ext-term){reference-type="eqref" reference="eq:split-fixed-loci-fixed-Ext-term"}. Lemma [Lemma 1](#lem:symmetrized-pullback-cosection){reference-type="ref" reference="lem:symmetrized-pullback-cosection"} lifts this to a nowhere-vanishing cosection of the obstruction sheaf on $Z_{\alpha_1,\alpha_2}$, so $\widehat{\mathcal{O}}^\mathrm{vir}_{Z_{\alpha_1,\alpha_2}} = 0$ by cosection localization [@Kiem2013a]. ## {#section-29} We are ready to put everything together. 
Plugging [\[eq:master-space-fixed-locus-1\]](#eq:master-space-fixed-locus-1){reference-type="eqref" reference="eq:master-space-fixed-locus-1"}, [\[eq:master-space-fixed-locus-2\]](#eq:master-space-fixed-locus-2){reference-type="eqref" reference="eq:master-space-fixed-locus-2"}, and [\[eq:master-space-fixed-locus-3\]](#eq:master-space-fixed-locus-3){reference-type="eqref" reference="eq:master-space-fixed-locus-3"} into [\[eq:master-space-relation\]](#eq:master-space-relation){reference-type="eqref" reference="eq:master-space-relation"}, if $H^1(\mathcal{O}_S) = H^2(\mathcal{O}_S) = 0$ then, after dividing by an overall factor of $t^{\frac{1}{2}} - t^{-\frac{1}{2}}$, $$\label{eq:VW-invariants-relation} \begin{aligned} 0 &= [\lambda_{k_2}(\alpha)]_t \widetilde{\mathsf{VW}}_\alpha(k_1,t) - [\lambda_{k_1}(\alpha)]_t \widetilde{\mathsf{VW}}_\alpha(k_2, t) \\ &\quad + \sum_{\substack{\alpha_1+\alpha_2=\alpha\\\tau(\alpha_1)=\tau(\alpha_2)}} [\lambda_{k_2}(\alpha_1) - \lambda_{k_1}(\alpha_2) + \chi(\alpha_1, \alpha_2)]_t \widetilde{\mathsf{VW}}_{\alpha_1}(k_1,t) \widetilde{\mathsf{VW}}_{\alpha_2}(k_2,t) \end{aligned}$$ Otherwise, if $H^1(\mathcal{O}_S) \neq 0$ or $H^2(\mathcal{O}_S) \neq 0$, by the second and third cases in §[5.11](#sec:mixed-fixed-locus-cases){reference-type="ref" reference="sec:mixed-fixed-locus-cases"}, there is no contribution from the sum and only the first two terms remain: $$\label{eq:VW-invariants-relation-simple} 0 = [\lambda_{k_2}(\alpha)]_t \widetilde{\mathsf{VW}}_\alpha(k_1,t) - [\lambda_{k_1}(\alpha)]_t \widetilde{\mathsf{VW}}_\alpha(k_2, t).$$ If one further assumes that $\mathcal{O}_S(1)$ is generic for $\alpha$ in the sense of [\[eq:generic-polarization\]](#eq:generic-polarization){reference-type="eqref" reference="eq:generic-polarization"}, then in the sum in [\[eq:VW-invariants-relation\]](#eq:VW-invariants-relation){reference-type="eqref" reference="eq:VW-invariants-relation"}, $\alpha_2 = c \alpha_1$ for some constant $c$, and therefore 
$$\chi(\alpha_1, \alpha_2) = c \chi(\alpha_1, \alpha_1) = 0$$ by anti-symmetry of $\chi$, so $\chi(\alpha_1, \alpha_2)$ does not contribute to the quantum integer. This assumption can be used to simplify the combinatorics in §[6](#sec:semistable-invariants){reference-type="ref" reference="sec:semistable-invariants"} but is ultimately unnecessary. # Defining semistable invariants {#sec:semistable-invariants} ## {#section-30} Using [\[eq:VW-invariants-relation-simple\]](#eq:VW-invariants-relation-simple){reference-type="eqref" reference="eq:VW-invariants-relation-simple"}, the proof of case 2 of the main Theorem [Theorem 1](#thm:VW-invars){reference-type="ref" reference="thm:VW-invars"} is clear. It remains to use [\[eq:VW-invariants-relation\]](#eq:VW-invariants-relation){reference-type="eqref" reference="eq:VW-invariants-relation"} to prove case 1. This is a combinatorial result which we prove in this section as Corollary [Corollary 1](#cor:VW-semistable-invariants){reference-type="ref" reference="cor:VW-semistable-invariants"}. ## {#section-31} **Definition 8**. Let $(A, +)$ be a commutative monoid, and let $$\mathscr{A}\coloneqq \bigoplus_{(k,\alpha) \in \mathbb{Z}\times A} \mathscr{A}_{k,\alpha} \coloneqq \mathbb{Q}(t^{\frac{1}{2}}) \left[\{\widetilde{\mathsf{Z}}_{k,0}^\pm\}_k \cup \{\widetilde{\mathsf{Z}}_{k,\alpha}\}_{k,\alpha}\right]$$ be the $(\mathbb{Z}\times A)$-graded $\mathbb{Q}(t^{\frac{1}{2}})$-algebra where $\widetilde{\mathsf{Z}}_{k,\alpha}$ has degree $(k, \alpha) \in \mathbb{Z}\times A$, and $\mathscr{A}_{k,\alpha} \subset \mathscr{A}$ is the $(k,\alpha)$-weight part.
Given a set map $\widetilde{\chi}\colon (\mathbb{Z}\times A)^2 \to \mathbb{Z}$, define $$\begin{aligned} {[-, -]}\colon \mathscr{A}_{k_1,\alpha_1} \times \mathscr{A}_{k_2,\alpha_2} &\to \mathscr{A}_{k_1+k_2,\alpha_1+\alpha_2} \\ (x, y) &\mapsto \left[\widetilde{\chi}\left((k_1,\alpha_1),(k_2,\alpha_2)\right)\right]_t \cdot xy \end{aligned}$$ and extend it to $[-, -]\colon \mathscr{A}\times \mathscr{A}\to \mathscr{A}$ bilinearly. ## {#section-32} **Lemma 4**. *If $\widetilde{\chi}$ is bilinear and anti-symmetric, then $[-, -]$ is a Lie bracket.* *Proof.* Anti-symmetry of $\widetilde{\chi}$ makes $[-, -]$ anti-symmetric. It remains to verify the Jacobi identity $$[x_1, [x_2, x_3]] + [x_2, [x_3, x_1]] + [x_3, [x_1, x_2]] = 0$$ for $x_i \in \mathscr{A}_{k_i,\alpha_i}$. Set $\widetilde{\chi}_{ij} \coloneqq \widetilde{\chi}((k_i, \alpha_i), (k_j,\alpha_j))$, and let $(a,b,c) \coloneqq (\widetilde{\chi}_{12}, \widetilde{\chi}_{13}, \widetilde{\chi}_{23})$ for short. Then we must prove $$\label{eq:q-integer-jacobi-identity} [a + b]_t [c]_t - [c - a]_t [b]_t - [b + c]_t [a]_t = 0.$$ This is true for arbitrary integers $a, b, c \in \mathbb{Z}$, using the quantum integer addition formula $[a+b]_t = (-1)^a t^{\pm \frac{a}{2}} [b]_t + (-1)^b t^{\mp \frac{b}{2}} [a]_t$ to expand all terms. ◻ ## {#section-33} **Proposition 3**. *Fix $k_1, k_2 \in \mathbb{Z}$. Suppose $\widetilde{\chi}$ is bilinear and anti-symmetric, and $$\widetilde{\chi}\left((k_1, 0), (k_2, 0)\right) = 0.$$ Let $\tau$ be a stability condition on $A$.
In the quotient of $\mathscr{A}$ by $$\label{eq:invariants-relation} \left[\widetilde{\mathsf{Z}}_{k_1,\alpha}, \widetilde{\mathsf{Z}}_{k_2,0}\right] + \left[\widetilde{\mathsf{Z}}_{k_1,0}, \widetilde{\mathsf{Z}}_{k_2,\alpha}\right] + \sum_{\substack{\alpha_1+\alpha_2=\alpha\\\tau(\alpha_1)=\tau(\alpha_2)}} \left[\widetilde{\mathsf{Z}}_{k_1,\alpha_1}, \widetilde{\mathsf{Z}}_{k_2,\alpha_2}\right] = 0, \qquad \forall \alpha \in A,$$ the elements $\{\mathsf{Z}^{(i)}_\alpha\}_\alpha \in \mathscr{A}_{0,\alpha}$ uniquely defined by the formulas $$\label{eq:semistable-invariants} \widetilde{\mathsf{Z}}_{k_i,\alpha} = \sum_{\substack{n>0\\\alpha_1 + \cdots + \alpha_n = \alpha\\\forall j: \tau(\alpha_j)=\tau(\alpha)}} \frac{1}{n!} \left[\mathsf{Z}^{(i)}_{\alpha_n}, \left[\cdots \left[ \mathsf{Z}^{(i)}_{\alpha_2}, [\mathsf{Z}^{(i)}_{\alpha_1}, \widetilde{\mathsf{Z}}_{k_i,0}]\right] \cdots \right]\right]$$ are independent of $i$, namely $\mathsf{Z}^{(1)}_\alpha = \mathsf{Z}^{(2)}_\alpha$ for all $\alpha \in A$.* For instance, if $\alpha$ is indecomposable, the rhs of [\[eq:semistable-invariants\]](#eq:semistable-invariants){reference-type="eqref" reference="eq:semistable-invariants"} consists of only the $n=1$ term $[\mathsf{Z}^{(i)}_\alpha, \widetilde{\mathsf{Z}}_{k_i,0}]$. 
Since $\widetilde{\mathsf{Z}}_{k_i,0}$ is invertible, $\mathsf{Z}^{(i)}_\alpha \propto \widetilde{\mathsf{Z}}_{k_i,\alpha} \widetilde{\mathsf{Z}}^{-1}_{k_i,0} \in \mathscr{A}_{0,\alpha}$, and then one can determine the scalar by the definition of the Lie bracket: $$\widetilde{\mathsf{Z}}_{k_i,\alpha} = [\mathsf{Z}^{(i)}_\alpha, \widetilde{\mathsf{Z}}_{k_i,0}] = [\widetilde{\chi}((0,\alpha), (k_i,0))]_t \cdot \mathsf{Z}^{(i)}_\alpha \widetilde{\mathsf{Z}}_{k_i,0}.$$ This uniquely defines $\mathsf{Z}^{(i)}_\alpha$, and the relation [\[eq:invariants-relation\]](#eq:invariants-relation){reference-type="eqref" reference="eq:invariants-relation"} immediately implies $\mathsf{Z}^{(1)}_\alpha = \mathsf{Z}^{(2)}_\alpha$. ## {#section-34} **Corollary 1**. *Given elements $\{\widetilde{\mathsf{VW}}_\alpha(k_1, t), \widetilde{\mathsf{VW}}_\alpha(k_2, t)\}_\alpha \subset \mathbb{Q}(t^{\frac{1}{2}})$ satisfying [\[eq:VW-invariants-relation\]](#eq:VW-invariants-relation){reference-type="eqref" reference="eq:VW-invariants-relation"}, $$\label{eq:VW-semistable-invariants} \widetilde{\mathsf{VW}}_{\alpha}(k_i, t) \eqqcolon \sum_{\substack{n>0\\\alpha_1 + \cdots + \alpha_n = \alpha\\\forall j: \tau(\alpha_j) = \tau(\alpha)}} \frac{1}{n!} \prod_{j=1}^n \Big[\lambda_{k_i}(\alpha_j) - \chi\Big(\sum_{k=1}^{j-1} \alpha_k, \alpha_j\Big)\Big]_t \mathsf{VW}_{\alpha_j}(k_i, t)$$ uniquely defines elements $\{\mathsf{VW}_\alpha(k_i, t)\}_{\alpha} \subset \mathbb{Q}(t^{\frac{1}{2}})$ which are independent of $i = 1, 2$.* *Proof.* For the Lie bracket, take the quantity $$\widetilde{\chi}\left((k_1,\alpha_1), (k_2,\alpha_2)\right) \coloneqq \lambda_{k_2}(\alpha_1) - \lambda_{k_1}(\alpha_2) + \chi(\alpha_1,\alpha_2),$$ which appears in [\[eq:VW-invariants-relation\]](#eq:VW-invariants-relation){reference-type="eqref" reference="eq:VW-invariants-relation"}.
The specialization $$\widetilde{\mathsf{Z}}_{k,\alpha} \mapsto \begin{cases} \widetilde{\mathsf{VW}}_{k,\alpha} & \alpha \neq 0 \\ 1 & \alpha = 0 \end{cases}$$ is valid because the required relation [\[eq:invariants-relation\]](#eq:invariants-relation){reference-type="eqref" reference="eq:invariants-relation"} becomes the wall-crossing formula [\[eq:VW-invariants-relation\]](#eq:VW-invariants-relation){reference-type="eqref" reference="eq:VW-invariants-relation"}. Then [\[eq:semistable-invariants\]](#eq:semistable-invariants){reference-type="eqref" reference="eq:semistable-invariants"} becomes [\[eq:VW-semistable-invariants\]](#eq:VW-semistable-invariants){reference-type="eqref" reference="eq:VW-semistable-invariants"}, as desired. ◻ ## {#section-35} *Proof of Proposition [Proposition 3](#prop:semistable-invariants){reference-type="ref" reference="prop:semistable-invariants"}.* We will induct on $\mathop{\mathrm{rank}}\alpha$ to show that $\mathsf{Z}_\alpha^{(i)} \in \mathscr{A}_{0,\alpha}$ is uniquely defined, and that $\mathsf{Z}_\alpha^{(1)} = \mathsf{Z}_\alpha^{(2)}$. By the induction hypothesis, $$\label{eq:semistable-invariants-inductive} \widetilde{\mathsf{Z}}_{k_i,\alpha} = [\mathsf{Z}_{\alpha}^{(i)}, \widetilde{\mathsf{Z}}_{k_i,0}] + \sum_{\substack{n>1\\\alpha_1 + \cdots + \alpha_n = \alpha\\\forall j: \tau(\alpha_j)=\tau(\alpha)}} \frac{1}{n!} \left[\mathsf{Z}^{(1)}_{\alpha_n}, \left[\cdots \left[ \mathsf{Z}^{(1)}_{\alpha_2}, [\mathsf{Z}^{(1)}_{\alpha_1}, \widetilde{\mathsf{Z}}_{k_i,0}]\right] \cdots \right]\right]$$ and the sum lives in $\mathscr{A}_{k_i,\alpha}$. Then $[\mathsf{Z}^{(i)}_\alpha, \widetilde{\mathsf{Z}}_{k_i,0}] \propto \mathsf{Z}_\alpha^{(i)} \widetilde{\mathsf{Z}}_{k_i,0} \in \mathscr{A}_{k_i,\alpha}$ as well. Since $\widetilde{\mathsf{Z}}_{k_i,0}$ is invertible, this uniquely defines $\mathsf{Z}^{(i)}_\alpha \in \mathscr{A}_{0,\alpha}$.
To show $\mathsf{Z}_\alpha^{(1)} = \mathsf{Z}_\alpha^{(2)}$, plug [\[eq:semistable-invariants-inductive\]](#eq:semistable-invariants-inductive){reference-type="eqref" reference="eq:semistable-invariants-inductive"} into [\[eq:invariants-relation\]](#eq:invariants-relation){reference-type="eqref" reference="eq:invariants-relation"} and use the hypothesis on $\widetilde{\chi}$ to get $$\label{eq:semistable-invariants-comparison} \left[\widetilde{\chi}\left((0,\alpha),(k_1,0)\right)\right]_t \left[\widetilde{\chi}\left((0,\alpha),(k_2,0)\right)\right]_t \left(\mathsf{Z}_\alpha^{(1)} - \mathsf{Z}_\alpha^{(2)}\right) + \sum_{\substack{n>1\\\alpha_1+\cdots+\alpha_n=\alpha\\\forall j:\tau(\alpha_j)=\tau(\alpha)}} \!\!C_{\alpha_1,\ldots,\alpha_n} = 0$$ where the first two terms are the $n=1$ case of the sum [\[eq:semistable-invariants-inductive\]](#eq:semistable-invariants-inductive){reference-type="eqref" reference="eq:semistable-invariants-inductive"} and, with the abbreviations $z_\beta \coloneqq \mathsf{Z}_\beta^{(1)}$ and $x \coloneqq \widetilde{\mathsf{Z}}_{k_1,0}$ and $y \coloneqq \widetilde{\mathsf{Z}}_{k_2,0}$, $$C_{\alpha_1,\ldots,\alpha_n} \coloneqq \sum_{m=0}^n \bigg[\frac{1}{m!} \Big[z_{\alpha_m}, \big[\cdots[z_{\alpha_2}, [z_{\alpha_1}, x]]\cdots\big]\Big], \frac{1}{(n-m)!} \Big[z_{\alpha_n}, \big[\cdots[z_{\alpha_{m+2}}, [z_{\alpha_{m+1}}, y]]\cdots\big]\Big]\bigg].$$ To complete the inductive step, it remains to show that the sum in [\[eq:semistable-invariants-comparison\]](#eq:semistable-invariants-comparison){reference-type="eqref" reference="eq:semistable-invariants-comparison"} vanishes. Let $U(\mathscr{A}) = \bigoplus_{k,\alpha} U(\mathscr{A})_{k,\alpha}$ be the universal enveloping algebra of $\mathscr{A}$. It inherits the $(\mathbb{Z}\times A)$-grading since the Lie bracket is homogeneous. On $\widehat{U}(\mathscr{A}) \coloneqq \prod_{k,\alpha} U(\mathscr{A})_{k,\alpha}$, i.e. 
the completion of $U(\mathscr{A})$ with respect to the grading, consider the operator $\mathop{\mathrm{ad}}_z(-) \coloneqq [z, -]$ where $z \coloneqq \sum_{\beta: \tau(\beta)=\tau(\alpha)} z_\beta$. Then $$\sum_{\substack{n\ge 0\\\alpha_1+\cdots+\alpha_n=\alpha\\\forall j: \tau(\alpha_j)=\tau(\alpha)}} C_{\alpha_1,\ldots,\alpha_n} = (k_1+k_2,\alpha)\text{-weight piece of } \left[e^{\mathop{\mathrm{ad}}_z} x, e^{\mathop{\mathrm{ad}}_z} y\right] \in \widehat{U}(\mathscr{A}).$$ A standard combinatorial result in Lie theory says $e^{\mathop{\mathrm{ad}}_u} v = e^u v e^{-u}$ for any $u, v \in \widehat{U}(\mathscr{A})$, so $$\label{eq:semistable-invariant-combinatorics} \left[e^{\mathop{\mathrm{ad}}_z} x, e^{\mathop{\mathrm{ad}}_z} y\right] = [e^z x e^{-z}, e^z y e^{-z}] = e^z [x, y] e^{-z} = 0.$$ The hypothesis on $\widetilde{\chi}$ implies $[x, y] = 0$, whence the last equality. We are done since $C_\emptyset = [x, y] = 0$ by definition, and $C_{\alpha_1} = 0$ by the Jacobi identity. ◻ ## {#section-36} **Remark 5**. The reader may wonder whether it was truly necessary to introduce the Lie algebra $(\mathscr{A}, [-, -])$, and to prove Corollary [Corollary 1](#cor:VW-semistable-invariants){reference-type="ref" reference="cor:VW-semistable-invariants"} via the more abstract Proposition [Proposition 3](#prop:semistable-invariants){reference-type="ref" reference="prop:semistable-invariants"}. The answer is that a surprising amount of combinatorics is hidden in [\[eq:semistable-invariant-combinatorics\]](#eq:semistable-invariant-combinatorics){reference-type="eqref" reference="eq:semistable-invariant-combinatorics"} and the quantum integer identity [\[eq:q-integer-jacobi-identity\]](#eq:q-integer-jacobi-identity){reference-type="eqref" reference="eq:q-integer-jacobi-identity"} which made $[-, -]$ into a Lie bracket. 
One could directly substitute [\[eq:VW-semistable-invariants\]](#eq:VW-semistable-invariants){reference-type="eqref" reference="eq:VW-semistable-invariants"} into [\[eq:VW-invariants-relation\]](#eq:VW-invariants-relation){reference-type="eqref" reference="eq:VW-invariants-relation"} and the first half of the proof of Proposition [Proposition 3](#prop:semistable-invariants){reference-type="ref" reference="prop:semistable-invariants"} would work equally well, but then one must prove the vanishing of the sum of the combinatorial coefficients $$\label{eq:numerical-combinatorics} \begin{aligned} C_{\alpha_1,\ldots,\alpha_n} &= \sum_{m=0}^n \binom{n}{m} \Big[\sum_{i=1}^m b_i - \sum_{j=m+1}^n a_j + \sum_{i=1}^m \sum_{j=m+1}^n c_{ij} \Big]_t \\ &\qquad\qquad\qquad\prod_{i=1}^m [a_i - \sum_{k=1}^{i-1} c_{k,i}]_t \prod_{j=m+1}^n [b_j - \sum_{k=m+1}^{j-1} c_{k,j}]_t, \end{aligned}$$ where $a_i \coloneqq \lambda_{k_1}(\alpha_i)$ and $b_j \coloneqq \lambda_{k_2}(\alpha_j)$ and $c_{ij} \coloneqq \chi(\alpha_i, \alpha_j)$. In fact, it is true for arbitrary integers $a_i$, $b_j$, and $c_{ij} = -c_{ji}$ that $$\sum_{\sigma \in S_n} C_{\alpha_{\sigma(1)}, \ldots, \alpha_{\sigma(n)}} = 0,$$ but it appears fairly messy and complicated to use quantum integer identities to rearrange the terms and identify which ones cancel with which. [^14] The Lie bracket formalism helps to disentangle the quantum integer identities from the rearrangement cancellations. [^1]: For this paper, all moduli stacks are already $\mathbb{C}^\times$-rigidified unless indicated otherwise; see Remark [Remark 2](#rem:rigidification){reference-type="ref" reference="rem:rigidification"}. In particular, stabilizers of stable points are *trivial*, not the group $\mathbb{C}^\times$ of scaling automorphisms. 
[^2]: The two exceptions are Example [Example 2](#ex:symmetrized-pullback-shifted-cotangent-bundle){reference-type="ref" reference="ex:symmetrized-pullback-shifted-cotangent-bundle"} and its use in the construction of §[3.3](#sec:quiver-stacks-obstruction-theory){reference-type="ref" reference="sec:quiver-stacks-obstruction-theory"}, but we also give an alternate non-derived construction in §[3.4](#sec:quiver-stacks-obstruction-theory-alternate){reference-type="ref" reference="sec:quiver-stacks-obstruction-theory-alternate"}. [^3]: The constructions of [@Tanaka2020] are given on stable loci only, but Atiyah classes satisfying all the usual properties can be defined for general Artin stacks [@Kuhn2023]. So we will not worry about the details of how to construct the obstruction theory on the whole stack. [^4]: For instance, we will demonstrate in a sequel paper [@KLT] how they can be applied, along with wall-crossing ideas from [@Joyce2021 Theorem 5.8], to study wall-crossing for invariants like $\mathsf{VW}_\alpha(t)$ under variation of stability condition. [^5]: If $\mathfrak{M}^\mathrm{der}= i(\mathfrak{M})$ is actually a classical stack, then this is just the usual cotangent complex $\mathbb{L}_\mathfrak{M}\in D\mathsf{QCoh}(\mathfrak{M}) = L_{\mathsf{QCoh}}(\mathfrak{M}^\mathrm{der})$. [^6]: Note that a locally finitely presented morphism $f\colon \mathfrak{M}\to \mathfrak{N}$ of classical stacks is not necessarily locally finitely presented as a morphism $i(f)\colon i(\mathfrak{M}) \to i(\mathfrak{N})$ of derived stacks. [^7]: We will give a more self-contained and elementary proof in [@KLT], using the valuative criterion and Langton's elementary modifications. 
[^8]: The resolution property is not strictly necessary: it is enough to assume that $N^\mathrm{vir}\big|_{M^\mathsf{T}}$ has a global $2$-term $\mathsf{T}$-equivariant resolution by $\mathsf{T}$-equivariant vector bundles (and one must replace $K_\mathsf{T}^\circ(M)$ by the Grothendieck group of $\mathsf{T}$-equivariant perfect complexes). A generalization to DM stacks $M$ is also possible under somewhat strong assumptions on $M \setminus M^\mathsf{T}$; see [@Qu2018 Remark 3.5]. [^9]: A base change to a double cover of $\mathsf{T}$ may be required. We assume throughout that this has already been done. [^10]: Our conventions differ slightly from those of Thomas because our $t$ is his $t^{-1}$. [^11]: It is therefore tempting to rewrite [\[eq:master-space-localization\]](#eq:master-space-localization){reference-type="eqref" reference="eq:master-space-localization"} as an integral over $M^{\mathbb{C}^\times}$, by pushing forward the integrand along $\iota_\mathsf{T}$, but then the second term $\widehat{\mathsf{e}}(N_{\iota_{\mathbb{C}^\times}}^\mathrm{vir}) \in K_{\mathbb{C}^\times \times \mathsf{T}}(M^{\mathbb{C}^\times})_\mathrm{loc}$ is not invertible and so this doesn't make sense. [^12]: There is a mild subtlety here because the inverse of $1 - z^a t^b L$ is a series expansion in $(1 - z^a t^b)^{-1} (1 - L)$, which terminates because $1 - L$ is nilpotent, and not a series expansion in $z^a t^b L$, which does not terminate because $L$ is not necessarily nilpotent. Nonetheless, we can still treat quantities like [\[eq:virtual-normal-bundle-in-localization\]](#eq:virtual-normal-bundle-in-localization){reference-type="eqref" reference="eq:virtual-normal-bundle-in-localization"} formally like rational functions valued in Chern roots. [^13]: No extra summands can exist in the $\mathbb{C}^\times$-weight decomposition of $E$, e.g. a weight-$z^2$ piece, because they would be destabilizing.
[^14]: A manageable case is when all the $c_{ij}$ are zero, which occurs if $\mathcal{O}_S(1)$ is generic for $\alpha$. Then the terms in [\[eq:numerical-combinatorics\]](#eq:numerical-combinatorics){reference-type="eqref" reference="eq:numerical-combinatorics"} for $C_{\alpha_1,\ldots,\alpha_n}$ are invariant under $S_m \times S_{n-m}$, and so the sum may be rewritten as $\sum_{I \sqcup J = \{1, \ldots, n\}}$, and a single application of the quantum integer addition formula suffices. Details are left to the interested reader.
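The symmetrized vanishing of the coefficients [\[eq:numerical-combinatorics\]](#eq:numerical-combinatorics){reference-type="eqref" reference="eq:numerical-combinatorics"} discussed above can also be checked by machine for small $n$. The sketch below (in Python, with exact rational arithmetic) assumes the balanced convention $[m]_t = (t^{m/2} - t^{-m/2})/(t^{1/2} - t^{-1/2})$, written in the variable $q = t^{1/2}$; up to normalization this convention is forced by the $n = 1$ case, which requires $[-m]_t = -[m]_t$.

```python
from fractions import Fraction
from itertools import permutations
from math import comb, prod

def qint(m, q):
    # balanced quantum integer [m]_t in the variable q = t^(1/2),
    # so that [-m]_t = -[m]_t
    return (q**m - q**(-m)) / (q - q**(-1))

def C(a, b, c, q):
    # one ordered coefficient C_{alpha_1,...,alpha_n}, with (0-based indices)
    # a[i] = lambda_{k_1}(alpha_i), b[j] = lambda_{k_2}(alpha_j),
    # and c[i][j] = chi(alpha_i, alpha_j) antisymmetric
    n = len(a)
    total = Fraction(0)
    for m in range(n + 1):
        pre = qint(sum(b[:m]) - sum(a[m:])
                   + sum(c[i][j] for i in range(m) for j in range(m, n)), q)
        p1 = prod((qint(a[i] - sum(c[k][i] for k in range(i)), q)
                   for i in range(m)), start=Fraction(1))
        p2 = prod((qint(b[j] - sum(c[k][j] for k in range(m, j)), q)
                   for j in range(m, n)), start=Fraction(1))
        total += comb(n, m) * pre * p1 * p2
    return total

def sym_sum(a, b, c, q):
    # the symmetrized sum over S_n, claimed to vanish identically
    n = len(a)
    return sum(C([a[i] for i in s], [b[i] for i in s],
                 [[c[i][j] for j in s] for i in s], q)
               for s in permutations(range(n)))

q = Fraction(2)
# n = 2 with a nonzero antisymmetric pairing c_{12} = 1:
print(sym_sum([1, 2], [3, 5], [[0, 1], [-1, 0]], q))  # exact cancellation: 0
```

For the sample data shown the symmetrized sum is exactly $0$, in agreement with the claimed identity.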
{ "id": "2309.03673", "title": "Semistable refined Vafa-Witten invariants", "authors": "Henry Liu", "categories": "math.AG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider initial value problems for $\varepsilon^2\,\varphi''+a(x)\,\varphi=0$ in the highly oscillatory regime, i.e., with $a(x)>0$ and $0<\varepsilon\ll 1$. We discuss their efficient numerical integration on coarse grids, but still yielding accurate solutions. The $\mathcal{O}(h^2)$ one-step method from [@ABN] is based on an analytic WKB-preprocessing of the equation. Here we extend this method to $\mathcal{O}(h^3)$ accuracy. author: - Anton Arnold - Jannis Körner nocite: "[@*]" title: High-order WKB-based Method For The 1D Stationary Schrödinger Equation In The Semi-classical Limit --- # INTRODUCTION This paper is concerned with efficient numerical methods for highly oscillatory ordinary differential equations (ODEs) of the form $$\label{SchIVP} \displaystyle \varepsilon^2 \varphi''(x) + a(x) \varphi(x)= 0 \,, \quad x \in (0,1)\,;\qquad \displaystyle\varphi(0)=\varphi_0\in\mathbb{C}\,,\quad \displaystyle\varepsilon \varphi'(0)=\varphi_1\in\mathbb{C}\,.$$ Here, $0<\varepsilon\ll 1$ is a small parameter and $a(x)\ge a_0>0$ a sufficiently smooth function, such that ([\[SchIVP\]](#SchIVP){reference-type="ref" reference="SchIVP"}) does not include a turning point. For extensions with a turning point, i.e. a sign change of $a(x)$, we refer to [@AN; @AD]. Such problems have applications, e.g. in quantum transport [@BAP06; @Cla_Naoufel], mechanical systems (see references in [@lub]), and cosmology [@AHLH20]. For $\varepsilon\ll 1$, solutions to ([\[SchIVP\]](#SchIVP){reference-type="ref" reference="SchIVP"}) are highly oscillatory, and hence standard ODE-solvers become inefficient since they need to resolve each oscillation by choosing $h=\mathcal{O}(\varepsilon)$. In [@lub], an $\varepsilon$-uniform scheme with $\mathcal{O}(h^2)$ accuracy for large step sizes up to $h =\mathcal{O}(\sqrt\varepsilon)$ was constructed, see also §XIV of [@HLW] and references therein. The $\mathcal{O}(h^2)$-scheme of [@ABN] is based on a (w.r.t. 
$\varepsilon$) second order WKB-approximation of ([\[SchIVP\]](#SchIVP){reference-type="ref" reference="SchIVP"}) and makes the method even *asymptotically correct*, i.e. the error decreases with $\varepsilon$ even on a coarse spatial grid, if the phase function can be obtained analytically or with spectral accuracy [@AKU]. Here we present an $\mathcal{O}(h^3)$ extension of the latter method; for its detailed analysis we refer to [@ADK]. # WKB-TRANSFORMATION AS ANALYTIC PREPROCESSING The essence of this numerical method is to transform the highly oscillatory problem ([\[SchIVP\]](#SchIVP){reference-type="ref" reference="SchIVP"}) into a much "smoother" problem by eliminating the dominant oscillation frequency. Following [@ABN] we first introduce the vector function $U(x):=\Big(a^{1/4} \varphi(x)\,,\,\displaystyle\frac{\varepsilon (a^{1/4}\varphi)'(x)}{\sqrt{a(x)}}\Big)^\top$. Then we set $Z(x) := e^{-{i\over \varepsilon} \mathbf{\Phi}^\varepsilon(x)} \mathbf{P}\,U(x)$ with the matrices $$\mathbf{P}:= {1\over \sqrt{2}} \left( \begin{array}{cc} i&1\\ 1&i \end{array} \right)\;; \quad \mathbf{P}^{-1} = {1\over \sqrt{2}} \left( \begin{array}{cc} -i&1\\ 1&-i \end{array} \right)\,,$$ $$\label{phase} \mathbf{\Phi}^\varepsilon(x) := %\int_0^x D^\eps(y)\, dy = \mbox{diag}(\phi(x), -\phi(x))\,;\qquad \phi(x):=\int_0^x \left( \sqrt{a(\tau)} - \varepsilon^2 b(\tau)\right) \,d\tau\,; \qquad b(x) := -\frac{1}{2a(x)^{1/4}} \big(a(x)^{-1/4}\big)''\,.$$ We remark that the (real valued) phase function $\phi$ is precisely the phase in the (w.r.t. $\varepsilon$) second order WKB-approximation of ([\[SchIVP\]](#SchIVP){reference-type="ref" reference="SchIVP"}) (cf. [@ABN; @LL85]). 
Then, $Z$ satisfies the ODE initial value problem (IVP) $$\label{EQZ} \displaystyle Z' = \varepsilon\mathbf{N}^\varepsilon(x) Z\,,\quad x\in(0,1);\qquad \displaystyle Z(0)=Z_I=\mathbf{P}\,U_I\,; \qquad U_I= U(0)\,.$$ $\mathbf{N}^\varepsilon$ is an off-diagonal matrix with the entries $N^\varepsilon_{1,2} (x)= b(x) e^{-\frac{2i}{ \varepsilon} \phi(x)},\, N^\varepsilon_{2,1} (x)= b(x) e^{\frac{2i}{ \varepsilon} \phi(x)}$. While the ODE ([\[EQZ\]](#EQZ){reference-type="ref" reference="EQZ"}) is still oscillatory, in fact with doubled frequency, $Z$ is "smoother" than $\varphi$ and $U$, as its oscillation amplitude is reduced to $\mathcal{O}(\varepsilon^2)$, cf. [@ABN]. After numerically solving the ODE ([\[EQZ\]](#EQZ){reference-type="ref" reference="EQZ"}), the original solution is recovered by $U(x)=\mathbf{P}^{-1} e^{{i\over \varepsilon} \mathbf{\Phi}^\varepsilon(x)} Z(x)\,.$ # ASYMPTOTICALLY CORRECT NUMERICAL SCHEME To construct an asymptotically correct one-step scheme for the IVP ([\[EQZ\]](#EQZ){reference-type="ref" reference="EQZ"}) on the uniform grid $x_n:=n\,h;\,n=0,...,N$ with the step size $h=1/N$, we consider first the truncated Picard iteration for ([\[EQZ\]](#EQZ){reference-type="ref" reference="EQZ"}) (with $P=2$ in [@ABN], and $P=3$ for the $\mathcal{O}(h^3)$ method here): $$% \begin{equation}\label{picard_limit} Z(\eta)\approx Z(\xi)+\sum_{p=1}^{P}\varepsilon^{p}\mathbf{M}_{p}^{\varepsilon}(\eta;\xi)\,Z(\xi)\,, % \end{equation}$$ where the matrices $\mathbf{M}_{p}^{\varepsilon}$, $p=1,\,2,\,3$ are given by the iterated oscillatory integrals $$% \begin{equation} % \mathbf{M}_{p}^{\eps}(\eta;\xi)&=\int_{\xi}^{\eta}\int_{\xi}^{y_{1}}\cdots\int_{\xi}^{y_{p-1}}\mathbf{N}^{\eps}(y_{1})\cdots\mathbf{N}^{\eps}(y_{p})\,dy_{p}\cdots\mathrm{d}y_{1}\Comma\nonumber\\ \mathbf{M}_{p}^{\varepsilon}(\eta;\xi)=\int_{\xi}^{\eta}\mathbf{N}^{\varepsilon}(y)\mathbf{M}_{p-1}^{\varepsilon}(y;\xi)\,dy\,,\quad \mathbf{M}_{0}^{\varepsilon}=\mathbf{I}\,. 
% \end{equation}$$ This is followed by a high order approximation of $\mathbf{M}_p^\varepsilon$ (w.r.t. both small parameters $h$ and $\varepsilon$) using the *asymptotic method* for oscillatory integrals [@INO] and a shifted variant [@ABN]. We denote these approximation matrices by $\mathbf{A}_n^{p,P}\approx \varepsilon^p\mathbf{M}_p^\varepsilon(x_{n+1};x_n);\,p=1,...,P$. The two resulting numerical schemes, referred to as WKB2 (for $P=2$) and WKB3 (for $P=3$) have the structure: $$%\begin{equation}\label{scheme} Z_{n+1}:=\left(\mathbf{I}+\sum_{p=1}^P \mathbf{A}^{p,P}_{n}\right)Z_{n}\,,\quad n=0,\dots,N-1\,. %\end{equation}$$ For the coefficients of $\mathbf{A}_n^{p,P}$ we have: $$% \begin{equation}\label{Q1} \mathbf{A}_{n}^{1,P}:=\varepsilon\left( \begin{array}{cc} 0 & \overline{Q_{1}^{P}(x_{n+1},x_n)} \\ Q_{1}^{P}(x_{n+1},x_n) & 0 \end{array}\right)\,,\quad \mathbf{A}_{n}^{2,P}:=\varepsilon^2 \left( \begin{array}{cc} Q_{2}^{P}(x_{n+1},x_n) & 0 \\ 0 & \overline{Q_{2}^{P}(x_{n+1},x_n)} \end{array}\right)\,,\quad P=2,\,3\,, % \end{equation}$$ with $$\begin{aligned} Q_{1}^{P}(x_{n+1},x_n) \!\!\!\!\!\!\!&&:=-\displaystyle \sum_{p=1}^{P}(i\varepsilon)^{p}\left(b_{p-1}(x_{n+1}) e^{\frac{2 i}{\varepsilon}\phi(x_{n+1})}-b_{p-1}(x_{n}) e^{\frac{2 i}{\varepsilon}\phi(x_{n})}\right) - e^{\frac{2 i}{\varepsilon}\phi(x_{n})}\displaystyle \sum_{p=1}^{P}(i\varepsilon)^{p+P}b_{p+P-1}(x_{n+1})\, h_{p}\Big(\frac{2}{\varepsilon}s_n\Big)\,,\\[3mm] \displaystyle Q_{2}^{2}(x_{n+1},x_n) \!\!\!\!\!\!\!&&:= \displaystyle - i \varepsilon(x_{n+1} -x_n) { b(x_{n+1})b_0(x_{n+1}) +b(x_{n}) b_0(x_{n}) \over 2} \\ && \displaystyle- \varepsilon^2 b_0(x_{n}) b_0(x_{n+1})\, h_1\Big(-{2\over \varepsilon}s_{n}\Big) +i \varepsilon^3 b_1(x_{n+1}) [b_0(x_{n})-b_0(x_{n+1})]\, h_2\Big(-{2\over \varepsilon}s_{n}\Big)\,,\\[3mm] Q_{2}^3(x_{n+1},x_n)\!\!\!\!\!\!\!&&:=-i\varepsilon Q_{S}[bb_{0}](x_{n+1},x_n)\\ &&-\varepsilon^{2}\Big[b_{0}(x_n) e^{\frac{2 i}{\varepsilon}\phi(x_n)}\left[b_{0}(y) e^{-\frac{2 
i}{\varepsilon}\phi(y)}\right]_{x_n}^{x_{n+1}}-Q_{S}[bb_{1}](x_{n+1},x_n)\Big]\\ &&+ i\varepsilon^{3}\big[b_{0}(x_n)b_{1}(x_{n+1})-b_{1}(x_n)b_{0}(x_{n+1})\big]\, h_{1}\Big(-\frac{2}{\varepsilon}s_{n}\Big)\\ &&+\varepsilon^{4}\big[\left(b_{0}(x_n)+b_{0}(x_{n+1})\right)b_{2}(x_{n+1})-b_{1}(x_n)b_{1}(x_{n+1})-2b_{0}(x_{n+1})b_{3}(x_{n+1})s_{n}\big]\, h_{2}\Big(-\frac{2}{\varepsilon}s_{n}\Big)\\ &&+ i\varepsilon^{5}\big[\left(b_{0}(x_{n+1})-b_{0}(x_n)\right)b_{3}(x_{n+1})-\left(b_{1}(x_{n+1})-b_{1}(x_n)\right)b_{2}(x_{n+1})\big]\, h_{3}\Big(-\frac{2}{\varepsilon}s_{n}\Big)\,,\end{aligned}$$ and the abbreviations $$s_n:= \phi(x_{n+1})-\phi(x_n)\,;\quad b_{0}(x):=\frac{b(x)}{2\phi^{\prime}(x)}\,,\quad b_{p}(x):=\frac{b_{p-1}^{\prime}(x)}{2\phi^{\prime}(x)}\,; \qquad h_{p}(x):= e^{i x}-\sum_{k=0}^{p-1}\frac{(i x)^{k}}{k!}\,, \quad p=1,\,2,\,3\,,$$ $$Q_{S}[f](\eta,\xi):=\frac{\eta-\xi}{6}\left(f(\xi)+4f\left(\frac{\xi+\eta}{2}\right)+f(\eta)\right)\,.$$ Finally we have $$% \begin{equation}\label{Q3} \mathbf{A}_{n}^{3,3}:=\varepsilon^3 \left( \begin{array}{cc} 0 & \overline{Q_{3}^{3}(x_{n+1},x_n)} \\ Q_{3}^{3}(x_{n+1},x_n) & 0 \end{array}\right)\,, % \end{equation}$$ with $$\begin{aligned} Q_{3}^{3}(x_{n+1},x_n) \!\!\!\!\!\!\!&&: =-\varepsilon^{2} e^{\frac{2 i}{\varepsilon}\phi(x_n)}\Bigg[\frac{x_{n+1}-x_n}{2}\left[c_{0}(x_{n+1})+b(x_n)b_{0}(x_n)b_{0}(x_{n+1})\right]\,h_{1}\Big(\frac{2}{\varepsilon}s_{n}\Big)\Bigg] \\ &&- i\varepsilon^{3} e^{\frac{2 i}{\varepsilon}\phi(x_n)}\Bigg[\frac{1}{2}\left[c_{1}(x_{n+1})(x_{n+1}-x_n)+d_{0}(x_{n+1})+b(x_n)b_{0}(x_n)\left(b_{1}(x_{n+1})(x_{n+1}-x_n)+f_{0}(x_{n+1})\right)\right] \\ &&\quad+\left(b_{0}(x_n)b_{0}(x_{n+1})^{2}+2s_{n}\left(l_{0}(x_{n+1})-b_{0}(x_n)\kappa_{0}(x_{n+1})\right)\right)\Bigg]\,h_{2}\Big(\frac{2}{\varepsilon}s_{n}\Big) \\ &&+\varepsilon^{4} e^{\frac{2 i}{\varepsilon}\phi(x_n)}\Bigg[\frac{1}{2}\left[e_{0}(x_{n+1})+d_{1}(x_{n+1})+b(x_n)b_{0}(x_n)\left(g_{0}(x_{n+1})+f_{1}(x_{n+1})\right)\right] \\ 
&&\quad+2\left[b_{0}(x_n)b_{0}(x_{n+1})b_{1}(x_{n+1})+\left(l_{0}(x_{n+1})-b_{0}(x_n)\kappa_{0}(x_{n+1})\right)\right]\Bigg]\,h_{3}\Big(\frac{2}{\varepsilon}s_{n}\Big)\,,\end{aligned}$$ and the abbreviations $$\begin{aligned} && c_{0}(x):=\frac{b(x)^{2}b_{0}(x)}{2\phi'(x)},\: c_{1}(x):=\frac{c_{0}'(x)}{2\phi'(x)},\:d_{0}(x):=\frac{c_{0}(x)}{2\phi'(x)},\: d_{1}(x):=\frac{d_{0}'(x)}{2\phi'(x)},\: e_{0}(x):=\frac{c_{1}(x)}{2\phi'(x)} ,\:\\ && f_{0}(x):=\frac{b_{0}(x)}{2\phi'(x)},\: f_{1}(x):=\frac{f_{0}'(x)}{2\phi'(x)},\: g_{0}(x):=\frac{b_{1}(x)}{2\phi'(x)},\: \kappa_{0}(x):=\frac{b(x)b_{1}(x)}{2\phi'(x)},\: % \kappa(x):=b(x)b_{1}(x),\: \kappa_{0}(x):=\frac{\kappa(x)}{2\phi'(x)},\: l_{0}(x):=\frac{b(x)b_{0}(x)b_{1}(x)}{2\phi'(x)}\,. % l(x):=b(x)b_{0}(x)b_{1}(x),\: l_{0}(x):=\frac{l(x)}{2\phi'(x)}\,.\end{aligned}$$ For these two schemes the following error estimates were proven in [@ABN; @ADK]: **Theorem 1**. *Let the coefficient $a\in C^\infty[0,1]$ satisfy $a(x)\ge a_0>0$ in $[0,1]$, and let $0<\varepsilon\le\varepsilon_0$ (for some $0<\varepsilon_0\le1$ such that $\phi'(x)\ne0$ for all $x\in[0,1]$ and $0<\varepsilon\le\varepsilon_0$). Then the global errors of the schemes WKB2 and WKB3 satisfy respectively $$\label{error_Z_WKB2} \|Z_{n}-Z(x_{n})\|_{} \le C \varepsilon^3 h^2\,,\quad \|U_n-U(x_{n})\|_{} \le C {h^{\gamma} \over \varepsilon} +C \varepsilon^3 h^2\,,\quad n=0,\dots,N\,,$$ $$\label{error_Z_WKB3} \|Z_{n}-Z(x_{n})\|_{} \le C \varepsilon^3 h^3 \max(\varepsilon,h)\,,\quad \|U_n-U(x_n)\|_{} \le C {h^{\gamma} \over \varepsilon} +C \varepsilon^3 h^3 \max(\varepsilon,h)\,,\quad n=0,\dots,N\,,$$ with $C$ independent of $n$, $h$, and $\varepsilon$. 
Here, $\gamma >0$ is the order of the chosen numerical integration method for computing the approximation $\phi_n$ of the phase integral ([\[phase\]](#phase){reference-type="ref" reference="phase"}),* *and $\|.\|$ denotes any vector norm in $\mathbb{C}^2$.* The estimates ([\[error_Z\_WKB2\]](#error_Z_WKB2){reference-type="ref" reference="error_Z_WKB2"}) and ([\[error_Z\_WKB3\]](#error_Z_WKB3){reference-type="ref" reference="error_Z_WKB3"}) include the phase error $|\phi_n-\phi(x_n)|$ only in the backward transformation $U_n=\mathbf{P}^{-1} e^{{i\over \varepsilon} \mathbf{\Phi}^\varepsilon_n} Z_n\,.$ In [@AKU; @ADK], extended error estimates also include the phase error of the analytic transformation from $U$ to $Z$. For simplicity we used here only a uniform spatial grid; an extension with an adaptive step size controller as well as a coupling to a Runge-Kutta method close to turning points and for the evanescent regime (i.e. for $a(x)<0$) is presented in [@KAD; @ADK]. # NUMERICAL TEST We revisit the example from [@ABN] with $a(x)=(x+\frac12)^2$. The initial conditions for ([\[SchIVP\]](#SchIVP){reference-type="ref" reference="SchIVP"}) are chosen as $\varphi_0=1$ and $\varphi_1=i$. In Figure [1](#Fig1){reference-type="ref" reference="Fig1"} we present the $L^\infty$--error of the numerical approximation on $[0,1]$, i.e. $\|U_n-U(x_n)\|_\infty$ as a function of the step size $h$ for several values of $\varepsilon$, computed with both WKB3 and WKB2. The error plots are in close agreement with the error estimates ([\[error_Z\_WKB3\]](#error_Z_WKB3){reference-type="ref" reference="error_Z_WKB3"}), ([\[error_Z\_WKB2\]](#error_Z_WKB2){reference-type="ref" reference="error_Z_WKB2"}), both when reducing $h$ and $\varepsilon$. Since the phase ([\[phase\]](#phase){reference-type="ref" reference="phase"}) is explicitly computable in this example, the error term $h^\gamma/\varepsilon$ drops out here. 
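For this coefficient the preprocessing quantities admit closed forms, which can be verified symbolically. The following sketch (using `sympy`; the displayed expressions for $b$ and $\phi$ are derived here rather than quoted from the references) checks that $a(x)=(x+\frac12)^2$ gives $b(x)=-\frac{3}{8}(x+\frac12)^{-3}$ and $\phi(x)=\frac{x^2+x}{2}+\varepsilon^2\,\frac{3}{16}\big(4-(x+\frac12)^{-2}\big)$, by confirming $\phi(0)=0$ and $\phi'=\sqrt{a}-\varepsilon^2 b$ instead of integrating symbolically.

```python
import sympy as sp

x, eps = sp.symbols('x varepsilon', positive=True)

# coefficient of the numerical test and the quantities from the WKB preprocessing
a = (x + sp.Rational(1, 2))**2
b = sp.simplify(-sp.diff(a**sp.Rational(-1, 4), x, 2) / (2 * a**sp.Rational(1, 4)))
b_exact = -sp.Rational(3, 8) / (x + sp.Rational(1, 2))**3

# candidate closed form for the phase phi; it is correct iff phi(0) = 0 and
# phi' = sqrt(a) - eps^2 * b, which avoids symbolic integration altogether
phi = x**2 / 2 + x / 2 + eps**2 * sp.Rational(3, 16) * (4 - (x + sp.Rational(1, 2))**(-2))

print(sp.simplify(b - b_exact))                                  # 0
print(sp.simplify(sp.diff(phi, x) - (sp.sqrt(a) - eps**2 * b)))  # 0
```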
Since the numerical scheme of WKB3 is much more involved than that of WKB2 and uses many more function calls, the efficiency gain of WKB3 cannot be inferred from Figure [1](#Fig1){reference-type="ref" reference="Fig1"} alone. But a detailed analysis of the CPU times of both methods at comparable error levels shows a speed-up of up to a factor of 20 for highly accurate computations [@ADK]. ![Log-log plot of the $L^\infty$--error of $U$ as a function of the step size $h$ and for three values of $\varepsilon$, computed with WKB3 (left) and WKB2 (right). The error curve saturates around $10^{-13}$ due to round-off errors.](fig_1.eps){#Fig1 width="467pt"} # ACKNOWLEDGMENTS The authors acknowledge support by the projects I3538-N32 and the doctoral school W1245 of the FWF. F.J. Agocs, W.J. Handley, A.N. Lasenby, and M.P. Hobson, Efficient method for solving highly oscillatory ordinary differential equations with applications to physical systems, *Phys. Rev. Research* **2**, 013030 (2020). A. Arnold, N. Ben Abdallah, and C. Negulescu, WKB-based schemes for the oscillatory 1D Schrödinger equation in the semi-classical limit, *SIAM J. Numer. Anal.* **49**, No. 4, 1436--1460 (2011). A. Arnold, K. Döpfner, Stationary Schrödinger equation in the semi-classical limit: WKB-based scheme coupled to a turning point, *Calcolo* **57**, No. 1, Paper no. 3 (2020). A. Arnold, K. Döpfner, J. Körner, WKB-based third order method for the highly oscillatory 1D stationary Schrödinger equation, preprint (2022). A. Arnold, C. Klein, B. Ujvari, WKB-method for the 1D Schrödinger equation in the semi-classical limit: enhanced phase treatment, *BIT Numerical Mathematics* **62**, 1--22 (2022). A. Arnold, C. Negulescu, Stationary Schrödinger equation in the semi-classical limit: numerical coupling of oscillatory and evanescent regions, *Numerische Mathematik* **138**, No. 2, 501--536 (2018). N. Ben Abdallah, O. 
Pinaud, Multiscale simulation of transport in an open quantum system: Resonances and WKB interpolation, *J. Comput. Phys.* **213**, no. 1, 288--310 (2006). E. Hairer, C. Lubich, G. Wanner, *Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations*, 2nd Ed., Springer-Verlag, Berlin Heidelberg (2006). A. Iserles, S.P. Nørsett, S. Olver, Highly oscillatory quadrature: The story so far. In: A. Bermudez de Castro, ed., *Proceedings of ENuMath, Santiago de Compostela (2006)*, 97--118, Springer Verlag, 2006. J. Körner, A. Arnold, K. Döpfner, WKB-based scheme with adaptive step size control for the Schrödinger equation in the highly oscillatory regime, *J. Comput. Appl. Math.* **404**, 113905 (2022). L.D. Landau, E.M. Lifschitz, *Quantenmechanik*, Akademie-Verlag, Berlin (1985). K. Lorenz, T. Jahnke, C. Lubich, Adiabatic integrators for highly oscillatory second-order linear differential equations with time-varying eigendecomposition, *BIT* **45**, no. 1, 91--115 (2005). C. Negulescu, N. Ben Abdallah, M. Mouis, An accelerated algorithm for 2D simulations of the quantum ballistic transport in nanoscale MOSFETs, *Journal of Computational Physics* **225**, no. 1, 74--99 (2007).
{ "id": "2310.00963", "title": "High-order WKB-based Method For The 1D Stationary Schr\\\"odinger Equation\n In The Semi-classical Limit", "authors": "Anton Arnold, Jannis K\\\"orner", "categories": "math.NA cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider suitably scaled $(s,2)$-Gagliardo seminorms on spaces of $S^1$-valued functions, and the equivalent $L^2$-norms of $s$-fractional gradients, and show that they Gamma-converge to vortex-type energies with respect to a suitable compact convergence. In order to obtain such a result, we first compare such functionals with Ginzburg-Landau energies depending on Riesz potentials, borrowing an argument from discrete-to-continuum techniques. This allows us to interpret the parameter $s$ (or, better, $1-s$) as having the same role as the geometric parameter $\varepsilon$ in the Ginzburg-Landau theory. As a consequence, the energies are coercive with respect to the flat-convergence of the Jacobians of the Riesz potentials, and we can give a lower bound by comparison. As for the upper bound, this is obtained in the spirit of a recent work by Solci, showing that indeed we have pointwise convergence for functions of the form $x/|x|$ close to singularities. Using the explicit form of the $(s,2)$-Gagliardo seminorms we show that we can neglect contributions close to singularities by the use of the Gagliardo-Nirenberg interpolation inequality, and use a direct computation far from the singularities. A variation of a classical density argument completes the proof. author: - "Roberto Alicandro[^1] , Andrea Braides[^2] , Margherita Solci[^3] , and Giorgio Stefani[^4]" title: | Topological singularities arising\ from fractional-gradient energies --- # Introduction Let $\Omega\subset\mathbb R^2$ be a bounded simply connected open set. 
For $s\in (0,1)$ and for $u$ belonging to the fractional Sobolev space $W^{s,2}(\Omega;\mathbb R^2)$ satisfying $|u|=1$ almost everywhere in $\Omega$, we define the energies $$\label{defFs-intro} F_s(u)=\frac{1}{|\log(1-s)|}\int_{\mathbb R^2}|\nabla_{\!s} u|^2\, dx,$$ where the *fractional gradient* $\nabla_{\!s} u$ is defined by $$\label{fracs-intro} \nabla_{\!s} u(x) = c_s\int_{\mathbb R^2} \frac{(u(x)-u(y))\otimes (y-x)}{|y-x|^{3+s}}\,dy$$ for almost every $x$, and $c_s$ is a normalization constant such that $\nabla_{\!s} u(x) \to \nabla u(x)$ as $s\to 1^-$ if $u\in C^1_c(\mathbb R^2; \mathbb R^2)$. Since the integrations are performed on the whole $\mathbb R^2$ we consider functions satisfying $u=u_0$ outside $\Omega$, where $u_0$ is a fixed function in $W^{1,2}(\mathbb R^2;\mathbb R^2)$ with $|u_0|=1$ on $\partial\Omega$. The asymptotic behaviour as $s\to 1^-$ of the functionals above will be determined by the degree of $u_0$ on $\partial\Omega$ only, and can be described as follows. $\bullet$ using the constraint $|u|=1$ in $\Omega$ and the equality $\nabla_s u=\nabla I_{1-s}u$, where $I_\alpha$ denotes the $\alpha$-*Riesz potential*, we deduce that from a sequence $u_s$ with equibounded energy we can extract a subsequence such that the Jacobians of $I_{1-s}u_s$ converge to a finite sum of Dirac deltas with integer coefficients $\mu=\sum_i d_i\delta_{x_i}$ with $x_i\in\overline \Omega$ and $\sum_i d_i={\rm deg}\,(u_0,\partial\Omega)$. This justifies the use of the convergence $u_s\to \mu$ defined by the convergence of the Jacobians of $I_{1-s}u_s$ to $\mu$ in the computation of the $\Gamma$-limit, and describes its domain. Note that $u_s$ are only functions in $W^{s,2}(\mathbb R^2)$ so that their Jacobians cannot be directly used in the definition of convergence, which explains the necessity of the use of $I_{1-s}u_s$. 
It is interesting to note that $I_{1-s}u_s(x)$ can be seen as a weighted average of $u_s$, so that the definition of convergence above is in accord with the one given by Solci [@Solci], where (piecewise-affine interpolations of) averages on small cubes are considered. $\bullet$ the lower bound for the $\Gamma$-limit is obtained by remarking that, up to some error, functionals [\[defFs-intro\]](#defFs-intro){reference-type="eqref" reference="defFs-intro"} can be estimated from below by $$\label{defFs-intro-1} \frac{1}{|\log(1-s)|}\int_{\Omega}|\nabla I_{1-s}u|^2\, dx+ \frac{C}{(1-s)|\log(1-s)|}\int_\Omega (| I_{1-s}u|^2-1)^2\,dx$$ for some constant $C>0$. If we set $\varepsilon=\sqrt{1-s}$, this is a classical Ginzburg-Landau energy, whose $\Gamma$-limit as $\varepsilon\to 0$ is given by $\pi \sum_i |d_i|$ (see e.g. [@Alberti; @BBH; @J]). $\bullet$ the upper bound for the $\Gamma$-limit can be obtained by a direct construction. As in the paper [@Solci], the key observation is that the function $\overline u(x)= {x\over|x|}$ belongs to all $W^{1,p}_{\rm loc} (\mathbb R^2;\mathbb R^2)$ for $p<2$, and can therefore be used directly as a 'building block' for a recovery sequence for a single vortex of degree $1$. This can be achieved by the equality between $F_s$ and the scaled multiple of the Gagliardo seminorm which has been proved to converge to the Dirichlet integral by Bourgain et al. [@BBM]. The use of a Gagliardo-Nirenberg interpolation shows that points close to the singularity are negligible in the computation of the limit, while a direct computation on points not-so-close to the singularities as in [@Solci] gives a limit contribution of $\pi$. Once this upper bound is achieved for a single singularity of degree $1$ or equivalently $-1$, it can be extended to sums of such singularities by localization. If $\mu$ satisfies the constraint that $\sum_i d_i={\rm deg}\,(u_0,\partial\Omega)$, then the construction can be done satisfying the boundary datum.
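The two properties of the building block $\overline u(x)=x/|x|$ used above can be recovered by an elementary computation in polar coordinates, where $|\nabla \overline u|^2=1/r^2$: the Dirichlet energy on an annulus $B_1\setminus B_\rho$ grows logarithmically in $\rho$ (which is what the $|\log(1-s)|$ normalization compensates), while the $W^{1,p}$ energy on all of $B_1$ is finite for $p<2$. A short symbolic sketch (the exponent $p=3/2$ below is an arbitrary sample value, not taken from the text):

```python
import sympy as sp

r, rho = sp.symbols('r rho', positive=True)

# In polar coordinates u(x) = x/|x| is (cos(theta), sin(theta)), so
# |grad u|^2 = |u_r|^2 + |u_theta|^2 / r^2 = 1/r^2.
grad_sq = 1 / r**2

# Dirichlet energy on the annulus B_1 \ B_rho: logarithmic blow-up as rho -> 0
dirichlet = 2 * sp.pi * sp.integrate(grad_sq * r, (r, rho, 1))   # = 2*pi*|log(rho)|

# W^{1,p} energy on all of B_1 for a sample exponent p = 3/2 < 2: finite
p = sp.Rational(3, 2)
w1p = 2 * sp.pi * sp.integrate(grad_sq**(p / 2) * r, (r, 0, 1))  # = 4*pi
```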
Finally, the case of arbitrary $\mu$ is dealt with by approximation. After rewriting the $L^2$ norm of the fractional gradient as a scaled Gagliardo seminorm, the result above can be seen as an asymptotic description of energies of the form $$\label{defFs-intro-4} \frac{1-s}{|\log(1-s)|} \int_{\mathbb R^2\times \mathbb R^2}{|u(x)-u(y)|^2\over |x-y|^{2+2s}}\,dx\,dy$$ with the constraint $|u|=1$ in $\Omega$. Moreover, using that $u=u_0$ outside $\Omega$ and the strong convergence of $I_{1-s}u$ to $u_0$ outside $\Omega$, as remarked by Kreisbeck and Schönberger [@KS], we also obtain the convergence of the localized functionals obtained by replacing $\mathbb R^2$ by $\Omega$ in [\[defFs-intro-4\]](#defFs-intro-4){reference-type="eqref" reference="defFs-intro-4"}. This result can be compared with the one in [@Solci], where different kernels of convolution type (see [@AABPT]) are taken into account. Even though the result is the same, the scaling and discretization method used therein cannot be used for our functionals due to the singularity of the kernel. # Notation and statement of the main result Let $\Omega\subset \mathbb R^2$ be a simply connected open bounded Lipschitz domain. We fix a function $u_0\in W^{1,2}(\mathbb R^2;\mathbb R^2)\cap L^\infty(\mathbb R^2;\mathbb R^2)$ with compact support such that $|u_0(x)|=1$ almost everywhere in an open neighborhood $U$ of $\partial \Omega$. In what follows $R=R(u_0)$ will denote a positive number larger than the diameter of the support of $u_0$. In this way, $u_0(y)=0$ if $y\not\in B_R(x)$ and $x$ is in the support of $u_0$. For $s\in (0,1)$, we denote by $W^{s,2}_{u_0}(\mathbb R^2;\mathbb R^2)$ the set of functions $u\in W^{s,2}(\mathbb R^2;\mathbb R^2)$ such that $u=u_0$ in $\mathbb R^2\setminus\Omega$ and $|u|=1$ in $\Omega$.
We define $F_s: W^{s,2}_{u_0}(\mathbb R^2;\mathbb R^2)\to [0,+\infty)$ as $$\label{defFs} F_s(u)=\frac{1}{|\log(1-s)|}\int_{\mathbb R^2}|\nabla_{\!s} u|^2\, dx,$$ where $\nabla_{\!s} u(x)$ is defined for almost every $x\in\mathbb R^2$ by [\[fracs-intro\]](#fracs-intro){reference-type="eqref" reference="fracs-intro"}. We will study the convergence of $F_s$ as $s\to 1^-$. This will be done with respect to a suitable convergence $u_s\to \mu$, where $\mu=\sum_{i=1}^N d_i\delta_{x_i}$, with $N\in\mathbb N$, $d_i\in\mathbb Z$ and $x_i\in\overline\Omega$ for $i\in\{1,\ldots, N\}$, with respect to which the functionals $F_s$ will be equicoercive. The definition of convergence requires the notion of flat convergence of Jacobians and the use of averages of $u_s$ that allow us to overcome the lack of differentiability (and hence of Jacobians) of these functions. To that end, the theory of fractional gradients suggests the use of Riesz potentials.

## Flat convergence of Jacobians as currents

Given $u\in H^1(\Omega;\mathbb R^2)$, the *Jacobian* $J u$ of $u$ is the $L^1$ function defined by $$J u:= \det \nabla u.$$ We denote by $C^{0,1}(\Omega)$ the space of Lipschitz continuous functions on $\Omega$ endowed with the norm $$\|\varphi\|_{C^{0,1}} := \sup_{x\in\Omega} |\varphi(x)|+ \sup_{x,\, y\in\Omega, \, x\neq y} \frac{|\varphi(x) - \varphi(y)|}{|x-y|},$$ and by $C^{0,1}_c(\Omega)$ its subspace of functions with compact support. The norm in the dual of $C^{0,1}_c(\Omega)$ will be denoted by $||\cdot||_{\mathrm{flat}(\Omega)}$ and referred to as the *flat norm*, while $\stackrel{\mathrm{flat}(\Omega)}{\longrightarrow}$ denotes the convergence in the flat norm. Finally, the norm in the dual of $C^{0,1}(\Omega)$ will be denoted by $||\cdot||_{\mathrm{flat}(\overline \Omega)}$ and by $\stackrel{\mathrm{flat}(\overline \Omega)}{\longrightarrow}$ the corresponding convergence.
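As a quick numerical illustration (our own sketch, not part of the paper): for an $S^1$-valued map both partial derivatives of $u$ are tangent to the circle, so the pointwise Jacobian $\det\nabla u$ vanishes wherever $u$ is smooth; for the vortex $u(x)=x/|x|$ the Jacobian therefore concentrates at the singularity. A finite-difference check away from the origin:

```python
import math

def jacobian(u, x, y, h=1e-6):
    # det(∇u) via central finite differences
    ux = [(a - b) / (2 * h) for a, b in zip(u(x + h, y), u(x - h, y))]
    uy = [(a - b) / (2 * h) for a, b in zip(u(x, y + h), u(x, y - h))]
    return ux[0] * uy[1] - ux[1] * uy[0]

def vortex(x, y):
    r = math.hypot(x, y)
    return (x / r, y / r)

# |u| = 1 forces det(∇u) = 0 at smooth points:
print(abs(jacobian(vortex, 0.7, -0.3)) < 1e-6)  # → True
```

This is why the distributional (divergence-form) definition of $Ju$ below, rather than the pointwise one, is needed to detect the topological singularities.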
For every $u\in H^1 (\Omega;\mathbb R^2)$, we can interpret $J u$ as an element of the dual of $C^{0,1}_c(\Omega)$ by setting $$\langle J u , \varphi\rangle := \int_{\Omega} J u \, \varphi \, \;\mathrm{d}x\,, \qquad \text{ for any } \varphi\in C^{0,1}_c({\Omega}).$$ Note that $Ju$ can be written in divergence form as $J u= \text{div } (u^1 u^2_{x_2}, - u^1 u^2_{x_1})$; that is, $$\label{jwds} \langle J u, \varphi\rangle= - \int_{\Omega} (u^1 u^2_{x_2} \varphi_{x_1} - u^1 u^2_{x_1} \varphi_{x_2}) \;\mathrm{d}x\qquad \hbox{for any $\varphi \in C^{0,1}_c(\Omega)$.}$$ Equivalently, we have $J u = \mathrm{curl}\;(u^1 \nabla u^2)$ and $J u = \frac{1}{2} \mathrm{curl}\;j(u)$, where $$j(u):= u^1 \nabla u^2 - u^2 \nabla u^1$$ is the so-called *current* associated to $J u$. Note that, for every $v:=(v^1, v^2)$ and $w:=(w^1, w^2)$ belonging to $H^{1}(\Omega;\mathbb R^2)$ it holds that $$\label{for} J v - J w = \frac{1}{2} \big(J (v^1 - w^1, v^2 + w^2) - J (v^2-w^2,v^1+w^1)\big).$$ Gathering together [\[jwds\]](#jwds){reference-type="eqref" reference="jwds"} and [\[for\]](#for){reference-type="eqref" reference="for"} one deduces the following lemma.

**Lemma 1**. *There exists a universal constant $C>0$ such that for any $v,w\in H^1(\Omega;\mathbb R^2)$ it holds that $$\|J v - J w\|_{{\mathrm{flat}}(\Omega)} \le C\,\|v - w\|_{2} (\|\nabla v\|_2 + \|\nabla w\|_{2})\,.$$*

As a corollary of Lemma [Lemma 1](#cosulemma){reference-type="ref" reference="cosulemma"} we obtain the following proposition.

**Proposition 2**.
*Let $v_n$ and $w_n$ be two sequences in $H^1(\Omega;\mathbb R^2)$ such that $$\|v_n - w_n\|_2 (\|\nabla v_n\|_2 + \|\nabla w_n\|_2) \to 0.$$ Then $\|J v_n - J w_n\|_{{\mathrm{flat}}(\Omega)} \to 0$.*

## Riesz potentials

In order to define the convergence $u_s\to \mu$ we recall the definition of the $\alpha$-*Riesz potential* of a function $u$ defined in $\mathbb R^d$, as $$\label{defia} I_\alpha u(x)={1\over \gamma_{d,\alpha}}\int_{\mathbb R^d}{ u(y)\over |x-y|^{d-\alpha}}dy,\qquad \gamma_{d,\alpha}= \pi^{d\over 2} 2^\alpha {\Gamma({\alpha\over 2})\over \Gamma({d-\alpha\over 2})}.$$ In particular, we will use $d=2$ and $\alpha=1-s$, for which $$\label{defia-s2} I_{1-s}u(x)={1\over \gamma_{s}}\int_{\mathbb R^2}{ u(y)\over |x-y|^{1+s}}dy,\qquad \gamma_{s}= \pi 2^{1-s} {\Gamma({1-s\over 2})\over \Gamma({1+s\over 2})}.$$

**Definition 3**. *We say that $u_s\to \mu$ if $J(I_{1-s}u_s)\stackrel{\mathrm{flat}(\overline \Omega)}{\longrightarrow}\pi\mu$ as $s\to 1^-$.*

## Degree of Sobolev functions

Let $A\subset \Omega$ be an open set with Lipschitz boundary, and let $h\in H^{\frac12}(\partial A; \mathbb R^2)$ with $|h|\ge c >0$. The *degree* of $h$ is defined as $$\label{defcur2} \deg(h,\partial A):= \frac{1}{2\pi} \int_{\partial A} \frac{h}{|h|}\cdot \frac{\partial}{\partial \tau} \Big(\frac{h_2}{|h|},- \frac{h_1}{|h|}\Big)\;\mathrm{d}\mathcal H^1\,,$$ where $\tau$ is the tangent field to $\partial A$ and the product in the above formula is understood in the sense of the duality between $H^{\frac 1 2}$ and $H^{-\frac{1}{2}}$. In [@Bo; @BN] it is proven that the definition above is well-posed, that it is stable with respect to the strong convergence in $H^\frac12(\partial A;\mathbb R^2\setminus B_c)$, and that $\deg(h,\partial A)\in\mathbb{Z}$. Moreover, if $u\in H^1(A;\mathbb R^2\setminus B_c)$ then $\deg(u,\partial A) = 0$ (here and in what follows we identify $u$ with its trace).
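A numerical sanity check of this formula (our own sketch; for smooth $h$ the duality pairing reduces to an ordinary line integral): discretising [\[defcur2\]](#defcur2){reference-type="eqref" reference="defcur2"} on the unit circle recovers the winding number of $h(\theta)=(\cos k\theta,\sin k\theta)$.

```python
import math

def degree(h, n=20000):
    # discretised deg(h, ∂B_1) = (1/2π) ∮ (h/|h|) · ∂_τ(h2/|h|, -h1/|h|) dH^1,
    # parametrising ∂B_1 by arclength θ and using central finite differences
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = i * dt
        h1, h2 = h(t)
        r = math.hypot(h1, h2)
        a1, a2 = h(t + dt / 2)
        b1, b2 = h(t - dt / 2)
        ra, rb = math.hypot(a1, a2), math.hypot(b1, b2)
        d1 = (a2 / ra - b2 / rb) / dt   # ∂_τ (h2/|h|)
        d2 = (-a1 / ra + b1 / rb) / dt  # ∂_τ (-h1/|h|)
        total += (h1 / r * d1 + h2 / r * d2) * dt
    return total / (2 * math.pi)

print(round(degree(lambda t: (math.cos(3 * t), math.sin(3 * t)))))   # → 3
print(round(degree(lambda t: (math.cos(2 * t), -math.sin(2 * t)))))  # → -2
```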
If $u\in H^1(A;\mathbb R^2)$ and $|u|=1$ on $\partial A$, by Stokes' theorem (and by approximating $u$ with smooth functions) one has that $$\label{defcur} \int_A Ju \;\mathrm{d}x= \frac{1}{2} \int_A \mathrm{curl}\;j(u)\;\mathrm{d}x := \frac{1}{2} \int_{\partial A} j(u) \cdot \tau \;\mathrm{d}\mathcal H^1= \pi\deg(u,\partial A).$$

## The main result

We set $d_0:=\deg(u_0,\partial \Omega)$, $$X(\Omega):=\bigg\{\mu=\sum_{i=1}^N d_i\delta_{x_i}, \ N\in\mathbb N,\ d_i\in\mathbb Z\ \text{and}\ x_i\in \Omega\ \text{for}\ i\in\{1,\ldots, N\}\bigg\},$$ $$X(\overline\Omega):=\bigg\{\mu=\sum_{i=1}^N d_i\delta_{x_i}, \ N\in\mathbb N,\ d_i\in\mathbb Z\ \text{and}\ x_i\in\overline\Omega\ \text{for}\ i\in\{1,\ldots, N\}\bigg\}.$$ We will prove the following result.

**Theorem 4**. *The following compactness and $\Gamma$-convergence result holds true.*

*i) *(Compactness)* Let $u_s\in W^{s,2}_{u_0}(\mathbb R^2;\mathbb R^2)$ be such that $F_s(u_s)\leq C$. Then there exists $\mu \in X(\overline\Omega)$, with $\mu(\overline\Omega)=d_0$, such that, up to subsequences, $u_s\to\mu$ as $s\to 1^-;$*

*ii) *($\Gamma$-liminf inequality)* Let $u_s\in W^{s,2}_{u_0}(\mathbb R^2;\mathbb R^2)$ be such that $u_s\to\mu$. Then $\mu(\overline\Omega)=d_0$ and $$\liminf_{s\to 1^-} F_s(u_s)\geq \pi|\mu|(\overline\Omega);$$*

*iii) *($\Gamma$-limsup inequality)* For every $\mu \in X(\overline\Omega)$, with $\mu(\overline\Omega)=d_0$, there exist $u_s\in W^{s,2}_{u_0}(\mathbb R^2;\mathbb R^2)$ such that $u_s\to\mu$ and $$\limsup_{s\to 1^-} F_s(u_s)\leq \pi|\mu| (\overline\Omega).$$*

# Preliminaries

In the following we will make use of some results on the asymptotic analysis of Ginzburg-Landau functionals and of Gagliardo seminorms.

## Ginzburg-Landau energies

Let $A\subset \mathbb R^2$ be a simply connected open bounded Lipschitz domain.
The *Ginzburg-Landau functionals* localized on $A$ are defined on $W^{1,2}(A; \mathbb R^2)$ as $$\label{GLe} G\!L_\varepsilon(v,A)= \frac1{|\log \varepsilon|}\int_{A}|\nabla v|^2\, dx+\frac{C}{\varepsilon^2|\log \varepsilon|}\int_{A}(|v(x)|^2-1)^2\, dx,$$ where $C>0$ is an arbitrary constant. We recall a classical result on the asymptotic analysis of $G\!L_\varepsilon(v,A)$ with respect to the flat convergence of $Jv$.

**Theorem 5**. *The following compactness and $\Gamma$-convergence result holds true.*

*i) *(Compactness)* Let $(v_\varepsilon)\subset W^{1,2}(A;\mathbb R^2)$ be such that $G\!L_\varepsilon(v_\varepsilon,A)\leq C$. Then there exists $\mu \in X(A)$, such that, up to subsequences, $Jv_\varepsilon\stackrel{\mathrm{flat}(A)}{\longrightarrow}\pi\mu$ as $\varepsilon\to 0^+$.*

*ii) *($\Gamma$-liminf inequality)* Let $(v_\varepsilon)\subset W^{1,2}(A;\mathbb R^2)$ be such that $Jv_\varepsilon\stackrel{\mathrm{flat}(A)}{\longrightarrow}\pi\mu$. Then $$\liminf_{\varepsilon\to 0^+} G\!L_\varepsilon(v_\varepsilon,A)\geq 2\pi|\mu|(A).$$*

*iii) *($\Gamma$-limsup inequality)* For every $\mu \in X(A)$, there exists a sequence $(v_\varepsilon)\subset W^{1,2}(A;\mathbb R^2)$ such that $Jv_\varepsilon\stackrel{\mathrm{flat}(A)}{\longrightarrow}\pi\mu$ and $$\limsup_{\varepsilon\to 0^+} G\!L_\varepsilon(v_\varepsilon,A)\leq 2\pi|\mu|(A).$$*

*The result is independent of the choice of $C>0$ in [\[GLe\]](#GLe){reference-type="eqref" reference="GLe"}.*

## Fractional approaches to Sobolev spaces

The asymptotic behaviour of the $s$-gradient as $s\to 1^-$ has been studied in [@CS Section 4], and can be compared with the asymptotic behaviour of Gagliardo seminorms as studied in [@BBM] as follows (see [@CS; @SS] and in particular [@CS Lemma 4.1] for the asymptotic behaviour of $s$-depending constants).

**Proposition 6**.
*If $u\in W^{s,2}(\mathbb R^2;\mathbb R^2)$ then we have $$\int_{\mathbb R^2}|\nabla_s u|^2\,dx= (1-s) c_s\int_{\mathbb R^2\times \mathbb R^2}{|u(x)-u(y)|^2\over |x-y|^{2+2s}}\,dx\,dy,$$ where $\lim\limits_{s\to 1^-}c_s={2\over \pi}$.*

**Proposition 7**. *For all $r>0$, let $\Delta_r = \{(x,y)\in \mathbb R^2\times \mathbb R^2: |x-y|<r\}$. Then the $\Gamma$-limit of $F_s$ is the same as that of $${1-s\over|\log(1-s)|} c_s\int_{\Delta_{\sqrt{1-s}}}{|u(x)-u(y)|^2\over |x-y|^{2+2s}}\,dx\,dy,$$ and hence by comparison also if the integral is computed on $\Delta_{r_s}$ with $r_s\ge\sqrt{1-s}$.*

*Proof.* In the proof of [@BBD Lemma 3] it is shown that for a sequence $u_s$ equibounded in $L^2(\mathbb R^2)$ the quantity $$(1-s) \int_{(\mathbb R^2\times \mathbb R^2)\setminus \Delta_{\sqrt{1-s}}}{|u_s(x)-u_s(y)|^2\over |x-y|^{2+2s}}\,dx\,dy$$ is equibounded, so that the claim follows, once we recall the embedding of $W^{s,2}$ in $L^2$ [@Leoni]. ◻

# Proof of Compactness and $\Gamma$-liminf inequality

We have the following classical equality regarding fractional gradients and Riesz potentials (see e.g. [@CS; @KS]) $$\label{eqisv} \nabla_s u= \nabla I_{1-s}u,$$ for $u\in W^{s,2}(\mathbb R^2; \mathbb R^2)$, which allows us to shift from fractional gradients to classical (weak) gradients. This identity has been fruitfully exploited in [@KS] to characterize lower-semicontinuous envelopes of integrals depending on fractional gradients, and will be essential in the following. With [\[eqisv\]](#eqisv){reference-type="eqref" reference="eqisv"} in mind, in the next lemma we give an estimate on $I_{1-s}u$.
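A small numerical sanity check (our own sketch, with an arbitrary choice $R=2$): the Riesz normalisation $\gamma_s$ of [\[defia-s2\]](#defia-s2){reference-type="eqref" reference="defia-s2"} behaves like the mass $2\pi R^{1-s}/(1-s)$ of the truncated kernel $|x-y|^{-(1+s)}$ on $B_R(x)$ as $s\to 1^-$; this is the reason why the normalised average $\widetilde I_{1-s}u$ introduced below differs from $I_{1-s}u$ only by a factor tending to $1$.

```python
import math

def gamma_s(s):
    # normalisation of the (1-s)-Riesz potential in dimension 2
    return math.pi * 2**(1 - s) * math.gamma((1 - s) / 2) / math.gamma((1 + s) / 2)

R = 2.0  # any radius larger than the diameter of the support of u_0
for s in (0.9, 0.99, 0.999):
    ratio = gamma_s(s) * (1 - s) / (2 * math.pi * R**(1 - s))
    print(s, round(ratio, 3))  # the ratio tends to 1 as s → 1⁻
```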
Taking $d=2$ and $\alpha=1-s$ in [\[defia\]](#defia){reference-type="eqref" reference="defia"}, if $u=0$ outside $B_R(x)$ we have $$\label{defismu} I_{1-s}u(x)=\frac{\Gamma({1+s\over 2})}{\pi 2^{1-s} \Gamma({1-s\over 2})}\int_{B_R(x)}\frac{u(y)}{|x-y|^{1+s}}\, dy.$$ Since it is convenient to have a normalized kernel, for fixed $s\in(0,1)$ we set $$\label{defv} \widetilde I_{1-s} u (x)=\frac{1-s}{2\pi R^{1-s}}\int_{B_R(x)}\frac{u(y)}{|x-y|^{1+s}}\, dy.$$ Note that $$\label{quoz} \lim_{s\to 1^-}\frac{\Gamma({1+s\over 2})}{\pi 2^{1-s} \Gamma({1-s\over 2})}\cdot \frac{2\pi R^{1-s}}{1-s}=1.$$

**Lemma 8**. *Let $A\subset\mathbb R^2$ be a bounded open set and let $u\colon\mathbb R^2\to \mathbb R^2$ be a bounded and compactly supported function such that $|u(x)|=1$ if $x\in A$. Let $R>0$ be such that $u(y)=0$ for all $x\in A$ and $y\not\in B_R(x)$. Let $\widetilde I_{1-s} u$ be defined in [\[defv\]](#defv){reference-type="eqref" reference="defv"}. Then, $$\begin{aligned} \label{stimapotenziale}\nonumber \int_{A}(|\widetilde I_{1-s} u(x)|^2-1)^2\, dx&\leq& {4\|u\|_\infty^2 \int_A |u(x)- \widetilde I_{1-s} u (x) |^2\, dx}\\ &\leq& \|u\|_\infty^2\frac{(1-s)^2R^{2s}}{\pi}\int_{\mathbb R^2\times \mathbb R^2}\frac{|u(x)-u(y)|^2}{|x-y|^{2+2s}}\, dx\, dy.\end{aligned}$$*

*Proof.* We note that for fixed $x$ $$u(x)=\frac{1-s}{2\pi R^{1-s}}\int_{B_R(x)}\frac{u(x)}{|x-y|^{1+s}}\, dy.$$ Then, by using the inequality $||A|^2-|B|^2|=|\langle A-B,A+B\rangle|\leq|A-B||A+B|$ we obtain $$\begin{aligned} (|\widetilde I_{1-s} u(x)|^2-1)^2 &=&(|\widetilde I_{1-s} u(x)|^2-|u(x)|^2)^2\\ &{\leq}& {|\widetilde I_{1-s} u(x) +u(x)|^2 |\widetilde I_{1-s} u(x) -u(x)|^2 }\\ &=& \bigg|\frac{1-s}{2\pi R^{1-s}}\int_{B_R(x)}\frac{u(x)+u(y)}{|x-y|^{1+s}}\, dy\bigg|^2 \ \bigg|\frac{1-s}{2\pi R^{1-s}}\int_{B_R(x)}\frac{u(x)-u(y)}{|x-y|^{1+s}}\, dy\bigg|^2\\ &\leq& 4\|u\|_\infty^2 \Big|\frac{1-s}{2\pi R^{1-s}}\int_{B_R(x)}\frac{u(x)-u(y)}{|x-y|^{1+s}}\, dy\Big|^2\\ &=& \|u\|_\infty^2\frac{(1-s)^2}{\pi^2 R^{2-2s}} \Big|\int_{B_R(x)}\frac{u(x)-u(y)}{|x-y|^{1+s}}\, dy\Big|^2\\ &\leq& \|u\|_\infty^2\frac{(1-s)^2 R^{2s}}{\pi} \int_{B_R(x)}\frac{|u(x)-u(y)|^2}{|x-y|^{2+2s}}\, dy, \end{aligned}$$ where in the last line we have used Jensen's inequality. By integrating for $x\in A$ we obtain [\[stimapotenziale\]](#stimapotenziale){reference-type="eqref" reference="stimapotenziale"}. ◻

Set $\widetilde\Omega:=\Omega\cup U$. For given $u$, let $\widetilde I_{1-s}u$ be defined by [\[defv\]](#defv){reference-type="eqref" reference="defv"}. By Proposition [Proposition 6](#equiv-frac){reference-type="ref" reference="equiv-frac"} and Lemma [Lemma 8](#lemma-equiv){reference-type="ref" reference="lemma-equiv"}, applied with $A=\widetilde\Omega$, we have that $$\begin{aligned} F_s(u)&=&\frac{(1-s)c_s}{|\log(1-s)|}\int_{\mathbb R^2\times \mathbb R^2}\frac{|u(x)-u(y)|^2}{|x-y|^{2+2s}}\, dx\, dy\\ &\geq& \frac{c_s^\prime}{(1-s)|\log(1-s)|}\int_{\widetilde \Omega}(|\widetilde I_{1-s} u(x)|^2-1)^2\, dx, \end{aligned}$$ where $c_s^\prime=c_s \frac{\pi}{R^{2s}\|u\|_\infty^2}$. With fixed $\eta\in(0,1)$ we then have $$\begin{aligned} \label{stimaGL} F_s(u)&=&(1-\eta)F_s(u)+\eta F_s(u)\nonumber\\ &\geq& \frac{1-\eta}{|\log(1-s)|}\int_{{ \widetilde \Omega}}|\nabla_{\!
s} u|^2\, dx+\eta \frac{c_s^\prime}{(1-s)|\log(1-s)|}\int_{{ \widetilde \Omega}}(|{ \widetilde I_{1-s} u}(x)|^2-1)^2\, dx\nonumber\\ &=& \frac{(1-\eta)c^{\prime\prime}_s}{|\log(1-s)|}\int_{{ \widetilde \Omega}}|\nabla { \widetilde I_{1-s} u}|^2\, dx+\eta \frac{c_s^\prime}{(1-s)|\log(1-s)|}\int_{{ \widetilde \Omega}}(|{ \widetilde I_{1-s} u}(x)|^2-1)^2\, dx\nonumber\\ &=& \frac{(1-\eta)c^{\prime\prime}_s}{2|\log\varepsilon|}\int_{{ \widetilde\Omega}}|\nabla { \widetilde I_{1-s}u}|^2\, dx+\eta \frac{c_s^\prime}{2\varepsilon^2|\log\varepsilon|}\int_{{ \widetilde\Omega}}(|{ \widetilde I_{1-s} u}(x)|^2-1)^2\, dx,\end{aligned}$$ where we have set $\varepsilon=\varepsilon_s=\sqrt{1-s}$ and $c_s^{\prime\prime}$ is the constant quotient between $|\nabla I_{1-s}u|^2$ and $|\nabla{ \widetilde I_{1-s} u}|^2$. Then, given $u_s\in W^{s,2}_{u_0}(\mathbb R^2;\mathbb R^2)$ such that $F_s(u_s)\leq C$, inequality [\[stimaGL\]](#stimaGL){reference-type="eqref" reference="stimaGL"} yields that $G\!L_{\varepsilon_s}(\widetilde I_{1-s}u_s,\widetilde\Omega)\leq C$. By Theorem [Theorem 5](#GL){reference-type="ref" reference="GL"}, there exists $\mu\in X(\widetilde\Omega)$ such that, up to subsequences, $J \widetilde I_{1-s} u_s\stackrel{\mathrm{flat}(\widetilde\Omega)}{\longrightarrow}\pi\mu$ as $s\to 1^-$. Note that Lemma [Lemma 8](#lemma-equiv){reference-type="ref" reference="lemma-equiv"}, in combination with Proposition [Proposition 2](#cosu){reference-type="ref" reference="cosu"}, also implies that $$\label{convl2} \|u_0-\widetilde I_{1-s} u_s\|_{L^2(\widetilde\Omega\setminus\Omega;\mathbb R^2)}\to 0$$ and that $$J \widetilde I_{1-s} u_s\stackrel {\rm flat(\widetilde\Omega\setminus\Omega)}{\to}Ju_0\equiv 0.$$ We deduce that $\mu$ is supported in $\overline \Omega$ and that $J \widetilde I_{1-s} u_s\stackrel{\mathrm{flat}(\overline \Omega)}{\longrightarrow}\pi\mu$.
Since, by [\[quoz\]](#quoz){reference-type="eqref" reference="quoz"}, $\lim\limits_{s\to 1^-} c_s^{\prime\prime}=1$, we infer that $u_s\to \mu$ and, by Theorem [Theorem 5](#GL){reference-type="ref" reference="GL"} and [\[stimaGL\]](#stimaGL){reference-type="eqref" reference="stimaGL"}, that claim (ii) holds. We are left with the proof that $\mu(\overline\Omega)=d_0$. To this end, set for $t>0$ $$\Omega_t:=\{x\in\mathbb R^2:\ \text{dist}(x,\Omega)\leq t\}$$ and let $$t_0:=\sup\{t>0:\ \Omega_t\subset\widetilde\Omega\}.$$ We have that for almost all $t\in (0,t_0)$, $\text{deg} (u_0,\partial\Omega_t)=d_0$. By our assumptions on $u_0$ we have that $\widetilde I_{1-s}u_s$ is locally bounded in $W^{1,2}(\widetilde\Omega\setminus\Omega;\mathbb R^2)$ and $\widetilde I_{1-s}u_s\rightharpoonup u_0$ locally weakly in $W^{1,2}(\widetilde\Omega\setminus\Omega;\mathbb R^2)$ (see [@KS]). By standard Fubini arguments, for almost every $t\in (0,t_0)$ the trace of $\widetilde I_{1-s}u_s$ on $\partial\Omega_t$ is bounded in $W^{1,2}(\partial\Omega_t;\mathbb R^2)$ and hence (up to a not relabeled subsequence) it weakly converges to a function $g$. By [\[convl2\]](#convl2){reference-type="eqref" reference="convl2"}, we get that $g = u_0$ for almost every $t\in (0,t_0)$. Then, by the definition of degree in [\[defcur2\]](#defcur2){reference-type="eqref" reference="defcur2"}, we have that for almost every $t\in (0,t_0)$ $$\lim_{s\to 1^-} \text{deg} (\widetilde I_{1-s} u_s,\partial\Omega_t)= \text{deg} (u_0,\partial\Omega_t)=d_0.$$ Hence, by [\[defcur\]](#defcur){reference-type="eqref" reference="defcur"}, $$\mu(\overline\Omega)=\mu(\Omega_t)=\frac1\pi\lim_{s\to 1^-} \int_{\Omega_t} J \widetilde I_{1-s} u_s\, dx= \lim_{s\to 1^-} \text{deg} (\widetilde I_{1-s} u_s,\partial\Omega_t)= d_0,$$ which gives the constraint on $\mu$.
◻

# Proof of the $\Gamma$-limsup inequality

In this section we use the notation $$\lfloor u \rfloor_{s,2}(A)=\int_{A\times A} {|u(x)-u(y)|^2\over |x-y|^{2+2s}}\,dx\,dy$$ for the Gagliardo seminorm on an open set $A$.

**Lemma 9**. *Let $u(x)=\frac{x}{|x|}$, $s\in(0,1)$ and $\sigma>0$. Then $$\lfloor u \rfloor_{s,2}(B_\sigma)\leq C \frac{\sigma^{1-s}}{\sqrt{1-s}}.$$*

*Proof.* By the Gagliardo-Nirenberg interpolation theorem [@Leoni Theorem 8.29], if $v\in W^{s,2}(B)$ and $\frac{1-s}{p_1}+\frac{s}{p_2}=\frac{1}{2}$ then we have $$\begin{aligned} \lfloor v \rfloor_{s,2}(B)\leq C \|v\|^{1-s}_{L^{p_1}(B)}\|\nabla v\|^s_{L^{p_2}(B)}. \end{aligned}$$ This estimate, applied to $u$ in $B_\sigma$, gives $$\lfloor u \rfloor_{s,2}(B_\sigma)\leq C |u|_{0,p_1}^{1-s} \ |u|_{1,p_2}^s \leq C \sigma^\frac{2(1-s)}{p_1} \Big(\frac{\sigma^{2-p_2}}{2-p_2}\Big)^{\frac{s}{p_2}} =C \frac{\sigma^{1-s}}{(2-p_2)^{\frac{s}{p_2}}}.$$ We obtain the claim by letting $p_2$ tend to $2s$. ◻

*Proof.* We first consider the case $|d_i|=1$ for every $i\in\{1,\dots, N\}$. Set $$u_i(x):= \left(\frac{x-x_i}{|x-x_i|}\right)^{d_i},\quad i\in\{1,\dots, N\},$$ where we have used the identification of $\mathbb R^2$ with $\mathbb{C}$. Let $$r<\Big(\min\Big\{\text{dist} (x_i,\mathbb R^2\setminus \Omega),\ i\in\{1,\dots, N\}\Big\}\Big)\wedge \frac12\min\{|x_i-x_j|,i\neq j\}.$$ Then (see [@BBH]) there exists $\hat u\in H^1(\Omega\setminus\bigcup B_r(x_i);S^1)$ such that $\hat u= u_0$ on $\partial\Omega$ and $\hat u=u_i$ on $\partial B_r(x_i)$. Set $$u(x):=\begin{cases} u_i(x) & \text{if}\ x\in B_r(x_i),\ i=1,\dots, N\cr \hat u(x)& \text{if}\ x\in\Omega\setminus \bigcup B_r(x_i) \cr u_0(x) & \text{if}\ x\in\mathbb R^2\setminus \Omega \end{cases}$$ Let $0<r'<r$ and let $\tau=\sqrt{1-s}$.
By Proposition [Proposition 7](#restr){reference-type="ref" reference="restr"}, we can limit the computation of double integrals to $\Delta_{\tau}$, so that, with fixed $M>2$, we estimate $$\begin{aligned} &&\hskip-1cm\frac{1-s}{|\log(1-s)|}\int_{(\mathbb R^2\times \mathbb R^2)\cap\Delta_{\tau}}{|u(x)-u(y)|^2\over |x-y|^{2+2s}}\,dx\,dy \\ &\le& \frac{1-s}{|\log(1-s)|} \sum_{i=1}^N\int_{(B_{(M+1)\tau}(x_i)\times B_{(M+1)\tau}(x_i))\cap\Delta_{\tau}}{|u_i(x)- u_i(y)|^2\over |x-y|^{2+2s}}\,dx\,dy \\ &&+ \frac{1-s}{|\log(1-s)|}\sum_{i=1}^N\int_{((B_r(x_i)\setminus B_{M\tau}(x_i))\times (B_r(x_i)\setminus B_{M\tau}(x_i)))\cap\Delta_{\tau}} {|u_i(x)- u_i(y)|^2\over |x-y|^{2+2s}}\,dx\,dy \\ &&+ \frac{1-s}{|\log(1-s)|}\int_{((\mathbb R^2\setminus \bigcup_i B_{r'}(x_i))\times (\mathbb R^2\setminus \bigcup_i B_{r'}(x_i)))\cap\Delta_{\tau}} {|u(x)-u(y)|^2\over |x-y|^{2+2s}}\,dx\,dy \\&=:& I_{1,s}+I_{2,s}+I_{3,s}.\end{aligned}$$ By Lemma [Lemma 9](#interpol){reference-type="ref" reference="interpol"}, we get $$I_{1,s}\leq N C_M \frac{(1-s)^{1-s}}{|\log(1-s)|},$$ which is negligible as $s\to 1^-$.
In order to estimate the second integral we first note that for every $i\in \{1,\dots, N\}$ $$\begin{aligned} &&\frac{1-s}{|\log(1-s)|}\int_{((B_r(x_i)\setminus B_{M\tau}(x_i))\times (B_r(x_i)\setminus B_{M\tau}(x_i)))\cap\Delta_{\tau}} {| u_i(x)- u_i(y)|^2\over |x-y|^{2+2s}}\,dx\,dy\\ &\leq&\frac{1-s}{|\log(1-s)|}\int_{B_{\tau}}{1\over |\xi|^{2+2s}} \int_{B_r(x_i)\setminus B_{M\tau}(x_i)} | u_i(x+\xi)- u_i(x)|^2\,dx\,d\xi\\ &\leq&\frac{1-s}{|\log(1-s)|}\int_{B_1}{1\over \tau^{2}|\xi|^{2+2s}} \int_{B_r(x_i)\setminus B_{M\tau}(x_i)} | u_i(x+\tau \xi)- u_i(x)|^2\,dx\,d\xi.\end{aligned}$$ Now (referring to the analogous argument in the first step in the proof of Theorem 2.6(iii) in [@Solci] for the algebraic inequalities), we estimate $$\begin{aligned} &&\hskip-2cm\int_{B_r(x_i)\setminus B_{M\tau}(x_i)} |u_i(x+\tau \xi)- u_i(x)|^2\,dx\\ &\le& {M\over M-1}\int_{B_{r}\setminus B_{M\tau}} \Big( \tau^2\frac{|\xi|^2}{|x|^2}-\tau^2 \frac{|\langle x,\xi\rangle|^2}{|x|^4}+C\frac{\tau^3|\xi|^2}{|x|^3}\Big)\, dx\\ &\le& {M\over M-1}2\pi \tau^2|\xi|^2\Big( \frac{1}{2}|\log M\tau| +\frac12 |\log r|+\frac{C}{M}\Big). \end{aligned}$$ Hence, $$\begin{aligned} I_{2,s}&\leq& N {M\over M-1}2\pi \Big(\frac{1}{2}|\log M\tau|+\frac12 |\log r|+\frac{C}{M}\Big)\frac{1-s}{|\log(1-s)|}\int_{B_1}{1\over |\xi|^{2s}}\,d\xi\\ &=&{2N\pi^2} {M\over M-1}\Big(\frac{1}{2}|\log M\tau|+\frac12 |\log r|+\frac{C}{M}\Big)\frac{1}{|\log(1-s)|}.\end{aligned}$$ Then, letting first $s\to 1^-$ and then $M\to +\infty$, we deduce that $$\limsup_{s\to 1^-} I_{2,s}\leq {1\over 2}{N\pi^2}.$$ Finally, we show that $I_{3,s}$ is negligible as $s\to 1^-$. This is due to the fact that $u\in H^1(\mathbb R^2\setminus \bigcup_i B_{r''}(x_i);\mathbb R^2)$ for $0<r''<r'$.
Indeed we have $$\begin{aligned} I_{3,s}&\leq &\frac{1-s}{|\log(1-s)|}\int_{\mathbb R^2\setminus \bigcup_i B_{r'}(x_i)}\int_{B_\tau}{|u(x+z)-u(x)|^2\over |z|^{2+2s}}\, dz dx\\ &=&\frac{1-s}{|\log(1-s)|}\int_{\mathbb R^2\setminus \bigcup_i B_{r'}(x_i)}\int_{B_\tau}{|u(x+z)-u(x)|^2\over |z|^{2}}\frac{1}{|z|^{2s}}\, dz dx\\ &\leq&\frac{1-s}{|\log(1-s)|}\int_{\mathbb R^2\setminus \bigcup_i B_{r'}(x_i)}\int_{B_\tau}\Big(\int_0^1|\nabla u(x+tz)|\,dt\Big)^2\frac{1}{|z|^{2s}}\, dz dx\\ &\leq&\frac{1-s}{|\log(1-s)|}\int_{\mathbb R^2\setminus \bigcup_i B_{r'}(x_i)}\int_{B_\tau}\frac{1}{|z|^{2s}}\int_0^1|\nabla u(x+tz)|^2\,dt\, dz dx\\ &\leq&\frac{1-s}{|\log(1-s)|}\int_{B_\tau}\frac{1}{|z|^{2s}}\, dz\int_{\mathbb R^2\setminus \bigcup_i B_{r''}(x_i)} |\nabla u(x)|^2 dx\\ &\leq&\frac{C}{|\log(1-s)|}\int_{\mathbb R^2\setminus \bigcup_i B_{r''}(x_i)} |\nabla u(x)|^2 dx=o(1).\end{aligned}$$ We are left with the proof that $$\begin{aligned} \label{convflatrecovery} \|J I_{1-s} u - \pi\mu\|_{\rm flat(\overline\Omega)}\to 0.\end{aligned}$$ To this end, set $$u^s(x):=\begin{cases} \min\Big\{\frac{|x-x_i|}{1-s},1\Big\}u_i(x) & \text{if}\ x\in B_r(x_i),\ i=1,\dots, N\cr \hat u(x)& \text{if}\ x\in\Omega\setminus \bigcup B_r(x_i) \cr u_0(x) & \text{if}\ x\in\mathbb R^2\setminus \Omega \end{cases}$$ We have that $$\begin{aligned} \label{convflat} \|J u^s - \pi\mu\|_ {\rm flat(\mathbb R^2)}\to 0\end{aligned}$$ and $$\begin{aligned} \label{logscale} \int_{\mathbb R^2} |\nabla u^s|^2\, dx\leq C|\log(1-s)|\end{aligned}$$ (see [@AP]). Note also that $$\begin{aligned} \label{stimal2} \int_{\mathbb R^2} |u(x)-u^s(x)|^2\, dx= \sum_{i=1}^N\int_{B_{1-s}(x_i)} |u(x)-u^s(x)|^2\, dx\leq C(1-s)^2.\end{aligned}$$ Let $\widetilde I_{1-s} u$ be defined by [\[defv\]](#defv){reference-type="eqref" reference="defv"}. By Lemma [Lemma 8](#lemma-equiv){reference-type="ref" reference="lemma-equiv"}, for any bounded and open set $\Omega'\subset\mathbb R^2$ such that $\Omega\subset\subset\Omega'$ we have that $$\begin{aligned} \label{stimal2bis} \int_{\Omega'} |\widetilde I_{1-s}u(x)-u(x)|^2\, dx &\leq& \frac{(1-s)^2}{\pi^2 R^{2-2s}}\int_{\Omega'} \Big|\int_{B_R(x)}\frac{u(x)-u(y)}{|x-y|^{1+s}}\, dy\Big|^2\, dx\nonumber\\ &\leq& \frac{(1-s)^2 R^{2s}}{\pi}\int_{\Omega'} \int_{B_R(x)}\frac{|u(x)-u(y)|^2}{|x-y|^{2+2s}}\, dy\,dx\nonumber\\ &\leq& \frac{(1-s)^2R^{2s}}{\pi}\int_{\mathbb R^2\times \mathbb R^2}\frac{|u(x)-u(y)|^2}{|x-y|^{2+2s}}\, dx\, dy\nonumber\\ &\leq& \frac{(1-s)^2R^{2s}}{\pi} C|\log(1-s)|.\end{aligned}$$ By Proposition [Proposition 6](#equiv-frac){reference-type="ref" reference="equiv-frac"}, it holds that $$\begin{aligned} \label{logscalebis} \int_{\mathbb R^2}|\nabla \widetilde I_{1-s}u|^2\,dx=\frac{1}{c_s''}\int_{\mathbb R^2}|\nabla_s u|^2\,dx=\frac{1}{c_s''}\int_{\mathbb R^2}|\nabla I_{1-s} u|^2\,dx\leq C|\log(1-s)|.\end{aligned}$$ By [\[convflat\]](#convflat){reference-type="eqref" reference="convflat"}--[\[logscalebis\]](#logscalebis){reference-type="eqref" reference="logscalebis"} and Proposition [Proposition 2](#cosu){reference-type="ref" reference="cosu"}, we deduce that $$\begin{aligned} \|J \widetilde I_{1-s}u - \pi\mu\|_{\rm flat(\overline\Omega)}\to 0.\end{aligned}$$ The convergence in [\[convflatrecovery\]](#convflatrecovery){reference-type="eqref" reference="convflatrecovery"} is then a consequence of
[\[defismu\]](#defismu){reference-type="eqref" reference="defismu"}, [\[defv\]](#defv){reference-type="eqref" reference="defv"} and [\[quoz\]](#quoz){reference-type="eqref" reference="quoz"}. For a general $\mu$ such that $\mu(\overline\Omega)=d_0$ we consider a sequence $\mu_n$ of the form $\sum_{i=1}^{N_n} d^n_i \delta_{x^n_i}$ with $|d^n_i|=1$, $x^n_i\in\Omega$ and $\sum_{i=1}^{N_n} d^n_i=d_0$, weakly$^*$ (and hence also in the flat norm on $\overline \Omega$) converging to $\mu$. The previous construction and a diagonal argument provide an upper bound and conclude the proof. ◻

G. Alberti, S. Baldo, and G. Orlandi. Variational convergence for functionals of Ginzburg-Landau type. *Indiana Univ. Math. J.* 54 (2005), 1411--1472.

R. Alicandro, N. Ansini, A. Braides, A. Piatnitski, and A. Tribuzio. *A Variational Theory of Convolution-type Functionals*. SpringerBriefs on PDEs and Data Science. Springer, 2023.

R. Alicandro and M. Ponsiglione. Ginzburg-Landau functionals and renormalized energy: A revised $\Gamma$-convergence approach. *J. Funct. Anal.* 266 (2014), 4890--4907.

J. Bourgain, H. Brezis, and P. Mironescu. Another look at Sobolev spaces. In *Optimal Control and Partial Differential Equations*, IOS Press, Amsterdam, 2001, pp. 439--455.

A. Braides. *$\Gamma$-convergence for Beginners*. Oxford Univ. Press, Oxford, 2002.

A. Braides, G.C. Brusca, and D. Donati. Another look at elliptic homogenization. Preprint SISSA, Trieste, 2023.

F. Bethuel, H. Brezis, and F. Hélein. *Ginzburg-Landau Vortices*. Progress in Nonlinear Differential Equations and Their Applications, vol. 13, Birkhäuser Boston, 1994.

A. Boutet de Monvel-Berthier, V. Georgescu, and R. Purice. A boundary value problem related to the Ginzburg-Landau model. *Commun. Math. Phys.* 142 (1991), 1--23.

H. Brezis and L. Nirenberg. Degree theory and BMO: Part I: Compact manifolds without boundaries. *Selecta Math. (N.S.)* **1** (1995), 197--263.

G.E. Comi and G.
Stefani. A distributional approach to fractional Sobolev spaces and fractional variation: asymptotics I. *Revista Matemática Complutense*, 2022.

G. Dal Maso. *An Introduction to $\Gamma$-convergence*. Birkhäuser, Basel, 1994.

R.L. Jerrard. Lower bounds for generalized Ginzburg-Landau functionals. *SIAM J. Math. Anal.* 30 (1999), 721--746.

C. Kreisbeck and H. Schönberger. Quasiconvexity in the fractional calculus of variations: characterization of lower semicontinuity and relaxation. *Nonlinear Anal.* 215 (2022), 112625.

G. Leoni. *A First Course in Fractional Sobolev Spaces*. AMS, Providence, 2023.

T.-T. Shieh and D.E. Spector. On a new class of fractional partial differential equations. *Adv. Calc. Var.* 8 (2015), 321--336.

M. Solci. Nonlocal-interaction vortices. Preprint 2023, arXiv:2302.06526.

[^1]: Dipartimento di Matematica e Applicazioni "Renato Caccioppoli\", Università degli Studi di Napoli Federico II, via Cintia, Monte S. Angelo, I-80126 Napoli, Italy

[^2]: SISSA, via Bonomea 265, Trieste, Italy

[^3]: DADU, Università di Sassari, piazza Duomo 6, 07041 Alghero (SS), Italy

[^4]: SISSA, via Bonomea 265, Trieste, Italy
---
author:
- "Dirk Becherer[^1]"
- "Christoph Reisinger[^2]"
- Jonathan Tam
bibliography:
- ref.bib
title: |
  Mean-field games of speedy information access\
  with observation costs
---

# Introduction

In decision making, one often has the opportunity to improve the quality of one's observations by expending extra resources. For example, medical laboratories can invest in infrastructure to reduce waiting times for testing results, to enable faster diagnosis and treatment for patients. Balancing such a trade-off between information acquisition and the associated costs may be as important as selecting the course of further actions which optimise one's rewards. We introduce a novel mean-field game (MFG) model in discrete time, in which agents actively control their speed of access to information. The MFG considered can be viewed as a partial observation problem, in which the information stream is not exogenously given but rather dynamically controlled by the agents. In the game, agents can adjust their speed of information access with suitable costly efforts, and exploit their dynamic information stream to inform the choice of controls for the state dynamics so as to maximise their rewards. We utilise the information structure to construct a suitable augmentation of the state space, which includes the belief state as well as past actions taken within the dynamic delay period, and which serves as the finite state space of an equivalent mean-field game of standard form. Thereby, numerical schemes for discrete MFGs can be employed to compute approximate mean-field Nash equilibria (MFNE) for our MFG of speedy information access with controlled observations. This paper covers three themes: (1) actively controlled observation delay, (2) observation costs, and (3) the analysis of an associated MFG incorporating the combination of those two features.
Standard Markov decision process (MDP) frameworks assume that state observations are received instantaneously, with corresponding actions in response being also applied instantaneously. This limits the applicability of such models in many real-life situations. It is often the case that observation delay arises due to inherent features of a system, or practical limitations from data collection. For example, the times to receive medical diagnosis test results depend on the processing time required for laboratory analysis. In high-frequency trading, observation delay occurs in the form of latency, aggregated over the multiple stages of communication with the exchange [@cartea2023optimal]. There has been a large amount of literature involving the modelling of observation delays, with applications in (but not limited to) network communications [@adlakha2008information; @adlakha2011networked], quantitative finance [@cartea2023optimal; @bruder2009impulse; @oksendal2008optimal] and reinforcement learning [@chen2021delay; @schuitema2010control; @stochastic_delays_RL]. Most models involve an MDP framework with either a constant or random observation delay, both of which are exogenously given by the system. Both constant and random observation delay MDPs can be modelled as a partially observable MDP (POMDP) via state augmentation [@bander1999markov; @altman_nain; @asynchronous_collection; @stochastic_delays_RL]. It has also been shown that action delays can be considered as a form of observation delay, under a suitable transformation of the MDP problem [@asynchronous_collection]. The continuous-time counterpart with an associated HJB-type master equation has been studied in [@saporito_zhang]. In many formulations of optimisation problems in MDPs, the information source is fixed *a priori*. However, it is often desirable to control the observations that one receives, in addition to the dynamics of the underlying process. 
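A minimal sketch of the state-augmentation idea for a constant delay (our own toy illustration, not code from the cited works): with delay $d$, the pair (state observed $d$ steps ago, actions applied since that observation) is a sufficient statistic, so the delayed problem becomes a fully observed MDP on this augmented state.

```python
from collections import deque

def run_delayed(transition, policy, x0, d, T):
    # states in transit to the agent; the leftmost entry is the one observable now,
    # i.e. the true state from d steps ago
    obs_queue = deque([x0] * (d + 1))
    pending = deque(maxlen=d)  # actions not yet reflected in any observation
    x = x0
    for _ in range(T):
        aug_state = (obs_queue[0], tuple(pending))  # augmented (Markov) state
        a = policy(aug_state)
        x = transition(x, a)
        obs_queue.append(x)
        obs_queue.popleft()
        pending.append(a)
    return x

# toy example: controlled random walk x' = x + a with the constant policy a = 1
final = run_delayed(lambda x, a: x + a, lambda s: 1, x0=0, d=2, T=5)
print(final)  # → 5
```

Action delays can be handled by the same device, in line with the transformation mentioned above.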
This frequently occurs in resource-constrained environments where frequent measurements or sampling are either too expensive or impractical. Applications include efficient medical treatment allocation [@winkelmann_markov_2014], environmental management [@YOSHIOKA_1_impulse_control; @Yoshioka_2_river_impulse; @Yoshioka_3_monitoring_deviation; @YOSHIOKA_4_biological_stopping], communications sampling [@paging_registration; @sampling_guo_2021], optimal sensing [@sensor_energy_harvesting; @wu2008optimal; @tzoumas2020lqg], and reinforcement learning [@bellinger2020active; @krueger2020active; @bellinger2022], among others. We shall refer to these as observation cost models (OCMs). In OCM problems, the user can opt to receive an observation of the current state of the process, at the price of an observation cost which is included in the reward functional to be optimised. The OCM can equivalently be characterised as a POMDP, by including the time elapsed, together with the last observed states and actions applied, to form an augmented Markov system. In many cases, a reasonable simplification is to assume constant actions between observations [@reisinger2022markov; @huang2021self]. This leads to a finite-dimensional characterisation of the augmented state, and allows efficient computation of the resulting system of quasi-variational inequalities via a penalty scheme [@reisinger2022markov]. Analysis for the more general non-constant action case has generally been restricted to the linear-quadratic Gaussian case [@wu2008optimal; @tzoumas2020lqg; @cooper1971optimal]. In stochastic games, the computation of Nash equilibria is often intractable for a large number of players. Mean-field games (MFGs), first introduced in [@lasry2007mean] and [@caines2006large], provide a way of seeking approximate Nash equilibria, by assuming symmetric interactions between agents that can be modelled by a mean-field term, in the form of a measure flow.
MFGs can be treated as an asymptotic approximation of a game with a large number of interacting players. Finding a mean-field Nash equilibrium (MFNE) amounts to a search for an optimal policy for a representative player, while ensuring that the state distribution of said player under such a policy is consistent with the postulated law of the other players, given by the measure flow. In discrete time, the existence of MFNE has been established in [@saldi_mfg_discrete]. Analysis has also appeared for several model variants such as risk-sensitive criteria [@saldi2022partially], partially observable systems [@saldi2019approximate; @saldi2022partially] and unknown reward/transition dynamics [@guo2019learning]. In general, finite MFGs suffer from non-uniqueness of MFNE and non-contractivity of the naively iterated fixed-point algorithm [@cui2021approximately]. Several algorithms have emerged to address the efficient computation of MFNEs. Entropy regularisation exploits the duality between convexity and smoothness to achieve contractivity, by either incorporating the entropy term directly into the reward functional, or imposing softmax policies during the optimisation step [@geist2019theory; @saldi_regularisation; @cui2021approximately]. Fictitious play schemes aim to smooth the mean-field updates by averaging new iterates over the past mean-field terms, effectively damping the update to aid numerical convergence [@perrin2020fictitious]. Online mirror descent further decreases computational complexity by replacing best response updates with direct $Q$-function computations [@perolat2021scaling]. In contrast, [@mf-omo] reformulates the search for an MFNE as an equivalent optimisation problem, allowing a possible search for multiple MFNE with standard gradient descent algorithms.
We refer to the survey [@lauriere2022learning] for a comprehensive overview of the above algorithms.\ **Our work.** We model agents' strategic choices for speed of information access in the game, by studying a novel MFG where the speed of access is in itself also a part of the costly control. Throughout the paper, we assume that both the state and action spaces are finite. The agents participating in the game have control over two aspects: the time period of their observation delay, and their actions that influence their rewards and transition dynamics. The agent can choose from a given finite set of delay periods, with each value associated with an observation cost. A higher observation cost corresponds to a shorter delay period, and vice versa. Our framework here differs from existing works, in that the delay period is not exogenously given as in the constant case [@altman_nain], nor is it a random variable with given dynamics as in the stochastic case [@stochastic_delays_RL; @cartea2023optimal; @chen2021delay]. Instead, the length of the delay is dynamically and actively decided by the agent, based on the trade-off between the extra cost and the accuracy of speedier observations, the latter of which can be exploited through better-informed control of the dynamics and hence higher rewards. The choice of the delay period becomes an extra part of the control in the optimisation problem, in tandem with the agent's actions. When considering this as a single-agent problem, which occurs during the optimisation step when the measure flow is fixed, we refer to it as a Markov Controllable Delay model (MCDM). The MCDM can be reformulated in terms of a POMDP, by augmenting the state with the most recent observation and the actions taken since, to form a Markovian system. This allows the formulation of dynamic programming to obtain the Bellman equation.
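To make the last point concrete: once such a partially observed problem is recast as a fully observable MDP on a finite augmented space, its Bellman equation can be solved by standard value iteration. A minimal generic sketch (the function name and the toy example are ours, not the specific model of this paper):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-10):
    """Solve v = max_a [ R[a] + gamma * P[a] @ v ] for a finite MDP.

    P: array (n_actions, n_states, n_states) of transition matrices,
    R: array (n_actions, n_states) of one-step rewards.
    Returns the value function and a greedy policy.
    """
    v = np.zeros(R.shape[1])
    while True:
        q = R + gamma * (P @ v)          # Q-values, shape (n_actions, n_states)
        v_new = q.max(axis=0)
        if np.abs(v_new - v).max() < tol:
            return v_new, q.argmax(axis=0)
        v = v_new

# Toy example: from state 0, action 1 moves to the absorbing state 1
# and pays reward 1; everything else pays 0, so v = (1, 0).
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [0.0, 1.0]]])
R = np.array([[0.0, 0.0],
              [1.0, 0.0]])
v, policy = value_iteration(P, R)
```

The same scheme applies verbatim once the augmented state space, kernel and reward of the delayed problem are enumerated.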
When viewed as part of the overall MFG, the partial information structure of the problem implies that the measure flow should be specified on the augmented space for the fixed point characterisation of mean-field Nash equilibria (MFNE). However, the underlying transition dynamics and reward structure depend on the distribution of the states at the present time. In the models of [@saldi2019approximate; @saldi2022partially], the mapping from measures on the augmented space to measures on the underlying state is given by taking the barycenter of the measure. However, our model here differs in two aspects. Firstly, although the belief state is an element of the simplex on the underlying state space, we find a finite parameter description (the state last observed and the actions taken thereafter) to establish the MFG on a finitely augmented space. Secondly, due to the delayed structure, the observation kernel depends on the distribution of the states at each moment in time across the delay period. Thus, taking an average of a distribution over the augmented space of parameters, as a barycenter map would do, is not applicable here. Instead, we explicitly map a measure flow on the augmented space to a sequence of measures on the underlying states. Intuitively, this corresponds to an agent estimating the distribution of the current states of the population, given the observations he/she possesses (i.e., the distribution of the delay period amongst agents, and the states and actions given such a delay). We detail the construction of the MCDM in and the corresponding MFG formulation, which we will also refer to as the MFG-MCDM, with its MFNE definition in . The second part of this paper focuses on the computation of an MFNE for the MFG of control of information speed.
We employ the popular entropy regularisation technique, which aids convergence of the classical iterative scheme: computing an optimal policy for a fixed measure flow, followed by computing the law of the player under said policy. In the standard MFG model, it is shown that the fixed point operator for the regularised problem is contractive under mild conditions [@saldi_regularisation; @cui2021approximately]. This forms the basis of the prior descent algorithm, which is one of the current state-of-the-art algorithms for the computation of approximate Nash equilibria for MFGs [@cui2021approximately; @mfglib]. We prove that for our MFG model of control of information speed, the corresponding fixed point iteration also converges when the problem is sufficiently regularised by an entropy term. This can be summarised in the following theorem, which is a condensed version of . **Theorem 1**. *Let $\Phi^{\mathrm{reg}}_{\eta}$ be the regularised best-response map, with regularisation parameter $\eta$, and let $\Psi^{\mathrm{aug}}$ be the measure-flow map. Then for $\eta > c_{\eta}$, there exists a unique fixed point for $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$, i.e. there exists a unique regularised MFNE for the MFG-MCDM problem. Here $c_{\eta}$ is a constant that only depends on the Lipschitz constants and bounds of the transition kernels and reward functions.* We defer the precise definitions of the operators $\Phi^{\mathrm{reg}}_{\eta}$ and $\Psi^{\mathrm{aug}}$, as well as the constant $c_{\eta}$, to . We investigate the infinite horizon discounted cost problem with time-dependent measure flows. This extends the result in [@cui2021approximately] for finite horizon problems, and the result in [@saldi_regularisation] for infinite horizon problems with stationary measure flows.
As the MFG-MCDM is a partially observable problem, the proof also requires a crucial extra step to demonstrate that the aforementioned mapping of the measure flow on the augmented space to that on the underlying space is Lipschitz, in order to prove the required contraction.\ The contributions of this paper can be summarised as follows. 1. We establish dynamic programming for a Markov Controllable Delay model (MCDM), an MDP model where an individual agent can exercise dynamic control over the latency of their observations, with less information delay being more costly. The problem is cast in terms of a partially observed MDP (POMDP) with controlled but costly partial observations, for which the belief state can be described by a finite parametrisation. Solving this POMDP is shown to be equivalent to solving a finite MDP on an augmented finite state space, whose extension also involves past actions taken during the (non-constant but dynamically controlled) delay period. 2. We introduce a corresponding Mean Field Game (MFG) where speedy information access is subject to the agents' strategic control decisions. For a fixed measure flow, which describes the statistical population evolution, the ensuing single agent control problem becomes an MCDM. Although a mean-field Nash equilibrium (MFNE) is defined in terms of the augmented space, the underlying dynamics and rewards still depend on the underlying state distribution. We show how a measure flow on the underlying space is determined and computed from that of the augmented space. This construction exploits the finite parametrisation of the belief state, whereas the barycenter approach for measure-valued belief states as in [@saldi2019approximate] does not apply here. 3.
By using a sufficiently strong entropy regularisation in the reward functional, we prove that the regularised MFG-MCDM has a unique MFNE, which is described by a fixed point, and can serve as an approximate Nash equilibrium for a large but finite population size. The characterisation of the MCDM as a finite MDP enables the computation of the Nash equilibrium of the corresponding MFG, by using methods from [@saldi_regularisation; @cui2021approximately]. The results also extend to an MFG formulated on an infinite horizon with time-dependent measure flows. 4. We demonstrate our model by an epidemiology example, in which we examine both the qualitative effects of information delay and cost on the equilibrium, and the quantitative convergence properties relating to the entropy regularisation. For computation, we employ the Prior Descent algorithm [@cui2021approximately], applying the new `mfglib` Python package [@mfglib] to our partially observable model.  \ The remainder of the paper is organised as follows. develops the formulation of the MCDM as a POMDP and establishes dynamic programming on the augmented space. explains the corresponding MFG setup, the fixed point characterisation of the equilibrium, and shows some basic properties. establishes the contraction property of the fixed point iteration map for the regularised mean field game and its ability to yield an approximate equilibrium for the finite player game. Finally, demonstrates a numerical example from epidemiology for illustration. ## Notation and preliminaries {#sec:notation} For any finite set $E$, we identify the space of probability measures on $E$ with the simplex $\Delta_E$. We equip $\Delta_E$ with the metric $\delta_{TV}$ induced by the total variation norm on the space of signed measures.
That is, $$\begin{aligned} \delta_{TV}(p, \hat{p}) = \lVert p - \hat{p} \rVert_{TV} = \frac{1}{2} \sum_{e \in E} \lvert p(e) - \hat{p}(e) \rvert, \qquad p, \hat{p} \in \Delta_E.\end{aligned}$$ We will generally be considering Markovian policies in this paper. A Markovian policy $\pi = (\pi_t)_t$ is then a sequence of maps $\pi_t : E \to \Delta_{{E}^{\prime}}$, mapping a finite set $E$ to the simplex on another finite set ${E}^{\prime}$. Since policies are bounded, we equip the space of policies with the sup metric $$\begin{aligned} \delta_{\Pi}(\pi, \hat{\pi}) = \sup_{t\geq 0} \max_{e \in E} \delta_{TV}(\pi_t(\cdot \mid e), \hat{\pi}_t(\cdot \mid e)).\end{aligned}$$ Let $\Delta_{E}^{T}$ denote the space of measure flows on $E$, with $T \in \mathbb{N}\cup \{\infty\}$. If $T$ is finite, we equip $\Delta_{E}^T$ with the sup metric $$\begin{aligned} \delta_{\mathrm{max}}(\bm{\mu}, \bm{\hat{\mu}}) = \max_{0 \leq t \leq T} \delta_{TV}(\mu_t ,\hat{\mu}_t),\quad \bm{\mu}, \bm{\hat{\mu}} \in \Delta^T_E.\end{aligned}$$ If $T = \infty$, we instead use the metric $$\begin{aligned} \label{eq_flow_metric} \delta_{\infty}(\bm{\mu}, \bm{\hat{\mu}}) = \sum^{\infty}_{t=1} \zeta^{-t} \delta_{TV}(\mu_t ,\hat{\mu}_t),\quad \bm{\mu}, \bm{\hat{\mu}} \in \Delta^{\infty}_E,\end{aligned}$$ where $\zeta > 1$. Note that the choice of $\zeta$ is not canonical; as long as $\zeta > 1$, $\delta_{\infty}$ induces the product topology on $\Delta_E^{\infty}$, which is compact by Tychonoff's theorem, since each individual simplex $\Delta_E$ is also compact. Hence $(\Delta_E^{\infty}, \delta_{\infty})$ is a complete metric space. This allows us to appeal to Banach's fixed point theorem when considering the contraction mapping arguments later. We will often consider a sequence of actions taken, e.g. $a_1, \ldots, a_n \in A$. In these cases we will use the shorthand notation $(a)^n_1 = (a_1, \ldots, a_n)$. We will use both notations interchangeably throughout the rest of this paper.
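For concreteness, these metrics are straightforward to evaluate numerically; a minimal NumPy sketch (function names are ours, and $\delta_{\infty}$ is evaluated over a finite truncation of the flow):

```python
import numpy as np

def tv_distance(p, p_hat):
    """delta_TV: half the L1 distance between two points of the simplex."""
    return 0.5 * np.abs(np.asarray(p, float) - np.asarray(p_hat, float)).sum()

def delta_max(mu, mu_hat):
    """Sup metric between two finite measure flows (lists of distributions)."""
    return max(tv_distance(m, m_hat) for m, m_hat in zip(mu, mu_hat))

def delta_infty(mu, mu_hat, zeta=2.0):
    """Discounted metric on measure flows (zeta > 1), truncated to the
    length of the supplied flows; terms are indexed from t = 1."""
    return sum(zeta ** (-t) * tv_distance(m, m_hat)
               for t, (m, m_hat) in enumerate(zip(mu, mu_hat), start=1))
```

For instance, `tv_distance([0.5, 0.5], [0.8, 0.2])` equals $0.3$, and the truncated `delta_infty` is always bounded by $\sum_{t\geq 1}\zeta^{-t} = 1/(\zeta-1)$ since $\delta_{TV} \leq 1$.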
We will frequently make use of the following proposition in our analysis. **Proposition 2** ([@georgii2011gibbs p.141]). *For any real-valued function $F$ on a finite set $E$, given $\nu, \nu^{\prime} \in \Delta_E$ we have the inequality (see also the proof of [@saldi_regularisation Proposition 1]) $$\begin{aligned} \left\lvert \sum_{e \in E} F(e) \nu (e) - \sum_{e \in E} F(e) \nu^{\prime}(e) \right \rvert \leq \lambda(F)\ \delta_{TV}(\nu, \nu^{\prime}), \end{aligned}$$ where $\lambda(F) \coloneqq \max_{e \in E}F(e) - \min_{e \in E}F(e)$.* # MDPs with controllable information delay {#sec:single_agent} We first state the definition of a Markov controllable delay model (MCDM) below, which characterises the scenarios where agents can control their information delay. **Definition 3**. A Markov controllable delay model (MCDM) is a tuple $\langle \mathcal{X}, A, \mathcal{D}, \mathcal{C}, p, r \rangle$, where - $\mathcal{X}$ is the finite *state space*; - $A$ is the finite *action space*; - $\mathcal{D} = \{ d_0, \ldots, d_K\}$, with $0 \leq d_K < \ldots < d_0$, is the set of *delay values*; - $\mathcal{C} = \{ c_0, c_1, \ldots, c_K\}$, with $0 = c_0 < c_1 < \ldots < c_K$, is the set of *cost values*; - $p: \mathcal{X} \times A \to \Delta_{\mathcal{X}}$ is the *transition kernel*; - $r: \mathcal{X} \times A \to [0, \infty)$ is the *one-step reward function*. Let us also denote the $n$-step transition probabilities by $p^{(n)}(\ \cdot \mid x , (a)^n_1)$, where we use the notation $(a)^n_1 \coloneqq (a_1, \ldots, a_n) \in A^n$. For a given set of delay values $\mathcal{D}$, define also - $\overline{\mathcal{D}} \coloneqq [d_K, d_0] = \{d_K, d_K+1, \ldots, d_0\}$. - The intervention variables $(i_t)_t$, taking values in the *intervention set* $\mathcal{I} \coloneqq \{0, 1, \ldots ,K \}$.
$\mathcal{D}$ represents the delay values that an agent can choose from, with $(i_t)_t$ representing the decision on the choice of delay, and $\overline{\mathcal{D}}$ represents the range of delay values of the system at any given point in time. A value of $i_n = k$ indicates that at time $n$ the agent wishes to pay a cost of $c_k$ to change their delay to $d_k$ units. To ensure that the setup is well-defined, if $i_n = k$ and the current delay $d$ is shorter than $d_k$, then the delay at time $n+1$ will simply be extended to $d+1$ units (in reality, paying a higher cost for a longer delay is clearly sub-optimal, so such a choice of $i_n$ would not practically occur). Formally, the MCDM evolves sequentially as follows. Suppose at time $t$, the controller observes the underlying state $x_{t-d_0} \in \mathcal{X}$, with knowledge of their actions $a_{t-d_0}, \ldots, a_{t-1} \in A$ applied since. Based on this information, the controller applies an action $a_t$ and receives a reward $r(x_t, a_t)$ (which we assume not to be observable until $x_t$ becomes observable). The controller then decides on the choice of cost $c_i$, which determines their next delay period of $d_i$ units, i.e. observing $x_{t+1-d_i}$ at time $t+1$. This process then repeats at the next time step. If no cost is paid, then no new observations occur, until the delay reaches $d_0$ units again. depicts a typical evolution of an MCDM. ![Control of information speed](control_info_speed.png){#fig:MCDM_evol} The precise construction can be set up as follows. We assume that the problem initiates at time $t=0$, and denote prior observations with negative indices. **Definition 4**. Define the *history sets* $\{H_t\}_{t \geq 0}$ as follows: let $H_{0} \coloneqq \overline{\mathcal{D}} \times (\mathcal{X} \times A)^{d_0} \times \mathcal{X}$, denoting its elements in the form of $h_0 = (d, x_{-d_0}, a_{-d_0}, \ldots, x_{-1},a_{-1}, x_0)$.
Then, define recursively $$\begin{aligned} H_t \coloneqq H_{t-1} \times A \times \mathcal{I} \times \mathcal{X}, \quad t\geq 1. \end{aligned}$$ The canonical sample space is $$\begin{aligned} \Omega \coloneqq H_{\infty} \coloneqq H_0 \times (A \times \mathcal{I} \times \mathcal{X} )^{\infty}. \end{aligned}$$ A *policy* $\pi = (\pi_t)_{t \geq 0}$ is a sequence of kernels $\pi_t: H_t \to \Delta_{A \times \mathcal{I}}$. Then, given an initial distribution $q_0 \in \Delta_{H_0}$ and a policy $\pi$, the Ionescu-Tulcea theorem [@hernandez1989adaptive Appendix C] gives a unique probability measure $\mathbb{P}^{\pi}_{q_0}$ such that for $\omega = (d, x_{-d_0}, a_{-d_0}, \ldots, x_{0}, a_{0}, i_{0}, x_1, a_1, i_1, \ldots)$, $$\begin{aligned} \mathbb{P}^{\pi}_{q_0}(\omega) & = q_0(h_0)\ p(x_{-d_0+1} \mid x_{-d_0}, a_{-d_0})\ldots p(x_{0} \mid x_{-1}, a_{-1})\\ & \qquad \pi_{0}( a_{0}, i_{0} \mid d, x_{-d_0}, a_{-d_0}, \ldots, x_{0})\ p(x_{1} \mid x_{0}, a_{0})\ldots\end{aligned}$$ The value $d$ appearing in $H_0$ represents the initial delay period. Given a history sequence $h_t \in H_t$, subsequent delay periods at time $t > 0$ can be deduced from the values of $d$ and $(i_n)^{t-1}_{n=0}$. Denote this value by $d(t) \in \overline{\mathcal{D}}$. This leads to the following definition for the set of admissible policies. **Definition 5**. A policy $\pi$ is admissible for an MCDM if at each time $t$, there exists a sequence of kernels $\phi^d_t: \mathcal{X} \times A^d \to \Delta_{A \times \mathcal{I}}$, $d \in \overline{\mathcal{D}}$, such that for each $h_t \in H_t$, $$\begin{aligned} \pi_t(\cdot \mid h_t) = \phi^{d(t)}_t(\cdot \mid x_{-d_0}, \ldots, x_{t-d(t)}, a_{-d_0}, \ldots, a_{t-1}), \end{aligned}$$ where $d(t) \in \overline{\mathcal{D}}$ is the delay period at time $t$ for the corresponding history sequence $h_t \in H_t$. The set of admissible policies for the MCDM is denoted by $\Pi_{DM}$.
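The delay-update rule just described can be summarised in a few lines of code; a minimal sketch (names are ours), with `delays[i]` holding the value $d_i$:

```python
def next_delay(d, i, delays):
    """Delay at the next time step, given the current delay d and
    intervention i.

    `delays` holds [d_0, ..., d_K], decreasing in i (a shorter delay
    corresponds to a higher cost).  If the requested delay delays[i] is at
    most the current one, a new observation arrives and the delay resets
    to delays[i]; otherwise the delay simply grows by one unit, with no
    new observation.
    """
    return delays[i] if delays[i] <= d else d + 1

delays = [3, 1, 0]               # d_0 = 3, d_1 = 1, d_2 = 0
step = next_delay(3, 1, delays)  # pay c_1, observe with delay 1
```

Note that the "grow by one" branch can never exceed $d_0$, since it only applies when $d < d_i \leq d_0$.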
Given the MCDM $\langle \mathcal{X}, A, \mathcal{D}, \mathcal{C}, p, r \rangle$ and an admissible policy $\pi \in \Pi_{DM}$, the objective function for the infinite horizon problem with discounted cost is $$\begin{aligned} J(\pi) \coloneqq \mathbb{E}^{\pi}_{q_0}\left[ \sum^{\infty}_{n=0} \gamma^{n} \left( r(x_n, a_n) - \sum^K_{k=1} c_k \mathbbm{1}_{\{i_n=k\}} \right)\right],\end{aligned}$$ where $\mathbb{E}^{\pi}_{q_0}$ is the expectation over the measure $\mathbb{P}^{\pi}_{q_0}$, and $\gamma \in (0,1)$ is the discount factor. The search for an optimal $\pi \in \Pi_{DM}$ can be solved by considering an equivalent MDP on an augmented state, which contains all the information that occurred between the current time and the delayed time. As noted in [@altman_nain] for the constant delay case, the lifting is akin to the classical POMDP approach of constructing an equivalent MDP on the belief state, but in this case the 'observations' do not come from an exogenous information stream; they come from a previously realised state instead. For a fully Markovian system, the augmented variable will include the delay of the system at the current time, the underlying state that is observed with that delay, and the actions applied from that moment until the present. This is made precise in the following definition. **Definition 6**. Given the delay values $\mathcal{D} = \{d_0, \ldots, d_K\}$, define the **augmented space** $\mathcal{Y}$ by $$\begin{aligned} \mathcal{Y} \coloneqq \bigcup_{d=d_K}^{d_0} \{d\} \times \mathcal{X} \times A^d, \end{aligned}$$ where $\overline{\mathcal{D}} = [d_K, d_0]$. Then an element $y \in \mathcal{Y}$ can be written in the form $$\begin{aligned} (d, x, a_{-d}, \ldots, a_{-1}),\ \mbox{or } (d, x, (a)^{-1}_{-d}), \end{aligned}$$ where negative indices are used to indicate that the actions had occurred in the past. If specific indices are not required, we will also use the notation $y=( d,x, \bm{a})$.
Although the length of the delay is implicit from the number of elements in $\bm{a}$, we explicitly include $d$ in $\mathcal{Y}$ for clarity. **Remark 7**. *As the length of the delay is variable and dependent on the control, the dimension of the augmented state is also variable. In practice, during computation, we can keep the dimensions consistent by introducing dummy variables $\varnothing$. This follows the treatment of stochastic delays in [@stochastic_delays_RL]. Specifically, for any set $E$ we write $E_{\varnothing} = E \cup \{\varnothing\}$. Then an element $y \in \mathcal{Y}$ can be embedded into the space $\mathcal{X} \times A^{d_0}_{\varnothing}$ via the mapping $$\begin{aligned} ( d,x, (a)^{-1}_{-d}) \mapsto (x, (a)^{-1}_{-d}, \underbrace{\varnothing, \ldots, \varnothing}_\text{$d_0-d$ units}). \end{aligned}$$* We can now construct the MDP on the augmented space. For $y = (d, x, \bm{a}), \hat{y}= (\hat{d},\hat{x}, \hat{\bm{a}}) \in \mathcal{Y}$, $a^{\prime} \in A$, and $i \in \mathcal{I}$, let $p_y: \mathcal{Y} \times (A \times \mathcal{I}) \to \Delta_{\mathcal{Y}}$ be the augmented kernel, where $$\begin{aligned} \label{eq:augmented_kernel} p_y(\hat{y} \mid y, a^{\prime}, i ) &= \begin{cases} p^{(d-d_i+1)}\big(\hat{x} \mid x, (a)_{-d}^{-d_i}\big)\ \mathbbm{1}_{\left\{ \hat{d} = d_i,\ \hat{\bm{a}} = \left((a)_{-d_i+1}^{-1}, a^{\prime}\right) \right\}}\ , & \ d_i \leq d \leq d_0;\\ \mathbbm{1}_{\left\{ \hat{x}=x,\ \hat{d} = d+1,\ \hat{\bm{a}} = \big( (a)^{-1}_{-d}, a^{\prime}\big) \right\}}\ ,& \ d_K \leq d< d_i. \end{cases}\end{aligned}$$ Let ${\Pi}^{\prime}$ denote the set of policies for this augmented MDP. That is, ${\pi}^{\prime} = ({\pi}^{\prime}_t) \in {\Pi}^{\prime}$ is such that ${\pi}^{\prime}_t: {H}^{\prime}_t \to \Delta_{A \times \mathcal{I}}$, where ${H}^{\prime}_0 \coloneqq \mathcal{Y}$ and ${H}^{\prime}_t \coloneqq {H}^{\prime}_{t-1} \times A \times \mathcal{I} \times \mathcal{Y}$ for $t \geq 1$.
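The case distinction in the augmented kernel above can be implemented directly, by composing one-step transition matrices along the recorded actions; a minimal sketch (all names are ours; states and actions are encoded as integers, `P[a]` is the one-step matrix for action $a$, and every delay value is assumed to be at least one):

```python
import numpy as np

def n_step(P, x, actions):
    """Distribution of the state after applying `actions` from state x;
    P[a] is the one-step transition matrix for action a."""
    dist = np.zeros(P.shape[1])
    dist[x] = 1.0
    for a in actions:
        dist = dist @ P[a]
    return dist

def augmented_kernel(P, y, a_new, i, delays):
    """Probabilities over next augmented states (d', x', actions').
    For simplicity this sketch assumes every delay value is at least 1."""
    d, x, acts = y
    if delays[i] <= d:
        # New observation arrives: advance d - delays[i] + 1 steps along
        # the oldest recorded actions; keep the remainder plus a_new.
        dist = n_step(P, x, acts[: d - delays[i] + 1])
        kept = acts[d - delays[i] + 1:] + (a_new,)
        return {(delays[i], xn, kept): p
                for xn, p in enumerate(dist) if p > 0}
    # Requested delay exceeds the current one: delay grows by one unit.
    return {(d + 1, x, acts + (a_new,)): 1.0}

P = np.array([[[0.0, 1.0], [1.0, 0.0]]])   # one action that swaps two states
out = augmented_kernel(P, (2, 0, (0, 0)), a_new=0, i=1, delays=[2, 1])
```

In the example, requesting delay $1$ from delay $2$ advances the observed state by two (deterministic) swaps, so the chain is observed back in state $0$ with probability one.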
By the Ionescu-Tulcea theorem again, for an initial distribution $q_{y_0} \in \Delta_{\mathcal{Y}}$ and a policy ${\pi}^{\prime} \in {\Pi}^{\prime}$, there exists a unique probability measure $\mathbb{P}^{{\pi}^{\prime}}_{q_{y_0}}$ such that $$\begin{aligned} \label{eq:augmented_canonical_measure} \mathbb{P}^{{\pi}^{\prime}}_{q_{y_0}} (y_0, a_0, i_0, y_1, \ldots) = q_{y_0}(y_0)\ {\pi}^{\prime}_0(a_0, i_0 \mid y_0)\ p_y (y_1 \mid y_0, a_0, i_0) \ldots\end{aligned}$$ It is then straightforward to see that there is a one-to-one correspondence between policies in the original MCDM and policies in the augmented MDP. This follows analogously from the case of a fixed information delay [@altman_nain], and we summarise the argument here: each $h_t \in H_t$ can be mapped to a corresponding ${h}^{\prime}_t \in {H}^{\prime}_t$ via $$\begin{aligned} (d, x_{-d_0}, a_{-d_0}, \ldots, x_{0}, a_{0}, i_{0}, \ldots, x_{t-1}, a_{t-1}, i_{t-1}, x_t) \mapsto (y_0, a_0, i_0, \ldots, y_{t-1}, a_{t-1}, i_{t-1}, y_t)\end{aligned}$$ where $y_t = \Big( d(t), x_{t-d(t)}, (a)_{t-d(t)}^{t-1}\Big)$ for $t \geq 0$. Then, given a policy $\pi \in \Pi_{DM}$, one can define a policy ${\pi}^{\prime}\in{\Pi}^{\prime}$ via $$\begin{aligned} {\pi}^{\prime}_{t}(\cdot \mid {h}^{\prime}_t) = \pi_t(\cdot \mid h_t), \quad t \geq 0.\end{aligned}$$ Moreover, the policies $\pi$ and ${\pi}^{\prime}$ assign the same joint law to $(y_t, a_t, i_t)_{t \geq0}$ (when viewed as the canonical coordinate projection). 
One can then consider the objective function in the augmented space $\mathcal{Y}$, which is now a fully observable problem: $$\begin{aligned} {J}^{\prime}({\pi}^{\prime}) \coloneqq \mathbb{E}^{{\pi}^{\prime}}_{q_{y_0}}\left[ \sum^{\infty}_{n=0} \gamma^{n} r_y(y_n, a_n, i_n) \right],\quad {\pi}^{\prime} \in {\Pi}^{\prime},\end{aligned}$$ where $r_y: \mathcal{Y} \times (A \times \mathcal{I}) \to \mathbb{R}$ and $$\begin{aligned} \label{eq:augmented_reward} r_y(y, {a}^{\prime}, i) = \sum_{{x}^{\prime} \in \mathcal{X}} r({x}^{\prime}, {a}^{\prime}) p^{(d)}({x}^{\prime} \mid x, \bm{a}) - \sum^K_{k=1} c_k \mathbbm{1}_{\{i=k\}},\quad y = (d, x, \bm{a}).\end{aligned}$$ The two problems are equivalent in that a policy $\pi_{*} \in \Pi_{DM}$ is optimal for $J$ if and only if the corresponding policy $\pi^{\prime}_{*} \in {\Pi}^{\prime}$ is optimal for ${J}^{\prime}$, and it holds that $$\begin{aligned} \sup_{\pi \in \Pi_{DM}} J(\pi) = J(\pi_{*}) = {J}^{\prime}(\pi^{\prime}_{*})= \sup_{{\pi}^{\prime} \in {\Pi}^{\prime}} {J}^{\prime}({\pi}^{\prime}).\end{aligned}$$ Given the equivalence, we shall use $\Pi_{DM}$ to represent the set of admissible policies without loss of generality. This allows us to establish dynamic programming for the MCDM as follows. **Proposition 8**.
*Let $v : \mathcal{Y} \to \mathbb{R}$ be the value function $$\begin{aligned} v(y) = \sup_{\pi \in \Pi_{DM}} \mathbb{E}^{\pi}\left[ \sum^{\infty}_{n=0} \gamma^{n} r_y(y_n, a_n, i_n) \middle \vert y_0 = y \right].\end{aligned}$$ Then $v$ satisfies the dynamic programming equation $$\begin{aligned} v(y) = \max_{({a}^{\prime},i) \in A \times \mathcal{I}} \left\{ \sum_{{x}^{\prime} \in \mathcal{X}} r({x}^{\prime}, {a}^{\prime}) p^{(d)}({x}^{\prime} \mid x, \bm{a}) - \sum^{K}_{k=1} c_k \mathbbm{1}_{\{i = k\}} + \gamma \sum_{y^{\prime} \in \mathcal{Y}}p_y( {y}^{\prime}\mid y, {a}^{\prime}, i) v({y}^{\prime}) \right\}, \end{aligned}$$ where $p_y$ is the augmented kernel as in [\[eq:augmented_kernel\]](#eq:augmented_kernel){reference-type="eqref" reference="eq:augmented_kernel"}. Moreover, the optimal policy is given in feedback form, so that $\pi_{*,n} = \phi_n(y_n)$ for some feedback function $\phi_n$.* *Proof.* This is a standard application of dynamic programming for a fully observable MDP, see e.g. [@hernandezlerma_1 Theorem 4.2.3]. ◻ **Remark 9**. *When considering the MFG in the next section, a deterministic measure flow representing the population distribution introduces an implicit time dependence within the transition kernel and reward. The generic single agent problem in the definition of the MFNE then becomes time-inhomogeneous. The time-homogeneous setup in this section generalises directly to a setting with time-inhomogeneous transition kernels, rewards and dynamic programming equations. However, for ease of exposition we choose to present the MCDM under the time-homogeneous setting here.* # MFG formulation with control of information speed {#sec:MFG} **To ease notation, in the remainder of the paper we write $U \coloneqq A \times \mathcal{I}$ and $u = (a,i) \in U$.** ## Finite agent game with observation delay Consider an $N$-player game with mean-field interaction, where each agent can control their observation delay.
We shall start with incorporating the measure dependence into the MCDM in . **Definition 10**. An MCDM with measure dependence is a tuple $\langle \mathcal{X}, A, \mathcal{D}, \mathcal{C}, p, r \rangle$, where - $\mathcal{X}$ is the finite *state space*; - $A$ is the finite *action space*; - $\mathcal{D} = \{ d_0, \ldots, d_K\}$, with $0 \leq d_K < \ldots < d_0$, is the set of *delay values*; - $\mathcal{C} = \{ c_0, c_1, \ldots, c_K\}$, with $0 = c_0 < c_1 < \ldots < c_K$, is the set of *cost values*; - $p: \mathcal{X} \times A \times \Delta_{\mathcal{X}} \to \Delta_{\mathcal{X}}$ is the *transition kernel*; - $r: \mathcal{X} \times A \times \Delta_{\mathcal{X}} \to [0, \infty)$ is the *one-step reward function*. Denote by $x^j_t \in \mathcal{X}$ the state of the $j$-th player at time $t$, and $a^j_t \in A$ the corresponding action. Assume that the mean-field interaction occurs in the reward and the transition probabilities of the players, and takes the same form for each player. The transition kernel is given by $p: \mathcal{X} \times A \times \Delta_{\mathcal{X}} \to \Delta_{\mathcal{X}}$ so that the $j$-th player moves from state $x^j_t$ to $x^j_{t+1}$ with probability $$\begin{aligned} p( x^j_{t+1} \mid x^j_{t}, a^j_{t}, e^N_{t}),\quad e^N_t(\cdot) = \frac{1}{N} \sum^N_{k=1} \delta_{x^k_{t}}(\cdot).\end{aligned}$$ Here $e^N_{t}$ is the empirical distribution of the states of the agents. Similarly, the one-step reward function is given by $r: \mathcal{X} \times A \times \Delta_{\mathcal{X}} \to [0, \infty)$ so that player $j$ receives a reward of $r(x^j_t, a^j_t, e^N_t)$ at time $t$.
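The empirical measure $e^N_t$ is simply a normalised histogram of the players' states; a minimal sketch (names are ours), with states encoded as integers:

```python
import numpy as np

def empirical_distribution(states, n_states):
    """e^N_t: the fraction of the N players occupying each state."""
    counts = np.bincount(np.asarray(states), minlength=n_states)
    return counts / len(states)

# Four players in states 0, 1, 1, 2 of a three-state space:
e = empirical_distribution([0, 1, 1, 2], n_states=3)  # -> [0.25, 0.5, 0.25]
```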
Recall that for a fully Markovian system, we consider the lifted problem in the augmented space $$\begin{aligned} \mathcal{Y} \coloneqq \bigcup_{d=d_K}^{d_0} \{d\} \times \mathcal{X} \times A^d.\end{aligned}$$ For the $N$-player model, consider the history sets $H_0 \coloneqq \mathcal{Y}\times \Delta_{\mathcal{Y}}$ and $H_t \coloneqq H_{t-1} \times U \times \mathcal{Y} \times \Delta_{\mathcal{Y}}$ for $t \geq 1$. A policy $\pi = (\pi_t)_{t \geq 0}$ is a sequence of maps $\pi_t: H_t \to \Delta_U$. **Definition 11**. A policy $\pi$ is admissible for the $N$-player MCDM if at each time $t$, there exists a sequence of kernels $\phi^d_t: \mathcal{X} \times A^d \times \Delta^{t+1}_{\mathcal{Y}} \to \Delta_{A \times \mathcal{I}}$, $d \in \overline{\mathcal{D}}$, such that for each $h_t \in H_t$, $$\begin{aligned} \pi_t(\cdot \mid h_t) = \phi^{d(t)}_t(\cdot \mid x_{-d_0}, \ldots, x_{t-d(t)}, a_{-d_0}, \ldots, a_{t-1}, \tilde{e}^N_0, \ldots, \tilde{e}^N_t), \end{aligned}$$ where $d(t) \in \overline{\mathcal{D}}$ is the delay period at time $t$ for a corresponding history sequence $h_t \in H_t$, and $\tilde{e}^N_t$ is the empirical distribution of the augmented states, i.e. $$\begin{aligned} \tilde{e}^N_t = \frac{1}{N}\sum^N_{k=1} \delta_{y^k_t}(\cdot), \quad y^k_t = (d^k(t), x^k_{t-d^k(t)}, a^k_{t-d^k(t)}, \ldots, a^k_{t-1}). \end{aligned}$$ The set of admissible policies for player $j$ is denoted by $\Pi^j$. Let $\Pi = \prod^N_{j=1} \Pi^j$. Player $j$'s objective function is given by $$\begin{aligned} J^N_j(\pi^{(N)}) = \mathbb{E}^{\pi^{(N)}}\left[ \sum^{\infty}_{n=0} \gamma^{n} r(x^j_n, a^j_n, e^N_n) \right]\end{aligned}$$ where $\pi^{(N)} = (\pi^1, \ldots, \pi^N) \in \Pi$. The notion of optimality in the $N$-player game can be captured by the Nash equilibrium, which intuitively says that, at equilibrium, no player can make gains by deviating from their current strategy, provided that all other players remain at their strategies. **Definition 12** (Nash equilibrium).
$\pi^{(N)}_{*} \in \Pi$ is a Nash equilibrium for the $N$-player MCDM if for each $j \in \{1, \ldots, N\}$, $$\begin{aligned} J^N_j(\pi^{(N)}_{*}) = \sup_{\pi \in \Pi^j} J^N_j(\pi, \pi_{*}^{-j}),\end{aligned}$$ where $\pi^{-j}_{*} = (\pi^1_{*}, \ldots, \pi^{j-1}_{*}, \pi^{j+1}_{*}, \ldots, \pi^N_{*})$. **Definition 13** ($\varepsilon$-Nash equilibrium). For $\varepsilon>0$, a policy $\pi^{(N)} \in \Pi$ is an $\varepsilon$-Nash equilibrium for the MCDM if for each $j \in \{1, \ldots, N \}$, $$\begin{aligned} J^N_j(\pi^{(N)}) \geq \sup_{\tilde{\pi} \in \Pi^j} J^N_j(\tilde{\pi}, \pi^{-j}) - \varepsilon,\end{aligned}$$ where $\pi^{-j}$ denotes the policies of all players other than $j$ under $\pi^{(N)}$. In general, the Nash equilibrium is hard to characterise and computationally intractable. It is also impractical to search over policies that depend on the distribution of all players. Therefore it is more useful to consider a search over Markovian policies for each player, and to formulate the equilibrium condition with respect to such policies. Indeed, the common approach for modelling partially observable games is to consider Markovian policies as above [@saldi2019approximate]. This is a reasonable assumption, as in practice it will be hard for each agent to keep track of the movements of all other players when the number of players grows large. A policy is *Markovian* if $\pi = (\pi_t)_{t \geq 0}$ is such that $\pi_t: \mathcal{Y} \to \Delta_U$. Let $\Pi_{\mathrm{mrkv}}^j$ denote the set of Markov policies for player $j$, with $\Pi_{\mathrm{mrkv}} = \prod^N_{j=1} \Pi_{\mathrm{mrkv}}^j$. **Definition 14** (Markov--Nash equilibrium). $\pi^{(N)}_{*} \in \Pi_{\mathrm{mrkv}}$ is a Markov--Nash equilibrium for the $N$-player MCDM if for each $j \in \{1, \ldots, N\}$, $$\begin{aligned} J^N_j(\pi^{(N)}_{*}) = \sup_{\pi \in \Pi_{\mathrm{mrkv}}^j} J^N_j(\pi, \pi_{*}^{-j}),\end{aligned}$$ where $\pi^{-j}_{*} = (\pi^1_{*}, \ldots, \pi^{j-1}_{*}, \pi^{j+1}_{*}, \ldots, \pi^N_{*})$. **Definition 15** ($\varepsilon$-Markov--Nash equilibrium). 
For $\varepsilon>0$, a policy $\pi^{(N)} \in \Pi_{\mathrm{mrkv}}$ is an $\varepsilon$-Markov--Nash equilibrium for the MCDM if for each $j \in \{1, \ldots, N \}$, $$\begin{aligned} J^N_j(\pi^{(N)}) \geq \sup_{\tilde{\pi} \in \Pi_{\mathrm{mrkv}}^j} J^N_j(\tilde{\pi}, \pi^{-j}) - \varepsilon.\end{aligned}$$ ## MFNE for the MFG-MCDM The computation and characterisation of Nash equilibria are typically intractable due to the curse of dimensionality and the coupled dynamics across the different agents. Therefore, as an approximation, we consider the infinite population limit by sending the number of players ${N \to \infty}$, and replacing the empirical distribution of the agents by a measure flow ${\bm{\mu} = (\mu_n)_n \in \Delta^{\infty}_{\mathcal{X}}}$. In the mean-field setting, we consider the viewpoint of one representative agent, and assume that its interactions with members of the population, modelled by the measure flow $\bm{\mu}$, are symmetric. As in the $N$-player game, we consider a tuple $\langle \mathcal{X}, A, \mathcal{D}, \mathcal{C}, p, r \rangle$ (see Definition 10). For a given measure flow $\bm{\mu} \in \Delta^{\infty}_{\mathcal{X}}$, at time $n$, a representative agent transitions from the state $x_n$ to a new state $x_{n+1}$ with probability $$\begin{aligned} p(x_{n+1} \mid x_n, a_n, \mu_n),\end{aligned}$$ and collects a reward of $r(x_n, a_n, \mu_n)$. As each transition of the underlying state now depends on the given measure, the $d$-step transition kernel ${p^{(d)}: \mathcal{X} \times A^d \times \Delta^d_{\mathcal{X}} \to \Delta_{\mathcal{X}}}$ now depends on the measure flow across the $d$ time steps, so that we have $$\begin{aligned} \label{eq:n-step kernel w/ measure} x_{n+d} &\sim p^{(d)}(\cdot \mid x_n, (a)_{n}^{n+d-1}; (\mu)_{n}^{n+d-1}).\end{aligned}$$ We impose the following Lipschitz assumptions on the transition kernels and the reward function. **Assumption 16**. 
- *The one-step reward function $r$ satisfies a Lipschitz bound: there exists a constant $L_r$ such that for all $x, \hat{x} \in \mathcal{X}$, $a, \hat{a} \in A$, ${\mu, \hat{\mu} \in \Delta_{\mathcal{X}}}$, $$\begin{aligned} \lvert r(x, a, \mu) - r(\hat{x}, \hat{a}, \hat{\mu}) \rvert \leq L_r \left( \mathbbm{1}_{\{x \neq \hat{x}\}} + \mathbbm{1}_{\{a \neq \hat{a}\}} + \delta_{TV}( \mu , \hat{\mu} ) \right). \end{aligned}$$* - *For $1 \leq n \leq d_0$, the $n$-step transition kernels satisfy a uniform Lipschitz bound: there exists a constant $L_p$ such that for all $x, \hat{x} \in \mathcal{X}$, $\bm{a}, \hat{\bm{a}} \in A^n$, ${\bm{\mu}, \hat{\bm{\mu}} \in \Delta^n_{\mathcal{X}}}$, $$\begin{aligned} \delta_{TV}\left( p^{(n)}(\cdot \mid x, \bm{a}, \bm{\mu}),\ p^{(n)}(\cdot \mid \hat{x}, \hat{\bm{a}},\hat{\bm{\mu}}) \right) \leq L_p \left(\mathbbm{1}_{\{x \neq \hat{x}\}} + \mathbbm{1}_{\{\bm{a} \neq \hat{\bm{a}}\}} + \delta_{\mathrm{max}}(\bm{\mu}, \hat{\bm{\mu}}) \right). \end{aligned}$$* *In particular, as both $\mathcal{X}$ and $A$ are assumed to be finite, and the simplex $\Delta_{\mathcal{X}}$ is compact, both the reward function $r$ and transition kernel $p$ are bounded by some constants $M_r$ and $M_p$ respectively.* Once again, we shall consider the lifted problem on the augmented space $$\begin{aligned} \mathcal{Y} \coloneqq \bigcup_{d=d_K}^{d_0} \{ d \} \times \mathcal{X} \times A^d.\end{aligned}$$ On this augmented space $\mathcal{Y}$, now with the inclusion of the measure dependence, the counterparts to $p_y$ and $r_y$ in [\[eq:augmented_kernel\]](#eq:augmented_kernel){reference-type="eqref" reference="eq:augmented_kernel"} and [\[eq:augmented_reward\]](#eq:augmented_reward){reference-type="eqref" reference="eq:augmented_reward"} are given as follows. 
Let $y=\left(d, x, (a)^{n-1}_{n-d}\right)$, $\hat{y}= (\hat{d}, \hat{x}, \hat{\bm{a}})$, $u = (a_n, i)$, and ${\bm{\mu} = (\mu)^{n-d_K}_{n-d_0}}$, - ${p_y: \mathcal{Y} \times U \times \Delta^{d_0-d_K+1}_{\mathcal{X}} \to \Delta_{\mathcal{Y}}}$ is given by $$\begin{aligned} & p_y(\hat{y} \mid y, u, \bm{\mu}) \nonumber \\ =& \begin{cases} p^{(d-d_i+1)}\big(\hat{x} \mid x, (a)_{n-d}^{n-d_i}, (\mu)^{n-d_i}_{n-d}\big)\ \mathbbm{1}_{\left\{ \hat{d} = d_i,\ \hat{\bm{a}} = (a)_{n-d_i+1}^{n} \right\}}\ , & \ d_i \leq d \leq d_0;\\ \mathbbm{1}_{\left\{ \hat{x} = x ,\ \hat{d} = d+1,\ \hat{\bm{a}} = (a)^{n}_{n-d} \right\}}\ ,& \ d_K \leq d< d_i. \end{cases}\end{aligned}$$ - The reward function $r_y: \mathcal{Y} \times U \times \Delta^{d_0+1}_{\mathcal{X}} \to [0,\infty)$ is $$\begin{gathered} r_y(y, u, \bm{\mu}) = \sum_{{x}^{\prime} \in \mathcal{X}} r({x}^{\prime}, a_n, \mu_n) p^{(d)}({x}^{\prime} \mid x, \bm{a}, (\mu)^{n-1}_{n-d}) - \sum^K_{k=1} c_k \mathbbm{1}_{\{i=k\}}.\end{gathered}$$ Given Assumption 16, we have the following bounds on $p_y$ and $r_y$. **Proposition 17**. *Under Assumption 16:* - *For all $y, \hat{y} \in \mathcal{Y}$, $u, \hat{u} \in U$, $\bm{\mu}, \hat{\bm{\mu}} \in \Delta^{d_0-d_K+1}_{\mathcal{X}}$, the augmented kernel $p_y$ satisfies the Lipschitz bound $$\begin{aligned} \delta_{TV}\left( p_y(\cdot \mid y, u, \bm{\mu}),\ p_y(\cdot \mid \hat{y}, \hat{u}, \hat{\bm{\mu}}) \right) \leq L_P\left(\mathbbm{1}_{\{y \neq \hat{y}\}} + \mathbbm{1}_{\{u \neq \hat{u}\}} + \delta_{\mathrm{max}}(\bm{\mu}, \hat{\bm{\mu}}) \right), 
\end{aligned}$$ where $L_P = \max\{ 2M_p, L_p\}$.* - *For all $y, \hat{y} \in \mathcal{Y}$, $u, \hat{u} \in U$, $\bm{\mu}, \hat{\bm{\mu}} \in \Delta^{d_0+1}_{\mathcal{X}}$, the augmented reward function $r_y$ is Lipschitz and satisfies the bound: $$\begin{aligned} \lvert r_y(y, u, \bm{\mu}) - r_y(\hat{y}, \hat{u}, \hat{\bm{\mu}}) \rvert \leq 2L_r M_p \mathbbm{1}_{\{y \neq \hat{y}\}} + (L_r + c_K-c_0) \mathbbm{1}_{\{u \neq \hat{u}\}} + L_R\ \delta_{\mathrm{max}}(\bm{\mu}, \hat{\bm{\mu}}), \end{aligned}$$ where $L_R= L_r + L_r L_p$. Also $r_y$ is bounded by $M_r + c_K \eqqcolon M_R$.* *Proof.* - We have from the triangle inequality $$\begin{aligned} &\delta_{TV}\left( p_y(\cdot \mid y, u, \bm{\mu}),\ p_y(\cdot \mid \hat{y}, \hat{u}, \hat{\bm{\mu}}) \right)\\ \leq&\ \delta_{TV}\left( p_y(\cdot \mid y, u, \bm{\mu}),\ p_y(\cdot \mid \hat{y}, u, \bm{\mu}) \right) + \delta_{TV}\left( p_y(\cdot \mid \hat{y}, u, \bm{\mu}),\ p_y(\cdot \mid \hat{y}, \hat{u}, \hat{\bm{\mu}}) \right)\\ \leq&\ 2M_p \mathbbm{1}_{\{y \neq \hat{y}\}} + L_p \big(\mathbbm{1}_{\{u \neq \hat{u}\}} + \delta_{\mathrm{max}}(\bm{\mu}, \hat{\bm{\mu}})\big),\end{aligned}$$ where the first term in the last inequality follows from the uniform bound $M_p$ on all $d$-step transition kernels $p^{(d)}$, and the other terms follow from the Lipschitz assumption on $p^{(d)}$. - For consistency of notation, we index $\bm{\mu}$ and $\bm{\hat{\mu}}$ from time $n-d_0$ to $n$. 
Let $y=\left(d, x, \bm{a}\right)$, $\hat{y}= (\hat{d}, \hat{x}, \hat{\bm{a}})$, $u = (a, i)$ and ${u}^{\prime} = ({a}^{\prime}, {i}^{\prime})$, then $$\begin{aligned} \ & \lvert r_y(y, u, \bm{\mu}) - r_y(\hat{y}, {u}^{\prime}, \hat{\bm{\mu}}) \rvert \\ \leq\ & \left\lvert \sum_{{x}^{\prime} \in \mathcal{X}} r({x}^{\prime}, a, \mu_n)\ p^{(d)}({x}^{\prime} \mid x, \bm{a}, (\mu)^{n-1}_{n-d}) - \sum_{{x}^{\prime} \in \mathcal{X}} r({x}^{\prime}, {a}^{\prime}, \hat{\mu}_n)\ p^{(\hat{d})}({x}^{\prime} \mid \hat{x}, \hat{\bm{a}}, (\hat{\mu})^{n-1}_{n-\hat{d}}) \right\rvert + \lvert c_i- c_{{i}^{\prime}} \rvert\\ \leq\ & \left\lvert \sum_{{x}^{\prime} \in \mathcal{X}} r({x}^{\prime}, a, \mu_n) \left(p^{(d)}({x}^{\prime} \mid x, \bm{a}, (\mu)^{n-1}_{n-d}) - \ p^{(\hat{d})}({x}^{\prime} \mid \hat{x}, \hat{\bm{a}}, (\hat{\mu})^{n-1}_{n-\hat{d}})\right) \right\rvert\\ & \quad + \left\lvert \sum_{{x}^{\prime} \in \mathcal{X}} \left( r({x}^{\prime}, a, \mu_n)- r({x}^{\prime}, {a}^{\prime}, \hat{\mu}_n)\right) p^{(\hat{d})}({x}^{\prime} \mid \hat{x}, \hat{\bm{a}}, (\hat{\mu})^{n-1}_{n-\hat{d}}) \right\rvert + (c_K- c_0) \mathbbm{1}_{\{i \neq {i}^{\prime}\}}\\ \leq\ & \left( r(x_{\max}, a, \mu_n) - r(x_{\min}, a, \mu_n)\right)\ \delta_{TV}\left( p^{(d)}\big(\cdot \mid x ,\bm{a}, (\mu)^{n-1}_{n-d}\big),\ p^{(\hat{d})}\big(\cdot \mid \hat{x}, \hat{\bm{a}}, (\hat{\mu})^{n-1}_{n-\hat{d}}\big) \right) \\ & \quad + \max_{x \in \mathcal{X}} \lvert r(x,a, \mu_n) - r(x,{a}^{\prime}, \hat{\mu}_n) \rvert + (c_K- c_0) \mathbbm{1}_{\{i \neq {i}^{\prime}\}} \\ \leq\ & L_r \left(2M_p\mathbbm{1}_{\{ y \neq \hat{y} \}} + L_p\ \delta_{\mathrm{max}}(\bm{\mu}, \hat{\bm{\mu}})\right) + L_r \big( \mathbbm{1}_{\{a \neq {a}^{\prime}\}}+\delta_{\mathrm{max}}(\bm{\mu}, \hat{\bm{\mu}})\big) + (c_K- c_0) \mathbbm{1}_{\{i \neq {i}^{\prime}\}} \\ \leq\ & 2L_r M_p \mathbbm{1}_{\{ y \neq \hat{y} \}} + (L_r + c_K -c_0) \mathbbm{1}_{\{u \neq {u}^{\prime}\}} + (L_r + L_r L_p)\ \delta_{\mathrm{max}}(\bm{\mu}, \hat{\bm{\mu}}),\end{aligned}$$ 
where Assumption 16 is used for the third inequality. The second part is immediate from the definition of $r_y$.  ◻ We now proceed to establish the mean-field Nash equilibrium (MFNE) condition for agents operating under the MCDM formulation. This is characterised by a fixed point of the composition of the best response map and the measure flow map (e.g. [@saldi_mfg_discrete]). As the presence of observation delays leads to a non-Markovian problem, the fixed point characterisation will be established in terms of the augmented space. However, both $p_y$ and $r_y$ in general depend on the various $d$-step transition kernels [\[eq:n-step kernel w/ measure\]](#eq:n-step kernel w/ measure){reference-type="eqref" reference="eq:n-step kernel w/ measure"}, which in turn depend on measures on the underlying space $\mathcal{X}$. Thus, given a distribution $\nu_t \in \Delta_{\mathcal{Y}}$ on the augmented space, one would like to construct a sequence of measures $(\mu_{t,d})^{d_0}_{d=0} \in \Delta^{d_0+1}_{\mathcal{X}}$ for the transition kernel $p_y$ and the augmented reward $r_y$. This can be seen as analogous to players in the $N$-player game estimating the distribution of the underlying states of all players, given the belief state. In order to construct such a sequence of measures, **we shall have to further enlarge $\mathcal{Y}$ and consider the space** $$\begin{aligned} \label{eq:augmented space further enlarge} \tilde{\mathcal{Y}} \coloneqq \bigcup_{d=d_K}^{d_0} \{d\} \times \mathcal{X}^{d_0-d+1} \times A^d.\end{aligned}$$ In this instance, an element $\tilde{y}_n \in \tilde{\mathcal{Y}}$ can now be understood as $$\begin{aligned} \tilde{y}_n = (d, x_{n-d_0}, \ldots, x_{n-d}, a_{n-d}, \ldots, a_{n-1}),\end{aligned}$$ where once again, negative indices are used to indicate that the relevant states and actions occurred in the past. 
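For small instances the enlarged space $\tilde{\mathcal{Y}}$ can be enumerated directly; the sketch below builds its elements as tuples $(d, (x_{n-d_0},\ldots,x_{n-d}), (a_{n-d},\ldots,a_{n-1}))$. All sizes used are hypothetical toy values, not constants from the model.

```python
from itertools import product

def enlarged_space(state_space, action_space, delays, d0):
    """Enumerate Y-tilde = union over d in D of {d} x X^(d0-d+1) x A^d."""
    space = []
    for d in delays:
        # state histories x_{n-d0}, ..., x_{n-d}: length d0 - d + 1
        for xs in product(state_space, repeat=d0 - d + 1):
            # pending actions a_{n-d}, ..., a_{n-1}: length d
            for acts in product(action_space, repeat=d):
                space.append((d, xs, acts))
    return space

# Toy sizes: |X| = 2, |A| = 2, delay values D = {1, 2}, so d0 = 2.
Y_tilde = enlarged_space([0, 1], ["a", "b"], [1, 2], d0=2)
# 2^2 * 2^1 (d = 1) + 2^1 * 2^2 (d = 2) = 16 elements
```

The quotient described in Remark 20 then identifies elements agreeing on $(d, x_{n-d}, a_{n-d}, \ldots, a_{n-1})$, recovering $\mathcal{Y}$.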
Now, given $\nu_t \in \Delta_{\tilde{\mathcal{Y}}}$, we successively compute a sequence of distributions for the states $(x_{t-d_0}, \ldots, x_t)$, starting with $x_{t-d_0}$. The inclusion of the entire sequence $(x_{t-d_0}, \ldots, x_{t-d})$ in the space $\tilde{\mathcal{Y}}$ is essential, as for each $0 < s \leq d_0$, we require a distribution of the state $x_{t-s}$ in order to compute a distribution for the next state $x_{t-s+1}$. The construction of the map $\Delta_{\tilde{\mathcal{Y}}} \ni \nu_t \mapsto \bm{\mu}^{\nu}_t = (\mu_{t,d})^{d_0}_{d=0} \in \Delta^{d_0+1}_{\mathcal{X}}$ is then given by the following. We use superscripts to denote the corresponding marginal and conditional distributions on the coordinates. For example, $\nu^d_t$ denotes the marginal of $\nu_t$ on the delay coordinate, and $\nu^{\bm{x},\bm{a}\mid \Bar{d}}_t$ is the conditional distribution of $\nu_t$ on the $x$ and $\bm{a}$ coordinates, given a delay of $\Bar{d}$, so that we have $$\begin{aligned} \nu_t (\Bar{d},\bm{x}, \bm{a}) = \nu^d_t (\Bar{d})\ \nu^{\bm{x},\bm{a}\mid \Bar{d}}_t(\bm{x}, \bm{a}).\end{aligned}$$ Now, starting with ${d}^{\prime} = d_0$, take $$\begin{aligned} \mu_{t,d_0} = \nu^{x_{t-d_0}}_t \in \Delta_{\mathcal{X}},\end{aligned}$$ the marginal of $\nu_t$ on the $x_{t-d_0}$ coordinate. Next, define recursively for each $0 \leq {d}^{\prime} < d_0$, $$\begin{aligned} \mu_{t,{d}^{\prime}} (x) = \sum_{\Bar{d}\in \overline{\mathcal{D}}} \nu^d_t (\Bar{d}) \xi^{\Bar{d}}_{t,{d}^{\prime}}(x)\end{aligned}$$ where $$\begin{aligned} \xi^{\Bar{d}}_{t,{d}^{\prime}}(\cdot) = \begin{cases} \nu^{x_{t-{d}^{\prime}}\mid \Bar{d}}_t(\cdot), & \Bar{d} \leq {d}^{\prime}; \\ \sum_{x, \bm{a}} p^{(\Bar{d}-{d}^{\prime})}(\cdot \mid x, \bm{a}, (\mu_{t,d})_{d={d}^{\prime}+1}^{\Bar{d}})\ \nu^{x, \bm{a} \mid \Bar{d}}_t (x, \bm{a}), & \Bar{d} > {d}^{\prime}. 
\end{cases}\end{aligned}$$ Intuitively, the measures $\bm{\mu}^{\nu}_t = (\mu_{t,d})^{d_0}_{d=0}$ represent the distribution of the underlying states of the agents from time $t-d_0$ to time $t$ based on $\nu_t$, which can be interpreted as the distribution of the information states of the population at time $t$. Since the information state varies with the delay period, the conditional distributions $\nu^{\bm{x},\bm{a}\mid \Bar{d}}_t$ have to be considered separately for each $\Bar{d} \in \overline{\mathcal{D}}$. The following lemma shows that this mapping is also Lipschitz, and will be useful later when establishing a contraction in the regularised regime. **Lemma 18**. *The mapping $\nu_t \mapsto \bm{\mu}^{\nu}_t$ is Lipschitz with constant $L_M = \sum^{d_0}_{k=0} (2L_p)^k$.* *Proof.* Let $\nu_t, \hat{\nu}_t \in \Delta_{\tilde{\mathcal{Y}}}$, with respective images $\bm{\mu}^{\nu}_t = (\mu_{t,d})^{d_0}_{d=0}$ and $\bm{\mu}^{\hat{\nu}}_t = (\hat{\mu}_{t,d})^{d_0}_{d=0}$. First, by definition we have $$\begin{aligned} \delta_{TV}(\mu_{t,d_0}, \hat{\mu}_{t,d_0}) = \delta_{TV}(\nu^{x_{t-d_0}}_t,\hat{\nu}^{x_{t-d_0}}_t)\leq \delta_{TV}(\nu_t, \hat{\nu}_t).\end{aligned}$$ Now fix $0 \leq {d}^{\prime} < d_0$. 
Then $$\begin{aligned} \label{eq:aug_to_underlying_map_proof} \delta_{TV}(\mu_{t,{d}^{\prime}}, \hat{\mu}_{t,{d}^{\prime}}) &= \sum_{{x}^{\prime}\in\mathcal{X}} \Bigg\lvert \underbrace{\sum^{d_0}_{\Bar{d}=0} \nu^d_t(\Bar{d}) \xi^{\Bar{d}}_{t, {d}^{\prime}}({x}^{\prime})}_\text{$I_1$} - \underbrace{\sum^{d_0}_{\Bar{d}=0} \hat{\nu}^d_t(\Bar{d}) \hat{\xi}^{\Bar{d}}_{t, {d}^{\prime}}({x}^{\prime})}_\text{$I_2$} \Bigg\rvert.\end{aligned}$$ We can write $I_1$ as $$\begin{aligned} I_1 &= \sum^{{d}^{\prime}}_{\Bar{d} =0 } \nu^d_t(\Bar{d}) \nu^{x_{t-{d}^{\prime}}\mid \Bar{d}}_t({x}^{\prime}) + \sum^{d_0}_{\Bar{d} = {d}^{\prime}+1} \nu^d_t(\Bar{d}) \sum_{x, \bm{a}} p^{(\Bar{d}-{d}^{\prime})}({x}^{\prime} \mid x, \bm{a}, (\mu_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1})\ \nu^{x, \bm{a}\mid \Bar{d}}_t(x, \bm{a})\\ &\eqqcolon \sum^{{d}^{\prime}}_{\Bar{d} = 0} \nu^{d, x_{t-{d}^{\prime}}}_t(\Bar{d}, {x}^{\prime}) + J_1.\end{aligned}$$ Similarly, we can write $I_2$ as $$\begin{aligned} I_2 =\sum^{{d}^{\prime}}_{\Bar{d} = 0} \hat{\nu}^{d, x_{t-{d}^{\prime}}}_t(\Bar{d}, {x}^{\prime}) + J_2,\end{aligned}$$ with $J_2$ defined analogously. 
Then $$\begin{aligned} &\sum_{{x}^{\prime}\in\mathcal{X}} \lvert J_1 - J_2 \rvert \\ \leq &\ \sum_{{x}^{\prime}\in\mathcal{X}} \sum^{d_0}_{\Bar{d} = {d}^{\prime}+1} \sum_{x, \bm{a}} \Big\lvert \nu^d_t(\Bar{d})\ p^{(\Bar{d}-{d}^{\prime})}({x}^{\prime} \mid x, \bm{a}, (\mu_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1})\ \nu^{x, \bm{a}\mid \Bar{d}}_t(x, \bm{a})\\ & \qquad \qquad \qquad \qquad - \hat{\nu}^d_t(\Bar{d})\ p^{(\Bar{d}-{d}^{\prime})}({x}^{\prime} \mid x, \bm{a}, (\hat{\mu}_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1})\ \hat{\nu}^{x, \bm{a}\mid \Bar{d}}_t(x, \bm{a}) \Big\rvert\\ \leq&\ \sum^{d_0}_{\Bar{d}={d}^{\prime}+1} \sum_{x, \bm{a}} \left\lvert \nu^d_t(\Bar{d}) \nu^{x, \bm{a}\mid \Bar{d}}_t(x,\bm{a})\right\rvert \sum_{{x}^{\prime}\in\mathcal{X}} \Big\lvert p^{(\Bar{d}-{d}^{\prime})}({x}^{\prime} \mid x, \bm{a}, (\mu_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1}) - p^{(\Bar{d}-{d}^{\prime})}({x}^{\prime} \mid x, \bm{a}, (\hat{\mu}_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1}) \Big\rvert \\ & \quad +\sum^{d_0}_{\Bar{d} = {d}^{\prime}+1} \sum_{x, \bm{a}}\left\lvert \nu^d_t(\Bar{d}) \nu^{x, \bm{a}\mid \Bar{d}}_t(x, \bm{a}) - \hat{\nu}^d_t(\Bar{d}) \hat{\nu}^{x, \bm{a}\mid \Bar{d}}_t(x, \bm{a}) \right\rvert \sum_{{x}^{\prime}\in\mathcal{X}} p^{(\Bar{d}-{d}^{\prime})}({x}^{\prime} \mid x, \bm{a}, (\hat{\mu}_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1}) \\ \leq&\ 2L_p\ \delta_{\mathrm{max}}\big((\mu_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1}, (\hat{\mu}_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1}\big) + \sum^{d_0}_{\Bar{d} = {d}^{\prime}+1} \sum_{x, \bm{a}} \left\lvert \nu_t^{d,x, \bm{a}}(\Bar{d},x, \bm{a}) - \hat{\nu}^{d,x, \bm{a}}_t(\Bar{d},x, \bm{a}) \right\rvert.\end{aligned}$$ Returning to [\[eq:aug_to_underlying_map_proof\]](#eq:aug_to_underlying_map_proof){reference-type="eqref" reference="eq:aug_to_underlying_map_proof"}, we have $$\begin{aligned} \delta_{TV}(\mu_{t,{d}^{\prime}}, \hat{\mu}_{t,{d}^{\prime}}) & \leq \sum^{{d}^{\prime}}_{\Bar{d} = 0} \sum_{{x}^{\prime} \in \mathcal{X}} \left\lvert \nu^{d, 
x_{t-{d}^{\prime}}}_t(\Bar{d}, {x}^{\prime}) - \hat{\nu}^{d, x_{t-{d}^{\prime}}}_t(\Bar{d}, {x}^{\prime}) \right\rvert + \sum_{{x}^{\prime}\in\mathcal{X}} \lvert J_1 - J_2 \rvert \\ &\leq 2L_p\ \delta_{\mathrm{max}}\big((\mu_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1}, (\hat{\mu}_{t,d})^{\Bar{d}}_{d={d}^{\prime}+1}\big) + \delta_{TV}(\nu_t, \hat{\nu}_t),\end{aligned}$$ so that $$\begin{aligned} \delta_{\mathrm{max}}(\bm{\mu}^{\nu}_t, \bm{\mu}^{\hat{\nu}}_t) = \max_{0\leq d \leq d_0} \delta_{TV}(\mu_{t,d}, \hat{\mu}_{t,d}) \leq L_M\ \delta_{TV}(\nu_t, \hat{\nu}_t),\end{aligned}$$ as required. ◻ Now let $\bm{\nu} \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$. Given this fixed $\bm{\nu}$, optimising the objective function reduces to a single-agent problem. Hence, for a policy $\pi \in \Pi_{DM}$, define the objective function $$\begin{aligned} J_{\bm{\nu}}(\pi) \coloneqq \mathbb{E}^{\pi}\left[ \sum^{\infty}_{n=0} \gamma^{n} r_y(y_n, u_n, \bm{\mu}^{\nu}_n) \right],\end{aligned}$$ where $\mathbb{E}^{\pi}$ is the expectation induced by the transition kernel $p_y$ and the policy $\pi$. Then, the MFNE for the MCDM is defined as follows. **Definition 19**. Let $\bm{\nu}=(\nu_t)_t \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$. Define: 1. The best-response map $\Phi^{\mathrm{aug}}: \Delta^{\infty}_{\tilde{\mathcal{Y}}} \to \Pi_{DM}$, given by $$\begin{aligned} \Phi^{\mathrm{aug}}(\bm{\nu}) = \left\{ \hat{\pi} \in \Pi_{DM}: J_{\bm{\nu}}( \hat{\pi}) = \sup_{\pi \in \Pi_{DM}} J_{\bm{\nu}}(\pi) \right\}, \end{aligned}$$ 2. The measure flow map $\Psi^{\mathrm{aug}}: \Pi_{DM} \to \Delta_{\tilde{\mathcal{Y}}}^{\infty}$, defined recursively by $\Psi^{\mathrm{aug}}(\pi)_0 = \nu_0$ and $$\begin{aligned} \Psi^{\mathrm{aug}}(\pi)_{t+1}(\cdot) =\sum_{y \in \tilde{\mathcal{Y}}} \sum_{u \in U} p_y\Big(\cdot \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \pi_t(u \mid y) \Psi^{\mathrm{aug}}(\pi)_t(y). \end{aligned}$$ 3. 
A mean-field Nash equilibrium (MFNE) for the MCDM problem is a pair ${(\pi^{*}, \bm{\nu}^{*}) \in \Pi_{DM} \times \Delta^{\infty}_{\tilde{\mathcal{Y}}}}$ given by a fixed point $\bm{\nu}^{*}$ of $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{aug}}$, for which $\pi^{*} \in \Phi^{\mathrm{aug}}(\bm{\nu}^{*})$ (best response map) and $\bm{\nu}^{*} = \Psi^{\mathrm{aug}}(\pi^{*})$ (measure flow induced by policy) hold. The existence of MFNE for discrete MFGs in the fully observable case is shown in [@saldi_mfg_discrete], by utilising the Kakutani fixed point theorem, and further extended to MFGs with partial information in [@saldi2019approximate]. As seen above, Definition 19 is analogous to classical MFNE characterisations in discrete MFG setups [@saldi_mfg_discrete; @saldi_regularisation; @lauriere2022learning; @cui2021approximately], with the extra step of incorporating the maps $\nu_t \mapsto \bm{\mu}^{\nu}_t$. This is different from the barycenter approach in [@saldi2019approximate p.9]: when the belief state is measure-valued, taking the barycenter of a measure on the augmented state is effectively 'taking the average' to give a measure on the underlying state. Here, the belief state is parameterised by a finite set given by past observations, so the notion of taking the barycenter does not apply. Moreover, both $p_y$ and $r_y$ depend on the distribution of the underlying state across multiple time points in the past. Therefore an explicit construction of $\bm{\mu}^{\nu}$ is required. **Remark 20**. *The extra enlargement of the space $\mathcal{Y}$ to $\tilde{\mathcal{Y}}$ is necessary to formulate the MFNE fixed point condition and to compute $\bm{\mu}^{\nu}$ from $\nu$. This enlargement is not required for the best response update, as the extra states are irrelevant when solving the MDP for a fixed measure flow. 
One can view an element of $\mathcal{Y}$ as an equivalence class on $\tilde{\mathcal{Y}}$, defined by the relation that two elements are equivalent if and only if the values of $(d, x_{n-d}, a_{n-d}, \ldots, a_{n-1})$ are identical.* # Regularised MFG for the MCDM {#sec:computation} It is known that for finite-state MFGs, the MFNE need not be unique, and the fixed point operator given by ${\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{aug}}}$ does not form a contraction in general [@cui2021approximately]. In order to compute an approximate MFNE, we mirror the approaches of [@cui2021approximately; @saldi_regularisation] and consider a closely related game with a regulariser. This regulariser is an additive term in the reward of the objective function, and is given by a strongly convex function $\Omega: \Delta_U \to \mathbb{R}$. Then, we consider the regularised objective function $$\begin{aligned} J^{\mathrm{reg}}_{\eta, \bm{\nu}}(\pi) = \sum^{\infty}_{n=0} \gamma^{n}\left( \mathbb{E}^{\pi}[r_y \big(y_n, u_n, \bm{\mu}^{\nu}_n\big)] - \eta\ \Omega (\pi_n) \right),\end{aligned}$$ where $\eta$ is the regularisation parameter. The regularisation allows for a smoothed maximum to be obtained for the value function, and is often applied in reinforcement learning problems to improve policy exploration [@geist2019theory]. 
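The smoothed maximum induced by an entropy regulariser with a uniform reference measure is the scaled log-sum-exp that appears in the dynamic programming relations of this section. The following sketch, with hypothetical values, illustrates that it lower-bounds the hard maximum and recovers it as $\eta \to 0$.

```python
import math

def smoothed_max(values, eta):
    """eta * log( (1/|U|) * sum_u exp(f(u)/eta) ): the smoothed maximum
    arising from a KL regulariser with uniform reference measure."""
    m = max(values)  # shift to stabilise the log-sum-exp numerically
    s = sum(math.exp((v - m) / eta) for v in values) / len(values)
    return m + eta * math.log(s)

vals = [1.0, 2.0, 3.0]  # hypothetical Q-values over three actions
soft = smoothed_max(vals, eta=1.0)    # strictly below max(vals) = 3.0
sharp = smoothed_max(vals, eta=0.01)  # close to max(vals)
```

Unlike the hard maximum, this quantity is differentiable in the inputs, which is what allows the contraction arguments later in this section.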
Specifically, if $\Omega$ is strongly convex, then its Legendre-Fenchel transform $\Omega^{*}: \mathbb{R}^U \to \mathbb{R}$, defined as $$\begin{aligned} \Omega^{*}(f) = \max_{\pi \in \Delta_U} \left( \langle \pi, f \rangle - \Omega(\pi) \right),\end{aligned}$$ has the property that $\nabla \Omega^{*}$ is Lipschitz and satisfies $$\begin{aligned} \nabla\Omega^{*}(f) = \mathop{\mathrm{arg\,max}}_{\pi \in \Delta_U} \left( \langle \pi, f \rangle - \Omega(\pi) \right).\end{aligned}$$ In view of the above, one can interpret the $\Omega^{*}(f)$ term as the optimal value for $f$ across the set of admissible policies, with the optimal policy given by $\nabla \Omega^{*}(f)$. Commonly, $\Omega$ will be a KL divergence: $\Omega(\pi_n) = D_{KL}(\pi_n \Vert q)$, for some reference measure $q \in \Delta_U$. Then, the objective function reads $$\begin{aligned} J^{\mathrm{reg}}_{\eta,\bm{\nu}}(\pi) = \mathbb{E}^{\pi}\left[ \sum^{\infty}_{n=0} \gamma^{n} R^{\eta}\big(y_n, u_n, \bm{\mu}^{\nu}_n\big)\right],\end{aligned}$$ where $R^{\eta}(y_n, u_n, \bm{\mu}^{\nu}_n) = r_y(y_n, u_n, \bm{\mu}^{\nu}_n) - \eta \log \frac{\pi(u_n \vert y_n)}{q(u_n)}$. To simplify the analysis, we shall consider $q$ as the uniform distribution, i.e. $q(u) = 1/ \lvert U \rvert$, for the rest of this section. Our statements readily extend to the case of arbitrary reference measures $q$, so long as $q$ has full support. We shall state the corresponding results for arbitrary $q$ at the end of this section. 
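As a sanity check on the Legendre-Fenchel relation above: for the KL regulariser with parameter $\eta$, the maximiser of $\langle \pi, f\rangle - \eta\, D_{KL}(\pi \Vert q)$ over $\Delta_U$ is the Gibbs/softmax distribution $\pi(u) \propto q(u) \exp(f(u)/\eta)$. A minimal sketch, with hypothetical $Q$-values:

```python
import math

def softmax_policy(q_values, eta, ref=None):
    """argmax_pi <pi, f> - eta * KL(pi || ref):
    pi(u) proportional to ref(u) * exp(f(u) / eta)."""
    n = len(q_values)
    ref = ref if ref is not None else [1.0 / n] * n  # uniform reference q
    m = max(q_values)  # shift for numerical stability
    w = [r * math.exp((f - m) / eta) for f, r in zip(q_values, ref)]
    z = sum(w)
    return [wi / z for wi in w]

pi = softmax_policy([1.0, 2.0, 3.0], eta=1.0)  # favours the last action
```

Larger $\eta$ pushes the policy towards the reference measure, while $\eta \to 0$ concentrates it on the maximising action.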
Following the notation of [@saldi_regularisation], we also consider the following quantities: - the regularised value function ${J^{\mathrm{reg},*}_{\eta, \bm{\nu}}: \mathbb{N}\times \tilde{\mathcal{Y}} \to \mathbb{R}}$, where $$\begin{aligned} J^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y) = \sup_{\pi \in \Pi_{DM}} \mathbb{E}^{\pi}\left[ \sum^{\infty}_{n=t} \gamma^{n-t} R^{\eta}\big(y_n, u_n, \bm{\mu}^{\nu}_n\big) \middle \vert y_t = y \right].\end{aligned}$$ - the optimal regularised $Q$-function $Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}: \mathbb{N}\times \tilde{\mathcal{Y}} \times U \to \mathbb{R}$, where $$\begin{aligned} Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u) = \sup_{\pi \in \Pi_{DM}} \mathbb{E}^{\pi}\left[ \sum^{\infty}_{n=t} \gamma^{n-t} R^{\eta}\big(y_n, u_n, \bm{\mu}^{\nu}_n\big) \middle \vert y_t = y, u_t = u \right].\end{aligned}$$ Similarly to the metric $\delta_{\infty}$ for measure flows, for the $Q$-functions we shall use the metric $$\begin{aligned} \delta_{Q} (f,g) = \sum^{\infty}_{t = 0} \zeta^{-t} \max_{\substack{y \in \tilde{\mathcal{Y}} \\ u \in U}} \big\lvert f(t,y,u) - g(t,y,u)\big\rvert.\end{aligned}$$ Intuitively, we are giving more weight to the values closer to the current time. The optimal regularised $Q$-function satisfies the dynamic programming relation $$\begin{aligned} Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u) = R^{\eta}\big(y, u, \bm{\mu}^{\nu}_t\big) + \gamma \sum_{y^{\prime} \in \tilde{\mathcal{Y}}} J^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t+1, y^{\prime}) p_y\big(y^{\prime}\mid y, u, \bm{\mu}^{\nu}_t\big).\end{aligned}$$ Note that although the transition kernel and reward are time-homogeneous, the inclusion of the time-dependent measure flow leads to a time-inhomogeneous dynamic programming relation. 
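One backward step of this relation can be sketched as follows, with the measure flow frozen so that $r_y$ and $p_y$ reduce to arrays over a toy state/action space. All sizes and numerical values are hypothetical.

```python
import math

def soft_backup(r, p, q_next, eta, gamma):
    """One step of the soft dynamic programming recursion:
    Q(t,y,u) = r[y][u] + gamma * sum_y' p[y][u][y'] * V(y'),
    with V(y') = eta * log( (1/|U|) * sum_u' exp(q_next[y'][u'] / eta) )."""
    n_y, n_u = len(r), len(r[0])
    soft_v = []
    for yp in range(n_y):
        m = max(q_next[yp])  # stabilise the log-sum-exp
        s = sum(math.exp((q - m) / eta) for q in q_next[yp]) / n_u
        soft_v.append(m + eta * math.log(s))
    return [[r[y][u] + gamma * sum(p[y][u][yp] * soft_v[yp] for yp in range(n_y))
             for u in range(n_u)] for y in range(n_y)]

# Toy instance: 2 states, 2 actions, uniform transitions.
r = [[1.0, 0.0], [0.0, 1.0]]
p = [[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]]]
q0 = soft_backup(r, p, [[0.0, 0.0], [0.0, 0.0]], eta=1.0, gamma=0.9)
# with zero continuation values the backup just returns the rewards
```

In the time-inhomogeneous setting, the same step would be applied backwards in $t$ with $r$ and $p$ re-evaluated at the measures $\bm{\mu}^{\nu}_t$.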
It is well known that, when the regulariser $\Omega$ is given as relative entropy, the policy that maximises the regularised value function $J^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ is the softmax policy $\pi^{\mathrm{soft}}$, where $$\begin{aligned} \pi^{\mathrm{soft}}_t(u\mid y) = \frac{\exp\big( Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u)/\eta\big)}{\sum_{{u}^{\prime}\in U} \exp\big( Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,{u}^{\prime})/\eta\big)}.\end{aligned}$$ Then, the optimal regularised $Q$-function can be written in the form $$\begin{aligned} Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u) &= r_y\big(y, u, \bm{\mu}^{\nu}_t\big)\\ & \quad + \gamma \sum_{y^{\prime} \in \tilde{\mathcal{Y}}} p_y({y}^{\prime} \mid y, u, \bm{\mu}^{\nu}_t)\ \eta \log \left(\frac{1}{\lvert U \rvert}\sum_{{u}^{\prime}\in U} \exp{\frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t+1, {y}^{\prime}, {u}^{\prime})}{\eta}}\right).\end{aligned}$$ We can thus define the regularised MCDM-MFNE by the analogous fixed point criteria. **Definition 21**. Let $\bm{\nu}=(\nu_t)_t \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$ and $\eta > 0$. Define: 1. The best-response map ${\Phi^{\mathrm{reg}}_{\eta}: \Delta^{\infty}_{\tilde{\mathcal{Y}}} \to \Pi_{DM}}$, given by $$\begin{aligned} \Phi^{\mathrm{reg}}_{\eta}(\bm{\nu})_t(u\mid y) = \frac{\exp\big( Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u)/\eta\big)}{\sum_{{u}^{\prime}\in U} \exp\big( Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,{u}^{\prime})/\eta\big)}. \end{aligned}$$ 2. $\Psi^{\mathrm{aug}}: \Pi_{DM} \to \Delta_{\tilde{\mathcal{Y}}}^{\infty}$, the measure flow map as defined previously, where $\Psi^{\mathrm{aug}}(\pi)_0 = \nu_0$ and for $t \geq 0$, $$\begin{aligned} \Psi^{\mathrm{aug}}(\pi)_{t+1}(\cdot) =\sum_{y \in \tilde{\mathcal{Y}}} \sum_{u \in U} p_y\Big(\cdot \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \pi_t(u \mid y) \Psi^{\mathrm{aug}}(\pi)_t(y). \end{aligned}$$ 3. 
A regularised MFNE for the MCDM problem is a pair $(\pi^{*}, \bm{\nu}^{*})\in \Pi_{DM} \times \Delta^{\infty}_{\tilde{\mathcal{Y}}}$ given by the fixed point $\bm{\nu}^{*}$ of $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$, for which $\pi^{*} = \Phi^{\mathrm{reg}}_{\eta}(\bm{\nu}^{*})$ (best response map) and $\bm{\nu}^{*} = \Psi^{\mathrm{aug}}(\pi^{*})$ (measure flow induced by policy) hold. The next step is to show that the fixed point operator $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$, under a suitable choice of metric and regularisation parameter $\eta$, forms a contraction mapping, so that the iteration of these maps converges towards the fixed point, which is the regularised MFNE. We combine the approaches of [@cui2021approximately; @saldi_regularisation], extending their proofs to the case of the infinite horizon problem with time-dependent measure flows, as well as the inclusion of the map $\nu_t \mapsto \bm{\mu}^{\nu}_t$ within the definitions of $\Phi^{\mathrm{reg}}_{\eta}$ and $\Psi^{\mathrm{aug}}$. In order to demonstrate contraction of the regularised iterations, we defer the full statement and first show the following series of propositions regarding the Lipschitz continuity of the individual mappings. When treating the infinite horizon problem, we can approximate the optimal regularised $Q$-functions by considering their truncation at some finite time $N$; that is, first define $$\begin{aligned} J^{N,*}_{\eta, \bm{\nu}} (t,y) = \sup_{\pi \in \Pi_{DM}} \mathbb{E}^{\pi} \left[ \sum^{N}_{n=t} \gamma^{n-t} R^{\eta}(y_n, u_n, \bm{\mu}^{\nu}_n) \middle\vert y_t= y \right], \quad t \leq N,\ y \in \tilde{\mathcal{Y}}.\end{aligned}$$ Then, extend $J^{N,*}_{\eta,\bm{\nu}}$ to $\mathbb{N}\times \tilde{\mathcal{Y}}$ by defining $J^{N,*}_{\eta, \bm{\nu}}(t,y) = 0$ for all $t > N$, $y \in \tilde{\mathcal{Y}}$. 
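The truncation is justified by the geometric tail of the discounted sum: for a reward stream bounded by some constant $M$, the discounted mass beyond time $N$ is at most $\gamma^{N+1} M/(1-\gamma)$. A numerical sketch with hypothetical constants:

```python
def truncated_value(rewards, gamma, n):
    """Discounted sum of the first n+1 terms of a reward stream."""
    return sum(gamma**k * rew for k, rew in enumerate(rewards[: n + 1]))

# Constant rewards equal to the bound M: the full value is M / (1 - gamma),
# and the truncation error is at most gamma^(N+1) * M / (1 - gamma).
M, gamma, N = 1.0, 0.9, 50
full = M / (1 - gamma)
trunc = truncated_value([M] * (N + 1), gamma, N)
tail_bound = gamma ** (N + 1) * M / (1 - gamma)
```

This is the quantitative content behind the pointwise convergence $J^{N,*}_{\eta,\bm{\nu}} \to J^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ invoked below.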
Similarly, define the truncated versions of the optimal regularised $Q$-function by $$\begin{aligned} Q^{N,*}_{\eta,\bm{\nu}}(t,y,u) = \sup_{\pi \in \Pi_{DM}} \mathbb{E}^{\pi}\left[ \sum^{N}_{n=t} \gamma^{n-t} R^{\eta}\big(y_n, u_n, \bm{\mu}^{\nu}_n\big) \middle \vert y_t = y, u_t = u \right], \quad t \leq N,\ y \in \tilde{\mathcal{Y}},\ u \in U,\end{aligned}$$ and once again extend $Q^{N,*}_{\eta,\bm{\nu}}$ to $\mathbb{N}\times \tilde{\mathcal{Y}} \times U$ by defining $Q^{N,*}_{\eta,\bm{\nu}}(t,\cdot, \cdot) = 0$ for all $t > N$. Then, the truncated optimal regularised $Q$-functions satisfy the following: for $t < N$, $$\begin{aligned} Q^{N,*}_{\eta,\bm{\nu}}(t,y,u) &= r_y\big(y, u, \bm{\mu}^{\nu}_t\big) \nonumber \\ &\quad + \gamma \sum_{y^{\prime} \in \tilde{\mathcal{Y}}} p_y({y}^{\prime} \mid y, u, \bm{\mu}^{\nu}_t)\ \eta \log \left(\frac{1}{\lvert U \rvert}\sum_{{u}^{\prime}\in U} \exp{\frac{Q^{N,*}_{\eta,\bm{\nu}}(t+1, {y}^{\prime}, {u}^{\prime})}{\eta}}\right), \\ Q^{N,*}_{\eta,\bm{\nu}}(t,y,u) &= r_y\big(y, u, \bm{\mu}^{\nu}_t\big) \nonumber \\ &\quad+ \gamma \sum_{y^{\prime} \in \tilde{\mathcal{Y}}} p_y({y}^{\prime} \mid y, u, \bm{\mu}^{\nu}_t)\ \eta \log \left(\frac{1}{\lvert U \rvert} \sum_{{u}^{\prime}\in U} \exp{\frac{Q^{N-1,*}_{\eta,\bm{\nu}}(t, {y}^{\prime}, {u}^{\prime})}{\eta}}\right).\end{aligned}$$ It is a standard result via successive approximations that $J^{N,*}_{\eta,\bm{\nu}} \to J^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ and $Q^{N,*}_{\eta,\bm{\nu}} \to Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ pointwise [@hernandezlerma_1 Section 4.2]. We will utilise this pointwise convergence repeatedly in our analysis for the rest of this section. **Lemma 22**. 
*For any $\bm{\nu} \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$ and any pair $(t,N)$ such that $t \leq N$, the truncated $Q$-functions $Q^{N,*}_{\eta,\bm{\nu}}$ are uniformly bounded by $q^{*} \coloneqq M_R / ( 1- \gamma)$.* *Proof.* First note that $$\begin{aligned} \lvert Q^{N,*}_{\eta,\bm{\nu}}(N,y,u) \rvert = \lvert r_y\big(y, u, \bm{\mu}^{\nu}_N\big) \rvert \leq M_R \eqqcolon q_{N,N}.\end{aligned}$$ Then, for each $t < N$, $$\begin{aligned} \label{eq: qnstar_bound} \lvert Q^{N,*}_{\eta,\bm{\nu}}(t,y,u) \rvert & \leq M_R + \gamma \eta \max_{{y}^{\prime} \in \tilde{\mathcal{Y}}} \left\lvert \log \left(\sum_{{u}^{\prime}\in U} \frac{1}{\lvert U \rvert} \exp{\frac{Q^{N,*}_{\eta,\bm{\nu}}(t+1, {y}^{\prime}, {u}^{\prime})}{\eta}}\right) \right\rvert \nonumber \\ &\leq M_R + \gamma \eta \left(\frac{q_{N,t+1}}{\eta} \right) \nonumber \\ & = M_R + \gamma q_{N,t+1} \eqqcolon q_{N,t},\end{aligned}$$ where $q_{N,t+1}$ is the bound for $Q^{N,*}_{\eta,\bm{\nu}}(t+1,y,u)$. As $N \to \infty$, $q_{N,t}$ converges to the fixed point $q^{*}$ of the map $x \mapsto \gamma x + M_R$, i.e. $q^{*} = M_R / ( 1- \gamma)$. Moreover, $q^{*} > M_R$ and is independent of $t$. Hence, for each $t$, we have $q_{N,t} \uparrow q^{*}$ as $N \to \infty$. Together with the fact that $Q^{N,*}_{\eta, \bm{\nu}} \to Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ pointwise, sending $N \to \infty$ in [\[eq: qnstar_bound\]](#eq: qnstar_bound){reference-type="eqref" reference="eq: qnstar_bound"} gives the uniform bound $$\begin{aligned} \lvert Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u) \rvert \leq q^{*}.\end{aligned}$$ ◻ Now we shall prove by induction the following statement: **Lemma 23**. *Let $\bm{\nu}, \hat{\bm{\nu}} \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$.
Then for each $N$, the truncated $Q$-functions satisfy $$\begin{aligned} \left\lvert Q^{N,*}_{\eta, \bm{\nu}}(t,y,u) - Q^{N,*}_{\eta, \hat{\bm{\nu}}}(t,y,u) \right \rvert \leq l_{n,t}\ \delta_{TV}(\nu_t, \hat{\nu}_t),\quad 0 \leq t \leq N,\ y \in \tilde{\mathcal{Y}},\ u \in U,\ \eta > 0, \end{aligned}$$ where $(l_{n,t})_{0 \leq t \leq n}$ satisfy the recurrence relation $$\begin{aligned} l_{n,n} = L_R L_M, \quad l_{n,t} = \left(L_RL_M + \gamma \exp\left(\frac{2q^{*}}{\eta}\right) l_{n-1,t} + 2\gamma q^{*} L_pL_M \right). \end{aligned}$$* *Proof.* For the base case $N=0$, we have $$\begin{aligned} \lvert Q^{0,*}_{\eta, \bm{\nu}}(0, y, u) - Q^{0,*}_{\eta, \hat{\bm{\nu}}}(0, y, u) \rvert = \lvert r_y\big(y, u, \bm{\mu}^{\nu}_0\big) - r_y\big(y, u, \bm{\mu}^{\hat{\nu}}_0\big) \rvert \leq L_R L_M \delta_{TV}(\nu_0, \hat{\nu}_0).\end{aligned}$$ Now assume the hypothesis holds for $N=n$, and let $N = n+1$; the case $t = n+1$ follows exactly as the base case. Otherwise, for $0 \leq t \leq n$, $$\begin{aligned} & \left\lvert Q^{n+1,*}_{\eta, \bm{\nu}}(t,y,u) - Q^{n+1,*}_{\eta, \hat{\bm{\nu}}}(t,y,u) \right\rvert\\ \leq\ & \left\lvert r_y(y,u, \bm{\mu}^{\nu}_t) - r_y(y,u, \bm{\mu}^{\hat{\nu}}_t)\right\rvert \\ & + \gamma\ \Bigg\lvert \sum_{y^{\prime} \in \tilde{\mathcal{Y}}} p_y({y}^{\prime} \mid y, u, \bm{\mu}^{\nu}_t)\ \eta \log \left(\sum_{{u}^{\prime}\in U}\frac{1}{\lvert U \rvert} \exp{\frac{Q^{n,*}_{\eta,\bm{\nu}}(t, {y}^{\prime}, {u}^{\prime})}{\eta}}\right) \\ & \qquad- \sum_{y^{\prime} \in \tilde{\mathcal{Y}}} p_y({y}^{\prime} \mid y, u, \bm{\mu}^{\hat{\nu}}_t)\ \eta \log \left(\sum_{{u}^{\prime}\in U} \frac{1}{\lvert U \rvert}\exp{\frac{Q^{n,*}_{\eta,\hat{\bm{\nu}}}(t, {y}^{\prime}, {u}^{\prime})}{\eta}}\right)\Bigg\rvert \\ \leq\ & L_R\ \delta_{\mathrm{max}}( \bm{\mu}^{\nu}_t,\bm{\mu}^{\hat{\nu}}_t)\\ & + \gamma\eta \max_{{y}^{\prime} \in \tilde{\mathcal{Y}}}\left\lvert \log \left(\sum_{{u}^{\prime} \in U}\frac{1}{\lvert U \rvert}\exp{\frac{Q^{n,*}_{\eta,\bm{\nu}}(t, {y}^{\prime}, {u}^{\prime})}{\eta}}\right) - \log \left(\sum_{{u}^{\prime} \in U}\frac{1}{\lvert U \rvert}\exp{\frac{Q^{n,*}_{\eta,\hat{\bm{\nu}}}(t, {y}^{\prime}, {u}^{\prime})}{\eta}}\right) \right\rvert \\ & + \gamma \eta\ \delta_{TV}\left( p_y( \cdot \mid y, u, \bm{\mu}^{\nu}_t),\ p_y( \cdot \mid y, u, \bm{\mu}^{\hat{\nu}}_t) \right) \\ & \qquad \qquad \left( \log \left(\sum_{{u}^{\prime}\in U} \frac{1}{\lvert U \rvert}\exp{\frac{Q^{n,*}_{\eta,\bm{\nu}}(t, y_{\mathrm{max}}, {u}^{\prime})}{\eta}}\right) - \log \left(\sum_{{u}^{\prime}\in U}\frac{1}{\lvert U \rvert} \exp{\frac{Q^{n,*}_{\eta,\bm{\nu}}(t, y_{\mathrm{min}}, {u}^{\prime})}{\eta}}\right) \right) \\ \leq\ & L_R L_M \delta_{TV}(\nu_t, \hat{\nu}_t) + I_1 + I_2,\end{aligned}$$ where $I_1$ and $I_2$ denote the second and third terms of the preceding bound, respectively. For $I_2$, noting the bound on $Q^{n,*}_{\eta,\bm{\nu}}$ from Lemma 22, we have $$\begin{aligned} I_2 \leq 2\gamma q^{*} L_p\ \delta_{\mathrm{max}}( \bm{\mu}^{\nu}_t,\bm{\mu}^{\hat{\nu}}_t) \leq 2\gamma q^{*} L_p L_M \delta_{TV}(\nu_t, \hat{\nu}_t).\end{aligned}$$ For $I_1$ we use the mean value theorem and the Lipschitz property from the induction hypothesis to obtain $$\begin{aligned} I_1 &\leq \gamma \eta \max_{{y}^{\prime} \in \tilde{\mathcal{Y}}} \sum_{{u}^{\prime} \in U} \left\lvert \frac{\frac{1}{\eta}\exp(\frac{\zeta_{{u}^{\prime}}}{\eta})}{\sum_{u^{\prime\prime}\in U}\exp(\frac{\zeta_{u^{\prime\prime}}}{\eta})} \right\rvert \left\lvert Q^{n,*}_{\eta,\bm{\nu}}(t, {y}^{\prime}, {u}^{\prime}) - Q^{n,*}_{\eta,\hat{\bm{\nu}}}(t, {y}^{\prime}, {u}^{\prime}) \right\rvert \\ & \leq \gamma \exp\left(\frac{2q^{*}}{\eta}\right) l_{n,t}\ \delta_{TV}(\nu_t, \hat{\nu}_t).\end{aligned}$$ Combining all the above, we have $$\begin{aligned} \left\lvert Q^{n+1,*}_{\eta, \bm{\nu}}(t,y,u) - Q^{n+1,*}_{\eta, \hat{\bm{\nu}}}(t,y,u) \right \rvert \leq \left(L_RL_M + \gamma \exp\left(\frac{2q^{*}}{\eta}\right) l_{n,t} + 2\gamma q^{*} L_pL_M \right) \delta_{TV}(\nu_t, \hat{\nu}_t),\end{aligned}$$ which completes the induction step as required. ◻ **Proposition 24**.
*For $\eta > \frac{2M_R}{-(1-\gamma)\log \gamma}$, where $M_R$ is the uniform bound of $r_y$, $Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ is Lipschitz continuous with respect to $\bm{\nu}$ with Lipschitz constant $$\begin{aligned} l_{\eta} = \frac{L_M(L_R + 2 \gamma q^{*}L_p)}{1- \gamma \exp\left(\frac{2q^{*}}{\eta}\right)}. \end{aligned}$$* *Proof.* The assumption $\eta > \frac{2M_R}{-(1-\gamma)\log \gamma}$ is equivalent to $\gamma \exp\left(\frac{2q^{*}}{\eta}\right) < 1$, so for each $t$ the constants $l_{N,t}$ of Lemma 23 increase monotonically, $l_{N,t} \uparrow l_{\eta}$ as $N \to \infty$, where $l_{\eta}$ is the fixed point of the map $$\begin{aligned} x \mapsto L_R L_M + \gamma \exp\left(\frac{2q^{*}}{\eta}\right) x + 2\gamma q^{*} L_p L_M.\end{aligned}$$ Together with the pointwise convergence $Q^{N,*}_{\eta,\bm{\nu}} \to Q^{\mathrm{reg},*}_{\eta, \bm{\nu}}$, this yields, for each $t$, $$\begin{aligned} \left\lvert Q^{\mathrm{reg},*}_{\eta, \bm{\nu}}(t,y,u) - Q^{\mathrm{reg},*}_{\eta, \hat{\bm{\nu}}}(t,y,u) \right \rvert \leq l_{\eta}\ \delta_{TV}(\nu_t, \hat{\nu}_t).\end{aligned}$$ Note that $l_{\eta}$ is independent of $t$. Therefore, $$\begin{aligned} \delta_Q(Q^{\mathrm{reg},*}_{\eta, \bm{\nu}}, Q^{\mathrm{reg},*}_{\eta, \hat{\bm{\nu}}}) &= \sum^{\infty}_{t=0} \zeta^{-t} \max_{\substack{y \in \tilde{\mathcal{Y}}\\ u \in U}} \left\lvert Q^{\mathrm{reg},*}_{\eta, \bm{\nu}}(t,y,u) - Q^{\mathrm{reg},*}_{\eta, \hat{\bm{\nu}}}(t,y,u) \right \rvert \\ & \leq \sum^{\infty}_{t=0} \zeta^{-t} l_{\eta}\ \delta_{TV}(\nu_t, \hat{\nu}_t)\\ & = l_{\eta}\ \delta_{\infty}({\bm{\nu}, \hat{\bm{\nu}}}),\end{aligned}$$ as required. ◻ In particular, let $\eta^{*}$ be a constant with $\eta^{*} > \frac{2M_R}{-(1-\gamma)\log \gamma}$. Then for all $\eta \geq \eta^{*}$, $Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ has a uniform Lipschitz bound of $l_{\eta^{*}}$. The Lipschitz continuity of $Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}$ allows us to obtain the Lipschitz continuity of $\Phi^{\mathrm{reg}}_{\eta}$. This relies on the following lemma from [@cui2021approximately], which we restate here. **Lemma 25** ([@cui2021approximately Lemma B.7.5]).
*Let $\eta >0$ and $f_u:\Delta^{\infty}_{\tilde{\mathcal{Y}}} \to \mathbb{R}$ be Lipschitz continuous with Lipschitz constant $K_f$ for any $u \in U$. Then the function $$\begin{aligned} \bm{\nu} \mapsto \frac{\exp \left( \frac{f_u(\bm{\nu})}{\eta} \right)}{\sum_{{u}^{\prime} \in U} \exp\left( \frac{f_{{u}^{\prime}}(\bm{\nu})}{\eta} \right)} \end{aligned}$$ is Lipschitz with Lipschitz constant $K = \frac{(\lvert U \rvert -1) K_f}{2 \eta}$ for any $u \in U$.* **Corollary 26**. *For $\eta \geq \eta^{*}$, the map $\Phi^{\mathrm{reg}}_{\eta}$ is Lipschitz continuous with Lipschitz constant $K^{\eta}_{\mathrm{soft}} = \frac{\lvert U \rvert(\lvert U \rvert -1 ) l_{\eta^{*}}}{2\eta}$.* *Proof.* Given any $\bm{\nu} \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$, $\Phi^{\mathrm{reg}}_{\eta}$ maps $\bm{\nu}$ to the softmax policy $$\begin{aligned} \pi^{\mathrm{soft}}_{\bm{\nu},t}(u\mid y)\coloneqq \frac{\exp\left( \frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u)}{\eta}\right)}{\sum_{{u}^{\prime}\in U} \exp\left( \frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,{u}^{\prime})}{\eta}\right)}.\end{aligned}$$ Then we simply note that for any $\bm{\nu}, \hat{\bm{\nu}} \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$, $$\begin{aligned} \delta_{\Pi}(\Phi^{\mathrm{reg}}_{\eta}(\bm{\nu}), \Phi^{\mathrm{reg}}_{\eta}(\hat{\bm{\nu}}) ) & = \sup_{t \geq 0} \max_{y \in \tilde{\mathcal{Y}}}\ \delta_{TV}\left( \pi^{\mathrm{soft}}_{\bm{\nu},t}(\cdot \mid y),\ \pi^{\mathrm{soft}}_{\hat{\bm{\nu}},t}(\cdot\mid y) \right) \\ & = \sup_{t \geq 0} \max_{y \in \tilde{\mathcal{Y}}} \sum_{u \in U} \lvert \pi^{\mathrm{soft}}_{\bm{\nu},t}(u\mid y) - \pi^{\mathrm{soft}}_{\hat{\bm{\nu}},t}(u\mid y) \rvert. \end{aligned}$$ Applying Lemma 25, together with the uniform Lipschitz constant $l_{\eta^{*}}$ for $Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}$, gives us the desired result. ◻ We now show that the measure flow map $\Psi^{\mathrm{aug}}$ is Lipschitz, under a suitable choice of the constant $\zeta$ in the metric $\delta_{\infty}$.
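To make the two maps concrete, the following sketch runs the truncated soft-Bellman recursion bounded in Lemma 22 and forms the softmax best response of Corollary 26. The reward matrix `r` and kernel `p` here are randomly generated placeholders standing in for $r_y$ and $p_y$ at a frozen measure flow; they are not the paper's model.

```python
import numpy as np

# Toy sketch of the truncated soft-Bellman recursion and the softmax best
# response.  Sizes, rewards and kernels are illustrative assumptions only.
rng = np.random.default_rng(0)
n_y, n_u, N, gamma, eta = 4, 3, 50, 0.9, 2.0

r = rng.uniform(-1.0, 0.0, size=(n_y, n_u))          # stands in for r_y(y, u, mu_t)
p = rng.dirichlet(np.ones(n_y), size=(n_y, n_u))     # p[y, u] = p_y(. | y, u, mu_t)

def soft_q(n_steps):
    """Backward recursion Q^{n} = r + gamma * p . ( eta*log mean_u' exp(Q^{n-1}/eta) )."""
    q = r.copy()                                     # horizon-0 truncation: reward only
    for _ in range(n_steps):
        # soft value per successor state y' (the eta*log-mean-exp in the recursion)
        v = eta * np.log(np.mean(np.exp(q / eta), axis=1))
        q = r + gamma * p @ v
    return q

q = soft_q(N)
# Uniform bound q* = M_R / (1 - gamma) of Lemma 22
assert np.max(np.abs(q)) <= np.max(np.abs(r)) / (1 - gamma) + 1e-9

# Softmax best response Phi^reg_eta of Corollary 26
pi = np.exp(q / eta)
pi /= pi.sum(axis=1, keepdims=True)
```

Because the soft value is a log of a *mean* (not a sum) of exponentials, it inherits the same bound as $Q$, which is exactly why the $1/\lvert U \rvert$ factor appears in the recursion.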
Intuitively, given two similar policies (in the sense of the metric $\delta_{\Pi}$), the corresponding measure flows will gradually drift apart at a constant rate. The choice of $\zeta$ amounts to the weighting one gives to the current time over the distant future. **Proposition 27**. *Let $\zeta \in \mathbb{N}$ with $\zeta > 2L_P L_M +2$ be the constant in the metric $\delta_{\infty}$ [\[eq_flow_metric\]](#eq_flow_metric){reference-type="eqref" reference="eq_flow_metric"}. Then the map $\Psi^{\mathrm{aug}}$ is Lipschitz with constant $$\begin{aligned} L_{\Psi} = \frac{2L_P}{2L_P L_M +1} \left( \frac{\zeta}{\zeta - 2L_P L_M -2} + \frac{1}{\zeta-1} \right). \end{aligned}$$* *Proof.* We will show inductively that $$\begin{aligned} \delta_{TV}( \Psi^{\mathrm{aug}}(\pi)_{t}, \Psi^{\mathrm{aug}}(\hat{\pi})_{t}) \leq S_t\ \delta_{\Pi}(\pi, \hat{\pi})\end{aligned}$$ for constants $S_t$ given by $S_0 = 0$ and $S_{t+1} = 2L_P + 2S_t (L_P L_M +1)$. Clearly at $t=0$ we have $$\begin{aligned} \delta_{TV}( \Psi^{\mathrm{aug}}(\pi)_0, \Psi^{\mathrm{aug}}(\hat{\pi})_0) = \delta_{TV}( \nu_0, \nu_0) = 0. \end{aligned}$$ Then for the induction step, for $t \geq 0$, $$\begin{aligned} & \delta_{TV}\left( \Psi^{\mathrm{aug}}(\pi)_{t+1}, \Psi^{\mathrm{aug}}(\hat{\pi})_{t+1} \right) \\ =\ & \sum_{{y}^{\prime} \in \tilde{\mathcal{Y}}} \Bigg\lvert \sum_{y \in \tilde{\mathcal{Y}}} \sum_{u \in U} \bigg( p_y\Big({y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \pi_t(u \mid y) \Psi^{\mathrm{aug}}(\pi)_t(y) \\ & \qquad \qquad - p_y\Big( {y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \Big) \hat{\pi}_t(u \mid y) \Psi^{\mathrm{aug}}(\hat{\pi})_t(y) \bigg) \Bigg\rvert\\ \leq\ & J_1 + J_2, \end{aligned}$$ where $$\begin{aligned} J_1 & \coloneqq \sum_{{y}^{\prime} \in \tilde{\mathcal{Y}}}\left\lvert \sum_{y \in \tilde{\mathcal{Y}}}\sum_{u \in U} \left(p_y\Big({y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \pi_t(u \mid y) - p_y\Big( {y}^{\prime} \mid y, u,
\bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \Big) \hat{\pi}_t(u \mid y) \right) \Psi^{\mathrm{aug}}(\pi)_t(y)\ \right\rvert \\ & \leq \sum_{y \in \tilde{\mathcal{Y}}} \sum_{{y}^{\prime} \in \tilde{\mathcal{Y}}} \sum_{u \in U} \left\lvert p_y\Big({y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \pi_t(u \mid y) - p_y\Big( {y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \Big) \hat{\pi}_t(u \mid y) \right\rvert \Psi^{\mathrm{aug}}(\pi)_t(y), \end{aligned}$$ and $$\begin{aligned} J_2 & \coloneqq \sum_{y\in \tilde{\mathcal{Y}}}\sum_{{y}^{\prime}\in \tilde{\mathcal{Y}}} \sum_{u \in U} p_y\left({y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \right) \hat{\pi}_t(u \mid y) \left \lvert \Psi^{\mathrm{aug}}(\pi)_t(y) - \Psi^{\mathrm{aug}}(\hat{\pi})_t(y) \right\rvert \\ & \leq 2\ \delta_{TV}(\Psi^{\mathrm{aug}}(\pi)_t, \Psi^{\mathrm{aug}}(\hat{\pi})_t). \end{aligned}$$ The summation over $u \in U$ in $J_1$ can be bounded as $$\begin{aligned} &\sum_{u \in U} \left\lvert p_y\Big({y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \pi_t(u \mid y) - p_y\Big( {y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \Big) \hat{\pi}_t(u \mid y) \right\rvert \\ \leq\ & \sum_{u \in U} p_y\Big({y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \lvert\pi_t(u \mid y) - \hat{\pi}_t(u \mid y) \rvert\\ & \quad + \sum_{u \in U} \left\lvert p_y\Big({y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) - p_y\Big( {y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \Big) \right \rvert \hat{\pi}_t(u \mid y)\\ \leq\ & \bigg( p_y\Big({y}^{\prime} \mid y, u_{\mathrm{max}}, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) - p_y\Big({y}^{\prime} \mid y, u_{\mathrm{min}}, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) \bigg) \delta_{TV}(\pi_t(\cdot \mid y), \hat{\pi}_t(\cdot \mid y)) \\ & \quad + \max_{u \in U} \left\lvert p_y\Big({y}^{\prime} \mid y, u,
\bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) - p_y\Big( {y}^{\prime} \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \Big) \right \rvert, \end{aligned}$$ so that $$\begin{aligned} J_1 & \leq 2L_P\ \delta_{\Pi}( \pi, \hat{\pi}) + 2 \max_{y \in \tilde{\mathcal{Y}}} \max_{u \in U} \delta_{TV}\left( p_y\Big(\cdot \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\pi)} \Big) - p_y\Big( \cdot \mid y, u, \bm{\mu}_t^{\Psi^{\mathrm{aug}}(\hat{\pi})} \Big) \right)\\ & \leq 2L_P\ \delta_{\Pi}( \pi, \hat{\pi}) + 2 L_P L_M\ \delta_{TV}(\Psi^{\mathrm{aug}}(\pi)_t, \Psi^{\mathrm{aug}}(\hat{\pi})_t ). \end{aligned}$$ Then, applying the inductive step, $$\begin{aligned} & \delta_{TV}( \Psi^{\mathrm{aug}}(\pi)_{t+1}, \Psi^{\mathrm{aug}}(\hat{\pi})_{t+1} )\\ \leq \ & 2L_P\ \delta_{\Pi}( \pi, \hat{\pi} ) + (2 L_P L_M +2)\ \delta_{TV}(\Psi^{\mathrm{aug}}(\pi)_t, \Psi^{\mathrm{aug}}(\hat{\pi})_t ) \\ \leq \ & \big(2L_P + 2S_t (L_P L_M +1)\big) \delta_{\Pi}( \pi, \hat{\pi}), \end{aligned}$$ which proves the claim. Next, note that $S_t$ satisfies a first-order linear recurrence relation, solved explicitly by $$\begin{aligned} S_t = \frac{2L_P}{2L_P L_M +1}(2L_P L_M +2)^t - \frac{2L_P}{2L_P L_M +1}. \end{aligned}$$ Therefore, fix some $\zeta \in \mathbb{N}$ such that $\zeta > 2L_P L_M +2$. We then have $$\begin{aligned} \delta_{\infty}( \Psi^{\mathrm{aug}}(\pi), \Psi^{\mathrm{aug}}(\hat{\pi})) &= \sum^{\infty}_{t=0} \zeta^{-t} \delta_{TV}( \Psi^{\mathrm{aug}}(\pi)_{t}, \Psi^{\mathrm{aug}}(\hat{\pi})_{t})\\ & \leq \sum^{\infty}_{t=1} \frac{S_t}{\zeta^t}\ \delta_{\Pi}(\pi, \hat{\pi})\\ &\leq \frac{2L_P}{2L_P L_M +1} \left( \frac{\zeta}{\zeta - 2L_P L_M -2} + \frac{1}{\zeta-1} \right) \delta_{\Pi}(\pi, \hat{\pi}). \end{aligned}$$ ◻ Our statement of contraction for the regularised fixed point operator of the MFG-MCDM is then essentially a corollary of the previous propositions. **Theorem 28**.
*Recall the Lipschitz constants $L_p$, $L_P$, $L_R$ and $L_M$ for $p$, $p_y$, $r_y$ and the map $\nu_t \mapsto \bm{\mu}^{\nu}_t$ respectively. Let $\zeta$ and $\eta^{*}$ be constants such that $\zeta > 2L_P L_M +2$, and $\eta^{*} > \frac{2M_R}{-(1-\gamma)\log \gamma}$. Define $$\begin{aligned} &q^{*} = \frac{M_R}{1-\gamma},\quad l_{\eta^{*}} = \frac{L_M(L_R + 2 \gamma q^{*}L_p)}{1- \gamma \exp\left(\frac{2q^{*}}{\eta^{*}}\right)},\\ &L_{\Psi} = \frac{2L_P}{2L_P L_M +1} \left( \frac{\zeta}{\zeta - 2L_P L_M -2} + \frac{1}{\zeta-1} \right). \end{aligned}$$ Then for any $\eta$ such that $$\begin{aligned} \eta > \frac{1}{2}\lvert U \rvert \left(\lvert U \rvert - 1\right) l_{\eta^{*}} L_{\Psi}, \end{aligned}$$ the fixed point operator $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$ is a contraction mapping on the space $(\Delta^{\infty}_{\tilde{\mathcal{Y}}}, \delta_{\infty})$, where the constant $\zeta$ in the metric $\delta_{\infty}$ [\[eq_flow_metric\]](#eq_flow_metric){reference-type="eqref" reference="eq_flow_metric"} is as chosen above. In particular, by Banach's fixed point theorem, there exists a unique fixed point for $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$, which is a regularised MFNE for the MFG-MCDM problem.* *Proof.* Let $\bm{\nu}, \hat{\bm{\nu}} \in \Delta^{\infty}_{\tilde{\mathcal{Y}}}$, and let $\pi = \Phi^{\mathrm{reg}}_{\eta}(\bm{\nu})$ and $\hat{\pi}= \Phi^{\mathrm{reg}}_{\eta}(\hat{\bm{\nu}})$. Then by Proposition 27 and Corollary 26, $$\begin{aligned} \delta_{\infty}( \Psi^{\mathrm{aug}}(\pi), \Psi^{\mathrm{aug}}(\hat{\pi})) & \leq L_{\Psi} \delta_{\Pi}(\Phi^{\mathrm{reg}}_{\eta}(\bm{\nu}), \Phi^{\mathrm{reg}}_{\eta}(\hat{\bm{\nu}})) \\ & \leq L_{\Psi} K^{\eta}_{\mathrm{soft}} \delta_{\infty}(\bm{\nu}, \hat{\bm{\nu}}),\end{aligned}$$ where we recall from Corollary 26 that $K^{\eta}_{\mathrm{soft}} = \frac{\lvert U \rvert(\lvert U \rvert -1 ) l_{\eta^{*}}}{2\eta}$.
Since ${\eta > \frac{1}{2}\lvert U \rvert \left(\lvert U \rvert - 1\right) l_{\eta^{*}} L_{\Psi}}$, we have that $L_{\Psi} K^{\eta}_{\mathrm{soft}} < 1$, and hence the map $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$ is a contraction. As $(\Delta^{\infty}_{\tilde{\mathcal{Y}}}, \delta_{\infty})$ is a complete metric space, by the Banach fixed-point theorem there exists a unique fixed point, which furthermore serves as an MFNE for the regularised MFG-MCDM by definition. ◻ We now state the analogue of Theorem 28 when the KL divergence with respect to an arbitrary policy $q = (q_t)_t$ is used as the regulariser. **Theorem 29**. *Let $q \in \Pi_{DM}$ be an arbitrary admissible policy, bounded above and below by $\Bar{q}< \infty$ and $\underline{q} > 0$ respectively. Consider as the regulariser the KL divergence with respect to $q$: $$\begin{aligned} \Omega(\pi_t) = \sum_{u \in U} \pi_t(u) \log \frac{\pi_t(u)}{q_t(u)}. \end{aligned}$$ Define $l_{\eta,q}$ by $$\begin{aligned} l_{\eta,q} = \frac{L_M(L_R + 2 \gamma q^{*}L_p)}{1- \gamma \frac{\Bar{q}}{\underline{q}} \exp\left(\frac{2q^{*}}{\eta}\right)}, \end{aligned}$$ where $q^{*} = M_R/(1-\gamma)$ as before. Let $\zeta$ and $\eta^{*}$ be constants such that $\zeta > 2L_P L_M +2$, and $\eta^{*} > \frac{2M_R}{(1-\gamma)(\log \underline{q} - \log\gamma\Bar{q})}$. Then for any $\eta$ such that $$\begin{aligned} \eta > \frac{\Bar{q}^2}{2 \underline{q}^2}\lvert U \rvert \left(\lvert U \rvert - 1\right)L_{\Psi} l_{\eta^{*},q} , \end{aligned}$$ the fixed point operator $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$ is a contraction mapping on the space $(\Delta^{\infty}_{\tilde{\mathcal{Y}}}, \delta_{\infty})$, where the constant $\zeta$ in the metric $\delta_{\infty}$ [\[eq_flow_metric\]](#eq_flow_metric){reference-type="eqref" reference="eq_flow_metric"} is as chosen above.
In particular, by Banach's fixed point theorem, there exists a unique fixed point for $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$, which is a regularised MFNE for the MFG-MCDM problem.* *Proof.* This is essentially a corollary of Theorem 28, obtained by noting that the optimal regularised $Q$-function satisfies the dynamic programming equation $$\begin{aligned} Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u) &= r_y\big(y, u, \bm{\mu}^{\nu}_t\big)\\ & \quad + \gamma \sum_{y^{\prime} \in \tilde{\mathcal{Y}}} p_y({y}^{\prime} \mid y, u, \bm{\mu}^{\nu}_t)\ \eta \log \left(\sum_{{u}^{\prime}\in U} q_{t+1}({u}^{\prime} \mid {y}^{\prime}) \exp{\frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t+1, {y}^{\prime}, {u}^{\prime})}{\eta}}\right), \end{aligned}$$ and that the optimal policy $\pi^{\mathrm{soft}}$ now has the form $$\begin{aligned} \pi^{\mathrm{soft}}_{\bm{\nu},t}(u\mid y)\coloneqq \frac{q_t( u \mid y)\exp\left(\frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,u)}{\eta}\right)}{\sum_{{u}^{\prime}\in U} q_t( {u}^{\prime} \mid y) \exp\left( \frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(t,y,{u}^{\prime})}{\eta}\right)}. \end{aligned}$$ The proof of Theorem 28 can then be followed, inserting the bounds $\Bar{q}$ and $\underline{q}$ into the relevant constants where appropriate. The precise proof in the classical fully observable case is shown in [@cui2021approximately Theorem 3]. ◻ ## Approximate Nash equilibria for the $N$-player game with controllable information delay Recall that in the finite player case, player $j$'s objective function is given by $$\begin{aligned} J^N_j(\pi^{(N)}) = \mathbb{E}^{\pi^{(N)}}\left[ \sum^{\infty}_{n=0} \gamma^{n} r(x^j_n, a^j_n, e^N_n) \right].\end{aligned}$$ This subsection shows that the MFNE obtained from the regularised MFG with control of information speed forms an approximate Nash equilibrium for sufficiently small $\eta$. Note that the theorem only states that the regularised Nash equilibrium, defined via the fixed point of $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$, acts as an approximate Nash equilibrium for the underlying finite game for sufficiently small values of $\eta$.
However, one cannot infer from this the computability of the equilibria: for small $\eta$, there is no guarantee of a contractive fixed point operator, and the MFNE need not be unique. For the rest of this subsection, we only consider starting times of $t=0$, so we shall consider the $Q$-functions as functions over $\tilde{\mathcal{Y}} \times U$, and write, for example, $Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(y,u)$ for $Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(0,y,u)$ without loss of generality. Now for any policy $\pi$, define the associated $Q$-function by $$\begin{aligned} Q^{\pi}_{\bm{\nu}}(y,u) = \mathbb{E}^{\pi}\left[ \sum^{\infty}_{n=0} \gamma^{n} r_y\big(y_n, u_n, \bm{\mu}^{\nu}_n\big) \middle \vert y_0 = y, u_0 = u \right]. \end{aligned}$$ Define also the optimal (unregularised) $Q$-function $Q^{*}_{\bm{\nu}} \coloneqq \sup_{\pi \in \Pi_{DM}} Q^{\pi}_{\bm{\nu}}$. First, we shall prove the following convergence statements for the MFG with control of information speed. **Lemma 30**. *Let $(\eta_n)_n$ be a sequence with $\eta_n \downarrow 0$. Then the function $\bm{\nu} \mapsto Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}$ converges to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}$ as $n \to \infty$, uniformly in $\bm{\nu}$ and over all $y \in \tilde{\mathcal{Y}}$ and $u \in U$.* *Proof.* We shall prove in sequence the following statements: 1. For each $y \in \tilde{\mathcal{Y}}$ and $u \in U$, $\bm{\nu} \mapsto Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}}(y,u)$ converges to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$ pointwise. 2. For each $y \in \tilde{\mathcal{Y}}$ and $u \in U$, $\bm{\nu} \mapsto Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}}(y,u)$ converges uniformly to ${\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)}$. 3. For each $y \in \tilde{\mathcal{Y}}$ and $u \in U$, $\bm{\nu} \mapsto Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u)$ converges to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$ pointwise. 4.
$\bm{\nu} \mapsto Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u)$ converges to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$, uniformly in $\bm{\nu}$ and over all $y \in \tilde{\mathcal{Y}}$ and $u \in U$. We note that the corresponding convergence statements have been shown in [@cui2021approximately] for the finite horizon case for fully observable MFGs. Since the map $\bm{\nu} \mapsto \bm{\mu}^{\nu}$ is continuous, we therefore obtain the following statements, which we label (1a)--(4a), for the finite horizon problems for the MFG-MCDM: 1. For each $y \in \tilde{\mathcal{Y}}$ and $u \in U$, $\bm{\nu} \mapsto Q^{N,*}_{\eta_n, \bm{\nu}}(y,u)$ converges to $\bm{\nu} \mapsto Q^{N,*}_{\bm{\nu}}(y,u)$ pointwise. 2. For each $y \in \tilde{\mathcal{Y}}$ and $u \in U$, $\bm{\nu} \mapsto Q^{N,*}_{\eta_n, \bm{\nu}}(y,u)$ converges uniformly to ${\bm{\nu} \mapsto Q^{N,*}_{\bm{\nu}}(y,u)}$. 3. For each $y \in \tilde{\mathcal{Y}}$ and $u \in U$, $\bm{\nu} \mapsto Q^{N,\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u)$ converges to $\bm{\nu} \mapsto Q^{N,*}_{\bm{\nu}}(y,u)$ pointwise. 4. $\bm{\nu} \mapsto Q^{N,\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u)$ converges to $\bm{\nu} \mapsto Q^{N,*}_{\bm{\nu}}(y,u)$, uniformly in $\bm{\nu}$ and over all $y \in \tilde{\mathcal{Y}}$ and $u \in U$. We now show (1), the pointwise convergence of $\bm{\nu} \mapsto Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}}(y,u)$ to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$. Fix $\bm{\nu}$, $y$ and $u$.
For any $n, N \in \mathbb{N}$, we have $$\begin{aligned} \label{eq:inf_reg to inf_opt} &\lvert Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}} (y,u) - Q^{*}_{\bm{\nu}}(y,u) \rvert \nonumber \\ \leq\ & \lvert Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}} (y,u) - Q^{N,*}_{\eta_n, \bm{\nu}} (y,u)\rvert + \lvert Q^{N,*}_{\eta_n, \bm{\nu}}(y,u) - Q^{N,*}_{\bm{\nu}}(y,u) \rvert + \lvert Q^{N,*}_{\bm{\nu}}(y,u) - Q^{*}_{\bm{\nu}}(y,u) \rvert. \end{aligned}$$ By the successive approximations property, the finite horizon $Q$-functions converge pointwise to their infinite horizon counterparts. Hence, for any $\varepsilon> 0$, choose ${N}^{\prime} \in \mathbb{N}$ such that the third term of [\[eq:inf_reg to inf_opt\]](#eq:inf_reg to inf_opt){reference-type="eqref" reference="eq:inf_reg to inf_opt"} is smaller than $\frac{\varepsilon}{3}$ and $\sum^{\infty}_{m=N} \gamma^{m} M_R < \frac{\varepsilon}{6}$ for all $N \geq {N}^{\prime}$. Now for the first term we have $$\begin{aligned} \lvert Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}} (y,u) - Q^{N,*}_{\eta_n, \bm{\nu}}(y,u) \rvert &\leq \left\lvert \sup_{\pi \in \Pi_{DM}}\mathbb{E}^{\pi}\left[ \sum^{\infty}_{m=N} \gamma^{m} R^{\eta_n}\big(y_m, u_m, \bm{\mu}^{\nu}_m\big) \right] \right\rvert \\ & = \left\lvert \mathbb{E}^{\pi_{\mathrm{soft}}}\left[ \sum^{\infty}_{m=N} \gamma^{m} R^{\eta_n}\big(y_m, u_m, \bm{\mu}^{\nu}_m\big) \right] \right\rvert \\ & \leq \sum^{\infty}_{m=N} \gamma^{m} \big\{M_R + \eta_n \big(\log{\lvert U \rvert} + \mathbb{E}[ \log\pi_{\mathrm{soft}}(u_m \mid y_m)] \big) \big\} \\ & \leq \sum^{\infty}_{m=N} \gamma^{m} \{M_R + \eta_n\log{\lvert U \rvert} \}, \end{aligned}$$ using the boundedness of the function $x \mapsto x\log x$ on $[0,1]$, which bounds the entropy term by $\log \lvert U \rvert$.
Then, given an $N \geq {N}^{\prime}$, by (1a) we can choose ${n}^{\prime} \in \mathbb{N}$ such that the second term of [\[eq:inf_reg to inf_opt\]](#eq:inf_reg to inf_opt){reference-type="eqref" reference="eq:inf_reg to inf_opt"} is smaller than $\frac{\varepsilon}{3}$ and $\sum^{\infty}_{m=N} \gamma^{m}\eta_n\log{\lvert U \rvert}< \frac{\varepsilon}{6}$, so that for all $n \geq {n}^{\prime}$ we have $$\begin{aligned} \lvert Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}} (y,u) - Q^{*}_{\bm{\nu}}(y,u) \rvert < \varepsilon \end{aligned}$$ as required.\ Next, to show (2), note that $Q^{N,*}_{\eta, \bm{\nu}}$ is monotonically decreasing in $\eta$; that is, for any sequence $(\eta_n)_n$ such that $\eta_n \downarrow 0$, we have for each $n \in \mathbb{N}$, $$\begin{aligned} Q^{N,*}_{\eta_n, \bm{\nu}} \leq Q^{N,*}_{\eta_{n+1}, \bm{\nu}}. \end{aligned}$$ Hence, sending $N \to \infty$, we also have $$\begin{aligned} Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}} \leq Q^{\mathrm{reg},*}_{\eta_{n+1}, \bm{\nu}}, \end{aligned}$$ so that $Q^{\mathrm{reg},*}_{\eta, \bm{\nu}}$ is also monotonically decreasing in $\eta$. Moreover, by (1), $\bm{\nu} \mapsto Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}}(y,u)$ converges to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$, which is continuous in $\bm{\nu}$ [@saldi_regularisation Lemma 2].
Hence, by Dini's theorem, we conclude the uniform convergence of $\bm{\nu} \mapsto Q^{\mathrm{reg},*}_{\eta_n, \bm{\nu}}(y,u)$ to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$ for each $y \in \tilde{\mathcal{Y}}$, $u \in U$.\ To prove (3), the pointwise convergence of $\bm{\nu} \mapsto Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u)$ to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$ for each $y \in \tilde{\mathcal{Y}}$ and $u \in U$, we proceed analogously to the proof of (1), utilising the successive approximations property and the finite horizon convergence (3a).\ Finally, we show (4), the uniform convergence of $\bm{\nu} \mapsto Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u)$ to $\bm{\nu} \mapsto Q^{*}_{\bm{\nu}}(y,u)$, uniformly over all $y \in \tilde{\mathcal{Y}}$ and $u \in U$. We demonstrate the equicontinuity of the family of functions $$\begin{aligned} \left(\bm{\nu} \mapsto Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}( y,u)\right)_{n \in \mathbb{N}}. \end{aligned}$$ As the reward function $r_y$ is bounded, given $\varepsilon >0$, we can find some large $N$ such that $$\begin{aligned} & \lvert Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u) - Q^{\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\hat{\nu}})}_{\bm{\hat{\nu}}}(y,u) \rvert \\ \leq&\ \lvert Q^{N,\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u) - Q^{N,\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\hat{\nu}})}_{\bm{\hat{\nu}}}(y,u) \rvert + \frac{\varepsilon}{2}. \end{aligned}$$ Then, by (4a), for sufficiently large $n$ and $\delta_{\infty}(\bm{\nu}, \hat{\bm{\nu}}) < \delta$, $$\begin{aligned} \lvert Q^{N,\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\nu})}_{\bm{\nu}}(y,u) - Q^{N,\Phi^{\mathrm{reg}}_{\eta_n}(\bm{\hat{\nu}})}_{\bm{\hat{\nu}}}(y,u) \rvert < \frac{\varepsilon}{2}. \end{aligned}$$ Finally, we conclude the uniform convergence (4) by appealing to the Arzelà-Ascoli theorem, noting that the space of measure flows is compact by Tychonoff's theorem.
◻ Recall that the softmax policy reads $$\begin{aligned} \pi^{\mathrm{soft}}_{\eta,\bm{\nu}}(u\mid y)\coloneqq \frac{\exp\left(\frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(y,u)}{\eta}\right)}{\sum_{{u}^{\prime}\in U} \exp\left(\frac{Q^{\mathrm{reg},*}_{\eta,\bm{\nu}}(y,{u}^{\prime})}{\eta}\right)}. \end{aligned}$$ The uniform convergence of the $Q$-functions implies that the softmax policy converges to the argmax as the regulariser vanishes: **Lemma 31**. *The softmax policy $\pi^{\mathrm{soft}}$ converges to the greedy policy $\pi^{*}$, which places all of its mass on $\mathop{\mathrm{arg\,max}}_{u \in U} Q^{*}_{\bm{\nu}}(y,u)$ (assumed to be unique).* *Proof.* This follows from the uniform convergence of the $Q$-functions from Lemma 30 and the fact that the softmax function $f_c: \mathbb{R}^K \to [0,1]^K$, $$\begin{aligned} f_c(x)_i = \frac{\exp(c \cdot x_i)}{\sum_{j=1}^K \exp(c \cdot x_j)},\quad x = (x_1, \ldots, x_K), \end{aligned}$$ converges to the argmax $f$ as $c \to \infty$, where $f(x)_k = 1$ if $k = \mathop{\mathrm{arg\,max}}_i x_i$, and $0$ otherwise. ◻ This leads to the following result showing the approximate Nash property for a sequence of regularised MFNE. **Theorem 32**. *Let $(\eta_n)_n$ be a sequence with $\eta_n \downarrow 0$. For each $n$, let $(\pi^{*}_n, \bm{\nu}^{*}_n) \in \Pi_{DM} \times \Delta^{\infty}_{\tilde{\mathcal{Y}}}$ be the associated regularised MFNE, defined via the fixed point of $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta_n}$, for the MCDM-MFG. Then for any $\varepsilon > 0$, there exists ${n}^{\prime}, {M}^{\prime} \in \mathbb{N}$ such that for all $n\geq{n}^{\prime}$ and $M \geq {M}^{\prime}$, $$\begin{aligned} J^M_j(\pi^{*}_n, \ldots, \pi^{*}_n) \geq \sup_{\pi^j \in \Pi_{DM}} J^M_j(\pi^{*}_n, \ldots, \pi^j,\ldots, \pi^{*}_n) - \varepsilon \quad \mbox{for all $j \in \{1, \ldots, M\}$}. \end{aligned}$$* *Proof.* We adapt the proof of [@cui2021approximately Theorem 4], which shows the approximate Nash property for fully observable regularised MFGs in finite horizon.
By the uniform convergence of the $Q$-functions in Lemma 30, we have by Lemma 31 that $\pi^{*}_n$ converges to the greedy policy $\pi^{*}$, which places all of its mass on $\mathop{\mathrm{arg\,max}}_{u \in U} Q^{*}_{\bm{\nu}}(y,u)$. By [@cui2021approximately Lemma B.8.11], the regularised policy is approximately optimal for the MFG: for any $\varepsilon >0$, there exists ${n}^{\prime} \in \mathbb{N}$ such that for all $n \geq {n}^{\prime}$, $$\begin{aligned} J_{\bm{\nu}^{*}_n}(\pi^{*}_n) \geq \max_{\pi \in \Pi_{DM}} J_{\bm{\nu}^{*}_n} (\pi) - \varepsilon. \end{aligned}$$ By [@cui2021approximately Lemma B.5.6], if $\pi \in \Pi$ is an arbitrary policy and $\bm{\nu} = \Psi(\pi)$ the induced mean field, then for any sequence of policies $\{\pi^N\}_{N \in \mathbb{N}}$ we have $$\begin{aligned} \lvert J^N_1(\pi^N, \pi, \ldots, \pi) - J_{\bm{\nu}}(\pi^N) \rvert \to 0. \end{aligned}$$ Hence, we can choose a sequence of policies $\{\pi^M\}_{M\in \mathbb{N}}$ such that $$\begin{aligned} \pi^M \in \mathop{\mathrm{arg\,max}}_{\pi \in \Pi_{DM}} J_1^M (\pi, \pi^{*}_n, \ldots, \pi^{*}_n). \end{aligned}$$ This allows us to conclude that for any $\varepsilon>0$, there exists ${n}^{\prime}, {M}^{\prime} \in \mathbb{N}$ such that for all $n \geq {n}^{\prime}$, $M \geq {M}^{\prime}$, $$\begin{aligned} J^M_1(\pi^{*}_n, \ldots, \pi^{*}_n) \geq \max_{\pi\in\Pi_{DM}} J^M_1(\pi, \pi^{*}_n, \ldots, \pi^{*}_n) - \varepsilon, \end{aligned}$$ and by the symmetry of the players the same bound holds for every $j \in \{1, \ldots, M\}$, as desired. ◻ # Numerical experiment in epidemiology {#sec_numerics} We demonstrate the MFG with control of information speed on an epidemiological example, in the form of the SIS (susceptible-infected-susceptible) model. This is chosen as representative of a wide class of epidemiological models, including the SIR (susceptible-infected-recovered) model and its many variants; the extension to these variants is mathematically and computationally straightforward. ![Relative exploitability scores.
Top: relative exploitability for $c_1 = 0.5$ when applied to a uniform policy as a reference measure, fixed across all iterations. Bottom row: relative exploitability for the prior descent algorithm for two different cost values $c_1$.](fixed_prior0.05.png){#exploitability width="95%"} ![Relative exploitability scores. Top: relative exploitability for $c_1 = 0.5$ when applied to a uniform policy as a reference measure, fixed across all iterations. Bottom row: relative exploitability for the prior descent algorithm for two different cost values $c_1$.](0001expls.png){#exploitability width="110%"} ![Relative exploitability scores. Top: relative exploitability for $c_1 = 0.5$ when applied to a uniform policy as a reference measure, fixed across all iterations. Bottom row: relative exploitability for the prior descent algorithm for two different cost values $c_1$.](05expls.png){#exploitability width="110%"} We adapt in particular the discrete-time version of the SIS model used as test case in [@cui2021approximately]. In this model, a virus is assumed to be circulating amongst the population. Each agent can be in one of two states: susceptible (S) or infected (I). At each moment, the agent can decide to go out (U) or socially distance (D). Thus we have $\mathcal{X} = \{S, I\}$ and $A = \{U, D\}$. The probability of an agent being infected whilst going out is proportional to the fraction of infected people.[^3] Once infected, they have a constant probability of recovering at each unit of time. We use the following parameters for the transition kernel: $$\begin{aligned} p(S \mid I, U ) &= p(S\mid I, D) = 0.2, \\ p(I \mid S, U) &= 0.9^2 \cdot \mu_t(I), \\ p(I \mid S, D) &= 0.\end{aligned}$$ Whilst healthy, there is a cost for socially distancing, and there are larger costs for being infected.
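The transition kernel above, together with the induced flow of the fraction of infected, can be simulated directly. A minimal sketch in Python follows; the stationary policy and the initial condition are illustrative assumptions, not the equilibrium of the model:

```python
import numpy as np

S, I = 0, 1        # states: susceptible, infected
U, D = 0, 1        # actions: go out, socially distance

def transition(x, a, mu_I):
    """Distribution of the next state, following the kernel in the text."""
    if x == I:
        return {S: 0.2, I: 0.8}              # p(S | I, U) = p(S | I, D) = 0.2
    if a == U:
        p_inf = 0.9**2 * mu_I                # p(I | S, U) = 0.81 * mu_t(I)
        return {S: 1.0 - p_inf, I: p_inf}
    return {S: 1.0, I: 0.0}                  # p(I | S, D) = 0

def mean_field_flow(p_out_S, p_out_I, mu0_I=0.5, T=50):
    """Fraction of infected over time under a stationary policy.

    p_out_S / p_out_I are the probabilities of going out when susceptible /
    infected; these, like mu0_I, are assumptions for illustration.
    """
    mu = [mu0_I]
    for _ in range(T):
        mu_I, nxt = mu[-1], 0.0
        for x, mass, p_U in [(S, 1 - mu_I, p_out_S), (I, mu_I, p_out_I)]:
            for a, p_a in [(U, p_U), (D, 1 - p_U)]:
                nxt += mass * p_a * transition(x, a, mu_I)[I]
        mu.append(nxt)
    return mu

flow = mean_field_flow(p_out_S=0.5, p_out_I=0.1)
```

Since the state space is binary, propagating the fraction of infected alone suffices to describe the mean-field flow.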
As the infection rate in the SIS model does not depend on the proportion of people who are infected *and* going out, we adjust the reward to penalise going out whilst infected, reflecting the desired behaviour of socially distancing when infected. Thus, the reward for our numerical example is given by $$\begin{aligned} r(S, U) = 0, \quad & r(S,D) = -0.3, \\ r(I,U) = -1.25, \quad & r(I,D) = -1.0.\end{aligned}$$ In addition to the standard model, we introduce the notion of test result times. Assume that during a pandemic, the population undergoes daily testing in order to determine whether they are infected or not. Here we assume the availability of two testing options: a free option with a 3-day turnaround, and a paid option with a next-day result. Thus we have for our model $$\begin{aligned} \mathcal{D} = \{d_0 = 3, d_1 = 1\}, \quad \mathcal{C} = \{c_0 = 0, c_1 >0 \},\end{aligned}$$ where we shall consider different values of $c_1$ in our numerical experiments.\ For the computation of MFNEs for our model, we utilise the `mfglib` Python package [@mfglib]. We incorporate our own script for the mapping $\bm{\nu}\mapsto \bm{\mu}^{\bm{\nu}}$ so that the existing library can be adapted for our MFG with control of information speed, and in particular so that computations are carried out on the augmented space. We first initialise with a uniform policy as the reference measure $q$, and repeatedly apply the mapping $\Psi^{\mathrm{aug}} \circ \Phi^{\mathrm{reg}}_{\eta}$ for a range of values of the regularisation parameter $\eta$. As a benchmark to test for the convergence towards a regularised MFNE, we utilise the exploitability score, which, for a policy $\pi$, is defined by $$\begin{aligned} \mbox{Expl} (\pi) \coloneqq \max_{\tilde{\pi}} J_{\Psi^{\mathrm{aug}}(\tilde{\pi})}(\tilde{\pi}) - J_{\Psi^{\mathrm{aug}}(\pi)}(\pi).\end{aligned}$$ The exploitability score measures the suboptimality gap for a policy $\pi$ when computed with the measure flow induced by the map $\Psi^{\mathrm{aug}}$.
The exploitability score is 0 if and only if $\pi$ is an MFNE for the MFG, and a score of $\varepsilon$ indicates that $\pi$ is an $\varepsilon$-MFNE. We refer to the literature such as [@mf-omo; @perolat2021scaling; @lauriere2022learning] for a more detailed discussion.\ As the exploitability score in general is dependent on the rewards and the initial policy, we consider instead the relative exploitability, which scales the exploitability score by the initial value. We plot the convergence of the relative exploitability, with the uniform policy as reference measure, fixed across all iterations, in the bottom graph of . We see that for lower values of $\eta$, the algorithm converges to a lower relative exploitability value. This corresponds to the fact that the regularised MFG with lower values of $\eta$ approximates the unregularised MFG more closely. However, lower values of $\eta$ require a larger number of iterations for convergence. For values of $\eta$ smaller than 0.2, the algorithm fails to converge in our tests and blows up numerically (not plotted in the graph). This demonstrates an inherent limitation of the use of regularisation: one desires a sufficiently high value of $\eta$ in order to guarantee convergence, but high values lead to a regularised MFNE that approximates the original problem poorly. Moreover, searching for a suitable value of $\eta$ is computationally expensive.\ To mitigate the above issues, we utilise the prior descent algorithm [@cui2021approximately]. Here, the reference measure is dynamically updated, by using the policy obtained from the previous iteration as the reference measure for the next iteration. The reference measure can also be updated after a number of iterations instead, creating a double loop for the algorithm. We summarise the prior descent algorithm for the MFG with control of information speed in . The relative exploitability score is plotted in the top row of .
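The prior-descent double loop just described can be sketched on a toy one-shot mean-field game, where the regularised best response against a reference measure $q$ has the closed form $\pi(a) \propto q(a)\exp(r(a,\mu)/\eta)$. The congestion-style reward and all parameter values below are illustrative assumptions for the sketch, not the SIS model:

```python
import numpy as np

def reward(mu):
    # Illustrative congestion reward (an assumption for this sketch):
    # action a is less attractive the more mass mu[a] it already attracts.
    base = np.array([1.0, 0.8, 0.5])
    return base - mu

def regularised_best_response(q, mu, eta):
    # Closed-form maximiser of  E_pi[r(., mu)] - eta * KL(pi || q)  on the simplex.
    logits = np.log(q) + reward(mu) / eta
    logits -= logits.max()                   # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def prior_descent(eta=0.5, outer=50, inner=40):
    q = np.full(3, 1.0 / 3.0)                # initial reference measure: uniform
    pi = q.copy()
    for _ in range(outer):
        for _ in range(inner):               # damped fixed-point iterations of Psi ∘ Phi
            pi = 0.5 * pi + 0.5 * regularised_best_response(q, pi, eta)
        q = pi.copy()                        # prior descent: last iterate becomes the prior
    return pi

pi = prior_descent()
```

In this one-shot game the mean field equals the population's action distribution, so `mu` is just the current policy. Because each outer iteration moves the KL anchor to the current iterate, the scheme can approach an unregularised equilibrium even though $\eta$ stays fixed, which mirrors the advantage of prior descent over a fixed prior.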
The scores vastly outperform those of the fixed-prior case, and in general, we find that larger values of $\eta$ require more iterations for convergence, but converge to a lower exploitability score. In [@cui2021approximately], the prior descent algorithm is further improved by using the heuristic $\eta_{n+1} = \eta_n \cdot c$ for some constant $c > 1$, gradually increasing the regularisation to aid convergence. We also applied this heuristic for our problem, but for our case we do not see significant differences compared to initialising with large fixed values of $\eta$.\ ![Behaviour at MFNE for MCDM-MFG for cost $c_1 = 0.001$.](MFNE0.001.eps){#NE_cost0.001 width="11.5cm"} We now examine the MFNE for the SIS model with testing options for different values of $c_1$. corresponds to a low cost of $c_1 = 0.001$, whilst corresponds to a high cost of $c_1 = 0.05$. We compute the problem up to a terminal time of $T=100$, but truncate the graphs at $T=75$, as the plots near terminal time are skewed by the artificially imposed terminal condition. The graphs in the right column depict the testing choices at equilibrium. The top-right shows the proportion of the population opting for the free test, whilst the bottom-right shows the optimal testing choice for a healthy individual, given their current testing choice. We see a clear disparity across the two different costs. When the cost of the premium test is low, it is optimal to opt for the premium test with probability 0.6, and at equilibrium nearly 80% of the population chooses this option. In comparison, when the cost is high, it is optimal to choose the premium option only about 25% of the time, and less than half of the population use the premium option at equilibrium.\ The left columns of and depict the population behaviour with social distancing, and the proportion of infected at equilibrium.
At the beginning, there is a large number of infected people, so it is optimal to socially distance; once the proportion of infected is sufficiently low, a portion of the population starts to go out. This leads to a rebound in infection numbers, so that gradually the population socially distances again, and the cycle repeats. This is depicted by the periodic behaviours in the graph on the left.\ ![Behaviour at MFNE for MCDM-MFG for cost $c_1 = 0.05$.](MFNE0.05.eps){#NE_cost0.05 width="11.5cm"} For the case $c_1 = 0.001$, the low cost for premium testing leads to a higher percentage of the population with a more accurate estimation of their status. This leads to a larger proportion of people going out, so that the infection spreads at a quicker rate. This in turn leads to a quicker return to social distancing, and so forth. This can be seen in the higher frequency of cycles in the infected and distancing graphs of . Interestingly, the proportion of infected stabilises over time to a similar value, regardless of the cost of premium testing. The difference lies in the initial periods of peaks and troughs in infected numbers. ## Acknowledgements {#acknowledgements .unnumbered} Jonathan Tam is supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). Collaboration by Dirk Becherer in this project is supported by German Science Foundation DFG through the Berlin-Oxford IRTG 2544, Stochastic Analysis in Interaction. [^1]: Institute of Mathematics, Humboldt University of Berlin (`becherer@math.hu-berlin.de`) [^2]: Mathematical Institute, University of Oxford (`christoph.reisinger@maths.ox.ac.uk`, `jonathan.tam@maths.ox.ac.uk`) [^3]: We will discuss later the feasibility of a more realistic extension where the probability of infection is proportional to the fraction of infected people *who also go out*.
arxiv_math
{ "id": "2309.07877", "title": "Mean-field games of speedy information access with observation costs", "authors": "Dirk Becherer, Christoph Reisinger, Jonathan Tam", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Can we accelerate convergence of gradient descent without changing the algorithm---just by carefully choosing stepsizes? Surprisingly, we show that the answer is yes. Our proposed *Silver Stepsize Schedule* optimizes strongly convex functions in $\kappa^{\log_{\rho} 2} \approx \kappa^{0.7864}$ iterations, where $\rho=1+\sqrt{2}$ is the silver ratio and $\kappa$ is the condition number. This is intermediate between the textbook unaccelerated rate $\kappa$ and the accelerated rate $\kappa^{1/2}$ due to Nesterov in 1983. The non-strongly convex setting is conceptually identical, and standard black-box reductions imply an analogous partially accelerated rate $\varepsilon^{-\log_{\rho} 2} \approx \varepsilon^{-0.7864}$. We conjecture and provide partial evidence that these rates are optimal among all stepsize schedules. The Silver Stepsize Schedule is constructed recursively in a fully explicit way. It is non-monotonic, fractal-like, and approximately periodic of period $\kappa^{\log_{\rho} 2}$. This leads to a phase transition in the convergence rate: initially super-exponential (acceleration regime), then exponential (saturation regime). The core algorithmic intuition is *hedging* between individually suboptimal strategies---short steps and long steps---since bad cases for the former are good cases for the latter, and vice versa. Properly combining these stepsizes yields faster convergence due to the misalignment of worst-case functions. The key challenge in proving this speedup is enforcing long-range consistency conditions along the algorithm's trajectory. We do this by developing a technique that recursively glues constraints from different portions of the trajectory, thus removing a key stumbling block in previous analyses of optimization algorithms. More broadly, we believe that the concepts of hedging and multi-step descent have the potential to be powerful algorithmic paradigms in a variety of contexts in optimization and beyond. 
This series of papers publishes and extends the first author's 2018 Master's Thesis (advised by the second author)---which established for the first time that carefully choosing stepsizes can enable acceleration in convex optimization. Prior to this thesis, the only such result was for the special case of quadratic optimization, due to Young in 1953. author: - | Jason M. Altschuler\ UPenn\ `alts@upenn.edu` - | Pablo A. Parrilo\ LIDS - MIT\ `parrilo@mit.edu` bibliography: - hedging.bib title: | Acceleration by Stepsize Hedging I:\ Multi-Step Descent and the Silver Stepsize Schedule --- # Introduction {#sec:intro} Gradient descent (GD) is a simple iterative algorithm to minimize an objective function $f$ by producing better and better estimates via the update $$\begin{aligned} x_{t+1} = x_t - \alpha_t \nabla f(x_t)\,, \qquad \forall t = 0, 1, 2, \dots. \label{eq:gd}\end{aligned}$$ GD dates back nearly two hundred years to the work of Cauchy [@cauchy1847], yet it (and its variants) remain a primary workhorse in modern optimization, engineering, and machine learning due to its practical efficacy, simplicity, and scalability. It is of both theoretical and practical importance to analyze the convergence of GD and moreover to optimize parameters so that this convergence is as fast as possible. A central fact in convex optimization is that with a prudent choice of the stepsize schedule $\{\alpha_t\}$---the only[^1] parameters of the algorithm---running GD from any initialization $x_0$ produces iterates which optimize $f$ to arbitrary accuracy. Quantifying this statement leads to two intertwined questions: How fast does $x_n$ converge to a minimizer $x^*$ of $f$? And what stepsize choice $\{\alpha_t\}$ leads to the fastest convergence rate? This series of papers revisits these classical questions in the fundamental setting of smooth[^2] convex optimization.
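In code, the GD update above with a general stepsize schedule is a one-line loop. A minimal sketch follows; the quadratic test objective and its parameters are illustrative choices:

```python
import numpy as np

def gradient_descent(grad, x0, stepsizes):
    """Iterate x_{t+1} = x_t - alpha_t * grad(x_t) over a stepsize schedule."""
    x = np.asarray(x0, dtype=float)
    for alpha in stepsizes:
        x = x - alpha * grad(x)
    return x

# Illustrative test problem: f(x) = 0.5 * x^T A x with curvatures in [m, M] = [1, 10].
A = np.diag([1.0, 4.0, 10.0])
grad = lambda x: A @ x
x_final = gradient_descent(grad, np.ones(3), [2 / 11] * 100)  # constant 2/(M+m)
```

Running the constant schedule $\alpha_t = 2/(M+m)$ for 100 iterations drives the iterate essentially to the minimizer $x^* = 0$ on this well-conditioned instance.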
Our overarching goal is to understand how much mileage can be obtained by simply optimizing the stepsize choice for GD. Note that this is markedly different from the past forty years of literature on accelerating the convergence rate for GD. That literature---starting from Nesterov's seminal work in 1983 [@nesterov-agd]---achieves faster convergence rates by modifying the GD algorithm with extra building blocks such as momentum, auxiliary sequences, or other internal dynamics. See the related work section or the recent survey [@d2021acceleration]. In contrast, we investigate the basic question of: can we accelerate convergence without changing the GD algorithm---just by optimizing the stepsizes? #### Mainstream approach. {#mainstream-approach. .unnumbered} The standard analysis of GD uses a constant stepsize schedule, i.e., $\alpha_t = \bar{\alpha}$ for all iterations $t$; see e.g. the textbooks [@bubeck-book; @boyd; @polyakbook; @hazan2016introduction; @nesterov-survey; @luenberger1984linear; @BertsekasNonlinear] among many others. For example, $\bar{\alpha} = 1/M$ in the setting of $M$-smooth convex objectives, or $\bar{\alpha} = 2/(M+m)$ if the objectives are additionally $m$-strongly convex. This prescription is based on the following fact: $$\begin{gathered} \text{For one iteration of GD, there is a unique stepsize } \bar{\alpha} \text{ achieving the fastest convergence rate} \nonumber \\ \text{(in the worst case over functions and initializations).} \label{eq:intro:1-step}\end{gathered}$$ This is provably correct. For example, in the strongly convex setting, this $\bar{\alpha}$ provides the optimal contraction rate---a larger stepsize $\alpha_t > \bar{\alpha}$ can lead to overshooting the target $x^*$, and a smaller stepsize $\alpha_t < \bar{\alpha}$ can lead to undershooting $x^*$. However, it is well-known that even after optimizing the constant $\bar{\alpha}$, this constant stepsize schedule leads to a slow convergence rate. 
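The one-step optimality fact above can be checked numerically on 1-d quadratic instances, for which a single GD step contracts the error by the factor $|1-\alpha\lambda|$, where $\lambda \in [m, M]$ is the curvature. A sketch, where the grids and the restriction to quadratic instances are illustrative:

```python
import numpy as np

m, M = 1.0, 10.0
lams = np.linspace(m, M, 2001)       # curvatures of 1-d quadratic worst-case instances

def worst_contraction(alpha):
    # Worst-case one-step contraction |x_1 - x*| / |x_0 - x*| = |1 - alpha * lambda|.
    return np.abs(1.0 - alpha * lams).max()

alphas = np.linspace(0.01, 2.0 / M, 2001)
best = alphas[np.argmin([worst_contraction(a) for a in alphas])]
# Numerically, best is the textbook stepsize 2/(M+m), with contraction (kappa-1)/(kappa+1).
```

The minimum is attained where undershooting on $\lambda = m$ and overshooting on $\lambda = M$ balance, i.e., at $\bar{\alpha} = 2/(M+m)$.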
(Hence the intensive research on accelerated GD.) Moreover, even though many alternative stepsize schedules have been proposed in both theory and practice---e.g., exact line search, Armijo-Goldstein rules, Polyak-type schedules, Barzilai-Borwein-type schedules, etc., see the related work section---none of these alternative schedules have led to an analysis that outperforms the slow "unaccelerated" rate of constant stepsize GD. Conventional wisdom therefore dictates that slow convergence is unavoidable, unless one modifies GD by adding extra building blocks beyond choosing stepsizes, e.g., via momentum.

|                          | **Quadratic**                                             | **Convex**                                                        |
|--------------------------|-----------------------------------------------------------|-------------------------------------------------------------------|
| **Mainstream stepsizes** | $\Theta(\kappa)$ by constant stepsizes (folklore)         | $\Theta(\kappa)$ by constant stepsizes (folklore)                 |
| **Additional dynamics**  | $\Theta(\sqrt{\kappa})$ by Heavy Ball [@polyak1964some]   | $\Theta(\sqrt{\kappa})$ by Nesterov Acceleration [@nesterov-agd]  |
| **Hedged stepsizes**     | $\Theta(\sqrt{\kappa})$ by Chebyshev Stepsizes [@young53] | $\Theta(\kappa^{\log_{\rho} 2})$ by Silver Stepsizes (this paper) |

: Iteration complexity of various approaches for minimizing a $\kappa$-conditioned function. The dependence on the accuracy $\varepsilon$ is omitted as it is always $\log 1/\varepsilon$. Mainstream stepsize schedules require $\Theta(\kappa)$ iterations; this is the textbook unaccelerated rate. For the special case of quadratics (left), accelerated rates of $\Theta(\sqrt{\kappa})$ can be equivalently achieved via Young's 1953 Chebyshev Stepsize Schedule [@young53] or Polyak's 1964 Heavy Ball Algorithm [@polyak1964some]. For the general case of convex functions (right), this equivalence between internal dynamics and varying stepsizes is false.
Acceleration was first achieved by Nesterov's 1983 Fast Gradient Algorithm [@nesterov-agd] and it has long been believed that in the convex setting, any acceleration requires modifying GD by adding internal dynamics, e.g., momentum. We prove that accelerated convex optimization *is* possible by choosing better stepsizes. #### Faster convergence via dynamic stepsizes? {#faster-convergence-via-dynamic-stepsizes .unnumbered} The premise of this series of papers is that this is wrong. Why might the constant stepsize schedule $\alpha_t = \bar{\alpha}$ be sub-optimal? Certainly it is optimal if GD is only run for $n=1$ iteration---this is the assertion [\[eq:intro:1-step\]](#eq:intro:1-step){reference-type="eqref" reference="eq:intro:1-step"}. However, it is sub-optimal for $n$ steps of GD, for any $n > 1$. Briefly, this is because the statement for $n=1$ requires the worst-case problem instance (the objective function $f$ and initialization $x_0$) to align with the choice of stepsize $\alpha_t \neq \bar{\alpha}$ so that the convergence is slow, and for $n > 1$, the worst-case problem instances for each individual step might not align. This suggests an algorithmic opportunity: $$\begin{gathered} \text{Is it possible to combine (individually suboptimal) stepsizes } \alpha_t \neq \bar{\alpha} \nonumber \\ \text{to achieve faster convergence for } n > 1\text{ iterations?} \label{eq:intro:hedge}\end{gathered}$$ We refer to this algorithmic idea as *hedging* between worst-case problem instances. (See §[2](#sec:hedging){reference-type="ref" reference="sec:hedging"} for a fully worked-out example.) #### Motivation: the special case of quadratics. {#motivation-the-special-case-of-quadratics. .unnumbered} Of course, using non-constant stepsizes is not a new idea---for the special case of minimizing *convex quadratics*, it has been known that this enables faster convergence since Young's seminal paper in 1953 [@young53]. 
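The hedging idea is easiest to see for $n = 2$ on quadratic instances: the curvatures that are worst for a long step are harmless for a short step, and vice versa, so some unequal pair of stepsizes beats repeating the individually optimal constant stepsize. A numerical sketch, where the search grid is an illustrative choice:

```python
import numpy as np

m, M = 1.0, 10.0
lams = np.linspace(m, M, 801)              # curvatures of 1-d quadratic instances

def two_step_rate(a, b):
    # Worst-case two-step contraction |(1 - a*lam)(1 - b*lam)| over the class.
    return np.abs((1.0 - a * lams) * (1.0 - b * lams)).max()

const = 2.0 / (M + m)                      # the individually optimal stepsize
const_rate = two_step_rate(const, const)   # composing the 1-step bound: ((k-1)/(k+1))^2

grid = np.linspace(0.05, 0.6, 112)         # illustrative search range for (a, b)
hedged_rate = min(two_step_rate(a, b) for a in grid for b in grid)
# Some short/long pair strictly beats repeating the 1-step-optimal constant stepsize.
```

Note that the best pair includes a long step exceeding $2/M$, which would not even be contractive if used in isolation; it is safe only because it is hedged against the preceding short step.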
In particular, for quadratic optimization, the optimal stepsize schedule is not constant, but given by the inverse roots of Chebyshev polynomials; the order of these stepsizes is irrelevant for the convergence rate (assuming exact arithmetic); and the resulting convergence rate is the so-called *accelerated rate* that is optimal among all Krylov-subspace algorithms [@nem-yudin], including even modifications of GD that use momentum or internal dynamics. See the related work section §[1.2](#ssec:intro:related){reference-type="ref" reference="ssec:intro:related"} for further details. #### A longstanding gap between quadratic and convex optimization. {#a-longstanding-gap-between-quadratic-and-convex-optimization. .unnumbered} However, while the advantage of non-constant stepsizes has been well-understood for quadratic optimization for 70 years (and nowadays is even taught in many introductory optimization courses), it has remained entirely open whether this phenomenon extends to any setting of convex optimization beyond quadratics. In particular, it was unclear whether *any* stepsize schedule could lead to *any* speedup over the textbook GD convergence rate---even by a constant factor. This gap is due to several reasons. First, many phenomena from the quadratic case are simply false in the setting of general convex optimization: e.g., the stepsize schedule based on roots of the Chebyshev polynomials is provably bad for the convex setting [@altschuler2018greed Chapter 8], and the order of the stepsizes dramatically affects the convergence rate in the convex setting [@altschuler2018greed Chapter 8]. 
Second, any approach for establishing the advantage of a non-constant stepsize schedule must track how progress in the current iteration is affected by previous iterations---and this effect of history appeared to only be explicitly computable in the quadratic setting, essentially since that is the only case in which the GD map is linear (hence tractable to track after repeated iterations). ## Contribution and discussion {#ssec:intro:cont} In this initial paper, we show that GD can converge faster for smooth convex optimization by using certain time-varying, non-monotonic stepsize schedules. This answers the hedging question [\[eq:intro:hedge\]](#eq:intro:hedge){reference-type="eqref" reference="eq:intro:hedge"} in the affirmative. This series of papers publishes and extends the first author's 2018 Master's Thesis [@altschuler2018greed] (advised by the second author), which proved such a result for the first time, see the related work section §[1.2](#ssec:intro:related){reference-type="ref" reference="ssec:intro:related"}. In particular, Chapter 8 of the thesis showed for the first time that a constant-factor improvement over the unaccelerated rate is possible in the smooth strongly convex setting, and Chapter 6 of the thesis showed for the first time that an asymptotic acceleration is possible in any setting beyond quadratics. (The latter result proves that arcsine-distributed random stepsizes achieve the fully accelerated rate $\Theta(\sqrt{\kappa} \log1/\varepsilon)$ if the convex functions are separable; this will be detailed in a forthcoming paper.) Prior to this thesis, the only result for acceleration via choosing stepsizes was for the special case of quadratic optimization, due to Young in 1953. Conceptually, we deviate from traditional analyses of GD (and other optimization algorithms) by directly analyzing the cumulative progress of all the steps of the algorithm, rather than combining separate bounds for the progress of individual steps. 
As mentioned above, this global analysis of *multi-step descent* is provably necessary to show any benefit for any deviation from the constant stepsize schedule. Indeed, separately analyzing the progress for each iteration---as done, e.g., in standard GD analyses, in exact line search, or in standard offline-to-online convex optimization reductions---is provably too shortsighted and unavoidably leads to pessimistic, unaccelerated convergence rates. The key difficulty is tracking how different iterations affect progress in other iterations. Previously, this could be accomplished only for the special case of quadratics because then the GD update is linear. We show that this can be accomplished in the general convex setting by using long-range consistency conditions between the gradients seen along the algorithm's trajectory. We provide a high-level overview of these new conceptual ideas in §[2](#sec:hedging){reference-type="ref" reference="sec:hedging"}. Below, we formally state our main result in §[1.1.1](#sssec:intro:dis:main){reference-type="ref" reference="sssec:intro:dis:main"}, and then discuss the improved convergence rate in §[1.1.2](#sssec:intro:dis:rate){reference-type="ref" reference="sssec:intro:dis:rate"}, the proposed stepsize schedule in §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}, and the generality of the result in §[1.1.4](#sssec:intro:dis:setting){reference-type="ref" reference="sssec:intro:dis:setting"}. ### Main result: acceleration without momentum {#sssec:intro:dis:main} ![Silver Stepsize Schedule, for different condition numbers $\kappa = 4,16,64,256$ -- only the first 64 stepsizes are shown. Notice the recursive, fractal behavior and the approximate periodicity with period of size $n^* = \kappa^{\log_{\rho} 2}$; details in §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}.
Also note the different scales on the vertical axis, since the stepsizes are unnormalized, vs. the normalized stepsizes in Figure [12](#fig:normalizedstepsizes){reference-type="ref" reference="fig:normalizedstepsizes"}.](figs/stepsize-1.pdf "fig:"){#fig:stepsizes width=".24\\columnwidth"} ![Silver Stepsize Schedule, for different condition numbers $\kappa = 4,16,64,256$ -- only the first 64 stepsizes are shown. Notice the recursive, fractal behavior and the approximate periodicity with period of size $n^* = \kappa^{\log_{\rho} 2}$; details in §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}. Also note the different scales on the vertical axis, since the stepsizes are unnormalized, vs. the normalized stepsizes in Figure [12](#fig:normalizedstepsizes){reference-type="ref" reference="fig:normalizedstepsizes"}.](figs/stepsize-2.pdf "fig:"){#fig:stepsizes width=".24\\columnwidth"} ![Silver Stepsize Schedule, for different condition numbers $\kappa = 4,16,64,256$ -- only the first 64 stepsizes are shown. Notice the recursive, fractal behavior and the approximate periodicity with period of size $n^* = \kappa^{\log_{\rho} 2}$; details in §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}. Also note the different scales on the vertical axis, since the stepsizes are unnormalized, vs. the normalized stepsizes in Figure [12](#fig:normalizedstepsizes){reference-type="ref" reference="fig:normalizedstepsizes"}.](figs/stepsize-3.pdf "fig:"){#fig:stepsizes width=".24\\columnwidth"} ![Silver Stepsize Schedule, for different condition numbers $\kappa = 4,16,64,256$ -- only the first 64 stepsizes are shown. Notice the recursive, fractal behavior and the approximate periodicity with period of size $n^* = \kappa^{\log_{\rho} 2}$; details in §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}. 
Also note the different scales on the vertical axis, since the stepsizes are unnormalized, vs. the normalized stepsizes in Figure [12](#fig:normalizedstepsizes){reference-type="ref" reference="fig:normalizedstepsizes"}.](figs/stepsize-4.pdf "fig:"){#fig:stepsizes width=".24\\columnwidth"} Formalizing this result requires restricting to a function class with controlled curvature. For concreteness, in this first paper we focus on the well-studied setting of strongly convex and smooth $f$, and we measure progress via distance to the optimum $x^*$. While smoothness is classically known to be required for acceleration [@nesterov-survey], the other choices and assumptions in the theorem statement are not essential: strong convexity can be relaxed to convexity, and the progress measure can be replaced with other standard desiderata; see the discussion in §[1.1.4](#sssec:intro:dis:setting){reference-type="ref" reference="sssec:intro:dis:setting"}. Below, let $\rho := 1 + \sqrt{2}$ denote the silver ratio, and assume throughout that $f$ is $\kappa$-conditioned, i.e., $1$-strongly convex and $\kappa$-smooth[^3]---this is without loss of generality after rescaling. **Theorem 1**. *For any horizon $n \in \mathbb{N}$ that is a power of $2$, any dimension $d$, any $\kappa$-conditioned function $f : \mathbb{R}^d \to \mathbb{R}$, and any initialization $x_0$, $$\begin{aligned} \|x_n - x^*\|^2 \leqslant\tau_n \|x_0 - x^*\|^2\,, \label{eq:thm-main:cert} \end{aligned}$$ where $x^*$ denotes the unique minimizer of $f$, $x_n$ denotes the output of $n$ steps of GD using the Silver Stepsize Schedule (defined in §[3](#sec:construction){reference-type="ref" reference="sec:construction"}), and $\tau_n$ denotes the $n$-step Silver Convergence Rate (defined in §[3](#sec:construction){reference-type="ref" reference="sec:construction"}). 
Moreover, $\tau_n$ undergoes the following phase transition at $n^* = \Theta( \kappa^{\log_{\rho} 2} )$:* - *[Acceleration regime.]{.ul} For $n \leqslant n^*$, $$\tau_n = \exp\left( - \Theta\left( \frac{n^{\log_2 \rho}}{\kappa} \right) \right)\,.$$* - *[Saturation regime.]{.ul} For $n > n^*$, $$\tau_n = \exp\left( - \Theta\left( \frac{n}{n^*} \right) \right)\,.$$* *In particular, in order to achieve a final error $\|x_n - x^*\|^2 \leqslant\varepsilon$, it suffices to run GD using the Silver Stepsize Schedule for $$\begin{aligned} n = \Theta\left( \kappa^{\log_{\rho} 2} \mathop{\mathrm{\log\tfrac{1}{\varepsilon}}}\right) \approx \Theta\left( \kappa^{0.7864} \mathop{\mathrm{\log\tfrac{1}{\varepsilon}}}\right) \;\; \textrm{iterations}\,. \end{aligned}$$* ![Log of the average per-step rate, aka $\tfrac{1}{n} \log \tau_n$, for varying condition numbers $\kappa$. The initial value is the unaccelerated rate $(\tfrac{\kappa -1}{\kappa + 1})^2$. Notice the rate saturation phenomenon that occurs at $n=n^* \asymp \kappa^{\log_{\rho} 2}$. ](figs/RateSaturation.pdf){#fig:ratesaturation height="6cm"} ### Discussion of Silver Convergence Rate {#sssec:intro:dis:rate} #### Partial acceleration. {#partial-acceleration. .unnumbered} Our rate $\Theta(\kappa^{\log_{\rho} 2} \log 1/\varepsilon)$ lies between the textbook rate $\Theta(\kappa \log1/\varepsilon)$ for GD and the accelerated rate $\Theta(\sqrt{\kappa} \log1/\varepsilon)$ due to Nesterov in 1983 [@nesterov-agd]. We emphasize that before the thesis [@altschuler2018greed] that this paper is based upon, it was unknown if any improvement over the unaccelerated rate---even a constant factor---was achievable by any stepsize schedule. Our convergence rate is faster than all known GD stepsize schedules for convex optimization, including constant stepsize schedules, Polyak-type schedules, Barzilai-Borwein-type schedules, Goldstein-Armijo-type schedules, exact line search, etc. #### Phase transition. {#phase-transition. 
.unnumbered} A distinctive feature of the Silver Convergence Rate $\tau_n$ is that it undergoes a phase transition: $\tau_n$ switches from super-exponential to exponential in the horizon $n$. This transition occurs at $n^* \asymp \kappa^{\log_{\rho} 2}$, which is the number of iterations required to make the error decrease by a constant factor. See Figure [5](#fig:ratesaturation){reference-type="ref" reference="fig:ratesaturation"}. The reason for these two regimes is that beyond $n^*$, the new stepsizes converge quadratically fast to their stationary value; details in §[4](#sec:rate){reference-type="ref" reference="sec:rate"}. - [Acceleration regime.]{.ul} This regime encapsulates the advantage of multi-step descent: the super-exponentiality of the $n$-step bound makes it better than composing the $1$-step bound $n$ times. This super-exponential regime interpolates the $\kappa$ dependence between the unaccelerated rate (achieved at $n = 1$) and our partially accelerated rate (achieved at $n \gtrsim n^*$). - [Saturation regime.]{.ul} Here, the benefit of multi-step descent becomes negligible: $\tau_{2n} \approx \tau_n^2$ for $n \geqslant n^*$.[^4] Briefly, this rate saturation occurs because the Silver Stepsize Schedule is approximately periodic with period $n^*$, see §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}. #### Dimension independence. {#dimension-independence. .unnumbered} The convergence rate in Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is independent of the dimension $d$ and thus can be extended to infinite-dimensional Hilbert space. This is because our analysis only uses consistency conditions for the GD trajectory to arise from a convex function---and these consistency conditions are dimension-independent [@rockafellar; @pesto]. This is in common with classical analyses of GD and Nesterov-style acceleration. #### Optimality. {#optimality. 
.unnumbered} We conjecture the Silver Stepsize Schedule has the fastest convergence rate among all possible choices of GD stepsize schedules. We prove optimality for the $n=2$ case of [@altschuler2018greed Chapter 8] in §[2](#sec:hedging){reference-type="ref" reference="sec:hedging"}; this proof readily extends to small $n$, and we will address the question of optimality for all $n$ in a shortly forthcoming paper. ### Discussion of Silver Stepsize Schedule {#sssec:intro:dis:schedule} #### Recursive construction. {#recursive-construction. .unnumbered} The Silver Stepsize Schedule is defined recursively in a fully explicit way. We briefly overview the construction; see §[3](#sec:construction){reference-type="ref" reference="sec:construction"} for full details. The $1$-step schedule $h^{(1)}$ is initialized to the constant $\bar{\alpha} = 2/(1 + 1/\kappa)$ that is classically known to be optimal for $1$-step descent. We then recursively define the $2n$-step schedule $h^{(2n)}$ as $$\begin{aligned} h^{(2n)} := [ \tilde{h}^{(n)}, a_{2n}, \tilde{h}^{(n)}, b_{2n}]\,,\end{aligned}$$ where $\tilde{h}^{(n)}$ is the $n$-step schedule $h^{(n)}$ with its final stepsize $b_n$ removed, and $a_{2n}$ and $b_{2n}$ are obtained by "splitting" this removed stepsize $b_n$. Modulo a certain normalizing transformation, this splitting produces $a_{2n} < b_n < b_{2n}$ as the roots to a certain quadratic equation in $b_n$. See §[4](#sec:rate){reference-type="ref" reference="sec:rate"} for details and closed-form expressions. #### Finite-horizon schedule. {#finite-horizon-schedule. .unnumbered} This recursive construction produces (normalized) stepsize schedules that follow the pattern $$\begin{aligned} h^{(1)} &= [b_1] \\ h^{(2)} &= [a_2,b_2] \\ h^{(4)} &= [a_2,a_4,a_2,b_4] \\ h^{(8)} &= [a_2,a_4,a_2,a_8,a_2,a_4,a_2,b_8]\end{aligned}$$ See Figure [4](#fig:stepsizes){reference-type="ref" reference="fig:stepsizes"} for a visualization. #### Infinite-horizon schedule. 
{#infinite-horizon-schedule. .unnumbered} This schedule simplifies in the limit $n \to \infty$: the $i$-th normalized stepsize is given by $a_{B(i)}$, where $B(i)$ denotes the smallest power of $2$ in the binary expansion of $i$. Note that no entries of the $b$ sequence appear. #### Fractal order. {#fractal-order. .unnumbered} For the special case of quadratic optimization, the order of the stepsizes is well-known to be irrelevant for the convergence rate. In contrast, in the general setting of convex optimization, the order of the stepsizes provably does matter [@altschuler2018greed Chapter 8]. For example, it can be shown that the convergence rate in Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} becomes greater than $1$ (i.e., not even contractive) if one reverses the order of the $2$-step Silver Stepsize Schedule. The Silver Stepsize Schedule generates a fractal; see Figure [4](#fig:stepsizes){reference-type="ref" reference="fig:stepsizes"}. This is due to our recursive construction, and is directly evident from the aforementioned fact that the $i$-th stepsize depends on the sparsity pattern of the binary expansion of $i$. This fractal structure aligns with the numerical observations in [@gupta22; @grimmer23], and is in stark contrast with all classical stepsize schedules which, if time-varying, decay monotonically in the iteration number $i$, e.g., as $1/i$. #### Approximate periodicity. {#approximate-periodicity. .unnumbered} The Silver Stepsize Schedule is not periodic as it continually changes. However, it is approximately periodic with period $n^* \asymp \kappa^{\log_{\rho} 2}$, see Figure [4](#fig:stepsizes){reference-type="ref" reference="fig:stepsizes"}. This is another facet of the rate saturation phenomenon discussed in §[1.1.2](#sssec:intro:dis:rate){reference-type="ref" reference="sssec:intro:dis:rate"}. See §[4](#sec:rate){reference-type="ref" reference="sec:rate"} for details. #### Dependence on horizon.
{#dependence-on-horizon. .unnumbered} Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is stated for horizons $n$ that are powers of $2$. For arbitrary integers $n$, one can simply run the Silver Stepsize Schedule for the largest power of $2$ below $n$, or better, run for all powers of $2$ in the binary expansion of $n$. This affects the average per-step rate by only a small constant factor. We moreover conjecture that simply using $n$ steps of the infinite-horizon Silver Stepsize Schedule leads to the same convergence rate modulo a lower-order term. This seems reasonable since only logarithmically many stepsizes are changed, but we have not attempted to prove this. Orthogonally, if the horizon is not set in advance, then one can, e.g., do a "doubling" trick by exploiting the fact that the first $2^{i}-1$ stepsizes are identical for all $n \geqslant 2^i$. Specifically, for each $i$, decide on iteration $2^{i} - 1$ whether to stop at $n=2^i$ iterations, or repeat roughly the same amount of effort and go to $2^{i+1}-1$ iterations. ### Discussion of problem setting {#sssec:intro:dis:setting} #### Progress measure. {#progress-measure. .unnumbered} Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"} uses distance as the progress measure. This can be replaced by other standard progress measures such as function suboptimality or gradient norm, in the initial or final condition or both, since these measures are equivalent for $\kappa$-conditioned functions. This black-box replacement affects the rate by only a lower-order term. Moreover, this equivalence factor can be avoided by re-doing our analysis in a conceptually identical way for the desired progress measures (possibly also with minor changes to the stepsize schedule; e.g., for gradient norm contraction, it appears that one should reverse the order [@altschuler2018greed Chapter 8]). #### Smoothness. {#smoothness.
.unnumbered} It is well-known that smoothness is required for acceleration: otherwise, GD cannot be accelerated even with momentum or other internal dynamics [@nesterov-survey Chapter 3]. #### Convexity. {#convexity. .unnumbered} Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is stated for the strongly convex setting, but this can be relaxed to the non-strongly convex setting. Indeed, all our core conceptual ideas extend: the advantage of time-varying, non-monotonic stepsizes, proving this advantage via multi-step descent rather than iterating the greedy $1$-step bound, certifying multi-step descent via recursive gluing, etc. The adaptation requires only minor technical modifications to the stepsize schedule, certificate recursion, and progress measure. These details will appear in a shortly forthcoming paper. We mention that by standard black-box reductions (see e.g., [@allen2016optimal] or [@bubeck-book page 285]), Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} immediately implies accelerated rates for the (non-strongly) convex setting by running GD with the Silver Stepsize Schedule on a quadratically regularized objective, i.e., $f(\cdot) + \delta\|\cdot - y\|^2$ for appropriate choices of $\delta$ and $y$. This gives an analogous partially accelerated rate of $\varepsilon^{-\log_{\rho} 2} \approx \varepsilon^{-0.7864}$ iterations to obtain $\varepsilon$ function suboptimality. This is intermediate between the textbook unaccelerated rate $\Theta(\varepsilon^{-1})$ and Nesterov's accelerated rate $\Theta(\varepsilon^{-1/2})$ from 1983 [@nesterov-agd]. This strongly suggests that acceleration in the (non-strongly) convex case surpasses the $\Theta(1/(T \log T))$ rate conjectured in [@grimmer23]. The aforementioned forthcoming paper will address this via a direct analysis that bypasses regularization.
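Before moving on, the recursive construction from §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"} can be sketched programmatically. The following minimal sketch (the function name `silver_pattern` is ours, not from the text) generates only the symbolic *pattern* of stepsize labels; the numerical values $a_n, b_n$ require the quadratic-splitting step described there.

```python
def silver_pattern(n):
    """Symbolic label pattern of the n-step schedule h^(n), for n a power of 2.

    Follows the recursion h^(2n) = [h~^(n), a_{2n}, h~^(n), b_{2n}], where
    h~^(n) is h^(n) with its final entry b_n removed.
    """
    assert n >= 1 and n & (n - 1) == 0, "n must be a power of 2"
    if n == 1:
        return ["b1"]
    # Recurse on the half-length schedule and drop its final b-entry.
    half = silver_pattern(n // 2)[:-1]
    return half + [f"a{n}"] + half + [f"b{n}"]


print(silver_pattern(8))  # ['a2', 'a4', 'a2', 'a8', 'a2', 'a4', 'a2', 'b8']
```

In the limit $n \to \infty$, only the $a$-labels survive, reproducing the infinite-horizon pattern in which the $i$-th label is determined by the binary expansion of $i$.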
## Related work {#ssec:intro:related} ### The special case of quadratic optimization {#sssec:intro:quadratic} #### Three equivalent approaches to acceleration. {#three-equivalent-approaches-to-acceleration. .unnumbered} For quadratic optimization, the GD map becomes linear, which enables three equivalent approaches to acceleration. One approach, taken by Young in 1953, is to choose non-constant stepsizes that are the inverses of the roots of Chebyshev polynomials [@young53]. A second approach is to use momentum, achieved for example by Hestenes and Stiefel's Conjugate Gradient Method in 1952 [@hestenes1952methods] and Polyak's Heavy Ball Method in 1964 [@polyak1964some]. This equivalence arises because momentum amounts to a three-term recurrence, which, if the coefficients are chosen appropriately, generates the same sequence of Chebyshev polynomials; see e.g. [@Varga Ch. 5]. A third approach is to use the limiting distribution of the roots of the Chebyshev polynomials: the arcsine distribution [@kalousek; @pronzato11; @pronzato13; @altschuler2018greed]. This equivalence is due to the fact that the order of stepsizes does not affect convergence in the quadratic case, thus as the horizon $n \to \infty$, one might as well draw stepsizes i.i.d. from the equilibrium measure. It is important to emphasize that the elegant equivalences between these three approaches---varying stepsizes, momentum, and equilibrium measures---break down beyond the special case of quadratic optimization. #### Desiderata beyond fast convergence. {#desiderata-beyond-fast-convergence. .unnumbered} The above discussion concerns only the convergence rate, not stability. In settings with noisy gradients or inexact arithmetic, the order of the stepsizes may significantly affect the convergence rate of GD, even for quadratic optimization. This question of stability to roundoff errors was already raised in Young's original paper [@young53].
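To make the first (Chebyshev stepsize) approach concrete, here is a hedged numerical sketch. The parameter values $m=1/4$, $M=1$ are our illustrative choice, and the helper names `chebyshev_stepsizes` and `worst_case_rate` are ours: the stepsizes are the inverse roots of the degree-$n$ Chebyshev polynomial rescaled to $[m,M]$, and a grid search over the spectrum confirms that for $n=2$ they beat the constant schedule.

```python
import math

def chebyshev_stepsizes(m, M, n):
    # Inverse roots of the degree-n Chebyshev polynomial rescaled to [m, M].
    roots = [(M + m) / 2 + (M - m) / 2 * math.cos((2 * k - 1) * math.pi / (2 * n))
             for k in range(1, n + 1)]
    return [1.0 / r for r in roots]

def worst_case_rate(steps, m, M, grid=200000):
    # sup over eigenvalues lam in [m, M] of |prod_i (1 - steps[i] * lam)|,
    # i.e., the worst-case n-step contraction over this class of quadratics.
    best = 0.0
    for t in range(grid + 1):
        lam = m + (M - m) * t / grid
        val = 1.0
        for a in steps:
            val *= 1 - a * lam
        best = max(best, abs(val))
    return best

m, M = 0.25, 1.0
cheb = worst_case_rate(chebyshev_stepsizes(m, M, 2), m, M)
const = worst_case_rate([2 / (M + m)] * 2, m, M)
print(cheb, const)  # the Chebyshev schedule is strictly faster
```

For these values the constant schedule contracts by $(\tfrac{M-m}{M+m})^2 = 0.36$ over two steps, while the Chebyshev schedule achieves $\approx 0.2195$.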
In such settings, it is desirable to find permutations of the Chebyshev roots for which GD trajectories are maximally stable. An effective approach is to interleave the roots of Chebyshev polynomials of increasing degree [@LebedevFinogenov71]. This leads to a fractal pattern, superficially similar to our proposed stepsize schedule; see [@AgarwalGoelZhang] for a recent discussion and additional results. However, we emphasize that this fractal is not only fundamentally different but also arises due to entirely different considerations---stability rather than fast convergence. #### Structured quadratics. {#structured-quadratics. .unnumbered} If the quadratic function's Hessian has additional spectral structure, then improved results are possible. This is because the different viewpoints discussed above are classically known to extend to this situation via potential theory; see the excellent survey [@6steps] and the references within. This enables further refinements of the methods described above for structured quadratics and sometimes also perturbations away from quadratics; see e.g., [@oymak2021provable; @goujaud2022super]. ### The general case of convex optimization {#sssec:intro:convex} #### Unaccelerated GD. {#unaccelerated-gd. .unnumbered} For constant stepsize, the optimal convergence rate for GD is $\Theta(\kappa \log 1/\varepsilon)$ in the strongly convex setting, and $\Theta(1/\varepsilon)$ in the convex setting; see, e.g., the textbooks [@bubeck-book; @boyd; @nocedal; @polyakbook; @hazan2016introduction; @nesterov-survey]. This is often called the unaccelerated rate for GD. Many alternative stepsize schedules have been proposed in both theory and practice. We highlight several well-studied schedules. One family of strategies adaptively chooses stepsizes by minimizing the function value over the line spanned by the gradient.
This minimization can be performed exactly via line search [@nocedal; @boyd; @polyakbook; @de2017worst], or approximately via Goldstein-Armijo-type schedules [@nesterov-survey]. Alternatively, it can be done by minimizing the estimated distance to the optimum via Polyak-type schedules [@polyakbook]. Another family is Barzilai-Borwein-type schedules, which are quasi-Newton methods that approximate the Hessian using the past step's change in iterate and gradient [@barzilai1988two]. None of these strategies are known to accelerate beyond the case of quadratics. #### Accelerated GD via internal state. {#accelerated-gd-via-internal-state. .unnumbered} The conventional approach for achieving faster convergence is to consider variations of GD that use auxiliary sequences of iterates and/or different update directions than the gradient. This is of course more powerful than just changing the stepsizes, and can be interpreted from a control theory perspective as adding internal dynamics to the algorithm. Accelerated rates were first shown in Nesterov's seminal work in 1983 [@nesterov-agd], and since then, many other accelerated algorithms and analyses have been proposed [@geometric-gd; @quad-averaging; @tmm; @diakonikolas2017accelerated; @taylor23optimalub; @cohen2020relative; @linear-coupling; @beck2009fast; @tseng2008accelerated], as well as fruitful interpretations via continuous-time analysis [@shi2021understanding; @su2016differential; @shi2019acceleration; @WWC16; @WRJ16; @moucer2022systematic; @diakonikolas2021generalized]. These accelerated algorithms require only $\Theta(\sqrt{\kappa} \log1/\varepsilon)$ iterations, or $\Theta(1/\sqrt{\varepsilon})$ in the convex case, which is known to be minimax-optimal up to a constant for any algorithm that uses only gradient information [@nem-yudin]. Much work has recently sharpened this constant, culminating in exactly matching upper and lower bounds [@taylor23optimalub; @taylor23optimallb].
This recent line of work exploits the idea that the worst-case convergence of optimization algorithms can be numerically computed via semidefinite programming (SDP) [@KF16; @DT14; @pesto; @d2021acceleration; @taylor2017convex; @tmm; @LRP16; @DT18; @KF17]. This has also enabled using computer-automated SDP-analyses to investigate richer classes of algorithms, such as robust versions of accelerated methods [@CHVSL17], proximal algorithms [@barre2023principled], operator splitting [@ryu2020operator], line search [@de2017worst], biased stochastic gradient methods [@HSL17], inexact Newton's method [@de2020worst], among many others. This area of research is extremely active, and we refer the reader to the excellent recent survey [@d2021acceleration] for a comprehensive set of references and a detailed historical account. #### Accelerated GD via dynamic stepsizes. {#accelerated-gd-via-dynamic-stepsizes. .unnumbered} Although many time-varying stepsize schedules have been considered for GD, no convergence analyses improved over the textbook unaccelerated rate beyond the quadratic case. In 2018, Altschuler's MS thesis [@altschuler2018greed] considered time-varying stepsize schedules in several settings, all through the unifying lens of hedging and multi-step descent. In Chapter 8 of the thesis, the PESTO framework was used to show for the first time the advantage of using time-varying stepsize schedules for GD beyond the quadratic setting. Explicit solutions were given for $n=2,3$ in the strongly convex setting. This showed that a constant-factor improvement over the textbook unaccelerated GD rate was indeed possible. A key difficulty in extending this to larger horizons $n$ is that the search for optimal stepsizes is non-convex.
In 2022, @gupta22 combined Branch & Bound techniques with the PESTO SDP to develop algorithms that perform this search numerically, and as an example used this to compute good approximate schedules in the convex setting for larger values of $n$ up to $50$. @grimmer23 very recently developed a technique to round these Branch & Bound solutions to exact rational certificates. This allowed him to extend these approximate stepsize schedules up to $n=127$ in order to get a larger constant-factor improvement, and led him to conjecture that dynamic stepsizes might lead to an accelerated rate of $O(1/(T\log T))$. By recursively applying the $2$-step solution in [@altschuler2018greed], the present paper rigorously proves acceleration for all horizons $n$, and in particular obtains the first asymptotic improvements over the textbook unaccelerated GD rate---not just by a constant factor. ## Organization {#ssec:intro:org} In §[2](#sec:hedging){reference-type="ref" reference="sec:hedging"}, we provide an overview of the core conceptual ideas via the key case $n=2$. §[3](#sec:construction){reference-type="ref" reference="sec:construction"} formally defines the Silver Stepsize Schedule and the Silver Convergence Rate $\tau_n$, §[4](#sec:rate){reference-type="ref" reference="sec:rate"} establishes the claimed properties of $\tau_n$, and §[5](#sec:cert){reference-type="ref" reference="sec:cert"} proves that $\tau_n$ is a valid bound on the convergence rate of the Silver Stepsize Schedule. §[6](#sec:future){reference-type="ref" reference="sec:future"} discusses future directions. Some technical details are deferred to the Appendix. # Conceptual overview: two-step case ($n=2$) {#sec:hedging} This section provides a complete analysis for the minimal non-trivial horizon length: $n=2$. (No hedging can occur if $n=1$.)
Our goal here is to provide further intuition for the core concepts of hedging and multi-step descent, and explain concretely how these manifest in the design and analysis of the Silver Stepsize Schedule. Indeed, the $n=2$ case captures most of the core intuition and ideas, and the result for general $n$ is essentially just an amped-up version thereof. These results first appeared in Altschuler's thesis [@altschuler2018greed Chapter 8]; we refer the reader there for a lengthier treatment. For simplicity, in this section we denote the stepsizes by $\alpha$ and $\beta$, so that the algorithm is $$x_1 = x_0 - \alpha \nabla f(x_0), \qquad x_2 = x_1 - \beta \nabla f(x_1)\,,$$ and the worst-case convergence rate over a function class $\mathcal{F}$ is $$R(\alpha,\beta; \mathcal{F}) := \sup_{f \in \mathcal{F},\; x_0 \neq x^*} \frac{\|x_2 - x^*\|}{\|x_0 - x^*\|}\,.$$ The question of optimal stepsizes is therefore the minimax problem $$\begin{aligned} \min_{\alpha,\beta} R(\alpha,\beta; \mathcal{F})\,. \label{eq:plausibility:minimax}\end{aligned}$$ To motivate why non-constant stepsizes might be helpful, in §[2.1](#ssec:hedging:quad){reference-type="ref" reference="ssec:hedging:quad"} we first briefly recall the classical result of [@young53] which solves this for the case of quadratic $\mathcal{F}$. Then in §[2.2](#ssec:hedging:convex){reference-type="ref" reference="ssec:hedging:convex"}, we solve this problem for convex $\mathcal{F}$ by presenting the $2$-step Silver Stepsize Schedule from [@altschuler2018greed Theorem 8.11], proving its convergence rate via multi-step descent, and proving its optimality via hedging. ![Contour plots of worst-case rates, as a function of the two stepsizes $\alpha$ and $\beta$, for $m=1/4$ and $M=1$. The marked points indicate the global minima.
Notice the asymmetry in the convex case (right), due to the non-commutativity of the GD map.](figs/2Step-Quadratic.png "fig:"){#fig:2step width="46%"} ![Contour plots of worst-case rates, as a function of the two stepsizes $\alpha$ and $\beta$, for $m=1/4$ and $M=1$. The marked points indicate the global minima. Notice the asymmetry in the convex case (right), due to the non-commutativity of the GD map.](figs/2Step-Convex.png "fig:"){#fig:2step width="46%"} ## Optimal stepsizes for quadratic optimization {#ssec:hedging:quad} #### Young's argument from 1953. {#youngs-argument-from-1953. .unnumbered} What is the optimal stepsize schedule $(\alpha,\beta)$ for the class $\mathcal{F}$ of quadratic functions $f$ that are $m$-strongly convex and $M$-smooth? Without loss of generality after translating, $f(x) = \frac{1}{2}x^T Hx$ where $mI \preceq H \preceq MI$. By definition of GD, $x_1 = (1 - \alpha H)x_0$ and $x_2 = (1 - \beta H)x_1$, thus $$x_2 = p(H)x_0\,,\qquad \text{ where } \qquad p(H) = (1 - \alpha H)(1 - \beta H).$$ Observe that as one ranges over all possible choices of the stepsizes $(\alpha,\beta)$, the polynomial $p$ ranges over the set $\mathcal{P}$ of all degree $2$ polynomials satisfying the normalizing condition $p(0) = 1$. Therefore finding optimal stepsizes $(\alpha,\beta)$ is equivalent to finding an optimal polynomial $p \in \mathcal{P}$. What is the optimal polynomial? By the above display and properties of the spectral norm, $$\begin{aligned} R(\alpha,\beta;\mathcal{F}) = \sup_{mI \preceq H \preceq MI, \, x_0 \neq 0} \frac{\|p(H)x_0\|}{\|x_0\|} = \sup_{mI \preceq H \preceq MI} \|p(H)\| = \sup_{m \leqslant\lambda \leqslant M} |p(\lambda)|\,. \label{eq:plausibility:quad-pf}\end{aligned}$$ Thus the optimal polynomial $p \in \mathcal{P}$ is the one with minimal $L_{\infty}$ norm over the interval $[m,M]$. It is classically known that this is the (translated and scaled) Chebyshev polynomial of the first kind, see e.g., [@Rivlin].
Thus the optimal stepsizes $(\alpha^*, \beta^*)$ are the inverses of the roots $\tfrac{M+m}{2} \pm \tfrac{M-m}{2\sqrt{2}}$ of the Chebyshev polynomial, in either order. These are the symmetric marked points in Figure [7](#fig:2step){reference-type="ref" reference="fig:2step"}, left. Crucially, observe that these two stepsizes are different---hence the advantage of non-constant schedules in the quadratic setting. We now interpret this phenomenon in two ways that are essential to our intuition for the convex setting. This discussion is based on [@altschuler2018greed Chapters 1 and 2]. #### Interpretation via hedging. {#interpretation-via-hedging. .unnumbered} Why is $\bar{\alpha} := \tfrac{2}{M+m}$ suboptimal for $2$ steps of GD when it is optimal for $1$? Recall that it is optimal for $1$ step because GD overshoots when using a longer step $\alpha > \bar{\alpha}$ on the sharp function $f(x) = \tfrac{M}{2}x^2$, and undershoots when using a shorter step $\alpha < \bar{\alpha}$ on the shallow function $f(x) = \tfrac{m}{2}x^2$. The algorithmic opportunity is that these worst-case functions are different for short-step GD and long-step GD. This is why using a short step and a long step---each individually suboptimal---can lead to faster overall convergence than using $\bar{\alpha}$ twice. We refer to this misalignment of worst-case functions as *hedging*. See Figure [7](#fig:2step){reference-type="ref" reference="fig:2step"}, left. #### The necessity of multi-step descent. {#the-necessity-of-multi-step-descent. .unnumbered} There is a dual interpretation of hedging via multi-step descent. By [\[eq:plausibility:quad-pf\]](#eq:plausibility:quad-pf){reference-type="eqref" reference="eq:plausibility:quad-pf"}, the worst-case rate for $2$ steps is $$\begin{aligned} R(\alpha,\beta;\mathcal{F}) = \sup_{m \leqslant\lambda \leqslant M} |(1 - \alpha \lambda)(1 - \beta \lambda)|\,.
\label{eq:plausibility:quad-2step}\end{aligned}$$ Contrast this with the greedy analysis, which bounds the worst-case rate after $2$ iterations by the product of the worst-case rates for $1$ step with $\alpha$ or $\beta$, namely $$\begin{aligned} R(\alpha;\mathcal{F}) \cdot R(\beta;\mathcal{F}) = \left( \sup_{m \leqslant\lambda_{\alpha} \leqslant M} |1 - \alpha \lambda_{\alpha}| \right) \cdot \left( \sup_{m \leqslant\lambda_{\beta}\leqslant M} |1 - \beta \lambda_{\beta}| \right) \,. \label{eq:plausibility:quad-1step}\end{aligned}$$ Observe that the greedy analysis [\[eq:plausibility:quad-1step\]](#eq:plausibility:quad-1step){reference-type="eqref" reference="eq:plausibility:quad-1step"} is so shortsighted that it not only leads to worse bounds for any given stepsize schedule, but moreover leads to the wrong prescription of stepsizes. Indeed, optimizing this convergence rate [\[eq:plausibility:quad-1step\]](#eq:plausibility:quad-1step){reference-type="eqref" reference="eq:plausibility:quad-1step"} over $(\alpha,\beta)$ leads to $\alpha = \beta = \tfrac{2}{M+m}$, which is the constant schedule. This necessity of multi-step descent explains why the mainstream approach for convex optimization is constant stepsizes: previous approaches were unable to analyze multi-step descent. (This is only tractable in the quadratic setting because the gradient operator is linear, see [\[eq:plausibility:quad-pf\]](#eq:plausibility:quad-pf){reference-type="eqref" reference="eq:plausibility:quad-pf"}.) ## Optimal stepsizes for convex optimization {#ssec:hedging:convex} We now turn to the convex setting. Let $\mathcal{F}$ denote the set of $m$-strongly convex and $M$-smooth functions. Young's Chebyshev schedule is then provably bad[^5]. What are the optimal $2$ stepsizes?
Certainly the above discussion of hedging motivates using non-constant stepsizes, but proving this requires multi-step descent, and that has been the longstanding stumbling block preventing progress beyond the quadratic setting. ### Silver Stepsize Schedule for $n=2$ We show below that the $2$-step convergence rate $R(\alpha,\beta; \mathcal{F})$ is minimized by the stepsizes $(\alpha,\beta)$ that are defined by the system of equations $$(M\alpha - 1)(M\beta - 1) = (1 -\alpha m)(1-\beta m) = \frac{(1-m\alpha)(M\beta - 1)}{1+\alpha(M-m)}\, \label{eq:twostepsizes}$$ and moreover the optimal $2$-step convergence rate $R^*$ is given by this equalized value. **Remark 2**. *The equations [\[eq:twostepsizes\]](#eq:twostepsizes){reference-type="eqref" reference="eq:twostepsizes"} can be solved explicitly to give the alternative expressions $$\alpha^* = \frac{2}{m+S}, \quad \beta^* = \frac{2}{2M+m-S}, \quad R^* = \frac{S - M}{2 m + S - M},$$ where $S = \sqrt{M^2 + (M-m)^2}$. These are the formulas given in [@altschuler2018greed Thm. 8.10]; they constitute the $n=2$ case of the Silver Stepsize Schedule and (square-rooted) Silver Convergence Rate defined in §[4](#sec:rate){reference-type="ref" reference="sec:rate"}.* This $n=2$ solution showcases the key phenomena that also occur for larger $n$: - **Provable advantage of dynamic stepsizes.** Since $R^* < (\tfrac{M-m}{M+m})^2$, this proves that it is possible to improve over standard GD by dynamically changing the stepsize. (Recall that $\tfrac{M-m}{M+m}$ is the textbook unaccelerated rate for 1 step of GD.) This mirrors how, for quadratics, the optimal 2-step rate [\[eq:plausibility:quad-2step\]](#eq:plausibility:quad-2step){reference-type="eqref" reference="eq:plausibility:quad-2step"} is better than the squared optimal $1$-step rate [\[eq:plausibility:quad-1step\]](#eq:plausibility:quad-1step){reference-type="eqref" reference="eq:plausibility:quad-1step"}.
- **Stepsize splitting.** Since $\alpha^* < \frac{2}{M+m} < \beta^*$, the optimal stepsize $\frac{2}{M+m}$ for $n=1$ splits into a short step $\alpha^*$ and long step $\beta^*$. For general $n$, the Silver Stepsize Schedule mirrors this splitting at every scale: it splits the largest stepsize into a shorter and longer step. - **Unique, asymmetric solution.** Unlike the quadratic case, here the stepsize order is essential for fast convergence: the splitting requires the small stepsize to be first.[^6] As a consequence, here the optimal stepsize schedule is unique. See Figure [7](#fig:2step){reference-type="ref" reference="fig:2step"}, right. - **Milder splitting.** Even ignoring order, the stepsize values differ from the quadratic case. This occurs because the class of convex functions is richer than the class of quadratics, thus the supremum defining the worst-case rate $R(\alpha,\beta; \mathcal{F})$ is over more functions, thus it is harder to misalign the worst-cases by hedging. The result is less aggressive hedging and partial acceleration: the improvement over the $1$-step rate is smaller than in the quadratic case. We now turn to proving that Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} holds in the case $n=2$, and moreover that the proposed Silver Stepsize Schedule is optimal among all $2$-step schedules. **Theorem 3** (Optimal $2$-step schedule for strongly convex optimization, Theorem 8.11 of [@altschuler2018greed]). *Consider any strong-convexity and smoothness parameters $0 < m \leqslant M < \infty$. 
The unique optimal $2$-step schedule $(\alpha^*,\beta^*) \in \mathop{\mathrm{argmin}}_{\alpha,\beta} R(\alpha,\beta; \mathcal{F})$ and the corresponding optimal $2$-step rate $R^*$ are as stated in Remark [Remark 2](#rem:2step-values){reference-type="ref" reference="rem:2step-values"}.* The proof has two parts: an upper bound on $R(\alpha^*,\beta^*; \mathcal{F})$ that proves that our $2$-step schedule achieves the claimed rate, and a matching lower bound that proves optimality (and in fact uniqueness too). We do this below via multi-step descent and hedging, respectively. ### Upper bound: rate certification via multi-step descent As discussed above, in order to prove any benefit of deviating from the constant stepsizes, we must directly analyze the cumulative multi-step descent of all iterations. This requires capturing how different iterations affect other iterations' progress. We do this by exploiting long-range consistency conditions between the information that GD sees along its trajectory. Our starting point is a known result on convex interpolability, recalled next. There is a set of consistency conditions that any $f \in \mathcal{F}$ must satisfy at any set of points $\{x_i\}_{i \in \mathcal{I}}$: the co-coercivity $$\begin{aligned} Q(x,y) := 2(M-m)(f(x) - f(y)) + 2\langle M \nabla f(y) - m \nabla f(x), y-x \rangle - \|\nabla f(x) - \nabla f(y)\|^2 - Mm\|x-y\|^2 \nonumber\end{aligned}$$ must be non-negative for every pair of points $x, y \in \{x_i\}_{i \in \mathcal{I}}$. Of particular interest to us is the converse: there are consistency conditions on a set of data $\{(x_i, g_i, f_i)\}_{i \in \mathcal{I}}$ that ensure it is $\mathcal{F}$-interpolable, i.e., there exists $f \in \mathcal{F}$ satisfying $g_i = \nabla f(x_i)$ and $f_i = f(x_i)$ for each $i \in \mathcal{I}$.
Specifically, a celebrated line of work on convex interpolability [@rockafellar] culminated in a beautiful theorem of [@pesto] which states that $\{(x_i, g_i, f_i)\}_{i \in \mathcal{I}}$ is $\mathcal{F}$-interpolable if and only if $$\begin{aligned} Q_{ij} := 2(M-m)(f_i - f_j) + 2\langle M g_j - m g_i, x_j - x_i \rangle - \|g_i - g_j\|^2 - Mm\|x_i - x_j\|^2 \label{eq:def:Qij}\end{aligned}$$ is non-negative for every pair of indices $i, j \in \mathcal{I}$. We apply these conditions along the trajectory of GD. Specifically, we take $\mathcal{I}:= \{0, 1, \dots, n, *\}$ to index the GD iterates and the optimum, and let $\{(x_i,g_i,f_i)\}_{i \in \mathcal{I}}$ denote the first-order data[^7]. The upshot is that this theorem enables replacing the supremum over functions $f \in \mathcal{F}$ by the data $\{(x_i,g_i,f_i)\}_{i \in \mathcal{I}}$ in the definition of the worst-case rate $R(\alpha,\beta; \mathcal{F})$. Note that this replacement is lossless since the interpolability conditions in the theorem are necessary and sufficient. From the perspective of hedging, these co-coercivity conditions $\{Q_{ij} \geqslant 0\}_{i \neq j \in \mathcal{I}}$ generate all possible long-range consistency constraints on the objective function given the GD trajectory. From the perspective of multi-step descent, they generate all possible valid inequalities with which one can prove convergence rates for GD. Let us explain how we use this in the case $n=2$. *Proof of rate upper bound for Theorem [Theorem 3](#thm:2step:convex){reference-type="ref" reference="thm:2step:convex"}.* It suffices to prove the *rate certification identity* $$\begin{aligned} R^2 \|x_0 - x^*\|^2 - \|x_2 - x^*\|^2 = \sum_{i \neq j \in \{0, 1, *\}} \lambda_{ij} Q_{ij} \label{eq:pf-2step:cert} \end{aligned}$$ for some non-negative choice of multipliers $\lambda_{ij}$.
Indeed, since each $Q_{ij}$ is non-negative for any objective function $f \in \mathcal{F}$, the rate certification identity implies $$\begin{aligned} R^2 \|x_0 - x^*\|^2 - \|x_2 - x^*\|^2 \geqslant 0\,, \end{aligned}$$ which proves the claimed rate. It remains to construct non-negative $\lambda_{ij}$ for the rate certification identity. This is done in [@altschuler2018greed Theorem 8.10]. For completeness, we include the explicit values here, in slightly simpler (but equivalent) form: $$\begin{aligned} \lambda= \frac{\alpha^* (\beta^*)^2}{4} \begin{bmatrix} 0 & \frac{(S-m)(S-M)}{(M-m)} & 0 \\ \frac{(S-M) (2M -S- m)}{M-m} & 0 & \frac{2 M^2 + S^2 - 2 M S -m^2}{M-m} \\ \frac{m^3 - m^2 S + 4 M^2 S - m S^2 - 4 M S^2 + S^3}{M(m+S)} & \frac{2 M S - m^2 - S^2}{M-m} & 0 \\ \end{bmatrix}\,. \end{aligned}$$ Here, the rows and columns are indexed by $0, 1, *$, in that order. ◻ Of course, the challenge in such a proof is finding the multipliers $\lambda_{ij}$. When we prove our result for general $n$, we prove that the multipliers for the $2n$-length Silver Stepsize Schedule are recursively built from repeating the multipliers for the $n$-length Silver Stepsize Schedule twice, modulo a low-rank and sparse correction expressible in closed form. With this recursion, (i) the multipliers for the $n=2$ case above can be derived formulaically from the textbook proof for $n=1$, and (ii) the proof for the case of general $n$ mirrors the proof for $n=2$, at least in spirit.
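As a quick numerical sanity check on the closed forms in Remark [Remark 2](#rem:2step-values){reference-type="ref" reference="rem:2step-values"}, the following sketch (with our assumed example values $m=1/4$, $M=1$, matching Figure [7](#fig:2step){reference-type="ref" reference="fig:2step"}) verifies that the three quantities equalized by [\[eq:twostepsizes\]](#eq:twostepsizes){reference-type="eqref" reference="eq:twostepsizes"} indeed coincide with $R^*$, and that $R^*$ beats the squared $1$-step rate.

```python
import math

# Assumed example values m = 1/4, M = 1 (the closed forms are general).
m, M = 0.25, 1.0
S = math.sqrt(M**2 + (M - m)**2)

# Closed forms from Remark 2.
alpha = 2 / (m + S)
beta = 2 / (2 * M + m - S)
R = (S - M) / (2 * m + S - M)

# The three quantities equalized by the defining system of equations.
v1 = (M * alpha - 1) * (M * beta - 1)
v2 = (1 - alpha * m) * (1 - beta * m)
v3 = (1 - m * alpha) * (M * beta - 1) / (1 + alpha * (M - m))

print(alpha, beta, R, v1, v2, v3)
```

For these values $\alpha^* = 4/3$, $\beta^* = 2$, and all three equalized quantities equal $R^* = 1/3 < 0.36 = (\tfrac{M-m}{M+m})^2$.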
### Lower bound: optimality and uniqueness via hedging *Proof of rate lower bound in Theorem [Theorem 3](#thm:2step:convex){reference-type="ref" reference="thm:2step:convex"}.* We prove the *rate optimality identity* $$\begin{aligned} R(\alpha,\beta; \mathcal{F}) \geqslant\underline{R}(\alpha,\beta)\,, \label{eq:pf-2step:opt} \end{aligned}$$ for all non-trivial[^8] stepsizes $\alpha,\beta \in [1/M,1/m]$, where $$\begin{aligned} \underline{R}(\alpha,\beta) := \max\left\{ (M\alpha-1)(M\beta-1), \, (1 -\alpha m)(1-\beta m), \, (M\alpha-1)(1 - m \beta),\, \frac{(1-m\alpha)(M\beta - 1)}{1+\alpha(M-m)} \right\}\,. \end{aligned}$$ This suffices since it is straightforward to verify that $\underline{R}(\alpha,\beta)$ is minimized uniquely at $(\alpha^*,\beta^*)$ with value $R^*$; this yields the two defining equations in [\[eq:twostepsizes\]](#eq:twostepsizes){reference-type="eqref" reference="eq:twostepsizes"}. Indeed, this verification can be done by hand by case enumeration, or, more simply, it can be rigorously proven using standard symbolic computation techniques such as quantifier elimination [@CavinessJohnson]. It remains to prove [\[eq:pf-2step:opt\]](#eq:pf-2step:opt){reference-type="eqref" reference="eq:pf-2step:opt"}. We do this by exhibiting four "hard-to-optimize" functions $f \in \mathcal{F}$ for which the $2$-step convergence rate of GD from initialization $x_0 = 1$ is given by these four values. The first two functions are the quadratics $f(x) = \frac{\lambda}{2}x^2$ for $\lambda \in \{m, M\}$, in which case $x_2 = (1-\lambda\alpha)(1 - \lambda\beta)$. The other two functions are piecewise quadratic. It is perhaps simplest to state these functions via their second derivative since then any function value can be obtained by integrating from the minimum $x^* = 0$. The third function is given by $f''(x) = M$ for $x \geqslant 0$ and $f''(x) = m$ otherwise, in which case $x_2 = (1-M\alpha)(1 - m\beta)$.
The fourth function is given by $f''(x) = m$ for $x \geqslant\frac{1-m\alpha}{1+\alpha(M-m)}$ and $f''(x) = M$ otherwise, in which case $x_2 = \frac{(1-m\alpha)(1-M\beta)}{1 + \alpha(M-m)}$. This proves the desired identity [\[eq:pf-2step:opt\]](#eq:pf-2step:opt){reference-type="eqref" reference="eq:pf-2step:opt"}. ◻ It is insightful to contrast these four hard functions defining the $2$-step rate function[^9] with the analog for the quadratic case. Recall from [\[eq:plausibility:quad-2step\]](#eq:plausibility:quad-2step){reference-type="eqref" reference="eq:plausibility:quad-2step"} that in the quadratic setting, the $2$-step rate function $R(\alpha,\beta; \mathcal{F}) = \sup_{m \leqslant\lambda \leqslant M} |(1 - \alpha \lambda)( 1- \beta \lambda)|$. Although this seems to require infinitely many $\lambda$, it was shown in [@altschuler2018greed Chapter 5.2.2] that one can replace the continuum $[m,M]$ with the $3$ extrema of Chebyshev polynomials, i.e., $$\max_{\lambda \in \{m,M,\frac{M+m}{2}\}} |(1 - \lambda \alpha) (1 - \lambda \beta)|\,,$$ in the sense that minimizing this over $(\alpha,\beta)$ yields Young's $2$-step Chebyshev Schedule. These three values of $\lambda$ correspond to "hard-to-optimize" quadratic functions $f(x) = \tfrac{\lambda}{2}x^2$. How are they different from the hard functions in the above proof? The first two quadratics are common between the quadratic and convex case, but the remaining functions differ. In particular, the third and fourth functions in the convex case are non-quadratic. In words, the richness of the convex function class enables changing the curvature in different places, which enables more alignment of the bad convergence rates for the individual stepsizes. This makes it provably harder to hedge in the convex setting. Note also that the denominator of the fourth function is singlehandedly responsible for the asymmetry in $\underline{R}(\alpha,\beta)$, and thus in the optimal schedule in the convex setting.
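These worst-case constructions are easy to check mechanically. The sketch below (ours; the values of $m$, $M$, $\alpha$, $\beta$ are illustrative) implements the four hard functions via their gradients, runs two GD steps from $x_0 = 1$, and compares $x_2$ against the closed forms above:

```python
def gd2(grad, alpha, beta, x0=1.0):
    # two steps of gradient descent from x0 with stepsizes alpha, beta
    x1 = x0 - alpha * grad(x0)
    return x1 - beta * grad(x1)

m, M, alpha, beta = 1.0, 2.0, 0.7, 0.9      # stepsizes in [1/M, 1/m]

# hard functions 1 and 2: quadratics f(x) = (lam/2) x^2 with lam in {m, M}
for lam in (m, M):
    x2 = gd2(lambda x: lam * x, alpha, beta)
    assert abs(x2 - (1 - lam * alpha) * (1 - lam * beta)) < 1e-12

# hard function 3: f'' = M for x >= 0, f'' = m otherwise (minimum at x* = 0)
g3 = lambda x: M * x if x >= 0 else m * x
assert abs(gd2(g3, alpha, beta) - (1 - M * alpha) * (1 - m * beta)) < 1e-12

# hard function 4: f'' = m for x >= t and f'' = M otherwise, kink at t
t = (1 - m * alpha) / (1 + alpha * (M - m))
g4 = lambda x: M * x if x <= t else M * t + m * (x - t)
x2 = gd2(g4, alpha, beta)
assert abs(x2 - (1 - m * alpha) * (1 - M * beta) / (1 + alpha * (M - m))) < 1e-9
```

Note that on the fourth function the first GD step lands exactly on the kink $t = \frac{1-m\alpha}{1+\alpha(M-m)}$, which is how the construction aligns the two curvature regimes against both stepsizes.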
This proof can be extended to establish the optimality of the Silver Stepsize Schedule, as will be detailed in a shortly forthcoming paper. # Silver Stepsize Schedule {#sec:construction} For simplicity of exposition, from here on we restrict to horizons $n$ that are powers of $2$ (see §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"} for a discussion of extensions to general $n$), and we set $m=1/\kappa$ and $M=1$ to reduce notational overhead (this is without loss of generality after a possible rescaling). ## Normalized Silver Stepsizes {#ssec:construction:yz} We construct auxiliary stepsize sequences $y_n,z_n$ that are normalized in a certain way to lie in the interval $[0,1]$. The particular normalization (a certain linear fractional transformation defined in §[3.2](#ssec:construction:ab){reference-type="ref" reference="ssec:construction:ab"}) simplifies the recursive stepsize splitting by making it a quadratic equation. Explicitly, initialize the sequences $y_1 = z_1 = 1/\kappa$, and define $y_n,z_n$ recursively from $z_{n/2}$ as the solutions to the defining equations $$\begin{aligned} y_nz_n = z_{n/2}^2 \qquad \text{and} \qquad z_n - y_n = 2(z_{n/2} - z_{n/2}^2)\,. \label{eq:yz-defining}\end{aligned}$$ This is the direct analog of the stepsize splitting detailed for the case $n=2$ in §[2.2](#ssec:hedging:convex){reference-type="ref" reference="ssec:hedging:convex"}. Denoting $\xi = 1 - z_{n/2}$, the explicit solution is $$y_n = z_{n/2}\, / (\xi + \sqrt{1+\xi^2}) \qquad \text{ and } \qquad z_n = z_{n/2}\, (\xi + \sqrt{1+\xi^2})\,. \label{eq:yz-recur}$$ ![Cobweb plot describing the evolution of $z_n$ under the iteration $z_{n/2} \mapsto z_n$ given in [\[eq:yz-defining\]](#eq:yz-defining){reference-type="eqref" reference="eq:yz-defining"} and [\[eq:yz-recur\]](#eq:yz-recur){reference-type="eqref" reference="eq:yz-recur"}. The initial condition is $1/\kappa$ (in this plot, $\kappa=32$).
The iterates grow exponentially when $z$ is near zero, and converge quadratically to $1$ when $z$ is close to $1$.](figs/Z-cobweb.pdf){#fig:z-cobweb height="6cm"} The following lemma collects several simple observations about these sequences. See §[4](#sec:rate){reference-type="ref" reference="sec:rate"} for a detailed discussion of how $y_n,z_n$ both converge to their limits $y_n, z_n \to 1$, exponentially fast when they are close to $0$, and then doubly exponentially fast when they are close to $1$. **Lemma 4** (Basic properties of the Normalized Silver Stepsizes). *The sequence $z_n$ is monotonically increasing from $z_1 = 1/\kappa$ to $\lim_{n \to \infty} z_n = 1$. For all $n$, $$\begin{aligned} 0 < y_n \leqslant z_n \leqslant 1\,. \end{aligned}$$ Moreover, the above inequality $y_n \leqslant z_n$ is strict for any $\kappa > 1$.* ## Silver Stepsizes {#ssec:construction:ab} We define the Silver Stepsizes $$\begin{aligned} a_n := \psi(y_n) \qquad \text{and} \qquad b_n := \psi(z_n) \, \label{eq:yz-reparam-invert}\end{aligned}$$ from the Normalized Silver Stepsizes $y_n,z_n$ via the linear fractional transformation $\psi$ given by $$\begin{aligned} \psi : t \mapsto \frac{1 + \kappa t}{1 + t} \qquad \text{and} \qquad \psi^{-1} : s \mapsto \frac{s-1}{\kappa-s}\,.\end{aligned}$$ We remark that this mapping $\psi$ has the following special values $$\begin{aligned} \psi(0)= 1 \,, \qquad \psi(1/\kappa) = \frac{2}{1+1/\kappa}\,, \qquad \psi(1) = \frac{1+\kappa}{2}\,, \qquad \psi(\infty)= \kappa \,. \label{eq:psi-special}\end{aligned}$$ The significance of the two middle values is that these are the initial stepsizes $a_1 = b_1 = \psi(1/\kappa)$ and the limiting stepsizes $\lim_{n \to \infty} a_n = \lim_{n \to \infty} b_n = \psi(1)$. We remark that these two middle values are the harmonic and arithmetic means of the two extremal values, i.e., $\psi(1/\kappa) = \mathrm{HM}(\psi(0), \psi(\infty))$ and $\psi(1) = \mathrm{AM}(\psi(0), \psi(\infty))$.
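The recursion [\[eq:yz-recur\]](#eq:yz-recur){reference-type="eqref" reference="eq:yz-recur"} and the map $\psi$ are straightforward to implement. The sketch below (ours; $\kappa = 32$ is illustrative) iterates the splitting and checks the defining equations [\[eq:yz-defining\]](#eq:yz-defining){reference-type="eqref" reference="eq:yz-defining"} together with the special values $\psi(1/\kappa) = \mathrm{HM}(1,\kappa)$ and $\psi(1) = \mathrm{AM}(1,\kappa)$:

```python
import math

def psi(t, kappa):
    # linear fractional transformation from normalized to actual stepsizes
    return (1 + kappa * t) / (1 + t)

def silver_yz(kappa, levels):
    # normalized stepsizes (y_n, z_n) for n = 2^levels, from y_1 = z_1 = 1/kappa
    y = z = 1.0 / kappa
    for _ in range(levels):
        xi = 1 - z
        r = xi + math.sqrt(1 + xi * xi)
        y, z = z / r, z * r    # solves y z = z_prev^2 and z - y = 2 (z_prev - z_prev^2)
    return y, z

kappa = 32.0
z_prev = silver_yz(kappa, 2)[1]
y, z = silver_yz(kappa, 3)
assert abs(y * z - z_prev ** 2) < 1e-12                    # first defining equation
assert abs((z - y) - 2 * (z_prev - z_prev ** 2)) < 1e-12   # second defining equation
assert abs(psi(1 / kappa, kappa) - 2 / (1 + 1 / kappa)) < 1e-12   # initial stepsize, HM(1, kappa)
assert abs(psi(1.0, kappa) - (1 + kappa) / 2) < 1e-12             # limiting stepsize, AM(1, kappa)
```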
The looseness in the classical AM-HM inequality therefore quantifies the gap between the initial and limiting stepsizes. The following lemma records this and several other simple observations about these stepsizes. **Lemma 5** (Basic properties of the Silver Stepsizes). *The sequence $b_n$ is monotonically increasing from $b_1 = \mathrm{HM}(1,\kappa)$ to $\lim_{n \to \infty} b_n = \mathrm{AM}(1,\kappa)$. For all $n$, $$\begin{aligned} 1 \leqslant a_n \leqslant b_n \leqslant\mathrm{AM}(1,\kappa) \leqslant\kappa \,. \end{aligned}$$ Moreover, the above inequality $a_n \leqslant b_n$ is strict for any $\kappa > 1$.* ## Silver Stepsize Schedule {#ssec:construction:schedule} ![Normalized Silver Stepsize Schedule, for different condition numbers $\kappa = 4,16,64,256$ (one panel per condition number). Notice that these are always bounded between 0 and 1. The Silver Stepsize Schedules $h^{(n)}$ shown in Figure [4](#fig:stepsizes){reference-type="ref" reference="fig:stepsizes"} are generated by applying $\psi$ to the schedules here.](figs/normalizedstepsize-1.pdf "fig:"){#fig:normalizedstepsizes width=".24\\columnwidth"} ![](figs/normalizedstepsize-2.pdf "fig:"){width=".24\\columnwidth"} ![](figs/normalizedstepsize-3.pdf "fig:"){width=".24\\columnwidth"} ![](figs/normalizedstepsize-4.pdf "fig:"){width=".24\\columnwidth"} Let $h^{(n)}$ denote the Silver Stepsize Schedule of length $n$. Denote its $n/2$-th stepsize by $a_n$ and its $n$-th by $b_n$. As overviewed briefly in §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}, we recursively construct $$\begin{aligned} h^{(n)} := [\tilde{h}^{(n/2)}, a_n, \tilde{h}^{(n/2)}, b_n]\,, \label{eq:recurrence-stepsizes}\end{aligned}$$ where $\tilde{h}^{(n/2)}$ denotes everything in $h^{(n/2)}$ except the final step, i.e., everything except $b_{n/2}$. Note that $b_{n/2}$ is in $h^{(n/2)}$, but not in $h^{(n)}$; it is split into $a_n$ and $b_n$. Note also that $a_n$, $b_n$ form the largest stepsizes in $h^{(n)}$, with $b_n$ being the largest (Lemma [Lemma 5](#lem:ab-basic){reference-type="ref" reference="lem:ab-basic"}).
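The recursion [\[eq:recurrence-stepsizes\]](#eq:recurrence-stepsizes){reference-type="eqref" reference="eq:recurrence-stepsizes"} translates directly into code. The sketch below (ours; $\kappa = 16$ is illustrative) builds $h^{(n)}$ for $n$ a power of $2$ and checks the structure just described:

```python
import math

def silver_schedule(kappa, n):
    # Silver Stepsize Schedule h^(n), n a power of 2, built by recursive splitting
    psi = lambda t: (1 + kappa * t) / (1 + t)
    y = z = 1.0 / kappa
    h = [psi(y)]                          # h^(1) = [a_1]
    while len(h) < n:
        xi = 1 - z
        r = xi + math.sqrt(1 + xi * xi)
        y, z = z / r, z * r               # normalized stepsizes for the doubled horizon
        h = h[:-1] + [psi(y)] + h[:-1] + [psi(z)]   # [h~^(n/2), a_n, h~^(n/2), b_n]
    return h

kappa = 16.0
h4, h8 = silver_schedule(kappa, 4), silver_schedule(kappa, 8)
assert h4[0] == h4[2]                              # pattern [a_2, a_4, a_2, b_4]
assert h8[:3] == h4[:3] and h8[4:7] == h4[:3]      # two glued copies of h~^(4)
assert max(h8) == h8[-1] < (1 + kappa) / 2         # b_8 is the largest, below AM(1, kappa)
```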
For the convenience of the reader, we recall from §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"} that for small $n$, this pattern is $$\begin{aligned} h^{(1)} &= [a_1] \\ h^{(2)} &= [a_2, b_2] \\ h^{(4)} &= [a_2,a_4,a_2,b_4] \\ h^{(8)} &= [a_2,a_4,a_2,a_8,a_2,a_4,a_2,b_8]\end{aligned}$$ See Figure [4](#fig:stepsizes){reference-type="ref" reference="fig:stepsizes"} for an illustration of this pattern, and see §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"} for a discussion of the emergent fractal, dependence on the horizon, and patterns for small $n$. **Remark 6** (Occupation measure). *For all $i \in \mathbb{N}$ and all horizons $n \geqslant 2^i$, the stepsize $a_{2^i}$ is used in a $2^{-i}$ fraction of the $n$-step Silver Stepsize Schedule. For example, for all horizons $n \geqslant 2$, the smallest stepsize $a_2$ is used in every other iteration. For the infinite limit of the Silver Stepsize Schedule (see §[1.1.3](#sssec:intro:dis:schedule){reference-type="ref" reference="sssec:intro:dis:schedule"}), the occupation measure simplifies to $$\begin{aligned} \sum_{i = 1}^{\infty} 2^{-i} \delta_{a_{2^i}}\,. \end{aligned}$$ This can be viewed as a geometric distribution that takes value $a_{2^i}$ with probability $2^{-i}$.* ## Silver Convergence Rate We define the Silver Convergence Rate as $$\begin{aligned} \tau_n := \left(\frac{1-z_n}{1+z_n} \right)^2\,. \label{def:tau} \end{aligned}$$ Of course, from just this definition it is not yet clear why we call $\tau_n$ a rate; in §[5](#sec:cert){reference-type="ref" reference="sec:cert"} we prove that $\tau_n$ is the convergence rate of the Silver Stepsize Schedule.
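The definition [\[def:tau\]](#def:tau){reference-type="eqref" reference="def:tau"} is easy to tabulate. The sketch below (ours; $\kappa = 64$ is illustrative) computes $\tau_n$ through the $z$-recursion and checks that it starts at the textbook unaccelerated rate and decreases monotonically:

```python
import math

def tau(kappa, levels):
    # Silver Convergence Rate tau_n for n = 2^levels, via the z-recursion
    z = 1.0 / kappa
    for _ in range(levels):
        xi = 1 - z
        z = z * (xi + math.sqrt(1 + xi * xi))
    return ((1 - z) / (1 + z)) ** 2

kappa = 64.0
rates = [tau(kappa, i) for i in range(9)]                         # n = 1, 2, 4, ..., 256
assert abs(rates[0] - ((kappa - 1) / (kappa + 1)) ** 2) < 1e-15   # unaccelerated rate
assert all(r1 > r2 > 0 for r1, r2 in zip(rates, rates[1:]))       # strictly decreasing
```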
Note that since $z_n$ is monotonically increasing (Lemma [Lemma 4](#lem:yz-basic){reference-type="ref" reference="lem:yz-basic"}), this rate $\tau_n$ is monotonically decreasing from the textbook unaccelerated rate $\tau_1 = ((\kappa-1)/(\kappa+1))^2$ to $\lim_{n \to \infty} \tau_n = 0$. In the following section, we provide a complete understanding of exactly how fast $\tau_n$ converges to $0$. # Analysis of the Silver Convergence Rate {#sec:rate} Here we prove the bound on the Silver Convergence Rate $\tau_n$ in our main result (Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}). We restate this bound for convenience. **Theorem 7** (Silver Convergence Rate). *Denote $i^* := \lfloor \log_{\rho} \frac{\kappa}{3} \rfloor$ and $n^* := 2^{i^*}$. Then for any $n$ that is a power of $2$, we have the following bound on $\tau_n$.* - *[Acceleration regime.]{.ul} If $n \leqslant n^*$, then $$\tau_n = \exp\left( - \Theta\left( \frac{n^{\log_2 \rho}}{\kappa} \right) \right)\,.$$* - *[Saturation regime.]{.ul} If $n > n^*$, then $$\tau_n = \exp\left( - \Theta\left( \frac{n}{ n^*} \right) \right)\,.$$* This result establishes $n^* \asymp \kappa^{\log_{\rho} 2}$ as the location of a phase transition. There, the Silver Convergence Rate $\tau_n$ switches from super-exponential to exponential in the horizon $n$. See the introduction for a detailed discussion of this phase transition and the intuition behind it in terms of how the Silver Stepsize Schedule is effectively periodic with period $n^*$. For simplicity, we make no attempt to optimize the constants in the $\Theta$ and the choice of $i^*$ (the $1/3$ in the theorem statement is arbitrary). Our proofs make crude constant bounds to ease the exposition, and it is straightforward to tighten these. However, as established by our upper and lower bounds, our proofs are already tight up to reasonable constant factors. The section is organized as follows.
In §[4.1](#ssec:rate:heuristic){reference-type="ref" reference="ssec:rate:heuristic"}, we provide a heuristic derivation of Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"} that explains the phase transition via Taylor expanding the dynamics in the two regimes. This gives the central intuition for the result and its proof. In §[4.2](#ssec:rate:rigorous){reference-type="ref" reference="ssec:rate:rigorous"}, we make these Taylor expansions precise to conclude the proof of Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"}. ## Heuristic derivation {#ssec:rate:heuristic} The phase transition in $\tau_n = (\tfrac{1-z_n}{1+z_n})^2$ is a consequence of the phase transition in the dynamics of the auxiliary sequence $z_n$. To explain this, it is convenient to simplify notation by re-indexing $n = 2^i$ so that iterations of the dynamical process are indexed by $i=0,1,2,3,\dots$ rather than $n=1,2,4,8\dots$. It is helpful to also re-parameterize $$\begin{aligned} h_i := \Psi(z_{2^i})\,,\end{aligned}$$ where $\Psi : (0,1) \to (0,1)$ is the monotone bijection $$\begin{aligned} \Psi(z) := \frac{2z}{1+z}\,.\end{aligned}$$ The significance of this re-parameterization to $h_i$ is that $$\begin{aligned} \tau_n = \left( \frac{1-z_n}{1+z_n} \right)^2 = (1 - h_i)^2 \,. \label{eq:rate:tau-h}\end{aligned}$$ Thus, proving a fast convergence rate amounts to lower bounding $h_i$. ![Cobweb diagram for the function $H(h)$ in equation [\[eq:rate:def-H\]](#eq:rate:def-H){reference-type="eqref" reference="eq:rate:def-H"} and its Taylor approximations [\[eq:regime-acceleration:taylor\]](#eq:regime-acceleration:taylor){reference-type="eqref" reference="eq:regime-acceleration:taylor"} and [\[eq:regime-saturation:taylor\]](#eq:regime-saturation:taylor){reference-type="eqref" reference="eq:regime-saturation:taylor"}. 
Starting from the initial condition [\[eq:pf-rate:init\]](#eq:pf-rate:init){reference-type="eqref" reference="eq:pf-rate:init"}, iterates grow by a constant multiplicative factor of the Silver Ratio $\rho = 1 + \sqrt{2}$ when $h$ is near zero, and converge quadratically to 1 when $h$ is close to 1. ](figs/H-iteration.pdf){#fig:h-iteration height="7cm"} What do the dynamics of $h_i$ look like? At initialization, $z_1 = 1/\kappa$ (see §[3](#sec:construction){reference-type="ref" reference="sec:construction"}), thus $$\begin{aligned} h_0 = \frac{2}{1 + \kappa} \asymp \frac{1}{\kappa}\,. \label{eq:pf-rate:init}\end{aligned}$$ Then the iterations of this process increase $h_i$ exponentially fast to $1$ when it is sub-constant size, and then doubly-exponentially fast when $h_i$ is of constant size. (This dichotomy is the source of the phase transition in Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"}.) To analyze these dynamics, let $H : (0, 1) \to (0,1)$ denote the update function sending $h_i$ to $h_{i+1}$. Then $H = \Psi \circ F \circ \Psi^{-1}$ where $F(z) = z (1 - z + \sqrt{1 + (1-z)^2})$ is the function that updates $z_n$ to $z_{2n} = F(z_n)$, see Section [3](#sec:construction){reference-type="ref" reference="sec:construction"}. A direct algebraic computation gives the explicit expression $$\begin{aligned} H(h) = \frac{h \, (2 - 3 h + \sqrt{5 h^2 - 12 h + 8})} {2(1-h^2)}\,. \label{eq:rate:def-H}\end{aligned}$$ Taylor expanding $H$ around $h \approx 0$ and $h \approx 1$ illustrates the markedly different dynamics in these two regimes; see Figure [13](#fig:h-iteration){reference-type="ref" reference="fig:h-iteration"}. - [Acceleration regime.]{.ul} For $h \ll 1$, $$\begin{aligned} H(h) \approx \rho h\,. 
\label{eq:regime-acceleration:taylor} \end{aligned}$$ Thus, in this regime, each iteration increases $h_i$ by a factor of roughly $\rho$, so $h_i \approx \rho^i h_0 \approx 2\rho^i / \kappa$, and thus the Silver Convergence Rate is roughly $$\begin{aligned} \tau_n = (1 - h_i)^2 \approx \exp( - 2 h_i) \approx \exp\left( -\rho^i / \kappa\right) = \exp\left( - n^{\log_2 \rho} / \kappa \right) \,. \label{eq:informal-rate:acceleration} \end{aligned}$$ This regime lasts for only $i \approx \log_{\rho} \kappa$ iterations (aka horizon $n=2^i \approx \kappa^{\log_{\rho} 2}$) because at that point $h_i \asymp \rho^i / \kappa\asymp 1$ is of constant size. This is the phase transition. - [Saturation regime.]{.ul} For $h \approx 1$, $$\begin{aligned} 1 - H(h) \approx (1 - h)^2\,. \label{eq:regime-saturation:taylor} \end{aligned}$$ In words, the key phenomenon here is that the average rate $\tau_n^{1/n}$ stays essentially the same as $n$ increases---in contrast to the acceleration regime, in which the average rate improves in $n$. Indeed, the Taylor expansion [\[eq:regime-saturation:taylor\]](#eq:regime-saturation:taylor){reference-type="eqref" reference="eq:regime-saturation:taylor"} indicates that in the saturation regime, $\tau_{n} = \left( 1 - h_{i}\right)^2 \approx (1 - h_{i-1})^4 = \tau_{n/2}^2$. By repeating this argument and then using the fact that $\tau_{n^*} = \exp(-\Theta(1))$ which follows from the acceleration regime, we obtain $$\begin{aligned} \tau_n \approx \left( \tau_{n^*} \right)^{n / n^*} = \exp\left( - \Theta\left( n/n^* \right) \right)\,. \end{aligned}$$ If the approximations were justified in the above two displays, then this informal argument would lead to a proof of Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"}. We do this in the following subsection. ## Rigorous derivation {#ssec:rate:rigorous} Here we prove Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"}.
We first state two helper lemmas, which formalize the Taylor approximations [\[eq:regime-acceleration:taylor\]](#eq:regime-acceleration:taylor){reference-type="eqref" reference="eq:regime-acceleration:taylor"} and [\[eq:regime-saturation:taylor\]](#eq:regime-saturation:taylor){reference-type="eqref" reference="eq:regime-saturation:taylor"} in the acceleration regime and saturation regime, respectively. **Lemma 8** (Dynamics in the acceleration regime). *Let $\nu := \frac{3\rho}{2\sqrt{2}} \approx 2.561$. For all $h \geqslant 0$, $$\begin{aligned} \rho h - \nu h^2 \leqslant H(h) \leqslant\rho h \,. \end{aligned}$$* **Lemma 9** (Dynamics in the saturation regime). *For all $h \geqslant 0$, $$\begin{aligned} (1-h)^2 - (1-h)^4 \leqslant 1 - H(h) \leqslant(1 - h)^2\,. \end{aligned}$$* We omit the proofs of these lemmas, since the inequalities are visually obvious from plotting the functions, and can be formally proven in a routine algorithmic way, as they only involve algebraic functions of a single scalar variable. This is done by computing the critical points and using well-known techniques for root isolation; see e.g. [@BPRbook]. An appealing consequence of Lemma [Lemma 9](#lem:dynamics-saturation){reference-type="ref" reference="lem:dynamics-saturation"} is the inequality $\tau_{2n} \leqslant\tau_n^2$. We call this the *rate monotonicity* property of the Silver Stepsize Schedule, since it amounts to the statement that using the $2n$-step schedule is at least as good as using the $n$-step schedule twice. **Corollary 10** (Rate monotonicity property for the Silver Stepsize Schedule). 
*For any $n$ that is a power of $2$, $$\tau_{2n} \leqslant\tau_{n}^2\,.$$* *Proof.* By [\[eq:rate:tau-h\]](#eq:rate:tau-h){reference-type="eqref" reference="eq:rate:tau-h"}, then Lemma [Lemma 9](#lem:dynamics-saturation){reference-type="ref" reference="lem:dynamics-saturation"}, then [\[eq:rate:tau-h\]](#eq:rate:tau-h){reference-type="eqref" reference="eq:rate:tau-h"} again, we have $\tau_{2n} = (1 - h_{i+1})^2 \leqslant(1-h_i)^4 = \tau_n^2$. ◻ *Proof of Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"}.* Here we prove the upper bounds (a.k.a., the convergence rates). The matching lower bounds are conceptually identical and deferred to Appendix [7](#app:rate){reference-type="ref" reference="app:rate"} for brevity. [Acceleration regime.]{.ul} Suppose $n \leqslant n^*$. Let $i := \log_2 n \leqslant i^*$. We bound $$\begin{aligned} h_i \geqslant \rho^i h_0 - \nu \sum_{t=0}^{i-1} \rho^{i-1-t} h_t^2 \geqslant\rho^i h_0 - \nu h_0^2 \rho^{i-1} \sum_{t=0}^{i-1} \rho^t \geqslant\rho^i h_0 \left( 1 - \frac{3}{4} \rho^i h_0 \right) \geqslant\frac{\rho^i h_0}{2} \geqslant \frac{n^{\log_2 \rho}}{2\kappa}\,. \label{eq:pf-rate:1} \end{aligned}$$ Above, the first step is by $i$ applications of the lower bound in Lemma [Lemma 8](#lem:dynamics-acceleration){reference-type="ref" reference="lem:dynamics-acceleration"}. The second step is because $h_t \leqslant\rho^t h_0$ by $t$ applications of the upper bound in Lemma [Lemma 8](#lem:dynamics-acceleration){reference-type="ref" reference="lem:dynamics-acceleration"}. The third step is by summing the geometric series, crudely dropping a positive term, and simplifying $\nu/(\rho\sqrt{2}) = 3/4$. The fourth step is because $\rho^i h_0 \leqslant\rho^{i^*} h_0 \leqslant 2/3$ by definition of $i^*$ and the initialization upper bound $h_0 \leqslant 2/\kappa$, see [\[eq:pf-rate:init\]](#eq:pf-rate:init){reference-type="eqref" reference="eq:pf-rate:init"}.
The final step is by definition of $i = \log_2 n$ and the initialization lower bound $h_0 \geqslant 1/\kappa$, see [\[eq:pf-rate:init\]](#eq:pf-rate:init){reference-type="eqref" reference="eq:pf-rate:init"}. This completes the proof since by [\[eq:rate:tau-h\]](#eq:rate:tau-h){reference-type="eqref" reference="eq:rate:tau-h"}, $$\begin{aligned} \tau_n = (1 - h_i)^2 \leqslant\exp( -2 h_i) \leqslant\exp(- n^{\log_2 \rho} / \kappa)\,. \end{aligned}$$ [Saturation regime.]{.ul} Next, suppose $n > n^*$. By $\log_2(n/n^*)$ applications of Corollary [Corollary 10](#cor:rate-monotonicity){reference-type="ref" reference="cor:rate-monotonicity"} and then using the bound on $\tau_{n^*}$ proved in the acceleration regime, we have $$\begin{aligned} \tau_n \leqslant \left( \tau_{n^*} \right)^{n / n^*} \leqslant \exp\left( - \frac{n}{n^*} \cdot \frac{(n^*)^{\log_2 \rho}}{\kappa} \right) \,. \end{aligned}$$ The proof is complete by using the definition of $n^*$ and $i^*$ to bound $(n^*)^{\log_2 \rho} = \rho^{i^*} \geqslant\frac{\kappa}{3\rho}$. ◻ # Certificate of the Silver Convergence Rate {#sec:cert} Here we prove that the Silver Stepsize Schedule has convergence rate $\tau_n$. This is where we establish multi-step descent. For a conceptual overview, we refer the reader to §[2.2](#ssec:hedging:convex){reference-type="ref" reference="ssec:hedging:convex"} for the case of $n=2$; the proof for general $n$ here mirrors that key case, albeit more technically involved. Recall from the discussion there that the proof strategy amounts to finding a *certificate* $\{\lambda_{ij}\}$ for the rate $\tau_n$, by which we mean non-negative multipliers $\{\lambda_{ij}\}_{i,j \in \{0, \dots, n-1,*\}}$ such that $$\begin{aligned} \tau_n \|x_0 - x^*\|^2 - \|x_n - x^*\|^2 = \sum_{i,j \in \{0, \dots, n-1, *\}} \lambda_{ij}Q_{ij}\,.\end{aligned}$$ See §[2.2](#ssec:hedging:convex){reference-type="ref" reference="ssec:hedging:convex"} for a definition of the co-coercivities $Q_{ij}$.
Briefly, these are valid inequalities that generate all possible long-range consistency conditions between the gradients seen along GD's trajectory. ![Components of the recursively glued certificate in Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}, illustrated here for combining two copies of the $n=4$ certificate (shaded) to create the $2n=8$ certificate.](figs/recursive_gluing.png){#fig:gluing width="0.5\\linewidth"} Our proof builds the $2n$-step certificate by *recursively gluing* two copies of the $n$-step certificate and adding slight modifications to account for the fact that the $2n$-step Silver Stepsize Schedule $h^{(2n)}$ differs from $[h^{(n)}, h^{(n)}]$ in two out of the $2n$ stepsizes. Concretely, this recursive gluing can be understood as creating the $(2n+1) \times (2n+1)$ matrix $\{\lambda_{ij}\}_{i,j \in \{0, \dots, 2n-1, *\}}$ from the $(n+1) \times (n+1)$ matrix $\{\sigma_{ij}\}_{i,j \in \{0, \dots, n-1,*\}}$ in three parts: *a tensor product* which glues together two copies of the $n$-step certificate, a *rank-one correction* which affects the rows indexed by $i \in \{n-1,2n-1,*\}$, and a *sparse correction* which affects the $6$ entries $(i,j)$ where $i \neq j \in \{n-1,2n-1,*\}$. See Figure [14](#fig:gluing){reference-type="ref" reference="fig:gluing"}. This recursive gluing is formally stated in Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"} below. To most easily state this result, we first isolate a certain property of the sparsity pattern of the multipliers $\lambda_{ij}$ that holds by construction in our recursion. This property is technical and eases the proof. **Definition 11** ($*$-sparsity property). *A collection of weights $\{\lambda_{i,j}\}_{i,j \in \{0, \dots, n-1,*\}}$ satisfies the *$*$-sparsity property* if $\lambda_{i,*} = 0$ for all $i < n-1$.* **Theorem 12** (Recursive gluing for the Silver Stepsize Schedule). *Let $\kappa \in (1,2) \cup (2, \infty)$.
Suppose $\{\sigma_{ij}\}_{i,j \in \{0, \dots, n-1, *\}}$ satisfies $*$-sparsity and certifies the $n$-step rate, i.e., $$\begin{aligned} \tau_n \|x_0 - x^*\|^2 - \|x_n - x^*\|^2 = \sum_{i,j \in \{0, \dots, n-1, *\}} \sigma_{ij} Q_{ij} \, \qquad \text{ for stepsize schedule } h^{(n)}\,. \label{eq:cert:n} \end{aligned}$$ Then there exists $\{\lambda_{ij}\}_{i,j \in \{0, \dots, 2n-1, *\}}$ that satisfies $*$-sparsity and certifies the $2n$-step rate, i.e., $$\begin{aligned} \tau_{2n} \|x_0 - x^*\|^2 - \|x_{2n} - x^*\|^2 = \sum_{i,j \in \{0, \dots, 2n-1, *\}} \lambda_{ij} Q_{ij}\, \qquad \text{ for stepsize schedule } h^{(2n)}\,. \label{eq:cert:2n} \end{aligned}$$ Moreover, this certificate is explicitly given by $$\begin{aligned} \lambda_{ij} := \underbrace{\Theta_{ij}}_{\text{gluing}} + \underbrace{\Xi_{ij}}_{\text{rank-one correction}} + \underbrace{\Delta_{ij}}_{\text{sparse correction}} \label{eq:recurrence:construction} \end{aligned}$$ where the "gluing component" $\Theta$ is defined as $$\begin{aligned} \Theta_{i,j} := \underbrace{\frac{\tau_{2n}}{\tau_n} \sigma_{i,j} \cdot \mathds{1}_{i,j \in \{0, \dots, n-1,*\}}}_{\text{recurrence for first $n$ steps}} \;+ \underbrace{c \sigma_{i-n,j-n} \cdot \mathds{1}_{i,j \in \{n,\dots,2n-1,*\}}}_{\text{recurrence for second $n$ steps}}\,, \end{aligned}$$ the "rank-one correction" $\Xi$ is zero except $\{\Xi_{ij}\}_{i \in \{n-1,2n-1,*\}, j \in \{n, \dots, 2n-2\}}$, and the "sparse correction" $\Delta$ is zero except $\{\Delta_{ij}\}_{i \neq j \in \{n-1,2n-1,*\}}$. The explicit values of $c$, $\Xi$, $\Delta$ are provided in Appendix [8](#app:cert){reference-type="ref" reference="app:cert"}.* While the explicit values of $\Xi$ and $\Delta$ are somewhat involved, the key point is that they can be expressed as rational functions in just $z_n, y_{2n}, z_{2n}$, see Remark [Remark 14](#rem:simple-yz){reference-type="ref" reference="rem:simple-yz"}. 
Importantly, since $y_{2n}, z_{2n}$ are explicit algebraic functions of $z_n$ by construction (see §[3.1](#ssec:construction:yz){reference-type="ref" reference="ssec:construction:yz"}), this turns verifying the claimed identity [\[eq:cert:2n\]](#eq:cert:2n){reference-type="eqref" reference="eq:cert:2n"} into a straightforward (albeit tedious) algebraic exercise that is rigorously automatable via standard computer algebra techniques [@cox2013ideals]. A few minor remarks. First, for simplicity, Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"} assumes $\kappa \neq 2$. This allows us to multiply and divide by $\kappa-2$, which simplifies expressions. The rate for $\kappa = 2$ anyway follows immediately from $\kappa = 2+\varepsilon$ for $\varepsilon\downarrow 0$. Second, in [\[eq:recurrence:construction\]](#eq:recurrence:construction){reference-type="eqref" reference="eq:recurrence:construction"} the notational shorthand $i-n$ is understood to be $*$ when $i=*$. Third, when it is said that $\lambda$ satisfies $*$-sparsity, it is understood that this property corresponds to the horizon of length $2n$, i.e., $\lambda_{i,*} = 0$ for all $i < 2n-1$. Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"} immediately implies the convergence rate [\[eq:thm-main:cert\]](#eq:thm-main:cert){reference-type="eqref" reference="eq:thm-main:cert"} in our main result Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}. *Proof of convergence rate in Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}.* The base case of $n=1$ is the classical analysis of GD; see, e.g., [@altschuler2018greed Chapter 8] for a proof in this language of co-coercivities.[^10] The convergence rate is the textbook unaccelerated rate $\tau_1 = (\tfrac{\kappa - 1}{\kappa + 1})^2$.
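As a concrete check of this base case, the sketch below (ours) verifies the $1$-step certification identity numerically, using the multipliers $\lambda_{0*} = \lambda_{*0} = 2/(M+m)^2$ (a standard choice for the classical one-step analysis; this explicit value is our assumption, not stated in the text):

```python
import random

def Q(xi, gi, fi, xj, gj, fj, m, M):
    # co-coercivity Q_ij from the first-order data (x, g, f) at indices i and j
    return (2 * (M - m) * (fi - fj) + 2 * (M * gj - m * gi) * (xj - xi)
            - (gi - gj) ** 2 - M * m * (xi - xj) ** 2)

m, M = 0.25, 1.0
a1 = 2 / (m + M)                   # classical stepsize
tau1 = ((M - m) / (M + m)) ** 2    # textbook unaccelerated rate
lam = 2 / (M + m) ** 2             # multipliers lambda_{0*} = lambda_{*0}

random.seed(0)
for _ in range(100):
    # arbitrary first-order data at iterate 0; the optimum data is (0, 0, 0)
    x0, g0, f0 = (random.uniform(-2, 2) for _ in range(3))
    x1 = x0 - a1 * g0              # one GD step
    lhs = tau1 * x0 ** 2 - x1 ** 2
    rhs = lam * (Q(x0, g0, f0, 0, 0, 0, m, M) + Q(0, 0, 0, x0, g0, f0, m, M))
    assert abs(lhs - rhs) < 1e-10
```

The function-value terms cancel in the sum $Q_{0*} + Q_{*0}$, which is why a single symmetric pair of multipliers suffices for $n=1$.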
By induction, Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"} implies that $\tau_n$ is a valid convergence rate for the $n$-step Silver Stepsize Schedule, for all $n$ that are powers of $2$. ◻ Below, in §[5.1](#ssec:rate:corrections){reference-type="ref" reference="ssec:rate:corrections"}, we express the components of the recursively glued certificate as succinct quadratic forms, and then in §[5.2](#ssec:rate:pf){reference-type="ref" reference="ssec:rate:pf"}, we use this to prove Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}. ## Recursive gluing {#ssec:rate:corrections} Proving Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"} requires establishing that $\lambda$ satisfies the identity [\[eq:cert:2n\]](#eq:cert:2n){reference-type="eqref" reference="eq:cert:2n"}. Ignoring for now the linear form in the function values (that term is much simpler and addressed in §[5.2](#ssec:rate:pf){reference-type="ref" reference="ssec:rate:pf"}), this amounts to showing equality of two quadratic forms. Naïvely, this requires checking equality of *all* coefficients of these quadratic forms---which is painstaking since these are quadratics in all the GD iterates $x_0, \dots, x_{2n-1}, x^*$ and their corresponding gradients $g_0, \dots, g_{2n-1}, g^*$, and moreover are defined over the ideal generated by the GD equations $x_{t+1} = x_t - \alpha_t g_t$. A key observation that removes much of this labor is that *the quadratic forms in our recursive certificate have rank at most $4$.* In fact, these quadratic forms are only in the four variables $x_{n-1}, g_{n-1}, x_{2n-1}, g_{2n-1}$. This reduces the number of coefficients to be checked from $\Theta(n^2)$ to a constant number: $10$. This observation is formalized in the following lemma, which expresses the quadratic forms via coefficient matrices as this is convenient for book-keeping.
For brevity, just as in Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}, the explicit values of these matrices are deferred to the Appendix, but the key point is that each entry can be expressed as a rational function of just $z_n, y_{2n}, z_{2n}$, see Remark [Remark 14](#rem:simple-yz){reference-type="ref" reference="rem:simple-yz"}. To isolate the quadratic form component of the co-coercivities, let $P_{ij}$ denote $Q_{ij}$ without its linear component $f_i - f_j$, i.e., $$\begin{aligned} P_{ij} := 2\langle g_j - \frac{g_i}{\kappa}, g_j - g_i \rangle - \|g_i - g_j\|^2 - \frac{1}{2(\kappa - 1)}\|x_i - x_j\|^2\,.\end{aligned}$$ **Lemma 13** (Recursive gluing via succinct quadratic forms). *Consider the setup of Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}, let $v := [x_{n-1}, g_{n-1}, x_{2n-1}, g_{2n-1}]^T$, and let $E$, $S$, $L$ be the $4 \times 4$ matrices defined in Appendix [8.4](#app:succinct){reference-type="ref" reference="app:succinct"}.* - *[Gluing error:]{.ul} $\tau_{2n} \|x_0\|^2 - \|x_{2n}\|^2 - \sum_{i,j \in \{0,\dots,2n-1,*\}} \Theta_{ij} P_{ij} = \langle E, vv^T \rangle$* - *[Sparse correction:]{.ul} $\sum_{ij} \Delta_{ij}P_{ij} = \langle S, vv^T \rangle$* - *[Rank-one correction:]{.ul} $\sum_{ij} \Xi_{ij} P_{ij} = \langle L, vv^T \rangle$* We defer the proofs of the sparse and low-rank corrections to the Appendix. However, we prove the gluing error here to provide intuition for why these quadratic forms have constant rank rather than the a priori upper bound of $\Theta(n)$. In particular, the proof shows how the low rank arises from the recursive construction of the Silver Stepsize Schedule that creates $h^{(2n)}$ from $h^{(n)}$, modulo only changing the $n$-th and $2n$-th stepsizes (each increases the rank by $2$). 
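To fix the book-keeping convention used here: $\langle M, vv^T \rangle$ denotes the trace inner product $\sum_{ij} M_{ij} v_i v_j$, which equals the quadratic form $v^T M v$. A minimal numeric illustration in pure Python (the matrix and vector entries below are arbitrary stand-ins, not the actual $E$, $S$, or $L$):

```python
# <A, B> := sum_ij A_ij * B_ij  (the trace inner product tr(A^T B));
# paired with the rank-one matrix v v^T, this equals v^T M v.
def trace_inner(A, B):
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

def outer(v):
    return [[vi * vj for vj in v] for vi in v]

def quad_form(M, v):
    n = len(v)
    return sum(v[i] * M[i][j] * v[j] for i in range(n) for j in range(n))

# Arbitrary symmetric 4x4 coefficient matrix and vector v.
M = [[2, 1, 0, 3],
     [1, 5, 4, 0],
     [0, 4, 7, 2],
     [3, 0, 2, 1]]
v = [1.0, -2.0, 3.0, 0.5]

lhs = trace_inner(M, outer(v))
rhs = quad_form(M, v)
assert abs(lhs - rhs) < 1e-12
```

This is why checking an identity between two such quadratic forms reduces to matching the entries of their coefficient matrices.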
*Proof of gluing error for Lemma [Lemma 13](#lem:succinct){reference-type="ref" reference="lem:succinct"}.* Denote by $\tilde{x}_n := x_{n-1} - b_n g_{n-1}$ and $\tilde{x}_{2n} := x_{2n-1} - b_n g_{2n-1}$ the iterates obtained by running GD with the Silver Stepsize Schedule $h^{(n)}$ from initializations $x_0$ and $x_n$, respectively. By definition of $\sigma$ as a certificate for the $n$-step rate, $$\begin{aligned} \sum_{i,j \in \{0, \dots, n-1, *\}} \sigma_{ij} P_{ij} = \tau_n \|x_0\|^2 - \|\tilde{x}_n\|^2 \qquad \text{and} \qquad \sum_{i,j \in \{n, \dots, 2n-1, *\}} \sigma_{i-n,j-n} P_{ij} = \tau_n \|x_n\|^2 - \|\tilde{x}_{2n}\|^2\,. \end{aligned}$$ Thus the desired quantity is equal to $$\begin{aligned} \Big( \tau_{2n} \|x_0\|^2 - \|x_{2n}\|^2 \Big) - \sum_{i,j \in \{0,\dots,2n-1,*\}} \Theta_{ij} P_{ij} &= \Big( \tau_{2n} \|x_0\|^2 - \|x_{2n}\|^2 \Big) - \frac{\tau_{2n}}{\tau_n}\Big( \tau_n \|x_0\|^2 - \|\tilde{x}_n\|^2 \Big) - c\Big( \tau_n \|x_n\|^2 - \|\tilde{x}_{2n}\|^2 \Big) \\ &= \Big( \frac{\tau_{2n}}{\tau_n} \|\tilde{x}_n \|^2 - c\tau_n \|x_n\|^2 \Big) + \Big( c\|\tilde{x}_{2n}\|^2 - \|x_{2n}\|^2 \Big)\,. \end{aligned}$$ Now by definition of GD, $x_n = x_{n-1} - a_{2n} g_{n-1}$, $x_{2n} = x_{2n-1} - b_{2n} g_{2n-1}$, $\tilde{x}_n = x_{n-1} - b_n g_{n-1}$, and $\tilde{x}_{2n} = x_{2n-1} - b_n g_{2n-1}$. By plugging this into the above display and expanding the square, we see that the discrepancy between the $\|\tilde{x}_n\|^2$ and $\|x_n\|^2$ terms creates a quadratic form in just $x_{n-1}, g_{n-1}$, and similarly the discrepancy between the $\|\tilde{x}_{2n}\|^2$ and $\|x_{2n}\|^2$ terms creates a quadratic form in just $x_{2n-1}, g_{2n-1}$. Tracking coefficients completes the proof. 
◻ ## Certificate verification {#ssec:rate:pf} *Proof of Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}.* The non-negativity and $*$-sparsity properties of $\lambda$ are direct from the explicit values of $\lambda$; details in Appendix [8.3](#app:explicit){reference-type="ref" reference="app:explicit"}. It therefore suffices to check the rate certificate [\[eq:cert:2n\]](#eq:cert:2n){reference-type="eqref" reference="eq:cert:2n"}. By definition of the co-coercivity $Q_{ij}$, this certificate has two components: a linear form in $\{f_i\}_{i \in \{0,\dots,2n-1,*\}}$ and a quadratic form in $\{x_i, g_i\}_{i \in \{0, \dots, 2n-1,*\}}$. We check these two components below. #### Quadratic form in iterates and gradients. {#quadratic-form-in-iterates-and-gradients. .unnumbered} By Lemma [Lemma 13](#lem:succinct){reference-type="ref" reference="lem:succinct"}, it suffices to show that $$\begin{aligned} E-S-L = 0\,, \end{aligned}$$ where $E, S, L$ are the matrices defined in Appendix [8.4](#app:succinct){reference-type="ref" reference="app:succinct"}. This amounts to checking the $10$ entries on or above the diagonal of these $4 \times 4$ matrices---elements below the diagonal need not be checked as the matrices are symmetric. By Lemma [Lemma 16](#lem:explicit){reference-type="ref" reference="lem:explicit"}, these entries can be expressed as rational functions in $z_n, y_{2n}, z_{2n}$, which are polynomially related via [\[eq:yz-defining\]](#eq:yz-defining){reference-type="eqref" reference="eq:yz-defining"}. Therefore, checking that these $10$ entries vanish amounts to checking that certain polynomials vanish modulo an associated ideal. This verification is rigorously automatable using standard techniques from computational algebraic geometry such as Gröbner bases; see e.g. [@cox2013ideals; @KreuzerRobbiano]. 
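To convey the flavor of this kind of verification in the simplest setting: for a univariate ideal $\langle g \rangle$, membership of $p$ reduces to checking that polynomial division of $p$ by $g$ leaves zero remainder; Gröbner bases generalize exactly this division step to multivariate ideals. A toy sketch in pure Python, with hypothetical polynomials unrelated to the paper's identities:

```python
from fractions import Fraction

def poly_divmod(p, g):
    """Divide polynomial p by g (coefficient lists, constant term first);
    return (quotient, remainder) with deg(remainder) < deg(g)."""
    p = [Fraction(c) for c in p]
    g = [Fraction(c) for c in g]
    assert any(g), "g must be nonzero"
    q = [Fraction(0)] * max(1, len(p) - len(g) + 1)
    while True:
        while p and p[-1] == 0:  # drop zero leading coefficients
            p.pop()
        if len(p) < len(g):
            break
        shift = len(p) - len(g)
        coef = p[-1] / g[-1]
        q[shift] = coef
        for i, gc in enumerate(g):
            p[i + shift] -= coef * gc
    return q, p

# Toy ideal generated by g(z) = z^2 - z - 1, i.e., coefficients [-1, -1, 1].
g = [-1, -1, 1]
# p(z) = z^3 - 2z - 1 = (z + 1) * g(z), so p lies in the ideal <g>.
p = [-1, -2, 0, 1]
quotient, remainder = poly_divmod(p, g)
in_ideal = not any(remainder)
assert in_ideal and quotient == [1, 1]  # quotient is z + 1
```

In the multivariate setting of the paper's identities, one replaces this single division by reduction against a Gröbner basis of the ideal, which is precisely what off-the-shelf computer algebra systems automate.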
A simple script for Mathematica (or other computer algebra systems) that verifies these identities is available at the URL given in the references [@MathematicaURL]. We emphasize that this is purely in the interest of brevity: verifying these identities can be done by hand, as it just amounts to straightforward (albeit tedious) algebraic cancellations. #### Linear form in function values. {#linear-form-in-function-values. .unnumbered} Recall that each $Q_{ij}$ contributes $2(M-m)(f_i - f_j)$. Thus, in order to show that all function values vanish in $\sum_{ij} \lambda_{ij} Q_{ij}$, it is equivalent to show that $$\begin{aligned} \sum_{j} \lambda_{ij} = \sum_j \lambda_{ji} \,, \qquad \forall i \in \{0, \dots, 2n-1,*\}\,. \label{eq:pf-cert:netflow} \end{aligned}$$ That is, the $i$-th row and column sums of $\lambda$ must match, for all $i$. We refer to these identities as *netflow constraints*. Since $\sigma$ is a valid certificate, it satisfies the netflow constraints $\sum_j \sigma_{ij} = \sum_j \sigma_{ji}$ for all $i \in \{0, \dots, n-1,*\}$. Thus, by construction of $\Theta$ from $\sigma$, it follows that $\Theta$ satisfies the netflow constraints $\sum_j \Theta_{ij} = \sum_j \Theta_{ji}$ for all $i \in \{0, \dots, 2n-1,*\}$. Therefore, in order to prove [\[eq:pf-cert:netflow\]](#eq:pf-cert:netflow){reference-type="eqref" reference="eq:pf-cert:netflow"}, it is equivalent to prove the netflow constraints for $\Xi + \Delta$; that is, $$\begin{aligned} \sum_j (\Xi_{ij} + \Delta_{ij}) = \sum_j (\Xi_{ji} + \Delta_{ji}), \qquad \forall i \in \{0, \dots, 2n-1,*\}\,. \label{eq:pf-cert:netflow-2} \end{aligned}$$ The cases $i \in \{0, \dots, n-2\}$ are trivial since on these rows and columns, $\Xi$ and $\Delta$ are identically zero. The cases $i \in \{n, \dots, 2n-2\}$ are similarly trivial because on these rows and columns, $\Delta$ is identically zero and $\sum_j (\Xi_{ji} - \Xi_{ij}) = \Xi_{n-1,i} + \Xi_{2n-1,i} + \Xi_{*,i} = 0$ by construction of $\Xi$. 
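The netflow constraints suffice because the linear form rearranges as $\sum_{ij} \lambda_{ij}(f_i - f_j) = \sum_i f_i \big( \sum_j \lambda_{ij} - \sum_j \lambda_{ji} \big)$, which vanishes whenever every row sum matches the corresponding column sum. A small numerical sketch with arbitrary (hypothetical) multipliers built from cycles:

```python
import random

random.seed(0)
n = 5

# Build a random "netflow-feasible" matrix as a sum of weighted 3-cycles:
# each cycle i -> j -> k -> i adds the same weight to one row entry and
# one column entry of every vertex it visits, so row and column sums match.
lam = [[0.0] * n for _ in range(n)]
for _ in range(20):
    i, j, k = random.sample(range(n), 3)
    w = random.random()
    lam[i][j] += w
    lam[j][k] += w
    lam[k][i] += w

for v in range(n):
    row = sum(lam[v][j] for j in range(n))
    col = sum(lam[i][v] for i in range(n))
    assert abs(row - col) < 1e-9

# Hence the linear form in arbitrary "function values" f vanishes.
f = [random.random() for _ in range(n)]
linear_form = sum(lam[i][j] * (f[i] - f[j]) for i in range(n) for j in range(n))
assert abs(linear_form) < 1e-9
```
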
It remains only to prove [\[eq:pf-cert:netflow-2\]](#eq:pf-cert:netflow-2){reference-type="eqref" reference="eq:pf-cert:netflow-2"} for $i \in \{n-1,2n-1,*\}$. By the sparsity patterns of $\Xi$ and $\Delta$, this amounts to showing $$\begin{aligned} \sum_{j \in \{n-1,2n-1,*\} \setminus \{i\}} \left(\Delta_{ij} - \Delta_{ji} \right) + \sum_{j=n}^{2n-2} \Xi_{ij} = 0, \qquad \forall i \in \{n-1,2n-1,*\}\,. \label{eq:pf-cert:netflow-simple} \end{aligned}$$ By Lemma [Lemma 16](#lem:explicit){reference-type="ref" reference="lem:explicit"}, these quantities can be expressed as rational functions in $z_n,y_{2n},z_{2n}$, which are polynomially related via [\[eq:yz-defining\]](#eq:yz-defining){reference-type="eqref" reference="eq:yz-defining"}. Therefore, checking that the three quantities vanish in [\[eq:pf-cert:netflow-simple\]](#eq:pf-cert:netflow-simple){reference-type="eqref" reference="eq:pf-cert:netflow-simple"} amounts to checking that three polynomials vanish modulo an ideal. As mentioned above, this verification is rigorously automatable using standard computational algebra techniques; see the same URL [@MathematicaURL] for a simple script implementing this computation. ◻ # Future work {#sec:future} This work removes a key stumbling block in previous analyses of optimization algorithms: we show that directly analyzing *multi-step descent* can lead to improved convergence analyses. This general principle opens up a number of directions in both the design and analysis of optimization algorithms. We list a few here. #### Beyond GD. {#beyond-gd. .unnumbered} Do these techniques extend to stochastic settings where gradients are noisy or only computed approximately? This is motivated by modern machine learning settings such as empirical risk minimization. What about constrained settings where projections are interleaved? Or other settings where one uses coordinate descent, proximal steps, etc.? What about second-order methods such as Newton or Interior Point methods? 
The modern optimization toolbox is broad, and the algorithmic opportunity of faster multi-step descent that we establish warrants re-investigating many existing algorithms that use greedy analyses. #### Beyond convexity. {#beyond-convexity. .unnumbered} While our techniques extend to the convex setting (see §[1.1.4](#sssec:intro:dis:setting){reference-type="ref" reference="sssec:intro:dis:setting"}), it is less clear if extensions to non-convex settings are also possible. In particular, can one prove accelerated rates for converging to a stationary point? Could this justify empirical phenomena observed in neural network training such as super-acceleration from cyclic stepsize schedules [@smith2019super; @smith2017cyclical]? #### Faster convergence for restricted function classes. {#faster-convergence-for-restricted-function-classes. .unnumbered} Is faster convergence possible if the objective function is more structured? One well-motivated direction here is low-dimensional objective functions. It is known that faster asymptotic convergence is possible if the dimension $d$ is fixed and the number of iterations $n \to \infty$, e.g., via cutting planes. Recent work has shown that certain momentum-based modifications to GD can also surpass standard lower bounds [@nem-yudin] for sufficiently large $n$ [@peng2023nesterov]. Do such phenomena extend to GD with dynamic stepsizes? Altschuler's thesis [@altschuler2018greed Chapter 6] proved that for univariate convex functions (or more generally, separable convex functions), GD achieves the fully accelerated rate $\Theta(\sqrt{\kappa} \log 1/\varepsilon)$ via certain (random) dynamic choices of stepsizes. Does this extend to higher dimension? What is the fundamental trade-off between $n$, $d$, and the convergence rate? #### Robustness. {#robustness. 
.unnumbered} The Silver Stepsize Schedule periodically uses extremely large stepsizes, which are overly aggressive in isolation, but effective when combined with other short steps. It is natural to wonder if this dependence between iterations makes such strategies more sensitive to model misspecification, noisy gradients, inexact arithmetic, or other considerations in practical implementations. We expect this may occur, since it does for other accelerated algorithms, see, e.g., [@devolder2014first]. #### Acknowledgements. {#acknowledgements. .unnumbered} JMA is grateful to his friends for their patience over the past seven years as he continually complained about how hard this problem was. # Deferred details for §[4](#sec:rate){reference-type="ref" reference="sec:rate"} {#app:rate} Here we prove the matching lower bounds in Theorem [Theorem 7](#thm:rate){reference-type="ref" reference="thm:rate"}. #### Acceleration regime. {#acceleration-regime. .unnumbered} Suppose $n \leqslant n^*$. Then $$\begin{aligned} \tau_n = (1 - h_i)^2 \geqslant \exp\left( - 4h_i \right) \geqslant \exp\left( - 8 \rho^i / \kappa \right) = \exp\left( - 8 n^{\log_2 \rho} / \kappa \right) \nonumber \,. \end{aligned}$$ Above, the first step is by [\[eq:rate:tau-h\]](#eq:rate:tau-h){reference-type="eqref" reference="eq:rate:tau-h"}. The third step is by the upper bound in Lemma [Lemma 8](#lem:dynamics-acceleration){reference-type="ref" reference="lem:dynamics-acceleration"} and the initialization upper bound $h_0 \leqslant 2/\kappa$ from [\[eq:pf-rate:init\]](#eq:pf-rate:init){reference-type="eqref" reference="eq:pf-rate:init"}. The fourth step is by definition of $i = \log_2 n$. It remains to argue the second step. 
This is due to the elementary inequality $1-h \geqslant\exp(-2h)$ which holds for $h \in (0,2/3)$ and is applicable since $$h_i \leqslant\rho^i h_0 \leqslant\rho^{i^*} h_0 \leqslant 2/3\,.$$ Here, we used the same upper bound in Lemma [Lemma 8](#lem:dynamics-acceleration){reference-type="ref" reference="lem:dynamics-acceleration"}, the same initialization upper bound $h_0 \leqslant 2/\kappa$, and, critically, the fact that $i \leqslant i^*$ since we are in the acceleration regime. #### Saturation regime. {#saturation-regime. .unnumbered} Suppose $n > n^*$. Then $$\begin{aligned} \tau_{2n} = \left( 1 - h_{i+1}\right)^2 \geqslant\left( (1-h_i)^2 - (1-h_i)^4 \right)^2 = \tau_n^2 (1 - \tau_n)^2 \geqslant \frac{\tau_n^2}{16}\,, \end{aligned}$$ where the first and third steps are by [\[eq:rate:tau-h\]](#eq:rate:tau-h){reference-type="eqref" reference="eq:rate:tau-h"}, the second step is by the lower bound in Lemma [Lemma 9](#lem:dynamics-saturation){reference-type="ref" reference="lem:dynamics-saturation"}, and the final step is because $\tau_n \leqslant\tau_{n^*} \leqslant\exp(-1/3)$ by the rate monotonicity in Corollary [Corollary 10](#cor:rate-monotonicity){reference-type="ref" reference="cor:rate-monotonicity"} and the upper bound we proved for $\tau_{n^*}$ in the acceleration regime. By unrolling this recursion from $n$ to $n^*$, continuing to crudely bound constants for simplicity of exposition, and then plugging in the lower bound $\tau_{n^*} \geqslant\exp( - 8)$ from the acceleration regime, we conclude $$\begin{aligned} \tau_{n} \geqslant \left( \frac{\tau_{n^*}}{16}\right)^{n/n^*} \geqslant\exp\left( - \frac{11n}{n^*} \right)\,. \nonumber \end{aligned}$$ # Deferred details for §[5](#sec:cert){reference-type="ref" reference="sec:cert"} {#app:cert} Here we provide the deferred details for the proof of Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}. See §[5](#sec:cert){reference-type="ref" reference="sec:cert"} for a proof overview. 
Three remarks on notation in this Appendix. First, after a possible translation of both $f$ and $x_0$, we assume without loss of generality that $x^* = 0$. Second, it is convenient to define the shorthand $$q_i := \frac{\alpha_i (1 - \tfrac{\alpha_{i+1}}{\kappa})}{\alpha_{i+1}}\,,$$ where $\alpha_0, \dots, \alpha_{2n-1}$ denote the stepsizes of the $2n$-step Silver Stepsize Schedule $h^{(2n)}$. Third, as is standard convention, products over the empty set such as $\prod_{t=n}^{n-1} q_t$ have value $1$. We begin by explicitly stating the correction components used in the recursive gluing. It is convenient to provide two equivalent versions of these expressions. In the first version (Definition [Definition 15](#def:cert){reference-type="ref" reference="def:cert"}), the low-rank correction $\Xi$ is explicitly defined for every entry, and the sparse correction $\Delta$ is typically defined as an explicit quantity minus the corresponding entry of the gluing component $\Theta$. Such expressions for $\Delta$ are convenient for computing the final certificate $\lambda$ because $\Theta$ cancels. In the second version (Lemma [Lemma 16](#lem:explicit){reference-type="ref" reference="lem:explicit"}), $\Xi$ is given only through its row sums (which is the only way $\Xi$ is needed for the proof of Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}), and $\Delta$ is given via explicit expressions for the subtracted entries of $\Theta$. The key benefit of this second version is that it provides explicit expressions in terms of $z_n,y_{2n},z_{2n}$ for all quantities required in the proof of Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}. For easy recall, we isolate this important fact in the following remark. **Remark 14** (Explicit rational functions of $z_n, y_{2n}, z_{2n}$). 
*While the expressions in Definition [Definition 15](#def:cert){reference-type="ref" reference="def:cert"} are somewhat involved, the key point is that all the quantities that are required in the proofs in §[5](#sec:cert){reference-type="ref" reference="sec:cert"} can be expressed as rational functions in $z_n, y_{2n}, z_{2n}$. This is Lemma [Lemma 16](#lem:explicit){reference-type="ref" reference="lem:explicit"}. (Note that $\tau_n, \tau_{2n}$ are by definition rational functions of $z_n, z_{2n}$, and also note that $\Xi$ is needed only through its row sums.) The upshot is that since $z_n, y_{2n}, z_{2n}$ are polynomially related by construction (see §[3.1](#ssec:construction:yz){reference-type="ref" reference="ssec:construction:yz"}), these expressions make the rate verification a routine and rigorously automatable algebraic exercise.* **Definition 15** (Corrections to the recursive gluing in Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}). *The "low-rank correction" $\Xi$ is defined to be zero except that for all $j \in \{n,\dots,2n-2\}$,* - *$\Xi_{n-1,j} := \phi / \prod_{t=n}^{j-1} q_t$* - *$\Xi_{2n-1,j} := r \phi/ \prod_{t=n}^{j-1} q_t$* - *$\Xi_{*,j} := -(1 + r) \phi / \prod_{t=n}^{j-1} q_t$* *The "sparse correction" $\Delta$ is zero everywhere except for the following entries:* - *$\Delta_{n-1,2n-1} := \phi \frac{\kappa-2}{\kappa(1-z_n)}$* - *$\Delta_{2n-1,n-1} := \phi (\kappa-2) \frac{y_{2n}}{1-z_n}$* - *$\Delta_{*,n-1} := \tau_{2n}\frac{1 + \kappa y_{2n}}{1 - z_n} - \frac{\tau_{2n}}{\tau_n} \sigma_{*,n-1}$* - *$\Delta_{*,2n-1} :=\frac{1+(\kappa-1)z_{2n} + \kappa z_{2n}^2}{(1+z_{2n})^2} -c \sigma_{*,n-1}$* - *$\Delta_{n-1,*} := - \frac{\tau_{2n}}{\tau_n} \sigma_{n-1,*}$* - *$\Delta_{2n-1,*} := \frac{2\kappa z_{2n}}{(1+z_{2n})^2} - c \sigma_{n-1,*}$* *In the above, we used as shorthand the following special values:* - *$r := 1/\prod_{t=0}^{n-1} q_t$* - *$c := \frac{\tau_{2n}}{\tau_n} \left[ r + (1 + r) \left( \frac{z_{2n} + 
z_n}{z_{2n} - z_n} \right) \right]$* - *$\phi := \tau_{2n} \frac{\kappa}{\kappa - 2} \left(\frac{z_{2n} + z_n}{z_{2n} - z_n}\right)$* We now state the alternative expressions discussed in Remark [Remark 14](#rem:simple-yz){reference-type="ref" reference="rem:simple-yz"}. **Lemma 16** (Alternative expressions in terms of $z_n,y_{2n},z_{2n}$). *The following identities hold:* - *$\Delta_{n-1,2n-1} = \frac{\tau_{2n}}{1-z_n} \left( \frac{z_{2n} + z_n}{z_{2n} - z_n}\right)$* - *$\Delta_{2n-1,n-1} =\frac{ \kappa y_{2n }\tau_{2n}}{1-z_n}\left( \frac{z_{2n} + z_n}{z_{2n} - z_n}\right)$* - *$\Delta_{*,n-1} = \frac{\tau_{2n}}{1-z_n}(1 + \kappa y_{2n}) - \tau_{2n} \frac{ 1 + (\kappa- 1)z_n + \kappa z_n^2 }{(1-z_n)^2}$* - *$\Delta_{*,2n-1} = \tau_{2n} \frac{1 + (\kappa- 1) z_{2n} + \kappa z_{2n}^2 }{(1-z_{2n})^2} - c \tau_n \frac{1 + (\kappa- 1) z_n + \kappa z_n^2}{(1 - z_n)^2}$* - *$\Delta_{n-1,*} = - \frac{2 \kappa z_n\tau_{2n}}{(1-z_n)^2}$* - *$\Delta_{2n-1,*} = \frac{2 \kappa z_{2n} \tau_{2n}}{(1 - z_{2n})^2} - c \frac{2\kappa z_n \tau_n}{(1-z_n)^2}$* - *$\sum_{j=n}^{2n-2} \Xi_{n-1,j} = \frac{1}{r} \sum_{j=n}^{2n-2} \Xi_{2n-1,j} = -\frac{1}{1+r} \sum_{j=n}^{2n-2} \Xi_{*,j} = \tau_{2n} \left( \frac{\kappa z_n - 1}{1 - z_n} \right) \left( \frac{z_{2n} + z_n}{z_{2n} - z_n} \right)$ and is $0$ for $n=1,2$* - *$r = \frac{1-z_n}{1-z_{2n}}$* These equivalent expressions make it straightforward to prove the deferred parts of Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}. *Proof of non-negativity and $*$-sparsity in Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}.* [Checking $*$-sparsity.]{.ul} Since $\sigma$ satisfies $*$-sparsity, it follows immediately that $\lambda_{i,*} = 0$ for all $i < 2n-1$ except possibly $i = n-1$. For this remaining case, the definition of the sparse correction $\Delta$ ensures $\lambda_{n-1,*} = 0$. 
[Checking non-negativity.]{.ul} First observe that $q_i, r, c, \phi$ are all non-negative by the bounds $z_n \leqslant z_{2n} \leqslant 1$ in Lemma [Lemma 4](#lem:yz-basic){reference-type="ref" reference="lem:yz-basic"} and the bounds $1 \leqslant a_n,b_n \leqslant\kappa$ in Lemma [Lemma 5](#lem:ab-basic){reference-type="ref" reference="lem:ab-basic"}. Next, note that all $\Theta_{ij}$ are non-negative since they are a positive multiple of some entry of $\sigma$, which is non-negative by assumption of $\sigma$ being a valid certificate. Thus we need only check non-negativity of $\lambda_{ij} = \Theta_{ij} + \Xi_{ij} + \Delta_{ij}$ on the entries where either the correction $\Xi$ or $\Delta$ is non-zero. This non-negativity is clear from the construction for all of the sparsely-corrected entries $\lambda_{ij}$ where $i \neq j \in \{n-1,2n-1,*\}$ as well as nearly all of the rank-one-corrected entries $\lambda_{ij}$, namely for all $i \in \{n-1,2n-1\}$ and $j \in \{n,\dots,2n-2\}$. It remains only to prove non-negativity of $\lambda_{*,j}$ for $j \in \{n,\dots,2n-2\}$. Since $\Delta_{*,j} = 0$, this amounts to showing that $\Theta_{*,j} \geqslant -\Xi_{*,j}$, i.e., $c \sigma_{*,j-n} \geqslant(1+r) \phi / \prod_{t=n}^{j-1} q_t$. This follows by plugging in the explicit formulas for $\sigma_{*,j}$ in Lemma [Lemma 17](#lem:identities-opt){reference-type="ref" reference="lem:identities-opt"} and the definitions of $r$, $c$, $\phi$ in Definition [Definition 15](#def:cert){reference-type="ref" reference="def:cert"}. ◻ The rest of this Appendix section is organized as follows. In §[8.1](#app:helper-opt){reference-type="ref" reference="app:helper-opt"} and §[8.2](#app:helper-qi){reference-type="ref" reference="app:helper-qi"}, we provide two helper lemmas. The former explicitly computes the values of all co-coercivity multipliers to/from optimum for the $n$-step certificate $\sigma$. The latter provides useful identities involving sums and products of $q_i$. 
We then use these two helper lemmas in §[8.3](#app:explicit){reference-type="ref" reference="app:explicit"} to prove the alternative expressions in Lemma [Lemma 16](#lem:explicit){reference-type="ref" reference="lem:explicit"}, and in §[8.4](#app:succinct){reference-type="ref" reference="app:succinct"} to prove the succinct quadratic form representations in Lemma [Lemma 13](#lem:succinct){reference-type="ref" reference="lem:succinct"}. ## Helper lemma: co-coercivities involving $x^*$ {#app:helper-opt} Here we compute all co-coercivity multipliers $\sigma_{t,*}$ and $\sigma_{*,t}$ between GD iterates $x_t$ and $x^*$. **Lemma 17** (Co-coercivity multipliers to/from $x^*$). *Consider the setup of Theorem [Theorem 12](#thm:cert){reference-type="ref" reference="thm:cert"}. Then:* - *$\sigma_{j,*} = 0$ for all $j \in \{0, \dots, n-2\}$* - *$\sigma_{n-1,*} = \frac{2\kappa z_{n}}{(1+z_{n})^2}$* - *$\sigma_{*,j} = \frac{( \prod_{t=j}^{n-3} q_t ) (1-z_n)}{(1+z_n)^2}$ for all $j \in \{0, \dots, n-3\}$* - *$\sigma_{*,n-2} = \frac{1-z_n}{(1+z_n)^2}$* - *$\sigma_{*,n-1} = \frac{ 1 + (\kappa - 1) z_{n} + \kappa z_{n}^2 }{(1 + z_{n})^2}$* *Proof.* That $\sigma_{j,*} = 0$ for all $j < n-1$ is trivially due to the assumption that $\sigma$ satisfies the $*$-sparsity property (Definition [Definition 11](#def:*sparsity){reference-type="ref" reference="def:*sparsity"}). The content of the lemma is solving for the other multipliers. To this end, recall that the $n$-step certificate [\[eq:cert:n\]](#eq:cert:n){reference-type="eqref" reference="eq:cert:n"} establishes that the two quadratic forms $\tau_n \|x_0\|^2 - \|x_n\|^2$ and $\sum_{i\neq j \in \{0, \dots, n-1, *\}} \sigma_{ij} Q_{ij}$ are equal modulo the ideal generated by the equations $x_{t+1} = x_t - \alpha_t g_t$ for all $t \in \{0, \dots, n-1\}$. 
Thus if we expand both these quadratic forms by replacing, for every $t > 0$, the iterate $x_t$ with $x_0 - \sum_{s=0}^{t-1} \alpha_s g_s$, then the resulting two quadratic forms (now in the variables $\{x_0, g_0, \dots, g_{n-1}\}$) are equal, and in particular the coefficient of any term must match. We prove this lemma by solving the equations that come from matching the coefficients for the terms $\|x_0\|^2$ and $\langle x_0, g_i \rangle$ for $i \in \{0, \dots, n-1\}$. By expanding the definition of the co-coercivity, it is evident that only the co-coercivities of the form $Q_{t,*}$ and $Q_{*,t}$ contribute coefficients for these terms. In particular, matching the coefficients for the term $\|x_0\|^2$ gives the equation $$\begin{aligned} \tau_n - 1 = -\frac{1}{\kappa} \left( \sigma_{n-1,*} + \sum_{j=0}^{n-1} \sigma_{*,j} \right)\,, \label{eq:pf-opt:x0} \end{aligned}$$ matching the coefficients for the term $\langle x_0, g_{n-1} \rangle$ gives the equation $$\begin{aligned} \alpha_{n-1} = \frac{1}{\kappa} \sigma_{n-1,*} + \sigma_{*,n-1}\,, \label{eq:pf-opt:x0gn-1} \end{aligned}$$ and matching the coefficients for the other terms $\langle x_0, g_t \rangle$ gives the equations $$\begin{aligned} \alpha_t = \frac{\alpha_t}{\kappa} \left( \sigma_{n-1,*} + \sum_{i=t+1}^{n-1} \sigma_{*,i} \right) + \sigma_{*,t} \,, \qquad \forall t \in \{0, \dots, n-2\}\,. \label{eq:pf-opt:x0gt} \end{aligned}$$ Intuitively, these equations can be back-solved since they are (essentially) already in triangular form. Below we detail a simple way to do this by hand. First, we obtain the claimed expression for $\sigma_{n-1,*}$ by combining the netflow equation $\sigma_{n-1,*} = \sum_{j=0}^{n-1} \sigma_{*,j}$ with [\[eq:pf-opt:x0\]](#eq:pf-opt:x0){reference-type="eqref" reference="eq:pf-opt:x0"}, re-arranging, and plugging in the definition of $\tau_n = (\frac{1-z_n}{1+z_n})^2$. 
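Concretely, the netflow equation turns [\[eq:pf-opt:x0\]](#eq:pf-opt:x0){reference-type="eqref" reference="eq:pf-opt:x0"} into $\tau_n - 1 = -\tfrac{2}{\kappa}\sigma_{n-1,*}$, so $\sigma_{n-1,*} = \tfrac{\kappa}{2}(1-\tau_n) = \tfrac{2\kappa z_n}{(1+z_n)^2}$. This one-line algebra can be confirmed in exact arithmetic (the sampled values of $\kappa$ and $z_n$ below are arbitrary rationals):

```python
from fractions import Fraction

# Check kappa/2 * (1 - tau_n) == 2*kappa*z/(1+z)^2 where
# tau_n = ((1-z)/(1+z))^2, in exact rational arithmetic.
for kappa in (Fraction(3), Fraction(10), Fraction(7, 2)):
    for z in (Fraction(1, 3), Fraction(2, 5), Fraction(9, 10)):
        tau_n = ((1 - z) / (1 + z)) ** 2
        sigma_from_netflow = kappa / 2 * (1 - tau_n)
        sigma_claimed = 2 * kappa * z / (1 + z) ** 2
        assert sigma_from_netflow == sigma_claimed
```
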
Next, we obtain the claimed expression for $\sigma_{*,n-1}$ by plugging the now-proved value of $\sigma_{n-1,*}$ into [\[eq:pf-opt:x0gn-1\]](#eq:pf-opt:x0gn-1){reference-type="eqref" reference="eq:pf-opt:x0gn-1"} and using the fact that the Silver Stepsize $\alpha_{n-1} = b_n = (1 + \kappa z_n) / (1+z_n)$, see §[3](#sec:construction){reference-type="ref" reference="sec:construction"}. Next, we obtain the claimed expression for $\sigma_{*,n-2}$ by plugging the now-proved values for $\sigma_{*,n-1}$ and $\sigma_{n-1,*}$ into [\[eq:pf-opt:x0gt\]](#eq:pf-opt:x0gt){reference-type="eqref" reference="eq:pf-opt:x0gt"}, for $t=n-2$, and using the fact that the Silver Stepsize $\alpha_{n-2} = \kappa/(\kappa -1)$, see §[3](#sec:construction){reference-type="ref" reference="sec:construction"}. To solve for the remaining variables $\{\sigma_{*,j}\}_{j \in \{0, \dots, n-3\}}$, we could continue back-solving by plugging into [\[eq:pf-opt:x0gt\]](#eq:pf-opt:x0gt){reference-type="eqref" reference="eq:pf-opt:x0gt"}. However, there is a simpler approach: by subtracting the $j$-th equation [\[eq:pf-opt:x0gt\]](#eq:pf-opt:x0gt){reference-type="eqref" reference="eq:pf-opt:x0gt"} from the $(j+1)$-th equation [\[eq:pf-opt:x0gt\]](#eq:pf-opt:x0gt){reference-type="eqref" reference="eq:pf-opt:x0gt"}, the partial sums telescope. After re-arranging, this gives the recurrence $$\begin{aligned} \sigma_{*,j} = q_{j} \sigma_{*,j+1}\,, \qquad \forall j \in \{0, \dots, n-3\}\,. \end{aligned}$$ By plugging in the now-proved value of $\sigma_{*,n-2}$ as the base case for this backwards recurrence, we obtain the claimed expression for $\sigma_{*,j}$ for all $j \in \{0, \dots, n-3\}$. ◻ ## Helper lemma: identities involving $q_i$ {#app:helper-qi} Here we provide useful identities involving $q_i$. This enables expressing $r = \prod_{t=0}^{n-1} 1/q_t$ and sums of the form $\sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} 1/q_t$ in terms of the Normalized Silver Stepsizes $z_n$ and $z_{2n}$. 
We will use this to prove the final two items of Lemma [Lemma 16](#lem:explicit){reference-type="ref" reference="lem:explicit"} in Appendix [8.3](#app:explicit){reference-type="ref" reference="app:explicit"}. Note that for simplicity, we state these identities for $n \geqslant 4$ since the low-rank component $\Xi$ (which is what this lemma is used to compute) is identically zero for small $n$. **Lemma 18** (Identities involving products of $q_i$). *For $n \geqslant 4$:* - *$\prod_{t=0}^{n-3} \frac{1}{q_t} = \frac{\kappa - 2}{\kappa (1 - z_n)}$* - *$\prod_{t=0}^{n-1} \frac{1}{q_t} = \frac{1-z_n}{1-z_{2n}} = r$* - *$\sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} = \frac{(\kappa - 2)(\kappa z_n - 1)}{\kappa(1-z_n)}$* The proof exploits the following elementary identity. We remark that this identity has a probabilistic interpretation, although we do not make explicit use of it. This interpretation is for the case $c_t \in [0,1]$; then $c_t$ can be viewed as the probability that the $t$-th coin is heads, hence $\sum_{t=0}^T c_t \prod_{i=t+1}^T ( 1 - c_i) = 1 - \prod_{t=0}^T (1 - c_t)$ are two expressions for the probability that at least one coin is heads. Below, recall the standard convention that the product over an empty set is $1$. **Lemma 19**. *For any non-negative integer $T$ and any real numbers $c_0, \dots, c_T$, $$\begin{aligned} \sum_{t=0}^T c_t \prod_{i=t+1}^T ( 1 - c_i) = 1 - \prod_{t=0}^T (1 - c_t)\,. \end{aligned}$$* *Proof.* We prove by induction. The base case $T=0$ is trivial. For the inductive step, assume true for $T$; then the claim holds for $T+1$ because $\sum_{t=0}^{T+1} c_t \prod_{i=t+1}^{T+1} ( 1 - c_i) = c_{T+1} + (1 - c_{T+1}) ( \sum_{t=0}^T c_t \prod_{i=t+1}^T (1 - c_i)) = c_{T+1} + (1 - c_{T+1}) ( 1 - \prod_{t=0}^T (1 - c_t)) = 1 - \prod_{t=0}^{T+1} (1 - c_t)$. 
◻ *Proof of Lemma [Lemma 18](#lem:identities-qi){reference-type="ref" reference="lem:identities-qi"}.* Since $\sigma$ is a valid certificate, it satisfies the netflow constraint $\sum_j \sigma_{*,j} = \sum_j \sigma_{j,*}$. By using the values for these entries (Lemma [Lemma 17](#lem:identities-opt){reference-type="ref" reference="lem:identities-opt"}) and re-arranging, we obtain $$\begin{aligned} \sum_{j=0}^{n-2} \prod_{t=j}^{n-3} q_t = \kappa z_n - 1\,. \label{eq:pf-lem-qi-1} \end{aligned}$$ Next, we compute this quantity in a different way. By definition of $q_i = \frac{\alpha_i (1 - \alpha_{i+1}/\kappa)}{\alpha_{i+1}}$, multiplying consecutive $q_i$ yields a telescoping product, namely $$\begin{aligned} \prod_{t=j}^{n-3} q_t = \frac{\alpha_j}{\alpha_{n-2}} \prod_{t=j+1}^{n-2} \left( 1 - \frac{\alpha_t}{\kappa} \right)\,. \label{eq:pf-lem-qi-2} \end{aligned}$$ In particular, this implies that $$\begin{aligned} \sum_{j=0}^{n-2} \prod_{t=j}^{n-3} q_t = \frac{\kappa}{\alpha_{n-2}} \,\Bigg( \sum_{j=0}^{n-2} \frac{\alpha_j}{\kappa} \prod_{t=j+1}^{n-2} \left(1 - \frac{\alpha_t}{\kappa} \right) \Bigg) = (\kappa - 1)\, \Bigg( 1 - \prod_{t=0}^{n-2} \left(1 - \frac{\alpha_t}{\kappa} \right) \Bigg)\,. \label{eq:pf-lem-qi-3} \end{aligned}$$ Above, the second step uses Lemma [Lemma 19](#lem:disjunction){reference-type="ref" reference="lem:disjunction"} and the fact that $\alpha_{n-2} = a_2 = \kappa/(\kappa - 1)$ by construction of the Silver Stepsize Schedule (see §[3](#sec:construction){reference-type="ref" reference="sec:construction"}). Now by combining [\[eq:pf-lem-qi-1\]](#eq:pf-lem-qi-1){reference-type="eqref" reference="eq:pf-lem-qi-1"} and [\[eq:pf-lem-qi-3\]](#eq:pf-lem-qi-3){reference-type="eqref" reference="eq:pf-lem-qi-3"}, we obtain $$\begin{aligned} \prod_{t=0}^{n-2} \left( 1 - \frac{\alpha_t}{\kappa} \right) = \frac{\kappa\,(1-z_n)}{\kappa - 1}\,. 
\end{aligned}$$ By using again the telescoping property [\[eq:pf-lem-qi-2\]](#eq:pf-lem-qi-2){reference-type="eqref" reference="eq:pf-lem-qi-2"} and the fact that $\alpha_0 = \alpha_{n-2}$ by construction of the Silver Stepsize Schedule (see §[3](#sec:construction){reference-type="ref" reference="sec:construction"}), we conclude that $$\begin{aligned} \prod_{t=0}^{n-3} q_t = \frac{\alpha_0}{\alpha_{n-2}} \prod_{t=1}^{n-2} \left( 1 - \frac{\alpha_t}{\kappa} \right) = \frac{ \prod_{t=0}^{n-2} (1 - \alpha_t/\kappa)}{1 - \alpha_0/\kappa} = \frac{\kappa\,(1 - z_n)}{\kappa - 2}\,. \label{eq:pf-lem-qi-4} \end{aligned}$$ This proves the first claim. For the second claim, observe that $$\begin{aligned} q_{n-1} q_{n-2} = \left( 1 - \frac{\alpha_n}{\kappa} \right) \left( 1 - \frac{\alpha_{n-1}}{\kappa} \right) = \frac{\kappa- 2}{\kappa- 1} \cdot \frac{\kappa-1}{\kappa(1+y_{2n})} = \frac{\kappa- 2}{\kappa (1 + y_{2n})}\,, \end{aligned}$$ where above we have simplified by using the facts that $\alpha_{n-2} = \alpha_n = a_2 = \kappa/(\kappa - 1)$ and $\alpha_{n-1} = a_{2n}$ by construction of the Silver Stepsize Schedule (see §[3](#sec:construction){reference-type="ref" reference="sec:construction"}), as well as the re-parameterization of $y_{2n}$ in terms of $a_{2n} = \alpha_{n-1}$. Multiplying the above two displays yields $$\begin{aligned} \prod_{t=0}^{n-1} q_t = \frac{1-z_n}{1+y_{2n}}\,. \end{aligned}$$ The proof of the second claim is then complete by using the identity $(1 - z_n)^2 = (1-z_{2n})(1+y_{2n})$ which follows from the recurrence construction of the $y_n,z_n$ sequences in §[3](#sec:construction){reference-type="ref" reference="sec:construction"}. 
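The telescoping property [\[eq:pf-lem-qi-2\]](#eq:pf-lem-qi-2){reference-type="eqref" reference="eq:pf-lem-qi-2"} used throughout this proof is an identity in the stepsizes alone; a quick exact-arithmetic sanity check with arbitrary (hypothetical) rational stepsizes, which are not actual Silver stepsizes:

```python
from fractions import Fraction

kappa = Fraction(10)
# Arbitrary rational stand-ins for alpha_0, ..., alpha_{n-2}, with n = 7.
alpha = [Fraction(3, 2), Fraction(2), Fraction(5, 3),
         Fraction(7, 4), Fraction(4, 3), Fraction(9, 5)]
n = len(alpha) + 1  # alpha has indices 0..n-2

def q(t):
    # q_t := alpha_t * (1 - alpha_{t+1}/kappa) / alpha_{t+1}
    return alpha[t] * (1 - alpha[t + 1] / kappa) / alpha[t + 1]

# Verify for every j:
#   prod_{t=j}^{n-3} q_t == (alpha_j/alpha_{n-2}) * prod_{t=j+1}^{n-2} (1 - alpha_t/kappa)
for j in range(n - 2):
    lhs = Fraction(1)
    for t in range(j, n - 2):
        lhs *= q(t)
    rhs = alpha[j] / alpha[n - 2]
    for t in range(j + 1, n - 1):
        rhs *= 1 - alpha[t] / kappa
    assert lhs == rhs
```
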
Finally, for the third claim, divide [\[eq:pf-lem-qi-1\]](#eq:pf-lem-qi-1){reference-type="eqref" reference="eq:pf-lem-qi-1"} by [\[eq:pf-lem-qi-4\]](#eq:pf-lem-qi-4){reference-type="eqref" reference="eq:pf-lem-qi-4"} to conclude the desired identity $$\begin{aligned} \sum_{j=0}^{n-2} \prod_{t=0}^{j-1} \frac{1}{q_t} = \frac{\sum_{j=0}^{n-2} \prod_{t=j}^{n-3} q_t}{\prod_{t=0}^{n-3} q_t} = \frac{(\kappa - 2)(\kappa z_n - 1)}{\kappa (1 - z_n)}\,. \end{aligned}$$ ◻ ## Recursive gluing as a rational function of $z_n, y_{2n}, z_{2n}$ {#app:explicit} *Proof of Lemma [Lemma 16](#lem:explicit){reference-type="ref" reference="lem:explicit"}.* The expressions for $\Delta$ are immediate from Definition [Definition 15](#def:cert){reference-type="ref" reference="def:cert"} and Lemma [Lemma 17](#lem:identities-opt){reference-type="ref" reference="lem:identities-opt"}. The expressions for the row sums of $\Xi$ are immediate by definition of $\Xi$ and the final identity in Lemma [Lemma 18](#lem:identities-qi){reference-type="ref" reference="lem:identities-qi"}. The expression for $r$ follows from Lemma [Lemma 18](#lem:identities-qi){reference-type="ref" reference="lem:identities-qi"}. ◻ ## Recursive gluing as a succinct quadratic form {#app:succinct} Here we provide details for Lemma [Lemma 13](#lem:succinct){reference-type="ref" reference="lem:succinct"} and its proof. The definitions of the coefficient matrices $E$, $S$, and $L$ are as follows. Note that these matrices are symmetric; for shorthand, we write a tilde for their lower-triangular entries. 
$$\begin{aligned} \small E := \begin{bmatrix} \frac{\tau_{2n}}{\tau_n} - c\tau_n & c \tau_n a_{2n} - \frac{\tau_{2n}}{\tau_n} b_n & 0 & 0 \\ \tilde & \frac{\tau_{2n}}{\tau_n} b_n^2 - c\tau_n a_{2n}^2 & 0 & 0 \\ \tilde & \tilde & c-1 & b_{2n} - c b_n \\ \tilde & \tilde & \tilde & c b_n^2 - b_{2n}^2 \\ \end{bmatrix} \nonumber\end{aligned}$$ and $$\begin{aligned} \small S := \begin{bmatrix} -\frac{1}{\kappa}\sum\limits_{t \neq n-1} (\Delta_{n-1,t} + \Delta_{t,n-1}) & \sum\limits_{t \neq n-1} (\frac{\Delta_{n-1,t}}{\kappa} + \Delta_{t,n-1}) & \frac{1}{\kappa} (\Delta_{n-1,2n-1} + \Delta_{2n-1,n-1}) & -\Delta_{n-1,2n-1}- \frac{\Delta_{2n-1,n-1}}{\kappa} \\ \tilde & - \sum\limits_{t \neq n-1} (\Delta_{n-1,t} + \Delta_{t,n-1}) & - \Delta_{2n-1,n-1} - \frac{\Delta_{n-1,2n-1}}{\kappa} & \Delta_{n-1,2n-1} + \Delta_{2n-1,n-1} \\ \tilde & \tilde & -\frac{1}{\kappa} \sum\limits_{t \neq 2n-1} ( \Delta_{2n-1,t} + \Delta_{t,2n-1}) & \sum\limits_{t \neq 2n-1} (\frac{\Delta_{2n-1,t}}{\kappa} + \Delta_{t,2n-1}) \\ \tilde & \tilde & \tilde & - \sum\limits_{t \neq 2n-1} (\Delta_{2n-1,t} + \Delta_{t,2n-1}) \end{bmatrix} \nonumber\end{aligned}$$ and $L := \phi \left( L^{(n-1)} + r L^{(2n-1)} \right)$, where $$\begin{aligned} \small L^{(n-1)} &:= \frac{\kappa-2}{\kappa(1 - z_n)} \begin{bmatrix} z_n - 2 + \tfrac{1}{\kappa} & a_{2n} (1 - z_n) + \tfrac{\kappa - 1}{\kappa} & 1 - \tfrac{1}{\kappa} & 0 \\ \tilde & 2a_{2n} (z_n - 1) - (\kappa z_n - 1) & -1 + \tfrac{1}{\kappa} & 0 \\ \tilde & \tilde & 0 & 0 \\ \tilde & \tilde & \tilde & 0 \\ \end{bmatrix} \nonumber \\ L^{(2n-1)} &:= \frac{\kappa-2}{\kappa(1 - z_n)} \begin{bmatrix} 0 & 0 & -1+z_n & 1-z_n \\ \tilde & 0 & a_{2n} (1-z_n) & -a_{2n} (1-z_n) \\ \tilde & \tilde & 2 - z_n - \tfrac{1}{\kappa} & -1+z_n \\ \tilde & \tilde & \tilde & 1 - \kappa z_n \end{bmatrix} \nonumber\end{aligned}$$ and is zero for $n=1,2$. 
*Proof of remaining parts of Lemma [Lemma 13](#lem:succinct){reference-type="ref" reference="lem:succinct"}.* The identity for the sparse correction is immediate by plugging in the definition of $P_{ij}$, simplifying $x^* = g^* = 0$, and collecting terms. It remains to show the identity for the low-rank correction. Suppose $n \geqslant 4$, else $\Xi$ is identically zero and the claim is trivial. We claim that $$\begin{aligned} \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \left( P_{n-1,j} - P_{*,j} \right) &= \langle L^{(n-1)}, vv^T \rangle \label{eq:lem-lr:n-1} \\ \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \left( P_{2n-1,j} - P_{*,j} \right) &= \langle L^{(2n-1)}, vv^T \rangle \,. \label{eq:lem-lr:2n-1} \end{aligned}$$ These identities suffice because by the definition of $\Xi$ and $L$, we then have $$\sum_{ij} \Xi_{ij}P_{ij} = \phi \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \left( \left( P_{n-1,j} - P_{*,j} \right) - r \left( P_{2n-1,j} - P_{*,j} \right) \right) = \langle L, vv^T \rangle\,.$$ We prove [\[eq:lem-lr:n-1\]](#eq:lem-lr:n-1){reference-type="eqref" reference="eq:lem-lr:n-1"}; the proof of [\[eq:lem-lr:2n-1\]](#eq:lem-lr:2n-1){reference-type="eqref" reference="eq:lem-lr:2n-1"} is entirely analogous and thus omitted for brevity. By expanding the squares in the definition of $P_{ij}$, simplifying $x^* = g^* = 0$, and collecting terms, $$\begin{aligned} P_{n-1,j} - P_{*,j} = - \frac{1}{\kappa} \|x_{n-1}\|^2 - \|g_{n-1}\|^2 + \frac{2}{\kappa} \langle x_{n-1}, g_{n-1} \rangle + 2 \langle x_{n-1} - g_{n-1} , \frac{x_j}{\kappa} - g_j\rangle\,. 
\end{aligned}$$ By expanding $x_{j} = x_{n-1} - \sum_{i=n-1}^{j-1} \alpha_i g_i$ using the definition of GD, and then collecting terms, $$\begin{aligned} P_{n-1,j} - P_{*,j} = \frac{1}{\kappa} \|x_{n-1}\|^2 + \left( \frac{2\alpha_{n-1}}{\kappa} - 1\right) \|g_{n-1}\|^2 - \frac{2\alpha_{n-1}}{\kappa} \langle x_{n-1}, g_{n-1} \rangle - 2 \langle x_{n-1} - g_{n-1} , g_j + \frac{1}{\kappa} \sum_{i=n}^{j-1} \alpha_i g_i\rangle \,. \end{aligned}$$ Since the first three of these four summands are independent of $j$, we conclude $$\begin{aligned} \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \left( P_{n-1,j} - P_{*,j} \right) &= \left( \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \right) \Bigg( \frac{1}{\kappa} \|x_{n-1}\|^2 + \left( \frac{2\alpha_{n-1}}{\kappa} - 1\right) \|g_{n-1}\|^2 - \frac{2\alpha_{n-1}}{\kappa} \langle x_{n-1}, g_{n-1} \rangle \Bigg) \nonumber \\ &- 2 \left\langle x_{n-1} - g_{n-1} , \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \Big( g_j + \frac{1} {\kappa} \sum_{i=n}^{j-1} \alpha_i g_i \Big) \right\rangle \,. \label{eq:pf-lr:1} \end{aligned}$$ For the first term, use Lemma [Lemma 18](#lem:identities-qi){reference-type="ref" reference="lem:identities-qi"} and the fact that $q_j = q_{j-n}$ for $j \in \{n, \dots, 2n-2\}$ to obtain $$\begin{aligned} \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} = \sum_{j=0}^{n-2} \prod_{t=0}^{j-1} \frac{1}{q_t} = \frac{(\kappa - 2)(\kappa z_n-1)}{\kappa (1 - z_n)}\,. 
\label{eq:pf-lr:2} \end{aligned}$$ For the second term, observe that $$\begin{aligned} \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \left( g_j + \frac{1} {\kappa} \sum_{i=n}^{j-1} \alpha_i g_i \right) &= \frac{1}{\alpha_n} \sum_{j=n}^{2n-2} \alpha_j \prod_{t=n+1}^{j} \frac{1}{(1 - \alpha_t/\kappa)} \left( g_j + \frac{1}{\kappa} \sum_{i=n}^{j-1} \alpha_i g_i \right) \nonumber \\ &= \frac{1}{\alpha_n} \sum_{i=n}^{2n-2} \alpha_i g_i \, \Bigg[ \frac{1}{\prod_{t=n+1}^i (1 - \alpha_t/\kappa)} + \sum_{j=i+1}^{2n-2} \frac{\alpha_j}{\kappa \prod_{t=n+1}^j (1 - \alpha_t/\kappa)} \Bigg] \nonumber \\ &= \frac{1}{\alpha_n \prod_{t=n+1}^{2n-2}(1 - \alpha_t/\kappa)} \sum_{i=n}^{2n-2} \alpha_i g_i \, \Bigg[ \prod_{t=i+1}^{2n-2}(1 - \alpha_t/\kappa) + \sum_{j=i+1}^{2n-2} \frac{\alpha_j}{\kappa} \prod_{t=j+1}^{2n-2} (1 - \alpha_t/\kappa) \Bigg] \nonumber \\ &= \frac{1}{\alpha_n \prod_{t=n+1}^{2n-2}(1 - \alpha_t/\kappa)} \sum_{i=n}^{2n-2} \alpha_i g_i \nonumber \\ &= \frac{1}{\alpha_n \prod_{t=n+1}^{2n-2}(1 - \alpha_t/\kappa)} \Bigg[ x_{n-1} - x_{2n-1} - \alpha_{n-1} g_{n-1} \Bigg] \nonumber \\&= \frac{(\kappa-1)(\kappa-2)}{\kappa^2(1 - z_n)} \Bigg[ x_{n-1} - x_{2n-1} - \alpha_{n-1} g_{n-1} \Bigg]\,.\label{eq:pf-lr:3} \end{aligned}$$ Above, the first step is by definition of $q_t$ and telescoping. The second step is by re-arranging sums. The third step is by factoring out the product. The fourth step is because $\sum_{j=i+1}^{2n-2} \tfrac{\alpha_j}{\kappa} \prod_{t=j+1}^{2n-2} (1 - \tfrac{\alpha_t}{\kappa}) = 1 - \prod_{t=i+1}^{2n-2} ( 1 - \tfrac{\alpha_t}{\kappa})$ by Lemma [Lemma 19](#lem:disjunction){reference-type="ref" reference="lem:disjunction"}. The fifth step is by definition of GD. The final step uses $\alpha_n \prod_{t=n+1}^{2n-2}(1 - \tfrac{\alpha_t}{\kappa}) = \alpha_{2n-2} \prod_{t=n}^{2n-3} q_t$, Lemma [Lemma 18](#lem:identities-qi){reference-type="ref" reference="lem:identities-qi"}, and the fact that $\alpha_{2n-2} = a_2 = \kappa/(\kappa-1)$. 
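The telescoping fact invoked in the fourth step (Lemma [Lemma 19](#lem:disjunction){reference-type="ref" reference="lem:disjunction"}) holds for an arbitrary sequence, which makes it easy to sanity-check numerically; $\kappa$ and the $\alpha_t$ below are placeholder values, unrelated to the Silver schedule:

```python
import math
import random

# Check: sum_{j=i+1}^{N} (alpha_j/kappa) * prod_{t=j+1}^{N} (1 - alpha_t/kappa)
#      = 1 - prod_{t=i+1}^{N} (1 - alpha_t/kappa)
# for every i. Each summand equals prod_{t=j+1}^N - prod_{t=j}^N, so the sum telescopes.
random.seed(1)
kappa = 10.0
alpha = [random.uniform(0.1, 5.0) for _ in range(12)]  # arbitrary placeholder sequence
N = len(alpha) - 1

for i in range(-1, N):
    lhs = sum(
        alpha[j] / kappa * math.prod(1 - alpha[t] / kappa for t in range(j + 1, N + 1))
        for j in range(i + 1, N + 1)
    )
    rhs = 1 - math.prod(1 - alpha[t] / kappa for t in range(i + 1, N + 1))
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```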
By combining [\[eq:pf-lr:1\]](#eq:pf-lr:1){reference-type="eqref" reference="eq:pf-lr:1"}, [\[eq:pf-lr:2\]](#eq:pf-lr:2){reference-type="eqref" reference="eq:pf-lr:2"}, and [\[eq:pf-lr:3\]](#eq:pf-lr:3){reference-type="eqref" reference="eq:pf-lr:3"}, we conclude that the desired quantity is equal to $$\begin{aligned} \sum_{j=n}^{2n-2} \prod_{t=n}^{j-1} \frac{1}{q_t} \left( P_{n-1,j} - P_{*,j} \right) = \frac{\kappa - 2}{\kappa^2(1-z_n)} \Bigg[ &(\kappa z_n - 1) \Big( \|x_{n-1}\|^2 - 2\alpha_{n-1} \langle x_{n-1}, g_{n-1} \rangle + (2\alpha_{n-1} - \kappa) \|g_{n-1}\|^2 \Big) \nonumber \\&- 2(\kappa-1) \langle x_{n-1} - g_{n-1}, x_{n-1} - x_{2n-1} - \alpha_{n-1} g_{n-1} \rangle \Bigg]\,. \end{aligned}$$ Expanding the inner product and canceling terms completes the proof of [\[eq:lem-lr:n-1\]](#eq:lem-lr:n-1){reference-type="eqref" reference="eq:lem-lr:n-1"}. ◻ [^1]: In convex optimization, we typically view the initialization $x_0$ as part of the problem instance rather than a parameter choice, since $x_0 = 0$ without loss of generality after a possible translation of the objective function $f$. [^2]: In the non-smooth setting, it is classically known that acceleration is impossible, and moreover GD achieves the minimax-optimal convergence rate with simple monotonically decaying stepsize schedules like $\alpha_t \asymp 1/\sqrt{t}$ [@nesterov-survey]. [^3]: Recall that this means $f$ is sandwiched between quadratic lower and upper bounds of curvature $1$ and $\kappa$, respectively, i.e., $\frac{1}{2}\|v\|^2 \leqslant f(x+v) - f(x) - \langle v, \nabla f(x) \rangle \leqslant\frac{\kappa}{2}\|v\|^2$ for all $x, v \in \mathbb{R}^d$. For intuition, this is equivalent to the local curvature bound $I_d \preceq \nabla^2 f(x) \preceq \kappa I_d$ under the assumption of twice-differentiability (not required by our results). 
[^4]: Note that $\tau_{2n} \leqslant\tau_n^2$; intuitively this amounts to the statement that the optimal $2n$-step schedule is at least as good as repeating the optimal $n$-step schedule twice. We call this inequality *rate monotonicity*, see §[4](#sec:rate){reference-type="ref" reference="sec:rate"}. The statement $\tau_{2n} \approx \tau_n^2$ therefore states that this bound is nearly tight. [^5]: This is not just a failure of analysis techniques: even for mild condition numbers like $\kappa = 10$, using the $2$-step Chebyshev Schedule in either order makes GD divergent (i.e., the contraction rate is larger than $1$). We are not aware of a reference for this, but it can be shown e.g., by using the SDP-analysis framework of [@pesto]. [^6]: We remark that the order may change for different progress measures, see [@altschuler2018greed Chapter 8.2]. [^7]: This is purely an analysis device and does not change the GD algorithm (which neither knows the optimum nor queries function values). Including function values simplifies the interpolability conditions [@pesto] and thus our analysis. [^8]: We call such stepsizes non-trivial since if a stepsize is outside this interval, then clipping it to the interval improves convergence. This can be proved by noticing that, of the four hard functions in this proof, all but the fourth apply if $\alpha \geqslant 1/M$, and all but the third apply if $\alpha \leqslant 1/m$. By optimizing the resulting analogous bounds [\[eq:pf-2step:opt\]](#eq:pf-2step:opt){reference-type="eqref" reference="eq:pf-2step:opt"} for the cases $\alpha < 1/M$ and $\alpha > 1/m$, it follows that clipping to the interval $[1/M,1/m]$ leads to faster convergence. [^9]: The rate optimality identity [\[eq:pf-2step:opt\]](#eq:pf-2step:opt){reference-type="eqref" reference="eq:pf-2step:opt"} actually holds with equality over all non-trivial stepsizes $\alpha,\beta \in [1/M,1/m]$, although this is unnecessary for our purposes. 
[^10]: One can also use Theorem [Theorem 3](#thm:2step:convex){reference-type="ref" reference="thm:2step:convex"} to take $n=2$ as the base case. Then this paper's proof is fully self-contained.
--- abstract: | Let $I$ be the edge ideal of a Cohen-Macaulay tree of dimension $d$ over a polynomial ring $S = \mathrm{k}[x_1,\ldots,x_{d},y_1,\ldots,y_d]$. We prove that for all $t \ge 1$, $$\mathop{\mathrm{depth}}(S/I^t) = \max \{d -t + 1, 1 \}.$$ address: - Thai Nguyen University of Sciences, Tan Thinh Ward, Thai Nguyen City, Thai Nguyen, Vietnam - Hong Duc University, 565 Quang Trung Street, Dong Ve Ward, Thanh Hoa, Vietnam - Institute of Mathematics, VAST, 18 Hoang Quoc Viet, Hanoi, Vietnam author: - Nguyen Thu Hang - Truong Thi Hien - Thanh Vu title: Depth of powers of edge ideals of Cohen-Macaulay trees --- # Introduction {#sect_intro} Let $S = \mathrm{k}[x_1, \ldots, x_n]$ be a standard graded polynomial ring over a field $\mathrm{k}$. For a homogeneous ideal $I \subset S$, we denote by $\mathop{\mathrm{dstab}}(I)$ the *index of depth stability* of $I$, i.e., the smallest positive natural number $k$ such that $\mathop{\mathrm{depth}}S/I^\ell = \mathop{\mathrm{depth}}S/I^k$ for all $\ell \ge k$. Such a number exists due to the result of Brodmann [@Br]. Let $G$ be a simple graph on the vertex set $V(G) = \{x_1,\ldots,x_n\}$ and edge set $E(G) \subseteq V(G) \times V(G)$. The edge ideal of $G$, denoted by $I(G)$, is the squarefree monomial ideal generated by $x_i x_j$ where $\{x_i,x_j\}$ is an edge of $G$. In a fundamental paper [@T], Trung found a combinatorial formula for $\mathop{\mathrm{dstab}}(I(G))$ for large classes of graphs, including unicyclic graphs. In particular, when $G$ is a tree, $\mathop{\mathrm{dstab}}(I(G)) = n - \epsilon_0(G)$ where $\epsilon_0(G)$ is the number of leaves of $G$. Though we know the limit depth and its index of depth stability, intermediate values for depth of powers of edge ideals were only known in very few cases, e.g., for path graphs by Balanescu and Cimpoeas [@BC] and cycles and starlike trees by Minh, Trung, and the last author [@MTV1]. 
While the depth of powers of edge ideals of general trees is very mysterious [@MTV1], in this paper, we compute the depth of powers of edge ideals of all Cohen-Macaulay trees. In [@V], Villarreal classified all Cohen-Macaulay trees. It says that a tree $G$ is Cohen-Macaulay if and only if it is the whisker graph of another tree $T$. In other words, $V(G) = V(T) \cup \{y_1, \ldots, y_d\}$ and $E(G) = E(T) \cup \{\{x_i,y_i\} \mid i = 1, \ldots, d\}$, where $T$ is an arbitrary tree on $d$ vertices. While the structure of $T$ could be very complicated, surprisingly, the depth of powers of $G$ does not depend on $T$. Namely, **Theorem 1**. *Let $I(G) \subset S = \mathrm{k}[x_1, \ldots,x_{d}, y_1, \ldots, y_d]$ be the edge ideal of a Cohen-Macaulay tree $G$ of dimension $d$. Then for all $t \ge 1$, $$\mathop{\mathrm{depth}}S/I(G)^t = \max \{d-t+1, 1\}.$$* We now outline the ideas to carry out this computation. First, we show a general upper bound for the depth of powers of trees. For that purpose, we introduce some notation. Let $\mathbf e= \{e_1, \ldots, e_t\}$ be a set of $t$ distinct edges of $G$. We consider $\mathbf e$ itself as a subgraph of $G$ with edge set $E(\mathbf e) = \{e_1, \ldots, e_t\}$. We denote by $N[\mathbf e]$ the closed neighbourhood of $\mathbf e$ in $G$ (see Subsection [2.2](#subsection_graphs){reference-type="ref" reference="subsection_graphs"} for the definition of closed neighbourhood). Furthermore, $G[\mathbf e]$ denotes the induced subgraph of $G$ on $N[\mathbf e]$ and $G[\bar {\mathbf e}]$ denotes the induced subgraph of $G$ on $V(G) \setminus N[\mathbf e]$. **Lemma 2**. *Let $\mathbf e= \{e_1, \ldots, e_t\}$ be a set of $t$ distinct edges of a simple graph $G$. Assume that $G[\mathbf e]$ is bipartite, $G[\bar {\mathbf e}]$ is weakly chordal, and $\mathbf e$, viewed as a subgraph of $G$, is connected. Let $R$ be the polynomial ring over the vertex set of $G[\bar{\mathbf e}]$. 
Then $$\mathop{\mathrm{depth}}S/I(G)^{t+1} \le 1 + \mathop{\mathrm{depth}}R/I(G[\bar {\mathbf e}]).$$* For the lower bound, we prove the following general bound for the depth of powers of edge ideals of whisker trees. **Lemma 3**. *Let $T$ be a forest on $n$ vertices. Let $U \subset V(T)$ be a subset of vertices of $T$, and $G$ be the new forest obtained by adding a whisker to each vertex in $U$. Namely, $V(G) = V(T) \cup \{y_i \mid x_i \in U\}$ and $E(G) = E(T) \cup \{ \{x_i, y_i\} \mid x_i \in U\}$. Then for all $t \ge 1$, we have $$\mathop{\mathrm{depth}}(R/I(G)^t) \ge |U| - t + 1,$$ where $R = \mathrm{k}[x_1,\ldots,x_n,y_j \mid x_j \in U]$ is the polynomial ring over the variables corresponding to vertices of $G$.* Our method allows us to compute the depth of powers of the edge ideal of a Cohen-Macaulay bipartite graph constructed by Banerjee and Mukundan [@BM]. **Theorem 4**. *Let $S = \mathrm{k}[x_1,\ldots,x_d,y_1,\ldots,y_d]$. For a fixed integer $j$ such that $1 \le j \le d$, let $G_{d,j}$ be the bipartite graph whose edge ideal is $I(G_{d,j}) = (x_i y_i, x_1 y_i, x_k y_j \mid 1 \le i \le d, 1 \le k \le j)$. Then $$\mathop{\mathrm{depth}}S/I(G_{d,j})^s = \max(1, d - j - s + 3),$$ for all $s \ge 2$.* We structure the paper as follows. In Section [2](#sec_pre){reference-type="ref" reference="sec_pre"}, we set up the notation and provide some background. In Section [3](#sec_depth_power_CM_trees){reference-type="ref" reference="sec_depth_power_CM_trees"}, we prove Theorem [Theorem 1](#depth_power_CM_tree){reference-type="ref" reference="depth_power_CM_tree"}. In Section [4](#sec_depth_power_CM_bipartite_graph){reference-type="ref" reference="sec_depth_power_CM_bipartite_graph"}, we prove Theorem [Theorem 4](#thm_depth_BM_ideals){reference-type="ref" reference="thm_depth_BM_ideals"}. # Preliminaries {#sec_pre} In this section, we recall some definitions and properties concerning the depth of monomial ideals and edge ideals of graphs. 
The interested reader is referred to [@BH; @D] for more details. Throughout this section, we denote by $S = \mathrm{k}[x_1,\ldots, x_n]$ a standard graded polynomial ring over a field $\mathrm{k}$. Let $\mathfrak m= (x_1,\ldots, x_n)$ be the maximal homogeneous ideal of $S$. ## Depth For a finitely generated graded $S$-module $L$, the depth of $L$ is defined to be $$\mathop{\mathrm{depth}}(L) = \min\{i \mid H_{\mathfrak m}^i(L) \ne 0\},$$ where $H^{i}_{\mathfrak m}(L)$ denotes the $i$-th local cohomology module of $L$ with respect to $\mathfrak m$. **Definition 5**. A finitely generated graded $S$-module $L$ is called Cohen-Macaulay if $\mathop{\mathrm{depth}}L = \dim L$. A homogeneous ideal $I \subseteq S$ is said to be Cohen-Macaulay if $S/I$ is a Cohen-Macaulay $S$-module. The following two results about the depth of monomial ideals will be used frequently in the sequel. The first one is [@R Corollary 1.3]. The second one is [@CHHKTT Theorem 4.3]. **Lemma 6**. *Let $I$ be a monomial ideal and $f$ a monomial such that $f \notin I$. Then $$\mathop{\mathrm{depth}}S/I \le \mathop{\mathrm{depth}}S/(I:f).$$* **Lemma 7**. *Let $I$ be a monomial ideal and $f$ a monomial. Then $$\mathop{\mathrm{depth}}S/I \in \{\mathop{\mathrm{depth}}(S/(I:f)), \mathop{\mathrm{depth}}(S/(I,f))\}.$$* In ideals of the form $I+(f)$ and $I:f$, some variables will be part of the minimal generators, and some will not appear in any of the minimal generators. A variable that does not divide any minimal generator of a monomial ideal $J$ will be called a free variable of $J$. We have **Lemma 8**. *Assume that $I = J + (x_a, \ldots, x_b)$ and $x_{b+1}, \ldots, x_n$ are free variables of $I$, where $J$ is a monomial ideal in $R = \mathrm{k}[x_1,\ldots,x_{a-1}]$. Then $$\mathop{\mathrm{depth}}S/I = \mathop{\mathrm{depth}}R/J + (n-b).$$* ## Graphs and their edge ideals {#subsection_graphs} Let $G$ denote a finite simple graph over the vertex set $V(G) = \{x_1,\ldots,x_n\}$ and the edge set $E(G)$. 
For a vertex $x\in V(G)$, the set of neighbours of $x$ is $N_G(x)=\{y\in V(G) \mid \{x,y\}\in E(G)\}$. The closed neighbourhood of $x$ is $N_G[x]=N_G(x)\cup\{x\}$. A vertex $x$ is called a leaf if it has a unique neighbour. An edge that contains a leaf is called a leaf edge. For a subset $U \subset V(G)$, $N_G(U) = \cup (N_G(x) \mid x \in U)$ and $N_G[U] = \cup (N_G[x] \mid x \in U)$. When it is clear from the context, we drop the subscript $G$ from the notation $N_G$. Let $\mathbf e= \{e_1, \ldots, e_t\}$ be a set of $t$ distinct edges of $G$. We denote by $N[\mathbf e]$ the closed neighbourhood of $\mathbf e$ in $G$, $$N[\mathbf e] = \cup (N[x] \mid x \text{ is a vertex of } e_j \text{ for some } j = 1, \ldots, t).$$ Furthermore, $G[\mathbf e]$ denotes the induced subgraph of $G$ on $N[\mathbf e]$ and $G[\bar {\mathbf e}]$ denotes the induced subgraph of $G$ on $V(G) \setminus N[\mathbf e]$. A graph $H$ is called a subgraph of $G$ if $V(H) \subseteq V(G)$ and $E(H) \subseteq E(G)$. Let $U \subset V(G)$ be a subset of vertices of $G$. The induced subgraph of $G$ on $U$, denoted by $G[U]$, is the graph such that $V(G[U]) = U$ and for any vertices $u,v \in U$, $\{u,v\} \in E(G[U])$ if and only if $\{u,v\} \in E(G)$. A set $\mathbf e= \{e_1, \ldots, e_t\}$ of $t$ distinct edges of $G$ is an induced matching if $e_i \cap e_j = \emptyset$ for all $i \neq j \in \{1, \ldots, t\}$ and $\mathbf e$ is an induced subgraph of $G$. A tree is a connected acyclic graph. A cycle of length $m$ in $G$ is a sequence of distinct vertices $x_1,\ldots,x_m$ such that $\{x_1,x_2\}, \ldots, \{x_{m-1},x_m\}, \{x_m,x_1\}$ are edges of $G$. A graph $G$ is bipartite if its vertex set has a decomposition $V(G) = U \cup V$ such that $E(G) \subset U \times V$. It is called a complete bipartite graph if $E(G) = U \times V$, denoted by $K_{U, V}$. A graph $G$ is called weakly chordal if $G$ and its complement do not contain an induced cycle of length at least $5$. 
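To make this notation concrete, here is a small illustrative sketch; the path-with-whiskers example and all names below are our own choices, not taken from the paper:

```python
# Illustrative sketch of N[e], G[e-bar], and the edge-ideal generators
# for a small whisker tree (hypothetical example, not from the paper).

def closed_nbhd(edges, e_set):
    """N[e]: the vertices of the edges in e_set together with all their neighbours."""
    verts = {v for e in e_set for v in e}
    nbhd = set(verts)
    for u, v in edges:
        if u in verts:
            nbhd.add(v)
        if v in verts:
            nbhd.add(u)
    return nbhd

def induced_edges(edges, vertex_set):
    """Edge set of the induced subgraph on vertex_set."""
    return [e for e in edges if set(e) <= vertex_set]

# T = path x1-x2-x3-x4, whiskered at every vertex (so G is a Cohen-Macaulay tree)
T = [("x1", "x2"), ("x2", "x3"), ("x3", "x4")]
G = T + [(f"x{i}", f"y{i}") for i in range(1, 5)]
V = {v for e in G for v in e}

e_set = [("x1", "x2")]                 # a single edge e
N_e = closed_nbhd(G, e_set)            # N[e] = {x1, x2, x3, y1, y2}
G_e_bar = induced_edges(G, V - N_e)    # edges of G[e-bar]: only the whisker x4-y4

# squarefree generators x_i * x_j of the edge ideal I(G)
gens = sorted("*".join(sorted(e)) for e in G)
```

Note that $G[\bar{\mathbf e}]$ here consists of the whisker edge $\{x_4,y_4\}$ together with the isolated vertex $y_3$, the kind of leftover isolated whisker vertices that appear later in the proofs.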
The edge ideal of $G$ is defined to be $$I(G)=(x_ix_j \mid \{x_i,x_j\}\in E(G))\subseteq S.$$ A graph $G$ is called Cohen-Macaulay if $I(G)$ is Cohen-Macaulay. For simplicity, we often write $x_i \in G$ (resp. $x_ix_j \in G$) instead of $x_i \in V(G)$ (resp. $\{x_i,x_j\} \in E(G)$). By abuse of notation, we also call $x_i x_j \in I(G)$ an edge of $G$. We have the following result [@Mo Lemma 2.10]. **Lemma 9**. *Suppose that $G$ is a graph and $xy$ is a leaf edge of $G$. Then for all $t \ge 2$, we have $$I(G)^t : (xy) = I(G)^{t-1}.$$* As a consequence, we have the following well-known result. **Lemma 10**. *Let $G$ be a graph. Assume that $G$ has a leaf edge. Then the sequence $\mathop{\mathrm{depth}}S/I(G)^t$ is non-increasing.* *Proof.* Let $xy$ be a leaf edge of $G$. By Lemma [Lemma 9](#lem_colon_leaf){reference-type="ref" reference="lem_colon_leaf"} and Lemma [Lemma 6](#lem_upperbound){reference-type="ref" reference="lem_upperbound"}, we have $$\mathop{\mathrm{depth}}S/I(G)^t \le \mathop{\mathrm{depth}}S/(I(G)^t : xy) = \mathop{\mathrm{depth}}S/I(G)^{t-1},$$ for all $t \ge 2$. The conclusion follows. ◻ ## Bipartite completion and a colon ideal Let $H$ be a connected bipartite graph with the partition $V(H) = U \cup V$. The bipartite completion of $H$, denoted by $\widetilde {H}$, is the complete bipartite graph $K_{U, V}$. We have **Lemma 11**. *Let $\mathbf e= \{e_1, \ldots, e_t\}$ be a set of $t$ distinct edges of $G$. Assume that $\mathbf e$, viewed as a subgraph of $G$, is connected and $G[\mathbf e]$ is bipartite. Then $$I(G)^{t+1}:(e_1 \cdots e_t) = I(H),$$ where $H$ is a graph obtained from $G$ by bipartite completing $G[\mathbf e]$, i.e., $V(H) = V(G)$ and $E(H) = E(G) \cup E(\widetilde {G[\mathbf e]})$.* *Proof.* We prove by induction on $t$. The base case $t = 1$ follows from [@B Theorem 6.7]. Now, assume that the statement holds for $t-1$. There always exists an edge $e_j \in \{e_1, \ldots, e_t\}$ so that $\mathbf e\setminus e_j$ is connected. 
We may assume that $j = t$. Let $\mathbf e' = \{e_1, \ldots, e_{t-1}\}$. By induction, we have $$I(G)^t : (e_1 \cdots e_{t-1}) = I(H'),$$ where $E(H') = E(G) \cup E(\widetilde{G[\mathbf e']})$. By [@B Theorem 6.7], $I(G)^{t+1} : (e_1 \cdots e_{t}) \supseteq I(H')^2 : e_t$ (see also [@MTV1 Lemma 2.9]). Let $N_G[\mathbf e] = U \cup V$ be the bipartition of $N_G[\mathbf e]$. Assume that $e_t = uv$ with $u\in U$ and $v\in V$. Since $\mathbf e$ is connected, $\mathop{\mathrm{supp}}\mathbf e' \cap \{u,v\} \neq \emptyset$. We may assume that $u \in \mathop{\mathrm{supp}}\mathbf e'$. Hence, $$v \in N_G[\mathbf e'] \text{ and } N_G[\mathbf e] = N_G[\mathbf e'] \cup N_G(v).$$ In particular, $N_G[\mathbf e'] = U' \cup V$ and $U = U' \cup N_G(v)$. Since the induced subgraph of $H'$ on $N_G[\mathbf e']$ is the complete bipartite graph $K_{U',V}$, we have $N_{H'}(u) = V$ and $N_{H'}(v) = U$. Thus, $I(H')^2:e_t \supseteq I(K_{U,V})$. The conclusion follows from [@B Theorem 6.7] as any new edge of $H$ must have support in $N[\mathbf e]$. ◻ **Remark 12**. The notion of bipartite completion of a bipartite subgraph of a simple graph $G$ was introduced and studied in [@MTV2]. It plays an important role in the study of depth of symbolic powers of $I(G)$. ## Projective dimension of edge ideals of weakly chordal graphs We note that the colon ideals of powers of edge ideals of trees by products of edges are edge ideals of weakly chordal graphs. Their projective dimension can be computed via the notion of strongly disjoint families of complete bipartite subgraphs, introduced by Kimura [@K]. For a graph $G$, we consider all families of (non-induced) subgraphs $B_1, \ldots, B_g$ of $G$ such that 1. each $B_i$ is a complete bipartite graph for $1 \le i \le g$, 2. the graphs $B_1, \ldots, B_g$ have pairwise disjoint vertex sets, 3. there exists an induced matching $e_1,\ldots, e_g$ of $G$ with $e_i \in E(B_i)$ for $1\le i \le g$. 
Such a family is termed a strongly disjoint family of complete bipartite subgraphs. We define $$d(G) = \max ( \sum_1^g |V (B_i)| - g),$$ where the maximum is taken over all the strongly disjoint families of complete bipartite subgraphs $B_1, \ldots, B_g$ of $G$. We have the following [@NV1 Theorem 7.7]. **Theorem 13**. *Let $G$ be a weakly chordal graph with at least one edge. Then $$\mathop{\mathrm{pd}}(S/I(G)) = d(G).$$* # Depth of powers of edge ideals of Cohen-Macaulay trees {#sec_depth_power_CM_trees} In this section, we compute the depth of powers of Cohen-Macaulay trees. First, we prove a general bound for the depth of powers of edge ideals of graphs. Recall that for a set $\mathbf e= \{e_1, \ldots, e_t\}$ of edges of $G$, $G[\mathbf e]$ denotes the induced subgraph of $G$ on $N[\mathbf e]$ and $G[\bar {\mathbf e}]$ denotes the induced subgraph of $G$ on $V(G) \setminus N[\mathbf e]$. **Lemma 14**. *Let $\mathbf e= \{e_1, \ldots, e_t\}$ be a set of $t$ distinct edges of a simple graph $G$. Assume that $G[\mathbf e]$ is bipartite, $G[\bar {\mathbf e}]$ is weakly chordal, and $\mathbf e$, viewed as a subgraph of $G$, is connected. Let $R$ be the polynomial ring over the vertex set of $G[\bar{\mathbf e}]$. Then $$\mathop{\mathrm{depth}}S/I(G)^{t+1} \le 1 + \mathop{\mathrm{depth}}R/I(G[\bar {\mathbf e}]).$$* *Proof.* Let $K = I(G[\bar{\mathbf e}])$ and $f = e_1 \cdots e_t$. By Lemma [Lemma 11](#lem_colon_product_edges){reference-type="ref" reference="lem_colon_product_edges"}, $I(G)^{t+1} : f = I(H)$ where $E(H) = E(G) \cup E(\widetilde {G[\mathbf e]})$. If $G[\bar{\mathbf e}]$ has no edges then $K$ is the zero ideal and $\mathop{\mathrm{depth}}R/K = |V(G[\bar{\mathbf e}])|$. The conclusion then follows from [@K Theorem 1.1]. Now assume that $G[\bar{\mathbf e}]$ has at least one edge. 
Since $G[\bar{\mathbf e}]$ is weakly chordal, by Theorem [Theorem 13](#thm_pd_weakly_chordal){reference-type="ref" reference="thm_pd_weakly_chordal"}, there exists a strongly disjoint family $\mathcal B= \{B_1, \ldots, B_g\}$ of complete bipartite subgraphs of $G[\bar{\mathbf e}]$ such that $\mathop{\mathrm{pd}}(R/K) = \sum_{i=1}^g |V(B_i)| - g$. Then $\mathcal B\cup \widetilde{G[\mathbf e]}$ is a strongly disjoint family of complete bipartite subgraphs of $H$, because $e_1 \in \widetilde{G[\mathbf e]}$, together with the induced matching edges $e_i' \in B_i$, forms an induced matching of $H$. By [@K Theorem 1.1], $$\mathop{\mathrm{pd}}S/I(H) \ge \mathop{\mathrm{pd}}(R/K) + |V(\widetilde{G[\mathbf e]})| - 1.$$ By the Auslander-Buchsbaum formula, we deduce that $\mathop{\mathrm{depth}}S/I(H) \le \mathop{\mathrm{depth}}R/K + 1$. The conclusion then follows from Lemma [Lemma 6](#lem_upperbound){reference-type="ref" reference="lem_upperbound"}. ◻ As a corollary, we deduce an upper bound for the depth of powers of edge ideals of Cohen-Macaulay trees. By the result of Villarreal [@V], we may assume that $$V(G) = \{x_1, \ldots, x_d,y_1, \ldots, y_d\} \text{ and } E(G) = E(T) \cup \{\{x_1,y_1\},\ldots, \{x_d,y_d\}\},$$ where $T$ is the induced subgraph of $G$ on $\{x_1, \ldots, x_d\}$, which is a tree on $d$ vertices. **Corollary 15**. *Let $G$ be a Cohen-Macaulay tree of dimension $d$. Then for any $t$ such that $2 \le t \le d-2$, $$\mathop{\mathrm{depth}}S/I(G)^{t+1} \le d-t.$$* *Proof.* Let $T_1$ be any connected subtree of $T$ with $|E(T_1)| = t$. We may assume that $V(T_1) = \{x_1,\ldots,x_{t+1}\}$ and $$N_G[T_1] = \{x_1,\ldots,x_a,y_1,\ldots,y_{t+1}\}$$ for some $a \ge t+1$. Let $H_2$ be the induced subgraph of $G$ on $V(G) \setminus N_G[V(T_1)] = \{ x_{a+1},\ldots,x_d,y_{t+2},\ldots,y_d\}$. 
Then $H_2$ is the whisker graph of $T_2$, the induced subgraph of $T$ on $\{x_{a+1},\ldots,x_d\}$, together with the isolated vertices $y_{t+2},\ldots,y_{a}$. By [@V], $I(H_2)$ is Cohen-Macaulay of dimension $d - t - 1$. By Lemma [Lemma 14](#lem_upperbound_power_tree){reference-type="ref" reference="lem_upperbound_power_tree"}, we have $$\mathop{\mathrm{depth}}S/I(G)^{t+1} \le 1 + \mathop{\mathrm{depth}}R/I(H_2) = d-t.$$ The conclusion follows. ◻ We now prove the following lower bound for the depth of powers of edge ideals of general whisker trees. **Lemma 16**. *Let $T$ be a forest on $n$ vertices. Let $U \subset V(T)$ be a subset of vertices of $T$, and $G$ be the new forest obtained by adding a whisker to each vertex in $U$. Namely, $V(G) = V(T) \cup \{y_i \mid x_i \in U\}$ and $E(G) = E(T) \cup \{ \{x_i, y_i\} \mid x_i \in U\}$. Then for all $t \ge 1$, we have $$\mathop{\mathrm{depth}}(R/I(G)^t) \ge |U| - t + 1,$$ where $R = \mathrm{k}[x_1,\ldots,x_n,y_j \mid x_j \in U]$ is the polynomial ring over the variables corresponding to vertices of $G$.* *Proof.* By [@NV2 Theorem 1.1], we may assume that $T$ is a tree on $n$ vertices. Furthermore, we may assume that $U = \{x_1, \ldots, x_d\}$. Hence, $V(G) = \{x_1, \ldots, x_n, y_1, \ldots, y_d\}$ and $E(G) = E(T) \cup \{x_i y_i \mid i = 1, \ldots, d\}$. Since $G$ is a forest, $\mathop{\mathrm{depth}}R/I(G)^t \ge 1$ for all $t \ge 1$. Hence, the statement holds automatically when $t \ge d$. Thus, we may assume that $1 \le t \le d$. We prove, by induction on the triple $(n,d,t)$ ordered lexicographically, the following: $$\label{eq_3_1} \mathop{\mathrm{depth}}R/ (I(G)^t + I(H)) \ge d - t + 1,$$ for all $t \le d$ and all subgraphs $H$ of $G$ with $E(H) \subseteq \{x_1y_1, \ldots, x_dy_d\}$. For ease of reading, we divide the proof into several steps. The base case $d = 1$ is clear as $\mathfrak m$, the maximal homogeneous ideal of $R$, is not an associated prime of $I(G)^t + I(H)$. The base case $t = 1$. 
The statement follows from [@MTV1 Theorem 4.1], as a maximal independent set of $G$ must contain either $x_i$ or $y_i$ for each $i = 1, \ldots, d$. **Step 3.** Reduction to the case $E(H) = \{x_1y_1,\ldots,x_dy_d\}$. Assume that $E(H)$ is a proper subset of $\{x_1y_1, \ldots, x_dy_d\}$, say $x_dy_d \notin E(H)$. Let $J = I(G)^t + I(H)$. By Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, Lemma [Lemma 9](#lem_colon_leaf){reference-type="ref" reference="lem_colon_leaf"}, and induction, it suffices to prove that $\mathop{\mathrm{depth}}R/(J + (x_dy_d)) \ge d- t + 1$. Thus, we may assume that $E(H) = \{x_1y_1, \ldots, x_dy_d\}$. Eq. [\[eq_3\_1\]](#eq_3_1){reference-type="eqref" reference="eq_3_1"} becomes $$\label{eq_3_2} \mathop{\mathrm{depth}}R/(I(T)^t + (x_1y_1,\ldots,x_dy_d)) \ge d - t + 1.$$ **Step 4.** Induction step. Assume that Eq. [\[eq_3\_2\]](#eq_3_2){reference-type="eqref" reference="eq_3_2"} holds for all tuples $(n',d',t')$ strictly smaller than $(n,d,t)$ in the lexicographic order. Let $J = I(T)^t + (x_1y_1, \ldots, x_dy_d)$. Let $u$ be a leaf of $T$ and $v$ the unique neighbour of $u$ in $T$. There are two cases. **Case 1.** $u \notin \{x_1, \ldots, x_d\}$. Since $J + (u)$ is of the same form but in a smaller ring, by induction, we have $\mathop{\mathrm{depth}}R/ (J + ( u)) \ge d - t + 1$. By Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, it suffices to prove that $$\mathop{\mathrm{depth}}R/(J:u) \ge d - t + 1.$$ But $K = J:u = v I(T)^{t-1} + (x_1y_1,\ldots,x_dy_d)$. Since $\mathop{\mathrm{depth}}R/ (K + (v)) \ge d$, by Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, it suffices to prove that $\mathop{\mathrm{depth}}R/(K:v) \ge d - t + 1$. There are two subcases. Subcase 1.a. $v \notin \{x_1, \ldots, x_d\}$. Then $K:v = I(T)^{t-1} + (x_1y_1,\ldots, x_dy_d)$. Hence, by induction on $t$, we have $\mathop{\mathrm{depth}}R/(K:v) \ge d - (t-1) + 1$. Subcase 1.b.
$v \in \{x_1, \ldots, x_d\}$, say $v = x_d$. Then $$K:v = I(T)^{t-1} + (x_1y_1, \ldots, x_{d-1}y_{d-1}) + (y_d).$$ Hence, by induction, we have $\mathop{\mathrm{depth}}R/(K:v) \ge (d-1) - (t-1) + 1 = d-t+1$. **Case 2.** $u \in \{x_1, \ldots, x_d\}$, say $u = x_d$. Let $T_1$ be the induced subtree of $T$ on $V(T) \setminus \{u\}$. Then $J + (u) = I(T_1)^t + (x_1y_1,\ldots, x_{d-1}y_{d-1}) + (u)$. In particular, $y_d$ is a free variable. Hence, $\mathop{\mathrm{depth}}R/(J + (u)) = 1 + \mathop{\mathrm{depth}}R_1 / (I(T_1)^t + (x_1y_1,\ldots,x_{d-1}y_{d-1})),$ where $R_1 = \mathrm{k}[x_i,y_j \mid i,j \neq d]$. By induction, we have $\mathop{\mathrm{depth}}R/ (J + (u)) \ge 1 + (d-1) - t + 1 = d - t + 1$. By Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, it suffices to prove that $\mathop{\mathrm{depth}}R/(J:u) \ge d - t + 1$. We have $$J:u = v I(T)^{t-1} + (x_1y_1,\ldots,x_{d-1}y_{d-1}) + (y_d).$$ Let $R_1 = \mathrm{k}[x_1,\ldots, x_n,y_1,\ldots,y_{d-1}]$ and $K = v I(T)^{t-1} + (x_1y_1,\ldots,x_{d-1}y_{d-1})$. Then, $\mathop{\mathrm{depth}}R/K = \mathop{\mathrm{depth}}R_1/K$. By Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, it suffices to prove that $\mathop{\mathrm{depth}}R_1/(K:v) \ge d - t + 1$. Again, we have two subcases. Subcase 2.a. $v \notin \{x_1, \ldots, x_{d-1}\}$. Then $K:v = I(T)^{t-1} + (x_1y_1,\ldots,x_{d-1}y_{d-1})$. Hence, by induction, $\mathop{\mathrm{depth}}R_1/(K:v) \ge (d-1) - (t-1) + 1$. Subcase 2.b. $v \in \{x_1, \ldots, x_{d-1}\}$, say $v = x_{d-1}$. Then, $K:v = I(T)^{t-1} + (x_1y_1,\ldots, x_{d-2}y_{d-2}) + (y_{d-1}).$ Let $T'$ be the induced subgraph of $T$ on $V(T) \setminus \{x_d\}$. If $t = 2$, then $$K:v = I(T') + (x_1y_1,\ldots,x_{d-2}y_{d-2},x_{d-1}x_d) + (y_{d-1}).$$ The conclusion follows from [@MTV1 Theorem 4.1]. Now, assume that $t \ge 3$. Let $R_2 = \mathrm{k}[x_1,\ldots,x_n,y_1,\ldots,y_{d-2}]$ and $L = I(T)^{t-1} + (x_1y_1,\ldots,x_{d-2}y_{d-2})$.
We need to prove that $\mathop{\mathrm{depth}}R_2/L \ge d - t + 1$. Note that $I(T) = I(T') + (x_{d-1}x_d)$ and $x_d$ can be considered as a whisker at $x_{d-1}$. By Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, we have $$\mathop{\mathrm{depth}}R_2/L \in \{\mathop{\mathrm{depth}}(R_2/L:(x_{d-1}x_d)), \mathop{\mathrm{depth}}(R_2/ L + (x_{d-1}x_d))\}.$$ By Lemma [Lemma 9](#lem_colon_leaf){reference-type="ref" reference="lem_colon_leaf"}, $L:x_{d-1}x_d = I(T)^{t-2} + (x_1y_1,\ldots,x_{d-2}y_{d-2})$. Hence, by induction, we have $\mathop{\mathrm{depth}}R_2/(L:x_{d-1}x_d) \ge (d-2) - (t-2) + 1 = d- t + 1$. Finally, we have $L + (x_{d-1}x_d) = I(T')^{t-1} + (x_1y_1,\ldots,x_{d-2}y_{d-2},x_{d-1}x_d)$, with $x_d$ now playing the role of $y_{d-1}$. By induction, we also have $\mathop{\mathrm{depth}}R_2 / (L + (x_{d-1}x_d)) \ge d-1 - (t-1) + 1 = d-t + 1$. That concludes the proof of the Lemma. ◻ We are ready for the main result of this section. **Theorem 17**. *Let $I(G) \subset S = \mathrm{k}[x_1, \ldots,x_{d}, y_1, \ldots, y_d]$ be the edge ideal of a Cohen-Macaulay tree $G$ of dimension $d$. Then for all $t \ge 1$, $$\mathop{\mathrm{depth}}S/I(G)^t = \max \{d-t+1, 1\}.$$* *Proof.* The conclusion follows from the result of Villarreal [@V], Corollary [Corollary 15](#cor_upperbound_CM_tree){reference-type="ref" reference="cor_upperbound_CM_tree"}, and Lemma [Lemma 16](#lem_lower_bound_whisker_trees){reference-type="ref" reference="lem_lower_bound_whisker_trees"}. ◻ # Depth of powers of edge ideals of some Cohen-Macaulay bipartite graphs {#sec_depth_power_CM_bipartite_graph} In this section, we study the depth of powers of edge ideals of some Cohen-Macaulay bipartite graphs. First, we consider a graph constructed by Banerjee and Mukundan [@BM]. **Theorem 18**. *Let $S = k[x_1,\ldots,x_d,y_1,\ldots,y_d]$.
For a fixed integer $j$ such that $1 \le j \le d$, let $G_{d,j}$ be a bipartite graph whose edge ideal is $I(G_{d,j}) = (x_i y_i, x_1 y_i, x_k y_j \mid 1 \le i \le d, 1 \le k \le j)$. Then $$\mathop{\mathrm{depth}}S/I(G_{d,j})^s = \max(1,d - j -s + 3),$$ for all $s \ge 2$.* *Proof.* Fixing $j$, we proceed by induction on $d$. The case $d = j$ follows from [@BM Example 4.2]. Now assume that the statement holds for $d-1$. Let $s$ be an exponent such that $2 \le s \le d - j + 2$. Let $e_1 = x_1y_j, e_2 = x_1y_{j+1}, \ldots, e_{s-1} = x_1 y_{j + s-2}$ and $\mathbf e= \{e_1, \ldots, e_{s-1}\}$. Then $G_{d,j}[\bar {\mathbf e}]$ is the empty graph on $d-j-s+2$ vertices. By Lemma [Lemma 14](#lem_upperbound_power_tree){reference-type="ref" reference="lem_upperbound_power_tree"}, $$\mathop{\mathrm{depth}}S/I(G_{d,j})^s \le 1 + \mathop{\mathrm{depth}}R/I(G_{d,j}[\bar{\mathbf e}]) = d- j - s + 3.$$ In particular, $\mathop{\mathrm{depth}}S/I(G_{d,j})^{d-j+2} \le 1$. By [@T Theorem 4.4], we deduce that $\mathop{\mathrm{depth}}S/I(G_{d,j})^t = 1$ for all $t \ge d-j+2$. For the lower bound, the proof is similar to that of Lemma [Lemma 16](#lem_lower_bound_whisker_trees){reference-type="ref" reference="lem_lower_bound_whisker_trees"}. Let $H$ be the induced subgraph of $G_{d,j}$ on $\{x_1,\ldots,x_j,y_1,\ldots,y_d\}$. Then $$\begin{aligned} I(H) &= (x_1y_i,x_ky_k,x_ky_j \mid 1 \le i \le d, 2 \le k \le j)\\ I(G_{d,j}) &= I(H) + (x_iy_i \mid i \ge j + 1).\end{aligned}$$ For ease of reading, we divide the proof into several steps. With an argument similar to Step 3 of the proof of Lemma [Lemma 16](#lem_lower_bound_whisker_trees){reference-type="ref" reference="lem_lower_bound_whisker_trees"}, we reduce to proving the following lower bound $$\mathop{\mathrm{depth}}S/(I(H)^s + (x_iy_i \mid i \ge j + 1)) \ge d-j-s + 3.$$ Let $J = I(H)^s + (x_iy_i \mid i \ge j + 1)$.
By Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, it suffices to bound $\mathop{\mathrm{depth}}S/(J + (y_d))$ and $\mathop{\mathrm{depth}}S/(J:y_d)$. We have $$J + (y_d) = I(H)^s + (x_1y_1,x_iy_i \mid j+1 \le i \le d-1) + (y_d).$$ Hence, $x_d$ is a free variable of $J + (y_d)$. By induction, we have $$\mathop{\mathrm{depth}}S/(J + (y_d)) \ge 1 + ((d-1) - j - s + 3) = d - j - s + 3.$$ Furthermore, $$J:y_d = x_1 I(H)^{s-1} + (x_iy_i \mid j+1 \le i \le d-1) + (x_d).$$ Let $K = x_1 I(H)^{s-1} + (x_iy_i \mid j+1 \le i \le d-1)$ and $R = \mathrm{k}[x_1,\ldots,x_{d-1},y_1,\ldots,y_{d-1}]$. Then, $\mathop{\mathrm{depth}}S/(J:y_d) = \mathop{\mathrm{depth}}R/K$. Since $K+(x_1) = (x_iy_i \mid j+1 \le i \le d-1) + (x_1)$, by Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, it remains to bound $\mathop{\mathrm{depth}}R/(K:x_1)$. We have $$K:x_1 = I(H)^{s-1} + (x_iy_i \mid j+1 \le i \le d-1).$$ Hence, by induction, $$\mathop{\mathrm{depth}}R/(K:x_1) \ge d-1 - (s-1) - j + 3 = d -j -s + 3.$$ That concludes the proof of the Theorem. ◻ In [@BM], Banerjee and Mukundan say that the depth sequence of powers of the edge ideal of a graph $G$ has a drop at $k$ if $\mathop{\mathrm{depth}}S/I^k - \mathop{\mathrm{depth}}S/I^{k+1} > 1$. They then construct a Cohen-Macaulay bipartite graph with an arbitrary number of drops in its depth sequence. Nonetheless, their construction is via sums of ideals, and hence, the resulting Cohen-Macaulay bipartite graph $G$ with $k$ drops has $k$ connected components. By the result of Nguyen and the last author [@NV2], we may reduce the computation of depth of powers of edge ideals of disconnected graphs to that of their connected components. Based on our computational experiments, we believe that one may construct a connected Cohen-Macaulay bipartite graph with an arbitrary number of drops.
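The drop condition itself is elementary to check. As an illustration (in Python, with a hypothetical depth sequence; the function is not part of the paper), the following sketch locates the indices $k$ at which a sequence drops in the sense of Banerjee and Mukundan, i.e. where $\mathop{\mathrm{depth}}S/I^k - \mathop{\mathrm{depth}}S/I^{k+1} > 1$.

```python
def drops(depths):
    # depths[k] is depth S/I^(k+1); a drop occurs at position k (1-based)
    # when depth S/I^k - depth S/I^(k+1) > 1.
    return [k + 1 for k in range(len(depths) - 1)
            if depths[k] - depths[k + 1] > 1]

# A sequence of the shape d, min(a, d-a+1), 1, 1, ... with d = 7, a = 4,
# as in Theorem 19 below: it has two drops, at k = 1 and k = 2.
print(drops([7, 4, 1, 1, 1]))  # -> [1, 2]
```

A sequence with all consecutive differences at most one, such as the one in Theorem 17, produces no drops at all.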
To conclude, we provide an example of a connected Cohen-Macaulay bipartite graph with two drops in its depth sequence of powers. **Theorem 19**. *Assume that $d \ge 5$. Let $S = k[x_1,\ldots,x_d,y_1,\ldots,y_d]$. For each $a$ such that $2 \le a \le d-1$, let $G_{d,a}$ be a bipartite graph whose edge ideal is $$I(G_{d,a}) = x_1(y_1, \ldots, y_d) + (x_iy_j \mid 2 \le i \le j \le a) + (x_iy_j \mid a+1 \le i \le j \le d).$$ Then $$\mathop{\mathrm{depth}}S/I(G_{d,a})^t = \begin{cases} d & \text{ if } t = 1\\ \min(a,d-a+1) & \text{ if } t = 2\\ 1 & \text{ if } t \ge 3.\end{cases}$$* *Proof.* Note that $G_{d,a}$ is a Cohen-Macaulay bipartite graph of dimension $d$ by [@EV; @HH]. By Lemma [Lemma 11](#lem_colon_product_edges){reference-type="ref" reference="lem_colon_product_edges"}, $$I(G_{d,a})^3 : (x_1 y_a x_1 y_{d}) = I(K_{d,d}).$$ By Lemma [Lemma 6](#lem_upperbound){reference-type="ref" reference="lem_upperbound"} and Lemma [Lemma 10](#lem_depth_power_non_increasing){reference-type="ref" reference="lem_depth_power_non_increasing"}, $\mathop{\mathrm{depth}}S/I(G_{d,a})^t =1$ for all $t \ge 3$. Thus, it remains to determine $\mathop{\mathrm{depth}}S/I(G_{d,a})^2$. When $\mathbf e= \{x_1y_a\}$, $G_{d,a}[\bar{\mathbf e}]$ is the empty graph on $d-a$ vertices $\{x_{a+1},\ldots, x_d\}$. When $\mathbf e= \{x_1y_d\}$, $G_{d,a}[\bar{\mathbf e}]$ is the empty graph on $a-1$ vertices $\{x_2,\ldots,x_a\}$. Hence, by Lemma [Lemma 14](#lem_upperbound_power_tree){reference-type="ref" reference="lem_upperbound_power_tree"}, $$\mathop{\mathrm{depth}}S/I(G_{d,a})^2 \le \min(a,d-a+1).$$ For the lower bound, we proceed by induction on the tuple $(d,a)$. We divide the proof into several steps. **Step 1.** $\mathop{\mathrm{depth}}S/(I(G_{d,a})^2 + (y_d)) \ge \min (a,d-a+1)$. Note that $I(G_{d,a})^2 + (y_d) = I(G_{d-1,a})^2 + (y_d)$ and $x_d$ is a free variable of $I(G_{d,a})^2 + (y_d)$.
By induction, we have $$\mathop{\mathrm{depth}}S / (I(G_{d,a})^2 + (y_d)) \ge 1 + \min (a,d-1-a+1) \ge \min(a,d-a+1).$$ **Step 2.** $\mathop{\mathrm{depth}}S/(I(G_{d,a})^2 + (x_1)) \ge \min(a,d-a+1)$. Note that $I(G_{d,a})^2 + (x_1) = (I(H_1) + I(H_2))^2 + (x_1)$, where $$\begin{aligned} I(H_1) &= (x_iy_j \mid 2 \le i \le j \le a),\\ I(H_2) &= (x_iy_j \mid a+1 \le i \le j \le d)\end{aligned}$$ are Cohen-Macaulay ideals of dimensions $a-1$ and $d-a$ respectively. Since $y_1$ is a free variable of $I(G_{d,a})^2 + (x_1)$, by [@NV2 Theorem 1.1], $$\begin{aligned} \mathop{\mathrm{depth}}S/(I(G_{d,a})^2 + (x_1)) = 1 + \min (& \mathop{\mathrm{depth}}R_1 / I(H_1) + \mathop{\mathrm{depth}}R_2 / I(H_2) + 1,\\ &\mathop{\mathrm{depth}}R_1/I(H_1)^2 + \mathop{\mathrm{depth}}R_2/I(H_2),\\ &\mathop{\mathrm{depth}}R_1/I(H_1) + \mathop{\mathrm{depth}}R_2 / I(H_2)^2) \\ &= 1 + \min (a,d-a+1),\end{aligned}$$ where $R_1=\mathrm{k}[x_i,y_i \mid i = 2, \ldots, a]$ and $R_2 = \mathrm{k}[x_i,y_i \mid i = a+1,\ldots, d]$. **Step 3.** $\mathop{\mathrm{depth}}S/(I(G_{d,a})^2 + (x_1,y_d)) \ge \min(a,d-a+1)$. Note that $I(G_{d,a}) + (x_1,y_d)$ is the mixed sum of two Cohen-Macaulay ideals of dimensions $a-1$ and $d-a-1$ respectively, with free variables $y_1,x_d$ in $S$. With an argument similar to Step 2, we deduce the desired lower bound. **Step 4.** $\mathop{\mathrm{depth}}S/(I(G_{d,a})^2 + (x_1y_d)) \ge \min (a,d-a+1)$. Note that $I(G_{d,a})^2 + (x_1y_d) = (I(G_{d,a})^2 + (x_1)) \cap (I(G_{d,a})^2 + (y_d))$. The conclusion follows from Step 1, Step 2, Step 3, and a standard lemma on depth [@BH Proposition 1.2.9]. **Step 5.** $\mathop{\mathrm{depth}}S/(I(G_{d,a})^2:(x_1y_d)) \ge \min (a,d-a+1)$. By Lemma [Lemma 11](#lem_colon_product_edges){reference-type="ref" reference="lem_colon_product_edges"}, $$\label{eq_4_3} I(G_{d,a})^2 : (x_1y_d) = I(G_{d,a}) + (x_{a+1},\ldots,x_d) \cdot (y_1,\ldots,y_d).$$ Let $L = I(G_{d,a})^2 : (x_1y_d)$.
Then $$L = (I(G_{d,a}) + (x_{a+1},\ldots,x_d)) \cap (y_1,\ldots,y_d).$$ Since $\mathop{\mathrm{depth}}S/(y_1,\ldots,y_d) = d$ and $\mathop{\mathrm{depth}}S/(x_{a+1},\ldots,x_d,y_1,\ldots,y_d) = a$, it remains to show that $$\mathop{\mathrm{depth}}S/M \ge \min(a,d-a+1),$$ where $M = I(G_{d,a}) + (x_{a+1},\ldots,x_d)$. We have $M:x_1 = (x_{a+1},\ldots,x_d,y_1,\ldots,y_d)$, while $M+(x_1)$ is a Cohen-Macaulay ideal of dimension $a-1$ with free variables $y_1,y_{a+1},\ldots,y_d$. By Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}, $\mathop{\mathrm{depth}}S/M \ge \min(a,d-a+1)$ as required. Thus, we deduce that $\mathop{\mathrm{depth}}S/I(G_{d,a})^2 \ge \min(a,d-a+1)$ by Step 4, Step 5, and Lemma [Lemma 7](#lem_depth_colon){reference-type="ref" reference="lem_depth_colon"}. That concludes the proof of the Theorem. ◻ # Acknowledgments {#acknowledgments .unnumbered} Nguyen Thu Hang is partially supported by the Thai Nguyen University of Sciences (TNUS) under the grant number CS2021-TN06-16. S. Balanescu, M. Cimpoeas, *Depth and Stanley depth of powers of the path ideal of a path graph*, arXiv:2303.01132. A. Banerjee, *The regularity of powers of edge ideals*, J. Algebraic Combin. **41** (2015), no. 2, 303--321. W. Bruns and J. Herzog, *Cohen-Macaulay rings. Rev. ed.*, Cambridge Studies in Advanced Mathematics **39**, Cambridge University Press (1998). A. Banerjee and V. Mukundan, *The powers of unmixed bipartite edge ideals*, J. Algebra Appl. **18** (2019), 1950209. M. Brodmann, *The asymptotic nature of the analytic spread*, Math. Proc. Cambridge Philos. Soc. **86** (1979), 35--39. G. Caviglia, H. T. Ha, J. Herzog, M. Kummini, N. Terai, and N. V. Trung, *Depth and regularity modulo a principal ideal*, J. Algebraic Combin. **49** (2019), no. 1, 1--20. R. Diestel, *Graph theory, 2nd. edition,* Springer: Berlin/Heidelberg/New York/Tokyo, 2000. M. Estrada and R. H. Villarreal, *Cohen-Macaulay bipartite graphs,* Arch. Math.
**68** (1997), 124--128. J. Herzog, T. Hibi, *Distributive lattices, bipartite graphs, and Alexander duality*, J. Algebraic Combin. **22** (2005), 289--302. K. Kimura, *Non-vanishingness of Betti numbers of edge ideals and complete bipartite subgraphs*, Comm. Algebra **44** (2016), 710--730. N. C. Minh, T. N. Trung, and T. Vu, *Depth of powers of edge ideals of cycles and trees*, arXiv:2308.00874. N. C. Minh, T. N. Trung, and T. Vu, *Stable value of depth of symbolic powers of edge ideals of graphs,* arXiv:2308.09967. S. Morey, *Depths of powers of the edge ideal of a tree*, Comm. Algebra **38** (2010), 4042--4055. H. D. Nguyen and T. Vu, *Linearity defect of edge ideals and Fröberg's theorem,* J. Algebraic Combin. **44** (2016), 165--199. H. D. Nguyen and T. Vu, *Powers of sums and their homological invariants,* J. Pure Appl. Algebra **223** (2019), 3081--3111. A. Rauf, *Depth and sdepth of multigraded modules*, Comm. Algebra **38** (2010), 773--784. T. N. Trung, *Stability of depths of powers of edge ideals*, J. Algebra **452** (2016), 157--187. R. H. Villarreal, *Cohen-Macaulay graphs*, Manuscripta Math. **66** (1990), 277--293.
--- abstract: | Let $B/F$ be a quaternion algebra over a totally real number field. We give an explicit formula for heights of special points on the quaternionic Shimura variety associated with $B$ in terms of Faltings heights of CM abelian varieties. Special points correspond to CM fields $E$ and partial CM-types $\phi\subset \mathop{\mathrm{Hom}}(E, \mathbb C)$. We then show that our height is compatible with the canonical height of a partial CM-type defined by Pila, Shankar, and Tsimerman in [@Pil21]. This gives another proof that the height of a partial CM-type is bounded subpolynomially in terms of the discriminant of $E$. address: Department of Mathematics, University of California, Berkeley, CA, USA. author: - Roy Zhao bibliography: - references.bib title: Heights of Special Points on Quaternionic Shimura Varieties --- # Introduction In this article we study heights of special points on quaternionic Shimura varieties, motivated by the recent proof of the André--Oort conjecture for Shimura varieties by [@Pil21]. Let $B/F$ be a quaternion algebra over a totally real field. Special points on an associated quaternionic Shimura variety $X$ correspond to totally imaginary quadratic extensions $E/F$ lying inside $B$. We give a direct formula for the height of a special point in terms of Faltings heights of CM-types of $E$ and explicit logarithms of discriminants. The splitting behavior of $B$ at infinity gives rise to a partial CM-type $\phi\subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ such that $\phi\cap \overline{\phi} = \varnothing$. We show that the height of a special point on this quaternionic Shimura variety is compatible with the height of the partial CM-type $h(\phi)$ defined by [@Pil21]. Our direct formula gives another proof that the heights of special points of Shimura varieties are bounded in terms of discriminants of number fields. Our formula for $h(\phi)$ also differs from the one given by [@Pil21]. 
The formula given by [@Pil21] expresses the height $h(\phi)$ in terms of Faltings heights of CM-types of CM-fields $E'$, where the relative discriminant of $E'/E$ is controlled. Our formula expresses $h(\phi)$ in terms of CM-types of $E$ only. The case when $B$ is split at only one archimedean place of $F$ was proved by [@Yua18], and we recover their formula in that setting. To prove our formula, we follow the strategy of [@Yua18] by comparing a Kodaira--Spencer map on $X$ with that of a related PEL-type Shimura variety $X'$. ## Statement of Main Theorem Let $E$ be a CM field, and $F$ be its totally real subfield, so that $[E:F] = 2$. Set $g \coloneqq [F:\mathbb Q]$. Let $\phi\subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ be a partial CM-type, meaning that $\phi\cap \overline{\phi} = \varnothing$. Write $\Sigma \subset \mathop{\mathrm{Hom}}(F, \mathbb R)$ for the restriction of $\phi$ to $F$. Suppose that $B/F$ is a quaternion algebra with the following properties: 1. There exists an embedding $E \hookrightarrow B$; 2. The ramification set of $B$ at infinity is $\Sigma^c$; 3. If $B$ is ramified at a finite prime $\ensuremath{\mathfrak{p}}$ of $F$, then $E$ is also ramified over $\ensuremath{\mathfrak{p}}$. We define the algebraic group $G$ over $\mathbb Q$ as $$G \coloneqq \mathop{\mathrm{Res}}_{F/\mathbb Q} B^\times.$$ For each compact open subgroup $U \subset G(\ensuremath{\mathbb{A}}_f)$, we obtain a (quaternionic) Shimura variety $X_U$, defined over a number field $E_X$, with complex uniformization given by: $$X_U(\mathbb C) = G(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^\pm)^\Sigma \times G(\ensuremath{\mathbb{A}}_f)/U,$$ where $\ensuremath{\mathcal{H}}^\pm$ is the union of the upper and lower complex half-planes. This is a Shimura variety of abelian type, and by utilizing ideas from [@Car86] and [@Pap13], we can construct a regular integral model $\ensuremath{\mathcal{X}}_U$ for $X_U$ over $\mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{E_X}$.
Let $\widehat{\ensuremath{\mathcal{L}}_U}$ be the arithmetic Hodge bundle of $\ensuremath{\mathcal{X}}_U$, which consists of a line bundle $\ensuremath{\mathcal{L}}_U$ on $\ensuremath{\mathcal{X}}_U$ and a Hermitian metric given by: $$\norm*{\bigwedge_{\sigma \in \Sigma} dz_\sigma} \coloneqq \prod_{\sigma \in \Sigma} 2\mathrm{Im}(z_\sigma),$$ where the $z_\sigma$ are given by the complex uniformization of $X_U$. When $U$ is sufficiently small, the Hodge bundle is simply the canonical bundle $\ensuremath{\mathcal{L}}_U = \omega_{\ensuremath{\mathcal{X}}_U/\ensuremath{\mathcal{O}}_{E_X}}$. The precise definition of $\widehat{\ensuremath{\mathcal{L}}_U}$ is given in Section [6](#sec:Hodge bundle){reference-type="ref" reference="sec:Hodge bundle"}. Let $P_U \in X_U(\overline{\mathbb Q})$ be a special point arising from the embedding $E \hookrightarrow B$, and let $\overline{P_U}$ be the closure of this point in $\ensuremath{\mathcal{X}}_U$, which we will also denote by $P_U$ by abuse of notation. The height of this point relative to $\widehat{\ensuremath{\mathcal{L}}_U}$ is the Arakelov height: $$h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) \coloneqq \frac{1}{[\mathbb Q(P_U):\mathbb Q]} \widehat{\deg}(\widehat{\ensuremath{\mathcal{L}}_U}|_{P_U}).$$ Finally, let $\Phi$ be a full CM-type, and let $h(\Phi)$ be the Faltings height of an abelian variety with complex multiplication by $(\ensuremath{\mathcal{O}}_E, \Phi)$. Let $d_\phi, d_{\overline{\phi}}$ and $d_\Sigma \coloneqq d_{\phi\sqcup \overline{\phi}}$ be certain absolute discriminants of $\phi, \overline{\phi}$, and $\phi\sqcup \overline{\phi}$. These are defined in detail in Section [2](#sec:Faltings height){reference-type="ref" reference="sec:Faltings height"}. Let $\ensuremath{\mathfrak{d}}_{E/F}$ denote the relative discriminant of the extension $E/F$. There is a reflex norm $N_{F/E_X} \colon F \to E_X$ defined by $N_{F/E_X}(x) = \prod_{\sigma \in \Sigma} \sigma(x)$.
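The product over $\Sigma$ defining the reflex norm can be made concrete in a toy case. The sketch below (in Python; the field $F = \mathbb Q(\sqrt 2)$ and the pair encoding $a + b\sqrt 2 \leftrightarrow (a, b)$ are illustrative choices, not from the paper) computes the partial product $\prod_{\sigma \in \Sigma'} \sigma(x)$ over a subset $\Sigma'$ of the real embeddings; taking $\Sigma'$ to be all embeddings recovers the usual norm $N_{F/\mathbb Q}$.

```python
D = 2  # we work in Q(sqrt(D)); elements a + b*sqrt(D) are stored as pairs (a, b)

def embeddings(a, b):
    # The two real embeddings of a + b*sqrt(D), kept symbolic as (a, +/-b)
    # so that products stay exact.
    return [(a, b), (a, -b)]

def mul(x, y):
    # Exact product (a + b*sqrt(D)) * (c + d*sqrt(D)).
    a, b = x
    c, d = y
    return (a * c + D * b * d, a * d + b * c)

def norm(a, b, sigma_set):
    # Partial norm: the product of sigma(a + b*sqrt(D)) over sigma in sigma_set,
    # mirroring N_{F/E_X}(x) = prod_{sigma in Sigma} sigma(x).
    embs = embeddings(a, b)
    out = (1, 0)
    for i in sigma_set:
        out = mul(out, embs[i])
    return out

# Taking all embeddings recovers N_{F/Q}(3 + sqrt(2)) = 9 - 2 = 7:
print(norm(3, 1, [0, 1]))  # -> (7, 0)
```

For a proper subset $\Sigma'$ the partial product generally lands in a larger field, which is exactly why the reflex field $E_X$ enters the definition.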
Let $d_{E/F, \Sigma} \in \mathbb Z$ be the positive generator of $N_{E_X/\mathbb Q}(N_{F/E_X}(\ensuremath{\mathfrak{d}}_{E/F}))$. Let $d_{E/F} \in \mathbb Z$ be the positive generator of $N_{F/\mathbb Q}(\ensuremath{\mathfrak{d}}_{E/F})$ and let $d_F$ be the absolute discriminant of $F$. Let $d_B$ be the positive generator of norm from $F$ to $\mathbb Q$ of the product of all the finite places of $\ensuremath{\mathcal{O}}_F$ over which $B$ ramifies. With these definitions in place, we can now state our main theorem. **Theorem 1**. *Suppose that $U = \prod_v U_v$ is a maximal compact subgroup of $G(\ensuremath{\mathbb{A}}_f)$. Then $$\begin{aligned} \frac{1}{2}h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) =& \frac{1}{2^{\abs{{\Sigma}^{\mathsf{c}}}}} \sum_{\Phi \supset \phi} h(\Phi) - \frac{\abs{{\Sigma}^{\mathsf{c}}}}{g2^g} \sum_\Phi h(\Phi)\\ & + \frac{1}{8} \log d_{E/F, \Sigma}d_\Sigma^{-1} + \frac{1}{4}\log d_\phi d_{\overline{\phi}} + \frac{1}{4g} \log d_Bd_\Sigma + \frac{\abs{\Sigma}}{4g} \log d_F. \end{aligned}$$ The first summation is over all full CM-types which contain $\phi$, and the second summation is over all full CM-types of $E$.* Additionally, if $\abs{\phi} = 1$, then $E_X = F$, and we have that $d_\Sigma = d_{E/F} = d_{E/F, \Sigma}$ and $d_\phi= d_{\overline{\phi}} = 1$. As a result, the expression for $\frac{1}{2}h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U)$ simplifies, and we recover [@Yua18 Thm. 1.6], where the factor of $g$ is due to different normalizing factors of $h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U)$. In the Pila--Zannier method, an essential step involves bounding the height of a CM point in terms of the discriminant of its splitting field. For general Shimura varieties, a CM point $P$ is associated with a partial CM-type $\phi$ of a CM-field $E$. 
In [@Pil21], a canonical height $h(\phi)$ is introduced for such $\phi$, and this height $h(\phi)$ is shown to be equal to the height $h(P)$ of the associated CM point $P$, up to $O(\log d_E)$. Then, the height $h(\phi)$ is bounded in terms of the discriminant $d_E$, showing that $h(P)$ is as well. The precise definition of $h(\phi)$ can be found in Section [8](#sec:Andre Oort){reference-type="ref" reference="sec:Andre Oort"}. Our second main result establishes the compatibility between $h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U)$ and $h(\phi)$. **Theorem 2**. *$$h(\phi) = \frac{1}{2}h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) + O(\log d_E).$$* With the combination of Theorem [Theorem 1](#thm:main theorem 1){reference-type="ref" reference="thm:main theorem 1"} and Theorem [Theorem 2](#thm:main theorem 2){reference-type="ref" reference="thm:main theorem 2"}, we obtain the following corollary. This is an ingredient used in the Pila--Zannier method, and serves as one of the results of [@Pil21]. **Corollary 3**. *For all $\varepsilon> 0$, there exists a positive constant $c$ depending only on $[E:\mathbb Q]$ such that $$h(\phi) \le c \cdot d_E^\varepsilon$$ for all partial CM-types of $E$.* *Proof.* By [@Tsi18 Cor. 3.3], the Faltings heights $h(\Phi)$ of full CM-types are bounded subpolynomially by $d_E$. Each of the discriminants $d_{E/F, \Sigma}, d_\Sigma, d_\phi, d_F$ is at most $d_E$, and $d_B \le d_E$ since we specified that the ramification set of $E$ contains the ramification set of $B$. Thus, each of the logarithm terms is bounded by $\log d_E$, which is also subpolynomial in $d_E$. ◻ This result differs from that of [@Pil21] because in our case, we are able to express $h(\phi)$ in terms of CM-types of $E$, whereas in [@Pil21], they express $h(\phi)$ in terms of CM-types $\Phi^\sharp$ of CM-fields $E^\sharp$ containing $E$, whose relative discriminant over $E$ can be bounded.
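Theorem 1 averages over the full CM-types $\Phi$ containing $\phi$, and there are exactly $2^{\abs{\Sigma^{\mathsf c}}}$ of them: a full CM-type selects one embedding from each of the $g$ conjugate pairs in $\mathop{\mathrm{Hom}}(E, \mathbb C)$, and $\phi$ already fixes the selection on $\abs{\phi}$ of the pairs. A small enumeration (in Python; the integer encoding of embeddings is an illustrative device, not from the paper) confirms the count.

```python
from itertools import product

def full_cm_types(g):
    # All full CM-types of a CM field of degree 2g: one choice of the
    # embedding (tau, 0) or its conjugate (tau, 1) from each of the g pairs.
    return [frozenset((tau, eps) for tau, eps in enumerate(choice))
            for choice in product((0, 1), repeat=g)]

def extensions(phi, g):
    # Full CM-types containing the partial CM-type phi.
    return [Phi for Phi in full_cm_types(g) if phi <= Phi]

g = 3
phi = frozenset({(0, 0)})          # a partial CM-type with |phi| = 1
print(len(full_cm_types(g)))       # -> 8: all 2^g full CM-types
print(len(extensions(phi, g)))     # -> 4: 2^(g - |phi|) = 2^{|Sigma^c|}
```

Here $(\tau, 0)$ and $(\tau, 1)$ stand for an embedding and its complex conjugate, so the condition $\phi\cap \overline{\phi} = \varnothing$ amounts to $\phi$ using at most one element from each pair.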
## Motivation The André--Oort conjecture states that if $S$ is a Shimura variety and $V \subset S$ is a subvariety with a dense subset of special points, then $V$ itself must be a special subvariety. Definitions and properties for Shimura varieties can be found in [@Mil05 Sec. 4] and for special subvarieties in [@Daw18 Sec. 2]. The André--Oort conjecture was originally proven by André in the case when $S$ is the product of two modular curves [@And98] and later for arbitrary products of modular curves by Pila (see [@Pil11]). Pila and Tsimerman (see [@Pil14]) were able to extend this strategy to prove the conjecture unconditionally for $\ensuremath{\mathcal{A}}_g$, the coarse moduli space of principally polarized abelian varieties of dimension $g$, when $g \le 6$. The averaged Colmez Conjecture, proven by [@And18] and [@Yua18], allowed Tsimerman to give an unconditional proof (see [@Tsi18]) of the conjecture for $\ensuremath{\mathcal{A}}_g$ for all $g$. Moreover, following results of [@Bin23], Pila, Shankar, and Tsimerman recently announced an unconditional proof of the conjecture for all Shimura varieties in [@Pil21]. There is also a version of the André--Oort conjecture for mixed Shimura varieties that was reduced to the pure Shimura varieties case by Gao (see [@Gao16]). Many of the recent results on the André--Oort conjecture were proven using a strategy that was initially proposed by Zannier, by using the theory of o-minimality and a point counting theorem of Pila and Wilkie from [@Pil06], combined with estimates on the sizes of certain Galois orbits. This strategy was shown to be viable when Pila and Zannier used it to reprove the Manin--Mumford conjecture in [@Zan08]. And it is using this Pila--Zannier strategy that the conjecture was proven for products of the modular curve (see [@Pil11]), the moduli space $\ensuremath{\mathcal{A}}_g$ (see [@Pil14; @Tsi18]), and for all Shimura varieties (see [@Pil21]). The strategy can be split up into three main ingredients. 
1. The first ingredient is the Pila--Wilkie point counting theorem from [@Pil06]. It gives an upper bound subpolynomial in height on the rational points of the transcendental component of definable sets. The fundamental domains of the universal covering map of a Shimura variety are definable in an o-minimal structure. This was shown for $\ensuremath{\mathcal{A}}_g$ by Peterzil and Starchenko (see [@Pet13]), and for arbitrary Shimura varieties by Klingler, Ullmo, and Yafaev (see [@Kli13]). A sharpened version proven by Binyamini (see [@Bin22]) was necessary for general Shimura varieties. 2. The second ingredient is a lower bound polynomial in height for the size of Galois orbits of special points. This is done in two steps. The first step is to provide a lower bound polynomial in height for the discriminant of certain endomorphism algebras. This was done by Pila and Tsimerman (see [@Pil13]) for $\ensuremath{\mathcal{A}}_g$, and by Binyamini, Schmidt, and Yafaev (see [@Bin23]) for arbitrary Shimura varieties. The second step is to provide a lower bound for the size of Galois orbits polynomial in terms of the discriminant of those same algebras. Tsimerman proves this for $\ensuremath{\mathcal{A}}_g$ (see [@Tsi18]) by combining a result of Masser and Wüstholz (see [@Mas95]), and the averaged Colmez conjecture, which was proven independently by Andreatta, Goren, Howard, and Madapusi-Pera (see [@And18]), and Yuan and Zhang (see [@Yua18]). For general Shimura varieties, Binyamini, Schmidt, and Yafaev (see [@Bin23]) reduce this to proving the existence of a canonical height on Shimura varieties and that the height of special points is bounded subpolynomially in terms of the discriminant of certain number fields. A proof of this result was recently announced by Pila, Shankar, and Tsimerman (see [@Pil21]). 3. The third ingredient is the Ax--Lindemann theorem. The first two ingredients combine to show that Galois orbits of special points must lie in the algebraic part of the fundamental domain.
Then, the Ax--Lindemann theorem tells us that these algebraic parts are precisely special subvarieties. This was proven for $\ensuremath{\mathcal{A}}_g$ by Pila and Tsimerman (see [@Pil14]), and by Mok, Pila, and Tsimerman for all Shimura varieties (see [@Mok19]). For this article, we are interested in the contribution of [@Pil21] in showing that their canonical height for CM points on Shimura varieties is bounded in terms of discriminants of their splitting fields. The first step is to systematically define a Weil height function for any arbitrary Shimura variety. Given a Shimura variety $\mathrm{Sh}_K(G, X)$, take a $\mathbb Q$-representation $G \to \mathop{\mathrm{GL}}(V)$ and a lattice $\Lambda \subset V$. By the Riemann--Hilbert correspondence over $p$-adic local fields, given by [@Dia22], we get a filtered automorphic vector bundle with connection $(_{\mathop{\mathrm{dR}}}V, \mathop{\mathrm{Fil}}^\bullet, \nabla)$, which is defined over the reflex field of $\mathrm{Sh}_K(G, X)$ and all of its $p$-adic places. Then the plan is to define an adelic norm on $\mathop{\mathrm{Gr}}^\bullet_{\mathop{\mathrm{dR}}} V$, which would give rise to an Arakelov height function on $\mathrm{Sh}_K(G, X)$. At the archimedean places, this representation admits a polarization $\psi\colon V \times V \to \mathbb Q$ and the norm can be defined as the Hodge norm $\psi(v, h(i)v)$. Over the finite places, the crystalline norm is used when the representation is crystalline and an alternative intrinsic norm is used at the finitely many other places. This height is compatible in the sense that if $(G_1, X_1) \to (G_2, X_2)$ is a map of Shimura data with $\rho_i$ a representation of $G_i$ compatible with this morphism, then the height of a point of $\mathrm{Sh}_{K_2}(G_2, X_2)$ with respect to $\rho_2$ is equal to the height of its preimage in $\mathrm{Sh}_{K_1}(G_1, X_1)$ with respect to $\rho_1$. This height also coincides with the Faltings height for $\ensuremath{\mathcal{A}}_g$.
With this height defined, the next step is to bound the height of special points in terms of discriminants of certain number fields. If $(T, x) \subset (G, X)$ is a Shimura sub-datum defining a special point, then $T$ splits over a CM extension $E/F$. Given a representation $\rho \colon G \to \mathop{\mathrm{GL}}(V)$ and restricted representation $\rho_x \colon T \to G \to \mathop{\mathrm{GL}}(V)$, we can map $(T, x, \rho_x) \to (\mathop{\mathrm{Res}}_{F/\mathbb Q} E^\times/F^\times, x, \rho_\phi)$ to another Shimura datum where $\phi\subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ is a partial CM-type and $\rho_\phi$ is a representation of $\mathop{\mathrm{Res}}_{F/\mathbb Q} E^\times/F^\times$ in terms of $\phi$. The height $h(\phi)$ is defined as the height of $(\mathop{\mathrm{Res}}_{F/\mathbb Q} E^\times/F^\times, x, \rho_\phi)$. By the compatibility of heights, the problem is reduced to bounding the height $h(\phi)$. Given a set of disjoint CM-fields and partial CM-types $\set{(E_i, \phi_i)}_{i = 1}^t$, they express a linear combination of the $h(\phi_i)$ in terms of a linear combination of heights of full CM-types $\Phi^S$ of $E^S = \prod_{i \in S} E_i$ for various subsets $S \subset \{1, 2, \dots, t\}$. Ranging over all subsets $S$, this expresses each individual $h(\phi_i)$ as a linear combination of Faltings heights of CM-types $\Phi^S$ of $E^S$. The height of the full CM-type $h(\Phi^S)$ is bounded subpolynomially in terms of the discriminant $d_{E^S}$. Choosing the $E_i$ carefully, we can bound the relative discriminant $\ensuremath{\mathfrak{d}}_{E^S/E_i}$, completing the proof. Instead of expressing the height $h(\phi)$ in terms of heights of CM-types over the many $E^S$, we give an expression in terms of CM-types of $E$ only. ## Idea of the Proof The idea is similar to that of [@Yua18].
We use their decomposition of the Faltings height $h(\Phi)$ of a CM abelian variety into constituent parts $h(\Phi, \tau)$, one for each embedding $\tau \in \Phi$. The constituent parts are related to the full CM-type by the formula $$h(\Phi) - \sum_{\tau \in \Phi} h(\Phi, \tau) = \frac{-1}{4[E_\Phi:\mathbb Q]} \log (d_\Phi d_{\overline{\Phi}}),$$ where $d_\Phi, d_{\overline{\Phi}}$ are discriminants associated with $\Phi, \overline{\Phi}$ respectively, and $E_\Phi$ is the reflex field of $\Phi$. Moreover, if $(\Phi_1, \Phi_2)$ are nearby CM-types of $E$ in that they differ only at a single embedding $\tau_i = \Phi_i \backslash(\Phi_1 \cap \Phi_2)$, then [@Yua18] proves that the quantity $$h(\Phi_1, \tau_1) + h(\Phi_2, \tau_2)$$ is the same across any choice of nearby CM-types. We define the group $$G'' \coloneqq \mathop{\mathrm{Res}}_{F/\mathbb Q} (B^\times \times E^\times)/F^\times,$$ where $F^\times$ embeds as $a \mapsto (a, a^{-1})$. We can construct a norm $\nu\colon G'' \to \mathop{\mathrm{Res}}_{F/\mathbb Q} \ensuremath{\mathbb{G}}_m$ and define the group $$G' \coloneqq G'' \times_{\mathop{\mathrm{Res}}_{F/\mathbb Q} \ensuremath{\mathbb{G}}_m} \ensuremath{\mathbb{G}}_m$$ consisting of the elements of $G''$ whose norm lies in $\mathbb Q^\times$. If $\phi$ is a partial CM-type and $\phi'$ is a complementary partial CM-type in that $\phi\sqcup \phi'$ constitutes a full CM-type, then we can construct morphisms $h' \colon \mathbb C^\times \to G'(\mathbb R)$ and $h''\colon \mathbb C^\times \to G''(\mathbb R)$.
They give rise to Shimura data and hence Shimura varieties $X'_{U'}$ and $X''_{U''}$ for compact open subgroups $U' \subset G'(\ensuremath{\mathbb{A}}_f)$ and $U'' \subset G''(\ensuremath{\mathbb{A}}_f)$ with complex uniformizations $$X'_{U'}(\mathbb C) = G'(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^\pm)^\Sigma \times G'(\ensuremath{\mathbb{A}}_f)/U'$$ and $$X''_{U''}(\mathbb C) = G''(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^\pm)^\Sigma \times G''(\ensuremath{\mathbb{A}}_f)/U''.$$ They have canonical models defined over the same reflex field $E_{X'} = E_{X''}$. The Shimura variety $X'_{U'}$ is of PEL type and has an integral model $\ensuremath{\mathcal{X}}'_{U'}$ by [@Car86] and [@Pap13]. The pair $(\phi, \phi')$ gives rise to a point $P'_{U'} \in X'_{U'}$ which parametrizes an abelian variety isogenous to a product $A_1 \times A_2$ of abelian varieties, one with complex multiplication of type $\phi\sqcup \phi'$ and the other with complex multiplication of type $\overline{\phi} \sqcup \phi'$. After defining a suitable metric on $\omega_{\ensuremath{\mathcal{X}}'_{U'}/\ensuremath{\mathcal{O}}_{E_{X'}}}$, the Kodaira--Spencer isomorphism on $X'_{U'}$ gives us an equality of heights $$h_{\widehat{\omega_{\ensuremath{\mathcal{X}}'_{U'}/\ensuremath{\mathcal{O}}_{E_{X'}}}}}(P'_{U'}) = \sum_{\tau \in \phi} \paren*{h(\phi\sqcup \phi', \tau) + h(\overline{\phi} \sqcup \phi', \overline{\tau})}.$$ Now the idea is to relate $\omega_{\ensuremath{\mathcal{X}}_U/\ensuremath{\mathcal{O}}_{E_X}}$ and $\omega_{\ensuremath{\mathcal{X}}'_{U'}/\ensuremath{\mathcal{O}}_{E_{X'}}}$. We do this by mapping both $X_U$ and $X'_{U'}$ into the third Shimura variety $X''_{U''}$ so that the points $P_U$ and $P'_{U'}$ have the same image $P''_{U''} \in X''_{U''}(\overline{\mathbb Q})$.
We represent both canonical bundles in terms of deformations of $p$-divisible groups $\ensuremath{\mathcal{H}}_U$ and $\ensuremath{\mathcal{H}}'_{U'}$ over $\ensuremath{\mathcal{X}}_U$ and $\ensuremath{\mathcal{X}}'_{U'}$ respectively, and then relate those $p$-divisible groups to a $p$-divisible group $\ensuremath{\mathcal{H}}''_{U''}$ over $\ensuremath{\mathcal{X}}''_{U''}$. After showing all this, we get that $$h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) = h_{\widehat{\omega_{\ensuremath{\mathcal{X}}'_{U'}/\ensuremath{\mathcal{O}}_{E_{X'}}}}}(P'_{U'}) = \sum_{\tau \in \phi} \paren*{h(\phi\sqcup \phi', \tau) + h(\overline{\phi} \sqcup \phi', \overline{\tau})}.$$ Note that this formula does not depend on the choice of complementary partial CM-type $\phi'$, because the Shimura variety $X_U$ was defined independently of the choice of $\phi'$. We exploit this by summing over all possible complementary partial CM-types $\phi'$, which expresses $h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U)$ in terms of heights of full CM-types containing $\phi$ as well as nearby CM-types. The sum of heights of nearby CM-types is shown in [@Yua18] to be constant, and equal to the averaged height of all CM-types of $E$. Combining these two facts, we can express the height in terms of heights of CM-types containing $\phi$ and the average over all CM-types of $E$. ## Structure of the Article We first recall from [@Yua18] the decomposition of the Faltings height in Section [2](#sec:Faltings height){reference-type="ref" reference="sec:Faltings height"}. We then describe three Shimura varieties that can be constructed from a quaternion algebra following [@Tia16], describing their generic fibers in Section [3](#sec:Quaternionic Shimura Variety){reference-type="ref" reference="sec:Quaternionic Shimura Variety"} and their integral models in Section [4](#sec:Integral Model){reference-type="ref" reference="sec:Integral Model"}.
We then describe some line bundles on these Shimura varieties in terms of the Lie algebras of certain $p$-divisible groups in Section [5](#sec:p divisible group){reference-type="ref" reference="sec:p divisible group"}, and we relate these Lie algebras and define the Hodge bundle in Section [6](#sec:Hodge bundle){reference-type="ref" reference="sec:Hodge bundle"}. Finally, we prove our theorem for the height of partial CM-types in Section [7](#sec:Special Point){reference-type="ref" reference="sec:Special Point"} and compare our height with those introduced in [@Pil21] in Section [8](#sec:Andre Oort){reference-type="ref" reference="sec:Andre Oort"}. ## Acknowledgements We wish to thank Sebastian Eterović, Ananth Shankar, and Xinyi Yuan for helpful discussions. # CM-types and Faltings Heights {#sec:Faltings height} ## Faltings Height We first define the Faltings height of an abelian variety. It will be defined as the degree of a metrized line bundle. Let $A$ be an abelian variety of dimension $g$ defined over a number field $K$, let $\ensuremath{\mathcal{A}}$ be its Néron model over $\ensuremath{\mathcal{O}}_K$, and let $s\colon \mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_K \to \ensuremath{\mathcal{A}}$ be the identity section. Let $\Omega_{\ensuremath{\mathcal{A}}/\ensuremath{\mathcal{O}}_K}$ be the sheaf of relative differentials. The *Hodge bundle* of $A$ is the vector bundle $\Omega(\ensuremath{\mathcal{A}}) \coloneqq s^*\Omega_{\ensuremath{\mathcal{A}}/\ensuremath{\mathcal{O}}_K}$ over $\ensuremath{\mathcal{O}}_K$. This is canonically isomorphic to the pushforward $\pi_* \Omega_{\ensuremath{\mathcal{A}}/\ensuremath{\mathcal{O}}_K}$, where $\pi\colon \ensuremath{\mathcal{A}} \to \mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_K$ is the structure morphism.
The Hodge bundle $\Omega(\ensuremath{\mathcal{A}})$ is a vector bundle over $\ensuremath{\mathcal{O}}_K$ of rank $g$, and taking the determinant $\omega(\ensuremath{\mathcal{A}}) \coloneqq \Omega(\ensuremath{\mathcal{A}})^{\wedge g}$ gives a line bundle over $\ensuremath{\mathcal{O}}_K$. To make this into a metrized line bundle, we need to define a norm for each archimedean place of $K$. We have that $$\omega(\ensuremath{\mathcal{A}}) \otimes_{\ensuremath{\mathcal{O}}_K} K \cong s^* \omega_{A/K} = H^0(A, \omega_{A/K}).$$ For each archimedean place $v$ of $K$, we define the norm by $$\norm{\alpha}_v \coloneqq \abs*{\frac{1}{(2\pi)^g}\int_{A_v(\mathbb C)}\alpha \wedge \overline{\alpha}}^{\frac{1}{2}}$$ for each $\alpha \in \omega(\ensuremath{\mathcal{A}}) \otimes_{\ensuremath{\mathcal{O}}_K} K_v \cong H^0(A_v, \omega_{A_v/K_v})$. In this way, we get a metrized line bundle $\widehat{\omega(\ensuremath{\mathcal{A}})} \coloneqq (\omega(\ensuremath{\mathcal{A}}), \norm{\cdot}_v)$. **Definition 4**. The *Faltings height* of the abelian variety $A/K$ is the Arakelov height $$h(A) \coloneqq \frac{1}{[K: \mathbb Q]} \widehat{\deg} \widehat{\omega(\ensuremath{\mathcal{A}})} = \frac{1}{[K:\mathbb Q]} \paren*{\log \abs{\omega(\ensuremath{\mathcal{A}})/(\ensuremath{\mathcal{O}}_K\cdot s)} - \sum_{\sigma \colon K \to \mathbb C}\log \|s\|_\sigma},$$ for a choice of $s \in \omega(\ensuremath{\mathcal{A}}) \backslash \{0\}$. This is well-defined, independent of the choice of $s$, by the product formula. If $A$ has semistable reduction over $K$, then the Faltings height is invariant under finite field extensions. In general, we define the stable Faltings height as the height after base change to a finite extension $K'/K$ such that $A$ has semistable reduction over $K'$. Such a $K'$ always exists. ## CM-types A *CM-field extension* is an extension $E/F$ of number fields such that $F/\mathbb Q$ is a totally real field and $E/F$ is a quadratic totally imaginary extension.
We say $E$ is a *CM-field* and $F$ is its *totally real subfield*. A *(full) CM-type* is a subset $\Phi \subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ such that $\Phi \sqcup \overline{\Phi} = \mathop{\mathrm{Hom}}(E, \mathbb C)$, where $\overline{\Phi} = \{\overline{\sigma} : \sigma \in \Phi\}$. A *partial CM-type* is a subset $\phi\subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ such that $\phi\cap \overline{\phi} = \varnothing$. We say that $\phi'$ is a *complementary partial CM-type to $\phi$* if $\phi\sqcup \phi'$ is a CM-type. We say that a complex abelian variety $A$ has complex multiplication of type $(\ensuremath{\mathcal{O}}_E, \Phi)$ if there exists an embedding $\iota\colon \ensuremath{\mathcal{O}}_E \to \mathop{\mathrm{End}}(A)$ and an isomorphism $\mathop{\mathrm{Lie}}(A) \cong \mathbb C^g \overset{\Phi}{\cong} E \otimes_\mathbb Q\mathbb R$ of $\ensuremath{\mathcal{O}}_E$-modules. Let $E$ be a CM-field with degree $[E:\mathbb Q] = 2g$ and let $\Phi \subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ be a CM-type. Let $A_\Phi$ be an abelian variety of CM-type $(\ensuremath{\mathcal{O}}_E, \Phi)$. Then, there is a number field $K$ over which $A_\Phi$ is defined and has a smooth projective integral model $\ensuremath{\mathcal{A}}/\ensuremath{\mathcal{O}}_K$. Colmez proved the following theorem. **Theorem 5** ([@Col93 Thm 0.3]). *The Faltings height $h(A_\Phi)$ depends only on the CM-type $(E, \Phi)$.* We write $h(\Phi) \coloneqq h(A_\Phi)$. Colmez conjectured a formula for $h(\Phi)$ in terms of logarithmic derivatives of Artin L-functions related to $\Phi$. This conjecture has been proven by Obus and Colmez when $E/\mathbb Q$ is an abelian extension (see [@Obu13]) and by Yang when $F$ is a real quadratic field (see [@Yan10]). An averaged version was proven in [@Yua18], and independently in [@And18]. **Theorem 6** ([@And18 Thm A], [@Yua18 Thm 1.1]).
*Suppose $E/F$ is a CM-extension, let $\chi\colon \ensuremath{\mathbb{A}}_F^\times \to \set{\pm 1}$ be the character corresponding to this extension, and let $L(s, \chi)$ be the corresponding Artin L-function. Let $d_F$ be the absolute discriminant of $F$ and $d_{E/F}$ the norm of the relative discriminant of $E/F$. Then $$\frac{1}{2^g} \sum_\Phi h(\Phi) = - \frac{1}{2} \frac{L'(0, \chi)}{L(0, \chi)} - \frac{1}{4} \log (d_{E/F}d_F),$$ where the sum on the left runs through the set of all CM-types of $E$.* ## Decomposition of Heights We recall the results of [@Yua18] decomposing the Faltings height of a CM-type $\Phi$ into its constituent embeddings $\tau \in \Phi$. To decompose the height, we first decompose the Hodge bundle into its eigenspaces. Let $A$ be an abelian variety with complex multiplication of type $(\ensuremath{\mathcal{O}}_E, \Phi)$. We define $$\Omega(A)_\tau \coloneqq \Omega(A) \otimes_{E \otimes_\mathbb Q\mathbb C} \mathbb C_\tau,$$ where $E \otimes_\mathbb Q\mathbb C$ acts on $\mathbb C_\tau$ through the projection $E \otimes_\mathbb Q\mathbb C\cong \prod_{\sigma \in \mathop{\mathrm{Hom}}(E, \mathbb C)} \mathbb C_\sigma \to \mathbb C_\tau$. This gives us a decomposition of the Hodge bundle as $$\Omega(A) \cong \bigoplus_{\tau\colon E \to \mathbb C} \Omega(A)_\tau \cong \bigoplus_{\tau \in \Phi} \Omega(A)_\tau.$$ The latter isomorphism holds because $\Omega(A)_\tau = 0$ for $\tau \not \in \Phi$. Let $A^t$ be the dual abelian variety of $A$. Then, we have canonical isomorphisms $$\Omega(A^t) = \mathop{\mathrm{Lie}}(A^t)^\vee \cong H^1(A, \ensuremath{\mathcal{O}}_A)^\vee \cong H^{0, 1}(A)^\vee = \overline{\Omega(A)}^\vee,$$ so that if $A$ is of CM-type $(\ensuremath{\mathcal{O}}_E, \Phi)$, then $A^t$ is of CM-type $(\ensuremath{\mathcal{O}}_E, \overline{\Phi})$. From this isomorphism, we also get a perfect Hermitian pairing $\Omega(A^t) \otimes \Omega(A) \to \mathbb C$.
Just as before, we can decompose $$\Omega(A^t) \cong \bigoplus_{\tau \in \overline{\Phi}} \Omega(A^t)_\tau.$$ The Hermitian pairing from before decomposes into a sum of orthogonal pairings $\Omega(A)_\tau \otimes \Omega(A^t)_{\overline{\tau}} \to \mathbb C$. Taking the determinant gives a Hermitian norm on the line bundle $$N(A, \tau) \coloneqq \det \Omega(A)_\tau \otimes \det \Omega(A^t)_{\overline{\tau}}.$$ We can extend $N(A, \tau)$ to an integral model of $A$. If $\ensuremath{\mathcal{A}}$ is the Néron model over $\ensuremath{\mathcal{O}}_K$ as before, with $K$ large enough that every embedding $E \to \overline{\mathbb Q}$ factors through $K$, define $$\Omega(\ensuremath{\mathcal{A}})_\tau \coloneqq \Omega(\ensuremath{\mathcal{A}}) \otimes_{\ensuremath{\mathcal{O}}_K \otimes \ensuremath{\mathcal{O}}_E, \tau} \ensuremath{\mathcal{O}}_K$$ for each $\tau\colon E \to K$. We define $\Omega(\ensuremath{\mathcal{A}}^t)_\tau$ analogously. For each archimedean place of $K$, we use the aforementioned Hermitian norm $\norm{\cdot}$ on the generic fiber of $\det \Omega(\ensuremath{\mathcal{A}})_\tau \otimes \det \Omega(\ensuremath{\mathcal{A}}^t)_{\overline{\tau}}$, and thus we get a metrized line bundle $$\widehat{\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{A}}, \tau)} \coloneqq (\det \Omega(\ensuremath{\mathcal{A}})_\tau \otimes \det \Omega(\ensuremath{\mathcal{A}}^t)_{\overline{\tau}}, \norm{\cdot}).$$ **Definition 7**. If $A$ is an abelian variety of CM-type $(E, \Phi)$ and $\tau\colon E \to \mathbb C$, then the $\tau$-part of the Faltings height of $A$ is $$h(A, \tau) \coloneqq \frac{1}{2[K:\mathbb Q]} \widehat{\deg}\widehat{\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{A}}, \tau)}.$$ Note that if $\tau \not \in \Phi$, then $\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{A}}, \tau) = 0$ and so the height contribution is $0$ as well. Just as with the Faltings height, this $\tau$-component is independent of the abelian variety itself. Thus, we will write $h(\Phi, \tau)$ for $h(A, \tau)$.
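The combinatorial bookkeeping of CM-types used throughout is elementary: $\mathop{\mathrm{Hom}}(E, \mathbb C)$ splits into $g$ conjugate pairs and a full CM-type selects one embedding from each pair. A short script can sanity-check this bookkeeping; the arithmetic of $E$ plays no role here, so embeddings are modeled as labeled pairs (an illustrative abstraction, not notation from the text).

```python
from itertools import product

# Model Hom(E, C) for [E : Q] = 2g as pairs (i, s): i indexes a conjugate
# pair of embeddings and s in {0, 1} picks one of the two; conjugation flips s.
g = 3

def conj(t):
    return (t[0], 1 - t[1])

# A full CM-type selects one embedding from each conjugate pair.
cm_types = [frozenset((i, s[i]) for i in range(g)) for s in product((0, 1), repeat=g)]
print(len(cm_types))  # 2**g = 8 full CM-types

def is_partial(S):
    # partial CM-type: no embedding appears together with its conjugate
    return all(conj(t) not in S for t in S)

assert all(is_partial(S) and len(S) == g for S in cm_types)

# A partial CM-type phi of size k has 2**(g - k) complementary partial CM-types.
Phi = cm_types[0]
phi = frozenset(list(Phi)[:2])
complements = [S - phi for S in cm_types if phi <= S]
print(len(complements))  # 2**(g - 2) = 2

# Nearby CM-types share g - 1 embeddings: every CM-type has exactly g neighbours.
nearby = [S for S in cm_types if len(Phi & S) == g - 1]
print(len(nearby))  # g = 3
```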
**Theorem 8** ([@Yua18 Thm 2.2]). *If $A$ has CM of type $(\ensuremath{\mathcal{O}}_E, \Phi)$, the height $h(A, \tau)$ depends only on the pair $(\Phi, \tau)$.* We call a pair of CM-types $(\Phi_1, \Phi_2)$ *nearby* if $\abs{\Phi_1 \cap \Phi_2} = g - 1$. Let $\tau_i = \Phi_i \backslash (\Phi_1 \cap \Phi_2)$ be the embedding where they differ. Then, the sum of the $\tau_i$-components of $h(\Phi_i)$ is independent of the choice of nearby CM-types. **Theorem 9** ([@Yua18 Thm. 2.7]). *The quantity $h(\Phi_1, \tau_1) + h(\Phi_2, \tau_2)$ is independent of the choice of nearby CM-types $(\Phi_1, \Phi_2)$.* Finally, we compare $h(\Phi)$ with its constituents $h(\Phi, \tau)$. **Definition 10**. Let $\Psi \subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ be any subset, not necessarily a (partial) CM-type. The reflex field $E_\Psi \subset E^{\mathop{\mathrm{Gal}}}$ is the subfield of the Galois closure of $E$ fixed by all automorphisms that fix $\Psi$. The trace map $\mathop{\mathrm{Tr}}_\Psi \colon E \to E_\Psi$ is given by $\mathop{\mathrm{Tr}}_\Psi(x) = \sum_{\tau \in \Psi} \tau(x)$. We can decompose $E_\Psi \otimes_\mathbb QE \cong \widetilde{E_\Psi} \times \widetilde{E_{{\Psi}^{\mathsf{c}}}}$, where the trace of the action of $E$ on $\widetilde{E_\Psi}$ is $\mathop{\mathrm{Tr}}_\Psi$ and the trace of the action on $\widetilde{E_{{\Psi}^{\mathsf{c}}}}$ is $\mathop{\mathrm{Tr}}_{{\Psi}^{\mathsf{c}}}$. Let $\ensuremath{\mathfrak{d}}_\Psi$ be the relative discriminant of the image of $\ensuremath{\mathcal{O}}_{E_\Psi} \otimes_\mathbb Z\ensuremath{\mathcal{O}}_E$ in $\widetilde{E_\Psi}$ over $\ensuremath{\mathcal{O}}_{E_\Psi}$, and let $d_\Psi$ be the positive generator of $N_{E_\Psi/\mathbb Q}(\ensuremath{\mathfrak{d}}_\Psi)$. **Theorem 11** ([@Yua18 Thm 2.3]).
*$$h(\Phi) - \sum_{\tau \in \Phi} h(\Phi, \tau) = \frac{-1}{4[E_\Phi:\mathbb Q]} \log (d_\Phi d_{\overline{\Phi}}).$$* # Quaternionic Shimura Varieties {#sec:Quaternionic Shimura Variety} We fix a totally real field $F/\mathbb Q$ of degree $g$. Let $\Sigma \subset \mathop{\mathrm{Hom}}(F, \mathbb R)$ be a subset of the archimedean places of $F$ and let $B/F$ be a quaternion algebra over $F$ that is split at infinity precisely at $\Sigma$, which means that $$B \otimes_\mathbb Q\mathbb R\cong \prod_{\tau \in \Sigma} M_2(\mathbb R)_\tau \times \prod_{\sigma \not \in \Sigma} \ensuremath{\mathbb{H}}_\sigma.$$ From this quaternion algebra, we will construct three quaternionic Shimura varieties and relate them. We are primarily interested in the Shimura variety $X$ associated with the group $G = \mathop{\mathrm{Res}}_{F/\mathbb Q} B^\times$. However, this Shimura datum does not parametrize abelian varieties and is only of abelian type. We will follow the approach of [@Car86] by finding a unitary Shimura datum with group $G'$ that has the same derived group as $G$ and is of PEL type, which gives us a nice description of the integral models of $X'$ in terms of abelian varieties. Then, following [@Kis10; @Kis18], we will give an integral model for $X$ by transporting the connected components of $X'$. We start with the primary Shimura variety of study, the one associated to the group $G = \mathop{\mathrm{Res}}_{F/\mathbb Q} (B^\times)$.
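For orientation, in the simplest case $F = \mathbb Q$, $B = M_2(\mathbb Q)$, and $\Sigma$ the single archimedean place, the construction below specializes (for a compact open subgroup $U \subset \mathop{\mathrm{GL}}_2(\ensuremath{\mathbb{A}}_f)$) to the familiar complex uniformization of modular curves: $$X_U(\mathbb C) = \mathop{\mathrm{GL}}_2(\mathbb Q) \backslash \ensuremath{\mathcal{H}}^\pm \times \mathop{\mathrm{GL}}_2(\ensuremath{\mathbb{A}}_f)/U.$$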
Let $h\colon \mathbb C^\times \to G(\mathbb R)$ be the homomorphism defined by $$h(a + bi) = \paren*{\prod_{\tau \in \Sigma} \begin{pmatrix} a & b\\-b & a \end{pmatrix}_\tau, \prod_{\sigma \not \in \Sigma} 1_\sigma} \in \prod_{\tau \in \Sigma} M_2(\mathbb R)_\tau \times \prod_{\sigma \not \in \Sigma} \ensuremath{\mathbb{H}}_\sigma.$$ We can identify the $G(\mathbb R)$-conjugacy class of $h$ with $(\ensuremath{\mathcal{H}}^{\pm})^{\Sigma}$, where $\ensuremath{\mathcal{H}}^\pm \coloneqq \mathbb C\backslash \mathbb R$, by sending $ghg^{-1} \mapsto \prod_{\tau \in \Sigma} g_\tau(i_\tau)$, where $g_\tau \in \mathop{\mathrm{GL}}_2(\mathbb R)_\tau$ is the $\tau$-component of $g$ and acts on $i_\tau$ through Möbius transformations. From the Shimura datum $(G, (\ensuremath{\mathcal{H}}^\pm)^{\Sigma})$, we get a Shimura variety $X_U$ for each open compact subgroup $U \subset G(\ensuremath{\mathbb{A}}_f)$ that has a complex uniformization given by $$X_U(\mathbb C) = G(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^\pm)^{ \Sigma} \times G(\ensuremath{\mathbb{A}}_f)/U.$$ The reflex field $E_X \coloneqq E(G, (\ensuremath{\mathcal{H}}^\pm)^{\Sigma})$ of $X$ is the subfield of $\mathbb C$ fixed by the automorphisms of $\mathbb C$ that fix $\Sigma \subset \mathop{\mathrm{Hom}}(F, \mathbb R)$. The Shimura variety $X_U$ has a canonical model over $E_X$ whose complex points have the above uniformization (see [@Mil90]). Let $N_{B/F}\colon B^\times \to F^\times$ be the reduced norm on $B$. The derived group is $G^{\mathop{\mathrm{der}}} = \ker\paren*{N_{B/F}\colon \mathop{\mathrm{Res}}_{F/\mathbb Q} B^\times \to \mathop{\mathrm{Res}}_{F/\mathbb Q} \ensuremath{\mathbb{G}}_m}$, the elements of $B^\times$ of reduced norm $1$, and its adjoint group is $\mathop{\mathrm{Res}}_{F/\mathbb Q} B^\times/F^\times$. We now introduce two auxiliary Shimura data that have the same derived group and the same adjoint group as $G$.
Thus, the connected components of their associated Shimura varieties are isomorphic to those of $X$. Let $E/F$ be a CM extension such that there is an embedding $E \hookrightarrow B$. Let $$G'' \coloneqq \mathop{\mathrm{Res}}_{F/\mathbb Q} (B^\times \times E^\times)/F^\times,$$ where $F^\times \hookrightarrow B^\times \times E^\times$ by $a \mapsto (a, a^{-1})$. Let $\Phi$ be a full CM-type of $E$, inducing an isomorphism $\Phi\colon E\otimes_\mathbb Q\mathbb R\overset{\sim}{\to} \mathbb C^g$. Split $\Phi$ into partial CM-types by $$\phi= \{\sigma \in \Phi : \sigma|_F \in \Sigma\},$$ and $$\phi' = \{\sigma \in \Phi : \sigma|_F \not \in \Sigma\},$$ so that $\phi$ and $\phi'$ are complementary partial CM-types. These partial CM-types give maps $\phi\colon E \to \mathbb C^{\Sigma}$ and $\phi'\colon E \to \mathbb C^{{\Sigma}^{\mathsf{c}}}$. Identify $E \otimes_\mathbb Q\mathbb R$ with $\mathbb C^g$ through the CM-type $\Phi$. Define the homomorphism $h_E\colon \mathbb C^\times \to (E \otimes_\mathbb Q\mathbb R)^\times$ by $$h_E(z) = (\phi(1), \phi'(z)) \in \mathbb C^g.$$ We can now define the homomorphism $h'' \colon \mathbb C^\times \to G''(\mathbb R)$ by composing $z \mapsto (h(z), h_E(z))$ with the quotient map by the $F^\times$-action. As before, the $G''(\mathbb R)$-conjugacy class of $h''$ can be identified with $(\ensuremath{\mathcal{H}}^{\pm})^\Sigma$. There exists a well-defined norm $\nu\colon G'' \to \mathop{\mathrm{Res}}_{F/\mathbb Q} \ensuremath{\mathbb{G}}_m$ given by mapping $$(b, e) \mapsto N_{B/F}(b)N_{E/F}(e).$$ We use this norm to define an algebraic subgroup $$G' \coloneqq G'' \times_{\mathop{\mathrm{Res}}_{F/\mathbb Q} \ensuremath{\mathbb{G}}_m}\ensuremath{\mathbb{G}}_m,$$ which consists of elements of $G''$ whose norm lies in $\mathbb Q^\times \subset F^\times$. The norm of $h''$ is $\nu(h''(a + bi)) = a^2 + b^2 \in \mathbb R^\times \subset (F \otimes_\mathbb Q\mathbb R)^\times$. Hence $h''$ factors through a map $h' \colon \mathbb C^\times \to G'(\mathbb R)$.
The $G'(\mathbb R)$-conjugacy class of $h'$ can be identified with $(\ensuremath{\mathcal{H}}^\pm)^{\Sigma}$ as well. For open compact subgroups $U' \subset G'(\ensuremath{\mathbb{A}}_f)$ and $U'' \subset G''(\ensuremath{\mathbb{A}}_f)$, we get Shimura varieties $X'_{U'}$ and $X''_{U''}$ with complex uniformizations $$X'_{U'}(\mathbb C) = G'(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^\pm)^{\Sigma} \times G'(\ensuremath{\mathbb{A}}_f) / U',$$ and $$X''_{U''}(\mathbb C) = G''(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^\pm)^{\Sigma} \times G''(\ensuremath{\mathbb{A}}_f) / U''.$$ The reflex fields of these Shimura varieties $E_{X'} \coloneqq E(G',(\ensuremath{\mathcal{H}}^\pm)^{\Sigma})$ and $E_{X''} \coloneqq E(G'', (\ensuremath{\mathcal{H}}^\pm)^{\Sigma})$ are the same, and equal to the subfield of $\mathbb C$ fixed by all automorphisms of $\mathbb C$ fixing $\phi' \subset \mathop{\mathrm{Hom}}(E, \mathbb C)$. If an automorphism of $\mathbb C$ fixes $\phi'$, then it fixes $\Sigma$ as well. Therefore, the reflex field $E_X$ is a subfield of $E_{X'} = E_{X''}$. We now describe the abelian varieties which $X'$ parametrizes. Let $V = B$ be viewed as a $\mathbb Q$-vector space with a natural left action by $E$, and choose $\gamma \in E \subset B$ so that $\overline{\gamma} = -\gamma$. We define a pairing $\psi\colon V \times V \to \mathbb Q$ by $$\psi(v, w) = \mathop{\mathrm{Tr}}_{F/\mathbb Q} \mathop{\mathrm{Tr}}_{B/F}(\gamma v \overline{w}),$$ where $w \mapsto \overline{w}$ is the canonical involution on $B$. This is a nondegenerate alternating form, and $\psi(ev, w) = \psi(v, e^*w)$ for all $v, w \in V$ and $e \in E$, where the involution $e^* = \overline{e}$ is just complex conjugation on $E$. We define a left action of $(B^\times \times E^\times)/F^\times$ on $V$ by setting $(b, e) \cdot v = ev\overline{b}$.
In this way, we can identify $G'$ with the $E$-linear automorphisms of $V$ with rational norm $$G' = \{g \in \mathop{\mathrm{GL}}_{E}(V) : \psi(gv, gw) = \nu(g) \cdot \psi(v, w)\text{ for some }\nu(g) \in \ensuremath{\mathbb{G}}_m\}.$$ The action of $\mathbb C$ on $V_\mathbb R$ through the morphism $h'$ induces a Hodge structure on $V$ of weight $1$. We can choose $\gamma$ such that $\psi$ induces a polarization satisfying $\psi(v, h'(i)v) \ge 0$ for all $v \in V_\mathbb R$, and hence $(V, \psi)$ is a symplectic $(E, *)$-module. Thus, by [@Mil05 Thm 8.17], the pair $(G', (\ensuremath{\mathcal{H}}^\pm)^{\Sigma})$ is a PEL Shimura datum and, for $U'$ small enough, $X'_{U'}$ represents the functor whose $S$-points, for a test scheme $S$ over $E_{X'}$, are isomorphism classes of quadruples $(A, \iota, \theta, \kappa U')$ where 1. $A/S$ is an abelian scheme of relative dimension $2g$; 2. $\iota\colon E \to \mathop{\mathrm{End}}(A/S)\otimes_\mathbb Z\mathbb Q$ is an injection such that the action of $\iota(E)$ on $\mathop{\mathrm{Lie}}(A/S)$ has trace given by $$\mathop{\mathrm{Tr}}(\ell, \mathop{\mathrm{Lie}}(A/S)) = \mathop{\mathrm{Tr}}_{\phi\sqcup \phi'}(\ell) + \mathop{\mathrm{Tr}}_{\overline{\phi} \sqcup \phi'}(\ell)$$ for all $\ell \in E$, where for a CM-type $(E, \Phi)$ the trace map $\mathop{\mathrm{Tr}}_\Phi\colon E \to \mathbb C$ is defined as $$\mathop{\mathrm{Tr}}_\Phi(e) = \sum_{\sigma \in \Phi} \sigma(e);$$ 3. $\theta\colon A \to A^t$ is a polarization whose Rosati involution on $\mathop{\mathrm{End}}(A/S)_\mathbb Q$ induces the involution $\gamma \mapsto \gamma^*$ on $E$; 4. and $\kappa\colon H_1(A, \ensuremath{\mathbb{A}}_f) \simeq V_{\ensuremath{\mathbb{A}}_f}$ is an isomorphism of $\ensuremath{\mathbb{A}}_{E, f}$-modules that respects the bilinear forms on both factors, up to an element in $\ensuremath{\mathbb{A}}_f^\times$.
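The claimed properties of $\psi$, that it is alternating and satisfies $\psi(ev, w) = \psi(v, e^* w)$, can be verified numerically in a toy rational model: take $B$ to be the quaternion algebra $(-1, -2)$ over $\mathbb Q$ with $E = \mathbb Q(i)$ and $\gamma = i$, so that $\overline{\gamma} = -\gamma$. These parameter choices are purely illustrative and are not the $\gamma$ or $B$ fixed in the text.

```python
from fractions import Fraction as Fr

# Quaternion algebra B = (ALPHA, BETA / Q): i^2 = ALPHA, j^2 = BETA, ji = -ij.
ALPHA, BETA = Fr(-1), Fr(-2)

def mul(x, y):
    # x = (a, b, c, d) represents a + b*i + c*j + d*ij
    a, b, c, d = x
    e, f, g, h = y
    return (
        a*e + ALPHA*b*f + BETA*c*g - ALPHA*BETA*d*h,
        a*f + b*e - BETA*c*h + BETA*d*g,
        a*g + c*e + ALPHA*b*h - ALPHA*d*f,
        a*h + d*e + b*g - c*f,
    )

def conj(x):
    # canonical involution: i, j, ij all change sign
    a, b, c, d = x
    return (a, -b, -c, -d)

def trd(x):
    # reduced trace
    return 2 * x[0]

gamma = (Fr(0), Fr(1), Fr(0), Fr(0))  # gamma = i in E = Q(i), conj(gamma) = -gamma

def psi(v, w):
    # psi(v, w) = Trd(gamma * v * conj(w)); Tr_{F/Q} is trivial since F = Q
    return trd(mul(gamma, mul(v, conj(w))))

v = (Fr(1), Fr(2), Fr(-3), Fr(1, 2))
w = (Fr(0), Fr(1), Fr(4), Fr(-1))
e = (Fr(3), Fr(-2), Fr(0), Fr(0))  # an element of E = Q + Q*i

assert psi(v, v) == 0 and psi(v, w) == -psi(w, v)    # alternating
assert psi(mul(e, v), w) == psi(v, mul(conj(e), w))  # psi(ev, w) = psi(v, e*w)
print("psi is alternating and E-hermitian on the sampled vectors")
```

The verification only uses that $\gamma$ commutes with $E$, that the reduced trace is cyclic, and that $\overline{\gamma} = -\gamma$, which is exactly why the properties hold for any choice of $\gamma$ as in the text.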
# Integral Models {#sec:Integral Model} To construct integral models for these Shimura varieties, we first use the PEL structure of $X'$ to get an integral model $\ensuremath{\mathcal{X}}'$ which parametrizes abelian schemes. Then we will transfer the integral model of $X'$ to construct integral models for $X$ and $X''$, as done in [@Kis10; @Kis18]. ## PEL Type $\ensuremath{\mathcal{X}}'$ We construct $\ensuremath{\mathcal{X}}'$ following [@Rap96; @Pap13]. Let $p \in \mathbb Z$ be a prime number. Let $\ensuremath{\mathfrak{p}}$ be a prime of $F$ lying above $p$. Set $\ensuremath{\mathcal{O}}_{E, \ensuremath{\mathfrak{p}}} = \ensuremath{\mathcal{O}}_{E} \otimes_{\ensuremath{\mathcal{O}}_F} \ensuremath{\mathcal{O}}_{F, \ensuremath{\mathfrak{p}}}$. - If $B$ is unramified at $\ensuremath{\mathfrak{p}}$, then $B_\ensuremath{\mathfrak{p}} \cong M_2(F_\ensuremath{\mathfrak{p}})$. Choose an isomorphism such that $\ensuremath{\mathcal{O}}_{E, \ensuremath{\mathfrak{p}}} \subset M_2(\ensuremath{\mathcal{O}}_{F, \ensuremath{\mathfrak{p}}})$ and set $\Lambda_\ensuremath{\mathfrak{p}} = M_2(\ensuremath{\mathcal{O}}_{F, \ensuremath{\mathfrak{p}}})$. - If $B$ is ramified at $\ensuremath{\mathfrak{p}}$, then $B_\ensuremath{\mathfrak{p}}$ is a division algebra over $F_\ensuremath{\mathfrak{p}}$ and there is a unique choice of a maximal order $\ensuremath{\mathcal{O}}_{B, \ensuremath{\mathfrak{p}}}$, which must contain $\ensuremath{\mathcal{O}}_{E, \ensuremath{\mathfrak{p}}}$. We set $\Lambda_\ensuremath{\mathfrak{p}} = \ensuremath{\mathcal{O}}_{B, \ensuremath{\mathfrak{p}}}$. 
From this choice of $\ensuremath{\mathcal{O}}_{E, \ensuremath{\mathfrak{p}}}$-lattice $\Lambda_\ensuremath{\mathfrak{p}}$, construct a chain of lattices by taking $$\ensuremath{\mathcal{L}}_{\ensuremath{\mathfrak{p}}} = \{\cdots \subset \omega_{\ensuremath{\mathfrak{q}}}\Lambda_{\ensuremath{\mathfrak{p}}} \subset \Lambda_{\ensuremath{\mathfrak{p}}} \subset \omega_{\ensuremath{\mathfrak{q}}}^{-1} \Lambda_{\ensuremath{\mathfrak{p}}} \subset \cdots\},$$ where $\omega_{\ensuremath{\mathfrak{q}}}$ is a uniformizer of $E_\ensuremath{\mathfrak{q}}$ for $\ensuremath{\mathfrak{q}}$ a prime of $E$ above $\ensuremath{\mathfrak{p}}$, taken to be a uniformizer of $F_\ensuremath{\mathfrak{p}}$ if $\ensuremath{\mathfrak{q}}$ is unramified over $\ensuremath{\mathfrak{p}}$. From these chains, we can construct a multichain $\ensuremath{\mathcal{L}}_p$ of $\ensuremath{\mathcal{O}}_{E} \otimes_\mathbb Z\mathbb Z_p$-lattices, which consists of all lattices $\Lambda_p$ that can be written as $$\Lambda_p = \oplus_{\ensuremath{\mathfrak{p}} \mid p} \Lambda_\ensuremath{\mathfrak{p}}, \quad \Lambda_\ensuremath{\mathfrak{p}} \in \ensuremath{\mathcal{L}}_{\ensuremath{\mathfrak{p}}}.$$ We now require that $E$ is ramified at all finite primes $\ensuremath{\mathfrak{p}}$ of $F$ at which $B$ ramifies. Let $\delta_{\ensuremath{\mathfrak{p}}/p} \in F_\ensuremath{\mathfrak{p}}$ be a generator for the different ideal $\ensuremath{\mathfrak{d}}_{F_\ensuremath{\mathfrak{p}}/\mathbb Q_p}$ of $F_\ensuremath{\mathfrak{p}}/\mathbb Q_p$. **Lemma 12**.
*We can choose $\gamma \in E^\times$ such that* - *$\gamma = -\overline{\gamma}$;* - *$\gamma \in \delta_{\ensuremath{\mathfrak{p}}/p}^{-1}\ensuremath{\mathcal{O}}_{E, \ensuremath{\mathfrak{p}}}^\times$;* - *and $\psi(v, h'(i)v) > 0$ for all $v \in V_\mathbb R\backslash \{0\}$.* *Moreover, under this choice of $\gamma$, the multichain of $\ensuremath{\mathcal{O}}_E \otimes_\mathbb Z\mathbb Z_p$-lattices $\ensuremath{\mathcal{L}}_p$ is self-dual with respect to the $\mathbb Q_p$-linear alternating form $\psi_p \colon V_p \times V_p \to \mathbb Q_p$.* *Proof.* The anti-symmetric elements of $E$ are dense in the anti-symmetric elements of $(E \otimes_\mathbb Q\mathbb Q_p) \oplus (E \otimes_\mathbb Q\mathbb R)$, and the given conditions cut out a non-empty open subset, so we can find such a $\gamma$. To show that $\ensuremath{\mathcal{L}}_p$ is a self-dual multichain, it suffices to look locally at each prime $\ensuremath{\mathfrak{p}}$ of $F$. The alternating form $\psi_p$ is the sum over all primes $\ensuremath{\mathfrak{p}}$ of $$\psi_\ensuremath{\mathfrak{p}}(v, w) = \mathop{\mathrm{Tr}}_{F_\ensuremath{\mathfrak{p}}/\mathbb Q_p} \mathop{\mathrm{Tr}}_{B_\ensuremath{\mathfrak{p}}/F_\ensuremath{\mathfrak{p}}}(\gamma v \overline{w}).$$ Thus, the dual of the lattice $\Lambda_\ensuremath{\mathfrak{p}}$ with respect to $\psi_\ensuremath{\mathfrak{p}}$ is $$\Lambda_{\ensuremath{\mathfrak{p}}}^\vee = \{w \in V_\ensuremath{\mathfrak{p}} : \psi_\ensuremath{\mathfrak{p}}(v, w) \in \mathbb Z_p \text{ for all } v \in \Lambda_\ensuremath{\mathfrak{p}}\} = \{w \in V_\ensuremath{\mathfrak{p}} : \mathop{\mathrm{Tr}}_{B_\ensuremath{\mathfrak{p}}/F_\ensuremath{\mathfrak{p}}}(\gamma v \overline{w}) \in \delta_{\ensuremath{\mathfrak{p}}/p}^{-1}\ensuremath{\mathcal{O}}_{F, \ensuremath{\mathfrak{p}}}\}.$$ It suffices to take $\delta = \delta_{\ensuremath{\mathfrak{p}}/p}^{-1}$. First assume that $\ensuremath{\mathfrak{p}}$ is unramified in $B$.
Under the isomorphism $B_\ensuremath{\mathfrak{p}} \cong M_2(F_\ensuremath{\mathfrak{p}})$, the involution on $B_\ensuremath{\mathfrak{p}}$ is given by $\overline{\begin{psmallmatrix} a & b\\c & d \end{psmallmatrix}} = \begin{psmallmatrix} d & -b \\ -c & a \end{psmallmatrix}$ and the pairing is $$\mathop{\mathrm{Tr}}_{B_\ensuremath{\mathfrak{p}}/F_\ensuremath{\mathfrak{p}}}\paren*{\delta\begin{psmallmatrix}a & b\\c& d\end{psmallmatrix}\overline{\begin{psmallmatrix}a'&b'\\c'&d'\end{psmallmatrix}}} = \delta_{\ensuremath{\mathfrak{p}}/p}^{-1}(ad' + bc' + a'd + b'c).$$ From this, we see that $\Lambda_\ensuremath{\mathfrak{p}}^\vee = \Lambda_\ensuremath{\mathfrak{p}} = M_2(\ensuremath{\mathcal{O}}_{F, \ensuremath{\mathfrak{p}}})$. If $\ensuremath{\mathfrak{p}}$ is ramified in $B$, we have required that $E$ is also ramified at $\ensuremath{\mathfrak{p}}$, so we can find an element $j \in B_\ensuremath{\mathfrak{p}}$ such that $j^2 \in \ensuremath{\mathcal{O}}_{F, \ensuremath{\mathfrak{p}}}^\times$ and $je = \overline{e}j$ for all $e \in E_\ensuremath{\mathfrak{p}}$. For this choice of $j$, the unique maximal order can be written as $\ensuremath{\mathcal{O}}_{B, \ensuremath{\mathfrak{p}}} = \ensuremath{\mathcal{O}}_{E, \ensuremath{\mathfrak{p}}} + \ensuremath{\mathcal{O}}_{E, \ensuremath{\mathfrak{p}}}j$. For $a + bj \in E_\ensuremath{\mathfrak{p}} + E_\ensuremath{\mathfrak{p}}j = B_\ensuremath{\mathfrak{p}}$, the trace is $\mathop{\mathrm{Tr}}_{B_\ensuremath{\mathfrak{p}}/F_\ensuremath{\mathfrak{p}}} (a + bj) = \mathop{\mathrm{Tr}}_{E_\ensuremath{\mathfrak{p}}/F_\ensuremath{\mathfrak{p}}} (a)$. 
We thus have $$\mathop{\mathrm{Tr}}_{B_\ensuremath{\mathfrak{p}}/F_\ensuremath{\mathfrak{p}}}\paren*{\gamma(a + bj)(\overline{a' + b'j})} = \delta_{\ensuremath{\mathfrak{p}}/p}^{-1} \mathop{\mathrm{Tr}}_{E_\ensuremath{\mathfrak{p}}/F_\ensuremath{\mathfrak{p}}} (a\overline{a'} - b\overline{b'}j^2).$$ From this, we get that $\Lambda_{\ensuremath{\mathfrak{p}}}^\vee = \omega_\ensuremath{\mathfrak{q}}^{-1}\Lambda_\ensuremath{\mathfrak{p}} \in \ensuremath{\mathcal{L}}_\ensuremath{\mathfrak{p}}$. ◻ Let $U'_p = U'_p(0) \subset G'(\mathbb Q_p)$ be the parahoric subgroup of elements fixing the multichain $\ensuremath{\mathcal{L}}_p$. Let $U'^p\subset G'(\ensuremath{\mathbb{A}}_f^p)$ be a compact open subgroup and set $U' \coloneqq U'_pU'^p \subset G'(\ensuremath{\mathbb{A}}_f)$. Choose a place $v'$ of $\ensuremath{\mathcal{O}}_{E_{X'}}$ lying above $p$. From the integral data of $(\ensuremath{\mathcal{L}}_p, U'_p)$, consider the functor $\ensuremath{\mathcal{F}}'_{U'_pU'^p}$ which associates to a test scheme $\ensuremath{\mathcal{S}}$ over $\mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{E_{X'}, v'}$ the set of isomorphism classes of quadruples $(\ensuremath{\mathcal{A}}, \iota, \theta, \kappa U'^p)$ where: 1. $\ensuremath{\mathcal{A}}/\ensuremath{\mathcal{S}}$ is an abelian scheme of relative dimension $2g$, taken up to prime-to-$p$ isogeny; 2. $\iota\colon \ensuremath{\mathcal{O}}_{E} \otimes_\mathbb Z\mathbb Z_{(p)} \to \mathop{\mathrm{End}}(\ensuremath{\mathcal{A}}/\ensuremath{\mathcal{S}}) \otimes_\mathbb Z\mathbb Z_{(p)}$ is a homomorphism satisfying the following Kottwitz condition: there is an identity of polynomial functions $$\det_{\ensuremath{\mathcal{O}}_S} (\iota(\ell); \mathop{\mathrm{Lie}}\ensuremath{\mathcal{A}}/S) = \prod_{\varphi\in \phi} \varphi(\ell)\overline{\varphi(\ell)} \prod_{\varphi' \in \phi'} \varphi'(\ell)^2;$$ 3. 
$\theta\colon \ensuremath{\mathcal{A}} \to \ensuremath{\mathcal{A}}^t$ is a principal polarization whose Rosati involution on $\mathop{\mathrm{End}}(\ensuremath{\mathcal{A}}/S) \otimes \mathbb Z_{(p)}$ induces complex conjugation on $\ensuremath{\mathcal{O}}_{E, (p)}$; 4. and $\kappa\colon H_1(\ensuremath{\mathcal{A}}, \ensuremath{\mathbb{A}}_f^p) \simeq V_{\ensuremath{\mathbb{A}}^p_f}$ is an isomorphism of skew $\ensuremath{\mathcal{O}}_{E, (p)}\otimes \ensuremath{\mathbb{A}}^p_f$-modules that respects the bilinear forms up to a constant in $(\ensuremath{\mathbb{A}}_f^p)^\times$. **Theorem 13**. *If $U'^p$ is sufficiently small, then the functor $\ensuremath{\mathcal{F}}'_{U'}$ is represented by a quasi-projective scheme $\ensuremath{\mathcal{M}}_{U'}$ over $\ensuremath{\mathcal{O}}_{E_{X'}, v'}$ whose generic fiber is $X'_{U'}$. Moreover, we have:* 1. *If $p$ is unramified in $E$, the scheme $\ensuremath{\mathcal{M}}_{U'}$ is smooth over $\ensuremath{\mathcal{O}}_{E_{X'}, v'}$;* 2. *The $p$-adic completion of $\ensuremath{\mathcal{M}}_{U'}$ along the basic locus has a $p$-adic uniformization by a Rapoport--Zink space.* *Proof.* This is the PEL moduli problem studied by [@Kot92] and [@Rap96]. The first case is covered by [@Kot92 Sec. 5] and the second case is covered by [@Rap96 Thm 6.50]. ◻ If $p$ is unramified in $E$, we set our integral model $\ensuremath{\mathcal{X}}'_{U'}$ to be $\ensuremath{\mathcal{M}}_{U'}$. If $p$ is ramified in $E$, then $\ensuremath{\mathcal{M}}_{U'}$ is not necessarily flat. We explain how to construct a flat model following [@Pap13]. We first construct the corresponding local model for $\ensuremath{\mathcal{F}}'_{U'}$ following [@Rap96 Def. 3.27]. 
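Before writing down the local model, it may help to record the combinatorics of the multichain $\ensuremath{\mathcal{L}}_p$ constructed at the start of this section. The sketch below is purely illustrative bookkeeping of our own (the names `multichain_members` and `scale` are not from any reference): a member of the chain $\ensuremath{\mathcal{L}}_{\ensuremath{\mathfrak{p}}}$ is determined by the integer $n$ with $\Lambda = \omega_{\ensuremath{\mathfrak{q}}}^{n}\Lambda_{\ensuremath{\mathfrak{p}}}$, so a member of the multichain is one such index per prime $\ensuremath{\mathfrak{p}} \mid p$.

```python
from itertools import product

# A member of the chain L_frakp is omega_q^n * Lambda_frakp for some n in Z,
# so it is determined by the integer n.  A member of the multichain L_p is a
# direct sum over the primes above p, i.e. one index per prime.

def multichain_members(primes_above_p, index_range):
    """Enumerate multichain members as dicts {prime: index}, truncating each
    (infinite) chain to the finite window index_range."""
    keys = list(primes_above_p)
    return [dict(zip(keys, idx)) for idx in product(index_range, repeat=len(keys))]

def scale(member, prime, steps=1):
    """Multiply the summand at `prime` by omega_q^steps."""
    out = dict(member)
    out[prime] += steps
    return out

# Example: two primes above p, indices restricted to the window {-1, 0, 1}.
members = multichain_members(["p1", "p2"], range(-1, 2))
assert len(members) == 9                     # 3 choices per prime
assert scale({"p1": 0, "p2": 0}, "p1") == {"p1": 1, "p2": 0}
```

The point of the sketch is only that the multichain is a product of the individual chains, indexed prime by prime.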
Let $\ensuremath{\mathbb{M}}^{\mathrm{naive}}$ be the functor which associates to a locally Noetherian scheme $\ensuremath{\mathcal{S}}$ over $\mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{E_{X'}, v'}$ the set of $\ensuremath{\mathcal{O}}_{E, p} \otimes_{\mathbb Z_p} \ensuremath{\mathcal{O}}_S$-submodules $t_\Lambda \subset \Lambda_p \otimes_{\mathbb Z_p} \ensuremath{\mathcal{O}}_S$ such that 1. $t_\Lambda$ is a finite, locally free $\ensuremath{\mathcal{O}}_S$-module; 2. For all $\ell \in \ensuremath{\mathcal{O}}_{E, p}$, there is an identity of polynomial functions $$\det_{\ensuremath{\mathcal{O}}_S}(\ell; t_\Lambda) = \prod_{\varphi\in \phi} \varphi(\ell)\overline{\varphi(\ell)} \prod_{\varphi' \in \phi'} \varphi'(\ell)^2;$$ 3. and $t_\Lambda$ is totally isotropic under the nondegenerate alternating pairing $$\psi_{p, \ensuremath{\mathcal{O}}_S}\colon (\Lambda_p \otimes_{\mathbb Z_p} \ensuremath{\mathcal{O}}_S) \times (\Lambda_p \otimes_{\mathbb Z_p} \ensuremath{\mathcal{O}}_S) \to \ensuremath{\mathcal{O}}_S.$$ This functor is represented by a closed subscheme of a Grassmannian. Let $\ensuremath{\mathcal{P}}/\mathop{\mathrm{Spec}}\mathbb Z_p$ be the group scheme whose $S$-points are the automorphisms of the multichain $\ensuremath{\mathcal{L}}_p \otimes_{\mathbb Z_p} \ensuremath{\mathcal{O}}_S$ that respect the similitude $\psi_p$. Then by [@Rap96 3.30], there is a smooth morphism of algebraic stacks of relative dimension $\dim G' = 5g$: $$\ensuremath{\mathcal{M}}_{U'} \to \left[\ensuremath{\mathbb{M}}^{\mathrm{naive}}/\ensuremath{\mathcal{P}}_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}}\right].$$ From this, we see that $\ensuremath{\mathbb{M}}^{\mathrm{naive}}$ controls the structure of $\ensuremath{\mathcal{M}}_{U'}$. Although $\ensuremath{\mathbb{M}}^{\mathrm{naive}}$ was conjectured to be flat, it was shown in [@Pap00] that it is not necessarily flat when $p$ is ramified in the PEL datum. 
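The determinant condition in (2) can be checked symbolically in a toy case. The sketch below is our own illustration, not the paper's setup: take $E = \mathbb{Q}(i)$ with one pair of conjugate embeddings $\varphi, \overline{\varphi}$ and no $\varphi'$, and let $\ell = x + yi$ act on a rank-two module through both embeddings, so the determinant should be $\varphi(\ell)\overline{\varphi(\ell)} = x^2 + y^2$.

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
ell = x + sp.I * y                          # a generic element of E = Q(i)

# ell acts on a rank-2 module through the two conjugate embeddings.
action = sp.diag(ell, sp.conjugate(ell))

lhs = sp.expand(action.det())               # det(ell; t_Lambda) in this toy case
rhs = sp.expand(ell * sp.conjugate(ell))    # phi(ell) * phibar(ell)

assert sp.simplify(lhs - rhs) == 0
assert sp.simplify(lhs - (x**2 + y**2)) == 0
```

The identity is a polynomial identity in the coordinates $x, y$ of $\ell$, which is exactly the sense in which the Kottwitz condition is an identity of polynomial functions.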
To remedy this, take $\ensuremath{\mathbb{M}}^{\mathrm{loc}}$ to be the flat scheme-theoretic closure of $\ensuremath{\mathbb{M}}^{\mathrm{naive}} \otimes_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}} E_{X', v'}$ in $\ensuremath{\mathbb{M}}^{\mathrm{naive}}$. **Proposition 14** ([@Pap13 Thm. 9.1]). *The scheme $\ensuremath{\mathbb{M}}^{\mathrm{loc}}$ is normal and Cohen--Macaulay with reduced special fiber. It also admits an action by $\ensuremath{\mathcal{P}}_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}}$ such that the natural inclusion $\ensuremath{\mathbb{M}}^{\mathrm{loc}} \to \ensuremath{\mathbb{M}}^{\mathrm{naive}}$ is $\ensuremath{\mathcal{P}}_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}}$-equivariant.* With this flat local model, we can define a flat integral model $\ensuremath{\mathcal{X}}'_{U'}$ for $X'_{U'}$ by pulling back $\ensuremath{\mathcal{M}}_{U'}$ along $\left[\ensuremath{\mathbb{M}}^{\mathrm{loc}}/\ensuremath{\mathcal{P}}_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}}\right] \to \left[\ensuremath{\mathbb{M}}^{\mathrm{naive}}/\ensuremath{\mathcal{P}}_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}}\right]$, giving the following cartesian diagram. $$\begin{tikzcd} \ensuremath{\mathcal{X}}'_{U'} \arrow[r] \arrow[d] &\ensuremath{\mathcal{M}}_{U'} \arrow[d] \\ \left[\ensuremath{\mathbb{M}}^{\mathrm{loc}}/\ensuremath{\mathcal{P}}_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}}\right] \arrow[r] & \left[\ensuremath{\mathbb{M}}^{\mathrm{naive}}/\ensuremath{\mathcal{P}}_{\ensuremath{\mathcal{O}}_{E_{X'}, v'}}\right] \end{tikzcd}$$ The schemes $\ensuremath{\mathcal{X}}'_{U'}$ and $\ensuremath{\mathcal{M}}_{U'}$ have generic fiber equal to $X'_{U'}$, and since $\ensuremath{\mathbb{M}}^{\mathrm{loc}}$ is flat, our integral model $\ensuremath{\mathcal{X}}'_{U'}$ of $X'_{U'}$ is flat as well. We have defined integral models $\ensuremath{\mathcal{X}}'_{U'}$ when $U' = U'_pU'^p$ where $U'_p = U'_p(0)$ is the maximal parahoric level and $U'^p$ is sufficiently small. We want to define an integral model $\ensuremath{\mathcal{X}}'_{U'}$ when $U' = \prod_p U'_p(0)$ is maximal at all primes. We now show how to construct these integral models when $U'^p$ is big. 
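Returning briefly to the local model: the flat scheme-theoretic closure used to define $\ensuremath{\mathbb{M}}^{\mathrm{loc}}$ can be visualized in a toy example of our own (not taken from the references), in which the closure discards a component supported in the special fiber.

```latex
% Toy example: a non-flat Z_p-scheme whose flat closure discards a component.
\[
  \mathbb{M}^{\mathrm{naive}}_{\mathrm{toy}}
    = \operatorname{Spec} \mathbb{Z}_p[x]/(px),
  \qquad
  \mathbb{M}^{\mathrm{naive}}_{\mathrm{toy}} \otimes_{\mathbb{Z}_p} \mathbb{Q}_p
    = \operatorname{Spec} \mathbb{Q}_p[x]/(x).
\]
% The element x is p-torsion, so the scheme is not flat over Z_p.  The
% scheme-theoretic closure of the generic fiber is V(x) = Spec Z_p, which is
% flat over Z_p; the extra component V(p) (an affine line in the special
% fiber) is discarded by the closure.
```

The same mechanism, in much greater generality, is what replaces $\ensuremath{\mathbb{M}}^{\mathrm{naive}}$ by the flat model $\ensuremath{\mathbb{M}}^{\mathrm{loc}}$.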
Suppose that $p$ is unramified in $E$ so that $\ensuremath{\mathcal{X'}}_{U'}$ is smooth. For an integer $m \ge 0$, let $U'_p(m)$ denote $$U_p'(m) \coloneqq \set{g \in G'(\mathbb Q_p) : g\Lambda_p = \Lambda_p, g|_{p^{-m}\Lambda_p/\Lambda_p} \equiv 1},$$ the subgroup of $U'_p$ that acts as the identity on $p^{-m}\Lambda_p/\Lambda_p$. The nondegenerate alternating form $\psi_p$ on $\Lambda_p$ gives rise to a nondegenerate $*$-hermitian alternating form $$\inner{,}_{p, m}\colon p^{-m}\Lambda_p/\Lambda_p \times p^{-m}\Lambda_p/\Lambda_p \to p^{-m}\mathbb Z_p/\mathbb Z_p$$ given by $$\inner{x, y}_{p, m} = \psi_p(p^mx, y).$$ Then for $U'^p$ small enough, Mantovan (see [@Man05]) defines an integral model of $X'_{U'_p(m)U'^p}$ over $\mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{E_{X'}, v'}$ by using the notion of a full set of sections. Let $\ensuremath{\mathcal{F}}'_{U'_p(m)U'^p}$ be the functor over $\ensuremath{\mathcal{F}}'_{U'_pU'^p} = \ensuremath{\mathcal{F}}'_{U'_p(0)U'^p}$ which associates to a locally Noetherian scheme $S$ over $\ensuremath{\mathcal{O}}_{E_{X'}, v'}$ the set of isomorphism classes of data $(\ensuremath{\mathcal{A}}, \iota, \theta, \kappa, \alpha)$ where $(\ensuremath{\mathcal{A}}, \iota, \theta, \kappa)$ are as in the functor $\ensuremath{\mathcal{F}}'_{U'_p(0)U'^p}$ and $$\alpha\colon p^{-m}\Lambda_p/\Lambda_p \to \ensuremath{\mathcal{A}}[p^m](S)$$ is an $\ensuremath{\mathcal{O}}_{E, p}$-linear homomorphism such that $\set{\alpha(x) : x \in p^{-m}\Lambda_p/\Lambda_p}$ is a full set of sections of $\ensuremath{\mathcal{A}}[p^m]$ and $\alpha$ maps the pairing $\inner{,}_{p, m}$ to the Weil pairing on $\ensuremath{\mathcal{A}}[p^m]$, up to a scalar multiple in $(\mathbb Z/p^m\mathbb Z)^\times$. **Theorem 15** ([@Man05 Prop. 15]). 
*The functor $\ensuremath{\mathcal{F}}'_{U'_p(m)U'^p}$ is represented by a smooth scheme $\ensuremath{\mathcal{X}}'_{U'_p(m)U'^p}$ over $\ensuremath{\mathcal{O}}_{E_{X'}, v'}$.* To make explicit the condition that $U'^p$ be sufficiently small, fix a lattice $\Lambda \subset V$ over $\mathbb Z$ such that $\Lambda \otimes_{\mathbb Z} \mathbb Z_p \cong \Lambda_p$. For $N \in \mathbb N$, let $$U'(N) \coloneqq \set{g \in G'(\ensuremath{\mathbb{A}}_f) : g|_{\Lambda/N\Lambda} \equiv 1}.$$ We now remove the restriction that the moduli problem be unramified above $p$. **Proposition 16**. *If $U'_p(m)U'^p \subset U'(N)$ is a normal subgroup such that $N \ge 3$ and $m = 0$ when $p$ is ramified in $E_{X'}$, then the functor $\ensuremath{\mathcal{F}}'_{U'_p(m)U'^p}$ is representable by a normal scheme with reduced special fiber. Moreover, if $p$ is unramified in $E_{X'}$, then the functor $\ensuremath{\mathcal{F}}'_{U'_p(m)U'^p}$ is representable by a smooth scheme.* *Proof.* Choose a normal subgroup $U'^p_0 \subset U'^p$ sufficiently small so that the functor $\ensuremath{\mathcal{F}}'_{U'_p(m)U'^p_0}$ is represented by the scheme $\ensuremath{\mathcal{X}}'_{U'_p(m)U'^p_0}$. There is an action of $U'^p \subset U'(N)^p$ on this scheme and it suffices to show that $U'(N)^p$ acts freely on $\ensuremath{\mathcal{X}}'_{U'_p(m)U'^p_0}$. A point $x \in \ensuremath{\mathcal{X}}'_{U'_p(m)U'^p_0}$ corresponds to a quintuple $(\ensuremath{\mathcal{A}}, \iota, \theta, \kappa, \alpha)$, where the full set of sections $\alpha$ is trivial when $m = 0$. Suppose that $g \in U'(N)^p$ fixes $x$. We may choose our $\ensuremath{\mathcal{A}}$ and $\kappa$ so that $H_1(\ensuremath{\mathcal{A}}, \ensuremath{\mathbb{A}}_f^p) \cong \Lambda_{\ensuremath{\mathbb{A}}_f^p}$ and $\kappa$ induces an isomorphism between the two. The element $g$ acts via $(\ensuremath{\mathcal{A}}, \iota, \theta, \kappa, \alpha)g = (\ensuremath{\mathcal{A}}, \iota, \theta, g^{-1} \circ \kappa, \alpha)$. 
Thus, there exist an isomorphism $f$ of $\ensuremath{\mathcal{A}}$ and an element $g' \in U'^p_0$ such that $(gg')^{-1} \circ \kappa = \kappa \circ f_*\colon H_1(\ensuremath{\mathcal{A}}, \ensuremath{\mathbb{A}}_f^p) \to \Lambda_{\ensuremath{\mathbb{A}}_f^p}$. Now since $gg' \in U'(N)^p$ acts as the identity on $\Lambda/N\Lambda$, we get that $f_*$ must act as the identity on $\ensuremath{\mathcal{A}}[N]$, meaning that $f$ is the identity by Serre's lemma, since $N \ge 3$. Thus $gg' = 1$ and $g^{-1}\circ \kappa$ is in the same $U'^p_0$-orbit as $\kappa$. ◻ We let $\ensuremath{\mathcal{X}}'_{U'}$ be the scheme representing $\ensuremath{\mathcal{F}}'_{U'}$ whenever $U' = \prod_p U'_p \subset U'(N)$ for $N \ge 3$. This gives us a flat and normal integral model for $X'_{U'}$ over $\ensuremath{\mathcal{O}}_{E_{X'}}$. ## Transferring Integral Models Now we can use the integral model for $X'$ to get integral models for $X$ and $X''$, as done in [@Kis10; @Kis18], by extending the adjoint group $G^{\mathop{\mathrm{ad}}}$-action on the neutral component of $\ensuremath{\mathcal{X}}'$. We briefly recall how this is done because we will use the same idea to transfer the $p$-divisible group on $\ensuremath{\mathcal{X}}'$ to $p$-divisible groups over $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}''$. Set $U_p(m) \coloneqq (1 + p^m\ensuremath{\mathcal{O}}_{B, p})^\times$ and $U''_p(m) \coloneqq (1 + p^m(\ensuremath{\mathcal{O}}_{B, p}\times \ensuremath{\mathcal{O}}_{E, p}))^\times \subset G''(\mathbb Q_p)$. Then $U_p \coloneqq U_p(0)$ and $U''_p \coloneqq U''_p(0)$ are the $\mathbb Z_p$-points of parahoric group schemes over $\mathbb Z_{(p)}$ which fix the lattices $\ensuremath{\mathcal{O}}_{B, p}$ and $\ensuremath{\mathcal{O}}_{B, p} \times \ensuremath{\mathcal{O}}_{E, p}$ respectively, and their generic fibers are isomorphic to $G$ and $G''$. Denote these models $G_p, G''_p$ so that $G_p(\mathbb Z_p) = U_p$ and $G''_p(\mathbb Z_p) = U''_p$. 
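As a sanity check on the pairing $\inner{,}_{p, m}$ defined earlier in this section, the following toy computation (with $\Lambda_p = \mathbb Z_p^2$ and $\psi_p$ the standard symplectic form, an assumption made purely for illustration) verifies that the induced form on $p^{-m}\Lambda_p/\Lambda_p \cong (\mathbb Z/p^m)^2$ is alternating and nondegenerate:

```python
# Toy model of <x, y>_{p,m} = psi_p(p^m x, y): represent the coset
# p^{-m} v + Lambda_p by v in (Z/p^m)^2 and identify p^{-m}Z_p/Z_p with Z/p^m.
p, m = 3, 2
q = p**m

def psi(v, w):
    # standard symplectic form on Z^2 (an illustrative stand-in for psi_p)
    return v[0] * w[1] - v[1] * w[0]

def pairing(v, w):
    return psi(v, w) % q

# Alternating: <x, x> = 0 for every x.
assert all(pairing((a, b), (a, b)) == 0 for a in range(q) for b in range(q))

# Nondegenerate: only x = 0 pairs to zero with everything.
radical = [
    (a, b)
    for a in range(q)
    for b in range(q)
    if all(pairing((a, b), (c, d)) == 0 for c in range(q) for d in range(q))
]
assert radical == [(0, 0)]
```

This is the shape of pairing that the level structure $\alpha$ is required to match with the Weil pairing on $\ensuremath{\mathcal{A}}[p^m]$, up to scalar.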
For $S = X, X', X''$, let $U_{S, p} = U_p, U'_p, U''_p$ and $U_{S}^p = U^p, U'^p, U''^p$ and $G_S = G, G', G''$ and $G_{S, p} = G_p, G'_p, G''_p$ respectively. Let $Z_S$ be the center of $G_S$. For each choice of $S$, take the limit over all choices of $U_{S}^p$ to get $$S_{U_{S, p}} = \varprojlim_{U_{S}^p} S_{U_{S, p}U_{S}^p} = G_S(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^{\pm})^\Sigma \times G_S(\ensuremath{\mathbb{A}}_f)/U_{S, p},$$ and let the entire projective limit be $$S = \varprojlim_{U_{S, p}U_S^p} S_{U_{S, p}U_S^p} = G_S(\mathbb Q) \backslash (\ensuremath{\mathcal{H}}^\pm)^\Sigma \times G_S(\ensuremath{\mathbb{A}}_f).$$ We recall the star product notation of [@Kis10; @Kis18]. Suppose that a group $\Delta$ acts on a group $H$ and suppose that $\Gamma \subset H$ is stable under the action of $\Delta$. Let $\Delta$ act on itself by left conjugation, and suppose there is a group homomorphism $\varphi\colon \Gamma \to \Delta$ that respects $\Delta$-action. We also impose for all $\gamma \in \Gamma$ that the $\varphi(\gamma)$-action on $H$ is by left conjugation by $\gamma$. Then, the subgroup $\{(\gamma, \varphi(\gamma)^{-1}) : \gamma \in \Gamma\}$ is a normal subgroup of $H \rtimes \Delta$ and we let $H *_\Gamma \Delta$ denote the quotient. Let $G_S^{\mathop{\mathrm{ad}}}(\mathbb R)^{+}$ be the neutral component and let $G_S(\mathbb R)_+$ be the preimage of $G_S^{\mathop{\mathrm{ad}}}(\mathbb R)^{+}$ under the map $G_S(\mathbb R) \to G_S^{\mathop{\mathrm{ad}}}(\mathbb R)$. Let $G_S(\mathbb Q)_+ = G_S(\mathbb R)_+ \cap G_S(\mathbb Q)$ and let $G_{S, p}(\mathbb Z_{(p)})_+ = G_{S, p}(\mathbb Z_{(p)}) \cap G_S(\mathbb Q)_+$. Let $G_{S, p}^{\mathop{\mathrm{ad}}}(\mathbb Z_{(p)})^+ = G_{S, p}^{\mathop{\mathrm{ad}}}(\mathbb Z_{(p)}) \cap G_S^{\mathop{\mathrm{ad}}}(\mathbb R)^+$. 
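The star product admits a small finite sanity check. The sketch below is entirely a toy of our own (it is not one of the groups in the paper): take $H = \Delta = S_3$ with $\Delta$ acting on $H$ by conjugation, $\Gamma = H$, and $\varphi = \mathrm{id}$, so that $\varphi(\gamma)$ indeed acts on $H$ by conjugation by $\gamma$; the code verifies that $\{(\gamma, \varphi(\gamma)^{-1})\}$ is normal in $H \rtimes \Delta$ and that $H *_\Gamma \Delta$ has order $36/6 = 6$.

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))

def comp(a, b):                       # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    out = [0] * 3
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def act(d, h):                        # Delta acts on H by conjugation
    return comp(comp(d, h), inv(d))

def mul(x, y):                        # semidirect product multiplication
    (h1, d1), (h2, d2) = x, y
    return (comp(h1, act(d1, h2)), comp(d1, d2))

def group_inv(x):                     # inverse in H x| Delta
    h, d = x
    return (act(inv(d), inv(h)), inv(d))

G = list(product(S3, S3))             # H x| Delta, order 36
N = [(g, inv(g)) for g in S3]         # {(gamma, phi(gamma)^{-1})}, order 6

# N is normal: x n x^{-1} lies in N for all x in G, n in N.
assert all(mul(mul(x, n), group_inv(x)) in N for x in G for n in N)

# Coset enumeration: the star product H *_Gamma Delta has order 36 / 6 = 6.
cosets = {tuple(sorted(mul(n, x) for n in N)) for x in G}
assert len(cosets) == 6
```

The same quotient-by-an-antidiagonal mechanism, applied to the adelic and adjoint groups above, is what produces $\ensuremath{\mathscr{A}}(G_S)$ and $\ensuremath{\mathscr{A}}(G_S)^\circ$.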
There is a natural right action of $G_S(\ensuremath{\mathbb{A}}_f)$ on $S$ by right multiplication on the $G_S(\ensuremath{\mathbb{A}}_f)$ factor, under which $Z_S(\mathbb Q)$ acts trivially. There is also a right action of $G^{\mathop{\mathrm{ad}}}_S(\mathbb Q)^+$ on $S$ where $\gamma \in G_S^{\mathop{\mathrm{ad}}}(\mathbb Q)^+$ acts on a representative $[x, g]$ by $[x, g]\gamma = [\gamma^{-1}x, \gamma^{-1}g\gamma]$. Let $Z_S(\mathbb Q)^-$ be the closure of $Z_S(\mathbb Q)$ in $G_S(\ensuremath{\mathbb{A}}_f)$. We thus get an action of $$G_S(\ensuremath{\mathbb{A}}_f)/Z_S(\mathbb Q)^- \rtimes G_S^{\mathop{\mathrm{ad}}}(\mathbb Q)^+$$ on $S$. The subgroup $G_S(\mathbb Q)_+/Z_S(\mathbb Q)$ embeds into both $G_S(\ensuremath{\mathbb{A}}_f)/Z_S(\mathbb Q)^-$ and $G^{\mathop{\mathrm{ad}}}_S(\mathbb Q)^+$ and its action on $S$ is the same through both embeddings. Thus, we get an action of $$\ensuremath{\mathscr{A}}(G_S) \coloneqq G_S(\ensuremath{\mathbb{A}}_f)/Z_S(\mathbb Q)^-*_{G_S(\mathbb Q)_+/Z_S(\mathbb Q)} G_S^{\mathop{\mathrm{ad}}}(\mathbb Q)^+$$ on $S$. We also define a subgroup $$\ensuremath{\mathscr{A}}(G_S)^\circ \coloneqq G_S(\mathbb Q)_+^-/Z_S(\mathbb Q)^- *_{G_S(\mathbb Q)_+/Z_S(\mathbb Q)} G_S^{\mathop{\mathrm{ad}}}(\mathbb Q)^+,$$ where $G_S(\mathbb Q)_+^-$ is the closure of $G_S(\mathbb Q)_+$ in $G_S(\ensuremath{\mathbb{A}}_f)$. After taking the quotient of $S$ by $U_{S, p} = G_{S, p}(\mathbb Z_p)$, we get a right action of $$\ensuremath{\mathscr{A}}(G_{S, p}) = G_S(\ensuremath{\mathbb{A}}_f^p)/Z_{S, p}(\mathbb Z_{(p)})^- *_{G_{S, p}(\mathbb Z_{(p)})_+/Z_{S, p}(\mathbb Z_{(p)})} G_{S, p}^{\mathop{\mathrm{ad}}}(\mathbb Z_{(p)})^+$$ on $S_{U_{S, p}}$, where $Z_{S, p}(\mathbb Z_{(p)})^-$ is the closure inside $G_S(\ensuremath{\mathbb{A}}_f^p)$. 
We also define the subgroup $$\ensuremath{\mathscr{A}}(G_{S, p})^\circ = G_{S, p}(\mathbb Z_{(p)})_+^-/Z_{S, p}(\mathbb Z_{(p)})^- *_{G_{S, p}(\mathbb Z_{(p)})_+/Z_{S, p}(\mathbb Z_{(p)})} G_{S, p}^{\mathop{\mathrm{ad}}}(\mathbb Z_{(p)})^+.$$ Fix a geometrically connected component $S^+ \subset S$ as the image of the product of upper half planes $(\ensuremath{\mathcal{H}}^+)^\Sigma$ in the complex uniformization of $S$. Then take $$S^+ = \varprojlim_{U_{S, p}U_S^p} S_{U_{S, p}U_S^p}^+ = G_S^{\mathop{\mathrm{der}}}(\mathbb Q)\backslash (\ensuremath{\mathcal{H}}^+)^\Sigma \times G_S^{\mathop{\mathrm{der}}}(\ensuremath{\mathbb{A}}_f),$$ and $$S_{U_{S, p}}^+ = \varprojlim_{U_S^p} S_{U_{S, p}U_{S}^p}^+ = G_{S, p}^{\mathop{\mathrm{der}}}(\mathbb Z_{(p)})^- \backslash (\ensuremath{\mathcal{H}}^+)^\Sigma \times G_S^{\mathop{\mathrm{der}}}(\ensuremath{\mathbb{A}}_f^p).$$ Let $E_S$ be the reflex field of $S$ and let $E_S^p \subset \overline{E_S}$ be the maximal extension of $E_S$ that is unramified at all primes dividing $p$. The connected component $S^+$ is defined over $\overline{E_S}$ and $S_{U_{S, p}}^+$ is defined over $E_S^p$. Let $$\ensuremath{\mathscr{E}}(G_S) \subset \ensuremath{\mathscr{A}}(G_S) \times \mathop{\mathrm{Gal}}(\overline{E_S}/E_S)$$ be the stabilizer of $S^+$ and let $$\ensuremath{\mathscr{E}}(G_{S, p}) \subset \ensuremath{\mathscr{A}}(G_{S, p}) \times \mathop{\mathrm{Gal}}(E_S^p/E_S)$$ be the stabilizer of $S_{U_{S, p}}^+$. Then, we have the following. **Proposition 17** ([@Kis10 Lem. 3.3.7]). *The stabilizer $\ensuremath{\mathscr{E}}(G_S)$ (resp. $\ensuremath{\mathscr{E}}(G_{S, p})$) depends only on $G_S^{\mathop{\mathrm{der}}}$ (resp. $G_{S, p}^{\mathop{\mathrm{der}}}$) and $X^{\mathop{\mathrm{ad}}}$, and it is an extension of $\mathop{\mathrm{Gal}}(\overline{E_S}/E_S)$ (resp. $\mathop{\mathrm{Gal}}(E_S^p/E_S)$) by $\ensuremath{\mathscr{A}}(G_S)^\circ$ (resp. $\ensuremath{\mathscr{A}}(G_{S, p})^\circ$). 
Moreover, there are canonical isomorphisms $$\ensuremath{\mathscr{A}}(G_{S}) *_{\ensuremath{\mathscr{A}}(G_{S})^\circ} \ensuremath{\mathscr{E}}(G_{S}) \cong \ensuremath{\mathscr{A}}(G_{S}) \times \mathop{\mathrm{Gal}}(\overline{E_S}/E_S)$$ and $$\ensuremath{\mathscr{A}}(G_{S, p}) *_{\ensuremath{\mathscr{A}}(G_{S, p})^\circ} \ensuremath{\mathscr{E}}(G_{S, p}) \cong \ensuremath{\mathscr{A}}(G_{S, p}) \times \mathop{\mathrm{Gal}}(E_S^p/E_S).$$* There is a right action of $\ensuremath{\mathscr{E}}(G_S)$ on $\ensuremath{\mathscr{A}}(G_S) \times S^+$ given by right conjugation via the map $\ensuremath{\mathscr{E}}(G_S) \to \ensuremath{\mathscr{A}}(G_S) \times \mathop{\mathrm{Gal}}(\overline{E_S}/E_S) \to \ensuremath{\mathscr{A}}(G_S)$ on the first factor and right multiplication on the second factor. There is also an action of $\ensuremath{\mathscr{A}}(G_S)$ on $\ensuremath{\mathscr{A}}(G_S) \times S^+$ defined by right multiplication on the first factor and ignoring the second factor. Thus, there is an action of $\ensuremath{\mathscr{A}}(G_S) *_{\ensuremath{\mathscr{A}}(G_S)^\circ} \ensuremath{\mathscr{E}}(G_S) \cong \ensuremath{\mathscr{A}}(G_S) \times \mathop{\mathrm{Gal}}(\overline{E_S}/E_S)$ on $[\ensuremath{\mathscr{A}}(G_S) \times S^+]/\ensuremath{\mathscr{A}}(G_S)^\circ$. Similarly, we can define an action of $\ensuremath{\mathscr{A}}(G_{S, p}) *_{\ensuremath{\mathscr{A}}(G_{S, p})^\circ} \ensuremath{\mathscr{E}}(G_{S, p}) \cong \ensuremath{\mathscr{A}}(G_{S, p}) \times \mathop{\mathrm{Gal}}(E_S^p/E_S)$ on $[\ensuremath{\mathscr{A}}(G_{S, p}) \times S_{U_{S, p}}^+]/\ensuremath{\mathscr{A}}(G_{S, p})^\circ$. **Proposition 18** ([@Kis10 Prop 3.3.10]). 
*For $S, S' \in \{X, X', X''\}$, there is an isomorphism of $E_S^p$-schemes $$S'_{U_{S', p}} \cong [\ensuremath{\mathscr{A}}(G_{S', p}) \times S_{U_{S, p}}^+]/\ensuremath{\mathscr{A}}(G_{S, p})^\circ$$ that respects the $\mathop{\mathrm{Gal}}(E_{S'}^p/E_{S'})$-action, where the Galois group acts on the right via the isomorphism $\ensuremath{\mathscr{A}}(G_{S', p}) *_{\ensuremath{\mathscr{A}}(G_{S', p})^\circ} \ensuremath{\mathscr{E}}(G_{S', p}) \cong \ensuremath{\mathscr{A}}(G_{S', p}) \times \mathop{\mathrm{Gal}}(E_{S'}^p/E_{S'})$.* Here, we have that $\ensuremath{\mathscr{A}}(G_{S, p})^\circ$ acts on $\ensuremath{\mathscr{A}}(G_{S', p})$ via $\ensuremath{\mathscr{A}}(G_{S, p})^\circ \cong \ensuremath{\mathscr{A}}(G_{S', p})^\circ \to \ensuremath{\mathscr{A}}(G_{S', p})$. If there is an integral model $\ensuremath{\mathcal{S}}_{U_{S, p}}$ for $S$ such that the action of $G^{\mathop{\mathrm{ad}}}$ extends to it, then this isomorphism gives us a way to transfer it to all other $S'$ by simply defining $$\ensuremath{\mathcal{S'}}_{U_{S', p}} \coloneqq [\ensuremath{\mathscr{A}}(G_{S', p}) \times \ensuremath{\mathcal{S}}_{U_{S, p}}^+]/\ensuremath{\mathscr{A}}(G_{S, p})^\circ$$ as $\ensuremath{\mathcal{O}}_{E_{S'}^p}$-schemes, and then using Galois descent to descend to an $\ensuremath{\mathcal{O}}_{E_{S'}}$-scheme. **Theorem 19**. *Let $v \mid p$ be a prime of $E_X$ and $v'' \mid p$ be a prime of $E_{X''}$. If $U^p \subset G(\ensuremath{\mathbb{A}}_f^p)$ and $U''^p\subset G''(\ensuremath{\mathbb{A}}_f^p)$ are such that $U_pU^p \subset U(N)$ and $U''_pU''^p \subset U''(N)$ for some $N \ge 3$, then there is a projective system of integral models $\ensuremath{\mathcal{X}}_{U_pU^p}$ (resp. $\ensuremath{\mathcal{X}}''_{U''_pU''^p}$) of $X_{U_pU^p}$ (resp. $X''_{U''_pU''^p}$) over $\ensuremath{\mathcal{O}}_{E_X, v}$ (resp. $\ensuremath{\mathcal{O}}_{E_{X''}, v''}$) such that:* 1. *If $p$ is unramified in both $F$ and $B$, the scheme $\ensuremath{\mathcal{X}}_{U}$ (resp. 
$\ensuremath{\mathcal{X}}''_{U''}$) is smooth over $\ensuremath{\mathcal{O}}_{E_{X}, v}$ (resp. $\ensuremath{\mathcal{O}}_{E_{X''}, v''}$);* 2. *The schemes $\ensuremath{\mathcal{X}}_U$ and $\ensuremath{\mathcal{X}}''_{U''}$ are normal, flat, and their non-smooth locus has codimension at least $2$;* 3. *The $p$-adic completion of $\ensuremath{\mathcal{X}}_{U}$ (resp. $\ensuremath{\mathcal{X}}''_{U''}$) has a $p$-adic uniformization by a Rapoport--Zink space.* *Proof.* When $E_{X', v'}$ is unramified over $\mathbb Q_p$, the group $G'$ has a hyperspecial local model over $\ensuremath{\mathcal{O}}_{E_{X'}, v'}$. By [@Kis10 Lem. 3.4.5], the extension property implies that the action of $\ensuremath{\mathscr{A}}(G')$ extends to the integral model $\ensuremath{\mathcal{X}}'_{U'_p}$. Let $\ensuremath{\mathcal{X'}}_{U'}^+$ be the closure of ${X'}_{U'}^+$ in $\ensuremath{\mathcal{X}}'_{U'}$ and let $$\ensuremath{\mathcal{X'}}_{U'_p}^+ \coloneqq \varprojlim_{U'^p} \ensuremath{\mathcal{X'}}_{U'_p{U'}^p}^+.$$ We then define $$\ensuremath{\mathcal{X}}_{U_p} \coloneqq [\ensuremath{\mathscr{A}}(G_p) \times \ensuremath{\mathcal{X'}}_{U'_{p}}^+]/\ensuremath{\mathscr{A}}(G'_p)^\circ$$ and $$\ensuremath{\mathcal{X''}}_{U''_p} \coloneqq [\ensuremath{\mathscr{A}}(G''_p) \times \ensuremath{\mathcal{X'}}_{U'_{p}}^+]/\ensuremath{\mathscr{A}}(G'_p)^\circ.$$ The action of $\ensuremath{\mathscr{E}}(G'_p)$ on ${X'}_{U'_p}^+$ extends to $\ensuremath{\mathcal{X'}}_{U'_p}^+$ and hence we get an action of $$\ensuremath{\mathscr{A}}(G_p) *_{\ensuremath{\mathscr{A}}(G'_p)^\circ} \ensuremath{\mathscr{E}}(G'_p) \cong \ensuremath{\mathscr{A}}(G_p) \times \mathop{\mathrm{Gal}}(E^p_X/E_X)$$ on $\ensuremath{\mathcal{X}}_{U_p}$ (resp. $\ensuremath{\mathscr{A}}(G''_p) \times \mathop{\mathrm{Gal}}(E^p_{X''}/E_{X''})$ on $\ensuremath{\mathcal{X}}''_{U_p''}$). Thus, we can use the Galois action to descend this scheme to an integral model for $X_{U_p}$ (resp. 
$X''_{U''_p}$) defined over $\ensuremath{\mathcal{O}}_{E_X, v}$ (resp. $\ensuremath{\mathcal{O}}_{E_{X''}, v''}$). When $p$ is unramified in both $F$ and $B$, the integral model $\ensuremath{\mathcal{X}}'_{U'_p}$ corresponds to an unramified PEL datum and is smooth. If $p$ is ramified in $E_{X'}$, then $G'_p$ is no longer hyperspecial and there is no extension property. However, the action of $\ensuremath{\mathscr{E}}(G'_p)$ still extends to $\ensuremath{\mathcal{X'}}_{U'_p}^+$ by [@Kis18 Cor. 4.6.15]. The local rings of $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{X}}''$ are étale locally isomorphic to the local rings of $\ensuremath{\mathcal{X}}'$ and $\ensuremath{\mathbb{M}}^{\mathrm{loc}}$. Thus, the second and third statements follow from the corresponding statements holding for $\ensuremath{\mathcal{X}}'_{U'_p}$ and $\ensuremath{\mathbb{M}}^{\mathrm{loc}}$. ◻ By gluing these models together, we have an integral model $\ensuremath{\mathcal{X}}_{U}$ over $\mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{E_X}$ whenever $U = \prod_p U_p \subset U(N)$ for $N \ge 3$ and $U_p = U_p(0)$ is maximal whenever $p$ is ramified in $E_{X}$. We now extend the integral model for $\ensuremath{\mathcal{X}}_U$ when $U = \prod_p U_p$ is maximal at all primes. Take a prime $p$ that is not ramified in $E_X$ such that $U(p) = U_p(1)U^p$ is maximal at all primes away from $p$. Then define $$\ensuremath{\mathcal{X}}_{U} \coloneqq \ensuremath{\mathcal{X}}_{U(p)}/(U/U(p))$$ as the quotient stack. Since the $\ensuremath{\mathcal{X}}_{U_pU^p}$ form a projective system, the definition of $\ensuremath{\mathcal{X}}_U$ does not depend on the choice of prime $p$. We note that $\ensuremath{\mathcal{X}}_U$ is a Deligne--Mumford stack and so it has an étale cover by schemes. So, all of our proofs can and will be reduced to the case of schemes via étale descent. 
# $p$-Divisible Groups {#sec:p divisible group} When $U' = U'_pU'^p \subset U'(N)$, the functor $\ensuremath{\mathcal{F}}'_{U'}$ is represented by $\ensuremath{\mathcal{X}}'_{U'}$ and so we get a universal abelian scheme $\ensuremath{\mathcal{A}}'_{U'} \to \ensuremath{\mathcal{X}}'_{U'}$ lying over it. We use the ideas of [@Kis10; @Kis18] to transport the $p$-divisible group $\ensuremath{\mathcal{H}}'_{U'} \coloneqq \ensuremath{\mathcal{A}}'_{U'}[p^\infty]$ to $p$-divisible groups over $\ensuremath{\mathcal{X}}_U$ and $\ensuremath{\mathcal{X}}''_{U''}$. In order to do so, we first give a description of $\ensuremath{\mathcal{H}}'_{U'}$ over $X'^+_{U'}$, and then describe an action of $\ensuremath{\mathscr{E}}(G')$ on it. Recall that $\ensuremath{\mathscr{A}}(G')^\circ$ depends only on the derived group $G'^{\mathop{\mathrm{der}}} = \mathop{\mathrm{Res}}_{F/\mathbb Q} B^1$, where $B^1$ denotes the elements of $B$ of reduced norm $1$. The center is $Z(G'^{\mathop{\mathrm{der}}}) = \mathop{\mathrm{Res}}_{F/\mathbb Q} F^1$, where $F^1$ denotes the elements of $F$ of norm $1$. By Shapiro's lemma, the adjoint map $G'(\mathbb Q) \to G'^{\mathop{\mathrm{ad}}}(\mathbb Q)$ is surjective and so we can write $$\ensuremath{\mathscr{A}}(G')^\circ = B^1/F^1.$$ We can also determine $\ensuremath{\mathscr{A}}(G_S)$ for $S = X, X', X''$. We have that $G^{\mathop{\mathrm{ad}}}(\mathbb Z_{(p)})^+ = \ensuremath{\mathcal{O}}_{B, (p)}^{\times, +}/\ensuremath{\mathcal{O}}_{F, (p)}^{\times, +}$, where the $+$ superscript denotes the elements whose norm is totally positive in $F$. We also have that $G''^{\mathop{\mathrm{ad}}}(\mathbb Z_{(p)})^+ = (\ensuremath{\mathcal{O}}_{B, (p)}^{\times, +} \times_{\ensuremath{\mathcal{O}}_{F, (p)}^{\times, +}} \ensuremath{\mathcal{O}}_{E, (p)}^{\times, +})/\ensuremath{\mathcal{O}}_{E, (p)}^{\times, +} \cong \ensuremath{\mathcal{O}}_{B, (p)}^{\times, +}/\ensuremath{\mathcal{O}}_{F, (p)}^{\times, +}$ because the norms of all elements of $E^\times$ are totally positive in $F$. 
Thus, it follows that $$\ensuremath{\mathscr{A}}(G) = G(\ensuremath{\mathbb{A}}_f)/Z(\mathbb Q)^- *_{G(\mathbb Q)_+/Z(\mathbb Q)}G^{\mathop{\mathrm{ad}}}(\mathbb Q)^+ = (B \otimes_\mathbb Q\ensuremath{\mathbb{A}}_f)^\times/\ensuremath{\mathcal{O}}_{F}^{\times, -},$$ and $$\ensuremath{\mathscr{A}}(G_p) = G(\ensuremath{\mathbb{A}}_f^p)/Z(\mathbb Z_{(p)})^- *_{G(\mathbb Z_{(p)})_+/Z(\mathbb Z_{(p)})}G^{\mathop{\mathrm{ad}}}(\mathbb Z_{(p)})^+ = (B^p)^\times/\ensuremath{\mathcal{O}}_{F, (p)}^{\times, -},$$ where $B^p \coloneqq B \otimes_\mathbb Q\ensuremath{\mathbb{A}}_f^p$. Similarly, it follows that $\ensuremath{\mathscr{A}}(G'') = G''(\ensuremath{\mathbb{A}}_f)/\ensuremath{\mathcal{O}}_{E}^{\times, -}$ and $\ensuremath{\mathscr{A}}(G''_p) = G''(\ensuremath{\mathbb{A}}_f^p)/\ensuremath{\mathcal{O}}_{E, (p)}^{\times, -}$. Shapiro's lemma does not apply to $G'$, but we can write $$\ensuremath{\mathscr{A}}(G') = G'(\ensuremath{\mathbb{A}}_f)B^{\times, +}/E^{\times, -}$$ and $$\ensuremath{\mathscr{A}}(G'_p) = G'(\ensuremath{\mathbb{A}}_f^p)G''(\mathbb Z_{(p)})_+/\ensuremath{\mathcal{O}}_{E, (p)}^{\times, -}.$$ Moreover, we have $$\ensuremath{\mathscr{A}}(G'_p)^\circ \cong \ensuremath{\mathscr{A}}(G^{\mathop{\mathrm{der}}}_p)^\circ \cong \ensuremath{\mathcal{O}}_{B, (p)}^1/\ensuremath{\mathcal{O}}_{F, (p)}^1.$$ Over $X'^+$, the $p$-divisible group $\ensuremath{\mathcal{H}}'$ can be described as $$H'|_{X'^+} = B_p/\ensuremath{\mathcal{O}}_{B, p} \times X'^+ = B_p/\ensuremath{\mathcal{O}}_{B, p} \times B^1 \backslash [(\ensuremath{\mathcal{H}}^+)^\Sigma \times (B \otimes_\mathbb Q\ensuremath{\mathbb{A}}_f)^1].$$ To descend to $X'^+_{U'_p}$, we quotient by the action of $G'^{\mathop{\mathrm{der}}}_p(\mathbb Z_p) = \ensuremath{\mathcal{O}}_{B, p}^1$ to get $$H'^+_{U'_p} \coloneqq H'|_{X'^+_{U'_p}} = [B_p/\ensuremath{\mathcal{O}}_{B, p} \times X'^+]/\ensuremath{\mathcal{O}}_{B, p}^1,$$ where $U'_p = G'^{\mathop{\mathrm{der}}}_p(\mathbb Z_p) = \ensuremath{\mathcal{O}}_{B, 
p}^1$ acts on $B_p/\ensuremath{\mathcal{O}}_{B, p}$ by right multiplication. We can now describe $H'$ over all of $X'$. Under the isomorphism of $\overline{E_{X'}}$-schemes $$X' \cong [\ensuremath{\mathscr{A}}(G') \times X'^+]/\ensuremath{\mathscr{A}}(G')^\circ = [\ensuremath{\mathscr{A}}(G') \times X'^+]/B^1,$$ the $p$-divisible group can be written as $$H'|_{X'} \cong B_p/\ensuremath{\mathcal{O}}_{B, p} \times [\ensuremath{\mathscr{A}}(G') \times X'^+]/B^1.$$ After dividing by $G'_p(\mathbb Z_p)$, we get $$H'_{U'_p} |_{X'_{U'_p}} \cong [\ensuremath{\mathscr{A}}(G'_p) \times [B_p/\ensuremath{\mathcal{O}}_{B, p} \times X'^+]/\ensuremath{\mathcal{O}}_{B, p}^1]/\ensuremath{\mathcal{O}}_{B, (p)}^1 \cong [\ensuremath{\mathscr{A}}(G'_p) \times H'^+_{U'_p}]/\ensuremath{\mathcal{O}}^1_{B, (p)},$$ where $\ensuremath{\mathscr{A}}(G'_p)^\circ = \ensuremath{\mathcal{O}}_{B, (p)}^1/\ensuremath{\mathcal{O}}_{F, (p)}^1 \subset (B \otimes_\mathbb Q\ensuremath{\mathbb{A}}_f^p)^\times/(F \otimes_\mathbb Q\ensuremath{\mathbb{A}}_f^p)^\times$ acts trivially on $B_p/\ensuremath{\mathcal{O}}_{B, p}$. In this way, we can also define $p$-divisible groups $H_{U_p}, H''_{U''_p}$ over $X_{U_p}, X''_{U''_p}$ respectively as $$H_{U_p}|_{X_{U_p}} \coloneqq \paren*{\ensuremath{\mathscr{A}}(G_p) \times [B_p/\ensuremath{\mathcal{O}}_{B, p} \times X'^+]/\ensuremath{\mathcal{O}}_{B, p}^1}/\ensuremath{\mathscr{A}}(G'_p)^\circ$$ $$\cong \paren*{B^{p, \times}/\ensuremath{\mathcal{O}}_{F, (p)}^\times \times H'^+_{U'_p}}/\ensuremath{\mathcal{O}}_{B, (p)}^1,$$ and $$H''_{U''_p}|_{X''_{U''_p}} \coloneqq \paren*{\ensuremath{\mathscr{A}}(G''_p) \times [B_p/\ensuremath{\mathcal{O}}_{B, p} \times X'^+]/\ensuremath{\mathcal{O}}_{B, p}^1}/\ensuremath{\mathscr{A}}(G'_p)^\circ$$ $$\cong \paren*{G''(\ensuremath{\mathbb{A}}_f^p)/\ensuremath{\mathcal{O}}_{E, (p)}^\times \times H'^+_{U'_p}}/\ensuremath{\mathcal{O}}_{B, (p)}^1.$$ **Theorem 20**. *Whenever $U \subset U(N)$ (resp. 
$U'' \subset U''(N)$) for $N \ge 3$, there exists a $p$-divisible group $\ensuremath{\mathcal{H}}_{U}$ over $\ensuremath{\mathcal{X}}_{U}$ with an $\ensuremath{\mathcal{O}}_{E, p}$-action (resp. $\ensuremath{\mathcal{H}}''_{U''}$ over $\ensuremath{\mathcal{X}}''_{U''}$) such that the formal completion $\widehat{\ensuremath{\mathcal{X}}_U}$ (resp. $\widehat{\ensuremath{\mathcal{X}}''_{U''}}$) along its special fiber over $\overline{k_{E_X, v}}$ (resp. $\overline{k_{E_{X''}, v''}}$) is the universal deformation space of $\ensuremath{\mathcal{H}}_{\overline{k_{E_X, v}}}$ (resp. $\ensuremath{\mathcal{H}}_{\overline{k_{E_{X''}, v''}}}$).* *Proof.* We have already translated the $p$-divisible group $\ensuremath{\mathcal{H}}'_{U'_p}$ to $p$-divisible groups over the generic fibers $X_{U_p}$ and $X''_{U''_p}$. Over $\ensuremath{\mathcal{X}}'_{U'_p}$, we have an integral model for $\ensuremath{\mathcal{H}}'_{U'_p}$ by taking the $p^\infty$-torsion of the universal abelian scheme $\ensuremath{\mathcal{A}}'_{U'_p} \to \ensuremath{\mathcal{X}}'_{U'_p}$.
We can restrict it to the connected component to get $$\ensuremath{\mathcal{H}}'^+_{U'_p} \coloneqq \ensuremath{\mathcal{H}}'_{U'_p}|_{\ensuremath{\mathcal{X}}'^+_{U'_p}}.$$ Set $$\ensuremath{\mathcal{H}}_{U_p} \coloneqq \paren*{\ensuremath{\mathscr{A}}(G_p) \times \ensuremath{\mathcal{H}}'^+_{U'_p}}/\ensuremath{\mathscr{A}}(G'_p)^\circ,$$ and $$\ensuremath{\mathcal{H}}''_{U''_p} \coloneqq \paren*{\ensuremath{\mathscr{A}}(G''_p) \times \ensuremath{\mathcal{H}}'^+_{U'_p}}/\ensuremath{\mathscr{A}}(G'_p)^\circ.$$ The action of $\ensuremath{\mathscr{E}}(G^{\mathop{\mathrm{der}}}_p)$ on $\ensuremath{\mathcal{X}}'^+_{U'_p}$ extends to an action on $\ensuremath{\mathcal{H}}'^+_{U'_p}$ by acting trivially on $B_p/\ensuremath{\mathcal{O}}_{B, p}$, and thus there is an action of $\ensuremath{\mathscr{A}}(G_p) \times \mathop{\mathrm{Gal}}(E_X^p/E_X)$ on $\ensuremath{\mathcal{H}}_{U_p}$ that is compatible with the structure morphism $\ensuremath{\mathcal{H}}_{U_p} \to \ensuremath{\mathcal{X}}_{U_p}$. Hence, we can descend $\ensuremath{\mathcal{H}}_{U_p}$ and $\ensuremath{\mathcal{H}}''_{U''_p}$ down to $p$-divisible groups defined over $\ensuremath{\mathcal{O}}_{E_X, v}$ and $\ensuremath{\mathcal{O}}_{E_{X''}, v''}$ respectively, whose generic fibers can be identified with $H_{U_p}$ and $H''_{U''_p}$ in a way respecting the structure morphisms down to $X_{U_p}$ and $X''_{U''_p}$. For finite level, we can simply take $\ensuremath{\mathcal{H}}_{U_pU^p} = \ensuremath{\mathcal{H}}_{U_p}/U^p$, where $U^p$ acts on the base $\ensuremath{\mathcal{X}}_{U_p}$ and acts trivially on the fibers of $\ensuremath{\mathcal{H}}_{U_p} \to \ensuremath{\mathcal{X}}_{U_p}$. The statements about universal deformation spaces and the $\ensuremath{\mathcal{O}}_{E, p}$-action follow from the corresponding statements for $\ensuremath{\mathcal{H}}'^+_{U'}$ over $\ensuremath{\mathcal{X}}'^+_{U'}$.
These follow from $\ensuremath{\mathcal{X}}'_{U'}$ representing the functor of isomorphism classes of abelian schemes whose $p^\infty$-torsion is $\ensuremath{\mathcal{H}}'_{U'}$. ◻ # Kodaira--Spencer Map {#sec:Hodge bundle} In order to calculate the height of a partial CM point, we will take the height of a special point of $\ensuremath{\mathcal{X}}_U$ with respect to the metrized Hodge bundle $\widehat{\ensuremath{\mathcal{L}}_U}$ on $\ensuremath{\mathcal{X}}_U$, which we will introduce. Afterwards, we will relate this Hodge bundle with the Lie algebra of the $p$-divisible group $\ensuremath{\mathcal{H}}_U$ over $\ensuremath{\mathcal{X}}_U$. Following [@Yua13], we define the system of Hodge bundles $\{\ensuremath{\mathcal{L}}_U\}_U$ as the canonical bundle $$\ensuremath{\mathcal{L}}_U \coloneqq \omega_{\ensuremath{\mathcal{X}}_U/\ensuremath{\mathcal{O}}_{E_X}}.$$ Since $\ensuremath{\mathcal{X}}_U$ is normal, the singularities of $\ensuremath{\mathcal{X}}_U$ have codimension at least $2$ and so the sheaf $\ensuremath{\mathcal{L}}_U$ is indeed a line bundle. The benefit of using the canonical bundle is that the system $\{\ensuremath{\mathcal{L}}_U\}_U$ is compatible with pullbacks along the canonical maps $\ensuremath{\mathcal{X}}_{U_1} \to \ensuremath{\mathcal{X}}_{U_2}$ with $U_1 \subset U_2$. We call $\ensuremath{\mathcal{L}}_U$ the *Hodge bundle* of $\ensuremath{\mathcal{X}}_U$. At the infinite places, the metric is given by the Petersson metric $$\norm*{\bigwedge_{\sigma \in \Sigma} dz_\sigma} = \prod_{\sigma \in \Sigma} 2\,\mathrm{Im}(z_\sigma).$$ We now relate this line bundle with our $p$-divisible group via a Kodaira--Spencer-type map. Suppose $K/E_X$ is a finite extension that contains the normal closure of $E$ and let $S = \mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{K} \otimes_\mathbb Z\mathbb Z_p$.
Base change $\ensuremath{\mathcal{X}}_U$ and $\ensuremath{\mathcal{H}}_U$ to $S$ and set $\Omega(\ensuremath{\mathcal{H}}_S) \coloneqq \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_S)^\vee$ and $\Omega(\ensuremath{\mathcal{H}}^t_S) \coloneqq \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}^t_S)^\vee$. Let $\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S)$ and $\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S^t)$ be the covariant Dieudonné crystals attached to the $p$-divisible groups. Then [@Mes72 Chap. IV] gives us a short exact sequence $$0 \to \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_S^t)^\vee \to \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S) \to \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_S) \to 0$$ of $\ensuremath{\mathcal{O}}_S \otimes \ensuremath{\mathcal{O}}_{E}$ modules. Applying the Gauss--Manin connection $\nabla$ on $\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S)$ gives the chain of maps $$\Omega(\ensuremath{\mathcal{H}}_S^t) \to \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S) \overset{\nabla}{\to} \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S) \otimes \Omega_{\ensuremath{\mathcal{X}}_S/S}^1 \to \Omega(\ensuremath{\mathcal{H}}_S)^\vee \otimes \Omega_{\ensuremath{\mathcal{X}}_S/S}^1,$$ which gives a map $$\Omega_{\ensuremath{\mathcal{X}}_S/S}^{1, \vee} \to \mathop{\mathrm{Hom}}(\Omega(\ensuremath{\mathcal{H}}_S^t), \Omega(\ensuremath{\mathcal{H}}_S)^\vee).$$ Both $\Omega(\ensuremath{\mathcal{H}}_S)^\vee$ and $\Omega(\ensuremath{\mathcal{H}}^t_S)^\vee$ have an action by $\ensuremath{\mathcal{O}}_E$ whose determinant is the product of the reflex norms of $\phi\sqcup \phi'$ and $\overline{\phi} \sqcup \phi'$. 
We can thus decompose these sheaves over $S$ as $$\Omega(\ensuremath{\mathcal{H}}_S)^\vee \to \bigoplus_{\tau \in \mathop{\mathrm{Hom}}(E, \mathbb C)} \Omega(\ensuremath{\mathcal{H}}_S)^\vee_\tau,$$ where $\Omega(\ensuremath{\mathcal{H}}_S)_\tau^\vee \coloneqq \Omega(\ensuremath{\mathcal{H}}_S)^\vee \otimes_{\ensuremath{\mathcal{O}}_S \otimes \ensuremath{\mathcal{O}}_E, \tau} \ensuremath{\mathcal{O}}_S$ and $\ensuremath{\mathcal{O}}_E$ acts on $\ensuremath{\mathcal{O}}_S$ through a fixed inclusion $\overline{E_X} \hookrightarrow \mathbb C$. Set $\omega(\ensuremath{\mathcal{H}}_S)_\tau \coloneqq \det\Omega(\ensuremath{\mathcal{H}}_S)_\tau$. We define $\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau$ and $\omega(\ensuremath{\mathcal{H}}_S^t)_\tau$ similarly. Thus, we get a map $$\Omega_{\ensuremath{\mathcal{X}}_S/S}^{1, \vee} \to \mathop{\mathrm{Hom}}(\Omega(\ensuremath{\mathcal{H}}_S^t), \Omega(\ensuremath{\mathcal{H}}_S)^\vee) \to \bigoplus_{(\tau, \tau') \in \mathop{\mathrm{Hom}}(E, \mathbb C)^2} \Omega(\ensuremath{\mathcal{H}}_S^t)_\tau^\vee \otimes \Omega(\ensuremath{\mathcal{H}}_S)^\vee_{\tau'}.$$ The rank of $\Omega(\ensuremath{\mathcal{H}}_S)_\tau$ is $1$ if $\tau \in \phi\sqcup \overline{\phi}$, $2$ if $\tau \in \phi'$, and $0$ if $\tau \in \overline{\phi'}$. The rank of $\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau$ is $2 - \mathop{\mathrm{rank}} \Omega(\ensuremath{\mathcal{H}}_S)_\tau$.
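This rank bookkeeping can be tabulated (our summary, read off from the CM-types $\phi\sqcup \phi'$ and $\overline{\phi}\sqcup \phi'$ of the two isogeny factors of $A'$, together with $\mathop{\mathrm{rank}}\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau = 2 - \mathop{\mathrm{rank}}\Omega(\ensuremath{\mathcal{H}}_S)_\tau$):

```latex
% Ranks of the \tau-components of \Omega(H_S) and \Omega(H_S^t):
\begin{array}{c|cc}
\tau \in & \mathop{\mathrm{rank}} \Omega(\ensuremath{\mathcal{H}}_S)_\tau
         & \mathop{\mathrm{rank}} \Omega(\ensuremath{\mathcal{H}}_S^t)_\tau \\
\hline
\phi\sqcup \overline{\phi} & 1 & 1 \\
\phi' & 2 & 0 \\
\overline{\phi'} & 0 & 2
\end{array}
```

In particular, for $\tau$ lying above $\Sigma$ both components have rank one.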
So, by projecting, we get the map of vector bundles of equal rank $$\Omega_{\ensuremath{\mathcal{X}}_S/S}^{1, \vee} \to \bigoplus_{\tau \in \phi} \omega(\ensuremath{\mathcal{H}}_S^t)_\tau^\vee \otimes \omega(\ensuremath{\mathcal{H}}_S)_\tau^\vee.$$ Taking the determinant of this map and repeating the construction for $\overline{\phi}$ gives the map $$\omega_{\ensuremath{\mathcal{X}}_S/S}^{-2} \to \bigotimes_{\tau \in \phi} \ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_S, \tau)^\vee \otimes \ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_S, \overline{\tau})^\vee,$$ where $\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_S, \tau) \cong \omega(\ensuremath{\mathcal{H}}_S)_\tau \otimes \omega(\ensuremath{\mathcal{H}}_S^t)_{\overline{\tau}}$. Note that $E_X = E_{\phi\sqcup \overline{\phi}}$ is the reflex field of the set $\phi\sqcup \overline{\phi}$. So, we can descend this map to a map over $\mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{E_X, p}$. This is our Kodaira--Spencer map. Let $\Sigma_E \subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ be the set of embeddings that lie above a place of $F$ in $\Sigma$. Recall from Definition [Definition 10](#def:CM Type Determinant){reference-type="ref" reference="def:CM Type Determinant"} the relative discriminant $\ensuremath{\mathfrak{d}}_{\Sigma} \coloneqq \ensuremath{\mathfrak{d}}_{\Sigma_E}$. Let $\ensuremath{\mathfrak{d}}_{\Sigma, p} \coloneqq \ensuremath{\mathfrak{d}}_{\Sigma} \otimes \mathbb Z_p$ be the $p$-part of this ideal. View $\ensuremath{\mathfrak{d}}_{\Sigma, p}$ as a divisor of $S$. Moreover, let $\ensuremath{\mathfrak{d}}_{B, p}$ be the divisor corresponding to the ramification of $B$ above $p$. **Theorem 21**.
*For any choice of partial CM-type $\phi$ lying above $\Sigma$, we have $$\omega_{\ensuremath{\mathcal{X}}_S/S}^2(\ensuremath{\mathfrak{d}}_{\Sigma, p}\ensuremath{\mathfrak{d}}_{B, p})^{-1} \cong \bigotimes_{\tau \in \phi} \ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_S, \tau) \otimes \ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_S, \overline{\tau}).$$* *Proof.* We have the short exact sequence $$0 \to \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_S^t)^\vee \to \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S) \to \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_S) \to 0$$ of $\ensuremath{\mathcal{O}}_S \otimes \ensuremath{\mathcal{O}}_{E}$-modules. Moreover, we have a pairing $$\psi_S \colon \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S) \times \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S^t) \to \ensuremath{\mathcal{O}}_S$$ that is well defined up to an element of $\ensuremath{\mathcal{O}}_S^\times$. Fix a choice. The formal completion of $\ensuremath{\mathcal{X}}_S/S$ along its special fiber over $\overline{k(S)}$ is the universal deformation space of $\ensuremath{\mathcal{H}}_S$. Thus, [@Mes72] gives us that the tangent bundle $\Omega^{1, \vee}_{\ensuremath{\mathcal{X}}_S/S}$ corresponds to choosing a lift of $\mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_S^t)^\vee$ and $\mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_S)$ in $\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S)_{S'}$, where $S' = \mathop{\mathrm{Spec}}\ensuremath{\mathcal{O}}_{S}[\varepsilon]/(\varepsilon^2)$, that respects the pairing induced by $\psi_S$ and the $\ensuremath{\mathcal{O}}_S \otimes \ensuremath{\mathcal{O}}_E$-action.
For each $\tau \in \mathop{\mathrm{Hom}}(E, \mathbb C)$, we can take the $\tau$-component of the short exact sequence to get $$0 \to \Omega(\ensuremath{\mathcal{H}}_S^t)_{\tau} \to \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S)_{\tau} \to \Omega(\ensuremath{\mathcal{H}}_S)_{\tau}^\vee \to 0.$$ For $\tau$ lying above $\Sigma^c$, either $\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau$ or $\Omega(\ensuremath{\mathcal{H}}_S)_\tau$ is $0$ meaning that there is only one choice for a lift of $\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau$ and $\Omega(\ensuremath{\mathcal{H}}_S)_\tau$. For $\tau$ lying above $\Sigma$, both are of rank $1$. The pairing $\psi_{S'}$ decomposes into an orthogonal sum of pairings $$\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S)_{S', \tau} \times \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S^t)_{S', \overline{\tau}} \to S'.$$ Thus, choosing a lift of $\Omega(\ensuremath{\mathcal{H}}_S)_\tau$ determines the choice for $\Omega(\ensuremath{\mathcal{H}}_S)_{\overline{\tau}}$ under the canonical isomorphism $\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S^t) \cong \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S)^\vee$. The Hodge filtration gives us that the choice of lift of $\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau$ is a torsor of $\mathop{\mathrm{Hom}}(\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau, \Omega(\ensuremath{\mathcal{H}}_S)_\tau^\vee)$ giving us that the map $$\Omega_{\ensuremath{\mathcal{X}}_S/S}^{1, \vee} \to \mathop{\mathrm{Hom}}(\Omega(\ensuremath{\mathcal{H}}_S^t), \Omega(\ensuremath{\mathcal{H}}_S)^\vee) \to \bigoplus_{\tau \in \phi} \mathop{\mathrm{Hom}}(\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau, \Omega(\ensuremath{\mathcal{H}}_S)_\tau^\vee)$$ has finite cokernel. 
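As a consistency check (our remark), both sides of this map are vector bundles of the same rank, as suggested by the complex uniformization of $X_U$ by $(\ensuremath{\mathcal{H}}^{\pm})^\Sigma$:

```latex
% Each summand Hom(\Omega(H_S^t)_\tau, \Omega(H_S)_\tau^\vee) with
% \tau \in \phi is a line bundle, so the two sides have ranks
\mathop{\mathrm{rank}} \Omega^{1, \vee}_{\ensuremath{\mathcal{X}}_S/S}
  = \dim X_U = \abs{\Sigma} = \abs{\phi}
  = \mathop{\mathrm{rank}} \bigoplus_{\tau \in \phi}
      \mathop{\mathrm{Hom}}\paren*{\Omega(\ensuremath{\mathcal{H}}_S^t)_\tau,
        \Omega(\ensuremath{\mathcal{H}}_S)_\tau^\vee}.
```

Hence the map can fail to be an isomorphism only along a divisor, which is exactly what the discriminant computation that follows quantifies.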
Taking determinants and repeating the process with $\overline{\phi}$ instead of $\phi$, we get $$\omega_{\ensuremath{\mathcal{X}}_S/S}^{-2} \to \bigotimes_{\tau \in \phi} \omega(\ensuremath{\mathcal{H}}_S^t)^\vee_\tau \otimes \omega(\ensuremath{\mathcal{H}}_S)_\tau^\vee \otimes \omega(\ensuremath{\mathcal{H}}_S^t)^\vee_{\overline{\tau}} \otimes \omega(\ensuremath{\mathcal{H}}_S)_{\overline{\tau}}^\vee.$$ The cokernel of this map measures two failures: a choice of lift of $\Omega(\ensuremath{\mathcal{H}}_S)_\tau$ for each $\tau$ may not arise from a lift of $\Omega(\ensuremath{\mathcal{H}}_S)$, and a lift of $\Omega(\ensuremath{\mathcal{H}}_S)_\tau$ may fail to determine a lift of $\Omega(\ensuremath{\mathcal{H}}_S)_{\overline{\tau}}$ when the pairing $\psi_{S'}$ is not perfect. Let $\pi \in \ensuremath{\mathcal{O}}_S$ be a generator for $\ensuremath{\mathcal{O}}_S$ over $\ensuremath{\mathcal{O}}_{E_X, p}$. For a subset $\Psi \subset \mathop{\mathrm{Hom}}(E, \mathbb C)$, let $$f_\Psi(t) = \prod_{\tau \in \Psi} (t - \tau(\pi)).$$ We see that $f_{\phi\sqcup \overline{\phi}}(t) \in \ensuremath{\mathcal{O}}_{E_X, p}[t]$ because it is invariant under any automorphism that fixes the underlying places of $F$ under $\phi$. Then, we see that the image of $\ensuremath{\mathcal{O}}_{E_X, p} \otimes_\mathbb Z\ensuremath{\mathcal{O}}_E$ in $\widetilde{E_{\phi\sqcup \overline{\phi}, p}}$ is simply $\ensuremath{\mathcal{O}}_{E_X, p}[t]/f_{\phi\sqcup \overline{\phi}}(t)$, and so $\ensuremath{\mathfrak{d}}_{\phi\sqcup \overline{\phi}, p} \subset \ensuremath{\mathcal{O}}_{E_X, p}$ is the ideal generated by the discriminant of $f_{\phi\sqcup \overline{\phi}}$. By [@Yua18 Cor.
2.5], we have that $$\Omega(\ensuremath{\mathcal{H}}_S) \cong \frac{\ensuremath{\mathcal{O}}_S[t]}{f_{\phi\sqcup \phi'}(t)f_{\overline{\phi}\sqcup \phi'}(t)}, \qquad \Omega(\ensuremath{\mathcal{H}}_S^t) \cong \frac{\ensuremath{\mathcal{O}}_S[t]}{f_{\overline{\phi} \sqcup \overline{\phi'}}(t)f_{\phi\sqcup \overline{\phi'}}(t)}.$$ Each element of $\omega_{\ensuremath{\mathcal{X}}_S/S}^\vee$ corresponds to an element of $\Omega(\ensuremath{\mathcal{H}}_S^t)^\vee \otimes \Omega(\ensuremath{\mathcal{H}}_S)^\vee$. So, the image in $\bigotimes_{\tau \in \phi} \omega(\ensuremath{\mathcal{H}}_S^t)^\vee_\tau \otimes \omega(\ensuremath{\mathcal{H}}_S)_\tau^\vee$ is the determinant of the image of $\Omega(\ensuremath{\mathcal{H}}_S^t) \otimes \Omega(\ensuremath{\mathcal{H}}_S)$ in $$\prod_{\tau \in \phi\cup \overline{\phi}} \Omega(\ensuremath{\mathcal{H}}_S^t)_\tau \otimes \Omega(\ensuremath{\mathcal{H}}_S)_\tau\cong \prod_{\tau \in \phi\sqcup \overline{\phi}} \ensuremath{\mathcal{O}}_S[t]/(t - \tau(\pi)) \otimes \ensuremath{\mathcal{O}}_S[t]/(t - \tau(\pi))$$ under the map $t\mapsto (t, t, \dots, t)$. We deal with $\Omega(\ensuremath{\mathcal{H}}_S)$ and $\Omega(\ensuremath{\mathcal{H}}_S^t)$ separately. A basis for $\Omega(\ensuremath{\mathcal{H}}_S)$ is given by $1, t, \dots, t^{2\abs{\phi} - 1}$, so the lattice formed by the image of $\Omega(\ensuremath{\mathcal{H}}_S)$ in $\prod_{\tau \in \phi\sqcup \overline{\phi}} \Omega(\ensuremath{\mathcal{H}}_S)_\tau \cong \prod_{\tau \in \phi\sqcup \overline{\phi}} \ensuremath{\mathcal{O}}_{S, \tau}$ is generated by $\{(\tau(\pi)^i)_{\tau \in \phi\cup \overline{\phi}}\}_{0 \le i < 2\abs{\phi}}$. To calculate the index of this lattice relative to the maximal lattice, we take the determinant of a $2\abs{\phi} \times 2\abs{\phi}$ matrix whose $ij$-th element is $\tau_i(\pi)^j$. 
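The determinant in question is a classical Vandermonde determinant; we record the standard identity for convenience (with $\tau_1, \dots, \tau_{2\abs{\phi}}$ an enumeration of $\phi\sqcup \overline{\phi}$ and exponents $0 \le j < 2\abs{\phi}$):

```latex
% Vandermonde determinant with nodes \tau_i(\pi):
\det\paren*{\tau_i(\pi)^j}_{i, j}
  = \prod_{1 \le i < j \le 2\abs{\phi}} (\tau_j(\pi) - \tau_i(\pi)),
\qquad\text{so}\qquad
\det\paren*{\tau_i(\pi)^j}_{i, j}^2
  = \mathop{\mathrm{disc}}\paren*{f_{\phi\sqcup \overline{\phi}}}.
```

In particular, the ideal generated by this determinant has square the ideal generated by $\mathop{\mathrm{disc}}(f_{\phi\sqcup \overline{\phi}})$, i.e., the ideal $\ensuremath{\mathfrak{d}}_{\Sigma, p}$, which accounts for the square root below.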
The ideal generated by the determinant of this Vandermonde matrix is $$\paren*{\prod_{1 \le i < j \le 2\abs{\phi}} \abs{\tau_j(\pi) - \tau_i(\pi)}} = \ensuremath{\mathfrak{d}}_{\Sigma, p}^{1/2}.$$ Doing the same for $\Omega(\ensuremath{\mathcal{H}}_S^t)$ nets an additional factor of $\ensuremath{\mathfrak{d}}_{\Sigma, p}^{1/2}$. Finally, when $B$ is ramified, the pairing $\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S)_{S'} \times \ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_S^t)_{S'} \to S'$ is not perfect but rather $\Lambda^\vee = \frac{1}{\omega_\ensuremath{\mathfrak{q}}}\Lambda$, meaning our choice in $\Omega(\ensuremath{\mathcal{H}}_S)$ must lie in $\omega_\ensuremath{\mathfrak{q}} \Omega(\ensuremath{\mathcal{H}}_S)$, giving an additional factor of $(\omega_\ensuremath{\mathfrak{q}}) = \ensuremath{\mathfrak{d}}_{B, p}^{1/2}$, where $\ensuremath{\mathfrak{q}}$ ranges over the places above $p$ at which $B$ is ramified. Doing the same for $\ensuremath{\mathcal{H}}_S^t$ nets us another factor of $\ensuremath{\mathfrak{d}}_{B, p}^{1/2}$. ◻ # Special Points {#sec:Special Point} To calculate heights of special points of $\ensuremath{\mathcal{X}}_U$, we relate the height to heights on $\ensuremath{\mathcal{X}}'_{U'}$, which represent Faltings heights, through $\ensuremath{\mathcal{X}}''_{U''}$. Let $(E, \phi)$ be a partial CM-type with $F \subset E$ the totally real subfield of index $2$ and let $\Sigma = \phi|_F \subset \mathop{\mathrm{Hom}}(F, \mathbb R)$. Let $B$ be a quaternion algebra over $F$ such that $B$ is ramified at exactly the infinite places in $\Sigma \subset \mathop{\mathrm{Hom}}(F, \mathbb R)$ and whose finite ramification set is a subset of the primes for which $E$ is ramified. Then we can embed $E \hookrightarrow B$ because $E_\ensuremath{\mathfrak{p}}$ embeds into $B_\ensuremath{\mathfrak{p}}$ at every place $\ensuremath{\mathfrak{p}}$ of $F$. Let $\{\ensuremath{\mathcal{X}}_U\}_U$ be the tower of Shimura varieties associated to this particular quaternion algebra $B$.
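The existence of this embedding rests on the standard local criterion for embeddings of quadratic algebras into quaternion algebras, which we record for convenience:

```latex
% Local embedding criterion at a place \mathfrak{p} of F:
E_\ensuremath{\mathfrak{p}} \hookrightarrow B_\ensuremath{\mathfrak{p}}
\iff
B_\ensuremath{\mathfrak{p}} \cong M_2(F_\ensuremath{\mathfrak{p}})
\ \text{ or }\
E_\ensuremath{\mathfrak{p}} \text{ is a field}.
```

At a finite place where $B$ ramifies, $E$ is ramified by assumption, so $E_\ensuremath{\mathfrak{p}}$ is a field; at an infinite place in $\Sigma$, $E_\sigma \cong \mathbb C$ is a field; at every other place $B$ is split. A global embedding $E \hookrightarrow B$ then exists by the classical local-global principle for such embeddings.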
The embedding $E \to B$ gives us an embedding of $T_E \coloneqq \mathop{\mathrm{Res}}_{E/\mathbb Q} \ensuremath{\mathbb{G}}_m \to G$ and hence we get a set of CM points of $X_U$ which are parametrized, under the complex uniformization, by points $(z, t) \in (\ensuremath{\mathcal{H}}^{\pm})^\Sigma \times G(\ensuremath{\mathbb{A}}_f)$ where $z$ is determined by the cocharacter $h_\sigma\colon \mathbb C\cong E_\tau \to B_\sigma$ for each $\tau \in \phi$ and $\sigma \colon F \to \mathbb R$ lying below it, and $t \in T_E(\ensuremath{\mathbb{A}}_f) \subset G(\ensuremath{\mathbb{A}}_f)$. Fix one of these CM points $P \in \ensuremath{\mathcal{X}}_U(\overline{\mathbb Q})$. Pick a complementary partial CM-type $\phi'$ to $\phi$. We can construct the tower $X'_{U'}$ which represents the functor $\ensuremath{\mathcal{F}}'_{U'}$ as before. For any choice of element $t' \in [(T_E \times T_E) \cap G'](\ensuremath{\mathbb{A}}_f)$, the cocharacter formed from $z \in (\ensuremath{\mathcal{H}}^\pm)^\Sigma$ and $h_E$ is a point $P' = [(z, h_E), t'] \in X'_{U'}(\overline{\mathbb Q})$, which represents an abelian variety $A'$ with multiplication by $\ensuremath{\mathcal{O}}_{E}$. From the determinant condition of $\ensuremath{\mathcal{F}}'_{U'}$, we have that $A'$ is isogenous to a product of abelian varieties $A_1 \times A_2$, one with CM by $E$ of type $\phi\sqcup \phi'$ and the other of CM-type $\overline{\phi}\sqcup \phi'$. We now compare the points $P$ and $P'$ by embedding both $X_U$ and $X'_{U'}$ into $X''_{U''}$. Recall that $G'' = \mathop{\mathrm{Res}}_{F/\mathbb Q} [(B^\times \times E^\times)/F^\times]$ where $F^\times \subset B^\times \times E^\times$ by $a \mapsto (a, a^{-1})$. This gives rise to the Shimura variety $X''_{U''}$. The embedding $G' \to G''$ gives an embedding $X'_{U'} \to X''_{U''}$. 
To relate $X_U$ and $X''_{U''}$, we take the group $\mathop{\mathrm{Res}}_{F/\mathbb Q} (B^\times \times E^\times)$ and the quotient map $\mathop{\mathrm{Res}}_{F/\mathbb Q} (B^\times \times E^\times) \to G''$. The group gives rise to a Shimura variety $X_U \times Y_J$, where $Y_J$ is the zero-dimensional Shimura variety associated with the datum of the torus $T_E$ and morphism $h_E$ as in the definition of $G''$, and $J \subset T_E(\ensuremath{\mathbb{A}}_f)$ an open compact subgroup. This quotient map of Shimura datum gives rise to a surjective morphism $$X_U \times Y_J \to X''_{U''}$$ of Shimura varieties, where $U'' = U \cdot J \subset G''(\ensuremath{\mathbb{A}}_f)$. Thus, we have the following morphisms of algebraic groups $$G \gets G \times T_E \to G'' \gets G'$$ which gives rise to the chain of morphisms of Shimura varieties $$X_U \gets X_U \times Y_J \to X''_{U''} \gets X'_{U'}.$$ However, given a point $y \in Y_J$, we are able to construct a section $X_U \to X_U \times Y_J \to X''_{U''}$. **Proposition 22**. *Suppose $P \in X_U$ is a CM point corresponding to the embedding $E \hookrightarrow B$. We can choose $y \in Y_J$ and $P' \in X'_{U'}$ such that $P \in X_U$ and $P' \in X'_{U'}$ have the same image $P''$ in $X''_{U''}$.* *Proof.* For a given $P \in X_U$, we can choose a representative $[z, t] \in (\ensuremath{\mathcal{H}}^\pm)^\Sigma \times G(\ensuremath{\mathbb{A}}_f)$ under the complex uniformization. If we let $t' = (t, t^{-1})$, then $t' \in (T_E \times T_E \cap G')(\ensuremath{\mathbb{A}}_f)$ because $\nu(t') = 1 \in \ensuremath{\mathbb{G}}_m$. Letting $y \in Y_J$ be the point corresponding to the choice of $t^{-1} \in T_E(\ensuremath{\mathbb{A}}_f)$ makes it so $(P, y) \in X_U \times Y_J$ and $P' = [z \times h_E, t'] \in X'_{U'}$ have the same image in $X''_{U''}$. 
◻ All of the geometric points of $Y_J$ are defined over $E_{X'}$, so the integral model $\ensuremath{\mathcal{X}}_U$ for $X_U$ gives rise to an integral model $\ensuremath{\mathcal{X}}_U \times \ensuremath{\mathcal{Y}}_J$ for $X_U \times Y_J$. We have a $p$-divisible group $\ensuremath{\mathcal{H}}''_{U''}$ defined over $\ensuremath{\mathcal{X}}''_{U''}$. We also define a $p$-divisible group $I_J$ over $Y_J$ by setting $$I_J \coloneqq (E_p/\ensuremath{\mathcal{O}}_{E, p} \times Y)/J.$$ Let $K/E_{X'}$ be a finite extension. Suppose that we have points $x \in X_U(K)$ and $y \in Y_J(K)$. These give rise to a point $x'' \in X''_{U''}(K)$. By [@Yua18 Prop 5.2], we can extend $I_J$ to a $p$-divisible group $\ensuremath{\mathcal{I}}_y$ over the closure of the point $y$ in $\ensuremath{\mathcal{Y}}_J$. **Proposition 23** ([@Yua18 Prop 5.3]). *There are canonical isomorphisms $$\mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}''_{x''}) \cong \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}_x) \otimes_{\ensuremath{\mathcal{O}}_{E, p} \otimes \ensuremath{\mathcal{O}}_{K}} \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{I}}^t_{y})^\vee,\quad \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}''^t_{x''}) \cong \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{H}}^t_x) \otimes_{\ensuremath{\mathcal{O}}_{E, p} \otimes \ensuremath{\mathcal{O}}_{K}} \mathop{\mathrm{Lie}}(\ensuremath{\mathcal{I}}^t_{y}).$$* Define $\ensuremath{\mathcal{N}}''(\ensuremath{\mathcal{H}}''_{x''}, \tau) \coloneqq \omega(\ensuremath{\mathcal{H}}''_{x''})_\tau \otimes \omega(\ensuremath{\mathcal{H}}''^t_{x''})_{\overline{\tau}}$.
Then the previous proposition immediately gives us that $$\ensuremath{\mathcal{N}}''(\ensuremath{\mathcal{H}}''_{x''}, \tau) \cong \ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_x, \tau).$$ Since the images of the points $P$ and $P'$ coincide in $\ensuremath{\mathcal{X}}''_{U''}$, the $p$-divisible group $\ensuremath{\mathcal{H}}_P$ coincides with the $p^\infty$-torsion of the abelian scheme $\ensuremath{\mathcal{A}}'_{P'}$ over $P' \in \ensuremath{\mathcal{X}}'_{U'}$. Thus, we can use the norm from the Hermitian pairing $$\norm{\cdot} \colon W(A'_{P'}, \tau) \otimes W(A'^t_{P'}, \overline{\tau}) \to \mathbb C$$ to make $$\widehat{\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_P, \tau)} \coloneqq \paren*{\omega(\ensuremath{\mathcal{H}}_P)_\tau \otimes \omega(\ensuremath{\mathcal{H}}_P^t)_{\overline{\tau}}, \norm{\cdot}}$$ a metrized line bundle. **Theorem 24**. *The Kodaira--Spencer isomorphism in Theorem [Theorem 21](#thm:Hodge Bundle p Divisible Group){reference-type="ref" reference="thm:Hodge Bundle p Divisible Group"} respects the norms at infinity at $P$ and hence extends to an isomorphism of metrized line bundles $$\widehat{\ensuremath{\mathcal{L}}_P}^2(\ensuremath{\mathfrak{d}}_{\Sigma, p}\ensuremath{\mathfrak{d}}_{B, p})^{-1} \cong \bigotimes_{\tau \in \phi} \widehat{\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_P, \tau)} \otimes \widehat{\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{H}}_P, \overline{\tau})}.$$* *Proof.* At the places at infinity, the Dieudonné module $\ensuremath{\mathbb{D}}(\ensuremath{\mathcal{H}}_P)$ is naturally isomorphic to the first de Rham homology of $A_P \coloneqq A'_{P'}$.
Thus, the Kodaira--Spencer morphism comes from the Hodge filtration $$0 \to \Omega(A^t_P) \to H_1^{\mathop{\mathrm{dR}}}(A_P) \to \Omega(A_P) \to 0.$$ For each $\tau\colon E \to \mathbb C$, we can look at the $\tau$-component of the filtration $$0 \to \Omega(A^t_P)_\tau \to H_1^{\mathop{\mathrm{dR}}}(A_P)_\tau \to \Omega(A_P)_\tau \to 0.$$ For $\tau\colon E \to \mathbb C$ lying above $\Sigma^c$, there is no contribution from either line bundle, so we can restrict ourselves to considering the $\tau$ lying above $\Sigma$, specifically $\tau \in \phi$. For these $\tau$, we have $$\Omega(A^t_P)_\tau \to H_1^{\mathop{\mathrm{dR}}}(A_P)_\tau \overset{\nabla}{\to} H_1^{\mathop{\mathrm{dR}}}(A_P)_\tau \otimes \Omega^1_{X_U, P} \to \Omega(A_P)^\vee \otimes \Omega^1_{X_U, P}.$$ Explicitly, we have that $H_1^{\mathop{\mathrm{dR}}}(A_P)_\tau \cong V_\tau \cong B \otimes_{E, \tau} \mathbb C$. We can choose an isomorphism $B \otimes_{E, \tau} \mathbb C\cong \mathbb C\oplus \mathbb C$ so that $h(i)_\tau \in M_2(\mathbb R)$ acts on $V_\tau$ via the right action $h(i) \cdot (z_1, z_2) = (z_1, z_2) \overline{h(i)}$. Then in terms of the complex uniformization, an element $z = x + iy \in \ensuremath{\mathbb{H}}_\tau^\pm$ in the complex half planes corresponds to a conjugate of $h(i)$ and $\Omega(A^t_P)_\tau \cong V^{0, -1}_\tau$ is the subspace of $\mathbb C^2$ on which $h(i)$ acts as $-i$. Computation shows that $\Omega(A^t_P)_\tau \cong \mathbb C(z, 1) \subset \mathbb C^2$. Moreover, we have that $\mathop{\mathrm{Lie}}(A_P) = V^{-1, 0}_\tau \cong \mathbb C(\overline{z}, 1) \subset \mathbb C^2$.
Thus, explicitly, the map above gives $$\begin{tikzcd}[row sep = small, column sep=small] \Omega(A^t_P)_\tau \arrow[r] & H_1^{\mathop{\mathrm{dR}}}(A_P)_\tau \arrow[r, "\nabla"] & H_1^{\mathop{\mathrm{dR}}}(A_P)_\tau \otimes \Omega^1_{X_U, P} \arrow[r] & \Omega(A_P)^\vee \otimes \Omega^1_{X_U, P} \\ (z, 1) \arrow[r, maps to] & (z, 1) \arrow[r, maps to] & (1, 0) \otimes dz = \frac{(z, 1) - (\overline{z}, 1)}{2iy} \otimes dz \arrow[r, maps to] & \frac{-(\overline{z}, 1)}{2iy} \otimes dz. \end{tikzcd}$$ Thus at infinity, the isomorphism $\omega_{X_U, P} \to \bigotimes_{\tau \in \phi} \omega(A^t_P)_\tau \otimes \omega(A_P)_\tau$ gives $$\bigwedge_{\tau \in \phi} dz \mapsto \bigotimes_{\tau \in \phi} 2iy_\tau\frac{(z_\tau, 1)}{(\overline{z_\tau}, 1)},$$ and taking norms gives $\prod_{\tau \in \phi} 2y_\tau$ on both sides. ◻ **Theorem 25**. *Let $d_B$ be a positive generator of $N_{F/\mathbb Q} \ensuremath{\mathfrak{d}}_{B}$ and let $d_\Sigma = d_{\phi\sqcup \overline{\phi}}$. We have that $$h_{\widehat{L_U}}(P_U) = \sum_{\tau \in \phi} \paren*{h(\phi\sqcup \phi', \tau) + h(\overline{\phi} \sqcup \phi', \overline{\tau})} + \frac{1}{2g} \log d_Bd_{\Sigma}.$$* *Proof.* By Theorem [Theorem 21](#thm:Hodge Bundle p Divisible Group){reference-type="ref" reference="thm:Hodge Bundle p Divisible Group"}, we get that $$2h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) = \sum_{\tau \in \phi} h_{\widehat{\ensuremath{\mathcal{N}}(\tau)}}(P_U) + h_{\widehat{\ensuremath{\mathcal{N}}(\overline{\tau})}}(P_U) + \frac{1}{2g}\log d_Bd_\Sigma,$$ with the extra factor of $g$ coming from the fact that we defined the height over $\mathbb Q$, which is $[F:\mathbb Q]$ times larger than the usual height defined over $F$. 
By the previous proposition, we have that $$h_{\widehat{\ensuremath{\mathcal{N}}(\tau)}}(P_U) = h_{\widehat{\ensuremath{\mathcal{N}}''(\tau)}}(P''_{U''}).$$ Then by our choice of $y \in Y_J$ and $P' \in X'_{U'}$, the point $P''_{U''}$ is the image of $P' \in X'_{U'}$ which represents an abelian variety that is isogenous to a product of CM abelian varieties, one of CM-type $\phi\sqcup \phi'$ and the other of CM-type $\overline{\phi} \sqcup \phi'$. Thus, we get that $$h_{\widehat{\ensuremath{\mathcal{N}}''(\tau)}}(P''_{U''}) = h_{\widehat{\ensuremath{\mathcal{N}}'(\tau)}}(P'_{U'}) = h(\phi\sqcup \phi', \tau) + h(\overline{\phi}\sqcup \phi', \overline{\tau}).$$ ◻ This result does not depend on the choice of complementary CM-type $\phi'$ and so summing over all such complementary CM-types nets us the following. **Theorem 26**. *Suppose that $U = \prod_v U_v$ is a maximal compact subgroup of $G(\ensuremath{\mathbb{A}}_f)$. Then $$\begin{aligned} \frac{1}{2}h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) =& \frac{1}{2^{\abs{{\Sigma}^{\mathsf{c}}}}} \sum_{\Phi \supset \phi} h(\Phi) - \frac{\abs{{\Sigma}^{\mathsf{c}}}}{g2^g} \sum_\Phi h(\Phi)\\ &+ \frac{1}{8} \log d_{E/F, \Sigma}d_\Sigma^{-1} + \frac{1}{4}\log d_\phi d_{\overline{\phi}} + \frac{1}{4g} \log d_Bd_\Sigma + \frac{\abs{\Sigma}}{4g} \log d_F, \end{aligned}$$ where the first summation is over all full CM-types which contain $\phi$, and the second summation over all full CM-types of $E$.* *Proof.* We note $h(\Phi) = h(\overline{\Phi})$ and $h(\Phi, \tau) = h(\overline{\Phi}, \overline{\tau})$. So we can write $$\tag{*}\label{eq:*} \begin{split} h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) - \frac{1}{2g}\log d_Bd_\Sigma= & \sum_{\tau \in \phi\sqcup \phi'} h(\phi\sqcup \phi', \tau) + \sum_{\tau \in \phi\sqcup \overline{\phi'}} h(\phi\sqcup \overline{\phi'}, \tau) \\ &- \sum_{\tau \in \phi'} \paren*{h(\phi\sqcup \phi', \tau) + h(\phi\sqcup \overline{\phi'}, \overline{\tau})}. 
\end{split}$$ Let $(\Phi_1, \Phi_2)$ be a nearby pair of full CM-types meaning that $\abs{\Phi_1 \cap \Phi_2} = g-1$ and let $\tau_i = \Phi_i \backslash (\Phi_1 \cap \Phi_2)$. Then Theorem [Theorem 9](#thm:nearby CM-type){reference-type="ref" reference="thm:nearby CM-type"} tells us the quantity $h(\Phi_1, \tau_1) + h(\Phi_2, \tau_2)$ is independent of the choice of nearby pair, so we will denote it by $h_{\mathrm{nb}}$. By [@Yua18 Cor. 2.6], we have that $$\sum_{\Phi} h(\Phi) = g2^{g - 1}h_{\mathrm{nb}} - 2^{g - 2}\log d_F.$$ Now we sum equation ([\[eq:\*\]](#eq:*){reference-type="ref" reference="eq:*"}) over all complementary types $\phi'$ to get $$2^{\abs{{\Sigma}^{\mathsf{c}}}} h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) - \frac{2^{\abs{{\Sigma}^{\mathsf{c}}}}}{2g}\log d_Bd_\Sigma = 2\sum_{\Phi \supset \phi} \sum_{\tau \in \Phi} h(\Phi, \tau) - \abs{{\Sigma}^{\mathsf{c}}}2^{\abs{{\Sigma}^{\mathsf{c}}}} h_{\mathrm{nb}}.$$ We now use Theorem [Theorem 8](#thm:fullvscomponent){reference-type="ref" reference="thm:fullvscomponent"} to represent the inner summation as $\sum_{\tau \in \Phi} h(\Phi, \tau)$. Doing so gives $$\begin{aligned} \frac{1}{2}h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) =& \frac{1}{2^{\abs{{\Sigma}^{\mathsf{c}}}}} \sum_{\Phi \supset \phi} h(\Phi) - \frac{\abs{{\Sigma}^{\mathsf{c}}}}{g2^g} \sum_\Phi h(\Phi)\\ &+ \frac{1}{2^{\abs{{\Sigma}^{\mathsf{c}}}}} \sum_{\Phi \supset \phi} \frac{1}{4[E_\Phi:\mathbb Q]} \log (d_\Phi d_{\overline{\Phi}}) + \frac{1}{4g} \log d_Bd_\Sigma - \frac{\abs{{\Sigma}^{\mathsf{c}}}}{4g} \log d_F. 
\end{aligned}$$ Base changing up to $E' = E^{\mathop{\mathrm{Gal}}}$, we can simplify the first sum of logarithms as $$\frac{1}{2^{\abs{{\Sigma}^{\mathsf{c}}}} \cdot 4[E':\mathbb Q]} \sum_{\Phi \supset \phi} \log(d_\Phi d_{\overline{\Phi}}) = \sum_{p < \infty} \sum_{\sigma\colon E' \to \overline{\mathbb Q_p}}\frac{1}{2^{\abs{{\Sigma}^{\mathsf{c}}}} \cdot 4[E':\mathbb Q]} \sum_{\Phi \supset \phi} \log \abs{d_{\Phi, p}d_{\overline{\Phi}, p}}_\sigma.$$ For each $\sigma \colon E' \to \overline{\mathbb Q_p}$, let $\pi$ be a generator of $\ensuremath{\mathcal{O}}_{E, p}$ over $\mathbb Z_p$. Let $$f_\Phi(t) = \prod_{\tau\in \Phi} (t - \tau(\pi)) \in \ensuremath{\mathcal{O}}_{E_\Phi, p}[t].$$ The image of $\ensuremath{\mathcal{O}}_{E_\Phi} \otimes_\mathbb Z\ensuremath{\mathcal{O}}_{E, p}$ in $\widetilde{E_{\Phi, p}}$ is $\ensuremath{\mathcal{O}}_{E_\Phi, p}[t]/f_\Phi(t)$. This means $d_{\Phi, p}$ is the discriminant of $f_\Phi(t)$, or $$d_{\Phi, p} = \prod_{(\tau, \tau')} (\tau(\pi) - \tau'(\pi))^2,$$ where the product is taken over all unordered pairs of distinct elements $\tau, \tau' \in \Phi$. We can write the summation over all $\Phi \supset \phi$ as the sum of $\log \abs{\tau(\pi) - \tau'(\pi)}_\sigma$ over all pairs $(\tau, \tau')$, and then subtract the pairs with $\tau = \overline{\tau'}$ and those with $\tau \in \phi$ and $\tau' \in \overline{\phi}$, or vice versa.
Thus, we can simplify the sum as $$\begin{aligned} \sum_{\Phi \supset \phi} \log \abs{d_{\Phi, p}d_{\overline{\Phi}, p}}_\sigma =&\log \abs*{\frac{\displaystyle\prod_{(\tau, \tau') \in \mathop{\mathrm{Hom}}(E, \overline{\mathbb Q_p})} (\tau(\pi) - \tau'(\pi))^2}{\displaystyle\prod_{\tau \in \Phi} (\tau(\pi) - \overline{\tau}(\pi))^2}}_\sigma^{2^{\abs{{\Sigma}^{\mathsf{c}}} - 1}}\\ &+ \log \abs*{\frac{\displaystyle\prod_{\tau \in \phi} (\tau(\pi) - \overline{\tau}(\pi))^2}{\displaystyle\prod_{(\tau, \tau') \in \phi\sqcup \overline{\phi}} (\tau(\pi) - \tau'(\pi))^2}}_\sigma^{2^{\abs{{\Sigma}^{\mathsf{c}}} - 1}}\\ &+ \log\abs*{\displaystyle\prod_{(\tau, \tau') \in \phi} (\tau(\pi) - \tau'(\pi))^2(\overline{\tau}(\pi) - \overline{\tau'}(\pi))^2}_\sigma^{2^{\abs{{\Sigma}^{\mathsf{c}}}}}. \end{aligned}$$ We can simplify the first term as $\log \abs*{\frac{d_{E, p}}{d_{E/F, p}}}_\sigma = \log \abs{d_{F, p}}_\sigma^2$. The second term can be written as $\log \abs*{\frac{N_{F/E_X} d_{E/F}}{d_\Sigma}}_\sigma = \log \abs*{\frac{d_{E/F, \Sigma}}{d_\Sigma}}_\sigma$. Finally, the last term is $\log |d_{\phi, p}d_{\overline{\phi}, p}|_\sigma$. Plugging this back in gives $$\begin{aligned} \frac{1}{2}h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) =& \frac{1}{2^{\abs{{\Sigma}^{\mathsf{c}}}}} \sum_{\Phi \supset \phi} h(\Phi) - \frac{\abs{{\Sigma}^{\mathsf{c}}}}{g2^g} \sum_\Phi h(\Phi)\\ &+ \frac{1}{8} \log d_{E/F, \Sigma}d_\Sigma^{-1} + \frac{1}{4}\log d_\phi d_{\overline{\phi}} + \frac{1}{4g} \log d_Bd_\Sigma + \frac{\abs{\Sigma}}{4g} \log d_F. \end{aligned}$$ ◻ # André--Oort for Shimura Varieties {#sec:Andre Oort} A definition of the height of a partial CM-type was given by [@Pil21]. We show that their definition of the height of a partial CM-type is compatible with our quaternionic height. We first recall their definition of the modified height on special points, specialized to the case of a partial CM-type.
Let $E/F$ be a CM extension, with $[F:\mathbb Q] = g$, and set $R_E \coloneqq \mathop{\mathrm{Res}}_{F/\mathbb Q} E^\times/F^\times$. Let $\phi\subset \mathop{\mathrm{Hom}}(E, \mathbb C)$ be a partial CM-type and $\phi'$ be a complementary partial CM-type. Use them to identify $E\otimes_\mathbb Q\mathbb R\overset{\phi\sqcup \phi'}{\cong} \mathbb C^g$. Then we have that $$R_{E, \mathbb R} \cong \prod_{\sigma \in \phi\sqcup \phi'} \mathbb C_\sigma/\mathbb R,$$ and we take our homomorphism $h_\phi\colon \mathbb C^\times \to R_{E, \mathbb R}$ as $$h_\phi(z) \coloneqq \paren*{\prod_{\sigma \in \phi} z_\sigma, \prod_{\sigma \in \phi'} 1_\sigma}.$$ The Shimura datum $(R_E, h_\phi)$ and compact open subgroup $K \subset R_E(\ensuremath{\mathbb{A}}_f)$ give rise to a $0$-dimensional Shimura variety $T_K$ whose complex points are $$T_K(\mathbb C) \cong R_E(\mathbb Q) \backslash R_E(\ensuremath{\mathbb{A}}_f)/K.$$ It has a canonical model over a number field $E_T$. We identify $$R_E(\mathbb C) \cong \prod_{\sigma \in \phi\sqcup \phi'} \mathbb C_\sigma.$$ Let $\chi\colon R_E(\mathbb C) \to \mathbb C$ be the character given by $$\chi\paren*{\prod_{\sigma \in \phi\sqcup \phi'}z_\sigma} \coloneqq \prod_{\sigma \in \phi} \frac{z_\sigma}{\overline{z_\sigma}}.$$ Let $V$ be the smallest $\mathbb Q$-representation of $R_E$ whose complexification contains $\chi$. Let $\mathop{\mathrm{Fil}}^aV$ be the smallest piece of the Hodge filtration and assume that it is one-dimensional and $R_E$ acts on it via $\chi$. Let $\Lambda \subset V$ be a maximal lattice and now take $K = \prod_p K_p \subset R_E(\ensuremath{\mathbb{A}}_f)$ to be the stabilizer of $\Lambda$. Let $\psi$ be a polarization on $V$ that takes integral values on $\Lambda$. The representation $V$ of $R_E$ gives rise to a vector bundle $\ensuremath{\mathcal{V}}_K$ over $T_K$ and a filtration on it.
By [@Dia22], over every non-archimedean place $v$ of $\ensuremath{\mathcal{O}}_{E_T}$ lying over a prime $p \in \mathbb Z$, our data $(\Lambda, V, \mathop{\mathrm{Fil}})$ can be functorially identified with data $(_p\Lambda, _pV, _p\mathop{\mathrm{Fil}})$ of a filtered vector bundle over $T_{E_{T, v}, K}$, the Shimura variety extended over local fields. To each place $v$ of $\ensuremath{\mathcal{O}}_{E_T}$, we define a norm on $\mathop{\mathrm{Fil}}^a \Lambda$ as:

- If $v \mid \infty$, then the norm is the Hodge norm given by the polarization $\psi$;

- If $v \mid p$ is a non-archimedean place such that

  - $T$ is unramified at $p$,

  - $K_v$ is maximal, and

  - $p \ge a \dim V + 2$,

  then use the crystalline norm on ${}_pV$;

- For all other places $v$, use the intrinsic norm on ${}_p\Lambda$.

Now the height of $\phi$ can be defined as $$h(\phi) \coloneqq \sum_{v} -\log \|s\|_v,$$ where $s$ is any nonzero element of $\ensuremath{\mathcal{V}}_K$ and the sum is over all places of $\ensuremath{\mathcal{O}}_{E_T}$. The height depends on the choice of lattice $\Lambda$ and polarization $\psi$, but only up to $O(\log d_E)$, where $d_E$ is the discriminant of $E$. **Theorem 27** ([@Pil21 Lem. 9.4, Thm. 9.5, 9.6]). *The height $h(\phi)$ is defined up to $O(\log d_E)$.* **Theorem 28**. *$$2h(\phi) = h_{\widehat{\ensuremath{\mathcal{L}}_U}}(P_U) + O(\log d_E).$$* *Proof.* Consider the representation of $E$ on $V = B$ through left multiplication. Our point $P_U$ corresponds to an action whose trace is $\mathop{\mathrm{Tr}}_{\phi\sqcup \phi'} + \mathop{\mathrm{Tr}}_{\overline{\phi} \sqcup \phi'}$. When we take the Shimura variety associated with the adjoint group $G^{\mathop{\mathrm{ad}}}$, this representation gives a representation of $R_E$ on $V/F$ whose trace is given by the trace of $\frac{z_\tau}{z_{\overline{\tau}}}$, meaning that we get $2\chi$. Thus, we get a representation of $R_E$ whose complexification contains $2\chi$.
Thus, we are reduced to showing that the lattices chosen at each finite place are the same. However, since our equality is only up to $O(\log d_E)$, it suffices to consider primes where $B, E$ are unramified and the local norm used in the definition of $h(\phi)$ is given by the crystalline norm. Let $S = \ensuremath{\mathcal{O}}_{E_X'^p, p}$ again be the maximal unramified extension of $\ensuremath{\mathcal{O}}_{E_X', p}$. To show that the lattices coincide, it suffices to check the two lattices at each $S$ point of $X_U$. Under the mapping $X_U \to X_U \times Y_J \to X''_{U''}$, the point $P_U$ corresponds to an abelian variety $\ensuremath{\mathcal{A}}$ with complex multiplication of type $\phi\sqcup \phi' + \overline{\phi} \sqcup \phi'$. By [@Pil21 Sec. 9.3], the lattice given by the crystalline norm is the same as the lattice from integral de Rham cohomology. So the lattice at that point is $$\Omega(\ensuremath{\mathcal{A}}) \otimes S \cong \Omega(\ensuremath{\mathcal{A}}[p^\infty]) \otimes S \cong \Omega(\ensuremath{\mathcal{H}}''_S).$$ However, by Proposition [Proposition 23](#prop:compare p divisible groups){reference-type="ref" reference="prop:compare p divisible groups"}, this is the same as $\Omega(\ensuremath{\mathcal{H}}_S)$. Moreover, the pairing is perfect here, meaning that we get the same lattice on $\Omega(\ensuremath{\mathcal{A}}^t) \otimes S \cong \Omega(\ensuremath{\mathcal{H}}''^t_S)$. Thus, twice $h(\phi)$ corresponds to taking the height relative to the lattice $\Omega(\ensuremath{\mathcal{A}}) \otimes \Omega(\ensuremath{\mathcal{A}}^t)$, which is $\Omega(\ensuremath{\mathcal{H}}_S) \otimes \Omega(\ensuremath{\mathcal{H}}_S^t)$, which by Theorem [Theorem 21](#thm:Hodge Bundle p Divisible Group){reference-type="ref" reference="thm:Hodge Bundle p Divisible Group"} is just $\ensuremath{\mathcal{L}}_U$, as required. ◻
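The counting that underlies the sums over complementary CM-types above is elementary: $E$ has $2^g$ full CM-types, and $2^{\abs{{\Sigma}^{\mathsf{c}}}}$ of them contain a fixed partial CM-type $\phi$ supported on $\Sigma$. The following sketch checks this by brute-force enumeration; the bit-vector encoding and the sample values of $g$ and $\Sigma$ are illustrative choices of ours, not notation from the text.

```python
from itertools import product

def full_cm_types(g):
    # Model Hom(E, C) as g conjugate pairs {tau_v, conj(tau_v)}, one pair per
    # archimedean place v of F; a full CM-type picks one member of each pair,
    # encoded as one bit per place (0 = tau_v, 1 = conj(tau_v)).
    return list(product((0, 1), repeat=g))

g = 5
Sigma = {0, 1}  # hypothetical support of the partial CM-type phi
containing_phi = [Phi for Phi in full_cm_types(g)
                  if all(Phi[v] == 0 for v in Sigma)]  # Phi restricts to phi on Sigma

assert len(full_cm_types(g)) == 2 ** g               # 2^g full CM-types of E
assert len(containing_phi) == 2 ** (g - len(Sigma))  # 2^{|Sigma^c|} contain phi
```

Since the full CM-types containing $\phi$ are exactly the $\phi\sqcup\phi'$ for the $2^{\abs{{\Sigma}^{\mathsf{c}}}}$ complementary types $\phi'$, summing the previous identity over all $\phi'$ is what produces the powers of $2$ in Theorem 26.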
*arXiv:2309.08886, "Heights of Special Points on Quaternionic Shimura Varieties", Roy Zhao (math.NT).*
--- abstract: | We consider small quantum groups with root systems of Cartan, super and modular type, among others. These are constructed as Drinfeld doubles of finite-dimensional Nichols algebras of diagonal type. We prove a linkage principle for them by adapting techniques from the work of Andersen, Jantzen and Soergel in the context of small quantum groups at roots of unity. Consequently we characterize the blocks of the category of modules. We also find a notion of (a)typicality similar to the one in the representation theory of Lie superalgebras. The typical simple modules turn out to be the simple and projective Verma modules. Moreover, we deduce a character formula for $1$-atypical simple modules. address: Facultad de Matemática, Astronomı́a, Fı́sica y Computación, Universidad Nacional de Córdoba. CIEM--CONICET. Medina Allende s/n, Ciudad Universitaria 5000 Córdoba, Argentina author: - Cristian Vay title: Linkage principle for small quantum groups --- [^1] [^2] # Introduction The small quantum groups at roots of unity introduced by Lusztig are beautiful examples of finite-dimensional Hopf algebras with relevant applications in several subjects. They are instrumental in Lusztig's program concerning the representation theory of algebraic groups in positive characteristic [@L-ModRepThe; @L-qgps-at-roots; @L-findim; @L-onqg]. In this framework, he conjectured a character formula for their simple modules, and that this formula implies an analogous one for restricted Lie algebras in positive characteristic. The validity of the "quantum formula" was confirmed by the works of Kazhdan--Lusztig [@KL1; @KL2] and Kashiwara--Tanisaki [@KT]. Andersen, Jantzen and Soergel [@AJS] then proved that the "quantum formula" implies the "modular formula" for large enough characteristics (although, we have to mention that 20 years later Williamson [@williamson] showed that this "enough" could be very big). 
More recently, Fiebig [@fiebig] gave a new proof for the "quantum formula" building on certain categories defined in [@AJS] and, jointly with Lanini, he has carried out a further development of these categories, cf. [@FL]. Here, we adapt tools and techniques from [@AJS] to investigate the representation theory of a more general class of small quantum groups: the Drinfeld doubles of bosonizations of finite-dimensional Nichols algebras of diagonal type over abelian groups, see Figure [\[fig:uq\]](#fig:uq){reference-type="ref" reference="fig:uq"}. The interest in Nichols algebras has been continually increasing since Andruskiewitsch and Schneider assigned them a central role in the classification of pointed Hopf algebras [@AS-cartantype]. Notably, the classification and the structure of the Nichols algebras of diagonal type are governed by Lie-type objects introduced by Heckenberger [@Hweyl], see also [@HY08; @AHS; @HS]. In particular, they have associated (generalized) root systems and, besides those of Cartan type, we find among the examples root systems of finite-dimensional contragredient Lie superalgebras in characteristic $0$, $3$ and $5$, and root systems of finite-dimensional contragredient Lie algebras in positive characteristic, as was observed by Andruskiewitsch, Angiono and Yamane [@AAYamane; @AA-diag-survey]. Consequently, we get small quantum groups with these more general root systems. ## Main results Let $u_\mathfrak{q}$ be as in Figure [\[fig:uq\]](#fig:uq){reference-type="ref" reference="fig:uq"}. Then it admits a triangular decomposition $u_\mathfrak{q}=u_\mathfrak{q}^-u_\mathfrak{q}^0u_\mathfrak{q}^+$ which gives rise to Verma type modules $M(\pi)$ for every algebra map $\pi:u_\mathfrak{q}^0\longrightarrow\mathbb{C}$. Moreover, every $M(\pi)$ has a unique simple quotient $L(\pi)$ and any simple $u_\mathfrak{q}$-module can be obtained in this way.
For instance, all these were computed for $\mathfrak{q}$ of type $\mathfrak{ufo}(7)$ in [@AAMRufo]. The linkage principle gives us information about the composition factors of the Verma modules as we explain after introducing some notation. Let $\{\alpha_1, ..., \alpha_\theta\}$ be the canonical $\mathbb Z$-basis of $\mathbb Z^\mathbb{I}$ with $\mathbb{I}=\{1, ..., \theta\}$. The matrix $\mathfrak{q}$ defines a bicharacter $\mathbb Z^\mathbb{I}\times\mathbb Z^\mathbb{I}\longrightarrow\mathbb C^\times$ which we denote also $\mathfrak{q}$. Given $\beta\in\mathbb Z^\mathbb{I}$, we set $q_\beta=\mathfrak{q}(\beta,\beta)$ and $b^\mathfrak{q}(\beta)=\mathop{\mathrm{ {\mathrm{ord}} }}q_\beta$. We denote $\rho^\mathfrak{q}:\mathbb Z^\mathbb{I}\longrightarrow\mathbb{C}^\times$ the group homomorphism such that $\rho^\mathfrak{q}(\alpha_i)=q_{\alpha_i}$ for all $i\in\mathbb{I}$. We notice $u_\mathfrak{q}^0$ is an abelian group algebra generated by $K_i,L_i$, $i\in\mathbb{I}$ (in Figure [\[fig:uq\]](#fig:uq){reference-type="ref" reference="fig:uq"}, $\Gamma$ is generated by the $K_i$'s) and unlike $u_q(\mathfrak{g})$, they yield two copies of the (finite) torus. If $\alpha=n_1\alpha_1+\cdots+n_\theta\alpha_\theta\in\mathbb Z^\mathbb{I}$, we denote $K_\alpha=K_1^{n_1}\cdots K_\theta^{n_\theta}$ and $L_\alpha=L_1^{n_1}\cdots L_\theta^{n_\theta}$. The algebra map $\pi\widetilde\mu:u_\mathfrak{q}^0\longrightarrow\mathbb{C}$ is defined by $\pi\widetilde\mu(K_\alpha L_\beta)=\frac{\mathfrak{q}(\alpha,\mu)}{\mathfrak{q}(\mu,\beta)}\pi(K_\alpha L_\beta )$, $\alpha,\beta\in\mathbb Z^\mathbb{I}$. Let $\Delta^\mathfrak{q}_+\subset\mathbb Z_{\geq0}^\mathbb{I}$ be the set of positive roots of the Nichols algebra $\mathfrak{B}_\mathfrak{q}$. 
If $\beta\in\Delta^\mathfrak{q}_+$, we define $$\begin{aligned} \beta\downarrow\mu=\mu-n^\pi_\beta(\mu)\beta\end{aligned}$$ where $n^\pi_\beta(\mu)$ is the unique $n\in\{1, ..., b^\mathfrak{q}(\beta)-1\}$ such that $q_{\beta}^{n}-\rho^\mathfrak{q}(\beta)\,\pi\widetilde{\mu}(K_{\beta}L_{\beta}^{-1})=0$, if it exists, and otherwise $n^\pi_\beta(\mu)=0$. **Theorem 1** (Strong linkage principle). *Let $L$ be a composition factor of $M(\pi\widetilde{\mu})$, $\mu\in\mathbb Z^\mathbb{I}$. Then $L\simeq L(\pi\widetilde{\mu})$ or $L\simeq L(\pi\widetilde{\lambda})$ with $\lambda=\beta_r\downarrow\cdots\beta_1\downarrow\mu$ for some $\beta_1, ..., \beta_r\in\Delta_+^\mathfrak{q}$.* A first consequence of this principle is that it defines an equivalence relation which completely characterizes the blocks of the category. It is also the starting point to imagine character formulas for the simple modules. In this direction, we deduce that $M(\pi\widetilde{\mu})=L(\pi\widetilde{\mu})$ is simple if and only if $$\begin{aligned} \prod_{\substack{\beta\in\Delta_+^\mathfrak{q}\\ 1\leq t<b^\mathfrak{q}(\beta)}} \left(q_{\beta}^{t}-\rho^\mathfrak{q}(\beta)\,\pi\widetilde{\mu}(K_{\beta}L_{\beta}^{-1})\right)\neq0;\end{aligned}$$ this was also proved in [@HY Proposition 5.16]. Moreover, we give a character formula for $L(\pi\widetilde{\mu})$ if the above expression is zero for a unique $\beta\in\Delta_+^\mathfrak{q}$ and for $t=n_\beta^\pi(\mu)$. Explicitly, $$\begin{aligned} {\mathrm{ch}} L(\pi\widetilde\mu)=\quad\frac{1-e^{-n_\beta^\pi(\mu)\beta}}{1-e^{-\beta}} \prod_{\gamma\in \Delta_+^\mathfrak{q}\setminus\{\beta\}}\frac{1-e^{-b^\mathfrak{q}(\gamma)\gamma}}{1-e^{-\gamma}}.\end{aligned}$$ By analogy with the theory of Lie superalgebras [@kac; @ser], we say that the number of zeros of the former product measures the degree of atypicality. 
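The operation $\beta\downarrow\mu$ is directly computable from the matrix $\mathfrak{q}$. Below is a minimal Python sketch for the trivial algebra map $\pi$, in which case $\pi\widetilde{\mu}(K_\beta L_\beta^{-1})=\mathfrak{q}(\beta,\mu)\,\mathfrak{q}(\mu,\beta)$; roots of unity are handled numerically in floating point, and the rank-one matrix at the end is a toy example of our own choosing, not one from the classification.

```python
import cmath

def bichar(qmat, alpha, beta):
    # extend the matrix (q_ij) to a bicharacter on Z^I by bimultiplicativity
    z = 1
    for i, a in enumerate(alpha):
        for j, b in enumerate(beta):
            z *= qmat[i][j] ** (a * b)
    return z

def mult_order(z, bound=10**4, tol=1e-8):
    # multiplicative order of a root of unity, found numerically
    for m in range(1, bound + 1):
        if abs(z ** m - 1) < tol:
            return m
    raise ValueError("not a root of unity within the bound")

def down(qmat, beta, mu, tol=1e-8):
    # beta "down" mu for the trivial algebra map pi: find the unique
    # 1 <= n < b(beta) with q_beta^n = rho(beta) * q(beta, mu) * q(mu, beta)
    q_beta = bichar(qmat, beta, beta)
    b = mult_order(q_beta)
    rho = 1
    for i, n_i in enumerate(beta):
        rho *= qmat[i][i] ** n_i          # rho(beta) = prod q_ii^{n_i}
    target = rho * bichar(qmat, beta, mu) * bichar(qmat, mu, beta)
    for n in range(1, b):
        if abs(q_beta ** n - target) < tol:
            return tuple(m - n * c for m, c in zip(mu, beta))
    return tuple(mu)                      # n = 0: beta-down acts trivially

# rank-one toy example: q_11 a primitive 5th root of unity, beta = alpha_1
zeta = cmath.exp(2j * cmath.pi / 5)
print(down([[zeta]], (1,), (1,)))         # condition zeta^n = zeta^{2*1+1}, so n = 3
```

For $\theta=1$ with $q_{11}=\zeta_5$ and $\mu=m\alpha_1$, the condition reads $\zeta_5^{\,n}=\zeta_5^{\,2m+1}$, so $\alpha_1\downarrow\mu=(m-n)\alpha_1$, a rank-one analogue of the usual $\mathfrak{sl}_2$ linkage.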
This is similar to [@Y], where Yamane gives a Weyl--Kac character formula for typical simple modules over quantum groups of Nichols algebras of diagonal type with finite root systems; when the Nichols algebra is finite-dimensional, his typical modules coincide with ours. When I discussed a preliminary version of Theorem [Theorem 1](#teo:main teo){reference-type="ref" reference="teo:main teo"} with Nicolás Andruskiewitsch and Simon Riche, they asked me how the Weyl groupoid and the affine Weyl group, respectively, come into play. In fact, the linkage principle is usually encoded in the dot action of the affine Weyl group, and the Weyl groupoid is its replacement in the theory of Nichols algebras [@Hweyl]. We partially answer their questions in the next corollary. We also discover a phenomenon similar to what occurs in the setting of Lie superalgebras. There, the action of the affine Weyl group generated by the even reflections, as well as the translations by odd roots, take part in the linkage principle, see *e. g.* [@chenwang]. Let $\Delta_{+,\operatorname{car}}^\mathfrak{q}$ be the set of positive Cartan roots of $\mathfrak{q}$ and $s_\beta$ the associated reflection. We set $\varrho^\mathfrak{q}=\frac{1}{2}\sum_{\beta\in \Delta^\mathfrak{q}_+}(b^\mathfrak{q}(\beta)-1)\beta$. For $\mu\in\mathbb Z^\mathbb{I}$, $\beta\in\Delta_{+,\operatorname{car}}^\mathfrak{q}$ and $m\in\mathbb Z$, we define $$\begin{aligned} s_{\beta,m}\bullet\mu=s_\beta(\mu+mb^\mathfrak{q}(\beta)\beta-\varrho^\mathfrak{q})+\varrho^\mathfrak{q}.\end{aligned}$$ We denote $\mathcal{W}_{\operatorname{link}}^\mathfrak{q}$ the group generated by all the affine reflections $s_{\beta,m}$. We recall that $\mathfrak{q}$ is of Cartan type if $\Delta_+^\mathfrak{q}=\Delta_{+,\operatorname{car}}^\mathfrak{q}$, and $\mathfrak{q}$ is of super type if its root system is isomorphic to the root system of a finite-dimensional contragredient Lie superalgebra in characteristic 0.
If $\mathfrak{q}$ is of super type, then $\Delta_{+,\operatorname{odd}}^\mathfrak{q}:=\Delta^\mathfrak{q}_+\setminus\Delta_{+,\operatorname{car}}^\mathfrak{q}$ is not empty and $\mathop{\mathrm{ {\mathrm{ord}} }}q_\beta=2$ for every root $\beta\in\Delta_{+,\operatorname{odd}}^\mathfrak{q}$; in this case $\beta\downarrow\mu=\mu$ or $\mu-\beta$. **Corollary 2** (Linkage principle). *Assume $\pi$ is the trivial algebra map. Let $L(\pi\widetilde{\lambda})$ be a composition factor of $M(\pi\widetilde{\mu})$. Then*

1. *$\lambda\in\mathcal{W}_{\operatorname{link}}^\mathfrak{q}\bullet\mu$ if $\mathfrak{q}$ is of Cartan type.*

2. *$\lambda\in\mathcal{W}^\mathfrak{q}_{\operatorname{link}}\bullet(\mu+\mathbb Z\Delta_{+,\operatorname{odd}}^\mathfrak{q})$ if $\mathfrak{q}$ is of super type.*

This corollary holds more generally for matrices of standard type, those with constant bundle of root systems. However, there are matrices which are not standard, such as the matrices of modular type. The assumption on $\pi$ is not particularly restrictive as we deal with all the simple modules of highest weight $\widetilde{\mu}:u_\mathfrak{q}^0\longrightarrow\mathbb C$, $K_\alpha L_\beta\mapsto\frac{\mathfrak{q}(\alpha,\mu)}{\mathfrak{q}(\mu,\beta)}$, for all $\mu\in\mathbb Z^\mathbb{I}$. In the case of $u_q(\mathfrak{g})$, these form the category of modules of type $1$ in the sense of Lusztig, cf. [@AJS §2.4]. ## Sketch of the proof As we mentioned at the beginning, we imitate the ideas of [@AJS §1-§7]. There, the authors consider $\mathbb Z^\mathbb{I}$-graded algebras admitting a triangular decomposition $U=U^-U^0U^+$ with $U^0$ commutative and satisfying additional conditions which are fulfilled by $u_q(\mathfrak{g})$ and $U^{[p]}(\mathfrak{g}_k)$. Then, given a Noetherian commutative algebra $\mathbf{A}$ and an algebra map $\pi:U^0\longrightarrow\mathbf{A}$, they define certain categories $\mathcal{C}_\mathbf{A}$ of $\mathbb Z^\mathbb{I}$-graded $(U,\mathbf{A})$-bimodules.
We observe here that we can consider these categories also for $u_\mathfrak{q}$. Roughly speaking, in the case $\mathbf{A}=\mathbb C$, this identifies with the abelian subcategory generated by the simple modules $L(\pi\widetilde{\mu})$ for $\mu\in\mathbb Z^\mathbb{I}$. A powerful tool used in *loc. cit.* to study the categories $\mathcal{C}_\mathbf{A}$ is given by the so-called Lusztig automorphisms $T_w$ of $u_q(\mathfrak{g})$, where $w$ runs over the Weyl group of $\mathfrak{g}$. We find a difference between $u_q(\mathfrak{g})$ and $u_\mathfrak{q}$ at this point. Indeed, we have Lusztig isomorphisms but connecting possibly different algebras, *i.e.* $T_w:u_{w^{-*}\mathfrak{q}}\longrightarrow u_{\mathfrak{q}}$ and the matrices $w^{-*}\mathfrak{q}$ and $\mathfrak{q}$ are not necessarily equal. These isomorphisms were defined in [@Hlusztigiso] for each $w\in{}^\mathfrak{q}\mathcal{W}$, the Weyl groupoid of $\mathfrak{q}$ [@Hweyl]. Nevertheless, we can carefully use them as in [@AJS]. First, we produce different triangular decompositions on $u_{\mathfrak{q}}$ and then each triangular decomposition gives rise to new Verma modules. Namely, for $w\in{}^\mathfrak{q}\mathcal{W}$ and $\mu\in\mathbb Z^\mathbb{I}$, we denote $Z_\mathbb C^w(\mu)$ the Verma module induced by $\pi\widetilde{\mu}$ using the triangular decomposition $u_\mathfrak{q}=T_w(u_{w^{-*}\mathfrak{q}}^-)\,u_\mathfrak{q}^0\,T_w(u_{w^{-*}\mathfrak{q}}^+)$. Let $L_\mathbb C^w(\mu)$ denote the unique simple quotient of $Z_\mathbb C^w(\mu)$. For instance, $Z_\mathbb C(\mu):=Z_\mathbb C^ {\mathrm{id}} (\mu)=M(\pi\widetilde{\mu})$ and $L_\mathbb C(\mu):=L_\mathbb C^ {\mathrm{id}} (\mu)=L(\pi\widetilde{\mu})$. Now, we set $\mu\langle w\rangle=\mu+ w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^\mathfrak{q}\in\mathbb Z^\mathbb{I}$; we notice that $\varrho^{w^{-*}\mathfrak{q}}$ and $\varrho^\mathfrak{q}$ could be different.
We show that the Verma modules $Z^w_\mathbb C(\mu\langle w\rangle)$ and $Z^x_\mathbb C(\mu\langle x\rangle)$ have identical characters and the Hom-space between them is one-dimensional, for all $w,x\in{}^\mathfrak{q}\mathcal{W}$. Moreover, we construct inductively a generator for each space and compute its kernel. We also prove that the image of $\Phi:Z_\mathbb C(\mu)\longrightarrow Z^{w_0}_\mathbb C(\mu\langle w_{0}\rangle)$ is isomorphic to $L_\mathbb C(\mu)$, cf. Figure [\[fig:sketch\]](#fig:sketch){reference-type="ref" reference="fig:sketch"}. We are ready to outline the last step of the proof of Theorem [Theorem 1](#teo:main teo){reference-type="ref" reference="teo:main teo"}. Let $L_\mathbb C(\lambda)$ be a composition factor of $Z_\mathbb C(\mu)$ not isomorphic to $L_\mathbb C(\mu)$. Then $L_\mathbb C(\lambda)$ is a composition factor of $\mathop{\mathrm{ {\mathrm{Ker}} }}\Phi$ and hence also of $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi= {\mathrm{Im}} \psi$, cf. Figure [\[fig:sketch\]](#fig:sketch){reference-type="ref" reference="fig:sketch"} (for some $s$). Since the Verma modules in the same row of Figure [\[fig:sketch\]](#fig:sketch){reference-type="ref" reference="fig:sketch"} have identical characters, we conclude that $L_\mathbb C(\lambda)$ is a composition factor of $Z_\mathbb C(\beta\downarrow\mu)$ as well. We observe that $\mu-\beta^\mathfrak{q}_{top}=\mu-\sum_{\beta\in \Delta^\mathfrak{q}_+}(b^\mathfrak{q}(\beta)-1)\beta\leq\lambda\leq\beta\downarrow\mu<\mu$ because $\beta^\mathfrak{q}_{top}$ is the maximum $\mathbb Z^\mathbb{I}$-degree of $\mathfrak{B}_\mathfrak{q}$. Therefore, by repeating this procedure, we will find $\beta_1, ..., \beta_r\in\Delta_+^\mathfrak{q}$ such that $\lambda=\beta_r\downarrow\cdots\beta_1\downarrow\mu$ as desired. ## Relations with other algebras in the literature We would like to remark that the representations of the present small quantum groups can be helpful for other algebras.
For instance, this was pointed out by Andruskiewitsch, Angiono and Yakimov [@AAYakimov §1.3.3] for the large quantum groups studied in [@Ang-reine; @Ang-distinguished; @AAYakimov] which are analogous to the quantized enveloping algebras of De Concini--Kac--Procesi. The small quantum groups are particular quotients of them. As in [@AAYakimov], we highlight that our results apply to small quantum groups at even roots of unity. Another example is given by the braided Drinfeld doubles of Nichols algebras recently constructed by Laugwitz and Sanmarco [@LS]. These are quotients of $u_\mathfrak{q}$ with only one copy of the finite torus. Since the corresponding projections preserve the triangular decompositions [@LS Proposition 3.16], a linkage principle for them can be deduced from Theorem [Theorem 1](#teo:main teo){reference-type="ref" reference="teo:main teo"}. Finally, it is worth noting that Pan and Shu [@panshu] have obtained results comparable to ours, with comparable proofs, for modular Lie superalgebras. This could suggest a relationship between small quantum groups of super type and the corresponding modular Lie superalgebras, similar to the one between $u_q(\mathfrak{g})$ and $U^{[p]}(\mathfrak{g}_k)$. ## Organization The exposition is mostly self-contained. In Section [2](#subsection:conventions){reference-type="ref" reference="subsection:conventions"} we set up general conventions. In Section [3](#sec:nichols){reference-type="ref" reference="sec:nichols"} we collect the main concepts regarding Nichols algebras (PBW basis, Weyl groupoid, root system) and their properties. We illustrate them in super type $A(1|1)$. In Section [4](#sec:Drinfeld){reference-type="ref" reference="sec:Drinfeld"} we recall the construction of the Drinfeld doubles and their Lusztig isomorphisms. In Section [5](#sec:AJS){reference-type="ref" reference="sec:AJS"} we introduce the categories defined in [@AJS] and summarize their general features.
Next we investigate these categories over the Drinfeld doubles of Section [4](#sec:Drinfeld){reference-type="ref" reference="sec:Drinfeld"} using their Lusztig isomorphisms: in Section [6](#sec:vermas){reference-type="ref" reference="sec:vermas"}, we construct the different Verma modules mentioned previously in this introduction, and we study the morphisms between them in Section [7](#sec:morphisms){reference-type="ref" reference="sec:morphisms"}. Finally, we prove our main results in Section [8](#sec:linkage){reference-type="ref" reference="sec:linkage"}. ## Acknowledgments I am very grateful to Nicolas Andruskiewitsch, for motivating me to study the representations of Hopf algebras since my PhD, to Ivan Angiono, for patiently answering all my questions about Nichols algebras of diagonal type, and to Simon Riche, for teaching me so much about representation theory. I also thank them for the fruitful discussions. # Conventions {#subsection:conventions} Throughout our work $\Bbbk$ denotes an algebraically closed field of characteristic zero. For $q\in\Bbbk^\times$ and $n\in\mathbb{N}$, we recall the quantum numbers $$\begin{aligned} (n)_q =\sum_{j=0}^{n-1}q^{j} \quad\mbox{and the identity}\quad (n)_{q}=q^{n-1} (n)_{q^{-1}}.\end{aligned}$$ Let $\theta \in\mathbb{N}$. We set $\mathbb{I}=\mathbb{I}_\theta=\{1, 2,\dots,\theta\}$. We denote $\Pi=\{\alpha_{1}, \dots ,\alpha_{\theta}\}$ the canonical $\mathbb Z$-basis of $\mathbb Z^\mathbb{I}$. We will write $0=0\alpha_1+\cdots+0\alpha_\theta$. We will use $\Pi$ to identify the matrices of size $\theta\times\theta$ and the bicharacters on $\mathbb Z^\mathbb{I}$ with values in $\Bbbk^\times$. 
Explicitly, given $\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^\times)^{\mathbb{I}\times\mathbb{I}}$, by abuse of notation, we will denote $\mathfrak{q}:\mathbb Z^\mathbb{I}\times\mathbb Z^\mathbb{I}\longrightarrow\Bbbk^\times$ the bicharacter defined by $$\begin{aligned} \mathfrak{q}(\alpha_i,\alpha_j)=q_{ij}\quad\forall\,i,j\in\mathbb{I}.\end{aligned}$$ Let $\beta\in\mathbb Z^\mathbb{I}$. We will write $q_\beta=\mathfrak{q}(\beta,\beta)$. In particular, $q_{\alpha_i}=q_{ii}$ for all $i\in\mathbb{I}$. The bound function [@HY $(2.12)$] is $$\begin{aligned} \label{eq: b} b^{\mathfrak{q}}(\beta)= \begin{cases} \min\{m\in\mathbb{N}\mid (m)_{q_\beta}=0\}&\mbox{if $(m)_{q_\beta}=0$ for some $m\in\mathbb{N}$},\\ \infty&\mbox{otherwise}. \end{cases}\end{aligned}$$ Of course, $b^{\mathfrak{q}}(\beta)$ is finite if and only if $q_\beta$ is a primitive root of $1$ of order $b^{\mathfrak{q}}(\beta)$. We will consider the dual action of $\operatorname{Aut}_{\mathbb Z}(\mathbb Z^\mathbb{I})$ on bicharacters. That is, if $w\in\operatorname{Aut}_\mathbb Z(\mathbb Z^\mathbb{I})$, then the bicharacter $w^*{\mathfrak{q}}:\mathbb Z^\mathbb{I}\times\mathbb Z^\mathbb{I}\longrightarrow\Bbbk^\times$ is defined by $$\begin{aligned} w^*{\mathfrak{q}}(\alpha,\beta)={\mathfrak{q}}(w^{-1}(\alpha),w^{-1}(\beta))\quad\forall \alpha,\beta\in\mathbb Z^\mathbb{I}.\end{aligned}$$ The partial order $\leq^w$ on $\mathbb Z^\mathbb{I}$ is defined by $\lambda\leq^w\mu$ if and only if $w^{-1}(\mu-\lambda)\in\mathbb Z_{\geq0}^\mathbb{I}$. When $w= {\mathrm{id}}$, we simply write $\leq$. Let $M=\oplus_{\nu\in\mathbb Z^\mathbb{I}} M_{\nu}$ be a $\mathbb Z^\mathbb{I}$-graded module over a ring $\mathbf{A}$. We call the homogeneous component $M_\nu$ a *weight space*. The support $\sup(M)$ is the set of all $\nu\in\mathbb Z^\mathbb{I}$ such that $M_\nu\neq0$.
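The quantum numbers and the bound function [\[eq: b\]](#eq: b){reference-type="eqref" reference="eq: b"} admit a quick numerical sanity check. In the sketch below roots of unity are represented in floating point, and the cutoff `max_m` stands in for "no such $m$ exists"; both are implementation conveniences, not part of the definitions.

```python
import cmath

def qnum(n, q):
    # quantum number (n)_q = 1 + q + ... + q^{n-1}
    return sum(q ** j for j in range(n))

def bound(q_beta, max_m=100, tol=1e-8):
    # b(beta) = min{ m in N : (m)_{q_beta} = 0 }, or None if none found
    for m in range(1, max_m + 1):
        if abs(qnum(m, q_beta)) < tol:
            return m
    return None

q = cmath.exp(2j * cmath.pi / 7)  # a primitive 7th root of unity

# the identity (n)_q = q^{n-1} (n)_{q^{-1}}
for n in range(1, 10):
    assert abs(qnum(n, q) - q ** (n - 1) * qnum(n, 1 / q)) < 1e-8

# b is finite exactly when q_beta is a root of unity, and then equals its order
assert bound(q) == 7
assert bound(1.5) is None
```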
If $w\in\operatorname{Aut}_\mathbb Z(\mathbb Z^\mathbb{I})$, the $w$*-twisted* $M[w]$ is the $\mathbb Z^\mathbb{I}$-graded $\mathbf{A}$-module whose weight spaces are $M[w]_\nu=M_{w^{-1}\nu}$ for all $\nu\in\mathbb Z^\mathbb{I}$. This defines an endofunctor of the category of $\mathbb Z^\mathbb{I}$-graded $\mathbf{A}$-modules. If the weight spaces are $\mathbf{A}$-free, the formal character of $M$ is $$\begin{aligned} {\mathrm{ch}} M=\sum_{\mu\in\mathbb Z^\mathbb{I}} {\mathrm{rank}} _\mathbf{A}(M_\mu)\,e^\mu\end{aligned}$$ which belongs to the $\mathbb Z$-algebra generated by the symbols $e^\mu$ with multiplication $e^\mu\cdot e^\nu=e^{\mu+\nu}$. We denote $\overline{(-)}$ and $w(-)$ the automorphisms of it given by $\overline{e^\mu}=e^{-\mu}$ and $w(e^\mu)=e^{w\mu}$ for all $\mu\in\mathbb Z^\mathbb{I}$ and $w\in\operatorname{Aut}_{\mathbb Z}(\mathbb Z^\mathbb{I})$. Therefore $$\begin{aligned} \label{eq:w ch} {\mathrm{ch}} (M[w])=w( {\mathrm{ch}} \, M).\end{aligned}$$ # Finite-dimensional Nichols algebras of diagonal type {#sec:nichols} The finite-dimensional Nichols algebras of diagonal type were classified by Heckenberger in [@Hroot][^3]. They are parameterized by matrices of scalars. Their classification and structure are governed by certain Lie type objects, such as PBW basis, the generalized Cartan matrix, the Weyl groupoid and the generalized root system, among others. We recall here their main features needed for our work. We refer to [@AA-diag-survey] for an overview of the theory. We fix $\theta\in\mathbb{N}$ and set $\mathbb{I}=\mathbb{I}_\theta$. Throughout this section $\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^\times)^{\mathbb{I}\times\mathbb{I}}$ denotes a matrix with finite-dimensional Nichols algebra $\mathfrak{B}_\mathfrak{q}$.
This is constructed as a quotient of the free $\Bbbk$-algebra generated by $E_1, ..., E_\theta$, that is $$\begin{aligned} \mathfrak{B}_\mathfrak{q}=\Bbbk\langle E_1, ..., E_\theta\rangle/{\mathcal{J}}_\mathfrak{q}\end{aligned}$$ for a certain ideal ${\mathcal{J}}_\mathfrak{q}$. It is a $\mathbb Z^\mathbb{I}$-graded algebra with $$\begin{aligned} \deg E_i=\alpha_i\quad\forall i\in\mathbb{I}\end{aligned}$$ and $\mathbb Z$-graded with $\deg E_i=1$ for all $i\in\mathbb{I}$. Moreover, $\mathfrak{B}_\mathfrak{q}$ is a braided Hopf algebra and a minimal set of generators of ${\mathcal{J}}_\mathfrak{q}$ is known [@Ang-reine; @Ang-presentation] but we do not need more details. **Example 3**. Let $\mathfrak{g}$ be a finite-dimensional semisimple Lie algebra over $\mathbb{C}$ with Cartan matrix $C=(c_{ij})_{i,j\in\mathbb{I}}$ and $D=(d_i)_{i\in\mathbb{I}}$ such that $DC$ is symmetric. Let $q$ be a primitive root of unity and $\mathfrak{q}=(q^{d_ic_{ij}})_{i,j\in\mathbb{I}}$. Then $\mathfrak{B}_\mathfrak{q}$ is finite-dimensional. Moreover, if $\mathop{\mathrm{ {\mathrm{ord}} }}q$ is an odd prime, not $3$ if $\mathfrak{g}$ has a component of type $G_2$, then $\mathfrak{B}_\mathfrak{q}$ is the positive part of the small quantum group $u_q(\mathfrak{g})$, one of the algebras analyzed by Andersen--Jantzen--Soergel [@AJS Section 1.3. Case 2]. **Example 4**. Let $q\in\Bbbk$ be a primitive root of unity of order $N>2$ and $$\begin{aligned} \mathfrak{q}=\begin{pmatrix} -1 & -q \\ -1 & -1 \end{pmatrix}.\end{aligned}$$ Set $E_{12}=\operatorname{ad}_cE_1(E_2)=(E_1E_2-q_{12}E_2E_1)$. Then $$\begin{aligned} \mathfrak{B}_\mathfrak{q}=\langle E_1,E_2\mid E_1^2=E_2^2=E_{12}^N=0\rangle.\end{aligned}$$ This is a Nichols algebra of super type $A(1|1)$.
This term refers to the root system which will be introduced in §[3.3](#subsec:root system){reference-type="ref" reference="subsec:root system"}; see Example [Example 7](#ex:the example root system){reference-type="ref" reference="ex:the example root system"}. Throughout this section we will use this example to illustrate the different concepts. See also [@AA-diag-survey §5.1.11]. ## PBW basis In [@karchencko], Kharchenko proves the existence of homogeneous elements $E_{\beta_1}, ..., E_{\beta_n}\in\mathfrak{B}_\mathfrak{q}$ with $\deg E_{\beta_\nu}=\beta_\nu\in\mathbb Z_{\geq0}^\mathbb{I}$ and $b^\mathfrak{q}(\beta_\nu)<\infty$, recall [\[eq: b\]](#eq: b){reference-type="eqref" reference="eq: b"}, such that $$\begin{aligned} \left\{ E_{\beta_{1}}^{m_1}\cdots E_{\beta_{n}}^{m_n}\mid 0\leq m_\nu<b^{\mathfrak{q}}(\beta_\nu),\,1\leq \nu\leq n\right\}\end{aligned}$$ is a linear basis of $\mathfrak{B}_\mathfrak{q}$. We will see in the next section a way of constructing these elements $E_{\beta_\nu}$. The set of *positive roots* of $\mathfrak{q}$ is $$\begin{aligned} \label{eq:positive roots} \Delta_+^\mathfrak{q}=\{\beta_1, ..., \beta_n\}.\end{aligned}$$ It turns out that the elements of $\Delta_+^\mathfrak{q}$ are pairwise different [@CH Proposition 2.12]. The set of (positive) *simple roots* is $\Pi^\mathfrak{q}=\{\alpha_1, ..., \alpha_\theta\}$. We set $$\begin{aligned} \label{eq:beta top} \beta_{top}^\mathfrak{q}=\sum_{\beta\in \Delta^\mathfrak{q}_+}(b^\mathfrak{q}(\beta)-1)\beta.\end{aligned}$$ This is the weight of the homogeneous component of maximum $\mathbb Z^\mathbb{I}$-degree of $\mathfrak{B}_\mathfrak{q}$. **Example 5**. 
The positive roots of $\mathfrak{q}$ in Example [Example 4](#ex:the example){reference-type="ref" reference="ex:the example"} are $$\begin{aligned} \Delta_+^\mathfrak{q}=\{\alpha_1,\alpha_1+\alpha_2,\alpha_2\}\end{aligned}$$ with $$\begin{aligned} b^\mathfrak{q}(\alpha_1)=2,\quad b^\mathfrak{q}(\alpha_1+\alpha_2)=N,\quad b^\mathfrak{q}(\alpha_2)=2.\end{aligned}$$ The associated PBW basis is $$\begin{aligned} \{E_1^{m_1}E_{12}^{m_2}E_2^{m_3}\mid 0\leq m_1,m_3<2,\,0\leq m_2<N\}.\end{aligned}$$ ## Weyl groupoid For distinct $i,j\in\mathbb{I}$, by [@Roso] there exists $m\in\mathbb{N}_0$ such that $$\begin{aligned} (m+1)_{q_{ii}}(q_{ii}^mq_{ij}q_{ji}-1)=0.\end{aligned}$$ Thus, it is possible to define the *generalized Cartan matrix* $C^\mathfrak{q}=(c_{ij}^\mathfrak{q})_{i,j\in\mathbb{I}}$ of $\mathfrak{q}$ as $$\begin{aligned} \label{eq:C chi} c_{ij}^\mathfrak{q}=\begin{cases} 2,&\mbox{if $i=j$};\\ -\min\{m\in\mathbb{N}_0\mid(m+1)_{q_{ii}}(q_{ii}^mq_{ij}q_{ji}-1)=0\},&\mbox{otherwise.} \end{cases}\end{aligned}$$ This matrix leads to the reflection $\sigma_i^{\mathfrak{q}}\in\operatorname{Aut}_\mathbb Z(\mathbb Z^\mathbb{I})$, $i\in\mathbb{I}$, given by $$\begin{aligned} \label{eq:sigma chi} \sigma_i^{\mathfrak{q}}(\alpha_j)=\alpha_j-c_{ij}^\mathfrak{q}\alpha_i\quad\forall\,j\in\mathbb{I}.\end{aligned}$$ It turns out that the Nichols algebra of $(\sigma_i^\mathfrak{q})^*\mathfrak{q}$ is also finite-dimensional for all $i\in\mathbb{I}$. However, $\mathfrak{B}_{(\sigma_i^\mathfrak{q})^*\mathfrak{q}}$ is not necessarily isomorphic to $\mathfrak{B}_\mathfrak{q}$. Let ${\mathcal{H}}$ be the family of matrices with finite-dimensional Nichols algebras and $r_i:{\mathcal{H}}\longrightarrow{\mathcal{H}}$ the bijection given by $$\begin{aligned} \label{eq:r i} r_i(\mathfrak{q})=(\sigma_i^{{\mathfrak{q}}})^*{\mathfrak{q}}.\end{aligned}$$ for all $\mathfrak{q}\in{\mathcal{H}}$ and $i\in\mathbb{I}$. 
For instance, $r_p(\mathfrak{q})(\alpha_i,\alpha_j)=q_{ij}q_{ip}^{-c_{pj}^{\mathfrak{q}}}q_{pj}^{-c_{pi}^{\mathfrak{q}}}q_{pp}^{c_{pi}^{\mathfrak{q}}c_{pj}^{\mathfrak{q}}}$ for all $p,i,j\in\mathbb{I}$ and hence $$\begin{aligned} \label{eq: C invariante} c_{ij}^{r_i(\mathfrak{q})}=c_{ij}^\mathfrak{q}.\end{aligned}$$ It is immediate that $\sigma_i^{r_i(\mathfrak{q})}=\sigma_i^{\mathfrak{q}}$ and $r_i^2(\mathfrak{q})=\mathfrak{q}$. We notice that $$\begin{aligned} r_{i_k}(\cdots (r_{i_1}(\mathfrak{q})))=\left(\sigma_{i_k}^{r_{i_{k-1}}\cdots r_{i_{1}}(\mathfrak{q})}\cdots \sigma_{i_2}^{r_{i_1}(\mathfrak{q})}\sigma_{i_1}^\mathfrak{q}\right)^*\mathfrak{q}.\end{aligned}$$ We denote $\mathcal{G}=\langle r_i\mid i\in\mathbb{I}\rangle$ which is a subgroup of the group of bijections on ${\mathcal{H}}$. Let $\mathcal{X}$ be a $\mathcal{G}$-orbit. The *Weyl groupoid* $\mathcal{W}$ of $\mathcal{X}$ is the category whose objects are the matrices belonging to $\mathcal{X}$, and whose morphisms are generated by the arrows $\sigma_i^{{\mathfrak{q}}}:{\mathfrak{q}}\rightarrow r_i({\mathfrak{q}})$ for all $\mathfrak{q}\in\mathcal{X}$ and $i\in\mathbb{I}$. We denote $1^{\mathfrak{q}}$ the identity of $\mathfrak{q}$ and set $\mathcal{W}_\mathfrak{q}=\operatorname{Hom}_{\mathcal{W}}(\mathfrak{q},\mathfrak{q})$. Thus, a morphism in $\mathcal{W}$ is of the form $$\begin{aligned} \sigma_{i_k}^{r_{i_{k-1}}\cdots r_{i_{1}}(\mathfrak{q})}\cdots \sigma_{i_2}^{r_{i_1}(\mathfrak{q})}\sigma_{i_1}^\mathfrak{q}&:\mathfrak{q}\longrightarrow r_{i_k}(\cdots (r_{i_1}(\mathfrak{q}))).&\end{aligned}$$ To shorten notation, we observe that a morphism in $\mathcal{W}$ is determined either by specifying the source, $\sigma_{i_k}\cdots\sigma_{i_1}^{\mathfrak{q}}:\mathfrak{q}\longrightarrow r_{i_k}\cdots r_{i_1}(\mathfrak{q})$, or by specifying the target, $1^{\mathfrak{q}}\sigma_{i_k}\cdots\sigma_{i_1}:r_{i_1}\cdots r_{i_k}(\mathfrak{q})\longrightarrow\mathfrak{q}$. 
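Since the generalized Cartan matrix and the reflected matrices $r_p(\mathfrak{q})$ are defined by explicit scalar equations, they can be experimented with numerically. The following Python sketch (the helper names `qnum`, `cartan` and `reflect` are ours, and exact root-of-unity arithmetic is replaced by floating-point comparisons against a small tolerance) implements the defining condition for $c^\mathfrak{q}_{ij}$ and the displayed formula for $r_p(\mathfrak{q})$, on the matrix of Example 4 with $N=5$.

```python
import cmath

def qnum(q, n):
    # quantum integer (n)_q = 1 + q + ... + q^(n-1)
    return sum(q ** k for k in range(n))

def cartan(Q, tol=1e-9):
    # generalized Cartan matrix: c_ij = -min{m >= 0 : (m+1)_{q_ii} (q_ii^m q_ij q_ji - 1) = 0}
    th = len(Q)
    C = [[2] * th for _ in range(th)]
    for i in range(th):
        for j in range(th):
            if i != j:
                m = 0
                while abs(qnum(Q[i][i], m + 1) * (Q[i][i] ** m * Q[i][j] * Q[j][i] - 1)) > tol:
                    m += 1
                C[i][j] = -m
    return C

def reflect(Q, p):
    # r_p(q)(alpha_i, alpha_j) = q_ij q_ip^(-c_pj) q_pj^(-c_pi) q_pp^(c_pi c_pj)
    C = cartan(Q)
    th = len(Q)
    return [[Q[i][j] * Q[i][p] ** (-C[p][j]) * Q[p][j] ** (-C[p][i])
             * Q[p][p] ** (C[p][i] * C[p][j])
             for j in range(th)] for i in range(th)]

# the rank-two matrix of Example 4 with N = 5
q0 = cmath.exp(2j * cmath.pi / 5)
Q = [[-1, -q0], [-1, -1]]
```

For this `Q` one finds the Cartan matrix $\left(\begin{smallmatrix}2&-1\\-1&2\end{smallmatrix}\right)$, and applying `reflect` twice at the same index recovers `Q`, in line with $r_i^2(\mathfrak{q})=\mathfrak{q}$.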
For $w=\sigma_{i_k}\cdots\sigma_{i_1}^{\mathfrak{q}}$, we will write $$\begin{aligned} w^{*}\mathfrak{q}=r_{i_k}(\cdots (r_{i_1}(\mathfrak{q})))\quad\mbox{and}\quad w^{-*}\mathfrak{q}=(w^{-1})^*\mathfrak{q}.\end{aligned}$$ We set ${}^\mathfrak{q}\mathcal{W}$, resp. $\mathcal{W}^\mathfrak{q}$, the family of morphisms in $\mathcal{W}$ whose target, resp. source, is $\mathfrak{q}$. Clearly, $(\sigma_p^{\mathfrak{q}})^*\mathfrak{q}(\sigma_p^{\mathfrak{q}}(\alpha),\sigma_p^{\mathfrak{q}}(\alpha))=\mathfrak{q}(\alpha,\alpha)$. Hence, for all $w\in\mathcal{W}^\mathfrak{q}$ and $\alpha\in\mathbb Z^\mathbb{I}$, it holds $$\begin{aligned} \label{eq: b invariante} b^{w^*\mathfrak{q}}(w\alpha)=b^{\mathfrak{q}}(\alpha). \end{aligned}$$ By [@HY08 Theorem 1], the defining relations of the Weyl groupoid are of Coxeter type: $$\begin{aligned} (\sigma_i\sigma_i)1^{\mathfrak{q}}=1^{\mathfrak{q}}\quad\mbox{and}\quad(\sigma_i\sigma_j)^{m_{ij}^{\mathfrak{q}}}1^{\mathfrak{q}}=1^{\mathfrak{q}}\quad\forall \mathfrak{q}\in\mathcal{X},\,\forall i,j\in\mathbb{I},\,i\neq j;\end{aligned}$$ for certain exponents $m_{ij}^{\mathfrak{q}}$. Similar to Coxeter groups, there exists a length function $\ell:\mathcal{W}\longrightarrow\mathbb{N}_0$ given by $$\begin{aligned} \ell(w)=\min\bigl\{k\in\mathbb{N}_0\mid \exists i_1, \dots, i_k\in\mathbb{I},\mathfrak{q}\in\mathcal{X}:w=\sigma_{i_k}\cdots\sigma_{i_1}^{\mathfrak{q}}\bigr\} \end{aligned}$$ for $w\in\mathcal{W}$. Also, for each $\mathfrak{q}\in\mathcal{X}$, there exists a unique morphism $w_0\in{}^\mathfrak{q}\mathcal{W}$ of maximal length. Let $w_0=1^\mathfrak{q}\sigma_{i_1}\cdots\sigma_{i_n}$ be a reduced expression of the longest element in ${}^\mathfrak{q}\mathcal{W}$. 
The set of positive roots can be constructed as follows $$\begin{aligned} \label{eq:roots are conjugate to simple} \Delta_+^\mathfrak{q}=\biggl\{\beta_\nu=1^\mathfrak{q}\sigma_{i_1}\cdots\sigma_{i_{\nu-1}}(\alpha_{i_\nu})\mid1\leq\nu\leq n\biggr\},\end{aligned}$$ see for instance [@CH Proposition 2.12]. **Example 6**. Let $\mathfrak{q}$ be as in Example [Example 4](#ex:the example){reference-type="ref" reference="ex:the example"}. We set $$\begin{aligned} \mathfrak{p}= \left( \begin{array}{cc} -1 & q^{-1} \\ 1 & q \end{array} \right) \quad\mbox{and}\quad \mathfrak{r}= \left( \begin{array}{cc} q & q^{-1} \\ 1 & -1 \end{array} \right) . \end{aligned}$$ Then $\mathcal{X}=\{\mathfrak{q},\mathfrak{p},\mathfrak{r},\mathfrak{q}^t,\mathfrak{p}^t,\mathfrak{r}^t\}$ is a $\mathcal{G}$-orbit. Indeed, their generalized Cartan matrices are all equal to $$\begin{aligned} \left( \begin{array}{rr} 2 & -1 \\ -1 & 2 \end{array} \right)\end{aligned}$$ and the Weyl groupoid of $\mathcal{X}$ has these six matrices as objects. The corresponding Nichols algebras are not necessarily isomorphic. For instance, $$\begin{aligned} \mathfrak{B}_\mathfrak{p}&=\langle E_1,E_2\mid E_1^2=E_2^N=E_{221}^2=0\rangle\quad\mbox{with}\quad E_{221}=(\operatorname{ad}_cE_{2})^2(E_1),\\ \mathfrak{B}_\mathfrak{r}&=\langle E_1,E_2\mid E_1^N=E_2^2=E_{112}^2=0\rangle\quad\mbox{with}\quad E_{112}=(\operatorname{ad}_cE_{1})^2(E_2).\end{aligned}$$ ## Root systems {#subsec:root system} The bundle ${\mathcal{R}}=\{\Delta^\mathfrak{q}\}_{\mathfrak{q}\in\mathcal{X}}$ with $$\begin{aligned} \Delta^{\mathfrak{q}}=\Delta^{\mathfrak{q}}_+\cup-\Delta^{\mathfrak{q}}_+ \end{aligned}$$ is the so-called *(generalized) root system of $\mathcal{X}$* (or of $\mathfrak{q}\in\mathcal{X}$). We highlight that, unlike classical root systems, $\Delta^\mathfrak{q}$ and $\Delta^{\mathfrak{p}}$ are not necessarily equal for distinct $\mathfrak{q},\mathfrak{p}\in\mathcal{X}$. 
When it is needed, we will write $\beta^\mathfrak{q}$ to emphasize that $\beta\in\mathbb Z^\mathbb{I}$ belongs to $\Delta^\mathfrak{q}$. However, they share other characteristics with classical root systems which will be useful, cf. [@CH; @HY]. In particular, for all $i\in\mathbb{I}$, it holds that $$\begin{aligned} \label{eq:sigma Delta = Delta} \sigma_i^{\mathfrak{q}}(\Delta_+^{\mathfrak{q}}\setminus\{\alpha_i\})=\Delta_+^{r_i(\mathfrak{q})}\setminus\{\alpha_i\},\quad\sigma_i^{\mathfrak{q}}(\alpha_i)=-\alpha_i \quad\mbox{and}\quad w(\Delta^{w^{-*}\mathfrak{q}})=\Delta^{\mathfrak{q}}.\end{aligned}$$ Also, $\ell(w)=\left|w(\Delta_+^{\mathfrak{p}})\cap-\Delta_+^{\mathfrak{q}}\right|$ for any $w:\mathfrak{p}\rightarrow\mathfrak{q}$ [@HY08 Lemma 8 $(iii)$]. This implies that $$\begin{aligned} w_0(\beta_{top}^{w_0^{-*}\mathfrak{q}})=-\beta_{top}^\mathfrak{q}. \end{aligned}$$ **Example 7**. The generalized root system of $\mathfrak{q}$ from Example [Example 4](#ex:the example){reference-type="ref" reference="ex:the example"} is constant: $\Delta^\mathfrak{z}=\{\pm\alpha_1,\pm(\alpha_1+\alpha_2),\pm\alpha_2\}$ for all $\mathfrak{z}\in\mathcal{X}$. We have introduced here generalized root systems for Nichols algebras. However, this is a combinatorial object appearing in other contexts. According to [@AA-diag-survey Theorem 2.34], contragredient Lie superalgebras have associated generalized root systems. For instance, the one associated to $\mathfrak{sl}(1|1)$ equals ${\mathcal{R}}$. ## A shift operation on weights Recall the element $\beta_{top}^\mathfrak{q}$ in [\[eq:beta top\]](#eq:beta top){reference-type="eqref" reference="eq:beta top"}. We set $$\begin{aligned} \label{eq:varrho} \varrho^\mathfrak{q}=\frac{1}{2}\beta^\mathfrak{q}_{top}.\end{aligned}$$ This element will play the role of the semi-sum of the positive roots in Lie theory. The following definition is analogous to [@AJS 4.7$(4)$]. **Definition 8**. 
For $\mu\in\mathbb Z^\mathbb{I}$ and $w:w^{-*}\mathfrak{q}\rightarrow\mathfrak{q}$ in $\mathcal{W}$, we set $$\begin{aligned} \mu\langle w\rangle=\mu+ w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^\mathfrak{q}\in\mathbb Z^\mathbb{I}.\end{aligned}$$ We observe that $$\begin{aligned} \label{eq:w varrho - varrho} w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^\mathfrak{q}=-\sum_{\beta\in \Delta_+^\mathfrak{q}\,:\,w^{-1}\beta\in \Delta_-^{w^{-*}\mathfrak{q}}}(b^\mathfrak{q}(\beta)-1)\beta\end{aligned}$$ since $w(\Delta^{w^{-*}\mathfrak{q}})=\Delta^{\mathfrak{q}}$ and $b^{w^{-*}\mathfrak{q}}(w^{-1}\beta)=b^{\mathfrak{q}}(\beta)$. For instance, $\mu\langle w_0\rangle=\mu-\beta^\mathfrak{q}_{top}$. This operation satisfies that $$\begin{aligned} \label{eq:mu w s} \mu\langle w\sigma_i\rangle=\mu\langle w\rangle-(b^\mathfrak{q}(w\alpha_{i})-1)w\alpha_{i}\end{aligned}$$ where $w\sigma_i:\sigma_i^*w^{-*}\mathfrak{q}\rightarrow\mathfrak{q}$ and $i\in\mathbb{I}$. Indeed, $$\begin{aligned} w\sigma_i\left(\varrho^{\sigma_i^*w^{-*}\mathfrak{q}}\right)-\varrho^\mathfrak{q}&=w\left(\sigma_i(\varrho^{\sigma_i^*w^{-*}\mathfrak{q}})-\varrho^{w^{-*}\mathfrak{q}}+\varrho^{w^{-*}\mathfrak{q}}\right)-\varrho^\mathfrak{q}\\ &=w\left(-(b^{w^{-*}\mathfrak{q}}(\alpha_i)-1)\alpha_i+\varrho^{w^{-*}\mathfrak{q}}\right)-\varrho^\mathfrak{q}\\ &=w\left(\varrho^{w^{-*}\mathfrak{q}}\right)-\varrho^\mathfrak{q}-\left(b^{w^{-*}\mathfrak{q}}(\alpha_i)-1\right)w\alpha_i.\end{aligned}$$ **Example 9**. Even if the generalized root system is constant, the corresponding elements $\varrho^\mathfrak{q}$ could be different. 
For instance, $$\begin{aligned} \varrho^\mathfrak{q}=\frac{1}{2}\left(\alpha_1+(N-1)(\alpha_1+\alpha_2)+\alpha_2\right)\\ \varrho^\mathfrak{p}=\frac{1}{2}\left(\alpha_1+(\alpha_1+\alpha_2)+(N-1)\alpha_2\right)\\ \varrho^\mathfrak{r}=\frac{1}{2}\left((N-1)\alpha_1+(\alpha_1+\alpha_2)+\alpha_2\right)\end{aligned}$$ where $\mathfrak{q}$, $\mathfrak{p}$ and $\mathfrak{r}$ are as in Example [Example 6](#ex:the example groupoid){reference-type="ref" reference="ex:the example groupoid"}. # Small quantum groups {#sec:Drinfeld} In this section, we introduce the Hopf algebras in which we are interested. We will follow [@Hlusztigiso]. We fix $\theta\in\mathbb{N}$ and a matrix $\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^\times)^{\mathbb{I}\times\mathbb{I}}$ with finite-dimensional Nichols algebra $\mathfrak{B}_\mathfrak{q}$, where $\mathbb{I}=\mathbb{I}_\theta$. Let $\mathcal{X}$ be the $\mathcal{G}$-orbit of $\mathfrak{q}$ and $\mathcal{W}$ its Weyl groupoid. We denote $U_\mathfrak{q}$ the Hopf algebra generated by the symbols $K_i$, $K_i^{-1}$, $L_i$, $L_i^{-1}$, $E_i$ and $F_i$, with $i\in\mathbb{I}$, subject to the relations: $$\begin{aligned} K_iE_j=q_{ij}E_jK_i,\quad& L_iE_j=q_{ji}^{-1}E_jL_i\\ K_iF_j=q_{ij}^{-1}F_jK_i,\quad& L_iF_j=q_{ji}F_jL_i\end{aligned}$$ $$\begin{aligned} \label{eq: EF FE} E_iF_j-F_jE_i&=\delta_{i,j}(K_i-L_i)\\ \notag XY&=YX,\\ \notag K_iK_i^{-1}&=L_iL_i^{-1}=1\end{aligned}$$ for all $i,j\in\mathbb{I}$ and $X,Y\in\{K_i^{\pm1},L_i^{\pm1}\mid i\in\mathbb{I}\}$. Also, the generators $E_i$ (resp. $F_i$) are subject to the relations given by ${\mathcal{J}}_\mathfrak{q}$ (resp. $\tau({\mathcal{J}}_\mathfrak{q})$, cf. [\[eq:tau\]](#eq:tau){reference-type="eqref" reference="eq:tau"} below). However, we will need neither these last relations, nor the counit, the comultiplication or the antipode of $U_\mathfrak{q}$. 
We have that $U_\mathfrak{q}=\oplus_{\mu\in\mathbb Z^\mathbb{I}} \,(U_\mathfrak{q})_\mu$ is a $\mathbb Z^\mathbb{I}$-graded Hopf algebra with $$\begin{aligned} \deg E_i=-\deg F_i=\alpha_i\quad\mbox{and}\quad \deg K_i^{\pm1}=\deg L_i^{\pm1}=0\quad\forall\,i\in\mathbb{I}.\end{aligned}$$ For $\alpha=n_1\alpha_1+\cdots+n_\theta\alpha_\theta\in\mathbb Z^\mathbb{I}$, we set $$\begin{aligned} K_\alpha=K_1^{n_1}\cdots K_\theta^{n_\theta}\quad\mbox{and}\quad L_\alpha=L_1^{n_1}\cdots L_\theta^{n_\theta}.\end{aligned}$$ In particular, $K_{\alpha_i}=K_i$ for $i\in\mathbb{I}$. Given $\mu\in\mathbb Z^\mathbb{I}$, we define $\mathfrak{q}_\mu\in\operatorname{Alg}(U_\mathfrak{q}^0,\Bbbk)$ and $\widetilde{\mu}\in\operatorname{Aut}_{\Bbbk-alg}(U_\mathfrak{q}^0)$ by $$\begin{aligned} \label{eq:tilde mu} \mathfrak{q}_\mu(K_\alpha L_\beta)=\frac{\mathfrak{q}(\alpha,\mu)}{\mathfrak{q}(\mu,\beta)} \quad\mbox{and}\quad \widetilde{\mu}(K_\alpha L_\beta)=\mathfrak{q}_\mu(K_\alpha L_\beta)\,K_\alpha L_\beta\end{aligned}$$ for all $\alpha,\beta\in\mathbb Z^\mathbb{I}$. Then, it follows easily that $$\begin{aligned} s\,u=u\,\widetilde\mu(s)\quad\forall u\in (U_\mathfrak{q})_\mu,\,s\in U_\mathfrak{q}^0\end{aligned}$$ and $$\begin{aligned} \label{eq:composition tilde} \widetilde{\nu}\circ\widetilde{\mu}=\widetilde{\nu+\mu}\quad\forall\nu,\mu\in\mathbb Z^\mathbb{I}.\end{aligned}$$ The multiplication of $U_\mathfrak{q}$ induces a linear isomorphism $$\begin{aligned} U_\mathfrak{q}^-\otimes U_\mathfrak{q}^0\otimes U_\mathfrak{q}^+\longrightarrow U_\mathfrak{q}\end{aligned}$$ where $$\begin{aligned} U_\mathfrak{q}^{+}=\Bbbk\langle E_i\mid i\in\mathbb{I}\rangle\simeq\mathfrak{B}_\mathfrak{q},\quad U_\mathfrak{q}^{0}=\Bbbk\langle K_i,L_i\mid i\in\mathbb{I}\rangle, \quad U_\mathfrak{q}^{-}=\Bbbk\langle F_i\mid i\in\mathbb{I}\rangle\end{aligned}$$ are subalgebras of $U_\mathfrak{q}$. 
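The scalars $\mathfrak{q}_\mu$ and the rule $\widetilde{\nu}\circ\widetilde{\mu}=\widetilde{\nu+\mu}$ are concrete enough to be checked by machine. Here is a minimal Python sketch (the helper names `biform` and `q_mu` are ours) encoding the bicharacter $\mathfrak{q}(\alpha,\beta)=\prod_{i,j}q_{ij}^{a_ib_j}$ and the scalar $\mathfrak{q}_\mu(K_\alpha L_\beta)$, on the matrix of Example 4 with $N=5$.

```python
import cmath

def biform(Q, alpha, beta):
    # bicharacter q(alpha, beta) = prod_{i,j} q_ij^(a_i b_j) on Z^I x Z^I
    val = 1
    for i, a in enumerate(alpha):
        for j, b in enumerate(beta):
            val *= Q[i][j] ** (a * b)
    return val

def q_mu(Q, mu, alpha, beta):
    # scalar by which tilde(mu) rescales K_alpha L_beta: q(alpha, mu) / q(mu, beta)
    return biform(Q, alpha, mu) / biform(Q, mu, beta)

# the matrix of Example 4 with N = 5
q0 = cmath.exp(2j * cmath.pi / 5)
Q = [[-1, -q0], [-1, -1]]
```

Since `biform` is multiplicative in its second argument, the identity $\mathfrak{q}_\nu\,\mathfrak{q}_\mu=\mathfrak{q}_{\nu+\mu}$ on each $K_\alpha L_\beta$ holds on the nose, which is exactly the composition rule for the automorphisms $\widetilde\mu$.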
We notice that $U_\mathfrak{q}^0$ identifies with the group algebra $\Bbbk(\mathbb Z^\mathbb{I}\times\mathbb Z^\mathbb{I})$ for any matrix $\mathfrak{q}$. **Definition 10**. Let $p:U_\mathfrak{q}\longrightarrow u_\mathfrak{q}$ be a $\mathbb Z^\mathbb{I}$-graded algebra projection and set $u_\mathfrak{q}^{\pm,0}:=p(U_\mathfrak{q}^{\pm,0})$. We call $u_\mathfrak{q}$ a *small quantum group* if the multiplication $u_\mathfrak{q}^-\otimes u_\mathfrak{q}^0\otimes u_\mathfrak{q}^+\longrightarrow u_\mathfrak{q}$ induces a linear isomorphism and $u_\mathfrak{q}^\pm\simeq U_\mathfrak{q}^\pm$. We say they are "small" because $u_\mathfrak{q}^\pm$ is finite-dimensional, but we do not make any assumption on the size of $u_\mathfrak{q}^0$. In particular, $U_\mathfrak{q}$ itself is a small quantum group. **Example 11**. Let $\mathfrak{q}$ be as in Example [Example 3](#ex:small qg){reference-type="ref" reference="ex:small qg"}. Then $U_\mathfrak{q}/\langle K_i-L_i^{-1}, K_i^{2\mathop{\mathrm{ {\mathrm{ord}} }}q}-1\mid i\in\mathbb{I}\rangle$ is a small quantum group. Moreover, if $\mathop{\mathrm{ {\mathrm{ord}} }}q$ is an odd prime, different from $3$ if $\mathfrak{g}$ has a component of type $G_2$, then $u_q(\mathfrak{g})\simeq U_\mathfrak{q}/\langle K_i-L_i^{-1}, K_i^{2\mathop{\mathrm{ {\mathrm{ord}} }}q}-1\mid i\in\mathbb{I}\rangle$. **Example 12**. Let us explain how to construct the small quantum group $u_\mathfrak{q}$ of Figure [\[fig:uq\]](#fig:uq){reference-type="ref" reference="fig:uq"}. By [@Hlusztigiso Corollary 5.9], there is a skew Hopf pairing $\eta$ between $U^+_\mathfrak{q}\#\Bbbk\langle K_i\mid i\in\mathbb{I}\rangle$ and $U^-_\mathfrak{q}\#\Bbbk\langle L_i\mid i\in\mathbb{I}\rangle$, and the corresponding Drinfeld double is isomorphic to $U_\mathfrak{q}$. Let $\Gamma=\overline{\langle K_i\mid i\in\mathbb{I}\rangle}$ be a group quotient and set $g_i=\overline{K_i}$, $i\in\mathbb{I}$. 
Suppose the character $\chi_i:\Gamma\longrightarrow\Bbbk^\times$, $\chi_i(g_j)=q_{ij}$ for all $i,j\in\mathbb{I}$, is well-defined and set $\widetilde{\Gamma}=\langle \chi_i\mid i\in\mathbb{I}\rangle$. Then $\langle L_i\mid i\in\mathbb{I}\rangle\longrightarrow\widetilde{\Gamma}$, $L_i\mapsto\chi_i$, is a group quotient. Moreover, $\eta$ induces a pairing between $U^+_\mathfrak{q}\#\Bbbk\Gamma$ and $U^-_\mathfrak{q}\#\Bbbk\widetilde{\Gamma}$, and the corresponding Drinfeld double is $u_\mathfrak{q}$. **Example 13**. The braided Drinfeld doubles introduced in [@LS §3.2] are small quantum groups. ## Lusztig isomorphisms {#subsec:lusztig iso} In [@Hlusztigiso Theorem 6.11], Heckenberger constructs certain algebra isomorphisms $$\begin{aligned} \label{eq:Ti} T_i=T_i^{(\sigma_i^\mathfrak{q})^*\mathfrak{q}}:U_{(\sigma_i^\mathfrak{q})^*\mathfrak{q}}\longrightarrow U_\mathfrak{q}\end{aligned}$$ for all $i\in\mathbb{I}$. They emulate some properties of the Lusztig automorphisms of small quantum groups. We emphasize that these isomorphisms depend on the matrix defining the Drinfeld double of the domain, but we will omit this in the notation when no confusion can arise. We do not need the precise definition of these functions. We just recall some properties that will be useful for us. Let $w:w^{-*}\mathfrak{q}\rightarrow\mathfrak{q}$ be a morphism in $\mathcal{W}$. We choose a reduced expression $w=1^\mathfrak{q}\sigma_{i_k}\cdots\sigma_{i_1}$ and denote $$\begin{aligned} T_w=T_{i_k}\cdots T_{i_1}:U_{w^{-*}\mathfrak{q}}\longrightarrow U_\mathfrak{q}. \end{aligned}$$ This isomorphism depends on the chosen reduced expression for $w$. 
However, if $w=1^\mathfrak{q}\sigma_{j_k}\cdots\sigma_{j_1}$ is another reduced expression, there exists $\underline{a}=(a_i)_{i\in\mathbb{I}}\in(\Bbbk^{\times})^\mathbb{I}$ such that $$\begin{aligned} \label{eq:Tw different presentation} T_{i_k}\cdots T_{i_1}=T_{j_k}\cdots T_{j_1} \varphi_{\underline{a}}\end{aligned}$$ where $\varphi_{\underline{a}}$ is the algebra automorphism of $U_{w^{-*}\mathfrak{q}}$ given by $$\begin{aligned} %\label{eq:varphi a} \varphi_{\underline{a}}(K_i)=K_i,\quad \varphi_{\underline{a}}(L_i)=L_i,\quad \varphi_{\underline{a}}(E_i)=a_iE_i\quad\mbox{and}\quad \varphi_{\underline{a}}(F_i)=a_i^{-1}F_i\quad\forall i\in\mathbb{I}.\end{aligned}$$ Indeed, the two reduced expressions of $w$ can be transformed into each other using only the Coxeter-type relations by [@HY08 Theorem 5]. Then, [\[eq:Tw different presentation\]](#eq:Tw different presentation){reference-type="eqref" reference="eq:Tw different presentation"} follows using [@Hlusztigiso Theorem 6.19 and Proposition 6.8 $(ii)$]. These isomorphisms permute the weight spaces in the following way: $$\begin{aligned} \label{eq:Tw U alpha es U w alpha} T_w\left((U_{w^{-*}\mathfrak{q}})_{\alpha}\right)=(U_\mathfrak{q})_{w\alpha}\end{aligned}$$ for all $\alpha\in\mathbb Z^\mathbb{I}$, cf. [@HY Proposition 4.2]. They also have a good behavior on the middle subalgebras $U^0_{w^{-*}\mathfrak{q}}= U^0_\mathfrak{q}$. Explicitly, $$\begin{aligned} \label{eq:Tw restricted to U0} T_w(K_\alpha L_\beta)=K_{w\alpha}L_{w\beta}\end{aligned}$$ for all $\alpha,\beta\in\mathbb Z^\mathbb{I}$ because $T_i^{\pm1}(K_j)=K_{\sigma_i^\mathfrak{q}(\alpha_j)}$ and $T_i^{\pm1}(L_j)=L_{\sigma_i^\mathfrak{q}(\alpha_j)}$ by definition [@Hlusztigiso Lemma 6.6]. 
It follows that $$\begin{aligned} \label{eq: mu conjugado por Tw} T_w\circ\widetilde\mu\circ T_w^{-1}{}_{|U^0_\mathfrak{q}}=\widetilde{w(\mu)}\end{aligned}$$ for all $\mu\in\mathbb Z^\mathbb{I}$; keep in mind that $\widetilde{\mu}$ depends on $w^{-*}\mathfrak{q}$ and $\widetilde{w(\mu)}$ depends on $\mathfrak{q}$. For the longest element $w_0$, there is a permutation $f$ of $\mathbb{I}$ such that $w_0^{-*}\mathfrak{q}(\alpha_i,\alpha_j)=f^{*}\mathfrak{q}(\alpha_i,\alpha_j)=\mathfrak{q}(\alpha_{f(i)},\alpha_{f(j)})$. Also, there is $\underline{b}=(b_i)_{i\in\mathbb{I}}\in(\Bbbk^{\times})^\mathbb{I}$ such that $$\begin{aligned} \label{eq:T w0} T_{w_0}=\phi_1\,\varphi_f\,\varphi_{\underline{b}} \end{aligned}$$ by [@Hlusztigiso Corollary 6.21]; where $\phi_1$ is the algebra automorphism of $U_{\mathfrak{q}}$ given by $$\begin{aligned} %\label{eq:phi 1} \phi_{1}(K_i)=K_i^{-1}, \quad \phi_{1}(L_i)=L_i^{-1}, \quad \phi_{1}(E_i)=F_iL_i^{-1}\quad\mbox{and}\quad \phi_{1}(F_i)=K_i^{-1}E_i\end{aligned}$$ for all $i\in\mathbb{I}$ and $\varphi_f:U_\mathfrak{q}\longrightarrow U_{f^{*}\mathfrak{q}}$ is the algebra isomorphism given by $$\begin{aligned} \varphi_f(K_i)=K_{f(i)},\quad \varphi_f(L_i)=L_{f(i)},\quad \varphi_f(E_i)=E_{f(i)},\quad\mbox{and}\quad \varphi_f(F_i)=F_{f(i)}\end{aligned}$$ for all $i\in\mathbb{I}$. Let $\tau$ be the algebra antiautomorphism of $U_\mathfrak{q}$ defined by $$\begin{aligned} \label{eq:tau} \tau(K_i)=K_i,\quad \tau(L_i)=L_i,\quad \tau(E_i)=F_i\quad\mbox{and}\quad \tau(F_i)=E_i\end{aligned}$$ for all $i\in\mathbb{I}$, see [@Hlusztigiso Proposition 4.9 $(7)$]. Notice that $\tau^2= {\mathrm{id}}$. The generators of the PBW basis of the Nichols algebras can be constructed using the Lusztig isomorphisms as follows. 
Let $w_0=1^\mathfrak{q}\sigma_{i_1}\cdots\sigma_{i_n}$ be a reduced expression of the longest element in ${}^\mathfrak{q}\mathcal{W}$ and recall from [\[eq:roots are conjugate to simple\]](#eq:roots are conjugate to simple){reference-type="eqref" reference="eq:roots are conjugate to simple"} that $\beta_\nu=1^\mathfrak{q}\sigma_{i_1}\cdots\sigma_{i_{\nu-1}}(\alpha_{i_\nu})$, $1\leq\nu\leq n$, are the positive roots of $\mathfrak{q}$. We set $$\begin{aligned} E_{\beta_\nu}=T_{i_1}\cdots T_{i_{\nu-1}}(E_{i_\nu})\in (U^+_\mathfrak{q})_{\beta_\nu} \quad\mbox{and}\quad F_{\beta_\nu}=T_{i_1}\cdots T_{i_{\nu-1}}(F_{i_\nu})\in (U^-_\mathfrak{q})_{-\beta_\nu},\end{aligned}$$ for all $1\leq\nu\leq n$; by an abuse of notation $E_{i_\nu}$ and $F_{i_\nu}$ denote the generators of $U_{(\sigma_{i_{\nu-1}} \cdots \sigma_{i_1})^*\mathfrak{q}}$. These elements depend on the reduced expression of $w_0$. By [@HY Theorem 4.9], we know that $$\begin{aligned} \label{eq:PBW basis} \begin{split} \left\{ E_{\beta_{f(1)}}^{m_1}\cdots E_{\beta_{f(n)}}^{m_n}\right.&\left.\mid 0\leq m_\nu<b^\mathfrak{q}(\beta_\nu),\,1\leq\nu\leq n\right\}\quad\mbox{and}\\ &\left\{ F_{\beta_{f(1)}}^{m_1}\cdots F_{\beta_{f(n)}}^{m_n}\mid 0\leq m_\nu<b^\mathfrak{q}(\beta_\nu),\,1\leq\nu\leq n\right\} \end{split}\end{aligned}$$ are linear bases of $U_\mathfrak{q}^+$ and $U_\mathfrak{q}^-$ for any permutation $f$ of $\mathbb{I}$. It is immediate that $$\begin{aligned} \label{eq:ch U-} {\mathrm{ch}} \, U^-_\mathfrak{q}= \prod_{\beta\in \Delta_+^\mathfrak{q}}\frac{1-e^{-b^\mathfrak{q}(\beta)\beta}}{1-e^{-\beta}} =\prod_{\beta\in \Delta_+^\mathfrak{q}}\left(1+e^{-\beta}+\cdots+e^{(1-b^\mathfrak{q}(\beta))\beta}\right).\end{aligned}$$ We point out that the weight space of minimum degree of $U^-_\mathfrak{q}$ is one-dimensional and spanned by $$\begin{aligned} F^\mathfrak{q}_{top} =F_{\beta_1}^{b^\mathfrak{q}(\beta_1)-1}\cdots F_{\beta_n}^{b^\mathfrak{q}(\beta_n)-1}.\end{aligned}$$ **Example 14**. 
Keeping the notation of Example [Example 6](#ex:the example groupoid){reference-type="ref" reference="ex:the example groupoid"}, $T_1^\mathfrak{p}:U_\mathfrak{p}\rightarrow U_\mathfrak{q}$ is defined by $$\begin{aligned} T_1^\mathfrak{p}(E_1)&=F_1L_1^{-1},\quad T_1^\mathfrak{p}(E_2)=E_{12},\quad T_1^\mathfrak{p}(K_\alpha L_\beta)=K_{\sigma_1^\mathfrak{p}(\alpha)}L_{\sigma_1^\mathfrak{p}(\beta)},\\ T_1^\mathfrak{p}(F_1)&=K_1^{-1}E_1,\quad T_1^\mathfrak{p}(F_2)=\frac{1}{q-1}(F_1F_2+F_2F_1).\end{aligned}$$ ## Parabolic subalgebras We fix $i\in\mathbb{I}$ and denote $U_\mathfrak{q}(\alpha_i)$ and $U_\mathfrak{q}(-\alpha_i)$ the subalgebras of $U_\mathfrak{q}$ generated by $E_i$ and $F_i$, respectively. We set $$\begin{aligned} \label{eq:P alpha i} P_\mathfrak{q}(\alpha_i)=U_\mathfrak{q}(-\alpha_i)U^0_\mathfrak{q}U^+_\mathfrak{q}.\end{aligned}$$ This is a subalgebra of $U_\mathfrak{q}$ thanks to the defining relations. By the definition of the Lusztig isomorphisms, the restriction $T_i:P_{\sigma_i^{*}\mathfrak{q}}(\alpha_i)\longrightarrow P_\mathfrak{q}(\alpha_i)$ is an isomorphism and we can decompose $P_\mathfrak{q}(\alpha_i)$ as $$\begin{aligned} \label{eq:P alpha i otra} P_{\mathfrak{q}}(\alpha_i)=U_\mathfrak{q}(\alpha_i)\,U^0_\mathfrak{q}\,T_i(U^+_{(\sigma_i^\mathfrak{q})^{*}\mathfrak{q}}).\end{aligned}$$ Indeed, $T_i(E_i)=F_iL_i^{-1}$, $T_i(E_j)=\operatorname{ad}_c^{-c_{ij}^\mathfrak{q}} E_i(E_j)$ and $T_i(F_i)=K_i^{-1}E_i$ [@Hlusztigiso Lemma 6.6]. ## Some definitions We introduce some elements which will be key in the analysis of the representations of $U_\mathfrak{q}$. **Definition 15**. Let $\beta\in\Delta^\mathfrak{q}$ and $n\in\mathbb{N}$. 
We set $$\begin{aligned} [\beta;n]=(n)_{q_{\beta}^{-1}}K_\beta-(n)_{q_{\beta}}L_\beta \quad\mbox{and}\quad [n;\beta]=(n)_{q_{\beta}^{-1}}L_\beta-(n)_{q_{\beta}}K_\beta.\end{aligned}$$ It follows from the defining relations that $$\begin{aligned} \label{eq: E Fn} E_iF^n_i=F_i^nE_i+F_i^{n-1}[\alpha_i;n] \quad\mbox{and}\quad F_iE^n_i=E_i^nF_i+E_i^{n-1}[n;\alpha_i]\end{aligned}$$ for all $i\in\mathbb{I}$. Moreover, once we have fixed a PBW basis as in [\[eq:PBW basis\]](#eq:PBW basis){reference-type="eqref" reference="eq:PBW basis"}, we can apply the corresponding Lusztig isomorphisms to the above identities and obtain that $$\begin{aligned} E_\beta F^n_\beta=F_\beta^nE_\beta+F_\beta^{n-1}[\beta;n]\quad\mbox{and}\quad F_\beta E^n_\beta=E_\beta^nF_\beta+E_\beta^{n-1}[n;\beta]\end{aligned}$$ for all $\beta\in\Delta^\mathfrak{q}_+$. **Definition 16**. Given $\beta\in\Delta^\mathfrak{q}$, $\mu\in\mathbb Z^\mathbb{I}$ and a $U_\mathfrak{q}^0$-algebra $\mathbf{A}$ with structural map $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{A}$, we define $t_\beta^\pi(\mu)$ as the unique $t\in\{1, ..., b^\mathfrak{q}(\beta)-1\}$ such that $$\begin{aligned} 1=q_{\beta}^{1-t}\pi\tilde\mu(K_\beta L_\beta^{-1}),\end{aligned}$$ if it exists, and otherwise $t^\pi_\beta(\mu)=0$. Equivalently, we can say that, modulo $b^\mathfrak{q}(\beta)$, $t_\beta^\pi(\mu)$ is the unique $t\in\{1, ..., b^\mathfrak{q}(\beta)\}$ such that $\pi\tilde\mu([\beta;t])=0$. 
In fact, $\pi\tilde\mu([\beta;b^\mathfrak{q}(\beta)])=0$ and $$\begin{aligned} \label{eq:[beta;n] = otra} [\beta;t]=(t)_{q_{\beta}}L_\beta\left(q_{\beta}^{1-t}K_\beta L_\beta^{-1}-1\right).\end{aligned}$$ We also observe that $$\begin{aligned} \label{eq:[n;beta] = otra} [t;\beta]=(t)_{q_{\beta}^{-1}}L_\beta \left(1-q_{\beta}^{t-1}K_\beta L_\beta^{-1}\right).\end{aligned}$$ Given a $U_\mathfrak{q}^0$-algebra $\mathbf{A}$ with structural map $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{A}$ and a morphism $w\in\mathcal{W}^\mathfrak{q}$, we denote $\mathbf{A}[w]$ the $U_\mathfrak{q}^0$-algebra with structural map $\pi[w]:U_\mathfrak{q}^0\longrightarrow\mathbf{A}$ defined by $$\begin{aligned} \label{eq:pi[w]} \pi[w](K_\alpha L_\beta)=\pi(K_{w^{-1}\alpha} L_{w^{-1}\beta})\end{aligned}$$ for all $\alpha,\beta\in\mathbb Z^\mathbb{I}$. We highlight that $\pi[w]=\pi\circ T_w^{-1}{}_{|U_\mathfrak{q}^0}$ for any Lusztig isomorphism $T_w:U_\mathfrak{q}\longrightarrow U_{w^{*}\mathfrak{q}}$ associated to $w$ by [\[eq:Tw restricted to U0\]](#eq:Tw restricted to U0){reference-type="eqref" reference="eq:Tw restricted to U0"}. 
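To make Definition 16 concrete, $t^\pi_\beta(\mu)$ can be found by direct search over $\{1,\dots,b^\mathfrak{q}(\beta)-1\}$. In the Python sketch below (the names `biform` and `t_beta` are ours) we take for $\pi$ the character $K_\alpha L_\beta\mapsto1$, an assumption made purely for illustration; then $\pi\tilde\mu(K_\beta L_\beta^{-1})=\mathfrak{q}(\beta,\mu)\,\mathfrak{q}(\mu,\beta)$.

```python
import cmath

def biform(Q, alpha, beta):
    # bicharacter q(alpha, beta) = prod_{i,j} q_ij^(a_i b_j)
    val = 1
    for i, a in enumerate(alpha):
        for j, b in enumerate(beta):
            val *= Q[i][j] ** (a * b)
    return val

def t_beta(q_beta, b, z, tol=1e-9):
    # the unique t in {1, ..., b-1} with q_beta^(1-t) * z = 1, and 0 if none exists;
    # here z plays the role of pi(tilde-mu(K_beta L_beta^{-1}))
    for t in range(1, b):
        if abs(q_beta ** (1 - t) * z - 1) < tol:
            return t
    return 0

# the matrix of Example 4 with N = 5 and the simple root beta = alpha_1
q0 = cmath.exp(2j * cmath.pi / 5)
Q = [[-1, -q0], [-1, -1]]
beta = (1, 0)
```

For $\beta=\alpha_1$ in Example 4 one gets $t^\pi_{\alpha_1}(\alpha_1)=1$ while $t^\pi_{\alpha_1}(\alpha_2)=0$, since $q_{\alpha_1}=-1$ and the scalar $z$ equals $1$ in the first case and $q$ in the second.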
If $\beta=w\alpha_i\in\Delta^{w^*\mathfrak{q}}$ for some $i\in\mathbb{I}$, it holds that $$\begin{aligned} \label{eq: t w alpha i} t_{w\alpha_i}^{\pi[w]}(w\mu)=t_{\alpha_i}^\pi(\mu).\end{aligned}$$ In fact, we have that $$\begin{aligned} \notag \pi[w]\widetilde{w\mu}([\beta;t]) &=\pi[w]\left((t)_{w^{*}\mathfrak{q}(\beta,\beta)^{-1}}w^{*}\mathfrak{q}(\beta,w\mu)K_{\beta}-(t)_{w^{*}\mathfrak{q}(\beta,\beta)}w^{*}\mathfrak{q}(w\mu,-\beta)L_{\beta}\right)\\ \label{eq:pi w mu beta} &=\pi\left((t)_{q_{\alpha_i}^{-1}}\mathfrak{q}(\alpha_i,\mu)K_{\alpha_i}-(t)_{q_{\alpha_i}}\mathfrak{q}(\mu,-\alpha_i)L_{\alpha_i}\right)\\ \notag &=\pi\tilde\mu([\alpha_i;t]).\end{aligned}$$ As in [@HY Definition 2.16], we define the group homomorphism $\rho^\mathfrak{q}:\mathbb Z^\mathbb{I}\longrightarrow\Bbbk^\times$ by $\rho^\mathfrak{q}(\alpha_i)=\mathfrak{q}(\alpha_i,\alpha_i)$ for all $i\in\mathbb{I}$. **Definition 17**. Given $\beta\in\Delta^\mathfrak{q}$, $\mu\in\mathbb Z^\mathbb{I}$ and a $U_\mathfrak{q}^0$-algebra $\mathbf{A}$ with structural map $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{A}$, we define $n_\beta^\pi(\mu)$ as the unique $n\in\{1, ..., b^\mathfrak{q}(\beta)-1\}$ such that $$\begin{aligned} q_{\beta}^{n}=\rho^\mathfrak{q}(\beta)\,\pi\widetilde{\mu}(K_{\beta}L_{\beta}^{-1}),\end{aligned}$$ if it exists, and otherwise $n^\pi_\beta(\mu)=0$. The above numbers are related in the following way. **Lemma 18**. *If $\beta=w\alpha_i\in\Delta^{\mathfrak{q}}$ for some $i\in\mathbb{I}$, then $n_\beta^\pi(\mu)=t_\beta^\pi(\mu\langle w\rangle)$.* *Proof.* We first claim that $$\begin{aligned} \label{eq:rho = ...} \rho^\mathfrak{q}(w\alpha_i)=\mathfrak{q}(w\alpha_i,w\alpha_i)\,\mathfrak{q}(0\langle w\rangle,w\alpha_i)\, \mathfrak{q}(w\alpha_i,0\langle w\rangle).\end{aligned}$$ We prove it by induction on the length of $w$. 
If $w=1^\mathfrak{q}$, then $n_{\alpha_i}^\pi(\mu)=t_{\alpha_i}^\pi(\mu)$ by [\[eq:\[beta;n\] = otra\]](#eq:[beta;n] = otra){reference-type="eqref" reference="eq:[beta;n] = otra"}. We now assume the equality holds for all bicharacters and morphisms in $\mathcal{W}$ of length $r$. Thus, if $w=\sigma_jw_1$ with $j\in\mathbb{I}$ and $\ell(\sigma_jw_1)=1+\ell(w_1)=1+r$, similar to [\[eq:w varrho - varrho\]](#eq:w varrho - varrho){reference-type="eqref" reference="eq:w varrho - varrho"}, one can check that $$\begin{aligned} 0\langle\sigma_jw_1\rangle=-\sum_{\gamma\in \Delta_+^{\sigma_j^{-*}(\mathfrak{q})}:w_1^{-1}\gamma\in \Delta_-^{(\sigma_jw_1)^{-*}\mathfrak{q}}}(b^\mathfrak{q}(\sigma_j\gamma)-1)\sigma_j\gamma+(b^\mathfrak{q}(\alpha_j)-1)\sigma_j\alpha_j.\end{aligned}$$ Therefore $$\begin{aligned} \mathfrak{q}(\sigma_jw_1\alpha_i,\sigma_jw_1\alpha_i)&\,\mathfrak{q}(0\langle\sigma_j w_1\rangle,\sigma_jw_1\alpha_i)\, \mathfrak{q}(\sigma_jw_1\alpha_i,0\langle \sigma_jw_1\rangle)=\\ &=\sigma_j^{-*}(\mathfrak{q})(w_1\alpha_i,w_1\alpha_i)\, \sigma_j^{-*}(\mathfrak{q})(0\langle w_1\rangle,w_1\alpha_i)\,\sigma_j^{-*}(\mathfrak{q})(w_1\alpha_i,0\langle w_1\rangle)\times\\ &\quad\quad\quad \sigma_j^{-*}(\mathfrak{q})(\alpha_j,w_1\alpha_i)^{b^{\sigma_j^{-*}(\mathfrak{q})}(\alpha_j)-1}\,\sigma_j^{-*}(\mathfrak{q})(w_1\alpha_i,\alpha_j)^{b^{\sigma_j^{-*}(\mathfrak{q})}(\alpha_j)-1}\\ &\overset{(\star)}{=}\rho^{\sigma_j^{-*}(\mathfrak{q})}(w_1\alpha_i)\,\frac{\rho^\mathfrak{q}(\sigma_jw_1\alpha_i)}{\rho^{\sigma_j^{-*}(\mathfrak{q})}(w_1\alpha_i)}=\rho^\mathfrak{q}(\sigma_jw_1\alpha_i);\end{aligned}$$ $(\star)$ follows from the inductive hypothesis and [@HY Lemma 2.17]. This concludes the induction and our claim holds. 
In consequence, we have that $$\begin{aligned} q_{\beta}\pi\widetilde{\mu\langle w\rangle}(K_{\beta}L_{\beta}^{-1})&=\mathfrak{q}(\beta,\beta)\mathfrak{q}(0\langle w\rangle,\beta)\mathfrak{q}(\beta,0\langle w\rangle)\,\pi\widetilde{\mu}(K_{\beta}L_{\beta}^{-1})\\ &=\rho^\mathfrak{q}(\beta)\,\pi\widetilde{\mu}(K_{\beta}L_{\beta}^{-1})\end{aligned}$$ which implies the lemma. ◻ # Andersen--Jantzen--Soergel categories {#sec:AJS} In [@AJS Section 2], Andersen, Jantzen and Soergel defined certain categories of modules over algebras that enjoy the most remarkable features of small quantum groups at roots of unity. We will call them *AJS categories*. They consider any $\mathbb Z^\mathbb{I}$-graded $\Bbbk$-algebra $U$ endowed with a triangular decomposition $$\begin{aligned} U^-\otimes U^0\otimes U^+\longrightarrow U,\end{aligned}$$ *i.e.* a $\Bbbk$-linear isomorphism induced by the multiplication, where $U^-$, $U^0$ and $U^+$ are $\mathbb Z^\mathbb{I}$-graded subalgebras satisfying the following properties, cf. [@AJS §1.1 and §2.1]: $$\begin{aligned} \label{eq:property 0} &U^0\subset U_0,\quad (U^\pm)_0=\Bbbk;\\ \label{eq:property order} &(U^\pm)_{\nu}\neq0\Rightarrow \pm\nu\geq0;\\ \label{property A} &\mbox{$\operatorname{supp}(U^\pm)$ is finite};\\ \label{property B C} &\mbox{Each $(U^\pm)_\nu$, and hence $U^\pm$, is finite-dimensional over $\Bbbk$}.\end{aligned}$$ They also assume that $U^0$ is commutative and that there exists a group homomorphism $\mathbb Z^\mathbb{I}\longrightarrow\operatorname{Aut}_{\Bbbk-alg}(U^0)$, $\mu\mapsto\widetilde{\mu}$, such that $$\begin{aligned} \label{property D} s\,u=u\,\widetilde\mu(s)\quad\forall u\in U_\mu,\,s\in U^0. \end{aligned}$$ From [\[eq:property 0\]](#eq:property 0){reference-type="eqref" reference="eq:property 0"}, we deduce that there are augmentation maps $U^\pm\longrightarrow (U^\pm)_0=\Bbbk$. We denote both of them by $\varepsilon$. **Example 19**. A small quantum group satisfies all the previous properties.
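In the case $U=U_\mathfrak{q}$ the automorphisms $\widetilde{\mu}$ act diagonally on the generators of $U^0_\mathfrak{q}$. The following is a sketch, read off from the computation [\[eq:pi w mu beta\]](#eq:pi w mu beta){reference-type="eqref" reference="eq:pi w mu beta"}, which makes [\[property D\]](#property D){reference-type="eqref" reference="property D"} explicit for this example:

```latex
% Action of the automorphism \widetilde{\mu} on the generators of U_q^0,
% as read off from the formula for \pi\widetilde{\mu}([\beta,t]) above:
\begin{aligned}
\widetilde{\mu}(K_i) &= \mathfrak{q}(\alpha_i,\mu)\,K_i, &
\widetilde{\mu}(L_i) &= \mathfrak{q}(\mu,-\alpha_i)\,L_i
                      = \mathfrak{q}(\mu,\alpha_i)^{-1}\,L_i.
\end{aligned}
```

For instance, taking $u=F_j\in (U_\mathfrak{q})_{-\alpha_j}$ in [\[property D\]](#property D){reference-type="eqref" reference="property D"} gives the familiar commutation rule $K_i\,F_j=\mathfrak{q}(\alpha_i,\alpha_j)^{-1}\,F_j\,K_i$.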
We fix a Noetherian commutative $U^0$-algebra $\mathbf{A}$ with structural map $\pi:U^0\longrightarrow\mathbf{A}$. We now present the AJS category $\mathcal{C}_\mathbf{A}$, cf. [@AJS 2.3]. An object of $\mathcal{C}_\mathbf{A}$ is a $\mathbb Z^\mathbb{I}$-graded $U\otimes\mathbf{A}$-module $M$, or equivalently a left $U$-module and right $\mathbf{A}$-module, such that $$\begin{aligned} \label{eq: cCA fin gen} &\mbox{$M$ is finitely generated over $\mathbf{A}$;} \\ \label{eq: cCA is A graded} &\mbox{$M_\mu\mathbf{A}\subset M_\mu$,} \\ \label{eq: cCA is U graded} &\mbox{$U_\nu M_\mu\subset M_{\nu+\mu}$,} \\ \label{eq:compatibilidad sm ma} &\mbox{$s\,m=m\,\pi\widetilde{\mu}(s)$,}\end{aligned}$$ for all $\mu,\nu\in\mathbb Z^\mathbb{I}$, $m\in M_\mu$ and $s\in U^0$. The last compatibility means that the $U^0$-action is determined by the $\mathbf{A}$-action and the $\mathbb Z^\mathbb{I}$-grading. The morphisms between two objects are the morphisms of $\mathbb Z^\mathbb{I}$-graded $U\otimes\mathbf{A}$-modules. The authors also defined categories $\mathcal{C}'_\mathbf{A}$ and $\mathcal{C}''_\mathbf{A}$ in a similar fashion by replacing $U$ with $U^0U^+$ and $U^0$, respectively. There are obvious induced functors $\mathcal{C}''_\mathbf{A}\longrightarrow\mathcal{C}'_\mathbf{A}$ and $\mathcal{C}'_\mathbf{A}\longrightarrow\mathcal{C}_\mathbf{A}$ which are left adjoints of the forgetful functors. Moreover, the categories $\mathcal{C}_\mathbf{A}$ and $\mathcal{C}'_\mathbf{A}$ have enough projectives [@AJS Lemma 2.7]. We next summarize the most important attributes of the AJS categories. ## Verma modules Let $\mu\in\mathbb Z^\mathbb{I}$. We denote by $\mathbf{A}^\mu$ the free right $\mathbf{A}$-module of rank one concentrated in degree $\mu$ and generated by the symbol $\boldsymbol{|}\mu\boldsymbol{\rangle}=\boldsymbol{|}\mu\boldsymbol{\rangle}_\mathbf{A}$.
We consider $\mathbf{A}^\mu$ as an object of $\mathcal{C}'_\mathbf{A}$ with left $U^+$-action given by the augmentation map and left $U^0$-action determined by [\[eq:compatibilidad sm ma\]](#eq:compatibilidad sm ma){reference-type="eqref" reference="eq:compatibilidad sm ma"}. Explicitly, $$\begin{aligned} \label{eq:Amu} su\cdot\boldsymbol{|}\mu\boldsymbol{\rangle}=\varepsilon(u)\,\pi\widetilde{\mu}(s)\,\boldsymbol{|}\mu\boldsymbol{\rangle}\quad\forall s\in U^0,\,u\in U^+.\end{aligned}$$ The *Verma modules* are the induced modules $$\begin{aligned} \label{eq:Z} Z_\mathbf{A}(\mu)=U\otimes_{U^0U^+}\mathbf{A}^\mu\end{aligned}$$ with $U$ and $\mathbf{A}$ acting by left and right multiplication on the left and right factor, respectively. The module $Z_\mathbf{A}(\mu)$ is isomorphic to $U^-\otimes\mathbf{A}$ as a $U^-\otimes\mathbf{A}$-module and its weight spaces are $$\begin{aligned} Z_\mathbf{A}(\mu)_\beta=(U^-)_{\beta-\mu}\otimes\boldsymbol{|}\mu\boldsymbol{\rangle}\end{aligned}$$ for all $\beta\in\mathbb Z^\mathbb{I}$. Therefore $$\begin{aligned} \label{eq:ch Z} {\mathrm{ch}} \,Z_\mathbf{A}(\mu)=e^\mu\, {\mathrm{ch}} \,U^-.\end{aligned}$$ By an abuse of notation, we also denote by $\boldsymbol{|}\mu\boldsymbol{\rangle}$ the generator $1\otimes\boldsymbol{|}\mu\boldsymbol{\rangle}$ of $Z_\mathbf{A}(\mu)$. A *$Z$-filtration* of a module in $\mathcal{C}_\mathbf{A}$ is a filtration whose subquotients are isomorphic to Verma modules. It is proved in [@AJS §2.11-§2.16] that projective modules admit $Z$-filtrations. Several other properties of these modules are proved in [@AJS Sections 2-4]. ## Simple modules {#subsec:simples} Assume that $\mathbf{A}=\mathbf{k}$ is a field. Then $Z_\mathbf{k}(\mu)$ has a unique simple quotient denoted $L_\mathbf{k}(\mu)$ [@AJS §4.1]. This object is characterized as the unique (up to isomorphism) simple *highest-weight module* $L$ in $\mathcal{C}_\mathbf{k}$, that is, $L$ is generated by some $v\in L_\mu$ with $(U^+)_\nu v=0$ for all $\nu>0$.
We say that $v$ is a *highest-weight vector of weight $\mu$*. Moreover, each simple module in $\mathcal{C}_\mathbf{k}$ is isomorphic to a unique simple highest-weight module. This characterization of the simple modules implies that their characters are linearly independent. We notice that all modules have composition series of finite length. For $M\in\mathcal{C}_\mathbf{k}$, $[M:L_\mathbf{k}(\lambda)]$ denotes the number of composition factors isomorphic to $L_\mathbf{k}(\lambda)$. Two important properties of the Verma modules are $$\begin{aligned} [Z_\mathbf{k}(\mu):L_\mathbf{k}(\mu)]=1\quad\mbox{and}\quad[Z_\mathbf{k}(\mu):L_\mathbf{k}(\lambda)]\neq0\Rightarrow\lambda\leq\mu.\end{aligned}$$ The next lemma is standard. It says that we can read off the composition factors of a module from its character. In particular, modules with equal characters have the same composition factors. **Lemma 20**. *Let $M\in\mathcal{C}_\mathbf{k}$. It holds that ${\mathrm{ch}} \, M=\sum_\lambda a_\lambda\, {\mathrm{ch}} \, L_\mathbf{k}(\lambda)$ if and only if $a_\lambda=[M:L_\mathbf{k}(\lambda)]$ for all $\lambda\in\mathbb Z^\mathbb{I}$. ◻* **Example 21**. Let $U=U_\mathfrak{q}$ be a Drinfeld double, $\mathbf{k}=\Bbbk$ and $\mu=0$. Then $K_\alpha L_\beta\cdot\boldsymbol{|}0\boldsymbol{\rangle}=\pi(K_\alpha L_\beta)\boldsymbol{|}0\boldsymbol{\rangle}$ and hence $Z_\Bbbk(0)=\mathcal{M}^\mathfrak{q}(\pi)$ is the Verma module of [@HY Definition 5.1] and $L_\Bbbk(0)=L^\mathfrak{q}(\pi)$ its simple quotient as in [@HY $(5.7)$]. **Example 22**.
$L_\mathbf{k}(\mu)=\mathbf{k}^\mu$ is one-dimensional if and only if $$\begin{aligned} \pi\tilde{\mu}(K_iL_i^{-1})=1 \quad\forall i\in\mathbb{I}.\end{aligned}$$ Indeed, using [\[eq: EF FE\]](#eq: EF FE){reference-type="eqref" reference="eq: EF FE"} and [\[eq:Amu\]](#eq:Amu){reference-type="eqref" reference="eq:Amu"}, $\mathbf{k}^\mu$ is the simple quotient of $Z_\mathbf{k}(\mu)$ if and only if $$\begin{aligned} 0=(K_i-L_i)\cdot\boldsymbol{|}\mu\boldsymbol{\rangle}=\left(\mathfrak{q}(\alpha_i,\mu)\pi(K_i)-\frac{1}{\mathfrak{q}(\mu,\alpha_i)}\pi(L_i)\right)\boldsymbol{|}\mu\boldsymbol{\rangle}\end{aligned}$$ which is equivalent to our claim. ## Blocks Let $\sim_b$ denote the smallest equivalence relation on $\mathbb Z^\mathbb{I}$ such that $\lambda\sim_b\mu$ if $\operatorname{Hom}_{\mathcal{C}_\mathbf{A}}(Z_\mathbf{A}(\lambda),Z_\mathbf{A}(\mu))\neq0$ or $\mathop{\mathrm{ {\mathrm{Ext}} }}^1_{\mathcal{C}_\mathbf{A}}(Z_\mathbf{A}(\lambda),Z_\mathbf{A}(\mu))\neq0$. The equivalence classes of $\sim_b$ are called blocks [@AJS §6.9]. In case $\mathbf{A}=\mathbf{k}$ is a field, this definition coincides with the usual definition of blocks via simple modules [@AJS Lemma 6.12]. Namely, $\lambda$ and $\mu$ belong to the same block if $L_\mathbf{k}(\lambda)$ and $L_\mathbf{k}(\mu)$ have a non-trivial extension. Let $\mathcal{D}_\mathbf{A}$ denote the full subcategory of $\mathcal{C}_\mathbf{A}$ of all objects admitting a $Z$-filtration. If $b$ is a block, $\mathcal{D}_\mathbf{A}(b)$ is the full subcategory of all objects admitting a $Z$-filtration whose factors are $Z_\mathbf{A}(\mu)$ with $\mu\in b$. Likewise $\mathcal{C}_\mathbf{A}(b)$ denotes the full subcategory of all objects in $\mathcal{C}_\mathbf{A}$ which are the homomorphic image of an object in $\mathcal{D}_\mathbf{A}(b)$. The abelian categories $\mathcal{D}_\mathbf{A}$ and $\mathcal{C}_\mathbf{A}$ decompose into the direct sums $\oplus_b\mathcal{D}_\mathbf{A}(b)$ and $\oplus_b\mathcal{C}_\mathbf{A}(b)$.
These and other properties are proved in [@AJS 6.10]. We will also refer to the subcategories $\mathcal{D}_\mathbf{A}(b)$ and $\mathcal{C}_\mathbf{A}(b)$ as blocks. The *principal block*, denoted $\mathcal{C}_{\mathbf{k}}(0)$, is the block containing $L_\mathbf{k}(0)$. ## Duals Let $M\in\mathcal{C}_\mathbf{A}$. Then $M^\tau=\operatorname{Hom}_\mathbf{A}(M,\mathbf{A})$ is an object in $\mathcal{C}_\mathbf{A}$ with $U$-action $$\begin{aligned} (uf)(m)=f(\tau(u)m),\end{aligned}$$ for all $m\in M$ and $f\in M^\tau$, and homogeneous components $$\begin{aligned} (M^\tau)_\lambda=\left\{f\in\operatorname{Hom}_\mathbf{A}(M,\mathbf{A})\mid f(M_\mu)=0\,\forall\mu\neq\lambda\right\}\end{aligned}$$ for all $\lambda\in\mathbb Z^\mathbb{I}$. If $M$ is $\mathbf{A}$-free, then $$\begin{aligned} \label{eq:ch duals} {\mathrm{ch}} (M^\tau)= {\mathrm{ch}} (M).\end{aligned}$$ From the characterization of the simple objects in $\mathcal{C}_\mathbf{k}$, where $\mathbf{k}$ is a field, we deduce that $$\begin{aligned} L_\mathbf{k}(\mu)^\tau\simeq L_\mathbf{k}(\mu)\end{aligned}$$ for all $\mu\in\mathbb Z^\mathbb{I}$. ## AJS categories versus usual module categories {#subsec:finite} Assume that $U^0$ is finite-dimensional. By forgetting the right $\mathbf{A}$-action, we obtain a fully faithful exact functor from the AJS categories $\mathcal{C}_\mathbf{A}$ to the category ${}_{U}\mathcal{G}$ of finite-dimensional $\mathbb Z^\mathbb{I}$-graded $U$-modules. Roughly speaking, the next proposition says that we know all the simple objects and the blocks of the category ${}_U\mathcal{G}$ if we know the simple module $L_{\Bbbk}(0)$ and the principal block $\mathcal{C}_{\Bbbk}(0)$ for all the algebra maps $\pi:U^0\longrightarrow\Bbbk$. **Proposition 23**.
*Every block in ${}_{U}\mathcal{G}$ is equivalent as an abelian category to the principal block of $\mathcal{C}_\Bbbk$ for some algebra map $\pi:U^0\longrightarrow\Bbbk$.* *Proof.* One can construct Verma and simple modules in ${}_{U}\mathcal{G}$ as in the AJS categories, see for instance [@BT]. Given an algebra map $\pi:U^0\longrightarrow\Bbbk$, let us denote by $\Delta(\pi)$ and $L(\pi)$ the corresponding Verma and simple modules in ${}_{U}\mathcal{G}$. Namely, $L(\pi)$ coincides with the image of $L_{\Bbbk}(0)$ under the forgetful functor. We claim that every simple module in ${}_{U}\mathcal{G}$ belonging to the same block as $L(\pi)$ is isomorphic to the image of $L_{\Bbbk}(\mu)$ under the forgetful functor for some $\mu$ in the principal block of $\mathcal{C}_{\Bbbk}$. In fact, suppose that $L(\mu)$ is a simple module in ${}_{U}\mathcal{G}$ and $$\begin{aligned} 0\longrightarrow L(\mu)\longrightarrow M\longrightarrow L(\pi)\longrightarrow0 \end{aligned}$$ is a non-trivial extension. Let $m_\mu$ be a highest-weight vector in $L(\mu)$ and $m\in M$ such that its image in $L(\pi)$ is a highest-weight vector. Then $m$ generates $M$ and there is $u_\mu\in U_\mu$ such that $m_\mu=u_\mu \cdot m$. Let $s\in U^0$. Hence $$\begin{aligned} s\cdot m_\mu=su_\mu\cdot m\overset{\eqref{property D}}{=}u_\mu\widetilde{\mu}(s)\cdot m \overset{\eqref{eq:compatibilidad sm ma}}{=}(\pi\widetilde{\mu})(s)\,m_\mu.\end{aligned}$$ This implies that $L(\mu)$ is isomorphic to the image of $L_\Bbbk(\mu)$ under the forgetful functor. Therefore every simple module of ${}_U\mathcal{G}$ belonging to the same block as $L(\pi)$ is also in the image of the principal block, and the proposition follows. ◻ ## AJS categories and quotients {#subsec:quotients} Let $p:U\longrightarrow\overline{U}$ be a $\mathbb Z^\mathbb{I}$-graded algebra projection and set $\overline{U}^{\pm,0}:=p(U^{\pm,0})$.
Assume that the multiplication map $\overline{U}^-\otimes\overline{U}^0\otimes\overline{U}^+\longrightarrow \overline{U}$ is a linear isomorphism, that $\overline{U}^\pm\simeq U^\pm$ and that each $\widetilde{\mu}$ induces an algebra automorphism of $\overline{U}^0$. Thus, given an algebra map $\overline{\pi}:\overline{U}^0\longrightarrow\mathbf{A}$ we can consider the corresponding AJS category, which we denote $\overline{\mathcal{C}}_\mathbf{A}$. We write $\overline{Z}_\mathbf{A}(\mu)$ and $\overline{L}_\mathbf{k}(\mu)$ for the Verma and simple modules in $\overline{\mathcal{C}}_\mathbf{A}$ and $\overline{\mathcal{C}}_\mathbf{k}$, respectively. If $\pi=\overline{\pi}\circ p:U^0\longrightarrow\mathbf{A}$, we get an obvious functor $\operatorname{Inf}_{\overline{U}}^U:\overline{\mathcal{C}}_\mathbf{A}\longrightarrow\mathcal{C}_\mathbf{A}$ such that $\operatorname{Inf}_{\overline{U}}^U(\overline{Z}_\mathbf{A}(\mu))\simeq Z_\mathbf{A}(\mu)$ and $\operatorname{Inf}_{\overline{U}}^U(\overline{L}_\mathbf{k}(\mu))\simeq L_\mathbf{k}(\mu)$ for all $\mu\in\mathbb Z^\mathbb{I}$, by the assumptions on $p$. Then ${\mathrm{ch}} \overline{Z}_\mathbf{k}(\mu)= {\mathrm{ch}} Z_\mathbf{k}(\mu)$ and ${\mathrm{ch}} \overline{L}_\mathbf{k}(\mu)= {\mathrm{ch}} L_\mathbf{k}(\mu)$ and therefore $$\begin{aligned} \label{eq:quotients} \left[\overline{Z}_\mathbf{k}(\mu):\overline{L}_\mathbf{k}(\lambda)\right]=[Z_\mathbf{k}(\mu):L_\mathbf{k}(\lambda)]\end{aligned}$$ for all $\mu,\lambda\in\mathbb Z^\mathbb{I}$ by Lemma [Lemma 20](#lem:ch and composition){reference-type="ref" reference="lem:ch and composition"}. Moreover, if $\widetilde{\mu}$ does not descend to an algebra automorphism of $\overline{U}^0$, we have an analogue of [\[eq:quotients\]](#eq:quotients){reference-type="eqref" reference="eq:quotients"} by considering the category ${}_{\overline{U}}\mathcal{G}$ instead of $\overline{\mathcal{C}}_\Bbbk$.
# Twisted Verma modules {#sec:vermas} Throughout this section we fix a matrix $\mathfrak{q}=(q_{ij})_{i,j\in\mathbb{I}}\in(\Bbbk^\times)^{\mathbb{I}\times\mathbb{I}}$ with finite-dimensional Nichols algebra $\mathfrak{B}_\mathfrak{q}$, where $\mathbb{I}=\mathbb{I}_\theta$. Let $\mathcal{X}$ be the $\mathcal{G}$-orbit of $\mathfrak{q}$ and $\mathcal{W}$ its Weyl groupoid. Let $U_\mathfrak{q}$ be the Hopf algebra introduced in Section [4](#sec:Drinfeld){reference-type="ref" reference="sec:Drinfeld"}. In the sequel $\mathbf{A}$ denotes a Noetherian commutative ring and $\mathbf{k}$ denotes a field. We assume that both are algebras over $U^0$ and, by abuse of notation, we denote their structural maps $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{A}$ and $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{k}$. We denote $\mathcal{C}_\mathbf{A}^\mathfrak{q}$, $\mathcal{C}'^\mathfrak{q}_\mathbf{A}$ and $\mathcal{C}''^\mathfrak{q}_\mathbf{A}$ the corresponding AJS categories; and similarly over the field $\mathbf{k}$. We point out that the results of the following subsections regarding simple modules only hold for the AJS categories over the field $\mathbf{k}$, cf. §[5.2](#subsec:simples){reference-type="ref" reference="subsec:simples"}, while the results valid over $\mathbf{A}$ obviously hold over $\mathbf{k}$. In this section we construct and study different Verma modules for $U_\mathfrak{q}$ using the Lusztig isomorphisms of §[4.1](#subsec:lusztig iso){reference-type="ref" reference="subsec:lusztig iso"}. To do this, we mimic the ideas from [@AJS], where the authors use the Lusztig automorphisms [@L-qgps-at-roots]. Although our proofs are almost identical to those of [@AJS], it is worthwhile to carry them out thoroughly again, as we have morphisms between possibly different algebras parameterized by $\mathcal{X}$ and permuted by the action of the Weyl groupoid.
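Before twisting, it may help to keep the simplest untwisted picture in mind. The following is a sketch under the (hypothetical) rank-one assumption $\mathbb{I}=\{1\}$, where $U^-_\mathfrak{q}$ has basis $F_1^t$ with $0\le t<b:=b^\mathfrak{q}(\alpha_1)$:

```latex
% Rank-one sketch (assumption: I = {1}, U^- spanned by F_1^t, 0 <= t < b):
\begin{aligned}
{\mathrm{ch}}\,U^-_\mathfrak{q}
  &=\sum_{t=0}^{b-1}e^{-t\alpha_1}
   =\frac{1-e^{-b\alpha_1}}{1-e^{-\alpha_1}},\\
{\mathrm{ch}}\,Z_\mathbf{A}(\mu)
  &=e^{\mu}\,{\mathrm{ch}}\,U^-_\mathfrak{q}
   =\sum_{t=0}^{b-1}e^{\mu-t\alpha_1}.
\end{aligned}
```

The twisted Verma modules constructed below replace the basis $F_1^t\boldsymbol{|}\mu\boldsymbol{\rangle}$ by $E_1^t\boldsymbol{|}\mu'\boldsymbol{\rangle}$ while preserving the character, as formalized in [\[eq: ch Zmu = ch Zw muw\]](#eq: ch Zmu = ch Zw muw){reference-type="eqref" reference="eq: ch Zmu = ch Zw muw"} below.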
## $w$-Verma modules Let $w:w^{-*}\mathfrak{q}\rightarrow\mathfrak{q}$ be a morphism in $\mathcal{W}$ with $w=1^\mathfrak{q}\sigma_{i_k}\cdots\sigma_{i_1}$ a reduced expression. We consider the Lusztig isomorphism $$\begin{aligned} T_w=T_{i_k}\cdots T_{i_1}:U_{w^{-*}\mathfrak{q}}\longrightarrow U_\mathfrak{q}. \end{aligned}$$ Thus, the triangular decomposition of $U_{w^{-*}\mathfrak{q}}$ induces a new triangular decomposition on $U_\mathfrak{q}$. Explicitly, $$\begin{aligned} \label{eq:triangular decomposition w recall} T_w(U^-_{w^{-*}\mathfrak{q}})\otimes U^0_\mathfrak{q}\otimes T_w(U^+_{w^{-*}\mathfrak{q}})\longrightarrow U_\mathfrak{q}\end{aligned}$$ since $T_w(U^0_{w^{-*}\mathfrak{q}})=U^0_{\mathfrak{q}}$. Given $\mu\in\mathbb Z^\mathbb{I}$, we consider $\mathbf{A}^\mu$ as a $U^0_\mathfrak{q}\,T_w(U^+_{w^{-*}\mathfrak{q}})$-module with action $$\begin{aligned} \label{eq:Amu twisted} su\cdot\boldsymbol{|}\mu\boldsymbol{\rangle}=\varepsilon(u)\,\pi(\widetilde{\mu}(s))\,\boldsymbol{|}\mu\boldsymbol{\rangle}\quad\forall s\in U_\mathfrak{q}^0,\,u\in T_w(U^+_{w^{-*}\mathfrak{q}})\end{aligned}$$ and imitating [@AJS §4.3], we introduce the module $$\begin{aligned} \label{eq:Verma twisted} Z^w_\mathbf{A}(\mu)=U_\mathfrak{q}\otimes_{U_\mathfrak{q}^0T_w(U^+_{w^{-*}\mathfrak{q}})}\mathbf{A}^\mu\end{aligned}$$ which belongs to $\mathcal{C}^\mathfrak{q}_\mathbf{A}$. We call it the *$w$-Verma module*. Of course, $Z_\mathbf{A}^{1^\mathfrak{q}}(\mu)=Z_\mathbf{A}(\mu)$. We notice that $Z^w_\mathbf{A}(\mu)$ does not depend on the reduced expression of $w$. In fact, if $\tilde T_w$ is the Lusztig isomorphism associated to another reduced expression of $w$, then $T_w(U^\pm_{w^{-*}\mathfrak{q}})=\tilde T_w(U^\pm_{w^{-*}\mathfrak{q}})$ by [\[eq:Tw different presentation\]](#eq:Tw different presentation){reference-type="eqref" reference="eq:Tw different presentation"}.
Hence the decomposition [\[eq:triangular decomposition w recall\]](#eq:triangular decomposition w recall){reference-type="eqref" reference="eq:triangular decomposition w recall"} does not depend on the reduced expression. Moreover, [\[eq:Amu twisted\]](#eq:Amu twisted){reference-type="eqref" reference="eq:Amu twisted"} defines the same module for both expressions, and *a posteriori* isomorphic Verma modules. **Lemma 24**. *Let $\mu\in\mathbb Z^\mathbb{I}$ and $w\in{}^\mathfrak{q}\mathcal{W}$. Then $$\begin{aligned} {\mathrm{ch}} Z_\mathbf{A}^w(\mu)=e^{\mu-(w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^\mathfrak{q})} {\mathrm{ch}} U^-_\mathfrak{q}.\end{aligned}$$* *Proof.* By definition, we see that ${\mathrm{ch}} \,Z_\mathbf{A}^w(\mu)=e^\mu\, {\mathrm{ch}} \,T_w(U^-_{w^{-*}\mathfrak{q}})$. On the other hand, $T_w(U^-_{w^{-*}\mathfrak{q}})\simeq(U_{w^{-*}\mathfrak{q}}^-)[w]$ as $\mathbb Z^\mathbb{I}$-graded objects by [\[eq:Tw U alpha es U w alpha\]](#eq:Tw U alpha es U w alpha){reference-type="eqref" reference="eq:Tw U alpha es U w alpha"}.
Let us write $\Delta_+^\mathfrak{q}=R_1\cup R_2$ with $$\begin{aligned} R_1=\left\{\beta\in \Delta_+^\mathfrak{q}\,:\,w^{-1}\beta\in\Delta_-^{w^{-*}\mathfrak{q}}\right\} \quad\mbox{and}\quad R_2=\left\{\beta\in\Delta_+^\mathfrak{q}\,:\,w^{-1}\beta\in\Delta_+^{w^{-*}\mathfrak{q}}\right\}.\end{aligned}$$ Therefore $$\begin{aligned} \label{eq:chTwU-} \begin{split} {\mathrm{ch}} \,T_w(U^-_{w^{-*}\mathfrak{q}})\overset{\eqref{eq:w ch}}{=}w( {\mathrm{ch}} \,U^-_{w^{-*}\mathfrak{q}})&\overset{\eqref{eq:ch U-}}{=} \prod_{\gamma\in \Delta_+^{w^{-*}\mathfrak{q}}}\frac{1-e^{-b^{w^{-*}\mathfrak{q}}(\gamma)w\gamma}}{1-e^{-w\gamma}}\\ &= \prod_{\beta\in R_1}\frac{1-e^{b^{\mathfrak{q}}(\beta)\beta}}{1-e^{\beta}} \cdot \prod_{\beta\in R_2}\frac{1-e^{-b^{\mathfrak{q}}(\beta)\beta}}{1-e^{-\beta}}\\ &= \prod_{\beta\in R_1}e^{(b^{\mathfrak{q}}(\beta)-1)\beta} \cdot \prod_{\beta\in\Delta_+^\mathfrak{q}}\frac{1-e^{-b^{\mathfrak{q}}(\beta)\beta}}{1-e^{-\beta}}\\ &\overset{\eqref{eq:w varrho - varrho}}{=}e^{\varrho^\mathfrak{q}-w(\varrho^{w^{-*}\mathfrak{q}})} {\mathrm{ch}} U^-_\mathfrak{q} \end{split}\end{aligned}$$ and the lemma follows. ◻ As a consequence, we have that $$\begin{aligned} \label{eq: ch Zmu = ch Zw muw} {\mathrm{ch}} Z_\mathbf{A}(\mu)= {\mathrm{ch}} Z_\mathbf{A}^w(\mu\langle w\rangle).\end{aligned}$$ for all $\mu\in\mathbb Z^\mathbb{I}$ and $w\in{}^\mathfrak{q}\mathcal{W}$ with $\mu\langle w\rangle=\mu-(w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^\mathfrak{q})$, recall Definition [Definition 8](#def:mu w){reference-type="ref" reference="def:mu w"}. Moreover, we next see that the Hom spaces between these $w$-Verma modules are free of rank one over $\mathbf{A}$, cf. [@AJS Lemma 4.7]. **Proposition 25**. *Let $\mu\in\mathbb Z^\mathbb{I}$ and $x,w\in{}^\mathfrak{q}\mathcal{W}$. 
Then $$\begin{aligned} \operatorname{Hom}_{\mathcal{C}_\mathbf{A}^\mathfrak{q}}(Z_\mathbf{A}^x(\mu\langle x\rangle),Z_\mathbf{A}^w(\mu\langle w\rangle))\simeq \mathbf{A}\quad\mbox{and}\quad \operatorname{Hom}_{\mathcal{C}_\mathbf{A}^\mathfrak{q}}(Z_\mathbf{A}^x(\mu\langle x\rangle),Z_\mathbf{A}^w(\mu\langle w\rangle)^\tau)\simeq \mathbf{A}.\end{aligned}$$* *Proof.* Let us abbreviate $M^x=Z_\mathbf{A}^x(\mu\langle x\rangle)$. We have that $(M^x)_{\mu\langle x\rangle}$ is $\mathbf{A}$-free of rank $1$ and $\mu\langle x\rangle+x\beta$ is not a weight for any $\beta\in \Delta_+^{x^{-*}\mathfrak{q}}$ because ${\mathrm{ch}} \,M^x=e^{\mu\langle x\rangle} {\mathrm{ch}} \,T_x(U^-_{x^{-*}\mathfrak{q}})$. By [\[eq: ch Zmu = ch Zw muw\]](#eq: ch Zmu = ch Zw muw){reference-type="eqref" reference="eq: ch Zmu = ch Zw muw"}, these claims are also true for $M^w$. Thus $(M^w)_{\mu\langle x\rangle}=v\mathbf{A}$ and $T_x(U^+_{x^{-*}\mathfrak{q}})v=0$. Therefore, there is a morphism $M^x\rightarrow M^w$, induced by $\boldsymbol{|}\mu\langle x\rangle\boldsymbol{\rangle}\mapsto v$, and any other morphism has to be a multiple of it. This shows the first isomorphism. The second one follows similarly by using [\[eq:ch duals\]](#eq:ch duals){reference-type="eqref" reference="eq:ch duals"}. ◻ **Corollary 26**. *Let $\mu\in\mathbb Z^\mathbb{I}$ and $w\in{}^{\mathfrak{q}}\mathcal{W}$. Then $Z^w_\mathbf{A}(\mu\langle w\rangle)$ and $Z_\mathbf{A}(\mu)$ belong to the same block.* *Proof.* It follows as in [@AJS Lemma 6.11], using the above proposition. ◻ ## Twisted simple modules Let $w\in{}^\mathfrak{q}\mathcal{W}$.
The new triangular decomposition [\[eq:triangular decomposition w recall\]](#eq:triangular decomposition w recall){reference-type="eqref" reference="eq:triangular decomposition w recall"} on $U_\mathfrak{q}$ satisfies [\[eq:property 0\]](#eq:property 0){reference-type="eqref" reference="eq:property 0"}--[\[property D\]](#property D){reference-type="eqref" reference="property D"} with respect to the $w$-twisted $\mathbb Z^\mathbb{I}$-grading on $U_\mathfrak{q}$ and the partial order $\leq^w$; [\[property D\]](#property D){reference-type="eqref" reference="property D"} holds thanks to [\[eq: mu conjugado por Tw\]](#eq: mu conjugado por Tw){reference-type="eqref" reference="eq: mu conjugado por Tw"}. Similarly to [@AJS §4.3], this observation ensures that the $w$-Verma modules satisfy most of the properties of the usual Verma modules. For instance, we can construct the $w$-Verma modules via induced functors, they have simple heads, and the projective modules admit $Z^w$-filtrations. Therefore, in the case of the field $\mathbf{k}$, the $w$-Verma module $Z^w_\mathbf{k}(\mu)$ has a unique simple quotient which we denote $L^w_\mathbf{k}(\mu)$ for all $\mu\in\mathbb Z^\mathbb{I}$. As the simple modules in $\mathcal{C}_\mathbf{k}^\mathfrak{q}$ are determined by their highest weights, $w$ induces a bijection on $\mathbb Z^\mathbb{I}$, $\mu\leftrightarrow\mu_w$, such that $$\begin{aligned} \label{eq:Lw L mu w} L^w_\mathbf{k}(\mu)\simeq L_\mathbf{k}(\mu_w).\end{aligned}$$ Notice that if $w_0\in{}^\mathfrak{q}\mathcal{W}$ is the longest element, then $\mu_{w_0}$ is the *lowest weight* of $L_\mathbf{k}(\mu)$ by [\[eq:T w0\]](#eq:T w0){reference-type="eqref" reference="eq:T w0"}; *i.e.* $(U^-)_\nu\cdot L_\mathbf{k}(\mu)_{\mu_{w_0}}=0$ for all $\nu<0$. We next give more information about the $w$-Verma modules over the field $\mathbf{k}$. **Lemma 27** ([@AJS Lemma 4.8]). *Let $\mu\in\mathbb Z^\mathbb{I}$ and $w\in{}^\mathfrak{q}\mathcal{W}$.
Then the socle of $Z_\mathbf{k}^w(\mu)$ is a simple module in $\mathcal{C}_\mathbf{k}^\mathfrak{q}$. Furthermore, the element $T_w(F_{top}^{w^{-*}\mathfrak{q}})\boldsymbol{|}\mu\boldsymbol{\rangle}$ generates the socle and spans the homogeneous component of weight $\mu-w(\beta_{top}^{w^{-*}\mathfrak{q}})$.* *Proof.* The socle of $T_w(U^-_{w^{-*}\mathfrak{q}})$ as a module over itself is spanned by $T_w(F_{top}^{w^{-*}\mathfrak{q}})$ whose degree is $-w(\beta_{top}^{w^{-*}\mathfrak{q}})$. Then any simple submodule of $Z_\mathbf{k}^w(\mu)$ in $\mathcal{C}^\mathfrak{q}_\mathbf{k}$ must contain $T_w(F_{top}^{w^{-*}\mathfrak{q}})\boldsymbol{|}\mu\boldsymbol{\rangle}$ and the lemma follows. ◻ Morphisms as in the lemma below exist by Proposition [Proposition 25](#prop:Hom Zx y Zw){reference-type="ref" reference="prop:Hom Zx y Zw"}. Recall that $w_0$ denotes the longest element in ${}^\mathfrak{q}\mathcal{W}$. **Lemma 28** ([@AJS Lemma 4.9]). *Let $\mu\in\mathbb Z^\mathbb{I}$, $$\begin{aligned} \Phi: Z_\mathbf{k}(\mu)\longrightarrow Z_\mathbf{k}^{w_0}(\mu\langle w_0\rangle) \quad\mbox{and}\quad\Phi':Z_\mathbf{k}^{w_0}(\mu\langle w_0\rangle)\longrightarrow Z_\mathbf{k}(\mu)\end{aligned}$$ be non-zero morphisms in $\mathcal{C}^\mathfrak{q}_\mathbf{k}$. Then $$\begin{aligned} L_\mathbf{k}(\mu)\simeq \operatorname{soc}Z_\mathbf{k}^{w_0}(\mu\langle w_0\rangle)= {\mathrm{Im}} \Phi \quad\mbox{and}\quad L_\mathbf{k}^{w_0}(\mu\langle w_0\rangle)\simeq\operatorname{soc}Z_\mathbf{k}(\mu)= {\mathrm{Im}} \Phi'.\end{aligned}$$* *Proof.* As $\Phi'$ is graded, $\Phi'$ sends $\boldsymbol{|}\mu\langle w_0\rangle\boldsymbol{\rangle}$ to the generator of the socle of $Z_\mathbf{k}(\mu)$ by Lemma [Lemma 27](#le:socle of Zw){reference-type="ref" reference="le:socle of Zw"}. This shows the second isomorphism, and the first one follows in a similar way. ◻ **Example 29**.
$L^{w_0}_\mathbf{k}(\mu\langle w_0\rangle)=\mathbf{k}^{\mu\langle w_0\rangle}$ is one-dimensional if and only if $$\begin{aligned} \pi\widetilde{\mu\langle w_0\rangle}(K_iL_i^{-1})=1 \quad\forall i\in\mathbb{I}.\end{aligned}$$ Indeed, this follows as in Example [Example 22](#example:L one dimensional){reference-type="ref" reference="example:L one dimensional"} using [\[eq:Amu twisted\]](#eq:Amu twisted){reference-type="eqref" reference="eq:Amu twisted"} and [\[eq:Verma twisted\]](#eq:Verma twisted){reference-type="eqref" reference="eq:Verma twisted"}. ## Highest weight theory The proofs of the next results run as in [@AJS]. **Lemma 30** ([@AJS Lemma 4.10]). *For all $\mu\in\mathbb Z^\mathbb{I}$, we have that $Z_\mathbf{A}^{w_0}(\mu\langle w_0\rangle)\simeq Z_\mathbf{A}(\mu)^\tau$ and $Z_\mathbf{A}(\mu+\beta_{top}^\mathfrak{q})\simeq Z_\mathbf{A}^{w_0}(\mu)^\tau$. ◻* **Proposition 31** ([@AJS Propositions 4.11 and 4.12]). *Let $\lambda,\mu\in\mathbb Z^\mathbb{I}$. Then $$\begin{aligned} \mathop{\mathrm{ {\mathrm{Ext}} }}^n_{\mathcal{C}_\mathbf{A}^\mathfrak{q}}\left(Z_\mathbf{A}(\lambda),Z_\mathbf{A}^{w_0}(\mu)\right)=\begin{cases} \mathbf{A},&\mbox{if $n=0$ and $\mu=\lambda\langle w_0\rangle$}\\ 0,&\mbox{otherwise.} \end{cases}\end{aligned}$$ ◻* A by-product of the above is that the category $\mathcal{C}_\mathbf{k}^\mathfrak{q}$ satisfies all the axioms of the definition of highest weight category in [@BGS §3.2] except for axiom (2), since $\mathcal{C}_\mathbf{k}^\mathfrak{q}$ has infinitely many simple objects. **Theorem 32**. *$\mathcal{C}_\mathbf{k}^\mathfrak{q}$ is a highest weight category with infinitely many simple modules $L_\mathbf{k}(\lambda)$, $\lambda\in\mathbb Z^\mathbb{I}$. The standard and costandard modules are $Z_\mathbf{k}(\lambda)$ and $Z_\mathbf{k}(\lambda)^\tau$, $\lambda\in\mathbb Z^\mathbb{I}$. ◻* Another interesting consequence is the so-called BGG Reciprocity [@AJS Proposition 4.15].
Let $P_\mathbf{k}(\lambda)\in\mathcal{C}^\mathfrak{q}_\mathbf{k}$ be the projective cover of $L_\mathbf{k}(\lambda)$, $\lambda\in\mathbb Z^\mathbb{I}$. Recall that $P_\mathbf{k}(\lambda)$ admits a $Z$-filtration [@AJS Lemma 2.16]. Given $\mu\in\mathbb Z^\mathbb{I}$, we denote by $[P_\mathbf{k}(\lambda):Z_\mathbf{k}(\mu)]$ the number of subquotients isomorphic to $Z_\mathbf{k}(\mu)$ in a $Z$-filtration of $P_\mathbf{k}(\lambda)$. **Theorem 33** (BGG Reciprocity). *Let $\lambda,\mu\in\mathbb Z^\mathbb{I}$. Then $$\begin{aligned} [P_\mathbf{k}(\lambda):Z_\mathbf{k}(\mu)]=[Z_\mathbf{k}(\mu):L_\mathbf{k}(\lambda)].\end{aligned}$$* *Proof.* We know that $[Z_\mathbf{k}(\mu):L_\mathbf{k}(\lambda)]=\dim\operatorname{Hom}_{\mathcal{C}_\mathbf{k}^\mathfrak{q}}(P_\mathbf{k}(\lambda),Z_\mathbf{k}(\mu))$ [@AJS 4.15 $(2)$]. Using Proposition [Proposition 31](#prop:ext Z){reference-type="ref" reference="prop:ext Z"} we can deduce that $[P_\mathbf{k}(\lambda):Z_\mathbf{k}(\mu)]=\dim\operatorname{Hom}_{\mathcal{C}_\mathbf{k}^\mathfrak{q}}\left(P_\mathbf{k}(\lambda),Z_\mathbf{k}^{w_0}(\mu\langle w_0\rangle)\right)$, which is equal to $[Z_\mathbf{k}^{w_0}(\mu\langle w_0\rangle):L_\mathbf{k}(\lambda)]$. Since ${\mathrm{ch}} \, Z_\mathbf{k}(\mu)= {\mathrm{ch}} \, Z_\mathbf{k}^{w_0}(\mu\langle w_0\rangle)$, Lemma [Lemma 20](#lem:ch and composition){reference-type="ref" reference="lem:ch and composition"} implies the equality in the statement. ◻ # Morphisms between twisted Verma modules {#sec:morphisms} We keep the notation of the previous section. Here we construct generators of the Hom spaces between twisted Verma modules. Recall that they are $\mathbf{A}$-free of rank one by Proposition [Proposition 25](#prop:Hom Zx y Zw){reference-type="ref" reference="prop:Hom Zx y Zw"}. We will proceed in an inductive fashion, starting from morphisms between a sort of parabolic Verma modules. ## Parabolic Verma modules We fix $i\in\mathbb{I}$ and write $\sigma_i=1^\mathfrak{q}\sigma_i$. We also fix $\mu\in\mathbb Z^\mathbb{I}$.
To shorten notation, we write $\mu'=\mu\langle\sigma_i\rangle=\mu-(b^\mathfrak{q}(\alpha_i)-1)\alpha_i$. Recall $P_\mathfrak{q}(\alpha_i)$ from [\[eq:P alpha i\]](#eq:P alpha i){reference-type="eqref" reference="eq:P alpha i"}-[\[eq:P alpha i otra\]](#eq:P alpha i otra){reference-type="eqref" reference="eq:P alpha i otra"}. By construction $P_\mathfrak{q}(\alpha_i)$ has a triangular decomposition satisfying [\[eq:property 0\]](#eq:property 0){reference-type="eqref" reference="eq:property 0"}-[\[property D\]](#property D){reference-type="eqref" reference="property D"}. We denote by ${}_{\alpha_i}\mathcal{C}_\mathbf{A}^\mathfrak{q}$ the corresponding AJS category. The *parabolic Verma module* and the *parabolic $\sigma_i$-Verma module* are $$\begin{aligned} \Psi_\mathbf{A}(\mu)=P_\mathfrak{q}(\alpha_i)\otimes_{U_\mathfrak{q}^0U_\mathfrak{q}^+}\mathbf{A}^\mu \quad\mbox{and}\quad \Psi'_\mathbf{A}(\mu')=P_\mathfrak{q}(\alpha_i)\otimes_{U^0T_i(U^+_{\sigma_i^{*}\mathfrak{q}})}\mathbf{A}^{\mu'},\end{aligned}$$ respectively. These are objects of ${}_{\alpha_i}\mathcal{C}_\mathbf{A}^\mathfrak{q}$. Clearly, the elements $F_i^t\boldsymbol{|}\mu\boldsymbol{\rangle}$, $0\leq t< b^\mathfrak{q}(\alpha_i)$, form a basis of $\Psi_\mathbf{A}(\mu)$. By [\[eq: E Fn\]](#eq: E Fn){reference-type="eqref" reference="eq: E Fn"}, the action of $P_\mathfrak{q}(\alpha_i)$ is given by $$\begin{aligned} \label{eq: E Ft mu} E_i\cdot F_i^t\boldsymbol{|}\mu\boldsymbol{\rangle}&=\pi\tilde{\mu}[\alpha_i;t]\,F_i^{t-1}\boldsymbol{|}\mu\boldsymbol{\rangle},\end{aligned}$$ $E_j\cdot F_i^t\boldsymbol{|}\mu\boldsymbol{\rangle}=0$ for all $j\in\mathbb{I}\setminus\{i\}$ and $F_i\cdot F_i^t\boldsymbol{|}\mu\boldsymbol{\rangle}=F_i^{t+1}\boldsymbol{|}\mu\boldsymbol{\rangle}$. The weight of $F_i^t\boldsymbol{|}\mu\boldsymbol{\rangle}$ is $\mu-t\alpha_i$. In turn, the elements $E_i^t\boldsymbol{|}\mu'\boldsymbol{\rangle}$, $0\leq t<b^{\mathfrak{q}}(\alpha_i)$, form a basis of $\Psi'_\mathbf{A}(\mu')$.
By [\[eq: E Fn\]](#eq: E Fn){reference-type="eqref" reference="eq: E Fn"}, $$\begin{aligned} \label{eq: F Et mu} F_i\cdot E_i^t\boldsymbol{|}\mu'\boldsymbol{\rangle}&=\pi\widetilde{\mu'}[t;\alpha_i]\,E_i^{t-1}\boldsymbol{|}\mu'\boldsymbol{\rangle}.\end{aligned}$$ These are vectors of weights $\mu'+t\alpha_i=\mu-(b^\mathfrak{q}(\alpha_i)-1-t)\alpha_i$, respectively. This implies that $E_j\cdot E_i^{t}\boldsymbol{|}\mu'\boldsymbol{\rangle}=0$ for all $j\in\mathbb{I}\setminus\{i\}$. Of course, $E_i\cdot E_i^t\boldsymbol{|}\mu'\boldsymbol{\rangle}=E_i^{t+1}\boldsymbol{|}\mu'\boldsymbol{\rangle}$. Therefore there exists in ${}_{\alpha_i}\mathcal{C}_\mathbf{A}^\mathfrak{q}$ a morphism $$\begin{aligned} \label{eq:varphi i} f_i:\Psi_\mathbf{A}(\mu)\longrightarrow \Psi'_\mathbf{A}(\mu')\end{aligned}$$ such that $f_i(\boldsymbol{|}\mu\boldsymbol{\rangle})= E_i^{b^\mathfrak{q}(\alpha_i)-1}\boldsymbol{|}\mu'\boldsymbol{\rangle}$. Indeed, this is the morphism induced by the fact that $E_j\cdot E_i^{b^\mathfrak{q}(\alpha_i)-1}\boldsymbol{|}\mu'\boldsymbol{\rangle}=0$ for all $j\in\mathbb{I}$. Moreover, any morphism from $\Psi_\mathbf{A}(\mu)$ to $\Psi'_\mathbf{A}(\mu')$ is an $\mathbf{A}$-multiple of $f_i$ since the weight spaces are of rank one over $\mathbf{A}$. We can compute this morphism explicitly as follows: $$\begin{aligned} \label{eq: varphi i on Fi mu} f_i(F_i^n\boldsymbol{|}\mu\boldsymbol{\rangle})&=F_i^n\cdot E_i^{b^\mathfrak{q}(\alpha_i)-1}\boldsymbol{|}\mu'\boldsymbol{\rangle}=\prod_{t=1}^n\pi\widetilde{\mu'}([b^{\mathfrak{q}}(\alpha_i)-t;\alpha_i])\,E_i^{b^\mathfrak{q}(\alpha_i)-1-n}\boldsymbol{|}\mu'\boldsymbol{\rangle}\end{aligned}$$ for all $0\leq n<b^\mathfrak{q}(\alpha_i)$.
Analogously, there exists $$\begin{aligned} f_i':\Psi'_\mathbf{A}(\mu')\longrightarrow \Psi_\mathbf{A}(\mu)\end{aligned}$$ given by $$\begin{aligned} \label{eq: varphi prima i on Ei mu} f_i'(E_i^n\boldsymbol{|}\mu'\boldsymbol{\rangle})&=E_i^n\cdot F_i^{b^\mathfrak{q}(\alpha_i)-1}\boldsymbol{|}\mu\boldsymbol{\rangle}=\prod_{t=1}^n\pi\widetilde{\mu}([\alpha_i;b^{\mathfrak{q}}(\alpha_i)-t])\,F_i^{b^\mathfrak{q}(\alpha_i)-1-n}\boldsymbol{|}\mu\boldsymbol{\rangle}.\end{aligned}$$ **Lemma 34**. *If $\pi\widetilde{\mu}([\alpha_i;t])$ is a unit in $\mathbf{A}$ for all $1\leq t\leq b^\mathfrak{q}(\alpha_i)-1$, then $f_i$ and $f_i'$ are isomorphisms.* *Proof.* The claim for $f_i'$ follows directly from [\[eq: varphi prima i on Ei mu\]](#eq: varphi prima i on Ei mu){reference-type="eqref" reference="eq: varphi prima i on Ei mu"} by the hypothesis. By the formula [\[eq: varphi i on Fi mu\]](#eq: varphi i on Fi mu){reference-type="eqref" reference="eq: varphi i on Fi mu"}, $f_i$ is an isomorphism if $\pi\widetilde{\mu'}[b^{\mathfrak{q}}(\alpha_i)-t;\alpha_i]$ is a unit for all $1\leq t\leq b^\mathfrak{q}(\alpha_i)-1$. We have that $q_{ii}^{-t-1}\pi\widetilde{\mu'}(K_iL_i^{-1})=q_{ii}^{1-t}\pi\tilde\mu(K_iL_i^{-1})$ and hence [\[eq:\[n;beta\] = otra\]](#eq:[n;beta] = otra){reference-type="eqref" reference="eq:[n;beta] = otra"} implies that $$\begin{aligned} \label{eq:pi mu' en} \pi\widetilde{\mu'}[b^{\mathfrak{q}}(\alpha_i)-t;\alpha_i]=(b^\mathfrak{q}(\alpha_i)-t)_{q_{ii}^{-1}}\pi\widetilde{\mu'}(L_i)\left(1-q_{ii}^{1-t}\pi\tilde\mu(K_iL_i^{-1})\right).\end{aligned}$$ Since $(b^\mathfrak{q}(\alpha_i)-t)_{q_{ii}^{-1}}$ and $\pi\widetilde{\mu'}(L_i)$ are units, $f_i$ is an isomorphism by the hypothesis and [\[eq:\[beta;n\] = otra\]](#eq:[beta;n] = otra){reference-type="eqref" reference="eq:[beta;n] = otra"}. ◻ We next restrict ourselves to the case of the field $\mathbf{k}$. 
To simplify notation, we write $$\begin{aligned} t_i=t^\pi_{\alpha_i}(\mu),\end{aligned}$$ recall Definition [Definition 16](#def:t beta){reference-type="ref" reference="def:t beta"}. By Lemma [Lemma 34](#le: unit then iso varphi i){reference-type="ref" reference="le: unit then iso varphi i"}, $f_i$ and $f_i'$ are isomorphisms in ${}_{\alpha_i}\mathcal{C}_\mathbf{k}^\mathfrak{q}$ if $t_i=0$. **Lemma 35**. *Suppose $t_i\neq0$. Then $$\begin{aligned} \mathop{\mathrm{ {\mathrm{Ker}} }}f_i= {\mathrm{Im}} f_i'=\langle\, F_i^n\boldsymbol{|}\mu\boldsymbol{\rangle}\mid n\geq t_i\rangle\quad\mbox{and}\quad {\mathrm{Im}} f_i=\mathop{\mathrm{ {\mathrm{Ker}} }}f_i'=\langle E_i^n\boldsymbol{|}\mu'\boldsymbol{\rangle}\mid n\geq b^\mathfrak{q}(\alpha_i)-t_i\rangle.\end{aligned}$$* *Proof.* Using [\[eq: varphi i on Fi mu\]](#eq: varphi i on Fi mu){reference-type="eqref" reference="eq: varphi i on Fi mu"}, [\[eq:pi mu\' en\]](#eq:pi mu' en){reference-type="eqref" reference="eq:pi mu' en"} and [\[eq:\[n;beta\] = otra\]](#eq:[n;beta] = otra){reference-type="eqref" reference="eq:[n;beta] = otra"}, we see that $\mathop{\mathrm{ {\mathrm{Ker}} }}f_i=\langle\, F_i^n\boldsymbol{|}\mu\boldsymbol{\rangle}\mid n\geq t_i\rangle$ and ${\mathrm{Im}} f_i=\langle E_i^n\boldsymbol{|}\mu'\boldsymbol{\rangle}\mid n\geq b^\mathfrak{q}(\alpha_i)-t_i\rangle$. For $f_i'$, we first observe that $b^\mathfrak{q}(\alpha_i)-t_i$ is the minimum natural number $s_i$ such that $q_{ii}^{1+s_i}\pi\widetilde{\mu}(K_iL_i^{-1})-1=0$. Indeed, $q_{ii}^{1+s_i}\pi\widetilde{\mu}(K_iL_i^{-1})-1=0=q_{ii}^{1-t_i}\pi\tilde\mu(K_i L_i^{-1})-1$ implies $q_{ii}^{t_i+s_i}=1$, recall [\[eq:\[beta;n\] = otra\]](#eq:[beta;n] = otra){reference-type="eqref" reference="eq:[beta;n] = otra"}. Hence $b^\mathfrak{q}(\alpha_i)=\mathop{\mathrm{ {\mathrm{ord}} }}q_{ii}\leq t_i+s_i$. On the other hand, $\pi\tilde\mu([\alpha_i;b^\mathfrak{q}(\alpha_i)-s_i])=0$ and hence $t_i\leq b^\mathfrak{q}(\alpha_i)-s_i$.
Therefore $b^\mathfrak{q}(\alpha_i)=t_i+s_i$. Finally, the equalities for ${\mathrm{Im}} f_i'$ and $\mathop{\mathrm{ {\mathrm{Ker}} }}f_i'$ follow from [\[eq: varphi prima i on Ei mu\]](#eq: varphi prima i on Ei mu){reference-type="eqref" reference="eq: varphi prima i on Ei mu"} since $\pi\widetilde{\mu}([\alpha_i;b^{\mathfrak{q}}(\alpha_i)-t])=(b^{\mathfrak{q}}(\alpha_i)-t)_{q_{ii}}\pi\widetilde{\mu}(L_i)(q_{ii}^{1+t}\pi\widetilde{\mu}(K_i L_i^{-1})-1)$. ◻ As a consequence of the above lemma we see that $F_i^{t_i}\boldsymbol{|}\mu\boldsymbol{\rangle}$ is a highest-weight vector, *i.e.* $E_j\cdot F_i^{t_i}\boldsymbol{|}\mu\boldsymbol{\rangle}=0$ for all $j\in\mathbb{I}$, since the weights of $\mathop{\mathrm{ {\mathrm{Ker}} }}f_i$ are $\mu-n\alpha_i$ with $n\geq t_i$. Therefore there exists a morphism $$\begin{aligned} \label{eq:gi} g_i:\Psi_\mathbf{k}(\mu-t_i\alpha_i)\longrightarrow\Psi_\mathbf{k}(\mu) \end{aligned}$$ in ${}_{\alpha_i}\mathcal{C}_\mathbf{k}^\mathfrak{q}$ such that $\boldsymbol{|}\mu-t_i\alpha_i\boldsymbol{\rangle}\mapsto F_i^{t_i}\boldsymbol{|}\mu\boldsymbol{\rangle}$. Clearly, ${\mathrm{Im}} g_i=\mathop{\mathrm{ {\mathrm{Ker}} }}f_i$. **Lemma 36**. *There exists a long exact sequence $$\begin{aligned} \cdots\Psi_\mathbf{k}(\mu-(b^\mathfrak{q}(\alpha_i)+t_i)\alpha_i)\longrightarrow\Psi_\mathbf{k}(\mu-b^\mathfrak{q}(\alpha_i)\alpha_i)\longrightarrow\Psi_\mathbf{k}(\mu-t_i\alpha_i)\overset{g_i}{\longrightarrow}\Psi_\mathbf{k}(\mu)\longrightarrow\cdots \end{aligned}$$* *Proof.* The kernel of $g_i$ is generated by $F_i^{b^\mathfrak{q}(\alpha_i)-t_i}\boldsymbol{|}\mu-t_i\alpha_i\boldsymbol{\rangle}$ since $g_i(F_i^n\boldsymbol{|}\mu-t_i\alpha_i\boldsymbol{\rangle})=F_i^{n+t_i}\boldsymbol{|}\mu\boldsymbol{\rangle}$. Hence this generator is a highest-weight vector. 
Therefore we have a morphism $\Psi_\mathbf{k}(\mu-b^\mathfrak{q}(\alpha_i)\alpha_i)\longrightarrow\Psi_\mathbf{k}(\mu-t_i\alpha_i)$ and its kernel is generated by $F_i^{t_i}\boldsymbol{|}\mu-b^\mathfrak{q}(\alpha_i)\alpha_i\boldsymbol{\rangle}$. We can repeat the arguments with $\mu-b^\mathfrak{q}(\alpha_i)\alpha_i$ instead of $\mu$ in order to construct the desired long exact sequence. ◻ **Remark 37**. In a similar way, we can see that $T_i(E_j)\cdot E_i^{b^\mathfrak{q}(\alpha_i)-t_i}\boldsymbol{|}\mu'\boldsymbol{\rangle}=0$ for all $j\in\mathbb{I}$. Therefore there exists a morphism $g_i':\Psi_\mathbf{k}'(\mu'+(b^\mathfrak{q}(\alpha_i)-t_i)\alpha_i)\longrightarrow\Psi_\mathbf{k}'(\mu')$ in ${}_{\alpha_i}\mathcal{C}_\mathbf{k}^\mathfrak{q}$ such that $\boldsymbol{|}\mu'+(b^\mathfrak{q}(\alpha_i)-t_i)\alpha_i\boldsymbol{\rangle}\mapsto E_i^{b^\mathfrak{q}(\alpha_i)-t_i}\boldsymbol{|}\mu'\boldsymbol{\rangle}$ and ${\mathrm{Im}} g_i'=\mathop{\mathrm{ {\mathrm{Ker}} }}f_i'$. ## Morphisms between Verma modules twisted by a simple reflection {#subsec:twisting by a reflection} Here we lift the morphisms of the above subsection to morphisms between actual Verma modules. We continue with the same fixed elements $i\in\mathbb{I}$ and $\mu\in\mathbb Z^\mathbb{I}$. Recall $t_i=t^\pi_{\alpha_i}(\mu)$. We can construct the Verma modules by inducing from the parabolic ones.
Namely, we have the next isomorphisms in $\mathcal{C}_\mathbf{A}^\mathfrak{q}$: $$\begin{aligned} \label{eq: ZAmu iso} Z_\mathbf{A}(\mu)&\simeq U_\mathfrak{q}\otimes_{P_\mathfrak{q}(\alpha_i)}(P_\mathfrak{q}(\alpha_i)\otimes_{U_\mathfrak{q}^0U_\mathfrak{q}^+}\mathbf{A}^\mu)\simeq U_\mathfrak{q}\otimes_{P_\mathfrak{q}(\alpha_i)}\Psi_\mathbf{A}(\mu) \quad\mbox{and}\quad\\ \label{eq: ZAmu sigma i iso} Z^{\sigma_i}_\mathbf{A}(\mu)&\simeq U_\mathfrak{q}\otimes_{P_\mathfrak{q}(\alpha_i)}(P_\mathfrak{q}(\alpha_i)\otimes_{U_\mathfrak{q}^0T_i(U^+_{\sigma_i^*\mathfrak{q}})}\mathbf{A}^\mu)\simeq U_\mathfrak{q}\otimes_{P_\mathfrak{q}(\alpha_i)}\Psi'_\mathbf{A}(\mu).\end{aligned}$$ These isomorphisms allow us to lift $f_i$ and $f_i'$ to morphisms in $\mathcal{C}^\mathfrak{q}_\mathbf{A}$. **Lemma 38**. *The morphisms $$\begin{aligned} \label{eq: varphi} \varphi=1\otimes f_i:Z_\mathbf{A}(\mu)\longrightarrow Z^{\sigma_i}_\mathbf{A}(\mu\langle\sigma_i\rangle) \quad\mbox{and}\quad \varphi'=1\otimes f_i':Z^{\sigma_i}_\mathbf{A}(\mu\langle\sigma_i\rangle)\longrightarrow Z_\mathbf{A}(\mu)\end{aligned}$$ are generators of the respective Hom spaces as $\mathbf{A}$-modules. Also, $$\begin{aligned} \label{eq: ker varphi} \mathop{\mathrm{ {\mathrm{Ker}} }}\varphi\simeq U_\mathfrak{q}\otimes_{P_\mathfrak{q}(\alpha_i)}\mathop{\mathrm{ {\mathrm{Ker}} }}f_i \quad\mbox{and}\quad \mathop{\mathrm{ {\mathrm{Ker}} }}\varphi'\simeq U_\mathfrak{q}\otimes_{P_\mathfrak{q}(\alpha_i)}\mathop{\mathrm{ {\mathrm{Ker}} }}f_i'.\end{aligned}$$* *Proof.* On the space of weight $\mu$, $\varphi$ is an isomorphism by construction. Then $\varphi$ is a generator of $\operatorname{Hom}_{\mathcal{C}_\mathbf{A}^\mathfrak{q}}(Z_\mathbf{A}(\mu),Z^{\sigma_i}_\mathbf{A}(\mu\langle\sigma_i\rangle))$ by Proposition [Proposition 25](#prop:Hom Zx y Zw){reference-type="ref" reference="prop:Hom Zx y Zw"}. 
By the PBW basis [\[eq:PBW basis\]](#eq:PBW basis){reference-type="eqref" reference="eq:PBW basis"}, $U_\mathfrak{q}$ is free over $P_\mathfrak{q}(\alpha_i)$ and hence $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi\simeq U_\mathfrak{q}\otimes_{P_\mathfrak{q}(\alpha_i)}\mathop{\mathrm{ {\mathrm{Ker}} }}f_i$. The proof for $\varphi'$ is analogous. ◻ From the above considerations, Lemmas [Lemma 34](#le: unit then iso varphi i){reference-type="ref" reference="le: unit then iso varphi i"}-[Lemma 36](#le:long exact sequence for Psi){reference-type="ref" reference="le:long exact sequence for Psi"} extend immediately to $\varphi$ and $\varphi'$, cf. [@AJS Lemmas 5.8 and 5.9]. **Lemma 39**. *Assume that $\pi\tilde\mu([\alpha_i;t])$ is a unit in $\mathbf{A}$ for all $1\leq t\leq b^\mathfrak{q}(\alpha_i)-1$. Then $\varphi$ and $\varphi'$ are isomorphisms in $\mathcal{C}_\mathbf{A}^\mathfrak{q}$. ◻* **Lemma 40**. *If $t_i=0$, then $L_\mathbf{k}(\mu)\simeq L_\mathbf{k}^{\sigma_i}(\mu\langle\sigma_i\rangle)$ in $\mathcal{C}_\mathbf{k}^\mathfrak{q}$. ◻* **Lemma 41**. *Suppose $t_i\neq0$.
Then $$\begin{aligned} \mathop{\mathrm{ {\mathrm{Ker}} }}\varphi= {\mathrm{Im}} \varphi'=U_\mathfrak{q}\cdot F_i^{t_i}\boldsymbol{|}\mu\boldsymbol{\rangle}\quad\mbox{and}\quad {\mathrm{Im}} \varphi=\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi'=U_\mathfrak{q}\cdot E_i^{b^\mathfrak{q}(\alpha_i)-t_i}\boldsymbol{|}\mu\langle\sigma_i\rangle\boldsymbol{\rangle}.\end{aligned}$$ ◻* Moreover, if $t_i\neq0$, then the PBW basis [\[eq:PBW basis\]](#eq:PBW basis){reference-type="eqref" reference="eq:PBW basis"} and [\[eq: ker varphi\]](#eq: ker varphi){reference-type="eqref" reference="eq: ker varphi"} imply that $$\begin{aligned} \label{eq:ch Ker varphi} {\mathrm{ch}} \mathop{\mathrm{ {\mathrm{Ker}} }}\varphi=e^\mu\left(e^{-t_i\alpha_i}+\cdots+e^{(1-b^\mathfrak{q}(\alpha_i))\alpha_i}\right) \prod_{\gamma\in \Delta_+^\mathfrak{q}\setminus\{\alpha_i\}}\frac{1-e^{-b^\mathfrak{q}(\gamma)\gamma}}{1-e^{-\gamma}}.\end{aligned}$$ The morphism $\psi$ below is induced by $g_i$ of [\[eq:gi\]](#eq:gi){reference-type="eqref" reference="eq:gi"}. **Lemma 42**. *Suppose $t_i\neq0$. Then $F_i^{t_i}\boldsymbol{|}\mu\boldsymbol{\rangle}$ is a highest-weight vector of $Z_\mathbf{k}(\mu)$ and hence there exists a morphism $$\begin{aligned} \psi:Z_\mathbf{k}(\mu-t_i\alpha_i)\longrightarrow Z_\mathbf{k}(\mu) \end{aligned}$$ in $\mathcal{C}_\mathbf{k}^\mathfrak{q}$ such that $\boldsymbol{|}\mu-t_i\alpha_i\boldsymbol{\rangle}\mapsto F_i^{t_i}\boldsymbol{|}\mu\boldsymbol{\rangle}$. Moreover, ${\mathrm{Im}} \psi=\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi$ and there is a long exact sequence $$\begin{aligned} \cdots Z_\mathbf{k}(\mu-(b^\mathfrak{q}(\alpha_i)+t_i)\alpha_i)\longrightarrow Z_\mathbf{k}(\mu-b^\mathfrak{q}(\alpha_i)\alpha_i)\longrightarrow Z_\mathbf{k}(\mu-t_i\alpha_i)\overset{\psi}{\longrightarrow} Z_\mathbf{k}(\mu)\longrightarrow\cdots \end{aligned}$$ ◻* **Remark 43**.
There is a morphism $\psi':Z_\mathbf{k}^{\sigma_i}(\mu-(t_i-1)\alpha_i)\longrightarrow Z_\mathbf{k}^{\sigma_i}(\mu\langle\sigma_i\rangle)$ in $\mathcal{C}_\mathbf{k}^\mathfrak{q}$, induced by $g_i'$ of Remark [Remark 37](#obs:gi'){reference-type="ref" reference="obs:gi'"}, such that $\boldsymbol{|}\mu-(t_i-1)\alpha_i\boldsymbol{\rangle}\mapsto E_i^{b^\mathfrak{q}(\alpha_i)-t_i}\boldsymbol{|}\mu\langle\sigma_i\rangle\boldsymbol{\rangle}$ and ${\mathrm{Im}} \psi'=\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi'$. From the long exact sequence, we see that $$\begin{aligned} {\mathrm{ch}} (\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi)=\sum_{\ell\geq0} {\mathrm{ch}} Z_\mathbf{k}(\mu-(\ell b^\mathfrak{q}(\alpha_i)+t_i)\alpha_i)-\sum_{\ell\geq1} {\mathrm{ch}} Z_\mathbf{k}(\mu-\ell b^\mathfrak{q}(\alpha_i)\alpha_i).\end{aligned}$$ As another consequence we obtain the following isomorphisms between simple modules. **Proposition 44**. *Suppose $t_i\neq0$. In $\mathcal{C}_\mathbf{k}^\mathfrak{q}$, it holds that $$\begin{aligned} L_\mathbf{k}(\mu)\simeq L^{\sigma_i}_\mathbf{k}(\mu-(t_i-1)\alpha_i)\quad\mbox{and}\quad L_\mathbf{k}(\mu-t_i\alpha_i)\simeq L_\mathbf{k}^{\sigma_i}(\mu\langle\sigma_i\rangle).\end{aligned}$$* *Proof.* Since ${\mathrm{Im}} \varphi= {\mathrm{Im}} \psi'$ and ${\mathrm{Im}} \psi= {\mathrm{Im}} \varphi'$, the respective domains must have isomorphic heads, which yields the isomorphisms in the statement. ◻ **Remark 45**. Maps similar to $\varphi$ were constructed in [@HY Lemma 5.8]. Those maps are just linear morphisms between Verma modules in different categories. By considering the $\sigma_i$-Verma module, we obtain morphisms between objects in the same category. ## Twisting the categories {#subsec:twisting the categories} Let $w:\mathfrak{q}\longrightarrow w^{*}\mathfrak{q}$ be a morphism in $\mathcal{W}$ and $w=\sigma_{i_k}\cdots\sigma_{i_1}1^\mathfrak{q}$ a reduced expression.
Fix the Lusztig isomorphism $$\begin{aligned} T_w=T_{i_1}\cdots T_{i_k}:U_\mathfrak{q}\longrightarrow U_{w^{*}\mathfrak{q}}.\end{aligned}$$ Recall the $U_\mathfrak{q}^0$-algebra $\mathbf{A}[w]$ from [\[eq:pi\[w\]\]](#eq:pi[w]){reference-type="eqref" reference="eq:pi[w]"} with structural map $$\begin{aligned} \pi[w]=\pi\circ T_w^{-1}{}_{|U_\mathfrak{q}^0}:U_\mathfrak{q}^0\longrightarrow\mathbf{A}.\end{aligned}$$ This does not depend on the presentation of $w$ by [\[eq:Tw restricted to U0\]](#eq:Tw restricted to U0){reference-type="eqref" reference="eq:Tw restricted to U0"}. Also, $\mathbf{A}[w][x]=\mathbf{A}[xw]$ for $x:w^*\mathfrak{q}\longrightarrow x^*(w^*\mathfrak{q})=(wx)^*\mathfrak{q}$. We have an equivalence of categories ${}^{w}F^{\mathfrak{q}}_{\mathbf{A}}:\mathcal{C}_\mathbf{A}^\mathfrak{q}\longrightarrow\mathcal{C}^{w^{*}\mathfrak{q}}_{\mathbf{A}[w]}$ given by: if $M\in\mathcal{C}_\mathbf{A}^\mathfrak{q}$, then ${}^{w}F^{\mathfrak{q}}_{\mathbf{A}}(M)=M[w]$ is an object of $\mathcal{C}^{w^{*}\mathfrak{q}}_{\mathbf{A}[w]}$ with the action of $U_{w^*\mathfrak{q}}$ twisted by $T_w^{-1}$, that is $$\begin{aligned} u\cdot_{T_w^{-1}} m=T_w^{-1}(u)m \quad\forall m\in M[w],\,u\in U_{w^*\mathfrak{q}}.\end{aligned}$$ Indeed $M[w]$ satisfies [\[eq: cCA is U graded\]](#eq: cCA is U graded){reference-type="eqref" reference="eq: cCA is U graded"}, since $$\begin{aligned} (U_{w^*\mathfrak{q}})_\alpha\cdot_{T_w^{-1}} M[w]_\mu &\overset{\eqref{eq:Tw U alpha es U w alpha}}{=}(U_\mathfrak{q})_{w^{-1}\alpha}\, M_{w^{-1}\mu}\subset M_{w^{-1}\alpha+w^{-1}\mu}=M_{w^{-1}(\alpha+\mu)}=M[w]_{\alpha+\mu}.\end{aligned}$$ It also satisfies [\[eq:compatibilidad sm ma\]](#eq:compatibilidad sm ma){reference-type="eqref" reference="eq:compatibilidad sm ma"}, since for $m\in M[w]_\mu=M_{w^{-1}\mu}$ and $s\in U^0_{w^*\mathfrak{q}}$ we have that $$\begin{aligned} s\cdot_{T_w^{-1}}m=T_w^{-1}(s)m=m(\pi\circ\widetilde{w^{-1}(\mu)}\circ T_w^{-1})(s)\overset{\eqref{eq: mu conjugado por Tw}}{=} m(\pi\circ
T_w^{-1}\circ\widetilde\mu)(s)= m\pi[w](\widetilde\mu(s)).\end{aligned}$$ Clearly, ${}^{w}F^{\mathfrak{q}}_{\mathbf{A}}$ depends on the reduced expression of $w$. Its inverse ${}^{w}G^{\mathfrak{q}}_{\mathbf{A}}:\mathcal{C}^{w^{*}\mathfrak{q}}_{\mathbf{A}[w]}\longrightarrow\mathcal{C}_\mathbf{A}^\mathfrak{q}$ is given by ${}^{w}G^{\mathfrak{q}}_{\mathbf{A}}(M)=M[w^{-1}]$ with the action of $U_\mathfrak{q}$ twisted by $T_{w}$. When no confusion can arise, we will write simply $M[w]$ and $M[w^{-1}]$ instead of ${}^{w}F^{\mathfrak{q}}_{\mathbf{A}}(M)$ and ${}^{w}G^{\mathfrak{q}}_{\mathbf{A}}(M)$, respectively. The Verma modules of both categories are related as follows. **Lemma 46**. *Let $x\in{}^\mathfrak{q}\mathcal{W}$, $w\in\mathcal{W}^\mathfrak{q}$ and $\mu\in\mathbb Z^\mathbb{I}$. Then $$\begin{aligned} \label{eq:verma twisted} Z_{\mathbf{A}}^{x}(\mu)[w]\simeq Z_{\mathbf{A}[w]}^{wx}(w\mu)\end{aligned}$$ in the category $\mathcal{C}_{\mathbf{A}[w]}^{w^*\mathfrak{q}}$. Moreover, for the field $\mathbf{k}$, it holds that $$\begin{aligned} \label{eq:L [w] iso} L_\mathbf{k}^x(\mu)[w]\simeq L_{\mathbf{k}[w]}^{wx}(w\mu). \end{aligned}$$* *Proof.* We can repeat the arguments in [@AJS 4.4$(2)$], but we must be careful about the categories in which we consider the Verma modules. We first observe that $Z_\mathbf{A}^x(\mu)$ has no weights of the form $\mu+x\beta$ with $\beta\in \Delta_+^{x^{-*}\mathfrak{q}}$ because ${\mathrm{ch}} \,Z_\mathbf{A}^x(\mu)=e^{\mu} {\mathrm{ch}} \,T_x(U^-_{x^{-*}\mathfrak{q}})$. Therefore $Z_\mathbf{A}^x(\mu)[w]$ has no weights of the form $w\mu+wx\beta$ with $\beta\in\Delta_+^{x^{-*}\mathfrak{q}}$. Also, the space of weight $w\mu$ of $Z_\mathbf{A}^x(\mu)[w]$ is $\mathbf{A}$-spanned by $v_1=\boldsymbol{|}\mu\boldsymbol{\rangle}_\mathbf{A}$. In particular, we have that $T_{wx}(U^+_{x^{-*}\mathfrak{q}})$ annihilates $v_1$.
On the other hand, we are thinking of $Z_{\mathbf{A}[w]}^{wx}(w\mu)$ as an object in $\mathcal{C}_{\mathbf{A}[w]}^{w^*\mathfrak{q}}$. Thus, it is constructed using the triangular decomposition of $U_{w^{*}\mathfrak{q}}$ induced by $T_{wx}:U_{x^{-*}\mathfrak{q}}\rightarrow U_{w^*\mathfrak{q}}$. This means that its generator $v_2=\boldsymbol{|}w\mu\boldsymbol{\rangle}_{\mathbf{A}[w]}$ is annihilated by $T_{wx}(U^+_{x^{-*}\mathfrak{q}})$. Therefore there exists a morphism $f:Z_{\mathbf{A}[w]}^{wx}(w\mu)\longrightarrow Z_{\mathbf{A}}^{x}(\mu)[w]$ given by $v_2\mapsto v_1$. In a similar fashion, we get a morphism $Z_\mathbf{A}^x(\mu)\longrightarrow Z_{\mathbf{A}[w]}^{wx}(w\mu)[w^{-1}]$ in $\mathcal{C}_\mathbf{A}^\mathfrak{q}$ such that $v_1\mapsto v_2$. Then, we can transform it into a morphism $g:Z_\mathbf{A}^x(\mu)[w]\longrightarrow Z_{\mathbf{A}[w]}^{wx}(w\mu)$ in $\mathcal{C}_{\mathbf{A}[w]}^{w^*\mathfrak{q}}$ such that $v_1\mapsto v_2$. Clearly, $g\circ f= {\mathrm{id}}$ and $f[w^{-1}]\circ g[w^{-1}]= {\mathrm{id}}$, since $v_2$ and $v_1$ generate the respective domains. This proves the isomorphisms between the Verma modules. Therefore, in the case of the field $\mathbf{k}$, we deduce that their heads are also isomorphic. ◻ ## Morphisms between twisted Verma modules {#morphisms-between-twisted-verma-modules} Here we extend the results of §[7.2](#subsec:twisting by a reflection){reference-type="ref" reference="subsec:twisting by a reflection"} to any morphism in the Weyl groupoid using the functors of §[7.3](#subsec:twisting the categories){reference-type="ref" reference="subsec:twisting the categories"} and following the ideas of [@AJS §5.10]. We fix $w\in{}^\mathfrak{q}\mathcal{W}$ and $\beta\in\Delta^\mathfrak{q}$ such that $\beta=w\alpha_i$ for fixed $\alpha_i\in\Pi^{w^{-*}\mathfrak{q}}$ and $i\in\mathbb{I}$. We also fix $\mu\in\mathbb Z^\mathbb{I}$. We set $t_\beta=t_\beta^\pi(\mu)$, recall Definition [Definition 16](#def:t beta){reference-type="ref" reference="def:t beta"}.
We will use the functor ${}^wF_{\mathbf{A}[w^{-1}]}^{w^{-*}\mathfrak{q}}:\mathcal{C}_{\mathbf{A}[w^{-1}]}^{w^{-*}\mathfrak{q}}\longrightarrow\mathcal{C}_\mathbf{A}^\mathfrak{q}$. To abbreviate, we will denote $M[w]$ and $f[w]$ the images of objects and morphisms under this functor. Using ${}^wF_{\mathbf{A}[w^{-1}]}^{w^{-*}\mathfrak{q}}$, Lemma [Lemma 46](#le: Zx mu w isomorphic to){reference-type="ref" reference="le: Zx mu w isomorphic to"} implies that $$\begin{aligned} \label{eq:w Zx mu w isomorphic to} \begin{split} Z_{\mathbf{A}[w^{-1}]}(w^{-1}\mu)[w]&\simeq Z_{\mathbf{A}}^{w}(\mu) \quad\mbox{and}\\ &Z_{\mathbf{A}[w^{-1}]}^{\sigma_i}((w^{-1}\mu)')[w]\simeq Z_{\mathbf{A}}^{w\sigma_i}(\mu-(b^\mathfrak{q}(\beta)-1)\beta) \end{split}\end{aligned}$$ for $(w^{-1}\mu)'=(w^{-1}\mu)\langle\sigma_i\rangle=w^{-1}\mu-(b^{w^{-*}\mathfrak{q}}(\alpha_i)-1)\alpha_i$; for the first isomorphism take $x=1^{w^{-*}\mathfrak{q}}$ and for the second one $x=\sigma_i^{(\sigma_iw)^{-*}\mathfrak{q}}$. On the other hand, let $\varphi$ and $\varphi'$ be the morphisms in the category $\mathcal{C}_{\mathbf{A}[w^{-1}]}^{w^{-*}\mathfrak{q}}$ between $Z_{\mathbf{A}[w^{-1}]}(w^{-1}\mu)$ and $Z_{\mathbf{A}[w^{-1}]}^{\sigma_i}((w^{-1}\mu)')$ given in [\[eq: varphi\]](#eq: varphi){reference-type="eqref" reference="eq: varphi"}. We apply the functor ${}^wF_{\mathbf{A}[w^{-1}]}^{w^{-*}\mathfrak{q}}$ to them and obtain the morphisms $$\begin{aligned} \label{eq: varphi Zw mu Zws mu prima} \begin{split} \varphi[w]&:Z_\mathbf{A}^w(\mu)\longrightarrow Z^{w\sigma_i}_\mathbf{A}(\mu-(b^\mathfrak{q}(\beta)-1)\beta) \quad\mbox{and}\\ \noalign{\smallskip} &\varphi'[w]:Z^{w\sigma_i}_\mathbf{A}(\mu-(b^\mathfrak{q}(\beta)-1)\beta)\longrightarrow Z_\mathbf{A}^w(\mu)\quad\mbox{in $\mathcal{C}_\mathbf{A}^\mathfrak{q}$}. \end{split}\end{aligned}$$ We are ready to extend the results from §[7.2](#subsec:twisting by a reflection){reference-type="ref" reference="subsec:twisting by a reflection"}. **Lemma 47**.
*If $\pi\tilde\mu([\beta;t])$ is a unit for all $1\leq t\leq b^\mathfrak{q}(\beta)-1$, then $\varphi[w]$ and $\varphi'[w]$ are isomorphisms in $\mathcal{C}_\mathbf{A}^\mathfrak{q}$.* *Proof.* By Lemma [Lemma 39](#le:varphi iso){reference-type="ref" reference="le:varphi iso"}, $\varphi$ and $\varphi'$ are isomorphisms if $\pi[w^{-1}]\widetilde{w^{-1}\mu}([\alpha_i;t])$ is a unit for all $1\leq t\leq b^{w^{-*}\mathfrak{q}}(\alpha_i)-1=b^{\mathfrak{q}}(\beta)-1$. As $\pi[w^{-1}]\widetilde{w^{-1}\mu}([\alpha_i;t])=\pi\tilde\mu([\beta;t])$ by [\[eq:pi w mu beta\]](#eq:pi w mu beta){reference-type="eqref" reference="eq:pi w mu beta"}, the lemma follows from the hypothesis. ◻ In the case of the field $\mathbf{k}$, we immediately deduce an isomorphism between the heads of the Verma modules. **Lemma 48**. *If $t_\beta=0$, then $L_\mathbf{k}^w(\mu)\simeq L_\mathbf{k}^{w\sigma_i}(\mu-(b^\mathfrak{q}(\beta)-1)\beta)$ in $\mathcal{C}_\mathbf{k}^\mathfrak{q}$. 0◻* Suppose now $t_\beta\neq0$. Since $t_{\alpha_i}^{\pi[w^{-1}]}(w^{-1}\mu)=t_\beta$ by [\[eq: t w alpha i\]](#eq: t w alpha i){reference-type="eqref" reference="eq: t w alpha i"}, we can consider the morphisms $\psi$ and $\psi'$ between $Z_{\mathbf{k}[w^{-1}]}(w^{-1}\mu)$ and $Z_{\mathbf{k}[w^{-1}]}^{\sigma_i}((w^{-1}\mu)')$ in the category $\mathcal{C}_{\mathbf{k}[w^{-1}]}^{w^{-*}\mathfrak{q}}$ given by Lemma [Lemma 42](#le:psi){reference-type="ref" reference="le:psi"}. By applying the functor ${}^{w}F_{\mathbf{k}[w^{-1}]}^{w^{-*}\mathfrak{q}}$ to this lemma, we obtain the following. **Lemma 49**. *Suppose $t_\beta\neq0$. 
Then the morphisms $$\begin{aligned} \psi[w]:Z_\mathbf{k}^w(\mu-t_\beta\beta)&\longrightarrow Z_\mathbf{k}^w(\mu) \quad\mbox{and}\\ &\psi'[w]:Z_\mathbf{k}^{w\sigma_i}(\mu-(t_\beta-1)\beta)\longrightarrow Z_\mathbf{k}^{w\sigma_i}(\mu-(b^\mathfrak{q}(\beta)-1)\beta)\end{aligned}$$ in $\mathcal{C}_\mathbf{k}^\mathfrak{q}$ satisfy $$\begin{aligned} \mathop{\mathrm{ {\mathrm{Ker}} }}\varphi[w]= {\mathrm{Im}} \varphi'[w]= {\mathrm{Im}} \psi[w]\quad\mbox{and}\quad {\mathrm{Im}} \varphi[w]=\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi'[w]= {\mathrm{Im}} \psi'[w].\end{aligned}$$ We also have a long exact sequence $$\begin{aligned} \cdots Z^w_\mathbf{k}(\mu-(b^\mathfrak{q}(\beta)+t_\beta)\beta)\longrightarrow Z^w_\mathbf{k}(\mu-b^\mathfrak{q}(\beta)\beta)\longrightarrow Z^w_\mathbf{k}(\mu-t_\beta\beta)\longrightarrow Z^w_\mathbf{k}(\mu)\longrightarrow\cdots \end{aligned}$$ ◻* Similarly to Proposition [Proposition 44](#prop:L iso L sigma){reference-type="ref" reference="prop:L iso L sigma"}, we deduce an isomorphism between the heads of the $w$-Verma modules above. **Proposition 50**. *Suppose $t_\beta\neq0$. In $\mathcal{C}_\mathbf{k}^\mathfrak{q}$, it holds that $$\begin{aligned} L_{\mathbf{k}}^w(\mu)\simeq L_{\mathbf{k}}^{w\sigma_i}(\mu-(t_\beta-1)\beta) \quad\mbox{and}\quad L_{\mathbf{k}}^{w}(\mu-t_\beta\beta)\simeq L_{\mathbf{k}}^{w\sigma_i}(\mu-(b^\mathfrak{q}(\beta)-1)\beta).\end{aligned}$$ ◻* **Remark 51**. Applying the above isomorphisms iteratively, we can calculate $\mu_w\in\mathbb Z^\mathbb{I}$ such that $L_{\mathbf{k}}^w(\mu)=L_{\mathbf{k}}(\mu_w)$, recall [\[eq:Lw L mu w\]](#eq:Lw L mu w){reference-type="eqref" reference="eq:Lw L mu w"}. Let us make some comments about the kernel of $\varphi[w]$ in the case of the field $\mathbf{k}$.
First, using [\[eq:ch Ker varphi\]](#eq:ch Ker varphi){reference-type="eqref" reference="eq:ch Ker varphi"} and $\mathop{\mathrm{ {\mathrm{Ker}} }}(\varphi[w])=\mathop{\mathrm{ {\mathrm{Ker}} }}(\varphi)[w]$, we have that $$\begin{aligned} \notag {\mathrm{ch}} \mathop{\mathrm{ {\mathrm{Ker}} }}(\varphi[w])=e^\mu\left(e^{-t_\beta\beta}+\cdots+\right.&\left.e^{(1-b^\mathfrak{q}(\beta))\beta}\right)\times\\ \label{eq:ch Ker varphi w} \prod_{\gamma\in\Delta^\mathfrak{q}_+\setminus\{\beta\}:w^{-1}\gamma\in \Delta^{w^{-*}\mathfrak{q}}_+}&\left(1+e^{-\gamma}+\cdots+e^{(1-b^\mathfrak{q}(\gamma))\gamma}\right)\times\\ \notag &\prod_{\gamma\in\Delta^\mathfrak{q}_+\setminus\{\beta\}:w^{-1}\gamma \in \Delta^{w^{-*}\mathfrak{q}}_-}\left(1+e^{\gamma}+\cdots+e^{(b^\mathfrak{q}(\gamma)-1)\gamma}\right).\end{aligned}$$ Then, if $\beta=w\alpha_i\in\Delta^\mathfrak{q}_+$, it follows that $$\begin{aligned} \label{eq:mu is not a weight of Ker} \mu-(w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^\mathfrak{q})\mbox{ is not a weight of }\mathop{\mathrm{ {\mathrm{Ker}} }}(\varphi[w])\end{aligned}$$ since $\mu-(w(\varrho^{w^{-*}\mathfrak{q}})-\varrho^\mathfrak{q})=\mu+\sum_{\gamma\in \Delta_+^\mathfrak{q}\,:\,w^{-1}\gamma\in \Delta_-^{w^{-*}\mathfrak{q}}}(b^\mathfrak{q}(\gamma)-1)\gamma$ by [\[eq:w varrho - varrho\]](#eq:w varrho - varrho){reference-type="eqref" reference="eq:w varrho - varrho"}. Lastly, we claim that $$\begin{aligned} T_w(F_i)^{t_\beta}\boldsymbol{|}\mu\boldsymbol{\rangle}_\mathbf{k}\mbox{ is a $U_\mathfrak{q}$-generator of }\mathop{\mathrm{ {\mathrm{Ker}} }}(\varphi[w])\end{aligned}$$ where $T_w:U_{w^{-*}\mathfrak{q}}\longrightarrow U_\mathfrak{q}$ is a Lusztig isomorphism associated to $w$. In fact, $\psi(\boldsymbol{|}w^{-1}\mu-t_\beta\alpha_i\boldsymbol{\rangle}_{\mathbf{k}[w^{-1}]})=F_i^{t_\beta}\boldsymbol{|}w^{-1}\mu\boldsymbol{\rangle}_{\mathbf{k}[w^{-1}]}$ by Lemma [Lemma 42](#le:psi){reference-type="ref" reference="le:psi"}. 
From the proof of Lemma [Lemma 46](#le: Zx mu w isomorphic to){reference-type="ref" reference="le: Zx mu w isomorphic to"}, we see that $\psi[w](\boldsymbol{|}\mu-t_\beta\beta\boldsymbol{\rangle}_\mathbf{k})=g(F_i^{t_\beta}\boldsymbol{|}w^{-1}\mu\boldsymbol{\rangle}_{\mathbf{k}[w^{-1}]})=g(T_w(F_i^{t_\beta})\cdot_{T_w^{-1}}\boldsymbol{|}w^{-1}\mu\boldsymbol{\rangle}_{\mathbf{k}[w^{-1}]})=T_w(F_i^{t_\beta})\boldsymbol{|}\mu\boldsymbol{\rangle}_{\mathbf{k}}$. ### A generator {#sss:a generator} We end this subsection by constructing a generator of the Hom spaces between twisted Verma modules as in Proposition [Proposition 25](#prop:Hom Zx y Zw){reference-type="ref" reference="prop:Hom Zx y Zw"}. We follow the strategy in [@AJS §5.13]. We fix $x:x^{-*}\mathfrak{q}\rightarrow\mathfrak{q}$ and a reduced expression $x^{-1}w=1^{x^{-*}\mathfrak{q}}\sigma_{i_1}\cdots\sigma_{i_r}$. We set $$\begin{aligned} x_s=1^\mathfrak{q}x\sigma_{i_1}\cdots\sigma_{i_{s-1}} \end{aligned}$$ for $1\leq s\leq r+1$. Then $$\begin{aligned} \mu\langle x_{s+1}\rangle=\mu\langle x_{s}\rangle-(b^{\mathfrak{q}}(x_s\alpha_{i_s})-1)x_s\alpha_{i_s}\end{aligned}$$ for all $1\leq s\leq r$ by [\[eq:mu w s\]](#eq:mu w s){reference-type="eqref" reference="eq:mu w s"}. Notice $x_1=x$ and $x_{r+1}=w$. We set $\varphi_s=\varphi[x_s]$ the morphism in $\mathcal{C}_\mathbf{A}^\mathfrak{q}$ given by [\[eq: varphi Zw mu Zws mu prima\]](#eq: varphi Zw mu Zws mu prima){reference-type="eqref" reference="eq: varphi Zw mu Zws mu prima"} for $\mu\langle x_s\rangle$. Explicitly, $$\begin{aligned} \varphi_s:Z_\mathbf{A}^{x_s}(\mu\langle x_s\rangle)\longrightarrow Z_\mathbf{A}^{x_{s+1}}(\mu\langle x_{s+1}\rangle)\end{aligned}$$ for all $1\leq s\leq r$. We get a morphism $$\begin{aligned} \varphi_{r}\cdots\varphi_1:Z_\mathbf{A}^{x}(\mu\langle x\rangle)\longrightarrow Z_\mathbf{A}^{w}(\mu\langle w\rangle).\end{aligned}$$ **Proposition 52**. 
*Suppose that, for each $1\leq s\leq r$, either $\pi\widetilde{\mu\langle x_s\rangle}([x_s\alpha_{i_s};t])$ is a unit for all $1\leq t\leq b^{\mathfrak{q}}(x_s\alpha_{i_s})-1$ or $x_s\alpha_{i_s}\in\Delta_+^\mathfrak{q}$. Then $\varphi_{r}\cdots\varphi_1$ induces an $\mathbf{A}$-isomorphism between the spaces of weight $\mu$ and therefore $\varphi_{r}\cdots\varphi_1$ is an $\mathbf{A}$-generator of the corresponding $\operatorname{Hom}$ space.* *Proof.* The spaces of weight $\mu$ are free of rank $1$ over $\mathbf{A}$ by [\[eq: ch Zmu = ch Zw muw\]](#eq: ch Zmu = ch Zw muw){reference-type="eqref" reference="eq: ch Zmu = ch Zw muw"}, and hence the first assertion implies the second one. Also, it is enough to prove the first assertion for each $s$, and we may assume that $\mathbf{A}$ is a field. Under the first assumption, $\varphi_s$ is an isomorphism by Lemma [Lemma 47](#le:t beta unit){reference-type="ref" reference="le:t beta unit"}. Under the second one, $\mu$ is not a weight of $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$ by [\[eq:mu is not a weight of Ker\]](#eq:mu is not a weight of Ker){reference-type="eqref" reference="eq:mu is not a weight of Ker"} as $\mu=\mu\langle x_s\rangle-(x_s(\varrho^{x_s^{-*}\mathfrak{q}})-\varrho^\mathfrak{q})$ and hence $\varphi_s$ is an isomorphism on the spaces of weight $\mu$. ◻ We will use the following corollary with $x=1^\mathfrak{q}$ and $w=w_0\in{}^\mathfrak{q}\mathcal{W}$ the longest element to prove the linkage principle in the next section. **Corollary 53**. *If $\ell(w)=\ell(x)+\ell(x^{-1}w)$, then $\varphi_{r}\cdots\varphi_1$ is an $\mathbf{A}$-generator of the corresponding Hom space.* *Proof.* We have that $x_{s}\alpha_{i_s}\in\Delta_+^\mathfrak{q}$ by [@HY08 Lemma 8 $(i)$] for all $1\leq s\leq r$. ◻ # The linkage principle {#sec:linkage} We are ready to prove our main results. We follow here the ideas in [@AJS §6]. We keep the notation of the above sections and restrict ourselves to the case of the field $\mathbf{k}$.
Recall Definition [Definition 17](#def:n beta){reference-type="ref" reference="def:n beta"}. **Definition 54**. Let $\beta\in\Delta_+^\mathfrak{q}$ and $\mu\in\mathbb Z^\mathbb{I}$. We set $$\begin{aligned} \beta\downarrow\mu=\mu-n_\beta^\pi(\mu)\,\beta.\end{aligned}$$ We say that $\lambda\in\mathbb Z^\mathbb{I}$ is strongly linked to $\mu$ if and only if there exist $\beta_1,\dots,\beta_r\in\Delta_+^\mathfrak{q}$ such that $\lambda=\beta_r\downarrow\cdots\beta_1\downarrow\mu$. We denote $$\begin{aligned} {}^{\downarrow}\mu=\left\{\lambda\in\mathbb Z^\mathbb{I}\mid\lambda\,\mbox{is strongly linked to}\,\mu\,\mbox{and}\,\mu-\beta_{top}^\mathfrak{q}\leq\lambda\leq\mu \right\}. \end{aligned}$$ Lastly, being linked is the smallest equivalence relation in $\mathbb Z^\mathbb{I}$ such that $\lambda$ and $\mu$ are linked if $\lambda$ is strongly linked to $\mu$ or *vice versa*. We denote by $[\mu]_{\operatorname{link}}$ the equivalence class of $\mu\in\mathbb Z^\mathbb{I}$. **Theorem 55**. *If $L_\mathbf{k}(\lambda)$ is a composition factor of $Z_\mathbf{k}(\mu)$, then $\lambda=\mu$ or $L_\mathbf{k}(\lambda)$ is a composition factor of $Z_\mathbf{k}(\beta\downarrow\mu)$ for some $\beta\in\Delta_+^\mathfrak{q}$. Moreover, $\lambda\in{}^{\downarrow}\mu$.* *Proof.* We use the notation of Figure [\[fig:linked\]](#fig:linked){reference-type="ref" reference="fig:linked"}. Then $w_0=w_{n+1}$ and $\mu\langle w_{n+1}\rangle=\mu-\beta^\mathfrak{q}_{top}$. Therefore $\Phi=\varphi_n\cdots\varphi_1:Z_\mathbf{k}(\mu)\longrightarrow Z^{w_{0}}_\mathbf{k}(\mu-\beta^\mathfrak{q}_{top})$ is non-zero by Corollary [Corollary 53](#cor:a generator){reference-type="ref" reference="cor:a generator"} and hence $L_\mathbf{k}(\mu)\simeq {\mathrm{Im}} \Phi\simeq\operatorname{soc}Z_\mathbf{k}^{w_0}(\mu-\beta^\mathfrak{q}_{top})$ by Lemma [Lemma 28](#le:imagen de Z w0 mu w0 en Z mu){reference-type="ref" reference="le:imagen de Z w0 mu w0 en Z mu"}. Now, suppose $\lambda\neq\mu$.
Then $L_\mathbf{k}(\lambda)$ is a composition factor of $\mathop{\mathrm{ {\mathrm{Ker}} }}\Phi$ and hence of $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$ for some $s$. By Lemma [Lemma 49](#le:psi w){reference-type="ref" reference="le:psi w"}, $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$ is a homomorphic image of $Z^{w_s}_\mathbf{k}(\mu\langle w_s\rangle-t_s\beta_s)$. Therefore $L_\mathbf{k}(\lambda)$ is also a composition factor of $Z^{w_s}_\mathbf{k}(\mu\langle w_s\rangle-t_s\beta_s)$. By Lemma [Lemma 18](#le:n beta = t beta){reference-type="ref" reference="le:n beta = t beta"}, we have that $t_s=n_{\beta_s}^\pi(\mu)$ and hence $$\begin{aligned} \mu\langle w_s\rangle-t_s\beta_s=(\mu-t_s\beta_s)\langle w_s\rangle=(\beta_s\downarrow\mu)\langle w_s\rangle.\end{aligned}$$ Thus, ${\mathrm{ch}} Z^{w_s}_\mathbf{k}(\mu\langle w_s\rangle-t_s\beta_s)= {\mathrm{ch}} Z_\mathbf{k}(\beta_s\downarrow\mu)$ by [\[eq: ch Zmu = ch Zw muw\]](#eq: ch Zmu = ch Zw muw){reference-type="eqref" reference="eq: ch Zmu = ch Zw muw"}, and hence $L_\mathbf{k}(\lambda)$ is also a composition factor of $Z_\mathbf{k}(\beta_s\downarrow\mu)$. This shows the first part of the statement. For the second one, we repeat the reasoning with $\beta_s\downarrow\mu$ instead of $\mu$, and so on. This procedure ends after a finite number of steps since $\beta\downarrow\mu\leq\mu$ and $\mu-\beta_{top}^\mathfrak{q}\leq\lambda<\mu$. Hence, there exist $\beta_{s_1},\dots,\beta_{s_r}$ such that $\lambda=\beta_{s_r}\downarrow\cdots\beta_{s_1}\downarrow\mu$ as desired. ◻ Apart from the following particular case, it is not true in general that $L_\mathbf{k}(\lambda)$ is a composition factor of $Z_\mathbf{k}(\mu)$ whenever $\lambda\in{}^\downarrow\mu$; see Example [Example 61](#ex:the example a simple){reference-type="ref" reference="ex:the example a simple"}. **Lemma 56**.
*If $\lambda=\beta\downarrow\mu$, then $L_\mathbf{k}(\lambda)$ is a composition factor of $Z_\mathbf{k}(\mu)$.* *Proof.* We can assume we are in the situation of Figure [\[fig:linked\]](#fig:linked){reference-type="ref" reference="fig:linked"}. That is, $\beta=\beta_s=w_s\alpha_{i_s}$ and $\lambda=\beta_s\downarrow\mu=\mu-t_s\beta_s$. Thus, by Lemma [Lemma 49](#le:psi w){reference-type="ref" reference="le:psi w"}, we have a projection $\psi$ from $Z_\mathbf{k}^{w_s}(\mu\langle w_s\rangle-t_s\beta_s)$ onto $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$. Notice that $\lambda$ is a weight of $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$ by [\[eq:ch Ker varphi w\]](#eq:ch Ker varphi w){reference-type="eqref" reference="eq:ch Ker varphi w"}. On the other hand, $\tilde\varphi_{s-1}\cdots\tilde\varphi_1$ induces a $\mathbf{k}$-isomorphism between the spaces of weight $\lambda$, which are one-dimensional, by Proposition [Proposition 52](#prop:hom generator){reference-type="ref" reference="prop:hom generator"}. Hence $\psi\tilde\varphi_{s-1}\cdots\tilde\varphi_1$ is not zero on the space of weight $\lambda$ because ${\mathrm{Im}} \psi=\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$. This implies that $L_\mathbf{k}(\lambda)$ is a composition factor of $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$ and therefore also of $Z^{w_s}_\mathbf{k}(\mu\langle w_s\rangle)$. As ${\mathrm{ch}} Z^{w_s}_\mathbf{k}(\mu\langle w_s\rangle)= {\mathrm{ch}} Z_\mathbf{k}(\mu)$, the lemma follows. ◻ In general, we can assert that strongly linked weights belong to the same block. **Corollary 57**. *Let $\lambda,\mu\in\mathbb Z^\mathbb{I}$. Then $\lambda$ and $\mu$ are linked if and only if $L_\mathbf{k}(\lambda)$ and $L_\mathbf{k}(\mu)$ belong to the same block.* *Proof.* We first prove that being linked implies belonging to the same block. To this end, it is enough to consider the case $\lambda=\beta\downarrow\mu$ for some $\beta\in\Delta_+^\mathfrak{q}$, which follows from Lemma [Lemma 56](#le:Linkage){reference-type="ref" reference="le:Linkage"}.
For the converse, as above, it is enough to consider the existence of a non-trivial extension of the form $0\longrightarrow L_\mathbf{k}(\lambda)\longrightarrow M\longrightarrow L_\mathbf{k}(\mu)\longrightarrow0$. Therefore $M$ is a quotient of $Z_\mathbf{k}(\mu)$ and hence $\lambda\in{}^{\downarrow}\mu$ by Theorem [Theorem 55](#teo:strongly linked){reference-type="ref" reference="teo:strongly linked"}. ◻ ## Typical weights For $\beta\in\Delta_+^\mathfrak{q}$ and $\mu\in\mathbb Z^\mathbb{I}$, we introduce $$\begin{aligned} \label{eq:P} \begin{split} \mathfrak{P}_\mathbf{k}^\mathfrak{q}(\beta,\mu)&=\prod_{1\leq t<b^\mathfrak{q}(\beta)} \left(q_{\beta}^{t}-\rho^\mathfrak{q}(\beta)\,\pi\widetilde{\mu}(K_{\beta}L_{\beta}^{-1})\right) \quad\mbox{and}\\ \noalign{\smallskip} &\mathfrak{P}_\mathbf{k}^\mathfrak{q}(\mu)=\prod_{\beta\in\Delta_+^\mathfrak{q}}\mathfrak{P}_\mathbf{k}^\mathfrak{q}(\beta,\mu). \end{split}\end{aligned}$$ A weight $\mu$ is called *typical* if $\mathfrak{P}_\mathbf{k}^\mathfrak{q}(\mu)\neq0$. Otherwise, it is called *atypical*, and the number of positive roots $\beta$ for which $\mathfrak{P}_\mathbf{k}^\mathfrak{q}(\beta,\mu)=0$ is its *degree of atypicality*; if this number is $\ell$, we say that $\mu$ is $\ell$-atypical. This terminology is borrowed from [@kac]; see also [@ser; @Y]. **Corollary 58** ([@AJS §6.3]). *Let $\mu\in\mathbb Z^\mathbb{I}$. The following are equivalent:* 1. *$\mu$ is typical.* 2. *$Z_\mathbf{k}(\mu)=L_\mathbf{k}(\mu)$ is simple.* 3. *$Z_\mathbf{k}(\mu)=L_\mathbf{k}(\mu)$ is projective.* *Proof.* If $\mu$ is typical, then $L_\mathbf{k}(\mu)$ is the unique composition factor of $Z_\mathbf{k}(\mu)$ by Theorem [Theorem 55](#teo:strongly linked){reference-type="ref" reference="teo:strongly linked"}. Since $[Z_\mathbf{k}(\mu),L_\mathbf{k}(\mu)]=1$, it follows that $Z_\mathbf{k}(\mu)$ is simple. Conversely, if $\mathfrak{P}_\mathbf{k}^\mathfrak{q}(\mu)=0$, then $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s\neq0$ for some $s$, cf.
Figure [\[fig:linked\]](#fig:linked){reference-type="ref" reference="fig:linked"}. Since the morphism $\varphi_n\cdots\varphi_1:Z_\mathbf{k}(\mu)\longrightarrow Z^{w_{0}}_\mathbf{k}(\mu-\beta^\mathfrak{q}_{top})$ is non-trivial, $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s\neq Z_\mathbf{k}(\mu)$ and hence $Z_\mathbf{k}(\mu)$ is not simple. This proves that (1) is equivalent to (2). The equivalence between (2) and (3) is a consequence of Theorem [Theorem 33](#teo:BGG){reference-type="ref" reference="teo:BGG"}. ◻ **Remark 59**. The equivalence between $(1)$ and $(2)$ was proved earlier in [@HY Proposition 5.16]; see also [@Y Remark 6.25]. ## $1$-atypical weights For weights with degree of atypicality $1$ we can compute the character of the associated simple module similarly to [@AJS §6.4]. **Corollary 60**. *Let $\mu\in\mathbb Z^\mathbb{I}$ be a $1$-atypical weight with $\mathfrak{P}_\mathbf{k}^\mathfrak{q}(\beta,\mu)=0$ for some $\beta\in\Delta_+^\mathfrak{q}$. Then $$\begin{aligned} {\mathrm{ch}} L_\mathbf{k}(\mu)=e^\mu\,\frac{1-e^{-n^\pi_\beta(\mu)\beta}}{1-e^{-\beta}} \prod_{\gamma\in \Delta_+^\mathfrak{q}\setminus\{\beta\}}\frac{1-e^{-b^\mathfrak{q}(\gamma)\gamma}}{1-e^{-\gamma}}.\end{aligned}$$ Moreover, there exists an exact sequence $$\begin{aligned} 0\longrightarrow L_\mathbf{k}(\beta\downarrow\mu)\longrightarrow Z_\mathbf{k}(\mu)\longrightarrow L_\mathbf{k}(\mu)\longrightarrow0.\end{aligned}$$* *Proof.* We keep the notation of Figure [\[fig:linked\]](#fig:linked){reference-type="ref" reference="fig:linked"}: $\beta=\beta_s=w_s\alpha_{i_s}$, $\lambda=\beta_s\downarrow\mu=\mu-t_s\beta_s$ and $\Phi=\varphi_n\cdots\varphi_1$.
As $\mu$ is $1$-atypical, the morphisms $\varphi_\ell$ are isomorphisms for all $\ell\neq s$ by Lemma [Lemma 47](#le:t beta unit){reference-type="ref" reference="le:t beta unit"} and hence ${\mathrm{Im}} \varphi_s\simeq {\mathrm{Im}} \Phi\simeq L_\mathbf{k}(\mu)$, recall Lemma [Lemma 28](#le:imagen de Z w0 mu w0 en Z mu){reference-type="ref" reference="le:imagen de Z w0 mu w0 en Z mu"}. Thus, the character formula is a consequence of [\[eq:ch Ker varphi w\]](#eq:ch Ker varphi w){reference-type="eqref" reference="eq:ch Ker varphi w"}. For the existence of the exact sequence, we claim that $L_\mathbf{k}(\lambda)\simeq\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$. Indeed, in the proof of Lemma [Lemma 56](#le:Linkage){reference-type="ref" reference="le:Linkage"}, we saw that $L_\mathbf{k}(\lambda)$ is a composition factor of $\mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s$. Thus, $\dim L_\mathbf{k}(\lambda)\leq\dim \mathop{\mathrm{ {\mathrm{Ker}} }}\varphi_s= (b^\mathfrak{q}(\beta)-n_\beta^\pi(\mu))\prod_{\gamma\in \Delta_+^\mathfrak{q}\setminus\{\beta\}}b^\mathfrak{q}(\gamma)$. On the other hand, $n_\beta^\pi(\lambda)=b^\mathfrak{q}(\beta)-n_\beta^\pi(\mu)$ and hence $\dim\mathop{\mathrm{ {\mathrm{Ker}} }}\tilde\varphi_s=n_\beta^\pi(\mu)\prod_{\gamma\in \Delta_+^\mathfrak{q}\setminus\{\beta\}}b^\mathfrak{q}(\gamma)$ by [\[eq:ch Ker varphi w\]](#eq:ch Ker varphi w){reference-type="eqref" reference="eq:ch Ker varphi w"}. Let $\tilde\Phi=\tilde\varphi_n\cdots\tilde\varphi_1$. Then $L_\mathbf{k}(\lambda)\simeq {\mathrm{Im}} \tilde\Phi$ by Lemma [Lemma 28](#le:imagen de Z w0 mu w0 en Z mu){reference-type="ref" reference="le:imagen de Z w0 mu w0 en Z mu"} and therefore $\dim L_\mathbf{k}(\lambda)=\dim Z_\mathbf{k}(\lambda)-\dim\mathop{\mathrm{ {\mathrm{Ker}} }}\tilde\Phi\geq\dim Z_\mathbf{k}(\lambda)-\dim\mathop{\mathrm{ {\mathrm{Ker}} }}\tilde\varphi_s=(b^\mathfrak{q}(\beta)-n_\beta^\pi(\mu))\prod_{\gamma\in \Delta_+^\mathfrak{q}\setminus\{\beta\}}b^\mathfrak{q}(\gamma)$.
This implies our claim and the corollary is proved. ◻ **Example 61**. Let $\mathfrak{q}$ be as in Example [Example 4](#ex:the example){reference-type="ref" reference="ex:the example"}. Its positive roots are $\alpha_1$, $\beta=\alpha_1+\alpha_2$ and $\alpha_2$, recall Example [Example 5](#ex:the example pbw){reference-type="ref" reference="ex:the example pbw"}. Let $\pi:U_\mathfrak{q}^0\longrightarrow\Bbbk$ be an algebra map. For $\mu=0$, we have that $$\begin{aligned} \mathfrak{P}_\Bbbk^\mathfrak{q}(0)= \bigl(-1+\pi(K_1L_1^{-1})\bigr)\, \prod_{t=1}^{N-1}\bigl(q^t-\pi(K_{\beta}L_{\beta}^{-1})\bigr)\, \bigl(-1+\pi(K_2L_2^{-1})\bigr).\end{aligned}$$ Suppose $\pi(K_{\beta}L_{\beta}^{-1})=q^t$ for some $1\leq t\leq N-1$ and $\pi(K_1L_1^{-1})\neq1\neq\pi(K_2L_2^{-1})$. Then $\mu=0$ is $1$-atypical, $\beta\downarrow0=-t\beta$ and hence there is an exact sequence of the form $$\begin{aligned} 0\longrightarrow L_\Bbbk(-t\beta)\longrightarrow Z_\Bbbk(0)\longrightarrow L_\Bbbk(0)\longrightarrow0.\end{aligned}$$ Moreover, ${\mathrm{ch}} L_\Bbbk(0)=\left(1+e^{-\alpha_1}\right)\left(1+e^{-\beta}+\cdots+e^{(1-t)\beta}\right)\left(1+e^{-\alpha_2}\right)$. We observe now that $q_{\beta}^{N-t}-\rho^\mathfrak{q}(\beta)\,\pi\widetilde{(-t\beta)}(K_{\beta}L_{\beta}^{-1})=0$ and hence $$\begin{aligned} \beta\downarrow\beta\downarrow0=\beta\downarrow(-t\beta)=-t\beta-(N-t)\beta=-N\beta=-\beta^\mathfrak{q}_{top}.\end{aligned}$$ Therefore $-\beta^\mathfrak{q}_{top}\in{}^\downarrow0$ but $L_\Bbbk(-\beta^\mathfrak{q}_{top})$ is not a composition factor of $Z_\Bbbk(0)$. ## The linkage principle as a dot action {#subsec:linkage via dot} In this subsection, we assume that $\mathfrak{q}$ is of standard type [@AA-diag-survey]; this means that the bundles of matrices $\{C^\mathfrak{p}\}_{\mathfrak{p}\in\mathcal{X}}$ and of roots $\{\Delta^\mathfrak{p}\}_{\mathfrak{p}\in\mathcal{X}}$ are constant.
We will see that the operation $\downarrow$ can be carried out as the action of a group when $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{k}$ satisfies $\pi(K_i)=\pi(L_i)=1$ for all $i\in\mathbb{I}$, *e.g.* $\pi=\varepsilon$ the counit. Let us introduce some notation. For $i\in\mathbb{I}$, we define the group homomorphism $$\begin{aligned} \langle\alpha^\vee_i,-\rangle:\mathbb Z^\mathbb{I}\longrightarrow\mathbb Z\quad\mbox{by}\quad\langle\alpha^\vee_i,\alpha_j\rangle=c_{ij}^\mathfrak{q}\quad\forall j\in\mathbb{I}.\end{aligned}$$ Therefore $$\begin{aligned} \sigma_i(\mu)=\sigma_i^\mathfrak{p}(\mu)=\mu-\langle\alpha^\vee_i,\mu\rangle\,\alpha_i\end{aligned}$$ for all $\mu\in\mathbb Z^\mathbb{I}$ and $\mathfrak{p}\in\mathcal{X}$, as the bundle of Cartan matrices is constant. In the next definition we think of the morphisms in the Weyl groupoid just as $\mathbb Z$-automorphisms of $\mathbb Z^\mathbb{I}$. **Definition 62**. Let $\beta=w\alpha_i\in\Delta^\mathfrak{q}$ with $w\in{}^\mathfrak{q}\mathcal{W}$ and $\alpha_i\in\Pi^{w^{-*}\mathfrak{q}}$. We define $s_\beta\in\operatorname{Aut}_\mathbb Z(\mathbb Z^\mathbb{I})$ and the group homomorphism $\langle\beta^\vee,-\rangle:\mathbb Z^\mathbb{I}\longrightarrow\mathbb Z$ as follows: $$\begin{aligned} s_\beta=w\,\sigma_i\, w^{-1}\quad\mbox{and}\quad\langle\beta^\vee,\mu\rangle=\langle\alpha_i^\vee,w^{-1}\mu\rangle\quad\forall\mu\in\mathbb Z^\mathbb{I}.\end{aligned}$$ Of course, $s_\beta$ is defined for all roots thanks to [\[eq:roots are conjugate to simple\]](#eq:roots are conjugate to simple){reference-type="eqref" reference="eq:roots are conjugate to simple"}. This definition and the next lemma are in [@AARB §3.2] for Cartan roots. The proof runs essentially as in *loc. cit.* **Lemma 63**. *Let $\beta\in\Delta^\mathfrak{q}$. Then $s_\beta$ and $\langle\beta^\vee,-\rangle$ are well-defined, that is, they do not depend on $w$ and $\alpha_i$.
*Moreover, $s_\beta(\beta)=-\beta$ and $$\begin{aligned} s_\beta(\mu)=\mu-\langle\beta^\vee,\mu\rangle\,\beta\quad\forall\mu\in\mathbb Z^\mathbb{I}.\end{aligned}$$* *Proof.* Assume $\beta=w\alpha_i$ for some $w\in{}^\mathfrak{q}\mathcal{W}$ and $\alpha_i\in\Pi^{w^{-*}\mathfrak{q}}$. Then $$\begin{aligned} s_\beta(\mu)=w\,\sigma_i(w^{-1}\mu)=w(w^{-1}\mu-\langle\alpha^\vee_i,w^{-1}\mu\rangle\,\alpha_i)=\mu-\langle\beta^\vee,\mu\rangle\,\beta.\end{aligned}$$ This implies that $s_\beta$ is a reflection in $\operatorname{End}_{\mathbb{Q}}(\mathbb{Q}^\theta)$ in the sense of [@B Chapitre V §2.2]. Also, $s_\beta(\Delta^\mathfrak{q})=\Delta^\mathfrak{q}$ as we are assuming $\mathfrak{q}$ is of standard type. Therefore $s_\beta$ is well-defined, and hence so is $\langle\beta^\vee,-\rangle$, by [@B Chapitre VI §1, Lemme 1]. This proves the lemma. ◻ We recall [@Ang-reine Definition 2.6]: $i\in\mathbb{I}$ is a Cartan vertex of $\mathfrak{p}\in\mathcal{X}$ if $\mathfrak{p}(\alpha_i,\alpha_i)^{c_{ij}^\mathfrak{p}}=\mathfrak{p}(\alpha_i,\alpha_j)\mathfrak{p}(\alpha_j,\alpha_i)$ for all $j\in\mathbb{I}$.
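For instance, for the matrix $\mathfrak{q}=(q^{d_ic_{ij}})_{i,j\in\mathbb{I}}$ of Example [Example 3](#ex:small qg){reference-type="ref" reference="ex:small qg"}, where $(c_{ij})$ is a Cartan matrix with symmetrizers $d_i$ (so that $d_ic_{ij}=d_jc_{ji}$), every $i\in\mathbb{I}$ is a Cartan vertex: since $c_{ii}=2$, $$\begin{aligned} \mathfrak{q}(\alpha_i,\alpha_i)^{c_{ij}}=q^{2d_ic_{ij}}=q^{d_ic_{ij}}\,q^{d_jc_{ji}}=\mathfrak{q}(\alpha_i,\alpha_j)\,\mathfrak{q}(\alpha_j,\alpha_i)\quad\forall j\in\mathbb{I}.\end{aligned}$$ This is consistent with the discussion of the Cartan type below, where all roots are Cartan roots.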
The set of Cartan roots of $\mathfrak{q}$ is $$\begin{aligned} \Delta_{\operatorname{car}}^\mathfrak{q}=\left\{w(\alpha_i)\mid w:\mathfrak{p}\rightarrow\mathfrak{q}\mbox{ and $i$ is a Cartan vertex of $\mathfrak{p}$}\right\}.\end{aligned}$$ We introduce a Cartan-type Weyl group $$\begin{aligned} \mathcal{W}_{\operatorname{car}}^\mathfrak{q}=\langle s_\beta\mid\beta\in\Delta_{\operatorname{car}}^\mathfrak{q}\rangle\subset\operatorname{Aut}_\mathbb Z(\mathbb Z^\mathbb{I}),\end{aligned}$$ and its affine extension $$\begin{aligned} \mathcal{W}_{\operatorname{aff}}^\mathfrak{q}=\mathcal{W}_{\operatorname{car}}^\mathfrak{q}\ltimes\mathbb Z^\mathbb{I}.\end{aligned}$$ For $m\in\mathbb Z$, we denote $s_{\beta,m}=s_\beta\ltimes mb^\mathfrak{q}(\beta)\beta\in\mathcal{W}_{\operatorname{aff}}^\mathfrak{q}$ and $$\begin{aligned} \mathcal{W}_{\operatorname{link}}^\mathfrak{q}=\langle s_{\beta,m}\mid\beta\in\Delta_{\operatorname{car}}^\mathfrak{q},m\in\mathbb Z\rangle\subset\mathcal{W}^\mathfrak{q}_{\operatorname{aff}}.\end{aligned}$$ Finally, we define the dot action of $\mathcal{W}^\mathfrak{q}_{\operatorname{aff}}$ on $\mathbb Z^\mathbb{I}$ as $$\begin{aligned} (w\gamma)\bullet\mu=w(\mu+\gamma-\varrho^\mathfrak{q})+\varrho^\mathfrak{q}\end{aligned}$$ for all $w\in\mathcal{W}_{\operatorname{car}}^\mathfrak{q}$ and $\gamma,\mu\in\mathbb Z^\mathbb{I}$. **Lemma 64**. *Assume that $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{k}$ satisfies $\pi(K_j)=\pi(L_j)=1$ for all $j\in\mathbb{I}$. Let $\beta\in\Delta_+^\mathfrak{q}$ be a Cartan root and $\mu\in\mathbb Z^\mathbb{I}$. Then there exists $m\in\mathbb Z$ such that $$\begin{aligned} \beta\downarrow\mu=s_{\beta,m}\bullet\mu.\end{aligned}$$* *Proof.* Let $w:\mathfrak{p}\rightarrow\mathfrak{q}$ and $\alpha_i\in\Pi^\mathfrak{p}$ with $i\in\mathbb{I}$ a Cartan vertex such that $\beta=w\alpha_i$. For brevity, we set $n=n_\beta^\pi(\mu)$ and $b=b^\mathfrak{q}(\beta)=\mathop{\mathrm{ {\mathrm{ord}} }}q_\beta$.
If $1\leq n\leq b-1$, then $$\begin{aligned} q_\beta^n=&\rho^\mathfrak{q}(\beta)\pi\tilde\mu(K_\beta L_\beta^{-1})\\ =&\mathfrak{p}(\alpha_i,\alpha_i)\mathfrak{p}(\alpha_i,w^{-1}(\mu\langle w\rangle))\mathfrak{p}(w^{-1}(\mu\langle w\rangle),\alpha_i)\\ =&\mathfrak{p}(\alpha_i,\alpha_i)\mathfrak{p}(\alpha_i,\alpha_i)^{\langle\alpha_i^\vee,w^{-1}(\mu\langle w\rangle)\rangle}\\ =&q_\beta^{\langle\beta^\vee,\mu\langle w\rangle\rangle+1};\end{aligned}$$ the second equality follows from [\[eq:rho = \...\]](#eq:rho = ...){reference-type="eqref" reference="eq:rho = ..."} and the assumption on $\pi$; the third one holds because $i$ is a Cartan vertex. Therefore $n\equiv \langle\beta^\vee,\mu\langle w\rangle\rangle+1\mod b$. If $n=0$, then $q_\beta^t\neq q_\beta^{\langle\beta^\vee,\mu\langle w\rangle\rangle+1}$ for all $1\leq t\leq b-1$ and hence $1= q_\beta^{\langle\beta^\vee,\mu\langle w\rangle\rangle+1}$ because $b$ is the order of $q_\beta$. In both cases there exists $k\in\mathbb Z$ such that $$\begin{aligned} n+kb=\langle\beta^\vee,\mu\langle w\rangle\rangle+1. \end{aligned}$$ We claim that $m=1-k$ has the desired property. Indeed, we notice that $\mathfrak{p}(\gamma,\gamma)=\sigma_i^{*}\mathfrak{p}(\gamma,\gamma)$ for all $\gamma\in\mathbb Z^\mathbb{I}$ because $i$ is a Cartan vertex, and then $\varrho^\mathfrak{p}=\varrho^{\sigma_i^{*}\mathfrak{p}}$ as $\Delta^\mathfrak{p}=\Delta^{\sigma_i^{*}\mathfrak{p}}$. Hence $\sigma_i(\varrho^{\mathfrak{p}})-\varrho^\mathfrak{p}=\sigma_i(\varrho^{\sigma_i^{*}\mathfrak{p}})-\varrho^\mathfrak{p}=-(b-1)\alpha_i=-\langle\alpha_i^\vee,\varrho^\mathfrak{p}\rangle\alpha_i$. Therefore $\langle\beta^\vee,w\varrho^\mathfrak{p}\rangle=\langle\alpha_i^\vee,\varrho^\mathfrak{p}\rangle=b-1$. 
We use this equality in the next computation: $$\begin{aligned} s_\beta\bullet\mu=s_\beta(\mu-\varrho^\mathfrak{q})+\varrho^\mathfrak{q} =&\mu-\varrho^\mathfrak{q}-\langle\beta^\vee,\mu-\varrho^\mathfrak{q}\rangle\beta+\varrho^\mathfrak{q}\\ =&\mu-\langle\beta^\vee,\mu-\varrho^\mathfrak{q}\rangle\beta-\langle\beta^\vee,w\varrho^\mathfrak{p}\rangle\beta-\beta+b\beta\\ =&\mu-(\langle\beta^\vee,\mu\langle w\rangle\rangle+1)\beta+b\beta\\ =&\mu-(n+kb)\beta+b\beta\\ =&\beta\downarrow\mu+mb\beta.\end{aligned}$$ Since $s_\beta(\beta)=-\beta$, the lemma follows. ◻ The family of standard type matrices is divided into three subfamilies according to [@AA-diag-survey]: Cartan, super and the remainder. We next analyze them separately. ### Cartan type This is the case in which all $i\in\mathbb{I}$ are Cartan vertices and therefore all roots are Cartan roots. Its Weyl groupoid turns out to be just the Weyl group of the Cartan matrix $C=C^\mathfrak{q}$, and hence it coincides with $\mathcal{W}^\mathfrak{q}_{\operatorname{car}}$. The indecomposable matrices of Cartan type are listed in [@AA-diag-survey §4]. The previous results immediately imply the following. **Corollary 65**. *Assume that $\mathfrak{q}$ is of Cartan type and $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{k}$ satisfies $\pi(K_j)=\pi(L_j)=1$ for all $j\in\mathbb{I}$. Then $[\mu]_{\operatorname{link}}\subset\mathcal{W}^\mathfrak{q}_{\operatorname{link}}\bullet\mu$ for all $\mu\in\mathbb Z^\mathbb{I}$. ◻* The natural example of a matrix of Cartan type is $\mathfrak{q}=(q^{d_ic_{ij}})_{i,j\in\mathbb{I}}$ as in Example [Example 3](#ex:small qg){reference-type="ref" reference="ex:small qg"}.
In particular, if $\mathop{\mathrm{ {\mathrm{ord}} }}q$ is an odd prime (different from $3$ if $\mathfrak{g}$ has a component of type $G_2$) and $\pi(K_i)=\pi(L_i)=1$, then the objects in the category $\mathcal{C}^\mathfrak{q}_\mathbf{k}$ associated to $u_q(\mathfrak{q})$ turn out to be $u_q(\mathfrak{q})$-modules of type $1$ in the sense of Lusztig, cf. [@AJS §2.4]. Let $\delta^\mathfrak{q}=\frac{1}{2}\sum_{\beta\in \Delta^\mathfrak{q}_+}\beta$ be the semi-sum of the positive roots of the Lie algebra associated to $C$. We can replace $\varrho^\mathfrak{q}$ with $\delta^\mathfrak{q}$ in the dot action under the assumption of the next lemma; for instance, this holds when $\mathfrak{q}$ is as in the above paragraph, but not if $\mathop{\mathrm{ {\mathrm{ord}} }}q$ is even. In this way, we recover the usual dot action of the affine Weyl group. **Lemma 66**. *Suppose $b=b^\mathfrak{q}(\gamma)$ is constant for all $\gamma\in\Delta_+^\mathfrak{q}$. Let $\mu\in\mathbb Z^\mathbb{I}$ and $\beta\in\Delta_+^\mathfrak{q}$. Then $s_\beta\bullet\mu=s_\beta(\mu+\delta^\mathfrak{q})-\delta^\mathfrak{q}+b\langle\beta^\vee,\delta^\mathfrak{q}\rangle\beta$.* *Proof.* From the proof of Lemma [Lemma 64](#le:linkage via dot){reference-type="ref" reference="le:linkage via dot"}, we see that $$\begin{aligned} s_\beta\bullet\mu=\mu-\langle\beta^\vee,\mu-\varrho^\mathfrak{q}\rangle\beta=\mu-\langle\beta^\vee,\mu+\delta^\mathfrak{q}\rangle\beta+b\langle\beta^\vee,\delta^\mathfrak{q}\rangle\beta\end{aligned}$$ as we wanted. ◻ ### Super type These are the matrices $\mathfrak{q}$ whose root systems are isomorphic to those of finite-dimensional contragredient Lie superalgebras in characteristic 0 [@AA-diag-survey; @AAYamane]. The indecomposable matrices of super type are listed in [@AA-diag-survey §5]. An element in $\Delta_{\operatorname{odd}}^\mathfrak{q}:=\Delta^\mathfrak{q}\setminus\Delta_{\operatorname{car}}^\mathfrak{q}$ is called an *odd root*.
By inspection of [@AA-diag-survey §5], we can see that $\mathfrak{q}(\alpha_i,\alpha_i)=-1$ for every simple odd root $\alpha_i$. Then $q_\beta=-1$ for all $\beta\in\Delta_{\operatorname{odd}}^\mathfrak{q}$ and hence $\beta\downarrow\mu=\mu$ or $\mu-\beta$. For odd roots, we cannot always realize $\downarrow$ as a dot action; see the example below. Instead, we find that the classes $[\mu]_{\operatorname{link}}$ behave as in the representation theory of Lie superalgebras; see for instance [@chenwang; @panshu]. Let $\mathbb Z\Delta_{\operatorname{odd}}^\mathfrak{q}$ be the $\mathbb Z$-span of the odd roots in $\mathbb Z^\mathbb{I}$. **Corollary 67**. *Assume that $\mathfrak{q}$ is of super type and $\pi:U_\mathfrak{q}^0\longrightarrow\mathbf{k}$ satisfies $\pi(K_j)=\pi(L_j)=1$ for all $j\in\mathbb{I}$. Then $[\mu]_{\operatorname{link}}\subset\mathcal{W}^\mathfrak{q}_{\operatorname{link}}\bullet(\mu+\mathbb Z\Delta_{\operatorname{odd}}^\mathfrak{q})$ for all $\mu\in\mathbb Z^\mathbb{I}$.* *Proof.* The set of Cartan roots is invariant under $s_\beta$ for all $\beta\in\Delta_{\operatorname{car}}^\mathfrak{q}$ by [@AARB Lemma 3.6] and hence so is $\Delta_{\operatorname{odd}}^\mathfrak{q}$. Then the statement is a direct consequence of Lemma [Lemma 64](#le:linkage via dot){reference-type="ref" reference="le:linkage via dot"} and Theorem [Theorem 55](#teo:strongly linked){reference-type="ref" reference="teo:strongly linked"}. ◻ Let $\delta^\mathfrak{q}=\delta_{\operatorname{car}}^\mathfrak{q}-\delta_{\operatorname{odd}}^\mathfrak{q}$ with $\delta_{\operatorname{car}}^\mathfrak{q}=\frac{1}{2}\sum_{\beta\in \Delta^\mathfrak{q}_{+,\operatorname{car}}}\beta$ the semi-sum of the positive Cartan roots and $\delta_{\operatorname{odd}}^\mathfrak{q}=\frac{1}{2}\sum_{\beta\in \Delta^\mathfrak{q}_{+,\operatorname{odd}}}\beta$. The following is analogous to Lemma [Lemma 66](#le:dot with delta cartan){reference-type="ref" reference="le:dot with delta cartan"}. **Lemma 68**.
*Suppose $b=b^\mathfrak{q}(\gamma)$ is constant for all $\gamma\in\Delta_{+,\operatorname{car}}^\mathfrak{q}$. Let $\mu\in\mathbb Z^\mathbb{I}$ and $\beta\in\Delta_{+,\operatorname{car}}^\mathfrak{q}$. Then $s_\beta\bullet\mu=s_\beta(\mu+\delta^\mathfrak{q})-\delta^\mathfrak{q}+b\langle\beta^\vee,\delta_{\operatorname{car}}^\mathfrak{q}\rangle\beta$. ◻* **Example 69**. Let $\mathfrak{q}$ and $\mathfrak{p}$ be as in Example [Example 4](#ex:the example){reference-type="ref" reference="ex:the example"} and Example [Example 6](#ex:the example groupoid){reference-type="ref" reference="ex:the example groupoid"}, respectively. Recall their root system in Example [Example 7](#ex:the example root system){reference-type="ref" reference="ex:the example root system"}. Then $\pm\alpha_1$ and $\pm\alpha_2$ are odd roots, and $\pm(\alpha_1+\alpha_2)$ are Cartan roots because $\alpha_1+\alpha_2=\sigma_1^\mathfrak{p}(\alpha_2)$ and $2$ is a Cartan vertex of $\mathfrak{p}$. Let $\mu=\mu_1\alpha_1+\mu_2\alpha_2\in\mathbb Z^2$. Then $$\begin{aligned} s_{\alpha_1}\bullet\mu=s_{\alpha_1}(\mu-\varrho^\mathfrak{q})+\varrho^\mathfrak{q}=\mu-(2\mu_1-\mu_2-1)\alpha_1,\end{aligned}$$ and $$\begin{aligned} n_{\alpha_1}^\pi(\mu)=\begin{cases} 1&\mbox{if }q^{\mu_2}=1,\\ 0&\mbox{otherwise.} \end{cases}\end{aligned}$$ Thus, $\alpha_1\downarrow\mu\neq s_{\alpha_1}\bullet\mu+mb^\mathfrak{q}(\alpha_1)\alpha_1$ for all $m\in\mathbb Z$ if $\mathop{\mathrm{ {\mathrm{ord}} }}q\neq\mu_2\in2\mathbb Z$ as $b^\mathfrak{q}(\alpha_1)=2$. ### The remainder Besides the matrices of Cartan and super type, there exist an infinite family of indecomposable matrices and one $2\times2$-matrix whose root systems are constant [@AA-diag-survey §6]. For a matrix $\mathfrak{q}$ in this class and $\beta\in\Delta^\mathfrak{q}\setminus\Delta^\mathfrak{q}_{\operatorname{car}}$, the order of $q_\beta$ belongs to $\{2,3,4\}$.
Corollary [Corollary 67](#cor:Linkage super){reference-type="ref" reference="cor:Linkage super"} also holds in this case, after replacing $\Delta^\mathfrak{q}_{\operatorname{odd}}$ with $\Delta^\mathfrak{q}\setminus\Delta^\mathfrak{q}_{\operatorname{car}}$. ## Proof of Theorem [Theorem 1](#teo:main teo){reference-type="ref" reference="teo:main teo"} and Corollary [Corollary 2](#cor:main cor){reference-type="ref" reference="cor:main cor"} {#proof-of-theorem-teomain-teo-and-corollary-cormain-cor} **Corollary 70**. *Let $u_\mathfrak{q}$ be a small quantum group in the sense of Definition [Definition 10](#def:small quantum group){reference-type="ref" reference="def:small quantum group"}. *Mutatis mutandis*, all the results of this section hold for $u_\mathfrak{q}$ instead of $U_\mathfrak{q}$.* *Proof.* By definition there is a projection $U_\mathfrak{q}\longrightarrow u_\mathfrak{q}$ preserving the triangular decomposition as in §[5.6](#subsec:quotients){reference-type="ref" reference="subsec:quotients"}, and hence all the results of this section can be restated for $u_\mathfrak{q}$ thanks to [\[eq:quotients\]](#eq:quotients){reference-type="eqref" reference="eq:quotients"}. ◻ In particular, the above corollary applies to small quantum groups as in Figure [\[fig:uq\]](#fig:uq){reference-type="ref" reference="fig:uq"}, recall Example [Example 12](#ex:uq){reference-type="ref" reference="ex:uq"}, and hence Theorem [Theorem 1](#teo:main teo){reference-type="ref" reference="teo:main teo"} and Corollary [Corollary 2](#cor:main cor){reference-type="ref" reference="cor:main cor"} follow. Alternatively, it is not difficult to see that the Lusztig isomorphisms descend to isomorphisms between these small quantum groups, and that $\widetilde{\mu}$ induces an algebra automorphism of $u_\mathfrak{q}^0$ for all $\mu\in\mathbb Z^\mathbb{I}$.
Thus, one could repeat the whole treatment of Sections [6](#sec:vermas){reference-type="ref" reference="sec:vermas"} and [7](#sec:morphisms){reference-type="ref" reference="sec:morphisms"} for $u_\mathfrak{q}$, giving a direct proof of Theorem [Theorem 1](#teo:main teo){reference-type="ref" reference="teo:main teo"} and Corollary [Corollary 2](#cor:main cor){reference-type="ref" reference="cor:main cor"}. H. H. Andersen, J. C. Jantzen and W. Soergel, *Representations of quantum groups at a $p$-th root of unity and of semisimple groups in characteristic $p$: independence of $p$*. Astérisque **220** (1994). N. Andruskiewitsch. *An Introduction to Nichols Algebras*. In Quantization, Geometry and Noncommutative Structures in Mathematics and Physics. A. Cardona et al., eds., pp. 135--195, Springer (2017). N. Andruskiewitsch and I. Angiono, *On finite dimensional Nichols algebras of diagonal type*. Bull. Math. Sci. **7** (2017), 353--573. N. Andruskiewitsch and I. Angiono, *On Nichols algebras with generic braiding*. In Modules and comodules. Proceedings of the international conference, Porto, Portugal, September 6--8, 2006. Dedicated to Robert Wisbauer on the occasion of his 65th birthday. Basel: Birkhäuser. 47--64. N. Andruskiewitsch, I. Angiono, A. Mejía and C. Renz, *Simple modules of the quantum double of the Nichols algebra of unidentified diagonal type $\mathfrak{ufo}(7)$*. Commun. Algebra **46** (2018), no. 4, 1770--1798. N. Andruskiewitsch, I. Angiono and F. Rossi Bertone, *Lie algebras arising from Nichols algebras of diagonal type*. Int. Math. Res. Not. **4** (2023), 3424--3459. N. Andruskiewitsch, I. Angiono and M. Yakimov, *Poisson orders on large quantum groups*, Adv. Math. **428** (2023), 66 p. N. Andruskiewitsch, I. Angiono and H. Yamane, *On pointed Hopf superalgebras*. Contemp. Math. **544** (2011), 123--140. N. Andruskiewitsch, I. Heckenberger and H.-J. Schneider. *The Nichols algebra of a semisimple Yetter-Drinfeld module.* Am. J. Math.
**132** (2010), no. 6, 1493--1547. N. Andruskiewitsch and H.-J. Schneider, *Finite quantum groups and Cartan matrices*. Adv. Math. **154** (2000), 1--45. I. Angiono, *On Nichols algebras of diagonal type*. J. Reine Angew. Math. **683** (2013), 189--251. I. Angiono, *A presentation by generators and relations of Nichols algebras of diagonal type and convex orders on root systems.* J. Eur. Math. Soc. **17** (2015), 2643--2671. I. Angiono, *Distinguished pre-Nichols algebras.* Transf. Groups **21** (2016), 1--33. A. Beı̆linson, V. Ginzburg, and W. Soergel, *Koszul duality patterns in representation theory*. J. Amer. Math. Soc. **9** (1996), 473--527. N. Bourbaki, *Groupes et Algèbres de Lie*, Chapitres IV, V, VI, Actualités Scientifiques et Industrielles, No. 1337. Hermann, Paris, 1968, 288 pp. G. Bellamy and U. Thiel. *Highest weight theory for finite-dimensional graded algebras with triangular decomposition*. Adv. Math. **330** (2018), 361--419. S.-J. Cheng and W. Wang, *Dualities and representations of Lie superalgebras*. Graduate Studies in Mathematics **144**. Providence, RI: American Mathematical Society (AMS) (2012). M. Cuntz and I. Heckenberger, *Weyl groupoids with at most three objects*. J. Pure Appl. Algebra **213** (2009), 1112--1128. P. Fiebig, *Sheaves on affine Schubert varieties, modular representations, and Lusztig's conjecture.* J. Am. Math. Soc. **24** (2011), no. 1, 133--181. P. Fiebig and M. Lanini, *The combinatorial category of Andersen, Jantzen and Soergel and filtered moment graph sheaves*. Abh. Math. Semin. Univ. Hamb. **86** (2016), no. 2, 203--212. I. Heckenberger, *The Weyl groupoid of a Nichols algebra of diagonal type.* Invent. Math. **164** (2006), 175--188. I. Heckenberger, *Classification of arithmetic root systems.* Adv. Math. **220** (2009), 59--124. I. Heckenberger, *Lusztig isomorphisms for Drinfel'd doubles of bosonizations of Nichols algebras of diagonal type.* J. Algebra **323** (2010), no. 2, 2130--2182. I.
Heckenberger and H.-J. Schneider. *Hopf algebras and root systems.* Mathematical Surveys and Monographs **247**. Providence, RI: American Mathematical Society (AMS). xix, 582 p. (2020). I. Heckenberger and H. Yamane, *A generalization of Coxeter groups, root systems, and Matsumoto's theorem.* Math. Z. **259** (2008), 255--276. I. Heckenberger and H. Yamane, *Drinfel'd doubles and Shapovalov determinants.* Rev. Un. Mat. Argentina **51** (2010), no. 2, 107--146. V. Kac, *Characters of typical representations of classical Lie superalgebras.* Comm. Algebra **5** (1977), 889--897. V. Kharchenko, *A quantum analogue of the Poincaré--Birkhoff--Witt theorem.* Algebra Log. **38** (1999), 259--276. M. Kashiwara and T. Tanisaki, *Kazhdan--Lusztig conjecture for affine Lie algebras with negative level*. Duke Math. J. **77** (1995), no. 1, 21--62. D. Kazhdan and G. Lusztig, *Tensor structures arising from affine Lie algebras I-II*. J. Amer. Math. Soc. **6** (1993), 905--1011. D. Kazhdan and G. Lusztig, *Tensor structures arising from affine Lie algebras III-IV*. J. Amer. Math. Soc. **7** (1994), 335--383--453. R. Laugwitz and G. Sanmarco *Finite-dimensional quantum groups of type Super A and non-semisimple modular categories*. `arXiv:2301.10685` (2023). G. Lusztig, *Modular representations and quantum groups*. Classical groups and related topics, Proc. Conf., Beijing/China 1987, Contemp. Math. **82** (1989), 59--77. G. Lusztig, *Quantum groups at roots of 1*. Geom. Dedicata **35** (1990), 89--114. G. Lusztig, *Finite dimensional Hopf algebras arising from quantized universal enveloping algebras*. J. Am. Math. Soc. **3** (1990), no. 1, 25--296. G. Lusztig, *On quantum groups*. J. Algebra **131** (1990), no. 2, 466--475. L. Pan and B. Shu, *Jantzen filtration and strong linkage principle for modular Lie superalgebras*. Forum Math. **30** (2018), no. 6, 1573--1598. M. Rosso, *Quantum groups and quantum shuffles.* Invent. Math. **133** (1988), 399--416. V. 
--- abstract: | We construct an endomorphism of the Jiang-Su algebra $\mathcal{Z}$ which does not admit a conditional expectation. This answers a question from the posthumous manuscript of E. Kirchberg. As an application, it is shown that any unital separable nuclear $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebra is non-transportable in the Cuntz algebra $\mathcal{O}_2$. author: - "Yasuhiko Sato [^1]" title: | Endomorphisms of $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebras\ without conditional expectations --- *Dedicated to the memory of Eberhard Kirchberg* # Introduction {#Sec1} Recent developments in the classification theory of amenable $\mathrm{C}^*$-algebras were achieved by numerous researchers, and consequently a broad class of unital separable simple $\mathrm{C}^*$-algebras has now been classified by K-theoretical invariants; see [@K1], [@K2], [@KP], [@P], [@GLN], [@GLN2], [@TWW], [@EN], [@EGLN], and [@EGLN2]. The Kirchberg-Phillips classification theorem has been recognized as one of the most successful results in the classification program established by G. A. Elliott. For an overview of the history and context of Elliott's classification programme, we refer the reader to [@ET], [@Win2], and [@Whi]. As an application of the classification theorem, we are now in a position to follow a typical but crucial strategy consisting of the following three steps. Here we assume that we are required to show a certain property, denoted by property $X$, for a given unital separable simple classifiable $\mathrm{C}^*$-algebra $A$. The first step of the strategy is to construct an amenable $\mathrm{C}^*$-algebra $B$ with the property $X$, which may be expected to arise naturally. Second, by modifying $B$ while keeping the property $X$, one reconstructs a unital separable simple amenable $\mathrm{C}^*$-algebra $B_A$ with the property $X$ whose K-theoretical invariant coincides with that of $A$.
In other words, at the level of classification invariants $B_A$ is isomorphic to $A$. Finally, refining $B_A$ further so that it is classifiable, the classification theorem enables us to show that $A$ is isomorphic to $B_A$ as $\mathrm{C}^*$-algebras, and this leads to the desired property $X$ of $A$. If the property $X$ is compatible with tensor products, then taking the tensor product with the Jiang-Su algebra $\mathcal{Z}$ easily yields classifiability; see [@MS], [@SWW], [@CETWW]. This classification strategy can be applied with "the existence of a non-approximately inner flow", or more generally "the existence of a flow which realizes a possible KMS-bundle", as the property $X$ in UHF algebras; this in fact provides a counterexample to the Powers-Sakai conjecture [@Kis], [@MS], and a concrete realization of possible KMS-bundles on classifiable $\mathrm{C}^*$-algebras [@Thoms2], [@ET], [@EST], [@ES]. In this paper, we apply the classification strategy to show the existence of a unital endomorphism without conditional expectations for classifiable $\mathrm{C}^*$-algebras, a question which has been studied in Kirchberg's manuscript [@EK], as follows. **Theorem 1**. *There exists an endomorphism of the Jiang-Su algebra $\mathcal{Z}$ which does not have a conditional expectation.* *Suppose that $A$ and $B$ are unital nuclear $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebras and that there exists a unital embedding of $B$ into $A$. Then there exists a unital embedding of $B$ into $A$ which does not have a conditional expectation.* This theorem answers a question in [@EK p.795 and p.1220], which asks whether the CAR algebra $M_{2^{\infty}}$ has a unital endomorphism with no conditional expectations (see Corollary [Corollary 6](#Cor3.3){reference-type="ref" reference="Cor3.3"}). As a consequence, we see that the Cuntz algebra $\mathcal{O}_2$ is not transportable in itself (see Theorem [Theorem 8](#Thm4.2){reference-type="ref" reference="Thm4.2"}).
Before going to the next section, let us collect some basic notation and terminology. Throughout the paper, let $A^+$ denote the cone of positive elements in a $\mathrm{C}^*$-algebra $A$. In the case of a unital $\mathrm{C}^*$-algebra $A$, we denote by $1_A$ the unit of $A$. For $n\in\mathbb{N}$, let $M_n$ be the $\mathrm{C}^*$-algebra of complex $n\times n$ matrices, $\{e_{i, j}^{(n)}\ :\ i, j=1,2,...,n\}$ the set of canonical matrix units of $M_n$, and ${\mathrm{tr}}_n$ the normalized trace on $M_n$. To simplify, we set $1_n =1_{M_n}$. For a $\mathrm{C}^*$-algebra $A$, we denote by $\mathop{\mathrm{id}}_A$ the identity map on $A$, i.e., $\mathop{\mathrm{id}}_A(a)=a$ for any $a\in A$. For two $\mathrm{C}^*$-algebras $A$ and $B$, by an *embedding* $\varphi$ of $A$ into $B$ we mean an injective $*$-homomorphism from $A$ to $B$. A *conditional expectation* $E$ for an embedding $\varphi$ of $A$ into $B$ is a completely positive contractive map from $B$ to $A$ such that $E\circ\varphi =\mathop{\mathrm{id}}_A$. By Tomiyama's theorem (see [@Lan], [@BO]), it is well known that $E(\varphi(a)\, b\, \varphi(c))=a\,E(b)\,c$ for any $a$, $c\in A$, and $b\in B$. If an embedding $\varphi$ is unital (i.e., $\varphi(1_A)=1_B$), then so is a conditional expectation for $\varphi$. # Jiang and Su's connecting maps reconstructed {#Sec2} In the original construction of the Jiang-Su algebra $\mathcal{Z}$ [@JS Proposition 2.5], the connecting maps cannot have conditional expectations, and even in the constructions of [@JS Theorem 4.1] and [@Ror Theorem 2.1] the existence of conditional expectations is not necessarily guaranteed (see Proposition [Proposition 3](#Prop2.2){reference-type="ref" reference="Prop2.2"} (i)). For this reason, in Proposition [Proposition 3](#Prop2.2){reference-type="ref" reference="Prop2.2"} (ii), we shall modify the connecting maps in the construction of $\mathcal{Z}$ so that they admit conditional expectations.
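For a concrete finite-dimensional illustration of these notions (an example of ours, not taken from the paper): for the unital embedding $a\mapsto a\otimes 1_2$ of $M_2$ into $M_2\otimes M_2$, the normalized partial trace over the second tensor factor is a conditional expectation, and Tomiyama's bimodule identity can be verified numerically. The sketch below assumes NumPy is available.

```python
import numpy as np

def embed(a):
    # The unital embedding phi: M_2 -> M_2 (x) M_2, a |-> a (x) 1_2.
    return np.kron(a, np.eye(2))

def expectation(b):
    # E = id (x) tr_2: the normalized partial trace over the second factor,
    # a conditional expectation for `embed`.
    b = np.asarray(b).reshape(2, 2, 2, 2)  # b[i, k, j, l]: row (i, k), column (j, l)
    return np.einsum('ikjk->ij', b) / 2

rng = np.random.default_rng(0)
a, c = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
b = rng.standard_normal((4, 4))

assert np.allclose(expectation(embed(a)), a)          # E o phi = id
assert np.allclose(expectation(embed(a) @ b @ embed(c)),
                   a @ expectation(b) @ c)            # Tomiyama's bimodule identity
```

The same compression-by-a-projection pattern reappears in the conditional expectation $E_{\varphi}$ constructed in the proof of Proposition 3 (ii) below.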
First, we prepare a rather precise notion for embeddings of prime dimension drop algebras, which is an approximate version of the standard $*$-homomorphisms in [@Ror (2.1)]. **Definition 2**. For relatively prime natural numbers $p_0$, $p_1$, and for unital embeddings $\iota_i$ of $M_{p_i}$ into $M_{p_0}\otimes M_{p_1}$, $i=0, 1$, we denote by $I(p_0, p_1)$ the prime dimension drop algebra defined by $$I(p_0, p_1)=\{ f\in C([0, 1])\otimes M_{p_0}\otimes M_{p_1}\ : \ f(i)\in \mathop{\mathrm{Im}}(\iota_i) \text{ for both } i=0, 1\}.$$ From the definition, it follows that $I(p_0, p_1)$ is independent of the choice of unital embeddings $\iota_i$, $i=0, 1$. Let $\mu$ be the Lebesgue measure on $[0, 1]$. A unital $*$-homomorphism $\varphi$ from $I(p_0, p_1)$ into $\mathcal{Z}$ is called *standard* if $$\tau_{\mathcal{Z}}\circ\varphi(f)=\int_{[0, 1]}{\mathrm{tr}}_{p_0p_1}(f(t))\ d\mu(t),$$ for any $f\in I(p_0, p_1)$, where $\tau_{\mathcal{Z}}$ is the unique tracial state of $\mathcal{Z}$. Let ${\widetilde{p}}_0$ and ${\widetilde{p}}_1$ be relatively prime natural numbers and $\varphi$ a unital embedding of $I(p_0, p_1)$ into $I({\widetilde{p}}_0, {\widetilde{p}}_1)$. For a finite subset $F$ of $I(p_0, p_1)$ and $\varepsilon>0$, we say that $\varphi$ is *$(F, \varepsilon)$-standard* if $$\left\lvert \tau\circ \varphi(f) -\int_{[0, 1]}{\mathrm{tr}}_{p_0p_1}(f(t))\ d\mu(t)\right\rvert< \varepsilon,$$ for any tracial state $\tau$ of $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ and $f\in I(p_0, p_1)$. **Proposition 3**.
*Let $p_0$ and $p_1$ be relatively prime natural numbers.* *The unital embedding of $I(p_0, p_1)$ constructed in the same way as [@JS Proposition 2.5] does not have a conditional expectation.* *For a finite subset $F$ of $I(p_0, p_1)$ and $\varepsilon>0$ , there exists a natural number $N$ satisfying that: for relatively prime natural numbers ${\widetilde{p}}_0$ and ${\widetilde{p}}_1$ with ${\widetilde{p}}_i>N$ for $i=0, 1$, there exist a unital $(F, \varepsilon)$-standard embedding $\varphi$ of $I(p_0, p_1)$ into $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ and a conditional expectation for $\varphi$.* *Proof.* To simplify some notations we regard $i=0, 1$ as elements in $\mathbb{Z}/2\mathbb{Z}$, and set $d=p_0p_1$. Let $l_i$, $i=0, 1$ be natural numbers such that $l_i> 2p_{i+1}$ for both $i=0, 1$, and $l_0p_0$ and $l_1p_1$ are also relatively prime. Set ${\widetilde{p}}_i=l_ip_i$ for $i=0, 1$, and $k=l_0l_1$. Recall that the connecting map $\varphi$ from $I(p_0, p_1)$ to $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ in the proof of [@JS Proposition 2.5] is constructed from continuous functions $\xi_j\in C([0, 1])$, $j=1,2,..., k$ (called a section of eigenvalues) defined by $$\xi_j(t)= \begin{cases} t/2,\quad \quad &1\leq j\leq r_0, \\ 1/2, \quad \quad &r_0<j \leq k-r_1,\\ (t+1)/2, \quad \quad &k-r_1< j\leq k, \end{cases}$$ where $r_0$ and $r_1$ are natural numbers such that $r_i<{\widetilde{p}}_{i+1}$ and ${\widetilde{p}}_{i+1}|k-r_i$ for $i=0$, $1$. Set $\xi(f)=\mathop{\mathrm{diag}}(f\circ\xi_1, f\circ\xi_2, ..., f\circ\xi_k)\in C([0, 1])\otimes M_{d}\otimes M_k$ for $f\in I(p_0, p_1)$. By the choice of $r_0$ and $r_1$, we obtain a unitary $u\in C([0, 1])\otimes M_d\otimes M_k$ such that $\mathop{\mathrm{Ad}}u\circ\xi(f)\in I({\widetilde{p}}_0, {\widetilde{p}}_1)$ for any $f\in I(p_0, p_1)$. The unital embedding $\varphi$ was defined by $\varphi=\mathop{\mathrm{Ad}}u\circ\xi$. 
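As a quick numerical illustration of the arithmetic behind this choice (with hypothetical small parameters of our own choosing), one can compute $r_0$ and $r_1$ directly:

```python
from math import gcd

# Hypothetical small parameters: p0, p1 relatively prime, l_i > 2*p_{i+1},
# and l0*p0, l1*p1 also relatively prime (indices read mod 2).
p0, p1 = 2, 3
l0, l1 = 7, 11                 # l0 > 2*p1 = 6 and l1 > 2*p0 = 4
pt0, pt1 = l0 * p0, l1 * p1    # \tilde p_0 = 14, \tilde p_1 = 33
assert gcd(p0, p1) == 1 and gcd(pt0, pt1) == 1

k = l0 * l1

# r_i satisfies 0 < r_i < \tilde p_{i+1} and \tilde p_{i+1} | k - r_i,
# obtained here as r_i = k mod \tilde p_{i+1}.
r0, r1 = k % pt1, k % pt0
for r, pt in ((r0, pt1), (r1, pt0)):
    assert 0 < r < pt and (k - r) % pt == 0
```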
Since the definition of $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ is independent of the unital embeddings of $M_{{\widetilde{p}}_i}$, $i=0, 1$ into $M_d\otimes M_k$, we may regard $\varphi=\xi$. Assume that there exists a conditional expectation $E_{\varphi}$ for $\varphi$. Let $g_i$, $i=0, 1$ be positive continuous functions on $[0, 1]$ such that $g_i(i)=1$ for both $i=0, 1$, $g_0|_{[1/2, 1]}=0$, and $g_1|_{[0, 1/2]}=0$. Define a positive element $F$ in $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ by $$F(t) =(1-t)\varphi(g_0\otimes 1_d)(0) +t (1_{dk}-\varphi(g_1\otimes 1_d)(1)),$$ for $t\in [0, 1]$. For any $h\in C([0, 1])$ with $h|_{[1/2, 1]}=0$, we have $$E_{\varphi}(F)(h\otimes 1_d)=E_{\varphi}(F\,\varphi(h\otimes 1_d))=E_{\varphi}(\varphi(h\otimes 1_d))=h\otimes 1_d.$$ On the other hand, for any $h\in C([0, 1])$ with $h|_{[0, 1/2]}=0$, we have $$E_{\varphi}(F)(h\otimes 1_d)=E_{\varphi}(F\, \varphi(h\otimes 1_d))=E_{\varphi}(0)=0,$$ which contradicts the continuity of $E_{\varphi}(F)$ at $1/2$. This argument is a variant of the proofs of [@JS Theorem 4.1] and [@Ror Theorem 2.1]. In order to obtain a conditional expectation, we shall insert in addition the following section $\xi_e$, and arrange the amplifications suitably. Replacing $\varepsilon$ with $\varepsilon/\max\{\, \| f\|\, :\, f\in F\}$, we may assume that $F$ is a set of contractions in $I(p_0, p_1)$. For a Lipschitz continuous function $f\in I(p_0, p_1)$, we denote by $\mathop{\mathrm{Lip}}(f)$ the Lipschitz constant of $f$. Since the set of all Lipschitz continuous functions is dense in $I(p_0, p_1)$, without loss of generality we may assume that $\mathop{\mathrm{Lip}}(f)<\infty$ for all $f\in F$. 
Let $\widetilde{N}\in\mathbb{N}$ be such that $\displaystyle \max\{\mathop{\mathrm{Lip}}(f), 1\}< \widetilde{N}\varepsilon/ 8$, and $$\left\lvert\int_{[0, 1]} {\mathrm{tr}}_d(f(t))\ d\mu(t) -\frac{1}{k}\sum_{j=1}^k {\mathrm{tr}}_d(f(j/k))\right\rvert< \varepsilon/2,$$ for all $f\in F$ and $k\geq \widetilde{N}$. Set $N=(\widetilde{N}d)^2\in \mathbb{N}$. Let $\iota_i$, $i=0, 1$ be the unital embeddings of $M_{p_i}$ into $M_d\cong M_{p_0}\otimes M_{p_1}$ which determine the prime dimension drop algebra $I(p_0, p_1)$. We suppose that ${\widetilde{p}}_0$ and ${\widetilde{p}}_1$ are relatively prime natural numbers such that ${\widetilde{p}}_0$, ${\widetilde{p}}_1>N$, and let $l_0$ and $l_1\in\mathbb{N}$ be such that $l_ip_i={\widetilde{p}}_i$ for both $i=0, 1$. Since the same argument works for the case ${\widetilde{p}}_1>{\widetilde{p}}_0$, in what follows we assume that ${\widetilde{p}}_0> {\widetilde{p}}_1$. Setting $k=l_0l_1$, we will prepare some natural numbers $y_j$ and $y_{j+1/2}$ inducing a partition of $k$ below. By the choice of $N$, there exist natural numbers $\overline{N}_i \geq \widetilde{N}$, $i=0, 1$ and $r_i$, $i=0, 1$ such that $l_i=(\overline{N}_i-1)p_{i+1}+r_i$, and $0\leq r_i < p_{i+1}$ for both $i=0, 1$. Since ${\widetilde{p}}_0=l_0p_0$ and ${\widetilde{p}}_1=l_1p_1$ are relatively prime, it follows that $r_i>0$ for $i=0, 1$. We set $\overline{N}=\overline{N}_1$, and note that $$k=l_0l_1=(\overline{N}-1){\widetilde{p}}_0+l_0r_1=(\overline{N}_0-1){\widetilde{p}}_1 +l_1r_0.$$ By ${\widetilde{p}}_0>{\widetilde{p}}_1$, we have natural numbers $y_0$, $y_1$,\..., $y_{\overline{N}-1}$, $y_{1/2}$, $y_{3/2}$,\..., $y_{\overline{N}-3/2}$, and $c_1$, $c_2$,\..., $c_{\overline{N}-1}$ such that $y_j+y_{j+1/2}={\widetilde{p}}_0$ for $j=0, 1,..., \overline{N}-2$, $y_{\overline{N}-1} =l_0r_1$, $y_0=l_1r_0$, and $y_{j-1/2} +y_j=c_j{\widetilde{p}}_1$ for $j=1,2, ..., \overline{N}-1$. 
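These counting conditions can be sanity-checked numerically. The sketch below uses hypothetical small values of $p_i$ and $l_i$ (far smaller than the bound $N$ above requires) and constructs the numbers $y_j$, $y_{j+1/2}$, $c_j$ greedily; the divisibility at the last step is automatic once the two expressions for $k$ agree.

```python
from math import gcd

p0, p1 = 2, 3
l0, l1 = 25, 11
pt0, pt1 = l0 * p0, l1 * p1          # \tilde p_0 = 50, \tilde p_1 = 33
assert gcd(pt0, pt1) == 1 and pt0 > pt1

# l_i = (Nbar_i - 1) * p_{i+1} + r_i with 0 < r_i < p_{i+1}
N0, r0 = divmod(l0, p1); N0 += 1
N1, r1 = divmod(l1, p0); N1 += 1
assert 0 < r0 < p1 and 0 < r1 < p0

Nbar, k = N1, l0 * l1
assert k == (Nbar - 1) * pt0 + l0 * r1 == (N0 - 1) * pt1 + l1 * r0

y = {0: l1 * r0}                     # y_0 = l_1 r_0
c = {}
for j in range(1, Nbar - 1):
    y[j - 0.5] = pt0 - y[j - 1]      # y_{j-1} + y_{j-1/2} = \tilde p_0
    c[j] = y[j - 0.5] // pt1 + 1     # smallest c_j making y_j positive
    y[j] = c[j] * pt1 - y[j - 0.5]   # y_{j-1/2} + y_j = c_j \tilde p_1
y[Nbar - 1.5] = pt0 - y[Nbar - 2]
y[Nbar - 1] = l0 * r1                # forced: y_{Nbar-1} = l_0 r_1
c[Nbar - 1], rem = divmod(y[Nbar - 1.5] + y[Nbar - 1], pt1)
assert rem == 0                      # the last divisibility is automatic

assert all(v > 0 for v in y.values())
assert sum(y.values()) == k == sum(c.values()) * pt1 + l1 * r0
```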
Note that $$\sum_{j=0}^{\overline{N}-1} y_j+\sum_{j=1}^{\overline{N}-1}y_{j-1/2} =(\overline{N}-1){\widetilde{p}}_0+l_0 r_1=\left(\sum_{j=1}^{\overline{N}-1} c_j\right){\widetilde{p}}_1 +l_1r_0=k.$$ The required unital embedding $\varphi$ of $I(p_0, p_1)$ into $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ can be constructed as the block diagonal map $\mathop{\mathrm{diag}}(f\circ\omega_1, f\circ\omega_2,..., f\circ\omega_k)$ for $f\in I(p_0, p_1)$ by setting the following continuous functions $\omega_j\in C([0, 1])$, $j=1,2,...,k$. For $j=0,1,..., \overline{N}-1$, we define continuous functions $\xi_j$ and $\xi_{j+1/2}\in C([0, 1])$ by $$\xi_j(t) = (t+j)/\overline{N},\quad \xi_{j+1/2} (t) = (j+1)/\overline{N}\quad \text{for }t\in [0, 1],$$ and $\xi_e=\mathop{\mathrm{id}}_{[0, 1]}\in C([0, 1])$. Set $z_{-1/2}=0$, and inductively $z_j= z_{j-1/2} + y_j$, and $z_{j+1/2}=z_j+y_{j+1/2}$ for $j=0, 1, ..., \overline{N}-1$. We define $\omega_1=\xi_e$, $$\begin{aligned} \omega_m&=\xi_j \quad \ \ \text{ for } j=0, 1,..., \overline{N}-1, \text{ and } m\in \mathbb{N}\text{ with } z_{j-1/2}+2 \leq m \leq z_j, \\ \omega_m&=\xi_{j+1/2} \text{ for }j=0, 1,..., \overline{N}-2, \text{ and } m\in \mathbb{N}\text{ with } z_j +1\leq m \leq z_{j+1/2} +1. \end{aligned}$$ As shown in the next paragraph, there exist unital embeddings $\widetilde{\iota}_i$ of $M_{{\widetilde{p}}_i}$ into $M_d\otimes M_k$, $i= 0, 1$ such that $\varphi(f)(i)\in \mathop{\mathrm{Im}}(\widetilde{\iota}_i)$ for any $f\in I(p_0, p_1)$ and $i=0, 1$. Thus we may regard $\varphi$ as a unital embedding of $I(p_0, p_1)$ into $I({\widetilde{p}}_0, {\widetilde{p}}_1)$. Let $g_j$, $j=0, 1, ..., \overline{N}$ be positive continuous functions on $[0, 1]$ such that $g_j(j/\overline{N})=1$ and $\mathop{\mathrm{supp}}(g_j)\subset [(2j-1)/2\overline{N}, (2j+1)/2\overline{N}]\cap[0, 1]$.
For $i=0, 1$ and $j=i, 1+i, 2+i, ..., \overline{N}-1+i$, we let $h_j^{(i)}$ be the projection in $M_k$ such that $1_d\otimes h_j^{(i)}=\varphi(g_j\otimes 1_d)(i)$. Note that $$\mathop{\mathrm{rank}}(h_0^{(0)})=1+(y_0-1)=l_1r_0,\quad \mathop{\mathrm{rank}}(h_j^{(0)})= (y_{j-1/2}+1)+(y_j-1)=c_j{\widetilde{p}}_1,$$ for $j=1,2,...,\overline{N}-1$. Then there exist unital embeddings $\iota_0^{(0)}$ of $M_{r_0}$ into $1_d\otimes h_0^{(0)}M_kh_0^{(0)}$ and $\iota_{0, j}$ of $M_{c_jp_1}$ into $(\mathop{\mathrm{Im}}(\iota_0)'\cap M_d)\otimes h_j^{(0)} M_k h_j^{(0)}$, $j=1,2,..., \overline{N}-1$ such that $$\iota_0^{(0)}(e_{1, 1}^{(r_0)})\, 1_d\otimes e_{1, 1}^{(k)}= 1_d\otimes e_{1, 1}^{(k)},\quad \iota_{0, j}(M_{p_1}\otimes 1_{c_j})=(\mathop{\mathrm{Im}}(\iota_0)'\cap M_d)\otimes h_j^{(0)}.$$ In the matrix algebra $(\mathop{\mathrm{Im}}(\iota_0)'\cap M_d)\otimes M_k\cong M_{p_1k}$, it follows that $$\mathop{\mathrm{rank}}(\iota_0^{(0)}(e_{1,1}^{(r_0)}))=(l_1p_1r_0)/r_0={\widetilde{p}}_1,\quad \mathop{\mathrm{rank}}(\iota_{0, j}(e_{1, 1}^{(c_jp_1)}))=c_j{\widetilde{p}}_1/c_j={\widetilde{p}}_1,$$ for $j=1, 2, ..., \overline{N}-1$. By $\displaystyle r_0+\left(\sum_{j=1}^{\overline{N}-1}c_j\right)p_1=l_0$, we can find a unital embedding $\overline{\iota}_0$ of $M_{l_0}$ into $(\mathop{\mathrm{Im}}(\iota_0)'\cap M_d)\otimes M_k$ such that $\mathop{\mathrm{Im}}(\iota_0^{(0)})\cup \bigcup_{j=1}^{\overline{N}-1}\mathop{\mathrm{Im}}(\iota_{0, j})\subset \mathop{\mathrm{Im}}(\overline{\iota}_0)$ and $$\overline{\iota}_0(e_{1, 1}^{(l_0)})\, (1_d\otimes e_{1, 1}^{(k)}) = \iota_0^{(0)}(e_{1,1}^{(r_0)})(1_d\otimes e_{1, 1}^{(k)}) = 1_d\otimes e_{1, 1}^{(k)}.$$ In a similar way, we see that $\mathop{\mathrm{rank}}(h_j^{(1)})=(y_{j-1}-1)+(y_{j-1/2}+1)={\widetilde{p}}_0$ for $j=1, ..., \overline{N}-1$, and $\mathop{\mathrm{rank}}(h_{\overline{N}}^{(1)})=(y_{\overline{N}-1}-1)+1=l_0r_1$. 
Recall that the position of $\omega_1=\xi_e$ in the definition of $\varphi$ is $1_d\otimes e_{1, 1}^{(k)}$, which implies that $e_{1, 1}^{(k)}\, h_{\overline{N}}^{(1)}=e_{1, 1}^{(k)}$. Then there exist unital embeddings $\iota_1^{(1)}$ of $M_{r_1}$ into $1_d\otimes h_{\overline{N}}^{(1)}M_kh_{\overline{N}}^{(1)}$ and $\iota_{1, j}$ of $M_{p_0}$ into $(\mathop{\mathrm{Im}}(\iota_1)'\cap M_d)\otimes h_j^{(1)}M_kh_j^{(1)}$, $j=1,2,...,\overline{N}-1$ such that $$\iota_1^{(1)}(e_{1, 1}^{(r_1)})(1_d\otimes e_{1, 1}^{(k)})=1_d\otimes e_{1, 1}^{(k)},\quad \mathop{\mathrm{rank}}(\iota_{1, j}(e_{1, 1}^{(p_0)}))=\mathop{\mathrm{rank}}(\iota_1^{(1)}(e_{1, 1}^{(r_1)}))={\widetilde{p}}_0$$ in $(\mathop{\mathrm{Im}}(\iota_1)'\cap M_d)\otimes M_k$ for $j=1, 2,..., \overline{N}-1$. By $r_1+(\overline{N}-1)p_0=l_1$, we can also find a unital embedding $\overline{\iota}_1$ of $M_{l_1}$ into $(\mathop{\mathrm{Im}}(\iota_1)'\cap M_d)\otimes M_k$ such that $\mathop{\mathrm{Im}}(\iota_1^{(1)})\cup\bigcup_{j=1}^{\overline{N}-1}\mathop{\mathrm{Im}}(\iota_{1, j})\subset \mathop{\mathrm{Im}}(\overline{\iota}_1)$ and $$\overline{\iota}_1(e_{1, 1}^{(l_1)})(1_d\otimes e_{1, 1}^{(k)})=\iota_1^{(1)}(e_{1, 1}^{(r_1)})(1_d\otimes e_{1, 1}^{(k)}) =1_d\otimes e_{1, 1}^{(k)}.$$ For each $i=0, 1$, define a unital embedding $\widetilde{\iota}_i$ of $M_{{\widetilde{p}}_i}$ into $M_{dk}\cong M_{{\widetilde{p}}_0}\otimes M_{{\widetilde{p}}_1}$ by $$\widetilde{\iota}_i(a\otimes b) =(\iota_i(a)\otimes 1_k)\, \overline{\iota}_i(b)$$ for $a\in M_{p_i}$ and $b\in M_{l_i}$. Set $p_j^{(0)}=c_jp_1$ for $j=1, 2, ...,\overline{N}-1$ and $p_j^{(1)}=p_0$.
Because of $$(\mathop{\mathrm{Im}}(\iota_i)\otimes 1_k)((\mathop{\mathrm{Im}}(\iota_i)'\cap M_d)\otimes 1_k)\, \iota_{i, j}(1_{p_j^{(i)}})\subset\mathop{\mathrm{Im}}(\widetilde{\iota}_i),$$ for all $i=0, 1$ and $j=1, 2, ..., \overline{N}-1$, we conclude that $$\varphi(f)(i)=(f(i)\otimes 1_k)\, \iota_i^{(i)}(1_{r_i})+\sum_{j=1}^{\overline{N}-1}(f(j/\overline{N})\otimes 1_k)\, \iota_{i, j}(1_{p_j^{(i)}})\in \mathop{\mathrm{Im}}(\widetilde{\iota}_i),$$ for any $i=0, 1$ and $f\in I(p_0, p_1)$. Next, we show that $\varphi$ is $(F, \varepsilon)$-standard. For all $f\in F$, from $\mathop{\mathrm{Lip}}(f)<\varepsilon\overline{N}/8$ it follows that $$\max_{t\in [0, 1]}\left\lVert f\circ \omega_m(t) - f(j/\overline{N})\right\rVert <\varepsilon /8,$$ for all $j=1,2,..., \overline{N}-1$ and $m\in \mathbb{N}\setminus\{1\}$ with $(j-1){\widetilde{p}}_0+1\leq m\leq j{\widetilde{p}}_0$. For any tracial state $\tau$ of $I({\widetilde{p}}_0, {\widetilde{p}}_1)$, one obtains a probability measure $\mu_{\tau}$ on $[0, 1]$ such that $$\tau(g)=\int_{[0, 1]}{\mathrm{tr}}_{dk}(g(t))\ d\mu_{\tau}(t),$$ for any $g\in I({\widetilde{p}}_0, {\widetilde{p}}_1)$ (see [@JS Lemma 2.4] for example).
Because of ${\widetilde{p}}_i>(\widetilde{N}d)^2$ and $(l_0r_1+{\widetilde{p}}_0)/k< 2p_0/l_1 < \varepsilon/8$, it follows that for any tracial state $\tau$ of $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ and $f\in F$, $$\begin{aligned} \tau\circ\varphi(f) &=\int_{[0, 1]} {\mathrm{tr}}_d\otimes {\mathrm{tr}}_k(\varphi(f)(t))\ d\mu_{\tau}(t) \\ &\approx_{\frac{\varepsilon}{8}}\frac{1}{k}\sum_{m=2}^{k-l_0r_1}\int_{[0, 1]}{\mathrm{tr}}_d(f\circ\omega_m(t))\ d\mu_{\tau}(t) \\ &\approx_{\frac{\varepsilon}{4}}\frac{{\widetilde{p}}_0}{k}\sum_{j=1}^{\overline{N}}{\mathrm{tr}}_d(f(j/\overline{N}))\approx_{\frac{\varepsilon}{8}}\frac{1}{\overline{N}}\sum_{j=1}^{\overline{N}}{\mathrm{tr}}_d(f(j/\overline{N})) \\ &\approx_{\frac{\varepsilon}{2}}\int_{[0, 1]}{\mathrm{tr}}_d(f(t))\ d\mu(t).\end{aligned}$$ Finally, it remains to show that there exists a conditional expectation for $\varphi$. We define a unital completely positive map $E_{\varphi}$ from $I({\widetilde{p}}_0, {\widetilde{p}}_1)$ into $C([0, 1])\otimes M_d$ by $$E_{\varphi}=k(\mathop{\mathrm{id}}_{C([0, 1])\otimes M_d}\otimes\, {\mathrm{tr}}_k)\circ\mathop{\mathrm{Ad}}(1_{C([0, 1])\otimes M_d}\otimes e_{1, 1}^{(k)}).$$ It is straightforward to check that $E_{\varphi}\circ\varphi=\mathop{\mathrm{id}}_{I(p_0, p_1)}$. In the definition of $\overline{\iota}_i$, we have seen that $(1_d\otimes e_{1, 1}^{(k)})\overline{\iota}_i(e_{1, 1}^{(l_i)})=1_d\otimes e_{1, 1}^{(k)}$ for both $i=0, 1$. Then for any $g\in I({\widetilde{p}}_0, {\widetilde{p}}_1)$, it follows that $\mathop{\mathrm{Ad}}(1_d\otimes e_{1, 1}^{(k)})(g(i))\in \mathop{\mathrm{Im}}(\iota_i)\otimes e_{1, 1}^{(k)}$, which implies that $E_{\varphi}(g)(i)\in \mathop{\mathrm{Im}}(\iota_i)$ for both $i=0, 1$. 
Hence we may regard $E_{\varphi}$ as a conditional expectation for $\varphi$. ◻ # Expectationless embeddings {#Sec3} The construction of the endomorphism in Theorem [Theorem 1](#Thm1.1){reference-type="ref" reference="Thm1.1"} will be derived from the following basic observation on a continuous path joining two embeddings. Let $A$, $B$, and $B_i$, $i=0, 1$ be unital $\mathrm{C}^*$-algebras. For $i=0, 1$, we let $\iota_i$ be a unital embedding of $B_i$ into $B$ and $\psi_i$ a unital embedding of $A$ into $\mathop{\mathrm{Im}}(\iota_i)$. Assume that $\psi_0$ and $\psi_1$ are asymptotically unitarily equivalent by a continuous path $u_t$, $t\in[0, 1)$ of unitaries in $B$ with $u_0=1_B$ (that is, $\lim_{t\to 1}\mathop{\mathrm{Ad}}u_t\circ\psi_0(a)=\psi_1(a)$ in the norm topology of $B$ for any $a\in A$). We define a generalized dimension drop algebra $I(B_0, B_1)$ as the $\mathrm{C}^*$-subalgebra of $C([0, 1])\otimes B$ consisting of all functions $f$ such that $f(i)\in \mathop{\mathrm{Im}}(\iota_i)$ for both $i=0, 1$. Thus we obtain a unital embedding $\psi$ of $A$ into $I(B_0, B_1)$ defined by $$\psi(a)\, (t)= \begin{cases} \mathop{\mathrm{Ad}}u_t\circ\psi_0(a),\quad\ \quad &t\in[0, 1), \\ \psi_1(a), \quad &t=1,\quad \text{ for } a\in A. \end{cases}$$ **Lemma 4**. *In the setting above, additionally suppose that $A$ is simple. For each $i=0, 1$, if $\psi_i$ does not have a conditional expectation, then neither does $\psi$.* *Proof.* Assume that there exists a conditional expectation $E$ for $\psi$. For $f\in C([0, 1])$, it follows that $E(f\otimes 1_B)\in A'\cap A=\mathbb{C}1_A$. Regarding $E$ as a state of $C([0, 1])$, we obtain the probability measure $\mu_{E}$ on $[0, 1]$ which is determined by $$\int_{[0, 1]}f(t)\ d\mu_E(t)\cdot 1_A= E(f\otimes 1_B)\quad\text{ for } f\in C([0, 1]).$$ Denote by $z$ the positive continuous function on $[0, 1]$ defined by $z(t)=1-t$ for $t\in[0, 1]$.
For $b\in B_0$, we set $\widetilde{b}(t)=z(t)\mathop{\mathrm{Ad}}u_t\circ\iota_0(b)\in B$, $t\in [0, 1)$ and $\widetilde{b}(1)=0$. Then it follows that $\widetilde{b}\in I(B_0, B_1)$. In the case $\mu_E(z)=E(z\otimes 1_B)>0$, we can define a linear map $E_0$ from $B_0$ to $A$ by $$E_0(b)=\mu_E(z)^{-1}E(\widetilde{b})\quad\text{ for } b\in B_0.$$ It is straightforward to show that $E_0\circ\iota_0^{-1} : \mathop{\mathrm{Im}}(\iota_0)\rightarrow A$ is a conditional expectation for $\psi_0$, which is a contradiction. Assume that $\mu_E(z)=0$. Since $(E((1-z)^n\otimes \iota_1(b)))_{n\in\mathbb{N}}$ is a Cauchy sequence in $A$ in the norm topology for any $b\in B_1$, we obtain a completely positive map $E_1$ from $B_1$ to $A$ defined by $$E_1(b)=\lim_{n\to\infty}E((1-z)^n\otimes \iota_1(b))\quad\text{ for }b\in B_1.$$ By $\mu_E(z)=0$, the map $E_1\circ\iota_1^{-1} : \mathop{\mathrm{Im}}(\iota_1)\rightarrow A$ is a conditional expectation for $\psi_1$, which contradicts the assumption. ◻ By applying the above lemma, in order to construct the required endomorphism of $\mathcal{Z}$ in Theorem [Theorem 1](#Thm1.1){reference-type="ref" reference="Thm1.1"}, it suffices to find two unital expectationless embeddings of $\mathcal{Z}$ into UHF-algebras with relatively prime supernatural numbers. The following construction of unital embeddings can be regarded as a suitable modification of [@Sat Proposition 2.3] with conditional expectations. **Proposition 5**. *There exist two UHF-algebras $B_i$, $i=0, 1$ and unital embeddings $\iota_i$ of $\mathcal{Z}$ into $B_i$, $i=0, 1$ such that the supernatural numbers of $B_0$ and $B_1$ are relatively prime and $\iota_i$ does not have a conditional expectation for both $i=0, 1$.* *Proof.* To simplify notation, we regard $i=0, 1$ as elements in $\mathbb{Z}/2\mathbb{Z}$.
We let $p_n^{(i)}$, $q_n^{(i)}$, $n\in\mathbb{N}$, $i=0, 1$ be sequences of mutually relatively prime natural numbers such that $$p_n^{(i)}|p_{n+1}^{(i)},\quad q_n^{(i)}|q_{n+1}^{(i)},\quad p_n^{(i+1)}|(p_{n+1}^{(i)}/p_n^{(i)}-1), \quad q_n^{(i+1)}|(q_{n+1}^{(i)}/q_n^{(i)}-1),$$ for any $n\in\mathbb{N}$ and $i=0, 1$. For example, such natural numbers are obtained by $p_1^{(0)}=2$, $p_1^{(1)}=3$, $q_1^{(0)}=5$, $q_1^{(1)}=7$, $$\begin{aligned} p_{n+1}^{(0)}&=p_n^{(0)}(L_np_n^{(1)}q_n^{(0)}q_n^{(1)}+1),\quad p_{n+1}^{(1)}=p_n^{(1)}(L_np_{n+1}^{(0)}q_n^{(0)}q_n^{(1)}+1),\\ q_{n+1}^{(0)}&=q_n^{(0)}(L_np_{n+1}^{(0)}p_{n+1}^{(1)}q_n^{(1)}+1),\quad q_{n+1}^{(1)}=q_n^{(1)}(L_np_{n+1}^{(0)}p_{n+1}^{(1)}q_{n+1}^{(0)}+1), \end{aligned}$$ for some natural numbers $L_n$, $n\in\mathbb{N}$ inductively. Let $\varepsilon_n>0$, $n\in\mathbb{N}$ be a decreasing sequence which converges to $0$. Set $I_n=I(p_n^{(0)}, p_n^{(1)})$ and $J_n=I(q_n^{(0)}, q_n^{(1)})$, $n\in\mathbb{N}$. Taking large natural numbers $L_n$, $n\in\mathbb{N}$ and applying Proposition [Proposition 3](#Prop2.2){reference-type="ref" reference="Prop2.2"} (ii) inductively, for each $n\in\mathbb{N}$ we can obtain increasing sequences $F_{n, m}$ and $G_{n, m}$, $m\in\mathbb{N}$ of finite subsets in $I_n$ and $J_n$, and unital embeddings $\varphi_n :I_n\rightarrow I_{n+1}$ and $\psi_n : J_n\rightarrow J_{n+1}$ satisfying the following properties: - $\bigcup_{m=1}^{\infty} F_{n, m}$ and $\bigcup_{m=1}^{\infty} G_{n, m}$ are dense in the unit ball of $I_n$ and $J_n$, - $\bigcup_{l, m=1}^{n} \varphi_{n, m}(F_{m, l})\subset F_{n+1, n+1}$, $\bigcup_{l, m=1}^n \psi_{n, m}(G_{m, l})\subset G_{n+1, n+1}$,\ where $\varphi_{n, m}$ and $\psi_{n, m}$, $n>m$, denote the composed connecting maps defined by $$\varphi_{n, m}=\varphi_n\circ\varphi_{n-1}\circ\cdots\circ\varphi_m,\quad \psi_{n, m}=\psi_n\circ\psi_{n-1}\circ\cdots\circ\psi_m,$$ - $\varphi_n$ is $(F_{n, n}, \varepsilon_n)$-standard, $\psi_n$ is $(G_{n, n},
\varepsilon_n)$-standard, - there exist conditional expectations for $\varphi_n$ and $\psi_n$. Let $B_0$ and $B_1$ be UHF-algebras whose supernatural numbers are $$p_1^{(0)}p_1^{(1)}\prod_{n\in\mathbb{N}} \frac{p_{n+1}^{(0)}p_{n+1}^{(1)}}{p_n^{(0)}p_n^{(1)}}\quad\text{ and }\quad q_1^{(0)}q_1^{(1)}\prod_{n\in\mathbb{N}} \frac{q_{n+1}^{(0)}q_{n+1}^{(1)}}{q_n^{(0)}q_n^{(1)}}.$$ We shall construct the required embeddings $\iota_i$ just for $i=0$. By replacing $p_n^{(i)}$, $I_n$, $F_{n, m}$, and $\varphi_n$ with $q_n^{(i)}$, $J_n$, $G_{n, m}$, and $\psi_n$, the same argument shows the existence of $\iota_1$. To simplify notation, we set $l_n^{(i)}=p_{n+1}^{(i)}/p_n^{(i)}$, $i=0, 1$, $m_n=l_n^{(0)}l_n^{(1)}$, $d_n=p_n^{(0)}p_n^{(1)}$, and $C_n=C([0, 1])\otimes M_{d_n}$ for $n\in\mathbb{N}$. The construction of $\varphi_n$ in the proof of Proposition [Proposition 3](#Prop2.2){reference-type="ref" reference="Prop2.2"} (ii) is determined by the section of eigenvalues $\omega_j\in C([0, 1])$, $j=1,2,...,m_n$. By using the same $\omega_j$, we define unital embeddings ${\widetilde{\varphi}}_n$ of $C_n$ into $C_{n+1}$ by $${\widetilde{\varphi}}_n(f)=\mathop{\mathrm{diag}}(f\circ\omega_1, f\circ\omega_2, ..., f\circ\omega_{m_n})\quad\text{ for } f\in C_n.$$ It is clear that ${\widetilde{\varphi}}_n|_{I_n}=\varphi_n$ for any $n\in\mathbb{N}$. We denote by $\displaystyle\lim_{\longrightarrow}(I_n,\varphi_n)$ the inductive limit $\mathrm{C}^*$-algebra determined by $I_n$ and $\varphi_n$, $n\in\mathbb{N}$. For $n>m$, set $\varphi_{n,m}=\varphi_n\circ\varphi_{n-1}\circ\cdots\circ\varphi_m$ and let $\varphi_{\infty, n}$ be the induced map of $\{\varphi_n\}_{n\in\mathbb{N}}$ from $I_n$ to $\displaystyle\lim_{\longrightarrow}(I_n,\varphi_n)$. Since $\varphi_n$ is $(F_{n, n}, \varepsilon_n)$-standard for any $n\in\mathbb{N}$, it follows that the inductive limit $\mathrm{C}^*$-algebra $\displaystyle\lim_{\longrightarrow}(I_n,\varphi_n)$ has a unique tracial state.
Actually, if $\tau$ and $\sigma$ are two tracial states of $\displaystyle\lim_{\longrightarrow}(I_n,\varphi_n)$, then it follows that $$\begin{aligned} \tau\circ\varphi_{\infty, n}(f)&=\tau\circ\varphi_{\infty, m+1}\circ\varphi_{m, n}(f)\approx_{\varepsilon_m}\int_{[0, 1]}{\mathrm{tr}}_{d_m}(\varphi_{m-1, n}(f)(t))\ d\mu(t)\\ &\approx_{\varepsilon_m}\sigma\circ\varphi_{\infty, m+1}\circ\varphi_{m, n}(f)=\sigma\circ\varphi_{\infty, n}(f),\quad \text{for }f\in F_{n, n}.\end{aligned}$$ Since $\varepsilon_m$, $m\in\mathbb{N}$ converges to $0$, we have $\tau\circ\varphi_{\infty, n}(f)=\sigma\circ\varphi_{\infty, n}(f)$ for all $f\in F_{n, n}$. From the definition of $F_{n, n}$, it is not so hard to see that $\bigcup_{n=1}^{\infty}\varphi_{\infty, n}(F_{n, n})$ is dense in the unit ball of $\displaystyle\lim_{\longrightarrow}(I_n, \varphi_n)$, which implies that $\tau=\sigma$. By the construction of $\varphi_n$, the simplicity of $\displaystyle\lim_{\longrightarrow}(I_n,\varphi_n)$ follows from a standard argument using [@Dav Lemma III.4.1]. Then the classification theory of [@JS Theorem 6.2] allows us to assert that $\displaystyle\lim_{\longrightarrow}(I_n,\varphi_n)$ is isomorphic to $\mathcal{Z}$. For the same reason, we also see that the inductive limit $\mathrm{C}^*$-algebra $\displaystyle\lim_{\longrightarrow}(C_n, {\widetilde{\varphi}}_n)$ has a unique tracial state. Thus, because of [@Thoms Theorem 1.4], it follows that $\displaystyle\lim_{\longrightarrow}(C_n, {\widetilde{\varphi}}_n)$ is a UHF-algebra. Since it is well-known that $K_0(C_n)\cong K_0(M_{d_n})$ for any $n\in\mathbb{N}$ as ordered abelian groups, then the classification theorem of [@Ell] allows us to assert that $\displaystyle\lim_{\longrightarrow}(C_n, {\widetilde{\varphi}}_n)$ is isomorphic to $B_0$ (A slightly different argument is found in [@Sat Proposition 2.3]). For $n\in\mathbb{N}$, let $\eta_n$ be the canonical embedding of $I_n$ into $C_n$. 
One can check that $\eta_n$ does not have a conditional expectation for any $n\in\mathbb{N}$. Now the following diagram commutes: $$\xymatrix{ C_1\ar[r]^-{{\widetilde{\varphi}}_{1}}&C_2\ar[r]^-{{\widetilde{\varphi}}_{2}} &C_3\ar[r]&\quad\cdots \quad\ar[r]&C_n\ar[r]^-{{\widetilde{\varphi}}_{n}}&C_{n+1}\ar[r]^-{{\widetilde{\varphi}}_{n+1}}&\quad \cdots\\ I_1\ar[u]^-{\eta_{1}}\ar[r]_{\varphi_{1}}&I_2\ar[u]^-{\eta_{2}}\ar[r]_{\varphi_{2}} &I_3\ar[u]^-{\eta_{3}}\ar[r]&\quad\cdots\quad\ar[r] &I_n\ar[u]^-{\eta_{n}}\ar[r]_{\varphi_{n}}&I_{n+1}\ar[u]^-{\eta_{n+1}}\ar[r]_{\varphi_{n+1}}&\quad \cdots. }$$ We let $\iota_0$ be the induced map of $\eta_n$, $n\in\mathbb{N}$, which is determined by $\iota_0(\varphi_{\infty, n}(a))={\widetilde{\varphi}}_{\infty, n}\circ\eta_n(a)$ for $a\in I_n$. This $\iota_0$ is a unital embedding of $\mathcal{Z}$ into $B_0$ which does not have a conditional expectation. Indeed, considering conditional expectations $E_n$ for $\varphi_n$, $n\in\mathbb{N}$, we obtain a conditional expectation $E$ for $\varphi_{\infty, 1}$ defined by $$E(\varphi_{\infty, n}(a))=E_1\circ E_2\circ\cdots\circ E_{n-1}(a)\quad\text{ for }n\in\mathbb{N}\text{ and } a\in I_n.$$ If there exists a conditional expectation $E_{\iota_0}$ for $\iota_0$, then the composition map $E\circ E_{\iota_0}\circ{\widetilde{\varphi}}_{\infty, 1}$ would be a conditional expectation for $\eta_1$, which is a contradiction. ◻ *Proof of Theorem 1.1.*   By Proposition [Proposition 5](#Prop3.2){reference-type="ref" reference="Prop3.2"}, we obtain two UHF-algebras $B_i$, $i=0, 1$ whose supernatural numbers are relatively prime, and unital embeddings $\overline{\psi}_i$, $i=0, 1$ of $\mathcal{Z}$ into $B_i$ without conditional expectations. Although one can show that $\overline{\psi}_0$ and $\overline{\psi}_1$ are asymptotically unitarily equivalent using [@LN Lemma 5.3], we take an elementary approach for the sake of a self-contained presentation.
Considering the infinite tensor product $\bigotimes_{n\in\mathbb{N}} B_i$ instead of $B_i$ for both $i= 0, 1$, we may assume that $B_i$, $i=0, 1$ are UHF-algebras of infinite type. Let $\Phi_i$, $i=0, 1$ be isomorphisms from $B_i\otimes B_i$ onto $B_i$. We identify $B_0\otimes B_1$ with $B_1\otimes B_0$ without further mention, and let $B=B_0\otimes B_1$. For $i=0, 1$, we let $\iota_i$ be the canonical embedding of $B_i$ into $B$. Define unital embeddings ${\widetilde{\psi}}_i$, $i=0, 1$ of $\mathcal{Z}\otimes B$ into $B$ by $${\widetilde{\psi}}_i(a\otimes b_0\otimes b_1)=\Phi_i(\overline{\psi}_i(a)\otimes b_i)\otimes b_{i+1}\quad \text{for }a\in \mathcal{Z},\ b_{0}\in B_{0},\ b_{1}\in B_{1},$$ where we regard $i=0, 1$ as elements in $\mathbb{Z}/2\mathbb{Z}$. Since ${\widetilde{\psi}}_0$ and ${\widetilde{\psi}}_1$ are unital endomorphisms of the UHF-algebra $B\cong \mathcal{Z}\otimes B$, they are asymptotically unitarily equivalent, see [@Bl Theorem 3.1] or [@Ror0 Proposition 1.3.4]. We define unital embeddings $\psi_i$, $i=0, 1$ of $\mathcal{Z}$ into $\mathop{\mathrm{Im}}(\iota_i)$ by $\psi_i(a)={\widetilde{\psi}}_i(a\otimes 1_B)$ for $a\in \mathcal{Z}$. Then $\psi_0$ and $\psi_1$ are also asymptotically unitarily equivalent in $B$. Since $\overline{\psi}_i$ does not have a conditional expectation, neither does $\psi_i$ for both $i=0, 1$. Since any unitary in $B$ is homotopic to $1_B$, these $\iota_i$ and $\psi_i$, $i=0, 1$ satisfy the assumptions of Lemma [Lemma 4](#Lem3.1){reference-type="ref" reference="Lem3.1"}. Then there exists a unital embedding $\psi$ of $\mathcal{Z}$ into $I(B_0, B_1)$ which does not have a conditional expectation. By [@RW Proposition 3.3], there exists a unital embedding $\iota$ of $I(B_0, B_1)$ into $\mathcal{Z}$. Therefore, the composition map $\varphi=\iota\circ \psi$ is a unital endomorphism of $\mathcal{Z}$ which does not have a conditional expectation.
Indeed, if there exists a conditional expectation $E_{\varphi}$ for $\varphi$, then $E_{\psi}=E_{\varphi}\circ\iota$ is one for $\psi$. Let $\iota$ be a unital embedding of $B$ into $A$, and let $\varphi$ be the endomorphism of $\mathcal{Z}$ obtained in (i). Then the tensor product $\eta=\iota\otimes\varphi$ is a unital embedding of $B\cong B\otimes \mathcal{Z}$ into $A\cong A\otimes \mathcal{Z}$. If there exists a conditional expectation $E_{\eta}$ for $\eta$, then taking a state $\omega$ of $B$ we can define a unital completely positive map $E_{\varphi}$ from $\mathcal{Z}$ into $\mathcal{Z}$ by $E_{\varphi}(a)=(\omega\otimes \mathop{\mathrm{id}}_{\mathcal{Z}})\circ E_{\eta}(1_A\otimes a)$ for $a\in \mathcal{Z}$. It is straightforward to check that $E_{\varphi}$ is a conditional expectation for $\varphi$, which contradicts the property of $\varphi$ obtained in (i). ◻ **Corollary 6**. *(i) For a unital nuclear $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebra, there exists a unital endomorphism of it which does not have a conditional expectation. In particular, UHF-algebras $M_{n^{\infty}}$, $n\in\mathbb{N}\setminus\{1\}$, the Cuntz algebras $\mathcal{O}_n$, $n\in\mathbb{N}\cup\{\infty\}\setminus\{ 1 \}$, and the irrational rotation algebras $\mathcal{A}_{\theta}$, $\theta\in\mathbb{R}\setminus\mathbb{Q}$ have such unital endomorphisms.* *(ii) For any supernatural number $\frak n$, there exists a unital embedding of $\mathcal{Z}$ into the UHF-algebra of type $\frak n$ which does not have a conditional expectation. In other words, for any given pair of relatively prime supernatural numbers we have two UHF-algebras and unital embeddings satisfying the condition in Proposition [Proposition 5](#Prop3.2){reference-type="ref" reference="Prop3.2"}.* *(iii) For relatively prime supernatural numbers $\frak p$ and $\frak q$, there exists a unital embedding of $I(M_{\frak p}, M_{\frak q})$ into $\mathcal{Z}$ which does not have a conditional expectation.
(This result is a variation of [@RW Proposition 3.3] involving the expectationless condition.)* *Proof.* Considering a unital endomorphism $\varphi$ of $\mathcal{Z}$ which does not have a conditional expectation by Theorem [Theorem 1](#Thm1.1){reference-type="ref" reference="Thm1.1"} (i), for a unital nuclear $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebra $A$ it is obvious that the endomorphism $\mathop{\mathrm{id}}_A\otimes \varphi$ of $A\otimes \mathcal{Z}\cong A$ does not have a conditional expectation. It is well-known that both the UHF-algebra $M_{\frak n}$ of type ${\frak n}$ and $\mathcal{Z}$ absorb $\mathcal{Z}$ tensorially [@JS]. By taking a unital embedding of $\mathcal{Z}$ into $M_{\frak n}$ (for example the canonical embedding $\mathcal{Z}\rightarrow\mathcal{Z}\otimes M_{\frak n}\cong M_{\frak n}$), the statement follows from Theorem [Theorem 1](#Thm1.1){reference-type="ref" reference="Thm1.1"} (ii) directly. By [@RW Corollary 3.2] and [@JS], we see that both $I(M_{\frak p}, M_{\frak q})$ and $\mathcal{Z}$ absorb $\mathcal{Z}$ tensorially. Furthermore, from [@RW Proposition 3.3] there exists a unital embedding of $I(M_{\frak p}, M_{\frak q})$ into $\mathcal{Z}$. Then the statement also follows from Theorem [Theorem 1](#Thm1.1){reference-type="ref" reference="Thm1.1"} (ii). ◻ # Non-transportable $\mathrm{C}^*$-algebras Endomorphisms without conditional expectations were studied in [@EK Appendix B, Section 6] to consider the question: *Is it true that the Cuntz algebra $\mathcal{O}_2$, or a unital separable infinite dimensional AF-algebra is transportable in $\mathcal{O}_2$?* Our main result can be applied to give a negative answer for $\mathcal{O}_2$ and simple AF-algebras, more generally for unital separable nuclear $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebras. We recall the definition of transportable $\mathrm{C}^*$-algebras in $\mathcal{O}_2$ introduced by E. Kirchberg. **Definition 7**. 
For a unital separable $\mathrm{C}^*$-algebra $A$, a unital embedding $\iota$ of $A$ into $\mathcal{O}_2$ is called *in general position* if there exists a unital embedding of $\mathcal{O}_2$ into the relative commutant $\mathrm{C}^*$-algebra $\mathop{\mathrm{Im}}(\iota)'\cap \mathcal{O}_2$ of the image of $\iota$. A unital separable nuclear $\mathrm{C}^*$-algebra $A$ is called *transportable in $\mathcal{O}_2$* if for any two unital embeddings $\iota$ and $\eta$ of $A$ into $\mathcal{O}_2$ in general position there exists an automorphism $\alpha$ of $\mathcal{O}_2$ such that $\alpha\circ \iota=\eta$. **Theorem 8**. *No unital separable nuclear $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebra is transportable in $\mathcal{O}_2$. In particular, Kirchberg algebras, unital separable $\mathcal{Z}$-absorbing $AF$-algebras, and irrational rotation algebras are not transportable in $\mathcal{O}_2$.* *Proof.* Kirchberg's famous theorem [@K1] (see also [@Ror0 Theorem 6.3.12]) allows us to obtain an embedding $\iota$ of $A$ into $\mathcal{O}_2$ and a conditional expectation $E_{\iota}$ for $\iota$. Replacing $\mathcal{O}_2$ with $\iota(1_A)\mathcal{O}_2\iota(1_A)\cong \mathcal{O}_2$, we may assume that $\iota$ is a unital embedding. Considering $\mathcal{O}_2\otimes\mathcal{O}_2\cong\mathcal{O}_2$ and a state of $\mathcal{O}_2$, we may further assume that $\iota$ is in general position. By Corollary [Corollary 6](#Cor3.3){reference-type="ref" reference="Cor3.3"} (i), there exists a unital endomorphism $\varphi$ of $A$ which does not have a conditional expectation. Define a unital embedding $\eta$ of $A$ into $\mathcal{O}_2$ by $\eta=\iota\circ\varphi$, which is also in general position. Note that $\eta$ does not have a conditional expectation. So, if there is an automorphism $\alpha$ of $\mathcal{O}_2$ satisfying $\alpha\circ\iota=\eta$, then $E_{\iota}\circ\alpha^{-1}$ becomes a conditional expectation for $\eta$, which is a contradiction.
◻ The author would like to thank Professor Masaki Izumi and Professor Narutaka Ozawa for helpful comments on this research. He also wishes to express his gratitude to Professor Guihua Gong, Professor Huaxin Lin, and the organizers of Special Week on Operator Algebras 2023. B. E. Blackadar, *A simple unital projectionless $\mathrm{C}^*$-algebra*, J. Operator Theory 5 (1981), no. 1, 63--71. N. Brown and N. Ozawa, [$\mathrm{C}^*$-algebras and finite-dimensional approximations]{.roman}, Graduate Studies in Mathematics, vol. 88, American Mathematical Society, Providence, RI, 2008. J. Castillejos, S. Evington, A. Tikuisis, S. White, and W. Winter, *Nuclear dimension of simple $\mathrm{C}^*$-algebras*, Invent. Math. 224 (2021), no. 1, 245--290. K. R. Davidson, [$\mathrm{C}^*$-algebras by example]{.roman}, Fields Institute Monographs, 6. American Mathematical Society, 1996. xiv+309 pp. G. A. Elliott, *On the classification of inductive limits of sequences of semisimple finite-dimensional algebras*, J. Algebra, 38 (1976), 29--44. G. A. Elliott, G. Gong, H. Lin, and Z. Niu, *On the classification of simple amenable $\mathrm{C}^*$-algebras with finite decomposition rank, II*, preprint, arXiv:1507.03437. G. A. Elliott, G. Gong, H. Lin, and Z. Niu, *The classification of simple separable $KK$-contractible $\mathrm{C}^*$-algebras with finite nuclear dimension*, J. Geom. Phys. 158 (2020), 103861, 51 pp. G. A. Elliott and Z. Niu, *On the classification of simple amenable $\mathrm{C}^*$-algebras with finite decomposition rank*, Operator algebras and their applications, Contemp. Math., 671, Amer. Math. Soc., Providence, RI, 2016, 117--125. G. A. Elliott and Y. Sato, *Rationally AF algebras and KMS states of $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebras*, preprint, arXiv:2207.11653. G. A. Elliott, Y. Sato, and K. Thomsen, *On the bundle of KMS state spaces for flows on a $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebra*, Comm. Math. Phys. 393 (2022), no. 2, 1105--1123. G. A.
Elliott and A. S. Toms, *Regularity properties in the classification program for separable amenable $\mathrm{C}^*$-algebras*, Bull. Amer. Math. Soc. (N.S.) 45 (2008). G. A. Elliott and K. Thomsen, *The bundle of KMS state spaces for flows on a unital AF $\mathrm{C}^*$-algebra*, C. R. Math. Acad. Sci. Soc. R. Can. 43 (2022), no. 4, 103--121, arXiv:2109.06464. X. Jiang and H. Su, *On a simple unital projectionless $\mathrm{C}^*$-algebra*, Amer. J. Math. 121 (1999), no. 2, 359--413. E. Kirchberg, *The classification of purely infinite $C^*$-algebras using Kasparov's theory*, preprint, 1994. E. Kirchberg, *Das nicht-kommutative Michael-Auswahlprinzip und die Klassifikation nichteinfacher Algebren*, $\mathrm{C}^*$-algebras (Münster, 1999), Springer, Berlin, 2000, pp. 92--141. E. Kirchberg and N. C. Phillips, *Embedding of exact $C^*$-algebras in the Cuntz algebra $\mathcal{O}_2$*, J. Reine Angew. Math. 525 (2000), 17--53. E. Kirchberg, *The Classification of Purely Infinite C\*-Algebras Using Kasparov's Theory*, manuscript available at https://ivv5hpp.uni-muenster.de/u/echters/ekneu1.pdf, Münster, July 18, 2023, J. Cuntz, S. Echterhoff, J. Gabe, M. Rørdam. C. Lance, *On Nuclear C\*-Algebras*, J. Funct. Anal. 12 (1973), 157--176. G. Gong, H. Lin, and Z. Niu, *A classification of finite simple amenable $\mathcal{Z}$-stable $\mathrm{C}^*$-algebras, I: $\mathrm{C}^*$-algebras with generalized tracial rank one*, C. R. Math. Acad. Sci. Soc. R. Can. 42 (2020), no. 3, 63--450. G. Gong, H. Lin, and Z. Niu, *A classification of finite simple amenable $\mathcal{Z}$-stable $\mathrm{C}^*$-algebras, II: $\mathrm{C}^*$-algebras with rational generalized tracial rank one*, C. R. Math. Acad. Sci. Soc. R. Can. 42 (2020), no. 4, 451--539. A. Kishimoto, *Non-commutative shifts and crossed products*, J. Funct. Anal. 200 (2003), no. 2, 281--300. H. Lin and Z. Niu, [Lifting $KK$-elements, asymptotic unitary equivalence and classification of simple C$^*$-algebras]{.roman}. *Adv.
Math.*, **219** (2008), 1729--1769. H. Matui and Y. Sato, *Decomposition rank of UHF-absorbing $\mathrm{C}^*$-algebras*, Duke Math. J. 163 (14) (2014), 2687--2708. N. C. Phillips, *A classification theorem for nuclear purely infinite simple $\mathrm{C}^*$-algebras*, Doc. Math. 5 (2000), 49--114. M. Rørdam, *Classification of nuclear $C^*$-algebras*, Entropy in operator algebras, pp. 1--145. Encyclopaedia Math. Sci. 126, Springer, Berlin, 2002. M. Rørdam, *The stable and the real rank of $\mathcal{Z}$-absorbing C$^*$-algebras*, Internat. J. Math., 15 (2004), 1065--1084. M. Rørdam and W. Winter, *The Jiang-Su algebra revisited*, J. Reine Angew. Math. 642 (2010), 129--155. S. Sakai, [C\*-algebra and W\*-algebra]{.roman}, Reprint of the 1971 edition, Classics Math., Springer, Berlin, 1998. Y. Sato, *Certain aperiodic automorphisms of unital simple projectionless C$^*$-algebras*, Internat. J. Math. 20 (2009), no. 10, 1233--1261. Y. Sato, S. White, and W. Winter, *Nuclear dimension and $\mathcal{Z}$-stability*, Invent. Math. 202 (2015), no. 2, 893--921. K. Thomsen, *Inductive limits of interval algebras: the tracial state space*, Amer. J. Math. 116 (1994), no. 3, 605--620. K. Thomsen, *The possible temperatures for flows on a simple AF algebra*, Comm. Math. Phys. 386 (2021), no. 3, 1489--1518. A. Tikuisis, S. White, and W. Winter, *Quasidiagonality of nuclear $\mathrm{C}^*$-algebras*, Ann. of Math. (2) 185 (2017), no. 1, 229--284. S. White, *Abstract classification theorems for amenable $\mathrm{C}^*$-algebras*, Proceedings of the International Congress of Mathematicians 2022, to appear, arXiv:2307.03782. W. Winter, *Localizing the Elliott conjecture at strongly self-absorbing $\mathrm{C}^*$-algebras, with an appendix by Huaxin Lin*, preprint, arXiv:0708.0283. W. Winter, *Structure of nuclear C\*-algebras: From quasidiagonality to classification, and back again*, World Sci. Publ., Hackensack, NJ, 2018, pp. 1801--1823.
Yasuhiko Sato\ Graduate School of Mathematics,\ Kyushu University,\ 744 Motoka, Nishi-ku, Fukuoka\ 819-0395 Japan\ e-mail: ysato\@math.kyushu-u.ac.jp [^1]: The author was supported by JSPS KAKENHI Grant Number 19K03516 and the Department of Mathematical Sciences, Kyushu University.
{ "id": "2309.11068", "title": "Endomorphisms of Z-absorbing C*-algebras without conditional\n expectations", "authors": "Yasuhiko Sato", "categories": "math.OA math.FA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We study a class of linear parabolic path-dependent PDEs (PPDEs) defined on the space of càdlàg paths ${\rm x}\in D([0,T])$, in which the coefficient functions at time $t$ depend on ${\rm x}(t)$ and $\int_{0}^{t}{\rm x}(s)dA_{s}$, for some (deterministic) continuous function $A$ of bounded variation. Under uniform ellipticity and Hölder regularity conditions on the coefficients, together with some technical conditions on $A$, we obtain the existence of a smooth solution to the PPDE by appealing to the notion of Dupire's derivatives. It provides a generalization of the existing literature studying the case where $A_t = t$, and complements our recent work in [@bouchard2021approximate] on the regularity of approximate viscosity solutions for parabolic PPDEs. author: - "Bruno Bouchard [^1]" - "Xiaolu Tan [^2]" title: On the regularity of solutions of some linear parabolic path-dependent PDEs --- **Keywords:** Path-dependent PDE, degenerate parabolic PDE, Dupire's functional calculus.\ **MSC2020 subject classifications:** 35K65, 60H10.
# Introduction We consider linear parabolic path-dependent PDEs (PPDEs) of the form $$\begin{aligned} \label{eq:ppde_intro} \partial_{t}{\rm v}~+~ \bar{\mu}\partial_{{\rm x}} {\rm v}~+~\frac12 \bar{\sigma}^{2} \partial^{2}_{{\rm x}} {\rm v}~+~ \bar \ell &= 0, ~\mbox{on}~ [0,T)\times D([0,T])\\ {\rm v}(T,\cdot)&=\bar g~\mbox{on}~ D([0,T]).\nonumber \end{aligned}$$ In the above, $D([0,T])$ denotes the space of all real-valued càdlàg paths ${\rm x}= ({\rm x}(t))_{t \in [0,T]}$ on $[0,T]$, the derivatives are taken in the sense of Dupire [@dupireito; @cont2013functional] (see Section [2.1](#sec:conditions){reference-type="ref" reference="sec:conditions"} below), and the coefficient functions $(\bar{\mu}, \bar{\sigma}, \bar \ell,\bar g): [0,T] \times D([0,T]) \longrightarrow \mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ are of the form $$\big( \bar{\mu}_{t}, \bar{\sigma}_{t}, \bar \ell_t,\bar g \big) ( {\rm x}) ~=~ \big( \mu_{t}, \sigma_{t}, \ell_{t}\big) \big( {\rm x}(t), I_t({\rm x}) \big),\;\bar g({\rm x})=g({\rm x}(T),I_{T}({\rm x})), ~~\mbox{with}~ I_t({\rm x}) := \int_{0}^{t} {\rm x}(s)dA_{s},$$ for some functions $(\mu, \sigma, \ell,g): [0,T] \times\mathbb{R}^2 \longrightarrow \mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}$, and a continuous process $A$ of bounded variation.
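As a quick numerical illustration (not part of the paper's argument), the driving functional $I_t({\rm x})=\int_0^t{\rm x}(s)\,dA_s$ is a Riemann--Stieltjes integral against the bounded-variation function $A$, and can be approximated by left-point sums; the grid size and the test paths below are arbitrary choices:

```python
# Left-point Riemann-Stieltjes sums approximating I_t(x) = ∫_0^t x(s) dA_s.
def stieltjes(x, A, grid):
    return sum(x(grid[k]) * (A(grid[k + 1]) - A(grid[k])) for k in range(len(grid) - 1))

n = 10000
grid = [k / n for k in range(n + 1)]
A = lambda s: s * s                    # a smooth bounded-variation integrator (illustrative)

# Continuous path x(s) = s:  ∫_0^1 s d(s^2) = ∫_0^1 2 s^2 ds = 2/3.
assert abs(stieltjes(lambda s: s, A, grid) - 2 / 3) < 1e-3

# Cadlag step path x = 1 on [0, 1/2) and 3 on [1/2, 1]:  1*(1/4) + 3*(3/4) = 5/2.
x = lambda s: 1.0 if s < 0.5 else 3.0
assert abs(stieltjes(x, A, grid) - 2.5) < 1e-9
```

Left-point sums are the natural choice here since the paths are càdlàg while $A$ is continuous.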
When $A$ is absolutely continuous, say simply $A_t = t$, the above can be written as a degenerate parabolic PDE $$\label{eq:pde_intro} \partial_{t} v + \mu \partial_{ x_{1}} v+ x_{1} \partial_{x_{2}} v +\frac12 \sigma^{2} \partial^{2}_{x_{1}x_{1}} v + \ell = 0 ~\mbox{on}~ [0,T)\times\mathbb{R}^{2},\;v(T,\cdot)=g ~\mbox{on}~ \mathbb{R}^{2},$$ in which the derivatives are now taken in the usual sense and $${\rm v}(t,{\rm x}) ~=~ v \big( t,{\rm x}(t), I_t({\rm x}) \big).$$ Indeed, Dupire's horizontal derivative $\partial_{t}{\rm v}$ and vertical derivatives $(\partial_{{\rm x}} {\rm v},\partial^{2}_{{\rm x}} {\rm v})$ are related to the partial derivatives of $v$ through $$(\partial_{t},\partial_{{\rm x}} ,\partial^{2}_{{\rm x}} ) {\rm v}(t,{\rm x}) = \left(\partial_{t}+{\rm x}(t) \partial_{x_{2}},\partial_{x_{1}},\partial^{2}_{x_{1}x_{1}}\right) v \big(t,{\rm x}(t), I_t({\rm x}) \big).$$ Various works, going back to [@kolmogorov1934theorie], are devoted to such equations in more complex multivariate frameworks, see e.g. [@delarue2010density; @francesco2005class; @lanconelli2002linear; @sonin1967class; @weber1951fundamental] and the references therein. The latter PDE may not admit a $C^{1,2}$-solution, in the traditional sense, even when $\bar{\sigma}$ is uniformly elliptic: $\partial_{t}v$ and $\partial_{x_{2}} v$ are in general not well-defined and one needs to define $\partial_{t}v + x_{1} \partial_{x_{2}} v$ jointly, appealing to the notion of Lie derivative, which amounts to considering Dupire's horizontal derivative when the PDE is seen as a PPDE. The main novelty of this paper is that we no longer assume that $(A_t)_{t\in [0,T]}$ is absolutely continuous in $t$. In this case, the PDE formulation [\[eq:pde_intro\]](#eq:pde_intro){reference-type="eqref" reference="eq:pde_intro"} is no longer valid, but the PPDE formulation [\[eq:ppde_intro\]](#eq:ppde_intro){reference-type="eqref" reference="eq:ppde_intro"} is still adequate.
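The chain rule above can be checked by finite differences on a discretized path. A minimal sketch (with $A_t=t$, an arbitrary smooth test function $v$, and a crude left Riemann sum for $I_t$; all numerical choices are illustrative):

```python
import numpy as np

T, N = 1.0, 2000
ts = np.linspace(0.0, T, N + 1)
dt = T / N

def v(t, x1, x2):                      # an arbitrary smooth test function
    return x1**2 + t * x2 + np.sin(x2)

def V(k, path):                        # v(t_k, x(t_k), I_{t_k}) with A_t = t
    I = np.sum(path[:k]) * dt          # left Riemann sum for I_t = ∫_0^t x(s) ds
    return v(ts[k], path[k], I)

path = np.cos(ts)                      # a sample continuous path
k = N // 2
s, x1 = ts[k], path[k]
I = np.sum(path[:k]) * dt

# Vertical derivative: bumping the path by y 1_{[s,T]} leaves I_s unchanged,
# so one expects d_x v = d_{x1} v = 2 x1.
y = 1e-6
bumped = path.copy(); bumped[k:] += y
num_vert = (V(k, bumped) - V(k, path)) / y
assert abs(num_vert - 2 * x1) < 1e-4

# Horizontal derivative: freezing the path at s and moving time forward,
# so one expects d_s v = (d_t + x(s) d_{x2}) v = I + x1 (s + cos(I)).
frozen = path.copy(); frozen[k:] = path[k]
num_horiz = (V(k + 1, frozen) - V(k, path)) / dt
exact = I + x1 * (s + np.cos(I))
assert abs(num_horiz - exact) < 1e-2
```

The two assertions mirror the two components of the displayed identity; the second-order vertical derivative can be checked in the same way with a central difference.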
We provide conditions under which [\[eq:ppde_intro\]](#eq:ppde_intro){reference-type="eqref" reference="eq:ppde_intro"} admits a solution that is smooth in the sense of Dupire's derivatives. It complements [@bouchard2021approximate] in which coefficients are assumed to be $C^{1+\alpha}$, which allows one to construct the so-called approximate viscosity solutions of non-linear path-dependent PDEs with a first-order Dupire vertical derivative enjoying some Hölder-type regularity (see [@bouchard2021approximate] for details). As shown in e.g. [@bouchard2021c], in many situations, this is already sufficient to derive a Feynman-Kac's representation of the solution by appealing to a version of Itô-Dupire's stochastic calculus for path-dependent functionals that are only vertically differentiable up to the first order. In contrast to [@bouchard2021approximate], we only assume here that the coefficients are Hölder continuous, but require $\bar \sigma$ to be non-degenerate, so as to expect the classical regularization effect to operate. We rely on the parametrix approach, see e.g. [@friedman2008partial Chapter 1]. For this, we perform a change of variables which allows us to reduce to a PDE of the form $$\partial_{t} u + \mu \langle (1,A), D u \rangle + \frac12 \sigma^{2} (1,A) D^{2} u (1,A)^{\top} = 0,$$ which can be written even if $A$ is not absolutely continuous. The above is again degenerate and $(D u,D^{2} u)$ may not be well-defined. However, the parametrix approach allows one to show that $\langle (1,A), D u \rangle$ and $(1,A) D^{2} u (1,A)^{\top}$ are, which in turn implies that the vertical derivatives $(\partial_{{\rm x}} {\rm v},\partial^{2}_{{\rm x}} {\rm v})$ of the path-dependent functional ${\rm v}$ are as well.
As a by-product, we establish the existence and uniqueness of a weak solution to the path-dependent stochastic differential equation (SDE) $$X_t = X_0 + \int_0^t \mu_{s} ( X_s , I_s ) d s + \int_0^t \sigma_{s} ( X_s , I_s ) dW_s, ~~~ I_t = \int_{0}^{t} X_s d A_s, ~~ t \ge 0,$$ and provide some first properties of the transition density of the Markov process $( X, I)$, as well as the corresponding Feynman-Kac's formula. These results require structural conditions relating the Hölder regularity of the coefficients $(\mu,\sigma)$ and the path behavior of $A$. If one knows a priori that the above SDE admits a unique weak solution, then one can prove under weaker conditions that the candidate solution to [\[eq:ppde_intro\]](#eq:ppde_intro){reference-type="eqref" reference="eq:ppde_intro"}, deduced from a formal application of Feynman-Kac's formula[^3], is already $C^{1}$ in space, in the sense of Dupire. As mentioned above, this turns out to be enough to deduce its Itô-Dupire's semimartingale decomposition. Throughout this paper, we stick to a one-dimensional setting for ease of notation. Extensions to multivariate frameworks can be provided by using similar techniques. The rest of this paper is organized as follows. Section [2](#sec: main results){reference-type="ref" reference="sec: main results"} states our main results. Proofs are collected in Section [3](#sec: proofs){reference-type="ref" reference="sec: proofs"}. In the following, the $i$-th component of a vector $x$ is denoted by $x_{i}$, and the $(i,j)$-component of a matrix $M$ is denoted by $M_{ij}$. Given $\phi: (t,x)\in [0,T] \times\mathbb{R}^2 \longrightarrow \phi(t,x)\in \mathbb{R}$, we let $D\phi$ and $D^{2}\phi$ (or $D_{x}\phi$ and $D^{2}_{xx}\phi$) be the gradient and the Hessian matrix with respect to $x$. The space partial derivatives are denoted by $\partial_{x_{i}}\phi$, $\partial^{2}_{x_{i}x_{j}}\phi$, and so on if we have to consider higher orders.
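Assuming well-posedness, the path-dependent SDE above can be approximated by a plain Euler scheme in which $I$ is updated with the increments of $A$. A minimal sketch; the coefficients (bounded drift, uniformly elliptic volatility) and the choice $A_t=\sqrt{t}$ are illustrative, not those of the paper:

```python
import numpy as np

def euler_path_sde(mu, sigma, A, x0, T=1.0, n=1000, seed=0):
    """Euler scheme for dX = mu(t,X,I) dt + sigma(t,X,I) dW, with I updated by dI = X dA."""
    rng = np.random.default_rng(seed)
    ts = np.linspace(0.0, T, n + 1)
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    X = np.empty(n + 1); I = np.empty(n + 1)
    X[0], I[0] = x0, 0.0
    for k in range(n):
        X[k + 1] = X[k] + mu(ts[k], X[k], I[k]) * dt + sigma(ts[k], X[k], I[k]) * dW[k]
        I[k + 1] = I[k] + X[k] * (A(ts[k + 1]) - A(ts[k]))   # dI = X dA
    return ts, X, I

# Illustrative coefficients: |mu| <= 1 and 0.9 <= sigma <= 1.1, and A_t = sqrt(t).
mu = lambda t, x, i: np.tanh(i)
sigma = lambda t, x, i: 1.0 + 0.1 * np.sin(x)
A = lambda t: np.sqrt(t)

ts, X, I = euler_path_sde(mu, sigma, A, x0=1.0)
assert np.all(np.isfinite(X)) and np.all(np.isfinite(I))
```

Note that the scheme only uses the increments of $A$, so it makes sense even when $A$ is not absolutely continuous.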
# Dupire's regularity for linear PPDEs depending on the average of the path {#sec: main results} ## Notations and assumptions {#sec:conditions} Given $T> 0$, let $D([0,T])$ denote the Skorokhod space of all $\mathbb{R}$--valued càdlàg paths ${\rm x}=({\rm x}(t))_{t \in [0,T]}$ on $[0,T]$, and let $C([0,T])$ denote the subspace of continuous paths. Let us equip $D([0,T])$ with the Skorokhod topology, and $C([0,T])$ with the uniform convergence topology. Let $A = (A_t)_{t \ge 0}$ be a deterministic continuous process with finite variation, and $(\mu, \sigma, \ell): [0,T] \times\mathbb{R}^2 \longrightarrow \mathbb{R}\times\mathbb{R}\times\mathbb{R}$ be coefficient functions, from which we define path-dependent functionals $(\bar{\mu}, \bar{\sigma}, \bar \ell): [0,T] \times D([0,T]) \longrightarrow \mathbb{R}\times\mathbb{R}\times\mathbb{R}$ by $$\big( \bar{\mu}_{t}, \bar{\sigma}_{t}, \bar \ell_t \big) ( {\rm x}) ~:=~ \big( \mu_{t}, \sigma_{t}, \ell_{t} \big) \big( {\rm x}(t), I_t({\rm x}) \big), ~~\mbox{with}~ I_t({\rm x}) := \int_{0}^{t} {\rm x}(s)dA_{s}.$$ We study the following linear parabolic path-dependent PDE (PPDE): $$\begin{aligned} \label{eq:ppde} \partial_{t}{\rm v}~+~ \bar{\mu}\partial_{{\rm x}} {\rm v}~+~\frac12 \bar{\sigma}^{2} \partial^{2}_{{\rm x}} {\rm v}~+~ \bar \ell ~=~ 0, ~\mbox{on}~ [0,T)\times D([0,T]), \end{aligned}$$ with terminal condition ${\rm v}(T, {\rm x}) = \bar g({\rm x}):= g \big({\rm x}(T), I_T({\rm x}) \big)$ for some function $g: \mathbb{R}\times\mathbb{R}\longrightarrow \mathbb{R}$. In the above, the derivatives are taken in the sense of Dupire. #### Dupire's derivatives for path-dependent functionals To give a precise meaning to the PPDE [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"}, let us recall Dupire's [@dupireito; @cont2013functional] notion of horizontal derivative $\partial_t$ and vertical derivatives $\partial_{{\rm x}}$ and $\partial^2_{{\rm x}}$ for path-dependent functionals.
A path-dependent functional $F: [0,T] \times D([0,T]) \longrightarrow \mathbb{R}$ is said to be non-anticipative if $F(s,{\rm x})=F(s,{\rm x}(s\wedge \cdot))$ for all $(s,{\rm x})\in [0,T] \times D([0,T])$. For a non-anticipative map $F$, its horizontal derivative $\partial_s F(s, {\rm x})$ at $(s,{\rm x}) \in [0,T) \times D([0,T])$ is defined as $$\partial_{s}F(s,{\rm x}) ~:=~ \lim_{h\searrow 0} \frac{F(s+h,{\rm x}(s \wedge\cdot )) - F(s,{\rm x})}{h},$$ and its vertical derivative $\partial_{{\rm x}} F(s, {\rm x})$ is defined as $$\partial_{{\rm x}} F(s, {\rm x}) ~:=~ \lim_{y \to 0} \frac{F(s,{\rm x}+ y\mathbf{1}_{[s,T]}) - F(s, {\rm x})}{y},$$ whenever the limits exist. In the above, ${\rm x}+ y \mathbf{1}_{[s,T]}$ denotes the path taking value ${\rm x}(t) + y\mathbf{1}_{[s,T]}(t)$ at time $t \in [0,T]$. Similarly, one can define the second-order vertical derivative $\partial^2_{{\rm x}} F$ as the vertical derivative of $\partial_{{\rm x}} F$. Given $t \in (0,T]$, we denote by ${\mathbb C}([0,t))$ the space of all continuous non-anticipative functionals $F: [0,t) \times D([0,T]) \longrightarrow \mathbb{R}$, and we set $${\mathbb C}^{0,1}([0,t)) := \big\{ F \in {\mathbb C}([0,t)) ~: \partial_{{\rm x}}F ~\mbox{is well-defined and belongs to}~ {\mathbb C}([0,t)) \big\},$$ as well as $${\mathbb C}^{1,2}([0,t)) := \big\{ F \in {\mathbb C}^{0,1}([0,t))~: \partial_s F~\mbox{and}~\partial^{2}_{{\rm x}}F~\mbox{are well-defined and belong to}~{\mathbb C}([0,t)) \big\}.$$ #### Assumptions on the process $A$: Recall that $A = (A_t)_{t \in [0,T]}$ is a deterministic process with finite variation.
For $0 \le s < t \le T$, let us define $\overline A_{s,t}:=\frac1{t-s} \int_{s}^{t} A_{r}dr$ and $$m_{s,t}:=\frac1{t-s}\int_{s}^{t} \big(A_{r}-\overline A_{s,t} \big)^{2} dr, ~~~ \tilde m_{s,t}:=\frac1{t-s}\int_{s}^{t} (A_{r}-A_{s} )^{2} dr.$$ The above will play a major role in our analysis, as they will drive the behavior of the parametrix density on small time intervals. **Assumption 1**. *$\mathrm{(i)}$ There exist constants $\beta_0, \beta_1, \beta_2, \beta_3 \ge 0$ and $C_{(\ref{eq : hyp vitesse explo})}, C_{(\ref{eq:order_m})}> 0$ such that, for all $0\le s< t \le T$, $$\begin{aligned} \frac{1}{C_{(\ref{eq : hyp vitesse explo})}} (t-s)^{-\beta_1} ~\le~ &\frac{\tilde m_{s,t}}{m_{s,t}} ~\le~ C_{(\ref{eq : hyp vitesse explo})} (t-s)^{-\beta_0}, \label{eq : hyp vitesse explo}\\ \frac{1}{C_{(\ref{eq:order_m})}} (t-s)^{- \beta_2} ~\le~ &\frac{1}{m_{s,t}} ~\le~ C_{(\ref{eq:order_m})} (t-s)^{- \beta_3}. \label{eq:order_m} \end{aligned}$$* *$\mathrm{(ii)}$ There exist constants $\beta_4 \ge 0$ and $C_{\eqref{eq:holder A}}>0$ such that $$\begin{aligned} |A_t-A_s| ~\le~ C_{\eqref{eq:holder A}} (t-s)^{\beta_4}, ~~\mbox{for all}~ 0 \le s< t \le T. \label{eq:holder A} \end{aligned}$$* **Remark 2**. *Notice that $\lim_{t\downarrow s}\tilde m_{s,t}=0$ by continuity of $A$. Without loss of generality, one can therefore assume that $$\beta_{1}\le \beta_{0}\le \beta_{3}\mbox{ and } \beta_{1}\le \beta_{2}\le \beta_{3}.$$ Moreover, one can always choose $\beta_{1}=\beta_{4}=0$ since $m_{s,t} \le \tilde m_{s,t}$ by their definitions.* Let us provide some typical examples. **Example 3**. *$\mathrm{(i)}$ Let $A$ be defined by $A_{t}= \int_{0}^{t} \rho(s) ds$, $t\ge 0$, with $\varepsilon\le \rho\le 1/\varepsilon$ a.e. for some $\varepsilon>0$.
Then, it is easy to check that Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"} holds with $$\beta_0 = 0, ~~\beta_1 = 0, ~~\beta_2 = 2, ~~\beta_3 = 2, ~~\beta_4 = 1.$$ In this setting, our main results are similar to those in [@francesco2005class], which studied a multivariate version of the case where $\rho$ is constant.* *$\mathrm{(ii)}$ Let $A_{t}=t^{\gamma}$ for some $\gamma \in (0,1)$. Then, $$m_{s,t} ~=~ \frac{t^{2\gamma+1}-s^{2\gamma+1}}{(2\gamma+1)(t-s)}-\frac{|t^{\gamma+1}-s^{\gamma+1}|^{2}}{(\gamma+1)^{2}(t-s)^{2}},$$ and $$\tilde m_{s,t} ~=~ \frac{\frac1{2\gamma+1}(t^{2\gamma+1}-s^{2\gamma+1})-2\frac1{\gamma+1}(t^{\gamma+1}-s^{\gamma+1})s^{\gamma}+(t-s)s^{2\gamma}}{t-s}.$$ In this setting, Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"} holds true with $$\beta_0 = 0, ~~\beta_1 = 0, ~~\beta_2 = 2 \gamma, ~~\beta_3 = 2, ~~\beta_4 = \gamma,$$ where $\beta_{2}=2\gamma$ comes from $m_{s,t}\le \tilde m_{s,t}\le (t-s)^{2\gamma}$, while $\beta_{3}=2$ accounts for the behavior of $m_{s,t}$ when $s$ is bounded away from $0$.* *$\mathrm{(iii)}$ Assume that there exist $1\ge \gamma_{1}\ge \gamma_{2}>0$ and $C_{1},C_{2}>0$ such that $$C_{1}|t-s|^{\gamma_{1}}\le A_{t}-A_{s}\le C_{2} |t-s|^{\gamma_{2}},\mbox{ for all } s\le t\le T.$$ Then, Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"} holds with $$\beta_0 = 2(\gamma_{1}-\gamma_{2}), ~~\beta_1 = 0, ~~\beta_2 = 2 \gamma_{2}, ~~\beta_3 = 2\gamma_{1}, ~~\beta_4 = \gamma_{2}.$$ Indeed, let us choose $t_{0} \in [0,T]$ such that $A_{t_{0}}=\overline A_{s,t}$. Assume that $t-t_{0}\ge (t-s)/2$ (otherwise $t_{0}-s\ge (t-s)/2$ and we can use similar computations), then $$\begin{aligned} {(t-s)m_{s,t}} ~=~ \int_{s}^{t}|A_{r}-A_{t_{0}}|^{2}dr &~\ge~ \int_{t_{0}}^{t}C_{1}^{2}|{r}-{t_{0}}|^{2\gamma_{1}}dr ~\ge~ \frac{C_{1}^{2}}{2^{2\gamma_{1}+1}(2\gamma_{1}+1)}|t-s|^{2\gamma_{1}+1}. \end{aligned}$$ On the other hand, ${(t-s)m_{s,t}\le } \int_{s}^{t}|A_{r}-A_{s}|^{2}dr\le \frac{C_{2}^{2}}{2\gamma_{2}+1} |t-s|^{2\gamma_{2}+1}$.
Thus, $$1 ~\le~ \frac{\tilde m_{s,t}}{m_{s,t}} ~\le~ 2^{2\gamma_{1}+1}\frac{C_{2}^{2}(2\gamma_{1}+1)}{C_{1}^{2}(2\gamma_{2}+1)} |t-s|^{-2(\gamma_{1}-\gamma_{2})}.$$* #### Assumptions on the coefficient functions $\mu$ and $\sigma$: As in e.g. [@francesco2005class], the following Hölder regularity assumption on the coefficient functions $(\mu, \sigma): [0,T] \times\mathbb{R}^2 \longrightarrow \mathbb{R}\times\mathbb{R}$ is calibrated to match the explosion rate of the quadratic form entering the parametrix. It does not impose smoothness conditions on $\mu$ and $\sigma$ as in e.g. [@sonin1967class], and facilitates the analysis, see Remark [Remark 6](#rem: si holder){reference-type="ref" reference="rem: si holder"} below. Let us set $$\Theta ~:=~ \big\{ (s,x,t,y) \in [0,T]\times\mathbb{R}^{2}\times[0,T]\times\mathbb{R}^{2}~:~s< t \big\},$$ and, for $(s,x,t,y) \in \Theta$, $$\begin{aligned} {\rm w}_{s,t}(x,y):=x- E_{s,t}(y) \in \mathbb{R}^2, ~~\mbox{with}~~ E_{s,t}(y):= \left(\begin{array}{cc}1&0\\ -(A_{t}-A_{s})&1\end{array}\right)y \in \mathbb{R}^2. \label{eq:def_w_xy} \end{aligned}$$ **Assumption 4**. *Let $\beta_0, \beta_1, \beta_2 \ge 0$ be the constants in Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.
Then,* *$\mathrm{(i)}$ We have $$\beta'_1 := \beta_1-\beta_0 > -1, ~~~~ \beta'_2 := \beta_2 - \beta_0 > -1.$$ Moreover, the coefficients $\mu$ and $\sigma$ are continuous, and there exist constants $(\underline \mathfrak{a}, \bar \mathfrak{a})\in \mathbb{R}^{2}$, $\mathfrak{b}\in \mathbb{R}$, $C_{(\ref{eq : hyp holder coeff bar sigma})} > 0$ and $\alpha>0$ such that $$\begin{aligned} \label{eq : unif elliptic} |\mu| \le \mathfrak{b}, ~~ 0<\underline \mathfrak{a}\le \sigma^{2} \le \bar \mathfrak{a}, \;\mbox{ on } [0,T] \times\mathbb{R}^2, \end{aligned}$$ and $$\begin{aligned} & \big|\sigma_{s}(x) -\sigma_{t}(y) \big| ~\le~ C_{(\ref{eq : hyp holder coeff bar sigma})} \Big( |t-s|^{\alpha} + \big| {\rm w}_{s,t}(x,y)\big|^{\frac{2\alpha}{1+\beta'_1}} + \big|{\rm w}_{s,t}(x,y) \big|^{\frac{2 \alpha}{1+\beta'_2}} \Big), \label{eq : hyp holder coeff bar sigma} \end{aligned}$$ for all $(s,x,t,y) \in \Theta$.* *$\mathrm{(ii)}$ There exists a constant $C_{(\ref{eq : hyp holder coeff bar mu})}>0$ such that $$\label{eq : hyp holder coeff bar mu} \big| \mu_{t}( x) - \mu_{t}( y) \big| \le C_{(\ref{eq : hyp holder coeff bar mu})} \Big( \big| x_{1}- y_{1}\big|^{\frac{2\alpha}{1+\beta'_1}}+\big| x_{2}- y_{2}\big|^{\frac{2\alpha}{1+\beta'_2}} \Big), ~\mbox{for all}~ (t,x,y) \in [0,T] \times\mathbb{R}^{2} \times\mathbb{R}^2.$$* **Example 5**. *The condition [\[eq : hyp holder coeff bar sigma\]](#eq : hyp holder coeff bar sigma){reference-type="eqref" reference="eq : hyp holder coeff bar sigma"} holds for instance if $\sigma_{s}(x_1, x_2)$ depends only on $(s, x_{1})$ and is Hölder with respect to $(s, x_1)$. It would also hold if it is of the form $\sigma_{s}(x)=\tilde \sigma_{s}(x_{1},x_{2}-A_{s}x_{1})$ for some Hölder continuous map $\tilde \sigma$. 
Indeed, one has $$\begin{aligned} \big |x_{2}-A_{s}x_{1}-(y_{2}-A_{t}y_{1}) \big| &~=~ \big| x_{2}-A_{s}x_{1}-(y_{2}-A_{t}y_{1})-A_{s}(y_{1}-x_{1})+A_{s}(y_{1}-x_{1}) \big| \\ &~\le~ \big|x_{2} -y_{2}+(A_{t}-A_{s})y_{1} \big| + |A_{s}| \big|y_{1}-x_{1} \big| \\ &~\le~ \Big( 1+\max_{[0,T]}|A| \Big) |{\rm w}_{s,t}(x,y)|. \end{aligned}$$* **Remark 6**. *The case where the coefficient $\sigma$ is Hölder in the classical sense, i.e. $$\big|\sigma_{s}(x) -\sigma_{t}(y) \big| ~\le~ C \Big( |t-s|^{\alpha} + \big| x_{1}-y_1\big|^{\frac{2\alpha}{1+\beta'_1}} + \big|x_{2}-y_{2} \big|^{\frac{2 \alpha}{1+\beta'_2}} \Big),$$ can be tackled by combining the arguments below with those of e.g. [@sonin1967class]. This will add additional exponentially growing terms in the estimates on $\widetilde{\Phi}$ in Proposition [Proposition 18](#prop:existence_continuite_Phi){reference-type="ref" reference="prop:existence_continuite_Phi"} below, which can be handled at the price of adapted restrictions on the coefficients $(\beta_{i})_{0\le i\le 4}$. We chose the formulation of the conditions in [\[eq : hyp holder coeff bar sigma\]](#eq : hyp holder coeff bar sigma){reference-type="eqref" reference="eq : hyp holder coeff bar sigma"} for the sake of simplicity.* ## Heuristic derivation using a change of variables and the parametrix method Let us consider the path-dependent SDE $$\label{eq:SDE_XI} X_t = X_0 + \int_0^t \mu_{s} ( X_s , I_s ) d s + \int_0^t \sigma_{s} ( X_s , I_s ) dW_s, ~~~ I_t = \int_{0}^{t} X_s d A_s, ~~ t \in [0,T],$$ where $W$ is a Brownian motion. Assume that the above SDE has a solution $X$ such that $(X, I)$ is Markov. Then, to deduce a solution to the PPDE [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"}, it suffices to find the transition probability (density) function $f(s,x; t,y)$ of the Markov process $(X, I)$ from $(s,x)$ to $(t,y)$. 
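Although the analysis below is purely analytic, the SDE above is straightforward to discretize, which is convenient for sanity checks. The following sketch (our own illustration, not part of the construction in the paper) runs an Euler scheme for $(X, I)$ under the hypothetical choices $A_t = \sqrt{t}$, a constant drift, and a bounded uniformly elliptic diffusion coefficient; the integral $I$ is accumulated as a pathwise Stieltjes sum against the increments of $A$:

```python
import numpy as np

def simulate_XI(x0, T=1.0, n=1000, seed=0,
                mu=lambda s, x, i: 0.1,
                sigma=lambda s, x, i: 1.0 + 0.5 * np.cos(x),
                A=np.sqrt):
    """Euler scheme for X_t = x0 + int mu ds + int sigma dW together with
    the pathwise Stieltjes integral I_t = int_0^t X_s dA_s."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), n)
    X = np.empty(n + 1)
    I = np.empty(n + 1)
    X[0], I[0] = x0, 0.0
    for k in range(n):
        X[k + 1] = X[k] + mu(t[k], X[k], I[k]) * dt \
                        + sigma(t[k], X[k], I[k]) * dW[k]
        # Stieltjes sum: I picks up X_k times the increment of A
        I[k + 1] = I[k] + X[k] * (A(t[k + 1]) - A(t[k]))
    return t, X, I
```

When both coefficients vanish, the scheme telescopes exactly: $X \equiv x_0$ and $I_T = x_0 (A_T - A_0)$, a convenient consistency check for the Stieltjes accumulation.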
When $t \longmapsto A_t$ is absolutely continuous, it is well-known that $(s,x) \longmapsto f(s,x; t,y)$ solves Kolmogorov's backward PDE and that $(t,y) \longmapsto f(s,x; t,y)$ solves Kolmogorov's forward PDE. One can then apply the classical parametrix method as in [@francesco2005class Section 4] to guess the expression of $f(s,x; t, y)$. In our setting, where $A$ is not necessarily absolutely continuous, it is no longer possible to write Kolmogorov's PDE for the transition probability (density) function of $(X, I)$. We therefore perform a change of variables and set $$\begin{aligned} \label{eq: change variable tilde X} \widetilde{X}_t := {\rm \bf A}_t \left(\begin{array}{c} X_t \\ I_t \end{array}\right), ~\mbox{with}~ {\rm \bf A}_t := \left(\begin{array}{cc} 1 & 0\\ A_t &-1 \end{array}\right), ~~ t \in [0,T]. \end{aligned}$$ Notice that ${\rm \bf A}_t^{-1} = {\rm \bf A}_t$ and that $\widetilde{X}$ is a diffusion process with dynamics $$\begin{aligned} \label{eq:def_Xt} \widetilde{X}_t &~=~ \widetilde{X}_0 + \int_{0}^{t} \tilde{\mu}_{s}(\widetilde{X}_s) \overrightarrow{\rm A}_s ds +\int_{0}^{t} \tilde{\sigma}_{s}(\widetilde{X}_s) \overrightarrow{\rm A}_s dW_s, ~~ t \in [0,T], \end{aligned}$$ where $\overrightarrow{\rm A}$, $\tilde{\mu}: [0,T] \times \mathbb{R}^2 \longrightarrow \mathbb{R}$ and $\tilde{\sigma}: [0,T] \times \mathbb{R}^2 \longrightarrow \mathbb{R}$ are defined by $$\begin{aligned} \label{eq: def mu sigma A} \overrightarrow{\rm A}_s := \left(\begin{array}{c} 1 \\ A_{s} \end{array}\right), ~ \tilde{\mu}_{s}(x) := \mu_{s}({\rm \bf A}_s x) ~\mbox{and}~ \tilde{\sigma}_{s}(x) := \sigma_{s}({\rm \bf A}_s x), ~(s,x)\in [0,T] \times\mathbb{R}^{2}. 
\end{aligned}$$ The generator $\widetilde{{\cal L}}$ of $\widetilde{X}$ is given by $$\widetilde{{\cal L}}\phi(s,x) ~:=~ \tilde{\mu}_{s}(x)~\overrightarrow{\rm A}_s \cdot D\phi(s,x)+\frac12 \tilde{\sigma}_{s}(x)^2 ~ {\rm Tr}\left[ \overrightarrow{\rm A}_s (\overrightarrow{\rm A}_s)^{\top} D^{2}\phi(s,x) \right],$$ for smooth functions $\phi: [0,T] \times\mathbb{R}^2 \longrightarrow \mathbb{R}$. Assume that the SDE [\[eq:def_Xt\]](#eq:def_Xt){reference-type="eqref" reference="eq:def_Xt"} has a solution $\widetilde{X}$ which is Markovian and has a smooth transition probability density function $\tilde{f}(s,x; t,y)$, from $x$ at $s$ to $y$ at $t$; then $(s,x) \longmapsto \tilde{f}(s,x; t,y)$ solves the Kolmogorov backward equation $$\begin{aligned} \label{eq: pde ft} \big(\partial_{s} + \widetilde{{\cal L}}\big) \tilde{f}(s,x;t,y) = 0, ~~\mbox{for}~ (s,x) \in [0,t)\times\mathbb{R}^{2}. \end{aligned}$$ Notice that, in the above, the operator $\widetilde{{\cal L}}$ acts on the first two arguments $(s,x)$ of $\tilde f(s,x; t, y)$. To construct the parametrix, we consider the following process, with volatility frozen at $(r,z) \in [0,T] \times\mathbb{R}^2$, $$\widetilde{X}^{r,z}_t ~:=~ \tilde{\sigma}_r(z) \int_0^t \overrightarrow{\rm A}_s dW_s, ~~ t \in [0,T].$$ The corresponding generator $\widetilde{{\cal L}}^{r,z}$ is then given by $$\widetilde{{\cal L}}^{r,z} \phi(s,x) ~:=~ \frac12 \tilde{\sigma}_{r}(z)^2 ~ {\rm Tr}\left[ \overrightarrow{\rm A}_s (\overrightarrow{\rm A}_s)^{\top} D^{2}\phi(s,x) \right], ~\mbox{for smooth functions}~\phi.$$ We further define $\tilde{f}_{r,z} (s,x; t, y)$ as the corresponding transition probability function from $(s,x)$ to $(t, y)$, for $(s,x,t,y) \in \Theta$. Notice that $\tilde{f}_{r,z}$ is explicitly given and that $y \longmapsto \tilde{f}_{r,z} (s,x; t,y)$ is the density function of the Gaussian random vector $x + \tilde{\sigma}_r(z) \int_s^t \overrightarrow{\rm A}_u \, d W_u$. 
It satisfies $$\begin{aligned} \label{eq: pde fycirc} \big (\partial_{s} + \widetilde{{\cal L}}^{r, z} \big) \tilde{f}_{r,z}(s,x;t,y) = 0,\; \mbox{ for } (s,x) \in [0,t)\times\mathbb{R}^{2}. \end{aligned}$$ Now, we employ the machinery of the parametrix method (see e.g. [@friedman2008partial Chapter 1] or [@francesco2005class]), taking $\tilde{f}_{t, y}(s,x; t,y)$ as the parametrix, and expressing $\tilde{f}(s,x; t,y)$ in the following form: $$\label{eq: def ft recursif} \tilde{f}(s,x;t,y)= \tilde{f}_{t, y}(s,x;t,y)+ \int_{s}^{t}\int_{\mathbb{R}^{2}} \tilde{f}_{r, z}(s,x;r,z) \widetilde{\Phi}(r,z,t,y)dzdr,$$ for some function $\widetilde{\Phi}: \Theta \longrightarrow \mathbb{R}$. By [\[eq: pde ft\]](#eq: pde ft){reference-type="eqref" reference="eq: pde ft"} and [\[eq: pde fycirc\]](#eq: pde fycirc){reference-type="eqref" reference="eq: pde fycirc"}, one must have $$\begin{aligned} 0 ~=~& \big( \partial_{s} + \widetilde{{\cal L}}\big) \tilde{f}(s,x;t,y) \\ ~=~& \big( \partial_{s} + \widetilde{{\cal L}}\big) \tilde{f}_{t, y} (s,x;t,y) + \big( \partial_{s} + \widetilde{{\cal L}}\big) \int_{s}^{t}\int_{\mathbb{R}^{2}} \tilde{f}_{r, z}(s,x;r,z)\widetilde{\Phi}(r,z,t,y)dzdr \\ ~=~& \big(\widetilde{{\cal L}}- \widetilde{{\cal L}}^{t,y} \big) \tilde{f}_{t, y}(s,x; t,y) - \widetilde{\Phi}(s,x; t, y)\\ & + \int_s^t \int_{\mathbb{R}^2} \big( \widetilde{{\cal L}}- \widetilde{{\cal L}}^{r,z}\big) \tilde{f}_{r, z}(s,x; r,z) \widetilde{\Phi}(r,z; t, y) dz dr. 
\end{aligned}$$ Therefore, $\widetilde{\Phi}$ must satisfy $$\label{eq:Phit_int_eq} \widetilde{\Phi}(s,x; t, y) ~=~ \big(\widetilde{{\cal L}}- \widetilde{{\cal L}}^{t,y} \big) \tilde{f}_{t, y}(s,x; t,y) + \int_s^t \int_{\mathbb{R}^2} \big( \widetilde{{\cal L}}- \widetilde{{\cal L}}^{r,z}\big) \tilde{f}_{r, z}(s,x; r,z) \widetilde{\Phi}(r,z; t, y) dz dr.$$ In view of [\[eq:Phit_int_eq\]](#eq:Phit_int_eq){reference-type="eqref" reference="eq:Phit_int_eq"}, we obtain $$\label{eq:def_Phit} \widetilde{\Phi}(s,x; t, y) ~:=~ \sum_{k=0}^{\infty} \widetilde{\Delta}_k(s,x; t,y),$$ where $\widetilde{\Delta}_0(s,x; t,y) := \big(\widetilde{{\cal L}}- \widetilde{{\cal L}}^{t,y} \big) \tilde{f}_{t, y}(s,x; t,y)$, and $$\begin{aligned} \label{eq: def Delta L k} \widetilde{\Delta}_{k+1} (s,x;t,y) := \int_s^t \int_{\mathbb{R}^2} \widetilde{\Delta}_0(s,x; r,z) \widetilde{\Delta}_{k}(r,z;t,y) dz dr, ~~k\ge 0. \end{aligned}$$ Notice that $\widetilde{{\cal L}}$, $\widetilde{{\cal L}}^{t,y}$ and $\tilde{f}_{t,y}$ have explicit expressions. The main strategy of the classical parametrix method consists in checking that $\widetilde{\Phi}$ in [\[eq:def_Phit\]](#eq:def_Phit){reference-type="eqref" reference="eq:def_Phit"} is well-defined and solves the integral equation [\[eq:Phit_int_eq\]](#eq:Phit_int_eq){reference-type="eqref" reference="eq:Phit_int_eq"}. Then, one defines $\tilde{f}$ by [\[eq: def ft recursif\]](#eq: def ft recursif){reference-type="eqref" reference="eq: def ft recursif"}, and checks that it provides a solution to [\[eq: pde ft\]](#eq: pde ft){reference-type="eqref" reference="eq: pde ft"}. If $\tilde{f}$ is smooth, one can basically deduce that $\tilde{f}$ is the transition probability density function of $\widetilde{X}$ in [\[eq:def_Xt\]](#eq:def_Xt){reference-type="eqref" reference="eq:def_Xt"} by using the Feynman-Kac formula. The main difficulty here lies in the fact that $\tilde{f}$ is, in general, not smooth enough. 
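For intuition, the recursion [\[eq: def Delta L k\]](#eq: def Delta L k){reference-type="eqref" reference="eq: def Delta L k"} can be reproduced on a one-dimensional toy problem. The sketch below is our own illustration, with a hypothetical diffusivity function `a_fun` (the paper's degenerate two-dimensional setting is more delicate): the heat kernel frozen at the terminal point plays the role of $\tilde f_{t,y}$, the first term $\widetilde{\Delta}_0$ is computed in closed form, and one convolution step of the recursion produces $\widetilde{\Delta}_1$ by time-space quadrature:

```python
import numpy as np

def gauss(s, x, t, y, a):
    """Heat kernel with diffusivity a (frozen at the terminal point)."""
    v = a * (t - s)
    return np.exp(-(x - y) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def delta0(s, x, t, y, a_fun):
    """First parametrix term (L - L^{t,y}) applied to the frozen kernel:
    0.5*(a(x) - a(y)) * d^2/dx^2 gauss(s, x; t, y, a(y))."""
    ay = a_fun(y)
    v = ay * (t - s)
    p = gauss(s, x, t, y, ay)
    d2p = p * ((x - y) ** 2 / v ** 2 - 1.0 / v)
    return 0.5 * (a_fun(x) - ay) * d2p

def delta1(s, x, t, y, a_fun, nr=40, nz=200, zlim=6.0):
    """One step of the recursion: quadrature in (r, z) of
    Delta_0(s,x; r,z) * Delta_0(r,z; t,y)."""
    rs = np.linspace(s, t, nr + 2)[1:-1]      # interior time nodes
    zs = np.linspace(-zlim, zlim, nz)
    dr, dz = rs[1] - rs[0], zs[1] - zs[0]
    total = 0.0
    for r in rs:
        total += np.sum(delta0(s, x, r, zs, a_fun) * delta0(r, zs, t, y, a_fun)) * dz
    return total * dr
```

For a constant diffusivity the frozen and true generators coincide, so every $\widetilde{\Delta}_k$ vanishes and the series reduces to the Gaussian kernel itself; this is a useful consistency check for the quadrature.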
For smoothness properties, we will therefore turn back to the initial coordinates $(X,I)$ and define the candidate transition probability function $f$ of the process $(X, I)$ in [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} through [\[eq:def_Phi\]](#eq:def_Phi){reference-type="eqref" reference="eq:def_Phi"}-[\[eq:def_f\_transition_proba\]](#eq:def_f_transition_proba){reference-type="eqref" reference="eq:def_f_transition_proba"} below, and work on it directly. ## Main results {#sec:main_results} Under some conditions on the constants $\alpha$ and $(\beta_i)_{i = 0, \cdots, 4}$ given in Assumptions [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"} and [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}, we will show that $\widetilde{\Phi}: \Theta \longrightarrow \mathbb{R}$ is well-defined by [\[eq:def_Phit\]](#eq:def_Phit){reference-type="eqref" reference="eq:def_Phit"}-[\[eq: def Delta L k\]](#eq: def Delta L k){reference-type="eqref" reference="eq: def Delta L k"}. For $(r,z) \in [0,T] \times\mathbb{R}^2$, we can then define $f_{r,z}$ and $\Phi$ by inverting the change of variables in [\[eq: change variable tilde X\]](#eq: change variable tilde X){reference-type="eqref" reference="eq: change variable tilde X"}: $$\label{eq:def_Phi} f_{r,z} (s,x;t,y) := \tilde{f}_{r, {\rm \bf A}_r z} (s,{\rm \bf A}_{s}x;t,{\rm \bf A}_{t}y), ~~ \Phi (s,x;t,y) := \widetilde{\Phi}(s,{\rm \bf A}_{s}x;t,{\rm \bf A}_{t}y).$$ The corresponding candidate transition density $f: \Theta \longrightarrow \mathbb{R}$ for $(X,I)$ is therefore: $$\label{eq:def_f_transition_proba} f (s,x;t,y) ~:=~ f_{t, y}(s,x;t,y)+ \int_{s}^{t}\int_{\mathbb{R}^{2}} f_{r, z}(s,x;r,z) \Phi (r,z; t,y)dzdr,$$ for all $(s,x,t,y) \in \Theta$. 
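The inversion in [\[eq:def_Phi\]](#eq:def_Phi){reference-type="eqref" reference="eq:def_Phi"} only uses the fact that ${\rm \bf A}_t$ squares to the identity. This elementary point can be checked mechanically (a minimal sketch; the value $A_t = 0.7$ is arbitrary):

```python
import numpy as np

def Amat(a):
    """The matrix A_t of the change of variables, for a given value a = A_t."""
    return np.array([[1.0, 0.0], [a, -1.0]])

a = 0.7                               # arbitrary value of A_t
M = Amat(a)
assert np.allclose(M @ M, np.eye(2))  # A_t is its own inverse
xi = np.array([1.3, -0.4])            # a state (X_t, I_t)
tilde = M @ xi                        # the transformed state (X_t, A_t X_t - I_t)
assert np.allclose(M @ tilde, xi)     # the round trip recovers (X_t, I_t)
```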
For any positive constant $a \in \mathbb{R}_+$ and $0 \le s < t \le T$, let us set $$\Sigma_{s,t}(a) ~:=~ a \left( \begin{array}{cc} t- s & - \int_s^t (A_r - A_s) dr \\ - \int_s^t (A_r - A_s) dr & \int_s^t (A_r - A_s)^2 dr \end{array} \right).$$ For $(r,z) \in [0,T] \times\mathbb{R}^2$, we write $\Sigma_{s,t}(r,z) := \Sigma_{s,t}( \sigma^2_r(z) )$ for simplicity. Equivalently, $$\label{eq:Sigma_st} \Sigma_{s,t} (r, z) ~:=~ \sigma^2_{r} \big( z \big) \left( \begin{array}{cc} t- s & - \int_s^t (A_r - A_s) dr \\ - \int_s^t (A_r - A_s) dr & \int_s^t (A_r - A_s)^2 dr \end{array} \right).$$ Then, it is easy to check that $y \longmapsto f_{r,z}(s, x; t, y)$ is the density function of the Gaussian random vector $$\Big( x_1 + \sigma_r(z) (W_t - W_s), ~x_2 + \int_s^t \big( x_1 + \sigma_r(z) (W_u- W_s) \big) d A_u \Big)^{\top},$$ so that, with $w:={\rm w}_{s,t}(x,y)$ as in [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}, $$f_{r,z}(s,x;t,y) ~=~ \frac1{2\pi~ {\rm det}\left( \Sigma_{s,t}(r, z)\right)^{\frac12}} \exp \Big( -\frac{1}{2} \big\langle \Sigma^{-1}_{s,t}(r, z) w, w \big\rangle \Big).$$ Let us also define the Gaussian transition probability function ${f^{\circ}}: \Theta \longrightarrow \mathbb{R}$ by $$\begin{aligned} \label{eq: def fcirc} f^{\circ} (s,x;t,y) ~:=~ \frac{1}{2\pi~ {\rm det}\left(\Sigma_{s,t}( 4 \bar \mathfrak{a}) \right)^{\frac12}} \exp \Big( - \frac{1}{2} \big \langle \Sigma^{-1}_{s,t}( 4 \bar \mathfrak{a}) w, w \big \rangle \Big), ~~(s,x,t,y) \in \Theta. \end{aligned}$$ As a first main result, we show that $f$ is well-defined under some conditions on the coefficients $\alpha$ and $\beta_0$, and then provide some first regularity and bound estimates. **Theorem 7**. 
*Let Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.$\mathrm{(i)}$ and Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}.$\mathrm{(i)}$ hold true.* *$\mathrm{(i)}$ Assume that $$\label{eq:def_kappa0} \kappa_0 ~:=~ \frac{1-\beta_{0}}{2}\wedge( \alpha - \beta_0 ) ~>~ 0.$$ Then $\widetilde{\Phi}$ in [\[eq:def_Phit\]](#eq:def_Phit){reference-type="eqref" reference="eq:def_Phit"}-[\[eq: def Delta L k\]](#eq: def Delta L k){reference-type="eqref" reference="eq: def Delta L k"} is well-defined, and so is $f: \Theta \longrightarrow \mathbb{R}$ in [\[eq:def_f\_transition_proba\]](#eq:def_f_transition_proba){reference-type="eqref" reference="eq:def_f_transition_proba"}. Moreover, $f$ is continuous on $\Theta$, and there exists a constant $C>0$ such that $$\label{eq:up_bound_f} {\big| f(s,x ;t,y) \big|} ~\le~ C f^{\circ}(s,x;t,y), ~~\mbox{for all}~ (s,x,t,y )\in \Theta.$$* *$\mathrm{(ii)}$ Assume that [\[eq:def_kappa0\]](#eq:def_kappa0){reference-type="eqref" reference="eq:def_kappa0"} holds and that $$\begin{aligned} \label{eq: kappa1} \kappa_{1} ~:=~ \kappa_{0}+\frac{1-\beta_{0}}{2} ~=~ (1-\beta_{0})\wedge (\frac12 +\alpha-\frac32 \beta_{0}) ~>~ 0. \end{aligned}$$ Then, the partial derivative $(s,x; t, y)\in \Theta\mapsto \partial_{x_1} f(s,x; t, y)$ exists, is continuous on $\Theta$, and, for some constant $C_{\eqref{eq: them estimation derive fb}}>0$, $$\begin{aligned} \label{eq: them estimation derive fb} \big|\partial_{x_1} f(s,x; t,y) \big| ~\le~ \frac{C_{\eqref{eq: them estimation derive fb}}}{(t-s)^{1-\kappa_{1}}} f^{\circ}(s,x;t,y), ~\mbox{for all}~ (s,x,t,y) \in \Theta. \end{aligned}$$* Under further conditions, we can obtain more regularity of $f$ and then check that it is the transition probability function of the Markov process $(X,I)$. To be more precise, let us rephrase this in terms of path-dependent functionals. 
For $0 \le s < t \le T$, ${\rm x}\in D([0,T])$ and $y \in \mathbb{R}^2$, we set $$\begin{aligned} \label{eq: def f bar on path} ({\rm f},{\rm f}^{\circ})(s,{\rm x};t,y) ~:=~ (f,f^{\circ}) \big(s, {\rm x}(s), I_s({\rm x}); t,y \big ), ~\mbox{with}~ I_s({\rm x}) := \int_{0}^{s} {\rm x}(r)dA_{r}. \end{aligned}$$ We now fix $\ell : [0,T] \times\mathbb{R}^2 \longrightarrow \mathbb{R}$ and $g: \mathbb{R}^2 \longrightarrow \mathbb{R}$ such that, for some constants $C_{{\ell,g}} > 0$ and $\alpha_{\ell} > 0$, $$\label{eq:bound_g_l} \big| \ell(t, x) \big| + \big| g(x) \big| ~\le~ C_{{\ell,g}} \exp \big( C_{{\ell,g}} |x| \big),$$ and $$\label{eq:holder_l} \big| \ell(t,x) - \ell(t, x') \big| ~\le~ C_{{\ell,g}} \big( e^{C_{{\ell,g}}|x|} + e^{C_{{\ell,g}}|x'|} \big) \Big( |x_1 - x_1'|^{\frac{2 \alpha_{\ell}}{1+ \beta_1'}} + |x_2 - x_2'|^{\frac{2 \alpha_{\ell}}{1+ \beta_2'}} \Big),$$ for all $t \in [0,T]$ and $x,x' \in \mathbb{R}^2$. In view of the upper-bound estimate of $f$ in [\[eq:up_bound_f\]](#eq:up_bound_f){reference-type="eqref" reference="eq:up_bound_f"}, we can then define $$\label{eq:def_vr} {\rm v}(s,{\rm x}) ~:= \int_s^T\!\! \int_{\mathbb{R}^2} \!\! \ell(t,y) {\rm f}(s, {\rm x}; t, y) dy dt ~+ \int_{\mathbb{R}^2} \!\! g(y) {\rm f}(s, {\rm x}; T, y) dy, ~~ (s,{\rm x}) \in [0,T) \times D([0,T]).$$ **Remark 8**. 
*By its definition in [\[eq: def f bar on path\]](#eq: def f bar on path){reference-type="eqref" reference="eq: def f bar on path"}, it is straightforward to check that $$\partial_{{\rm x}} {\rm f}(s, {\rm x}; t, y) ~=~ \partial_{x_1} f \big( s, {\rm x}(s), I_s({\rm x}); t, y \big).$$ Similarly, let us define, for $(r,z), (t,y) \in [0,T] \times\mathbb{R}^2$, $${\rm f}_{r,z}(s, {\rm x}; t,y) ~:=~ f_{r,z}(s, {\rm x}(s), I_s({\rm x}); t,y),~(s, {\rm x}) \in [0,t) \times D([0,T]).$$ Then, the functional $(s,{\rm x}) \longmapsto {\rm f}_{r,z}(s, {\rm x}; t,y)$ is a classical solution to the PPDE $$\label{eq:PPDE_f_rz} \partial_s {\rm f}_{r,z}(s,{\rm x}; t,y) + \frac12 \sigma_r(z)^{2} \partial^2_{{\rm x}{\rm x}} {\rm f}_{r,z} (s, {\rm x}; t, y) = 0, ~\mbox{for}~(s,{\rm x}) \in [0,t) \times D([0,T]).$$* **Theorem 9**. *Let Assumptions [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"} and [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"} hold true. Assume that [\[eq:def_kappa0\]](#eq:def_kappa0){reference-type="eqref" reference="eq:def_kappa0"}, [\[eq: kappa1\]](#eq: kappa1){reference-type="eqref" reference="eq: kappa1"}, [\[eq:bound_g\_l\]](#eq:bound_g_l){reference-type="eqref" reference="eq:bound_g_l"} and [\[eq:holder_l\]](#eq:holder_l){reference-type="eqref" reference="eq:holder_l"} hold, and that there exists $\alpha_{\Phi}\in \mathbb{R}$ such that $$0<\alpha_{\Phi}<\kappa_{0}\wedge \hat \alpha_\Phi\wedge \min\limits_{i=1,2} \frac{1+\beta'_{i}}{2}, ~~\mbox{with}~~ \hat \alpha_\Phi := \frac12- \beta_{0}-\frac{\widehat{\Delta \beta}}{2} -\frac{ (\beta_{0}+1-2\alpha)^{+}}2,$$ where $$\begin{aligned} \label{eq: def wide hat Delat beta} \widehat{\Delta \beta}:=\max\left\{ \beta_{0}-\beta_{1}\;,\; \beta_{3}-\beta_{2}\right\}, \end{aligned}$$ and $${\min \Big(\frac{2\beta_{4}+1+\beta'_{1}}{1+\beta'_{2}}, 1 \Big) \min\{ \alpha_{\Phi},\alpha_{\ell},\alpha\}- \beta_0>0. 
}$$* *$\mathrm{(i)}$ For each $(t,y)\in (0,T]\times\mathbb{R}^{2}$, the path-dependent functional ${\rm f}(\cdot; t, y)$ belongs to ${\mathbb C}^{1,2}([0,t))$.* *$\mathrm{ {(ii)}}$ ${\rm v}\in {\mathbb C}^{1,2}([0,T))$ and it solves the PPDE [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"}. Moreover, there exists $C>0$ such that, for all $(s,{\rm x}) \in [0,T) \times D([0,T])$,* *$$\label{eq:bound_dvr} \big| \partial_{{\rm x}} {\rm v}(s,{\rm x}) \big| ~\le~ \frac{C e^{C (|{\rm x}_s| + |I_s({\rm x})|)}}{(T-s)^{1-\kappa_1}}, ~\mbox{and}~ \big| \partial_{t} {\rm v}(s,{\rm x}) \big| + \big| \partial^{2}_{{\rm x}} {\rm v}(s,{\rm x}) \big| ~\le~ \frac{C e^{C (|{\rm x}_s| + |I_s({\rm x})|)}}{(T-s)^{1+\beta_0}}.$$* *If, in addition, $g: \mathbb{R}^2 \longrightarrow \mathbb{R}$ is continuous, then ${\rm v}$ is the unique classical solution to the PPDE [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"} satisfying $$\begin{aligned} \label{eq: borne croissance vr et cond bord} \lim_{t \nearrow T} {\rm v}(t,{\rm x}) = \bar g({\rm x}) ~~\mbox{and}~~ |{\rm v}(s,{\rm x})| \le Ce^{C (|{\rm x}_s| + |I_s({\rm x})|)} \end{aligned}$$ for all $(s,{\rm x}) \in [0,T]\times C([0,T])$, for some $C>0$.* *$\mathrm{ {(iii)}}$ The SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} has a unique weak solution $X$. Moreover, $(X, I)$ is a strong Markov process with transition probability given by $f$ and $$\label{eq:vr_as_exp} {\rm v}(s, {\rm x}) ~=~ \mathbb{E}\Big[ \int_s^T \ell(t, X_t, I_t) dt + g(X_T, I_T) \Big| X_s = {\rm x}(s), I_s = I_s({\rm x}) \Big],\;(s,{\rm x})\in [0,T]\times D([0,T]).$$* **Remark 10**. 
*To check the conditions on $\alpha$ and $\beta_i,~ i=0, \cdots, 4$ in Theorem [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"}, let us stay in the setting of Example [Example 3](#example : cas de A){reference-type="ref" reference="example : cas de A"}.* *$\mathrm{(i)-(ii)}$ In these cases, $\kappa_{0}=\frac12 \wedge \alpha$, $\kappa_{1}=1 \wedge (\alpha+\frac12)$, $\hat \alpha_{\Phi}=\frac12-[\frac12-\alpha]^{+}=\alpha\wedge \frac12$ and we can choose $\alpha_{\Phi}\in (0,\frac12\wedge \alpha)$.* *$\mathrm{(iii)}$ In this case, $\kappa_{0}=\frac{1-2(\gamma_{1}-\gamma_{2})}2 \wedge (\alpha-2(\gamma_{1}-\gamma_{2}))$, which requires that $\gamma_{1}-\gamma_{2}< \frac12\wedge \frac{\alpha}{2}$ to ensure that $\kappa_{0}>0$. Then, $\kappa_{1}>0$ and $\hat \alpha_{\Phi}=\frac12- 3(\gamma_{1}-\gamma_{2})-[\gamma_{1}-\gamma_{2}+\frac12-\alpha]^{+}$. If $2\alpha/(1+\beta'_{1})\le 1$, then $\alpha\le 1/2$ and therefore $\gamma_{1}-\gamma_{2}+\frac12-\alpha\ge 0$. In this case, we can choose $\alpha_{\Phi}\in (0,\alpha- {4}(\gamma_{1}-\gamma_{2}))$ if $\gamma_{1}-\gamma_{2}<\alpha/ {4}$. If $2\alpha/(1+\beta'_{1})> 1$, then $(\mu,\sigma)$ does not depend on its first argument, and the different cases can also be treated explicitly, leading to a suitable $\alpha_{\Phi}$ when $\gamma_{1}-\gamma_{2}$ is small enough.* The conditions in Theorem [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"} ensure that $f$ is smooth enough, so that one can basically apply the Feynman-Kac formula to justify that it is the transition probability function of a Markov process. It can then be used to prove the wellposedness (existence and uniqueness) of the SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"}. 
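Since the conditions above are purely arithmetic in $(\alpha, \beta_0, \dots, \beta_4)$, they can be verified mechanically. The helper below is a hypothetical utility of ours that simply transcribes the formulas for $\kappa_0$, $\kappa_1$, $\widehat{\Delta\beta}$ and $\hat\alpha_\Phi$ from the text; it reproduces the values quoted in cases (i)-(ii):

```python
def exponents(alpha, b0, b1, b2, b3, b4):
    """Compute kappa_0, kappa_1 and hat alpha_Phi from the text
    (b0, ..., b4 stand for beta_0, ..., beta_4)."""
    k0 = min((1.0 - b0) / 2.0, alpha - b0)                     # kappa_0
    k1 = k0 + (1.0 - b0) / 2.0                                 # kappa_1
    dbeta = max(b0 - b1, b3 - b2)                              # widehat{Delta beta}
    ap = 0.5 - b0 - dbeta / 2.0 - max(b0 + 1.0 - 2.0 * alpha, 0.0) / 2.0
    return k0, k1, ap

# Cases (i)-(ii) of Example 3: beta = (0, 0, 2g, 2g, g); the remark predicts
# kappa_0 = min(1/2, alpha), kappa_1 = min(1, alpha + 1/2), hat alpha_Phi = min(alpha, 1/2).
g = 0.5
k0, k1, ap = exponents(0.3, 0.0, 0.0, 2 * g, 2 * g, g)
assert abs(k0 - 0.3) < 1e-12 and abs(k1 - 0.8) < 1e-12 and abs(ap - 0.3) < 1e-12
```

For $\alpha \ge \tfrac12$ the same helper returns $\kappa_0 = \tfrac12$ and $\hat\alpha_\Phi = \tfrac12$, matching the second branch of the formula.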
If one already knows that the SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} has a unique weak solution, then one can rely on Theorem [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"} below, which requires less technical conditions on $A$ and $(\mu,\sigma)$, to check that $f$ is the corresponding transition probability function. In this case, the path-dependent functional ${\rm v}$ defined above may only be of class ${\mathbb C}^{0,1}([0,T))$, but this is enough to deduce that [\[eq:vr_as_exp\]](#eq:vr_as_exp){reference-type="eqref" reference="eq:vr_as_exp"} holds, and to obtain its Itô-Dupire decomposition, whenever it satisfies for instance one of the conditions a. or b. of Theorem [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"}. **Theorem 11**. *Let Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.(i) and Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}.(i) hold true, and assume that the SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} has a unique weak solution, so that the corresponding process $( X, I)$ is a strong Markov process.* *$\mathrm{(i)}$ Assume in addition that [\[eq:def_kappa0\]](#eq:def_kappa0){reference-type="eqref" reference="eq:def_kappa0"} holds true, so that $f$ is well defined. Then, $f$ is the transition probability function of $( X, I)$, and [\[eq:vr_as_exp\]](#eq:vr_as_exp){reference-type="eqref" reference="eq:vr_as_exp"} holds whenever [\[eq:bound_g\_l\]](#eq:bound_g_l){reference-type="eqref" reference="eq:bound_g_l"} does.* *$\mathrm{(ii)}$ Assume that [\[eq:def_kappa0\]](#eq:def_kappa0){reference-type="eqref" reference="eq:def_kappa0"}, [\[eq: kappa1\]](#eq: kappa1){reference-type="eqref" reference="eq: kappa1"} and [\[eq:bound_g\_l\]](#eq:bound_g_l){reference-type="eqref" reference="eq:bound_g_l"} hold. Then, ${\rm v}\in {\mathbb C}^{0,1}([0,T))$. 
Suppose in addition that one of the following holds:* - *there exists $C_{\eqref{eq: cond vr Lip}}>0$ such that $$\begin{aligned} \label{eq: cond vr Lip} \big|{\rm v}(t,{\rm x})-{\rm v}(t,{\rm x}') \big| ~\le~ C_{\eqref{eq: cond vr Lip}} \int_{0}^{t} |{\rm x}(r)-{\rm x}'(r)| d|A|_{r}, \end{aligned}$$ for all $t \in [0,T]$, ${\rm x},{\rm x}'\in D([0,T])$ such that ${\rm x}(t) = {\rm x}'(t)$, in which $|A|$ denotes the total variation of $A$.* - *$A$ is monotone and $0 < \frac{1+\beta_{2}-\beta_{0}}{2+4\beta_{4}} < 1-\frac{\beta_3-\beta_{2}+\beta_{0}}{2}$.* *Then, $$\begin{aligned} \label{eq: Ito vr} {\rm v}(t, X)={\rm v}(0,X)+\int_{0}^{t} \partial_{{\rm x}} {\rm v}(s, X) \bar{\sigma}_s (X) dW_s -\int_0^t \bar \ell_{{s}}( X) ds, ~~t \in [0,T]. \end{aligned}$$* **Remark 12**. *When $\mu/\sigma$ is bounded, and $\sigma$ is Lipschitz in its space variable in the sense that, for some constant $C_{\eqref{eq: coeff Lipschitz}}>0$, $$\label{eq: coeff Lipschitz} \big|\sigma_{s}(x)-\sigma_{s}(x') \big| ~\le~ C_{\eqref{eq: coeff Lipschitz}}|x-x'|, ~~s \in [0, T],\; x,x'\in \mathbb{R}^{2},$$ with $(\sigma_{s}(0))_{s\le T}$ bounded, then the SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} has a unique weak solution.* **Remark 13**. 
*To check the conditions in Theorem [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"}.$\mathrm{(ii).(b)}$, let us consider the situations of Example [Example 3](#example : cas de A){reference-type="ref" reference="example : cas de A"}.* *$\mathrm{(i)-(ii)}$ In these cases, $\frac{1+\beta_{2}-\beta_{0}}{2+4\beta_{4}} = \frac12$ and $1-\frac{\beta_3-\beta_{2}+\beta_{0}}{2} = 1$, so that the conditions in Theorem [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"}.$\mathrm{(ii).(b)}$ hold true.* *$\mathrm{(iii)}$ In this case, $\frac{1+\beta_{2}-\beta_{0}}{2+4\beta_{4}} = \frac12-\frac{\gamma_{1}-\gamma_{2}}{1+2\gamma_{2}}$ and $1-\frac{\beta_3-\beta_{2}+\beta_{0}}{2} =1-2(\gamma_{1}-\gamma_{2})$. Therefore, the conditions in Theorem [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"}.$\mathrm{(ii).(b)}$ hold true when $\gamma_{1}-\gamma_{2}$ is small enough.* # Proofs {#sec: proofs} This section is devoted to the proofs of Theorems [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"}, [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"} and [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"}. ## A priori estimates Recall that, with $w :={\rm w}_{s,t}(x,y)$ (see [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}), $$f_{r,z}(s,x;t,y) ~:=~ \frac1{2\pi {\rm det}\left( \Sigma_{s,t}(r, z)\right)^{\frac12}} \exp \Big( -\frac{1}{2} \big\langle \Sigma^{-1}_{s,t}(r, z) w, w \big\rangle \Big),$$ where $$\Sigma_{s,t} (r, z) ~:=~ \sigma^2_{r} \big( z \big) \left( \begin{array}{cc} t- s & - \int_s^t (A_r - A_s) dr \\ - \int_s^t (A_r - A_s) dr & \int_s^t (A_r - A_s)^2 dr \end{array} \right).$$ By direct computation, one has $$\label{eq: det sigma bar} {\rm det}\left(\Sigma_{s,t}(r, z) \right) ~=~ \sigma^4_{r} \big( z \big) \Big[ (t-s) \int_s^t (A_r - A_{s})^2 dr - \Big( \int_s^t (A_r - A_{s}) dr \Big)^2 \Big] ~=~ \sigma^4_{r} \big( z \big) (t-s)^2 m_{s,t},$$ and hence $$\begin{aligned} \label{eq: def Sigmab -1} \Sigma_{s,t}^{-1} (r, z) ~=~ \sigma^{-2}_{r}\big(z \big) \left( \begin{array}{cc} \frac{1}{t-s} \frac{\tilde m_{s,t}}{m_{s,t}} & \frac{\int_s^t (A_r - A_s) dr}{(t-s)^2 m_{s,t}} \\ \frac{\int_s^t (A_r - A_s) dr}{(t-s)^2 m_{s,t}} & \frac{1}{(t-s) m_{s,t}} \end{array} \right). \end{aligned}$$ The following quantities will play an important role in our analysis. For $i=1,2$, $(r,z) \in [0,T] \times\mathbb{R}^{2}$ and $(s,x,t,y)\in \Theta$, with $w :={\rm w}_{s,t}(x,y)$ (see [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}), we compute that $$\label{eq:Dx1fb} \partial_{x_i} f_{r,z} (s,x; t,y) = f_{r,z} (s,x; t, y) \Big( - \big(\Sigma^{-1}_{s,t}(r, z) w \big)_i \Big),$$ $$\label{eq:Dx1x2fb} \partial^2_{x_1 x_i} f_{r,z} (s,x; t,y) = f_{r,z}(s,x; t,y) \Big( \big( \Sigma^{-1}_{s,t}(r,z) w \big)_1 \big( \Sigma^{-1}_{s,t}(r,z) w \big)_i - \big( \Sigma^{-1}_{s,t} (r,z) \big)_{1,i} \Big),$$ and $$\begin{aligned} \label{eq:Dx1x1x2fb} \frac{\partial^3_{x_1 x_1 x_i} f_{r,z} (s,x; t,y)}{f_{r,z} (s,x; t, y)} =~ & 2 \big( \Sigma^{-1}_{s,t}(r,z) w \big)_1 \big( \Sigma^{-1}_{s,t}(r,z) \big)_{1,i} + \big( \Sigma^{-1}_{s,t}(r,z) w \big)_i \big( \Sigma^{-1}_{s,t}(r,z) \big)_{1,1} \nonumber \\ & - \big( \Sigma^{-1}_{s,t}(r,z) w \big)_1^2 \big( \Sigma^{-1}_{s,t}(r,z) w \big)_i. \end{aligned}$$ Let us first provide some estimates in the following lemma. **Lemma 14**. *Let Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.(i) hold. 
Then, there exist constants $C_{\eqref{eq: norne Sigma-1 w 1}}$, $C_{\eqref{eq: norne Sigma-1 w 2}}$, $C_{{\eqref{eq: borne Sigma -1 11} }}$, $C_{{\eqref{eq: borne Sigma -1 12} }} > 0$ such that, for all $(s,x,t,y)\in \Theta$ and $(r,z) \in [0,T] \times\mathbb{R}^{2}$, with $w := {\rm w}_{s,t}(x,y)$ (see [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}), we have $$\begin{aligned} \label{eq: norne Sigma-1 w 1} \Big| \big(\Sigma^{-1}_{s,t} (r, z) w \big)_1 \Big| &~=~ \Big| \frac{\partial_{x_1} f_{r,z} (s,x; t,y)}{f_{r,z} (s,x; t, y)} \Big| ~\le~ \frac{C_{\eqref{eq: norne Sigma-1 w 1}}}{(t-s)^{\frac{1 + \beta_0}{2}}} \sqrt{\big \langle \Sigma^{-1}_{s,t}(r,z) w, w \big \rangle},\\ \label{eq: norne Sigma-1 w 2} \Big| \big(\Sigma^{-1}_{s,t} (r, z) w \big)_2 \Big| &~\le~ \frac{C_{\eqref{eq: norne Sigma-1 w 2}} }{(t-s)^{\frac{1 + \beta_3}{2}}} \sqrt{\big \langle \Sigma^{-1}_{s,t}(r,z) w, w \big \rangle}, \\ \Big| \big( \Sigma^{-1}_{s,t} (r,z) \big)_{1,1} \Big| &~\le~ \frac{C_{{\eqref{eq: borne Sigma -1 11} }}}{(t-s)^{1+ \beta_{0} }}\label{eq: borne Sigma -1 11}, \\ \Big| \big( \Sigma^{-1}_{s,t} (r,z) \big)_{1,2} \Big| &~\le~ \frac{C_{{\eqref{eq: borne Sigma -1 12} }}}{(t-s)^{1+\frac{\beta_{0}+\beta_{3}}{2}}}\label{eq: borne Sigma -1 12}. \end{aligned}$$* *Proof.* The bounds in [\[eq: borne Sigma -1 11\]](#eq: borne Sigma -1 11){reference-type="eqref" reference="eq: borne Sigma -1 11"} and [\[eq: borne Sigma -1 12\]](#eq: borne Sigma -1 12){reference-type="eqref" reference="eq: borne Sigma -1 12"} are immediate consequences of Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.(i) and [\[eq: def Sigmab -1\]](#eq: def Sigmab -1){reference-type="eqref" reference="eq: def Sigmab -1"}, up to appealing to the Cauchy-Schwarz inequality for the latter. 
By direct computation, one has $$\begin{aligned} \sigma_{r}(z)^{2}\big \langle \Sigma^{-1}_{s,t}(r, z) w, w \big \rangle &= \frac{1}{(t-s)^2 m_{s,t}} \int_s^t \Big( (A_r - A_s)^2 w_1^2 + 2 (A_r - A_s) w_1 w_2 + w_2^2 \Big) dr \\ &= \frac{1}{(t-s)^2 m_{s,t}} \int_s^t \big( (A_r - A_s)w_1 + w_2 \big)^2 dr. \end{aligned}$$ Hence, using Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"} and [\[eq : hyp vitesse explo\]](#eq : hyp vitesse explo){reference-type="eqref" reference="eq : hyp vitesse explo"}, $$\begin{aligned} \underline\mathfrak{a}\Big| \big(\Sigma^{-1}_{s,t}(r, z) w \big)_1 \Big| &\le \sigma_{r}( z)^{2}\Big| \big(\Sigma^{-1}_{s,t}(r, z) w \big)_1 \Big| \\ &= \frac{1}{(t-s)^2 m_{s,t}} \Big| \int_s^t \big(A_r -A_s\big) \big( (A_r - A_s) w_1 + w_2 \big) dr \Big| \\ &\le \sqrt{ \frac{ \tilde m_{s,t}}{ (t-s) m_{s,t}} }\sqrt{\bar \mathfrak{a}\big \langle \Sigma^{-1}_{s,t}(r, z) w, w \big \rangle} \le \frac{\sqrt{\bar \mathfrak{a}C_\eqref{eq : hyp vitesse explo}}}{(t-s)^{\frac{1 + \beta_0}{2}}} \sqrt{\big \langle \Sigma^{-1}_{s,t}(r, z) w, w \big \rangle}. \end{aligned}$$ Similarly, combining the above with Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"} and the Cauchy-Schwarz inequality implies that $$\begin{aligned} \underline\mathfrak{a}\Big| \big(\Sigma^{-1}_{s,t}(r, z) w \big)_2 \Big| &\le \sigma_{r}( z)^{2}\Big| \big(\Sigma^{-1}_{s,t}(r,z) w \big)_2 \Big| \\&= \Big| \frac{1}{(t-s)^2 m_{s,t}} \int_s^t \big( (A_r - A_s) w_1 + w_2 \big) dr \Big| \\ &\le \frac{\sqrt{\bar \mathfrak{a}C_{\eqref{eq:order_m}}}}{(t-s)^{\frac{1 + \beta_3}{2}}} \sqrt{\big \langle \Sigma^{-1}_{s,t}(r,z) w, w \big \rangle}. \end{aligned}$$ ◻ As usual, an important step consists in providing a suitable upper-bound on the parametrix density. 
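Before turning to this bound, note that the determinant identity [\[eq: det sigma bar\]](#eq: det sigma bar){reference-type="eqref" reference="eq: det sigma bar"} admits a quick numerical sanity check, e.g. with $A_t=\sqrt{t}$ and $\sigma\equiv 1$ (a sketch of ours on a quadrature grid; the cancellation behind the identity is exact even at the discrete level):

```python
import numpy as np

def trap(y, dt):
    """Trapezoidal rule on an evenly spaced grid."""
    return float(np.sum((y[:-1] + y[1:]) * 0.5) * dt)

def Sigma(s, t, A, sig2=1.0, n=4001):
    """Quadrature approximation of Sigma_{s,t}(r,z) with sigma_r^2(z) = sig2."""
    r = np.linspace(s, t, n)
    dt = (t - s) / (n - 1)
    B = A(r) - A(s)
    i1, i2 = trap(B, dt), trap(B ** 2, dt)
    return sig2 * np.array([[t - s, -i1], [-i1, i2]])

A, s, t = np.sqrt, 0.25, 0.5
S = Sigma(s, t, A)
# m_{s,t} computed on the same grid, directly from its definition
r = np.linspace(s, t, 4001)
dt = (t - s) / 4000
Abar = trap(A(r), dt) / (t - s)
m = trap((A(r) - Abar) ** 2, dt) / (t - s)
# det Sigma = sigma^4 (t-s)^2 m_{s,t}  (here sigma^2 = 1)
assert abs(np.linalg.det(S) - (t - s) ** 2 * m) < 1e-12
```

Expanding both sides shows they agree term by term, $(t-s)\int(A_r-A_s)^2dr-\big(\int(A_r-A_s)dr\big)^2$, which is why the discrepancy above is only floating-point rounding.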
Recall that $y \longmapsto f^{\circ} (s,x;t, y)$ defined in [\[eq: def fcirc\]](#eq: def fcirc){reference-type="eqref" reference="eq: def fcirc"} is a Gaussian density function on $\mathbb{R}^{2}$. **Lemma 15**. *Let Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.(i) hold. Then, there exists $C_{\eqref{eq: def varomega}} >0$ such that, for all $(s,x,t,y)\in \Theta$ and $(r,z) \in [0,T] \times\mathbb{R}^{2}$, we have $$\begin{aligned} \label{eq: majo fy} f_{r,z} (s,x;t,y) ~\le~ \varpi(s,x;t,y) ~f^{\circ} (s,x;t,y), \end{aligned}$$ in which $\varpi:=\varpi^{1}\varpi^{2}$ with $$\begin{aligned} \label{eq: def varomega} \left\{\begin{array}{l} \varpi^{1}(s,x;t,y) ~:=~ C_{\eqref{eq: def varomega}} \exp\left( - \frac{1}{C_{\eqref{eq: def varomega}}} \Big( \frac{|w_1|^2}{(t-s)^{1+\beta'_1}} + \frac{|w_2|^2}{(t-s)^{1+\beta'_2}} \Big) \right), \\ \varpi^{2}(s,x;t,y) ~:=~ \exp\Big( -\frac12 \langle \Sigma^{-1}_{s,t}(4 \bar \mathfrak{a}) w, w \big \rangle \Big), \end{array} \right. \end{aligned}$$ where $w: ={\rm w}_{s,t}(x,y)$ as defined in [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}.* *Proof.* Let us first observe that $m_{s,t}=\tilde m_{s,t}-[(t-s)^{-1}\int_{s}^{t}(A_{r}-A_{s})dr]^{2}$, so that [\[eq : hyp vitesse explo\]](#eq : hyp vitesse explo){reference-type="eqref" reference="eq : hyp vitesse explo"} is equivalent to $$\begin{aligned} \left(\frac1{t-s}\int_{s}^{t}(A_{r}-A_{s})dr\right)^{2} ~\le~ \tilde m_{s,t} \left(1-\frac{(t-s)^{\beta_{0}}}{C_{\eqref{eq : hyp vitesse explo}}}\right). \end{aligned}$$ Note that, upon changing the value of $C_{\eqref{eq : hyp vitesse explo}}$, one can assume that $C_{\eqref{eq : hyp vitesse explo}}\ge 2T^{\beta_{0}}$. 
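The identity $m_{s,t}=\tilde m_{s,t}-[(t-s)^{-1}\int_{s}^{t}(A_{r}-A_{s})dr]^{2}$ used above simply says that $m_{s,t}$ is the variance of $r\mapsto A_{r}-A_{s}$ over $[s,t]$ (in particular $m_{s,t}\ge 0$). A minimal numerical sketch, with the illustrative path $A_r=\sqrt{r}$ (an assumption made only for this check):

```python
import math

# Midpoint discretization of [s, t] with the illustrative path A_r = sqrt(r).
s, t, n = 0.0, 1.0, 10_000
dr = (t - s) / n
g = [math.sqrt(s + (i + 0.5) * dr) - math.sqrt(s) for i in range(n)]  # A_r - A_s

mean = sum(g) * dr / (t - s)                     # (t-s)^{-1} ∫ (A_r - A_s) dr
tilde_m = sum(x ** 2 for x in g) * dr / (t - s)  # (t-s)^{-1} ∫ (A_r - A_s)^2 dr
m = tilde_m - mean ** 2                          # the quantity m_{s,t}

# m_{s,t} coincides with the variance of r -> A_r - A_s on [s, t]:
var = sum((x - mean) ** 2 for x in g) * dr / (t - s)
print(abs(m - var) < 1e-12 and m >= 0.0)  # → True
```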
Hence, using the inequality $2ab\le a^{2}+b^{2}$ for $a,b\in \mathbb{R}$, $$\begin{aligned} 2\left|\frac{\int_{s}^{t}(A_{r}-A_{s})w_{1}w_{2}dr}{(t-s)^{2}m_{s,t}}\right| &\le 2 \left[\tilde m_{s,t} \left(1-\frac{(t-s)^{\beta_{0}}}{C_{\eqref{eq : hyp vitesse explo}}}\right)\right]^{\frac12} \frac{|w_{1}w_{2}|}{(t-s)m_{s,t}}\\ &\le \left(1-\frac{(t-s)^{\beta_{0}}}{C_{\eqref{eq : hyp vitesse explo}}}\right)^{\frac12}\left\{ \frac{\tilde m_{s,t}}{(t-s)m_{s,t}} |w_{1}|^{2} +\frac{1}{(t-s)m_{s,t}}|w_{2}|^{2}\right\}. \end{aligned}$$ Combining the above with [\[eq : hyp vitesse explo\]](#eq : hyp vitesse explo){reference-type="eqref" reference="eq : hyp vitesse explo"}-[\[eq:order_m\]](#eq:order_m){reference-type="eqref" reference="eq:order_m"} and Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"} implies that $$\begin{aligned} \big \langle \Sigma^{-1}_{s,t}(r, z) w, w \big \rangle &\ge \frac{1}{\sigma_{r}(z)^{2}} \left(\frac{\tilde m_{s,t}}{(t-s)m_{s,t}}|w_{1}|^{2}- 2\left|\frac{\int_{s}^{t}(A_{r}-A_{s})w_{1}w_{2}dr}{(t-s)^{2}m_{s,t}}\right|+\frac{ 1}{(t-s)m_{s,t}}|w_{2}|^{2} \right) \nonumber \\ &\ge \frac{C}{\bar \mathfrak{a}} \left( \frac{|w_1|^2}{(t-s)^{1+\beta_1-\beta_{0}}} + \frac{|w_2|^2}{(t-s)^{1+\beta_2-\beta_{0}}} \right),\label{eq: mino forme quadra par diago} \end{aligned}$$ for some $C>0$ that does not depend on $(s,x,t,y)$. The required result then follows from straightforward algebra and Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}. ◻ **Lemma 16**. *Let Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.(i) hold. 
Let us define the transition density function $f^{\circ,\frac12}$ by, for $(s,x,t,y)\in \Theta$, $$\begin{aligned} \label{eq: def fcirc1/2} f^{\circ,\frac12}(s,x;t,y) := \frac{1}{2 \pi~ {\rm det}\left(\Sigma_{s,t}(8 \bar \mathfrak{a}) \right)^{\frac12}} \exp \Big( - \frac12 \big \langle \Sigma^{-1}_{s,t}(8 \bar \mathfrak{a}){\rm w}_{s,t}(x,y) , {\rm w}_{s,t}(x,y) \big \rangle\Big). \end{aligned}$$ Then, there exists $C_{[\ref{lem: fcirc 1/2}]} >0$ such that, for all $(s,x,t,y)\in \Theta$ and $x'\in \mathbb{R}^{2}$ satisfying $$\begin{aligned} \label{eq: hypo dist x x'} |x_1 - x_1'|^{\frac{1}{1+\beta'_1}} + |x_2 - x_2'|^{\frac{1}{1+\beta'_2}} ~\le~ (t-s)^{1/2}, \end{aligned}$$ we have $$\begin{aligned} \label{eq: majo fcirc par fcirc 1/2} f^{\circ} (s,x';t,y) \le C_{[\ref{lem: fcirc 1/2}]} f^{\circ,\frac12} (s,x;t,y) \end{aligned}$$ and $$\begin{aligned} \label{eq: borne norme compensee par varpi} \left(\frac{|{\rm w}_{s,t}(x',y)_{1}|^{2}}{(t-s)^{{1+\beta'_{1}}}} +\frac{|{\rm w}_{s,t}(x',y)_{2}|^{2}}{(t-s)^{{1+\beta'_{2}}}}\right) \varpi^{1}(s,x,t,y) \le C_{[\ref{lem: fcirc 1/2}]} . \end{aligned}$$* *Proof.* Set $w:={\rm w}_{s,t}(x,y)$ and $w':={\rm w}_{s,t}(x',y)$. First observe that $$\left(\langle \Sigma^{-1}_{s,t}(\bar \mathfrak{a}) w,w\rangle\right)^{\frac12} \le \left(\langle \Sigma^{-1}_{s,t}(\bar \mathfrak{a})(w-w'),(w-w')\rangle\right)^{\frac12} + \left(\langle \Sigma^{-1}_{s,t}(\bar \mathfrak{a})w',w'\rangle\right)^{\frac12}.$$ Using that $2ab\le a^{2}+b^{2}$ for $a,b\ge 0$, we deduce that $$\begin{aligned} -\langle \Sigma^{-1}_{s,t}(\bar \mathfrak{a})w',w'\rangle &\le -\frac12 \langle \Sigma^{-1}_{s,t}(\bar \mathfrak{a})w,w\rangle + \langle \Sigma^{-1}_{s,t}(\bar \mathfrak{a})(w-w'),(w-w')\rangle. 
\end{aligned}$$ Now, by the same arguments as in the proof of Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"} and Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}, we have $$\begin{aligned} \big\langle \Sigma^{-1}_{s,t}(\bar \mathfrak{a})(w-w'),(w-w') \big \rangle &\le 2 \bar \mathfrak{a}\left( \frac{|x_1-x_{1}'|^2}{(t-s)^{1+\beta'_1}} + \frac{|x_2-x_{2}'|^2}{(t-s)^{1+\beta'_2}} \right)\le 4 \bar \mathfrak{a}, \end{aligned}$$ in which we used [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"} and our assumption [\[eq: hypo dist x x\'\]](#eq: hypo dist x x'){reference-type="eqref" reference="eq: hypo dist x x'"}. This proves [\[eq: majo fcirc par fcirc 1/2\]](#eq: majo fcirc par fcirc 1/2){reference-type="eqref" reference="eq: majo fcirc par fcirc 1/2"}. The assertion [\[eq: borne norme compensee par varpi\]](#eq: borne norme compensee par varpi){reference-type="eqref" reference="eq: borne norme compensee par varpi"} is proved similarly, upon interchanging the roles of $x$ and $x'$. ◻ ## Well-posedness of $\widetilde{\Phi}$ In this section, we prove that $\widetilde{\Phi}$ in [\[eq:def_Phit\]](#eq:def_Phit){reference-type="eqref" reference="eq:def_Phit"}-[\[eq: def Delta L k\]](#eq: def Delta L k){reference-type="eqref" reference="eq: def Delta L k"} is well defined. Recall that $f^{\circ}$ is defined in [\[eq: def fcirc\]](#eq: def fcirc){reference-type="eqref" reference="eq: def fcirc"}, and let us define $$\begin{aligned} \tilde{f}^{\circ}(s,x;t,y) ~:=~ f^{\circ}(s,{\rm \bf A}_{s}x;t,{\rm \bf A}_{t}y), ~~ (s,x,t,y)\in \Theta. 
\end{aligned}$$ Noticing that ${\rm \bf A}={\rm \bf A}^{-1}$, and recalling that $$f_{r, z}(s,x;t,y)~:=~\tilde{f}_{r, {\rm \bf A}_{r}z}(s,{\rm \bf A}_{s}x;t,{\rm \bf A}_{t}y),$$ it is straightforward to check that $$\begin{aligned} \partial_{x_{1}} f_{r,z}(s,x;t,y) &~=~ \overrightarrow{\rm A}_{s} \cdot D_x \tilde{f}_{r, {\rm \bf A}_{r} z}(s,{\rm \bf A}_{s}x;t, {\rm \bf A}_{t}y), \label{eq: lien derive f bar f} \\ \partial^{2}_{x_{1}x_{1}} f_{r,z}(s,x;t,y) &~=~ {\rm Tr} \Big[ \overrightarrow{\rm A}_{s} \big(\overrightarrow{\rm A}_s \big)^{\top} D^{2}_{xx} \tilde{f}_{r, {\rm \bf A}_{r} z}(s,{\rm \bf A}_{s}x;t, {\rm \bf A}_{t}y) \Big]. \label{eq: lien derive seconde f bar f} \end{aligned}$$ **Lemma 17**. * Let the conditions of Theorem [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"}.(i) hold. Then, there exists a constant $C_{\eqref{eq: estimee L-Lfy}}>0$ such that $$\begin{aligned} \label{eq: estimee L-Lfy} \big| \big(\widetilde{{\cal L}}- \widetilde{{\cal L}}^{t, \tilde y} \big) \tilde{f}_{t, \tilde y}(s,\tilde x;t,\tilde y) \big| &~\le~ \frac{C_{\eqref{eq: estimee L-Lfy}}}{(t-s)^{1-\kappa_0}} \tilde{f}^{\circ}(s,\tilde x;t,\tilde y), ~\mbox{for all}~ (s,\tilde x,t,\tilde y)\in \Theta, \end{aligned}$$ in which $\kappa_{0}$ is defined in [\[eq:def_kappa0\]](#eq:def_kappa0){reference-type="eqref" reference="eq:def_kappa0"}.* *Proof.* For simplicity, we assume that $t-s\le 1$, the case $t-s>1$ being trivially handled. 
Let us denote $$\label{eq:def_xbyb} x := {\rm \bf A}_{s} \tilde x ~~\mbox{and}~ y := {\rm \bf A}_{t} \tilde y.$$ $\mathrm{(i)}$ Using [\[eq: def mu sigma A\]](#eq: def mu sigma A){reference-type="eqref" reference="eq: def mu sigma A"} and [\[eq: lien derive f bar f\]](#eq: lien derive f bar f){reference-type="eqref" reference="eq: lien derive f bar f"}, we first estimate $$\begin{aligned} I_{1} ~:=~ \tilde{\mu}_{s}(\tilde x) ~\overrightarrow{\rm A}_{s}\cdot D_x \tilde{f}_{t,\tilde y}(s,\tilde x;t,\tilde y) ~=~ \mu_{s}(x) ~ \partial_{x_1} f_{t, y} (s, x; t, y). \end{aligned}$$ Then, by Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}, Lemmas [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"} and [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"}, it follows that $$\begin{aligned} | I_{1}| ~\le~ \frac{\mathfrak{b}C_{\eqref{eq: norne Sigma-1 w 1}}}{(t-s)^{\frac{1 + \beta_0}{2} }} \sqrt{\big \langle \Sigma^{-1}_{s,t}(t,y) {w}, {w} \big \rangle} f_{t,y}(s, x;t, y) ~\le~ \frac{\mathfrak{b}C_{\eqref{eq: norne Sigma-1 w 1}} C_{\eqref{eq: def const 1}}}{ (t-s)^{1 - \frac{1-\beta_0 }2}} f^{\circ}(s, x; t, y), \label{eq: borne I1 estimation L-L} \end{aligned}$$ in which $w := {\rm w}_{s,t}({x},{y})$ and, with $w' :={\rm w}_{{s',t'}}({x'},{y'})$, $$\begin{aligned} \label{eq: def const 1} C_{\eqref{eq: def const 1}} ~:= \sup_{(s',x',t',y',z')\in \Theta\times\mathbb{R}^{2}} \sqrt{\big \langle \Sigma^{-1}_{s',t'}(t',z') w', w' \big \rangle} \varpi(s',x'; t',y') ~<~ \infty. 
\end{aligned}$$ $\mathrm{(ii)}$ Using [\[eq: def mu sigma A\]](#eq: def mu sigma A){reference-type="eqref" reference="eq: def mu sigma A"} and [\[eq: lien derive seconde f bar f\]](#eq: lien derive seconde f bar f){reference-type="eqref" reference="eq: lien derive seconde f bar f"}, we now estimate $$\begin{aligned} I_{2} &~:=~ {\rm Tr}\Big[ \big( \tilde{\sigma}^2_{t}(\tilde y) - \tilde{\sigma}^2_{s}(\tilde x) \big) \overrightarrow{\rm A}_s (\overrightarrow{\rm A}_s)^{\top} D^{2}_{xx} \tilde{f}_{t,\tilde y}(s,\tilde x; t,\tilde y) \Big] = \big[ \sigma^2_{t}(y) - \sigma^2_{s}(x) \big] \partial^2_{x_1 x_1} f_{t,y} (s, x; t, y). \end{aligned}$$ By [\[eq:Dx1x2fb\]](#eq:Dx1x2fb){reference-type="eqref" reference="eq:Dx1x2fb"} and Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"}, one obtains $$\begin{aligned} |I_2| &~\le~ \big| \sigma^2_{t}(y) - \sigma^2_{s}(x) \big| \frac{(C_{\eqref{eq: norne Sigma-1 w 1}})^{2}\vee C_{(\ref{eq: borne Sigma -1 11})}}{(t-s)^{1+\beta_0}} \Big( \big \langle \Sigma_{s,t}^{-1} (t,y) w, w \big \rangle + 1 \Big) f_{t,y} (s, x; t, y). 
\end{aligned}$$ Recalling [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}, [\[eq : hyp holder coeff bar sigma\]](#eq : hyp holder coeff bar sigma){reference-type="eqref" reference="eq : hyp holder coeff bar sigma"}, [\[eq : unif elliptic\]](#eq : unif elliptic){reference-type="eqref" reference="eq : unif elliptic"} and Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"}, it follows that, for some $C>0$ that does not depend on $(s,x,t,y)$ and $z\in \mathbb{R}^{2}$, $$\begin{aligned} |I_2| &~\le~ C \frac{1}{(t-s)^{1+\beta_0-\alpha}}\Big( 1 + \big| {\rm w}_{s,t}(x,y)\big|^{\frac{2\alpha}{1+\beta'_1}} + \big|{\rm w}_{s,t}(x,y) \big|^{\frac{2\alpha}{1+\beta'_2}} \Big)(\varpi^{1} f^{\circ})(s, x; t, y), \end{aligned}$$ and we conclude by using the definition of $\varpi^{1}$ in [\[eq: def varomega\]](#eq: def varomega){reference-type="eqref" reference="eq: def varomega"}. ◻ **Proposition 18**. *Let the conditions of Theorem [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"}.(i) hold. Then, the sum in [\[eq:def_Phit\]](#eq:def_Phit){reference-type="eqref" reference="eq:def_Phit"} is well-defined and there exists a constant $C_{\eqref{eq: estime vraie densite}}>0$ such that $$\begin{aligned} \label{eq: estime vraie densite} \big| \widetilde{\Phi}(s,x;t,y) \big| &~\le~ \frac{C_{\eqref{eq: estime vraie densite}}}{(t-s)^{1-\kappa_{0}}} \tilde{f}^{\circ}(s,x;t,y), ~\mbox{for all}~ (s,x,t,y)\in \Theta. \end{aligned}$$ Moreover, $\widetilde{\Phi}$ is continuous on $\Theta$ and satisfies $$\begin{aligned} \label{eq: sol eq Phi} \widetilde{\Phi}(s,x;t,y) &=\widetilde{\Delta}_0(s,x; t,y)+\int_s^t \int_{\mathbb{R}^2} \widetilde{\Delta}_0(s,x; r,z) \widetilde{\Phi}(r,z;t,y) dz dr, ~\mbox{for all}~ (s,x,t,y)\in \Theta. 
\end{aligned}$$* *Proof.* Let us recall that, if well-defined, $$\widetilde{\Phi}(s,x; t, y) ~:=~ \sum_{k=0}^{\infty} \widetilde{\Delta}_k(s,x; t,y),$$ where $\widetilde{\Delta}_0(s,x; t,y) := \big( \widetilde{{\cal L}}- \widetilde{{\cal L}}^{t, y} \big) \tilde{f}_{t,y}(s,x; t,y)$, and $$\begin{aligned} \widetilde{\Delta}_{k+1} (s,x;t,y) := \int_s^t \int_{\mathbb{R}^2} \widetilde{\Delta}_0(s,x; r,z) \widetilde{\Delta}_{k}(r,z;t,y) dz dr, ~~k\ge 0. \end{aligned}$$ We already know from Lemma [Lemma 17](#lem : estime L-Lfy){reference-type="ref" reference="lem : estime L-Lfy"} that $$\begin{aligned} \big| \big( \widetilde{{\cal L}}-\widetilde{{\cal L}}^{t,y} \big) \tilde{f}_{t, y}(s,x;t,y) \big| &~\le~ \frac{C_{\eqref{eq: estimee L-Lfy}}}{(t-s)^{1-\kappa_0}} \tilde{f}^{\circ}(s,x;t,y), \end{aligned}$$ for all $(s,x,t,y)\in \Theta$. By the same induction argument as in [@francesco2005class proof of Proposition 4.1], together with [\[eq: det sigma bar\]](#eq: det sigma bar){reference-type="eqref" reference="eq: det sigma bar"} and [\[eq:order_m\]](#eq:order_m){reference-type="eqref" reference="eq:order_m"}, we then deduce that $$\begin{aligned} \label{eq: estime Delta k} \big| \widetilde{\Delta}_{k} (s,x;t,y) \big| ~\le~ \frac{M_{k}}{(t-s)^{1-k\kappa_{0}}} \tilde{f}^{\circ}(s,x;t,y) ~\le~ CM_{k} (t-s)^{k\kappa_{0}-2-\frac{\beta_{3}}{2}}, \end{aligned}$$ in which $C>0$ does not depend on $(s,x,t,y)$ and $k$, and $$M_{k} ~:=~ \frac{\{C_{\eqref{eq: estimee L-Lfy}}\Gamma(\kappa_{0})\}^{k}}{\Gamma(k\kappa_{0})},$$ where $\Gamma$ denotes the Gamma function. By dominated convergence, each map $\widetilde{\Delta}_{k}$ is continuous. Then, the well-posedness of $\widetilde{\Phi}$ follows from the fact that the power series $\sum_{k\ge 0} M_{k} u^{k}$ has a radius of convergence equal to $\infty$. Continuity of $\widetilde{\Phi}$ is a consequence of the locally uniform convergence of the series. 
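As a numerical aside, the infinite radius of convergence of $\sum_{k\ge 0} M_{k}u^{k}$ can be illustrated directly: by Stirling's formula, the Gamma function in the denominator of $M_{k}$ eventually dominates any geometric factor. In the sketch below the constants standing in for $C_{\eqref{eq: estimee L-Lfy}}$ and $\kappa_{0}$ are assumed for illustration only, and the terms are handled on a logarithmic scale to avoid overflow.

```python
import math

def log_term(k, C, kappa0, u):
    # log of M_k * u^k, with M_k = (C * Gamma(kappa0))^k / Gamma(k * kappa0), k >= 1
    return k * math.log(C * math.gamma(kappa0) * u) - math.lgamma(k * kappa0)

# Illustrative (assumed) constants; u can be taken arbitrarily large.
C, kappa0, u = 2.0, 0.9, 2.0
logs = [log_term(k, C, kappa0, u) for k in range(1, 200)]

# lgamma(k * kappa0) grows like k * kappa0 * log(k), which beats the
# linear-in-k numerator, so the terms tend to 0 whatever the value of u.
print(logs[-1] < logs[0])  # → True
```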
It remains to prove [\[eq: sol eq Phi\]](#eq: sol eq Phi){reference-type="eqref" reference="eq: sol eq Phi"}. Note that, by the above, $$\widetilde{\Phi}(s,x;t,y) = \widetilde{\Delta}_0(s,x; t,y)+\sum_{k\ge 0 }\int_s^t \int_{\mathbb{R}^2} \widetilde{\Delta}_0(s,x; r,z) \widetilde{\Delta}_k(r,z;t,y) dz dr$$ and the family $\{(r,z)\in (s,t)\times\mathbb{R}^{2} \mapsto \sum_{k=0}^{n}\widetilde{\Delta}_0(s,x; r,z) \widetilde{\Delta}_k(r,z;t,y)$, $n\ge 1\}$ is uniformly integrable and converges to $\widetilde{\Delta}_0(s,x; \cdot) \widetilde{\Phi}(\cdot;t,y)$. This implies [\[eq: sol eq Phi\]](#eq: sol eq Phi){reference-type="eqref" reference="eq: sol eq Phi"}. ◻ Recall that $$\Phi (s,x;t,y) ~:=~ \widetilde{\Phi}(s,{\rm \bf A}_{s}x; t, {\rm \bf A}_{t}y).$$ **Proposition 19**. *Let the conditions of Theorem [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"}.(i) hold. Then, $f: \Theta \longrightarrow \mathbb{R}$ is well-defined in [\[eq:def_f\_transition_proba\]](#eq:def_f_transition_proba){reference-type="eqref" reference="eq:def_f_transition_proba"}. Moreover, it is continuous on $\Theta$ and, for some $C_{[\ref{prop:existence_fb_regul}]}>0$, $$\begin{aligned} \big | f(s,x ;t,y) \big| ~\le~ C_{[\ref{prop:existence_fb_regul}]}f^{\circ}(s,x;t,y), ~\mbox{for all}~ (s,x,t,y)\in \Theta. \end{aligned}$$* *Proof.* This is an immediate consequence of Proposition [Proposition 18](#prop:existence_continuite_Phi){reference-type="ref" reference="prop:existence_continuite_Phi"} and Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"}, recalling that $f^{\circ}$ is a transition density and observing that $\int_{s}^{t}(t-r)^{-1+\kappa_{0}}dr\le C T^{\kappa_{0}}$. ◻ ## $C^1$-regularity We now prove that $x = (x_1, x_2) \mapsto f(s,x; t, y)$ is $C^{1}$ in its first space variable $x_{1}$, with partial derivative dominated by a Gaussian density. **Lemma 20**. 
*Let the conditions of Theorem [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"} hold. Then, there exists $C_{[\ref{lem : borne derive fbar}]}>0$ such that, for all $(r,z) \in [0,T] \times\mathbb{R}^2$ and $(s,x,t,y)\in \Theta$, $$\begin{aligned} \big| \partial_{x_{1}} f_{r, z}(s,x;t,y) \big| &~\le~ \frac{C_{[\ref{lem : borne derive fbar}]}} {(t-s)^{\frac{\beta_0+1}{2}}} f^{\circ}(s,x;t,y). \end{aligned}$$ Moreover, let $h: \mathbb{R}^2 \longrightarrow \mathbb{R}$ be a (measurable) function such that $\int_{\mathbb{R}^2} f^{\circ}(s,x; t, y) |h(y)| dy < \infty$, and $$V(s, x; t) ~:=~ \int_{\mathbb{R}^2} f_{t, y}(s,x; t, y) h(y) dy, ~~(s,x) \in [0,t) \times\mathbb{R}^{2},$$ then $(s,x)\in [0,t)\times\mathbb{R}^{2}\mapsto V(s,x;t)$ is continuously differentiable in its first space variable $x_{1}$ and satisfies $$\big| \partial_{x_{1}} V(s,x;t) \big| ~\le~ \frac{C_{[\ref{lem : borne derive fbar}]}} {(t-s)^{\frac{\beta_0+1}{2}}}\int_{\mathbb{R}^{2}} f^{\circ}(s, x;t,y) ~ |h(y)| dy,$$ in which $C_{[\ref{lem : borne derive fbar}]}>0$ does not depend on $(s,x,t)\in [0,T]\times\mathbb{R}^{2}\times[0,T]$ with $s<t$.* *Proof.* The first inequality follows immediately from Lemmas [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"} and [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"}, as in the proof of [\[eq: borne I1 estimation L-L\]](#eq: borne I1 estimation L-L){reference-type="eqref" reference="eq: borne I1 estimation L-L"}. The second one then follows by dominated convergence. ◻ For the following, we recall the definition of $\kappa_{1}$ in [\[eq: kappa1\]](#eq: kappa1){reference-type="eqref" reference="eq: kappa1"}. **Proposition 21**. *Let the conditions of Theorem [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"} hold. 
Then, for each $(t,y)\in (0,T]\times\mathbb{R}^{2}$, the map $(s,x)\in [0,t)\times\mathbb{R}^{2}\mapsto f(s,x;t,y)$ is continuously differentiable in its first space variable $x_{1}$. Moreover, there exists $C_{[\ref{prop : fb C1}]}>0$ such that $$|\partial_{x_{1}}f(s,x;t,y) | ~\le~ \frac{C_{[\ref{prop : fb C1}]}}{(t-s)^{1-\kappa_{1}}} f^{\circ}(s,x;t,y), ~ \mbox{for all}~ (s,x;t,y)\in\Theta.$$* *Proof.* Fix $z\in \mathbb{R}^{2}$. In view of the estimate in [\[eq: estime vraie densite\]](#eq: estime vraie densite){reference-type="eqref" reference="eq: estime vraie densite"}, together with Lemma [Lemma 20](#lem : borne derive fbar){reference-type="ref" reference="lem : borne derive fbar"}, we can find $C>0$, which does not depend on $(s,x,t,y)\in \Theta$, such that $$\begin{aligned} &\int_{\mathbb{R}^{2}}\Big| \partial_{x_1} f_{r,z}(s,x;r, z) \Phi(r, z; t,y)\Big|d z \\ \le~& C (r-s)^{\frac{-\beta_0-1}{2}} \int_{\mathbb{R}^{2}} f^{\circ}(s, x; r, z) \big| \Phi(r, z; t, y) \big| d z \\ \le~& C (t-r)^{-1 + \kappa_{0}} (r-s)^{\frac{-\beta_0-1}{2}} \int_{\mathbb{R}^{2}} f^{\circ}(s, x; r, z) f^{\circ}(r, z; t,y) d z\\ =~& C (t-r)^{-1 + \kappa_{0}} (r-s)^{\frac{-\beta_0-1}{2}} f^{\circ}(s, x; t, y). \end{aligned}$$ Therefore, by the dominated convergence theorem, $$\partial_{x_1} \int_{s}^{t}\int_{\mathbb{R}^{2}} f_{r, z}(s,x;r, z) \Phi(r, z; t,y)d zdr$$ is well-defined and continuous, and so is $\partial_{x_1} f(\cdot; t,y)$. The latter is bounded, from the above estimates, by integrating over $r$ and using the relation between the Euler Gamma and Beta functions. ◻ We conclude this section by a continuity property result on $f$, which allows one to apply the $C^1$-Itô formula in the context of Theorem [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"}. **Proposition 22**. 
*Let Assumptions [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"} and [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}.(i) hold true. Assume in addition that [\[eq:def_kappa0\]](#eq:def_kappa0){reference-type="eqref" reference="eq:def_kappa0"} holds and that $\frac{\beta_3-\beta_{2}+\beta_{0}}{2} < 1$, and let us fix $\alpha'\in \big( 0,\frac{1+\beta'_{2}}{2} \wedge (1- \frac{\beta_3-\beta_{2}+\beta_{0}}{2}) \big]$. Then, for all $\delta>0$, there exists $C_{[\ref{prop : holder f en x2}]}>0$ such that $$\begin{aligned} \big| f(s,x; t,y) - f(s,x'; t,y) \big| &~\le~ C_{[\ref{prop : holder f en x2}]}|x_{2}-x'_{2}|^{\frac{2\alpha'}{1+\beta'_{2}}}, \end{aligned}$$ for all $(s,x,t,y)\in \Theta$ and $x' = (x'_1, x'_2) \in \mathbb{R}^{2}$ such that $t-s\ge \delta$ and $x_{1}=x'_{1}$.* *Proof.* Let $I:=\big| f_{r,z}(s,x; t,y) - f_{r,z}(s,x'; t,y) \big|$ and denote by $C>0$ a generic constant that can change from line to line but does not depend on $(s,x,x',t,y,z)$. Then, by [\[eq:Dx1fb\]](#eq:Dx1fb){reference-type="eqref" reference="eq:Dx1fb"}, Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"} and Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"}, one can find $x''_{2}$ in the interval formed by $x_{2}$ and $x'_{2}$ such that, with $x'':=(x_{1}, x''_{2})$, $$\begin{aligned} I&\le |x_{2}-x'_{2}|\;|\partial_{x_{2}}f_{r, z}(s,x''; t,y)|\le C |x_{2}-x'_{2}| \frac{1} {(t-s)^{\frac{1+\beta_3}{2}}} f^{\circ}(s,x'';t,y). \end{aligned}$$ If $(t-s)^{\frac{1+\beta'_{2}}{2}}/|x_{2}-x'_{2}|\ge 1$, then $$\begin{aligned} I& \le C |x_{2}-x'_{2}|^{\frac{2\alpha'}{1+\beta'_{2}}} \frac{1}{(t-s)^{\frac{1+\beta_3}{2}-\frac{1+\beta'_{2}}{2}+\alpha'}} f^{\circ}(s,x'';t,y)\\ &=C |x_{2}-x'_{2}|^{\frac{2\alpha'}{1+\beta'_{2}}} \frac{1}{(t-s)^{\alpha'+\frac{\beta_3-\beta_{2}+\beta_{0}}{2}}} f^{\circ}(s,x'';t,y). 
\end{aligned}$$ Otherwise, by [\[eq: majo fy\]](#eq: majo fy){reference-type="eqref" reference="eq: majo fy"}, $$I\le |x_{2}-x'_{2}|^{\frac{2\alpha'}{1+\beta'_{2}}} \frac{1}{(t-s)^{\alpha'}} \big( f^{\circ}(s,x; t,y) + f^{\circ}(s,x'; t,y) \big).$$ We conclude by using the fact that $\beta_3-\beta_{2}+\beta_{0}\ge 0$ and by appealing to [\[eq: estime vraie densite\]](#eq: estime vraie densite){reference-type="eqref" reference="eq: estime vraie densite"}. ◻ ## $C^2$-regularity We now prove that $f$ is $C^{2}$ in its first space variable $x_{1}$ and that ${\rm v}$ is a smooth solution of the path-dependent PDE [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"}. ### Potential estimate and Hölder regularity of $\Phi$ Let $0 \le s < t \le T$, $x \in \mathbb{R}^2$, and let $h: \mathbb{R}^2 \longrightarrow \mathbb{R}$ be a (measurable) function. We first estimate the second-order derivative of the following functional: $$V(s,x;t) ~:= \int_{\mathbb{R}^{2}} f_{t, y}(s,x;t,y) h(y)dy.$$ Let us also denote $$\begin{aligned} \label{eq: def bar E-1} E^{-1}_{s,t}(x) ~:=~ \left( \begin{array}{cc} 1 & 0 \\ A_t - A_s & 1 \end{array} \right) x . \end{aligned}$$ **Lemma 23**. *Let Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"} and Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}.(i) hold. 
Let $h: \mathbb{R}^2 \longrightarrow \mathbb{R}$ and $h_{\circ}: \mathbb{R}^2 \longrightarrow \mathbb{R}_+$ be such that, for some $\alpha_h > 0$ and $C_h > 0$, $$\begin{aligned} &\big| h(y) - h(y') \big| \le C_{h} \Big( |y_1 - y'_1|^{\frac{2 \alpha_h}{1+\beta'_1}} + |y_2 - y'_2|^{\frac{2 \alpha_h}{1+\beta'_2}} \Big) \big( h_{\circ} (y) + h_{\circ} (y') \big), \mbox{ for all } y,y'\in \mathbb{R}^{2}, \end{aligned}$$ and $$\int_{\mathbb{R}^{2}} f^{\circ}(s,x;t,y) h_{\circ} (y) dy ~<~\infty, ~\mbox{for all}~ 0 \le s < t \le T, ~~x \in \mathbb{R}^2.$$ Assume that $$\kappa_h ~:=~ \min \Big(\frac{2\beta_{4}+1+\beta'_{1}}{1+\beta'_{2}}, 1 \Big)\min\{ \alpha_h,\alpha\}- \beta_0 ~>~ 0.$$ Then, $\partial^2_{x_1x_1} V(s,x;t)$ is well defined and continuous. Moreover, $$\begin{aligned} \partial^{2}_{x_{1}x_{1}}V(s,x;t) &= \int_{\mathbb{R}^{2}} \partial^{2}_{x_{1}x_{1}} f_{t,y}(s, x;t,y) h(y)dy, \end{aligned}$$ and there exists $C>0$, which does not depend on $C_{h}$, such that $$\begin{aligned} \big| \partial^{2}_{x_{1}x_{1}}V(s,x;t) \big| &\le \frac{C C_{h}}{(t-s)^{1-\kappa_{h}}} \left( \big| h \big(E^{-1}_{s,t}(x)\big) \big| + { \big| h_{\circ} \big(E^{-1}_{s,t}(x)\big) \big| + \int_{\mathbb{R}^{2}} f^{\circ}(s,x;t,y) h_{\circ} (y) dy } \right), \end{aligned}$$ for all $0 \le s < t \le T$ and $x \in \mathbb{R}^2$.* *Proof.* For simplicity, we only consider the case $t-s\le 1$. 
To estimate the second-order derivative, we decompose $$I:=\int_{\mathbb{R}^{2}} \partial_{x_{1}x_{1}}^{2}f_{t, y}(s,x;t,y) h(y)dy$$ into the sum of the three following terms, with $\check x:=E^{-1}_{s,t}(x)$, $$\begin{aligned} I_{1}&:=\int_{\mathbb{R}^{2}} \partial_{x_{1}x_{1}}^{2} f_{t, y}(s,x;t,y) \big[ h(y)- h \big(\check x\big) \big]dy, \\ I_{2}&:= h \big(\check x\big) \int_{\mathbb{R}^{2}} \left\{\partial_{x_{1}x_{1}}^{2} f_{t,y}(s,x;t,y) - \partial_{x_{1}x_{1}}^{2} f_{t, \check x} (s,x;t,y)\right\} dy, \\ I_{3}&:= h \big(\check x\big) \int_{\mathbb{R}^{2}} \partial_{x_{1}x_{1}}^{2}f_{t, \check x} (s,x;t,y) dy. \end{aligned}$$ Throughout this proof, $C>0$ denotes a generic constant that may change from line to line but does not depend on $C_{h}$, $(s,x;t,y)\in \Theta$ and $z\in \mathbb{R}^{2}$. $\mathrm{(i)}$ We first estimate $I_{1}$. Set $w={\rm w}_{s,t}(x,y)$; recall [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}. By the Hölder regularity property of $h$ and the inequality $(a+b)^{\gamma}\le 2^{\gamma}(a^{\gamma}+b^{\gamma})$ for $a,b\ge 0$ and $\gamma>0$, one has $$\begin{aligned} &\Big| \partial_{x_{1}x_{1}}^{2} f_{t, y}(s,x;t,y) \big[h(y)- h \big(\check x\big)] \Big| \\ &\le C_{h} \Big| \partial_{x_{1}x_{1}}^{2}f_{t,y}(s,x;t,y)\Big| \Big( |x_1 - y_1|^{\frac{2\alpha_h}{1+\beta'_1}}+ |y_2 - x_2 -(A_t - A_s) x_1 |^{\frac{2\alpha_h}{1+\beta'_2}} \Big) \big( h_{\circ} (y) + h_{\circ} (E^{-1}_{s,t}(x))\big) \\ &\le C C_{h}\Big| \partial_{x_{1}x_{1}}^{2}f_{t,y}(s,x;t,y)\Big| \Big( |w_1|^{\frac{2\alpha_h}{1+\beta'_1}} + |w_2|^{\frac{2 \alpha_h}{1+\beta'_2}} +\big |(A_t-A_s) w_1 \big|^{\frac{2\alpha_h}{1+\beta'_2}} \Big) \big( h_{\circ} (y) +h_{\circ} (E^{-1}_{s,t}(x)) \big). 
\end{aligned}$$ Then, arguing as in the proof of Lemma [Lemma 17](#lem : estime L-Lfy){reference-type="ref" reference="lem : estime L-Lfy"} and using [\[eq:holder A\]](#eq:holder A){reference-type="eqref" reference="eq:holder A"}, we deduce that $$\begin{aligned} \big| I_1 \big| \le &C C_{h}\Big( \frac{1}{(t-s)^{1 + \beta_0 - \alpha_h}} + \frac{1}{(t-s)^{1+\beta_{0}-\alpha_{h}\frac{2\beta_{4}+1+\beta'_{1}}{1+\beta'_{2}}}} \Big) \int_{\mathbb{R}^{2}} f^{\circ}(s, x;t,y) \big(h_{\circ} (y) + h_{\circ} (E^{-1}_{s,t}(x)) \big) dy, \\ \le~& \frac{C C_{h}}{(t-s)^{1-\kappa_{h}}} {\Big( \int_{\mathbb{R}^{2}} f^{\circ}(s,x;t,y) h_{\circ} (y) dy + h_{\circ} \big(E^{-1}_{s,t}(x) \big) \Big) .} \end{aligned}$$ $\mathrm{(ii)}$ We now consider $I_{2}$. By [\[eq:Dx1x2fb\]](#eq:Dx1x2fb){reference-type="eqref" reference="eq:Dx1x2fb"} and Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"}, $$\begin{aligned} & \big|\partial_{x_{1}x_{1}}^{2}f_{t,y}(s,x;t,y)-\partial_{x_{1}x_{1}}^{2}f_{t,\check x}(s,x;t,y) \big| \\ ~\le~& \big| f_{t,y}(s,x; t,y) - f_{t,\check x}(s,x; t,y) \big| \Big| \big(\Sigma^{-1}_{s,t}(t,y)w \big)_1^2 - \big( \Sigma^{-1}_{s,t}(t,y) \big)_{1,1} \Big| \\ &+ f_{t,\check x}(s,x; t,y) \Big| \big(\Sigma^{-1}_{s,t}(t,y)w \big)_1^2 - \big(\Sigma^{-1}_{s,t}(t,\check x)w \big)_1^2 \Big| \\ &+ f_{t,\check x}(s,x; t,y) \Big| \big( \Sigma^{-1}_{s,t}(t,y) \big)_{1,1} - \big( \Sigma^{-1}_{s,t}(t,\check x) \big)_{1,1} \Big|\\ ~=~& \big| f_{t,y}(s,x; t,y) - f_{t,\check x}(s,x; t,y) \big| \Big| \big(\Sigma^{-1}_{s,t}(t,y)w \big)_1^2 - \big( \Sigma^{-1}_{s,t}(t,y) \big)_{1,1} \Big| \\ &+ f_{t,\check x}(s,x; t,y) \left|\sigma_{t}(y)^{-4}-\sigma_{t}(\check x)^{-4} \right| \Big| \big(\Sigma^{-1}_{s,t}(1)w \big)_1\Big|^{2} \\ &+ f_{t,\check x}(s,x; t,y) \left|\sigma_{t}(y)^{-2}-\sigma_{t}(\check x)^{-2} \right| \Big| \big( \Sigma^{-1}_{s,t}(1) \big)_{1,1} \Big|, \end{aligned}$$ in which, by [\[eq : hyp holder coeff bar sigma\]](#eq : hyp holder coeff bar 
sigma){reference-type="eqref" reference="eq : hyp holder coeff bar sigma"}, $$\begin{aligned} \left|\sigma_{t}(y)-\sigma_{t}(\check x) \right| &\le C_{(\ref{eq : hyp holder coeff bar sigma})} \Big( \big| y_{1}-x_{1} \big|^{\frac{2\alpha}{1+\beta'_1}} + \big| y_{2}-x_{1}(A_{t}-A_{s})-x_{2}\big|^{\frac{2 \alpha}{1+\beta'_2}} \Big)\\ &\le C \Big( |w_1|^{\frac{2\alpha}{1+\beta'_1}} + |w_2|^{\frac{2 \alpha}{1+\beta'_2}} +\big |(A_t-A_s) w_1 \big|^{\frac{2\alpha}{1+\beta'_2}} \Big) \end{aligned}$$ by the same arguments as in step $\mathrm{(i)}$. Using Lemma [Lemma 25](#lem : derive f par rapport vol){reference-type="ref" reference="lem : derive f par rapport vol"} below, [\[eq: norne Sigma-1 w 1\]](#eq: norne Sigma-1 w 1){reference-type="eqref" reference="eq: norne Sigma-1 w 1"}, [\[eq: borne Sigma -1 11\]](#eq: borne Sigma -1 11){reference-type="eqref" reference="eq: borne Sigma -1 11"} and [\[eq : unif elliptic\]](#eq : unif elliptic){reference-type="eqref" reference="eq : unif elliptic"}, it follows that $$\big| I_2 \big| ~\le~ \frac{C |h \big(\check x\big)|}{(t-s)^{1+\beta_{0}-\alpha\frac{2\beta_{4}+1+\beta'_{1}}{1+\beta'_{2}}}} \int_{\mathbb{R}^{2}} f^{\circ}(s, x;t,y) dy\le \frac{C |h \big(E^{-1}_{s,t}(x)\big)|}{(t-s)^{1-\kappa_{h}}}.$$ $\mathrm{(iii)}$ We finally consider $I_{3}$. 
Notice that $y \mapsto f_{t,\check x}(s,x; t, y)$ is a Gaussian density function, so that $$\int_{\mathbb{R}^2} \big| \partial^2_{x_1 x_1} f_{t,\check x}(s,x; t, y) \big| dy ~<~ \infty.$$ Moreover, by the definition of $f_{t,\check x}(s,x; t, y)$, one has $$D_y f_{t,\check x}(s,x; t, y) = - \left( \begin{array}{cc} 1 & 0 \\ - (A_t - A_s) & 1 \end{array} \right)^{\top} D_{x} f_{t,\check x}(s,x; t,y),$$ so that $$\begin{aligned} \partial_{x_{1}}f_{t,\check x}(s,x;t,y) = -\partial_{y_1} f_{t,\check x}(s,x; t, y) - (A_t- A_s) \partial_{y_2} f_{t,\check x}(s,x; t,y), \end{aligned}$$ which implies $$\int_{\mathbb{R}} \int_{\mathbb{R}} \partial_{x_{1}} f_{t,\check x} (s,x;t,y) dy_1 dy_2 ~=~ 0,$$ and therefore $I_3 = 0$. Finally, we can apply the Leibniz integral rule to interchange the derivative and the integral, and hence conclude the proof. ◻ **Remark 24**. *Let us consider $V(s,x;t)$ as a path-dependent functional: $$\overline V(s, {\rm x}; t) ~:=~ V(s, {\rm x}(s), I_s({\rm x}); t) ~=~ \int_{\mathbb{R}^2} {\rm f}_{t, y}(s, {\rm x}; t, y) h(y) dy.$$ In view of Remark [Remark 8](#rem:PPDE_f_rz){reference-type="ref" reference="rem:PPDE_f_rz"}, the above results imply that, in the context of Lemma [Lemma 23](#lem : borne derive seconde fbar){reference-type="ref" reference="lem : borne derive seconde fbar"}, the second order vertical derivative $\partial^2_{{\rm x}{\rm x}} \overline V(s,{\rm x}; t)$ is well defined. 
Moreover, by [\[eq:PPDE_f\_rz\]](#eq:PPDE_f_rz){reference-type="eqref" reference="eq:PPDE_f_rz"}, one has $$\label{eq:bound_ds_fr} \partial_s {\rm f}_{t,y}(s, {\rm x}; t,y) ~=~ - \frac12 \sigma(t,y)^{2} \partial^2_{{\rm x}{\rm x}} {\rm f}_{t,y} (s, {\rm x}; t,y).$$ Then, by the same technique as in Lemma [Lemma 23](#lem : borne derive seconde fbar){reference-type="ref" reference="lem : borne derive seconde fbar"}, we can deduce that the horizontal derivative $\partial_s \overline V(s, {\rm x}; t)$ is also well defined.* We now provide the following easy estimate which is used in the proof of Lemma [Lemma 23](#lem : borne derive seconde fbar){reference-type="ref" reference="lem : borne derive seconde fbar"}. **Lemma 25**. *Let Assumption [Assumption 1](#hyp:A){reference-type="ref" reference="hyp:A"}.(i) and Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}.(i) hold. Then, there exists $C_{[\ref{lem : derive f par rapport vol}]}>0$ such that, for all $(s,x,t,y)\in \Theta$ and $z,z'\in \mathbb{R}^{2}$, $$\begin{aligned} & \big| f_{t,z}(s,x; t,y) - f_{t, z'}(s,x; t,y) \big|\\ &\le C_{[\ref{lem : derive f par rapport vol}]}\left(|z_{1}-z_{1}'|^{\frac{2\alpha}{1+\beta'_{1}}}+|z_{2}-z_{2}'|^{\frac{2\alpha}{1+\beta'_{2}}}\right) \left[1+ \langle \Sigma^{-1}_{s,t}(1) w,w\rangle\right]( \varpi f^{\circ})(s,x;t,y) \end{aligned}$$ in which $w:={\rm w}_{s,t}(x,y)$.* *Proof.* Let us write $f_{[a]}$ for $f_{t, z}$ if $\sigma^{2}_{t}(z)=a$, and let $\partial_{a} f_{[a]}$ denote its derivative with respect to this parameter $a$. Then, $$\partial_{a} f_{[a]}(s,x; t,y)=\left[-\frac1{a}+\frac{\sigma_{t}(0)^{2}}{2a^{2}}\langle \Sigma^{-1}_{s,t}(0)w,w\rangle\right]f_{[a]}(s,x; t,y)$$ in which $w={\rm w}_{s,t}(x,y)$ is as in [\[eq:def_w\_xy\]](#eq:def_w_xy){reference-type="eqref" reference="eq:def_w_xy"}. 
In view of Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"} and Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"}, it follows that $$|\partial_{a} f_{[a]}(s,x; t,y)|\le C\left[1+\langle \Sigma^{-1}_{s,t}(0)w,w\rangle\right] \varpi(s,x;t,y) f^{\circ} (s,x;t,y),$$ for some $C>0$ that does not depend on $a, (s,x,t,y)$. We conclude by appealing to [\[eq : hyp holder coeff bar sigma\]](#eq : hyp holder coeff bar sigma){reference-type="eqref" reference="eq : hyp holder coeff bar sigma"}. ◻ In order to apply Lemma [Lemma 23](#lem : borne derive seconde fbar){reference-type="ref" reference="lem : borne derive seconde fbar"} to [\[eq: def ft recursif\]](#eq: def ft recursif){reference-type="eqref" reference="eq: def ft recursif"}, we need to prove that the function $\Phi(s,x; t, y)$ defined by [\[eq:def_Phit\]](#eq:def_Phit){reference-type="eqref" reference="eq:def_Phit"} and [\[eq:def_Phi\]](#eq:def_Phi){reference-type="eqref" reference="eq:def_Phi"} is Hölder in $x$. Recall the definition of $\widehat{\Delta \beta}$ in [\[eq: def wide hat Delat beta\]](#eq: def wide hat Delat beta){reference-type="eqref" reference="eq: def wide hat Delat beta"}, of $\kappa_{0}$ in [\[eq:def_kappa0\]](#eq:def_kappa0){reference-type="eqref" reference="eq:def_kappa0"} and of $f^{\circ,\frac12}$ in [\[eq: def fcirc1/2\]](#eq: def fcirc1/2){reference-type="eqref" reference="eq: def fcirc1/2"}. **Lemma 26**. *Let the conditions of Theorem [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"} hold. Fix $\alpha_\Phi \in \big(0,\hat \alpha_\Phi\wedge \kappa_{0}\wedge \min\limits_{i=1,2} \frac{1+\beta'_{i}}{2} \big)$. 
Then, there exists $C_{\alpha_\Phi}>0$ such that, $$\begin{aligned} &| \Phi(s,x;t,y)- \Phi(s,x';t,y)| \\ \le~& C_{\alpha_\Phi} \frac{|x_1-x_1'|^{\frac{2\alpha_\Phi}{1+\beta'_1}} + |x_2 - x_2'|^{\frac{2\alpha_\Phi}{1+\beta'_2}}}{(t-s)^{1 - \eta_{\Phi} }} \Big( f^{\circ,\frac12}(s, x;t,y)+f^{\circ,\frac12}(s,x';t,y) \Big), \end{aligned}$$ for all $(s,x,t,y)\in \Theta$, in which $$\eta_{\Phi} ~:=~ \hat \alpha_\Phi\wedge \kappa_{0}- \alpha_{\Phi} ~>~ 0.$$* *Proof.* In all this proof, $C>0$ denotes a generic constant, whose value can change from line to line, but which does not depend on $(s,x,t,y)\in \Theta$. We set $\Delta_{k} (s,x;t,y) := \widetilde{\Delta}_{k} (s,{\rm \bf A}_{s}x;t,{\rm \bf A}_{t}y)$ and recall that $\Phi (s,x;t,y) := \widetilde{\Phi}(s,{\rm \bf A}_{s}x;t,{\rm \bf A}_{t}y)$, $(s,x,t,y)\in \Theta$. $\mathrm{(i)}$ Let us first consider $$\begin{aligned} I ~:=~ & \Delta_0(s,x; t, y) - \Delta_0(s,x'; t, y) \nonumber \\ =~& \mu_{s}(x) \partial_{x_1} f_{t,y} (s,x; t, y) + \frac12 \big( \sigma^2_{s}(x) - \sigma^2_{t}(y) \big) \partial^2_{x_1 x_1} f_{t,y}(s,x; t,y) \nonumber\\ &- \Big(\mu_{s}(x') \partial_{x_1} f_{t,y}(s,x'; t, y) + \frac12 \big( \sigma^2_{s}(x') - \sigma^2_{t}(y) \big) \partial^2_{x_1 x_1} f_{t,y}(s,x'; t,y) \Big). \label{eq: proof C2 def I} \end{aligned}$$ $\mathrm{(i.1)}$ In the case where $$|x_1 - x_1'|^{\frac{1}{1+\beta'_1}} + |x_2 - x_2'|^{\frac{1}{1+\beta'_2}} ~>~ (t-s)^{1/2},$$ Lemma [Lemma 17](#lem : estime L-Lfy){reference-type="ref" reference="lem : estime L-Lfy"} implies that, for $\alpha'\in (0,\kappa_{0})$, $$\begin{aligned} &\Big| \Delta_0(s, x; t, y) - \Delta_0(s,x'; t, y) \Big| \\ \le~& C_{\eqref{eq: estimee L-Lfy}} \frac{1}{(t-s)^{1-\kappa_0}} \big( f^{\circ}(s,x;t,y)+f^{\circ}(s,x';t,y) \big) \\ \le~& C_{\eqref{eq: estimee L-Lfy}} \frac{|x_1- x_1'|^{\frac{2\alpha'}{1+\beta'_1}} + |x_2 - x_2'|^{\frac{2\alpha'}{1+\beta'_2}} }{(t-s)^{1 - \kappa_0+\alpha'}} \big( f^{\circ}(s,x;t,y)+f^{\circ}(s,x';t,y) \big). 
\end{aligned}$$ $\mathrm{(i.2)}$ We next consider the case where $$\label{eq:dx_le_dt} |x_1 -x_1'|^{\frac{1}{1+\beta'_1}} + |x_2 -x_2'|^{\frac{1}{1+\beta'_2}} ~\le~ (t-s)^{1/2}.$$ Let us write $$I ~:=~ \Delta_0(s,x; t, y) - \Delta_0(s,x'; t, y) ~=~ I_1 + I_2 + I_3 + I_4,$$ where $$I_1 ~:=~ \big( \mu_{s}(x) - \mu_{s}(x') \big) \partial_{x_1} f_{t,y}(s,x; t, y),$$ $$I_2 ~:=~ \mu_{s}(x') \big( \partial_{x_1} f_{t,y}(s,x; t, y )- \partial_{x_1} f_{t,y}(s,x'; t, y) \big),$$ $$I_3 ~:=~ \frac12 \big( \sigma^2_{s}(x) - \sigma^2_{s}(x') \big) \partial^2_{x_1 x_1} f_{t,y}(s,x'; t, y),$$ and $$I_4 ~:=~ \frac12 \big( \sigma^2_{s}( x) - \sigma^2_{t}(y) \big) \Big( \partial^2_{x_1 x_1} f_{t,y}(s,x; t, y) - \partial^2_{x_1 x_1} f_{t,y}(s,x'; t, y) \Big).$$ For $I_1$, we use the Hölder continuity property of $\mu$ in [\[eq : hyp holder coeff bar mu\]](#eq : hyp holder coeff bar mu){reference-type="eqref" reference="eq : hyp holder coeff bar mu"}, Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"} and Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"} to obtain that $$\begin{aligned} \label{eq: proof C2 borne I1} \big| I_1 \big| &~\le~ C\frac{ |x_1 - x_1'|^{\frac{2 \alpha}{1+ \beta'_1}} + | x_2 - x_2 '|^{\frac{2 \alpha}{1+\beta'_2}}}{(t-s)^{\frac{1+\beta_{0}}{2}}} f^{\circ}(s,x; t, y). 
\end{aligned}$$ For $I_2$, the mean value theorem provides $\rho \in [0,1]$ such that, with $x'' := \rho x + (1-\rho) x'$ and using Assumption [Assumption 4](#hyp: stand ass){reference-type="ref" reference="hyp: stand ass"}, $$\big| I_2 \big| ~\le~ \mathfrak{b}\Big( \big| \partial^2_{x_1 x_1} f_{t,y}(s,x''; t, y) \big| \big| x_1 -x_1' \big| + \big| \partial^2_{x_1 x_2} f_{t,y}(s, x''; t, y) \big| \big| x_2 - x_2' \big| \Big).$$ Using [\[eq:Dx1x2fb\]](#eq:Dx1x2fb){reference-type="eqref" reference="eq:Dx1x2fb"}, Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"}, Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"} and the fact that $\beta_{0}\le \beta_{3}$, it follows that $$\begin{aligned} \big| I_2 \big| &\le C \Big( \frac{|x_1 - x_1'|}{(t-s)^{1+ \beta_{0}}} + \frac{|x_2 - x_2'|}{(t-s)^{1+ \frac{\beta_0+ \beta_{3}}{2}}} \Big) f^{\circ}(s,x''; t, y). \end{aligned}$$ Since $x''$ lies on the segment between $x$ and $x'$, Lemma [Lemma 16](#lem: fcirc 1/2){reference-type="ref" reference="lem: fcirc 1/2"} and [\[eq:dx_le_dt\]](#eq:dx_le_dt){reference-type="eqref" reference="eq:dx_le_dt"} imply that $$\begin{aligned} \label{eq: proof C2 borne I2} \big| I_2 \big| &\le C \Big( \frac{|x_1 - x_1'|}{(t-s)^{1+ \beta_{0}}} + \frac{|x_2 - x_2'|}{(t-s)^{1+ \frac{\beta_0+ \beta_{3}}{2}}} \Big) f^{\circ,\frac12}(s,x; t, y). 
\end{aligned}$$ Next, using the Hölder property of $\sigma$ in [\[eq : hyp holder coeff bar sigma\]](#eq : hyp holder coeff bar sigma){reference-type="eqref" reference="eq : hyp holder coeff bar sigma"}, Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"} and Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"}, it follows that $$\begin{aligned} \label{eq: proof C2 borne I3} &\big| I_3 \big| ~\le~ \frac{C }{(t-s)^{1+\beta_0}} \Big( |x_1 - x_1'|^{\frac{2\alpha}{1+\beta'_1}} + |x_2 - x_2'|^{\frac{2 \alpha}{1+\beta'_2}} \Big)f^{\circ}(s, x; t,y). \end{aligned}$$ Finally, $I_4$ is tackled as $I_{2}$. Namely, we can find $\tilde x'' = \tilde \rho x + (1-\tilde \rho) x'$ with $\tilde \rho \in [0,1]$ such that $$\begin{aligned} &\Big| \partial^2_{x_1 x_1} f_{t,y}(s,x; t, y) - \partial^2_{x_1 x_1} f_{t,y}(s,x'; t, y) \Big|\\ \le~& \big| \partial^3_{x_1 x_1 x_{1}} f_{t,y}(s,\tilde x''; t, y) \big| \big| x_1 -x_1' \big| + \big| \partial^3_{x_1x_{1} x_2} f_{t,y}(s, \tilde x''; t, y) \big| \big| x_2 - x_2' \big| \\ \le~& C\left(\frac{ \big| x_1 -x_1' \big|}{(t-s)^{\frac32(1+\beta_{0})}}+\frac{\big| x_2 - x_2' \big|}{(t-s)^{\frac32+ \beta_{0} +\frac{\beta_{3}}{2}}}\right) (\varpi^{1}f^{\circ})(s, \tilde x''; t,y), \end{aligned}$$ in which we used Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"} and Lemma [Lemma 15](#lem: control density){reference-type="ref" reference="lem: control density"} again. Next, we appeal to [\[eq : hyp holder coeff bar sigma\]](#eq : hyp holder coeff bar sigma){reference-type="eqref" reference="eq : hyp holder coeff bar sigma"} to deduce that $$\begin{aligned} \big| \sigma^2_{s}(x) - \sigma^2_{t}(y) \big| &\le C_{\eqref{eq : hyp holder coeff bar sigma}} \Big(|t-s|^{\alpha}+ | w_{1}|^{\frac{2 \alpha}{1+\beta'_1}} + | w_{2}|^{\frac{2 \alpha}{1+\beta'_2}} \Big) . 
\end{aligned}$$ Using that $\beta_{3}\ge \beta_{0}$, the condition [\[eq:dx_le_dt\]](#eq:dx_le_dt){reference-type="eqref" reference="eq:dx_le_dt"} together with Lemma [Lemma 16](#lem: fcirc 1/2){reference-type="ref" reference="lem: fcirc 1/2"} and the fact that $\tilde x''$ lies on the segment between $x$ and $x'$ implies that $$\begin{aligned} |I_{4}| &~\le~ C |t-s|^{\alpha} \left( \frac{ \big| x_1-x'_{1} \big|}{(t-s)^{\frac32(1+\beta_{0})}}+\frac{\big| x_2 - x_2' \big|}{(t-s)^{\frac32+ \beta_{0} +\frac{\beta_{3}}{2}}}\right) f^{\circ,\frac12}(s, x; t,y).\label{eq: proof C2 borne I4} \end{aligned}$$ Note that there exists $C>0$, independent of $(s, x, t, y)$, such that $$f^{\circ}( s, x; t, y) ~\le~ Cf^{\circ,\frac12}( s, x; t, y).$$ Thus, combining [\[eq: proof C2 borne I1\]](#eq: proof C2 borne I1){reference-type="eqref" reference="eq: proof C2 borne I1"}-[\[eq: proof C2 borne I4\]](#eq: proof C2 borne I4){reference-type="eqref" reference="eq: proof C2 borne I4"} and recalling [\[eq: proof C2 def I\]](#eq: proof C2 def I){reference-type="eqref" reference="eq: proof C2 def I"} and [\[eq: majo fcirc par fcirc 1/2\]](#eq: majo fcirc par fcirc 1/2){reference-type="eqref" reference="eq: majo fcirc par fcirc 1/2"} leads to an upper bound for $$J:=\frac{|I|}{C \big( f^{\circ,\frac12}(s, x; t,y)+f^{\circ,\frac12}(s, x'; t,y) \big) }.$$ Namely, $$\begin{aligned} J~\le~& \frac{ |x_1 - x_1'|^{\frac{2 \alpha}{1+ \beta'_1}} +| x_2 - x_2 '|^{\frac{2 \alpha}{1+ \beta'_2}}}{(t-s)^{1+\beta_0}}\\ &+~ | x_1 - x_1'|\left(\frac1{(t-s)^{1+\beta_{0} }}+\frac{1}{(t-s)^{\frac32(1+\beta_{0})-\alpha}}\right)\\ &+~ |x_2 - x_2'|\left(\frac1{(t-s)^{1+\frac{\beta_{0}+ \beta_{3}}{2}}}+\frac{1}{(t-s)^{\frac32+\beta_{0}+\frac{\beta_{3}}{2}-\alpha}}\right). 
\end{aligned}$$ We then use that $(t-s)^{\frac{1+\beta'_{i}}{2}}/|x_{i}-x'_{i}|\ge 1$, for $i=1,2$, to deduce that, for $0 < \alpha' \le \alpha\wedge \min\limits_{i=1,2} \frac{1+\beta'_{i}}{2}$, $$\begin{aligned} J ~\le~& \frac{ |x_1 - x_1'|^{\frac{2 \alpha'}{1+ \beta'_1}} +| x_2 - x_2 '|^{\frac{2 \alpha'}{1+ \beta'_2}}}{(t-s)^{1+\beta_0+\alpha'-\alpha}}\\ &+~ \frac{ | x_1 - x_1'|^{\frac{2 \alpha'}{1+ \beta'_1}}}{(t-s)^{(1+\beta_{0})\vee(\frac32(1+\beta_{0})-\alpha)-\frac{1+\beta'_{1}}{2}+\alpha'}}+\frac{ | x_2 - x_2'|^{\frac{2 \alpha'}{1+ \beta'_2}} }{(t-s)^{(1+\frac{\beta_{0}+ \beta_{3}}{2})\vee(\frac32+\beta_{0}+\frac{\beta_{3}}{2}-\alpha)-\frac{1+\beta'_{2}}{2}+\alpha'}}. \end{aligned}$$ Since $\beta_{0}\ge \beta_{1}$ and $\beta_{3}\ge \beta_{2}$, $$\begin{aligned} J&\le \frac{|x_1 - x_1'|^{\frac{2 \alpha'}{1+ \beta'_1}}}{(t-s)^{(\frac12+\beta_{0}+\frac{\beta_{0}-\beta_{1}}{2})\vee(1+\frac32\beta_{0}+\frac{\beta_{0}-\beta_{1}}{2}-\alpha)+\alpha'}} + \frac{|x_2 - x_2'|^{\frac{2 \alpha'}{1+ \beta'_2}}}{(t-s)^{(\frac12+\beta_{0}+\frac{ \beta_{3}-\beta_{2}}{2})\vee(1+\frac32 \beta_{0}+\frac{\beta_{3}-\beta_{2}}{2}-\alpha)+\alpha'}}. \end{aligned}$$ $\mathrm{(i.3)}$ We now combine the results of steps $\mathrm{(i.1)}$ and $\mathrm{(i.2)}$ to deduce that, when $\alpha'= \alpha_\Phi \in (0,\hat \alpha_\Phi\wedge \kappa_{0})$, $$\begin{aligned} |I| &~\le~ C \frac{|x_1 - x_1'|^{\frac{2 \alpha_\Phi}{1+ \beta'_1}}+|x_2 - x_2'|^{\frac{2 \alpha_\Phi}{1+ \beta'_2}}}{(t-s)^{1-\eta_{\Phi}}}\left(f^{\circ,\frac12}(s, x; t,y)+f^{\circ,\frac12}(s, x'; t,y)\right). \end{aligned}$$ $\mathrm{(ii)}$ To conclude, it remains to use an induction argument as in the end of the proof of Proposition [Proposition 18](#prop:existence_continuite_Phi){reference-type="ref" reference="prop:existence_continuite_Phi"}. 
◻ ### Smoothness of the transition density and Feynman-Kac representation {#sec : Smoothness of the transition density and Feynman-Kac representation} Recall that ${\rm f}(s, {\rm x}; t, y)$ is defined in [\[eq: def f bar on path\]](#eq: def f bar on path){reference-type="eqref" reference="eq: def f bar on path"}. **Proposition 27**. *Let the conditions of Theorem [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"} hold. Then, the vertical derivative $\partial^2_{{\rm x}{\rm x}} {\rm f}(s, {\rm x}; t, y)$ and horizontal derivative $\partial_s {\rm f}(s, {\rm x}; t, y)$ are well-defined for all $0 \le s < t \le T$, ${\rm x}\in D([0,T])$ and $y \in \mathbb{R}^2$. Moreover, for all $(t,y) \in [0,T] \times\mathbb{R}^2$, $\partial^2_{{\rm x}{\rm x}} {\rm f}(\cdot; t, y)$ and $\partial_s {\rm f}(\cdot; t, y)$ are continuous on $[0,t) \times C([0,T])$.* *Proof.* We denote by $C>0$ a generic constant that does not depend on $(s,x, t,y)$. Let us fix $t_0 \in (s, t)$; then, by [\[eq:def_f\_transition_proba\]](#eq:def_f_transition_proba){reference-type="eqref" reference="eq:def_f_transition_proba"} and [\[eq: def f bar on path\]](#eq: def f bar on path){reference-type="eqref" reference="eq: def f bar on path"}, $$\begin{aligned} {\rm f}(s, {\rm x}; t, y) ~:=~& {\rm f}_{t,y}(s, {\rm x}; t,y) + \int_s^{t_0}\!\! \int_{\mathbb{R}^2} {\rm f}_{r,z}(s, {\rm x}; r,z) \Phi(r,z; t,y) dz dr \\ &+ \int_{t_0}^t \int_{\mathbb{R}^{2}} {\rm f}_{r,z}(s, {\rm x}; r,z) \Phi(r,z; t,y) dz dr \\ ~=:~& {\rm f}_{t,y}(s, {\rm x}; t,y) ~+~ {\rm f}_1(s, {\rm x}; t,y) ~+~ {\rm f}_2(s, {\rm x}; t,y). \end{aligned}$$ First, the existence and continuity of the vertical derivative and horizontal derivative of ${\rm f}_{t,y}(s, {\rm x}; t,y)$ are immediate. 
For ${\rm f}_1(s, {\rm x}; t,y)$, we can use Lemmas [Lemma 23](#lem : borne derive seconde fbar){reference-type="ref" reference="lem : borne derive seconde fbar"} and [Lemma 26](#lem: PHI holder){reference-type="ref" reference="lem: PHI holder"}, Proposition [Proposition 18](#prop:existence_continuite_Phi){reference-type="ref" reference="prop:existence_continuite_Phi"}, together with [\[eq: majo fcirc par fcirc 1/2\]](#eq: majo fcirc par fcirc 1/2){reference-type="eqref" reference="eq: majo fcirc par fcirc 1/2"}, to obtain that $$\int_s^{t_0}\!\! \left| \int_{\mathbb{R}^2} \partial^2_{{\rm x}{\rm x}} {\rm f}_{r,z}(s, {\rm x}; r,z) \Phi(r,z; t,y) dz \right| dr ~\le~ C \int_s^{t_0} \frac{I_{1}(s, {\rm x}; r; t,y) +I_{2}(s, {\rm x}; r; t,y) }{(r-s)^{1-\kappa_{\Phi}} } dr,$$ where $$\kappa_\Phi ~:=~ \min \Big(\frac{2\beta_{4}+1+\beta'_{1}}{1+\beta'_{2}}, 1 \Big)\min\{ \alpha_\Phi,\alpha\}- \beta_0 ~>~ 0,$$ and, with $x := ({\rm x}(s), I_s({\rm x}))$, $$\begin{aligned} I_{1}(s,{\rm x}; r; t,y) ~:=~& \int_{\mathbb{R}^{2}} f^{\circ,\frac12}(s, x;r,z) \big( f^{\circ,\frac12}(r, E^{-1}_{s,r}(x);t,y)+ f^{\circ,\frac12}(r,z;t,y) \big) dz \\ ~=~& f^{\circ,\frac12}(r, E^{-1}_{s,r}(x);t,y)+ f^{\circ,\frac12}(s,x;t,y) \\ I_{2}(s,{\rm x}; r; t,y) ~:=~&f^{\circ,\frac12}(r,E^{-1}_{s,r}(x);t,y). \end{aligned}$$ Since $t_0 < t$, we can then easily obtain the existence and continuity of $\partial^2_{{\rm x}{\rm x}} {\rm f}_1(\cdot; t, y)$ by dominated convergence. Further, in view of Remark [Remark 24](#rem:dt_Vsxr){reference-type="ref" reference="rem:dt_Vsxr"} and in particular [\[eq:bound_ds_fr\]](#eq:bound_ds_fr){reference-type="eqref" reference="eq:bound_ds_fr"}, we can also deduce the existence and continuity of the horizontal derivative $\partial_s {\rm f}_1(\cdot; t,y)$. 
For ${\rm f}_2(s, {\rm x}; t, y)$, we notice that $$\big| \partial_s {\rm f}_{r,z}(s, {\rm x}; r,z) \big| + \big| \partial^2_{{\rm x}{\rm x}} {\rm f}_{r,z}(s, {\rm x}; r,z) \big| ~\le~ C {\rm f}^{\circ}(s, {\rm x}; r,z), ~ \mbox{for}~r \ge t_0 > s, ~z \in \mathbb{R}^2.$$ Together with the estimate on $\Phi(r,z; t, y)$ in Proposition [Proposition 18](#prop:existence_continuite_Phi){reference-type="ref" reference="prop:existence_continuite_Phi"}, we obtain the existence and continuity of the vertical derivative $\partial^2_{{\rm x}{\rm x}} {\rm f}_2(\cdot; t, y)$ and of the horizontal derivative $\partial_s {\rm f}_2(\cdot; t,y)$. ◻ Recall the growth condition [\[eq:bound_g\_l\]](#eq:bound_g_l){reference-type="eqref" reference="eq:bound_g_l"} on $\ell$ and $g$, and the Hölder continuity condition [\[eq:holder_l\]](#eq:holder_l){reference-type="eqref" reference="eq:holder_l"} on $\ell$. Let $$v(s,x) ~:=~ \int_s^T\!\! \int_{\mathbb{R}^2} \ell(t,y) f(s, x; t, y) dy dt + \int_{\mathbb{R}^2} g(y) f(s, x; T, y) dy, ~~(s,x) \in [0,T) \times\mathbb{R}^2.$$ Then, with ${\rm v}$ defined in [\[eq:def_vr\]](#eq:def_vr){reference-type="eqref" reference="eq:def_vr"}, one has, for $x = ({\rm x}(s), I_s({\rm x}))$, $${\rm v}(s,{\rm x}) = v(s,x), ~ \partial_{{\rm x}} {\rm v}(s,{\rm x}) = \partial_{x_1}v(s,x) ~\mbox{and}~ \partial^2_{{\rm x}{\rm x}} {\rm v}(s,{\rm x}) = \partial^2_{x_1 x_1}v(s,x).$$ **Proposition 28**. *Let the conditions of Theorem [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"} hold. Then:* *$\mathrm{(i)}$ ${\rm v}\in {\mathbb C}^{1,2}([0,T))$ and the bound estimates in [\[eq:bound_dvr\]](#eq:bound_dvr){reference-type="eqref" reference="eq:bound_dvr"} hold true.* *$\mathrm{(ii)}$ The function ${\rm v}$ is a classical solution to the PPDE [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"}. 
If in addition $g$ is continuous, then ${\rm v}$ is the unique classical solution of [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"} satisfying [\[eq: borne croissance vr et cond bord\]](#eq: borne croissance vr et cond bord){reference-type="eqref" reference="eq: borne croissance vr et cond bord"}.* *Proof.* $\mathrm{(i)}$ Let us define, for $(r,z) \in [0,T) \times\mathbb{R}^2$, $$\begin{aligned} \label{eq: def v bar Phi} v_{ \Phi}(r,z) ~:= \int_r^T \int_{\mathbb{R}^2} \Phi(r,z; t,y) \ell(t,y) dy dt ~+ \int_{\mathbb{R}^{2}} \Phi(r,z;T,y)g(y)dy , \end{aligned}$$ so that $$\begin{aligned} \label{eq : partial x1 x1 v(s,barx)} v(s,x) ~=& \int_{\mathbb{R}^2} f_{T,y} (s,x; T,y) g(y) dy ~+ \int_s^T \int_{\mathbb{R}^2} f_{r,z}(s,x; r,z) \big( v_{\Phi}(r,z) + \ell(r,z) \big) dz dr. \end{aligned}$$ Then, it follows from Lemma [Lemma 26](#lem: PHI holder){reference-type="ref" reference="lem: PHI holder"} that $$\begin{aligned} | v_{ \Phi}(r,z)- v_{\Phi}(r,z')| \le C_{\alpha_\Phi} \frac{|z_1-z_1'|^{\frac{2\alpha_\Phi}{1+\beta'_1}} + |z_2 - z_2'|^{\frac{2\alpha_\Phi}{1+\beta'_2}}}{(T-r)^{1 - \eta_{\Phi} }} \big( v^{\circ,\frac12}(r,z)+ v^{\circ,\frac12} (r,z') \big), \end{aligned}$$ in which $$v^{\circ,\frac12}(r,z) ~:=~ \int_r^T \int_{\mathbb{R}^2} f^{\circ,\frac12}(r, z;T, y) \big| \ell(t,y) \big| dy dt + \int_{\mathbb{R}^{2}} f^{\circ,\frac12}(r, z;T, y) \big|g(y) \big| dy.$$ Together with the Hölder continuity condition on $\ell$ in [\[eq:holder_l\]](#eq:holder_l){reference-type="eqref" reference="eq:holder_l"}, we can then apply Lemma [Lemma 23](#lem : borne derive seconde fbar){reference-type="ref" reference="lem : borne derive seconde fbar"} to deduce that $\partial^2_{x_1 x_1} v(s,x)$ exists and $$\begin{aligned} &~\partial^{2}_{x_{1}x_{1}} v(s,x) \\ =& \int_{\mathbb{R}^{2}} \! \partial^{2}_{x_{1}x_{1}} f_{T,y}(s,x;T,y)g(y)dy +\! \int_{s}^{T} \!\!\! \int_{\mathbb{R}^{2}} \! 
\partial^{2}_{x_{1}x_{1}} f_{r,z}(s,x;r,z) \big( v_{\Phi}(r,z) + \ell(r,z) \big) dzdr. \end{aligned}$$ Then, using [\[eq:Dx1x2fb\]](#eq:Dx1x2fb){reference-type="eqref" reference="eq:Dx1x2fb"} and Lemma [Lemma 14](#lemma:Sigamw){reference-type="ref" reference="lemma:Sigamw"}, we deduce that, for some constant $C > 0$, $$\int_{\mathbb{R}^{2}} \! \Big| \partial^{2}_{x_{1}x_{1}} f_{T,y}(s,x;T,y)g(y) \Big| dy ~\le~ \frac{C}{(T-s)^{1+\beta_0}} \int_{\mathbb{R}^2} f^{\circ}(s, x; T,y) | g(y)| dy ~\le~ \frac{C e^{C|x|} }{(T-s)^{1+\beta_0}}.$$ By Lemma [Lemma 23](#lem : borne derive seconde fbar){reference-type="ref" reference="lem : borne derive seconde fbar"}, one can choose $C> 0$ such that $$\begin{aligned} &\left| \int_{s}^{T} \!\!\! \int_{\mathbb{R}^{2}} \! \partial^{2}_{x_{1}x_{1}} f_{r,z}(s,x;r,z) \ell(r,z) dzdr \right| \\ \le~& \int_s^T \frac{C}{(r-s)^{1-\kappa_{\ell}}} \Big( \big| \ell( E^{-1}_{s,r}(x)) \big| + C e^{| E^{-1}_{s,r}(x)|} + \int_{\mathbb{R}^2} f^{\circ}(s,x; r,z) e^{C|z|} dz \Big) dr ~\le~ C e^{C|x|}, \end{aligned}$$ in which $$\kappa_\ell ~:=~ \min \Big(\frac{2\beta_{4}+1+\beta'_{1}}{1+\beta'_{2}}, 1 \Big)\min\{ \alpha_\ell,\alpha\}- \beta_0 ~>~ 0,$$ and $$\begin{aligned} \left| \int_{s}^{T} \!\!\! \int_{\mathbb{R}^{2}} \! \partial^{2}_{x_{1}x_{1}} f_{r,z}(s,x;r,z) v_{\Phi} (r,z) dzdr \right| \le \int_s^T \frac{C}{(r-s)^{1-\kappa_{\Phi}} (T-r)^{1-\eta_{\Phi}} } e^{C|x|} dr ~\le~ C e^{C|x|}. \end{aligned}$$ This proves the bound estimate on $\partial^2_{x_1x_1} v(s,x)$ (or equivalently $\partial^2_{{\rm x}{\rm x}} {\rm v}(s, {\rm x})$) in [\[eq:bound_dvr\]](#eq:bound_dvr){reference-type="eqref" reference="eq:bound_dvr"}. In view of [\[eq:bound_ds_fr\]](#eq:bound_ds_fr){reference-type="eqref" reference="eq:bound_ds_fr"}, one can obtain the same bound on $\partial_s {\rm v}(s, {\rm x})$ in [\[eq:bound_dvr\]](#eq:bound_dvr){reference-type="eqref" reference="eq:bound_dvr"}. 
Finally, $\partial_{{\rm x}} {\rm v}(s, {\rm x})$ is estimated by appealing to Proposition [Proposition 21](#prop : fb C1){reference-type="ref" reference="prop : fb C1"} and [\[eq:bound_g\_l\]](#eq:bound_g_l){reference-type="eqref" reference="eq:bound_g_l"}. The bound on the right-hand side of [\[eq: borne croissance vr et cond bord\]](#eq: borne croissance vr et cond bord){reference-type="eqref" reference="eq: borne croissance vr et cond bord"} is proved similarly. $\mathrm{(ii)}$ Recall that $$\begin{aligned} {\rm f}(s,{\rm x};t,y) ~=~ {\rm f}_{t, y}(s,{\rm x};t,y) ~+ \int_{s}^{t}\int_{\mathbb{R}^{2}} {\rm f}_{r, z}(s,{\rm x};r,z) \Phi(r,z,t,y)dzdr, \end{aligned}$$ and that $(s,{\rm x})\in [0,t)\times D([0,T])\mapsto {\rm f}_{t,y}(s,{\rm x};t,y)$ solves $$\begin{aligned} \label{eq: LcircE bar fE =0} {\rm L}_{t, y} {\rm f}_{t, y}(\cdot;t,y)=0 \mbox{ on $[s,t)\times C([0,T])$,} \end{aligned}$$ where $${\rm L}_{t, y} ~:=~ \partial_s + \frac12\sigma_{t}( y)^{2} \partial^{2}_{{\rm x}{\rm x}}.$$ Let $${\rm L} \phi(s,{\rm x}) ~:=~ \partial_s \phi(s, {\rm x}) + \mu_{s}({\rm x}) \partial_{{\rm x}}\phi(s,{\rm x})+\frac12\sigma_{s}^{2}({\rm x})\partial^{2}_{{\rm x}}\phi(s,{\rm x}),$$ for $\phi \in {\mathbb C}^{1,2}([0,T))$. Recalling the definition of $v_{ \Phi}$ in [\[eq: def v bar Phi\]](#eq: def v bar Phi){reference-type="eqref" reference="eq: def v bar Phi"} and using [\[eq : partial x1 x1 v(s,barx)\]](#eq : partial x1 x1 v(s,barx)){reference-type="eqref" reference="eq : partial x1 x1 v(s,barx)"}, we obtain that, with $x := ({\rm x}(s), I_s({\rm x}))$, $$\begin{aligned} {\rm L} {\rm v}(s,{\rm x}) =& \int_{\mathbb{R}^{2}}{\rm L} {\rm f}_{ {T},y}(s,{\rm x};T,y)g(y)dy - v_{ \Phi}(s, x) {- \ell(s,x)} \nonumber \\ &+ \int_{s}^{T} \!\! \int_{\mathbb{R}^{2}} {\rm L} {\rm f}_{r, z}(s,{\rm x};r,z) \big( v_{ \Phi}(r,z) { + \ell(r,z)} \big) dz dr. 
\label{eq: Lxr tilde vr} \end{aligned}$$ At the same time, as a consequence of [\[eq: sol eq Phi\]](#eq: sol eq Phi){reference-type="eqref" reference="eq: sol eq Phi"} and [\[eq: lien derive f bar f\]](#eq: lien derive f bar f){reference-type="eqref" reference="eq: lien derive f bar f"}-[\[eq: lien derive seconde f bar f\]](#eq: lien derive seconde f bar f){reference-type="eqref" reference="eq: lien derive seconde f bar f"}, we observe that $$\begin{aligned} \Phi(s,x;t,y) =~& ({\rm L} -{\rm L}_{t, y}) {\rm f}_{t, y}(s,{\rm x}; t,y) \\ &+ \int_{s}^{t} \! \int_{\mathbb{R}^{2}}({\rm L} -{\rm L}_{r,z}) { {\rm f}_{r,z}}(s,{\rm x};r,z) \Phi(r,z;t,y) dz dr. \end{aligned}$$ Hence, recalling Lemma [Lemma 17](#lem : estime L-Lfy){reference-type="ref" reference="lem : estime L-Lfy"} and Proposition [Proposition 18](#prop:existence_continuite_Phi){reference-type="ref" reference="prop:existence_continuite_Phi"}, it follows from [\[eq: def v bar Phi\]](#eq: def v bar Phi){reference-type="eqref" reference="eq: def v bar Phi"} that $$\begin{aligned} v_{ \Phi}(s,x)=&\int_{\mathbb{R}^{2}}({\rm L} -{\rm L}_{T, y}) {\rm f}_{ {T,}y}(s,{\rm x};T, y)g(y)d y \\ &+\int_{s}^{T} \!\! \int_{\mathbb{R}^{2}}({\rm L} -{\rm L}_{r,z}) {\rm f}_{ {r,}z}(s,{\rm x};r,z) \big( v_{ \Phi}(r,z) {+ \ell(r,z) } \big) dz dr. \end{aligned}$$ We then use [\[eq: LcircE bar fE =0\]](#eq: LcircE bar fE =0){reference-type="eqref" reference="eq: LcircE bar fE =0"} to obtain $$\begin{aligned} v_{\Phi}(s,x) ~=& \int_{\mathbb{R}^{2}}{\rm L} {\rm f}_{T, y}(s,{\rm x};T, y)g( y)d y + \int_{s}^{T} \!\! \int_{\mathbb{R}^{2}} {\rm L} {\rm f}_{r, z}(s,{\rm x};r,z) \big( v_{\Phi} (r,z) {+ \ell(r,z) } \big) dz dr. \end{aligned}$$ It then follows from [\[eq: Lxr tilde vr\]](#eq: Lxr tilde vr){reference-type="eqref" reference="eq: Lxr tilde vr"} that ${\rm v}$ is a classical solution to the PPDE [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"}. 
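The frozen-coefficient identity [\[eq: LcircE bar fE =0\]](#eq: LcircE bar fE =0){reference-type="eqref" reference="eq: LcircE bar fE =0"} that drives this computation says that the frozen kernel is annihilated by the backward heat operator. A minimal one-dimensional finite-difference sanity check of this fact (the constant `sig`, the evaluation point and the step `h` are arbitrary illustrative choices; the paper's kernel is two-dimensional and degenerate, which this sketch does not reproduce):

```python
import math

# One-dimensional analogue of the frozen-coefficient backward heat equation
#   d_s f + (1/2) sig^2 d^2_xx f = 0,
# checked by central finite differences on the Gaussian transition kernel.
def frozen_kernel(s, x, t, y, sig):
    """Gaussian density of dX = sig dW at time t and point y, started at (s, x)."""
    var = sig * sig * (t - s)
    return math.exp(-((x - y) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

s, x, t, y, sig, h = 0.3, 0.4, 1.0, 0.0, 0.7, 1e-4
d_s = (frozen_kernel(s + h, x, t, y, sig) - frozen_kernel(s - h, x, t, y, sig)) / (2 * h)
d_xx = (frozen_kernel(s, x + h, t, y, sig) - 2 * frozen_kernel(s, x, t, y, sig)
        + frozen_kernel(s, x - h, t, y, sig)) / h ** 2
residual = d_s + 0.5 * sig ** 2 * d_xx  # should vanish up to discretization error
assert abs(residual) < 1e-5
```

The residual is of order $h^2$ (central-difference truncation), down to the rounding floor of the second difference.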
$\mathrm{(iii)}$ We now prove that $\lim_{s \nearrow T} {\rm v}(s,{\rm x}) = g({\rm x}(T), I_T({\rm x}))$, or equivalently $\lim_{s \nearrow T} v(s,x) = g(x)$, whenever $g$ is continuous. In view of the estimates in [\[eq: majo fy\]](#eq: majo fy){reference-type="eqref" reference="eq: majo fy"} and [\[eq: estime vraie densite\]](#eq: estime vraie densite){reference-type="eqref" reference="eq: estime vraie densite"}, and Proposition [Proposition 19](#prop:existence_fb_regul){reference-type="ref" reference="prop:existence_fb_regul"}, one has $$\begin{aligned} \lim_{s \nearrow T} v(s,x) ~=~& \lim_{s \nearrow T} \int_{\mathbb{R}^2} f(s,x; T,y) g(y) dy ~=~ \lim_{s \nearrow T} \int_{\mathbb{R}^2} f_{T,y} (s,x; T,y) g(y) dy \\ ~=~& \lim_{M \to \infty} \lim_{s \nearrow T} \int_{D^M_{s,T}} \!\! f_{T,y} (s,x; T,y) g(y) dy ~= \lim_{M \to \infty} \lim_{s \nearrow T} \int_{D^M_{s,T}} \!\! f_{T,x} (s,x; T,y) g(y) dy \\ ~=~& \lim_{s \nearrow T} \int_{\mathbb{R}^2} f_{T,x} (s,x; T,y) g(y) dy ~=~ g(x), \end{aligned}$$ in which $$D^M_{s,T} := \Big[ x_1 -M \sqrt{T-s}, ~x_1+ M \sqrt{T-s} \Big] \times\Big[ x_2 - M \sqrt{(T-s) \tilde m_{s,T}}, ~x_2 + M \sqrt{(T-s) \tilde m_{s,T}} \Big],$$ so that the third and fifth equalities hold since both $f_{T,y}(s,x; T,y)$ and $f_{T,x}(s,x; T,y)$ are dominated by $C f^{\circ}(s,x; T,y)$ in which the covariance matrix in $f^{\circ}$ is given by $\Sigma_{s,T}(4\bar \mathfrak{a})$, and the fourth equality follows from the fact that, for every fixed $M > 0$, $$\lim_{s \nearrow T} \sup_{y \in D^M_{s,T}} \left| \frac{f_{T,y}(s,x; T,y)}{ f_{T,x}(s,x; T,y)} - 1 \right| = 0.$$ $\mathrm{(iv)}$ The fact that ${\rm v}$ is the unique solution of [\[eq:ppde\]](#eq:ppde){reference-type="eqref" reference="eq:ppde"} satisfying [\[eq: borne croissance vr et cond bord\]](#eq: borne croissance vr et cond bord){reference-type="eqref" reference="eq: borne croissance vr et cond bord"} follows easily from a verification argument based on Itô-Dupire's formula, see 
[@cont2013functional], whenever $g$ is continuous.  ◻ ## Proofs of Theorems [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"}, [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"} and [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"} {#proofs-of-theorems-thmf_well_defined-thmc12-and-thmvr_uniquex} **Proof of Theorem [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"}*.* $\mathrm{(i)}$ First, the well-posedness of $\widetilde{\Phi}$ in [\[eq:def_Phit\]](#eq:def_Phit){reference-type="eqref" reference="eq:def_Phit"}-[\[eq: def Delta L k\]](#eq: def Delta L k){reference-type="eqref" reference="eq: def Delta L k"} is proved in Proposition [Proposition 18](#prop:existence_continuite_Phi){reference-type="ref" reference="prop:existence_continuite_Phi"}. Further, the well-posedness of $f$ in [\[eq:def_f\_transition_proba\]](#eq:def_f_transition_proba){reference-type="eqref" reference="eq:def_f_transition_proba"} as well as its continuity and growth property is proved in Proposition [Proposition 19](#prop:existence_fb_regul){reference-type="ref" reference="prop:existence_fb_regul"}. $\mathrm{(ii)}$ Under further conditions, the existence of $\partial_{x_1} f(s,x; t,y)$ as well as its continuity and growth property is proved in Proposition [Proposition 21](#prop : fb C1){reference-type="ref" reference="prop : fb C1"}. ◻ **Proof of Theorem [Theorem 9](#thm:c12){reference-type="ref" reference="thm:c12"}*.* $\mathrm{(i)}$ The fact that ${\rm f}(\cdot; t, y) \in {\mathbb C}^{1,2}([0,t))$ is proved in Proposition [Proposition 27](#prop:fr_C12){reference-type="ref" reference="prop:fr_C12"}. $\mathrm{(ii)}$ The fact that ${\rm v}$ provides a classical solution to the PPDE, as well as the estimation on the derivatives are proved in Proposition [Proposition 28](#prop:vr_C12_edp){reference-type="ref" reference="prop:vr_C12_edp"}. 
$\mathrm{(iii)}$ We now use the PPDE results in Item $\mathrm{(ii)}$ to study the path-dependent SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"}. To study its weak solutions, we consider the martingale problem on the canonical space $C([0,T])$ of all $\mathbb{R}^2$-valued continuous paths on $[0,T]$. By abuse of notation, we denote by $(X_t, I_t)_{t \in [0,T]}$ the canonical process, which generates the canonical filtration $\mathbb{F}$. Then, given an initial condition $(t, x) \in [0, T] \times\mathbb{R}^2$, a solution to the corresponding martingale problem is a probability measure $\mathbb{P}$ on $C([0,T])$ such that $\mathbb{P}[(X_s, I_s) = x = (x_1, x_2), ~s \in [0,t]] = 1$, $\mathbb{P}[I_s = x_2 + \int_t^s X_r dA_r, ~s \in [t, T]] = 1$ and the process $$\varphi(X_s) - \int_t^s \Big( \bar{\mu}_r(X) D \varphi(X_r) + \frac12 \bar{\sigma}^2_r (X) D^2 \varphi(X_r) \Big) dr, ~s \in [t,T],$$ is a $(\mathbb{P}, \mathbb{F})$-martingale for all bounded smooth functions $\varphi: \mathbb{R}\longrightarrow \mathbb{R}$. Let us denote, for all $(t,x) \in [0,T] \times\mathbb{R}^2$, $${\cal P}(t,x) ~:=~ \big\{ \mathbb{P}~: \mathbb{P}~\mbox{is a solution to the martingale problem with initial condition}~(t,x) \big\}.$$ Since $\bar{\mu}$ and $\bar{\sigma}$ are both bounded and continuous, it is classical that ${\cal P}(t,x)$ is a nonempty compact set (see e.g. Stroock and Varadhan [@stroock1997multidimensional Chapter VI]). We next apply the classical Markovian selection technique (see e.g. [@stroock1997multidimensional Chapter 12.2]) to construct a weak solution to the SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} such that $(X_t, I_t)_{t \in [0,T]}$ is a strong Markov process. 
Let $(\phi_n)_{n \ge 1}$ be a sequence of bounded continuous functions from $[0,T] \times\mathbb{R}^2$ to $\mathbb{R}$ that is measure determining, in the sense that the family $$\Big\{ \mathbb{E}^{\mathbb{P}} \Big[ \int_0^T \phi_n(t, X_t, I_t) dt \Big] \Big\}_{n \ge 1}$$ determines the probability measure $\mathbb{P}$ on $C([0,T])$. For each $(t,x) \in [0,T] \times\mathbb{R}^2$, let ${\cal P}^+_0(t,x) := {\cal P}(t,x)$, and then define, for each $n \ge 1$, $${\cal P}^+_{n}(t,x) = \Big\{ \mathbb{P}\in {\cal P}^+_{n-1} (t,x) ~: \mathbb{E}^{\mathbb{P}} \Big[ \int_0^T \phi_n(t, X_t, I_t) dt \Big] = \max_{\mathbb{P}' \in {\cal P}^+_{n-1}(t,x)} \mathbb{E}^{\mathbb{P}'} \Big[ \int_0^T \phi_n(t, X_t, I_t) dt \Big] \Big\}.$$ It is easy to see that each ${\cal P}^+_n(t,x)$ is a non-empty compact set, so that ${\cal P}^+(t,x) := \cap_{n \ge 1} {\cal P}^+_n(t,x)$ is also non-empty and compact, as the sequence is non-increasing. Moreover, since any two probability measures in ${\cal P}^+(t,x)$ assign the same value to each of the functionals above, ${\cal P}^+(t,x)$ contains exactly one probability measure, denoted by $\mathbb{P}^+_{t,x}$. By the dynamic programming principle for the optimal control problem in the definition of ${\cal P}^+_{n}$, it follows that $(X, I, (\mathbb{P}^+_{t,x})_{(t,x) \in [0,T] \times\mathbb{R}^2})$ provides a Markov process solution to the SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} such that $(X, I)$ is a strong Markov process. At the same time, one can apply the above Markovian selection argument to construct another Markov process $(X, I, (\mathbb{P}^-_{t,x})_{(t,x) \in [0,T] \times\mathbb{R}^2})$ by replacing "$\max$" by "$\min$" in the definition of ${\cal P}^+_{n}(t,x)$. If the set of martingale solutions ${\cal P}(t,x)$ is not reduced to a singleton, then $\mathbb{P}^+_{t,x} \neq \mathbb{P}^-_{t,x}$, as $(\phi_n)_{n \ge 1}$ is measure determining. 
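The selection mechanism above can be seen on a minimal finite toy, where a set of vectors stands in for the compact set ${\cal P}(t,x)$ and invented linear functionals stand in for the measure-determining family $(\phi_n)_{n \ge 1}$; this sketches only the combinatorial idea, not the actual construction on $C([0,T])$:

```python
# Toy Markovian selection: a finite set stands in for the compact set P(t,x),
# and two invented linear functionals stand in for the measure-determining
# family (phi_n). Iteratively keeping only maximizers shrinks the set to a
# single element, the analogue of P^+_{t,x}.
candidates = [(0.3, 1.2), (0.3, 0.7), (0.1, 0.9)]   # hypothetical "measures"
functionals = [lambda m: m[0], lambda m: m[1]]      # stand-ins for P -> E^P[...]

selected = list(candidates)                          # P_0^+ = P(t,x)
for phi in functionals:                              # P_n^+ : maximizers over P_{n-1}^+
    best = max(phi(m) for m in selected)
    selected = [m for m in selected if phi(m) == best]

assert selected == [(0.3, 1.2)]                      # a unique limit point remains
```

Replacing `max` by `min` yields the analogue of $\mathbb{P}^-_{t,x}$, which differs from the `max`-selection here, mirroring the role of the measure-determining family in distinguishing the two selections.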
At the same time, by the results in Item $\mathrm{(ii)}$ and the Feynman-Kac formula in the case $g\equiv 0$, one has $$\mathbb{E}^{\mathbb{P}^+_{s,x}} \Big[ \int_s^T \ell_t(X_t, I_t) dt \Big] = \mathbb{E}^{\mathbb{P}^-_{s,x}} \Big[ \int_s^T \ell_t(X_t, I_t) dt \Big] = \int_s^T \int_{\mathbb{R}^2} f(s,x; t,y) \ell_t(y) dy dt.$$ Since $\ell$ is an arbitrary bounded continuous function, this implies that $\mathbb{P}^+_{s,x} = \mathbb{P}^-_{s,x}$ for all $(s,x) \in [0,T] \times\mathbb{R}^2$. Therefore, for every initial condition $(t,x)$, there exists a unique solution to the martingale problem, i.e. a unique weak solution. Moreover, the (unique) solution process $(X, I)$ is a strong Markov process, and the transition probability function is given by $f$. ◻ **Proof of Theorem [Theorem 11](#thm:vr_uniqueX){reference-type="ref" reference="thm:vr_uniqueX"}.** When the SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"} admits weak uniqueness, the above Markovian selection argument shows that the only solution $(X, I)$ is a strong Markov process. $\mathrm{(i)}$ Let $W^{\perp}$ be a Brownian motion independent of $W$, and $(\varepsilon_n)_{n \ge 1}$ be a sequence of positive constants such that $\varepsilon_n \longrightarrow 0$. For each $n \ge 1$, let us define $\widetilde{X}^{n} = (\widetilde{X}^{n,1}, \widetilde{X}^{n,2})$ as the unique (Markovian) solution to the SDE $$\begin{aligned} d \widetilde{X}^{n,1}_t & = \tilde{\mu}_t (\widetilde{X}^n_t) dt + \tilde{\sigma}_t(\widetilde{X}^n_t) dW_t, ~~~ d \widetilde{X}^{n,2}_t = \tilde{\mu}_t(\widetilde{X}^n_t) A_t dt + \tilde{\sigma}_t(\widetilde{X}^n_t) A_t d W_t + \varepsilon_n dW^{\perp}_t. \end{aligned}$$ By stability of weak solutions of SDEs, using for the above SDE the same initial condition as that in [\[eq:def_Xt\]](#eq:def_Xt){reference-type="eqref" reference="eq:def_Xt"} for $\widetilde{X}$, one has $\widetilde{X}^n \longrightarrow \widetilde{X}$ weakly. 
At the same time, it follows from e.g. [@francesco2005class] that, for each $t\in (s,T]$, $\widetilde{X}^{n}_{t}$ has a density $\tilde{f}^{n}(s,x;t,\cdot)$ whenever $\widetilde{X}^{n}_{s}=x$. Moreover, $\tilde{f}^n$ can be defined in the form $$\tilde{f}^{n}(s,x;t,y)= \tilde{f}^{n}_{{t},y}(s,x;t,y)+ \int_{s}^{t}\int_{\mathbb{R}^{2}} \tilde{f}^{n}_{{r},z}(s,x;r,z)\widetilde{\Phi}^{n}(r,z;t,y)dzdr$$ in which $\tilde{f}^{n}_{{t},y}$ is defined as $\tilde{f}_{{t},y}$ but with $$\Sigma^{n}_{s,t} (r, z) = \sigma^2_{r} (z) \left( \begin{array}{cc} t- s & - \int_s^t (A_u - A_s) du \\ - \int_s^t (A_u - A_s) du & \int_s^t \big[(A_u - A_s)^2+ \varepsilon^{2}_{{n}} \sigma_r (z)^{-2} \big] du \end{array} \right)$$ in place of $\Sigma_{s,t} (r, z)$, and $$\widetilde{\Phi}^{n}(s,x; t, y) ~:=~ \sum_{k=0}^{\infty} \widetilde{\Delta}^{n}_k(s,x; t,y),$$ where $\widetilde{\Delta}^{n} _0(s,x; t,y) := \big( \widetilde{{\cal L}}^n_s - \widetilde{{\cal L}}^{n,t,y}_s \big) \tilde{f}^{n}_{t,y}(s,x; t,y)$, $$\begin{aligned} \widetilde{\Delta}^{n}_{k+1} (s,x;t,y) := \int_s^t \int_{\mathbb{R}^2} \widetilde{\Delta}^{n} _0(s,x; r,z) \widetilde{\Delta}^{n}_{k}(r,z;t,y) dz dr, ~~k\ge 0. \end{aligned}$$ In the above, $\widetilde{{\cal L}}^n$ is the generator of $\widetilde{X}^{n}$ and $\widetilde{{\cal L}}^{n,t,y}$ is defined from $\widetilde{{\cal L}}^n$ as $\widetilde{{\cal L}}^{t,y}$ is defined from $\widetilde{{\cal L}}$ by freezing $\sigma$ to $\sigma_{t}(y)$ and erasing the drift term. Then, we define $f^{n}_{r,z}$ from $\tilde{f}^n_{\cdot}$ as $f_{r,z}$ is defined from $\tilde{f}_{\cdot}$ in Section [2.3](#sec:main_results){reference-type="ref" reference="sec:main_results"}. It is straightforward to check that the estimates in [\[eq: estime Delta k\]](#eq: estime Delta k){reference-type="eqref" reference="eq: estime Delta k"} hold for $(\widetilde{\Delta}^{n}_{k})_{k\ge 0}$ in place of $(\widetilde{\Delta}_{k})_{k\ge 0}$, uniformly in $n>0$. 
Then, an induction argument, combined with the fact that $\tilde{f}^{n}_{t, y}(s,x; t,y)\to \tilde{f}_{t, y}(s,x; t,y)$ as $n\to \infty$, for all $(s,x,t,y)\in \Theta$, implies that $f^{n}(s, x;t, y):= \tilde{f}^n (s,{\rm \bf A}_{s} x;t,{\rm \bf A}_{t} y)$ converges to $f(s, x;t, y)$ as $n \longrightarrow \infty$, for all $(s, x,t, y)\in \Theta$. By the weak convergence of the sequence of processes $(\widetilde{X}^n)_{n \ge 1}$ to $\widetilde{X}$, this shows that $f$ is the transition probability function of $(X, I)$. $\mathrm{(ii)}$ As shown in Theorem [Theorem 7](#thm:f_well_defined){reference-type="ref" reference="thm:f_well_defined"}, one has ${\rm v}\in {\mathbb C}^{0,1}([0,T))$ and the vertical derivative $\partial_{{\rm x}}{\rm v}$ is locally bounded. Let $(X,I)$ be the solution of SDE [\[eq:SDE_XI\]](#eq:SDE_XI){reference-type="eqref" reference="eq:SDE_XI"}. Then, by the Feynman-Kac formula, the process $${\rm v}(t,X) + \int_0^t \bar \ell(s, X) ds ,~t \in [0,T], ~\mbox{is a local martingale}.$$ One can further apply the $C^1$-Itô formula for path-dependent functionals in [@bouchard2021c] to prove [\[eq: Ito vr\]](#eq: Ito vr){reference-type="eqref" reference="eq: Ito vr"}. Indeed, when [\[eq: cond vr Lip\]](#eq: cond vr Lip){reference-type="eqref" reference="eq: cond vr Lip"} holds true, one can directly apply [@bouchard2021c Proposition 2.11 and Theorem 2.5]. 
Otherwise, when $A$ is monotone and $0<\frac{1+\beta_{2}-\beta_{0}}{2+4\beta_{4}} < 1-\frac{\beta_3-\beta_{2}+\beta_{0}}{2}$, we can fix $\alpha'\in (\frac{1+\beta_{2}-\beta_{0}}{2+4\beta_{4}},1-\frac{\beta_3-\beta_{2}+\beta_{0}}{2})$, and, by Proposition [Proposition 22](#prop : holder f en x2){reference-type="ref" reference="prop : holder f en x2"}, there exists a constant $C> 0$ such that, for all $\varepsilon> 0$, $$\begin{aligned} &\mathbb{E}\Big[ \Big| {\rm v}\big(s+\varepsilon, X \big)-{\rm v}\big(s+\varepsilon, X_{s \wedge \cdot} \oplus_{s+\varepsilon} (X_{s+\varepsilon} - X_s) \big) \Big|^{2} \Big] \\ ~\le~& C \mathbb{E}\Big[ \Big(\sup_{ s \le t \le s+\varepsilon} \big| X_t - X_s \big| ~\varepsilon^{\beta_{4}} \Big)^{\frac{4\alpha'}{1+\beta_{2}'}} \Big] ~\le~ C\varepsilon^{\frac{\alpha'(2 +4\beta_{4})}{1+\beta_{2}'}}, \end{aligned}$$ where $\big( X_{s \wedge \cdot} \oplus_{s+\varepsilon} (X_{s+\varepsilon} - X_s) \big)_t := \mathbf{1}_{[0, s+\varepsilon)}(t) X_{s\wedge t} + \mathbf{1}_{[s+\varepsilon,T]} (t) X_{s+\varepsilon}$ for all $t \in [0,T]$. Since $\frac{\alpha'(2+4\beta_{4})}{1+\beta_{2}'}>1$, it follows that $$\lim_{\varepsilon\searrow 0} \frac1\varepsilon \mathbb{E}\Big[ \Big|{\rm v}\big(s+\varepsilon, X\big)-{\rm v}\big(s+\varepsilon, X_{s \wedge \cdot} \oplus_{s+\varepsilon} (X_{s+\varepsilon} - X_s) \big) \Big|^{2} \Big] ~=~ 0.$$ Finally, we can apply [@bouchard2021c Proposition 2.6 and Theorem 2.5] to deduce [\[eq: Ito vr\]](#eq: Ito vr){reference-type="eqref" reference="eq: Ito vr"}. ◻ Bruno Bouchard, Grégoire Loeper, and Xiaolu Tan. A $C^{0, 1}$-functional Itô's formula and its applications in mathematical finance. , 148:1299--323, 2022. Bruno Bouchard, Grégoire Loeper, and Xiaolu Tan. Approximate viscosity solutions of path-dependent PDEs and Dupire's vertical differentiability. , to appear. Rama Cont and David-Antoine Fournié. Functional Itô calculus and stochastic integral representation of martingales. , 41(1):109--133, 2013. 
Andrea Cosso, Fausto Gozzi, Mauro Rosestolato, and Francesco Russo. Path-dependent Hamilton-Jacobi-Bellman equation: Uniqueness of Crandall-Lions viscosity solutions. , 2021. François Delarue and Stéphane Menozzi. Density estimates for a random noise propagating through a chain of differential equations. , 259(6):1577--1630, 2010. Bruno Dupire. Functional Itô calculus. , 04, 2009. Ibrahim Ekren, Christian Keller, Nizar Touzi, and Jianfeng Zhang. On viscosity solutions of path dependent PDEs. , 42(1):204--236, 2014. Marco Di Francesco and Andrea Pascucci. On a class of degenerate parabolic equations of Kolmogorov type. , 2005(3):77--116, 2005. Avner Friedman. . Courier Dover Publications, 2008. Andreï N. Kolmogorov. Zufällige Bewegungen (zur Theorie der Brownschen Bewegung). , 35:116--117, 1934. Ermanno Lanconelli, Andrea Pascucci, and Sergio Polidoro. Linear and nonlinear ultraparabolic equations of Kolmogorov type arising in diffusion theory and in finance. , 2:243--265, 2002. Zhenjie Ren, Nizar Touzi, and Jianfeng Zhang. Comparison of viscosity solutions of semi-linear path-dependent PDEs. , 58(1):277--302, 2020. Isaac M. Sonin. On a class of degenerate diffusion processes. , 12(3):490--496, 1967. Daniel W. Stroock and Srinivasa R. Varadhan. , volume 233. Springer Science & Business Media, 1997. Maria Weber. The fundamental solution of a degenerate partial differential equation of parabolic type. , 71(1):24--37, 1951. Jianjun Zhou. Viscosity solutions to second order path-dependent Hamilton-Jacobi-Bellman equations and applications. , 2020. [^1]: CEREMADE, Université Paris-Dauphine, PSL, CNRS. bouchard\@ceremade.dauphine.fr. [^2]: Department of Mathematics, The Chinese University of Hong Kong. xiaolu.tan\@cuhk.edu.hk. [^3]: Or more rigorously its viscosity solution in the sense of [@cosso2021path; @zhou2020viscosity], see also e.g. [@bouchard2021approximate; @ekren2014viscosity; @ren2014comparison] and the references therein for an alternative definition.
{ "id": "2310.04308", "title": "On the regularity of solutions of some linear parabolic path-dependent\n PDEs", "authors": "Bruno Bouchard and Xiaolu Tan", "categories": "math.PR math.AP", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The paper describes two possible ways of extending the definition of Haar measure to non-Hausdorff locally compact groups. The first one forces compact sets to be measurable: with this construction, a counterexample to the existence of the Haar measure is provided. The second one makes use of closed compact sets instead of compact sets in the definition of Radon measure: this way, the classical theorems of existence and uniqueness of the Haar measure can be generalised to locally compact groups, not necessarily Hausdorff. author: - Lisa Valentini bibliography: - bibliography.bib date: July 17, 2023 title: | Haar measure for non-Hausdorff\ locally compact groups --- # Introduction {#introduction .unnumbered} A Haar measure on a Hausdorff topological group is a nonzero translation-invariant Radon measure; on a locally compact Hausdorff group, the Haar measure exists and it is unique up to scalar multiplication. Moreover, it is common knowledge that these two results can be generalised to the non-Hausdorff case. Nevertheless, this fact is usually stated without proof and considered a folklore result. Many treatments in the literature, e.g. Folland [@bi:folland Chapter 11] and Hewitt and Ross [@bi:hewitt Theorem 5.36], focus on Hausdorff topological groups, briefly justifying that this is not a loss of generality. In fact, some sources quote the following construction as a way to generalise the Haar measure to those topological groups that are not Hausdorff: given a locally compact group $G$, the quotient $G / \overline{\{e_G\}}$ is locally compact and Hausdorff, thus it admits a Haar measure $\mu$; given the quotient map $\pi \colon G \to G / \overline{\{e_G\}}$, the assignment $\mu'(A)\coloneqq \mu(\pi(A))$ for every $A\in\mathcal{B}(G)$ defines a Haar measure on $G$. 
The problem in this statement concerns the very definition of a Haar measure, which classically requires the underlying space to be Hausdorff in order to guarantee that compact sets are Borel sets. #### Our first issue, then, is re-defining the Haar measure for not necessarily Hausdorff topological groups. In this paper, two possible ways are considered: they both arise as extremely natural generalisations of the common definition of Haar measure on Hausdorff topological groups, but only one of them leads to the results of existence and uniqueness (under the sole assumption that the group is locally compact). They develop as follows. The first option that is taken into account forces compact sets to be measurable by defining a Radon measure on the wider $\sigma$-algebra generated by compact sets and open sets: this attempt fails to ensure the existence of a Haar measure, even in case the topological group is locally compact, and an example of this situation is provided (Example [Example 9](#ex:V.1){reference-type="ref" reference="ex:V.1"}). The second way consists in replacing the collection of compact sets with the collection of closed compact sets, which are automatically Borel sets, in the definition of Radon measure, and more specifically of local finiteness and inner regularity (Definition [Definition 11](#def:Radon){reference-type="ref" reference="def:Radon"}): with the resulting definition of Haar measure (Definition [Definition 12](#def:Haar){reference-type="ref" reference="def:Haar"}), the Haar measure on a not necessarily Hausdorff locally compact group exists and is unique up to scalar multiplication. The present paper is mainly devoted to giving a direct proof of these facts, without involving quotient groups. The main theorems read as follows. Let $G$ be a locally compact topological group. Then there exists a left Haar measure on $G$. Let $G$ be a locally compact topological group, and let $\mu$ and $\mu'$ be left Haar measures over $G$. 
Then there exists $a>0$ such that $\mu'=a\mu$. The proofs provided here are an adaptation of the well-known ones valid for locally compact Hausdorff groups, which are described, e.g., by Cohn [@bi:cohn]. The generalisation is based on four main facts: a topological group is always regular (see, e.g., [@bi:dikr Proposition 3.5.1]); in a regular space compact sets and closed sets can be separated (Lemma [Lemma 13](#th:compact_closed_disjoint){reference-type="ref" reference="th:compact_closed_disjoint"}); a locally compact regular space is strongly locally compact (Proposition [Proposition 16](#th:loc_comp_group){reference-type="ref" reference="th:loc_comp_group"}); in a strongly locally compact space every compact set has a closed compact neighborhood (Lemma [Lemma 15](#th:c_o_cc){reference-type="ref" reference="th:c_o_cc"}). Moreover, with the definition of Haar measure adopted here, the construction via quotients of a Haar measure on a locally compact group that is not Hausdorff actually works, as shown in Proposition [Proposition 29](#th:quotient){reference-type="ref" reference="th:quotient"}. The key point is that this construction provides a way of proving existence and uniqueness of the Haar measure on locally compact groups by assuming that these results are valid on locally compact Hausdorff groups: on the contrary, the proof given in Theorems [Theorem 22](#th:existence){reference-type="ref" reference="th:existence"} and [Theorem 25](#th:uniqueness){reference-type="ref" reference="th:uniqueness"} does not rely on this fact. 
#### A standard example of a non-Hausdorff locally compact topological group is the product of groups $X \times G$, where $X$ is a Hausdorff locally compact topological group and $G$ is a non-trivial group, endowed with the topology induced by the topology on $X$, that is $\{U \times G \mid U \subseteq X \text{ open} \}$: using the generalised definition of Haar measure, we show that the Haar measure on $X \times G$ is the composition of the Haar measure on $X$ with the projection on $X$ (Example [Example 28](#ex:product){reference-type="ref" reference="ex:product"}). #### The paper is structured as follows. In the first section, standard definitions and theorems concerning the Haar measure on locally compact Hausdorff groups are quickly recalled, in order to underline the aspects that need to be adapted to the non-Hausdorff case. In the second section, we discuss the two above-mentioned attempts at generalising the Haar measure to non-Hausdorff topological groups. Example [Example 9](#ex:V.1){reference-type="ref" reference="ex:V.1"} shows that the first attempt does not lead to the desired result. Instead, Definitions [Definition 10](#def:regular){reference-type="ref" reference="def:regular"}, [Definition 11](#def:Radon){reference-type="ref" reference="def:Radon"} and [Definition 12](#def:Haar){reference-type="ref" reference="def:Haar"} introduce the Haar measure using closed compact sets to formulate local finiteness and inner regularity, and they are adopted for the rest of the paper. The third section collects preliminary facts concerning regularity and local compactness of topological spaces. 
In the fourth section, Theorems [Theorem 22](#th:existence){reference-type="ref" reference="th:existence"}, [Theorem 25](#th:uniqueness){reference-type="ref" reference="th:uniqueness"} and [Proposition 26](#th:dx_sx){reference-type="ref" reference="th:dx_sx"} prove that on a locally compact group the Haar measure exists and is unique up to multiplication by a positive constant. Applications are given in Example [Example 27](#ex:V.2){reference-type="ref" reference="ex:V.2"} and, more generally, in Example [Example 28](#ex:product){reference-type="ref" reference="ex:product"}. Both results of existence and uniqueness are checked directly, that is, without assuming that they are valid for locally compact Hausdorff groups. In the final section we check that the construction via quotients of a Haar measure on a (non-Hausdorff) locally compact group works. Therefore, Propositions [Proposition 29](#th:quotient){reference-type="ref" reference="th:quotient"} and [Proposition 31](#th:quotient_uniq){reference-type="ref" reference="th:quotient_uniq"} provide a second way of proving existence and uniqueness of the Haar measure also in the non-Hausdorff case. ------------------------------------------------------------------------ # Standard definitions As definitions are under examination themselves, we start by fixing terminology. **Notation 1**. Let $X$ be a topological space. The Borel $\sigma$-algebra on $X$ will be denoted $\mathcal{B}(X)$. The set of continuous functions $X \to \mathbbm{R}$ and the set of compactly supported continuous functions $X \to \mathbbm{R}$ will be denoted $C(X)$ and $C_c(X)$ respectively. The support of a function $f \colon X \to \mathbbm{R}$ will be denoted $\mathop{\mathrm{spt}}(f)$. Given a subset $Y\subseteq X$, the indicator function of $Y$ will be denoted $\mathbbm{1}_Y$. We make use of the standard definitions of regular topological space and normal topological space, as given, e.g., by Kelley [@bi:kelley p. 
112]: they do not require the topological space to be $T_1$. The terminology in Definition [Definition 1](#def:loc_comp){reference-type="ref" reference="def:loc_comp"} is taken from Steen and Seebach [@bi:steen_seebach]. **Definition 1**. A topological space $X$ is said to be: 1. *locally compact* if every point of $X$ admits a compact neighborhood; 2. *strongly locally compact* if every point of $X$ admits a closed compact neighborhood (or, equivalently, a relatively compact neighborhood). In standard treatments of the Haar measure on Hausdorff topological groups, the following vocabulary is commonly used. **Definition 2**. Let $X$ be a Hausdorff topological space and $\mathcal{M}$ be a $\sigma$-algebra on $X$ such that $\mathcal{M}\supseteq \mathcal{B}(X)$. A positive measure $\mu\colon\mathcal{M}\to[0,+\infty]$ is said to be: 1. *locally finite (on compact sets)* if for every $K\subseteq X$ compact $\mu(K)<+\infty$; 2. *outer regular on $E\in\mathcal{M}$* if $\mu(E)=\inf \{\mu(U)\mid U\supseteq E \text{ open}\}$; 3. *inner regular on $E\in\mathcal{M}$* if $\mu(E)=\sup \{\mu(K)\mid K\subseteq E \text{ compact}\}$. **Definition 3**. Let $X$ be a Hausdorff topological space. A *Radon measure* on $X$ is a positive Borel measure that is locally finite (on compact sets), outer regular on all Borel sets and inner regular on all open sets. **Definition 4**. Let $G$ be a topological group. A positive Borel measure $\mu$ on $G$ is said to be *left-translation-invariant* (resp. *right-translation-invariant*) if for every $A\in\mathcal{B}(G)$ and for every $g\in G$ one has $\mu(gA)=\mu(A)$ (resp. $\mu(Ag)=\mu(A)$). **Definition 5**. Let $G$ be a Hausdorff topological group. A *left* (resp. *right*) *Haar measure* on $G$ is a nonzero left-translation-invariant (resp. right-translation-invariant) Radon measure. 
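As a concrete numerical illustration of the outer and inner regularity in Definition 2, one can approximate the Lebesgue measure of $E=[0,1]$ on $\mathbbm{R}$ from outside by open intervals and from inside by compact ones; the sketch below is purely illustrative and not part of the paper's argument.

```python
# Illustrative check of outer/inner regularity (Definition 2) for Lebesgue
# measure on R, with E = [0, 1]: open supersets (-d, 1 + d) approximate
# mu(E) from above, compact subsets [d, 1 - d] from below.

def leb_interval(a, b):
    # Lebesgue measure of an interval with endpoints a <= b
    return max(b - a, 0.0)

target = leb_interval(0.0, 1.0)  # mu([0, 1]) = 1

outer = [leb_interval(-d, 1.0 + d) for d in (0.5, 0.1, 0.001)]  # open supersets
inner = [leb_interval(d, 1.0 - d) for d in (0.4, 0.1, 0.001)]   # compact subsets

# The infimum over open supersets and the supremum over compact subsets
# both converge to mu(E), as regularity requires.
assert all(o >= target for o in outer) and all(i <= target for i in inner)
assert abs(min(outer) - target) < 0.01 and abs(max(inner) - target) < 0.01
```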
In this context, the Hausdorff hypothesis ensures that compact sets are Borel sets, so that Definitions [Definition 2](#def:regular_H){reference-type="ref" reference="def:regular_H"}, [Definition 3](#def:Radon_H){reference-type="ref" reference="def:Radon_H"} and [Definition 5](#def:Haar_H){reference-type="ref" reference="def:Haar_H"} are consistent. Together with the local compactness of the topological group, it allows one to build a Radon measure that is eventually checked to be a Haar measure, and to prove its uniqueness up to a multiplicative constant. The classical theorems concerning the Haar measure on locally compact Hausdorff groups read as follows. **Theorem 6**. *Let $G$ be a locally compact Hausdorff topological group. Then there exists a left Haar measure on $G$.* **Theorem 7**. *Let $G$ be a locally compact Hausdorff topological group, and let $\mu$ and $\mu'$ be left Haar measures on $G$. Then there exists $a>0$ such that $\mu'=a\mu$.* **Remark 1**. It is immediate to see that Theorem [Theorem 6](#th:existence_H){reference-type="ref" reference="th:existence_H"} is no longer true for a Hausdorff topological group that is not locally compact. For instance, it is known that the set of rational numbers $\mathbbm{Q}$ does not have a Haar measure, since either it would be zero or any infinite compact set would have infinite measure. **Remark 2**. We use the terminology *local finiteness* (on compact sets) in Definition [Definition 2](#def:regular_H){reference-type="ref" reference="def:regular_H"} because, by the previous remark, we are interested in applying this definition only to locally compact Hausdorff groups (for which local compactness and finiteness on compact sets together imply local finiteness). # Generalised definitions #### Our aim is to generalise the Haar measure to topological groups that are not necessarily Hausdorff. The following example will be considered throughout the paper. **Example 8**. 
Let $V$ be the group $(\mathbbm{R}^2,+)$ equipped with the topology $\tau$ induced by the seminorm $N\colon \mathbbm{R}^2 \to [0,+\infty)$, $(x,y) \mapsto |x|$. The topology $\tau$ is generated by balls of radius $r>0$ and center $(x_0,y_0)\in\mathbbm{R}^2$ defined as $S_{\mathbbm{R}^2}((x_0,y_0),r)\coloneqq \{(x,y)\in\mathbbm{R}^2 \mid N((x,y)-(x_0,y_0))<r\}$. If $\mathbbm{R}$ is given the Euclidean topology, then $S_{\mathbbm{R}^2}((x_0,y_0),r) = B_{\mathbbm{R}}(x_0,r) \times \mathbbm{R}$. Thus, the equalities $\tau = \{U\times\mathbbm{R}\mid U\subseteq\mathbbm{R}\text{ open}\}$ and $\mathcal{B}(V) = \{E\times\mathbbm{R}\mid E\in \mathcal{B}(\mathbbm{R}) \}$ hold, and a subset $K\subseteq \mathbbm{R}^2$ is closed and compact if and only if it is of the form $K = K' \times \mathbbm{R}$ for a certain compact set $K'\subseteq \mathbbm{R}$. By the properties just shown, it is immediate to see that $V$ is locally compact, since $\mathbbm{R}$ is, and that it is not Hausdorff. Firstly, in order to preserve the notions of local finiteness and inner regularity of a measure $\mu$ on a (not necessarily Hausdorff) topological space $X$, the domain of $\mu$ must contain a collection of sets which is closed under finite unions. As seen in Definition [Definition 2](#def:regular_H){reference-type="ref" reference="def:regular_H"}, in the case of a Hausdorff topological space this role is played by the collection of compact sets. The first idea that can be examined is rephrasing that definition by forcing compact sets to be measurable: that is, by considering the $\sigma$-algebra $\mathcal{BK}(X)$ generated by open subsets and compact subsets of $X$, and by defining local finiteness, inner regularity and outer regularity as given in Definition [Definition 2](#def:regular_H){reference-type="ref" reference="def:regular_H"} for a measure $\mu \colon \mathcal{M} \to [0, + \infty]$ where $\mathcal{M} \supseteq \mathcal{BK}(X)$. 
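The strip structure of Example 8 is easy to probe numerically. The sketch below (an illustration, not part of the paper's argument) checks that membership in a seminorm ball ignores the second coordinate, and that the natural candidate measure $\mu(E\times\mathbbm{R})\coloneqq\mathrm{Leb}(E)$ is translation-invariant on finite unions of strips:

```python
# Illustration for Example 8: V = R^2 with seminorm N(x, y) = |x|.
# Balls are vertical strips, and mu(E x R) := Leb(E) is a natural
# translation-invariant candidate measure (illustrative sketch only).

def N(v):
    x, y = v
    return abs(x)

# Membership in the ball of center (1, 2) and radius 0.5 does not depend
# on the second coordinate at all.
center, r = (1.0, 2.0), 0.5
for y in (-100.0, 0.0, 3.14, 1e6):
    assert N((1.2 - center[0], y - center[1])) < r

# Candidate measure of E x R, with E a finite union of disjoint intervals.
def mu(intervals):
    return sum(b - a for a, b in intervals)

E = [(0.0, 1.0), (2.0, 2.5)]
g = (10.0, -7.0)  # translating by g in V only shifts the first coordinate
E_shifted = [(a + g[0], b + g[0]) for a, b in E]
assert abs(mu(E) - mu(E_shifted)) < 1e-12  # translation invariance
```

This candidate is exactly the kind of measure treated later for products $X \times G$, where the Haar measure is the Haar measure of the Hausdorff factor composed with the projection.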
There follow analogous definitions of Radon measure and Haar measure whose domain is now $\mathcal{BK}(X)$. Nevertheless, the following example shows that in these terms the existence of a Haar measure on a topological group $G$ is not guaranteed, not even in case $G$ is locally compact (or, equivalently, every point of $G$ has a local base of closed compact neighbourhoods; see Proposition [Proposition 16](#th:loc_comp_group){reference-type="ref" reference="th:loc_comp_group"}). **Example 9**. Let $V$ be the abelian topological group considered in Example [Example 8](#ex:V){reference-type="ref" reference="ex:V"}. It does not admit a Haar measure in the generalised sense just described. By contradiction, suppose there exists a Haar measure $\mu \colon \mathcal{BK}(V) \to [0,+\infty]$, where $\mathcal{BK}(V)$ is the $\sigma$-algebra generated by open sets and compact sets of $V$. The sets $K \coloneqq [0,1]\times\mathbbm{R}$ and $K_{m,n} \coloneqq [m,m+1]\times [n,n+1]$ for $m,n\in\mathbbm{Z}$ are compact; in particular, the sets $(K_{0,2n})_{n\in\mathbbm{N}}$ are pairwise disjoint and contained in $K$. Since $\mu$ is finite on compact sets and translation-invariant, one has: $$+\infty > \mu(K) \geq \mu\Bigl(\bigcup_{n\in\mathbbm{N}}K_{0,2n}\Bigr)=\sum_{n\in\mathbbm{N}} \mu(K_{0,2n}) = \sum_{n\in\mathbbm{N}} \mu(K_{0,0}) \, \text{.}$$ It follows that necessarily $\mu(K_{0,0})=0$, and then: $$\mu(\mathbbm{R}^2)= \mu\Bigl(\bigcup_{m\in\mathbbm{Z}}\bigcup_{n\in\mathbbm{Z}}K_{m,n}\Bigr) \leq \sum_{m\in\mathbbm{Z}}\sum_{n\in\mathbbm{Z}}\mu(K_{m,n}) = \sum_{m\in\mathbbm{Z}}\sum_{n\in\mathbbm{Z}}\mu(K_{0,0}) = 0 \, \text{;}$$ but $\mu$ is nonzero, which leads to a contradiction. Thus, the previous attempt must be abandoned. Instead, the construction considered in this paper makes use of the collection of closed compact sets, which is then automatically included in the $\sigma$-algebra of Borel sets. 
With this generalisation, we obtain an extension of the theorems concerning existence and uniqueness of the Haar measure to non-Hausdorff locally compact groups. The following two sections are devoted to the details of this generalisation. **Definition 10**. Let $X$ be a topological space and $\mathcal{M}$ be a $\sigma$-algebra on $X$ such that $\mathcal{M}\supseteq \mathcal{B}(X)$. A positive measure $\mu\colon\mathcal{M}\to[0,+\infty]$ is said to be: 1. *locally finite (on closed compact sets)* if for every $K\subseteq X$ closed and compact $\mu(K)<+\infty$; 2. *outer regular on $E\in\mathcal{M}$* if $\mu(E)=\inf \{\mu(U)\mid U\supseteq E \text{ open}\}$; 3. *inner regular on $E\in\mathcal{M}$* if $\mu(E)=\sup \{\mu(K)\mid K\subseteq E \text{ closed compact}\}$. The generalised definitions of Radon measure and Haar measure follow: they are formally identical to those given in Definitions [Definition 3](#def:Radon_H){reference-type="ref" reference="def:Radon_H"} and [Definition 5](#def:Haar_H){reference-type="ref" reference="def:Haar_H"}, but they are also valid in the non-Hausdorff case. **Definition 11**. Let $X$ be a topological space. A *Radon measure* on $X$ is a positive Borel measure that is locally finite (on closed compact sets), outer regular on all Borel sets and inner regular on all open sets. **Definition 12**. Let $G$ be a topological group. A *left* (resp. *right*) *Haar measure* on $G$ is a nonzero left-translation-invariant (resp. right-translation-invariant) Radon measure. **Remark 3**. Being translation-invariant, a left (resp. right) Haar measure $\mu$ on a topological group $G$ still induces translation-invariant integrals: for every $f\in \mathcal{L}^1(\mu)$ and every $g\in G$, one has $\int_G f(gx) d\mu(x) = \int_G f(x) d\mu(x)$ (resp. $\int_G f(xg) d\mu(x) = \int_G f(x) d\mu(x)$). **Remark 4**. 
Definitions [Definition 10](#def:regular){reference-type="ref" reference="def:regular"}, [Definition 11](#def:Radon){reference-type="ref" reference="def:Radon"} and [Definition 12](#def:Haar){reference-type="ref" reference="def:Haar"} are coherent with Definitions [Definition 2](#def:regular_H){reference-type="ref" reference="def:regular_H"}, [Definition 3](#def:Radon_H){reference-type="ref" reference="def:Radon_H"} and [Definition 5](#def:Haar_H){reference-type="ref" reference="def:Haar_H"} in case the underlying space is Hausdorff. From this point onward, we will refer to these extended definitions of Radon measure and Haar measure. # Locally compact regular spaces #### This section collects the preliminary results that will be used in section 4. Most of them are adaptations of standard theorems. #### Since we are interested in topological groups, which are regular (see, e.g., [@bi:dikr Proposition 3.5.1]), we will focus on regular spaces. We begin with the well-known fact that a compact set and a disjoint closed set can be separated in a regular space: see, e.g., [@bi:kelley ch. 5, Theorem 10]. **Lemma 13**. *Let $X$ be a regular topological space, and let $A,B\subseteq X$ be disjoint subsets, with $A$ compact and $B$ closed. Then there exist $U$ and $V$ disjoint open sets such that $A\subseteq U$ and $B\subseteq V$. In particular, if $X$ is regular and compact, then it is normal.* The following technical lemma is a slight variant of a result that can be found, e.g., in [@bi:cohn Lemma 7.1.10]. **Lemma 14**. *Let $X$ be a regular topological space. Consider a closed compact set $K$ and two open sets $U_1$ and $U_2$ of $X$ such that $K \subseteq U_1 \cup U_2$. 
Then there exist closed compact sets $K_1$ and $K_2$ of $X$ such that $K_1 \subseteq U_1$, $K_2\subseteq U_2$ and $K=K_1 \cup K_2$.* *Proof.* Let $L_i\coloneqq K\setminus U_i$ for $i=1,2$: they are closed in $K$ and thus compact, and they are disjoint because $K \subseteq U_1 \cup U_2$ implies $L_1 \cap L_2 = K\setminus (U_1\cup U_2) = \emptyset$. By Lemma [Lemma 13](#th:compact_closed_disjoint){reference-type="ref" reference="th:compact_closed_disjoint"} there exist disjoint open sets $V_1$ and $V_2$ such that $L_i \subseteq V_i$ for $i=1,2$. Let $K_i\coloneqq K\setminus V_i$ for $i=1,2$, which are closed and compact. One has $K_i=K\setminus V_i \subseteq K\setminus L_i=K\setminus(K\setminus U_i)=K\cap U_i\subseteq U_i$ for $i=1,2$, and $K_1\cup K_2= K\setminus(V_1\cap V_2)=K$. ◻ #### We now deal with some properties linked to local compactness. We start with a fact given by strong local compactness: it will be useful for replacing compact sets with closed compact sets once the Hausdorff hypothesis is dropped. **Lemma 15**. *Let $X$ be a strongly locally compact topological space and let $K$ be a compact subset of $X$. Then there exist an open set $U$ and a closed compact set $L$ such that $K\subseteq U \subseteq L$.* *Proof.* Because of the strong local compactness of $X$, for every $x\in K$ there exist a closed compact set $L_x$ and an open set $U_x$ such that $x\in U_x\subseteq L_x$. The collection $\{U_x\}_{x\in K}$ is an open cover of $K$, hence there exist $x_1,\dots,x_n\in K$ such that $K\subseteq \bigcup_{i=1}^n U_{x_i} \subseteq \bigcup_{i=1}^n L_{x_i}$, where $U \coloneqq \bigcup_{i=1}^n U_{x_i}$ is open and $L\coloneqq \bigcup_{i=1}^n L_{x_i}$ is closed and compact. 
◻ The following result shows that the apparently stronger requirement that a locally compact regular space be strongly locally compact is actually not restrictive; see, e.g., [@bi:kelley ch.5, Theorem 17] for a proof, or Gompa [@bi:loc_comp] for a broader discussion of local compactness. **Proposition 16**. *Let $X$ be a regular topological space. Then the following conditions are equivalent:* 1. *the space $X$ is locally compact;* 2. *the space $X$ is strongly locally compact;* 3. *every point of $X$ has a local base of compact neighbourhoods;* 4. *every point of $X$ has a local base of closed compact neighbourhoods.* **Remark 5**. In particular, a locally compact topological group is strongly locally compact. We also need to replace the well-known statement of the Urysohn lemma for locally compact Hausdorff spaces with an analogous version concerning regular locally compact spaces: a proof given by Kelley in [@bi:kelley ch.5, Theorem 18] provides Lemma [Lemma 17](#th:K<U){reference-type="ref" reference="th:K<U"} and, with a few more specifications, the desired theorem too. **Lemma 17**. *Let $X$ be a regular locally compact topological space, and consider an open set $U$ and a closed compact set $K$ in $X$ such that $K\subseteq U$. Then there exists an open set $V$ with compact closure such that $K \subseteq V \subseteq \overline{V} \subseteq U$.* **Theorem 18**. *Let $X$ be a regular locally compact topological space, and consider an open set $U$ and a closed compact set $K$ in $X$ such that $K\subseteq U$. Then there exists $g\in C_c(X)$ such that $\mathbbm{1}_K\leq g \leq \mathbbm{1}_U$ and $\mathop{\mathrm{spt}}(g)\subseteq U$.* *Proof.* Let $V$ be an open neighborhood of $K$ as in Lemma [Lemma 17](#th:K<U){reference-type="ref" reference="th:K<U"}. See [@bi:kelley ch.5, Theorem 18]: it provides a continuous function $f \colon X \to [0,1]$ which is zero on $K$ and one on $X \setminus V$. 
The function we are looking for is $g\coloneqq 1-f$, which is also compactly supported in $U$, since $\{x\in X \mid g(x) \ne 0\} \subseteq \overline{V} \subseteq U$, where $\overline{V}$ is compact and closed. ◻ The previous result allows us to adapt the well-known Riesz representation theorem for positive linear functionals to regular locally compact spaces, specifically the part concerning the uniqueness of the representing measure. **Theorem 19**. *Let $X$ be a regular locally compact topological space, and let $\mathcal{M}$ be a $\sigma$-algebra on $X$ such that $\mathcal{M}\supseteq\mathcal{B}(X)$. Let $\Lambda\colon C_c(X)\to\mathbbm{R}$ be a positive linear functional and $\mu_1,\mu_2\colon\mathcal{M}\to[0,+\infty]$ be Radon measures such that for every $f\in C_c(X)$ the equality $\int_X f \, d\mu_i=\Lambda(f)$ holds for $i=1,2$. Then $\mu_1=\mu_2$.* *Proof.* By inner and outer regularity, it is enough to prove that $\mu_1$ and $\mu_2$ coincide on closed compact sets. Given a closed compact set $K$ and an open set $U$ such that $K\subseteq U$, by Theorem [Theorem 18](#th:urysohn){reference-type="ref" reference="th:urysohn"} there exists $f\in C_c(X)$ such that $\mathbbm{1}_K\leq f \leq \mathbbm{1}_U$ and $\mathop{\mathrm{spt}}(f)\subseteq U$. Then: $$\mu_1(K)=\int_X \mathbbm{1}_K \, d\mu_1 \leq \int_X f \, d\mu_1 = \Lambda(f) = \int_X f \, d\mu_2 \leq \int_X \mathbbm{1}_U \, d\mu_2 = \mu_2(U)$$ Taking the infimum of the right-hand side over the open sets containing $K$ and using outer regularity, one obtains $\mu_1(K)\leq\mu_2(K)$. By switching the roles of the two measures, the equality follows. ◻ Lastly, we will need to compute integrals on a product space with respect to Radon measures, which are not $\sigma$-finite in general.
The next lemma can be found, e.g., in [@bi:cohn Lemma 7.6.3], while the following proposition, which makes use of the lemma, is proved as a slight variant of [@bi:cohn Proposition 7.6.4]: in order to avoid redundancy, some technical passages make explicit reference to the original proof. **Lemma 20**. *Let $X$ and $Y$ be topological spaces, with $Y$ being compact, and let $f:X\times Y \to \mathbbm{R}$ be continuous. Then for every $x_0\in X$ and for every $\epsilon>0$ there exists an open neighborhood $U_0$ of $x_0$ such that for every $x\in U_0$ and for every $y\in Y$ one has $|f(x,y)-f(x_0,y)|<\epsilon$. [\[cont_compatto\]]{#cont_compatto label="cont_compatto"}* **Proposition 21**. *Let $X$ and $Y$ be regular locally compact topological spaces, and let $\mu$ and $\lambda$ be Radon measures on $X$ and $Y$ respectively. Then for every $f\in C_c(X\times Y)$ one has the following:* 1. *$[y \mapsto f(x,y)] \in C_c(Y)$ for every $x\in X$, and $[x \mapsto f(x,y)]\in C_c(X)$ for every $y\in Y$;* 2. *$\Bigl[ x \mapsto \displaystyle\int_Y f(x,y) \, d\lambda(y) \Bigr] \in C_c(X)$, and $\Bigl[ y \mapsto \displaystyle\int_X f(x,y) \, d\mu(x) \Bigr] \in C_c(Y)$;* 3. *$\displaystyle\int_X \int_Y f(x,y) \, d\lambda(y) d\mu(x) = \int_Y \int_X f(x,y) \, d\mu(x) d\lambda(y)$.* *Proof.* Let $f\in C_c(X\times Y)$: define $K \coloneqq \mathop{\mathrm{spt}}(f)$, $K_1 \coloneqq \pi_1(K)$ and $K_2 \coloneqq \pi_2(K)$, which are all compact. Let $K_1'$ and $K_2'$ be closed compact sets containing $K_1$ and $K_2$ respectively, as given by Lemma [Lemma 15](#th:c_o_cc){reference-type="ref" reference="th:c_o_cc"}. 1. Given $x\in X$, the function $y \mapsto f(x,y)$ is continuous and vanishes outside $K_2\subseteq K_2'$: since $K_2'$ is closed, its support lies in $K_2'$ and is therefore compact. Given $y\in Y$, the proof for $x \mapsto f(x,y)$ is similar. 2. The integrals involved exist by the previous statement. Let $x_0\in X$ and $\epsilon>0$.
Applying Lemma [Lemma 20](#th:lemma_int){reference-type="ref" reference="th:lemma_int"} to $f|_{X \times K_2'}$, one finds an open neighborhood $U_0$ of $x_0$ such that $| f(x,y) - f(x_0,y) | < \epsilon$ for every $x\in U_0$ and for every $y\in K_2'$. Therefore, for every $x\in U_0$ one has: $$\begin{split} \bigg|\int_Y f(x,y) \, d\lambda(y) - \int_Y f(x_0,y) \, d\lambda(y)\bigg| & \leq \int_{K_2'} \big|f(x,y) - f(x_0,y)\big|\,d\lambda(y) \\ &\leq \epsilon\lambda(K_2') \, \text{.} \end{split}$$ Then, the function $x \mapsto \int_Y f(x,y) \, d\lambda(y)$ is continuous. Moreover, it vanishes outside $K_1\subseteq K_1'$: since $K_1'$ is closed, its support is compact, being contained in $K_1'$. The proof for $y\mapsto \int_X f(x,y) \, d\mu(x)$ is similar. 3. The integrals involved exist by the previous statement. Let $\epsilon>0$. Applying the procedure shown in [@bi:cohn Proposition 7.6.4] to the closed compact set $K_1'$, one finds $x_1,\dots,x_n\in K_1'$ and disjoint Borel sets $A_1,\dots,A_n$ such that $K_1' = \bigcup_{k=1}^n A_k$ and $| f(x,y) - f(x_k,y) | < \epsilon$ for every $x\in A_k$ and for every $y\in K_2'$, for every $k\in\{1,\dots,n\}$. Define: $$g \colon X \times Y \to \mathbbm{R}, \quad (x,y) \mapsto \sum_{k=1}^n \mathbbm{1}_{A_k}(x) f(x_k,y) \,\text{.}$$ Both iterated integrals of $g$ converge and are equal to: $$\sum_{k=1}^n \mu(A_k) \int_Y f(x_k,y)d\lambda(y)$$ because $[y \mapsto f(x_k,y)]\in C_c(Y)$ and $\mu(A_k)\leq\mu(K_1')<+\infty$ for every $k\in\{1,\dots,n\}$, since $K_1'$ is a closed compact set and $\mu$ is locally finite (on closed compact sets). The functions $f$ and $g$ vanish outside $K_1' \times K_2'$, while for $(x,y)\in K_1' \times K_2'$ there exists a unique $i\in\{1,\dots,n\}$ such that $x\in A_i$ and therefore $|f(x,y)-g(x,y)|=|f(x,y)-f(x_i,y)|<\epsilon$ holds.
Then: $$\bigg|\int_Y \int_X f(x,y) \, d\mu(x) d\lambda(y) - \int_Y \int_X g(x,y) \, d\mu(x) d\lambda(y)\bigg| \leq \epsilon\mu(K_1')\lambda(K_2')$$ $$\bigg|\int_X \int_Y f(x,y) \, d\lambda(y) d\mu(x) - \int_X \int_Y g(x,y) \, d\lambda(y) d\mu(x)\bigg| \leq \epsilon\mu(K_1')\lambda(K_2') \, \text{.}$$ Since the iterated integrals of $g$ are equal, one gets: $$\bigg|\int_Y \int_X f(x,y) \, d\mu(x) d\lambda(y) - \int_X \int_Y f(x,y) \, d\lambda(y) d\mu(x)\bigg| \leq 2\epsilon\mu(K_1')\lambda(K_2') \, \text{.}$$ Since $\epsilon$ is arbitrary, the claim follows.  ◻ # Haar measure on locally compact groups #### In this section we prove the theorems of existence and uniqueness of the Haar measure on a locally compact group. The structure of the proofs is faithful to the classical theorems concerning locally compact Hausdorff groups; Cohn [@bi:cohn ch. 9.2] is taken as a reference. However, the generalised proofs are outlined here in order to highlight the differences from the original versions. We start by showing the existence of a left Haar measure on a locally compact topological group. **Theorem 22**. *Let $G$ be a locally compact topological group. Then there exists a left Haar measure on $G$.* *Proof.* The proof is structured as in [@bi:cohn Theorem 9.2.2], and considers the collection of closed compact sets instead of the collection of compact sets. A closed compact set $K_0$ with non-empty interior is fixed: it exists since $G$ is strongly locally compact (Proposition [Proposition 16](#th:loc_comp_group){reference-type="ref" reference="th:loc_comp_group"}). Let $\mathcal{K}$ be the collection of closed compact subsets of $G$ and let $\mathcal{U}$ be the collection of open neighborhoods of $e_G$. For every $K\in \mathcal{K}$ and for every subset $S\subseteq G$ with non-empty interior $S^{\circ}$, one defines $(K:S)\coloneqq \min\{n\in\mathbbm{Z}_+\mid \exists g_1,\dots,g_n\in G \text{ such that } K\subseteq \bigcup_{k=1}^n g_kS^{\circ}\}$.
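As a sanity check on this covering number (an illustration of ours, not part of the proof), take $G=\mathbbm{R}$, $K=[0,a]$, $K_0=[0,1]$ and $U=(-\epsilon,\epsilon)$: a short compactness argument gives $(K:U)=\lfloor a/2\epsilon\rfloor+1$, and the ratio $(K:U)/(K_0:U)$, which is exactly the quantity $\mu_U(K)$ considered next, tends to the Lebesgue measure of $K$ as $U$ shrinks.

```python
import math

def covering_number(a, eps):
    # Minimal number of translates of U = (-eps, eps) needed to cover K = [0, a]:
    # n open intervals of length 2*eps cover [0, a] iff n > a/(2*eps), which
    # gives the closed formula floor(a / (2*eps)) + 1 (valid for a, eps > 0).
    return math.floor(a / (2 * eps)) + 1

def mu_U(a, eps):
    # Normalised covering ratio (K : U) / (K_0 : U), with reference set K_0 = [0, 1].
    return covering_number(a, eps) / covering_number(1.0, eps)

# As U shrinks, the ratio approaches the Lebesgue (Haar) measure of [0, a].
# Dyadic values of eps keep the floating-point divisions exact here.
for eps in (0.25, 2.0 ** -6, 2.0 ** -10):
    print(eps, mu_U(2.5, eps))
```

This is the heuristic behind the construction: Haar measure arises as a limit of normalised covering numbers.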
For every $U\in\mathcal{U}$, one considers the following function: $$\mu_U\colon\mathcal{K}\to\mathbbm{R}, \quad K \mapsto \frac{(K:U)}{(K_0:U)} \, \text{.}$$ All the required properties concerning $\mu_U$ are proved as in [@bi:cohn Lemma 9.2.3]. Endowing the interval $[0,(K:K_0)]$ with the Euclidean topology for every $K\in\mathcal{K}$, one defines the compact space $X\coloneqq \prod_{K\in\mathcal{K}} [0,(K:K_0)]$. Proceeding as in [@bi:cohn Theorem 9.2.2], for every $U \in \mathcal{U}$ one considers the function $\mu_U$ as a point of $X$, and finds a function $\mu^*\in X$ as an element of the intersection of a collection of closed subsets of $X$ satisfying the finite intersection property. All the required properties concerning $\mu^*$ are proved as in [@bi:cohn Lemma 9.2.4]; the only difference lies in showing that $\mu^*(K_1 \cup K_2) = \mu^* (K_1) + \mu^*(K_2)$ if $K_1$ and $K_2$ are disjoint closed compact sets [@bi:cohn Lemma 9.2.4(g)]: one makes use of Lemma [Lemma 13](#th:compact_closed_disjoint){reference-type="ref" reference="th:compact_closed_disjoint"} in order to separate $K_1$ and $K_2$ with disjoint open sets, and then proceeds as in the original proof. Then, the function $\mu^*$ is extended to all subsets of $G$: for every open set $U \subseteq G$ define $\mu^*(U)\coloneqq \sup \{\mu^*(K)\mid K\subseteq U, K\in\mathcal{K}\}$, and for every subset $Y\subseteq G$ define $\mu^*(Y)\coloneqq \inf \{\mu^*(U)\mid Y\subseteq U, \, U\subseteq G \text{ open}\}$. As in [@bi:cohn Theorem 9.2.2], the function $\mu^*$ is shown to be an outer measure on $G$; just note that, in order to verify the $\sigma$-subadditivity of $\mu^*$, one turns to Lemma [Lemma 14](#th:compact_in_opens){reference-type="ref" reference="th:compact_in_opens"} to write a closed compact set as a union of closed compact sets subordinate to a finite open cover, and then proceeds as in the original proof.
One checks that Borel sets are $\mu^*$-measurable and defines a measure $\mu$ as the restriction of $\mu^*$ to $\mathcal{B}(G)$, by applying Carathéodory's Theorem; finally, one proves that $\mu$ is a left Haar measure on $G$. The only difference with [@bi:cohn Theorem 9.2.2] lies in showing that $\mu$ is locally finite (on closed compact sets): by Lemma [Lemma 17](#th:K<U){reference-type="ref" reference="th:K<U"}, every closed compact set is included in an open set with compact closure, and the proof follows as in the original version. ◻ In order to prove the uniqueness of the left Haar measure, two more results need to be generalised. Just recall that in a topological group the product of a compact set and a closed set is closed (see, e.g., [@bi:hewitt Theorem 4.4]). **Proposition 23**. *Let $G$ be a locally compact group and $\mu$ be a Radon measure on $G$. Then for every $f\in C_c(G)$ one has:* 1. *$[x \mapsto f(xg)]\in C_c(G)$ and $[x \mapsto f(gx)]\in C_c(G)$ for every $g\in G$;* 2. *$\Bigl[ g \mapsto \displaystyle\int_G f(gx) d\mu(x) \Bigr] \in C(G)$ and $\Bigl[ g \mapsto \displaystyle\int_G f(xg) d\mu(x) \Bigr] \in C(G)$.* *Proof.* Claim (a) is straightforward. Claim (b) is proved as in [@bi:cohn Corollary 9.1.6]: we recall the main steps to prove that $g\mapsto \int_G f(xg)d\mu(x)$ is continuous at a point $g_0\in G$. Let $K\coloneqq \mathop{\mathrm{spt}}(f)$ and let $U$ be an open neighborhood of $g_0$ with compact closure. Since the set $K\times\overline{U}^{-1}$ is compact, the set $K\overline{U}^{-1}$ is compact, and it is closed too, because $K$ is compact and $\overline{U}^{-1}$ is closed. Then $\mu(K\overline{U}^{-1})<+\infty$, since $\mu$ is locally finite (on closed compact sets).
The proof proceeds as in [@bi:cohn Corollary 9.1.6]: given $\epsilon>0$, by applying Lemma [Lemma 20](#th:lemma_int){reference-type="ref" reference="th:lemma_int"} one finds an open neighborhood $V$ of $g_0$ such that for every $g\in V$ the function $x \mapsto f(xg) - f(xg_0)$ has absolute value controlled by $\epsilon$ and vanishes outside $K\overline{U}^{-1}$; therefore, for every $g\in V$ one has: $$\begin{split} \bigg|\int_G f(xg) \, d\mu(x)-\int_G f(xg_0) \, d\mu(x)\bigg| & \leq \int_{K\overline{U}^{-1}} |f(xg)-f(xg_0)| \, d\mu(x) \\ & \leq \epsilon\mu(K\overline{U}^{-1}) \, \text{.} \end{split}$$ Thus the function $g\mapsto \int_G f(xg)d\mu(x)$ is continuous. ◻ **Proposition 24**. *Let $G$ be a locally compact topological group, and let $\mu$ be a (left or right) Haar measure on $G$. Then:* 1. *there exists a closed compact set $K$ such that $\mu(K)>0$;* 2. *for every non-empty open set $U$ one has $\mu(U)>0$;* 3. *for every non-negative, nonzero $f\in C_c(G)$ one has $\int_G f d\mu>0$.* *Proof.* The proof follows exactly as in [@bi:cohn Theorem 9.2.5]: claim (a) holds since the Radon measure $\mu$ is nonzero, and claims (b) and (c) are straightforward consequences. ◻ We are now ready to prove the uniqueness of the Haar measure on locally compact groups. **Theorem 25**. *Let $G$ be a locally compact topological group, and let $\mu$ and $\mu'$ be left Haar measures on $G$. Then there exists $a>0$ such that $\mu'=a\mu$.* *Proof.* The proof follows as in [@bi:cohn Theorem 9.2.6]. Let $K$ be a non-empty closed compact set such that $\mu(K)>0$, which exists by Proposition [Proposition 24](#th:Haar_positive){reference-type="ref" reference="th:Haar_positive"}(a). By applying Theorem [Theorem 18](#th:urysohn){reference-type="ref" reference="th:urysohn"} to $K$ and $G$, one gets a function $g\in C_c(G)$ which is nonzero and non-negative. Fix $f\in C_c(G)$. Let $L\coloneqq \mathop{\mathrm{spt}}(f)$ and $F\coloneqq \mathop{\mathrm{spt}}(g)$.
Consider the function: $$h\colon G\times G \to \mathbbm{R}, \quad (x,y)\mapsto \frac{f(x) \, g(yx)}{\int_G g(tx) \, d\mu'(t)}$$ which is well-defined by Propositions [Proposition 23](#th:cont_trans){reference-type="ref" reference="th:cont_trans"}(a) and [Proposition 24](#th:Haar_positive){reference-type="ref" reference="th:Haar_positive"}(c), and continuous by Proposition [Proposition 23](#th:cont_trans){reference-type="ref" reference="th:cont_trans"}(b). Moreover, it is compactly supported since $\mathop{\mathrm{spt}}(h)\subseteq L\times FL^{-1}$, which is closed and compact. The Haar measures $\mu$ and $\mu'$ induce translation-invariant integrals; by applying Proposition [Proposition 21](#th:product_measure){reference-type="ref" reference="th:product_measure"} multiple times to $h\in C_c(G\times G)$ (where the first and the second factors of $G \times G$ are endowed with $\mu$ and $\mu'$ respectively), one proceeds as in [@bi:cohn Theorem 9.2.6] to get: $$\begin{split} \int_G f(x) \, d\mu(x) & = \int_G \biggl[ f(x) \; \frac{\int_G g(yx) \, d\mu'(y)}{\int_G g(tx) \, d\mu'(t)} \biggr] d\mu(x) \\ & = \int_G \biggl[ \int_G h(x,y) \, d\mu'(y) \biggr] d\mu(x) \\ & = \int_G \biggl[ \int_G h(y^{-1},xy) \, d\mu'(y) \biggr] d\mu(x) \\ & = \int_G \biggl[ \int_G \frac{f(y^{-1}) \, g(x)}{\int_G g(ty^{-1}) \, d\mu'(t)} \, d\mu'(y) \biggr] d\mu(x) \\ & = \biggl( \int_G g(x) \, d\mu(x) \biggr) \biggl( \int_G \frac{f(y^{-1})}{\int_G g(ty^{-1})\, d\mu'(t)} \, d\mu'(y) \biggr) \end{split}$$ Analogously, one gets: $$\int_G f(x) \, d\mu'(x) = \biggl( \int_G g(x) d\mu'(x) \biggr) \biggl( \int_G \frac{f(y^{-1})}{\int_G g(ty^{-1}) \, d\mu'(t)} \, d\mu'(y) \biggr) \, \text{.}$$ All the integrals involved are nonzero by Proposition [Proposition 24](#th:Haar_positive){reference-type="ref" reference="th:Haar_positive"}.
Therefore: $$\frac{\int_G f(x) \, d\mu(x)}{\int_G g(x) \, d\mu(x)} = \frac{\int_G f(x) \, d\mu'(x)}{\int_G g(x) \, d\mu'(x)} \,\text{.}$$ By defining $a \coloneqq \frac{\int_G g(x) \, d\mu'(x)}{\int_G g(x) \, d\mu(x)} > 0$, the equality $\int_G f \, d\mu' = a\int_G f \, d\mu$ holds for every $f\in C_c(G)$. Finally, define the measure $\nu \coloneqq a\mu$, which is a Radon measure too, and consider the positive linear functionals $\phi(f) \coloneqq \int_G f d\mu'$ and $\psi(f) \coloneqq \int_G f d\nu$ for $f\in C_c(G)$. Since $\psi(f) = \int_G f d\nu = a\int_G f d\mu = \int_G f d\mu' = \phi(f)$ for every $f\in C_c(G)$, by Theorem [Theorem 19](#th:riesz){reference-type="ref" reference="th:riesz"} one concludes that $\mu'=\nu$, that is, $\mu' = a \mu$. ◻ Lastly, as in the classical proof, the following proposition ensures existence and uniqueness of the right Haar measure too, since right Haar measures and left Haar measures are in a one-to-one correspondence. **Proposition 26**. *Let $G$ be a topological group, and let $\mu$ be a positive Borel measure on $G$. For every $A\in\mathcal{B}(G)$ define $\mu'(A)\coloneqq \mu(A^{-1})$. Then $\mu$ is a left (resp. right) Haar measure on $G$ if and only if $\mu'$ is a right (resp. left) Haar measure on $G$.* *Proof.* The proof follows as in [@bi:cohn Proposition 9.3.1]: if $\mu$ is a left Haar measure, $\mu'$ is a positive Borel measure which is nonzero and right-translation-invariant; it is also a Radon measure since the inversion map is a homeomorphism, and therefore $K\subseteq G$ is closed and compact if and only if $K^{-1}\subseteq G$ is closed and compact. ◻ **Example 27**. Let $V$ be the abelian topological group considered in Example [Example 8](#ex:V){reference-type="ref" reference="ex:V"}.
One immediately checks that the Haar measure on $V$ is given by: $$\mu \colon \mathcal{B}(V) \to [0,+\infty], \quad E \mapsto |\pi_1(E)|_1$$ where $\pi_1$ is the projection of $\mathbbm{R}^2$ on the first factor, and $|\cdot|_1$ is the Lebesgue measure on $\mathbbm{R}$. **Example 28**. Let $X$ be a Hausdorff locally compact topological group with Haar measure $\mu$, and let $G$ be a non-trivial group. Consider the product of groups $X \times G$, endowed with the topology induced by the topology on $X$, that is, $\{U \times G \mid U \subseteq X \text{ open} \}$: one immediately checks that this is a locally compact topological group. As in Example [Example 27](#ex:V.2){reference-type="ref" reference="ex:V.2"}, the Haar measure on $X \times G$ is given by: $$\overline{\mu} \colon \mathcal{B}(X \times G) \to [0,+\infty], \quad E \mapsto \mu(\pi_X(E))$$ where $\pi_X$ is the projection on the first factor. **Remark 6**. In the case of a locally compact Hausdorff group, the generalised statements, namely Theorems [Theorem 22](#th:existence){reference-type="ref" reference="th:existence"} and [Theorem 25](#th:uniqueness){reference-type="ref" reference="th:uniqueness"}, coincide exactly with their usual versions, since in a Hausdorff space the collection of closed compact sets coincides with the collection of compact sets. **Remark 7**. By the previous remark, the use of the terminology *local finiteness* (on closed compact sets) is justified as in the Hausdorff case. # The construction via quotients #### As mentioned in the introduction, some sources in the literature suggest a way to generalise the Haar measure to non-Hausdorff groups through quotients. Using the definitions given in Section 3, it is now easy to prove the consistency of that construction. See, e.g., Dikranjan [@bi:dikr] for the properties of topological groups that will be used. #### Let $G$ be a topological group.
Since the set $\overline{\{e_G\}}$ is a normal subgroup of $G$ [@bi:dikr Lemma 3.4.2], let $G' \coloneqq G / \overline{\{e_G\}}$ be endowed with the quotient topology, and consider the quotient map $\pi \colon G \to G'$. We recall some known results. 1. [\[ls:pi_open\]]{#ls:pi_open label="ls:pi_open"} The map $\pi$ is open [@bi:dikr Lemma 3.6.1]. 2. [\[ls:G\'\_T2\]]{#ls:G'_T2 label="ls:G'_T2"} The topological group $G'$ is Hausdorff [@bi:dikr Lemma 4.1.8(1)]. 3. [\[ls:pi_closed\]]{#ls:pi_closed label="ls:pi_closed"} The map $\pi$ is closed [@bi:dikr Lemma 7.2.3]. 4. [\[ls:G\'\_loc_comp\]]{#ls:G'_loc_comp label="ls:G'_loc_comp"} If $G$ is locally compact, then $G'$ is locally compact too [@bi:dikr Lemma 7.2.5(a)]. 5. [\[ls:pi_compacts\]]{#ls:pi_compacts label="ls:pi_compacts"} If $G$ is locally compact and the set $C\subseteq G'$ is compact, then there exists a closed compact set $K\subseteq G$ such that $\pi(K)=C$: it follows by [@bi:dikr Lemma 7.2.5(b)], where the provided compact set $K$ is also closed. Moreover, one immediately sees that the following statements are true. Here, $\tau_G$ and $\tau_{G'}$ denote the topologies on $G$ and $G'$ respectively. 1. [\[ls:G\'\_topology\]]{#ls:G'_topology label="ls:G'_topology"} The equality $\tau_{G'}=\{\pi(U) \mid U\in\tau_G\}$ holds, because the map $\pi$ is continuous and open by [\[ls:pi_open\]](#ls:pi_open){reference-type="ref" reference="ls:pi_open"}. 2. [\[ls:G\'\_borel\]]{#ls:G'_borel label="ls:G'_borel"} The equality $\mathcal{B}(G')=\{\pi(E) \mid E\in\mathcal{B}(G)\}$ holds. Indeed, the collection $\{\pi(E) \mid E\in\mathcal{B}(G)\}$ is a $\sigma$-algebra on $G'$; the operations of union and complementation commute with $\pi$ because $\pi$ is surjective, and therefore $\{\pi(E) \mid E\in\mathcal{B}(G)\}$ coincides with the $\sigma$-algebra generated by the collection $\{\pi(U) \mid U\in\tau_G\}$: the claim follows from [\[ls:G\'\_topology\]](#ls:G'_topology){reference-type="ref" reference="ls:G'_topology"}. 3.
[\[ls:open_regularity\]]{#ls:open_regularity label="ls:open_regularity"} If $U \in \tau_G$ and $x \in U$, then $\overline{\{x\}} \subseteq U$, since $G$ is regular. 4. [\[ls:borel_regularity\]]{#ls:borel_regularity label="ls:borel_regularity"} If $E \in \mathcal{B}(G)$ and $x \in E$, then $\overline{\{x\}} \subseteq E$. Indeed, one can consider the collection $\{ X\subseteq G \mid x\in X \implies \overline{\{x\}} \subseteq X\}$: it is a $\sigma$-algebra and contains $\tau_G$ by [\[ls:open_regularity\]](#ls:open_regularity){reference-type="ref" reference="ls:open_regularity"}, hence it contains $\mathcal{B}(G)$. 5. [\[ls:borel_disjoint\]]{#ls:borel_disjoint label="ls:borel_disjoint"} If $E_1,E_2\in \mathcal{B}(G)$ are disjoint, then $\pi(E_1),\pi(E_2)\in \mathcal{B}(G')$ are disjoint, by [\[ls:borel_regularity\]](#ls:borel_regularity){reference-type="ref" reference="ls:borel_regularity"} and recalling that $x \overline{ \{e_G\} } = \overline{\{x\}}$ for $x\in G$. 6. [\[ls:borel_inclusion\]]{#ls:borel_inclusion label="ls:borel_inclusion"} If $E_1,E_2\in \mathcal{B}(G)$ are such that $\pi(E_1)\subseteq \pi(E_2)$, then $E_1\subseteq E_2$. Indeed, given $x\in E_1$, since $\pi(x)\in \pi(E_1)\subseteq \pi(E_2)$, there exists $y\in E_2$ such that $x\in \overline{\{y\}}$, and then $x\in E_2$ by [\[ls:borel_regularity\]](#ls:borel_regularity){reference-type="ref" reference="ls:borel_regularity"}. 7. [\[ls:borel_equality\]]{#ls:borel_equality label="ls:borel_equality"} If $E_1,E_2\in \mathcal{B}(G)$ are such that $\pi(E_1) = \pi(E_2)$, then $E_1 = E_2$, as a direct consequence of [\[ls:borel_inclusion\]](#ls:borel_inclusion){reference-type="ref" reference="ls:borel_inclusion"}. **Remark 8**.
If $G$ is a locally compact group, $G'$ is a locally compact Hausdorff group by statements [\[ls:G\'\_T2\]](#ls:G'_T2){reference-type="ref" reference="ls:G'_T2"} and [\[ls:G\'\_loc_comp\]](#ls:G'_loc_comp){reference-type="ref" reference="ls:G'_loc_comp"}, and then it has a Haar measure which is unique up to a multiplicative constant, by Theorems [Theorem 6](#th:existence_H){reference-type="ref" reference="th:existence_H"} and [Theorem 7](#th:uniqueness_H){reference-type="ref" reference="th:uniqueness_H"}. Now, using the extended Definition [Definition 12](#def:Haar){reference-type="ref" reference="def:Haar"}, one can check that the pullback measure built on a locally compact group through $\pi$ is a Haar measure: this is a constructive proof of its existence. **Proposition 29**. *Let $G$ be a locally compact topological group; consider the topological group $G' \coloneqq G / \overline{\{e_G\}}$ and let $\pi \colon G \to G'$ be the quotient map. Let $\mu$ be a left (resp. right) Haar measure on $G'$. For every $A\in \mathcal{B}(G)$ define $\overline{\mu}(A)\coloneqq \mu(\pi(A))$. Then $\overline{\mu}$ is a left (resp. right) Haar measure on $G$.* *Proof.* The function $\overline{\mu}$ is well defined by statement [\[ls:G\'\_borel\]](#ls:G'_borel){reference-type="ref" reference="ls:G'_borel"}, and it is clearly positive and nonzero. By statement [\[ls:borel_disjoint\]](#ls:borel_disjoint){reference-type="ref" reference="ls:borel_disjoint"}, $\sigma$-additivity follows. Translation-invariance is given by the fact that $\pi$ is a group homomorphism and $\mu$ is translation-invariant. Local finiteness holds because $\pi$ is continuous, so images of closed compact sets are compact.
Given $A\in \mathcal{B}(G)$, outer regularity is satisfied: $$\begin{split} \overline{\mu}(A) & = \mu(\pi(A)) = \inf\{\mu(U) \mid U \supseteq \pi(A), U\in\tau_{G'}\} \\ & = \inf\{\mu(\pi(V)) \mid \pi(V) \supseteq \pi(A), V\in\tau_G\} \\ & = \inf\{\mu(\pi(V)) \mid V \supseteq A, V\in\tau_G\} \\ & = \inf\{ \overline{\mu}(V) \mid V \supseteq A, V\in\tau_G\} \end{split}$$ where the third equality follows by statement [\[ls:G\'\_topology\]](#ls:G'_topology){reference-type="ref" reference="ls:G'_topology"}, and the fourth one by statement [\[ls:borel_inclusion\]](#ls:borel_inclusion){reference-type="ref" reference="ls:borel_inclusion"}. Given $U\in\tau_G$, inner regularity is satisfied: $$\begin{split} \overline{\mu}(U) & = \mu(\pi(U)) = \sup\{\mu(K) \mid K \subseteq \pi(U) \text{, $K$ compact in } G'\} \\ & = \sup\{\mu(\pi(C)) \mid \pi(C) \subseteq \pi(U) \text{, $C$ closed and compact in } G \} \\ & = \sup\{ \mu(\pi(C)) \mid C \subseteq U \text{, $C$ closed and compact in } G \} \\ & = \sup\{ \overline{\mu}(C) \mid C \subseteq U \text{, $C$ closed and compact in } G \} \end{split}$$ where the third equality follows by statements [\[ls:pi_closed\]](#ls:pi_closed){reference-type="ref" reference="ls:pi_closed"} and [\[ls:pi_compacts\]](#ls:pi_compacts){reference-type="ref" reference="ls:pi_compacts"}, and the fourth one follows by statement [\[ls:borel_inclusion\]](#ls:borel_inclusion){reference-type="ref" reference="ls:borel_inclusion"}. ◻ **Example 30**. Consider the topological group $V$ as in Example [Example 8](#ex:V){reference-type="ref" reference="ex:V"}. It is immediate to observe that $V / \overline{\{0\}} = \mathbbm{R}$ as topological groups. In fact, the Haar measure described in Example [Example 27](#ex:V.2){reference-type="ref" reference="ex:V.2"} coincides with the measure constructed via the quotient map as in Proposition [Proposition 29](#th:quotient){reference-type="ref" reference="th:quotient"}.
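The pullback construction can also be seen at work in a discrete toy case (our own illustration, not taken from the text above): take $G=\mathbbm{Z}_2\times\mathbbm{Z}_3$ with $\mathbbm{Z}_2$ discrete and $\mathbbm{Z}_3$ indiscrete, so that the open sets are exactly those of the form $A\times\mathbbm{Z}_3$, whence $\overline{\{e_G\}}=\{0\}\times\mathbbm{Z}_3$ and $G'\cong\mathbbm{Z}_2$ with counting measure as Haar measure. A minimal sketch verifying that the pullback measure is translation-invariant on every Borel set:

```python
from itertools import product, combinations

# Toy non-Hausdorff compact group: G = Z2 x Z3, Z2 discrete, Z3 indiscrete.
Z2, Z3 = (0, 1), (0, 1, 2)

def borel_sets():
    # The Borel sets of G are exactly the saturated sets A x Z3 with A ⊆ Z2.
    for r in range(len(Z2) + 1):
        for A in combinations(Z2, r):
            yield frozenset((a, b) for a in A for b in Z3)

def pi(E):
    # Quotient map G -> G' = G / cl{e} ≅ Z2, applied element-wise to a set.
    return frozenset(a for (a, _) in E)

nu = len                      # Haar (counting) measure on the quotient G' ≅ Z2
mu = lambda E: nu(pi(E))      # pullback measure mu(E) = nu(pi(E))

def translate(E, g):
    ga, gb = g
    return frozenset(((ga + a) % 2, (gb + b) % 3) for (a, b) in E)

# The pullback is invariant under translation by any g in G.
for E in borel_sets():
    for g in product(Z2, Z3):
        assert mu(translate(E, g)) == mu(E)
print("pullback Haar measure on G is translation-invariant")
```

Since every Borel set is saturated, translating it only moves the $\mathbbm{Z}_2$-coordinate of its fibers, which counting measure cannot distinguish.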
We conclude the paper by giving an alternative proof of Theorem [Theorem 25](#th:uniqueness){reference-type="ref" reference="th:uniqueness"}, which is based on the construction via quotients. #### Given a locally compact topological group $G$, consider the topological group $G' \coloneqq G / \overline{\{e_G\}}$ and let $\pi \colon G \to G'$ be the quotient map. Denote by $\mathfrak{M}_H(G)$ and $\mathfrak{M}_H(G')$ the sets of left Haar measures on $G$ and $G'$ respectively. We consider the following functions: $$\pi_* \colon \mathfrak{M}_H(G) \to \mathfrak{M}_H(G'), \quad \mu \mapsto \pi_*\mu$$ $$\pi^* \colon \mathfrak{M}_H(G') \to \mathfrak{M}_H(G), \quad \nu \mapsto \pi^*\nu$$ defined by: $$\pi_*\mu(F) = \mu(\pi^{-1}(F)) \quad \text{for every } F\in\mathcal{B}(G')$$ $$\pi^*\nu(E) = \nu(\pi(E)) \quad \text{for every } E\in\mathcal{B}(G) \, \text{.}$$ **Remark 9**. For every $\nu \in \mathfrak{M}_H(G')$, the image $\pi^*\nu$ is the *pullback* measure $\overline{\nu}$ introduced in Proposition [Proposition 29](#th:quotient){reference-type="ref" reference="th:quotient"}. For every $\mu \in \mathfrak{M}_H(G)$, the image $\pi_*\mu$ is the well-known *pushforward* measure induced by the continuous map $\pi$: by an argument similar to the one given in Proposition [Proposition 29](#th:quotient){reference-type="ref" reference="th:quotient"}, one immediately sees that $\pi_*\mu$, which is in general just a measure, is actually a left Haar measure. **Proposition 31**. *Let $G$ be a locally compact topological group; consider the topological group $G' \coloneqq G / \overline{\{e_G\}}$ and let $\pi \colon G \to G'$ be the quotient map.
Then the functions $\pi_*$ and $\pi^*$ define a one-to-one correspondence between $\mathfrak{M}_H(G)$ and $\mathfrak{M}_H(G')$.* *Proof.* For every $\mu \in \mathfrak{M}_H(G)$ and for every $E\in\mathcal{B}(G)$ one directly computes that $\pi^*\pi_*\mu(E) = \pi_*\mu(\pi(E)) = \mu(\pi^{-1}(\pi(E))) = \mu(E)$, since $\pi^{-1}(\pi(E)) = E$ by statement [\[ls:borel_equality\]](#ls:borel_equality){reference-type="ref" reference="ls:borel_equality"}. For every $\nu \in \mathfrak{M}_H(G')$ and for every $F\in\mathcal{B}(G')$ one has $\pi_*\pi^*\nu(F) = \pi^*\nu(\pi^{-1}(F)) = \nu(\pi(\pi^{-1}(F))) = \nu(F)$. Therefore, the functions $\pi_*$ and $\pi^*$ are inverses of each other. ◻ Theorem [Theorem 25](#th:uniqueness){reference-type="ref" reference="th:uniqueness"} can be proved as a corollary of Proposition [Proposition 31](#th:quotient_uniq){reference-type="ref" reference="th:quotient_uniq"}. *Alternative proof of Theorem [Theorem 25](#th:uniqueness){reference-type="ref" reference="th:uniqueness"}.* Given $\mu, \mu' \in \mathfrak{M}_H(G)$, by Theorem [Theorem 7](#th:uniqueness_H){reference-type="ref" reference="th:uniqueness_H"} there exists a constant $a>0$ such that $\pi_*\mu'=a\pi_*\mu$. For every $E\in \mathcal{B}(G)$, one has $\mu' (E)= \pi^*\pi_*\mu' (E) = \pi^*(a\pi_*\mu)(E) = a\,\pi_*\mu(\pi(E)) = a\,\mu(\pi^{-1}(\pi(E))) = a \mu (E)$, using statement [\[ls:borel_equality\]](#ls:borel_equality){reference-type="ref" reference="ls:borel_equality"} once more. Therefore $\mu' = a \mu$. ◻ ------------------------------------------------------------------------ #### Acknowledgements I am grateful to Professor Arvid Perego (Università di Genova) for his encouragement and guidance, and I sincerely thank Professor Giovanni Alberti (Università di Genova) and Professor Filippo De Mari (Università di Genova) for their precious advice. ------------------------------------------------------------------------
{ "id": "2309.07644", "title": "Haar measure for non-Hausdorff locally compact groups", "authors": "Lisa Valentini", "categories": "math.GR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We derive an asymptotic expansion for the critical percolation density of the random connection model as the dimension of the encapsulating space tends to infinity. We calculate rigorously the first expansion terms for the Gilbert disk model, the hyper-cubic model, the Gaussian connection kernel, and a coordinate-wise Cauchy kernel. author: - "Matthew Dickson[^1]" - "Markus Heydenreich[^2]" bibliography: - bibliography.bib title: Expansion of the Critical Intensity for the Random Connection Model --- *Mathematics Subject Classification (2020).* 60K35, 82B43, 60G55. *Keywords and phrases.* Continuum percolation, Random connection model, Critical threshold, Asymptotic series, Lace expansion. # Introduction ## Motivation We study percolative systems, and address the question: *What is the value of the critical percolation threshold?* A specific answer is only possible in very exceptional cases. We are pursuing a different route instead, namely an asymptotic expression of the critical threshold as a function of the dimension $d$ of the encapsulating space in the $d\to\infty$ limit. This has been solved for percolation on the hypercubic lattice $\mathbb Z^d$: for *bond* percolation on the hypercubic lattice it is known that $$\label{eq:pc_bond} p_c^\text{bond}(\mathbb Z^d)=\frac1{2d}+\frac{1}{\left(2d\right)^2} +\frac{7}{2}\frac{1}{\left(2d\right)^3} +\mathcal{O}\left(\frac 1{d^4}\right)\quad \text{as $d\to\infty$,}$$ cf. [@HarSla95; @HofSla05], whereas for hypercubic *site* percolation on $\mathbb Z^d$ it is $$\label{eq:pc_site} p_c^\text{site}(\mathbb Z^d)= \frac1{2d}+\frac{5}{2}\frac{1}{\left(2d\right)^2} +\frac{31}{4}\frac{1}{\left(2d\right)^3} +\mathcal{O}\left(\frac 1{d^4}\right)\quad \text{as $d\to\infty$,}$$ cf. [@HeyMat20]. Mertens and Moore [@mertens2018series] use involved numerical enumeration to identify a few more terms (without a rigorous bound on the error).
In the present work, we address a corresponding question for continuum percolation. Interestingly, our analysis establishes an exponentially decaying series rather than an algebraic decay as on lattices. We shall discuss this point further in the discussion section. ## The Model To this end, we are considering the random connection model (henceforth abbreviated RCM), a spatial random graph model whose points are given as a homogeneous Poisson process $\eta$ on $\mathbb R^d$ with intensity measure $\lambda\operatorname{Leb}$, and we refer to $\lambda>0$ as the *intensity* of the model. Each pair of vertices $x,y$ in the support of $\eta$ is connected independently with probability $\varphi(x-y)$, where $$\varphi\colon \mathbb R^d\to[0,1]$$ is integrable and symmetric (i.e. $\varphi(x)=\varphi(-x)$ for all $x\in\mathbb R^d$). The classical example is the *Gilbert disk model* [@Gil61] or hyper-sphere random connection model with $$\varphi(x) = \mathds 1_{\left\{\abs*{x}< R\right\}}$$ for some $R>0$: two vertices are connected whenever their (Euclidean) distance is at most $R$. We are interested in the percolation phase transition of the RCM, that is, the critical intensity $\lambda_c$ given as the infimum of those values of $\lambda$ such that the resulting random graph has an infinite connected component: $$\lambda_c=\inf\{\lambda \mid \text{the RCM with intensity $\lambda$ has an infinite component}\}.$$ See [@HeyHofLasMat19 Section 2] for a more formal definition. Penrose [@Pen91] uses the 'method of generations' to show that for all dimensions $d\geq 1$ the critical intensity is strictly positive. In particular, he derives the lower bound $$q_\varphi\lambda_c \geq 1,$$ where $q_\varphi= \int \varphi(x)\mathrm{d}x$. He also uses a coarse-graining argument to show that for $d\geq 2$ the critical intensity is finite if $q_\varphi>0$.
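For concreteness (our illustration), in the Gilbert disk model $q_\varphi$ is the volume of the Euclidean ball of radius $R$ in $\mathbb R^d$, so Penrose's bound reads $\lambda_c \geq \Gamma(d/2+1)/(\pi^{d/2}R^d)$; for fixed $R$ this lower bound grows super-exponentially in $d$. A minimal numerical sketch:

```python
import math

def q_gilbert(d, R=1.0):
    # q_phi = ∫ phi(x) dx for the Gilbert kernel phi = 1{|x| < R}: the volume
    # of the d-dimensional Euclidean ball of radius R.
    return math.pi ** (d / 2) * R ** d / math.gamma(d / 2 + 1)

# Penrose's lower bound lambda_c >= 1 / q_phi, evaluated for a few dimensions.
for d in (2, 3, 10):
    print(d, 1.0 / q_gilbert(d))
```

This also explains why asymptotic statements are naturally phrased in terms of the normalised quantity $q_\varphi\lambda_c$ rather than $\lambda_c$ itself.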
Meester, Penrose and Sarkar [@MeePenSar97] prove the $0^{th}$-order asymptotics of $\lambda_c$ for radial non-increasing $\varphi$ (with uniform bounds on the variance of the jumps taken by a random walk with jump intensity proportional to $\varphi$). Specifically, they prove that for such models $$q_\varphi\lambda_c \to 1$$ as $d\to\infty$. In the present work we extend their result significantly by identifying several additional terms. ## Results We shall now make a couple of assumptions before formulating our main result. Throughout this paper we will denote the convolution of two non-negative functions $f,g\colon \mathbb R^d\to \mathbb R_{\geq 0}$ by $$f\star g(x) := \int f(x-u)g(u)\mathrm{d}u,$$ and write $f^{\star n}(x)$ for the convolution of $n$ copies of $f$. In particular, $f^{\star 1}\equiv f$. We will also denote the Fourier transform of an integrable function $f\colon \mathbb R^d\to\mathbb R$ by $$\widehat{f}(k) := \int\text{e}^{ik\cdot x}f(x)\mathrm{d}x,$$ for all $k\in\mathbb R^d$. Our first set of assumptions is exactly the one that allows us to use the results relating to lace expansion arguments. [\[Assumption\]]{#Assumption label="Assumption"} We require $\varphi$ to satisfy the following two properties: 1. [\[Assump:DecayBound\]]{#Assump:DecayBound label="Assump:DecayBound"} There exists a function $g\colon\mathbb N \to \mathbb R_{\geq 0}$ with the following three properties. Firstly, that $g(d)\to 0$ as $d\to\infty$. Secondly, that for $m\geq 3$, the $m$-fold convolution $\varphi^{\star m}$ of $\varphi$ satisfies $$\frac{1}{q_\varphi^{m-1}}\sup_{x\in\mathbb R^d}\varphi^{\star m}(x) \leq g(d).$$ Thirdly, that the set on which the rescaled two-fold convolution exceeds $g(d)$ has small Lebesgue volume: $$\label{eqn:2-convolutionAssumption} \frac{1}{q_\varphi}\abs*{\left\{x\in\mathbb R^d\colon \frac{1}{q_\varphi} \varphi\star\varphi(x) > g\left(d\right)\right\}} \leq g(d).$$ 2.
[\[Assump:QuadraticBound\]]{#Assump:QuadraticBound label="Assump:QuadraticBound"} There are constants $b,c_1,c_2>0$ (independent of $d$) such that the Fourier transform $\widehat\varphi$ satisfies $$\inf_{\abs*{k}\leq b}\frac{1}{\abs*{k}^2}\left(1-\frac{1}{q_\varphi}\widehat\varphi(k)\right) > c_1,\qquad \inf_{\abs*{k}> b}\left(1-\frac{1}{q_\varphi}\widehat\varphi(k)\right) > c_2.$$ *Remark 1*. We believe that the condition [\[eqn:2-convolutionAssumption\]](#eqn:2-convolutionAssumption){reference-type="eqref" reference="eqn:2-convolutionAssumption"} is not necessary for our results. In [@DicHey2022triangle] it was required by the lace expansion argument to provide the skeleton of an argument that would work for "spread out" models in dimensions $d=7,8$ (in addition to $d\geq 9$). However, we are concerned here with taking $d\to\infty$, and so it should not be required. $\diamond$ It will sometimes be more natural to work with a parameter $\beta(d)$ that is related to $g(d)$. Following [@DicHey2022triangle], it is defined by $$\label{eqn:betafromgfunction} \beta(d) := \begin{cases} g(d)^{\frac{1}{4} -\frac{3}{2d}}d^{-\frac{3}{2}} &\colon \lim_{d\to\infty}g(d)\rho^{-d}\Gamma\left(\frac{d}{2}+1\right)^2 = 0 \qquad\forall\rho>0,\\ g(d)^\frac{1}{4}&\colon \text{otherwise}. \end{cases}$$ Note that Assumption [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} below implies that $\beta(d)=g(d)^{\frac{1}{4}}$. Our second set of assumptions allows us to keep suitable control of asymptotic properties. Let us define $h\colon \mathbb N\to\mathbb R_{\geq 0}$ and $N\colon \mathbb N\to\mathbb N$ by $$\begin{aligned} h(d) := & \frac{1}{q_\varphi^5}\varphi^{\star 6}\left(\mathbf{0}\right) + \frac{1}{q_\varphi^4}\int \varphi(x)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\mathrm{d}x + \frac{1}{q_\varphi^4}\int \left(\varphi^{\star 2}(x)\right)^3\mathrm{d}x,\\ N(d) := & \ceil*{\frac{\log h(d)}{\log \beta(d)}}.
\end{aligned}$$ [\[AssumptionBeta\]]{#AssumptionBeta label="AssumptionBeta"} We require that: 1. [\[Assump:ExponentialDecay\]]{#Assump:ExponentialDecay label="Assump:ExponentialDecay"} There exists $\rho>0$ such that $\liminf_{d\to\infty}\rho^{-d}q_\varphi^{-5}\varphi^{\star 6}\left(\mathbf{0}\right)>0$. 2. [\[Assump:NumberBound\]]{#Assump:NumberBound label="Assump:NumberBound"} $\limsup_{d\to\infty}N(d) < \infty$. Note that Assumption [\[Assump:NumberBound\]](#Assump:NumberBound){reference-type="ref" reference="Assump:NumberBound"} is in practice a lower bound on $h(d)$ because $\beta(d)<1$ for large $d$. *Remark 2*. The factor $\varphi^{\star 6}\left(\mathbf{0}\right)$ appears in Assumption [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} only because $\varphi^{\star 6}\left(\mathbf{0}\right)$ is the precision at which we stop our expansion. If we wished to proceed up to the $\varphi^{\star m}\left(\mathbf{0}\right)$ term, then we would need a version of  [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} with $\varphi^{\star m}\left(\mathbf{0}\right)$ replacing $\varphi^{\star 6}\left(\mathbf{0}\right)$. Assumption [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} appears in our proof via Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"}. A close inspection of the proof would reveal that it is a slightly stronger condition than is needed there. However, the version presented is more concise and is sufficient for the models we consider here. The requirement that [\[Assump:NumberBound\]](#Assump:NumberBound){reference-type="ref" reference="Assump:NumberBound"} holds becomes apparent through Proposition [Proposition 23](#prop:PiBounds){reference-type="ref" reference="prop:PiBounds"}.
We take great care in describing the asymptotics of the first few terms in the expansion of $\widehat\Pi_{\lambda_c}(0)$ because they dictate the behaviour of $\lambda_c$ that we are interested in. On the other hand, we can utilise pre-existing bounds for the tail of the expansion to show that it can be neglected in our calculations. If we fix a cut-off $N\geq 1$ in this expansion, then these pre-existing bounds are of order $\beta^N$. Assumption [\[Assump:NumberBound\]](#Assump:NumberBound){reference-type="ref" reference="Assump:NumberBound"} ensures that we can choose a fixed $N$ such that this tail error is smaller than the error terms arising elsewhere in the expansion. If this were not the case, we could try to let $N\to\infty$ as $d\to\infty$, but then we would be summing a divergent number of "small" terms prior to the cut-off, over which we would not have good control. $\diamond$ **Definition 3**. In addition to using the convolution operation to combine two non-negative functions $f,g\colon \mathbb R^d\to \mathbb R_{\geq 0}$, we will also find it convenient to use $f\cdot g$ to denote the pointwise multiplication of $f$ and $g$: $$f\cdot g(x) := f(x)g(x).$$ Furthermore, for $n_1,n_2,n_3\geq 1$, we will denote $$\varphi^{\star n_1\star n_2 \cdot n_3}\left(\mathbf{0}\right) := \varphi^{\star n_1}\star\left(\varphi^{\star n_2}\cdot\varphi^{\star n_3}\right)\left(\mathbf{0}\right) = \int \varphi^{\star n_1}(x)\varphi^{\star n_2}(x)\varphi^{\star n_3}(x)\mathrm{d}x.$$ This expression shows that $\varphi^{\star n_1\star n_2 \cdot n_3}\left(\mathbf{0}\right)$ is invariant under the permutation of $n_1$, $n_2$, and $n_3$. **Theorem 4**. *Suppose Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"} are satisfied.
Then as $d\to\infty$, $$\begin{gathered} \label{eq:ExpansionMain} \lambda_c = \frac{1}{q_\varphi} + \frac{1}{q_\varphi^3}\varphi^{\star 3}\left(\mathbf{0}\right) + \frac{3}{2}\frac{1}{q_\varphi^4}\varphi^{\star 4}\left(\mathbf{0}\right) + 2\frac{1}{q_\varphi^5}\varphi^{\star 5}\left(\mathbf{0}\right) - \frac{5}{2}\frac{1}{q_\varphi^4}\varphi^{\star 1\star 2\cdot2}\left(\mathbf{0}\right) + 2\frac{1}{q_\varphi^5}\left(\varphi^{\star 3}\left(\mathbf{0}\right)\right)^2 \\+ \mathcal{O}\left(\frac{1}{q_\varphi^6}\varphi^{\star 3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right) + \frac{1}{q_\varphi^7}\left(\varphi^{\star 3}\left(\mathbf{0}\right)\right)^3 + \frac{1}{q_\varphi^6}\varphi^{\star 6}\left(\mathbf{0}\right) + \frac{1}{q_\varphi^5}\varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right) + \frac{1}{q_\varphi^5}\varphi^{\star 1 \star 2\cdot 3}\left(\mathbf{0}\right)\right). \end{gathered}$$* #### Remarks on Graphical Notation. It will often be convenient and clearer to represent objects such as $\varphi^{\star n}\left(\mathbf{0}\right)$ and $\varphi^{\star n_1 \star n_2 \cdot n_3}\left(\mathbf{0}\right)$ pictorially. By expanding out the convolutions in these expressions it is clear that they are integrals over some finite set of points, with functions associated to pairs of these points (and sometimes the origin). We are therefore able to represent these integrals pictorially as rooted graphs. In these we represent the spatial origin $\mathbf{0}\in\mathbb R^d$ with the root vertex $\raisebox{1pt}{\tikz{\draw (0,0) circle (2pt)}}$, and an integration variable $x\in\mathbb R^d$ with the vertex $\raisebox{1pt}{\tikz{\filldraw (0,0) circle (2pt)}}$. If we can interpret a $\varphi$ function as "connecting" two $\mathbb R^d$-values, then we draw a line $\raisebox{2pt}{\tikz{\draw (0,0) -- (1,0);}}$ between the vertices corresponding to the two $\mathbb R^d$-values.
For example, this allows us to graphically represent objects such as $$\begin{aligned} \varphi^{\star 3}\left(\mathbf{0}\right) &= \int\varphi(x)\varphi(y)\varphi(x-y)\mathrm{d}x\mathrm{d}y = \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} },\\ \varphi^{\star 1\star 2\cdot 2}\left(\mathbf{0}\right) &= \int\varphi(x)\varphi(y)\varphi(z)\varphi(x-z)\varphi(z-y)\mathrm{d}x\mathrm{d}y\mathrm{d}z = \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \draw (0,0) -- (2,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }. \end{aligned}$$ Observe that convolution is a commutative binary operation. This means, for example, that in various diagrams the position of the root vertex $\raisebox{1pt}{\tikz{\draw (0,0) circle (2pt)}}$ is not important. The most common example of this in our arguments will relate to $\varphi^{\star 1 \star 2 \cdot 2}\left(\mathbf{0}\right)$.
By first recalling that $\varphi^{\star n_1\star n_2 \cdot n_3}\left(\mathbf{0}\right)$ is invariant under the permutation of $n_1$, $n_2$ and $n_3$, and then using the commutativity property of convolution, we find $$\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \draw (0,0) -- (2,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }= \varphi\star\left(\varphi^{\star 2}\cdot\varphi^{\star 2}\right)\left(\mathbf{0}\right) = \varphi^{\star 2}\star\left(\varphi\cdot\varphi^{\star 2}\right)\left(\mathbf{0}\right) = \varphi\star\left(\varphi\cdot\varphi^{\star 2}\right)\star\varphi\left(\mathbf{0}\right) = \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }.$$ We will tend to prefer $\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }$ over $\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \draw (0,0) -- (2,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }$, as we find the former slightly easier to read. This graphical notation allows us to write the expansion of Theorem [Theorem 4](#thm:CriticalIntensityExpansion){reference-type="ref" reference="thm:CriticalIntensityExpansion"} in a form that is much easier to read. 
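The diagram values themselves are easy to check numerically in one dimension. The sketch below (our illustration, not part of the paper's arguments) uses grid convolutions for the kernel $\varphi=\mathds 1_{\{\abs{x}\leq 1/2\}}$ (so $q_\varphi=1$) and recovers the one-dimensional factors $3/4$, $2/3$, $115/192$ and $7/12$ that reappear, raised to the $d$-th power, in the hyper-cube expansion of Corollary [Corollary 7](#cor:cube){reference-type="ref" reference="cor:cube"} below.

```python
import numpy as np

# Symmetric grid on [-3, 3]; fine enough that the O(dx) error from the
# indicator's jumps stays well below the tolerance we need.
n, half = 6001, 3.0
x = np.linspace(-half, half, n)
dx = x[1] - x[0]
phi = (np.abs(x) <= 0.5).astype(float)       # 1-d kernel, q_phi = 1

# Repeated convolutions on the same grid ('same' keeps the centring,
# since the grid is symmetric about 0 and n is odd).
phi2 = np.convolve(phi, phi, mode="same") * dx    # phi^{*2}
phi3 = np.convolve(phi2, phi, mode="same") * dx   # phi^{*3}

i0 = n // 2                                   # index of x = 0
triangle = phi3[i0]                           # phi^{*3}(0)     -> 3/4
square = (phi2 * phi2).sum() * dx             # phi^{*4}(0)     -> 2/3
pentagon = (phi2 * phi3).sum() * dx           # phi^{*5}(0)     -> 115/192
diag_122 = (phi * phi2 * phi2).sum() * dx     # phi^{*1*2.2}(0) -> 7/12

print(triangle, square, pentagon, diag_122)
```

The expected values in the comments follow from elementary calculus with the triangle function $\varphi^{\star 2}(x)=(1-\abs{x})_+$; the grid computation reproduces them to three decimal places.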
By a rescaling argument (see [@HeyHofLasMat19 Section 5.1] for the details), we may assume without loss of generality that $$q_\varphi=\int \varphi(x)\mathrm{d}x =1,$$ and we shall silently make this assumption in our analysis. Under this scaling choice, the expansion [\[eq:ExpansionMain\]](#eq:ExpansionMain){reference-type="eqref" reference="eq:ExpansionMain"} is represented pictorially by $$\begin{gathered} \label{eq:expansionmainpic} \lambda_c = 1+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \frac{3}{2}\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0.6) circle (3pt); \filldraw[fill=black] (2,-0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \frac{5}{2}\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+2\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)^2 \\+ \mathcal{O}\left(\raisebox{-8pt}{ 
\begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\times\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)^3 + \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0.6) circle (3pt); \filldraw[fill=black] (2,-0.6) circle (3pt); \filldraw[fill=black] (3,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0.6) circle (3pt); \filldraw[fill=black] (2,-0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=black] (1,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right).\end{gathered}$$ For some calculations, we will 
want to integrate a $\tau_\lambda$ function instead of a $\varphi$ ($\tau_\lambda$ is defined below at [\[eqn:two-point_function\]](#eqn:two-point_function){reference-type="eqref" reference="eqn:two-point_function"}). We will also sometimes find it convenient to write the sum of two integrals as one integral by using $1-\varphi$ to associate two $\mathbb R^d$-values. When we can interpret a $\tau_\lambda$ function to be "connecting" two $\mathbb R^d$-values, then we draw a green line $\raisebox{2pt}{\tikz{\draw[green] (0,0) -- (1,0);}}$ between the vertices corresponding to the two $\mathbb R^d$-values, and similarly we draw a red line $\raisebox{2pt}{\tikz{\draw[red] (0,0) -- (1,0);}}$ when a $1-\varphi$ connects two values. As examples, we can use these to represent the following two integrals: $$\begin{aligned} \int\varphi(y)\tau_\lambda(x)\tau_\lambda(x-y)\mathrm{d}x\mathrm{d}y & = \raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1,-0.6); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); %\draw (1,0) circle (0pt) node[rotate=90,above]{$\sim$}; \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} },\\ \int\varphi(x)\varphi(y)\varphi(z-x)\varphi(z-y)\left(1-\varphi(z)\right)\mathrm{d}x\mathrm{d}y\mathrm{d}z& = \raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0.5,0) -- (1,0.6) -- (1.5,0) -- (1,-0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }.\end{aligned}$$ ## Applications {#sec:applications} The result of Theorem [Theorem 4](#thm:CriticalIntensityExpansion){reference-type="ref" reference="thm:CriticalIntensityExpansion"} is very general in that the Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and 
[\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"} are satisfied by a wide variety of models. We now apply it to a number of examples. ### The Gilbert disk model (Hyper-Sphere RCM) For $R>0$, the Hyper-Sphere RCM is defined by $$\varphi(x) = \mathds 1_{\left\{\abs*{x}< R\right\}}.$$ This is the classical model for Boolean percolation studied by Gilbert in 1961 [@Gil61]. Writing $B\left(x;a,b\right)=\int^x_0t^{a-1}\left(1-t\right)^{b-1}\mathrm{d}t$ for the *incomplete Beta function* and $\Gamma\left(x\right)=\int^\infty_0t^{x-1}\text{e}^{-t}\mathrm{d}t$ for the *Gamma function*, we obtain the following expansion of the critical intensity: **Corollary 5**. *For the Hyper-Sphere RCM with radius $R=R(d)>0$, $$\frac{\pi^{\frac{d}{2}}}{\Gamma\left(\frac{d}{2}+1\right)}R^{d}\lambda_c =1 + \frac{3}{2\sqrt{\pi}}\frac{\Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}B\left(\frac{3}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right) + \mathcal{O}\left(\frac{1}{\sqrt{d}}\left(\frac{16}{27}\right)^\frac{d}{2}\right).$$* *Remark 6*. Here we only expand as far as the $\varphi^{\star 3}\left(\mathbf{0}\right)$ term, and our error is the asymptotic size of the $\varphi^{\star 4}\left(\mathbf{0}\right)$ term. This is because these are the only terms for which we have rigorous closed-form expressions for their asymptotic size. Conjecture [Conjecture 46](#conj:HyperDiscModel){reference-type="ref" reference="conj:HyperDiscModel"} gives the expected terms in the expansion based on numerical estimates of their asymptotic behaviour. $\diamond$ ![Left: The Hyper-Sphere RCM -- two Poisson points are connected whenever the circles of radius $R/2$ overlap.
Right: The Hyper-Cube RCM -- two Poisson points are connected whenever the cubes of side length $L/2$ overlap.](Boolean-expansion-sphere.pdf "fig:"){#fig:enter-label width=".3\\textwidth"} ![Left: The Hyper-Sphere RCM -- two Poisson points are connected whenever the circles of radius $R/2$ overlap. Right: The Hyper-Cube RCM -- two Poisson points are connected whenever the cubes of side length $L/2$ overlap.](Boolean-expansion-square.pdf "fig:"){#fig:enter-label width=".3\\textwidth"} ### The Hyper-Cube RCM While the hyper-sphere model is a good example showing that the numerical integration of the various convolutions of the adjacency function in [\[eq:ExpansionMain\]](#eq:ExpansionMain){reference-type="eqref" reference="eq:ExpansionMain"} can get fairly involved, the calculations simplify significantly for the Hyper-Cube RCM given by $$\varphi(x) = \prod^d_{j=1}\mathds 1_{\left\{\abs*{x_j}\leq L/2\right\}},$$ where $x=\left(x_1,\ldots,x_d\right)\in\mathbb R^d$ and $L>0$ is a parameter. **Corollary 7**. *For the Hyper-Cube RCM with side length $L=L(d)>0$, as $d\to\infty$ $$L^d\lambda_c = 1 + \left(\frac{3}{4}\right)^d + \frac{3}{2}\left(\frac{2}{3}\right)^d + 2\left(\frac{115}{192}\right)^d - \frac{5}{2}\left(\frac{7}{12}\right)^d + 2\left(\frac{9}{16}\right)^d + \mathcal{O}\left(\left(\frac{11}{20}\right)^d\right).$$* ### The Gaussian RCM For $\sigma^2>0$ and $0<\mathcal{A}\leq \left(2\pi\sigma^2\right)^\frac{d}{2}$, the Gaussian RCM is defined by $$\varphi\left(x\right) = \frac{\mathcal{A}}{\left(2\pi \sigma^2\right)^\frac{d}{2}}\exp\left(-\frac{1}{2\sigma^2}\abs*{x}^2\right).$$ The parameter $\sigma$ sets the length scale, while the factor $\mathcal{A}$ ensures $\mathcal{A}= \int\varphi(x)\mathrm{d}x$. The upper bound on $\mathcal{A}$ is only there to ensure $\varphi$ is $\left[0,1\right]$-valued. Then we have the following expansion: **Corollary 8**.
*For the Gaussian RCM with $\mathcal{A}=\mathcal{A}(d)>0$ and $\sigma=\sigma(d)>0$ such that $\liminf_{d\to\infty}\varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$, as $d\to\infty$ $$\mathcal{A}\lambda_c = 1 + \mathcal{A}\left(6\pi\sigma^2\right)^{-\frac{d}{2}} + \frac{3}{2}\mathcal{A}\left(8\pi\sigma^2\right)^{-\frac{d}{2}} + 2\mathcal{A}\left(10\pi\sigma^2\right)^{-\frac{d}{2}} + \mathcal{O}\left(\mathcal{A}\left(12\pi\sigma^2\right)^{-\frac{d}{2}}\right).$$ In particular, if $\varphi\left(\mathbf{0}\right)=\mathcal{A}\left(2\pi\sigma^2\right)^{-\frac{d}{2}} = 1$, then $$\mathcal{A}\lambda_c = 1 + 3^{-\frac{d}{2}} + \frac{3}{2}\times4^{-\frac{d}{2}} +2\times5^{-\frac{d}{2}} + \mathcal{O}\left(6^{-\frac{d}{2}}\right).$$* ### The Coordinate-wise Cauchy RCM In a similar flavour to the previous example, let $\gamma>0$ and $0<\mathcal{A}\leq \left(\gamma\pi\right)^d$, and define the Coordinate-Cauchy RCM through $$\varphi(x) = \frac{\mathcal{A}}{\left(\gamma\pi\right)^d}\prod^d_{j=1}\frac{\gamma^2}{\gamma^2+x^2_j},$$ where $x=\left(x_1,\ldots,x_d\right)\in\mathbb R^d$. As for the Gaussian RCM, $\gamma$ is a length-scale parameter, the factor $\mathcal{A}$ ensures $\mathcal{A}= \int\varphi(x)\mathrm{d}x$, and the upper bound on $\mathcal{A}$ is only there to ensure $\varphi$ is $\left[0,1\right]$-valued. Then the expansion of the critical intensity is as follows: **Corollary 9**.
*For the Coordinate-Cauchy RCM with $\mathcal{A}=\mathcal{A}(d)>0$ and $\gamma=\gamma(d)>0$ such that $\liminf_{d\to\infty}\varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$, as $d\to\infty$ $$\mathcal{A}\lambda_c = 1 + \mathcal{A}\left(3\gamma\pi\right)^{-d} + \frac{3}{2}\mathcal{A}\left(4\gamma\pi\right)^{-d} + 2\mathcal{A}\left(5\gamma\pi\right)^{-d} + \mathcal{O}\left(\mathcal{A}\left(6\gamma\pi\right)^{-d}\right).$$ In particular, if $\varphi\left(\mathbf{0}\right) = \mathcal{A}\left(\gamma\pi\right)^{-d} = 1$, then $$\mathcal{A}\lambda_c = 1 + 3^{-d} + \frac{3}{2}\times 4^{-d} + 2\times 5^{-d} + \mathcal{O}\left(6^{-d}\right).$$* *Remark 10*. The condition on $\varphi\left(\mathbf{0}\right)$ appearing in Corollaries [Corollary 8](#cor:Gaussian){reference-type="ref" reference="cor:Gaussian"} and [Corollary 9](#cor:Cauchy){reference-type="ref" reference="cor:Cauchy"} is to ensure that [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} is satisfied. If this were not imposed, then the terms in our expansion could be so small that extra error terms arising from the volume of small balls of fixed radius could become significant and dominate. $\diamond$ ## Discussion Our results reveal a remarkable difference between continuum percolation models and lattice percolation: while the expansions in [\[eq:pc_bond\]](#eq:pc_bond){reference-type="eqref" reference="eq:pc_bond"} and [\[eq:pc_site\]](#eq:pc_site){reference-type="eqref" reference="eq:pc_site"} decay algebraically in $d$, the expansions in Corollaries [Corollary 5](#cor:sphere){reference-type="ref" reference="cor:sphere"}--[Corollary 9](#cor:Cauchy){reference-type="ref" reference="cor:Cauchy"} decay exponentially in $d$. Interestingly, the expansion in [\[eq:ExpansionMain\]](#eq:ExpansionMain){reference-type="eqref" reference="eq:ExpansionMain"} resp.
[\[eq:expansionmainpic\]](#eq:expansionmainpic){reference-type="eqref" reference="eq:expansionmainpic"} is indeed algebraic, and it is the calculation of the convolutions of $\varphi$ that transforms it into an exponentially decaying series. This is reflected in the observation that the hypercubic lattice is a "sparse" graph in high-dimensional Euclidean space. Indeed, the analysis in [@SatKamSak20] suggests that we do have exponential decay on lattices that use the space more efficiently, such as the body-centred cubic lattice. Torquato [@Tor12] has provided an expansion for $\lambda_c$ using exact calculations. Interestingly, for the hyper-cubic Boolean model, we seem to get a slightly different expansion, as the $\varphi^{\star 5}\left(\mathbf{0}\right)$ term is absent from their expression. It is clear that the value of $\lambda_c$ is highly sensitive to the choice of the connectivity function $\varphi$. As a result, we get fairly different expansions for the four models in Section [1.4](#sec:applications){reference-type="ref" reference="sec:applications"}. Jonasson [@Jon01] has shown that for Boolean models, $\lambda_c$ is maximised for the hypersphere model, and minimised for a certain triangular shape. Our analysis is based on the lace expansion for the (plain) random connection model derived in [@HeyHofLasMat19]. A key quantity in that expansion is the *lace expansion coefficient* $\Pi_{\lambda_c}(x)$ (defined in Definition [Definition 21](#def:Pi){reference-type="ref" reference="def:Pi"} below), see [\[eqn:OZE\]](#eqn:OZE){reference-type="eqref" reference="eqn:OZE"}. The main insight is that $\int\Pi_{\lambda_c}(x)\mathrm{d}x$ encodes $\lambda_c$, see [\[eqn:critical\]](#eqn:critical){reference-type="eqref" reference="eqn:critical"}, and we therefore need to investigate this integral as the dimension $d$ increases.
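The exponential decay discussed above can be made concrete for the hyper-cube model: that kernel factorizes over coordinates, so each diagram in Corollary [Corollary 7](#cor:cube){reference-type="ref" reference="cor:cube"} is the $d$-th power of a one-dimensional value, and the whole correction to $L^d\lambda_c=1$ shrinks exponentially. A small numeric illustration of ours (with the normalisation $L=1$):

```python
def hypercube_correction(d):
    """Explicit correction terms of Corollary 7 (with L = 1), i.e. the
    expansion of L^d * lambda_c - 1 up to the O((11/20)^d) error."""
    return ((3 / 4) ** d + 1.5 * (2 / 3) ** d + 2 * (115 / 192) ** d
            - 2.5 * (7 / 12) ** d + 2 * (9 / 16) ** d)

for d in (5, 10, 20, 40):
    print(d, hypercube_correction(d))
```

For large $d$ the leading term $(3/4)^d$ dominates, in sharp contrast to the $1/(2d)$-power corrections in the lattice expansions [\[eq:pc_bond\]](#eq:pc_bond){reference-type="eqref" reference="eq:pc_bond"} and [\[eq:pc_site\]](#eq:pc_site){reference-type="eqref" reference="eq:pc_site"}.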
While the original lace expansion only needs (fairly crude) upper bounds on the different terms that constitute $\Pi_{\lambda_c}(x)$, in the present work we need to improve and refine these bounds to get asymptotically matching upper and lower bounds. This is the content of Section [3](#sec:Pibd){reference-type="ref" reference="sec:Pibd"}. In our main expansion in [\[eq:ExpansionMain\]](#eq:ExpansionMain){reference-type="eqref" reference="eq:ExpansionMain"}, there are various terms appearing on the right-hand side. Apart from the constant term ${q_\varphi}^{-1}$, the main contribution is given by the single loop diagram ${q_\varphi^{-3}}\varphi^{\star 3}\left(\mathbf{0}\right)$. However, the order of the further terms may depend on the particular form of $\varphi$; compare, e.g., Corollaries [Corollary 7](#cor:cube){reference-type="ref" reference="cor:cube"} and [Corollary 8](#cor:Gaussian){reference-type="ref" reference="cor:Gaussian"}. It is an open problem to extend this analysis to the marked random connection model, for which the lace expansion has recently been derived in [@DicHey2022triangle]. # Preliminaries Recall that $\eta$ denotes the homogeneous Poisson point process on $\mathbb R^d$ that gives the vertex set of the RCM. We then let $\xi$ denote the vertex set and the edge set together, that is, the whole random graph. We also want to consider the augmented configurations $\eta^x$ and $\xi^x$. Here $\eta^x$ is produced by introducing an extra vertex at $x\in\mathbb R^d$, and $\xi^x$ then takes this augmented vertex set, copies the old edges, and independently forms edges between the old vertices and the new vertex. This can also be extended to get $\eta^{x,y}$ and $\xi^{x,y}$ for $x,y\in\mathbb R^d$, or for any finite number of augmenting vertices. For the full details of this construction see [@HeyHofLasMat19 Section 2.2]. Recall that $\varphi(x)$ returns the probability that a vertex at the origin and a vertex at $x$ share an edge, i.e. are adjacent.
Given two vertices $x,y\in\mathbb R^d$, we say that $x$ and $y$ are *connected in $\xi^{x,y}$*, or $x \longleftrightarrow y\textrm { in } \xi^{x,y}$, if there exists a finite sequence of distinct vertices $x=u_0,u_1,\ldots,u_k,u_{k+1}=y\in\eta^{x,y}$ (with $k\in\mathbb N_0$) such that $u_i\sim u_{i+1}$ for all $0\leq i\leq k$. We can then define the *two-point* (or *pair-connectedness*) function $\tau_\lambda\colon \mathbb R^d\to \left[0,1\right]$ by $$\label{eqn:two-point_function} \tau_\lambda(x) := \mathbb P_\lambda\left(\mathbf{0} \longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right).$$ Now we introduce two preliminary results that we will use on many occasions in this paper: Mecke's (multivariate) equation, and the BK inequality. #### Mecke's Equation Since our vertex set $\eta$ is a Poisson point process, we will often rely on Mecke's equation to describe the expected number of certain configurations in our RCM via integral expressions. For a discussion of this result see [@LasPen17 Chapter 4]. Given $m\in\mathbb N$ and a measurable non-negative function $f=f(\xi,\vec{x})$, the Mecke equation for $\xi$ states that $$\mathbb E_\lambda \left[ \sum_{\vec x \in \eta^{(m)}} f(\xi, \vec{x})\right] = \lambda^m \int \mathbb E_\lambda\left[ f\left(\xi^{x_1, \ldots, x_m}, \vec x\right)\right] \mathrm{d}\vec{x}, \label{eq:prelim:mecke_n}$$ where $\vec x=(x_1,\ldots,x_m)$ and $\eta^{(m)}=\{(x_1,\ldots,x_m)\colon x_i \in \eta, x_i \neq x_j \text{ for } i \neq j\}$. #### BK Inequality We give an overview here, but the full details can be found in [@HeyHofLasMat19]. Given two increasing events $E_1$ and $E_2$, we define $E_1\circ E_2$ to be the event that $E_1$ and $E_2$ both occur, but do so on disjoint subsets of the vertices $\eta$.
Note that in the case of $E_1=\left\{x \longleftrightarrow y\textrm { in } \xi^{x,y}\right\}$ and $E_2=\left\{u \longleftrightarrow v\textrm { in } \xi^{u,v}\right\}$, $E_1\circ E_2$ can still occur if $x\in\left\{u,v\right\}$ or $y\in\left\{u,v\right\}$; only the intermediate vertices need to be disjoint. The BK inequality then gives us a simple upper bound on the probability of this disjoint occurrence. **Theorem 11** (BK inequality). *Let $E_1$ and $E_2$ be two increasing events that live on some bounded measurable subset of $\mathbb R^d$. Then $$\mathbb P_\lambda\left(E_1\circ E_2\right) \leq \mathbb P_\lambda\left(E_1\right)\mathbb P_\lambda\left(E_2\right).$$* *Proof.* See [@HeyHofLasMat19 Theorem 2.1]. ◻ **Definition 12**. We make use of a bootstrap function also used in [@HeyHofLasMat19] (itself adapted from an argument in [@HeyHofSak08]). Recall that we are using the scaling choice that $\widehat\varphi(0) = q_\varphi= 1$. For $\lambda\geq 0$ and $k,l\in\mathbb R^d$, we define $$\begin{aligned} {\mu_\lambda}&:= 1- \frac{1}{\widehat\tau_\lambda(0)}\\ \widehat G_{{\mu_\lambda}}(k) &:= \frac{1}{1-{\mu_\lambda}\widehat\varphi(k)}. \end{aligned}$$ Note that $\widehat G_{{\mu_\lambda}}$ can be interpreted as the Fourier transform of the Green's function of a random walk with transition density ${\mu_\lambda}\varphi$. We then define $f \colon \mathbb R_{\geq 0}\to \mathbb R_{\geq 0}$ by $$f(\lambda):= \sup_{k\in\mathbb R^d}\frac{\abs*{\widehat\tau_\lambda(k)}}{\widehat G_{{\mu_\lambda}}(k)}.$$ **Proposition 13**. *Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds. Then for $d$ sufficiently large, $f(\lambda)\leq 2$ for all $\lambda\in\left[0,\lambda_c\right)$.* *Proof.* This is implied by [@HeyHofLasMat19 Proposition 5.10]. ◻ **Lemma 14**.
*Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds and that there exists $\rho>0$ such that $\liminf_{d\to\infty}\rho^{-d}\varphi^{\star m}\left(\mathbf{0}\right) >0$. Let $d$ be sufficiently large, $m\geq 1$ be even, $s\geq 1$, and $\lambda\in\left[0,\lambda_c\right]$. Then there exists $K_{s}<\infty$ independent of $d$, $m$, and $\lambda$ such that $$\sup_{x\in\mathbb R^d}\varphi^{\star m}\star\tau_\lambda^{\star s}\left(x\right) \leq K_{s}\varphi^{\star m}\left(\mathbf{0}\right).$$* This is a key lemma in our proof as it allows us to identify leading order decay for convolutions of the adjacency function and the two-point function. *Proof.* First let us consider $\lambda<\lambda_c$. We slightly adapt [@HeyHofLasMat19 Lemma 5.4] for our purposes. From the Fourier inverse formula, $$\sup_{x\in\mathbb R^d}\varphi^{\star m}\star\tau_\lambda^{\star s}\left(x\right) \leq \sup_{x\in\mathbb R^d}\int \text{e}^{-ik\cdot x}\widehat\varphi(k)^m\widehat\tau_\lambda(k)^s\frac{\mathrm{d}k}{\left(2\pi\right)^d} \leq \int \widehat\varphi(k)^m\abs*{\widehat\tau_\lambda(k)}^s\frac{\mathrm{d}k}{\left(2\pi\right)^d}.$$ We can omit $\abs*{\cdot}$ from around $\widehat\varphi(k)^m$ because $\widehat\varphi(k)$ is real and $m$ is even. 
From the definition of the bootstrap function $f(\lambda)$, we can bound $\abs*{\widehat\tau_\lambda(k)}$ with $f(\lambda)\widehat G_{{\mu_\lambda}}(k)$ and then use ${\mu_\lambda}\leq 1$ to get $$\label{eqn:integralbound} \sup_{x\in\mathbb R^d}\varphi^{\star m}\star\tau_\lambda^{\star s}\left(x\right) \leq f(\lambda)^{s}\int \frac{\widehat\varphi(k)^m}{\left(1-{\mu_\lambda}\widehat\varphi(k)\right)^s}\frac{\mathrm{d}k}{\left(2\pi\right)^d} \leq f(\lambda)^{s}\int \frac{\widehat\varphi(k)^m}{\left(1-\widehat\varphi(k)\right)^s}\frac{\mathrm{d}k}{\left(2\pi\right)^d}.$$ Recall the parameter $b>0$ arising from Assumption [\[Assump:QuadraticBound\]](#Assump:QuadraticBound){reference-type="ref" reference="Assump:QuadraticBound"}. We partition the integral on the right hand side of [\[eqn:integralbound\]](#eqn:integralbound){reference-type="eqref" reference="eqn:integralbound"} into one integral over $\abs*{k}\leq b$, and one integral over $\abs*{k} > b$. For $\abs*{k}\leq b$, [\[Assump:QuadraticBound\]](#Assump:QuadraticBound){reference-type="ref" reference="Assump:QuadraticBound"} tells us that there exists $c_1>0$ such that $\left(1-\widehat\varphi(k)\right)^{-1}\leq c_1^{-1}\abs*{k}^{-2}$, and therefore $$\int_{\abs*{k}\leq b} \frac{\widehat\varphi(k)^m}{\left(1-\widehat\varphi(k)\right)^s}\frac{\mathrm{d}k}{\left(2\pi\right)^d} \leq \frac{1}{c_1^s}\int_{\abs*{k}\leq b} \frac{1}{\abs*{k}^{2s}} \frac{\mathrm{d}k}{\left(2\pi\right)^d} = \frac{1}{c_1^s}\frac{\mathfrak{S}_{d-1}}{d-2s}\frac{b^{d-2s}}{\left(2\pi\right)^{d}},$$ where $\mathfrak{S}_{d-1} = d\pi^{\frac{d}{2}}\Gamma\left(1 + \frac{d}{2}\right)^{-1}$ is the surface area of the unit sphere in $\mathbb R^d$ (and we have used $d>2s$). An application of Stirling's formula tells us that for all $\rho>0$ we have $\frac{\mathfrak{S}_{d-1}}{d-2s}\frac{b^{d-2s}}{\left(2\pi\right)^{d}} \leq \rho^d$ for sufficiently large $d$. Therefore this contribution is negligible for our purposes.
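To see concretely how fast this small-$\abs*{k}$ contribution vanishes, the prefactor can be evaluated numerically; the choices $b=1$, $s=1$, and $\rho=1/2$ below are illustrative only:

```python
import math

b, s, rho = 1.0, 1.0, 0.5   # illustrative parameter choices

def log_small_k_term(d):
    # log of (S_{d-1} / (d - 2s)) * b^(d - 2s) / (2*pi)^d, with the
    # surface area S_{d-1} = d * pi^(d/2) / Gamma(1 + d/2)
    return (math.log(d) + 0.5 * d * math.log(math.pi)
            - math.lgamma(1.0 + 0.5 * d)
            - math.log(d - 2.0 * s)
            + (d - 2.0 * s) * math.log(b)
            - d * math.log(2.0 * math.pi))

# the term is dominated by rho^d for every moderately large d
dominated = all(log_small_k_term(d) < d * math.log(rho)
                for d in range(10, 200))
```

With these values the term is already below $\rho^d$ at $d=10$, and the gap widens superexponentially in $d$.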
For $\abs*{k}> b$, [\[Assump:QuadraticBound\]](#Assump:QuadraticBound){reference-type="ref" reference="Assump:QuadraticBound"} tells us that $\left(1-\widehat\varphi(k)\right)^{-1}\leq c_2^{-1}$, and therefore $$\int_{\abs*{k}> b} \frac{\widehat\varphi(k)^m}{\left(1-\widehat\varphi(k)\right)^s}\frac{\mathrm{d}k}{\left(2\pi\right)^d} \leq \frac{1}{c_2^s}\int \widehat\varphi(k)^m \frac{\mathrm{d}k}{\left(2\pi\right)^d} = \frac{1}{c_2^s}\varphi^{\star m}\left(\mathbf{0}\right).$$ In conjunction with [\[eqn:integralbound\]](#eqn:integralbound){reference-type="eqref" reference="eqn:integralbound"} and Proposition [Proposition 13](#prop:bootstrapbound){reference-type="ref" reference="prop:bootstrapbound"}, this proves the result for $\lambda<\lambda_c$. To extend the result to $\lambda\leq \lambda_c$, we note that $\tau_\lambda(x)$ is monotone increasing in $\lambda$ for all $x\in\mathbb R^d$. Monotone convergence and the independence of the bound on $\lambda$ then prove the full result. ◻ **Definition 15**. For $n\in\mathbb N$ and $x,y\in \mathbb R^d$, $x$ is connected to $y$ in $\xi^{x,y}$ by a path of length exactly $n$ if there exists a sequence of distinct vertices $x=u_0,u_1,\ldots,u_{n-1},u_n=y\in\eta^{x,y}$ such that $u_i\sim u_{i+1}$ for $0\leq i \leq n-1$. We then define $\left\{x \xleftrightarrow{\,\,=n\,\,} y\textrm { in } \xi^{x,y}\right\}$ as the event that $x$ is connected to $y$ in $\xi^{x,y}$ by a path of length exactly $n$, but no path of length $<n$. For $\lambda>0$ we denote $$\varphi^{[n]}(x) := \mathbb P_\lambda\left(\mathbf{0} \xleftrightarrow{\,\,= n\,\,} x\textrm { in } \xi^{\mathbf{0},x}\right).$$ In particular, $\varphi^{[1]}\equiv \varphi$.
Additionally define for finite $A\subset\mathbb R^d$, $$\varphi^{[n]}_{\langle A \rangle}\left(x,y\right) := \mathbb P_\lambda\left(x \xleftrightarrow{\,\,= n\,\,} y\textrm { in } \xi^{x,y}_{\langle A \rangle}\right).$$ That is, $\varphi^{[n]}_{\langle A \rangle}\left(x,y\right)$ is the probability that there exists a path of length $n$ connecting $x$ and $y$ in $\xi^{x,y}$ whose interior vertices are not adjacent to any vertex in $A$, and that no path of length $<n$ connects $x$ and $y$ in $\xi^{x,y}$. A more formal definition of $\xi^{x,y}_{\langle A \rangle}$ can be found below in Definition [Definition 20](#defn:ThinningsPivots){reference-type="ref" reference="defn:ThinningsPivots"}. **Lemma 16**. *Let $x,y\in\mathbb R^d$ be distinct, $\lambda>0$, and $A\subset \mathbb R^d$ be a finite set of points. Then for $n\geq 1$, $$\begin{aligned} \varphi^{[1]}_{\langle A \rangle}(x,y) &= \varphi(x-y) \label{eqn:Exconnf_1_AvoidA}\\ \varphi^{[n+1]}_{\langle A \rangle}(x,y) &= \left(1-\varphi(x-y)\right)\left(1 - \exp\left(-\lambda\int \varphi(v-y)\varphi^{[n]}_{\langle A\cup \left\{y\right\} \rangle}(x,v)\prod_{z\in A}\left(1-\varphi(v-z)\right)\mathrm{d}v\right)\right)\label{eqn:Exconnf_n+1_AvoidA}\\ \varphi^{[n+1]}(x) &= \left(1-\varphi(x)\right)\left(1 - \exp\left(-\lambda\int \varphi(v)\varphi^{[n]}_{\langle \mathbf{0} \rangle}(x,v)\mathrm{d}v\right)\right).
\label{eqn:Exconnf_n+1} \end{aligned}$$ In particular, $$\begin{aligned} \varphi^{[2]}(x) &= \left(1-\varphi(x)\right)\left(1-\exp\left(-\lambda\varphi^{\star 2}(x)\right)\right)\label{eqn:ExConnf2}\\ \varphi^{[3]}(x) &= \left(1-\varphi(x)\right)\left(1 - \exp\left(-\lambda\int \varphi(v)\left(1-\varphi(x-v)\right)\right.\right.\nonumber\\ &\hspace{3cm}\left.\left.\times\left(1 - \exp\left(-\lambda\int \varphi(w-v)\varphi(x-w)\left(1-\varphi(w)\right)\mathrm{d}w\right)\right)\mathrm{d}v\right)\right).\label{eqn:ExConnf3}\end{aligned}$$* *Proof.* To show [\[eqn:Exconnf_1\_AvoidA\]](#eqn:Exconnf_1_AvoidA){reference-type="eqref" reference="eqn:Exconnf_1_AvoidA"}, observe that if $x \sim y\textrm { in } \xi^{x,y}$ then there are no interior points on this path to be adjacent to $A$. Therefore $\varphi^{[1]}_{\langle A \rangle}(x,y) = \varphi^{[1]}(x-y) = \varphi(x-y)$. For [\[eqn:Exconnf_n+1_AvoidA\]](#eqn:Exconnf_n+1_AvoidA){reference-type="eqref" reference="eqn:Exconnf_n+1_AvoidA"}, we first note that the existence of a single edge connecting $x$ and $y$ is independent of everything else. Since we cannot have this edge, we have a factor of $1-\varphi(x-y)$ outside everything else. Let us now consider the neighbours of $y$ in $\eta$. The event $\left\{x \xleftrightarrow{\,\,= n+1\,\,} y\textrm { in } \xi^{x,y}_{\langle A \rangle}\right\}$ occurs exactly when $x\not\sim y$ and there exists a neighbour $v$ of $y$ that is not adjacent to any point in $A$ and has a path of length $n$ from $v$ to $x$ that does not use any vertex adjacent to $A$ or adjacent to $y$ (otherwise a "shortcut" would exist). The existence of such a path is exactly the event $\left\{x \xleftrightarrow{\,\,=n\,\,} v\textrm { in } \xi^{x,v}_{\langle A\cup\left\{y\right\} \rangle}\right\}$. 
Since $\eta$ is a Poisson point process, the number of such vertices is a Poisson distributed random variable with mean given by (via Mecke's equation) $$\begin{gathered} \mathbb E_\lambda\left[\#\left\{v\in\eta\colon v\sim y, x \xleftrightarrow{\,\,= n\,\,} v\textrm { in } \xi^{x,v}_{\langle A\cup\left\{y\right\} \rangle}, v\not\sim z \text{ for all }z\in A\right\}\right] \\= \lambda\int \varphi(v-y)\varphi^{[n]}_{\langle A\cup \left\{y\right\} \rangle}(x,v)\prod_{z\in A}\left(1-\varphi(v-z)\right)\mathrm{d}v. \end{gathered}$$ If $X$ is a Poisson random variable with mean $M$, then $\mathbb{P}\left(X\geq 1\right) = 1 - \text{e}^{-M}$. Since the number $\#\left\{v\in\eta\colon v\sim y, x \xleftrightarrow{\,\,= n\,\,} v\textrm { in } \xi^{x,v}_{\langle A\cup\left\{y\right\} \rangle}, v\not\sim z \text{ for all }z\in A\right\}$ is a Poisson random variable, this returns the required second factor in [\[eqn:Exconnf_n+1_AvoidA\]](#eqn:Exconnf_n+1_AvoidA){reference-type="eqref" reference="eqn:Exconnf_n+1_AvoidA"}. To get [\[eqn:Exconnf_n+1\]](#eqn:Exconnf_n+1){reference-type="eqref" reference="eqn:Exconnf_n+1"}, use [\[eqn:Exconnf_n+1_AvoidA\]](#eqn:Exconnf_n+1_AvoidA){reference-type="eqref" reference="eqn:Exconnf_n+1_AvoidA"} with $A=\emptyset$ and $y=\mathbf{0}$. To calculate $\varphi^{[2]}$ and $\varphi^{[3]}$, we iteratively use [\[eqn:Exconnf_1\_AvoidA\]](#eqn:Exconnf_1_AvoidA){reference-type="eqref" reference="eqn:Exconnf_1_AvoidA"}, [\[eqn:Exconnf_n+1_AvoidA\]](#eqn:Exconnf_n+1_AvoidA){reference-type="eqref" reference="eqn:Exconnf_n+1_AvoidA"}, and [\[eqn:Exconnf_n+1\]](#eqn:Exconnf_n+1){reference-type="eqref" reference="eqn:Exconnf_n+1"}.
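Before carrying out the iteration, the $\varphi^{[2]}$ formula [\[eqn:ExConnf2\]](#eqn:ExConnf2){reference-type="eqref" reference="eqn:ExConnf2"} can be checked by simulation in a toy special case, chosen only for this check: $d=2$, $\lambda=1$, and the deterministic connection function $\varphi=\mathds 1_{\left\{\abs*{x}\leq 1\right\}}$ (the Gilbert graph), for which $\varphi^{\star 2}(x)$ is the area of the lens where two unit discs overlap. For $\abs*{x}>1$ the factor $1-\varphi(x)$ equals $1$ and no shorter path exists, so $\varphi^{[2]}(x)$ is simply the probability of at least one common neighbour. A minimal Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0
x = np.array([1.5, 0.0])   # |x| > 1, so 0 and x are never directly adjacent

# closed form: phi^{*2}(x) is the area of the lens where unit discs
# centred at 0 and at x overlap
dist = np.linalg.norm(x)
lens = 2.0 * np.arccos(dist / 2.0) - (dist / 2.0) * np.sqrt(4.0 - dist ** 2)
closed_form = 1.0 - np.exp(-lam * lens)   # the (1 - phi(x)) factor equals 1

# Monte Carlo: a path of length exactly 2 exists iff some Poisson point
# lies within distance 1 of both endpoints (i.e. inside the lens)
lo = np.array([-1.0, -1.0])
hi = np.array([2.5, 1.0])          # a box containing the lens
box_area = float(np.prod(hi - lo))
n_runs = 20000
hits = 0
for _ in range(n_runs):
    n = rng.poisson(lam * box_area)
    pts = lo + rng.uniform(size=(n, 2)) * (hi - lo)
    in_lens = (np.linalg.norm(pts, axis=1) <= 1.0) \
        & (np.linalg.norm(pts - x, axis=1) <= 1.0)
    hits += bool(in_lens.any())
estimate = hits / n_runs
```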
For $\varphi^{[2]}$ we have $$\begin{aligned} \varphi^{[2]}(x) &= \left(1-\varphi(x)\right)\left(1 - \exp\left(-\lambda\int \varphi(v)\varphi^{[1]}_{\langle \mathbf{0} \rangle}(x,v)\mathrm{d}v\right)\right)\nonumber\\ &= \left(1-\varphi(x)\right)\left(1 - \exp\left(-\lambda\int \varphi(v)\varphi(x-v)\mathrm{d}v\right)\right) \nonumber\\ &= \left(1-\varphi(x)\right)\left(1-\exp\left(-\lambda\varphi^{\star 2}(x)\right)\right). \end{aligned}$$ Similarly, we find $$\varphi^{[2]}_{\langle \mathbf{0} \rangle}(x,v) = \left(1-\varphi(x-v)\right)\left(1 - \exp\left(-\lambda\int \varphi(w-v)\varphi(x-w)\left(1-\varphi(w)\right)\mathrm{d}w\right)\right),$$ and therefore $$\begin{aligned} \varphi^{[3]}(x) &= \left(1-\varphi(x)\right)\left(1 - \exp\left(-\lambda\int \varphi(v)\varphi^{[2]}_{\langle \mathbf{0} \rangle}(x,v)\mathrm{d}v\right)\right)\nonumber\\ &= \left(1-\varphi(x)\right)\left(1 - \exp\left(-\lambda\int \varphi(v)\left(1-\varphi(x-v)\right)\right.\right.\nonumber\\ &\hspace{4cm}\left.\left.\times\left(1 - \exp\left(-\lambda\int \varphi(w-v)\varphi(x-w)\left(1-\varphi(w)\right)\mathrm{d}w\right)\right)\mathrm{d}v\right)\right). \end{aligned}$$ ◻ **Lemma 17**. *For $n\geq 1$, $\lambda>0$, and $x\in\mathbb R^d$, $$\varphi^{[n]}(x) \leq \lambda^{n-1}\varphi^{\star n}(x).$$* *Proof.* The expression $\varphi^{[n]}(x)$ gives the probability that there exists at least one path from $\mathbf{0}$ to $x$ of length $n$, and no shorter paths. We can bound this by the probability that there exists at least one path from $\mathbf{0}$ to $x$ of length $n$. Then by Markov's inequality this is bounded by the expected number of paths from $\mathbf{0}$ to $x$ of length $n$. By Mecke's equation this is given by $\lambda^{n-1}\varphi^{\star n}(x)$. ◻ **Lemma 18**. 
*For $m,n\geq 1$, $\lambda>0$, and $x\in\mathbb R^d$, $$\sum^{m}_{i=1}\varphi^{[i]}(x) \leq \tau_\lambda(x) \leq \sum^{n}_{i=1}\varphi^{[i]}(x) + \lambda^n\varphi^{\star (n+1)}(x) + \lambda^{n+1}\varphi^{\star (n+1)}\star\tau_\lambda(x).$$* *Proof.* First note that the events $\left\{\left\{\mathbf{0} \xleftrightarrow{\,\,=i\,\,} x\textrm { in } \xi^{\mathbf{0},x}\right\}\right\}_{i\in\mathbb N}$ are pairwise disjoint. They are also all contained in the event $\left\{\mathbf{0} \longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right\}$. Therefore $\sum^{m}_{i=1}\varphi^{[i]}(x) \leq \tau_\lambda(x)$. For the upper bound, the above comments imply that $\tau_\lambda(x)-\sum^{n+1}_{i=1}\varphi^{[i]}(x)$ is the probability that $\mathbf{0}$ and $x$ are connected in $\xi^{\mathbf{0},x}$ by some path of length $n+2$ or longer. We can then use Markov's inequality to bound this probability by the expected number of paths of length $n+2$ or longer. By using Mecke's equation, we get $$\begin{aligned} \tau_\lambda(x)-\sum^{n+1}_{i=1}\varphi^{[i]}(x) &\leq \mathbb E_{\lambda}\left[\sum_{y\in\eta}\mathds 1_{\left\{\mathbf{0} \xleftrightarrow{\,\,=n+1\,\,} y\textrm { in } \xi^{\mathbf{0}}\right\}\circ\left\{y \longleftrightarrow x\textrm { in } \xi^x\right\}}\right]\nonumber\\ &= \lambda \int \mathbb P_\lambda\left(\left\{\mathbf{0} \xleftrightarrow{\,\,=n+1\,\,} y\textrm { in } \xi^{\mathbf{0},y}\right\}\circ\left\{y \longleftrightarrow x\textrm { in } \xi^{y,x}\right\}\right)\mathrm{d}y\nonumber\\ &\leq \lambda\int\varphi^{[n+1]}(y)\tau_\lambda(x-y)\mathrm{d}y. \end{aligned}$$ In this last inequality we have used the BK inequality to bound the probability of the vertex-disjoint occurrence.
We therefore have $$\tau_\lambda(x) \leq \sum^{n}_{i=1}\varphi^{[i]}(x) + \varphi^{[n+1]}(x) + \lambda\varphi^{[n+1]}\star\tau_\lambda(x).$$ Bounding $\varphi^{[n+1]}(x) \leq \lambda^{n}\varphi^{\star (n+1)}(x)$ (as shown in Lemma [Lemma 17](#lem:ExconnfBound){reference-type="ref" reference="lem:ExconnfBound"}) in these last two terms then gives the result. ◻ **Lemma 19**. *If $n_1,n_2,n_3\geq 2$, then $$\int \varphi^{\star n_1}(x)\varphi^{\star n_2}(x)\varphi^{\star n_3}(x)\mathrm{d}x \leq \left(\int\varphi(x)\mathrm{d}x\right)^{n_1+n_2+n_3 - 6} \int\left(\varphi^{\star 2}(x)\right)^3\mathrm{d}x.$$ If $n_1,n_2\geq 2$ and $n_1+n_2\geq 6$, then $$\int \varphi^{\star n_1}(x)\varphi^{\star n_2}(x)\varphi(x)\mathrm{d}x \leq \varphi^{\star (n_1+n_2)}\left(\mathbf{0}\right) \leq \left(\int\varphi(x)\mathrm{d}x\right)^{n_1+n_2 - 6}\varphi^{\star 6}\left(\mathbf{0}\right).$$* *Proof.* Recall that the Fourier transform of the convolution of two functions equals the pointwise product of their individual Fourier transforms, and the Fourier transform of the pointwise product of two functions equals the convolution of their individual Fourier transforms. Therefore $$\int \varphi^{\star n_1}(x)\varphi^{\star n_2}(x)\varphi^{\star n_3}(x)\mathrm{d}x = \int \widehat\varphi(k)^{n_1}\widehat\varphi(k-l)^{n_2}\widehat\varphi(l)^{n_3}\frac{\mathrm{d}k \mathrm{d}l}{\left(2\pi\right)^{2d}}.$$ We then note that having $\varphi(x)\geq 0$ implies $\sup_{k}\abs*{\widehat\varphi(k)} = \widehat\varphi(0) = \int\varphi(x)\mathrm{d}x$. 
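The fact that non-negativity pins the supremum of $\abs*{\widehat\varphi}$ at the origin is just the triangle inequality applied to the defining integral, and it is easy to confirm on a discrete example; the random non-negative vector below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.uniform(size=256)    # an arbitrary nonnegative function on a grid
f_hat = np.fft.fft(f)

# |f_hat(k)| = |sum_x f(x) e^{-ikx}| <= sum_x f(x) = f_hat(0) for every k,
# so the modulus of the transform peaks at k = 0
peak = float(np.max(np.abs(f_hat)))
at_zero = float(np.real(f_hat[0]))
```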
Therefore a supremum bound implies $$\begin{aligned} \int \widehat\varphi(k)^{n_1}\widehat\varphi(k-l)^{n_2}\widehat\varphi(l)^{n_3}\frac{\mathrm{d}k \mathrm{d}l}{\left(2\pi\right)^{2d}} &\leq \left(\int\varphi(x)\mathrm{d}x\right)^{n_1+n_2+n_3 - 6} \int \abs*{\widehat\varphi(k)}^{2}\abs*{\widehat\varphi(k-l)}^{2}\abs*{\widehat\varphi(l)}^{2}\frac{\mathrm{d}k \mathrm{d}l}{\left(2\pi\right)^{2d}}\nonumber\\ & = \left(\int\varphi(x)\mathrm{d}x\right)^{n_1+n_2+n_3 - 6} \int \widehat\varphi(k)^{2}\widehat\varphi(k-l)^{2}\widehat\varphi(l)^{2}\frac{\mathrm{d}k \mathrm{d}l}{\left(2\pi\right)^{2d}}\nonumber\\ & = \left(\int\varphi(x)\mathrm{d}x\right)^{n_1+n_2+n_3 - 6} \int\left(\varphi^{\star 2}(x)\right)^3\mathrm{d}x. \end{aligned}$$ For the second inequality, we bound $\varphi(x)\leq 1$ to leave the convolution $\varphi^{\star n_1}\star\varphi^{\star n_2}\left(\mathbf{0}\right)= \varphi^{\star (n_1+n_2)}\left(\mathbf{0}\right)$. Then like above we have $$\begin{gathered} \varphi^{\star (n_1+n_2)}\left(\mathbf{0}\right) = \int \widehat\varphi(k)^{n_1 + n_2}\frac{\mathrm{d}k}{\left(2\pi\right)^d} \\ \leq \left(\int\varphi(x)\mathrm{d}x\right)^{n_1+n_2 - 6} \int \widehat\varphi(k)^{6}\frac{\mathrm{d}k}{\left(2\pi\right)^d} = \left(\int\varphi(x)\mathrm{d}x\right)^{n_1+n_2 - 6}\varphi^{\star 6}\left(\mathbf{0}\right). \end{gathered}$$ ◻ # Lace Expansion Coefficients {#sec:Pibd} The key to our proof is a decomposition of the lace expansion coefficients. In preparation for defining them, we need a few more elementary definitions. The full definitions can be found in [@HeyHofLasMat19]. **Definition 20** (Thinnings and Pivotal Points). Let $x,y\in\mathbb R^d$ and $A\subset \mathbb R^d$ be a locally finite set. 1. Let $\eta$ be a vertex set. We produce a vertex set $\eta_{\langle A \rangle}$ by retaining each $\omega\in\eta$ with probability $\overline{\varphi}(A,\omega):= \prod_{z\in A}\left(1-\varphi(\omega,z)\right)$. 
We call $\eta_{\langle A \rangle}$ an *$A$-thinning* of $\eta$. A similar procedure can be followed to define $\eta^x_{\langle A \rangle}$ from $\eta^x$. 2. Define $\left\{x \xleftrightarrow{\,\,A\,\,} y\textrm { in } \xi\right\}$ to be the event that $x,y\in\eta$ and $x$ is connected to $y$ in $\xi$, but that this connection does not survive an $A$-thinning of $\eta\setminus\left\{x\right\}$. In particular, the connection does not survive if $y$ is thinned out. 3. The vertex $u\in\mathbb R^d$ is *pivotal* and $u\in\textsf {Piv}(x,y,\xi)$ if every path on $\xi^{x,y}$ that connects $x$ to $y$ uses the vertex $u$. The end points $x$ and $y$ are never said to be pivotal. 4. Define $$E\left(x,y;A,\xi\right) := \left\{x \xleftrightarrow{\,\,A\,\,} y\textrm { in } \xi\right\}\cap\left\{\not\exists w\in\textsf {Piv}(x,y;\xi)\colon x \xleftrightarrow{\,\,A\,\,} w\textrm { in } \xi\right\}.$$ If one considers the pivotal points from $x$ to $y$ in $\xi$ in sequence, then this is the event that an $A$-thinning breaks the connection after the last pivotal point and not before. 5. Define $$\left\{x \Longleftrightarrow y\textrm { in } \xi^{x,y}\right\}:= \left\{x \longleftrightarrow y\textrm { in } \xi^{x,y}\right\}\circ\left\{x \longleftrightarrow y\textrm { in } \xi^{x,y}\right\}.$$ Note that this is equal to the event that $x$ and $y$ are adjacent or there exist vertices $u,v$ in $\eta$ that are adjacent to $x$ and have disjoint paths to $y$ that both do not contain $x$. Alternatively, there are no pivotal points for the connection of $x$ and $y$ in $\xi^{x,y}$. We are now able to define the lace expansion coefficients, which will be the main objects of study in the remainder of the paper. **Definition 21**. 
For $n\in\mathbb N$, $x\in\mathbb R^d$, and $\lambda\in\left[0,\lambda_c\right]$ we define $$\begin{aligned} \Pi_\lambda^{(0)}(x) &:= \mathbb P_\lambda\left(\mathbf{0} \Longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right) - \varphi(x), \label{eq:LE:Pi0_def} \\ \Pi_\lambda^{(n)}(x) &:= \lambda^n \int \mathbb P_\lambda\left( \{\mathbf{0} \Longleftrightarrow u_0\textrm { in } \xi^{\mathbf{0}, u_0}_{0}\} \cap \bigcap_{i=1}^{n} E\left(u_{i-1},u_i; \mathscr {C}_{i-1}, \xi^{u_{i-1}, u_i}_{i}\right) \right) \mathrm{d}\vec u_{[0,n-1]} , \label{eq:LE:Pin_def} \end{aligned}$$ where $u_n=x$, $\left\{\xi_i\right\}_{i\geq 0}$ are independent copies of $\xi$, and $\mathscr {C}_{i} = \mathscr {C}\left(u_{i-1}, \xi^{u_{i-1}}_{i}\right)$ is the cluster of $u_{i-1}$ in $\xi^{u_{i-1}}_i$. Then we further define $$\Pi_\lambda(x) = \sum_{n=0}^\infty\left(-1\right)^n\Pi_\lambda^{(n)}(x).$$ Note that [@HeyHofLasMat19 Corollary 6.1] proves that $\Pi_{\lambda_c}^{(n)}(x) = \lim_{\lambda\nearrow\lambda_c}\Pi_\lambda^{(n)}(x)$, and (in the proof) that $\widehat\Pi_{\lambda_c}^{(n)}(0) = \lim_{\lambda\nearrow\lambda_c}\widehat\Pi_\lambda^{(n)}(0)$ and $\widehat\Pi_{\lambda_c}(0) = \lim_{\lambda\nearrow\lambda_c}\widehat\Pi_\lambda(0)$. **Proposition 22**. *Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds and $d$ is sufficiently large. Then for all $\lambda\leq\lambda_c$ and $x\in\mathbb R^d$ $$\label{eqn:OZE} \tau_\lambda(x) = \varphi(x) + \Pi_\lambda(x) + \lambda\left(\varphi+ \Pi_\lambda\right)\star\tau_\lambda(x).$$* *Proof.* This is the Ornstein-Zernike equation for the random connection model, and it is proven in [@HeyHofLasMat19]. The $\lambda<\lambda_c$ result is in Corollary 5.3, and the $\lambda=\lambda_c$ result is in Corollary 6.1. ◻ Our main result for this section is the following proposition. **Proposition 23**.
*Suppose Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} hold. Also let $n_0\geq 4$ and $N\geq 1$ be fixed. Then as $d\to\infty$, $$\begin{aligned} \lambda_c\widehat\Pi_{\lambda_c}^{(0)}(0) &= \frac{1}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \frac{1}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\nonumber\\ &\hspace{4cm}+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} 
}+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\label{eqPi0bd}\\ \lambda_c\widehat\Pi_{\lambda_c}^{(1)}(0) &= \lambda_c^2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 3\lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\nonumber\\ &\hspace{4cm}+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw 
(1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\label{eqPi1bd}\\ \lambda_c\widehat\Pi_{\lambda_c}^{(2)}(0) &= \lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\label{eqPi2bd}\\ 
\lambda_c\widehat\Pi_{\lambda_c}^{(3)}(0) &= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\label{eqPi3bd}\\ \lambda_c\sum^{n_0}_{n=4}\left(-1\right)^n\widehat\Pi_{\lambda_c}^{(n)}(0) &= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\label{eqPinbd}\\ \lambda_c\sum^\infty_{n=N}\left(-1\right)^n\widehat\Pi_{\lambda_c}^{(n)}(0) &= \mathcal{O}\left(\beta^N\right). 
\label{eqn:PiTailBound} \end{aligned}$$* Note that when Assumption [\[Assump:NumberBound\]](#Assump:NumberBound){reference-type="ref" reference="Assump:NumberBound"} holds we can choose a fixed finite $N^*$ such that $$N^* \geq \ceil*{\frac{1}{\log \beta(d)}\log\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)}$$ for all $d\in\mathbb N$. 
If we then let $N=N^*$ in [\[eqn:PiTailBound\]](#eqn:PiTailBound){reference-type="eqref" reference="eqn:PiTailBound"}, the bound becomes $$\lambda_c\sum^\infty_{n=N^*}\left(-1\right)^n\widehat\Pi_{\lambda_c}^{(n)}(0) = \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right).$$ **Corollary 24**. *Suppose Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"} hold. 
Then as $d\to\infty$, $$\begin{gathered} \lambda_c\widehat\Pi_{\lambda_c}(0) = - \lambda_c^2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \frac{3}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- 2\lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \frac{5}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\\+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) 
circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). \end{gathered}$$* *Proof.* The corollary follows from $\widehat\Pi_{\lambda_c}(0)=\sum^\infty_{n=0}\left(-1\right)^n\widehat\Pi_{\lambda_c}^{(n)}(0)$ and the bounds in Proposition [Proposition 23](#prop:PiBounds){reference-type="ref" reference="prop:PiBounds"}. ◻ We prove Proposition [Proposition 23](#prop:PiBounds){reference-type="ref" reference="prop:PiBounds"} in the remainder of the section: [\[eqPi0bd\]](#eqPi0bd){reference-type="eqref" reference="eqPi0bd"} is proved in Section [3.1](#sec:Pi0bd){reference-type="ref" reference="sec:Pi0bd"}, [\[eqPi1bd\]](#eqPi1bd){reference-type="eqref" reference="eqPi1bd"} is proved in Section [3.2](#sec:Pi1bd){reference-type="ref" reference="sec:Pi1bd"}, [\[eqPi2bd\]](#eqPi2bd){reference-type="eqref" reference="eqPi2bd"} is proved in Section [3.3](#sec:Pi2bd){reference-type="ref" reference="sec:Pi2bd"}, [\[eqPi3bd\]](#eqPi3bd){reference-type="eqref" reference="eqPi3bd"} and [\[eqPinbd\]](#eqPinbd){reference-type="eqref" reference="eqPinbd"} are proven in Section [3.1](#sec:Pi0bd){reference-type="ref" reference="sec:Pi0bd"}. But first we show how it implies our main result. *Proof of Theorem [Theorem 4](#thm:CriticalIntensityExpansion){reference-type="ref" reference="thm:CriticalIntensityExpansion"}.* By applying the Fourier transform to both sides of [\[eqn:OZE\]](#eqn:OZE){reference-type="eqref" reference="eqn:OZE"}, we can rearrange terms to find $$\widehat\tau_\lambda(k) = \frac{\widehat\varphi(k) + \widehat\Pi_\lambda(k)}{1- \lambda\left(\widehat\varphi(k) + \widehat\Pi_\lambda(k)\right)}$$ for all $k\in\mathbb R^d$ and $\lambda\leq\lambda_c$ (where we interpret the right hand side as $=\infty$ if the denominator vanishes). 
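As a sanity check of this closed form, one can verify numerically that it solves the fixed-point relation behind [\[eqn:OZE\]](#eqn:OZE){reference-type="eqref" reference="eqn:OZE"} in Fourier space, and that the $k=0$ denominator vanishes exactly when $\lambda\left(1+\widehat\Pi_\lambda(0)\right)=1$. The following is a minimal sketch; the scalar values standing in for $\widehat\varphi$, $\widehat\Pi_\lambda$ and $\lambda$ are placeholders, not derived from any model.

```python
# Sketch: check that tau_hat = (phi_hat + Pi_hat) / (1 - lam*(phi_hat + Pi_hat))
# solves the Fourier-space fixed point
#   tau_hat = phi_hat + Pi_hat + lam*(phi_hat + Pi_hat)*tau_hat,
# and that the denominator vanishes at k = 0 when lam*(1 + Pi_hat(0)) = 1
# (using phi_hat(0) = 1).

def tau_hat(lam, phi_hat, pi_hat):
    denom = 1.0 - lam * (phi_hat + pi_hat)
    return float("inf") if denom == 0.0 else (phi_hat + pi_hat) / denom

# placeholder values, chosen only for illustration
phi, pi_, lam = 0.7, -0.05, 0.9
t = tau_hat(lam, phi, pi_)
residual = t - (phi + pi_ + lam * (phi + pi_) * t)
assert abs(residual) < 1e-12

# criticality at k = 0: with phi_hat(0) = 1 and placeholder Pi_hat(0) = -0.5,
# lam_c = 1/(1 + Pi_hat(0)) = 2 makes the denominator vanish exactly
pi0 = -0.5
lam_c = 1.0 / (1.0 + pi0)
assert tau_hat(lam_c, 1.0, pi0) == float("inf")
```

With the denominator zero at $k=0$, the susceptibility $\chi\left(\lambda\right)=1+\lambda\widehat\tau_\lambda(0)$ diverges, which is the characterization of $\lambda_c$ used next.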
Since Mecke's equation implies $\chi\left(\lambda\right) = 1 + \lambda \widehat\tau_\lambda(0)$, and $\lambda_c = \inf\left\{\lambda>0 \colon \chi\left(\lambda\right)=\infty\right\}$, this tells us that $\lambda_c$ satisfies $$\label{eqn:critical} \lambda_c\left(1+\widehat\Pi_{\lambda_c}(0)\right) = 1,$$ where we have used $\widehat\varphi(0)=1$. We now aim to use our expansion for $\widehat\Pi_{\lambda_c}(0)$ to get an expansion for $\lambda_c$. Let us denote $a=\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }$, $b=\frac{3}{2}\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \frac{5}{2}\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }$, $c=2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0.6) circle (3pt); \filldraw[fill=black] (2,-0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }$, and $r=\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0.6) circle (3pt); 
\filldraw[fill=black] (2,-0.6) circle (3pt); \filldraw[fill=black] (3,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0.6) circle (3pt); \filldraw[fill=black] (2,-0.6) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=black] (1,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }$. Using Corollary [Corollary 24](#thm:CoefficientExpansion){reference-type="ref" reference="thm:CoefficientExpansion"}, [\[eqn:critical\]](#eqn:critical){reference-type="eqref" reference="eqn:critical"} becomes $$\lambda_c - a\lambda_c^2 - b\lambda_c^3 - c\lambda_c^4 + \mathcal{O}\left(r\right)= 1.$$ We can rearrange this to get $$\lambda_c = 1+ a\lambda_c^2 + b\lambda_c^3 + c\lambda_c^4 + \mathcal{O}\left(r\right),$$ and substituting this into itself produces $$\begin{aligned} \lambda_c &= 1 + a\left(1+ a\lambda_c^2 + b\lambda_c^3 + c\lambda_c^4 + \mathcal{O}\left(r\right)\right)^2 + b\left(1+ a\lambda_c^2 + b\lambda_c^3 + c\lambda_c^4 + \mathcal{O}\left(r\right)\right)^3\nonumber\\ &\hspace{7cm} + c\left(1+ a\lambda_c^2 + b\lambda_c^3 + c\lambda_c^4 + \mathcal{O}\left(r\right)\right)^4 + \mathcal{O}\left(r\right)\nonumber\\ &= 1 + a + 2a^2\lambda_c^2 + \mathcal{O}\left(ab\lambda_c^3 + a^3\lambda_c^4\right) + b + \mathcal{O}\left(ab\lambda^2_c\right) + c + \mathcal{O}\left(ac\lambda_c^2\right) + \mathcal{O}\left(r\right)\nonumber\\ & = 1 + a + b + c + 2a^2 + \mathcal{O}\left(ab + a^3
+ r\right). \end{aligned}$$ Finally, note that $b=\mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=black] (1,0.6) circle (3pt); \filldraw[fill=black] (1,-0.6) circle (3pt); \filldraw[fill=black] (2,0) circle (3pt); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)$ and so the last term is exactly as stated in our result. ◻ ## Bounds on the Zeroth Lace Expansion Coefficient {#sec:Pi0bd} In this subsection we prove [\[eqPi0bd\]](#eqPi0bd){reference-type="eqref" reference="eqPi0bd"}. #### Upper Bound on $\widehat\Pi_{\lambda_c}^{(0)}(0)$ **Lemma 25**. *Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds. Then as $d\to\infty$, $$\begin{gathered} \lambda_c\widehat\Pi_{\lambda_c}^{(0)}(0) \leq \frac{1}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \frac{1}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\\+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0)
-- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right) \end{gathered}$$* *Proof.* We first consider $\mathbb P_\lambda\left(\mathbf{0} \Longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right)$. Since the existence of an edge between $\mathbf{0}$ and $x$ is independent of everything else, $$\begin{gathered} \mathbb P_\lambda\left(\mathbf{0} \Longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right) \\= \varphi(x) + \left(1-\varphi(x)\right)\mathbb P_\lambda\left(\exists u,v\in\eta\colon u\ne v, \mathbf{0}\sim u, \mathbf{0}\sim v, \left\{u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\circ\left\{v \longleftrightarrow x\textrm { in } \xi^{x}\right\}\right). \end{gathered}$$ Then note that the disjoint occurrence of two events is contained in their intersection: $\left\{u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\circ\left\{v \longleftrightarrow x\textrm { in } \xi^{x}\right\} \subset \left\{u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\cap\left\{v \longleftrightarrow x\textrm { in } \xi^{x}\right\}$.
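The next steps bound this probability by the tail of a Poisson count of connecting vertices. As a quick numerical spot-check of the tail identity $\mathbb P\left(N\geq 2\right) = 1-\text{e}^{-\mu}-\mu\text{e}^{-\mu}$ for $N\sim\mathrm{Poisson}\left(\mu\right)$ that is used below (a sketch; the truncation level of the series is an arbitrary choice):

```python
import math

# Spot-check: for N ~ Poisson(mu), summing the pmf over n >= 2
# reproduces the closed form 1 - e^{-mu} - mu*e^{-mu}.
def poisson_tail_ge2(mu, nmax=100):
    p = math.exp(-mu)            # pmf at n = 0
    total = 0.0
    for n in range(1, nmax + 1):
        p *= mu / n              # pmf at n, updated recursively
        if n >= 2:
            total += p
    return total

for mu in [0.1, 0.5, 1.0, 3.0]:
    closed = 1.0 - math.exp(-mu) - mu * math.exp(-mu)
    assert abs(poisson_tail_ge2(mu) - closed) < 1e-12
```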
Therefore $$\begin{gathered} \mathbb P_\lambda\left(\exists u,v\in\eta\colon u\ne v, \mathbf{0}\sim u, \mathbf{0}\sim v, \left\{u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\circ\left\{v \longleftrightarrow x\textrm { in } \xi^{x}\right\}\right) \\\leq \mathbb P_\lambda\left(\#\left\{u\in\eta\colon \mathbf{0}\sim u, u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\geq 2\right). \end{gathered}$$ Since $\eta$ is a Poisson point process, the number of such vertices is Poisson distributed and Mecke's equation tells us that the expected number of such vertices is given by $$\mathbb E_{\lambda}\left[\#\left\{u\in\eta\colon\mathbf{0}\sim u, u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\right] = \lambda\int\varphi(v)\tau_\lambda(x-v)\mathrm{d}v = \lambda\varphi\star\tau_\lambda(x).$$ Therefore $$\begin{aligned} &\mathbb P_\lambda\left(\exists u,v\in\eta\colon u\ne v, \mathbf{0}\sim u, \mathbf{0}\sim v, \left\{u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\circ\left\{v \longleftrightarrow x\textrm { in } \xi^{x}\right\}\right) \nonumber\\ &\hspace{5cm}\leq 1 - \mathbb P_\lambda\left(\#\left\{u\in\eta\colon\mathbf{0}\sim u, u \longleftrightarrow x\textrm { in } \xi^{x}\right\}\leq 1\right)\\ & \hspace{5cm}= 1 - \exp\left(-\lambda\varphi\star\tau_\lambda(x)\right) - \lambda\varphi\star\tau_\lambda(x)\exp\left(-\lambda\varphi\star\tau_\lambda(x)\right). 
\end{aligned}$$ Using this with $1-\text{e}^{-x} - x\text{e}^{-x}\leq \frac{1}{2}x^2 + \frac{1}{6}x^3$ for all $x\geq 0$ (which suffices, since $\lambda\varphi\star\tau_\lambda(x)\geq 0$) and [\[eq:LE:Pi0_def\]](#eq:LE:Pi0_def){reference-type="eqref" reference="eq:LE:Pi0_def"}, we get $$\begin{gathered} \lambda\widehat\Pi_\lambda^{(0)}(0) \leq \lambda\int \left(1-\varphi(x)\right)\left(1 - \exp\left(-\lambda\varphi\star\tau_\lambda(x)\right) - \lambda\varphi\star\tau_\lambda(x)\exp\left(-\lambda\varphi\star\tau_\lambda(x)\right)\right)\mathrm{d}x \\ \leq \frac{1}{2}\lambda\int\left(1-\varphi(x)\right)\left(\lambda\varphi\star\tau_\lambda(x)\right)^2\mathrm{d}x + \frac{1}{6}\lambda\int\left(1-\varphi(x)\right)\left(\lambda\varphi\star\tau_\lambda(x)\right)^3\mathrm{d}x. \end{gathered}$$ By applying $\tau_\lambda(x) \leq \varphi(x) + \lambda\varphi\star\tau_\lambda(x)$ iteratively, we get $$\begin{aligned} &\int\left(1-\varphi(x)\right)\left(\varphi\star\tau_\lambda(x)\right)^2\mathrm{d}x \nonumber\\ &\hspace{1cm}\leq \int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\mathrm{d}x + 2\lambda\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\mathrm{d}x + 2\lambda^2\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 4}(x)\mathrm{d}x\nonumber\\ &\hspace{2cm} + 2\lambda^3\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 4}\star\tau_\lambda(x)\mathrm{d}x + \lambda^2\int \left(1-\varphi(x)\right)\varphi^{\star 3}(x)^2\mathrm{d}x \nonumber\\ &\hspace{2cm} + 2\lambda^3\int \left(1-\varphi(x)\right)\varphi^{\star 3}(x)\varphi^{\star 3}\star\tau_\lambda(x)\mathrm{d}x + \lambda^4\int \left(1-\varphi(x)\right)\varphi^{\star 3}\star\tau_\lambda(x)^2\mathrm{d}x\nonumber\\ &\hspace{1cm}\leq \int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\mathrm{d}x + 2\lambda\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\mathrm{d}x\nonumber\\ &\hspace{2cm}+ 3\lambda^2\varphi^{\star 6}\left(\mathbf{0}\right) + 4\lambda^3\varphi^{\star 6}\star\tau_\lambda\left(\mathbf{0}\right) +
\lambda^4\varphi^{\star 6}\star\tau_\lambda^{\star 2}\left(\mathbf{0}\right). \end{aligned}$$ From Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"}, we know that for $\lambda\leq\lambda_c$ these last three terms are all $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$. By further expanding the first two terms via the $\left(1-\varphi(x)\right)$ factors, we find $$\begin{gathered} \int\left(1-\varphi(x)\right)\left(\varphi\star\tau_{\lambda_c}(x)\right)^2\mathrm{d}x = \varphi^{\star 4}\left(\mathbf{0}\right) - \int \varphi(x)\varphi^{\star 2}(x)^2\mathrm{d}x + 2\lambda_c\varphi^{\star 5}\left(\mathbf{0}\right)\\ + \mathcal{O}\left(\int \varphi(x)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\mathrm{d}x + \varphi^{\star 6}\left(\mathbf{0}\right)\right). \end{gathered}$$ By the same approach, we find $$\begin{aligned} &\int\left(1-\varphi(x)\right)\left(\varphi\star\tau_\lambda(x)\right)^3\mathrm{d}x \nonumber\\ &\hspace{1cm}\leq \int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^3\mathrm{d}x + 3\lambda\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\varphi^{\star 3}(x)\mathrm{d}x \nonumber\\ &\hspace{2cm} + 3\lambda^2\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\varphi^{\star 4}(x)\mathrm{d}x + 3\lambda^3\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\varphi^{\star 4}\star\tau_\lambda(x)\mathrm{d}x \nonumber\\ &\hspace{2cm}+ 3\lambda^2\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 3}(x)^2\mathrm{d}x + 6\lambda^3\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\varphi^{\star 4}(x)\mathrm{d}x \nonumber\\ &\hspace{2cm} + 6\lambda^4\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\varphi^{\star 4}\star\tau_\lambda(x)\mathrm{d}x + 3\lambda^4\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)\varphi^{\star 3}\star\tau_\lambda(x)^2\mathrm{d}x \nonumber\\ &\hspace{2cm} + \lambda^3\int \left(1-\varphi(x)\right)\varphi^{\star 3}(x)^3\mathrm{d}x + 
3\lambda^4\int \left(1-\varphi(x)\right)\varphi^{\star 3}(x)^2\varphi^{\star 3}\star\tau_\lambda(x)\mathrm{d}x \nonumber\\ &\hspace{2cm}+ 3\lambda^5\int \left(1-\varphi(x)\right)\varphi^{\star 3}(x)\varphi^{\star 3}\star\tau_\lambda(x)^2\mathrm{d}x + \lambda^6\int \left(1-\varphi(x)\right)\varphi^{\star 3}\star\tau_\lambda(x)^3\mathrm{d}x\nonumber\\ &\hspace{1cm}\leq \int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^3\mathrm{d}x + 3\lambda\int \left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\varphi^{\star 3}(x)\mathrm{d}x \nonumber\\ &\hspace{2cm} + 6\lambda^2\varphi^{\star 6}\left(\mathbf{0}\right)\int\varphi(v)\mathrm{d}v + 3\lambda^3\varphi^{\star 6}\star\tau_\lambda\left(\mathbf{0}\right)\int\varphi(v)\mathrm{d}v + 6\lambda^3\varphi^{\star 7}\left(\mathbf{0}\right)\int\varphi(v)\mathrm{d}v \nonumber\\ &\hspace{2cm} + 6\lambda^4\varphi^{\star 7}\star\tau_\lambda\left(\mathbf{0}\right)\int\varphi(v)\mathrm{d}v + 3\lambda^4\varphi^{\star 6}\star\tau_\lambda^{\star 2}\left(\mathbf{0}\right)\int\varphi(v)\mathrm{d}v \nonumber\\ &\hspace{2cm}+ \lambda^3\varphi^{\star 6}\left(\mathbf{0}\right)\left(\int\varphi(v)\mathrm{d}v\right)^2 + 3\lambda^4\varphi^{\star 6}\star\tau_\lambda\left(\mathbf{0}\right)\left(\int\varphi(v)\mathrm{d}v\right)^2 + 3\lambda^5\varphi^{\star 6}\star\tau_\lambda^{\star 2}\left(\mathbf{0}\right)\left(\int\varphi(v)\mathrm{d}v\right)^2 \nonumber\\ &\hspace{2cm} + \lambda^6\varphi^{\star 6}\star \tau_\lambda^{\star 2}\left(\mathbf{0}\right)\left(\int\varphi(v)\mathrm{d}v\right)^3. \end{aligned}$$ Note that in this last inequality we identify two paths that form a loop; this contributes the terms $\varphi^{\star 6}\left(\mathbf{0}\right)$, $\varphi^{\star 6}\star\tau_\lambda\left(\mathbf{0}\right)$, etc. We again simply bound the factor $\left(1-\varphi(x)\right)$ by $1$. This leaves a third path from $\mathbf{0}$ to $x$. We deal with this by bounding one of the steps in the convolution by $1$, so that the remaining steps form a 'loose' integration.
For example, $$\begin{gathered} \int \left(1-\varphi(x)\right)\varphi^{\star 3}\star\tau_\lambda(x)^3\mathrm{d}x \leq \int \varphi^{\star 3}\star\tau_\lambda(x)^2\left(\int\varphi^{\star 3}(u)\tau_\lambda(x-u)\mathrm{d}u\right)\mathrm{d}x \\\leq \int \varphi^{\star 3}\star\tau_\lambda(x)^2\left(\int\varphi^{\star 3}(u)\mathrm{d}u\right)\mathrm{d}x = \varphi^{\star 6}\star \tau_\lambda^{\star 2}\left(\mathbf{0}\right)\left(\int\varphi(v)\mathrm{d}v\right)^3. \end{gathered}$$ Recall that we have chosen the scaling $\int\varphi(v)\mathrm{d}v=1$ for our proof. Furthermore, by bounding $1-\varphi(x)$ we find that the first two terms are $\mathcal{O}\left(\int \varphi^{\star 2}(x)^3\mathrm{d}x \right)$. Therefore $$\int\left(1-\varphi(x)\right)\left(\varphi\star\tau_{\lambda_c}(x)\right)^3\mathrm{d}x = \mathcal{O}\left(\int \varphi^{\star 2}(x)^3\mathrm{d}x + \varphi^{\star 6}\left(\mathbf{0}\right)\right).$$ In summary, these bounds give us $$\begin{gathered} \lambda_c\widehat\Pi_{\lambda_c}^{(0)}(0) \leq \frac{1}{2}\lambda_c^3\int \varphi^{\star 2}(x)^2\mathrm{d}x - \frac{1}{2}\lambda_c^3\int\varphi(x) \varphi^{\star 2}(x)^2\mathrm{d}x + \lambda_c^4\int \varphi^{\star 2}(x)\varphi^{\star 3}(x)\mathrm{d}x\\ + \mathcal{O}\left(\int \varphi^{\star 2}(x)^3\mathrm{d}x + \int \varphi(x)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\mathrm{d}x + \varphi^{\star 6}\left(\mathbf{0}\right)\right) \end{gathered}$$ as required. ◻ #### Lower Bound on $\widehat\Pi_{\lambda_c}^{(0)}(0)$ **Lemma 26**. 
*$$\lambda_c\widehat\Pi_{\lambda_c}^{(0)}(0) \geq \frac{1}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \frac{1}{2}\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)$$* *Proof.* We lower bound $\Pi_\lambda^{(0)}(x)$ by identifying an appropriate subset of $\left\{\mathbf{0} \Longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right\}$. 
Consider $\mathcal{F}:= \mathcal{F}_1 \cup \mathcal{F}_2 \cup \mathcal{F}_3$, where $$\begin{aligned} \mathcal{F}_1 :=& \left\{\mathbf{0}\sim x\right\} \\ \mathcal{F}_2 :=& \left\{\mathbf{0}\not\sim x\right\} \cap\left\{\#\left\{u\in\eta\colon \mathbf{0}\sim u\sim x\right\}\geq 2\right\}\\ \mathcal{F}_3 :=& \left\{\mathbf{0}\not\sim x\right\} \cap\left\{\#\left\{u\in\eta\colon \mathbf{0}\sim u\sim x\right\}= 1\right\} \cap \left\{\#\left\{v\in\eta\colon \mathbf{0} \sim v\textrm { in } \xi^{\mathbf{0}}, v \xleftrightarrow{\,\,=2\,\,} x\textrm { in } \xi^{v,x}_{\langle \mathbf{0} \rangle}\right\}\geq 1\right\}. \end{aligned}$$ In each, either $\mathbf{0}$ is adjacent to $x$ or there exist two vertex disjoint paths from $\mathbf{0}$ to $x$. Therefore $\mathcal{F}\subset \left\{\mathbf{0} \Longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right\}$. The components $\mathcal{F}_1$, $\mathcal{F}_2$, and $\mathcal{F}_3$ are also all disjoint by construction, so $$\mathbb P_\lambda\left(\mathbf{0} \Longleftrightarrow x\textrm { in } \xi^{\mathbf{0},x}\right) \geq \mathbb P_\lambda\left(\mathcal{F}_1\right) + \mathbb P_\lambda\left(\mathcal{F}_2\right) + \mathbb P_\lambda\left(\mathcal{F}_3\right).$$ Since $\eta$ is distributed as a Poisson point process on $\mathbb R^d$ with intensity $\lambda$, $$\begin{aligned} \mathbb P_\lambda\left(\mathcal{F}_1\right) =& \varphi(x)\\ \mathbb P_\lambda\left(\mathcal{F}_2\right) =& \left(1-\varphi(x)\right)\left(1-\exp\left(-\lambda\varphi^{\star 2}(x)\right) - \lambda\varphi^{\star 2}(x)\exp\left(-\lambda\varphi^{\star 2}(x)\right)\right)\\ \mathbb P_\lambda\left(\mathcal{F}_3\right) =& \lambda\varphi^{\star 2}(x)\exp\left(-\lambda\varphi^{\star 2}(x)\right)\varphi^{[3]}(x)\nonumber\\ =& \lambda\varphi^{\star 2}(x)\exp\left(-\lambda\varphi^{\star 2}(x)\right)\left(1-\varphi(x)\right)\nonumber\\ &\times\left(1 - \exp\left(-\lambda\int \varphi(v)\left(1-\varphi(x-v)\right)\left(1 - \exp\left(-\lambda\int 
\varphi(x-w)\left(1-\varphi(w)\right)\varphi(w-v)\mathrm{d}w\right)\right)\mathrm{d}v\right)\right). \end{aligned}$$ Therefore $$\widehat\Pi_{\lambda_c}^{(0)}(0) \geq \int \left(\mathbb{P}_{\lambda_c}\left(\mathcal{F}_2\right) + \mathbb{P}_{\lambda_c}\left(\mathcal{F}_3\right)\right)\mathrm{d}x,$$ and we now want to lower bound the integrals of $\mathbb{P}_{\lambda_c}\left(\mathcal{F}_2\right)$ and $\mathbb{P}_{\lambda_c}\left(\mathcal{F}_3\right)$. By using $1 - \text{e}^{-x} - x\text{e}^{-x} \geq \frac{1}{2}x^2 - \frac{1}{2}x^3$ for all $x\geq 0$, $$\lambda\int\mathbb P_\lambda\left(\mathcal{F}_2\right)\mathrm{d}x \geq \frac{1}{2}\lambda^3\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0.5,0) -- (1,0.6) -- (1.5,0) -- (1,-0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }- \frac{1}{2}\lambda^4\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0.5,0) -- (1,0.6) -- (1.5,0) -- (1,-0.6) -- (0.5,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (2,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }= \frac{1}{2}\lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \frac{1}{2}\lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \draw (0,0) -- (2,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+
\mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)$$ We find our lower bound on $\lambda\int\mathbb P_\lambda\left(\mathcal{F}_3\right)\mathrm{d}x$ in a few more steps. Since we have $x\text{e}^{-x} \geq x - x^2$ for all $x\geq 0$, $$\label{eqn:F3Pt1} \lambda\varphi^{\star 2}(x)\exp\left(-\lambda\varphi^{\star 2}(x)\right)\left(1-\varphi(x)\right) \geq \lambda\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,-0.6); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \lambda^2\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0.5,0) -- (1,0.6) -- (1.5,0) -- (1,-0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (0.5,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} }.$$ Since we have $1-\text{e}^{-x} \geq x - \frac{1}{2}x^2$ for all $x\geq 0$, $$1 - \exp\left(-\lambda\int \varphi(x-w)\left(1-\varphi(w)\right)\varphi(w-v)\mathrm{d}w\right) \geq \lambda\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0); \draw[red] (1.5,0) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw[fill=white] (1,0) circle (2pt) node[left]{$v$}; \filldraw (1.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} }
-\frac{1}{2}\lambda^2\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (0.5,0) -- (1,0.6); \draw[red] (1.5,0) -- (1,-0.6) -- (0.5,0); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw[fill=white] (1,0) circle (2pt) node[above]{$v$}; \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} },$$ and $$\begin{gathered} \lambda\int \varphi(v)\left(1-\varphi(x-v)\right)\left(1 - \exp\left(-\lambda\int \varphi(x-w)\left(1-\varphi(w)\right)\varphi(w-v)\mathrm{d}w\right)\right)\mathrm{d}v \\ \geq \lambda^2\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (1,-0.6); \draw[red] (1.5,0) -- (1,-0.6); \draw[red] (1,0) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \frac{1}{2}\lambda^3\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (0.5,0) -- (1,0.6); \draw (1,0)--(1,-0.6); \draw[red] (1.5,0) -- (1,-0.6) -- (0.5,0); \draw[red] (1,0) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} }. 
\end{gathered}$$ Since $x\mapsto 1-\text{e}^{-x}$ is monotone increasing and $1-\text{e}^{-x} \geq x - \frac{1}{2}x^2$ for all $x\geq 0$, $$\begin{gathered} \label{eqn:F3Pt2} 1 - \exp\left(-\lambda\int \varphi(v)\left(1-\varphi(x-v)\right)\left(1 - \exp\left(-\lambda\int \varphi(x-w)\left(1-\varphi(w)\right)\varphi(w-v)\mathrm{d}w\right)\right)\mathrm{d}v\right)\\ \geq \lambda^2\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (1,-0.6); \draw[red] (1.5,0) -- (1,-0.6); \draw[red] (1,0) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \frac{1}{2}\lambda^3\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (0.5,0) -- (1,0.6); \draw (1,0)--(1,-0.6); \draw[red] (1.5,0) -- (1,-0.6) -- (0.5,0); \draw[red] (1,0) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \frac{1}{2}\lambda^4\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.85,0) -- (1,0.6) -- (1.5,0) -- (1.15,0) -- (1,-0.6); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.5,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.15,0); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } + \frac{1}{2}\lambda^5\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) --
(0.5,0) -- (0.85,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw (1,0.6) -- (1.15,0) -- (1.5,0); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \frac{1}{8}\lambda^6\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.15,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw (1,0.6) -- (1.15,0) -- (1.5,0); \draw (1,0.6) -- (0.85,0) -- (0.5,0); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,-0.6) -- (0.15,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.15,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} }. \end{gathered}$$ When we combine [\[eqn:F3Pt1\]](#eqn:F3Pt1){reference-type="eqref" reference="eqn:F3Pt1"} and [\[eqn:F3Pt2\]](#eqn:F3Pt2){reference-type="eqref" reference="eqn:F3Pt2"} and integrate over $x$, we see that many integrals can be bounded by integrals of two loops. 
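The expansions leading to [\[eqn:F3Pt1\]](#eqn:F3Pt1){reference-type="eqref" reference="eqn:F3Pt1"} and [\[eqn:F3Pt2\]](#eqn:F3Pt2){reference-type="eqref" reference="eqn:F3Pt2"} rest on elementary exponential inequalities, which only need to hold for nonnegative arguments since they are always applied to nonnegative $\varphi$-convolutions. A minimal numerical spot-check on a grid (the grid and tolerance are arbitrary choices):

```python
import math

# Grid check of the elementary bounds used in this section, for x >= 0
# (all applications have nonnegative arguments such as lam*phi-convolutions).
for i in range(0, 1001):
    x = i / 100.0                                    # x in [0, 10]
    f = 1.0 - math.exp(-x) - x * math.exp(-x)        # Poisson P(N >= 2) tail
    assert f <= 0.5 * x**2 + x**3 / 6.0 + 1e-12      # upper bound on the tail
    assert f >= 0.5 * x**2 - 0.5 * x**3 - 1e-12      # lower bound on the tail
    assert x * math.exp(-x) >= x - x**2 - 1e-12
    assert 1.0 - math.exp(-x) >= x - 0.5 * x**2 - 1e-12
```

None of these bounds hold on all of $\mathbb R$ (each fails for small negative $x$), which is why the restriction to $x\geq 0$ matters.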
These terms will be $\mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)$. Therefore $$\begin{gathered} \lambda\int\mathbb P_\lambda\left(\mathcal{F}_3\right)\mathrm{d}x \geq \lambda^4\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (1,0.6) -- (1.5,0) -- (1.25,0) -- (1,-0.6); \draw[red] (1,-0.6) -- (1,0.6); \draw[red] (1,-0.6) -- (1.5,0); \draw[red] (1,0.6) -- (1.25,0); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (1.25,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right) \\= \lambda^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw 
(1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). \end{gathered}$$ We then have a lower bound on $\widehat\Pi_\lambda^{(0)}(0)$ for any $\lambda>0$, and this gives the required result. ◻ ## Bounds on the First Lace Expansion Coefficient {#sec:Pi1bd} In this subsection we prove [\[eqPi1bd\]](#eqPi1bd){reference-type="eqref" reference="eqPi1bd"}. #### Upper Bound on $\widehat\Pi_{\lambda_c}^{(1)}(0)$ **Lemma 27**. *Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds. 
Then as $d\to\infty$, $$\begin{gathered} \lambda_c\widehat\Pi_{\lambda_c}^{(1)}(0) \leq\lambda_c^2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 3\lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\\ + \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) 
circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)\end{gathered}$$* We borrow from [@HeyHofLasMat19] in bounding $\mathbb P_\lambda\left( \{\mathbf{0} \Longleftrightarrow u\textrm { in } \xi^{\mathbf{0}, u}_{0}\} \cap E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right) \right)$, but we need to make refinements so that our lower bound will match the upper bound at the precision we are interested in. We begin by bounding $\{\mathbf{0} \Longleftrightarrow u\textrm { in } \xi^{\mathbf{0}, u}_{0}\} \cap E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right)$ by a slightly different event. **Definition 28**. Let $\xi_0,\xi_1$ be independent instances of the random graph with locally finite vertex sets $\eta_0$ and $\eta_1$. - Let $\left\{u \leftrightsquigarrow x\textrm { in } (\xi_0, \xi_1) \right\}$ denote the event that $u\in\eta_0$ and $x\in\eta_1$, but that $x$ does not survive a $\mathscr {C}\left(u, \xi^{u}_0\right)$-thinning of $\eta_1$. - Let $m\in\mathbb N$ and $\vec x, \vec y \in \left(\mathbb R^d\right)^m$. We define $\bigcirc_m^\leftrightarrow((x_j, y_j)_{1 \leq j \leq m}; \xi)$ as the event that $\{x_j \longleftrightarrow y_j\textrm { in } \xi\}$ occurs for every $1 \leq j \leq m$ with the additional requirement that every point in $\eta$ is the interior vertex of at most one of the $m$ paths, and none of the $m$ paths contains an interior vertex in the set $\left\{x_j\colon j\in[m]\right\} \cup \left\{y_j: j\in [m]\right\}$. - Let $\bigcirc_m^\leftrightsquigarrow\left( (x_j,y_j)_{1 \leq j \leq m}; (\xi_0,\xi_1)\right)$ be the intersection of the following two events. Firstly, that $\bigcirc_{m-1}^\leftrightarrow\left((x_j,y_j)_{1 \leq j <m};\xi_0\right)$ occurs but no path uses $x_{m}$ or $y_{m}$ as an interior vertex. 
Secondly, that $\{x_{m} \leftrightsquigarrow y_{m}\textrm { in } (\xi_0[\eta_0\setminus \{x_i, y_i\}_{1 \leq i <m}],\xi_1) \}$ occurs in such a way that at least one point $z$ in $\xi_0$ that is responsible for thinning out $y_m$ is connected to $x_m$ by a path $\gamma$ so that $z$ as well as all interior vertices of $\gamma$ are not contained in any path of the $\bigcirc_{m-1}^\leftrightarrow((x_j,y_j)_{1 \leq j <m};\xi_0)$ event. Now let $t,u,w,x,z\in\mathbb R^d$. Then define $$\begin{aligned} F^{(1)}_0\left(w,u,z;\xi_0,\xi_1\right) &:= \left\{\mathbf{0} \not\sim u\textrm { in } \xi_0\right\}\cap\bigcirc_4^\leftrightsquigarrow\left(\left(\mathbf{0},u\right),\left(\mathbf{0},w\right),\left(u,w\right),\left(w,z\right);\left(\xi_0,\xi_1\right)\right)\\ F^{(2)}_0\left(w,u,z;\xi_0,\xi_1\right) &:= \left\{w=\mathbf{0}\right\}\cap \left\{\mathbf{0} \sim u\textrm { in } \xi_1\right\}\cap\left\{w \leftrightsquigarrow z\textrm { in } \left(\xi_0\setminus\{u\},\xi_1\right) \right\}\\ F_1^{(1)}\left(u,t,z,x;\xi_1\right) &:= \left\{\#\left\{t,z,x\right\}=3\right\}\cap \bigcirc_4^\leftrightarrow\left(\left(u,t\right),\left(t,z\right),\left(t,x\right),\left(z,x\right);\xi_1\right)\cap\left\{t \not\sim x\textrm { in } \xi_1\right\}\\ F_1^{(2)}\left(u,t,z,x;\xi_1\right) &:=\left\{t=z=x\right\}\cap\left\{u \longleftrightarrow x\textrm { in } \xi_1\right\}. \end{aligned}$$ Also let $F_0 := F^{(1)}_0 \cup F^{(2)}_0$ and $F_1 := F^{(1)}_1 \cup F^{(2)}_1$. **Lemma 29**. *Let $x,u\in\mathbb R^d$ be distinct points. 
Then $$\mathds 1_{\left\{\mathbf{0} \Longleftrightarrow u\textrm { in } \xi^{\mathbf{0}, u}_{0}\right\}}\mathds 1_{E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right)} \leq \sum_{z\in\eta_1^x}\left(\sum_{w\in \eta_0^\mathbf{0}}\mathds 1_{F_0\left(w,u,z;\xi_0^{\mathbf{0},u},\xi_1^{u,x}\right)}\right)\left(\sum_{t\in\eta_1^{u,x}}\mathds 1_{F_1\left(u,t,z,x;\xi_1^{u,x}\right)}\right).$$* *Proof.* We first prove that $$\label{eqn:F-bounds_FirstStep} \mathds 1_{E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right)} \leq \sum_{z\in\eta_1^x}\sum_{t\in\eta_1^{u,x}}\mathds 1_{F_1\left(u,t,z,x;\xi_1^{u,x}\right)}\mathds 1_{\left\{\mathbf{0} \leftrightsquigarrow z\textrm { in } \left(\xi_0^{\mathbf{0}},\xi_1^{u,x}\right) \right\}}.$$ Note that the event $E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right)$ is contained in the event that $u$ is connected to $x$ and that this connection fails after a $\mathscr {C}_0$-thinning of $\eta_1^x$. There are two cases under which this can happen. Case (a): The point $x$ itself is thinned out. In this case $$\begin{gathered} E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right) \subset \left\{u \longleftrightarrow x\textrm { in } \xi_1^{u,x}\right\}\cap\left\{\mathbf{0} \leftrightsquigarrow x\textrm { in } \left(\xi_0^{\mathbf{0}},\xi_1^{u,x}\right) \right\} \\= F^{(2)}_1\left(u,x,x,x;\xi_1^{u,x}\right) \cap\left\{\mathbf{0} \leftrightsquigarrow x\textrm { in } \left(\xi_0^{\mathbf{0}},\xi_1^{u,x}\right) \right\}. \end{gathered}$$ Case (b): The point $x$ is not thinned out. This implies that there is at least one interior point on the path between $u$ and $x$, and that at least one of these interior points is thinned out by $\mathscr {C}_0$. Let $t$ be the last pivotal point in $\textsf {Piv}(u,x;\xi_1^{u,x})$, and set $t=u$ if $\textsf {Piv}(u,x;\xi_1^{u,x})=\emptyset$. Since $t$ is the last pivotal point (or there are no pivotal points), we have $\left\{t \Longleftrightarrow x\textrm { in } \xi_1^x\right\}$. 
The event $E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right)$ implies that all the paths from $t$ to $x$ fail after a $\mathscr {C}_0$-thinning, but that $t$ is not thinned out. We can pick any thinned out point on a path from $t$ to $x$ to be our $z$, while noting that $t$ and $x$ cannot be adjacent. Therefore this case corresponds to the possible occurrences of $F^{(1)}_1$, and we have proven [\[eqn:F-bounds_FirstStep\]](#eqn:F-bounds_FirstStep){reference-type="eqref" reference="eqn:F-bounds_FirstStep"}. Now it only remains to prove that $$\mathds 1_{\left\{\mathbf{0} \Longleftrightarrow u\textrm { in } \xi^{\mathbf{0}, u}_{0}\right\}}\mathds 1_{\left\{\mathbf{0} \leftrightsquigarrow z\textrm { in } \left(\xi_0^{\mathbf{0}},\xi_1^{u,x}\right) \right\}} \leq \sum_{w\in \eta_0^\mathbf{0}}\mathds 1_{F_0\left(w,u,z;\xi_0^{\mathbf{0},u},\xi_1^{u,x}\right)}.$$ The event $\left\{\mathbf{0} \leftrightsquigarrow z\textrm { in } \left(\xi_0^{\mathbf{0}},\xi_1^{u,x}\right) \right\}$ implies that there exists at least one point in $\mathscr {C}_0$ that is responsible for thinning out $z$. Let $\gamma$ denote a path from $\mathbf{0}$ to this point in $\mathscr {C}_0$. Once again we have two cases to consider. Case (a): $\mathbf{0} \not\sim u\textrm { in } \xi^{\mathbf{0},u}_0$. Then $\left\{\mathbf{0} \Longleftrightarrow u\textrm { in } \xi^{\mathbf{0}, u}_{0}\right\}$ implies that there exist two disjoint paths (denoted $\gamma'$ and $\gamma''$) from $\mathbf{0}$ to $u$. Both of these paths are necessarily of length greater than or equal to $2$. Let $w$ denote the last vertex $\gamma$ shares with either $\gamma'$ or $\gamma''$ (allowing for the possibility that $w=\mathbf{0}$). Requiring that $\gamma$, $\gamma'$, and $\gamma''$ exist results precisely in the event $F_0^{(1)}$. Case (b): $\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0$. Now we fix $w=\mathbf{0}$ immediately.
The existence of the path from $\mathbf{0}$ to the thinning point implies the event $F_0^{(2)}$. ◻ **Definition 30** (The $\psi$ functions). Let $r,s,u,w,x,y\in\mathbb R^d$ and $n\geq 1$. We first set $\tau_\lambda^\circ(x) := \lambda^{-1}\delta_{x,\mathbf{0}} + \tau_\lambda(x)$ and $\tau_\lambda^{(\geq 2)}(x) := \mathbb P_\lambda\left(\mathbf{0} \xleftrightarrow{\,\,\geq 2\,\,} x\textrm { in } \xi^{\mathbf{0},x}\right) = \tau_\lambda(x) - \varphi(x)$. Also define $$\begin{aligned} \psi_0^{(1)}(w,u) &:= \lambda^2\tau_\lambda^{(\geq 2)}(u)\tau_\lambda(u-w)\tau_\lambda(w),\\ \psi_0^{(2)}(w,u) &:=\lambda^2 \delta_{w,\mathbf{0}} \tau_\lambda^{(\geq 2)}(u)\int \tau_\lambda(u-t)\tau_\lambda(t) \mathrm{d}t,\\ \psi_0^{(3)}(w,u) &:= \lambda\varphi(u) \delta_{w,\mathbf{0}},\\ \psi^{(1)}(w,u,r,s) &:= \lambda^4\tau_\lambda(w-u)\int \tau_\lambda^\circ(t-s) \tau_\lambda(t-w)\tau_\lambda(u-z)\tau_\lambda(z-t)\tau_\lambda(z-r)\mathrm{d}z \mathrm{d}t, \\ \psi^{(2)}(w,u,r,s) &:= \lambda^4\tau_\lambda^\circ(w-s)\int \tau_\lambda(t-z)\tau_\lambda(z-u)\tau_\lambda(u-t)\tau_\lambda^\circ(t-w)\tau_\lambda(z-r)\mathrm{d}z \mathrm{d}t, \\ \psi^{(3)}(w,u,r,s) &:= \lambda^2\tau_\lambda(u-w)\tau_\lambda(w-s)\tau_\lambda(u-r),\\ \psi^{(4)}(w,u,r,s) &:= \lambda\delta_{w,s}\tau_\lambda(u-w)\tau_\lambda(u-r),\\ \psi_n^{(1)} (x,r,s) &:= \lambda^3\int \tau_\lambda^\circ(t-s)\tau_\lambda(z-r)\tau_\lambda(t-z)\tau_\lambda(z-x)\tau_\lambda^{(\geq 2)}(x-t)\mathrm{d}z\mathrm{d}t,\\ \psi_n^{(2)}(x,r,s) &:=\lambda \tau_\lambda(x-s)\tau_\lambda(x-r),\end{aligned}$$ and set $\psi_0 := \psi_0^{(1)}+\psi_0^{(2)}+\psi_0^{(3)}$, $\psi_n := \psi_n^{(1)} + \psi_n^{(2)}$, and $\psi:= \psi^{(1)}+\psi^{(2)}+\psi^{(3)} + \psi^{(4)}$. For our bounds on $\widehat\Pi_{\lambda_c}^{(1)}(0)$ we will only require $\psi_0$ and $\psi_1$. Later we will also use $\psi$ to bound $\widehat\Pi_{\lambda_c}^{(n)}(0)$ for $n\geq 2$. **Lemma 31**. 
*$$\begin{gathered} \lambda_c\widehat\Pi_{\lambda_c}^{(1)}(0) \leq \int \psi_0\left(w,u\right)\psi_1\left(x,w,u\right)\mathrm{d}u\mathrm{d}w\mathrm{d}x \\= \lambda_c^2\int \varphi(u)\tau_{\lambda_c}(x)\tau_{\lambda_c}(u-x)\mathrm{d}u\mathrm{d}x + \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right).\end{gathered}$$* *Proof.* The first inequality is proven in almost exactly the same way as [@HeyHofLasMat19 Proposition 7.2]. The first difference is that our event $F^{(1)}_1$ includes the intersection with $\left\{t \not\sim x\textrm { in } \xi_1\right\}$. Since the event $\bigcirc_4^\leftrightarrow\left(\left(u,t\right),\left(t,z\right),\left(t,x\right),\left(z,x\right);\xi_1\right)$ ensures that $t$ and $x$ are connected in $\xi_1$, this means that the event $t \xleftrightarrow{\,\,\geq 2\,\,} x\textrm { in } \xi_1$ occurs. This then manifests in the end result as the occurrence of a $\tau_\lambda^{(\geq 2)}$ function rather than a $\tau_\lambda$ function in the integral in $\psi^{(1)}_1$.
Similarly, the event $F^{(1)}_0$ implies that $\mathbf{0} \xleftrightarrow{\,\,\geq 2\,\,} u\textrm { in } \xi_0$, and this results in the $\tau_\lambda^{(\geq 2)}(u)$ appearing rather than $\tau_\lambda(u)$ in $\psi^{(1)}_0$ and $\psi^{(2)}_0$. For the equality, we first note that $$\int \psi^{(3)}_0\left(w,u\right)\psi^{(2)}_1\left(x,w,u\right)\mathrm{d}u\mathrm{d}w\mathrm{d}x = \lambda^2\int \varphi(u)\tau_\lambda(x)\tau_\lambda(u-x)\mathrm{d}u\mathrm{d}x.$$ Our task is then to show that all the other terms $\int \psi^{(j_0)}_0\left(w,u\right)\psi^{(j_1)}_1\left(x,w,u\right)\mathrm{d}u\mathrm{d}w\mathrm{d}x$ are error terms. To make it clearer what we are trying to bound, we present the integral $\int \psi_0\left(w,u\right)\psi_1\left(x,w,u\right)\mathrm{d}u\mathrm{d}w\mathrm{d}x$ diagrammatically: $$\begin{gathered} \lambda\widehat\Pi_\lambda^{(1)}(0) \leq \lambda^5\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \draw[green] (2,0.6) -- (3,0) -- (2,-0.6); \draw (1.5,-0.6) circle (0pt) node[above]{$\circ$}; \draw (0.5,-0.3) circle (0pt) node[rotate=-31,above]{\tiny $\geq 2$}; \draw (2.5,-0.3) circle (0pt) node[rotate=31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (0,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0.6) circle (2pt); \filldraw (2,-0.6) circle (2pt); \filldraw (3,0) circle (2pt); \end{tikzpicture} }+ \lambda^3\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); \draw (0.5,-0.3) circle (0pt) node[rotate=-31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (0,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }+ \lambda^5\raisebox{-17pt}{ \begin{tikzpicture}[scale=1]
\filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \draw[green] (2,0.6) -- (3,0) -- (2,-0.6); \draw (1.5,-0.6) circle (0pt) node[above]{$\circ$}; \draw (1,0) circle (0pt) node[rotate=90,above]{\tiny $\geq 2$}; \draw (2.5,-0.3) circle (0pt) node[rotate=31,above]{\tiny $\geq 2$}; \filldraw (0,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0.6) circle (2pt); \filldraw (2,-0.6) circle (2pt); \filldraw (3,0) circle (2pt); \end{tikzpicture} }\\ +\lambda^3\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); \draw (1,0) circle (0pt) node[rotate=90,above]{\tiny $\geq 2$}; \filldraw (0,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }+ \lambda^4\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1,-0.6); \draw[green] (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \draw[green] (2,0.6) -- (3,0) -- (2,-0.6); \draw (1.5,-0.6) circle (0pt) node[above]{$\circ$}; %\draw (1,0) circle (0pt) node[rotate=90,above]{$\sim$}; \draw (2.5,-0.3) circle (0pt) node[rotate=31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0.6) circle (2pt); \filldraw (2,-0.6) circle (2pt); \filldraw (3,0) circle (2pt); \end{tikzpicture} }+ \lambda^2\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1,-0.6); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); %\draw (1,0) circle (0pt) node[rotate=90,above]{$\sim$}; \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }. 
\end{gathered}$$ The last of these six diagrams will be the only relevant one for our level of precision. To demonstrate how we bound these other five, we examine the second: $$\lambda^3\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); \draw (0.5,-0.3) circle (0pt) node[rotate=-31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (0,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }= \lambda^3\int\tau_\lambda^{(\geq 2)}(u)\tau_\lambda(w)\tau_\lambda(w-u)\tau_\lambda(x-w)\tau_\lambda(x-u)\mathrm{d}u\mathrm{d}w\mathrm{d}x.$$ First we expand $\tau_\lambda^{(\geq 2)}(u)\leq \lambda\varphi^{\star 2}(u) + \lambda^2\varphi^{\star 3}(u) + \lambda^3\varphi^{\star 4}(u) + \lambda^4\varphi^{\star 5}(u) + \lambda^5\varphi^{\star 6}(u) + \lambda^6\varphi^{\star 6}\star \tau_\lambda(u)$. For the two diagrams that result from the last two terms in this expansion, we can bound $\tau_\lambda(w-u)\leq 1$ to get terms of the form $\lambda^{j+5}\varphi^{\star 6}\star\tau_\lambda^{\star j}\left(\mathbf{0}\right)$ for $j\in\left\{3,4\right\}$. From Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"}, both of these are $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$ when $\lambda\leq \lambda_c$. For the remaining diagrams we then bound $\tau_\lambda(w) \leq \varphi(w) + \lambda\varphi^{\star 2}(w) + \lambda^2\varphi^{\star 3}(w) + \lambda^3\varphi^{\star 4}(w) + \lambda^4\varphi^{\star 4}\star \tau_\lambda(w)$ and if the diagrams contain a loop of at least six $\varphi$ functions and maybe some $\tau_\lambda$ functions, we once again bound $\tau_\lambda(w-u)\leq 1$ and use Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"} to show that they are $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$. 
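All of these $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$ estimates ultimately rest on the smallness of high convolution powers of $\varphi$ at the origin. As a purely illustrative aside — taking a one-dimensional standard Gaussian density as a stand-in for the connection function $\varphi$, an assumption made only for this sketch and not in the text, where the smallness instead comes from $d\to\infty$ — one can evaluate $\varphi^{\star m}(\mathbf{0})$ by Fourier inversion and observe the decay in $m$:

```python
import numpy as np

# Illustration only: for phi the standard normal density on R, the Fourier
# transform is exp(-k^2/2), so Fourier inversion gives
#   phi^{*m}(0) = (1/2pi) * int exp(-m k^2 / 2) dk = 1/sqrt(2*pi*m).
def conv_power_at_zero(m: int, K: float = 20.0, n: int = 200001) -> float:
    """Numerically evaluate phi^{*m}(0) via a Riemann sum in Fourier space."""
    k = np.linspace(-K, K, n)
    dk = k[1] - k[0]
    return float(np.sum(np.exp(-m * k**2 / 2)) * dk / (2 * np.pi))

# The six-fold "error" diagram value phi^{*6}(0) is smaller than the leading
# triangle-type value phi^{*3}(0), which is what the bounds above exploit.
triangle, error = conv_power_at_zero(3), conv_power_at_zero(6)
```

The hypothetical helper `conv_power_at_zero` only illustrates the Fourier-inversion identity behind the Lemma 14-type tail bounds; it is not a computation performed in the paper.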
For the remaining diagrams we bound $\tau_\lambda(x-w) \leq \varphi(x-w) + \lambda\varphi^{\star 2}(x-w) + \lambda^2\varphi^{\star 3}(x-w) + \lambda^3\varphi^{\star 3}\star \tau_\lambda(x-w)$. Again bounding $\tau_\lambda(w-u)\leq 1$ allows us to use Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"} to show that some of these diagrams are $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$. After bounding $\tau_\lambda(x-u) \leq \varphi(x-u) + \lambda\varphi^{\star 2}(x-u) + \lambda^2\varphi^{\star 2}\star\tau_\lambda(x-u)$ and showing that some terms are $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$, we arrive at $$\lambda_c^3\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); \draw (0.5,-0.3) circle (0pt) node[rotate=-31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (0,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }\leq \lambda_c^4\int \varphi^{\star 2}(u)\varphi(w)\tau_{\lambda_c}(w-u)\varphi(x-w)\varphi(x-u)\mathrm{d}u\mathrm{d}w\mathrm{d}x + \mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right).$$ Then we bound $\tau_\lambda(w-u)\leq \varphi(w-u) + \lambda\varphi^{\star 2}(w-u) + \lambda^2\varphi^{\star 3}(w-u) + \lambda^3\varphi^{\star 3}\star\tau_\lambda(w-u)$ to get $$\begin{aligned} &\lambda^4\int \varphi^{\star 2}(u)\varphi(w)\tau_\lambda(w-u)\varphi(x-w)\varphi(x-u)\mathrm{d}u\mathrm{d}w\mathrm{d}x \nonumber\\ &\hspace{4cm}\leq \lambda^4\int \varphi^{\star 2}(u)\varphi(w)\varphi(w-u)\varphi(x-w)\varphi(x-u)\mathrm{d}u\mathrm{d}w\mathrm{d}x \nonumber\\ &\hspace{5cm}+ \lambda^5\int \varphi^{\star 2}(u)\varphi(w)\varphi^{\star 2}(w-u)\varphi(x-w)\varphi(x-u)\mathrm{d}u\mathrm{d}w\mathrm{d}x \nonumber\\ &\hspace{5cm}+ \lambda^6\int \varphi^{\star 2}(u)\varphi(w)\varphi^{\star 
3}(w-u)\varphi(x-w)\varphi(x-u)\mathrm{d}u\mathrm{d}w\mathrm{d}x\nonumber\\ &\hspace{5cm}+ \lambda^7\int \varphi^{\star 2}(u)\varphi(w)\varphi^{\star 3}\star\tau_\lambda(w-u)\varphi(x-w)\varphi(x-u)\mathrm{d}u\mathrm{d}w\mathrm{d}x. \end{aligned}$$ From the commutativity of convolution, observe that the first two terms are $\mathcal{O}\left(\int\varphi(x)\varphi^{\star 2}(x)\varphi^{\star 3}(x)\mathrm{d}x\right)$. For the last two terms we bound $\int\varphi(x-w)\varphi(x-u)\mathrm{d}x \leq \int\varphi(x-w)\mathrm{d}x = 1$ for all $u,w\in\mathbb R^d$. Therefore we can apply Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"} to show that these diagrams are $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$ when $\lambda\leq \lambda_c$. In summary, we have $$\lambda_c^3\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); \draw (0.5,-0.3) circle (0pt) node[rotate=-31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (0,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right).$$ Repeating these ideas for the other diagrams produces 
$$\begin{aligned} \lambda_c^5\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \draw[green] (2,0.6) -- (3,0) -- (2,-0.6); \draw (1.5,-0.6) circle (0pt) node[above]{$\circ$}; \draw (0.5,-0.3) circle (0pt) node[rotate=-31,above]{\tiny $\geq 2$}; \draw (2.5,-0.3) circle (0pt) node[rotate=31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (0,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0.6) circle (2pt); \filldraw (2,-0.6) circle (2pt); \filldraw (3,0) circle (2pt); \end{tikzpicture} }&= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\\ \lambda_c^5\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \draw[green] (2,0.6) -- (3,0) -- (2,-0.6); \draw (1.5,-0.6) circle (0pt) node[above]{$\circ$}; \draw (1,0) circle (0pt) node[rotate=90,above]{\tiny $\geq 2$}; \draw (2.5,-0.3) circle (0pt) node[rotate=31,above]{\tiny $\geq 2$}; \filldraw (0,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0.6) circle (2pt); \filldraw (2,-0.6) circle (2pt); \filldraw (3,0) circle (2pt); \end{tikzpicture} }&= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- 
(0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\\ \lambda_c^3\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw[green] (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw[green] (1,0.6) -- (2,0) -- (1,-0.6); \draw (1,0) circle (0pt) node[rotate=90,above]{\tiny $\geq 2$}; \filldraw (0,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }&= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\\ \lambda_c^4\raisebox{-17pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1,-0.6); \draw[green] (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \draw[green] (2,0.6) -- (3,0) -- (2,-0.6); \draw (1.5,-0.6) circle (0pt) node[above]{$\circ$}; %\draw (1,0) circle (0pt) node[rotate=90,above]{$\sim$}; \draw (2.5,-0.3) circle (0pt) node[rotate=31,above]{\tiny $\geq 2$}; \filldraw[fill=white] (1,0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (2,0.6) circle (2pt); \filldraw (2,-0.6) circle (2pt); \filldraw (3,0) circle (2pt); \end{tikzpicture} }&= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) 
-- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). \end{aligned}$$ ◻ **Lemma 32**. *Let $x,u\in\mathbb R^d$. Then $$\begin{aligned} \label{eqn:TauTauExpansion} \tau_\lambda(x)\tau_\lambda(u-x)&\leq \varphi(x)\varphi(u-x) + \varphi(x)\varphi^{[2]}(u-x) + \varphi^{[2]}(x)\varphi(u-x) + \varphi(x)\varphi^{[3]}(u-x) + \varphi^{[3]}(x)\varphi(u-x)\nonumber\\ &\qquad+ \varphi^{[2]}(x)\varphi^{[2]}(u-x) + \lambda^3\varphi(x)\varphi^{\star 4}(u-x) + \lambda^5\varphi(x)\varphi^{\star 4}\star\tau_\lambda(u-x)\nonumber\\ &\qquad + \lambda^3\varphi^{\star 2}(x)\varphi^{\star 3}(u-x) + \lambda^4\varphi^{\star 2}(x)\varphi^{\star 3}\star\tau_\lambda(u-x) + \lambda^4\varphi^{\star 3}(x)\varphi^{\star 2}(u-x) \nonumber\\ &\qquad+ \lambda^4\varphi^{\star 3}(x)\varphi^{\star 2}\star\tau_\lambda(u-x) + \lambda^3\varphi^{\star 4}(x)\varphi(u-x) + \lambda^4\varphi^{\star 4}(x)\varphi\star\tau_\lambda(u-x)\nonumber\\ &\qquad + \lambda^4\varphi^{\star 4}\star\tau_\lambda(x)\varphi(u-x)+ \lambda^5\varphi^{\star 4}\star\tau_\lambda(x)\varphi\star\tau_\lambda(u-x). 
\end{aligned}$$ Therefore $$\begin{aligned} \lambda_c^2\varphi\star\tau_{\lambda_c}^{\star 2}\left(\mathbf{0}\right) &\leq \lambda_c^2\varphi^{\star 3}\left(\mathbf{0}\right) + 2\lambda_c^2\varphi^{\star 2}\star\varphi^{[2]}\left(\mathbf{0}\right) + 2\lambda_c^2\varphi^{\star 2}\star\varphi^{[3]}\left(\mathbf{0}\right) + \lambda_c^2\varphi\star\varphi^{[2]}\star\varphi^{[2]}\left(\mathbf{0}\right) + \mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)\nonumber\\ &\leq \lambda_c^2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 3\lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ 
\begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (1,-0.6); \draw[red] (1.5,0) -- (1,-0.6); \draw[red] (1,0) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} }\right). \end{aligned}$$* *Proof.* Equation [\[eqn:TauTauExpansion\]](#eqn:TauTauExpansion){reference-type="eqref" reference="eqn:TauTauExpansion"} follows from applying Lemma [Lemma 18](#lem:tauUpperbound){reference-type="ref" reference="lem:tauUpperbound"} to both $\tau_\lambda(x)$ and $\tau_\lambda(u-x)$ with $n=4$, and then bounding $\varphi^{[m]} \leq \lambda^{m-1}\varphi^{\star m}$ in some places. For the second part, we use [\[eqn:TauTauExpansion\]](#eqn:TauTauExpansion){reference-type="eqref" reference="eqn:TauTauExpansion"} in conjunction with Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"} to show that many of the terms are $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$, producing the first inequality.
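As a side remark, the bound $\varphi^{[m]} \leq \lambda^{m-1}\varphi^{\star m}$ invoked above can be seen directly in the case $m=2$. A one-line sketch, using only the expression [\[eqn:ExConnf2\]](#eqn:ExConnf2){reference-type="eqref" reference="eqn:ExConnf2"} for $\varphi^{[2]}$ and the elementary estimate $1-\text{e}^{-t}\leq t$:

```latex
\varphi^{[2]}(x)
  = \left(1-\varphi(x)\right)\left(1-\exp\left(-\lambda\varphi^{\star 2}(x)\right)\right)
  \leq 1-\exp\left(-\lambda\varphi^{\star 2}(x)\right)
  \leq \lambda\varphi^{\star 2}(x),
```

since $0\leq 1-\varphi\leq 1$; the cases $m\geq 3$ follow in the same spirit from the corresponding expressions for $\varphi^{[m]}$.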
We then immediately have $\lambda^2\varphi^{\star 3}\left(\mathbf{0}\right) = \lambda^2 \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }$, and simply bounding $\varphi^{[3]}\leq \lambda^2\varphi^{\star 3}$ and $\varphi^{[2]}\leq \lambda\varphi^{\star 2}$ gives $$2\lambda^2\varphi^{\star 2}\star\varphi^{[3]}\left(\mathbf{0}\right) + \lambda^2\varphi\star\varphi^{[2]}\star\varphi^{[2]} \left(\mathbf{0}\right) \leq 3\lambda^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }.$$ Bounding $2\lambda^2\varphi^{\star 2}\star\varphi^{[2]}\left(\mathbf{0}\right)$ appropriately, however, requires a little more care. Recall from [\[eqn:ExConnf2\]](#eqn:ExConnf2){reference-type="eqref" reference="eqn:ExConnf2"} that $\varphi^{[2]}(x) = \left(1-\varphi(x)\right)\left(1-\exp\left(-\lambda\varphi^{\star 2}(x)\right)\right)$. Using $1-\text{e}^{-x} \leq x - \frac{1}{2}x^2 + \frac{1}{6}x^3$ for all $x\in\mathbb R$ then gives $$\begin{gathered} \lambda^2\varphi^{\star 2}\star\varphi^{[2]}\left(\mathbf{0}\right) \leq \lambda^3\int\left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\mathrm{d}x \\-\frac{1}{2}\lambda^4\int\left(1-\varphi(x)\right)\varphi^{\star 2}(x)^3\mathrm{d}x + \frac{1}{6}\lambda^5\int\left(1-\varphi(x)\right)\varphi^{\star 2}(x)^4\mathrm{d}x. \end{gathered}$$ The second term is nonpositive (since $\varphi\leq 1$) and can therefore be safely discarded; for the third term we use $1-\varphi(x)\leq 1$ and $\varphi^{\star 2}(x)\leq \int\varphi(x)\mathrm{d}x=1$ for all $x\in\mathbb R^d$.
This produces $$\begin{gathered} \lambda^2\varphi^{\star 2}\star\varphi^{[2]}\left(\mathbf{0}\right) \leq \lambda^3\int\left(1-\varphi(x)\right)\varphi^{\star 2}(x)^2\mathrm{d}x + \frac{1}{6}\lambda^5\int\varphi^{\star 2}(x)^3\mathrm{d}x \\= \lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right), \end{gathered}$$ as required. ◻ This concludes the proof of Lemma [Lemma 27](#lem:Pi1_UpperBound){reference-type="ref" reference="lem:Pi1_UpperBound"}. #### Lower Bound on $\widehat\Pi_{\lambda_c}^{(1)}(0)$ **Lemma 33**. 
*$$\lambda_c\widehat\Pi_{\lambda_c}^{(1)}(0) \geq \lambda_c^2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ 3\lambda_c^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- 2\lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right)$$* *Proof.* Our strategy is to identify disjoint events contained in $\{\mathbf{0} \Longleftrightarrow u\textrm { in } \xi^{\mathbf{0}, u}_{0}\} \cap 
E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right)$ for each $u,x\in\mathbb R^d$, and show that the integral of their probabilities is equal to our upper bound to the required precision. Our disjoint events are the following: $$\begin{aligned} \mathcal{G}_1 &:= \left\{\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0\right\}\cap\left\{u \sim x\textrm { in } \xi^{u,x}_1 \right\}\cap\left\{x\notin \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\right\} \\ \mathcal{G}_2 &:= \left\{\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0\right\}\cap\left\{u \xleftrightarrow{\,\,2\,\,} x\textrm { in } \xi^{u,x}_1 \right\}\cap\left\{x\notin \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\right\}\\ \mathcal{G}_3 &:= \left\{\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0\right\}\cap\left\{u \sim x\textrm { in } \xi^{u,x}_1 \right\}\cap\left\{x\in \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\right\}\cap\left\{\exists v\in\eta_0\colon \mathbf{0} \sim v\textrm { in } \xi^\mathbf{0}_0, x\notin \eta^{u_0,x}_{1,\langle v \rangle}\right\} \\ \mathcal{G}_4 &:= \left\{\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0\right\}\cap\left\{u \xleftrightarrow{\,\,3\,\,} x\textrm { in } \xi^{u,x}_1 \right\}\cap\left\{x\notin \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\right\}\\ \mathcal{G}_5 &:= \left\{\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0\right\}\cap\left\{u \xleftrightarrow{\,\,2\,\,} x\textrm { in } \xi^{u,x}_1 \right\}\cap\left\{x\in \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\right\}\cap \left\{\exists v\in\eta_0\colon \mathbf{0} \sim v\textrm { in } \xi^\mathbf{0}_0, x\notin \eta^{u_0,x}_{1,\langle v \rangle}\right\}\\ \mathcal{G}_6 &:= \left\{\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0\right\}\cap\left\{u \sim x\textrm { in } \xi^{u,x}_1 \right\}\cap\left\{x\in \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\right\} \cap\left\{\not\exists v\in\eta_0\colon \mathbf{0} \sim v\textrm { in } \xi^\mathbf{0}_0, x\in\eta^{u_0,x}_{1,\langle v \rangle}\right\}\nonumber\\ 
&\hspace{5cm}\cap \left\{\exists w\in\eta_0\colon \mathbf{0} \xleftrightarrow{\,\,2\,\,} w\textrm { in } \xi^\mathbf{0}_0, x\notin \eta^{u_0,x}_{1,\langle w \rangle}\right\}\end{aligned}$$ Observe that these events are indeed disjoint, and all are subsets of $\{\mathbf{0} \Longleftrightarrow u\textrm { in } \xi^{\mathbf{0}, u}_{0}\} \cap E\left(u,x; \mathscr {C}_{0}, \xi^{u, x}_{1}\right)$. We now want to bound their probabilities from below. For $\mathcal{G}_1$, the events $\big\{\mathbf{0} \sim u\textrm { in } \xi^{\mathbf{0},u}_0\big\}$ and $\left\{u \sim x\textrm { in } \xi^{u,x}_1 \right\}$ are clearly independent. The event $\big\{x\notin \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\big\}$ is also independent of these previous two, because it uses a thinning random variable from $\eta_1^{u,x}$. The probability that $x$ is thinned out by the single vertex $\mathbf{0}$ is also equal to the probability that an edge forms between these vertices. Therefore $\mathbb P_\lambda\left(\mathcal{G}_1\right) = \varphi(u)\varphi(x-u)\varphi(x)$. The other events proceed similarly with a few points to note. All the events that are intersected to compose the $\mathcal{G}_i$ are independent because they use different (independent) edge random variables and thinning random variables. Also, the events like $\big\{u \xleftrightarrow{\,\,n\,\,} x\textrm { in } \xi^{u,x}_1 \big\}$ have probability given exactly by $\varphi^{[n]}(x-u)$ by definition of that function. The event $\big\{x\in \eta^{u,x}_{1,\langle \mathbf{0} \rangle}\big\}\cap\big\{\exists v\in\eta_0\colon \mathbf{0} \sim v\textrm { in } \xi^\mathbf{0}_0, x\notin \eta^{u_0,x}_{1,\langle v \rangle}\big\}$ says that $x$ *is not* thinned out by $\mathbf{0}$, and that there exists a $v$ that forms an edge with $\mathbf{0}$ and thins out $x$. This has probability equal to that of the event that no edge forms between $\mathbf{0}$ and $x$, and that they have at least one mutual neighbour. 
This is precisely the probability given by $\varphi^{[2]}(x)$. Similar considerations allow us to find factors of $\varphi^{[2]}$ and $\varphi^{[3]}$ in the probability of the remaining events. The lower bounds we use are summarised here: $$\begin{aligned} \mathbb P_\lambda\left(\mathcal{G}_1\right) &= \varphi(u)\varphi(x-u)\varphi(x),\\ \mathbb P_\lambda\left(\mathcal{G}_2\right) &= \varphi(u)\varphi^{[2]}(x-u)\varphi(x)\nonumber\\ &\geq \varphi(u)\varphi(x)\left(1-\varphi(x-u)\right)\left(\lambda\varphi^{\star 2}(x-u) - \frac{1}{2}\lambda^2\varphi^{\star 2}(x-u)^2\right),\\ \mathbb P_\lambda\left(\mathcal{G}_3\right) &= \varphi(u)\varphi(x-u)\varphi^{[2]}(x)\nonumber\\ &\geq \varphi(u)\varphi(x-u)\left(1-\varphi(x)\right)\left(\lambda\varphi^{\star 2}(x) - \frac{1}{2}\lambda^2\varphi^{\star 2}(x)^2\right),\\ \mathbb P_\lambda\left(\mathcal{G}_4\right) &= \varphi(u)\varphi^{[3]}(x-u)\varphi(x),\\ \mathbb P_\lambda\left(\mathcal{G}_5\right) &= \varphi(u)\varphi^{[2]}(x-u)\varphi^{[2]}(x)\nonumber\\ &\geq \varphi(u)\left(1-\varphi(x-u)\right)\left(1-\varphi(x)\right)\left(\lambda^2\varphi^{\star 2}(x-u)\varphi^{\star 2}(x) - \frac{1}{2}\lambda^3\varphi^{\star 2}(x-u)^2\varphi^{\star 2}(x)\right.\nonumber\\ &\hspace{5cm}\left.- \frac{1}{2}\lambda^3\varphi^{\star 2}(x-u)\varphi^{\star 2}(x)^2 + \frac{1}{4}\lambda^4\varphi^{\star 2}(x-u)^2\varphi^{\star 2}(x)^2\right),\\ \mathbb P_\lambda\left(\mathcal{G}_6\right) &= \varphi(u)\varphi(x-u)\varphi^{[3]}(x).\end{aligned}$$ For $\mathbb P_\lambda\left(\mathcal{G}_2\right)$, $\mathbb P_\lambda\left(\mathcal{G}_3\right)$, and $\mathbb P_\lambda\left(\mathcal{G}_5\right)$ we have used a lower bound on $\varphi^{[2]}$ by observing $1-\text{e}^{-x}\geq x-\frac{1}{2}x^2$ for all $x\geq 0$ (which suffices here, since the argument $\lambda\varphi^{\star 2}$ is non-negative) and using this with the expression for $\varphi^{[2]}$ in [\[eqn:ExConnf2\]](#eqn:ExConnf2){reference-type="eqref" reference="eqn:ExConnf2"}.
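Both elementary exponential estimates used in this section — the cubic upper bound $1-\text{e}^{-t}\leq t-\frac{1}{2}t^2+\frac{1}{6}t^3$ from the proof of Lemma 32, and the quadratic lower bound $1-\text{e}^{-t}\geq t-\frac{1}{2}t^2$ used just above — are easy to sanity-check numerically. A minimal standalone script (not part of the argument):

```python
import math

# Cubic upper bound used in the proof of Lemma 32:
#   1 - exp(-t) <= t - t^2/2 + t^3/6  for every real t.
def cubic_upper(t):
    return t - t**2 / 2 + t**3 / 6

# Quadratic lower bound used for phi^[2]:
#   1 - exp(-t) >= t - t^2/2, valid for t >= 0 only (it fails for t < 0,
#   which is harmless here since the argument lambda * phi^{*2}(x) is >= 0).
def quadratic_lower(t):
    return t - t**2 / 2

grid = [k / 100 for k in range(-500, 501)]
assert all(1 - math.exp(-t) <= cubic_upper(t) + 1e-12 for t in grid)
assert all(1 - math.exp(-t) >= quadratic_lower(t) - 1e-12 for t in grid if t >= 0)
# The quadratic lower bound genuinely fails for negative arguments:
assert 1 - math.exp(1) < quadratic_lower(-1.0)
print("exponential bounds verified on the grid")
```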
From these we can bound $$\begin{aligned} \lambda^2\int\mathbb P_\lambda\left(\mathcal{G}_1\right)\mathrm{d}u\mathrm{d}x &= \lambda^2\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} },\\ \lambda^2\int\mathbb P_\lambda\left(\mathcal{G}_2\right)\mathrm{d}u\mathrm{d}x &\geq \lambda^3\raisebox{-18pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \draw[red] (1,0.6) -- (1,-0.6); \filldraw (1,-0.6) circle (2pt); \filldraw[fill=white] (0,0) circle (2pt); \filldraw (2,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }- \frac{1}{2}\lambda^4\raisebox{-18pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,0) -- (1,0.6) -- (1.5,0) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \draw[red] (1,0.6) -- (1,-0.6); \filldraw (1,-0.6) circle (2pt); \filldraw[fill=white] (0,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }\nonumber\\ &= \lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- 
(0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\\ \lambda^2\int\mathbb P_\lambda\left(\mathcal{G}_3\right)\mathrm{d}u\mathrm{d}x &\geq \lambda^3\raisebox{-18pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (0,0) circle (2pt); \filldraw (2,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }- \frac{1}{2}\lambda^4\raisebox{-18pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,0) -- (1,0.6) -- (1.5,0) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (0,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \filldraw (2,0) circle (2pt); \end{tikzpicture} }\nonumber\\ &= \lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }- \lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0) -- (1,-0.6) -- (0,0); \draw (0,0) -- (2,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right),\\ \lambda^2\int\mathbb P_\lambda\left(\mathcal{G}_5\right)\mathrm{d}u\mathrm{d}x &\geq \lambda^4\raisebox{-18pt}{ 
\begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,-0.6) -- (1,-0.6) -- (1.5,0) -- (0.5,0.6) -- (-0.5,0) -- (0,-0.6); \draw[red] (0,-0.6) -- (0.5,0.6) -- (1,-0.6); \filldraw[fill=white] (0,-0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0.6) circle (2pt); \filldraw (-0.5,0) circle (2pt); \end{tikzpicture} }-\frac{1}{2}\lambda^5\raisebox{-18pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,-0.6) -- (1,-0.6) -- (1.5,0) -- (0.5,0.6) -- (-0.5,0) -- (0,-0.6); \draw (0,-0.6) -- (-0.1,0) -- (0.5,0.6); \draw[red] (0,-0.6) -- (0.5,0.6) -- (1,-0.6); \filldraw[fill=white] (0,-0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0.6) circle (2pt); \filldraw (-0.1,0) circle (2pt); \filldraw (-0.5,0) circle (2pt); \end{tikzpicture} }-\frac{1}{2}\lambda^5\raisebox{-18pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,-0.6) -- (1,-0.6) -- (1.5,0) -- (0.5,0.6) -- (-0.5,0) -- (0,-0.6); \draw (1,-0.6) -- (1.1,0) -- (0.5,0.6); \draw[red] (0,-0.6) -- (0.5,0.6) -- (1,-0.6); \filldraw[fill=white] (0,-0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0.6) circle (2pt); \filldraw (-0.5,0) circle (2pt); \filldraw (1.1,0) circle (2pt); \end{tikzpicture} }+ \frac{1}{4}\lambda^6\raisebox{-18pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (0,-0.6) -- (1,-0.6) -- (1.5,0) -- (0.5,0.6) -- (-0.5,0) -- (0,-0.6); \draw (0,-0.6) -- (-0.1,0) -- (0.5,0.6); \draw (1,-0.6) -- (1.1,0) -- (0.5,0.6); \draw[red] (0,-0.6) -- (0.5,0.6) -- (1,-0.6); \filldraw[fill=white] (0,-0.6) circle (2pt); \filldraw (1,-0.6) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0.6) circle (2pt); \filldraw (-0.1,0) circle (2pt); \filldraw (-0.5,0) circle (2pt); \filldraw (1.1,0) circle (2pt); \end{tikzpicture} }\nonumber\\ &= \lambda^4\raisebox{-8pt}{ 
\begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right).\end{aligned}$$ To bound the integrals of $\mathbb P_\lambda\left(\mathcal{G}_4\right)$ and $\mathbb P_\lambda\left(\mathcal{G}_6\right)$, recall $$\varphi^{[3]}(x) \geq \left(1-\varphi(x)\right)\times\left(\lambda^2\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (1,-0.6); \draw[red] (1.5,0) -- (1,-0.6); \draw[red] (1,0) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \frac{1}{2}\lambda^3\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,0.6) -- (1.5,0) -- (1,0) -- (0.5,0) -- (1,0.6); \draw (1,0)--(1,-0.6); \draw[red] (1.5,0) -- (1,-0.6) -- (0.5,0); \draw[red] (1,0) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \frac{1}{2}\lambda^4\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.85,0) -- (1,0.6) -- (1.5,0) -- (1.15,0) -- (1,-0.6); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] 
(1,-0.6) -- (1.5,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.15,0); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } + \frac{1}{2}\lambda^5\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.85,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw (1,0.6) -- (1.15,0) -- (1.5,0); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} } - \frac{1}{8}\lambda^6\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.15,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw (1,0.6) -- (1.15,0) -- (1.5,0); \draw (1,0.6) -- (0.85,0) -- (0.5,0); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,-0.6) -- (0.15,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \filldraw[fill=white] (1,-0.6) circle (2pt) node[left]{$\mathbf{0}$}; \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.15,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw[fill=white] (1,0.6) circle (2pt) node[left]{$x$}; \end{tikzpicture} }\right).$$ Then $$\begin{aligned} \lambda^2\int\mathbb P_\lambda\left(\mathcal{G}_4\right) & \geq \lambda^4\raisebox{-20pt}{ 
\begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (1,0.6) -- (1.5,0) -- (1.25,0) -- (1,-0.6); \draw[red] (1,-0.6) -- (1,0.6); \draw[red] (1,-0.6) -- (1.5,0); \draw[red] (1,0.6) -- (1.25,0); \filldraw (1,-0.6) circle (2pt); \filldraw (1.25,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw [fill=white] (0.5,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }-\frac{1}{2} \lambda^5\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (1,0.6) -- (1.95,0) -- (1.6,0) -- (1,-0.6); \draw (1,0.6) -- (1.25,0) -- (1.6,0); \draw[red] (1,-0.6) -- (1.25,0); \draw[red] (1,-0.6) -- (1.95,0); \draw[red] (1,0.6) -- (1.6,0); \draw[red] (1,-0.6) -- (1,0.6); \filldraw (1,-0.6) circle (2pt); \filldraw (1.25,0) circle (2pt); \filldraw (1.6,0) circle (2pt); \filldraw[fill=white] (0.5,0) circle (2pt); \filldraw (1.95,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }-\frac{1}{2} \lambda^6\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.85,0) -- (1,0.6) -- (1.5,0) -- (1.15,0) -- (1,-0.6); \draw (1,-0.6) to [out=180,in=305] (0.15,0) to [out=55,in=180] (1,0.6); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.5,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1,0.6); \filldraw (1,-0.6) circle (2pt); \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw[fill=white] (0.15,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }+ \frac{1}{2} \lambda^7\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.85,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw (1,0.6) -- (1.15,0) -- (1.5,0); \draw (1,-0.6) to [out=180,in=305] (0.15,0) to [out=55,in=180] (1,0.6); \draw[red] (1,-0.6) -- 
(0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \draw[red] (1,0.6) -- (1,-0.6); \filldraw (1,-0.6) circle (2pt); \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw[fill=white] (0.15,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }- \frac{1}{8}\lambda^8\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.15,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw (1,-0.6) to [out=180,in=315] (-0.2,0) to [out=45,in=180] (1,0.6); \draw (1,0.6) -- (1.15,0) -- (1.5,0); \draw (1,0.6) -- (0.85,0) -- (0.5,0); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,-0.6) -- (0.15,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \draw[red] (1,-0.6) -- (1,0.6); \filldraw (1,-0.6) circle (2pt); \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.15,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw[fill=white] (-0.2,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }\nonumber\\ & = \lambda^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) 
circle (3pt); \end{tikzpicture} }\right),\\ \lambda^2\int\mathbb P_\lambda\left(\mathcal{G}_6\right) & \geq \lambda^4\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (1,0.6) -- (1.5,0) -- (1.25,0) -- (1,-0.6); \draw[red] (1,-0.6) -- (1,0.6); \draw[red] (1,-0.6) -- (1.5,0); \draw[red] (1,0.6) -- (1.25,0); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (1.25,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }-\frac{1}{2} \lambda^5\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (1,0.6) -- (1.95,0) -- (1.6,0) -- (1,-0.6); \draw (1,0.6) -- (1.25,0) -- (1.6,0); \draw[red] (1,-0.6) -- (1.25,0); \draw[red] (1,-0.6) -- (1.95,0); \draw[red] (1,0.6) -- (1.6,0); \draw[red] (1,-0.6) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (1.25,0) circle (2pt); \filldraw (1.6,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (1.95,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }-\frac{1}{2} \lambda^6\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.85,0) -- (1,0.6) -- (1.5,0) -- (1.15,0) -- (1,-0.6); \draw (1,-0.6) to [out=180,in=305] (0.15,0) to [out=55,in=180] (1,0.6); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.5,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (0.15,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }+ \frac{1}{2}\lambda^7\raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.85,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw 
(1,0.6) -- (1.15,0) -- (1.5,0); \draw (1,-0.6) to [out=180,in=305] (0.15,0) to [out=55,in=180] (1,0.6); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \draw[red] (1,0.6) -- (1,-0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw (0.15,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }-\frac{1}{8}\lambda^8 \raisebox{-20pt}{ \begin{tikzpicture}[scale=1] \filldraw (1,-0.7) circle (0pt); \draw (1,-0.6) -- (0.5,0) -- (0.15,0) -- (1,0.6) -- (1.85,0) -- (1.5,0) -- (1,-0.6); \draw (1,-0.6) to [out=180,in=315] (-0.2,0) to [out=45,in=180] (1,0.6); \draw (1,0.6) -- (1.15,0) -- (1.5,0); \draw (1,0.6) -- (0.85,0) -- (0.5,0); \draw[red] (1,-0.6) -- (0.85,0); \draw[red] (1,-0.6) -- (1.15,0); \draw[red] (1,-0.6) -- (1.85,0); \draw[red] (1,-0.6) -- (0.15,0); \draw[red] (1,0.6) -- (0.5,0); \draw[red] (1,0.6) -- (1.5,0); \draw[red] (1,-0.6) -- (1,0.6); \filldraw[fill=white] (1,-0.6) circle (2pt); \filldraw (1.15,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (0.5,0) circle (2pt); \filldraw (0.15,0) circle (2pt); \filldraw (0.85,0) circle (2pt); \filldraw (1.85,0) circle (2pt); \filldraw (-0.2,0) circle (2pt); \filldraw (1,0.6) circle (2pt); \end{tikzpicture} }\nonumber\\ & = \lambda^4\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle 
(3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right).\end{aligned}$$ Summing the integrals of the probabilities of the $\mathcal{G}_i$ events gives us our desired lower bound for any $\lambda>0$, and in particular for $\lambda=\lambda_c$. ◻ ## Bounds on the Second Lace Expansion Coefficient {#sec:Pi2bd} We start with an upper bound, which we need both for $n=2$ here and for $n\ge3$ in the next subsection. **Proposition 34**. *Let $n \geq 1$, $x\in\mathbb R^d$, and $\lambda\in \left[0,\lambda_c\right]$. Then $$\label{eqn:Lacefunction_bound} \lambda\Pi_\lambda^{(n)}(x) \leq \int \psi_n(x,w_{n-1},u_{n-1}) \left( \prod_{i=1}^{n-1} \psi(w_i,u_i,w_{i-1},u_{i-1}) \right) \psi_0(w_0,u_0) \mathrm{d}\left( \left(\vec w, \vec u\right)_{[0,n-1]} \right).$$ Furthermore, for $\lambda\in \left[0,\lambda_c\right]$ there exists $c>0$ (independent of $\lambda$ and $d$) such that, for all $N\in\mathbb{N}$, $$\sum_{n=N}^\infty \widehat\Pi_\lambda^{(n)}(0) \leq c\beta^N.$$* *Proof.* As in Lemma [Lemma 31](#lem:Pi1UpperStep1){reference-type="ref" reference="lem:Pi1UpperStep1"}, the first inequality is proven in essentially the same way as [@HeyHofLasMat19 Proposition 7.2]. The proof of [@HeyHofLasMat19 Corollary 5.3] only needs to be slightly adjusted to get the second part of our result for $\lambda<\lambda_c$, and a dominated convergence argument like that appearing in the proof of [@HeyHofLasMat19 Corollary 6.1] allows us to extend the result to $\lambda=\lambda_c$. ◻ #### Upper Bound on $\widehat\Pi_{\lambda_c}^{(2)}(0)$ **Lemma 35**. *Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds.
Then as $d\to\infty$, $$\begin{gathered} \lambda_c\widehat\Pi_{\lambda_c}^{(2)}(0) \leq \int\psi_0(w_0,u_0)\psi(w_1,u_1,w_0,u_0)\psi_2(x,w_1,u_1)\mathrm{d}w_0\mathrm{d}u_0\mathrm{d}w_1 \mathrm{d}u_1 \mathrm{d}x \\ = \lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). \end{gathered}$$* *Proof.* The first inequality is an application of [\[eqn:Lacefunction_bound\]](#eqn:Lacefunction_bound){reference-type="eqref" reference="eqn:Lacefunction_bound"}. 
After expanding $\psi_0$, $\psi$, and $\psi_2$, we get $$\begin{gathered} \int\psi_0(w_0,u_0)\psi(w_1,u_1,w_0,u_0)\psi_2(x,w_1,u_1)\mathrm{d}w_0\mathrm{d}u_0\mathrm{d}w_1 \mathrm{d}u_1 \mathrm{d}x \\= \sum_{j_0=1}^3\sum_{j_1=1}^4\sum_{j_2=1}^2\int\psi^{(j_0)}_0(w_0,u_0)\psi^{(j_1)}(w_1,u_1,w_0,u_0)\psi^{(j_2)}_2(x,w_1,u_1)\mathrm{d}w_0\mathrm{d}u_0\mathrm{d}w_1 \mathrm{d}u_1 \mathrm{d}x. \end{gathered}$$ We can index the $3\times4\times2=24$ resulting diagrams by $\left(j_0,j_1,j_2\right)$. For $\left(j_0,j_1,j_2\right)\notin\left\{\left(3,2,2\right),\left(3,3,2\right),\left(3,4,2\right)\right\}$, we can identify a cycle of length $6$ or longer that visits each vertex. Each factor of $\tau_\lambda$ that is not part of this cycle can then be bounded by $1$. For each factor of $\tau_\lambda$ that is part of the cycle, we bound $\tau_\lambda\leq \varphi+ \lambda\varphi\star\tau_\lambda$. For $\lambda\leq\lambda_c$, Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"} then lets us bound each of these diagrams by $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$. For the $\left(3,2,2\right)$ diagram, we first expand out the $\tau_\lambda^\circ$ edges. In many of the resulting diagrams we can apply the above strategy, finding a cycle and bounding the excess edges, to bound the diagram by $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$. The result is that for $\lambda\leq \lambda_c$ we have $$\begin{gathered} \int\psi^{(3)}_0(w_0,u_0)\psi^{(2)}(w_1,u_1,w_0,u_0)\psi^{(2)}_2(x,w_1,u_1)\mathrm{d}w_0\mathrm{d}u_0\mathrm{d}w_1 \mathrm{d}u_1 \mathrm{d}x \\= \lambda^4\int\varphi(u_0)\tau_\lambda(z)\tau_\lambda(z-u_0)\tau_\lambda(u_1-z)\tau_\lambda(u_1-u_0)\tau_\lambda(x-u_1)\tau_\lambda(x-u_0)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}z \mathrm{d}x +\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right).
\end{gathered}$$ In this first integral we can bound $\tau_\lambda(u_1-u_0)\leq 1$, $\tau_\lambda(z-u_0)\leq \varphi(z-u_0) + \lambda\int\varphi(x)\mathrm{d}x$, and the other $\tau_\lambda\leq \varphi+ \lambda\varphi\star\tau_\lambda$ to find $$\begin{gathered} \lambda_c^4\int\varphi(u_0)\tau_{\lambda_c}(z)\tau_{\lambda_c}(z-u_0)\tau_{\lambda_c}(u_1-z)\tau_{\lambda_c}(u_1-u_0)\tau_{\lambda_c}(x-u_1)\tau_{\lambda_c}(x-u_0)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}z \mathrm{d}x \\= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). 
\end{gathered}$$ For the $\left(3,3,2\right)$ diagram we bound $\tau_\lambda(u_1-z)\leq \varphi(u_1-z) + \lambda\int\varphi(x)\mathrm{d}x$, and the other $\tau_\lambda\leq \varphi+ \lambda\varphi\star\tau_\lambda$ to find $$\begin{gathered} \lambda_c^4\int \varphi(u_0)\tau_{\lambda_c}(z-u_0)\tau_{\lambda_c}(u_1)\tau_{\lambda_c}(z-u_1)\tau_{\lambda_c}(x-u_1)\tau_{\lambda_c}(x-z)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}z \mathrm{d}x \\= \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). 
\end{gathered}$$ For the $\left(3,4,2\right)$ diagram we bound $\tau_\lambda(u_1-u_0)\leq \varphi(u_1-u_0) + \lambda\varphi^{\star 2}(u_1-u_0) + \lambda^2\left(\int\varphi(x)\mathrm{d}x\right)^2$, and the other $\tau_\lambda\leq \varphi+ \lambda\varphi\star\tau_\lambda$ to find $$\begin{gathered} \lambda_c^3\int\varphi(u_0)\tau_{\lambda_c}(u_1)\tau_{\lambda_c}(u_1-u_0)\tau_{\lambda_c}(x-u_0)\tau_{\lambda_c}(x-u_1)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}x \\\leq \lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (1,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). \end{gathered}$$ ◻ #### Lower Bound on $\widehat\Pi_{\lambda_c}^{(2)}(0)$ **Lemma 36**. 
*$$\lambda_c\widehat\Pi_{\lambda_c}^{(2)}(0) \geq \lambda_c^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }.$$* *Proof.* We begin by identifying a suitable event for each $u_0,u_1,x\in\mathbb R^d$ that is contained in $\left\{\mathbf{0} \Longleftrightarrow u_0\textrm { in } \xi_0^{\mathbf{0},u_0}\right\}\cap E\left(u_0,u_1,\mathscr {C}_0,\xi^{u_0,u_1}_1\right)\cap E\left(u_1,x,\mathscr {C}_1,\xi_2^{u_1,x}\right)$. We choose the following event $\mathcal{H}_1$: $$\mathcal{H}_1 = \left\{\mathbf{0} \sim u_0\textrm { in } \xi^{\mathbf{0},u_0}_0\right\}\cap\left\{u_0 \sim u_1\textrm { in } \xi^{u_0,u_1}_1\right\}\cap\left\{u_1\notin \eta^{u_0,u_1}_{1,\langle \mathbf{0} \rangle}\right\}\cap \left\{u_1 \sim x\textrm { in } \xi^{u_1,x}_2\right\}\cap\left\{x\notin \eta^{u_1,x}_{2,\langle u_0 \rangle}\right\}$$ and note that $\lambda\widehat\Pi_\lambda^{(2)}(0)\geq \lambda^3\int \mathbb{P}_{\lambda}\left(\mathcal{H}_1\right)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}x$. This event is constructed so that all the events in this intersection are independent, and the probability of each is easily calculated, once we recall that the probability that a singleton thins out a vertex is equal to the probability that an edge forms between the singleton and the vertex. Therefore $$\mathbb P_\lambda\left(\mathcal{H}_1\right) = \varphi(u_0)\varphi(u_1-u_0)\varphi(u_1)\varphi(x-u_1)\varphi(x-u_0).$$ Integrating $\mathbb P_\lambda\left(\mathcal{H}_1\right)$ then gives a lower bound for $\lambda_c\widehat\Pi_{\lambda_c}^{(2)}(0)$.
This lower bound is then $$\lambda^3\int \mathbb P_\lambda\left(\mathcal{H}_1\right)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}x = \lambda^3\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }.$$ This gives the desired lower bound for any $\lambda>0$, and in particular for $\lambda=\lambda_c$. ◻ ## Bounds on Later Lace Expansion Coefficients We first prove [\[eqPi3bd\]](#eqPi3bd){reference-type="ref" reference="eqPi3bd"} and then [\[eqPinbd\]](#eqPinbd){reference-type="ref" reference="eqPinbd"}. #### Upper Bound on $\widehat\Pi_{\lambda_c}^{(3)}(0)$ {#sec:Pinbd} We first deal with the case $n=3$, which requires special treatment, and subsequently with the general case $n\ge4$. **Lemma 37**. *Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds.
Then as $d\to\infty$, $$\lambda_c\widehat\Pi_{\lambda_c}^{(3)}(0) \leq \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right).$$* *Proof.* From [\[eqn:Lacefunction_bound\]](#eqn:Lacefunction_bound){reference-type="eqref" reference="eqn:Lacefunction_bound"} we have $$\lambda_c\widehat\Pi_{\lambda_c}^{(3)}(0) \leq \int\psi_0(w_0,u_0)\psi(w_1,u_1,w_0,u_0)\psi(w_2,u_2,w_1,u_1)\psi_3(x,w_2,u_2)\mathrm{d}w_0\mathrm{d}u_0\mathrm{d}w_1 \mathrm{d}u_1 \mathrm{d}w_2 \mathrm{d}u_2 \mathrm{d}x.$$ Then as in the proof of Lemma [Lemma 35](#lem:Pi2UpperBound){reference-type="ref" reference="lem:Pi2UpperBound"} we expand out the $\psi_0$, $\psi$, and $\psi_3$ functions. For each integral we then aim to identify a cycle of length $6$ or longer that visits each vertex. Each factor of $\tau_{\lambda_c}$ that is not part of this cycle can then be bounded by $1$. For each factor of $\tau_{\lambda_c}$ that is part of the cycle, we bound $\tau_{\lambda_c}\leq \varphi+ \lambda_c\varphi\star\tau_{\lambda_c}$. Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"} then lets us bound each of these diagrams by $\mathcal{O}\left(\varphi^{\star 6}\left(\mathbf{0}\right)\right)$.
The only integral for which this strategy fails is $$\begin{gathered} \int\psi^{(3)}_0(w_0,u_0)\psi^{(4)}(w_1,u_1,w_0,u_0)\psi^{(4)}(w_2,u_2,w_1,u_1)\psi^{(2)}_3(x,w_2,u_2)\mathrm{d}w_0\mathrm{d}u_0\mathrm{d}w_1 \mathrm{d}u_1 \mathrm{d}w_2 \mathrm{d}u_2 \mathrm{d}x \\=\lambda_c^4\int \varphi(u_0)\tau_{\lambda_c}(u_1)\tau_{\lambda_c}(u_1-u_0)\tau_{\lambda_c}(u_2-u_0)\tau_{\lambda_c}(u_2-u_1)\tau_{\lambda_c}(x-u_1)\tau_{\lambda_c}(x-u_2)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}u_2 \mathrm{d}x. \end{gathered}$$ If we bound $\tau_{\lambda_c}(u_2-u_1)\leq \varphi(u_2-u_1) + \lambda_c\varphi^{\star 2}(u_2-u_1) + \lambda_c^2\varphi^{\star 3}(u_2-u_1) + \lambda_c^3\left(\int\varphi(x)\mathrm{d}x\right)^3$, and the other $\tau_{\lambda_c}\leq \varphi+ \lambda_c\varphi\star\tau_{\lambda_c}$, we find that $$\begin{gathered} \lambda_c^4\int \varphi(u_0)\tau_{\lambda_c}(u_1)\tau_{\lambda_c}(u_1-u_0)\tau_{\lambda_c}(u_2-u_0)\tau_{\lambda_c}(u_2-u_1)\tau_{\lambda_c}(x-u_1)\tau_{\lambda_c}(x-u_2)\mathrm{d}u_0 \mathrm{d}u_1 \mathrm{d}u_2 \mathrm{d}x \\ = \mathcal{O}\left(\raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \filldraw (3,0) circle (3pt); \draw (0,0) -- (1,0.6) -- (2,0.6) -- (3,0) -- (2,-0.6) -- (1,-0.6) -- (0,0); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }+ \raisebox{-8pt}{ \begin{tikzpicture}[scale=0.5] \filldraw (1,0.6) circle (3pt); \filldraw (1,-0.6) circle (3pt); \filldraw (2,0.6) circle (3pt); \filldraw (2,-0.6) circle (3pt); \draw (0,0) -- (1,0.6) -- (1,-0.6) -- (0,0); \draw (1,0.6) -- (2,0.6) -- (2,-0.6) -- (1,-0.6); \filldraw[fill=white] (0,0) circle (3pt); \end{tikzpicture} }\right). \end{gathered}$$ ◻ #### Upper Bound on $\widehat\Pi_{\lambda_c}^{(n)}(0)$ for $n\geq 4$ **Lemma 38**.
*Suppose Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} holds and $n\geq 1$. Then as $d\to\infty$, $$\lambda_c\widehat\Pi_{\lambda_c}^{(n)}(0) = \begin{cases} \mathcal{O}\left(\varphi^{\star (n+2)}\left(\mathbf{0}\right)\right)&\colon n \text{ is even,}\\ \mathcal{O}\left(\varphi^{\star (n+1)}\left(\mathbf{0}\right)\right)&\colon n \text{ is odd.}\\ \end{cases}$$* *Proof.* We begin this proof by using Proposition [Proposition 34](#prop:LaceBoundwithDiagram){reference-type="ref" reference="prop:LaceBoundwithDiagram"} to get an upper bound for $\lambda_c\widehat\Pi_{\lambda_c}^{(n)}(0)$ in terms of a sum of integrals of $\psi^{(j_0)}_0$, $\psi^{(j)}$, and $\psi^{(j_n)}_n$. Our strategy is to identify, in each of these diagrams, a loop of length at least $n+2$. For each $\psi$-function we provide an upper bound in terms of a $\overline{\psi}$-function, so that when these are applied to our integral bounds we get terms of the form $\lambda^{m-1}\tau_\lambda^{\star m}\left(\mathbf{0}\right)$ for some $m\geq n+1$.
We have $$\begin{aligned} \psi_0^{(1)}(w,u) &\leq \overline{\psi}_0^{(1)}(w,u) := \lambda^2\tau_\lambda(u)\tau_\lambda(w),\\ \psi_0^{(2)}(w,u) &\leq \overline{\psi}_0^{(2)}(w,u) := \lambda^2 \delta_{w,\mathbf{0}}\int \tau_\lambda(u-t)\tau_\lambda(t) \mathrm{d}t,\\ \psi_0^{(3)}(w,u) & \leq \overline{\psi}_0^{(3)}(w,u) := \lambda\tau_\lambda(u) \delta_{w,\mathbf{0}},\\ \psi^{(1)}(w,u,r,s) &\leq \overline{\psi}^{(1)}(w,u,r,s) := \lambda^4\int \tau_\lambda^\circ(t-s)\tau_\lambda(w-t)\mathrm{d}t \int \tau_\lambda(u-z)\tau_\lambda(z-r)\mathrm{d}z, \\ \psi^{(2)}(w,u,r,s) &\leq \overline{\psi}^{(2)}(w,u,r,s) := \lambda^4\tau_\lambda^\circ(w-s)\int \tau_\lambda(z-r)\tau_\lambda(t-z)\tau_\lambda(u-t)\mathrm{d}z \mathrm{d}t \\ &\hspace{7cm}+ \lambda^3\tau_\lambda^\circ(w-s)\int \tau_\lambda(z-r)\tau_\lambda(u-z)\mathrm{d}z, \\ \psi^{(3)}(w,u,r,s) &\leq \overline{\psi}^{(3)}(w,u,r,s) := \lambda^2\tau_\lambda(w-s)\tau_\lambda(u-r),\\ \psi^{(4)}(w,u,r,s) &\leq \overline{\psi}^{(4)}(w,u,r,s) := \lambda\delta_{w,s}\tau_\lambda(u-r),\\ \psi_n^{(1)} (x,r,s) &\leq \overline{\psi}_n^{(1)}(x,r,s) := \lambda^3\int \tau_\lambda^\circ(t-s)\tau_\lambda(z-r)\tau_\lambda(z-x)\tau_\lambda(x-t)\mathrm{d}z\mathrm{d}t,\\ \psi_n^{(2)}(x,r,s) &= \overline{\psi}_n^{(2)}(x,r,s) := \lambda \tau_\lambda(x-s)\tau_\lambda(x-r).\end{aligned}$$ We can also define $\overline{\psi}_0$, $\overline{\psi}$, and $\overline{\psi}_n$ analogously to how we defined $\psi_0$, $\psi$, and $\psi_n$. First, we leave $\psi_n^{(2)}$ unchanged. For $\psi_0^{(3)}$ we bound $\varphi\leq \tau_\lambda$. For most of the others, the bound is obtained simply by bounding $\tau_\lambda^{(\geq2)}\leq \tau_\lambda$ and $\tau_\lambda\leq 1$ in the appropriate places. The bound for $\psi^{(2)}$ deserves a little more explanation. Here we first expand $\tau_\lambda^\circ(t-w) = \tau_\lambda(t-w) + \lambda^{-1}\delta_{t,w}$ to get two expressions.
We then bound $\tau_\lambda\leq 1$ as for the others, but in different ways for each of the two expressions. We therefore find that $\overline{\psi}_0$ can contribute one or two factors of $\tau_\lambda$, $\overline{\psi}$ can contribute one, two, three, or four factors of $\tau_\lambda$, and $\overline{\psi}_n$ can contribute two, three, or four factors of $\tau_\lambda$. Our bound is therefore a sum of terms of $\tau_\lambda^{\star m}\left(\mathbf{0}\right)$ where $m$ is at least $1+1\times(n-1)+2 = n+2$ and at most $2 + 4\times(n-1)+4 = 4n+2$. Therefore $$\lambda_c\widehat\Pi_{\lambda_c}^{(n)}\left(0\right) \leq \mathcal{O}\left(\sum^{4n+2}_{m=n+2}\tau_\lambda^{\star m}\left(\mathbf{0}\right)\right).$$ For each factor of $\tau_\lambda$ here we now bound $\tau_\lambda\leq \varphi+ \lambda\varphi\star\tau_\lambda$ to get $$\lambda_c\widehat\Pi_{\lambda_c}^{(n)}(0) \leq \mathcal{O}\left(\sum_{m=n+2}^{4n+2}\sum_{j=0}^{4n+2}\varphi^{\star m}\star\tau_\lambda^{\star j}\left(\mathbf{0}\right)\right).$$ If $m$ is odd and $j\geq 1$ then we bound $\varphi^{\star m}\star\tau_\lambda^{\star j}\left(\mathbf{0}\right) \leq \left(\int\varphi(x)\mathrm{d}x\right)\varphi^{\star (m-1)}\star\tau_\lambda^{\star j}\left(\mathbf{0}\right)$. Then Lemma [Lemma 14](#lem:tailbound){reference-type="ref" reference="lem:tailbound"} gives us $$\lambda_c\widehat\Pi_{\lambda_c}^{(n)}(0) \leq \begin{cases} \mathcal{O}\left(\sum_{m=n+2}^{4n+2}\varphi^{\star m}\left(\mathbf{0}\right)\right) &\colon n\text{ is even,}\\ \mathcal{O}\left(\sum_{m=n+1}^{4n+2}\varphi^{\star m}\left(\mathbf{0}\right)\right) &\colon n\text{ is odd.} \end{cases}$$ If $n$ is even we bound $\varphi^{\star m}\left(\mathbf{0}\right)\leq \left(\int\varphi(x)\mathrm{d}x\right)^{m-n-2}\varphi^{\star (n+2)}\left(\mathbf{0}\right)$ to get our result, and if $n$ is odd we bound $\varphi^{\star m}\left(\mathbf{0}\right)\leq \left(\int\varphi(x)\mathrm{d}x\right)^{m-n-1}\varphi^{\star (n+1)}\left(\mathbf{0}\right)$ to get our result. 
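The arithmetic behind the range $n+2\leq m\leq 4n+2$ can be double-checked mechanically. The following short script is a purely illustrative aside (not part of the argument): it enumerates the achievable numbers of $\tau_\lambda$-factors under the contribution counts stated above.

```python
import itertools

# Brute-force check of the factor counting: the psi_0-bounds contribute 1 or 2
# factors of tau_lambda, each of the (n - 1) middle psi-bounds contributes
# 1, 2, 3 or 4, and the psi_n-bounds contribute 2, 3 or 4.  The achievable
# totals m should fill out exactly the range [n + 2, 4n + 2].
for n in range(1, 6):
    totals = {first + sum(middle) + last
              for first in (1, 2)
              for middle in itertools.product((1, 2, 3, 4), repeat=n - 1)
              for last in (2, 3, 4)}
    assert totals == set(range(n + 2, 4 * n + 3))
```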
◻ # Calculations for Specific Models We now provide details for the specific percolation models in Section [1.4](#sec:applications){reference-type="ref" reference="sec:applications"}. To this end, we need to show that each of the four models satisfies Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"} and find the specific values of the integrals of $\varphi$ appearing in [\[eq:ExpansionMain\]](#eq:ExpansionMain){reference-type="eqref" reference="eq:ExpansionMain"}. ## Hyper-Sphere Calculations Recall that for radius $R>0$, the Hyper-Sphere RCM is defined by $$\varphi(x) = \mathds 1_{\left\{\abs*{x}< R\right\}}.$$ Throughout this section we choose a scaling of $\mathbb R^d$ such that the ball of radius $R=R(d)$ has unit $d$-dimensional volume. Therefore $R(d)=\pi^{-\frac{1}{2}}\Gamma\left(\frac{d}{2}+1\right)^{\frac{1}{d}} = \sqrt{\frac{d}{2\pi \text{e}}}\left(1+o\left(1\right)\right)$ (by an application of Stirling's formula). **Lemma 39**. *The Hyper-Sphere RCM satisfies Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"}.* *Proof.* It is proven in [@HeyHofLasMat19 Proposition 1.1] that the Hyper-Sphere RCM satisfies Assumption [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} with $g(d) = \varrho^d$ for some $\varrho\in\left(0,1\right)$. ◻ **Lemma 40**. *The Hyper-Sphere RCM satisfies Assumption [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"}.* *Proof.* In order to prove that [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} holds, we need a lower bound on $\varphi^{\star 6}\left(\mathbf{0}\right)$.
We begin by using the Fourier inversion formula to get $$\varphi^{\star 6}\left(\mathbf{0}\right) = \int_{\mathbb R^d}\widehat\varphi(k)^6\frac{\mathrm{d}k}{\left(2\pi\right)^d}.$$ Since $\varphi$ is symmetric, $\widehat\varphi(k)$ is real, and therefore a pointwise lower bound on $\widehat\varphi(k)^6$ gives a lower bound on $\varphi ^{\star 6}\left(\mathbf{0}\right)$. From [@grafakos2008classical Appendix B.5], we find that $$\label{eqn:FourierAdjacencyBooleanDisc} \widehat\varphi(k) = \left(\frac{2\pi R(d)}{\abs*{k}}\right)^\frac{d}{2} J_{\frac{d}{2}}\left(\abs*{k}R(d)\right),$$ where $J_{\frac{d}{2}}$ is the Bessel function of the first kind of order $\frac{d}{2}$, and $R(d)$ is the radius of the unit volume ball in $d$ dimensions. In Figure [\[fig:Sketch_fconnf_BooleanDisc\]](#fig:Sketch_fconnf_BooleanDisc){reference-type="ref" reference="fig:Sketch_fconnf_BooleanDisc"} we highlight three important values of $\abs*{k}$ in the shape of $\widehat\varphi(k)$. The Bessel function $J_{\frac{d}{2}}$ achieves its global maximum (in absolute value) at its first non-zero stationary point, $j'_{\frac{d}{2},1}$. From [@abramowitz1970handbook p.371], we have $j'_{\frac{d}{2},1} = \frac{d}{2} + \gamma_1\left(\frac{d}{2}\right)^\frac{1}{3} + \mathcal{O}\left(d^{-\frac{1}{3}}\right)$, where $\gamma_1 \approx 0.81$, and $J_{\frac{d}{2}}\left(j'_{\frac{d}{2},1}\right) = \Gamma_1 d^{-\frac{1}{3}} + \mathcal{O}\left(d^{-1}\right)$, where $\Gamma_1\approx 0.54$. Then $J_{\frac{d}{2}}$ has its first zero at $j_{\frac{d}{2},1}>j'_{\frac{d}{2},1}$, where $j_{\frac{d}{2},1} = \frac{d}{2} + \gamma_2\left(\frac{d}{2}\right)^{\frac{1}{3}} + \mathcal{O}\left(d^{-\frac{1}{3}}\right)$ and $\gamma_2\approx 1.86$ (again, see [@abramowitz1970handbook]).
From standard differential identities for Bessel functions (see [@grafakos2008classical]), we have $$\frac{\mathrm{d}}{\mathrm{d}\abs*{k}} \widehat\varphi(k) = -R(d)\left(\frac{2\pi R(d)}{\abs*{k}}\right)^\frac{d}{2}J_{\frac{d}{2}+1}\left(\abs*{k} R(d)\right).$$ Therefore $\widehat\varphi(k)$ is decreasing in $\abs*{k}$ until $\abs*{k}R(d) = j_{\frac{d}{2}+1,1} = \frac{d}{2} + \gamma_2\left(\frac{d}{2}\right)^{\frac{1}{3}} + 1 +\mathcal{O}\left(d^{-\frac{1}{3}}\right)$. In particular, $j_{\frac{d}{2}+1,1}> j_{\frac{d}{2},1}$. The significance of these points is that they allow us to bound $$\abs*{\widehat\varphi(k)} \geq \widehat\varphi\left(\sfrac{j'_{\frac{d}{2},1}}{R(d)}\right)\mathds 1_{\left\{\abs*{k}\leq \sfrac{j'_{\frac{d}{2},1}}{R(d)}\right\}}.$$ Since $R(d)$ is the radius of the unit volume ball in $d$ dimensions, $$\int_{\mathbb R^d}\mathds 1_{\left\{\abs*{k}\leq \sfrac{j'_{\frac{d}{2},1}}{R(d)}\right\}}\frac{\mathrm{d}k}{\left(2\pi\right)^d} = \left(\frac{j'_{\frac{d}{2},1}}{2\pi R(d)^2}\right)^d.$$ Therefore we arrive at $$\varphi^{\star 6}\left(\mathbf{0}\right) \geq \left(\frac{2\pi R(d)^2}{j'_{\frac{d}{2},1}}\right)^{2d} J_{\frac{d}{2}}\left(j'_{\frac{d}{2},1}\right)^6 = \Gamma_1^6 \frac{1}{d^2}\left(\frac{2}{\text{e}}+o\left(1\right)\right)^{2d}\left(1+ o(1)\right).$$ Here we have used the leading order asymptotics of $R(d)$, $j'_{\frac{d}{2},1}$, and $J_{\frac{d}{2}}\left(j'_{\frac{d}{2},1}\right)$ described above. From this lower bound, we know that $\rho$ will satisfy the bound in [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} if $\rho < 4 \text{e}^{-2}$. From the above argument we have an exponential lower bound on $\varphi^{\star 6}\left(\mathbf{0}\right)$ and therefore a linear lower bound on $h(d)$. It is proven in [@HeyHofLasMat19 Proposition 1.1a] that $g(d)=\varrho^{d}$ for some $\varrho\in\left(0,1\right)$, and therefore $\beta(d) = \varrho^{\frac{d}{4}}$.
We can then bound $N(d)$ to show [\[Assump:NumberBound\]](#Assump:NumberBound){reference-type="ref" reference="Assump:NumberBound"} holds. ◻ **Lemma 41**. *For $n\geq 3$, $$\label{eqn:BooleanLoopGeneral} \varphi^{\star n}\left(\mathbf{0}\right) = d2^{d\left(\frac{n}{2}-1\right)}\Gamma\left(\frac{d}{2}+1\right)^{n-2}\int^\infty_0 x^{-1-d\left(\frac{n}{2}-1\right)}\left(J_{\frac{d}{2}}(x)\right)^n \mathrm{d}x.$$ In particular, $$\begin{aligned} \varphi^{\star 3}\left(\mathbf{0}\right) &= \frac{d\Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\int^1_0x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\mathrm{d}x = \frac{3}{2}\frac{\Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}B\left(\frac{3}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right),\label{eqn:BooleanLoop3}\\ \varphi^{\star 4}\left(\mathbf{0}\right) &= \frac{d\Gamma\left(\frac{d}{2}+1\right)^2}{\Gamma\left(\frac{1}{2}\right)^2\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^2}\int^2_0x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^2\mathrm{d}x,\\ \varphi^{\star 5}\left(\mathbf{0}\right) &= \frac{d2^d\Gamma\left(\frac{d}{2}+1\right)^3}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\int^2_0x^{\frac{d}{2}}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\left(\int^\infty_0 k^{-d}\left(J_{\frac{d}{2}}(k)\right)^3J_{\frac{d}{2}-1}(kx) \mathrm{d}k\right)\mathrm{d}x, \\ \varphi^{\star 6}\left(\mathbf{0}\right) &= \frac{d2^\frac{3d}{2}\Gamma\left(\frac{d}{2}+1\right)^4}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\int^2_0x^{\frac{d}{2}}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\left(\int^\infty_0 k^{-\frac{3d}{2}}\left(J_{\frac{d}{2}}(k)\right)^4J_{\frac{d}{2}-1}(kx) \mathrm{d}k\right)\mathrm{d}x. 
\end{aligned}$$ Furthermore, $$\begin{aligned} \varphi^{\star 1\star 2 \cdot 2}\left(\mathbf{0}\right) &= \frac{d\Gamma\left(\frac{d}{2}+1\right)^2}{\Gamma\left(\frac{1}{2}\right)^2\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^2}\int_0^1 x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^2 \mathrm{d}x,\\ \varphi^{\star 2\star 2 \cdot 2}\left(\mathbf{0}\right) &= \frac{d\Gamma\left(\frac{d}{2}+1\right)^3}{\Gamma\left(\frac{1}{2}\right)^3\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^3}\int_0^2 x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^3 \mathrm{d}x,\\ \varphi^{\star 1\star 2 \cdot 3}\left(\mathbf{0}\right) &= \frac{d2^d\Gamma\left(\frac{d}{2}+1\right)^3}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\int^1_0x^{\frac{d}{2}}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\left(\int^\infty_0 k^{-d}\left(J_{\frac{d}{2}}(k)\right)^3J_{\frac{d}{2}-1}(kx) \mathrm{d}k\right)\mathrm{d}x. \end{aligned}$$* *Proof.* Let $R=R(d)$ denote the radius of the unit volume $d$-dimensional Euclidean ball, i.e. $R(d)=\frac{1}{\sqrt{\pi}}\Gamma\left(\frac{d}{2}+1\right)^{\frac{1}{d}}$. In particular, note the relation $$1 = \mathfrak{S}_{d-1}\int^R_0r^{d-1}\mathrm{d}r = \frac{\mathfrak{S}_{d-1}}{d}R^d,$$ where $\mathfrak{S}_{d-1}=\frac{d\pi^{\frac{d}{2}}}{\Gamma\left(\frac{d}{2}+1\right)}$ is the surface area of the unit *radius* $d$-dimensional Euclidean ball. The general formula [\[eqn:BooleanLoopGeneral\]](#eqn:BooleanLoopGeneral){reference-type="eqref" reference="eqn:BooleanLoopGeneral"} follows from a Fourier decomposition. By the Fourier inversion formula, $$\varphi^{\star n}(x) = \frac{1}{\left(2\pi\right)^d}\int \widehat\varphi(k)^n\mathrm{d}k.$$ Recall the expression [\[eqn:FourierAdjacencyBooleanDisc\]](#eqn:FourierAdjacencyBooleanDisc){reference-type="eqref" reference="eqn:FourierAdjacencyBooleanDisc"} for the Fourier transform $\widehat\varphi(k)$. 
Then $$\begin{gathered} \varphi^{\star n}\left(\mathbf{0}\right) = \frac{1}{\left(2\pi\right)^d}\left(2\pi\right)^{\frac{d}{2}n}R^{\frac{d}{2}n}\mathfrak{S}_{d-1}\int^\infty_0 k^{d-1-\frac{d}{2}n}\left(J_{\frac{d}{2}}\left(Rk\right)\right)^n\mathrm{d}k\\ =\left(2\pi\right)^{d\left(\frac{n}{2}-1\right)}R^d\mathfrak{S}_{d-1}\int^\infty_0x^{-1-d\left(\frac{n}{2}-1\right)}\left(J_{\frac{d}{2}}\left(x\right)\right)^n\mathrm{d}x. \end{gathered}$$ Then observing that $R^d\mathfrak{S}_{d-1}=d$ produces the result. In the cases $n=3,4$, a more geometric approach may be taken. First note that $\varphi^{\star 2}(x)$ can be interpreted as the $d$-volume of the intersection of a hyper-sphere of radius $R$ at the origin with a hyper-sphere of radius $R$ at the position $x$. An expression for this volume is given by [@li2011concise] using incomplete Beta functions: $$\varphi^{\star 2}(x) = \frac{\Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}B\left(1-\frac{\abs*{x}^2}{4R^2};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right), \qquad \text{for }\abs*{x}\leq 2R.$$ Clearly $\varphi^{\star 2}(x)=0$ for $\abs*{x}>2R$. It then follows that $$\begin{gathered} \varphi^{\star 3}\left(\mathbf{0}\right) = \int\varphi(x)\varphi^{\star 2}(x)\mathrm{d}x = \frac{\Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\mathfrak{S}_{d-1}\int^R_0 r^{d-1}B\left(1-\frac{r^2}{4R^2};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\mathrm{d}r\\= \frac{\Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\mathfrak{S}_{d-1}R^d\int^1_0 x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\mathrm{d}x. \end{gathered}$$ Again, noting that $R^d\mathfrak{S}_{d-1} = d$ produces the required first equality in [\[eqn:BooleanLoop3\]](#eqn:BooleanLoop3){reference-type="eqref" reference="eqn:BooleanLoop3"}. 
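As an aside, the general formula [\[eqn:BooleanLoopGeneral\]](#eqn:BooleanLoopGeneral){reference-type="eqref" reference="eqn:BooleanLoopGeneral"} admits a quick numerical sanity check in $d=1$ (purely illustrative, since our interest is in large $d$): there $\varphi$ is the indicator of an interval of unit length, a direct convolution gives $\varphi^{\star 3}\left(\mathbf{0}\right)=\frac{3}{4}$, and $J_{\frac{1}{2}}(x)=\sqrt{2/(\pi x)}\sin x$ makes the right-hand side elementary.

```python
import math

# d = 1, n = 3 check of the loop formula: the prefactor
# d * 2^{d(n/2 - 1)} * Gamma(d/2 + 1)^{n-2} equals sqrt(2) * Gamma(3/2), and
# J_{1/2}(x) = sqrt(2/(pi x)) sin(x).  The exact value, from a direct
# convolution of unit-length interval indicators, is 3/4.
def integrand(x):
    j_half = math.sqrt(2.0 / (math.pi * x)) * math.sin(x)
    return x ** -1.5 * j_half ** 3

# midpoint rule on (0, 200]; the tail of the integral beyond 200 is negligible
n_steps, upper = 500_000, 200.0
h = upper / n_steps
rhs = math.sqrt(2.0) * math.gamma(1.5) * h * sum(
    integrand((i + 0.5) * h) for i in range(n_steps))

assert abs(rhs - 0.75) < 1e-3
```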
It was noted in [@Tor12] that for the Hyper-Sphere model we have $\varphi^{\star 3}\left(\mathbf{0}\right)=\frac{3}{2}\varphi^{\star 2}(\tilde{x})$, where $\abs*{\tilde{x}}=R$. This can be proven by writing out the incomplete Beta function as an integral to get a double integral, partitioning the domain appropriately, and using a suitable trigonometric substitution on each part of the domain. We omit the details here. This relation gives the second equality in [\[eqn:BooleanLoop3\]](#eqn:BooleanLoop3){reference-type="eqref" reference="eqn:BooleanLoop3"}. For the specific form of $\varphi^{\star 4}\left(\mathbf{0}\right)$, we do a similar calculation to that above: $$\begin{gathered} \varphi^{\star 4}\left(\mathbf{0}\right) = \int\varphi^{\star 2}(x)^2\mathrm{d}x = \frac{\Gamma\left(\frac{d}{2}+1\right)^2}{\Gamma\left(\frac{1}{2}\right)^2\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^2}\mathfrak{S}_{d-1}\int^{2R}_0 r^{d-1}B\left(1-\frac{r^2}{4R^2};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^2\mathrm{d}r\\= \frac{\Gamma\left(\frac{d}{2}+1\right)^2}{\Gamma\left(\frac{1}{2}\right)^2\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^2}\mathfrak{S}_{d-1}R^d\int^2_0 x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^2\mathrm{d}x. \end{gathered}$$ Using $R^d\mathfrak{S}_{d-1} = d$ gives the result. For $\varphi^{\star1\star2\cdot2}\left(\mathbf{0}\right)$ and $\varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right)$ this approach also works.
We find $$\begin{aligned} \varphi^{\star1\star2\cdot2}\left(\mathbf{0}\right) = \int\varphi(x)\varphi^{\star2 }(x)^2\mathrm{d}x &= \frac{\Gamma\left(\frac{d}{2}+1\right)^2}{\Gamma\left(\frac{1}{2}\right)^2\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^2}\mathfrak{S}_{d-1}\int^{R}_0 r^{d-1}B\left(1-\frac{r^2}{4R^2};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^2\mathrm{d}r\nonumber\\ &= \frac{\Gamma\left(\frac{d}{2}+1\right)^2}{\Gamma\left(\frac{1}{2}\right)^2\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^2}\mathfrak{S}_{d-1}R^d\int^1_0 x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^2\mathrm{d}x, \end{aligned}$$ $$\begin{aligned} \varphi^{\star2\star2\cdot2}\left(\mathbf{0}\right) = \int\varphi^{\star2 }(x)^3\mathrm{d}x &= \frac{\Gamma\left(\frac{d}{2}+1\right)^3}{\Gamma\left(\frac{1}{2}\right)^3\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^3}\mathfrak{S}_{d-1}\int^{2R}_0 r^{d-1}B\left(1-\frac{r^2}{4R^2};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^3\mathrm{d}r\nonumber\\ &= \frac{\Gamma\left(\frac{d}{2}+1\right)^3}{\Gamma\left(\frac{1}{2}\right)^3\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)^3}\mathfrak{S}_{d-1}R^d\int^2_0 x^{d-1}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)^3\mathrm{d}x. \end{aligned}$$ As before, using $R^d\mathfrak{S}_{d-1} = d$ gives the result. Evaluating $\varphi^{\star 5}\left(\mathbf{0}\right)$, $\varphi^{\star 6}\left(\mathbf{0}\right)$, and $\varphi^{\star1\star2\cdot3}\left(\mathbf{0}\right)$ is more challenging than the above because we do not have as convenient an expression for $\varphi^{\star 3}(x)$ as we do for $\varphi^{\star 2}(x)$. We can nevertheless use Fourier transforms to obtain one.
Using the well-known expression $$J_{\nu}(x) = \frac{x^\nu}{2^\nu \Gamma\left(\frac{1}{2}\right)\Gamma\left(\nu+\frac{1}{2}\right)}\int^\pi_0\text{e}^{ix\cos \theta}\left(\sin \theta\right)^{2\nu}\mathrm{d}\theta,\qquad \mathrm{Re}~\nu> -\frac{1}{2}$$ from [@abramowitz1970handbook p.360, Eqn.(9.1.20)], we can write $$\begin{gathered} \varphi^{\star 3}(x) = \frac{\mathfrak{S}_{d-2}}{\left(2\pi\right)^d}\int^\infty_{0} k^{d-1}\widehat\varphi(k)^3\left(\int^\pi_0\text{e}^{ik\abs*{x}\cos \theta}\left(\sin \theta\right)^{d-2}\mathrm{d}\theta\right)\mathrm{d}k \\= \left(2\pi\right)^d R^{\frac{3}{2}d}\abs*{x}^{1-\frac{d}{2}}\int^\infty_0k^{-d}\left(J_{\frac{d}{2}}\left(kR\right)\right)^3 J_{\frac{d}{2}-1}\left(k\abs*{x}\right)\mathrm{d}k. \end{gathered}$$ Combining this with the expression for $\varphi^{\star 2}(x)$ used previously then gives the result: $$\begin{aligned} \varphi^{\star 1 \star 2\cdot 3}\left(\mathbf{0}\right) &= \int\varphi(x)\varphi^{\star2}(x)\varphi^{\star 3}(x)\mathrm{d}x \nonumber\\&= \frac{\Gamma\left(\frac{d}{2}+1\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\left(2\pi\right)^d R^{\frac{3}{2}d}\mathfrak{S}_{d-1}\int^R_0 r^{d-1}r^{1-\frac{d}{2}}B\left(1-\frac{r^2}{4R^2};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\nonumber\\ &\hspace{7cm}\times\left(\int^\infty_0k^{-d}\left(J_{\frac{d}{2}}\left(kR\right)\right)^3 J_{\frac{d}{2}-1}\left(kr\right)\mathrm{d}k\right)\mathrm{d}r\nonumber\\ &= d2^d\frac{\Gamma\left(\frac{d}{2}+1\right)^3}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{d}{2}+\frac{1}{2}\right)}\int^1_0x^{\frac{d}{2}}B\left(1-\frac{x^2}{4};\frac{d}{2}+\frac{1}{2},\frac{1}{2}\right)\left(\int^\infty_0 k^{-d}\left(J_{\frac{d}{2}}(k)\right)^3J_{\frac{d}{2}-1}(kx) \mathrm{d}k\right)\mathrm{d}x, \end{aligned}$$ where we explicitly use $R^d=\pi^{-\frac{d}{2}}\Gamma\left(\frac{d}{2}+1\right)$. 
Writing $\varphi^{\star5}\left(\mathbf{0}\right) = \int\varphi^{\star2}(x)\varphi^{\star 3}(x)\mathrm{d}x$ and using the same strategy gives its result. Getting the expression for $\varphi^{\star 6}\left(\mathbf{0}\right)$ requires an expression for $\varphi^{\star 4}\left(x\right)$. Using the same strategy as for $\varphi^{\star 3}\left(x\right)$ above, we get $$\begin{gathered} \varphi^{\star 4}(x) = \frac{\mathfrak{S}_{d-2}}{\left(2\pi\right)^d}\int^\infty_{0} k^{d-1}\widehat\varphi(k)^4\left(\int^\pi_0\text{e}^{ik\abs*{x}\cos \theta}\left(\sin \theta\right)^{d-2}\mathrm{d}\theta\right)\mathrm{d}k \\= \left(2\pi\right)^{\frac{3}{2}d} R^{2d}\abs*{x}^{1-\frac{d}{2}}\int^\infty_0k^{-\frac{3}{2}d}\left(J_{\frac{d}{2}}\left(kR\right)\right)^4 J_{\frac{d}{2}-1}\left(k\abs*{x}\right)\mathrm{d}k. \end{gathered}$$ Using this with $\varphi^{\star6}\left(\mathbf{0}\right) = \int\varphi^{\star2}(x)\varphi^{\star 4}(x)\mathrm{d}x$ then gives the required expression. ◻ We now turn towards asymptotic values of the terms appearing in Lemma [Lemma 41](#lem:BooleanCalcExpressions){reference-type="ref" reference="lem:BooleanCalcExpressions"}. For the terms $\varphi^{\star 3}\left(\mathbf{0}\right)$ and $\varphi^{\star 4}\left(\mathbf{0}\right)$, the asymptotics have already been worked out. **Lemma 42**. *For the Hyper-Sphere RCM, $$\begin{aligned} \varphi^{\star 3}\left(\mathbf{0}\right) &\sim \left(\frac{27}{2\pi d}\right)^{\frac{1}{2}}\left(\frac{3}{4}\right)^\frac{d}{2},\\ \varphi^{\star 4}\left(\mathbf{0}\right) &\sim \left(\frac{32}{3\pi d}\right)^{\frac{1}{2}}\left(\frac{16}{27}\right)^\frac{d}{2}. \end{aligned}$$* *Proof.* These follow from the calculations in [@luban1982third; @joslin1982third]. 
◻ ![Plot of the larger diagrams](TopHalfPlot.pdf){#fig:TopHalfPlot} ![Plot of the smaller diagrams](BottomHalfPlot.pdf){#fig:BottomHalfPlot} ![Plot of the ratio $\sfrac{\varphi^{\star 1\star2\cdot2}\left(\mathbf{0}\right)}{\varphi^{\star3}\left(\mathbf{0}\right)^2}$](Comparison122vs3x3.pdf){#fig:Comparison122vs3x3} ![Plot of the ratio $\sfrac{\varphi^{\star 1\star2\cdot3}\left(\mathbf{0}\right)}{\varphi^{\star3}\left(\mathbf{0}\right)\varphi^{\star4}\left(\mathbf{0}\right)}$](Comparison123vs3x4.pdf){#fig:Comparison123vs3x4} ![Plot of the ratio $\sfrac{\left(\log\varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right)-3\log\varphi^{\star 3}\left(\mathbf{0}\right)\right)}{\log d}$](Comparison222vs3x3x3.pdf){#fig:Comparison222vs3x3x3} ![Plot of the ratio $\sfrac{\left(\log\varphi^{\star 6}\left(\mathbf{0}\right)-3\log\varphi^{\star 3}\left(\mathbf{0}\right)\right)}{\log d}$](Comparison6vs3x3x3.pdf){#fig:Comparison6vs3x3x3} *Remark 43*. These asymptotics naturally also give the asymptotics of $\varphi^{\star 3}\left(\mathbf{0}\right)^2$, $\varphi^{\star 3}\left(\mathbf{0}\right)^3$, and $\varphi^{\star 3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right)$. For $\varphi^{\star 5}\left(\mathbf{0}\right)$, $\varphi^{\star 6}\left(\mathbf{0}\right)$, $\varphi^{\star 1\star2\cdot2}\left(\mathbf{0}\right)$, $\varphi^{\star 2\star2\cdot2}\left(\mathbf{0}\right)$, and $\varphi^{\star 1\star2\cdot3}\left(\mathbf{0}\right)$ we don't have any rigorous description of their asymptotic behaviour. Nevertheless we can use our expressions from Lemma [Lemma 41](#lem:BooleanCalcExpressions){reference-type="ref" reference="lem:BooleanCalcExpressions"} and numerical integration to calculate their values for a range of dimensions. Figure [\[fig:DiagramSizes\]](#fig:DiagramSizes){reference-type="ref" reference="fig:DiagramSizes"} presents the results of these calculations. 
Here we used MATLAB to plot $\frac{1}{d}\log\left(\cdot\right)$ (where $\log$ is the *natural* logarithm) for each of our diagrams against the dimension $d$. We chose this function of the diagrams because if a diagram was of the form $A(d)\varrho^d$ for some constant $\varrho>0$ and some slowly varying $A(d)$, then our plot should approach $\log\varrho$ as $d\to\infty$. The data in Figure [3](#fig:TopHalfPlot){reference-type="ref" reference="fig:TopHalfPlot"} are consistent with this behaviour (indeed we know it to be true for $\varphi^{\star 3}\left(\mathbf{0}\right)$, $\varphi^{\star 4}\left(\mathbf{0}\right)$ and $\varphi^{\star 3}\left(\mathbf{0}\right)^2$). We only plot the data up to $d=50$ because the calculations of $\varphi^{\star 5}\left(\mathbf{0}\right)$ and $\varphi^{\star 1\star2\cdot3}\left(\mathbf{0}\right)$ fail for $d>54$ - we comment on this more later. The data in Figure [4](#fig:BottomHalfPlot){reference-type="ref" reference="fig:BottomHalfPlot"} appear a little less definitive, but we argue that these are still consistent with the hypothesised behaviour (we know it to be true for $\varphi^{\star 3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right)$ and $\varphi^{\star 3}\left(\mathbf{0}\right)^3$). Note that the vertical scale is over a much narrower range than in Figure [3](#fig:TopHalfPlot){reference-type="ref" reference="fig:TopHalfPlot"}, which gives the false impression that the plots are increasing with $d$ faster than they actually are. We are also further restricting the domain of $d$ to $d\leq 36$. This is because the calculation of $\varphi^{\star 6}\left(\mathbf{0}\right)$ fails for $d>36$. $\diamond$ *Remark 44*. We comment here on our choices of the range of dimensions $d$ presented in the data in Figure [\[fig:DiagramSizes\]](#fig:DiagramSizes){reference-type="ref" reference="fig:DiagramSizes"}. 
We found that the limiting factors in our ability to calculate the expressions in Lemma [Lemma 41](#lem:BooleanCalcExpressions){reference-type="ref" reference="lem:BooleanCalcExpressions"} were the prefactors of powers of $2$ and gamma functions. If we wanted to use [\[eqn:BooleanLoopGeneral\]](#eqn:BooleanLoopGeneral){reference-type="eqref" reference="eqn:BooleanLoopGeneral"} to calculate $\varphi^{\star 6}\left(\mathbf{0}\right)$ for $d=25$ we would have to deal with $d2^{2d}\Gamma\left(\tfrac{d}{2}+1\right)^4 \approx 2.41\times 10^{53}$, while $\varphi^{\star 6}\left(\mathbf{0}\right)\approx 5.34\times10^{-6}$. Fortunately MATLAB has the function `betainc(x,a,b)`, which calculates the (normalised) incomplete beta function $$\frac{\Gamma\left(a+b\right)}{\Gamma(a)\Gamma(b)}\int^x_0 t^{a-1}\left(1-t\right)^{b-1}\mathrm{d}t = \frac{\Gamma\left(a+b\right)}{\Gamma(a)\Gamma(b)}B\left(x;a,b\right).$$ This `betainc` function is more efficient at dealing with the different sizes of the prefactor and integral than our naïve attempts, and this is why we put in the extra effort in Lemma [Lemma 41](#lem:BooleanCalcExpressions){reference-type="ref" reference="lem:BooleanCalcExpressions"} to include factors of $B\left(x;a,b\right)$. In particular, this makes $\varphi^{\star 3}\left(\mathbf{0}\right)$ very easy to calculate: MATLAB got to over $d=5000$ before it produced an error (for $d=5000$, $\varphi^{\star 3}\left(\mathbf{0}\right)\approx 1.32\times 10^{-314}$). We can also calculate our expressions for $\varphi^{\star 4}\left(\mathbf{0}\right)$, $\varphi^{\star 1\star2\cdot2}\left(\mathbf{0}\right)$, and $\varphi^{\star 2\star2\cdot 2}\left(\mathbf{0}\right)$ over dimension $d=1000$. Unfortunately the use of `betainc` does not deal with the whole prefactor for $\varphi^{\star 5}\left(\mathbf{0}\right)$, $\varphi^{\star 6}\left(\mathbf{0}\right)$, and $\varphi^{\star 1\star2\cdot3}\left(\mathbf{0}\right)$, and this affects the dimension we can run up to. 
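To illustrate the point: using the relation $\varphi^{\star 3}\left(\mathbf{0}\right)=\frac{3}{2}\varphi^{\star 2}(\tilde{x})$ with $\abs*{\tilde{x}}=R$, the whole of $\varphi^{\star 3}\left(\mathbf{0}\right)$ collapses (in our notation) to $\frac{3}{2}$ times a single regularised incomplete beta value, with no large prefactor ever appearing. A Python/SciPy sketch of our MATLAB computation (`scipy.special.betainc` is the counterpart of MATLAB's `betainc`; the function names are our own):

```python
import numpy as np
from scipy.special import betainc   # regularised incomplete beta I_y(a, b)

def phi3(d):
    # phi^{*3}(0) = (3/2) * I_{3/4}((d+1)/2, 1/2); the gamma-function
    # prefactor is absorbed into the regularised function, so no huge
    # intermediate values appear even in very high dimension.
    return 1.5 * betainc((d + 1) / 2, 0.5, 0.75)

def phi3_asymp(d):
    # the rigorous asymptotic from Lemma 42
    return np.sqrt(27 / (2 * np.pi * d)) * 0.75 ** (d / 2)
```

For $d=1$ this returns $\frac{3}{4}$ (the interval value), it remains strictly positive in dimensions well past $d=1000$, and the ratio `phi3(d) / phi3_asymp(d)` tends to $1$.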
For $\varphi^{\star 5}\left(\mathbf{0}\right)$ and $\varphi^{\star 1\star2\cdot3}\left(\mathbf{0}\right)$ we can run up to $d=54$ (where they are $\approx9.06\times 10^{-10}$ and $\approx2.29\times10^{-11}$ respectively). For $\varphi^{\star 6}\left(\mathbf{0}\right)$ we can only run to $d=36$, where we find $\varphi^{\star 6}\left(\mathbf{0}\right)\approx3.58\times10^{-8}$. $\diamond$ *Remark 45*. Upon inspecting Figure [3](#fig:TopHalfPlot){reference-type="ref" reference="fig:TopHalfPlot"}, the plots of $\varphi^{\star1\star2\cdot2}\left(\mathbf{0}\right)$ and $\varphi^{\star3}\left(\mathbf{0}\right)^2$ appear very close together. The plots of $\varphi^{\star1\star2\cdot3}\left(\mathbf{0}\right)$ and $\varphi^{\star3}\left(\mathbf{0}\right)\varphi^{\star4}\left(\mathbf{0}\right)$ in Figure [4](#fig:BottomHalfPlot){reference-type="ref" reference="fig:BottomHalfPlot"} also appear to be tracking closely together. In Figure [\[fig:Comparison\]](#fig:Comparison){reference-type="ref" reference="fig:Comparison"} we plot how the ratios of these similar terms vary with dimension. Since we are able to evaluate $\varphi^{\star1\star2\cdot2}\left(\mathbf{0}\right)$ and $\varphi^{\star3}\left(\mathbf{0}\right)^2$ to relatively high dimensions, we are able to plot their ratio all the way up to $d=2500$ in Figure [5](#fig:Comparison122vs3x3){reference-type="ref" reference="fig:Comparison122vs3x3"}. From this plot it is very tempting to suggest that their ratio is approaching a finite and positive limit. In fact, since $\sfrac{\varphi^{\star 1\star2\cdot2}\left(\mathbf{0}\right)}{\varphi^{\star3}\left(\mathbf{0}\right)^2}\approx1.329$ at $d=2500$, it is tempting to suggest that the ratio approaches $\tfrac{4}{3}$ as $d\to\infty$. Since we rigorously have the asymptotics of $\varphi^{\star3}\left(\mathbf{0}\right)$, this would imply the asymptotics of $\varphi^{\star 1\star2\cdot2}\left(\mathbf{0}\right)$. 
We are not able to evaluate $\varphi^{\star1\star2\cdot3}\left(\mathbf{0}\right)$ to similarly high dimensions - we can only reach $d=54$. Nevertheless, the slope of the plot in Figure [6](#fig:Comparison123vs3x4){reference-type="ref" reference="fig:Comparison123vs3x4"} is flattening and it is tempting to suggest that the ratio $\sfrac{\varphi^{\star 1\star2\cdot3}\left(\mathbf{0}\right)}{\varphi^{\star3}\left(\mathbf{0}\right)\varphi^{\star4}\left(\mathbf{0}\right)}$ approaches a finite and positive limit. While we don't conjecture a value for the limit here, the existence of such a limit would allow us to find the asymptotic scale of $\varphi^{\star 1\star2\cdot3}\left(\mathbf{0}\right)$. If we look at the ratio of the other pairs of diagrams it is usually very clear that one is far larger than the other, with the ratio apparently growing at an exponential rate. The only exceptions are the trio of $\varphi^{\star 6}\left(\mathbf{0}\right)$, $\varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right)$, and $\varphi^{\star 3}\left(\mathbf{0}\right)^3$. While the ratios appear to be growing for each pair in this trio, the rate seems to be slowing. If $\varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right)$ and $\varphi^{\star 3}\left(\mathbf{0}\right)^3$ were both decaying at the same exponential rate but had different polynomial corrections, then we would have $\sfrac{\left(\log\varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right)-3\log\varphi^{\star 3}\left(\mathbf{0}\right)\right)}{\log d}$ approaching a non-zero limit as $d\to\infty$. In Figure [\[fig:ComparisonSmallTerms\]](#fig:ComparisonSmallTerms){reference-type="ref" reference="fig:ComparisonSmallTerms"} we plot this comparison for the two independent pairs in the trio, and it indeed seems plausible that the plots are approaching a non-zero limit. 
Nevertheless, these three terms look to be far smaller than the $\varphi^{\star1\star2\cdot3}\left(\mathbf{0}\right)$ and $\varphi^{\star3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right)$ terms, and so will all be negligible for our discussion. $\diamond$ The observations made in Remarks [Remark 43](#rem:BooleanOrderofTerms){reference-type="ref" reference="rem:BooleanOrderofTerms"}-[Remark 45](#rem:BooleanRatios){reference-type="ref" reference="rem:BooleanRatios"} and the plots in Figures [\[fig:DiagramSizes\]](#fig:DiagramSizes){reference-type="ref" reference="fig:DiagramSizes"} and [\[fig:Comparison\]](#fig:Comparison){reference-type="ref" reference="fig:Comparison"} allow us to make the following conjecture. We use the notation $f\gg g$ to indicate $\frac{f(d)}{g(d)}\to\infty$, and $f\asymp g$ to indicate $\frac{f(d)}{g(d)}$ and $\frac{g(d)}{f(d)}$ are both bounded as $d\to\infty$. **Conjecture 46**. *For the Hyper-Sphere RCM, as $d\to\infty$, $$\varphi^{\star 3}\left(\mathbf{0}\right) \gg \varphi^{\star 4}\left(\mathbf{0}\right) \gg \varphi^{\star 1\star 2\cdot 2}\left(\mathbf{0}\right) \asymp \left(\varphi^{\star 3}\left(\mathbf{0}\right)\right)^2 \gg \varphi^{\star 5}\left(\mathbf{0}\right) \gg \varphi^{\star 1 \star 2\cdot 3}\left(\mathbf{0}\right) \asymp \varphi^{\star 3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right),$$ and $$\varphi^{\star 6}\left(\mathbf{0}\right) + \varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right) + \left(\varphi^{\star 3}\left(\mathbf{0}\right)\right)^3 = \mathcal{O}\left(\varphi^{\star 3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right)\right).$$ Therefore $$q_\varphi\lambda_c = 1 + \frac{1}{q_\varphi^2}\varphi^{\star 3}\left(\mathbf{0}\right) + \frac{3}{2}\frac{1}{q_\varphi^3}\varphi^{\star 4}\left(\mathbf{0}\right) - \frac{5}{2}\frac{1}{q_\varphi^3}\varphi^{\star 1\star 2\cdot2}\left(\mathbf{0}\right) + 2\frac{1}{q_\varphi^4}\left(\varphi^{\star 3}\left(\mathbf{0}\right)\right)^2 + 
2\frac{1}{q_\varphi^4}\varphi^{\star 5}\left(\mathbf{0}\right) + \mathcal{O}\left(\frac{1}{d}\left(\frac{2}{3}\right)^d\right).$$* Note that this would be a different order of terms than that we found for the Hyper-Cube RCM in Corollary [Corollary 7](#cor:cube){reference-type="ref" reference="cor:cube"}. ## Hyper-Cube Calculations Recall that for side length $L>0$, the Hyper-Cubic RCM is defined by having $$\varphi(x) = \prod^d_{j=1}\mathds 1_{\left\{\abs*{x_j}\leq L/2\right\}},$$ where $x=\left(x_1,\ldots,x_d\right)\in\mathbb R^d$. Throughout this section we choose a scaling of $\mathbb R^d$ such that $L=1$. **Lemma 47**. *For the Hyper-Cube RCM with side length $L=1$, $$\begin{aligned} \varphi^{\star 3}\left(\mathbf{0}\right) &= \left(\frac{3}{4}\right)^d = \left(0.75\right)^d,\\ \varphi^{\star 4}\left(\mathbf{0}\right) &= \left(\frac{2}{3}\right)^d \approx \left(0.66667\right)^d,\\ \varphi^{\star 5}\left(\mathbf{0}\right) &= \left(\frac{115}{192}\right)^d \approx\left(0.59896\right)^d,\\ \varphi^{\star 1\star 2\cdot 2}\left(\mathbf{0}\right) &= \left(\frac{7}{12}\right)^d \approx \left(0.58333\right)^d,\\ \varphi^{\star 3}\left(\mathbf{0}\right)^2 &= \left(\frac{9}{16}\right)^d =\left(0.5625\right)^d,\\ \varphi^{\star 6}\left(\mathbf{0}\right) &= \left(\frac{11}{20}\right)^d = \left(0.55\right)^d,\\ \varphi^{\star 7}\left(\mathbf{0}\right) & = \left(\frac{5887}{11520}\right)^d \approx \left(0.51102\right)^d,\\ \varphi^{\star 1 \star 2\cdot 3}\left(\mathbf{0}\right) &= \left(\frac{49}{96}\right)^d \approx \left(0.51042\right)^d,\\ \varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right) &= \left(\frac{1}{2}\right)^d = \left(0.5\right)^d,\\ \varphi^{\star 3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right) &= \left(\frac{1}{2}\right)^d = \left(0.5\right)^d,\\ \varphi^{\star 8}\left(\mathbf{0}\right) & = \left(\frac{151}{315}\right)^d \approx \left(0.47937\right)^d,\\ \varphi^{\star 3}\left(\mathbf{0}\right)^3 &= \left(\frac{27}{64}\right)^d 
\approx\left(0.42188\right)^d. \end{aligned}$$* *Proof.* First note that the hyper-cubic adjacency function factorises into the $d$ dimensions: $$\varphi(x) = \prod^d_{i=1}\mathds 1_{\left\{\abs*{x_i}<\frac{1}{2}\right\}},$$ where $x=\left(x_1,x_2,\ldots,x_d\right)$. Therefore to find the desired expressions, we only need to evaluate them for dimension $1$, and then take the result to the power $d$ to get the result for dimension $d$. Let us denote the $1$-dimensional adjacency function $\varphi_1\colon \mathbb R\to \left[0,1\right]$, $$\varphi_1(x) = \begin{cases} 1&\colon \abs*{x}<\frac{1}{2}\\ 0&\colon \abs*{x}\geq\frac{1}{2}. \end{cases}$$ By direct calculation (these can be easily verified by Mathematica, for example), one finds $$\begin{aligned} \varphi_1^{\star 2}(x) &= \begin{cases} 1-\abs*{x} &\colon \abs*{x}<1\\ 0 &\colon \abs*{x}\geq 1, \end{cases}\\ \varphi_1^{\star 3}(x) &=\begin{cases} \frac{1}{4}\left(3-4x^2\right) &\colon \abs*{x}<\frac{1}{2}\\ \frac{1}{8}\left(3-2\abs*{x}\right)^2 &\colon \frac{1}{2}\leq \abs*{x}<\frac{3}{2}\\ 0 &\colon \abs*{x}\geq \frac{3}{2}, \end{cases}\\ \varphi_1^{\star 4}(x) &=\begin{cases} \frac{1}{6}\left(4-6x^2+3\abs*{x}^3\right) &\colon \abs*{x}<1\\ \frac{1}{6}\left(2-\abs*{x}\right)^3 &\colon 1\leq \abs*{x}<2\\ 0 &\colon \abs*{x}\geq 2. \end{cases} \end{aligned}$$ In particular, this means $\varphi_1^{\star 3}\left(\mathbf{0}\right) =\frac{3}{4}$ and $\varphi_1^{\star 4}\left(\mathbf{0}\right) = \frac{2}{3}$. Taking these to the power $d$ returns the required results for $\varphi^{\star 3}\left(\mathbf{0}\right)$ and $\varphi^{\star 4}\left(\mathbf{0}\right)$. These also give the results for $\varphi^{\star 3}\left(\mathbf{0}\right)^2$, $\varphi^{\star 3}\left(\mathbf{0}\right)^3$ and $\varphi^{\star 3}\left(\mathbf{0}\right)\varphi^{\star 4}\left(\mathbf{0}\right)$. 
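As a quick aside (not needed for the proof), the piecewise polynomials above can be cross-checked by sampling $\varphi_1$ on a grid and convolving numerically; a small Python sketch, where the grid size and the names are our own choices:

```python
import numpy as np

n = 1001                               # grid points on [-1/2, 1/2]
h = 1.0 / (n - 1)                      # grid step
f = np.ones(n)                         # phi_1 sampled on [-1/2, 1/2]

conv2 = np.convolve(f, f) * h          # phi_1^{*2} on the refined grid
conv3 = np.convolve(conv2, f) * h      # phi_1^{*3}
conv4 = np.convolve(conv3, f) * h      # phi_1^{*4}

def at_zero(a):
    # the full convolution of symmetric grids is centred at its middle index
    return a[(len(a) - 1) // 2]
```

Up to an $O(h)$ discretisation error, `at_zero(conv3)` and `at_zero(conv4)` reproduce the values $\frac{3}{4}$ and $\frac{2}{3}$.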
Then let us observe and calculate $$\begin{aligned} \varphi_1^{\star 5}\left(\mathbf{0}\right) &= \int^1_{-1}\varphi_1^{\star 2}(x)\varphi_1^{\star 3}(x)\mathrm{d}x = \frac{115}{192} \approx 0.59896,\\ \varphi_1^{\star 6}\left(\mathbf{0}\right)&= \int^\frac{3}{2}_{-\frac{3}{2}}\varphi_1^{\star 3}(x)\varphi_1^{\star 3}(x)\mathrm{d}x = \frac{11}{20} = 0.55,\\ \varphi_1^{\star 7}\left(\mathbf{0}\right) &= \int^\frac{3}{2}_{-\frac{3}{2}}\varphi_1^{\star 3}(x)\varphi_1^{\star 4}(x)\mathrm{d}x = \frac{5887}{11520}\approx 0.51102,\\ \varphi_1^{\star 8}\left(\mathbf{0}\right) &= \int^2_{-2}\varphi_1^{\star 4}(x)\varphi_1^{\star 4}(x)\mathrm{d}x = \frac{151}{315}\approx 0.47937. \end{aligned}$$ Similarly, we find $$\begin{aligned} \varphi_1^{\star 1\star 2\cdot 2} \left(\mathbf{0}\right) &= \int^\frac{1}{2}_{-\frac{1}{2}}\varphi_1^{\star 2}(x)^2\mathrm{d}x = \frac{7}{12}\approx 0.58333\\ \varphi_1^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right) &= \int^1_{-1}\varphi_1^{\star 2}(x)^3\mathrm{d}x = \frac{1}{2} = 0.5\\ \varphi_1^{\star 1\star 2\cdot 3}\left(\mathbf{0}\right) &= \int^\frac{1}{2}_{-\frac{1}{2}}\varphi_1^{\star 2}(x)\varphi_1^{\star 3}(x)\mathrm{d}x = \frac{49}{96} \approx 0.51042. \end{aligned}$$ Finally taking these values to the $d^{\mathrm {th}}$ power gives the required results. ◻ **Lemma 48**. *The Hyper-Cube RCM satisfies Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"}.* *Proof.* For Assumption [\[Assump:DecayBound\]](#Assump:DecayBound){reference-type="ref" reference="Assump:DecayBound"}, recall that $$\varphi^{\star 2}(x) = \prod_{i=1}^d \left(1-\abs*{x_i}\right)\mathds 1_{\left\{\abs*{x_i}\leq 1\right\}},$$ where $x=\left(x_1,x_2,\ldots,x_d\right)$. 
In conjunction with Lemma [Lemma 47](#lem:HyperCubeCalculations){reference-type="ref" reference="lem:HyperCubeCalculations"}, we see that Assumption [\[Assump:DecayBound\]](#Assump:DecayBound){reference-type="ref" reference="Assump:DecayBound"} is satisfied with $g(d)=\left(\frac{3}{4}\right)^d$. For [\[Assump:QuadraticBound\]](#Assump:QuadraticBound){reference-type="ref" reference="Assump:QuadraticBound"} we note that $$\label{eqn:H-CubeFourier} \widehat\varphi(k) = \prod^d_{i=1}\left(\frac{2}{k_i}\sin\frac{k_i}{2}\right),$$ where $k=\left(k_1,k_2,\ldots,k_d\right)$. Since $\sin x \leq x - \frac{1}{6}x^3 + \frac{1}{120}x^5$ for all $x\geq 0$, and since each factor of $\widehat\varphi$ is even in $k_i$ and non-negative for $\abs*{k_i}\leq 3$, for $\max_i\abs*{k_i}\leq 3$ we have $$\widehat\varphi(k) \leq \prod^d_{i=1}\left(1-\frac{1}{24}k_i^2 + \frac{1}{1920}k_i^4\right).$$ Furthermore, since $1-\frac{1}{24}t^2+\frac{1}{1920}t^4 \leq 1-\frac{71}{1920}t^2$ precisely when $t^2\leq 9$, we then have $$\begin{aligned} \widehat\varphi(k) &\leq \prod^d_{i=1}\left(1-\frac{71}{1920}k_i^2\right)\nonumber\\ &= 1 - \frac{71}{1920}\abs*{k}^2 + \left(\frac{71}{1920}\right)^2\sum_{\substack{i,j=1\\ i<j}}^dk_i^2k_j^2 - \left(\frac{71}{1920}\right)^3\sum_{\substack{i,j,l=1\\ i<j<l}}^dk_i^2k_j^2k_l^2 + \ldots \pm \left(\frac{71}{1920}\right)^dk_1^2\ldots k_d^2\nonumber\\ &\leq 1 - \frac{71}{1920}\abs*{k}^2 + \left(\frac{71}{1920}\right)^2\abs*{k}^4 + 0 + \left(\frac{71}{1920}\right)^4\abs*{k}^8 + 0 + \ldots +\begin{cases} \left(\frac{71}{1920}\right)^{d}\abs*{k}^{2d} &\colon d \text{ is even}\\ \left(\frac{71}{1920}\right)^{d-1}\abs*{k}^{2d-2} &\colon d \text{ is odd.} \end{cases} \end{aligned}$$ Here we have bounded the later negative terms above by $0$, and bounded the positive terms above by powers of $\abs*{k}^4$. 
Therefore if $\abs*{k}^2 < \frac{1920}{71}$ then we have $$\widehat\varphi(k) \leq 1 - \frac{71}{1920}\abs*{k}^2 + \sum^\infty_{n=1}\left(\frac{71}{1920}\right)^{2n}\abs*{k}^{4n} = 1 - \frac{71}{1920}\abs*{k}^2 + \left(\frac{71}{1920}\right)^{2}\abs*{k}^{4}\left(1 - \left(\frac{71}{1920}\right)^{2}\abs*{k}^{4}\right)^{-1}.$$ Note that $\abs*{k}\leq 3 \implies \max_i\abs*{k_i}\leq 3$, and therefore $\abs*{k}\leq 3$ also implies $$\widehat\varphi(k) \leq 1 - \frac{71}{1920}\abs*{k}^2\left(1- \frac{213}{640}\left(1 - \left(\frac{213}{640}\right)^{2}\right)^{-1}\right) \leq 1 - \frac{5}{8}\times\frac{71}{1920}\abs*{k}^2,$$ where we have used $\frac{213}{640}<\frac{1}{3}$. Therefore we have constants $b,c_1>0$ such that $\abs*{k}\leq b$ implies that $\widehat\varphi(k) \leq 1 - c_1\abs*{k}^2$. From [\[eqn:H-CubeFourier\]](#eqn:H-CubeFourier){reference-type="eqref" reference="eqn:H-CubeFourier"} it is clear that $\widehat\varphi(k)$ is radially decreasing and non-negative on the set $\left\{k\in\mathbb R^d\colon \max_{i}\abs*{k_i}\leq 2\pi\right\}$. On the other hand if there exists $i^*\in\left\{1,2,\ldots,d\right\}$ such that $\abs*{k_{i^*}}>2\pi$, then $\abs*{\widehat\varphi(k)}<\frac{1}{\pi}$. Therefore if $\abs*{k}>3$ we can bound $$\widehat\varphi(k) \leq 1 - \frac{5}{8}\times\frac{71}{1920}\times3^2 < \frac{4}{5}.$$ We have therefore proven that [\[Assump:QuadraticBound\]](#Assump:QuadraticBound){reference-type="ref" reference="Assump:QuadraticBound"} holds with $b=3$, $c_1= \frac{5}{8}\times\frac{71}{1920}$, and $c_2=\frac{4}{5}$. Lemma [Lemma 47](#lem:HyperCubeCalculations){reference-type="ref" reference="lem:HyperCubeCalculations"} and our above observation that we can have $g(d)=\left(\frac{3}{4}\right)^d$ ensure that Assumption [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"} holds. 
◻ ## Gaussian Calculations Recall that for $\sigma^2>0$ and $0<\mathcal{A}\leq \left(2\pi\sigma^2\right)^\frac{d}{2}$, the Gaussian RCM is defined by having $$\varphi\left(x\right) = \frac{\mathcal{A}}{\left(2\pi \sigma^2\right)^\frac{d}{2}}\exp\left(-\frac{1}{2\sigma^2}\abs*{x}^2\right).$$ **Lemma 49**. *For the Gaussian RCM, $$\begin{aligned} \varphi^{\star n}\left(\mathbf{0}\right) &= \mathcal{A}^n\left(2n\pi\sigma^2\right)^{-\frac{d}{2}}\qquad\forall n\geq 1,\\ \varphi^{\star n_1 \star n_2 \cdot n_3}\left(\mathbf{0}\right) &= \mathcal{A}^{n_1+n_2+n_3}\left(\left(n_1n_2 + n_1n_3 + n_2n_3\right)\left(2\pi\sigma^2\right)^2\right)^{-\frac{d}{2}}\qquad\forall n_1,n_2,n_3\geq 1. \end{aligned}$$ In particular, $$\begin{aligned} \varphi^{\star 1\star 2\cdot 2}\left(\mathbf{0}\right) &= \mathcal{A}^5\left(32\pi^2\sigma^4\right)^{-\frac{d}{2}} = \mathcal{A}^5\left(8\times \left(2\pi\sigma^2\right)^2\right)^{-\frac{d}{2}}\\ \varphi^{\star 1 \star 2\cdot 3}\left(\mathbf{0}\right) &= \mathcal{A}^6\left(44\pi^2\sigma^4\right)^{-\frac{d}{2}} = \mathcal{A}^6\left(11\times \left(2\pi\sigma^2\right)^2\right)^{-\frac{d}{2}}\\ \varphi^{\star 2\star 2\cdot 2}\left(\mathbf{0}\right) &= \mathcal{A}^6\left(48\pi^2\sigma^4\right)^{-\frac{d}{2}} = \mathcal{A}^6\left(12\times \left(2\pi\sigma^2\right)^2\right)^{-\frac{d}{2}}. \end{aligned}$$* *Proof.* Without loss of generality, we scale space so that $q_\varphi= \mathcal{A}= 1$. 
First we note that the convolution of two unit-mass Gaussian functions is itself a unit-mass Gaussian function whose "variance" parameter is the sum of the variance parameters of the two initial Gaussian functions: $$\begin{gathered} \int_{\mathbb R^d}\frac{1}{\left(2\pi\sigma_1^2\right)^\frac{d}{2}}\exp\left(-\frac{1}{2\sigma_1^2}\abs*{x-y}^2\right)\frac{1}{\left(2\pi\sigma_2^2\right)^\frac{d}{2}}\exp\left(-\frac{1}{2\sigma_2^2}\abs*{y}^2\right) \mathrm{d}y \\= \frac{1}{\left(2\pi\left(\sigma_1^2 + \sigma_2^2\right)\right)^\frac{d}{2}}\exp\left(-\frac{1}{2\left(\sigma_1^2+\sigma_2^2\right)}\abs*{x}^2\right). \end{gathered}$$ It therefore follows that $$\varphi^{\star n}(x) = \frac{1}{\left(2\pi n\sigma^2\right)^\frac{d}{2}}\exp\left(-\frac{1}{2n\sigma^2}\abs*{x}^2\right),$$ and $\varphi^{\star n}\left(\mathbf{0}\right)=\left(2\pi n\sigma^2\right)^{-\frac{d}{2}}$. For the remaining expressions we write the pointwise product of two unit-mass Gaussian functions as a constant multiple of a unit-mass Gaussian function: $$\begin{gathered} \frac{1}{\left(2\pi\sigma_1^2\right)^\frac{d}{2}}\exp\left(-\frac{1}{2\sigma_1^2}\abs*{x}^2\right)\frac{1}{\left(2\pi\sigma_2^2\right)^\frac{d}{2}}\exp\left(-\frac{1}{2\sigma_2^2}\abs*{x}^2\right)=\frac{1}{\left(4\pi^2\sigma_1^2\sigma_2^2\right)^\frac{d}{2}}\exp\left(-\frac{\sigma_1^2+\sigma_2^2}{2\sigma_1^2\sigma_2^2}\abs*{x}^2\right) \\= \frac{1}{\left(2\pi\left(\sigma_1^2+\sigma_2^2\right)\right)^\frac{d}{2}}\left(\frac{\sigma_1^2+\sigma_2^2}{2\pi\sigma_1^2\sigma_2^2}\right)^{\frac{d}{2}}\exp\left(-\frac{\sigma_1^2+\sigma_2^2}{2\sigma_1^2\sigma_2^2}\abs*{x}^2\right). \end{gathered}$$ Using this expression, we find $$\begin{gathered} \varphi^{\star n_1 \star n_2 \cdot n_3}\left(\mathbf{0}\right) = \left(2\pi\sigma^2\left(n_2+n_3\right)\right)^{-\frac{d}{2}}\left(2\pi\sigma^2\left(n_1 + \frac{n_2n_3}{n_2+n_3}\right)\right)^{-\frac{d}{2}} \\= \left(4\pi^2\sigma^4\left(n_1n_2 + n_1n_3 + n_2n_3\right)\right)^{-\frac{d}{2}}. 
\end{gathered}$$ This produces the results. ◻ **Lemma 50**. *The Gaussian RCM with $\liminf_{d\to\infty}\varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$ satisfies Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"}.* *Proof.* For this proof we make the scaling choice that the total mass of the adjacency function in each dimension is set to be equal to $1$. Clearly this maps $\mathcal{A}\mapsto \widetilde{\mathcal{A}}\equiv1$, but since $\varphi\left(\mathbf{0}\right)=\mathcal{A}\left(2\pi\sigma^2\right)^{-\frac{d}{2}}$ is left invariant, we also have $\sigma\mapsto\widetilde{\sigma}=\sigma\mathcal{A}^{-\frac{1}{d}}$. The condition that $\liminf \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$ now means that $\limsup\widetilde{\sigma}<\infty$, and the trivial condition that $\varphi\left(\mathbf{0}\right)\leq 1$ means that $\widetilde{\sigma}^2\geq \sfrac {1}{2\pi}$. The results of Lemma [Lemma 49](#lem:GaussianCalculations){reference-type="ref" reference="lem:GaussianCalculations"} prove that [\[Assump:DecayBound\]](#Assump:DecayBound){reference-type="ref" reference="Assump:DecayBound"} holds with the choice $g(d)= \left(4\pi\widetilde{\sigma}^2\right)^{-\frac{d}{2}} = 2^{-\frac{d}{2}}\varphi\left(\mathbf{0}\right)$ and therefore $\beta(d) = 2^{-\frac{d}{8}}\varphi\left(\mathbf{0}\right)^{\frac{1}{4}}$ (here we use $\limsup\widetilde{\sigma}<\infty$ to get the appropriate form of $\beta$ from [\[eqn:betafromgfunction\]](#eqn:betafromgfunction){reference-type="eqref" reference="eqn:betafromgfunction"}). Now observe that the Fourier transform of $\varphi(x)$ is given by $$\widehat\varphi(k) = \exp\left(-\frac{1}{2}\widetilde{\sigma}^2\norm*{k}_2^2\right) \leq \exp\left(-\frac{1}{4\pi}\norm*{k}_2^2\right),$$ where the inequality follows from $\widetilde{\sigma}^2\geq \sfrac {1}{2\pi}$. 
Therefore [\[Assump:QuadraticBound\]](#Assump:QuadraticBound){reference-type="ref" reference="Assump:QuadraticBound"} holds. For Assumption [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"}, we first use Lemma [Lemma 49](#lem:GaussianCalculations){reference-type="ref" reference="lem:GaussianCalculations"} to see that $\varphi^{\star 6}\left(\mathbf{0}\right) = 6^{-\frac{d}{2}}q_\varphi^5\varphi\left(\mathbf{0}\right)$. Therefore [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} can be seen to hold with $\rho=6^{-\frac{1}{2}}\liminf \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$. This also provides a lower bound on $h(d)$. After noting that $\log \beta(d)<0$, we have $$\frac{\log h(d)}{\log \beta(d)} \leq \frac{-\frac{d}{2}\log 6 + \log \varphi\left(\mathbf{0}\right)}{-\frac{d}{8}\log 2 + \frac{1}{4}\log\varphi\left(\mathbf{0}\right)} \leq \frac{4\log 6 - 8\log \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}}{\log 2 - 2 \log \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}}.$$ Note that $\log \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}\leq0$. By taking the derivative of the map $x\mapsto \frac{4\log 6 - 8x}{\log 2-2x}$ for $x\leq 0$ we can find that it is maximised at $x=0$. Therefore $$\frac{\log h(d)}{\log \beta(d)} \leq 4\frac{\log 6}{\log 2} = 4\log_2 6.$$ Since this is finite, we have proven Assumption [\[Assump:NumberBound\]](#Assump:NumberBound){reference-type="ref" reference="Assump:NumberBound"}. ◻ ## Coordinate-Cauchy Calculations Recall that for $\gamma>0$ and $0<\mathcal{A}\leq \left(\gamma\pi\right)^d$, the Coordinate-Cauchy RCM is defined by having $$\varphi(x) = \frac{\mathcal{A}}{\left(\gamma\pi\right)^d}\prod^d_{j=1}\frac{\gamma^2}{\gamma^2+x^2_j},$$ where $x=\left(x_1,\ldots,x_d\right)\in\mathbb R^d$. **Lemma 51**. 
*For the Coordinate-Cauchy RCM, $$\begin{aligned} \varphi^{\star n}\left(\mathbf{0}\right) &= \mathcal{A}^n\left(\frac{1}{n\gamma\pi}\right)^d, \qquad\forall n\geq 1,\\ \varphi^{\star n_1\star n_2 \cdot n_3}\left(\mathbf{0}\right) &= \mathcal{A}^{n_1+n_2+n_3}\left(\frac{n_1+n_2+n_3}{\left(n_1+n_2\right)\left(n_1+n_3\right)\left(n_2+n_3\right)\gamma^2\pi^2}\right)^d, \qquad\forall n_1,n_2,n_3\geq 1. \end{aligned}$$ In particular, $$\varphi^{\star 1\star 2\cdot 2}\left(\mathbf{0}\right) = \mathcal{A}^5\left(\frac{5}{36\gamma^2\pi^2}\right)^d,\qquad \varphi^{\star1\star2\cdot3}\left(\mathbf{0}\right) = \mathcal{A}^6\left(\frac{1}{10\gamma^2\pi^2}\right)^d,\qquad \varphi^{\star2\star2\cdot2}\left(\mathbf{0}\right) = \mathcal{A}^6\left(\frac{3}{32\gamma^2\pi^2}\right)^d.$$* *Proof.* We begin with the simplification that $q_\varphi=\mathcal{A}$ is set to be equal to $1$ (by a spatial scaling choice). Like for the Hyper-Cubic model, the factorisable structure of the adjacency function means that we only need to evaluate the answers for the $1$-dimensional model, and then we can take the result to the power $d$ to get the $d$-dimensional answer. Let the $1$-dimensional adjacency function be denoted $$\varphi_1(x) = \frac{\gamma}{\pi\left(\gamma^2+x^2\right)}.$$ By well-known complex analysis techniques, the Fourier transform of this function is given by $$\widehat\varphi_1(k) = \text{e}^{-\gamma\abs*{k}}$$ for $k\in\mathbb R$. Then by the Fourier inversion formula, for $n\geq 1$, $$\varphi_1^{\star n}\left(\mathbf{0}\right) = \frac{1}{2\pi}\int^\infty_{-\infty}\text{e}^{-n\gamma\abs*{k}}\mathrm{d}k = \frac{1}{\gamma\pi}\int^\infty_{0}\text{e}^{-n k}\mathrm{d}k = \frac{1}{n\gamma\pi}.$$ The calculation is a little more complicated for the remaining objects. 
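As a numerical cross-check of the lemma (not needed for the proof): since $\widehat\varphi_1(k)^n=\text{e}^{-n\gamma\abs*{k}}$, the $n$-fold convolution $\varphi_1^{\star n}$ is itself a Cauchy density with scale $n\gamma$, so the stated formulas can be verified by direct quadrature. A short Python/SciPy sketch with $\gamma=1$ (the function names are our own):

```python
import numpy as np
from scipy.integrate import quad

def cauchy(x, n):
    # phi_1^{*n}(x): a Cauchy density with scale n (taking gamma = 1),
    # so phi_1^{*n}(0) = 1/(n*pi)
    return n / (np.pi * (n ** 2 + x ** 2))

def triple(n1, n2, n3):
    # numerical value of phi_1^{*n1 *n2 . n3}(0) alongside the closed form
    numeric = quad(lambda x: cauchy(x, n1) * cauchy(x, n2) * cauchy(x, n3),
                   -np.inf, np.inf)[0]
    closed = (n1 + n2 + n3) / ((n1 + n2) * (n1 + n3) * (n2 + n3) * np.pi ** 2)
    return numeric, closed
```

For example, the two numbers returned by `triple(1, 2, 3)` agree to quadrature precision with $\frac{1}{10\pi^2}$, as in the statement above.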
For $n_1,n_2,n_3\geq 1$, $$\begin{gathered} \varphi_1^{\star n_1 \star n_2 \cdot n_3}\left(\mathbf{0}\right) = \frac{1}{\left(2\pi\right)^2}\int^\infty_{-\infty}\int^\infty_{-\infty}\text{e}^{-n_1 \gamma\abs{k} - n_2\gamma\abs{k-l} - n_3\gamma\abs{l}}\mathrm{d}k\mathrm{d}l \\= \frac{1}{\left(2\gamma\pi\right)^2}\int^\infty_{-\infty}\int^\infty_{-\infty}\text{e}^{-n_1 \abs{k} - n_2\abs{k-l} - n_3\abs{l}}\mathrm{d}k\mathrm{d}l. \end{gathered}$$ For $l\geq 0$, the $k$-integral can then be partitioned as $$\begin{aligned} \int^\infty_{-\infty}\text{e}^{-n_1 \abs{k} - n_2\abs{k-l}}\mathrm{d}k &= \int^\infty_{l}\text{e}^{-n_1k - n_2k + n_2l}\mathrm{d}k + \int^{l}_{0}\text{e}^{-n_1k + n_2k-n_2l}\mathrm{d}k + \int^0_{-\infty}\text{e}^{n_1k + n_2k-n_2l}\mathrm{d}k\nonumber\\ &= \frac{1}{n_1+n_2}\left(\text{e}^{-n_1l}+\text{e}^{-n_2l}\right) + \begin{cases} l\text{e}^{-n_1l} &\colon n_1=n_2\\ \frac{1}{n_1-n_2}\left(\text{e}^{-n_2l}-\text{e}^{-n_1l}\right) &\colon n_1\ne n_2 \end{cases}\nonumber\\ &= \begin{cases} \left(\frac{1}{n_1} + l\right)\text{e}^{-n_1l} &\colon n_1=n_2\\ \frac{2n_1}{n_1^2- n_2^2}\text{e}^{-n_2l} - \frac{2n_2}{n_1^2-n_2^2}\text{e}^{-n_1l} &\colon n_1\ne n_2. \end{cases} \end{aligned}$$ The calculation is performed similarly for $l<0$, and we get $$\int^\infty_{-\infty}\text{e}^{-n_1 \abs{k} - n_2\abs{k-l}}\mathrm{d}k = \begin{cases} \left(\frac{1}{n_1} + \abs*{l}\right)\text{e}^{-n_1\abs*{l}} &\colon n_1=n_2\\ \frac{2n_1}{n_1^2- n_2^2}\text{e}^{-n_2\abs*{l}} - \frac{2n_2}{n_1^2-n_2^2}\text{e}^{-n_1\abs*{l}} &\colon n_1\ne n_2 \end{cases}$$ for all $l\in\mathbb R$. 
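The piecewise formula for the inner $k$-integral can be sanity-checked numerically; the sketch below (with our own helper names, not from the paper) compares a Riemann-sum approximation against the two-case closed form:

```python
import numpy as np

def inner_integral(n1, n2, l, L=40.0, h=0.001):
    """Riemann-sum approximation of  int_R exp(-n1|k| - n2|k - l|) dk."""
    k = np.arange(-L, L, h)
    return float(np.sum(np.exp(-n1 * np.abs(k) - n2 * np.abs(k - l))) * h)

def inner_closed_form(n1, n2, l):
    """The two-case closed form derived above (n1, n2 >= 1, any real l)."""
    a = abs(l)
    if n1 == n2:
        return (1.0 / n1 + a) * np.exp(-n1 * a)
    return (2 * n1 * np.exp(-n2 * a) - 2 * n2 * np.exp(-n1 * a)) / (n1**2 - n2**2)
```

Both branches of the closed form agree with the quadrature to high accuracy, e.g. for $(n_1,n_2)=(2,2)$ and $(n_1,n_2)=(1,2)$.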
For $n_1=n_2$ we then get $$\begin{aligned} \varphi_1^{\star n_1\star n_1\cdot n_3}\left(\mathbf{0}\right) &= \frac{1}{4\gamma^2\pi^2}\int^\infty_{-\infty}\left(\frac{1}{n_1}+\abs*{l}\right)\text{e}^{-\left(n_1+n_3\right)\abs*{l}}\mathrm{d}l\nonumber\\ &= \frac{1}{2\gamma^2\pi^2}\left(\frac{1}{n_1\left(n_1+n_3\right)} + \frac{1}{\left(n_1+n_3\right)^2}\right) \nonumber\\ &=\frac{2n_1 + n_3}{2n_1\left(n_1+n_3\right)^2}\frac{1}{\gamma^2\pi^2}.\label{eqn:n1=n2Case} \end{aligned}$$ Using $n_1=n_2=2$ and $n_3=1$, and $n_1=n_2=2$ and $n_3=2$ gives us two of our desired results. We are only left with $\varphi^{\star1\star2\cdot 3}\left(\mathbf{0}\right)$. For $n_1\ne n_2$ we get $$\begin{aligned} \frac{1}{\left(2\gamma\pi\right)^2}\int^\infty_{-\infty}\int^\infty_{-\infty}\text{e}^{-n_1 \abs{k} - n_2\abs{k-l} - n_3\abs{l}}\mathrm{d}k\mathrm{d}l &= \frac{1}{2\gamma^2\pi^2}\int^\infty_{0}\left(\frac{2n_1}{n_1^2- n_2^2}\text{e}^{-n_2l} - \frac{2n_2}{n_1^2-n_2^2}\text{e}^{-n_1l}\right)\text{e}^{-n_3l}\mathrm{d}l\nonumber\\ &= \frac{1}{\gamma^2\pi^2}\left(\frac{n_1}{\left(n_1^2-n_2^2\right)\left(n_2+n_3\right)} - \frac{n_2}{\left(n_1^2-n_2^2\right)\left(n_1+n_3\right)}\right)\nonumber\\ &= \frac{n_1+n_2+n_3}{\left(n_1+n_2\right)\left(n_1+n_3\right)\left(n_2+n_3\right)}\frac{1}{\gamma^2\pi^2}. \end{aligned}$$ Note that this expression reduces to the case [\[eqn:n1=n2Case\]](#eqn:n1=n2Case){reference-type="eqref" reference="eqn:n1=n2Case"} if $n_1=n_2$. ◻ **Lemma 52**. *The Coordinate-Cauchy RCM with $\liminf_{d\to\infty}\varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$ satisfies Assumptions [\[Assumption\]](#Assumption){reference-type="ref" reference="Assumption"} and [\[AssumptionBeta\]](#AssumptionBeta){reference-type="ref" reference="AssumptionBeta"}.* *Proof.* For simplicity we scale space so that $q_\varphi=\mathcal{A}=1$. 
As argued analogously for the Gaussian RCM in Lemma [Lemma 50](#lem:GaussianAssumptions){reference-type="ref" reference="lem:GaussianAssumptions"}, the condition $\liminf\varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$ then becomes $\limsup \gamma <\infty$, and $\varphi\left(\mathbf{0}\right)\leq 1$ becomes $\gamma\geq \sfrac{1}{\pi}$. Since $\widehat\varphi(k) = \text{e}^{-\gamma\norm*{k}_1}\geq 0$, we know that $\mathop{\mathrm{ess\,sup}}_{x\in\mathbb R^d}\varphi^{\star m}(x) = \varphi^{\star m}\left(\mathbf{0}\right)$ for all $m\geq 1$. Therefore $$\mathop{\mathrm{ess\,sup}}_{x\in\mathbb R^d}\varphi^{\star m}(x) = \left(m\gamma\pi\right)^{-d}.$$ Since $\gamma\pi\geq 1$, this approaches zero for all $m\geq 2$. Therefore [\[Assump:DecayBound\]](#Assump:DecayBound){reference-type="ref" reference="Assump:DecayBound"} holds with the choice $g(d) = \left(2\gamma\pi\right)^{-d}= 2^{-d}\varphi\left(\mathbf{0}\right)$ and $\beta(d)=2^{-\frac{d}{4}}\varphi\left(\mathbf{0}\right)^{\frac{1}{4}}$ (here we use $\limsup \gamma <\infty$ to get the appropriate form of $\beta$ from [\[eqn:betafromgfunction\]](#eqn:betafromgfunction){reference-type="eqref" reference="eqn:betafromgfunction"}). Furthermore, $\gamma$ cannot approach $0$ and therefore our expression for $\widehat\varphi(k)$ implies [\[Assump:QuadraticBound\]](#Assump:QuadraticBound){reference-type="ref" reference="Assump:QuadraticBound"} holds too. From our prior calculations we have $\varphi^{\star 6}\left(\mathbf{0}\right) =6^{-d}q_\varphi^5\varphi\left(\mathbf{0}\right)$ and therefore [\[Assump:ExponentialDecay\]](#Assump:ExponentialDecay){reference-type="ref" reference="Assump:ExponentialDecay"} can be seen to hold with $\rho=6^{-1}\liminf \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}>0$. It also allows us to lower bound $h(d)$.
Noting that $\log\beta(d)<0$, this implies that $$\frac{\log h(d)}{\log \beta(d)} \leq \frac{-d\log 6 + \log \varphi\left(\mathbf{0}\right)}{-\frac{d}{4}\log 2 + \frac{1}{4}\log\varphi\left(\mathbf{0}\right)} \leq \frac{4\log 6 - 4\log \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}}{\log 2 - \log \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}}.$$ Note that $\log \varphi\left(\mathbf{0}\right)^{\frac{1}{d}}\leq0$. By taking the derivative of the map $x\mapsto \frac{4\log 6 - 4x}{\log 2-x}$ for $x\leq 0$ we can find that it is maximised at $x=0$. Therefore $$\frac{\log h(d)}{\log \beta(d)} \leq 4\frac{\log 6}{\log 2} = 4\log_2 6.$$ Since this is finite, we have proven Assumption [\[Assump:NumberBound\]](#Assump:NumberBound){reference-type="ref" reference="Assump:NumberBound"}. ◻ #### Acknowledgements. This work is supported by *Deutsche Forschungsgemeinschaft* (project number 443880457) through priority program "Random Geometric Systems" (SPP 2265). The authors thank Sabine Jansen and Günter Last for their inspiring discussions. [^1]: University of British Columbia, Department of Mathematics, Vancouver, BC, Canada, V6T 1Z2; Email: dickson\@math.ubc.ca; https://orcid.org/0000-0002-8629-4796 [^2]: Universität Augsburg, Institut für Mathematik, Universitätsstr. 2, 86135 Augsburg, Germany; Email: markus.heydenreich\@uni-a.de; https://orcid.org/0000-0002-3749-7431
arxiv_math
{ "id": "2309.08830", "title": "Expansion of the Critical Intensity for the Random Connection Model", "authors": "Matthew Dickson and Markus Heydenreich", "categories": "math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | For a localization of a smooth proper category, we show that morphisms in Efimov's algebraizable categorical formal punctured neighborhood of infinity can be computed using the natural cone between right and left adjoints of the localization functor. In particular, this recovers the following result of Ganatra--Gao--Venkatesh: morphisms in categorical formal punctured neighborhoods of wrapped Fukaya categories are computed by Rabinowitz wrapping. author: - Tatsuki Kuwagaki and Vivek Shende bibliography: - bibs.bib title: Adjoints, wrapping, and morphisms at infinity --- To any dg category $\mathcal{S}$ over a field $\ensuremath{\mathbb{K}}$, Efimov has associated an 'algebraizable categorical formal punctured neighborhood of infinity' [@Efimov]. $$\ensuremath{\mathcal{S}}\to \widehat{\ensuremath{\mathcal{S}}}_\infty$$ We are interested here in the case when $\ensuremath{\mathcal{S}}$ admits a localization sequence $$\label{seq} 0 \to \ensuremath{\mathcal{K}}\xrightarrow{j} \ensuremath{\mathcal{C}}\xrightarrow{i^L} \ensuremath{\mathcal{S}}\to 0$$ where $\ensuremath{\mathcal{C}}$ is smooth (perfect diagonal bimodule) and locally proper (finite dimensional Hom spaces). In this case, Efimov showed that $\widehat{\ensuremath{\mathcal{S}}}_\infty$ can be computed as follows. To any category $\ensuremath{\mathcal{T}}$ we may associate its 'pseudo-perfect modules' $\ensuremath{\mathcal{T}}^{pp} = \operatorname{Hom}(\ensuremath{\mathcal{T}}, \mathrm{Perf}\, \ensuremath{\mathbb{K}})$. Since $\ensuremath{\mathcal{K}}$ is locally proper, the Yoneda embedding gives $\ensuremath{\mathcal{K}}\hookrightarrow \ensuremath{\mathcal{K}}^{pp}$. 
Form the quotient: $$\mathrm{Perf}_{top}(\widehat{\ensuremath{\mathcal{S}}}_\infty):= \ensuremath{\mathcal{K}}^{pp} /\ensuremath{\mathcal{K}}$$ The composition of the Yoneda functor with passage to the quotient gives a map $$\mathcal{C} \to \operatorname{Hom}(\ensuremath{\mathcal{K}}, \mathrm{Perf}(\ensuremath{\mathbb{K}}))/\ensuremath{\mathcal{K}}$$ This map evidently factors through $\ensuremath{\mathcal{S}}$, and $\widehat{\ensuremath{\mathcal{S}}}_\infty$ is the full subcategory of $\mathrm{Perf}_{top}(\widehat{\ensuremath{\mathcal{S}}}_\infty)$ generated by the image of $\ensuremath{\mathcal{S}}$, or equivalently $\ensuremath{\mathcal{C}}$. As always with quotient categories, it is not easy to compute morphism spaces directly from the definition. Our purpose here is to give a more explicit formula for morphisms in $\widehat{\ensuremath{\mathcal{S}}}_\infty$. Our result is inspired by, and implies, a result of Ganatra--Gao--Venkatesh in the situation where $\ensuremath{\mathcal{S}}$ is the Fukaya category of a Weinstein manifold [@GGV]. **Theorem 1**. *Let $i:\mathrm{Mod}\, \ensuremath{\mathcal{S}}\to \mathrm{Mod}\, \ensuremath{\mathcal{C}}$ be the pullback functor on module categories. Then for $c, d \in \ensuremath{\mathcal{C}}$, there is a natural isomorphism $$\operatorname{Hom}_{\widehat \ensuremath{\mathcal{S}}_\infty}(c, d) = \operatorname{Cone}(\operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(ii^L(c), d)\rightarrow \operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(c, ii^L(d)))$$ where the map is induced by the unit maps $c \to ii^L(c)$ and $d \to ii^L(d)$.* **Remark 2**. The map $i$ also has a right adjoint $i^R$; we can also express the formula as $\operatorname{Hom}_{\widehat \ensuremath{\mathcal{S}}_\infty}(c, d) = \operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(c, \operatorname{Cone}(ii^R(d) \to ii^L(d)))$. **Remark 3**.
It may be nontrivial to express compositions in $\widehat{S}_\infty$ in terms of the formula above. We give an expression at the level of cohomology in Appendix [1](#compositions){reference-type="ref" reference="compositions"}. We will give the proof of this theorem after illustrating it in algebraic and symplectic geometry: **Example 4** (Coherent sheaves). Let $Y$ be a smooth proper algebraic variety, and $X \subset Y$ an open subvariety with complement $Z$. Then $\mathrm{Coh}(Y)$ is smooth and proper, and one has $$\mathrm{Coh}(X) = \mathrm{Coh}(Y) / \mathrm{Coh}_Z(Y),$$ where $\mathrm{Coh}_Z(Y)$ is the full subcategory on sheaves set-theoretically supported on $Z$. Writing $x: X \to Y$ for the inclusion, our result asserts that given $E, F \in \mathrm{Coh}(Y)$, $$\operatorname{Hom}_{\widehat{\mathrm{Coh}(X)}_\infty}(E, F) = \operatorname{Cone}(\operatorname{Hom}_{Q \mathrm{Coh}(Y)}(x_* x^* E, F) \to \operatorname{Hom}_{Q \mathrm{Coh}(Y)}(E, x_* x^* F))$$ Note we may compute this cone of Homs after restricting to any Zariski neighborhood of $Z$, since the unit maps $E \to x_* x^*E$ and $F \to x_* x^*F$ are isomorphisms away from such a neighborhood. Let us do an example of the example. We take $Y = \ensuremath{\mathbb{P}}^1$, $X = \ensuremath{\mathbb{P}}^1 \setminus 0$, and $E = F = \mathcal{O}$. In the Zariski chart $\ensuremath{\mathbb{P}}^1 \setminus \infty$, we compute: $$\operatorname{Cone}(\operatorname{Hom}_{\ensuremath{\mathbb{K}}[t]}(\ensuremath{\mathbb{K}}[t, t^{-1}], \ensuremath{\mathbb{K}}[t]) \to \operatorname{Hom}_{\ensuremath{\mathbb{K}}[t]}(\ensuremath{\mathbb{K}}[t], \ensuremath{\mathbb{K}}[t, t^{-1}])) \cong \ensuremath{\mathbb{K}}((t))$$ Indeed, the second term in the cone is obviously $\ensuremath{\mathbb{K}}[t, t^{-1}]$.
One can show that the first is in fact isomorphic to $(\ensuremath{\mathbb{K}}[[t]]/\ensuremath{\mathbb{K}}[t]) [-1]$; we include a calculation in Appendix [2](#ridiculous hom calculation){reference-type="ref" reference="ridiculous hom calculation"}. We leave it to the reader to check that the cone realizes the nontrivial extension $$0 \to \ensuremath{\mathbb{K}}[t, t^{-1}] \to \ensuremath{\mathbb{K}}((t)) \to \ensuremath{\mathbb{K}}[[t]]/\ensuremath{\mathbb{K}}[t] \to 0.$$ **Example 5** (Fukaya categories). Let $W$ be a Weinstein symplectic manifold and $\Lambda \subset \partial_\infty W$ a generically Legendrian total stop, such as the core of a fiber of an open book decomposition of $\partial_\infty W$. Let $\Lambda' \subset \Lambda$ be a closed subset. Then [@GPS2] the (partially) wrapped Fukaya category $\mathrm{Fuk}(W, \Lambda)$ is smooth and proper, and we have a localization sequence $$\label{localization-fuk} 0 \to \langle D_{\Lambda \setminus \Lambda'} \rangle \to \mathrm{Fuk}(W, \Lambda) \to \mathrm{Fuk}(W, \Lambda') \to 0$$ where $D_{\Lambda \setminus \Lambda'}$ are the so-called linking disks to $\Lambda \setminus \Lambda'$. Suppose given a Lagrangian $M \in \mathrm{Fuk}(W, \Lambda)$. As in [@GPS2], by a *negative wrapping* $M \rightsquigarrow M^-$, we mean an isotopy induced by a Hamiltonian which is linear and negative at contact infinity. So long as $M^-$ avoids $\Lambda$ and hence defines an element of $\mathrm{Fuk}(W, \Lambda)$, there is a continuation morphism $M \to M^-$. Essentially by definition, $$\operatorname{Hom}_{\mathrm{Fuk}(W, \Lambda')}(\,\cdot\, , M) = \lim_{\substack{\longrightarrow \\ M \to M^-}} \!\! \operatorname{Hom}_{\mathrm{Fuk}(W, \Lambda)}(\,\cdot\, , M^-) = \operatorname{Hom}_{\mathrm{Mod}\, \mathrm{Fuk}(W, \Lambda)}(\,\cdot\,, \!\! \lim_{\substack{\longrightarrow \\ M \to M^-}} \!\! M^-)$$ where the limit is taken over wrappings where the entire isotopy avoids $\Lambda'$. 
In other words, there is a natural isomorphism $$ii^L(M) \cong \lim_{\substack{\longrightarrow \\ M \to M^-}} \!\! M^-$$ We conclude: $$\begin{aligned} \operatorname{Hom}_{\widehat{\mathrm{Fuk}(W, \Lambda')}_\infty } (L, M) & = & \operatorname{Cone}(\operatorname{Hom}_{\mathrm{Fuk}(W, \Lambda)}(\!\lim_{\substack{\longrightarrow \\ L \to L^-}} \!\! L^-, M) \to \operatorname{Hom}_{\mathrm{Fuk}(W, \Lambda)}(L, \!\!\lim_{\substack{\longrightarrow \\ M \to M^-}} \!\! M^-) ) \\ & = & \operatorname{Cone}(\!\lim_{\substack{\longleftarrow \\ L \to L^-}} \operatorname{Hom}_{\mathrm{Fuk}(W, \Lambda)}(L^-, M) \to \!\lim_{\substack{\longrightarrow \\ M \to M^-}} \operatorname{Hom}_{\mathrm{Fuk}(W, \Lambda)}(L, M^-)) \end{aligned}$$ This recovers a result originally proven in [@GGV] for $\Lambda' = \emptyset$. The remainder of this note concerns the proof of Theorem [Theorem 1](#rab formula){reference-type="ref" reference="rab formula"}. We have the diagram: $$\begin{tikzcd} \mathrm{Mod}\,\ensuremath{\mathcal{K}}\arrow[rr, bend left = 35, "j^{RR}"] \arrow[rr, bend right = 35, "j", swap] & & \mathrm{Mod}\, \ensuremath{\mathcal{C}}\arrow[rr, bend left = 35, "i^R"] \arrow[ll, "j^R", swap] \arrow[rr, bend right = 35, "i^L", swap] & & \mathrm{Mod}\, \ensuremath{\mathcal{S}}\arrow[ll, "i", swap] \end{tikzcd}$$ Here, $j^R$ and $i$ are the natural pullbacks of modules under the identification of ind- and module categories. These each have right and left adjoints, and the left adjoints compose with the Yoneda embeddings to give the original $j$ and $i^L$. We note some properties of this diagram. The maps $i, j, j^{RR}$ are fully faithful; we have $j^R j = 1_{\mathrm{Mod}\, \ensuremath{\mathcal{K}}} = j^R j^{RR}$ and $i^L i = 1_{\mathrm{Mod}\, \ensuremath{\mathcal{S}}} = i^R i$. We will later be interested in the Drinfeld--Verdier quotient $(\mathrm{Mod}\, \ensuremath{\mathcal{C}}) / \ensuremath{\mathcal{K}}$.
(Note this differs from $\mathrm{Mod}\, \ensuremath{\mathcal{C}}/ \mathrm{Mod}\, \ensuremath{\mathcal{K}}= \mathrm{Mod}\, \ensuremath{\mathcal{S}}$.) It will be useful that certain morphisms can already be computed in $\ensuremath{\mathcal{C}}$: **Lemma 6**. *For any $c, d \in \ensuremath{\mathcal{C}}$, $$\label{indckL_} \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(ii^L(c),d)\cong \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c), d)\cong \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(c,ii^R(d)).$$ and $$\label{indckLL} \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(ii^L(c),ii^L(d))\cong \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c),ii^L(d))\cong \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(c,ii^L(d)).$$ Additionally, $$\label{indck_L} \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c,ii^L(d))\cong \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(c,ii^L(d)).$$ and $$\label{indck__} \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c,d)\cong \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(c,ii^L(d))$$* *Proof.* A morphism in $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(ii^L(c),d)$ is given by a roof diagram $$ii^L(c)\xrightarrow{f}c'\xleftarrow{g} d$$ such that $\operatorname{Cone}(g)\in \ensuremath{\mathcal{K}}$. Since $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c), \operatorname{Cone}(g))=0$, $f$ is induced by a morphism $ii^L(c)\rightarrow d$. This shows [\[indckL\_\]](#indckL_){reference-type="eqref" reference="indckL_"}. Now [\[indckLL\]](#indckLL){reference-type="eqref" reference="indckLL"} follows from $i i^R i i^L = i i^L$. Similarly, take a morphism in $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c, ii^L(d))$.
Then it is given by a roof diagram $$c\xleftarrow{f} c'\xrightarrow{g} ii^L(d)$$ such that $\operatorname{Cone}(f)\in \ensuremath{\mathcal{K}}$. Since $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(\operatorname{Cone}(f), ii^L(d))=0$, $g$ is induced by a morphism $c\rightarrow ii^L(d)$. This establishes [\[indck_L\]](#indck_L){reference-type="eqref" reference="indck_L"}. Finally, since $j^R(d)\in \mathrm{Mod}\ensuremath{\mathcal{K}}$, we have a filtered system $d_i\in \ensuremath{\mathcal{K}}$ such that $\displaystyle{\lim_{\substack{\longrightarrow \\ i}}}\,d_i=j^R(d)$. Since $j$ is colimit preserving and $c$ is compact, we have $$\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(c,jj^R(d))\cong \lim_{\substack{\longrightarrow \\ i}}\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(c,d_i).$$ Take any morphism $f\in \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(c,jj^R(d))$. The above isomorphism implies $f$ factors through $d_i\in \ensuremath{\mathcal{K}}$ for some sufficiently large $i$. This implies $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c,jj^R(d))\cong 0$. Applying this result to the triangle $$\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c,d)\rightarrow \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c,ii^L(d))\rightarrow \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c,jj^R(d))\rightarrow,$$ we get [\[indck\_\_\]](#indck__){reference-type="eqref" reference="indck__"}. ◻ **Lemma 7**.
*Given an exact sequence as in [\[seq\]](#seq){reference-type="eqref" reference="seq"}, the restrictions of $i$ and $j^R$ to pseudo-perfect modules have the following properties:* - *$i: \ensuremath{\mathcal{S}}^{pp} \to \ensuremath{\mathcal{C}}^{pp}$ is fully faithful* - *the image of $i$ is the kernel of $j^R$* *Proof.* For the second statement: $$\ensuremath{\mathcal{S}}^{pp} = \operatorname{Hom}(\ensuremath{\mathcal{S}}, \mathrm{Perf}(\ensuremath{\mathbb{K}})) = \operatorname{Hom}(\ensuremath{\mathcal{C}}\oplus_{\ensuremath{\mathcal{K}}} 0, \mathrm{Perf}(\ensuremath{\mathbb{K}})) = \ensuremath{\mathcal{C}}^{pp} \times_{\ensuremath{\mathcal{K}}^{pp}} 0$$ ◻ **Remark 8**. Note we do not claim the map $\ensuremath{\mathcal{C}}^{pp}/i(\ensuremath{\mathcal{S}}^{pp}) \to \ensuremath{\mathcal{K}}^{pp}$ is fully faithful. **Corollary 9**. *Assume $\ensuremath{\mathcal{C}}$ is smooth and proper, so $\ensuremath{\mathcal{C}}^{pp} = \ensuremath{\mathcal{C}}$. Then the kernel of the map $$\ensuremath{\mathcal{C}}\xrightarrow{j^R} \ensuremath{\mathcal{K}}^{pp} \to \ensuremath{\mathcal{K}}^{pp} / \ensuremath{\mathcal{K}}$$ is generated by $\ensuremath{\mathcal{K}}$ and $\ensuremath{\mathcal{C}}\cap i(\ensuremath{\mathcal{S}})$.* *Proof.* After Lemma [Lemma 7](#ppexact){reference-type="ref" reference="ppexact"}, the only thing remaining to check is $i(\ensuremath{\mathcal{S}}^{pp}) = \ensuremath{\mathcal{C}}\cap i(\ensuremath{\mathcal{S}})$. Smoothness of $\ensuremath{\mathcal{C}}$ implies smoothness of $\ensuremath{\mathcal{S}}$, hence $\ensuremath{\mathcal{S}}^{pp} \subset \ensuremath{\mathcal{S}}$, giving the inclusion $\subset$. On the other hand, if $s \in \ensuremath{\mathcal{S}}$ satisfies $i(s) \in \ensuremath{\mathcal{C}}$, then for $c \in \ensuremath{\mathcal{C}}$ we have $$\operatorname{Hom}_{\ensuremath{\mathcal{S}}}(i^L(c), s) = \operatorname{Hom}_{\ensuremath{\mathcal{C}}}(c, i(s)),$$ which is perfect by properness of $\ensuremath{\mathcal{C}}$.
But $i^L$ is surjective, so $s \in \ensuremath{\mathcal{S}}^{pp}$. ◻ *Proof of Theorem [Theorem 1](#rab formula){reference-type="ref" reference="rab formula"}.* Consider the category $\left(\ensuremath{\mathcal{C}}, \mathrm{Mod}\, \ensuremath{\mathcal{S}}\right)$ generated by $\ensuremath{\mathcal{C}}$ and $\mathrm{Mod}\, \ensuremath{\mathcal{S}}$ in $\mathrm{Mod}\,\ensuremath{\mathcal{C}}$. Since $j^R$ kills $\mathrm{Mod}\, \ensuremath{\mathcal{S}}$, we have an induced functor $\left(\ensuremath{\mathcal{C}}, \mathrm{Mod}\,\ensuremath{\mathcal{S}}\right)\rightarrow \ensuremath{\mathcal{K}}^{pp}$. The kernel is generated by $\mathrm{Mod}\, \ensuremath{\mathcal{S}}$, and we have a map $$[j^R]\colon \left(\ensuremath{\mathcal{C}}, \mathrm{Mod}\ensuremath{\mathcal{S}}\right)/\mathrm{Mod}\ensuremath{\mathcal{S}}\to \ensuremath{\mathcal{K}}^{pp}.$$ As $[j^R]$ can be embedded into an equivalence $\mathrm{Mod}\, \ensuremath{\mathcal{C}}/\mathrm{Mod}\, \ensuremath{\mathcal{S}}\cong \mathrm{Mod}\ensuremath{\mathcal{K}}$, it is in particular fully faithful. Hence we get an equivalence: $$\left(\left(\ensuremath{\mathcal{C}}, \mathrm{Mod}\ensuremath{\mathcal{S}}\right)/\mathrm{Mod}\ensuremath{\mathcal{S}}\right)/\ensuremath{\mathcal{K}}\cong \widehat \ensuremath{\mathcal{S}}_\infty\subset \ensuremath{\mathcal{K}}^{pp}/\ensuremath{\mathcal{K}}.$$ Consider the embedding $$\left(\ensuremath{\mathcal{C}}, \mathrm{Mod}\ensuremath{\mathcal{S}}\right)/\mathrm{Mod}\ensuremath{\mathcal{S}}\hookrightarrow \mathrm{Mod}\ensuremath{\mathcal{K}}\hookrightarrow\mathrm{Mod}\ensuremath{\mathcal{C}}$$ given by $jj^R$.
We use the same notation after passing to the quotient by $\ensuremath{\mathcal{K}}$: $$jj^R\colon \left(\left(\ensuremath{\mathcal{C}}, \mathrm{Mod}\ensuremath{\mathcal{S}}\right)/\mathrm{Mod}\ensuremath{\mathcal{S}}\right)/\ensuremath{\mathcal{K}}\hookrightarrow\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}$$ Thus far we have shown $$\operatorname{Hom}_{\widehat \ensuremath{\mathcal{S}}_\infty}(c, d) = \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(jj^R(c), jj^R(d))$$ Since we have an exact triangle $$jj^R \rightarrow\operatorname{id}\rightarrow ii^L\rightarrow,$$ we have $$\begin{split} \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(jj^R(c), jj^R(d))&\cong \operatorname{Cone}( C_1\rightarrow C_2)[-1] \end{split}$$ where $$\begin{split} C_1&:=\operatorname{Cone}(\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(ii^L(c),d)\rightarrow\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c, d) )\\ C_2&:=\operatorname{Cone}(\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(ii^L(c),ii^L(d))\rightarrow \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c,ii^L(d))). \end{split}$$ By [\[indckLL\]](#indckLL){reference-type="eqref" reference="indckLL"}, we see $C_2 = 0$. To complete the proof we rewrite $C_1$ using [\[indckL\_\]](#indckL_){reference-type="eqref" reference="indckL_"} and [\[indck\_\_\]](#indck__){reference-type="eqref" reference="indck__"}. ◻ # Compositions in $\widehat{S}_\infty$ {#compositions} Let $c_0,c_1,c_2$ be objects of $\ensuremath{\mathcal{C}}$, viewed also as objects of $\widehat \ensuremath{\mathcal{S}}_{\infty}$. 
We express the underlying complex of $\operatorname{Hom}_{\widehat\ensuremath{\mathcal{S}}_\infty}(c_i,c_{i+1})$ as $$\operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(ii^L(c_i), c_{i+1})[1]\oplus\operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(ii^L(c_i), ii^L(c_{i+1})).$$ We will use the unit morphism $$u\colon c_i\rightarrow ii^L(c_i).$$ We will compose $$\begin{split} (f_0,g_0)&\in \operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(ii^L(c_0), c_{1})[1]\oplus\operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(ii^L(c_0), ii^L(c_{1}))\\ (f_1,g_1)&\in \operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(ii^L(c_1), c_{2})[1]\oplus\operatorname{Hom}_{\mathrm{Mod}\, \ensuremath{\mathcal{C}}}(ii^L(c_1), ii^L(c_{2})). \end{split}$$ We use the notation from the proof of Theorem [Theorem 1](#rab formula){reference-type="ref" reference="rab formula"}. We have the projection $$\pi\colon \operatorname{Cone}(C_1\rightarrow C_2)[-1]\rightarrow C_1,$$ which is a quasi-isomorphism. For each $(f_i,g_i)$, we have a cocycle lift $$\begin{split} &(f_i,g_i, u\circ g_i\circ u^{-1}, 0)\\ &\in \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(ii^L(c_i),c_{i+1})[-1]\oplus \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c_i, c_{i+1})\\ &\oplus \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(ii^L(c_i),ii^L(c_{i+1}))[-2]\oplus \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{K}}}(c_i,ii^L(c_{i+1}))[-1], \end{split}$$ which is the underlying vector space of $\operatorname{Cone}(C_1\rightarrow C_2)$, which in turn is the underlying vector space of the hom-space $\operatorname{Hom}(\operatorname{Cone}(c_i\rightarrow ii^L(c_i)), \operatorname{Cone}(c_{i+1}\rightarrow ii^L(c_{i+1})))$. Here $g_i\circ u^{-1}$ is only cohomologically well-defined.
We then directly calculate and get $$\begin{split} &(f_1,g_1, u\circ g_1\circ u^{-1}, 0)\circ (f_0,g_0, u\circ g_0\circ u^{-1}, 0)\\ &=(g_1\circ f_0+f_1\circ u\circ g_0\circ u^{-1}, g_1\circ g_0 ,\star_1, \star_2), \end{split}$$ where the last two components are omitted. We interpret each term as a morphism of $\mathrm{Mod}\, \ensuremath{\mathcal{C}}$. By taking the following identification, $u^{-1}$ disappears: $$\begin{split} &(f_i,g_i, g_i, 0)\\ &\in \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i),c_{i+1})[-1]\oplus \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i), ii^L(c_{i+1}))\\ &\oplus \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i),ii^L(c_{i+1}))[-2]\oplus \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i),ii^L(c_{i+1}))[-1]. \end{split}$$ Then the terms in $$(g_1\circ u\circ f_0+f_1\circ g_0, g_1\circ g_0)$$ are well-defined, except that $g_1\circ u\circ f_0$ does not a priori land in the correct place $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i),c_{i+1})[-1]\oplus \operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i), ii^L(c_{i+1}))$. Here we have put $u$ at the head of $f_0$, which also comes from the identification with $\mathrm{Mod}\, \ensuremath{\mathcal{C}}$. A priori, $g_1\circ u\circ f_0$ is not in $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i),c_{i+1})[-1]$, but in $\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i),ii^L(c_{i+1}))[-1]$. But, by construction, there is some $u^{-1}\circ g_1\circ u\circ f_0\in\operatorname{Hom}_{\mathrm{Mod}\ensuremath{\mathcal{C}}}(ii^L(c_i),c_{i+1})[-1]$ such that $u\circ (u^{-1}\circ g_1\circ u\circ f_0)=g_1\circ u\circ f_0$. Hence, at the cohomological level, we obtain the following formula for the composition: $$(f_1, g_1)\circ (f_0,g_0):=(u^{-1}\circ g_1\circ u\circ f_0+f_1\circ g_0, g_1\circ g_0).$$ One way to write formulas beyond the cohomological level would be the following.
Choosing a projection $C_1\rightarrow H^*(C_1)$ and a splitting of $\operatorname{Cone}(C_1\rightarrow C_2)[1]\rightarrow H^*(C_1)$, one obtains the contracting homotopy from $\operatorname{Cone}(C_1\rightarrow C_2)[1]$ to $H^*(C_1)$. Then, by running homological perturbation theory, one obtains an $A_\infty$-structure upgrading the above composition formula, which is by construction quasi-equivalent to $\widehat{\ensuremath{\mathcal{S}}}_\infty$. # $\operatorname{Hom}_{\ensuremath{\mathbb{K}}[t]}(\ensuremath{\mathbb{K}}[t, t^{-1}], \ensuremath{\mathbb{K}}[t])$ {#ridiculous hom calculation} A free resolution of $\ensuremath{\mathbb{K}}[t, t^{-1}]$ is given by: $$\begin{aligned} \bigoplus_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot r_n & \to & \bigoplus_{n \leq 0} \ensuremath{\mathbb{K}}[t] \cdot s_n \\ r_n & \mapsto & t s_n - s_{n+1} \\\end{aligned}$$ where $r_n, s_n$ are just basis elements. Dualizing gives $$\begin{aligned} \prod_{n \leq 0} \ensuremath{\mathbb{K}}[t] \cdot s_n^* &\to & \prod_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot r_n^* \\ s_n^* & \mapsto & t r_n^*- r_{n-1}^* \end{aligned}$$ Consider the following $\ensuremath{\mathbb{K}}[t]$-linear map $$\Sigma\colon \prod_{n\leq -1}\ensuremath{\mathbb{K}}[t]r_n^*\rightarrow \ensuremath{\mathbb{K}}[[t]]; r_n^*\mapsto t^{-n-1}.$$ We claim that $$\prod_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot s_n^* \to \prod_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot r_n^* \rightarrow \ensuremath{\mathbb{K}}[[t]]\rightarrow 0$$ is an exact sequence. Indeed, it is obvious that the composition is zero. Suppose $\prod f_n(t)r_n^*$ goes to zero. For each monomial $\alpha r_n^*$ of $\prod f_n(t)r_n^*$, we set $$\deg (\alpha r_n^*):=\deg (\alpha)-n-1.$$ Let $N$ be the lowest degree in which $\prod f_n(t)r_n^*$ has a nonzero monomial. Note that the number of degree $N$ monomials in $\prod f_n(t)r_n^*$ is finite.
Hence, by adding an element in the image of $\prod_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot s_n^*$, one can assume that the sum of the degree $N$ monomials is $\beta r_{-N-1}^*$ for some scalar $\beta$. Since this is still in the kernel of $\Sigma$, and the degree-$N$ part of its image under $\Sigma$ is $\Sigma(\beta r_{-N-1}^*)=\beta t^{N}$, $\beta$ is zero. Inductively, adding elements in the image of $\prod_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot s_n^*$, we get that $\ker \Sigma$ is the image of $\prod_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot s_n^*$. Hence $$\prod_{n \leq 0} \ensuremath{\mathbb{K}}[t] \cdot s_n^* \to \prod_{n \leq -1} \ensuremath{\mathbb{K}}[t] \cdot r_n^* \rightarrow \ensuremath{\mathbb{K}}[[t]]/\ensuremath{\mathbb{K}}[t]\rightarrow 0$$ is also an exact sequence. (It is also easy to see that the first map is injective.) ## Acknowledgment {#acknowledgment .unnumbered} We would like to thank Adrian Petr for some questions about the Rabinowitz Fukaya category. The first-named author's work is supported by JSPS KAKENHI Grant Numbers 22K13912, 20H01794, 23H01068. The second-named author's work is supported by Novo Nordisk Foundation grant NNF20OC0066298, Villum Fonden Villum Investigator grant 37814, and Danish National Research Foundation grant DNRF157.
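The maps appearing in this appendix can also be encoded mechanically. The sketch below (our own illustrative encoding, with hypothetical helper names, not part of the paper) represents an element of $\prod_{m\leq -1}\ensuremath{\mathbb{K}}[t]\cdot r_m^*$ as a dictionary and checks that $\Sigma$ kills the image of $s_n^*$ for $n\leq -1$, while the image of $s_0^*$ is sent into $\ensuremath{\mathbb{K}}[t]$:

```python
from collections import defaultdict

# An element of  prod_{m <= -1} K[t] . r_m^*  is stored as
# {m: {deg: coeff}}, i.e. a polynomial coefficient of each r_m^*.

def dual_d(n):
    """Image of s_n^* (n <= 0) under the dualized differential:
    s_n^* -> t*r_n^* - r_{n-1}^*, where r_m^* only exists for m <= -1
    (so the first summand is dropped when n = 0)."""
    image = {}
    if n <= -1:
        image[n] = {1: 1}      # the summand t * r_n^*
    image[n - 1] = {0: -1}     # the summand -r_{n-1}^*
    return image

def Sigma(element):
    """K[t]-linear map sending r_m^* to t^{-m-1}; returns a polynomial
    in t as {deg: coeff}, with zero coefficients removed."""
    out = defaultdict(int)
    for m, poly in element.items():
        for deg, coeff in poly.items():
            out[deg + (-m - 1)] += coeff
    return {d: c for d, c in out.items() if c != 0}
```

Running `Sigma(dual_d(n))` returns the zero polynomial for every $n\leq -1$, while `Sigma(dual_d(0))` is the constant $-1\in\ensuremath{\mathbb{K}}[t]$, consistent with the two exact sequences above.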
arxiv_math
{ "id": "2309.17062", "title": "Adjoints, wrapping, and morphisms at infinity", "authors": "Tatsuki Kuwagaki and Vivek Shende", "categories": "math.SG math.AG math.CT math.KT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We use a method of Bieri, Geoghegan and Kochloukova to calculate the BNSR-invariants for the irrational slope Thompson's group $F_{\tau}$. To do so we establish conditions under which the Sigma invariants coincide with those of a subgroup of finite index, addressing a problem posed by Strebel. address: - | Royal Holloway, University of London\ Department of Mathematics, McCrea Building, TW20 0EX Egham, UK - | Otto-von-Guericke-Universität Magdeburg\ FMA -- Institut für Algebra und Geometrie\ PSF 4120, 39016 Magdeburg, Germany author: - Lewis Molyneux - Brita Nucinkis - Yuri Santos Rego bibliography: - references.bib title: The Sigma Invariants for the Golden Mean Thompson Group --- # Introduction The study of what is now known as the first Sigma invariant or Bieri--Neumann--Strebel invariant $\Sigma^1(G)$ for a finitely generated group goes back to [@BieriStrebelMetabelianSigma; @BNS] and was later extended by Bieri and Renz to a sequence of homotopical invariants $$\cdots \subseteq \Sigma^n(G) \subseteq \Sigma^{n-1}(G) \subseteq \cdots \subseteq \Sigma^1(G) \subseteq S(G)$$ and homological invariants $$\cdots \subseteq \Sigma^n(G, R) \subseteq \Sigma^{n-1}(G,R) \subseteq \cdots \subseteq \Sigma^1(G,R) \subseteq S(G),$$ where $R$ is a commutative ring; cf. [@RenzThesis; @Homology]. In this note we compute the Sigma invariants for the Golden Mean Thompson group $F_\tau$ as defined in [@Cleary], see also [@Ftau]. We prove: **Theorem 1**. *Let $\lambda, \rho: F_\tau \to \mathbb{R}$ be the characters given by: $$\lambda(f) =\log_{\tau}(f'(0)) \quad \text{ and } \quad \rho(f) = \log_{\tau}(f'(1)).$$ Then the Sigma invariants of $F_{\tau}$ are as follows:* 1. *$\Sigma^1(F_{\tau}) = \Sigma^1(F_{\tau}, \mathbb{Z}) = S(F_{\tau}) \backslash \{[-\lambda], [-\rho]\}$, and* 2. 
*$\Sigma^{\infty}(F_{\tau}) =\Sigma^{\infty}(F_{\tau}, \mathbb{Z}) = \Sigma^2(F_{\tau}) = \Sigma^1(F_{\tau}) \backslash \{-a\lambda -b\rho \mid a,b > 0\}.$* Note that $\Sigma^1(F_\tau)$ is already known, see [Citation 2](#Sigma1-thm){reference-type="ref" reference="Sigma1-thm"} below. As a first step, we consider the behaviour of the Sigma invariants $\Sigma^n(G)$ under passage to subgroups of finite index. Using this, the computations for the Sigma invariants for $F_\tau$ then follow from methods similar to those of Bieri--Geoghegan--Kochloukova in [@SigmaF]. Throughout, denote by $G$ a finitely generated group, and by $G_{ab}$ its abelianisation. We consider $\chi \in \mathop{\mathrm{Hom}}(G, \mathbb{R})$. Define an equivalence relation by $\chi \sim \chi'$ if and only if there exists an $a \in \mathbb{R}_{>0}$ such that $\chi = a\chi'.$ The set of equivalence classes of nonzero characters is a sphere, called the *character sphere* $S(G)$; it has dimension $r_0(G_{ab}) - 1$, where $r_0(G_{ab})$ denotes the torsion-free rank of $G_{ab}$, see [@Bieri Lemma 1.1]. Now consider the following subset of the Cayley graph $\Gamma(G)$ with respect to some finite generating set: $\Gamma_\chi(G)$ is the subgraph of $\Gamma(G)$ consisting of those vertices $g$ with $\chi(g) \geq 0,$ and of those edges that have both initial and terminal vertices in $\Gamma_\chi(G).$ The first homotopical Sigma invariant is now defined as $$\Sigma^1(G) = \{ [\chi] \in S(G) \mid \Gamma_\chi(G) \text{ is connected}\}.$$ Note that this is independent of the choice of generating set for the Cayley graph [@BNS]. For certain groups of homeomorphisms of the real line, including Thompson's group $F$ and the Golden Mean Thompson group $F_\tau$, we have a complete description of $\Sigma^1(G)$: **Citation 2** ([@Bieri Corollary 3.4]). *Let $G$ be an irreducible subgroup of the group of piecewise linear homeomorphisms of the interval $[0,1]$.
Take the characters $\chi_1(g)=\ln(g'(0))$, the natural logarithm of the right derivative of an element $g \in G$ at $0$, and $\chi_2(g) = \ln(g'(1))$, the natural logarithm of the left derivative of that element at $1$. Then $G = \ker \chi_1 \cdot \ker \chi_2 \implies \Sigma^1(G)^c = \{[\chi_1],[\chi_2]\}$.* In the 1990s, Bieri and Strebel gave a formula to compute the complement $\Sigma^1(G)^c$ using $\Sigma^1(H)^c$ and a subsphere of $S(H)$ in the case where $H$ is a subgroup of finite index in $G$; see [@Bieri Proposition 2.9] and [@StrebelNotes Proposition B1.11]. In higher dimensions, a related formula was recently considered by Koban--Wong in [@KobanWong]. In his notes [@StrebelNotes Section B1.2c], Strebel goes on to wonder about the applicability of this formula, and poses the following. **Citation 3** ([@StrebelNotes Problem B1.13]). *Find situations where one is interested in $\Sigma^1(G)$ with $G$ admitting a subgroup of finite index which is easier to deal with and for which $\Sigma^1$ can be computed.* We give a positive contribution towards Strebel's problem and find a sufficient condition for equality of Sigma invariants with subgroups of finite index, where this equality is induced by the inclusion map. **Theorem 4**. *Let $G$ be a group of type $\mathtt{F}_{n}$ and $H \leq G$ a subgroup of finite index. Then $$r_0(G_{ab}) = r_0(H_{ab}) \implies \Sigma^n(G) = \Sigma^n(H).$$* **Theorem 5**. *Let $A$ be a $\mathbb{Z}G$-module of type $\mathtt{FP}_{n}$ and $H \leq G$ a subgroup of finite index. Then $$r_0(G_{ab}) = r_0(H_{ab}) \implies \Sigma^n(G, A) = \Sigma^n(H, A).$$* The above statements mean that the map induced by restriction $\chi \in \mathop{\mathrm{Hom}}(G,\mathbb{R}) \mapsto \chi_{|H} \in \mathop{\mathrm{Hom}}(H,\mathbb{R})$ is an isomorphism of vector spaces inducing a well-defined homeomorphism of character spheres and of Sigma invariants.
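For a toy illustration of why equal torsion-free rank makes restriction well behaved (with a group pair chosen by us, not one appearing in the paper), take $G=\mathbb{Z}^2$ and $H = 2\mathbb{Z}\times 3\mathbb{Z}$: here $H$ has index $6$ and $r_0(G_{ab})=r_0(H_{ab})=2$, and the restriction of characters is the invertible linear map sketched below, so it induces a homeomorphism $S(G)\to S(H)$.

```python
# Generators of H = 2Z x 3Z inside G = Z^2, written in the basis of G.
# |G : H| = 6 and r_0(G) = r_0(H) = 2.
H_GENS = [(2, 0), (0, 3)]

def restrict(chi):
    """Restriction Hom(G,R) -> Hom(H,R): evaluate the character chi = (a, b)
    of G on the generators of H."""
    a, b = chi
    return tuple(a * x + b * y for (x, y) in H_GENS)

# The map chi -> chi|_H is linear with matrix H_GENS; its determinant is
# 2*3 - 0*0 = 6 != 0, so it is an isomorphism of vector spaces.  It also
# commutes with multiplication by positive scalars, so it descends to a
# well-defined homeomorphism of character spheres S(G) -> S(H).
chi = (1, -1)
assert restrict((2, -2)) == tuple(2 * v for v in restrict(chi))
print(restrict(chi))   # (2, -3)
```

By contrast, for the pair $H=\mathbb{Z}\leq D_\infty$ of Example 7 below the target space $\mathop{\mathrm{Hom}}(D_\infty,\mathbb{R})$ is zero-dimensional, so no such isomorphism can exist.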
The main issue, as should be known to experts, is whether characters of the subgroup $H$ can be extended to characters of the whole group $G$. However, we were unable to find explicit references of statements along the lines of Theorems [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"} and [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"}. We note that conditions for Sigma invariants for finite index subgroups were considered, for instance, in [@Bieri Appendix B], [@MMvW], and [@KobanWong]. We will make use of the homological result in [@MMvW] later on. The authors in [@KobanWong] study Sigma invariants for finite index normal subgroups. In [@FriedlVidussi], the problem of extending characters from subgroups is also addressed. More recently in [@DesiVidussi], the authors give conditions under which one can extend a character from certain subgroups of infinite index. The reader is referred to [3](#3){reference-type="ref" reference="3"} for more on extensions of characters and the proofs of Theorems [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"} and [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"}. We remark that neither $r_0(G) = r_0(H)$ nor finite index alone suffices as a hypothesis, as the following examples show. **Example 6**. Note that $r_0(G) = r_0(H)$ is insufficient to show an embedding of character spheres. As a counterexample, consider Thompson's group $F = \langle x_0, x_1, \ldots \mid x_i x_j x_i^{-1} = x_{j+1} \rangle$ and the subgroup $F[1] = \langle x_1, x_2, \ldots \mid x_i x_j x_i^{-1} = x_{j+1} \rangle$. Clearly $F \cong F[1]$, and so $F_{ab} \cong F[1]_{ab} \cong \mathbb{Z}^2$ and $r_0(F) = r_0(F[1]) = 2$, but any character $\chi \in \mathop{\mathrm{Hom}}(F,\mathbb{R})$ with $\chi(x_1) = 0$ restricts to the trivial character on $F[1]$, and all other character classes in $S(F)$ restrict to $[\pm \chi_1]$, where $\chi_1(x_0) = 0, \chi_1(x_1)=1$. Hence, $S(F) \not\subseteq S(F[1])$. **Example 7**.
Similarly, $\vert G : H\vert < \infty$ does not guarantee that the character spheres of $G$ and $H$ agree. For instance, the infinite dihedral group $D_\infty \cong \mathbb{Z}\rtimes C_2$ contains $\mathbb{Z}$ as a subgroup of index two. While $S(\mathbb{Z})$ is the $0$-sphere (and thus consists of two points), one has that $S(D_\infty)$ is empty since the abelianisation of $D_\infty$ is a finite group. Also note that the implications in Theorems [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"} and [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"} cannot be reversed: **Example 8**. Let $\mathbb{F}_n$ denote the free group on $n$ letters. It is known, see [@Bieri Proposition III.4.5], that $\Sigma^1(\mathbb{F}_n) = \emptyset$ for all $n \geq 2$. Furthermore, $\mathbb{F}_n$ embeds with finite index in $\mathbb{F}_2$ [@LyndonSchupp Prop I.3.9]. However, the torsion-free ranks of these groups are not equal as long as $n > 2$. We begin by establishing facts about both the Sigma invariants and $F_{\tau}.$ In [3](#3){reference-type="ref" reference="3"} we prove Theorems [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"} and [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"}. Finally, in [4](#4){reference-type="ref" reference="4"} we compute the Sigma invariants for $F_\tau$. # Background ## Higher homotopical Sigma invariants {#higher} We begin by recalling some general definitions and facts that can be found, for example, in [@Topics]. An Eilenberg--MacLane space, denoted $K(G,1)$, is an aspherical CW-complex $Y$ with $\pi_1(Y)=G.$ Its universal cover $X$ is contractible and has $G$ acting freely by deck transformations. Such a space is also called a model for $EG$ and is unique up to $G$-homotopy.
A group $G$ is said to be of type $\mathtt{F}_{n}$ if there is a model for $EG$ with $G$-finite $n$-skeleton, and $G$ is said to be of type $\mathtt{F}_{\infty}$ if it is of type $\mathtt{F}_{n}$ for all $n$, that is, if there is a model for $EG$ with $G$-finite $n$-skeleton for every $n$. From now on let $G$ be of type $\mathtt{F}_{n}$ and let $X$ be a model for $EG$ with $G$-finite $n$-skeleton. The following construction is due to Renz [@RenzThesis], see also [@Bieri B1.1]: For a given character $\chi \in \mathop{\mathrm{Hom}}(G, \mathbb{R})$ one defines an action of $G$ on $\mathbb{R}$ by $gr=r +\chi(g)$ for all $g \in G$ and $r \in\mathbb{R}.$ This can be extended to a corresponding continuous $G$-equivariant map $h_\chi : X \to \mathbb{R}$, also called a height function. Any such height function gives rise to an $\mathbb{R}$-filtration of $X$ given by the closed subspaces $h^{-1}[r, \infty).$ We shall write ${X}_h^{[r,+\infty)}$ for the largest subcomplex of $X$ such that $$x \in {X}_h^{[r,+\infty)} \implies h(x) \in [r,+\infty).$$ When considering ${X}_h^{[0,+\infty)}$ we shall use the notation ${X}_h.$ **Definition 9** ([@Bieri page 194]). Let $G$ be a group of type $\mathtt{F}_{n}$. Then the $n$-th Sigma invariant $\Sigma^n(G) \subseteq S(G)$ is defined as follows: $[\chi] \in \Sigma^n(G)$ if there is a model $X$ for $EG$ and a corresponding height function $h$ on $X$ such that ${X}_h$ is $(n-1)$-connected. Renz [@RenzThesis] also showed that $\Sigma^n(G)$ is independent of the choice of model for $EG$ with $G$-finite $n$-skeleton. In particular: **Definition 10** ([@Bieri B1.2]). For $X^{[r,+\infty)}_h$ as defined in [2.1](#higher){reference-type="ref" reference="higher"}, we say that $X^{[r,+\infty)}_h$ is essentially $k$-connected for $k \in \mathbb{Z}_{\geq -1}$ if there is a real number $d \geq 0$ such that the map $\iota_j:\pi_j(X^{[r,+\infty)}_h)\to \pi_j(X^{[r-d,+\infty)}_h)$ induced by the inclusion $\iota : X^{[r,+\infty)}_h \hookrightarrow X^{[r-d,+\infty)}_h$ is the zero map for all $j \leq k$.
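In dimension $n=1$, Definition 9 recovers the Cayley-graph description of $\Sigma^1$ from the introduction, and the connectivity condition can be probed by machine on a finite ball. The following toy Python script (our own illustration, not from the paper) takes the free group $\mathbb{F}_2=\langle a,b\rangle$, where $\Sigma^1(\mathbb{F}_2)=\emptyset$ (cf. Example 8), and the character $\chi$ with $\chi(a)=1$, $\chi(b)=0$: since the Cayley graph is a tree and the geodesic from $e$ to $a^{-1}ba$ passes through $a^{-1}$, the subgraph spanned by the elements with $\chi \geq 0$ is disconnected.

```python
from collections import deque

GENS = ["a", "A", "b", "B"]          # A = a^{-1}, B = b^{-1}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def mult(word, g):
    """Right-multiply a reduced word by a generator, freely reducing."""
    if word and INV[word[-1]] == g:
        return word[:-1]
    return word + g

def chi(word):
    """The character with chi(a) = 1, chi(b) = 0."""
    return word.count("a") - word.count("A")

# Ball of radius N in the Cayley graph of F_2 (vertices = reduced words).
N = 6
ball, frontier = {""}, {""}
for _ in range(N):
    frontier = {mult(w, g) for w in frontier for g in GENS} - ball
    ball |= frontier

# Vertices of Gamma_chi: group elements with chi >= 0.
positive = {w for w in ball if chi(w) >= 0}

# BFS from the identity, staying inside the positive part.
seen, queue = {""}, deque([""])
while queue:
    w = queue.popleft()
    for g in GENS:
        v = mult(w, g)
        if v in positive and v not in seen:
            seen.add(v)
            queue.append(v)

target = "Aba"              # the element a^{-1} b a, with chi = 0
print(target in positive)   # True: it is a vertex of Gamma_chi
print(target in seen)       # False: unreachable from e inside Gamma_chi
```

Running the same experiment with an abelian group such as $\mathbb{Z}^2$ finds a connected positive part on every ball, consistent with the fact that abelian groups have full Sigma invariants.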
**Citation 11** ([@Bieri Theorem B1.1]). *Let $G$ be a group of type $\mathtt{F}_{n}$ and let $X$ be a model for $EG$ with $G$-finite $n$-skeleton. Let $\chi \colon G \to \mathbb{R}$ be a nontrivial character and $h \colon X \to \mathbb{R}$ a corresponding height function as above. Then $$[\chi] \in \Sigma^n(G) \iff X_h \text{ is essentially } (n-1)\text{-connected}.$$* ## The Homological Invariant $\Sigma^n(G,A)$ {#homSigma} We will now give a brief overview of the definition and essential properties of the homological invariants $\Sigma^n(G,A)$, where $A$ is a $\mathbb{Z}G$-module, see [@Homology]. **Definition 12** ([@BieriDimension I.2]). Let $G$ be a group. A $\mathbb{Z}G$-module $A$ is said to be of type $\mathtt{FP}_{n}$ if $A$ admits a resolution of the form $$\mathbf{P} \quad \colon \quad \ldots \to P_i \to P_{i-1} \to \ldots \to P_1 \to P_0 \to A \to 0$$ where the $P_i$ are free $\mathbb{Z}G$-modules which are finitely generated for $i \leq n$. If the trivial $\mathbb{Z}G$-module $\mathbb{Z}$ is of type $\mathtt{FP}_{n}$, then we say the group $G$ is of type $\mathtt{FP}_{n}$. **Definition 13** ([@Homology 1.3]). Let $G$ be a group and $A$ a $\mathbb{Z}G$-module of type $\mathtt{FP}_{n}$. The $n$-th homological Sigma invariant $\Sigma^n(G,A) \subseteq S(G)$ is defined as follows: $$[\chi] \in \Sigma^n(G,A) \iff A \text{ is of type } \mathtt{FP}_{n}\text{ over } \mathbb{Z} G_{\chi},$$ where $G_{\chi}$ is the submonoid $G_{\chi} = \{ g \in G \mid \chi(g) \geq 0\}$. Now let $G$ be a group of type $\mathtt{F}_{n}$. Then (cf.
[@Bieri Theorem B 3.1] and also [@Homology]) it holds: $$\label{homotopy} \begin{aligned} \Sigma^1(G) & = \Sigma^1(G; \mathbb{Z}) \\ \Sigma^n(G) & = \Sigma^2(G) \cap \Sigma^n(G; \mathbb{Z}) \text{ for } n \geq 2 \end{aligned}$$ Similarly to the homotopical case, it was shown that the definition of $\Sigma^n(G;A)$ does not depend on the partial finitely generated projective resolution of the $\mathbb{Z}G$-module $A$, see [@Homology Theorem 3.2]. ## Background on the Golden Mean Thompson group $F_{\tau}$ {#sec:DefFtau} Let $\tau$ denote the small Golden Ratio, that is, the positive solution $\tau = \frac{\sqrt{5}-1}{2}$ to the equation $x^2+x=1$. **Definition 14** ([@Cleary]). The group $F_{\tau}$ is defined as the subgroup of piecewise linear, orientation-preserving homeomorphisms of the interval $[0,1]$ with slopes in the group $\langle \tau \rangle$ and breakpoints in $\mathbb{Z}[\tau]$. **Citation 15** ([@Ftau Theorem 4.4]). *$F_{\tau}$ has the (infinite) presentation $$\label{pres} F_{\tau} \cong \langle x_i, y_i \mid a_j b_i = b_i a_{j+1}, y_i^2=x_i x_{i+1}; a,b \in \{x,y\}, 0 \leq i < j \rangle.$$* As for the original Thompson group $F$, the elements of $F_\tau$ have a unique normal form [@Ftau Theorem 7.3]. We shall use the following semi-normal form: **Citation 16** ([@Ftau Section 7]). *Any element of $F_{\tau}$ can be expressed in the form $$x_0^{i_0} y_0^{\epsilon_0} x_1^{i_1} y_1^{\epsilon_1} \cdots x_n^{i_n} y_n^{\epsilon_n} x_m^{-j_m} x_{m-1}^{-j_{m-1}} \cdots x_0^{-j_0},$$ where $i_k, j_k \in \mathbb{N}, \epsilon_k \in \{0,1\}$, and $0 \leq k \leq n.$* **Citation 17** ([@Ftau Theorem 6.4]). *The commutator subgroup of $F_{\tau}$ is simple.* Like $F$, the group $F_\tau$ also enjoys all (homotopical and homological) finiteness properties. **Citation 18** ([@Cleary]).
*The Golden Mean Thompson group $F_\tau$ is of type $\mathtt{F}_{\infty}.$* # Sigma Invariants and Finite Index {#3} In this section we prove Theorems [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"} and [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"}. **Lemma 19**. *Let $H \leq G$ and write $\pi : G \twoheadrightarrow G_{\mathrm{ab}}$ for the canonical projection and $\iota : H \hookrightarrow G$ for the inclusion. Then the following hold.* 1. *[\[lem:InjSurjHom1\]]{#lem:InjSurjHom1 label="lem:InjSurjHom1"} If $\vert G : H \vert < \infty$, then the map $\iota^\ast : \mathop{\mathrm{Hom}}(G,\mathbb{R}) \to \mathop{\mathrm{Hom}}(H,\mathbb{R})$ induced by the inclusion is injective.* 2. *[\[lem:InjSurjHom2\]]{#lem:InjSurjHom2 label="lem:InjSurjHom2"} If the image $\pi(H)$ is infinite, then there exists a morphism $e : \mathop{\mathrm{Hom}}(H,\mathbb{R}) \to \mathop{\mathrm{Hom}}(G,\mathbb{R})$. That is, any character $\psi$ of $H \leq G$ gives rise to a character $e(\psi)$ of $G$.* [Lemma 19](#lem:InjSurjHom){reference-type="ref" reference="lem:InjSurjHom"}[\[lem:InjSurjHom2\]](#lem:InjSurjHom2){reference-type="eqref" reference="lem:InjSurjHom2"} was observed by Kochloukova--Vidussi; cf. [@DesiVidussi Proof of Theorem 1.1]. Kochloukova and Vidussi work with characters of $G$ that are assumed to be extensions of characters of a subgroup $H \leq G$. However, in the form we state [Lemma 19](#lem:InjSurjHom){reference-type="ref" reference="lem:InjSurjHom"}, the character $e(\psi) \in \mathop{\mathrm{Hom}}(G,\mathbb{R})$ need not be a valid extension of the original character $\psi \in \mathop{\mathrm{Hom}}(H,\mathbb{R})$. That is, it might be the case that $\iota^\ast \circ e (\psi) \neq \psi$. From now on, when working in the abelianisation of a group, we will write the group operation additively. *Proof.* Part (1): Take a nonzero character $\chi \in \mathrm{Hom}(G,\mathbb{R})$ and suppose that $\iota^*(\chi)=\chi_{|H}=0$.
As $\chi(G) \neq 0$, there exists $g \in G$ such that $\chi(g)\neq 0$, but as $\chi(H) = 0$ one has $g \not\in H$. Furthermore, we can say $g^n \not\in H$ for all $n \in \mathbb{N}$, as $$\begin{aligned} g^n \in H \implies \chi(g^n) & = 0 \\ \iff n\chi(g) & = 0 \\ \iff \chi(g) & = 0, \end{aligned}$$ which contradicts $\chi(g) \neq 0$. As such, the cosets $g^n H$, $n \in \mathbb{N}$, are pairwise distinct, which means that $H$ does not have finite index in $G$, contradicting our assumption. Hence $\chi_{|H} \neq 0$ for every nonzero $\chi$, and $\iota^\ast$ is injective. Part (2): Consider the (finite dimensional) $\mathbb{Q}$-vector space $V = G_{\mathrm{ab}} \otimes_\mathbb{Z}\mathbb{Q}$. Since the image $\pi(H) \subseteq G_{\mathrm{ab}}$ is infinite, the set $\pi(H) \otimes_\mathbb{Z}\mathbb{Q}$ contains a partial basis for $V$, say $\mathcal{B}' = \{\overline{h}_1, \ldots, \overline{h}_m\}$, where each $\overline{h}_i$ is the image in $G_{\mathrm{ab}}$ of some $h_i \in H$. Extend this to a basis $\mathcal{B} = \{ \overline{h}_1, \ldots, \overline{h}_m, \overline{g}_{m+1}, \ldots, \overline{g}_{r} \}$ of $V$, again with $\overline{g}_j$ being the image of some $g_j \in G$. Since characters of a group factor through its abelianisation, we may define $$e(\psi)(g) := \sum_{i=1}^m a_i \psi(h_i),$$ where $a_1, \ldots, a_r$ are the coordinates of the image of $g$ in $G_{\mathrm{ab}} \otimes_\mathbb{Z}\mathbb{Q}$ with respect to the basis $\mathcal{B}$ (only the coordinates of $\overline{h}_1, \ldots, \overline{h}_m$ enter the sum). It is straightforward to check that $e$ is a homomorphism from $\mathop{\mathrm{Hom}}(H,\mathbb{R})$ to $\mathop{\mathrm{Hom}}(G,\mathbb{R})$. ◻ **Example 20**.
As mentioned above, the proof of [Lemma 19](#lem:InjSurjHom){reference-type="ref" reference="lem:InjSurjHom"}[\[lem:InjSurjHom2\]](#lem:InjSurjHom2){reference-type="eqref" reference="lem:InjSurjHom2"} might yield an extension of a character $\psi$ of $H$ such that $\iota^\ast \circ e (\psi) \neq \psi.$ For example, let $H = \mathbb{Z}\times \mathbb{Z}\leq G = D_\infty \times \mathbb{Z}$ and take $\psi$ to be a character of $H$ which is nonzero on the first coordinate. However, provided $H$ is of finite index in $G$ and the torsion-free ranks of the abelianisations are equal, we construct a lift from $\mathrm{Hom}(H,\mathbb{R})$ to $\mathrm{Hom}(G,\mathbb{R})$ that circumvents these problems. **Lemma 21**. *Let $G$ be a finitely generated group and $H \leq G$ a subgroup of finite index such that $r_0(G_{ab}) = r_0(H_{ab}).$ Then for any character $\chi \in \mathrm{Hom}(H,\mathbb{R})$, there exists a lift $\chi' \in \mathrm{Hom}(G,\mathbb{R})$ such that $\chi'|_H = \chi$ and moreover $$\chi \neq 0 \iff \chi' \neq 0.$$* *Proof.* As before, for $g \in G$, denote by $\overline{g} \in G_{ab}$ its image under the canonical projection. We denote by $(G_{ab})_0$ and $(H_{ab})_0$ the torsion-free parts of $G_{ab}$ and $H_{ab}$ respectively. Let $\{x_1,\ldots,x_n\}$ be a generating set for $G$. We have $r_0(G) = r_0(H) =k \leq n$. Without loss of generality we can assume that $\{\overline{x_1},\ldots,\overline{x_k}\}$ generates $(G_{ab})_0$. Since $|G:H| < \infty,$ for each $i=1,\ldots,n$, there exists an $\alpha_i \in \mathbb{N}$ such that $x_i^{\alpha_i} \in H$. Hence, using functoriality of abelianisations, and the fact that $\bar x_i$ has infinite order in $G_{ab}$, we have that $0 \neq \alpha_i \overline{x_i} \in (H_{ab})_0$ for all $i =1,\ldots,k$. Let $\alpha = \mathop{\mathrm{lcm}}\{\alpha_1,\ldots,\alpha_n\}.$ Now let $\chi: H \to \mathbb{R}$ be a character.
We define its lift $\chi': G \to \mathbb{R}$ by $$\chi'(x_i) =\frac{1}{\alpha}\chi(x_i^\alpha), \text{ for all } i=1,\ldots,n.$$ As above we have that $\alpha \overline{x_i} \neq 0$ for each $i=1,\ldots,k$. Hence an easy computation shows that $\chi'_{|H} = \chi$, which implies the claim. ◻ **Proposition 22**. *Let $G$ be a finitely generated group and $H$ a subgroup of finite index. Then $$r_0(G) = r_0(H) \iff S(G)=S(H).$$* *That is to say, the restriction map $\iota^\ast: \mathop{\mathrm{Hom}}(G,\mathbb{R}) \rightarrow \mathop{\mathrm{Hom}}(H,\mathbb{R})$, $\iota^\ast(\chi) = \chi_{|H}$, is an isomorphism of vector spaces.* *Proof.* The if-direction follows directly from [@Bieri Lemma 1.1]. For the other direction we begin by considering the map $\iota^\ast : S(G) \rightarrow S(H)$ given by $\iota^\ast([\chi]) = [\chi_{|H}]$. [Lemma 19](#lem:InjSurjHom){reference-type="ref" reference="lem:InjSurjHom"}[\[lem:InjSurjHom1\]](#lem:InjSurjHom1){reference-type="eqref" reference="lem:InjSurjHom1"} implies that this map is an embedding. [Lemma 21](#Findex 2){reference-type="ref" reference="Findex 2"} now gives surjectivity and hence the claim. ◻ To finish off [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"} we make use of the following: **Citation 23** ([@MMvW Proposition 9.3]). *Suppose that $H \leq G$ is a subgroup of finite index and $A$ a $\mathbb{Z}G$-module of type $\mathtt{FP}_{n},$ and suppose that $\chi: G \to \mathbb{R}$ restricts to a nonzero homomorphism of $H$. Then $$[\chi_{|H}] \in \Sigma^n(H, A) \iff [\chi] \in \Sigma^n(G,A).$$* *Proof of [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"}.* This follows directly from [Lemma 21](#Findex 2){reference-type="ref" reference="Findex 2"}, [Proposition 22](#chareq){reference-type="ref" reference="chareq"}, and [Citation 23](#Prop9.3MMvW){reference-type="ref" reference="Prop9.3MMvW"}. ◻ For completeness we now give an elementary proof of the homotopical part, which needs the following. **Proposition 24**.
*Let $G$ be a group of type $\mathtt{F}_{n}$ and $H$ a subgroup of finite index such that $r_0(G)=r_0(H).$ With the notation of [Lemma 21](#Findex 2){reference-type="ref" reference="Findex 2"} we have $$[\chi] \in \Sigma^n(G) \implies [\chi_{|H}] \in \Sigma^n(H),$$ and $$[\chi] \in \Sigma^n(H) \implies [\chi'] \in \Sigma^n(G).$$* *Proof.* To prove the first claim, consider a model $X$ for $EG$ with $G$-finite $n$-skeleton. Now suppose $[\chi] \in \Sigma^n(G),$ hence $X^{[r,+\infty)}_{h_\chi}$ is $(n-1)$-connected for the height function $h_\chi$ corresponding to $\chi.$ Since $H$ is finite index in $G$, the space $X$ is also a model for $EH$ with $H$-finite $n$-skeleton, and $h_\chi =h_{\chi_{|H}}$. Hence $[\chi_{|H}] \in \Sigma^n(H).$ Let us now assume $[\chi] \in \Sigma^n(H).$ By [Citation 11](#essential){reference-type="ref" reference="essential"} and the fact that $H$ has finite index in $G$, we can choose a model $X$ for $EH$ as above: $X$ is a simplicial complex with $G$-finite $n$-skeleton and one $G$-orbit of zero-cells labeled by $G$. We now fix a set $T=\{t_0,\ldots,t_{m-1}\}$ of coset representatives of $H$ in $G$ with $t_0 =e$, and define the function $h_\chi: X \to \mathbb{R}$ on the vertices of $X$ as follows: For $\gamma \in H$ we put $h_{\chi}(\gamma)=\chi(\gamma)$, and we put $h_{\chi}(t_i)=0$ for all $i$. Hence, since every $g \in G$ has a unique expression as $g =t_i \gamma$ with $\gamma \in H$, we get $$h_\chi(g) = h_{\chi}(t_i)+h_{\chi}(\gamma)=\chi(\gamma).$$ Finally, we extend this function piecewise linearly to the entire $n$-skeleton of $X.$ Hence $X^{[r,+\infty)}_{h_\chi}$ is essentially $(n-1)$-connected for any $r \in \mathbb{R}$, see [Citation 11](#essential){reference-type="ref" reference="essential"}. It remains to show that this applies when using a height function $h_{\chi'}$ corresponding to the lift $\chi'$ of $\chi$. Note that $\chi'(t_i)$ is not necessarily equal to $0$. Define $d=\min\{\chi'(t_i)\}$.
We claim that for every $g \in G$, $h_\chi(g) \geq 0 \iff \chi'(g) \geq d.$ To see this, write $g = t_i\gamma$ as above. Since $h_\chi(g) = \chi(\gamma)$ and $\chi'(g) = \chi'(t_i \gamma) = \chi'(t_i) +\chi(\gamma)$ we get $$h_\chi(g) \geq 0 \iff \chi(\gamma) \geq 0 \iff \ \chi'(t_i) + \chi(\gamma) \geq d+0 \iff \chi'(g) \geq d$$ as required. This now implies that the $0$-skeleton of $X^{[0,+\infty)}_{h_\chi}$ is precisely the same as the $0$-skeleton of $X^{[d,+\infty)}_{h_{\chi'}}$. As the space $X^{[r,+\infty)}_h$ is defined as the maximal subcomplex of $X$ contained in $h^{-1}([r,+\infty))$, where an $m$-cell is included if all of its boundary cells are included [@Bieri page 194], we have shown that $X^{[0,+\infty)}_{h_\chi} = X^{[d,+\infty)}_{h_{\chi'}}$ and this is an essentially $(n-1)$-connected model for $EG$. Hence $[\chi'] \in \Sigma^n(G)$, see [Citation 11](#essential){reference-type="ref" reference="essential"}. ◻ *Proof of [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"}.* This follows directly from [Lemma 21](#Findex 2){reference-type="ref" reference="Findex 2"}, [Proposition 22](#chareq){reference-type="ref" reference="chareq"} and [Proposition 24](#sigmaeq){reference-type="ref" reference="sigmaeq"}. ◻ # The Sigma invariants for $F_{\tau}$ {#4} We begin by collecting some properties of $F_\tau$ and its characters as well as exhibiting a finite index subgroup which satisfies the assumptions of Theorems [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"} and [Theorem 5](#hom-fi){reference-type="ref" reference="hom-fi"}. It was shown in [@Ftau Chapter 5] that $$(F_\tau)_{ab} \cong \mathbb{Z}^2 \oplus \mathbb{Z}/2\mathbb{Z}.$$ Hence $$S(F_\tau) =S^1.$$ Similarly to the original Thompson's group case, we have the two linearly independent characters $\lambda$ and $\rho$ given by the slopes at $0$ and $1$ respectively, such that $\lambda$ and $\rho$ span $\mathop{\mathrm{Hom}}(F_\tau,\mathbb{R})$.
In particular, for every $f \in F_\tau$, these are given by $$\lambda(f) =\log_{\tau}(f'(0)) \quad \text{ and } \quad \rho(f) = \log_{\tau}(f'(1)).$$ By taking appropriate subdivisions of $[0,1]$, one can construct elements $f \in F_\tau$ with support in $[0,b]$ for some $b < 1$ in $\mathbb{Z}[\tau]$ and such that $f'(0)=\tau$. Analogously, one can find $g \in F_\tau$ with support in $[a,1]$ for some $a > 0$ and with $g'(1)=\tau$. Hence $\lambda(f) = 1 = \rho(g)$, $\lambda(g) = 0 = \rho(f)$ and thus $\lambda$ and $\rho$ are linearly independent. As an example, we can use the following elements: **Example 25**. $$f(x) = \left\{ \begin{array}{lll} \tau x & \mbox{for } 0 \leq x \leq \tau^2 \\ \tau^{-1}x - \tau^2 & \mbox{for } \tau^2 \leq x \leq \tau \\ x & \mbox{for } \tau \leq x \leq 1 \end{array} \right.$$ $$g(x) = \left\{ \begin{array}{lll} x & \mbox{for } 0 \leq x \leq \tau^2 \\ \tau^{-1}x - \tau^3 & \mbox{for } \tau^2 \leq x \leq \tau \\ \tau x + \tau^2 & \mbox{for } \tau \leq x \leq 1. \end{array} \right.$$ **Proposition 26**. *Let $K$ denote the subgroup of $F_\tau$ generated by $\{x_0,x_1,y_1,x_2,y_2,...\}.$ Then $|F_\tau: K| =2$ and $K_{ab} \cong \mathbb{Z}^2 \oplus \mathbb{Z}/2\mathbb{Z}.$* *Proof.* We claim $F_\tau = K \sqcup y_0K.$ To do so, consider $g \in F_\tau$ in semi-normal form, see [Citation 16](#seminormal){reference-type="ref" reference="seminormal"}: $$g=x_0^{i_0}y_0^{\epsilon_0}x_1^{i_1}y_1^{\epsilon_1}\cdots x_n^{i_n}y_n^{\epsilon_n}x_m^{-j_m}x_{m-1}^{-j_{m-1}}\cdots x_0^{-j_0}$$ where $i_0,\ldots,i_n, j_0,\ldots,j_m \in \mathbb{Z}_{\geq 0}$ and $\epsilon_0,\dots,\epsilon_n \in \{0,1\}$.
Hence $$gK = x_0^{i_0}y_0^{\epsilon_0}K.$$ When $\epsilon_0 =0$ we have $g \in K,$ and when $\epsilon_0 =1$ a repeated application of the following computation gives $g \in y_0K$: $$\label{relator} \begin{aligned} x_0^{i_0}y_0 & = x_0^{i_0-1}x_0y_0 = x_0^{i_0-1}x_0x_1x_1^{-1}y_0 = x_0^{i_0-1}y_0^2x_1^{-1}y_0 = \\ & = x_0^{i_0-1}y_0^2y_0x_2^{-1} = x_0^{i_0-1}y_0y_0^2x_2^{-1} = x_0^{i_0-1}y_0x_0x_1x_2^{-1} \end{aligned}$$ This shows that $|F_\tau : K| =2$. To determine the abelianisation we do a similar calculation to that in [@Ftau Section 5]: We denote the image of an element $f \in F_\tau$ in the abelianisation by $\bar f$ and write the group operation additively. From the relations it follows immediately that $\bar{x}_i =\bar{x}_{i+1}$ and that $2\bar{y}_i = 2\bar{x}_1$ for all $i \geq 1.$ Substituting $\bar z = \bar{y}_1-\bar{x}_1,$ we have the two infinite-order generators $\bar{x}_0$ and $\bar{x}_1$ as well as an order-$2$ generator $\bar z$, as required. ◻ Let $H$ be a group and $\sigma : H \to H$ a monomorphism. An ascending HNN extension (with base $H$) is a group given by the presentation $$H\ast_{t,\sigma} = \langle H,t \,|\, tht^{-1}=\sigma(h); h \in H \rangle.$$ We now consider the subgroup $F_\tau[1] \leq F_\tau$ generated by $\{x_1,y_1,x_2,y_2,...\}$. There is a well-known monomorphism $\sigma: F_\tau \to F_\tau$ given by $\sigma(x_n)=x_{n+1}$ and $\sigma(y_n)=y_{n+1}$, whose image is clearly $F_\tau[1] \subsetneq F_\tau$. Restricting to $F_\tau[1]$ gives a monomorphism $\sigma : F_\tau[1] \to F_\tau[1]$ whose image is the proper subgroup $F_\tau[2] \subsetneq F_\tau[1]$ generated by $\{x_2,y_2,x_3,y_3,\ldots\}$, and so on. Hence any $F_\tau[m]$ is isomorphic to $F_\tau$ and thus of type $\mathtt{F}_{\infty}$.
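The ascending HNN construction just defined can be sanity-checked on a standard example not taken from the paper: the Baumslag--Solitar group $BS(1,2)=\mathbb{Z}\ast_{t,\sigma}$ with $\sigma(n)=2n$, realized by affine maps of the real line. The defining relation $tht^{-1}=\sigma(h)$ then becomes $tat^{-1}=a^2$ for the generator $a$ of the base. A minimal Python sketch:

```python
from fractions import Fraction

# Affine maps x -> s*x + c are encoded as pairs (s, c).  We take
#   a = (x -> x + 1), generating the base H = Z,
#   t = (x -> 2x),    the stable letter.
def compose(f, g):
    """(f o g) for affine maps: s1*(s2*x + c2) + c1."""
    (s1, c1), (s2, c2) = f, g
    return (s1 * s2, s1 * c2 + c1)

def inverse(f):
    s, c = f
    return (1 / s, -c / s)

a = (Fraction(1), Fraction(1))   # x -> x + 1
t = (Fraction(2), Fraction(0))   # x -> 2x

# Defining relation t h t^{-1} = sigma(h), i.e. t a^n t^{-1} = a^{2n}:
for n in range(1, 5):
    a_n = (Fraction(1), Fraction(n))          # a^n : x -> x + n
    lhs = compose(compose(t, a_n), inverse(t))
    rhs = (Fraction(1), Fraction(2 * n))      # a^{2n}
    assert lhs == rhs

print(compose(compose(t, a), inverse(t)) == compose(a, a))   # True
```

Here $\sigma$ is injective but not surjective (its image corresponds to $2\mathbb{Z}$), the same qualitative situation as the shift $\sigma$ on $F_\tau[1]$ above.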
After putting $t = x_0^{-1}$, a straightforward check, for example applying Reidemeister--Schreier to $F_\tau = K \sqcup y_0 K$ to deduce a presentation for $K$, shows that $$K \cong F_\tau[1]\ast_{t, \sigma}.$$ We will now adapt the calculations for Thompson's group $F$ as in [@SigmaF] to compute the Sigma invariants for $F_\tau.$ **Citation 27** ([@SigmaF Theorem 2.1]). *Let $G$ decompose as an ascending HNN extension $H\ast_{t,\sigma}$. Let $\chi$ be a character such that $\chi(H)=0$, $\chi(t)=1$.* - *If $H$ is of type $\mathtt{F}_{n}$, then $[\chi] \in \Sigma^n(G)$.* - *If $H$ is of type $\mathtt{FP}_{n}$, then $[\chi] \in \Sigma^n(G;\mathbb{Z})$.* - *If $H$ is finitely generated and $\sigma$ is not surjective, then $[-\chi] \notin \Sigma^1(G)$.* **Lemma 28**. *Let $\lambda$ and $\rho$ be the characters defined at the beginning of this section. Then $$[\lambda], [\rho] \in \Sigma^{\infty}(K) \cap \Sigma^{\infty}(F_\tau) \quad \text{ and } \quad [-\lambda], [-\rho] \notin \Sigma^1(K) \cup \Sigma^1(F_\tau),$$ $$[\lambda], [\rho] \in \Sigma^{\infty}(K;\mathbb{Z}) \cap \Sigma^{\infty}(F_\tau;\mathbb{Z}) \text{ and } [-\lambda], [-\rho] \notin \Sigma^1(K;\mathbb{Z}) \cup \Sigma^1(F_\tau;\mathbb{Z}).$$* *Proof.* We begin by determining the result for $[\lambda]$ and $[-\lambda]$. The support of $F_\tau[1]$ lies in $[\tau^2, 1]$ and hence $\lambda(F_\tau[1]) = 0.$ The slope of $x_0$ at $0$ is $\tau^{-2}$. Hence, taking the character $\chi:=\frac{1}{2}\lambda \in [\lambda]$, we obtain $\chi(t) = 1$. We can thus apply [Citation 27](#points){reference-type="ref" reference="points"} to conclude that $[\lambda] \in \Sigma^\infty(K)$ and $[-\lambda] \notin \Sigma^1(K)$. By [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"}, it follows that $[\lambda] \in \Sigma^\infty(F_\tau)$ and $[-\lambda] \notin \Sigma^1(F_\tau)$. As in [@SigmaF Section 1.4] we now consider a specific automorphism $\nu$ of $F_\tau$ to settle the case of $\rho$.
Viewing the group $F_\tau$ as a group of PL homeomorphisms of the unit interval, $\nu$ is given by conjugation by $t \mapsto 1-t.$ This induces a homeomorphism of the character sphere that in particular swaps $[\lambda]$ with $[\rho]$, and also $[-\lambda]$ with $[-\rho]$, thus proving the Lemma for $F_\tau$. A further application of [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"} also yields the result for $K$. The homological variant of the lemma is obtained similarly; see also [\[homotopy\]](#homotopy){reference-type="ref" reference="homotopy"}. ◻ We shall now consider the arcs between $[-\lambda]$ and $[-\rho]$ on the character sphere $S(F_\tau)$. Since $[-\lambda]$ and $[-\rho]$ are not antipodal points, there is a unique (closed) geodesic segment in $S(F_\tau)$ connecting them, which we denote by $\mathrm{conv}([-\lambda],[-\rho])$. In the other direction, there is a unique local geodesic from $[-\lambda]$ to $[-\rho]$, which we call the *long arc*, whose union with $\mathrm{conv}([-\lambda],[-\rho])$ yields the great circle in $S(F_\tau)$ containing $[-\lambda]$ and $[-\rho]$; in this one-dimensional case, this great circle is just $S(F_\tau)$ itself. We will need the following: **Citation 29** ([@SigmaF Theorem 2.3]). *Let $G$ decompose as an ascending HNN extension $G=H*_{t,\sigma}$. Let $\chi$ be a character of $G$ such that $\chi_{|H} \neq 0$. If $H$ is of type $\mathtt{F}_{\infty}$ and $[\chi|_H] \in \Sigma^{\infty}(H)$, then $[\chi] \in \Sigma^{\infty}(G)$.* **Proposition 30**. *All of $S(F_\tau)$, except possibly the closed geodesic $$\mathrm{conv}([-\lambda],[-\rho]),$$ lies in $\Sigma^\infty(F_\tau)$ and in $\Sigma^\infty(F_\tau,\mathbb{Z})$.* *Proof.* Again we use our previous expression of the subgroup $K$ as an ascending HNN extension of $H=F_{\tau}[1]$. By [Lemma 28](#lem:plusminuslambdarho){reference-type="ref" reference="lem:plusminuslambdarho"}, we know that $[\rho] \in \Sigma^{\infty}(K) \cap \Sigma^{\infty}(F_\tau)$.
Now let $\chi \in \mathop{\mathrm{Hom}}(F_\tau, \mathbb{R})$ be arbitrary. We claim that $$\label{claim:x1positive} \chi(x_1) > 0 \iff \chi|_H \in [\rho|_H].$$ Indeed, $\chi = r\lambda + s\rho$ for some (unique) $r,s \in \mathbb{R}$ as $\lambda$ and $\rho$ are linearly independent and $\dim_\mathbb{R}(\mathop{\mathrm{Hom}}(F_\tau,\mathbb{R}))=2$. Since $\lambda(x_1) = \lambda(y_1) = 0$ and $a_j = a_0 a_{j+1} a_0^{-1}$ for any $j \geq 1$ and $a \in \{x,y\}$, it follows that $\chi(w) = s \rho(w)$ for any $w \in H = F_\tau[1]$. This means that $\chi|_H \in \{[\rho|_H],[-\rho|_H]\}$ whenever $s \neq 0$. Finally, $\rho(x_1)=1$ implies that $\chi(x_1) = s$, whence $\chi(x_1) > 0 \iff \chi|_H \in [\rho|_H]$ (for $s = 0$ both sides of the equivalence fail). From here, we highlight that $H=F_{\tau}[1]$ is isomorphic to $F_{\tau}$, via the isomorphism $\gamma$ such that $\gamma(x_i)=x_{i-1}$ and $\gamma(y_i)=y_{i-1}$ for $i \geq 1$. The homeomorphism $S(F_\tau[1]) \cong S(F_\tau)$ induced by $\gamma$ maps $[\rho|_H]$ to $[\rho]$. As $[\rho] \in \Sigma^{\infty}(F_{\tau})$, this means $[\rho|_H] \in \Sigma^{\infty}(F_{\tau}[1])$. In particular, if $\chi \in \mathop{\mathrm{Hom}}(F_\tau,\mathbb{R})$ is positive on $x_1$, Claim [\[claim:x1positive\]](#claim:x1positive){reference-type="eqref" reference="claim:x1positive"} yields $[\chi|_H] = [\rho|_H] \in \Sigma^{\infty}(F_{\tau}[1])$. From here, we can apply [Citation 29](#long){reference-type="ref" reference="long"} to conclude that $[\chi|_{K}] \in \Sigma^{\infty}(K)$. Thus, $\chi(x_1)>0 \implies [\chi|_K] \in \Sigma^{\infty}(K)$, whence $[\chi] \in \Sigma^{\infty}(F_{\tau})$ by [Theorem 4](#htpy-fi){reference-type="ref" reference="htpy-fi"}. A straightforward computation shows that any character $\chi$ on the straight line from $\lambda$ to $\rho$ in $\mathop{\mathrm{Hom}}(F_\tau,\mathbb{R})$ satisfies $\chi(x_1) > 0$. The same holds for any character on the straight line from $\rho$ to $-\lambda$.
Hence, we have that the open arc in $S(F_{\tau})$ from $[\lambda]$ to $[-\lambda]$ that contains $[\rho]$ actually lies in $\Sigma^{\infty}(F_{\tau})$. Arguing again with the symmetry in $S(F_\tau)$ given by the automorphism $\nu$ induced by conjugation with $t \mapsto 1-t$, we conclude that the open arc from $[\rho]$ to $[-\rho]$ containing $[\lambda]$ is also in $\Sigma^{\infty}(F_\tau)$. Altogether, the long (open) arc from $[-\lambda]$ to $[-\rho]$ is in $\Sigma^{\infty}(F_{\tau})$, as claimed. The homological version follows directly from [\[homotopy\]](#homotopy){reference-type="ref" reference="homotopy"}. ◻ It remains to consider the short arc $\mathrm{conv}([-\lambda],[-\rho])$. To do this we will follow the approach of [@SigmaF Section 2.3]. We will need the following two results: **Citation 31** ([@SigmaF Corollary 1.2]). *The kernel of a nonzero discrete character $\chi$ of a group $G$ has type $\mathtt{FP}_{n}$ over the ring $R$ if and only if both $[\chi]$ and $[-\chi]$ lie in $\Sigma^n(G,R)$.* **Citation 32** ([@SigmaF Theorem 2.7]). *Suppose $G$ is of type $\mathtt{FP}_{2}$ over a ring $R$ and contains no nonabelian free subgroups. Let $\widetilde{\chi}: G \to \mathbb{R}$ be a nonzero discrete character. Then $G$ decomposes as an ascending HNN extension $H*_{t, \sigma}$, where $H$ is a finitely generated subgroup of $\ker(\widetilde{\chi}),$ and $\widetilde{\chi}(t)$ generates the image of $\widetilde{\chi}$.* **Proposition 33**. *Let $R$ be a ring. Then $$\mathrm{conv}([-\lambda],[-\rho])\cap \Sigma^2(F_{\tau},R) = \emptyset.$$* *Proof.* We note that it suffices to show that no discrete character $\chi \in \mathrm{conv}([-\lambda],[-\rho])$ lies in $\Sigma^2(F_{\tau},R)$ because such characters are dense in $\mathrm{conv}([-\lambda],[-\rho])$ and $\Sigma^2(F_{\tau},R)$ is open; see, e.g., [@SigmaF Proposition 2.9].
Observe further that $[-\lambda]$, $[-\rho] \notin \Sigma^2(F_{\tau},R)$ by [Lemma 28](#lem:plusminuslambdarho){reference-type="ref" reference="lem:plusminuslambdarho"}. So let $\chi$ be a discrete character of the form $\chi = a \lambda + b \rho$, with $a,b \in \mathbb{Q} \setminus\{0\}$. Using the elements $f,g \in F_\tau$ of [Example 25](#li-example){reference-type="ref" reference="li-example"}, we can construct elements $t \in F_{\tau}$ with the following properties: $$\label{eq:constructingt} \lambda(t)=mb \quad \text{ and } \quad \rho(t)=-ma \text{ for some } m \in \mathbb{Q} \setminus \{0\}.$$ In particular, $\chi(t) = 0$. Since $\lambda$ has discrete image in $\mathbb{R}$ and $a \neq 0$, there exists $t_0$ satisfying [\[eq:constructingt\]](#eq:constructingt){reference-type="eqref" reference="eq:constructingt"} such that $|\lambda(t_0)|$ is minimal among all elements $t$ fulfilling property [\[eq:constructingt\]](#eq:constructingt){reference-type="eqref" reference="eq:constructingt"}. Moreover, $\lambda(t_0) \neq 0$ for otherwise $t_0$ would not fulfill [\[eq:constructingt\]](#eq:constructingt){reference-type="eqref" reference="eq:constructingt"}. Let $G = \ker(\chi)$. Then, since the abelianisation of $F_\tau$ is $\mathbb{Z}^2 \times \mathbb{Z}/2\mathbb{Z},$ we have that $G = \langle \sqrt{F'_\tau}, t_0 \rangle = \sqrt{F'_\tau} \rtimes \langle t_0 \rangle$, where $\sqrt{F'_{\tau}} := \{f \in F_{\tau} \mid f^n \in F_{\tau}' \text{ for some } n\}$. Note that $\lambda|_G$ is a discrete nonzero character vanishing on the subgroup $\sqrt{F'_\tau} \leq G$ and such that $\mathop{\mathrm{im}}(\lambda|_G)$ is generated by $\lambda(t_0)$. Now suppose $G$ has type $\mathtt{FP}_{2}$ over a ring $R$. By [Citation 32](#FP2){reference-type="ref" reference="FP2"}, we can decompose $G$ as the HNN extension $H*_{t, \sigma}$, where $H$ is a finitely generated subgroup of $\sqrt{F'_{\tau}}$. 
As $H$ is generated by a finite set of elements of $F_{\tau}$, and each generator has support away from $0$ and $1$, there exists a value $\varepsilon'' > 0$ such that all elements of $H$ are supported in the interval $[\varepsilon'', 1-\varepsilon'']$. Similarly, as $t_0$ has finitely many breakpoints, there is a value $\varepsilon' > 0$ such that $t_0$ is linear on the intervals $[0, \varepsilon']$ and $[1-\varepsilon', 1]$. Let $\varepsilon = \min\{\varepsilon', \varepsilon''\}$, giving us a value with both of these properties. Since $\sqrt{F'_\tau} \rtimes \langle t_0 \rangle = G \cong H*_{t, \sigma}$, we have $\sqrt{F'_{\tau}} = \bigcup_{n \geq 1} t^n H t^{-n}$. Hence for each $f \in \sqrt{F'_{\tau}}$, there is a value $n$ such that $t^{-n} f t^n \in H$, hence $t^{-n} f t^n$ is supported in $[\varepsilon, 1-\varepsilon]$. From here, we can see that any $f \in \sqrt{F'_{\tau}}$ must be supported in $[t_0^n(\varepsilon), t_0^n(1-\varepsilon)]$ for some $n$. As $\sqrt{F'_{\tau}}$ has support $(0,1)$, we can see that $(t_0^n(\varepsilon))_{n\in\mathbb{N}}$ must have a subsequence that converges to $0$ and $(t_0^n(1-\varepsilon))_{n\in\mathbb{N}}$ must have a subsequence that converges to $1$. Since $t_0$ is linear on the intervals $[0, \varepsilon]$ and $[1-\varepsilon, 1]$, this forces $t_0(\varepsilon) < \varepsilon$ and $t_0(1-\varepsilon) > 1-\varepsilon$. Hence $t_0$ must have slope smaller than $1$ near $0$ and slope bigger than $1$ near $1$. Therefore $ab <0$. Given that we started with the assumption that $G=\ker(\chi)$ was of type $\mathtt{FP}_{2}$, we obtain the implication $$\chi = a\lambda + b\rho \quad \text{ and } \quad \ker(\chi) \text{ of type } \mathtt{FP}_{2} \implies ab < 0$$ whenever $a,b \in \mathbb{Q}\setminus\{0\}$. The contrapositive of this is that $ab>0$ implies $\ker(\chi)$ is not of type $\mathtt{FP}_{2}$.
Combining this with [Citation 31](#kernel){reference-type="ref" reference="kernel"}, we see that we cannot have both $[\chi]$ and $[-\chi]$ in $\Sigma^2(F_{\tau},R)$. Since the antipodal point $[-\chi]$ lies on the long arc, it belongs to $\Sigma^2(F_\tau,R)$ by [Proposition 30](#prop:almosteverything){reference-type="ref" reference="prop:almosteverything"}, and hence $[\chi] \not\in \Sigma^2(F_{\tau},R)$. We can transfer this result to the homotopical invariant with the use of [\[homotopy\]](#homotopy){reference-type="ref" reference="homotopy"}, which lets us conclude that $[\chi] \not\in \Sigma^2(F_{\tau},R)$ implies $[\chi] \not\in \Sigma^2(F_{\tau})$. ◻ This now concludes the proof of [Theorem 1](#main-thm){reference-type="ref" reference="main-thm"}.
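In the coordinates fixed above, the results of this section admit the following compact restatement (our summary, not a statement from the paper; the signs depend on the chosen normalization of $\lambda$ and $\rho$):

```latex
% Summary in coordinates: write a nonzero character as \chi = a\lambda + b\rho.
% Proposition 30 places every class outside conv([-\lambda],[-\rho]) in
% \Sigma^\infty, while Proposition 33 excludes that arc even from \Sigma^2, so
\[
  [\chi] \in \Sigma^{\infty}(F_\tau)
  \iff
  [\chi] \in \Sigma^{2}(F_\tau)
  \iff
  a > 0 \ \text{ or } \ b > 0 .
\]
% The excluded arc conv([-\lambda],[-\rho]) consists precisely of the
% classes with a \le 0 and b \le 0 (not both zero).
```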
--- abstract: | In this paper, we consider chaotic dynamics and variational structures of area-preserving maps. The dynamics of such maps have been widely studied, and the works of Poincaré and Birkhoff are well known. To consider variational structures of area-preserving maps, we define a special class of area-preserving maps called *monotone twist maps*. Variational structures determined from twist maps can be used to construct characteristic trajectories of these maps. Our goal is to prove the existence of an *infinite transition orbit*, i.e., an orbit that oscillates between fixed points infinitely many times, through minimizing methods. author: - Yuika Kajihara bibliography: - Re_infinite_arxiv.bib title: | Variational Structures for Infinite Transition Orbits\ of Monotone Twist Maps --- # Introduction {#section:intro} In this paper, we consider chaotic dynamics and variational structures of area-preserving maps. The dynamics of such maps have been widely studied, with key findings by Poincaré and Birkhoff. There are many related works; see [@Birkhoff1922], [@Birkhoff1966], [@Mather1986] for example. To explore these variational structures, we define a special class of area-preserving maps called *monotone twist maps*: **Definition 1** (monotone twist maps). *Let $f \colon \mathbb{R}/ \mathbb{Z}\times [a,b] \to \mathbb{R}/ \mathbb{Z}\times [a,b]$ be a $C^1$ map and assume that a lift $\tilde{f} \colon \mathbb{R}\times [a,b] \to \mathbb{R}\times [a,b]$, $(x,y) \mapsto (f_1(x,y),f_2(x,y))(=(X,Y))$, of $f$ satisfies the following:* - *$\tilde{f}$ is area-preserving, i.e., $dx \wedge dy = dX \wedge dY$;* - *$\partial X / \partial y>0$ (twist condition), and* - *Both straight lines $y=a$ and $y=b$ are invariant curves of $f$, i.e.
$f_2(x,a) = a$ and $f_2(x,b) = b$ for all $x \in \mathbb{R}/ \mathbb{Z}$.* *Then $f$ is said to be a monotone twist map.* By Poincaré's lemma, we get a generating function $h$ for a monotone twist map $f$ and it satisfies $dh=YdX -ydx.$ That is, $$y= -\partial_1 h(x,X), \ Y= \partial_2 h(x,X),$$ where $\partial_1 = \partial/\partial x$ and $\partial_2 = \partial/\partial X$. For the above $h$, by abuse of notation, we define $h \colon \mathbb{R}^{n+1} \to \mathbb{R}$ by: $$\begin{aligned} \label{action_n} h(x_0,x_1, \cdots, x_n)=\sum_{i=0}^{n-1} h(x_i,x_{i+1}).\end{aligned}$$ We can regard $h$ as a variational structure associated with $f$, because any critical point of $\eqref{action_n}$, say $(x_0,\cdots,x_n)$, gives us an orbit of $\tilde{f}$ by $y_i=-\partial_1 h(x_i,x_{i+1})=\partial_2 h(x_{i-1},x_{i})$. This is known as the Aubry-Mather theory, so called because Aubry studied critical points of the action $h$ in [@AubrDaer1983] and Mather developed the idea (e.g. [@Mather1982; @Mather1987]). We briefly summarize Bangert's investigation [@Bangert1988] of conditions on $h$ suitable for the study of minimal sets. We consider the space of bi-infinite sequences of real numbers, and define convergence of a sequence $x^n = (x^n_i)_{i \in \mathbb{Z}}\in \mathbb{R}^\mathbb{Z}$ to $x \in \mathbb{R}^\mathbb{Z}$ by: $$\begin{aligned} \label{t_metric} \lim_{n \to \infty} |x^n_i - x_i| = 0 \ ({}^{\forall} i \in \mathbb{Z}).\end{aligned}$$ Now we treat a function $h$ satisfying a *variational principle*. **Definition 2** (variational principle).
*Let $h$ be a continuous map from $\mathbb{R}^2$ to $\mathbb{R}$. We call $h$ a variational principle if it satisfies the following:* - *For all $(\xi,\eta) \in \mathbb{R}^2$, $h(\xi,\eta)=h(\xi+1,\eta+1)$;* - *$h(\xi,\xi+\eta) \to \infty$ as $|\eta| \to \infty$, uniformly in $\xi$;* - *If $\underline{\xi}< \bar{\xi}$ and $\underline{\eta}< \bar{\eta}$, then $h(\underline{\xi},\underline{\eta}) + h(\bar{\xi},\bar{\eta}) < h(\underline{\xi},\bar{\eta}) + h(\bar{\xi},\underline{\eta})$; and* - *If $(x,x_0,x_1)$ and $(\xi,x_0,\xi_1)$ are minimal and $(x,x_0,x_1) \neq (\xi,x_0,\xi_1)$, then $(x-\xi)(x_1-\xi_1)<0$.* In this paper, we call an element $x=(x_i)_{i \in \mathbb{Z}} \in \mathbb{R}^\mathbb{Z}$ a configuration. There are distinctive configurations referred to as *minimal configurations* and *stationary configurations*. **Definition 3** (minimal configuration/stationary configuration). *Fix $n$ and $m$ with $n<m$ arbitrarily. A finite sequence $x=(x_i)_{i=n}^{m}$ is said to be minimal if, for any (finite) configuration $(y_i)_{i=n_0}^{n_1}$ with $y_{n_0}=x_{n_0}$ and $y_{n_1}=x_{n_1}$, $$h(x_{n_0},x_{n_0+1}, \cdots, x_{n_1-1}, x_{n_1}) \le h(y_{n_0},y_{n_0+1}, \cdots, y_{n_1-1}, y_{n_1}),$$ where $n \le n_0 < n_1 \le m$. A configuration $x=(x_i)_{i \in \mathbb{Z}}$ is called minimal if, for any $n<m$, the finite segment $(x_i)_{i=n}^{m}$ is minimal. Moreover, a configuration $x$ is called locally minimal or a stationary configuration if, for any $i \in \mathbb{Z}$, it holds that $\partial_2 h (x_{i-1},x_i) + \partial_1h(x_i,x_{i+1})=0.$* For $x=(x_i)_{i \in \mathbb{Z}} \in \mathbb{R}^\mathbb{Z}$, we define $\alpha^+(x)$ and $\alpha^-(x)$ by: $$\alpha^+(x) =\lim_{i \to \infty} \frac{x_i}{i},\ \alpha^-(x) =\lim_{i \to -\infty} \frac{x_i}{i}.$$ We only discuss the case of $\alpha^+(x)=\alpha^{-}(x)$. **Definition 4** (rotation number).
*If both $\alpha^+(x)$ and $\alpha^- (x)$ exist and $\alpha^+(x)=\alpha^-(x)(=:\alpha(x))$, then we call $\alpha(x)$ a rotation number of $x$.* Let $\mathcal{M}_\alpha$ be the set of minimal configurations with rotation number $\alpha$. It is known that for any $\alpha \in \mathbb{R}$, the set $\mathcal{M}_{\alpha}$ is non-empty and compact (see [@Bangert1988] for the proof). For $\alpha \in \mathbb{Q}$, we define periodicity as follows: **Definition 5** (periodic configurations). *For $q \in \mathbb{N}$ and $p \in \mathbb{Z}$, a configuration $x=(x_i)_{i \in \mathbb{Z}}$ is said to be $(q,p)$-periodic if $x=(x_i)_{i \in \mathbb{Z}} \in \mathbb{R}^\mathbb{Z}$ satisfies $$x_{i+q}=x_{i} + p,$$ for any $i \in \mathbb{Z}$.* It is easily seen that if $x$ is $(q,p)$-periodic, then its rotation number is $p/q$. This paper discusses only the case where $\alpha \in \mathbb{Q}$. For $\alpha=p/q \in \mathbb{Q}$, we set: $$\mathcal{M}_{\alpha}^\mathrm{per}:= \{ x \in \mathcal{M}_{\alpha} \mid \text{$x$ is $(q,p)$-periodic}\}.$$ **Definition 6** (neighboring pair). *For a set $A \subset \mathbb{R}^\mathbb{Z}$ and $a,b \in A$ with $a<b$, we call $(a,b)$ a neighboring pair of $A$ if $a,b \in A$ and there is no other $x \in A$ with $a<x<b$. Here $a<b$ means $a_i < b_i$ for any $i \in \mathbb{Z}$.* Given a neighboring pair $(x^0,x^1)$ of $\mathcal{M}_{\alpha}^\mathrm{per}$, we define: $$\begin{aligned} \mathcal{M}_{\alpha}^+(x^0,x^1)&=\{x \in \mathcal{M}_{\alpha} \mid |x_i - x_i^0| \to 0 \ (i \to -\infty) \ \text{and} \ |x_i - x_i^1| \to 0 \ (i \to \infty) \}, \ \text{and} \\ \mathcal{M}_{\alpha}^-(x^0,x^1)&=\{x \in \mathcal{M}_{\alpha} \mid |x_i - x_i^0| \to 0 \ (i \to \infty) \ \text{and} \ |x_i - x_i^1| \to 0 \ (i \to -\infty) \}.\end{aligned}$$ Bangert showed the following proposition. **Proposition 7** ([@Bangert1988]). *Given $\alpha \in \mathbb{Q}$, $\mathcal{M}_{\alpha}^\mathrm{per}$ is nonempty.
Moreover, if $\mathcal{M}_\alpha^\mathrm{per}$ has a neighboring pair, then $\mathcal{M}_{\alpha}^+$ and $\mathcal{M}_{\alpha}^-$ are nonempty.* Although we have discussed minimal configurations in the preceding paragraphs, there are also interesting works that treat non-minimal orbits between periodic orbits, in particular [@Rabinowitz2008] and [@Yu2022]. In [@Rabinowitz2008], Rabinowitz used minimizing methods to prove the existence of three types of solutions (periodic, heteroclinic, and homoclinic) in potential systems with time reversibility, i.e., $V(t,x)=V(-t,x)$. Under an assumption on the set of periodic and heteroclinic solutions called a *gap*, which is similar to a neighboring pair, non-minimal heteroclinic and homoclinic orbits can be obtained between two periodic orbits. **Remark 8**. *There are two remarks about one-transition orbits: (a) Each element of the sets $\mathcal{M}_{\alpha}^\pm$ gives a monotone heteroclinic orbit, and Mather's result [@Mather1993] is the first to discuss this problem. (b) The existence of one-transition orbits (i.e., monotone heteroclinic orbits) does not require gaps for heteroclinic orbits. This can also be illustrated by considering a simple pendulum system, since the set of heteroclinic orbits is dense in that system.* Since a heteroclinic/homoclinic orbit is a trajectory that transitions between two equilibrium points, we refer to these orbits as *$k$-transition orbits* in this paper. For example, a monotone heteroclinic orbit is a one-transition orbit. Non-minimal orbits are realized as *$n$-transition orbits* for $n \ge 2$, and they are heteroclinic when $n$ is odd and homoclinic when $n$ is even (see the following figures). ![One and two transition orbits](1transition.pdf){width="5cm"} ![One and two transition orbits](2transition.pdf){width="5cm"} Rabinowitz's approach can be applied to variational methods for area-preserving maps.
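To keep a concrete example in mind, consider a standard Frenkel--Kontorova-type generating function (an illustration we supply here, not an example taken from [@Yu2022] or [@Bangert1988]; the $1$-periodic potential $V \in C^2$ is an assumption of this sketch):

```latex
% Frenkel--Kontorova-type generating function with a 1-periodic V \in C^2:
%   h(\xi,\eta) = (\eta-\xi)^2/2 + V(\xi),   V(\xi+1) = V(\xi).
% Here \partial_1\partial_2 h \equiv -1.
% The stationarity condition of Definition 3,
%   \partial_2 h(x_{i-1},x_i) + \partial_1 h(x_i,x_{i+1}) = 0,
% becomes the second-order difference equation
\[
  (x_i - x_{i-1}) - (x_{i+1} - x_i) + V'(x_i) = 0,
  \qquad\text{i.e.}\qquad
  x_{i+1} - 2x_i + x_{i-1} = V'(x_i).
\]
% In particular, every constant configuration x_i \equiv u with V'(u) = 0
% is stationary, so transition orbits in this model are discrete analogues
% of the pendulum-type connecting orbits mentioned in Remark 8.
```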
Yu [@Yu2022] added the following assumptions $(h_5)$ and $(h_6)$ on the variational principle $h$: - There exists a positive continuous function $p$ on $\mathbb{R}^2$ such that: $$h(\xi,\eta') + h(\eta,\xi') - h(\xi,\xi') - h(\eta,\eta') > \int_{\xi}^{\eta} \int_{\xi'}^{\eta'} p$$ if $\xi < \eta$ and $\xi' < \eta'$. - There is a $\theta > 0$ satisfying the following conditions: - $\xi \mapsto \theta \xi^2 /2 - h(\xi,\xi')$ is convex for any $\xi'$, and - $\xi' \mapsto \theta \xi'^2 /2 - h(\xi,\xi')$ is convex for any $\xi$. **Remark 9**. *  [\[rem:delta_bangert\]]{#rem:delta_bangert label="rem:delta_bangert"}* - *One sufficient condition for $(h_2)$-$(h_5)$ is $$(\tilde{h}) \ \text{$h \in C^2$ and $\partial_1 \partial_2 h \le -\delta <0$ for some $\delta>0$}.$$ Indeed, under $(\tilde{h})$ the assumption $(h_5)$ holds with the constant positive function $p = \delta$. If $f \in C^1$ and satisfies $\partial X/\partial y \ge \delta$ for some $\delta>0$, a generating function $h$ for $f$ satisfies $(\tilde{h})$. However, $(\tilde{h})$ is not a necessary condition for satisfying $(h_2)$-$(h_5)$.* - *Assuming $(h_6)$ allows us to derive Lipschitz continuity for $h$ in the following sense: there is a Lipschitz constant $C$ satisfying: $$\begin{aligned} &h(\xi,\eta_1)-h(\xi,\eta_2) \le C|\eta_1-\eta_2|, \text{ and}\\ &h(\xi_1,\eta)-h(\xi_2,\eta) \le C|\xi_1-\xi_2|.\end{aligned}$$* Clearly, $(h_5)$ implies $(h_3)$. Mather [@Mather1987] proved that if $h$ satisfies $(h_1)$-$(h_6)$, then $\partial_2 h (x_{i-1},x_i)$ and $\partial_1h(x_i,x_{i+1})$ exist in the sense of left-sided limits. In addition, he proved that if $x$ is a locally minimal configuration, then it satisfies: $$\partial_2 h (x_{i-1},x_i) + \partial_1h(x_i,x_{i+1})=0, \ {}^{\forall} {i \in \mathbb{Z}}.$$ Yu applied Rabinowitz's methods to monotone twist maps to show the existence of finite transition orbits for all $\alpha=p/q \in \mathbb{Q}$. We will give a summary of his idea in the case of $\alpha=0$ (i.e.
$(q,p)=(1,0)$ in Definition [Definition 5](#def:periodic){reference-type="ref" reference="def:periodic"}). Let $(u^0,u^1)$ be a neighboring pair of $\mathcal{M}_0^\mathrm{per}$. By abuse of notation, for $j=0,1$ we then denote by $u^j$ the constant configuration $(x_i)_{i\in\mathbb{Z}}$ with $x_i=u^j$ for all $i \in \mathbb{Z}$. We set: $$\begin{aligned} \label{c} {c:=\min_{ x \in \mathbb{R}} h(x,x)}(=h(u^0,u^0)=h(u^1,u^1))\end{aligned}$$ and: $$\begin{aligned} \label{pre-action} I(x) := \sum_{i \in \mathbb{Z}} a_i(x),\end{aligned}$$ where $a_i(x)= h(x_i,x_{i+1}) - c$. Yu [@Yu2022] studied local minimizers of $I$ to show the existence of finite transition orbits, i.e., heteroclinic or homoclinic orbits. Given a rational number $\alpha \in \mathbb{Q}$ and a neighboring pair $(x^0,x^1)$ with $x^0, x^1 \in \mathcal{M}_{\alpha}^\mathrm{per}$, we let: $$\begin{aligned} I^+_{\alpha}(x^0,x^1)&=\{x_0 \in \mathbb{R}\mid x \in \mathcal{M}_{\alpha}^+(x^0,x^1)\}, \ \text{and}\\ I^-_{\alpha}(x^0,x^1)&=\{x_0 \in \mathbb{R}\mid x \in \mathcal{M}_{\alpha}^-(x^0,x^1) \}.\end{aligned}$$ Under the above setting, he showed: **Theorem 10** (Theorem 1.7, [@Yu2022]). *Let $\alpha \in \mathbb{Q}$ be a rational number and $(x^0,x^1)$ a neighboring pair of $p_0(\mathcal{M}_{\alpha}^\mathrm{per})$. If $$\begin{aligned} \label{hetero_gap} I^+_{\alpha}(x^0,x^1) \neq (x_0^0,x_0^1) \ \text{and} \ I^-_{\alpha}(x^0,x^1) \neq (x_0^0,x_0^1) ,\end{aligned}$$ then, for every ${\delta}>0$ small enough, there is an $m=m(\delta)$ such that for every sequence of integers $q=( q_i )_{i \in \mathbb{Z}}$ with $q_{i+1}-q_{i} \ge 4m$ and for every $j,k \in \mathbb{Z}$ with $j<k$, there is a configuration $x$ satisfying:* 1. *$x_i^0<x_i<x_{i}^1$ for all $i \in \mathbb{Z}$;* 2. *$|x_{q_i-m}-x^i_{q_i-m}| \le \delta$ and $|x_{q_i+m}-x^i_{q_i+m}| \le \delta$ for all $i=j,\dots,k$;* 3.
*$|x_i - x_i^j| \to 0$ as $i \to -\infty$ and $|x_i - x_i^k| \to 0$ as $i \to +\infty$.* *Here, for any $j \in \mathbb{Z}$, $x^j=x^0$ if $j$ is even, and $x^j=x^1$ if $j$ is odd.* Furthermore, Rabinowitz [@Rabinowitz2008] proved the existence of an infinite transition orbit as a limit of sequences of finite transition orbits. However, the variational structure of infinite transition orbits for potential systems is an open question in his paper. To consider this question for twist maps, the following proposition is crucial. **Proposition 11** (Proposition 2.2, [@Yu2022]). *If $I(x) < \infty$, then $|x_i -u^1| \to 0$ or $|x_i -u^0| \to 0$ as $|i| \to \infty$.* Since this implies that $I(x) = \infty$ if $x$ has infinitely many transitions, we need to fix the normalization of $I$. Therefore, we focus on giving the variational structure and boundary condition that characterize infinite transition orbits of monotone twist maps. As a result, the function $J$ and the set $X_{k,\rho}$ defined in Section [3](#sec:pf){reference-type="ref" reference="sec:pf"} represent a variational structure and a configuration space for infinite transition orbits. Through this variational problem, we show: **Theorem 12** (Our main theorem). *Assume the same condition as in Theorem [Theorem 10](#theo:yu_main){reference-type="ref" reference="theo:yu_main"}. Then, for every positive sequence ${\epsilon}=(\epsilon_i)_{i \in \mathbb{Z}}$, there is an $m=( m_i )_{i \in \mathbb{Z}}$ such that for every sequence of integers $k=(k_i)_{i \in \mathbb{Z}}$ with $k_{i+1}-k_{i} \ge m_i$, there is a configuration $x=(x_i)_{i \in \mathbb{Z}}$ satisfying:* 1. *$x_i^0<x_i<x_{i}^1$ for all $i \in \mathbb{Z}$;* 2. *for any $j \in \mathbb{Z}$,  $|x_{i}-x^{2j}_{i}| \le \epsilon_{2j}$ if $i \in [k_{4j}, k_{4j+1}]$ and $|x_{i}-x^{2j-1}_{i}| \le \epsilon_{2j+1}$ if $i \in [k_{4j+2}, k_{4j+3}]$.* This paper is organized as follows.
Section [2](#sec:pre){reference-type="ref" reference="sec:pre"} deals with Yu's results in [@Yu2022] and related remarks. In Section [3](#sec:pf){reference-type="ref" reference="sec:pf"}, our main results are stated. We first give the proof of the case of $\alpha=0$ and then see the generalized cases. Section [4](#sec:add){reference-type="ref" reference="sec:add"} provides, as additional discussions, a special example and the number of the obtained infinite transition orbits. # Yu's results {#sec:pre} In this section, we introduce properties of $\eqref{pre-action}$ and minimal configurations using several useful results in [@Yu2022]. Moreover, we study estimates of heteroclinic configurations. ## Properties of minimal configurations Let $(u^0,u^1)$ be a neighboring pair of $\mathcal{M}_0^\mathrm{per}$ and: $$\begin{aligned} \label{eq:x} \begin{split} X &= X(u^0,u^1)= \{ x = (x_i)_{i \in \mathbb{Z}} \mid u^0 \le x_i \le u^1 \ ({}^{\forall} i \in \mathbb{Z}) \},\\ X(n)&=X(n ; u^0,u^1)=\{ x =(x_i)_{i=0}^{n} \mid u^0 \le x_i \le u^1 \ ({}^{\forall} i \in \{0, \cdots, n\}) \}, \ \text{and}\\ \hat{X}(n) &= \hat{X}(n ; u^0,u^1)=\{ x =(x_i)_{i=0}^{n} \mid x_0=x_n, \ u^0 \le x_i \le u^1 \ ({}^{\forall} i \in \{0, \cdots, n\}) \}. \end{split}\end{aligned}$$ **Definition 13** ([@Yu2022]). *For $x \in X(n)$, we set: $$d(x):=\max_{0 \le i \le n} \min_{j \in \{0,1\}} |x_i -u^j|.$$ For any $\delta > 0$, let: $$\begin{aligned} \label{phi} \phi(\delta):= \inf_{n \in \mathbb{Z}^+} \inf \left\{ \sum_{i=0}^{n-1} a_i(x) \mid x \in \hat{X}(n) \ {\text{and}} \ d(x) \ge \delta \right\}.\end{aligned}$$* **Lemma 14** (Lemma 2.7, [@Yu2022]). *The function $\phi$ is continuous and satisfies $\phi(\delta)>0$ if $\delta >0$; $\phi(\delta)=0$ if $\delta=0$. It increases monotonically with respect to $\delta$.
Moreover, for any $n \in \mathbb{N}$ and $x \in \hat{X}(n)$ satisfying $$\min_{j=0,1} |x_i -u^j| \ge \delta,\ (i=1,\cdots,n-1),$$ we have $$\sum_{i=0}^{n-1} a_i(x) \ge n \phi(\delta).$$* **Lemma 15** (Lemma 2.8, [@Yu2022]). *For any $n \in \mathbb{N}$ and $x \in {X}(n)$ satisfying $d(x) \ge \delta$, $$\sum_{i=0}^{n-1} a_i(x) \ge \phi(\delta)-C|x_n-x_0| \ge -C|x_n-x_0|.$$* *Proof.* See [@Yu2022]. This proof requires $(h_3)$. ◻ **Lemma 16** (Lemma 2.10, [@Yu2022]). *If $x \in X$ satisfies $|x_i - u^0| \to 0$ as $|i| \to \infty$ and $x_i \neq u^0$ for some $i \in \mathbb{Z}$, then $I(x)>0$.* We also need to check that each component of stationary configurations is not equal to $u^0$ or $u^1$. This follows from the next lemmas. **Lemma 17** (Lemma 2.11, [@Yu2022]). *For any $\delta \in (0, u^1-u^0]$, if $(x_i)_{i=0}^{2}$ satisfies* 1. *$x_i \in [u^0,u^1]$ for all $i=0,1,2$;* 2. *$x_1 \in [u^1-\delta,u^1]$, and $x_0 \neq u^1$ or $x_2 \neq u^1$; and* 3. *$h(x_0,x_1,x_2) \le h(x_0, \xi,x_2)$ for all $\xi \in [u^1-\delta,u^1]$,* *then $x_1 \neq u^1$. This still holds if we replace every $u^1$ by $u^0$ and every $[u^1-\delta,u^1]$ by $[u^0, u^0+\delta]$.* **Lemma 18** (Lemma 2.12, [@Yu2022]). *For any $n_0$ and $n_1 \in \mathbb{N}$ with $n_0 < n_1$, if a finite configuration $x=(x_i)_{i=n_0}^{n_1}$ satisfies* 1. *$x_i \in [u^0,u^1]$ for all $i=n_0, \cdots, n_1$ and* 2. *for any $(y_{i})_{i=n_0}^{n_1}$ satisfying $y_{n_0}=x_{n_0}$, $y_{n_1}=x_{n_1}$, and $y_i \in [u^0,u^1]$, $$h(x_{n_0},x_{n_0+1}, \cdots, x_{n_1-1}, x_{n_1}) \le h(y_{n_0},y_{n_0+1}, \cdots, y_{n_1-1}, y_{n_1}),$$* *then $x$ is a minimal configuration. Moreover, if $x$ also satisfies $x_{n_0} \notin \{u^0,u^1\}$ or $x_{n_1} \notin \{u^0,u^1\}$, then $x_i \notin \{u^0,u^1\}$ for all $i=n_{0}+1, \cdots, n_1-1$.* *Proof of the two lemmas above.* See [@Yu2022]. These proofs require $(h_4)$ and $(h_5)$.
◻ Moreover, we can replace $\alpha=0$ with any other rational number, as seen below. **Definition 19** (Definition 5.1, [@Yu2022]). *For $\alpha=p/q \in \mathbb{Q}\backslash \{0\}$, we set: $$\begin{aligned} X_{\alpha}(x^-,x^+)&:=\{x=(x_i)_{i \in \mathbb{Z}} \mid x_i^- \le x_i \le x_i^+ (i \in \mathbb{Z})\},\end{aligned}$$ where $x^-$ and $x^+$ are in $\mathcal{M}_{\alpha}^{\mathrm{per}}$ and $(x^-_0,x^+_0)$ is a neighboring pair in $p_0(\mathcal{M}_{\alpha}^{\mathrm{per}})$.* **Definition 20** (Definition 5.2, [@Yu2022]). *Let $h_i \colon \mathbb{R}^2 \to \mathbb{R}$ be a continuous function for $i=1,2$. For $h_1$ and $h_2$, we define $h_1 \ast h_2 \colon \mathbb{R}^2 \to \mathbb{R}$ by $$h_1 \ast h_2(x_1,x_2) = \min_{\xi \in \mathbb{R}} (h_1(x_1, \xi) + h_2(\xi, x_2)).$$ We call this the **conjunction** of $h_1$ and $h_2$.* Using the conjunction, we define a function $H \colon \mathbb{R}^2 \to \mathbb{R}$ for $\alpha=p/q$ by: $$H(\xi,\xi') = h^{*q}(\xi,\xi'+p),$$ where $h^{*q}(x,y)=h_1 \ast h_2 \ast \cdots \ast h_q(x,y)$ and $h_i=h$ for all $i = 1,2, \cdots , q$. **Definition 21** (Definition 5.5, [@Yu2022]). *For any $y =(y_i)_{i \in \mathbb{Z}} \in X(x_0^-,x_0^+)$, we define $x=(x_i)_{i \in \mathbb{Z}} \in X_{\alpha}(x^-,x^+)$ as follows:* 1. *$x_{iq}=y_i + ip$ and* 2. *$(x_j)_{j=iq}^{(i+1)q}$ satisfies $$h(x_{iq}, \cdots, x_{(i+1)q}) =H(x_{iq},x_{(i+1)q}) = H(y_i,y_{i+1}),$$ i.e., $(x_j)_{j=iq}^{(i+1)q}$ is a minimal configuration of $h$.* Although we focus on the case of rotation number $\alpha=0$, we may apply our proof to all rational rotation numbers thanks to the following proposition. **Proposition 22** (Proposition 5.6, [@Yu2022]). *Let $y \in X(x_0^-,x_0^+)$ and $x \in X_{\alpha}(x^-,x^+)$ be defined as above.
If $y$ is a stationary configuration of $H$, then $x$ must be a stationary configuration of $h$.* ## Some remarks on heteroclinic orbits Let $X^0$ and $X^1$ be given by: $$\begin{aligned} X^0&=\{x \in X \mid |x_i - u^1| \to 0 \ (i \to \infty), |x_i - u^0| \to 0 \ (i \to -\infty)\}, \ \text{and}\\ X^1&=\{x \in X \mid |x_i - u^1| \to 0 \ (i \to -\infty), |x_i - u^0| \to 0 \ (i \to \infty)\}.\end{aligned}$$ By considering a local minimizer (precisely, a global minimizer in $X^0$ or $X^1$), Yu [@Yu2022] proved the existence of heteroclinic orbits, which Bangert showed in [@Bangert1988], as per the following proposition. **Proposition 23** (Theorem 3.4 and Proposition 3.5, [@Yu2022]). *There exists a stationary configuration $x$ in $X^0$ (resp. $X^1$) satisfying $I(x)=c_0$ (resp. $I(x)=c_1$), where $$c_0=\inf_{x \in X^0} I(x), \ c_1=\inf_{x \in X^1} I(x).$$ Moreover, $x$ is strictly monotone, i.e., $x_i<x_{i+1}$ (resp. $x_i>x_{i+1}$) for all $i \in \mathbb{Z}$.* Let: $$\begin{aligned} &\mathcal{M}^0(u^0,u^1)=\{x \in X^0 \mid I(x)=c_0\}, \ \text{and}\\ &\mathcal{M}^1(u^0,u^1)=\{x \in X^1 \mid I(x)=c_1\}.\end{aligned}$$ Set $$\begin{aligned} \label{cstar} c_\ast:=I(x^0) + I(x^1),\end{aligned}$$ where $x^i \in \mathcal{M}^i$. From the above and Lemma [Lemma 16](#lemm:positive){reference-type="ref" reference="lemm:positive"}, we immediately obtain the following corollary. **Corollary 24**. *$c_{\ast}>0$.* *Proof.* Choose $x^0 \in \mathcal{M}^0(u^0,u^1)$ and $x^1 \in \mathcal{M}^1(u^0,u^1)$ arbitrarily. From monotonicity, $x^0$ and $x^1$ intersect exactly once. We define $x^+$ and $x^-$ in $X$ by $x^+_i := \max\{x^0_i,x^1_i\}$ and $x^-_i := \min\{x^0_i,x^1_i\}$. By $(h_3)$ and Lemma [Lemma 16](#lemm:positive){reference-type="ref" reference="lemm:positive"}, $$c_{\ast}=I(x^0) + I(x^1) \ge I(x^+) + I(x^-) >0.$$ This completes the proof. ◻ Next, we consider a minimal configuration under fixed ends.
For $0<a,b< (u^1-u^0)/2$, let: $$\begin{aligned} Y^{\ast,0}(n,a,b) &= X(n) \cap \{x_0=u^0+a\} \cap \{x_n=u^0+b\}, \ \text{and}\\ Y^{\ast,1}(n,a,b) &= X(n) \cap \{x_0=u^1-a\} \cap \{x_n=u^1-b\}\end{aligned}$$ and let $y^0(n,a,b)=(y^0_i) \in Y^{\ast,0}(n,a,b)$ be a finite configuration satisfying: $$\sum_{i=0}^{n-1} {a_i}(y^0(n,a,b)) = \min_{x \in Y^{\ast,0}(n,a,b)} \sum_{i=0}^{n-1} {a_i}(x).$$ The assertion of the next lemma may appear confusing, but it is useful for our proof in Section [3](#sec:pf){reference-type="ref" reference="sec:pf"}. **Lemma 25**. *For any $m_0, m_1 \in \mathbb{Z}_{\ge 0} (m_0<m_1)$ and $\rho_0, \rho_1$, and $\delta$ with $\rho_0 + \rho_1 + 2\delta <c_{\ast}/2C$, there exists $n_0$ such that for all $n \ge n_0$, there exists $l \in [m_0, n-m_1] \cap \mathbb{Z}$ satisfying $|y^0_l(n,\rho_0, \rho_1) - u^0| \le \delta$. A similar argument holds if $u^0$ and $y^0$ are replaced by $u^1$ and $y^1$.* *Proof.* We first consider the case $m_0=m_1=0$. Let $z=(z_i)_{i=0}^{n}$ be given by $z_0=y^0_0$, $z_n=y^0_n$ and $z_{i} = u^0$ otherwise. Clearly, for any $n \in \mathbb{Z}_{>0}$, $$\sum_{i=0}^{n-1} a_{i}(y^0(n,\rho_0, \rho_1)) \le \sum_{i=0}^{n-1} a_{i}(z) \le C(\rho_0+\rho_1) < \frac{c_{\ast}}{2}.$$ On the other hand, Lemma [Lemma 15](#lemm:finite_bdd){reference-type="ref" reference="lemm:finite_bdd"} and the definition of $y^0$ imply that if $x \in X(n) \cap \{\min |x_i - u^j| \ge \delta\}$, then $$\begin{aligned} \sum_{i=0}^{n-1} a_i(x) \ge n \phi(\delta) - C|\rho_1-\rho_0|,\end{aligned}$$ and $\sum_{i=0}^{n-1} a_i(x)>{c_{\ast}}/{2}$ for sufficiently large $n$, which is a contradiction. The above argument implies that for sufficiently large $n$, there exists $l \in [0, n] \cap \mathbb{Z}$ such that $|y^0_l(n,\rho_0, \rho_1) - u^0| \le \delta$ or $|y^0_l(n,\rho_0, \rho_1) - u^1| \le \delta$. That is, either of the following two conditions holds: 1.
There exists $l \in [0, n] \cap \mathbb{Z}$ such that $|y^0_l(n,\rho_0, \rho_1) - u^0| \le \delta$ and $|y^0_i(n,\rho_0, \rho_1) - u^1|>\delta$ for all $i \in [0,n] \cap \mathbb{Z}$, or 2. There exists $l \in [0, n] \cap \mathbb{Z}$ such that $|y^0_l(n,\rho_0, \rho_1) - u^1| \le \delta$. To prove our claim for $m_0=m_1=0$, it suffices to show that the second condition does not occur for sufficiently large $n$. If the second condition holds, then by Corollary [Corollary 24](#co:positive_het){reference-type="ref" reference="co:positive_het"}, $$\begin{aligned} \sum_{i} a_i(y^0) > c_{\ast} -C(\rho_0+\rho_1) -2C\delta> \frac{c_{\ast}}{2},\end{aligned}$$ which is a contradiction. Next we fix $m_0,m_1 \in \mathbb{Z}_{>0}$ arbitrarily. Again, it suffices to show that the second condition does not occur. By Lipschitz continuity, $$\sum_{i=0}^{m_0-1} a_i(y^0(n,a,b)) + \sum_{i=n-m_1}^{n-1} a_i(y^0(n,a,b)) \le C(y^0_{m_0}-u^0)+ C(y^0_{n-m_1}-u^0)$$ and if $x \in X(n) \cap \{\min |x_i - u^j| \ge \delta\}$, then $$\begin{aligned} \sum_{i=m_0}^{n-m_1-1} a_i(x) \ge (n-m_1-m_0) \phi(\delta) - C|\rho_1-\rho_0|,\end{aligned}$$ which is a contradiction for any $n$ satisfying $n-m_0-m_1 \gg 1$. ◻ In fact, $y^i(n,a,b)$ can come close to $u^i$ arbitrarily many times, as per the following lemma. **Lemma 26**. *For any $k$, $\rho_0, \rho_1$ and $\delta$ with $\rho_0 + \rho_1 + 2\delta <c_{\ast}/2C$, there exists $n_0$ such that for all $n \ge n_0$, there exist $l_1, \cdots, l_k \in [0, n] \cap \mathbb{Z}$ satisfying $|y^0_{l_i}(n,\rho_0, \rho_1) - u^0| \le \delta$ for all $i =1, \cdots, k$. A similar statement holds if $u^0$ and $y^0$ are replaced by $u^1$ and $y^1$.* *Proof.* We only discuss the case where $k=2$. If we assume that the lemma is false, then we set $j_1=\lfloor n/2 \rfloor$. For some $j_0 \in [0, n]$, the restriction $x=(y^0_i)_{i=j_0}^{j_0+j_1}$ satisfies $|y^0_i(n,\rho_0, \rho_1) - u^0| > \delta$ for all $i \in [j_0,j_0+j_1] \cap \mathbb{Z}$.
On the other hand, $(h_1)$ and Lemma [Lemma 25](#lemm:one_delta){reference-type="ref" reference="lemm:one_delta"} imply that for sufficiently large $j_1$, it holds that $\sum a_i(x) > c_{\ast}/2$, which is a contradiction. The other cases are shown in the same way. ◻ **Lemma 27**. *For any $\epsilon >0$, there exist $n_0 \in \mathbb{N}$ and $x \in \mathcal{M}^0(u^0,u^1)$ $(resp. x \in \mathcal{M}^1(u^0,u^1))$ such that $\sum_{i=-n}^{n-1} a_i(x) \in (c_0 -\epsilon, c_0 + \epsilon)$ $(resp. \sum_{i=-n}^{n-1} a_i(x) \in (c_1 -\epsilon, c_1 + \epsilon))$ for all $n \ge n_0$.* *Proof.* For sufficiently large $n_0$, there exists $y \in \mathcal{M}^0$ such that for any $n \ge n_0$, $$y_{-n}-u^0 < \epsilon/2C \ \text{and} \ u^1-y_n< \epsilon/2C.$$ Since $c_0=\sum_{i \in \mathbb{Z}} a_i(y)$ by the minimality of $y$, we get: $$\begin{aligned} \left| \sum_{i=-n}^{n-1} a_i(y) -c_0 \right| = \left| \sum_{i <-n} a_i(y) + \sum_{i \ge n} a_i(y) \right| \le C( (y_{-n}-u^0) + (u^1-y_n)) < \epsilon\end{aligned}$$ as desired. A similar argument applies to $\mathcal{M}^1$. ◻ We now check the properties of the 'pseudo' minimal heteroclinic configurations. Under $\eqref{hetero_gap}$, the following lemma holds. **Lemma 28** (Proposition 4.1, [@Yu2022]). *Assume $\eqref{hetero_gap}$ holds. For any $\epsilon>0$, there exist $\delta_i \in (0, \epsilon)$ $(i=1,2,3,4)$ and positive constants $e_0=e_0(\delta_1,\delta_2)$ and $e_1=e_1(\delta_3,\delta_4)$ satisfying: $$\begin{aligned} \inf\{ I(x) \mid x \in X^0, x_0 =u^0+\delta_1 \text{ or } x_0=u^1-\delta_2\} &= c_0+e_0 \ and\\ \inf\{ I(x) \mid x \in X^1, x_0 =u^1-\delta_3 \text{ or } x_0=u^0+\delta_4\} &= c_1+e_1.\end{aligned}$$* We omit the proof. We need to choose each $\delta_i$ small enough satisfying $\delta_1, \delta_2 \in I^+_{0}(u^0,u^1)$ and $\delta_3, \delta_4 \in I^-_{0}(u^0,u^1)$. It is immediately shown that: **Lemma 29**. *Let $x \in X^0$ (resp. $x \in X^1)$ satisfy $I(x)=c_0+e_0$ (resp. $I(x)=c_1+e_1$).
*Then, for any $\epsilon >0$, there exists $n_0 \in \mathbb{Z}_{\ge 0}$ such that $\sum_{i=-n}^{n-1} a_i(x) \in (c_0+e_0 -\epsilon, c_0+e_0 + \epsilon)$ $(resp. \sum_{i=-n}^{n-1} a_i(x) \in (c_1+e_1 -\epsilon, c_1+e_1 + \epsilon))$ for all $n \ge n_0$.* # The proofs of our main theorem and some remarks {#sec:pf} ## Variational settings {#subsec:setting} Let $(u^0,u^1)$ be a neighboring pair of $p_0( \mathcal{M}^\mathrm{per}_{0})$ and set: $$\begin{aligned} \label{def:kp} \begin{split} K&=\left\{ k=(k_i)_{i \in \mathbb{Z}} \subset \mathbb{Z}\mid k_0=0, k_i < k_{i+1} \right\} \ and\\ \mathcal{C}(n;a,b)&=\min \left\{ \sum_{i=0}^{n-1} h(x_i,x_{i+1}) \mid {x \in X(n),x_0=a,x_n=b} \right\}. \end{split}\end{aligned}$$ For $k \in K$, set $I_i = [k_i, k_{i+1}-1] \cap \mathbb{Z}$ and $|I_i|=|k_i-k_{i+1}|$ for each $i \in \mathbb{Z}$. Now we define the renormalized function $J$ by: $$J(x)=J_k(x)=\sum_{j \in \mathbb{Z}} A_j(x),$$ where $A_j(x)=h(x_j,x_{j+1}) -c(j)$ and: $$\begin{aligned} c(j) = \begin{dcases} \frac{\mathcal{C}(|I_{2i+1}|,u^i,u^{i+1})}{|I_{2i+1}|}&\ (j \in I_{2i+1} \ \text{for some} \ i \in \mathbb{Z})\\ \frac{\mathcal{C}(|I_{2i}|,u^i,u^i)}{|I_{2i}|} &\ (\text{otherwise}) \end{dcases} .\end{aligned}$$ **Remark 30**. *Theorem 5.1 in [@Bangert1988] shows that $x \in \mathcal{M}_{\alpha}^{\mathrm{per}}$ has minimal period $(q,p)$ with $q$ and $p$ relatively prime.
This implies that: $$c =\frac{\mathcal{C}(|I_{2i}|,u^0,u^0)}{|I_{2i}|}= \frac{\mathcal{C}(|I_{2i}|,u^1,u^1)}{|I_{2i}|}.$$* Next, we set: $$\begin{aligned} P&=\left\{ \rho=(\rho_i)_{i \in \mathbb{Z}} \subset \mathbb{R}_{>0} \mid 0 <\rho_i <\frac{u^1-u^0}{2} \ \text{for all} \ i \in \mathbb{Z}, \ \sum_{i \in \mathbb{Z}} \rho_i <\infty \right\}.\end{aligned}$$ For $k \in K$ and $\rho \in P$, the set $X_{k,\rho}$ is given by: $$X_{k,\rho}= \left( \bigcap_{i \equiv 0,1} Y^0(k_i,\rho_i) \right) \cap \left( \bigcap_{i \equiv -1,2} Y^1(k_i,\rho_i) \right) ,$$ where $$Y^j(l,p) = \{x \in X \mid |x_l - u^j| \le p \} \ (j=0,1)$$ and $a \equiv b$ means $a \equiv b \ (\mathrm{mod} \ 4)$. (See $\eqref{eq:x}$ for the definition of $X$.) It is easily seen that an element of $X_{k,\rho}$ has infinitely many transitions. ![An element of $X_{k,\rho}$](infinite_transition.pdf){width="8.5cm"} Notice that since compactness and sequential compactness are equivalent in the presence of the second countability axiom, $X$ is a sequentially compact set by Tychonoff's theorem. Clearly, $X_{k,\rho}$ is a closed subset of $X$, so the set $X_{k,\rho}$ is also sequentially compact. **Lemma 31**. *The set $X$ is sequentially compact.* *Proof.* By Tychonoff's theorem, $X$ is a compact set. Since a compact metric space is sequentially compact, it suffices to check that $X$ is metrizable. Let $d \colon X \times X \to \mathbb{R}$ be given by $$\begin{aligned} d(x,y) &= \sum_{i \in \mathbb{Z}} \frac{|x_i - y_i|}{2^{|i|}}.\end{aligned}$$ Clearly, $d$ is a metric. We show that convergence with respect to $d$ is equivalent to convergence in $\eqref{t_metric}$.
Since for all $i \in \mathbb{Z}$ $$\frac{|x_i - y_i|}{2^{|i|}} \le d(x,y),$$ it is sufficient to show that for each $y \in X$ and each sequence converging to $y$ in the sense of $\eqref{t_metric}$, the distance to $y$ with respect to $d$ goes to $0$. Let $(x^n)$ be a sequence converging to $y$. There is a constant $M >0$ such that for all $j \in \mathbb{Z}$ $$\begin{aligned} d(x^{n},y) \le c(j,n)+ \frac{M}{2^{|j|}},\end{aligned}$$ where $c(j,n) = \sum_{|i| \le |j|} {|x_i^{n} - y_i|}/{2^{|i|}}$. Notice that for each $j \in \mathbb{Z}$, $c(j,n) \to 0$ as $n \to \infty$. Thus, for any $\epsilon > 0$, we can take $i_0$ and $n_0$ such that ${M}/{2^{|i_0|}}< {\epsilon}/{2}$ and $c(i_0,n_0) < {\epsilon}/{2}$, thus completing the proof. ◻ As a basic property of $J$, we first show that, unlike $I(x)$, $J(x)$ can be finite for an infinite transition orbit $x$. **Lemma 32**. *If $\rho \in P$, then for all $k \in K$ there exists $y \in X_{k,\rho}$ such that $J(y) =0$.* *Proof.* By the definition of $J$, we can choose a configuration $y$ given by $y_{k_i} = u^0$ if $i \equiv 0,1$,  $y_{k_i} = u^1$ if $i \equiv -1,2$, and $\sum_{i=k_j}^{k_{j+1}-1}A_i(y)=0$ for all $j \in \mathbb{Z}$. ◻ The above lemma implies that $J$ overcomes the problem referred to in Proposition [Proposition 11](#prop:finite){reference-type="ref" reference="prop:finite"}. Next, we show that $J$ is bounded below. **Lemma 33**. *If $\rho \in P$, then there is a constant $M \in \mathbb{R}$ such that $J(x) \ge M (> -\infty)$ for all $x \in X_{k,\rho}$.* *Proof.* For each $x \in X_{k,\rho}$ with $J(x) \le 0$, we define $y \in X_{k,\rho}$ by $y_j=x_j$ if $j \in I_{2i+1} \backslash \{k_{2i+1}\}$, $y_j=u^0$ if $j \in I_{4i} \cup \{k_{4i+1} \}$, and $y_j=u^1$ if $j \in I_{2(2i+1)} \cup \{k_{2(2i+1)+1} \}$ for some $i \in \mathbb{Z}$. From the definition, $0 \le J(y) < \infty$.
Lipschitz continuity of $h$ shows: $$-J(x) \le J(y) -J(x) \le C \sum_{i \in \mathbb{Z}} \rho_i < \infty$$ and we get $J(x) \ge -C \sum_{i \in \mathbb{Z}} \rho_i$ for all $x \in X_{k,\rho}$, thus completing the proof. ◻ To ensure that $J$ has a minimizer in $X_{k,\rho}$, we present the following lemma. **Lemma 34**. *The function $J$ is well-defined with values in $\mathbb{R}\cup \{+\infty\}$, i.e., $$\alpha:=\liminf_{n \to \infty} \sum_{|i| \le n} A_i(x) = \limsup_{n \to \infty} \sum_{|i| \le n} A_i(x)=:\beta.$$* *Proof.* For the proof, we use a similar argument to Yu's proof of Proposition 2.9 and Lemma 6.1 in [@Yu2022]. By contradiction, we assume $\alpha<\beta$. First, we consider the case where $\beta = +\infty$. Fix $\gamma \in \mathbb{R}_{<0}$ arbitrarily. Since $\alpha< +\infty$, we can take a constant $\tilde{\alpha}$ with $\tilde{\alpha} > \alpha+1-2\gamma$. Then there are constants $n_0$ and $n_1$ such that $n_0 < n_1$ and: $$\begin{aligned} \sum_{|i| \le n_0} A_i(x) \ge \tilde{\alpha} \ \text{and} \ \sum_{|i| \le n_1} A_i(x) \le \alpha+1.\end{aligned}$$ Then, $$2\gamma > \alpha+1-\tilde{\alpha}\ge \sum_{|i| \le n_1} A_i(x) - \sum_{|i| \le n_0} A_i(x) =\sum_{ i= -n_1}^{-n_0-1} A_i(x) + \sum_{i = n_0+1}^{n_1} A_i(x).$$ Hence at least one of the two sums on the right-hand side is smaller than $\gamma$: $$\sum_{ i= -n_1}^{-n_0-1} A_i(x) < \gamma \ \text{or} \ \sum_{ i= n_0+1}^{n_1} A_i(x) <\gamma.$$ For $\gamma$ small enough, this contradicts Lemma [Lemma 33](#lemm:lower){reference-type="ref" reference="lemm:lower"}. Next, we assume $\beta<+\infty$.
Since $\alpha< \beta$, there are two sequences of positive integers $\{m_j \to \infty\}_{j \in \mathbb{N}}$ and $\{l_j \to \infty\}_{j \in \mathbb{N}}$ satisfying $m_j<m_{j+1}$, $l_j<l_{j+1}$ and $m_j+1 < l_j < m_{j+1} -1$ for all $j \in \mathbb{Z}_{>0}$, and: $$\beta= \lim_{j \to \infty} \sum_{|i| \le m_j} A_i(x) > \lim_{j \to \infty} \sum_{|i| \le l_j} A_i(x) =\alpha.$$ Then we can find $j \gg 0$ such that $$\begin{aligned} \label{a-b} \sum_{|i| \le l_j} A_i(x) - \sum_{|i| \le m_j} A_i(x) = \sum_{i =-l_j}^{-m_j-1} A_i(x) + \sum_{i =m_j+1}^{l_j} A_i(x) < \frac{\alpha -\beta}{2}.\end{aligned}$$ Since $l_j$ and $m_j$ are finite for fixed $j$, the above calculation does not depend on the order of the sums. For sufficiently large $j$, a similar argument to the proof of Lemma [Lemma 33](#lemm:lower){reference-type="ref" reference="lemm:lower"} shows: $$\sum_{i = -l_j}^{-m_j-1} A_i(x) \ge -C\sum_{i = -l_j}^{-m_j-1} \rho_i > \frac{\alpha-\beta}{4}$$ and $$\sum_{i = m_j+1}^{l_j} A_i(x) \ge -C\sum_{i = m_j+1}^{l_j} \rho_i > \frac{\alpha-\beta}{4}$$ because $\rho \in P$ implies $$\sum_{|i|>n} \rho_i \to 0 \ (n \to \infty)$$ and both $m_j$ and $l_j$ go to infinity as $j \to \infty$. Therefore: $$\begin{aligned} \sum_{i =-l_j}^{-m_j-1} A_i(x) + \sum_{i =m_j+1}^{l_j} A_i(x) > \frac{\alpha-\beta}{2}, \end{aligned}$$ which contradicts $\eqref{a-b}$. ◻ **Proposition 35**. *For all $k \in K$ and $\rho \in P$, there exists a minimizer of $J$ in $X_{k,\rho}$.* *Proof.* By Lemmas [Lemma 32](#lemm:upper){reference-type="ref" reference="lemm:upper"} and [Lemma 33](#lemm:lower){reference-type="ref" reference="lemm:lower"}, we can take a minimizing sequence $(x^n)_{n \in \mathbb{N}}$ of $J$ in $X_{k,\rho}$. Since $X_{k,\rho}$ is sequentially compact, there exists $\tilde{x} \in X_{k,\rho}$ such that $x^{n_k}$ converges to $\tilde{x}$ for some subsequence $(n_k)_{k \in \mathbb{N}}$.
To prove our claim, it is enough to show that for any $\epsilon>0$, there exist $j_0$ and $n_0 \in \mathbb{N}$ such that: $$\begin{aligned} \label{eq:lower_conti} \sum_{|i|>j_0} A_i(x^n) >-\epsilon \ (\text{for all} \ n \ge n_0) \text{\ and\ } \sum_{|i|>j_0} A_i(\tilde{x}) <\epsilon,\end{aligned}$$ because if the above inequalities hold, we obtain: $$\begin{aligned} J(\tilde{x}) &=\sum_{|i| \le j_0} A_i(\tilde{x})+ \sum_{|i| > j_0} A_i(\tilde{x})\\ &\le \lim_{n \to \infty} \sum_{|i| \le j_0} A_i(x^n) + \epsilon = \lim_{n \to \infty} (\sum_{i \in \mathbb{Z}} A_i(x^n) - \sum_{|i|>j_0} A_i(x^n))+ \epsilon \\ &\le \lim_{n \to \infty} \sum_{i \in \mathbb{Z}} A_i(x^n) + 2\epsilon.\end{aligned}$$ Since $\epsilon$ is arbitrary, we have $J(\tilde{x}) \le \lim_{n \to \infty} \sum_{i \in \mathbb{Z}} A_i(x^n)$ and $\tilde{x}$ attains the infimum of $J$. We now show the first inequality of [\[eq:lower_conti\]](#eq:lower_conti){reference-type="eqref" reference="eq:lower_conti"}. Lemma [Lemma 33](#lemm:lower){reference-type="ref" reference="lemm:lower"} implies that for any $n \in \mathbb{N}$ and $j \in \mathbb{N}$: $$\begin{aligned} \label{eq:bdd_low} \sum_{|i|>j} A_i(x^n) \ge -C \sum_{|i|>j} \rho_i.\end{aligned}$$ Since $\sum_{i \in \mathbb{Z}} \rho_i$ is finite, we have $\sum_{|i|>j} \rho_i < \epsilon/C$ for sufficiently large $j$. Hence, the first inequality holds. To check the second inequality of [\[eq:lower_conti\]](#eq:lower_conti){reference-type="eqref" reference="eq:lower_conti"}, it suffices to show that the value of $J(\tilde{x})$ is finite.
If $J(\tilde{x})$ is infinite, then for any $M>0$, there is a $j_0 \in \mathbb{N}$ such that: $$\begin{aligned} M \le \sum_{|i| \le j_0} A_i(\tilde{x}) = \sum_{|i| \le j_0} A_i(\lim_{n \to \infty} x^n) = \lim_{n \to \infty} \sum_{|i| \le j_0} A_i(x^n),\end{aligned}$$ since a finite sum $\sum_{|i| \le j_0} A_i(x)$ is continuous in $x$. On the other hand, for any $\delta>0$, there exists $n_0$ such that if $n \ge n_0$, $\displaystyle{J(x^n)< \inf_{x \in X_{k,\rho}} J(x) + \delta}$. Moreover, for any $\epsilon >0$, there exists $j_0=j_0(\epsilon)$ such that for any $j \ge j_0$, $| \sum_{|i|>j} A_i(x^n)| < \epsilon$. Then, for any $n \ge n_0$ and any $\epsilon >0$, we get: $$\begin{aligned} \sum_{|i| \le j_0} A_i(x^n) &= J(x^n) - \sum_{|i| > j_0} A_i(x^n)\\ &< \inf_{x \in X_{k,\rho}} J(x) + \delta + \epsilon,\end{aligned}$$ so $\sum_{|i| \le j_0} A_i(x^n)$ is bounded above uniformly in $n$, which contradicts the previous estimate since $M$ is arbitrary. ◻ ## Properties of the minimizers of $J$ in $X_{k,\rho}$ Let $x^\ast=(x^\ast_i)_{i \in \mathbb{Z}}$ be a minimizer (depending on $k \in K$ and $\rho \in P$) in Proposition [Proposition 35](#prop:min){reference-type="ref" reference="prop:min"}. Let $x(n;a,b)=\{x_i(n;a,b)\}_{i=0}^{n}$ be a minimizer of $\sum_{i=0}^{n-1} {h({x_i,x_{i+1})}}$ on $X(n)$ (defined by $\eqref{eq:x}$) that satisfies $x_0(n;a,b)=a$ and $x_n(n;a,b)=b$, i.e., it holds that $\sum_{i=0}^{n-1} {h({x_i(n;a,b),x_{i+1}(n;a,b))}} = \mathcal{C}(n;a,b)$ (see [\[def:kp\]](#def:kp){reference-type="eqref" reference="def:kp"}). **Lemma 36**. *For any $\epsilon \in (0, \min\{c_{\ast}/(2C), (u^1-u^0)/2\})$ ($c_\ast$ is given by $\eqref{cstar}$), there exist two positive real numbers $r_1$ and $r_2$ such that for any $n \ge 2$, $a \in [u^0,u^0+r_1]$ (resp. $a \in [u^1-r_1,u^1]$), and $b \in [u^0,u^0+r_2]$ (resp. $b \in [u^1-r_2,u^1]$), $$0< x_i(n;a,b)-u^0 <\epsilon \ (resp.
0<u^1 - x_i(n;a,b) <\epsilon) \ \text{for all} \ i \in \{0, \dots,n\}.$$* *Proof.* For any $\epsilon \in (0, \min\{c_{\ast}/(2C), (u^1-u^0)/2\})$, we can take $r_1$ and $r_2 \in (0, \epsilon)$ satisfying $$r_1 +r_2 < \min \left\{\frac{\phi(\epsilon)}{2C},\frac{c_{\ast}}{2C} - \epsilon \right\}.$$ See $\eqref{phi}$ for the definition of $\phi$. We show that the claim holds for the $r_1$ and $r_2$ chosen above. (We only prove the case of $a \in [u^0,u^0+r_1]$ and $b \in [u^0,u^0+r_2]$. The proof of the other case is similar.) Define a finite sequence $y=( y_i )_{i=0}^{n}$ by $y_0 = x_0$, $y_n = x_n$, and $y_i=u^0$ otherwise. For any $n \in \mathbb{N}$, $$\begin{aligned} \label{val_xnab} \begin{split} \sum_{i=0}^{n-1} a_i (x) &=\mathcal{C}(n;a,b) -nc= \sum_{i=0}^{n-1}({h(x_i,x_{i+1})} -h(u^0,u^0))\\ &\le \sum_{i=0}^{n-1}({h(y_i,y_{i+1})}-h(u^0,u^0)) \le C(r_1+r_2) . \end{split}\end{aligned}$$ If there exists $i \in \{1,\dots,n-1\}$ satisfying $x_i-u^0 \ge \epsilon$ and $u^1 - x_i \ge \epsilon$, then combining Lemma [Lemma 14](#lemm:yu_lower){reference-type="ref" reference="lemm:yu_lower"} with $\eqref{val_xnab}$ yields: $$C(r_1+r_2) \ge \mathcal{C}(n;a,b) -nc \ge \phi(\epsilon) - C|a-b| > \phi(\epsilon) - C(r_1+r_2).$$ Thus we get $\phi(\epsilon) < 2C(r_1+r_2)$, which is a contradiction. Next, we assume that $u^1 - x_i < \epsilon$ for all $i \in \{1,\dots,n-1\}$.
Define configurations $z^+$ and $z^-$ by: $$z^+= \begin{cases} u^0 \ (i \le 0)\\ x_i \ (i=1)\\ u^1 \ (i \ge 2) \end{cases} z^-= \begin{cases} u^1 \ (i \le 2)\\ x_i \ (3 \le i \le n-1)\\ u^0 \ (i \ge n) \end{cases}.$$ Applying $c=h(u^0,u^0)=h(u^1,u^1)$, Lipschitz continuity, and [\[val_xnab\]](#val_xnab){reference-type="eqref" reference="val_xnab"}, we see that: $$\begin{aligned} I(z^+) + I(z^-) &= \sum_{i=0}^{1} a_i(z^+) + \sum_{i=2}^{n-1} a_i(z^-)\\ &< \sum_{i=0}^{n-1} a_i (x) + C(r_1+r_2 + 2\epsilon)\\ &\le C(r_1+r_2) + C(r_1+r_2 + 2\epsilon).\end{aligned}$$ On the other hand, $z^+ \in X^0$ and $z^- \in X^1$ imply $I(z^+) + I(z^-) \ge c_{\ast}$ and we get $$c_{\ast} < 2C(r_1+r_2 + \epsilon),$$ which is a contradiction. ◻ **Lemma 37**. *Assume that both $a - u^0$ and $b-u^0$ (resp. both $u^1-a$ and $u^1-b$) are small enough. Then, for any $\delta>0$ and $m \in \mathbb{N}$, there exists $N \in \mathbb{N}$ such that for any $n \ge N$, there exist $i(1), \dots, i(m) \in \{0, \cdots, n\}$ satisfying: $$\begin{aligned} \label{delta} x_{i(j)}(n;a,b) -u^0 < \delta \ (resp. \ u^1 -x_{i(j)}(n;a,b) < \delta) \ \text{for all $j \in \{1, \cdots,m\}$}\end{aligned}$$* *Proof.* If we replace $\eqref{delta}$ with the following: $$x_{i(j)}(n;a,b) -u^0 < \delta \ \text{or} \ u^1 -x_{i(j)}(n;a,b) < \delta \ \text{for all $j \in \{1, \cdots,m\}$},$$ our statement for $m=1$ is immediately shown from Proposition [Proposition 11](#prop:finite){reference-type="ref" reference="prop:finite"} and $\eqref{val_xnab}$. For $m \ge 2$, since both $a - u^0$ and $b-u^0$ are small enough, Lemma [Lemma 36](#lemm:distance){reference-type="ref" reference="lemm:distance"} is valid and it implies [\[delta\]](#delta){reference-type="eqref" reference="delta"}.
◻ Since the statements of the above two lemmas are somewhat involved, we roughly summarize Lemmas [Lemma 36](#lemm:distance){reference-type="ref" reference="lemm:distance"} and [Lemma 37](#lemm:distance_2){reference-type="ref" reference="lemm:distance_2"}. The former states that, for any $\epsilon$, if the two endpoints are close to $u^0$ or $u^1$, then a minimal configuration between them stays in a band of width $\epsilon$, independently of its length. On the other hand, Lemma [Lemma 37](#lemm:distance_2){reference-type="ref" reference="lemm:distance_2"} shows that no matter how narrow the band is, if we take the interval between the endpoints long enough (i.e., if we make the band longer), a minimal configuration gets arbitrarily close to either $u^0$ or $u^1$. Now we are ready to prove our main theorem when the rotation number $\alpha$ is zero: *Proof of Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} for $\alpha=0$.* To see that a minimizer $x^\ast \in X_{k,\rho}$ is a stationary configuration, it suffices to show that $x^\ast$ is not on the boundary of $X_{k,\rho}$. For any positive sequence $\epsilon = ( \epsilon_i)_{i \in \mathbb{Z}}$ with $\epsilon_i < c_{\ast}/4C$, we choose $\rho \in P$ and $k \in K$ in the following steps: - Since we assume that $(u^0,u^1) \neq I^+_{0}(u^0,u^1)$ and $(u^0,u^1) \neq I^-_{0}(u^0,u^1)$, both $(u^0,u^1) \backslash I^+_{0}(u^0,u^1)$ and $(u^0,u^1) \backslash I^-_{0}(u^0,u^1)$ are nonempty and we can take $\rho \in P$ so that: - $u^{i+1}+\rho_i \in (u^0,u^1) \backslash I^+_{0}(u^0,u^1)$ for all $i \equiv 1,2$ and $u^i-\rho_i \in (u^0,u^1) \backslash I^-_{0}(u^0,u^1)$ for all $i \equiv -1,0$, and - For any $i \in \mathbb{Z}$, $\displaystyle{ \rho_{2i} + \rho_{2i+1} < \min \left\{\frac{\phi(\epsilon_i)}{2C},\frac{c_{\ast}}{2C} - \epsilon_i \right\} }$, where $u^i=u^0$ when $i$ is even, and $u^i=u^1$ when $i$ is odd.
It easily follows from Lemma [Lemma 36](#lemm:distance){reference-type="ref" reference="lemm:distance"} that, for any $k \in K$ and $\rho \in P$ satisfying $(p_2)$, a minimizer $x^\ast=(x^\ast_j)_{j \in \mathbb{Z}} \in X_{k,\rho}$ lies in an $\epsilon_i$-neighborhood of $u^i$ for each $j \in [k_{2i}, k_{2i+1}]$, i.e., $|x^\ast_j - u^i|< \epsilon_i$ if $j \in [k_{2i}, k_{2i+1}]$. Notice that $\rho_i$ can be chosen arbitrarily small, since a monotone heteroclinic configuration (one transition orbit), say $(x_i)_{i \in \mathbb{Z}}$, satisfies $|x_i - u^j| \to 0$ as $|i| \to \infty$ for $j=0$ or $1$, and it follows from $(h_1)$ that if $x=(x_i)_{i \in \mathbb{Z}}$ is a stationary configuration whose rotation number is $0$, then so is $y=(y_i)_{i \in \mathbb{Z}}$ with $y_i = x_{i+l}$ for any $l \in \mathbb{Z}$. - Next, we take $k \in K$ depending on the $\rho \in P$ chosen in the previous step. By the definition of $K$, to take $k$ is to determine the values of $|k_{2i-1} - k_{2i} |$ and $|k_{2i} - k_{2i+1} |$ for all $i \in \mathbb{Z}$. For each $i \in \mathbb{Z}$, we take the value of $|k_{2i-1} - k_{2i} |$ satisfying: $$\mathcal{M}^i(u^0,u^1) \cap Y^i(k_{2i-1}, \rho_{2i-1}) \cap Y^{i+1}(k_{2i}, \rho_{2i}) \neq \emptyset,$$ i.e., $$\mathcal{M}^i(u^0,u^1) \cap Y^i(0, \rho_{2i-1}) \cap Y^{i+1}(|k_{2i-1}-k_{2i}|, \rho_{2i}) \neq \emptyset,$$ where $\mathcal{M}^i=\mathcal{M}^0$ and $Y^i=Y^0$ when $i$ is even, and $\mathcal{M}^i=\mathcal{M}^1$ and $Y^i=Y^1$ when $i$ is odd. - Before describing how we choose the value of $|k_{2i} - k_{2i+1} |$ for each $i \in \mathbb{Z}$, we define several positive bi-infinite sequences. Set $\tilde{e}=(\tilde{e}_i)_{i \in \mathbb{Z}}$ by: $$\begin{aligned} \tilde{e}_i= \begin{cases} e_0(\rho_{2i-1},\rho_{2i}), & (i : \text{even})\\ e_1(\rho_{2i-1},\rho_{2i}) & (i : \text{odd}).
\end{cases}\end{aligned}$$ Furthermore, let us choose two positive sequences $\delta=(\delta_i)_{i \in \mathbb{Z}}$ and $\tilde{\epsilon}=(\tilde{\epsilon}_i)_{i \in \mathbb{Z}}$ so that, for each $i \in \mathbb{Z}$, $$2 \tilde\epsilon_i +C( \delta_{2i -1} + \delta_{2i})< \frac{\tilde{e}_i}{2}.$$ - For each $\tilde\epsilon_i$ chosen in the previous step, Lemmas [Lemma 27](#lemm:finite_het){reference-type="ref" reference="lemm:finite_het"} and [Lemma 29](#lemm:finite_fake_het){reference-type="ref" reference="lemm:finite_fake_het"} show that there exist two integers $N_{2i-1}, N_{2i} \in \mathbb{N}$ and $x^{i} \in \mathcal{M}^i(u^0,u^1)$ satisfying the following: - For any $i \in \mathbb{Z}$, if $n_{2i-1} \ge N_{2i-1}$ and $n_{2i} \ge N_{2i}$, then both the following 1 and 2 hold: 1. $\displaystyle{\sum_{j=k_{2i-1}-n_{2i-1}-1}^{k_{2i} +n_{2i}+1} A_j(x^i) \le c_i +\tilde{ \epsilon}_i}$ 2. $\displaystyle{\sum_{j=k_{2i-1}-n_{2i-1}-1}^{k_{2i} +n_{2i}+1} A_j(y) \ge c_i + \tilde{e}_i - \tilde{\epsilon}_i}$ for all: $$\begin{aligned} y \in \{ x=(x_i)_{i \in \mathbb{Z}} \in X^i \cap Y^{i-1}(k_{2i-1}, \rho_{2i-1}) &\cap Y^i(k_{2i}, \rho_{2i}) \\ & \mid |x_{k_{2i-1}} - u^{i-1}|= \rho_{2i-1} \ \text{or} \ |x_{k_{2i}} - u^{i}|= \rho_{2i} \},\end{aligned}$$ where $X^i=X^0$ when $i$ is even, and $X^i=X^1$ when $i$ is odd. For the above $(N_i)_{i \in \mathbb{Z}}$ constructed in Step $4$, we take $|k_{2i}-k_{2i+1}|$ so that - $|k_{2i}-k_{2i+1}| \gg N_{2i} + N_{2i+1}$. We will give more precise conditions for $k$ and explain the role of the sequence $\delta$ later. We shall show that our statement holds for the $\rho$ and $k$ chosen along the above steps. Combining Lemmas [Lemma 17](#lemm:estimate_periodic1){reference-type="ref" reference="lemm:estimate_periodic1"} and [Lemma 18](#lemm:estimate_periodic2){reference-type="ref" reference="lemm:estimate_periodic2"} implies that $x^\ast_i \notin \{u^0,u^1\}$ for all $i \in \mathbb{Z}$.
Suppose, for contradiction, that $x^\ast_{k_1} = u^0 + \rho_1$. Let $y=(y_i)_{i \in \mathbb{Z}}$ be: $$\begin{aligned} y_i= \begin{cases} x_i^\ast & i < k_1-n_1 \ \text{or} \ i >k_2+n_2\\ x^1_i & i \in [k_1-n_1,k_2+n_2 ] \end{cases},\end{aligned}$$ where $x^1 = (x_i^1)_{i\in\mathbb{Z}}$ is given in Step $4$. It is sufficient to show that: $$\sum_{i \in [k_1-n_1-1,k_2+n_2+1 ]} A_i(y) < \sum_{i \in [k_1-n_1-1,k_2+n_2+1 ]} A_i(x^\ast).$$ Applying Lemmas [Lemma 36](#lemm:distance){reference-type="ref" reference="lemm:distance"} and [Lemma 37](#lemm:distance_2){reference-type="ref" reference="lemm:distance_2"} for $m=2$ shows that, for $|k_{2i}-k_{2i+1}|$ sufficiently large, there exist $n_{2i-1}$ and $n_{2i}$ such that $$\max\{ |x^i_{k_{2i-1} -n_{2i-1}} - u^{i-1}|, |y_{k_{2i-1} -n_{2i-1}} - u^{i-1}| \} \le \delta_{2i-1} \ \text{and} \ \max\{ |x^i_{k_{2i} +n_{2i}} - u^{i}|, |y_{k_{2i} +n_{2i}} - u^{i}| \} \le \delta_{2i}.$$ Applying Step $4$ and Lipschitz continuity of $h$, we see that: $$\begin{aligned} & \sum_{i \in [k_1-n_1-1,k_2+n_2+1 ]}( A_i(x^\ast) - A_{i}(y))\\ &\ge \tilde{e}_1 - 2\tilde{\epsilon}_1 + (h(x^\ast_{k_1-n_1-1},x^\ast_{k_1-n_1}) - h(x^\ast_{k_1-n_1-1},y_{k_1-n_1}) )+ (h(x^\ast_{k_2+n_2},x^\ast_{k_2+n_2+1}) - h(y_{k_2+n_2},x^\ast_{k_2+n_2+1})) \\ &\ge \tilde{e}_1 - 2\tilde{\epsilon}_1 - C(\delta_{1} + \delta_{2}) > \frac{\tilde{e}_1}{2}> 0.\end{aligned}$$ This completes the proof. ◻ **Theorem 38**.
*For any positive sequence $\epsilon=(\epsilon_{i})_{i \in \mathbb{Z}}$, there are sequences $\rho=(\rho_i)_{i \in \mathbb{Z}} \in P$ and $\tilde{I}=(\tilde{I}_i)_{i \in \mathbb{Z}}$ such that for any $k =(k_i)_{i \in \mathbb{Z}} \in K$ with $|k_i-k_{i+1}| \ge \tilde{I}_i$ for all $i \in \mathbb{Z}$, it holds for each $i \in \mathbb{Z}$ that $|x_j^\ast-u^i| \le \epsilon_{i}$ for all $j \in [k_{2i},k_{2i+1}] \cap \mathbb{Z}$, where $x^\ast_{k, \rho}=(x^\ast_i)_{i \in \mathbb{Z}}$ is a minimizer of $J$ in $X_{k,\rho}$.* *Proof.* This is immediate from the choice of $\rho$ and $k$ in the proof of Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} for $\alpha=0$. ◻ Moreover, we immediately obtain the following: *Proof of Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} for $\alpha \in \mathbb{Q}\backslash \{0\}$.* Let $x^+=(x^{+}_i)_{i \in \mathbb{Z}}$ and $x^-=(x^{-}_i)_{i \in \mathbb{Z}} \in \mathcal{M}_{\alpha}^{\mathrm{per}}$ satisfy that $(x^{-}_0, x^{+}_0)$ is a neighboring pair of $\mathcal{M}_{\alpha}^{\mathrm{per}}$, and let $(x_0,x_1, \cdots,x_q)$ satisfy $h(x_{0}, \cdots, x_{q}) =H(x_{0},x_{q})$. It is clear that if $|x_0-x_0^{\pm}| \to 0$ and $|x_q-x_q^{\pm}| \to 0$, then $|x_i-x_i^{\pm} | \to 0$ for all $i=1, \cdots, q-1$. Thus we get a desired stationary configuration from Proposition [Proposition 22](#prop:alpha_configuration){reference-type="ref" reference="prop:alpha_configuration"} and the proof of Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} for $\alpha=0$.
◻ Though the previous discussion treats bi-infinite transition orbits, we can construct one-sided infinite transition orbits by replacing $c(j)$ of $J$ with: $$\begin{aligned} \tilde{c}(j) = \begin{dcases} \frac{\mathcal{C}(|I_{2i+1}|,u^i,u^{i+1})}{|I_{2i+1}|}&\ (j \in I_{2i+1} \ \text{for some} \ i \ge 0),\\ \frac{\mathcal{C}(|I_{2i}|,u^i,u^i)}{|I_{2i}|} & \ (\text{otherwise}), \end{dcases}\end{aligned}$$ and $X_{k,\rho}$ with: $$\tilde{X}_{k,\rho}(a,b) = \left( \bigcap_{i < 0} Y^a(k_{bi},\rho_{0}) \right) \cap \left( \bigcap_{i \equiv 0,1, \ i \ge 0} Y^a(k_{bi},\rho_{bi}) \right) \cap \left( \bigcap_{i \equiv -1,2, \ i \ge 0} Y^{|1-a|}(k_{bi},\rho_{bi}) \right) ,$$ where $a \in \{0,1\}$, $b \in \{-1,1\}$, $k \in K$ and $\rho \in P$. Let $\tilde{J}$ denote the function obtained from $J$ by this replacement, i.e., $$\tilde{J}(x) = \sum_{j \in \mathbb{Z}} (h(x_j,x_{j+1})-\tilde{c}(j)).$$ Notice that Proposition [Proposition 11](#prop:finite){reference-type="ref" reference="prop:finite"} implies that if $x=(x_i)_{i \in \mathbb{Z}}$ satisfies that $\tilde{J}(x)$ is finite, then $|x_i - x^{a}_{i}|\to 0$  $(bi \to \infty)$. ![An element of $\tilde{X}_{k,\rho}(0,1)$](infinite_transition.pdf){width="8.5cm"} Thus we get: **Theorem 39**. *Assume the same conditions as in Theorem [Theorem 10](#theo:yu_main){reference-type="ref" reference="theo:yu_main"}. Then, for any $a \in \{0,1\}$, $b \in \{-1,1\}$, and positive sequence ${\epsilon}=(\epsilon_i)_{i \in \mathbb{Z}}$ with $\epsilon_i$ small enough, there is an $m=\{m_i\}_{i \in \mathbb{Z}}$ such that for every sequence of integers $k=(k_i)_{i \in \mathbb{Z}}$ with $k_{i+1}-k_{i} \ge m_i$, there is a configuration $x$ satisfying:* 1. *$x_i^0<x_i<x_{i}^1$ for all $i \in \mathbb{Z}$;* 2. *for any $j \in \mathbb{Z}$,  $|x_{i}-x^{2j+a}_{i}| \le \epsilon_i$ if $i \in [k_{4j}, k_{4j+1}]$ and $|x_{i}-x^{2j-1+a}_{i}| \le \epsilon_i$ if $i \in [k_{4j+2}, k_{4j+3}]$;* 3.
*$|x_i - x^{a}_{i}|\to 0$  $(bi \to \infty)$.* ## Billiard maps Next, we consider a billiard map that does not satisfy the twist condition. Let $f \colon \mathbb{R}\to \mathbb{R}$ be a positive smooth function satisfying $f(x)=f(x+1)$ for all $x \in \mathbb{R}$. Define a domain $D=D(f)$ by $$D = \{ (x,y) \in \mathbb{R}^2 \mid -f(x) \le y \le f(x)\}.$$ Let $s$ be an arc-length parameter, i.e., $s$ is given by $$s=\int_{0}^{x} \sqrt{1 + f'(\tau)^2} \, d \tau =:g(x)$$ and $x$ is represented by $x=g^{-1}(s)$. Moreover, we set $$\tilde{f}(s) = f(g^{-1}(s)).$$ We can see a variational structure of billiard maps in the above setting. Consider $$h(s,s') = d_{\mathrm{E}}(a_{+}(s),a_{-}(s')),$$ where $a_{\pm}(s):=(g^{-1}(s), \pm \tilde{f}(s))$ and $d_{\mathrm{E}}$ is the Euclidean metric on $\mathbb{R}^2$. We check that $h$ satisfies $(h_1)$-$(h_5)$. # Additional remarks {#sec:add} ## The number of infinite transition orbits {#sec:general} We first see that Theorems [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} and [Theorem 39](#theo:main_2){reference-type="ref" reference="theo:main_2"} show the existence of uncountably many infinite transition orbits. We can take $k \in K$ and $\rho \in P$ given in Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} so that for all $i \in \mathbb{N}$, $k_{i}-k_{i-1} < k_{i+1}-k_{i}$ and $k_{-i} - k_{-(i+1)} <k_{-i+1}-k_{-i}$. For $j=(j_i)_{i \in \mathbb{Z}} \in K$, set: $$\begin{aligned} X_j= \left( \bigcap_{i \equiv 0,1} Y^0(k_{j_i},\rho_i) \right) \cap \left( \bigcap_{i \equiv -1,2} Y^1(k_{j_i},\rho_i) \right)\end{aligned}$$ Let $x^\ast(j)$ be a minimizer of $J$ on $X_j$, i.e., $$J(x^\ast(j))=\inf_{x \in X_j} J(x).$$ Let $j^0=(j^0_i)_{i \in \mathbb{Z}}$ be the sequence given by $j^0_i=i$. The previous section deals with the case of $j=j^0$.
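The thinning construction behind $X_j$ — replacing the transition sites $(k_i)$ by the subsequence $(k_{j_i})$ — can be sketched numerically as follows. This is an illustrative Python snippet with a toy finite window of $k$; the helper names and the concrete values are our assumptions, not from the text:

```python
# Illustrative sketch: thinning a sequence k in K by an index sequence j in K.
# Membership in K requires k_0 = 0 and strict monotonicity; here we check it
# on a finite window of non-negative indices, since the true objects are
# bi-infinite.

def in_K(seq):
    """Check the K-conditions on a finite window (seq[0] corresponds to index 0)."""
    return seq[0] == 0 and all(a < b for a, b in zip(seq, seq[1:]))

def thin(k, j):
    """Return the thinned window (k_{j_0}, k_{j_1}, ...)."""
    return [k[idx] for idx in j]

# A toy k with strictly increasing gaps, as required above: k_i - k_{i-1} < k_{i+1} - k_i.
k = [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
# A toy j in K (j_0 = 0, strictly increasing) that skips some transition sites.
j = [0, 2, 3, 5, 8]

kj = thin(k, j)
print(kj)                           # the subsequence (k_{j_i}) used to define X_j
print(in_K(k), in_K(j), in_K(kj))   # thinning preserves the K-conditions
```

Since any strictly increasing $j$ with $j_0=0$ produces a strictly increasing thinned sequence starting at $0$, distinct choices of $j$ yield distinct constraint sets $X_j$, which is the mechanism exploited in the counting argument below.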
It is easily seen that if $l, m \in K$ with $l \neq m$, then $x^\ast(l)$ and $x^\ast(m)$ are different, and we immediately get the following theorem. **Theorem 40**. *Let $\# \chi_1$ and $\# \chi_2$ be the numbers of infinite transition orbits in Theorems [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} and [Theorem 39](#theo:main_2){reference-type="ref" reference="theo:main_2"}, respectively. Then $\# \chi_1 =\# \chi_2 = \# \mathbb{R}$.* *Proof.* We only discuss the case of Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"}. For any real number $r \in \mathbb{R}_{>0}$, we can choose a corresponding bi-infinite sequence ${(a_i)}_{i \in \mathbb{Z}} \subset \mathbb{Z}_{\ge 0}$. (For example, when $r=12.34$, $a_{-1}=1, a_0=2, a_1=3, a_2=4$ and $a_i =0$ otherwise.) The proof is straightforward by defining $j \in K$ through $j_0=0$ and $j_{i+1} := j_i + a_i + 1$ for $i \in \mathbb{Z}$, so that $a_i = j_{i+1} - j_i -1$. It is also clear that if $r_1 \neq r_2$, each corresponding stationary configuration is different. A similar proof is valid for Theorem [Theorem 39](#theo:main_2){reference-type="ref" reference="theo:main_2"}. ◻ To obtain a more general representation of the variational structure for transition orbits, set $A = \{A_0,A_1\}$ and: $$\begin{aligned} X_{k,\rho}(A) = \left( \bigcap_{i \in A_0} Y^0(k_{i},\rho_i) \right) \cap \left( \bigcap_{i \in A_1} Y^1(k_{i},\rho_i) \right),\end{aligned}$$ where $A_0$ and $A_1$ are disjoint subsets of $\mathbb{Z}$. For $A$, we set: $$\mathcal{S}:=\mathcal{S}(A)=\{i \in \mathbb{Z}\mid i \in A_j \text{\ and \ } i+1 \in A_{|j-1|} \, (j=0 \text{\, or \,}1) \},$$ and $$\tilde{P} := \tilde{P}(\mathcal{S}) = \left\{ \rho= (\rho_i)_{i \in \mathbb{Z}} \mid 0<\rho_i<(u^1-u^0)/2, \ \sum_{i \in \mathcal{S}} \rho_i <\infty \right\}.$$ Clearly, it always holds that $\sum_{i \in \mathcal{S}} \rho_i <\infty$ if $\# \mathcal{S}< \infty$.
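The transition set $\mathcal{S}(A)$ records the indices at which the constraint switches between the $u^0$-side and the $u^1$-side. On a finite window it can be computed directly; the following is an illustrative Python sketch, where the window and the two membership functions are our toy assumptions:

```python
# Illustrative sketch: computing the transition set S(A) on a finite window.
# S(A) = { i : i in A_j and i+1 in A_{1-j} for j = 0 or 1 }.

def transitions(in_A0, in_A1, window):
    """Indices i in `window` where membership switches between A_0 and A_1."""
    S = []
    for i in window:
        if (in_A0(i) and in_A1(i + 1)) or (in_A1(i) and in_A0(i + 1)):
            S.append(i)
    return S

window = range(-8, 8)

# One-sided example from the text: A_0 = Z_{<=0}, A_1 = Z_{>0}; the only
# switch is at i = 0, corresponding to a single heteroclinic transition.
S_one = transitions(lambda i: i <= 0, lambda i: i > 0, window)
print(S_one)

# Alternating example: A_0 = {i = 0,1 mod 4}, A_1 = {i = -1,2 mod 4};
# here the switches recur periodically, so S is bi-infinite.
S_alt = transitions(lambda i: i % 4 in (0, 1), lambda i: i % 4 in (2, 3), window)
print(S_alt)
```

In the first example $\#\mathcal{S}=1$, matching the single-transition case treated at the end of this section; in the second, the switches occur at every index $i \equiv 1, 3 \pmod 4$ inside the window.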
When $A_0 \cup A_1 = \mathbb{Z}$, for $k \in K$ and $\rho \in\tilde{P}$, let $\tilde{J} \colon \mathbb{R}^\mathbb{Z}\to \mathbb{R}$ be given by: $$\begin{aligned} \label{re:actionJ} \tilde{J}(x) :=\tilde{J}_{k,\rho,A}(x) = \sum_{i \in \mathbb{Z}} B_i(x),\end{aligned}$$ where $$\begin{aligned} \label{action} B_i(x)= \begin{cases} \{\sum_{j \in I_i} h(x_j,x_{j+1})\} - |I_i|c & { i , i+1\in A_0} \text{\ or \ } { i , i+1\in A_1} \\ \{ \sum_{j \in I_i} h(x_j,x_{j+1}) \} -c_i^{+}& {i \in A_1 \text{\ and \ } i+1 \in A_0}\\ \{ \sum_{j \in I_i} h(x_j,x_{j+1} ) \} - c_i^{-}& {i \in A_0 \text{\ and \ } i+1 \in A_1} \end{cases}.\end{aligned}$$ For example, Section [3](#sec:pf){reference-type="ref" reference="sec:pf"} dealt with the case of $A_0 = \{i \in \mathbb{Z}\mid i \equiv 0,1 \ (\text{mod} \ 4)\}$ and $A_1=\{i \in \mathbb{Z}\mid i \equiv -1,2 \ (\text{mod} \ 4)\}$, so $\mathcal{S}=\mathbb{Z}$. Let $x^\ast(k,\rho,A)$ be a minimizer on $X_{k,\rho}(A)$, i.e., $x^\ast(k,\rho,A)$ satisfies: $$\begin{aligned} \tilde{J}(x^\ast(k,\rho,A))= \inf_{x \in X_{k,\rho}(A) } \tilde{J}(x).\end{aligned}$$ A minimizer $x^\ast(k,\rho,A)$ exists by arguments similar to those in Proposition [Proposition 11](#prop:finite){reference-type="ref" reference="prop:finite"} and [Proposition 35](#prop:min){reference-type="ref" reference="prop:min"}. Arguing as in the proof of Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"}, we obtain the following. **Theorem 41**. *Assume $\eqref{hetero_gap}$ and that there is no $i \in \mathbb{Z}$ such that $i-1 \in A_{|j-1|}$, $i \in A_{j}$, and $i+1 \in A_{|j-1|}$ for $j=0$ or $1$. Then, for any $\epsilon>0$, there exist two sequences $k \in K$ and $\rho \in \tilde{P}$ with $\rho_i < \epsilon$ for all $i \in \mathbb{Z}$ such that $x^\ast(k,\rho,A)$ is a stationary configuration and it has $\# \mathcal{S}$ transitions.* *Proof.* First, we consider the case $\# \mathcal{S}=1$. 
This is the only case that does not require $\eqref{hetero_gap}$ (though, of course, a neighboring pair $(u^0,u^1)$ is still needed). Set $A_0 =\mathbb{Z}_{\le 0}$ and $A_1=\mathbb{Z}_{>0}$, let $k \in K$ be given by $k_i=i$   $(i \in A_0)$ and $k_{i+1} = k_{i}+1$   $(i \in A_1)$, and let $\rho \in \tilde{P}$ be a constant sequence, i.e., $\rho_i = \epsilon_0 < \epsilon$ for all $i \in \mathbb{Z}$. We define $k_1$ later. Clearly, $I(x)-\tilde{J}(x)$ is finite when $\# \mathcal{S}< \infty$, and Proposition [Proposition 11](#prop:finite){reference-type="ref" reference="prop:finite"} implies that $|x^\ast_i - u^0| \to 0$ as $i \to -\infty$ and $|x^\ast_i - u^1| \to 0$ as $i \to \infty$ for all $k, \rho$. By using $(h_1)$ we can assume that $u^0<x^\ast_0< u^0 + \rho_0$. Hereafter, we abbreviate $x^\ast$ as $x$. We first show that $x$ is strictly increasing, i.e., $x_{i-1} < x_{i}$ for all $i \in \mathbb{Z}$. The following proof of monotonicity is similar to Proposition 3.5 of [@Yu2022]. Assume $x_j=x_{j+1}$ for some $j \in \mathbb{Z}$. Lemma [Lemma 17](#lemm:estimate_periodic1){reference-type="ref" reference="lemm:estimate_periodic1"} implies $x_j \in (u^0,u^1)$. Then $h(x_j,x_j)-c>0$. Set $\bar{x} = (\cdots, x_{j-1}, x_{j+1}, \cdots)$ satisfying $x_0 = \bar{x}_0$ and $\bar{x} \in X_{k,\rho}(A)$; then $\tilde{J}(x)>\tilde{J}(\bar{x})$, contradicting the minimality of $x$. Next, we assume that $(x_j -x_{j-1})(x_j -x_{j+1})>0$ for some $j \in \mathbb{Z}$. By using $(h_3)$, $$\begin{aligned} &\tilde{J}(x) - \tilde{J}(\bar{x})\\ &> \tilde{J}(x) - (\tilde{J}(\bar{x}) + h(x_j,x_j)-c) \\ &> h(x_{j-1},x_j) + h(x_{j},x_{j+1}) -(h(x_{j+1},x_{j+1}) +h(x_{j},x_j) ) >0,\end{aligned}$$ which is a contradiction. Now we check that there is $k_1 \in \mathbb{Z}_{>0}$ satisfying $x_{k_1}> u^1 - \rho_1$. Suppose, to the contrary, that $x_{k_1}\le u^1 - \rho_1$ for every $k_1 \in \mathbb{Z}_{>0}$. 
The monotonicity of $x$ implies that if $x_{k_1}\le u^1 - \rho_1$, then $$\min_{j \in \{ 0,1\}} |u^j-x_i| \ge \min\{x_0-u^0,\rho_1\} (=:\delta)>0$$ for $i=0, \cdots, k_1$. Applying Lemma [Lemma 14](#lemm:yu_lower){reference-type="ref" reference="lemm:yu_lower"} and Lipschitz continuity, we obtain: $$\begin{aligned} \sum_{i=0}^{k_1-1} a_i(x) \ge k_1 \phi(\delta) - C|x_{k_1}-x_0|.\end{aligned}$$ The left-hand side tends to infinity as $k_1 \to \infty$, which is a contradiction. Hence, there is $k_1 \in \mathbb{Z}_{>0}$ such that $x_{k_1}> u^1 - \rho_1$. Moreover, by Lemma [Lemma 17](#lemm:estimate_periodic1){reference-type="ref" reference="lemm:estimate_periodic1"} and [Lemma 18](#lemm:estimate_periodic2){reference-type="ref" reference="lemm:estimate_periodic2"}, it holds that $x_i \in (u^0,u^1)$ for all $i \in \mathbb{Z}$. Secondly, we consider the case of $\# \mathcal{S}\ge 2$. For example, set $A_1 = \{1,2,\cdots, n\}$ and $A_0 = \mathbb{Z}\backslash A_1$. Lemma [Lemma 26](#lemm:finite_delta){reference-type="ref" reference="lemm:finite_delta"} implies that for any $\rho \in \tilde{P}$, there exists $k \in K$ such that the minimizers of $\tilde{J}_{k,\rho,\{A_0,A_1\}}$ on $X_{k,\rho}(\{ A_0,A_1\})$ and on $X_{k,\rho}(\{ A_0, \{1,n\}\})$ are the same. The case of $A_1 = \{1, 2\}$ and $A_0 = \mathbb{Z}\backslash A_1$ is shown in a similar way to the proof of Theorem [\[theo:inf_transition\]](#theo:inf_transition){reference-type="ref" reference="theo:inf_transition"}. In the same manner, we can show the remaining cases. ◻ ## A special case {#sec:example_h} To close this section, we give a special example. In the previous sections, we could not show in general that: $$\begin{aligned} \label{per_min_ineq} h(x,y) -c \ge 0.\end{aligned}$$ Therefore the proof of Proposition [Proposition 11](#prop:finite){reference-type="ref" reference="prop:finite"} is somewhat technical. 
However, as we will see later, [\[per_min_ineq\]](#per_min_ineq){reference-type="eqref" reference="per_min_ineq"} holds if $h$ satisfies: $$\begin{aligned} \label{rev} h(x,y) = h(y,x).\end{aligned}$$ This is natural because the analogue of [\[per_min_ineq\]](#per_min_ineq){reference-type="eqref" reference="per_min_ineq"} for differential equations holds in variational structures of reversible potential systems (see [@Rabinowitz2008]). One example satisfying [\[rev\]](#rev){reference-type="eqref" reference="rev"} is the Frenkel-Kontorova model [@AubrDaer1983; @FrenKont1939], whose $h$ is given by: $$\begin{aligned} \label{fkmodel} h(x,y)=\frac{1}{2} \left\{ C(x-y)^2 + V(x) + V(y) \right\},\end{aligned}$$ where $C$ is a positive constant and $V(x)=V(x+1)$ for all $x \in \mathbb{R}$. Since $\partial_1 \partial_2 h = -C < 0$, Remark [\[rem:delta_bangert\]](#rem:delta_bangert){reference-type="ref" reference="rem:delta_bangert"} implies that [\[fkmodel\]](#fkmodel){reference-type="eqref" reference="fkmodel"} satisfies $(h_1)$-$(h_5)$. Using $\eqref{rev}$, we can easily show the following lemma, which implies $h(x,y) -c \ge 0$. **Lemma 42**. *If a continuous function $h \colon \mathbb{R}^2 \to \mathbb{R}$ satisfies ($h_1$)-($h_3$) and $\eqref{rev}$, then all minimizers of $h$ are $(1,0)$-periodic, i.e., $$\inf_{x \in \mathbb{R}} h(x,x) = \inf_{(x,y) \in \mathbb{R}^2} h(x,y).$$* *Proof.* First, we see that it follows from $(h_2)$ that the infimum of $h$ on $\mathbb{R}^2$ is finite. From $(h_1)$, we can choose $x^\ast$ satisfying $h(x^\ast,x^\ast)=\min_{x \in [0,1]} h(x,x)=\inf_{x \in \mathbb{R}} h(x,x)$. Suppose, for contradiction, that there is $(x,y) \in \mathbb{R}^2$ such that $x \neq y$ and $h(x,y) < h(x^\ast,x^\ast)$. Then, $\eqref{rev}$ implies: $$h(x,y) + h(y,x) < h(x^\ast,x^\ast) + h(x^\ast,x^\ast) \le h(x,x) + h(y,y),$$ which contradicts $(h_3)$. ◻ **Remark 43**. 
*If $h$ is of class $C^1$, we do not require $(h_6)$.* If $h$ satisfies [\[rev\]](#rev){reference-type="eqref" reference="rev"}, then the orbits obtained in Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"} are monotone in the following sense: **Proposition 44**. *Suppose [\[rev\]](#rev){reference-type="eqref" reference="rev"}. Then for any $x^\ast$ given in Theorem [Theorem 12](#theo:main){reference-type="ref" reference="theo:main"}, $(x_i^\ast)_{i={k_{2j-1}}}^{k_{2j}}$ is strictly increasing if $j$ is odd, and strictly decreasing if $j$ is even.* *Proof.* We only consider the case where $j=1$. Suppose, for contradiction, that $x_m > x_{m+1}$ for some $m \in [k_1,k_2-1]$. By the definition of $X_{k,\rho}$, $x_{k_1} < x_{k_2}$. Set $y^1=(y_i^1)_{i=k_1}^{k_2}$ and $y^2=(y_i^2)_{i=k_1}^{k_2}$ by: $$\begin{aligned} y_i^1 = \begin{cases} z_i^1& (i=k_1, \cdots, m) \\ x_{i}& (i=m+1, \cdots, k_2) \end{cases} \text{\ , and \ } y_i^2 = \begin{cases} x_{i}& (i=k_1, \cdots, m)\\ z_i^2 & (i=m+1, \cdots, k_2) \end{cases}, \end{aligned}$$ where $z^1=(z^1_i)_{i=k_1}^{m}$ is a finite minimal configuration in $X_{k,\rho}$ with $z^1_{k_1}=x_{k_1}$ and $z^1_m=x_{m+1}$, and $z^2=(z^2_i)_{i=m+1}^{k_2}$ with $z^2_{m+1}=x_m$ and $z^2_{k_2}=x_{k_2}$ is given similarly. 
Applying $\eqref{rev}$ and $(h_3)$, we have: $$\begin{aligned} 2h(x_{m},x_{m+1}) =h(x_{m},x_{m+1}) + h(x_{m+1},x_{m}) > h(x_{m},x_{m}) + h(x_{m+1},x_{m+1}),\end{aligned}$$ and: $$\begin{aligned} &2 \sum_{i=k_1}^{k_2-1} h(x_i,x_{i+1}) - \sum_{i=k_1}^{k_2-1} h(y^1_{i},y^1_{i+1}) - \sum_{i=k_1}^{k_2-1} h(y^2_{i},y^2_{i+1})\\ &> \sum_{i=k_1}^{m-1} h(x_i,x_{i+1}) + \sum_{i=m+1}^{k_2-1} h(x_i,x_{i+1}) - \sum_{i=k_1}^{m-1} h(y^1_{i},y^1_{i+1}) - \sum_{i=m+1}^{k_2-1} h(y^2_{i},y^2_{i+1})\\ &= \sum_{i=k_1}^{k_2-1} h(x_i,x_{i+1}) + \sum_{i=k_1}^{k_2-1} h(x_{2m+1-i},x_{2m+1-{(i+1)}})\\ &\quad\quad\qquad\qquad\qquad- \sum_{i=k_1}^{m-1} h(y^1_{i},y^1_{i+1}) - \sum_{i=k_1}^{m-1} h(y^2_{2m+1-i},y^2_{2m+1-{(i+1)}})\\ &>0.\end{aligned}$$ Hence, it holds that $\sum_{i=k_1}^{k_2-1} h(x_i,x_{i+1}) > \sum_{i=k_1}^{k_2-1} h(y^j_{i},y^j_{i+1})$ for $j=1$ or $2$. The same reasoning applies to the remaining cases. This completes the proof. ◻ # Other patterns We can show the existence of orbits with other transition patterns in a similar way, for example by replacing $\mathbb{Z}$ with $\mathbb{Z}_{>0}$ and considering $$J(x) = \sum_{i \in \mathbb{Z}_{\le 0}} a_i(x) + \sum_{i \in \mathbb{Z}_{>0}} A_i(x).$$ We can also replace $P$ with $P^+$ or $P^-$, where $$\begin{aligned} P^+&=\left\{ \rho=(\rho_i)_{i \in \mathbb{Z}} \subset \mathbb{R}_{>0} \mid 0 <\rho_i <1, \ \sum_{i \in \mathbb{Z}_{>0}} \rho_i <\infty \right\} \text{\ and \ }\\ P^-&=\left\{ \rho=(\rho_i)_{i \in \mathbb{Z}} \subset \mathbb{R}_{>0} \mid 0 <\rho_i <1, \ \sum_{i \in \mathbb{Z}_{<0}} \rho_i <\infty \right\}.\end{aligned}$$ In these cases we need to impose an assumption on the limit behavior, e.g., when considering $P^+$. # Another proof of heteroclinic and homoclinic orbits In this section, we give another existence proof of finite transition orbits. The advantage of this method is that there is no need to study the asymptotic behavior as $|i| \to \infty$. In [@Yu2022], Yu considered the spaces $X^0$ and $X^1$, which restrict the convergence of configurations as $|i| \to \infty$, in order to obtain heteroclinic configurations. Replacing $X^0$ and $X^1$ with the sets $Y^0$ and $Y^1$, we obtain infinite transition orbits, and they are different from the orbits obtained above.
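As a numerical companion to the Frenkel-Kontorova example above, the following sketch (our own variable names; $C$ and $V$ are illustrative choices) checks the symmetry [\[rev\]](#rev){reference-type="eqref" reference="rev"}, approximates the mixed partial $\partial_1\partial_2 h = -C$ by central finite differences, and confirms on a coarse grid that the minimum of $h$ lies on the diagonal, as Lemma 42 predicts.

```python
import math

C = 2.0  # any positive coupling constant

def V(x):
    # a 1-periodic on-site potential: V(x) = V(x + 1), V >= 0
    return 1.0 - math.cos(2.0 * math.pi * x)

def h(x, y):
    # Frenkel-Kontorova generating function h(x,y) = (1/2){C(x-y)^2 + V(x) + V(y)}
    return 0.5 * (C * (x - y) ** 2 + V(x) + V(y))

def mixed_partial(x, y, eps=1e-4):
    # central finite-difference approximation of d^2 h / (dx dy);
    # for this h it should be close to -C at every point
    return (h(x + eps, y + eps) - h(x + eps, y - eps)
            - h(x - eps, y + eps) + h(x - eps, y - eps)) / (4.0 * eps * eps)
```

On a grid over $[0,1]^2$ the smallest value of $h$ occurs at diagonal points, consistent with $\inf_x h(x,x) = \inf_{(x,y)} h(x,y)$.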
{ "id": "2309.11110", "title": "Variational Structures for Infinite Transition Orbits of Monotone Twist\n Maps", "authors": "Yuika Kajihara", "categories": "math.DS", "license": "http://creativecommons.org/publicdomain/zero/1.0/" }
--- abstract: | We give a new proof, along with some generalizations, of a folklore theorem (attributed to Laurent Lafforgue) that a rigid matroid (i.e., a matroid with indecomposable basis polytope) has only finitely many projective equivalence classes of representations over any given field. address: - Matthew Baker, School of Mathematics, Georgia Institute of Technology, Atlanta, USA - Oliver Lorscheid, University of Groningen, the Netherlands, and IMPA, Rio de Janeiro, Brazil author: - Matthew Baker - Oliver Lorscheid bibliography: - matroid.bib title: On a theorem of Lafforgue --- [^1] # Introduction A matroid $M$ is called *rigid* if its base polytope $P_M$ has no non-trivial regular matroid polytope subdivisions. Such matroids are interesting for a number of reasons; for example, a theorem of Bollen--Draisma--Pendavingh [@BDP18] asserts that for each prime number $p$, a rigid matroid is algebraically representable in characteristic $p$ if and only if it is linearly representable in characteristic $p$. A folklore theorem, attributed to L. Lafforgue, asserts that a rigid matroid has at most finitely many representations over any field, up to projective equivalence. This is mentioned without proof in a few places throughout the literature, for example in Alex Fink's Ph.D. thesis [@Fink10 p. 10], where he writes: > Matroid subdivisions have made prominent appearances in algebraic geometry. \[...\] Lafforgue's work implies, for instance, that a matroid whose polytope has no subdivisions is representable in at most finitely many ways, up to the actions of the obvious groups. We have been unable to find a proof of this result in the papers of Lafforgue cited by Fink [@Lafforgue99; @Lafforgue03], though a proof sketch appears in [@Fink15 Theorem 7.8]. In this paper, we provide a rigorous and efficient proof of Lafforgue's theorem, along with some new generalizations. 
What is arguably most interesting about our approach to Lafforgue's theorem is that we deduce it from a purely algebraic statement which has nothing to do with matroids. The only input from matroid theory needed is the fact that the *rescaling class functor* ${\mathcal X}_M$ from pastures to sets is representable (see below for further details). We believe this to be a nice illustration of the power, and elegance, of the algebraic theory developed by the authors in [@Baker-Lorscheid20] and [@Baker-Lorscheid21b]. # Reformulation and generalizations of Lafforgue's theorem {#sec:reformulation} It is well-known to experts that a matroid $M$ is rigid if and only if every valuated matroid ${\mathbb M}$ whose underlying matroid is $M$ is rescaling equivalent to the trivially valuated matroid. Since we could not find a reference for this result, we provide a proof in . Recall from [@Baker-Lorscheid20] (see also ) that there is a category of algebraic objects called *pastures*, which generalize not only fields but also partial fields and hyperfields. According to [@Baker-Bowler19], there is a robust notion of (weak) matroids over a pasture[^2] $P$ such that (to mention just a few examples): - Matroids over the Krasner hyperfield ${\mathbb K}$ are the same thing as matroids in the usual sense. - Matroids over the tropical hyperfield ${\mathbb T}$ are the same thing as valuated matroids. - Matroids over a field $K$ are the same thing as $K$-representable matroids, together with a choice of a matrix representation (up to the equivalence relation where two matrices are equivalent if they have the same row space). For every matroid $M$ there is a functor ${\mathcal X}_M$ from pastures to sets taking a pasture $P$ to the set of rescaling equivalence classes of (weak) $P$-representations of $M$. A matroid $M$ is rigid if and only if ${\mathcal X}_M({\mathbb T})$ consists of a single point. 
For a field $K$, ${\mathcal X}_M(K)$ coincides with the set of projective equivalence classes of representations of $M$ over $K$. Thus Lafforgue's theorem is equivalent to the assertion that if ${\mathcal X}_M({\mathbb T})$ is a singleton, then ${\mathcal X}_M(K)$ is finite for every field $K$. Recall from [@Baker-Lorscheid20] that for every matroid $M$, the functor ${\mathcal X}_M$ is representable by a pasture $F_M$ canonically associated to $M$, called the *foundation* of $M$. Concretely, this means that $\mathop{\mathrm{Hom}}(F_M,P) = {\mathcal X}_M(P)$ for every pasture $P$, functorially in $P$. From this point of view, Lafforgue's theorem is equivalent to the assertion that if $\mathop{\mathrm{Hom}}(F_M,{\mathbb T}) = \{ 0 \}$, then $\mathop{\mathrm{Hom}}(F_M,K)$ is finite for every field $K$. This is the statement of Lafforgue's theorem that we actually prove in this paper. The advantage of this formulation is that it turns out to be a special case of a result which can be formulated purely in the language of pastures, without any mention of matroids! In fact, the algebraic incarnation of this result holds more generally with pastures (which generalize fields) replaced by *bands* (which generalize rings). See for an overview of bands, including a definition, some examples, and the key facts needed for the present paper. ## An algebraic generalization of Lafforgue's theorem In order to state the algebraic result about bands which implies Lafforgue's theorem, we mention (see below) that given a band $B$ and a field $K$, there is a canonically associated $K$-algebra $\myrho_K(B)$ with the universal property that $\mathop{\mathrm{Hom}}_{\rm Band}(B,S) = \mathop{\mathrm{Hom}}_{K-{\rm alg}}(\myrho_K(B),S)$ for every $K$-algebra $S$. Moreover, if $B$ is finitely generated (which is the case, for example, when $B=F_M$ for some matroid $M$), then so is $\myrho_K(B)$. 
If $B$ is finitely presented, the set $\mathop{\mathrm{Hom}}(B,{\mathbb T})$ has the structure of a finite polyhedral complex $\Sigma_B$; cf. . Moreover, if $K$ is a field, the set $\mathop{\mathrm{Hom}}_{\rm Band}(B,K)$ is equal to $\mathop{\mathrm{Hom}}_{K-{\rm alg}}(\myrho_K(B),K)$, which is in turn equal to the set $X_{B,K}(K)$ of $K$-points of the finite type affine $K$-scheme $X_{B,K} := \mathop{\mathrm{Spec}}(\myrho_K(B))$. (When $B=F_M$ for a matroid $M$, we call $X_{B,K}$ the *reduced realization space* of $M$ over $K$.) Our first generalization of Lafforgue's theorem is as follows: **Theorem 1**. *For every finitely generated band $B$ and every field $K$, we have the inequality $\dim X_{B,K} \leqslant\dim \Sigma_B$. In particular, if $\mathop{\mathrm{Hom}}(B,{\mathbb T}) = \{ 0 \}$, then $\dim \Sigma_B = 0$ and thus $X_{B,K}(K) = \mathop{\mathrm{Hom}}(B,K)$ is finite for every field $K$.* Applying Theorem 1 to $B=F_M$ immediately gives: **Corollary 1** (Lafforgue). *If $M$ is a rigid matroid, then ${\mathcal X}_M(K)$ is finite for every field $K$.* In this terminology, Theorem 1 in the case $B=F_M$ says precisely that for any field $K$, the dimension of the reduced realization space of $M$ over $K$ is bounded above by the dimension of the reduced Dressian of $M$. ## A relative version of Lafforgue's theorem Rudi Pendavingh (private communication) asked if there might be a relative version of Lafforgue's theorem with respect to minors of $M$. More precisely, Pendavingh asked the following question: Suppose $N$ is an (embedded) minor of $M$ with the property that a valuated matroid structure on $M$ is determined, up to rescaling equivalence, by its restriction to $N$. Is it then true that, for every field $K$, there are (up to projective equivalence) at most finitely many extensions of each $K$-representation of $N$ to a $K$-representation of $M$? We answer Pendavingh's question in the affirmative, proving the following algebraic generalization: **Theorem 2**. 
*Let $K$ be an algebraically closed valued field, and let $v : K \to {\mathbb T}$ be a non-trivial valuation. If $f : B_1 \to B_2$ is a homomorphism of finitely generated bands, then the fiber dimension of $f_K : \mathop{\mathrm{Hom}}(B_2,K) \to \mathop{\mathrm{Hom}}(B_1,K)$ is bounded above by the fiber dimension of $f_{{\mathbb T}} : \mathop{\mathrm{Hom}}(B_2,{\mathbb T}) \to \mathop{\mathrm{Hom}}(B_1,{\mathbb T})$, i.e., if $x \in \mathop{\mathrm{Hom}}(B_1,K)$ and $x'$ is the image of $x$ in $\mathop{\mathrm{Hom}}(B_1,{\mathbb T})$, then $\dim f_K^{-1} (x) \leqslant\dim f_{{\mathbb T}}^{-1} (x')$.* *In particular, setting $B_1 = F_N$ and $B_2 = F_M$ when $N$ is an embedded minor of a matroid $M$, we find that if the induced map ${\mathcal X}_M({\mathbb T}) \to {\mathcal X}_N({\mathbb T})$ has finite fibers (i.e., a valuated matroid structure on $N$ has at most finitely many extensions to $M$, up to rescaling equivalence) then, for every field $k$, the natural map ${\mathcal X}_M(k) \to {\mathcal X}_N(k)$ has finite fibers, i.e., every $k$-representation of $N$ has at most finitely many extensions to $M$, up to projective equivalence.* Note that Lafforgue's theorem (Corollary 1) follows from the special case of Theorem 2 where $N$ is the trivial (empty) matroid and $f_{{\mathbb T}} : \mathop{\mathrm{Hom}}(B_2,{\mathbb T}) \to \mathop{\mathrm{Hom}}(B_1,{\mathbb T})$ has finite fibers. # Some examples In this section we present examples of both rigid and non-rigid matroids (see for some details on our notation). **Example 1** (Dress--Wenzel). In [@Dress-Wenzel92b Theorem 5.11], Dress and Wenzel showed that if the inner Tutte group $F_M^\times$ of the matroid $M$ is finite, then $M$ is rigid. From our point of view, this is clear, since the inner Tutte group is the multiplicative group of the foundation (cf. 
[@Baker-Lorscheid21b Corollary 7.13]) and a non-trivial homomorphism $F_M \to {\mathbb T}$ of pastures would give, in particular, a nonzero group homomorphism $F_M^\times \to ({\mathbb R},+)$; however the only torsion element of $({\mathbb R},+)$ is 0. For example: 1. The foundation of the Fano matroid $F_7$ is ${\mathbb F}_2$, so $F_7$ is rigid. More generally, any binary matroid has foundation equal to either ${\mathbb F}_1^\pm$ or ${\mathbb F}_2$ [@Baker-Lorscheid21b Corollary 7.32] and so it is rigid. 2. The foundation of the ternary spike $T_8$ is ${\mathbb F}_3$ (see [@Part2 Proposition 8.9]), so $T_8$ is also rigid. 3. Dress and Wenzel prove in [@Dress-Wenzel92b Corollary 3.8] that the inner Tutte group of any finite projective space of dimension at least 2 is finite, which provides a wealth of additional examples of rigid matroids. 4. Since the automorphism group of the ternary affine plane $M = {\rm AG}(2,3)$ acts transitively, all single-element deletions are isomorphic to each other. Let $M'$ be any of these deletions. By [@Part2 Proposition 6.2], the foundation of $M'$ is equal to the hexagonal (or sixth-root-of-unity) partial field ${\mathbb H}\ = \ {{\mathbb F}_1^\pm}( T ) \!\sslash\!\langle\!\!\langle T^3 + 1, T-T^2 - 1 \rangle\!\!\rangle$, whose multiplicative group is the group of sixth roots of unity in ${\mathbb C}$. Therefore $M'$ is rigid. It is not true that a matroid $M$ is rigid if and only if its inner Tutte group (or, equivalently, its foundation) is finite. For example: **Example 2** (suggested by Rudi Pendavingh). Let $M$ be the Betsy Ross matroid (cf. [@vanZwam09 Figure 3.3], where $M$ is also called $B_{11}$). Using the Macaulay2 software described in [@Chen-Zhang], we have checked that $F_M$ is the (infinite) golden ratio partial field ${\mathbb G}\ = \ {{\mathbb F}_1^\pm}( T ) \!\sslash\!\langle\!\!\langle T^2 - T - 1 \rangle\!\!\rangle$. 
One checks easily that ${\rm Hom}({\mathbb G},{\mathbb T})$ is trivial, so $M$ is rigid; in particular, the converse of the statement "$F_M$ finite implies $M$ rigid" is not true. It is also easy to see directly that ${\mathbb G}$ admits only finitely many homomorphisms to any field. **Example 3**. The matroid $U_{2,4}$ is not rigid, since its foundation is the near-regular partial field ${\mathbb U}\ = \ {{\mathbb F}_1^\pm}( T_1,T_2 ) \!\sslash\!\langle\!\!\langle T_1 + T_2 - 1 \rangle\!\!\rangle$, which admits infinitely many different homomorphisms to ${\mathbb T}$ (map $T_1$ to 1 and $T_2$ to any element less than or equal to 1, or vice-versa). For any field $K$, the reduced realization space ${\mathcal X}_M(K)$ is equal to $K \backslash\{ 0,1 \}$, so in particular it is infinite whenever $K$ is. The base polytope of $U_{2,4}$ is an octahedron, which admits a regular matroid decomposition into two tetrahedra (see [@Maclagan-Sturmfels15 p. 189] for a nice visualization). **Example 4**. The non-Fano matroid $M=F_7^-$ is not rigid, and it provides an example for which the dimensions of the reduced realization spaces ${\mathcal X}_M(K)$ and ${\mathcal X}_M({\mathbb T})$ differ. The foundation of $M$ is the dyadic partial field ${\mathbb D}={{\mathbb F}_1^\pm}( T ) \!\sslash\!\langle\!\!\langle T+T-1 \rangle\!\!\rangle$ by [@Part2 Prop. 8.4], and there is at most one homomorphism $F_M={\mathbb D}\to K$ into any field $K$, sending $T$ to the multiplicative inverse of $2$ (if it exists, i.e., if $\textup{char}\; K\neq 2$). In contrast, there are infinitely many homomorphisms $f: {\mathbb D}\to{\mathbb T}$ (parametrized by the image $f(T)\in{\mathbb T}$). So $\dim{\mathcal X}_M(K)=0<1=\dim{\mathcal X}_M({\mathbb T})$. # Proof of the main theorems The key fact needed for the proofs of our main theorems is the following theorem of Bieri and Groves [@Bieri-Groves84 Theorem A], which is a cornerstone of tropical geometry. 
For the statement, recall that a *semi-valuation* from a ring $R$ to $\overline{\mathbb R}={\mathbb R}\cup\{+\infty\}$ is a map $v : R \to \overline{\mathbb R}$ such that $v(0)=+\infty$, $v(xy)=v(x)+v(y)$, and $v(x+y) \geqslant\textup{min}\{ v(x),v(y) \}$ for all $x,y \in R$. (The map $v$ is called a *valuation* if, in addition, $v(x)=+\infty$ implies that $x=0$.) If $R$ is a $K$-algebra, where $K$ is a valued field (i.e., a field endowed with a valuation $v : K \to \overline{\mathbb R}$), a *$K$-semi-valuation* is a semi-valuation which restricts to the given valuation on $K$. **Theorem 3** (Bieri--Groves). *Let $K$ be a field endowed with a real valuation $v$, and suppose $R$ is a finitely generated $K$-algebra with generators $T_1,\ldots,T_n$. Let $X = {\rm Spec}(R)$ be the corresponding affine $K$-scheme. Then the set $${\rm Trop}(X) := \{ (v(T_1),\ldots,v(T_n)) \; | \; v : R \to \overline{\mathbb R}\textrm{ is a $K$-semi-valuation} \}$$ is a polyhedral complex of dimension $\dim({\rm Trop}(X)) = \dim X$.* **Remark 1**. Bieri and Groves assume that $X$ is irreducible and show, more precisely, that ${\rm Trop}(X)$ has *pure* dimension $\dim X$. Our formulation of the Bieri--Groves theorem (which does not include the purity statement) follows immediately from theirs by decomposing $X$ into irreducible components. **Remark 2**. More or less by definition, a semi-valuation on a ring $R$ is precisely the same thing as a homomorphism from $R$ to ${\mathbb T}$ in the category of bands, and if $K$ is a valued field then a $K$-semi-valuation on $R$ is the same thing as a homomorphism from $R$ to ${\mathbb T}$ which restricts to the given homomorphism $v : K \to {\mathbb T}$ on $K$. Let $K$ be a field, and let $\mathop{\mathrm{{Alg}}}_K$ denote the category of $K$-algebras, i.e. ring extensions $R$ of $K$ together with $K$-linear ring homomorphisms. 
We write $\mathop{\mathrm{Hom}}_K(R,S)$ for the set of $K$-algebra homomorphisms between two $K$-algebras $R$ and $S$. Given a band $B$, we define the *associated $K$-algebra* as $$\myrho_K(B) \ = \ K[B] \, / \, \langle N_B \rangle,$$ where $K[B]$ is the monoid algebra over $K$ and the elements of the nullset $N_B$ are interpreted as elements of $K[B]$ (cf. ). It comes with a band homomorphism $\myalpha _B:B\to \myrho_K(B)$, which maps $a$ to $[a]$. The other main ingredient needed for the proofs of our main theorems is the following technical but important result: **Proposition 1**. *Let $K$ be a field, $B$ a band, and $R=\myrho_K(B)$ the associated $K$-algebra.* 1. *[\[alg1\]]{#alg1 label="alg1"} The homomorphism $\myalpha _B:B\to \myrho_K(B)$ is initial for all homomorphisms from $B$ to a $K$-algebra, i.e., for every $K$-algebra $S$ the natural map $$\mathop{\mathrm{Hom}}_K(R,S) \ \stackrel{\myalpha _B^\ast}{\longrightarrow} \ \mathop{\mathrm{Hom}}(B,S)$$ is a bijection.* 2. *[\[alg2\]]{#alg2 label="alg2"} Assume we are given a valuation $v_K : K \to {\mathbb T}$, and that $B$ is finitely generated by $a_1,\dotsc,a_n$. Let $T_i=\myalpha _B(a_i)$ for $i=1,\dotsc,n$, and let $X=\mathop{\mathrm{Spec}}R$. Let $\exp^n:\overline{\mathbb R}^n\to{\mathbb T}^n$ be the coordinate-wise exponential map. Then the $T_i$ generate $R$ as a $K$-algebra, and $$\exp^n \Big(\mathop{\mathrm{Trop}}(X)\Big) \ \subset \ \mathop{\mathrm{Hom}}(B,{\mathbb T})$$ as subsets of ${\mathbb T}^n$.* *Proof.* We begin with [\[alg1\]](#alg1){reference-type="eqref" reference="alg1"}. The map $\myalpha _B^\ast$ is injective since $R$ is generated by the subset $\myalpha _B(B)$, and therefore every homomorphism $f:R\to S$ is determined by the composition $f\circ\myalpha _B:B\to S$. In order to show that $\myalpha _B^\ast$ is surjective, consider a band homomorphism $f:B\to S$, which is, in particular, a multiplicative map. 
Therefore it extends (uniquely) to a $K$-linear homomorphism $\hat{f}:K[B]\to S$ from the monoid algebra $K[B]$ to $S$. For every $\sum a_i\in N_B$, we have $\sum f(a_i)\in N_S$ by the definition of a band homomorphism. By the definition of $N_S$, this means that $\sum f(a_i)=0$ in $S$. Thus $\hat{f}$ factorizes through $\bar f:R=K[B]/\langle N_B \rangle\to S$, and, by construction, we have $f=\bar f\circ\myalpha _B=\myalpha _B^\ast(\bar f)$. This establishes [\[alg1\]](#alg1){reference-type="eqref" reference="alg1"}. We continue with [\[alg2\]](#alg2){reference-type="eqref" reference="alg2"}. Since $B$ is generated by $a_1,\dotsc,a_n$ as a pointed monoid and $\myalpha _B(B)$ generates $R$ as a $K$-algebra, $R$ is generated as a $K$-algebra by $T_1,\dotsc,T_n$. In order to verify that $\exp^n(\mathop{\mathrm{Trop}}(X))\subset\mathop{\mathrm{Hom}}(B,{\mathbb T})$, consider a point $(v(T_1),\dotsc,v(T_n))\in \mathop{\mathrm{Trop}}(X)$, where $v:R\to\overline{\mathbb R}$ is a $K$-semi-valuation. Post-composing $v$ with $\exp$ yields a seminorm $v':R\to{\mathbb T}$, which is, equivalently, a band homomorphism. Pre-composing $v'$ with $\myalpha _B$ yields a band homomorphism $v'':B\to{\mathbb T}$, which is an element of $\mathop{\mathrm{Hom}}(B,{\mathbb T})$. By construction, $\exp^n(v(T_1),\dotsc,v(T_n))=v''$, which establishes the last assertion. ◻ **Remark 3**. 1. Under the assumptions of .[\[alg2\]](#alg2){reference-type="eqref" reference="alg2"}, $\mathop{\mathrm{Hom}}(B,{\mathbb T})$ embeds as a subspace of ${\mathbb T}^n$, which has a well-defined (Lebesgue) covering dimension in the sense of [@Pears75 Chapter 3]. 
As discussed in [@Lorscheid22], the subspace topology of $\mathop{\mathrm{Hom}}(B,{\mathbb T})\subset{\mathbb T}^n$ is equal to the compact-open topology for $\mathop{\mathrm{Hom}}(B,{\mathbb T})$ with respect to the discrete topology for $B$ and the natural order topology for ${\mathbb T}$, which shows that the dimension of $\mathop{\mathrm{Hom}}(B,{\mathbb T})$ does not depend on the embedding into ${\mathbb T}^n$. 2. With the topologies just described, $\exp^n$ defines a continuous injection from $\mathop{\mathrm{Trop}}(X)$ to $\mathop{\mathrm{Hom}}(B,{\mathbb T})$ which identifies the former with a closed subspace of the latter. In particular, [@Pears75 Prop. 3.1.5] shows that $\dim \mathop{\mathrm{Trop}}(X) \leqslant\dim \mathop{\mathrm{Hom}}(B,{\mathbb T})$. 3. If in addition to the assumptions of [\[alg2\]](#alg2){reference-type="eqref" reference="alg2"}, $N_B$ is finitely generated as an ideal of $B^+$, then $\mathop{\mathrm{Hom}}(B,{\mathbb T})$ is a tropical pre-variety in ${\mathbb T}^n$ and is therefore the underlying set of a finite polyhedral complex. The dimension of $\mathop{\mathrm{Hom}}(B,{\mathbb T})$ as a polyhedral complex is equal to its covering dimension [@Pears75 Theorem 2.7 and Section 3.7]. *Proof of Theorem 1.* Let $v : K \to {\mathbb T}$ be a valuation (which we can take to be the trivial valuation if we like). Let $\myalpha _B:B\to R$ be the canonical homomorphism to the associated $K$-algebra $R=\myrho_K(B)$, cf. . Let $a_1,\dotsc,a_n\in B$ be a set of generators for $B$, and for $i=1,\dotsc,n$ let $T_i=\myalpha _B(a_i)$. By Proposition 1, the $T_i$ generate $R$ as a $K$-algebra, i.e., $R=K[T_1,\dotsc,T_n]/I$ for some ideal $I$. Let $X=\mathop{\mathrm{Spec}}R$, so that $X(K) = \mathop{\mathrm{Hom}}_K(R,K)$. 
yields a commutative diagram $$\begin{tikzcd} X(K) = \mathop{\mathrm{Hom}}_K(R,K) \arrow[r, "\simeq"] \arrow[d] & \mathop{\mathrm{Hom}}(B,K) \ar[d] \\ \mathop{\mathrm{Trop}}(X) \ar[right hook->,r,"\exp^n"] & \mathop{\mathrm{Hom}}(B,{\mathbb T}) \end{tikzcd}$$ where the right-hand vertical map is obtained by composing with $v : K \to {\mathbb T}$ and the left-hand vertical map is induced by composing the embedding of $X(K) = \mathop{\mathrm{Hom}}_K(R,K)$ into $K^n$ via $\myphi\mapsto (\myphi(T_i))_{i=1}^n$ with the coordinate-wise absolute value $v_K^n:K^n\to{\mathbb T}^n$. By the Bieri--Groves theorem (), the dimension of the affine variety $X$ is equal to the dimension of $\mathop{\mathrm{Trop}}(X)$, as defined in . Using [\[alg2\]](#alg2){reference-type="eqref" reference="alg2"} and [\[alg2\]](#alg2){reference-type="eqref" reference="alg2"}, we conclude that $$\dim\Big(X \Big) \ = \ \dim\Big(\mathop{\mathrm{Trop}}(X)\Big) \ \leqslant\ \dim\Big(\mathop{\mathrm{Hom}}(B,{\mathbb T})\Big),$$ as desired. ◻ *Proof of .* Suppose $f : B_1 \to B_2$ is a band homomorphism. Choose generators $x_1,\ldots,x_m$ for $B_1$. Completing $f(x_1),\ldots,f(x_m)$ to a set of generators for $B_2$ if necessary, we find a generating set $y_1,\ldots,y_n$ for $B_2$ with $m \leqslant n$ such that $f(x_i)=y_i$ for $i=1,\ldots,m$. Setting $X = \mathop{\mathrm{Spec}}(\myrho_K(B_1))$ and $Y = \mathop{\mathrm{Spec}}(\myrho_K(B_2))$, and letting $\mathop{\mathrm{Trop}}(X)$ (resp. $\mathop{\mathrm{Trop}}(Y)$) be the tropicalization of $X$ with respect to $\myalpha _{B_1}(x_1),\ldots,\myalpha _{B_1}(x_m)$ (resp. 
$\myalpha _{B_2}(y_1),\ldots,\myalpha _{B_2}(y_n)$), we obtain a commutative diagram $$\begin{tikzcd} Y(K) \arrow{r}{f_K}\arrow[d] & X(K) \arrow[d] \\ \mathop{\mathrm{Trop}}(Y) \arrow{r}{f_{{\mathbb T}}} \arrow[d,hook] & \mathop{\mathrm{Trop}}(X) \arrow[d,hook] \\ \mathop{\mathrm{Hom}}(B_2,{\mathbb T}) \arrow[r] & \mathop{\mathrm{Hom}}(B_1,{\mathbb T}) \end{tikzcd}$$ Since $\mathop{\mathrm{Trop}}(Y)$ is a closed subspace of $\mathop{\mathrm{Hom}}(B_2,{\mathbb T})$ (resp. $\mathop{\mathrm{Trop}}(X)$ is a closed subspace of $\mathop{\mathrm{Hom}}(B_1,{\mathbb T})$), it suffices to prove that if $x \in X(K)$ and $x' = \mathop{\mathrm{Trop}}(x) \in \mathop{\mathrm{Trop}}(X)$, then $\dim f_K^{-1} (x) \leqslant\dim f_{{\mathbb T}}^{-1} (x')$. To see this, write $f_K^{-1}(x) = Z(K)$ with $Z$ an affine subscheme of $Y$. If we pull back the functions $\myalpha _{B_2}(y_1),\ldots,\myalpha _{B_2}(y_n)$ to a set of generators for the affine coordinate ring of $Z$, we obtain a commutative diagram $$\begin{tikzcd} Z(K) \arrow[r,hook] \arrow[d] & Y(K) \arrow[r] \arrow[d] & X(K) \arrow[d] \\ \mathop{\mathrm{Trop}}(Z) \arrow[r,hook] & \mathop{\mathrm{Trop}}(Y) \arrow[r] & \mathop{\mathrm{Trop}}(X) \end{tikzcd}$$ Applying the Bieri--Groves theorem to $Z$, we find that the image of $Z(K)$ under $\mathop{\mathrm{Trop}}$ has dimension equal to $\dim f_{K}^{-1} (x)$. In addition, the natural map $\mathop{\mathrm{Trop}}(Z) \to \mathop{\mathrm{Trop}}(Y)$ identifies $\mathop{\mathrm{Trop}}(Z)$ with a closed subspace of $\mathop{\mathrm{Trop}}(Y)$, since $\mathop{\mathrm{Trop}}(Z)$ (resp. $\mathop{\mathrm{Trop}}(Y)$) is the topological closure of $Z(K)$ (resp. $Y(K)$) in ${\mathbb T}^n$ (cf. [@Payne09 Proposition 2.2]). By construction, $\mathop{\mathrm{Trop}}(Z)$ is in fact a closed subspace of $f_{{\mathbb T}}^{-1} (x')$. This means that $\dim f_K^{-1} (x) = \dim \mathop{\mathrm{Trop}}(Z) \leqslant\dim f_{{\mathbb T}}^{-1} (x')$, as desired. 
◻ # Pastures and Bands {#Appendix:Bands and Pastures} More details pertaining to the following overview of bands and pastures can be found in [@Baker-Jin-Lorscheid]. In this text, a *pointed monoid* is a (multiplicatively written) commutative semigroup $A$ with identity $1$, together with a distinguished element $0$ that satisfies $0\cdot a=0$ for all $a\in A$. The *ambient semiring of $A$* is the semiring $A^+={\mathbb N}[A]/\langle 0 \rangle$, which consists of all finite formal sums $\sum a_i$ of nonzero elements $a_i\in A$. Note that $A$ is embedded as a submonoid in $A^+$, where $0$ is identified with the empty sum. An *ideal of $A^+$* is a subset $I$ that contains $0$ and is closed under both addition and multiplication by elements of $A^+$. **Definition 1**. A *band* is a pointed monoid $B$ together with an ideal $N_B$ of $B^+$ (called the *nullset*) such that for every $a\in B$, there is a unique $b\in B$ with $a+b\in N_B$. We call this $b$ the *additive inverse of $a$*, and we denote it by $-a$. A *band homomorphism* is a multiplicative map $f:B\to C$ preserving $0$ and $1$ such that $\sum a_i\in N_B$ implies $\sum f(a_i)\in N_C$. This defines the category $\mathop{\mathrm{{Bands}}}$. For a subset $S$ of $B^+$, we denote by $\langle\!\!\langle S \rangle\!\!\rangle$ the smallest ideal of $B^+$ that contains $S$ and is closed under the *fusion axiom* (cf. [@Baker-Zhang23]): 1. if $c+\sum a_i$ and $-c+\sum b_j$ are in $\langle\!\!\langle S \rangle\!\!\rangle$, then $\sum a_i+\sum b_j$ is in $\langle\!\!\langle S \rangle\!\!\rangle$. **Definition 2**. A band $B$ is *finitely generated* if it is finitely generated as a monoid. It is a *finitely presented fusion band*, which we abbreviate by simply saying that $B$ is *finitely presented*, if it is finitely generated and $N_B=\langle\!\!\langle S \rangle\!\!\rangle$ for a finite subset $S$ of $N_B$. 
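The additive-inverse condition in Definition 1 is easy to verify mechanically for small bands. The following sketch (our own illustration, not code from the paper) models the regular partial field ${\mathbb F}_1^\pm=\{0,1,-1\}$, representing a formal sum in $B^+$ as a tuple of element names, and checks that every element has a unique additive inverse.

```python
# A minimal sketch (our own illustration) of the band axioms for the
# regular partial field F_1^± = {0, 1, -1}.  A formal sum in B^+ is
# represented as a tuple of element names; the nullset consists of the
# formal sums n.1 + n.(-1).

ELEMENTS = ("0", "1", "-1")

def mul(a, b):
    """Multiplication in F_1^±."""
    if a == "0" or b == "0":
        return "0"
    return "1" if a == b else "-1"

def in_nullset(terms):
    """A formal sum lies in the nullset iff 1 and -1 occur equally often."""
    terms = [t for t in terms if t != "0"]
    return terms.count("1") == terms.count("-1")

# Definition 1: every element a has a UNIQUE b with a + b in the nullset.
neg = {}
for a in ELEMENTS:
    inverses = [b for b in ELEMENTS if in_nullset((a, b))]
    assert len(inverses) == 1, (a, inverses)
    neg[a] = inverses[0]

print(neg)  # → {'0': '0', '1': '-1', '-1': '1'}
```

The same pattern (elements, multiplication table, nullset predicate) extends to any finitely presented band, though membership in $\langle\!\!\langle S \rangle\!\!\rangle$ then requires closing under the fusion axiom.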
The *unit group of $B$* is the submonoid $B^\times=\{a\in B\mid ab=1\text{ for some }b\in B\}$ of $B$, which is indeed a group. **Definition 3**. A *pasture* is a band $P$ with $P^\times=P-\{0\}$ and $$N_P \ = \ \Big\langle\!\!\Big\langle \, a+b+c \in P^+ \; \Big| \; a+b+c\in N_P \, \Big\rangle\!\!\Big\rangle.$$ **Example 5**. Every ring $R$ is a band, with nullset $N_R=\{\sum a_i\mid \sum a_i = 0 \text{ in }R\}$. In fact, this defines a fully faithful embedding $\mathop{\mathrm{{Rings}}}\to\mathop{\mathrm{{Bands}}}$. Every field is a pasture. The following examples of interest are bands which are not rings (we write $a-b$ for $a+(-b)$): - The *regular partial field* is the pasture ${{\mathbb F}_1^\pm}=\{0,1,-1\}$ with nullset $$N_{{\mathbb F}_1^\pm}\ = \ \Big\{n.1+n.(-1) \, \Big| \, n\geqslant 0 \Big\} \ = \ \langle\!\!\langle 1-1 \rangle\!\!\rangle.$$ - The *Krasner hyperfield* is the pasture ${\mathbb K}=\{0,1\}$ with nullset $$N_{\mathbb K}\ = \ {\mathbb N}-\{1\} \ = \ \langle\!\!\langle 1+1,\ 1+1+1 \rangle\!\!\rangle.$$ - The *tropical hyperfield* is the pasture ${\mathbb T}={\mathbb R}_{\geqslant 0}$ with nullset $$\textstyle N_{\mathbb T}\ = \ \{0\} \ \cup \ \Big\{ \sum a_i \; \Big| \; \text{the maximum of } a_1,\dotsc,a_n \text{ is attained at least twice}\Big\}.$$ Examples of band homomorphisms are the inclusion ${\mathbb K}\hookrightarrow {\mathbb T}$ and the surjection ${\mathbb T}\to{\mathbb K}$ that sends every nonzero element to $1$. A band homomorphism $R\to{\mathbb T}$ from a ring $R$ into ${\mathbb T}$ is the same thing as a non-archimedean seminorm. In particular, the trivial absolute value on a field $K$ is the unique band homomorphism $K\to{\mathbb T}$ that factors through ${\mathbb K}$. The pasture ${{\mathbb F}_1^\pm}$ is an initial object in $\mathop{\mathrm{{Bands}}}$, i.e., every band $B$ comes with a unique homomorphism ${{\mathbb F}_1^\pm}\to B$. 
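The identification of band homomorphisms $R\to{\mathbb T}$ with non-archimedean seminorms can be checked concretely: if $\sum a_i=0$ in a ring, the ultrametric inequality forces the maximum of the $|a_i|$ to be attained at least twice, which is exactly membership in $N_{\mathbb T}$. The following sketch (our own check, using the $5$-adic absolute value on ${\mathbb Z}$ as the example seminorm) tests this on random vanishing sums.

```python
# Sketch: membership in the nullset N_T of the tropical hyperfield, and
# the band-homomorphism property of the 5-adic absolute value on Z.

import random
from fractions import Fraction

def in_null_T(terms):
    """Sum lies in N_T iff all terms are 0 or the maximum occurs >= twice."""
    terms = list(terms)
    if not terms:
        return True
    m = max(terms)
    return m == 0 or terms.count(m) >= 2

def abs5(n):
    """5-adic absolute value |n|_5 = 5^(-v), where 5^v exactly divides n."""
    if n == 0:
        return Fraction(0)
    v = 0
    while n % 5 == 0:
        n //= 5
        v += 1
    return Fraction(1, 5 ** v)

# Band-homomorphism condition: sum a_i = 0 in Z implies sum |a_i|_5 in N_T.
random.seed(0)
for _ in range(500):
    xs = [random.randrange(-1000, 1001) for _ in range(3)]
    xs.append(-sum(xs))  # force the sum to vanish in Z
    assert in_null_T(abs5(x) for x in xs)
```

The converse direction (any band homomorphism ${\mathbb Z}\to{\mathbb T}$ satisfies the ultrametric inequality) follows formally from the shape of $N_{\mathbb T}$.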
This leads to a description $B={{\mathbb F}_1^\pm}[ T_i\mid i\in I ] \!\sslash\!\langle S \rangle$ of $B$ in terms of generators $\{T_i\mid i\in I\}$ and relations $S\subset B^+$, in the sense that $\{T_i\}\cup\{0,-1\}$ generates $B$ as a monoid, $S$ generates the ideal $N_B$, and $S$ contains a complete set of binary relations between the signed products $x=\pm T_{i_1}\dotsb T_{i_r}$ of the $T_i$, i.e., if $x-y\in S$ then $x=y$ as elements of $B$. Similarly, we write $P={{\mathbb F}_1^\pm}( T_i\mid i\in I ) \!\sslash\!\langle\!\!\langle S \rangle\!\!\rangle$ for a pasture $P$ if $P^\times$ is generated as a group by $\{T_i\mid i\in I\}$ and $-1$, if $N_P=\langle\!\!\langle S \rangle\!\!\rangle$, and if $S$ contains a complete set of binary relations between the signed products of the $T_i$. For example, $${\mathbb K}\ = \ {{\mathbb F}_1^\pm}\!\sslash\!\langle\!\!\langle 1+1,\ 1+1+1 \rangle\!\!\rangle, \quad \text{and} \quad {\mathbb F}_5 \ = \ {{\mathbb F}_1^\pm}( T ) \!\sslash\!\langle\!\!\langle T^2+1,\ T-1-1 \rangle\!\!\rangle.$$ # Valuated matroids and subdivisions of the basis polytope {#Appendix:Matroid Polytope Subdivisions} In this section, we show that a matroid is rigid if and only if it has a unique rescaling class over ${\mathbb T}$. We begin with some observations and recall some results from the literature. For a pasture $F$, we can identify the isomorphism class of a (weak) Grassmann-Plücker function $\Delta$ with the corresponding *Plücker vector* $(\Delta(I))_{I \in \binom Er} \in {\mathbb P}^{\binom Er}(F)$. We call this Plücker vector a *representation* of $M$, and by abuse of terminology we use the terms "Grassmann-Plücker function" and "Plücker vector" interchangeably. Every matroid $M$ can be (uniquely) represented over ${\mathbb K}$ by the Grassmann-Plücker function $\Delta_M:\binom Er\to{\mathbb K}$ which sends an $r$-subset $I$ of $E$ to $1$ if it is a basis of $M$ and to $0$ otherwise. 
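The canonical representation $\Delta_M$ over ${\mathbb K}$ simply records which subsets are bases. The sketch below (with an example matroid of our own choosing, not one from the paper) computes it for the rank-$2$ matroid of four vectors in ${\mathbb Q}^2$ in which columns $3$ and $4$ are parallel, so that $\{3,4\}$ is the unique non-basis.

```python
# Sketch: the Grassmann-Plücker function Delta_M over the Krasner
# hyperfield K = {0, 1} records exactly which r-subsets are bases.
# Example (ours): a rank-2 matroid on E = {1,2,3,4}, realized by four
# vectors in Q^2 with columns 3 and 4 parallel.

from itertools import combinations

vecs = {1: (1, 0), 2: (0, 1), 3: (1, 1), 4: (2, 2)}

def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def delta_M(I):
    """Delta_M(I) in K: 1 if I indexes a basis, 0 otherwise."""
    i, j = sorted(I)
    return 1 if det2(vecs[i], vecs[j]) != 0 else 0

plucker = {I: delta_M(I) for I in combinations(range(1, 5), 2)}
bases = [I for I, d in plucker.items() if d == 1]
print(bases)  # → [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4)]
```

Post-composing with the inclusion ${\mathbb K}\hookrightarrow{\mathbb T}$ (i.e., reading the $0/1$ values as elements of ${\mathbb R}_{\geqslant 0}$) gives the trivial ${\mathbb T}$-representation discussed next.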
Post-composing $\Delta_M$ with the inclusion ${\mathbb K}\hookrightarrow {\mathbb T}$ defines the *trivial representation of $M$*, which shows that $M$ has at least one rescaling class over ${\mathbb T}$. Recall that the *basis polytope* $P_M$ of $M$ is the convex hull of the points $e_I=\sum_{i\in I} e_i\in {\mathbb R}^n$ for which $I$ is a basis of $M$. Let $\Delta:\binom Er\to{\mathbb T}$ be a Plücker vector for $M$, i.e., $\mathop{\mathrm{supp\,}}(\Delta)=\mathop{\mathrm{supp\,}}(\Delta_M)$. Let ${\mathcal S}_\Delta=\{e_I\mid \Delta(I)\neq0\}$ be the support of $\Delta$, considered as a subset of ${\mathbb R}^n$. Post-composing with $\log$ yields a function $\tilde\Delta:{\mathcal S}_\Delta\to{\mathbb R}$ whose graph $\Gamma$ is a subset of ${\mathbb R}^n\times{\mathbb R}$. The convex closure of $\Gamma$ has a unique coarsest structure as a polyhedral complex. The lower faces of this polyhedral complex are those faces for which the last coordinate of the outward normal vector is negative. Omitting this last coordinate projects these faces onto $P_M$ and defines a polyhedral subdivision of $P_M$ called the *regular subdivision associated to $\Delta$*, see e.g. [@Maclagan-Sturmfels15 Definition 2.3.8]. By a theorem of Speyer (cf. [@Speyer08 Prop. 2.2]), this subdivision of $P_M$ is a *matroid subdivision*, i.e., all faces of the subdivision are themselves matroid polytopes, and conversely every regular matroid subdivision of $P_M$ comes from a ${\mathbb T}$-representation of $M$ (see also [@Maclagan-Sturmfels15 Lemma 4.4.6] and [@Joswig21 Thm. 10.35]). **Proposition 2**. *A matroid $M$ is rigid if and only if $M$ has a unique rescaling class over ${\mathbb T}$.* *Proof.* Let $r$ be the rank and $E=\{1,\dotsc,n\}$ the ground set of $M$. Let $\Delta:\binom Er\to{\mathbb T}$ be a tropical Plücker vector for $M$, and let ${\mathcal S}_\Delta$ be as above. By definition, $M$ is rigid if and only if $P_M$ admits only the trivial regular matroid subdivision. 
Since none of the points of ${\mathcal S}_\Delta$ lies in the convex closure of the other points, $\Delta:\binom Er\to{\mathbb T}$ induces the trivial matroid subdivision if and only if the subset $\Big\{(e_I,\tilde\Delta(I))\mid e_I\in{\mathcal S}_\Delta\Big\}$ of ${\mathbb R}^n\times{\mathbb R}$ is contained in an affine hyperplane $H$. In this case, let $x_ie_i$ be the unique intersection point of $H$ with the coordinate axis generated by $e_i$ (in the case of a loop $i$ of $M$ there is no such intersection point, and we can formally put $x_i=+\infty$). Then $\tilde\Delta(I)=\sum_{k=1}^r x_{i_k}$ for every $I=\{i_1,\dotsc,i_r\}$ with $e_I\in{\mathcal S}_\Delta$. Rescaling $\Delta$ by $t=(\exp(-x_i)\mid i=1,\dotsc,n)$ yields a Plücker vector $\Delta_0=t.\Delta:\binom Er\to{\mathbb T}$ for which $$\tilde\Delta_0(I) \ = \ \tilde\Delta(I)-\sum_{k=1}^r x_{i_k} \ = \ 0$$ for every $I$ with $e_I\in{\mathcal S}_\Delta$. Thus $\Delta_0$ is the trivial representation of $M$. Conversely, rescaling $\Delta_0$ yields a Plücker vector $\Delta$ for which $\Big\{(e_I,\tilde\Delta(I))\mid e_I\in{\mathcal S}_\Delta\Big\}$ is contained in an affine hyperplane, which concludes the proof. ◻ **Remark 4**. The (local) *Dressian* of a matroid $M$ (cf. [@OPS19]) is a polyhedral complex $\Delta_M$ whose underlying set consists of all ${\mathbb T}$-representations of $M$; the polyhedral structure is defined by the 3-term tropical Plücker relations. One can show using [@OPS19 Cor. 18] that the lineality space of $\Delta_M$ is precisely the set of valuations on $M$ which are projectively equivalent to the trivial valuation. The topological space ${\rm Hom}(F_M,{\mathbb T})$ considered in the body of this paper can then be naturally identified with $\Delta_M$ modulo its lineality space, which we call the *reduced Dressian* $\overline{\Delta}_M$. (We omit the details, as it would take us too far afield into a somewhat lengthy discussion of various topologies and polyhedral structures.) 
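The criterion in the proof above, that $\Delta$ induces the trivial subdivision exactly when the lift $w_I=\log\Delta(I)$ is affine, i.e., $w_{\{i,j\}}=x_i+x_j$ for some $x$, can be checked by hand for $M=U_{2,4}$, whose basis polytope is the octahedron. The sketch below (example lifts are ours) exhibits an affine lift, which is a rescaling of the trivial representation, and the classic "two pyramids" lift $w_{12}=w_{34}=1$, which admits no affine $x$ and therefore witnesses that $U_{2,4}$ is not rigid.

```python
# Sketch of both directions of the proof for M = U_{2,4} (ground set
# {1,2,3,4}; every 2-subset is a basis).  A lift w_I = log Delta(I) is
# affine, w_{i,j} = x_i + x_j, iff Delta is a rescaling of the trivial
# T-representation.

from itertools import combinations
from math import isclose

BASES = list(combinations((1, 2, 3, 4), 2))

def affine_lift(w):
    """Return x with w[(i,j)] = x_i + x_j for all bases, or None if none exists."""
    # Solve for x from three equations, then verify the remaining ones.
    x = {1: (w[(1, 2)] + w[(1, 3)] - w[(2, 3)]) / 2}
    x[2] = w[(1, 2)] - x[1]
    x[3] = w[(1, 3)] - x[1]
    x[4] = w[(1, 4)] - x[1]
    ok = all(isclose(w[(i, j)], x[i] + x[j], abs_tol=1e-9) for (i, j) in BASES)
    return x if ok else None

# An affine lift: Delta is a rescaling of the trivial representation,
# so it induces the trivial subdivision of the octahedron.
xs = {1: 0.3, 2: -1.2, 3: 0.5, 4: 2.0}
w_affine = {(i, j): xs[i] + xs[j] for (i, j) in BASES}
assert affine_lift(w_affine) is not None

# The "two pyramids" lift w_{12} = w_{34} = 1, all others 0: no affine x
# exists, so this Delta induces a nontrivial matroid subdivision and
# U_{2,4} is not rigid.
w_split = {I: 1.0 if I in [(1, 2), (3, 4)] else 0.0 for I in BASES}
assert affine_lift(w_split) is None
```

Rescaling by $t_i=\exp(-x_i)$ then sends an affine lift to the identically-zero lift, exactly as in the proof.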
See [@Brandt-Speyer22 Section 3] for an algorithm for computing the Dressian and/or reduced Dressian of a matroid $M$, and also (in Section 5) some interesting counterexamples to plausible-sounding assertions. [^1]: The authors thank David Speyer, Alex Fink, and Rudi Pendavingh for helpful discussions. We also thank BIRS for their hospitality in hosting the workshop 23w5149: "Algebraic Aspects of Matroid Theory", during which the arguments in this paper emerged. Thanks also to Justin Chen, Eric Katz, and Bernd Sturmfels for their feedback on an earlier version of the paper. The first author was supported by NSF grant DMS-2154224 and a Simons Fellowship in Mathematics. The second author was supported by Marie Skłodowska Curie Fellowship MSCA-IF-101022339. [^2]: Technically speaking, [@Baker-Bowler19] deals with *tracts*, not pastures, but the difference between the two is immaterial when considering weak matroids. For the sake of brevity, we do not define tracts in this paper, nor do we consider idylls or ordered blueprints (both of which play a prominent role in [@Baker-Lorscheid21b]).